The invention relates generally to the treatment of protein-polysaccharide complexes. Of special interest to the invention is a method of producing products of high protein content from wheat flour and, even more particularly, from wheat gluten.
Wheat possesses a higher percentage of protein than any other type of grain: wheat as such contains about 12.0 percent by weight of crude protein, while wheat flour contains on average about 14 percent by weight of protein substances. These protein substances consist predominantly of gluten, an elastic, rubber-like protein substance which forms an inherent part of the wheat. The characteristic consistency of the gluten after mixing with water determines the baking properties of the wheat flour.
The gluten is obtained from the wheat flour by washing out the latter with water. During this operation, the starch present in the wheat flour is removed in the form of a starch emulsion and the gluten remains behind as an elastic mass. The starch present in the wheat and recovered by washing out of the wheat flour is frequently accorded no more significance than merely a side product obtained during recovery of the gluten.
The gluten contains both water-soluble and water-insoluble proteins, that is, the protein content of the gluten is partially water-soluble. Of the proteins present in the gluten, the water-soluble proteins represent a desired product in the food and pharmaceutical industries. Thus, the water-soluble proteins are particularly easy to digest and may be incorporated into foods in spray-dried or freeze-dried form. Advantageously, the water-soluble proteins are used in baby foods, for dietetic purposes and for geriatric purposes.
Certain problems arise in the prior art. On the one hand, the gluten produced according to the known procedures contains a maximum of 60 percent by weight of crude protein. It is not possible to achieve a higher protein concentration by washing out the wheat flour with water in accordance with the conventional methods. On the other hand, it is not possible to readily recover the water-soluble proteins present in the gluten in the form of a product having a high concentration of protein.
The 20th Annual Danbury Animal Welfare Society (DAWS) Walk for Animals is set for Sept. 27.
It will take place at the Clifford J. Hurgin Municipal Center.
Check-in will be from 9 to 10:30 a.m. At 10:15 a.m. there will be a chance to meet adoptable dogs. The walk will begin at 10:30 a.m. There will be contests after the walk.
DAWS is a non-profit, volunteer-run organization dedicated to improving the lives of animals in the community and beyond.
There is a $25 registration fee to participate in the walkathon. All registered participants will receive a "Doggie Goodie Bag" filled with gifts and treats. You may register on-line at www.daws.org. Sponsor sheets are available on the Web site.
All dogs must wear an ID tag and be on a leash.
Walkers must scoop up after their dogs.
Dogs must be people- and dog-friendly.
Walkers under 16 must be accompanied by an adult.
Sorry, no cats. Photos of DAWS adoptable cats will be on display.
The rain date for this event is Oct. 4.
A Common Ground offers a free Latin fitness class Aug. 27 at 6:45 p.m. at 346 Main St. Take the rear entrance via the red door to Studio 1.
For more information, call 203-791-9850.
CACERES -- A daughter, Zoe Sophia, was born to Colleen and Javier Caceres, on Aug. 16.
GAY -- A daughter, Eva Angela, was born to Chelsea Steinberg and R. Chris Gay, on July 30.
Robert Apap, a student at the University of Connecticut in Storrs, has been named to the dean's list for the spring semester. He is a junior and majoring in chemical engineering. Robert is the son of Loraine and Charles Apap of Danbury.
The event will take place in September 2010, and the reunion committee is searching for classmates. Anyone with an e-mail address, postal mailing address or telephone information for 1960 graduates may forward it to mschir9253@gmail.com.
Rooms are reserved at Danbury Plaza Hotel for $70 per room.
Call 203-794-0600 to reserve your room and mention the DHS reunion.
There will be brunch at the hotel Sunday from 11 a.m. to 1 p.m. for $30 per person.
Send a check payable to Danbury High School Class of 1969 40th Reunion, c/o Dorlaine McLaughlin, 268 Mooney Hill Road, Patterson, N.Y. 12563.
For more information, e-mail cbissy@optonline.net.
Greater Danbury La Leche League, 7 p.m., Connecticut Childbirth and Women's Center, 94 Locust Ave. Free. (860) 355-1857.
Freemasons-Union Lodge #40 A.F. & A.M., 7:30 p.m., 337 Main St. (203) 417-0697.
Kiwanis Club, 12:15 p.m., Chuck's Steakhouse, Segar Street. 792-5555.
Hispanic Center of Greater Danbury, board of directors, 6 p.m., EggHead's Restaurant. 798-2855.
Danbury Area Computer Society, 7 p.m., Danbury Hospital auditorium. 791-2283.
Hat City Youth Sports bingo, 7 p.m., Knights of Columbus, 42 Shelter Rock Road. 746-6876.
Union Lodge No. 40 A.F. &amp; A.M., 7:30 p.m., Masonic Temple, 337 Main St.
Greater Danbury Junior Women's Club, 7:30 p.m., Pope John Paul II Center, Lincoln Avenue.
Danbury Area Speech, Language and Hearing Association, 7:30 p.m., Danbury Hospital, Tower 10, Hospital Avenue.
Knights of Columbus, 8 p.m., K of C Hall, 42 Shelter Rock Road. 748-3816.
Danbury Area Real Estate Caravan and Network Session, 8:45 a.m., Hatters Community Park, Banquet Hall. 794-0000.
Expectant parent classes, 7 p.m., Danbury Hospital, John C. Creasy Center, conference room 2.
WestConn Toastmasters Club, 8 p.m., Wooster School Library, Miry Brook Road. 312-5059.
Housatonic Habitat volunteer briefings, 7 p.m., HHfH's Danbury office, in Peacock Alley, 1 Padanaram Road. (203) 744-1340.
Danbury Drum Corps, 7:15 p.m., Catholic War Veterans Halls, Shalvoy's Lane. (203) 746-3258.
Polish American Club, 7 p.m., 10 Ives St.
Reconfigurable Josephson Phase Shifter

A phase shifter is one of the key elements of quantum electronics. In order to facilitate operation and avoid decoherence, it has to be reconfigurable, persistent and nondissipative. In this work, we demonstrate prototypes of such devices in which a Josephson phase shift is generated by coreless superconducting vortices. The smallness of the vortex allows broad-range tunability by nanoscale manipulation of vortices in a micron-size array of vortex traps. We show that a phase shift in a device containing just a few vortex traps can be reconfigured between a large number of quantized states in a broad range.

Samples

Studied devices contain planar JJs. They are made from bilayer films with a 70 nm top Nb layer and a thin nonsuperconducting metallic bottom layer. Devices in Figs. 1(d), 2 and 3 are made using a paramagnetic CuNi alloy; the SQUID from Fig. 4(a) uses a pure Cu underlayer. We also tested other metals and a single-layer Nb film. All of them work in a similar manner, and the results do not depend on a specific material. Variable-thickness-bridge-type JJs are made by cutting a narrow (∼20 nm) groove in the top Nb layer by focused ion beam (FIB) etching, as sketched in Fig. 2(b). Details of junction fabrication can be found elsewhere. Planar junction properties were described in Ref. The SQUID device, Fig. 4, was made by FIB milling of a rectangular loop in the middle of a junction. Supplementary Figure 1 shows a sketch of the device from Fig. 2 with the corresponding junction lengths and distances to the traps, counted from the bottom junction-2.

Experimental

Measurements are performed in a cryogen-free cryostat using a four-probe configuration. The magnetic field is applied perpendicular to the films. Vortex states are prepared in the following way. We start from the Meissner state by zero-field cooling of a device without bias current. Vortices are introduced either by applying current pulses, magnetic fields, or both, as described in Ref. Depending on the amplitude and the sign of current pulses, we can introduce either vortices or antivortices, as shown in Fig. 2(c).

MFM imaging

Low-temperature MFM imaging is carried out on an AttoCube scanning probe system (AttoDry 1000/SU) with a standard Co/Cr-coated cantilever (MESP, Bruker, 2.8 N/m spring constant). MFM images shown in Figs. 1(e) and (f) are made in a tapping mode at a fixed resonance frequency, 87 kHz. The color scale represents the phase of tip oscillations: black corresponds to zero phase, brighter areas to a positive phase with the brightest level +10. The positive phase shift indicates a repulsive force on the tip, which is caused by Meissner screening of the tip field by the superconductor. To trap a vortex, the tip was approached close enough to the hole so that the inhomogeneous magnetic field of the tip locally introduced a vortex. The sign of the vortex depends on the direction of tip magnetization. Because the vortex is introduced by the tip field, the tip-vortex interaction is attractive, resulting in the dark contrast of the trapped vortex in the subsequent MFM phase map shown in Fig. 1(f). The I_c(H) measurements presented in Figs. 1(h) and (i) are performed in the same MFM system with a retracted tip, in order not to induce extra distortion from the tip itself.

Numerical simulations

We use numerical fitting for extraction of the JPS. Simulations presented by red lines in Figs. 1(g), 2(d-g) and 3 are done taking φ_v(x) from Eq. with the actual trap geometries (x_vi/L_x, z_vi/L_x) and using the vorticities V_i as fitting parameters. The critical current is calculated by maximization of the integrated Josephson current, I = (I_c0/L) ∫_0^L sin φ(x) dx, where φ(x) includes the linear field-dependent phase gradient ∝ H present in the absence of vortices. Details of the formalism can be found in Ref. In all demonstrated cases such fitting allows unambiguous estimation of the JPS, Δφ_v = −Σ_i V_i φ_vi.
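The fitting procedure above maximizes the integrated Josephson current over the free phase offset. A minimal numerical sketch of that maximization follows; it is ours, not the authors' code, and the step-like half-junction phase-shift profile and all names are illustrative assumptions:

```java
// Illustrative sketch: critical current of a planar junction of unit length
// carrying a field-induced linear phase gradient k (k ∝ H) plus a hypothetical
// Josephson phase shift dPhi applied to the right half of the junction.
// I_c = max over phi0 of |(I_c0/L) ∫_0^L sin(phi0 + k·x + dPhi·θ(x − L/2)) dx|.
public class JpsSketch {
    public static double criticalCurrent(double k, double dPhi) {
        int n = 2000;          // integration points (midpoint rule)
        int m = 360;           // scanned values of the free offset phi0
        double best = 0.0;
        for (int j = 0; j < m; j++) {
            double phi0 = 2.0 * Math.PI * j / m;
            double sum = 0.0;
            for (int i = 0; i < n; i++) {
                double x = (i + 0.5) / n;               // x/L in (0, 1)
                double shift = (x > 0.5) ? dPhi : 0.0;  // phase shift on half the junction
                sum += Math.sin(phi0 + k * x + shift);
            }
            best = Math.max(best, Math.abs(sum / n));   // in units of I_c0
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(criticalCurrent(0.0, 0.0));      // uniform junction: ≈ 1
        System.out.println(criticalCurrent(0.0, Math.PI));  // a π-shift cancels the two halves: ≈ 0
    }
}
```

With dPhi = π the two halves of the junction carry opposite supercurrents and I_c collapses, which is the qualitative signature a vortex-generated phase shift leaves in the I_c(H) pattern.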
/**************************************************************************
* Copyright (c) 2001 by Acunia N.V. All rights reserved. *
* *
* This software is copyrighted by and is the sole property of Acunia N.V. *
* and its licensors, if any. All rights, title, ownership, or other *
* interests in the software remain the property of Acunia N.V. and its *
* licensors, if any. *
* *
* This software may only be used in accordance with the corresponding *
* license agreement. Any unauthorized use, duplication, transmission, *
* distribution or disclosure of this software is expressly forbidden. *
* *
* This Copyright notice may not be removed or modified without prior *
* written consent of Acunia N.V. *
* *
* Acunia N.V. reserves the right to modify this software without notice. *
* *
* Acunia N.V. *
* <NAME> 35 <EMAIL> *
* 3000 Leuven http://www.acunia.com *
* Belgium - EUROPE *
**************************************************************************/
package gnu.testlet.wonka.lang.ClassLoader; //complete the package name ...
import gnu.testlet.TestHarness;
import gnu.testlet.Testlet;
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Enumeration;
import java.util.HashMap;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;
//import java.*; // at least the class you are testing ...
/**
* Written by ACUNIA. <br>
* <br>
* this file contains test for java.lang.ClassLoader <br>
*
*/
public class AcuniaClassLoaderTest implements Testlet
{
protected TestHarness th;
protected HashMap hm;
protected Object class1;
protected Object class2;
protected Object duplicate;
protected ClassLoader baseCl;
protected ClassLoader evol1Cl;
protected ClassLoader atest1Cl;
protected ClassLoader atest2Cl;
protected ClassLoader rtest1Cl;
protected ClassLoader rtest2Cl;
protected ClassLoader duplicateCl;
private static final String tc1 = "gnu.testlet.wonka.lang.ClassLoader.TestClass1";
private static final String tc2 = "gnu.testlet.wonka.lang.ClassLoader.TestClass2";
private static final String ti1 = "gnu.testlet.wonka.lang.ClassLoader.TestInterface1";
private static final String ti2 = "gnu.testlet.wonka.lang.ClassLoader.TestInterface2";
private static final String bsi = "gnu.testlet.wonka.lang.ClassLoader.BasicInterface";
private static final String abc = "gnu.testlet.wonka.lang.ClassLoader.AbstractBaseClass";
private static final String ac1 = "gnu.testlet.wonka.lang.ClassLoader.AbstractClass1";
private static final String ac2 = "gnu.testlet.wonka.lang.ClassLoader.AbstractClass2";
private static final String ae1 = "gnu.testlet.wonka.lang.ClassLoader.AbstractEvol1";
private static final String e1i = "gnu.testlet.wonka.lang.ClassLoader.Evol1Interface";
protected boolean setup() {
hm = new HashMap();
th.debug("start setting up ClassLoaderTest");
try {
JarFile jf = newJarFile("/CLTest.jar");
Enumeration e = jf.entries();
while (e.hasMoreElements()) {
JarEntry je = (JarEntry) e.nextElement();
String s = je.getName();
if (!s.endsWith(".class")) {
continue;
}
int i = s.indexOf('/');
while (i != -1) {
s = s.substring(0, i) + "." + s.substring(i + 1);
i = s.indexOf('/');
}
i = s.lastIndexOf('.');
if (i != -1) {
s = s.substring(0, i);
}
InputStream in = jf.getInputStream(je);
byte[] bytes = new byte[1024];
int rd = in.read(bytes, 0, 1024);
ByteArrayOutputStream bos = new ByteArrayOutputStream();
while (rd != -1) {
bos.write(bytes, 0, rd);
rd = in.read(bytes, 0, 1024);
}
bytes = bos.toByteArray();
hm.put(s, bytes);
}
HashMap clmap = new HashMap();
clmap.put(abc, hm.get(abc));
clmap.put(bsi, hm.get(bsi));
baseCl = new ExClassLoader(clmap, "baseClassLoader");
clmap = new HashMap(clmap);
clmap.put(ae1, hm.get(ae1));
clmap.put(e1i, hm.get(e1i));
evol1Cl = new ExClassLoader(baseCl, clmap, "evol1ClassLoader");
clmap = new HashMap(clmap);
clmap.put(ac1, hm.get(ac1));
clmap.put(ti1, hm.get(ti1));
clmap.put(ac2, hm.get(ac2));
clmap.put(ti2, hm.get(ti2));
atest1Cl = new ExClassLoader(evol1Cl, clmap, "atest1ClassLoader");
atest2Cl = new ExClassLoader(evol1Cl, clmap, "atest2ClassLoader");
clmap = new HashMap(clmap);
clmap.put(tc1, hm.get(tc1));
clmap.put(tc2, hm.get(tc2));
rtest1Cl = new ExClassLoader(atest1Cl, clmap, "rtest1ClassLoader");
rtest2Cl = new ExClassLoader(atest2Cl, clmap, "rtestClassLoader");
duplicateCl = new ExClassLoader(clmap, "duplicateClassLoader");
th.debug("done setting up ClassLoaderTest");
return true;
} catch (Exception e) {
th.debug("Jar-file is Missing");
return false;
}
}
private JarFile newJarFile(String string) throws IOException {
InputStream in = getClass().getResourceAsStream(string);
if(in != null) {
File out = new File("tmp.file");
out.deleteOnExit();
FileOutputStream fos = new FileOutputStream(out);
byte[] bytes = new byte[1024];
int rd = in.read(bytes);
while(rd != -1) {
fos.write(bytes,0,rd);
rd = in.read(bytes);
}
fos.close();
return new JarFile(out);
}
return null;
}
public void test (TestHarness harness)
{
th = harness;
th.setclass("java.lang.ClassLoader");
if (setup()){
test_delegation();
}else th.debug("setup failed");
test_badClasses();
}
/**
* tests class loader delegation across a loader hierarchy. <br>
*
*/
public void test_delegation(){
th.checkPoint("delegation");
Thread t = new Thread(new InitClass(rtest1Cl));
ClassLoader cl = new ExClassLoader(new HashMap(), "dumbClassLoader");
try {
Class.forName("gnu.testlet.wonka.lang.ClassLoader.BasicInterface",true,cl);
th.fail("should throw a ClassNotFoundException ");
}catch(ClassNotFoundException cnfe){ th.check(true); }
try {
Class c = Class.forName("gnu.testlet.wonka.lang.ClassLoader.BasicInterface",true,baseCl);
t.start();
Thread.yield();
c = Class.forName("gnu.testlet.wonka.lang.ClassLoader.TestClass2",false,rtest2Cl);
Object o = c.newInstance();
class2 = o;
System.out.println("Main class constructed: "+o);
} catch (Exception e){
e.printStackTrace();
}
while (t.isAlive()){
Thread.yield();
}
/* try { t.join(); }
catch (InterruptedException _){}
*/
try {
Class.forName("gnu.testlet.wonka.lang.ClassLoader.BasicInterface",true,cl);
th.fail("should throw a ClassNotFoundException");
}catch(ClassNotFoundException cnfe){ th.check(true); }
Class c = class1.getClass();
th.check(c.getClassLoader(), rtest1Cl,"checking classLoader -- 1");
c = c.getSuperclass();
th.check(c.getClassLoader(), atest1Cl,"checking classLoader -- 2");
Class [] ca = c.getInterfaces();
th.check(ca[0].getClassLoader(), atest1Cl,"checking classLoader -- 3");
c = c.getSuperclass();
th.check(c.getClassLoader(), evol1Cl,"checking classLoader -- 4");
ca = c.getInterfaces();
th.check(ca[0].getClassLoader(), evol1Cl,"checking classLoader -- 5");
c = c.getSuperclass();
th.check(c.getClassLoader(), baseCl,"checking classLoader -- 6");
ca = c.getInterfaces();
th.debug(ca[0].getName());
th.check(ca[0].getClassLoader(), baseCl,"checking classLoader -- 7");
c = class2.getClass();
th.check(c.getClassLoader(), rtest2Cl,"checking classLoader -- 9");
c = c.getSuperclass();
th.check(c.getClassLoader(), atest2Cl,"checking classLoader -- 10");
ca = c.getInterfaces();
th.check(ca[0].getClassLoader(), atest2Cl,"checking classLoader -- 11");
c = c.getSuperclass();
th.check(c.getClassLoader(), evol1Cl,"checking classLoader -- 12");
ca = c.getInterfaces();
th.check(ca[0].getClassLoader(), evol1Cl,"checking classLoader -- 13");
c = c.getSuperclass();
th.check(c.getClassLoader(), baseCl,"checking classLoader -- 14");
ca = c.getInterfaces();
th.check(ca[0].getClassLoader(), baseCl,"checking classLoader -- 15");
th.checkPoint("duplicate class loading");
try {
c = Class.forName("gnu.testlet.wonka.lang.ClassLoader.TestClass2",false,duplicateCl);
Object o = c.newInstance();
duplicate = o;
System.out.println("Main class constructed: "+o);
} catch (Exception e){
e.printStackTrace();
}
c = class2.getClass();
th.check(! c.isInstance(duplicate), "not the same instance");
}
private class InitClass implements Runnable{
ClassLoader cl;
public InitClass(ClassLoader cl){
this.cl = cl;
}
public void run(){
try {
Class c = Class.forName("gnu.testlet.wonka.lang.ClassLoader.TestClass1",false,cl);
Object o = c.newInstance();
class1 = o;
System.out.println("InitClass constructed: "+o);
} catch (Exception e){
e.printStackTrace();
}
}
}
/**
* implemented. <br>
*
*/
public void test_badClasses(){
th.checkPoint("defineClass(java.lang.String,byte[],int,int)java.lang.Class");
BadClassLoader cl = new BadClassLoader();
try {
Class c = cl.findClass("A");
c.newInstance();
th.fail("should throw ClassCircularityError "+c);
}catch (Throwable t){
//t.printStackTrace();
th.check(t.getClass(), ClassCircularityError.class);
}
// not everyone should be allowed to
try {
Class c = cl.findClass("String");
String s =(String) c.newInstance();
th.fail("should throw an Error "+s);
}catch (Throwable t){
//t.printStackTrace();
th.check((t instanceof ClassCastException) || (t instanceof SecurityException), "checking Throwable type");
}
try {
cl.findClass("java.lang.Bad");
th.fail("Bad class");
}catch (Throwable t){
//t.printStackTrace();
th.check((t instanceof Error) || (t instanceof SecurityException));
}
try {
Class c = cl.findClass("Old");
c.newInstance();
}catch (Throwable t){
th.fail("should be allowed, but got "+t);
}
try {
Class c = cl.findClass("Mis");
c.newInstance();
th.fail("should throw a ClassNotFoundException");
}catch (Throwable t){
//t.printStackTrace();
th.check(t.getClass(), ClassNotFoundException.class, "Mis: "+t);
}
try {
Class c = cl.findClass("MisClass");
c.newInstance();
th.fail("should throw a ClassNotFoundException");
}catch (Throwable t){
//t.printStackTrace();
th.check(t.getClass(), ClassNotFoundException.class, "MisClass: "+t);
}
try {
cl.findClass("BadFormat1");
th.fail("should throw a ClassFormatError");
}catch (Throwable t){
t.printStackTrace();
th.check(t.getClass(), ClassFormatError.class);
}
try {
cl.findClass("BadFormat2");
th.fail("should throw a ClassFormatError");
}catch (Throwable t){
t.printStackTrace();
th.check(t.getClass(), ClassFormatError.class);
}
try {
Class c = cl.findClass("BadFormat3");
c.newInstance();
th.fail("should throw a VerifyError");
}catch (Throwable t){
//t.printStackTrace();
th.check(t.getClass(), VerifyError.class);
}
try {
cl.findClass("CreateByteArray");
th.fail("should throw a ClassFormatError");
}catch (Throwable t){
//t.printStackTrace();
th.check(t.getClass(), ClassFormatError.class);
}
try {
cl.findClass("wrongName");//, true, cl);
th.fail("should throw an Error");
}catch (Throwable t){
//t.printStackTrace();
th.check(t instanceof Error);
}
try {
Class c = cl.findClass("Acces.AcMethod");//, true, cl);
c.newInstance();
th.fail("should throw an IncompatibleClassChangeError - 1");
}catch (Throwable t){
//t.printStackTrace();
th.check(t instanceof IllegalAccessError, "got "+t.getClass()+
" =?= IllegalAccessError");
}
try {
Class c = cl.findClass("Acces");//, true, cl);
c.newInstance();
th.fail("should throw an IncompatibleClassChangeError - 2");
}catch (Throwable t){
//t.printStackTrace();
th.check(t instanceof IllegalAccessError || t instanceof SecurityException,"Access -2-");
}
try {
Class c = cl.findClass("NoSuchField");//, true, cl);
c.newInstance();
th.fail("should throw an IncompatibleClassChangeError - 3");
}catch (Throwable t){
//t.printStackTrace();
th.check(t instanceof NoSuchFieldError,
"need a NoSuchFieldError, but got "+t);
}
try {
Class c = cl.findClass("NoSuchMethod");//, true, cl);
c.newInstance();
th.fail("should throw an IncompatibleClassChangeError - 4");
}catch (Throwable t){
//t.printStackTrace();
th.check(t instanceof NoSuchMethodError,
"need a NoSuchMethodError, but got "+t);
}
try {
Class c = cl.findClass("Instantiate");//, true, cl);
c.newInstance();
th.fail("should throw an IncompatibleClassChangeError - 5");
}catch (Throwable t){
//t.printStackTrace();
th.check(t instanceof InstantiationError);
}
try {
Class c = cl.findClass("Initializer");//, true, cl);
c.newInstance();
th.fail("should throw an ExceptionInInitializerError");
}catch (Throwable t){
//t.printStackTrace();
th.check(t instanceof ExceptionInInitializerError);
}
}
}
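ExClassLoader itself is not part of this file. Judging only from the constructor calls above (an optional parent, a name-to-bytes HashMap and a label), a map-backed loader could be sketched as follows; the class name, the null-parent choice and every other detail below are assumptions, not the real ExClassLoader source:

```java
import java.util.HashMap;

// Hypothetical reconstruction of a map-backed class loader such as ExClassLoader.
public class MapClassLoader extends ClassLoader {
    private final HashMap classBytes;   // class name -> byte[] of the .class file
    private final String loaderName;

    // No explicit parent: assume delegation to the bootstrap loader only.
    public MapClassLoader(HashMap classBytes, String loaderName) {
        this(null, classBytes, loaderName);
    }

    public MapClassLoader(ClassLoader parent, HashMap classBytes, String loaderName) {
        super(parent);
        this.classBytes = classBytes;
        this.loaderName = loaderName;
    }

    // loadClass() calls this only after the parent chain failed (standard delegation).
    protected Class findClass(String name) throws ClassNotFoundException {
        byte[] bytes = (byte[]) classBytes.get(name);
        if (bytes == null) {
            throw new ClassNotFoundException(name + " is not known to " + loaderName);
        }
        return defineClass(name, bytes, 0, bytes.length);
    }

    public String toString() {
        return loaderName;
    }
}
```

Because each loader in such a chain only defines the classes present in its own map, a class defined by rtest1Cl links against its superclass from atest1Cl, and so on up to baseCl, which is exactly what the delegation checks in test_delegation() assert.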
/**
 * Handles changes in player position and rotation, such as when travelling to a new dimension, (re)spawning or
 * mounting horses. Immediately replies to the server with the client's post-processed view of the player
 * position and rotation.
 */
public void handlePlayerPosLook(S08PacketPlayerPosLook packetIn)
{
PacketThreadUtil.checkThreadAndEnqueue(packetIn, this, this.gameController);
EntityPlayerSP entityplayersp = this.gameController.thePlayer;
double d0 = packetIn.func_148932_c();
double d1 = packetIn.func_148928_d();
double d2 = packetIn.func_148933_e();
float f = packetIn.func_148931_f();
float f1 = packetIn.func_148930_g();
if (packetIn.func_179834_f().contains(S08PacketPlayerPosLook.EnumFlags.X))
{
d0 += entityplayersp.posX;
}
else
{
entityplayersp.motionX = 0.0D;
}
if (packetIn.func_179834_f().contains(S08PacketPlayerPosLook.EnumFlags.Y))
{
d1 += entityplayersp.posY;
}
else
{
entityplayersp.motionY = 0.0D;
}
if (packetIn.func_179834_f().contains(S08PacketPlayerPosLook.EnumFlags.Z))
{
d2 += entityplayersp.posZ;
}
else
{
entityplayersp.motionZ = 0.0D;
}
if (packetIn.func_179834_f().contains(S08PacketPlayerPosLook.EnumFlags.X_ROT))
{
f1 += entityplayersp.rotationPitch;
}
if (packetIn.func_179834_f().contains(S08PacketPlayerPosLook.EnumFlags.Y_ROT))
{
f += entityplayersp.rotationYaw;
}
entityplayersp.setPositionAndRotation(d0, d1, d2, f, f1);
this.netManager.sendPacket(new C03PacketPlayer.C06PacketPlayerPosLook(entityplayersp.posX, entityplayersp.getEntityBoundingBox().minY, entityplayersp.posZ, entityplayersp.rotationYaw, entityplayersp.rotationPitch, false));
if (!this.doneLoadingTerrain)
{
this.gameController.thePlayer.prevPosX = this.gameController.thePlayer.posX;
this.gameController.thePlayer.prevPosY = this.gameController.thePlayer.posY;
this.gameController.thePlayer.prevPosZ = this.gameController.thePlayer.posZ;
this.doneLoadingTerrain = true;
this.gameController.displayGuiScreen((GuiScreen)null);
}
}
package author;
public class AuthorInfo {
}
Age-related change in brain metabolite abnormalities in autism: a meta-analysis of proton magnetic resonance spectroscopy studies

An abnormal trajectory of brain development has been suggested by previous structural magnetic resonance imaging and head circumference findings in autism spectrum disorders (ASDs); however, the neurochemical background remains unclear. To elucidate the neurochemical processes underlying aberrant brain growth in ASD, we conducted a comprehensive literature search and a meta-analysis of 1H-magnetic resonance spectroscopy (1H-MRS) studies in ASD. From the 22 articles identified as satisfying the criteria, means and s.d. of measures of N-acetylaspartate (NAA), creatine, choline-containing compounds, myo-Inositol and glutamate+glutamine in frontal, temporal, parietal, amygdala-hippocampus complex, thalamus and cerebellum were extracted. Random-effects model analyses showed significantly lower NAA levels in all the examined brain regions except the cerebellum in ASD children compared with typically developed children (n=1295 at the maximum in frontal, P<0.05 Bonferroni-corrected), although there was no significant difference in metabolite levels in adulthood. Meta-regression analysis further revealed that the effect size of lower frontal NAA levels linearly declined with older mean age in ASD (n=844, P<0.05 Bonferroni-corrected). The significance of all frontal NAA findings was preserved after considering between-study heterogeneities (P<0.05 Bonferroni-corrected). This first meta-analysis of 1H-MRS studies in ASD demonstrated robust developmental changes in the degree of abnormality in NAA levels, especially in the frontal lobes. The previously reported larger-than-normal brain size in ASD children and the coincident lower-than-normal NAA levels suggest that early transient brain expansion in ASD is mainly caused by an increase in non-neuronal tissue, such as glial cell proliferation.
Introduction

Autism spectrum disorder (ASD) is a representative neurodevelopmental disorder that is behaviorally defined by deficits in social reciprocity, impaired verbal communication, and restrictive and repetitive behavior. 1,2 Against the background of such atypical behavioral development, previous studies have suggested atypical brain development in ASD: overall brain size is slightly reduced at birth, increases dramatically within the first year of life, but then gradually plateaus into adulthood. However, brain size studies cannot provide tissue neurochemical information. Although post-mortem studies have demonstrated cytoarchitectonic abnormalities, aberrant minicolumnar organizations and microglial activations in the brains of autistic individuals, 6 post-mortem studies lack information about the trajectory of brain development. Therefore, the neural mechanisms explaining the aberrant trajectory of brain growth in ASD are yet to be elucidated, although several hypotheses, such as excess neuron number, have been proposed. 6,7

1H-magnetic resonance spectroscopy (1H-MRS) is a noninvasive neuroimaging technique that estimates specific chemical metabolite measures in vivo. 8 Previous studies have used 1H-MRS to quantify glutamine/glutamate (referred to collectively as 'Glx'); N-acetylaspartate (NAA), a marker of neuronal density and activity; 9 choline-containing compounds (Cho), a measure primarily reflecting the constituents of cell membranes; 10 creatine and phosphocreatine (Cre), a measure of cellular energy metabolism; 10 and myo-Inositol (mI), a major osmolyte and precursor for phosphoinositides involved in the second-messenger system. 11 Previous 1H-MRS findings have yielded some inconsistency, such as decreased, unchanged or increased NAA measures in people with ASD compared with typically developed (TD) individuals.
The statistical power of each single previous 1H-MRS study is relatively small, and previous studies have not corrected for multiple comparisons. As brain structural studies show an aberrant trajectory of neurodevelopment, it was reasonable to predict that the degree of neurochemical abnormality indexed by 1H-MRS may also change according to developmental stage in ASD. However, to date only one longitudinal 1H-MRS study, focusing on lactate level, has been reported. 19 Therefore, a meta-analysis is one possible way to achieve sufficient statistical power for drawing conclusions about the neurochemical abnormalities of the autistic brain, and is currently the only way to examine age-related change in 1H-MRS abnormalities of the autistic brain. To our knowledge, neither a systematic review nor a meta-analysis of 1H-MRS studies in people with ASD has been reported previously. The current systematic review and meta-analysis were designed to test the hypothesis that the degree of abnormality in metabolite levels measured with 1H-MRS changes from childhood to adulthood. Concretely, if early brain expansion in ASD is mainly caused by an increase in neuronal tissue, a transient increase in NAA levels would be found during childhood but not in adulthood. On the other hand, if an increase in non-neuronal tissue mainly contributes to early brain expansion, NAA levels would remain normal, or even decline, in childhood as well as in adulthood.

Materials and methods

Data sources. 1H-MRS studies that examined metabolite measures in the brains of individuals with ASD and TD control subjects were obtained through the computerized databases MEDLINE, PsycINFO, EMBASE and Web of Science. The search terms used in the systematic screening were autism, autistic, ASD, Asperger's, developmental disorder, pervasive developmental disorder (PDD) and mental development, which were also combined with magnetic resonance spectroscopy and MRS.
Titles and abstracts of studies were examined to check whether or not they could be included. Reference lists of included articles were also examined to find additional studies.

Selection of study. Studies were included if they were brain 1H-MRS studies published between 1980 and December 2010, they examined people with ASD compared with a TD control group, and they reported sufficient data to compute effect sizes: means, s.d. and numbers of participants. The literature search was performed without language restriction. If studies did not report sufficient data, we e-mailed the corresponding and then the last author to obtain them. In cases where neither of them responded, we excluded the study. Two reviewers (YA and HY) performed study screenings independently.

Data extraction. To perform the meta-analyses, we defined a standardized mean difference as the effect size statistic Cohen's d, which is calculated as the difference between the mean of the experiment group and the mean of the comparison group divided by the pooled s.d. In the current meta-analyses, the mean measure of NAA, Cre, Cho, mI and Glx in autistic individuals was subtracted from that in TD groups in each volume of interest (VOI), respectively, and divided by the pooled s.d. of both. Data were separated by the mean age of participants to examine the hypothesis that the degree of metabolite abnormality changes from childhood to adulthood. When the mean age of participants was over 20 years, the study was included in the meta-analysis in adulthood. 15,18,20,21 In a study reporting the age range as 3 to 5 years with no description of the mean age of participants, we considered the participants to have a mean age of 4 years. 22 In cases of studies reporting more than two measures of metabolites, we determined the priority for extraction as absolute measure, then ratio to Cre.
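The pooled-s.d. definition of Cohen's d just given can be written out explicitly; the class and method names below are ours, for illustration only:

```java
// Cohen's d with the pooled standard deviation, as defined under Data extraction.
public class EffectSize {
    public static double cohensD(double mean1, double sd1, int n1,
                                 double mean2, double sd2, int n2) {
        // Pooled s.d.: square root of the df-weighted average of the two sample variances.
        double pooledSd = Math.sqrt(((n1 - 1) * sd1 * sd1 + (n2 - 1) * sd2 * sd2)
                                    / (n1 + n2 - 2));
        return (mean1 - mean2) / pooledSd;
    }
}
```

For example, a TD mean of 10 (s.d. 2, n=20) versus an ASD mean of 8 (s.d. 2, n=20) gives d = (10 − 8)/2 = 1.0.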
Two reviewers (YA and HY) performed all the data extraction and computation of effect sizes independently to minimize errors. The Meta-analysis of Observational Studies in Epidemiology guidelines 23 were followed in the study.

Identification of brain regions. Because our hypothesis focused on the developmental aspect of autistic brain pathology, we classified the sub-regions into frontal, amygdala-hippocampus complex (AHC), temporal, parietal, cerebellum and thalamus, in line with the similarity of developmental background within each sub-region. 24 In the case of a study reporting measures from more than one sub-region in one area (for example, anterior cingulate cortex and dorsolateral prefrontal cortex), these were assigned to the appropriate meta-analysis sub-group (that is, frontal lobe) as two (or more) independent data sets regardless of tissue type, such as gray matter, white matter or both. VOIs in the medial temporal lobe that included the hippocampus or amygdala region were included in the AHC sub-group. 13,22,25 VOIs in the intraparietal sulcus 20 and temporoparietal junction 20 were assigned to the parietal lobe, while that in the insula 14 was assigned to the temporal lobe. To ensure the meta-analysis was sufficiently powered, brain region measures were included if there were two or more studies reporting more than three VOIs in total. VOIs in TD control subjects who were compared with more than two ASD groups were identified 25,26 and divided into the appropriate number of comparison subgroups to avoid duplicate counting.

Meta-analysis. All meta-analyses were performed using Review Manager (RevMan, Version 5.1; Copenhagen: The Nordic Cochrane Centre, The Cochrane Collaboration, 2011) from the Cochrane Collaboration.
A random effects model was adopted for the current meta-analysis to control for potential heterogeneity such as variation in the location of VOIs, implementation of tissue segmentation within VOIs, single- vs multi-voxel spectroscopy, echo time, volume of VOI and type of metabolite measure. Cohen's d was calculated and used as the effect size. As the differences in metabolite levels in ASD compared with TD subjects were predicted to differ between childhood and adulthood, the comparisons were examined separately for childhood and adulthood. We employed conservative significance levels determined using Bonferroni corrections, with P < 0.0022 in childhood (= 0.05/23 included metabolites in six regions) and P < 0.0033 in adulthood (= 0.05/15 included metabolites in four regions; Table 2).

Sensitivity analyses. The robustness of significant findings from the meta-analysis was further tested by sensitivity analyses in specified sub-groups excluding studies with potential confounds. These potential confounds included comorbid epilepsy, medication, presence of mental retardation, field strength of the MR scanner, type of MRS measure, segmentation within VOIs and diagnostic tools. The significance level was defined as P < 0.0014 (= 0.05/35 comparisons; 7 potential confounds × 5 regions).

Meta-regression. To test the hypothesis that neurochemical abnormalities change with age, we performed meta-regression analyses in the combined child-adult sample to examine the relation between participants' mean age and Cohen's d for the NAA levels in the frontal lobe, parietal lobe and AHC, in which the meta-analysis revealed significant differences between ASD and TD individuals in childhood or adulthood, with sufficient sample sizes to test meta-regression (n > 10).27 The regression was examined using SPSS 18.0 (SPSS, Chicago, IL, USA). The level of statistical significance was defined, applying the Bonferroni correction, as P < 0.012 (= 0.05/4 areas).
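The Bonferroni thresholds quoted above are simply the nominal alpha divided by the number of comparisons; a small sketch reproducing the values in the text:

```python
def bonferroni_threshold(alpha, n_comparisons):
    """Per-comparison significance threshold under a Bonferroni correction."""
    return alpha / n_comparisons

# Thresholds used in the text: 23 metabolite measures (childhood), 15 (adulthood),
# and 7 potential confounds x 5 regions for the sensitivity analyses.
print(round(bonferroni_threshold(0.05, 23), 4))     # 0.0022
print(round(bonferroni_threshold(0.05, 15), 4))     # 0.0033
print(round(bonferroni_threshold(0.05, 7 * 5), 4))  # 0.0014
```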
The current meta-analyses included studies with considerable heterogeneity, such as presence of mental retardation, medication, type of MRS measure (for example, absolute measure or ratio to Cre), employment of segmentation within the VOI, volume of VOI and comorbid epilepsy. To investigate the influence of these potential modifiers, we performed meta-regression analyses for the metabolite measures in which the current meta-analyses showed a significant difference between subjects with ASD and TD. The meta-regressions were examined in the combined childhood and adulthood sample to ensure a sufficient number of data sets.27 The significance level was set at P < 0.05 to strictly assess the effect of heterogeneity.

Assessing between-study heterogeneity. The Cochran Q and I² statistics were employed to examine between-study heterogeneity. The significance level was defined as P < 0.10 to conclude that the studies were heterogeneous.28

Publication bias. Publication bias was assessed qualitatively by visual inspection of funnel plots and quantitatively by linear regression analysis for each group and each brain region. Based on previous literature,27 this calculation was performed for data sets of at least 10.

Data synthesis. Twenty-three demographic, clinical and methodological variants, including the number of participants, number of male participants, mean age, range of age, intelligence quotient range, diagnostic criteria or tools, pharmacological status, presence of comorbid epilepsy, sequence for MRS acquisition, utilization of segmentation within VOIs, strength of magnetic field (tesla), echo time (TE), repetition time (TR), type of MRS measurement (absolute measure or ratio to Cre), type of metabolites reported, location of VOI, size of VOI, and main results of the study, were extracted, as shown in Table 1.
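The Cochran Q and I² statistics mentioned above can be computed from the per-study effect sizes and their sampling variances using inverse-variance weights; a minimal sketch with hypothetical inputs:

```python
def cochran_q_i2(effects, variances):
    """Cochran's Q and the I^2 statistic (%) for between-study heterogeneity,
    using inverse-variance weights around the fixed-effect pooled estimate."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return q, i2

# Hypothetical effect sizes from three studies with equal, small variances:
q, i2 = cochran_q_i2([0.0, 1.0, 2.0], [0.1, 0.1, 0.1])
print(round(q, 2), round(i2, 1))  # 20.0 90.0
```

I² expresses the share of total variation attributable to heterogeneity rather than chance; values near the 78 to 80% reported for mI indicate substantial heterogeneity.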
The total number of participants, studies and VOIs, together with the mean difference, 95% confidence interval, P-value, I² score and significance of the linear regression analysis of funnel plot symmetry calculated from each meta-analysis, are shown in Table 2.

Results

Study selection. The literature search described above yielded 244 articles, of which 47 studies were identified as potential candidates for the meta-analysis. Nine articles were excluded because of a lack of original data. Ten were excluded because they did not recruit ASD individuals. Two were excluded because they did not utilize ¹H-MRS. Two were excluded because they did not report new data. Thus, 24 studies were included in the database.25,26,28,29 From the database, one study was excluded from the meta-analysis because it did not report raw data regarding the metabolite measure28 and one study was excluded because it did not provide sufficient data to calculate the standardized mean difference36 (Figure 1).

Meta-analysis of metabolite measures in childhood and adulthood. In childhood, individuals with ASD showed significantly reduced NAA levels compared with TD controls in all brain regions included in the meta-analysis except the cerebellum (P < 0.05, Bonferroni-corrected; Figure 2). In contrast, no significant difference was found in other metabolite concentrations between children with ASD and TD controls. Although several metabolites showed differences in levels between children with ASD and TD at trend-level significance (NAA in the cerebellum: P = 0.008; mI in the frontal areas: P = 0.008; Cre in the frontal areas: P = 0.01, in the AHC: P = 0.009, in the thalamus: P = 0.04; Cho in the thalamus: P = 0.03), these effects disappeared after the Bonferroni correction for multiple comparisons was applied (Table 2).
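The linear regression test of funnel plot symmetry is typically an Egger-style regression of the standardized effect on precision; a minimal sketch under that assumption, with hypothetical inputs (a two-point example fits exactly, so the recovered intercept and slope are known):

```python
def egger_regression(effects, std_errors):
    """Regress standardized effect (d / se) on precision (1 / se) by ordinary
    least squares; an intercept far from zero suggests funnel-plot asymmetry."""
    z = [d / se for d, se in zip(effects, std_errors)]
    prec = [1.0 / se for se in std_errors]
    n = len(z)
    mx, my = sum(prec) / n, sum(z) / n
    sxx = sum((x - mx) ** 2 for x in prec)
    sxy = sum((x - mx) * (y - my) for x, y in zip(prec, z))
    slope = sxy / sxx
    return my - slope * mx, slope  # (intercept, slope)

intercept, slope = egger_regression([0.6, 0.55], [1.0, 0.5])
print(round(intercept, 3), round(slope, 3))  # 0.1 0.5
```

In practice the intercept's P-value is tested against a threshold (P < 0.1 in the text) with at least 10 data sets.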
In contrast to childhood, no metabolite showed a significant difference in levels between people with ASD and TD in adulthood. The NAA levels in the parietal lobe and cerebellum showed a trend toward decreased measures in adults with ASD compared with those with TD (P = 0.01 in the parietal lobe and P = 0.004 in the cerebellum), although these differences did not reach statistical significance after the Bonferroni correction was applied (Table 2). In addition, it was confirmed that the Bonferroni corrections for multiple comparisons were not too strict, because correction using the Student-Newman-Keuls procedure did not change the results for either childhood or adulthood.

Sensitivity analyses. All sensitivity analyses performed in the specified subgroups with more homogeneous quality showed significant reductions in NAA measures in the frontal lobe of children with autism (P < 0.05, Bonferroni-corrected). These results demonstrate the robustness of reduced frontal NAA levels during childhood even after considering methodological and participant heterogeneity, such as comorbidity of other neuropsychiatric diseases and intellectual disability, medication status, diagnostic methods, field strength of the MR scanner, type of MRS measure and implementation of segmentation within VOIs. In the other areas, some sensitivity analyses revealed that the significant effect of low NAA levels disappeared in some subgroups. In the AHC, parietal cortex, temporal regions and thalamus, the significance of NAA reductions was preserved in most subgroups, such as ASD individuals with no comorbid epilepsy, no medications and acquisition of MRS with a 1.5-tesla scanner (Supplementary Table 1).

Meta-regression.
The current meta-regression revealed a significant inverse effect of the mean age of participants on NAA measures in the frontal lobe (P = 0.009), but not in the AHC or parietal cortex (Figure 3). Even after excluding heterogeneity of participants and methodologies in the included studies, meta-regression analyses in specified subgroups further demonstrated the significant effect of mean age in the frontal lobe. These analyses were performed in studies with implementation of segmentation within VOIs (P < 0.001), with multi-voxel MRS (P = 0.021), with a 1.5-tesla scanner (P = 0.004), with participants without comorbid epilepsy (P = 0.001), without medication (P = 0.006), without mental retardation (P = 0.032) and with participants diagnosed using the ADI-R or ADOS (P = 0.006). The meta-regression revealed significant effects of the type of MRS measure and the employment of segmentation within VOIs on the NAA levels in the AHC (P < 0.05). However, no potential modifier significantly affected the NAA levels in the frontal and parietal regions (Supplementary Table 2).

Between-study heterogeneity. No significant heterogeneity was detected for any metabolite in any region except the mI measure in the AHC and cerebellum during childhood (I² = 78% and 80%, respectively) (Table 2).

Publication bias. The linear regression test showed no significant publication bias for most metabolites (5/6); the exception was parietal NAA in children (P < 0.1; Table 2).

Discussion

To our knowledge, this is the first systematic review and meta-analysis of ¹H-MRS studies in people with ASD. A total of 22 studies were integrated in the meta-analyses, comprising up to 1476 ¹H-MRS measures from 31 data sets: 844 measures from ASD and 632 from TD individuals. The current meta-analysis demonstrated that NAA levels in the frontal, parietal and temporal lobes, AHC and thalamus were significantly lower in children with ASD compared with TD controls.
In contrast, no significant difference in any metabolite levels was found in adulthood. Importantly, our meta-regression analysis provides the first evidence that the degree of frontal NAA reduction declines linearly with age from childhood to adulthood in people with ASD. The systematic review showed obvious methodological heterogeneity across studies, including comorbid epilepsy, psychotropic medications, mental retardation, field strength of MR scanners, types of MRS measures, utilization of segmentation within VOIs and diagnostic tools. However, the sensitivity analyses further emphasized the robustness of the current findings, especially the lower-than-normal frontal NAA level in children with ASD, since the potential confounds, heterogeneity between studies and publication bias did not significantly affect the findings. The finding of robustly lower-than-normal frontal NAA during childhood is consistent with the importance of this area in the pathophysiology of ASD indicated by previous findings from several lines of research.6 Previous functional imaging studies have repeatedly reported dysfunctional prefrontal cortices during psychological tasks requiring theory of mind,47 social perception48,49 and self-referencing.43 Early brain enlargement, especially in the frontal lobe, has also been reported.50,6 The present linear regression analysis demonstrated a strong inverse correlation between the decrement of frontal NAA and mean age in subjects with ASD. Previous brain structural findings revealed an abnormal trajectory of brain growth, such that the overall brain size in autism was slightly reduced at birth, dramatically increased within the first year of life, then gradually plateaued into adulthood. As the current meta-analyses included studies involving participants with mean ages between 4 and 35, the period covered by the meta-analyses corresponds to the phase of gradual dissipation of early brain overgrowth in ASD.
It is notable that the decrease and subsequent recovery in NAA levels was found during the period showing a significant increase and subsequent normalization in brain size in ASD, because these overlapping effects were inferred from independent data. The current neurochemical findings could provide some insight into the histochemical background of the abnormal trajectory of autistic brain growth. Although the histological background is yet to be uncovered, several hypotheses have been formulated.6,7 These hypotheses can be divided into two major categories. The first is abnormalities associated with neurons: for example, excessive numbers of neurons, synapses or minicolumns, and excessive and/or premature growth of axons, dendrites or neuron cell bodies. The second is abnormalities associated with glial cells, such as excessive numbers of glia, activated and enlarged glia, and excessive and/or premature myelination. NAA is localized mainly in the cell bodies, axons, dendrites and dendritic spines of mature neurons, and is considered to function as a marker of functional and structural neuronal integrity.9

Figure 2: Forest plot of frontal N-acetylaspartate (NAA). Standardized mean differences for NAA measures in the frontal lobe between subjects with autism spectrum disorders (ASD) and those with typical development (TD) in childhood and adulthood. The forest plot displays standardized mean differences and 95% confidence intervals (CIs).

Although recent studies have suggested that NAA is expressed by oligodendrocytes,51 NAA is less distributed in glia.52 Therefore, since brain enlargement due to an increase in neuron number should raise NAA levels in the autistic brain, it is more likely that reduced NAA measures during childhood reflect decreased neuron density induced by increasing brain volume due to factors other than those associated with neurons.
Considered together, of the two possible explanations for early brain expansion in ASD, increased cell bodies, axons, dendrites and dendritic spines of neurons seems less likely than abnormalities associated with glia. Glial cells (for example, astrocytes) have been shown to increase in volume after birth.53 As glial cells initially occupy a large percentage of brain volume,54 increased glial cell volume may be a major factor in brain enlargement without a significant increase in the NAA measure of ¹H-MRS. Previous post-mortem studies have reported glial abnormalities that could contribute to the abnormal volume increase, such as microglial and astroglial activation and increased microglial density, in the prefrontal cortex in ASD.55,56 Thus, it is reasonable to hypothesize that proliferation of glial cells, diluting the density of neurons, is a major factor in the abnormal brain overgrowth and decreased NAA levels observed in children with ASD. The hypothesis that proliferation of glia could cause brain expansion with decreased neuron density and low NAA is consistent with observations in some neurological and genetic diseases. Previous studies have demonstrated decreased NAA due to abnormal synthesis of glia in patients with glioma.57 Some other diseases manifesting macrocephaly, for example neurofibromatosis type 1, demonstrate decreased NAA measures. It was concluded that the reduced NAA measures in neurofibromatosis type 1 were caused by increased brain volume due to excess myelination.58 Proliferation of glia can also explain the transient brain expansion during the neonatal period and infancy, and is consistent with the subsequent persistence of behavioral dysfunction during the period when brain expansion gradually normalizes in ASD. Transient microglial cell proliferation and subsequent irreversible dysfunction can occur after inflammation or hypoperfusion.
59 The existence of inflammation or hypoperfusion during the neonatal period and infancy in ASD has also been suggested by several lines of evidence,55,56 including decreased serum levels of adhesion molecules and their correlation with head circumference at birth.60 Diminishing of transiently increased glia after early infancy is consistent with the normalization of transiently expanded brain size and decreased NAA over the same time period. However, because we could not find an MRS study involving participants aged <1 year, abnormal brain growth during this period is beyond the scope of the current meta-analysis.

Several methodological considerations and limitations of our study should be considered. First, because of the nature of meta-analysis, we could perform statistical analysis only at the level of studies. There is no way to confirm whether the participants of the included studies actually exhibited enlarged brains during childhood. Second, although the inverse effect of age on frontal NAA was shown robustly by the meta-regression analysis performed with the combined child-adult group, it remains unclear whether NAA levels in adults with ASD are equal to, or exceed, levels in TD, because of the relatively small number of included studies in adulthood. Third, because most included studies used a 1.5-tesla rather than a 3-tesla scanner, the data may be insufficient for several metabolites, such as Glx, that cannot be reliably evaluated at lower magnetic field strengths. Fourth, the included studies display considerable heterogeneity, such as variations in the type of metabolite measures and implementation of segmentation. The use of a ratio to Cre is based on the hypothesis that there is no difference in Cre levels between ASD and TD, which is shown to be questionable by the current meta-analysis.
Not implementing segmentation is likewise based on the assumption that the proportion of cerebrospinal fluid in each VOI is the same between cases and controls, whereas structural differences between ASD and TD have been repeatedly demonstrated. Furthermore, metabolite abnormalities in ASD have been reported to differ between gray matter and white matter.30 We employed the random effects model and the sensitivity analyses in more homogeneous subgroups to control the heterogeneity, and the main findings were preserved when the heterogeneity was controlled. However, the current results may be partially affected by such heterogeneity and should be interpreted carefully. Fifth, although we categorized the locations of VOIs into six brain areas with similar developmental origins,24 the classification might be criticized as over-simplified considering the functional variability within each sub-area.

In conclusion, the current meta-analysis robustly showed a significant frontal NAA reduction in children with ASD compared with TD, and further demonstrated a significant linear correlation between older age and a smaller magnitude of NAA decrease. This NAA reduction disappears in adulthood. Taken together with previous findings suggesting early brain overgrowth and subsequent normalization during the same time period, the current findings support the hypothesis that abnormal brain enlargement in ASD is mainly caused by increases in non-neuronal tissues, such as glial cell proliferation.

Figure 3: The relationship between effect sizes for reduced N-acetylaspartate (NAA) and ages of study participants. Effect sizes from each comparison of VOIs are plotted by the mean age of participants with autism spectrum disorders in each study. The line of best fit shows a gradual but substantial decrease in NAA reduction. No data were obtained from individuals before the age of 4 years.
The current systematic review and meta-analysis emphasize the importance of, and implications for, future research. Future longitudinal and large-scale original studies with sufficient statistical power, free from the methodological limitations above, are required to test the current hypothesis.
// RecursiveBacktracker generates a maze using the recursive-backtracking
// algorithm, implemented iteratively with an explicit stack.
func RecursiveBacktracker(opt settings.Options) cells.Grid {
cellMap := InitializeCells(opt)
seed := int64(SeedConversion(opt.Seed))
s1 := rand.NewSource(seed)
r1 := rand.New(s1)
step := cellMap.AllCells[r1.Intn(len(cellMap.AllCells))]
UpdateGrid(&cellMap, &step, true)
// Seed the stack with the starting cell and walk until every path is exhausted.
var stack = []cells.Cell{step}
for len(stack) >= 1 {
// Collect candidate moves from the current cell.
possibleSteps := getNeighbors(cellMap.AllCells, step)
if len(possibleSteps) <= 1 {
// Dead end: pop the most recently visited cell and backtrack.
step = stack[len(stack)-1]
stack = stack[:len(stack)-1]
} else {
nsLength := len(possibleSteps)
// Pick the next passage with the seeded source r1 (not the global rand)
// so that maze generation is reproducible for a given seed.
bridges := possibleSteps[r1.Intn(nsLength)]
// Carve the passage: open the current cell, the chosen far cell and the
// connecting cell between them (bridges.Near).
UpdateGrid(&cellMap, &step, true)
UpdateGrid(&cellMap, &bridges.Far, true)
UpdateGrid(&cellMap, &bridges.Near, true)
// Advance to the far cell and push it so we can backtrack to it later.
step = bridges.Far
stack = append(stack, bridges.Far)
}
}
fmt.Println("Done")
return cellMap
} |
//
// RCStickerPackageView.h
// RongSticker
//
// Created by Zhaoqianyu on 2018/8/15.
// Copyright © 2018 RongCloud. All rights reserved.
//
#import <UIKit/UIKit.h>
#import "RCStickerPackageConfig.h"
@interface RCStickerPackageView : UIView
- (instancetype)initWithPackageConfig:(RCStickerPackageConfig *)packageConfig;
@end
|
Evaluation of Reference Genes for Quantitative Real-Time PCR in Oil Palm Elite Planting Materials Propagated by Tissue Culture

Background: The somatic embryogenesis tissue culture process has been utilized to propagate high-yielding oil palm. Due to the low callogenesis and embryogenesis rates, molecular studies were initiated to identify genes regulating the process, and their expression levels are usually quantified using reverse transcription quantitative real-time PCR (RT-qPCR). With the recent release of oil palm genome sequences, it is crucial to establish a proper strategy for gene analysis using RT-qPCR. Selection of the most suitable reference genes should be performed for accurate quantification of gene expression levels.

Results: In this study, eight candidate reference genes selected from a cDNA microarray study and a literature review were evaluated comprehensively across 26 tissue culture samples using RT-qPCR. These samples were collected from two tissue culture lines and media treatments, and consisted of leaf explant cultures, callus and embryoids from consecutive developmental stages. Three statistical algorithms (geNorm, NormFinder and BestKeeper) confirmed that the expression stability of the novel reference genes (pOP-EA01332, PD00380 and PD00569) outperformed that of the classical housekeeping genes (GAPDH, NAD5, TUBULIN, UBIQUITIN and ACTIN). PD00380 and PD00569 were identified as the most stably expressed genes in the total sample set and in the MA2 and MA8 tissue culture lines. Their applicability to validate the expression profiles of a putative ethylene-responsive transcription factor 3-like gene demonstrated the importance of using the geometric mean of two genes for normalization.

Conclusions: Systematic selection of the most stably expressed reference genes for RT-qPCR was established in oil palm tissue culture samples. PD00380 and PD00569 were selected for accurate and reliable normalization of gene expression data from RT-qPCR.
These data will be valuable to research associated with the tissue culture process. Also, the method described here will facilitate the selection of appropriate reference genes in other oil palm tissues and in the expression profiling of genes relating to yield and to biotic and abiotic stresses.

Introduction

Oil palm (Elaeis guineensis), which originated from West Africa, is a diploid monocotyledon that belongs to the Arecaceae family and Elaeidinae sub-tribe. It is one of the most economically important plantation crops in Malaysia and accounts for 5% of the world's vegetable oil cultivation area. Being the highest-yielding oil crop, oil palm produces up to 10 times more oil per hectare of land than other major oil crops. Two types of oil can be extracted from oil palm fruits. Oil from the mesocarp is known as palm oil and is used mainly in the food-based industry, while palm kernel oil from the endosperm is essential for the oleochemical industry. In addition, palm oil can be converted into biodiesel. With the emergence of next-generation sequencing technology, the availability of genome information for oil palm has expanded. The first 1.8-gigabase genome sequence of the African oil palm Elaeis guineensis, with at least 34,802 genes, has recently been published by Singh et al. Transcriptome sequences from oil palm tissues such as mesocarp, fruit, flower, endosperm and embryo have been deposited in the GenBank database. An increasing number of candidate genes regulating complex traits in oil palm, such as yield or disease resistance, can now be investigated. Increasing attention is being given to improving the stagnating yield of oil palm. The national average palm oil yield in Malaysia has plateaued at about 3.5 to 3.9 tonnes/hectare/year for more than two decades. One approach to increasing yield is the cultivation of superior planting material with high-yielding potential on the existing cultivated land.
In order to expedite the production of palms with superior characteristics, clonal propagation using tissue culture has been identified since the 1970s as one of the most promising tools. This process has been utilized widely in the oil palm industry to multiply elite planting materials. Results from the first oil palm clonal trials showed an increase of up to 30% in oil yield relative to dura x pisifera palms planted from seeds. However, the challenge encountered by tissue culturists is the low efficiency of the process itself. The callogenesis rate of leaf explants is around 19%, while the average rate of embryogenesis in leaf-derived callus is in the range of 3 to 6%. Therefore, molecular research is being carried out extensively to understand the mechanisms underlying somatic embryogenesis (SE) in oil palm. This has resulted in the identification of SE-associated genes such as EgLSD (lignostilbene-a,b-dioxygenase), EgER6 (ethylene responsive 6), Eg707 (unknown protein) and EgIAA9 (putative member of the AUX/IAA gene family). There is increased opportunity to discover key SE regulatory genes with the availability of genome information. The most common and powerful technique used to explore the expression profiles of genes of interest (GOI) is reverse transcription quantitative real-time PCR (RT-qPCR), which is highly specific, sensitive and cost-effective. Accurate quantification of gene expression levels using RT-qPCR is highly dependent on normalization of the GOI with the most suitable reference genes. Recent developments have shown that more than one reference gene is required for optimal normalization of the non-biological sample-to-sample variation introduced during RT-qPCR. As a result, the number of publications describing systematic evaluation of multiple reference genes in model and non-model plants has increased markedly, for example in Arabidopsis, rice, soybean, banana, citrus and bamboo.
Similar to other plant species, RT-qPCR studies in oil palm have utilized classical housekeeping genes such as ACTIN and glyceraldehyde-3-phosphate dehydrogenase (GAPDH) for normalization. As there is increasing evidence that these genes are not consistently expressed in certain plant species or experimental conditions, other genes have been investigated for their potential as reference genes. This resulted in the application of gibberellin-responsive protein 2 (GRAS), cyclophilin 2 (CYP2) and pre-mRNA splicing factor 7 (SLU7) as reference genes in a study of oil palm leaf discs subjected to various abiotic stresses. These genes also showed the most stable expression across reproductive and vegetative tissues of oil palm. In relation to SE in oil palm, the availability of expressed sequence tags (ESTs) and cDNA microarray expression data has provided candidate reference genes for RT-qPCR. Given the significance of the tissue culture process to the oil palm industry, the expression stability of eight candidate reference genes suggested in preliminary studies by Low and Ooi et al. was investigated in this study across samples collected from various consecutive developmental stages of oil palm tissue culture, with cultured leaf explants sampled at different days, callus and embryoids (EMB). Different tissue culture lines and media treatments were also used. Detailed and systematic analyses were carried out using geNorm, NormFinder and BestKeeper. As a result of this comprehensive evaluation, two novel genes (PD00380 and PD00569) were selected as the most stably expressed genes compared to the classical housekeeping genes. Application of these genes to normalize the expression levels of an ethylene-responsive transcription factor 3-like gene (PD00088) in oil palm is also discussed.
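As an illustration of normalizing a GOI against the geometric mean of two reference genes, a minimal delta-Ct style sketch (the gene names are from the text; the Ct values are hypothetical, and a perfect amplification efficiency of 2 per cycle is assumed):

```python
import math

def relative_expression(ct_goi, ct_refs, efficiency=2.0):
    """Relative quantity of a gene of interest, normalized to the geometric
    mean of the relative quantities of several reference genes."""
    q_goi = efficiency ** (-ct_goi)
    q_refs = [efficiency ** (-ct) for ct in ct_refs]
    norm_factor = math.prod(q_refs) ** (1.0 / len(q_refs))
    return q_goi / norm_factor

# Hypothetical Ct values: PD00088 at Ct 25, references PD00380/PD00569 at Ct 20 and 22.
print(relative_expression(25.0, [20.0, 22.0]))  # 0.0625 (= 2^-4)
```

Because any constant scale factor cancels in between-sample comparisons, only the ratio of quantities matters; using the geometric mean of two stable references damps random error from either single gene.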
Results

Candidate Reference Genes for Oil Palm Tissue Culture

A total of eight candidate genes were evaluated to determine the most stable reference genes across various developmental stages of oil palm tissue culture. These samples, which constituted leaf explant cultures, callus and EMB from consecutive developmental stages, were collected from two different tissue culture lines (MA2, MA8) and media treatments (T527, T694). The selected genes were either novel reference genes or classical housekeeping genes. As shown in Table 1, the novel reference genes selected, with their GenBank accession numbers, are PD00380 (EY397675), PD00569 (EL682210) and pOP-EA01332 (EY406625); they were identified from an oil palm cDNA microarray study across embryogenic callus (EC), non-embryogenic callus (NEC), EMB, shoot from polyembryoids (ST), female inflorescence (INF), kernel at 12 weeks after anthesis (WAA), mesocarp at 15 WAA and roots from six-month-old seedling palms. Microarray data from these samples were filtered for non-differentially expressed genes with expression levels below a 1.5-fold cutoff. Genes with missing data points were removed, and the remaining genes were then ranked according to their standard deviation (SD). The novel reference genes selected for this study were among the top 75 genes with the lowest SD (File S1). Expression levels of these three novel reference genes were evaluated using RT-qPCR across the same samples as the microarray study, with the addition of seven-day tissue culture explants and spear leaf (LEAF). GeNorm analysis showed that all three genes were stably expressed in the tested tissue culture materials and mature tissues (File S2). The other five genes were classical housekeeping genes selected based on a literature review; their GenBank accession numbers are GAPDH (DQ267444), NAD5 (DQ872924), TUBULIN (EL685625), UBIQUITIN (EL689143) and ACTIN (AY550991).
The biological role of the three novel reference genes has not been extensively studied. Classification of these genes together with well-characterized classical reference genes provides a clearer idea of their putative function in the oil palm. Thus, functional annotation of the reference genes was performed using Blast2GO. At level 2, the selected reference genes were assigned to various Gene Ontology (GO) terms associated with the three main ontologies: biological process, cellular component and molecular function. This analysis showed that the candidate genes were spread across different functional classes, except for pOP-EA01332, which was associated with only three GO terms (cellular process, metabolic process, and cellular component organization or biogenesis) under biological process. The majority of the genes are involved in cellular and metabolic processes, located in cell or organelle components, and engaged in binding or catalytic activities (Table 2).

PCR Amplification Efficiencies of Primer Pairs

For each of the candidate reference genes, a standard curve was generated across each tissue culture line with different media treatments (Figure S1). The estimated PCR amplification efficiencies (E) of these genes ranged from 81 to 104% (Table 3). The majority of the candidate reference genes exhibited average amplification efficiencies of more than 86%, with the exception of ACTIN. Furthermore, the observed correlation coefficient (R²) values for most of the genes were greater than 0.99, signifying a strong correlation between the cycle threshold (Ct) values and the amount of cDNA template used in the amplification reactions.

Expression Levels of Candidate Reference Genes

Ct values derived from the amplification curve were used to measure the expression levels of the candidate reference genes. As shown in Figure 1, the mean Ct values across the MA2 and MA8 tissue culture lines were widely distributed between 17 and 30 cycles.
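Amplification efficiency is conventionally derived from the slope of the standard curve (Ct against log10 of template amount); a small sketch (the slope values are hypothetical):

```python
def pcr_efficiency(slope):
    """Percent amplification efficiency from the slope of a standard curve
    (Ct vs log10 template); a slope near -3.32 corresponds to 100%
    (perfect doubling of product each cycle)."""
    return (10.0 ** (-1.0 / slope) - 1.0) * 100.0

print(round(pcr_efficiency(-3.3219), 1))  # 100.0
print(round(pcr_efficiency(-3.8), 1))     # 83.3
```

Shallower (more negative) slopes thus map to efficiencies below 100%, in line with the 81 to 104% range reported in Table 3.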
Abundantly expressed genes across both tissue culture lines were the transcripts coding for NAD5 and UBIQUITIN, with mean Ct values in the range of 17 to 24 cycles. The pOP-EA01332 gene exhibited the lowest expression levels compared to the other candidate reference genes. Preliminary statistical analysis using the coefficient of variation (CV) was performed to determine the most stably expressed gene across this set of samples. The lower CV values (1.90 to 4.10) calculated for the three novel reference genes showed that less variation in expression levels was observed across the tissue culture lines (Table S1). Classical reference genes such as GAPDH, NAD5 and TUBULIN exhibited higher variation in their expression levels, with CV values from 4.76 to 8.91 (Table S1). Therefore, a thorough analysis is required to shortlist the best combination of reference genes for an accurate and reliable normalization of gene expression data.

Selection of Potential Reference Genes for Oil Palm Tissue Culture

Three Excel-based tools, geNorm, NormFinder and BestKeeper, were used to select the most stably expressed reference genes across oil palm tissue culture lines. Expression stability of the candidate reference genes was first ranked using geNorm, and the output was compared to the results from NormFinder and BestKeeper.

GeNorm Analysis

GeNorm was written by Vandesompele et al. as a Visual Basic Application (VBA) for Microsoft Excel. In this program, the gene expression stability measure M is calculated for each reference gene across the same set of samples. The least stable gene has a higher M value compared to the most stable gene. Elimination of the least stable gene was carried out in a stepwise manner until the two most stably expressed genes were obtained. Using this algorithm, the potential reference genes in this study were ranked based on their expression stability (Figure 2).
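The geNorm statistics used here can be sketched as follows. This is a minimal NumPy illustration of the published algorithm, not the VBA tool itself; the array layout (samples x genes of relative expression quantities) and the gene names are assumptions for illustration:

```python
import numpy as np

def genorm_m(expr):
    """geNorm stability measure M for each gene.

    expr: (n_samples, n_genes) array of relative expression quantities.
    M[j] is the mean, over all other genes k, of the standard deviation
    across samples of log2(expr[:, j] / expr[:, k]).
    """
    logs = np.log2(expr)
    n_genes = expr.shape[1]
    m = np.empty(n_genes)
    for j in range(n_genes):
        ratios = logs[:, [j]] - logs            # log2 ratios vs every gene
        sds = ratios.std(axis=0, ddof=1)        # pairwise variation A(j, k)
        m[j] = np.delete(sds, j).mean()         # exclude the self-comparison
    return m

def genorm_rank(expr, names):
    """Stepwise elimination: repeatedly drop the gene with the highest M
    until only the two most stable genes remain."""
    names = list(names)
    removed = []
    while len(names) > 2:
        worst = int(np.argmax(genorm_m(expr)))
        removed.append(names.pop(worst))
        expr = np.delete(expr, worst, axis=1)
    return removed + names                      # least stable gene first

def pairwise_variation(expr_ranked):
    """V(n/n+1) between sequential normalization factors NFn and NFn+1.

    Columns of expr_ranked must be ordered from most to least stable.
    NFn is the per-sample geometric mean of the n most stable genes;
    V(n/n+1) is the SD across samples of log2(NFn / NFn+1).
    """
    logs = np.log2(expr_ranked)
    v = []
    for n in range(2, expr_ranked.shape[1]):
        nf_n = logs[:, :n].mean(axis=1)         # log2 of geometric mean
        nf_n1 = logs[:, :n + 1].mean(axis=1)
        v.append((nf_n - nf_n1).std(ddof=1))
    return np.array(v)                          # v[0] = V(2/3), v[1] = V(3/4), ...
```

A V(n/n+1) below the recommended 0.15 cutoff indicates that adding the (n+1)-th gene does not meaningfully change the normalization factor.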
In the total samples, which consisted of the MA2 and MA8 non-normalized expression datasets, PD00380 and PD00569 were identified as the best-performing reference genes, while TUBULIN was the worst-scoring gene (Figure 2a). When the datasets were analysed separately as two individual tissue culture lines (Figure 2b and 2c), PD00380 and PD00569 still showed the most stable expression. The NAD5 and GAPDH genes showed the least stable expression in the MA2 and MA8 tissue culture lines, respectively. Furthermore, the effect of the media treatments on the ranking of the reference genes was also investigated. Data in Figure S2 showed that PD00380 and PD00569 were still the most stably expressed reference genes across media T527 and T694. Among the classical housekeeping genes, ACTIN exhibited the most stable expression; this gene was frequently ranked after PD00380 and PD00569, both across the different tissue culture lines and across the media treatments. In addition to ranking the genes according to M values, geNorm can also be used to determine the optimal number of reference genes required for accurate and reliable normalization of expression data across the tested samples. Pairwise variation, Vn/n+1, was calculated between the two sequential normalization factors (NFn and NFn+1) of the ranked reference genes. If the value of V is lower than the recommended cutoff of 0.15, the addition of expression data from another reference gene is not required for calculation of a normalization factor based on the geometric mean. As shown in Figure 3, the V2/3 values for each set of data were less than 0.15. Therefore, only two reference genes, PD00380 and PD00569, are needed for accurate normalization of all the datasets in this study.

NormFinder Analysis

NormFinder was also written as a VBA for Microsoft Excel.
This application uses a mathematical model-based approach to estimate expression variation and rank the candidate reference genes. NormFinder analysis also calculated the stability value for two reference genes that could be used in parallel for normalization purposes. The output on gene ranking from NormFinder was similar to geNorm. However, the combination of the two best reference genes was slightly different, which could be due to the different statistical algorithms applied in the two applications. The best combination of two genes for all the samples was pOP-EA01332 and PD00569, with a stability value of 0.065, while PD00380 and ACTIN were identified as the most suitable genes for MA2 and MA8 (Table 4).

BestKeeper Analysis

BestKeeper is an Excel-based spreadsheet software application. Average Ct values were used to calculate the coefficient of variance (CV) and SD for each of the reference genes. Genes with higher variation were classified as least stable, whereas genes with lower variation were more stable. Based on this analysis, PD00569 and pOP-EA01332 were ranked as the most stably expressed genes across all the datasets, with CV ± SD values ranging from 1.66 ± 0.41 to 2.95 ± 0.80 (Table 5). Similarly, GAPDH and TUBULIN were the least stable genes in the total samples and MA8 datasets. Expression levels of both genes were inconsistent across the tissue culture lines, as their SD values were higher than 1. In the MA2 datasets, NAD5 (4.29 ± 0.81) followed by GAPDH (3.91 ± 0.83) exhibited the least stable expression levels. Grouping of the reference genes according to expression stability was consistent with the output generated from geNorm and NormFinder. A distinct expression stability cluster was detected between novel and classical reference genes: the former were always grouped in the most stable cluster, while the latter formed the least stable cluster.
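A simplified sketch of the BestKeeper descriptive statistics, assuming raw Ct values in a samples x genes array. Note that the published BestKeeper tool reports the mean absolute deviation around the mean Ct as its "SD"; plain sample statistics are used here purely for illustration:

```python
import numpy as np

def bestkeeper_cv_sd(ct):
    """Per-gene mean Ct, SD and CV (%) of raw Ct values.

    ct: (n_samples, n_genes). Lower SD/CV indicates a more stable gene.
    """
    mean = ct.mean(axis=0)
    sd = ct.std(axis=0, ddof=1)
    return mean, sd, 100.0 * sd / mean

def bestkeeper_index(ct):
    """BestKeeper index: per-sample geometric mean of all genes' Ct values."""
    return np.exp(np.log(ct).mean(axis=1))

def pearson_r(x, y):
    """Pearson correlation, e.g. between one gene's Ct values and the index."""
    x = np.asarray(x, float) - np.mean(x)
    y = np.asarray(y, float) - np.mean(y)
    return float(x @ y / np.sqrt((x @ x) * (y @ y)))
```

Genes whose Ct values correlate strongly with the index track the overall expression trend across samples, which is the basis of the correlation analysis described next.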
The BestKeeper software also incorporates a pairwise correlation analysis among all possible pairings of the candidate reference genes, and a correlation analysis of the Ct values from each candidate reference gene with the BestKeeper index, the geometric mean of the Ct values generated by all of the candidate reference genes. Results from the pairwise correlation analysis showed PD00380 and PD00569 to be the most significantly correlated genes in the total samples (Table S2), MA2 (Table S3) and MA8 (Table S4) datasets. The recommended gene pair recorded the highest Pearson correlation coefficients (r) of 0.899 to 0.954 at a p-value of 0.001 across all datasets. This output was consistent with the pairwise variation analysis in geNorm. In addition, the BestKeeper indices computed for the total samples (Table S2) and MA8 (Table S4) were tightly correlated with the Ct values contributed by each of the reference genes; the r values were in the range of 0.757 to 0.968, with the majority of p-values computed as 0.001. For the MA2 datasets, Ct values from NAD5 were excluded from the calculation of the BestKeeper index, as this gene exhibited higher variation (r = 0.490) across the tested samples (Table S3). The additional information from this analysis enabled a robust selection of optimal reference genes for normalization of gene expression data.

Validation of Potential Reference Genes

Based on the results from the three independent analyses, PD00380 and PD00569 were selected as the most suitable reference genes for this study. Both genes were used singly or in combination to normalize the raw RT-qPCR data obtained from expression profiling of PD00088 across Week_1 (W1) and Week_3 (W3) leaf explants from the MA2 and MA8 tissue culture lines that were cultured on media T527 and T694.
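The normalization step itself divides the target gene's relative quantities by the geometric mean (normalization factor) of the chosen reference genes, per sample. A minimal sketch, assuming relative expression quantities (not raw Ct values) as input:

```python
import numpy as np

def normalization_factor(ref_qs):
    """Per-sample geometric mean of the reference genes' relative quantities.

    ref_qs: (n_refs, n_samples) array, or a 1-D array for a single gene.
    """
    ref_qs = np.atleast_2d(np.asarray(ref_qs, float))
    return np.exp(np.log(ref_qs).mean(axis=0))

def normalize(target_q, ref_qs):
    """Divide the target gene's relative quantities by the normalization
    factor derived from one or more reference genes."""
    return np.asarray(target_q, float) / normalization_factor(ref_qs)
```

With two reference genes, a sample-specific fluctuation in one gene is dampened by the geometric mean, which is why the combined normalization is less sensitive than either gene alone.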
PD00088 encodes a putative ethylene-responsive transcription factor 3-like protein containing an AP2 DNA-binding domain, which is postulated to be involved in somatic embryogenesis (SE). A previous study using a cDNA microarray platform showed that the PD00088 transcript was more highly expressed in W1 and W3 leaf explants from media T527 than from media T694 across the MA2 and MA8 tissue culture lines (Figure 4). Reproducible expression patterns of PD00088 were observed in RT-qPCR when the expression data were normalized using either one or two reference genes (Figure 5). Slight differences in expression levels were noticed in the data normalized with PD00380 or PD00569 alone. However, the discrepancies were reduced when the two reference genes were used at the same time, as this approach takes the geometric mean of the two genes into consideration for calculation of the normalization factor. Similar outcomes were also obtained for the normalized expression data from callus and embryoids (Figure 5). These results indicate the importance of using more than one reference gene for normalization of RT-qPCR expression data, and show that the selected reference genes are suitable for this study.

Discussion

Research to unravel the complex molecular mechanisms underlying SE in oil palm has become extensive, resulting in an increasing amount of sequence and gene information in public databases. Expression patterns of these SE-related genes have usually been assayed across various developmental stages of oil palm tissue culture as part of the effort to gauge their potential biological function. An in-depth understanding of the role of these genes will greatly assist in the identification of candidate expression markers for enhancement of the SE process. RT-qPCR is one of the techniques commonly used to quantify the relative expression levels of a gene of interest in plants.
However, due to the potential systematic variation introduced during total RNA extraction, first-strand cDNA synthesis and the RT-qPCR assay itself, there is a need to normalize the raw expression data with constantly expressed internal controls for accurate and reliable results. In recent publications, two or more reference genes with validated expression stability have been used for normalization of gene expression levels. Use of a single reference gene for normalization is no longer advisable, as recent findings show that no single reference gene is consistently expressed across all tested plant tissues or experimental conditions. For this study, three novel oil palm reference genes (PD00380, PD00569, pOP-EA01332) and five classical oil palm housekeeping genes (GAPDH, NAD5, TUBULIN, UBIQUITIN, ACTIN) were chosen for evaluation across tissue culture samples. Primer pairs for these genes were carefully designed to ensure amplification of specific PCR products from reverse-transcribed cDNA. The presence of a single amplicon peak in the melting curve analysis (Figure S3) further confirmed the amplification of a specific PCR product. Across all the available datasets (total samples, MA2 and MA8 tissue culture lines, different media treatments), PD00380 and PD00569 were ranked as the top two most stably expressed genes by geNorm (Figure 2, Figure S2). Results from the Blast2GO analysis showed that the two genes fall into different functional classes (Table 2), which reduced the chance of geNorm selecting co-regulated genes. Utilization of genes from different functional classes was also taken into consideration by Artico et al. and Brunner et al. in their searches for reference genes with stable expression across various tissues of cotton and poplar, respectively. Output from the NormFinder and BestKeeper analyses showed that PD00380 and PD00569 were still ranked among the top three genes with the most stable expression across oil palm tissue culture.
The ranking positions of these genes differed slightly from geNorm. Slight discrepancies in gene ranking among the three software packages are common due to differences in their statistical algorithms, and such observations have been reported recently in bamboo, eggplant and tung tree. The optimal number of reference genes varies in different experimental systems. In our case, geNorm analysis recommended the geometric mean of two reference genes for calculation of the normalization factor (Figure 3). As the expression patterns from the MA2 and MA8 tissue culture lines will be compared to each other, PD00380 and PD00569, with stable expression across all the datasets, are the best choice of reference genes.

[Table 5. Ranking of oil palm candidate reference genes according to coefficient of variance (CV) and standard deviation (SD) using BestKeeper analysis; columns give the gene abbreviation and CV ± SD for the total samples (MA2+MA8), MA2 and MA8 datasets.]

The initial study by Vandesompele et al. across 13 different human tissues suggested a minimal usage of the three most stable reference genes. However, as more research has been carried out in plants, the decision on whether to use two or more reference genes has been based on considerations of practicality and research purposes. With this in mind, the normalization factor from two reference genes has been used for several plants, such as Platycladus orientalis, cotton and Chinese wolfberry. Overall, the results showed that the expression stability of the novel reference genes outperformed the classical housekeeping genes. Similar observations were reported previously for Brassica juncea and soybean. Since the novel genes were identified from the analysis of cDNA microarray data generated from tissue culture materials, such findings are not surprising.
The advantages of mining candidate reference genes from publicly available microarray experiments have also been demonstrated for model plants such as Arabidopsis and rice. As for the classical housekeeping genes, their expression stability is dependent on the plant species. The genes that most frequently appeared to be least stable across our tested samples were GAPDH and TUBULIN. Good performance of GAPDH as a reference gene has been shown for citrus, Chinese wolfberry and cotton under different stress conditions; however, its expression was unstable across papaya fruit samples. Although TUBULIN is poorly ranked in bamboo and peanut, it was deemed one of the most appropriate reference genes in banana and across various developmental stages of somatic embryos in the conifer species Pinus pinaster and Picea abies. Another classical housekeeping gene, UBIQUITIN, which performed poorly in our study, was selected as one of the stably expressed genes in longan tree embryogenic cultures. These outcomes again emphasize the importance of evaluating the expression stability of classical housekeeping genes before selecting them as internal controls for RT-qPCR. The oil palm SE process in this study was initiated from leaf explants, which dedifferentiated into callus and subsequently somatic embryos. The presence of tissue culture samples from undifferentiated and differentiated phases across two auxin concentrations made the selection of reference genes quite challenging, as distinct groups of genes will be expressed. In order to increase the chances of selecting the most suitable reference genes, classical housekeeping genes were evaluated in parallel with the novel reference genes. This approach also proved useful for longan SE, where it resulted in the recommendation of UBIQUITIN and iron superoxide dismutase (FeSOD) as the best combination of reference genes for the longan system.
However, another class of SOD, designated manganese superoxide dismutase (MnSOD), was classified as the least stable gene in longan SE, in contrast to oil palm. Studies in Nicotiana plumbaginifolia revealed that the expression of MnSOD was induced in plant cells during conditions of metabolic stress in tissue culture. Hormones and stress are essential for the induction of SE through the tissue culture process. The hormone ethylene was found to be important for SE in Medicago truncatula and Hevea brasiliensis. In both plants, ethylene-responsive transcription factors from the AP2/ERF superfamily were responsible for promoting the SE process through the regulation of ethylene-responsive genes. Expression of MtSERF1 from M. truncatula was detected in embryogenic calli and globular somatic embryos, while several AP2/ERF genes from Hevea were highly expressed in calli from the embryogenic line. Oil palm PD00088 (a putative ethylene-responsive transcription factor 3-like gene) belongs to this superfamily. Due to its potentially substantive role in SE, this gene was selected to validate the applicability of PD00380 and PD00569 as reference genes. Results from RT-qPCR (Figure 5) are highly consistent with the cDNA microarray data (Figure 4). The higher accumulation of this transcript in the somatic embryo is in agreement with the finding from M. truncatula. This study showed that the geometric mean of two reference genes provides better normalized expression levels, as it is less sensitive to subtle changes in a single gene.

[Figure 5. Expression profiling of PD00088 across oil palm leaf explants, callus and embryoids using RT-qPCR. Expression levels of PD00088 in leaf explants (W1, W3), callus and embryoids were normalized with PD00380, PD00569 or a combination of both reference genes. Standard deviations of the normalized expression levels were calculated using geNorm v3.4; error bars represent ± SD. doi:10.1371/journal.pone.0099774.g005]
Thus, normalization with two reference genes is highly recommended.

Conclusions

A systematic selection of the most stably expressed and best combination of reference genes for RT-qPCR was established for oil palm tissue culture samples. Based on the analysis with three different statistical algorithms (geNorm, NormFinder, BestKeeper), PD00380 and PD00569 were selected as the most appropriate reference genes for accurate and reliable normalization of gene expression data from RT-qPCR of oil palm tissue culture samples. These genes outperformed the classical housekeeping genes, and the geometric mean of the two reference genes was sufficient to normalize the variations introduced in this study. The primer sequences of the eight candidate reference genes presented here will be valuable to the oil palm research community working on expression profiling across other tissue culture samples or other oil palm tissues. By following the described method, identification of the most stably expressed reference genes for different sets of experiments can be done quickly. This will facilitate the functional characterization of genes associated with SE, yield, and biotic and abiotic stresses.

Plant Materials

Tissue culture samples in this study were obtained from the EBOR Tissue Culture Laboratory (now Sime Darby Berhad, Malaysia). Leaf explants from two different palm trees of the tenera fruit type [Deli dura x URT (Ulu Remis Tenera) pisifera] were used to generate two tissue culture lines, designated MA2 and MA8. For each tissue culture line, the same source of leaf explant was placed on two different media treatments, T527 and T694. Tissue culture media T527, which consisted of 50 mg/l naphthalene acetic acid (NAA) and 0.5 g/l activated charcoal in Murashige & Skoog (MS) basal culture medium, successfully produced embryogenic callus (EC) followed by embryoids (EMB).
Tissue culture media T694, which contained 5 mg/l NAA and 100 mg/l arginine in the MS basal culture medium, produced non-embryogenic callus (NEC). Leaf explants were collected before the start of the tissue culture process (Day_0) and after Week_1 (W1), Week_2 (W2), Week_3 (W3), Week_4 (W4) and Week_8 (W8) on culture media, followed by sampling of the callus and EMB. A total of 14 and 12 tissue culture samples were collected from MA2 and MA8, respectively. For the MA2 tissue culture line, the 14 samples comprised 1 Day_0 leaf explant, 5 W1 to W8 leaf explants from media T527, 2 different stages of callus from media T527, 1 EMB from media T527 and 5 W1 to W8 leaf explants from media T694. For the MA8 tissue culture line, the 12 samples comprised 1 Day_0 leaf explant, 5 W1 to W8 leaf explants from media T527, 1 callus from media T527 and 5 W1 to W8 leaf explants from media T694. All samples were stored at -80°C prior to RNA extraction.

Total RNA Extraction, Purification and Quality Assessment

Total RNA was extracted from the various stages of tissue culture samples using the NTES (NaCl-Tris-EDTA-SDS) method with some minor modifications. Total RNA was purified from DNA contamination using the RNeasy Mini Kit and RNase-free DNase I according to the manufacturer's instructions (Qiagen USA, Valencia, CA). The purity and quantity of the purified total RNA were determined using a NanoDrop ND-1000 UV-Vis Spectrophotometer (Thermo Fisher Scientific Inc.), and the integrity was assessed by electrophoretic fractionation on an Agilent 2100 Bioanalyzer with an RNA 6000 Nano LabChip (Agilent Technologies, CA).

Primer Design

Eight potential reference genes were selected for evaluation across various developmental stages of oil palm tissue culture. Three novel reference genes, PD00380, PD00569 and pOP-EA01332, were identified from an oil palm cDNA microarray study across EC, NEC, EMB, ST, INF, kernel at 12 WAA, mesocarp at 15 WAA and roots from six-month-old seedling palms.
The other five genes were the classical housekeeping genes GAPDH, NAD5, TUBULIN, UBIQUITIN and ACTIN, which were selected based on a literature review. Blast2GO analysis was performed on these genes for assignment of functional classifications based on GO terms. Gene-specific primers were designed to be located on different exons or to span exon-exon junctions of the cDNA to avoid co-amplification of the genes from genomic DNA. Oil palm genomic sequences associated with the housekeeping genes were retrieved from MPOB's in-house genomics sequence database. Alignment between the cDNAs and genomic sequences was performed using the Spidey program from NCBI to determine the putative exon-exon junctions of these genes. This information was then used to design gene-specific primers flanking the regions of interest using the Primer3 software. The input parameters for primer design were as follows: primer length (20 to 27 bases), primer GC content (40 to 60%), primer Tm (60 to 67°C) and amplicon length (100 to 150 bp). A BLASTN search against the GenBank database was performed to confirm the specificity of each designed primer. The HPLC-purified primers were purchased from Bio Basic Canada Inc.

Reverse Transcription Quantitative Real-time PCR

Synthesis of first-strand cDNA from 2 µg of total RNA from the MA2 and MA8 samples was carried out using the High-Capacity cDNA Reverse-Transcription Kit according to the manufacturer's instructions (Applied Biosystems). The first-strand cDNAs were used as templates in SYBR Green-based RT-qPCR using the Eppendorf Mastercycler ep realplex (Eppendorf, Germany). The 20 µl PCR reaction comprised 4 µl cDNA template, 0.2 µM reverse primer, 0.2 µM forward primer and 1x KAPA SYBR FAST Universal 2X qPCR Master Mix (KAPA Biosystems).
PCR was performed as follows: 95°C for 3 min (1 cycle); then 40 cycles of 95°C for 3 sec and 60°C or 63°C (depending on the annealing temperature of the primer pair) for 20 sec; followed by a melting curve analysis from 60°C to 95°C with a 0.4°C increase in temperature at each step. For each total RNA sample, a no-reverse-transcriptase control (NRT) was included to determine whether the sample was free from genomic DNA contamination. In addition, a no-template control (NTC) was also included as a negative control for each primer pair. PCR amplification efficiencies and R² values of the primers were determined across each pool of cDNA from MA2T527 (Day_0 leaf explants, W1 to W8 leaf explants, callus and EMB), MA2T694 (Day_0 leaf explants and W1 to W8 leaf explants), MA8T527 (Day_0 leaf explants, W1 to W8 leaf explants and callus) and MA8T694 (Day_0 leaf explants and W1 to W8 leaf explants). Ct values were measured across five different concentrations of pooled cDNA (1, 2, 4, 8 and 16 ng), and the PCR amplification efficiencies were determined from the standard curves generated by plotting the mean Ct values versus log10 cDNA concentration, using the following calculation: PCR amplification efficiency, Ex = [10^(-1/slope) - 1] x 100%, where slope is the slope of the linear regression. This was followed by RT-qPCR with these primers across the individual cDNA samples (10 ng) from MA2T527, MA2T694, MA8T527 and MA8T694.

Data Analysis

The Ct values for each sample were retrieved using Realplex software version 2.2 (Eppendorf, Germany). Data analysis was carried out in Microsoft Excel. Average Ct values from three replicates were calculated and transformed into relative expression quantities using the delta Ct method, Q = E^(minCt - sampleCt), where E is the amplification efficiency and minCt is the lowest Ct across the tested samples. The most stable reference genes across the tissue culture samples were selected based on geNorm v3.4, NormFinder v0.953 and the BestKeeper software.
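The two transformations described here, efficiency from the standard-curve slope and the delta-Ct relative quantity, can be sketched as follows; this is an illustrative NumPy version, not the Excel workflow used by the authors:

```python
import numpy as np

def pcr_efficiency(log10_conc, mean_ct):
    """Amplification efficiency (%) from a standard curve:
    Ex = (10**(-1/slope) - 1) * 100, where the slope comes from a linear
    fit of mean Ct versus log10(cDNA concentration)."""
    slope, _intercept = np.polyfit(log10_conc, mean_ct, 1)
    return (10.0 ** (-1.0 / slope) - 1.0) * 100.0

def relative_quantity(ct, efficiency_pct):
    """Delta-Ct transform Q = E**(minCt - Ct); the most abundant sample
    (lowest Ct) gets Q = 1."""
    ct = np.asarray(ct, float)
    e = 1.0 + efficiency_pct / 100.0        # e.g. 100% efficiency -> E = 2
    return e ** (ct.min() - ct)
```

A perfectly efficient reaction doubles the product every cycle, giving a slope of about -3.32 and Ex = 100%.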
The input data for geNorm and NormFinder are the relative expression quantities, while the BestKeeper analysis is based on the average raw Ct values.

Supporting Information

File S1. List of the top 75 gene clones identified from analysis of cDNA microarray datasets from tissue culture materials and mature tissues. (XLS)

File S2. Determination of the most stably expressed genes across tissue culture materials and mature tissues using the geNorm software. Expression levels for each reference gene were measured across tissue culture materials (NEC, EC, EMB, ST, seven-day tissue culture explants) and mature tissues (LEAF, mesocarp, kernel, root and INF). Average expression stability values (M) were calculated for each reference gene. The least stable genes with higher M values were excluded in a stepwise manner until the most stable reference genes were shortlisted. (DOC)

Figure S1. Determination of PCR amplification efficiencies (Ex) and correlation coefficient (R²) values for PD00569 using the slope of the standard curve. The estimated Ex for PD00569 ranged from 88 to 104% and the R² values ranged from 0.9926 to 0.9989. (DOC)

Figure S2. Determination of the most stably expressed reference genes across media treatments T527 and T694 using the geNorm software. Average expression stability values (M) were calculated for each reference gene. The least stable genes with higher M values were excluded in a stepwise manner until the two most stable reference genes were obtained for the tested tissue culture media. (DOC)

Figure S3. Melting curve generated for PD00569 across tissue culture samples collected from the MA2 and MA8 tissue culture lines. The presence of a single amplicon peak indicated the amplification of a specific PCR product. (DOC)

Table S1. Preliminary statistical analysis of oil palm candidate reference genes using the coefficient of variation (CV).
The invention relates generally to waste treatment processes and more specifically to waste treatment processes for non-hazardous, non-septic liquid waste streams.
Septic or municipal sludge waste streams are typically dried using thermal treatment processes that include mechanically pressing the liquid out of the sludge and further drying the solid using an indirect drying process or a direct drying process. The two-step process can be unnecessarily costly.
For the second part of the process, an indirect drying process may include using drum dryers or rotary dryers to remove liquid from the sludge. Sludge treatment processes also use biological digesters to digest some of the sludge. The digesters anaerobically generate biogas, such as methane, that may be used to power the indirect rotary dryer. However, where waste streams are not septic, anaerobically generated biogas may not be available. Also, the additional cost of biological digesters to produce anaerobically generated biogas can become cost prohibitive in powering the dryer.
Direct drying processes typically use hot gas that is brought into contact and mixed with the sludge in a chamber to dry the sludge directly. However, such processes may generate odorous gases that need to be subsequently treated to remove particulate matter and to maintain an odorless treatment process. Also, such processes can be costly due to the power required to generate the heated drying gas.
Processes for non-septic, non-hazardous waste streams typically do not use biological digesters due to the chemical makeup of the waste stream. As a general matter, waste treatment processes for hazardous liquid waste and septic waste differ from treatment processes for non-hazardous liquid waste due to the nature of the waste streams.
Non-hazardous, non-septic liquid waste streams may include kitchen grease and detergent-laden (i.e., soapy) run-off from car washes, as well as resinous, low-solid and high-solid feeds. In addition, many landfill-based waste treatment facilities do not traditionally process high volumes of non-hazardous, non-septic liquid waste, since most liquid waste is sent to publicly owned treatment works (POTW) for treatment due to the cost of treating such waste streams. Generally, the cost of the energy source necessary to evaporate significant portions of the liquid from a liquid waste stream can be cost prohibitive.
Also, conventional methods for processing non-hazardous, non-septic waste streams or concentrated waste feed, such as waste streams incorporating filler, use mechanical presses to remove some of the liquid. The resultant effluent must typically be sent to POTWs for disposal, thereby adding cost. The solid by-product is stored in landfills. However, the amount of liquid typically remaining in the solid by-product still causes the volume of the solid by-product to be excessive. This strains landfill sites by causing them to be filled earlier than necessary. Also, waste feeds containing soapy content may foam excessively during such a process, thereby requiring additional chemical treatment that may unnecessarily increase cost.
Typically, higher-solid or more concentrated waste feeds are more economical to process, since less liquid removal is required. However, as the feed becomes more concentrated, processing can be more difficult, since the concentrated waste feed may more easily clog feed lines, mechanical presses and other equipment.
Can the alternate healthy eating index (AHEI) score predict health outcomes for Cuban Americans with and without type 2 diabetes? Adherence to a dietary pattern represented by the AHEI has been inversely correlated with CVD incidence in healthy populations. The population with type 2 diabetes (T2D) is at particularly high risk for CVD. Determining a dietary pattern successful in decreasing CVD risk in the diabetic population would enable more specific dietary recommendations. This study examined the relationship between AHEI score and 10-year coronary heart disease (CHD) risk in Cuban Americans (n=367) with and without T2D. Subjects were randomly recruited from a mailing list in Miami-Dade and Broward Counties, FL. AHEI score was calculated from a self-reported FFQ; CHD risk was determined using ATP III's 10-year risk calculator. AHEI score was evaluated against the presence of CHD risk. Analyses included descriptive statistics, correlations, and linear regressions controlling for BMI, WC and physical activity. Mean AHEI scores for diabetics and non-diabetics were 33.00±10.77 and 33.95±11.26, respectively. There was a significant inverse correlation (r=.14, p=.008) between AHEI and CVD risk. However, linear regression models found AHEI to be a significant predictor of 10-year CVD risk only among diabetics (p=.001). Diabetics with higher AHEI scores had lower 10-year CVD risk scores; therefore, dietary guidelines for individuals with T2D that include patterns representing a high AHEI score are warranted.
from sacred.observers import MongoObserver

# `get_mongodb_config` is assumed to be a project-local helper (defined
# elsewhere in this codebase) that returns a dict with the keys 'db_name',
# 'username', 'password', 'port' and 'host'.


def create_mongodb_observer(collection,
                            mongodb_config=None,
                            overwrite=None):
    """Create a Sacred MongoObserver that stores runs in `collection`."""
    if mongodb_config is None:
        mongodb_config = get_mongodb_config()
    db_name = mongodb_config['db_name']
    db_username = mongodb_config['username']
    db_password = mongodb_config['password']
    db_port = mongodb_config['port']
    db_host = mongodb_config['host']
    # Connect with SCRAM-SHA-1 authentication and log runs to the given
    # database/collection.
    observer = MongoObserver.create(
        url=f'mongodb://{db_username}:{db_password}@{db_host}:{db_port}/{db_name}?authMechanism=SCRAM-SHA-1',
        db_name=db_name,
        collection=collection,
        overwrite=overwrite)
    return observer
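For reference, the connection string assembled inline above follows the standard MongoDB URI scheme. A small helper showing the same composition (`mongo_url` is hypothetical and not part of the original module):

```python
def mongo_url(cfg):
    """Compose the MongoDB connection URL used by the observer above.

    cfg: dict with the keys 'username', 'password', 'host', 'port'
    and 'db_name' (the shape get_mongodb_config is assumed to return).
    """
    return (f"mongodb://{cfg['username']}:{cfg['password']}"
            f"@{cfg['host']}:{cfg['port']}/{cfg['db_name']}"
            f"?authMechanism=SCRAM-SHA-1")
```

Keeping the URL composition in one place makes it easy to unit-test the string format without a running MongoDB instance.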
import {
Component, Input, forwardRef, ContentChild, OnInit, OnChanges, Renderer2, ElementRef, ViewChild
} from '@angular/core';
import { DefaultControlValueAccessor } from './../../common/default-control-value-accessor';
import { NG_VALUE_ACCESSOR, FormControl } from '@angular/forms';
import { AsiComponentTemplateOptionDef, AsiComponentTemplateSelectedDef } from './../../common/asi-component-template';
import { debounceTime, switchMap, tap } from 'rxjs/operators';
import * as nh from '../../../native-helper';
import { AsiDropDown } from '../../asi-dropdown/asi-dropdown.component';
/**
* asi-autocomplete component
*/
@Component({
selector: 'asi-autocomplete',
templateUrl: 'asi-autocomplete.component.html',
host: { 'class': 'asi-component asi-autocomplete' },
providers: [
{
provide: NG_VALUE_ACCESSOR,
useExisting: forwardRef(() => AsiAutoCompleteComponent),
multi: true
}
]
})
export class AsiAutoCompleteComponent extends DefaultControlValueAccessor implements OnInit, OnChanges {
/** html id */
@Input() id: string;
/** html name */
@Input() name: string;
/** Label to display (is translated) */
@Input() label: string;
/** Label position */
@Input() labelPosition: 'top' | 'left' | 'right' | 'bottom' | 'bottom-center' | 'top-center' = 'top';
/** Delay between the moment you stop typing and onRequestData is called */
@Input() delay = 500;
/** A placeholder if needed */
@Input() placeholder = '';
/** Function called to request new data (can return Observable/Promise/Object): Throw error if null */
@Input() onRequestData: Function;
@ContentChild(AsiComponentTemplateOptionDef, {static: false}) optionDef: AsiComponentTemplateOptionDef;
@ContentChild(AsiComponentTemplateSelectedDef, {static: false}) selectedDef: AsiComponentTemplateSelectedDef;
@ViewChild('dropDown', {static: false}) asiDropDown: AsiDropDown;
autoCompleteControl = new FormControl();
open = false;
data: Array<any>;
// Var used to manage component initialization
firstRequestDone: boolean = null;
init = false;
private currentValue: any = null;
constructor(private renderer: Renderer2, private elementRef: ElementRef) {
super();
}
private checkInput() {
if (null == this.onRequestData) {
throw new Error('AsiAutoCompleteComponent : @Input \'onRequestData\' is required');
}
}
ngOnInit() {
this.checkInput();
this.renderer.addClass(this.elementRef.nativeElement, 'label-' + this.labelPosition);
this.autoCompleteControl.valueChanges.pipe(debounceTime(this.delay),
tap(value => this.currentValue = value),
switchMap((value) => nh.observe(this.onRequestData(value, !this.firstRequestDone))))
.subscribe((data: any) => {
this.data = data;
if (this.firstRequestDone && data && data.length > 0) {
this.open = true;
}
this.firstRequestDone = true;
});
}
onDropdownClose() {
this.open = false;
}
ngOnChanges() {
if (this.init) {
this.open = true;
} else {
if (this.firstRequestDone) {
this.init = true;
}
}
}
selectValue(data: any) {
this.value = data;
this.open = false;
}
clearValue() {
this.value = null;
this.autoCompleteControl.setValue(this.currentValue, { emitEvent: false });
setTimeout(() => { this.open = true });
}
writeValue(value: any) {
this._value = value;
if (this.init === false) {
this.autoCompleteControl.setValue(this.currentValue);
} else {
this.currentValue = value;
if (this.value == null) {
this.autoCompleteControl.setValue(this.currentValue, { emitEvent: false });
}
}
}
}
|
OBJECTIVE To investigate the risk factors for type 1 diabetes among Uygur children in Xinjiang, China, in order to provide a basis for the prevention of this disease among Uygur children in Xinjiang. METHODS The clinical data of 94 Uygur children with type 1 diabetes (case group) and 96 Uygur children without diabetes (control group) between January 2003 and December 2013 were retrospectively analyzed. The risk factors for type 1 diabetes among Uygur children in Xinjiang were explored using univariate and multivariate analyses. RESULTS According to the results of the univariate analysis, there were significant differences in age, prodromal infection, residence, feeding method, time of introduction of starchy foods, time of introduction of high-fat foods, family history, islet-cell antibodies (ICA), insulin autoantibodies (IAA), and glutamic acid decarboxylase antibodies between the case and control groups (P<0.05). According to the results of the multivariate logistic analysis, older age, early intake of starchy foods, early intake of high-fat foods, prodromal infection, positive ICA, and positive IAA were major risk factors for type 1 diabetes, while breastfeeding was a protective factor. CONCLUSIONS Type 1 diabetes among Uygur children in Xinjiang is caused by multiple factors. Prevention and reduction of prodromal infection, a reasonable diet, and promotion of breastfeeding can reduce the risk of disease. |
import React from "react"
import { Badge } from "../badge"
import type { BadgeProps } from "../badge"
const SailfishOS = (props: BadgeProps) => <Badge name="Sailfish OS" {...props} backgroundColor="#053766" />
export default SailfishOS
|
Association of blood lead and mercury with estimated GFR in herbalists after the ban of herbs containing aristolochic acids in Taiwan Objective This study was undertaken to explore the association of estimated glomerular filtration rate (GFR) with exposure to aristolochic acids (ALAs) and nephrotoxic metals in herbalists after the ban of herbs containing ALAs in Taiwan. Methods This cross-sectional study recruited a total of 138 herbalists without end-stage renal disease or urothelial carcinoma from the Occupational Union of Chinese Herbalists in Taiwan in 2007. Aristolochic acid I (ALA-I) was measured by ultra-high-pressure liquid chromatography/tandem mass spectrometry (UHPLC-MS/MS), and heavy metals in blood samples were analysed by Agilent 7500C inductively coupled plasma-mass spectrometry. Renal function was assessed by using the simplified Modification of Diet in Renal Disease Study equation to estimate GFR. Results Blood lead was higher in herbal dispensing procedures (p=0.053) and in subjects who self-prescribe herbal medicine (p=0.057); mercury was also higher in subjects living in the workplace (p=0.03). Lower estimated GFR was significantly associated with lead (β=−10.66, 95% CI −18.7 to −2.6) and mercury (β=−12.52, 95% CI −24.3 to −0.8), with a significant interaction (p=0.01) between mercury and lead; however, estimated GFR was not significantly associated with the high ALA-I level groups, arsenic, or cadmium after adjusting for other confounding factors. Conclusions We found that lower estimated GFR was associated with blood lead and mercury in herbalists after the ban of herbs containing ALAs in Taiwan. ALA-I exposure did not show a significant negative association with estimated GFR, which might be due to herbalists having learned to distinguish ALA-containing herbs after the ban. Rigorous monitoring is still needed to protect herbalists and the general population who take herbs. |
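The simplified (4-variable) MDRD equation used above to estimate GFR has a well-known general form; the sketch below uses the commonly cited textbook coefficients, which may differ slightly from the exact calibration used in this study:

```python
def mdrd_egfr(scr_mg_dl, age, female, black):
    """Simplified 4-variable MDRD estimate of GFR (mL/min/1.73 m^2).

    Coefficients (186, -1.154, -0.203, 0.742, 1.212) are the commonly
    published values, assumed here rather than taken from this paper.
    """
    egfr = 186.0 * (scr_mg_dl ** -1.154) * (age ** -0.203)
    if female:
        egfr *= 0.742   # correction factor for female subjects
    if black:
        egfr *= 1.212   # correction factor for Black subjects
    return egfr
```

The negative exponents encode the key relationships: higher serum creatinine or older age yields a lower estimated GFR.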
Passive chipless wireless pressure sensor based on dielectric resonators This paper presents a novel approach in passive chipless wireless pressure sensing for tool monitoring in smart factories. The working principle is based on the pressure dependent detuning of a dielectric resonator resulting in a notch in the backscattered spectrum at its resonance frequency. The read-out of the sensor is performed by a reader device that transmits a broadband signal and makes use of time gating to remove clutter from the machine or other environmental obstacles. The dielectric resonator approach is chosen since it allows for these long impulse response times and simultaneously enables high pressure sensitivity and flexibility for the measurement range. Furthermore, the proposed sensor concept is applicable in harsh environments in terms of strong vibrations and high temperatures. The functionality of the design is confirmed by simulation results and experimentally validated. |
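Time gating, as used by the reader device above, amounts to zeroing the received time-domain samples outside a chosen window so that early clutter reflections are discarded while the dielectric resonator's long ring-down is kept. A minimal sketch with hypothetical sample indices (not the authors' implementation):

```python
def time_gate(samples, start, stop):
    """Zero everything outside [start, stop) to suppress early clutter
    reflections while keeping the resonator's slow ring-down."""
    return [s if start <= i < stop else 0.0
            for i, s in enumerate(samples)]

# Toy impulse response: strong clutter arrives early; the high-Q
# resonator response rings on long after it.
signal = [5.0, 4.0, 0.9, 0.8, 0.7, 0.6]
gated = time_gate(signal, 2, 6)   # keep only the late ring-down
```

This is exactly why the paper favours a dielectric resonator: its long impulse response survives the gate while reflections from the machine environment do not.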
// AddTxTransfer is the RPC method to add a transfer transaction.
func (s *ChainRPCService) AddTxTransfer(req *AddTxTransferReq, resp *AddTxResp) (err error) {
if req.Tx == nil {
return ErrUnknownTransactionType
}
s.chain.pendingTxs <- req.Tx
return
} |
Design counsel: the role of clinicians in the prototyping and standard setting of anaesthetic equipment The celebrated Stonehenge scene in Rob Reiner's classic 1984 spoof rock band documentary is an apt portrayal of what happens when an end-user's expectation differs from the manufacturer's delivery of a standard (and those uninitiated should head immediately to their nearest streaming site for illumination). In this issue of Anaesthesia, Thomas et al. assess the performance of the adjustable pressure limiting (APL) valves of two widely used anaesthesia workstations from different manufacturers. In short, the researchers found that the APL valves were not precise (in terms of circuit pressure generated versus dialled-in pressure), and that the valves performed very differently from each other. Their conclusion is especially critical of one of the APL valves' performance characteristics: it increased system pressure more-or-less linearly, but only after it had been turned approximately 60 degrees; before that, it generated no increase in pressure. The other valve generated a pressure that increased more-or-less linearly beginning from zero degrees of turn. The response from the manufacturer of the criticised valve, as well as being critical of the study methodology, is adamant that the valve performs within the relevant international standard. So who is right? Perhaps inevitably, both parties are right. The characteristics of APL valves (and indeed all parts of the anaesthesia workstation) are contained in the 117-page standard ISO 80601-2-13:2011, Particular requirements for basic safety and essential performance of an anaesthetic workstation. The exact wording is as follows: |
BERKELEY — Saying it wants to increase government transparency, accountability and civic engagement, the city has announced the launch of a one-stop Open Data portal for disseminating information to the public.
An Open Data Portal pilot project page on the city website, at www.cityofberkeley.info/opendata, includes links to a tutorial page with videos, an open data handbook, frequently asked questions and a feedback form.
Berkeley joins a handful of other Bay Area locales with open-data websites, including Oakland, Alameda County and San Francisco, as well as several state agencies, the White House and the cities of Boston and New York, according to the news release.
Berkeley’s open-data effort was spearheaded by its Department of Information Technology and involved staff members from all city departments as well as student volunteers from UC Berkeley and the Presidio Graduate School of Management in San Francisco.
Berkeley began its open-data effort in August 2013 by researching and testing different platforms, according to a staff report. |
Cameos, game shows, and interviews.
Stay up to date with the latest news from Soaps.com about The Bold and the Beautiful, Days of our Lives, General Hospital, and The Young and the Restless.
Catch the trailer for Garlic and Gunpowder, a mob comedy that features Katherine Kelly Lang (Brooke) in a fun cameo. Get the newest update on Italy’s Dancing With the Stars as Don Diamont (Bill) moved on to the second round of the popular competition. Then get a reminder of the action from last week in Soaps.com’s The Bold and the Beautiful news room where you can catch video highlights of Lt. Baker pondering Sally’s guilt, Thomas asking Sally to take off with him, and the suspect list in Bill’s shooting getting longer by the day. And don’t miss Candace discussing whether or not Caroline could kill, and who the most likely shooter was in the B&B blog.
If you’ve been enjoying Tyler Christopher as Stefan O., you can soon see more of him in a rather different context. The actor joins former B&B star Texas Battle (Marcus) in the sci-fi film F.R.E.D.I. Find out who he will be playing in our Days of our Lives news room. While you’re there, make sure that you grab a sneak peek at what’s ahead for the week with the newest spoiler video and then catch all the highlights from Bryan Dattilo’s (Lucas) latest interview as he reflects on his history with the soap and his most recent storylines. In related news, it looks like someone from Lucas’ past is heading back to Salem as Alison Sweeney returns to Days as Sami. And Christine is back and weeping for Steve, unconvinced by Maggie and Victor’s reunion, and has managed to find some pity for Abigail in spite of the unintended LOLs in the Days blog.
She plays a fashion publisher on GH, but Michelle Stafford (Nina) already has experience in the beauty industry thanks to her cosmetics line and modeling work. The actress will be getting even more as she hosts beauty segments for ExtraTV. Visit our General Hospital news room for all the details as well as highlights of the latest action in Port Charles as Sam gave Liz a warning, Franco got disoriented, and Finn was arrested for assaulting Julian. And Dustin gets excited about Finn’s surprise brother, and a potential friendship for Drew and Franco, but worries about what’s going on between Bensch and Kiki in the latest GH blog.
Daniel Goddard (Cane), Mishael Morgan (Hilary), and Melissa Ordway (Abby) will all be popping up on The Price is Right later this month. Find out when in our The Young and the Restless news room where you can also take a trip into Ashley Abbott’s complicated past with the Newman family and watch original cast flashbacks and learn about the big returns planned for Y&R’s 45th anniversary. Finally, Candace weighs in on the latest surreal plotting, Devon’s sudden change-of-heart, and Nikki wrapping around Arturo in her latest blog. |
The historic Marble Collegiate Church on Fifth Avenue in Manhattan, New York, where leading 2016 GOP presidential candidate Donald Trump told reporters Thursday that he is a member, said Friday that the billionaire real estate mogul is not an active member.
"I am Presbyterian Protestant. I go to Marble Collegiate Church," Trump reportedly told reporters in Greenville, South Carolina, on Thursday, according to CNN.
Marble Collegiate Church is not a Presbyterian denomination, however, but part of the Reformed Church in America.
In a statement released to that network, however, the church, which dates back to 1628, acknowledged a strong connection with Trump's family but said he was not an active worshipper.
"Donald Trump has had a longstanding history with Marble Collegiate Church, where his parents were for years active members and one of his children was baptized. However, as he indicates, he is a Presbyterian, and is not an active member of Marble," the statement said.
Marble Collegiate church according to its website is the oldest place of worship of the Collegiate Reformed Protestant Dutch Church of the City of New York and the oldest Protestant organization in North America with continuous service. It was organized in 1628 under the Dutch West India Company.
"We are a community of people on a shared journey of life and Faith. We are committed to inclusivity, to being a place of welcome, safety, love, and respect for all persons regardless of age, station, economics, color, sexual orientation, or any of the categories the world constructs to segregate or alienate people from people," states the church in a welcome message on the website. "In this place, we celebrate that we are all children of the same God and thus are all sisters and brothers to each other."
According to CNN, Trump expressed admiration for Dr. Norman Vincent Peale, author of the self-help book "The Power of Positive Thinking," who also served as pastor at Marble Collegiate Church for 52 years, from 1932 until 1984. Peale died in December 1993.
"Dr. Norman Vincent Peale, The Power of Positive Thinking was my pastor," Trump said Tuesday. "To this day one of the great speakers I've seen. You hated to leave church. You hated when the sermon was over. That's how great he was at Marble Collegiate Church."
Trump, who is a favorite among white evangelical voters, also said on Thursday that "at some point I'm going to be meeting with ministers and pastors."
Next month, according to the Wall Street Journal, televangelist Paula White, with whom the real estate mogul has a longstanding relationship, will host a meeting with Trump and other Christian leaders in New York City.
The meeting, set for Sept. 28 at Trump Tower is expected to be "a small group meeting, maximum 30 people."
"Mr. Trump's goal is simple, to hear the heart of America's Christian leaders and learn what they feel are the most critical issues facing our nation today," wrote Sheila Withum, a Tampa, Fla., public relations executive. |
/*
* Copyright 2000-2021 JetBrains s.r.o. Use of this source code is governed by the Apache 2.0 license that can be found in the LICENSE file.
*/
package com.intellij.codeInsight.editorActions;
import com.intellij.openapi.editor.Editor;
import com.intellij.openapi.extensions.ExtensionPointName;
import com.intellij.openapi.project.Project;
import com.intellij.util.containers.ContainerUtil;
import org.jetbrains.annotations.ApiStatus;
import org.jetbrains.annotations.NotNull;
@ApiStatus.Experimental
public interface TypingActionsExtension {
ExtensionPointName<TypingActionsExtension> EP_NAME = ExtensionPointName.create("com.intellij.typingActionsExtension");
@NotNull
static TypingActionsExtension findForContext(@NotNull Project project, @NotNull Editor editor) {
final TypingActionsExtension extension =
ContainerUtil.find(EP_NAME.getExtensionList(), provider -> provider.isSuitableContext(project, editor));
return extension == null
? new DefaultTypingActionsExtension()
: extension;
}
/**
* Tells whether this extension should handle copy/paste actions in the given editor context.
*
* @param project current project
* @param editor target editor
* @return {@code true} if this extension optimizes the copy/paste procedure in the editor
*/
boolean isSuitableContext(@NotNull Project project, @NotNull Editor editor);
/**
* Optimal implementation of formatting procedure in UI thread.
* @param project current project
* @param editor target editor
* @param howtoReformat one of magic constants
* {@code NO_REFORMAT, INDENT_BLOCK, INDENT_EACH_LINE, REFORMAT_BLOCK} from the {@code CodeInsightSettings} class
* @param startOffset the start offset of fragment
* @param endOffset the end offset of fragment
* @param anchorColumn the indent for the first line (with {@code INDENT_BLOCK}, {@code 0} with other consts)
* @param indentationBeforeReformat indent block before re-format block (with {@code REFORMAT_BLOCK}, {@code false} with other consts)
*/
default void format(@NotNull Project project,
@NotNull Editor editor,
int howtoReformat,
int startOffset,
int endOffset,
int anchorColumn,
boolean indentationBeforeReformat) {}
/**
* Entry point for implementing formatting and folding hints before pasting the text.
*
* @param project current project
* @param editor target editor
*/
default void startPaste(@NotNull Project project, @NotNull Editor editor) {}
/**
* The entry point for postponed post-paste operations.
*
* @param project current project
* @param editor target editor
*/
default void endPaste(@NotNull Project project, @NotNull Editor editor) {}
/**
* Entry point for document commit and other hints before copying the text.
*
* @param project current project
* @param editor target editor
*/
default void startCopy(@NotNull Project project, @NotNull Editor editor) {}
/**
* The entry point for postponed post-copy operations.
*
* @param project current project
* @param editor target editor
*/
default void endCopy(@NotNull Project project, @NotNull Editor editor) {}
} |
import jinja2
import wataru.settings as settings
import sys

_env = None
_abspath = None


def setenv(abspath):
    me = sys.modules[__name__]
    if me._env is None:
        me._env = jinja2.Environment(
            loader=jinja2.FileSystemLoader(abspath, encoding='utf-8')
        )
        me._abspath = abspath


def get(path):
    if _env is None:
        raise ValueError('template environment not set; call setenv() first')
    return _env.get_template(path)


def get_template_abspath():
    if _abspath is None:
        raise ValueError('template abspath not set')
    else:
        return _abspath
return _abspath
|
Valdosta, Ga. (WCTV) -- Five Lowndes High School athletes signed their national letters of intent to go and play at the next level.
JD Lee, Tymere Moore, Terrell Belcher, Walker Schwab and Marcus Browning all signed to play collegiate football on Wednesday.
Lee and Moore will be heading to Albany State, Belcher will be suiting up for Webber International, Schwab is staying in town to play for Valdosta State and Browning is coming to Tallahassee to play for the Rattlers.
"It means a lot," said Browning. "A lot of people don't get to do it, so I'm blessed to be able to do it."
"It means everything to me," added Schwab. "I've been looking forward to this since I was a little kid."
Moore added, "It feels great. I worked all my life for this. I just want to make my mom and my dad and all of my family proud. I've been working for this my whole entire life since I've been playing football. It's a dream come true." |
/**
* Returns an IEntityLock[] containing unexpired locks, based on the params, any or all of which
* may be null EXCEPT FOR <code>expiration</code>. A null param means any value, so <code>
* find(expir,myType,myKey,null,null)</code> will return all <code>IEntityLocks</code> for
* myType and myKey unexpired as of <code>expir</code>.
*
* @param expiration Date
* @param entityType Class
* @param entityKey String
* @param lockType Integer - so we can accept a null value.
* @param lockOwner String
* @exception LockingException - wraps an Exception specific to the store.
*/
@Override
public IEntityLock[] findUnexpired(
java.util.Date expiration,
Class entityType,
String entityKey,
Integer lockType,
String lockOwner)
throws LockingException {
IEntityLock[] locks = find(entityType, entityKey, lockType, null, lockOwner);
List lockAL = new ArrayList(locks.length);
for (int i = 0; i < locks.length; i++) {
if (locks[i].getExpirationTime().after(expiration)) {
lockAL.add(locks[i]);
}
}
return ((IEntityLock[]) lockAL.toArray(new IEntityLock[lockAL.size()]));
} |
/*
* Copyright (c) 2011-2015 ARM Limited
* All rights reserved
*
* The license below extends only to copyright in the software and shall
* not be construed as granting a license to any other intellectual
* property including but not limited to intellectual property relating
* to a hardware implementation of the functionality of the software
* licensed hereunder. You may use the software subject to the license
* terms below provided that you ensure that this notice is replicated
* unmodified and in its entirety in all distributions of the software,
* modified or unmodified, in source code or in binary form.
*
* Copyright (c) 2002-2005 The Regents of The University of Michigan
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions are
* met: redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer;
* redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution;
* neither the name of the copyright holders nor the names of its
* contributors may be used to endorse or promote products derived from
* this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
* Authors: <NAME>
* <NAME>
* <NAME>
* <NAME>
*/
/**
* @file
* Declaration of a coherent crossbar.
*/
#ifndef __MEM_COHERENT_XBAR_HH__
#define __MEM_COHERENT_XBAR_HH__
#include "mem/snoop_filter.hh"
#include "mem/xbar.hh"
#include "params/CoherentXBar.hh"
/**
* A coherent crossbar connects a number of (potentially) snooping
* masters and slaves, and routes the request and response packets
* based on the address, and also forwards all requests to the
* snoopers and deals with the snoop responses.
*
* The coherent crossbar can be used as a template for modelling QPI,
* HyperTransport, ACE and coherent OCP buses, and is typically used
* for the L1-to-L2 buses and as the main system interconnect. @sa
* \ref gem5MemorySystem "gem5 Memory System"
*/
class CoherentXBar : public BaseXBar
{
protected:
/**
* Declare the layers of this crossbar, one vector for requests,
* one for responses, and one for snoop responses
*/
std::vector<ReqLayer*> reqLayers;
std::vector<RespLayer*> respLayers;
std::vector<SnoopRespLayer*> snoopLayers;
/**
* Declaration of the coherent crossbar slave port type, one will
* be instantiated for each of the master ports connecting to the
* crossbar.
*/
class CoherentXBarSlavePort : public QueuedSlavePort
{
private:
/** A reference to the crossbar to which this port belongs. */
CoherentXBar &xbar;
/** A normal packet queue used to store responses. */
RespPacketQueue queue;
public:
CoherentXBarSlavePort(const std::string &_name,
CoherentXBar &_xbar, PortID _id)
: QueuedSlavePort(_name, &_xbar, queue, _id), xbar(_xbar),
queue(_xbar, *this)
{ }
protected:
/**
* When receiving a timing request, pass it to the crossbar.
*/
virtual bool recvTimingReq(PacketPtr pkt)
{ return xbar.recvTimingReq(pkt, id); }
/**
* When receiving a timing snoop response, pass it to the crossbar.
*/
virtual bool recvTimingSnoopResp(PacketPtr pkt)
{ return xbar.recvTimingSnoopResp(pkt, id); }
/**
* When receiving an atomic request, pass it to the crossbar.
*/
virtual Tick recvAtomic(PacketPtr pkt)
{ return xbar.recvAtomic(pkt, id); }
/**
* When receiving a functional request, pass it to the crossbar.
*/
virtual void recvFunctional(PacketPtr pkt)
{ xbar.recvFunctional(pkt, id); }
/**
* Return the union of all address ranges seen by this crossbar.
*/
virtual AddrRangeList getAddrRanges() const
{ return xbar.getAddrRanges(); }
};
/**
* Declaration of the coherent crossbar master port type, one will be
* instantiated for each of the slave interfaces connecting to the
* crossbar.
*/
class CoherentXBarMasterPort : public MasterPort
{
private:
/** A reference to the crossbar to which this port belongs. */
CoherentXBar &xbar;
public:
CoherentXBarMasterPort(const std::string &_name,
CoherentXBar &_xbar, PortID _id)
: MasterPort(_name, &_xbar, _id), xbar(_xbar)
{ }
protected:
/**
* Determine if this port should be considered a snooper. For
* a coherent crossbar master port this is always true.
*
* @return a boolean that is true if this port is snooping
*/
virtual bool isSnooping() const
{ return true; }
/**
* When receiving a timing response, pass it to the crossbar.
*/
virtual bool recvTimingResp(PacketPtr pkt)
{ return xbar.recvTimingResp(pkt, id); }
/**
* When receiving a timing snoop request, pass it to the crossbar.
*/
virtual void recvTimingSnoopReq(PacketPtr pkt)
{ return xbar.recvTimingSnoopReq(pkt, id); }
/**
* When receiving an atomic snoop request, pass it to the crossbar.
*/
virtual Tick recvAtomicSnoop(PacketPtr pkt)
{ return xbar.recvAtomicSnoop(pkt, id); }
/**
* When receiving a functional snoop request, pass it to the crossbar.
*/
virtual void recvFunctionalSnoop(PacketPtr pkt)
{ xbar.recvFunctionalSnoop(pkt, id); }
/** When receiving a range change from the peer port (at id),
pass it to the crossbar. */
virtual void recvRangeChange()
{ xbar.recvRangeChange(id); }
/** When receiving a retry from the peer port (at id),
pass it to the crossbar. */
virtual void recvReqRetry()
{ xbar.recvReqRetry(id); }
};
/**
* Internal class to bridge between an incoming snoop response
* from a slave port and forwarding it through an outgoing slave
* port. It is effectively a dangling master port.
*/
class SnoopRespPort : public MasterPort
{
private:
/** The port which we mirror internally. */
QueuedSlavePort& slavePort;
public:
/**
* Create a snoop response port that mirrors a given slave port.
*/
SnoopRespPort(QueuedSlavePort& slave_port, CoherentXBar& _xbar) :
MasterPort(slave_port.name() + ".snoopRespPort", &_xbar),
slavePort(slave_port) { }
/**
* Override the sending of retries and pass them on through
* the mirrored slave port.
*/
void sendRetryResp() {
// forward it as a snoop response retry
slavePort.sendRetrySnoopResp();
}
/**
* Provided as necessary.
*/
void recvReqRetry() { panic("SnoopRespPort should never see retry\n"); }
/**
* Provided as necessary.
*/
bool recvTimingResp(PacketPtr pkt)
{
panic("SnoopRespPort should never see timing response\n");
return false;
}
};
std::vector<SnoopRespPort*> snoopRespPorts;
std::vector<QueuedSlavePort*> snoopPorts;
/**
* Store the outstanding requests that we are expecting snoop
* responses from so we can determine which snoop responses we
* generated and which ones were merely forwarded.
*/
m5::hash_set<RequestPtr> outstandingSnoop;
/**
* Keep a pointer to the system to be allow to querying memory system
* properties.
*/
System *system;
/** A snoop filter that tracks cache line residency and can restrict the
* broadcast needed for probes. NULL denotes an absent filter. */
SnoopFilter *snoopFilter;
/** Cycles of snoop response latency.*/
const Cycles snoopResponseLatency;
/**
* @todo this is a temporary workaround until the 4-phase code is committed.
* upstream caches need this packet until true is returned, so hold it for
* deletion until a subsequent call
*/
std::vector<PacketPtr> pendingDelete;
/** Function called by the port when the crossbar is receiving a Timing
request packet.*/
bool recvTimingReq(PacketPtr pkt, PortID slave_port_id);
/** Function called by the port when the crossbar is receiving a Timing
response packet.*/
bool recvTimingResp(PacketPtr pkt, PortID master_port_id);
/** Function called by the port when the crossbar is receiving a timing
snoop request.*/
void recvTimingSnoopReq(PacketPtr pkt, PortID master_port_id);
/** Function called by the port when the crossbar is receiving a timing
snoop response.*/
bool recvTimingSnoopResp(PacketPtr pkt, PortID slave_port_id);
/** Timing function called by port when it is once again able to process
* requests. */
void recvReqRetry(PortID master_port_id);
/**
* Forward a timing packet to our snoopers, potentially excluding
* one of the connected coherent masters to avoid sending a packet
* back to where it came from.
*
* @param pkt Packet to forward
* @param exclude_slave_port_id Id of slave port to exclude
*/
void forwardTiming(PacketPtr pkt, PortID exclude_slave_port_id) {
forwardTiming(pkt, exclude_slave_port_id, snoopPorts);
}
/**
* Forward a timing packet to a selected list of snoopers, potentially
* excluding one of the connected coherent masters to avoid sending a packet
* back to where it came from.
*
* @param pkt Packet to forward
* @param exclude_slave_port_id Id of slave port to exclude
* @param dests Vector of destination ports for the forwarded pkt
*/
void forwardTiming(PacketPtr pkt, PortID exclude_slave_port_id,
const std::vector<QueuedSlavePort*>& dests);
/** Function called by the port when the crossbar is receiving an Atomic
transaction.*/
Tick recvAtomic(PacketPtr pkt, PortID slave_port_id);
/** Function called by the port when the crossbar is receiving an
atomic snoop transaction.*/
Tick recvAtomicSnoop(PacketPtr pkt, PortID master_port_id);
/**
* Forward an atomic packet to our snoopers, potentially excluding
* one of the connected coherent masters to avoid sending a packet
* back to where it came from.
*
* @param pkt Packet to forward
* @param exclude_slave_port_id Id of slave port to exclude
*
* @return a pair containing the snoop response and snoop latency
*/
std::pair<MemCmd, Tick> forwardAtomic(PacketPtr pkt,
PortID exclude_slave_port_id)
{
return forwardAtomic(pkt, exclude_slave_port_id, InvalidPortID,
snoopPorts);
}
/**
* Forward an atomic packet to a selected list of snoopers, potentially
* excluding one of the connected coherent masters to avoid sending a packet
* back to where it came from.
*
* @param pkt Packet to forward
* @param exclude_slave_port_id Id of slave port to exclude
* @param source_master_port_id Id of the master port for snoops from below
* @param dests Vector of destination ports for the forwarded pkt
*
* @return a pair containing the snoop response and snoop latency
*/
std::pair<MemCmd, Tick> forwardAtomic(PacketPtr pkt,
PortID exclude_slave_port_id,
PortID source_master_port_id,
const std::vector<QueuedSlavePort*>&
dests);
/** Function called by the port when the crossbar is receiving a functional
transaction.*/
void recvFunctional(PacketPtr pkt, PortID slave_port_id);
/** Function called by the port when the crossbar is receiving a functional
snoop transaction.*/
void recvFunctionalSnoop(PacketPtr pkt, PortID master_port_id);
/**
* Forward a functional packet to our snoopers, potentially
* excluding one of the connected coherent masters to avoid
* sending a packet back to where it came from.
*
* @param pkt Packet to forward
* @param exclude_slave_port_id Id of slave port to exclude
*/
void forwardFunctional(PacketPtr pkt, PortID exclude_slave_port_id);
Stats::Scalar snoops;
Stats::Distribution snoopFanout;
public:
virtual void init();
CoherentXBar(const CoherentXBarParams *p);
virtual ~CoherentXBar();
unsigned int drain(DrainManager *dm);
virtual void regStats();
};
#endif //__MEM_COHERENT_XBAR_HH__
|
package com.uber.sdk.rides.client;
import com.uber.sdk.rides.auth.ServerTokenAuthenticator;
import javax.annotation.Nonnull;
/**
* A session containing the details of how an {@link UberRidesApi} will interact with the API.
* Authentication is performed with a server token rather than an OAuth 2.0 credential.
* Uses the server token for the connection.
*/
public class ServerTokenSession extends Session<ServerTokenAuthenticator> {
/**
* @param config to define connection parameters
*/
public ServerTokenSession(@Nonnull SessionConfiguration config) {
super(new ServerTokenAuthenticator(config));
}
}
|
Protein structure, activity and thermal stability within nanoscopic compartments We report that protein confinement within nanoscopic vesicular compartments corresponds to a liquid-liquid phase transition, with the protein/water mixture within the vesicle lumen interacting very differently than in bulk. We show that this effect leads to considerable structural changes in the proteins, with evidence suggesting non-alpha-helical conformations. Most importantly, both aspects lead to a significant improvement in protein stability against thermal denaturation up to 95 °C at neutral pH, with little or no evidence of unfolding or reduced enzymatic activity. The latter does indeed exhibit an increase after thermal cycling. Our results suggest that nanoscopic confinement is a promising new avenue for the enhanced long-term storage of proteins. Moreover, our investigations have potentially important implications for the origin of life, since such compartmentalization may well have been critical for ensuring the preservation of primordial functional proteins under relatively harsh conditions, thus playing a key role in the subsequent emergence of primitive life forms. Introduction Proteins result from the sequence-controlled polymerization of amino acids into long chains that fold into functional three-dimensional structures. The understanding of the folding process, as well as of the final 3D structure, has been one of the most productive areas where chemistry, physics and biology have merged, contributing to the understanding of biological complexity as well as feeding most drug design efforts. Yet we still have very little understanding of how protein folding, structure and function are regulated in complex biological milieus. In living cells, compartments confine proteins to local concentrations that can reach up to 40% by volume, creating highly crowded, confined and saturated aqueous solutions.
We know today that crowding affects protein structure, enzymatic conversion and diffusion, as well as the diffusion of any other small molecules dispersed in the remaining space. Most importantly, crowding restricts the water that bathes all the components, with consequently altered properties compared to free bulk water. The study of protein structure and function within confined spaces has significant ramifications across several areas, ranging from protein therapeutics to the food industry. The structure of proteins is the result of a delicate balance between various supramolecular forces, including the hydrophobic effect, hydrogen bonding, disulfide bridge formation, and electrostatic and aromatic interactions. These supramolecular forces are normally represented by energy landscapes whereby the folded protein conformation corresponds to the absolute energy minimum. Today we have solved the structures of over 150,000 proteins, and most of these studies have been done by isolating the protein and studying it under dilute conditions, often neglecting effects such as hydration, protein-protein interactions, etc. Over the last two decades, macromolecular crowding has been studied by the addition of high concentrations of various macromolecules to aqueous protein solutions, such as poly(ethylene oxide), dextran, hemoglobin or defatted albumin. While these studies have demonstrated that crowding favors protein folding, little or no effect on the thermal stability of proteins has been observed. In contrast, proteins confined within silica gels, polymeric gels or mesoporous silicates exhibit enhanced thermal stability due to confinement effects. The confinement is quite extreme in all such experiments, with the available volume being very close to that of a single folded protein, suggesting minimal hydration. Under such strong geometric constraints, there is almost no space available for proteins to unfold even if chemical instability were to ensue.
However, such a stabilizing effect is only possible when water is free to diffuse in and out of the confined volume. While these studies provide interesting insights regarding protein dynamics, they are often limited by the strong interaction between the confinement/crowding agent and the protein, leading to 'unnatural' denaturation driven by the agent itself. Such artificial conditions do not adequately represent those generally found within the cell interior, where hydration and protein/protein interactions play a significant role in controlling the (un)folding dynamics. Although the whole cell is micrometers in size, its interior is often compartmentalized, with membrane-bound volumes ranging from tens to hundreds of nanometers. The presence of the membrane adds an interface to the water pool and consequently further affects the water properties. The membrane hydration itself is not restricted to the membrane/water interface but extends a few nanometers into the cytosol, making it considerably different from bulk water. In the present work, we study the simultaneous effect of confinement on protein stability and structure using block copolymer vesicles, also known as polymersomes. Polymersomes comprise membrane-enclosed nanoscopic compartments produced by the self-assembly of amphiphilic diblock copolymers in aqueous solution. Polymersome morphology and supramolecular nature are very similar to those found in natural cell organelles. Polymersomes are, however, much more robust structures than lipid ones and allow for accurate control over both structural and functional parameters. Polymersome systems have recently been proposed as effective carriers for the delivery of drugs, nucleic acids and proteins. In particular, we have demonstrated that pH-sensitive polymersomes based on poly(2-(methacryloyloxy)ethyl phosphorylcholine)-poly(2-(diisopropylamino)ethyl methacrylate) (PMPC-PDPA) can deliver payloads within live cells with no detrimental effect on cell viability.
Here we demonstrate the effective encapsulation of myoglobin within PMPC-PDPA polymersomes and show how such nanoscopic confinement allows for protein protection. Results and discussion Polymersome preparation and characterization. PMPC-PDPA polymersomes (Fig. 1) possess four properties critical for studying protein encapsulation: (i) the PMPC block is a highly hydrated, water-soluble polymer that is strongly protein-repellent; (ii) the hydration of the main membrane/water interface is controlled by the phosphorylcholine group, the most abundant hydrophilic head group expressed by natural phospholipids; (iii) PMPC-PDPA polymersomes are stable at high temperature, unlike phospholipid vesicles, which lose stability at around 60 °C; and (iv) the pH-sensitive nature of the PDPA block allows for the efficient and reversible encapsulation/release of large macromolecules. PMPC-PDPA polymersomes were prepared using film rehydration techniques, purified by centrifugation and size exclusion chromatography, and extruded through porous polycarbonate membranes with pore diameters of 50, 100, 200 or 400 nm. Polymersome size distributions determined using dynamic light scattering (DLS) are plotted in Fig. 2a for dispersions extruded through the various polycarbonate membranes alongside the untreated sample. We further confirmed the colloidal stability as a function of temperature (Fig. 2b), showing that no notable difference in the hydrodynamic radius was detected for any of the different size dispersions. In addition, we encapsulated two different Rhodamine B probes: one amphiphilic probe housed within the membrane, Rhodamine B octadecyl ester, and one hydrophilic probe housed within the vesicle lumen, a Rhodamine B-PMPC25 polymer. Rhodamine B fluorescence intensity is strongly affected by temperature and allows precise measurement of thermal changes. As shown in Fig.
2c, we observed no difference in the monitored temperature whether the probe was located in the polymersome membrane or in the lumen. We thus conclude that the PMPC-PDPA polymersome membrane does not act as an insulator, and the temperature within its lumen and membrane is equal to the temperature in the bulk solution. Protein encapsulation. We have previously shown that PMPC-PDPA polymersomes can encapsulate several types of proteins, including immunoglobulin G (IgG), albumin, myoglobin, trypsin, catalase, and glucose oxidase. We have also shown that proteins do not interact with the PMPC-stabilized surface of the polymersomes, and hence protein encapsulation is due to actual entrapment within the lumen. Protein encapsulation can be achieved either during polymersome preparation or after polymersome formation by controlled electroporation. We encapsulated both IgG and myoglobin by electroporation, where sequential pulses of alternating electric current are used to induce temporary poration in the polymersome membrane. The temporary membrane disruption allows the proteins dissolved in the bulk to enter the lumen. This latter method makes it possible to prepare and purify polymersomes so as to obtain dispersions of controlled size. As shown in Fig. 2d, we encapsulated myoglobin within polymersomes of different sizes and measured the number of proteins encapsulated, N_p, as a function of the polymersome radius, R. Using simple geometrical considerations, we can write N_p = ρ (4π/3)(R − ℓ − h)³ (equation 1), where ρ is the protein particle density within the polymersome lumen expressed as the number of molecules per unit volume, ℓ is the PMPC25 chain length, estimated from simulations to be 6 nm, and h is the PDPA membrane thickness, which we have measured to be 7 nm. We use equation 1 to fit the data and, as shown in Fig. 2d, the experimental data fall within two particle densities of 5·10⁻³ and 2·10⁻³ nm⁻³ respectively, suggesting that the smaller vesicles are more efficient at encapsulating proteins.
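The geometrical relation above can be checked numerically. The following is a minimal Python sketch (the function name is ours; the layer thicknesses and densities are the values quoted above):

```python
import math

def proteins_per_vesicle(radius_nm, rho_per_nm3, brush_nm=6.0, membrane_nm=7.0):
    """Equation 1: N_p = rho * (4/3) * pi * (R - l_PMPC - h_PDPA)^3,
    i.e. the number of proteins fitting in the lumen at particle density rho."""
    lumen_radius = radius_nm - brush_nm - membrane_nm
    if lumen_radius <= 0:
        return 0.0  # no lumen volume left below R = 13 nm
    return rho_per_nm3 * (4.0 / 3.0) * math.pi * lumen_radius ** 3
```

For a 50 nm vesicle at the denser fitted value (ρ = 5·10⁻³ nm⁻³) this gives on the order of a thousand myoglobin molecules per vesicle.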
Lumen encapsulation is further confirmed by transmission electron microscopy (TEM) using 5 nm gold nanoparticles conjugated to IgG (GNP-IgG) as a model protein. As shown in Fig. 2e, the empty polymersomes show the typical vesicular geometry, while when loaded with the GNP-IgG the gold is visible within the polymersome lumen (Fig. 2f). Further confirmation of effective encapsulation is shown in Fig. S1, where UV-visible spectroscopy upon incubation of protein-loaded polymersomes with the protease trypsin shows that the enzyme degrades non-encapsulated myoglobin within 4 h (gradual disappearance of the Soret band, which is characteristic of non-degraded protein). In contrast, spectra recorded for myoglobin-loaded polymersomes remain identical to those obtained for the native protein, indicating that trypsin cannot physically access the encapsulated myoglobin located within the interior of the polymersomes. Using the data in Fig. 2d and the myoglobin molecular volume v_p = 18.87 nm³ calculated using Chimera (PDB: 1WLA), we estimated the protein lumen volume fraction, φ = ρ v_p, as well as the corresponding Wigner-Seitz mean interparticle distance, calculated as ⟨r_ij⟩ = 2(3/4πρ)^{1/3}. Both parameters, φ and ⟨r_ij⟩, are plotted in Fig. 2g as a function of the polymersome size, showing that myoglobin proteins occupy on average 10% of the lumen volume in small polymersomes, while the volume fraction decreases to 4-5% in larger polymersomes. In both cases the mean interparticle distance is quite short, varying from 7 nm up to about 9 nm in the least dense configuration. The vesicle lumen as a different liquid phase. One constant and surprising result that emerges here, as well as in past reports, is that the concentration of the protein within the polymersome lumen seems to be always higher than the protein bulk concentration. However, such a counterintuitive result is the consequence of how we measure protein concentration.
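The two quantities just defined can be reproduced with a short Python sketch. The helper names are ours; the molecular volume is the Chimera value quoted above, and we assume the Wigner-Seitz form ⟨r_ij⟩ = 2(3/4πρ)^{1/3}:

```python
import math

V_MYOGLOBIN_NM3 = 18.87  # molecular volume from Chimera (PDB: 1WLA)

def volume_fraction(rho_per_nm3, v_protein_nm3=V_MYOGLOBIN_NM3):
    """Lumen volume fraction phi = rho * v_p."""
    return rho_per_nm3 * v_protein_nm3

def mean_interparticle_distance_nm(rho_per_nm3):
    """Wigner-Seitz estimate <r_ij> = 2 * (3 / (4 * pi * rho))^(1/3)."""
    return 2.0 * (3.0 / (4.0 * math.pi * rho_per_nm3)) ** (1.0 / 3.0)
```

With the two fitted densities, 5·10⁻³ and 2·10⁻³ nm⁻³, this recovers the ~10% and ~4% volume fractions and the 7-10 nm interparticle distances quoted above.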
We use HPLC with the appropriate calibration to measure protein concentration and, to be more precise, we measure the bulk protein activity a_b = γ_b c_b, where γ_b is the activity coefficient that indicates the deviation from ideality. The encapsulated protein is also measured in bulk, as the polymersomes are broken down and the cargo is dispersed before the measurement. We thus define this to be the apparent lumen activity as a function of the lumen protein concentration, a_l = γ_l c_l. From a thermodynamic point of view, the process of encapsulation is driven by the difference between the protein chemical potential in the bulk, μ_b = μ⁰ + k_B T ln(γ_b c_b), and in the vesicle lumen, μ_l = μ⁰ + k_B T ln(γ_l c_l), with μ⁰ being the protein standard potential, k_B the Boltzmann constant, T the temperature and γ_l the protein activity coefficient within the lumen. Assuming equilibrium conditions, μ_b = μ_l, we can then write γ_b c_b = γ_l c_l (equation 3). The activity coefficients are a measure of the free energy of non-specific interaction between the proteins in water, arising from self-excluded volume effects, electrostatic interactions and the hydrophobic effect. The activity coefficient is thus related to the free energy change from an ideal to a real solution and obeys the relation k_B T ln γ = G − G_ideal (equation 4), where G is the protein free energy in the real solution and G_ideal is the protein free energy in an ideal solution. The latter is independent of whether the protein is placed within the bulk or the lumen. If we combine equations 3 and 4, we can thus calculate the free energy of encapsulation as ΔG_enc = G_l − G_b = −k_B T ln(γ_b/γ_l) (equation 5), where G_l and G_b are the protein free energies in the lumen and in the bulk. In Fig. 3a, we plot the ratio between the bulk and lumen activity coefficients for myoglobin and IgG, measured by loading the protein within polymersomes starting from different bulk activities. We also use equation 5 to calculate the corresponding free energy of encapsulation. The graph shows a bimodal trend with a peak at γ_b/γ_l ≈ 40 and ΔG_enc = −3.71 k_B T for myoglobin and γ_b/γ_l ≈ 80 and ΔG_enc = −4.35 k_B T for IgG; as the bulk activity increases, γ_b/γ_l → 1 while ΔG_enc → 0.
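Equation 5 reduces to a one-line computation. The sketch below (our own naming) recovers the quoted free energies from the measured activity-coefficient ratios:

```python
import math

def encapsulation_free_energy_kT(gamma_bulk_over_lumen):
    """Equation 5 in units of k_B T: dG_enc = -ln(gamma_b / gamma_l).
    Negative values mean encapsulation is thermodynamically favorable."""
    return -math.log(gamma_bulk_over_lumen)
```

A ratio of 40 gives approximately −3.7 k_B T (myoglobin) and a ratio of 80 gives approximately −4.4 k_B T (IgG), matching the fitted peaks, while a ratio of 1 gives zero as expected.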
Albeit only a few k_B T, protein encapsulation within polymersomes is a favorable process, indicating a stabilization of the protein/water mixture within the vesicle lumen. It is worth mentioning that the non-linear trend is expected and associated with the non-ideal nature of the protein-water solution. IgG is larger than myoglobin and does indeed show a larger deviation from ideality. If we plot the ratio γ_b/γ_l for the myoglobin measured at the peak, where γ_b/γ_l ≈ 40, as a function of the polymersome radius, as shown in Fig. 3b, we observe a decrease, with γ_b/γ_l ≈ 10 for larger vesicles. Assuming a linear trend, we can estimate that γ_b/γ_l = 1 corresponds to a polymersome radius of about 200 nm. These results suggest that the equilibrium between the lumen and the bulk phases depends on the level of water confinement, which seems to decrease with larger vesicles (see Fig. 2g). This result indicates that as the vesicle gets smaller the ratio of interfacial to bulk water increases; consequently, this affects the solubility of the proteins which, as suggested by our data, increases, leading to a more condensed phase. The vesicle encapsulation effect on protein folding-unfolding. Myoglobin is a globular protein comprising 153 amino acids folded around a central heme prosthetic group implicated in oxygen, NO, CO, and H2O2 storage. Myoglobin was the very first protein whose structure was resolved, by Kendrew et al., and several high-resolution crystal structures have since been reported. Several spectroscopic assays can assess the myoglobin secondary structure, making it one of the most studied globular proteins. To study protein stability under vesicle confinement, myoglobin-loaded polymersomes, as well as free myoglobin mixed with empty polymersomes, were heated from 30 °C up to 95 °C at pH 7.4 in a step-wise fashion. Samples were allowed to equilibrate for one hour at 5 °C intervals to avoid hot spots and thermal gradients.
Each sample was subsequently allowed to cool directly to 20 °C and, once the sample reached its final equilibrium temperature, the protein-loaded polymersomes were dissolved at pH 6 by addition of dilute HCl (0.01 M) to release their protein payload. The resulting aqueous solutions were analyzed by both UV-visible and fluorescence spectroscopy. The folded conformation of the native protein has a characteristic single Soret band at 410 nm. Upon denaturation, a second band due to protein unfolding appears at 390 nm (Fig. 4a), while it is clear that the encapsulated proteins show no apparent denaturation. Such an effect was observed for all the different polymersome sizes and all conditions of encapsulation, as shown in the graph in Fig. 4b, where the ratios between the absorbances at 410 and 390 nm are plotted. Similar results were observed using fluorescence spectroscopy, which monitors the emission due to the α-helix tryptophans (Trp7 and Trp14). After excitation at 295 nm, these aromatic groups exhibit a typical emission peak at 310 nm. When the protein unfolds and its secondary structure is lost, the tryptophan moieties are exposed to the surrounding water. This degradation is associated with a second, more intense peak at 340 nm (see Fig. 4c). Tryptophan emission was not detected for any of the encapsulated protein samples, independently of the polymersome size, further suggesting that polymersomes confer excellent protection against thermally induced denaturation (Fig. 4c-d). Both UV-visible and fluorescence spectroscopy were performed after the thermal denaturation cycle was applied to the systems and the polymersomes were dissolved at mildly acidic pH. This means that even though the encapsulated myoglobin resisted the thermal treatment, the data in Fig. 4 do not rule out a possible reversible unfolding favored by the confinement.
In order to study protein thermal denaturation during the thermal treatment, we performed in-line circular dichroism (CD) studies on both the free and the encapsulated protein. As shown in Fig. 5a, structural degradation of the native protein occurs between 70 °C and 80 °C and is complete at 95 °C. When myoglobin is confined within polymersomes, the resulting CD spectra exhibit minimal temperature-dependent shift, and the ellipticity remains low even at 95 °C (see Fig. 5b). It is important to note that CD is rather insensitive to the scattering properties associated with polymersomes, and CD spectra of empty polymersomes have zero ellipticity across all wavelengths (data not reported). This effect is not surprising, as CD measures the differential absorption of the left and right circularly polarized components of plane-polarized radiation, and this is present only when the chromophore is either chiral or conjugated to a chiral center. Such insensitivity has also been reported for both large and small lipid vesicles. In Fig. 5c we report the fractions of folded protein, f220 and f208, measured at 220 and 208 nm respectively as a function of the temperature. These wavelengths are those associated with the characteristic α-helix peaks (see methods for calculation). The data show that the free protein has f220 = 55% and f208 = 48% of its structure folded, and both decrease to 20% and 10% respectively at about 75 °C, in line with previous reports. The folded fractions of the polymersome-encapsulated protein are always higher than those of the free protein, with f208 = 95% independently of the temperature and f220 dropping from 64% to 48%. Such a lack of unfolding can be explained by the high confinement that myoglobin experiences within the vesicles, with a mean interparticle distance of less than 10 nm (Fig. 2g).
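The folded fractions f220 and f208 are computed in the methods; a common two-state normalization of the measured ellipticity, which we assume here purely as an illustration (the baseline ellipticity values below are hypothetical placeholders, not data from this work), is:

```python
def folded_fraction(theta_obs, theta_folded, theta_unfolded):
    """Two-state estimate: f = (theta_obs - theta_U) / (theta_F - theta_U),
    where theta_F and theta_U are the fully folded and fully unfolded
    ellipticities at the chosen wavelength (e.g. 220 or 208 nm)."""
    return (theta_obs - theta_unfolded) / (theta_folded - theta_unfolded)

# Hypothetical ellipticity values (mdeg), for illustration only:
f = folded_fraction(theta_obs=-11.0, theta_folded=-20.0, theta_unfolded=-1.0)
```

By construction, f equals 1 when the observed ellipticity matches the fully folded baseline and 0 when it matches the fully unfolded one.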
Structurally, myoglobin unfolding will result in a considerable change in the radius of gyration, with the unfolded structure being about twice the size of the folded one, as schematized in Fig. 5d, where the PDB-deposited structure (1wla) is compared with an unfolded one in which the same amino acid sequence is minimized in theta solvent conditions (i.e. R_g ∝ N^{1/2}). The vesicle encapsulation effect on protein structure. The absorption of chiral peptide bonds in the UV region results from the amide double bond's intense π → π* transition at ca. 190 nm and a weaker, broader peak from the nitrogen lone pair n → π* transition at ca. 220 nm. Depending on the peptide secondary structure, and hence on the angles the peptide bonds form, the two transitions occur at different energies. For example, the CD spectrum of the α-helix comes with two negative bands at 222 nm and 208 nm, β-sheets have a weak negative band at ~218 nm, and finally a random coil shows a negative band at around 195 nm. The myoglobin structure (Fig. 5d) comprises eight α-helices and, as shown in both Fig. 5a and Fig. 6a, its CD spectrum shows the distinctive α-helix bands at 208 and 222 nm respectively. However, for the encapsulated myoglobin at 20 °C and 95 °C the far-UV CD spectra (Fig. 6a) show a more negative ellipticity with a blue shift compared to the free protein. The sample incubated at 20 °C shows that the two α-helix peaks shift from 222 to 218 nm and from 208 to 205 nm respectively, with the latter increasing in intensity. This shift in the far-UV spectra of the encapsulated protein at 20 °C is consistent with the emergence of the 3₁₀ helix arrangement, the third most common structural element observed in globular proteins. The 3₁₀ helix has a different hydrogen-bonding pattern compared to the α-helix, with the carbonyl amide hydrogen bonds linking amino acids every three units rather than four.
Furthermore, the two helices differ from one another in the dihedral angle that two consecutive residues make, with the α-helix having a 100° rotation while in the 3₁₀ helix consecutive residues form a 120° angle around the helical axis. A considerable change of the myoglobin structure is also observed for the samples at 95 °C, with the free protein sample showing the typical spectrum of random coils, while the encapsulated protein spectrum displays a negative peak at 203 nm and a less negative peak at 208 nm, with a further, broader peak at 222 nm. At 95 °C it is unlikely that the protein structure is controlled by hydrogen bonding, yet the negative peak and the percentage of folded structure suggest the presence of a secondary structure rather than a coiled, unfolded configuration. Both aspects lead us to conclude that parts of the myoglobin sequence are folded into polyproline II (PPII) helices. Such a secondary structure is the dominant conformation in collagen and other fibrillar proteins. A characteristic feature of PPII is its repetitive torsional angles, which form without any hydrogen bonds but only via interaction with water. Differences between the free and encapsulated proteins are also visible in the near-UV CD spectra in Fig. 6b, where the encapsulated myoglobin spectra at 20 °C and 95 °C show an increase in ellipticity with a broad peak appearing between 270 and 300 nm. This region is associated with aromatic amino acids, with tyrosine (green band) peaking between 275 and 282 nm and tryptophan (orange band) peaking at 290 nm.
[Fig. 6: Far (a) and near (b) UV circular dichroism spectra recorded for free and encapsulated myoglobin at 20 °C and 95 °C. Note that the polymersomes used have a 50 nm radius. Myoglobin structure (PDB: 1wla) with the helix spanning residues 1-20 carrying the two tryptophans in orange (c), and the two helices across residues 100-151 that bear the two tyrosines in green (d). The two domains have been rearranged with three different helical arrangements: α, 3₁₀ and PPII helix.]
Tyrosine and tryptophan are highlighted in green and orange respectively in the myoglobin structures shown in Figs. 6c-d, where we assessed the possibility of changes in helical arrangement. We thus isolated the helix spanning residues 1 to 20, carrying the two tryptophans (Fig. 6c), and the two helices spanning residues 100 to 151, which bear the two tyrosines (Fig. 6d). In both cases, we rearranged the helical conformation by imposing either 3₁₀ or PPII dihedral angles, and the respective structures are shown in Figs. 6c-d. A conformational change from α to 3₁₀, or from α to PPII, forces the aromatic residues farther apart; otherwise they would lie within the 1 nm distance at which the aromatic group absorbances interfere with each other. The interplay between the different helical arrangements has also been postulated during folding-unfolding transitions, and indeed the different structures seem to co-exist at the same time in protein structures. We thus propose that the different thermodynamic state encountered within the lumen favors the formation of the 3₁₀ helix arrangement and hence is responsible for the myoglobin presenting a more condensed arrangement. At higher temperatures, the loss of hydrogen bonding drives the full unfolding and complete loss of any secondary structure in the free myoglobin; when encapsulated, however, the protein seems to evolve differently, and a considerable portion of the amino acids maintain a helical arrangement, albeit one more consistent with the PPII conformation. Enzyme activity within vesicles. The structural stability of the polymersome-confined protein is further confirmed by monitoring its enzymatic activity after applying the thermal cycle. The enzymatic activity can be assessed by measuring the rate of oxidation of guaiacol into its tetramer using UV-visible spectroscopy at pH 7.4 and 25 °C (all measurements were normalized so as to have equal protein and substrate concentrations).
The heme group that is responsible for myoglobin's enzymatic activity is susceptible to denaturation and, as expected, the free protein is no longer enzymatically active after the thermal treatment, as shown in Fig. 7a. In contrast, the protein is still active in all vesicles, confirming that the heme group of the encapsulated myoglobin is still held in a reactive configuration even after thermal treatment. Interestingly, the activity of protein encapsulated within the polymersomes increases with respect to the native free protein (Fig. 7b). Although within the experimental error, the increase is higher within the smaller vesicles, where we measured almost twice the activity of the free protein (taken as 1 here), versus only 1.5 times for larger vesicles.
[Fig. 7: Schematic of the oxidation of guaiacol into its tetramer catalysed by myoglobin (a). The reaction can be monitored using UV-visible spectroscopy, and the normalised responses of the free and encapsulated protein are plotted for the native and thermally treated samples. Note the reactions were performed at pH 7.4 and 25 °C and all measurements were normalised so as to have equal protein and substrate concentrations. The corresponding enzyme activity (b) was normalised to the free sample and plotted as a function of the polymersome size for both treated and untreated samples (n = 3; error bars = ± SD).]
Enhanced enzymatic activity as a function of macromolecular crowding has already been reported for DNA polymerase, multi-copper oxidase, and ribozymes. However, most of these studies utilized inert crowding agents, whereas in the present work myoglobin activity appears to exhibit an auto-catalytic effect once encapsulated within polymersomes under relatively crowded conditions.
A similar phenomenon was observed by de Souza et al., who reported that encapsulating the entire ribosomal machinery inside 100 nm lipid vesicles produced an average yield of fluorescent protein more than six times higher than that found in bulk water. Conclusions The data shown herein demonstrate a valid platform for studying protein dynamics under conditions that closely resemble those found in vivo. We show that vesicle entrapment leads to a considerable change in the protein/water interaction, creating a more condensed phase with a consequently higher density of protein per volume compared to bulk conditions. In many ways, this suggests that the protein/water mixture within the lumen is in a different liquid state than the bulk mixture. In this fashion, the protein encapsulation process corresponds to a liquid-liquid phase transition wherein the lumen phase is more condensed than the bulk phase. The marked enhancement in thermal stability, as well as the enhanced enzymatic activity of the encapsulated myoglobin, further corroborate this different energetic scenario. The stabilization effect observed across a range of polymersome diameters suggests that confinement and crowding are intimately connected. Both phenomena lead to the formation of a highly confined water network between the encapsulated proteins and the polymersome inner leaflet, with the water molecules forced to occupy nanoscopic volumes ranging in size from a few nm to tens of nm. Several studies on water confined between two hydrophilic substrates have reported a more glass-like structure with disrupted hydrogen bonds exhibiting somewhat longer lifetimes than the typical picoseconds, together with a reduction in tetrahedral bonding arrangements. Other studies have shown that this interfacial effect can extend well beyond the electrical double layer thickness of a few nm and reach the µm range, with the concomitant formation of a less flexible, more organized phase.
More relevantly to the present case, Bhattacharyya et al. measured water solvation dynamics using fluorescent probes within vesicles and observed two regimes: one very slow (τ > 1 ns) and one faster (τ ≈ 600 ps). Both relaxation times are considerably slower than in bulk water (τ < 1 ps) and are attributed to the interface-bound water and the water within the vesicle lumen. In conclusion, we provide strong evidence that protein thermodynamics within a nanoscopic aqueous environment is strongly affected by both protein concentration and spatial confinement. More importantly, we demonstrate that proteins encapsulated within polymersomes can withstand large temperature gradients without compromising their structure (and hence their biochemical activity). We believe that such a finding is significant in the context of polymersome-mediated delivery of proteins and the development of nano-reactors. Finally, our findings suggest a new perspective for 'origin of life' research, as we propose a new paradigm for compartmentalization. We demonstrate that compartmentalization is not just critical for the spatial separation of aqueous volumes, but also offers a potentially important stabilization mechanism for proteins, which are one of life's essential building blocks. IgG dye labelling. In the first step the proteins were solubilised in sodium bicarbonate buffered solution at pH ~8.3 at a final concentration of 5 mg/ml. Subsequently, 0.25 mg of dye (λex = 649 nm, λem = 669 nm), previously dissolved in 20 µl DMSO, was added to the protein solutions and incubated at room temperature for 2 h under stirring. This was necessary to allow the protein-dye conjugation between the succinimidyl ester group present in the dye's chemical structure and the primary amines present on the proteins. The labelled proteins were then purified from the unbound dye via SEC using Sephadex G-25. Finally, the absorbance at 280 nm of the purified product was measured.
The absorbance of free dye was subtracted from the total absorbance, and protein concentration was determined against a standard curve. PMPC25-PDPA70 polymersomes preparation and myoglobin encapsulation. PMPC25-PDPA70 copolymers were synthesized by atom transfer radical polymerization (ATRP), as reported elsewhere 1. In a typical experiment, PMPC25-PDPA70 powder was dissolved in methanol/chloroform 1:2 and a thin film was cast in a glass vial. The film was rehydrated by adding 2 ml of 1X phosphate buffered saline (PBS 0.1 M) at pH 7.4. The buffered solution was stirred (magnetic stirring at 200 rpm) for 8 weeks to allow polymersome formation. Both IgG and myoglobin encapsulation by electroporation was tested under different conditions, such as different initial protein concentrations, using 5 pulses of 2500 V each. After mixing proteins and polymersomes at the desired conditions for each experiment, 400 µl of the mixture was loaded into a 2 mm gap electroporation cuvette (Eppendorf, UK) and electroporated using an Electroporator 2510 (Eppendorf, UK) instrument, applying a voltage of 2500 V per pulse. Polymersomes extrusion and encapsulation efficiency. Polymeric vesicles were extruded using a Liposofast extruder. Polycarbonate membranes with 50, 100, 200, and 400 nm pores were used to obtain polymersomes of the correct size, loaded with myoglobin. Polymersomes were then purified via gel permeation chromatography (GPC), using Sepharose 4B as the stationary phase and PBS at pH 7.4 as the eluent. Proteins per vesicle were determined by reversed-phase high pressure liquid chromatography (RP-HPLC). An anti-tubulin IgG conjugated with AlexaFluor® 647 dye (λex = 650 nm, λem = 669 nm) (ab6161; Abcam®, UK) was used as a model. Myoglobin was used pristine and measured by UV/Vis absorbance at 408 nm. 
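Quantifying myoglobin from its 408 nm Soret-band absorbance reduces to the Beer–Lambert law. A minimal sketch of that calculation follows; the molar absorptivity used here is an assumed illustrative value, not one reported in the text:

```python
def myoglobin_concentration_uM(absorbance, epsilon_M_cm=157000.0, path_cm=1.0):
    """Beer-Lambert: A = epsilon * c * l, so c = A / (epsilon * l).
    epsilon (M^-1 cm^-1) for the myoglobin Soret band is an assumed value
    for illustration; use the value appropriate to your preparation."""
    concentration_M = absorbance / (epsilon_M_cm * path_cm)
    return concentration_M * 1e6  # convert mol/L to micromolar
```

With these assumed parameters, an absorbance of 0.157 in a 1 cm cuvette corresponds to roughly 1 µM protein.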
The calibration curves were obtained using an RP-HPLC (Dionex, Ultimate 3000) with a C18 analytic column (Phenomenex® Jupiter C18, 300 Å, 150 × 4.60 mm, 5 µm) at a constant flow rate of 1 ml/min. The eluents used were milliQ H2O with 0.05% v/v trifluoroacetic acid (TFA) (eluent A) and CH3OH with 0.05% TFA (eluent B), mixed according to a gradient of eluent B starting at 20%, increasing to 40% after 6 min of elution time, 45% after 16 min, 50% after 20 min, and 70% after 23 min, peaking at 100% after 24 min. After 27 min of elution time the gradient was returned to 20% and maintained at this value for the rest of the analysis. Dynamic Light Scattering and transmission electron microscopy. Dynamic light scattering measurements were performed using the Malvern Zetasizer Nano set at 20 °C. Samples were diluted to 0.15 mg/ml in 1X PBS pH 7.4. 800 µl of diluted sample was then placed into a polystyrene cuvette (Malvern, DTS0012) and analyzed. TEM analyses were performed using an FEI Tecnai Spirit microscope with a maximum working voltage of 120 kV, equipped with a Gatan 1K MS600CW CCD camera. The copper grids used for sample analysis were initially coated with a carbon layer (thickness: ~20 nm) using a carbon coater. Subsequently, the prepared grids were submerged in the polymersome suspension and afterwards stained using a phosphotungstic acid (PTA) solution (0.75% w/w), as described in previous works. The PTA staining was applied here since it enables the detection of the ester bonds present in the PMPC-PDPA molecular structure. Trypsin stability. Polymersomes loaded with myoglobin were incubated at 37°C in 0.1 M PBS, pH 7.4, at 10 µg/ml of protein together with trypsin. The trypsin/myoglobin molar ratio was 1:2. Empty polymersomes were used as a control under the same conditions. UV-Vis spectra of the encapsulated myoglobin were recorded immediately after the addition of trypsin (t0) and after 4 and 24 hours of incubation. 
In order to remove the scattering of the polymersomes, the same approach described above for evaluating the EE was followed, i.e. solubilization of the polymersomes. In this case, though, protease inhibitor was added at 5% (v/v) before solubilizing the polymersomes, to prevent trypsin degradation of the released myoglobin that would alter the results. Measurements were performed in triplicate. The UV-Vis spectra were recorded over the 800–200 nm measurement range to follow changes in the peak at 408 nm. Figure S1 Trypsin degradation effect on polymersomes of different sizes and free myoglobin compared with native myoglobin (green). The UV-Vis spectra were measured in the range 800–200 nm. Thermal stability. Myoglobin polymersomes of different diameters in 0.1 M PBS, pH 7.4, were incubated in a thermostated Peltier chamber where the temperature was gradually raised from 30 to 95°C in steps of 5°C, at a protein concentration of 10 µg/ml. Each temperature was maintained for 60 minutes. The polymersome preparations were then solubilized at pH 6 and the UV-Vis spectra (800–200 nm) were recorded. Empty polymersomes were also used as a control. Myoglobin secondary structure analysis. Circular dichroism (CD) measurements of the polymersomes at 10 µg/ml of protein were recorded at different temperatures (20, 40, 60, 70, 80, and 95°C) after placing the samples in a thermostated chamber where the temperature was increased in ramps of 10°C. Each temperature was maintained for 60 minutes. The changes in the far-UV CD spectra and the ellipticity at 222 nm were analyzed. Myoglobin tertiary structure analysis. The tryptophan fluorescence spectra (300–370 nm) of the myoglobin polymersomes of different diameters in 0.1 M PBS, pH 7.4, were recorded at a myoglobin concentration of 10 µg/ml. The excitation was set at 295 nm. The increase in fluorescence correlated with protein denaturation was measured before and after thermal denaturation. 
Results were normalised as the ratio of the 310/340 nm peak intensities. Myoglobin polymersomes bioactivity after thermal denaturation. Bioactivity was measured for all polymersome samples after the thermal denaturation ramp from 30 to 70°C using a thermostated chamber. 2-methoxyphenol (guaiacol) and H2O2 were added at concentrations of 3 M and 0.4 mM, respectively. Activities were measured as the increase over time in the formation of tetraguaiacol at 470 nm. Values are shown as normalized increases in absorbance, and activity is plotted normalized to that of the free enzyme. Molecular graphics and analysis. We used Protein Data Bank (PDB) entries 1IGY for IgG and 1WLA for myoglobin. All analyses were performed with UCSF Chimera, developed by the Resource for Biocomputing, Visualization, and Informatics at the University of California, San Francisco, with support from NIH P41-GM103311. 
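The peroxidase-activity readout described above (growth of the 470 nm tetraguaiacol absorbance over time, normalized to the free enzyme) reduces to a slope calculation. A hedged sketch of that analysis follows; it is an illustration, not the authors' actual processing script:

```python
import numpy as np

def initial_rate(times_s, a470):
    """Initial rate as the least-squares slope of A470 versus time."""
    slope, _intercept = np.polyfit(np.asarray(times_s, float),
                                   np.asarray(a470, float), 1)
    return slope

def relative_activity(sample_rate, free_enzyme_rate):
    """Activity normalized to that of the free enzyme, as in the text."""
    return sample_rate / free_enzyme_rate
```

For a perfectly linear trace the fitted slope is exact; in practice only the early, linear portion of the absorbance curve should be fed to `initial_rate`.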
import os
import json
import requests
from tqdm import tqdm
from concurrent.futures import ThreadPoolExecutor

# Load the COCO-style export describing the image collection.
with open('data/export.json') as f:
    data = json.load(f)

def find_image_by_id(image_id: int):
    """Return the image record with the given id, or None if absent."""
    for image in data['images']:
        if image['id'] == image_id:
            return image
    return None

def download(job):
    """Download a single image; job is a (url, destination path, index) tuple."""
    url, path, index = job
    r = requests.get(url, allow_redirects=True)  # follow redirects to the actual file
    with open(path, 'wb') as f:
        f.write(r.content)
    print("Current: {}".format(index), end='\r')

folder_name = "fishial_collection"
os.makedirs(os.path.join(folder_name, "data"), exist_ok=True)

# Build the list of (url, path, index) download jobs.
jobs = []
for i in tqdm(range(len(data['images']))):
    file_name = data['images'][i]['file_name']
    path = os.path.join(folder_name, 'data', file_name)
    jobs.append((data['images'][i]['coco_url'], path, i))

# Fetch up to 10 images concurrently.
with ThreadPoolExecutor(max_workers=10) as executor:
    executor.map(download, jobs)
|
package com.haruhiism.bbs.command;
import lombok.Getter;
import lombok.Setter;
import org.springframework.format.annotation.DateTimeFormat;
import java.time.LocalDate;
@Getter
@Setter
public class DateBasedListCommand extends ListCommand {
private boolean betweenDates = false;
@DateTimeFormat(pattern = "yyyy-MM-dd")
private LocalDate from = LocalDate.of(1970, 1, 1);
@DateTimeFormat(pattern = "yyyy-MM-dd")
private LocalDate to = LocalDate.now().plusDays(1);
}
|
// The MIT License
//
// Copyright (c) 2020 Temporal Technologies Inc. All rights reserved.
//
// Copyright (c) 2020 Uber Technologies, Inc.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
package filestore
import (
"context"
"encoding/json"
"errors"
"fmt"
"os"
"strconv"
"strings"
"time"
"github.com/dgryski/go-farm"
"github.com/gogo/protobuf/proto"
historypb "go.temporal.io/api/history/v1"
archiverspb "go.temporal.io/server/api/archiver/v1"
"go.temporal.io/server/common/archiver"
"go.temporal.io/server/common/codec"
"go.temporal.io/server/common/primitives/timestamp"
)
var (
errDirectoryExpected = errors.New("a path to a directory was expected")
errFileExpected = errors.New("a path to a file was expected")
errEmptyDirectoryPath = errors.New("directory path is empty")
)
// File I/O util
func fileExists(filepath string) (bool, error) {
if info, err := os.Stat(filepath); err != nil {
if os.IsNotExist(err) {
return false, nil
}
return false, err
} else if info.IsDir() {
return false, errFileExpected
}
return true, nil
}
func directoryExists(path string) (bool, error) {
if info, err := os.Stat(path); err != nil {
if os.IsNotExist(err) {
return false, nil
}
return false, err
} else if !info.IsDir() {
return false, errDirectoryExpected
}
return true, nil
}
func mkdirAll(path string, dirMode os.FileMode) error {
return os.MkdirAll(path, dirMode)
}
func writeFile(filepath string, data []byte, fileMode os.FileMode) (retErr error) {
if err := os.Remove(filepath); err != nil && !os.IsNotExist(err) {
return err
}
f, err := os.Create(filepath)
if err != nil {
return err
}
// Only defer the close once we know f is non-nil; deferring before the
// error check would dereference a nil *os.File when Create fails.
defer func() {
if err := f.Close(); err != nil {
retErr = err
}
}()
if err = f.Chmod(fileMode); err != nil {
return err
}
if _, err = f.Write(data); err != nil {
return err
}
return nil
}
// readFile reads the contents of a file specified by filepath
// WARNING: callers of this method should be extremely careful not to use it in a context where filepath is supplied by
// the user.
func readFile(filepath string) ([]byte, error) {
// #nosec
return os.ReadFile(filepath)
}
func listFiles(dirPath string) ([]string, error) {
if info, err := os.Stat(dirPath); err != nil {
return nil, err
} else if !info.IsDir() {
return nil, errDirectoryExpected
}
f, err := os.Open(dirPath)
if err != nil {
return nil, err
}
fileNames, err := f.Readdirnames(-1)
f.Close()
if err != nil {
return nil, err
}
return fileNames, nil
}
func listFilesByPrefix(dirPath string, prefix string) ([]string, error) {
fileNames, err := listFiles(dirPath)
if err != nil {
return nil, err
}
var filteredFileNames []string
for _, name := range fileNames {
if strings.HasPrefix(name, prefix) {
filteredFileNames = append(filteredFileNames, name)
}
}
return filteredFileNames, nil
}
// encoding & decoding util
func encode(message proto.Message) ([]byte, error) {
encoder := codec.NewJSONPBEncoder()
return encoder.Encode(message)
}
func encodeHistories(histories []*historypb.History) ([]byte, error) {
encoder := codec.NewJSONPBEncoder()
return encoder.EncodeHistories(histories)
}
func decodeVisibilityRecord(data []byte) (*archiverspb.VisibilityRecord, error) {
record := &archiverspb.VisibilityRecord{}
encoder := codec.NewJSONPBEncoder()
err := encoder.Decode(data, record)
if err != nil {
return nil, err
}
return record, nil
}
func serializeToken(token interface{}) ([]byte, error) {
if token == nil {
return nil, nil
}
return json.Marshal(token)
}
func deserializeGetHistoryToken(bytes []byte) (*getHistoryToken, error) {
token := &getHistoryToken{}
err := json.Unmarshal(bytes, token)
return token, err
}
func deserializeQueryVisibilityToken(bytes []byte) (*queryVisibilityToken, error) {
token := &queryVisibilityToken{}
err := json.Unmarshal(bytes, token)
return token, err
}
// File name construction
func constructHistoryFilename(namespaceID, workflowID, runID string, version int64) string {
combinedHash := constructHistoryFilenamePrefix(namespaceID, workflowID, runID)
return fmt.Sprintf("%s_%v.history", combinedHash, version)
}
func constructHistoryFilenamePrefix(namespaceID, workflowID, runID string) string {
return strings.Join([]string{hash(namespaceID), hash(workflowID), hash(runID)}, "")
}
func constructVisibilityFilename(closeTimestamp *time.Time, runID string) string {
return fmt.Sprintf("%v_%s.visibility", timestamp.TimeValue(closeTimestamp).UnixNano(), hash(runID))
}
func hash(s string) string {
return fmt.Sprintf("%v", farm.Fingerprint64([]byte(s)))
}
// Validation
func validateDirPath(dirPath string) error {
if len(dirPath) == 0 {
return errEmptyDirectoryPath
}
info, err := os.Stat(dirPath)
if os.IsNotExist(err) {
return nil
}
if err != nil {
return err
}
if !info.IsDir() {
return errDirectoryExpected
}
return nil
}
// Misc.
func extractCloseFailoverVersion(filename string) (int64, error) {
filenameParts := strings.FieldsFunc(filename, func(r rune) bool {
return r == '_' || r == '.'
})
if len(filenameParts) != 3 {
return -1, errors.New("unknown filename structure")
}
return strconv.ParseInt(filenameParts[1], 10, 64)
}
func historyMutated(request *archiver.ArchiveHistoryRequest, historyBatches []*historypb.History, isLast bool) bool {
lastBatch := historyBatches[len(historyBatches)-1].Events
lastEvent := lastBatch[len(lastBatch)-1]
lastFailoverVersion := lastEvent.GetVersion()
if lastFailoverVersion > request.CloseFailoverVersion {
return true
}
if !isLast {
return false
}
lastEventID := lastEvent.GetEventId()
return lastFailoverVersion != request.CloseFailoverVersion || lastEventID+1 != request.NextEventID
}
func contextExpired(ctx context.Context) bool {
select {
case <-ctx.Done():
return true
default:
return false
}
}
|
Watching the debate last night was like watching a scene out of a movie. If you’ve seen ‘Good Will Hunting’, there is a scene where Chuckie (played by Ben Affleck) sits in for Will (Matt Damon) at a job interview with some think tank geeks. Chuckie talks a great game but clearly has no idea what he’s talking about.
That was Donald Trump on the debate stage last night. Marco Rubio, Ted Cruz and John Kasich ran circles around Trump explaining policy positions with attention to detail and with confidence. Then there was Donald Trump. If he would have said, “Pull my finger”, to Jake Tapper, it would have likely been the smartest thing he said all night.
There were several jaw dropping moments of imbecility provided for the audience by Trump and yet somehow, Trump earned an ‘A’ from frequent Morning Joe guest, Mark Halperin. That just confirms what I said about some in the media “propping up” the clown.
Here they are (and I am pulling these directly from the CNN Transcript):
1. On Israel:
TRUMP: First of all, there’s nobody on this stage that’s more pro Israel than I am. OK. There’s nobody. I am pro-Israel. I was the grand marshall, not so long ago, of the Israeli Day Parade down 5th avenue. I’ve made massive contributions to Israel. I have a lot of — I have tremendous love for Israel.
Emphasis mine. He actually said he was better for Israel because he was in a parade. And if you notice, he was a second away from saying he has a lot of Jewish friends and thought better of it.
2. On Tiananmen Square:
TAPPER: Mr. Trump, some of your Republican critics have expressed concern about comments you have made praising authoritarian dictators. You have said positive things about Putin as a leader and about China’s massacre of pro-democracy protesters at Tiananmen Square, you’ve said: “When the students poured into Tiananmen Square, the Chinese government almost blew it, then they were vicious, they were horrible, but they put it down with strength. That shows you the power of strength.” How do you respond… TRUMP: That doesn’t mean I was endorsing that. I was not endorsing it. I said that is a strong, powerful government that put it down with strength. And then they kept down the riot. It was a horrible thing. It doesn’t mean at all I was endorsing it.
Emphasis mine. Riot? He actually called pro-democracy protests a “riot.” The protesters were calling for government accountability, freedom of the press and freedom of speech. Some “riot.”
3. On Social Security:
BASH: Senator Rubio, I know you want to get in. Hang on one second, I just want to follow up with Mr. Trump. You’re talking about waste, fraud and abuse, but an independent bipartisan organization, the Committee For a Responsible Federal Budget, says improper payments like you’re talking about, that would only save about $3 billion, but it would take $150 billion to make Social Security solvent. So how would you find that other $147 billion? TRUMP: Because they don’t cover most of the subjects. We’re the policemen of the world. We take care of the entire world. We’re going to have a stronger military, much stronger. Our military is depleted. But we take care of Germany, we take care of Saudi Arabia, we take care of Japan, we take care of South Korea. We take — every time this maniac from North Korea does anything, we immediately send our ships. We get virtually nothing. We have 28,000 soldiers on the line, on the border between North and South Korea. We have so many places. Saudi Arabia was making a billion dollars a day, and we were getting virtually nothing to protect them.
Foreign aid is a great bogey-man, but as Senator Rubio pointed out moments later, that aid makes up less than one percent of our budget. There is no way Social Security’s shortfall can be made up by yanking our troops out of South Korea and Germany. He pulled that out of his ass because he doesn’t have a clue.
These aren’t even very difficult issues. But watching Trump try to talk his way through these issues was painful. He’s like the student who didn’t do the required reading, trying in vain to fake it while the teacher sits there with a bemused smirk knowing the kid has no clue what he’s saying.
The man is not fit to be President. |
Association Between the Functional Independence and Difficulty Scale and Physical Functions in Community-Dwelling Japanese Older Adults Using Long-term Care Services Background and Purpose: The Functional Independence and Difficulty Scale (FIDS), a newly developed basic activities of daily living (BADL) assessment tool, assesses both independence and subjective difficulty of BADL performance. This patient-reported outcome measure has been shown to have acceptable internal consistency, concurrent validity, and reliability. However, little is known about the relationship between FIDS and objective measures of physical function among older Japanese adults using long-term care insurance services. This study aimed to reveal the relationship between FIDS and physical functions and to examine the concurrent validity of FIDS against physical functions. Methods: Participants of this cross-sectional, correlational research study included community-dwelling Japanese adults aged 65 years or older and certified as long-term care insurance service users with musculoskeletal disease, internal disease, cerebrovascular diseases without observable motor paralysis, and others. Data on physical functions, including muscle strength (grip strength and isometric knee extension muscle strength [IKEMS]), flexibility (range of motion [ROM] of hip flexion and knee flexion), balance (Modified Functional Reach Test [M-FRT]) and gait performance (timed 2.4-m walk), and BADL performance assessed by FIDS, were obtained. Associations between FIDS scores and physical functions were determined by Spearman correlation coefficient and partial correlations after controlling for subject age and sex. Results: Data were collected on 53 participants (mean age = 81.9 years; 62.3% women). 
Spearman partial correlation coefficients controlled for sex and age between FIDS score and grip strength, IKEMS, ROM of hip flexion, ROM of knee flexion, M-FRT, and timed 2.4-m walk were 0.47 (P =.001), 0.44 (P =.001), 0.29 (P =.04), −0.05 (P =.73), 0.51 (P <.001), and −0.64 (P <.001), respectively. The strength of association was moderate for the M-FRT and timed 2.4-m walk and low for grip strength and IKEMS. However, ROM of knee flexion showed no significant association with FIDS, and ROM of hip flexion showed only a negligible association. Conclusions: The FIDS, a patient-reported BADL assessment tool, mainly reflected balance and gait performance and had concurrent validity against objective measures of balance and gait performance. 
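The partial Spearman correlations reported above (rank correlation after controlling for age and sex) can be sketched as follows. This is a simplified illustration — plain ordinal ranks without tie correction, covariates removed by linear regression on their ranks — not the authors' statistical code:

```python
import numpy as np

def _rank(a):
    # Ordinal ranks; ties are not averaged (a simplification).
    r = np.empty(len(a))
    r[np.argsort(a)] = np.arange(len(a))
    return r

def spearman_partial(x, y, covariates):
    """Spearman correlation of x and y after regressing out covariates.
    Rank-transform everything, residualize on the covariate ranks (plus an
    intercept), then take the Pearson correlation of the residuals."""
    C = np.column_stack([np.ones(len(x))] + [_rank(c) for c in covariates])
    def resid(v):
        beta, *_ = np.linalg.lstsq(C, v, rcond=None)
        return v - C @ beta
    rx, ry = resid(_rank(x)), resid(_rank(y))
    return float(rx @ ry / np.sqrt((rx @ rx) * (ry @ ry)))
```

A monotone relationship yields a coefficient near ±1 regardless of the covariates; production analyses should instead use a statistics package that handles tied ranks and reports P values.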
import path from 'path'
import { GeneratorConfig } from '../../../types'
import getTemplatesDirPath from '../../utils/getTemplatesDirPath'
/* istanbul ignore next */
const workspaceGeneratorConfig: GeneratorConfig = {
prompts: [
{
type: 'input',
name: 'workspaceName',
required: true,
message: 'Workspace name',
initial: 'my-workspace',
},
{
type: 'input',
name: 'organizationName',
required: true,
message: 'Organization name',
initial: 'my-organization',
},
{
type: 'input',
name: 'workspaceDescription',
required: true,
message: 'Workspace description',
initial: 'Enjoyable tool for programmers',
},
],
actions: answers => {
const newWorkspaceDir = `./${answers.workspaceName}`
return [
{
type: 'copy',
files: {
[path.join(getTemplatesDirPath(), '/create/workspace/')]:
newWorkspaceDir,
},
},
{
type: 'transform',
files: `${newWorkspaceDir}/*`,
data: {
...answers,
currentYear: new Date().getFullYear(),
},
},
{
type: 'rename',
files: {
[`${newWorkspaceDir}/.husky/_gitignore`]: `${newWorkspaceDir}/.husky/.gitignore`,
[`${newWorkspaceDir}/_gitignore`]: `${newWorkspaceDir}/.gitignore`,
[`${newWorkspaceDir}/_package.json`]: `${newWorkspaceDir}/package.json`,
[`${newWorkspaceDir}/_tsconfig.json`]: `${newWorkspaceDir}/tsconfig.json`,
[`${newWorkspaceDir}/_lerna.json`]: `${newWorkspaceDir}/lerna.json`,
[`${newWorkspaceDir}/_github`]: `${newWorkspaceDir}/.github`,
},
},
{
type: 'exec',
command: 'git init',
cwd: newWorkspaceDir,
},
{
type: 'exec',
command: 'npm i',
cwd: newWorkspaceDir,
},
]
},
}
export default workspaceGeneratorConfig
|
Q:
Fridge and freezer on 15amp circuit break via 14-2 wiring
I got a new freezer and fridge, each drawing 0.8 amp and 1.2 amps respectively.
I'm going to use 14-2 NM-B wire and put them both on a 15amp circuit breaker.
Together they're drawing just 2 amps and it sounds like they should be fine, but I wonder about the starting current. I could not find anything in their manuals, but some googling said the starting (inrush) current can be up to 10 to 12 times the running current. So here are my questions:
- Is my choice of wire and circuit breaker going to make any issue for my appliances?
- If so, should I go for a different type of wire and higher rated circuit breaker or just put them on separate breakers?
UPDATE:
Here is an update after almost a year! I used the above-mentioned circuit breaker and wire for my set up and after about a year, there has not been any issue and both appliances are still working fine!
A:
You will be fine on the same breaker.
Starting current for refrigerators and freezers is not as bad as you think, especially for new appliances. They are very efficient and will rarely start at exactly the same time.
Good luck! |
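As a back-of-the-envelope sanity check (not an electrical-code calculation — the 80% continuous-load figure and the ~10× inrush multiplier are common rules of thumb assumed here, not values from the appliance manuals):

```python
def breaker_headroom_a(breaker_a, running_loads_a, continuous=True):
    """Remaining capacity on a breaker after the steady running loads.
    Continuous loads are held to 80% of the breaker rating (rule of thumb)."""
    usable = breaker_a * 0.8 if continuous else breaker_a
    return usable - sum(running_loads_a)

def worst_case_start_a(running_loads_a, starting_load_a, inrush_multiplier=10):
    """Momentary draw if one compressor starts while the others keep running."""
    others = sum(running_loads_a) - starting_load_a
    return others + starting_load_a * inrush_multiplier

headroom = breaker_headroom_a(15, [1.2, 0.8])   # 12.0 - 2.0 = 10.0 A to spare
peak = worst_case_start_a([1.2, 0.8], 1.2)      # 0.8 + 12.0 = 12.8 A, under 15 A
```

Even with a generous 10× inrush on the larger compressor, the momentary draw stays below the 15 A trip rating, consistent with the accepted answer.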
Joshua Friedman
Joshua Friedman is an American journalist who worked 32 years for newspapers and won a Pulitzer Prize in 1985. He formerly chaired the Committee to Protect Journalists and directed International Programs at Columbia University Graduate School of Journalism. At the journalism school he also directed the Maria Moors Cabot Prize, inaugurated in 1939, which annually recognizes outstanding coverage of the Americas (the Western hemisphere) by journalists based there. He has worked at Columbia as either full-time or adjunct faculty since 1992. The European Journalism Centre (EJC) and the Georgian Institute of Public Affairs (GIPA) established the annual GIPA-Friedman prize in 2012 to honor excellence in journalism in the South Caucasus country of Georgia. Friedman is on the board of the Committee to Protect Journalists and served as an early chair of CPJ. He is on the advisory board of the Dart Center on Journalism and Trauma. Friedman currently serves as Vice-Chair at the Carey Institute for Global Good and is also on the advisory board of the Institute's Nonfiction Program.
Education
Friedman is a 1968 graduate of the Columbia School of Journalism.
Awards and honors
Working for Newsday in 1984, Friedman, fellow reporter Dennis Bell, and photographer Ozier Muhammad created a series of articles "on the plight of the hungry in Africa", namely the 1983–1985 famine in Ethiopia, for which they won the annual Pulitzer Prize for International Reporting in 1985. Earlier, while at the Philadelphia Inquirer, he shared in the paper's 1979 Pulitzer for its Three Mile Island coverage.
In 2013, the Columbia University School of Journalism honored him with an Alumni Award ("The Alumni Awards are presented annually for a distinguished journalism career in any medium, an outstanding single journalistic accomplishment, a notable contribution to journalism education or an achievement in related fields.") |
Comments on "A Simple Model for Simulating Tornado Damage in Forests" Holland et al. present a very interesting study on the development and evaluation of a simple analytical model of tornado vortex flow and its impact on specified forest configurations. The authors also make reference to earlier work by Johannes Peter Letzmann (1885–1971) on near-surface tornado wind fields, dating back to 1923 and reviewed, for example, by Peterson (1992a). The authors are correct to say that Letzmann did not include information on the physics of tree response (which was unavailable in his time), even though he considered the question of whether and how twisted tree snapping occurred or how the observed tree damage should be interpreted. However, some other statements by Holland et al. about Letzmann's work can be misleading. The review by Peterson (1992a) alone is certainly not sufficient to assess fully the analytical model developed by Letzmann in his Ph.D. thesis and later summarized in a journal article (Letzmann 1925). It appears as if Holland et al., based on the limited information they had available on Letzmann's model, reinvented parts of it. Thus it comes as no surprise that some of Holland et al.'s results are somewhat analogous to the hand-drawn diagrams of Letzmann (p. 1598): the underlying model is the same. The fuzzy wording by Holland et al. may have been influenced by their references: Letzmann was cited by Hall and Brewer, yet they only referred to somewhat similar work by Letzmann, and Peterson (1992a) mentions Letzmann's hand calculations. When Holland et al. refer to Letzmann's work as experimenting with various model parameters and emphasize several times his hand-drawn diagrams and hand calculations, the reader may get the false impression that Letzmann had received his results merely by chance, instead of by the rigorous analytical calculations he performed in his Ph.D. thesis and that also extend the wind-field description by Holland et al. 
Furthermore, hand calculations and hand-drawn diagrams were state of the art in the 1920s and 1930s, just as publishing scientific work in the German language was. Nevertheless, the authors must be highly credited for their tying in with Letzmann's research and augmenting it by the modeling of tree response. The purpose of our comment is to draw attention to
Corresponding author address: Dr. Nikolai Dotzek, DLR-Institut für Physik der Atmosphäre, Oberpfaffenhofen, 82234 Wessling, Germany. E-mail: nikolai.dotzek@dlr.de
726 JOURNAL OF APPLIED METEOROLOGY AND CLIMATOLOGY VOLUME 47 
The U.S. Environmental Protection Agency on Wednesday ordered General Electric Corp. to pay a $1.1 million fine for improper disposal of toxic chemicals at its Southwest Side plant.
For almost two years, the giant appliance and heavy-equipment manufacturer removed polychlorinated biphenyls from old transformers at its plant at 6045 S. Nottingham Ave. without prior government approval, the EPA said.
Although GE has been cleaning PCBs out of transformers at the plant since the late 1970s, the fine involves a new method for handling the chemicals that the company used between June 1987 and February 1989 without first obtaining government approval, the agency said.
The now-banned PCBs, a suspected cancer-causing substance, were once widely used as a non-flammable insulator in electrical engines and equipment such as transformers.
The violation has been described as ''procedural'' because last November the agency approved the new cleaning method, which flushes liquid PCBs out of old transformers with freon instead of an oil solvent, the original method still used at the plant.
The freon system is more efficient, the company says, because the PCBs can be distilled out of the freon. That reduces the amount of liquid PCBs that has to be shipped to a hazardous-waste incinerator.
But in August, GE said it would not use the new system at the Southwest Side site because of widespread community opposition, which started when word of the PCB-cleaning operation got out.
Residents said they had not known the PCBs were being handled at the plant and have since said the plant is not zoned properly for the PCB operation, which they want shut down, according to Mike Czosnyka, chairman of the neighborhood group Citizens Against Pollution.
The city is still considering the zoning charge.
GE spokesman Len Doviak said the company contends it did not need prior approval before using the new system because it is a method for removing PCBs from equipment, not for disposing of the chemicals, and therefore comes under different regulations.
Doviak said GE will appeal the fine. |
/* Test harness in the style of software-verification benchmarks:
   a failed check jumps to the ERROR label, which loops forever,
   so a verifier flags any reachable failing assertion. */
void check(int cond) {
if (!cond) {
ERROR:
goto ERROR;
}
}
void main() {
    check(((char)-1)==~((char)0));
    check(((char)-6)==~((char)5));
    check(((char)-100)==~((char)99));
    check(((char)-128)==~((char)127));
    check(((unsigned char)255)!=~((unsigned char)0));
    check(((int)-1)==~((unsigned char)0));
    check(((unsigned char)250)!=~((unsigned char)5));
    check(((int)-6)==~((unsigned char)5));
    check(((unsigned char)156)!=~((unsigned char)99));
    check(((int)-100)==~((unsigned char)99));
    check(((unsigned char)128)!=~((unsigned char)127));
    check(((int)-128)==~((unsigned char)127));
    check(((short)-1)==~((short)0));
    check(((short)-6)==~((short)5));
    check(((short)-32768)==~((short)32767));
    check(((unsigned short)65535)!=~((unsigned short)0));
    check(((int)-1)==~((unsigned short)0));
    check(((unsigned short)65530)!=~((unsigned short)5));
    check(((int)-6)==~((unsigned short)5));
    check(((unsigned short)32768)!=~((unsigned short)32767));
    check(((int)-32768)==~((unsigned short)32767));
    check(((long long)-1)==~((long long)0));
    check(((long long)-6)==~((long long)5));
    check(((long long)-9223372036854775801LL)==~((long long)9223372036854775800LL));
    /* LLONG_MIN spelled as an expression: the literal 9223372036854775808
       does not fit in long long, so negate the largest value and subtract 1. */
    check((-9223372036854775807LL - 1)==~((long long)9223372036854775807LL));
    check(((unsigned long long)18446744073709551615ULL)==~((unsigned long long)0));
    check(((unsigned long long)18446744073709551610ULL)==~((unsigned long long)5));
    check(((unsigned long long)9223372036854775808ULL)==~((unsigned long long)9223372036854775807ULL));
}
|
Dan Froomkin - Would Bush Rather Be Fishing?
Is it possible that President Bush doesn't really enjoy his job?
Asked by a German tabloid to name the most wonderful moment of his presidency, Bush on Friday said it came while he was on vacation, fishing on his private lake.
Bush was obviously joking -- to a point. But the thing about Bush is that he has stock answers to all the expected questions. So it's the unpredictable ones where he's the most revealing.
So what to make of his response to the German question? Maybe it was just a little innocent clowning. But maybe it emerged from his own candid awareness that historians looking back at his presidency may see an obvious low point (or two or three), but no equally obvious high points.
Or maybe, at heart, he'd rather be fishing.
Here is the transcript of Bush's interview with Kai Diekmann of the pro-Bush, breast-baring German tabloid newspaper, Bild. As it happens, Bush also sheds more light on his delayed reaction on the morning of September 11, 2001.
"Q Three last very short questions. What was the most wonderful moment in your terms of being President so far, and what was the most awful moment?
" THE PRESIDENT: The most awful moment was September the 11th, 2001.
" Q The famous picture when somebody gave you the information?
" THE PRESIDENT: Yes, that. I think, like all of us, it took a while for the -- it was more than a moment. It was the event and the aftermath. On a situation like that, it takes a period to understand exactly what was going on. When somebody says, America is under attack, and -- you've got to fully understand what that meant. And the information coming was haphazard at best for a while. We weren't sure if the State Department got hit. I'd heard the White House had got attacked. Of course, I was worried that -- my family was here.
" And so I would say the toughest moment of all was after the whole reality sunk in and I was trying to help the nation understand what was going on, and at the same time, be empathetic for those who had lost lives. |
//---------------------------------------------------------------------------
#include "satdlg.h"
#include <QShowEvent>
//---------------------------------------------------------------------------
SatDialog::SatDialog(QWidget *parent)
    : QDialog(parent)
{
    setupUi(this);

    for (int i=0;i<36;i++) ValidSat[i]=1;

    connect(BtnOk,SIGNAL(clicked(bool)),this,SLOT(BtnOkClick()));
    connect(BtnCancel,SIGNAL(clicked(bool)),this,SLOT(BtnCancelClick()));
    connect(BtnChkAll,SIGNAL(clicked(bool)),this,SLOT(BtnChkAllClick()));
    connect(BtnUnchkAll,SIGNAL(clicked(bool)),this,SLOT(BtnUnchkAllClick()));
}
//---------------------------------------------------------------------------
void SatDialog::showEvent(QShowEvent *event)
{
    QCheckBox *sat[]={
        PRN01,PRN02,PRN03,PRN04,PRN05,PRN06,PRN07,PRN08,PRN09,PRN10,
        PRN11,PRN12,PRN13,PRN14,PRN15,PRN16,PRN17,PRN18,PRN19,PRN20,
        PRN21,PRN22,PRN23,PRN24,PRN25,PRN26,PRN27,PRN28,PRN29,PRN30,
        PRN31,PRN32,SBAS,GLO,GAL,PRN33
    };
    if (event->spontaneous()) return;

    for (int i=0;i<36;i++) sat[i]->setChecked(ValidSat[i]);
}
//---------------------------------------------------------------------------
void SatDialog::BtnChkAllClick()
{
    QCheckBox *sat[]={
        PRN01,PRN02,PRN03,PRN04,PRN05,PRN06,PRN07,PRN08,PRN09,PRN10,
        PRN11,PRN12,PRN13,PRN14,PRN15,PRN16,PRN17,PRN18,PRN19,PRN20,
        PRN21,PRN22,PRN23,PRN24,PRN25,PRN26,PRN27,PRN28,PRN29,PRN30,
        PRN31,PRN32,SBAS,GLO,GAL,PRN33
    };
    for (int i=0;i<36;i++) sat[i]->setChecked(true);
}
//---------------------------------------------------------------------------
void SatDialog::BtnUnchkAllClick()
{
    QCheckBox *sat[]={
        PRN01,PRN02,PRN03,PRN04,PRN05,PRN06,PRN07,PRN08,PRN09,PRN10,
        PRN11,PRN12,PRN13,PRN14,PRN15,PRN16,PRN17,PRN18,PRN19,PRN20,
        PRN21,PRN22,PRN23,PRN24,PRN25,PRN26,PRN27,PRN28,PRN29,PRN30,
        PRN31,PRN32,SBAS,GLO,GAL,PRN33
    };
    for (int i=0;i<36;i++) sat[i]->setChecked(false);
}
//---------------------------------------------------------------------------
void SatDialog::BtnOkClick()
{
    QCheckBox *sat[]={
        PRN01,PRN02,PRN03,PRN04,PRN05,PRN06,PRN07,PRN08,PRN09,PRN10,
        PRN11,PRN12,PRN13,PRN14,PRN15,PRN16,PRN17,PRN18,PRN19,PRN20,
        PRN21,PRN22,PRN23,PRN24,PRN25,PRN26,PRN27,PRN28,PRN29,PRN30,
        PRN31,PRN32,SBAS,GLO,GAL,PRN33
    };
    for (int i=0;i<36;i++) ValidSat[i]=sat[i]->isChecked();

    accept();
}
//---------------------------------------------------------------------------
void SatDialog::BtnCancelClick()
{
    reject();
}
//---------------------------------------------------------------------------
|
Comparative study of the luminous environments of vernacular architectures in China and the United States at 30th parallel north ABSTRACT The vernacular architectures of the Eastern and Western worlds are closely related to their respective unique cultures, local climates and environments. To spark new ideas on the efficient use of the luminous environment and to help modern architects interpret such vernacular architectures, the luminous environments of two typical vernacular buildings, one in China and one in the United States, were studied. By combining experimental measurements and computer simulations, the luminous environments were explored with a focus on natural illumination conditions and the effects of windows and orientation. The illumination conditions of the China Meishan Cultural Park farmhouse (designated as the CMCP farmhouse) in China and the Magnolia Mound Plantation house (designated as the MMP house) in the United States were found to generally meet the IES standards. Furthermore, less artificial lighting and more uniform illumination across the entire gallery were advantages of the MMP house. The lateral windows of the house were beneficial in that they provided more natural lighting and were more conducive to the spread of illumination, but this was not the case for the dormer window. To some extent, the east-west orientation favoured more natural illumination, while the illumination distribution in the south-north orientation was more uniform.
A prospective randomised controlled study on the effects of myoinositol on ovarian functions and metabolic factors in women with polycystic ovarian syndrome Polycystic ovarian syndrome (PCOS) is one of the most common endocrine disorders, affecting five to ten percent of women of reproductive age. Typically, PCOS is characterized by hyperandrogenism, chronic anovulation, polycystic ovaries on ultrasound evaluation and dermatological problems such as acne, hirsutism and seborrhoea. It is one of the most common causes of female infertility. The hallmark of this condition is excess androgen production, mainly by the ovaries. This excess androgen interferes with the reproductive, endocrine and metabolic functions of the body. Women with PCOS usually suffer from infertility and menstrual cycle disorders ranging from oligomenorrhoea to amenorrhoea. In addition, women with PCOS are, in the long run, at increased risk of metabolic complications such as diabetes, hypertension, dyslipidaemia, and cerebrovascular and cardiovascular accidents.
A Country Proud to be Democratic: Demanding Democracy in Nineteenth-Century Chile Abstract This article examines criminal court cases about conflicts related to the electoral process at all stages from determining how elections would be administered to disputes about results. It argues that contemporary allegations of fraud, corruption, and misconduct during elections can inform us not only of the anomalies but also of the ways in which elections worked as expected, according to the laws and norms of the time. We can also see how participants defined and defended the democratic ideal. A careful reading of participants' complaints to local authorities about electoral law violations, including incidents that ended in violence, can provide important insights into partisanship, the mechanics of elections, and the formation of political culture. The use of democratic rhetoric in these complaints suggests a consensus around a democratic ideal based on free and fair elections even though that ideal was not yet realized. Through the analysis of criminal court cases, it also introduces a broad range of questions about how nineteenth-century Chileans understood and practiced democracy that suggest avenues for further research. |
#include "headers.h"
using PC=Pokitto::Core;
using PD=Pokitto::Display;
using PB=Pokitto::Buttons;
bool (*const PTAD::BattleEvent::execEvent[]) () =
{
PTAD::BattleEvent::event_shakeScreen, //0x00 ( 0)
PTAD::BattleEvent::event_flashBattler, //0x01 ( 1)
PTAD::BattleEvent::event_flashUi, //0x02 ( 2)
PTAD::BattleEvent::event_basicAttack, //0x03 ( 3)
PTAD::BattleEvent::event_useSkill, //0x04 ( 4)
PTAD::BattleEvent::event_castSpell, //0x05 ( 5)
PTAD::BattleEvent::event_playSoundEffect, //0x06 ( 6)
PTAD::BattleEvent::event_bufferMessage, //0x07 ( 7)
PTAD::BattleEvent::event_bufferValue, //0x08 ( 8)
PTAD::BattleEvent::event_BufferCharacter, //0x09 ( 9)
PTAD::BattleEvent::event_showMessage, //0x0A (10)
PTAD::BattleEvent::event_jump, //0x0B (11)
PTAD::BattleEvent::event_jumpIf, //0x0C (12)
PTAD::BattleEvent::event_jumpIfStatus, //0x0D (13)
PTAD::BattleEvent::event_jumpIfStat, //0x0E (14)
PTAD::BattleEvent::event_changeBattlerSprite, //0x0F (15)
PTAD::BattleEvent::event_changeBackgroundImage, //0x10 (16)
PTAD::BattleEvent::event_playBattleAnimation, //0x11 (17)
PTAD::BattleEvent::event_waitFrames, //0x12 (18)
PTAD::BattleEvent::event_waitButtons, //0x13 (19)
PTAD::BattleEvent::event_inflictStatus, //0x14 (20)
PTAD::BattleEvent::event_consumeMP, //0x15 (21)
PTAD::BattleEvent::event_random, //0x16 (22)
PTAD::BattleEvent::event_endEventProcessing //0x17 (23)
};
DataPack::PackedFile PTAD::BattleEvent::eventFile;
int PTAD::BattleEvent::damageDealt = 0;
uint32_t PTAD::BattleEvent::currentBufferPos = 0;
int32_t PTAD::BattleEvent::eventPos = 0;
int32_t PTAD::BattleEvent::currentEvent = 0;
uint8_t PTAD::BattleEvent::counters[4] = {0, 0, 0, 0};
#ifndef POK_SIM
uint8_t *PTAD::BattleEvent::eventBuffer = (uint8_t*)PTAD::EVENT_BUFFER_MEMORY_ADDRESS;
#else
uint8_t PTAD::BattleEvent::eventBuffer[PTAD::EVENT_BUFFER_MEMORY_SIZE];
#endif
bool PTAD::BattleEvent::atEnd = true;
void PTAD::BattleEvent::setup(DataPack::PackedFile &file)
{
eventFile = file;
}
void PTAD::BattleEvent::begin(uint32_t offset)
{
eventFile.seek(sizeof(PTAD::Battle::EnemyData));
currentBufferPos = sizeof(PTAD::Battle::EnemyData);
PTAD::dataFile->readBytes(&eventFile, eventBuffer, PTAD::EVENT_BUFFER_MEMORY_SIZE);
eventPos = offset;
atEnd = false;
resetCounters();
}
bool PTAD::BattleEvent::update()
{
if (atEnd)
return true;
do
{
currentEvent = eventPos;
} while (execEvent[nextByte()]());
return atEnd;
}
void PTAD::BattleEvent::resetCounters()
{
PTAD::globalCounter = 0;
counters[0] = 0;
counters[1] = 0;
counters[2] = 0;
counters[3] = 0;
}
uint8_t PTAD::BattleEvent::nextByte()
{
bool updateBuffer = false;
while (eventPos < 0)
{
eventPos += PTAD::EVENT_BUFFER_MEMORY_SIZE;
currentEvent += PTAD::EVENT_BUFFER_MEMORY_SIZE;
currentBufferPos -= PTAD::EVENT_BUFFER_MEMORY_SIZE;
updateBuffer = true;
}
while (eventPos >= PTAD::EVENT_BUFFER_MEMORY_SIZE)
{
eventPos -= PTAD::EVENT_BUFFER_MEMORY_SIZE;
currentEvent -= PTAD::EVENT_BUFFER_MEMORY_SIZE;
currentBufferPos += PTAD::EVENT_BUFFER_MEMORY_SIZE;
updateBuffer = true;
}
if (updateBuffer)
{
eventFile.seek(currentBufferPos);
PTAD::dataFile->readBytes(&eventFile, eventBuffer, PTAD::EVENT_BUFFER_MEMORY_SIZE);
}
return eventBuffer[eventPos++];
}
void PTAD::BattleEvent::readValue(uint8_t *value, size_t size)
{
for (size_t i = 0; i < size; ++i)
*value++ = nextByte();
}
bool PTAD::BattleEvent::event_shakeScreen()
{
PTAD::Battle::shakeScreen = nextByte();
PTAD::Battle::shakeRate = nextByte();
return true;
}
bool PTAD::BattleEvent::event_flashBattler()
{
uint8_t duration = nextByte();
uint8_t color1 = nextByte();
uint8_t color2 = nextByte();
uint8_t color3 = nextByte();
PTAD::Battle::flashBattlerSprite(duration, color1, color2, color3);
return true;
}
bool PTAD::BattleEvent::event_flashUi()
{
PTAD::Battle::flashUi = nextByte();
PTAD::Ui::fgColor = nextByte();
return true;
}
bool PTAD::BattleEvent::event_basicAttack()
{
damageDealt = PTAD::Battle::attackDamageDealt(PTAD::Battle::getEnemyAttack(), PTAD::Battle::getEnemyAgility(), PTAD::Battle::getPlayerDefense(), PTAD::Battle::getPlayerAgility(), (PTAD::Battle::playerSpellResistance >> 14) & 3);
if (damageDealt > 0)
{
PTAD::Battle::shakeScreen = 1;
PTAD::Battle::shakeRate = 8;
PTAD::Battle::flashUi = 8;
PTAD::Ui::fgColor = 175;
PTAD::Music::playSFX(PTAD::Music::SFX_HIT);
PD::shiftTilemap(random(0, 8), random(0, 8));
PTAD::Battle::flashPlayerSprite(16, 8, 88, 168);
if (random(0, 100) < 5)
{
counters[3] = 2;
damageDealt *= 2;
PTAD::Battle::playerStatus &= ~(3 << PTAD::Battle::STATUS_FOCUSED);
PTAD::Dialog::addMessage(PTAD::Dialog::MESSAGES_BATTLE_CRITICAL_HIT);
PTAD::Dialog::bufferNumber(damageDealt, 100);
PTAD::Dialog::addMessage(PTAD::Dialog::MESSAGES_BATTLE_DAMAGE_TAKEN_END);
}
else
{
counters[3] = 1;
PTAD::Dialog::addMessage(PTAD::Dialog::MESSAGES_BATTLE_DAMAGE_TAKEN_BEGIN);
PTAD::Dialog::bufferNumber(damageDealt, 100);
PTAD::Dialog::addMessage(PTAD::Dialog::MESSAGES_BATTLE_DAMAGE_TAKEN_END);
}
}
else
{
counters[3] = 0;
PTAD::Music::playSFX(PTAD::Music::SFX_MISS);
PTAD::Dialog::addMessage(PTAD::Dialog::MESSAGES_BATTLE_MISS);
}
PTAD::Dialog::beginMessage();
if (damageDealt >= PTAD::Game::player.hp)
{
atEnd = true;
PTAD::Game::player.hp = 0;
}
else
PTAD::Game::player.hp -= damageDealt;
return false;
}
bool PTAD::BattleEvent::event_useSkill() //TODO
{
return true;
}
bool PTAD::BattleEvent::event_castSpell() //TODO
{
return true;
}
bool PTAD::BattleEvent::event_playSoundEffect()
{
PTAD::Music::playSFX(nextByte());
return true;
}
bool PTAD::BattleEvent::event_bufferMessage()
{
uint8_t length = nextByte();
for (uint8_t i = 0; i < length; ++i)
PTAD::Dialog::bufferCharacter(nextByte());
return true;
}
bool PTAD::BattleEvent::event_bufferValue() //TODO
{
return true;
}
bool PTAD::BattleEvent::event_BufferCharacter()
{
PTAD::Dialog::bufferCharacter(nextByte());
return true;
}
bool PTAD::BattleEvent::event_showMessage()
{
PTAD::Dialog::beginMessage();
return false;
}
bool PTAD::BattleEvent::event_jump()
{
int32_t offset;
readValue((uint8_t*)&offset, sizeof(offset));
eventPos = offset - (int32_t)currentBufferPos + sizeof(PTAD::Battle::EnemyData);
return eventPos > currentEvent;
}
bool PTAD::BattleEvent::event_jumpIf()
{
int32_t truePos, falsePos;
uint8_t counter, value, condition;
bool test = false; // default in case the condition byte is unrecognized
readValue((uint8_t*)&truePos, sizeof(truePos));
readValue((uint8_t*)&falsePos, sizeof(falsePos));
counter = nextByte();
value = nextByte();
condition = nextByte();
if (condition == CONDITION_EQUAL_TO)
test = counters[counter] == value;
else if (condition == CONDITION_NOT_EQUAL_TO)
test = counters[counter] != value;
else if (condition == CONDITION_GREATER_THAN)
test = counters[counter] > value;
else if (condition == CONDITION_GREATER_THAN_OR_EQUAL_TO)
test = counters[counter] >= value;
else if (condition == CONDITION_LESS_THAN)
test = counters[counter] < value;
else if (condition == CONDITION_LESS_THAN_OR_EQUAL_TO)
test = counters[counter] <= value;
if (test)
eventPos = truePos - (int32_t)currentBufferPos + sizeof(PTAD::Battle::EnemyData);
else
eventPos = falsePos - (int32_t)currentBufferPos + sizeof(PTAD::Battle::EnemyData);
return eventPos > currentEvent;
}
bool PTAD::BattleEvent::event_jumpIfStatus()
{
int32_t truePos, falsePos;
uint8_t value = nextByte();
uint8_t condition = nextByte();
uint8_t status = (value >> 2) & 7;
uint8_t level = value & 3;
uint8_t current;
bool self = (value & 128) != 0;
bool test = false; // default in case the condition byte is unrecognized
readValue((uint8_t*)&truePos, sizeof(truePos));
readValue((uint8_t*)&falsePos, sizeof(falsePos));
if (self)
current = (PTAD::Battle::enemyStatus >> status) & 3;
else
current = (PTAD::Battle::playerStatus >> status) & 3;
if (condition == CONDITION_EQUAL_TO)
test = current == level;
else if (condition == CONDITION_NOT_EQUAL_TO)
test = current != level;
else if (condition == CONDITION_GREATER_THAN)
test = current > level;
else if (condition == CONDITION_GREATER_THAN_OR_EQUAL_TO)
test = current >= level;
else if (condition == CONDITION_LESS_THAN)
test = current < level;
else if (condition == CONDITION_LESS_THAN_OR_EQUAL_TO)
test = current <= level;
if (test)
eventPos = truePos - (int32_t)currentBufferPos + sizeof(PTAD::Battle::EnemyData);
else
eventPos = falsePos - (int32_t)currentBufferPos + sizeof(PTAD::Battle::EnemyData);
return eventPos > currentEvent;
}
bool PTAD::BattleEvent::event_jumpIfStat()
{
int32_t truePos, falsePos;
uint8_t condition = nextByte();
uint16_t value;
uint16_t current;
bool test = false, hp = (condition & 128) != 0; // test defaults to false for unrecognized conditions
condition &= 0x7F;
readValue((uint8_t*)&value, sizeof(value));
readValue((uint8_t*)&truePos, sizeof(truePos));
readValue((uint8_t*)&falsePos, sizeof(falsePos));
if (hp)
current = PTAD::Battle::enemy->hp;
else
current = PTAD::Battle::enemy->mp;
if (condition == CONDITION_EQUAL_TO)
test = current == value;
else if (condition == CONDITION_NOT_EQUAL_TO)
test = current != value;
else if (condition == CONDITION_GREATER_THAN)
test = current > value;
else if (condition == CONDITION_GREATER_THAN_OR_EQUAL_TO)
test = current >= value;
else if (condition == CONDITION_LESS_THAN)
test = current < value;
else if (condition == CONDITION_LESS_THAN_OR_EQUAL_TO)
test = current <= value;
if (test)
eventPos = truePos - (int32_t)currentBufferPos + sizeof(PTAD::Battle::EnemyData);
else
eventPos = falsePos - (int32_t)currentBufferPos + sizeof(PTAD::Battle::EnemyData);
return eventPos > currentEvent;
}
bool PTAD::BattleEvent::event_changeBattlerSprite()
{
PTAD::battleMonsterID = nextByte();
PTAD::Battle::loadBattlerSprite();
return true;
}
bool PTAD::BattleEvent::event_changeBackgroundImage()
{
PTAD::battleBG = nextByte();
PTAD::Battle::loadBackgroundImage();
return true;
}
bool PTAD::BattleEvent::event_playBattleAnimation()
{
uint32_t animation;
readValue((uint8_t*)&animation, sizeof(animation));
PTAD::BattleAnimation::beginAnimation(animation);
return false;
}
bool PTAD::BattleEvent::event_waitFrames()
{
if (counters[0] == 0)
{
counters[0] = 1;
counters[1] = nextByte() - 1;
eventPos = currentEvent;
return false;
}
else if (counters[1] > 0)
{
--counters[1];
eventPos = currentEvent;
return false;
}
resetCounters();
return true;
}
bool PTAD::BattleEvent::event_waitButtons()
{
if (PTAD::justPressed(PTAD::BTN_MASK_A) || PB::bBtn())
return true;
else
eventPos = currentEvent;
return false;
}
bool PTAD::BattleEvent::event_inflictStatus()
{
uint8_t value = nextByte();
uint8_t successMessageLength = nextByte();
uint8_t failMessageLength = nextByte();
uint8_t status = (value >> 2) & 7;
uint8_t level = value & 3;
bool self = (value & 128) != 0;
if (self)
{
if (status == PTAD::Battle::STATUS_POISON)
PTAD::Battle::enemyStatus &= ~(3 << status);
else if (status == PTAD::Battle::STATUS_SPEED)
{
uint8_t current = (PTAD::Battle::enemyStatus >> PTAD::Battle::STATUS_SPEED) & 3;
PTAD::Battle::enemyStatus &= ~(3 << PTAD::Battle::STATUS_SPEED); // clear the speed bits, mirroring the player-status path
if (current != PTAD::Battle::SPEED_SLOW)
PTAD::Battle::enemyStatus |= (PTAD::Battle::SPEED_HASTE << PTAD::Battle::STATUS_SPEED);
}
else if (status == PTAD::Battle::STATUS_FOCUSED)
{
if (((PTAD::Battle::enemyStatus >> PTAD::Battle::STATUS_FOCUSED) & 3) < level)
PTAD::Battle::enemyStatus += (1 << PTAD::Battle::STATUS_FOCUSED);
PTAD::Battle::keepFocus = true;
}
else if (status == PTAD::Battle::STATUS_BERSERK)
{
PTAD::Music::playSFX(PTAD::Music::SFX_BERSERK);
if (((PTAD::Battle::enemyStatus >> PTAD::Battle::STATUS_BERSERK) & 3) < level)
PTAD::Battle::enemyStatus += (1 << PTAD::Battle::STATUS_BERSERK);
}
for (uint8_t i = 0; i < successMessageLength; ++i)
PTAD::Dialog::bufferCharacter(nextByte());
eventPos += failMessageLength;
}
else
{
uint16_t chance = nextByte();
if (random(0, 0xFFFF) <= chance * (255 - PTAD::Battle::playerStatusResistance[status / 2]))
{
if (status == PTAD::Battle::STATUS_POISON)
{
uint8_t current = (PTAD::Battle::playerStatus >> PTAD::Battle::STATUS_POISON) & 3;
PTAD::Battle::playerStatus &= ~(3 << PTAD::Battle::STATUS_POISON);
if (current > level)
level = current;
PTAD::Battle::playerStatus |= (level << PTAD::Battle::STATUS_POISON);
PTAD::Music::playSFX(PTAD::Music::SFX_POISON);
PTAD::Ui::fgColor = 203;
PTAD::Battle::shakeScreen = 1;
PTAD::Battle::shakeRate = 8;
PTAD::Battle::flashUi = 16;
}
else if (status == PTAD::Battle::STATUS_SPEED)
{
uint8_t current = (PTAD::Battle::playerStatus >> PTAD::Battle::STATUS_SPEED) & 3;
PTAD::Battle::playerStatus &= ~(3 << PTAD::Battle::STATUS_SPEED);
if (current != PTAD::Battle::SPEED_HASTE)
PTAD::Battle::playerStatus |= (PTAD::Battle::SPEED_SLOW << PTAD::Battle::STATUS_SPEED);
PTAD::Music::playSFX(PTAD::Music::SFX_SLOW);
PTAD::Ui::fgColor = 3;
PTAD::Battle::shakeScreen = 1;
PTAD::Battle::shakeRate = 8;
PTAD::Battle::flashUi = 16;
}
else if (status == PTAD::Battle::STATUS_FOCUSED)
PTAD::Battle::playerStatus &= ~(3 << PTAD::Battle::STATUS_FOCUSED);
else if (status == PTAD::Battle::STATUS_BERSERK)
{
if (((PTAD::Battle::playerStatus >> PTAD::Battle::STATUS_BERSERK) & 3) < level)
PTAD::Battle::playerStatus += (1 << PTAD::Battle::STATUS_BERSERK);
PTAD::Music::playSFX(PTAD::Music::SFX_BERSERK);
PTAD::Ui::fgColor = 175;
PTAD::Battle::shakeScreen = 1;
PTAD::Battle::shakeRate = 8;
PTAD::Battle::flashUi = 16;
}
for (uint8_t i = 0; i < successMessageLength; ++i)
PTAD::Dialog::bufferCharacter(nextByte());
eventPos += failMessageLength;
}
else
{
eventPos += successMessageLength;
for (uint8_t i = 0; i < failMessageLength; ++i)
PTAD::Dialog::bufferCharacter(nextByte());
}
}
if (successMessageLength != 0 || failMessageLength != 0)
PTAD::Dialog::beginMessage();
return false;
}
bool PTAD::BattleEvent::event_consumeMP()
{
uint8_t amount = nextByte();
if (PTAD::Battle::enemy->mp <= amount)
PTAD::Battle::enemy->mp = 0;
else
PTAD::Battle::enemy->mp -= amount;
return true;
}
bool PTAD::BattleEvent::event_random()
{
uint8_t counter = nextByte();
uint8_t max = nextByte();
counters[counter] = random(0, max);
return true;
}
bool PTAD::BattleEvent::event_endEventProcessing()
{
atEnd = true;
return false;
}
|
//
// Networking.h
// ZNews
//
// Created by <NAME> on 2017/7/9.
// Copyright © 2017 <NAME>. All rights reserved.
//
#import <Foundation/Foundation.h>
typedef void(^ZHttpCallBack)(NSArray *responseData, NSError *error);
@interface Networking : NSObject
singleton_h(Networking)
- (void)doHttpRequestWithRequest:(NSString *)request andParameter:(NSDictionary *)param withCallback:(ZHttpCallBack)callback;
@end
|
Adolescent adjustment to parental divorce: an investigation from the perspective of basic dimensions of structural family therapy theory. This study, conducted within the framework of concepts of structural family therapy, examined the relationship of four family-based, clinical dimensions to the adjustment of 45 adolescents during the first 18 months of parental separation. There were two samples of mother-custody families: an Aided group that applied for treatment, and an Unaided group of paid volunteers. The inclusion of this variable addressed a major methodological deficit of previous studies. There was a significant association between perceived postseparation family structure and adolescent adjustment, demonstrating that individual adolescent adjustment is contingent on structural features of the contemporary postseparation family. Further, Aided families were perceived as more chaotic, disengaged, and enmeshed than Unaided families, while Aided adolescents were characterized by more behavior problems than Unaided adolescents. This suggests that divorce, as an unscheduled transition, might be within the realm of adaptation for many families and adolescents and is not necessarily "disastrous." |
OBJECTIVE To observe the effect of electroacupuncture (EA) intervention on the expression and content of protein kinase C (PKC) in the middle cerebral artery in acute cerebral infarction (ACI) rats so as to explore its mechanism underlying improvement of ACI. METHODS Wistar rats were randomly divided into normal control (n = 6), sham operation (n = 30), ACI model (n = 30), and EA (n = 30) groups, and the latter three groups were further divided into 0.5 h, 1 h, 3 h, 6 h and 12 h subgroups (n = 6 in each subgroup). The ACI model was established by occlusion of the middle cerebral artery (MCAO). EA (15 Hz, 1 mA) was applied to "Shuigou" (GV 26) for 20 min. The PKC expression levels and activity in the vascular smooth muscle of the middle cerebral artery were detected using immunohistochemistry and ELISA, respectively. RESULTS In comparison with the control group, the immunoactivity and activities of PKC in the middle cerebral artery tissue at 0.5 h, 1 h, 3 h, 6 h and 12 h were significantly increased in the model group (P<0.05). After EA intervention, the expression levels and activities of PKC at the five time-points were markedly down-regulated in comparison with the model group at the same corresponding time-point (P<0.05). No significant changes of PKC expression and activity were found in the sham operation group (P>0.05). CONCLUSION EA intervention can down-regulate the immunoactivity and activity of PKC in the vascular smooth muscle of the middle cerebral artery in ACI rats, which may contribute to its effect in improving ACI by relieving arterial spasm.
Ireland and Ulster star Rory Best wants to hang up his boots on his terms - leaving but still wanting a bit more.
The 36-year-old talismanic figure in both the white jersey of Ulster and green of Ireland, is set to end his international career at the end of the Rugby World Cup later this year in Japan.
The current skipper of both country and province is still pondering his Ulster future - although talks with Kingspan Stadium bosses have not yet commenced.
Best has given 15 seasons to Ulster already, reaching 216 caps against Munster two weeks ago.
Taking a short break with his family in Dubai, it gave Best an opportunity to take a breath and actually reflect on what was a massive 2018 for him personally.
As for the future, he admits to enjoying playing too much at the minute to want to think about it coming to an end.
There had been speculation Best would stay on with Ulster after his international career ends, taking him up to June 2020.
But Best also accepts he does not want to overcook things either.
“I will just see how I am going, I do not want to flog a dead horse,” Best said in an exclusive interview with the News Letter.
“I am enjoying playing at the minute, I am playing well, but I do not want to get to the position where, and I have seen it in Ulster, with some other players who went on too long and did not go out on their own terms.
“I do not think that I am the sort of personality who will give up or just one day decide I am just going to hang in here and pick up the cheque for the rest of the season.
If this is to be the final or penultimate season, Best would certainly like to be lifting a trophy with Ulster.
Lifting the Six Nations Championship - a Grand Slam by the Irish - was his choice of the highlight of a year which also saw Ireland defeat New Zealand on home soil for the first time.
“I have never ever lifted a trophy as a captain at any level, I do not think, and it was not a bad one to start with,” said Best.
“The whole week was strange, especially after Argentina, I have never been in a Ireland team where we have beat a Southern Hemisphere team by more than 10 points or so.
“Outside of that there was still this expectation you were going to win, which from an Irish point of view would have been a first to have had that.
“Everyone feels you have a good chance. That built, but when we went into Dublin on the Thursday, the whole place, all everything everyone was talking about was this game.
“You would get little lulls and then something would happen and the place exploded - the atmosphere was unreal, it was amazing,” added Best.
Having beaten Italy, Argentina and the All Blacks, Best watched on as Ireland closed out a complete sweep in the November Test series with a win over the USA - finishing an international season which had seen only one loss, to Australia, during another first: defeating the Wallabies 2-1 in their own back yard in a Test series for the first time.
The win over New Zealand was the perfect way for Best to sign off.
Add in receiving an honorary doctorate from Queen’s University, the freedom of the Armagh, Banbridge and Craigavon Borough and an MBE from Prince Charles at Buckingham Palace, 2018 will be a hard year to repeat.
Best’s return to Ulster duty saw the side lift themselves for some big performances against Scarlets twice in Europe and Munster in the Guinness PRO14 derby.
It saw the Province move into 2019 second in their Pool in the European Champions Cup and with a realistic opportunity of progressing to the knockout stages and also sitting second in Conference B of the PRO14.
And those recent performances are what keep Best wanting to keep on going in spite of knowing he still has to think about post October.
“I suppose I just keep thinking or wondering if I get to the point when I am not really enjoying it,” he explained.
“Every time I think I might be getting to that point, something happens at Ulster, like those back to back games (against Scarlets), in fact the whole month of December, the Cardiff game, the Scarlets and that Munster game to finish.
As for the season ahead, Best wants to see Ireland perform at the Six Nations and World Cup.
“If we can show we can perform and perform well and if a little bit of bad luck goes your way, or you lose a game by some unbelievable play by another team or a referee’s decision, we will then know that under pressure and expectation we can still perform. That is the case for this squad anyway.
And as for Ulster, Best was in the starting team which lifted the Celtic League title in 2006 and he probably yearns, in what is potentially his last term as skipper, to be collecting some silverware this season.
“It has been a long time since that 2006 success in Swansea. It has been a long career and a lot of caps not winning something.
“I would love to win something with Ulster before I retire, but you also have to be realistic.
“This current group are on a journey and if we happen to get there sooner, then brilliant, but I think it will take a bit more time.
“Unlike the Michael Lowry’s and James Hume’s, I do not have the same amount of time.
“At the same time you do not want them falling into the same trap as we did 10 or 12 years ago, having come so close in semi-finals and finals and saying there is always next season. |
Much like his reliever brethren Todd Jones, former Yankee Kyle Farnsworth took the news that he was traded to Detroit for Pudge Rodriguez yesterday with the stoicism of a Navy SEAL. He realizes the business of professional baseball doesn't allow for emotional attachments and that success in this game requires one to compartmentalize overwhelming feelings of loss and present yourself in a public setting (especially with reporters ramming tape recorders in your grill and the camera lights blinding you) with dignity and honor. Or not.
As difficult as Farnsworth's 2 1/2 seasons in New York have been, the 32-year-old reliever emerged from the meeting with his eyes glassy and his lower lip trembling. He briefly broke down in tears while discussing the trade with reporters, saying, "I had a good time here, so it's tough."
Hugs, Kyle. Your camo underwear and gun collection will be missed in the Bronx. The Yankees shed themselves of their inconsistent reliever in favor of yet another former MVP in Rodriguez, who will be a major upgrade at the position in both leadership and performance. Also? Pudge is not much of a weeper.
Trade deadline today, kids. So we'll probably see more of these outbursts throughout the day.
Tissue-specific translational regulation of alternative rabbit 15-lipoxygenase mRNAs differing in their 3'-untranslated regions. By screening a rabbit reticulocyte library, an alternative 15-LOX transcript of 3.6 kb (15-LOX mRNA2) was detected containing a 1019 nt longer 3'-untranslated region (UTR2) than the main 2.6 kb mRNA (15-LOX mRNA1). In anaemic animals, northern blotting showed that 15-LOX mRNA2 was predominantly expressed in non-erythroid tissues, whereas 15-LOX mRNA1 was exclusively expressed in red blood cells and bone marrow. The 15-LOX 3'-UTR2 mRNA2 contained a novel 8-fold repetitive CU-rich motif, 23 nt in length (DICE2). This motif is related but not identical to the 10-fold repetitive differentiation control element (DICE1) of 19 nt residing in the 15-LOX UTR1 mRNA1. DICE1 was shown to interact with human hnRNP proteins E1 and K, thereby inhibiting translation. From tissues expressing the long 15-LOX mRNA2, two to three unidentified polypeptides with molecular weights of 53-55 and 90-93 kDa which bound to DICE2 were isolated by RNA affinity chromatography. A 93 kDa protein from lung cytosol, which was selected by DICE2 binding, was able to suppress translational inhibition of 15-LOX mRNA2, but not of 15-LOX mRNA1, by hnRNP E1. A possible interaction between DICE1/DICE2 cis/trans factors in translational control of 15-LOX synthesis is discussed. Furthermore, the 3'-terminal part of the highly related rabbit leukocyte-type 12-LOX gene was analysed. Very similar repetitive CU-rich elements of the type DICE1 (20 repeats) and DICE2 (nine repeats) were found in the part corresponding to the 3'-UTR of the mRNA.
"""Contains various object definitions needed by the weather utility."""
weather_copyright = """\
# Copyright (c) 2006-2021 <NAME> <<EMAIL>>. Permission to
# use, copy, modify, and distribute this software is granted under terms
# provided in the LICENSE file distributed with this software.
#"""
weather_version = "2.4.1"
# mean Earth radius in kilometers and miles, for converting great-circle
# angles expressed in radians into distances
radian_to_km = 6372.795484
radian_to_mi = 3959.871528
def pyversion(ref=None):
"""Determine the Python version and optionally compare to a reference."""
import platform
ver = platform.python_version()
if ref:
return [
int(x) for x in ver.split(".")[:2]
] >= [
int(x) for x in ref.split(".")[:2]
]
else: return ver
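Since version strings compare badly as plain text, pyversion() compares only the first two dotted components, as integer lists. A standalone sketch of that comparison (the helper name version_at_least is ours, not part of the utility):

```python
# Compare dotted version strings by their first two numeric components,
# mirroring the comparison pyversion() performs.
def version_at_least(ver, ref):
    """Return True if version string ver is at least version ref."""
    return [int(x) for x in ver.split(".")[:2]] >= \
        [int(x) for x in ref.split(".")[:2]]

print(version_at_least("3.10", "3.9"))  # integer-wise: [3, 10] >= [3, 9]
print("3.10" >= "3.9")                  # naive string comparison disagrees
```

The integer comparison is why "3.10" correctly ranks above "3.9" while a plain string comparison would not.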
class Selections:
"""An object to contain selection data."""
def __init__(self):
"""Store the config, options and arguments."""
self.config = get_config()
self.options, self.arguments = get_options(self.config)
if self.get_bool("cache") and self.get_bool("cache_search") \
and not self.get_bool("longlist"):
integrate_search_cache(
self.config,
self.get("cachedir"),
self.get("setpath")
)
if not self.arguments:
if "id" in self.options.__dict__ \
and self.options.__dict__["id"]:
self.arguments.append( self.options.__dict__["id"] )
del( self.options.__dict__["id"] )
import sys
message = "WARNING: the --id option is deprecated and will eventually be removed\n"
sys.stderr.write(message)
elif "city" in self.options.__dict__ \
and self.options.__dict__["city"] \
and "st" in self.options.__dict__ \
and self.options.__dict__["st"]:
self.arguments.append(
"^%s city, %s" % (
self.options.__dict__["city"],
self.options.__dict__["st"]
)
)
del( self.options.__dict__["city"] )
del( self.options.__dict__["st"] )
import sys
message = "WARNING: the --city/--st options are deprecated and will eventually be removed\n"
sys.stderr.write(message)
def get(self, option, argument=None):
"""Retrieve data from the config or options."""
if argument:
if self.config.has_section(argument) and (
self.config.has_option(argument, "city") \
or self.config.has_option(argument, "id") \
or self.config.has_option(argument, "st")
):
self.config.remove_section(argument)
import sys
message = "WARNING: the city/id/st options are now unsupported in aliases\n"
sys.stderr.write(message)
if not self.config.has_section(argument):
guessed = guess(
argument,
path=self.get("setpath"),
info=self.get("info"),
cache_search=(
self.get("cache") and self.get("cache_search")
),
cachedir=self.get("cachedir"),
quiet=self.get_bool("quiet")
)
self.config.add_section(argument)
for item in guessed.items():
self.config.set(argument, *item)
if self.config.has_option(argument, option):
return self.config.get(argument, option)
if option in self.options.__dict__:
return self.options.__dict__[option]
import sys
message = "WARNING: no URI defined for %s\n" % option
sys.stderr.write(message)
return None
def get_bool(self, option, argument=None):
"""Get data and coerce to a boolean if necessary."""
# Mimic configparser's getboolean() method by treating
# false/no/off/0 as False and true/yes/on/1 as True values,
# case-insensitively
value = self.get(option, argument)
if isinstance(value, bool):
return value
if isinstance(value, str):
vlower = value.lower()
if vlower in ('false', 'no', 'off', '0'):
return False
elif vlower in ('true', 'yes', 'on', '1'):
return True
raise ValueError("Not a boolean: %s" % value)
def getint(self, option, argument=None):
"""Get data and coerce to an integer if necessary."""
value = self.get(option, argument)
if value: return int(value)
else: return 0
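The coercion rules in get_bool() above mirror configparser's getboolean(). A minimal standalone sketch of just that coercion (to_bool is a hypothetical helper, not part of this module):

```python
def to_bool(value):
    """Coerce configparser-style strings to booleans, case-insensitively."""
    if isinstance(value, bool):
        return value
    if isinstance(value, str):
        v = value.lower()
        if v in ("false", "no", "off", "0"):
            return False
        if v in ("true", "yes", "on", "1"):
            return True
    raise ValueError("Not a boolean: %s" % value)

print(to_bool("Yes"), to_bool("off"))  # True False
```

Anything outside the eight accepted spellings raises ValueError rather than silently defaulting.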
def average(coords):
"""Average a list of coordinates."""
x = 0
y = 0
for coord in coords:
x += coord[0]
y += coord[1]
count = len(coords)
return (x/count, y/count)
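average() computes an unweighted centroid; note it assumes a non-empty list (an empty one raises ZeroDivisionError). The same arithmetic, spelled out on two sample coordinate pairs (values arbitrary):

```python
# Unweighted centroid of coordinate pairs, equivalent to average() above.
coords = [(10.0, 20.0), (30.0, 40.0)]
x = sum(c[0] for c in coords) / len(coords)
y = sum(c[1] for c in coords) / len(coords)
print((x, y))  # (20.0, 30.0)
```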
def filter_units(line, units="imperial"):
"""Filter or convert units in a line of text between US/UK and metric."""
import re
# filter lines with both pressures in the form of "X inches (Y hPa)" or
# "X in. Hg (Y hPa)"
dual_p = re.match(
        r"(.* )(\d*(\.\d+)? (inches|in\. Hg)) \((\d*(\.\d+)? hPa)\)(.*)",
line
)
if dual_p:
preamble, in_hg, i_fr, i_un, hpa, h_fr, trailer = dual_p.groups()
if units == "imperial": line = preamble + in_hg + trailer
elif units == "metric": line = preamble + hpa + trailer
# filter lines with both temperatures in the form of "X F (Y C)"
dual_t = re.match(
        r"(.* )(-?\d*(\.\d+)? F) \((-?\d*(\.\d+)? C)\)(.*)",
line
)
if dual_t:
preamble, fahrenheit, f_fr, celsius, c_fr, trailer = dual_t.groups()
if units == "imperial": line = preamble + fahrenheit + trailer
elif units == "metric": line = preamble + celsius + trailer
# if metric is desired, convert distances in the form of "X mile(s)" to
# "Y kilometer(s)"
if units == "metric":
imperial_d = re.match(
            r"(.* )(\d+)( mile\(s\))(.*)",
line
)
if imperial_d:
preamble, mi, m_u, trailer = imperial_d.groups()
line = preamble + str(int(round(int(mi)*1.609344))) \
+ " kilometer(s)" + trailer
# filter speeds in the form of "X MPH (Y KT)" to just "X MPH"; if metric is
# desired, convert to "Z KPH"
imperial_s = re.match(
        r"(.* )(\d+)( MPH)( \(\d+ KT\))(.*)",
line
)
if imperial_s:
preamble, mph, m_u, kt, trailer = imperial_s.groups()
if units == "imperial": line = preamble + mph + m_u + trailer
elif units == "metric":
line = preamble + str(int(round(int(mph)*1.609344))) + " KPH" + \
trailer
    # if imperial is desired, qualify given forecast temperatures like "X F";
    # if metric is desired, convert to "Y C"
    imperial_t = re.match(
        r"(.* )(High |high |Low |low )(\d+)(\.|,)(.*)",
        line
    )
if imperial_t:
preamble, parameter, fahrenheit, sep, trailer = imperial_t.groups()
if units == "imperial":
line = preamble + parameter + fahrenheit + " F" + sep + trailer
elif units == "metric":
line = preamble + parameter \
+ str(int(round((int(fahrenheit)-32)*5/9))) + " C" + sep \
+ trailer
# hand off the resulting line
return line
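The dual-unit filtering above repeats one idea per pattern; this simplified stand-in (keep_units is illustrative, not part of the module) handles only the temperature case:

```python
import re

def keep_units(line, units="imperial"):
    """Keep only the requested unit from an "X F (Y C)" pair, if present."""
    m = re.match(r"(.* )(-?\d+(\.\d+)? F) \((-?\d+(\.\d+)? C)\)(.*)", line)
    if not m:
        return line
    preamble, fahrenheit, _, celsius, _, trailer = m.groups()
    value = fahrenheit if units == "imperial" else celsius
    return preamble + value + trailer

print(keep_units("Temperature: 68 F (20 C)", "metric"))  # Temperature: 20 C
```

Lines that carry no dual-unit pair pass through untouched, which is also the behavior of filter_units().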
def get_uri(
uri,
ignore_fail=False,
cache_data=False,
cacheage=900,
cachedir="."
):
"""Return a string containing the results of a URI GET."""
if pyversion("3"):
import urllib, urllib.error, urllib.request
URLError = urllib.error.URLError
urlopen = urllib.request.urlopen
else:
import urllib2 as urllib
URLError = urllib.URLError
urlopen = urllib.urlopen
import os, time
if cache_data:
dcachedir = os.path.join( os.path.expanduser(cachedir), "datacache" )
if not os.path.exists(dcachedir):
try: os.makedirs(dcachedir)
except (IOError, OSError): pass
dcache_fn = os.path.join(
dcachedir,
uri.split(":",1)[1].replace("/","_")
)
now = time.time()
if cache_data and os.access(dcache_fn, os.R_OK) \
and now-cacheage < os.stat(dcache_fn).st_mtime <= now:
dcache_fd = open(dcache_fn)
data = dcache_fd.read()
dcache_fd.close()
else:
try:
data = urlopen(uri).read().decode("utf-8")
except URLError:
if ignore_fail: return ""
import os, sys
sys.stderr.write("%s error: failed to retrieve\n %s\n\n" % (
os.path.basename( sys.argv[0] ), uri))
raise
# Some data sources are HTML with the plain text wrapped in pre tags
if "<pre>" in data:
data = data[data.find("<pre>")+5:data.find("</pre>")]
if cache_data:
try:
import codecs
dcache_fd = codecs.open(dcache_fn, "w", "utf-8")
dcache_fd.write(data)
dcache_fd.close()
except (IOError, OSError): pass
return data
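get_uri()'s cache check is the chained comparison now-cacheage < mtime <= now, which also rejects files stamped in the future (clock skew). The same test in isolation (cache_is_fresh is our name for it, not part of the utility):

```python
import os, tempfile, time

def cache_is_fresh(path, cacheage=900):
    """True if path is readable and its mtime is within the last cacheage seconds."""
    now = time.time()
    return os.access(path, os.R_OK) and \
        now - cacheage < os.stat(path).st_mtime <= now

with tempfile.NamedTemporaryFile() as f:
    os.utime(f.name, (time.time() - 10, time.time() - 10))  # 10s old
    print(cache_is_fresh(f.name))   # True: within the window
    os.utime(f.name, (0, 0))        # backdate to the epoch
    print(cache_is_fresh(f.name))   # False: stale
```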
def get_metar(
uri=None,
verbose=False,
quiet=False,
headers=None,
imperial=False,
metric=False,
cache_data=False,
cacheage=900,
cachedir="."
):
"""Return a summarized METAR for the specified station."""
if not uri:
import os, sys
message = "%s error: METAR URI required for conditions\n" % \
os.path.basename( sys.argv[0] )
sys.stderr.write(message)
sys.exit(1)
metar = get_uri(
uri,
cache_data=cache_data,
cacheage=cacheage,
cachedir=cachedir
)
if pyversion("3") and type(metar) is bytes: metar = metar.decode("utf-8")
if verbose: return metar
else:
import re
lines = metar.split("\n")
if not headers:
headers = \
"relative_humidity," \
+ "precipitation_last_hour," \
                + "sky_conditions," \
+ "temperature," \
+ "heat index," \
+ "windchill," \
+ "weather," \
+ "wind"
headerlist = headers.lower().replace("_"," ").split(",")
output = []
if not quiet:
title = "Current conditions at %s"
place = lines[0].split(", ")
if len(place) > 1:
place = "%s, %s" % ( place[0].title(), place[1] )
else: place = "<UNKNOWN>"
output.append(title%place)
output.append("Last updated " + lines[1])
header_match = False
for header in headerlist:
for line in lines:
if line.lower().startswith(header + ":"):
if re.match(r".*:\d+$", line): line = line[:line.rfind(":")]
if imperial: line = filter_units(line, units="imperial")
elif metric: line = filter_units(line, units="metric")
if quiet: output.append(line)
else: output.append(" " + line)
header_match = True
if not header_match:
output.append(
"(no conditions matched your header list, try with --verbose)"
)
return "\n".join(output)
def get_alert(
uri=None,
verbose=False,
quiet=False,
cache_data=False,
cacheage=900,
cachedir="."
):
"""Return alert notice for the specified URI."""
if not uri:
return ""
alert = get_uri(
uri,
ignore_fail=True,
cache_data=cache_data,
cacheage=cacheage,
cachedir=cachedir
).strip()
if pyversion("3") and type(alert) is bytes: alert = alert.decode("utf-8")
if alert:
if verbose: return alert
else:
if alert.find("\nNATIONAL WEATHER SERVICE") == -1:
muted = False
else:
muted = True
lines = alert.split("\n")
import time
valid_time = time.strftime("%Y%m%d%H%M")
output = []
for line in lines:
if line.startswith("Expires:") \
and "Expires:" + valid_time > line:
return ""
if muted and line.startswith("NATIONAL WEATHER SERVICE"):
muted = False
line = ""
elif line == "&&":
line = ""
elif line == "$$":
muted = True
if line and not muted:
if quiet: output.append(line)
else: output.append(" " + line)
return "\n".join(output)
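The muting state machine in get_alert() can be isolated: output is suppressed after a "$$" delimiter (and from the start, when the bulletin carries a NATIONAL WEATHER SERVICE line) until that line reopens it; "&&" separators and the reopening line itself are dropped. A sketch (unmute_alert is illustrative):

```python
def unmute_alert(text):
    """Strip NWS bulletin boilerplate the way get_alert() does."""
    muted = "\nNATIONAL WEATHER SERVICE" in text
    output = []
    for line in text.split("\n"):
        if muted and line.startswith("NATIONAL WEATHER SERVICE"):
            muted = False   # reopen output, but drop the marker line
            line = ""
        elif line == "&&":
            line = ""       # drop section separators
        elif line == "$$":
            muted = True    # everything after "$$" is boilerplate
        if line and not muted:
            output.append(line)
    return "\n".join(output)

bulletin = "preamble\n$$\nfooter\nNATIONAL WEATHER SERVICE OFFICE\nBody text"
print(unmute_alert(bulletin))  # Body text
```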
def get_options(config):
"""Parse the options passed on the command line."""
# for optparse's builtin -h/--help option
usage = \
"usage: %prog [options] [alias1|search1 [alias2|search2 [...]]]"
# for optparse's builtin --version option
verstring = "%prog " + weather_version
# create the parser
import optparse
option_parser = optparse.OptionParser(usage=usage, version=verstring)
# the -a/--alert option
if config.has_option("default", "alert"):
default_alert = config.getboolean("default", "alert")
else: default_alert = False
option_parser.add_option("-a", "--alert",
dest="alert",
action="store_true",
default=default_alert,
help="include local alert notices")
# the --atypes option
if config.has_option("default", "atypes"):
default_atypes = config.get("default", "atypes")
else:
default_atypes = \
"coastal_flood_statement," \
+ "flash_flood_statement," \
+ "flash_flood_warning," \
+ "flash_flood_watch," \
+ "flood_statement," \
+ "flood_warning," \
+ "severe_thunderstorm_warning," \
+ "severe_weather_statement," \
+ "special_weather_statement," \
+ "urgent_weather_message"
option_parser.add_option("--atypes",
dest="atypes",
default=default_atypes,
help="list of alert notification types to display")
# the --build-sets option
option_parser.add_option("--build-sets",
dest="build_sets",
action="store_true",
default=False,
help="(re)build location correlation sets")
# the --cacheage option
if config.has_option("default", "cacheage"):
default_cacheage = config.getint("default", "cacheage")
else: default_cacheage = 900
option_parser.add_option("--cacheage",
dest="cacheage",
default=default_cacheage,
help="duration in seconds to refresh cached data")
# the --cachedir option
if config.has_option("default", "cachedir"):
default_cachedir = config.get("default", "cachedir")
else: default_cachedir = "~/.weather"
option_parser.add_option("--cachedir",
dest="cachedir",
default=default_cachedir,
help="directory for storing cached searches and data")
# the -f/--forecast option
if config.has_option("default", "forecast"):
default_forecast = config.getboolean("default", "forecast")
else: default_forecast = False
option_parser.add_option("-f", "--forecast",
dest="forecast",
action="store_true",
default=default_forecast,
help="include a local forecast")
# the --headers option
if config.has_option("default", "headers"):
default_headers = config.get("default", "headers")
else:
default_headers = \
"temperature," \
+ "relative_humidity," \
+ "wind," \
+ "heat_index," \
+ "windchill," \
+ "weather," \
+ "sky_conditions," \
+ "precipitation_last_hour"
option_parser.add_option("--headers",
dest="headers",
default=default_headers,
help="list of conditions headers to display")
# the --imperial option
if config.has_option("default", "imperial"):
default_imperial = config.getboolean("default", "imperial")
else: default_imperial = False
option_parser.add_option("--imperial",
dest="imperial",
action="store_true",
default=default_imperial,
help="filter/convert conditions for US/UK units")
# the --info option
option_parser.add_option("--info",
dest="info",
action="store_true",
default=False,
help="output detailed information for your search")
# the -l/--list option
option_parser.add_option("-l", "--list",
dest="list",
action="store_true",
default=False,
help="list all configured aliases and cached searches")
# the --longlist option
option_parser.add_option("--longlist",
dest="longlist",
action="store_true",
default=False,
help="display details of all configured aliases")
# the -m/--metric option
if config.has_option("default", "metric"):
default_metric = config.getboolean("default", "metric")
else: default_metric = False
option_parser.add_option("-m", "--metric",
dest="metric",
action="store_true",
default=default_metric,
help="filter/convert conditions for metric units")
# the -n/--no-conditions option
if config.has_option("default", "conditions"):
default_conditions = config.getboolean("default", "conditions")
else: default_conditions = True
option_parser.add_option("-n", "--no-conditions",
dest="conditions",
action="store_false",
default=default_conditions,
help="disable output of current conditions")
# the --no-cache option
if config.has_option("default", "cache"):
default_cache = config.getboolean("default", "cache")
else: default_cache = True
    option_parser.add_option("--no-cache",
        dest="cache",
        action="store_false",
        default=default_cache,
        help="disable all caching (searches and data)")
# the --no-cache-data option
if config.has_option("default", "cache_data"):
default_cache_data = config.getboolean("default", "cache_data")
else: default_cache_data = True
    option_parser.add_option("--no-cache-data",
        dest="cache_data",
        action="store_false",
        default=default_cache_data,
        help="disable retrieved data caching")
# the --no-cache-search option
if config.has_option("default", "cache_search"):
default_cache_search = config.getboolean("default", "cache_search")
else: default_cache_search = True
    option_parser.add_option("--no-cache-search",
        dest="cache_search",
        action="store_false",
        default=default_cache_search,
        help="disable search result caching")
# the -q/--quiet option
if config.has_option("default", "quiet"):
default_quiet = config.getboolean("default", "quiet")
else: default_quiet = False
option_parser.add_option("-q", "--quiet",
dest="quiet",
action="store_true",
default=default_quiet,
help="skip preambles and don't indent")
# the --setpath option
if config.has_option("default", "setpath"):
default_setpath = config.get("default", "setpath")
else: default_setpath = ".:~/.weather"
option_parser.add_option("--setpath",
dest="setpath",
default=default_setpath,
help="directory search path for correlation sets")
# the -v/--verbose option
if config.has_option("default", "verbose"):
default_verbose = config.getboolean("default", "verbose")
else: default_verbose = False
option_parser.add_option("-v", "--verbose",
dest="verbose",
action="store_true",
default=default_verbose,
help="show full decoded feeds")
# deprecated options
if config.has_option("default", "city"):
default_city = config.get("default", "city")
else: default_city = ""
option_parser.add_option("-c", "--city",
dest="city",
default=default_city,
help=optparse.SUPPRESS_HELP)
if config.has_option("default", "id"):
default_id = config.get("default", "id")
else: default_id = ""
option_parser.add_option("-i", "--id",
dest="id",
default=default_id,
help=optparse.SUPPRESS_HELP)
if config.has_option("default", "st"):
default_st = config.get("default", "st")
else: default_st = ""
option_parser.add_option("-s", "--st",
dest="st",
default=default_st,
help=optparse.SUPPRESS_HELP)
    # separate the options object from the list of arguments and return both
    options, arguments = option_parser.parse_args()
    return options, arguments
def get_config():
"""Parse the aliases and configuration."""
if pyversion("3"): import configparser
else: import ConfigParser as configparser
config = configparser.ConfigParser()
import os
rcfiles = [
"/etc/weatherrc",
"/etc/weather/weatherrc",
os.path.expanduser("~/.weather/weatherrc"),
os.path.expanduser("~/.weatherrc"),
"weatherrc"
]
for rcfile in rcfiles:
if os.access(rcfile, os.R_OK):
if pyversion("3"):
config.read(rcfile, encoding="utf-8")
else:
config.read(rcfile)
for section in config.sections():
if section != section.lower():
if config.has_section(section.lower()):
config.remove_section(section.lower())
config.add_section(section.lower())
for option,value in config.items(section):
config.set(section.lower(), option, value)
return config
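get_config() merges each mixed-case section into a lower-cased copy rather than renaming it, so both spellings remain addressable. The normalisation in isolation:

```python
import configparser

# Fold mixed-case section names to lower case, as get_config() does, so
# alias lookups are effectively case-insensitive.
config = configparser.ConfigParser()
config.read_string("[MyAlias]\ndescription = example\n")
for section in config.sections():
    if section != section.lower():
        if config.has_section(section.lower()):
            config.remove_section(section.lower())
        config.add_section(section.lower())
        for option, value in config.items(section):
            config.set(section.lower(), option, value)
print(config.get("myalias", "description"))  # example
```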
def integrate_search_cache(config, cachedir, setpath):
"""Add cached search results into the configuration."""
if pyversion("3"): import configparser
else: import ConfigParser as configparser
import os, time
scache_fn = os.path.join( os.path.expanduser(cachedir), "searches" )
if not os.access(scache_fn, os.R_OK): return config
scache_fd = open(scache_fn)
created = float( scache_fd.readline().split(":")[1].strip().split()[0] )
scache_fd.close()
now = time.time()
datafiles = data_index(setpath)
if datafiles:
data_freshness = sorted(
[ x[1] for x in datafiles.values() ],
reverse=True
)[0]
else: data_freshness = now
if created < data_freshness <= now:
try:
os.remove(scache_fn)
print( "[clearing outdated %s]" % scache_fn )
except (IOError, OSError):
pass
return config
scache = configparser.ConfigParser()
if pyversion("3"):
scache.read(scache_fn, encoding="utf-8")
else:
scache.read(scache_fn)
for section in scache.sections():
if not config.has_section(section):
config.add_section(section)
for option,value in scache.items(section):
config.set(section, option, value)
return config
def list_aliases(config, detail=False):
"""Return a formatted list of aliases defined in the config."""
if detail:
output = "\n# configured alias details..."
for section in sorted(config.sections()):
output += "\n\n[%s]" % section
for item in sorted(config.items(section)):
output += "\n%s = %s" % item
output += "\n"
else:
output = "configured aliases and cached searches..."
for section in sorted(config.sections()):
if config.has_option(section, "description"):
description = config.get(section, "description")
else: description = "(no description provided)"
output += "\n %s: %s" % (section, description)
return output
def data_index(path):
    """Locate the first matching data file for each known data set name."""
    import os
datafiles = {}
for filename in ("airports", "places", "stations", "zctas", "zones"):
for dirname in path.split(":"):
for extension in ("", ".gz", ".txt"):
candidate = os.path.expanduser(
os.path.join( dirname, "".join( (filename, extension) ) )
)
if os.path.exists(candidate):
datafiles[filename] = (
candidate,
os.stat(candidate).st_mtime
)
break
if filename in datafiles:
break
return datafiles
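data_index() stops at the first hit per file name: earlier entries in the colon-separated path beat later ones, and a bare file beats ".gz", which beats ".txt". That precedence alone, as a sketch (first_match is our name, not part of the utility):

```python
import os

def first_match(filename, path=".:~/.weather"):
    """Return the first existing candidate in data_index()'s lookup order."""
    for dirname in path.split(":"):
        for extension in ("", ".gz", ".txt"):
            candidate = os.path.expanduser(
                os.path.join(dirname, filename + extension))
            if os.path.exists(candidate):
                return candidate
    return None
```

The colon-separated path convention here matches the --setpath option above.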
def guess(
expression,
path=".",
max_results=20,
info=False,
cache_search=False,
cacheage=900,
cachedir=".",
quiet=False
):
"""Find URIs using airport, gecos, placename, station, ZCTA/ZIP, zone."""
import codecs, datetime, time, os, re, sys
if pyversion("3"): import configparser
else: import ConfigParser as configparser
datafiles = data_index(path)
if re.match("[A-Za-z]{3}$", expression): searchtype = "airport"
elif re.match("[A-Za-z0-9]{4}$", expression): searchtype = "station"
elif re.match("[A-Za-z]{2}[Zz][0-9]{3}$", expression): searchtype = "zone"
elif re.match("[0-9]{5}$", expression): searchtype = "ZCTA"
elif re.match(
r"[\+-]?\d+(\.\d+)?(-\d+){,2}[ENSWensw]?, *[\+-]?\d+(\.\d+)?(-\d+){,2}[ENSWensw]?$",
expression
):
searchtype = "coordinates"
elif re.match(r"(FIPS|fips)\d+$", expression): searchtype = "FIPS"
else:
searchtype = "name"
cache_search = False
if cache_search: action = "caching"
else: action = "using"
if info:
scores = [
(0.005, "bad"),
(0.025, "poor"),
(0.160, "suspect"),
(0.500, "mediocre"),
(0.840, "good"),
(0.975, "great"),
(0.995, "excellent"),
(1.000, "ideal"),
]
if not quiet: print("Searching via %s..."%searchtype)
stations = configparser.ConfigParser()
dataname = "stations"
if dataname in datafiles:
datafile = datafiles[dataname][0]
if datafile.endswith(".gz"):
import gzip
if pyversion("3"):
stations.read_string(
gzip.open(datafile).read().decode("utf-8") )
else: stations.readfp( gzip.open(datafile) )
else:
if pyversion("3"):
stations.read(datafile, encoding="utf-8")
else:
stations.read(datafile)
else:
message = "%s error: can't find \"%s\" data file\n" % (
os.path.basename( sys.argv[0] ),
dataname
)
sys.stderr.write(message)
exit(1)
zones = configparser.ConfigParser()
dataname = "zones"
if dataname in datafiles:
datafile = datafiles[dataname][0]
if datafile.endswith(".gz"):
import gzip
if pyversion("3"):
zones.read_string( gzip.open(datafile).read().decode("utf-8") )
else: zones.readfp( gzip.open(datafile) )
else:
if pyversion("3"):
zones.read(datafile, encoding="utf-8")
else:
zones.read(datafile)
else:
message = "%s error: can't find \"%s\" data file\n" % (
os.path.basename( sys.argv[0] ),
dataname
)
sys.stderr.write(message)
exit(1)
search = None
station = ("", 0)
zone = ("", 0)
dataset = None
possibilities = []
uris = {}
if searchtype == "airport":
expression = expression.lower()
airports = configparser.ConfigParser()
dataname = "airports"
if dataname in datafiles:
datafile = datafiles[dataname][0]
if datafile.endswith(".gz"):
import gzip
if pyversion("3"):
airports.read_string(
gzip.open(datafile).read().decode("utf-8") )
else: airports.readfp( gzip.open(datafile) )
else:
if pyversion("3"):
airports.read(datafile, encoding="utf-8")
else:
airports.read(datafile)
else:
message = "%s error: can't find \"%s\" data file\n" % (
os.path.basename( sys.argv[0] ),
dataname
)
sys.stderr.write(message)
exit(1)
if airports.has_section(expression) \
and airports.has_option(expression, "station"):
search = (expression, "IATA/FAA airport code %s" % expression)
station = ( airports.get(expression, "station"), 0 )
if stations.has_option(station[0], "zone"):
zone = eval( stations.get(station[0], "zone") )
dataset = stations
if not ( info or quiet ) \
and stations.has_option( station[0], "description" ):
print(
"[%s result %s]" % (
action,
stations.get(station[0], "description")
)
)
else:
message = "No IATA/FAA airport code \"%s\" in the %s file.\n" % (
expression,
datafiles["airports"][0]
)
sys.stderr.write(message)
exit(1)
elif searchtype == "station":
expression = expression.lower()
if stations.has_section(expression):
station = (expression, 0)
if not search:
search = (expression, "ICAO station code %s" % expression)
if stations.has_option(expression, "zone"):
zone = eval( stations.get(expression, "zone") )
dataset = stations
if not ( info or quiet ) \
and stations.has_option(expression, "description"):
print(
"[%s result %s]" % (
action,
stations.get(expression, "description")
)
)
else:
message = "No ICAO weather station \"%s\" in the %s file.\n" % (
expression,
datafiles["stations"][0]
)
sys.stderr.write(message)
exit(1)
elif searchtype == "zone":
expression = expression.lower()
if zones.has_section(expression) \
and zones.has_option(expression, "station"):
zone = (expression, 0)
station = eval( zones.get(expression, "station") )
dataset = zones
search = (expression, "NWS/NOAA weather zone %s" % expression)
if not ( info or quiet ) \
and zones.has_option(expression, "description"):
print(
"[%s result %s]" % (
action,
zones.get(expression, "description")
)
)
else:
message = "No usable NWS weather zone \"%s\" in the %s file.\n" % (
expression,
datafiles["zones"][0]
)
sys.stderr.write(message)
exit(1)
elif searchtype == "ZCTA":
zctas = configparser.ConfigParser()
dataname = "zctas"
if dataname in datafiles:
datafile = datafiles[dataname][0]
if datafile.endswith(".gz"):
import gzip
if pyversion("3"):
zctas.read_string(
gzip.open(datafile).read().decode("utf-8") )
else: zctas.readfp( gzip.open(datafile) )
else:
if pyversion("3"):
zctas.read(datafile, encoding="utf-8")
else:
zctas.read(datafile)
else:
message = "%s error: can't find \"%s\" data file\n" % (
os.path.basename( sys.argv[0] ),
dataname
)
sys.stderr.write(message)
exit(1)
dataset = zctas
if zctas.has_section(expression) \
and zctas.has_option(expression, "station"):
station = eval( zctas.get(expression, "station") )
search = (expression, "Census ZCTA (ZIP code) %s" % expression)
if zctas.has_option(expression, "zone"):
zone = eval( zctas.get(expression, "zone") )
else:
message = "No census ZCTA (ZIP code) \"%s\" in the %s file.\n" % (
expression,
datafiles["zctas"][0]
)
sys.stderr.write(message)
exit(1)
elif searchtype == "coordinates":
search = (expression, "Geographic coordinates %s" % expression)
stationtable = {}
for station in stations.sections():
if stations.has_option(station, "location"):
stationtable[station] = {
"location": eval( stations.get(station, "location") )
}
station = closest( gecos(expression), stationtable, "location", 0.1 )
if not station[0]:
message = "No ICAO weather station found near %s.\n" % expression
sys.stderr.write(message)
exit(1)
zonetable = {}
for zone in zones.sections():
if zones.has_option(zone, "centroid"):
zonetable[zone] = {
"centroid": eval( zones.get(zone, "centroid") )
}
zone = closest( gecos(expression), zonetable, "centroid", 0.1 )
if not zone[0]:
message = "No NWS weather zone near %s; forecasts unavailable.\n" \
% expression
sys.stderr.write(message)
elif searchtype in ("FIPS", "name"):
places = configparser.ConfigParser()
dataname = "places"
if dataname in datafiles:
datafile = datafiles[dataname][0]
if datafile.endswith(".gz"):
import gzip
if pyversion("3"):
places.read_string(
gzip.open(datafile).read().decode("utf-8") )
else: places.readfp( gzip.open(datafile) )
else:
if pyversion("3"):
places.read(datafile, encoding="utf-8")
else:
places.read(datafile)
else:
message = "%s error: can't find \"%s\" data file\n" % (
os.path.basename( sys.argv[0] ),
dataname
)
sys.stderr.write(message)
exit(1)
dataset = places
place = expression.lower()
if places.has_section(place) and places.has_option(place, "station"):
station = eval( places.get(place, "station") )
search = (expression, "Census Place %s" % expression)
if places.has_option(place, "description"):
search = (
search[0],
search[1] + ", %s" % places.get(place, "description")
)
if places.has_option(place, "zone"):
zone = eval( places.get(place, "zone") )
if not ( info or quiet ) \
and places.has_option(place, "description"):
print(
"[%s result %s]" % (
action,
places.get(place, "description")
)
)
else:
for place in places.sections():
if places.has_option(place, "description") \
and places.has_option(place, "station") \
and re.search(
expression,
places.get(place, "description"),
re.I
):
possibilities.append(place)
for place in stations.sections():
if stations.has_option(place, "description") \
and re.search(
expression,
stations.get(place, "description"),
re.I
):
possibilities.append(place)
for place in zones.sections():
if zones.has_option(place, "description") \
and zones.has_option(place, "station") \
and re.search(
expression,
zones.get(place, "description"),
re.I
):
possibilities.append(place)
if len(possibilities) == 1:
place = possibilities[0]
if places.has_section(place):
station = eval( places.get(place, "station") )
description = places.get(place, "description")
if places.has_option(place, "zone"):
zone = eval( places.get(place, "zone" ) )
search = ( expression, "%s: %s" % (place, description) )
elif stations.has_section(place):
station = (place, 0.0)
description = stations.get(place, "description")
if stations.has_option(place, "zone"):
zone = eval( stations.get(place, "zone" ) )
search = ( expression, "ICAO station code %s" % place )
elif zones.has_section(place):
station = eval( zones.get(place, "station") )
description = zones.get(place, "description")
zone = (place, 0.0)
search = ( expression, "NWS/NOAA weather zone %s" % place )
if not ( info or quiet ):
print( "[%s result %s]" % (action, description) )
if not possibilities and not station[0]:
message = "No FIPS code/census area match in the %s file.\n" % (
datafiles["places"][0]
)
sys.stderr.write(message)
exit(1)
if station[0]:
uris["metar"] = stations.get( station[0], "metar" )
if zone[0]:
for key,value in zones.items( zone[0] ):
if key not in ("centroid", "description", "station"):
uris[key] = value
elif possibilities:
count = len(possibilities)
if count <= max_results:
print( "Your search is ambiguous, returning %s matches:" % count )
for place in sorted(possibilities):
if places.has_section(place):
print(
" [%s] %s" % (
place,
places.get(place, "description")
)
)
elif stations.has_section(place):
print(
" [%s] %s" % (
place,
stations.get(place, "description")
)
)
elif zones.has_section(place):
print(
" [%s] %s" % (
place,
zones.get(place, "description")
)
)
else:
print(
"Your search is too ambiguous, returning %s matches." % count
)
exit(0)
if info:
stationlist = []
zonelist = []
if dataset:
for section in dataset.sections():
if dataset.has_option(section, "station"):
stationlist.append(
eval( dataset.get(section, "station") )[1]
)
if dataset.has_option(section, "zone"):
zonelist.append( eval( dataset.get(section, "zone") )[1] )
stationlist.sort()
zonelist.sort()
scount = len(stationlist)
zcount = len(zonelist)
sranks = []
zranks = []
for score in scores:
if stationlist:
sranks.append( stationlist[ int( (1-score[0]) * scount ) ] )
if zonelist:
zranks.append( zonelist[ int( (1-score[0]) * zcount ) ] )
description = search[1]
uris["description"] = description
print(
"%s\n%s" % ( description, "-" * len(description) )
)
print(
"%s: %s" % (
station[0],
stations.get( station[0], "description" )
)
)
km = radian_to_km*station[1]
mi = radian_to_mi*station[1]
if sranks and not description.startswith("ICAO station code "):
for index in range(0, len(scores)):
if station[1] >= sranks[index]:
score = scores[index][1]
break
print(
" (proximity %s, %.3gkm, %.3gmi)" % ( score, km, mi )
)
elif searchtype == "coordinates":
print( " (%.3gkm, %.3gmi)" % (km, mi) )
if zone[0]:
print(
"%s: %s" % ( zone[0], zones.get( zone[0], "description" ) )
)
km = radian_to_km*zone[1]
mi = radian_to_mi*zone[1]
if zranks and not description.startswith("NWS/NOAA weather zone "):
for index in range(0, len(scores)):
if zone[1] >= zranks[index]:
score = scores[index][1]
break
print(
" (proximity %s, %.3gkm, %.3gmi)" % ( score, km, mi )
)
elif searchtype == "coordinates" and zone[0]:
print( " (%.3gkm, %.3gmi)" % (km, mi) )
if cache_search:
now = time.time()
nowstamp = "%s (%s)" % (
now,
datetime.datetime.isoformat(
datetime.datetime.fromtimestamp(now),
" "
)
)
search_cache = ["\n"]
search_cache.append( "[%s]\n" % search[0] )
search_cache.append( "cached = %s\n" % nowstamp )
for uriname in sorted(uris.keys()):
search_cache.append( "%s = %s\n" % ( uriname, uris[uriname] ) )
real_cachedir = os.path.expanduser(cachedir)
if not os.path.exists(real_cachedir):
try: os.makedirs(real_cachedir)
except (IOError, OSError): pass
scache_fn = os.path.join(real_cachedir, "searches")
if not os.path.exists(scache_fn):
then = sorted(
[ x[1] for x in datafiles.values() ],
reverse=True
)[0]
thenstamp = "%s (%s)" % (
then,
datetime.datetime.isoformat(
datetime.datetime.fromtimestamp(then),
" "
)
)
search_cache.insert(
0,
"# based on data files from: %s\n" % thenstamp
)
try:
scache_existing = configparser.ConfigParser()
if pyversion("3"):
scache_existing.read(scache_fn, encoding="utf-8")
else:
scache_existing.read(scache_fn)
if not scache_existing.has_section(search[0]):
scache_fd = codecs.open(scache_fn, "a", "utf-8")
scache_fd.writelines(search_cache)
scache_fd.close()
except (IOError, OSError): pass
if not info:
return(uris)
def closest(position, nodes, fieldname, angle=None):
import math
if not angle: angle = 2*math.pi
match = None
for name in nodes:
if fieldname in nodes[name]:
node = nodes[name][fieldname]
if node and abs( position[0]-node[0] ) < angle:
if abs( position[1]-node[1] ) < angle \
or abs( abs( position[1]-node[1] ) - 2*math.pi ) < angle:
if position == node:
angle = 0
match = name
else:
candidate = math.acos(
math.sin( position[0] ) * math.sin( node[0] ) \
+ math.cos( position[0] ) \
* math.cos( node[0] ) \
* math.cos( position[1] - node[1] )
)
if candidate < angle:
angle = candidate
match = name
if match: match = str(match)
return (match, angle)
def gecos(formatted):
import math, re
coordinates = formatted.split(",")
for coordinate in range(0, 2):
degrees, foo, minutes, bar, seconds, hemisphere = re.match(
r"([\+-]?\d+\.?\d*)(-(\d+))?(-(\d+))?([ensw]?)$",
coordinates[coordinate].strip().lower()
).groups()
value = float(degrees)
if minutes: value += float(minutes)/60
if seconds: value += float(seconds)/3600
if hemisphere and hemisphere in "sw": value *= -1
coordinates[coordinate] = math.radians(value)
return tuple(coordinates)
def correlate():
import codecs, csv, datetime, hashlib, os, re, sys, tarfile, time, zipfile
if pyversion("3"): import configparser
else: import ConfigParser as configparser
for filename in os.listdir("."):
if re.match("[0-9]{4}_Gaz_counties_national.zip$", filename):
gcounties_an = filename
gcounties_fn = filename[:-4] + ".txt"
elif re.match("[0-9]{4}_Gaz_cousubs_national.zip$", filename):
gcousubs_an = filename
gcousubs_fn = filename[:-4] + ".txt"
elif re.match("[0-9]{4}_Gaz_place_national.zip$", filename):
gplace_an = filename
gplace_fn = filename[:-4] + ".txt"
elif re.match("[0-9]{4}_Gaz_zcta_national.zip$", filename):
gzcta_an = filename
gzcta_fn = filename[:-4] + ".txt"
elif re.match("bp[0-9]{2}[a-z]{2}[0-9]{2}.dbx$", filename):
cpfzcf_fn = filename
nsdcccc_fn = "nsd_cccc.txt"
ourairports_fn = "airports.csv"
overrides_fn = "overrides.conf"
overrideslog_fn = "overrides.log"
slist_fn = "slist"
zlist_fn = "zlist"
qalog_fn = "qa.log"
airports_fn = "airports"
places_fn = "places"
stations_fn = "stations"
zctas_fn = "zctas"
zones_fn = "zones"
header = """\
%s
# generated by %s on %s from these public domain sources:
#
# https://www.census.gov/geographies/reference-files/time-series/geo/gazetteer-files.html
# %s %s %s
# %s %s %s
# %s %s %s
# %s %s %s
#
# https://www.weather.gov/gis/ZoneCounty/
# %s %s %s
#
# https://tgftp.nws.noaa.gov/data/
# %s %s %s
#
# https://ourairports.com/data/
# %s %s %s
#
# ...and these manually-generated or hand-compiled adjustments:
# %s %s %s
# %s %s %s
# %s %s %s\
""" % (
weather_copyright,
os.path.basename( sys.argv[0] ),
datetime.date.isoformat(
datetime.datetime.utcfromtimestamp( int(os.environ.get('SOURCE_DATE_EPOCH', time.time())) )
),
hashlib.md5( open(gcounties_an, "rb").read() ).hexdigest(),
datetime.date.isoformat(
datetime.datetime.utcfromtimestamp( os.path.getmtime(gcounties_an) )
),
gcounties_an,
hashlib.md5( open(gcousubs_an, "rb").read() ).hexdigest(),
datetime.date.isoformat(
datetime.datetime.utcfromtimestamp( os.path.getmtime(gcousubs_an) )
),
gcousubs_an,
hashlib.md5( open(gplace_an, "rb").read() ).hexdigest(),
datetime.date.isoformat(
datetime.datetime.utcfromtimestamp( os.path.getmtime(gplace_an) )
),
gplace_an,
hashlib.md5( open(gzcta_an, "rb").read() ).hexdigest(),
datetime.date.isoformat(
datetime.datetime.utcfromtimestamp( os.path.getmtime(gzcta_an) )
),
gzcta_an,
hashlib.md5( open(cpfzcf_fn, "rb").read() ).hexdigest(),
datetime.date.isoformat(
datetime.datetime.utcfromtimestamp( os.path.getmtime(cpfzcf_fn) )
),
cpfzcf_fn,
hashlib.md5( open(nsdcccc_fn, "rb").read() ).hexdigest(),
datetime.date.isoformat(
datetime.datetime.utcfromtimestamp( os.path.getmtime(nsdcccc_fn) )
),
nsdcccc_fn,
hashlib.md5( open(ourairports_fn, "rb").read() ).hexdigest(),
datetime.date.isoformat(
datetime.datetime.utcfromtimestamp( os.path.getmtime(ourairports_fn) )
),
ourairports_fn,
hashlib.md5( open(overrides_fn, "rb").read() ).hexdigest(),
datetime.date.isoformat(
datetime.datetime.utcfromtimestamp( os.path.getmtime(overrides_fn) )
),
overrides_fn,
hashlib.md5( open(slist_fn, "rb").read() ).hexdigest(),
datetime.date.isoformat(
datetime.datetime.utcfromtimestamp( os.path.getmtime(slist_fn) )
),
slist_fn,
hashlib.md5( open(zlist_fn, "rb").read() ).hexdigest(),
datetime.date.isoformat(
datetime.datetime.utcfromtimestamp( os.path.getmtime(zlist_fn) )
),
zlist_fn
)
airports = {}
places = {}
stations = {}
zctas = {}
zones = {}
message = "Reading %s:%s..." % (gcounties_an, gcounties_fn)
sys.stdout.write(message)
sys.stdout.flush()
count = 0
gcounties = zipfile.ZipFile(gcounties_an).open(gcounties_fn, "r")
columns = gcounties.readline().decode("utf-8").strip().split("\t")
for line in gcounties:
fields = line.decode("utf-8").strip().split("\t")
f_geoid = fields[ columns.index("GEOID") ].strip()
f_name = fields[ columns.index("NAME") ].strip()
f_usps = fields[ columns.index("USPS") ].strip()
f_intptlat = fields[ columns.index("INTPTLAT") ].strip()
f_intptlong = fields[ columns.index("INTPTLONG") ].strip()
if f_geoid and f_name and f_usps and f_intptlat and f_intptlong:
fips = "fips%s" % f_geoid
if fips not in places: places[fips] = {}
places[fips]["centroid"] = gecos(
"%s,%s" % (f_intptlat, f_intptlong)
)
places[fips]["description"] = "%s, %s" % (f_name, f_usps)
count += 1
gcounties.close()
print("done (%s lines)." % count)
message = "Reading %s:%s..." % (gcousubs_an, gcousubs_fn)
sys.stdout.write(message)
sys.stdout.flush()
count = 0
gcousubs = zipfile.ZipFile(gcousubs_an).open(gcousubs_fn, "r")
columns = gcousubs.readline().decode("utf-8").strip().split("\t")
for line in gcousubs:
fields = line.decode("utf-8").strip().split("\t")
f_geoid = fields[ columns.index("GEOID") ].strip()
f_name = fields[ columns.index("NAME") ].strip()
f_usps = fields[ columns.index("USPS") ].strip()
f_intptlat = fields[ columns.index("INTPTLAT") ].strip()
f_intptlong = fields[ columns.index("INTPTLONG") ].strip()
if f_geoid and f_name and f_usps and f_intptlat and f_intptlong:
fips = "fips%s" % f_geoid
if fips not in places: places[fips] = {}
places[fips]["centroid"] = gecos(
"%s,%s" % (f_intptlat, f_intptlong)
)
places[fips]["description"] = "%s, %s" % (f_name, f_usps)
count += 1
gcousubs.close()
print("done (%s lines)." % count)
message = "Reading %s:%s..." % (gplace_an, gplace_fn)
sys.stdout.write(message)
sys.stdout.flush()
count = 0
gplace = zipfile.ZipFile(gplace_an).open(gplace_fn, "r")
columns = gplace.readline().decode("utf-8").strip().split("\t")
for line in gplace:
fields = line.decode("utf-8").strip().split("\t")
f_geoid = fields[ columns.index("GEOID") ].strip()
f_name = fields[ columns.index("NAME") ].strip()
f_usps = fields[ columns.index("USPS") ].strip()
f_intptlat = fields[ columns.index("INTPTLAT") ].strip()
f_intptlong = fields[ columns.index("INTPTLONG") ].strip()
if f_geoid and f_name and f_usps and f_intptlat and f_intptlong:
fips = "fips%s" % f_geoid
if fips not in places: places[fips] = {}
places[fips]["centroid"] = gecos(
"%s,%s" % (f_intptlat, f_intptlong)
)
places[fips]["description"] = "%s, %s" % (f_name, f_usps)
count += 1
gplace.close()
print("done (%s lines)." % count)
message = "Reading %s..." % slist_fn
sys.stdout.write(message)
sys.stdout.flush()
count = 0
slist = codecs.open(slist_fn, "r", "utf-8")
for line in slist:
icao = line.split("#")[0].strip()
if icao:
stations[icao] = {
"metar": "https://tgftp.nws.noaa.gov/data/observations/"\
+ "metar/decoded/%s.TXT" % icao.upper()
}
count += 1
slist.close()
print("done (%s lines)." % count)
message = "Reading %s..." % nsdcccc_fn
sys.stdout.write(message)
sys.stdout.flush()
count = 0
nsdcccc = codecs.open(nsdcccc_fn, "r", "utf-8")
for line in nsdcccc:
line = str(line)
fields = line.split(";")
icao = fields[0].strip().lower()
if icao in stations:
description = []
name = " ".join( fields[3].strip().title().split() )
if name: description.append(name)
st = fields[4].strip()
if st: description.append(st)
country = " ".join( fields[5].strip().title().split() )
if country: description.append(country)
if description:
stations[icao]["description"] = ", ".join(description)
lat, lon = fields[7:9]
if lat and lon:
stations[icao]["location"] = gecos( "%s,%s" % (lat, lon) )
elif "location" not in stations[icao]:
lat, lon = fields[5:7]
if lat and lon:
stations[icao]["location"] = gecos( "%s,%s" % (lat, lon) )
count += 1
nsdcccc.close()
print("done (%s lines)." % count)
message = "Reading %s..." % ourairports_fn
sys.stdout.write(message)
sys.stdout.flush()
count = 0
ourairports = open(ourairports_fn, "r")
for row in csv.reader(ourairports):
icao = row[12].lower()
if icao in stations:
iata = row[13].lower()
if len(iata) == 3: airports[iata] = { "station": icao }
if "description" not in stations[icao]:
description = []
name = row[3]
if name: description.append(name)
municipality = row[10]
if municipality: description.append(municipality)
region = row[9]
country = row[8]
if region:
if "-" in region:
c,r = region.split("-", 1)
if c == country: region = r
description.append(region)
if country:
description.append(country)
if description:
stations[icao]["description"] = ", ".join(description)
if "location" not in stations[icao]:
lat = row[4]
if lat:
lon = row[5]
if lon:
stations[icao]["location"] = gecos(
"%s,%s" % (lat, lon)
)
count += 1
ourairports.close()
print("done (%s lines)." % count)
message = "Reading %s..." % zlist_fn
sys.stdout.write(message)
sys.stdout.flush()
count = 0
zlist = codecs.open(zlist_fn, "r", "utf-8")
for line in zlist:
line = line.split("#")[0].strip()
if line:
zones[line] = {}
count += 1
zlist.close()
print("done (%s lines)." % count)
message = "Reading %s..." % cpfzcf_fn
sys.stdout.write(message)
sys.stdout.flush()
count = 0
cpfz = {}
cpfzcf = codecs.open(cpfzcf_fn, "r", "utf-8")
for line in cpfzcf:
fields = line.strip().split("|")
if len(fields) == 11 \
and fields[0] and fields[1] and fields[9] and fields[10]:
zone = "z".join( fields[:2] ).lower()
if zone in zones:
state = fields[0]
if state:
zones[zone]["coastal_flood_statement"] = (
"https://tgftp.nws.noaa.gov/data/watches_warnings/"
"flood/coastal/%s/%s.txt" % (state.lower(), zone))
zones[zone]["flash_flood_statement"] = (
"https://tgftp.nws.noaa.gov/data/watches_warnings/"
"flash_flood/statement/%s/%s.txt"
% (state.lower(), zone))
zones[zone]["flash_flood_warning"] = (
"https://tgftp.nws.noaa.gov/data/watches_warnings/"
"flash_flood/warning/%s/%s.txt"
% (state.lower(), zone))
zones[zone]["flash_flood_watch"] = (
"https://tgftp.nws.noaa.gov/data/watches_warnings/"
"flash_flood/watch/%s/%s.txt" % (state.lower(), zone))
zones[zone]["flood_statement"] = (
"https://tgftp.nws.noaa.gov/data/watches_warnings/"
"flood/statement/%s/%s.txt" % (state.lower(), zone))
zones[zone]["flood_warning"] = (
"https://tgftp.nws.noaa.gov/data/watches_warnings/"
"flood/warning/%s/%s.txt" % (state.lower(), zone))
zones[zone]["severe_thunderstorm_warning"] = (
"https://tgftp.nws.noaa.gov/data/watches_warnings/"
"thunderstorm/%s/%s.txt" % (state.lower(), zone))
zones[zone]["severe_weather_statement"] = (
"https://tgftp.nws.noaa.gov/data/watches_warnings/"
"severe_weather_stmt/%s/%s.txt"
% (state.lower(), zone))
zones[zone]["short_term_forecast"] = (
"https://tgftp.nws.noaa.gov/data/forecasts/nowcast/"
"%s/%s.txt" % (state.lower(), zone))
zones[zone]["special_weather_statement"] = (
"https://tgftp.nws.noaa.gov/data/watches_warnings/"
"special_weather_stmt/%s/%s.txt"
% (state.lower(), zone))
zones[zone]["state_forecast"] = (
"https://tgftp.nws.noaa.gov/data/forecasts/state/"
"%s/%s.txt" % (state.lower(), zone))
zones[zone]["urgent_weather_message"] = (
"https://tgftp.nws.noaa.gov/data/watches_warnings/"
"non_precip/%s/%s.txt" % (state.lower(), zone))
zones[zone]["zone_forecast"] = (
"https://tgftp.nws.noaa.gov/data/forecasts/zone/"
"%s/%s.txt" % (state.lower(), zone))
description = fields[3].strip()
fips = "fips%s"%fields[6]
county = fields[5]
if county:
if description.endswith(county):
description += " County"
else:
description += ", %s County" % county
description += ", %s, US" % state
zones[zone]["description"] = description
zones[zone]["centroid"] = gecos( ",".join( fields[9:11] ) )
if fips in places and not zones[zone]["centroid"]:
zones[zone]["centroid"] = places[fips]["centroid"]
count += 1
cpfzcf.close()
print("done (%s lines)." % count)
message = "Reading %s:%s..." % (gzcta_an, gzcta_fn)
sys.stdout.write(message)
sys.stdout.flush()
count = 0
gzcta = zipfile.ZipFile(gzcta_an).open(gzcta_fn, "r")
columns = gzcta.readline().decode("utf-8").strip().split("\t")
for line in gzcta:
fields = line.decode("utf-8").strip().split("\t")
f_geoid = fields[ columns.index("GEOID") ].strip()
f_intptlat = fields[ columns.index("INTPTLAT") ].strip()
f_intptlong = fields[ columns.index("INTPTLONG") ].strip()
if f_geoid and f_intptlat and f_intptlong:
if f_geoid not in zctas: zctas[f_geoid] = {}
zctas[f_geoid]["centroid"] = gecos(
"%s,%s" % (f_intptlat, f_intptlong)
)
count += 1
gzcta.close()
print("done (%s lines)." % count)
message = "Reading %s..." % overrides_fn
sys.stdout.write(message)
sys.stdout.flush()
count = 0
added = 0
removed = 0
changed = 0
overrides = configparser.ConfigParser()
getattr(overrides, "read_file", overrides.readfp)( codecs.open(overrides_fn, "r", "utf8") )
overrideslog = []
for section in overrides.sections():
addopt = 0
chgopt = 0
if section.startswith("-"):
section = section[1:]
delete = True
else: delete = False
if re.match("[A-Za-z]{3}$", section):
if delete:
if section in airports:
del( airports[section] )
logact = "removed airport %s" % section
removed += 1
else:
logact = "tried to remove nonexistent airport %s" % section
else:
if section in airports:
logact = "changed airport %s" % section
changed += 1
else:
airports[section] = {}
logact = "added airport %s" % section
added += 1
for key,value in overrides.items(section):
if key in airports[section]: chgopt += 1
else: addopt += 1
if key in ("centroid", "location"):
airports[section][key] = eval(value)
else:
airports[section][key] = value
if addopt and chgopt:
logact += " (+%s/!%s options)" % (addopt, chgopt)
elif addopt: logact += " (+%s options)" % addopt
elif chgopt: logact += " (!%s options)" % chgopt
elif re.match("[A-Za-z0-9]{4}$", section):
if delete:
if section in stations:
del( stations[section] )
logact = "removed station %s" % section
removed += 1
else:
logact = "tried to remove nonexistent station %s" % section
else:
if section in stations:
logact = "changed station %s" % section
changed += 1
else:
stations[section] = {}
logact = "added station %s" % section
added += 1
for key,value in overrides.items(section):
if key in stations[section]: chgopt += 1
else: addopt += 1
if key in ("centroid", "location"):
stations[section][key] = eval(value)
else:
stations[section][key] = value
if addopt and chgopt:
logact += " (+%s/!%s options)" % (addopt, chgopt)
elif addopt: logact += " (+%s options)" % addopt
elif chgopt: logact += " (!%s options)" % chgopt
elif re.match("[0-9]{5}$", section):
if delete:
if section in zctas:
del( zctas[section] )
logact = "removed zcta %s" % section
removed += 1
else:
logact = "tried to remove nonexistent zcta %s" % section
else:
if section in zctas:
logact = "changed zcta %s" % section
changed += 1
else:
zctas[section] = {}
logact = "added zcta %s" % section
added += 1
for key,value in overrides.items(section):
if key in zctas[section]: chgopt += 1
else: addopt += 1
if key in ("centroid", "location"):
zctas[section][key] = eval(value)
else:
zctas[section][key] = value
if addopt and chgopt:
logact += " (+%s/!%s options)" % (addopt, chgopt)
elif addopt: logact += " (+%s options)" % addopt
elif chgopt: logact += " (!%s options)" % chgopt
elif re.match("[A-Za-z]{2}[Zz][0-9]{3}$", section):
if delete:
if section in zones:
del( zones[section] )
logact = "removed zone %s" % section
removed += 1
else:
logact = "tried to remove nonexistent zone %s" % section
else:
if section in zones:
logact = "changed zone %s" % section
changed += 1
else:
zones[section] = {}
logact = "added zone %s" % section
added += 1
for key,value in overrides.items(section):
if key in zones[section]: chgopt += 1
else: addopt += 1
if key in ("centroid", "location"):
zones[section][key] = eval(value)
else:
zones[section][key] = value
if addopt and chgopt:
logact += " (+%s/!%s options)" % (addopt, chgopt)
elif addopt: logact += " (+%s options)" % addopt
elif chgopt: logact += " (!%s options)" % chgopt
elif re.match("fips[0-9]+$", section):
if delete:
if section in places:
del( places[section] )
logact = "removed place %s" % section
removed += 1
else:
logact = "tried to remove nonexistent place %s" % section
else:
if section in places:
logact = "changed place %s" % section
changed += 1
else:
places[section] = {}
logact = "added place %s" % section
added += 1
for key,value in overrides.items(section):
if key in places[section]: chgopt += 1
else: addopt += 1
if key in ("centroid", "location"):
places[section][key] = eval(value)
else:
places[section][key] = value
if addopt and chgopt:
logact += " (+%s/!%s options)" % (addopt, chgopt)
elif addopt: logact += " (+%s options)" % addopt
elif chgopt: logact += " (!%s options)" % chgopt
count += 1
overrideslog.append("%s\n" % logact)
overrideslog.sort()
if os.path.exists(overrideslog_fn):
os.rename(overrideslog_fn, "%s_old"%overrideslog_fn)
overrideslog_fd = codecs.open(overrideslog_fn, "w", "utf8")
import time
overrideslog_fd.write(
'# Copyright (c) %s <NAME> <<EMAIL>>. Permission to\n'
'# use, copy, modify, and distribute this software is granted under terms\n'
'# provided in the LICENSE file distributed with this software.\n\n'
% time.gmtime().tm_year)
overrideslog_fd.writelines(overrideslog)
overrideslog_fd.close()
print("done (%s overridden sections: +%s/-%s/!%s)." % (
count,
added,
removed,
changed
) )
estimate = 2*len(places) + len(stations) + 2*len(zctas) + len(zones)
print(
"Correlating places, stations, ZCTAs and zones (upper bound is %s):" % \
estimate
)
count = 0
milestones = list( range(51) )
message = " "
sys.stdout.write(message)
sys.stdout.flush()
for fips in places:
centroid = places[fips]["centroid"]
if centroid:
station = closest(centroid, stations, "location", 0.1)
if station[0]:
places[fips]["station"] = station
count += 1
if not count%100:
level = int(50*count/estimate)
if level in milestones:
for remaining in milestones[:milestones.index(level)+1]:
if remaining%5:
message = "."
sys.stdout.write(message)
sys.stdout.flush()
else:
message = "%s%%" % (remaining*2,)
sys.stdout.write(message)
sys.stdout.flush()
milestones.remove(remaining)
if centroid:
zone = closest(centroid, zones, "centroid", 0.1)
if zone[0]:
places[fips]["zone"] = zone
count += 1
if not count%100:
level = int(50*count/estimate)
if level in milestones:
for remaining in milestones[:milestones.index(level)+1]:
if remaining%5:
message = "."
sys.stdout.write(message)
sys.stdout.flush()
else:
message = "%s%%" % (remaining*2,)
sys.stdout.write(message)
sys.stdout.flush()
milestones.remove(remaining)
for station in stations:
if "location" in stations[station]:
location = stations[station]["location"]
if location:
zone = closest(location, zones, "centroid", 0.1)
if zone[0]:
stations[station]["zone"] = zone
count += 1
if not count%100:
level = int(50*count/estimate)
if level in milestones:
for remaining in milestones[:milestones.index(level)+1]:
if remaining%5:
message = "."
sys.stdout.write(message)
sys.stdout.flush()
else:
message = "%s%%" % (remaining*2,)
sys.stdout.write(message)
sys.stdout.flush()
milestones.remove(remaining)
for zcta in zctas.keys():
centroid = zctas[zcta]["centroid"]
if centroid:
station = closest(centroid, stations, "location", 0.1)
if station[0]:
zctas[zcta]["station"] = station
count += 1
if not count%100:
level = int(50*count/estimate)
if level in milestones:
for remaining in milestones[ : milestones.index(level)+1 ]:
if remaining%5:
message = "."
sys.stdout.write(message)
sys.stdout.flush()
else:
message = "%s%%" % (remaining*2,)
sys.stdout.write(message)
sys.stdout.flush()
milestones.remove(remaining)
if centroid:
zone = closest(centroid, zones, "centroid", 0.1)
if zone[0]:
zctas[zcta]["zone"] = zone
count += 1
if not count%100:
level = int(50*count/estimate)
if level in milestones:
for remaining in milestones[:milestones.index(level)+1]:
if remaining%5:
message = "."
sys.stdout.write(message)
sys.stdout.flush()
else:
message = "%s%%" % (remaining*2,)
sys.stdout.write(message)
sys.stdout.flush()
milestones.remove(remaining)
for zone in zones.keys():
if "centroid" in zones[zone]:
centroid = zones[zone]["centroid"]
if centroid:
station = closest(centroid, stations, "location", 0.1)
if station[0]:
zones[zone]["station"] = station
count += 1
if not count%100:
level = int(50*count/estimate)
if level in milestones:
for remaining in milestones[:milestones.index(level)+1]:
if remaining%5:
message = "."
sys.stdout.write(message)
sys.stdout.flush()
else:
message = "%s%%" % (remaining*2,)
sys.stdout.write(message)
sys.stdout.flush()
milestones.remove(remaining)
for remaining in milestones:
if remaining%5:
message = "."
sys.stdout.write(message)
sys.stdout.flush()
else:
message = "%s%%" % (remaining*2,)
sys.stdout.write(message)
sys.stdout.flush()
print("\n done (%s correlations)." % count)
message = "Writing %s..." % airports_fn
sys.stdout.write(message)
sys.stdout.flush()
count = 0
if os.path.exists(airports_fn):
os.rename(airports_fn, "%s_old"%airports_fn)
airports_fd = codecs.open(airports_fn, "w", "utf8")
airports_fd.write(header)
for airport in sorted( airports.keys() ):
airports_fd.write("\n\n[%s]" % airport)
for key, value in sorted( airports[airport].items() ):
if type(value) is float: value = "%.7f"%value
elif type(value) is tuple:
elements = []
for element in value:
if type(element) is float: elements.append("%.7f"%element)
else: elements.append( repr(element) )
value = "(%s)"%", ".join(elements)
airports_fd.write( "\n%s = %s" % (key, value) )
count += 1
airports_fd.write("\n")
airports_fd.close()
print("done (%s sections)." % count)
message = "Writing %s..." % places_fn
sys.stdout.write(message)
sys.stdout.flush()
count = 0
if os.path.exists(places_fn):
os.rename(places_fn, "%s_old"%places_fn)
places_fd = codecs.open(places_fn, "w", "utf8")
places_fd.write(header)
for fips in sorted( places.keys() ):
places_fd.write("\n\n[%s]" % fips)
for key, value in sorted( places[fips].items() ):
if type(value) is float: value = "%.7f"%value
elif type(value) is tuple:
elements = []
for element in value:
if type(element) is float: elements.append("%.7f"%element)
else: elements.append( repr(element) )
value = "(%s)"%", ".join(elements)
places_fd.write( "\n%s = %s" % (key, value) )
count += 1
places_fd.write("\n")
places_fd.close()
print("done (%s sections)." % count)
message = "Writing %s..." % stations_fn
sys.stdout.write(message)
sys.stdout.flush()
count = 0
if os.path.exists(stations_fn):
os.rename(stations_fn, "%s_old"%stations_fn)
stations_fd = codecs.open(stations_fn, "w", "utf-8")
stations_fd.write(header)
for station in sorted( stations.keys() ):
stations_fd.write("\n\n[%s]" % station)
for key, value in sorted( stations[station].items() ):
if type(value) is float: value = "%.7f"%value
elif type(value) is tuple:
elements = []
for element in value:
if type(element) is float: elements.append("%.7f"%element)
else: elements.append( repr(element) )
value = "(%s)"%", ".join(elements)
if type(value) is bytes:
value = value.decode("utf-8")
stations_fd.write( "\n%s = %s" % (key, value) )
count += 1
stations_fd.write("\n")
stations_fd.close()
print("done (%s sections)." % count)
message = "Writing %s..." % zctas_fn
sys.stdout.write(message)
sys.stdout.flush()
count = 0
if os.path.exists(zctas_fn):
os.rename(zctas_fn, "%s_old"%zctas_fn)
zctas_fd = codecs.open(zctas_fn, "w", "utf8")
zctas_fd.write(header)
for zcta in sorted( zctas.keys() ):
zctas_fd.write("\n\n[%s]" % zcta)
for key, value in sorted( zctas[zcta].items() ):
if type(value) is float: value = "%.7f"%value
elif type(value) is tuple:
elements = []
for element in value:
if type(element) is float: elements.append("%.7f"%element)
else: elements.append( repr(element) )
value = "(%s)"%", ".join(elements)
zctas_fd.write( "\n%s = %s" % (key, value) )
count += 1
zctas_fd.write("\n")
zctas_fd.close()
print("done (%s sections)." % count)
message = "Writing %s..." % zones_fn
sys.stdout.write(message)
sys.stdout.flush()
count = 0
if os.path.exists(zones_fn):
os.rename(zones_fn, "%s_old"%zones_fn)
zones_fd = codecs.open(zones_fn, "w", "utf8")
zones_fd.write(header)
for zone in sorted( zones.keys() ):
zones_fd.write("\n\n[%s]" % zone)
for key, value in sorted( zones[zone].items() ):
if type(value) is float: value = "%.7f"%value
elif type(value) is tuple:
elements = []
for element in value:
if type(element) is float: elements.append("%.7f"%element)
else: elements.append( repr(element) )
value = "(%s)"%", ".join(elements)
zones_fd.write( "\n%s = %s" % (key, value) )
count += 1
zones_fd.write("\n")
zones_fd.close()
print("done (%s sections)." % count)
message = "Starting QA check..."
sys.stdout.write(message)
sys.stdout.flush()
airports = configparser.ConfigParser()
if pyversion("3"):
airports.read(airports_fn, encoding="utf-8")
else:
airports.read(airports_fn)
places = configparser.ConfigParser()
if pyversion("3"):
places.read(places_fn, encoding="utf-8")
else:
places.read(places_fn)
stations = configparser.ConfigParser()
if pyversion("3"):
stations.read(stations_fn, encoding="utf-8")
else:
stations.read(stations_fn)
zctas = configparser.ConfigParser()
if pyversion("3"):
zctas.read(zctas_fn, encoding="utf-8")
else:
zctas.read(zctas_fn)
zones = configparser.ConfigParser()
if pyversion("3"):
zones.read(zones_fn, encoding="utf-8")
else:
zones.read(zones_fn)
qalog = []
places_nocentroid = 0
places_nodescription = 0
for place in sorted( places.sections() ):
if not places.has_option(place, "centroid"):
qalog.append("%s: no centroid\n" % place)
places_nocentroid += 1
if not places.has_option(place, "description"):
qalog.append("%s: no description\n" % place)
places_nodescription += 1
stations_nodescription = 0
stations_nolocation = 0
stations_nometar = 0
for station in sorted( stations.sections() ):
if not stations.has_option(station, "description"):
qalog.append("%s: no description\n" % station)
stations_nodescription += 1
if not stations.has_option(station, "location"):
qalog.append("%s: no location\n" % station)
stations_nolocation += 1
if not stations.has_option(station, "metar"):
qalog.append("%s: no metar\n" % station)
stations_nometar += 1
airports_badstation = 0
airports_nostation = 0
for airport in sorted( airports.sections() ):
if not airports.has_option(airport, "station"):
qalog.append("%s: no station\n" % airport)
airports_nostation += 1
else:
station = airports.get(airport, "station")
if station not in stations.sections():
qalog.append( "%s: bad station %s\n" % (airport, station) )
airports_badstation += 1
zctas_nocentroid = 0
for zcta in sorted( zctas.sections() ):
if not zctas.has_option(zcta, "centroid"):
qalog.append("%s: no centroid\n" % zcta)
zctas_nocentroid += 1
zones_nocentroid = 0
zones_nodescription = 0
zones_noforecast = 0
zones_overlapping = 0
zonetable = {}
for zone in zones.sections():
if zones.has_option(zone, "centroid"):
zonetable[zone] = {
"centroid": eval( zones.get(zone, "centroid") )
}
for zone in sorted( zones.sections() ):
if zones.has_option(zone, "centroid"):
zonetable_local = zonetable.copy()
del( zonetable_local[zone] )
centroid = eval( zones.get(zone, "centroid") )
if centroid:
nearest = closest(centroid, zonetable_local, "centroid", 0.1)
if nearest[1]*radian_to_km < 1:
qalog.append( "%s: within one km of %s\n" % (
zone,
nearest[0]
) )
zones_overlapping += 1
else:
qalog.append("%s: no centroid\n" % zone)
zones_nocentroid += 1
if not zones.has_option(zone, "description"):
qalog.append("%s: no description\n" % zone)
zones_nodescription += 1
if not zones.has_option(zone, "zone_forecast"):
qalog.append("%s: no forecast\n" % zone)
zones_noforecast += 1
if os.path.exists(qalog_fn):
os.rename(qalog_fn, "%s_old"%qalog_fn)
qalog_fd = codecs.open(qalog_fn, "w", "utf8")
import time
qalog_fd.write(
'# Copyright (c) %s <NAME> <<EMAIL>>. Permission to\n'
'# use, copy, modify, and distribute this software is granted under terms\n'
'# provided in the LICENSE file distributed with this software.\n\n'
% time.gmtime().tm_year)
qalog_fd.writelines(qalog)
qalog_fd.close()
if qalog:
print("issues found (see %s for details):"%qalog_fn)
if airports_badstation:
print(" %s airports with invalid station"%airports_badstation)
if airports_nostation:
print(" %s airports with no station"%airports_nostation)
if places_nocentroid:
print(" %s places with no centroid"%places_nocentroid)
if places_nodescription:
print(" %s places with no description"%places_nodescription)
if stations_nodescription:
print(" %s stations with no description"%stations_nodescription)
if stations_nolocation:
print(" %s stations with no location"%stations_nolocation)
if stations_nometar:
print(" %s stations with no METAR"%stations_nometar)
if zctas_nocentroid:
print(" %s ZCTAs with no centroid"%zctas_nocentroid)
if zones_nocentroid:
print(" %s zones with no centroid"%zones_nocentroid)
if zones_nodescription:
print(" %s zones with no description"%zones_nodescription)
if zones_noforecast:
print(" %s zones with no forecast"%zones_noforecast)
if zones_overlapping:
print(" %s zones within one km of another"%zones_overlapping)
else: print("no issues found.")
print("Indexing complete!")
def clean_number(n):
return rx_non_numbers.sub("", n)
Device Light Fingerprints Identification Using MCU-based Deep Learning Approach

We introduce device identification using light fingerprints via an MCU-based deep learning approach. We first observe that minor differences exist between the individual components of lighting equipment, and these differences produce a unique signature in the frequency spectrum. We therefore adopt deep learning approaches to develop a mobile-phone light-fingerprint identification system and implement it on a low-cost microcontroller platform. The screen light of the mobile phone is analyzed to obtain unique light-fingerprint features. We utilize a convolutional neural network, an improved multi-class greedy autoencoder, and a variational autoencoder with domain-adaptation techniques to develop the identification algorithm. Finally, Bayesian optimization is used to tune the hyper-parameters of the models for implementation on the microprocessor. Comparisons are presented to demonstrate the performance: the multi-class greedy autoencoder algorithm achieves an overall accuracy of 99.67% and an abnormal-sample detection rate of 99.85%. Only a single model needs to be added or deleted when enrolling new authentication data, and this does not affect the identification ability of the remaining models. This yields greater flexibility in real-life applications and potential for expansion to other fields, such as smart buildings and automated robots.

I. INTRODUCTION

Recently, the internet of things has introduced new challenges in device identification and authentication, and light-fingerprint applications for device identification are therefore emerging. Hamidi-Rad et al. introduced a 1 MHz high-frequency light sensor to collect and transform light-source data, and a Raspberry Pi platform was adopted for sampling and for Fast Fourier Transform (FFT) pre-processing.
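The FFT pre-processing used by these systems can be sketched as follows. The window choice, bin count, and normalization here are illustrative assumptions, not values taken from the cited works:

```python
import numpy as np

def light_fingerprint_features(samples, n_bins=512):
    """Turn a raw light-sensor trace into a normalized magnitude spectrum.

    The low-frequency magnitude bins serve as the fingerprint feature
    vector fed to a classifier (e.g. a 1-D CNN or KNN).
    """
    window = np.hanning(len(samples))             # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(samples * window))
    spectrum = spectrum[:n_bins]                  # keep low-frequency bins
    return spectrum / (np.linalg.norm(spectrum) + 1e-12)  # unit norm
```

Because the feature vector is unit-normalized, it is insensitive to the overall brightness of the source and captures only the shape of the spectrum.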
Convolutional neural network (CNN) and K-nearest neighbors (KNN) algorithms were employed for evaluation. The experiments were conducted using the same brand of LEDs, and the results remained reliable. In the work of Kobayashi, the same type of LED was also tested. A photodiode (PD) was used in that study to sense the light, which was input through the microphone interface of a cell phone. The algorithm performed FFT preprocessing and then sent the data to a one-dimensional CNN for classification. The experimental recognition accuracy was 97% for 48 LED lighting devices of the same type. In another study, a PD was also used to sample the light; the signal was converted to the frequency domain, and the crest factor (CF) was used as the input feature to identify 331 signals experimentally. After matching, 3000 lamps were tested on site, with an accuracy of 94.4%. These results demonstrated the reliability of using light frequency or luminous intensity as the identification feature: regardless of the type or number of lamps, differentiation could be accomplished. In addition, imbalances between positive and negative samples often occur when training machine learning models. In one study, current data from a large number of undamaged machining tools were extracted and an autoencoder (AE) was fitted to the undamaged current data, so that a reconstruction error arose when abnormal samples were input. Thus, the purpose of detecting damaged tools was achieved, with experimental results reaching an accuracy of 95%. In another work, an AE was also used for fault diagnosis, where it was difficult to collect all the samples because of variable fault conditions; a training method that fits only the positive samples can therefore distinguish normal from abnormal results. However, since a single AE can only diagnose one condition, the original AE model was improved so that it could simultaneously classify the light source data from multiple cell phones.
In this paper, we utilize CNNs and autoencoders to develop a device light fingerprint identification system on an MCU. First, we observe that minor differences exist among the individual components of lighting equipment; these differences cause unique phenomena in the frequency spectrum that can be used for authentication. Therefore, deep learning approaches are adopted to develop a mobile phone light fingerprint identification system and to implement it on a low-cost microcontroller platform. In addition, the Bayesian optimization technique is used to optimize the hyper-parameters of the models for implementation on the microprocessor. Finally, the corresponding comparisons demonstrate the performance: the MCGAE algorithm achieves an overall accuracy rate of 99.67% and an abnormal sample detection rate (TPR) of 99.85%. The remainder of this study is organized as follows. Section II introduces the system structure, including the hardware specification and data acquisition. The proposed device identification algorithm and model optimization are presented in Section III. Section IV introduces the corresponding experimental results. Finally, the conclusions are presented in Section V. A. Hardware Specification The proposed authentication system architecture using the light fingerprint of a mobile phone is shown in Figure 1, where a PD is used to sense the external light source. A simple amplifier circuit moderately scales the signal detected by the PD, which is then sampled for the artificial intelligence (AI) model installed in the MCU. After sampling, the MCU-based AI model can directly perform the subsequent recognition tasks. The main hardware components of this light fingerprint recognition device were the PD (S6967), the amplifier (OPA1612A), and the MCU (Renesas RX65N). Thus, the components of the device were simple and inexpensive. The PD (type: S6967) was selected as the sensor for the light source with spectral response range .
After the sensed light source was converted into a voltage signal, the voltage range was adjusted by the amplifier circuit. The low-noise amplification property of the OPA1612A ensured that small light source features were preserved after amplification. The choice of the MCU was more flexible; here, the Renesas RX65N with 2 MB of ROM and 640 KB of RAM was adopted to meet the current experimental needs. We successfully realized the technique of converting the artificial intelligence models into C code. The Renesas RX65N is a lower-cost MCU that can be selected for actual implementation. Figure 2 shows the hardware diagram of the proposed approach, in which the photodiode is mounted behind a simple shield that fixes the distance between the phone and the sensor, a knob adjusts the DC level and magnification, and a UART interface is used for data collection. In this study, we first verified the uniqueness of light fingerprint features, the reliability of light fingerprint recognition, and the flexibility of a multi-fingerprint recognition system. Based on our previous experience in converting and constructing models on MCUs, the AI models were finally implemented on low-cost MCUs. The MCU implementation comprised sampling, FFT transformation, data preprocessing, AI prediction, and control. First, we confirmed sample acquisition: when the light source was close and sufficiently bright, the MCU triggered sampling after the PD responded to the voltage change. After sampling, the samples were converted to the frequency domain using the FFT. Subsequently, the FFT image was resized for the AI model. Finally, the AI model was trained for prediction. Herein, the autoencoder (AE) and CNN were utilized to establish the proposed MCU-based AI model. In the AE model, we compared the reconstruction error of each sample model to obtain an individual threshold for each entry in the database.
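The sample → FFT → resize → predict pipeline described above can be sketched as follows. This is a stdlib-only illustration, not the authors' firmware: `dft_magnitude` stands in for the MCU's optimized FFT, and `resize` is one plausible way to shrink a spectrum to the AI model's input width.

```python
import cmath

def dft_magnitude(samples):
    # Naive O(N^2) DFT magnitude spectrum; a real MCU would use an optimized FFT.
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

def resize(spectrum, width):
    # Shrink the spectrum to the model's input width by averaging adjacent bins.
    step = len(spectrum) / width
    out = []
    for i in range(width):
        lo, hi = int(i * step), int((i + 1) * step)
        out.append(sum(spectrum[lo:hi]) / max(1, hi - lo))
    return out

# A constant (DC) signal concentrates all energy in bin 0.
spec = dft_magnitude([1.0] * 8)
features = resize([1.0, 2.0, 3.0, 4.0], 2)  # -> [1.5, 3.5]
```

In the actual system these features would then be fed to the on-device AE or CNN model.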
If the reconstruction error falls below the threshold, the light fingerprint is identified as one with a known identity. The threshold was determined during the training process and stored in the MCU for identifying the light fingerprint. The CNN model employs a much simpler identification method: the confidence score calculated by the model is compared with a default minimum score, and if the score exceeds the minimum, the sample is judged to be a cell phone in the database. Hence, light fingerprints of mobile phones outside the database can be excluded. B. Data Acquisition To implement the AI algorithm on an MCU, the memory size and computation time of the AI model should be considered. Therefore, in the early stage of data collection, a higher spectral resolution and a wider spectrum range were examined with an oscilloscope to confirm the feature range and select an appropriate spectrum range. In this study, the sampling rate and number of sampling points were 200 k samples/second and 16 k points, respectively. According to the sampling theorem, spectral features from 0 Hz to 100 kHz can be obtained, with a resolution of 12.2 Hz between adjacent frequency bins. The actual sampling results (frequency spectra) are shown in Figure 3 for different models of the iPhone. Figure 3 shows that spectral features at 500 Hz, 8 kHz, 45 kHz, and 90 kHz are common and obvious for all models of mobile phones. These features are generated by the sampling circuit and the common component modules used inside the phones. We observed that these features appeared consistently and that differences existed between non-iPhone and iPhone series phones, as described in this work. To illustrate the differences observed within the same model of mobile phone, we collected several iPhone X1 devices and obtained the corresponding spectral features, as shown in Figure 4. Figure 4 demonstrates the useful features in each region of the sampling spectrum.
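The stated resolution follows directly from the FFT bin spacing fs/N. Assuming "16 k points" means 16384 (a power of two, as FFT implementations typically require), a quick check reproduces the 12.2 Hz figure and the 0–100 kHz band:

```python
def spectral_resolution(sample_rate_hz, n_points):
    # FFT bin spacing: fs / N.
    return sample_rate_hz / n_points

def nyquist(sample_rate_hz):
    # Highest recoverable frequency under the sampling theorem: fs / 2.
    return sample_rate_hz / 2

print(round(spectral_resolution(200_000, 16_384), 1))  # 12.2
print(nyquist(200_000))                                # 100000.0, i.e. 0-100 kHz
```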
Each spectrum in Figure 4 is averaged from 100 light fingerprint samples of the same phone. Compared with the spectra of other iPhone series phones, the spectral features of the same model are similar; however, slight differences still occur at 34 kHz, 48 kHz, 60 kHz, 82 kHz, and 90 kHz. If the variation of the light is steady, these can be considered effective features for distinguishing various cell phones for authentication. Therefore, if these differences can be effectively fitted and reconstructed using an AE, the reconstruction error will vary when different cell phones of the same model are input into each phone's model. A corresponding threshold can then be identified to distinguish between individual cell phones. III. DEVICE IDENTIFICATION AND MODEL OPTIMIZATION A. DEVICE IDENTIFICATION USING DEEP LEARNING Autoencoders (AEs) are usually utilized to learn efficient data codings in an unsupervised manner. Their applications include feature extraction and sample variation diagnosis. An AE can be broadly divided into two steps, namely data compression and data decompression, as shown in Figure 5. Training aims at making the outputs as close to the inputs as possible, continuously reducing the reconstruction loss between inputs and outputs. In addition, an AE retains data correlations even though some information is lost during compression and decompression. Thus, if samples with larger variation are input, the model is unable to effectively recover them; the reconstruction error therefore increases, and this property provides the diagnosis ability described in this study. If samples not belonging to one of the classes in the original training set are input while an ANN, CNN, or other machine learning classifier is performing a classification task, the output will be incorrect, i.e., the sample will be misclassified as one of the known categories.
In contrast, the AE training method, which fits only one type of sample, effectively avoids this situation. However, since the basic AE detection algorithm cannot support multi-class tasks, this study improves and extends it to the multi-class problem. As shown in Figure 6, a set of AE models is trained independently, one for each type of sample, and a threshold on the reconstruction error is defined for each model during the training process according to the training conditions. In actual application, a sample is input to each AE model for individual reconstruction error calculation, and each error is compared to the threshold defined during training. If one of the models exhibits a reconstruction error lower than its threshold, the sample is identified as that category. Therefore, the estimation of the threshold value is also an important part of this method. We use a greedy algorithm, called the multi-class greedy autoencoder (MCGAE), to search for the optimal threshold. The greedy algorithm treats every observed reconstruction error as a candidate threshold and calculates the resulting accuracy; we can therefore obtain the accuracy corresponding to each reconstruction error and determine the proper threshold automatically. B. MODEL OPTIMIZATION Here, we describe the optimization of the AI model. The cell phone models involved in the experiment and their corresponding numbers are listed in Table 1. Nineteen mobile phones were used in our experiments, including eight iPhone X1 devices and several other iPhone and Android phones. In addition to identifying the differences in features between different phone models, we also explored the ability to distinguish between phones of the same model. Because the hyper-parameters affect the accuracy of an AI model, several approaches have been proposed for model optimization: grid search, random search, and surrogate-based optimization.
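The greedy threshold search can be sketched as follows. This is a minimal reconstruction of the idea described above (every observed reconstruction error tried as a threshold, the accuracy-maximizing one kept), not the authors' exact implementation.

```python
def greedy_threshold(errors_own, errors_other):
    # errors_own: reconstruction errors of the model's own class (should pass).
    # errors_other: errors of all other samples (should be rejected).
    best_t, best_acc = None, -1.0
    total = len(errors_own) + len(errors_other)
    # Every observed error is a candidate threshold; accept if error <= t.
    for t in sorted(errors_own + errors_other):
        correct = sum(e <= t for e in errors_own) + sum(e > t for e in errors_other)
        acc = correct / total
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

# Well-separated error distributions yield a perfect threshold.
t, acc = greedy_threshold([0.10, 0.20, 0.30], [0.80, 0.90])  # -> (0.3, 1.0)
```

At inference time, a sample is identified as a given phone when that phone's AE reconstructs it with an error below its per-model threshold.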
To obtain the highest accuracy with the smallest number of model parameters, the Bayesian optimization method was used to search for the best parameters when comparing the models studied in this work. Bayesian optimization (BO) is an approach that uses a Gaussian process to build a probabilistic model of the objective as a function of the hyper-parameters. By iteratively evaluating the objective, based on the current model, and updating the model, BO is able to refine the distribution of objective values as more information is gathered. Herein, BO is briefly introduced as follows. First, let f be a black-box function without a closed-form expression that is expensive to evaluate. The goal of optimization is to solve the following problem: x* = arg min_x f(x), where x denotes the hyper-parameters. We use BO to find the global optimum by constructing a Gaussian process model for f(x), which is then exploited to decide where to evaluate the function next. The pseudo-code of the BO procedure is introduced as follows, and the optimization algorithm is implemented with the "Hyperopt" tool. The corresponding search parameters for the CNN structure optimization are shown in Table 2. In this optimization, the number of search iterations was 200 cycles, and the objective function aimed at maximizing the accuracy rate on the testing set. The other training parameters were: the number of convolution layers was 2, the activation function of the convolution layers was ReLU, the number of training epochs was 1000, the learning rate was 0.001, the number of batch optimizations was 100, and the output activation function was ReLU. Algorithm: Bayesian Optimization. In the AE method, the MCGAE framework shown in Figure 6 was applied. We also trained a variational autoencoder (VAE) model using the same training dataset.
To reduce overfitting on the training set, the domain adaptation (DA) concept was utilized to enable the model to focus on the real features of a sample without overfitting the small noise in the training set, reducing the gap between the accuracy of the model after training and the accuracy of the actual online test. Therefore, in this study, we tested the CNN model along with MCGAE, VAE, DA-AE, and DA-VAE for comparison. Table 3 shows the hyper-parameter optimization ranges for MCGAE, VAE, DA-AE, and DA-VAE, respectively. The Bayesian optimization technique was also adopted for hyper-parameter tuning. Within the hyper-parameter search range of these experiments, the model was designed by setting the "number of hidden layers" and the "maximum number of nodes in the hidden layer" in a 100% incremental or 100% decremental manner every other layer. This method effectively controls the model size; a dynamic dropout layer was added, and hyper-parameter tuning was run 200 times. The other training parameters were assigned as follows: learning rate 0.001, 1000 epochs, dropout rate 0.2, and the Adam optimizer. Note that the structural design of the DA methods, DA-AE and DA-VAE, is presented in Figure 7. The testing data were viewed as the target domain. The new tuning parameters were those of the domain classifier, which comprised the maximum number of nodes, the number of layers, and the activation function. In the MCGAE, VAE, DA-AE, and DA-VAE models, the greedy algorithm was used to automatically test all threshold values to find the one with the highest accuracy rate. C. IMPLEMENTATION OF THE AI MODEL ON AN MCU Most recent AI models are implemented on PCs or cloud computing systems, which leads to high cost and communication loss problems; therefore, we here introduce the implementation of the AI model on a microcontroller unit (MCU) for device identification. The trained model should be translated into C code.
The most common approaches are model conversions for AI frameworks such as Caffe 2, TensorFlow Lite for Microcontrollers, and Arm NN, which deploy trained models and inference engines on MCUs. Software tools exist that take pre-trained AI models and convert them into C code for MCUs. This paper utilizes the Renesas RX65N MCU for implementation; therefore, the corresponding e-AI translator is adopted, which converts the trained AI model into C-language code for the RX65N MCU. Moreover, the tool also estimates the memory size and the amount of computation required by the AI model, including the number of multiply-and-accumulate operations performed when the AI model runs. IV. EXPERIMENTAL RESULTS To facilitate the discussion, "normal samples" will be used in this section to denote the cell phone samples involved in training, and "abnormal samples" to denote the cell phone samples not participating in training. The true positive rate (TPR) on abnormal samples is also a very important part of this application; therefore, in addition to the accuracy rate as the training target, improving the TPR on abnormal samples is essential. In the CNN model, ReLU was chosen as the output activation function, and a set of threshold scores was determined for each category according to the training results. The training results are shown in Figure 8, which shows the predicted confidence scores for samples of each category input to the CNN model. The terms "Sample" and "Output Value" denote the sample of each model and the output category of the model, respectively. For each phone, 100 testing set samples were tested, and the results were averaged and plotted on the graph. The accuracy on the testing set was 100%, and the confidence scores of each category were very clear, without overfitting.
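The two headline metrics used in this section can be stated precisely. This sketch assumes the standard definitions (abnormal-sample TPR = fraction of abnormal samples correctly rejected; overall accuracy pools decisions on normal and abnormal samples), which is how the reported 99.85%/99.67% figures read:

```python
def abnormal_tpr(rejected_flags):
    # rejected_flags: one boolean per truly-abnormal sample, True if rejected.
    return sum(rejected_flags) / len(rejected_flags)

def overall_accuracy(correct_normal, n_normal, rejected_abnormal, n_abnormal):
    # Pools correct accepts on normal samples with correct rejects on abnormal ones.
    return (correct_normal + rejected_abnormal) / (n_normal + n_abnormal)

print(abnormal_tpr([True, True, True, False]))  # 0.75
print(overall_accuracy(99, 100, 199, 200))      # ~0.9933
```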
The CNN results above test only the predictive performance on normal samples; however, the performance on abnormal samples was also a focus of this study. Figure 9 shows the results of testing the trained CNN model with abnormal samples, using the light fingerprint data from six untrained phones as the abnormal samples. The prediction result for the Sony model in Figure 9 shows that the confidence score for the iPhone X1 class is higher than 0.6, and the confidence scores for the iPhone 7 and iPhone 8 Plus classes are also predicted to be very high. In comparison with the results of Figure 8, where the confidence scores of all samples are approximately zero except for the correct category, the performance of the CNN in detecting abnormal samples outside the training set is relatively poor. The detailed results of each algorithm in the AE-related methods are summarized in Table 4. In this experiment, MCGAE performs best among the compared algorithms. MCGAE uses the most basic AE structure and exhibits better ACC and TPR on both the training and testing sets than the other variants, indicating that there is no significant difference between the training and testing sets in this application. The VAE and DA methods attempt to reduce overfitting and increase generality by restricting the model and adding an adversarial structure, which may affect the overall accuracy of the model. However, if there were a large discrepancy between the performance on the training and testing sets, these methods could take effect and improve the accuracy on the testing set. From our experiments, we observe hardly any variation between training and testing error across the different autoencoder methods with domain-adaptation learning, which also demonstrates their effectiveness. V. CONCLUSIONS This study utilizes various artificial neural network algorithms.
The comparative algorithms developed for the light fingerprint identification system include CNN, AE, and VAE, as well as DA, introduced to reduce the distribution distance between the source and target domains and thus reduce model overfitting. All models were finally installed on the Renesas RX65N MCU. During the training process, grid search (GS) or Bayesian parameter optimization methods were used to optimize the search, and the model with higher accuracy and lower memory requirement was selected from the search results for subsequent evaluation. The FFT was used as the data pre-processing algorithm to compare the effectiveness of machine learning models such as CNN, MCGAE, VAE, DA-AE, and DA-VAE. In this experiment, the Bayesian method was used to optimize the hyper-parameters. Among the five algorithms, CNN is a classification algorithm that achieves a 100% recognition rate on the training and validation sets. However, since it could not effectively detect untrained testing samples, it did not meet the requirements of this application. The other four algorithms are variations of the AE algorithm; among them, MCGAE exhibited the best performance, with a 99.85% detection rate on abnormal samples and a 99.67% overall accuracy rate. The feasibility of using artificial intelligence to identify light fingerprint anomalies was verified through the empirical results in this paper. The concept of the light fingerprint can be applied to authentication applications.
The metabolic effects of exercise-induced muscle damage. Exercise-induced skeletal muscle damage results in a remarkable number of localized and systemic changes, including release of intracellular proteins, delayed onset muscle soreness, the acute-phase response, and an increase in skeletal muscle protein turnover. These exercise-induced adaptations appear to be integral to the repair of the damaged muscle and may be essential for hypertrophy. Chronic exercise produces adaptations in skeletal muscle, resulting in increased capacity of oxidative metabolism; the repair of damaged muscle resulting in hypertrophy may be an important mechanism for protection against further exercise-induced damage. Although the release of CK from skeletal muscle following damage is a commonly observed phenomenon, circulating CK activity is not a quantitative and, in some cases, even a qualitative indicator of skeletal muscle damage. Eccentric exercise-induced skeletal muscle damage offers an opportunity to investigate the signals and modulators of the repair of muscle damage, a process that may be central to the adaptations in muscle as a result of chronic activity. |
Food web structure in a Salix subfragilis dominated wetland in Hangang estuary using stable isotopes and fatty acid biomarkers Abstract We investigated food webs of a Salix subfragilis-dominated wetland in the Janghang wetland in the Hangang estuary, which is very close to the Demilitarized Zone, along the west coast of Korea. Our study focused on understanding sesarmine crab (Sesarma dehaani)-related food webs in a S. subfragilis forest. For our study, we used carbon and nitrogen stable isotopes and fatty acid biomarkers. We collected samples of plants, animals, and detrital sediment from four quadrats (55 m2) set in the S. subfragilis community. Samples were collected from September 2006 to June 2009, except during the winter hibernation period of S. dehaani. In the wet season, the sediment showed relatively high δ13C and low δ15N signatures compared with relatively low δ13C and high δ15N signatures in the dry season. Mature S. dehaani appeared to feed on fresh leaves and other carbon sources, such as immature individuals or fish, in addition to detrital sediment, which appeared to be the main carbon source for immature crabs. Principal component analysis of fatty acid biomarkers of S. dehaani showed a clear difference between immature individuals (10–30 mm) and mature ones (larger than 30 mm), indicating that the main food source for immature crabs was detrital sediment, whereas mature crabs foraged plants in addition to consuming detrital sediment. On the basis of our results from stable isotope and fatty acid analyses, mature S. dehaani appeared to feed on detrital sediment and fresh leaves of S. subfragilis in summer in addition to engaging in cannibalism of immature individuals.
def _getLabelForNode(self, data):
    # Build an HTML-like Graphviz label: one port cell per child plus the node value.
    portTemp = '<TD PORT="{}"> </TD>'
    children = getattr(data, self.members['children'])
    value = getattr(data, self.members['data'])
    # One port cell per child so edges can attach to individual columns.
    ports = [portTemp.format(i) for i in range(len(children))]
    if len(children) == 0:
        # Leaf node: render a single empty-set cell instead of ports.
        ports = ['<TD COLSPAN="1">∅</TD>']
    colspan = max(1, len(children))
    return NODE_TEMPLATE.format(
        colspan,
        len(children),
        colspan,
        repr(value),
        ''.join(ports)
    )
import React from 'react';
import { InheritedRole } from '../../../../metadata/types';
import TableRow from './TableRow';
type TableBodyProps = {
inheritedRoles: InheritedRole[];
};
const TableBody: React.FC<TableBodyProps> = props => {
const { inheritedRoles } = props;
return (
<tbody>
{inheritedRoles.map((inheritedRole, i) => (
<TableRow key={i} inheritedRole={inheritedRole} />
))}
<TableRow key={inheritedRoles.length} />
</tbody>
);
};
export default TableBody;
|
/**
* Closes the PermissionManager as well as any underlying Realms.
* Any active tasks in progress will be canceled.
*/
@Override
public void close() {
checkIfValid();
synchronized (cacheLock) {
Cache cache = PermissionManager.cache.get(user.getIdentity()).get();
if (cache.instanceCounter > 1) {
cache.instanceCounter--;
return;
}
cache.instanceCounter = 0;
cache.pm = null;
}
closed = true;
delayedTasks.clear();
if (managementRealmOpenTask != null) {
managementRealmOpenTask.cancel();
managementRealmOpenTask = null;
}
if (permissionRealmOpenTask != null) {
permissionRealmOpenTask.cancel();
permissionRealmOpenTask = null;
}
if (defaultPermissionRealmOpenTask != null) {
defaultPermissionRealmOpenTask.cancel();
defaultPermissionRealmOpenTask = null;
}
if (managementRealm != null) {
managementRealm.close();
}
if (permissionRealm != null) {
permissionRealm.close();
}
if (defaultPermissionRealm != null) {
defaultPermissionRealm.close();
}
} |
BEIJING (AP) — A prominent Chinese official who might have been the country's most famous policeman has dropped from sight amid unconfirmed reports of a political scandal and a bid for U.S. asylum.
The Chongqing city government said Wednesday that Wang Lijun, its police chief until last week, had "reportedly" taken leave to recover from anxiety and physical ailments resulting from his heavy work load. A brief government notice said Wang, who is also a vice mayor, was undergoing "vacation-style therapy," but gave no details.
A city government spokesman, who like many Chinese bureaucrats would only give his surname, Ye, confirmed the notice but did not elaborate.
Wang made his mark busting gangs, but Chongqing announced last week that he was being shifted from his police duties to deal with economic and other matters. He has been the subject of days of speculation since then, including online reports that he sought asylum at the American consulate in Chengdu after a falling-out with the city's powerful Communist Party secretary, Bo Xilai.
Richard Buangan, a spokesman for the U.S. Embassy in Beijing, said there would be no comment on the reports of an asylum bid.
Staff at businesses near the Chengdu consulate reported large numbers of police vehicles in the area on Tuesday night, but said the area was quiet on Wednesday. Buangan declined to discuss the reports, but said there had been "no threat to the consulate yesterday, and the U.S. government did not request increased security around the compound."
Bo, who sits on the Communist Party's powerful 25-member Politburo, appointed Wang in 2008 to clean up the force and take on organized crime in a campaign that drew national attention, as well as criticism that it ignored proper legal procedures.
Ye, the city government spokesman, said he could neither deny nor confirm the reports of Wang's asylum bid.
"We saw that on the Internet too. I don't have relevant information now," Ye said.
In a sign of the sensitivity of the matter, search results for Wang and Bo were blocked on China's hugely popular Sina Weibo microblogging service and the comments sections attached to online reports about Wang were disabled.
Wang, a 52-year-old martial arts expert, entered law enforcement in 1984 and served more than two decades in northeast Liaoning province, where Bo was once governor. He won a reputation for personal bravery in confronting gangs and was once the subject of a TV drama called "Iron-Blooded Police Spirits."
His law enforcement success led eventually to high political office, and his association with Bo gave him countrywide name recognition.
A former commerce minister, Bo is considered a leading "princeling" in the party, a reference to the offspring of communist elders whose connections and degrees from top universities have won them entry into the country's elite.
Bo garnered huge publicity for his anti-crime campaign and an accompanying drive to revive communist songs and poems from the 1950s and 1960s. Those campaigns have since fizzled, leading analysts to pull back on speculation that he might be promoted to higher office when the party begins a generational change in leadership later this year.
Associated Press researchers Zhao Liang and Yu Bing contributed to this report. |
package com.lotaris.j2ee.itf;
/**
 * An interface that acts as a marker that allows integration tests
 * to be retrieved and run.
*
* @author <NAME>, <EMAIL>
*/
public interface TestGroup {
/**
 * Forces each SessionBean to return itself so that the tests run
 * correctly.
* @return Itself
*/
TestGroup getTestGroup();
}
|
//
// Generated by classdumpios 1.0.1 (64 bit) (iOS port by DreamDevLost)(Debug version compiled Sep 26 2020 13:48:20).
//
// Copyright (C) 1997-2019 <NAME>.
//
#import "MBFileHandle.h"
@class MBKeyBag, NSMutableData, NSString;
@interface MBEncryptedFileHandle : MBFileHandle
{
NSString *_path; // 8 = 0x8
MBKeyBag *_keybag; // 16 = 0x10
struct _mkbfileref *_file; // 24 = 0x18
NSMutableData *_buffer; // 32 = 0x20
_Bool _restore; // 40 = 0x28
}
+ (id)encryptedFileHandleForRestoreWithPath:(id)arg1 keybag:(id)arg2 key:(id)arg3 error:(id *)arg4; // IMP=0x000000010006dd7c
+ (id)encryptedFileHandleForBackupWithPath:(id)arg1 keybag:(id)arg2 error:(id *)arg3; // IMP=0x000000010006db40
- (long long)writeWithBytes:(const void *)arg1 length:(unsigned long long)arg2 error:(id *)arg3; // IMP=0x000000010006ead0
- (long long)readWithBytes:(void *)arg1 length:(unsigned long long)arg2 error:(id *)arg3; // IMP=0x000000010006e84c
- (_Bool)statWithBuffer:(struct stat *)arg1 error:(id *)arg2; // IMP=0x000000010006e7d8
- (_Bool)closeWithError:(id *)arg1; // IMP=0x000000010006e574
- (_Bool)validateEncryptionKey:(id)arg1 error:(id *)arg2; // IMP=0x000000010006e3bc
- (id)encryptionKeyWithError:(id *)arg1; // IMP=0x000000010006e1ac
- (int)fd; // IMP=0x000000010006e19c
- (id)path; // IMP=0x000000010006e18c
- (void)dealloc; // IMP=0x000000010006e108
- (id)initWithPath:(id)arg1 keybag:(id)arg2 file:(struct _mkbfileref *)arg3 restore:(_Bool)arg4; // IMP=0x000000010006df98
@end
|
Characterization of 51Cr-EDTA as a marker of duodenal mucosal permeability. Proximal duodenum was perfused with various solutions and mucosal permeability assessed by measuring the clearance of 51Cr labelled ethylenediaminetetra-acetate (EDTA) from blood-to-intestinal lumen in anaesthetized rats. Net flux of fluid was determined by measurement of effluent weight changes. Perfusion of duodenum with 50 mM NaCl significantly increased fluid absorption but had no effect on EDTA clearance. EDTA clearance was unaffected by perfusion with 400 mM or 800 mM mannitol. Perfusion with 400 mM NaCl induced a sustained fluid secretion and a small but irregular increase (40%) in EDTA clearance. A significant 3.6-fold increase in clearance was obtained in response to perfusion of duodenum with deionized water. Similarly, perfusion with either 20 mM HCl or 50 mM ethyleneglycol-bis-(beta-amino-ethylether)-N,N'-tetraacetic acid (EGTA) significantly increased the EDTA clearance 3.3-fold and 2-fold respectively. Perfusion with a hypotonic HCl-solution (10 mM HCl + 40 mM NaCl) increased fluid absorption and the EDTA clearance. It is concluded that no positive linear relationship exists between luminal osmolality and 51Cr-EDTA movement across the mucosa. It is postulated that high luminal acidity or extreme hypotonicity increase the EDTA clearance by widening of and/or disruption of intercellular junctional structure. |
import * as ConfiguratorTextfieldActions from './configurator-textfield-group.actions';
export { ConfiguratorTextfieldActions };
|
ST. LOUIS — Third baseman Will Middlebrooks tripped Allen Craig for a game-ending obstruction call on Jon Jay’s ninth-inning grounder, giving the St. Louis Cardinals a bizarre 5-4 win over the Boston Red Sox on Saturday night and a 2-1 World Series lead.
Boston had tied the score with two runs in the eighth, and Yadier Molina singled with one out in the ninth off loser Brandon Workman. Allen Craig pinch hit and lined Koji Uehara’s first pitch down the left-field line for a double that put runners on second and third.
With the infield in, Jon Jay hit a grounder to diving second baseman Dustin Pedroia. He threw home to catcher Jarrod Saltalamacchia, who tagged out the sliding Molina. Saltalamacchia then threw wildly past third, and Middlebrooks, with his stomach on the field, raised both legs and tripped Craig.
Third base umpire Jim Joyce immediately signaled obstruction, and even though a sliding Craig was tagged by Saltalamacchia at the plate following the throw by left fielder Daniel Nava, plate umpire Dana DeMuth signaled safe and then pointed to third, making clear the obstruction had been called.
The Red Sox scored twice in the eighth inning to tie it 4-all. Jacoby Ellsbury led off with a single and Shane Victorino was hit by a pitch for the sixth time this postseason. Both runners moved up on Pedroia’s groundout, and David Ortiz was intentionally walked.
Daniel Nava drove in one with a short-hop grounder that was smothered by second baseman Kolten Wong, who had just entered on defense in a double-switch.
Brandon Workman jammed Matt Holliday and retired the slugger on a routine fly with two on to end the bottom of the eighth. That sent the game to the ninth tied at 4.
Holliday’s two-run double put the Cardinals on top 4-2 in the seventh.
It was a tough inning for Red Sox reliever Craig Breslow. Matt Carpenter reached safely when he checked his swing on an infield single to shortstop. Carlos Beltran was grazed on the elbow pad by a pitch — making no effort to get out of the way.
It was Middlebrooks’ first inning in the field. He entered as a pinch-hitter in the top of the seventh and took over at third base in the bottom half.
That shifted Bogaerts to shortstop — and neither one was able to make the difficult defensive play Boston needed in that inning.
Bogaerts opened the fifth with a triple that banged-up right fielder Beltran couldn’t quite reach. The rookie later scored on a grounder by pinch-hitter Mike Carp.
St. Louis quickly broke ahead, scoring in the first inning for the first time this October on RBI singles by Holliday and Yadier Molina. After the Cardinals got three hits in a span of four pitches, Red Sox reliever Felix Doubront began heating up in a hurry before Jake Peavy settled down.
Peavy wriggled out of a bases-loaded, no-out jam in the fourth to keep the Cardinals’ lead at 2-0. He got some help, too, from St. Louis third base coach Jose Oquendo.
With runners on first and second, Jon Jay hit a sharp single to center. The Red Sox were conceding a run and ready to let Molina score from second, but Oquendo held up the slow-footed catcher.
Peavy actually lowered his career postseason ERA by more than a full run, down to 9.27 in five winless starts.
A day before Kelly and Peavy faced each other, they sounded totally different.
Kelly kidded about his pregame preparation: He stays up all night taking on his Twitter followers, shooting away in “Call of Duty,” the popular first-person war video game.
Peavy, meanwhile, was already ramped up and ready to go.
NOTES: Cardinals Hall of Famers Bob Gibson, Lou Brock, Ozzie Smith and Red Schoendienst took part in the first-ball festivities, with fan favorite Willie McGee tossing the pitch. ... At 21, Bogaerts became the third-youngest player to hit a triple in a World Series. Ty Cobb and Mickey Mantle did it at 20. ... Molina has a six-game hitting streak in World Series play. ... The family of late umpire Wally Bell was in the stands. Bell died at 48 this month, and the six-man crew is wearing patches to honor him. Bell’s first plate job in the World Series was at this ballpark in 2006. |
Rosalina Rios is a confident, humorous, fast-talking woman who moved here from Villa Guerrero, Mexico, 20 years ago. For the past year and a half, Rosie has worked as a live-in housekeeper/cook for TV actor Alan Thicke and his two teen-age sons in their big Spanish-style Toluca Lake home.
While making her popular stir-fry chicken dish in the neat, airy kitchen, Rios talks about herself and her job.
"They think I'm the best nanny, the best cook, the best housekeeper and a driver too. I'm happy here."
Rios says that the Thickes are hospitable people. "The house is always running with friends." On a normal evening there are six to eight people; when there are guests, "15 to 20."
"They always ask: 'How come you don't make more so we can have more the next day?' " Rios says. Alan Thicke likes leftovers. "He's always looking for a bite to eat. Now I've learned to always add some, particularly when I make pork chops and barbecue chicken. But nothing lasts."
Rios is quick to point out that she is not a chef. "I like to do simple foods. I don't have time to make complicated things, with all the work in the house. I never measure anything. Very rarely do I look at books." She laughs and adds: "It has to taste good because I always end up eating with them. So I cook to my taste."
Before Rios came to work here, she worked on a computer, in accounts payable. "Then I hurt my back and I couldn't work in the office any more." It turned out to be a lucky injury. "I make more money than any office girl I know--and I save more money living here. I get medical insurance, retirement benefits." She flashes one of her quick smiles and adds: "I like helping people. I need to give my attention and love to someone else."
"This isn't true Chinese or Mexican," says Rios. "But I call it drunken chicken or pollo borracho. It's a very colorful stir-fry dish that I serve with rice. Lots of times I just add whatever vegetables I have in the refrigerator."
Marinate chicken in pepper, white wine and 2 tablespoons soy sauce 10 to 15 minutes.
Heat oil and butter in wok over medium-high heat. Add chicken, garlic salt, ginger and paprika and saute 3 to 5 minutes. Add carrots and red pepper and stir-fry 3 minutes.
Stir cornstarch and water until smooth. Add to wok with remaining 1 tablespoon soy sauce and teriyaki sauce. Allow to boil gently, stirring until thickened. Add onion, Chinese pea pods and napa cabbage. Stir-fry 1 to 2 minutes or until pea pods are tender-crisp. Adjust seasonings to taste. Gently mix in tomato wedges. Heat briefly and serve immediately with rice. Makes 4 to 6 servings. |
package view
import (
"appengine"
"appengine/datastore"
"github.com/OwenDurni/loltools/model"
"github.com/OwenDurni/loltools/util/errwrap"
"net/http"
)
// AdminIndexHandler renders the admin console page, showing the configured
// Riot API key, the backlog of unsaved game stats, and the current rate limit.
func AdminIndexHandler(
  w http.ResponseWriter, r *http.Request, args map[string]string) {
c := appengine.NewContext(r)
user, _, err := model.GetUser(c)
if HandleError(c, w, errwrap.Wrap(err)) {
return
}
ctx := struct {
ctxBase
RiotApiKey *model.RiotApiKey
GameStatsBacklogCount int
RiotRateLimit string
}{}
ctx.ctxBase.init(c, user)
ctx.ctxBase.Title = "Admin Console"
riotApiKey, err := model.GetRiotApiKey(c)
ctx.RiotApiKey = riotApiKey
ctx.ctxBase.AddError(errwrap.Wrap(err))
q := datastore.NewQuery("PlayerGameStats").
Filter("Saved =", false).
Filter("NotAvailable =", false).
KeysOnly()
gameStatsKeys, err := q.GetAll(c, nil)
ctx.GameStatsBacklogCount = len(gameStatsKeys)
ctx.ctxBase.AddError(errwrap.Wrap(err))
ctx.RiotRateLimit = model.RiotApiRateLimiter.DebugStr(c)
err = RenderTemplate(w, "admin.html", "base", ctx)
if HandleError(c, w, errwrap.Wrap(err)) {
return
}
}
// ApiAdminRiotKeySetHandler stores the Riot API key submitted via the "key"
// form value.
func ApiAdminRiotKeySetHandler(
  w http.ResponseWriter, r *http.Request, args map[string]string) {
c := appengine.NewContext(r)
apikey := r.FormValue("key")
err := model.SetRiotApiKey(c, apikey)
if ApiHandleError(c, w, err) {
return
}
HttpReplyOkEmpty(w)
}
|
'use strict';
import * as chai from 'chai';
import * as _ from 'lodash';
import Operators from '../../../../lib/operators';
import Support from '../../../support';
const expect = chai.expect;
const dialect = Support.getTestDialect();
const QueryGenerator = Support.sequelize.dialect.QueryGenerator;
if (dialect === 'mysql') {
describe('[MYSQL Specific] QueryGenerator', () => {
const suites = {
arithmeticQuery: [
{
title: 'Should use the plus operator',
arguments: ['+', 'myTable', { foo: 'bar' }, {}, {}],
expectation: 'UPDATE `myTable` SET `foo`=`foo`+ \'bar\' '
},
{
title: 'Should use the plus operator with where clause',
arguments: ['+', 'myTable', { foo: 'bar' }, { bar: 'biz'}, {}],
expectation: 'UPDATE `myTable` SET `foo`=`foo`+ \'bar\' WHERE `bar` = \'biz\''
},
{
title: 'Should use the minus operator',
arguments: ['-', 'myTable', { foo: 'bar' }, {}, {}],
expectation: 'UPDATE `myTable` SET `foo`=`foo`- \'bar\' '
},
{
title: 'Should use the minus operator with negative value',
arguments: ['-', 'myTable', { foo: -1 }, {}, {}],
expectation: 'UPDATE `myTable` SET `foo`=`foo`- -1 '
},
{
title: 'Should use the minus operator with where clause',
arguments: ['-', 'myTable', { foo: 'bar' }, { bar: 'biz'}, {}],
expectation: 'UPDATE `myTable` SET `foo`=`foo`- \'bar\' WHERE `bar` = \'biz\''
},
],
attributesToSQL: [
{
arguments: [{id: 'INTEGER'}],
expectation: {id: 'INTEGER'}
},
{
arguments: [{id: 'INTEGER', foo: 'VARCHAR(255)'}],
expectation: {id: 'INTEGER', foo: 'VARCHAR(255)'}
},
{
arguments: [{id: {type: 'INTEGER'}}],
expectation: {id: 'INTEGER'}
},
{
arguments: [{id: {type: 'INTEGER', allowNull: false}}],
expectation: {id: 'INTEGER NOT NULL'}
},
{
arguments: [{id: {type: 'INTEGER', allowNull: true}}],
expectation: {id: 'INTEGER'}
},
{
arguments: [{id: {type: 'INTEGER', primaryKey: true, autoIncrement: true}}],
expectation: {id: 'INTEGER auto_increment PRIMARY KEY'}
},
{
arguments: [{id: {type: 'INTEGER', defaultValue: 0}}],
expectation: {id: 'INTEGER DEFAULT 0'}
},
{
arguments: [{id: {type: 'INTEGER', unique: true}}],
expectation: {id: 'INTEGER UNIQUE'}
},
{
arguments: [{id: {type: 'INTEGER', after: 'Bar'}}],
expectation: {id: 'INTEGER AFTER `Bar`'}
},
// No Default Values allowed for certain types
{
title: 'No Default value for MySQL BLOB allowed',
arguments: [{id: {type: 'BLOB', defaultValue: []}}],
expectation: {id: 'BLOB'}
},
{
title: 'No Default value for MySQL TEXT allowed',
arguments: [{id: {type: 'TEXT', defaultValue: []}}],
expectation: {id: 'TEXT'}
},
{
title: 'No Default value for MySQL GEOMETRY allowed',
arguments: [{id: {type: 'GEOMETRY', defaultValue: []}}],
expectation: {id: 'GEOMETRY'}
},
{
title: 'No Default value for MySQL JSON allowed',
arguments: [{id: {type: 'JSON', defaultValue: []}}],
expectation: {id: 'JSON'}
},
// New references style
{
arguments: [{id: {type: 'INTEGER', references: { model: 'Bar' }}}],
expectation: {id: 'INTEGER REFERENCES `Bar` (`id`)'}
},
{
arguments: [{id: {type: 'INTEGER', references: { model: 'Bar', key: 'pk' }}}],
expectation: {id: 'INTEGER REFERENCES `Bar` (`pk`)'}
},
{
arguments: [{id: {type: 'INTEGER', references: { model: 'Bar' }, onDelete: 'CASCADE'}}],
expectation: {id: 'INTEGER REFERENCES `Bar` (`id`) ON DELETE CASCADE'}
},
{
arguments: [{id: {type: 'INTEGER', references: { model: 'Bar' }, onUpdate: 'RESTRICT'}}],
expectation: {id: 'INTEGER REFERENCES `Bar` (`id`) ON UPDATE RESTRICT'}
},
{
arguments: [{id: {type: 'INTEGER', allowNull: false, autoIncrement: true, defaultValue: 1, references: { model: 'Bar' }, onDelete: 'CASCADE', onUpdate: 'RESTRICT'}}],
expectation: {id: 'INTEGER NOT NULL auto_increment DEFAULT 1 REFERENCES `Bar` (`id`) ON DELETE CASCADE ON UPDATE RESTRICT'}
},
],
createTableQuery: [
{
arguments: ['myTable', {title: 'VARCHAR(255)', name: 'VARCHAR(255)'}],
expectation: 'CREATE TABLE IF NOT EXISTS `myTable` (`title` VARCHAR(255), `name` VARCHAR(255)) ENGINE=InnoDB;'
},
{
arguments: ['myTable', {data: 'BLOB'}],
expectation: 'CREATE TABLE IF NOT EXISTS `myTable` (`data` BLOB) ENGINE=InnoDB;'
},
{
arguments: ['myTable', {data: 'LONGBLOB'}],
expectation: 'CREATE TABLE IF NOT EXISTS `myTable` (`data` LONGBLOB) ENGINE=InnoDB;'
},
{
arguments: ['myTable', {title: 'VARCHAR(255)', name: 'VARCHAR(255)'}, {engine: 'MyISAM'}],
expectation: 'CREATE TABLE IF NOT EXISTS `myTable` (`title` VARCHAR(255), `name` VARCHAR(255)) ENGINE=MyISAM;'
},
{
arguments: ['myTable', {title: 'VARCHAR(255)', name: 'VARCHAR(255)'}, {charset: 'utf8', collate: 'utf8_unicode_ci'}],
expectation: 'CREATE TABLE IF NOT EXISTS `myTable` (`title` VARCHAR(255), `name` VARCHAR(255)) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE utf8_unicode_ci;'
},
{
arguments: ['myTable', {title: 'VARCHAR(255)', name: 'VARCHAR(255)'}, {charset: 'latin1'}],
expectation: 'CREATE TABLE IF NOT EXISTS `myTable` (`title` VARCHAR(255), `name` VARCHAR(255)) ENGINE=InnoDB DEFAULT CHARSET=latin1;'
},
{
arguments: ['myTable', {title: 'ENUM("A", "B", "C")', name: 'VARCHAR(255)'}, {charset: 'latin1'}],
expectation: 'CREATE TABLE IF NOT EXISTS `myTable` (`title` ENUM(\"A\", \"B\", \"C\"), `name` VARCHAR(255)) ENGINE=InnoDB DEFAULT CHARSET=latin1;'
},
{
arguments: ['myTable', {title: 'VARCHAR(255)', name: 'VARCHAR(255)'}, { rowFormat: 'default' }],
expectation: 'CREATE TABLE IF NOT EXISTS `myTable` (`title` VARCHAR(255), `name` VARCHAR(255)) ENGINE=InnoDB ROW_FORMAT=default;'
},
{
arguments: ['myTable', {title: 'VARCHAR(255)', name: 'VARCHAR(255)', id: 'INTEGER PRIMARY KEY'}],
expectation: 'CREATE TABLE IF NOT EXISTS `myTable` (`title` VARCHAR(255), `name` VARCHAR(255), `id` INTEGER , PRIMARY KEY (`id`)) ENGINE=InnoDB;'
},
{
arguments: ['myTable', {title: 'VARCHAR(255)', name: 'VARCHAR(255)', otherId: 'INTEGER REFERENCES `otherTable` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION'}],
expectation: 'CREATE TABLE IF NOT EXISTS `myTable` (`title` VARCHAR(255), `name` VARCHAR(255), `otherId` INTEGER, FOREIGN KEY (`otherId`) REFERENCES `otherTable` (`id`) ON DELETE CASCADE ON UPDATE NO ACTION) ENGINE=InnoDB;'
},
{
arguments: ['myTable', {title: 'VARCHAR(255)', name: 'VARCHAR(255)'}, {uniqueKeys: [{fields: ['title', 'name']}]}],
expectation: 'CREATE TABLE IF NOT EXISTS `myTable` (`title` VARCHAR(255), `name` VARCHAR(255), UNIQUE `uniq_myTable_title_name` (`title`, `name`)) ENGINE=InnoDB;'
},
{
arguments: ['myTable', {id: 'INTEGER auto_increment PRIMARY KEY'}, {initialAutoIncrement: 1000001}],
expectation: 'CREATE TABLE IF NOT EXISTS `myTable` (`id` INTEGER auto_increment , PRIMARY KEY (`id`)) ENGINE=InnoDB AUTO_INCREMENT=1000001;'
},
],
dropTableQuery: [
{
arguments: ['myTable'],
expectation: 'DROP TABLE IF EXISTS `myTable`;'
},
],
selectQuery: [
{
arguments: ['myTable'],
expectation: 'SELECT * FROM `myTable`;',
context: QueryGenerator
}, {
arguments: ['myTable', {attributes: ['id', 'name']}],
expectation: 'SELECT `id`, `name` FROM `myTable`;',
context: QueryGenerator
}, {
arguments: ['myTable', {where: {id: 2}}],
expectation: 'SELECT * FROM `myTable` WHERE `myTable`.`id` = 2;',
context: QueryGenerator
}, {
arguments: ['myTable', {where: {name: 'foo'}}],
expectation: "SELECT * FROM `myTable` WHERE `myTable`.`name` = 'foo';",
context: QueryGenerator
}, {
arguments: ['myTable', {where: {name: "foo';DROP TABLE myTable;"}}],
expectation: "SELECT * FROM `myTable` WHERE `myTable`.`name` = 'foo\\';DROP TABLE myTable;';",
context: QueryGenerator
}, {
arguments: ['myTable', {where: 2}],
expectation: 'SELECT * FROM `myTable` WHERE `myTable`.`id` = 2;',
context: QueryGenerator
}, {
arguments: ['foo', { attributes: [['count(*)', 'count']] }],
expectation: 'SELECT count(*) AS `count` FROM `foo`;',
context: QueryGenerator
}, {
arguments: ['myTable', {order: ['id']}],
expectation: 'SELECT * FROM `myTable` ORDER BY `id`;',
context: QueryGenerator
}, {
arguments: ['myTable', {order: ['id', 'DESC']}],
expectation: 'SELECT * FROM `myTable` ORDER BY `id`, `DESC`;',
context: QueryGenerator
}, {
arguments: ['myTable', {order: ['myTable.id']}],
expectation: 'SELECT * FROM `myTable` ORDER BY `myTable`.`id`;',
context: QueryGenerator
}, {
arguments: ['myTable', {order: [['myTable.id', 'DESC']]}],
expectation: 'SELECT * FROM `myTable` ORDER BY `myTable`.`id` DESC;',
context: QueryGenerator
}, {
arguments: ['myTable', {order: [['id', 'DESC']]}, function(sequelize) {return sequelize.define('myTable', {}); }],
expectation: 'SELECT * FROM `myTable` AS `myTable` ORDER BY `myTable`.`id` DESC;',
context: QueryGenerator,
needsSequelize: true
}, {
arguments: ['myTable', {order: [['id', 'DESC'], ['name']]}, function(sequelize) {return sequelize.define('myTable', {}); }],
expectation: 'SELECT * FROM `myTable` AS `myTable` ORDER BY `myTable`.`id` DESC, `myTable`.`name`;',
context: QueryGenerator,
needsSequelize: true
}, {
title: 'functions can take functions as arguments',
arguments: ['myTable', function(sequelize) {
return {
order: [[sequelize.fn('f1', sequelize.fn('f2', sequelize.col('id'))), 'DESC']]
};
}],
expectation: 'SELECT * FROM `myTable` ORDER BY f1(f2(`id`)) DESC;',
context: QueryGenerator,
needsSequelize: true
}, {
title: 'functions can take all types as arguments',
arguments: ['myTable', function(sequelize) {
return {
order: [
[sequelize.fn('f1', sequelize.col('myTable.id')), 'DESC'],
[sequelize.fn('f2', 12, 'lalala', new Date(Date.UTC(2011, 2, 27, 10, 1, 55))), 'ASC'],
]
};
}],
expectation: "SELECT * FROM `myTable` ORDER BY f1(`myTable`.`id`) DESC, f2(12, 'lalala', '2011-03-27 10:01:55') ASC;",
context: QueryGenerator,
needsSequelize: true
}, {
title: 'sequelize.where with .fn as attribute and default comparator',
arguments: ['myTable', function(sequelize) {
return {
where: sequelize.and(
sequelize.where(sequelize.fn('LOWER', sequelize.col('user.name')), 'jan'),
{ type: 1 }
)
};
}],
expectation: "SELECT * FROM `myTable` WHERE (LOWER(`user`.`name`) = 'jan' AND `myTable`.`type` = 1);",
context: QueryGenerator,
needsSequelize: true
}, {
title: 'sequelize.where with .fn as attribute and LIKE comparator',
arguments: ['myTable', function(sequelize) {
return {
where: sequelize.and(
sequelize.where(sequelize.fn('LOWER', sequelize.col('user.name')), 'LIKE', '%t%'),
{ type: 1 }
)
};
}],
expectation: "SELECT * FROM `myTable` WHERE (LOWER(`user`.`name`) LIKE '%t%' AND `myTable`.`type` = 1);",
context: QueryGenerator,
needsSequelize: true
}, {
title: 'single string argument should be quoted',
arguments: ['myTable', {group: 'name'}],
expectation: 'SELECT * FROM `myTable` GROUP BY `name`;',
context: QueryGenerator
}, {
arguments: ['myTable', { group: ['name'] }],
expectation: 'SELECT * FROM `myTable` GROUP BY `name`;',
context: QueryGenerator
}, {
title: 'functions work for group by',
arguments: ['myTable', function(sequelize) {
return {
group: [sequelize.fn('YEAR', sequelize.col('createdAt'))]
};
}],
expectation: 'SELECT * FROM `myTable` GROUP BY YEAR(`createdAt`);',
context: QueryGenerator,
needsSequelize: true
}, {
title: 'It is possible to mix sequelize.fn and string arguments to group by',
arguments: ['myTable', function(sequelize) {
return {
group: [sequelize.fn('YEAR', sequelize.col('createdAt')), 'title']
};
}],
expectation: 'SELECT * FROM `myTable` GROUP BY YEAR(`createdAt`), `title`;',
context: QueryGenerator,
needsSequelize: true
}, {
arguments: ['myTable', {group: 'name', order: [['id', 'DESC']]}],
expectation: 'SELECT * FROM `myTable` GROUP BY `name` ORDER BY `id` DESC;',
context: QueryGenerator
}, {
title: 'HAVING clause works with where-like hash',
arguments: ['myTable', function(sequelize) {
return {
attributes: ['*', [sequelize.fn('YEAR', sequelize.col('createdAt')), 'creationYear']],
group: ['creationYear', 'title'],
having: { creationYear: { gt: 2002 } }
};
}],
expectation: 'SELECT *, YEAR(`createdAt`) AS `creationYear` FROM `myTable` GROUP BY `creationYear`, `title` HAVING `creationYear` > 2002;',
context: QueryGenerator,
needsSequelize: true
}, {
title: 'Combination of sequelize.fn, sequelize.col and { in: ... }',
arguments: ['myTable', function(sequelize) {
return {
where: sequelize.and(
{ archived: null},
sequelize.where(sequelize.fn('COALESCE', sequelize.col('place_type_codename'), sequelize.col('announcement_type_codename')), { in: ['Lost', 'Found'] })
)
};
}],
expectation: "SELECT * FROM `myTable` WHERE (`myTable`.`archived` IS NULL AND COALESCE(`place_type_codename`, `announcement_type_codename`) IN ('Lost', 'Found'));",
context: QueryGenerator,
needsSequelize: true
}, {
arguments: ['myTable', {limit: 10}],
expectation: 'SELECT * FROM `myTable` LIMIT 10;',
context: QueryGenerator
}, {
arguments: ['myTable', {limit: 10, offset: 2}],
expectation: 'SELECT * FROM `myTable` LIMIT 2, 10;',
context: QueryGenerator
}, {
title: 'uses default limit if only offset is specified',
arguments: ['myTable', {offset: 2}],
expectation: 'SELECT * FROM `myTable` LIMIT 2, 10000000000000;',
context: QueryGenerator
}, {
title: 'uses limit 0',
arguments: ['myTable', {limit: 0}],
expectation: 'SELECT * FROM `myTable` LIMIT 0;',
context: QueryGenerator
}, {
title: 'uses offset 0',
arguments: ['myTable', {offset: 0}],
expectation: 'SELECT * FROM `myTable` LIMIT 0, 10000000000000;',
context: QueryGenerator
}, {
title: 'multiple where arguments',
arguments: ['myTable', {where: {boat: 'canoe', weather: 'cold'}}],
expectation: "SELECT * FROM `myTable` WHERE `myTable`.`boat` = 'canoe' AND `myTable`.`weather` = 'cold';",
context: QueryGenerator
}, {
title: 'no where arguments (object)',
arguments: ['myTable', {where: {}}],
expectation: 'SELECT * FROM `myTable`;',
context: QueryGenerator
}, {
title: 'no where arguments (string)',
arguments: ['myTable', {where: ['']}],
expectation: 'SELECT * FROM `myTable` WHERE 1=1;',
context: QueryGenerator
}, {
title: 'no where arguments (null)',
arguments: ['myTable', {where: null}],
expectation: 'SELECT * FROM `myTable`;',
context: QueryGenerator
}, {
title: 'buffer as where argument',
arguments: ['myTable', {where: { field: new Buffer('Sequelize')}}],
expectation: "SELECT * FROM `myTable` WHERE `myTable`.`field` = X'53657175656c697a65';",
context: QueryGenerator
}, {
title: 'use != if ne !== null',
arguments: ['myTable', {where: {field: {ne: 0}}}],
expectation: 'SELECT * FROM `myTable` WHERE `myTable`.`field` != 0;',
context: QueryGenerator
}, {
title: 'use IS NOT if ne === null',
arguments: ['myTable', {where: {field: {ne: null}}}],
expectation: 'SELECT * FROM `myTable` WHERE `myTable`.`field` IS NOT NULL;',
context: QueryGenerator
}, {
title: 'use IS NOT if not === BOOLEAN',
arguments: ['myTable', {where: {field: {not: true}}}],
expectation: 'SELECT * FROM `myTable` WHERE `myTable`.`field` IS NOT true;',
context: QueryGenerator
}, {
title: 'use != if not !== BOOLEAN',
arguments: ['myTable', {where: {field: {not: 3}}}],
expectation: 'SELECT * FROM `myTable` WHERE `myTable`.`field` != 3;',
context: QueryGenerator
}, {
title: 'Regular Expression in where clause',
arguments: ['myTable', {where: {field: {$regexp: '^[h|a|t]'}}}],
expectation: "SELECT * FROM `myTable` WHERE `myTable`.`field` REGEXP '^[h|a|t]';",
context: QueryGenerator
}, {
title: 'Regular Expression negation in where clause',
arguments: ['myTable', {where: {field: {$notRegexp: '^[h|a|t]'}}}],
expectation: "SELECT * FROM `myTable` WHERE `myTable`.`field` NOT REGEXP '^[h|a|t]';",
context: QueryGenerator
},
],
insertQuery: [
{
arguments: ['myTable', {name: 'foo'}],
expectation: "INSERT INTO `myTable` (`name`) VALUES ('foo');"
}, {
arguments: ['myTable', {name: "foo';DROP TABLE myTable;"}],
expectation: "INSERT INTO `myTable` (`name`) VALUES ('foo\\';DROP TABLE myTable;');"
}, {
arguments: ['myTable', {name: 'foo', birthday: new Date(Date.UTC(2011, 2, 27, 10, 1, 55))}],
expectation: "INSERT INTO `myTable` (`name`,`birthday`) VALUES ('foo','2011-03-27 10:01:55');"
}, {
arguments: ['myTable', {name: 'foo', foo: 1}],
expectation: "INSERT INTO `myTable` (`name`,`foo`) VALUES ('foo',1);"
}, {
arguments: ['myTable', {data: new Buffer('Sequelize') }],
expectation: "INSERT INTO `myTable` (`data`) VALUES (X'53657175656c697a65');"
}, {
arguments: ['myTable', {name: 'foo', foo: 1, nullValue: null}],
expectation: "INSERT INTO `myTable` (`name`,`foo`,`nullValue`) VALUES ('foo',1,NULL);"
}, {
arguments: ['myTable', {name: 'foo', foo: 1, nullValue: null}],
expectation: "INSERT INTO `myTable` (`name`,`foo`,`nullValue`) VALUES ('foo',1,NULL);",
context: {options: {omitNull: false}}
}, {
arguments: ['myTable', {name: 'foo', foo: 1, nullValue: null}],
expectation: "INSERT INTO `myTable` (`name`,`foo`) VALUES ('foo',1);",
context: {options: {omitNull: true}}
}, {
arguments: ['myTable', {name: 'foo', foo: 1, nullValue: undefined}],
expectation: "INSERT INTO `myTable` (`name`,`foo`) VALUES ('foo',1);",
context: {options: {omitNull: true}}
}, {
arguments: ['myTable', {foo: false}],
expectation: 'INSERT INTO `myTable` (`foo`) VALUES (false);'
}, {
arguments: ['myTable', {foo: true}],
expectation: 'INSERT INTO `myTable` (`foo`) VALUES (true);'
}, {
arguments: ['myTable', function(sequelize) {
return {
foo: sequelize.fn('NOW')
};
}],
expectation: 'INSERT INTO `myTable` (`foo`) VALUES (NOW());',
needsSequelize: true
},
],
bulkInsertQuery: [
{
arguments: ['myTable', [{name: 'foo'}, {name: 'bar'}]],
expectation: "INSERT INTO `myTable` (`name`) VALUES ('foo'),('bar');"
}, {
arguments: ['myTable', [{name: "foo';DROP TABLE myTable;"}, {name: 'bar'}]],
expectation: "INSERT INTO `myTable` (`name`) VALUES ('foo\\';DROP TABLE myTable;'),('bar');"
}, {
arguments: ['myTable', [{name: 'foo', birthday: new Date(Date.UTC(2011, 2, 27, 10, 1, 55))}, {name: 'bar', birthday: new Date(Date.UTC(2012, 2, 27, 10, 1, 55))}]],
expectation: "INSERT INTO `myTable` (`name`,`birthday`) VALUES ('foo','2011-03-27 10:01:55'),('bar','2012-03-27 10:01:55');"
}, {
arguments: ['myTable', [{name: 'foo', foo: 1}, {name: 'bar', foo: 2}]],
expectation: "INSERT INTO `myTable` (`name`,`foo`) VALUES ('foo',1),('bar',2);"
}, {
arguments: ['myTable', [{name: 'foo', foo: 1, nullValue: null}, {name: 'bar', nullValue: null}]],
expectation: "INSERT INTO `myTable` (`name`,`foo`,`nullValue`) VALUES ('foo',1,NULL),('bar',NULL,NULL);"
}, {
arguments: ['myTable', [{name: 'foo', foo: 1, nullValue: null}, {name: 'bar', foo: 2, nullValue: null}]],
expectation: "INSERT INTO `myTable` (`name`,`foo`,`nullValue`) VALUES ('foo',1,NULL),('bar',2,NULL);",
context: {options: {omitNull: false}}
}, {
arguments: ['myTable', [{name: 'foo', foo: 1, nullValue: null}, {name: 'bar', foo: 2, nullValue: null}]],
expectation: "INSERT INTO `myTable` (`name`,`foo`,`nullValue`) VALUES ('foo',1,NULL),('bar',2,NULL);",
context: {options: {omitNull: true}} // Note: We don't honour this because it makes little sense when some rows may have nulls and others not
}, {
arguments: ['myTable', [{name: 'foo', foo: 1, nullValue: undefined}, {name: 'bar', foo: 2, undefinedValue: undefined}]],
expectation: "INSERT INTO `myTable` (`name`,`foo`,`nullValue`,`undefinedValue`) VALUES ('foo',1,NULL,NULL),('bar',2,NULL,NULL);",
context: {options: {omitNull: true}} // Note: As above
}, {
arguments: ['myTable', [{name: 'foo', value: true}, {name: 'bar', value: false}]],
expectation: "INSERT INTO `myTable` (`name`,`value`) VALUES ('foo',true),('bar',false);"
}, {
arguments: ['myTable', [{name: 'foo'}, {name: 'bar'}], {ignoreDuplicates: true}],
expectation: "INSERT IGNORE INTO `myTable` (`name`) VALUES ('foo'),('bar');"
}, {
arguments: ['myTable', [{name: 'foo'}, {name: 'bar'}], {updateOnDuplicate: ['name']}],
expectation: "INSERT INTO `myTable` (`name`) VALUES ('foo'),('bar') ON DUPLICATE KEY UPDATE `name`=VALUES(`name`);"
},
],
updateQuery: [
{
arguments: ['myTable', {name: 'foo', birthday: new Date(Date.UTC(2011, 2, 27, 10, 1, 55))}, {id: 2}],
expectation: "UPDATE `myTable` SET `name`='foo',`birthday`='2011-03-27 10:01:55' WHERE `id` = 2"
}, {
arguments: ['myTable', {name: 'foo', birthday: new Date(Date.UTC(2011, 2, 27, 10, 1, 55))}, {id: 2}],
expectation: "UPDATE `myTable` SET `name`='foo',`birthday`='2011-03-27 10:01:55' WHERE `id` = 2"
}, {
arguments: ['myTable', {bar: 2}, {name: 'foo'}],
expectation: "UPDATE `myTable` SET `bar`=2 WHERE `name` = 'foo'"
}, {
arguments: ['myTable', {name: "foo';DROP TABLE myTable;"}, {name: 'foo'}],
expectation: "UPDATE `myTable` SET `name`='foo\\';DROP TABLE myTable;' WHERE `name` = 'foo'"
}, {
arguments: ['myTable', {bar: 2, nullValue: null}, {name: 'foo'}],
expectation: "UPDATE `myTable` SET `bar`=2,`nullValue`=NULL WHERE `name` = 'foo'"
}, {
arguments: ['myTable', {bar: 2, nullValue: null}, {name: 'foo'}],
expectation: "UPDATE `myTable` SET `bar`=2,`nullValue`=NULL WHERE `name` = 'foo'",
context: {options: {omitNull: false}}
}, {
arguments: ['myTable', {bar: 2, nullValue: null}, {name: 'foo'}],
expectation: "UPDATE `myTable` SET `bar`=2 WHERE `name` = 'foo'",
context: {options: {omitNull: true}}
}, {
arguments: ['myTable', {bar: false}, {name: 'foo'}],
expectation: "UPDATE `myTable` SET `bar`=false WHERE `name` = 'foo'"
}, {
arguments: ['myTable', {bar: true}, {name: 'foo'}],
expectation: "UPDATE `myTable` SET `bar`=true WHERE `name` = 'foo'"
}, {
arguments: ['myTable', function(sequelize) {
return {
bar: sequelize.fn('NOW')
};
}, {name: 'foo'}],
expectation: "UPDATE `myTable` SET `bar`=NOW() WHERE `name` = 'foo'",
needsSequelize: true
}, {
arguments: ['myTable', function(sequelize) {
return {
bar: sequelize.col('foo')
};
}, {name: 'foo'}],
expectation: "UPDATE `myTable` SET `bar`=`foo` WHERE `name` = 'foo'",
needsSequelize: true
},
],
showIndexesQuery: [
{
arguments: ['User'],
expectation: 'SHOW INDEX FROM `User`'
}, {
arguments: ['User', { database: 'sequelize' }],
expectation: 'SHOW INDEX FROM `User` FROM `sequelize`'
},
],
removeIndexQuery: [
{
arguments: ['User', 'user_foo_bar'],
expectation: 'DROP INDEX `user_foo_bar` ON `User`'
}, {
arguments: ['User', ['foo', 'bar']],
expectation: 'DROP INDEX `user_foo_bar` ON `User`'
},
],
getForeignKeyQuery: [
{
arguments: ['User', 'email'],
expectation: 'SELECT CONSTRAINT_NAME as constraint_name,CONSTRAINT_NAME as constraintName,CONSTRAINT_SCHEMA as constraintSchema,CONSTRAINT_SCHEMA as constraintCatalog,TABLE_NAME as tableName,TABLE_SCHEMA as tableSchema,'
+ 'TABLE_SCHEMA as tableCatalog,COLUMN_NAME as columnName,REFERENCED_TABLE_SCHEMA as referencedTableSchema,REFERENCED_TABLE_SCHEMA as referencedTableCatalog,REFERENCED_TABLE_NAME as referencedTableName,REFERENCED_COLUMN_NAME as'
+ " referencedColumnName FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE WHERE (REFERENCED_TABLE_NAME = 'User' AND REFERENCED_COLUMN_NAME = 'email') OR (TABLE_NAME = 'User' AND COLUMN_NAME = 'email' AND REFERENCED_TABLE_NAME IS NOT NULL)"
},
]
};
_.each(suites, (tests, suiteTitle) => {
describe(suiteTitle, () => {
(tests as any).forEach(test => {
const title = test.title || 'MySQL correctly returns ' + test.expectation + ' for ' + JSON.stringify(test.arguments);
it(title, function() {
// Options would normally be set by the query interface that instantiates the query-generator, but here we specify it explicitly
const context = test.context || {options: {}};
if (test.needsSequelize) {
if (_.isFunction(test.arguments[1])) {
test.arguments[1] = test.arguments[1](this.sequelize);
}
if (_.isFunction(test.arguments[2])) {
test.arguments[2] = test.arguments[2](this.sequelize);
}
}
QueryGenerator.options = _.assign(context.options, { timezone: '+00:00' });
QueryGenerator._dialect = this.sequelize.dialect;
QueryGenerator.sequelize = this.sequelize;
QueryGenerator.setOperatorsAliases(Operators.LegacyAliases);
const conditions = QueryGenerator[suiteTitle].apply(QueryGenerator, test.arguments);
expect(conditions).to.deep.equal(test.expectation);
});
});
});
});
});
}
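Several of the expectations above hinge on MySQL-style string escaping, e.g. the injection attempt `foo';DROP TABLE myTable;` becoming `'foo\';DROP TABLE myTable;'` so the quote cannot terminate the literal. A minimal sketch of that escaping (an illustration only, not Sequelize's actual escape function):

```javascript
'use strict';

// Backslash-escape single quotes and backslashes, then wrap in single
// quotes, mirroring the MySQL-style literals seen in the expectations.
function escapeMysqlString(value) {
  const escaped = String(value).replace(/[\\']/g, ch => '\\' + ch);
  return "'" + escaped + "'";
}

const malicious = "foo';DROP TABLE myTable;";
console.log(`SELECT * FROM \`myTable\` WHERE \`name\` = ${escapeMysqlString(malicious)};`);
```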
|
/* Convert warning number to string. */
static char *warnMsg(int warning) {
switch (warning) {
case warn_move0:
return "no initial moveto (inserted)";
    case warn_move1:
        return "moveto precedes closepath (discarded)";
case warn_move2:
return "moveto sequence (collapsed)";
case warn_hint0:
return "negative hint (reversed)";
case warn_hint1:
return "duplicate hintsubs (discarded)";
case warn_hint2:
return "unhinted";
case warn_hint3:
return "stem list overflow (discarded)";
case warn_hint4:
return "hint overlap";
case warn_hint5:
return "all hintsubs removed (fixupmap enabled)";
case warn_hint6:
return "consecutive hintsubs (discarded)";
case warn_hint7:
return "unused hints";
case warn_flex0:
return "non-perpendicular flex";
case warn_flex1:
return "suspect flex args";
case warn_dup0:
return "glyph skipped - duplicate of glyph in font";
case warn_dup1:
return "glyph skipped - same name, different charstring as glyph in font";
default:
return "unknown warning!";
}
} |
/**
* @class GameController
*
* Central controller for handling all game related business logic including
* starting and stopping the remote processes. This should be used directly by
* the UI.
*/
#include "game/gamecontroller.h"
/* Configuration file properties */
const QString GameController::CONFIG_CLIENT_OBJECT = QStringLiteral("clientArguments");
const QString GameController::CONFIG_FILE_NAME = QStringLiteral("config.json");
const QString GameController::CONFIG_SERVER_OBJECT = QStringLiteral("serverArguments");
/**
* Destructor function
*/
GameController::~GameController()
{
deleteLaunchConfig();
stopServer();
}
/**
* Delete the currently loaded configuration. This will erase all custom settings stored
* in memory and require that {@link #loadConfig(bool)} is called again.
*/
void GameController::deleteLaunchConfig()
{
if(launch_config != nullptr)
{
delete launch_config;
launch_config = nullptr;
}
}
/**
 * Starts a detached game process. This will try to launch Zandronum in play mode,
 * disconnected from the launcher.
* @param arguments custom arguments for starting the process
*/
void GameController::startGameProcess(const QList<Argument> &arguments)
{
QStringList argument_string_list = argument_translator.toStringList(arguments);
if(!QProcess::startDetached(launch_config->getZandronumBinaryFilepath(), argument_string_list))
throw std::runtime_error("Game process failed to detach and start");
}
/**
* Start a connected server process, for running a local Zandronum server.
* @param arguments custom arguments for starting the process
*/
void GameController::startServerProcess(const QList<Argument> &arguments)
{
QStringList argument_string_list = argument_translator.toStringList(arguments);
stopServer();
server_process = new QProcess();
server_process->start(launch_config->getZandronumBinaryFilepath(), argument_string_list);
}
/* ---- PUBLIC FUNCTIONS ---- */
/**
* Checks if the local server process is running and attached to this controller.
 * @return TRUE if it is running, FALSE otherwise
*/
bool GameController::isServerRunning() const
{
return server_process != nullptr && server_process->state() != QProcess::NotRunning;
}
/**
* Load the {@link LaunchConfig} into memory. This will encapsulate all loading including
* custom overrides from disk. This loaded config will be used for all start*() calls.
* @param reload TRUE to force a reload and delete existing config. Default is FALSE
*/
void GameController::loadConfig(bool reload)
{
if(launch_config == nullptr || reload)
{
deleteLaunchConfig();
// Load in the correct config, based on the OS
#ifdef _WIN32
launch_config = new WinLaunchConfig();
#elif __APPLE__
launch_config = new MacLaunchConfig();
#endif
// Add any custom configuration
QJsonDocument config_json = FileReader::readFileToJson(
launch_config->getBaseExecutableDirectory() + CONFIG_FILE_NAME);
launch_config->insertClientArguments(
JsonArgumentParser::parseArguments(config_json, CONFIG_CLIENT_OBJECT));
launch_config->insertServerArguments(
JsonArgumentParser::parseArguments(config_json, CONFIG_SERVER_OBJECT));
}
}
/**
* Execute and start the game, in normal start-up mode.
 * @throws std::invalid_argument if the launch config is misconfigured
*/
void GameController::start()
{
loadConfig();
startGameProcess(launch_config->getBasicArguments());
}
/**
* Execute and start a multiplayer client.
 * @throws std::invalid_argument if the launch config is misconfigured
*/
void GameController::startClient(QString address)
{
loadConfig();
launch_config->setServerAddress(address);
startGameProcess(launch_config->getClientArguments());
}
/**
* Execute and start a local multiplayer server.
 * @throws std::invalid_argument if the launch config is misconfigured
*/
void GameController::startServer()
{
loadConfig();
startServerProcess(launch_config->getServerArguments());
}
/**
* Stop a running local multiplayer server, if it exists.
*/
void GameController::stopServer()
{
if(server_process != nullptr)
{
server_process->kill();
delete server_process;
server_process = nullptr;
}
}
|
// Boost.Geometry Index
//
// R-tree count visitor implementation
//
// Copyright (c) 2011-2014 <NAME>, <NAME>.
//
// This file was modified by Oracle on 2019.
// Modifications copyright (c) 2019 Oracle and/or its affiliates.
// Contributed and/or modified by <NAME>, on behalf of Oracle
//
// Use, modification and distribution is subject to the Boost Software License,
// Version 1.0. (See accompanying file LICENSE_1_0.txt or copy at
// http://www.boost.org/LICENSE_1_0.txt)
#ifndef BOOST_GEOMETRY_INDEX_DETAIL_RTREE_VISITORS_COUNT_HPP
#define BOOST_GEOMETRY_INDEX_DETAIL_RTREE_VISITORS_COUNT_HPP
namespace boost { namespace geometry { namespace index {
namespace detail { namespace rtree { namespace visitors {
template <typename Indexable, typename Value>
struct count_helper
{
template <typename Translator>
static inline typename Translator::result_type indexable(Indexable const& i, Translator const&)
{
return i;
}
template <typename Translator, typename Strategy>
static inline bool equals(Indexable const& i, Value const& v, Translator const& tr, Strategy const& s)
{
return index::detail::equals<Indexable>::apply(i, tr(v), s);
}
};
template <typename Value>
struct count_helper<Value, Value>
{
template <typename Translator>
static inline typename Translator::result_type indexable(Value const& v, Translator const& tr)
{
return tr(v);
}
template <typename Translator, typename Strategy>
static inline bool equals(Value const& v1, Value const& v2, Translator const& tr, Strategy const& s)
{
return tr.equals(v1, v2, s);
}
};
template <typename ValueOrIndexable, typename MembersHolder>
struct count
: public MembersHolder::visitor_const
{
typedef typename MembersHolder::value_type value_type;
typedef typename MembersHolder::parameters_type parameters_type;
typedef typename MembersHolder::translator_type translator_type;
typedef typename MembersHolder::node node;
typedef typename MembersHolder::internal_node internal_node;
typedef typename MembersHolder::leaf leaf;
typedef count_helper<ValueOrIndexable, value_type> count_help;
inline count(ValueOrIndexable const& vori, parameters_type const& parameters, translator_type const& t)
: value_or_indexable(vori), m_parameters(parameters), tr(t), found_count(0)
{}
inline void operator()(internal_node const& n)
{
typedef typename rtree::elements_type<internal_node>::type elements_type;
elements_type const& elements = rtree::elements(n);
// traverse nodes meeting predicates
for (typename elements_type::const_iterator it = elements.begin();
it != elements.end(); ++it)
{
if ( index::detail::covered_by_bounds(count_help::indexable(value_or_indexable, tr),
it->first,
index::detail::get_strategy(m_parameters)) )
{
rtree::apply_visitor(*this, *it->second);
}
}
}
inline void operator()(leaf const& n)
{
typedef typename rtree::elements_type<leaf>::type elements_type;
elements_type const& elements = rtree::elements(n);
// get all values meeting predicates
for (typename elements_type::const_iterator it = elements.begin();
it != elements.end(); ++it)
{
// if value meets predicates
if ( count_help::equals(value_or_indexable, *it, tr,
index::detail::get_strategy(m_parameters)) )
{
++found_count;
}
}
}
ValueOrIndexable const& value_or_indexable;
parameters_type const& m_parameters;
translator_type const& tr;
typename MembersHolder::size_type found_count;
};
}}} // namespace detail::rtree::visitors
}}} // namespace boost::geometry::index
#endif // BOOST_GEOMETRY_INDEX_DETAIL_RTREE_VISITORS_COUNT_HPP
|
/*
* Copyright 2012-2015 Broad Institute, Inc.
*
* Permission is hereby granted, free of charge, to any person
* obtaining a copy of this software and associated documentation
* files (the "Software"), to deal in the Software without
* restriction, including without limitation the rights to use,
* copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following
* conditions:
*
* The above copyright notice and this permission notice shall be
* included in all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
* OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
* NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
* HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
* WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR
* THE USE OR OTHER DEALINGS IN THE SOFTWARE.
*/
package org.broadinstitute.gatk.utils.pairhmm;
import com.google.java.contract.Requires;
/**
 * Superclass for PairHMMs that want to use a full read x haplotype matrix for their match, insertion, and deletion matrices
*
* User: rpoplin
* Date: 10/16/12
*/
abstract class N2MemoryPairHMM extends PairHMM {
protected double[][] transition = null; // The transition probabilities cache
protected double[][] prior = null; // The prior probabilities cache
protected double[][] matchMatrix = null;
protected double[][] insertionMatrix = null;
protected double[][] deletionMatrix = null;
// only used for debugging purposes
protected boolean doNotUseTristateCorrection = false;
public void doNotUseTristateCorrection() {
doNotUseTristateCorrection = true;
}
/**
* Initialize this PairHMM, making it suitable to run against a read and haplotype with given lengths
*
* Note: Do not worry about padding, just provide the true max length of the read and haplotype. The HMM will take care of the padding.
*
* @param haplotypeMaxLength the max length of haplotypes we want to use with this PairHMM
* @param readMaxLength the max length of reads we want to use with this PairHMM
*/
@Override
public void initialize( final int readMaxLength, final int haplotypeMaxLength ) {
super.initialize(readMaxLength, haplotypeMaxLength);
matchMatrix = new double[paddedMaxReadLength][paddedMaxHaplotypeLength];
insertionMatrix = new double[paddedMaxReadLength][paddedMaxHaplotypeLength];
deletionMatrix = new double[paddedMaxReadLength][paddedMaxHaplotypeLength];
transition = PairHMMModel.createTransitionMatrix(maxReadLength);
prior = new double[paddedMaxReadLength][paddedMaxHaplotypeLength];
}
/**
* Print out the core hmm matrices for debugging
*/
protected void dumpMatrices() {
dumpMatrix("matchMetricArray", matchMatrix);
dumpMatrix("insertionMatrix", insertionMatrix);
dumpMatrix("deletionMatrix", deletionMatrix);
}
/**
* Print out in a human readable form the matrix for debugging
* @param name the name of this matrix
* @param matrix the matrix of values
*/
@Requires({"name != null", "matrix != null"})
private void dumpMatrix(final String name, final double[][] matrix) {
System.out.printf("%s%n", name);
for ( int i = 0; i < matrix.length; i++) {
System.out.printf("\t%s[%d]", name, i);
for ( int j = 0; j < matrix[i].length; j++ ) {
if ( Double.isInfinite(matrix[i][j]) )
System.out.printf(" %15s", String.format("%f", matrix[i][j]));
else
System.out.printf(" % 15.5e", matrix[i][j]);
}
System.out.println();
}
}
}
|
def _has_mixed_frequency(freq: DateOffset) -> bool:
return _has_fixed_frequency(freq) and _has_non_fixed_frequency(freq) |
An efficient dynamic memory allocator for sensor operating systems The dynamic memory allocation mechanism is an important aspect of an operating system, because an efficient dynamic memory allocator improves the performance of the operating system. In wireless sensor networks, sensor nodes are miniature computing devices with small memory space and very limited battery power. Therefore, sensor operating systems should be able to operate efficiently in terms of energy consumption and resource management, and the role of the dynamic memory allocator in a sensor operating system is more important than in a general operating system. In this paper, we propose a new dynamic memory allocation scheme that solves the problems of existing dynamic memory allocators. We implement our scheme on Nano-Qplus, a sensor operating system based on multi-threading. Our experimental results show that our scheme performs efficiently in both time and space compared with existing memory allocation mechanisms. |
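The abstract does not reproduce the proposed scheme, but the kind of baseline such allocators are measured against — constant-time allocation from a pool of fixed-size blocks managed by a free list — can be sketched as follows. This is an illustrative sketch only; the class and all names in it are hypothetical and not taken from the paper:

```python
class FixedBlockAllocator:
    """Toy fixed-size block allocator: every block is the same size, so both
    alloc and free are O(1) pops/pushes on a free list. Sensor OSes favor
    schemes like this because they bound fragmentation and CPU cost."""

    def __init__(self, num_blocks):
        # Every block index starts on the free list.
        self.free_list = list(range(num_blocks))

    def alloc(self):
        # O(1) allocation: pop a free block index, or fail when exhausted.
        if not self.free_list:
            return None
        return self.free_list.pop()

    def free(self, block):
        # O(1) deallocation: push the block back on the free list.
        self.free_list.append(block)

pool = FixedBlockAllocator(2)
a = pool.alloc()
b = pool.alloc()
assert pool.alloc() is None   # pool exhausted after two allocations
pool.free(a)
assert pool.alloc() == a      # a freed block is immediately reusable
```

The trade-off this makes visible is the one the paper targets: fixed-size blocks waste space when requests are smaller than a block, while variable-size schemes pay in time and fragmentation.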
David Wain is one funny guy. Busy as all hell and supremely funny. Taking time out of his rigorous schedule of breaking out into song and dance, donning fake mustaches with his buds in Stella and seeking romance across New York City in his web series, the director of the oft-heralded Wet Hot American Summer discussed the finer points of his work with Scene. He also touched on the initial stirrings of his childlike absurdism, which all started right here in Northeast Ohio.
Eric Sandy: So you've got a handful of projects you're currently working on, including They Came Together. One thing I'm curious about is how you're co-writing it with Michael Showalter and how collaboration has become a bit of a theme in your career.
David Wain: I feel like in movies and TV, it's by necessity of the form. It's very collaborative, whether you're actually credited as co-writing the script or not. I guess I've done that to an even greater degree because I started out in college with Michael Showalter and all the other guys from The State and all of us working as, like, an 11-man beast. And we were learning how to give and take and collaborate and work together for so many years. As we get older and as we get more experience, we learn how to defer and how to trust each other on certain things more. There are a lot of times and places when I know that Showalter and I both know in our brain what we want to do, so one or the other of us can just do it without having to be sitting in the same room all the time.
Speaking of the old gang, how about the supposed Wet Hot American Summer prequel?
We're still working on that. It's definitely an active project that we're in the process of on the script and getting all the elements together. It's a big undertaking, because we intend on reuniting the whole cast. That'll be challenging to say the least.
I'd imagine. And what's your take on the film's cult-following mentality? Does that bring an inherent freedom or do you find it restrictive at all?
It's a nice thing because it allows us to do certain things or keep in contact with certain projects. The fact that there's an ongoing interest in things like Wet Hot American Summer or Stella years later allows us to consider doing new projects or related projects as we go along. And that's very fortunate. And just for its own sake, it's really nice. It's such a rewarding feeling. It's very rare that a little indie film made for under $2 million 12 years ago is still talked about all the time like Wet Hot American Summer. So for its own sake it's very nice. We've never had a huge—excuse me, sorry, I'm in New York and it is so cold in my home.
Sounds like Cleveland. The wind is kinda insane here right now.
(Laughs) Same here! Well... We've never had gigantic financial success, which in a way might have helped us keep moving. It's nice to have the chance to keep doing more work.
It seems to me that through all these projects, your sense of humor and your style of comedy has this way of creating almost an alternate world where humor is different and jokes run in opposite directions from where a joke might normally run. Has that been a style of yours from childhood onward?
I think so. I guess I just had a certain sense of humor. I was influenced by Steve Martin a lot growing up and Woody Allen. It's definitely not a conscious thing, like: "Now let's go to the Bizarro World where we all like to play!" But it's evolved in some ways to that - sometimes to my detriment commercially, because there's a certain layer of absurdism that I naturally go to that is often off-putting to a mainstream audience, but I can't help it.
An interesting characteristic I see a lot, especially when you perform with the guys in Stella, is just pushing a joke as far as it can be pushed to the point where it becomes unfunny for a moment. But then it's being pushed so far that it becomes hilarious again.
For some people, it goes so far that it's unfunny and then it just stays unfunny. That's why there's so much hostility toward a lot of our work from people who aren't into it. What I found was interesting in the reviews and feedback for things like Wet Hot American Summer and Stella is that those who didn't like it didn't just not like it; they were truly to-the-core hostile and upset about it. They were like, "Not only do I hate this, I don't understand why anyone likes this. What is going on here? This is so unfunny I can't believe it." It's sort of interesting. For those who aren't into it, it's really brutal. People have come up to me over the years and said, "Your work is my litmus test for who I'm friends with." I was like, "Wow, that's weird, but kind of flattering."
Forming friendships everyday! In terms of this alternate world your humor creates, it seems like that dates back to this video tour of Shaker Heights you put together in 1978. It's fantastic and it's very funny to see the bombast and the idiosyncrasies that still remain a big part of your repertoire. Did you do a lot of video work growing up?
I have a vault of just hundreds of little skits and bits and shows I did on VHS during my formative years. A lot of them I'm literally just standing in front of the camera and rambling on and on, just entertaining myself. I don't know if anyone's ever watched them. It was almost like a reversal: As I went to college and started to consciously write comedy, I often would just revert to my childhood self because I found it creative somehow to take on my more childish points of view. I continue to do that to this day. I often will just default to the same jokes I was making when I was 5 or 10. It's entertaining to me.
Were there any stories growing up in Cleveland or maybe some influences in this town that still sort of hold true in your work?
I was there from when I was born to when I was 18, so yeah, for sure. My family is still there in Shaker. I'm there all the time. Cleveland is everything, my whole upbringing, my entire experience at Shaker Heights High School and before. What's funny is that when you get so busy working, you start lessening your number of life experiences, so you have to keep looking back to before you were writing. I'm trying to think of something specific and entertaining to say about that, but, yes, growing up in Cleveland is the entirety of my identity.
You mentioned Steve Martin, but was there any sort of moment or memory that suddenly piqued your interest in filmmaking or comedy? Or was that just always the plan?
It wasn't even the plan until I got to college. I went to NYU more because I wanted to be in New York and less because I wanted to be in film school. But I remember being very excited about Steve Martin. My sisters brought home his records - and I remember going to Peaches Records on Richmond and chickening out at the last minute but almost doing the Steve Martin Act-Alike contest at age 8 or something. And same thing with the Woody Allen movies that a friend of ours brought home on fourth-generation VHS tapes -- or BetaMax tapes at the time. But I was definitely the class clown and the family clown. I was always making weird jokes. As early as I could, from 11 or 12, my dad brought home this very old video camera that had to attach to two giant machines just to turn on. I started making little skits and tapes with my friends and I just never stopped. One day I'll have to release The Cleveland Tapes.
We're eagerly awaiting. I have one thing to clear up before we run: Who the fuck is Marcus?
I know, right?! I completely, completely 100 percent agree. |
/*
* uni_hash.tbl
*
* Do not edit this file; it was automatically generated by
*
* ./makeuctb ./cp852_uni.tbl
*
*/
static u8 dfont_unicount_cp852[256] =
{
0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0,
1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 0,
1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1
};
static u16 dfont_unitable_cp852[223] =
{
0x0020, 0x0021, 0x0022, 0x0023, 0x0024, 0x0025, 0x0026, 0x0027,
0x0028, 0x0029, 0x002a, 0x002b, 0x002c, 0x002d, 0x002e, 0x002f,
0x0030, 0x0031, 0x0032, 0x0033, 0x0034, 0x0035, 0x0036, 0x0037,
0x0038, 0x0039, 0x003a, 0x003b, 0x003c, 0x003d, 0x003e, 0x003f,
0x0040, 0x0041, 0x0042, 0x0043, 0x0044, 0x0045, 0x0046, 0x0047,
0x0048, 0x0049, 0x004a, 0x004b, 0x004c, 0x004d, 0x004e, 0x004f,
0x0050, 0x0051, 0x0052, 0x0053, 0x0054, 0x0055, 0x0056, 0x0057,
0x0058, 0x0059, 0x005a, 0x005b, 0x005c, 0x005d, 0x005e, 0x005f,
0x0060, 0x0061, 0x0062, 0x0063, 0x0064, 0x0065, 0x0066, 0x0067,
0x0068, 0x0069, 0x006a, 0x006b, 0x006c, 0x006d, 0x006e, 0x006f,
0x0070, 0x0071, 0x0072, 0x0073, 0x0074, 0x0075, 0x0076, 0x0077,
0x0078, 0x0079, 0x007a, 0x007b, 0x007c, 0x007d, 0x007e, 0x00c7,
0x00fc, 0x00e9, 0x00e2, 0x00e4, 0x016f, 0x0107, 0x00e7, 0x0142,
0x00eb, 0x0150, 0x0151, 0x00ee, 0x0179, 0x00c4, 0x0106, 0x00c9,
0x0139, 0x013a, 0x00f4, 0x00f6, 0x013d, 0x013e, 0x015a, 0x015b,
0x00d6, 0x00dc, 0x0164, 0x0165, 0x0141, 0x00d7, 0x010d, 0x00e1,
0x00ed, 0x00f3, 0x00fa, 0x0104, 0x0105, 0x017d, 0x017e, 0x0118,
0x0119, 0x00ac, 0x017a, 0x010c, 0x015f, 0x00ab, 0x00bb, 0x2591,
0x2592, 0x2593, 0x2502, 0x2524, 0x00c1, 0x00c2, 0x011a, 0x015e,
0x2563, 0x2551, 0x2557, 0x255d, 0x017b, 0x017c, 0x2510, 0x2514,
0x2534, 0x252c, 0x251c, 0x2500, 0x253c, 0x0102, 0x0103, 0x255a,
0x2554, 0x2569, 0x2566, 0x2560, 0x2550, 0x256c, 0x00a4, 0x0111,
0x0110, 0x010e, 0x00cb, 0x010f, 0x0147, 0x00cd, 0x00ce, 0x011b,
0x2518, 0x250c, 0x2588, 0x2584, 0x0162, 0x016e, 0x2580, 0x00d3,
0x00df, 0x00d4, 0x0143, 0x0144, 0x0148, 0x0160, 0x0161, 0x0154,
0x00da, 0x0155, 0x0170, 0x00fd, 0x00dd, 0x0163, 0x00b4, 0x00ad,
0x02dd, 0x02db, 0x02c7, 0x02d8, 0x00a7, 0x00f7, 0x00b8, 0x00b0,
0x00a8, 0x02d9, 0x0171, 0x0158, 0x0159, 0x25a0, 0x00a0
};
/* static struct unipair_str repl_map_cp852[]; */
static struct unimapdesc_str dfont_replacedesc_cp852 = {0,NULL,0,1};
#define UC_CHARSET_SETUP_cp852 UC_Charset_Setup("cp852",\
"Eastern European (cp852)",\
dfont_unicount_cp852,dfont_unitable_cp852,223,\
dfont_replacedesc_cp852,128,1)
|
def next(self) -> Slot:
with self.io as data:
open_slots = self._slot_manager(data).open_slots(self.exp)
return next(open_slots) |
Escherichia coli outer membrane protease OmpT confers resistance to urinary cationic peptides Escherichia coli OmpT, located in the outer membrane, has been characterized as a plasminogen activator, with the ability to hydrolyze protamine and block its entry. In this investigation, a complex of low molecular weight cationic peptides purified from human urine by a combination of membrane ultrafiltration and weak cation exchange chromatography was characterized. The impact of OmpT on E. coli resistance to urinary cationic peptides was investigated by testing ompT knockout strains. The ompT mutants were more susceptible to urinary cationic peptides than ompT+ strains, and this difference was abolished by complementation of the mutants with pUC19 carrying the ompT gene. The urinary protease inhibitor ulinastatin greatly decreased the resistance of the ompT+ strains. Overall, the data indicate that OmpT may help E. coli persist longer in the urinary tract by enabling it to resist the antimicrobial activity of urinary cationic peptides. |
Those were the words of John Adams on the last day of his life. It was great and it was good because it was July 4, 1826. The United States was celebrating its 50th anniversary and Adams, as one of the two remaining signers of the Declaration of Independence still living, could look back and recall the official vote for independence on July 2, 1776.
“The second of July 1776 will be the most memorable epocha in the history of America. I am apt to believe that it will be celebrated by succeeding generations as the great anniversary festival. It ought to be commemorated as the Day of Deliverance by solemn acts of devotion to God Almighty. It ought to be solemnized with pomp and parade, with shows, games, sports, guns, bells, bonfires, and illuminations from one end of this continent to the other from this time forward forever more.”
Written to his wife Abigail, Adams missed the date, as today we celebrate Independence on the day the order was given to publish and distribute the Declaration…July 4th. But our 2nd President got the spirit right, and our celebrations today reflect much of his desires, penned more than 230 years ago.
July 4, 1826 was also the last day of Thomas Jefferson’s life. Our 3rd President (and other remaining “signer”) had a remarkable life, much of it spent in very close connection to Adams. Jefferson penned the Declaration of Independence, served as Vice President to Adams and twice as President, and was the driving force behind the Louisiana Purchase, the greatest single expansion of the country.
In his last letter (to the Mayor of Washington declining an invitation to the country’s 50th anniversary celebration), Jefferson wrote, “May it be to the world, what I believe it will be (to some parts sooner, the others later, but finally to all) the signal of arousing men to burst the chains under which monkish ignorance and superstition had persuaded them to bind themselves, and to assume the blessings and security of self-government…All eyes are opened or opening to the rights of man. The general spread of the light of science has already laid open to every view the palpable truth, that the mass of mankind has not been born with saddles on their backs, nor a favored few, booted and spurred, ready to ride them legitimately by the grace of God. These are the grounds of hope for others: for ourselves, let the annual return to this day forever refresh our recollection of these rights, and an undiminished devotion to them.”
Thomas Jefferson would pass away just after 1pm on the 4th…John Adams would follow 5 hours later. But their desire for freedom had founded a nation. And their words should continue to inspire it 230 years later…and beyond.
On June 30, 1826, a small delegation came to Adams’ home and asked for a quote to read aloud as a toast at the upcoming celebration. Adams thought for a moment and said, “Independence forever.” May that be our wish as well.
Happy Birthday, America!!!
Recommended Reading: American Sphinx
|
The trend towards sustainable development as a factor in reducing gender inequality COVID-19 was called a "pandemic of inequality", as it aggravated all types of inequality, including gender inequality. The pandemic made urgent the search for new ways and opportunities to improve gender equality in Russia. The article presents the results of an analysis of Russian reports for the period 2016-2020 on the implementation of The 2030 Agenda for Sustainable Development in terms of the country's progress towards achieving gender equality. The analysis covered not the whole national reports, but only the 5th Sustainable Development Goal (SDG), aimed at ensuring gender equality. The novelty of the author's approach is that the indicators used in the reports to achieve the 5th SDG by 2030 were considered as factors of progress towards gender equality in the country. The article consists of an introduction, two sections, and a conclusion. The first section provides historical background showing the stages in the formation of the concept of sustainable development, from the ideas of Gro Harlem Brundtland, who headed the World Commission on Environment and Development (WCED) in 1987, to the system of indicators developed by Nobel laureates Joseph Stiglitz and Amartya Sen, who headed the Commission on the Measurement of Economic Performance and Social Progress in 2008-2009. The Sustainable Development Goals were built on the experience accumulated over those thirty years. The 2030 Agenda for Sustainable Development, which includes the SDGs, was signed by 193 countries in 2015, including Russia. In 2020 Russia submitted to the UN two reports on the achievement of the SDGs: one prepared by the government and the other by civil society organizations. These reports reflected the progress in implementation of the SDGs for 2016-2020.
The results of a brief comparative analysis of both reports are presented in the second section of the article. The analysis showed striking differences between the two reports: the outright bureaucratic formalism of the first and the creative, constructive character of the second. This indicates that it is civil society, not the government, that is aware of the severity and urgency of the problems of gender inequality in Russia and is ready to solve them. The content of the report and the recommendations addressed by civil society to the state show that implementation of the gender equality targets set in the SDGs can serve as an important factor in the country's progress towards gender equality. From the analysis of the form and content of the government report, it becomes obvious that the gender agenda is not among the priority areas of Russian social policy. |
Lucknow, Feb 11 (IBNS): Aiming to gain lost ground in the northern state, Congress president Rahul Gandhi, party in-charge for eastern and western Uttar Pradesh, Priyanka Gandhi Vadra and Jyotiraditya Scindia respectively, kick-started campaign trails on Monday.
The three leaders have started a roadshow in Lucknow, the capital city of Uttar Pradesh.
This is the first time Priyanka has visited the state since joining active politics around two weeks ago. |
import torch
from utils import batch_to
def estimate_advantages(rewards, masks, values, gamma, tau):
    """Compute Generalized Advantage Estimation (GAE) and discounted returns.

    rewards, masks and values are [T, 1] tensors; masks[i] is 0.0 at episode
    boundaries, which cuts the bootstrap chain. gamma is the discount factor
    and tau is the GAE smoothing parameter (often written lambda).
    """
    device = rewards.device
    # The backward recursion is inherently sequential, so run it on the CPU.
    rewards, masks, values = batch_to(torch.device('cpu'), rewards, masks, values)
    tensor_type = type(rewards)
    deltas = tensor_type(rewards.size(0), 1)
    advantages = tensor_type(rewards.size(0), 1)
    prev_value = 0
    prev_advantage = 0
    for i in reversed(range(rewards.size(0))):
        # TD residual: r_t + gamma * V(s_{t+1}) - V(s_t), zeroed at episode ends.
        deltas[i] = rewards[i] + gamma * prev_value * masks[i] - values[i]
        # GAE recursion: A_t = delta_t + gamma * tau * A_{t+1}.
        advantages[i] = deltas[i] + gamma * tau * prev_advantage * masks[i]
        prev_value = values[i, 0]
        prev_advantage = advantages[i, 0]
    returns = values + advantages
    # Normalize advantages to zero mean and unit variance for stable updates.
    advantages = (advantages - advantages.mean()) / advantages.std()
    advantages, returns = batch_to(device, advantages, returns)
    return advantages, returns
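The same backward recursion can be checked in isolation with a dependency-free sketch (the `gae` name and list-based signature here are illustrative, not part of the module above; it also omits the normalization and device handling):

```python
# Pure-Python sketch of the GAE backward pass. masks[i] = 0.0 marks the end
# of an episode, cutting the bootstrap chain so later values do not leak in.
def gae(rewards, masks, values, gamma=0.9, tau=0.9):
    T = len(rewards)
    advantages = [0.0] * T
    prev_value = 0.0
    prev_advantage = 0.0
    for i in reversed(range(T)):
        # TD residual at step i.
        delta = rewards[i] + gamma * prev_value * masks[i] - values[i]
        # GAE recursion: A_i = delta_i + gamma * tau * A_{i+1}.
        advantages[i] = delta + gamma * tau * prev_advantage * masks[i]
        prev_value = values[i]
        prev_advantage = advantages[i]
    returns = [v + a for v, a in zip(values, advantages)]
    return advantages, returns

adv, ret = gae([1.0, 1.0, 1.0], [1.0, 1.0, 0.0], [0.5, 0.5, 0.5])
print(ret[-1])  # 1.0: with mask 0.0, the terminal return is just its reward
```

Because the last mask is zero, the terminal step's advantage is simply `r - V(s)` and its return is the raw reward, which is exactly the boundary behavior the mask encodes.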
|
package com.moringa.movie_hub;
import androidx.annotation.NonNull;
import androidx.appcompat.app.AppCompatActivity;
import android.app.ProgressDialog;
import android.content.Intent;
import android.os.Bundle;
import android.view.View;
import android.widget.Button;
import android.widget.EditText;
import android.widget.Toast;
import com.google.android.gms.tasks.OnCompleteListener;
import com.google.android.gms.tasks.Task;
import com.google.firebase.auth.AuthResult;
import com.google.firebase.auth.FirebaseAuth;
import com.google.firebase.auth.FirebaseUser;
import butterknife.BindView;
import butterknife.ButterKnife;
public class LoginActivity extends AppCompatActivity implements View.OnClickListener{
@BindView(R.id.login_email) EditText mLoginEmail;
@BindView(R.id.login_password)EditText mLoginPassword;
@BindView(R.id.login_button) Button mLoginButton;
@BindView(R.id.new_button)Button mSignUpButton;
private FirebaseAuth mAuth;
private FirebaseAuth.AuthStateListener mAuthListener;
private ProgressDialog mProgressDialog;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_login);
ButterKnife.bind(this);
mAuth = FirebaseAuth.getInstance();
createAuthStateListener();
createAuthProgressDialog();
mSignUpButton.setOnClickListener(this);
mLoginButton.setOnClickListener(this);
}
@Override
public void onClick(View v) {
if(v == mSignUpButton){
Intent intent = new Intent(LoginActivity.this,CreateActivity.class);
startActivity(intent);
finish();
}
if(v == mLoginButton){
loginWithPassword();
}
}
private void createAuthProgressDialog(){
mProgressDialog = new ProgressDialog(this);
mProgressDialog.setTitle("Loading...");
mProgressDialog.setMessage("Signing in to your account...");
mProgressDialog.setCancelable(false);
}
private void loginWithPassword(){
String email = mLoginEmail.getText().toString();
String password = mLoginPassword.getText().toString();
if(email.equals("")){
mLoginEmail.setError("Please enter your email address");
return;
}
if (password.equals("")){
mLoginPassword.setError("Please enter your password");
return;
}
mProgressDialog.show();
mAuth.signInWithEmailAndPassword(email,password).addOnCompleteListener(this, new OnCompleteListener<AuthResult>() {
@Override
public void onComplete(@NonNull Task<AuthResult> task) {
mProgressDialog.dismiss();
if (task.isSuccessful()){
Toast.makeText(LoginActivity.this,"Login successful",Toast.LENGTH_LONG).show();
}else {
Toast.makeText(LoginActivity.this,"Login failed. Please check your email and password.",Toast.LENGTH_LONG).show();
}
}
});
}
// auth listener
private void createAuthStateListener(){
mAuthListener = new FirebaseAuth.AuthStateListener() {
@Override
public void onAuthStateChanged(@NonNull FirebaseAuth firebaseAuth) {
FirebaseUser user = mAuth.getCurrentUser();
if (user != null){
Intent intent = new Intent(LoginActivity.this, RecentActivity.class);
intent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK | Intent.FLAG_ACTIVITY_CLEAR_TASK);
startActivity(intent);
finish();
}
}
};
}
@Override
protected void onStart() {
super.onStart();
mAuth.addAuthStateListener(mAuthListener);
}
@Override
protected void onStop() {
super.onStop();
if(mAuthListener != null){
mAuth.removeAuthStateListener(mAuthListener);
}
}
} |
/**
* Sensor Data API
* No description provided (generated by Swagger Codegen https://github.com/swagger-api/swagger-codegen)
*
* OpenAPI spec version: 2.20
*
*
* NOTE: This class is auto generated by the swagger code generator 2.3.1-SNAPSHOT.
* https://github.com/swagger-api/swagger-codegen.git
* Do not edit the class manually.
*/
#include "Organisation.h"
namespace io {
namespace swagger {
namespace client {
namespace model {
Organisation::Organisation()
{
m_Id = utility::conversions::to_string_t("");
m_IdIsSet = false;
m_Name = utility::conversions::to_string_t("");
m_NameIsSet = false;
m__linksIsSet = false;
}
Organisation::~Organisation()
{
}
void Organisation::validate()
{
// TODO: implement validation
}
web::json::value Organisation::toJson() const
{
web::json::value val = web::json::value::object();
if(m_IdIsSet)
{
val[utility::conversions::to_string_t("id")] = ModelBase::toJson(m_Id);
}
if(m_NameIsSet)
{
val[utility::conversions::to_string_t("name")] = ModelBase::toJson(m_Name);
}
if(m__linksIsSet)
{
val[utility::conversions::to_string_t("_links")] = ModelBase::toJson(m__links);
}
return val;
}
void Organisation::fromJson(web::json::value& val)
{
if(val.has_field(utility::conversions::to_string_t("id")))
{
setId(ModelBase::stringFromJson(val[utility::conversions::to_string_t("id")]));
}
if(val.has_field(utility::conversions::to_string_t("name")))
{
setName(ModelBase::stringFromJson(val[utility::conversions::to_string_t("name")]));
}
if(val.has_field(utility::conversions::to_string_t("_links")))
{
if(!val[utility::conversions::to_string_t("_links")].is_null())
{
std::shared_ptr<Organisation__links> newItem(new Organisation__links());
newItem->fromJson(val[utility::conversions::to_string_t("_links")]);
setLinks( newItem );
}
}
}
void Organisation::toMultipart(std::shared_ptr<MultipartFormData> multipart, const utility::string_t& prefix) const
{
utility::string_t namePrefix = prefix;
if(namePrefix.size() > 0 && namePrefix.substr(namePrefix.size() - 1) != utility::conversions::to_string_t("."))
{
namePrefix += utility::conversions::to_string_t(".");
}
if(m_IdIsSet)
{
multipart->add(ModelBase::toHttpContent(namePrefix + utility::conversions::to_string_t("id"), m_Id));
}
if(m_NameIsSet)
{
multipart->add(ModelBase::toHttpContent(namePrefix + utility::conversions::to_string_t("name"), m_Name));
}
if(m__linksIsSet)
{
if (m__links.get())
{
m__links->toMultipart(multipart, utility::conversions::to_string_t("_links."));
}
}
}
void Organisation::fromMultiPart(std::shared_ptr<MultipartFormData> multipart, const utility::string_t& prefix)
{
utility::string_t namePrefix = prefix;
if(namePrefix.size() > 0 && namePrefix.substr(namePrefix.size() - 1) != utility::conversions::to_string_t("."))
{
namePrefix += utility::conversions::to_string_t(".");
}
if(multipart->hasContent(utility::conversions::to_string_t("id")))
{
setId(ModelBase::stringFromHttpContent(multipart->getContent(utility::conversions::to_string_t("id"))));
}
if(multipart->hasContent(utility::conversions::to_string_t("name")))
{
setName(ModelBase::stringFromHttpContent(multipart->getContent(utility::conversions::to_string_t("name"))));
}
if(multipart->hasContent(utility::conversions::to_string_t("_links")))
{
std::shared_ptr<Organisation__links> newItem(new Organisation__links());
newItem->fromMultiPart(multipart, utility::conversions::to_string_t("_links."));
setLinks( newItem );
}
}
utility::string_t Organisation::getId() const
{
return m_Id;
}
void Organisation::setId(utility::string_t value)
{
m_Id = value;
m_IdIsSet = true;
}
bool Organisation::idIsSet() const
{
return m_IdIsSet;
}
void Organisation::unsetId()
{
m_IdIsSet = false;
}
utility::string_t Organisation::getName() const
{
return m_Name;
}
void Organisation::setName(utility::string_t value)
{
m_Name = value;
m_NameIsSet = true;
}
bool Organisation::nameIsSet() const
{
return m_NameIsSet;
}
void Organisation::unsetName()
{
m_NameIsSet = false;
}
std::shared_ptr<Organisation__links> Organisation::getLinks() const
{
return m__links;
}
void Organisation::setLinks(std::shared_ptr<Organisation__links> value)
{
m__links = value;
m__linksIsSet = true;
}
bool Organisation::linksIsSet() const
{
return m__linksIsSet;
}
void Organisation::unset_links()
{
m__linksIsSet = false;
}
}
}
}
}
|
The phantom-pain syndrome model was used to examine the effects of phenazepam, sydnocarb and their combination in chronic oral administration. Phenazepam was shown to have no effect on the development of the phantom-pain syndrome. Sydnocarb arrested the progression of the pain syndrome, reduced its symptoms, and alleviated inflammatory manifestations and extremity edema; the agent also increased the animals' excitability. When the combination was used, the clinical signs of the pain syndrome developed in the same way as with sydnocarb alone, while phenazepam decreased the aggression and excitability caused by sydnocarb. It is suggested that enhancing the efficiency of inhibitory GABAergic processes may reduce the clinical signs of the phantom-pain syndrome when brain catecholaminergic systems are involved, since their activation reinforces the inhibitory functions of the associated GABAergic transmission. The sympathomimetic action of sydnocarb induces an elevation of norepinephrine concentrations in the nerve endings and at postsynaptic receptors, resulting in improved trophism and restoration of tissue viability. |
An investigation is underway to find those responsible for fly-tipping near to a Rutherglen supermarket.
Pallets, suitcases and large bin bags full of rubbish have been strewn across a strip of land next to the Tesco Extra store on Dalmarnock Road.
The environmental eyesore was spotted by Reformer reporters last week.
But South Lanarkshire Council would not confirm that it would clear the site.
Instead it said an investigation would be carried out to establish who owns the land.
Shirley Clelland, head of fleet and environmental services at the council, said: “The council takes fly-tipping very seriously and environmental services will investigate this, and will attempt to identify who is responsible for dumping the material.”
She added: “We encourage people to use our special uplift service or household waste recycling centre if they have items that cannot be put in a bin for normal collection.”
The last person found dumping rubbish on Dalmarnock Road was fined £200 through a fixed penalty notice.
People do not need to be caught in the act of fly-tipping in order to be punished. A fine can be issued if it is clear from the items left who the fly-tipper is.
People convicted of more serious environmental offences can face an unlimited fine and up to five years in prison. |
import { createHash } from 'crypto'
import { ObservableOptions } from './types'
import { SCHEMA_SUBSCRIPTION } from '../constants'
const generateSubscriptionId = (opts: ObservableOptions) => {
  if (opts.type === 'get') {
    // Derive a stable id by hashing the serialized query options,
    // so identical 'get' subscriptions share the same id.
    const hash = createHash('sha256')
    hash.update(JSON.stringify(opts.options))
    return hash.digest('hex')
  } else if (opts.type === 'schema') {
    // Schema subscriptions are keyed by database name.
    return SCHEMA_SUBSCRIPTION + ':' + opts.db
  }
  // Other subscription types are not supported; callers receive undefined.
}
export default generateSubscriptionId
|
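The same deterministic-id scheme is straightforward to mirror in other languages. Below is a hypothetical Python analogue, not part of this repository; the constant's value is a placeholder, and `sort_keys=True` is an added canonicalization step (the TypeScript version hashes options in insertion order):

```python
import hashlib
import json

# Placeholder; the real value lives in the repo's constants module.
SCHEMA_SUBSCRIPTION = "schema-subscription"

def generate_subscription_id(opts: dict):
    """Stable subscription id: hash of options for 'get', db-keyed for 'schema'."""
    if opts.get("type") == "get":
        # Identical option payloads always map to the same hex digest.
        payload = json.dumps(opts["options"], sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()
    if opts.get("type") == "schema":
        return SCHEMA_SUBSCRIPTION + ":" + opts["db"]
    return None

a = generate_subscription_id({"type": "get", "options": {"id": "root"}})
b = generate_subscription_id({"type": "get", "options": {"id": "root"}})
```

Hashing a canonical serialization is what lets two clients requesting the same query share one server-side subscription.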
Preclinical characterization of AZD5305, a next generation, highly selective PARP1 inhibitor and trapper. PURPOSE We hypothesized that inhibition and trapping of PARP1 alone would be sufficient to achieve anti-tumor activity. In particular, we aimed to achieve selectivity over PARP2, which has been shown to play a role in the survival of hematopoietic stem/progenitor cells in animal models. We developed AZD5305 with the aim of achieving improved clinical efficacy and a wider therapeutic window. This next generation PARPi could provide a paradigm shift in clinical outcomes achieved by first generation PARPi, particularly in combination. PATIENTS AND METHODS AZD5305 was tested in vitro for PARylation inhibition, PARP-DNA trapping and antiproliferative abilities. In vivo efficacy was determined in mouse xenograft and PDX models. The potential for hematological toxicity was evaluated in rat models as monotherapy and combination. RESULTS AZD5305 is a highly potent and selective inhibitor of PARP1 with 500-fold selectivity for PARP1 over PARP2. AZD5305 inhibits growth in cells with deficiencies in DNA repair, with minimal/no effects in other cells. Unlike first generation PARPi, AZD5305 has minimal effects on hematological parameters in a rat pre-clinical model at predicted clinically efficacious exposures. Animal models treated with AZD5305 at doses ≥0.1mg/kg QD achieved greater depth of tumor regression compared to olaparib 100mg/kg QD, and longer duration of response. CONCLUSIONS AZD5305 potently and selectively inhibits PARP1 resulting in excellent antiproliferative activity and unprecedented selectivity for DNA repair deficient versus proficient cells. These data confirm the hypothesis that targeting only PARP1 can retain therapeutic benefits of non-selective PARPi, while reducing potential for hematotoxicity. AZD5305 is currently in Ph1 trials (NCT04644068). |
The shooter, who killed himself after the school went on lockdown, had also previously been investigated by the FBI, authorities said Friday.
A sign encourages prayer outside an ice cream shop on Dec. 8, 2017, in Aztec, New Mexico.
The man who opened fire inside a New Mexico high school on Thursday had disguised himself as a student and had previously been investigated by the FBI for making comments about plotting a mass shooting, authorities said.
The shooter fatally shot two students before killing himself at the high school in Aztec, New Mexico, state police said.
The San Juan County Sheriff's Office on Friday identified the shooter as William Atchison, 21, a former Aztec High School student who did not graduate. Authorities said they had found plans for the shooting in Atchison's home that included a detailed timeline, along with the line, "If all goes according to plan, today is the day I die."
Francisco "Paco" Fernandez, the shooter's first victim.
An FBI spokesperson said the agency had investigated Atchison in March 2016 for comments he made on a gaming website that were "generally along the lines of 'If you're going to conduct a mass shooting, does anyone know about cheap assault rifles?'"
The FBI interviewed Atchison and his family at the time, but he told investigators he had no plans for an attack and did not own any guns. According to the FBI, Atchison was fond of making trolling comments online to get a rise out of people.
On the day of the shooting, sheriff's officials said Atchison disguised himself as a student and even mingled with other students as they got off school buses to enter campus. He was carrying one pistol, a 9-mm Glock he had purchased a month earlier, as well as "several magazines."
Casey Jordan Marquez, the shooter's second victim.
Shortly after 8 a.m., as first period was beginning, Atchison was loading his weapon in a second-floor bathroom when he was discovered by the first victim, Francisco "Paco" Fernandez, an eleventh-grade student who had excused himself from class, officials said. Atchison shot Fernandez and then walked into the hallway, where he encountered the second victim, Casey Jordan Marquez, 17, and killed her. He then began firing randomly, San Juan County Sheriff Ken Christesen said.
Four officers responded to the school "within one minute" of the dispatcher's call. Because the school was on lockdown, the officers entered Aztec High by shooting out a window to confront the shooter, according to the sheriff's office. Shortly after officers entered, Atchison took his own life.
According to New Mexico officials, no other students or faculty were injured in the shooting.
"We lost lives today. It’s in times like this you feel violated because schools are places where we send our kids to be safe," Aztec Schools Supt. Kirk Carpenter said in a press conference Thursday.
New Mexico Gov. Susana Martinez offered condolences and said the White House had called her to offer the community its prayers.
Aztec High School students and area residents gather for a candlelight vigil on Dec. 7.
"This is a small community where everyone knows each other," Martinez said. "Lift those that need to be lifted, be there for those who need someone to lean on. New Mexicans stand with you, Aztec."
In a statement, president of Navajo Nation, Russell Begaye, also expressed his condolences for the families who had been affected.
"It’s tragic when our children are harmed in violent ways especially on school campuses," he said in the statement. “Our prayers go out to all those affected by this tragedy and everyone throughout San Juan County." |
Hotel Trademarks in Organic Search The importance of the online travel search environment is well documented. In this context, trademarks play an instrumental role in resolving customer confusion. An important element of the online search environment is the organic search output, typically displayed on the left side of the results screen on major search engines. This study evaluated the organic search listing performance, in a U.S.-based search context, of hotel trademarks (websites) on three search engines (Google, Yahoo!, and MSN) across four countries (United States, United Kingdom, China, and India) over two separate time periods (2007 and 2009) for three tiers of lodging operation (economy, midscale, and upscale). Findings indicate significant differences across countries, across lodging tiers and over time. The study highlights the need for hoteliers to focus on website optimization and trademark control in the online information space. |
/*
* Copyright 2017 Palantir Technologies, Inc. All rights reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
import classNames from "classnames";
import * as React from "react";
import { polyfill } from "react-lifecycles-compat";
import { AbstractPureComponent2, Classes } from "../../common";
import { DISPLAYNAME_PREFIX, IntentProps, Props } from "../../common/props";
// eslint-disable-next-line deprecation/deprecation
export type FormGroupProps = IFormGroupProps;
/** @deprecated use FormGroupProps */
export interface IFormGroupProps extends IntentProps, Props {
/**
* A space-delimited list of class names to pass along to the
* `Classes.FORM_CONTENT` element that contains `children`.
*/
contentClassName?: string;
/**
* Whether form group should appear as non-interactive.
* Remember that `input` elements must be disabled separately.
*/
disabled?: boolean;
/**
* Optional helper text. The given content will be wrapped in
* `Classes.FORM_HELPER_TEXT` and displayed beneath `children`.
* Helper text color is determined by the `intent`.
*/
helperText?: React.ReactNode;
/** Whether to render the label and children on a single line. */
inline?: boolean;
/** Label of this form group. */
label?: React.ReactNode;
/**
* `id` attribute of the labelable form element that this `FormGroup` controls,
* used as `<label for>` attribute.
*/
labelFor?: string;
/**
* Optional secondary text that appears after the label.
*/
labelInfo?: React.ReactNode;
/** CSS properties to apply to the root element. */
style?: React.CSSProperties;
/**
* Optional text for `label`. The given content will be wrapped in
* `Classes.FORM_GROUP_SUB_LABEL` and displayed beneath `label`. The text color
* is determined by the `intent`.
*/
subLabel?: React.ReactNode;
}
@polyfill
export class FormGroup extends AbstractPureComponent2<FormGroupProps> {
public static displayName = `${DISPLAYNAME_PREFIX}.FormGroup`;
public render() {
const { children, contentClassName, helperText, label, labelFor, labelInfo, style, subLabel } = this.props;
return (
<div className={this.getClassName()} style={style}>
{label && (
<label className={Classes.LABEL} htmlFor={labelFor}>
{label} <span className={Classes.TEXT_MUTED}>{labelInfo}</span>
</label>
)}
{subLabel && <div className={Classes.FORM_GROUP_SUB_LABEL}>{subLabel}</div>}
<div className={classNames(Classes.FORM_CONTENT, contentClassName)}>
{children}
{helperText && <div className={Classes.FORM_HELPER_TEXT}>{helperText}</div>}
</div>
</div>
);
}
private getClassName() {
const { className, disabled, inline, intent } = this.props;
return classNames(
Classes.FORM_GROUP,
Classes.intentClass(intent),
{
[Classes.DISABLED]: disabled,
[Classes.INLINE]: inline,
},
className,
);
}
}
|
Dosimetric verification of intensity-modulated fields. The optimization of intensity distributions and the delivery of intensity-modulated treatments with dynamic multi-leaf collimators (MLC) offer important improvements to three-dimensional conformal radiotherapy. In this study, a nine-beam intensity-modulated prostate plan was generated using the inverse radiotherapy technique. The resulting fluence profiles were converted into dynamic MLC leaf motions as functions of monitor units. The leaf motion pattern data were then transferred to the MLC control computer and used to guide the motions of the leaves during irradiation. To verify that the dose distribution predicted by the optimization and planning systems was actually delivered, a homogeneous polystyrene phantom was irradiated with each of the nine intensity-modulated beams incident normally on the phantom. For each exposure, a radiographic film was placed normal to the beam in the phantom to record the deposited dose. The films were calibrated and scanned to generate 2-D isodose distributions. The dose was also calculated by convolving the incident fluence pattern with pencil beams. The measured and calculated dose distributions were compared and found to have discrepancies in excess of 5% of the central axis dose. The source of the discrepancies was suspected to be the rounded edges of the leaves and the scattered radiation from the various components of the collimation system. After approximate corrections were made for these effects, the agreement between the two dose distributions was within 2%. We also studied the impact of the "tongue-and-groove" effect on dynamic MLC treatments and showed that it is possible to render this effect inconsequential by appropriately synchronizing leaf motions. This study also demonstrated that accurate and rapid delivery of realistic intensity-modulated plans is feasible using a dynamic multi-leaf collimator. |
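The dose-calculation step described above, convolving the incident fluence with pencil-beam kernels, can be illustrated with a toy one-dimensional sketch. Everything here (the three-point kernel, the arbitrary units) is invented for illustration and is not the planning system's actual dose engine:

```python
def convolve_fluence(fluence, kernel):
    """Dose profile as a discrete convolution of fluence with a pencil-beam kernel."""
    half = len(kernel) // 2
    dose = []
    for i in range(len(fluence)):
        total = 0.0
        for k, w in enumerate(kernel):
            j = i + k - half  # kernel centred on point i
            if 0 <= j < len(fluence):
                total += w * fluence[j]
        dose.append(total)
    return dose

# Flat modulated field with a symmetric three-point scatter kernel (arbitrary units).
profile = convolve_fluence([0, 1, 1, 1, 0], [0.1, 0.8, 0.1])
```

Even this toy model shows the characteristic behaviour: lateral scatter blurs the sharp fluence edges into a penumbra, which is why the measured and pencil-beam-calculated distributions differ most near field boundaries.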
Single-Hole Pump in Germanium Single-charge pumps are the main candidates for quantum-based standards of the unit ampere because they can generate accurate and quantized electric currents. In order to approach the metrological requirements in terms of both accuracy and speed of operation, in the past decade there has been a focus on semiconductor-based devices. The use of a variety of semiconductor materials enables the universality of charge pump devices to be tested, a highly desirable demonstration for metrology, with GaAs and Si pumps at the forefront of these tests. Here, we show that pumping can be achieved in a yet unexplored semiconductor, i.e. germanium. We realise a single-hole pump with a tunable-barrier quantum dot electrostatically defined at a Ge/SiGe heterostructure interface. We observe quantized current plateaux by driving the system with a single sinusoidal drive up to a frequency of 100 MHz. The operation of the prototype was affected by accidental formation of multiple dots, probably due to disorder potential, and by random charge fluctuations. We suggest straightforward refinements of the fabrication process to improve pump characteristics in future experiments. I. INTRODUCTION A single-charge pump is an electronic device that can generate quantized electric current by clocking the transfer of individual charged quasi-particles (electrons, holes or Cooper pairs) with an external periodic drive. The pumped current can be expressed as I = nef, where e is the elementary charge, f is the frequency of the drive and n is an integer representing the number of particles transferred per period. The development of this technology has been mainly motivated by its possible application for quantum-based standards of electric current. 
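For a sense of scale, I = nef puts the quantized current at roughly 16 pA for one hole per cycle at the 100 MHz drive used in this work. A minimal sketch of the arithmetic (plain Python, not taken from any analysis code associated with the paper):

```python
E_CHARGE = 1.602176634e-19  # elementary charge in coulombs (exact, 2019 SI)

def pumped_current(n, f):
    """Quantized pump current I = n * e * f, in amperes."""
    return n * E_CHARGE * f

# One hole per cycle at a 100 MHz drive: about 16 pA.
i_100mhz = pumped_current(1, 100e6)
```

This is why the field pushes towards GHz-range drive frequencies: only there does the quantized current approach the nanoampere level convenient for metrological comparisons.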
To date, the most promising pump realisations are semiconductor quantum dots (QDs) with tunable tunnel barriers, which have been demonstrated to operate at the highest frequencies (up to a few GHz) with the lowest current uncertainty (below one part per million), in the pursuit of meeting the stringent metrological requirements. At the core of any quantum standard lies the concept of universality. This is the idea that the operation of the standard is based on fundamental principles of nature, rather than being dependent on its specific physical implementation. For example, the acceptance of the Quantum Hall Effect as a primary standard of resistance was driven by experimental demonstrations of agreement at a level of relative uncertainty below 10^-10 among Hall devices manufactured in silicon and GaAs, and later in graphene. Similarly, universality expects that the quantised currents generated by single-charge pumps do not depend on the material system used. Recently, a study of this kind has shown that there is agreement at a level of ≈10^-6 between silicon and GaAs pumps, an encouraging stepping stone for future tests with higher accuracy. Hence, from a universality standpoint, it is of interest to investigate a range of material and device systems that can support clocked charge transfers. In fact, besides the advanced performance achieved with QD pumps, there have been other less fulfilling demonstrations in superconductors, normal metals, hybrid normal/superconducting metals, single atoms, and graphene. Here, we introduce a new material system to the family of single-charge pumps, i.e. germanium (Ge). We demonstrate single-hole transfers clocked by a single sinusoidal drive in a tunable-barrier QD electrostatically formed at a Ge/SiGe heterostructure interface. We ascertain that the value of the current plateaux scales linearly with the rf drive frequency up to 100 MHz, as expected for quantized transport. 
We observe unusual plateaux boundaries in the 2D pump maps and tentatively attribute them to multiple parallel pump operation. We also observe device instability due to random charge fluctuations and suggest changes in the fabrication of the next generation of pumps that may mitigate this problem. II. METHODS The sample used in the experiments was fabricated on a Ge/SiGe heterostructure grown on a 100-mm n-type Si substrate. The material stack is composed of a 16-nm-thick compressively strained Ge quantum well grown on a strain-relaxed Si0.2Ge0.8 buffer layer. The Ge quantum well (QW) is undoped and separated from the semiconductor/oxide interface by a 22 nm Si0.2Ge0.8 barrier, as shown in Fig. 1(a). A two-dimensional hole gas with densities up to 6×10^11 cm^-2, transport mobility up to 5×10^5 cm^2/Vs, and effective mass of ≈0.05 m_e, where m_e is the free electron mass, is accumulated in the Ge QW via electrostatic gating. The device's gate layout is shown in Fig. 1(b). Ohmic contacts (shown in green) are defined by electron beam lithography, electron beam evaporation and lift-off of a 30-nm-thick Al layer. Electrostatic gates consist of two Ti/Pd layers with thicknesses of 20 nm and 40 nm for the barrier (red) and plunger gate (magenta) layer, respectively. Both layers are separated from the substrate and each other by 10 nm of ALD-grown Al2O3. The measurement set-up is schematically represented in Fig. 1(b). The metal gates are connected to programmable dc voltage sources through room-temperature low-pass filters (not shown). The gate voltages are used to selectively accumulate holes in the Ge QW, resulting in the formation of tunnel barriers (under gates B_in and B_out) that separate a quantum dot (under PL) from the hole reservoirs. Note that one remaining gate is kept at ground potential at all times to laterally confine the QD in the orthogonal direction to transport. 
The device current, I, is measured by a low-noise transimpedance amplifier connected to an ohmic contact. Gate B_in is connected to an rf source through a low-temperature bias-tee. This gate operates as the entrance barrier for the pumping cycle by clocking the loading of holes into the QD. The pumping protocol used in this work is known as the ratchet mode, which typically applies to single-QD single-drive pumps and is largely insensitive to device bias. Each pumping cycle begins with the rf drive raising the potential of the entrance barrier and loading the QD with holes from the nearest reservoir. The rf drive then lowers the barrier to trap holes and eject them across the exit barrier, B_out, and onto the other reservoir, as depicted in Fig. 1(c). Unless otherwise stated, the measurements presented in this work are carried out with only a small stray bias across the device ohmics due to the amplifier (V_bias ≈ 250 μV); no intended bias voltage is otherwise applied during pumping. The sample is cooled in a cryogen-free dilution refrigerator with a base temperature of approximately 12 mK, although the effective device temperature may be higher due to heating generated by the rf drive. III. RESULTS In order to tune the device into a single-QD operation regime, the transconductance is measured as a function of the dc voltages applied to both barriers with the rf source turned off. As illustrated in Fig. 1(d), for a fixed V_PL = −1.2 V, parallel Coulomb blockade peaks appear diagonally across the studied parameter space, a clear indication of single-QD formation. For less negative values of V_PL, honeycomb-like stability diagrams are observed (not shown), suggesting a double-QD regime instead. This informed the decision to carry out ratchet experiments at V_PL < −1.2 V. As highlighted by the dashed lines in panel (d), on occasion the Coulomb peaks present abrupt discontinuities. 
This fact is an indication that random charge rearrangements are occurring in, or in the vicinity of, the QD, resulting in discrete jumps in the current level at a given operation point. It is likely that charge traps at the interfaces between different layers of the material stack may be responsible for this effect. Four (nominally identical) devices have been tested, and all have shown roughly the same level of random fluctuations in dc tests. By applying a sinusoidal drive to the entrance gate with a peak-to-peak voltage at the 50 Ω output of the source of V_pp = 0.275 V, a current plateau at I = ef emerges, as shown in Fig. 2. For lower values of V_pp a current plateau is not observed. In fact, a strong capacitive coupling between the entrance gate and the QD, in combination with a sufficiently large rf modulation, provides captured holes with the energy shift needed to pass below the exit barrier and eventually be emitted at the end of a pump cycle, similarly to a previous report of silicon single-hole pumping. In Fig. 2(a), one can note the effect of the mentioned random fluctuations. The onset of the plateau region with respect to V_Bout, the so-called capture line, undergoes discrete shifts at every voltage scan, as opposed to being linearly dependent on V_Bin as merely dictated by capacitive-coupling considerations. By performing measurements at different frequencies we confirm that the pumped current scales as expected for a single hole transported per clock cycle, as reported in Fig. 2(b). It is of note that in these measurements the current is not seen to increase towards I = 2ef, as one would expect for increasingly negative exit-barrier voltage, which would normally allow an additional hole to be trapped and emitted. This may be due to a large charging energy in the QD, which prevents additional holes from being trapped, or to insufficient amplitude of the rf drive resulting in incomplete emission. 
In order to test the latter hypothesis, we increased the drive amplitude to V_pp = 0.3 V. However, this resulted in a severe worsening of the charge fluctuations and did not allow a conclusive test to be carried out. In order to modify the QD charging energy, a pump map is acquired at a more negative V_PL, see Fig. 3(a). This is expected to have the main effect of reducing the QD charging energy by enhancing its electrostatically defined size. A secondary effect is the enhancement of the transparency of both barriers due to cross-coupling. The map shows that in these circumstances the current rises above the first plateau value for increasingly negative V_Bout, albeit without fully reaching the second plateau. The shape of the current staircase between two adjacent plateaux as a function of the exit-barrier gate voltage can provide insights into the process by which the QD is decoupled from the reservoir(s), as indicated by the theoretical fits reported in Fig. 3(b). Thus far, most accurate semiconductor pumps have operated in the decay-cascade regime, where the final number of charged particles in the QD is determined by a one-way cascade of back-tunneling events. According to this model, the average number of holes captured per cycle can be written as ⟨n⟩ = Σ_m exp[−exp(−αV_Bout + ∆_m)], where α and ∆_m are fitting parameters. Alternatively, if the reservoir in the vicinity of the entrance gate is heated by the large-amplitude ac drive, charge capture may follow a thermal equilibrium regime. This operation mode has been previously observed in both electron and hole pumps in silicon. In this regime, particles are exchanged between the QD and the leads only during the initial stage of the pumping cycle, and the average number of captured holes can be written as ⟨n⟩ = Σ_m [1 + exp(A_m + B·V_Bout)]^(−1), where A_m and B are the fitting parameters for the m-th current plateau. In Fig. 3(b), the normalized pumped current, I/ef, is used in the numerical fit of ⟨n⟩ for both decay-cascade and thermal models in the range −0.95 V < V_Bout < −0.60 V. 
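The decay-cascade and thermal capture models just described are simple enough to prototype directly. The sketch below implements the two fit forms for a single plateau with generic, illustrative parameter values (not the fitted values from this work):

```python
import math

def decay_cascade(v, alpha, deltas):
    """Decay-cascade form: <n> = sum_m exp(-exp(-alpha*v + delta_m))."""
    return sum(math.exp(-math.exp(-alpha * v + d)) for d in deltas)

def thermal(v, a_params, b):
    """Thermal-equilibrium form: <n> = sum_m 1 / (1 + exp(a_m + b*v))."""
    return sum(1.0 / (1.0 + math.exp(a + b * v)) for a in a_params)

# Illustrative single-plateau parameters; the exit-barrier voltage v is swept
# towards negative values, where the hole is captured with near-unit probability.
n_deep = decay_cascade(-1.0, alpha=-20.0, deltas=[10.0])  # deep in the plateau
n_out = decay_cascade(0.0, alpha=-20.0, deltas=[10.0])    # before capture
```

Fitting either form to the measured I/ef versus V_Bout (for instance by least squares) yields the reduced-χ² comparison used to discriminate between the two capture regimes.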
A close inspection of the current staircase (see insets) shows that the thermal-equilibrium model fits the rising edge approaching n = 1 better than the decay cascade. The fitting error for the thermal model, reduced χ²_th = 0.13, is significantly lower than for the decay-cascade model, reduced χ²_decay = 0.85, for V_Bin = −0.601 V. Similar results are obtained by fitting traces at other V_Bin values (not shown). This suggests that this hole pump operates in the thermal regime. However, it is clear that the rising edge to the second plateau is not well represented by either model. This is not unusual, and it may indicate that other phenomena also affect the pumping process; these may include a change in gate coupling between single- and multi-particle QD configurations, as well as the presence of additional pumping entities such as traps or parasitic states besides the intended QD. In order to investigate this aspect, pump maps showing multiple plateaux were acquired under different experimental conditions, as shown in Fig. 4. The maps in (a) and (b) are taken for different values of V_PL and both present current plateaux for n = 1 and n = 2, in a similar fashion to what is shown in Fig. 3(a). By contrast, the map in Fig. 4(c) presents higher current values approaching n = 3 in addition to the lower-order plateaux. This measurement is taken with the device in a magnetic field B = 5 T applied perpendicular to the hole layer plane. The application of an out-of-plane magnetic field is expected to increase the QD confinement and improve the decoupling from the leads, which in GaAs pumps usually results in quantization enhancement. In a typical single-drive tunable-barrier pump the quantized plateau boundaries are set by insufficient loading or incomplete emission and form a checkers-type diagram with trapeze-shaped tiles at fixed values of n.
By contrast, the data presented here show that the regions of quantized current are nested one inside another. This becomes clearer by looking at the pumped-current derivatives shown in Fig. 4(d), (e) and (f). The red construction lines highlight the boundaries of each plateau region and reveal significant overlaps between trapezes of different areas and orientations in gate-voltage space. This may suggest that multiple pumps are at work in this device, with each pump producing only a plateau at n = 1. As observed in Fig. 2, this could be due to the fact that the QD is only able to capture one hole under the experimental conditions we have probed. By taking the Cartesian coordinates of the vertices of each trapeze, one can obtain their areas by simple geometric construction, as shown in Fig. 4(g), (h), (i). A Boolean function is used to select regions of the 2D map space where the trapeze areas overlap and to assign different colors depending on how many overlaps are detected. Assuming that each trapeze sets the boundary for a different n = 1 plateau (magenta), the overlap of two trapezes would lead to a region of the map representative of n = 2 (purple); similarly, three overlapping trapezes would lead to n = 3 (green). Comparing like-for-like panels in the bottom and top rows of Fig. 4, one may find enough similarities to indicate that multiple pumps operating in parallel could be the origin of the nested plateaux observed in the experiments.

IV. DISCUSSION

These experiments have been impacted by the frequent charge fluctuations in the device. The main limitation is that the effects of systematic changes in experimental parameters become intertwined with the effects of random charge rearrangements. Distinguishing between them with confidence is not always possible.
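The trapeze-overlap construction of Fig. 4(g)–(i) amounts to a point-in-polygon overlap count. A self-contained sketch (the trapeze coordinates are hypothetical, not the measured vertices):

```python
def point_in_polygon(x, y, poly):
    """Ray-casting test: is (x, y) inside the polygon given as [(x, y), ...]?"""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Edge crosses the horizontal ray through y; toggle parity if the
        # crossing lies to the right of the test point.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Two hypothetical n = 1 plateau boundaries ("trapezes") that partially overlap.
trapezes = [
    [(0.0, 0.0), (4.0, 0.0), (3.0, 2.0), (1.0, 2.0)],
    [(2.0, 1.0), (6.0, 1.0), (5.0, 3.0), (3.0, 3.0)],
]

def overlap_count(x, y):
    """Number of trapezes covering (x, y) — the inferred plateau index n."""
    return sum(point_in_polygon(x, y, t) for t in trapezes)

print(overlap_count(1.0, 0.5))   # inside the first trapeze only -> n = 1
print(overlap_count(3.0, 1.5))   # inside both -> n = 2
print(overlap_count(5.5, 0.5))   # outside both -> n = 0
```

Evaluating this count on a grid over the (V_Bin, V_Bout) plane and coloring by value reproduces the kind of nested n-regions shown in the bottom row of Fig. 4.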
Besides the instantaneous changes of current levels that resulted in noisy pump maps, one has to account for a longer-term instability caused by the accumulation in time of multiple random events. This becomes clear if one compares Fig. 3(a) and Fig. 4(b). These two maps were taken under the same experimental conditions (i.e. the same drive frequency, amplitude, B-field and V_PL values) and yet the pumping regions appear radically different. This is likely due to the fact that the measurements were taken a few days apart. It is, therefore, out of an abundance of caution that we do not comment on the theoretical error rate of the pump, which one could have calculated based on the thermal fit value at the inflection point of the plateau. Given that the plateau stability is affected by the fluctuators, it would be misleading to present such information for an isolated stable trace. In the absence of the random instabilities, one could also have tried to shed more light on the reasons for the capture of just one hole in the QD and on the origin of the multiple pumping mechanisms shown in Fig. 4. For example, by systematically investigating pump maps as a function of magnetic and electrostatic confinement, one could have gathered relevant information on whether they originated from atomic-like trap states or parasitic QDs, depending on the effect on the plateau length and boundary. Finally, note that devices similar to those used in this work have proved to be excellent hosts for spin-based quantum bits. In that case, stable charge configurations have been obtained by reducing gate-voltage swings down to a few mV. Unfortunately, this strategy does not lend itself effectively to pumping experiments, where large sinusoidal drives are typically needed and extensive dc voltage scans have to be carried out to verify the robustness of the transfer protocol.

V. CONCLUSION

In summary, we have demonstrated a prototype single-hole pump in a Ge-based QD.
By fitting the current staircase to theoretical models, we conclude that the pump operates in a thermal regime. We observed unusual quantized-plateau boundaries in the 2D pump maps and attributed them to unintended parallel pump operation. A more in-depth analysis of the theoretical pump error rate, as well as of the possible origin of the spurious pumping mechanisms, was prevented by device instability in the form of random charge fluctuations. In the future, these pumps may become useful for metrological applications by contributing to high-accuracy current generation or universality tests. Furthermore, hole pumps in high-mobility Ge could be of interest for the nascent field of fermionic quantum optics, as well as for the realisation of single-photon sources based on charge transfer. However, to fulfill these expectations, it will be imperative to improve their charge stability. Recent studies have shown that quieter QDs can be realised when the Ge quantum well is positioned deeper in the heterostructure stack, as a result of a thicker (55 nm) Si_0.2Ge_0.8 barrier layer. This also suggests that a possible origin of the charge fluctuations resides in trap states at the interface between the Si capping layer and the barrier layer. We expect that the next generation of Ge pumps will directly benefit from this improvement in the fabrication process and will likely achieve much reduced levels of charge fluctuations.
By Tom Kertscher on Wednesday, December 21st, 2016 at 5:00 a.m.
At a post-election town hall meeting in Kenosha, Wis., 2016 Democratic presidential candidate Bernie Sanders opined on why Republican Donald Trump is headed for the Oval Office and not the Democratic nominee, Hillary Clinton.
"One of the arguments as to why Trump won is the belief that most of – or many of -- his supporters are sexist or are racist or are homophobes. I happen not to believe that`s the case," Sanders said at the Dec. 12, 2016 meeting, which was broadcast on MSNBC.
"I think what he did do is, he said, ‘You know what, there is a lot of pain in this country; people are scared and people are worried.’ One example: Right now, 50 percent of older workers, 55 to 64 -- you know how much money they have in the bank, as they enter retirement? Who wants to guess?
"Zero!" the Vermont senator exclaimed. "What do you think? People are scared to death of retirement."
On another haves and have-nots claim -- that "the top one-tenth of 1 percent" of Americans "own almost as much wealth as the bottom 90 percent" -- we gave Sanders a Mostly True.
But, are half of Americans approaching retirement with nothing in the bank?
To support Sanders’ statement, his campaign cited a May 2015 report on the financial status of retirees and workers approaching retirement from a gold-standard source, the nonpartisan U.S. Government Accountability Office.
Sanders had requested it in his role as ranking member of the Senate Subcommittee on Primary Health and Retirement Security.
The report used data from the latest triennial Survey of Consumer Finances done by the Federal Reserve Board, in 2013. It is the premier survey of wealth in the United States, according to Wellesley College economics professor Courtney Coile, who is the associate director of the Retirement Research Center at the National Bureau of Economic Research.
The GAO found that 52 percent of households age 55 and over have no money in what the report defined as retirement savings -- an Individual Retirement Account (IRA) or a defined-contribution plan, such as a 401(k).
But that group includes people who are well into retirement.
The situation is somewhat less dire for the group that Sanders cited at the town hall, those ages 55 to 64. The report found that 41 percent of the people in that group have no money in defined-contribution plans or IRAs.
There’s another point of clarification, however. The report did not consider in that statistic other savings that the 55 to 64 age group have.
The Center for Retirement Research at Boston College ran those numbers for us, using the same Federal Reserve survey data, although it expressed the statistic a little differently.
The center found that 38.5 percent of Americans ages 55 to 64 do not have a retirement account such as an IRA, traditional pension or 401(k), but nevertheless have some savings. With that group, the median amount of their savings is $1,000 -- in other words, half of them have savings of more than $1,000 and half have less.
So, it’s not as though those people have "zero" in the bank, although many don’t have savings that would last them very long in retirement.
Sanders said 50 percent of workers ages 55 to 64 have "zero" money "in the bank as they enter retirement."
His claim is somewhat high, but doesn’t give a misleading impression. A widely respected government report says 41 percent of Americans 55 to 64 have no savings in common retirement accounts such as an IRA or a 401(k).
It’s also worth noting that some in that group have other savings not in retirement accounts, though for many of them it’s less than $1,000.
/**
 * Create a new body based on the vector calculated between the
 * current mouse position and the position where the mouse was
 * pressed, and add this new body to the simulation.
 */
@Override
public void mouseReleased(MouseEvent event) {
    if (SwingUtilities.isLeftMouseButton(event)) {
        int newBodyRadius = Integer.parseInt(radiusValueLabel.getText());
        Body newBody = new Body(newBodyRadius, newBodyPosition, newBodyVknot);
        /*
         * Do not add the new body while the other thread is
         * looping through the bodies in the update method
         * or when the limit of bodies is reached.
         */
        if (!isUpdating && bodies.size() < BODY_LIMIT) {
            // If the mouse did not move, create a stationary body instead.
            if (newBodyPosition.equals(mousePosition)) {
                newBody = new Body(newBodyRadius, newBodyPosition);
            }
            bodies.add(newBody);
            simPanel.update(bodies, newBodyPositions, mousePositions,
                    isAddingBody, isTracingPaths, isColoringPaths,
                    isShowingNetForces);
            /*
             * Maintain the mouse and new body positions if the
             * simulation is paused.
             */
            if (!isPaused) {
                mousePositions.clear();
                newBodyPositions.clear();
            }
        }
        isAddingBody = false;
    }
}
from django.apps import AppConfig
class TftchampionsConfig(AppConfig):
name = "portfolio.tftchampions"
|
package runn

import (
	"context"
	"fmt"

	"github.com/antonmedv/expr"
)

const bindRunnerKey = "bind"

// bindRunner evaluates expressions against the operator's store and
// binds the results to named variables.
type bindRunner struct {
	operator *operator
}

func newBindRunner(o *operator) (*bindRunner, error) {
	return &bindRunner{
		operator: o,
	}, nil
}

// Run evaluates each expression in cond against the current store and
// saves the result under the given key. Keys that would shadow the
// reserved store sections are rejected.
func (rnr *bindRunner) Run(ctx context.Context, cond map[string]string) error {
	store := rnr.operator.store.toMap()
	for k, v := range cond {
		if k == storeVarsKey || k == storeStepsKey {
			return fmt.Errorf("'%s' is reserved", k)
		}
		vv, err := expr.Eval(v, store)
		if err != nil {
			return err
		}
		rnr.operator.store.bindVars[k] = vv
	}
	return nil
}
|
"The big oil companies are the most profitable on the planet. But Scott Brown voted to give them $20 billion in taxpayer subsidies," says the narrator in the ad. "Big Oil gave Scott Brown thousands of dollars within days of his votes. Now Big Oil is spending millions to get him back to Washington."
The commercial also includes short interviews with New Hampshire residents who are critical of Brown.
"Scott Brown is in it for Scott Brown. Nobody else. Not New Hampshire. No way," says one man, at the end of the ad.
The Shaheen campaign tells CNN that the spot ran statewide for at least a week on WMUR and WBIN, New Hampshire's two main broadcast stations, as well as on cable television.
Ever since Brown formally declared his candidacy back in April, the Shaheen campaign repeatedly characterized him as being tainted by money from the big oil interests, but this is the first time they've broadcast that message on a paid TV ad.
The spot's release comes as the New Hampshire Democratic Party launched "Big Oil Billionaires for Brown," a campaign they say will include earned, paid, and owned media initiatives.
The commercial's release comes one day after the Brown campaign went up with a spot statewide on TV that highlights support from the state's other U.S. senator, Republican Kelly Ayotte.
"I support Scott Brown because I know that he's for fiscal responsibility, accountable government, and finally a health care plan that works for all of us. He will give everything he's got for New Hampshire," said Ayotte, in the commercial.
The race has also seen lots of spending by outside groups on both sides of the aisle. And Tuesday two of those groups, who are supporting Brown, took to the airwaves with new ads.
A WMUR/Granite State poll released last week indicated Shaheen held a slight two-percentage point margin over Brown in a general election showdown, which is within the poll's sampling error. Surveys earlier this summer showed Shaheen with leads between 8 and 12 points.
The New Hampshire GOP says the new Shaheen ad is a reaction to what appears to be a tightening of the race.
"Jeanne Shaheen’s dishonest ad is the latest sign that the senator is in a full blown panic after polls have confirmed that voters are souring on her record of voting with Barack Obama 99 percent of the time. Shaheen is a desperate Washington politician who has insulted her constituents by refusing to hold traditional town hall meetings, and Granite Staters are ready to replace her with a responsible Republican senator who will put New Hampshire first," said Jennifer Horn, chair of the Republican State Committee.
"Jeanne Shaheen and her allies are doing everything they can to stop Scott Brown in the primary. Their negative attacks won't work because voters can see for themselves there are real differences in this race – on jobs and the economy, on immigration and on foreign policy," said Brown campaign spokeswoman Elizabeth Guyton.
"In California, Governor Brown took a $20 billion deficit and now we're basking in record surpluses."
The truth of that statement is debatable. But even if true, it simply means the government took TOO MUCH money away from the citizens, and therefore took it out of the economy.
"Basking in record surplus" is nothing to be proud of... why can't the government just budget properly so that there is neither surplus nor deficit? You know... like a balanced budget amendment? |