Our skin is still crawling from the vamp-tastic first season of Guillermo del Toro’s FX horror series The Strain, and production is now underway on season two. The first season was a bit of a gory mixed bag, but the series had finally hit its weird-awesome stride by the time the season finale rolled around. Considering del Toro went into the show with a 3-5-year plan already in mind — since it’s based on the book trilogy of the same name — we’re glad to hear he’ll get a chance to tell more of the tale. With filming underway on the season-two premiere, del Toro chatted with Collider about the progress they’re making and how things will change in the second year. Not surprisingly, they’ll be changing a lot. Believe it or not, del Toro says things are getting even darker — and they plan on going a bit “off book” from the established story: “We started shooting last week. We’ve been prepping for about two months. I’m going to, god willing, direct the prologue of the first episode and some second unit and direct the black and white Mexican wrestler B-movie pictures that appear in the season, because one of our characters is a masked Mexican wrestler [laughs], so it will be a lot of fun for me. The pilot is being directed by Gregory Hoblit whom I admire and loved his work for many decades. We start, I think, with a really great episode. We’re about two days away from being done, second and third are in the pipeline, the sets are looking fantastic, we’re doing a lot of new makeup effects, we’re doing a lot of surprises. We’re going a little more off book this season than the last season. The last season went quite a bit off book on the last third, but this season we are introducing new characters even to the books and some characters are going to have really interesting arcs. Eph is going to a much darker place after losing Kelly. It’s a really interesting new world. And it’s great to be back on the show and see everybody back, like a family reunion.” The new season of The Strain is expected to hit FX in 2015. Do you like the direction they seem to be taking things? (Via Collider)
Multi-model Databases and Tightly Integrated Polystores: Current Practices, Comparisons, and Open Challenges

One of the most challenging issues in the era of Big Data is the variety of data. Currently, there are two main approaches to managing multi-model data directly: a single integrated multi-model database system, or a tightly-integrated middleware over multiple single-model data stores. In this tutorial, we review and compare these two approaches, offering insights into their advantages, trade-offs, and research opportunities. In particular, we dive into four key technical aspects of both types of systems, namely the theoretical foundations of multi-model data management, storage strategies for multi-model data, query languages across models, and query evaluation and its optimization. We provide a performance comparison of the two approaches and discuss related open problems and remaining challenges. The slides of this tutorial can be found at http://udbms.cs.helsinki.fi/?tutorials/CIKM2018.
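As a toy illustration of the second approach (not from the tutorial; the stores, schema, and query are invented), a tightly-integrated middleware essentially routes one logical query across single-model engines and joins the results above the stores:

```python
# A minimal sketch of a "middleware over single-model stores": a relational
# store (sqlite3, standard library) plus a document store (a plain dict here).
import sqlite3

# Relational store: customer orders.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER, customer TEXT, total REAL)")
db.executemany("INSERT INTO orders VALUES (?, ?, ?)",
               [(1, "alice", 120.0), (2, "bob", 30.0)])

# Document store: customer profiles as JSON-like documents.
profiles = {
    "alice": {"city": "Helsinki", "tier": "gold"},
    "bob": {"city": "Oslo", "tier": "silver"},
}

# Middleware layer: one logical query that joins across the two models,
# enriching relational rows with their matching documents.
def orders_with_profiles(min_total):
    rows = db.execute("SELECT customer, total FROM orders WHERE total >= ?",
                      (min_total,))
    return [{"customer": c, "total": t, **profiles.get(c, {})} for c, t in rows]

print(orders_with_profiles(100.0))
# [{'customer': 'alice', 'total': 120.0, 'city': 'Helsinki', 'tier': 'gold'}]
```

A real polystore would push filters down to each engine and optimize the cross-model join; this sketch only shows where the integration layer sits.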
from z3 import *
from random import *

# If you want to create a new flag, ensure it is printable (32 <= c <= 127)
# and add it to flag_mid
flag_start = "GLUG{"
flag_mid = "C01nc1d3nc3_c4n_b3_fr3aky_T6LSERDYB6"
flag_end = "}"
flag = flag_start + flag_mid + flag_end
len_flag = len(flag)
print(len(flag))

s = Solver()

# initiate instances
x = []
for i in range(0, len_flag):
    x.append(Int('x' + str(i)))

# add printable ASCII constraints to solver
for i in range(0, len_flag):
    s.add(x[i] >= 32)
    s.add(x[i] < 127)

# add known components of flag
def addKnown(s, x, flag_start, flag_end):
    for i in range(0, len(flag_start)):
        s.add(x[i] == ord(flag_start[i]))
    for i in range(0, len(flag_end)):
        s.add(x[len(x) - len(flag_end) + i] == ord(flag_end[i]))

addKnown(s, x, flag_start, flag_end)

# Number of random equations to generate. It is unclear what the ideal value is,
# but it should definitely be bigger than len_flag, otherwise the system is
# underdetermined and solutions are non-unique.
num_eqn = 2 * len_flag

E = []

# Given the flag and the number of variables, generate a random constraint
# of the form
#   x % y % z == N
# or
#   w % x % y % z == N
# where x, y, z, w are all chars in flag, and % is one of {+, -, *}
def genRandEqn(total_vars, flag):
    num_vars = randint(3, 4)
    rand_vars = []
    rand_indices = []
    for i in range(0, num_vars):
        index = randint(0, total_vars - 1)
        rand_vars.append("x[" + str(index) + "]")
        rand_indices.append(index)
    num_symbols = num_vars - 1
    symbols = ["+", "-", "*"]
    e = ""  # SAT var
    f = ""  # flag var
    c = ""  # C code
    for i in range(0, num_symbols):
        op = choice(symbols)
        e += rand_vars[i] + op
        f += str(ord(flag[rand_indices[i]])) + op
        c += "x[" + str(rand_indices[i]) + "]" + op
    e += rand_vars[-1]
    f += str(ord(flag[rand_indices[-1]]))
    c += "x[" + str(rand_indices[-1]) + "]"
    result = eval(f)
    return eval(e + "==" + str(result)), e + "==" + str(result), c + "==" + str(result)

# Generate num_eqn random constraints (currently 2 * len_flag).
# Add each to the Solver, and record the C form in E for printing
# to the C and Z3 python solver files later.
for i in range(0, num_eqn):
    g = genRandEqn(len(x), flag)
    s.add(g[0])
    E.append(g[2])

# Ensure satisfiability (given we constructed it, this should always print 'sat')
print(s.check())

'''
Check whether the solution is unique??
f = ''
for i in range(0, len(flag)):
    f += "x[" + str(i) + "] != " + str(ord(flag[i])) + ","
s.add(Or(eval(f[:-1])))
if str(s.check()) == 'unsat':
    # only one solution; can continue
'''

# Generate a model for the solution
mod = s.model()

# Get the model into a printable form
output = ""
for i in range(0, len_flag):
    output += chr(int(str(mod[x[i]])))
print(output)

###########################################
#### Output C file for challenge
###########################################
c_file = open("password.c", "w")
c_head1 = '''
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define TRUE 1
#define FALSE 0

int main(int argc, char *argv[])
{
    char *x = NULL;
    int result = TRUE;
    int password_length = 0;
    if (argc != 2)
    {
        printf("Usage: a.out <password>\\n");
        return -1;
    }
    printf("argv[1]: %s, strlen(password) = %d\\n", argv[1], strlen(argv[1]));
    x = argv[1];
    password_length = strlen(x);
'''
c_head2 = "\tif (password_length != " + str(len_flag) + ")"
c_head3 = '''
    {
        printf("Incorrect password length\\n");
        return -1;
    }
'''
c_file.write(c_head1 + c_head2 + c_head3)

# Emit one C if-statement per generated constraint.
for e in E:
    c_file.write("\tif (!(" + e + "))\n")
    c_file.write("\t{\n")
    c_file.write("\t\tresult = FALSE;\n")
    c_file.write("\t}\n")

c_tail = '''
    if (result == TRUE)
    {
        printf("CONGRATULATIONS!\\n");
        return 0;
    }
    if (result == FALSE)
    {
        printf("Incorrect password\\n");
        return 0;
    }
    return 0;
}
'''
c_file.write(c_tail)
c_file.close()

###########################################
#### Output python Z3 file for solver
###########################################
z3_file = open("password_solve.py", "w")
pyz3_head1 = '''
# python2
# pip install z3
from z3 import *

flag_start = "GLUG{"  # must match the generator's flag_start
flag_end = "}"
'''
pyz3_head2 = "len_flag = " + str(len_flag) + "\n"
pyz3_head3 = '''
s = Solver()

# initiate instances
x = []
for i in range(0, len_flag):
    x.append(Int('x' + str(i)))

# add printable ASCII constraints to solver
for i in range(0, len_flag):
    s.add(x[i] >= 32)
    s.add(x[i] < 127)

# add known components of flag
def addKnown(s, x, flag_start, flag_end):
    for i in range(0, len(flag_start)):
        s.add(x[i] == ord(flag_start[i]))
    for i in range(0, len(flag_end)):
        s.add(x[len(x) - len(flag_end) + i] == ord(flag_end[i]))

addKnown(s, x, flag_start, flag_end)
'''
z3_file.write(pyz3_head1 + pyz3_head2 + pyz3_head3)

# write equations to solve
for e in E:
    z3_file.write("s.add(" + e + ")\n")

pyz3_tail = '''
print(s.check())
mod = s.model()
output = ""
for i in range(0, len_flag):
    output += chr(int(str(mod[x[i]])))
print(output)

# Check that the recovered solution is unique by excluding it and re-checking.
e = ''
for i in range(0, len(output)):
    e += "x[" + str(i) + "]==" + str(ord(output[i])) + ","
s.add(Not(And(eval(e[:-1]))))
if str(s.check()) == 'unsat':
    print("Unique Solution!")
else:
    print("Non-unique solutions exist... more work needed")
'''
z3_file.write(pyz3_tail)
z3_file.close()
package com.chends.opengl.renderer.light;

import android.content.Context;
import android.opengl.GLES20;
import android.opengl.Matrix;

import com.chends.opengl.renderer.BaseRenderer;
import com.chends.opengl.utils.OpenGLUtil;

import javax.microedition.khronos.opengles.GL10;

/**
 * @author cds created on 2019/12/13.
 */
public class LightRenderer extends BaseRenderer {
    private String vertexLightShaderCode, fragmentLightShaderCode;
    private float[] CubeCoords = new float[]{
            -0.5f, 0.5f, 0.5f,   // top-left-front vertex
            0.5f, 0.5f, 0.5f,    // top-right-front vertex
            -0.5f, 0.5f, -0.5f,  // top-left-back vertex
            0.5f, 0.5f, -0.5f,   // top-right-back vertex
            -0.5f, -0.5f, 0.5f,  // bottom-left-front vertex
            0.5f, -0.5f, 0.5f,   // bottom-right-front vertex
            -0.5f, -0.5f, -0.5f, // bottom-left-back vertex
            0.5f, -0.5f, -0.5f,  // bottom-right-back vertex
    };
    private short[] indices = new short[]{
            2, 3, 0, 1, 5, 3, 7, 2, 6, 0, 4, 5, 6, 7
    };
    private float[] lightPos = new float[]{1f, 1f, 1f, 1f};

    public LightRenderer(Context context) {
        super(context);
        vertexShaderCode =
                "uniform mat4 uMVPMatrix;" +
                "attribute vec4 aPosition;" +
                "void main() {" +
                "  gl_Position = uMVPMatrix * aPosition;" +
                "}";
        fragmentShaderCode =
                "precision mediump float;" +
                "void main() {" +
                "  vec3 lightColor = vec3(1.0, 1.0, 1.0);" +
                "  vec3 objectColor = vec3(1.0, 0.5, 0.31);" +
                "  gl_FragColor = vec4(lightColor * objectColor, 1.0);" +
                "}";
        vertexLightShaderCode =
                "uniform mat4 uMVPMatrix;" +
                "attribute vec4 aPosition;" +
                "void main() {" +
                "  gl_Position = uMVPMatrix * aPosition;" +
                "  gl_PointSize = 25.0;" +
                "}";
        fragmentLightShaderCode =
                "precision mediump float;" +
                "void main() {" +
                "  gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0);" +
                "}";
    }

    private final float[] vPMatrix = new float[16], vPMatrix2 = new float[16],
            projectionMatrix = new float[16], viewMatrix = new float[16];

    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) {
        super.onSurfaceChanged(gl, width, height);
        float ratio = (float) width / height;
        // Set the perspective projection matrix; near plane at 3, far plane at 7.
        Matrix.frustumM(projectionMatrix, 0, -ratio, ratio, -1, 1, 3f, 7f);
        Matrix.setLookAtM(viewMatrix, 0, 1.5f, 1.5f, 6f, 0f, 0f, 0f, 0f, 1.0f, 0.0f);
    }

    @Override
    public void onDrawFrame(GL10 gl) {
        super.onDrawFrame(gl);
        // Compute the combined view-projection matrices.
        Matrix.multiplyMM(vPMatrix, 0, projectionMatrix, 0, viewMatrix, 0);
        Matrix.multiplyMM(vPMatrix2, 0, projectionMatrix, 0, viewMatrix, 0);
        drawCube();
        drawLight();
    }

    /**
     * Draw the light source.
     */
    private void drawLight() {
        int lightProgram = OpenGLUtil.createProgram(vertexLightShaderCode, fragmentLightShaderCode);
        GLES20.glUseProgram(lightProgram);

        // Pass in the vertex coordinates.
        int lightPositionHandle = GLES20.glGetAttribLocation(lightProgram, "aPosition");
        GLES20.glEnableVertexAttribArray(lightPositionHandle);
        GLES20.glVertexAttribPointer(lightPositionHandle, 4, GLES20.GL_FLOAT, false,
                4 * 4, OpenGLUtil.createFloatBuffer(lightPos));

        int mLightMVPMatrixHandle = GLES20.glGetUniformLocation(lightProgram, "uMVPMatrix");
        //Matrix.multiplyMM(vPMatrix, 0, tempMatrix, 0, translateMatrix, 0);
        GLES20.glUniformMatrix4fv(mLightMVPMatrixHandle, 1, false, vPMatrix2, 0);

        // Draw the light as a single point.
        GLES20.glDrawArrays(GLES20.GL_POINTS, 0, 1);
        GLES20.glDisableVertexAttribArray(lightPositionHandle);
    }

    /**
     * Draw the cube.
     */
    private void drawCube() {
        int shaderProgram = OpenGLUtil.createProgram(vertexShaderCode, fragmentShaderCode);
        GLES20.glUseProgram(shaderProgram);

        // Pass in the vertex coordinates.
        int positionHandle = GLES20.glGetAttribLocation(shaderProgram, "aPosition");
        GLES20.glEnableVertexAttribArray(positionHandle);
        GLES20.glVertexAttribPointer(positionHandle, 3, GLES20.GL_FLOAT, false,
                3 * 4, OpenGLUtil.createFloatBuffer(CubeCoords));

        int mMVPMatrixHandle = GLES20.glGetUniformLocation(shaderProgram, "uMVPMatrix");
        GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, vPMatrix, 0);

        // Draw the cube as an indexed triangle strip.
        GLES20.glDrawElements(GLES20.GL_TRIANGLE_STRIP, indices.length,
                GLES20.GL_UNSIGNED_SHORT, OpenGLUtil.createShortBuffer(indices));
        GLES20.glDisableVertexAttribArray(positionHandle);
    }
}
import { Component, ViewEncapsulation } from '@angular/core';

declare var $: any;

@Component({
    selector: 'ej-app',
    templateUrl: './actionbuttons.component.html',
    styles: ['.e-dialog.e-widget-content { background: none !important }'],
    encapsulation: ViewEncapsulation.None
})
export class ActionButtonsComponent {
    actionButtons: Array<string>;

    constructor() {
        this.actionButtons = ['close', 'collapsible', 'maximize', 'minimize', 'pin'];
    }

    // Hide the launch button and open the dialog.
    onClick(args) {
        $('#btnOpen').hide();
        $('#dialogIcon').ejDialog('open');
    }

    // Show the launch button again once the dialog closes.
    onDialogClose(args) {
        $('#btnOpen').show();
    }
}
Collaboration and Integration

The University of North Florida (UNF) transitioned to Canvas as its Learning Management System (LMS) in summer 2017. This implementation created opportunities for a more user-friendly learning environment for students. Working with students in in-person, hybrid, and online courses made it clear that the library needed a place in the Canvas LMS. Students had to remember how to access and locate library resources and services outside of Canvas. During this time, the Thomas G. Carpenter Library's online presence was enhanced, yet it was still not visible in Canvas. It became apparent that the library needed to be integrated into Canvas courses so that students could easily move between their coursework and the library resources and services that support their studies. In addition, librarians who worked with students looked for ways for students to easily find library resources and services online. After much discussion, it became clear to the Online Learning Librarian (OLL) and the Director of Technical Services and Library Systems (Library Director) that the library needed to explore ways to integrate more fully with Canvas.
It’s so often that corporations sweep their dark histories under the rug, but one company is finally stepping up to take responsibility for its past: Sega has issued a formal apology for the role it played in supplying Sonic The Hedgehog games to Nazi soldiers. From the years 1928-45, Sega was the principal video game supplier of the Third Reich, providing millions of copies of their classic “Sonic The Hedgehog” game for SS soldiers to play on their personal Sega Genesis devices. While the company tried to distance itself from its Nazi roots after the end of World War II, it never officially apologized for contributing to the Nazi war machine by raising troop morale with Sonic’s adrenaline-pumping quest to collect all seven Chaos Emeralds. And though the brand has had no affiliation with Nazis for over 70 years, photographs of Hitler Youth arguing over which one of them got to use the good controller to play Sonic have been circulating for decades without ever being officially addressed. As recently as the ’90s, Sega remained silent about the recruiting power that Sonic The Hedgehog provided the Nazis, who encouraged people to enlist so they could come hang in the SS barracks and unwind by playing as Tails or Knuckles in Sonic’s multiplayer mode. Even after watchdog groups unearthed old Sega advertisements reading “Sonic The Hedgehog: The Aryan Choice For Leisure” and “Argentina won’t extradite, and neither will Sonic!”, Sega still refused to publicly address its controversial connections. In addition to the apology, Sega has promised to erase Hitler’s official high score, as well as any other save files under the initials “A.H.” The company will also be donating a Sega Dreamcast to the Anne Frank museum. Wow. Owning up to the past is never easy, but it’s good to see Sega doing the right thing. Here’s hoping this apology brings some closure to the Nazis’ victims and their families.
/**
 * Object including the duration information of the route of one point pair.
 */
@Data
public static class DurationInfo {
    /**
     * Description of the duration.
     * <p>
     * The unit is minutes or hours.
     */
    private String text;

    /**
     * Value of the duration, in seconds.
     */
    private Double value;
}
Prescribing Trends of Non-Steroidal Anti-Inflammatory Drugs Used in the Dental Outpatient Department of a Tertiary Hospital in Nepal

The aim of the study was to monitor the non-steroidal anti-inflammatory drug prescribing pattern for patients attending the dental OPD of Chitwan Medical College Teaching Hospital, Bharatpur, Nepal. 1173 prescriptions of patients attending the dental OPD were collected randomly from 15 July 2011 to 14 January 2012. The data were analyzed using WHO guidelines. The average number of drugs prescribed was 2.3 per prescription. The most commonly prescribed analgesic was ibuprofen + paracetamol (48.4%), followed by piroxicam (31%). In total, 49.6% of analgesics were prescribed in fixed-dose combinations. Only 15.5% of analgesics were prescribed by generic name. In this study, paracetamol + ibuprofen was the most commonly prescribed analgesic among dental outpatients.
User-oriented approach to control of tone reproduction for electronic reprographic systems

Electronic reprographic systems are those which reproduce images through digital means. These systems offer the possibility of more flexible tonal control over pictorial reproduction than earlier analog systems. One challenge of electronic reprographics is to design a method of pictorial tone reproduction control that is simple to operate and does not require operator knowledge of image processing. Described here is one such method, which has been applied to Xerox's DocuTech Production Publisher. All electronic reprographic systems have three major components: a digital scanner, an image processor, and a digital printer. The imaging characteristics of any of these components could be altered to affect the system's tone reproduction. With the described method, scanner and printer imaging characteristics are held rigidly constant through rigorous process controls. This permits direct calculation of the image processing characteristics needed to meet the tone reproduction requirements specified by the customer. The method allows lower-cost hardware and faster processing through a reduction in the number of gray levels per pixel. A user-friendly interface and the availability of multiple halftone screens also contribute to meeting customer requirements.
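The abstract does not spell out the calculation, but the idea (fix the scanner and printer, then compute the image-processor correction directly) can be sketched as follows. The response curves below are invented placeholders; only the structure (invert the scanner curve, apply the customer's target curve, invert the printer curve via table lookup) reflects the described method.

```python
import numpy as np

# Hypothetical, fixed device characteristics (held constant by process control):
# the scanner maps reflectance [0, 1] to counts [0, 255]; the printer maps
# counts back to printed reflectance.
def scanner_response(reflectance):
    return np.clip(255.0 * reflectance ** 0.9, 0, 255)

def printer_response(counts):
    return np.clip((counts / 255.0) ** 1.4, 0.0, 1.0)

def build_tone_lut(target_curve, n_levels=256):
    """Build an image-processor lookup table so the end-to-end
    scan -> correct -> print pipeline realizes the customer's target
    tone reproduction curve (input reflectance -> output reflectance)."""
    lut = np.zeros(n_levels, dtype=np.uint8)
    # Printed reflectance for every possible printer input, used for inversion.
    printable = printer_response(np.arange(n_levels, dtype=float))
    for c in range(n_levels):
        # Reflectance that produced scanner count c (invert the scanner curve).
        r_in = (c / 255.0) ** (1.0 / 0.9)
        # Reflectance the customer wants on paper for that input.
        r_out = target_curve(r_in)
        # Pick the printer input whose printed reflectance is closest.
        lut[c] = int(np.argmin(np.abs(printable - r_out)))
    return lut

# Example: a linear (identity) tone reproduction target.
identity_lut = build_tone_lut(lambda r: r)
```

Because the device curves are pinned down by process control, the correction reduces to this one small table, which is why fewer gray levels per pixel and cheaper hardware suffice.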
def sort_bam_by_coordinate(self, input, outputs):
    '''Sort a BAM file by coordinate with samtools, then index the result.'''
    output_bam = outputs[0]
    # Leave ~2 GB of headroom below the configured memory limit, but use at least 1 GB.
    mem = max(int(self.get_stage_options("samtools", "mem")) - 2, 1)
    command = "samtools sort -m {mem}G {input} > {output} && " \
              "samtools index {output}".format(mem=mem, output=output_bam, input=input)
    run_stage(self.state, "samtools", command)
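For illustration, here is how the template renders with hypothetical values (a stage option "mem" of 8 GB, so mem = max(8 - 2, 1) = 6); note that {output} appears twice, so the sorted BAM is both written and then indexed in the same shell invocation:

```python
# Hypothetical paths, purely to show the rendered command string.
command = "samtools sort -m {mem}G {input} > {output} && " \
          "samtools index {output}".format(mem=6, input="sample.bam",
                                           output="sample.sorted.bam")
print(command)
# samtools sort -m 6G sample.bam > sample.sorted.bam && samtools index sample.sorted.bam
```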
A third of people in Yorkshire and the Humber have admitted to sharing their online passwords with other people. Senior police officers leading the fight against cyber crime in Yorkshire and the Humber have urged the public to take basic precautions after statistics revealed almost one in three people in the region had shared their passwords with someone else. National crime survey results published this summer showed that one in 10 people were now the victim of fraud and cyber crime, with an estimated two million computer misuse offences recorded in the past year. It is vital to do everything possible to protect yourself. But an Ipsos MORI poll found that most people are still ignoring advice which could stop them falling foul of online scammers. Only one in three people in our region said they were using a strong password made up of three random words, and 32 per cent admitted to sharing their passwords with other people. The worrying statistics have prompted police to team up with the National Cyber Security Centre today to promote the #ThinkRandom campaign. Detective Chief Inspector Vanessa Smith, of the Yorkshire and Humber regional cyber crime unit, said: “So-called cyber crime can happen to anyone with a computer, laptop, tablet or mobile phone – and the impact can be devastating. “Victims can lose large amounts of money but also treasured possessions such as family photos stored on their devices.” Specialist officers investigating cyber crime have long warned that a weak password, which is easy for others to guess, can give criminals all they need to unlock important online accounts. The National Cyber Security Centre said its research had shown that the best way to make a password both strong and memorable was to use three random words. It also advised using different passwords for the most important accounts – email, social media and online banking. Today’s #ThinkRandom activity on social media ties in with the wider aims of Cyber Aware, a campaign funded by the National Cyber Security Programme (NCSP). Since 2014 it has worked to provide individuals and small businesses with the knowledge needed to protect themselves from cyber criminals.
Eight Countries Sign ACTA by Glen Shapiro, LawAndTax-News.com, New York 04 October 2011 The United States Trade Representative (USTR) has issued a statement on behalf of Australia, Canada, the European Union and its member states, Japan, South Korea, Mexico, Morocco, New Zealand, Singapore, Switzerland and the United States, who have reaffirmed their commitment to the Anti-Counterfeiting Trade Agreement (ACTA) at a signing ceremony in Tokyo. When it enters into force with all participants, the ACTA will formalize the legal foundation for a first-of-its-kind alliance of trading partners, representing more than half of world trade. It is hoped that it will represent a significant achievement in the fight against the infringement of intellectual property rights (IPR), in particular the proliferation of counterfeiting and piracy on a global scale, providing a mechanism for the parties to work together in a more collaborative manner to achieve the common goal of effective IPR enforcement. It includes provisions on civil, criminal, border and digital environment enforcement measures, robust cooperation mechanisms among the ACTA parties to assist in their enforcement efforts, and the establishment of best practices for effective IPR enforcement. With respect to the legal framework, the ACTA establishes a strengthened standard that builds on the minimum standards of the World Trade Organization Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS). It is said that this marks a considerable improvement in international trade norms for effectively combating the current global proliferation of commercial-scale counterfeiting and piracy. Representatives of eight governments – Australia, Canada, Japan, South Korea, Morocco, New Zealand, Singapore and the US – signed the agreement. Representatives of the European Union, Mexico and Switzerland attended the ceremony and confirmed their continuing strong support for and preparations to sign it as soon as practicable. All participants expressed their firm resolve to work cooperatively to achieve the ACTA’s prompt entry into force, and to support actively its goals. Formal ACTA negotiations started in June 2008, with the final round of negotiations being held in Japan in October 2010. Following translation and technical work, the ACTA was opened for signature on May 1, 2011. For those who have already signed, the next step in bringing the ACTA into force is the deposit of instruments of ratification, acceptance or approval. The agreement will enter into force following the deposit of the sixth such instrument. Furthermore, at a symposium in Sendai, Japan on September 30 this year, participants in the ACTA negotiations invited other trading partners to consider joining this emerging consensus on stronger IPR enforcement.
Investigating and targeting chronic lymphocytic leukemia metabolism with the human immunodeficiency virus protease inhibitor ritonavir and metformin

Abstract Chronic lymphocytic leukemia (CLL) remains fatal due to the development of resistance to existing therapies. Targeting abnormal glucose metabolism sensitizes various cancer cells to chemotherapy and/or elicits toxicity. Examination of glucose dependency in CLL demonstrated variable sensitivity to glucose deprivation. Further evaluation of the metabolic dependencies of CLL cells resistant to glucose deprivation revealed increased engagement of fatty acid oxidation upon glucose withdrawal. Investigation of glucose transporter expression in CLL revealed up-regulation of the glucose transporter GLUT4. Treatment of CLL cells with the human immunodeficiency virus (HIV) protease inhibitor ritonavir, which inhibits GLUT4, elicits toxicity similar to that elicited upon glucose deprivation. CLL cells resistant to ritonavir are sensitized by co-treatment with metformin, potentially targeting compensatory mitochondrial complex I activity. Ritonavir and metformin have been administered together in humans for the treatment of diabetes in patients with HIV, demonstrating that this combination is tolerated in humans. Our studies strongly support further investigation of the Food and Drug Administration-approved drugs ritonavir and metformin for CLL.
import numpy as np
import torch
import torch.nn.functional as F
import warnings

def unsqueeze(x, dim=-1, n=1):
    "Same as `torch.unsqueeze` but can add `n` dims"
    for _ in range(n):
        x = x.unsqueeze(dim)
    return x

def _bbs2sizes(crops, init_sz, use_square=True):
    # Convert pixel bounds to normalized (low corner, size) pairs,
    # optionally squared and clamped so boxes stay inside the image.
    bb = crops.flip(1)
    szs = (bb[1] - bb[0])
    if use_square:
        szs = szs.max(0)[0][None].repeat((2, 1))
    overs = (szs + bb[0]) > init_sz
    bb[0][overs] = init_sz - szs[overs]
    lows = (bb[0] / float(init_sz))
    return lows, szs / float(init_sz)

def crop_resize(x, crops, new_sz):
    # Resample a crop of each image, given per-image pixel bounds, to new_sz.
    # NB assumes square inputs. Not tested for non-square anythings!
    bs = x.shape[0]
    lows, szs = _bbs2sizes(crops, x.shape[-1])
    if not isinstance(new_sz, (list, tuple)):
        new_sz = (new_sz, new_sz)
    id_mat = torch.tensor([[1., 0, 0], [0, 1, 0]])[None].repeat((bs, 1, 1)).to(x.device)
    with warnings.catch_warnings():
        warnings.filterwarnings('ignore', category=UserWarning)
        sp = F.affine_grid(id_mat, (bs, 1, *new_sz)) + 1.
        grid = sp * unsqueeze(szs.t(), 1, n=2) + unsqueeze(lows.t() * 2., 1, n=2)
        return F.grid_sample(x, grid - 1, mode='nearest')

def _px_bounds(x, dim):
    # Per-image [min, max] indices of nonzero entries after summing over `dim`.
    c = x.sum(dim).nonzero().cpu()
    idxs, vals = torch.unique(c[:, 0], return_counts=True)
    vs = torch.split_with_sizes(c[:, 1], tuple(vals))
    d = {k.item(): v for k, v in zip(idxs, vs)}
    default_u = torch.tensor([0, x.shape[-1] - 1])
    b = [d.get(o, default_u) for o in range(x.shape[0])]
    b = [torch.tensor([o.min(), o.max()]) for o in b]
    return torch.stack(b)

def mask2bbox(mask):
    # Bounding boxes of binary masks, pairing the bounds from both spatial axes.
    no_batch = mask.dim() == 2
    if no_batch:
        mask = mask[None]
    bb1 = _px_bounds(mask, -1).t()
    bb2 = _px_bounds(mask, -2).t()
    res = torch.stack([bb1, bb2], dim=1).to(mask.device)
    return res[..., 0] if no_batch else res

def squarePad(x):
    # Zero-pad the shorter spatial side so the tensor becomes square.
    long = max(x.shape[-2:])
    short = min(x.shape[-2:])
    if long == x.shape[3]:
        d3 = long
        d2 = long - short
        cat_dim = 2
    else:
        d2 = long
        d3 = long - short
        cat_dim = 3
    pad = torch.zeros((x.shape[0], x.shape[1], d2, d3), dtype=x.dtype, device=x.device)
    padded = torch.cat((x, pad), dim=cat_dim)
    return padded

def bbox2Square(bbox, pad=0):
    # Expand boxes to squares sharing the same centers, optionally padded.
    bbox_center = (bbox[:, :, 1] + bbox[:, :, 0]) / 2.0
    bbox_radius = ((bbox[:, :, 1] - bbox[:, :, 0]).max(-1)[0]) / 2.0 + pad
    bbox_square = bbox_center.unsqueeze(-1).repeat(1, 1, 2)
    bbox_square[:, :, 0] -= bbox_radius.unsqueeze(1)
    bbox_square[:, :, 1] += bbox_radius.unsqueeze(1)
    bbox_square = bbox_square.ceil()
    return bbox_square
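A small usage sketch (hypothetical data, not from the original repo; assumes a recent PyTorch for `torch.meshgrid(..., indexing='ij')`): derive a bounding box from a binary mask, then extract a square resampled crop around it with the helpers above:

```python
# Build a batch with one circular mask, find its bounding box, and
# crop-resize the image around it.
imgs = torch.rand(1, 3, 64, 64)
ys, xs = torch.meshgrid(torch.arange(64), torch.arange(64), indexing='ij')
mask = (((ys - 40) ** 2 + (xs - 20) ** 2) < 100).float()[None]   # (1, 64, 64)

crops = mask2bbox(mask)               # pixel bounds per image
patch = crop_resize(imgs, crops, 32)  # square crop around the mask
print(patch.shape)                    # torch.Size([1, 3, 32, 32])
```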
Color-related Local Binary Pattern: A Learned Local Descriptor for Color Image Recognition

Local binary pattern (LBP) is a kind of local feature that has shown its simplicity, easy implementation and strong discriminating power in image recognition. Although some LBP variants have been specifically designed for color image recognition, these methods do not adequately exploit the color information of images and easily incur the curse of dimensionality in classification. In this paper, a color-related local binary pattern (cLBP), which learns the dominant patterns from the decoded LBP, is proposed for color image recognition. This paper first proposes a relative similarity space (RSS) that represents the color similarity between image channels for describing a color image. Then, the decoded LBP, which can mine the correlation information between the LBP feature maps corresponding to each color channel of the RSS and traditional RGB spaces, is employed for feature extraction. Finally, a feature learning strategy is employed to learn the dominant color-related patterns, reducing the dimension of the feature vector and further improving the discriminability of the features. Theoretical analysis shows that the proposed RSS can provide more discriminative information and has higher noise robustness as well as higher illumination variation robustness than the traditional RGB space. Experimental results on four groups (twelve in total) of public color image datasets show that the proposed method outperforms most LBP variants for color image recognition in terms of feature dimension and recognition accuracy under noise-free, noisy and illumination variation conditions.

I. INTRODUCTION

Image descriptors play a key role in many computer vision applications such as image retrieval, object and scene recognition, and image classification. These applications primarily rely on extracting image features and analyzing them to classify the image or to retrieve the image most similar to a target image. Therefore, a well-designed descriptor can greatly improve recognition performance and processing speed. In recent years, many descriptors have been proposed. They can be roughly divided into three categories: local features, global features and deep learning-based features. Among these, local features focus on the local information between pixels in the image, so that pattern matching is not affected by local deviations. This kind of descriptor has a great advantage in computer vision applications, and mainly includes SURF, DAISY, BRIEF, etc. Local binary pattern (LBP), a widely used local feature, was proposed and optimized by Ojala et al. It has received extensive attention because of its simplicity, easy implementation and strong discriminating power. Based on the traditional LBP, many variants have been introduced. Some change the coding and mode selection strategies, such as LBP-TOP, LTP, disLBP, PRICoLBP, 2D-LBP, the rotation-invariant local binary descriptor (RI-LBD), etc. Others change the neighborhood topologies and sampling structures, such as CSLBP, ELBP, the median binary pattern, etc. As the most prominent information in an image, color is closely related to objects and scenes, and it is one of the most widely used features in image recognition and retrieval. However, most LBP variants focus on texture information, and color information is often ignored.
Color images are usually converted to gray-scale images before processing by these LBP variants. To utilize the color information, some LBP variants have recently been proposed for color image recognition. Zhu et al. proposed the orthogonal combination of local binary patterns (OC-LBP) and new local descriptors based on OC-LBP enhanced with color information for image description. In the color radial mean completed local binary pattern (CRMCLBP), the radial mean completed local binary pattern is computed on each color channel independently. To consider the correlation between the color channels of an image, the multispectral local binary pattern (MSLBP), which uses the opponent LBP to capture the spatial relationship between two color channels, was proposed to describe a color image by six sets of opponent LBPs and three LBPs computed from each spectrum independently. Lee et al. proposed the local color vector binary pattern (LCVBP) to extract the characteristics of color images through color norm patterns and color angular patterns, where the color angular patterns are calculated from the ratios among different spectral-band images. Lan et al. proposed a quaternion local ranking binary pattern (QLRBP), which represents the image with quaternion algebra; the QLRBP operator calculates the similarity between each pixel and a reference point through the phase of the Clifford translation of a quaternion. Li et al. proposed the completed local similarity pattern (CLSP) for color image recognition, which consists of two parts: color labeling, and a local similarity pattern that calculates the LBP from the color distance between the central pixel and the neighborhood pixels. Singh et al. developed a color texture descriptor called the local binary pattern for color images (LBPC). LBPC divides the neighbors of the central pixel into two categories by establishing a spatial threshold plane, and can be fused with the local binary pattern of the hue channel and the color histogram to boost discriminative power. Dubey et al. proposed a method to extract color information called the multichannel decoded local binary pattern (mdLBP), which combines information from the R, G and B color channels in a decoding manner. In mdLBP, two schemes, i.e., an adder and a decoder, are used to capture the joint information of multiple channels. Compared with other LBP descriptors, mdLBP combines the joint information between color channels well and keeps the primary information of the color channels. In summary, most existing LBP variants for representing color images do not adequately consider color information and contain much redundant information, which increases the dimension of the feature vector and decreases recognition performance. In this paper, we propose a local color descriptor named the color-related local binary pattern (cLBP), which learns the dominant color-related patterns from the decoded LBP for color image representation. In the proposed method, the relative similarity space (RSS) is first proposed to capture the color similarity between the three channels of a color image. Then, LBP decoding is employed to describe the color image on the combination of the RSS and traditional RGB color spaces. Finally, a feature learning strategy is used to learn the most discriminative features (the dominant color-related patterns) and reduce the dimension of the feature vector.
Experimental results show that the proposed cLBP achieves promising results on texture, object and face recognition under noisy, noise-free and illumination variation conditions. The rest of this paper is organized as follows. The framework of color image recognition by the proposed cLBP is introduced in Section 2. In Section 3, the effectiveness of the proposed cLBP is demonstrated through experiments. Finally, conclusions are drawn in Section 4.

II. THE PROPOSED CLBP

This section first introduces the framework of color image recognition by the proposed cLBP. Then, the RSS of a color image is presented in subsection A. In subsection B, the scheme of LBP decoding on multiple color channels is introduced. The learning of color-related patterns from the decoded LBP features is described in subsection C.

The framework of the proposed cLBP for color image recognition is shown in Fig. 1. It mainly contains two procedures: the training and testing procedures. The purpose of the training procedure is to obtain an adaptive dominant pattern table by learning dominant patterns from the decoded LBP of training images. The training procedure consists of three stages: pre-processing, LBP decoding, and color-related pattern learning. For a training image, to well describe the color information, the pre-processing stage combines the RSS and traditional RGB color spaces to fully represent the color information. On this basis, in the LBP decoding stage, the traditional LBP operation is performed on the six channels to obtain the corresponding LBP feature maps, which are then decoded to capture the joint color information of the LBP feature maps corresponding to each color channel. Since six color channels are used in total, the decoded LBP outputs 64 histograms, each consisting of 256 bins. These decoded LBP feature histograms are concatenated to form a feature vector for color image representation. To remove the redundancy of the obtained feature vector, a color-related pattern learning strategy is applied to the decoded LBP to improve recognition accuracy and efficiency. In the color-related pattern learning stage, a cumulative histogram is calculated by adding the decoded LBP feature vectors of all the training images, and the dominant pattern table is obtained from the cumulative histogram by a feature selection strategy, described in detail in subsection II.C. In the testing procedure, the cLBP of a color image is obtained by selecting the dominant color-related patterns from its decoded LBP according to the learned dominant pattern table, and the result is finally fed into a classifier for image recognition.

A. The Relative Similarity Space (RSS)

In this subsection, the relative similarity space (RSS) is proposed to represent the color similarity between the R, G and B channels of a color image. The relative similarity considers the joint distribution of the channels of a color image, which can well represent the cross-similarity between color channels. Through the proposed RSS, more color information is considered in the subsequent feature extraction and learning stages. Moreover, since the calculation in RSS can offset the interference of noise and illumination, the proposed cLBP performs well in color image recognition under both noisy and illumination variation conditions. Unlike a gray-scale image, a color image consists of multiple channels.
Therefore, similarity is used to represent the relationship between color channels. For two positive numbers p and q (the values of the RGB channels range over [0, 255]), their similarity can be calculated from their difference as

S(p, q) = |p - q|.

In this case, if the two numbers are similar to each other, the similarity S tends to 0. However, this ignores the order of the two numbers and the magnitude of the numbers themselves. For example, (p = 1, q = 2) and (p = 2, q = 1) have the same similarity. Likewise, the difference between 1 and 2 is the same as the difference between 100 and 101, yet these two pairs are at very different levels of value. To avoid these problems, the relative similarity is considered. The similarity of p relative to q is defined as

RS(p, q) = |p - q| / (p + ε),

where ε is a small value to avoid the denominator being 0. The change of the relative similarity RS with p and q is shown in Fig. 2. From this figure, it can be found that the relative similarity is always 0 when the two values are equal. The relative similarity increases as the difference between the two numbers increases, and decreases with increasing magnitude when the difference between the two numbers is constant. Most importantly, there are two different growth rates of the relative similarity. When p > q, the relative similarity increases slowly as the difference between the two numbers increases, and is always less than 1. On the contrary, the relative similarity increases rapidly when p < q. This means the relative similarities differ when the two numbers come in different orders. On this basis, the relative similarity can be applied to represent color images. Let R(x, y), G(x, y) and B(x, y) represent the values of the color channels at a pixel located at (x, y); the relative similarity of G and B related to R can be computed as

RS_R(x, y) = |G(x, y) - B(x, y)| / (R(x, y) + ε).

Similarly, the relative similarities of R and B related to G, and of R and G related to B, are defined as

RS_G(x, y) = |R(x, y) - B(x, y)| / (G(x, y) + ε),
RS_B(x, y) = |R(x, y) - G(x, y)| / (B(x, y) + ε).

RS_R, RS_G and RS_B represent the relative similarities of the three color channels. Together they constitute a new color space, termed RSS, which mainly describes the color similarity between the channels of a color image. Fig. 3 illustrates a color image (with illumination variation) and its three channels in the RGB and RSS spaces under both noisy and noise-free conditions. The images in RSS space reflect the color similarity of the original color image well and are not easily affected by noise or illumination variation.

B. LBP decoding on multi-color channels

Fig. 4 illustrates the scheme of LBP decoding on the combination of the RSS and RGB color spaces. It mainly consists of three steps. In the first step, after the color image is represented in the RSS and RGB color spaces, the traditional LBP is used to encode the color information of each channel respectively, yielding six LBP feature maps in total. Second, the LBP decoding step captures the joint information of these six LBP feature maps by mapping the values in the LBP feature maps into the decoded LBP feature maps in a decoding manner. Finally, a set of decoded LBP feature histograms is generated, and these histograms are concatenated to form a feature vector describing the color image.
For a color image of size N × M represented in the RSS and RGB spaces, the traditional LBP of the n-th channel is computed as

LBP_n(x, y) = Σ_{m=0}^{P-1} s( V^m_{n,R,P}(x, y) - V_n(x, y) ) · 2^m,  with s(u) = 1 if u ≥ 0 and 0 otherwise,

where V_n(x, y) is the value of the center pixel located at (x, y) in the n-th channel, and V^m_{n,R,P}(x, y) is the value of its m-th neighboring pixel. P is the total number of neighborhood pixels involved, and R is the radius of the neighborhood around the center pixel. This generates six traditional LBP feature maps for a color image, whose values are P-bit binary strings ranging over [0, 255]. In the LBP decoding step that follows, the pixel located at (x, y) corresponds to six binary strings across these feature maps. The m-th bits of the six binary strings are concatenated to form a new binary string, denoted dM_m(x, y), in the decoded LBP feature map:

dM_m(x, y) = Σ_{n=1}^{6} s_{n,m}(x, y) · 2^{n-1},

where s_{n,m}(x, y) is the m-th bit of the LBP binary string of the n-th channel. The decoded LBP feature maps are then counted into feature histograms. The c-th feature histogram H_c is computed as

H_c(k) = Σ_{x=1}^{N} Σ_{y=1}^{M} 1( dLBP_c(x, y) = k ),  ∀k ∈ [0, 255],

where dLBP_c denotes the c-th decoded LBP feature map, whose m-th bit at (x, y) is set when dM_m(x, y) = c - 1. Finally, by concatenating these feature histograms in series, the decoded LBP feature vector is obtained as

H = [H_1, H_2, ..., H_64].

Compared with simply concatenating the LBP feature histograms from different color channels, the decoded LBP is an excellent way to mine the correlation information between the LBP feature maps corresponding to each color channel. However, the dimension of the decoded LBP feature vector is more than ten times larger than that of simple concatenation (64 × 256 versus 6 × 256). In the next subsection, a simple but effective feature learning scheme is introduced to learn the dominant color-related patterns from the feature vector obtained by the decoded LBP.

C. Color-related Patterns Learning

The dimension of the decoded LBP feature vector on the combination of the RSS and RGB color spaces is 256 × 64 = 16384. To avoid the curse of dimensionality in classification, we introduce a feature learning strategy to reduce the dimension and capture the discriminative features, i.e., the color-related patterns. In the dominant LBP, the authors noted that dominant patterns are the patterns that occur with high frequency in the feature maps and represent an image's primary information. The proposed color-related pattern learning strategy is shown in Fig. 5. As shown in this figure, in the learning procedure, the patterns with a high frequency of occurrence in the decoded LBP feature vectors across all the training images are selected as the discriminative features. For all the training images, the corresponding decoded LBP feature vectors are added together to obtain a cumulative feature vector H_cum:

H_cum = Σ_{t=1}^{T} H^(t),

where T is the number of training images and H^(t) is the decoded LBP feature vector of the t-th training image. After the cumulative feature vector H_cum is obtained, its elements are sorted by value, and the elements with the D highest values (D is the number of dominant patterns to be learned) are taken as the dominant color-related patterns of the training set. The positions of these elements are recorded as the dominant pattern table. In the selecting procedure, for an input image, we first calculate the decoded LBP feature vector through the pre-processing and LBP decoding stages. Then, D patterns are selected from the decoded LBP feature vector according to the learned dominant pattern table. Finally, the cLBP feature vector is obtained by concatenating the selected patterns:

H_cLBP = [B_1, B_2, ..., B_D],

where B_i is the i-th selected pattern.
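To make the pipeline concrete, the following is a minimal NumPy sketch of the stages described above: RSS conversion, LBP encoding, and decoding into 64 histograms of 256 bins. The RSS and decoding formulas follow the reconstructions given in the text, and the exact conventions (e.g., neighbor ordering, bit routing) are assumptions; treat this as an illustration, not the authors' implementation.

```python
import numpy as np

def rgb_to_rss(img, eps=1e-6):
    """Map an RGB image (H, W, 3) to the relative similarity space (RSS),
    using the reconstructed definitions RS_R = |G - B| / (R + eps), etc."""
    r, g, b = [img[..., i].astype(np.float64) for i in range(3)]
    rs_r = np.abs(g - b) / (r + eps)   # similarity of G and B related to R
    rs_g = np.abs(r - b) / (g + eps)   # similarity of R and B related to G
    rs_b = np.abs(r - g) / (b + eps)   # similarity of R and G related to B
    return np.stack([rs_r, rs_g, rs_b], axis=-1)

def lbp_8_1(channel):
    """Traditional LBP with P=8, R=1 on one 2-D channel (borders left 0)."""
    h, w = channel.shape
    out = np.zeros((h, w), dtype=np.int64)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]   # 8 neighbors, clockwise
    center = channel[1:-1, 1:-1]
    for m, (dy, dx) in enumerate(offsets):
        nb = channel[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out[1:-1, 1:-1] += (nb >= center).astype(np.int64) << m
    return out

def decoded_lbp_vector(img):
    """Decoded LBP feature vector (64 histograms x 256 bins = 16384 dims)
    over the six channels of the combined RSS + RGB representation."""
    rss = rgb_to_rss(img)
    channels = ([rss[..., i] for i in range(3)] +
                [img[..., i].astype(np.float64) for i in range(3)])
    lbp_maps = [lbp_8_1(c) for c in channels]
    h_, w_ = lbp_maps[0].shape
    # 6-bit cross-channel code dM_m for every neighbor position m: (8, H, W)
    code = np.zeros((8, h_, w_), dtype=np.int64)
    for n, lbp in enumerate(lbp_maps):
        code += ((lbp >> np.arange(8)[:, None, None]) & 1) << n
    # Decoded map t reassembles, as an 8-bit pattern, the neighbor bits with code == t.
    hists = np.zeros((64, 256), dtype=np.int64)
    for t in range(64):
        dmap = np.zeros((h_, w_), dtype=np.int64)
        for m in range(8):
            dmap += (code[m] == t).astype(np.int64) << m
        hists[t] = np.bincount(dmap.ravel(), minlength=256)
    return hists.ravel()
```

Given decoded vectors for a training set, the dominant pattern table and the final cLBP features follow directly:

```python
def learn_dominant_patterns(train_vectors, d=900):
    """Dominant pattern table: indices of the d highest-count bins in the
    cumulative histogram H_cum summed over all training images."""
    h_cum = np.sum(train_vectors, axis=0)
    return np.argsort(h_cum)[::-1][:d]

def clbp_features(img, table):
    """cLBP feature vector: the decoded LBP reduced to the learned patterns."""
    return decoded_lbp_vector(img)[table]

# Usage sketch: learn the table on the training set, then extract
# d-dimensional cLBP features for any image before classification.
# train_vectors = np.stack([decoded_lbp_vector(im) for im in train_images])
# table = learn_dominant_patterns(train_vectors, d=900)
# features = clbp_features(test_image, table)
```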
The patterns in cLBP are learned from the content and color information of the training images. Therefore, the proposed cLBP can reduce the dimension of the feature vectors and improve the accuracy of color image recognition.

III. EXPERIMENTAL RESULTS AND ANALYSIS

To verify the effectiveness of the proposed method, four groups of experiments are designed. The experimental settings and the color image datasets used for validation are introduced in subsections A and B. In subsection C, the effect of the feature dimension on the color image recognition ability of the proposed cLBP is discussed. Comparisons with state-of-the-art LBP variants specifically designed for color image recognition are provided in subsection D. Subsection E analyzes the noise robustness of the proposed cLBP in color image recognition. Subsection F validates the color image recognition ability of the proposed cLBP under illumination variation.

A. Experimental Setting

In our experiments, six state-of-the-art LBP variants specifically designed for color image recognition, i.e., LBPC (2018), mdLBP (2016), QLRBP (2015), RGB-OC-LBP (2013), LCVBP (2011) and the LBP of RGB (LBP-RGB, 2002), are used for comparison. In LBP-RGB, the traditional LBP operator is applied to the R, G and B channels of a color image respectively, and the feature histograms of the three LBP feature maps are concatenated for recognition. For LBPC, the plane normal is set to the value given by the authors, i.e., the local average normal, and the reference point is set to the intensity value of the center pixel. Since LBPC can be combined with the LBP of the hue channel and the color histogram to improve recognition accuracy, the combination of LBPC with hue and color histogram, which shows the best performance in color image recognition, is chosen for comparison in our experiments. For QLRBP, following the authors' suggestion, the three weight parameters λ_1, λ_2, λ_3 are set to 1, and the three reference points are set as (1 - μ_11, μ_12, μ_13), (μ_21, 1 - μ_22, μ_23) and (μ_31, μ_32, 1 - μ_33), where μ_mn ∈ [0, 1]. For all of these methods, the parameters P and R are fixed to 8 and 1. Except for QLRBP and LCVBP, the source codes of the LBP variants were provided by the corresponding authors, and the default parameters provided by the authors are adopted to keep consistency with the results reported in the original papers. The linear multi-class support vector machine classifier of "LIBLINEAR" with default parameters is utilized for classification, and 10-fold cross-validation is used to obtain the final classification accuracy.

B. Experimental Datasets

To thoroughly test the performance of the proposed cLBP for color image recognition, twelve public color datasets, divided into four groups, are utilized for validating recognition accuracy and noise robustness: 1) KTH-TIPS, STex-512S and Colored Brodatz, which are color texture datasets; 2) Wang (SIMPLIcity), Corel-10k, FTVL and Coil-100, which are color object datasets; 3) Color FERET and AR face, which are color face datasets. Moreover, to verify the color image recognition ability of the proposed method under illumination variation, Outex-14, ALOI and CUReT, which have been widely used in existing methods, are also employed for validation. A detailed description of these twelve color image datasets is summarized in Table I.
C. Discussion on the Dimension of Features

In this experiment, the classification accuracy of the proposed cLBP with different feature dimensions is evaluated on three of the twelve image datasets above (one each from the texture, object and face groups), to study the effectiveness of the proposed pattern learning framework. The cLBP extracted only from the RGB space (cLBP-RGB), only from the RSS space (cLBP-RSS), and from the combination of the RGB and RSS spaces (cLBP) are compared. Since the dimension of cLBP-RGB and cLBP-RSS is 2048 (without the pattern learning step), the number of learned dominant color-related patterns in cLBP (i.e., D) is varied from 100 to 2000 in increments of 100. The experimental results are shown in Fig. 6. It can be seen from this figure that, for both cLBP-RGB and cLBP-RSS, the recognition accuracy first increases and then decreases with increasing feature dimension, especially on the "Color FERET" dataset, while for cLBP the recognition accuracy first increases and then remains stable. Moreover, the recognition accuracy of cLBP is clearly higher than that of cLBP-RGB and cLBP-RSS, and the recognition accuracy of cLBP-RSS is higher than that of cLBP-RGB on all three datasets. This verifies the two issues that correspond to our main contributions in this paper: 1) an image represented in the RSS color space provides more discriminative information for color image recognition than in the traditional RGB color space; 2) there is feature redundancy in cLBP, and the proposed color-related pattern learning step can refine the discriminative patterns that represent the image's color information efficiently and avoid the curse of dimensionality in color image recognition. It is also worth noting that, in Fig. 6, the recognition accuracy of cLBP increases very slowly or even decreases once the feature dimension reaches 900.

D. Comparison With Existing Methods

In this subsection, the proposed cLBP is compared with the six LBP variants mentioned in Section III.A. For fair comparison, the dimension of the learned features in cLBP is set to D=100, 400 and 900, respectively. Experiments on color texture, object and face image recognition are designed to evaluate the performance of the proposed cLBP comprehensively. First, we conduct color texture image recognition on the "STex-512S", "Colored Brodatz" and "KTH-TIPS" image datasets with the proposed cLBP and the other LBP variants. Table II shows the recognition accuracy of all the comparison methods on these three color texture image datasets. From this table, we can find that the cLBP with D=100 achieves 9.35% higher average recognition accuracy than RGB-OC-LBP, which has almost the same feature dimension, and 11.74% higher average recognition accuracy than QLRBP, which has more than seven times the number of features. When D is increased to 900, the recognition accuracy of cLBP is clearly higher than that of all the other methods; it is almost 20% higher on "STex-512S" than LBPC, which achieved the highest recognition accuracy among the state-of-the-art methods. Second, we test the color object recognition ability of the proposed cLBP and the comparison methods on the "Wang (SIMPLIcity)", "Corel-10k", "FTVL" and "Coil-100" datasets. The recognition accuracy is shown in Table III.
It can be seen from this table that the average recognition accuracy of cLBP with D=100 is higher than that of all the other methods. When D is increased to 900, the recognition accuracy of our proposed method on the "Corel-10k" dataset reaches 74.66%, while the highest recognition accuracy of the state-of-the-art methods on this dataset is only 63.71%. Third, the color face recognition ability of all the comparison methods is evaluated on the "Color FERET" and "AR face" datasets. In this experiment, each face image is divided into 2 × 2 sub-regions, and the feature vectors of the four sub-regions are concatenated as the final feature vector for classification. Table IV shows the recognition accuracy achieved by the proposed cLBP and all the comparison methods. It can be found that the proposed cLBP again achieves higher recognition accuracy than the other methods, especially when its feature dimension is D=900. From the above three experiments, it can be concluded that the proposed cLBP performs excellently in color image recognition with a low feature dimension. The combination of the RSS and RGB spaces in cLBP provides more discriminative patterns than the other methods in color image recognition. Moreover, the learning strategy in the proposed method is effective in selecting the dominant color-related patterns for color image recognition.

E. Noise Robustness of the cLBP

As mentioned above, the calculation of the RSS space can offset the interference of noise. This experiment evaluates the color image recognition ability of the proposed cLBP and the other comparison methods under noisy conditions. In the testing procedure of all methods, the testing images are corrupted by Gaussian noise with signal-to-noise ratios (SNR) varying from 20 dB to 0 dB in decrements of 5 dB, and the feature dimension of cLBP is set to D=900. Fig. 7 shows the recognition accuracy achieved by the different methods on the first three groups of (nine) image datasets. The proposed cLBP has consistently higher recognition accuracy than the other LBP variants under the different noise conditions. When the SNR is decreased to 0 dB (i.e., half noise and half signal), the recognition accuracy of the proposed cLBP on the "KTH-TIPS" image dataset is still up to 80%. To determine whether the noise robustness of the proposed cLBP comes mainly from the RSS space or the RGB space, the cLBP-RGB (D=900) and cLBP-RSS (D=900) used in Section III.C are also compared in this experiment, and the recognition accuracy of these two methods on all image datasets is also provided in Fig. 7. The cLBP-RSS has higher recognition accuracy than cLBP-RGB and lower than cLBP on all nine image datasets. This verifies our claim that the RSS space provides cLBP with high noise robustness for image recognition.

F. Illumination invariance of cLBP

In the experiments designed in Section III.D, some images in the "KTH-TIPS", "Color FERET" and "AR face" image datasets were obtained under illumination variation. Therefore, the experimental results shown in Tables II and IV have preliminarily verified the effectiveness of the proposed cLBP in color image recognition under illumination variation.
In this experiment, three color image datasets, i.e., "Outex-14", "ALOI" and "CUReT", which are specifically constructed for illumination-invariant image recognition and have been widely used in existing methods, are utilized to further validate the recognition ability of the proposed cLBP under illumination variation. The feature dimension of cLBP is fixed to D=900. Table V shows the recognition accuracy achieved on the "Outex-14", "ALOI" and "CUReT" image datasets by the different methods. The recognition accuracy of cLBP on all three image datasets is significantly higher than that of the other methods. These experimental results are consistent with the property analyzed in Section II.A that color images represented in the RSS space are not easily affected by illumination variation.

IV. CONCLUSION

In this paper, a color-related local binary pattern (cLBP), which learns the dominant patterns from the decoded LBP, was proposed for color image recognition. In the proposed method, the relative similarity space (RSS) is first introduced to capture the color similarity between the three channels of a color image. Theoretical analysis shows that the RSS space provides more discriminative information, higher noise robustness and illumination invariance compared with the traditional RGB space. Second, the decoded LBP is employed to describe the color image on the combination of the RSS and RGB color spaces. The decoded LBP provides an excellent way to mine the correlation information between the LBP feature maps corresponding to each color channel. Since the high dimension of the decoded LBP easily causes the curse of dimensionality in classification, a feature learning strategy is then used to learn the dominant color-related patterns, reducing the dimension of the feature vector and further improving the recognition accuracy. Finally, the proposed cLBP was compared with six state-of-the-art LBP variants specifically designed for color image recognition. The experimental results on color texture, object and face image recognition show that the proposed cLBP achieves clearly higher recognition accuracy than the other methods with a low feature dimension under noise-free, noisy and illumination variation conditions. Although the learning strategy employed in the proposed method can reduce the dimension of features and learn the dominant patterns to improve recognition accuracy, the feature dimension is currently fixed manually based on experimental results; how to automatically estimate the feature dimension is left for future work.
Numerical simulation of the three-dimensional flow field of a paddle-spiral ribbon impeller

The impeller is the core component of a stirrer, a device widely applied in many industrial fields; accordingly, increasing effort is devoted to designing and optimizing impeller structures in order to improve the efficiency and reliability of stirrers. In this paper, a paddle-spiral ribbon impeller is introduced, and 3-D numerical simulation is carried out to study the mixing performance of four different combinations of this kind of impeller. In the numerical studies, an unstructured grid and the multiple reference frame (MRF) approach are used; the standard k-ε turbulence model and the mixture model are adopted to simulate the mixing process of the solid-liquid two-phase flow. From the unsteady simulations, the velocity field, mixing time and power consumption are obtained. The results indicate that this kind of impeller is effective for stirring highly viscous fluids. The first combination of the paddle-spiral ribbon impeller shows the best stirring performance.
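As a rough aid to interpreting reported power-consumption results from such simulations, shaft power follows from impeller torque as P = 2*pi*N*T, and the dimensionless power number is Np = P/(rho*N^3*D^5). The sketch below uses made-up values, not data from the paper.

import numpy as np

rho = 1200.0        # slurry density, kg/m^3 (assumed)
N = 60 / 60.0       # impeller speed, rev/s (assumed 60 rpm)
T = 85.0            # torque on the impeller from the CFD, N*m (assumed)
D = 0.9             # impeller diameter, m (assumed)

P = 2 * np.pi * N * T                 # power consumption, W
Np = P / (rho * N**3 * D**5)          # power number
print(f"P = {P:.1f} W, Np = {Np:.2f}")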
/**
 * Places a block at every position occupied by this {@link MultiBlock}.<br>
 * To be called from inside
 * {@link Block#onBlockPlacedBy(World, int, int, int, net.minecraft.entity.EntityLivingBase, net.minecraft.item.ItemStack)}.
 *
 * @param placeOrigin true if the origin block should be placed. The block must already be set
 * @return true if all the blocks could be placed, false otherwise
 */
public boolean placeBlocks(boolean placeOrigin) {
    if (placeOrigin) {
        if (getBlock() == null) {
            MalisisCore.log.error("[MultiBlock] Tried to set multiblock origin at {}, {}, {}, but no block is set.", x, y, z);
            return false;
        }

        if (getBlock().canPlaceBlockAt(world, x, y, z))
            world.setBlock(x, y, z, getBlock(), 0, 3);
        else
            return false;
    }

    ChunkPosition[] listPos = getListPositions();
    // First pass: verify every position is placeable before touching the world.
    for (ChunkPosition pos : listPos) {
        if (pos == null)
            return false;
        if (!getBlock().canPlaceBlockAt(world, pos.chunkPosX, pos.chunkPosY, pos.chunkPosZ)) {
            // Revert the origin block placed above.
            world.setBlockToAir(x, y, z);
            return false;
        }
    }

    IProvider te = TileEntityUtils.getTileEntity(IProvider.class, world, x, y, z);
    if (te == null) {
        MalisisCore.log.error("[MultiBlock] Tried to set multiblock in provider, but no IProvider found at {}, {}, {}", x, y, z);
        return false;
    }
    te.setMultiBlock(this);

    // Second pass: place the blocks and link their tile entities to this multiblock.
    for (ChunkPosition pos : listPos) {
        world.setBlock(pos.chunkPosX, pos.chunkPosY, pos.chunkPosZ, getBlock(), 0, 1);
        te = TileEntityUtils.getTileEntity(IProvider.class, world, pos.chunkPosX, pos.chunkPosY, pos.chunkPosZ);
        if (te != null)
            te.setMultiBlock(this);
    }

    return true;
}
// Copyright 2020-2021 Dolthub, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package expression

import (
	"fmt"

	"github.com/dolthub/go-mysql-server/sql"
)

// Wrapper simply acts as a wrapper for another expression. If a nil expression is wrapped, then the wrapper functions
// as a guard against functions that expect non-nil expressions.
type Wrapper struct {
	inner sql.Expression
}

var _ sql.Expression = (*Wrapper)(nil)

// WrapExpression takes in an expression and wraps it, returning the resulting Wrapper expression. Useful for when
// an expression is nil.
func WrapExpression(expr sql.Expression) *Wrapper {
	return &Wrapper{expr}
}

// WrapExpressions takes in a number of expressions and wraps each one, returning the resulting slice. Useful for when
// an expression in a slice may be nil.
func WrapExpressions(exprs ...sql.Expression) []sql.Expression {
	wrappers := make([]sql.Expression, len(exprs))
	for i, expr := range exprs {
		wrappers[i] = WrapExpression(expr)
	}
	return wrappers
}

// Children implements sql.Expression
func (w *Wrapper) Children() []sql.Expression {
	if w.inner == nil {
		return nil
	}
	return []sql.Expression{w.inner}
}

// Eval implements sql.Expression
func (w *Wrapper) Eval(ctx *sql.Context, row sql.Row) (interface{}, error) {
	if w.inner == nil {
		return nil, nil
	}
	return w.inner.Eval(ctx, row)
}

// IsNullable implements sql.Expression
func (w *Wrapper) IsNullable() bool {
	if w.inner == nil {
		return true
	}
	return w.inner.IsNullable()
}

// Resolved implements sql.Expression
func (w *Wrapper) Resolved() bool {
	if w.inner == nil {
		return true
	}
	return w.inner.Resolved()
}

// String implements sql.Expression
func (w *Wrapper) String() string {
	if w.inner == nil {
		return ""
	}
	return fmt.Sprintf("(%s)", w.inner.String())
}

// Type implements sql.Expression
func (w *Wrapper) Type() sql.Type {
	if w.inner == nil {
		return sql.Null
	}
	return w.inner.Type()
}

// Unwrap returns the wrapped expression, or nil if no expression was wrapped.
func (w *Wrapper) Unwrap() sql.Expression {
	return w.inner
}

// WithChildren implements sql.Expression
func (w *Wrapper) WithChildren(children ...sql.Expression) (sql.Expression, error) {
	if len(children) == 0 {
		return WrapExpression(nil), nil
	} else if len(children) != 1 {
		return nil, sql.ErrInvalidChildrenNumber.New(w, len(children), 1)
	}
	return WrapExpression(children[0]), nil
}
Induction of prominent Th1 response in C57Bl/6 mice immunized with an E. coli-expressed multi-T-cell-epitope EgA31 antigen against Echinococcus granulosus.

The first step in developing an epitope-based vaccine is to predict peptide binding to the major histocompatibility complex (MHC) molecules. We performed computational analysis of the unique available EgA31 sequence to locate positions with appropriate antigenic propensity. T-cell epitopes with the best binding affinity values (lowest 50% inhibitory concentration, IC50) were selected using available prediction servers (ProPred and IEDB), and peptides with 100% population coverage were retained. A DNA fragment corresponding to the furin linker (cleaved by furin, which is enriched in the Golgi apparatus) was inserted between the epitope sequences in a synthetic DNA construct so that the chimeric protein would be cleaved into four separate peptides. Subsequently, the synthetic DNA was cloned into the pGEX4T-1 and pEGFP-N1 vectors, and GST-ChEgA31 was expressed in E. coli strain BL21(DE3). The recombinant protein was detected by western blotting using an HRP-conjugated polyclonal anti-GST antibody. The fusion protein, purified by affinity chromatography, was used to raise antisera in rabbits. Agar gel immunodiffusion indicated the induction of specific antibodies against the multi-epitope antigen in the tested rabbits. A cytokine assay was carried out in C57Bl/6 mice, and cytokine levels were analyzed by sandwich ELISA. Interestingly, production of specific IFN-gamma was prominently higher in mice immunized with GST-ChEgA31 and pEGFP-ChEgA31 (650-1300 pg/ml) compared to the control groups, while no difference was observed in the levels of IL-10 and IL-4 between the immunized and GST control groups. A challenge study with 500 live protoscoleces of Echinococcus granulosus in immunized mice demonstrated a protection level of 50-60%. Based on these results, the chimeric protein appears able to stimulate T-helper 1 (Th1) development and a high level of cell-mediated immunity in mice.
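To make the epitope short-listing step concrete, here is a toy sketch of filtering predicted binders by an IC50-style threshold; the peptides and scores are invented placeholders, not EgA31 predictions from ProPred or IEDB.

# Toy epitope short-listing: keep peptides whose predicted IC50 (nM, lower is
# stronger binding) falls below a chosen cut-off. All values are placeholders.
predictions = {
    "ALSDKFQWE": 23.4,
    "KLMDYAPSV": 480.0,
    "VQIRNTEWL": 35.5,
    "GTFDLKMNP": 1250.0,
}
threshold = 50.0
selected = sorted(pep for pep, ic50 in predictions.items() if ic50 < threshold)
print(selected)   # candidates to join the furin-linked chimeric construct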
Super-resolution imaging method based on a synthetic aperture system

In long-distance imaging systems, one of the main factors limiting the imaging resolution is the size of the imaging lens aperture, which determines the diffraction limit of the optical system. We therefore propose a non-interferometric synthetic-aperture super-resolution reconstruction and optimization method. A camera array is used to collect a series of low-resolution sub-aperture images. Combined with the Fourier ptychography imaging algorithm, the spectrum and the aperture function of the current sub-aperture are updated using an optimization algorithm with an adaptive step size, yielding the high-resolution spectrum of the target under measurement. In the reconstruction process, a simulated annealing algorithm is introduced to correct the positioning error of the sub-apertures, and the optimization algorithm is used to update the sub-aperture estimates, which greatly improves the accuracy of the reconstruction and achieves the theoretical imaging resolution. The method also produces excellent results for complex objects, which verifies its feasibility.
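A minimal sketch of the spectrum-update loop described above may help. It follows the generic Fourier-ptychography recipe (an ePIE-style write-back into the overlapping spectrum region) rather than the authors' exact adaptive-step and annealing scheme; all array sizes and variable names are assumptions.

import numpy as np

def fp_reconstruct(low_res_imgs, centers, hr_shape, pupil, n_iter=10, alpha=1.0):
    """low_res_imgs: measured sub-aperture intensity images (all h x w, h, w even);
    centers: (row, col) position of each sub-aperture in the high-res spectrum;
    pupil: complex aperture function sampled on the h x w grid."""
    spectrum = np.fft.fftshift(np.fft.fft2(np.ones(hr_shape)))  # flat initial guess
    h, w = low_res_imgs[0].shape
    for _ in range(n_iter):
        for img, (cy, cx) in zip(low_res_imgs, centers):
            sl = (slice(cy - h // 2, cy + h // 2), slice(cx - w // 2, cx + w // 2))
            patch = spectrum[sl] * pupil
            field = np.fft.ifft2(np.fft.ifftshift(patch))
            # keep the estimated phase, enforce the measured amplitude
            field = np.sqrt(img) * np.exp(1j * np.angle(field))
            update = np.fft.fftshift(np.fft.fft2(field)) - patch
            # gradient-style update of the overlapping spectrum region
            spectrum[sl] += alpha * np.conj(pupil) * update / (np.abs(pupil).max() ** 2 + 1e-9)
    return np.fft.ifft2(np.fft.ifftshift(spectrum))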
/**
 * Task that updates the head database on an interval.
 */
public class DatabaseUpdateTask implements Runnable {

    @Override
    public void run() {
        // Log the list handed to the callback, i.e. the heads fetched by this
        // update, rather than querying the API again.
        HeadAPI.getDatabase().update(heads -> Log.info("Fetched " + heads.size() + " heads!"));
    }
}
import numpy as np


def analyse_probability_matrix(y_pred, y_true, idx_labels, c):
    """Analyses a class-probability matrix against the ground truth.

    :param y_pred: predicted probability matrix (nsamples, nclasses)
    :param y_true: one-hot ground-truth matrix (nsamples, nclasses)
    :param idx_labels: indices of the labelled samples
    :param c: context object exposing a logger as c.LOG
    :return: overall accuracy in percent
    """
    n, nc = y_true.shape
    c_pred = np.argmax(y_pred, axis=1)
    c_true = np.argmax(y_true, axis=1)
    classes = np.asarray(range(nc))
    nlabels_selected_pr_class = np.bincount(c_pred[idx_labels], minlength=nc)

    # A[i, j] counts samples predicted as class i whose true class is j.
    A = np.zeros((nc, nc), dtype=int)
    for i in classes:  # predicted class
        c_true_i = c_true[c_pred == i]
        for j in classes:  # true class
            A[i, j] = np.sum(c_true_i == j)

    with np.printoptions(formatter={'all': lambda x: "{:6d}".format(x)}):
        c.LOG.info("Labels selected:")
        c.LOG.info("Class       : {} {:>8s}".format(classes, 'total'))
        c.LOG.info("selected (#): {} {:8d}".format(nlabels_selected_pr_class, np.sum(nlabels_selected_pr_class)))
    with np.printoptions(formatter={'all': lambda x: "{:6.2f}".format(x)}):
        c.LOG.info("selected (%): {}".format(nlabels_selected_pr_class / np.sum(nlabels_selected_pr_class) * 100))
    c.LOG.info(" ")
    c.LOG.info("Based on labels selected, the clustering predicted:")
    with np.printoptions(formatter={'all': lambda x: "{:6d}".format(x)}):
        c.LOG.info("Predicted \\ True {} {:>8s}".format(classes, 'total'))
        c.LOG.info("-" * 96)
        for i in classes:
            c.LOG.info("   {}             {} {:8d}".format(i, A[i, :], np.sum(A[i, :])))
        c.LOG.info("-" * 96)
        c.LOG.info("   {:>6s}        {} {:8d}".format('total', np.sum(A, axis=0), np.sum(A)))
    c.LOG.info(" ")

    accuracy = np.sum(c_pred == c_true) / len(c_true) * 100
    c.LOG.info("Accuracy = {}%".format(accuracy))
    return accuracy
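A quick, hypothetical usage example of the function above with dummy data. The `c` argument only needs a `.LOG` attribute with an `info` method, so a plain logging.Logger wrapped in a namespace suffices for a smoke test.

import logging
from types import SimpleNamespace
import numpy as np

logging.basicConfig(level=logging.INFO, format="%(message)s")
c = SimpleNamespace(LOG=logging.getLogger("analysis"))

n, nc = 100, 4
rng = np.random.default_rng(0)
y_true = np.eye(nc)[rng.integers(0, nc, n)]          # one-hot true labels
y_pred = y_true + 0.3 * rng.random((n, nc))          # noisy "probabilities"
idx_labels = rng.choice(n, size=20, replace=False)   # indices of labelled samples

acc = analyse_probability_matrix(y_pred, y_true, idx_labels, c)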
package com.example.observable;

import arez.annotations.ArezComponent;
import arez.annotations.Observable;

@ArezComponent
abstract class AbstractObservablesModel {
  @Observable
  public abstract long getTime();

  public abstract void setTime( long value );
}
package cmd

import (
	"errors"
	"strings"

	dsg "github.com/bwmarrin/discordgo"
	f "github.com/whitman-colm/go-discord"
	"github.com/whitman-colm/go-discord/dat"
)

/* # MessageCreate
 * The world's biggest switch statement
 *
 * This is a very big switch statement used to run commands. It reads all the
 * messages in all the servers it's in, determines which ones are commands, and
 * then works out what the commands mean and takes the appropriate action.
 *
 * Parameters:
 * - s (type *discordgo.Session) | The current running discord session,
 *   (discordgo needs that always apparently)
 * - m (type *discordgo.Message) | The message that's to be acted upon.
 *
 * TODO: See if it can be made so it doesn't have to read every single message
 * ever.
 *
 * TODO: Break this one function up into smaller functions that only run if a
 * user has a certain role
 *
 * NOTE: Please delegate what the command actually does to a function. This
 * method should only be used to determine what the user is actually
 * trying to do.
 */
func MessageCreate(s *dsg.Session, m *dsg.MessageCreate) {
	// The message is checked to see if it's a command and can be run.
	canRunCommand, err := canTriggerBot(s, m.Message)
	if err != nil {
		dat.Log.Println(err.Error())
		dat.AlertDiscord(s, m, err)
		return
	}
	if !canRunCommand {
		return
	}

	// Removing case sensitivity:
	messageSanatized := strings.ToLower(m.Content)

	// The prefix is cut off the message so the commands can be more easily handled.
	var msg []string
	if strings.HasPrefix(m.Content, f.MyBot.Prefs.Prefix) {
		msg = strings.SplitAfterN(messageSanatized, f.MyBot.Prefs.Prefix, 2)
		m.Content = msg[1]
		//TODO: Check if there is a way to use a mention() method of discordgo rather than
		//this string frankenstein
	} else if strings.HasPrefix(m.Content, "<@!"+f.MyBot.Auth.ClientID+">") {
		msg = strings.SplitAfterN(messageSanatized, "<@!"+f.MyBot.Auth.ClientID+">", 2)
		m.Content = strings.TrimSpace(msg[1])
	} else {
		err := errors.New("Message passed 'can run' checks but does not start with prefix:\n" + m.Content)
		dat.Log.Println(err.Error())
		dat.AlertDiscord(s, m, err)
		return
	}
	message := strings.Split(m.Content, " ")

	// Now the message is checked to see if it's a valid command and acted upon.
	for command, action := range Cmd {
		if message[0] == command {
			if action.Perms != -1 {
				perm, err := f.HasPermissions(s, m.Message, m.Author.ID, action.Perms)
				if err != nil {
					dat.Log.Println(err)
					dat.AlertDiscord(s, m, err)
					return
				}
				if !perm {
					s.ChannelMessageSend(m.ChannelID, "Sorry, you do not have permission to use this command.")
					return
				}
			}
			action.Action(s, m)
			return
		}
	}
	if strings.Contains(m.Message.Content, "@") {
		s.ChannelMessageSend(m.ChannelID, "Sorry <@"+m.Message.Author.ID+">, but I don't understand.")
	} else {
		s.ChannelMessageSend(m.ChannelID, "Sorry <@"+m.Message.Author.ID+">, but I don't understand what you mean by \"`"+m.Message.Content+"`\".")
	}
}

/* # Check if user can run command
 * This switch statement makes sure the bot runs when it's triggered and the user has the perms to trigger it.
 * Prevents:
 * - Bot posted something that would trigger itself, possibly creating an infinite loop
 * - Message posted doesn't have the bot's prefix
 * - Command was posted in a channel where the bot shouldn't respond to commands
 * - Bot whitelists channels and the command was run in a channel not on the whitelist.
 * - Users with a blacklisted role from running the bot
 *
 * NOTE: Users who have "admin" roles (according to the bot's json data) or
 * permissions will have the ability to run commands regardless of any
 * other rules
 *
 * NOTE: IF THESE CONDITIONS ARE MET THEN NO ERROR WILL BE SENT TO EITHER DISCORD OR LOGGED.
 * THIS IS BY DESIGN. DON'T CHANGE IT THINKING I WAS JUST LAZY.
 */
func canTriggerBot(s *dsg.Session, m *dsg.Message) (bool, error) {
	if m.Author.Bot {
		return false, nil
	}
	admin, err := f.HasPermissions(s, m, m.Author.ID, dsg.PermissionAdministrator)
	if err != nil {
		dat.Log.Println(err)
		return false, err
	}
	switch true {
	case m.Author.ID == s.State.User.ID:
		return false, nil
	//TODO: look at this stupid line. that seems like it shouldn't work.
	case !strings.HasPrefix(m.Content, f.MyBot.Prefs.Prefix) && !strings.HasPrefix(m.Content, "<@!"+f.MyBot.Auth.ClientID+">"):
		return false, nil
	case admin:
		return true, nil
	case f.Contains(f.MyBot.Perms.BlacklistedChannels, m.ChannelID):
		return false, nil
	case f.MyBot.Perms.WhitelistChannels && !f.Contains(f.MyBot.Perms.WhitelistedChannels, m.ChannelID):
		return false, nil
	}
	for _, b := range f.MyBot.Users.BlacklistedRoles {
		guild, err := f.GetGuild(s, m)
		if err != nil {
			return false, err
		}
		member, err := s.GuildMember(guild.ID, m.Author.ID)
		if err != nil {
			return false, err
		}
		blacklisted := f.Contains(member.Roles, b)
		if blacklisted {
			return false, nil
		}
	}
	return true, nil
}
package runner;

import factory.GuiFactory;
import factory.OsxFactory;
import factory.WinFactory;

/**
 * Application that exercises the Factory Method pattern and the Abstract
 * Factory pattern.
 *
 * @author <NAME>
 */
public class Application {

    /**
     * Main driver.
     *
     * @param args arguments from the command line
     */
    public static void main(String[] args) {
        GuiFactory osxFactory = OsxFactory.getInstance();
        Renderer osxRenderer = new Renderer(osxFactory);
        osxRenderer.render();

        GuiFactory winFactory = WinFactory.getInstance();
        Renderer winRenderer = new Renderer(winFactory);
        winRenderer.render();

        /*
         * Output:
         * This is a MacOSX button.
         * This is a Windows button.
         */
    }
}
"""Application information and default configuration constants."""

from pathlib import Path

import appdirs

APP_NAME = 'nsdu'
AUTHOR = 'ns_tsp_usovietnam'
DESCRIPTION = 'Automatically update and format dispatches.'

DEFAULT_TEMPLATE = '[reserved]'

# Pluggy project names for loader plugins.
DISPATCH_LOADER_PROJ = 'NSDUDispatchLoader'
TEMPLATE_VAR_LOADER_PROJ = 'NSDUTemplateVarLoader'
SIMPLE_BB_LOADER_PROJ = 'NSDUSimpleBBLoader'
CRED_LOADER_PROJ = 'NSDUCredLoader'

# Default directories
default_dirs = appdirs.AppDirs(APP_NAME, AUTHOR)
CONFIG_DIR = Path(default_dirs.user_config_dir)
DATA_DIR = Path(default_dirs.user_data_dir)
LOGGING_DIR = Path(default_dirs.user_log_dir)

NSDU_PATH = Path('nsdu')

# Loader plugin directory path.
LOADER_DIR_PATH = NSDU_PATH / 'loaders'
LOADER_ENTRY_POINT_NAME = 'nationstates-nsdu'

CONFIG_ENVVAR = 'NSDU_CONFIG'
CONFIG_NAME = 'config.toml'
# Default general configuration path for copying to the proper place.
DEFAULT_CONFIG_PATH = NSDU_PATH / CONFIG_NAME

# Logging configuration
LOGGING_PATH = LOGGING_DIR / 'nsdu_log.log'
LOGGING_CONFIG = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'NSDUFormatter': {
            'format': '[%(asctime)s %(name)s %(levelname)s] %(message)s'
        }
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'level': 'INFO',
            'formatter': 'NSDUFormatter',
            'stream': 'ext://sys.stdout'
        },
        'file': {
            'class': 'logging.handlers.RotatingFileHandler',
            'level': 'DEBUG',
            'formatter': 'NSDUFormatter',
            'filename': LOGGING_PATH,
            'maxBytes': 5000000,
            'backupCount': 2
        }
    },
    'root': {
        'level': 'DEBUG',
        'handlers': ['console', 'file']
    }
}

# Category name and code reference.
SUBCATEGORIES_1 = {'overview': '100', 'history': '101', 'geography': '102',
                   'culture': '103', 'politics': '104', 'legislation': '105',
                   'religion': '106', 'military': '107', 'economy': '108',
                   'international': '109', 'trivia': '110', 'miscellaneous': '111'}
SUBCATEGORIES_3 = {'policy': '305', 'news': '315', 'opinion': '325', 'campaign': '385'}
SUBCATEGORIES_5 = {'military': '505', 'trade': '515', 'sport': '525',
                   'drama': '535', 'diplomacy': '545', 'science': '555',
                   'culture': '565', 'other': '595'}
SUBCATEGORIES_8 = {'gameplay': '835', 'reference': '845'}
CATEGORIES = {'factbook': {'num': '1', 'subcategories': SUBCATEGORIES_1},
              'bulletin': {'num': '3', 'subcategories': SUBCATEGORIES_3},
              'account': {'num': '5', 'subcategories': SUBCATEGORIES_5},
              'meta': {'num': '8', 'subcategories': SUBCATEGORIES_8}}
// Copyright 2015 Cloudera Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#include <iostream>
#include <sstream>
#include <stdio.h>
#include <stdlib.h>
#include <gtest/gtest.h>

#include "common/init.h"
#include "service/fe-support.h"
#include "util/mem-info.h"
#include "util/process-state-info.h"
#include "util/test-info.h"

#include "common/names.h"

namespace impala {

TEST(MemInfo, Basic) {
  ASSERT_GT(MemInfo::physical_mem(), 0);
  ASSERT_LT(MemInfo::vm_overcommit(), 3);
  ASSERT_GE(MemInfo::vm_overcommit(), 0);
  ASSERT_GT(MemInfo::commit_limit(), 0);
}

TEST(ProcessStateInfo, Basic) {
  ProcessStateInfo process_state_info;
  ASSERT_GE(process_state_info.GetBytes("io/read_bytes"), 0);
  ASSERT_GE(process_state_info.GetInt("sched/prio"), 0);
  ASSERT_GE(process_state_info.GetInt("status/Threads"), 0);
}

}

int main(int argc, char **argv) {
  ::testing::InitGoogleTest(&argc, argv);
  impala::InitCommonRuntime(argc, argv, true, impala::TestInfo::BE_TEST);
  impala::InitFeSupport();
  return RUN_ALL_TESTS();
}
New Robotic Telescope enclosure concept selection and optimisation

The New Robotic Telescope (NRT) is a robotic, fully autonomous four-metre-class telescope and the first in its size class to utilise a clamshell enclosure. It will be located at the Roque de los Muchachos Observatory (ORM) on La Palma, Canary Islands, Spain. Fast slew time, robotic functionality and reduced dome seeing are the main reasons for the clamshell design; however, cost is also an important factor. The greatest opportunity for cost reduction lies in the movable roof structure of the enclosure, which is at the same time the most complex part to design and the heaviest in the initial concept considerations. To solve a complex optimisation problem covering all limitations, conditions and assemblies, a combination of a generative design approach and machine learning is used. This enables us to overcome two major obstacles. First, we are able to combine multiple models into one optimisation problem that analyses multiple states simultaneously, such as the closed and semi-open states. Second, the machine-learning-based predictive models run much faster than the full analyses, which allows us to explore many more possible design solutions. The structural optimisation results show more than one optimal solution, which is consistent with multi-objective optimisation, since there are trade-offs between mass, capacity utilisation and hydraulic forces. The goal of the structural optimisation was to explore the possible design alternatives before engaging a construction design partner. This allows for more efficient project development, as the design partner can immediately focus on the construction design work without first needing to work out the implications for the telescope.
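The surrogate-model loop described above can be sketched in a few lines: sample a handful of expensive finite-element evaluations, train a fast predictive model, then screen a large design space against it. The design variables, the toy fe_run() formula and all numbers below are assumptions for illustration, not NRT project data.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

def fe_run(x):
    """Stand-in for an expensive FE analysis: returns (mass, utilisation)."""
    t, h, n = x
    mass = 7850.0 * (0.02 * t + 1e-4 * h * n)        # toy mass model, kg
    util = 1.0 / (t * np.sqrt(h * n) / 40.0)         # toy capacity utilisation
    return mass, util

rng = np.random.default_rng(0)
lo, hi = [0.2, 50.0, 4.0], [1.0, 200.0, 12.0]        # plate t, rib h, rib count
X = rng.uniform(lo, hi, size=(200, 3))               # sampled designs
y = np.array([fe_run(x) for x in X])                 # "expensive" evaluations

model = RandomForestRegressor(random_state=0).fit(X, y)
cand = rng.uniform(lo, hi, size=(100_000, 3))        # cheap to score in bulk
pred = model.predict(cand)
ok = cand[pred[:, 1] < 1.0]                          # keep designs within capacity
best = ok[np.argmin(model.predict(ok)[:, 0])]        # minimise predicted mass
print(best)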
/**
 * Triggered when the offset of the start surface header view changes.
 * @param verticalOffset The start surface header view's offset.
 * @param toolbarHeight The height of the start surface toolbar.
 */
public void onStartSurfaceHeaderOffsetChanged(int verticalOffset, int toolbarHeight) {
    if (mStartSurfaceToolbarCoordinator != null) {
        mStartSurfaceToolbarCoordinator.onStartSurfaceHeaderOffsetChanged(verticalOffset);
        updateToolbarLayoutVisibility(toolbarHeight);
    }
}
Clinical value of liver scans.

The contribution of technetium sulfur colloid liver scans to patient care was evaluated in 200 consecutive patients: 100 in a public (Grady) and 100 in a private (Emory University) hospital. The problem-oriented record clearly described the effect of the liver scan on (1) the assessment of the patient's problem(s) and (2) the plans made for its diagnosis or management. For the diagnosis and treatment of 30 patients with intrinsic hepatobiliary diseases, the scan was useful in five but misleading in four; the specific diagnosis was obtained by other methods, which made the liver scan an unnecessary overutilization of a diagnostic procedure in the evaluation of patients with primary liver or biliary diseases. The scan was useful in 95 of 123 patients with extrahepatic malignancy, mostly because chemotherapy protocols required the scans. Only 18% of positive scans had no other clinical evidence of hepatic malignancy, and a definite diagnosis was not made by liver scan in any of the 123 patients. In 47 miscellaneous conditions the scan was useless in 37 (79%); in three of these the scan was misleading and impaired patient care.
Hospital Factory for Manufacturing Customised, Patient-Specific 3D Anatomo-Functional Models and Prostheses

The fabrication of personalised prostheses tailored to each patient is one of the major needs and key issues for the future of several surgical specialties. Moreover, the production of patient-specific anatomo-functional models for preoperative planning is an important requirement in the presence of tailored prostheses, as the surgical treatment must also be optimised for each patient. The presence of a prototyping service inside the hospital would be a benefit for the clinical activity, as its location would allow a closer interaction with clinicians, leading to significant time and cost reductions. However, at present, these services are extremely rare worldwide. Based on these considerations, we investigate enhanced methods and technologies for implementing such a service. Moreover, we analyse the sustainability of the service and, thanks to the development of two prototypes, we show the feasibility of production inside the hospital.

E. Lanzarone, CNR-IMATI, Istituto di Matematica Applicata e Tecnologie Informatiche Enrico Magenes, Milan, Italy (e-mail: ettore.lanzarone@cnr.it); S. Marconi, M. Conti, F. Auricchio, Dipartimento di Ingegneria Civile e Architettura, Università di Pavia, Pavia, Italy; I. Fassi, C. Pagano, G. Pourabdollahian, CNR-STIIMA, Istituto di Sistemi e Tecnologie Industriali Intelligenti per il Manifatturiero Avanzato, Milan, Italy; F. Modica, CNR-STIIMA, Istituto di Sistemi e Tecnologie Industriali Intelligenti per il Manifatturiero Avanzato, Bari, Italy. © The Author(s) 2019, in T. Tolio et al. (eds.), Factories of the Future, https://doi.org/10.1007/978-3-319-94358-9_11

11.1 Scientific and Industrial Motivations, Goals and Objectives

The significant increase of life expectancy over the last decades, made possible by the progress of medical sciences, has generated as a counterpart a higher demand for health care services, as elderly people generally need more intensive medical assistance. In addition, the modern possibilities to treat patients affected by severe diseases are generating new classes of chronic patients who need specific and highly qualified treatments. This new demand for more intensive, advanced and personalised care services is clearly in contrast with the limited budget of national and regional health care systems. Consequently, new models are necessary to guarantee adequate care treatments to each patient. In parallel, new production technologies are boosting the manufacturing of patient-specific solutions. On the one hand, personalised prostheses tailored to the specific patient are becoming crucial for the development of several surgical specialties; on the other hand, patient-specific anatomic models for preoperative planning are essential in the presence of customised prostheses, as the surgical treatment must also be optimised. However, in common practice, most medical products are produced in standard sizes and shapes and then stocked in hospitals, where the product to implant is chosen as the one closest to the patient's anatomy. Thus, it is often necessary to manually adapt the product to the anatomic characteristics of the patient, with the risk of damaging the product and not reaching the optimal size and shape for the patient. This may also determine longer surgery times, with higher costs per patient.
Just recently, we may notice a significant growth of rapid prototyping services dedicated to the medical field, mostly provided by external companies that have established a medical division.[1] However, the presence of a prototyping service inside the hospital would be a benefit for the clinical activity. The location inside the hospital would allow a closer interaction with clinicians during model development, leading to significant time and cost reductions, and to a higher effectiveness of the products. At present, to the best of our knowledge, there are only four services located inside a hospital worldwide: 3D Print Lab@USB (Basel, Switzerland)[2]; RIH 3D Lab (Rhode Island, Providence, RI, US)[3]; Austin Health 3D Medical Printing Laboratory (Austin, Melbourne, Victoria, Australia)[4]; and the 3D and Quantitative Imaging Laboratory (Stanford University School of Medicine, Stanford, CA, US)[5].

[1] Materialise: www.materialise.com/en/industries/healthcare; Zare: www.zare-prototyping.eu/en/medical-division
[2] 3D Print Lab@USB: www.unispital-basel.ch/das-universitaetsspital/bereiche/medizinischequerschnittsfunktionen/kliniken-institute-abteilungen/departement-radiologie/kliniken-institute/klinik-fuer-radiologie-und-nuklearmedizin/3d-print-lab
[3] RIH 3D Lab: www.brown.edu/Research/3DLab
[4] Austin Health 3D Medical Printing Laboratory: www.austin.org.au/page?ID=1839
[5] 3D and Quantitative Imaging Laboratory: http://3dqlab.stanford.edu/

However, though they are located inside the hospital, these laboratories have several limitations in terms of available technologies, as they are equipped with low-cost machines and rely on outsourcing for the complex cases that require high-resolution 3D printers. Moreover, their work is confined to the orthopaedic and maxillo-facial specialties, which are the easiest to manage. In this context, the project Hospital Factory for Manufacturing Customised, Patient-Specific 3D Anatomo-Functional Models and Prostheses (Fab@Hospital) proposed an innovative paradigm, i.e., the production of personalised medical products (e.g., prostheses) in an environment closely integrated with the hospital, through new design approaches and technologies, to guarantee a direct interaction between patients, medical personnel and product manufacturers. This may improve the quality of life of patients, the performance of the health care system, and the competitiveness of manufacturers. The Fab@Hospital paradigm consists of:
- advanced mathematical tools and modelling technologies tailored to manufacture personalised products;
- location of a hospital factory inside or near the hospital;
- production of personalised products (e.g., prostheses) at the hospital factory in a short time, thanks to the combination of innovative technologies and processes.
Besides the products, the hospital factory would produce personalised anatomic models (e.g., reconstructions of vascular districts), which may help the surgeons in studying the treatment in advance by simulating different strategies. To meet these goals, the following scientific and technological objectives were identified:
1. Define new mathematical methods and approaches to build accurate anatomo-functional models from medical images.
2. Define improvements to the existing technologies for personalised products, e.g., additive manufacturing and micro Electrical Discharge Machining (EDM).
3.
Propose innovative process combinations to reduce the production costs, thus supporting a wide diffusion of personalised medical products.
4. Demonstrate the applicability of the new technological approaches and process combinations through the development of two prototypes.
5. Propose new business models for the production of personalised medical products inside or near the hospital.

The rest of the chapter is organised as follows. Section 11.2 overviews literature related to the addressed problems, which are stated in Sect. 11.3. The developed technologies and methodologies are detailed in Sect. 11.4, while the outcomes of the work (in terms of two prototypes, several mathematical tools and a business model structure) are shown in Sect. 11.5.

11.2 State of the Art

Some medical specialties have started to benefit from rapid prototyping in the last few years, especially for preoperative planning purposes. In fact, clinicians may obtain more information from physical objects than from computer virtual models only. Moreover, their educational value in training new surgeons is also recognised. Among the others, the maxillofacial and orthopaedic specialties currently employ physical models to test different solutions, e.g., for the implant of bone fixation plates. Vascular surgery has seen the development of a dedicated rapid prototyping sector. In fact, the benefit of a physical vascular model for planning the implant of stents or vascular prostheses is linked not only to its morphological characteristics, but also to the mechanical properties of the reproduced vascular district. As the surgeon tests the placement and the release of the prosthesis, and evaluates the prosthesis-vessel interaction, the vessel is required to have a behaviour as consistent as possible with the real pathophysiology; thus, compliant models with controlled elasticity are needed. However, vascular applications are mainly at the research level, and very few companies deal with patient-specific silicone vascular models. In all cases, the current production scheme in factories not linked to the hospitals has some drawbacks:
- Anatomo-functional properties. Clinicians do not have the expertise to retrieve functional properties from common medical imaging. At the same time, manufacturers do not have the expertise to translate these properties into suitable production specifications. Consequently, sending the production request to an external factory without a discussion between clinicians and manufacturers about the specifications may reduce the effectiveness of the personalised medical product.
- Production times. The direct interaction between clinicians and manufacturers would avoid loss of information and the need to correct the product, thus speeding up the process. Moreover, production inside or near the hospital would significantly reduce transportation times.
- Costs. Even though the prices of anatomic models for visualisation purposes are lowering, thanks to the progressive spread of prototyping technologies, moving the production inside the hospital would significantly reduce the costs. Moreover, compliant functional models with proper elasticity are still extremely expensive (thousands of euros even for very small vascular districts), and production inside the hospital would make them more affordable.
Our literature analysis focuses on the four topics addressed in this work, i.e., patient-specific anatomo-functional cardiovascular models, patient-specific fixation plates, mathematical tools to support their design, and business models to make their production more efficient.

Cardiovascular models. In vitro analyses of the vascular fluid dynamics may help clinicians in understanding the impact of specific pathologies and devices. In this context, additive manufacturing is playing a crucial role, allowing the production of highly complex geometries at lower costs and in shorter times than with standard subtractive technologies. Thus, additive manufacturing is rapidly spreading in the medical field to produce patient-specific anatomic models. In particular, 3D printed anatomic models have been used to test different operative approaches, or to improve the design process of endovascular devices. In fact, in vitro fluid-dynamic analyses are useful to identify the interaction between the device (e.g., valves, endo-prostheses, stents) and the human vascular system.

Fixation plates and screws. Patient-specific fixation plates and screws are not widely adopted, due to the difficulties in the small-scale production of 3D complex components, usually made of stainless steel (ASTM F-55 and F-56), pure titanium and its alloys (ASTM F-136), and cobalt-chromium-tungsten-nickel alloy (ASTM F-90). In this context, micro-EDM could be a suitable technology for producing customised plates, due to its ability to perform complex and high-precision machining on electro-conductive materials. Being a thermal process, it can be used with considerable success also for the machining of extremely hard and strong materials, including conductive ceramics. Techno-polymers, like PolyEther Ether Ketone (PEEK), have also recently been introduced for the manufacturing of fixation plates. In this case, 3D printing technologies such as Fused Deposition Modelling (FDM) and Stereo Lithography Apparatus (SLA) can be applied.

Mathematical models. Tools for reconstructing the geometrical features of some districts from medical imaging are nowadays widespread and widely used in clinics. However, as for the mechanical properties, commercial tools do not generally include this possibility, which still represents an open research issue.

Business models. There are very few works investigating the business and managerial sides of manufacturing customised medical products. The existing studies, which have been conducted only recently, mainly address the problem from an economic perspective. Some works investigated the economic implications of 3D printing in general, while others focused on evaluating the cost structure and developing cost models for additive manufacturing. Lindemann et al. developed a business model for evaluating the cost of additive manufacturing. Schröder et al. investigated the manufacturing of customised medical devices from a business model perspective with a specific focus on the value chain; they emphasised that interoperability is a significant driver for the efficient manufacturing of customised medical devices, as customisation is not a single-stakeholder process but a multi-actor process that includes suppliers, surgeons and patients.

11.3 Problem Statement and Proposed Approach

Our work includes two main applications.
On the one hand, we employ additive manufacturing (3D printing) to produce anatomo-functional models for the cardiovascular specialty; on the other hand, we exploit micro-EDM and additive manufacturing to produce fixation plates for the orthopaedic specialty. Moreover, our work also involves two supporting activities, i.e., the development of mathematical models and tools for the design of patient-specific products, and the development of appropriate business models to suggest the best management strategies and to prove the benefits of the Fab@Hospital paradigm (i.e., the production inside or near the hospital). All these activities are detailed in the next subsections.

11.3.1 Additive Manufacturing for Cardiovascular Models

The goal is to manufacture deformable vascular models, with realistic mechanical and geometrical properties, to be employed for in vitro analyses. In particular, we focus on benchmark aortic models to test innovative endovascular devices and new surgical procedures. The proposed production approach is based on additive manufacturing in combination with a moulding technique, to produce silicone models featuring both the mechanical and the geometrical properties of the vessel. Ideally, the highest adherence to reality is possible through a fully patient-specific approach, in which all mechanical and geometrical information, along with flows and pressures, is acquired from the specific patient. In the absence of all this information, a less specific approach can be pursued by combining patient-specific information and general knowledge common to several individuals (retrieved from the literature or measured on a significant number of patients).

11.3.2 Micro-EDM and Additive Manufacturing for Fixation Plates

Although customised implants are occasionally used for surgery, they are not oriented to traumatic pathologies. In fact, the treatment of traumatic pathologies requires manufacturing the fixation plate in less than seven days, thus requiring a strong interaction between clinicians and manufacturers (see Fig. 11.1 for the fabrication procedure).
(Fig. 11.1: Procedure for fabricating customised fixation plates.)
The rapid production and the clinician-manufacturer interaction could be easily achieved if the prototyping station were inside the hospital. However, to make such a prototyping station available, the fabrication technology for customised devices should fulfil several constraints, due to the confined space and the controlled environment. Additive manufacturing and micro-EDM fulfil these constraints. On the one hand, complex 3D shapes can be manufactured with micro-EDM on every electro-conductive material (e.g., titanium and surgical steel); on the other hand, FDM and SLA are suitable for the fabrication of polymeric objects with complex shapes at low cost and low environmental impact.

11.3.3 Prediction Models for Patient-Specific Functional Properties

Stochastic tools may support the estimation of patient-specific mechanical properties in several districts, based on non-invasive measurements and patients' characteristics. This is of particular importance in the case of soft tissues (e.g., for the cardiovascular specialty). Thus, we focus on two cardiovascular applications: (i) the estimation of the aortic stiffness and its spatial variations; (ii) the estimation of the ultimate mechanical properties and of the stress-strain characteristics in patients with ascending aorta aneurysm.
Moreover, due to the lack of effective tools to support Finite Element Analysis (FEA) under uncertain parameters and Structural Topology Optimization (STO), which are common issues when dealing with patient-specific problems, we propose an approach for efficiently solving FEA problems in the presence of stochastic parameters or within iterative optimization algorithms.

11.3.4 Business Models

As mentioned above, there are no appropriate business models that can be employed to support the additive manufacturing of patient-specific medical devices. The gap in the literature is even larger when considering the role of hospitals in manufacturing individualised medical products, as hospitals are usually perceived only as end-users of the products. Also, Product-Service System (PSS) oriented business models pay very little attention to applying the concept in health care, and even less to extending the practices of PSS to increase the integration between hospitals and manufacturers of customised medical products. Thus, our goal is to develop a reference structure for business models that can support the Fab@Hospital paradigm.

11.4 Developed Technologies, Methodologies and Tools

In the following, we present the technologies, methodologies and tools developed to address the four problems presented in Sect. 11.3. Moreover, as for additive manufacturing and micro-EDM, we describe the features of the associated prototypes. The first prototype (described in Sect. 11.4.1) is a preoperative model for the cardiovascular specialty, while the second prototype (described in Sect. 11.4.2) consists of a set of fixation plates for the orthopaedic specialty.

11.4.1 Additive Manufacturing for Cardiovascular Models

We relied on a moulding technique to create an aorta benchmark model, where the mould is produced by means of 3D printing technology. We built a flexible and completely parametric model, which is able to adapt in a consistent way to modifications of the structural parameters. In particular, for the prototype, we considered geometrical and mechanical parameters retrieved from the literature; in addition, in the absence of literature data, some geometrical data were obtained from several Computed Tomography (CT) images. The mould is composed of three parts: two outer shells and an inner part that creates the inner lumen of the vessel (Fig. 11.2).
(Fig. 11.2: 3D printed mould of the aorta model, composed of an inner lumen (white) and two outer shells (blue).)
The thickness of the model, given by the distance between the inner and outer moulds, was selected according to the desired punctual compliance. Indeed, starting from the inner lumen geometry, we computed the thickness in order to get the desired compliance. The mould was manufactured using 3D printing technology. We employed the Objet 30 Pro printer (Stratasys, MN, US), which is based on the Material Jetting technology, where layers of photopolymer resin are deployed on a building tray and cured by means of ultraviolet light. We employed the commercial photopolymer VeroWhitePlus RGD835 and the support material FullCure 705, which is necessary to support the parts of the model that do not lie directly on the tray or the underlying layer. After printing, fine post-processing was performed to ensure a smooth finish of all the surfaces of the mould that will be in contact with the silicone, and the mould was finally assembled.
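The thickness sizing just described can be illustrated with a rough sketch. Assuming a thin-walled elastic tube (Laplace law, Poisson effects neglected), the local area compliance is dA/dp ≈ 2πr³/(Et), so a target compliance fixes the wall thickness. The radius and compliance values below are illustrative assumptions, not the actual design data; the modulus is the cured-silicone value reported next.

import numpy as np

# Thin-walled tube approximation (assumption): hoop strain = p*r/(E*t), hence
# dA/dp = 2*pi*r**3 / (E*t) and t = 2*pi*r**3 / (E * C_target).
E = 1.32e6          # elastic modulus of the cured silicone, Pa
r = 0.0125          # local inner radius of the lumen, m (assumed)
C_target = 3.0e-9   # target local area compliance, m^2/Pa (assumed)

t = 2 * np.pi * r**3 / (E * C_target)
print(f"required wall thickness ~ {t * 1e3:.1f} mm")   # ~3.1 mm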
To create the aorta model, we employed the two-part Sylgard 184 Silicone Elastomer (Dow Corning, MI, US) in a ratio of 10 parts of silicone base to 1 part of curing agent by weight. The silicone mixture was poured into the mould, placed into a vacuum chamber to eliminate air, and then left to cure at room temperature (about 23 °C) for 48 h. This two-part ratio and curing temperature were chosen in order to tune the mechanical properties of the silicone; the selected combination leads to a final elastic modulus of 1.32 ± 0.07 MPa. After curing, the mould was removed to get the final silicone model. The resulting prototype, which can be considered a benchmark deformable aortic model, is shown in Fig. 11.3.
(Fig. 11.3: Silicone aorta model after mould removal.)
It is endowed with terminations that can be connected to in vitro circuits by means of pipe junctions. Moreover, the use of transparent silicone allows inner visibility, which is fundamental for several experiments, e.g., endograft deployment or particle tracking. Transparency is also fundamental to exploiting the model to train beginner surgeons and clinicians; by allowing inner visibility, the student is able to look through the vessel at the movements of the endoscopic instruments.

11.4.2 Micro-EDM and Additive Manufacturing for Fixation Plates

Three manufacturing technologies (namely micro-EDM, FDM and SLA) were tested for manufacturing several prototypes made of different materials (metals and polymers). The two additive manufacturing technologies were studied using two geometries and different polymers. The two most appropriate micro-EDM approaches (i.e., milling-EDM and wire-EDM) were tested in terms of material removal rate, to minimise the machining time. Two materials were investigated: titanium, which is largely employed in the medical sector due to its biocompatibility and mechanical characteristics, and the ceramic composite Si3N4-TiN, which is currently used for dental implants. Several tests were carried out to find the best strategy and parameters to achieve the typical features of fixation plates, such as holes and bores. Holes are necessary on plates both for temporary and permanent fixture because, according to the anatomy of the patient and the fracture, different fixation points are required; bores lighten the plate and help assembly during the surgery.
(Fig. 11.4: Fracture plate for osteotomy and arthrodesis in the foot.)
The FDM process was tested with engineering polymers and composites. Tests were carried out to correlate the FDM process parameters with the highest achievable resolution. They were conducted both on ad hoc designed samples and on a few designs of commercial fixation plates, using the printers Sharebot Next Generation (Sharebot, Italy) and S2 (Gimax 3D, Italy). Moreover, to tailor the mechanical properties of the biocompatible polymers, carbon nanotubes (CNT) were added as filler to several polymeric matrices, e.g., Polymethyl Methacrylate (PMMA), Polyoxymethylene (POM) and Polyamide (PA). The mechanical characterisation of these composites was carried out using FDM 3D printed and SLA dog-bone tensile specimens. SLA technology, with the Form 1+ equipment (Formlabs, MA, US), was finally selected as the alternative additive process, in order to compare its performance with FDM in terms of production time, mechanical strength and surface quality.
As for the prototype, micro-EDM was tested on the plate shown in Fig. 11.4, which is inspired by a fixation plate commercialised by Vilex (McMinnville, TN, USA). The plate includes four holes for the screws that lock the plate to the bone, a non-locking lag screw hole, one compression slot and two smaller holes for temporary fixing. Samples were manufactured to prove the customisation methodology, the machining performance and the processability.

A commercial fixation plate for the Rolando fracture (i.e., a fracture of the base of the thumb, see Fig. 11.5a) was selected as the test specimen to assess the additive manufacturing capabilities. The prototype was fabricated in several polymers (both commercial and internally developed) using FDM and SLA. As expected, the two technologies have different potentialities: FDM allowed the use of a greater variety of polymers and composites with tailored properties, while the selection of materials for SLA was much more limited. Finally, a patient-specific fixation plate for the Rolando fracture was designed and fabricated with different polymers (Fig. 11.5b).
(Fig. 11.5: Rolando fracture and commercial fixation plate (a); proposed patient-specific plate (b).)
Figure 11.6 shows some of the produced plates, made using FDM in PA-CNT 4% and SLA. The SLA samples present very smooth surfaces due to the higher resolution of the technology.
(Fig. 11.6: Rolando fracture fixation plates made using FDM (a) and SLA (b).)

11.4.3 Prediction Models to Identify Patient-Specific Functional Properties

As mentioned in Sect. 11.3.3, we focused our activity on three applications for which an adequate solution was still lacking in the literature. The first one deals with a Bayesian estimation approach to assess the stiffness and its spatial variations in a given aortic region, based on CT Angiography (CTA) images acquired over a cardiac cycle. The arterial stiffness was derived by linking the kinematic information from the CTA images with pressure waveforms generated by a lumped-parameter model of the circulation. The proposed approach includes the uncertainty of the input variables and exploits the entire diameter and pressure waveforms over the cardiac cycle. The second application deals with the ascending aorta aneurysm, which is a severe life-threatening condition with asymptomatic rupture risk. We developed an approach to estimate the patient-specific ultimate mechanical properties and the stress-strain characteristics based on non-invasive data. Through a regression model, we built the response surfaces for the ultimate stress and strain, and for the coefficients of the stress-strain characteristics, all as functions of patient data commonly available in the clinical practice. Moreover, due to the lack of effective tools to support FEA under uncertain parameters and STO, which are common problems when dealing with patient-specific cases, we propose an approach for efficiently solving FEA problems in the presence of stochastic parameters or within iterative optimization algorithms. Finally, a relevant issue for studying the mechanical behaviour of biological tissues and structures with computational tools (e.g., FEA) is the uncertainty associated with the model parameters. We addressed the problem of solving the FEA in the presence of uncertain parameters by exploiting functional principal component analysis to keep the computational effort acceptable. Indeed, the approach allows us to construct
an optimal basis of the solution space and to project the full FEA problem onto a smaller space spanned by this basis. The same approach was also used to reduce the computational effort of iterative optimization algorithms for STO.

11.4.4 Business Models

A general structure to configure potential business models for customised manufacturing in health care was developed. The proposed structure is based on the Product-Service System (PSS) concept, considering the morphological box defined by Lay et al. At the same time, it entails some modifications of the roles of the hospital and the machinery supplier. In particular, the proposed model consists of a set of building blocks, i.e., characteristic features that define the main aspects and decision points to be set (Fig. 11.7). For each feature, a number of options were defined, which describe the potential alternatives that can be selected to configure the business model. The features consider six relevant perspectives for a PSS-oriented business model: (i) Location, which refers to the physical production location of the customised medical device; (ii) Operational personnel, which refers to the workforce allocation for production; (iii) Equipment ownership, which describes the property right to use the manufacturing equipment and machinery; (iv) Maintenance, which describes the party responsible for carrying out the maintenance of the equipment; (v) Payment mode, which defines whether the payment is made in a traditional or an alternative way; (vi) Target segment, which clarifies whether the fabricated devices are produced only to serve the internal use of the hospital, or also to be offered and sold to other potential external customers. Business models are thus configured by selecting different options for each characteristic feature; obviously, each configuration defines a particular strategy for the customer and the supplier.
(Fig. 11.7: Structure of the proposed PSS-oriented business model.)
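The basis construction and projection mentioned in Sect. 11.4.3 can be sketched in a few lines: snapshots of full FEA solutions are stacked, an orthonormal basis is extracted (here via SVD, a standard stand-in for the functional principal component analysis used in the chapter), and new systems are solved in the reduced space. All sizes and data below are random placeholders, not results from the project.

import numpy as np

rng = np.random.default_rng(1)
n_dof, n_snap = 2000, 40
snapshots = rng.random((n_dof, n_snap))       # precomputed full FEA solutions

# Orthonormal basis capturing 99.9% of the snapshot "energy"
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
k = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.999)) + 1
B = U[:, :k]                                  # reduced basis, n_dof x k

# A new system K u = f is then solved in the k-dimensional subspace:
K = np.eye(n_dof)                             # placeholder stiffness matrix
f = rng.random(n_dof)
u_r = np.linalg.solve(B.T @ K @ B, B.T @ f)   # small k x k solve
u = B @ u_r                                   # approximate full solution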
Thus, it is often necessary to manually adapt the product to the anatomic characteristics of the patient, with the risk of damaging the product and not reaching the optimal size and shape for the patient. This may also determine longer surgery times, with higher costs per patient. Just recently, we may notice a significant growth of rapid prototyping services dedicated to the medical field, which are mostly provided by external companies that established a medical division. 1 However, the presence of a prototyping service inside the hospital would be a benefit for the clinical activity. The location inside the hospital would allow a closer interaction with clinicians during the model development, leading to significant time and cost reductions, and to a higher effectiveness of the products. At present, to the best of our knowledge, there are only four services located inside the hospital worldwide: 3D Print Lab@USB (Basel, Switzerland) 2 ; RIH 3D Lab (Rhode Island, Providence, RI, US) 3 ; Austin Health 3D Medical Printing Laboratory (Austin, Melbourne, Victoria, Australia) 4 ; 3D and Quantitative Imaging Laboratory (Stanford University School of Medicine, Stanford, CA, US). 5 However, though they are located inside the hospital, these laboratories have several limitations in terms of available technologies, as they are equipped with low cost machines and rely on outsourcing for the complex cases that require high-resolution 3D printers. Moreover, their work is confined to the orthopaedic and maxillo-facial specialties, which are the easiest to manage. In this context, the project Hospital Factory for Manufacturing Customised, Patient-Specific 3D Anatomo-Functional Models and Prostheses (Fab@Hospital) proposed an innovative paradigm, i.e., the production of personalised medical products (e.g., prostheses) in an environment closely integrated with the hospital, through new design approaches and technologies, to guarantee a direct interaction between patients, medical personnel, and product manufacturers. This may improve the quality of life for patients, the performance of the health care system, and the competitiveness of manufacturers. The Fab@Hospital paradigm consists of: advanced mathematical tools and modelling technologies tailored to manufacture personalised products; location of a hospital factory inside or near the hospital; production of personalised products (e.g., prostheses) at the hospital factory in a short time, thanks to the combination of innovative technologies and processes. Besides the products, the hospital factory would produce personalised anatomic models (e.g., reconstructions of vascular districts), which may help the surgeons in studying the treatment in advance by simulating different strategies. To meet these goals, the following scientific and technological objectives were identified: 1. Define new mathematical methods and approaches to build accurate anatomofunctional models from medical images. 2. Define improvements to the existing technologies for personalised products, e.g., additive manufacturing and micro Electrical Discharge Machining (EDM). 3. Propose innovative process combinations to reduce the production costs, thus supporting a wide diffusion of personalised medical products. 4. Demonstrate the applicability of the new technological approaches and process combinations through the development of two prototypes. 5. Propose new business models for the production of personalised medical products inside or near the hospital. 
The rest of the chapter is organised as follows. Section 11.2 overviews literature related to the addressed problems, which are stated in Sect. 11.3. The developed technologies and methodologies are detailed in Sect. 11.4, while the outcomes of the work (in terms of two prototypes, several mathematical tools and a business model structure) are shown in Sect. 11.5.

State of the Art

Some medical specialties have started to benefit from rapid prototyping in the last few years, especially for preoperative planning purposes. In fact, clinicians may obtain more information from physical objects than from computer virtual models only. Moreover, their educational value to train new surgeons is also recognised. Among others, the maxillofacial and orthopaedic specialties currently employ physical models to test different solutions, e.g., for the implant of bone fixation plates. Vascular surgery has seen the development of a dedicated rapid prototyping sector. In fact, the benefit of a physical vascular model for planning the implant of stents or vascular prostheses is linked not only to its morphological characteristics, but also to the mechanical properties of the reproduced vascular district. As the surgeon tests the placement and the release of the prosthesis, and evaluates the prosthesis-vessel interaction, the vessel is required to have a behaviour as consistent as possible with the real pathophysiology; thus, compliant models with controlled elasticity are needed. However, vascular applications are mainly at the research level, and very few companies deal with patient-specific silicone vascular models. In all cases, the current production scheme in factories not linked to the hospitals has some drawbacks:

Anatomo-functional properties. Clinicians do not have the expertise to retrieve functional properties from the common medical imaging. At the same time, manufacturers do not have the expertise to translate these properties into suitable production specifications. Consequently, sending the production request to an external factory without a discussion between clinicians and manufacturers about the specifications may reduce the effectiveness of the personalised medical product.

Production times. The direct interaction between clinicians and manufacturers would avoid loss of information and the need to correct the product, thus speeding up the process. Moreover, the production inside or near the hospital would significantly reduce transportation times.

Costs. Even though the prices of the anatomic models for visualisation purposes are lowering, thanks to the progressive spread of prototyping technologies, moving the production inside the hospital would significantly reduce the costs. Moreover, compliant functional models with proper elasticity are still extremely expensive (thousands of euros even for very small vascular districts) and the production inside the hospital would make them more affordable.

Our literature analysis focuses on the four topics addressed in this work, i.e., patient-specific anatomo-functional cardiovascular models, patient-specific fixation plates, mathematical tools to support their design, and business models to make their production more efficient.

Cardiovascular models. In vitro analyses of the vascular fluid-dynamics may help clinicians in understanding the impact of specific pathologies and devices.
In this context, additive manufacturing is playing a crucial role, allowing the production of highly complex geometries at lower costs and in less time than with standard subtractive technologies. Thus, additive manufacturing is rapidly spreading in the medical field to produce patient-specific anatomic models. In particular, 3D printed anatomic models have been used to test different operative approaches, or to improve the design process of endovascular devices. In fact, in vitro fluid-dynamic analyses are useful to identify the interaction between the device (e.g., valves, endo-prostheses, stents) and the human vascular system.

Fixation plates and screws. Patient-specific fixation plates and screws are not widely adopted, due to the difficulties in the small-scale production of 3D complex components, usually made of stainless steel (ASTM F-55 and F-56), pure titanium and its alloys (ASTM F-136), and cobalt-chromium-tungsten-nickel alloy (ASTM F-90). In this context, micro-EDM could be a suitable technology for producing customised plates, due to the ability to perform complex and high-precision machining on electro-conductive materials. Being a thermal process, it can be used with considerable success also for the machining of extremely hard and strong materials, including conductive ceramics. Also techno-polymers, like PolyEther Ether Ketone (PEEK), have recently been introduced for the manufacturing of fixation plates. In this case, 3D printing technologies such as Fused Deposition Modelling (FDM) and Stereo Lithography Apparatus (SLA) can be applied.

Mathematical models. Tools for reconstructing the geometrical features of some districts from medical imaging are nowadays widespread and widely used in clinics. However, as for the mechanical properties, commercial tools do not generally include this possibility, which still represents an open research issue.

Business models. There are very few works investigating the business and managerial sides of manufacturing customised medical products. The existing studies, which have been conducted only recently, mainly address the problem from an economic perspective. Some works investigated the economic implications of 3D printing in general, while others focused on evaluating the cost structure and developing cost models for additive manufacturing. Lindemann et al. developed a business model for evaluating the cost of additive manufacturing. Schröder et al. investigated the manufacturing of customised medical devices from a business model perspective with a specific focus on the value chain; they emphasised that interoperability is a significant driver for the efficient manufacturing of customised medical devices, as customisation is not a single-stakeholder process but a multi-actor process that includes suppliers, surgeons and patients.

Problem Statement and Proposed Approach

Our work includes two main applications. On the one hand, we employ additive manufacturing (3D printing) to produce anatomo-functional models for the cardiovascular specialty; on the other hand, we exploit micro-EDM and additive manufacturing to produce fixation plates for the orthopaedic specialty. Moreover, our work also involves two supporting activities, i.e., the development of mathematical models and tools for the design of patient-specific products, and the development of appropriate business models to suggest the best management strategies and to prove the benefits of the Fab@Hospital paradigm (i.e., the production inside or near the hospital).
All these activities are detailed in the next subsections.

Additive Manufacturing for Cardiovascular Models

The goal is to manufacture deformable vascular models, with realistic mechanical and geometrical properties, to be employed for in vitro analyses. In particular, we focus on benchmark aortic models to test innovative endovascular devices and new surgical procedures. The proposed production approach is based on additive manufacturing in combination with a moulding technique, to produce silicone models featuring both the mechanical and the geometrical properties of the vessel. Ideally, the highest adherence to reality is possible through a full patient-specific approach, in which all mechanical and geometrical information, along with flows and pressures, are acquired from the specific patient. In the absence of all information, a less specific approach can be pursued by combining patient-specific information and general knowledge common to several individuals (retrieved from the literature or measured on a significant number of patients).

Micro-EDM and Additive Manufacturing for Fixation Plates

Although customised implants are occasionally used for surgery, they are not oriented to traumatic pathologies. In fact, the treatment of traumatic pathologies requires manufacturing the fixation plate in less than seven days, thus requiring a strong interaction between clinicians and manufacturers (see Fig. 11.1 for the fabrication procedure). The rapid production and the clinician-manufacturer interaction could be easily achieved if the prototyping station were inside the hospital. However, to make such a prototyping station available, the fabrication technology for customised devices should fulfil several constraints, due to the confined space and the controlled environment. Additive manufacturing and micro-EDM fulfil these constraints. On the one hand, complex 3D shapes can be manufactured with micro-EDM on every electro-conductive material (e.g., titanium and surgical steel); on the other hand, FDM and SLA are suitable for the fabrication of polymeric objects with complex shapes at low cost and low environmental impact.

Prediction Models for Patient-Specific Functional Properties

Stochastic tools may support the estimation of patient-specific mechanical properties in several districts, based on non-invasive measurements and patient characteristics. This is of particular importance in the case of soft tissues (e.g., for the cardiovascular specialty). Thus, we focus on two cardiovascular applications: (i) the estimation of the aortic stiffness and its spatial variations; (ii) the estimation of the ultimate mechanical properties and of the stress-strain characteristics in patients with ascending aorta aneurysm. Moreover, due to the lack of effective tools to support Finite Element Analysis (FEA) under uncertain parameters and Structural Topology Optimization (STO), which are common problems when dealing with patient-specific cases, we propose an approach for efficiently solving FEA problems in the presence of stochastic parameters or within iterative optimization algorithms.

Business Models

As mentioned above, there are no appropriate business models that can be employed to support the additive manufacturing of patient-specific medical devices. The gap in the literature is even larger when considering the role of the hospitals in manufacturing individualised medical products, as hospitals are usually perceived only as end-users of the products.
Also Product-Service System (PSS) oriented business models pay very little attention to applying the concept in health care, and even less to extending the practices of PSS to increase the integration between hospitals and manufacturers of customised medical products. Thus, our goal is to develop a reference structure for business models that can support the Fab@Hospital paradigm.

Developed Technologies, Methodologies and Tools

In the following, we present the technologies, the methodologies and the tools developed to address the four problems presented in Sect. 11.3. Moreover, as for additive manufacturing and micro-EDM, we describe the features of the associated prototypes. The first prototype (described in Sect. 11.4.1) is a preoperative model for the cardiovascular specialty, while the second prototype (described in Sect. 11.4.2) consists of a set of fixation plates for the orthopaedic specialty.

Additive Manufacturing for Cardiovascular Models

We relied on a moulding technique to create an aorta benchmark model, where the mould is produced by means of 3D printing technology. We built up a flexible and completely parametric model, which is able to adapt in a consistent way to the modifications of the structural parameters. In particular, for the prototype, we considered geometrical and mechanical parameters retrieved from the literature; in addition, in the absence of literature data, some geometrical data were obtained from several Computed Tomography (CT) images. The mould is composed of three parts: two outer shells and an inner part that creates the inner lumen of the vessel (Fig. 11.2). The thickness of the model, given by the distance between the inner and outer moulds, was selected according to the desired punctual compliance. Indeed, starting from the inner lumen geometry, we computed the thickness in order to get the desired compliance. The mould was manufactured using 3D printing technology. We employed the Objet 30 Pro printer (Stratasys, MN, US), which is based on the Material Jetting technology, where layers of photopolymer resin are deposited on a building tray and cured by means of ultraviolet light. We employed the commercial photopolymer VeroWhitePlus RGD835 and the support material FullCure 705, which is necessary to support the parts of the model that do not lie directly on the tray or the underlying layer. After printing, a fine post-processing was performed to ensure a smooth finish of all surfaces of the mould that would be in contact with the silicone, and the mould was finally assembled. To create the aorta model, we employed the two-part Sylgard 184 Silicone Elastomer (Dow Corning, MI, US) in a ratio of 10 parts of silicone base to 1 part of curing agent by weight. The silicone mixture was poured into the mould, placed into a vacuum chamber to eliminate air, and then left curing at room temperature (about 23 °C) for 48 h. Such a two-part ratio and curing temperature were chosen in order to tune the mechanical properties of the silicone; the selected combination leads to a final elastic modulus of 1.32 ± 0.07 MPa. After curing, the mould was removed to get the final silicone model. The resulting prototype, which can be considered a benchmark deformable aortic model, is shown in Fig. 11.3. It is endowed with terminations that can be connected to in vitro circuits by means of pipe junctions. Moreover, the use of transparent silicone allows inner visibility, which is fundamental for several experiments, e.g., endograft deployment or particle tracking.
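The thickness computation mentioned above can be illustrated with a simple closed-form model. The following is only a minimal sketch, assuming a thin-walled, linearly elastic cylindrical segment whose volumetric compliance is C = dV/dP ≈ 2πR³L/(Eh), with Poisson effects and end constraints neglected; the radius, length and target compliance used below are illustrative values, not measurements from the chapter.

import math

def wall_thickness(radius_m, length_m, target_compliance_m3_per_pa, elastic_modulus_pa):
    """Thickness h of a thin-walled elastic tube such that its volumetric
    compliance dV/dP = 2*pi*R^3*L / (E*h) matches the target value."""
    return 2.0 * math.pi * radius_m ** 3 * length_m / (
        elastic_modulus_pa * target_compliance_m3_per_pa)

# Illustrative numbers: a 12.5 mm radius, 10 cm long aortic segment in
# silicone with E = 1.32 MPa (the modulus reported for the cured Sylgard 184
# mixture) and an assumed target compliance of 0.05 cm^3/mmHg for the segment.
R = 0.0125                  # m
L = 0.10                    # m
E = 1.32e6                  # Pa
C = 0.05 * 1e-6 / 133.322   # cm^3/mmHg converted to m^3/Pa

h = wall_thickness(R, L, C, E)
print("wall thickness: %.2f mm" % (h * 1e3))  # about 2.5 mm for these inputs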
Transparency is also fundamental to exploit the model to train beginner surgeons and clinicians; allowing inner visibility, the student is able to look through the vessel at the movements of the endoscopic instruments.

Micro-EDM and Additive Manufacturing for Fixation Plates

Three manufacturing technologies (namely, micro-EDM, FDM and SLA) were tested for manufacturing several prototypes made of different materials (metals and polymers). The two additive manufacturing technologies were studied using two geometries and different polymers. The two most appropriate micro-EDM approaches (i.e., milling-EDM and wire-EDM) were tested in terms of material removal rate, to minimise the machining time. Two materials were investigated: titanium, which is largely employed in the medical sector due to its biocompatibility and mechanical characteristics, and the ceramic composite Si3N4-TiN, which is currently used for dental implants. Several tests were carried out to find the best strategy and parameters to achieve the typical features of fixation plates, such as holes and bores; holes are necessary on plates both for temporary and permanent fixation because their number and position depend on the anatomy of the patient. The FDM process was tested with engineering polymers and composites. Tests were carried out to correlate the FDM process parameters with the highest achievable resolution. They were conducted both on ad hoc designed samples and on a few designs of commercial fixation plates, using the printers Sharebot Next Generation (Sharebot, Italy) and S2 (Gimax 3D, Italy). Moreover, to tailor the mechanical properties of the biocompatible polymers, carbon nanotubes (CNT) were added as filler in several polymeric matrixes, e.g., Polymethyl Methacrylate (PMMA), Polyoxymethylene (POM) and Polyamide (PA). The mechanical characterisation of these composites was carried out using FDM 3D printed and SLA dog-bone tensile specimens. SLA technology, with the equipment Form 1+ (Formlabs, MA, US), was finally selected as an alternative additive process, in order to compare its performance with FDM in terms of production time, mechanical strength and surface quality. As for the prototype, micro-EDM was tested on the prototype shown in Fig. 11.4, which is inspired by a fixation plate commercialised by Vilex (McMinnville, TN, USA). The plate includes four holes for the screws, to lock the plate to the bone, a non-locking lag screw hole, one compression slot and two smaller holes for temporary fixing. Samples were manufactured to prove the customisation methodology, the machining performance and the processability. A commercial fixation plate for the Rolando fracture (i.e., a fracture of the base of the thumb, see Fig. 11.5a) was selected as a test specimen in order to assess additive manufacturing capabilities. The prototype was fabricated in several polymers (both commercial and internally developed) using FDM and SLA. As expected, the two technologies have different potentialities. FDM allowed the use of a greater variety of polymers and composites with tailored properties, while the selection of materials for SLA was much more limited. Finally, a patient-specific fixation plate for the Rolando fracture was designed and fabricated with different polymers (Fig. 11.5b). Figure 11.6 shows some of the produced plates, made using FDM in PA-CNT4% and SLA. SLA samples present very smooth surfaces due to the higher resolution of the technology.

Prediction Models to Identify Patient-Specific Functional Properties
As mentioned in Sect. 11.3.3, we focused our activity on three applications, for which an adequate solution in the literature was still lacking. The first one deals with a Bayesian estimation approach to assess the stiffness and its spatial variations in a given aortic region, based on CT Angiography (CTA) images acquired over a cardiac cycle. The arterial stiffness was derived by linking the kinematic information from the CTA images with pressure waveforms, generated by a lumped parameter model of the circulation. The second application deals with the ascending aorta aneurysm, which is a severe life-threatening condition with asymptomatic rupture risk. We developed an approach to estimate the patient-specific ultimate mechanical properties and the stress-strain characteristics based on non-invasive data. Through a regression model, we built the response surfaces for the ultimate stress and strain, and for the coefficients of the stress-strain characteristics, all as functions of patient data commonly available in clinical practice. Finally, a relevant issue for studying the mechanical behaviour of biological tissues and structures with computational tools (e.g., FEA) is the uncertainty associated with the model parameters. We addressed the problem of solving the FEA in the presence of uncertain parameters by exploiting functional principal component analysis to get acceptable computational efforts. Indeed, the approach allows us to construct an optimal basis of the solution space and to project the full FEA problem into a smaller space spanned by this basis. The same approach was also used to reduce the computational effort of iterative optimization algorithms for STO.
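The chapter does not spell out the implementation of this reduced-basis strategy, so the following is only a minimal, self-contained sketch of the general idea on a toy one-dimensional bar problem; the assembly routine, the random stiffness fields and the energy tolerance are illustrative assumptions, not taken from the project. Solutions computed offline for sampled parameters are compressed into a principal-component basis, and each new FEA instance is then solved in the projected space.

import numpy as np

def assemble_1d_bar(E_elements, n_nodes, L=1.0, load=1.0):
    """Stiffness matrix and load vector of a clamped 1D bar discretised
    with linear elements; E_elements holds one modulus per element."""
    n_el = n_nodes - 1
    h = L / n_el
    K = np.zeros((n_nodes, n_nodes))
    f = np.full(n_nodes, load * h)
    for e in range(n_el):
        k = E_elements[e] / h
        K[e:e + 2, e:e + 2] += k * np.array([[1.0, -1.0], [-1.0, 1.0]])
    # Clamp the first node by removing its row and column.
    return K[1:, 1:], f[1:]

rng = np.random.default_rng(0)
n_nodes, n_samples = 200, 30

# 1) Offline stage: snapshot solutions for sampled stiffness fields.
snapshots = []
for _ in range(n_samples):
    E = np.abs(1.0 + 0.1 * rng.standard_normal(n_nodes - 1))
    K, f = assemble_1d_bar(E, n_nodes)
    snapshots.append(np.linalg.solve(K, f))
S = np.column_stack(snapshots)

# 2) Principal components of the snapshots give the reduced basis Phi.
U, sigma, _ = np.linalg.svd(S, full_matrices=False)
energy = np.cumsum(sigma ** 2) / np.sum(sigma ** 2)
r = int(np.searchsorted(energy, 0.9999)) + 1
Phi = U[:, :r]  # (n_dof x r), with r much smaller than n_dof

# 3) Online stage: solve a new parameter instance in the reduced space.
E_new = np.abs(1.0 + 0.1 * rng.standard_normal(n_nodes - 1))
K, f = assemble_1d_bar(E_new, n_nodes)
u_full = np.linalg.solve(K, f)                              # reference
u_red = Phi @ np.linalg.solve(Phi.T @ K @ Phi, Phi.T @ f)   # projected solve
print("reduced basis size:", r)
print("relative error: %.2e" % (np.linalg.norm(u_red - u_full) / np.linalg.norm(u_full)))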
Business Models

A general structure to configure potential business models for customised manufacturing in health care was developed. The proposed structure is based on the Product-Service System (PSS) concept, considering the morphological box defined by Lay et al.; at the same time, it entails some modifications on the role of the hospital and the machinery supplier. In particular, the proposed model consists of a set of building blocks, i.e., characteristic features that define the main aspects and decision points to be set (Fig. 11.7). For each feature, a number of options were defined, which describe the potential alternatives that can be selected to configure the business model. The features consider six relevant perspectives for a PSS-oriented business model: (i) Location, which refers to the physical production location of the customised medical device; (ii) Operational personnel, which refers to the workforce allocation for production; (iii) Equipment ownership, which describes the property right to use the manufacturing equipment and machinery; (iv) Maintenance, which describes the party responsible for carrying out the maintenance of the equipment; (v) Payment mode, which defines whether the payment is made in a traditional or an alternative way; (vi) Target segment, which clarifies whether the fabricated devices are produced only to serve the internal use of the hospital, or also to be offered and sold to other potential external customers. Business models are thus configured by selecting different options for each characteristic feature; obviously, each configuration defines a particular strategy for the customer and the supplier.

Outcomes

We present the outcomes and the results separately for each addressed problem. In particular, we refer to the prototypes, the mathematical models and the business model.

Aorta Silicone Model

The model was first analysed from a macroscopic point of view: the surfaces are smooth and homogeneous, even if slightly damaged in the thinnest parts. Transparency is also guaranteed. For a quantitative analysis, the silicone model was tested to assess the actual compliance of the mock aorta. A series of CT scans of the model was performed at different inner pressures, using a 64-slice Definition AS CT scanner (Siemens, Germany). To do so, all branches were closed and an inner pressure, ranging from 40 to 220 mmHg, was imposed by means of a sphygmomanometer. Then, all acquisitions were post-processed, performing the segmentation of the air in the aorta lumen with the open source software ITK-Snap. At the end of the segmentation process, a set of labels was obtained, one for each slice of the CT scan, which were interpolated to create the inner volume rendering. Through the inner volume at different pressures, we identified the compliance of the model, expressed as the slope of the volume-pressure curve, here equal to 0.2008 cm³/mmHg. Results show a linear volume-pressure relation that properly replicates the physiology of the aorta, even though the slope of the curve is slightly lower than expected, meaning that the model is more rigid than a physiologic aorta. This small mismatch can be explained by the number of factors that may impair the properties of the silicone mixture. Actually, a fine tuning of the mechanical properties can be performed by acting on the ratio between base and curing agent, i.e., the elastic modulus can be significantly reduced by lowering the amount of the curing agent. Moreover, lowering the curing temperature leads to the same result, even if the decrease is less significant.
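The compliance value quoted above is simply the fitted slope of the measured volume-pressure points. A short sketch of that post-processing step follows; the pressure-volume pairs below are made-up illustrative numbers chosen to give a similar slope, not the actual measurements.

import numpy as np

# Hypothetical segmented lumen volumes (cm^3) at the imposed pressures (mmHg);
# the real values come from the CT segmentation step described above.
pressure_mmHg = np.array([40, 70, 100, 130, 160, 190, 220])
volume_cm3 = np.array([151.9, 158.1, 164.0, 170.2, 176.0, 182.1, 188.0])

# Compliance = slope of the (linear) volume-pressure curve.
slope, intercept = np.polyfit(pressure_mmHg, volume_cm3, deg=1)
print("compliance: %.4f cm^3/mmHg" % slope)  # ~0.20 cm^3/mmHg for these numbers

# A quick linearity check mirrors the observation that the relation is linear.
residuals = volume_cm3 - (slope * pressure_mmHg + intercept)
print("max |residual|: %.3f cm^3" % np.abs(residuals).max())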
Fixation Plates

The SX200 HP micro-EDM machine (Sarix, Switzerland) was equipped with different electrode tools according to the manufacturing strategy. A tungsten carbide cylindrical rod and a copper cylindrical tube, both with a nominal diameter of 0.4 mm, were used for the milling and the drilling operations, respectively. The overall machining time to complete a fracture plate in titanium was equal to 375 min. The same plate was made in a ceramic composite (Si3N4-TiN) using micro-EDM; due to the higher material removal rate, the total machining time was reduced to 245 min. Unfortunately, the machining time is a severe drawback for this technology, which might not be suitable for more complex devices with a higher number of holes and bores. Concerning additive manufacturing, Fig. 11.8 shows the results of the tensile tests performed on different polymers by using dog-bone tensile specimens manufactured with FDM 3D printing and SLA. The poor mechanical performance of the photopolymer with respect to PA can be noticed. The PA-CNT composites slightly increase the maximum yield stress, while they do not seem to influence the Young modulus.

Prediction Models

For the Bayesian estimation of the stiffness and its spatial variations, the efficiency and accuracy of the proposed method were tested on some simulated cases and on a real patient. The proposed approach proved powerful and able to catch regional stiffness variations in the human aorta using non-invasive data. The obtained estimates can also be used for producing patient-specific prostheses and preoperative tools that respect the estimated mechanical properties. As for the estimation of patient-specific ultimate mechanical properties and stress-strain characteristics, we applied the approach to a dataset of 59 patients. The approach was validated, as accurate response surfaces were obtained for both the ultimate properties and the stress-strain coefficients: prediction errors are acceptable, even though a larger patient dataset would be required to stabilise the surfaces, making an effective application in the clinical practice possible. Finally, considering the reduced basis approach to solve FEA problems in the presence of uncertain parameters or for STO, results are promising. We assessed the applicability of the proposed approach on several test cases, obtaining satisfactory results. On the one hand, solving the problem in the reduced space spanned by the functional principal components is computationally effective; on the other hand, very good approximations are obtained by upper bounding the error between the full FEA solution and the reduced one.

Business Models

The generated configurations of the business model were defined by combining different options for each characteristic feature. Three main configurations were developed:

Product-oriented business model, in which the hospital buys the production machinery from a supplier with additional services. In this scenario, the hospital is the manufacturer and the production can take place either inside or near the hospital.

Use-oriented business model, in which the hospital does not acquire any production machinery and the supplier retains the ownership of all equipment. In this configuration, the hospital rents or leases the equipment and installs it in an internal production lab. While the ownership is not transferred to the hospital, the equipment is run by operating personnel of the hospital.

Result-oriented business model, in which the hospital takes a step forward toward collaboration and integration with the supplier. While the fabrication place remains inside or near the hospital and the production takes place under the supervision of the hospital, the supplier is responsible for running the production. The supplier owns the equipment, provides additional maintenance services, and is responsible for running the production through its own personnel. The hospital provides the physical space for the production, and pays for the production of each final product.
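To make the configuration mechanism concrete, the sketch below encodes the six characteristic features as a small morphological box and expresses two of the archetypes above as option selections. The option labels are paraphrased from the descriptions in the text (the full option lists belong to Fig. 11.7), so they are illustrative rather than authoritative.

# Morphological box: each feature maps to its admissible options
# (labels paraphrased from the chapter; the figure lists the full set).
FEATURES = {
    "location":              {"inside hospital", "near hospital", "supplier site"},
    "operational_personnel": {"hospital staff", "supplier staff"},
    "equipment_ownership":   {"hospital", "supplier"},
    "maintenance":           {"hospital", "supplier"},
    "payment_mode":          {"traditional purchase", "rent/lease", "pay per product"},
    "target_segment":        {"internal use only", "internal and external customers"},
}

def configure(**choices):
    """Validates a business-model configuration against the box."""
    for feature, option in choices.items():
        assert option in FEATURES[feature], (feature, option)
    return choices

product_oriented = configure(
    location="inside hospital", operational_personnel="hospital staff",
    equipment_ownership="hospital", maintenance="supplier",
    payment_mode="traditional purchase", target_segment="internal use only")

result_oriented = configure(
    location="inside hospital", operational_personnel="supplier staff",
    equipment_ownership="supplier", maintenance="supplier",
    payment_mode="pay per product", target_segment="internal use only")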
Conclusions and Future Research

In this work, we propose a new paradigm to bring the production of personalised products (e.g., prostheses) inside or near the hospital, i.e., the Fab@Hospital paradigm. Through some relevant examples, we proved the possibility to produce patient-specific products in small factories with production processes that may easily involve clinicians. Moreover, we also validated the approach by interacting with clinicians of several specialties. The major scientific contributions can be summarised as follows:

[...]

9. A regression method based on noninvasive clinical data to predict the mechanical behavior of ascending aorta aneurysmal tissue.
10. Efficient uncertainty quantification in stochastic finite element analysis based on functional principal components.
11. Applying functional principal components to structural topology optimization.
12. Proposal of an innovative business model for customized production in healthcare.
13. Development of a PSS-oriented business model for customized production in healthcare.
14. A new perspective of product-service business models for customized manufacturing in healthcare.

Items 1-4 refer to the additive manufacturing for cardiovascular models; items 5-6 to the fixation plates made using micro-EDM and additive manufacturing; items 7-11 to prediction and mathematical tools; items 12-14 to the business models. Future work will consider the implementation of the Fab@Hospital paradigm in a small hospital factory, to simulate the entire production process from the clinical request up to the final product.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
'''
 Author
 _____________________
|                     |
|   Towhidul Islam    |
|         6T4         |
|_____________________|
'''
# Convert the word to the letter case (upper or lower) that already
# dominates it; ties favour lowercase.
s = input()
upper_count = 0
lower_count = 0
for ch in s:
    if ch.isupper():
        upper_count += 1
    elif ch.islower():
        lower_count += 1
if lower_count < upper_count:
    result = s.upper()
else:
    result = s.lower()
print(result)
/**
 * Container grouping an employee's related records. The class-level
 * annotation configures the cell styling (fill pattern, foreground
 * colour, bold font) and each field's annotation supplies the header
 * name used by the spreadsheet-mapping library this class is written for.
 *
 * @author Joby Wilson Mathews
 */
@Style(fillPattern = 1, fillForegroundColor = 11, font = @Font(bold = true))
public class EmployeeDetails {

    @Data(fieldName = "Employee Details")
    private Employee employee;

    @Data(fieldName = "Department Details")
    private Department department;

    @Data(fieldName = "Salary Details")
    private Salary salary;

    public Employee getEmployee() {
        return employee;
    }

    public void setEmployee(Employee employee) {
        this.employee = employee;
    }

    public Department getDepartment() {
        return department;
    }

    public void setDepartment(Department department) {
        this.department = department;
    }

    public Salary getSalary() {
        return salary;
    }

    public void setSalary(Salary salary) {
        this.salary = salary;
    }
}
NEW YORK (AP) — A new study that analyzed four years’ worth of films found that female-led movies have consistently outperformed those in which men get top billing. The study was conceived by a group formed through the gender-equality initiative Time’s Up, whose members include Amy Pascal, former chairman of Sony Pictures. Earlier academic research has chronicled similar rates of inequality in top-grossing Hollywood releases, as well as the financial benefits of inclusion.
A firing squad is normally composed of several soldiers or law enforcement officers. Usually, all members of the group are instructed to fire simultaneously, thus preventing both disruption of the process by a single member and identification of the member who fired the lethal shot. The prisoner is typically blindfolded or hooded, as well as restrained, although in some cases prisoners have asked to be allowed to face the firing squad without their eyes covered. Executions can be carried out with the condemned either standing or sitting. There is a tradition in some jurisdictions that such executions are carried out at first light, or at sunrise, which is usually up to half an hour later. This gave rise to the phrase “shot at dawn”. In some cases one or more members of the firing squad may be issued a weapon containing a blank cartridge instead of one housing a live round. No member of the firing squad is told beforehand if he is using live ammunition. This is believed to reinforce the sense of diffusion of responsibility among the firing squad members, making the execution process more reliable. It also allows each member of the firing squad to believe afterward that he did not personally fire a fatal shot–for this reason, it is sometimes referred to as the “conscience round”. While an experienced marksman can tell the difference between a blank and a live cartridge based on the recoil (the blank will have lower recoil), there is a psychological incentive to not pay attention and, over time, to remember the recoil as soft. In more recent times, such as in the execution of Ronnie Lee Gardner in Utah in the United States in 2010, a rifleman may be given a “dummy” cartridge containing wax instead of a bullet, which provides a more realistic recoil. (Source: Wikipedia)
// config/initconfig_test.go

// Copyright 2016 The Upspin Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package config

import (
	"bytes"
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"sync"
	"testing"

	"upspin.io/pack"
	"upspin.io/upspin"

	_ "upspin.io/pack/ee"
	_ "upspin.io/pack/plain"
)

func init() {
	inTest = true
}

var once sync.Once

type expectations struct {
	username    upspin.UserName
	keyserver   upspin.Endpoint
	dirserver   upspin.Endpoint
	storeserver upspin.Endpoint
	packing     upspin.Packing
	secrets     string
}

type envs struct {
	username    string
	keyserver   string
	dirserver   string
	storeserver string
	packing     string
	secrets     string
}

var secretsDir string

func init() {
	cwd, _ := os.Getwd()
	secretsDir = filepath.Join(cwd, "../key/testdata/user1")
}

func TestInitConfig(t *testing.T) {
	expect := expectations{
		username:    "<EMAIL>",
		keyserver:   upspin.Endpoint{Transport: upspin.InProcess, NetAddr: ""},
		dirserver:   upspin.Endpoint{Transport: upspin.Remote, NetAddr: "who.knows:1234"},
		storeserver: upspin.Endpoint{Transport: upspin.Remote, NetAddr: "who.knows:1234"},
		packing:     upspin.EEPack,
		secrets:     secretsDir,
	}
	testConfig(t, &expect, makeConfig(&expect))
}

func TestDefaults(t *testing.T) {
	expect := expectations{
		username:  "<EMAIL>",
		keyserver: defaultKeyEndpoint,
		packing:   upspin.EEPack,
		secrets:   secretsDir,
	}
	testConfig(t, &expect, makeConfig(&expect))
}

func TestBadKey(t *testing.T) {
	// "name=" should be "username=".
	const config = `name: <EMAIL>
packing: ee
keyserver: inprocess
dirserver: inprocess
storeserver: inprocess`
	_, err := InitConfig(strings.NewReader(config))
	if err == nil {
		t.Fatalf("expected error, got none")
	}
	if !strings.Contains(err.Error(), "unrecognized key") {
		t.Fatalf("expected bad key error; got %q", err)
	}
}

func TestEnv(t *testing.T) {
	expect := expectations{
		username:    "quux",
		keyserver:   upspin.Endpoint{Transport: upspin.InProcess, NetAddr: ""},
		dirserver:   upspin.Endpoint{Transport: upspin.Remote, NetAddr: "who.knows:1234"},
		storeserver: upspin.Endpoint{Transport: upspin.Remote, NetAddr: "who.knows:1234"},
		packing:     upspin.EEPack,
		secrets:     secretsDir,
	}
	defer func() {
		os.Setenv("upspinusername", "")
		os.Setenv("upspinkeyserver", "")
		os.Setenv("upspindirserver", "")
		os.Setenv("upspinstoreserver", "")
		os.Setenv("upspinpacking", "")
	}()
	config := makeConfig(&expect)
	expect.username = "<EMAIL>"
	os.Setenv("upspinusername", string(expect.username))
	expect.keyserver = upspin.Endpoint{Transport: upspin.InProcess, NetAddr: ""}
	expect.dirserver = upspin.Endpoint{Transport: upspin.Remote, NetAddr: "who.knows:1234"}
	expect.storeserver = upspin.Endpoint{Transport: upspin.Remote, NetAddr: "who.knows:1234"}
	os.Setenv("upspinkeyserver", expect.keyserver.String())
	os.Setenv("upspindirserver", expect.dirserver.String())
	os.Setenv("upspinstoreserver", expect.storeserver.String())
	expect.packing = upspin.EEPack
	os.Setenv("upspinpacking", pack.Lookup(expect.packing).String())
	testConfig(t, &expect, config)
}

func TestBadEnv(t *testing.T) {
	expect := expectations{
		username:    "<EMAIL>",
		keyserver:   upspin.Endpoint{Transport: upspin.InProcess, NetAddr: ""},
		dirserver:   upspin.Endpoint{Transport: upspin.Remote, NetAddr: "who.knows:1234"},
		storeserver: upspin.Endpoint{Transport: upspin.Remote, NetAddr: "who.knows:1234"},
		packing:     upspin.EEPack,
	}
	config := makeConfig(&expect)
	os.Setenv("upspinuser", string(expect.username)) // Should be upspinusername.
	_, err := InitConfig(strings.NewReader(config))
	os.Unsetenv("upspinuser")
	if err == nil {
		t.Fatalf("expected error, got none")
	}
	if !strings.Contains(err.Error(), "unrecognized environment variable") {
		t.Fatalf("expected bad env var error; got %q", err)
	}
}

func TestNoSecrets(t *testing.T) {
	expect := expectations{
		username: "<EMAIL>",
		packing:  upspin.EEPack,
		secrets:  "none",
	}
	r := strings.NewReader(makeConfig(&expect))
	cfg, err := InitConfig(r)
	if err != ErrNoFactotum {
		t.Errorf("InitConfig returned error %v, want %v", err, ErrNoFactotum)
	}
	if cfg != nil && cfg.Factotum() != nil {
		t.Errorf("InitConfig returned a non-nil Factotum")
	}
}

func TestEndpointDefaults(t *testing.T) {
	config := `
keyserver: key.example.com
dirserver: remote,dir.example.com
storeserver: store.example.com:8080
secrets: ` + secretsDir + "\n"
	expect := expectations{
		username:    "<EMAIL>",
		packing:     upspin.EEPack,
		keyserver:   upspin.Endpoint{Transport: upspin.Remote, NetAddr: "key.example.com:443"},
		dirserver:   upspin.Endpoint{Transport: upspin.Remote, NetAddr: "dir.example.com:443"},
		storeserver: upspin.Endpoint{Transport: upspin.Remote, NetAddr: "store.example.com:8080"},
	}
	testConfig(t, &expect, config)
}

func makeConfig(expect *expectations) string {
	var buf bytes.Buffer
	if expect.username != "" {
		fmt.Fprintf(&buf, "username: %s\n", expect.username)
	}
	var zero upspin.Endpoint
	if expect.keyserver != zero {
		fmt.Fprintf(&buf, "keyserver: %s\n", expect.keyserver)
	}
	if expect.storeserver != zero {
		fmt.Fprintf(&buf, "storeserver: %s\n", expect.storeserver)
	}
	if expect.dirserver != zero {
		fmt.Fprintf(&buf, "dirserver: %s\n", expect.dirserver)
	}
	fmt.Fprintf(&buf, "packing: %s\n", pack.Lookup(expect.packing))
	if expect.secrets != "" {
		fmt.Fprintf(&buf, "secrets: %s\n", expect.secrets)
	}
	return buf.String()
}

func saveEnvs(e *envs) {
	e.username = os.Getenv("upspinusername")
	e.keyserver = os.Getenv("upspinkeyserver")
	e.dirserver = os.Getenv("upspindirserver")
	e.storeserver = os.Getenv("upspinstoreserver")
	e.packing = os.Getenv("upspinpacking")
	e.secrets = os.Getenv("upspinsecrets")
}

func restoreEnvs(e *envs) {
	os.Setenv("upspinusername", e.username)
	os.Setenv("upspinkeyserver", e.keyserver)
	os.Setenv("upspindirserver", e.dirserver)
	os.Setenv("upspinstoreserver", e.storeserver)
	os.Setenv("upspinpacking", e.packing)
	os.Setenv("upspinsecrets", e.secrets)
}

func resetEnvs() {
	var emptyEnv envs
	restoreEnvs(&emptyEnv)
}

func TestMain(m *testing.M) {
	var e envs
	saveEnvs(&e)
	resetEnvs()
	code := m.Run()
	restoreEnvs(&e)
	os.Exit(code)
}

func testConfig(t *testing.T, expect *expectations, configuration string) {
	config, err := InitConfig(strings.NewReader(configuration))
	if err != nil {
		t.Fatalf("could not parse config %v: %v", configuration, err)
	}
	if config.UserName() != expect.username {
		t.Errorf("name: got %v expected %v", config.UserName(), expect.username)
	}
	tests := []struct {
		expected upspin.Endpoint
		got      upspin.Endpoint
	}{
		{expect.keyserver, config.KeyEndpoint()},
		{expect.dirserver, config.DirEndpoint()},
		{expect.storeserver, config.StoreEndpoint()},
	}
	for i, test := range tests {
		if test.expected != test.got {
			t.Errorf("%d: got %s expected %v", i, test.got, test.expected)
		}
	}
	if config.Packing() != expect.packing {
		t.Errorf("got %v expected %v", config.Packing(), expect.packing)
	}
}
It’s hard to determine what the best part of June in Minot is. Is it the weather? Is it the outdoor opportunities for fun now that the weather is more moderate? Is it the proximity to the North Dakota State Fair? With all these things on many people’s minds, following are a few thoughts from life around the Minot Daily News over the past few weeks. While statewide and Ward County voter participation numbers were poor, Minot’s weren’t bad relative to the past few elections. Election Day traffic at the Auditorium appeared brisk, even busy, at many times of the day on Tuesday. For a news entity, this is very rewarding to see. Every day of the year, our news team works diligently to bring the community news of what is going on politically, since it is these things that have the most impact on our everyday lives. So, when the public is engaged enough to come out and vote, it demonstrates that the public is taking responsibility for its own future, for making the decisions on how to shape the community’s future. Whether voters are our readers or not, public engagement is extremely rewarding, and studies around the nation show that newspaper readers tend to be voters. While we hope that our coverage of local issues inspired people to vote, it is good to see improved voter numbers in any event. A successful election isn’t qualified just by voter participation, though. It is also about the candidates. Minot enjoyed such a good assortment of candidates this year that it must have been a tough decision for many. Shoot, well-qualified candidates for different offices, almost certain to be good public servants, lost or came in second or third place in primaries. This was one of the best casts of candidates in recent memory, and every one of them should be proud of their campaign efforts and their willingness to serve. On behalf of the entire paper, and as a personal note, I want to thank all of those people who called, wrote or approached MDN staff with thanks for the four candidate forums we sponsored. In total, almost a thousand people participated in the forums. While we are grateful for the compliments, I want to remind everyone that we had partners this year, and they were integral to our success. These included series-wide partner the Minot Chamber of Commerce, and the venue partners – Roosevelt Park Zoo, Minot State University, the State Fair Center and our good friends at the Grand Hotel. These hosts enabled us to spread our events around and bring them out to the community, while showcasing some of the great venues doing good things for the community. Minot Daily News has recommitted itself to public and community events this year. We enjoy those events that allow us to bring programs to the community and to spend time meeting and speaking with readers and residents. We’ve already been in negotiations to bring two or three events to town in the late summer or fall and hope to be able to announce some details soon. Watch the news pages or this column space for information. Our events should be a lot of fun and will definitely be of value to the community. We expect this to be a long-term commitment, with returning and new events in the years ahead. MDN’s Artist Profile and Craftsperson Profile features continue, and we are always looking for new regional residents to spotlight. I am really pleased that we have heard from enough artists to feature 2-4 new creative people a month for more than a year, with a few still to come.
It’s been particularly gratifying to feature so many young artists, and we hope our small effort helps these creative artists reach broader audiences. There are so many good artists in our region. Personally, I have a dozen pieces of work from North Dakota artists around my home or waiting to be installed. N.D. artists have such diverse perspectives, such interesting eyes for the beauty of our state, that it is a delight to collect pieces of art. Here’s something new art fans don’t always understand: you don’t have to be rich (hello! I work in newspapers) to begin collecting local art. It’s rewarding, you easily develop a unique and local collection, it isn’t costly, and you support young and aspiring artists. That said, we need more artists to continue our features! Besides visual artists, I am looking for performing artists to feature. I am shocked not to have heard from more vocalists, musicians, actors, directors, etc. There aren’t that many opportunities to reach local residents and introduce yourself other than the cover of our Arts section, and I would really like to talk music and theater with people way more talented than me! If you fall into that category, or are a visual artist we have yet to profile, please drop me a line! Regardless of your topic, concern or opinion, please feel free to reach out to me directly. As always, our interactive communication is the most inspiring and rewarding part of what we do. And that means something. Because at MDN, every day is a new adventure, with a new storyline, and that makes us love what we do.
Financing of healthcare in 2021 from the federal budget: priorities within the national project "Healthcare" in the context of the fight against the coronavirus infection Covid-19

An analysis of the upcoming financing of healthcare from the federal budget shows that in 2021-2022, despite the difficult financial situation, the volume of financial support for the industry will continue to grow. These parameters are all the more important because the draft budget for the period under review assumed a slight reduction in funding compared to the approved figures of the federal budget for 2020. This should help to ensure the stable functioning of health care.
Apple released a beta of the iPhone OS 3.2 SDK to developers last week so they can get a jump on making existing or new apps ready for the iPad. That version of the iPhone OS is made specifically for the iPad, and, as developers comb through the APIs, resources, and function calls, they are finding references to capabilities Steve Jobs never mentioned during the device's unveiling. We have already heard about how the Contacts app contains a full UI for taking photos, suggesting some kind of camera hardware was at least considered during the design stage of the iPad. Details of what appears to be a fairly complete video conferencing or video calling implementation are also contained within iPhone OS 3.2. Sources for Engadget turned up references to functions for accepting or declining a video conference, mirror-imaging a video feed (useful for webcams), and running a video call full-screen or within a pop-over view. There is certainly support for the idea that video conferencing could be a killer app for the iPad if it had some kind of front-facing camera. The numerous references to functions that require a camera could indicate a number of different scenarios. Apple may have considered adding camera hardware and later decided to scrap it for cost or ergonomic reasons. It may even plan to add camera hardware at a later date, or offer an add-on camera that connects via Bluetooth or the dock connector. Since the iPhone OS is also used on the iPhone, it may even point to video conferencing functionality being added to a future hardware revision as well. Also tucked away inside the SDK are a number of references to phone functions, including making calls and sending SMS and MMS messages. Again, these could be vestigial references to functionality meant for the iPhone. Apple has referred to the 3G hardware in the 3G-equipped iPad model as "data only," but it seems likely the hardware could also support these functions if Apple arranged for carriers to support them. So far, though, the announced plans for 3G are for data services only. We know that there will be a shared storage area for files so they can be shared among applications and easily synced to a desktop. iPhone OS 3.2 is also, according to Engadget's sources, capable of supporting file downloads and local storage for linked files in Mobile Safari. File uploads from Mobile Safari also appear to be in the prototype stage, though there doesn't appear to be any obvious way to access the file system directly. As indicated by the demonstration of iWork applications, iPad will have far more advanced text-handling features than current iPhone OS versions running on the iPhone have had. Along with CoreText APIs for displaying text, the iPad is expected to have a proper spell checker with multiple dictionaries and user-added words, as well as grammar checking. There is also a prototype of a "handwriting" keyboard in iPhone OS 3.2, which may use some combination of Inkwell and FingerWorks technologies; we may yet see stylus input for the iPad after all. The iPad can be connected to external displays, supporting 1024x768 pixels via a VGA adapter, 480p and 576p with a component cable, and 480i and 576i with a composite cable. However, the SDK has functions for drawing objects to a connected display separately from the main display, instead of merely mirroring the internal display. This would allow an application to have a control interface on the iPad itself, while outputting other video to the connected projector or other device. 
For instance, imagine having a "presenter" view on the iPad while Keynote slides are sent to a projector, or an iTunes controller to pick tracks while a visualizer is output to your HDTV. Finally, we have already heard word that Apple plans to enable the iPad to print to networked printers. The current 3.2 SDK beta includes APIs for converting to PDF, and at least rudimentary early work on a printing API. There also appears to be fairly robust capabilities for accessing Bluetooth and USB devices, but it's not clear if the support is merely there to enable access to just Apple Wireless Keyboards and the Camera Connection Kit, or if the support can be leveraged for other devices such as USB storage, for instance. What developers are finding in iPhone OS 3.2 may point to features that Apple hasn't mentioned so far—60 days is a long time to finalize features that weren't finished enough to demo or discuss during the iPad event. However, it's also possible that these references in the current SDK merely point to features that are being worked on for a future iPhone OS 4.0 update; Jobs told Apple employees at a recent town hall meeting that the next iPhone will be an "A+ update." Since Apple has made software the focus of its mobile devices, however, these hints in the SDK are a good indication that Apple has plenty in store for the future of its mobile offerings.
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

/**
 * Counts how many of the first n scores qualify: a score qualifies if it
 * is at least the k-th place score and strictly positive. The scores are
 * assumed to arrive in non-increasing order, so the scan can stop at the
 * first score below the cutoff.
 */
public class A_VK {
    public static void main(String[] args) throws IOException {
        BufferedReader r = new BufferedReader(new InputStreamReader(System.in));
        String[] input1 = r.readLine().split(" ");
        int n = Integer.parseInt(input1[0]);
        int k = Integer.parseInt(input1[1]);
        String[] input2 = r.readLine().split(" ");
        int value = Integer.parseInt(input2[k - 1]); // the k-th place score
        int result = 0;
        for (int i = 0; i < n; i++) {
            int score = Integer.parseInt(input2[i]);
            // The list is non-increasing, so nothing after this can qualify.
            if (score < value || score < 1) {
                break;
            }
            result++;
        }
        System.out.println(result);
    }
}
Pediatric urological manpower report. Pediatric Urological Manpower Committee of the American Association of Pediatric Urology. The American Association of Pediatric Urology initiated a Pediatric Urological Manpower Study in 1991. A 24-question survey was distributed to the members of the Society of Pediatric Urology and the American Academy of Pediatrics Section on Urology. The objective of the questionnaire was to obtain information related to fellowship training, regional distribution of pediatric urologists, and practice patterns and attitudes. As of December 31, 1991, 345 questionnaires were distributed, and 244 (71%) were completed and entered into a computer program. The number of pediatric urologists was evenly distributed among 3 consecutive 10-year age groups ranging between age 31 and 60 years. The majority (78%) of urologists practicing 100% pediatric urology were between 31 and 50 years old. Approximately 60% of the responders practiced full-time (100%) pediatric urology, and 59% of this group were university based. Pediatric urologists were practicing in 42 states and the District of Columbia. Based upon the United States Department of Commerce 1990 census, the number of pediatric urologists practicing in each state in relation to the total pediatric (less than 18 years old) population was determined. The number of pediatric urology fellowships has steadily increased since the mid 1950s. Currently, more than 10 fellows are trained annually. Of the 172 responders practicing at least 75% pediatric urology, 24% indicated that practice was "too busy" and 53% indicated that practice was "just right." Approximately 44% of the responders were considering adding a partner: 21 indicated that they planned to add a partner in 1 year, 65 in 5 years and 10 in 10 years. Hopefully, the Pediatric Urological Manpower Study will serve as a useful instrument for assessing pediatric practice patterns and training needs in the United States, thereby enhancing the quality of urological care for children.
Milk energy output in Swiss mice throughout the first, second, third and fourth lactation events

SUMMARY Most studies on the factors limiting sustained energy intake (SusEI) during the peak lactation period have been performed in females at the 1st lactation event. However, an inconsistent change in SusEI is observed between the 1st and 2nd lactation events. Thus, the limits to SusEI may be associated with reproductive experience, but the effects of reproductive experience on SusEI or reproductive output remain unclear. Here, food intake, reproductive output, suckling behaviour and serum prolactin levels were measured in female Swiss mice throughout the 1st, 2nd, 3rd and 4th lactation periods. Asymptotic food intake was significantly elevated during the 2nd lactation period relative to that observed during the 1st lactation period. Females in the 2nd lactation period exported significantly more energy in milk than those in the 1st lactation event and consequently raised larger litters that were heavier at weaning. This was inconsistent with the prediction of the peripheral limitation hypothesis, but also did not provide support for the heat dissipation limitation hypothesis. Neither food intake nor reproductive output, indicated by litter size, litter mass and milk energy output (MEO), was different between the 1st, 3rd and 4th lactation events. Differences in suckling behaviour and serum prolactin levels were not significant between the four lactation events. Correlations of prolactin levels with asymptotic food intake, MEO and mammary gland mass were only observed in females during the 1st lactation period. This may suggest that prolactin is not a key factor in stimulating milk production when the mammary glands work at their maximum during the peak lactation period.
# coding=utf-8
from webcam_stream import webcam_stream
from binarizar_hsv import binarizar_hsv
from direccion import direccion
from perspectiva import perspectiva
from matplotlib import pyplot as plt
import cv2
import numpy as np
import toolbox

def main():
    """ Main program """
    print "*************************************"
    print "Line following in OpenCV"
    print "for AGVs\n"
    print "Author: ama0114\n"
    print "*************************************"
    stream = get_url_stream()
    persp = perspectiva()
    bin_hsv = binarizar_hsv()
    menu(stream, persp, bin_hsv)

def get_url_stream():
    """ Checks that we can connect to the URL requested from the user,
    and returns the stream object. """
    ex = True
    while ex:
        try:
            url = raw_input("Enter the video server URL: ")
            stream = webcam_stream(url + "/shot.jpg")
            stream.get_video_stream(0)
            ex = False
        except Exception:
            print("Error: could not connect to the URL: " + url)
    return stream

def crear_marco_comparacion(img):
    """ Creates a matplotlib window to display 4 images simultaneously.
    Returns the 4 axes the images will be drawn on.
    Parameters
    - img: an image with the same resolution as the images that will be
      drawn later.
    """
    fig, ax = plt.subplots(nrows=2, ncols=2)
    image_spaces = []
    for row in ax:
        for col in row:
            image_spaces.append(col.imshow(img, cmap='gray'))
    # Make the plot interactive
    plt.ion()
    return image_spaces[0], image_spaces[1], image_spaces[2], image_spaces[3], fig

def mostrar_comparacion_imagenes(im_spc1, im_spc2, im_spc3, im_spc4,
                                 im1, im2, im3, im4):
    """ Receives four image axes and four images, and draws the images
    onto the axes.
    Parameters:
    - im_spc1, im_spc2, im_spc3, im_spc4: the four image axes.
    - im1, im2, im3, im4: the four images.
    """
    # Assign the images
    im_spc1.set_data(im1)
    im_spc2.set_data(im2)
    im_spc3.set_data(im3)
    im_spc4.set_data(im4)
    # Pause as briefly as possible so the video stays fluid
    plt.pause(0.001)

def check_num_int(num):
    """Checks that a value is an integer and returns the associated int.
    On failure the original value is returned unchanged, so callers keep
    looping until a valid number is entered.
    Parameters:
    -num: the value to check
    """
    try:
        num = int(num)
    except ValueError:
        print "Error. Enter an integer\n"
    return num

def check_num_float(num):
    """Checks that a value is numeric and returns the associated float.
    On failure the original value is returned unchanged.
    Parameters:
    -num: the value to check
    """
    try:
        num = float(num)
    except ValueError:
        print "Error. Enter a decimal number\n"
    return num

def menu(stream, persp, binarizar_hsv):
    """ Main menu of the application. """
    opcion = -1
    while opcion != 0:
        print "******* Menu ***********"
        opcion = raw_input(" 1-Test luminosity binarization \n" +
                           " 2-Test color binarization \n" +
                           " 3-Show processing stages \n" +
                           " 4-Normal execution \n" +
                           " 0-Exit \n")
        opcion = check_num_int(opcion)
        print "************************"
        if opcion == 1:
            print "************************"
            print "Luminosity binarization test, press 's' to exit."
            print "************************"
            binarizar_luminosidad(stream)
            opcion = -1
        elif opcion == 2:
            print "************************"
            print "Color binarization test, press 's' to exit."
            print "************************"
            binarizar_color(binarizar_hsv, stream)
            opcion = -1
        elif opcion == 3:
            menu_aux(stream, persp, binarizar_hsv, muestra_proceso)
            opcion = -1
        elif opcion == 4:
            menu_aux(stream, persp, binarizar_hsv, ejecucion_normal)
            opcion = -1
        elif opcion == 0:
            quit()
        else:
            print "Error, try again\n"

def binarizar_luminosidad(stream):
    """ Menu section that runs a luminosity binarization test. """
    fps_stats = []
    salir = False
    while not salir:
        vid, fps = stream.get_video_stream(0)
        umbral, img_binarizada = toolbox.binarizar_otsu(vid, 255, cv2.THRESH_BINARY_INV)
        cv2.imshow('Original', vid)
        cv2.imshow('Bin_Lum', img_binarizada)
        fps_stats.append(fps)
        if cv2.waitKey(1) & 0xFF == ord('s'):
            imprimir_fps_stats(fps_stats)
            print "Binarization threshold: " + str(umbral)
            cv2.destroyAllWindows()
            salir = True

def binarizar_color(binarizar_hsv, stream):
    """ Menu section that runs a color binarization test. """
    binarizar_hsv.calibrar_color(stream)

def menu_aux(stream, persp, binarizar_hsv, ejecucion):
    """ Sub-menu that lets the user choose between two binarization types
    before applying the rest of the image-processing pipeline. """
    opcion = -1
    while opcion != 0:
        print "************************"
        opcion = raw_input("1-Color binarization\n 2-Luminosity binarization\n 0-Exit\n")
        print "************************"
        opcion = check_num_int(opcion)
        if opcion == 1:
            binarizar_color(binarizar_hsv, stream)
            calcular_coef_angulo_interaccion(persp, stream, binarizar_hsv.binarizar_frame, 1)
            ejecucion(1, binarizar_hsv.binarizar_frame, stream, persp)
        elif opcion == 2:
            def funcion_adaptador(frame):
                """ Adapts the binarization function so it takes a single
                parameter, which allows it to be passed as an argument. """
                umbral, frame = toolbox.binarizar_otsu(frame, 255, cv2.THRESH_BINARY_INV)
                return umbral, frame
            calcular_coef_angulo_interaccion(persp, stream, funcion_adaptador, 0)
            ejecucion(0, funcion_adaptador, stream, persp)
        elif opcion == 0:
            break
        else:
            print "************************"
            print "Error, enter a valid option\n"
            print "************************"

def pinta_indicadores(frame, dir):
    """ Draws the direction indicators onto the image. Returns the image
    with the indicators drawn.
    Parameters
    - frame: image the indicators are drawn on.
    - dir: a direccion object
    """
    centro = [len(frame[0])/2, (len(frame[0])/2)-1]
    rango = [int(dir.rango_seguro_min), int(dir.rango_seguro_max)]
    for i in range(len(frame)-1, len(frame)-10, -1):
        for j in centro:
            frame.itemset((i, j, 0), 0)
            frame.itemset((i, j, 1), 255)
            frame.itemset((i, j, 2), 0)
        for j in rango:
            frame.itemset((i, j, 0), 0)
            frame.itemset((i, j, 1), 0)
            frame.itemset((i, j, 2), 255)
    return frame

def ejecucion_normal(color_stream, funcion_binarizado, stream, persp):
    """ Menu section that performs the normal execution of the program. """
    fps_stats = []
    rango_seguro = -1
    while rango_seguro > 0.3 or rango_seguro < 0:
        rango_seguro = raw_input("Enter the safe steering range\n min=0, max=0.3\n")
        rango_seguro = check_num_float(rango_seguro)
    ang_giro_max = -1
    while ang_giro_max > 90 or ang_giro_max < 0:
        ang_giro_max = raw_input("Enter the vehicle's maximum steering angle [integer]: ")
        ang_giro_max = check_num_int(ang_giro_max)
    dir = direccion(rango_seguro, len(stream.get_frame(1)[0]), ang_giro_max)
    while True:
        vid, fps = stream.get_video_stream(1)
        umbral = 0
        if color_stream == 0:
            vid_bn = cv2.cvtColor(vid, cv2.COLOR_BGR2GRAY)
            umbral, img_binarizada = funcion_binarizado(vid_bn)
        else:
            img_binarizada = funcion_binarizado(vid)
        # Perspective-corrected view
        img_bin_persp = persp.correjir_distorsion_perspectiva(img_binarizada)
        vid_persp = persp.correjir_distorsion_perspectiva(vid)
        bordes_persp = toolbox.obtener_contornos(img_bin_persp, 50, 200)
        tray_persp = toolbox.obtener_trayectoria(bordes_persp)
        # Normal view
        bordes_normal = toolbox.obtener_contornos(img_binarizada, 50, 200)
        tray_normal = toolbox.obtener_trayectoria(bordes_normal)
        texto, angulo = dir.obtener_direccion(tray_normal)
        vid = toolbox.pintar_lineas(vid, tray_normal, [255, 0, 0])
        vid = pinta_indicadores(vid, dir)
        vid_persp = toolbox.pintar_lineas(vid_persp, bordes_persp, [0, 255, 0])
        vid_persp = toolbox.pintar_lineas(vid_persp, tray_persp, [255, 0, 0])
        cv2.putText(vid, texto + " " + str(round(angulo, 2)), (10, 20),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.4, (255, 0, 255), 1)
        cv2.putText(vid_persp, "Lum: " + str(round(toolbox.calcular_luminosidad(vid), 2)), (10, 40),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.4, (255, 0, 255), 1)
        cv2.putText(vid_persp, "Fps: " + str(round(fps, 2)), (10, 20),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.4, (255, 0, 255), 1)
        if color_stream == 0:
            cv2.putText(vid_persp, "Umb: " + str(round(umbral, 2)), (10, 60),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.4, (255, 0, 255), 1)
        cv2.imshow('Guia', vid)
        cv2.imshow('Tray', vid_persp)
        fps_stats.append(fps)
        if cv2.waitKey(1) & 0xFF == ord('s'):
            imprimir_fps_stats(fps_stats)
            cv2.destroyAllWindows()
            break

def calcular_coef_angulo_interaccion(persp, stream, funcion, color_stream):
    """ Sub-menu that calibrates the angle used to correct the perspective
    distortion.
    Parameters:
    - persp: a perspectiva object.
    - stream: a webcam_stream object.
    - funcion: binarization function.
    - color_stream: 0 for luminosity binarization, 1 for color binarization.
    """
    print "************************"
    raw_input("Press enter to compute the perspective-distortion correction coefficient")
    print "Place the template in front of the camera"
    persp.calcular_coef_angulo(stream, color_stream, funcion)
    print "The computed coefficient is: " + str(persp.coef_correcion)
    raw_input("Remove the template and press enter to continue")
    print "************************"

def imprimir_fps_stats(fps_stats):
    """ Prints the fps statistics.
    Parameters:
    - fps_stats: array of numbers.
    """
    print "************************"
    print "Min fps: " + str(min(fps_stats))
    print "Max fps: " + str(max(fps_stats))
    print "Mean fps: " + str(np.average(fps_stats))
    print "************************"

def muestra_proceso(color_stream, funcion_binarizado, stream, persp):
    """ Menu section that shows the process the program performs,
    displaying how the images are transformed at each stage.
    Parameters:
    - color_stream: 0 for luminosity binarization, 1 for color binarization.
    - funcion_binarizado: the function that performs the binarization.
    - stream: a webcam_stream object
    - persp: a perspectiva object
    """
    salir_evt = [False]
    fps_stats = []
    im_spc1, im_spc2, im_spc3, im_spc4, fig = crear_marco_comparacion(stream.get_frame(1))

    def handle_close(evt):
        """ Helper used to catch the close event of the matplotlib window. """
        del salir_evt[:]
        salir_evt.append(True)

    # Register the close handler once, instead of once per frame
    fig.canvas.mpl_connect('close_event', handle_close)
    salir = False
    while not salir:
        # 0 - B/W, 1 - RGB color
        vid, fps = stream.get_video_stream(color_stream)
        # Collect the fps so statistics can be shown on exit
        fps_stats.append(fps)
        if color_stream == 0:
            _, img_correjida = funcion_binarizado(vid)
            img_binarizada = persp.correjir_distorsion_perspectiva(img_correjida)
        else:
            img_correjida = persp.correjir_distorsion_perspectiva(vid)
            img_binarizada = funcion_binarizado(img_correjida)
        bordes = toolbox.obtener_contornos(img_binarizada, 50, 200)
        tray = toolbox.obtener_trayectoria(bordes)
        bordes_tray = bordes + tray
        if color_stream == 1:
            vid = cv2.cvtColor(vid, cv2.COLOR_RGB2BGR)
            img_correjida = cv2.cvtColor(img_correjida, cv2.COLOR_RGB2BGR)
        mostrar_comparacion_imagenes(im_spc1, im_spc2, im_spc3, im_spc4,
                                     vid, img_correjida, img_binarizada, bordes_tray)
        if salir_evt[0]:
            imprimir_fps_stats(fps_stats)
            cv2.destroyAllWindows()
            salir = True
            salir_evt[0] = False

if __name__ == '__main__':
    main()
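For readers without the project's webcam_stream, toolbox and perspectiva helpers, the core per-frame pipeline (grayscale conversion, Otsu binarization, edge detection) can be approximated with plain OpenCV calls. This is a minimal illustrative sketch, not the project's actual implementation: the input image path is hypothetical, and the Canny thresholds simply reuse the (50, 200) values that appear above.

# Minimal sketch of the per-frame pipeline with plain OpenCV,
# approximating toolbox.binarizar_otsu and toolbox.obtener_contornos.
import cv2

frame = cv2.imread("frame.jpg")  # hypothetical input frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Otsu picks the binarization threshold automatically; THRESH_BINARY_INV
# renders the dark guide line as white on a black background.
umbral, binarized = cv2.threshold(
    gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Edge detection with the same hysteresis thresholds used above (50, 200).
edges = cv2.Canny(binarized, 50, 200)

cv2.imshow("binarized", binarized)
cv2.imshow("edges", edges)
cv2.waitKey(0)
cv2.destroyAllWindows()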
Successful repeat ECMO in a patient with AIDS and ARDS

Summary
Veno-venous extracorporeal membrane oxygenation (ECMO) is being more commonly used in patients with acute respiratory distress syndrome (ARDS) due to potentially reversible illnesses. Survival from ARDS using ECMO has been reported even in patients with AIDS. However, the indications for ECMO for ARDS due to immune reconstitution inflammatory syndrome (IRIS) in patients with AIDS are unknown. A 23-year-old man with AIDS and Pneumocystis jirovecii pneumonia was admitted to the intensive care unit with severe ARDS refractory to mechanical ventilator support, requiring ECMO. Although ECMO was discontinued, a second course of ECMO was necessary due to IRIS-associated ARDS, resulting in an excellent patient outcome. This patient's clinical course suggests two important messages. First, ECMO is a reasonable option for the treatment of ARDS even in a patient with AIDS. Second, ECMO may be effective for the treatment of patients with IRIS.

Background
Extracorporeal membrane oxygenation (ECMO) has been shown to be effective in treating patients with acute respiratory distress syndrome (ARDS),1 while the use of ECMO for an irreversible cause is considered contraindicated. Extracorporeal Life Support Organization (ELSO) guidelines suggest that a status predicting a poor outcome despite ECMO should be considered a relative contraindication.2 Previously, the prognosis of patients with AIDS was considered poor, but a recent study showed that mortality in patients with recovered CD4(+) cell counts is not inferior to that of the general population.3 Reports from 2014 showed that patients with AIDS complicated by ARDS were successfully treated with ECMO. Therefore, the indications for ECMO in patients with AIDS should be considered on an individual basis.
Acute respiratory failure in patients with AIDS is associated with various conditions, including infections by Pneumocystis jirovecii, multidrug-resistant bacteria and fungi, which have been successfully treated with ECMO. However, acute respiratory failure can also result from immune reconstitution inflammatory syndrome (IRIS), a paradoxical clinical worsening after the initiation of antiretroviral therapy (ART). While ECMO seems useful, as reported in these case reports,4-6 the indication of ECMO for IRIS-associated respiratory failure was not discussed.
Cawcutt et al reported a patient with AIDS, complicated with ARDS due to probable IRIS,4 who required treatment with ECMO but ultimately died of multiorgan failure. We treated a patient with newly diagnosed AIDS who presented with P. jirovecii pneumonia (PjP) and was subsequently complicated by probable IRIS. The patient experienced two ARDS episodes due to PjP and probable IRIS, both of which were successfully treated with ECMO, resulting in the patient's full recovery.

Case presentation
A 23-year-old man presented with fever (>40°C), dyspnoea and dry cough. He visited a community hospital, where he was found to be hypoxic with an arterial oxygen tension (PaO2) of 58 mm Hg on 15 L/min of oxygen via a mask with reservoir, requiring non-invasive positive pressure ventilation (NPPV) to maintain arterial oxygen saturation. Chest X-ray and chest CT scan showed diffuse bilateral ground-glass opacities. The patient received empirical antibiotics (ceftriaxone and ciprofloxacin) and methylprednisolone 1 g daily for 3 days with no improvement in respiratory status, and was transferred to our hospital.
After admission, the patient's respiratory status further deteriorated, requiring intensive care unit (ICU) admission and endotracheal intubation with ventilator support. The following day, the diagnosis of AIDS was made, with a CD4 count of 8.5 cells/μL and an HIV viral load of 550,000 copies/mL. PCR of the bronchoalveolar lavage fluid was positive for P. jirovecii. Trimethoprim/sulfamethoxazole (TMP/SMX) was initiated for the treatment of PjP. In addition, the antibacterial regimen was changed to meropenem, vancomycin, ciprofloxacin, micafungin and ganciclovir. Methylprednisolone 1 mg/kg per day was continued. Unfortunately, he developed hypoxaemia refractory to mechanical ventilation; arterial blood gas analysis showed persistent hypoxaemia (PaO2 of 48 mm Hg) on 100% fraction of inspired oxygen (FiO2) with a positive end-expiratory pressure of 12 cm H2O on ICU day 3 (figure 1). A decision was made to treat the patient with veno-venous ECMO, which was subsequently initiated with a circuit flow of 4.0 L/min and sweep gas of 3.0 L/min of oxygen (FiO2 of 100%). Simultaneously, ART (tenofovir, emtricitabine and raltegravir) was initiated for the treatment of AIDS. During ECMO, lung-protective ventilation and the treatment of PjP with corticosteroids and TMP/SMX were continued. Blood and sputum cultures suggested no bacterial involvement, and antibiotics (meropenem, vancomycin and ciprofloxacin) were discontinued after 2 weeks. Gradually, chest X-ray and arterial blood gas analysis showed improvement, and he was weaned from a 12-day course of ECMO support on ICU day 15 (figure 2). After removal of ECMO, the respiratory status was stable. Two days after stopping ECMO (12 days after initiating ART), however, the patient developed high fever and acute worsening hypoxia and hypercapnia refractory to increased ventilatory support, and therefore ECMO was reinstituted (figure 3).

Differential diagnosis
Because TMP/SMX had been continued for PjP and bacterial cultures were negative at the reinstitution of ECMO, recurrent worsening of the primary disease or a new bacterial infection was considered less likely. Therefore, we suspected IRIS, a paradoxical clinical worsening after the initiation of ART, as the cause of his deteriorating respiratory status.

Outcome and follow-up
After reinitiation of ECMO, his respiratory status gradually improved despite the development of a pneumothorax. On ICU day 30, ECMO was discontinued (14 days after reinstitution) (figure 4). Thirty days after stopping ECMO (a total of 62 days after admission), the patient was discharged home.

Discussion
We treated a patient with newly diagnosed AIDS and PjP, who survived two ARDS episodes, due to PjP and probable IRIS, using ECMO. ECMO is effective in treating patients with acute respiratory failure due to potentially reversible processes, but there are no standard contraindications.2 Although ELSO guidelines suggest that there are no absolute contraindications to ECMO, a status predicting a poor outcome despite ECMO should be considered a relative contraindication2 (eg, major pharmacological immunosuppression (absolute neutrophil count <0.4x10^9/L)). Davies et al suggested that ECMO is not indicated in patients with AIDS and excluded these patients from their trial in 2009.1 However, Rodger et al reported that mortality in well-controlled patients with HIV infection, who maintained or recovered CD4(+) cell counts of at least 500 cells/μL, is not inferior to that of the general population.3 Thus, the indications for ECMO in patients with AIDS should be considered on an individual basis with respect to the risks and benefits. In fact, patients with ARDS and AIDS have been successfully treated with ECMO. Like those patients, the current case suggests that ECMO is a reasonable option for the treatment of ARDS, even in patients with AIDS.
IRIS is a paradoxical clinical worsening after the initiation of ART in ART-naive patients, and presents with various symptoms depending on the characteristics and sites of the primary opportunistic infection. Low CD4 counts and high HIV-RNA counts are risk factors for the development of IRIS.7 For the diagnosis of IRIS, the following conditions should be excluded: worsening of the primary disease, a new bacterial infection as a complication and an allergic reaction to drugs. In this patient, the CD4 count was not confirmed when the respiratory status deteriorated, although an increasing CD4 count is a clue to establishing the diagnosis of IRIS. However, the other criteria for the diagnosis of IRIS were sufficiently met by the clinical course.
It is uncertain whether ECMO is effective for IRIS-associated ARDS. One retrospective study suggested that patients with AIDS who developed IRIS had a higher risk of death than those who did not.8 A patient with newly diagnosed AIDS, complicated by both PjP and probable IRIS-associated ARDS, did not survive the ICU stay due to multiorgan failure despite successful separation from ECMO.4 However, IRIS is considered a self-limiting condition, depending on the pathogens and organs involved.9 In the current patient, full recovery was achieved. For this reason, using ECMO for the treatment of refractory ARDS in patients with IRIS is a reasonable option. We used ECMO for probable IRIS, and the present patient successfully recovered from respiratory failure and was discharged home. To the best of our knowledge, this is the first report of the successful use of ECMO for IRIS-associated ARDS.

Learning points
► Extracorporeal membrane oxygenation (ECMO) may be indicated for patients with severe acute respiratory failure associated with AIDS.
► Immune reconstitution inflammatory syndrome (IRIS) is a diagnosis of exclusion in patients with acute respiratory worsening after the initiation of ART.
► ECMO may be effective in the treatment of patients with IRIS.

Open access: This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/

Figure 1: Chest X-ray on the day of extracorporeal membrane oxygenation initiation (intensive care unit day 3).
Figure 3: Chest X-ray at the time of renewed respiratory deterioration requiring the second session of extracorporeal membrane oxygenation (intensive care unit day 17).
Figure 4: Chest X-ray on the day of termination of the second extracorporeal membrane oxygenation session (intensive care unit day 30).
#ifndef BABYLON_RENDERING_GEOMETRY_BUFFER_RENDERER_H
#define BABYLON_RENDERING_GEOMETRY_BUFFER_RENDERER_H

#include <memory>
#include <unordered_map>

#include <babylon/babylon_api.h>
#include <babylon/babylon_common.h>
#include <babylon/maths/matrix.h>

namespace BABYLON {

class AbstractMesh;
class Effect;
class Mesh;
class MultiRenderTarget;
class Scene;
class SubMesh;
using AbstractMeshPtr      = std::shared_ptr<AbstractMesh>;
using EffectPtr            = std::shared_ptr<Effect>;
using MeshPtr              = std::shared_ptr<Mesh>;
using SubMeshPtr           = std::shared_ptr<SubMesh>;
using MultiRenderTargetPtr = std::shared_ptr<MultiRenderTarget>;

struct ISavedTransformationMatrix {
  Matrix world;
  Matrix viewProjection;
}; // end of struct ISavedTransformationMatrix

/**
 * @brief This renderer is helpful for filling one of the render targets with a
 * geometry buffer.
 */
class BABYLON_SHARED_EXPORT GeometryBufferRenderer {

public:
  /**
   * Constant used to retrieve the position texture index in the G-Buffer
   * textures array using
   * getTextureIndex(GeometryBufferRenderer::POSITION_TEXTURE_TYPE)
   */
  static constexpr unsigned int POSITION_TEXTURE_TYPE = 1;

  /**
   * Constant used to retrieve the velocity texture index in the G-Buffer
   * textures array using
   * getTextureIndex(GeometryBufferRenderer::VELOCITY_TEXTURE_TYPE)
   */
  static constexpr unsigned int VELOCITY_TEXTURE_TYPE = 2;

  /**
   * Constant used to retrieve the reflectivity texture index in the G-Buffer
   * textures array using
   * getTextureIndex(GeometryBufferRenderer::REFLECTIVITY_TEXTURE_TYPE)
   */
  static constexpr unsigned int REFLECTIVITY_TEXTURE_TYPE = 3;

public:
  /**
   * @brief Creates a new G Buffer for the scene.
   * @param scene The scene the buffer belongs to
   * @param ratio How big is the buffer related to the main canvas.
   */
  GeometryBufferRenderer(Scene* scene, float ratio = 1.f);
  virtual ~GeometryBufferRenderer(); // = default

  /**
   * @brief Returns the index of the given texture type in the G-Buffer
   * textures array.
   * @param textureType The texture type constant. For example
   * GeometryBufferRenderer::POSITION_TEXTURE_TYPE
   * @returns the index of the given texture type in the G-Buffer textures
   * array
   */
  int getTextureIndex(unsigned int textureType);

  /**
   * @brief Checks whether everything is ready to render a submesh to the G
   * buffer.
   * @param subMesh the submesh to check readiness for
   * @param useInstances is the mesh drawn using instance or not
   * @returns true if ready otherwise false
   */
  bool isReady(SubMesh* subMesh, bool useInstances);

  /**
   * @brief Gets the current underlying G Buffer.
   * @returns the buffer
   */
  [[nodiscard]] MultiRenderTargetPtr getGBuffer() const;

  /**
   * @brief Disposes the renderer and frees up associated resources.
   */
  void dispose();

protected:
  /**
   * @brief Sets the render list (meshes to be rendered) used in the G buffer.
   */
  void set_renderList(const std::vector<MeshPtr>& meshes);

  /**
   * @brief Gets whether or not G buffers are supported by the running
   * hardware. This requires draw buffers support.
   */
  [[nodiscard]] bool get_isSupported() const;

  /**
   * @brief Gets a boolean indicating if objects positions are enabled for the
   * G buffer.
   */
  [[nodiscard]] bool get_enablePosition() const;

  /**
   * @brief Sets whether or not objects positions are enabled for the G buffer.
   */
  void set_enablePosition(bool enable);

  /**
   * @brief Gets a boolean indicating if objects velocities are enabled for the
   * G buffer.
   */
  [[nodiscard]] bool get_enableVelocity() const;

  /**
   * @brief Sets whether or not objects velocities are enabled for the G
   * buffer.
   */
  void set_enableVelocity(bool enable);

  /**
   * @brief Gets a boolean indicating if objects' roughness is enabled in the G
   * buffer.
   */
  bool get_enableReflectivity() const;

  /**
   * @brief Sets whether or not objects' roughness is enabled for the G buffer.
   */
  void set_enableReflectivity(bool enable);

  /**
   * @brief Gets the scene associated with the buffer.
   */
  Scene*& get_scene();

  /**
   * @brief Gets the ratio used by the buffer during its creation.
   * How big is the buffer related to the main canvas.
   */
  [[nodiscard]] float get_ratio() const;

  /**
   * @brief Gets the number of samples used to render the buffer (anti
   * aliasing).
   */
  [[nodiscard]] unsigned int get_samples() const;

  /**
   * @brief Sets the number of samples used to render the buffer (anti
   * aliasing).
   */
  void set_samples(unsigned int value);

  void _createRenderTargets();

private:
  /**
   * @brief Custom render function.
   * @param subMesh
   */
  void renderSubMesh(SubMesh* subMesh);

  /**
   * @brief Copies the bones transformation matrices into the target array and
   * returns the target's reference.
   */
  Float32Array& _copyBonesTransformationMatrices(const Float32Array& source,
                                                 Float32Array& target);

public:
  /**
   * Dictionary used to store the previous transformation matrices of each
   * rendered mesh in order to compute objects velocities when enableVelocity
   * is set to "true"
   * @hidden
   */
  std::unordered_map<size_t, ISavedTransformationMatrix>
    _previousTransformationMatrices;

  /**
   * Dictionary used to store the previous bones transformation matrices of
   * each rendered mesh in order to compute objects velocities when
   * enableVelocity is set to "true"
   * @hidden
   */
  std::unordered_map<size_t, Float32Array> _previousBonesTransformationMatrices;

  /**
   * Array used to store the ignored skinned meshes while computing the
   * velocity map (typically used by the motion blur post-process). Avoids
   * computing bones velocities and computes only the mesh's own velocity
   * (position, rotation, scaling).
   */
  std::vector<AbstractMeshPtr> excludedSkinnedMeshesFromVelocity;

  /**
   * Gets or sets a boolean indicating if transparent meshes should be rendered
   */
  bool renderTransparentMeshes;

  /**
   * The render list (meshes to be rendered) used in the G buffer.
   */
  WriteOnlyProperty<GeometryBufferRenderer, std::vector<MeshPtr>> renderList;

  /**
   * Whether or not G buffers are supported by the running hardware.
   * This requires draw buffers support.
   */
  ReadOnlyProperty<GeometryBufferRenderer, bool> isSupported;

  /**
   * Whether or not objects positions are enabled for the G buffer.
   */
  Property<GeometryBufferRenderer, bool> enablePosition;

  /**
   * Whether or not objects velocities are enabled for the G buffer.
   */
  Property<GeometryBufferRenderer, bool> enableVelocity;

  /**
   * Gets or sets a boolean indicating if objects' roughness is enabled in the
   * G buffer.
   */
  Property<GeometryBufferRenderer, bool> enableReflectivity;

  /**
   * The scene associated with the buffer.
   */
  ReadOnlyProperty<GeometryBufferRenderer, Scene*> scene;

  /**
   * The ratio used by the buffer during its creation.
   * How big is the buffer related to the main canvas.
   */
  ReadOnlyProperty<GeometryBufferRenderer, float> ratio;

  /**
   * The number of samples used to render the buffer (anti aliasing).
   */
  Property<GeometryBufferRenderer, unsigned int> samples;

protected:
  EffectPtr _effect;
  std::string _cachedDefines;

private:
  Scene* _scene;
  MultiRenderTargetPtr _multiRenderTarget;
  float _ratio;
  bool _enablePosition;
  bool _enableVelocity;
  bool _enableReflectivity;
  int _positionIndex;
  int _velocityIndex;
  int _reflectivityIndex;

}; // end of class GeometryBufferRenderer

} // end of namespace BABYLON

#endif // end of BABYLON_RENDERING_GEOMETRY_BUFFER_RENDERER_H
/**
 * Reserved characters type.
 *
 * @param reservedCharactersType the reserved characters type
 * @return the builder
 */
public Builder reservedCharactersType(ReservedCharactersType reservedCharactersType) {
    if (reservedCharactersType != null) {
        this.instance.reservedCharactersType = reservedCharactersType;
    }
    return this;
}
import os from pyweb.web.application import create_server os.environ["APPLICATION_SETTINGS"] = "/server/test/resources/settings.cfg" class MockExtension(object): def init_app(self, server): pass def test_application_extensions(): assert create_server(extensions=[MockExtension()]) def test_application(): assert create_server(extensions=[])
from .id_loss import IDLoss
import { NgModule } from '@angular/core'; import { CommonModule } from '@angular/common'; import { FormsModule, ReactiveFormsModule } from '@angular/forms'; import { XSliderSelectComponent } from './slider-select.component'; import { XTooltipModule } from '@ng-nest/ui/tooltip'; import { DragDropModule } from '@angular/cdk/drag-drop'; import { XSliderSelectProperty } from './slider-select.property'; import { XBaseFormModule } from '@ng-nest/ui/base-form'; @NgModule({ declarations: [XSliderSelectComponent, XSliderSelectProperty], exports: [XSliderSelectComponent], imports: [CommonModule, FormsModule, ReactiveFormsModule, DragDropModule, XTooltipModule, XBaseFormModule] }) export class XSliderSelectModule {}
from torch import nn
import torch
import logging


class Base(nn.Module):
    r"""
    Applies a multi-layer RNN to an input sequence.

    Warning: Do not use this class directly, use one of the sub classes.

    Args:
        vocab_size (int): size of the vocabulary
        max_len (int): maximum allowed length for the sequence to be processed
        hidden_size (int): number of features in the hidden state `h`
        input_dropout_p (float): dropout probability for the input sequence
        dropout_p (float): dropout probability for the output sequence
        n_layers (int): number of recurrent layers
        rnn_cell_type (str): type of RNN cell (e.g. 'lstm', 'gru')

    Inputs: ``*args``, ``**kwargs``
        - ``*args``: variable length argument list.
        - ``**kwargs``: arbitrary keyword arguments.
    """

    def __init__(self, vocab_size, max_len, hidden_size, input_dropout_p, dropout_p, n_layers,
                 rnn_cell_type):
        super(Base, self).__init__()
        self.vocab_size = vocab_size
        self.max_len = max_len
        self.hidden_size = hidden_size
        self.n_layers = n_layers
        self.input_dropout_p = input_dropout_p
        self.input_dropout = nn.Dropout(p=input_dropout_p)
        self.rnn_cell_type = rnn_cell_type.lower()
        if self.rnn_cell_type == 'lstm':
            self.rnn_cell = nn.LSTM
        elif self.rnn_cell_type == 'gru':
            self.rnn_cell = nn.GRU
        else:
            raise ValueError("Unsupported RNN Cell: {0}".format(rnn_cell_type))
        self.dropout_p = dropout_p
        self.logger = logging.getLogger(__name__)

    def forward(self, *args, **kwargs):
        raise NotImplementedError()


class Encoder(Base):
    r"""
    Applies a multi-layer RNN to an input sequence.

    Args:
        vocab_size (int): size of the vocabulary
        max_len (int): a maximum allowed length for the sequence to be processed
        hidden_size (int): the number of features in the hidden state `h`
        input_dropout_p (float, optional): dropout probability for the input sequence (default: 0)
        dropout_p (float, optional): dropout probability for the output sequence (default: 0)
        n_layers (int, optional): number of recurrent layers (default: 1)
        bidirectional (bool, optional): if True, becomes a bidirectional encoder (default: False)
        rnn_cell_type (str, optional): type of RNN cell (default: 'gru')
        variable_lengths (bool, optional): whether to use variable-length RNN (default: False)
        embedding (torch.Tensor, optional): Pre-trained embedding. The size of the tensor has to match
            the size of the embedding parameter: (vocab_size, hidden_size). The embedding layer would be
            initialized with the tensor if provided (default: None).
        update_embedding (bool, optional): If the embedding should be updated during training
            (default: True).

    Inputs: inputs, input_lengths
        - **inputs**: list of sequences, whose length is the batch size and within which each sequence
          is a list of token IDs.
        - **input_lengths** (list of int, optional): list that contains the lengths of sequences in the
          mini-batch; it must be provided when using a variable-length RNN (default: `None`)

    Outputs: output, hidden
        - **output** (batch, seq_len, hidden_size): tensor containing the encoded features of the input
          sequence
        - **hidden** (num_layers * num_directions, batch, hidden_size): tensor containing the features
          in the hidden state `h`

    Examples::

         >>> encoder = Encoder(input_vocab, max_seq_length, hidden_size)
         >>> output, hidden = encoder(input_var)
    """

    def __init__(self, vocab_size, max_len, hidden_size, input_dropout_p=0, dropout_p=0, n_layers=1,
                 bidirectional=False, rnn_cell_type='gru', variable_lengths=False, embedding=None,
                 update_embedding=True):
        super(Encoder, self).__init__(vocab_size, max_len, hidden_size, input_dropout_p, dropout_p,
                                      n_layers, rnn_cell_type)
        self.bidirectional = bidirectional
        self.variable_lengths = variable_lengths
        self.embedding = nn.Embedding(vocab_size, hidden_size)
        if embedding is not None:
            self.embedding.weight = nn.Parameter(embedding)
        self.embedding.weight.requires_grad = update_embedding
        self.rnn = self.rnn_cell(hidden_size, hidden_size, n_layers, batch_first=True,
                                 bidirectional=self.bidirectional, dropout=dropout_p)

    def forward(self, input_var, input_lengths=None):
        r"""
        Applies a multi-layer RNN to an input sequence.

        Args:
            input_var (batch, seq_len): tensor containing the features of the input sequence.
            input_lengths (list of int, optional): A list that contains the lengths of sequences
              in the mini-batch

        Returns: output, hidden
            - **output** (batch, seq_len, hidden_size): variable containing the encoded features of
              the input sequence
            - **hidden** (num_layers * num_directions, batch, hidden_size): variable containing the
              features in the hidden state h
        """
        embedded = self.embedding(input_var)
        embedded = self.input_dropout(embedded)
        if self.variable_lengths:
            embedded = nn.utils.rnn.pack_padded_sequence(embedded, input_lengths, batch_first=True)
        output, hidden = self.rnn(embedded)
        if self.variable_lengths:
            output, _ = nn.utils.rnn.pad_packed_sequence(output, batch_first=True)
        return output, hidden


class DualEncoder(nn.Module):
    r"""
    Applies a dual encoder architecture.

    Args:
        context (noesis.networks.dual_encoder.Encoder): encoder RNN for context information
        response (noesis.networks.dual_encoder.Encoder): encoder RNN for response information

    Inputs: context_var, responses_var
        - **context_var** : a tensor containing context information
        - **responses_var** : a tensor containing the responses per context

    Outputs: output
        - **output** (batch, num_responses): tensor containing scaled probabilities of responses

    Examples::

        >>> dual_encoder = DualEncoder(ctx_encoder, resp_encoder)
        >>> output = dual_encoder(ctx_variable, resp_variable)
    """

    def __init__(self, context, response, use_output=False):
        super(DualEncoder, self).__init__()
        self.context = context
        self.response = response
        self.use_output = use_output
        c_h = context.hidden_size
        r_h = response.hidden_size
        if self.context.bidirectional:
            c_h = 2 * c_h
        if self.response.bidirectional:
            r_h = 2 * r_h
        # Register M as a Parameter (rather than a bare tensor moved to CUDA by
        # hand) so it is returned by parameters(), updated by the optimizer and
        # moved along with the module by .to()/.cuda().
        self.M = nn.Parameter(torch.randn(c_h, r_h))
        # Softmax over the candidate-response dimension.
        self.final_layer = nn.Softmax(dim=-1)

    def forward(self, context_var, responses_var, context_lengths_var, responses_lengths_var):
        r"""
        Scores each candidate response against its context.

        Args:
            context_var (batch, seq_len): tensor containing the features of the context sequence.
            responses_var (batch, num_responses, seq_len): tensor containing the features of the
              response sequences.

        Returns: output
            - **output** (batch, num_responses): variable containing the scaled probabilities over
              responses
        """
        batch, num_resp, seq_len = responses_var.size()
        if self.context.rnn_cell_type == 'gru':
            c, h_c = self.context(context_var, context_lengths_var)
        elif self.context.rnn_cell_type == 'lstm':
            c, (h_c, _) = self.context(context_var, context_lengths_var)
        if self.response.rnn_cell_type == 'gru':
            r, h_r = self.response(responses_var.reshape([-1, seq_len]),
                                   responses_lengths_var.reshape([-1]))
        elif self.response.rnn_cell_type == 'lstm':
            r, (h_r, _) = self.response(responses_var.reshape([-1, seq_len]),
                                        responses_lengths_var.reshape([-1]))

        # unscaled log probabilities
        if self.use_output:
            f_c = c.gather(1, context_lengths_var.view(-1, 1, 1).expand(c.size(0), 1, c.size(2)) - 1)
            f_r = r.gather(1, responses_lengths_var.view(-1, 1, 1).expand(r.size(0), 1, r.size(2)) - 1).squeeze(1)
            logits = torch.matmul(torch.matmul(f_c, self.M),
                                  f_r.reshape([batch, num_resp, -1]).transpose(1, 2)).squeeze(1)
        else:
            logits = torch.matmul(torch.matmul(h_c.view(self.context.n_layers, 1, -1), self.M),
                                  h_r.view(self.response.n_layers, num_resp, -1).transpose(1, 2)).squeeze()
        output = self.final_layer(logits)
        return output
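A minimal usage sketch for the classes above follows. The vocabulary size, sequence lengths, batch shape and the choice of use_output=True are illustrative assumptions, not values from the original project; use_output=True scores responses from the RNN outputs at the last valid timestep, which supports batched contexts.

# Minimal smoke test for Encoder/DualEncoder as defined above.
# All sizes below are illustrative assumptions.
import torch

vocab_size, max_len, hidden_size = 100, 12, 16
batch, num_resp, seq_len = 2, 4, 12

ctx_encoder = Encoder(vocab_size, max_len, hidden_size, rnn_cell_type='gru')
resp_encoder = Encoder(vocab_size, max_len, hidden_size, rnn_cell_type='gru')
model = DualEncoder(ctx_encoder, resp_encoder, use_output=True)

context = torch.randint(0, vocab_size, (batch, seq_len))
responses = torch.randint(0, vocab_size, (batch, num_resp, seq_len))
context_lengths = torch.full((batch,), seq_len, dtype=torch.long)
response_lengths = torch.full((batch, num_resp), seq_len, dtype=torch.long)

scores = model(context, responses, context_lengths, response_lengths)
print(scores.shape)  # torch.Size([2, 4]): probabilities over the candidates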
New users seeking access to a network first have to register with the network. In return, the network grants them a 'lifetime', which describes the period over which the network can be accessed by the user. Users then have to periodically renew their registrations with the network before their lifetime expires if they wish to continue their access. The network has a maximum prescribed lifetime. In order to ensure that users are still present on the network, the network will also request that they periodically re-register, so that it can continue reserving resources for them. The maximum prescribed lifetime is generally a global setting for all users. When a large number of users access the network in this manner, the processing of their registration requests can create a disproportionately high load on the network element/server responsible for processing user registrations. Since the arrivals of these requests tend to be non-uniform, server load can peak at different times, leading to delays in granting access or, worse, loss of registration requests under overload conditions. Furthermore, the rate of initial access registration requests is not within the control of the registration server, as initial access requests originate from outside the server. Therefore, it would be desirable to alleviate the problem of network element/server overload when handling user registrations.
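One common mitigation, sketched below, is for the server to grant each user a randomized (jittered) lifetime below the prescribed maximum, so that renewal times spread out rather than arriving in synchronized bursts. This is a purely illustrative sketch of that idea, not an implementation from the text; the names and parameter values are hypothetical.

# Illustrative sketch: jittering granted lifetimes to spread renewal load.
# MAX_LIFETIME and JITTER_FRACTION are hypothetical parameters.
import heapq
import random

MAX_LIFETIME = 3600          # maximum prescribed lifetime, in seconds
JITTER_FRACTION = 0.25       # grant between 75% and 100% of the maximum

def granted_lifetime():
    """Return a randomized lifetime so renewals do not synchronize."""
    low = int(MAX_LIFETIME * (1 - JITTER_FRACTION))
    return random.randint(low, MAX_LIFETIME)

class RegistrationServer(object):
    def __init__(self):
        self.expiry_queue = []               # (expiry_time, user_id) min-heap

    def register(self, user_id, now):
        """Handle an initial registration or a renewal."""
        lifetime = granted_lifetime()
        heapq.heappush(self.expiry_queue, (now + lifetime, user_id))
        return lifetime                      # returned to the user in the grant

    def expire(self, now):
        """Release resources for users whose lifetime elapsed unrenewed."""
        while self.expiry_queue and self.expiry_queue[0][0] <= now:
            _, user_id = heapq.heappop(self.expiry_queue)
            print("releasing resources for", user_id)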
#include "../common.h" /** * Retrieve the time in seconds. * &returns: The number of seconds since 1970. */ int64_t sys_time(void) { return sys_utime() / 1000000; } /** * Retrieve the time in microseconds. * &returns: The number of microseconds since 1970. */ int64_t sys_utime(void) { FILETIME ft; GetSystemTimeAsFileTime(&ft); return (((int64_t)ft.dwHighDateTime << 32) + ft.dwLowDateTime) / 10; }
// photo-stack-front/src/Heap/HeapContainer.tsx
import React from "react";
import { Columns, Column } from "bloomer";
import { Link } from "@reach/router";
import { Query } from "react-apollo";
import gql from "graphql-tag";
import plus from "../plus-solid.png";
import Heap from "./Heap.jsx";

const GET_HEAPS = gql`
  {
    getHeaps {
      id
      name
      tags
      thumbnail
    }
  }
`;

const GET_HEAP_PHOTOS = gql`
  query GetHeapPhotos($query: [String!]!) {
    searchPhotos(query: $query) {
      objectId
    }
  }
`;

export default class HeapContainer extends React.Component {
  render() {
    return (
      <Columns isMultiline isGrid>
        <Query query={GET_HEAPS}>
          {({ loading, error, data }) => {
            if (loading) {
              return "Loading...";
            }
            if (error) {
              console.log(error);
              return null;
            }
            return data.getHeaps.map(heap => {
              return (
                <Column key={heap.id} isSize="1/4">
                  <Heap id={heap.id} thumbnail={heap.thumbnail} name={heap.name} />
                </Column>
              );
            });
          }}
        </Query>
        <Column isSize="1/4">
          <Link to="createheap">
            <Heap noLink thumbnail={plus} name="Create new..." />
          </Link>
        </Column>
      </Columns>
    );
  }
}
from ui.gui import setup
from core.ProjectManager import ProjectManager

setup(ProjectManager())
Short cervix and twins: progesterone, yes or no? The number of twin gestations has dramatically increased and has contributed substantially to the rate of preterm birth since 1980. Unfortunately, attempts to apply treatments effective at reducing preterm birth in singleton gestations at increased risk for preterm birth, such as cerclage, intramuscular 17-hydroxyprogesterone caproate, pessary, and vaginal progesterone, have not been salutary in unselected twin gestations. Moreover, cerclage and 17-hydroxyprogesterone caproate may increase the risk of preterm birth in twins. No trials have focused primarily on the treatment of twin gestations with a short cervical length measurement, which is one of the best predictors of subsequent preterm birth. Subgroup and secondary analyses of women with a short cervix enrolled in unselected twin trials have suggested a possible reduction in preterm birth with vaginal progesterone treatment, whereas an individual patient-level meta-analysis demonstrated a significant reduction in neonatal morbidity associated with vaginal progesterone treatment (Norman et al. Lancet 2009;373:2034-40; Klein et al. Ultrasound Obstet Gynecol 2011;38:281-7; Romero et al. Am J Obstet Gynecol 2012;206:124.e1-19). Vaginal progesterone has shown considerable benefit in reducing the risk of preterm birth in women with a short cervix carrying singleton gestations; its use in twin gestations therefore seems a natural extension. In this study, Brubaker et al. report a more-than-doubled rate of use of vaginal progesterone for twin gestations with a short cervix between 2010 and 2013 at their centre (23.5% versus 52.2%), despite a lack of evidence of its effectiveness. In their cohort of 167 twin gestations with a short cervix, more than one-third of patients were treated with vaginal progesterone. As might be expected, the women who received vaginal progesterone were identified with a short cervix (and presumably started treatment with progesterone) approximately 4 weeks earlier in gestation than those who did not receive progesterone (23.7 versus 27.4 weeks). Although the use of vaginal progesterone was associated with an increased risk of preterm birth <35 weeks of gestation in unadjusted analysis, the association between vaginal progesterone use and preterm birth dissipated when traditional statistical approaches were used to account for potential confounding factors. Using more advanced statistical approaches to better account for potential unrecognised confounding factors and bias in a non-randomised intervention cohort, the authors performed a propensity score analysis, which considers the likelihood of a patient receiving vaginal progesterone based on her risk factors. In this analysis, the use of vaginal progesterone was associated with an increased risk of preterm birth. Although these authors used 2.5 cm as the definition of a short cervix, other studies have used different cut-offs, and this remains another important issue to address as we try to identify the women most likely to benefit from treatment. This study adds to the conflicting body of evidence surrounding the effectiveness of vaginal progesterone in preventing preterm birth, or more importantly neonatal morbidity, in the setting of a short cervix in twins. As these authors point out, well-designed and well-described trials are needed to address this important dilemma before its use in clinical practice becomes even more widespread.
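To make the cited method concrete: a propensity score analysis models each patient's probability of receiving the treatment from her measured risk factors and then compares outcomes with that probability accounted for. The sketch below is purely illustrative, using synthetic data and hypothetical covariates, not the study's variables, and shows one common variant (inverse-probability-of-treatment weighting).

# Illustrative propensity-score sketch (synthetic data, hypothetical columns):
# estimate each patient's probability of receiving vaginal progesterone from
# her risk factors, then weight outcomes by that probability.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 3))            # hypothetical risk factors
treated = rng.integers(0, 2, size=n)   # 1 = received vaginal progesterone

model = LogisticRegression().fit(X, treated)
propensity = model.predict_proba(X)[:, 1]

# Inverse-probability-of-treatment weights balance the two groups.
weights = np.where(treated == 1, 1 / propensity, 1 / (1 - propensity))
print(weights[:5])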
/*
 * Wraps the date tags and the time picker into a single component.
 * Two features use this component: task creation and task editing.
 * Quick date-selection component.
 */
import { DatePicker, Tag } from 'antd'
import moment from 'moment'
import { quickTimeConfig } from '../config'
import './index.less'

// Constrain the prop types
interface IProps {
  value?: moment.Moment
  onChange?: (v: moment.Moment) => void
}

export default function QuickDateFormat(props: IProps) {
  const { value, onChange } = props

  /*
   * Quickly builds and formats a date relative to today.
   * offset: number of days from today
   */
  const handleQuickCreate = (offset: number) => {
    // Get the current timestamp
    const nowTimeStamp = new Date()
    // Build a date string pinned to 10:00 UTC
    const stringTime = nowTimeStamp.toISOString().split('T')[0] + 'T10:00:00.000Z'
    // Use moment to shift the date by `offset` days (e.g. +1 is tomorrow)
    let formatTime = moment(stringTime).add(offset, 'd')
    onChange?.(formatTime)
  }

  return (
    <div className='time-tags'>
      {/* Render the Tag list; clicking a tag quickly selects its date */}
      {
        quickTimeConfig.map((item) => (
          <Tag
            key={item.offset}
            color={item.color}
            onClick={() => handleQuickCreate(item.offset)}
          >
            {item.title}
          </Tag>
        ))
      }
      {/* Time picker used to set the task start and end times */}
      <DatePicker
        showTime
        onOk={onChange}
        placeholder="Select the task end time"
        value={value}
        size='small'
      />
    </div>
  )
}
Greece, N.Y. (WHAM) - A motorcyclist was killed Friday morning after colliding with a box truck in Greece. This happened on West Ridge Road just west of North Greece Road. According to Greece Police, the motorcycle collided with the vehicle shortly after 11 a.m. "All of a sudden I heard a bang, and I looked and there were flames everywhere," said witness Ryan Phillips. "I noticed that the biker was going significantly fast, maybe 90-100 mph," said witness Brooks Gregory. Investigators said speed may have been a factor in the crash. Police closed West Ridge Road between the Shops at Hampton Ridge and North Greece Road for about 6 hours to reconstruct the accident scene. The motorcyclist has been identified as 22-year-old Zachary Pogel. 13WHAM News will continue to update this story as more information becomes available.
//------------------------------------------------------------------------------- // Licensed Materials - Property of IBM // XXXX-XXX (C) Copyright IBM Corp. 2013. All Rights Reserved. // US Government Users Restricted Rights - Use, duplication or // disclosure restricted by GSA ADP Schedule Contract with IBM Corp. //------------------------------------------------------------------------------- // // IBMDataConstants.h // IBMData iOS SDK #ifdef __cplusplus #define EXTERN extern "C" #else #define EXTERN extern #endif /**--------------------------------------------------------------------------------------- * Notifications * --------------------------------------------------------------------------------------- */ /** Posted when the save for an IBMDataObject completes. The object of the notification is the IBMDataObject that was saved. There is no userInfo dictionary. */ EXTERN NSString *const IBMDataObjectSaveNotification;
#ifndef __debug__
#define __debug__

#include <stdio.h>
#include <stdlib.h>
#include <assert.h>
#include <math.h>

#define I(x) { printf("%s: %ld\n",#x, (long)x); }
#define F(x) { printf("%s: %g\n",#x,x); }
#define VASSERT(condition) { if(!(condition)) { printf("ASSERT FAILED: %s @ %s (line %d)\n", #condition, __FILE__, __LINE__ ); exit(EXIT_FAILURE);} }

static double __debug_sum__ = 0;
static long __debug_N__ = 0;
static char *__debug_name__;
static double __debug_min__ = INFINITY;
static double __debug_max__ = -INFINITY;

void _count(double x, char *name) {
    __debug_name__ = name;
    __debug_sum__ += x;
    __debug_N__++;
    __debug_min__ = __debug_min__ < x ? __debug_min__ : x;
    __debug_max__ = __debug_max__ > x ? __debug_max__ : x;
}

void _describe(void) {
    static int firstTime = 1;
    if (firstTime) {
        printf("Mean of %s is %g\n", __debug_name__, __debug_sum__/(double)__debug_N__);
        printf("Min of %s is %g\n", __debug_name__, __debug_min__);
        printf("Max of %s is %g\n", __debug_name__, __debug_max__);
        firstTime = 0;
    }
}

//void check_pointer(void *ptr, void *low, void *high) { }

#define DESCRIBE(x) {_count(x, #x); atexit(_describe); }

#endif
NICLB: I have made these potato "chips" several times before, and they are really good. However, it usually takes quite a bit longer in many microwaves for them to get crisp!

Gracks Kitchen: I had no idea that a microwave could produce potato chips like this! My wife was blown away by the results^_^ The chips turned out crispy and thin and even curled like many name-brand chips. ...

SHARONKAYM: I've been making my own potato "chips" for years. I don't even use the oil any longer. I spray with 'PAM'. It works great and saves those extra calories and fat. These are great sprinkled wi...

Bakin' in Boston: Believe it or not, I made these for a special occasion. I used a round plate in the microwave and placed parchment paper on top. I didn't have a mandoline, so I used a potato peeler instead. ...

Bubba's Mom: I wasn't looking for this recipe. I only happened upon it by accident and thought it looked easy and worth trying. Boy, was it! I whipped some of these up for lunch (using the box grater slic...

MNSUE: I decided to try this with some feeling of 'this isn't going to work!' attitude. I fixed up my first batch, and put it in the microwave. I had to keep adding minutes until they finally browned...

CC<3's2bake: These were very yummy once I got a successful batch, and I can see where they would be addictive. WAY better than packaged chips. It did however, take two failed batches before I had success. I ...

AndyJ: These were remarkably good, never would have thought to try and make chips in the microwave. To solve the sticking problem, I just put a sheet of parchment paper on the turntable of my microwav...
// src/kmr_behaviortree/plugins/action/navigate_vehicle.cpp
// Copyright (c) 2018 Intel Corporation
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#include <memory>
#include <string>

#include "geometry_msgs/msg/pose.hpp"
#include "nav2_msgs/action/navigate_to_pose.hpp"
#include "kmr_behaviortree/bt_action_node.hpp"

namespace kmr_behavior_tree
{
class NavigateVehicleAction : public BtActionNode<nav2_msgs::action::NavigateToPose>
{
public:
  NavigateVehicleAction(
    const std::string & xml_tag_name,
    const std::string & action_name,
    const BT::NodeConfiguration & conf)
  : BtActionNode<nav2_msgs::action::NavigateToPose>(xml_tag_name, action_name, conf)
  {
  }

  void on_tick() override
  {
    // Use the position and orientation fields from the XML attributes to initialize the goal
    geometry_msgs::msg::PoseStamped current_goalpose;
    config().blackboard->get("current_goalpose", current_goalpose);
    goal_.pose.pose.position = current_goalpose.pose.position;
    goal_.pose.pose.orientation = current_goalpose.pose.orientation;
  }
};

}  // namespace kmr_behavior_tree

#include "behaviortree_cpp_v3/bt_factory.h"
BT_REGISTER_NODES(factory)
{
  BT::NodeBuilder builder =
    [](const std::string & name, const BT::NodeConfiguration & config)
    {
      return std::make_unique<kmr_behavior_tree::NavigateVehicleAction>(
        name, "NavigateToPose", config);
    };

  factory.registerBuilder<kmr_behavior_tree::NavigateVehicleAction>(
    "NavigateVehicle", builder);
}
Traumatic xylophagia leading to foreign body removal and tracheostomy in the setting of postpartum psychosis Abstract Postpartum psychosis (PPP) is a severe mood disorder following childbirth that rarely leads to injurious or suicidal behavior. This report illustrates otolaryngologic intervention for pharyngeal laceration and airway instability following traumatic foreign body ingestion in the setting of PPP. A 25-year-old woman with PPP presented with hemoptysis after attempting suicide by traumatically forcing tree branches into her oropharynx. Imaging revealed pneumomediastinum, and flexible laryngoscopy and esophagoscopy showed a large foreign body (tree branch) extending from the hypopharynx to the gastroesophageal junction. She was taken to the operating room for direct microlaryngoscopy, bronchoscopy and esophagoscopy with removal of the 25-cm tree branch. Panendoscopy revealed a mucosal laceration at the cricopharyngeus with supraglottic and hypopharyngeal edema but no injury to the larynx. Due to airway concerns, a cuffed tracheostomy was placed along with a gastrostomy tube for feeding access. She tolerated her postoperative course well, with successful decannulation and oral feeding prior to discharge. INTRODUCTION Postpartum psychosis (PPP) is defined as severe mood disturbance following childbirth, possibly with severe and injurious behavior. Hospitalization expedites treatment and ensures patient safety, as cases of suicide and filicide have been reported. Given how precarious the condition can be, providers should understand its presentation and initial management, including psychiatric consultation. Only one case of foreign body ingestion (FBI) has been reported in PPP, describing ingestion of a safety pin, nut and bolt at the behest of command auditory hallucinations. Unlike small objects, which frequently pass spontaneously, ingestion of large or sharp foreign bodies (FB) may lead to life-threatening complications, including airway obstruction, hemorrhage and esophageal perforation with mediastinitis. In such cases, endoscopic removal or surgical management is critical for airway stability and prevention of further internal trauma. We report the successful management of a patient with pharyngeal injury, hemorrhage and airway instability following traumatic xylophagia in the setting of PPP. This report highlights the urgent nature of PPP and the importance of prompt intervention for large FB ingestion. CASE REPORT A 25-year-old woman, 3 months postpartum, presented to an outside hospital after attempting suicide by forcing tree branches into her oropharynx. She presented with a muffled voice and active hematemesis, with no oral source of bleeding visualized. Due to airway concerns, she was intubated with a 7.5 cuffed endotracheal tube. Computed tomography revealed cervical subcutaneous emphysema and pneumomediastinum, raising concern for esophageal perforation. The patient was given broad-spectrum antibiotics and transferred for a higher level of care. In the receiving ED, cervical crepitus was palpable, and bedside direct laryngoscopy, esophagogastroduodenoscopy and flexible bronchoscopy revealed a tree branch extending from the proximal esophagus to the gastroesophageal junction (GEJ). A posterior pharyngeal laceration was noted at the level of the cricopharyngeus, but no FB was observed within the trachea or bronchi. The patient was taken emergently to the operating room (OR) for panendoscopy of the larynx, bronchus and esophagus, with FB removal via rigid esophagoscopy.
In the OR, direct microlaryngoscopy revealed the aforementioned hypopharyngeal laceration as well as significant supraglottic ecchymosis, edema and partial airway obstruction. However, the vocal folds and distal trachea were atraumatic. A rigid esophagoscope was introduced and advanced to the GEJ, and the 25-cm tree branch was removed (Figs 1 and 2). The esophagus was visualized to be intact. The hypopharyngeal laceration appeared well approximated, and primary closure was deferred. Due to the supraglottic edema, a surgical tracheostomy was performed, and a 6.0 cuffed Shiley tracheostomy tube was placed. Then, to facilitate long-term tube feeding, a percutaneous endoscopic gastrostomy tube was placed by trauma surgery. Postoperatively, the patient received steroids and antibiotics and was ventilated in the intensive care unit for 1 day. She was eventually transitioned to a trach collar, and on postoperative day (POD) 3 was changed to a cuffless trach and transferred to the floor. Serial radiographs demonstrated resolving pneumomediastinum without signs of esophageal perforation. On POD 13, a gastrograffin study demonstrated no esophageal leak but did raise concern for aspiration. A barium study was subsequently performed by speech-language pathology, revealing minor swallowing deficits without aspiration. After education on safe swallowing, the patient was decannulated and transitioned to a regular diet. Postoperatively, the patient was evaluated and treated by inpatient psychiatry. She reported worsening insomnia, anxiety and auditory hallucinations since giving birth 3 months earlier, culminating in commands to attempt suicide by tree branch ingestion. Psychiatry diagnosed her with late-onset PPP and prescribed olanzapine and valproic acid, with haloperidol for agitation. Inpatient psychiatric transfer was initially recommended but not completed during her admission. By POD 24 she had no hallucinations or suicidal ideation and was discharged home with outpatient otolaryngology and psychiatry appointments. At her 2-week otolaryngology follow-up in clinic, she reported improved swallowing and phonation and was instructed to return as needed. DISCUSSION PPP has an incidence of 0.1-0.2% in postpartum patients, with proposed mechanisms including hormonal changes, circadian rhythm disruption and immunologic and genetic factors. Symptoms may include depression and anxiety, but disorganized thought, delusions, mania or hallucinations distinguish PPP from the postpartum blues. PPP usually presents immediately postpartum, but late-onset cases have been reported. FBI is typically encountered in children. In adults, intentional FBI may be associated with psychiatric conditions. Notable cases requiring removal include a 17-cm wrench and a 12-cm metal spring. In the one case describing FBI in the setting of PPP, no endoscopic or surgical intervention was reported, likely due to the objects' small, blunt nature. While small objects may pass uneventfully, sharp or long (>6 cm) objects require removal due to increased rates of gastrointestinal perforation. This patient's mucronate, 25-cm tree branch fulfilled both criteria, and prompt removal prevented damage beyond the posterior hypopharyngeal laceration. Esophageal perforation and large pharyngeal injuries necessitate surgical repair, but minor pharyngeal injuries may heal with antibiotics and fasting. In addition, injuries above the arytenoid cartilages have shown lower rates of infectious and noninfectious complications.
This patient's supra-arytenoid, shallow mucosal injury was therefore allowed to heal by secondary intention. Tracheostomy is rarely performed for FBI, typically for alternate access when transoral removal is unattainable. In contrast, upper airway obstruction is a known indication for tracheostomy. This patient's hypopharyngeal laceration, pneumomediastinum and significant supraglottic edema required temporary tracheostomy placement and mechanical ventilation, but ventilatory weaning and capping trials were performed as soon as it was determined safe. In addition, attentive care by a multidisciplinary team of surgeons, psychiatrists, rehab therapists and speech pathologists facilitated this patient's speedy, uneventful recovery, consistent with recommendations for treating FBI.

CONCLUSIONS

PPP may lead to serious self-harm, as in this case of attempted suicide by traumatic xylophagia. Multidisciplinary treatment with early hospitalization and psychiatric consultation is critical for effective, timely recovery. Ingested objects that are large or sharp should be removed emergently. We managed this complicated FBI with panendoscopy, removal and tracheostomy for airway management.
Charles Ray King

Life

Charles Ray King was born on March 16, 1813, in Jamaica, Queens, New York City, to John Alsop King and Mary Ray King. He was the second of eight children. His brother, also named John A. King, was a delegate to the 1872 Republican National Convention and later a member of the New York State Senate. Charles Ray King attended grammar school in Jamaica and graduated from Columbia University in 1831; he received his medical degree from the University of Pennsylvania in 1834. After spending two years studying in Paris, he returned to New York and worked as a physician. He married Hannah Wharton Fisher (1816–1870) on December 12, 1839. King moved to Philadelphia and retired from medicine in 1848. He purchased the Chelwood estate from Edward Biddle, son of Nicholas Biddle, and later moved to Devon, an adjacent estate. Devon was subject to a fire five years after King moved in. King rebuilt the house and moved his personal library, the King library, to his estate. Hannah Wharton Fisher died in 1870; two years later, in 1872, he married her sister Nancy Wharton Fisher (1826–1905). His desire to build a more ambitious library led to the founding of the King Library at 1065 Bristol Pike in Andalusia in 1882. The library's architecture was based on the John Quincy Adams library, now known as the Stone Library, located in Quincy, Massachusetts. The King Library building was completed and opened to the public in 1888. King donated a large portion of his own books to the library.

Death

King died on April 5, 1901, at the age of 88.
The board that oversees San Francisco Pride will have some new members, and its longtime president will be stepping down, following the group's annual meeting September 15 at the LGBT community center.

Incumbent board members Manuel Alejandro Perez and Nguyen "Win" Pham were re-elected, along with first-time candidates Bruce Beaudette, Suzanne Ford, Kerby Lynch, and Carolyn Wysinger. A seventh candidate, incumbent Anietie Ekanem, who currently serves as vice president, dropped out of the race for personal reasons.

The new board members will be seated at the San Francisco LGBT Pride Celebration Committee's October 3 general meeting, at which time a new president and vice president will be chosen.

The afternoon included the announcement from current board President Michelle Meow, who said that she would be stepping down next month from her position and the board.

"SF Pride will always be home and family to me," Meow told the Bay Area Reporter. "I'm not going anywhere, as I hope to continue with many projects that I have started: short documentaries featuring the grand marshals with local filmmaker Jethro Patalinghug, the parade broadcast I helped save, media campaigns, fundraising, and so on."

She said that she will continue focusing on programming she does for the Commonwealth Club of California and her local TV show that has a new home at KBCW.

"I'm also excited to be a source of support for my wife as we navigate life as a new immigrant family," Meow said. "My next volunteer project may be political and/or grassroots. Visibility is the backbone of our movement. I'm ready to harness all that I have learned at Pride and make a difference in areas of our most vulnerable."

The voting process began with a statement from each candidate, followed by a Q&A. Attendees then cast their ballots.

Pham, who was out of town celebrating Houston Pride, sent a video message. "I had a really great time collaborating with colleagues and implementing stability," Pham said. "It's important to have stability with recordkeeping."

"I want to make space for young people and for the generations which came before," Lynch said. "I have tons of administrative and fundraising experience. I love Pride, so vote for me."

Ford, a transgender woman, followed. "The thrill of protesting and marching as a member of the Resistance Contingent the last two years has given me a unique perspective of the meaning of Pride," she said. "The current board and staff have given us a blueprint on how to run and sustain the premier LGBTQ statement event in the world. If elected, I will strive to maintain that blueprint while looking for ways to be more inclusive and responsive to our people. I pledge to be a supportive team member who always looks to advance SF Pride."

Beaudette recalled moving to San Francisco in 1978 and participating in the White Night riots a year later. The riots were the community's response to the light sentence given to former San Francisco Supervisor Dan White, who had assassinated gay Supervisor Harvey Milk and Mayor George Moscone at City Hall.

"There's too much of a narrative that looks like me," Beaudette, who is white, said, as he held up magazine covers featuring gay African-Americans James Baldwin and Bayard Rustin. "I'd like to bring this to Pride," Beaudette said.

Wysinger recalled her work as an educator, an author, a black queer podcaster, and an organizer of Pride celebrations in other cities.
She expressed her desire to represent black queer women and hopes to bring more black queer people to Pride.

Perez spoke of his life as a first-generation queer Mexican from a migrant family. His goal is to support current initiatives that connect more people to Pride and explore new opportunities to unite new voices to the organization's efforts.

During questions, candidates were asked what they would do if they could wave a magic wand and change the parade moving forward.

"To make sure we hear the amazing black queer voices," said Wysinger.

"Getting better at telling our stories," said Perez. "This is a year-round process."

"When I walk around Pride I don't see enough history," added Beaudette. "Maybe put up a history booth."

Candidates were also asked about what they might do to make Pride more inclusive for people with disabilities.

"To listen," said Ford. "Finding organizations that know better than you, making sure that the disabled are part of the celebration."

After the Q&A, the voting process took place. Each person could vote for as many or as few candidates as they chose. Twenty-eight ballots were cast, with each candidate needing 15 or more votes to win. Perez was the top vote-getter, receiving 24 votes. Wysinger got 23, Ford got 22, Beaudette took 21, Lynch took 20, and Pham came in with 18. All six candidates were then declared winners.

Beaudette told the B.A.R. that he was delighted to have won. "I'm excited to learn more about Pride and to bring my knowledge of other communities into this organization," he said. "I hope to share as much history as possible and to create role models for people — to define people who look like you. Harvey Milk and Harry Hay (founder of the Mattachine Society, an early gay activist group) were great, but what about Jose Sarria and Bayard Rustin?"

Pride board member Jacquelene Bishop, whose seat was not up for election this year, said that she felt great. "People who want to show up to support the agency and the cause is a good thing," Bishop told the B.A.R. "Our power comes from the strength of our community coming together. I'm happy that people are going to do this work."

SF Pride Executive Director George Ridgely Jr. was also pleased. "Today went extremely well," he said. "It's always encouraging to have individuals who want to be engaged with Pride and the governance process. We have a very diverse board and the outcome of today's election just strengthens that."

The 2019 Pride parade and celebration will take place June 29-30. The theme will be "Generations of Resistance," in commemoration of the 50th anniversary of the Stonewall riots.
# class generated by DeVIDE::createDeVIDEModuleFromVTKObject
from module_kits.vtk_kit.mixins import SimpleVTKClassModuleBase
import vtk

class vtkUGFacetReader(SimpleVTKClassModuleBase):
    def __init__(self, module_manager):
        SimpleVTKClassModuleBase.__init__(
            self, module_manager,
            vtk.vtkUGFacetReader(), 'Reading vtkUGFacet.',
            (), ('vtkUGFacet',),
            replaceDoc=True,
            inputFunctions=None, outputFunctions=None)
/* bug #1143 - Multiple storage class specifiers in one declaration? */

static static void* y[1];   /* warning */
extern static int a;        /* error */
extern typedef int A;       /* error */

int main(void)
{
    return 0;
}
import { css } from "linaria"; import { injectColorGlobals, color } from "./color"; import { injectMixinGlobals } from "./mixins"; import { injectTypographyGlobals } from "./typography"; // From https://hankchizljaw.com/wrote/a-modern-css-reset/ export const globals = css` :global() { ${injectColorGlobals()} ${injectTypographyGlobals()} ${injectMixinGlobals()} /* Box sizing rules */ *, *::before, *::after { box-sizing: border-box; } /* Remove default margin */ body, h1, h2, h3, h4, h5, h6, p, ul, ol, li, figure, figcaption, blockquote, dl, dd { margin: 0; } /* Set core body defaults */ body { margin: 0; min-height: 100vh; scroll-behavior: smooth; -webkit-font-smoothing: antialiased; -moz-osx-font-smoothing: grayscale; text-rendering: optimizeSpeed; line-height: 1.5; color: ${color("text")}; background-color: ${color("bg")}; } /* Remove list styles on ul, ol elements with a class attribute */ ul[class], ol[class] { list-style: none; } /* Inherit fonts for inputs and buttons */ input, button, textarea, select { font: inherit; } /* Remove all animations and transitions for people that prefer not to see them */ @media (prefers-reduced-motion: reduce) { * { animation-duration: 0.01ms !important; animation-iteration-count: 1 !important; transition-duration: 0.01ms !important; scroll-behavior: auto !important; } } } `;
Thomas Murphy (Collector)

Thomas Murphy (1821 – August 17, 1901) was an Irish-American businessman and politician from New York City, serving as a New York state senator for a total of three terms, 1866 through 1867, and in 1879. He had joined the Republican Party and made his fortune selling equipment to the Union Army during the American Civil War. Afterward, he became part of the political machine run by US Senator from New York Roscoe Conkling, and was appointed as the Collector of the Port of New York from 1870 to 1871.

Life

Murphy was born in Ireland in 1821. He emigrated to the United States as a young man and entered the fur business. He became interested in politics, joining first the Whig party and later the Republicans. In 1848, he married Mary Gibbs (died 1897), and they had five children. Their son, Edgar Gibbs Murphy, became well known as a champion pigeon-shooter. Another son, Thomas Vinton Murphy, married Cora Howarth. They had a business running munitions and a gambling house in the 1880s.

Murphy made his fortune selling equipment to the Union Army during the American Civil War, and soon thereafter became involved with the Republican political machine run by Roscoe Conkling. He was a member of the New York State Senate (7th D.) in 1866 and 1867. In 1870, Conkling asked President Ulysses S. Grant to appoint Murphy to the office of Collector. Murphy antagonized other New York Republican factions by firing their members from Custom House jobs and replacing them with men loyal to Conkling. Murphy became sufficiently unpopular that Grant was forced to replace him, appointing Murphy's friend, Chester A. Arthur, to the post in his place. After his removal, Murphy ran for Congress from New York's 9th congressional district, but was defeated. He was elected again as a member of the State Senate in 1879.

He eventually owned a horse farm in Deal, New Jersey. He died at his home in 1901 of kidney disease. His funeral was held at St. Patrick's Cathedral in New York. He was buried at Woodlawn Cemetery in the Bronx.
Dual-functioning phage-derived peptides encourage human bone marrow cell-specific attachment to mineralized biomaterials

Abstract

Cell-instructive mineralized biomaterials are a promising alternative to conventional auto-, allo-, and xenograft therapies for the reconstruction of critical-sized defects. Extracellular matrix proteins, peptide domains, and functional motifs demonstrating cell and mineral binding activity have been used to improve cell attachment. However, these strategies vary in their tissue regeneration outcomes due to a lack of specificity to both regenerative cell populations and the material substrates. In order to mediate cell-specific interactions on apatite surfaces, we identified peptide sequences with high affinity towards apatite (VTKHLNQISQSY, VTK) and clonally derived human bone marrow stromal cells (DPIYALSWSGMA, DPI) using phage display. The primary aims of this study were to measure apatite binding affinity, human bone marrow stromal cell (hBMSC) adhesion strength, and peptide specificity to hBMSCs when the apatite and cell-specific peptides are combined into a dual-functioning peptide. To assess binding affinity to hydroxyapatite (HA), binding isotherms were constructed and the peptide binding affinity (K1) determined. hBMSC, MC3T3, and mouse dermal fibroblast (MDF) adhesion strengths on biomimetic apatite functionalized with single- and dual-functioning peptide sequences were evaluated using a centrifugation assay. DPI-VTK had the highest binding strength towards hBMSCs (p<0.01). DPI-VTK, while promoting strong initial attachment of hBMSCs, did not encourage strong adhesion of MC3T3s or fibroblasts (p<0.01). Taken together, phage display is a promising strategy to identify preferential cell- and material-binding peptide sequences that can tether specific cell populations onto specific biomaterial chemistries.
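The isotherm-based affinity estimate mentioned in the abstract can be illustrated with a short fit. This is a generic sketch, not the study's actual data or model choice: it assumes a single-site Langmuir isotherm, and the concentration and binding values below are placeholders.

```python
# Generic sketch: estimate a binding constant from an adsorption isotherm,
# assuming a single-site Langmuir model Q = Qmax*K*c / (1 + K*c).
# The data arrays are placeholders, not values from the study.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, q_max, k):
    """Bound peptide Q at free peptide concentration c."""
    return q_max * k * c / (1.0 + k * c)

conc = np.array([5.0, 10, 25, 50, 100, 250, 500])      # placeholder, uM
bound = np.array([0.9, 1.6, 3.0, 4.2, 5.1, 6.0, 6.4])  # placeholder signal

(q_max, k), _ = curve_fit(langmuir, conc, bound, p0=(bound.max(), 0.01))
print(f"Qmax = {q_max:.2f}, binding constant K = {k:.2e} (per uM)")
```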
Constructive Quantum Interference in Photochemical Reactions. Interferences emerge when multiple pathways coexist together, leading toward the same result. Here, we report a theoretical study for a reaction scheme that leads to constructive quantum interference in a photoassociation (PA) reaction of a 87Rb Bose-Einstein condensate where the reactant spin state is prepared in a coherent superposition of multiple bare spin states. This is achieved by changing the reactive scattering channel in the PA reaction. As the origin of coherent control comes from the spin part of the wavefunction, we show that it is sufficient to use radio frequency (RF) coupling to achieve the superposition state. We simulate the RF coupling on a quantum processor (IBMQ Lima), and our results show that interferences can be used as a resource for the coherent control of photochemical reactions. The approach is general and can be employed to study a wide spectrum of chemical reactions in the ultracold regime.
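The RF preparation of a superposition state can be sketched in a few lines. This is a toy illustration, not the paper's actual control sequence or circuit: it assumes Qiskit and uses a single qubit as a stand-in for two bare spin states.

```python
# Toy sketch: prepare a coherent superposition of two bare spin states
# with a single RF-like rotation, of the kind one might run on an IBMQ
# backend. The rotation angle sets the superposition weights.
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

theta = np.pi / 2              # a "pi/2 pulse": equal superposition
qc = QuantumCircuit(1)         # one qubit = two bare spin states
qc.rx(theta, 0)                # RF coupling modelled as an X rotation

state = Statevector.from_instruction(qc)
print(state.probabilities())   # -> [0.5, 0.5]
```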
// src/main/java/chapter4/CreateThread.java
package chapter4;

public class CreateThread {
    public static void main(String[] args) {
        // Thread created from a lambda expression (Java 8+)
        Thread thread = new Thread(() -> {
            System.out.println("I'm a Thread");
        });

        // Thread created from an anonymous Runnable implementation
        Thread thread1 = new Thread(new Runnable() {
            @Override
            public void run() {
                System.out.println("I'm also a thread");
            }
        });

        thread.start();
        thread1.start();
    }
}
# Method excerpt from a ProDy Ensemble-like class; it assumes the class
# initialises self._coords, self._confs, self._weights, self._labels,
# self._n_atoms and self._n_csets. The imports below are what the body
# relies on, as found in ProDy.
import numpy as np

from prody.atomic import Atomic, AtomGroup
from prody.utilities import checkCoords, checkWeights


def addCoordset(self, coords, weights=None, label=None):
    """Add coordinate set(s) to the ensemble, with optional per-atom
    weights and a label (or one label per coordinate set)."""

    atoms = coords
    try:
        # Prefer the private accessor when reference coordinates exist
        if self._coords is not None and hasattr(coords, '_getCoordsets'):
            coords = coords._getCoordsets()
        else:
            coords = coords.getCoordsets()
    except AttributeError:
        label = label or 'Unknown'
    else:
        if coords is None:
            raise ValueError('coordinates are not set')
        elif label is None and isinstance(atoms, Atomic):
            # Derive a label from the atom group's title
            ag = atoms
            if not isinstance(atoms, AtomGroup):
                ag = atoms.getAtomGroup()
            label = ag.getTitle()
            if coords.shape[0] < ag.numCoordsets():
                label += 'm' + str(atoms.getACSIndex())
        else:
            label = label or str(coords)

    try:
        checkCoords(coords, csets=True, natoms=self._n_atoms)
    except TypeError:
        raise TypeError('coords must be a Numpy array or must have '
                        '`getCoords` attribute')

    if coords.ndim == 2:
        coords = coords.reshape((1, self._n_atoms, 3))

    n_csets, n_atoms, _ = coords.shape
    if not self._n_atoms:
        self._n_atoms = n_atoms

    if weights is None:
        weights = np.ones((n_csets, n_atoms, 1), dtype=float)
    else:
        weights = checkWeights(weights, n_atoms, n_csets)

    if n_csets > 1:
        # One label per set: either auto-number a string label or accept
        # a sequence of the right length
        if isinstance(label, str):
            self._labels.extend('{0}_m{1}'.format(label, i+1)
                                for i in range(n_csets))
        else:
            if len(label) != n_csets:
                raise ValueError('length of label and number of '
                                 'coordinate sets must be the same')
            self._labels.extend(label)
    else:
        self._labels.append(label)

    if self._confs is None and self._weights is None:
        self._confs = coords
        self._weights = weights
        self._n_csets = n_csets
    elif self._confs is not None and self._weights is not None:
        self._confs = np.concatenate((self._confs, coords), axis=0)
        self._weights = np.concatenate((self._weights, weights), axis=0)
        self._n_csets += n_csets
    else:
        raise RuntimeError('_confs and _weights must be set or None at '
                           'the same time')
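For context, a hypothetical usage sketch; it assumes the surrounding class is ProDy's Ensemble (which exposes this method), and the coordinates are made up.

```python
# Hypothetical usage, assuming ProDy's Ensemble is the surrounding class.
import numpy as np
from prody import Ensemble

ens = Ensemble('demo')
ens.setCoords(np.zeros((10, 3)))            # reference: 10 atoms
ens.addCoordset(np.random.rand(5, 10, 3))   # five conformations at once
print(ens.numCoordsets())                   # -> 5
```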
Join us for a Woods Seminar with Morgan Levy, University of California, San Diego.

The sinking of California's southern Central Valley has long been linked at large scales to groundwater overdraft associated with irrigated agriculture. Recent novel high-resolution vertical land surface displacement measurements (Global Positioning System enhanced interferometric synthetic aperture radar, or GInSAR) show that land subsidence and uplift are more heterogeneous across time and space than previously documented. Here we link these displacement measurements to land cover and weather patterns to show that local seasonal variation in potential vegetation water demand drives displacement. We find that land surface displacement direction, magnitude, and seasonality between 2015 and 2017 vary by land cover across the region. The mean rate of displacement over native rain-fed vegetation ranges from -4.9 ± 0.1 to -4.1 ± 0.1 mm/year in dry and wet years, respectively, and is in phase with water demand (i.e., increased water demand corresponds to uplift). In agricultural areas, subsidence is greater and out of phase with water demand (i.e., increased water demand corresponds to subsidence). The mean rate of displacement ranges from -31.0 ± 0.4 (dry) to -11.2 ± 0.2 (wet) mm/year for fruit and nut crops, and -128.2 ± 1.7 (dry) to -42.5 ± 0.5 (wet) mm/year for field crops. We leverage the large change in surface water availability at the end of the drought in 2016, spatial variation in land cover and water demand, and limited available groundwater withdrawal information to provide a first snapshot of spatially explicit groundwater-subsidence dynamics, and estimates of groundwater withdrawals implied by land surface displacement across different land cover classes. These methods have the potential to be applied widely for groundwater monitoring at policy-relevant spatial and temporal scales. Beyond the capacity of our approach to support detection of groundwater overdraft, our analysis has revealed subsurface basin boundaries and flow patterns that have direct implications for successful implementation of California's Sustainable Groundwater Management Act (SGMA).

Morgan Levy's research focuses on hydrology and water resources; land use and climate change impacts to human health and the environment; human-environmental system dynamics; and environmental data science. Levy's background includes training in physical hydrology and eco-hydrology; environmental and earth system science; and applied statistics, including causal empirical methods and spatiotemporal data analysis and modeling. Morgan is currently a postdoctoral researcher in the School of Global Policy and Strategy at the University of California (UC), San Diego. She holds a Ph.D. (Dec. 2016) and M.S. (2012) in Energy and Resources, and an M.A. in Statistics (2013) from UC Berkeley.
package controllers import ( "context" "go.uber.org/zap" "github.com/go-logr/zapr" . "github.com/onsi/ginkgo" . "github.com/onsi/gomega" metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" "k8s.io/apimachinery/pkg/types" "k8s.io/client-go/kubernetes/scheme" ctrl "sigs.k8s.io/controller-runtime" mysqlv1alpha1 "github.com/blaqkube/mysql-operator/mysql-operator/api/v1alpha1" // +kubebuilder:scaffold:imports ) var _ = Describe("Backup Controller", func() { It("Create a backup without any store/database", func() { ctx := context.Background() backup := mysqlv1alpha1.Backup{ ObjectMeta: metav1.ObjectMeta{ GenerateName: "backup-", Namespace: "default", }, Spec: mysqlv1alpha1.BackupSpec{ Store: "store", Instance: "instance", }, } zapLog, _ := zap.NewDevelopment() reconcile := &BackupReconciler{ Client: k8sClient, Log: zapr.NewLogger(zapLog), Scheme: scheme.Scheme, } Expect(k8sClient.Create(ctx, &backup)).To(Succeed()) backupName := types.NamespacedName{Namespace: backup.Namespace, Name: backup.Name} Expect(reconcile.Reconcile(context.TODO(), ctrl.Request{NamespacedName: backupName})).To(Equal(ctrl.Result{Requeue: false})) response := mysqlv1alpha1.Backup{} Expect(k8sClient.Get(ctx, backupName, &response)).To(Succeed()) Expect(response.Status.Reason).To(Equal(mysqlv1alpha1.BackupStoreAccessError), "Expected reconcile to change the status to Check") }) })
#include "enums/OpClass.hh" namespace Enums { const char *OpClassStrings[Num_OpClass] = { "No_OpClass", "IntAlu", "IntMult", "IntDiv", "FloatAdd", "FloatCmp", "FloatCvt", "FloatMult", "FloatMultAcc", "FloatDiv", "FloatMisc", "FloatSqrt", "SimdAdd", "SimdAddAcc", "SimdAlu", "SimdCmp", "SimdCvt", "SimdMisc", "SimdMult", "SimdMultAcc", "SimdShift", "SimdShiftAcc", "SimdSqrt", "SimdFloatAdd", "SimdFloatAlu", "SimdFloatCmp", "SimdFloatCvt", "SimdFloatDiv", "SimdFloatMisc", "SimdFloatMult", "SimdFloatMultAcc", "SimdFloatSqrt", "MemRead", "MemWrite", "FloatMemRead", "FloatMemWrite", "IprAccess", "InstPrefetch", }; } // namespace Enums #include "pybind11/pybind11.h" #include "pybind11/stl.h" #include <sim/init.hh> namespace py = pybind11; static void module_init(py::module &m_internal) { py::module m = m_internal.def_submodule("enum_OpClass"); py::enum_<Enums::OpClass>(m, "enum_OpClass") .value("No_OpClass", Enums::No_OpClass) .value("IntAlu", Enums::IntAlu) .value("IntMult", Enums::IntMult) .value("IntDiv", Enums::IntDiv) .value("FloatAdd", Enums::FloatAdd) .value("FloatCmp", Enums::FloatCmp) .value("FloatCvt", Enums::FloatCvt) .value("FloatMult", Enums::FloatMult) .value("FloatMultAcc", Enums::FloatMultAcc) .value("FloatDiv", Enums::FloatDiv) .value("FloatMisc", Enums::FloatMisc) .value("FloatSqrt", Enums::FloatSqrt) .value("SimdAdd", Enums::SimdAdd) .value("SimdAddAcc", Enums::SimdAddAcc) .value("SimdAlu", Enums::SimdAlu) .value("SimdCmp", Enums::SimdCmp) .value("SimdCvt", Enums::SimdCvt) .value("SimdMisc", Enums::SimdMisc) .value("SimdMult", Enums::SimdMult) .value("SimdMultAcc", Enums::SimdMultAcc) .value("SimdShift", Enums::SimdShift) .value("SimdShiftAcc", Enums::SimdShiftAcc) .value("SimdSqrt", Enums::SimdSqrt) .value("SimdFloatAdd", Enums::SimdFloatAdd) .value("SimdFloatAlu", Enums::SimdFloatAlu) .value("SimdFloatCmp", Enums::SimdFloatCmp) .value("SimdFloatCvt", Enums::SimdFloatCvt) .value("SimdFloatDiv", Enums::SimdFloatDiv) .value("SimdFloatMisc", Enums::SimdFloatMisc) .value("SimdFloatMult", Enums::SimdFloatMult) .value("SimdFloatMultAcc", Enums::SimdFloatMultAcc) .value("SimdFloatSqrt", Enums::SimdFloatSqrt) .value("MemRead", Enums::MemRead) .value("MemWrite", Enums::MemWrite) .value("FloatMemRead", Enums::FloatMemRead) .value("FloatMemWrite", Enums::FloatMemWrite) .value("IprAccess", Enums::IprAccess) .value("InstPrefetch", Enums::InstPrefetch) .value("Num_OpClass", Enums::Num_OpClass) .export_values() ; } static EmbeddedPyBind embed_enum("enum_OpClass", module_init);
Direct Crystallization of Silicoaluminophosphates onto the Surface of Open-Celled SiC Foam

Silicoaluminophosphates of the framework types CHA (SAPO-34) and AFI (SAPO-5) are deposited onto the surface of open-celled ceramic SiC foam monoliths by direct crystallization. To this end, a conventional hydrothermal synthesis is carried out in the presence of SiC foam monoliths with pore densities of 10 and 30 PPI, respectively. Moreover, the crystallization time is varied to determine its effect on the coating produced. In all cases, the direct crystallization of the respective SAPO material is successful. Irrespective of the pore density of the ceramic foam, a variation of the crystallization time (SAPO-5: 20 h/30 h; SAPO-34: 30 h/45 h) does not show any influence on the thickness of the layers produced, as suggested by optical microscopy. However, in the case of SAPO-34 these layers are clearly thicker than the layers of SAPO-5, which leads to a partial blockage of the pores of the ceramic foam with a pore density of 30 PPI. As proved by N2 sorption measurements, the coatings offer BET surface areas comparable with those reported for pure powder samples.
# Emits the Java boilerplate that propagates comparison constraints:
# each comparison knows the operation obtained by swapping its operands
# (flip) and by negating it (inverse).

class Cmp:
    pass

class LT(Cmp):
    def instance(self):
        return "LtExpr"

    def flip(self):
        return GT()

    def inverse(self):
        return GE()

    def propagate(self):
        return "propagateLT"

class LE(Cmp):
    def instance(self):
        return "LeExpr"

    def flip(self):
        return GE()

    def inverse(self):
        return GT()

    def propagate(self):
        return "propagateLE"

class EQ(Cmp):
    def instance(self):
        return "EqExpr"

    def flip(self):
        return self

    def inverse(self):
        return NE()

    def propagate(self):
        return "propagateEQ"

class NE(Cmp):
    def instance(self):
        return "NeExpr"

    def flip(self):
        return self

    def inverse(self):
        return EQ()

    def propagate(self):
        return "propagateNE"

class GE(Cmp):
    def instance(self):
        return "GeExpr"

    def flip(self):
        return LE()

    def inverse(self):
        return LT()

    def propagate(self):
        return "propagateGE"

class GT(Cmp):
    def instance(self):
        return "GtExpr"

    def flip(self):
        return LT()

    def inverse(self):
        return LE()

    def propagate(self):
        return "propagateGT"

to_do = [LT(), LE(), EQ(), NE(), GT(), GE()]

tmpl = """
        }} else if(expr instanceof {0}) {{
          if(isTrueBranch) {{
            propagated = new Pair<>(
              primOperations.{1}(leftOp, rightOp),
              primOperations.{2}(rightOp, leftOp)
            );
          }} else {{
            propagated = new Pair<>(
              primOperations.{3}(leftOp, rightOp),
              primOperations.{4}(rightOp, leftOp)
            );
          }}
"""

for td in to_do:
    # {0} type test; {1}/{2} true-branch propagators (both operand orders);
    # {3}/{4} the negated comparison for the false branch
    print(tmpl.format(td.instance(),
                      td.propagate(),
                      td.flip().propagate(),
                      td.inverse().propagate(),
                      td.inverse().flip().propagate()))
Ask Ethan: Where Does The 'Energy' For Dark Energy Come From?

The farther away we look, the closer in time we're seeing towards the Big Bang. The latest record-holder for quasars comes from a time when the Universe was just 690 million years old. These ultra-distant cosmological probes also show us a Universe that contains dark matter and dark energy, but doesn't explain where it came from.

[T]he total energy of the universe is increasing such that the energy inherent of space-time is kept constant as the universe expands. It is like, in order to build an extra cubic kilometer of space-time you need this quanta of energy. No more and no less. This energy has to come from somewhere. In everything else I know of, energy (including matter via E = mc²), cannot just appear from nowhere. So something must be giving energy into our universe to cause it to expand. [...] Will it ever stop?

The actual, scientific truth of what's going on is much more troubling than you might imagine. In our physical Universe, there are two things that are inextricably linked together: the expansion rate of the Universe and the breakdown of all the different types of energy present within it. The cardinal rule of General Relativity is that matter tells space how to curve, while curved space tells matter how to move. This is true, but it's not complete. It isn't just matter but also energy that affects the curvature of space, and it isn't simply curvature but also the expansion (or contraction) rate of space that gets affected. In particular, it's the energy density that determines the expansion rate. But there are different forms of energy in the Universe, and they each play slightly different roles in how the expansion rate changes over time.

While matter and radiation become less dense as the Universe expands owing to its increasing volume, dark energy is a form of energy inherent to space itself. As new space gets created in the expanding Universe, the dark energy density remains constant.

For something like normal matter, its energy contributions are actually intuitive. Matter is made of particles that contain mass, and even as the Universe changes, the individual particles themselves remain the same. Over time, the volume of the Universe increases, and as it does, the total matter density drops. Density is mass over volume: mass remains the same, volume increases, and so the density goes down. If all we had in the Universe was matter, the expansion rate would drop as the matter density dropped.

For radiation, there's an extra component to it. Sure, radiation is also made of particles, and as the volume expands, the number density of those particles decreases just as it does for matter. But radiation has a wavelength, and that wavelength gets stretched by the expanding Universe. Longer wavelengths mean lower energies, and so the expansion rate drops faster in a radiation-filled Universe than in a matter-filled one.

But for a Universe filled with dark energy, the story is very different. Dark energy is caused by energy inherent to the fabric of space itself, and as the Universe expands, it's the energy density — the energy-per-unit-volume — that remains constant. As a result, a Universe filled with dark energy will see its expansion rate remain constant, rather than drop at all.

Various components of and contributors to the Universe's energy density, and when they might dominate. If cosmic strings or domain walls existed in any appreciable amount, they'd contribute significantly to the expansion of the Universe.
There could even be additional components that we no longer see, or that haven't appeared yet! Note that by the time we reach today, dark energy dominates, matter is still somewhat important, but radiation is negligible. In the very distant past, only radiation was important.

"Hang on," you might object, thinking, "I thought you said the Universe's expansion was accelerating?" There's a very important point here that doesn't get emphasized enough: there are two different things scientists talk about when it comes to the expansion of the Universe. One is the expansion rate — or the Hubble rate — of the Universe. This behaves exactly as we described above: it drops for matter, it drops faster for radiation, and it asymptotes to a positive constant for dark energy. But the second thing is how quickly an individual galaxy appears to recede from us over time.

An illustration of how redshifts work in the expanding Universe. As a galaxy gets more and more distant, it must travel a greater distance and for a greater time through the expanding Universe. In a dark-energy dominated Universe, this means that individual galaxies will appear to speed up in their recession from us.

As time goes on, a galaxy gets farther and farther away from us. Since the expansion rate is a speed-per-unit-distance (e.g., 70 km/s/Mpc), a galaxy that's farther away (say, 100 Mpc vs. 10 Mpc) will appear to recede at a faster speed (7,000 km/s vs. 700 km/s). If your Universe is filled with matter or radiation, the expansion rate drops faster than your galaxy's distance increases, so the net recession speed will drop over time: your Universe will be decelerating. If your Universe is dominated by dark energy, however, the net recession speed will increase over time: your Universe is accelerating.

Our Universe, today, is made of approximately 68% dark energy. Starting around 6 billion years ago, our Universe made the switch to accelerating from decelerating, based on the balance of all the different things within it.

The relative importance of different energy components in the Universe at various times in the past. Note that when dark energy reaches a number near 100% in the future, the energy density of the Universe (and, therefore, the expansion rate) will remain constant arbitrarily far ahead in time.

But how is this okay? It seems like a Universe filled with dark energy doesn't conserve energy. If the energy density — energy-per-unit-volume — remains constant, but the volume of the Universe is increasing, doesn't that mean the total amount of energy in the Universe is increasing? And doesn't that violate the conservation of energy?

This should bother you! After all, we think that energy should be conserved in any and all physical processes that take place in the Universe. Does General Relativity offer a possible violation of energy conservation?

If you had a static spacetime that weren't changing, energy conservation would be guaranteed. But if the fabric of space changes as the objects you're interested in move through them, there is no longer an energy conservation law under the laws of General Relativity.

The scary answer is maybe, actually. There are a lot of quantities that General Relativity does an excellent and precise job of defining, and energy is not one of them. In other words, there is no mandate that energy must be conserved from Einstein's equations; global "energy" is not defined by General Relativity at all! In fact, we can make a very general statement about when energy is and isn't conserved.
When you have particles interacting in a static background of spacetime, energy is truly conserved. But when the space through which particles move is changing, the total energy of those particles is not conserved. This is true for photons redshifting in an expanding Universe, and it's true for a Universe dominated by dark energy.

But that answer, though technically correct, isn't the end of the story. We can come up with a new definition for energy when the space is changing; but we have to be careful when we do. There is a very smart way of looking at "energy" that allows us to show, in fact, that energy is conserved even in this seemingly paradoxical situation.

I want you to remember that, in addition to chemical, electrical, thermal, kinetic, and potential energies, among others, there's also work. Work, in physics, is when you apply a force to an object in the same direction as the distance it moves; this adds energy to the system. If the direction is opposite, you do negative work; this subtracts energy from the system.

A good analogy is to think of gas. What happens if you heat up (add energy to) that gas? The molecules inside move faster as they gain energy, meaning they increase their speed, and they spread out to take up more space more quickly. But what happens, instead, if you heat up gas that's enclosed in a container? Yes, the molecules heat up, they move faster, and they try to spread out, but in this case, they often run into the walls of the container, creating an extra positive pressure on the walls. The container's walls are pushed outward, which costs energy: the molecules are doing work on it!

The effects of increasing the temperature of a gas inside a container. The outward pressure can result in an increase in volume, where the interior molecules do work on the container walls.

This is very, very analogous to what happens in the expanding Universe. If your Universe were filled with radiation (photons), each quantum would have an energy, given by a wavelength, and as the Universe expands, that photon wavelength gets stretched. Sure, the photons are losing energy, but there is work being done on the Universe itself by everything with a pressure inside of it!

Conversely, if your Universe were filled with dark energy, it also has not only an energy density, but a pressure, too. The big difference, though, is that the pressure from dark energy is negative, which means we have the opposite situation we had for radiation. As the container's walls expand, they're doing work on the fabric of space itself!

Conventionally, we're used to things expanding because there's a positive (outward) pressure coming from inside of them. The counterintuitive thing about dark energy is that it has a pressure of the opposite sign, but still causes the fabric of space to expand.

…the patch does negative work on its surroundings, because it has negative pressure. Assuming the patch expands adiabatically, one may equate this negative work to the increase of mass/energy of the patch. One thereby recovers the correct equation of state for dark energy: P = -ρc². So the mathematics is consistent.

Which, again, still doesn't mean that energy is conserved. It simply gives us an intelligent way to look at this problem.

There is a large suite of scientific evidence that supports the picture of the expanding Universe and the Big Bang, complete with dark energy. The late-time accelerated expansion doesn't strictly conserve energy, but the reasoning behind that is fascinating as well.
When particles interact in an unchanging spacetime, energy must be conserved. When the spacetime they're in changes, that conservation law no longer holds. If you redefine energy to include the work done, both positive and negative, by a patch of space on its surroundings, you can save the conservation of energy in an expanding Universe. This is true for both positive-pressure quantities (like photons) and negative pressure ones (like dark energy). But this redefinition is not robust; it's simply a mathematical redefinition we can use to force energy to be conserved. The truth of the matter is that energy is not conserved in an expanding Universe. Perhaps in a quantum theory of gravity, it will be. But in General Relativity, we have no good way of defining it at all.
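To make the bookkeeping above concrete, here is a minimal numerical sketch. The density parameters are illustrative round numbers rather than fitted values, and the scalings are the standard matter, radiation, and dark energy behaviors described in the article.

```python
# A minimal sketch of the standard FRW scalings: it shows (a) that H(a)
# approaches a constant once dark energy dominates, and (b) that the
# dark energy in a comoving volume grows as a^3 while the matter energy
# in that same volume stays fixed.
import numpy as np

H0 = 70.0                                      # km/s/Mpc, illustrative
omega_m, omega_r, omega_de = 0.31, 9e-5, 0.69  # round numbers

def hubble(a):
    """Friedmann equation: H(a) = H0 * sqrt(Om*a^-3 + Or*a^-4 + Ode)."""
    return H0 * np.sqrt(omega_m * a**-3 + omega_r * a**-4 + omega_de)

for a in (0.1, 0.5, 1.0, 2.0, 10.0):
    matter_energy = (omega_m * a**-3) * a**3  # density x volume: constant
    dark_energy = omega_de * a**3             # constant density x growing volume
    print(f"a={a:5.1f}  H={hubble(a):8.1f} km/s/Mpc  "
          f"matter={matter_energy:.2f}  dark energy={dark_energy:.2f}")
```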
/* Released under the BSD 2-Clause License
 *
 * Copyright © 2018-present, terrestris GmbH & Co. KG and GeoStyler contributors
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions are met:
 *
 * * Redistributions of source code must retain the above copyright notice,
 *   this list of conditions and the following disclaimer.
 *
 * * Redistributions in binary form must reproduce the above copyright notice,
 *   this list of conditions and the following disclaimer in the documentation
 *   and/or other materials provided with the distribution.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
 * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE
 * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
 * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
 * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
 * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
 * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
 * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
 * POSSIBILITY OF SUCH DAMAGE.
 */

import * as React from 'react';

import { Collapse, Form } from 'antd';

import {
  Symbolizer,
  FillSymbolizer,
  PointSymbolizer,
  GraphicType
} from 'geostyler-style';

import ColorField from '../Field/ColorField/ColorField';
import OpacityField from '../Field/OpacityField/OpacityField';
import GraphicEditor from '../GraphicEditor/GraphicEditor';
import WidthField from '../Field/WidthField/WidthField';

import _cloneDeep from 'lodash/cloneDeep';
import _get from 'lodash/get';
import _isEqual from 'lodash/isEqual';

import { localize } from '../../LocaleWrapper/LocaleWrapper';
import en_US from '../../../locale/en_US';
import LineDashField from '../Field/LineDashField/LineDashField';

import { CompositionContext, Compositions } from '../../../context/CompositionContext/CompositionContext';
import CompositionUtil from '../../../Util/CompositionUtil';
import withDefaultsContext from '../../../hoc/withDefaultsContext';
import { DefaultValues } from '../../../context/DefaultValueContext/DefaultValueContext';

const Panel = Collapse.Panel;

// i18n
export interface FillEditorLocale {
  fillOpacityLabel?: string;
  fillColorLabel?: string;
  outlineColorLabel?: string;
  outlineWidthLabel?: string;
  graphicFillTypeLabel?: string;
  outlineDasharrayLabel?: string;
  opacityLabel?: string;
  outlineOpacityLabel?: string;
}

interface FillEditorDefaultProps {
  locale: FillEditorLocale;
}

// non default props
export interface FillEditorProps extends Partial<FillEditorDefaultProps> {
  symbolizer: FillSymbolizer;
  onSymbolizerChange?: (changedSymb: Symbolizer) => void;
  defaultValues: DefaultValues;
}

export class FillEditor extends React.Component<FillEditorProps> {

  static componentName: string = 'FillEditor';

  public static defaultProps: FillEditorDefaultProps = {
    locale: en_US.GsFillEditor
  };

  public shouldComponentUpdate(nextProps: FillEditorProps): boolean {
    const diffProps = !_isEqual(this.props, nextProps);
    return diffProps;
  }

  onFillColorChange = (value: string) => {
    const { onSymbolizerChange } = this.props;
    const symbolizer: FillSymbolizer = _cloneDeep(this.props.symbolizer);
    symbolizer.color = value;
    if
(onSymbolizerChange) { onSymbolizerChange(symbolizer); } }; onFillOpacityChange = (value: number) => { const { onSymbolizerChange } = this.props; const symbolizer: FillSymbolizer = _cloneDeep(this.props.symbolizer); symbolizer.fillOpacity = value; if (onSymbolizerChange) { onSymbolizerChange(symbolizer); } }; onOpacityChange = (value: number) => { const { onSymbolizerChange } = this.props; const symbolizer: FillSymbolizer = _cloneDeep(this.props.symbolizer); symbolizer.opacity = value; if (onSymbolizerChange) { onSymbolizerChange(symbolizer); } }; onOutlineOpacityChange = (value: number) => { const { onSymbolizerChange } = this.props; const symbolizer: FillSymbolizer = _cloneDeep(this.props.symbolizer); symbolizer.outlineOpacity = value; if (onSymbolizerChange) { onSymbolizerChange(symbolizer); } }; onOutlineColorChange = (value: string) => { const { onSymbolizerChange } = this.props; const symbolizer: FillSymbolizer = _cloneDeep(this.props.symbolizer); symbolizer.outlineColor = value; if (onSymbolizerChange) { onSymbolizerChange(symbolizer); } }; onOutlineWidthChange = (value: number) => { const { onSymbolizerChange } = this.props; const symbolizer: FillSymbolizer = _cloneDeep(this.props.symbolizer); symbolizer.outlineWidth = value; if (onSymbolizerChange) { onSymbolizerChange(symbolizer); } }; onOutlineDasharrayChange = (value: number[]) => { const { onSymbolizerChange } = this.props; const symbolizer: FillSymbolizer = _cloneDeep(this.props.symbolizer); symbolizer.outlineDasharray = value; if (onSymbolizerChange) { onSymbolizerChange(symbolizer); } }; onGraphicChange = (gFill: PointSymbolizer) => { const { onSymbolizerChange } = this.props; const symbolizer: FillSymbolizer = _cloneDeep(this.props.symbolizer); symbolizer.graphicFill = gFill; if (onSymbolizerChange) { onSymbolizerChange(symbolizer); } }; /** * Wraps a Form Item around a given element and adds its locale * to the From Item label. */ wrapFormItem = (locale: string, element: React.ReactElement): React.ReactElement => { const formItemLayout = { labelCol: { span: 8 }, wrapperCol: { span: 16 } }; return element == null ? 
null : ( <Form.Item label={locale} {...formItemLayout} > {element} </Form.Item> ); }; render() { const { symbolizer, locale, defaultValues } = this.props; const { color, fillOpacity, outlineColor, graphicFill, outlineWidth, outlineDasharray, opacity, outlineOpacity } = symbolizer; return ( <CompositionContext.Consumer> {(composition: Compositions) => ( <div className="gs-fill-symbolizer-editor" > <Collapse bordered={false} defaultActiveKey={['1']}> <Panel header="General" key="1"> { this.wrapFormItem( locale.fillColorLabel, CompositionUtil.handleComposition({ composition, path: 'FillEditor.fillColorField', onChange: this.onFillColorChange, propName: 'color', propValue: color, defaultValue: defaultValues?.FillEditor?.defaultFillColor, defaultElement: <ColorField /> }) ) } { this.wrapFormItem( locale.fillOpacityLabel, CompositionUtil.handleComposition({ composition, path: 'FillEditor.fillOpacityField', onChange: this.onFillOpacityChange, propName: 'opacity', propValue: fillOpacity, defaultValue: defaultValues?.FillEditor?.defaultFillOpacity, defaultElement: <OpacityField /> }) ) } { this.wrapFormItem( locale.opacityLabel, CompositionUtil.handleComposition({ composition, path: 'FillEditor.opacityField', onChange: this.onOpacityChange, propName: 'opacity', propValue: opacity, defaultValue: defaultValues?.FillEditor?.defaultOpacity, defaultElement: <OpacityField /> }) ) } { this.wrapFormItem( locale.outlineOpacityLabel, CompositionUtil.handleComposition({ composition, path: 'FillEditor.outlineOpacityField', onChange: this.onOutlineOpacityChange, propName: 'opacity', propValue: outlineOpacity, defaultValue: defaultValues?.FillEditor?.defaultOutlineOpacity, defaultElement: <OpacityField /> }) ) } { this.wrapFormItem( locale.outlineColorLabel, CompositionUtil.handleComposition({ composition, path: 'FillEditor.outlineColorField', onChange: this.onOutlineColorChange, propName: 'color', propValue: outlineColor, defaultValue: defaultValues?.FillEditor?.defaultOutlineColor, defaultElement: <ColorField /> }) ) } { this.wrapFormItem( locale.outlineWidthLabel, CompositionUtil.handleComposition({ composition, path: 'FillEditor.outlineWidthField', onChange: this.onOutlineWidthChange, propName: 'width', propValue: outlineWidth, defaultValue: defaultValues?.FillEditor?.defaultOutlineWidth, defaultElement: <WidthField /> }) ) } { this.wrapFormItem( locale.outlineDasharrayLabel, CompositionUtil.handleComposition({ composition, path: 'FillEditor.outlineDasharrayField', onChange: this.onOutlineDasharrayChange, propName: 'dashArray', propValue: outlineDasharray, defaultElement: <LineDashField /> }) ) } </Panel> <Panel header="Graphic Fill" key="2"> { CompositionUtil.handleComposition({ composition, path: 'FillEditor.graphicEditorField', onChange: this.onGraphicChange, onChangeName: 'onGraphicChange', propName: 'graphic', propValue: graphicFill, defaultElement: ( <GraphicEditor graphicTypeFieldLabel={locale.graphicFillTypeLabel} graphic={graphicFill} graphicType={_get(graphicFill, 'kind') as GraphicType} /> ) }) } </Panel> </Collapse> </div> )} </CompositionContext.Consumer> ); } } export default withDefaultsContext(localize(FillEditor, FillEditor.componentName));
Interactive detection of 3D models of building's roofing for the estimation of the solar energy potential

The paper presents work in progress on the design and implementation of an interactive system for the detection of a building's roofing characteristics. For each of its pitches, data concerning height, shape, orientation, slope and useful area are estimated at different precision levels. The system operates on a cartographic map and two pre-processed aerial photographs that are aligned for building selection, image segmentation and 3D modelling. Each building's roofing is automatically classified, and its features are used for disparity computation from two stereoscopic views for precise 3D modelling. Different disparity measurement algorithms are being tested to measure their accuracy against reference test buildings.
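The abstract leaves the disparity algorithms unspecified. As one generic baseline of the kind such a system might test, here is a semi-global block-matching sketch with OpenCV; the file names and parameters are placeholders, not the paper's actual setup.

```python
# Generic stereo disparity baseline using OpenCV's semi-global matcher.
# File names and parameters are placeholders for illustration only.
import cv2

left = cv2.imread("aerial_left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("aerial_right.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=64,   # search range; must be divisible by 16
    blockSize=5,
)
# compute() returns fixed-point disparities scaled by 16
disparity = matcher.compute(left, right).astype("float32") / 16.0
# Larger disparity = closer to the cameras, i.e. higher roof points.
```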
Effect of nicotine on spermatogenesis in adult albino rats

This study aimed to assess the effects of nicotine on spermatogenesis in 140 mature male albino rats divided into group A (controls), group B (sham controls), group C (nicotine-treated) and group D (nicotine withdrawal). Group C was subdivided into CI, CII and CIII according to the dose of injected nicotine (0.2, 0.4 and 0.6 mg nicotine per 100 g per day), and each subgroup was further subdivided according to the treatment duration into subgroups a, b and c, which received nicotine for 2, 4 and 8 weeks. Group D received nicotine for 8 weeks followed by withdrawal for another 8 weeks to assess testicular recovery. Testicular tissue sections were subjected to haematoxylin and eosin and Masson's trichrome stains and to morphometry. The results showed that nicotine caused degenerative changes in the seminiferous tubules, revealed by altered general tubular architecture, decreased thickness of the spermatogenic cell masses, Sertoli cell vacuolation and thickened basal lamina. These changes were proportional to the nicotine dose and duration. Following nicotine withdrawal, regeneration of the damaged seminiferous tubules was observed to be rather complete in the CI group. It is concluded that nicotine could adversely affect testicular spermatogenesis in a dose- and time-dependent manner, which would be almost reversible after nicotine withdrawal, especially after small doses.
THE Tories intend to make an election issue of the loss of sovereignty involved in Tony Blair's decision to sign away at least 30 EU vetoes in the Nice Treaty agreed after a marathon five-day summit. William Hague said yesterday that the Conservatives would not ratify the treaty as it stood if they gained power. He rejected Mr Blair's assurances that it "advanced British interests" and said it was a "major step" towards a European superstate. Although Mr Blair and the 14 other leaders completed the treaty in the early hours of yesterday, a final version will not be available for formal signature for at least two months. It will then have to be ratified by national parliaments throughout Europe. The timetable makes it unlikely that Parliament could approve the legislation ratifying the treaty before a general election expected in May - and it would be a matter for a new government. Labour has tried to play down the prospects of early entry to the euro to avert a Tory "save the pound" election campaign. But the Tory leadership believes that loss of the national veto powers, coupled with the plans for a European defence force, will arouse public concerns over the pace of integration. Mr Hague said the Government should hold a referendum on the treaty and Euro force. If he came to power he would seek to renegotiate many of the elements of the treaty. The Prime Minister, looking weary, left Nice claiming that he had "fought Britain's corner" and preserved the veto in key areas of national interest. These included moves to harmonise tax and social security. He defended the decision to agree to more qualified majority voting, saying it was essential in an enlarged EU of 27 or 30 countries to ensure that one small state could not block decisions. Mr Blair tried to play down the issues on which he gave up the veto. Nine of the changes dealt with freedom of movement, where Britain could decide whether or not to take part, he said. The others were primarily about the efficiency of economic management and the single market, where majority voting was in Britain's interest. Last night, more than 12 hours after the summit finished, Downing Street did not have a comprehensive list of how many vetoes had been given up. A provisional list circulating at Westminster put it as high as 39. The Nice deal sought to streamline EU procedures, paving the way for up to a dozen new nations, mainly former Iron Curtain states, to join from 2004. But many of the smaller states were angry at the way the bigger countries, particularly France, Germany and Britain, sought to maintain their power and influence in an enlarged community. The agreement almost foundered late on when Belgium threatened to walk out. There was a widespread view that Nice had achieved only the minimum of reform necessary for enlargement to go ahead. Romano Prodi, the president of the European Commission, said the summit had achieved "only half" of what was needed. There was bitter criticism of the way President Chirac of France had handled the negotiations, with bleary-eyed delegations left without food or drink in the final hours of critical bargaining. Mr Blair called for an end to all-night wrangling as a way of conducting community business. His official spokesman complained that the first two days of the summit had been taken up by the various delegations setting out positions, which could have been done before they reached Nice. 
In future Britain would like to see an end to national "grandstanding" and the rituals of formal banquets and photocalls, with future summits restricted to two or three properly equipped venues such as Brussels.
// Copyright (c) 2019 The DAML Authors. All rights reserved.
// SPDX-License-Identifier: Apache-2.0

package com.digitalasset.testing;

import com.daml.ledger.javaapi.data.*;
import org.junit.jupiter.api.Test;
import org.junit.platform.runner.JUnitPlatform;
import org.junit.runner.RunWith;
import test.genmapmod.Box;
import test.recordmod.Pair;
import test.variantmod.Either;
import test.variantmod.either.*;

import java.math.BigDecimal;
import java.util.HashMap;
import java.util.Map;

import static org.junit.jupiter.api.Assertions.assertEquals;

@RunWith(JUnitPlatform.class)
public class GenMapTestFor1_dev {

    private BigDecimal bg1 = new BigDecimal("1.0000000000");
    private BigDecimal bg2 = new BigDecimal("-2.2222222222");
    private BigDecimal bg3 = new BigDecimal("3.3333333333");

    @Test
    void genMap2Value2GenMap() {
        HashMap<Pair<Long, BigDecimal>, Either<Long, BigDecimal>> map = new HashMap<>();
        map.put(new Pair<>(1L, bg1), new Right<>(bg1));
        map.put(new Pair<>(2L, bg2), new Left<>(2L));
        map.put(new Pair<>(3L, bg3), new Right<>(bg3));
        Box b = new Box(map, "alice");
        assertEquals(Box.fromValue(b.toValue()), b);
    }

    private Record pair(Long fst, BigDecimal snd) {
        return new Record(
                new Record.Field("fst", new Int64(fst)),
                new Record.Field("snd", new Numeric(snd))
        );
    }

    private Variant left(Long l) {
        return new Variant("Left", new Int64(l));
    }

    private Variant right(BigDecimal r) {
        return new Variant("Right", new Numeric(r));
    }

    @Test
    void value2GenMap2value() {
        Map<Value, Value> value = new HashMap<Value, Value>();
        value.put(pair(1L, bg1), left(1L));
        value.put(pair(-2L, bg2), right(bg2));
        value.put(pair(3L, bg3), left(3L));
        Value map = new DamlGenMap(value);
        Record b = new Record(
                new Record.Field("x", map),
                new Record.Field("party", new Party("alice"))
        );
        assertEquals(Box.fromValue(b).toValue(), b);
    }
}
Counter-imaging: myth-making and Americanization in Israeli Labor Party campaign ads, 2003

For several decades, the United States has exported not only its particular definition of democracy to developing nations but also the style of modern televisual politics. As a result, the nature of televised commercials in election campaigns in many nations is designed by US-based, -trained or -inspired consultants. This article examines ads run in the 2003 Israeli elections by the Israeli Labor party. Findings show that the ads (a) are indistinguishable in style from American ads; (b) follow a particular American formula of counter-imaging, that is, creating images of candidates and parties contrary to stereotypes held by voters; and (c) obfuscate the actual issues that the embattled nation faced and still faces. The article thus argues that the American-modern style of campaign ad damages substantive and constructive political communication in nations wrestling with intensely complex issues.
Ocular findings in IgA nephropathy with renal failure and hypertension.

A patient with IgA nephropathy developed cotton-wool spots and serous retinal detachments. Fluorescein angiography demonstrated choroidal infarcts in both eyes from hypertensive retinopathy in acute-on-chronic renal failure. Plasmapheresis and hemodialysis led to visual recovery. Acute ocular hypertensive changes from IgA nephropathy may be reversed by plasmapheresis and hemodialysis.
import { gql } from "@apollo/client/core";
import { Content_ElementType_Enum, ElementDataBlob } from "@clowdr-app/shared-types/build/content";
import assert from "assert";
import {
    CompleteConferencePrepareJobDocument,
    CreateVideoRenderJobDocument,
    GetEventsWithoutVonageSessionDocument,
    GetVideoBroadcastElementsDocument,
    OtherConferencePrepareJobsDocument,
} from "../generated/graphql";
import { apolloClient } from "../graphqlClient";
import { failConferencePrepareJob } from "../lib/conferencePrepareJob";
import { createEventVonageSession } from "../lib/event";
import { ConferencePrepareJobData, Payload } from "../types/hasura/event";
import { callWithRetry } from "../utils";

gql`
    query OtherConferencePrepareJobs($conferenceId: uuid!, $conferencePrepareJobId: uuid!) {
        conference_PrepareJob(
            where: {
                jobStatusName: { _eq: IN_PROGRESS }
                conferenceId: { _eq: $conferenceId }
                id: { _neq: $conferencePrepareJobId }
            }
        ) {
            id
            updatedAt
        }
    }

    query GetVideoBroadcastElements($conferenceId: uuid) {
        content_Element(where: { conferenceId: { _eq: $conferenceId }, typeName: { _eq: VIDEO_BROADCAST } }) {
            id
            data
        }
    }
`;

export async function handleConferencePrepareJobInserted(payload: Payload<ConferencePrepareJobData>): Promise<void> {
    assert(payload.event.data.new, "Payload must contain new row data");
    const newRow = payload.event.data.new;

    console.log("Conference prepare: job triggered", {
        conferencePrepareJobId: newRow.id,
        conferenceId: newRow.conferenceId,
    });

    try {
        // get list of other in-progress jobs. If any are in progress, set this new one to failed and return.
        const otherJobs = await apolloClient.query({
            query: OtherConferencePrepareJobsDocument,
            variables: {
                conferenceId: newRow.conferenceId,
                conferencePrepareJobId: newRow.id,
            },
        });

        if (otherJobs.data.conference_PrepareJob.length > 0) {
            console.log(
                "Conference prepare: another job in progress, aborting.",
                otherJobs.data.conference_PrepareJob[0].id,
                newRow.id
            );
            throw new Error(
                `Another conference prepare job (${otherJobs.data.conference_PrepareJob[0].id}) is already in progress`
            );
        }

        const createdJob = await createBroadcastTranscodes(newRow.id, newRow.conferenceId);
        await createEventVonageSessionsBroadcastItems(newRow.conferenceId);
        console.log("Conference prepare: finished initialising job", newRow.id);

        if (!createdJob) {
            await callWithRetry(async () => {
                await apolloClient.mutate({
                    mutation: CompleteConferencePrepareJobDocument,
                    variables: {
                        id: newRow.id,
                    },
                });
            });
            console.log("Conference prepare: job completed without needing to render broadcast items", newRow.id);
        }
    } catch (e) {
        console.error("Conference prepare: fatal error while initialising job", e);
        await callWithRetry(async () => {
            await failConferencePrepareJob(newRow.id, e.message ??
"Unknown error while initialising job"); }); } } async function createBroadcastTranscodes(conferencePrepareJobId: string, conferenceId: string): Promise<boolean> { const videoBroadcastItems = await apolloClient.query({ query: GetVideoBroadcastElementsDocument, variables: { conferenceId, }, }); console.log("Conference prepare: found video broadcast items", { count: videoBroadcastItems.data.content_Element.length, conferencePrepareJobId, }); let createdJob = false; // Create broadcast transcodes for elements that need one for (const element of videoBroadcastItems.data.content_Element) { console.log("Conference prepare: preparing video broadcast element", { elementId: element.id, conferencePrepareJobId, }); const content: ElementDataBlob = element.data; if (content.length < 1) { console.warn("Conference prepare: no content item versions", { elementId: element.id, conferencePrepareJobId, }); continue; } const latestVersion = content[content.length - 1]; if (latestVersion.data.type !== Content_ElementType_Enum.VideoBroadcast) { console.warn("Conference prepare: invalid content item data (not a video broadcast)", { elementId: element.id, conferencePrepareJobId, }); continue; } if ( latestVersion.data.broadcastTranscode && latestVersion.data.broadcastTranscode.s3Url && latestVersion.data.broadcastTranscode.durationSeconds ) { console.log("Conference prepare: item already has up-to-date broadcast transcode", { elementId: element.id, conferencePrepareJobId, }); } else { console.log("Conference prepare: item needs broadcast transcode", { elementId: element.id, conferencePrepareJobId, }); if ( !latestVersion.data || !latestVersion.data.s3Url || latestVersion.data.s3Url === "" || !latestVersion.data.subtitles || !latestVersion.data.subtitles["en_US"] || !latestVersion.data.subtitles["en_US"].s3Url ) { console.log( "Conference prepare: Skipping item because it is missing one or more pieces of information needed to prepare it", { elementId: element.id, conferencePrepareJobId } ); } else { const broadcastRenderJobData: BroadcastRenderJobDataBlob = { type: "BroadcastRenderJob", subtitlesS3Url: latestVersion.data.subtitles["en_US"].s3Url, videoS3Url: latestVersion.data.s3Url, }; // Create a video render job to populate the broadcast content item await apolloClient.mutate({ mutation: CreateVideoRenderJobDocument, variables: { conferenceId, conferencePrepareJobId, data: broadcastRenderJobData, elementId: element.id, }, }); createdJob = true; } } } return createdJob; } gql` mutation CreateVideoRenderJob( $conferenceId: uuid! $conferencePrepareJobId: uuid! $elementId: uuid! $data: jsonb! ) { insert_video_VideoRenderJob_one( object: { conferenceId: $conferenceId conferencePrepareJobId: $conferencePrepareJobId elementId: $elementId data: $data jobStatusName: NEW } ) { id } } `; async function createEventVonageSessionsBroadcastItems(conferenceId: string): Promise<void> { console.log("Creating broadcast content items for presenter Vonage rooms", conferenceId); gql` query GetEventsWithoutVonageSession($conferenceId: uuid!) 
{ schedule_Event( where: { conferenceId: { _eq: $conferenceId }, _and: { _not: { eventVonageSession: {} } } } ) { id } } `; const eventsWithoutSessionResult = await apolloClient.query({ query: GetEventsWithoutVonageSessionDocument, variables: { conferenceId, }, }); for (const event of eventsWithoutSessionResult.data.schedule_Event) { console.log("Creating Vonage session for event", { eventId: event.id }); try { await createEventVonageSession(event.id, conferenceId); } catch (e) { console.error("Failed to create Vonage session", event.id, e); throw new Error(`Failed to create Vonage session: ${e.message}`); } } }
On inherent in-place and in-order features of the prime factor algorithm For the computation of the prime factor algorithm (PFA), an in-place and in-order approach is always desirable because it reduces both the memory required to store temporary results and the time needed to unscramble the output sequence into the proper order. Indeed, this unscrambling step can take up as much as 50% of the overall computation time. It is shown that the PFA has an intrinsic property that allows it to be realized easily in an in-place and in-order form; unlike previous proposals, no extra operations are required. The sequence length of the PFA computation must, however, be carefully selected, and the conditions under which a particular sequence length admits a natural in-place and in-order PFA computation are analyzed. The result is useful for both hardware and software realizations of the PFA.
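As context for the index arithmetic the abstract refers to, here is a minimal numpy sketch (an illustration, not the paper's construction) of the two prime-factor index maps for N = N1*N2 with coprime factors: Good's map on the input and the CRT map on the output. The paper's in-place, in-order question is, roughly, when these maps can be arranged so that no separate unscrambling pass over the output is needed.

import numpy as np

def pfa_dft(x, N1, N2):
    """DFT of length N = N1*N2 (gcd(N1, N2) = 1) via prime-factor index maps."""
    N = N1 * N2
    assert np.gcd(N1, N2) == 1 and len(x) == N
    r1 = np.arange(N1)[:, None]  # first-factor index (n1 on input, k1 on output)
    r2 = np.arange(N2)[None, :]  # second-factor index (n2 on input, k2 on output)
    # Good's (Ruritanian) input map: n = (N2*n1 + N1*n2) mod N
    xt = np.asarray(x, dtype=complex)[(N2 * r1 + N1 * r2) % N]
    # Short DFTs along each axis; the PFA needs no twiddle factors between stages
    Xt = np.fft.fft(np.fft.fft(xt, axis=0), axis=1)
    # CRT output map: the k with k = k1 (mod N1) and k = k2 (mod N2)
    q1 = pow(N2, -1, N1)  # modular inverse (Python 3.8+)
    q2 = pow(N1, -1, N2)
    X = np.empty(N, dtype=complex)
    X[(N2 * q1 * r1 + N1 * q2 * r2) % N] = Xt
    return X

x = np.random.randn(15)  # N = 3 * 5: coprime factors
assert np.allclose(pfa_dft(x, 3, 5), np.fft.fft(x))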
class MsgType:
    """Message types for messages between a core node and a client"""
    REQUEST_SETUP_DOMAIN = 0
    RESPONSE_SETUP_DOMAIN = 1
    REQUEST_SET_STATIC_NODE = 4
    RESPONSE_SET_STATIC_NODE = 5
    REQUEST_GET_CONFIG = 8
    RESPONSE_GET_CONFIG = 9
    REQUEST_MANIP_LEDGER_SUBSYS = 10
    RESPONSE_MANIP_LEDGER_SUBSYS = 11
    DOMAIN_PING = 12
    REQUEST_GET_DOMAINLIST = 13
    RESPONSE_GET_DOMAINLIST = 14
    REQUEST_INSERT_NOTIFICATION = 15
    CANCEL_INSERT_NOTIFICATION = 16
    REQUEST_GET_STATS = 17
    RESPONSE_GET_STATS = 18
    NOTIFY_DOMAIN_KEY_UPDATE = 19
    REQUEST_GET_NEIGHBORLIST = 21
    RESPONSE_GET_NEIGHBORLIST = 22
    REQUEST_GET_USERS = 23
    RESPONSE_GET_USERS = 24
    REQUEST_GET_FORWARDING_LIST = 25
    RESPONSE_GET_FORWARDING_LIST = 26
    REQUEST_GET_NODEID = 27
    RESPONSE_GET_NODEID = 28
    REQUEST_GET_NOTIFICATION_LIST = 29
    RESPONSE_GET_NOTIFICATION_LIST = 30
    REQUEST_CLOSE_DOMAIN = 31
    RESPONSE_CLOSE_DOMAIN = 32
    REQUEST_ECDH_KEY_EXCHANGE = 33
    RESPONSE_ECDH_KEY_EXCHANGE = 34

    REGISTER = 64
    UNREGISTER = 65
    MESSAGE = 66

    REQUEST_GATHER_SIGNATURE = 67
    RESPONSE_GATHER_SIGNATURE = 68
    REQUEST_SIGNATURE = 69
    RESPONSE_SIGNATURE = 70
    REQUEST_INSERT = 71
    RESPONSE_INSERT = 72
    NOTIFY_INSERTED = 73
    NOTIFY_CROSS_REF = 74

    REQUEST_SEARCH_TRANSACTION = 82
    RESPONSE_SEARCH_TRANSACTION = 83
    REQUEST_SEARCH_WITH_CONDITIONS = 86
    RESPONSE_SEARCH_WITH_CONDITIONS = 87
    REQUEST_TRAVERSE_TRANSACTIONS = 88
    RESPONSE_TRAVERSE_TRANSACTIONS = 89
    REQUEST_CROSS_REF_VERIFY = 90
    RESPONSE_CROSS_REF_VERIFY = 91
    REQUEST_CROSS_REF_LIST = 92
    RESPONSE_CROSS_REF_LIST = 93
    REQUEST_REPAIR = 94
    REQUEST_COUNT_TRANSACTIONS = 95
    RESPONSE_COUNT_TRANSACTIONS = 96  # assumed: follows the request/response pairing (a second 95 would collide with the request code)

    REQUEST_REGISTER_HASH_IN_SUBSYS = 128
    RESPONSE_REGISTER_HASH_IN_SUBSYS = 129
    REQUEST_VERIFY_HASH_IN_SUBSYS = 130
    RESPONSE_VERIFY_HASH_IN_SUBSYS = 131
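The REQUEST_*/RESPONSE_* constants above form adjacent pairs, which makes it easy to check that a reply matches its request. The helper below is a hypothetical sketch, not part of the original module; it assumes only the MsgType class defined above.

# Hypothetical helper (not part of the original module): map each REQUEST_*
# constant to its RESPONSE_* partner by name.
_RESPONSE_FOR = {
    getattr(MsgType, name): getattr(MsgType, "RESPONSE" + name[len("REQUEST"):])
    for name in dir(MsgType)
    if name.startswith("REQUEST_") and hasattr(MsgType, "RESPONSE" + name[len("REQUEST"):])
}

def expected_response(request_type):
    """Return the MsgType expected in reply to request_type, or None if the
    request has no RESPONSE_* partner (e.g. REQUEST_REPAIR, or
    REQUEST_INSERT_NOTIFICATION, which is cancelled rather than answered)."""
    return _RESPONSE_FOR.get(request_type)

assert expected_response(MsgType.REQUEST_GET_CONFIG) == MsgType.RESPONSE_GET_CONFIG
assert expected_response(MsgType.REQUEST_REPAIR) is None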
// Copyright 2013-2014 go-redis authors. All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//
// This is a modified version of gomemcache adapted to redis.
// Original code and license at https://github.com/bradfitz/gomemcache/

package redis

// WORK IN PROGRESS
//
// Redis commands
//
// Some commands take an integer timeout, in seconds. It's not a time.Duration
// because redis only supports seconds resolution for timeouts.
//
// Redis allows clients to block indefinitely by setting timeout to 0, but
// it does not work here. All functions below use the timeout not only to
// block the operation in redis, but also as a socket read timeout (+delta)
// to free up system resources.
//
// The default TCP read timeout is 200ms. If a timeout is required to
// be "indefinitely", then set it to something like 86400.
//
// See redis.DefaultTimeout for details.
//
// 🍺

import (
	"errors"
	"strings"
	"time"
)

// http://redis.io/commands/append
func (c *Client) Append(key, value string) (int, error) {
	v, err := c.execWithKey(true, "append", key, value)
	if err != nil {
		return 0, err
	}
	return iface2int(v)
}

// http://redis.io/commands/bgrewriteaof
// BgRewriteAOF is not fully supported on sharded connections.
func (c *Client) BgRewriteAOF() (string, error) {
	v, err := c.execOnFirst(false, "BGREWRITEAOF")
	if err != nil {
		return "", err
	}
	return iface2str(v)
}

// http://redis.io/commands/bgsave
// BgSave is not fully supported on sharded connections.
func (c *Client) BgSave() (string, error) {
	v, err := c.execOnFirst(false, "BGSAVE")
	if err != nil {
		return "", err
	}
	return iface2str(v)
}

// http://redis.io/commands/ping
// goes to first connection only
func (c *Client) Ping() error {
	v, err := c.execOnFirst(false, "PING")
	if err != nil {
		return err
	}
	s, err := iface2str(v)
	if err != nil {
		return err
	} else if s != "PONG" {
		return ErrServerError
	}
	return nil
}

// http://redis.io/commands/bitcount
// BitCount ignores start and end if start is a negative number.
func (c *Client) BitCount(key string, start, end int) (int, error) {
	var (
		v   interface{}
		err error
	)
	if start > -1 {
		v, err = c.execWithKey(true, "BITCOUNT", key, start, end)
	} else {
		v, err = c.execWithKey(true, "BITCOUNT", key)
	}
	if err != nil {
		return 0, err
	}
	return iface2int(v)
}

// http://redis.io/commands/bitop
// BitOp is not fully supported on sharded connections.
func (c *Client) BitOp(operation, destkey, key string, keys ...string) (int, error) {
	a := append([]string{"BITOP", operation, destkey, key}, keys...)
	v, err := c.execOnFirst(true, vstr2iface(a)...)
	if err != nil {
		return 0, err
	}
	return iface2int(v)
}

// blbrPop supports both BLPop and BRPop.
func (c *Client) blbrPop(cmd string, timeout int, keys ...string) (k, v string, err error) { var r interface{} r, err = c.execWithKeyTimeout( true, timeout, cmd, keys[0], append(vstr2iface(keys[1:]), timeout)..., ) if err != nil { return } if r == nil { err = ErrTimedOut return } switch r.(type) { case []interface{}: items := r.([]interface{}) if len(items) != 2 { err = ErrServerError return } // TODO: test types k = items[0].(string) v = items[1].(string) return } err = ErrServerError return } // http://redis.io/commands/blpop // BLPop is not fully supported on sharded connections. // A timeout of 0 uses DefaultTimeout, which is probably too low. func (c *Client) BLPop(timeout int, keys ...string) (k, v string, err error) { return c.blbrPop("BLPOP", timeout, keys...) } // http://redis.io/commands/brpop // BRPop is not fully supported on sharded connections. // A timeout of 0 uses DefaultTimeout, which is probably too low. func (c *Client) BRPop(timeout int, keys ...string) (k, v string, err error) { return c.blbrPop("BRPOP", timeout, keys...) } // http://redis.io/commands/brpoplpush // BRPopLPush is not fully supported on sharded connections. // A timeout of 0 uses DefaultTimeout, which is probably too low. func (c *Client) BRPopLPush(src, dst string, timeout int) (string, error) { t := c.Timeout // Extend the client's timeout for this operation only. // TODO: make sure it does not affect other concurrent calls. if t == 0 { c.Timeout = time.Duration(timeout)*time.Second + DefaultTimeout } else { c.Timeout = time.Duration(timeout)*time.Second + t } v, err := c.execWithKey(true, "BRPOPLPUSH", src, dst, timeout) c.Timeout = t if err != nil { return "", err } else if v == nil { return "", ErrTimedOut } return iface2str(v) } // http://redis.io/commands/client-kill // ClientKill is not fully supported on sharded connections. func (c *Client) ClientKill(kill_addr string) error { v, err := c.execOnFirst(false, "CLIENT KILL", kill_addr) if err != nil { return err } switch v.(type) { case string: return nil } return ErrServerError } // http://redis.io/commands/client-list // ClientList is not fully supported on sharded connections. func (c *Client) ClientList() ([]string, error) { v, err := c.execOnFirst(false, "CLIENT LIST") if err != nil { return nil, err } switch v.(type) { case string: return strings.Split(v.(string), "\n"), nil } return nil, ErrServerError } // http://redis.io/commands/client-setname // ClientSetName is not fully supported on sharded connections, and is useless here. // This driver creates connections on demand, thus naming them is pointless. func (c *Client) ClientSetName(name string) error { v, err := c.execOnFirst(false, "CLIENT SETNAME", name) if err != nil { return err } switch v.(type) { case string: return nil } return ErrServerError } // http://redis.io/commands/config-get // ConfigGet is not fully supported on sharded connections. func (c *Client) ConfigGet(name string) (map[string]string, error) { v, err := c.execOnFirst(false, "CONFIG GET", name) if err != nil { return nil, err } return iface2strmap(v), nil } // http://redis.io/commands/config-set // ConfigSet is not fully supported on sharded connections. func (c *Client) ConfigSet(name, value string) error { v, err := c.execOnFirst(false, "CONFIG SET", name, value) if err != nil { return err } switch v.(type) { case string: return nil } return ErrServerError } // http://redis.io/commands/config-resetstat // ConfigResetStat is not fully supported on sharded connections. 
func (c *Client) ConfigResetStat() error { v, err := c.execOnFirst(false, "CONFIG RESETSTAT") if err != nil { return err } switch v.(type) { case string: return nil } return ErrServerError } // http://redis.io/commands/dbsize // DBSize is not fully supported on sharded connections. func (c *Client) DBSize() (int, error) { v, err := c.execOnFirst(false, "DBSIZE") if err != nil { return 0, err } return iface2int(v) } // http://redis.io/commands/debug-segfault // DebugSegfault is not fully supported on sharded connections. func (c *Client) DebugSegfault() error { v, err := c.execOnFirst(false, "DEBUG SEGFAULT") if err != nil { return err } switch v.(type) { case string: return nil } return ErrServerError } // http://redis.io/commands/decr func (c *Client) Decr(key string) (int, error) { v, err := c.execWithKey(true, "DECR", key) if err != nil { return 0, err } return iface2int(v) } // http://redis.io/commands/decrby func (c *Client) DecrBy(key string, decrement int) (int, error) { v, err := c.execWithKey(true, "DECRBY", key, decrement) if err != nil { return 0, err } return iface2int(v) } // http://redis.io/commands/del // Del issues a plain DEL command to redis if the client is connected to a // single server. On sharding, it issues one DEL command per key, in the // server selected for each given key. func (c *Client) Del(keys ...string) (n int, err error) { if c.selector.Sharding() { n, err = c.delMulti(keys...) } else { n, err = c.delPlain(keys...) } return n, err } func (c *Client) delMulti(keys ...string) (int, error) { deleted := 0 for _, key := range keys { count, err := c.delPlain(key) if err != nil { return 0, err } deleted += count } return deleted, nil } func (c *Client) delPlain(keys ...string) (int, error) { if len(keys) > 0 { v, err := c.execWithKey(true, "DEL", keys[0], vstr2iface(keys[1:])...) if err != nil { return 0, err } return iface2int(v) } return 0, nil } // http://redis.io/commands/discard // TODO: Discard // http://redis.io/commands/dump func (c *Client) Dump(key string) (string, error) { v, err := c.execWithKey(true, "DUMP", key) if err != nil { return "", err } return iface2str(v) } // http://redis.io/commands/echo // Echo is not fully supported on sharded connections. func (c *Client) Echo(message string) (string, error) { v, err := c.execWithKey(true, "ECHO", message) if err != nil { return "", err } return iface2str(v) } // http://redis.io/commands/eval // Eval is not fully supported on sharded connections. func (c *Client) Eval(script string, numkeys int, keys, args []string) (interface{}, error) { a := []interface{}{ "EVAL", script, // escape? numkeys, strings.Join(keys, " "), strings.Join(args, " "), } v, err := c.execOnFirst(true, a...) if err != nil { return nil, err } return v, nil } // http://redis.io/commands/evalsha // EvalSha is not fully supported on sharded connections. func (c *Client) EvalSha(sha1 string, numkeys int, keys, args []string) (interface{}, error) { a := []interface{}{ "EVALSHA", sha1, numkeys, strings.Join(keys, " "), strings.Join(args, " "), } v, err := c.execOnFirst(true, a...) if err != nil { return nil, err } return v, nil } // http://redis.io/commands/exec // TODO: Exec // http://redis.io/commands/exists func (c *Client) Exists(key string) (bool, error) { v, err := c.execWithKey(true, "EXISTS", key) if err != nil { return false, err } return iface2bool(v) } // http://redis.io/commands/expire // Expire returns true if the timeout was set, or false if key does not exist // or the timeout could not be set. 
func (c *Client) Expire(key string, seconds int) (bool, error) { v, err := c.execWithKey(true, "EXPIRE", key, seconds) if err != nil { return false, err } return iface2bool(v) } // http://redis.io/commands/expireat // ExpireAt returns like Expire. func (c *Client) ExpireAt(key string, timestamp int) (bool, error) { v, err := c.execWithKey(true, "EXPIREAT", key, timestamp) if err != nil { return false, err } return iface2bool(v) } // http://redis.io/commands/flushall // FlushAll is not fully supported on sharded connections. func (c *Client) FlushAll() error { _, err := c.execOnFirst(false, "FLUSHALL") return err } // http://redis.io/commands/flushall // FlushDB is not fully supported on sharded connections. func (c *Client) FlushDB() error { _, err := c.execOnFirst(false, "FLUSHDB") return err } // http://redis.io/commands/get func (c *Client) Get(key string) (string, error) { v, err := c.execWithKey(true, "GET", key) if err != nil { return "", err } return iface2str(v) } // http://redis.io/commands/getbit func (c *Client) GetBit(key string, offset int) (int, error) { v, err := c.execWithKey(true, "GETBIT", key, offset) if err != nil { return 0, err } return iface2int(v) } // http://redis.io/commands/getrange func (c *Client) GetRange(key string, start, end int) (string, error) { v, err := c.execWithKey(true, "GETRANGE", key, start, end) if err != nil { return "", err } switch v.(type) { case string: return v.(string), nil } return "", ErrServerError } // http://redis.io/commands/getset func (c *Client) GetSet(key, value string) (string, error) { v, err := c.execWithKey(true, "GETSET", key, value) if err != nil { return "", err } return iface2str(v) } // http://redis.io/commands/incr func (c *Client) Incr(key string) (int, error) { v, err := c.execWithKey(true, "INCR", key) if err != nil { return 0, err } return iface2int(v) } // http://redis.io/commands/incrby func (c *Client) IncrBy(key string, increment int) (int, error) { v, err := c.execWithKey(true, "INCRBY", key, increment) if err != nil { return 0, err } return iface2int(v) } // http://redis.io/commands/keys func (c *Client) Keys(pattern string) ([]string, error) { keys := []string{} v, err := c.execOnFirst(true, "KEYS", pattern) if err != nil { return keys, err } return iface2vstr(v), nil } // http://redis.io/commands/scan func (c *Client) Scan(cursor string, options ...interface{}) (string, []string, error) { return c.scanCommandList("SCAN", "", cursor, options...) } // http://redis.io/commands/sscan func (c *Client) SScan(set string, cursor string, options ...interface{}) (string, []string, error) { return c.scanCommandList("SSCAN", set, cursor, options...) } // http://redis.io/commands/zscan func (c *Client) ZScan(zset string, cursor string, options ...interface{}) (string, map[string]string, error) { return c.scanCommandMap("ZSCAN", zset, cursor, options...) } // http://redis.io/commands/hscan func (c *Client) HScan(hash string, cursor string, options ...interface{}) (string, map[string]string, error) { return c.scanCommandMap("HSCAN", hash, cursor, options...) } // scanCommandList // SCAN and SSCAN func (c *Client) scanCommandList(cmd string, key string, cursor string, options ...interface{}) (string, []string, error) { empty := []string{} resp := []interface{}{} newCursor := "0" var v interface{} var err error if len(key) > 0 { // SSCAN x := []interface{}{cursor} v, err = c.execWithKey(true, cmd, key, append(x, options...)...) 
} else { // SCAN x := []interface{}{cmd, cursor} v, err = c.execOnFirst(true, append(x, options...)...) } if err != nil { return newCursor, empty, err } switch v.(type) { case []interface{}: resp = v.([]interface{}) } // New cursor to call switch resp[0].(type) { case string: newCursor = resp[0].(string) } switch resp[1].(type) { case []interface{}: return newCursor, iface2vstr(resp[1]), nil } return newCursor, empty, nil } // scanCommandMap // ZSCAN and HSCAN func (c *Client) scanCommandMap(cmd string, key string, cursor string, options ...interface{}) (string, map[string]string, error) { empty := map[string]string{} resp := []interface{}{} newCursor := "0" x := []interface{}{cursor} v, err := c.execWithKey(true, cmd, key, append(x, options...)...) if err != nil { return newCursor, empty, err } switch v.(type) { case []interface{}: resp = v.([]interface{}) } // New cursor to call switch resp[0].(type) { case string: newCursor = resp[0].(string) } switch resp[1].(type) { case []interface{}: return newCursor, iface2strmap(resp[1]), nil } return newCursor, empty, nil } // http://redis.io/commands/lpush func (c *Client) LPush(key string, values ...string) (int, error) { v, err := c.execWithKey(true, "LPUSH", key, vstr2iface(values)...) if err != nil { return 0, err } return iface2int(v) } // http://redis.io/commands/lindex func (c *Client) LIndex(key string, index int) (string, error) { v, err := c.execWithKey(true, "LINDEX", key, index) if err != nil { return "", err } return iface2str(v) } // http://redis.io/commands/lpop func (c *Client) LPop(key string) (string, error) { v, err := c.execWithKey(true, "LPOP", key) if err != nil { return "", err } return iface2str(v) } // http://redis.io/commands/rpop func (c *Client) RPop(key string) (string, error) { v, err := c.execWithKey(true, "RPOP", key) if err != nil { return "", err } return iface2str(v) } // http://redis.io/commands/llen func (c *Client) LLen(key string) (int, error) { v, err := c.execWithKey(true, "LLEN", key) if err != nil { return 0, err } return iface2int(v) } // http://redis.io/commands/ltrim func (c *Client) LTrim(key string, begin, end int) (err error) { _, err = c.execWithKey(true, "LTRIM", key, begin, end) return err } // http://redis.io/commands/lrange func (c *Client) LRange(key string, begin, end int) ([]string, error) { v, err := c.execWithKey(true, "LRANGE", key, begin, end) if err != nil { return []string{}, err } return iface2vstr(v), nil } // http://redis.io/commands/lrem func (c *Client) LRem(key string, count int, value string) (int, error) { v, err := c.execWithKey(true, "LREM", key, count, value) if err != nil { return 0, err } return iface2int(v) } // http://redis.io/commands/hget func (c *Client) HGet(key, member string) (string, error) { v, err := c.execWithKey(true, "HGET", key, member) if err != nil { return "", err } return iface2str(v) } // http://redis.io/commands/hgetall func (c *Client) HGetAll(key string) (map[string]string, error) { v, err := c.execWithKey(true, "HGETALL", key) if err != nil { return nil, err } return iface2strmap(v), nil } // http://redis.io/commands/hincrby func (c *Client) HIncrBy(key string, field string, increment int) (int, error) { v, err := c.execWithKey(true, "HINCRBY", key, field, increment) if err != nil { return 0, err } return iface2int(v) } // http://redis.io/commands/hmget func (c *Client) HMGet(key string, field ...string) ([]string, error) { v, err := c.execWithKey(true, "HMGET", key, vstr2iface(field)...) 
if err != nil { return nil, err } return iface2vstr(v), nil } // http://redis.io/commands/hmset func (c *Client) HMSet(key string, items map[string]string) (err error) { tmp := make([]interface{}, (len(items) * 2)) idx := 0 for k, v := range items { n := idx * 2 tmp[n] = k tmp[n+1] = v idx++ } _, err = c.execWithKey(true, "HMSET", key, tmp...) return } // http://redis.io/commands/hset func (c *Client) HSet(key, field, value string) (err error) { _, err = c.execWithKey(true, "HSET", key, field, value) return } // http://redis.io/commands/hdel func (c *Client) HDel(key, field string) (err error) { _, err = c.execWithKey(true, "HDEL", key, field) return } // http://redis.io/commands/zincrby func (c *Client) ZIncrBy(key string, increment int, member string) (string, error) { v, err := c.execWithKey(true, "ZINCRBY", key, increment, member) if err != nil { return "", err } return iface2str(v) } // WIP (we stopped here) // http://redis.io/commands/mget // MGet is not fully supported on sharded connections. // TODO: fix func (c *Client) MGet(keys ...string) ([]string, error) { tmp := make([]interface{}, len(keys)+1) tmp[0] = "MGET" for n, k := range keys { tmp[n+1] = k } v, err := c.execOnFirst(true, tmp...) if err != nil { return nil, err } switch v.(type) { case []interface{}: items := v.([]interface{}) resp := make([]string, len(items)) for n, item := range items { switch item.(type) { case string: resp[n] = item.(string) } } return resp, nil } return nil, ErrServerError } // http://redis.io/commands/mset // MSet is not fully supported on sharded connections. // TODO: fix func (c *Client) MSet(items map[string]string) error { tmp := make([]interface{}, (len(items)*2)+1) tmp[0] = "MSET" idx := 0 for k, v := range items { n := idx * 2 tmp[n+1] = k tmp[n+2] = v idx++ } _, err := c.execOnFirst(true, tmp...) if err != nil { return err } return nil } // http://redis.io/commands/pfadd func (c *Client) PFAdd(key string, vs ...interface{}) (int, error) { v, err := c.execWithKey(true, "PFADD", key, vs...) if err != nil { return 0, err } return iface2int(v) } // http://redis.io/commands/pfcount func (c *Client) PFCount(keys ...string) (int, error) { v, err := c.execWithKeys(true, "PFCOUNT", keys) if err != nil { return 0, err } sum := 0 if len(v) == 0 { return 0, nil } for _, value := range v { a, err := iface2int(value) if err != nil { return 0, err } sum += a } return iface2int(sum) } // http://redis.io/commands/pfmerge func (c *Client) PFMerge(keys ...string) (err error) { _, err = c.execWithKeys(true, "PFMERGE", keys) return } // http://redis.io/commands/publish func (c *Client) Publish(channel string, value string) error { _, err := c.execWithKey(true, "PUBLISH", channel, value) return err } // http://redis.io/commands/rpush func (c *Client) RPush(key string, values ...string) (int, error) { v, err := c.execWithKey(true, "RPUSH", key, vstr2iface(values)...) if err != nil { return 0, err } return iface2int(v) } // http://redis.io/commands/sadd func (c *Client) SAdd(key string, vs ...interface{}) (int, error) { v, err := c.execWithKey(true, "SADD", key, vs...) if err != nil { return 0, err } return iface2int(v) } // http://redis.io/commands/srem func (c *Client) SRem(key string, vs ...interface{}) (int, error) { v, err := c.execWithKey(true, "SREM", key, vs...) 
if err != nil { return 0, err } return iface2int(v) } // http://redis.io/commands/script-load func (c *Client) ScriptLoad(script string) (string, error) { v, err := c.execOnFirst(true, "SCRIPT", "LOAD", script) if err != nil { return "", err } return iface2str(v) } // http://redis.io/commands/set func (c *Client) Set(key, value string) (err error) { _, err = c.execWithKey(true, "SET", key, value) return } // http://redis.io/commands/setnx func (c *Client) SetNx(key, value string) (int, error) { v, err := c.execWithKey(true, "SETNX", key, value) if err != nil { return 0, err } return iface2int(v) } // http://redis.io/commands/setbit func (c *Client) SetBit(key string, offset, value int) (int, error) { v, err := c.execWithKey(true, "SETBIT", key, offset, value) if err != nil { return 0, err } return iface2int(v) } // http://redis.io/commands/setex func (c *Client) SetEx(key string, seconds int, value string) (err error) { _, err = c.execWithKey(true, "SETEX", key, seconds, value) return } // http://redis.io/commands/smembers func (c *Client) SMembers(key string) ([]string, error) { var v interface{} var err error v, err = c.execWithKey(true, "SMEMBERS", key) if err != nil { return []string{}, err } return iface2vstr(v), nil } // http://redis.io/commands/smove func (c *Client) SMove(source string, destination string, member string) (int, error) { v, err := c.execWithKey(true, "SMOVE", source, destination, member) if err != nil { return 0, err } return iface2int(v) } // http://redis.io/commands/srandmember func (c *Client) SRandMember(key string, count int) ([]string, error) { var v interface{} var err error v, err = c.execWithKey(true, "SRANDMEMBER", key, count) if err != nil { return []string{}, err } return iface2vstr(v), nil } // http://redis.io/commands/sismember func (c *Client) SIsMember(key string, vs ...interface{}) (int, error) { v, err := c.execWithKey(true, "SISMEMBER", key, vs...) 
if err != nil { return 0, err } return iface2int(v) } // http://redis.io/commands/scard func (c *Client) SCard(key string) (int, error) { v, err := c.execWithKey(true, "SCARD", key) if err != nil { return 0, err } return iface2int(v) } type PubSubMessage struct { Error error Value string Channel string } // http://redis.io/commands/subscribe func (c *Client) Subscribe(channel string, ch chan PubSubMessage, stop chan bool) error { srv, err := c.selector.PickServer("") if err != nil { return err } cn, err := c.getConn(srv) if err != nil { return err } _, err = c.execute(cn.rw, "SUBSCRIBE", channel) if err != nil { cn.condRelease(&err) return err } if err = cn.nc.SetDeadline(time.Time{}); err != nil { cn.condRelease(&err) return err } sibStop := make(chan bool) go func() { for { select { case <-stop: cn.nc.Close() case <-sibStop: return } } }() go func() { for { raw, err := c.parseResponse(cn.rw.Reader) if err != nil { msg := PubSubMessage{ Error: err, } ch <- msg sibStop <- true cn.nc.Close() return } switch raw.(type) { case []interface{}: ret := raw.([]interface{}) msg := PubSubMessage{ Value: ret[2].(string), Channel: ret[1].(string), Error: nil, } ch <- msg default: msg := PubSubMessage{ Error: errors.New("Protocol Error"), } ch <- msg sibStop <- true cn.nc.Close() return } } }() return err } // http://redis.io/commands/ttl func (c *Client) TTL(key string) (int, error) { v, err := c.execWithKey(true, "TTL", key) if err != nil { return 0, err } return iface2int(v) } func (c *Client) ZAdd(key string, vs ...interface{}) (int, error) { if len(vs)%2 != 0 { return 0, errors.New("Incomplete parameter sequence") } v, err := c.execWithKey(true, "ZADD", key, vs...) if err != nil { return 0, err } return iface2int(v) } func (c *Client) ZCard(key string) (int, error) { v, err := c.execWithKey(true, "ZCARD", key) if err != nil { return 0, err } return iface2int(v) } func (c *Client) ZCount(key string, min int, max int) (int, error) { v, err := c.execWithKey(true, "ZCOUNT", key, min, max) if err != nil { return 0, err } return iface2int(v) } // http://redis.io/commands/zrange func (c *Client) ZRange(key string, start int, stop int, withscores bool) ([]string, error) { var v interface{} var err error if withscores == true { v, err = c.execWithKey(true, "ZRANGE", key, start, stop, "WITHSCORES") } else { v, err = c.execWithKey(true, "ZRANGE", key, start, stop) } if err != nil { return []string{}, err } return iface2vstr(v), nil } // http://redis.io/commands/zrevrange func (c *Client) ZRevRange(key string, start int, stop int, withscores bool) ([]string, error) { var v interface{} var err error if withscores == true { v, err = c.execWithKey(true, "ZREVRANGE", key, start, stop, "WITHSCORES") } else { v, err = c.execWithKey(true, "ZREVRANGE", key, start, stop) } if err != nil { return []string{}, err } return iface2vstr(v), nil } // http://redis.io/commands/zscore func (c *Client) ZScore(key string, member string) (string, error) { v, err := c.execWithKey(true, "ZSCORE", key, member) if err != nil { return "", err } return iface2str(v) } func (c *Client) ZRem(key string, vs ...interface{}) (int, error) { v, err := c.execWithKey(true, "ZREM", key, vs...) if err != nil { return 0, err } return iface2int(v) } // http://redis.io/commands/zremrangebyscore func (c *Client) ZRemRangeByScore(key string, start interface{}, stop interface{}) (int, error) { v, err := c.execWithKey(true, "ZREMRANGEBYSCORE", key, start, stop) if err != nil { return 0, err } return iface2int(v) } // GetMulti is a batch version of Get. 
// The returned map from keys to items may have fewer elements than the input
// slice, due to memcache cache misses. Each key must be at most 250 bytes in
// length. If no error is returned, the returned map will also be non-nil.
/*
func (c *Client) GetMulti(keys []string) (map[string]*Item, error) {
	var lk sync.Mutex
	m := make(map[string]*Item)
	addItemToMap := func(it *Item) {
		lk.Lock()
		defer lk.Unlock()
		m[it.Key] = it
	}
	keyMap := make(map[net.Addr][]string)
	for _, key := range keys {
		if !legalKey(key) {
			return nil, ErrMalformedKey
		}
		addr, err := c.selector.PickServer(key)
		if err != nil {
			return nil, err
		}
		keyMap[addr] = append(keyMap[addr], key)
	}
	ch := make(chan error, buffered)
	for addr, keys := range keyMap {
		go func(addr net.Addr, keys []string) {
			//ch <- c.getFromAddr(addr, keys, addItemToMap)
		}(addr, keys)
	}
	var err error
	for _ = range keyMap {
		if ge := <-ch; ge != nil {
			err = ge
		}
	}
	return m, err
}
*/
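The comments on Del above describe a general client-side sharding pattern: against a single server, one multi-key command is sent as-is, while on a sharded setup each key is routed to the server its hash selects. Here is a minimal Python sketch of that routing, assuming duck-typed per-server connections exposing a delete method; the names are illustrative, not the Go library's API.

import hashlib

class ShardedClient:
    """Client-side sharding: route each key to a server chosen by a stable hash."""

    def __init__(self, servers):
        self.servers = servers  # per-server connection objects

    def pick_server(self, key):
        # Stable hash -> server index (the Go client delegates this to a selector)
        digest = hashlib.md5(key.encode()).hexdigest()
        return self.servers[int(digest, 16) % len(self.servers)]

    def delete(self, *keys):
        if len(self.servers) == 1:
            # Single server: one DEL carrying every key
            return self.servers[0].delete(*keys)
        # Sharded: one DEL per key, sent to the server that owns it
        return sum(self.pick_server(key).delete(key) for key in keys)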
Yes, this is an accurate headline, not a Mad Lib. If you dreamed up a list of actors who would one day play Michael Jackson, you likely wouldn't include Joseph Fiennes. And yet here we are. The Guardian reports that the Shakespeare in Love actor will portray the King of Pop in a TV special set to air in the U.K. The movie is being sold as a road-trip comedy and an exploration of superstardom. "It's a fun, light-hearted, tongue-in-cheek road trip of what celebrity of that kind is like," Fiennes told WENN. "But also it's rather beautiful and poignant about their relationships as well." Fiennes' casting struck a nerve, since he's, well, white. The news came just as the Oscars nominated an all-white slate of actors and Hollywood is having an introspective moment about diversity. "Joseph Fiennes as Michael Jackson is a symptom of a deeper sickness that moviemakers are only now beginning to treat," Stereo Williams wrote at The Daily Beast. And, of course, Twitter had a field day. One tweet read: "Oddly enough, it won't be the first time I'll see Joseph Fiennes portray a man of color. I saw him 20 years ago as Jesus in a London play."
package proxy

import (
	"encoding/json"
	"io/ioutil"
	"net/http"
	"net/http/httptest"
	"net/url"
	"strconv"
	"strings"
	"testing"

	"gotest.tools/assert"
)

// ErrorResponse mirrors the JSON error body returned by the proxy.
type ErrorResponse struct {
	Code    string
	Message string
}

func TestValidation(t *testing.T) {
	type fixture struct {
		config Config
		proxy  Proxy
		server *httptest.Server
	}

	setup := func() *fixture {
		s := httptest.NewServer(
			http.HandlerFunc(func(w http.ResponseWriter, request *http.Request) {
				if request.URL.Path == "/petstore.yaml" {
					file, _ := ioutil.ReadFile("resources/petstore.yaml")
					w.Write(file)
				} else {
					w.Write([]byte("hello world"))
				}
			}),
		)

		u, _ := url.Parse(s.URL)
		port, _ := strconv.Atoi(u.Port())

		config := Config{
			ProxyPort:   8080,
			ServicePort: port,
			OpenapiPath: "/petstore.yaml",
		}
		proxy := Proxy{}
		proxy.Init(config)

		return &fixture{
			config: config,
			proxy:  proxy,
			server: s,
		}
	}

	t.Run("test operation found", func(t *testing.T) {
		f := setup()
		recorder := httptest.NewRecorder()
		request := httptest.NewRequest("GET", f.server.URL+"/pet/findByStatus", nil)
		f.proxy.ServeHTTP(recorder, request)
		assert.Equal(t, 200, recorder.Code)
		assert.Equal(t, "hello world", recorder.Body.String())
	})

	t.Run("test operation not found", func(t *testing.T) {
		f := setup()
		recorder := httptest.NewRecorder()
		request := httptest.NewRequest("GET", f.server.URL+"/unknownoperation", nil)
		f.proxy.ServeHTTP(recorder, request)
		assert.Equal(t, 400, recorder.Code)

		var errorResponse ErrorResponse
		err := json.Unmarshal(recorder.Body.Bytes(), &errorResponse)
		assert.NilError(t, err)
		assert.Equal(t, errorResponse.Code, "400")
		assert.Equal(t, errorResponse.Message, "no matching operation was found")
	})

	t.Run("invalid enum value", func(t *testing.T) {
		f := setup()
		recorder := httptest.NewRecorder()
		request := httptest.NewRequest("GET", f.server.URL+"/pet/findByStatus?status=testasd", nil)
		f.proxy.ServeHTTP(recorder, request)
		assert.Equal(t, 400, recorder.Code)

		var errorResponse ErrorResponse
		err := json.Unmarshal(recorder.Body.Bytes(), &errorResponse)
		assert.NilError(t, err)
		assert.Equal(t, errorResponse.Code, "400")
		assert.Assert(t, strings.Contains(errorResponse.Message, "parameter \"status\" in query has an error: value is not one of the allowed values"))
	})
}
/** * Tests whether a certificate is an OCSP responder certificate. */ @Test public void isOcspResponderCert() { X509Certificate caCert = TestCertUtil.getCaCert(); assertFalse(GlobalConf.isOcspResponderCert(caCert, caCert)); PKCS12 ocspSigner = TestCertUtil.getOcspSigner(); X509Certificate ocspCert = ocspSigner.certChain[0]; assertTrue(GlobalConf.isOcspResponderCert(caCert, ocspCert)); }
import FullTeamStats from '../models/FullTeamStats'; import HLTVConfig from '../models/HLTVConfig'; declare const getTeamStats: (config: HLTVConfig) => ({ id }: { id: number; }) => Promise<FullTeamStats>; export default getTeamStats;
#ifndef PACK_H
#define PACK_H

#include "object.h"
#include "csum-file.h"

/*
 * Packed object header
 */
#define PACK_SIGNATURE 0x5041434b	/* "PACK" */
#define PACK_VERSION 2
#define pack_version_ok(v) ((v) == htonl(2) || (v) == htonl(3))
struct pack_header {
	uint32_t hdr_signature;
	uint32_t hdr_version;
	uint32_t hdr_entries;
};

/*
 * The first four bytes of index formats later than version 1 should
 * start with this signature, as all older git binaries would find this
 * value illegal and abort reading the file.
 *
 * This is the case because the number of objects in a packfile
 * cannot exceed 1,431,660,000 as every object would need at least
 * 3 bytes of data and the overall packfile cannot exceed 4 GiB with
 * version 1 of the index file due to the offsets limited to 32 bits.
 * Clearly the signature exceeds this maximum.
 *
 * Very old git binaries will also compare the first 4 bytes to the
 * next 4 bytes in the index and abort with a "non-monotonic index"
 * error if the second 4 byte word is smaller than the first 4
 * byte word. This would be true in the proposed future index
 * format as idx_signature would be greater than idx_version.
 */
#define PACK_IDX_SIGNATURE 0xff744f63	/* "\377tOc" */

struct pack_idx_option {
	unsigned flags;
	/* flag bits */
#define WRITE_IDX_VERIFY 01 /* verify only, do not write the idx file */
#define WRITE_IDX_STRICT 02

	uint32_t version;
	uint32_t off32_limit;

	/*
	 * List of offsets that would fit within off32_limit but
	 * need to be written out as 64-bit entity for byte-for-byte
	 * verification.
	 */
	int anomaly_alloc, anomaly_nr;
	uint32_t *anomaly;
};

extern void reset_pack_idx_option(struct pack_idx_option *);

/*
 * Packed object index header
 */
struct pack_idx_header {
	uint32_t idx_signature;
	uint32_t idx_version;
};

/*
 * Common part of object structure used for write_idx_file
 */
struct pack_idx_entry {
	unsigned char sha1[20];
	uint32_t crc32;
	off_t offset;
};

struct progress;
typedef int (*verify_fn)(const unsigned char*, enum object_type, unsigned long, void*, int*);

extern const char *write_idx_file(const char *index_name, struct pack_idx_entry **objects, int nr_objects, const struct pack_idx_option *, const unsigned char *sha1);
extern int check_pack_crc(struct packed_git *p, struct pack_window **w_curs, off_t offset, off_t len, unsigned int nr);
extern int verify_pack_index(struct packed_git *);
extern int verify_pack(struct packed_git *, verify_fn fn, struct progress *, uint32_t);
extern off_t write_pack_header(struct sha1file *f, uint32_t);
extern void fixup_pack_header_footer(int, unsigned char *, const char *, uint32_t, unsigned char *, off_t);
extern char *index_pack_lockfile(int fd);
extern int encode_in_pack_object_header(enum object_type, uintmax_t, unsigned char *);

#define PH_ERROR_EOF		(-1)
#define PH_ERROR_PACK_SIGNATURE	(-2)
#define PH_ERROR_PROTOCOL	(-3)
extern int read_pack_header(int fd, struct pack_header *);

extern struct sha1file *create_tmp_packfile(char **pack_tmp_name);
extern void finish_tmp_packfile(struct strbuf *name_buffer, const char *pack_tmp_name, struct pack_idx_entry **written_list, uint32_t nr_written, struct pack_idx_option *pack_idx_opts, unsigned char sha1[]);

#endif
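Both header structs above hold fixed-width 32-bit fields in network byte order (note the htonl comparisons in pack_version_ok). As an illustration of the on-disk layout only, here is a small Python sketch that parses the 12-byte packfile header:

import struct

PACK_SIGNATURE = 0x5041434B  # "PACK"

def read_pack_header(data: bytes):
    """Parse the 12-byte packfile header: signature, version, object count.

    Mirrors struct pack_header above; every field is a big-endian uint32.
    """
    if len(data) < 12:
        raise ValueError("truncated pack header")
    signature, version, entries = struct.unpack(">III", data[:12])
    if signature != PACK_SIGNATURE:
        raise ValueError("bad pack signature")
    if version not in (2, 3):  # same check as pack_version_ok()
        raise ValueError("unsupported pack version %d" % version)
    return version, entries

# Header of an empty version-2 pack:
assert read_pack_header(struct.pack(">III", PACK_SIGNATURE, 2, 0)) == (2, 0)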
#define BOOST_TEST_DYN_LINK
#define BOOST_TEST_MODULE Regression

#include <string>
#include <unistd.h> // sleep()
#include <boost/test/included/unit_test.hpp>
#include <xolotl/perf/dummy/DummyTimer.h>

using namespace std;
using namespace xolotl::perf::dummy;

/**
 * This suite is responsible for testing the DummyTimer.
 */
BOOST_AUTO_TEST_SUITE(DummyTimer_testSuite)

BOOST_AUTO_TEST_CASE(checkInitialValue)
{
	auto tester = DummyTimer();

	BOOST_REQUIRE_EQUAL(0, tester.getValue());
}

BOOST_AUTO_TEST_CASE(checkTiming)
{
	auto tester = DummyTimer();

	tester.start();
	sleep(3);
	tester.stop();

	BOOST_REQUIRE_EQUAL(0, tester.getValue());
	BOOST_REQUIRE_EQUAL("", tester.getUnits());
}

BOOST_AUTO_TEST_SUITE_END()
San Diego’s innovation economy has come a long way since the bitter winter of 2009. As the great recession deepened, venture investments in the region fell to a 12-year low in the first quarter, with less than $101 million invested in 17 companies, according to one venture industry survey. As a point of contrast, venture firms poured $334.1 million into 27 deals in San Diego during the last three months of 2015—and invested more than $1 billion here over the entire year. The numbers only tell part of the story, however. In a presentation last week for the MIT Enterprise Forum, serial entrepreneur (and San Diego Xconomist) Mark Bowles said there are now more organizations for startups and entrepreneurs in San Diego in terms of incubators, accelerators, and support groups than there were in Silicon Valley when he left 12 years ago. One example: More than 500 people registered to participate when Startup San Diego organized a three-day “convergence” for tech startups earlier this month. The weekend schedule included multiple tours of tech startups in downtown San Diego, an internship fair at UC San Diego, and Demo Night XIII, organized by San Diego Tech Founders. The goal of the convergence weekend was to highlight the growing community for tech startups in downtown San Diego, and to get more college students involved, according to Neal Bloom, who organized the weekend event. “About two years ago, we realized that we had no student attendance” at Startup Week, said Bloom, a local tech entrepreneur who is now a San Diego-based representative for Hired, the San Francisco-based online marketplace for tech jobs. Many students are graduating from UC San Diego, San Diego State University, and the University of San Diego, and leaving town for jobs in Silicon Valley without knowing that Web companies like Tealium, Kyriba, Classy, and Take Lessons, are expanding here, Bloom said. Many students are also unaware of the startup resources that have emerged in San Diego since 2009. Bowles highlighted 10 local incubators and accelerator programs in his presentation, including EvoNexus, Plug and Play San Diego, Janssen Labs, and West Health. Local co-working spaces include CyberHive, DeskHub, Co-Merge, 3rd Space, and the Vine. But San Diego’s tech ecosystem also faces a few challenges. The issue most frequently cited is the availability of venture capital for tech startups in San Diego. “As one of the country’s innovation hubs, we flog ourselves for being a backwater region that doesn’t get any [venture capital] money, but it’s just not true,” Bowles told me. Citing data from City Lab, he said San Diego ranks among the top 10 cities worldwide in terms of total venture capital funding (including life sciences).
CHERRIES boss Kevin Bond will tonight continue his search for new blood after casting his net from Hackney to Brazil - via France, Egypt and Paraguay. Bond will run the rule over another cosmopolitan batch of trialists during Cherries' pre-season friendly against Oxford United (7.45pm kick-off). Among England's representatives in the city of dreaming spires will be London-born striker Shabazz Baidoo who is currently on the books of QPR. The 19-year-old, who has progressed through the ranks at Loftus Road, caught Bond's eye during a behind-closed-doors friendly at Chelsea earlier this year. Baidoo has hit four goals in 34 appearances during his QPR career and featured predominantly from the substitutes' bench in their Championship campaign last season. South American trio Alfredo Novaes, Lidier Marmol and Marcio Giovanni look set to join Baidoo in the Cherries line-up at the Kassam Stadium. And although details of the three are sketchy, central midfielder Marmol hails from Paraguay, while defender Giovanni and winger Novaes come from Brazil. Giovanni is understood to have been plying his trade in Israel, while the other two are believed to have played in Spain and Germany last season. Felicien Singbo, who turned out against Southampton six days ago, will have another run-out against the U's and will be joined by fellow Frenchman Christophe Francois. Defender Singbo, midfielder Francois and Egyptian striker Reda Shehata all joined Cherries during their four-day training camp at Woodbury Park this week. Garreth O'Connor will fly the flag for the Republic of Ireland at Oxford after recovering from a nagging thigh strain which forced him to miss Cherries' first three friendlies. The 27-year-old, who is back at Dean Court with a view to signing on loan ahead of the new season, should feature after coming through a series of intensive workouts in Devon. Boss Bond said: "I'm looking forward to seeing Garreth in a match situation. He's had a week's training and he did well." Asked whether he had been frustrated by O'Connor's absence since his return, Bond replied: "To a degree, but there are no prizes won during pre-season and I want to see him at Nottingham Forest in our first game. As long as he's fit, willing and raring to go then I haven't got a problem with not seeing him any earlier." Englishmen Scott Lye and Jason Pearce will again look to impress Bond, although a number of Cherries players could be rested for various reasons. Russ Perrett, who was born in the same Barton-on-Sea maternity unit as Neil Moss, has a slight hamstring problem, while Darren Anderton and Sam Vokes are both nursing minor groin strains. Danny Hollands is also struggling with his hamstring and Neil Young may not travel for family reasons.
Q: Was Angkor Wat built on top of water? I was watching a National Geographic documentary. In it, it was said that Angkor Wat was built on top of water, and that it was once used as an ancient observatory. Is this true? I searched all over the Wikipedia article and found nothing of this sort. Also, if it was used as an observatory, what was it used for? Tracking celestial objects? And how? Please give sources for the answer. Thank you.

A: (Disclaimer: I've not seen that documentary, so I'm not sure exactly what it said.) Sort of. In a literal sense, Angkor Wat was built upon a sea of groundwater. The city was built in a very wet and water-rich area; much of this water found its way underground. At the lower levels, the water fills up all the pores and holes in the sandy soil. The water table helps firm up the upper levels of soil, upon which Angkor Wat's foundations sit. In recent years, the regional water table has been lowered through rampant pumping of underground water. It is feared this would literally undermine the ancient city's structural stability. Figuratively speaking, Angkor Wat thrived on its water resources. It provided a massive irrigation system fuelled by a network of water reservoirs. This enabled the high agricultural productivity that allowed Angkor Wat to maintain a large population. It was the failure to maintain these water distribution systems that eventually led to the city's abandonment.
Transcription Factor Binding Site Discovery by the Probabilistic Rules Control of gene expression at the level of transcription is achieved by nuclear factors that bind to regulatory elements: short DNA sequence motifs called transcription factor binding sites. The development of reliable methods for binding site recognition is an important step in large-scale genome analysis. Data-mining approaches adapted to bioinformatics tasks have shown high efficiency, yet the particular difficulty of regulatory-region analysis lies in its high false-positive rates. In this paper the program system Discovery is applied to the task of binding site recognition. Discovery performs semantic probabilistic inference and finds statistically significant probabilistic rules; the hypothesis class is defined by the expert in dialog mode. We demonstrate that Discovery is consistently more accurate than traditional weight matrices in the binding site prediction task, as established for three families of transcription factors.
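For context on the baseline, here is a minimal Python sketch of the traditional weight-matrix approach the abstract compares against: a log-odds position weight matrix built from aligned sites and slid along a sequence. The motif, threshold, and sequences are toy values for illustration only, not from the paper.

import math

def build_pwm(sites, background=0.25, pseudocount=0.5):
    """Log-odds position weight matrix from aligned binding sites."""
    length = len(sites[0])
    pwm = []
    for pos in range(length):
        column = [site[pos] for site in sites]
        scores = {}
        for base in "ACGT":
            freq = (column.count(base) + pseudocount) / (len(sites) + 4 * pseudocount)
            scores[base] = math.log2(freq / background)
        pwm.append(scores)
    return pwm

def scan(sequence, pwm, threshold):
    """Yield (offset, score) for every window scoring at or above threshold."""
    width = len(pwm)
    for i in range(len(sequence) - width + 1):
        score = sum(pwm[j][sequence[i + j]] for j in range(width))
        if score >= threshold:
            yield i, score

sites = ["TGACTCA", "TGAGTCA", "TGACTCA"]  # toy AP-1-like alignment
pwm = build_pwm(sites)
hits = list(scan("CCTGACTCAGG", pwm, threshold=5.0))  # finds the site at offset 2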