<filename>src/components/monthSelector.tsx
import * as React from 'react';
import {
  FormGroup,
  Input,
  Label,
} from 'reactstrap';

const months = [
  'January',
  'February',
  'March',
  'April',
  'May',
  'June',
  'July',
  'August',
  'September',
  'October',
  'November',
  'December',
];

interface MonthSelectorProps {
  selectedMonth?: number;
  updateSelectedMonth: (selectedMonth: number) => void;
}

class MonthSelector extends React.Component<MonthSelectorProps, {}> {
  render() {
    return (
      <FormGroup row={ true }>
        <Label for='month'>Month</Label>
        <Input type='select' id='month' defaultValue='initial' onChange={ this.callback }>
          <option disabled={ true } value='initial'>---</option>
          { months.map(this.createMonth) }
        </Input>
      </FormGroup>
    );
  }

  private callback = (e: React.ChangeEvent<HTMLInputElement>) => {
    const value = e.currentTarget.value;
    this.props.updateSelectedMonth(Number(value));
  }

  private createMonth = (month: string, idx: number) => {
    return <option key={ idx } value={ idx + 1 }>{ month }</option>;
  }
}

export default MonthSelector;
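// A minimal usage sketch (hypothetical parent render; `this.handleMonthChange`
// is an assumed callback, not part of this file):
//
//   <MonthSelector
//     selectedMonth={ this.state.month }
//     updateSelectedMonth={ this.handleMonthChange }
//   />
//
// Note: `selectedMonth` is declared in the props but not yet read by render(),
// so the select is uncontrolled and always starts on the '---' placeholder.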
|
Administrative agreement as a component of the system of public governance tools : The purpose of the article is to characterise the administrative contract as a component of the system of public administration tools. It is determined that the system of tools for the implementation of functions by public administration bodies must meet the requirements of efficiency in settling management tasks, mobility in implementing management decisions, accessibility of administrative procedures, and openness of regulations and administrative acts. The system of tools of public administration includes decisions, actions or omissions of public authorities and local governments, which have fundamental legal significance and consequences for individuals. It is emphasized that the implementation of the concept of «good governance» must comply with the democratic principles of building the rule of law, the achievement of which requires the use of the system of tools defined by current legislation. The components of the system of public administration tools include bylaws (in effect identifying them with regulations), administrative acts, administrative agreements, and acts-plans. The normative-legal character of the administrative agreement is determined, which to some extent identifies it with the normative acts of the subjects of power, emphasizing the bilateral and multilateral nature of such relations. It is substantiated that administrative contracts share features with other instruments of public administration, in particular the need to conclude them in accordance with the established procedure and their aim of satisfying subjective public rights. It is established that the distinctive features of an administrative agreement are the voluntary nature of its adoption, the bilateral and multilateral nature of its regulation of public relations, and the fact that one of the parties to the agreement is always a subject of power. It is concluded that in the implementation of administrative-contractual relations there is a situation of legal equality of the parties, so the mechanism for ensuring its implementation is specific. It is concluded that an administrative agreement is a public accession agreement, the content of which is the implementation of management functions related to the provision of public services and the efficient use of public property, concluded between the subject of power and a non-governmental entity at the initiative of the latter. It is substantiated that, in current conditions, in order to ensure the accessibility of legislation and to avoid an excessive accumulation of regulations, the draft Law of Ukraine «On Administrative Procedure» should be supplemented with the following provision: «administrative contract» is defined as a public accession agreement on the implementation of management functions related to the provision of public services and the efficient use of public property, concluded between the subject of power and a non-governmental entity at the initiative of the latter. |
<reponame>jeffi/math<filename>src/main/java/edu/unc/cs/robotics/math/Geom3d.java
package edu.unc.cs.robotics.math;
/**
 * A collection of geometric computational primitives.
 */
public final class Geom3d {
    private Geom3d() {
        throw new AssertionError("no instances");
    }

    /**
     * Computes the squared distance between a point and a segment.
     *
     * @param pt the point
     * @param s0 endpoint of segment
     * @param s1 endpoint of segment
     * @return the squared distance between pt and segment from s0 to s1
     */
    public static double distPointSegmentSquared(Vec3d pt, Vec3d s0, Vec3d s1) {
        return distPointSegmentSquared(pt, s0, s1, null);
    }
    /**
     * Computes the squared distance between a point and a segment, optionally
     * returning the point on the segment closest to the point.
     *
     * @param pt the point
     * @param s0 endpoint of segment
     * @param s1 endpoint of segment
     * @param nearest [output] on return contains the point closest to pt on
     * the segment from s0 to s1 (may be null)
     * @return the squared distance between pt and segment from s0 to s1
     */
    public static double distPointSegmentSquared(Vec3d pt, Vec3d s0, Vec3d s1, Vec3d nearest) {
        // implementation is inlined/optimized from the following:
        //
        // Vec3d v = new Vec3d().sub(s1, s0);
        // Vec3d w = new Vec3d().sub(pt, s0);
        // double c1 = w.dot(v);
        // if (c1 <= 0)
        //     return Vec3d.distSquared(pt, s0);
        // double c2 = v.dot(v);
        // if (c2 <= c1)
        //     return Vec3d.distSquared(pt, s1);
        // double b = c1 / c2;
        // if (nearest == null)
        //     nearest = new Vec3d();
        //
        // nearest.mul(v, b).add(s0);
        // return Vec3d.distSquared(pt, nearest);
        double vx = s1.x - s0.x;
        double vy = s1.y - s0.y;
        double vz = s1.z - s0.z;
        double c1 = (pt.x - s0.x)*vx + (pt.y - s0.y)*vy + (pt.z - s0.z)*vz;
        double c2;
        double nx, ny, nz;
        if (c1 <= 0) {
            // projection falls before s0: s0 is the nearest point
            nx = s0.x;
            ny = s0.y;
            nz = s0.z;
        } else if ((c2 = vx*vx + vy*vy + vz*vz) <= c1) {
            // projection falls past s1: s1 is the nearest point
            nx = s1.x;
            ny = s1.y;
            nz = s1.z;
        } else {
            // projection falls within the segment
            double b = c1/c2;
            nx = vx*b + s0.x;
            ny = vy*b + s0.y;
            nz = vz*b + s0.z;
        }
        double dx = pt.x - nx;
        double dy = pt.y - ny;
        double dz = pt.z - nz;
        if (nearest != null) {
            nearest.x = nx;
            nearest.y = ny;
            nearest.z = nz;
        }
        return dx*dx + dy*dy + dz*dz;
    }
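    // Example (hypothetical values; assumes a Vec3d(x, y, z) constructor,
    // which is not shown in this file):
    //
    //   Vec3d nearest = new Vec3d();
    //   double d2 = Geom3d.distPointSegmentSquared(
    //       new Vec3d(1, 1, 0),   // query point
    //       new Vec3d(0, 0, 0),   // segment start
    //       new Vec3d(2, 0, 0),   // segment end
    //       nearest);
    //   // d2 == 1.0, nearest == (1, 0, 0)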
    /**
     * Computes the squared distance between two segments.
     *
     * @param s1p0 endpoint of first segment
     * @param s1p1 endpoint of first segment
     * @param s2p0 endpoint of second segment
     * @param s2p1 endpoint of second segment
     * @return the squared distance between two segments
     */
    public static double distSquaredSegmentSegment(
        Vec3d s1p0, Vec3d s1p1,
        Vec3d s2p0, Vec3d s2p1)
    {
        return distSquaredSegmentSegment(
            s1p0, s1p1,
            s2p0, s2p1,
            null, null);
    }
    /**
     * Computes the squared distance between two segments, optionally
     * returning the closest points on the segments.
     *
     * @param s1p0 endpoint of first segment
     * @param s1p1 endpoint of first segment
     * @param s2p0 endpoint of second segment
     * @param s2p1 endpoint of second segment
     * @param c1 [output] on return contains the point on the first
     * segment closest to the second segment (may be null if not needed)
     * @param c2 [output] on return contains the point on the second
     * segment closest to the first segment (may be null if not needed)
     * @return the squared distance between two segments
     */
    public static double distSquaredSegmentSegment(
        Vec3d s1p0, Vec3d s1p1,
        Vec3d s2p0, Vec3d s2p1,
        Vec3d c1, Vec3d c2)
    {
        final double ux = s1p1.x - s1p0.x;
        final double uy = s1p1.y - s1p0.y;
        final double uz = s1p1.z - s1p0.z;
        final double vx = s2p1.x - s2p0.x;
        final double vy = s2p1.y - s2p0.y;
        final double vz = s2p1.z - s2p0.z;
        final double wx = s1p0.x - s2p0.x;
        final double wy = s1p0.y - s2p0.y;
        final double wz = s1p0.z - s2p0.z;
        final double a = ux*ux + uy*uy + uz*uz;
        final double b = ux*vx + uy*vy + uz*vz;
        final double c = vx*vx + vy*vy + vz*vz;
        final double d = -(ux*wx + uy*wy + uz*wz);
        final double e = vx*wx + vy*wy + vz*wz;
        final double D = a*c - b*b;
        double sD = D;
        double tD = D;
        double sN, tN;
        if (D < 1e-9) {
            // segments are (nearly) parallel or degenerate: fix s = 0 and solve for t
            sN = 0.0;
            sD = 1.0;
            tN = e;
            tD = c;
        } else {
            sN = (b*e + c*d);
            tN = (a*e + b*d);
            if (sN < 0.0) {
                // clamp s to 0
                sN = 0.0;
                tN = e;
                tD = c;
            } else if (sN > sD) {
                // clamp s to 1
                sN = sD;
                tN = e + b;
                tD = c;
            }
        }
        if (tN < 0.0) {
            // clamp t to 0 and re-clamp s
            tN = 0.0;
            if (d < 0.0) {
                sN = 0.0;
            } else if (d > a) {
                sN = sD;
            } else {
                sN = d;
                sD = a;
            }
        } else if (tN > tD) {
            // clamp t to 1 and re-clamp s
            tN = tD;
            final double b_d = b + d;
            if (b_d < 0.0) {
                sN = 0.0;
            } else if (b_d > a) {
                sN = sD;
            } else {
                sN = b_d;
                sD = a;
            }
        }
        final double sc = (Math.abs(sN) < 1e-9 ? 0.0 : sN/sD);
        final double tc = (Math.abs(tN) < 1e-9 ? 0.0 : tN/tD);
        if (c1 != null && c2 != null) {
            // closest points: c1 = s1p0 + sc*u, c2 = s2p0 + tc*v
            // (the offsets apply from the segments' start points, not their end points)
            c1.x = s1p0.x + sc * ux;
            c1.y = s1p0.y + sc * uy;
            c1.z = s1p0.z + sc * uz;
            c2.x = s2p0.x + tc * vx;
            c2.y = s2p0.y + tc * vy;
            c2.z = s2p0.z + tc * vz;
            return Vec3d.distSquared(c1, c2);
        } else {
            // if we're not recording the nearest points, we
            // can shortcut w = s1p0 - s2p0 into the distance
            // dP = w + (sc * u) - (tc * v);
            double dPx = wx + sc*ux - tc*vx;
            double dPy = wy + sc*uy - tc*vy;
            double dPz = wz + sc*uz - tc*vz;
            return dPx*dPx + dPy*dPy + dPz*dPz;
        }
    }
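    // Example (hypothetical values): two parallel unit-length segments one
    // unit apart along the y axis; the squared distance is 1.0:
    //
    //   double d2 = Geom3d.distSquaredSegmentSegment(
    //       new Vec3d(0, 0, 0), new Vec3d(1, 0, 0),
    //       new Vec3d(0, 1, 0), new Vec3d(1, 1, 0));
    //   // d2 == 1.0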
    /**
     * Shorthand for {@code Math.sqrt(distSquaredSegmentSegment(...))}.
     *
     * @param s1p0 endpoint of first segment
     * @param s1p1 endpoint of first segment
     * @param s2p0 endpoint of second segment
     * @param s2p1 endpoint of second segment
     * @return the distance between two segments
     */
    public static double distSegmentSegment(
        Vec3d s1p0, Vec3d s1p1,
        Vec3d s2p0, Vec3d s2p1)
    {
        return Math.sqrt(distSquaredSegmentSegment(s1p0, s1p1, s2p0, s2p1));
    }
}
|
<filename>movies-app/src/components/movies-list/movies-list-item/movies-list-item.tsx
import React, { useState } from "react";
import { PICS_BASE_URL } from "../../../constants";
import { Movie } from "../../../contracts";
import "./movies-list-item.scss";
import starIcon from "../../../assets/star.svg";
type MoviesListItemProps = {
  movie: Movie;
};

export function MoviesListItem({ movie }: MoviesListItemProps) {
  const [showDescription, setShowDescription] = useState(false);

  return (
    <li
      onClick={() => setShowDescription((prev) => !prev)}
      className="movies-list-item"
    >
      <img src={PICS_BASE_URL + movie.poster_path} alt={movie.title} />
      <div className="movies-list-item-rating">
        <img src={starIcon} alt={"vote average: " + movie.vote_average} />
        {movie.vote_average}
      </div>
      <div className="movies-list-item-title">{movie.title}</div>
      {showDescription ? (
        <div className="movies-list-item-description">{movie.overview}</div>
      ) : null}
    </li>
  );
}
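// A minimal usage sketch (hypothetical: assumes the parent holds a fetched
// `movies: Movie[]` array and that the Movie contract includes an `id` field
// usable as a React key):
//
//   <ul className="movies-list">
//     {movies.map((movie) => (
//       <MoviesListItem key={movie.id} movie={movie} />
//     ))}
//   </ul>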
|
Adequacy and Residual Renal Function Objective: Adequate dialysis clearance of fluid and sodium is of great value for maintaining fluid and sodium balance in peritoneal dialysis (PD) patients. It should be noted that adequate sodium removal depends not only on dialysis but also on urine sodium removal and dietary intake. Therefore, in the present study, we applied the mathematical modeling of PD kinetics and performed a more detailed evaluation of the ideal sodium concentration in dialysate. Methods: The sodium removal for PD was theoretically calculated with defined membrane permeability. Based on the 3-pore model, we adjusted the sodium and glucose concentration in dialysate to achieve the fluid and sodium balance for patients with different urine removal and sodium intake. Results: With theoretical calculation, setting total fluid removal at 1000 mL (the sum of urine and dialysis fluid removal) and total sodium intake at 6 g, 8 g, 10 g or 12 g, with the current practice pattern (4 exchanges of 2 L dialysis solution a day for PD), the ideal sodium concentration in dialysate decreased by 8 mmol/L with each 2-g increase in total sodium intake. With the same fluid removal volume, dialysis removed more sodium than urine. When the dialysis fluid removal increased from 0 to 1000 mL as the urine removal decreased from 1000 to 0 mL, there was no obvious difference in dialysate sodium concentration. Conclusion: Our study suggests that to achieve the same fluid and sodium balance for patients with different urine removal and sodium intake, the ideal sodium concentration in dialysate should decrease as the sodium intake increases. Moreover, urine removal had little effect on the ideal sodium concentration, indicating that the diffusion of sodium at the beginning of PD should not be underestimated. |
<reponame>OlehKSS/node-object-hash<filename>src/typeGuess.ts
import { Hashable } from './hasher';
/**
 * Type mapping rules.
 */
export const TYPE_MAP: {[type: string]: string} = {
  Array: 'array',
  Int8Array: 'array',
  Uint8Array: 'array',
  Uint8ClampedArray: 'array',
  Int16Array: 'array',
  Uint16Array: 'array',
  Int32Array: 'array',
  Uint32Array: 'array',
  Float32Array: 'array',
  Float64Array: 'array',
  Buffer: 'array',
  Map: 'map',
  Set: 'set',
  Date: 'date',
  String: 'string',
  Number: 'number',
  Boolean: 'boolean',
  Object: 'object',
};
/**
 * Guess object type
 * @param obj analyzed object
 * @returns object type
 */
export function guessObjectType(obj: object): string {
  if (obj === null) {
    return 'null';
  }
  if (instanceOfHashable(obj)) {
    return 'hashable';
  }
  const type = obj.constructor ? obj.constructor.name : 'unknown';
  return TYPE_MAP[type] || 'unknown';
}
/**
 * Guess variable type
 * @param obj analyzed variable
 * @returns variable type
 */
export function guessType(obj: any): string {
  const type = typeof obj;
  return type !== 'object' ? type : guessObjectType(obj);
}
/**
 * Identify whether an object implements the Hashable interface
 * @param object analyzed variable
 * @return true if the object has a toHash property and that property is a
 * function, false otherwise
 */
function instanceOfHashable(object: any): object is Hashable {
  const toHash = 'toHash';
  return toHash in object && object[toHash] instanceof Function;
}
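// A quick sketch of the expected outputs (hypothetical values):
//
//   guessType('abc');                  // 'string' (primitive, via typeof)
//   guessType([1, 2, 3]);              // 'array'  (constructor name mapped)
//   guessType(new Map());              // 'map'
//   guessType(null);                   // 'null'   (typeof null === 'object')
//   guessType({ toHash: () => 'x' });  // 'hashable'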
|
Watch out, Madisonians: There’s a new outlaw in town, and Mayor Paul Soglin is determined to play sheriff.
“This is not the Wild West … Uber and Lyft refuse to meet these standards and to date refuse to respect Madison Ordinance, choosing to muscle their way into the Madison market,” Soglin declared in a meeting early this month.
But if you’ve gotten a ride from smartphone-based Lyft or Uber in Madison, you’ve caught a glimpse of what taking a taxi should look and feel like in the near future. That is, if city officials in Madison can rework existing ordinances to foster and promote innovation. Unfortunately, if the Soglins of the world had their way, we’d still be calling cabs on pay phones.
As cities across the nation grapple with how to define and regulate rideshare companies, Soglin (who drove a cab in the 1960s) has taken an uncompromising stance against the apps, refusing to even open the conversation about updating city ordinances.
Soglin has aligned himself with cab drivers, saying the services refuse to operate under the regulatory structures taxis do, which include licensing fees, availability of 24/7 service and uniform appearance for all cars in a taxi fleet.
They also object to the way in which these market entrants use “surge pricing” — raising fares when demand is high — whereas licensed taxicab services are required to file their rates with the city and may not charge any other rate.
While some of these regulations are essential for safe transportation in the city, others are outdated. The antiquated 24/7 rule — implemented when the city was much smaller — is unsurprisingly one that Soglin seems adamant in his refusal to discuss. He says allowing new rideshare services to operate outside this requirement gives them an unfair advantage with the ability to work during only profitable hours.
Instead of being hopelessly obstructionist, Soglin should consider that while rideshare companies — and his precious cab companies — provide a quasi-public service, they are also private entities that look to generate profit. The city should do away with this regulation for cabs and peer-to-peer vehicle companies alike.
There’s also a lot more room for compromise than Soglin seems to think. Companies like Uber and Lyft wouldn’t be hit too hard if they were required to pay licensing fees, and they could easily comply with background checks.
Regardless, Soglin still clings to old regulations that protect cab companies from competition. The introduction of these services in the city won’t mean the end of traditional cab companies. Some people will prefer to take a normal cab, while others will opt for more modern services like Lyft and Uber.
Luckily, not everyone in Madison’s city government is as out of touch and close-minded as Soglin. Ald. Scott Resnick, District 8, has plans to propose a new ordinance by the end of the month that will hopefully establish a middle ground in regulating these new rideshare services.
Working through how to incorporate new technologies in a growing city with antiquated ordinances is nuanced and difficult. But other cities are adapting, and if Madison wants to be a Midwestern tech hub, it can’t afford to have its mayor taking such an uncompromising stance on new business models.
Before Lyft and Uber came to town, Madison cabs had no reason to innovate. If their monopoly in the city is allowed to continue and rideshare services are forced out, they’ll still have no reason to make updates.
If Soglin wants Madison to be a city that attracts new services for its citizens, he and other city officials must make a good-faith effort to compromise.
Soglin is clinging to the rules of old under a guise of safety and fairness.
Or maybe he just wants to have the biggest mustache in town. |
/**
 * Removes the edge from start to dest from the graph. If the edge does not exist, this operation is a no-op. If
 * either endpoint does not exist, this throws a {@link NoSuchElementException}.
 *
 * @param start
 *            the start node
 * @param dest
 *            the destination node
 * @throws NoSuchElementException
 *             if either node is not in the graph
 */
public void removeEdge(T start, T dest) {
    checkNodes(start, dest);
    mGraph.get(start).remove(dest);
}
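// A minimal usage sketch (hypothetical: the enclosing class name and the
// addNode/addEdge helpers are assumed, as this fragment shows only one method):
//
//   Graph<String> graph = new Graph<>();
//   graph.addNode("a");
//   graph.addNode("b");
//   graph.addEdge("a", "b");
//   graph.removeEdge("a", "b"); // removes the edge
//   graph.removeEdge("a", "b"); // no-op: the edge is already absent
//   graph.removeEdge("a", "z"); // throws NoSuchElementException: "z" not in graph
|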
Digitalisation in Education, Allusions and References

The metaphor of digitalisation in education emerged during a period when phenomena such as budget cuts and privatisation, layoffs and outsourcing of labour marked the ethos of the twenty-first century. During this time, digitalisation was constructed as an ultimate purpose and an all-encompassing matter in education. As a result, these narratives add new configurations to the metaphor of digitalisation on an ongoing basis. Such configurations attribute a mythical fullness to the concept, in the sense that digitalisation goes beyond the limits of a property that needs to be developed so that society can successfully deal with contemporary challenges and advancements. In this way, digitalisation emerges as a new hegemony in education, with narratives that are more and less directly referential. Less direct references add the element of allusion to the metaphor of digitalisation, in the sense that references can be more implicit/covert or even concealed/hidden. Moreover, as they combine with abstract terms and concepts, they make the boundaries of the technological and educational domains blurry and render education discourse vague. In order to examine the narratives of digitalisation and how they influence education discourse, this study aims to discuss and analyse relevant policy documents in relation to research and studies on the integration of digital technologies in classroom settings and the hybrid (or blended) learning environments that open up. For this purpose, the study uses thematic analysis and discourse analysis in order to trace allusions and references and discuss how emergent meanings relate to current and future needs in education generated by digitalisation itself.

Introduction

The aim of this study is to examine what new rationalities are emerging in education and society in an era when the discourse of digitalisation in education is becoming increasingly prominent and prevalent in Finland. The metaphor of digitalisation emerged during a period when phenomena such as public education budget cuts and privatisation, higher education layoffs and outsourcing of labour marked the neoliberal ethos of the twenty-first century. During the period 2015-2018, digitalisation was constructed as an ultimate purpose and, as such, an all-inclusive matter. Narratives that convey the all-inclusive character of digitalisation include government documents and other policy documents, and constitute the first wave of digitalisation. In early 2015, the newly elected government of Finland published a long-term strategic government programme that included a section dedicated to education (Finnish Government, 2015). The theme of digitalisation of education is explicit in the programme, and objectives are set to meet the need to modernise learning environments and new pedagogical approaches utilising digitalisation. Modernisation includes the government funding new learning environments to update school information and communication technology (ICT) infrastructure, teacher education and in-service training to encourage the innovative use of ICT in teaching (Haukijärvi, 2016; Saari & Säntti, 2018). As Saari and Säntti put it, education discourse in Finland adopts the rhetoric of the information society. This stresses the possibility of bringing the education system up to date with the rest of society through the use of ICT in order to combat economic depression and the low level of productivity. This narrative is mainly grounded in economic factors.

Another relevant narrative stresses the need to move away from outdated pedagogies and learning environments in Finnish schools. As Saari and Säntti argue, while the former is constructed around a widely recognised truth, the latter might be contested. The argumentative strategy, for instance, of building a claim for the benefits of digitalisation on scientific results is weak. In addition, pedagogy-related narratives seem to calibrate themselves on securing economic competitiveness and safeguarding consensus on the necessity to update school pedagogy. However, evidence of the actual need for technology-based pedagogies seems to be lacking (Saari & Säntti, 2018, p. 448). Saari and Säntti do not elaborate further on whether consensus has been achieved or not; however, this first wave of digitalisation narratives did indeed raise the issue of general agreement. The first wave unfolded in the period from 2015 to 2018 and included OECD and government documents, as well as general education discourse that extended to the end of the previous government's term. The beginning of the new/second digitalisation wave was marked by the fact that the April 2019 elections resulted in a new government, and that UNESCO published a new working paper in the same year. Other sources that are markers of the transition include institutional strategic plans that aim to establish the principles for the future, and thus to play a part in the new governmental policy. Following the strategy for digitalisation, higher education institutions (HEIs) change the ways that university-based websites and new social media distribute their virtual space. Furthermore, media practices change and become more explicit with regard to future plans for the transformation of education. This means that strategic planning including higher education and the school becomes public on the Internet. In this process, the second wave of digitalisation arises at the intersection of political and rhetorical changes. In addition to expressions directly linked with digitalisation (e.g., digital pedagogy, tools, skills, etc.), rhetorical changes include other related terms (e.g., artificial intelligence and intelligent tools). In this way, the link between digitalisation and teacher education brought forward by Saari and Säntti's study and evident in the first wave of digitalisation remains, as does the main argument that, if the future is to be digitalised, teachers should be able to make this possible. In an effort to clarify the complicated situation, European and worldwide organisations issue reports aimed at encouraging education policies that address the issue of digitalisation. Narratives of digitalisation, then, tie in with discussions concerning the present and future of teacher education. As a result, the latter intersect with European and international documents (e.g., the UNESCO working papers on education policy) and influence one another in terms of, for example, what needs are established and which terms and concepts relate to those needs. On the other hand, we cannot ignore the fact that, as Saari and Säntti argue, official narratives (e.g., OECD, 2015a,b) neglect the historical, ideological and social structure of schools. In this way, the possibility for tension to emerge increases due to overwhelming, yet abstract, promises for education reform and the realities and challenges in schools. The fact that an all-encompassing configuration attributes a mythical fullness to digitalisation makes such tension highly possible.

To explain the mythical dimension, I draw from Laclau's analysis of hegemony and the work of Holma and Kontinen in which the Gramscian perspective is discussed. Laclau links the mythical dimension of a property with the property's own limits. Laclau's argument is that, at some point in history, a property is attributed more meaning than it really possesses. In this sense, the property goes beyond its own limits and, as a result, acquires a mythical dimension. In our case, a mythical dimension means that digitalisation, although only a partial object in the process of social change, is viewed as the property that needs to be developed so that society can successfully deal with contemporary challenges and advancements. This results in radical investment in digitalisation and technology, leading to digitalisation becoming a new hegemonic force in education. Both Laclau (e.g., 2005) and Gramsci (Entwistle, 1979; Holma & Kontinen, 2015) posit that hegemonic forces produce new moral, cultural and symbolic orders. Consequently, new boundaries are constructed. Within this framework, the question of consensus remains open. Consensus etymologically originates from the Latin con (= together) + sentire (= agree), and signifies general agreement over an issue. If, for example, there is agreement among social actors, including teachers, parents and policymakers, that digital transformation is needed in education, then digital pedagogies are introduced in education institutions and pedagogical practice. As consensus requires the agreement of the majority, it is not always possible to trace whether it exists or not. It is, however, possible to trace whether there is no significant objection to a decision, policy or practice. This means that if there is no significant objection there is consent, or that permission is granted for a decision to take effect. Consent can be implied, informed or unanimous. For democratic institutions to work, consent is required, coherence of different voices needs to be built, and shared solutions must be sought in order to, in the end, safeguard democracy itself (Holma & Kontinen, 2015). In the case of digitalisation in education, as mentioned earlier, it is not possible to know whether there is overall agreement about the necessity for digital pedagogy. What we do know, and what our research experience is telling us, is that a number of teachers have consented to integrate technologies into their pedagogical and teaching methods. Nonetheless, the dissociative rhetoric of official documents, as analysed by Saari and Säntti, has brought forward a possible boundary between those in favour and those who resist the "new order". This means that both consensus and consent are at risk, especially when techniques such as praise-blame are used. As the issue cannot be resolved at this point, it is possible that the second wave of digitalisation will deepen the rift if narratives work against social consent for digitally enhanced pedagogies. For consent to exist, building coherence of different voices is needed. In this process, building alliances is essential (Holma & Kontinen, 2015). Alliances establish the ground for coherent voices to take shape (Entwistle, 1979; Holma & Kontinen, 2015), thus influencing the overall discourse. Leaders in education, teachers and educational researchers are examples of actors whose voices are critical in the process of decision making and policymaking.

Although there are power relations influencing how these roles are played out in the political reality, it is not the purpose of the present study to discuss these hierarchies. Moreover, the study takes it for granted that these roles form categories of specialists who are not necessarily elites. In addition, their perspectives should be considered in educational policymaking. These specialists' voices intertwine and interrelate and are distinct from the articulations of policy documents. Coherence can arise when practitioners'/specialists' voices and policies resonate with one another. In other words, the voices of the actors involved should echo one another and be internally coherent. To this end, they need to be part of the overall discourse. Policies are normally based on research results and accounts of good practices, and, much like in the case of metaphors and allusion, the relation is unidirectional. For instance, education policies issued in 2019 reflect practices applied prior to that time, while the opposite cannot occur. This means that policy documents allude to other policy documents as well as to other narratives that precede them in time. Our task here is to determine what kind of allusions these are. The UNESCO papers target education policymakers and aim to anticipate the extent to which digitalisation and artificial intelligence (AI) affect the education sector. As a matter of fact, the 2019 paper shifts the discourse from "digital" to "artificial intelligence", which is a marker of the transition to the second wave of digitalisation. In the discussion, the working paper explores how governments and education institutions rethink and rework education programmes, and the challenges and policy implications that should be the focus of global and local conversations. In order to trace how policies and practices resonate with one another, and the degree to which their relation is directly or indirectly referential, the study will examine metaphors and allusions of digitalisation. Considering these, the study aims to examine second wave digitalisation narratives in EU policy documents in order to understand how these relate to practices in the domains of technology and education. To this end, the study will offer a critical discussion of the UNESCO working papers of 2017 and 2019 in relation to research studies on the integration of digital technologies into schools during the period 2012-2016. The selection of documents was based on the fact that UNESCO papers influence education policy and practice at different levels, ranging from the local to the international. Therefore, by discussing and analysing the agency's policy documents in relation to research findings, this paper aims to contribute to the overall discourse of digitalisation in education in Finland and Europe, as well as internationally.

Metaphors, digitalisation and digital pedagogy

According to metaphor theorists (e.g., Lakoff, 1993; Lakoff & Johnson, 1980; Ricoeur, 1978; Steen, 2011), a metaphor occurs when we talk about something by means of something else, and therefore a stretch or twist is required for sense making. This metaphorical twist involves a movement to a target domain from a source domain. In our case, digitalisation in education is a metaphorical phrase that requires a stretch of thinking from the technological to the educational domain in order to better understand what technology-enhanced practices involve.

Considering first wave narratives, digitalisation is a twenty-first century metaphor that signals a strategic approach to the thorough transformation of the learning space environment, one that requires pedagogical adjustments with the collaboration of experts from various domains (Haukijärvi, 2016). A metaphor does not necessarily only mark direct references to a target domain. In the case of digitalisation, for instance, the need to make changes in pedagogical methods is a direct reference within the totality of education discourse. As a result, the term digital pedagogy emerges. What constitutes digital pedagogy, however, remains obscure until it is defined in terms of what conditions the digital dimension generates and what new teaching/learning environments arise. In this sense, the reference to digital pedagogy is, rather than explicit, less direct and more covert. Therefore, this kind of reference to digital pedagogy is indirectly referential, and thus allusive in an implicit way. For further elaboration, I will use the FINNABLE2020 project as an example of direct reference drawing from the field of research and practice. The FINNABLE2020 project is an example of direct reference in the sense that its rationale explicitly states the purpose of digitalisation in education. It is an umbrella project that covers a range of areas, the Boundless Classroom being one of them. The Boundless Classroom encapsulates the intention to use multiple technologies systematically and create a unified and coordinated learning-for-engagement-with-fun experience for primary and secondary students by combining and dispersing elements of a story across multiple web-based, digital channels and connected classrooms. For this purpose, digital storytelling was developed as a pedagogical/teaching method based on a learner-centred approach aimed at enabling learning through the use of digital devices and language for the production of stories in a video format. The overall aim was to give students a chance to tell their own stories about the topic under discussion, to highlight participatory practices, to increase engagement in the topic, to sustain collaborative efforts and to encourage shared learning and creativity. The conceptual basis of the implementation was grounded in the relevant literature (e.g., Lambert, 2013; McGee, 2015; Woodhouse, 2008). Based on the above, the material I draw on here includes research and studies performed by the research teams of CICERO Learning, the research unit at the University of Helsinki, and other relevant work. This paper, then, is to some degree an attempt to summarise our studies and projects (Niemi & Multisilta, 2016; Vivitsou, 2016, 2018, 2019a) with a focus on the integration of digital technologies in schools. Research studies themselves constitute narratives that synthesise the overall education discourse on digitalisation. As the narratives of the study will be discussed within a storytelling framework with a focus on integration, not only the main storyline dimensions of technology and pedagogical practice will be considered. Moreover, the settings where the events of the narrative unfold will be part of the discussion. In the case of technology-enhanced pedagogies, settings include the environments for teaching and learning that emerge through the integration of technology in pedagogical practice.

Technology integration in pedagogical practice

The Boundless Classroom/Digital Storytelling project attracted the attention and participation of teachers, students and schools across countries and continents. It involved parent/guardian permission and included introductory sessions at which researchers communicated the project aims to the school community. In this sense, it would be safe to claim that the integration of digital technologies in the school was realised on the grounds of the informed consent of the parties involved. For research purposes, the digital storytelling-related research and studies involved surveys, field notes, observations and interview data arising while the international projects were organised and coordinated by the University of Helsinki during the period 2012-14. At that time, students from Finland, Greece and California, and later China, were involved in making and sharing digital stories with peers across classrooms and countries using a web-based environment. From the start, therefore, there was an emphasis on hybrid/blended learning environments. Hybrid or blended learning environments combine formal and informal settings and can include virtual classrooms, real-life classrooms, field trips and so on. As a result, in this kind of learning, not only does context collapse in the hybrid situations, but time collapses as well. In their study, Marwick and Boyd argue that context collapse occurs when real-life and virtual worlds are in ongoing interaction. Consequently, real-life, face-to-face communication purposes intertwine and become inseparable from the connected interactions. In this sense, the two contexts collapse within each other. In our research experience, evidence of this phenomenon is provided by the fact that schoolwork extends to after-school hours and involves multiple actors (i.e., students, parents, teachers, software developers, and so on). Consequently, both context and time collapse. Such a complex situation requires a pluralist orientation and involves blending methods in order to cover, for example, the need for the adaptation of previous course design and existing tools to accommodate digital-related objectives, to establish a participatory culture, to produce multimodal texts, and to address audiences by developing topic-based argumentation in storytelling. In connected classrooms, pluralism also involves consideration of using multiple languages for communication, awareness of peers' background contexts, histories and perspectives, and deep engagement (Niemi & Multisilta, 2016) in order for student initiative to emerge (Vivitsou, 2016, 2018, 2019a).

Hybrid/blended learning environments for shared solutions

Considering the above, it is evident that hybrid learning situations very much depend on teachers' recursive practices taking action in both virtual and real-life classroom environments. This means that teachers construct professional knowledge in-action and at multiple levels, while observing students performing tasks and modifying decisions in situ. Recursive practices match the current need for flexible and hybridised teaching to guide and support students through the complexities of the digital era, as long as technological design satisfies such needs. In their studies, Niemi et al. and Niemi and Multisilta found that virtual spaces can encourage knowledge construction and information seeking, while the combination of formal and informal elements allows student initiative to develop with a focus on the subject matter. Overall, quantitative and qualitative analyses of the studies converge, in that multimodalities require literacies and competences that relate to the digital element (e.g., creating, shooting, remixing stories), while collaboration toward shared solutions is a unifying principle and work in groups in both virtual and natural/real-life contexts is common ground. The ultimate purpose of hybrid/blended learning environments is, therefore, to become spaces where different voices speak in a coherent manner in order to work jointly for shared solutions. In the spirit of pluralism, an overall reconceptualisation of teaching is needed, one that considers fluidity over predefined scripts and in-action professional development. Using web-based platforms for pedagogical purposes opens up a whole array of possibilities for activities and collaborative work to both structure and problematise the process and support student work. This type of support is determined when teachers plan classroom work and design the course of action. In this sense, a new pedagogical genre emerges, one that encompasses the ways of acting and the purposes of those who act in order to generate cross-cutting text types, ranging from descriptive to expository to narrative to dialogic and reflective. Teachers' consent to use digital means attests to these insights. The official rhetoric (e.g., Finnish National Board of Education, 2014; OECD, 2015a,b), however, separates the high level of teacher expertise and ICT use from each other without designating the particular areas in which teacher expertise fails digitalisation. Actually, it might be the other way around. This makes first-wave narratives allusive and implicit, very often hiding meanings. Allusion is a reference that is indirect, in the sense that it requires more associations than mere substitution of a referent; it often draws on information that is not readily available; it is typically but not necessarily brief; and it may or may not be literary in nature. Indirect reference is necessary but does not constitute a sufficient condition for allusion. For this reason, authorial intent and the possibility of detection in principle are required. Irwin contends that authorial intent, although difficult to prove, is an epistemological and hermeneutical issue, and, as such, needs thorough investigation. This can occur through the discussion and analysis of in-text associations with other texts and narratives. Considering these, the present study, rather than seeking intentions, aims to trace allusions and references in policy documents through associations, in order to discuss how they influence the domains of technology and education. To do so, the study will seek to respond to the following research questions:

1. How do themes from the domains of technology and education relate in the first and second wave narratives of digitalisation found in policy documents?
2. What overt and covert references to hybrid/blended learning environments and collaboration emerge?

Methods

In order to discuss and analyse types of allusion and reference, the study will use qualitative methods and a critical discourse analytical framework. To achieve this, changes in the first and second waves of digitalisation will be examined, with a focus on how relevant terms are used. Following this, direct references (i.e., out in the open, overt) and indirect references (i.e., implied, covert) will be discussed in relation to research-based narratives with a focus on learning environments and collaboration. To this end, thematic analysis and discourse analysis will be used to discuss the UNESCO 2017 and 2019 policy documents in relation to earlier research and studies on the integration of digital technologies in the classroom. For the initial analysis of the study, a keyword search was performed throughout the 2019 document to trace sections containing occurrences of phrases from the domains of technology and education. From the domain of technology, the lexical item "digital" and variations of "artificial intelligence" (i.e., in lower case, upper case and in the form of the initials AI) were used. From the domain of education, the items "teacher" and "teaching" were used. The results were compared with relevant searches in the 2017 UNESCO working paper. In this way, both first and second wave narratives of digitalisation were included in the database. A thematic analysis then followed, in order to identify key categories in the 2019 paper. Finally, a post-foundational discourse framework of analysis was applied to examine these types of text in relation to developments in both the technological and the educational domain.

Findings

As mentioned above, the 2019 paper shifts the discourse from "digital" to "artificial intelligence" and marks the transition to the second wave of digitalisation. In this transition process, keyword search findings indicate that the appearance of the "digital" element is still quite marked, while links are drawn to build the AI narrative in education. As discussed below, frequencies of the use of key terms play a role in this shift in discourse.

Terms and frequency of use

As shown in Table 1 below, there is more frequent use of the adjective-noun phrase "artificial intelligence" and less frequent appearance of "teacher" and "teaching" in the 2017 and 2019 working papers. More particularly, initial analysis shows an increased occurrence of various forms of artificial intelligence in the 2019 document. In contrast, occurrences of "digital" decrease compared to the 2017 document. For instance, "digital" appears in 638 adjective-noun phrases in 2017, but in only 96 in 2019. On the other hand, the term artificial intelligence appears in a total of 441 uses in 2019. In addition, the items teacher and teaching appear in 84 and 16 mentions, respectively, in 2019, but 197 and 37, respectively, in 2017. Following this, a later stage of the analysis focuses on the 2019 paper and aims to identify which sections make use of the digital item. The findings show that, of the three sections of the document, Section II mainly uses phrases such as digital technologies, digital skills, digital competence/competencies and digital literacy. While these appear in subsections discussing preparing learners and the need for a new curriculum, the occurrences of digital are scarce in subsections about post-basic education and higher education. In contrast, the frequency of use of artificial intelligence and its variations increases throughout the 2019 working paper.

Main themes

Thematic analysis in the sections most relevant to the domain of education reveals two major themes in the 2019 paper: preparing learners and preparing teachers.

Preparing learners for future demands

The thematic analysis draws from Sections II and III of the 2019 document. More particularly, Section II, entitled "Preparing learners to thrive in the future with AI", presents examples from different contexts, while its subsection on the new curriculum for a digital and AI-powered world elaborates on the importance of advancing digital competency frameworks for teachers and students. This part points out the importance of developing new skills to create and decode digital technologies, and illustrates curricular reform efforts in many countries. The latter reveal the need for skills that would allow learners to identify and solve problems using computing techniques, methods and technologies. The word digital is used in adjective-noun phrases to modify words like technologies, skills and competencies. These combinations lead to the articulation of the main objective, which is to develop learner abilities to analyse, use and decode AI, as a powerful technology whose scope, limitations, potential and challenges need to be understood. The following subsection concerns digital competencies frameworks, presenting examples of frameworks and definitions of digital literacy and competencies. One of the example frameworks underlines the need for teachers to both manage digital technologies and teach them to students, in order to help students to be capable of collaborating, solving problems and being creative in the use of digital technologies. Computational Thinking (CT) is the title of the last part of the section containing the frequent use of mainly digital + noun phrases. This subsection points out the interdisciplinary nature of CT, in the sense that it finds applications in disciplines other than computer science. According to the document, the presence of AI in the workplace is increasing, which makes CT a critical competency if learners are to cope with changing labour market demands. Examples of the level of CT integration in curricula follow. In these examples, countries are clustered based on the universal recognition across the EU of the importance of integrating CT. The main categories include countries that have commenced a curriculum review and redevelopment, those that are planning to introduce such a review, and those that have a longstanding tradition of computer science education, particularly in secondary school. The subsection that follows concerns higher education and contains no use of the word digital, while the appearance of AI variations becomes more frequent than in the preceding sections.

Challenges in preparing teachers

Section III concerns challenges and policy implications, explaining that these should be part of global and local conversations on the possibilities and risks of introducing AI in education. One challenge is to prepare teachers for AI-powered education. This is a two-way path: on the one hand, teachers must learn new digital skills to use AI in a pedagogical and meaningful way, while, on the other, AI developers must learn how teachers work and create sustainable solutions in real-life environments. The section discussing how to prepare teachers for AI-powered education and how to prepare AI to understand education points out that the effectiveness of learning analytics systems lies in their usefulness and relevance to both learners and educators. The claim is made here that teachers should be given autonomy to manage classrooms and schools, as it is teachers who are most familiar with learners' needs. The report concludes that teachers will remain at the frontline of education, adding that it is misinformed to claim that AI can replace teachers. In this respect, teacher training is a critical aspect of teacher empowerment to use education data to improve pedagogy. Training programmes should account for new competencies and aim for a clear understanding of and a critical perspective on technologies, and for the development of research and data analytical skills, to eventually enable teachers to take advantage of AI.

Overt and covert references

Following the thematic analysis, Sections II and III and their subsections were further analysed in order to trace overt and covert references in the policy document to the educational domains related to hybrid learning environments and collaboration. Table 2 below shows part of the results of this analysis. As shown in Table 2, in the domain of technology, there are overt references to digital devices and technologies, and thus an explicit link is established with tools used in blended environments. However, the link is rather abstract and generic and, as such, very loose. The link is more explicit in the area of collaboration, although an elaboration of the conditions that should underlie the collaboration of academic institutions with the private sector would make the picture a lot clearer. Links with the domain of education are less overt in relation to both learning environments and collaboration. Again, references are rather superficial, without properly elaborating, for example, what relations exist between "digital era", "identity" and "society", and how they interact with reality.

Discussion

This is a first attempt to discuss and analyse narratives of digitalisation and draw conclusions about how these relate to developments in the domains of technology and education. More studies are therefore needed to further investigate the phenomenon and confirm or falsify the findings. For this purpose, more documents should be included in the database for analysis, drawing from the work of other social partners and actors, such as NGOs, with experience in the training and application of digital methods and literacies. The findings could then be discussed in relation to the domains of technology and education. The present study offers insights into three main categories of examination. In terms of narrative shifts, the findings of keyword searches show that there is a movement from the digital element to the notion of artificial intelligence. More studies are therefore needed on the definitions, significations and applications of artificial intelligence, since the discussion is heading in that direction. Although there is no direct reference (e.g., quotation) to it, the 2019 paper echoes the earlier report on "Digital skills for life and work". Thus, the narrative arguing for the need to digitalise education and provide online learning opportunities continues. This provides evidence of allusion in the more recent report, which, along with stylistic similarity and lexical properties, echoes the earlier one. This manifestation of allusion occurs in more and less explicit ways, and is mainly indirect and articulated through the use of phrases containing the word digital. Consequently, the references to what online learning opportunities will be like are more covert than overt, and the meanings become, to some degree, concealed, especially in relation to hybrid/blended environments and collaboration. There are therefore gaps in the report from a methodological, conceptual/theoretical and practical point of view. For example, problems arising from recursive practices in hybrid situations, and solutions to tackle these problems, are not mentioned. Eventually, even if we consider the aphorism included in the report as evidence of critical discussion, the lack of associations with other parts of the text makes the allusion lose its meaning. The aphorism refers to innovations in education as full of lost promises, through failing to understand how teachers work and the culture of schools. As a result, the report suggests, AI developers need to participate in new dialogues with educators, content designers and cross-disciplinary specialists. Although the section opens up space to draw parallels with computational thinking and ground CT within the overall education discourse, this opportunity is not used.

Conclusions

The present study discusses and analyses first and second wave narratives of digitalisation in policy documents and examines how shifts in thematic choices and terminology relate to developments in the domains of technology and education. This is an important task, as government and European documents establish the ground for a consensus favourable to digitalisation in education. The importance of the task lies in the fact that the praise-blame rhetorical technique employed in first wave policy documents makes discourses of consensus and boundaries of consent blurry and divides interpretations of the teachers' role in education. On the one hand, the expertise of Finnish teachers is recognised as being high quality; on the other, their use of technology is supposed to be meagre. Our research experience (e.g., Vivitsou, 2016, 2018, 2019a) in the area of integrating digital technologies into classroom practices, however, has shown that teachers consent to technological integration on the basis that it expands the learning environment from conceptual, methodological and practical points of view that take into consideration the nature of digitalisation, artificial intelligence, virtual reality and other configurations of technology. Claiming that there is no need for digital pedagogy is therefore an oversimplification. However, pedagogical adjustment would require a marked reference to both parts of the adjective-noun phrase (i.e., DIGITAL PEDAGOGY, instead of DIGITAL pedagogy) to balance out the metaphor (Vivitsou, 2019b). While the argument for pedagogical adjustments is valid, a more sophisticated approach to pedagogy is needed. Practically, this means that if teachers and the wider community are to keep renewing their consent to technology-enhanced practices, teacher education and related narratives should incorporate more critical and socially embedded paradigms and approaches to technology. As Holma and Kontinen point out, social consent requires building alliances to articulate coherent voices able to balance out the hegemonic force of technology and safeguard democracy. Teachers are certainly one part of this constantly evolving equation. It becomes more and more evident nowadays, for instance, that the model of deregulation that Finland has adopted opens the door to privatisation and marketisation in education. As Hovemark et al. explain, deregulation means that state rules on the internal work of schools are delegated to lower administrative levels such as municipalities. Thus, instead of being the main provider, the state becomes the regulator of a system that becomes more and more market-oriented. The second wave of deregulation in the 2000s presents an example of the complexity of the situation. During that time, the attempt to create a school market by profiling schools, using privatisation and intensified school choice, gave rise to questions of segregation and differentiation. This put at risk the profile and the essence of Finnish education as a system that combines top quality with equality, equity and equal opportunities. Consequently, the integration of technology in schools creates scepticism in the wider society as to who is going to be authoring the narrative of education in the years to come. Technology actors such as AI developers and companies are part of the alliance-building process. However, the limits of their role still remain uninvestigated and underdefined at this point. The UNESCO policy documents, for instance, make direct reference to AI developers and partnerships between education, industry and the private sector, but they do so in a generic and abstract manner. As the stakes are high, such partnerships should be thoroughly described, because, in the end, we will be called upon to answer hard questions regarding, inter alia, who will be making decisions: Educators? Developers? Companies? All of these? And under what conditions? Unfortunately, the working papers do not respond to these questions at the moment; in fact, they barely pose them. From the point of view of rhetoric, it seems that the more the direct references to the technology-based narrative increase, the more the references to pedagogy and teachers/teaching decrease and become more covert and, ultimately, concealed. However, this opens up the space for further dispute rather than consent. According to Irwin's interpretation, we can construct allusions to purposefully elicit and include the reader's response. Moreover, Irwin adds, the goal is to please the reader/receiver of the intended message, albeit indirectly. As a matter of fact, etymology concurs with this view. Allusion has a Latin origin and stems from alludere (= to mock, play with). This insight offers a specific purposefulness to allusion and adds to its metaphorical gamut. On this basis, we might need to consider the possibility that allusions are by nature incomplete and that the process of completing them is a productive one, which results in the most important element of the text always being missing. In our case, what is missing at the moment is a more generalised effort to bring together think tanks and research to establish shared spaces where shared intelligences can confer and negotiate toward shared solutions to common problems. |
Effect of sodium hypochlorite, isopropyl alcohol and chlorhexidine on the epoxy sealer penetration into the dentinal tubules. BACKGROUND The sealer penetration into the dentinal tubules might be beneficial, especially in necrotic endodontic cases, as it provides the obstruction of the contaminated tubules. OBJECTIVES To determine the effect of 3 final irrigants (sodium hypochlorite (NaOCl), alcohol and chlorhexidine (CHX)) on the penetration of an epoxy sealer into the dentinal tubules. MATERIAL AND METHODS The study was carried out on 60 single-canal human teeth with straight roots. The root canals were prepared to the ISO 40/04 size, using the Reciproc® instruments. The teeth were divided into 4 groups (n = 15). The canals in each group were irrigated according to the following scheme: group 1 (control) - 5.25% NaOCl; group 2 - smear layer removal (40% citric acid (CA) and 5.25% NaOCl) and 5.25% NaOCl; group 3 - smear layer removal (as in group 2), and 40% CA, water and 98% isopropyl alcohol; and group 4 - smear layer removal (as in group 2), and 40% CA, water and 2% CHX. The root canals were filled using the vertical condensation technique with gutta-percha and the porphyrin-labeled AH Plus™ sealer. After 3 days, 1-millimeter-thick cross-section slices were cut from the roots at a distance of 2 mm, 5 mm and 8 mm from the apex. The sections were imaged under a confocal microscope and the sealer penetration depth into the dentinal tubules was measured. RESULTS The longest resin tags in all parts of the roots were found in group 4 (CHX), and the shortest in group 1 (control). The mean depth of the sealer penetration (in micrometers) was as follows: 21, 22 and 23 (group 1); 201, 231 and 374 (group 2); 170, 232 and 280 (group 3); and 330, 408 and 638 (group 4) in the apical, middle and coronal parts, respectively. CONCLUSIONS The final irrigation with CHX resulted in the deepest penetration of the epoxy sealer into the tubules. Isopropyl alcohol had the most negative impact on the sealer penetration into the tubules.
package com.simon.credit.toolkit.ext.http;
import java.io.IOException;
import java.lang.reflect.Type;
import java.net.URI;
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
import java.util.Set;
import javax.xml.transform.Source;
import org.springframework.core.ParameterizedTypeReference;
import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpMethod;
import org.springframework.http.MediaType;
import org.springframework.http.RequestEntity;
import org.springframework.http.ResponseEntity;
import org.springframework.http.client.ClientHttpRequest;
import org.springframework.http.client.ClientHttpRequestFactory;
import org.springframework.http.client.ClientHttpResponse;
import org.springframework.http.client.support.InterceptingHttpAccessor;
import org.springframework.http.converter.ByteArrayHttpMessageConverter;
import org.springframework.http.converter.GenericHttpMessageConverter;
import org.springframework.http.converter.HttpMessageConverter;
import org.springframework.http.converter.ResourceHttpMessageConverter;
import org.springframework.http.converter.StringHttpMessageConverter;
import org.springframework.http.converter.feed.AtomFeedHttpMessageConverter;
import org.springframework.http.converter.feed.RssChannelHttpMessageConverter;
import org.springframework.http.converter.json.GsonHttpMessageConverter;
import org.springframework.http.converter.json.MappingJackson2HttpMessageConverter;
import org.springframework.http.converter.support.AllEncompassingFormHttpMessageConverter;
import org.springframework.http.converter.xml.Jaxb2RootElementHttpMessageConverter;
import org.springframework.http.converter.xml.MappingJackson2XmlHttpMessageConverter;
import org.springframework.http.converter.xml.SourceHttpMessageConverter;
import org.springframework.util.Assert;
import org.springframework.util.ClassUtils;
import org.springframework.web.client.DefaultResponseErrorHandler;
import org.springframework.web.client.RequestCallback;
import org.springframework.web.client.ResourceAccessException;
import org.springframework.web.client.ResponseErrorHandler;
import org.springframework.web.client.ResponseExtractor;
import org.springframework.web.client.RestClientException;
import org.springframework.web.client.RestOperations;
import org.springframework.web.util.AbstractUriTemplateHandler;
import org.springframework.web.util.DefaultUriTemplateHandler;
import org.springframework.web.util.UriTemplateHandler;
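/**
 * Synchronous client for RESTful HTTP calls, modeled on Spring's {@code RestOperations} contract:
 * it expands URI templates, converts request and response bodies through the registered
 * {@link HttpMessageConverter}s, and delegates error handling to a {@link ResponseErrorHandler}.
 * Optional converters (feeds, JSON, XML) are registered only when the backing library is present
 * on the classpath.
 */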
public class RestTemplate extends InterceptingHttpAccessor implements RestOperations {
	private static final boolean romePresent = ClassUtils.isPresent("com.rometools.rome.feed.WireFeed", RestTemplate.class.getClassLoader());
private static final boolean jaxb2Present = ClassUtils.isPresent("javax.xml.bind.Binder", RestTemplate.class.getClassLoader());
private static final boolean jackson2Present = ClassUtils.isPresent("com.fasterxml.jackson.databind.ObjectMapper", RestTemplate.class.getClassLoader())
&& ClassUtils.isPresent("com.fasterxml.jackson.core.JsonGenerator", RestTemplate.class.getClassLoader());
private static final boolean jackson2XmlPresent = ClassUtils.isPresent("com.fasterxml.jackson.dataformat.xml.XmlMapper", RestTemplate.class.getClassLoader());
private static final boolean gsonPresent = ClassUtils.isPresent("com.google.gson.Gson", RestTemplate.class.getClassLoader());
private final List<HttpMessageConverter<?>> messageConverters = new ArrayList<HttpMessageConverter<?>>();
private ResponseErrorHandler errorHandler = new DefaultResponseErrorHandler();
private UriTemplateHandler uriTemplateHandler = new DefaultUriTemplateHandler();
private final ResponseExtractor<HttpHeaders> headersExtractor = new HeadersExtractor();
public RestTemplate() {
this.messageConverters.add(new ByteArrayHttpMessageConverter());
this.messageConverters.add(new StringHttpMessageConverter());
this.messageConverters.add(new ResourceHttpMessageConverter());
this.messageConverters.add(new SourceHttpMessageConverter<Source>());
this.messageConverters.add(new AllEncompassingFormHttpMessageConverter());
if (romePresent) {
this.messageConverters.add(new AtomFeedHttpMessageConverter());
this.messageConverters.add(new RssChannelHttpMessageConverter());
}
if (jackson2XmlPresent) {
this.messageConverters.add(new MappingJackson2XmlHttpMessageConverter());
} else if (jaxb2Present) {
this.messageConverters.add(new Jaxb2RootElementHttpMessageConverter());
}
if (jackson2Present) {
this.messageConverters.add(new MappingJackson2HttpMessageConverter());
} else if (gsonPresent) {
this.messageConverters.add(new GsonHttpMessageConverter());
}
}
public RestTemplate(ClientHttpRequestFactory requestFactory) {
this();
setRequestFactory(requestFactory);
}
public RestTemplate(List<HttpMessageConverter<?>> messageConverters) {
Assert.notEmpty(messageConverters, "At least one HttpMessageConverter required");
this.messageConverters.addAll(messageConverters);
}
public void setMessageConverters(List<HttpMessageConverter<?>> messageConverters) {
Assert.notEmpty(messageConverters, "At least one HttpMessageConverter required");
// Take getMessageConverters() List as-is when passed in here
if (this.messageConverters != messageConverters) {
this.messageConverters.clear();
this.messageConverters.addAll(messageConverters);
}
}
public List<HttpMessageConverter<?>> getMessageConverters() {
return this.messageConverters;
}
public void setErrorHandler(ResponseErrorHandler errorHandler) {
Assert.notNull(errorHandler, "ResponseErrorHandler must not be null");
this.errorHandler = errorHandler;
}
public ResponseErrorHandler getErrorHandler() {
return this.errorHandler;
}
public void setDefaultUriVariables(Map<String, ?> defaultUriVariables) {
String msg = "Can only use this property in conjunction with an AbstractUriTemplateHandler";
Assert.isInstanceOf(AbstractUriTemplateHandler.class, this.uriTemplateHandler, msg);
((AbstractUriTemplateHandler) this.uriTemplateHandler).setDefaultUriVariables(defaultUriVariables);
}
public void setUriTemplateHandler(UriTemplateHandler handler) {
Assert.notNull(handler, "UriTemplateHandler must not be null");
this.uriTemplateHandler = handler;
}
public UriTemplateHandler getUriTemplateHandler() {
return this.uriTemplateHandler;
}
// GET
@Override
public <T> T getForObject(String url, Class<T> responseType, Object... uriVariables) throws RestClientException {
RequestCallback requestCallback = acceptHeaderRequestCallback(responseType);
HttpMessageConverterExtractor<T> responseExtractor = new HttpMessageConverterExtractor<T>(responseType, getMessageConverters(), logger);
return execute(url, HttpMethod.GET, requestCallback, responseExtractor, uriVariables);
}
@Override
public <T> T getForObject(String url, Class<T> responseType, Map<String, ?> uriVariables) throws RestClientException {
RequestCallback requestCallback = acceptHeaderRequestCallback(responseType);
HttpMessageConverterExtractor<T> responseExtractor = new HttpMessageConverterExtractor<T>(responseType, getMessageConverters(), logger);
return execute(url, HttpMethod.GET, requestCallback, responseExtractor, uriVariables);
}
@Override
public <T> T getForObject(URI url, Class<T> responseType) throws RestClientException {
RequestCallback requestCallback = acceptHeaderRequestCallback(responseType);
HttpMessageConverterExtractor<T> responseExtractor = new HttpMessageConverterExtractor<T>(responseType, getMessageConverters(), logger);
return execute(url, HttpMethod.GET, requestCallback, responseExtractor);
}
@Override
public <T> ResponseEntity<T> getForEntity(String url, Class<T> responseType, Object... uriVariables) throws RestClientException {
RequestCallback requestCallback = acceptHeaderRequestCallback(responseType);
ResponseExtractor<ResponseEntity<T>> responseExtractor = responseEntityExtractor(responseType);
return execute(url, HttpMethod.GET, requestCallback, responseExtractor, uriVariables);
}
@Override
public <T> ResponseEntity<T> getForEntity(String url, Class<T> responseType, Map<String, ?> uriVariables) throws RestClientException {
RequestCallback requestCallback = acceptHeaderRequestCallback(responseType);
ResponseExtractor<ResponseEntity<T>> responseExtractor = responseEntityExtractor(responseType);
return execute(url, HttpMethod.GET, requestCallback, responseExtractor, uriVariables);
}
@Override
public <T> ResponseEntity<T> getForEntity(URI url, Class<T> responseType) throws RestClientException {
RequestCallback requestCallback = acceptHeaderRequestCallback(responseType);
ResponseExtractor<ResponseEntity<T>> responseExtractor = responseEntityExtractor(responseType);
return execute(url, HttpMethod.GET, requestCallback, responseExtractor);
}
// HEAD
@Override
public HttpHeaders headForHeaders(String url, Object... uriVariables) throws RestClientException {
return execute(url, HttpMethod.HEAD, null, headersExtractor(), uriVariables);
}
@Override
public HttpHeaders headForHeaders(String url, Map<String, ?> uriVariables) throws RestClientException {
return execute(url, HttpMethod.HEAD, null, headersExtractor(), uriVariables);
}
@Override
public HttpHeaders headForHeaders(URI url) throws RestClientException {
return execute(url, HttpMethod.HEAD, null, headersExtractor());
}
// POST
@Override
public URI postForLocation(String url, Object request, Object... uriVariables) throws RestClientException {
RequestCallback requestCallback = httpEntityCallback(request);
HttpHeaders headers = execute(url, HttpMethod.POST, requestCallback, headersExtractor(), uriVariables);
return headers.getLocation();
}
@Override
public URI postForLocation(String url, Object request, Map<String, ?> uriVariables) throws RestClientException {
RequestCallback requestCallback = httpEntityCallback(request);
HttpHeaders headers = execute(url, HttpMethod.POST, requestCallback, headersExtractor(), uriVariables);
return headers.getLocation();
}
@Override
public URI postForLocation(URI url, Object request) throws RestClientException {
RequestCallback requestCallback = httpEntityCallback(request);
HttpHeaders headers = execute(url, HttpMethod.POST, requestCallback, headersExtractor());
return headers.getLocation();
}
@Override
public <T> T postForObject(String url, Object request, Class<T> responseType, Object... uriVariables) throws RestClientException {
RequestCallback requestCallback = httpEntityCallback(request, responseType);
HttpMessageConverterExtractor<T> responseExtractor = new HttpMessageConverterExtractor<T>(responseType, getMessageConverters(), logger);
return execute(url, HttpMethod.POST, requestCallback, responseExtractor, uriVariables);
}
@Override
public <T> T postForObject(String url, Object request, Class<T> responseType, Map<String, ?> uriVariables) throws RestClientException {
RequestCallback requestCallback = httpEntityCallback(request, responseType);
HttpMessageConverterExtractor<T> responseExtractor = new HttpMessageConverterExtractor<T>(responseType, getMessageConverters(), logger);
return execute(url, HttpMethod.POST, requestCallback, responseExtractor, uriVariables);
}
@Override
public <T> T postForObject(URI url, Object request, Class<T> responseType) throws RestClientException {
RequestCallback requestCallback = httpEntityCallback(request, responseType);
HttpMessageConverterExtractor<T> responseExtractor = new HttpMessageConverterExtractor<T>(responseType, getMessageConverters());
return execute(url, HttpMethod.POST, requestCallback, responseExtractor);
}
@Override
public <T> ResponseEntity<T> postForEntity(String url, Object request, Class<T> responseType, Object... uriVariables) throws RestClientException {
RequestCallback requestCallback = httpEntityCallback(request, responseType);
ResponseExtractor<ResponseEntity<T>> responseExtractor = responseEntityExtractor(responseType);
return execute(url, HttpMethod.POST, requestCallback, responseExtractor, uriVariables);
}
@Override
public <T> ResponseEntity<T> postForEntity(String url, Object request, Class<T> responseType, Map<String, ?> uriVariables) throws RestClientException {
RequestCallback requestCallback = httpEntityCallback(request, responseType);
ResponseExtractor<ResponseEntity<T>> responseExtractor = responseEntityExtractor(responseType);
return execute(url, HttpMethod.POST, requestCallback, responseExtractor, uriVariables);
}
@Override
public <T> ResponseEntity<T> postForEntity(URI url, Object request, Class<T> responseType) throws RestClientException {
RequestCallback requestCallback = httpEntityCallback(request, responseType);
ResponseExtractor<ResponseEntity<T>> responseExtractor = responseEntityExtractor(responseType);
return execute(url, HttpMethod.POST, requestCallback, responseExtractor);
}
// PUT
@Override
public void put(String url, Object request, Object... uriVariables) throws RestClientException {
RequestCallback requestCallback = httpEntityCallback(request);
execute(url, HttpMethod.PUT, requestCallback, null, uriVariables);
}
@Override
public void put(String url, Object request, Map<String, ?> uriVariables) throws RestClientException {
RequestCallback requestCallback = httpEntityCallback(request);
execute(url, HttpMethod.PUT, requestCallback, null, uriVariables);
}
@Override
public void put(URI url, Object request) throws RestClientException {
RequestCallback requestCallback = httpEntityCallback(request);
execute(url, HttpMethod.PUT, requestCallback, null);
}
// PATCH
@Override
public <T> T patchForObject(String url, Object request, Class<T> responseType, Object... uriVariables) throws RestClientException {
RequestCallback requestCallback = httpEntityCallback(request, responseType);
HttpMessageConverterExtractor<T> responseExtractor = new HttpMessageConverterExtractor<T>(responseType, getMessageConverters(), logger);
return execute(url, HttpMethod.PATCH, requestCallback, responseExtractor, uriVariables);
}
@Override
public <T> T patchForObject(String url, Object request, Class<T> responseType, Map<String, ?> uriVariables) throws RestClientException {
RequestCallback requestCallback = httpEntityCallback(request, responseType);
HttpMessageConverterExtractor<T> responseExtractor = new HttpMessageConverterExtractor<T>(responseType, getMessageConverters(), logger);
return execute(url, HttpMethod.PATCH, requestCallback, responseExtractor, uriVariables);
}
@Override
public <T> T patchForObject(URI url, Object request, Class<T> responseType) throws RestClientException {
RequestCallback requestCallback = httpEntityCallback(request, responseType);
HttpMessageConverterExtractor<T> responseExtractor = new HttpMessageConverterExtractor<T>(responseType, getMessageConverters());
return execute(url, HttpMethod.PATCH, requestCallback, responseExtractor);
}
// DELETE
@Override
public void delete(String url, Object... uriVariables) throws RestClientException {
execute(url, HttpMethod.DELETE, null, null, uriVariables);
}
@Override
public void delete(String url, Map<String, ?> uriVariables) throws RestClientException {
execute(url, HttpMethod.DELETE, null, null, uriVariables);
}
@Override
public void delete(URI url) throws RestClientException {
execute(url, HttpMethod.DELETE, null, null);
}
// OPTIONS
@Override
public Set<HttpMethod> optionsForAllow(String url, Object... uriVariables) throws RestClientException {
ResponseExtractor<HttpHeaders> headersExtractor = headersExtractor();
HttpHeaders headers = execute(url, HttpMethod.OPTIONS, null, headersExtractor, uriVariables);
return headers.getAllow();
}
@Override
public Set<HttpMethod> optionsForAllow(String url, Map<String, ?> uriVariables) throws RestClientException {
ResponseExtractor<HttpHeaders> headersExtractor = headersExtractor();
HttpHeaders headers = execute(url, HttpMethod.OPTIONS, null, headersExtractor, uriVariables);
return headers.getAllow();
}
@Override
public Set<HttpMethod> optionsForAllow(URI url) throws RestClientException {
ResponseExtractor<HttpHeaders> headersExtractor = headersExtractor();
HttpHeaders headers = execute(url, HttpMethod.OPTIONS, null, headersExtractor);
return headers.getAllow();
}
// exchange
@Override
public <T> ResponseEntity<T> exchange(String url, HttpMethod method, HttpEntity<?> requestEntity,
Class<T> responseType, Object... uriVariables) throws RestClientException {
RequestCallback requestCallback = httpEntityCallback(requestEntity, responseType);
ResponseExtractor<ResponseEntity<T>> responseExtractor = responseEntityExtractor(responseType);
return execute(url, method, requestCallback, responseExtractor, uriVariables);
}
@Override
public <T> ResponseEntity<T> exchange(String url, HttpMethod method, HttpEntity<?> requestEntity,
Class<T> responseType, Map<String, ?> uriVariables) throws RestClientException {
RequestCallback requestCallback = httpEntityCallback(requestEntity, responseType);
ResponseExtractor<ResponseEntity<T>> responseExtractor = responseEntityExtractor(responseType);
return execute(url, method, requestCallback, responseExtractor, uriVariables);
}
@Override
public <T> ResponseEntity<T> exchange(URI url, HttpMethod method, HttpEntity<?> requestEntity,
Class<T> responseType) throws RestClientException {
RequestCallback requestCallback = httpEntityCallback(requestEntity, responseType);
ResponseExtractor<ResponseEntity<T>> responseExtractor = responseEntityExtractor(responseType);
return execute(url, method, requestCallback, responseExtractor);
}
@Override
public <T> ResponseEntity<T> exchange(String url, HttpMethod method, HttpEntity<?> requestEntity,
ParameterizedTypeReference<T> responseType, Object... uriVariables) throws RestClientException {
Type type = responseType.getType();
RequestCallback requestCallback = httpEntityCallback(requestEntity, type);
ResponseExtractor<ResponseEntity<T>> responseExtractor = responseEntityExtractor(type);
return execute(url, method, requestCallback, responseExtractor, uriVariables);
}
@Override
public <T> ResponseEntity<T> exchange(String url, HttpMethod method, HttpEntity<?> requestEntity,
ParameterizedTypeReference<T> responseType, Map<String, ?> uriVariables) throws RestClientException {
Type type = responseType.getType();
RequestCallback requestCallback = httpEntityCallback(requestEntity, type);
ResponseExtractor<ResponseEntity<T>> responseExtractor = responseEntityExtractor(type);
return execute(url, method, requestCallback, responseExtractor, uriVariables);
}
@Override
public <T> ResponseEntity<T> exchange(URI url, HttpMethod method, HttpEntity<?> requestEntity,
ParameterizedTypeReference<T> responseType) throws RestClientException {
Type type = responseType.getType();
RequestCallback requestCallback = httpEntityCallback(requestEntity, type);
ResponseExtractor<ResponseEntity<T>> responseExtractor = responseEntityExtractor(type);
return execute(url, method, requestCallback, responseExtractor);
}
@Override
public <T> ResponseEntity<T> exchange(RequestEntity<?> requestEntity, Class<T> responseType) throws RestClientException {
Assert.notNull(requestEntity, "RequestEntity must not be null");
RequestCallback requestCallback = httpEntityCallback(requestEntity, responseType);
ResponseExtractor<ResponseEntity<T>> responseExtractor = responseEntityExtractor(responseType);
return execute(requestEntity.getUrl(), requestEntity.getMethod(), requestCallback, responseExtractor);
}
@Override
public <T> ResponseEntity<T> exchange(RequestEntity<?> requestEntity, ParameterizedTypeReference<T> responseType) throws RestClientException {
Assert.notNull(requestEntity, "RequestEntity must not be null");
Type type = responseType.getType();
RequestCallback requestCallback = httpEntityCallback(requestEntity, type);
ResponseExtractor<ResponseEntity<T>> responseExtractor = responseEntityExtractor(type);
return execute(requestEntity.getUrl(), requestEntity.getMethod(), requestCallback, responseExtractor);
}
// general execution
@Override
public <T> T execute(String url, HttpMethod method, RequestCallback requestCallback,
ResponseExtractor<T> responseExtractor, Object... uriVariables) throws RestClientException {
URI expanded = getUriTemplateHandler().expand(url, uriVariables);
return doExecute(expanded, method, requestCallback, responseExtractor);
}
@Override
public <T> T execute(String url, HttpMethod method, RequestCallback requestCallback, ResponseExtractor<T> responseExtractor, Map<String, ?> uriVariables) throws RestClientException {
URI expanded = getUriTemplateHandler().expand(url, uriVariables);
return doExecute(expanded, method, requestCallback, responseExtractor);
}
@Override
public <T> T execute(URI url, HttpMethod method, RequestCallback requestCallback, ResponseExtractor<T> responseExtractor) throws RestClientException {
return doExecute(url, method, requestCallback, responseExtractor);
}
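	/** Template method shared by all verbs: creates the request, applies the callback, executes, runs the error handler, and extracts the body. */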
protected <T> T doExecute(URI url, HttpMethod method, RequestCallback requestCallback, ResponseExtractor<T> responseExtractor) throws RestClientException {
Assert.notNull(url, "'url' must not be null");
Assert.notNull(method, "'method' must not be null");
ClientHttpResponse response = null;
try {
ClientHttpRequest request = createRequest(url, method);
if (requestCallback != null) {
requestCallback.doWithRequest(request);
}
response = request.execute();
handleResponse(url, method, response);
if (responseExtractor != null) {
return responseExtractor.extractData(response);
} else {
return null;
}
} catch (IOException ex) {
String resource = url.toString();
String query = url.getRawQuery();
resource = (query != null ? resource.substring(0, resource.indexOf('?')) : resource);
throw new ResourceAccessException("I/O error on " + method.name() + " request for \"" + resource + "\": " + ex.getMessage(), ex);
} finally {
if (response != null) {
response.close();
}
}
}
protected void handleResponse(URI url, HttpMethod method, ClientHttpResponse response) throws IOException {
ResponseErrorHandler errorHandler = getErrorHandler();
boolean hasError = errorHandler.hasError(response);
if (logger.isDebugEnabled()) {
try {
logger.debug(method.name() + " request for \"" + url + "\" resulted in " + response.getRawStatusCode()
+ " (" + response.getStatusText() + ")" + (hasError ? "; invoking error handler" : ""));
} catch (IOException ex) {
// ignore
}
}
if (hasError) {
errorHandler.handleError(response);
}
}
protected <T> RequestCallback acceptHeaderRequestCallback(Class<T> responseType) {
return new AcceptHeaderRequestCallback(responseType);
}
protected <T> RequestCallback httpEntityCallback(Object requestBody) {
return new HttpEntityRequestCallback(requestBody);
}
protected <T> RequestCallback httpEntityCallback(Object requestBody, Type responseType) {
return new HttpEntityRequestCallback(requestBody, responseType);
}
protected <T> ResponseExtractor<ResponseEntity<T>> responseEntityExtractor(Type responseType) {
return new ResponseEntityResponseExtractor<T>(responseType);
}
protected ResponseExtractor<HttpHeaders> headersExtractor() {
return this.headersExtractor;
}
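	/** Request callback that sets the Accept header from the media types readable for the given response type. */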
private class AcceptHeaderRequestCallback implements RequestCallback {
private final Type responseType;
private AcceptHeaderRequestCallback(Type responseType) {
this.responseType = responseType;
}
@Override
public void doWithRequest(ClientHttpRequest request) throws IOException {
if (this.responseType != null) {
Class<?> responseClass = null;
if (this.responseType instanceof Class) {
responseClass = (Class<?>) this.responseType;
}
List<MediaType> allSupportedMediaTypes = new ArrayList<MediaType>();
for (HttpMessageConverter<?> converter : getMessageConverters()) {
if (responseClass != null) {
if (converter.canRead(responseClass, null)) {
allSupportedMediaTypes.addAll(getSupportedMediaTypes(converter));
}
} else if (converter instanceof GenericHttpMessageConverter) {
GenericHttpMessageConverter<?> genericConverter = (GenericHttpMessageConverter<?>) converter;
if (genericConverter.canRead(this.responseType, null, null)) {
allSupportedMediaTypes.addAll(getSupportedMediaTypes(converter));
}
}
}
if (!allSupportedMediaTypes.isEmpty()) {
MediaType.sortBySpecificity(allSupportedMediaTypes);
if (logger.isDebugEnabled()) {
logger.debug("Setting request Accept header to " + allSupportedMediaTypes);
}
request.getHeaders().setAccept(allSupportedMediaTypes);
}
}
}
private List<MediaType> getSupportedMediaTypes(HttpMessageConverter<?> messageConverter) {
List<MediaType> supportedMediaTypes = messageConverter.getSupportedMediaTypes();
List<MediaType> result = new ArrayList<MediaType>(supportedMediaTypes.size());
for (MediaType supportedMediaType : supportedMediaTypes) {
if (supportedMediaType.getCharset() != null) {
supportedMediaType = new MediaType(supportedMediaType.getType(), supportedMediaType.getSubtype());
}
result.add(supportedMediaType);
}
return result;
}
}
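	/** Request callback that, in addition to the Accept header, writes the request entity body using a suitable converter. */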
private class HttpEntityRequestCallback extends AcceptHeaderRequestCallback {
private final HttpEntity<?> requestEntity;
private HttpEntityRequestCallback(Object requestBody) {
this(requestBody, null);
}
private HttpEntityRequestCallback(Object requestBody, Type responseType) {
super(responseType);
if (requestBody instanceof HttpEntity) {
this.requestEntity = (HttpEntity<?>) requestBody;
} else if (requestBody != null) {
this.requestEntity = new HttpEntity<Object>(requestBody);
} else {
this.requestEntity = HttpEntity.EMPTY;
}
}
@Override
@SuppressWarnings("unchecked")
public void doWithRequest(ClientHttpRequest httpRequest) throws IOException {
super.doWithRequest(httpRequest);
if (!this.requestEntity.hasBody()) {
HttpHeaders httpHeaders = httpRequest.getHeaders();
HttpHeaders requestHeaders = this.requestEntity.getHeaders();
if (!requestHeaders.isEmpty()) {
for (Map.Entry<String, List<String>> entry : requestHeaders.entrySet()) {
httpHeaders.put(entry.getKey(), new LinkedList<String>(entry.getValue()));
}
}
if (httpHeaders.getContentLength() < 0) {
httpHeaders.setContentLength(0L);
}
} else {
Object requestBody = this.requestEntity.getBody();
Class<?> requestBodyClass = requestBody.getClass();
Type requestBodyType = (this.requestEntity instanceof RequestEntity ? ((RequestEntity<?>) this.requestEntity).getType() : requestBodyClass);
HttpHeaders httpHeaders = httpRequest.getHeaders();
HttpHeaders requestHeaders = this.requestEntity.getHeaders();
MediaType requestContentType = requestHeaders.getContentType();
for (HttpMessageConverter<?> messageConverter : getMessageConverters()) {
if (messageConverter instanceof GenericHttpMessageConverter) {
GenericHttpMessageConverter<Object> genericMessageConverter = (GenericHttpMessageConverter<Object>) messageConverter;
if (genericMessageConverter.canWrite(requestBodyType, requestBodyClass, requestContentType)) {
if (!requestHeaders.isEmpty()) {
for (Map.Entry<String, List<String>> entry : requestHeaders.entrySet()) {
httpHeaders.put(entry.getKey(), new LinkedList<String>(entry.getValue()));
}
}
if (logger.isDebugEnabled()) {
if (requestContentType != null) {
logger.debug("Writing [" + requestBody + "] as \"" + requestContentType + "\" using [" + messageConverter + "]");
} else {
logger.debug("Writing [" + requestBody + "] using [" + messageConverter + "]");
}
}
genericMessageConverter.write(requestBody, requestBodyType, requestContentType, httpRequest);
return;
}
} else if (messageConverter.canWrite(requestBodyClass, requestContentType)) {
if (!requestHeaders.isEmpty()) {
for (Map.Entry<String, List<String>> entry : requestHeaders.entrySet()) {
httpHeaders.put(entry.getKey(), new LinkedList<String>(entry.getValue()));
}
}
if (logger.isDebugEnabled()) {
if (requestContentType != null) {
logger.debug("Writing [" + requestBody + "] as \"" + requestContentType + "\" using [" + messageConverter + "]");
} else {
logger.debug("Writing [" + requestBody + "] using [" + messageConverter + "]");
}
}
HttpMessageConverter<Object> httpMessageConverter = (HttpMessageConverter<Object>) messageConverter;
httpMessageConverter.write(requestBody, requestContentType, httpRequest);
return;
}
}
String message = "Could not write request: no suitable HttpMessageConverter found for request type [" + requestBodyClass.getName() + "]";
if (requestContentType != null) {
message += " and content type [" + requestContentType + "]";
}
throw new RestClientException(message);
}
}
}
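	/** Response extractor that converts the body and wraps it together with status code and headers in a ResponseEntity. */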
private class ResponseEntityResponseExtractor<T> implements ResponseExtractor<ResponseEntity<T>> {
private final HttpMessageConverterExtractor<T> delegate;
public ResponseEntityResponseExtractor(Type responseType) {
if (responseType != null && Void.class != responseType) {
this.delegate = new HttpMessageConverterExtractor<T>(responseType, getMessageConverters(), logger);
} else {
this.delegate = null;
}
}
@Override
public ResponseEntity<T> extractData(ClientHttpResponse response) throws IOException {
if (this.delegate != null) {
T body = this.delegate.extractData(response);
return ResponseEntity.status(response.getRawStatusCode()).headers(response.getHeaders()).body(body);
} else {
return ResponseEntity.status(response.getRawStatusCode()).headers(response.getHeaders()).build();
}
}
}
private static class HeadersExtractor implements ResponseExtractor<HttpHeaders> {
@Override
public HttpHeaders extractData(ClientHttpResponse response) throws IOException {
return response.getHeaders();
}
}
} |
# Definition for a binary tree node
# class TreeNode(object):
# def __init__(self, x):
# self.val = x
# self.left = None
# self.right = None
"""
7
3 9
1 5 8 10
"""
class BSTIterator(object):
def __init__(self, root):
"""
:type root: TreeNode
"""
self.root = root
self.stack = []
self.push_nodes(root, self.stack)
def hasNext(self):
"""
:rtype: bool
"""
        return len(self.stack) > 0
def next(self):
"""
:rtype: int
"""
if self.hasNext():
temp = self.stack.pop()
            self.push_nodes(temp.right, self.stack)
return temp.val
else:
return -1
def push_nodes(self, root, stack):
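        """ pushes root and all of its left descendants onto the stack """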
while root:
self.stack.append(root)
root = root.left
# Your BSTIterator will be called like this:
# i, v = BSTIterator(root), []
# while i.hasNext(): v.append(i.next())
|
# Copyright 2016 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file
# except in compliance with the License. A copy of the License is located at
#
# http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file. This file is distributed on an "AS IS"
# BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations under the License.
import os
import boto3
__author__ = '<NAME>'
# Get Lambda environment variables
region = os.environ['REGION']
alert_threshold = float(os.environ['ALERT_THRESHOLD'])  # env vars are strings; convert for numeric comparison
sns_topic_arn = os.environ['TOPIC']
def publish_notification(topic_arn, message, subject):
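    """Publish a message to the given SNS topic and report the outcome."""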
sns_client = boto3.client('sns', region_name=region)
sns_response = sns_client.publish(TopicArn=topic_arn, Message=message, Subject=subject, MessageStructure='string')
if sns_response:
return 'Notification published successfully. Message id %s' % (sns_response['MessageId'])
else:
return 'Failed to publish notification.'
def check_quota():
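    """Compare SES 24-hour send usage against the alert threshold and notify via SNS when exceeded."""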
ses_client = boto3.client('ses', region_name=region)
response = ses_client.get_send_quota()
if response:
daily_quota = response['Max24HourSend']
total_sent = response['SentLast24Hours']
        usage_percent = total_sent / daily_quota * 100
        if usage_percent > alert_threshold:
            # Usage over threshold. Alert using SNS
            message = 'Daily sending limit usage of %d%% has reached the alert threshold.' % usage_percent
publish_result = publish_notification(sns_topic_arn, message, 'SES daily quota warning')
return message + ' ' + publish_result
else:
return 'Sending quota within threshold.'
else:
return 'Error occurred while getting daily send quota.'
def lambda_handler(event, context):
return check_quota()
if __name__ == "__main__":
result = check_quota()
print(result)
|
/**
* Runs an auction: delegates request to applicable bidders, gathers responses from them and constructs final
* response containing returned bids and additional information in extensions.
*/
public Future<BidResponse> holdAuction(AuctionContext context) {
final RoutingContext routingContext = context.getRoutingContext();
final UidsCookie uidsCookie = context.getUidsCookie();
final BidRequest bidRequest = context.getBidRequest();
final Timeout timeout = context.getTimeout();
final MetricName requestTypeMetric = context.getRequestTypeMetric();
final Account account = context.getAccount();
final ExtBidRequest requestExt;
try {
requestExt = requestExt(bidRequest);
} catch (PreBidException e) {
return Future.failedFuture(e);
}
final List<Imp> imps = bidRequest.getImp();
final List<SeatBid> storedResponse = new ArrayList<>();
final Map<String, String> aliases = aliases(requestExt);
final String publisherId = account.getId();
final ExtRequestTargeting targeting = targeting(requestExt);
final BidRequestCacheInfo cacheInfo = bidRequestCacheInfo(targeting, requestExt);
final Boolean isGdprEnforced = account.getEnforceGdpr();
final boolean debugEnabled = isDebugEnabled(bidRequest, requestExt);
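        // Pipeline: resolve stored responses, fan requests out to the bidders, gather and
        // merge their responses, then build and post-process the final BidResponse.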
return storedResponseProcessor.getStoredResponseResult(imps, aliases, timeout)
.map(storedResponseResult -> populateStoredResponse(storedResponseResult, storedResponse))
.compose(impsRequiredRequest -> extractBidderRequests(bidRequest, impsRequiredRequest, requestExt,
uidsCookie, aliases, isGdprEnforced, timeout))
.map(bidderRequests ->
updateRequestMetric(bidderRequests, uidsCookie, aliases, publisherId,
requestTypeMetric))
.compose(bidderRequests -> CompositeFuture.join(bidderRequests.stream()
.map(bidderRequest -> requestBids(bidderRequest,
auctionTimeout(timeout, cacheInfo.isDoCaching()), debugEnabled, aliases,
bidAdjustments(requestExt), currencyRates(targeting)))
.collect(Collectors.toList())))
.map(CompositeFuture::<BidderResponse>list)
.map(bidderResponses -> updateMetricsFromResponses(bidderResponses, publisherId))
.map(bidderResponses ->
storedResponseProcessor.mergeWithBidderResponses(bidderResponses, storedResponse, imps))
.compose(bidderResponses ->
toBidResponse(bidderResponses, bidRequest, targeting, cacheInfo, account, timeout,
debugEnabled))
.compose(bidResponse ->
bidResponsePostProcessor.postProcess(routingContext, uidsCookie, bidRequest, bidResponse));
} |
/**
* Replaces a single entity from an entity set.
*
* @param entitySetId The id of the entity set the entity belongs to.
* @param entityKeyId The id of the entity to replace.
* @param entityByFqns The new entity details object that will replace the old value, with property type FQNs as keys.
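 * @param propertyUpdateType How the property values are written (assumed from the parameter name, e.g. versioned vs. unversioned updates).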
*/
@POST( BASE + "/" + ENTITY_SET + "/" + SET_ID_PATH + "/" + ENTITY_KEY_ID_PATH )
Integer replaceEntityInEntitySetUsingFqns(
@Path( ENTITY_SET_ID ) UUID entitySetId,
@Path( ENTITY_KEY_ID ) UUID entityKeyId,
@Body Map<FullQualifiedName, Set<Object>> entityByFqns,
@Query( PROPERTY_UPDATE_TYPE ) PropertyUpdateType propertyUpdateType ); |
The Effectiveness of Geropsychological Treatment in Improving Pain, Depression, Behavioral Disturbances, Functional Disability, and Health Care Utilization in Long-Term Care Abstract Geropsychological interventions have become a necessary component of quality long-term care (LTC) designed to address residents' co-morbidities involving emotional, functional, and behavioral difficulties. However, there are few empirical studies of the efficacy of comprehensive geropsychological treatment in LTC. This two-part study was conducted to investigate the impact of Multimodal Cognitive-Behavioral Therapy (MCBT) for the treatment of pain, depression, behavioral dysfunction, functional disability, and health care utilization in a sample of cognitively impaired LTC residents who were suffering from persistent pain. In Study 1, forty-four consecutive new patients received a comprehensive psychological evaluation, eight sessions of cognitive-behavioral therapy, and follow-up psychological evaluation over a five-week period. Analyses indicated that patients exhibited significant reductions in pain, activity interference due to pain, emotional distress due to pain, depression, and significant increases in most activities of daily living. They also exhibited significant reductions in the intensity, frequency, and duration of their behavioral disturbances, but not the number of behavioral disturbances. In Study 2, as a follow-up to Study 1, a retrospective chart review was conducted to compare the treatment group with a matched-control group on post-treatment health care utilization. Comparisons between the two groups on Minimum Data Set (MDS) ratings indicated that the treatment group required significantly fewer physician visits and change orders than the control group. Implications of these collective findings are that geropsychological treatment is likely to improve certain aspects of residents' quality of life in LTC. Further research and development of assessment instruments that are designed specifically for the LTC population would enhance the outcome measurement procedures currently in place in LTC settings. |
#include <bits/stdc++.h>
using namespace std;
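// Starting at position 1 on a circle of n positions (1-indexed), repeatedly
// step forward by 1, 2, 3, ... places and print each position visited
// (n - 1 steps in total).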
int main() {
long n,t,i=1,d=1;
cin >> n;
t=n-1;
while (t--) {
i=(i+d-1)%n+1;
d++;
cout << i << ' ';
}
}
|
"""
Displays overview financial data (cash flow)
"""
import curses
from app.const import NC_COLOR_TAB, NC_COLOR_TAB_SEL
from app.methods import format_currency, ellipsis, alignr
from app.page import Page
class PageOverview(Page):
""" Page class to display overview data """
def __init__(self, win, api, set_statusbar):
self.cols = [
["Month", 8],
["In", 10],
["Out", 10],
["Net", 10],
["Predicted", 12],
["Balance", 10]
]
self.future_cols = ['food', 'general', 'holiday', 'social']
super().__init__(win, api, set_statusbar)
def get_data(self):
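        """ fetches overview data from the API """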
res = self.api.req(['data', 'overview'])
return res['data']
def calculate_data(self):
""" calculates future spending data based on past averages """
# calculate table values
year_month_start = self.data['startYearMonth']
year_month_end = self.data['endYearMonth']
year_month_now = self.data['currentYear'], self.data['currentMonth']
# number of months (inclusive) since the start month
num_rows = 12 * (year_month_end[0] - year_month_start[0]) + \
year_month_end[1] - year_month_start[1] + 1
months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun", \
"Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
# calculate futures based on averages
future_key = 12 * (year_month_now[0] - year_month_start[0]) + \
year_month_now[1] - year_month_start[1] + 1
average = [
sum([self.data['cost'][col][i] for i in range(future_key)]) / future_key
for col in self.future_cols
]
out_with_future = [
sum([self.data['cost'][col][i] if i < future_key else average[index] \
for (index, col) in enumerate(self.future_cols)]) \
+ self.data['cost']['bills'][i]
for i in range(num_rows)
]
# net spending
net = [
self.data['cost']['income'][i] - out_with_future[i]
for i in range(num_rows)
]
# calculate predicted balance based on future spending predictions
predicted = [
self.data['cost']['balance'][max(0, i - 1)] + net[i]
for i in range(future_key + 1)
]
for i in range(future_key + 1, num_rows):
predicted.append(int(predicted[i - 1] + net[i]))
rows = [
[
"{}-{}".format(months[(year_month_start[1] - 1 + i) % 12], \
(year_month_start[0] + (i - 1 + year_month_start[1]) // 12) % 1000),
format_currency(self.data['cost']['income'][i], self.cols[1][1] - 1),
format_currency(out_with_future[i], self.cols[2][1] - 1),
format_currency(net[i], self.cols[3][1] - 1),
format_currency(predicted[i], self.cols[4][1] - 1),
format_currency(self.data['cost']['balance'][i], self.cols[5][1] - 1)
]
for i in range(num_rows)
]
return rows, year_month_start, year_month_now
def draw(self):
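        """ draws the overview table, highlighting the current month's row """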
rows, year_month_start, year_month_now = self.calculate_data()
colors = [
curses.color_pair(NC_COLOR_TAB[0]), # inactive
curses.color_pair(NC_COLOR_TAB_SEL[0]) # active
]
num = {
'rows': len(rows),
'disp': min(self.dim[0], len(rows))
}
active_row = 12 * (year_month_now[0] - year_month_start[0]) + \
year_month_now[1] - year_month_start[1]
# draw all the rows and columns
for i in range(num['disp']):
if i == 0:
# header
col = 0
for (col_name, col_width) in self.cols:
self.win.addstr(0, col, alignr(col_width - 1, col_name))
col += col_width
else:
# data
                row = num['rows'] - num['disp'] + i
active = row == active_row
color = colors[1] if active else colors[0]
if active:
self.win.addstr(i, 0, ' ' * self.dim[1], color)
col = 0
for (j, (col_name, col_width)) in enumerate(self.cols):
self.win.addstr(i, col, ellipsis(rows[row][j], col_width), \
color | curses.A_BOLD if j == 5 else color)
col += col_width
def set_nav_active(self, status):
return False # this page can't be active
|
package tech.xuanwu.northstar.strategy.cta.module;
import java.util.HashMap;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
import java.util.Map.Entry;
import org.springframework.beans.BeanUtils;
import lombok.extern.slf4j.Slf4j;
import tech.xuanwu.northstar.strategy.common.ModuleTrade;
import tech.xuanwu.northstar.strategy.common.model.DealRecord;
import tech.xuanwu.northstar.strategy.common.model.TradeDescription;
import xyz.redtorch.pb.CoreEnum.DirectionEnum;
import xyz.redtorch.pb.CoreEnum.OffsetFlagEnum;
import xyz.redtorch.pb.CoreEnum.PositionDirectionEnum;
/**
 * Records all trade executions of the module and, based on them, computes the profit/loss
 * of each open/close round trip as well as the pairing between opening and closing trades.
* @author KevinHuangwl
*
*/
@Slf4j
public class CtaModuleTrade implements ModuleTrade {
/**
* unifiedSymbol --> tradeList
*/
Map<String, List<TradeDescription>> openingTradeMap = new HashMap<>();
Map<String, List<TradeDescription>> closingTradeMap = new HashMap<>();
public CtaModuleTrade() {}
public CtaModuleTrade(List<TradeDescription> originTradeList) {
for(TradeDescription trade : originTradeList) {
handleTrade(trade);
}
}
@Override
public List<DealRecord> getDealRecords() {
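		// Match closing trades against opening trades FIFO per contract and compute the P/L of each pair.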
List<DealRecord> result = new LinkedList<>();
for(Entry<String, List<TradeDescription>> e : closingTradeMap.entrySet()) {
String curSymbol = e.getKey();
LinkedList<TradeDescription> tempClosingTrade = new LinkedList<>();
tempClosingTrade.addAll(e.getValue());
LinkedList<TradeDescription> tempOpeningTrade = new LinkedList<>();
tempOpeningTrade.addAll(openingTradeMap.get(curSymbol));
while(tempClosingTrade.size() > 0) {
TradeDescription closingDeal = tempClosingTrade.pollFirst();
TradeDescription openingDeal = tempOpeningTrade.pollFirst();
if(closingDeal == null || openingDeal == null
|| closingDeal.getTradeTimestamp() < openingDeal.getTradeTimestamp()) {
throw new IllegalStateException("存在异常的平仓合约找不到对应的开仓合约");
}
PositionDirectionEnum dir = openingDeal.getDirection() == DirectionEnum.D_Buy ? PositionDirectionEnum.PD_Long
: openingDeal.getDirection() == DirectionEnum.D_Sell ? PositionDirectionEnum.PD_Short : PositionDirectionEnum.PD_Unknown;
if(PositionDirectionEnum.PD_Unknown == dir) {
throw new IllegalStateException("持仓方向不能确定");
}
int factor = PositionDirectionEnum.PD_Long == dir ? 1 : -1;
double priceDiff = factor * (closingDeal.getPrice() - openingDeal.getPrice());
int vol = Math.min(closingDeal.getVolume(), openingDeal.getVolume());
int profit = (int) (priceDiff * closingDeal.getContractMultiplier() * vol);
DealRecord deal = DealRecord.builder()
.contractName(closingDeal.getContractName())
.direction(dir)
.dealTimestamp(closingDeal.getTradeTimestamp())
.openPrice(openingDeal.getPrice())
.closePrice(closingDeal.getPrice())
.tradingDay(closingDeal.getTradingDay())
.volume(vol)
.closeProfit(profit)
.build();
result.add(deal);
int volDiff = Math.abs(closingDeal.getVolume() - openingDeal.getVolume());
TradeDescription restTrade = new TradeDescription();
BeanUtils.copyProperties(closingDeal, restTrade);
restTrade.setVolume(volDiff);
				// If the closing volume exceeds the opening volume, the closing trade has to be split
if(closingDeal.getVolume() > openingDeal.getVolume()) {
tempClosingTrade.offerFirst(restTrade);
}
				// If the closing volume is less than the opening volume, the opening trade has to be split
else if(closingDeal.getVolume() < openingDeal.getVolume()) {
tempOpeningTrade.offerFirst(restTrade);
}
}
}
return result;
}
@Override
public void updateTrade(TradeDescription trade) {
handleTrade(trade);
}
@Override
public int getTotalCloseProfit() {
List<DealRecord> dealList = getDealRecords();
return dealList.stream().reduce(0, (d1, d2) -> d1 + d2.getCloseProfit(), (d1,d2) -> d1 + d2);
}
private void handleTrade(TradeDescription trade) {
if(trade.getOffsetFlag() == OffsetFlagEnum.OF_Unkonwn) {
log.warn("未定义开平方向, {}", trade.toString());
return;
}
String unifiedSymbol = trade.getUnifiedSymbol();
if(trade.getOffsetFlag() == OffsetFlagEnum.OF_Open) {
openingTradeMap.putIfAbsent(unifiedSymbol, new LinkedList<>());
openingTradeMap.get(unifiedSymbol).add(trade);
} else {
closingTradeMap.putIfAbsent(unifiedSymbol, new LinkedList<>());
closingTradeMap.get(unifiedSymbol).add(trade);
}
}
}
|
Preliminary evidence for the dedifferentiation of RAW 264.7 cells into mesenchymal progenitor-like cells by a purine analog. Dedifferentiation of cells to multipotential cells is of interest since they have a potential regenerative capacity. Our purpose was to de- and redifferentiate murine RAW 264.7 cells, a committed macrophage cell line of hematopoietic origin, into mesenchymal-like cells such as osteoblasts. RAW 264.7 cells in culture were treated with 5 μM reversine, a purine analog that was shown to dedifferentiate myoblasts into osteoblasts. Treatment with reversine resulted in a significant increase in the expression of the STRO-1 antigen, a marker of mesenchymal stem/progenitor cells: from 0.6%±0.5% cells in untreated RAW cells to 19.0%±8.6% in treated cells, but there was no increase in the expression of SH-2 (CD105), an earlier marker of mesenchymal stem cells. The effects of reversine were significantly curtailed by 67% when cultures were pretreated with the c-Jun N-terminal kinase pathway blocker SP600125. These STRO-1+ cells retained a multipotential status and were capable of redifferentiating into cells with osteogenic and lipogenic characteristics under inductive conditions. We showed that STRO-1+ cells in an osteogenic medium significantly increased expression of the osteoblast marker osteocalcin, and formed mineralized nodules. When seeded on a demineralized scaffold of human bone in vitro, these cells deposited a calcium matrix. Under adipogenic conditions, expression of the adipocyte marker peroxisome proliferator-activated receptor gamma 2 on STRO-1+ cells was elevated, and cultures stained positive with Oil red O. Our results demonstrated that treating a committed hematopoietic cell line with a purine analog can alter cell development and result in cellular reverse transformation into stage-limited multipotential cells. These cells could subsequently be redifferentiated into cells with characteristics of the mesenchymal lineage, such as those of an osteoblast and/or adipocyte, under inductive conditions.
package be.rentvehicle.config;
import lombok.Data;
import lombok.NoArgsConstructor;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Configuration;
/**
* Twilio configuration class
*/
@Data
@NoArgsConstructor
@Configuration
@ConfigurationProperties(prefix = "twilio")
public class TwilioConfiguration {
private String accountSid;
private String authToken;
private String trialNumber;
}
|
/// \return true if the given retain instruction is followed by a release on the
/// same object prior to any potential mutating operation.
bool COWArrayOpt::isRetainReleasedBeforeMutate(SILInstruction *RetainInst,
bool IsUniquelyIdentifiedArray) {
if (!Loop->contains(RetainInst))
return true;
LLVM_DEBUG(llvm::dbgs() << " Looking at retain " << *RetainInst);
for (auto II = std::next(SILBasicBlock::iterator(RetainInst)),
IE = RetainInst->getParent()->end(); II != IE; ++II) {
if (isMatchingRelease(&*II, RetainInst))
return true;
if (isRetain(&*II))
continue;
if (!II->mayHaveSideEffects())
continue;
if (isNonMutatingArraySemanticCall(&*II))
continue;
if (isa<BeginBorrowInst>(II) || isa<EndBorrowInst>(II))
continue;
    if (IsUniquelyIdentifiedArray) {
      // A release of a uniquely identified array is safe here. This is not
      // the case for a potentially aliased array because a release can cause
      // a destructor to run. The destructor in turn can cause arbitrary side
      // effects.
      if (isRelease(&*II))
        continue;

      if (ArrayUserSet.count(&*II)) // May be an array mutation.
        break;
    } else {
      // Not safe.
      break;
    }
}
LLVM_DEBUG(llvm::dbgs() << " Skipping Array: retained in loop!\n"
<< " " << *RetainInst);
return false;
} |
Q:
Does the webbing seen in some trumpets change the sound?
Back in high school I played the trumpet for 6 months, and then clarinet for the next 4.5 years (high school is odd in Quebec, and it's not important to the question anyway), and I was a science student, so I have a decent idea of how a wind instrument makes different tones.
Recently I got a chance to see one of my favourite bands (SOIL & "PIMP" SESSIONS) live for the first time and noticed something about one member's trumpet: it had a sort of webbing in the 2 big bends, like on a duck's foot.
My question is how would it affect the sound? Would the additional stability prevent the pipes from vibrating and altering the vibration of the air flowing inside? Or would they maybe send the vibration to the rest of the tube? (though that would likely also have the same effect)
This is the trumpet of that player:
A:
It does affect the sound, but by how much is debatable, and I doubt anyone could reliably tell a trumpet with this kind of bracing from one without it in a blind listening test.
The main thing it does is affect the way the instrument resonates. When you play a brass instrument, you set up a vibration in the air stream inside the tubing. Some of this vibration transfers into the instrument itself, which is simply a loss of energy. The extra bracing is supposed to reduce this parasitic energy loss. I have a trumpet that looks a lot like the one in your second picture, and the difference in feel is very noticeable. It's much easier to play and doesn't push back as hard as a normal trumpet.
/* glpenv07.c (stream input/output) */
/***********************************************************************
* This code is part of GLPK (GNU Linear Programming Kit).
*
* Copyright (C) 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008,
* 2009, 2010 Andrew Makhorin, Department for Applied Informatics,
* Moscow Aviation Institute, Moscow, Russia. All rights reserved.
* E-mail: <mao@gnu.org>.
*
* GLPK is free software: you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 3 of the License, or
* (at your option) any later version.
*
* GLPK is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
* or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public
* License for more details.
*
* You should have received a copy of the GNU General Public License
* along with GLPK. If not, see <http://www.gnu.org/licenses/>.
***********************************************************************/
#ifdef HAVE_CONFIG_H
#include <config.h>
#endif
#include "glpenv.h"
/***********************************************************************
* NAME
*
* lib_err_msg - save error message string
*
* SYNOPSIS
*
* #include "glpenv.h"
* void lib_err_msg(const char *msg);
*
* DESCRIPTION
*
* The routine lib_err_msg saves an error message string specified by
* the parameter msg. The message is obtained by some library routines
* with a call to strerror(errno). */
void lib_err_msg(const char *msg)
{ ENV *env = get_env_ptr();
int len = strlen(msg);
if (len >= IOERR_MSG_SIZE)
len = IOERR_MSG_SIZE - 1;
memcpy(env->ioerr_msg, msg, len);
if (len > 0 && env->ioerr_msg[len-1] == '\n') len--;
env->ioerr_msg[len] = '\0';
return;
}
/***********************************************************************
* NAME
*
* xerrmsg - retrieve error message string
*
* SYNOPSIS
*
* #include "glpenv.h"
* const char *xerrmsg(void);
*
* RETURNS
*
* The routine xerrmsg returns a pointer to an error message string
* previously set by some library routine to indicate an error. */
const char *xerrmsg(void)
{ ENV *env = get_env_ptr();
return env->ioerr_msg;
}
/***********************************************************************
* NAME
*
* xfopen - open a stream
*
* SYNOPSIS
*
* #include "glpenv.h"
* XFILE *xfopen(const char *fname, const char *mode);
*
* DESCRIPTION
*
* The routine xfopen opens the file whose name is a string pointed to
* by fname and associates a stream with it.
*
* The parameter mode points to a string, which indicates the open mode
* and should be one of the following:
*
* "r" open text file for reading;
* "w" truncate to zero length or create text file for writing;
* "rb" open binary file for reading;
* "wb" truncate to zero length or create binary file for writing.
*
* RETURNS
*
* The routine xfopen returns a pointer to the object controlling the
* stream. If the open operation fails, xfopen returns NULL. */
static void *c_fopen(const char *fname, const char *mode);
static void *z_fopen(const char *fname, const char *mode);
static int is_gz_file(const char *fname)
{ char *ext = strrchr(fname, '.');
return ext != NULL && strcmp(ext, ".gz") == 0;
}
XFILE *xfopen(const char *fname, const char *mode)
{ ENV *env = get_env_ptr();
XFILE *fp;
int type;
void *fh;
if (!is_gz_file(fname))
{ type = FH_FILE;
fh = c_fopen(fname, mode);
}
else
{ type = FH_ZLIB;
fh = z_fopen(fname, mode);
}
if (fh == NULL)
{ fp = NULL;
goto done;
}
fp = xmalloc(sizeof(XFILE));
fp->type = type;
fp->fh = fh;
fp->prev = NULL;
fp->next = env->file_ptr;
if (fp->next != NULL) fp->next->prev = fp;
env->file_ptr = fp;
done: return fp;
}
/***********************************************************************
* NAME
*
* xfgetc - read character from the stream
*
* SYNOPSIS
*
* #include "glpenv.h"
* int xfgetc(XFILE *fp);
*
* DESCRIPTION
*
* If the end-of-file indicator for the input stream pointed to by fp
* is not set and a next character is present, the routine xfgetc
* obtains that character as an unsigned char converted to an int and
* advances the associated file position indicator for the stream (if
* defined).
*
* RETURNS
*
* If the end-of-file indicator for the stream is set, or if the
* stream is at end-of-file, the end-of-file indicator for the stream
* is set and the routine xfgetc returns XEOF. Otherwise, the routine
* xfgetc returns the next character from the input stream pointed to
* by fp. If a read error occurs, the error indicator for the stream is
* set and the xfgetc routine returns XEOF.
*
* Note: An end-of-file and a read error can be distinguished by use of
* the routines xfeof and xferror. */
static int c_fgetc(void *fh);
static int z_fgetc(void *fh);
int xfgetc(XFILE *fp)
{ int c;
switch (fp->type)
{ case FH_FILE:
c = c_fgetc(fp->fh);
break;
case FH_ZLIB:
c = z_fgetc(fp->fh);
break;
default:
xassert(fp != fp);
}
return c;
}
/***********************************************************************
* NAME
*
* xfputc - write character to the stream
*
* SYNOPSIS
*
* #include "glpenv.h"
* int xfputc(int c, XFILE *fp);
*
* DESCRIPTION
*
* The routine xfputc writes the character specified by c (converted
* to an unsigned char) to the output stream pointed to by fp, at the
* position indicated by the associated file position indicator (if
* defined), and advances the indicator appropriately.
*
* RETURNS
*
* The routine xfputc returns the character written. If a write error
* occurs, the error indicator for the stream is set and xfputc returns
* XEOF. */
static int c_fputc(int c, void *fh);
static int z_fputc(int c, void *fh);
int xfputc(int c, XFILE *fp)
{ switch (fp->type)
{ case FH_FILE:
c = c_fputc(c, fp->fh);
break;
case FH_ZLIB:
c = z_fputc(c, fp->fh);
break;
default:
xassert(fp != fp);
}
return c;
}
/***********************************************************************
* NAME
*
* xferror - test error indicator for the stream
*
* SYNOPSIS
*
* #include "glpenv.h"
* int xferror(XFILE *fp);
*
* DESCRIPTION
*
* The routine xferror tests the error indicator for the stream
* pointed to by fp.
*
* RETURNS
*
* The routine xferror returns non-zero if and only if the error
* indicator is set for the stream. */
static int c_ferror(void *fh);
static int z_ferror(void *fh);
int xferror(XFILE *fp)
{ int ret;
switch (fp->type)
{ case FH_FILE:
ret = c_ferror(fp->fh);
break;
case FH_ZLIB:
ret = z_ferror(fp->fh);
break;
default:
xassert(fp != fp);
}
return ret;
}
/***********************************************************************
* NAME
*
* xfeof - test end-of-file indicator for the stream
*
* SYNOPSIS
*
* #include "glpenv.h"
* int xfeof(XFILE *fp);
*
* DESCRIPTION
*
* The routine xfeof tests the end-of-file indicator for the stream
* pointed to by fp.
*
* RETURNS
*
* The routine xfeof returns non-zero if and only if the end-of-file
* indicator is set for the stream. */
static int c_feof(void *fh);
static int z_feof(void *fh);
int xfeof(XFILE *fp)
{ int ret;
switch (fp->type)
{ case FH_FILE:
ret = c_feof(fp->fh);
break;
case FH_ZLIB:
ret = z_feof(fp->fh);
break;
default:
xassert(fp != fp);
}
return ret;
}
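/***********************************************************************
* NAME
*
* xfprintf - write formatted output to the stream
*
* SYNOPSIS
*
* #include "glpenv.h"
* int xfprintf(XFILE *file, const char *fmt, ...);
*
* DESCRIPTION
*
* The routine xfprintf formats its arguments under control of the
* format string fmt and writes the resulting characters to the output
* stream pointed to by file. The formatted output is assembled in the
* environment terminal buffer first, so its total length must not
* exceed the size of that buffer.
*
* RETURNS
*
* The routine xfprintf returns the number of characters written. If a
* write error occurs, xfprintf returns a negative value. */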
int xfprintf(XFILE *file, const char *fmt, ...)
{ ENV *env = get_env_ptr();
int cnt, j;
va_list arg;
va_start(arg, fmt);
cnt = vsprintf(env->term_buf, fmt, arg);
va_end(arg);
for (j = 0; j < cnt; j++)
{ if (xfputc(env->term_buf[j], file) < 0)
{ cnt = -1;
break;
}
}
return cnt;
}
/***********************************************************************
* NAME
*
* xfflush - flush the stream
*
* SYNOPSIS
*
* #include "glpenv.h"
* int xfflush(XFILE *fp);
*
* DESCRIPTION
*
* The routine xfflush causes any unwritten data for the output stream
* pointed to by fp to be written to the associated file.
*
* RETURNS
*
* The routine xfflush returns zero if the stream was successfully
* flushed. Otherwise, xfflush sets the error indicator for the stream
* and returns XEOF. */
static int c_fflush(void *fh);
static int z_fflush(void *fh);
int xfflush(XFILE *fp)
{ int ret;
switch (fp->type)
{ case FH_FILE:
ret = c_fflush(fp->fh);
break;
case FH_ZLIB:
ret = z_fflush(fp->fh);
break;
default:
xassert(fp != fp);
}
return ret;
}
/***********************************************************************
* NAME
*
* xfclose - close the stream
*
* SYNOPSIS
*
* #include "glpenv.h"
* int xfclose(XFILE *fp);
*
* DESCRIPTION
*
* A successful call to the routine xfclose causes the stream pointed
* to by fp to be flushed and the associated file to be closed. Whether
* or not the call succeeds, the stream is disassociated from the file.
*
* RETURNS
*
* The routine xfclose returns zero if the stream was successfully
* closed, or XEOF if any errors were detected. */
static int c_fclose(void *fh);
static int z_fclose(void *fh);
int xfclose(XFILE *fp)
{ ENV *env = get_env_ptr();
int ret;
switch (fp->type)
{ case FH_FILE:
ret = c_fclose(fp->fh);
break;
case FH_ZLIB:
ret = z_fclose(fp->fh);
break;
default:
xassert(fp != fp);
}
fp->type = 0xF00BAD;
if (fp->prev == NULL)
env->file_ptr = fp->next;
else
fp->prev->next = fp->next;
if (fp->next != NULL)
fp->next->prev = fp->prev;
xfree(fp);
return ret;
}
/***********************************************************************
* The following routines implement stream input/output based on the
* standard C streams. */
static void *c_fopen(const char *fname, const char *mode)
{ FILE *fh;
if (strcmp(fname, "/dev/stdin") == 0)
fh = stdin;
else if (strcmp(fname, "/dev/stdout") == 0)
fh = stdout;
else if (strcmp(fname, "/dev/stderr") == 0)
fh = stderr;
else
fh = fopen(fname, mode);
if (fh == NULL)
lib_err_msg(strerror(errno));
return fh;
}
static int c_fgetc(void *_fh)
{ FILE *fh = _fh;
int c;
if (ferror(fh) || feof(fh))
{ c = XEOF;
goto done;
}
c = fgetc(fh);
if (ferror(fh))
{ lib_err_msg(strerror(errno));
c = XEOF;
}
else if (feof(fh))
c = XEOF;
else
xassert(0x00 <= c && c <= 0xFF);
done: return c;
}
static int c_fputc(int c, void *_fh)
{ FILE *fh = _fh;
if (ferror(fh))
{ c = XEOF;
goto done;
}
c = (unsigned char)c;
fputc(c, fh);
if (ferror(fh))
{ lib_err_msg(strerror(errno));
c = XEOF;
}
done: return c;
}
static int c_ferror(void *_fh)
{ FILE *fh = _fh;
return ferror(fh);
}
static int c_feof(void *_fh)
{ FILE *fh = _fh;
return feof(fh);
}
static int c_fflush(void *_fh)
{ FILE *fh = _fh;
int ret;
ret = fflush(fh);
if (ret != 0)
{ lib_err_msg(strerror(errno));
ret = XEOF;
}
return ret;
}
static int c_fclose(void *_fh)
{ FILE *fh = _fh;
int ret;
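/* the standard streams are never actually closed here; for stdout
and stderr it is sufficient to flush any pending output */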
if (fh == stdin)
ret = 0;
else if (fh == stdout || fh == stderr)
fflush(fh), ret = 0;
else
ret = fclose(fh);
if (ret != 0)
{ lib_err_msg(strerror(errno));
ret = XEOF;
}
return ret;
}
/***********************************************************************
* The following routines implement stream input/output based on the
* zlib library, which provides processing .gz files "on the fly". */
#ifndef HAVE_ZLIB
static void *z_fopen(const char *fname, const char *mode)
{ xassert(fname == fname);
xassert(mode == mode);
lib_err_msg("Compressed files not supported");
return NULL;
}
static int z_fgetc(void *fh)
{ xassert(fh != fh);
return 0;
}
static int z_fputc(int c, void *fh)
{ xassert(c != c);
xassert(fh != fh);
return 0;
}
static int z_ferror(void *fh)
{ xassert(fh != fh);
return 0;
}
static int z_feof(void *fh)
{ xassert(fh != fh);
return 0;
}
static int z_fflush(void *fh)
{ xassert(fh != fh);
return 0;
}
static int z_fclose(void *fh)
{ xassert(fh != fh);
return 0;
}
#else
#include <zlib.h>
struct z_file
{ /* .gz file handle */
gzFile file;
/* pointer to .gz stream */
int err;
/* i/o error indicator */
int eof;
/* end-of-file indicator */
};
static void *z_fopen(const char *fname, const char *mode)
{ struct z_file *fh;
gzFile file;
if (strcmp(mode, "r") == 0 || strcmp(mode, "rb") == 0)
mode = "rb";
else if (strcmp(mode, "w") == 0 || strcmp(mode, "wb") == 0)
mode = "wb";
else
{ lib_err_msg("Invalid open mode");
fh = NULL;
goto done;
}
file = gzopen(fname, mode);
if (file == NULL)
{ lib_err_msg(strerror(errno));
fh = NULL;
goto done;
}
fh = xmalloc(sizeof(struct z_file));
fh->file = file;
fh->err = fh->eof = 0;
done: return fh;
}
static int z_fgetc(void *_fh)
{ struct z_file *fh = _fh;
int c;
if (fh->err || fh->eof)
{ c = XEOF;
goto done;
}
c = gzgetc(fh->file);
if (c < 0)
{ int errnum;
const char *msg;
msg = gzerror(fh->file, &errnum);
if (errnum == Z_STREAM_END)
fh->eof = 1;
else if (errnum == Z_ERRNO)
{ fh->err = 1;
lib_err_msg(strerror(errno));
}
else
{ fh->err = 1;
lib_err_msg(msg);
}
c = XEOF;
}
else
xassert(0x00 <= c && c <= 0xFF);
done: return c;
}
static int z_fputc(int c, void *_fh)
{ struct z_file *fh = _fh;
if (fh->err)
{ c = XEOF;
goto done;
}
c = (unsigned char)c;
if (gzputc(fh->file, c) < 0)
{ int errnum;
const char *msg;
fh->err = 1;
msg = gzerror(fh->file, &errnum);
if (errnum == Z_ERRNO)
lib_err_msg(strerror(errno));
else
lib_err_msg(msg);
c = XEOF;
}
done: return c;
}
static int z_ferror(void *_fh)
{ struct z_file *fh = _fh;
return fh->err;
}
static int z_feof(void *_fh)
{ struct z_file *fh = _fh;
return fh->eof;
}
static int z_fflush(void *_fh)
{ struct z_file *fh = _fh;
int ret;
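/* note: Z_FINISH flushes all pending output and completes the
current gzip stream; a subsequent write would start a new gzip
member */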
ret = gzflush(fh->file, Z_FINISH);
if (ret == Z_OK)
ret = 0;
else
{ int errnum;
const char *msg;
fh->err = 1;
msg = gzerror(fh->file, &errnum);
if (errnum == Z_ERRNO)
lib_err_msg(strerror(errno));
else
lib_err_msg(msg);
ret = XEOF;
}
return ret;
}
static int z_fclose(void *_fh)
{ struct z_file *fh = _fh;
gzclose(fh->file);
xfree(fh);
return 0;
}
#endif
/* eof */
|
/**
* Delete the notebook instance
*
* @param id notebook id
* @return the deleted notebook object
* @throws SubmarineRuntimeException the service error
*/
public Notebook deleteNotebook(String id) throws SubmarineRuntimeException {
Notebook notebook = getNotebook(id);
Notebook patchNotebook = submitter.deleteNotebook(notebook.getSpec());
notebookService.delete(id);
notebook.rebuild(patchNotebook);
return notebook;
} |
An intention-based definition of psychoanalytic attitude: what does it look like? How does it grow? In two previous articles (Gorman, 1999, 2002), I introduced and then refined the notion of basing psychoanalytic attitude not on a set of psychoanalytic techniques but on what I called a psychoanalytic intention. It felt increasingly apparent that a techniquebased definition of psychoanalytic attitude, despite its initial usefulness in orienting the analytic clinical stance, was based on a fundamental misconception of the relation between analyzing and support/suggestion in psychoanalytic treatment. This misconception and consequent equation have inadvertently caused psychoanalysis as a discipline and a form of psychotherapy no end of trouble. They have undermined the analytic rigor of the psychoanalytically oriented psychotherapies, and have created or contributed to schisms between psychoanalysis and these psychotherapies. They have similarly contributed to schisms between different psychoanalytic subtheories, between psychoanalysis and nonpsychoanalytic psychotherapies, and between psychoanalysis and systematic psychoanalytic research. It is not an exaggeration to say that they have contributed fundamentally to the general isolation in which psychoanalysis increasingly finds itself. In the articles cited above, I suggested a provisional definition of psychoanalytic intention to develop my arguments and which came as close as I could to capturing in the most general terms what analytic therapists intend in treatment. However, in the introduction to the volume in which Gorman ap- |
export const folderNew32: string;
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
File: config.sample.py
Description: unittest configuration for Python SDK of the Cognitive Face API.
- Copy `config.sample.py` to `config.py`.
- Change the `BASE_URL` if necessary.
- Assign the `KEY` with a valid Subscription Key.
"""
# Subscription Key for calling the Cognitive Face API.
KEY = '<KEY>'
# Base URL for calling the Cognitive Face API.
# default is 'https://westus.api.cognitive.microsoft.com/face/v1.0/'
BASE_URL = 'https://westus.api.cognitive.microsoft.com/face/v1.0/'
# Time (in seconds) for sleep between each call to avoid exceeding quota.
# Defaults to 3, as free subscriptions have a limit of 20 calls per minute.
TIME_SLEEP = 3
|
Foley Room is the sixth studio album by the Brazilian artist Amon Tobin. It was recorded in the Foley effects room at Ubisoft Montreal[2] and released on March 5, 2007, by Ninja Tune.
In the past, Tobin had created music through the sampling of old vinyl records. However, Foley Room is a marked departure from his traditional technique. Inspired by the Foley rooms where sound effects are recorded for films, Tobin decided to record and work with original samples for the record. According to the Ninja Tune website, "Amon and a team of assistants headed out into the streets with high sensitivity microphones and recorded found sounds from tigers roaring to cats eating rats, neighbours singing in the bath to ants eating grass".[3] Tobin also called upon The Kronos Quartet, Stefan Schneider and Sarah Pagé to record samples for the record.[4]
“There's nothing new about field recordings of course. It's obviously been the traditional source material in sampling since the early days, so I'm really going "back to school" on this one. On the other hand, I always saw a divide between music that was based purely on sound design and tunes that were written to physically move people. A challenge for me has been to try and make 'tunes' using aspects of sound design normally associated with highbrow academic studies in this area. I don't know how successful I've been but that was a goal anyway.” — Amon Tobin, in his MySpace blog.
The first single, "Bloodstone", was released on iTunes on January 9, 2007. The song was later released as a single proper on January 21, 2007, with "Esther's" and the B-side "Here Comes the Moon Man" also included on the disc.
In promotion of the record, Ninja Tune released two YouTube "trailers".[5][6] A DVD documentary detailing the album's recording process was released with the album.
Track listing
All tracks written by Amon Tobin.
"Bloodstone" – 4:13 "Esther's" – 3:21 "Keep Your Distance" – 4:48 "The Killer's Vanilla" – 4:14 "Kitchen Sink" – 4:49 "Horsefish" – 5:07 "Foley Room" – 3:37 "Big Furry Head" – 3:22 "Ever Falling" – 3:49 "Always" – 3:39 "Straight Psyche" – 6:49 "At the End of the Day" – 3:18 |
A Novel Direct Torque Control Strategy for Doubly-Fed Wound Rotor Induction Machines In this paper, a novel direct torque control of doubly-fed wound rotor induction machines is studied. A rotor unity power factor based direct torque control strategy is proposed. By keeping the rotor power factor at one, a switching-table-based hysteresis direct torque control system is built and the corresponding rotor voltage vector table is obtained. The feasibility of the proposed method is tested through Simulink simulations. Such a control strategy shows several significant advantages. As only the feedback signals of the rotor side are required by the control, the system is simple. Due to the unity rotor power factor, the capacity of the inverter is reduced to the lowest level. The stator voltage and current waveforms are kept sinusoidal, which reduces the harmonic pollution to power grids. |
Cypriot Mortality and Pension Benefits Mortality trends in Cyprus show a similar decreasing trend over the past thirty years to other developed countries. Using detailed, age specific data from 2003 and 2009, we estimate the impact of the change in Cypriot male and female mortality on a stylized life annuity framework for a Cypriot retiree. Based on these results and the general pension framework in Cyprus, we propose a few measures that can alleviate the burden of decreased mortality on pension obligations. |
In a group of 153 patients, the result of unilateral stapedectomy was evaluated using the Glasgow Benefit Plot. Pure tone averages at frequencies 0.5, 1, 2, and 3 kHz of the operated and non-operated ear were used to distribute patients to pre- and post-operative groups. In 26 (79%) of 33 patients with unilateral hearing loss, bilateral normal hearing was achieved. Thirty-one (46%) of 68 patients with asymmetric bilateral hearing loss and 37 (71%) of 52 patients with symmetric bilateral hearing loss had unilateral normal hearing after the operation. Twenty (29%) patients of group III had bilateral symmetrical hearing loss after surgery. Stapedectomy was less beneficial for 17 (25%) of 68 patients with asymmetric bilateral hearing loss and 15 (29%) of 52 patients with symmetric bilateral hearing loss, who still had asymmetric hearing loss after the operation. Evaluation of hearing tests using the Glasgow Benefit Plot enables assessment of the patient's hearing disability and prediction of the possible benefit from surgery in individual cases. |
// nextFrame returns the next faked video frame
func (c *StatisticalCodec) nextFrame() Frame {
duration := time.Duration((1.0/float64(c.fps))*1000.0) * time.Millisecond
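	// At the start of a burst the counter equals burstFrameCount, so a
	// single oversized frame of burstFrameSize bytes is emitted.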
if c.remainingBurstFrames == c.burstFrameCount {
return Frame{
Content: make([]byte, c.burstFrameSize),
Duration: duration,
}
}
bytesPerFrame := c.targetBitrateBps / (8.0 * c.fps)
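	// While a burst is still draining, the remaining frames are sized by the
	// formula below (kept as in the original source); presumably it shrinks
	// them so that the average rate stays near the target despite the
	// oversized first frame.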
if c.remainingBurstFrames > 0 {
size := (c.targetBitrateBps * c.burstFrameCount) / (c.burstFrameSize + (c.burstFrameCount - 1))
return Frame{
Content: make([]byte, size),
Duration: duration,
}
}
noisedBytesPerFrame := math.Max(1, float64(bytesPerFrame)*(1-c.frameSizeNoiser.noise()))
noisedDuration := math.Max(0, float64(duration)*(1-c.frameDurationNoiser.noise()))
return Frame{
Content: make([]byte, int(noisedBytesPerFrame)),
Duration: time.Duration(noisedDuration),
}
} |
# Imports assumed for this Flask handler fragment; they are not shown in
# the original, and neither is the module that provides officeModels.
from flask import jsonify, make_response, request


def create_office():
data = request.get_json()
data_id = data['id']
N_type = data['type']
name = data['name']
officeModels().create_office(data_id, N_type, name)
return make_response(jsonify({
"msg": "office created succefully"
}), 200) |
/**
* RtPublicMembers can fetch its organization.
* @throws IOException If there is an I/O problem
*/
@Test
public void fetchesOrg() throws IOException {
final Organization org = organization();
MatcherAssert.assertThat(
new RtPublicMembers(new FakeRequest(), org).org(),
Matchers.equalTo(org)
);
} |
Plasma electron kinetics in a weak high-frequency field and magnetic field amplification. We describe the linear stage of Weibel instability in a plasma heated via inverse bremsstrahlung absorption of a high-frequency, moderate intensity radiation field under conditions in which the plasma electron velocity distribution function is weakly anisotropic. We report on the possibility of a significant amplification of spontaneous magnetic fields both in the case of an electron distribution function slightly departing from a Maxwellian in the region of subthermal velocities, and in the case where the Langdon nonequilibrium distribution is formed. We show that the direct influence of collisions on the Weibel instability growth rate may be traced back to subthermal electrons, for which the effective collision frequency is large. |
Hierarchical Fuzzy Motion Planning for Humanoid Robots Using Locomotion Primitives and a Global Navigation Path This paper presents a hierarchical fuzzy motion planner for humanoid robots in 3D uneven environments. First, we define both motion primitives and locomotion primitives of humanoid robots. A high-level planner finds a global path from a global navigation map that is generated based on a combination of 2.5 dimensional maps of the workspace. We use a passage map, an obstacle map and a gradient map of obstacles to distinguish obstacles. A mid-level planner creates subgoals that help the robot efficiently cope with various obstacles using only a small set of locomotion primitives that are useful for stable navigation of the robot. We use a local obstacle map to find the subgoals along the global path. A low-level planner searches for an optimal sequence of locomotion primitives between subgoals by using fuzzy motion planning. We verify our approach on a virtual humanoid robot in a simulated environment. Simulation results show a reduction in planning time and the feasibility of the proposed method. |
Local volt/VAr Control in Distribution Networks with Photovoltaic Generators Distributed Generation (DG) is characterized by the production of electricity connected to the distribution network or near the local consumers, using renewable or non-renewable resources, independently of the technology. Excessive use of DG systems connected to the distribution network may cause voltage regulation problems, compromising the system's control and protection devices and its reliability, due to the voltage rise at the DG connection point. In this context, this work presents an analysis of the frequency inverters that interface photovoltaic generation with the electric grid, as a new option for voltage and reactive power (volt/VAr) control alongside the usual control instruments used in electricity distribution networks. The analysis consists of performing volt/VAr control from the frequency inverters in low- or medium-voltage networks, where the inverter assists in voltage regulation through the injection and/or absorption of reactive power in the distribution system. In order to demonstrate the application of the proposed methodology, tests were performed on the standard IEEE 13-node test feeder in the OpenDSS® software, comparing the voltage profiles and the position adjustment of the voltage control devices for a given DG penetration index and different inverter operating conditions. |
The president has asked both leading candidates for the job to come to Washington, moved up the announcement, and scheduled it for prime time.
All that’s missing is a rose ceremony right out of The Bachelor—a disappointing oversight for a president who was a reality-TV star and has a Rose Garden at his disposal.
Donald Trump plans to announce his first selection for the Supreme Court Tuesday night, offering the announcement at 8 p.m., in the middle of prime time. That's a departure from the standard procedure in recent years, in which presidents have unveiled their nominees during the middle of the work day. But it's not the only unusual element of the process. CNN reports that Trump has asked the top two contenders for the post, federal Judges Neil Gorsuch and Thomas Hardiman, to come to Washington for the occasion, adding to the drama of the event.
So far, most reporting seems to suggest that Gorsuch will be the pick. A Coloradan, Gorsuch sits on the 10th U.S. Circuit Court, a post to which he was appointed by George W. Bush in 2006. He worked as a law clerk to Judge David Sentelle, a respected conservative member of the U.S. Circuit Court for the District of Columbia, and to Supreme Court Justices Byron White and Anthony Kennedy. Gorsuch, who is 49, brings a typically polished, elite resume to the job (Columbia undergrad, Harvard law degree, and a trip to Oxford University on a Marshall Scholarship), and legal conservatives view him as a fitting intellectual heir to Justice Antonin Scalia, whose death opened up a slot on the court.
But Hardiman is reportedly not out of the running. A member of the Third U.S. Circuit Court, sitting in Pittsburgh, Hardiman, age 51, was nominated to the federal bench by George W. Bush in 2003, and elevated to the appeals court four years later. The first member of his family to graduate from college, Hardiman attended Notre Dame and then got his law degree from Georgetown. But some conservatives are dubious about Hardiman. While his rulings have been largely conservative, some observers worry about his fidelity to ideology—in part because he has the backing of Maryanne Trump Barry, a colleague on the Third Circuit who was nominated by Bill Clinton, and just happens to be the president’s sister. Hardiman’s conservative skeptics see in him the threat of becoming like David Souter, who was appointed to the Supreme Court by George H.W. Bush but ended up with a very moderate record on the Court.
Both judges have strongly defended the Second Amendment from the bench, and it’s expected that both would be unfriendly toward abortion rights.
More recently, Trump aides strongly telegraphed that Representative Cathy McMorris Rodgers of Washington would be his nominee for secretary of the interior, but Trump then changed his mind and offered the job to Representative Ryan Zinke of Montana.
In other words, any prospective nominee should wait to pop the champagne until he hears President Trump make the announcement publicly. That’s just one more way to augment the drama of the pick. The president previously said he would announce his nominee on Thursday, but after the widespread backlash over the weekend to his executive order on immigration, Trump announced he would unveil his selection Tuesday evening instead. That move seemed calculated to change the subject away from the unpopular order and to argue to conservatives that they are best off sticking with him, despite his liabilities, because of the importance of appointing conservative jurists to the Supreme Court.
All of the speculation will come to a head this evening with Trump's announcement, which the White House is advertising like a major TV finale. Perhaps the most salient difference is that unlike the fired runners-up on The Apprentice, the losing contestant in this showdown will still have a lifetime appointment to the federal bench. |
from typing import Optional, List, Dict
from pydantic import BaseModel
from pathlib import Path
class ResourceAttributesModel(BaseModel):
router_rule: str = None
middlewares: list = None
service_url: str = None
class ResourceModel(BaseModel):
"""Models any resource in the Proximatic().system resources stores.
The attributes parameter accepts any pydantic model, allowing flexible
data schemas depending on the resource type."""
resource_id: str # = Field(..., alias='id')
type: str
attributes: ResourceAttributesModel = (
None # an attributes object representing some of the resource’s data.
)
meta: Dict[str, str] = None
class ResponseErrorModel(BaseModel):
"""Models all errors attached to responses generated by Proximatic()."""
error_id: str = None # Field(..., alias='id') # a unique identifier for this particular occurrence of the problem.
status: str = "" # the HTTP status code applicable to this problem, expressed as a string value.
code: str = "" # an application-specific error code, expressed as a string value.
title: str = "" # a short, human-readable summary of the problem that SHOULD NOT change from occurrence to occurrence of the problem.
detail: str = (
"" # a human-readable explanation specific to this occurrence of the problem.
)
meta: Dict[
str, str
] = {} # a meta object containing non-standard meta-information about the error.
class ResponseModel(BaseModel):
"""Models all responses generated by Proximatic()."""
data: Optional[List[ResourceModel]]
error: Optional[List[ResponseErrorModel]]
meta: Optional[Dict[str, str]]
class DynamicProviderModel(BaseModel):
http: Dict[str, dict] = {"routers": {}, "services": {}, "middlewares": {}}
tls: Dict[str, dict] = None
udp: Dict[str, dict] = None
class SystemConfigModel(BaseModel):
"""Models the entire Proximatic().system configuration store."""
yml_path: Path
fqdn: str = "example.org"
provider: DynamicProviderModel = DynamicProviderModel()
# Options models.
class routerOptionsModel(BaseModel):
"""Models all router options fields."""
entryPoints: List[str] = ["web-secure"]
middlewares: List[str] = []
service: str
rule: str
priority: int = None
tls: dict = {"certResolver": "letsencrypt"}
# options: foobar
# certResolver: foobar
# domains:
# - main: foobar
# sans:
# - foobar
# - foobar
# - main: foobar
# sans:
# - foobar
# - foobar
class loadBalancerOptionsModel(BaseModel):
"""Models all available options for the 'loadBalancer' service type."""
sticky: dict = None
# cookie: dict
# name: str
# secure: bool
# httpOnly: bool
# sameSite: str
servers: List[dict] = [{"url": ""}]
healthCheck: dict = None
# scheme: str
# path: str
# port: int
# interval: str
# timeout: str
# hostname: str
# followRedirects: bool
# headers:
# name0: str
# name1: str
passHostHeader: bool = False
responseForwarding: dict = None
# flushInterval: str
serversTransport: str = None
class mirroringOptionsModel(BaseModel):
"""Models all available options for the 'mirroring' service type."""
service: str
maxBodySize: int
mirrors: List[dict]
# - name: foobar
# percent: 42
# - name: foobar
# percent: 42
class weightedOptionsModel(BaseModel):
"""Models all available options for the 'weighted' service type."""
services: List[dict]
# - name: foobar
# weight: 42
# - name: foobar
# weight: 42
sticky: dict
# cookie:
# name: foobar
# secure: true
# httpOnly: true
# sameSite: foobar
class addPrefixOptionsModel(BaseModel):
prefix: str
class basicAuthOptionsModel(BaseModel):
users: List[str] = None
usersFile: str = None
realm: str = None
removeHeader: bool = None
headerField: str = None
class bufferingOptionsModel(BaseModel):
maxRequestBodyBytes: int
memRequestBodyBytes: int
maxResponseBodyBytes: int
memResponseBodyBytes: int
retryExpression: str
class chainModel(BaseModel):
middlewares: List[str]
class circuitBreakerModel(BaseModel):
expression: str
class compressOptionsModel(BaseModel):
excludedContentTypes: List[str]
class contentTypeModel(BaseModel):
autoDetect: bool
class digestAuthOptionsModel(BaseModel):
users: List[str]
usersFile: str
removeHeader: bool
realm: str
headerField: str
class errorsOptionsModel(BaseModel):
status: List[str]
service: str
query: str
class forwardAuthOptionsModel(BaseModel):
address: str
# !! dict type ##
tls: dict
trustForwardHeader: bool
authResponseHeaders: List[str]
authResponseHeadersRegex: str
authRequestHeaders: List[str]
class headersOptionsModel(BaseModel):
customRequestHeaders: Dict[str, str] = None
customResponseHeaders: Dict[str, str] = None
accessControlAllowCredentials: bool = None
accessControlAllowHeaders: List[str] = None
accessControlAllowMethods: List[str] = None
accessControlAllowOrigin: str = None
accessControlAllowOriginList: List[str] = None
accessControlAllowOriginListRegex: List[str] = None
accessControlExposeHeaders: List[str] = None
accessControlMaxAge: int = None
addVaryHeader: bool = None
allowedHosts: List[str] = None
hostsProxyHeaders: List[str] = None
sslRedirect: bool = True
sslTemporaryRedirect: bool = None
sslHost: str = None
sslProxyHeaders: Dict[str, str] = None
sslForceHost: bool = None
stsSeconds: int = None
stsIncludeSubdomains: bool = True
stsPreload: bool = True
forceSTSHeader: bool = True
frameDeny: bool = True
customFrameOptionsValue: str = None
contentTypeNosniff: bool = True
browserXssFilter: bool = True
customBrowserXSSValue: str = None
contentSecurityPolicy: str = None
publicKey: str = None
referrerPolicy: str = None
featurePolicy: str = None
isDevelopment: bool = None
class ipWhiteListOptionsModel(BaseModel):
sourceRange: List[str]
## dict type!!
ipStrategy: dict = None
# depth: int
# excludedIPs: List[str] # could probably do type validation with ip/cdir types
class inFlightReqModel(BaseModel):
amount: int
## dict type!!
sourceCriterion: dict
class rateLimitOptionsModel(BaseModel):
average: int
period: int
burst: int
## dict type!!
sourceCriterion: dict
# ipStrategy:
# depth: int
# excludedIPs:
# - str
# - str
# requestHeaderName: str
# requestHost: bool
class redirectRegexOptionsModel(BaseModel):
regex: str
replacement: str
permanent: bool
class redirectSchemeOptionsModel(BaseModel):
scheme: str
port: str
permanent: bool
class replacePathOptionsModel(BaseModel):
path: str
class replacePathRegexOptionsModel(BaseModel):
regex: str
replacement: str
class retryOptionsModel(BaseModel):
attempts: int
initialInterval: int
class stripPrefixOptionsModel(BaseModel):
prefixes: List[str]
forceSlash: bool
class stripPrefixRegexOptionsModel(BaseModel):
regex: List[str]
# Store the models in a dict so that they can be
# instantiated automatically during file ingest.
options_models = {
"router": routerOptionsModel,
"loadBalancer": loadBalancerOptionsModel,
"headers": headersOptionsModel,
"ipWhiteList": ipWhiteListOptionsModel,
"basicAuth": basicAuthOptionsModel,
}
|
def _create_predictor(self):
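        """Build a native Paddle inference predictor for this module.

        The model is exported to a temporary directory and loaded back
        through AnalysisConfig; when use_cuda is set, the predictor gets an
        initial GPU memory pool of 100 MB on device 0 with IR optimization,
        otherwise it runs on CPU with memory optimization enabled.
        """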
with tmp_dir() as _dir:
self.save_inference_model(dirname=_dir)
predictor_config = fluid.core.AnalysisConfig(_dir)
predictor_config.disable_glog_info()
if self.config.use_cuda:
predictor_config.enable_use_gpu(100, 0)
predictor_config.switch_ir_optim(True)
else:
predictor_config.disable_gpu()
predictor_config.enable_memory_optim()
return fluid.core.create_paddle_predictor(predictor_config) |
/* See LICENSE for license details. */
/*
Module: dsrt_ctxt.h
Description:
Definitions of context structures for dsrt application.
Comments:
- The context is a method of sharing handles between modules of dsrt
application.
*/
/* Reverse include guard */
#if defined(INC_DSRT_CTXT_H)
#error include dsrt_ctxt.h once
#endif /* #if defined(INC_DSRT_CTXT_H) */
#define INC_DSRT_CTXT_H
/* Predefine pointer to module */
struct dsrt_display;
/* Predefine pointer to module */
struct dsrt_opts;
/* Predefine pointer to module */
struct dsrt_jpeg;
/* Predefine pointer to module */
struct dsrt_image;
/* Predefine pointer to module */
struct dsrt_pixmap;
/* Predefine pointer to module */
struct dsrt_view;
/* Predefine pointer to module */
struct dsrt_zoom;
/* Context */
struct dsrt_ctxt
{
struct dsrt_display * p_display;
struct dsrt_opts * p_opts;
struct dsrt_jpeg * p_jpeg;
struct dsrt_image * p_image;
struct dsrt_pixmap * p_pixmap;
struct dsrt_view * p_view;
struct dsrt_zoom * p_zoom;
}; /* struct dsrt_ctxt */
/* end-of-file: dsrt_ctxt.h */
|
The Candida albicans CUG-decoding ser-tRNA has an atypical anticodon stem-loop structure. In many Candida species, the leucine CUG codon is decoded by a tRNA with two unusual properties: it is a ser-tRNA and, uniquely, has guanosine at position 33 (G33). Using a combination of enzymatic (V1 RNase, RnI nuclease) and chemical (Pb(2+), imidazole) probing of the native Candida albicans ser-tRNA(CAG), we demonstrate that the overall tertiary structure of this tRNA resembles that of a ser-tRNA rather than a leu-tRNA, except within the anticodon arm where there is considerable disruption of the anticodon stem. Using non-modified in vitro transcripts of the C. albicans ser-tRNA(CAG) carrying G, C, U or A at position 33, we demonstrate that it is specifically a G residue at this position that induces the atypical anticodon stem structure. Further quantitative evidence for an unusual structure in the anticodon arm of the G33-tRNA is provided by the observed change in kinetics of methylation of the G at position 37, by purified Escherichia coli mG37 methyltransferase. We conclude that the anticodon arm distortion, induced by a guanosine base at position 33 in the anticodon loop of this novel tRNA, results in reduced decoding ability which has facilitated the evolution of this tRNA without extinction of the species encoding it. |
Expensive cars and characteristically designed vehicles are always in danger of being stolen. Many cases of such theft are actually reported and cause vehicle owners to feel uneasy.
Various security apparatuses are devised as countermeasures against the theft. One example is an immobiliser (electronic lock). When the immobiliser is active, any key other than the qualified one cannot start the engine.
For example, the immobiliser is constructed as follows. A small electronic communication chip called a transponder is embedded in an engine key (in its grip) for a vehicle. An identification code (ID code) is previously recorded in the transponder. When the engine key is inserted into a key cylinder on the vehicle, the transponder's ID code is transmitted to an antenna provided for the key cylinder and is read. The read ID code is collated with an ID code that is prestored in an ECU (Electronic Control Unit). A match between these ID codes authenticates that the used engine key is the qualified one. This permits the engine to be ignited and a fuel to be injected. The immobiliser turns off.
A possible difference between the ID codes for the engine key and the vehicle inhibits the engine from being ignited and a fuel from being injected. The immobiliser remains active. The engine key cannot be used to start the engine. There has been described the general construction of the immobiliser.
As mentioned above, a qualified key can turn off the immobiliser. The immobiliser is useless when the vehicle and the qualified key are stolen together.
To solve this problem, there is proposed a remote immobiliser system that forcibly operates the immobiliser by means of a remote operation using wireless communication. The system is constructed to be able to remotely operate the immobiliser by means of the wireless communication. A remote operation from an external system can forcibly operate the immobiliser. Turning off an ignition can immobilize the vehicle. Once the vehicle becomes immobilized, the system disallows even the qualified key from operating the vehicle and can prevent thefts from increasing.
The system cannot fully function when a vehicle is out of the wireless communication service provided by the external system. The system is ineffective when the vehicle and the qualified key are stolen together and the stolen vehicle moves outside the service range.
According to a proposed technology, the remote immobiliser system measures a time period in a predetermined expiration or counts the number of operations to start a driving source. The system automatically activates the immobiliser when the vehicle is assumed to continuously stay outside the service range over the expiration or a specified threshold value for the number of start operation counts.
Specifically, such technology is described in Patent Document 1. A vehicle may move outside the range of wireless communication to interrupt a periodic, automatic communication between the vehicle and a communication center. When this state continues over the predetermined expiration, the technology disables the vehicle's driving source from starting.
Patent Document 1: JP-A-H8-268231 (U.S. Pat. No. 5,880,679)
However, a qualified user may stay long or frequently outside the service range. The technology described in the above-mentioned patent document may continuously measure the time outside the service range over the specified expiration. Even though the vehicle is not stolen, the immobiliser operates automatically. The qualified user can turn off the immobiliser by entering a password, for example. Even the qualified user may need to frequently turn off the malfunctioning immobiliser. This imposes excessive burdens on the qualified user. |
This American Moment This American Moment focuses on the concept of anxiety politics by arguing that America is in crisis. Those who uphold or participate in racist and misogynist politics are threatened by changes to the status quo, such as the economic gains made by women, and therefore respond with reactivity and defensiveness. This book examines, first, the Black Lives Matter campaign as the latest disruption of the raced structures that define America and the anxious reactions that seek to protect and maintain the race structures; second, the particular economic, bodily, and reproductive health vulnerabilities that women face, which have amalgamated into America's War on Women as anxious reactions to maintain patriarchy; and, finally, how racism and misogyny unwittingly and rather unexpectedly led to the election of Trump and opened the door to fascism in the United States. The book argues that these are all destructive outcomes of anxiety and responds by envisioning a creative intervention: arguing that an alternative response to anxiety is to think creatively about our relationships, society, and politics. The author poses this as feminist Christian realism, an update of Reinhold Niebuhr's Christian realism, arguing that religious approaches still have a place in politics and international relations. |
#pragma once
namespace Icons {
const uint8_t ICONS[] PROGMEM = {
0x88, 0x7e, 0x09, 0x01, 0x28, 0x28, 0x28, 0x28,
0x0c, 0x12, 0x21, 0x25, 0x12, 0x2c, 0x40, 0x80,
0x05, 0xf2, 0xa5, 0xa0, 0xfe, 0xaa, 0xaa, 0xfe,
0x10, 0x08, 0x08, 0x17, 0x26, 0x25, 0x10, 0x08,
0x7e, 0x81, 0xa9, 0x91, 0x81, 0xa1, 0x81, 0x7e,
0xfc, 0x94, 0x97, 0xb5, 0xb5, 0x97, 0x94, 0xfc
};
void drawIcon(const uint8_t index, const int16_t x, const int16_t y) {
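    // 'ab' is assumed to be an Arduboy instance defined elsewhere in the
    // sketch; each icon occupies 8 bytes of ICONS, i.e. one 8x8 bitmap.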
ab.drawBitmap(x, y, Icons::ICONS + index * 8, 8, 8);
}
}
|
from functools import partial
import sys
import convert_to_schema as cts
class ConvertToDoc(object):
def run(self, resource_name, out_file, sdk_dir, api_doc_dir, req_struct_names, resp_struct_names):
schema = cts.ConvertToSchema()
schema.pre_run(resource_name, sdk_dir, api_doc_dir, req_struct_names, resp_struct_names)
for r in self._convert(schema._structs):
self.write_result(out_file, r)
def _convert(self, structs):
argu_desc = []
attr_desc = ["## Attributes Reference\n\nThe following attributes are exported:\n\n"]
argu_child_struct = []
attr_child_struct = []
struct = structs["CreateOpts"]
for item in struct:
is_struct = self._is_struct(item)
if item.opt_kind == cts.OptKindComputed:
attr_desc.append(self._convert_single_attr(item, is_struct))
if is_struct:
attr_child_struct.append((item.param_type_info.go_type, item.schema_name))
else:
attr_desc.append("* `%s` - See Argument Reference above.\n" % item.schema_name)
argu_desc.append(self._convert_single_argu(item, is_struct))
if is_struct:
argu_child_struct.append((item.param_type_info.go_type, item.schema_name))
for cs in argu_child_struct:
argu_desc.extend(self._convert_struct(structs, self._convert_single_argu, *cs))
for cs in attr_child_struct:
attr_desc.extend(self._convert_struct(structs, self._convert_single_attr, *cs))
return argu_desc, attr_desc
@classmethod
def _convert_struct(cls, structs, handle, struct_name, schema_name):
fcs = partial(cls._convert_struct, structs, handle)
result = []
result.append("The `%s` block supports:\n\n" % schema_name)
child_struct = []
struct = structs[struct_name]
for item in struct:
is_struct = cls._is_struct(item)
result.append(handle(item, is_struct))
if is_struct:
child_struct.append((item.param_type_info.go_type, item.schema_name))
for cs in child_struct:
result.extend(fcs(*cs))
return result
@classmethod
def write_result(cls, out_file, result):
fo = None
try:
fo = open(out_file, "a")
for i in result:
fo.writelines(i)
except Exception as ex:
raise Exception("Write %s failed: %s" % (out_file, ex))
finally:
if fo:
fo.close()
@classmethod
def _is_struct(cls, item):
return cts.ConvertToSchema.is_struct(item.param_type_info.type_kind)
@classmethod
def split_long_str(cls, s):
ls = []
s1 = s
while s1 != "":
i = s1.find(" ", 75)
s2, s1 = (s1, "") if i == -1 else (s1[:i], s1[(i + 1):])
ls.append(s2)
return "\n ".join(ls)
@classmethod
def _convert_opt_kind(cls, kind):
m = {
cts.OptKindRequired: "Required",
cts.OptKindOptionalComputed: "Optional",
cts.OptKindComputed: "Computed",
cts.OptKindOptional: "Optional",
}
return m.get(kind, "")
@classmethod
def _convert_single_attr(cls, item, is_struct):
return cls.split_long_str(
"* `%(name)s` - %(desc)s%(struct_desc)s\n" % {
"name": item.schema_name,
"desc": item.desc,
"struct_desc": " The structure is described below." if is_struct else "",
})
@classmethod
def _convert_single_argu(cls, item, is_struct):
return cls.split_long_str(
"* `%(name)s` - (%(kind)s) %(des)s%(struct_desc)s\n\n" % {
'name': item.schema_name,
"kind": cls._convert_opt_kind(item.opt_kind),
"des": item.desc,
"struct_desc": " The structure is described below." if is_struct else "",
})
if __name__ == "__main__":
if len(sys.argv) != 7:
print "usage: python convert_document.py resource_name, out_file, sdk_dir, api_doc_dir, req_struct_names, resp_struct_names"
sys.exit(0)
try:
ConvertToDoc().run(*sys.argv[1:])
except Exception as ex:
print("convert document failed: ", ex)
sys.exit(1)
sys.exit(0)
|
Surface Modification by Dispersion of Hard Particles on Magnesium Alloy with Laser This study aims to modify the surface of magnesium alloy AZ91E to resist sliding wear by means of dispersion of hard particles on its surface. TiC and SiC powders, carried by an inert gas stream, were injected into a melting pool formed by a laser beam. The powder of a hypereutectic aluminum-silicon alloy was also injected. These modified surfaces were evaluated to determine their resistance to sliding wear. The main results are summarized as follows: 1) Particles of TiC and SiC are dispersed in the cladding layer of AZ91E magnesium alloy. 2) In the case of the hypereutectic aluminum-silicon alloy, fine particles of Mg2Si intermetallic compound are dispersed uniformly. 3) These modified surfaces show much higher resistance to sliding wear than the unmodified magnesium alloy. The cladding layers of TiC or SiC show high resistance, but the opposing materials are worn. The Mg2Si dispersed cladding layer shows high resistance, while the wear of the opposing material is slight. |
Synchronic variation in the expression of French negation: A Distributed Morphology approach ABSTRACT This article discusses ne-variation in French sentential negation based on the phonologically transcribed corpus T-zéro (cf. Meisner, in preparation) which allows a new interpretation of the facts. In the last decades, sociolinguistic and stylistic approaches to linguistic variation in French (cf. Armstrong, 2001) have shown that extra-linguistic factors, such as the speaker's age, sex, social background or geographic origin as well as the communication situation may have considerable influence on variable ne-omission. However, in contrast to most sociolinguistic studies dedicated to this phenomenon (cf. Ashby, 1976, 1981, 2001; Armstrong and Smith, 2002; Coveney, 2002) we will focus on the linguistic factors influencing ne-variation, since their importance is empirically evident but not yet fully exploited on a theoretical level. One leading assumption with respect to ne-variation in literature is that the particle ne is most frequently retained in combination with a proper name or a full DP and is commonly omitted when combined with clitic subjects. However, there are many exceptions to this rule which, as we argue, can be better explained by considering the phonological form of the involved subject. Ne-realisation is treated here as an inner-grammatical phenomenon that is triggered by context sensitivity with regard to the element to its left, i.e. usually the grammatical subject, and not as a consequence of code-switching between two grammars nor as a sociolinguistic variable characterising certain groups of speakers in the Labovian sense (cf. Labov, 1972), since we seek to describe general variational tendencies, present in nearly all speakers of contemporary European French. Our analysis, which is implemented in a Distributed Morphology framework (Halle & Marantz, 1994), is compatible, however, with stylistic approaches to ne-variation, such as audience design (cf. Bell, 1984, 2001). |
COVID-19 in a patient with new adult-onset Still disease: A case report Rationale: Adult-onset Still disease (AOSD) is a systemic autoinflammatory illness of unknown cause. Its manifestations comprise fever; arthritis or arthralgia; and skin rash with high inflammatory markers and ferritin levels. Coronavirus disease 2019 (COVID-19) shares several clinical features and laboratory markers of AOSD, making it challenging to differentiate between the 2 conditions. Patient concerns: A 29-year-old woman presented with fever, skin rash, and polyarthritis 4 weeks before admission. Two weeks after illness onset, she had an infection with symptoms similar to those of COVID-19. She observed that her symptoms worsened, and new symptoms appeared including headache; vomiting; diarrhea; and loss of taste and smell. The patient tested positive for severe acute respiratory syndrome coronavirus 2 using polymerase chain reaction. Diagnosis: The patient was diagnosed with AOSD complicated with COVID-19 after exclusion of other possible causes of her illness, such as infections, malignancy, or underlying rheumatological disease. Interventions: The patient was administered corticosteroids and methotrexate. The patient responded quickly, particularly to corticosteroids. Outcomes: This is the second reported case of COVID-19 in a patient with AOSD. She experienced COVID-19 shortly after having AOSD, indicating that those with AOSD might have a higher risk of COVID-19 infection. Furthermore, she developed the most prevalent COVID-19 symptoms. However, distinguishing most of these symptoms from AOSD manifestations was difficult. Lessons: Early diagnosis and differentiation between AOSD and COVID-19 and prompt initiation of treatment are required. Introduction "Still disease" was named after George Still who, in 1897, described 22 children with systemic onset juvenile idiopathic arthritis. In 1971, Eric Bywaters identified adult-onset Still disease (AOSD) by describing 14 adult patients with skin rash, fever, and polyarthritis whose clinical presentation closely resembled that of pediatric Still illness. AOSD is a rare autoinflammatory disorder of unknown etiology with an incidence of 0.16 to 0.40 per 100,000 individuals. However, its prevalence rate has been reported to be 1 to 34 cases per million people. The coronavirus disease 2019 (COVID-19) pandemic started in 2019 with symptoms ranging from asymptomatic to multi-organ failure. Numerous clinical and biochemical characteristics of AOSD and COVID-19 are similar. Both diseases are marked by high levels of serum ferritin and a process of hyperinflammation caused by a cytokine storm that can lead to multiple organ failure. The coexistence of both conditions, particularly when a patient with undiagnosed AOSD has COVID-19 infection, can make appropriate diagnosis very challenging. However, early diagnosis and treatment are crucial to avoid life-threatening complications. Occurrence of COVID-19 in patients with AOSD remains to be elucidated. Therefore, we present the case of a recently diagnosed AOSD patient who acquired COVID-19 shortly after becoming ill. She had classic AOSD symptoms and was treated with corticosteroids and methotrexate (MTX). The patient responded rapidly, clinically, and biochemically, particularly to corticosteroids. Case report Our patient was a 29-year-old Saudi woman with a history of pseudotumor cerebri and acetazolamide use. The patient had no psychological or family history of autoimmune or rheumatological disease.
She was admitted to the department of internal medicine in our hospital with chief complaints of high spiking fever reaching up to 39.5 °C combined with chills, skin rash, and polyarthritis that lasted for the last 4 weeks. Skin rash was non-itchy and transient erythematous maculopapular, mainly observed on her trunk and extremities. She also had joint pains involving the knee, ankle, shoulders, and wrists. Moreover, she showed recurrent sore throat and myalgia. Two days before admission, she experienced abdominal pain, vomiting, and diarrhea. During illness, she visited 2 medical centers and was treated with non-steroidal anti-inflammatory drugs, by which temporary mild relief was achieved. There was no clear diagnosis. At that time, the patient tested negative for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) using nasopharyngeal swab reverse transcription polymerase chain reaction (RT-PCR) and further investigations were not requested. In addition, she received 2 COVID-19 vaccines 9 months before. Two weeks after her illness, she started to report headache, diarrhea, and loss of taste and smell. She was then suspected to have COVID-19. RT-PCR was repeated and revealed positivity for SARS-CoV-2. On the day of her admission, 10 days after testing positive for COVID-19, SARS-CoV-2 RT-PCR was repeated and showed negative results. The patient was then admitted to our department due to fever of unknown origin. She was unable to move due to myalgia and polyarthralgia. She continued to have a high-grade fever with subsided skin rash. She had no oral ulcers; hair loss; or cardiac, respiratory, or neurological symptoms. The patient also had mild abdominal pain but without vomiting or diarrhea. On physical examination, she appeared ill and highly febrile (39.4 °C). Oxygen saturation on room air was 98%. In addition, she had cervical lymphadenopathy. Skin examination was normal. Musculoskeletal examination showed tender joints without effusion. Abdominal, chest, cardiovascular, and neurological examinations revealed unremarkable findings. Laboratory test results revealed white blood cell count, 22.7 × 10^3 cells/µL (4-10 × 10^3 cells/µL); hemoglobin, 8 g/dL (12-15 g/dL); platelet, 671 × 10^9/L (150-450 × 10^9/L); erythrocyte sedimentation rate (ESR), 120 mm/hour (0-25 mm/hour); positive C-reactive protein; ferritin, 5616 ng/mL (13-150 ng/mL); lactate dehydrogenase, 490 U/L (100-247 U/L); and elevated liver enzymes aspartate transaminase, 69 U/L (<35 U/L) and alanine aminotransferase 72 U/L (<35 U/L). Renal function test, creatine kinase, triglyceride, fibrinogen, and peripheral blood smear were normal. Moreover, blood, urine, stool cultures, tuberculin skin test, Cytomegalovirus, Epstein-Barr virus, Human immunodeficiency virus, hepatitis B and C, malaria screen, Salmonella (Widal test), and Brucella serology tests were all negative. Rheumatology work-up including antinuclear antibodies, rheumatoid factor, anti-cyclic citrullinated peptide, anti-neutrophil cytoplasmic antibodies, and extractable nuclear antigen panel tests were also negative. Thoracic echocardiogram showed no abnormalities. However, enhanced computed tomography scan of the neck showed mildly enlarged supraclavicular and cervical lymph nodes. Chest computed tomography scan showed bilateral small effusions and mild hepatosplenomegaly. Excisional lymph node biopsy was consistent with reactive lymphoid hyperplasia. Endoscopy and colonoscopy were unremarkable.
The patient was examined by hematology, gastroenterology, and respiratory physicians and no abnormality was reported. Based on the above findings, she was finally diagnosed with AOSD. The patient was initially administered intravenous prednisone (60 mg daily) and then switched to oral treatment with a similar dose and improved dramatically. Her fever subsided and joint pain improved rapidly. Moreover, her laboratory abnormalities including complete blood count, liver enzymes, acute phase reactants, and ferritin improved. She was then discharged with oral prednisone 60 mg once a day to be tapered slowly. A plan was clearly explained to the patient. During her follow-up after 2 weeks, no fever, mild joint pain, and no skin rash or sore throat were found. Laboratory abnormalities showed further improvement. Her ESR and ferritin levels were 76 mm/hour and 1650 ng/mL, respectively. Oral MTX (15 mg) was administered once per week. A month after discharge, she improved without any signs or symptoms of the disease. The patient tolerated MTX well and started to taper off prednisone without any complications. Her ESR and ferritin level further improved to 40 mm/hour and 230 ng/mL, respectively. Despite improvement, she is still regularly and closely monitored during the follow-up period. Discussion and conclusions AOSD is a complex systemic autoinflammatory disorder that affects young male and female adults. Previous studies reported that female patients were more affected than male ones. The main clinical features of AOSD comprise fever; transient skin rash; and arthritis or arthralgia. Other features include sore throat, lymphadenopathy, serositis, splenomegaly, hepatomegaly, and increased levels of liver enzymes. Additionally, it is associated with hyperleukocytosis, high inflammatory markers, and high ferritin level. However, the pathophysiology of AOSD remains unclear. Multiple factors have been linked to AOSD onset, including viral infections; genetics; and immunological dysregulation, such as inflammation caused by cytokines and dysregulated apoptosis. Due to the lack of specific laboratory tests that can distinguish AOSD from other conditions with similar symptoms, AOSD remains challenging to diagnose. Before appropriate diagnosis of AOSD, it is necessary to screen for infectious, neoplastic, and autoimmune illnesses. Our patient was admitted due to fever of unknown origin. Up to 20% of cases of fever of unknown cause can be attributable to AOSD. Moreover, developing a high-spiking fever is one of the early clinical features of this disease. The Yamaguchi and Fautrel classification criteria (Table 1), often used for AOSD diagnosis, were met in our patient. This case was challenging because the patient had COVID-19 complicated with AOSD, whose symptoms had started recently but which was not immediately diagnosed. Both conditions share common clinical and biochemical features. Serum ferritin is recognized as a particular diagnostic criterion for AOSD. Moreover, serum ferritin levels that are 5 times higher than the usual upper limit are shown to have a sensitivity of 80% and specificity of 46% for diagnosing AOSD. AOSD, macrophage activation syndrome, catastrophic antiphospholipid syndrome, and septic shock are the 4 clinical conditions under the general term "hyperferritinemic syndromes".
They are all marked by elevated serum ferritin levels and cytokine storm that ultimately result in multi-organ failure. In addition, COVID-19 has been recently included in the definition of "hyperferritinemic syndromes" due to similar clinical characteristics and its associated complications. According to the findings of Colafrancesco et al, ferritin expression was higher in patients with AOSD than in those with COVID-19. Data on COVID-19 occurrence in patients with AOSD is obscure. As components of hyperferritinemic syndromes, COVID-19 and AOSD occur at different incidence rates. However, there is only 1 case reported in 2021 of a patient with AOSD who was in remission when he had COVID-19. To the best of our knowledge, this is the second case. However, our case is unique because our patient had AOSD symptoms 2 weeks before having COVID-19, which was not immediately diagnosed. On the other hand, several case reports showed an inappropriate immune response to COVID-19 that caused development of AOSD. Additionally, there are several reported cases of new-onset AOSD after COVID-19 vaccination. Severe COVID-19 and AOSD can be associated with life-threatening complications and high mortality rate when undiagnosed or not treated early. Similarities in the clinical presentation and laboratory markers make it challenging to diagnose and differentiate between the 2 conditions. Of note, the patient was admitted to our hospital and highly suspected of AOSD based on her initial symptoms, such as fever; skin rash; polyarthralgia; and negative SARS-CoV-2 PCR test, which was initially performed during her illness and repeated on admission. Moreover, detailed history was crucial because the patient was aware of the new symptoms of COVID-19 that started later and improved upon admission time with persistent and worsening of the other symptoms, particularly fever and arthralgia. In conclusion, it has been reported that patients with autoimmune disorders may be more likely than the general population to be prone to COVID-19 infection. However, there is no available data regarding COVID-19 infection risk in patients with AOSD. Our patient developed COVID-19 shortly after having AOSD, indicating that those with AOSD might have a higher risk of acquiring COVID-19 infection. More research is required to establish the risk factors of COVID-19 infection in patients with AOSD. In addition, it is essential to determine if such diseases affect the clinical presentation of COVID-19 and the extent to which COVID-19 infection might affect the clinical course of AOSD. Our patient had the most prevalent COVID-19 symptoms, including sore throat, headache, myalgia, arthralgia, diarrhea, and loss of taste and smell. However, it was difficult to distinguish the majority of these symptoms from AOSD manifestations. Early diagnosis and treatment of such conditions is crucial to avoid poor outcomes. |
package local
import (
	"strconv"
	"sync"
	"time"

	"github.com/pmylund/go-cache"
)

// LocalCache is a thin wrapper around go-cache with one-time initialization.
type LocalCache struct {
	instance *cache.Cache
	initOnce sync.Once
}

// Init builds the underlying cache exactly once from the given config.
// Pointer receivers are essential here: with the original value receivers the
// sync.Once was copied on every call and the assignment to instance was lost,
// so the cache was never actually initialized.
func (c *LocalCache) Init(config map[string]string) {
	c.initOnce.Do(func() {
		defaultExpiration, _ := strconv.ParseInt(config["defaultExpiration"], 10, 64)
		purgeTime, _ := strconv.ParseInt(config["purgeTime"], 10, 64)
		c.instance = cache.New(time.Duration(defaultExpiration)*time.Second, time.Duration(purgeTime)*time.Second)
	})
}

// Get returns the value stored under key and whether it was present.
func (c *LocalCache) Get(key string) (interface{}, bool) {
	return c.instance.Get(key)
}

// Set stores value under key with the given timeout.
func (c *LocalCache) Set(key string, value interface{}, timeout time.Duration) bool {
	c.instance.Set(key, value, timeout)
	return true
}

// Delete removes key from the cache.
func (c *LocalCache) Delete(key string) bool {
	c.instance.Delete(key)
	return true
}
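A minimal usage sketch for the cache above (illustrative only: the module path, the main package, and the config values are assumptions, not part of the original code):

package main

import (
	"fmt"
	"time"

	"example.com/app/local" // hypothetical import path for the package above
)

func main() {
	c := &local.LocalCache{}
	c.Init(map[string]string{"defaultExpiration": "60", "purgeTime": "120"})
	c.Set("greeting", "hello", 30*time.Second)
	if v, ok := c.Get("greeting"); ok {
		fmt.Println(v) // prints: hello
	}
}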
Glucocorticoids suppress corticotropin-releasing hormone and vasopressin expression in human hypothalamic neurons.

Glucocorticoids are widely used in clinical practice in a variety of immune-mediated and neoplastic diseases, mostly for their immunosuppressive, leukopenic, antiedematous, or malignancy-suppressive actions. However, their usage is limited because of serious and sometimes life-threatening side-effects. Endogenous glucocorticoids are secreted by the adrenal cortex under the control of the hypothalamus and the pituitary gland. This hypothalamo-pituitary-adrenal axis, in turn, is under the negative feedback control of glucocorticoids. Although the suppression of adrenocortical and pituitary gland functions by glucocorticoids has been shown in humans, a feedback effect at the level of the hypothalamus, as shown in the rat, has not been reported to date. The present study shows for the first time that glucocorticoids suppress both corticotropin-releasing hormone (CRH) and vasopressin (AVP) in the human hypothalamus. We studied immunocytochemically the postmortem hypothalami of nine corticosteroid-exposed subjects and eight controls. The number of CRH-expressing cells in the hypothalamic paraventricular nucleus of glucocorticoid-exposed patients was only 3.3% of that in the controls, and the total immunoreactivities for AVP were 31% and 33% of those in the controls in the supraoptic nucleus and the paraventricular nucleus, respectively, whereas the immunoreactivity for oxytocin did not differ between the two groups. Suppression of hypothalamic CRH and AVP neurons by glucocorticoids may have important consequences for neuroendocrinological mechanisms such as the disturbance of water balance during treatment, as well as for immunological processes in the brain and the pathogenesis of the withdrawal syndrome after discontinuation of corticosteroid treatment. In addition, as both AVP and CRH neurons also project to other brain structures and influence memory, mood, and behavior, their suppression by glucocorticoids may be responsible for at least part of the central nervous system side-effects of glucocorticoids.
/** @file
ssl_utils.h - a container of connection objects
@section license License
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
#pragma once
#include <openssl/ssl.h>
#include <string>
#include <ts/ts.h>
#include <mutex>
#include <deque>
#include "publisher.h"
#include "subscriber.h"
#include "stek.h"
struct ssl_session_param {
std::string cluster_name;
int key_update_interval; // STEK master rotation period seconds
int stek_master; // bool - Am I the STEK setter/rotator for POD?
ssl_ticket_key_t ticket_keys[2]; // current and past STEK
std::string redis_auth_key_file;
RedisPublisher *pub = nullptr;
RedisSubscriber *sub = nullptr; // initialized like pub to avoid an indeterminate pointer
ssl_session_param();
~ssl_session_param();
};
class PluginThreads
{
public:
void
store(const pthread_t &th)
{
std::lock_guard<std::mutex> lock(threads_mutex);
threads_queue.push_back(th);
}
void
terminate()
{
std::lock_guard<std::mutex> lock(threads_mutex);
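// Request cancellation of every tracked thread first; the loop below then
// joins each one so that no plugin thread outlives shutdown.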
for (pthread_t th : threads_queue) {
::pthread_cancel(th);
}
while (!threads_queue.empty()) {
pthread_t th = threads_queue.front();
::pthread_join(th, nullptr);
threads_queue.pop_front();
}
}
private:
std::deque<pthread_t> threads_queue;
std::mutex threads_mutex;
};
int STEK_init_keys();
const char *get_key_ptr();
int get_key_length();
/* Initialize ssl parameters */
/**
Returns the result of initialization. A return value of 0 means
the initialization succeeded; -1 means it failed.
@param conf_file the configuration file
@return @c 0 on success.
*/
int init_ssl_params(const std::string &conf_file);
int init_subscriber();
int SSL_session_callback(TSCont contp, TSEvent event, void *edata);
extern ssl_session_param ssl_param; // almost everything one needs is stored in here
extern PluginThreads plugin_threads;
|
<filename>r2/src/test/java/test/r2/message/TestBuilders.java
/*
Copyright (c) 2012 LinkedIn Corp.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
/* $Id$ */
package test.r2.message;
import com.linkedin.r2.message.Message;
import com.linkedin.r2.message.MessageBuilder;
import com.linkedin.r2.message.Request;
import com.linkedin.r2.message.Response;
import com.linkedin.r2.message.rest.RestMessage;
import com.linkedin.r2.message.rest.RestMethod;
import com.linkedin.r2.message.rest.RestRequest;
import com.linkedin.r2.message.rest.RestRequestBuilder;
import com.linkedin.r2.message.rest.RestResponse;
import com.linkedin.r2.message.rest.RestResponseBuilder;
import java.net.URI;
import org.testng.Assert;
import org.testng.annotations.Test;
/**
* @author <NAME>
* @version $Revision$
*/
public class TestBuilders
{
@Test
public void testChainBuildRestRequestFromRestRequestBuilder()
{
final RestRequest req = new RestRequestBuilder(URI.create("test"))
.setEntity(new byte[] {1,2,3,4})
.setHeader("k1", "v1")
.setMethod(RestMethod.PUT)
.build()
.builder()
.setEntity(new byte[] {5,6,7,8})
.setHeader("k2", "v2")
.setMethod(RestMethod.POST)
.setURI(URI.create("anotherURI"))
.build();
Assert.assertEquals(new byte[] {5,6,7,8}, req.getEntity().copyBytes());
Assert.assertEquals("v1", req.getHeader("k1"));
Assert.assertEquals("v2", req.getHeader("k2"));
Assert.assertEquals(RestMethod.POST, req.getMethod());
Assert.assertEquals(URI.create("anotherURI"), req.getURI());
}
@Test
public void testChainBuildRestRequestFromRequestBuilder()
{
final Request req = new RestRequestBuilder(URI.create("test"))
.setEntity(new byte[] {1,2,3,4})
.setHeader("k1", "v1")
.setMethod(RestMethod.PUT)
.build()
.requestBuilder()
.setEntity(new byte[] {5,6,7,8})
.setURI(URI.create("anotherURI"))
.build();
Assert.assertEquals(new byte[] {5,6,7,8}, req.getEntity().copyBytes());
Assert.assertEquals(URI.create("anotherURI"), req.getURI());
Assert.assertTrue(req instanceof RestRequest);
final RestRequest restReq = (RestRequest)req;
Assert.assertEquals("v1", restReq.getHeader("k1"));
Assert.assertEquals(RestMethod.PUT, restReq.getMethod());
}
@Test
public void testChainBuildRestRequestFromRestBuilder()
{
final RestMessage req = new RestRequestBuilder(URI.create("test"))
.setEntity(new byte[] {1,2,3,4})
.setHeader("k1", "v1")
.setMethod(RestMethod.PUT)
.build()
.restBuilder()
.setEntity(new byte[] {5,6,7,8})
.setHeader("k2", "v2")
.build();
Assert.assertEquals(new byte[] {5,6,7,8}, req.getEntity().copyBytes());
Assert.assertEquals("v1", req.getHeader("k1"));
Assert.assertEquals("v2", req.getHeader("k2"));
Assert.assertTrue(req instanceof RestRequest);
final RestRequest restReq = (RestRequest)req;
Assert.assertEquals(RestMethod.PUT, restReq.getMethod());
Assert.assertEquals(URI.create("test"), restReq.getURI());
}
@Test
public void testChainBuildRestRequestFromMessageBuilder()
{
final MessageBuilder<?> builder = new RestRequestBuilder(URI.create("test"))
.setEntity(new byte[] {1,2,3,4})
.setHeader("k1", "v1")
.setMethod(RestMethod.PUT)
.build()
.builder();
final Message req = builder
.setEntity(new byte[] {5,6,7,8})
.build();
Assert.assertEquals(new byte[] {5,6,7,8}, req.getEntity().copyBytes());
Assert.assertTrue(req instanceof RestRequest);
final RestRequest restReq = (RestRequest)req;
Assert.assertEquals(RestMethod.PUT, restReq.getMethod());
Assert.assertEquals(URI.create("test"), restReq.getURI());
Assert.assertEquals("v1", restReq.getHeader("k1"));
}
@Test
public void testChainBuildRestResponseFromRestResponseBuilder()
{
final RestResponse res = new RestResponseBuilder()
.setEntity(new byte[] {1,2,3,4})
.setHeader("k1", "v1")
.setStatus(300)
.build()
.builder()
.setEntity(new byte[] {5,6,7,8})
.setHeader("k2", "v2")
.setStatus(400)
.build();
Assert.assertEquals(new byte[] {5,6,7,8}, res.getEntity().copyBytes());
Assert.assertEquals("v1", res.getHeader("k1"));
Assert.assertEquals("v2", res.getHeader("k2"));
Assert.assertEquals(400, res.getStatus());
}
@Test
public void testChainBuildRestResponseFromResponseBuilder()
{
final Response res = new RestResponseBuilder()
.setEntity(new byte[] {1,2,3,4})
.setHeader("k1", "v1")
.setStatus(300)
.build()
.responseBuilder()
.setEntity(new byte[] {5,6,7,8})
.build();
Assert.assertEquals(new byte[] {5,6,7,8}, res.getEntity().copyBytes());
Assert.assertTrue(res instanceof RestResponse);
final RestResponse restRes = (RestResponse)res;
Assert.assertEquals("v1", restRes.getHeader("k1"));
Assert.assertEquals(300, restRes.getStatus());
}
@Test
public void testChainBuildRestResponseFromRestBuilder()
{
final RestMessage res = new RestResponseBuilder()
.setEntity(new byte[] {1,2,3,4})
.setHeader("k1", "v1")
.setStatus(300)
.build()
.restBuilder()
.setEntity(new byte[] {5,6,7,8})
.setHeader("k2", "v2")
.build();
Assert.assertEquals(new byte[] {5,6,7,8}, res.getEntity().copyBytes());
Assert.assertEquals("v1", res.getHeader("k1"));
Assert.assertEquals("v2", res.getHeader("k2"));
Assert.assertTrue(res instanceof RestResponse);
final RestResponse restRes = (RestResponse)res;
Assert.assertEquals(300, restRes.getStatus());
}
@Test
public void testChainBuildRestResponseFromMessageBuilder()
{
final MessageBuilder<?> builder = new RestResponseBuilder()
.setEntity(new byte[] {1,2,3,4})
.setHeader("k1", "v1")
.setStatus(300)
.build()
.builder();
final Message res = builder
.setEntity(new byte[] {5,6,7,8})
.build();
Assert.assertEquals(new byte[] {5,6,7,8}, res.getEntity().copyBytes());
Assert.assertTrue(res instanceof RestResponse);
final RestResponse restRes = (RestResponse)res;
Assert.assertEquals("v1", restRes.getHeader("k1"));
Assert.assertEquals(300, restRes.getStatus());
}
}
|
Synuclein pathology of the spinal and peripheral autonomic nervous system in neurologically unimpaired elderly subjects

Studies on cases with incidental Lewy body disease (ILBD) suggest that synuclein (SN) pathology of Parkinson's disease (PD) starts in lower brainstem nuclei and in the olfactory bulb. However, medullary structures as the induction site of SN pathology have been questioned, as large parts of the nervous system, including the spinal cord and the peripheral autonomic nervous system (PANS), have not been examined in ILBD. Thus, the time course of PD lesions in the spinal cord or PANS in relation to medullary lesions remains unknown. We collected 98 post mortem cases with no reference to PD-associated symptoms on clinical records. SN pathology was found in the central nervous system, including the spinal cord, and in the PANS in 17 (17.3%) cases. SN pathology was encountered in autonomic nuclei of the thoracic spinal cord, brainstem and olfactory nerves in 17/17, in sacral parasympathetic nuclei in 15/16, in the myenteric plexus of the oesophagus in 14/17, in sympathetic ganglia in 14/17, and in the vagus nerve in 12/16 cases. In addition to the thoracic lateral horns, a high number of SN lesions was also found in non-autonomic spinal cord nuclei. Considering supraspinal structures, our cases corresponded roughly to the recently described sequential order of SN involvement in PD. Our study indicates, however, that the autonomic nuclei of the spinal cord and the PANS belong to the most constantly and earliest affected regions, next to medullary structures and the olfactory nerves. A larger cohort of ILBD cases will be needed to pinpoint the precise induction site of SN pathology among these structures.
import { RouterContext } from "koa-router";
import { Next } from "koa";
import { createBadRequestResponse, createSuccessResponse } from "@server/helpers/responses";
import { BackupsRepo } from "../repository/backupsRepo";
export class SettingsRouter {
static async create(ctx: RouterContext, _: Next) {
const { name, data } = ctx.request.body;
// Validation: Name
if (!name) {
ctx.status = 400;
ctx.body = createBadRequestResponse("No name provided!");
return;
}
// Validation: Settings Data
if (!data) {
ctx.status = 400;
ctx.body = createBadRequestResponse("No settings provided!");
return;
}
// Validation: Settings must be a JSON object
if (typeof data !== "object" || Array.isArray(data)) {
ctx.status = 400;
ctx.body = createBadRequestResponse("Settings must be a JSON object!");
return;
}
// Safety: always include a name in the JSON dict if one was not provided
if (!Object.keys(data).includes("name")) {
data.name = name;
}
// Save the settings to a file
await BackupsRepo.saveSettings(name, data);
ctx.body = createSuccessResponse("Successfully saved settings!");
}
static async get(ctx: RouterContext, _: Next) {
const name = ctx.query.name as string;
let res: any;
if (name && name.length > 0) {
res = await BackupsRepo.getSettingsByName(name);
} else {
res = await BackupsRepo.getAllSettings();
}
ctx.body = createSuccessResponse(res);
}
}
|
class Unique:
"""Provides a way to make a field a model unique """
def __init__(self, *args):
"""
:param args: The fields to make unique
"""
self.fields = args |
This invention relates to a segmenting device for portionable filling in a flexible tubular casing, comprising crimping elements which can be swivelled against each other so as to overlap each other and be symmetrical with respect to the tube axis, which crimping elements together circumscribe an opening of variable size and, by reducing the opening, crimp the filled tube, the crimping elements consisting of strips the ends of which are stationarily pivotally mounted with equal spacings on a circle concentric with respect to the tube and opening axis, while their other end portions are guided in a ring which is likewise concentric with respect to the tube and opening axis and can be rotated around the same to a limited extent, such that they extend in the graduated circle of their swivel bearings in a chord-like manner and can perform both swivel and longitudinal movements with respect to the ring in their main plane.
Such segmenting device, namely for portioning individual sausages from a sausage strand, is known from DE 196 06 654 C1. There are provided at least four strips as crimping elements, which in each opening and closing position always extend in parallel in pairs and circumscribe a square (each of a different size). Regardless of their usability, it turned out in the operation of this known device that a square crimping opening in particular in the case of tubular packages of a large diameter can lead to problems in the formation of the crimping neck at the tubular casing; even if due to the reduced relative shearing movements between the tubular casing and the crimping elements the movement of all four strips during crimping represents a distinct improvement as compared to the conventional segmenting devices with two linearly or pivotally movable crimping elements with two active surfaces generally extending at right angles to each other (DE 36 10 010 A1, DE 25 50 042 A1).
It is the object underlying the invention to improve the neck formation during the closing operation of the segmenting device, especially in the case of tubular packages of a larger diameter, while maintaining the reduced shearing movements or even reducing them further. In accordance with the invention, this object is solved in that at least three crimping elements are provided, whose active surfaces facing the opening are bent or curved in the main plane. In this way, the opening is given a shape approaching the more or less ideal circular shape of the finally obtained neck of the tubular casing (which is regularly fixed permanently and closed by means of likewise substantially circular closure clips), where at the same time the larger number of active surface portions at the crimping elements, which are distributed over the periphery, promotes the radial crimping of the tubular casing towards the tube and opening axis, so that shearing movements between the tubular casing and the crimping elements are omitted almost completely.
The advantageous effect of this design and arrangement of the crimping elements which, in contrast to known segmenting devices with crimping elements which are pivotable and are provided with a plurality of active surfaces extending at an angle with respect to each other, are not pivotable about one and the same axis, but about a plurality of axes distributed around the periphery, is promoted even more when bending or curving the crimping elements is effected at an obtuse angle.
In general, the crimping elements are made of a relatively thin-walled (as compared to their width) sheet. In accordance with an embodiment of the invention it is therefore provided that the crimping elements are bent twice (in a Z-shaped manner) out of their (original) main plane by substantially the thickness of the crimping elements along a line extending radially with respect to the tube and opening axis. In this way, all crimping elements can each be pivoted in the same plane, so that they need not be staggered parallel to the tube and opening axis. However, the plane of the respective one swivel axis of all crimping elements is offset with respect to the plane of the respective other swivel axis parallel to the tube and opening axis, so that the active surfaces engaging the tubular casing during the crimping operation alternately lie in the one and in the other plane in peripheral direction. However, it is of considerable advantage that even in the case of three, four or even more elements only two adjacent planes are covered by the crimping elements and thus the total thickness of the segmenting device substantially only corresponds to twice the thickness of the crimping elements.
When the inventive segmenting device is part of a spreader, which has two sets of crimping elements which in the closing condition can be moved into an axial distance from each other, the crimping elements of the second set can advantageously be mutually offset with respect to those of the first set by one quarter of the spacing angle of adjacent elements around the tube and opening axis. This will "round" the circumscribed opening even more, and it is possible for instance to distribute a total of four crimping elements among two sets of two crimping elements each, without returning to the disadvantages of the prior art (DE 36 10 010 A1, DE 25 50 042 A1).
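As a concrete illustration (ours, not from the patent itself): with four crimping elements per set, adjacent elements are spaced at 90 degrees around the tube and opening axis, so an offset of one quarter of that spacing angle rotates the second set by 22.5 degrees relative to the first.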
In a spreader as mentioned above, the two sets of crimping elements can furthermore be rotatable and drivable during the spreading operation in synchronism with each other or against each other, in order to thus improve the neck formation even more. Moreover, it may be expedient to close the two sets of crimping elements one after the other, in order to ensure the withdrawal of tube material and to thus prevent the tube material from being overloaded in the crimping area. |
So, it’s that time of year again where all I want to eat is warm food. This is probably the easiest chili to make, seeing as it doesn’t require a single bit of defrosting or cooking meat. You, the reader, can, if you so choose to, but I will tell you all the secrets of a perfect vegan chili. Even the meat eaters loved it. I suggest you obtain a large sauce pot before you proceed.
What you need:
1 package of Boca meatless crumbles (or preferred faux meats)
2 cans of black beans, drained and rinsed
1 can of kidney beans, drained and rinsed
1 small white onion, chopped
2 cloves of garlic, chopped
1 red pepper, diced
1 yellow pepper, diced
2 cans of San Marzano tomatoes, chopped (do not drain, you’re using the juice as well)
1 tbsp brown sugar
1 1/2 tbsp Chili powder
1 tsp cumin
1 tsp dried oregano
1/2 tsp salt
1 tsp pepper
1 tsp crushed red pepper flakes
3 (?) tbsp olive oil
BEFORE YOU PANIC AT ALL THE THINGS: I assure you this is actually very easy to make.
How to make it:
Heat the olive oil in the pot on medium-high heat, and once it has heated up, throw in the onions, garlic, and peppers. Sauté these for about 5 minutes or so, or until the onions begin to brown a little bit. Add the spices, tomatoes, tomato juice, and beans and bring to a boil. Turn the heat down to somewhere between low and medium, and cook for about 15-20 minutes. By this time, the Boca meat should be defrosted, and feel free to add as much as you’d like. Stir, and let cook for another 10-15 minutes. If you find that your chili is a little runny, DON’T PANIC. Just throw in a tablespoon or two of flour, and let cook for another 5 minutes.
This dish is best served hot, on a toasted, buttered roll with shredded cheese and tabasco sauce, with a side of macaroni:
T cells, such as this one, can be destroyed by an invading HIV army (red dots) that is transferred directly into the cell (Image: NIBSC/SPL)
It’s the world’s most studied virus, but HIV can still take us by surprise. It turns out that the virus can infect and kill immune cells by being pumped directly from one cell into another, during brief connections made between the two.
Until recently, we thought that HIV particles circulating in the blood were largely to blame for infecting and destroying crucial immune cells called CD4 T cells. According to this classic model, after a single virus has infected a T cell, it hijacks the cell’s machinery to build hundreds of copies of itself, which bud off into the blood and eventually wear out and kill the host cell.
This thinking was based on research using blood, a relatively easy way to study the virus. But work with newer tools suggests that this is only part of the story. Using tissue-culture methods, a team led by Warner Greene at the Gladstone Institutes in San Francisco has shown that in fact large numbers of virus particles are often pumped directly from one CD4 T cell into another. And it seems that this process may kill the vast majority of CD4 cells – not infection by single viruses.
Blocking transmission
HIV armies storm neighbouring T cells by hijacking yet another cell system, the immunologic synapses. These are short-term connections between immune cells that allow them to send chemical messages between themselves, which HIV uses to flow from an infected CD4 cell to an uninfected one.
Evidence suggests that this process is hundreds, possibly thousands, of times more efficient than the traditional mode of external infection. Greene says that 95 per cent of the CD4 cells they studied died by this process, rather than from infection by free-floating particles.
This new understanding could open up ways to target the virus, as well as influencing what drugs we choose to treat the disease. Walther Mothes has been studying cell-to-cell HIV transmission at Yale University, and he says that although most antiretroviral drugs work against both forms of infection, the much higher efficiency of pumping viruses directly into a cell can overwhelm some of these drugs, making them less effective.
But the finding may open the way for new treatments. The monkey version of HIV can also be transmitted directly from cell to cell, but monkeys may be able to tolerate this process.
Unlike their human equivalents, monkey CD4 cells manage to survive being inundated with virus particles, and Greene thinks that they have evolved a way to avoid self-destructing. He hopes that anti-inflammatory drugs could be used to mimic this effect in human CD4 cells. One potential drug candidate, VX-765, looks promising in the lab.
Hunt for a vaccine
A better understanding of how the virus spreads directly between cells is probably an important part of the HIV puzzle, says Kenneth Mayer at the Fenway Institute in Boston. He suggests that neglecting to take this mode of transmission into account may at least partly explain the failure of recent vaccine research.
HIV vaccines would work by generating antibodies to fight the virus. But research suggests that different types of antibodies would be needed to kill viruses that are inside cells and viruses that are free-floating in the body. Viruses hiding out inside cells may be more likely to escape destruction, and could perhaps find it easier to evolve resistance to antibodies.
Carl Dieffenbach of the US National Institute of Allergy and Infectious Diseases in Bethesda, Maryland, says that to better understand how to protect against cell infection, we need better vaccine candidates. “Is cell-to-cell transmission going to torpedo a vaccine? We don’t know the answer to this because we don’t have a safe, effective and durable HIV vaccine to understand the exact mechanisms,” he says.
Journal reference: Cell Reports, DOI: 10.1016/j.celrep.2015.08.011 |
<filename>lib/src/main/java/io/astro/lib/ElementComposite.java
package io.astro.lib;
import android.content.Context;
import android.util.Log;
import android.util.SparseArray;
import android.view.View;
import android.view.ViewGroup;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
/**
* @author skeswa
*/
class ElementComposite {
private static final String TAG = "Astro::ElementComposite";
/**
* The Android context to which this composite belongs.
*/
private final Context context;
/**
* The index of this composite in {@link #parent}'s array of childComposites.
*/
private int indexInParent = -1;
/**
* The composite that owns this composite.
*/
private ElementComposite parent;
/**
* The composites owned by this composite.
*/
private ElementComposite[] childComposites;
/**
* The most reduced form of the original input element. This element matches the
* {@link #outputViewable}.
*/
private Element outputElement;
/**
* The renderable generated for the {@link #outputElement}.
*/
private Viewable outputViewable;
/**
* The list of every successive reduction of element-renderable pairs.
*/
private List<ElementReduction> reductions;
/**
* Creates a new element composite.
*
* @param context the context of the Views under this composite.
* @param indexInParent the index of the view that this composite represents in the
* {@link #parent}'s array of children.
* @param parent the parent composite of this composite.
* @param initialInputElement the template used to construct this composite and its children.
*/
ElementComposite(
final Context context,
final int indexInParent,
final ElementComposite parent,
final Element initialInputElement
) {
if (Config.loggingEnabled) Log.d(TAG, toLogMsg("Initializing new element composite."));
this.context = context;
this.parent = parent;
this.indexInParent = indexInParent;
this.reductions = new ArrayList<>();
// Perform the initial reduce.
reduce(initialInputElement, 0);
}
/**
* Updates the renderable and its children starting at the reduction depth specified.
*
* @param reductionDepth the level in the hierarchy at which the reduction resumes.
*/
void update(final int reductionDepth) {
if (reductionDepth >= reductions.size()) {
throw new IllegalArgumentException(
"Reduction depth " + reductionDepth + " is out of bounds.");
}
// Start reduction at the index specified.
reduce(reductions.get(reductionDepth).element, reductionDepth);
}
/**
* Reduces the input element into an Android View by recursively rendering the sub-elements of
* the input element. The initial depth is how deep in the Renderable hierarchy to start
* reduction.
*
* @param inputElement the element to be reduced.
* @param initialReductionDepth the level in the hierarchy at which the reduction begins.
*/
void reduce(final Element inputElement, final int initialReductionDepth) {
if (Config.loggingEnabled)
Log.d(TAG, toLogMsg("Starting the reduction process with element:\n\n" + inputElement
+ "\n\nat initial reduction depth", initialReductionDepth));
int reductionDepth = initialReductionDepth;
Element currElement = inputElement;
while (currElement != null && currElement.getViewableType() == null) {
if (Config.loggingEnabled)
Log.d(TAG, toLogMsg("Reached reduction depth", reductionDepth, "with the current " +
"element",
"being", currElement));
// First check to see whether this reduction has already been performed before.
final boolean currReductionExists = reductions.size() > reductionDepth;
// If there is already a reduction in place, look to apply diffs.
if (currReductionExists) {
if (Config.loggingEnabled)
Log.d(TAG, toLogMsg("Reduction for", currElement, "already exists; applying " +
"result of element comparison."));
// Get the pre-existing reduction.
final ElementReduction reduction = reductions.get(reductionDepth);
if (Config.loggingEnabled)
Log.d(TAG, toLogMsg("Calling Renderable#render() on the current reduction's " +
"renderable (" + reduction.renderable + ")"));
// Use the current renderable to derive the next output element. After that, advance
// the current element to the next element.
final Element nextElement = renderNextElement(
reduction.renderable,
reduction.element,
currElement
);
// If the next element is null, it means that another render was not necessary.
// Since no follow up needs to occur, exit here.
if (nextElement == null) {
if (Config.loggingEnabled)
Log.d(TAG, toLogMsg("After comparing input elements, deduced that no " +
"further rendering" +
" is necessary."));
return;
}
// Replace the old reduction with an updated version.
reductions.set(
reductionDepth,
new ElementReduction(currElement, reduction.renderable)
);
// Advance the current element to the next element.
currElement = nextElement;
} else {
if (Config.loggingEnabled)
Log.d(TAG, toLogMsg("Reduction for", currElement, "does not already exist; " +
"creating it."));
// If there is no reduction in place, look to create it.
final Renderable currRenderable = createRenderable(currElement, reductionDepth);
// Place the new renderable in the current reduction.
reductions.add(new ElementReduction(currElement, currRenderable));
if (Config.loggingEnabled)
Log.d(TAG, toLogMsg("Calling Renderable#render() on the current reduction's " +
"renderable (" + currRenderable + ")"));
// Use the current renderable to derive the next output element. After that, advance
// the current element to the next element.
currElement = currRenderable.render();
}
// The next reduction will be one level deeper.
reductionDepth++;
}
// If the current element is null, throw a tantrum.
if (currElement == null) {
if (Config.loggingEnabled)
Log.e(TAG, toLogMsg("Output element of Renderable#render() on", inputElement
.getRenderableType(), "was null."));
throw new IllegalElementException(
"All Renderables must eventually render a Viewable. Renderable \"" +
inputElement.getRenderableType().toString() + "\" does not."
);
}
// If we got this far, it means the current element represents a viewable. Before we
// continue, we gotta shave off extraneous reductions if they are no longer necessary.
if (reductionDepth < reductions.size()) {
if (Config.loggingEnabled)
Log.d(TAG, toLogMsg("After", reductionDepth, "reductions, there are", (reductions
.size()
- reductionDepth), "old reductions to cull."));
for (int i = reductions.size() - 1; i >= reductionDepth; i--) {
unmountRenderable(reductions.get(i).renderable);
reductions.remove(i);
}
// Destroy the existing output viewable and its corresponding element.
destroyOutputs();
}
if (Config.loggingEnabled) Log.d(TAG, toLogMsg("Now updating the output viewable."));
// Pass along the most reduced element to the output viewable.
updateOutputViewable(currElement);
}
/**
* Gets the view generated by the output viewable of this composite.
*
* @return the view generated by the output viewable of this composite.
*/
View getView() {
if (outputViewable == null) {
return null;
}
return outputViewable.getView();
}
/**
* Gets the renderable that is part of the reduction at the specified depth.
*
* @param reductionDepth the level in the hierarchy at which the reduction resumes.
* @return the renderable that is part of the reduction at the specified depth
*/
Renderable getRenderableAtDepth(final int reductionDepth) {
if (reductionDepth >= reductions.size()) {
throw new IllegalArgumentException(
"Reduction depth " + reductionDepth + " is out of bounds.");
}
return reductions.get(reductionDepth).renderable;
}
/**
* Returns true if the input element may be passed to
* {@link ElementComposite#reduce(Element, int)}. Returns false if a new composite must be
* created.
*
* @param inputElement the input element used for comparison.
* @return true if the input element may be passed to
* {@link ElementComposite#reduce(Element, int)}.
*/
boolean inputElementIsCompatible(final Element inputElement) {
return inputElement == null ||
reductions.size() < 1 ||
inputElement.identifier() == reductions.get(0).element.identifier();
}
/**
* Destroy's the state of this composite and that of all of its children.
*/
void destroy() {
// Destroy is depth-first, so destroy all the childComposites first.
if (childComposites != null) {
if (Config.loggingEnabled)
Log.d(TAG, toLogMsg("Destroying this composite's child composites."));
for (final ElementComposite child : childComposites) {
child.destroy();
}
// Clear away the destroyed childComposites.
childComposites = null;
}
// After that, destroy all the outputs.
destroyOutputs();
// Get rid of every renderable in the reductions pipeline.
if (reductions.size() > 0) {
if (Config.loggingEnabled)
Log.d(TAG, toLogMsg("Destroying this composite's reductions"));
for (int i = reductions.size() - 1; i >= 0; i--) {
unmountRenderable(reductions.get(i).renderable);
}
// Get rid of all the reductions in one fell swoop.
reductions.clear();
}
// Dis-associate from parent composite for garbage collection.
parent = null;
indexInParent = -1;
}
/**
* Destroys the output viewable field.
*/
private void destroyOutputViewable() {
if (outputViewable != null) {
if (Config.loggingEnabled)
Log.d(TAG, toLogMsg("Destroying this composite's output viewable"));
// Declare that this viewable is going away for good.
outputViewable.onDestroyView();
// If the viewable has view childComposites, get rid of the childComposites.
if (outputViewable.getView() instanceof ViewGroup) {
((ViewGroup) outputViewable.getView()).removeAllViews();
}
// Suggest de-allocation to JVM.
outputViewable = null;
}
}
/**
* Destroys the output state of this composite.
*/
private void destroyOutputs() {
destroyOutputViewable();
if (Config.loggingEnabled)
Log.d(TAG, toLogMsg("Destroying this composite's output element."));
outputElement = null;
}
/**
* Updates the output state of this composite according to nextEl.
*
* @param nextEl the element that represents that changes that must be made within
* {@link #outputViewable}.
*/
private void updateOutputViewable(final Element nextEl) {
// Figure out whether or not the existing viewable can be tweaked, or if it needs to be
// replaced completely.
if (outputElement == null || outputElement.identifier() != nextEl.identifier()) {
if (Config.loggingEnabled)
Log.d(TAG, toLogMsg("Creating an output viewable for element:\n\n" + nextEl +
"\n\nsince the identifier of the output element has changed"));
// The output viewable and its childComposites simply need to be replaced.
final ElementComposite parent = this.parent;
final int indexInParent = this.indexInParent;
final boolean alreadyMounted = outputViewable != null;
// Destroy the output viewable (if it even exists) because the type of the Viewable
// must change.
if (alreadyMounted) {
destroyOutputViewable();
}
// Then, create a brand new output viewable.
final Viewable nextOutputViewable = createViewable(nextEl);
final Element[] nextOutputElementChildren = nextEl.getChildren();
final ElementComposite[] nextElementCompositeChildren =
new ElementComposite[nextOutputElementChildren.length];
// Stuff the appropriate values into the arrays.
for (int i = 0; i < nextOutputElementChildren.length; i++) {
final Element childElement = nextOutputElementChildren[i];
final ElementComposite childComposite = new ElementComposite(context, i, this,
childElement);
nextElementCompositeChildren[i] = childComposite;
nextOutputViewable.insertChild(childComposite.getView(), i);
}
// Replace the view in the parent if necessary.
if (indexInParent != -1
&& parent != null
&& parent.outputViewable != null
&& parent.outputViewable.getView() instanceof ViewGroup) {
final ViewGroup parentViewGroup = ((ViewGroup) parent.outputViewable.getView());
parentViewGroup.removeViewAt(indexInParent);
parentViewGroup.addView(nextOutputViewable.getView(), indexInParent);
}
// Bind all the new state to the correct fields.
this.parent = parent;
this.childComposites = nextElementCompositeChildren;
this.outputElement = nextEl;
this.indexInParent = indexInParent;
this.outputViewable = nextOutputViewable;
// If the renderables and viewable associated with this composite haven't been mounted
// yet, mount them.
if (!alreadyMounted) {
// Mount all the reductions if they exist.
if (reductions != null) {
for (int i = reductions.size() - 1; i >= 0; i--) {
reductions.get(i).renderable.onMount();
}
}
}
} else {
// The output viewable and its childComposites simply need to be updated.
if (Config.loggingEnabled)
Log.d(TAG, toLogMsg("Updating the existing output viewable according to element",
nextEl));
// Update the attributes of the output viewable.
if (!ObjectUtil.equals(outputElement.getAttributes(), nextEl.getAttributes())) {
if (Config.loggingEnabled)
Log.d(TAG, toLogMsg("Updating the existing output viewable's attributes."));
outputViewable.setAttributes(nextEl.getAttributes());
}
if (nextEl.isStyleable()) {
if (Config.loggingEnabled)
Log.d(TAG, toLogMsg("Updating the existing output viewable's style attributes" +
"."));
((Styleable) outputViewable).setStyleAttributes(nextEl.getStyleAttributes());
}
// Compare old children to new children at an item-by-item level if the children have
// changed in any way.
if (!Arrays.equals(outputElement.getChildren(), nextEl.getChildren())) {
if (Config.loggingEnabled)
Log.d(TAG, toLogMsg("Updating the existing output viewable's children."));
final Element[] childElements = outputElement.getChildren();
final Element[] nextChildElements = nextEl.getChildren();
final HashMap<Element, Integer> deletedChildElements = new HashMap<>();
final SparseArray<ElementIndexTuple> childElementTuples = new SparseArray<>();
// Record the positions of all the current children for comparison.
for (int i = 0; i < childElements.length; i++) {
final Element childElement = childElements[i];
final int childIdentifier = childElement.identifier();
childElementTuples.put(childIdentifier, new ElementIndexTuple(i, childElement));
deletedChildElements.put(childElement, i);
}
// Create a correctly resized array for the next version of the child composites
// array.
final ElementComposite[] nextCompositeChildren = new
ElementComposite[nextChildElements.length];
// Identify inserted and moved elements to affect the correct change to the array of
// composites.
for (int i = 0; i < nextChildElements.length; i++) {
final Element nextChildElement = nextChildElements[i];
final ElementIndexTuple childElementTuple = childElementTuples.get(
nextChildElement.identifier()
);
if (childElementTuple == null) {
// This element was inserted.
final ElementComposite nextCompositeChild = new ElementComposite(context, i,
this, nextChildElement);
nextCompositeChildren[i] = nextCompositeChild;
outputViewable.insertChild(nextCompositeChild.outputViewable.getView(), i);
} else {
// Localize state.
final int oldIndex = childElementTuple.index;
final ElementComposite childComposite = childComposites[oldIndex];
// Update the child composite.
childComposite.reduce(nextChildElement, 0);
// Place the composite in the array.
nextCompositeChildren[i] = childComposite;
// If the index changed, then there was a move.
if (oldIndex != i) {
outputViewable.moveChild(oldIndex, i);
childComposite.indexInParent = i;
}
// This child clearly wasn't deleted.
deletedChildElements.remove(childElementTuple.element);
}
}
// Identify deleted composites.
for (final Integer deletedChildElementIndex : deletedChildElements.values()) {
final ElementComposite removedElementComposite =
childComposites[deletedChildElementIndex];
removedElementComposite.destroy();
}
// Suggest de-allocation to all childComposites references that were not preserved.
for (int i = 0; i < childComposites.length; i++) {
childComposites[i] = null;
}
// At this point nextCompositeChildren is correct, so it can replace this
// .childComposites.
childComposites = nextCompositeChildren;
}
}
}
/**
* Creates a new renderable according to the type specified by element.
*
* @param element the reference element for the Renderable.
* @param reductionDepth the reduction depth at which the new Renderable sits in this composite.
* @return a new Renderable instance matching element.
*/
@SuppressWarnings("all")
private Renderable createRenderable(
final Element element,
final int reductionDepth
) {
if (Config.loggingEnabled)
Log.d(TAG, toLogMsg("Creating a new instance of Renderable", element
.getRenderableType()));
try {
final Renderable renderable = element.getRenderableType().newInstance();
renderable.setComposite(this);
renderable.setCompositeReductionDepth(reductionDepth);
renderable.setAttributes(element.getAttributes());
renderable.setChildren(element.getChildren());
return renderable;
} catch (IllegalAccessException e) {
throw new RenderableCreationException("Could not create a new instance of Renderable " +
"\"" + element.getRenderableType().getName() + "\"", e);
} catch (InstantiationException e) {
throw new RenderableCreationException("Could not create a new instance of Renderable " +
"\"" + element.getRenderableType().getName() + "\"", e);
}
}
/**
* Creates a new Viewable according to the properties of element.
*
* @param element the reference element for the Viewable.
* @return a new Viewable instance matching element.
*/
@SuppressWarnings("all")
private Viewable createViewable(final Element element) {
if (Config.loggingEnabled)
Log.d(TAG, toLogMsg("Creating a new instance of Viewable", element.getViewableType()));
try {
final Viewable viewable = element.getViewableType().newInstance();
viewable.onCreateView(context);
viewable.setAttributes(element.getAttributes());
if (element.isStyleable()) {
((Styleable) viewable).setStyleAttributes(element.getStyleAttributes());
}
return viewable;
} catch (IllegalAccessException e) {
throw new ViewableCreationException("Could not create a new instance of Viewable \""
+ element.getViewableType().getName() + "\"", e);
} catch (InstantiationException e) {
throw new ViewableCreationException("Could not create a new instance of Viewable \""
+ element.getViewableType().getName() + "\"", e);
}
}
/**
* Helper method used to write the log message in logging routines.
*
* @param msgParts the parts of the message to be stitched together.
* @return the combined log message.
*/
private String toLogMsg(final Object... msgParts) {
final StringBuilder builder = new StringBuilder();
builder.append('[');
builder.append(Integer.toHexString(this.hashCode()));
builder.append(']');
for (final Object msgPart : msgParts) {
builder.append(' ');
builder.append(msgPart);
}
if (builder.charAt(builder.length() - 1) != '.') {
builder.append('.');
}
return builder.toString();
}
/**
* Unmounts renderable.
*
* @param renderable the renderable to be unmounted.
*/
private void unmountRenderable(final Renderable renderable) {
if (Config.loggingEnabled) Log.d(TAG, toLogMsg("Unmounting Renderable", renderable));
// Unmount the renderable.
renderable.onUnmount();
// Get rid of the references for garbage collection.
renderable.setComposite(null);
renderable.setChildren(null);
}
/**
* Uses renderable to render an output element.
*
* @param renderable the Renderable that corresponds to el.
* @param el the reference element.
* @param nextEl the next element.
* @return the output element if there was cause for an update; null otherwise.
*/
private static Element renderNextElement(
final Renderable renderable,
final Element el,
final Element nextEl
) {
// Check whether the next element is compatible with the current element.
if (el.identifier() != nextEl.identifier()) {
// TODO(skeswa): it should be fine to have a new type of element. Look into this.
throw new IllegalArgumentException("The provided element is not a valid reduction " +
"target.");
}
final boolean shouldUpdate = attributesCauseUpdate(renderable, el, nextEl) ||
childrenCauseUpdate(el, nextEl);
// Update the state of the renderable now that the comparisons have been finished.
// The renderable must catch up to the *next* element's state before rendering;
// using the old element here would discard the incoming attributes and children.
renderable.setAttributes(nextEl.getAttributes());
renderable.setChildren(nextEl.getChildren());
if (shouldUpdate) {
// Perform a render now that the renderable's state is all caught up.
return renderable.render();
}
// The null means that there was no change.
return null;
}
/**
* Returns true if the attributes of nextEl, when compared to the attributes of el, are
* different.
*
* @param renderable the renderable that corresponds to el.
* @param el the reference element.
* @param nextEl the next element.
* @return true if the children of nextEl, when compared to the children of el, are different.
*/
private static boolean attributesCauseUpdate(
final Renderable renderable,
final Element el,
final Element nextEl
) {
return !ObjectUtil.equals(el.getAttributes(), nextEl.getAttributes()) && renderable
.shouldUpdate(nextEl
.getAttributes());
}
/**
* Returns true if the children of nextEl, when compared to the children of el, are not the
* same.
*
* @param el the reference element.
* @param nextEl the next element.
* @return true if the children of nextEl, when compared to the children of el, are not the
* same.
*/
private static boolean childrenCauseUpdate(final Element el, final Element nextEl) {
return !Arrays.equals(el.getChildren(), nextEl.getChildren());
}
/**
* An Element-Index pair used in child element array comparison.
*/
private static class ElementIndexTuple {
private final int index;
private final Element element;
public ElementIndexTuple(final int index, final Element element) {
this.index = index;
this.element = element;
}
}
/**
* Represents an Element-Renderable pair (the Renderable corresponds to the Element).
*/
private static class ElementReduction {
private final Element element;
private final Renderable renderable;
private ElementReduction(final Element element, final Renderable renderable) {
this.element = element;
this.renderable = renderable;
}
}
}
|
package epfl.project.threadpoolcomparison;
import java.io.*;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;
/**
* CharCount using fork and join
* @author Nicolas
*
*/
public class CharCountForkJoin {
public static void main(String[] args) {
//used to compute the execution time
double time = System.currentTimeMillis();
// each thread has its own data collector and puts it in this buffer
BufferCharCount buffer = new BufferCharCount();
SpliterCharCountForkJoin spliter;
if (args.length == 2) {
spliter = new SpliterCharCountForkJoin(
new File(args[1]));
} else {
spliter = new SpliterCharCountForkJoin(
new File("word_100MB.txt"));
}
WorkerCharCountForkJoin worker = new WorkerCharCountForkJoin(
spliter.nextChunk(), spliter, buffer);
ForkJoinPool pool = new ForkJoinPool(Runtime.getRuntime()
.availableProcessors());
pool.invoke(worker);
// merge data collector of all thread
HashMap<Character, Integer> resultHashMap = merge(buffer.get());
try {
writeResult("result.txt", resultHashMap);
} catch (IOException e) {
e.printStackTrace();
}
System.out.println("Finish : " + (System.currentTimeMillis() - time));
}
/**
* write the result in "ForkJoin Result/"fileName""
* @param fileName
* @param collector
* @throws IOException
*/
public static void writeResult(String fileName,
HashMap<Character, Integer> collector) throws IOException {
File f = new File("ForkJoin Result");
f.mkdir();
BufferedWriter bw = new BufferedWriter(new FileWriter(
"ForkJoin Result/" + fileName));
for (char character : collector.keySet()) {
bw.write("( " + character + " , " + collector.get(character)
+ " )\n");
}
bw.close();
}
/**
* Merges the collectors of all threads
*
* @param listCollector
* @return
*/
public static HashMap<Character, Integer> merge(
ArrayList<HashMap<Character, Integer>> listCollector) {
HashMap<Character, Integer> mergeResult = new HashMap<Character, Integer>();
for (HashMap<Character, Integer> collector : listCollector) {
for (char character : collector.keySet()) {
Integer counter = mergeResult.get(character);
if (counter == null) {
mergeResult.put(character, collector.get(character));
} else {
mergeResult.put(character,
counter + collector.get(character));
}
}
}
return mergeResult;
}
}
/**
* Class that counts the number of occurrences of each character using fork/join (for example: a : 5, b : 6, etc.)
* @author Nicolas
*
*/
@SuppressWarnings("serial")
class WorkerCharCountForkJoin extends RecursiveAction {
private char[] text;
private SpliterCharCountForkJoin spliter;
private BufferCharCount buffer;
public WorkerCharCountForkJoin(char[] text,
SpliterCharCountForkJoin spliter, BufferCharCount buffer) {
this.text = text == null ? null : text.clone(); // guard against an empty input file
this.spliter = spliter;
this.buffer = buffer;
}
protected void compute() {
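// Pull the next chunk from the spliter. While input remains, fork two
// subtasks (one for the new chunk, one re-wrapping this task's text);
// once the input is exhausted, count this task's characters and publish
// the result to the shared buffer.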
char[] childText = spliter.nextChunk();
// no more text to read
if (childText == null) {
HashMap<Character, Integer> collector = new HashMap<Character, Integer>();
if (text != null && text.length > 0) {
int size = text.length;
for (int i = 0; i < size; i++) {
char character = text[i];
Integer counter = collector.get(character);
if (counter == null) {
counter = 0;
}
collector.put(character, counter + 1);
}
buffer.add(collector);
}
return;
}
invokeAll(new WorkerCharCountForkJoin(childText, spliter, buffer),
new WorkerCharCountForkJoin(text, spliter, buffer));
}
}
/**
* Split the text. The nextChunk method gives the part of the text that the
* current Thread must compute
*
* @author Nicolas
*
*/
class SpliterCharCountForkJoin {
private boolean closed = false;
BufferedReader bufferR;
char[] charRead;
public SpliterCharCountForkJoin(File file) {
try {
bufferR = new BufferedReader(new FileReader(file));
charRead = new char[250000];
} catch (FileNotFoundException e) {
e.printStackTrace();
}
}
/**
* return the next part to compute
* @return
*/
public synchronized char[] nextChunk() {
if (closed) {
return null;
}
int read;
try {
read = bufferR.read(charRead);
if (read == -1) {
closed = true;
bufferR.close();
return null;
}
return Arrays.copyOfRange(charRead, 0, read);
} catch (IOException e) {
e.printStackTrace();
}
return null;
}
}
/**
* Contains the data collectors of all threads
*
* @author Nicolas
*
*/
class BufferCharCount {
private ArrayList<HashMap<Character, Integer>> buffer;
public BufferCharCount() {
buffer = new ArrayList<HashMap<Character, Integer>>();
}
public synchronized void add(HashMap<Character, Integer> hashmap) {
buffer.add(hashmap);
}
public ArrayList<HashMap<Character, Integer>> get() {
return buffer;
}
} |
Ramon Berenguer IV, Count of Provence
Rule
Ramon Berenguer and his wife were known for their support of troubadours, always having some around the court. He was known for his generosity, though his income did not always keep up. He wrote laws prohibiting nobles from performing menial work, such as farming or heavy labor.
Ramon Berenguer had many border disputes with his neighbors, the counts of Toulouse. In 1226, Ramon began to reassert his right to rule in Marseille. The citizens there initially sought the help of Ramon's father-in-law Thomas, Count of Savoy in his role as imperial vicar. However, they later sought the help of Raymond VII, Count of Toulouse.
In 1228, Ramon Berenguer supported his father-in-law in a double-sided conflict against Turin and Guigues VI of Viennois. This small war was one of many rounds intended to more firmly establish control over trade from Italy into France, and Provence included several key routes.
While the Albigensian Crusade worked in his favor against Toulouse, Ramon Berenguer was concerned that its resolution in the Treaty of Paris left him in a precarious position. Raymond turned his troops from fighting France to attempting to claim lands from Provence. When Blanche of Castile sent her knight to both Toulouse and Provence in 1233, Ramon Berenguer entertained him lavishly, and the knight left well impressed by both the count and his eldest daughter, Margaret. Soon after, Blanche negotiated the marriage between Margaret and her son, Louis, with a dowry of ten thousand silver marks. Ramon Berenguer had to get contributions from allies for a portion, and had to pledge several of his castles to cover the rest. Ramon Berenguer and Beatrice travelled with their daughter to Lyon in 1234 to sign the marriage treaty, and then Margaret was escorted to her wedding in Sens by her uncles William and Thomas of Savoy.
Shortly after, William began negotiating on Ramon Berenguer's behalf with Henry III of England to marry his daughter Eleanor. Henry sent his own knight to Provence early in 1235, and again Ramon Berenguer and his family entertained him lavishly. Henry wrote to William on June 22 that he was very interested, and sent a delegation to negotiate the marriage in October. Henry was seeking a dowry of up to twenty thousand silver marks to help offset the dowry he had just paid for his sister, Isabella. However, he had drafted seven different versions of the marriage contract, with different amounts for the dowry, the lowest being zero. Ramon Berenguer shrewdly negotiated for that option, offering as consolation a promise to leave her ten thousand marks in his last will.
In 1238, Ramon Berenguer joined his brother-in-law Amadeus IV at the court of Emperor Frederick II in Turin. Frederick was gathering forces to assert more control in Italy. Raymond VII of Toulouse was also summoned, and all expected to work together in the war.
In January 1244, Pope Innocent IV decreed that no one but the pope could excommunicate Ramon Berenguer. In 1245, Ramon Berenguer sent representatives to the First Council of Lyon, to discuss crusades and the excommunication of Frederick.
Ramon Berenguer died in August 1245 in Aix-en-Provence, leaving the county to his youngest daughter, Beatrice.
Death and legacy
Ramon Berenguer IV died in Aix-en-Provence. At least two planhs (Occitan funeral laments) of uncertain authorship (one possibly by Aimeric de Peguilhan and one falsely attributed to Rigaut de Berbezilh) were written in his honour.
Giovanni Villani in his Nuova Cronica said:
Count Raymond was a lord of gentle lineage, and kin to them of the house of Aragon, and to the family of the count of Toulouse. By inheritance Provence, this side of the Rhone, was his; a wise and courteous lord was he, and of noble state and virtuous, and in his time did honourable deeds, and to his court came all gentle persons of Provence and of France and of Catalonia, by reason of his courtesy and noble estate, and he made many Provençal coblas and canzoni of great worth.
import { Component, OnInit } from '@angular/core';
import { LocalDataSource } from 'ng2-smart-table';
import { Doctor } from 'app/_model/doctor';
import { DoctorService } from 'app/_services/doctor.service';
import { DatePipe } from '@angular/common';
import { Router } from '@angular/router';
@Component({
selector: 'app-doctores',
templateUrl: './doctores.component.html',
styleUrls: ['./doctores.component.css']
})
export class DoctoresComponent implements OnInit {
settings = {
mode: 'external',
actions: {
columnTitle: 'Acciones',
add: true,
edit: true,
delete: true,
position: 'left'
},
noDataMessage: 'No hay registros',
add: {
addButtonContent: '<i class="fa fa-plus-square fa-2x" title="Agregar Nuevo"></i>',
},
edit: {
editButtonContent: '<i class="fa fa-pencil fa-2x" title="Editar"></i>',
},
delete: {
deleteButtonContent: '<i class="fa fa-trash fa-2x" title="Eliminar"></i>',
confirmDelete: true
},
pager: {
display: true,
perPage: 30
},
columns: {
id: {
title: '#',
type: 'number',
},
numDni: {
title: 'Nº DNI',
type: 'string',
},
nomCompleto: {
title: 'Nombre Completo',
type: 'string',
},
desEmail: {
title: 'Email',
type: 'string'
},
fecIngreso:{
title: 'Fecha de Ingreso',
type: 'string',
},
numColegiatura:{
title: 'Colegiatura',
type: 'string',
},
estudios:{
title: 'Estudios',
type: 'string',
},
especialidad:{
title: 'Especialidad',
type: 'string',
},
}
};
source: LocalDataSource = new LocalDataSource();
data: Doctor[];
constructor(private doctorService: DoctorService, private datePipe: DatePipe, private router: Router) { }
ngOnInit(): void {
this.showInSmartTable();
}
  // Loads the doctor list from the backend and flattens each record into
  // the row shape expected by ng2-smart-table.
  showInSmartTable(): void {
    this.doctorService.listar().subscribe(data => {
      if (data) {
        const regToShow = data.map(doctor => ({
          id: doctor.id,
          numDni: doctor.numDni,
          nomCompleto: doctor.nomCompleto,
          desEmail: doctor.desEmail,
          // Normalize the date for display in the grid.
          fecIngreso: this.datePipe.transform(doctor.fecIngreso, 'yyyy-MM-dd'),
          numColegiatura: doctor.numColegiatura,
          estudios: doctor.estudios,
          // Flatten the nested especialidad object to its display name.
          especialidad: doctor.especialidad.nomEspecialidad
        }));
        this.source.load(regToShow);
      }
    });
  }
  // Navigates to the edit form for the selected doctor.
  onEdit(event): void {
    this.router.navigate(['/doctores/edit'], { queryParams: { page: event.data.id } });
  }
  // Deletes the selected doctor after confirmation.
  // NOTE: assumes DoctorService exposes an eliminar(id) method returning an
  // Observable (hypothetical name, mirroring listar()); adjust to the real API.
  onDelete(event): void {
    if (window.confirm('¿Está seguro que desea eliminar el doctor seleccionado?')) {
      this.doctorService.eliminar(event.data.id).subscribe(() => {
        this.source.remove(event.data);
      });
    }
  }
  // Navigates to the form for registering a new doctor.
  onAdd(event): void {
    this.router.navigate(['/doctores/new'], { queryParams: { page: '0' } });
  }
  // Handlers below were carried over from the component this screen was
  // adapted from; they are kept only because the template may still bind to them.
  onChangeAgenda(newObj: any) {
    console.log('onChange');
  }
  onUserRowSelect(evt: any) {
    console.log('onUserRowSelect', evt);
  }
  onRowSelect(evt: any) {
    console.log('onRowSelect', evt);
  }
}
|
Toward operational methods for the assessment of intrinsic groundwater vulnerability: A review ABSTRACT Assessing the vulnerability of groundwater to adverse effects of human impacts is one of the most important problems in applied hydrogeology. At the same time, many of the widespread vulnerability assessment methods do not provide physically meaningful and operational indicators of vulnerability. Therefore, this review summarizes (i) different methods used for intrinsic vulnerability assessment and (ii) methods for different groundwater systems. It particularly focuses on (iii) timescale methods of water flow as an appropriate tool and (iv) provides a discussion on the challenges in applying these methods. The use of such physically meaningful indices based on timescales is indispensable for groundwater resources management. |
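The timescale indicators the review advocates can be illustrated with a first-order calculation. Below is a minimal sketch, assuming purely advective transport in a homogeneous aquifer governed by Darcy's law; the function name and parameter values are illustrative, not drawn from the review.

# Minimal sketch: advective travel time as a physically based vulnerability
# indicator (shorter travel time = higher intrinsic vulnerability).
# Assumes a homogeneous aquifer and purely advective transport.

def travel_time_years(path_length_m: float,
                      hydraulic_conductivity_m_per_day: float,
                      hydraulic_gradient: float,
                      effective_porosity: float) -> float:
    """Darcy-based travel time: t = L * n_e / (K * i), returned in years."""
    seepage_velocity = (hydraulic_conductivity_m_per_day * hydraulic_gradient
                        / effective_porosity)  # metres per day
    return path_length_m / seepage_velocity / 365.0

# Illustrative values: 100 m flow path, K = 5 m/day, i = 0.01, n_e = 0.2
print(round(travel_time_years(100.0, 5.0, 0.01, 0.2), 2))  # ~1.1 years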
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import javax.annotation.Generated;
import com.fasterxml.jackson.annotation.JsonAnyGetter;
import com.fasterxml.jackson.annotation.JsonAnySetter;
import com.fasterxml.jackson.annotation.JsonInclude;
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.annotation.JsonPropertyOrder;

/**
* <p>Original spec-file type: TypeInfo</p>
* <pre>
* Information about a type
* type_string type_def - resolved type definition id.
* string description - the description of the type from spec file.
* string spec_def - reconstruction of type definition from spec file.
* jsonschema json_schema - JSON schema of this type.
* string parsing_structure - json document describing parsing structure of type
* in spec file including involved sub-types.
* list<spec_version> module_vers - versions of spec-files containing
* given type version.
* list<spec_version> released_module_vers - versions of released spec-files
* containing given type version.
* list<type_string> type_vers - all versions of type with given type name.
* list<type_string> released_type_vers - all released versions of type with
* given type name.
* list<func_string> using_func_defs - list of functions (with versions)
* referring to this type version.
* list<type_string> using_type_defs - list of types (with versions)
* referring to this type version.
* list<type_string> used_type_defs - list of types (with versions)
* referred from this type version.
* </pre>
*
*/
@JsonInclude(JsonInclude.Include.NON_NULL)
@Generated("com.googlecode.jsonschema2pojo")
@JsonPropertyOrder({
"type_def",
"description",
"spec_def",
"json_schema",
"parsing_structure",
"module_vers",
"released_module_vers",
"type_vers",
"released_type_vers",
"using_func_defs",
"using_type_defs",
"used_type_defs"
})
public class TypeInfo {
@JsonProperty("type_def")
private java.lang.String typeDef;
@JsonProperty("description")
private java.lang.String description;
@JsonProperty("spec_def")
private java.lang.String specDef;
@JsonProperty("json_schema")
private java.lang.String jsonSchema;
@JsonProperty("parsing_structure")
private java.lang.String parsingStructure;
@JsonProperty("module_vers")
private List<Long> moduleVers;
@JsonProperty("released_module_vers")
private List<Long> releasedModuleVers;
@JsonProperty("type_vers")
private List<String> typeVers;
@JsonProperty("released_type_vers")
private List<String> releasedTypeVers;
@JsonProperty("using_func_defs")
private List<String> usingFuncDefs;
@JsonProperty("using_type_defs")
private List<String> usingTypeDefs;
@JsonProperty("used_type_defs")
private List<String> usedTypeDefs;
private Map<java.lang.String, Object> additionalProperties = new HashMap<java.lang.String, Object>();
@JsonProperty("type_def")
public java.lang.String getTypeDef() {
return typeDef;
}
@JsonProperty("type_def")
public void setTypeDef(java.lang.String typeDef) {
this.typeDef = typeDef;
}
public TypeInfo withTypeDef(java.lang.String typeDef) {
this.typeDef = typeDef;
return this;
}
@JsonProperty("description")
public java.lang.String getDescription() {
return description;
}
@JsonProperty("description")
public void setDescription(java.lang.String description) {
this.description = description;
}
public TypeInfo withDescription(java.lang.String description) {
this.description = description;
return this;
}
@JsonProperty("spec_def")
public java.lang.String getSpecDef() {
return specDef;
}
@JsonProperty("spec_def")
public void setSpecDef(java.lang.String specDef) {
this.specDef = specDef;
}
public TypeInfo withSpecDef(java.lang.String specDef) {
this.specDef = specDef;
return this;
}
@JsonProperty("json_schema")
public java.lang.String getJsonSchema() {
return jsonSchema;
}
@JsonProperty("json_schema")
public void setJsonSchema(java.lang.String jsonSchema) {
this.jsonSchema = jsonSchema;
}
public TypeInfo withJsonSchema(java.lang.String jsonSchema) {
this.jsonSchema = jsonSchema;
return this;
}
@JsonProperty("parsing_structure")
public java.lang.String getParsingStructure() {
return parsingStructure;
}
@JsonProperty("parsing_structure")
public void setParsingStructure(java.lang.String parsingStructure) {
this.parsingStructure = parsingStructure;
}
public TypeInfo withParsingStructure(java.lang.String parsingStructure) {
this.parsingStructure = parsingStructure;
return this;
}
@JsonProperty("module_vers")
public List<Long> getModuleVers() {
return moduleVers;
}
@JsonProperty("module_vers")
public void setModuleVers(List<Long> moduleVers) {
this.moduleVers = moduleVers;
}
public TypeInfo withModuleVers(List<Long> moduleVers) {
this.moduleVers = moduleVers;
return this;
}
@JsonProperty("released_module_vers")
public List<Long> getReleasedModuleVers() {
return releasedModuleVers;
}
@JsonProperty("released_module_vers")
public void setReleasedModuleVers(List<Long> releasedModuleVers) {
this.releasedModuleVers = releasedModuleVers;
}
public TypeInfo withReleasedModuleVers(List<Long> releasedModuleVers) {
this.releasedModuleVers = releasedModuleVers;
return this;
}
@JsonProperty("type_vers")
public List<String> getTypeVers() {
return typeVers;
}
@JsonProperty("type_vers")
public void setTypeVers(List<String> typeVers) {
this.typeVers = typeVers;
}
public TypeInfo withTypeVers(List<String> typeVers) {
this.typeVers = typeVers;
return this;
}
@JsonProperty("released_type_vers")
public List<String> getReleasedTypeVers() {
return releasedTypeVers;
}
@JsonProperty("released_type_vers")
public void setReleasedTypeVers(List<String> releasedTypeVers) {
this.releasedTypeVers = releasedTypeVers;
}
public TypeInfo withReleasedTypeVers(List<String> releasedTypeVers) {
this.releasedTypeVers = releasedTypeVers;
return this;
}
@JsonProperty("using_func_defs")
public List<String> getUsingFuncDefs() {
return usingFuncDefs;
}
@JsonProperty("using_func_defs")
public void setUsingFuncDefs(List<String> usingFuncDefs) {
this.usingFuncDefs = usingFuncDefs;
}
public TypeInfo withUsingFuncDefs(List<String> usingFuncDefs) {
this.usingFuncDefs = usingFuncDefs;
return this;
}
@JsonProperty("using_type_defs")
public List<String> getUsingTypeDefs() {
return usingTypeDefs;
}
@JsonProperty("using_type_defs")
public void setUsingTypeDefs(List<String> usingTypeDefs) {
this.usingTypeDefs = usingTypeDefs;
}
public TypeInfo withUsingTypeDefs(List<String> usingTypeDefs) {
this.usingTypeDefs = usingTypeDefs;
return this;
}
@JsonProperty("used_type_defs")
public List<String> getUsedTypeDefs() {
return usedTypeDefs;
}
@JsonProperty("used_type_defs")
public void setUsedTypeDefs(List<String> usedTypeDefs) {
this.usedTypeDefs = usedTypeDefs;
}
public TypeInfo withUsedTypeDefs(List<String> usedTypeDefs) {
this.usedTypeDefs = usedTypeDefs;
return this;
}
@JsonAnyGetter
public Map<java.lang.String, Object> getAdditionalProperties() {
return this.additionalProperties;
}
@JsonAnySetter
public void setAdditionalProperties(java.lang.String name, Object value) {
this.additionalProperties.put(name, value);
}
@Override
public java.lang.String toString() {
return ((((((((((((((((((((((((((("TypeInfo"+" [typeDef=")+ typeDef)+", description=")+ description)+", specDef=")+ specDef)+", jsonSchema=")+ jsonSchema)+", parsingStructure=")+ parsingStructure)+", moduleVers=")+ moduleVers)+", releasedModuleVers=")+ releasedModuleVers)+", typeVers=")+ typeVers)+", releasedTypeVers=")+ releasedTypeVers)+", usingFuncDefs=")+ usingFuncDefs)+", usingTypeDefs=")+ usingTypeDefs)+", usedTypeDefs=")+ usedTypeDefs)+", additionalProperties=")+ additionalProperties)+"]");
}
} |
Maximum energy harvesting in solar photovoltaic system using fuzzy logic technique ABSTRACT In this paper, a fuzzy logic controller (FLC)-based maximum power point tracking (MPPT) scheme is proposed for a three-port bidirectional isolated dc-dc converter used for power management of photovoltaic-battery sources. This converter has the advantage of using the least number of switches, which reduces switching losses. An inductor-capacitor-inductor resonant circuit achieves soft switching. The converter is modelled for instantaneous power management of the photovoltaic (PV) panel, battery and load. The proposed FLC-based MPPT is capable of extracting maximum power from the PV panel when solar irradiance is available. The charge and discharge controller of the battery operates when there is surplus energy or a power shortage with respect to the load, respectively. The performance of the fuzzy logic controller with several membership functions is analysed to improve the MPPT. Simulation results show that the performance of FLC-based MPPT is better than that of the conventional perturb and observe (P&O) MPPT. |
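For context, the P&O baseline that the abstract benchmarks against is a simple hill-climbing loop. The sketch below is illustrative only: it is not the paper's FLC design, and the sensor functions and step size are assumed placeholders.

# Minimal perturb-and-observe (P&O) MPPT sketch: perturb the operating
# voltage, observe the change in PV power, and keep moving in whichever
# direction increases power. read_pv_voltage/read_pv_current and the fixed
# step size are illustrative placeholders, not the paper's design.

STEP = 0.5  # volts per perturbation (illustrative)

def p_and_o_step(v_ref, v_prev, p_prev, read_pv_voltage, read_pv_current):
    v = read_pv_voltage()
    p = v * read_pv_current()
    if p > p_prev:
        # Power increased: keep perturbing in the same direction.
        v_ref += STEP if v > v_prev else -STEP
    else:
        # Power dropped: reverse the perturbation direction.
        v_ref += -STEP if v > v_prev else STEP
    return v_ref, v, p  # new reference plus state for the next iteration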
/*
* #%L
* BroadleafCommerce Common Libraries
* %%
* Copyright (C) 2009 - 2014 Broadleaf Commerce
* %%
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
* #L%
*/
package org.broadleafcommerce.common.extensibility.context.merge.handlers;
import org.w3c.dom.Node;
import java.util.LinkedHashSet;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
/**
* <p>
* Designed to specifically handle the merge of schemaLocation references. This takes any of the Spring XSD references
* for particular Spring versions and replaces them with XSDs without a version reference. This allows the final XSD
* reference to refer to the latest version of Spring, and reduces the need for modules to be updated with every Spring
* update.
*
* <p>
 * This also prevents multiple XSD references that cause parse exceptions when the final XML file is presented to Spring.
*
* @author <NAME> (phillipuniverse)
*/
public class SchemaLocationNodeValueMerge extends SpaceDelimitedNodeValueMerge {
@Override
protected Set<String> getMergedNodeValues(Node node1, Node node2) {
String node1Values = getSanitizedValue(node1.getNodeValue());
String node2Values = getSanitizedValue(node2.getNodeValue());
Set<String> finalItems = new LinkedHashSet<String>();
for (String node1Value : node1Values.split(getRegEx())) {
finalItems.add(node1Value.trim());
}
for (String node2Value : node2Values.split(getRegEx())) {
// Only add in this new attribute value if we haven't seen it yet
if (!finalItems.contains(node2Value.trim())) {
finalItems.add(node2Value.trim());
}
}
return finalItems;
}
/**
* <p>
* Sanitizes the given attribute value by stripping out the version number for the Spring XSDs.
*
* <p>
* For example, given http://www.springframework.org/schema/beans/<b>spring-beans-4.1.xsd</b> this will return
* http://www.springframework.org/schema/beans/<b>spring-beans.xsd</b>
*
* @param attributeValue the value of an xsi:schemaLocation attribute
* @return the given string with all of the Spring XSD version numbers stripped out.
*/
    protected String getSanitizedValue(String attributeValue) {
        // Matches versioned Spring XSD references such as spring-beans-4.1.xsd.
        // Note: the pattern assumes single-digit major.minor version numbers.
        Pattern springVersionPattern = Pattern.compile("(spring-\\w*-[0-9]\\.[0-9]\\.xsd)");
        Matcher versionMatcher = springVersionPattern.matcher(attributeValue);
        while (versionMatcher.find()) {
            String match = versionMatcher.group();
            String replacement = match.replaceAll("-[0-9]\\.[0-9]", "");
            // Use the literal (non-regex) replace: the matched text contains dots,
            // which would otherwise be interpreted as regex wildcards.
            attributeValue = attributeValue.replace(match, replacement);
        }
        return attributeValue;
    }
}
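As a quick sanity check of the stripping behaviour described in the javadoc, the same transformation can be reproduced outside the JVM. A minimal Python sketch, assuming the class's single-digit major.minor version convention:

import re

# Mirror of SchemaLocationNodeValueMerge.getSanitizedValue: strip single-digit
# major.minor version numbers from Spring XSD references.
def sanitize(schema_location: str) -> str:
    return re.sub(r"(spring-\w*)-[0-9]\.[0-9](\.xsd)", r"\1\2", schema_location)

assert (sanitize("http://www.springframework.org/schema/beans/spring-beans-4.1.xsd")
        == "http://www.springframework.org/schema/beans/spring-beans.xsd")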
|
    def find(self, value: object) -> int:
        """Return the index of the first occurrence of value in self.list, or -1 if absent."""
        for index, item in enumerate(self.list):
            if item == value:
                return index
        return -1
import parla.comps.sketchers.aware as aware
import parla.comps.qb as qb
import parla.comps.rangefinders as rangefinders
import parla.comps.determiter.saddle as saddle
import parla.comps.preconditioning as preconditioning
|
About a year ago, I was in a part of the American South that had been a major battle zone for the civil rights movement in the 1960s. It was hard to imagine the courage it must have taken for African Americans to take those first steps toward equality. They were willing literally to give their lives for basic human rights - to vote freely, to use public facilities without discrimination, to be free of persecution because of the color of their skin.
Over the years, I've found their example very inspiring. Yet for a long time I didn't feel a personal connection with their struggle, perhaps because I believed that my ancestors had come to the States long after the slavery period. It was easy to stand off a distance and feel I wasn't in any way part of that terrible history. Then I discovered that a few of my ancestors had actually been here much earlier than I'd thought. For some reason, it now seemed harder to shrug my shoulders and say, "Sorry about that."
Instead, I needed to take a mental journey - to look a lot deeper into the nature of existence, to see all of it in more spiritual terms. I soon realized that in a very real sense we are all inextricably intertwined with each other. My actions have an impact on others, and their reaction to what I do has an impact, and so forth. Unkindness can lead others to be unkind. The opposite is also true - good deeds open the way for more good. From this standpoint, Jesus' direction to love one's neighbor as oneself makes a lot of sense.
If we actively love our neighbor, we're embraced in a circle of love that surrounds us and spreads outward to the rest of humanity. Active love includes respect, an expectation of encountering intelligence, honesty, and goodness. This love reflects back on us because it affirms the inherent goodness of our being.
But there's more to this love than a bunch of people being kind to each other. In a letter St. Paul wrote to the Corinthians, he likens all of us to parts of one body - what he calls the body of Christ. He says: "For by one Spirit are we all baptized into one body, whether we be Jews or Gentiles, whether we be bond or free; .... For the body is not one member, but many" (I Cor. 12:13, 14).
In other words, when we express love toward one another - and really mean it - we're uniting with each other in a spiritual way. We're being what we were created to be: the good and pure ideas of God, divine Mind. And from this goodness and purity, wonderful things can flow.
Instead of relegating each other to stereotypes, we can see each other and ourselves as God's spiritual creation. This frees us from believing that one's race or background necessarily inclines one toward certain behaviors or attitudes. When we are all "us" as part of God's family, there isn't a "them" against whom we must protect ourselves or fight.
Such commitment to "us-ness" wipes out barriers that would separate people and families. It also opens the way to the deeper racial healing that Martin Luther King Jr. gave his life to achieve. This healing is inevitable no matter how long it takes, because all of us are on a spiritual journey together - a journey of learning our unity with God and with each other as God's creation. Every step on this journey, however small, frees us from believing in an "other" who must be feared.
Mary Baker Eddy, the Monitor's founder, wrote, "Citizens of the world, accept the 'glorious liberty of the children of God,' and be free! This is your divine right" ("Science and Health with Key to the Scriptures," pg. 227).
This freedom is yours and mine and the world's. It's natural and right for this freedom to be universal. As you and I love more spiritually and embrace everyone we meet in this view of life, we will help to end the legacy of slavery, of hatred, distrust, fear, and anger. Then, truly, all people around the world will be free. |
The world’s largest democracy will head to the polls in April and May to elect a new parliament amid economic troubles, renewed tensions with Pakistan, and concerns about religious violence. Prime Minister Narendra Modi is asking Indian citizens to give his Bharatiya Janata Party (BJP) another five years to deliver the economic transformation it promised in 2014, and to keep the country safe after a recent terrorist attack in Jammu and Kashmir State. Disparate opposition parties, concerned about what rising Hindu nationalism means for India’s minorities, have banded together to try to unseat Modi.
India’s lower house, the Lok Sabha, or House of the People, has 543 seats covering the country’s twenty-nine states and seven union territories. This year’s Lok Sabha elections will occur at the natural end of the current parliament’s five-year term. The scale of the democratic exercise exceeds all elections to date: out of India’s overall population of 1.3 billion people, approximately 900 million will be eligible to vote (meaning that they were at least eighteen years old as of January 1). Per the Election Commission of India, the size of the electorate is more than 84 million people greater than it was in 2014, with more than 15 million people who are eighteen or nineteen years old. This election has already surfaced concerns about social media and disinformation; even with greater attention to the problem, and partnerships between media organizations and fact-checking organizations, India’s sheer scale presents social media platforms with difficulties for controlling the spread of manipulated news. The platforms have collectively designed and committed to a voluntary code of ethics ahead of the election.
The election commission, an independent statutory body, has divided polling into seven phases, stretching from April 11 to May 19, across more than one million polling stations. Ballots from all seven phases will be counted together beginning on May 23.
Indian ballots accommodate a lot of diversity. The country houses seven national parties and fifty-six state parties. Another 2,349 other parties to date are registered but unrecognized, according to the election commission’s categorization. Electronic voting machines accommodate illiterate voters—approximately 24 percent of the population, so more than three hundred million people—with the prominent use of symbols next to candidates’ names. Citizens can vote for the BJP’s lotus, the Congress party’s hand, or a bicycle, ceiling fan, or banana, among a wide range of symbols, even if they are unable to read the ballot.
The economy. In 2014, the BJP campaigned against the Congress-led coalition government by focusing on economic growth, jobs, and good governance. Five years on, the Modi government has not accomplished its ambitions on the economic front. The benefits of some important reforms were offset by a currency demonetization and the overly complex introduction of a national goods and services tax. Growth took a hit, and the economy has only just begun to recover.
While the Modi government has made progress on quality-of-life initiatives such as cleanliness, sanitation, highways, and financial inclusion, the economy is not generating sufficient jobs to absorb India’s large and growing working-age population. Leaked accounts of unreleased government data suggest that unemployment could be at a forty-five-year high of around 6.1 percent; the private Centre for Monitoring Indian Economy put unemployment at more than 7 percent in recent months. Add to that a long-running economic crisis for rural India, which has suffered from poor rainfall in recent years, made worse by a decline in global agricultural prices.
National security. The Modi government’s response to the February 14 suicide attack in Kashmir, which was claimed by a Pakistan-based, UN-designated terrorist group, put national security on the campaign agenda in a way not seen in at least two decades. The government’s decision to target a terrorist facility across the border in Pakistan with air strikes has become a point of campaign pride for Modi, an invocation of strength signaling a tough approach to Pakistan. How highly this will rank for voters is unknown, but by emphasizing its national security bona fides, the government hopes to distract from economic problems. The BJP has launched a national campaign, Main Bhi Chowkidar!, that portrays the prime minister and party members as watchmen or guards of the nation. To prevent misuse of the apolitical military in campaigns, the election commission issued a prohibition in early March on the use of images of defense personnel.
The future of Indian values. Opposition parties across India have expressed concern throughout the Modi government’s tenure about rising intolerance for religious minorities and India’s future as a secular country. Many pundits worry that the Hindu nationalist BJP will undermine the secularism enshrined in India’s constitution. Recent increases in religious violence, particularly in Rajasthan and Uttar Pradesh [PDF], have underscored this concern. (Human Rights Watch recently documented cases in which vigilante mobs, in the name of cow protection, have targeted Muslims and low-caste workers involved in the cattle trade or in handling cow carcasses.) Leaders from more than twenty opposition parties across the country have loosely banded together in a grand alliance, the Mahagathbandhan, to fight what they call “communal forces.” In an unusual show of unity, the disparate grouping convened a rally in Kolkata in January, expressing concerns about communal violence, the economy, rural distress, and the reliability of electronic voting machines. Whether these diverse parties can maintain a united front at the ballot box, however, is another question—as is whether they will persuade voters in politically strategic states. |
package com.yg.webshow.crawl.webdoc.template;
import java.util.Date;
import java.util.List;
public class WebDocBbsList {
	private Date timestamp;
	private List<DbbsTitleLine> titleLines;
public List<DbbsTitleLine> getTitleLines() {
return titleLines;
}
public void setTitleLines(List<DbbsTitleLine> titleLines) {
this.titleLines = titleLines;
}
public Date getTimestamp() {
return timestamp;
}
public void setTimestamp(Date timestamp) {
this.timestamp = timestamp;
}
}
|
import { stringify as stringifyQs } from "qs";
import urlJoin from "url-join";
import {
ActiveTab,
Dialog,
Filters,
Pagination,
SingleAction,
TabActionDialog,
Sort
} from "../types";
export const serviceSection = "/services/";
export const serviceListPath = serviceSection;
export enum ServiceListUrlFiltersEnum {
query = "query"
}
export type ServiceListUrlFilters = Filters<ServiceListUrlFiltersEnum>;
export type ServiceListUrlDialog = "remove" | TabActionDialog;
export enum ServiceListUrlSortField {
name = "name",
active = "active"
}
export type ServiceListUrlSort = Sort<ServiceListUrlSortField>;
export type ServiceListUrlQueryParams = ActiveTab &
Dialog<ServiceListUrlDialog> &
Pagination &
ServiceListUrlFilters &
ServiceListUrlSort &
SingleAction;
export const serviceListUrl = (params?: ServiceListUrlQueryParams) =>
serviceListPath + "?" + stringifyQs(params);
export const servicePath = (id: string) => urlJoin(serviceSection, id);
export type ServiceUrlDialog = "create-token" | "remove" | "remove-token";
export type ServiceUrlQueryParams = Dialog<ServiceUrlDialog> & SingleAction;
export const serviceUrl = (id: string, params?: ServiceUrlQueryParams) =>
servicePath(encodeURIComponent(id)) + "?" + stringifyQs(params);
export const serviceAddPath = urlJoin(serviceSection, "add");
export const serviceAddUrl = serviceAddPath;
|
Effective Participation of Mentally Vulnerable Defendants in the Magistrates Courts in England and Wales: The Front Line from a Legal Perspective

Mentally vulnerable defendants who struggle to effectively participate in their trial in the magistrates courts are not receiving the same protection as those who stand trial in the Crown Court. The Law Commission for England and Wales recognised this lacuna and suggested that the law relating to effective participation should be equally applicable in the magistrates courts. On closer examination of the law, the legal aid system and perspectives of legal professionals on the front line, it is clear that improvements in policy are of greater importance than legal reform and are more likely to meet the needs of these vulnerable individuals. The aim of this paper will be to demonstrate that reform of the law will be insufficient to adequately protect mentally vulnerable defendants in the magistrates courts and that changes in policy are needed in place of, or alongside, legal reforms.

Introduction

Mentally vulnerable defendants 1 who struggle to effectively participate in their trial in the magistrates' courts are not receiving the same protection as those who stand trial in the Crown Court. 2 The Law Commission for England and Wales recognised this lacuna and suggested that the law in relation to unfitness to plead should be equally applicable in the magistrates' courts, 3 yet this is not currently the case. On closer examination of the law, the legal aid system and perspectives of legal professionals, it has become clear that, while reform of the law is desirable, improvements in policy are of greater importance and are more likely to better meet the needs of mentally vulnerable defendants. Changing the law to achieve parity between the magistrates' and Crown Courts might be unachievable due to existing policy constraints. Legal aid funding, the 'assembly line' 4 approach to justice in the magistrates' court and excessive pressures on all those involved in the criminal justice system point to the conclusion that reform of the law is not capable of delivering the appropriate level of protection. These factors combined with the low level of seriousness of summary offences highlight the need for alternative approaches. 5 This paper will suggest that these could take the form of a more consistent, better funded approach to diversion at an earlier point in time, whether through liaison and diversion services, 6 the Crown Prosecution Service (CPS) or the police, as well as better communication between various stakeholders and training programmes that cross agency boundaries. While the treatment of mentally vulnerable defendants within the Crown Court and youth courts is far from perfect, the focus here will remain on trials of adults within the magistrates' courts where it will be submitted that the greatest failings reside.
Methodology

This paper will provide a doctrinal examination of the law relating to unfitness to plead in England and Wales, and of reform proposals made by the Law Commission. Consideration is given to the procedures currently available in the magistrates' courts which might be of assistance to mentally vulnerable defendants. The legal aid system in the lower courts is also scrutinised. The qualitative aspect of this research involved semi-structured interviews of participants. All interviews were recorded to allow for transcription but personal details were kept confidential. Recognising that time is a valuable resource in the legal profession, interviews took place at participants' places of work or over the phone. Subsequent analysis of the interviews was anonymised save for the description of the participant's occupation. Purposive sampling was adopted in the form of snowball sampling, directed specifically towards professionals who have or had regular contact with mentally vulnerable defendants. The semi-structured interviews aimed to facilitate discussion based around how mentally vulnerable defendants are generally identified, what tends to happen where a defendant is clearly unable to take part in his/her trial and what, in the participant's opinion, should happen where a defendant is clearly unable to participate in his/her trial. The decision was made to use semi-structured interviews in order to allow participants the flexibility to depart from the questions, thus providing rich, detailed answers. 7 As is often the case with this type of interview, the most significant themes were those that deviated from the standard questions. Of the seven participants interviewed, four were barristers 8 based in London and three were solicitors based in Teesside, in the north east of England. The sample was not large, but the emerging themes were corroborated by the range of other sources adopted, leading to 'fine-grained data' 9 and, for the purposes of this paper, data saturation. Combined with a doctrinal analysis of the law, legal aid provision and Law Commission reform proposals, a more complete picture is achieved by the merging of methodologies. All participants had experience of representing mentally vulnerable defendants in the magistrates' courts, although it is recognised that a small sample cannot be representative of the entire legal profession. As with all qualitative research, by its very nature, there are barriers to achieving objective data and, with the richness of data collected, care must be taken to carry out 'true analysis'. 10 In terms of initial coding, it was important not to take statements made by participants out of context. Care was taken not to generalise inappropriately. 11 It was also important to recognise that participants might have been reluctant to disclose bad practice. Keeping these barriers to objectivity in mind, after initial coding, analysis of the interviews was thematic, using the inductive approach. 12 Prior to this, and before considering the need for any policy changes in this area, a brief examination of the law and reform proposals in relation to effective participation is required.

The Law

Forming part of the right to a fair trial, the phrase 'effective participation' refers to the right of any defendant to take an active part in his/her trial. This term reflects the approach of the European Court of Human Rights in relation to art 6 of the European Convention on Human Rights.
Defined in SC v United Kingdom 13 :

"effective participation" in this context presupposes that the accused has a broad understanding of the nature of the trial process and of what is at stake for him or her, including the significance of any penalty which may be imposed. It means that he or she, if necessary with the assistance of, for example, an interpreter, lawyer, social worker or friend, should be able to understand the general thrust of what is said in court. The defendant should be able to follow what is said by the prosecution witnesses and, if represented, to explain to his own lawyers his version of events, point out any statements with which he disagrees and make them aware of any facts which should be put forward in his defence.

Article 6 applies to all court hearings, including criminal trials held in the magistrates' court. 14 Currently, when a defendant is unable to effectively participate in the trial process due to either mental illness, a lack of capacity or learning difficulties, there are limited options available. In the Crown Court, the defendant would need to satisfy the test for unfitness to plead. This test requires that the defendant is unable to do one of the following: understand the charges; decide whether to plead guilty or not; exercise his right to challenge jurors [...].

(Footnote 8: The barristers are labelled B1-B4, while the solicitors are S1-S3. There was a range of experience among participants, the most experienced solicitor [...])

If found unfit to stand trial in the Crown Court, there is a trial of facts hearing 16 to establish whether the defendant did the act. If found to have committed the act, although this would not be recognised as a criminal conviction, a range of disposal options are available to the court. 17 In the magistrates' court, the unfitness to plead test does not apply. This means that the options available are 'extremely' 18 limited, offering neither support to the defendant, nor adequate protection of the public. The court can either stay proceedings due to abuse of process, 19 on the grounds that any trial would be unfair, or make an order that the defendant did the act or omission but only if all of the following criteria are met 20 : evidence must be given by two registered medical practitioners 21 ; the offender must be suffering from a mental disorder; the mental disorder must be of a nature or degree which makes it appropriate for him to be detained in a hospital for medical treatment and appropriate medical treatment is available for him; and the court is of the opinion that this is the most suitable method of disposing of the case. The House of Lords has recommended 22 that the power of magistrates to find an abuse of process 'should be strictly confined to matters directly affecting the fairness of the trial of the particular accused'. 23 Similarly, in DPP v P, although this power was recognised, the court commented 24 that 'it will be in only exceptional cases that it should be exercised'. The Divisional Court in R (Ebrahim) v Feltham Magistrates' Court, 25 restated that this 'inherent jurisdiction... is one which ought only to be employed in exceptional circumstances'. 26 Such a recommendation is unfortunate as it is easy to envisage circumstances under which a defendant failing to engage with or understand the trial process might benefit from proceedings being stayed. One such, anecdotal, example was provided by B1, who described a defendant being presented at court, charged with obstructing a railway.
Having suffered from severe depression all of his life, and grieving for a close family member, the defendant was apprehended trying to jump off a railway bridge. At court, B1 was concerned to see that this defendant was 'an absolute broken man... there was just this passive acceptance'. The defendant had already pleaded guilty before seeing B1. If proceedings had been stayed at an earlier stage due to an abuse of process, a better outcome might have been achieved for the defendant, and valuable court time could have been saved. 15 The legal professionals interviewed for this paper lamented the absence of a proper procedure in the magistrates' court. B2 stated: 'I think the biggest problem is there is no cohesive framework... like the Crown Court. I know it's not perfect, but at least it gives a structure'. B3 expressed surprise that there was no process to test for unfitness to plead in the magistrates' court, adding that there needs to be 'some kind of structure'. B2 commented that 'it's just a mess, there's no cohesion, no framework, it's not set down anywhere, it just depends on your diversion team, on your judge... ', while B4 described the 'dearth' of a proper procedure. S3 compared the different approaches in the Crown and magistrates' courts, commenting 'when somebody raises concerns that somebody isn't fit to plead in the Crown, there's process,... psychiatrists' reports, psychologist reports... If these reports say this guy's unfit to plead you have your... finding of facts... Whereas in the magistrates it's almost a policy consideration'. The scheme in the magistrates' courts is far from being coherent. There is no process to deal with the defendant who lacks capacity to effectively participate in his/her trial due to learning difficulties, rather than a mental illness. There is no power in the magistrates' courts to impose a restriction order under s 41 of the Mental Health Act 2007. 27 The implication of this is that protection of the public is more problematic in the magistrates' court than the Crown Court. Since 2005 it has been possible to make a mental health treatment requirement as a form of community order, 28 however the use of these orders has been infrequent. 29 On a practical level, in order to ensure a fair trial, legal professionals are expected to keep the following in mind 30 : (i) the defendant's level of cognitive functioning; (ii) the use of concise and simple language; (iii) having regular breaks; (iv) taking additional time to explain court proceedings; (v) being proactive in ensuring the defendant has access to support; (vi) explaining and ensuring the defendant understands the ingredients of the charge; (vii) explaining the possible outcomes and sentences; (viii) ensuring that cross-examination is carefully controlled so that questions are short and clear and frustration is minimised. While 'limited intellectual capacity' 31 will not necessarily mean that a trial will be unfair, the criteria above demonstrate the scale of the task undertaken by the legal professional on a regular basis. When combined with the themes discussed below, the situation for mentally vulnerable defendants who face a trial in the magistrates' courts is far from ideal.

Findings

Themes emerging from the interviews with legal professionals relate to: how mentally vulnerable defendants are identified, the systemic failings surrounding mental vulnerability, funding issues and pressure on mentally vulnerable defendants to plead guilty. Each of these will be considered in turn.
Identification of Mental Vulnerability

One issue raised by participants was that they were sometimes the first to identify that a defendant might have mental health issues. The implication here is that the police might not be identifying the defendant's vulnerability early enough. This could be due to a number of factors: lack of time, lack of training and also the fact that 'sometimes people who have difficulties... develop strategies to look like they don't'. 32 While all of the legal professionals admitted to having no formal training in identifying mental health issues, the general impression was that often they were 'the first ones to notice' 33 or that there was no set manner or predictability in which the mental vulnerability of the defendant was communicated. The mental health of the defendant was sometimes identified by the custody staff 34 or on the prosecution papers. 35 The general consensus was that, despite having no specialist training in mental health issues, legal professionals identify mental health or learning capacity issues on gut instinct 36 or common sense. 37 This instinct seems to be honed through experience 38 or because legal professionals are 'acute observers of human behaviour and pick up signs'. 39 S3 commented that '... a lot of it is common sense. If I'm meeting a new client... and he's behaving in a strange way or if talking to him doesn't feel like talking to him should', then it is necessary to be aware of possible mental health issues. S3 summarised as follows: 'I guess dealing with people who have problems, legal problems and problems, they go hand in hand'. Commendable as this ability to identify mentally vulnerable defendants seems, it is very much dependent on the willingness of the legal professional to take the extra time necessary to assist such a defendant. When combined with the limitations on legal aid funding, discussed below, it is likely that not all mentally vulnerable defendants will receive the necessary support.

Systemic Failings

All of the legal professionals interviewed identified significant failings from their viewpoint within the criminal justice system. This is unsurprising given that the Law Commission has been critical of the absence of a test for effective participation in the magistrates' courts. 40 What is interesting is that the failings are pervasive, from the initial police interview, through to conviction, and they are largely attributable to a lack of time or the need for expediency, rather than the need for a change in the law. In relation to the police interview, S1 commented: 'I've had cases at the police station where someone is clearly unfit to be interviewed... but they've been assessed [...]'. They are subsequently sectioned, 'then my argument is 'you're saying he's fit for interview, but after an interview which he shouldn't have had you then section him?'' S1 added 'clearly he wasn't... happens all the time'. B4 gave the example of a forensic medical examiner seeing a client 'for a couple of minutes' finding them 'aggressive or difficult as people with mental health problems often are', and pronouncing them fit. The problem here is that such a brief assessment will not always be sufficient to identify mental vulnerability. The CPS will sometimes proceed with a prosecution where the view of the legal professional is that it is not in the public interest to do so; this can result in 'a nonsense trial'. 41
[On the failure of such cases] to be discontinued at an early enough point in time, B1 was of the opinion that ' a lot of the cases I've done, particularly the more seriously mentally ill people... there's absolutely no public interest '. If the CPS has insufficient time to screen these types of cases, the time pressure and workload is then transferred to the legal professionals and the magistrates' courts. It is clear that further research into this area is required, as the CPS could play a key role in diverting mentally vulnerable defendants away from the criminal justice system. Once at court, participants felt that some district judges and magistrates were under too much pressure to properly investigate mental vulnerability. B2 commented 'there was a district judge... I couldn't quite believe he said... 'this client is unwell. He's not fit to stand trial, he's not fit to plead... he might need to see the diversion team... but, you know, as far as I'm concerned... most people who come into the magistrates' court will have something wrong with them... we'll just crack through the list, see how far we can get... '.' B2 added 'there's this bizarre obsession, we've started proceedings now get through it'. B3 described an instance where 'it felt very wrong that [he] was on trial. Even the judge recognised it... but didn't stop the trial. I felt... was almost wilfully blind to this chap's mental health issues which were so plain to see from his behaviour in court'. In relation to mental health institutions, B4 commented on the difficulty of finding a place where a mentally vulnerable defendant would be safe: 'I was ringing mental health institutions myself to try and get her sectioned... [a]t the end of the day I was left with the choice of having the hearing go ahead and her being remanded and taken to a prison, a girl of good character who's got mental health issues, or throwing her out on the street where she wasn't fit'. S2 recounted a similar frustration when a defendant prone to attempting suicide was prosecuted for causing a public nuisance. S2 was of the opinion that such a person 'is not a criminal. They need mental health treatment but, if the local mental health facility is saying "oh no they don't," what do you do?' Without the availability of sufficient beds, again the burden of treating a mentally vulnerable individual is transferred to the criminal justice system. These anecdotes provide a picture of mentally vulnerable defendants being failed by the criminal justice system. B1 described an instance where a defendant 'thought he was Jesus, got through the police station without it being flagged'. The court 'sent him back down to cells because they thought he was being... obnoxious'. When he subsequently refused to leave his cell, he was remanded in custody. 'This was a man who had never been in trouble before, who had been picked up... because he had been shouting in the street, who ended up spending 2 weeks in prison having never been in trouble before... and being seriously unwell'. The failings here were compounded by the fact that the defendant's wife had reported him missing and only found out that he was in custody by sheer luck. S1 added 'the ultimate failing for 2 weeks in custody had in my view lay with that bench... who didn't pick up that he was extremely seriously ill'.
Similar misgivings were echoed by S1 who stated 'you're sometimes put in a position where there's a voice in your head saying "well actually there's something else should be done here" but you end up going through the process and people are dealt with because they are minor offences'. S1 added that in the magistrates' courts defendants are 'churned in and churned out and we have people who... get convicted when they shouldn't'. S2 was also of the view that some mentally vulnerable defendants are convicted of criminal offences in the magistrates' courts when they should not be, but added that, given limited resources, 'there is no perfect solution'. S3 agreed that this happens: '[I]t will undoubtedly happen. I think it's probably happened. That does have a ring of familiarity to it'. These failings are compounded when legal professionals find themselves in a Catch 22 scenario. For example, B1 described 'a complete Catch 22... where I felt [she] really needed assistance from the mental health team... who wouldn't assess her because her behaviour was too unpredictable'. B4 was of the view that 'the hospitals don't take so the police keep them and they send them to the court and there's no one really at the court who can deal with them properly'. B2 had experienced situations where an inability to take instructions from a defendant sometimes resulted in the court's response being 'we'll take it as not guilty, we'll have the trial... then we'll decide... '. B4, in relation to one defendant, described that 'it felt very wrong that he was on trial. Even the judge recognised it... but didn't stop the trial'. A new legal framework for effective participation in the magistrates' courts might provide clarity in dealing with mentally vulnerable defendants, but is unlikely to provide a practical solution to these failings, given current legal aid funding constraints.

Funding

Legal aid funding for representation in the magistrates' courts is perhaps the greatest obstacle to the introduction of a test for effective participation in the lower courts. Legal aid for representation in the magistrates' court, as well as advice before and after charge, are all termed 'crime lower'. 42 Standard fee claims account for the majority of money spent for representation in the police stations and magistrates' courts. 43 Non-standard fee claims in the magistrates' courts are made by submitting a work assessment form detailing cost information and reasons for being a non-standard fee. 44 One option available on this form is that the core costs exceed the higher limit; there is also a section marked 'Other'. 45 Clearly, it is possible to claim a non-standard fee due to a defendant's mental vulnerability. How frequently this occurs as opposed to how frequently it should occur is open to debate since no breakdown of information is available as to the amount of non-standard fee claims. 46 Within the guidance provided by the Legal Aid Agency for crime lower, mental health is mentioned only three times. 47 The implication here is that there is a disincentive to the legal professional to request a non-standard fee on the grounds of a mentally vulnerable defendant. In short, standard fees are paid for the majority of magistrates' court work 48 and are insufficient for the type of expert opinion which might be needed by the mentally vulnerable defendant. This view is supported by the interviews with legal professionals, in particular, the solicitors.
S1 commented that 'the bigger firms won't always go to the extent of in every case testing and doing it properly because they've got so many clients coming through; it's the only way they can exercise. They don't do it; they should do it'. S1 added that 'smaller firms are more ready to get reports, to challenge it, to prevent it going any further. When you're a duty at court you don't have that luxury, you've got to make quick decisions, and normally people just want to be dealt with and get out that day if they're in custody or they don't want to come back in case they don't get funding'. The risk here, according to S2, is that 'the worst thing you can do is get yourself known for [...] other solicitors will send their clients [...]'. While legal aid cuts provide a disincentive for the legal professional to do the best possible job for a defendant in the magistrates' court, a complete absence of representation is also clearly a barrier to identifying a mentally vulnerable defendant. As S1 commented 'if you can't get legal aid, you're snookered because, even if you get the duty [solicitor], they might represent you on the first occasion. They can't then represent you on the second time, so you can't get a report... In an ideal world you should always get representation but just because you've got mental health problems it isn't necessarily enough if it's a minor offence'. S2 went further and commented that 'for a summary only offence... [i]t's not a proper use of resources... to involve a forensic psychiatrist..., possibly two, if you're going into the issue of a person's fitness... and compare that with the possible penalties - you resolve that usually by means of an appropriate plea in mitigation. It's a question of balancing one against the other. In theory, yes, you could say that the person's acquired a conviction but unfortunately we live in a world of finite resources and no more so than in the criminal law area and also mental health. So it's a question of managing it'. Furthermore, in order to make a profit, in the view of S2, some solicitors will persuade a mentally vulnerable client to plead guilty: 'that is going on all the time. This government, and the last, tried to turn us into legal factories... Once you create that mentality, then people will do the least work for the most profit'. This mentality is supported by Zander's prediction that legal representatives 'would be mainly bidding on the basis of fixed fees. They would need a sufficient number of cases to make that viable on a "swings and roundabouts" basis of cases involving little work compensating for others involving more work'. 49 The Jeffrey Review 50 echoes this criticism in the quality of criminal advocacy, commenting that 'the significance of legal aid fee levels cannot be ignored' 51 in the reduction of quality of legal representation. The implication of the legal aid position in the magistrates' court is that the introduction of the same legal test for effective participation is unlikely to be achievable. As S3 summarised: 'if I'm getting psychiatric reports for a common assault... going to say "why do you need this?" Technically you could argue it but it's not something we've done'. S2 also questioned the practicality, in financial terms, of introducing the test into the magistrates' court: 'if you allowed us as a profession to get psychiatric reports paid for by the state, there would be a lot of them'.
When combined with the limits placed on legal aid funding, it is possible that the Law Commission has under-estimated the impact of introducing the same test for effective participation to the magistrates' courts. This key recommendation that any reform of the law relating to unfitness to plead should be equally applicable in the magistrates' courts was made due to the perception that the law relating to 'participation difficulties in the summary courts is in urgent need of reform'. 52 This perception is not disputed; however, the Law Commission does not consider that such an inclusion would produce an unmanageable caseload. The reason given for this is that a defendant's capacity should be assessed in relation to the complexity of proceedings and that his or her condition or impairment 'would have to be extremely severe before he or she would be unable to participate effectively in most summary proceedings'. 53 The Justices' Clerks Society and the CPS share this view, 54 but interviews conducted for this paper seem to throw some doubt on the practicality of this recommendation. In an Impact Assessment, 55 the Law Commission estimates 800 defendants per year are likely to be found to lack capacity in the summary courts. This estimate was based on:

- government figures for the number of magistrates' court proceedings (minus motoring offences) in a given year and the percentage of unfitness to plead findings in the Crown Court for that period (para 239);
- a lack of formal data on stayed proceedings (para 122);
- a five month collection of data in magistrates' and youth courts in the Greater London area - participating courts were asked to record every case in which issues relating to effective participation were raised or where s 37 of the MHA 1983 was considered (para 123; only 60 cases were identified);
- an estimate that there will be a larger number of unrepresented defendants in the magistrates' courts, and a high proportion of these will not be identified as unable to effectively participate in a trial (para 239);
- an estimate that summary proceedings are less complex and will therefore be more accessible to more defendants who might lack capacity in the Crown Court (para 239);
- an estimate that there is a higher rate of discontinuance from the Crown Prosecution [Service] in the magistrates' courts (para 239);
- an estimate that legal representatives might be less likely to pursue effective participation proceedings for minor offences, given the range of disposals available (para 239).

The Law Commission recognises the weaknesses in this data, 56 which is based, out of necessity, on guesswork. In response to the above estimate, the following should be considered:

- a five month collection of data which recorded when effective participation issues were raised could not have taken into account the times when effective participation should have been raised;
- while it might be correct that a large number of defendants in the magistrates' courts are either unrepresented or not identified as being unable to effectively participate in a trial, this should not be used as a reason for a lower estimate. The need for identification and representation of these defendants should be addressed alongside the introduction of a test for effective participation in the magistrates' court. A policy on unrepresented defendants is clearly needed, as the situation for unrepresented vulnerable defendants in the magistrates' court is concerning.
Epstein comments that there is little guidance for judges on how to proceed with an unrepresented defendant, describing such defendants as 'invisible'; 57
- the estimate that legal representatives might be less likely to pursue effective participation proceedings for minor offences, given the range of disposals available, describes a position that should be rectified, not used as an excuse to save costs;
- the estimate that summary proceedings are less complex and will therefore be more accessible is not borne out by comments in interviews conducted for this paper.

There is also a suggestion that mental health issues appear more frequently in the magistrates' courts. Perceptions of the participants in this study vary as to the frequency of mental vulnerability in magistrates' courts. B1 commented that 'when you take the full spectrum into account it's not rare at all'. B2 commented that mental vulnerability is 'not rare at all. Almost every client I've had, with the exception of a handful certainly in the magistrates' court'. S2 added that mentally vulnerable defendants are seen 'every day' and that 'it depends on where you place the level of vulnerability'. This contrasts with the views of B3 and S3 that mental vulnerability is 'quite rare actually, 5 or 6 in my 6 months... unusual' and 'not something we tend to deal with on a daily basis'. More significantly, it was observed that vulnerability often forms part of a defendant's background, 'but in terms of something that's relevant to whether [there is] going to be a trial then that's much rarer'. It seems from this that mental vulnerability may be a factor for many, or even most, defendants in the magistrates' courts; however, its impact on a defendant's effective participation is less common. The suggestion here is that all stakeholders involved in the criminal justice system should err on the side of caution. The exercise of caution is key, given that some of the legal professionals raised concerns about the ability of stakeholders to identify mental vulnerability. B4 commented that '[it] might be a little too much for magistrates to decide whether someone should be sectioned or whether they're not fit'. Furthermore, S3 cautioned that 'sometimes people who have difficulties... develop strategies to look like they don't', and S1 added 'there [have] been times where you just stand there thinking this is ridiculous, this person shouldn't be here'.

In addition to the issue of adequately funded legal representation, medical reports would be needed for defendants for whom issues of effective participation are raised. The Law Commission recognises the 'lengthy process' of obtaining two expert reports but considers that 'significant curtailment' of a defendant's rights should require robust expert evidence. 58 This should remain the case regardless of the court used. The Divisional Court in Blouet v Bath and Wansdyke Magistrates' Court 59 confirmed the current procedure in the magistrates' court to order a fact-finding exercise in lieu of a trial, stating that: 60

1. There should be up-to-date medical evidence;
2. The issue should then be tried in accordance with s 11 of the Powers of Criminal Courts (Sentencing) Act 2000, 61 allowing an adjournment for further medical reports;
3. Then the matter can proceed to s 37 MHA 1983.

The need for up-to-date medical evidence and the potential for an adjournment, where necessary, would exacerbate the funding problems described above by the legal professionals.
A better approach might be to use psychiatric nurses and other forensic mental health practitioners who are part of Liaison and Diversion teams, and who could 'contribute significantly to the identification of capacity issues in the magistrates' court'. 62 The Law Commission strongly supports the roll-out of the Liaison and Diversion scheme; 63 improved funding and a more robust use of this scheme might be a better alternative to implementing the same test for effective participation in both the Crown and magistrates' courts. Avoiding a two-tier justice system is an ideal supported by almost all participants. S1 was of the view that, in an ideal world, 'you would get the funding to test everything properly'... and 'we should have either a judge or a bench who were able to deal with mental health as opposed to juveniles'. S3 would support the introduction of the same test for unfitness to plead as occurs in the Crown Court, commenting, 'it's not perfect... but if you have the same system as the Crown in the magistrates', it would at least make sense'. B1 would prefer the first step to be 'deciding whether it was in the public interest to prosecute' and this view was endorsed by B2, who also commented that we 'need a proper framework'. Given the funding position, applying the same test for effective participation in both the Crown and magistrates' courts seems unachievable. B3, pragmatically, preferred the flexibility of the diversion process in the magistrates' courts, especially given the short time frames available. This view is endorsed reluctantly here, for purely pragmatic reasons. The position in relation to legal aid funding is unlikely to improve and is neatly summarised by S3: 'you don't win elections by doing that kind of thing. People are going to say, well why are you helping criminals?' If the funding position remains unchanged, then the practical solution is for the CPS or Liaison and Diversion teams to actively seek to divert the mentally vulnerable defendant away from the criminal justice system, in order to avoid subjecting him to the rigours of a trial or, even, the stress of entering a plea.

Pressure to Plead

The test for effective participation recommended by the Law Commission distinguishes between D's ability to plead and his ability to stand trial, 64 the rationale being that the former is more easily comprehended. 65 Accordingly, if implemented, a defendant might be unfit to stand trial, but might still be fit to enter a guilty plea. This recommendation was supported by the Court of Appeal in Marcantonio. 66 Views given by the legal professionals who were interviewed cast doubt on the appropriateness of introducing this option to the magistrates' courts. Only B3 acknowledged the advantages of an early disposal: 'that could probably save time... [you] can be fit to plead but not to stand trial'. B1 queried 'if they can't properly participate in a trial, how can they enter a plea?... I would worry people would slip through the net', and 'I could just see it opening up a situation whereby they say... you can plead, we don't have to deal with whether you're fit for trial and then the classic line that we hear all the time... "your client knows whether they've done it, it doesn't matter whether they're bipolar and it doesn't matter whether they're severely depressed. They know whether they've done it so they can enter a plea. Get on and do it"'. B2 commented: 'I worry... it's just the sort of thing I could imagine a magistrates' court doing, taking a guilty plea and...
not sentenc[ing] for ages'. B4 was concerned that this could be 'a slippery slope... you do that and people are forced to plead as they don't want to be remanded'. S1 commented that 'there's so much pressure on people to plead guilty because they get credit and that's getting worse... I don't think you can separate the two'. S2 added that 'to understand some of the charges, you need a law degree... My concern is... that some people lack the intellect to fully understand the charges', while S3 commented 'I think that would lead to problems. If you're saying this guy can't stand trial but he can plead, the next thing is the client says not guilty and then you've got a trial. For me... the two things are very closely linked... If you're going to find them fit to plead, find them fit to stand trial. If you can plead, then surely you need to understand the consequences of that plea?' These concerns are shared elsewhere; a defendant's mental vulnerability could undermine his autonomy in making a guilty plea. 67 Those with learning difficulties or mental disorders are more susceptible to the incentive to offer a guilty plea. 68 It seems common sense that a defendant with a learning difficulty, for example, might be more susceptible to making a false guilty plea. Equally, a defendant suffering from a mental disorder might enter a guilty plea in order to be allowed to go home. All of the above concerns seem to caution against allowing a two-part test for effective participation in the magistrates' courts, even if the recommendation makes good sense in the Crown Court.

Discussion of Findings and Recommendations

The difference in geographical location of the above participants is worthy of comment. While it was not the author's intention to differentiate between locations, there was a perception among many of the participants that the more significant problems are occurring in the London area. This could be due to a number of factors: people with mental vulnerabilities might gravitate towards larger cities, in which case greater difficulties could be perceived in these areas. S3 commented, 'I tend to find that obvious mental health issues tend to cluster towards cities... I wonder if people like that flock to cities or if cities make... that more prevalent' and that 'homelessness tends to go hand in hand with a lot of mental illness'. While it is not within the scope of this article to investigate this distinction further, other explanations for this perception could be that Liaison and Diversion teams are more effective, or better funded, in the north east of England. There might also be greater pressure on the courts and other agencies in a larger city. S2 was of the view that 'Teesside has a very tight sense of community and sense of connection', which can be a strength when it comes to personal connections within the relevant agencies.

In addition to earlier research undertaken in this area, 69 this current research unsurprisingly confirms the flaws within the current test for unfitness to plead, none being more evident than the absence of a procedure in the magistrates' and youth courts. From a theoretical standpoint, the moral conversation at the heart of the criminal justice process cannot occur where a defendant lacks the capacity to engage in such an exchange. 70 From a legal perspective, the failure to offer equal treatment to mentally vulnerable defendants in the magistrates' and youth courts suggests a discriminatory approach which cannot be condoned.
Furthermore, it is highly likely that some mentally vulnerable defendants are being deprived of the right to a fair trial, in contravention of art 6 of the European Convention on Human Rights. This failure is reflected in the Law Commission's proposals to apply the legal test for effective participation to the Crown, magistrates' and youth courts. What is more surprising, and significant, is the discovery that the implementation of parallel tests in the lower courts is unlikely to ameliorate the issues which have been identified. Identical procedures for effective participation in the magistrates' and Crown Courts are likely to be impractical, inefficient and unworkable. Legal aid funding is inadequate and resources are already woefully stretched. In particular, restrictions on legal aid funding in the magistrates' courts are likely to deter legal professionals from seeking psychiatric reports for some defendants. While it is recognised that summary offences are not serious and offer a wide range of disposal options, the outcome for the mentally vulnerable defendant is still a conviction. Regardless of the level of seriousness, criminal convictions carry a stigma, 71 as well as, potentially, other adverse consequences in terms of the defendant's employment prospects, family life and finances. A criminal conviction sends a message to the defendant that he or she is responsible for his or her act. 72 If a defendant is unable to understand or challenge that message, then a conviction seems inappropriate. 73 Unless policy changes are made, mentally vulnerable defendants will continue to be failed by the criminal justice system. The need for more widespread legal aid funding is echoed in a Justice Committee Report. 74

The concerns for the treatment of mentally vulnerable defendants in the magistrates' courts are likely to be exacerbated by the introduction of video link hearings 75 and by offering the ability to enter guilty pleas online. 76 This 'proposed upheaval in the way courts deliver justice could leave vulnerable people unable to get the legal advice – or decisions – they need'. 77 While this paper has suggested that reform of the law is unlikely to be the solution on its own, more research is required to determine how policy should develop. It appears that early diversion away from the criminal justice system will be key to improving the situation for mentally vulnerable defendants. This could be achieved by the CPS halting a prosecution where it is not in the public interest to proceed. A recent consultation process 78 might provide the opportunity to produce better guidelines for prosecutors when dealing with mentally vulnerable defendants. Consistent and better-funded provision of Liaison and Diversion services, whether in police stations or magistrates' courts, would also assist in early diversion. Recommendations made by the JUSTICE Report on Mental Health and Fair Trial 79 include that liaison and diversion practitioners should screen every suspect who comes into custody. 80 Without adequate time and funding, fulfilment of this task might be meaningless. As discussed above, a brief screening process is unlikely to lead to the successful identification of all mentally vulnerable defendants, especially in view of the fact that mental health issues can be hidden by the defendant, or periods of improved mental health may mean that a vulnerable defendant appears to have sufficient capacity at that particular point in time.
Further recommendations in the JUSTICE Report are that police training should be provided on how to respond to vulnerability, 81 and that dedicated district judges should ensure appropriate treatment of vulnerable defendants, as well as having the power to direct the CPS to review its decision to prosecute. 82 These suggestions will go some way towards protecting the mentally vulnerable defendant from an unfair trial, but it might also be advisable to introduce additional training for all stakeholders who work with mentally vulnerable defendants in the magistrates' courts. Areas for development and engagement with other stakeholders should address the need for a more consistent approach to diversion at an earlier point in time, for better training and guidelines that cross boundaries, 83 and for improvements in communication between the various stakeholders within the criminal justice system. The solution is far from straightforward and, without better funding of legal aid and Liaison and Diversion provision, as well as the availability of a range of mental health services, the burden of treating a mentally vulnerable individual is being borne by the criminal justice system.
The First Generation of Early Onset Scoliosis Care Reports of the prevalence and natural history of spinal deformity in younger pediatric patients became part of the orthopaedic literature in the middle of the twentieth century. Formal use of the term Early Onset Scoliosis to describe a wide range of spinal pathology based on age of onset did not gain popularity until much later. Early reviews of the natural history of these deformities detailed which patients were at risk of progression and which might benefit from intervention rather than simple observation. However, long-term follow-up of the application of adult spine deformity management principles in skeletally immature patients demonstrated a significant risk of both spinal and associated pulmonary complications over time. Reports of efforts to alter the natural history of these conditions through surgical treatment that attempted to control the deformity, while still allowing spinal growth, emerged in the late 1970s and 1980s.
# Basic aggregate operations on a Python list
a = [1, 2, 3, 4, 5, 6]
print(min(a))  # smallest element: 1
print(max(a))  # largest element: 6
print(sum(a))  # total of all elements: 21
print(len(a))  # number of elements: 6
|
package com.sendtomoon.eroica.sso;
import java.util.HashMap;
import java.util.Map;
/**
 * SSO cache data.
 */
public class CacheData {
public static final String KEY_UID = "$_UID";
public static final String KEY_IP = "$_IP";
public static final String KEY_ExpiresTime = "$_ExpiresTime";
public static final String KEY_LAST_APP_NAME = "$_lastAppName";
public static final String KEY_LAST_ACC_TIME = "$_lastAccTime";
public static final String KEY_LOGIN_TIME = "$_loginTime";
public static final String KEY_ROLE_TYPE = "$_roleType";
private Map<Object, Object> datas;
public CacheData(String uid) {
this.datas = new HashMap<Object, Object>(4);
this.setUid(uid);
}
CacheData(Map<Object, Object> datas) {
this.datas = datas;
}
public String getUid() {
return (String) datas.get(KEY_UID);
}
public void setUid(String uid) {
datas.put(KEY_UID, uid);
}
public void set(Object key, Object value) {
datas.put(key, value);
}
public Object get(Object key) {
return datas.get(key);
}
public void set(Map<Object, Object> map) {
if (map != null) {
datas.putAll(map);
}
}
@Override
public String toString() {
return this.datas.toString();
}
public String getIp() {
return (String) datas.get(KEY_IP);
}
public void setIp(String ip) {
datas.put(KEY_IP, ip);
}
public Map<Object, Object> peek() {
return datas;
}
public void setExpiresTime(int expiresTime) {
datas.put(KEY_ExpiresTime, expiresTime);
}
public long getLoginTime() {
Long r = (Long) datas.get(KEY_LOGIN_TIME);
if (r == null)
return 0;
return r;
}
public void setLoginTime(long loginTime) {
datas.put(KEY_LOGIN_TIME, loginTime);
}
public long getLastAccTime() {
Long r = (Long) datas.get(KEY_LAST_ACC_TIME);
if (r == null)
return 0;
return r;
}
public void setLastAccTime(long lastAccTime) {
datas.put(KEY_LAST_ACC_TIME, lastAccTime);
}
public String getLastAppName() {
return (String) datas.get(KEY_LAST_APP_NAME);
}
public void setLastAppName(String lastAppName) {
datas.put(KEY_LAST_APP_NAME, lastAppName);
}
public Integer getRoleType() {
return (Integer) datas.get(KEY_ROLE_TYPE);
}
public void setRoleType(Integer roleType) {
datas.put(KEY_ROLE_TYPE, roleType);
}
public int getExpiresTime() {
Integer r = (Integer) datas.get(KEY_ExpiresTime);
if (r == null)
return 0;
return r;
}
}
|
// Lock new locks with old. New locks take priority, and old
// locks not existing in the new list are deleted. New locks
// with missing License details are added from matching old
// locks. Return an alphabetized list by lock name
func (l DependencyLocker) LockNewWithOld(
new, old map[string]DependencyLock,
) []DependencyLock {
var final = make([]DependencyLock, len(new))
var i int
for name, newlock := range new {
oldlock, exists := old[name]
if exists {
if newlock.License.Kind == "" {
newlock.License.Kind = oldlock.License.Kind
}
if newlock.License.Text == "" {
newlock.License.Text = oldlock.License.Text
}
}
final[i] = newlock
i++
}
return l.Alphabetize(final)
} |
Design of intelligent system for indoor lighting Lighting solutions have a high impact on total energy consumption worldwide; in particular, 40% of total energy is consumed by indoor lights. Indoor lighting has traditionally used fluorescent lamps and electronic ballasts. A manual approach is often used for switching the lights, so lights frequently remain ON even when not in use. Fluorescent lights do not support dimming, so a user needs to replace the light source with one of a different wattage to achieve a different intensity; they are also energy inefficient. Electronic ballasts, on the other hand, support dimming but are expensive. Existing lighting systems therefore need to be upgraded. The proposed system is an intelligent LED lighting control system based on a fuzzy logic controller. In auto mode, lights are switched according to human presence, and light intensity is controlled depending on the available natural light. In manual mode, light intensity is adjusted using a Bluetooth-enabled smartphone application. In addition, various lighting effects were added so that users can create scene modes using a smartphone. Energy consumption and sensor data are logged to a storage device. A reduction in energy consumption by the lights and the associated control system is achieved without compromising the user's visual comfort.
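To make the control idea concrete, the following is a minimal Python sketch of what a fuzzy dimming rule base of this kind might look like. It is illustrative only: the membership breakpoints (lux thresholds), the rule outputs and all function names are assumptions for demonstration, not details taken from the system described above.

# A minimal fuzzy dimming sketch; breakpoints and names are assumed.
def tri(x, a, b, c):
    # Triangular membership: rises from a, peaks at b, falls to c.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def led_duty_cycle(ambient_lux, occupied):
    if not occupied:
        return 0.0  # auto mode: no presence, lights off
    dark = tri(ambient_lux, -1.0, 0.0, 250.0)        # little natural light
    partial = tri(ambient_lux, 100.0, 300.0, 500.0)  # some daylight
    bright = tri(ambient_lux, 350.0, 600.0, 1e9)     # plenty of daylight
    # Rules: dark -> full output, partial -> half, bright -> minimal.
    # Weighted-average defuzzification over the three rule outputs.
    w = dark + partial + bright
    return (dark * 1.0 + partial * 0.5 + bright * 0.1) / w if w else 0.0

print(led_duty_cycle(50, True))   # dark, occupied room -> high duty cycle
print(led_duty_cycle(800, True))  # bright room -> heavy dimming

The same structure extends naturally to a second input (e.g. time of day) by adding membership functions and rules, which is the usual appeal of a fuzzy controller over hand-tuned thresholds.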
Sequences upstream from the mouse c-mos oncogene may function as a transcription termination signal. A region upstream from the mouse c-mos proto-oncogene, termed upstream mouse sequence (UMS), prevents expression of mos transforming activity. Previous studies suggested that the UMS prevented transcription readthrough. In this study, we constructed a recombinant DNA clone, pHTS3MS, with the UMS inserted downstream from both the mos gene and a truncated long terminal repeat containing only the U3 enhancer region. In this position UMS did not inhibit mos transforming activity. We examined cells transformed by pHTS3MS for RNA expression. S1 nuclease analysis showed that the UMS provides two polyadenylation signals to mos-containing RNA and nuclear run-on transcription showed that the primary transcripts terminate in UMS. In addition, using portions of the UMS, we found that a 360-bp fragment containing the UMS polyadenylation signals and sites inserted between the herpes simplex virus type 1 (HSV-1) thymidine kinase gene (tk) and its promoter inhibits tk transforming activity by 99% and prevents detectable expression of this construct in transient expression assays. Thus, the UMS must contain signals for polyadenylation and appears to function as a transcription terminator. |
Ms all de la ilusin de paz en Colombia. Articulacin de voces locales contra la violencia narrativa Introduction. The degradation of the Colombian armed conflict has resulted in a war against society in which the different actors who drive it have acquired military capabilities to the same extent that they have distanced themselves from political and social ideals. Approach. The ending of the armed conflict and the attainment of stable peace has remained, for decades, an unrealised dream for the Colombian people, contributing to a simplification and closure of the conflict narrative. Results. The political and social debate around the plebiscite regarding the process of peace between the Colombian government and the Revolutionary Armed Forces of Colombia showed that the representatives accepted and reproduced only the narratives reinforcing their own approaches. Discussion and conclusions. The narrative from communities that have been direct victims of the armed conflict serve to document their experience in regard to suffering and the ways in which they have faced the effects of violence. Narrating on behalf of themselves and others represents an act that allows them to become political subjects who claim their right to be seen and heard. |
The Open Systems Interconnection (OSI) model is a conceptual framework that characterizes and standardizes the communication functions of a telecommunications or computing system without regard to its underlying internal structure and technology. The model partitions a communication system into abstraction layers.
The physical layer (PHY) is responsible for transmission and reception of unstructured raw data between a device and a physical transmission medium. Layer specifications define characteristics such as voltage levels, timing of voltage changes, physical data rates, maximum transmission distances, and physical connectors. This includes the layout of pins, voltages, line impedance, cable specifications, signal timing and frequency for wireless devices. The components of a physical layer can be described in terms of a network topology. An example of a protocol using the physical layer is Ethernet (as defined by the Institute of Electrical and Electronics Engineers (IEEE) 802.3 standard described at standards.ieee.org).
The data link layer provides node to node data transfer—a link between two directly connected nodes. It defines the protocol to establish and terminate a connection between two physically connected devices. It also defines the protocol for flow control between them. In one example, the IEEE 802.3 Ethernet standard divides the data link layer into two sublayers: a) medium access control (MAC) layer—responsible for controlling how devices in a network gain access to a medium and permission to transmit data; and b) logical link control (LLC) layer—responsible for identifying and encapsulating network layer protocols, and controls error checking and frame synchronization.
In some cases there are difficulties in connecting external PHY devices to a MAC device (such as an Ethernet network interface controller (NIC) for example). One approach is to integrate the PHY device into the MAC device (called an internal PHY approach). However, this approach introduces various limitations on capabilities that are delivered by the PHY modules. Significantly, the internal PHY approach provides no ability to switch to a different, more suitable PHY device (for example in terms of better supported connections, better supported temperature range, and so on).
Another approach is to use an external PHY device but with a connection over a serializer/deserializer (SERDES) interface. A SERDES interface includes a pair of functional blocks commonly used in high speed communications to compensate for limited input/output. These blocks convert data between serial data and parallel interfaces in each direction. The primary use of a SERDES is to provide data transmission over a single line or a differential pair in order to minimize the number of I/O pins and interconnects. In this approach the connection to the external PHY device is achieved with the use of an integrated circuit that is capable of converting parallel data into the data's serial equivalent and vice versa. Unfortunately, some external PHY devices do not support a SERDES connection. Thus, a better approach is needed.
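As a toy illustration of the parallel-to-serial conversion just described (not a model of any particular device), the Python sketch below serializes 8-bit words into a bit stream and regroups them on the receiving side. Real SERDES hardware additionally performs line coding (e.g. 8b/10b) and clock recovery, which this sketch omits; the 8-bit width and function names are illustrative assumptions.

# Toy SERDES pair: parallel words <-> serial bit stream.
def serialize(words, width=8):
    # Parallel-to-serial: emit bits MSB-first for each word.
    for w in words:
        for i in range(width - 1, -1, -1):
            yield (w >> i) & 1

def deserialize(bits, width=8):
    # Serial-to-parallel: regroup the bit stream into words.
    word, n, out = 0, 0, []
    for b in bits:
        word = (word << 1) | b
        n += 1
        if n == width:
            out.append(word)
            word, n = 0, 0
    return out

assert deserialize(serialize([0x12, 0xAB])) == [0x12, 0xAB]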
Deconvolution of narrowband solar images using aberrations estimated from phase-diverse imagery Phase-Diverse Speckle (PDS) is a short-exposure data-collection and processing technique that blends phase-diversity and speckle-imaging concepts. PDS has been successfully used for solar astronomy to achieve near diffraction-limited resolution in ground-based imaging of solar granulation. Variants of PDS that involve narrow-band, spectroscopic, and polarimetric data provide more informative observations. We present results from processing data collected with the 76-cm Richard B. Dunn Solar Telescope (DST) on Sacramento Peak, NM. Three-channel data sets consisting of a pair of phase-diverse images of the solar continuum and a narrow-band image were collected over spans of 15–20 minutes. Point-spread functions that are estimated from the PDS data are used in a multi-frame deconvolution algorithm to correct the narrow-band imagery. The data were processed into a number of time series. A rare, short-lived continuum bright point with a peak intensity a factor of 2.1 above the mean intensity in the continuum was observed in one such sequence. The field of view spans multiple isoplanatic patches, and strategies for processing these large fields were developed. We will discuss these methods along with other techniques that were explored for accelerating the processing. Finally, we show the first PDS reconstruction of adaptive-optics (AO) compensated solar granulation taken at the DST. As expected, we find that these data are less aberrated and, thus, the use of AO in future experiments is planned.
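For readers unfamiliar with the final step, the following is a generic numpy sketch of multi-frame deconvolution with known point-spread functions in the regularized (Wiener-style) inverse-filter form. It illustrates the general idea of combining several blurred frames once the PSFs have been estimated (e.g. from phase diversity); it is not the authors' actual pipeline, and the regularization constant is an assumption.

# Generic multi-frame Wiener-style deconvolution sketch (illustrative only).
import numpy as np

def multiframe_wiener(frames, psfs, reg=1e-3):
    # frames, psfs: lists of equally sized 2D arrays; reg damps noise.
    num = 0.0
    den = reg  # regularization avoids division by ~0 at high frequencies
    for d, h in zip(frames, psfs):
        D = np.fft.fft2(d)
        H = np.fft.fft2(np.fft.ifftshift(h), s=d.shape)
        num = num + np.conj(H) * D        # accumulate matched-filter terms
        den = den + np.abs(H) ** 2        # accumulate PSF power spectra
    return np.real(np.fft.ifft2(num / den))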
// Android framework imports needed by this fragment; the app-level types
// (AbstractMyTracksDialogFragment, PreferencesUtils, StringUtils, R) are
// imported from the application's own packages.
import android.app.AlertDialog;
import android.app.Dialog;
import android.content.DialogInterface;
import android.os.Bundle;
import android.support.v4.app.FragmentActivity;

/**
 * A DialogFragment to configure frequency.
 *
 * @author Jimmy Shih
 */
public class FrequencyDialogFragment extends AbstractMyTracksDialogFragment {
public static final String FREQUENCY_DIALOG_TAG = "frequencyDialog";
private static final String KEY_PREFERENCE_ID = "preferenceId";
private static final String KEY_DEFAULT_VALUE = "defaultValue";
private static final String KEY_TITLE_ID = "titleId";
public static FrequencyDialogFragment newInstance(
int preferenceId, int defaultValue, int titleId) {
Bundle bundle = new Bundle();
bundle.putInt(KEY_PREFERENCE_ID, preferenceId);
bundle.putInt(KEY_DEFAULT_VALUE, defaultValue);
bundle.putInt(KEY_TITLE_ID, titleId);
FrequencyDialogFragment frequencyDialogFragment = new FrequencyDialogFragment();
frequencyDialogFragment.setArguments(bundle);
return frequencyDialogFragment;
}
@Override
protected Dialog createDialog() {
FragmentActivity fragmentActivity = getActivity();
final int preferenceId = getArguments().getInt(KEY_PREFERENCE_ID);
int defaultValue = getArguments().getInt(KEY_DEFAULT_VALUE);
int titleId = getArguments().getInt(KEY_TITLE_ID);
int frequencyValue = PreferencesUtils.getInt(fragmentActivity, preferenceId, defaultValue);
return new AlertDialog.Builder(fragmentActivity).setPositiveButton(
R.string.generic_ok, new DialogInterface.OnClickListener() {
@Override
public void onClick(DialogInterface dialog, int which) {
int listIndex = ((AlertDialog) dialog).getListView().getCheckedItemPosition();
PreferencesUtils.setInt(getActivity(), preferenceId, getFrequencyValue(listIndex));
}
}).setSingleChoiceItems(
getFrequencyDisplayOptions(fragmentActivity), getListIndex(frequencyValue), null)
.setTitle(titleId).create();
}
/**
* Gets the frequency display options.
*/
private String[] getFrequencyDisplayOptions(FragmentActivity fragmentActivity) {
boolean metricUnits = PreferencesUtils.isMetricUnits(fragmentActivity);
return StringUtils.getFrequencyOptions(fragmentActivity, metricUnits);
}
/**
* Gets the list index for a frequency value. Returns 0 if the value is not on
* the list.
*/
private int getListIndex(int frequencyValue) {
String[] values = getResources().getStringArray(R.array.frequency_values);
for (int i = 0; i < values.length; i++) {
if (frequencyValue == Integer.parseInt(values[i])) {
return i;
}
}
return 0;
}
/**
* Gets the frequency value from a list index.
*
* @param listIndex the list index
*/
private int getFrequencyValue(int listIndex) {
String[] values = getResources().getStringArray(R.array.frequency_values);
return Integer.parseInt(values[listIndex]);
}
} |
import isDeepEqual from 'fast-deep-equal';
import { useEffect, useRef } from 'react';
/**
 * Returns the memoized instance if the value is considered unchanged across
 * renders, as determined by a deep-equality comparison.
 * @param value the value to memoize
 * @returns the previous instance when deep-equal, otherwise the new value
 */
export function useDeepEqualMemo<T>(value: T): T {
const ref = useRef<T>();
const isEqual = isDeepEqual(value, ref.current);
useEffect(() => {
if (!isEqual) {
ref.current = value;
}
});
return isEqual ? ref.current : value;
}
|
Wrapped and Stacked: Smart Contracts and the Interaction of Natural and Formal Language Abstract This article explores smart contracts from first principles: what they are, whether they are properly called contracts, and what issues they raise for national contract law. A smart contract purports to record contractual promises in language which is both intelligible to human beings and (ultimately) executable by machines. The formalisation of contracting language that this entails is, I argue, the most important aspect for lawyers, just as important as the automation of contractual performance. Rather than taking a doctrinal approach focused on the presence of traditional indicia of contract formation, I examine the nature of contracts as legal entities created by words and documents. In most cases, smart contracts will be wrapped in paper and nested in a national legal system. Borrowing from the idiom of computer science, I introduce the term 'contract stack' to highlight the complex nature of contracts as legal entities incorporating different layers, including speech acts by the parties in both natural and formal languages as well as mandatory legal rules. It is the interactions within this contract stack that will be most important to the development of contract law doctrines appropriate to smart contracts. To illustrate my points, I explore a few issues that smart contracts might raise for English contract law. I touch on the questions of illegality, jurisdiction, and evidence, but my focus in this paper is on exploring issues in contract law proper. This contribution should be helpful not only to lawyers attempting to understand smart contracts, but to those involved in coding smart contracts and writing the languages used to code them.
Otto Krüger, 67, was granted permission to temporarily leave his ward at 1:45pm on Sunday but he failed to return to the facility at the agreed time.
Now police are warning the public not to approach the man, who is dependent on medication and can be "very aggressive".
It is thought he could be in or around the cities of Cologne and Bonn in North Rhine-Westphalia.
In 1998 the man kicked his 78-year-old neighbour to death in Bad Godesberg, south of Bonn. The following year he was placed in a closed psychiatric ward by a Bonn court.
The man had previously gone missing in December 2014 during an accompanied visit to a Christmas market. RP Online reported that he was on the run for two weeks but was caught after witnesses spotted him in a bistro.
Police have asked anyone with information on Krüger's whereabouts to contact them. |
Two leaders meet in Berlin after Macedonian forces allow migrants through border and up to Serbia, on their way to Hungary – first state in Schengen zone
Thousands of refugees were heading towards Hungary and the EU border on Monday, as the German chancellor, Angela Merkel, said the union’s member states must fairly share the burden of dealing with Europe’s biggest migration crisis since the second world war.
Speaking before talks in Berlin with the French president, François Hollande, Merkel said Europe needed to act together to deal with the chaotic scenes in Greece and the western Balkans as desperate migrants tried to reach the EU. “The current situation troubles us greatly,” she said.
Germany and France are to draft common proposals on immigration and security to deal with the worsening emergency. On Monday, Merkel said they could include building new registration centres in Greece and Italy to be run and staffed by the EU as a whole by the end of the year.
She said: “Time is running out. EU member states must share costs relating to this action.”
The two leaders also said that the EU must draw up a unified list of safe countries of origin. Asylum seekers arriving from these countries should be swiftly returned.
Berlin is increasingly determined to push a new system of mandatory quotas for refugees across the EU despite the issue being rejected by EU leaders in acrimonious scenes. The European commission is also to propose a new permanent system of emergency refugee-sharing across the union.
The Berlin summit came as long lines of migrants travelled on foot and by bus through southern Serbia, on the latest leg of their increasingly desperate journey to western Europe. The UN refugee agency UNHCR said more than 7,000 people, including women and children, had reached Serbia from Macedonia.
Many had spent three days on Greece’s northern border after Macedonia refused to allow them to enter. Last week Macedonian riot police tried to beat back crowds using stun grenades. On Saturday and Sunday the authorities relented and opened the border. They laid on trains and buses to ferry the refugees further north.
At the Serbian border crossing of Miratovac, refugees walked three miles to a reception centre in the southern town of Preševo. Most carried their belongings in rucksacks and men carried small children on their shoulders. In Preševo, they received medical aid, food and papers legalising their transit through the country.
“I just want to cross to continue my journey,” Ahmed, from Syria, told Reuters, speaking on the Serbian border. He added: “My final destination is Germany, hopefully.”
Germany is warning of reintroducing national border controls unless other countries step up to the plate and share the refugee burden more equitably. Proposals from Brussels in May to introduce mandatory refugee quotas across the EU on a small initial scale were rejected by Spain and most of eastern Europe. At their June summit, leaders debated until 3.30am and agreed nothing.
Since then, the number of migrants entering Greece, Italy and the Balkans has soared, with Germany predicting the arrival of 800,000 asylum seekers this year and the figures for the EU projected to triple compared with 2014.
Angela Merkel and François Hollande attend a brief press conference in Berlin. Photograph: Kay Nietfeld/dpa/Corbis
The European commission, as the guardian of the Schengen system, insisted on Monday that the free travel area was sacrosanct and would not be changed, but added that national authorities could mount identity checks and other monitoring measures on rail traffic and in problem areas as long as the action did not amount to border controls.
The latest frontline in the EU’s attempts to deal with the problem is Hungary, the first state beyond Serbia inside the Schengen zone. Tighter migration laws officially kicked in this month and the Hungarian government promised to complete a fence along its 108-mile border with Serbia to keep out migrants.
Cutting the fence – due to be completed in its first phase by the end of the month – will be punishable with up to four years in prison. In the Hungarian town of Szeged on the border with Serbia the local mayor has supported volunteers who have given refugees food, water and travel information. Local sentiment, however, is mixed.
The influx represents the biggest movement of people in the western Balkans since the wars in the 1990s in the former Yugoslavia. Speaking on Monday during a visit to the Macedonian capital, Skopje, Austria’s foreign minister, Sebastian Kurz, said the countries in the region had been “overrun, overwhelmed and left to their own devices”. “We have to help them,” he added.
With European asylum and immigration policies increasingly a mess and national governments failing to come up with a coherent response to the worst migratory pressures witnessed in many countries, Berlin and Brussels are sounding the alarm.
Jean-Claude Juncker, the president of the European commission, said the failure to be more generous and sharing on the refugee issue created a Europe he did not want to live in.
Germany’s social democrat leaders warned that public opinion would turn against the EU if Germany was left with more than one third of those seeking asylum in Europe, while many other countries, mostly in eastern Europe, admitted minimal numbers of refugees.
On Monday it emerged that the German authorities had decided to waive certain EU asylum rules for Syrians without informing Brussels. Berlin’s immigration authority ruled that all Syrians entering Germany would have their asylum claims processed, waiving the right to have them deported back to the first EU country the claimants entered.
A record 50,000 migrants, many of them Syrians crossing by boat from Turkey, hit Greek shores in July. In the past two weeks, over 23,000 have entered Serbia, taking the total so far this year to some 90,000.
“They’re coping somehow so far,” said Ivan Mišković, a spokesman for Serbia’s commissariat for refugees and migration, of the aid workers and authorities on Serbia’s southern border. “Refugees in Miratovac and Preševo are receiving first aid and they are fed before proceeding onwards.”
Meanwhile, Greece’s coastguard is searching for at least five people missing at sea after the dinghy they were using to cross from Turkey overturned off the coast of the eastern Aegean island of Lesbos.
The coastguard said it had rescued six people, had recovered the bodies of two men and was searching the area for the missing. It was alerted after a fishing boat picked up one person off the island’s eastern coast on Monday morning, and a second managed to swim to the island. The two told authorities they had been in a boat carrying about 15 people when it overturned.
Greece has been overwhelmed by an influx of mainly refugees reaching its islands from Turkey. The Greek coastguard said it had picked up 877 people in 30 search and rescue operations from Friday morning to Monday morning near the islands of Lesbos, Chios, Samos and Kos. The figures did not include the hundreds who managed to make it to the islands themselves, mostly in inflatable dinghies. |
package model;
public enum Status {
ACTIVE,
COMPLETE
}
|
A forensic aspect of age characteristics of dentine using transversal microradiography: a case report Background Translucency of dentine is the result of occlusion of the corresponding dentinal tubules by a mineral substance which has a refractive index similar to that of the rest of the dentine. Case presentation This case report describes the microradiographic features of an upper cadaveric canine. Transverse microradiography is one of the methods used to assess apical dentine translucency for various dental and medical reasons. Conclusion Estimation of age using tooth structures may be of primary value in forensic dentistry, especially when soft tissues are severely destructed. Background Microradiography has been employed in dental research to measure mineral distribution in hard tissues such as bone, enamel and dentine. It has been shown that for wavelengths between 0.5 Å and 3 Å, that is, for soft x-rays, the mass-absorption coefficient of the organic fraction of mineralized tissues is only about 1/10 of that of the apatite. Applying these findings to the dentine of human teeth, and taking into consideration that the organic content of dentine is less than 1/4 of the total by weight, it is stated that the absorption of the organic matrix of the dentine is less than 2.5% of the total absorption of the dentine. Thus, practically, microradiography of the dentine is the study of the mineral, inorganic, phase of the dentine. This case report emphasizes the possible use of microradiography in forensics. It was our interest to evaluate the dentine characteristics of age using microradiography of a cadaveric canine tooth. Case report A cadaveric upper right canine was selected for estimation of age in forensics. The individual had died at 59 years of age. The investigators used an x-ray generator supplied with a fine tube. The radiation emission was performed at 20 kV and 30 mA. Nickel-filtered copper K-alpha monochromatic radiation through a beryllium window was used. The target-to-film distance was 26 cm. The specimen was mounted on a spectroscopic plate, type 649-0, in the cassette and held in position with a stretched aluminium foil 25 µm thick. The specimen section (< 55 µm) of the apical third was then exposed to x-rays for 45 min. The microradiographic plate was developed in HRP developer for 5 min, washed in distilled water, rinsed in a stop bath (5% solution, per volume, glacial acetic acid) for 30 sec, washed again in distilled water and fixed in two baths for 5 min in each bath. After fixation the plate was washed for 30 min in cold water, rinsed in wetting agent and hung to dry. The microradiographic plate was mounted on a microscope slide with D.P.X. mountant and a cover slip was applied using the same mountant. The advanced stage of tubular occlusion was seen clearly (Figure 1). Microradiography therefore indicated the deposition of minerals, based on the variations in the x-ray absorption of this material, which may appear through superimposition in the x-ray image (Figure 2). The features suggesting the influence of age on apical dentine structure seen with microradiography were highly occluded tubules and superimposition of the peritubular dentine. Discussion Basically, the study of the microstructure of translucent dentine may be performed using light microscopy, microradiography and SEM examination. The microradiographic technique may be divided into two assessment types.
Firstly, ground longitudinal sections, about 250 µm thick, are used in order to establish the relationship between the translucent zones and their x-ray absorption. The microradiographs and the original sections may be compared using the enlarger technique. Secondly, polished transverse sections, 35–55 µm thick, are used in order to locate the deposition of any extra minerals, to compare their degree of x-ray absorption with that of the rest of the dentine, and to establish how closely packed the crystals of the deposited minerals are. In our case the latter technique was used, showing the advanced tubular occlusion with the associated deposition of minerals. The zones which appeared translucent on optical examination showed an increased x-ray absorption in the microradiographs. Occasionally, islands of completely occluded tubules were seen surrounded by patent tubules. Furthermore, tubule-free areas (in which neither closed nor patent tubules were detectable) were seen near the root canal in sections in which tubules were present everywhere else. The original diameters of the lumens of the completely or partially occluded tubules (the diameter of the circle formed by the line of junction between the intertubular dentine and the smooth material in the tubule) were not noticeably different from those of adjacent patent tubules. As already mentioned, no predentine layer was seen by microradiography at the pulpal ends of the translucent zones of root dentine. This might have been the result of sectioning, grinding, and polishing of the specimen. The age changes of the root are associated with the reduction of the number of sub-odontoblastic pulpal blood vessels with increasing age. Another sound explanation is that the odontoblasts corresponding to the translucent areas of dentine may be less active than the rest of the odontoblasts. Despite the reduction of the number of odontoblasts and their blood supply with age, the corresponding part of the dentine does become translucent with age, and furthermore at an increasing rate. In conclusion, the features described may be used in addition to the well-known measures of forensic dentistry. The discussed features of canine microradiography may be used in forensics as a guide to a tooth from a 59-year-old individual, for the estimation of a victim's age. Further studies are required for validation of these findings, and analyses of teeth microradiography for other age groups also seem essential. Consent Written informed consent was obtained for publication of this case report and accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal.
/*
* Copyright (c) 2016 The WebRTC project authors. All Rights Reserved.
*
* Use of this source code is governed by a BSD-style license
* that can be found in the LICENSE file in the root of the source
* tree. An additional intellectual property rights grant can be found
* in the file PATENTS. All contributing project authors may
* be found in the AUTHORS file in the root of the source tree.
*/
#ifndef RTC_BASE_NUMERICS_SEQUENCE_NUMBER_UTIL_H_
#define RTC_BASE_NUMERICS_SEQUENCE_NUMBER_UTIL_H_
#include <stdint.h>
#include <limits>
#include <type_traits>
#include "rtc_base/numerics/mod_ops.h"
namespace webrtc {
// Test if the sequence number `a` is ahead or at sequence number `b`.
//
// If `M` is an even number and the two sequence numbers are at max distance
// from each other, then the sequence number with the highest value is
// considered to be ahead.
template <typename T, T M>
inline typename std::enable_if<(M > 0), bool>::type AheadOrAt(T a, T b) {
static_assert(std::is_unsigned<T>::value,
"Type must be an unsigned integer.");
const T maxDist = M / 2;
if (!(M & 1) && MinDiff<T, M>(a, b) == maxDist)
return b < a;
return ForwardDiff<T, M>(b, a) <= maxDist;
}
template <typename T, T M>
inline typename std::enable_if<(M == 0), bool>::type AheadOrAt(T a, T b) {
static_assert(std::is_unsigned<T>::value,
"Type must be an unsigned integer.");
const T maxDist = std::numeric_limits<T>::max() / 2 + T(1);
if (a - b == maxDist)
return b < a;
return ForwardDiff(b, a) < maxDist;
}
template <typename T>
inline bool AheadOrAt(T a, T b) {
return AheadOrAt<T, 0>(a, b);
}
// Test if the sequence number `a` is ahead of sequence number `b`.
//
// If `M` is an even number and the two sequence numbers are at max distance
// from each other, then the sequence number with the highest value is
// considered to be ahead.
template <typename T, T M = 0>
inline bool AheadOf(T a, T b) {
static_assert(std::is_unsigned<T>::value,
"Type must be an unsigned integer.");
return a != b && AheadOrAt<T, M>(a, b);
}
// Comparator used to compare sequence numbers in a continuous fashion.
//
// WARNING! If used to sort sequence numbers of length M then the interval
// covered by the sequence numbers may not be larger than floor(M/2).
template <typename T, T M = 0>
struct AscendingSeqNumComp {
bool operator()(T a, T b) const { return AheadOf<T, M>(a, b); }
};
// Comparator used to compare sequence numbers in a continuous fashion.
//
// WARNING! If used to sort sequence numbers of length M then the interval
// covered by the sequence numbers may not be larger than floor(M/2).
template <typename T, T M = 0>
struct DescendingSeqNumComp {
bool operator()(T a, T b) const { return AheadOf<T, M>(b, a); }
};
} // namespace webrtc
#endif // RTC_BASE_NUMERICS_SEQUENCE_NUMBER_UTIL_H_
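For intuition, here is a small Python rendering of the same wraparound comparison for 16-bit sequence numbers, mirroring the M == 0 specialization of AheadOrAt above. It is an illustration only, not part of the header.

# Wraparound "ahead or at" for uint16_t-style sequence numbers.
M = 1 << 16     # modulus for 16-bit sequence numbers
HALF = M // 2   # numeric_limits<T>::max() / 2 + 1 for 16-bit

def ahead_or_at(a, b):
    d = (a - b) % M   # ForwardDiff(b, a)
    if d == HALF:     # tie at max distance: higher raw value wins
        return b < a
    return d < HALF

assert ahead_or_at(2, 65535)       # 2 is ahead of 65535 after wraparound
assert not ahead_or_at(65535, 2)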
|
import os
from peewee import CharField, FloatField, ForeignKeyField, IntegerField, Model
from playhouse.sqlite_ext import JSONField, SqliteExtDatabase
from nni.nas.benchmarks.constants import DATABASE_DIR
db = SqliteExtDatabase(os.path.join(DATABASE_DIR, 'nds.db'), autoconnect=True)
class NdsTrialConfig(Model):
"""
Trial config for NDS.
Attributes
----------
model_family : str
Could be ``nas_cell``, ``residual_bottleneck``, ``residual_basic`` or ``vanilla``.
model_spec : dict
If ``model_family`` is ``nas_cell``, it contains ``num_nodes_normal``, ``num_nodes_reduce``, ``depth``,
``width``, ``aux`` and ``drop_prob``. If ``model_family`` is ``residual_bottleneck``, it contains ``bot_muls``,
``ds`` (depths), ``num_gs`` (number of groups) and ``ss`` (strides). If ``model_family`` is ``residual_basic`` or
``vanilla``, it contains ``ds``, ``ss`` and ``ws``.
cell_spec : dict
If ``model_family`` is not ``nas_cell`` it will be an empty dict. Otherwise, it specifies
``<normal/reduce>_<i>_<op/input>_<x/y>``, where i ranges from 0 to ``num_nodes_<normal/reduce> - 1``.
If it is an ``op``, the value is chosen from the constants specified previously like :const:`nni.nas.benchmark.nds.CONV_1X1`.
If it is i's ``input``, the value range from 0 to ``i + 1``, as ``nas_cell`` uses previous two nodes as inputs, and
node 0 is actually the second node. Refer to NASNet paper for details. Finally, another two key-value pairs
``normal_concat`` and ``reduce_concat`` specify which nodes are eventually concatenated into output.
dataset : str
Dataset used. Could be ``cifar10`` or ``imagenet``.
generator : str
Can be one of ``random`` which generates configurations at random, while keeping learning rate and weight decay fixed,
``fix_w_d`` which further keeps ``width`` and ``depth`` fixed, only applicable for ``nas_cell``. ``tune_lr_wd`` which
further tunes learning rate and weight decay.
proposer : str
Paper who has proposed the distribution for random sampling. Available proposers include ``nasnet``, ``darts``, ``enas``,
``pnas``, ``amoeba``, ``vanilla``, ``resnext-a``, ``resnext-b``, ``resnet``, ``resnet-b`` (ResNet with bottleneck).
See NDS paper for details.
base_lr : float
Initial learning rate.
weight_decay : float
L2 weight decay applied on weights.
num_epochs : int
Number of epochs scheduled, during which learning rate will decay to 0 following cosine annealing.
"""
model_family = CharField(max_length=20, index=True, choices=[
'nas_cell',
'residual_bottleneck',
'residual_basic',
'vanilla',
])
model_spec = JSONField(index=True)
cell_spec = JSONField(index=True, null=True)
dataset = CharField(max_length=15, index=True, choices=['cifar10', 'imagenet'])
generator = CharField(max_length=15, index=True, choices=[
'random',
'fix_w_d',
'tune_lr_wd',
])
proposer = CharField(max_length=15, index=True)
base_lr = FloatField()
weight_decay = FloatField()
num_epochs = IntegerField()
class Meta:
database = db
class NdsTrialStats(Model):
"""
Computation statistics for NDS. Each corresponds to one trial.
Attributes
----------
config : NdsTrialConfig
Corresponding config for trial.
seed : int
Random seed selected, for reproduction.
final_train_acc : float
Final accuracy on training data, ranging from 0 to 100.
final_train_loss : float or None
Final cross entropy loss on training data. Could be NaN (None).
final_test_acc : float
Final accuracy on test data, ranging from 0 to 100.
best_train_acc : float
Best accuracy on training data, ranging from 0 to 100.
best_train_loss : float or None
Best cross entropy loss on training data. Could be NaN (None).
best_test_acc : float
Best accuracy on test data, ranging from 0 to 100.
parameters : float
Number of trainable parameters in million.
flops : float
FLOPs in million.
iter_time : float
Seconds elapsed for each iteration.
"""
config = ForeignKeyField(NdsTrialConfig, backref='trial_stats', index=True)
seed = IntegerField()
final_train_acc = FloatField()
final_train_loss = FloatField(null=True)
final_test_acc = FloatField()
best_train_acc = FloatField()
best_train_loss = FloatField(null=True)
best_test_acc = FloatField()
parameters = FloatField()
flops = FloatField()
iter_time = FloatField()
class Meta:
database = db
class NdsIntermediateStats(Model):
"""
Intermediate statistics for NDS.
Attributes
----------
trial : NdsTrialStats
Corresponding trial.
current_epoch : int
Elapsed epochs.
train_loss : float or None
Current cross entropy loss on training data. Can be NaN (None).
train_acc : float
Current accuracy on training data, ranging from 0 to 100.
test_acc : float
Current accuracy on test data, ranging from 0 to 100.
"""
trial = ForeignKeyField(NdsTrialStats, backref='intermediates', index=True)
current_epoch = IntegerField(index=True)
train_loss = FloatField(null=True)
train_acc = FloatField()
test_acc = FloatField()
class Meta:
database = db
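As a minimal usage sketch (assuming the nds.db database file is present under DATABASE_DIR), the models above can be queried directly with standard peewee expressions. The 'cifar10' and 'darts' values are taken from the choices listed in the docstrings; the query itself is hypothetical.

# Hypothetical query: DARTS-space trial configs on CIFAR-10 and their stats.
query = (NdsTrialConfig
         .select()
         .where((NdsTrialConfig.dataset == 'cifar10') &
                (NdsTrialConfig.proposer == 'darts')))
for config in query.limit(5):
    for stats in config.trial_stats:  # backref from NdsTrialStats
        print(config.proposer, stats.final_test_acc)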
|
import { Component, OnInit } from '@angular/core';
import { FormBuilder, FormGroup, Validators } from '@angular/forms';
import { DrestaurantCourierService, CourierModel, EventManager, MyEvent } from '@d-restaurant-frontend/drestaurant-shared';
@Component({
selector: 'd-restaurant-frontend-courier-create',
templateUrl: './courier-create.component.html',
styleUrls: ['./courier-create.component.scss']
})
export class CourierCreateComponent implements OnInit {
form: FormGroup;
constructor(private courierService: DrestaurantCourierService, private formBuilder: FormBuilder, private eventManager: EventManager) { }
ngOnInit() {
this.form = this.formBuilder.group({
firstName: ['', Validators.required],
lastName: ['', Validators.required],
maxNumberOfActiveOrders: [10, Validators.compose([Validators.required, Validators.min(1), Validators.max(1000)])]
});
}
onSubmit({ value, valid }: { value: CourierModel; valid: boolean }) {
this.courierService
.createCourier(value)
.subscribe(
response => this.onSaveSuccess(response),
() => this.onSaveError()
);
}
private onSaveSuccess(result) {
// Note that command for creating a Courier (command side) and materializing the CourierEntity (query side - event handler) can happen in different threads (no transaction). We should wait websocket event from the backend marking that view has been materialized. We are 'eventually' consistent, and we have to handle it accordingly.
// We do not need to fire event here in order for the list to be refreshed
// because: STOMP message will be sent over the WebSocket protocol once the Courier is saved into the database on the backend side
// check: 'courier-list.datasource.ts' to see how we subscribe to WebSocket event (Couriers list updated)
// this.eventManager.broadcast({
// name: MyEvent.COURIER_LIST_MODIFICATION,
// content: 'OK'
// });
}
private onSaveError() {
//Do something smart
}
}
|
#include <AMReX_MLEBNodeFDLaplacian.H>
#include <AMReX_MLEBNodeFDLap_K.H>
#include <AMReX_MLNodeLap_K.H>
#include <AMReX_MLNodeTensorLap_K.H>
namespace amrex {
MLEBNodeFDLaplacian::MLEBNodeFDLaplacian (
const Vector<Geometry>& a_geom,
const Vector<BoxArray>& a_grids,
const Vector<DistributionMapping>& a_dmap,
const LPInfo& a_info,
const Vector<EBFArrayBoxFactory const*>& a_factory)
{
define(a_geom, a_grids, a_dmap, a_info, a_factory);
}
MLEBNodeFDLaplacian::~MLEBNodeFDLaplacian ()
{}
void
MLEBNodeFDLaplacian::setSigma (Array<Real,AMREX_SPACEDIM> const& a_sigma) noexcept
{
for (int i = 0; i < AMREX_SPACEDIM; ++i) {
m_sigma[i] = a_sigma[i];
}
}
void
MLEBNodeFDLaplacian::setEBDirichlet (Real a_phi_eb)
{
m_s_phi_eb = a_phi_eb;
}
void
MLEBNodeFDLaplacian::define (const Vector<Geometry>& a_geom,
const Vector<BoxArray>& a_grids,
const Vector<DistributionMapping>& a_dmap,
const LPInfo& a_info,
const Vector<EBFArrayBoxFactory const*>& a_factory)
{
static_assert(AMREX_SPACEDIM > 1, "MLEBNodeFDLaplacian: 1D not supported");
BL_PROFILE("MLEBNodeFDLaplacian::define()");
// Make sure the grids are cell-centered.
Vector<BoxArray> cc_grids = a_grids;
for (auto& ba : cc_grids) {
ba.enclosedCells();
}
if (a_grids.size() > 1) {
amrex::Abort("MLEBNodeFDLaplacian: multi-level not supported");
}
Vector<FabFactory<FArrayBox> const*> _factory;
for (auto x : a_factory) {
_factory.push_back(static_cast<FabFactory<FArrayBox> const*>(x));
}
bool eb_limit_coarsening = false;
MLNodeLinOp::define(a_geom, cc_grids, a_dmap, a_info, _factory, eb_limit_coarsening);
}
std::unique_ptr<FabFactory<FArrayBox> >
MLEBNodeFDLaplacian::makeFactory (int amrlev, int mglev) const
{
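// For amrlev 0, only the finest MG level (mglev == 0) keeps EB information;
// coarser MG levels fall back to regular FArrayBox factories and rely on the
// a-coefficient penalty assembled in prepareForSolve().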
if (amrlev == 0 && mglev > 0) {
return std::make_unique<FArrayBoxFactory>();
} else {
return makeEBFabFactory(m_geom[amrlev][mglev],
m_grids[amrlev][mglev],
m_dmap[amrlev][mglev],
{1,1,1}, EBSupport::full);
}
}
void
MLEBNodeFDLaplacian::restriction (int amrlev, int cmglev, MultiFab& crse, MultiFab& fine) const
{
BL_PROFILE("MLEBNodeFDLaplacian::restriction()");
applyBC(amrlev, cmglev-1, fine, BCMode::Homogeneous, StateMode::Solution);
bool need_parallel_copy = !amrex::isMFIterSafe(crse, fine);
MultiFab cfine;
if (need_parallel_copy) {
const BoxArray& ba = amrex::coarsen(fine.boxArray(), 2);
cfine.define(ba, fine.DistributionMap(), 1, 0);
}
MultiFab* pcrse = (need_parallel_copy) ? &cfine : &crse;
const iMultiFab& dmsk = *m_dirichlet_mask[amrlev][cmglev-1];
#ifdef AMREX_USE_OMP
#pragma omp parallel if (Gpu::notInLaunchRegion())
#endif
for (MFIter mfi(*pcrse, TilingIfNotGPU()); mfi.isValid(); ++mfi)
{
const Box& bx = mfi.tilebox();
Array4<Real> cfab = pcrse->array(mfi);
Array4<Real const> const& ffab = fine.const_array(mfi);
Array4<int const> const& mfab = dmsk.const_array(mfi);
AMREX_HOST_DEVICE_PARALLEL_FOR_3D(bx, i, j, k,
{
mlndlap_restriction(i,j,k,cfab,ffab,mfab);
});
}
if (need_parallel_copy) {
crse.ParallelCopy(cfine);
}
}
void
MLEBNodeFDLaplacian::interpolation (int amrlev, int fmglev, MultiFab& fine,
const MultiFab& crse) const
{
BL_PROFILE("MLEBNodeFDLaplacian::interpolation()");
bool need_parallel_copy = !amrex::isMFIterSafe(crse, fine);
MultiFab cfine;
const MultiFab* cmf = &crse;
if (need_parallel_copy) {
const BoxArray& ba = amrex::coarsen(fine.boxArray(), 2);
cfine.define(ba, fine.DistributionMap(), 1, 0);
cfine.ParallelCopy(crse);
cmf = &cfine;
}
const iMultiFab& dmsk = *m_dirichlet_mask[amrlev][fmglev];
#ifdef AMREX_USE_OMP
#pragma omp parallel if (Gpu::notInLaunchRegion())
#endif
for (MFIter mfi(fine, TilingIfNotGPU()); mfi.isValid(); ++mfi)
{
Box const& bx = mfi.tilebox();
Array4<Real> const& ffab = fine.array(mfi);
Array4<Real const> const& cfab = cmf->const_array(mfi);
Array4<int const> const& mfab = dmsk.const_array(mfi);
AMREX_HOST_DEVICE_PARALLEL_FOR_3D(bx, i, j, k,
{
mlndtslap_interpadd(i,j,k,ffab,cfab,mfab);
});
}
}
void
MLEBNodeFDLaplacian::averageDownSolutionRHS (int /*camrlev*/, MultiFab& /*crse_sol*/,
MultiFab& /*crse_rhs*/,
const MultiFab& /*fine_sol*/,
const MultiFab& /*fine_rhs*/)
{
amrex::Abort("MLEBNodeFDLaplacian::averageDownSolutionRHS: todo");
}
void
MLEBNodeFDLaplacian::reflux (int /*crse_amrlev*/, MultiFab& /*res*/,
const MultiFab& /*crse_sol*/, const MultiFab& /*crse_rhs*/,
MultiFab& /*fine_res*/, MultiFab& /*fine_sol*/,
const MultiFab& /*fine_rhs*/) const
{
amrex::Abort("MLEBNodeFDLaplacian::reflux: TODO");
}
void
MLEBNodeFDLaplacian::prepareForSolve ()
{
BL_PROFILE("MLEBNodeFDLaplacian::prepareForSolve()");
MLNodeLinOp::prepareForSolve();
buildMasks();
// Set covered nodes to Dirichlet
for (int amrlev = 0; amrlev < m_num_amr_levels; ++amrlev) {
for (int mglev = 0; mglev < m_num_mg_levels[amrlev]; ++mglev) {
auto factory = dynamic_cast<EBFArrayBoxFactory const*>(m_factory[amrlev][mglev].get());
if (factory) {
auto const& levset = factory->getLevelSet();
auto& dmask = *m_dirichlet_mask[amrlev][mglev];
#ifdef AMREX_USE_OMP
#pragma omp parallel if (Gpu::notInLaunchRegion())
#endif
for (MFIter mfi(dmask,TilingIfNotGPU()); mfi.isValid(); ++mfi) {
const Box& ndbx = mfi.tilebox();
Array4<int> const& mskarr = dmask.array(mfi);
Array4<Real const> const lstarr = levset.const_array(mfi);
AMREX_HOST_DEVICE_FOR_3D(ndbx, i, j, k,
{
if (lstarr(i,j,k) >= Real(0.0)) {
mskarr(i,j,k) = -1;
}
});
}
}
}
}
m_acoef.clear();
m_acoef.emplace_back(amrex::convert(m_grids[0][0],IntVect(1)),
m_dmap[0][0], 1, 0);
const auto dxinv = m_geom[0][0].InvCellSizeArray();
AMREX_D_TERM(const Real bcx = m_sigma[0]*dxinv[0]*dxinv[0];,
const Real bcy = m_sigma[1]*dxinv[1]*dxinv[1];,
const Real bcz = m_sigma[2]*dxinv[2]*dxinv[2];)
m_a_huge = 1.e10 * AMREX_D_TERM(std::abs(bcx),+std::abs(bcy),+std::abs(bcz));
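// Penalty formulation: on the EB-less coarse MG levels, Dirichlet-masked
// nodes receive this huge diagonal a-coefficient, which effectively pins
// the solution there without needing a special stencil.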
Real ahuge = m_a_huge;
#ifdef AMREX_USE_OMP
#pragma omp parallel if (Gpu::notInLaunchRegion())
#endif
for (MFIter mfi(m_acoef[0],TilingIfNotGPU()); mfi.isValid(); ++mfi)
{
Box const& bx = mfi.tilebox();
auto const& acf = m_acoef[0].array(mfi);
auto const& msk = m_dirichlet_mask[0][0]->const_array(mfi);
AMREX_HOST_DEVICE_PARALLEL_FOR_3D(bx, i, j, k,
{
acf(i,j,k) = msk(i,j,k) ? ahuge : 0.0;
});
}
for (int mglev = 1; mglev < m_num_mg_levels[0]; ++mglev) {
m_acoef.emplace_back(amrex::convert(m_grids[0][mglev],IntVect(1)),
m_dmap[0][mglev], 1, 0);
auto const& fine = m_acoef[mglev-1];
auto & crse = m_acoef[mglev];
bool need_parallel_copy = !amrex::isMFIterSafe(crse, fine);
MultiFab cfine;
if (need_parallel_copy) {
const BoxArray& ba = amrex::coarsen(fine.boxArray(), 2);
cfine.define(ba, fine.DistributionMap(), 1, 0);
}
MultiFab* pcrse = (need_parallel_copy) ? &cfine : &crse;
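// Coarsen the a-coefficient by direct injection of the co-located fine node.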
#ifdef AMREX_USE_OMP
#pragma omp parallel if (Gpu::notInLaunchRegion())
#endif
for (MFIter mfi(*pcrse, TilingIfNotGPU()); mfi.isValid(); ++mfi)
{
const Box& bx = mfi.tilebox();
Array4<Real> cfab = pcrse->array(mfi);
Array4<Real const> const& ffab = fine.const_array(mfi);
AMREX_HOST_DEVICE_PARALLEL_FOR_3D(bx, i, j, k,
{
cfab(i,j,k) = ffab(2*i,2*j,2*k);
});
}
if (need_parallel_copy) {
crse.ParallelCopy(cfine);
}
}
}
void
MLEBNodeFDLaplacian::Fapply (int amrlev, int mglev, MultiFab& out, const MultiFab& in) const
{
BL_PROFILE("MLEBNodeFDLaplacian::Fapply()");
const auto dxinv = m_geom[amrlev][mglev].InvCellSizeArray();
AMREX_D_TERM(const Real bx = m_sigma[0]*dxinv[0]*dxinv[0];,
const Real by = m_sigma[1]*dxinv[1]*dxinv[1];,
const Real bz = m_sigma[2]*dxinv[2]*dxinv[2];)
const auto phieb = (m_in_solution_mode) ? m_s_phi_eb : Real(0.0);
auto const& dmask = *m_dirichlet_mask[amrlev][mglev];
auto factory = dynamic_cast<EBFArrayBoxFactory const*>(m_factory[amrlev][mglev].get());
if (factory) {
auto const& edgecent = factory->getEdgeCent();
#ifdef AMREX_USE_OMP
#pragma omp parallel if (Gpu::notInLaunchRegion())
#endif
for (MFIter mfi(out,TilingIfNotGPU()); mfi.isValid(); ++mfi)
{
const Box& box = mfi.tilebox();
Array4<Real const> const& xarr = in.const_array(mfi);
Array4<Real> const& yarr = out.array(mfi);
Array4<int const> const& dmarr = dmask.const_array(mfi);
bool cutfab = edgecent[0]->ok(mfi);
AMREX_D_TERM(Array4<Real const> const& ecx
= cutfab ? edgecent[0]->const_array(mfi) : Array4<Real const>{};,
Array4<Real const> const& ecy
= cutfab ? edgecent[1]->const_array(mfi) : Array4<Real const>{};,
Array4<Real const> const& ecz
= cutfab ? edgecent[2]->const_array(mfi) : Array4<Real const>{};)
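// std::numeric_limits<Real>::lowest() is the sentinel value meaning "use
// the spatially varying EB Dirichlet data stored in m_phi_eb".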
if (phieb == std::numeric_limits<Real>::lowest()) {
auto const& phiebarr = m_phi_eb[amrlev].const_array(mfi);
AMREX_HOST_DEVICE_FOR_3D(box, i, j, k,
{
mlebndfdlap_adotx_eb(i,j,k,yarr,xarr,dmarr,AMREX_D_DECL(ecx,ecy,ecz),
phiebarr, AMREX_D_DECL(bx,by,bz));
});
} else {
AMREX_HOST_DEVICE_FOR_3D(box, i, j, k,
{
mlebndfdlap_adotx_eb(i,j,k,yarr,xarr,dmarr,AMREX_D_DECL(ecx,ecy,ecz),
phieb, AMREX_D_DECL(bx,by,bz));
});
}
}
} else {
AMREX_ALWAYS_ASSERT(amrlev == 0);
auto const& acoef = m_acoef[mglev];
#ifdef AMREX_USE_OMP
#pragma omp parallel if (Gpu::notInLaunchRegion())
#endif
for (MFIter mfi(out,TilingIfNotGPU()); mfi.isValid(); ++mfi)
{
const Box& box = mfi.tilebox();
Array4<Real const> const& xarr = in.const_array(mfi);
Array4<Real> const& yarr = out.array(mfi);
Array4<int const> const& dmarr = dmask.const_array(mfi);
Array4<Real const> const& acarr = acoef.const_array(mfi);
AMREX_HOST_DEVICE_FOR_3D(box, i, j, k,
{
mlebndfdlap_adotx(i,j,k,yarr,xarr,dmarr,acarr,AMREX_D_DECL(bx,by,bz));
});
}
}
}
void
MLEBNodeFDLaplacian::Fsmooth (int amrlev, int mglev, MultiFab& sol, const MultiFab& rhs) const
{
BL_PROFILE("MLEBNodeFDLaplacian::Fsmooth()");
const auto dxinv = m_geom[amrlev][mglev].InvCellSizeArray();
AMREX_D_TERM(const Real bx = m_sigma[0]*dxinv[0]*dxinv[0];,
const Real by = m_sigma[1]*dxinv[1]*dxinv[1];,
const Real bz = m_sigma[2]*dxinv[2]*dxinv[2];)
auto const& dmask = *m_dirichlet_mask[amrlev][mglev];
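// Multi-sweep red-black Gauss-Seidel; applyBC refreshes ghost nodes
// between sub-sweeps so the stencil always sees consistent data.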
for (int redblack = 0; redblack < 4; ++redblack) {
if (redblack > 0) {
applyBC(amrlev, mglev, sol, BCMode::Homogeneous, StateMode::Correction);
}
auto factory = dynamic_cast<EBFArrayBoxFactory const*>(m_factory[amrlev][mglev].get());
if (factory) {
auto const& edgecent = factory->getEdgeCent();
#ifdef AMREX_USE_OMP
#pragma omp parallel if (Gpu::notInLaunchRegion())
#endif
for (MFIter mfi(sol,TilingIfNotGPU()); mfi.isValid(); ++mfi)
{
const Box& box = mfi.tilebox();
Array4<Real> const& solarr = sol.array(mfi);
Array4<Real const> const& rhsarr = rhs.const_array(mfi);
Array4<int const> const& dmskarr = dmask.const_array(mfi);
bool cutfab = edgecent[0]->ok(mfi);
AMREX_D_TERM(Array4<Real const> const& ecx
= cutfab ? edgecent[0]->const_array(mfi) : Array4<Real const>{};,
Array4<Real const> const& ecy
= cutfab ? edgecent[1]->const_array(mfi) : Array4<Real const>{};,
Array4<Real const> const& ecz
= cutfab ? edgecent[2]->const_array(mfi) : Array4<Real const>{};)
AMREX_HOST_DEVICE_FOR_3D(box, i, j, k,
{
mlebndfdlap_gsrb_eb(i,j,k,solarr,rhsarr,dmskarr,AMREX_D_DECL(ecx,ecy,ecz),
AMREX_D_DECL(bx,by,bz), redblack);
});
}
} else {
AMREX_ALWAYS_ASSERT(amrlev == 0);
auto const& acoef = m_acoef[mglev];
#ifdef AMREX_USE_OMP
#pragma omp parallel if (Gpu::notInLaunchRegion())
#endif
for (MFIter mfi(sol,TilingIfNotGPU()); mfi.isValid(); ++mfi)
{
const Box& box = mfi.tilebox();
Array4<Real> const& solarr = sol.array(mfi);
Array4<Real const> const& rhsarr = rhs.const_array(mfi);
Array4<int const> const& dmskarr = dmask.const_array(mfi);
Array4<Real const> const& acarr = acoef.const_array(mfi);
AMREX_HOST_DEVICE_FOR_3D(box, i, j, k,
{
mlebndfdlap_gsrb(i,j,k,solarr,rhsarr,dmskarr,acarr,
AMREX_D_DECL(bx,by,bz), redblack);
});
}
}
}
nodalSync(amrlev, mglev, sol);
}
void
MLEBNodeFDLaplacian::normalize (int amrlev, int mglev, MultiFab& mf) const
{
if (amrlev == 0 && mglev > 0) {
Real ahugeinv = Real(1.0) / m_a_huge;
auto const& acoef = m_acoef[mglev];
#ifdef AMREX_USE_OMP
#pragma omp parallel if (Gpu::notInLaunchRegion())
#endif
for (MFIter mfi(mf,TilingIfNotGPU()); mfi.isValid(); ++mfi)
{
const Box& box = mfi.tilebox();
Array4<Real> const& fab = mf.array(mfi);
Array4<Real const> const& acarr = acoef.const_array(mfi);
AMREX_HOST_DEVICE_PARALLEL_FOR_3D(box, i, j, k,
{
if (acarr(i,j,k) > Real(0.0)) {
fab(i,j,k) *= ahugeinv;
}
});
}
}
}
void
MLEBNodeFDLaplacian::fixUpResidualMask (int /*amrlev*/, iMultiFab& /*resmsk*/)
{
amrex::Abort("MLEBNodeFDLaplacian::fixUpResidualMask: TODO");
}
void
MLEBNodeFDLaplacian::compGrad (int amrlev, const Array<MultiFab*,AMREX_SPACEDIM>& grad,
MultiFab& sol, Location /*loc*/) const
{
BL_PROFILE("MLEBNodeFDLaplacian::compGrad()");
AMREX_ASSERT(AMREX_D_TERM(grad[0]->ixType() == IndexType(IntVect(AMREX_D_DECL(0,1,1))),
&& grad[1]->ixType() == IndexType(IntVect(AMREX_D_DECL(1,0,1))),
&& grad[2]->ixType() == IndexType(IntVect(AMREX_D_DECL(1,1,0)))));
const int mglev = 0;
AMREX_D_TERM(const auto dxi = m_geom[amrlev][mglev].InvCellSize(0);,
const auto dyi = m_geom[amrlev][mglev].InvCellSize(1);,
const auto dzi = m_geom[amrlev][mglev].InvCellSize(2);)
const auto phieb = m_s_phi_eb;
auto const& dmask = *m_dirichlet_mask[amrlev][mglev];
auto factory = dynamic_cast<EBFArrayBoxFactory const*>(m_factory[amrlev][mglev].get());
AMREX_ASSERT(factory);
auto const& edgecent = factory->getEdgeCent();
#ifdef AMREX_USE_OMP
#pragma omp parallel if (Gpu::notInLaunchRegion())
#endif
for (MFIter mfi(*grad[0],TilingIfNotGPU()); mfi.isValid(); ++mfi)
{
AMREX_D_TERM(const Box& xbox = mfi.tilebox(IntVect(AMREX_D_DECL(0,1,1)));,
const Box& ybox = mfi.tilebox(IntVect(AMREX_D_DECL(1,0,1)));,
const Box& zbox = mfi.tilebox(IntVect(AMREX_D_DECL(1,1,0)));)
Array4<Real const> const& p = sol.const_array(mfi);
AMREX_D_TERM(Array4<Real> const& gpx = grad[0]->array(mfi);,
Array4<Real> const& gpy = grad[1]->array(mfi);,
Array4<Real> const& gpz = grad[2]->array(mfi);)
Array4<int const> const& dmarr = dmask.const_array(mfi);
bool cutfab = edgecent[0]->ok(mfi);
AMREX_D_TERM(Array4<Real const> const& ecx
= cutfab ? edgecent[0]->const_array(mfi) : Array4<Real const>{};,
Array4<Real const> const& ecy
= cutfab ? edgecent[1]->const_array(mfi) : Array4<Real const>{};,
Array4<Real const> const& ecz
= cutfab ? edgecent[2]->const_array(mfi) : Array4<Real const>{};)
if (phieb == std::numeric_limits<Real>::lowest()) {
auto const& phiebarr = m_phi_eb[amrlev].const_array(mfi);
AMREX_LAUNCH_HOST_DEVICE_LAMBDA_DIM(
xbox, txbox,
{
mlebndfdlap_grad_x(txbox, gpx, p, dmarr, ecx, phiebarr, dxi);
}
, ybox, tybox,
{
mlebndfdlap_grad_y(tybox, gpy, p, dmarr, ecy, phiebarr, dyi);
}
, zbox, tzbox,
{
mlebndfdlap_grad_z(tzbox, gpz, p, dmarr, ecz, phiebarr, dzi);
});
} else {
AMREX_LAUNCH_HOST_DEVICE_LAMBDA_DIM(
xbox, txbox,
{
mlebndfdlap_grad_x(txbox, gpx, p, dmarr, ecx, phieb, dxi);
}
, ybox, tybox,
{
mlebndfdlap_grad_y(tybox, gpy, p, dmarr, ecy, phieb, dyi);
}
, zbox, tzbox,
{
mlebndfdlap_grad_z(tzbox, gpz, p, dmarr, ecz, phieb, dzi);
});
}
}
}
#if defined(AMREX_USE_HYPRE)
void
MLEBNodeFDLaplacian::fillIJMatrix (MFIter const& mfi,
Array4<HypreNodeLap::AtomicInt const> const& gid,
Array4<int const> const& lid,
HypreNodeLap::Int* const ncols,
HypreNodeLap::Int* const cols,
Real* const mat) const
{
amrex::Abort("MLEBNodeFDLaplacian::fillIJMatrix: todo");
}
void
MLEBNodeFDLaplacian::fillRHS (MFIter const& mfi, Array4<int const> const& lid,
Real* const rhs, Array4<Real const> const& bfab) const
{
amrex::Abort("MLEBNodeFDLaplacian::fillRHS: todo");
}
#endif
}
|
A Scalable Work Function Algorithm for the k-Server Problem We provide a novel implementation of the classical Work Function Algorithm (WFA) for the k-server problem. In our implementation, processing a request takes O(n^2 + k^2) time per request, where n is the total number of requests and k is the total number of servers. All prior implementations take Θ(kn^2 + k^3) time per request. Previous approaches process a request by solving a min-cost flow problem. Instead, we show that processing a request can be reduced to an execution of Dijkstra's shortest-path algorithm on a carefully computed weighted graph, leading to the speed-up.
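To make the reduction concrete: the per-request step ultimately amounts to one run of Dijkstra's algorithm on a graph built from the current work-function values. A minimal Python sketch of that subroutine (the graph construction itself is the paper's contribution and is not reproduced here; `graph` is a hypothetical adjacency map with non-negative weights):

import heapq

def dijkstra(graph, source):
    # graph: {node: [(neighbor, non_negative_weight), ...]}
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
|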
// Code generated by protoc-gen-go. DO NOT EDIT.
// source: topodata.proto
/*
Package topodata is a generated protocol buffer package.
It is generated from these files:
topodata.proto
It has these top-level messages:
KeyRange
TabletAlias
Tablet
Shard
Keyspace
ShardReplication
ShardReference
SrvKeyspace
CellInfo
*/
package topodata
import proto "github.com/golang/protobuf/proto"
import fmt "fmt"
import math "math"
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package
// KeyspaceIdType describes the type of the sharding key for a
// range-based sharded keyspace.
type KeyspaceIdType int32
const (
// UNSET is the default value, when range-based sharding is not used.
KeyspaceIdType_UNSET KeyspaceIdType = 0
// UINT64 is when uint64 value is used.
// This is represented as 'unsigned bigint' in mysql
KeyspaceIdType_UINT64 KeyspaceIdType = 1
// BYTES is when an array of bytes is used.
// This is represented as 'varbinary' in mysql
KeyspaceIdType_BYTES KeyspaceIdType = 2
)
var KeyspaceIdType_name = map[int32]string{
0: "UNSET",
1: "UINT64",
2: "BYTES",
}
var KeyspaceIdType_value = map[string]int32{
"UNSET": 0,
"UINT64": 1,
"BYTES": 2,
}
func (x KeyspaceIdType) String() string {
return proto.EnumName(KeyspaceIdType_name, int32(x))
}
func (KeyspaceIdType) EnumDescriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
// TabletType represents the type of a given tablet.
type TabletType int32
const (
// UNKNOWN is not a valid value.
TabletType_UNKNOWN TabletType = 0
// MASTER is the master server for the shard. Only MASTER allows DMLs.
TabletType_MASTER TabletType = 1
// REPLICA is a slave type. It is used to serve live traffic.
// A REPLICA can be promoted to MASTER. A demoted MASTER will go to REPLICA.
TabletType_REPLICA TabletType = 2
// RDONLY (old name) / BATCH (new name) is used to serve traffic for
// long-running jobs. It is a separate type from REPLICA so
// long-running queries don't affect web-like traffic.
TabletType_RDONLY TabletType = 3
TabletType_BATCH TabletType = 3
// SPARE is a type of servers that cannot serve queries, but is available
// in case an extra server is needed.
TabletType_SPARE TabletType = 4
// EXPERIMENTAL is like SPARE, except it can serve queries. This
// type can be used for usages not planned by Vitess, like online
// export to another storage engine.
TabletType_EXPERIMENTAL TabletType = 5
// BACKUP is the type a server goes to when taking a backup. No queries
// can be served in BACKUP mode.
TabletType_BACKUP TabletType = 6
// RESTORE is the type a server uses when restoring a backup, at
// startup time. No queries can be served in RESTORE mode.
TabletType_RESTORE TabletType = 7
// DRAINED is the type a server goes into when used by Vitess tools
// to perform an offline action. It is a serving type (as
// the tools processes may need to run queries), but it's not used
// to route queries from Vitess users. In this state,
// this tablet is dedicated to the process that uses it.
TabletType_DRAINED TabletType = 8
)
var TabletType_name = map[int32]string{
0: "UNKNOWN",
1: "MASTER",
2: "REPLICA",
3: "RDONLY",
// Duplicate value: 3: "BATCH",
4: "SPARE",
5: "EXPERIMENTAL",
6: "BACKUP",
7: "RESTORE",
8: "DRAINED",
}
var TabletType_value = map[string]int32{
"UNKNOWN": 0,
"MASTER": 1,
"REPLICA": 2,
"RDONLY": 3,
"BATCH": 3,
"SPARE": 4,
"EXPERIMENTAL": 5,
"BACKUP": 6,
"RESTORE": 7,
"DRAINED": 8,
}
func (x TabletType) String() string {
return proto.EnumName(TabletType_name, int32(x))
}
func (TabletType) EnumDescriptor() ([]byte, []int) { return fileDescriptor0, []int{1} }
// KeyRange describes a range of sharding keys, when range-based
// sharding is used.
type KeyRange struct {
Start []byte `protobuf:"bytes,1,opt,name=start,proto3" json:"start,omitempty"`
End []byte `protobuf:"bytes,2,opt,name=end,proto3" json:"end,omitempty"`
}
func (m *KeyRange) Reset() { *m = KeyRange{} }
func (m *KeyRange) String() string { return proto.CompactTextString(m) }
func (*KeyRange) ProtoMessage() {}
func (*KeyRange) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
func (m *KeyRange) GetStart() []byte {
if m != nil {
return m.Start
}
return nil
}
func (m *KeyRange) GetEnd() []byte {
if m != nil {
return m.End
}
return nil
}
// TabletAlias is a globally unique tablet identifier.
type TabletAlias struct {
// cell is the cell (or datacenter) the tablet is in
Cell string `protobuf:"bytes,1,opt,name=cell" json:"cell,omitempty"`
// uid is a unique id for this tablet within the shard
// (this is the MySQL server id as well).
Uid uint32 `protobuf:"varint,2,opt,name=uid" json:"uid,omitempty"`
}
func (m *TabletAlias) Reset() { *m = TabletAlias{} }
func (m *TabletAlias) String() string { return proto.CompactTextString(m) }
func (*TabletAlias) ProtoMessage() {}
func (*TabletAlias) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1} }
func (m *TabletAlias) GetCell() string {
if m != nil {
return m.Cell
}
return ""
}
func (m *TabletAlias) GetUid() uint32 {
if m != nil {
return m.Uid
}
return 0
}
// Tablet represents information about a running instance of vttablet.
type Tablet struct {
// alias is the unique name of the tablet.
Alias *TabletAlias `protobuf:"bytes,1,opt,name=alias" json:"alias,omitempty"`
// Fully qualified domain name of the host.
Hostname string `protobuf:"bytes,2,opt,name=hostname" json:"hostname,omitempty"`
// Map of named ports. Normally this should include vt and grpc.
// Going forward, the mysql port will be stored in mysql_port
// instead of here.
// For accessing mysql port, use topoproto.MysqlPort to fetch, and
// topoproto.SetMysqlPort to set. These wrappers will ensure
// legacy behavior is supported.
PortMap map[string]int32 `protobuf:"bytes,4,rep,name=port_map,json=portMap" json:"port_map,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"varint,2,opt,name=value"`
// Keyspace name.
Keyspace string `protobuf:"bytes,5,opt,name=keyspace" json:"keyspace,omitempty"`
// Shard name. If range based sharding is used, it should match
// key_range.
Shard string `protobuf:"bytes,6,opt,name=shard" json:"shard,omitempty"`
// If range based sharding is used, range for the tablet's shard.
KeyRange *KeyRange `protobuf:"bytes,7,opt,name=key_range,json=keyRange" json:"key_range,omitempty"`
// type is the current type of the tablet.
Type TabletType `protobuf:"varint,8,opt,name=type,enum=topodata.TabletType" json:"type,omitempty"`
// It this is set, it is used as the database name instead of the
// normal "vt_" + keyspace.
DbNameOverride string `protobuf:"bytes,9,opt,name=db_name_override,json=dbNameOverride" json:"db_name_override,omitempty"`
// tablet tags
Tags map[string]string `protobuf:"bytes,10,rep,name=tags" json:"tags,omitempty" protobuf_key:"bytes,1,opt,name=key" protobuf_val:"bytes,2,opt,name=value"`
// MySQL hostname.
MysqlHostname string `protobuf:"bytes,12,opt,name=mysql_hostname,json=mysqlHostname" json:"mysql_hostname,omitempty"`
// MySQL port. Use topoproto.MysqlPort and topoproto.SetMysqlPort
// to access this variable. The functions provide support
// for legacy behavior.
MysqlPort int32 `protobuf:"varint,13,opt,name=mysql_port,json=mysqlPort" json:"mysql_port,omitempty"`
}
func (m *Tablet) Reset() { *m = Tablet{} }
func (m *Tablet) String() string { return proto.CompactTextString(m) }
func (*Tablet) ProtoMessage() {}
func (*Tablet) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{2} }
func (m *Tablet) GetAlias() *TabletAlias {
if m != nil {
return m.Alias
}
return nil
}
func (m *Tablet) GetHostname() string {
if m != nil {
return m.Hostname
}
return ""
}
func (m *Tablet) GetPortMap() map[string]int32 {
if m != nil {
return m.PortMap
}
return nil
}
func (m *Tablet) GetKeyspace() string {
if m != nil {
return m.Keyspace
}
return ""
}
func (m *Tablet) GetShard() string {
if m != nil {
return m.Shard
}
return ""
}
func (m *Tablet) GetKeyRange() *KeyRange {
if m != nil {
return m.KeyRange
}
return nil
}
func (m *Tablet) GetType() TabletType {
if m != nil {
return m.Type
}
return TabletType_UNKNOWN
}
func (m *Tablet) GetDbNameOverride() string {
if m != nil {
return m.DbNameOverride
}
return ""
}
func (m *Tablet) GetTags() map[string]string {
if m != nil {
return m.Tags
}
return nil
}
func (m *Tablet) GetMysqlHostname() string {
if m != nil {
return m.MysqlHostname
}
return ""
}
func (m *Tablet) GetMysqlPort() int32 {
if m != nil {
return m.MysqlPort
}
return 0
}
// A Shard contains data about a subset of the data within a keyspace.
type Shard struct {
// No lock is necessary to update this field, when for instance
// TabletExternallyReparented updates this. However, we lock the
// shard for reparenting operations (InitShardMaster,
// PlannedReparentShard,EmergencyReparentShard), to guarantee
// exclusive operation.
MasterAlias *TabletAlias `protobuf:"bytes,1,opt,name=master_alias,json=masterAlias" json:"master_alias,omitempty"`
// key_range is the KeyRange for this shard. It can be unset if:
// - we are not using range-based sharding in this shard.
// - the shard covers the entire keyrange.
// This must match the shard name based on our other conventions, but
// helpful to have it decomposed here.
// Once set at creation time, it is never changed.
KeyRange *KeyRange `protobuf:"bytes,2,opt,name=key_range,json=keyRange" json:"key_range,omitempty"`
// served_types has at most one entry per TabletType
// The keyspace lock is always taken when changing this.
ServedTypes []*Shard_ServedType `protobuf:"bytes,3,rep,name=served_types,json=servedTypes" json:"served_types,omitempty"`
// SourceShards is the list of shards we're replicating from,
// using filtered replication.
// The keyspace lock is always taken when changing this.
SourceShards []*Shard_SourceShard `protobuf:"bytes,4,rep,name=source_shards,json=sourceShards" json:"source_shards,omitempty"`
// Cells is the list of cells that contain tablets for this shard.
// No lock is necessary to update this field.
Cells []string `protobuf:"bytes,5,rep,name=cells" json:"cells,omitempty"`
// tablet_controls has at most one entry per TabletType.
// The keyspace lock is always taken when changing this.
TabletControls []*Shard_TabletControl `protobuf:"bytes,6,rep,name=tablet_controls,json=tabletControls" json:"tablet_controls,omitempty"`
}
func (m *Shard) Reset() { *m = Shard{} }
func (m *Shard) String() string { return proto.CompactTextString(m) }
func (*Shard) ProtoMessage() {}
func (*Shard) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{3} }
func (m *Shard) GetMasterAlias() *TabletAlias {
if m != nil {
return m.MasterAlias
}
return nil
}
func (m *Shard) GetKeyRange() *KeyRange {
if m != nil {
return m.KeyRange
}
return nil
}
func (m *Shard) GetServedTypes() []*Shard_ServedType {
if m != nil {
return m.ServedTypes
}
return nil
}
func (m *Shard) GetSourceShards() []*Shard_SourceShard {
if m != nil {
return m.SourceShards
}
return nil
}
func (m *Shard) GetCells() []string {
if m != nil {
return m.Cells
}
return nil
}
func (m *Shard) GetTabletControls() []*Shard_TabletControl {
if m != nil {
return m.TabletControls
}
return nil
}
// ServedType is an entry in the served_types
type Shard_ServedType struct {
TabletType TabletType `protobuf:"varint,1,opt,name=tablet_type,json=tabletType,enum=topodata.TabletType" json:"tablet_type,omitempty"`
Cells []string `protobuf:"bytes,2,rep,name=cells" json:"cells,omitempty"`
}
func (m *Shard_ServedType) Reset() { *m = Shard_ServedType{} }
func (m *Shard_ServedType) String() string { return proto.CompactTextString(m) }
func (*Shard_ServedType) ProtoMessage() {}
func (*Shard_ServedType) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{3, 0} }
func (m *Shard_ServedType) GetTabletType() TabletType {
if m != nil {
return m.TabletType
}
return TabletType_UNKNOWN
}
func (m *Shard_ServedType) GetCells() []string {
if m != nil {
return m.Cells
}
return nil
}
// SourceShard represents a data source for filtered replication
// across shards. When this is used in a destination shard, the master
// of that shard will run filtered replication.
type Shard_SourceShard struct {
// Uid is the unique ID for this SourceShard object.
Uid uint32 `protobuf:"varint,1,opt,name=uid" json:"uid,omitempty"`
// the source keyspace
Keyspace string `protobuf:"bytes,2,opt,name=keyspace" json:"keyspace,omitempty"`
// the source shard
Shard string `protobuf:"bytes,3,opt,name=shard" json:"shard,omitempty"`
// the source shard keyrange
KeyRange *KeyRange `protobuf:"bytes,4,opt,name=key_range,json=keyRange" json:"key_range,omitempty"`
// the source table list to replicate
Tables []string `protobuf:"bytes,5,rep,name=tables" json:"tables,omitempty"`
}
func (m *Shard_SourceShard) Reset() { *m = Shard_SourceShard{} }
func (m *Shard_SourceShard) String() string { return proto.CompactTextString(m) }
func (*Shard_SourceShard) ProtoMessage() {}
func (*Shard_SourceShard) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{3, 1} }
func (m *Shard_SourceShard) GetUid() uint32 {
if m != nil {
return m.Uid
}
return 0
}
func (m *Shard_SourceShard) GetKeyspace() string {
if m != nil {
return m.Keyspace
}
return ""
}
func (m *Shard_SourceShard) GetShard() string {
if m != nil {
return m.Shard
}
return ""
}
func (m *Shard_SourceShard) GetKeyRange() *KeyRange {
if m != nil {
return m.KeyRange
}
return nil
}
func (m *Shard_SourceShard) GetTables() []string {
if m != nil {
return m.Tables
}
return nil
}
// TabletControl controls tablet's behavior
type Shard_TabletControl struct {
// which tablet type is affected
TabletType TabletType `protobuf:"varint,1,opt,name=tablet_type,json=tabletType,enum=topodata.TabletType" json:"tablet_type,omitempty"`
Cells []string `protobuf:"bytes,2,rep,name=cells" json:"cells,omitempty"`
// what to do
DisableQueryService bool `protobuf:"varint,3,opt,name=disable_query_service,json=disableQueryService" json:"disable_query_service,omitempty"`
BlacklistedTables []string `protobuf:"bytes,4,rep,name=blacklisted_tables,json=blacklistedTables" json:"blacklisted_tables,omitempty"`
}
func (m *Shard_TabletControl) Reset() { *m = Shard_TabletControl{} }
func (m *Shard_TabletControl) String() string { return proto.CompactTextString(m) }
func (*Shard_TabletControl) ProtoMessage() {}
func (*Shard_TabletControl) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{3, 2} }
func (m *Shard_TabletControl) GetTabletType() TabletType {
if m != nil {
return m.TabletType
}
return TabletType_UNKNOWN
}
func (m *Shard_TabletControl) GetCells() []string {
if m != nil {
return m.Cells
}
return nil
}
func (m *Shard_TabletControl) GetDisableQueryService() bool {
if m != nil {
return m.DisableQueryService
}
return false
}
func (m *Shard_TabletControl) GetBlacklistedTables() []string {
if m != nil {
return m.BlacklistedTables
}
return nil
}
// A Keyspace contains data about a keyspace.
type Keyspace struct {
// name of the column used for sharding
// empty if the keyspace is not sharded
ShardingColumnName string `protobuf:"bytes,1,opt,name=sharding_column_name,json=shardingColumnName" json:"sharding_column_name,omitempty"`
// type of the column used for sharding
// UNSET if the keyspace is not sharded
ShardingColumnType KeyspaceIdType `protobuf:"varint,2,opt,name=sharding_column_type,json=shardingColumnType,enum=topodata.KeyspaceIdType" json:"sharding_column_type,omitempty"`
// ServedFrom will redirect the appropriate traffic to
// another keyspace.
ServedFroms []*Keyspace_ServedFrom `protobuf:"bytes,4,rep,name=served_froms,json=servedFroms" json:"served_froms,omitempty"`
}
func (m *Keyspace) Reset() { *m = Keyspace{} }
func (m *Keyspace) String() string { return proto.CompactTextString(m) }
func (*Keyspace) ProtoMessage() {}
func (*Keyspace) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{4} }
func (m *Keyspace) GetShardingColumnName() string {
if m != nil {
return m.ShardingColumnName
}
return ""
}
func (m *Keyspace) GetShardingColumnType() KeyspaceIdType {
if m != nil {
return m.ShardingColumnType
}
return KeyspaceIdType_UNSET
}
func (m *Keyspace) GetServedFroms() []*Keyspace_ServedFrom {
if m != nil {
return m.ServedFroms
}
return nil
}
// ServedFrom indicates a relationship between a TabletType and the
// keyspace name that's serving it.
type Keyspace_ServedFrom struct {
// the tablet type (key for the map)
TabletType TabletType `protobuf:"varint,1,opt,name=tablet_type,json=tabletType,enum=topodata.TabletType" json:"tablet_type,omitempty"`
// the cells to limit this to
Cells []string `protobuf:"bytes,2,rep,name=cells" json:"cells,omitempty"`
// the keyspace name that's serving it
Keyspace string `protobuf:"bytes,3,opt,name=keyspace" json:"keyspace,omitempty"`
}
func (m *Keyspace_ServedFrom) Reset() { *m = Keyspace_ServedFrom{} }
func (m *Keyspace_ServedFrom) String() string { return proto.CompactTextString(m) }
func (*Keyspace_ServedFrom) ProtoMessage() {}
func (*Keyspace_ServedFrom) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{4, 0} }
func (m *Keyspace_ServedFrom) GetTabletType() TabletType {
if m != nil {
return m.TabletType
}
return TabletType_UNKNOWN
}
func (m *Keyspace_ServedFrom) GetCells() []string {
if m != nil {
return m.Cells
}
return nil
}
func (m *Keyspace_ServedFrom) GetKeyspace() string {
if m != nil {
return m.Keyspace
}
return ""
}
// ShardReplication describes the MySQL replication relationships
// within a cell.
type ShardReplication struct {
// Note there can be only one Node in this array
// for a given tablet.
Nodes []*ShardReplication_Node `protobuf:"bytes,1,rep,name=nodes" json:"nodes,omitempty"`
}
func (m *ShardReplication) Reset() { *m = ShardReplication{} }
func (m *ShardReplication) String() string { return proto.CompactTextString(m) }
func (*ShardReplication) ProtoMessage() {}
func (*ShardReplication) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{5} }
func (m *ShardReplication) GetNodes() []*ShardReplication_Node {
if m != nil {
return m.Nodes
}
return nil
}
// Node describes a tablet instance within the cell
type ShardReplication_Node struct {
TabletAlias *TabletAlias `protobuf:"bytes,1,opt,name=tablet_alias,json=tabletAlias" json:"tablet_alias,omitempty"`
}
func (m *ShardReplication_Node) Reset() { *m = ShardReplication_Node{} }
func (m *ShardReplication_Node) String() string { return proto.CompactTextString(m) }
func (*ShardReplication_Node) ProtoMessage() {}
func (*ShardReplication_Node) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{5, 0} }
func (m *ShardReplication_Node) GetTabletAlias() *TabletAlias {
if m != nil {
return m.TabletAlias
}
return nil
}
// ShardReference is used as a pointer from a SrvKeyspace to a Shard
type ShardReference struct {
// Copied from Shard.
Name string `protobuf:"bytes,1,opt,name=name" json:"name,omitempty"`
KeyRange *KeyRange `protobuf:"bytes,2,opt,name=key_range,json=keyRange" json:"key_range,omitempty"`
}
func (m *ShardReference) Reset() { *m = ShardReference{} }
func (m *ShardReference) String() string { return proto.CompactTextString(m) }
func (*ShardReference) ProtoMessage() {}
func (*ShardReference) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{6} }
func (m *ShardReference) GetName() string {
if m != nil {
return m.Name
}
return ""
}
func (m *ShardReference) GetKeyRange() *KeyRange {
if m != nil {
return m.KeyRange
}
return nil
}
// SrvKeyspace is a rollup node for the keyspace itself.
type SrvKeyspace struct {
// The partitions this keyspace is serving, per tablet type.
Partitions []*SrvKeyspace_KeyspacePartition `protobuf:"bytes,1,rep,name=partitions" json:"partitions,omitempty"`
// copied from Keyspace
ShardingColumnName string `protobuf:"bytes,2,opt,name=sharding_column_name,json=shardingColumnName" json:"sharding_column_name,omitempty"`
ShardingColumnType KeyspaceIdType `protobuf:"varint,3,opt,name=sharding_column_type,json=shardingColumnType,enum=topodata.KeyspaceIdType" json:"sharding_column_type,omitempty"`
ServedFrom []*SrvKeyspace_ServedFrom `protobuf:"bytes,4,rep,name=served_from,json=servedFrom" json:"served_from,omitempty"`
}
func (m *SrvKeyspace) Reset() { *m = SrvKeyspace{} }
func (m *SrvKeyspace) String() string { return proto.CompactTextString(m) }
func (*SrvKeyspace) ProtoMessage() {}
func (*SrvKeyspace) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{7} }
func (m *SrvKeyspace) GetPartitions() []*SrvKeyspace_KeyspacePartition {
if m != nil {
return m.Partitions
}
return nil
}
func (m *SrvKeyspace) GetShardingColumnName() string {
if m != nil {
return m.ShardingColumnName
}
return ""
}
func (m *SrvKeyspace) GetShardingColumnType() KeyspaceIdType {
if m != nil {
return m.ShardingColumnType
}
return KeyspaceIdType_UNSET
}
func (m *SrvKeyspace) GetServedFrom() []*SrvKeyspace_ServedFrom {
if m != nil {
return m.ServedFrom
}
return nil
}
type SrvKeyspace_KeyspacePartition struct {
// The type this partition applies to.
ServedType TabletType `protobuf:"varint,1,opt,name=served_type,json=servedType,enum=topodata.TabletType" json:"served_type,omitempty"`
// List of non-overlapping continuous shards sorted by range.
ShardReferences []*ShardReference `protobuf:"bytes,2,rep,name=shard_references,json=shardReferences" json:"shard_references,omitempty"`
}
func (m *SrvKeyspace_KeyspacePartition) Reset() { *m = SrvKeyspace_KeyspacePartition{} }
func (m *SrvKeyspace_KeyspacePartition) String() string { return proto.CompactTextString(m) }
func (*SrvKeyspace_KeyspacePartition) ProtoMessage() {}
func (*SrvKeyspace_KeyspacePartition) Descriptor() ([]byte, []int) {
return fileDescriptor0, []int{7, 0}
}
func (m *SrvKeyspace_KeyspacePartition) GetServedType() TabletType {
if m != nil {
return m.ServedType
}
return TabletType_UNKNOWN
}
func (m *SrvKeyspace_KeyspacePartition) GetShardReferences() []*ShardReference {
if m != nil {
return m.ShardReferences
}
return nil
}
// ServedFrom indicates a relationship between a TabletType and the
// keyspace name that's serving it.
type SrvKeyspace_ServedFrom struct {
// the tablet type
TabletType TabletType `protobuf:"varint,1,opt,name=tablet_type,json=tabletType,enum=topodata.TabletType" json:"tablet_type,omitempty"`
// the keyspace name that's serving it
Keyspace string `protobuf:"bytes,2,opt,name=keyspace" json:"keyspace,omitempty"`
}
func (m *SrvKeyspace_ServedFrom) Reset() { *m = SrvKeyspace_ServedFrom{} }
func (m *SrvKeyspace_ServedFrom) String() string { return proto.CompactTextString(m) }
func (*SrvKeyspace_ServedFrom) ProtoMessage() {}
func (*SrvKeyspace_ServedFrom) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{7, 1} }
func (m *SrvKeyspace_ServedFrom) GetTabletType() TabletType {
if m != nil {
return m.TabletType
}
return TabletType_UNKNOWN
}
func (m *SrvKeyspace_ServedFrom) GetKeyspace() string {
if m != nil {
return m.Keyspace
}
return ""
}
// CellInfo contains information about a cell. CellInfo objects are
// stored in the global topology server, and describe how to reach
// local topology servers.
type CellInfo struct {
// ServerAddress contains the address of the server for the cell.
// The syntax of this field is topology implementation specific.
// For instance, for Zookeeper, it is a comma-separated list of
// server addresses.
ServerAddress string `protobuf:"bytes,1,opt,name=server_address,json=serverAddress" json:"server_address,omitempty"`
// Root is the path to store data in. It is only used when talking
// to server_address.
Root string `protobuf:"bytes,2,opt,name=root" json:"root,omitempty"`
}
func (m *CellInfo) Reset() { *m = CellInfo{} }
func (m *CellInfo) String() string { return proto.CompactTextString(m) }
func (*CellInfo) ProtoMessage() {}
func (*CellInfo) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{8} }
func (m *CellInfo) GetServerAddress() string {
if m != nil {
return m.ServerAddress
}
return ""
}
func (m *CellInfo) GetRoot() string {
if m != nil {
return m.Root
}
return ""
}
func init() {
proto.RegisterType((*KeyRange)(nil), "topodata.KeyRange")
proto.RegisterType((*TabletAlias)(nil), "topodata.TabletAlias")
proto.RegisterType((*Tablet)(nil), "topodata.Tablet")
proto.RegisterType((*Shard)(nil), "topodata.Shard")
proto.RegisterType((*Shard_ServedType)(nil), "topodata.Shard.ServedType")
proto.RegisterType((*Shard_SourceShard)(nil), "topodata.Shard.SourceShard")
proto.RegisterType((*Shard_TabletControl)(nil), "topodata.Shard.TabletControl")
proto.RegisterType((*Keyspace)(nil), "topodata.Keyspace")
proto.RegisterType((*Keyspace_ServedFrom)(nil), "topodata.Keyspace.ServedFrom")
proto.RegisterType((*ShardReplication)(nil), "topodata.ShardReplication")
proto.RegisterType((*ShardReplication_Node)(nil), "topodata.ShardReplication.Node")
proto.RegisterType((*ShardReference)(nil), "topodata.ShardReference")
proto.RegisterType((*SrvKeyspace)(nil), "topodata.SrvKeyspace")
proto.RegisterType((*SrvKeyspace_KeyspacePartition)(nil), "topodata.SrvKeyspace.KeyspacePartition")
proto.RegisterType((*SrvKeyspace_ServedFrom)(nil), "topodata.SrvKeyspace.ServedFrom")
proto.RegisterType((*CellInfo)(nil), "topodata.CellInfo")
proto.RegisterEnum("topodata.KeyspaceIdType", KeyspaceIdType_name, KeyspaceIdType_value)
proto.RegisterEnum("topodata.TabletType", TabletType_name, TabletType_value)
}
func init() { proto.RegisterFile("topodata.proto", fileDescriptor0) }
var fileDescriptor0 = []byte{
// 1115 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0xac, 0x56, 0x5f, 0x6f, 0xe2, 0x46,
0x10, 0xaf, 0xc1, 0x10, 0x18, 0x03, 0xe7, 0x6c, 0x73, 0x95, 0xe5, 0xea, 0x54, 0x84, 0x54, 0x15,
0x5d, 0x55, 0x5a, 0x71, 0xbd, 0x36, 0x3a, 0xa9, 0x52, 0x08, 0xf1, 0xf5, 0xc8, 0x1f, 0x42, 0x17,
0xa2, 0x36, 0x4f, 0x96, 0x83, 0x37, 0x39, 0x2b, 0xc6, 0xf6, 0xed, 0x2e, 0x91, 0xf8, 0x0c, 0xf7,
0xd0, 0x7b, 0xee, 0x37, 0xe9, 0x53, 0x1f, 0xfb, 0xb5, 0xaa, 0xdd, 0xb5, 0xc1, 0x90, 0x26, 0xcd,
0x55, 0x79, 0xca, 0xcc, 0xce, 0x1f, 0xcf, 0xfc, 0xe6, 0x37, 0x13, 0xa0, 0xc1, 0xe3, 0x24, 0xf6,
0x3d, 0xee, 0x75, 0x12, 0x1a, 0xf3, 0x18, 0x55, 0x32, 0xbd, 0xd5, 0x85, 0xca, 0x11, 0x59, 0x60,
0x2f, 0xba, 0x22, 0x68, 0x07, 0x4a, 0x8c, 0x7b, 0x94, 0x5b, 0x5a, 0x53, 0x6b, 0xd7, 0xb0, 0x52,
0x90, 0x09, 0x45, 0x12, 0xf9, 0x56, 0x41, 0xbe, 0x09, 0xb1, 0xf5, 0x02, 0x8c, 0x89, 0x77, 0x11,
0x12, 0xde, 0x0b, 0x03, 0x8f, 0x21, 0x04, 0xfa, 0x94, 0x84, 0xa1, 0x8c, 0xaa, 0x62, 0x29, 0x8b,
0xa0, 0x79, 0xa0, 0x82, 0xea, 0x58, 0x88, 0xad, 0x3f, 0x75, 0x28, 0xab, 0x28, 0xf4, 0x35, 0x94,
0x3c, 0x11, 0x29, 0x23, 0x8c, 0xee, 0xd3, 0xce, 0xb2, 0xba, 0x5c, 0x5a, 0xac, 0x7c, 0x90, 0x0d,
0x95, 0xb7, 0x31, 0xe3, 0x91, 0x37, 0x23, 0x32, 0x5d, 0x15, 0x2f, 0x75, 0xb4, 0x0b, 0x95, 0x24,
0xa6, 0xdc, 0x9d, 0x79, 0x89, 0xa5, 0x37, 0x8b, 0x6d, 0xa3, 0xfb, 0x6c, 0x33, 0x57, 0x67, 0x14,
0x53, 0x7e, 0xe2, 0x25, 0x4e, 0xc4, 0xe9, 0x02, 0x6f, 0x25, 0x4a, 0x13, 0x59, 0xaf, 0xc9, 0x82,
0x25, 0xde, 0x94, 0x58, 0x25, 0x95, 0x35, 0xd3, 0x25, 0x0c, 0x6f, 0x3d, 0xea, 0x5b, 0x65, 0x69,
0x50, 0x0a, 0xfa, 0x16, 0xaa, 0xd7, 0x64, 0xe1, 0x52, 0x81, 0x94, 0xb5, 0x25, 0x0b, 0x47, 0xab,
0x8f, 0x65, 0x18, 0xca, 0x34, 0x0a, 0xcd, 0x36, 0xe8, 0x7c, 0x91, 0x10, 0xab, 0xd2, 0xd4, 0xda,
0x8d, 0xee, 0xce, 0x66, 0x61, 0x93, 0x45, 0x42, 0xb0, 0xf4, 0x40, 0x6d, 0x30, 0xfd, 0x0b, 0x57,
0x74, 0xe4, 0xc6, 0x37, 0x84, 0xd2, 0xc0, 0x27, 0x56, 0x55, 0x7e, 0xbb, 0xe1, 0x5f, 0x0c, 0xbd,
0x19, 0x39, 0x4d, 0x5f, 0x51, 0x07, 0x74, 0xee, 0x5d, 0x31, 0x0b, 0x64, 0xb3, 0xf6, 0xad, 0x66,
0x27, 0xde, 0x15, 0x53, 0x9d, 0x4a, 0x3f, 0xf4, 0x25, 0x34, 0x66, 0x0b, 0xf6, 0x2e, 0x74, 0x97,
0x10, 0xd6, 0x64, 0xde, 0xba, 0x7c, 0x7d, 0x93, 0xe1, 0xf8, 0x0c, 0x40, 0xb9, 0x09, 0x78, 0xac,
0x7a, 0x53, 0x6b, 0x97, 0x70, 0x55, 0xbe, 0x08, 0xf4, 0xec, 0x57, 0x50, 0xcb, 0xa3, 0x28, 0x86,
0x7b, 0x4d, 0x16, 0xe9, 0xbc, 0x85, 0x28, 0x20, 0xbb, 0xf1, 0xc2, 0xb9, 0x9a, 0x50, 0x09, 0x2b,
0xe5, 0x55, 0x61, 0x57, 0xb3, 0x7f, 0x84, 0xea, 0xb2, 0xa8, 0xff, 0x0a, 0xac, 0xe6, 0x02, 0x0f,
0xf5, 0x4a, 0xd1, 0xd4, 0x0f, 0xf5, 0x8a, 0x61, 0xd6, 0x5a, 0xef, 0xcb, 0x50, 0x1a, 0xcb, 0x29,
0xec, 0x42, 0x6d, 0xe6, 0x31, 0x4e, 0xa8, 0xfb, 0x00, 0x06, 0x19, 0xca, 0x55, 0xb1, 0x74, 0x6d,
0x7e, 0x85, 0x07, 0xcc, 0xef, 0x27, 0xa8, 0x31, 0x42, 0x6f, 0x88, 0xef, 0x8a, 0x21, 0x31, 0xab,
0xb8, 0x89, 0xb9, 0xac, 0xa8, 0x33, 0x96, 0x3e, 0x72, 0x9a, 0x06, 0x5b, 0xca, 0x0c, 0xed, 0x41,
0x9d, 0xc5, 0x73, 0x3a, 0x25, 0xae, 0xe4, 0x0f, 0x4b, 0x09, 0xfa, 0xf9, 0xad, 0x78, 0xe9, 0x24,
0x65, 0x5c, 0x63, 0x2b, 0x85, 0x09, 0x6c, 0xc4, 0x2e, 0x31, 0xab, 0xd4, 0x2c, 0x0a, 0x6c, 0xa4,
0x82, 0x5e, 0xc3, 0x13, 0x2e, 0x7b, 0x74, 0xa7, 0x71, 0xc4, 0x69, 0x1c, 0x32, 0xab, 0xbc, 0x49,
0x7d, 0x95, 0x59, 0x41, 0xd1, 0x57, 0x5e, 0xb8, 0xc1, 0xf3, 0x2a, 0xb3, 0xcf, 0x01, 0x56, 0xa5,
0xa3, 0x97, 0x60, 0xa4, 0x59, 0x25, 0x67, 0xb5, 0x7b, 0x38, 0x0b, 0x7c, 0x29, 0xaf, 0x4a, 0x2c,
0xe4, 0x4a, 0xb4, 0xff, 0xd0, 0xc0, 0xc8, 0xb5, 0x95, 0x1d, 0x03, 0x6d, 0x79, 0x0c, 0xd6, 0xd6,
0xaf, 0x70, 0xd7, 0xfa, 0x15, 0xef, 0x5c, 0x3f, 0xfd, 0x01, 0xe3, 0xfb, 0x0c, 0xca, 0xb2, 0xd0,
0x0c, 0xbe, 0x54, 0xb3, 0xff, 0xd2, 0xa0, 0xbe, 0x86, 0xcc, 0xa3, 0xf6, 0x8e, 0xba, 0xf0, 0xd4,
0x0f, 0x98, 0xf0, 0x72, 0xdf, 0xcd, 0x09, 0x5d, 0xb8, 0x82, 0x13, 0xc1, 0x94, 0xc8, 0x6e, 0x2a,
0xf8, 0xd3, 0xd4, 0xf8, 0x8b, 0xb0, 0x8d, 0x95, 0x09, 0x7d, 0x03, 0xe8, 0x22, 0xf4, 0xa6, 0xd7,
0x61, 0xc0, 0xb8, 0xa0, 0x9b, 0x2a, 0x5b, 0x97, 0x69, 0xb7, 0x73, 0x16, 0x59, 0x08, 0x6b, 0xfd,
0x5d, 0x90, 0x37, 0x5b, 0xa1, 0xf5, 0x1d, 0xec, 0x48, 0x80, 0x82, 0xe8, 0xca, 0x9d, 0xc6, 0xe1,
0x7c, 0x16, 0xc9, 0x43, 0x92, 0xee, 0x18, 0xca, 0x6c, 0x7d, 0x69, 0x12, 0xb7, 0x04, 0x1d, 0xde,
0x8e, 0x90, 0x7d, 0x17, 0x64, 0xdf, 0xd6, 0x1a, 0xa8, 0xf2, 0x1b, 0x03, 0xc5, 0xee, 0x8d, 0x5c,
0x12, 0x83, 0xbd, 0xe5, 0x8e, 0x5c, 0xd2, 0x78, 0xc6, 0x6e, 0x1f, 0xe1, 0x2c, 0x47, 0xba, 0x26,
0xaf, 0x69, 0x3c, 0xcb, 0xd6, 0x44, 0xc8, 0xcc, 0x9e, 0x67, 0x34, 0x14, 0xea, 0xe3, 0x8e, 0x22,
0x4f, 0xb2, 0xe2, 0x3a, 0xc9, 0xd4, 0x75, 0x69, 0xbd, 0xd7, 0xc0, 0x54, 0x9b, 0x47, 0x92, 0x30,
0x98, 0x7a, 0x3c, 0x88, 0x23, 0xf4, 0x12, 0x4a, 0x51, 0xec, 0x13, 0x71, 0x5b, 0x44, 0x33, 0x5f,
0x6c, 0xac, 0x55, 0xce, 0xb5, 0x33, 0x8c, 0x7d, 0x82, 0x95, 0xb7, 0xbd, 0x07, 0xba, 0x50, 0xc5,
0x85, 0x4a, 0x5b, 0x78, 0xc8, 0x85, 0xe2, 0x2b, 0xa5, 0x75, 0x06, 0x8d, 0xf4, 0x0b, 0x97, 0x84,
0x92, 0x68, 0x4a, 0xc4, 0x7f, 0xd6, 0xdc, 0x30, 0xa5, 0xfc, 0xd1, 0x77, 0xac, 0xf5, 0x41, 0x07,
0x63, 0x4c, 0x6f, 0x96, 0x8c, 0xf9, 0x19, 0x20, 0xf1, 0x28, 0x0f, 0x44, 0x07, 0x59, 0x93, 0x5f,
0xe5, 0x9a, 0x5c, 0xb9, 0x2e, 0xa7, 0x37, 0xca, 0xfc, 0x71, 0x2e, 0xf4, 0x4e, 0xea, 0x15, 0x3e,
0x9a, 0x7a, 0xc5, 0xff, 0x41, 0xbd, 0x1e, 0x18, 0x39, 0xea, 0xa5, 0xcc, 0x6b, 0xfe, 0x7b, 0x1f,
0x39, 0xf2, 0xc1, 0x8a, 0x7c, 0xf6, 0xef, 0x1a, 0x6c, 0xdf, 0x6a, 0x51, 0x70, 0x30, 0x77, 0xf7,
0xef, 0xe7, 0xe0, 0xea, 0xe0, 0xa3, 0x3e, 0x98, 0xb2, 0x4a, 0x97, 0x66, 0xe3, 0x53, 0x74, 0x34,
0xf2, 0x7d, 0xad, 0xcf, 0x17, 0x3f, 0x61, 0x6b, 0x3a, 0xb3, 0xdd, 0xc7, 0xd8, 0x86, 0x7b, 0x8e,
0xeb, 0xa1, 0x5e, 0x29, 0x99, 0xe5, 0x96, 0x03, 0x95, 0x3e, 0x09, 0xc3, 0x41, 0x74, 0x19, 0x8b,
0x9f, 0x08, 0xb2, 0x0b, 0xea, 0x7a, 0xbe, 0x4f, 0x09, 0x63, 0x29, 0xdb, 0xea, 0xea, 0xb5, 0xa7,
0x1e, 0x05, 0x15, 0x69, 0x1c, 0xf3, 0x34, 0xa1, 0x94, 0x9f, 0x77, 0xa1, 0xb1, 0x3e, 0x28, 0x54,
0x85, 0xd2, 0xd9, 0x70, 0xec, 0x4c, 0xcc, 0x4f, 0x10, 0x40, 0xf9, 0x6c, 0x30, 0x9c, 0xfc, 0xf0,
0xbd, 0xa9, 0x89, 0xe7, 0xfd, 0xf3, 0x89, 0x33, 0x36, 0x0b, 0xcf, 0x3f, 0x68, 0x00, 0xab, 0xba,
0x91, 0x01, 0x5b, 0x67, 0xc3, 0xa3, 0xe1, 0xe9, 0xaf, 0x43, 0x15, 0x72, 0xd2, 0x1b, 0x4f, 0x1c,
0x6c, 0x6a, 0xc2, 0x80, 0x9d, 0xd1, 0xf1, 0xa0, 0xdf, 0x33, 0x0b, 0xc2, 0x80, 0x0f, 0x4e, 0x87,
0xc7, 0xe7, 0x66, 0x51, 0xe6, 0xea, 0x4d, 0xfa, 0x6f, 0x94, 0x38, 0x1e, 0xf5, 0xb0, 0x63, 0xea,
0xc8, 0x84, 0x9a, 0xf3, 0xdb, 0xc8, 0xc1, 0x83, 0x13, 0x67, 0x38, 0xe9, 0x1d, 0x9b, 0x25, 0x11,
0xb3, 0xdf, 0xeb, 0x1f, 0x9d, 0x8d, 0xcc, 0xb2, 0x4a, 0x36, 0x9e, 0x9c, 0x62, 0xc7, 0xdc, 0x12,
0xca, 0x01, 0xee, 0x0d, 0x86, 0xce, 0x81, 0x59, 0xb1, 0x0b, 0xa6, 0xb6, 0xbf, 0x0d, 0x4f, 0x82,
0xb8, 0x73, 0x13, 0x70, 0xc2, 0x98, 0xfa, 0x7d, 0x7c, 0x51, 0x96, 0x7f, 0x5e, 0xfc, 0x13, 0x00,
0x00, 0xff, 0xff, 0xba, 0xda, 0xa7, 0xb1, 0x38, 0x0b, 0x00, 0x00,
}
|
1. Field of the Invention
The present invention relates generally to a treatment device for medical application to body tissue for maintaining antisepsis in long-term indwelling medical devices and, more particularly, it relates to a treatment device which electrolytically infuses silver ions from the surface of an indwelling device into the surrounding tissues under remote control and power delivery from an external source, thereby providing a means of long-term antisepsis which is active and controlled.
2. Description of the Prior Art
There have been attempts in the past to develop hydrophilic or lubricious coatings, thrombo-resistant coatings, antibiotic coatings, and even metallic coatings for long-term indwelling medical devices. These attempts have been in an effort to deal with the pervasive issue of sepsis in indwelling medical devices caused primarily by the surface colonization of biofilms. Biofilms are layers or colonies of bacteria which adhere to a foreign surface and protect themselves with a slime layer rendering the bacteria that breed within the slime thousands of times more resistant to known antibacterial agents.
The hydrophilic and lubricious coatings seek to stop the adherence of these films and thus thwart the development of the colonies. The thrombo-resistant coatings seek to resist the protein layer build-ups that can host the colonies. The metallic coatings and antibiotic coatings seek to kill the bacteria as they attach. Unfortunately, the coatings have quite clearly failed, for various reasons, to provide sufficient protection by any of those means. Sepsis rates for indwelling devices (catheters, ports, prosthetic joints, meshes, and shunts) still range from 5% to 15%.
The failure of these techniques rests in part on the fact that they are all passive. That is to say, the techniques do not force interaction between the surface and the bacteria. A silver-coated surface elutes too slowly to repel or effectively inhibit the colonizing bacteria. While special silver coatings have been developed which have high rates of elution, such coatings exhaust their capacity within a few days by continuously eluting at rates that cannot be sustained. Antibiotic coatings effectively kill the layer of colonizing bacteria that adhere to the surface. Ironically, this activity is its own undoing, as the dead bacteria serve as an insulating layer to protect and facilitate adherence of the next layer. None of the existing systems offer any means for effectively penetrating the biofilm layer that protects the bacterial colonies and hence they offer extremely limited effectiveness.
Additionally, since the release of agent from these existing technologies is dependent upon the interstitial environment, the dose over time changes in some unknown manner, varying with the person, with time, and with the adherence of proteins. As a fibrin sheath builds up on the surface and the material elutes at rates determined by the physiology of the particular surrounding tissue and individual, the actual dose, rate, and effectiveness diminish and become unknown. This renders the present techniques quite ineffective and difficult or impossible to clinically verify.
The present invention is a device for maintaining antisepsis of tissue of a body. The device comprises at least a first active surface contactable with the tissue and at least a second active surface contactable with the tissue. A control system is electrically connected to the first active surface and the second active surface, with the control system creating an electric field between the first active surface and the second active surface through the body. Silver, an antimicrobial substance, coats or composes at least a portion of the first active surface, with the substance being ionized upon application of the electric field, thereby perfusing the substance into the tissue.
In yet another embodiment of the present invention, the device further comprises using a bipolar field at low frequencies, allowing both the first active surface and the second active surface to act as anodes. Preferably, the device further comprises using a bipolar field at low frequencies to maintain a surface-only distribution of the substance.
In still another embodiment of the present invention, the method further comprises providing an embedded microcontroller or other control circuitry for controlling the delivery rate of the substance, dose of the substance, and selected active surfaces.
In these embodiments, the therapeutic protocol administered by the embedded microcontroller is controlled by embedded firmware within the implanted device. Alternatively, the therapeutic protocol is modified or communicated entirely through the modulation of the applied electromagnetic signal. The electromagnetic signal serves to allow communication to the embedded microcontroller, which is controlling the administration of silver ions, and to provide power for the operation of the system. This is accomplished by modulating the RF electromagnetic field, which is being applied to the system transcutaneously. The modulation of the applied field is detected by the circuitry of the system and then decoded by the microcontroller. The modulation could be amplitude based, frequency shift keying based, phase shift keying based, or any other modulation technique known to one skilled in the art.
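As a purely illustrative sketch (sampling parameters and function names are hypothetical, not taken from the disclosure), the amplitude-based option can be decoded with simple envelope detection followed by per-bit averaging:

import numpy as np

def ask_demodulate(samples, fs, bit_rate):
    """Recover on/off-keyed bits via rectify-and-average envelope detection."""
    n_per_bit = int(fs / bit_rate)
    env = np.abs(samples)              # crude envelope: full-wave rectification
    threshold = env.mean()             # assumes a roughly balanced bit stream
    return [1 if env[i:i + n_per_bit].mean() > threshold else 0
            for i in range(0, len(env) - n_per_bit + 1, n_per_bit)]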
The system of the present invention derives power from this applied RF electromagnetic field, an internal battery, or any other power source. The system then generates the higher voltages necessary to force the ionization of silver from the active surfaces and the penetration into the surrounding tissue or biofilms by means of a controllable DC-to-DC converter. The converter is operated by the embedded microcontroller and can be adjusted in real time to produce the required waveforms.
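For illustration only (the treatment current and duration below are assumed, not taken from the disclosure), the silver mass ionized by such a controlled current follows directly from Faraday's law of electrolysis, m = I*t*M/(z*F):

FARADAY = 96485.0   # C/mol
M_AG = 107.87       # g/mol, molar mass of silver
Z_AG = 1            # Ag -> Ag+ + e-

def silver_dose_mg(current_ma, minutes):
    """Mass of silver (mg) ionized by a constant current over one treatment."""
    charge_c = (current_ma / 1000.0) * minutes * 60.0
    return charge_c * M_AG / (Z_AG * FARADAY) * 1000.0

# Assumed example: 50 microamps applied for a ten-minute treatment window.
print(f"{silver_dose_mg(0.05, 10):.4f} mg Ag per treatment")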
In the case of a battery-operated system, the system is pre-programmed to come out of a stand-by mode (wake up) and operate, treating the surrounding tissue according to the protocol required for the particular application. After treatment (typically five (5) to fifteen (15) minutes), the system goes back to sleep (stand-by), thus conserving battery power and surface silver until it is required to operate again. This can be daily or weekly, depending on the expected re-colonization rate of the device. |
Development of high intensity neutron source at the European Spallation Source The European Spallation Source being constructed in Lund, Sweden will provide the user community with a neutron source of unprecedented brightness. By 2025, a suite of 15 instruments will be served by a high-brightness moderator system placed above the spallation target. The ESS infrastructure, consisting of the proton linac, the target station, and the instrument halls, allows for implementation of a second source below the spallation target. We propose to develop a second neutron source with a high-intensity moderator able to deliver a larger total cold neutron flux and provide high intensities at longer wavelengths in the spectral regions of Cold (4-10 Å), Very Cold, and Ultra Cold (several 100 Å) neutrons, as opposed to the Thermal and Cold neutrons delivered by the top moderator. Offering unprecedented brilliance, flux, and spectral range in a single facility, this upgrade will make ESS the most versatile neutron source in the world and will further strengthen the leadership of Europe in neutron science. The new source will boost several areas of condensed matter research, such as imaging and spin-echo, and will provide outstanding opportunities in fundamental physics investigations of the laws of nature at a precision unattainable anywhere else. At the heart of the proposed system is a volumetric liquid deuterium moderator. Based on proven technology, its performance will be optimized in a detailed engineering study. This moderator will be complemented by secondary sources to provide intense beams of Very- and Ultra-Cold Neutrons. |