An Eye for a Bird, by Eric Hosking with Frank W. Lane. Hutchinson, £3.25. First Catch Your Tiger, by Oliver Graham-Jones. Collins, £1.80.
Reviewing this book for another journal, I described it as 'notable for its clarity and its humane and balanced approach'. Overall, this last is certainly true, but we all of us have our PPPs (People's Pet Places), about which we are never quite sane; the Tsavo is obviously one of Toni Harthoorn's, and here his judgment does not seem to be so sure. His own views on elephant reduction are stated clearly, but does he give the contrary arguments full weight? Perhaps we can judge this better after reading in the last ORYX (December 1970) Dr Laws's comments on Dr Glover's recent article on Tsavo. Dr Laws was the Director of the Tsavo Research Project to whom Dr Harthoorn refers, although not by name.
Postscript. What is it about Kenya (or possibly East Africa) that causes so many head-on clashes between individuals? Why do not the scientists there get together more and discuss their work as it proceeds? Where this is done it is surprising how often a jointly held view emerges, and one on which the Administration can act. When the poor Administration is faced with two completely conflicting arguments, things simply drift or someone tosses a coin. Either way, not a very satisfactory outcome.
import { Api, JsonRpc } from "eosjs";
import { JsSignatureProvider } from "eosjs/dist/eosjs-jssig";
const debug = require("debug")("api.dapp");
// Jungle testnet keys
export const dappVault = {
ipfsRepo: '/nemo',
apiUrl: "https://api.jungle.alohaeos.com:443",
keys: [
"<KEY>",
"<KEY>",
"<KEY>",
"<KEY>"
],
table: 'nemotablemk2',
account: {
eosiotoken: "eosio.token",
contract: "nemoeosmark1",
captain: "nemotestero3",
producer: "nemotestero4"
}
}
export const encodeNemoTXValue = (id: string, blockNum: any) => `${id}.NEMOTX.${blockNum}`
export const decodeNemoTXValue = (tx: string) => tx.split('.NEMOTX.')
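// Claim a catch event: read the record from the contract table, make sure it
// is still unclaimed, pay the seller and the contract's tax via two
// eosio.token transfers, register the claim on-chain, and finally fetch the
// referenced EPCIS document from IPFS.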
export const claimCatchEvent = async ({ txId, apiUrl }: any, ipfs: any) => {
const { account, table, keys } = dappVault;
const signatureProvider = new JsSignatureProvider(keys);
const rpc = new JsonRpc(apiUrl);
const api = new Api({ rpc, signatureProvider });
const tableData = await rpc.get_table_rows({
json: true, // Get the response as json
code: account.contract, // Contract that we target
scope: account.contract, // Account that owns the data
table, // Table name
lower_bound: txId, // Table primary key value
limit: 1, // Limit to 1 to fetch only the row matching txId
show_payer: false, // Optional: Show ram payer
})
debug(tableData);
const [{
buyer,
seller,
price,
tax,
value
}] = tableData.rows
if (buyer.length > 0) {
throw new Error('Record is already claimed')
}
const result = await api.transact(
{
actions: [
{
account: account.eosiotoken,
name: "transfer",
authorization: [
{
actor: account.producer,
permission: "active"
}
],
data: {
from: account.producer,
to: seller,
quantity: price,
memo: `payment for ${value}`
}
},
{
account: account.eosiotoken,
name: "transfer",
authorization: [
{
actor: account.producer,
permission: "active"
}
],
data: {
from: account.producer,
to: account.contract,
quantity: tax,
memo: `tax for ${value}`
}
}
]
},
{
blocksBehind: 3,
expireSeconds: 30
}
);
const receipt = encodeNemoTXValue(result.transaction_id, result.processed.block_num)
const claimResult = await api.transact(
{
actions: [
{
account: account.contract,
name: "claim",
authorization: [
{
actor: account.producer,
permission: "active"
}
],
data: {
buyer: account.producer,
id: txId,
receipt
}
}
]
},
{
blocksBehind: 3,
expireSeconds: 30
}
);
const epcisDataBuffer = await ipfs.cat(value)
return {
epcisData: epcisDataBuffer.toString('utf8'),
originId: encodeNemoTXValue(claimResult.transaction_id, claimResult.processed.block_num)
}
}
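// Publish a catch event: add the raw EPCIS body to IPFS, then submit the
// resulting content hash and asking price to the contract as the captain.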
export const sendCatchEvent = async ({
apiUrl,
price
}: any, ipfs: any, body: any) => {
const { account, keys } = dappVault;
const content = Buffer.from(body);
debug(ipfs);
const results = await ipfs.add(content).next();
debug(results);
const { path: hash } = results.value;
const signatureProvider = new JsSignatureProvider(keys);
const rpc = new JsonRpc(apiUrl);
const api = new Api({ rpc, signatureProvider });
const result = await api.transact(
{
actions: [
{
account: account.contract,
name: "submit",
authorization: [
{
actor: account.captain,
permission: "active"
}
],
data: {
seller: account.captain,
value: hash,
price,
}
}
]
},
{
blocksBehind: 3,
expireSeconds: 30
}
);
return {
ipfsHash: hash,
originId: encodeNemoTXValue(result.transaction_id, result.processed.block_num)
}
};
from typing import Union
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch import FloatTensor
from torch.autograd import Variable
NetIO = Union[FloatTensor, Variable]
class InvariantModel(nn.Module):
def __init__(self, phi: nn.Module, rho: nn.Module, length):
super().__init__()
self.phi = phi
self.rho = rho
self.length = length
def forward(self, x: NetIO) -> NetIO:
x = torch.reshape(x, (-1,1,28,28))
# compute the representation for each data point
x = self.phi.forward(x)
x = torch.reshape(x, (-1,self.length,10))
# sum up the representations over each set
# x is now (batch, length, 10); summing over dim=1 collapses the `length`
# per-element representations into a single vector per set while keeping
# the tensor 3D.
# x = torch.sum(x, dim=0, keepdim=True)
x = torch.sum(x, dim=1, keepdim=True)
x = torch.reshape(x, (-1,10))
# compute the output from the pooled set representation
out = self.rho.forward(x)
# return both the set representation and the output, L2-normalized
return F.normalize(x, dim=-1), F.normalize(out, dim=-1)
class SmallMNISTCNNPhi(nn.Module):
def __init__(self):
super().__init__()
self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
self.conv2_drop = nn.Dropout2d()
self.fc1 = nn.Linear(320, 50)
self.fc1_drop = nn.Dropout()  # plain dropout: fc1's output is 2D, not a 4D feature map
self.fc2 = nn.Linear(50, 10)
def forward(self, x: NetIO) -> NetIO:
x = F.relu(F.max_pool2d(self.conv1(x), 2))
x = self.conv2_drop(self.conv2(x))
x = F.relu(F.max_pool2d(x, 2))
x = x.view(-1, 320)
x = F.relu(self.fc1(x))
x = self.fc1_drop(x)
x = F.relu(self.fc2(x))
return x
class SmallRho(nn.Module):
def __init__(self, input_size: int, output_size: int = 1):
super().__init__()
self.input_size = input_size
self.output_size = output_size
self.fc1 = nn.Linear(self.input_size, 10)
self.fc2 = nn.Linear(10, self.output_size)
def forward(self, x: NetIO) -> NetIO:
x = F.relu(self.fc1(x))
x = self.fc2(x)
# x = F.softmax(self.fc2(x))
return x
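A minimal usage sketch (the set length and shapes below are assumptions inferred from the reshapes in InvariantModel.forward, not part of the original code):

if __name__ == "__main__":
    phi = SmallMNISTCNNPhi()
    rho = SmallRho(input_size=10, output_size=10)
    model = InvariantModel(phi=phi, rho=rho, length=5)
    x = torch.rand(4, 5, 28 * 28)  # 4 sets of 5 MNIST-sized images each
    set_repr, out = model(x)
    print(set_repr.shape, out.shape)  # torch.Size([4, 10]) torch.Size([4, 10])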
#include "text-slice.h"
#include "text.h"
#include <cassert>
TextSlice::TextSlice() :
text{nullptr} {}
TextSlice::TextSlice(const Text *text, Point start_position, Point end_position) :
text{text}, start_position{start_position}, end_position{end_position} {}
TextSlice::TextSlice(const Text &text) :
text{&text}, start_position{Point()}, end_position{text.extent()} {}
size_t TextSlice::start_offset() const {
if (start_position.is_zero()) return 0;
assert(start_position.row < text->line_offsets.size());
return text->line_offsets[start_position.row] + start_position.column;
}
size_t TextSlice::end_offset() const {
if (end_position.is_zero()) return 0;
return text->line_offsets[end_position.row] + end_position.column;
}
bool TextSlice::is_valid() const {
uint32_t start_offset = this->start_offset();
uint32_t end_offset = this->end_offset();
if (start_offset > end_offset) {
return false;
}
if (start_position.row + 1 < text->line_offsets.size()) {
if (start_offset >= text->line_offsets[start_position.row + 1]) {
return false;
}
}
if (end_position.row + 1 < text->line_offsets.size()) {
if (end_offset >= text->line_offsets[end_position.row + 1]) {
return false;
}
}
if (end_offset > text->size()) {
return false;
}
return true;
}
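// Split this slice at the given point (clamped to the slice's end), returning
// the two adjacent sub-slices that share the split position.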
std::pair<TextSlice, TextSlice> TextSlice::split(Point split_point) const {
Point absolute_split_point = Point::min(
end_position,
start_position.traverse(split_point)
);
return std::pair<TextSlice, TextSlice>{
TextSlice{text, start_position, absolute_split_point},
TextSlice{text, absolute_split_point, end_position}
};
}
std::pair<TextSlice, TextSlice> TextSlice::split(uint32_t split_offset) const {
return split(position_for_offset(split_offset));
}
Point TextSlice::position_for_offset(uint32_t offset, uint32_t min_row) const {
return text->position_for_offset(
offset + start_offset(),
start_position.row + min_row
).traversal(start_position);
}
TextSlice TextSlice::prefix(Point prefix_end) const {
return split(prefix_end).first;
}
TextSlice TextSlice::prefix(uint32_t prefix_end) const {
return split(prefix_end).first;
}
TextSlice TextSlice::suffix(Point suffix_start) const {
return split(suffix_start).second;
}
TextSlice TextSlice::slice(Range range) const {
return suffix(range.start).prefix(range.extent());
}
Point TextSlice::extent() const {
return end_position.traversal(start_position);
}
const char16_t *TextSlice::data() const {
return text->data() + start_offset();
}
uint32_t TextSlice::size() const {
return end_offset() - start_offset();
}
bool TextSlice::empty() const {
return size() == 0;
}
Text::const_iterator TextSlice::begin() const {
return text->cbegin() + start_offset();
}
Text::const_iterator TextSlice::end() const {
return text->cbegin() + end_offset();
}
uint16_t TextSlice::front() const {
return *begin();
}
uint16_t TextSlice::back() const {
return *(end() - 1);
}
/*
* Return generation number of current machine descriptor. Can be used for
* performance purposes to avoid requesting new md handle just to see if graph
* was updated.
*/
uint64_t
md_get_current_gen(void)
{
uint64_t gen = MDESC_INVAL_GEN;
mutex_enter(&curr_mach_descrip_lock);
if (curr_mach_descrip != NULL)
gen = (curr_mach_descrip->gen);
mutex_exit(&curr_mach_descrip_lock);
return (gen);
}
1. Field of Invention
The present invention relates to an electrical connector, and more particularly to a socket terminal for a grid array connector that connects chip modules with circuit boards.
2. Description of Related Art
Along with the prosperous development of the electronics industry, electrical connectors have become more important and have found broader applications in this field. Thus there are two important factors for people in this field when selecting proper electrical connectors: the performance and the cost of the electrical connector. Some other issues are also taken into consideration, such as how to provide a better electrical connection between the terminals of the electrical connector and electronic components/circuit boards, how to reduce the cost of electrical connectors and improve manufacturing efficiency, and the factors that affect the performance or cost of the electrical connector. One of those factors is the shape and structure of the terminal of the electrical connector.
Chinese Pat. No. 01279922 discloses a conductive terminal for electrical connectors. The conductive terminal includes a base and two arms. The base includes a main body and a welding part that connects the conductive terminal of the electrical connector to a circuit board. The two arms are bent from two opposite sides of the main body. Each arm consists of a connection part extending upwardly from one side of the main body, a contact part arranged at a rear end of the connection part, and a guiding part extending from one side of the contact part near the main body. A receiving space is formed between the two connection parts. The two connection parts are parallel while extending upward, and the two contact parts are also parallel to each other. Pins of chip modules are led toward the contact parts by the guiding parts so as to electrically connect with the conductive terminal of the electrical connector.
The two connection parts are parallel while extending upward, and the two contact parts are also parallel to each other. The two contact parts are respectively formed at the rear end of each connection part, so the distance between the two contact parts is the same as the distance between the two connection parts.
The above conductive terminal of the electrical connector has the following shortcomings: 1. Because the distance between the two contact parts is the same as the distance between the two connection parts, the pin size can only be smaller than or equal to the distance between the two connection parts. 2. The sharp end of the pin tends to scratch the inner walls of the two arms and damage both the pin and the conductive terminal of the electrical connector. Thus both the electrical connection between the conductive terminal of the electrical connector and the chip module, and the electrical connection between the conductive terminal of the electrical connector and the circuit board, are further affected.
U.S. Pat. No. 6,319,038 discloses a contact (terminal) for another connector. The contact includes a base, a pair of contact regions for electrical connection with a chip module, and a pair of arm sections extending from the base. Each arm section comprises an upper arm extending from the base, a forearm extending from a free end of the upper arm for connecting the upper arm with a contact region, and a palm extending from the contact region toward the base. The distance between the pair of forearms is smaller than the distance between the two upper arms but larger than the distance between the two contact regions.
When the conductive terminals are aligned on a metal band and the pair of arms is in the same plane as the base, the two arms are formed entirely by bending the material on the left and right sides of the base inwards. Thus the width each conductive terminal occupies on the metal band is increased. That means the distance between the central lines of two contiguous conductive terminals is increased and the total extended area of the conductive terminal is enlarged. During the assembling processes, the conductive terminal is assembled into the terminal receiving slot with the metal band in an alignment. Under the limit of the shape of the conductive terminal of the electrical connector, the distance between the above two contiguous conductive terminals cannot be reduced or adjusted according to the distance between the central lines of the two contiguous terminal receiving slots.
According to the above structure, when the pins of the chip module connect with the conductive terminal of the electrical connector, the pin size can be adjusted properly because the distance between the pair of upper arms is larger than the distance between the two contact regions. At the same time, the sharp end of the pin will not damage the inner walls of the two upper arms, so damage to the pins and to the conductive terminals can be avoided.
Although the above conductive terminal solves the problem of damage to the pins as well as to the conductive terminal, such a design still has some disadvantages: 1. In terms of cost, material is wasted because the two arms are formed entirely from the material on the left and right sides of the base and then bent inwards. Furthermore, the distance between the central lines of two contiguous conductive terminals of the electrical connector is increased. 2. In terms of time: during the assembling processes, the conductive terminals of the electrical connector connected with the metal band are assembled into the terminal receiving slots in a line. Under the limit of the shape of the conductive terminal, the distance between the above two contiguous conductive terminals cannot be reduced along with the shortened distance between the central lines of the two contiguous terminal receiving slots. This increases the difficulty of assembling and reduces the efficiency of stamping as well as of assembling. Thus the manufacturing cost is increased.
Thus there is a need to design a novel electrical connector to overcome the shortcomings mentioned above.
Holonics in manufacturing: bringing intelligence closer to the machine Manufacturing companies are under pressure to find ways to customize their products quickly, and a new approach to an old idea, holonics, might provide the answer. A holon is a self-organizing unit that can communicate with other holons and make decisions autonomously. A holon both comprises subordinate parts and constitutes part of a larger system. Scientists interested in the holon's autonomous, cooperative, and self-configuring characteristics are researching ways to create distributed systems that can change manufacturing processes on the fly, manage systems robustly, and handle unpredictable processing requests and inputs.
Web services: A solution to interoperability problems in sharing Grid resources The Grid is a computing and data management infrastructure whose goal is to provide the electronic underpinning for a global society in business, government, research, science and entertainment. Being a distributed system, the grid is complex due to the heterogeneous nature of the underlying software and hardware resources forming it. This heterogeneous nature hinders the interoperation of grid applications. The purpose of this paper is to develop a model for communication among grid applications despite the grid's distributed and heterogeneous nature. To realize this aim, we conducted an extensive review of existing implementation solutions for managing and integrating heterogeneous distributed applications. We used the knowledge gained from the literature to develop a model for integrating heterogeneous grid applications, and then implemented the designed model.
Microstructure and Granularity Effects in Electromigration
The persistent advancements made in the scaling and vertical implementation of front-end-of-line transistors have reached a point where the back-end-of-line metallization has become the bottleneck to circuit speed and performance. The continued scaling of metal interconnects at the nanometer scale has shown that their behavior is far from that expected from bulk films, primarily due to the increased influence that the microstructure and granularity have on the conductive and electromigration behavior. The impact of microstructure is noted by a sharp shift in the changing crystal orientation at the grain boundaries and the roughness introduced at the interfaces between metal films and the surrounding dielectric or insulating layers. These locations are primary scattering centers of conducting electrons, impacting a film's electrical conductivity, but they also impact the diffusion of atoms through the film during electromigration. Therefore, being able to fully understand and model the impact of the microstructure on these phenomena has become increasingly important and challenging, because the boundaries and interfaces must be treated independently from the grain bulk, where continuum simulations become insufficient. In light of this, recent advances in modeling electromigration in nanometer-sized copper interconnects are described, which use spatial material parameters to identify the locations of the grain boundaries and material interfaces. This method reproduces the vacancy concentration in thin copper interconnects, while allowing the impact of grain size and microstructure on copper interconnect lifetimes to be studied.
I. INTRODUCTION
The continued trend in transistor scaling along Moore's Law has, for the most part, been accompanied by simultaneous miniaturization of the interconnect lines. The miniaturization of metal lines down to the size of several nanometers results in an increased impact of the material interfaces (MIs) and grain boundaries (GBs) on the conductivity and reliability of the thin film, which is mainly composed of copper. The impact of the microstructure includes the impact of line and via sidewall roughness, the intersection of porous low-k voids with the sidewall, copper (Cu) surface and copper/barrier interface roughness, and the presence of GBs. Understanding and mitigating the impact of granularity in interconnects is essential for scaling to continue. Alternatively, new materials will have to be used, whose lifetime and resistance to electromigration will also need a closer investigation. Of particular interest in this regard is cobalt, which is applied for use in combination with copper for M0 and M1 metallization, shown in Fig. 1. Another interesting structure being investigated to replace the copper interconnect is the carbon nanotube. However, integrating a completely novel structure and material is complex and the accepted reality is that we will live with copper for the foreseeable future, at least down to the 3 nm node. Ultimately, copper is expected to continue to be used for the next several technology nodes, possibly in combination with cobalt at the M0 and M1 layers and on its own in higher metal stacks. The layers critical for electromigration (EM) are those where the lines have the smallest cross-sectional area, or pitch, resulting in the highest current density. Most often, these are lines closest to the front-end-of-line (FEOL) transistors, nearest to M0 from Fig. 1,
which shows the back-end-of-line (BEOL) metallization for Intel's 10 nm technology node. Here, the first five layers (M0-M5) require metal lines with a pitch below 45 nm, making them susceptible to EM failure and granularity effects. Therefore, it must be ensured that EM is properly modeled in order to be able to accurately estimate interconnect lifetimes. In this review a framework developed to accelerate EM simulations of grained copper interconnect lines is presented. It should be noted that this framework is not limited to copper only, but can easily be extended to deal with any number of materials which are prone to EM failure and require the inclusion of granularity effects for a proper understanding of their conductive and reliability behavior. With increased scaling and reduction in the interconnect half-pitch with every new generation, it was noted that the lifetime of copper interconnects has approximately halved for every new technology node, even when the current density is kept the same. As the thickness is reduced, the average crystal grain size decreases almost linearly, as shown in Fig. 2. The decreased grain size, combined with the overall reduction in metal thickness, means that GBs and MIs play an increasingly important role in determining the film behavior. The influence of these properties on electron scattering, and thus on conductivity, has been explored for many decades, starting with Fuchs, Mayadas and Shatzkes, and Sondheimer. In addition to the changes in its conductive behavior, the reliability of nanometer-sized copper films is significantly influenced by their microstructure. Degradation due to EM is the primary form of failure in metal films, which occurs due to the transport and accumulation of vacancies which ultimately nucleate to form a void. Under a high current density, this void grows to increase the line resistance and finally to cause an open circuit failure. Alternatively, the stress induced by the accumulation of vacancies can sometimes be significant enough to cause cracking and a failure by itself.
[Figure 2: Relationship between film thickness and resulting average grain diameter. The symbols are measurements, while the lines represent the best-fit linear regression. As the film thickness is decreased, the average grain size decreases, meaning that more grain boundaries are present.]
II. ELECTROMIGRATION PHYSICS
The physics of EM phenomena is described in detail in, e.g., the work of Ceric and Selberherr. There are two main driving forces for electromigration: the direct force F_direct, initiated through the direct action of the external field on the charge of the migrating ion, and the wind force F_wind, arising due to the scattering of the conduction electrons by impurities or point defects. The total force F is given by the sum of the two forces
\[ F = F_{\mathrm{direct}} + F_{\mathrm{wind}}. \]
Each force is described by its defect valence Z_direct or Z_wind, respectively, according to
\[ F_{\mathrm{direct}} = Z_{\mathrm{direct}}\, e E, \qquad F_{\mathrm{wind}} = Z_{\mathrm{wind}}\, e E. \]
The effective valence Z* is commonly used to describe the total force and is the sum of the direct Z_direct and wind Z_wind valences (Z* = Z_direct + Z_wind), ultimately allowing the total force to be represented using
\[ F = Z^{*} e E, \]
where e is the elementary charge and E is the electric field. In free-electron-like metals (e.g., Cu) F_wind is dominant. To experimentally determine EM failure on a new technology, the failure behavior of all materials making up the interconnect must be characterized.
This is commonly performed under accelerated conditions (i.e., high temperature and high current) to find the mean time to failure (MTTF), which is then extrapolated to operating conditions using a log-normal plot following Black's equation, given by
\[ \mathrm{MTTF} = \frac{A}{j^{n}} \exp\!\left(\frac{E_a}{k_B T}\right), \]
where n and E_a are the current density exponent and activation energy, respectively, A is a constant determined by the material properties and geometry of the interconnect, j is the current density, T is temperature, and k_B is Boltzmann's constant. It should be noted that simulations, just like measurements, are carried out using the same accelerated conditions in order to match the experimental conditions as closely as possible.
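As a quick numeric illustration of this extrapolation (the values below are hypothetical and not taken from the measurements discussed here): with n = 2 and E_a = 0.9 eV, moving from a stress test at a tenfold current density and T_test = 600 K down to operation at T_op = 375 K scales the MTTF by
\[ \frac{\mathrm{MTTF}_{\mathrm{op}}}{\mathrm{MTTF}_{\mathrm{test}}} = \left(\frac{j_{\mathrm{test}}}{j_{\mathrm{op}}}\right)^{\!n} \exp\!\left[\frac{E_a}{k_B}\left(\frac{1}{T_{\mathrm{op}}} - \frac{1}{T_{\mathrm{test}}}\right)\right] \approx 10^{2} \times e^{10.4} \approx 3\times 10^{6}, \]
i.e., a lifetime measured in hours under accelerated stress corresponds to hundreds of years at operating conditions.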
A. CONDUCTIVITY
The current density which builds up in metal films is one of the main drivers of vacancy accumulation and EM failure. Therefore, it is important to understand how the conductivity changes with the increased impact of microstructure. To assess the conductivity, three main components must be included: 1) The intrinsic resistivity of bulk copper, limited only by the electron mean free path (MFP). 2) The decrease in conductivity due to electron scattering at copper's surfaces, which includes material interfaces. The surface roughness also plays a role in determining this impact, which is why different adjoining materials (e.g., dielectrics or insulators) will result in a different conductive behavior of the copper line. 3) The decrease in conductivity due to the impact of grain boundaries. The conducting electrons will scatter when reaching a grain boundary, limiting their MFP. In order to introduce the influence of GBs and MIs on the copper conductivity in a continuum way, several models have been proposed. The effects of the granular microstructure, surface scattering on GBs and MIs, and cross-sectional area of a copper interconnect on its resistivity \rho_f are commonly modeled by applying a continuum equation derived by Clarke et al., based on the works of Mayadas and Shatzkes:
\[ \rho_f = \rho_i \left[ 1 + \frac{3}{8}\,(1 - p)\,\frac{\lambda}{w} + \frac{3}{2}\,\frac{\lambda}{D}\,\frac{R}{1 - R} \right], \]
where \rho_i is the bulk resistivity, \lambda is the MFP, w is the metal width, p is the probability of electron scattering from a MI, D is the average grain diameter, and R is the probability of electron scattering from a GB. The added temperature influence on the final resistivity can be calculated using
\[ \rho(T) = \rho_f \left[ 1 + \alpha_e \left( T - T_{\mathrm{ref}} \right) \right], \]
where T is the temperature, T_ref is the reference (room) temperature, and \alpha_e = 0.0043 K^{-1} is the temperature-dependent factor for copper resistivity, referred to as the temperature coefficient of resistance (TCR). Several studies suggest that the TCR varies according to the microstructure in several metals, when looking at the difference between nanometer-sized and micrometer-sized grains. However, this was shown not to be the case for nanometer-sized copper grains whose size varied between 20 nm and 120 nm, a range of interest for the study presented in this manuscript, and in copper lines from different data sets. Most simulations dealing with the conductivity and reliability of copper interconnects use this continuum equation. This approach provides a new bulk value for the microstructure-dependent resistivity, while the effects of an individual grain boundary cannot be analyzed with this model. For this, we need to make sure the entire line, with its microstructure, is represented in an appropriate simulation environment. This can be done by observing that the proximity to GBs and MIs can be treated as a parameter which influences conductivity. Therefore, finding the local conductivity based on the distance of each point inside the grain to a GB and MI can help create a spatial representation of the conductivity inside a copper line. Once the distance to the boundaries d_b from every point inside the individual metal grains is known, the local resistivities \rho_l and conductivities \sigma_l, which depend on the nearest GB and MI, are derived from the intrinsic resistivity \rho_i by evaluating the same expression locally, with d_b in place of the global geometry parameters.
Conductivity is one of the primary properties which influences the electromigration behavior of copper. A high current density and a high electron wind can lead to the diffusion of metal atoms in the direction of electron motion. The diffusion is governed by the atom diffusivity property of the material, which also varies depending on whether the atom is located in the grain, on the grain boundary, or along the interface between the metal and the adjacent material. Because atoms are more strongly bound inside the grain lattice than at the grain boundaries, their migration is more likely to take place along the boundary, meaning that their diffusivity there is increased. It should be noted that self-heating leading to thermo-migration (TM) has an additional effect on vacancy dynamics, which is also included in the presented framework. However, TM is commonly ignored in EM measurements due to the accelerated conditions quickly providing a uniform temperature and minimizing temperature gradients.
B. VACANCY DYNAMICS
The main driver of electromigration is the accumulation of vacancies which then form a void. The diffusion of vacancies D_v through a material with pre-exponential diffusivity D_{v0} is determined by
\[ D_v = D_{v0} \exp\!\left( -\frac{E_a - \Omega\sigma}{k_B T} \right), \]
where E_a is the activation energy, \Omega is the atomic volume, and \sigma is the hydrostatic stress. The vacancy diffusion determines its flux J_v using
\[ J_v = -D_v \nabla C_v + \frac{D_v C_v}{k_B T}\, F(j, T, \sigma), \]
where C_v is the vacancy concentration and F(j, T, \sigma) is a function which depends on the current density j, temperature, and hydrostatic stress, with
\[ F(j, T, \sigma) = Z^{*} e \rho j - Q^{*}\,\frac{\nabla T}{T} + f\,\Omega\,\nabla\sigma, \]
where \rho is the resistivity (EM component), Q^{*} is the heat of transport (TM component), and f is the vacancy relaxation ratio (stress-migration). The subsequent accumulation and depletion of vacancies is found using the continuity equation
\[ \frac{\partial C_v}{\partial t} = -\nabla \cdot J_v + G, \]
where G is a surface function which describes vacancy generation and annihilation. Furthermore, from these expressions it is clear that the resistivity plays a significant role in determining the vacancy flux and thereby in the overall EM behavior. The discussion thus far describes the EM process in a bulk material, which can be modeled assuming a continuum in the material properties. However, granularity can modify this view significantly, as discussed in the next section.
C. EFFECT OF GRANULARITY ON VACANCY DYNAMICS
In addition to the conductivity, the diffusion of vacancies D_v differs between atoms located in the GB, MI, or in the grain bulk. It has been shown that both D_{v0} and E_a depend on the atom's location in the granular structure of copper according to Table 1. Of note is that the atomic diffusivity in MIs is three orders of magnitude larger than the bulk value, which explains why MIs play such an increasing role for electromigration in nanometer-sized interconnects. Therefore it is essential that these parameters are properly treated in any EM simulations. Another aspect of EM which is ignored in continuum models is that the generation and annihilation of vacancies G only takes place inside the GBs and MIs and not in the grain bulk.
The equation which governs this process is given by
\[ \frac{\partial C_{v,T}}{\partial t} = \iota \left[ \omega_T\, C_v \left( 1 - \frac{C_{v,T}}{C_{v,eq}} \right) - \omega_R\, C_{v,T} \right], \]
where C_{v,T} and C_{v,eq} are the trapped and equilibrium vacancy concentrations, respectively, \tau is the associated relaxation time, and \omega_R and \omega_T are the vacancy release and trapping rates, respectively. \iota is a step function which is assigned a value of 1 inside the GB and MI, and 0 otherwise. Therefore, a total of four spatial parameters is used to sufficiently include granularity in EM models, those being \rho_l, D_{v0}, E_a, and \iota. A simulation framework designed to implement this model is given here. Solving the equations above gives the time-dependent change of the vacancy concentration inside the copper film. The accumulation of vacancies at one end of the wire and their depletion at the other end result in an increase in tensile and compressive stresses, respectively. Once a critical stress level is reached, the material can no longer conduct sufficient current for the required application, resulting in failure.
III. ELECTROMIGRATION SIMULATION FRAMEWORK
The simulation framework relies on three main components, namely Voronoi tessellation to generate the grained interconnect line, the assignment of the relevant granularity-dependent parameters discussed in the previous section (\rho_l, D_{v0}, E_a, and \iota), and the solution of the EM equations to find the vacancy accumulation and resulting EM-induced stress, as visualized in Fig. 3. These three components are addressed in further detail in this section.
A. TESSELLATION
The stochastic polycrystalline copper line is generated using a Voronoi tessellation. Assuming spherical grains and knowing the average grain size, the total number of grains which fit into the volume is found. For each grain a seed point is placed at a random location inside the metal line, which then grows isotropically until the entire volume is filled. When grains hit each other, they merge to form a GB. The Neper tessellation tool is used in order to create the required tessellated structures, allowing for the generation of a Voronoi tessellation with ideal copper orientations of (1 1 2) and (1 1 2). In order to show the key features of the tessellation tool used in this study, the above-mentioned technique was applied to copper lines with different average grain sizes. The two-dimensional (2D) lines have a length of 1000 nm and a height of 20 nm with an average grain size of 20 nm and 40 nm, shown in the top and bottom sections of Fig. 4, respectively. In the wire with smaller grains, we note a very granular structure. However, when the grain diameter is larger than the dimensions of the metal wire, a bamboo-like structure is formed, as depicted in the bottom of Fig. 4. This is consistent with many studies which show that as a wire becomes narrower the grains begin to be more bamboo-like and less granular. This means that the GBs are primarily near-perpendicular to the direction of the applied electric field and the direction of the atom diffusion. Therefore, they simultaneously act as fast diffusivity pathways and diffusion barriers, depending on the grain boundary orientation.
B. SPATIAL PARAMETER ASSIGNMENT
The assignment of the spatial parameters (\rho_l, D_{v0}, E_a, and \iota) on a Cartesian grid with spacing d_g ensures that the GB and MI locations are explicitly defined and that the EM framework properly treats the granular nature of the interconnect line. Linear interpolation is used in the EM model in order to populate the entire material domain between the defined points. This proceeds according to the flow chart in Fig. 5.
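As a rough sketch of this assignment step (not the authors' implementation: the parameter values below are placeholders standing in for Table 1, and the geometry is reduced to 2D points and GB segments):

import numpy as np

# Hypothetical location-dependent EM parameters (D_v0 in cm^2/s, E_a in eV);
# the real values come from Table 1 of the paper.
PARAMS = {
    'bulk': {'D_v0': 0.52, 'E_a': 0.89, 'iota': 0.0},
    'gb':   {'D_v0': 0.52, 'E_a': 0.71, 'iota': 1.0},
    'mi':   {'D_v0': 0.52, 'E_a': 0.50, 'iota': 1.0},
}
BOUNDARY_THICKNESS = 1.0  # nm, as assumed in the text

def dist_to_segment(p, seg):
    # Euclidean distance from point p to the line segment seg.
    (x1, y1), (x2, y2) = seg
    px, py = p
    dx, dy = x2 - x1, y2 - y1
    denom = (dx * dx + dy * dy) or 1e-12
    t = max(0.0, min(1.0, ((px - x1) * dx + (py - y1) * dy) / denom))
    return ((px - (x1 + t * dx)) ** 2 + (py - (y1 + t * dy)) ** 2) ** 0.5

def classify(point, gb_segments, height):
    # Label a grid point as material interface, grain boundary, or bulk.
    x, y = point
    if min(y, height - y) <= BOUNDARY_THICKNESS:  # near the top/bottom MI
        return 'mi'
    d_gb = min(dist_to_segment(point, s) for s in gb_segments)
    return 'gb' if d_gb <= BOUNDARY_THICKNESS else 'bulk'

def assign_parameters(grid_points, gb_segments, height):
    # Build the spatial (D_v0, E_a, iota) fields; rho_l is omitted for brevity.
    return np.array([[PARAMS[classify(p, gb_segments, height)][k]
                      for k in ('D_v0', 'E_a', 'iota')]
                     for p in grid_points])

# Example: one vertical GB at x = 50 nm in a 100 nm x 20 nm sheet.
grid = [(x, y) for x in range(0, 101, 5) for y in range(0, 21, 5)]
fields = assign_parameters(grid, [((50.0, 0.0), (50.0, 20.0))], height=20.0)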
A boundary thickness of 1 nm was assumed here, as was found to be appropriate from previous studies. A 2D test geometry with dimensions 20 nm × 2000 nm and a grain diameter of 25 nm was used to test the given framework. Due to the 2D nature of the test geometry, the simulation is effectively performed on a Cu sheet and the line width is ignored. The results of the spatial parameter assignment for the diffusivity (D_v = D_{v0} e^{-E_a/(k_B T)}) on one section of the structure are shown in Fig. 6. The impact of the GBs and MIs is evident. Noteworthy is the dependence of the diffusivity on the angle between the GB and the current flow, or the direction of the electron wind. When the GB is perpendicular to the flow, the diffusivity is almost zero and the grain boundary acts as a vacancy blocking site. On the other hand, GBs which are parallel to the current flow have a diffusivity which is higher than that of the bulk material, speeding up the vacancy transport.
C. ELECTROMIGRATION MODEL
The equations which govern EM physics are given in Section II. Modeling EM requires the solution of three physical phenomena simultaneously: the electro-thermal problem (current density, self-heating, and temperature), the vacancy dynamics problem, and the solid mechanics problem (induced strain and stress). Ultimately, finding the induced stress is desired in order to ascertain whether the critical stress is reached, which results in failure or the formation of a void. The flowchart of the electromigration model is given in Fig. 7. Solving the electro-thermal problem allows the temperature distribution and current density in the interconnect to be identified. Joule heating must be considered, as this can lead to higher thermal gradients in the interconnect and an increased proclivity to the thermo-migration component of vacancy diffusion. The vacancy dynamics problem is solved by finding a solution to the equations given in Section II. Finally, in order to calculate the vacancy-induced strain, the solid mechanics problem must be solved. The change in volume caused by the migration and formation of vacancies is represented by
\[ \frac{\partial\,\mathrm{tr}(\varepsilon)}{\partial t} = \frac{\partial \varepsilon_m}{\partial t} + \iota\,\frac{\partial \varepsilon_f}{\partial t}, \]
where tr(\varepsilon) is the trace of the strain tensor, while \varepsilon_m and \varepsilon_f represent the strain induced due to the migration and formation of vacancies, respectively. The second term is multiplied by \iota because vacancy formation takes place only at the grain boundaries and material interfaces. The induced strain can then be derived as
\[ \frac{\partial\,\mathrm{tr}(\varepsilon)}{\partial t} = \Omega \left( \nabla \cdot J_v + \iota\, G \right). \]
The above equation connects the vacancy transport (J_v) and mechanics (\sigma) models. Given that the strain in a dual-damascene interconnect is anisotropic, the induced strain is modeled by applying one third of the strain in each Cartesian direction. From the flux and strain equations, it can be observed that the vacancy flux depends on the stress, which in turn depends on the vacancy flux. In order to solve these mutually dependent sets of equations, time discretization is required and the time steps must be small enough to ensure that the induced error is minimal. The entire flow sequence shown in Fig. 7 is solved at every time step and segregated solvers are used to calculate each of the electro-thermal, vacancy dynamics, and solid mechanics problems. Newton's method is used to obtain a solution for the entire set of equations at each step.
IV. RESULTS AND DISCUSSION
For the results obtained in this section, a 2D copper line is simulated, with a height of 20 nm. The applied current density is 1 MA/cm² at an ambient temperature of 300 °C.
Furthermore, the results are analyzed primarily for the electromigration component during the relatively early stages of vacancy dynamics, up to the point where the electromigration-induced vacancy transport balances out the stress-induced transport. After this point, the stress-induced component takes over, further increasing the stress until eventual failure. The simulated times are long enough to ensure that stress-migration is the dominant vacancy dynamics effect. This allows the EM and TM phenomena to be fully encompassed in the result. Thereafter, the stress-time curve can be extrapolated without the need to solve the complex EM model.
A. IMPACT OF GRAIN SIZE
We performed several simulations on a 1000 nm long copper line while varying the average grain diameter (D_g) from 15 nm to 40 nm. The impact of D_g on the maximum vacancy concentration (C_v/C_{v0} − 1) and the maximum induced stress (\sigma) is given in Fig. 8 and Fig. 9, respectively. The highest vacancy concentration and EM-induced stress are observed at the end of the Cu line, downwind of the electron motion. This is due to the assumption of zero vacancy diffusivity there, causing the highest accumulation of vacancies, independent of the grain size. The first observation from the figures is that C_v and \sigma increase with decreasing D_g. This is not surprising, since smaller grains mean that there are fewer columnar GBs, which act as diffusion barriers, and more GBs parallel to the electron wind, accelerating the vacancy transport. This was shown previously in Fig. 4. In order to analyze whether a continuum model could be devised which replicates the behavior of the framework presented here, we analyzed the average values for the spatial parameters of interest, namely the conductivity, the diffusivity (D_v = D_{v0} e^{-E_a/(k_B T)}), and \iota, the ratio of the interconnect volume where vacancy generation and annihilation can take place. The calculated values are given in Table 2. We note that the average diffusivity does not vary much with increasing grain size, suggesting that a continuum model, which relies on bulk parameter representations of the copper film, might not be easily attainable. Using the values from Table 2 directly resulted in an underestimation of the maximum vacancy accumulation and induced stress, because the continuum model is not able to properly represent the gradients which occur in the film due to the presence of complex interfaces and boundaries. Regardless of how we varied the parameters, it was not possible to reproduce both the vacancy concentration and induced stress graphs which were obtained with the microstructure simulations.
B. LOCAL STRESS
Another aspect which cannot be properly treated with a continuum model is the representation of local stresses. For example, triple points (where a GB and MI meet) are known to cause a slight increase in the induced stress compared to their surroundings. We performed a sample simulation on a 2000 nm × 20 nm copper line with an average grain diameter of 25 nm, and the resulting EM-induced stress after 200 s is given in Fig. 10. Here, the influence of the GBs and MIs on EM is evident. The framework is able to reproduce the stress generation at triple points (TPs), shown in the circled regions in Fig. 10, including at (x, y) = (287 nm, 0.5 nm), (x, y) = (287 nm, 19.5 nm), (x, y) = (337 nm, 0.5 nm), and (x, y) = (337 nm, 19.5 nm). With the presented simulation framework, this stress can be accurately modeled, even with very coarse meshes.
In fact, when the mesh for the simulations was varied from 0.4 nm to 2 nm (a 25× speedup in 2D), the variation was under 5% (the parameter grid d_g from Fig. 5 was set to 0.1 nm).
C. SIMULATION TIME
The proposed framework allows for a very quick and efficient estimation of EM phenomena while taking the film's granularity into consideration. In Fig. 11 the simulation time is plotted against the chosen grid spacing, when the grid during the electromigration simulation is varied. The spatial parameter grid d_g is either set to 0.1 nm or is varied together with the electromigration grid. A drastic reduction in the simulation time can be achieved by increasing the coarseness of the mesh with relatively little loss of accuracy. For the entire simulation range shown in Fig. 11, the stress at triple points varied by less than 8%. Therefore, when the goal is to model local stresses, even very coarse meshes will suffice, allowing for simulation times on the order of a few minutes.
V. CONCLUSION
Continuum EM models frequently underestimate the time at which EM effects initiate, due to their inability to properly take into account the granularity of nanometer-sized interconnects. The effects of granularity (GBs and MIs) are known to exacerbate electromigration phenomena; therefore, it is essential that they are properly treated. Here, a sophisticated modeling framework is described, which considers granularity by applying spatial material parameters (\rho_l, D_{v0}, E_a, and \iota) in EM simulations to identify the locations of the GBs and MIs. The framework allows the impact of the average grain size on the resulting EM behavior to be modeled efficiently, as well as the study of induced local stresses, such as those at triple points, even when very coarse meshes are applied to accelerate the simulation.
import logging
import sys

import click

# The helper functions used below (get_client, get_indices, get_date_regex,
# regex_iterate, prune_kibana, in_list, show, do_command) are assumed to be
# provided by the surrounding curator package; only this command body is shown.
logger = logging.getLogger(__name__)

def indices(ctx, newer_than, older_than, prefix, suffix, time_unit,
timestring, regex, exclude, index, all_indices):
if timestring and not ctx.obj['filters']:
regex = r'^.*{0}.*$'.format(get_date_regex(timestring))
ctx.obj['filters'].append({ 'pattern': regex })
if not all_indices and not ctx.obj['filters'] and not index:
click.echo('{0}'.format(ctx.get_help()))
click.echo(click.style('ERROR. At least one filter must be supplied.', fg='red', bold=True))
sys.exit(1)
logger.info("Job starting...")
logger.debug("Params: {0}".format(ctx.parent.parent.params))
client = get_client(**ctx.parent.parent.params)
indices = get_indices(client)
logger.debug("Full list of indices: {0}".format(indices))
if index and not ctx.obj['filters']:
working_list = []
else:
if indices:
working_list = indices
else:
click.echo(click.style('ERROR. Unable to get indices from Elasticsearch.', fg='red', bold=True))
sys.exit(1)
if all_indices:
working_list = indices
logger.info('Matching all indices. Ignoring flags other than --exclude.')
logger.debug('All filters: {0}'.format(ctx.obj['filters']))
for f in ctx.obj['filters']:
if all_indices and not f.get('exclude'):  # pattern filters may lack an 'exclude' key
continue
logger.debug('Filter: {0}'.format(f))
working_list = regex_iterate(working_list, **f)
if ctx.parent.info_name == "delete":
logger.info("Pruning Kibana-related indices to prevent accidental deletion.")
working_list = prune_kibana(working_list)
working_list.extend(in_list(index, indices))
if working_list:
working_list = sorted(list(set(working_list)))
logger.debug('ACTION: {0} will be executed against the following indices: {1}'.format(ctx.parent.info_name, working_list))
if ctx.parent.info_name == 'show':
show(working_list)
else:
if ctx.parent.parent.params['dry_run']:
logger.info("DRY RUN MODE. No changes will be made.")
logger.info("The following indices would have been altered:")
show(working_list)
else:
try:
retval = do_command(client, ctx.parent.info_name, working_list, ctx.parent.params)
sys.exit(0) if retval else sys.exit(1)
except Exception as e:
logger.error("Unable to complete: {0} Exception: {1}".format(ctx.parent.info_name, e))
sys.exit(1)
else:
logger.warning('No indices matched provided args.')
click.echo(click.style('ERROR. No indices matched provided args.', fg='red', bold=True))
sys.exit(99)
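The timestring-to-pattern step at the top of indices() can be illustrated in isolation. This is a rough sketch of the idea only; get_date_regex's exact output lives in the curator package, and the '%Y.%m.%d' expansion below is an assumption:

import re

index_names = ['logstash-2014.01.01', 'kibana-int', 'logstash-2014.02.01']
date_regex = r'\d{4}\.\d{2}\.\d{2}'  # what a '%Y.%m.%d' timestring would map to
pattern = re.compile(r'^.*{0}.*$'.format(date_regex))
matched = [name for name in index_names if pattern.match(name)]
print(matched)  # ['logstash-2014.01.01', 'logstash-2014.02.01']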
import { addEvent, getMods, getKeys, compareArray } from './utils';
import { _keyMap, _modifier, modifierMap, _mods, _handlers } from './var';
type KeyMap = keyof typeof _keyMap;
type Modifier = keyof typeof _modifier;
type ModifierMap = keyof typeof modifierMap;
type Mods = keyof typeof _mods;
let _downKeys: Array<number> = []; // key codes of currently pressed bound keys
let _scope: string = 'all'; // default hotkey scope
const elementHasBindEvent: Array<Document> = []; // elements that already have listeners bound
// Return the key code for a named key
const code = (x: KeyMap | Modifier | string) => {
return _keyMap[x.toLowerCase() as KeyMap] || _modifier[x.toLowerCase() as Modifier] || x.toUpperCase().charCodeAt(0);
};
// Set the current scope (defaults to 'all')
function setScope(scope: string) {
_scope = scope || 'all';
}
// Get the current scope
function getScope() {
return _scope || 'all';
}
// Get the key codes of the currently pressed keys
function getPressedKeyCodes() {
return _downKeys.slice(0);
}
// Form-control check, returns a Boolean
// hotkeys are effective only when filter returns true
function filter(event: Event & {srcElement: any}) {
const target = event.target || event.srcElement;
const { tagName } = target;
let flag = true;
// ignore: isContentEditable === 'true', <input> and <textarea> when readOnly state is false, <select>
if (
target.isContentEditable
|| ((tagName === 'INPUT' || tagName === 'TEXTAREA' || tagName === 'SELECT') && !target.readOnly)
) {
flag = false;
}
return flag;
}
// Check whether a given key is currently pressed, returning true or false
function isPressed(keyCode: string|number) {
if (typeof keyCode === 'string') {
keyCode = code(keyCode); // convert the key name to its key code
}
return _downKeys.indexOf(keyCode) !== -1;
}
// Remove every handler registered under the given scope
function deleteScope(scope: string, newScope: string) {
let handlers;
let i;
// No scope specified; use the current scope
if (!scope) scope = getScope();
for (const key in _handlers) {
if (Object.prototype.hasOwnProperty.call(_handlers, key)) {
handlers = _handlers[key];
for (i = 0; i < handlers.length;) {
if (handlers[i].scope === scope) handlers.splice(i, 1);
else i++;
}
}
}
// If the deleted scope was active, reset the scope to newScope or 'all'
if (getScope() === scope) setScope(newScope || 'all');
}
// Clear a released key (including modifiers)
function clearModifier(event: any) {
let key = event.keyCode || event.which || event.charCode;
const i = _downKeys.indexOf(key);
// remove the released key from the pressed-key list
if (i >= 0) {
_downKeys.splice(i, 1);
}
// Special-case the Command key: keyup fires only once for Command combos, so flush all pressed keys
if (event.key && event.key.toLowerCase() === 'meta') {
_downKeys.splice(0, _downKeys.length);
}
// Clear the modifier flags: shiftKey, altKey, ctrlKey, (command || metaKey)
if (key === 93 || key === 224) key = 91;
if (key in _mods) {
_mods[key as Mods] = false;
// reset the exposed modifier flag on the hotkeys object to false
for (const k in _modifier) if (_modifier[k as Modifier] === key) hotkeys[k] = false;
}
}
function unbind(keysInfo: any, ...args: any) {
// unbind(), unbind all keys
if (!keysInfo) {
Object.keys(_handlers).forEach((key) => delete _handlers[key]);
} else if (Array.isArray(keysInfo)) {
// support like : unbind([{key: 'ctrl+a', scope: 's1'}, {key: 'ctrl-a', scope: 's2', splitKey: '-'}])
keysInfo.forEach((info) => {
if (info.key) eachUnbind(info);
});
} else if (typeof keysInfo === 'object') {
// support like unbind({key: 'ctrl+a, ctrl+b', scope:'abc'})
if (keysInfo.key) eachUnbind(keysInfo);
} else if (typeof keysInfo === 'string') {
// support old method
// eslint-disable-line
let [scope, method] = args;
if (typeof scope === 'function') {
method = scope;
scope = '';
}
eachUnbind({
key: keysInfo,
scope,
method,
splitKey: '+',
});
}
}
// Unbind shortcuts within a given scope
const eachUnbind = ({
key, scope, method, splitKey = '+',
}: {
key?: string,
scope?: string,
method?: Function,
splitKey?: string
}) => {
const multipleKeys: Array<string> = getKeys(key);
multipleKeys.forEach((originKey) => {
const unbindKeys = originKey.split(splitKey);
const len = unbindKeys.length;
const lastKey = unbindKeys[len - 1];
const keyCode = lastKey === '*' ? '*' : code(lastKey);
if (!_handlers[keyCode]) return;
// If no scope was passed in, use the current scope
if (!scope) scope = getScope();
const mods = len > 1 ? getMods(_modifier, unbindKeys) : [];
_handlers[keyCode] = _handlers[keyCode].map((record: any) => {
// If a method was supplied, only unbind records whose method matches
const isMatchingMethod = method ? record.method === method : true;
if (
isMatchingMethod
&& record.scope === scope
&& compareArray(record.mods, mods)
) {
return {};
}
return record;
});
});
};
// Run the callback registered for a matching shortcut
function eventHandler(event: Event, handler: any, scope: string) {
let modifiersMatch;
// Check that the handler belongs to the current scope
if (handler.scope === scope || handler.scope === 'all') {
// Check whether the modifiers match (true if the handler declares any)
modifiersMatch = handler.mods.length > 0;
for (const y in _mods) {
if (Object.prototype.hasOwnProperty.call(_mods, y)) {
if (
(!_mods[y as Mods] && handler.mods.indexOf(+y) > -1)
|| (_mods[y as Mods] && handler.mods.indexOf(+y) === -1)
) {
modifiersMatch = false;
}
}
}
// Invoke the handler; pure modifier presses are not handled here
if (
(handler.mods.length === 0
&& !_mods[16]
&& !_mods[18]
&& !_mods[17]
&& !_mods[91])
|| modifiersMatch
|| handler.shortcut === '*'
) {
if (handler.method(event, handler) === false) {
if (event.preventDefault) event.preventDefault();
else event.returnValue = false;
if (event.stopPropagation) event.stopPropagation();
if (event.cancelBubble) event.cancelBubble = true;
}
}
}
}
// Dispatch key events (bound to both keydown and keyup)
function dispatch(this: any, event: Event) {
const asterisk = _handlers['*'];
let key = event.keyCode || event.which || event.charCode;
// Form-control filter: by default, shortcuts do not fire inside form controls
if (!hotkeys.filter.call(this, event)) return;
// Gecko (Firefox) reports the Command key as 224; normalize it to the WebKit (Chrome) value
// WebKit reports different key codes for the left and right Command keys
if (key === 93 || key === 224) key = 91;
/**
* Collect bound keys
* If an Input Method Editor is processing key input and the event is keydown, return 229.
* https://stackoverflow.com/questions/25043934/is-it-ok-to-ignore-keydown-events-with-keycode-229
* http://lists.w3.org/Archives/Public/www-dom/2010JulSep/att-0182/keyCode-spec.html
*/
if (_downKeys.indexOf(key) === -1 && key !== 229) _downKeys.push(key);
/**
* Jest test cases are required.
* ===============================
*/
['ctrlKey', 'altKey', 'shiftKey', 'metaKey'].forEach((keyName) => {
const keyNum = modifierMap[keyName as ModifierMap];
if (event[keyName as keyof typeof event] && _downKeys.indexOf(keyNum as number) === -1) {
_downKeys.push(keyNum as number);
} else if (!event[keyName as keyof typeof event] && _downKeys.indexOf(keyNum as number) > -1) {
_downKeys.splice(_downKeys.indexOf(keyNum as number), 1);
} else if (keyName === 'metaKey' && event[keyName] && _downKeys.length === 3) {
/**
* Fix if Command is pressed:
* ===============================
*/
if (!(event.ctrlKey || event.shiftKey || event.altKey)) {
_downKeys = _downKeys.slice(_downKeys.indexOf(keyNum as number));
}
}
});
/**
* -------------------------------
*/
if (key in _mods) {
_mods[key as Mods] = true;
// expose the pressed modifier as a flag on the hotkeys object
for (const k in _modifier) {
if (_modifier[k as Modifier] === key) hotkeys[k] = true;
}
if (!asterisk) return;
}
// Copy the modifier state from the event into _mods via modifierMap
for (const e in _mods) {
if (Object.prototype.hasOwnProperty.call(_mods, e)) {
_mods[e as Mods] = (event[modifierMap[e as ModifierMap] as keyof typeof event] as boolean);
}
}
/**
* https://github.com/jaywcjlove/hotkeys/pull/129
* This solves the issue in Firefox on Windows where hotkeys corresponding to special characters would not trigger.
* An example of this is ctrl+alt+m on a Swedish keyboard which is used to type μ.
* Browser support: https://caniuse.com/#feat=keyboardevent-getmodifierstate
*/
if (event.getModifierState && (!(event.altKey && !event.ctrlKey) && event.getModifierState('AltGraph'))) {
if (_downKeys.indexOf(17) === -1) {
_downKeys.push(17);
}
if (_downKeys.indexOf(18) === -1) {
_downKeys.push(18);
}
_mods[17] = true;
_mods[18] = true;
}
// Get the current scope (defaults to 'all')
const scope = getScope();
// Handlers registered for '*' apply to every shortcut
if (asterisk) {
for (let i = 0; i < asterisk.length; i++) {
if (
asterisk[i].scope === scope
&& ((event.type === 'keydown' && asterisk[i].keydown)
|| (event.type === 'keyup' && asterisk[i].keyup))
) {
eventHandler(event, asterisk[i], scope);
}
}
}
// Bail out if the key has no registered handlers
if (!(key in _handlers)) return;
for (let i = 0; i < _handlers[key].length; i++) {
if (
(event.type === 'keydown' && _handlers[key][i].keydown)
|| (event.type === 'keyup' && _handlers[key][i].keyup)
) {
if (_handlers[key][i].key) {
const record = _handlers[key][i];
const { splitKey } = record;
const keyShortcut = record.key.split(splitKey);
const _downKeysCurrent = []; // key codes that make up this shortcut
for (let a = 0; a < keyShortcut.length; a++) {
_downKeysCurrent.push(code(keyShortcut[a]));
}
if (_downKeysCurrent.sort().join('') === _downKeys.sort().join('')) {
// the shortcut matches the pressed keys; run the handler
eventHandler(event, record, scope);
}
}
}
}
}
// Check whether the element already has key events bound
function isElementBind(element: Document) {
return elementHasBindEvent.indexOf(element) > -1;
}
var hotkeys: Hotkeys = (key: any, option: any, method?: any): void => {
_downKeys = [];
const keys = getKeys(key as string); // list of shortcuts to register
let mods = [];
let scope = 'all'; // scope defaults to 'all', meaning valid everywhere
let element = document; // node the key events are bound to
let i = 0;
let keyup = false;
let keydown = true;
let splitKey = '+';
// Handle the shorthand call where option is the handler function
if (method === undefined && typeof option === 'function') {
method = option;
}
if (Object.prototype.toString.call(option) === '[object Object]') {
if (option.scope) scope = option.scope; // eslint-disable-line
if (option.element) element = option.element; // eslint-disable-line
if (option.keyup) keyup = option.keyup; // eslint-disable-line
if (option.keydown !== undefined) keydown = option.keydown; // eslint-disable-line
if (typeof option.splitKey === 'string') splitKey = option.splitKey; // eslint-disable-line
}
if (typeof option === 'string') scope = option;
// Register each shortcut
for (; i < keys.length; i++) {
key = keys[i].split(splitKey); // keys that make up this shortcut
mods = [];
// for combination shortcuts, collect the modifier key codes
if (key.length > 1) mods = getMods(_modifier, key);
// convert the non-modifier key to its key code
key = key[key.length - 1];
key = key === '*' ? '*' : code(key as string); // '*' matches every shortcut
// initialize the handler list for this key if necessary
if (!(key in _handlers)) _handlers[key] = [];
_handlers[key].push({
keyup,
keydown,
scope,
mods,
shortcut: keys[i],
method,
key: keys[i],
splitKey,
});
}
// Bind the listeners on the target element (the global document by default)
if (typeof element !== 'undefined' && !isElementBind(element) && window) {
elementHasBindEvent.push(element);
addEvent(element, 'keydown', (e: Event) => {
dispatch(e);
});
addEvent(window, 'focus', () => {
_downKeys = [];
});
addEvent(element, 'keyup', (e: Event) => {
dispatch(e);
clearModifier(e);
});
}
};
const _api = {
setScope,
getScope,
deleteScope,
getPressedKeyCodes,
isPressed,
filter,
unbind,
};
for (const a in _api) {
if (Object.prototype.hasOwnProperty.call(_api, a)) {
hotkeys[a] = _api[a as keyof typeof _api];
}
}
if (typeof window !== 'undefined') {
const _hotkeys = window.hotkeys;
hotkeys.noConflict = (deep: any) => {
if (deep && window.hotkeys === hotkeys) {
window.hotkeys = _hotkeys;
}
return hotkeys;
};
window.hotkeys = hotkeys;
}
export default hotkeys;
Omar first came under fire for an anti-Israel tweet sent in 2012, long before making history in the 2018 midterms. But the controversy deepened after she tweeted that the American Israel Public Affairs Committee (AIPAC) buys the support of lawmakers.
A tweet published late Friday afternoon by President Donald Trump stirred up more controversy and reaction from Democrats who found the message to be offensive.
McConnell slams Omar's "anti-Semitic slurs"
U.S. Senate majority leader Mitch McConnell on Tuesday took a veiled swipe at controversial Democratic Minnesota Congresswoman Ilhan Omar for comments of hers some viewed as anti-Semitic. Rough Cut (no..
Jeanine Pirro's Fox News Program Suspended After Comments on Rep. Ilhan Omar The network has not revealed when the show will be back on the air. Fox News, via 'USA Today' Pirro stirred major..
According to Business Insider, President Donald Trump demanded that Fox News reinstate Jeanine Pirro's show after the network didn't air a new episode Saturday night. Fox News declined to comment on.. |
South Korea’s acting leader has rejected a request to extend an investigation into the country’s biggest scandal in decades that led to President Park Geun-hye’s impeachment. A special investigation team led by independent counsel Park Young-soo was launched in December to investigate allegations Park let her longtime confidante meddle in state affairs and extort money from businesses.
The team, whose investigation is by law to end tomorrow, had asked acting leader and Prime Minister Hwang Kyo-ahn to allow 30 more days of investigations.
But Hwang’s office said today it rejected the request because key suspects implicated in the scandal have already been indicted.
It said a longer investigation could sway a possible presidential election that would be held if the Constitutional Court approves President Park Geun-hye’s impeachment. |
/*
* Copyright (c) 2017 <NAME>
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
package de.halirutan.mathematica.settings;
import com.intellij.ui.IdeBorderFactory;
import de.halirutan.mathematica.settings.MathematicaSettings.SmartEnterResult;
import javax.swing.*;
import java.awt.*;
/**
* The UI that is shown under Settings -> Languages -> Mathematica
* @author patrick (01.12.16).
*/
@SuppressWarnings("InstanceVariableNamingConvention")
class SettingsUI extends JPanel {
private JCheckBox insertTemplate;
private JCheckBox insertAsCode;
private JCheckBox insertBraces;
private JCheckBox sortByImportance;
private JCheckBox sortByName;
SettingsUI() {
init();
}
public static void main(String[] args) {
JFrame frame = new JFrame("Settings Test");
frame.setDefaultCloseOperation(WindowConstants.EXIT_ON_CLOSE);
JPanel panel = new JPanel();
SettingsUI ui = new SettingsUI();
ui.setSettings(new MathematicaSettings());
panel.add(ui, BorderLayout.CENTER);
frame.getContentPane().add(panel);
frame.setSize(450, 450);
frame.setVisible(true);
}
private void init() {
setLayout(new BorderLayout());
JPanel panel = this;
insertAsCode = new JCheckBox("Insert template arguments as code");
insertAsCode.setMnemonic('C');
insertBraces = new JCheckBox("Insert braces only");
insertBraces.setMnemonic('B');
insertTemplate = new JCheckBox("Insert template arguments as LiveTemplate");
insertTemplate.setMnemonic('T');
ButtonGroup g1 = new ButtonGroup();
g1.add(insertBraces);
g1.add(insertAsCode);
g1.add(insertTemplate);
JPanel insertPanel = new JPanel(new BorderLayout());
insertPanel.setBorder(IdeBorderFactory.createTitledBorder("Insert completion on SmartEnter"));
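// The add() calls below nest fresh BorderLayout panels, reassigning 'panel'
// and 'insertPanel' to the inner panel each time, so that every checkbox is
// pinned to the top of its own row instead of being stretched by the layout.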
panel.add(panel = new JPanel(new BorderLayout()), BorderLayout.NORTH);
panel.add(insertPanel, BorderLayout.NORTH);
insertPanel.add(insertTemplate, BorderLayout.NORTH);
insertPanel.add(insertPanel = new JPanel(new BorderLayout()), BorderLayout.SOUTH);
insertPanel.add(insertAsCode, BorderLayout.NORTH);
insertPanel.add(insertPanel = new JPanel(new BorderLayout()), BorderLayout.SOUTH);
insertPanel.add(insertBraces);
sortByImportance = new JCheckBox("Sort by importance");
sortByImportance.setMnemonic('I');
sortByName = new JCheckBox("Sort by name");
sortByName.setMnemonic('N');
ButtonGroup g2 = new ButtonGroup();
g2.add(sortByImportance);
g2.add(sortByName);
JPanel sortPanel = new JPanel(new BorderLayout());
sortPanel.setBorder(IdeBorderFactory.createTitledBorder("Sorting of completion entries"));
panel.add(panel = new JPanel(new BorderLayout()), BorderLayout.SOUTH);
panel.add(sortPanel, BorderLayout.SOUTH);
sortPanel.add(sortByImportance, BorderLayout.NORTH);
sortPanel.add(sortByName, BorderLayout.SOUTH);
}
public MathematicaSettings getSettings() {
final MathematicaSettings settings = new MathematicaSettings();
if (insertAsCode.isSelected()) {
settings.setSmartEnterResult(SmartEnterResult.INSERT_CODE);
} else if (insertBraces.isSelected()) {
settings.setSmartEnterResult(SmartEnterResult.INSERT_BRACES);
} else if (insertTemplate.isSelected()) {
settings.setSmartEnterResult(SmartEnterResult.INSERT_TEMPLATE);
}
settings.setSortCompletionEntriesLexicographically(sortByName.isSelected());
return settings;
}
public void setSettings(MathematicaSettings settings) {
insertBraces.setSelected(
settings.getSmartEnterResult().equals(SmartEnterResult.INSERT_BRACES));
insertAsCode.setSelected(
settings.getSmartEnterResult().equals(SmartEnterResult.INSERT_CODE)
);
insertTemplate.setSelected(
settings.getSmartEnterResult().equals(SmartEnterResult.INSERT_TEMPLATE)
);
sortByName.setSelected(
settings.isSortCompletionEntriesLexicographically()
);
sortByImportance.setSelected(
!settings.isSortCompletionEntriesLexicographically()
);
}
}
|
Oxide-based platform for reconfigurable superconducting nanoelectronics

We report quasi-1D superconductivity at the interface of LaAlO3 and SrTiO3. The material system and nanostructure fabrication method supply a new platform for superconducting nanoelectronics. Nanostructures having line widths w ∼ 10 nm are formed from the parent two-dimensional electron liquid using conductive atomic force microscope lithography. Nanowire cross-sections are small compared to the superconducting coherence length in LaAlO3/SrTiO3, placing them in the quasi-1D regime. Broad superconducting transitions versus temperature and finite resistances in the superconducting state well below Tc ≈ 200 mK are observed, suggesting the presence of fluctuation- and heating-induced resistance. The superconducting resistances and V-I characteristics are tunable through the use of a back gate. Four-terminal resistances in the superconducting state show an unusual dependence on the current path, varying by as much as an order of magnitude. This new technology, i.e., the ability to write gate-tunable superconducting nanostructures on an insulating LaAlO3/SrTiO3 canvas, opens possibilities for the development of new families of reconfigurable superconducting nanoelectronics. |
use std::cell::RefCell;
use std::collections::{BTreeSet, HashSet};
use std::fmt;
use std::hash;
use std::iter::FromIterator;
use std::ops;
use std::rc::Rc;
use crate::cdsl::types::{LaneType, ReferenceType, SpecialType, ValueType};
const MAX_LANES: u16 = 256;
const MAX_BITS: u16 = 128;
const MAX_FLOAT_BITS: u16 = 64;
/// Type variables can be used in place of concrete types when defining
/// instructions. This makes the instructions *polymorphic*.
///
/// A type variable is restricted to vary over a subset of the value types.
/// This subset is specified by a set of flags that control the permitted base
/// types and whether the type variable can assume scalar or vector types, or
/// both.
#[derive(Debug)]
pub(crate) struct TypeVarContent {
/// Short name of type variable used in instruction descriptions.
pub name: String,
/// Documentation string.
pub doc: String,
/// Type set associated to the type variable.
/// This field must remain private; use `get_typeset()` or `get_raw_typeset()` to get the
/// information you want.
type_set: TypeSet,
pub base: Option<TypeVarParent>,
}
#[derive(Clone, Debug)]
pub(crate) struct TypeVar {
content: Rc<RefCell<TypeVarContent>>,
}
impl TypeVar {
pub fn new(name: impl Into<String>, doc: impl Into<String>, type_set: TypeSet) -> Self {
Self {
content: Rc::new(RefCell::new(TypeVarContent {
name: name.into(),
doc: doc.into(),
type_set,
base: None,
})),
}
}
pub fn new_singleton(value_type: ValueType) -> Self {
let (name, doc) = (value_type.to_string(), value_type.doc());
let mut builder = TypeSetBuilder::new();
let (scalar_type, num_lanes) = match value_type {
ValueType::Special(special_type) => {
return TypeVar::new(name, doc, builder.specials(vec![special_type]).build());
}
ValueType::Reference(ReferenceType(reference_type)) => {
let bits = reference_type as RangeBound;
return TypeVar::new(name, doc, builder.refs(bits..bits).build());
}
ValueType::Lane(lane_type) => (lane_type, 1),
ValueType::Vector(vec_type) => {
(vec_type.lane_type(), vec_type.lane_count() as RangeBound)
}
};
builder = builder.simd_lanes(num_lanes..num_lanes);
let builder = match scalar_type {
LaneType::Int(int_type) => {
let bits = int_type as RangeBound;
builder.ints(bits..bits)
}
LaneType::Float(float_type) => {
let bits = float_type as RangeBound;
builder.floats(bits..bits)
}
LaneType::Bool(bool_type) => {
let bits = bool_type as RangeBound;
builder.bools(bits..bits)
}
};
TypeVar::new(name, doc, builder.build())
}
/// Get a fresh copy of self, named after `name`. Can only be called on non-derived typevars.
pub fn copy_from(other: &TypeVar, name: String) -> TypeVar {
assert!(
other.base.is_none(),
"copy_from() can only be called on non-derived type variables"
);
TypeVar {
content: Rc::new(RefCell::new(TypeVarContent {
name,
doc: "".into(),
type_set: other.type_set.clone(),
base: None,
})),
}
}
/// Returns the typeset for this TV. If the TV is derived, computes it recursively from the
/// derived function and the base's typeset.
/// Note this can't be done non-lazily in the constructor, because the TypeSet of the base may
/// change over time.
pub fn get_typeset(&self) -> TypeSet {
match &self.base {
Some(base) => base.type_var.get_typeset().image(base.derived_func),
None => self.type_set.clone(),
}
}
/// Returns this typevar's type set, assuming this type var has no parent.
pub fn get_raw_typeset(&self) -> &TypeSet {
assert_eq!(self.type_set, self.get_typeset());
&self.type_set
}
/// If the associated typeset has a single type return it. Otherwise return None.
pub fn singleton_type(&self) -> Option<ValueType> {
let type_set = self.get_typeset();
if type_set.size() == 1 {
Some(type_set.get_singleton())
} else {
None
}
}
/// Get the free type variable controlling this one.
pub fn free_typevar(&self) -> Option<TypeVar> {
match &self.base {
Some(base) => base.type_var.free_typevar(),
None => {
match self.singleton_type() {
// A singleton type isn't a proper free variable.
Some(_) => None,
None => Some(self.clone()),
}
}
}
}
/// Create a type variable that is a function of another.
pub fn derived(&self, derived_func: DerivedFunc) -> TypeVar {
let ts = self.get_typeset();
// Safety checks to avoid over/underflows.
assert!(ts.specials.is_empty(), "can't derive from special types");
match derived_func {
DerivedFunc::HalfWidth => {
assert!(
ts.ints.is_empty() || *ts.ints.iter().min().unwrap() > 8,
"can't halve all integer types"
);
assert!(
ts.floats.is_empty() || *ts.floats.iter().min().unwrap() > 32,
"can't halve all float types"
);
assert!(
ts.bools.is_empty() || *ts.bools.iter().min().unwrap() > 8,
"can't halve all boolean types"
);
}
DerivedFunc::DoubleWidth => {
assert!(
ts.ints.is_empty() || *ts.ints.iter().max().unwrap() < MAX_BITS,
"can't double all integer types"
);
assert!(
ts.floats.is_empty() || *ts.floats.iter().max().unwrap() < MAX_FLOAT_BITS,
"can't double all float types"
);
assert!(
ts.bools.is_empty() || *ts.bools.iter().max().unwrap() < MAX_BITS,
"can't double all boolean types"
);
}
DerivedFunc::HalfVector => {
assert!(
*ts.lanes.iter().min().unwrap() > 1,
"can't halve a scalar type"
);
}
DerivedFunc::DoubleVector => {
assert!(
*ts.lanes.iter().max().unwrap() < MAX_LANES,
"can't double 256 lanes"
);
}
DerivedFunc::SplitLanes => {
assert!(
ts.ints.is_empty() || *ts.ints.iter().min().unwrap() > 8,
"can't halve all integer types"
);
assert!(
ts.floats.is_empty() || *ts.floats.iter().min().unwrap() > 32,
"can't halve all float types"
);
assert!(
ts.bools.is_empty() || *ts.bools.iter().min().unwrap() > 8,
"can't halve all boolean types"
);
assert!(
*ts.lanes.iter().max().unwrap() < MAX_LANES,
"can't double 256 lanes"
);
}
DerivedFunc::MergeLanes => {
assert!(
ts.ints.is_empty() || *ts.ints.iter().max().unwrap() < MAX_BITS,
"can't double all integer types"
);
assert!(
ts.floats.is_empty() || *ts.floats.iter().max().unwrap() < MAX_FLOAT_BITS,
"can't double all float types"
);
assert!(
ts.bools.is_empty() || *ts.bools.iter().max().unwrap() < MAX_BITS,
"can't double all boolean types"
);
assert!(
*ts.lanes.iter().min().unwrap() > 1,
"can't halve a scalar type"
);
}
DerivedFunc::LaneOf | DerivedFunc::AsBool => { /* no particular assertions */ }
}
TypeVar {
content: Rc::new(RefCell::new(TypeVarContent {
name: format!("{}({})", derived_func.name(), self.name),
doc: "".into(),
type_set: ts,
base: Some(TypeVarParent {
type_var: self.clone(),
derived_func,
}),
})),
}
}
pub fn lane_of(&self) -> TypeVar {
self.derived(DerivedFunc::LaneOf)
}
pub fn as_bool(&self) -> TypeVar {
self.derived(DerivedFunc::AsBool)
}
pub fn half_width(&self) -> TypeVar {
self.derived(DerivedFunc::HalfWidth)
}
pub fn double_width(&self) -> TypeVar {
self.derived(DerivedFunc::DoubleWidth)
}
pub fn half_vector(&self) -> TypeVar {
self.derived(DerivedFunc::HalfVector)
}
pub fn double_vector(&self) -> TypeVar {
self.derived(DerivedFunc::DoubleVector)
}
pub fn split_lanes(&self) -> TypeVar {
self.derived(DerivedFunc::SplitLanes)
}
pub fn merge_lanes(&self) -> TypeVar {
self.derived(DerivedFunc::MergeLanes)
}
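// For example (illustrative): for a type variable ranging over i32x4,
// `split_lanes()` (= half_width + double_vector) maps i32x4 -> i16x8, and
// `merge_lanes()` is its inverse, mapping i16x8 back to i32x4.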
/// Constrain the range of types this variable can assume to a subset of those in the typeset
/// ts.
/// May mutate itself if it's not derived, or its parent if it is.
pub fn constrain_types_by_ts(&self, type_set: TypeSet) {
match &self.base {
Some(base) => {
base.type_var
.constrain_types_by_ts(type_set.preimage(base.derived_func));
}
None => {
self.content
.borrow_mut()
.type_set
.inplace_intersect_with(&type_set);
}
}
}
/// Constrain the range of types this variable can assume to a subset of those `other` can
/// assume.
/// May mutate itself if it's not derived, or its parent if it is.
pub fn constrain_types(&self, other: TypeVar) {
if self == &other {
return;
}
self.constrain_types_by_ts(other.get_typeset());
}
/// Get a Rust expression that computes the type of this type variable.
pub fn to_rust_code(&self) -> String {
match &self.base {
Some(base) => format!(
"{}.{}().unwrap()",
base.type_var.to_rust_code(),
base.derived_func.name()
),
None => {
if let Some(singleton) = self.singleton_type() {
singleton.rust_name()
} else {
self.name.clone()
}
}
}
}
}
impl Into<TypeVar> for &TypeVar {
fn into(self) -> TypeVar {
self.clone()
}
}
impl Into<TypeVar> for ValueType {
fn into(self) -> TypeVar {
TypeVar::new_singleton(self)
}
}
// Hash TypeVars by pointers.
// There might be a better way to do this, but since TypeVar's content (namely TypeSet) can be
// mutated, it makes sense to use pointer equality/hashing here.
impl hash::Hash for TypeVar {
fn hash<H: hash::Hasher>(&self, h: &mut H) {
match &self.base {
Some(base) => {
base.type_var.hash(h);
base.derived_func.hash(h);
}
None => {
(&**self as *const TypeVarContent).hash(h);
}
}
}
}
impl PartialEq for TypeVar {
fn eq(&self, other: &TypeVar) -> bool {
match (&self.base, &other.base) {
(Some(base1), Some(base2)) => {
base1.type_var.eq(&base2.type_var) && base1.derived_func == base2.derived_func
}
(None, None) => Rc::ptr_eq(&self.content, &other.content),
_ => false,
}
}
}
// Allow TypeVar as map keys, based on pointer equality (see also above PartialEq impl).
impl Eq for TypeVar {}
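// NOTE: this Deref reads through the RefCell via a raw pointer, bypassing the
// dynamic borrow check; it is sound only while no mutable borrow of `content`
// is live for the lifetime of the returned reference.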
impl ops::Deref for TypeVar {
type Target = TypeVarContent;
fn deref(&self) -> &Self::Target {
unsafe { self.content.as_ptr().as_ref().unwrap() }
}
}
#[derive(Clone, Copy, Debug, Hash, PartialEq)]
pub(crate) enum DerivedFunc {
LaneOf,
AsBool,
HalfWidth,
DoubleWidth,
HalfVector,
DoubleVector,
SplitLanes,
MergeLanes,
}
impl DerivedFunc {
pub fn name(self) -> &'static str {
match self {
DerivedFunc::LaneOf => "lane_of",
DerivedFunc::AsBool => "as_bool",
DerivedFunc::HalfWidth => "half_width",
DerivedFunc::DoubleWidth => "double_width",
DerivedFunc::HalfVector => "half_vector",
DerivedFunc::DoubleVector => "double_vector",
DerivedFunc::SplitLanes => "split_lanes",
DerivedFunc::MergeLanes => "merge_lanes",
}
}
/// Returns the inverse function of this one, if it is a bijection.
pub fn inverse(self) -> Option<DerivedFunc> {
match self {
DerivedFunc::HalfWidth => Some(DerivedFunc::DoubleWidth),
DerivedFunc::DoubleWidth => Some(DerivedFunc::HalfWidth),
DerivedFunc::HalfVector => Some(DerivedFunc::DoubleVector),
DerivedFunc::DoubleVector => Some(DerivedFunc::HalfVector),
DerivedFunc::MergeLanes => Some(DerivedFunc::SplitLanes),
DerivedFunc::SplitLanes => Some(DerivedFunc::MergeLanes),
_ => None,
}
}
}
#[derive(Debug, Hash)]
pub(crate) struct TypeVarParent {
pub type_var: TypeVar,
pub derived_func: DerivedFunc,
}
/// A set of types.
///
/// We don't allow arbitrary subsets of types, but use a parametrized approach
/// instead.
///
/// Objects of this class can be used as dictionary keys.
///
/// Parametrized type sets are specified in terms of ranges:
/// - The permitted range of vector lanes, where 1 indicates a scalar type.
/// - The permitted range of integer types.
/// - The permitted range of floating point types, and
/// - The permitted range of boolean types.
///
/// The ranges are inclusive from smallest bit-width to largest bit-width.
///
/// Finally, a type set can contain special types (derived from `SpecialType`)
/// which can't appear as lane types.
type RangeBound = u16;
type Range = ops::Range<RangeBound>;
type NumSet = BTreeSet<RangeBound>;
macro_rules! num_set {
($($expr:expr),*) => {
NumSet::from_iter(vec![$($expr),*])
};
}
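// For example (illustrative): TypeSetBuilder::new().ints(8..32).simd_lanes(1..4).build()
// yields lanes = {1, 2, 4} and ints = {8, 16, 32}, i.e. the scalar types
// i8/i16/i32 together with their 2- and 4-lane vector forms.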
#[derive(Clone, PartialEq, Eq, Hash)]
pub(crate) struct TypeSet {
pub lanes: NumSet,
pub ints: NumSet,
pub floats: NumSet,
pub bools: NumSet,
pub refs: NumSet,
pub specials: Vec<SpecialType>,
}
impl TypeSet {
fn new(
lanes: NumSet,
ints: NumSet,
floats: NumSet,
bools: NumSet,
refs: NumSet,
specials: Vec<SpecialType>,
) -> Self {
Self {
lanes,
ints,
floats,
bools,
refs,
specials,
}
}
/// Return the number of concrete types represented by this typeset.
pub fn size(&self) -> usize {
self.lanes.len()
* (self.ints.len() + self.floats.len() + self.bools.len() + self.refs.len())
+ self.specials.len()
}
/// Return the image of self across the derived function func.
fn image(&self, derived_func: DerivedFunc) -> TypeSet {
match derived_func {
DerivedFunc::LaneOf => self.lane_of(),
DerivedFunc::AsBool => self.as_bool(),
DerivedFunc::HalfWidth => self.half_width(),
DerivedFunc::DoubleWidth => self.double_width(),
DerivedFunc::HalfVector => self.half_vector(),
DerivedFunc::DoubleVector => self.double_vector(),
DerivedFunc::SplitLanes => self.half_width().double_vector(),
DerivedFunc::MergeLanes => self.double_width().half_vector(),
}
}
/// Return a TypeSet describing the image of self across lane_of.
fn lane_of(&self) -> TypeSet {
let mut copy = self.clone();
copy.lanes = num_set![1];
copy
}
/// Return a TypeSet describing the image of self across as_bool.
fn as_bool(&self) -> TypeSet {
let mut copy = self.clone();
copy.ints = NumSet::new();
copy.floats = NumSet::new();
copy.refs = NumSet::new();
if !(&self.lanes - &num_set![1]).is_empty() {
copy.bools = &self.ints | &self.floats;
copy.bools = &copy.bools | &self.bools;
}
if self.lanes.contains(&1) {
copy.bools.insert(1);
}
copy
}
/// Return a TypeSet describing the image of self across halfwidth.
fn half_width(&self) -> TypeSet {
let mut copy = self.clone();
copy.ints = NumSet::from_iter(self.ints.iter().filter(|&&x| x > 8).map(|&x| x / 2));
copy.floats = NumSet::from_iter(self.floats.iter().filter(|&&x| x > 32).map(|&x| x / 2));
copy.bools = NumSet::from_iter(self.bools.iter().filter(|&&x| x > 8).map(|&x| x / 2));
copy.specials = Vec::new();
copy
}
/// Return a TypeSet describing the image of self across doublewidth.
fn double_width(&self) -> TypeSet {
let mut copy = self.clone();
copy.ints = NumSet::from_iter(self.ints.iter().filter(|&&x| x < MAX_BITS).map(|&x| x * 2));
copy.floats = NumSet::from_iter(
self.floats
.iter()
.filter(|&&x| x < MAX_FLOAT_BITS)
.map(|&x| x * 2),
);
copy.bools = NumSet::from_iter(
self.bools
.iter()
.filter(|&&x| x < MAX_BITS)
.map(|&x| x * 2)
.filter(|x| legal_bool(*x)),
);
copy.specials = Vec::new();
copy
}
/// Return a TypeSet describing the image of self across halfvector.
fn half_vector(&self) -> TypeSet {
let mut copy = self.clone();
copy.lanes = NumSet::from_iter(self.lanes.iter().filter(|&&x| x > 1).map(|&x| x / 2));
copy.specials = Vec::new();
copy
}
/// Return a TypeSet describing the image of self across doublevector.
fn double_vector(&self) -> TypeSet {
let mut copy = self.clone();
copy.lanes = NumSet::from_iter(
self.lanes
.iter()
.filter(|&&x| x < MAX_LANES)
.map(|&x| x * 2),
);
copy.specials = Vec::new();
copy
}
fn concrete_types(&self) -> Vec<ValueType> {
let mut ret = Vec::new();
for &num_lanes in &self.lanes {
for &bits in &self.ints {
ret.push(LaneType::int_from_bits(bits).by(num_lanes));
}
for &bits in &self.floats {
ret.push(LaneType::float_from_bits(bits).by(num_lanes));
}
for &bits in &self.bools {
ret.push(LaneType::bool_from_bits(bits).by(num_lanes));
}
for &bits in &self.refs {
ret.push(ReferenceType::ref_from_bits(bits).into());
}
}
for &special in &self.specials {
ret.push(special.into());
}
ret
}
/// Return the singleton type represented by self. Can only call on typesets containing 1 type.
fn get_singleton(&self) -> ValueType {
let mut types = self.concrete_types();
assert_eq!(types.len(), 1);
types.remove(0)
}
/// Return the inverse image of self across the derived function func.
fn preimage(&self, func: DerivedFunc) -> TypeSet {
if self.size() == 0 {
// The inverse of the empty set is itself.
return self.clone();
}
match func {
DerivedFunc::LaneOf => {
let mut copy = self.clone();
copy.lanes =
NumSet::from_iter((0..=MAX_LANES.trailing_zeros()).map(|i| u16::pow(2, i)));
copy
}
DerivedFunc::AsBool => {
let mut copy = self.clone();
if self.bools.contains(&1) {
copy.ints = NumSet::from_iter(vec![8, 16, 32, 64, 128]);
copy.floats = NumSet::from_iter(vec![32, 64]);
} else {
copy.ints = &self.bools - &NumSet::from_iter(vec![1]);
copy.floats = &self.bools & &NumSet::from_iter(vec![32, 64]);
// If b1 is not in our typeset, then lanes=1 cannot be in the pre-image, as
// as_bool() of scalars is always b1.
copy.lanes = &self.lanes - &NumSet::from_iter(vec![1]);
}
copy
}
DerivedFunc::HalfWidth => self.double_width(),
DerivedFunc::DoubleWidth => self.half_width(),
DerivedFunc::HalfVector => self.double_vector(),
DerivedFunc::DoubleVector => self.half_vector(),
DerivedFunc::SplitLanes => self.double_width().half_vector(),
DerivedFunc::MergeLanes => self.half_width().double_vector(),
}
}
pub fn inplace_intersect_with(&mut self, other: &TypeSet) {
self.lanes = &self.lanes & &other.lanes;
self.ints = &self.ints & &other.ints;
self.floats = &self.floats & &other.floats;
self.bools = &self.bools & &other.bools;
self.refs = &self.refs & &other.refs;
let mut new_specials = Vec::new();
for spec in &self.specials {
if let Some(spec) = other.specials.iter().find(|&other_spec| other_spec == spec) {
new_specials.push(*spec);
}
}
self.specials = new_specials;
}
pub fn is_subset(&self, other: &TypeSet) -> bool {
self.lanes.is_subset(&other.lanes)
&& self.ints.is_subset(&other.ints)
&& self.floats.is_subset(&other.floats)
&& self.bools.is_subset(&other.bools)
&& self.refs.is_subset(&other.refs)
&& {
let specials: HashSet<SpecialType> = HashSet::from_iter(self.specials.clone());
let other_specials = HashSet::from_iter(other.specials.clone());
specials.is_subset(&other_specials)
}
}
pub fn is_wider_or_equal(&self, other: &TypeSet) -> bool {
set_wider_or_equal(&self.ints, &other.ints)
&& set_wider_or_equal(&self.floats, &other.floats)
&& set_wider_or_equal(&self.bools, &other.bools)
&& set_wider_or_equal(&self.refs, &other.refs)
}
pub fn is_narrower(&self, other: &TypeSet) -> bool {
set_narrower(&self.ints, &other.ints)
&& set_narrower(&self.floats, &other.floats)
&& set_narrower(&self.bools, &other.bools)
&& set_narrower(&self.refs, &other.refs)
}
}
fn set_wider_or_equal(s1: &NumSet, s2: &NumSet) -> bool {
!s1.is_empty() && !s2.is_empty() && s1.iter().min() >= s2.iter().max()
}
fn set_narrower(s1: &NumSet, s2: &NumSet) -> bool {
!s1.is_empty() && !s2.is_empty() && s1.iter().min() < s2.iter().max()
}
impl fmt::Debug for TypeSet {
fn fmt(&self, fmt: &mut fmt::Formatter) -> Result<(), fmt::Error> {
write!(fmt, "TypeSet(")?;
let mut subsets = Vec::new();
if !self.lanes.is_empty() {
subsets.push(format!(
"lanes={{{}}}",
Vec::from_iter(self.lanes.iter().map(|x| x.to_string())).join(", ")
));
}
if !self.ints.is_empty() {
subsets.push(format!(
"ints={{{}}}",
Vec::from_iter(self.ints.iter().map(|x| x.to_string())).join(", ")
));
}
if !self.floats.is_empty() {
subsets.push(format!(
"floats={{{}}}",
Vec::from_iter(self.floats.iter().map(|x| x.to_string())).join(", ")
));
}
if !self.bools.is_empty() {
subsets.push(format!(
"bools={{{}}}",
Vec::from_iter(self.bools.iter().map(|x| x.to_string())).join(", ")
));
}
if !self.refs.is_empty() {
subsets.push(format!(
"refs={{{}}}",
Vec::from_iter(self.refs.iter().map(|x| x.to_string())).join(", ")
));
}
if !self.specials.is_empty() {
subsets.push(format!(
"specials={{{}}}",
Vec::from_iter(self.specials.iter().map(|x| x.to_string())).join(", ")
));
}
write!(fmt, "{})", subsets.join(", "))?;
Ok(())
}
}
pub(crate) struct TypeSetBuilder {
ints: Interval,
floats: Interval,
bools: Interval,
refs: Interval,
includes_scalars: bool,
simd_lanes: Interval,
specials: Vec<SpecialType>,
}
impl TypeSetBuilder {
pub fn new() -> Self {
Self {
ints: Interval::None,
floats: Interval::None,
bools: Interval::None,
refs: Interval::None,
includes_scalars: true,
simd_lanes: Interval::None,
specials: Vec::new(),
}
}
pub fn ints(mut self, interval: impl Into<Interval>) -> Self {
assert!(self.ints == Interval::None);
self.ints = interval.into();
self
}
pub fn floats(mut self, interval: impl Into<Interval>) -> Self {
assert!(self.floats == Interval::None);
self.floats = interval.into();
self
}
pub fn bools(mut self, interval: impl Into<Interval>) -> Self {
assert!(self.bools == Interval::None);
self.bools = interval.into();
self
}
pub fn refs(mut self, interval: impl Into<Interval>) -> Self {
assert!(self.refs == Interval::None);
self.refs = interval.into();
self
}
pub fn includes_scalars(mut self, includes_scalars: bool) -> Self {
self.includes_scalars = includes_scalars;
self
}
pub fn simd_lanes(mut self, interval: impl Into<Interval>) -> Self {
assert!(self.simd_lanes == Interval::None);
self.simd_lanes = interval.into();
self
}
pub fn specials(mut self, specials: Vec<SpecialType>) -> Self {
assert!(self.specials.is_empty());
self.specials = specials;
self
}
pub fn build(self) -> TypeSet {
let min_lanes = if self.includes_scalars { 1 } else { 2 };
let bools = range_to_set(self.bools.to_range(1..MAX_BITS, None))
.into_iter()
.filter(|x| legal_bool(*x))
.collect();
TypeSet::new(
range_to_set(self.simd_lanes.to_range(min_lanes..MAX_LANES, Some(1))),
range_to_set(self.ints.to_range(8..MAX_BITS, None)),
range_to_set(self.floats.to_range(32..64, None)),
bools,
range_to_set(self.refs.to_range(32..64, None)),
self.specials,
)
}
pub fn all() -> TypeSet {
TypeSetBuilder::new()
.ints(Interval::All)
.floats(Interval::All)
.bools(Interval::All)
.refs(Interval::All)
.simd_lanes(Interval::All)
.specials(ValueType::all_special_types().collect())
.includes_scalars(true)
.build()
}
}
#[derive(PartialEq)]
pub(crate) enum Interval {
None,
All,
Range(Range),
}
impl Interval {
fn to_range(&self, full_range: Range, default: Option<RangeBound>) -> Option<Range> {
match self {
Interval::None => {
if let Some(default_val) = default {
Some(default_val..default_val)
} else {
None
}
}
Interval::All => Some(full_range),
Interval::Range(range) => {
let (low, high) = (range.start, range.end);
assert!(low.is_power_of_two());
assert!(high.is_power_of_two());
assert!(low <= high);
assert!(low >= full_range.start);
assert!(high <= full_range.end);
Some(low..high)
}
}
}
}
impl Into<Interval> for Range {
fn into(self) -> Interval {
Interval::Range(self)
}
}
fn legal_bool(bits: RangeBound) -> bool {
// Only allow legal bit widths for bool types.
bits == 1 || (bits >= 8 && bits <= MAX_BITS && bits.is_power_of_two())
}
/// Generates a set with all the powers of two included in the range.
fn range_to_set(range: Option<Range>) -> NumSet {
let mut set = NumSet::new();
let (low, high) = match range {
Some(range) => (range.start, range.end),
None => return set,
};
assert!(low.is_power_of_two());
assert!(high.is_power_of_two());
assert!(low <= high);
for i in low.trailing_zeros()..=high.trailing_zeros() {
assert!(1 << i <= RangeBound::max_value());
set.insert(1 << i);
}
set
}
#[test]
fn test_typevar_builder() {
let type_set = TypeSetBuilder::new().ints(Interval::All).build();
assert_eq!(type_set.lanes, num_set![1]);
assert!(type_set.floats.is_empty());
assert_eq!(type_set.ints, num_set![8, 16, 32, 64, 128]);
assert!(type_set.bools.is_empty());
assert!(type_set.specials.is_empty());
let type_set = TypeSetBuilder::new().bools(Interval::All).build();
assert_eq!(type_set.lanes, num_set![1]);
assert!(type_set.floats.is_empty());
assert!(type_set.ints.is_empty());
assert_eq!(type_set.bools, num_set![1, 8, 16, 32, 64, 128]);
assert!(type_set.specials.is_empty());
let type_set = TypeSetBuilder::new().floats(Interval::All).build();
assert_eq!(type_set.lanes, num_set![1]);
assert_eq!(type_set.floats, num_set![32, 64]);
assert!(type_set.ints.is_empty());
assert!(type_set.bools.is_empty());
assert!(type_set.specials.is_empty());
let type_set = TypeSetBuilder::new()
.floats(Interval::All)
.simd_lanes(Interval::All)
.includes_scalars(false)
.build();
assert_eq!(type_set.lanes, num_set![2, 4, 8, 16, 32, 64, 128, 256]);
assert_eq!(type_set.floats, num_set![32, 64]);
assert!(type_set.ints.is_empty());
assert!(type_set.bools.is_empty());
assert!(type_set.specials.is_empty());
let type_set = TypeSetBuilder::new()
.floats(Interval::All)
.simd_lanes(Interval::All)
.includes_scalars(true)
.build();
assert_eq!(type_set.lanes, num_set![1, 2, 4, 8, 16, 32, 64, 128, 256]);
assert_eq!(type_set.floats, num_set![32, 64]);
assert!(type_set.ints.is_empty());
assert!(type_set.bools.is_empty());
assert!(type_set.specials.is_empty());
let type_set = TypeSetBuilder::new().ints(16..64).build();
assert_eq!(type_set.lanes, num_set![1]);
assert_eq!(type_set.ints, num_set![16, 32, 64]);
assert!(type_set.floats.is_empty());
assert!(type_set.bools.is_empty());
assert!(type_set.specials.is_empty());
}
#[test]
#[should_panic]
fn test_typevar_builder_too_high_bound_panic() {
TypeSetBuilder::new().ints(16..2 * MAX_BITS).build();
}
#[test]
#[should_panic]
fn test_typevar_builder_inverted_bounds_panic() {
TypeSetBuilder::new().ints(32..16).build();
}
#[test]
fn test_as_bool() {
let a = TypeSetBuilder::new()
.simd_lanes(2..8)
.ints(8..8)
.floats(32..32)
.build();
assert_eq!(
a.lane_of(),
TypeSetBuilder::new().ints(8..8).floats(32..32).build()
);
// Test as_bool with disjoint intervals.
let mut a_as_bool = TypeSetBuilder::new().simd_lanes(2..8).build();
a_as_bool.bools = num_set![8, 32];
assert_eq!(a.as_bool(), a_as_bool);
let b = TypeSetBuilder::new()
.simd_lanes(1..8)
.ints(8..8)
.floats(32..32)
.build();
let mut b_as_bool = TypeSetBuilder::new().simd_lanes(1..8).build();
b_as_bool.bools = num_set![1, 8, 32];
assert_eq!(b.as_bool(), b_as_bool);
}
#[test]
fn test_forward_images() {
let empty_set = TypeSetBuilder::new().build();
// Half vector.
assert_eq!(
TypeSetBuilder::new()
.simd_lanes(1..32)
.build()
.half_vector(),
TypeSetBuilder::new().simd_lanes(1..16).build()
);
// Double vector.
assert_eq!(
TypeSetBuilder::new()
.simd_lanes(1..32)
.build()
.double_vector(),
TypeSetBuilder::new().simd_lanes(2..64).build()
);
assert_eq!(
TypeSetBuilder::new()
.simd_lanes(128..256)
.build()
.double_vector(),
TypeSetBuilder::new().simd_lanes(256..256).build()
);
// Half width.
assert_eq!(
TypeSetBuilder::new().ints(8..32).build().half_width(),
TypeSetBuilder::new().ints(8..16).build()
);
assert_eq!(
TypeSetBuilder::new().floats(32..32).build().half_width(),
empty_set
);
assert_eq!(
TypeSetBuilder::new().floats(32..64).build().half_width(),
TypeSetBuilder::new().floats(32..32).build()
);
assert_eq!(
TypeSetBuilder::new().bools(1..8).build().half_width(),
empty_set
);
assert_eq!(
TypeSetBuilder::new().bools(1..32).build().half_width(),
TypeSetBuilder::new().bools(8..16).build()
);
// Double width.
assert_eq!(
TypeSetBuilder::new().ints(8..32).build().double_width(),
TypeSetBuilder::new().ints(16..64).build()
);
assert_eq!(
TypeSetBuilder::new().ints(32..64).build().double_width(),
TypeSetBuilder::new().ints(64..128).build()
);
assert_eq!(
TypeSetBuilder::new().floats(32..32).build().double_width(),
TypeSetBuilder::new().floats(64..64).build()
);
assert_eq!(
TypeSetBuilder::new().floats(32..64).build().double_width(),
TypeSetBuilder::new().floats(64..64).build()
);
assert_eq!(
TypeSetBuilder::new().bools(1..16).build().double_width(),
TypeSetBuilder::new().bools(16..32).build()
);
assert_eq!(
TypeSetBuilder::new().bools(32..64).build().double_width(),
TypeSetBuilder::new().bools(64..128).build()
);
}
#[test]
fn test_backward_images() {
let empty_set = TypeSetBuilder::new().build();
// LaneOf.
assert_eq!(
TypeSetBuilder::new()
.simd_lanes(1..1)
.ints(8..8)
.floats(32..32)
.build()
.preimage(DerivedFunc::LaneOf),
TypeSetBuilder::new()
.simd_lanes(Interval::All)
.ints(8..8)
.floats(32..32)
.build()
);
assert_eq!(empty_set.preimage(DerivedFunc::LaneOf), empty_set);
// AsBool.
assert_eq!(
TypeSetBuilder::new()
.simd_lanes(1..4)
.bools(1..128)
.build()
.preimage(DerivedFunc::AsBool),
TypeSetBuilder::new()
.simd_lanes(1..4)
.ints(Interval::All)
.bools(Interval::All)
.floats(Interval::All)
.build()
);
// Double vector.
assert_eq!(
TypeSetBuilder::new()
.simd_lanes(1..1)
.ints(8..8)
.build()
.preimage(DerivedFunc::DoubleVector)
.size(),
0
);
assert_eq!(
TypeSetBuilder::new()
.simd_lanes(1..16)
.ints(8..16)
.floats(32..32)
.build()
.preimage(DerivedFunc::DoubleVector),
TypeSetBuilder::new()
.simd_lanes(1..8)
.ints(8..16)
.floats(32..32)
.build(),
);
// Half vector.
assert_eq!(
TypeSetBuilder::new()
.simd_lanes(256..256)
.ints(8..8)
.build()
.preimage(DerivedFunc::HalfVector)
.size(),
0
);
assert_eq!(
TypeSetBuilder::new()
.simd_lanes(64..128)
.bools(1..32)
.build()
.preimage(DerivedFunc::HalfVector),
TypeSetBuilder::new()
.simd_lanes(128..256)
.bools(1..32)
.build(),
);
// Half width.
assert_eq!(
TypeSetBuilder::new()
.ints(128..128)
.floats(64..64)
.bools(128..128)
.build()
.preimage(DerivedFunc::HalfWidth)
.size(),
0
);
assert_eq!(
TypeSetBuilder::new()
.simd_lanes(64..256)
.bools(1..64)
.build()
.preimage(DerivedFunc::HalfWidth),
TypeSetBuilder::new()
.simd_lanes(64..256)
.bools(16..128)
.build(),
);
// Double width.
assert_eq!(
TypeSetBuilder::new()
.ints(8..8)
.floats(32..32)
.bools(1..8)
.build()
.preimage(DerivedFunc::DoubleWidth)
.size(),
0
);
assert_eq!(
TypeSetBuilder::new()
.simd_lanes(1..16)
.ints(8..16)
.floats(32..64)
.build()
.preimage(DerivedFunc::DoubleWidth),
TypeSetBuilder::new()
.simd_lanes(1..16)
.ints(8..8)
.floats(32..32)
.build()
);
}
#[test]
#[should_panic]
fn test_typeset_singleton_panic_nonsingleton_types() {
TypeSetBuilder::new()
.ints(8..8)
.floats(32..32)
.build()
.get_singleton();
}
#[test]
#[should_panic]
fn test_typeset_singleton_panic_nonsingleton_lanes() {
TypeSetBuilder::new()
.simd_lanes(1..2)
.floats(32..32)
.build()
.get_singleton();
}
#[test]
fn test_typeset_singleton() {
use crate::shared::types as shared_types;
assert_eq!(
TypeSetBuilder::new().ints(16..16).build().get_singleton(),
ValueType::Lane(shared_types::Int::I16.into())
);
assert_eq!(
TypeSetBuilder::new().floats(64..64).build().get_singleton(),
ValueType::Lane(shared_types::Float::F64.into())
);
assert_eq!(
TypeSetBuilder::new().bools(1..1).build().get_singleton(),
ValueType::Lane(shared_types::Bool::B1.into())
);
assert_eq!(
TypeSetBuilder::new()
.simd_lanes(4..4)
.ints(32..32)
.build()
.get_singleton(),
LaneType::from(shared_types::Int::I32).by(4)
);
}
#[test]
fn test_typevar_functions() {
let x = TypeVar::new(
"x",
"i16 and up",
TypeSetBuilder::new().ints(16..64).build(),
);
assert_eq!(x.half_width().name, "half_width(x)");
assert_eq!(
x.half_width().double_width().name,
"double_width(half_width(x))"
);
let x = TypeVar::new("x", "up to i32", TypeSetBuilder::new().ints(8..32).build());
assert_eq!(x.double_width().name, "double_width(x)");
}
#[test]
fn test_typevar_singleton() {
use crate::cdsl::types::VectorType;
use crate::shared::types as shared_types;
// Test i32.
let typevar = TypeVar::new_singleton(ValueType::Lane(LaneType::Int(shared_types::Int::I32)));
assert_eq!(typevar.name, "i32");
assert_eq!(typevar.type_set.ints, num_set![32]);
assert!(typevar.type_set.floats.is_empty());
assert!(typevar.type_set.bools.is_empty());
assert!(typevar.type_set.specials.is_empty());
assert_eq!(typevar.type_set.lanes, num_set![1]);
// Test f32x4.
let typevar = TypeVar::new_singleton(ValueType::Vector(VectorType::new(
LaneType::Float(shared_types::Float::F32),
4,
)));
assert_eq!(typevar.name, "f32x4");
assert!(typevar.type_set.ints.is_empty());
assert_eq!(typevar.type_set.floats, num_set![32]);
assert_eq!(typevar.type_set.lanes, num_set![4]);
assert!(typevar.type_set.bools.is_empty());
assert!(typevar.type_set.specials.is_empty());
}
|
/**
* \brief Free a DNP3PointList.
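 *
 * \param group     DNP3 object group of the points in the list, used to free
 *                  group/variation specific point data.
 * \param variation DNP3 object variation of the points in the list.
 * \param list      The point list to free. Each point, its data, and the list
 *                  itself are freed.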
*/
void DNP3FreeObjectPointList(int group, int variation, DNP3PointList *list)
{
DNP3Point *point;
while ((point = TAILQ_FIRST(list)) != NULL) {
TAILQ_REMOVE(list, point, next);
if (point->data != NULL) {
DNP3FreeObjectPoint(group, variation, point->data);
}
SCFree(point);
}
SCFree(list);
} |
Athletic activities in the life of students and graduates in the German Democratic Republic

Athletic activities of students and post-graduates are studied from sociological-pedagogical points of view. Starting from theoretical considerations, the methodological procedures are characterized for 2,483 subjects. Results are presented and discussed concerning recreational athletic activities prior to, during, and after the time of studies, the organizational forms in which postgraduates engage in sports, their motives for going in for sports, and the interrelation between physical fitness and cardiovascular regulatory capacity. Future tasks are sketched. |
package org.processmining.contexts.cli;
import java.io.BufferedInputStream;
import java.io.File;
import java.io.FileFilter;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;
import java.net.MalformedURLException;
import java.net.URISyntaxException;
import java.net.URL;
import java.net.URLClassLoader;
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;
import java.util.Queue;
import java.util.jar.JarEntry;
import java.util.jar.JarInputStream;
import org.processmining.framework.annotations.TestMethod;
import org.processmining.framework.boot.Boot;
import org.processmining.framework.boot.Boot.Level;
import org.processmining.framework.plugin.PluginManager;
import org.processmining.framework.plugin.annotations.Bootable;
import org.processmining.framework.plugin.annotations.Plugin;
import org.processmining.framework.plugin.impl.PluginCacheEntry;
import org.processmining.framework.util.CommandLineArgumentList;
public class PromTestFramework {
@Plugin(name = "ProMTest", parameterLabels = {}, returnLabels = {}, returnTypes = {}, userAccessible = false)
@Bootable
public Object main(CommandLineArgumentList commandlineArguments) throws Throwable {
System.out.println("Entering ProM Test Framework");
// from where do we read the tests
String classesToTestDir = null; // default
if (commandlineArguments.size() != 2)
throw new Exception("Error. The ProM Test Framework requires 2 arguments: (1) location of classes that contain tests, (2) location of test files");
// directory where test cases are stored
classesToTestDir = commandlineArguments.get(0);
// read location where test input files and expected test outputs are stored
final String testFileRoot = commandlineArguments.get(1);
// scan directory for tests
getAllTestMethods(classesToTestDir);
// and run tests, collect all failed tests
List<PromTestException.ResultMismatch> failedTest = new LinkedList<PromTestException.ResultMismatch>();
List<PromTestException.WrappedException> errorTest = new LinkedList<PromTestException.WrappedException>();
System.out.println("Running "+testMethods.size()+" tests:");
for (Method test : testMethods) {
try {
System.out.println(test);
// run test and get test result
String result = (String)test.invoke(null);
// load expected result
String expected = null;
if (testResultFromOutputAnnotation(test)) {
expected = test.getAnnotation(TestMethod.class).output();
} else if (testResultFromFile(test)) {
expected = readFile(testFileRoot+"/"+test.getAnnotation(TestMethod.class).filename());
}
// compare result and expected
if (!result.equals(expected)) {
// test failed, store for reporting
failedTest.add(
new PromTestException.ResultMismatch(test, expected, result));
}
} catch (Throwable e) {
// test crashed, store exception for reporting
errorTest.add(
new PromTestException.WrappedException(test, e));
}
}
if (!failedTest.isEmpty() || ! errorTest.isEmpty()) {
throw new PromTestException(failedTest, errorTest);
}
return null;
}
private void getAllTestMethods(String lookUpDir) throws Exception {
URL[] defaultURLs;
if (lookUpDir == null) {
URLClassLoader sysloader = (URLClassLoader) ClassLoader.getSystemClassLoader();
defaultURLs = sysloader.getURLs();
} else {
File f = new File(lookUpDir);
defaultURLs = new URL[] { f.toURI().toURL() };
}
File f = new File("." + File.separator + Boot.LIB_FOLDER);
String libPath = f.getCanonicalPath();
for (URL url : defaultURLs) {
if (Boot.VERBOSE == Level.ALL) {
System.out.println("Processing url: " + url);
}
if (!(new File(url.toURI()).getCanonicalPath().startsWith(libPath))) {
if (Boot.VERBOSE == Level.ALL) {
System.out.println("Scanning for tests: " + url);
}
register(url);
} else {
if (Boot.VERBOSE == Level.ALL) {
System.out.println("Skipping: " + url.getFile() + " while scanning for tests.");
}
}
}
}
/**
* (non-Javadoc)
*
* @see
* org.processmining.framework.plugin.PluginManager#register(java.net.URL)
*/
public void register(URL url) {
if (url.getProtocol().equals(PluginManager.FILE_PROTOCOL)) {
try {
File file = new File(url.toURI());
if (file.isDirectory()) {
scanDirectory(file);
return;
}
// we ignore: PluginManager.MCR_EXTENSION
else if (file.getAbsolutePath().endsWith(PluginManager.JAR_EXTENSION)) {
scanUrl(url);
}
} catch (URISyntaxException e) {
// fireError(url, e, null);
System.err.println(e);
}
} else {
// scanUrl(url);
System.err.println("Loading tests from "+url+" not supported.");
}
}
private void scanDirectory(File file) {
try {
URL url = file.toURI().toURL();
URLClassLoader loader = new URLClassLoader(new URL[] { url });
Queue<File> todo = new LinkedList<File>();
FileFilter filter = new FileFilter() {
public boolean accept(File pathname) {
return pathname.isDirectory() || pathname.getPath().endsWith(PluginManager.CLASS_EXTENSION);
}
};
todo.add(file);
while (!todo.isEmpty()) {
File dir = todo.remove();
for (File f : dir.listFiles(filter)) {
if (f.isDirectory()) {
todo.add(f);
} else {
if (f.getAbsolutePath().endsWith(PluginManager.CLASS_EXTENSION)) {
loadClassFromFile(loader, url,
makeRelativePath(file.getAbsolutePath(), f.getAbsolutePath()));
}
}
}
}
} catch (MalformedURLException e) {
//fireError(null, e, null);
System.err.println(e);
}
}
private void scanUrl(URL url) {
URLClassLoader loader = new URLClassLoader(new URL[] { url });
PluginCacheEntry cached = new PluginCacheEntry(url, Boot.VERBOSE);
if (cached.isInCache()) {
for (String className : cached.getCachedClassNames()) {
loadClass(loader, url, className);
}
} else {
try {
InputStream is = url.openStream();
JarInputStream jis = new JarInputStream(is);
JarEntry je;
List<String> loadedClasses = new ArrayList<String>();
while ((je = jis.getNextJarEntry()) != null) {
if (!je.isDirectory() && je.getName().endsWith(PluginManager.CLASS_EXTENSION)) {
String loadedClass = loadClassFromFile(loader, url, je.getName());
if (loadedClass != null) {
loadedClasses.add(loadedClass);
}
}
}
jis.close();
is.close();
cached.update(loadedClasses);
} catch (IOException e) {
//fireError(url, e, null);
System.err.println(e);
}
}
}
private String makeRelativePath(String root, String absolutePath) {
String relative = absolutePath;
if (relative.startsWith(root)) {
relative = relative.substring(root.length());
if (relative.startsWith(File.separator)) {
relative = relative.substring(File.separator.length());
}
}
return relative;
}
private static final char PACKAGE_SEPARATOR = '.';
private static final char URL_SEPARATOR = '/';
private static final char INNER_CLASS_MARKER = '$';
private String loadClassFromFile(URLClassLoader loader, URL url, String classFilename) {
if (classFilename.indexOf(INNER_CLASS_MARKER) >= 0) {
// we're not going to load inner classes
return null;
}
return loadClass(loader, url, classFilename.substring(0, classFilename.length() - PluginManager.CLASS_EXTENSION.length())
.replace(URL_SEPARATOR, PACKAGE_SEPARATOR).replace(File.separatorChar, PACKAGE_SEPARATOR));
}
private final List<Method> testMethods = new LinkedList<Method>();
/**
* Returns the name of the class, if it is annotated, or if any of its
* methods carries a plugin annotation!
*
* @param loader
* @param url
* @param className
* @return
*/
private String loadClass(URLClassLoader loader, URL url, String className) {
boolean isAnnotated = false;
if ((className == null) || className.trim().equals("")) {
return null;
}
className = className.trim();
try {
Class<?> pluginClass = Class.forName(className, false, loader);
/*
// Check if plugin annotation is present
if (pluginClass.isAnnotationPresent(Plugin.class) && isGoodPlugin(pluginClass)) {
PluginDescriptorImpl pl = new PluginDescriptorImpl(pluginClass, pluginContextType);
addPlugin(pl);
}*/
for (Method method : pluginClass.getMethods()) {
if (method.isAnnotationPresent(TestMethod.class) && isGoodTest(method)) {
testMethods.add(method);
isAnnotated = true;
}
}
} catch (Throwable t) {
// fireError(url, t, className);
if (Boot.VERBOSE != Level.NONE) {
System.err.println("[Framework] ERROR while scanning for testable plugins at: " + url + ":");
System.err.println(" in file :" + className);
System.err.println(" " + t.getMessage());
t.printStackTrace();
}
}
return isAnnotated ? className : null;
}
private boolean isGoodTest(Method method) {
assert(method.isAnnotationPresent(TestMethod.class));
// check annotations
if (!testResultFromFile(method) && !testResultFromOutputAnnotation(method)) {
if (Boot.VERBOSE != Level.NONE) {
System.err.println("Test " + method.toString() + " could not be loaded. "
+ "No expected test result specified.");
}
return false;
}
// check modifiers: the test method must be static
if ((method.getModifiers() & Modifier.STATIC) == 0) {
if (Boot.VERBOSE != Level.NONE) {
System.err.println("Test " + method.toString() + " could not be loaded. "
+ "Test must be static.");
}
return false;
}
// check return type: must be String
if (!method.getReturnType().equals(String.class)) {
if (Boot.VERBOSE != Level.NONE) {
System.err.println("Test " + method.toString() + " could not be loaded. "
+ "Return result must be java.lang.String");
}
return false;
}
// check parameter types: must be empty
Class<?>[] pars = method.getParameterTypes();
if (pars != null && pars.length > 0) {
if (Boot.VERBOSE != Level.NONE) {
System.err.println("Test " + method.toString() + " could not be loaded. "
+ "A test must not take any parameters.");
}
return false;
}
return true;
}
/**
* @param method
* @return <code>true</code> iff the method is annotated with
* {@link TestMethod#filename()}. Then the result of the test will
* be compared to the contents of a file.
*/
private static boolean testResultFromFile(Method method) {
assert(method.isAnnotationPresent(TestMethod.class));
return
method.getAnnotation(TestMethod.class).filename() != null
&& !method.getAnnotation(TestMethod.class).filename().isEmpty();
}
/**
* @param method
* @return <code>true</code> iff the method is annotated with
* {@link TestMethod#output()}. Then the result of the test will
* be compared to the specified string.
*/
private static boolean testResultFromOutputAnnotation(Method method) {
assert(method.isAnnotationPresent(TestMethod.class));
return
method.getAnnotation(TestMethod.class).output() != null
&& !method.getAnnotation(TestMethod.class).output().isEmpty();
}
private static String readFile(String scriptFile) throws IOException {
InputStream is = new FileInputStream(scriptFile);
String result = readWholeStream(is);
is.close();
return result;
}
private static String readWholeStream(InputStream is) throws IOException {
InputStreamReader reader = new InputStreamReader(new BufferedInputStream(is));
StringBuffer result = new StringBuffer();
int c;
while ((c = reader.read()) != -1) {
result.append((char) c);
}
return result.toString();
}
public static void main(String[] args) throws Throwable {
try {
Boot.boot(PromTestFramework.class, CLIPluginContext.class, args);
} catch (InvocationTargetException e) {
throw e.getCause();
}
}
}
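/*
 * For illustration, a minimal test class that this framework would discover
 * might look as follows (class and method names are hypothetical; the
 * @TestMethod contract -- public static, no parameters, returning String --
 * follows the checks in isGoodTest above):
 *
 *   public class ExampleTests {
 *
 *       // Compared against the inline 'output' annotation value.
 *       @TestMethod(output = "42")
 *       public static String answerTest() {
 *           return String.valueOf(6 * 7);
 *       }
 *
 *       // Compared against the contents of <testFileRoot>/expected_sum.txt.
 *       @TestMethod(filename = "expected_sum.txt")
 *       public static String sumTest() {
 *           return String.valueOf(40 + 2);
 *       }
 *   }
 */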
|
Seventy-five years ago today, the American people rejected not just a president — Herbert Hoover — but a royalist vision of federal policymaking that had allowed tens of millions of citizens to suffer as the Great Depression swept across the land.
The election of November 8, 1932, is now generally accepted as one of the great realigning moments in U.S. politics, the point at which the country took the great leap forward from a past that favored limited federal and state involvement in economic affairs — except where it came to securing the interests of the wealthy — and embraced a more humane and democratic approach to governing.
To be sure, that approach has been under assault in recent decades. Yet Social Security remains, as do the Federal Deposit Insurance Corporation, the Fair Labor Standards Act and the minimum wage. Those of us with roots in small-town America still enjoy the benefits of Rural Electrification. And Americans of every region, race and religion retain at least a few of the liberties that were defined and protected by Roosevelt-nominated Supreme Court Justices William O. Douglas, Hugo Black and Felix Frankfurter. There’s still a Securities and Exchange Commission, which sometimes does its job, and a Federal Communications Commission, which could yet be redeemed by the appointment of a new chairman.
The agent of these reforms — and the fundamental shift in the American experience they embodied — was Franklin Delano Roosevelt, the Democrat who displaced Republican Hoover. But it is important to remember that Roosevelt, the most patrician of our nation’s many patrician politicians, did not compete in the 1932 election as the radical reformer that he became. The Democratic platform of that year was a cautious document, dictated by fear itself rather than the boldness that would later be associated with Roosevelt.
What made Roosevelt so remarkable, and so radical?
The results that were tabulated 75 years ago this evening influenced FDR to evolve his policies in a direction that was more egalitarian and democratic — his critics still use the term “socialistic,” and they are not entirely wrong. It was that evolution that redefined not just American politics but America.
Roosevelt won a stunning victory in 1932. He secured 57.4 percent of the popular vote, as compared with just 39.7 percent for Hoover. The Democrat carried 42 states, most by wide margins, while the Republican won just 6.
But those numbers do not begin to tell the whole story of what happened on that distant November 8. Roosevelt’s popular vote total of 22,821,277 was 52 percent higher than that received by Al Smith, the Democratic nominee in the election of four years earlier. The Roosevelt landslide was sufficient to create a coat-tail effect that dramatically increased a narrow Democratic majority in the House of Representatives and gave the party control of the Senate.
A total of 97 new Democrats were elected to the House, most of them young and left-leaning. Their numbers were augmented by five members of the Minnesota Farmer-Labor Party, who made no apologies for their radicalism. Thus, 73 percent of the seats in the House (313 out of 435) were held by members who had been elected on pledges to alter the economic equation to favor Main Street over Wall Street. Even some Republicans, especially from New York state and the upper Midwest, espoused a progressive vision that was to the left of what Roosevelt advocated while campaigning in 1932.
Nine Republican senators were defeated that year by the Democrats, who also won three open seats. This shifted control of the chamber from 48-47 Republican to 59-36 Democratic with one Farmer-Laborite. A half dozen “insurgent” Republican senators stood with Roosevelt or to his left on economic issues.
The congressional majorities would free Roosevelt to move steadily to the left, knowing that if he did not make the shift Congress would force his hand on a host of relief measures and related economic initiatives. And Roosevelt was inclined to move. It was not just the size of the Democratic landslide that influenced him. It was the clear evidence that many American voters were looking to the left of the new president and his party for responses to the economic crisis.
On November 8, 1932, more than a million Americans — almost three percent of the electorate — cast ballots for presidential candidates who proposed far more radical changes than “a new deal.” Socialist Norman Thomas won 884,885 votes, for a 230 percent improvement in his party’s total. Communist William Z. Foster won 103,307 votes, for a 112 percent increase in his party’s total — and its best finish ever in a presidential race. And southern populist William Hope Harvey, who had helped manage Democratic populist William Jennings Bryan’s 1896 presidential campaign, secured another 53,425 votes.
Roosevelt was conscious of the fact that, in a number of states outside the south, the combined vote for the Socialists and Communists edged toward 5 percent of the total. Shortly after the election, the president-elect met with Thomas, a former associate editor of The Nation, and Henry Rosner, a frequent contributor to the magazine, who had authored the Socialist Party’s detailed 1932 platform and who would go on to be a key aide of New York Mayor Fiorello LaGuardia.
The new president did not adopt the whole of the Socialist platform. But, as historian Paul Berman observed, “President Franklin D. Roosevelt lifted ideas from the likes of Norman Thomas and proclaimed liberal democratic goals for everyone around the world…” FDR’s borrowing of ideas about Social Security, unemployment compensation, jobs programs and agricultural assistance from the Socialists was sufficient to pull voters who had rejected the Democrats in 1932 into the New Deal Coalition that would sweep the congressional elections of 1934 and reelect the president with 61 percent of the popular vote and 523 of 531 electoral votes in 1936 — the largest Electoral College win in the history of two-party politics.
As for Norman Thomas, he ran again in 1936, conducting what Time magazine would refer to as “a more civilized and enlightened campaign than any other candidate.” But he amassed only 187,910 votes, for 0.4 percent of the total.
Thomas would joke that, “Roosevelt did not carry out the Socialist platform, unless he carried it out on a stretcher.” That was a slightly bitter variation on the old Socialist’s acknowledgment that FDR had read the results of the 1932 election right.
That process began 75 years ago this evening, when Franklin Roosevelt recognized that, while Americans had chosen him as their president, they had also signaled their intention that America should turn left.
Trajectory tracking control method of robotic intra-oral treatment At present, people suffer from a high rate of cross-infection of oral diseases, and exploring effective methods to promote high-security, high-quality imaging is an urgent clinical need. This paper proposes a collaborative robot control strategy that assists in oral diagnosis and treatment. The method can track the dynamic movement of the patient's upper and lower jaw during treatment. Because of the small space of the oral cavity and changes in tooth position, dynamic trajectory tracking with adaptive control is performed on the robotic arm. This control strategy is of great significance for accurate oral diagnosis and treatment and for reducing cross-infection between doctors and patients. Introduction Due to the high incidence of oral diseases and the explosive spread of the novel coronavirus (COVID-19), the number of patients has increased sharply, which poses a huge clinical challenge. Reducing cross-infection between doctors and patients and exploring highly safe, high-quality oral diagnosis and treatment methods are urgent clinical needs. Robot-assisted oral imaging has therefore become a trend. In the field of dental medical robots, China is still at an early stage, and only a few medical and research institutions are experimenting with the development of dental assistant robots. Since the beginning of the 21st century, with the development and application of robotics, medical surgical robots can be roughly divided into the following directions: systems based on industrial robot platforms, dedicated medical surgical robots, small modular surgical robots, and telesurgery robots. However, no dental medical robot technology has yet been dedicated to assisted imaging. To make oral imaging more efficient, accurate and low-cost, Akhoondali proposed an automatic segmentation method based on region growing. Keyhaninejad et al. and Hosntalab proposed a level-set model based on 3D regions to extract teeth. Keustermans and Hiew et al. proposed interactive methods for segmenting three-dimensional teeth using graph-cut algorithms. Barone et al. proposed a frame iteration method to recover the three-dimensional shape of a single tooth from CT images: the two-dimensional contour of the target tooth is outlined in a set of projection images, a B-spline representation of the three-dimensional tooth shape is fitted to these contours, and the three-dimensional model is finally obtained. Another group proposed a robot-assisted treatment system with a position-sensing arm; the system uses X-rays to generate tooth images to diagnose tooth erosion activity and bone loss. The Renaissance system manufactured by Israel's Mazor Robotics uses an intraoperative C-arm X-ray machine to register two-dimensional images with pre-operative three-dimensional graphics in real time for positioning, which greatly improves accuracy. However, no oral assistant diagnosis and treatment robot has yet been developed to reduce cross-infection with the novel coronavirus. Therefore, this paper proposes a trajectory tracking method for an oral assisted diagnosis and treatment robot.
For the photosensitive plate holding robot, according to the small space of the oral cavity and the change of the tooth position, a dynamic trajectory tracking method with adaptive control is proposed; the oral model is reconstructed and matched in order to control the robot to perform auxiliary diagnosis and treatment operations. Design of intra-oral assistant robot system The oral diagnosis and treatment imaging assistance robot is designed to reduce virus cross-infection caused by doctor-patient contact by combining robotics with an oral-medicine expert knowledge system. Avoiding virus cross-infection mainly involves spatial positioning of oral lesions, automatic tube tracking, and doctor-robot interaction. As shown in Figure 1, the overall control scheme of the image-assisted robot is built on the oral diagnosis and treatment task and realizes stable interaction between the robot and the oral space, with the goals of stability and reduced cross-infection. Trajectory tracking control of photosensitive plate clamping robot As shown in Figure 1, the photosensitive plate clamping robot system mainly includes a UR5 robotic arm, a force sensor and an end effector that clamps the photosensitive plate. The main body of the system is the UR5 robotic arm, with a three-dimensional force sensor installed at the end of the arm. The end effector includes a fixing device and a photosensitive plate mounted on the sensor. The camera is placed so that it can monitor the entire workspace of the robot arm. Trajectory tracking control In order to map oral lesions to the robot's end gripper, the patient's mouth must be pre-processed before treatment. First, a CT scan is performed on the patient's oral cavity to reconstruct a 3D oral model and locate the lesion site. Second, the end gripper of the robot is calibrated, and the pose of the lesion is converted from the oral coordinate system to the end-gripper coordinate system. Then hand-eye calibration is performed. After that, the posture of the marker is estimated and converted from the camera coordinate system to the base coordinate system of the robot arm. Finally, whenever the marker posture detected by the camera changes, the arm with the end-mounted clamp tracks the trajectory of the teeth in the patient's mouth. After the manipulator moves to the specified position, if the patient does not move within the specified time, the force control mode is turned on to bring the end clamp as close as possible to the lesion area. Figure 2 shows the trajectory tracking control process of the robotic dental diagnosis and treatment system. Robot hand eye calibration In order for the camera and the robot to work together, hand-eye calibration must be performed. The robot hand-eye calibration system is shown in Figure 3. Let M_in be the camera projection (intrinsic) matrix and p_cam the position of a point in the camera image. The mathematical model relating the robot-arm base coordinate system and the camera coordinate system can then be expressed as (U, V, W, 1)^T = M_1 · M_2 · (x, y, z, 1)^T, with the image position following as p_cam = M_in · (U, V, W)^T, where x, y and z are the coordinates of the target in the base coordinate system of the robot arm, and U, V and W are the coordinates of the target point in the camera coordinate system.
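The paper provides no source code for this step; as a rough sketch (ours, not the authors') of chaining homogeneous transforms to move a camera-frame marker measurement into the robot base frame, with made-up calibration values:

import numpy as np

def make_transform(R, t):
    # Assemble a 4x4 homogeneous transform from a 3x3 rotation and a translation.
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical hand-eye calibration result: the camera frame expressed in the
# robot base frame (in a real system this comes from the calibration above).
T_base_cam = make_transform(np.eye(3), np.array([0.4, 0.0, 0.5]))

# Marker position measured in the camera frame, in homogeneous coordinates.
p_marker_cam = np.array([0.02, -0.01, 0.30, 1.0])

# Map the marker into the base frame; the arm is re-targeted toward this
# point whenever the camera detects that the marker has moved.
p_marker_base = T_base_cam @ p_marker_cam
print(p_marker_base[:3])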
M_1 and M_2 are homogeneous transformation matrices whose upper three rows have the form [R | T], i.e. M_i = [[R11, R12, R13, T1], [R21, R22, R23, T2], [R31, R32, R33, T3]], where R = (Rij) is a rotation matrix and T = (T1, T2, T3)^T is a translation vector. The two unknown matrices, T(board→tool) and T(cam→base), are found with an interior-point optimization method that searches for the parameters minimizing the point reprojection error. Since there are many unknown parameters, the optimization easily falls into a local optimum; therefore the marker is used for an initial calibration, and the calibrated values are taken as the initial guess of the optimization. The position of the marker center point is then projected into the camera coordinate system, and its camera-frame posture can be obtained as p_cam = T(base→cam) · T(tool→base) · T(marker→tool) · P_center, where P_center is the posture of the center of the marking plate in the marker coordinate system, and T(marker→plate) describes the fixed transformation between the marking plate and the end photosensitive plate. Adaptive tracking control In order to improve safety and bring the photosensitive plate as close as possible to the dental lesion in the mouth, force/position control is performed on the robot: the end effector presses the photosensitive plate so that the patient's tooth surface contacts the plate. As shown in Figure 4, a coordinate-system scene is created, where {W} is the world coordinate system, {R} is the manipulator coordinate system attached to the robot base, and {S} is the sensor coordinate system, whose Z axis coincides with that of the tool center point (TCP) coordinate system of the manipulator. At the same time, the Z direction of {S} is perpendicular to the plane of the photosensitive plate. Fig.4 Scene diagram of the coordinate system of the photosensitive plate clamping robot According to the D-H parameters of the Universal Robots arm, the transformation matrix from the sensor coordinate system to the base frame is obtained from the forward kinematics. In order to achieve active control and approach the lesion in the patient's mouth, a force of 1 N is applied to the patient's teeth in the direction perpendicular to the plane of the photosensitive plate. Figure 5 is a structural diagram of the active control system. The output of the adaptive PD controller is the posture correction of the robot arm, q is the joint-angle vector of the robot arm, F_s is the sensor measurement, and F_i is the expected force acting on the patient's mouth. Fig.5 Structure diagram of the position control system of the photosensitive plate clamping robot Define the coordinate system {T} in the plane of the photosensitive plate as shown in Figure 4, with the Z direction perpendicular to the plane of the photosensitive plate; {T} is parallel to the sensor coordinate system. After the robot arm moves to the specified position, if the patient does not move within the specified time, the force control mode is turned on to bring the photosensitive plate as close as possible to the lesion tooth area. If the head is moving, the tracking arm keeps the photosensitive plate directly in front of the lesion tooth position, and there is no interaction force between the robot arm and the teeth. After reaching the designated area, the arm moves only in the Z direction of {T}, and the posture of the photosensitive plate does not change in the active control mode. A minimal sketch of this force-control loop is given below.
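Figure 5 is only a block diagram; the following minimal Python sketch (our illustration, with hypothetical read_force/move_z interfaces and made-up gains) shows how such an adaptive PD force loop along the Z axis of {T} can be organized, using the logarithmic gain shaping described in the next paragraph:

import math

F_TARGET = 1.0            # desired contact force in newtons, as in the paper
KP, KD = 0.002, 0.0005    # base PD gains (meters per newton); illustrative only

def adaptive_gain(error):
    # Logarithmic identification function: the effective gain grows only
    # sub-linearly with the force error, so small errors still produce motion
    # while large errors do not cause overshoot and back-and-forth swinging.
    return math.log(1.0 + abs(error))

def force_control_step(read_force, move_z, prev_error, dt):
    # One cycle of the loop: compare the gravity-compensated sensor reading
    # with the expected force and command a small correction along Z of {T}.
    error = F_TARGET - read_force()
    dz = adaptive_gain(error) * (KP * error + KD * (error - prev_error) / dt)
    move_z(dz)            # translation only; plate orientation is left unchanged
    return error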
With a traditional fixed-gain PD controller the system can become unstable: when the input error is small, the commanded arm motion is too small and settling takes too long; when the input error is large, the correction is too large and the arm swings back and forth. Therefore, an adaptive PD controller is used to determine the motion of the manipulator. The gravity-compensated force measurement and the expected force acting in the mouth are used as the inputs of the adaptive PD controller, whose output is the correction of the robot posture. A logarithmic function of the form log(1 + ·) is used as the identification function of the adaptive controller, so that the controller gain increases only slowly as the force error grows. Conclusion This paper proposes a collaborative robot control strategy that assists in oral diagnosis and treatment. According to the small space of the oral cavity and the change of the tooth position, a dynamic trajectory tracking method with adaptive control was proposed. This control strategy is of great significance for accurate oral diagnosis and treatment and for reducing cross-infection between doctors and patients.
package com.evostar.exception;
import com.evostar.model.MsgCodeEnum;
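/**
 * Thrown when a request is made without valid authentication.
 * Delegates to ServiceException with the UNAUTHORIZED_ERROR message code,
 * so the generic exception handler can translate it into a response.
 */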
public class UnauthorizedException extends ServiceException {
public UnauthorizedException() {
super(MsgCodeEnum.UNAUTHORIZED_ERROR);
}
}
|
A small file performance optimization algorithm on P2P distributed file system With the further development of the Internet, the amount of data on the network has grown exponentially in recent years, and the fastest-growing category consists of the masses of small files generated by blogs, forums, etc. Master/slave distributed file systems have shortcomings such as poor access performance on small files and single-point bottlenecks. Although these problems have been solved to some extent in P2P distributed file systems, there is still room to improve small-file access performance there, so a small file merging strategy (SFMS) is proposed in this paper, which significantly increases the throughput of reading small files. Experiments show that small-file read throughput is increased by 90% compared with the original system, and is 25 times higher than that of TFS, which is based on a master/slave structure.
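The abstract does not describe SFMS's internals, but the general idea behind small-file merging can be illustrated with a sketch (ours, not the paper's algorithm): many small files are packed into one chunk file, and a lightweight index of (offset, length) pairs lets a read cost a single seek instead of a metadata lookup per file:

import os

class MergedChunk:
    # Pack small files into a single chunk file and index them by name.
    def __init__(self, path):
        self.path = path
        self.index = {}                  # file name -> (offset, length)

    def append(self, name, data):
        with open(self.path, "ab") as chunk:
            chunk.seek(0, os.SEEK_END)   # append mode: record the end offset
            offset = chunk.tell()
            chunk.write(data)
        self.index[name] = (offset, len(data))

    def read(self, name):
        offset, length = self.index[name]
        with open(self.path, "rb") as chunk:
            chunk.seek(offset)           # one seek, one read per small file
            return chunk.read(length)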
Reuse of Wikimedia Commons Cultural Heritage Images on the Wider Web Objective Cultural heritage institutions with digital images on Wikimedia Commons want to know if and how those images are being reused. This study attempts to gauge the impact of digital cultural heritage images from Wikimedia Commons by using Reverse Image Lookup (RIL) to determine the quantity and content of different types of reuse, barriers to using RIL to assess reuse, and whether reused digital cultural heritage images from Wikimedia Commons include licensing information. Methods 171 digital cultural heritage Wikimedia Commons images from 51 cultural heritage institutions were searched using the Google images Search by image tool to find instances of reuse. Content analysis of the digital cultural heritage images and the context in which they were reused was conducted to apply broad content categories. Reuse within Wikimedia Foundation projects was also recorded. Results A total of 1,533 reuse instances found via Google images and Wikimedia Commons file usage reports were analyzed. Over half of reuse occurred within Wikimedia projects or wiki aggregator and mirror sites. Notable people, people, historic events, and buildings and locations were the most widely reused topics of digital cultural heritage both within Wikimedia projects and beyond, while social, media gallery, news, and education websites were the most likely places to find reuse outside of wiki projects. However, the content of reused images varied slightly depending on the website type on which they were found. Very few instances of reuse included licensing information, and those that did often were incorrect. Reuse of cultural heritage images from Wikimedia Commons was either done without added context or content, as in the case of media galleries, or was done in ways that did not distort or mischaracterize the images being reused. Conclusion Cultural heritage institutions can use this research to focus digitization and digital content marketing efforts in order to optimize reuse by the types of websites and users that best meet their institution's mission. Institutions that fear reuse without attribution have reason for concern, as the practice of reusing both Creative Commons and public domain media without rights statements is widespread. More research needs to be conducted to determine whether notability of institution or collection affects likelihood of reuse, as preliminary results show a weak correlation between the number of images searched and the number of images reused per institution. RIL technology is a reliable method of finding image reuse but is a labour-intensive process that may best be conducted for selected images and specific assessment campaigns. Finally, the reused content and context categories developed here may contribute to a standardized set of codes for assessing digital cultural heritage reuse. Introduction Cultural heritage institutions with digital images online want to know if and how those images are being reused. Whether the image was uploaded to a digital library by the institution or added to a website by an individual user, knowledge and understanding of digital image reuse helps cultural institutions determine the impact of their collections as well as whether they are meeting the needs of their users. One method of measuring reuse of digital images online is Reverse Image Lookup (RIL), in which the RIL service searches the internet for other versions of an image.
Recent scholarship includes several RIL studies of digital cultural heritage media from specific collections or institutions. However, research by the Wikimedia Foundation has found that cultural heritage institutions with digital media in Wikimedia Commons, the media repository for Wikimedia Foundation projects, want a better understanding of the impact of their uploaded media, in particular as it relates to institutional goals (Research:Supporting Commons contribution, 2018). As increasing numbers of cultural heritage institutions upload their digital media to Wikimedia Commons, and as users add digital cultural media found during their own research, the opportunity and necessity of assessing the impact of these objects become more pressing. This study attempts to gauge the impact that digital cultural heritage images from Wikimedia Commons have both in and beyond wiki projects by using RIL to determine the quantity and quality of different types of reuse, while also identifying barriers to assessing reuse in this way. Rooted in empirical evidence, this study provides concrete examples of how digital cultural heritage from Wikimedia Commons is used outside of the Wikimedia landscape, along with documented steps for finding and analyzing image reuse, in order to facilitate greater reuse research among digital cultural heritage stakeholders, leading to improvements in efforts to make digital collections more widely available and reusable. Media Reuse Studies Media reuse research is still a relatively new field without standard or widely accepted definitions of use and reuse. The Digital Library Federation Assessment Interest Group (DLF-AIG) Content Reuse working group completed a 1-year Institute of Museum and Library Services (IMLS) grant in 2018 to evaluate the needs and functions of a digital library reuse toolkit, and in doing so also researched digital library stakeholder interpretations of use and reuse. While refined definitions of use and reuse by the group are forthcoming, at this time and for the purposes of this paper reuse will be defined as "how often and in what ways digital library materials are utilized and repurposed" and in what contexts (O'). Collection curators, digital librarians, and archivists find value in assessing the reuse of their digital collections in order to show a collection's reach and to determine who uses collections. These data can then be used to make decisions about collection development and digitization priorities, as well as to negotiate increases in staffing and funding. While digital library stakeholders find a great deal of value in assessing the reuse of their collections, they also find it very difficult to do. A survey administered by the DLF-AIG Content Reuse IMLS project team found that only 40% of respondents were gathering reuse data, usually from social media metrics or citation analysis (O'). There is also tension between cultural heritage organizations' missions to provide access and a desire to maintain control over collections. Sometimes there are valid and commendable reasons for wishing to restrict access or mediate use and reuse of digital collections. Digital content misuse and cultural appropriation are concerns for digital library stakeholders (O'). Ethnographic archives, especially those that document the history and cultures of marginalized populations, prove challenging when trying to determine meaningful impact beyond simple quantitative metrics such as clicks, likes, and downloads.
Other times, however, archives unnecessarily attempt to control reuse of their online holdings via restrictive or unclear rights statements. While published literature about media reuse is still somewhat limited, the existing scholarship primarily focuses on use and reuse of specific archival and digital collections, reuse of generalized collections by scholars within specific areas of study, and reuse of specific types of media. These studies are often undertaken with the purpose of improving the services and technological infrastructure that make library and archival collections reusable by researchers. Studies involving focus groups, observational research, and citation analyses have evaluated the reuse of archival images by historians, archaeologists, architects, and artists (Beaudoin, 2014; Harris & Hepburn, 2013). Additional researchers, after creating or using digital media collections in their own work, have advocated for the creation of open-licensed digital collections of geology and film in order to enhance the research process for students and scholars alike (O'Sullivan, 2017; Rygel, 2013). The reuse of digital cultural heritage media on social media platforms has received increasing attention in the scholarly literature over the course of the last decade. As noted in one study, "our data indicate that everyday users are repurposing digital content in ways that are meaningful to them, and they are acknowledging and fulfilling personal interests. These users are also sharing this content through a variety of environments on the Web, including popular social media platforms, blogs, and personal Web sites". Social media platforms like Pinterest, which allow users to curate personal collections of images, blog posts, and other media from the web, have an "archival shape" due to their infrastructure that captures the provenance, or original source, of the item, making such platforms rich for analysis by media reuse researchers. Examples of cultural heritage media reuse could include images downloaded from digital library collections and uploaded onto a Pinterest pinboard, as well as images reproduced in commercial projects like artwork or included in official government reports. Reuse of digital cultural heritage media on Wikimedia Commons, Wikipedia, and other Wikimedia Foundation projects has also received scholarly attention in the last year. One of the most widely documented methods for evaluating digital image reuse involves RIL services such as Google images or TinEye, in which an image is either uploaded or an originating URL is input to the search platform, and then duplicates and similar images are found online. RIL studies have been performed on images from NASA, academic digital libraries, the Library of Congress, and the British National Gallery (Kelly, 2015; Kirton & Terras, 2013; Reilly & Thompson, 2014). In all of these studies, after duplicate images were found online, the context and purpose in which the images were reused was analyzed in order to determine who uses digital cultural heritage images and for what objective. Cultural Heritage, Wikimedia, and Impact A ready-made platform for sharing digital cultural heritage media and encouraging reuse can be found in Wikimedia Commons (commons.wikimedia.org), the Wikimedia Foundation's repository for photographs, artwork, video, sound, diagrams, and more.
Many cultural heritage institutions have developed programs to upload their digital media to Wikimedia Commons and enhance Wikipedia articles with links to their collections and finding aids in order to increase traffic to their websites and repositories, typically with impressive results (Kelly, 2018). The Wikimedia Foundation's "Supporting Commons contribution by GLAM institutions" research project (GLAM standing for Galleries, Libraries, Archives, and Museums) noted that for cultural heritage organizations, "donating media to Commons is a means to an end. GLAM organizations and the volunteers who work with them want to know the media they upload is being used, and to be able to evaluate the impact of their donations against institutional goals" (Research:Supporting Commons contribution, 2018). Research Questions This study attempts to answer the following questions with the hope of providing concrete strategies for assessing collection reuse to cultural heritage institutions: 1. What is the content of cultural heritage images found in Wikimedia Commons? 2. What content gets reused most often, and where? 3. Do reused cultural heritage images from Wikimedia Commons carry license or attribution information with them? Research Methods A list of cultural heritage repositories, including museums, historical associations, and academic archives, among others, was generated from the archival discovery tool ArchiveGrid, and a random number generator was used to pull a sample of 66 institutions from the list for inclusion in this study. Searches were conducted over a two-week period for images from these institutions' collections, determined primarily by examining the "Source" field in the Wikimedia Commons object metadata. While images documenting an institution's buildings or grounds were not included in the study, user-generated photos or videos of collections, such as pictures taken of an artwork or exhibit, were included. The number of results per institution varied greatly, with some institutions not having any related images in Wikimedia Commons and others having hundreds of results. A list of all institutions and counts of their reuse results is available in Appendix A; 51 of the 66 institutions had digital images in Wikimedia Commons. As the purpose of this study was not to determine how many cultural heritage institutions have images in Wikimedia Commons, or how many images institutions have on average, not all results were analyzed; instead, at most 20 results from each institution were documented. A total of 308 images from cultural heritage institutions were initially analyzed. A separate research project is underway to assess the validity of rights statements provided in Wikimedia Commons for all of these results. For the purposes of this study, a smaller subset was extracted for RIL analyses. All results from the initial 308 images with Creative Commons or other open licenses were selected for inclusion, as one research question pertinent to this study is how often evidence of open licensing is available when images are reused. These accounted for 44 images to be searched using RIL; an additional 126 public domain images, and two instances of images published with copyright permission, from the Wikimedia Commons cultural heritage sample set were selected for inclusion as well. Wikimedia Commons includes wiki reuse information on the record page for uploaded media; the number of instances of reuse, both on Wikimedia Commons and on other wikis, was noted for each object (see Figure 1).
Figure 2. Screenshot of a Google images result with multiple sizes. Each image was then searched using the Google Chrome browser "Search by image" function. When available, the option to search Google for "all sizes" of the image, as opposed to just those matching the original image, was selected to obtain the greatest number of results (see Figure 2). For each image, a number of elements were recorded, including the content of the reused media, the reuse context, and any license or attribution information. Most of the elements required only simple analysis of frequency counts. For elements with a greater level of subjectivity, such as "content of reused media" and "reuse context," the content analysis method was used to examine each object, label it, and then categorize the labels into broader themes. Content analysis is a quantitative research method used to "examine large amounts of data in a systematic fashion, which helps to identify and clarify topics of interest" (Drisko & Maschi, 2015, p. 25). Here, codes or categories were developed inductively, or without a prior scheme, rather than deductively, as reuse research is still in its infancy and existing codes and theory are diverse and not yet synthesized. However, it should be noted that content analysis of some type was conducted in all of the RIL studies previously mentioned, so integrating codes and developing a standard set for assessing cultural heritage via RIL may be possible in the future. In this study, the websites featuring Wikimedia Commons digital cultural heritage images were analyzed as to each site's purpose. Many results were in languages other than English; for these, Google Translate was used to infer the content of the site. Following the analysis and application of codes, tables and graphs were generated to assist in conveying the results of the study. Results From 171 digital cultural heritage Wikimedia Commons images searched in Google images, 34 did not have any results. Of the remaining 137 images, one had been deleted from Wikimedia Commons since initial data collection began and could not be searched in Google, and two did not have any wiki results and only had results in Google images that were false positives. Over 25% of Google images results were also discarded as unusable. These included dead links; false results in which the image was not found on the site; spam, porn websites, and sites blocked by the computer's antivirus program; one instance of a website that was behind a paywall; and a site that Google Translate could not decipher. To ensure that the remaining analysis was based on true reuse, any result found by Google images that matched the "Source" field in Wikimedia Commons (for example, if the source of a painting was given as a museum, and Google images located the painting on the museum's website) was removed from analysis. Finally, 21 results were for videos of a zoetrope at a museum. While instances of wiki reuse could be analyzed for these images, they were not suitable for Google images, so they were removed as well. After fully cleaning any unusable, false, non-reuse, or missing results, a total of 1,533 Google images and wiki search results from 51 cultural heritage institutions remained for analysis. Approximately 5% of reuse cases from the total uncleaned data set, and 51% of the cleaned data set, were associated with Wikimedia's projects.
This includes reuse on other Wikimedia Commons pages like galleries or featured images; reuse on other Wikimedia projects, like Wikipedia articles and Wikidata; reuse by wiki mirror sites, or exact replicas of wiki projects hosted at different URLs; and reuse by wiki aggregators, or sites that pull content straight from Wikimedia and repurpose it for readability, content curation, usability, or other reasons (such as Wikiwand and WikiVividly). While wiki aggregator and mirror results were found through Google images, they weren't considered to be true examples of reuse, as they simply copied entire Wikipedia articles or Wikimedia Commons galleries without providing any additional context or value to the original Wikimedia Commons object. The subject matter of the digital images analyzed from Wikimedia Commons was coded, and then Google images results were analyzed to determine themes in what reusers of digital cultural heritage images are most likely to reuse. Note that these subjects are not one-to-one correspondences for each image; a single image could have multiple subjects. Instead, these numbers represent general areas that reusers of digital cultural heritage tend to focus on when reusing images online. A full description of the codes used to label image content can be found in Appendix B. Notable people or people were included in more than half of the reuse results, while images documenting historical events and buildings and locations were also widely reused. Several categories identified in the initial image analysis were not reused at all outside of wiki products; these were book cover, book plate, data, diaries and personal letters, and library card. Similar results can be found in analyzing just the reuse of these images on other wikis. The primary difference is that more of the image categories were reused in wiki products, with yearbook photos the only image content that was not reused at all. Also, while the content of reused images varies slightly depending on whether the image is reused on a wiki project or elsewhere, there is generally a strong correlation (r=0.66) between wiki reuse and non-wiki reuse. Finally, for comparison's sake, the following table shows the percentage of instances for each reuse content category found within the initial cleaned data set. This shows a strong correlation between the number of images labeled with a content category and the number of times reused (r=0.84). However, people accounted for 38% of the data set but were only reused in 19% of reuse occurrences, while notable people accounted for 24% of the data set but were reused in 34% of instances. Historic events (3% original, 12% reuse) also had a higher level of reuse. The original medium of the reused object was also documented and analyzed. Photographs accounted for nearly three quarters of all reuse. When looking at reuse outside of wiki products, there are again clear trends in how and where digital cultural heritage images are being reused. Social websites, defined here to include social media, blogs, discussion boards, online journals, and other websites whose primary purpose is user-generated content and interaction, account for just under half of reuse instances outside of wiki platforms. Media galleries, or user-curated collections of media (usually images), and news websites are also popular scenes for digital cultural heritage reuse. A brief sketch of how these correlation coefficients are computed follows.
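For illustration, the kind of correlation reported above can be reproduced from per-category counts as follows (the numbers here are hypothetical, not the study's data):

# Hypothetical per-category percentages, NOT the study's data.
original_share = [24, 38, 3, 12, 5]   # share of each content category in the sample
reuse_share = [34, 19, 12, 10, 4]     # share of each category among reuse instances

def pearson_r(xs, ys):
    # Plain Pearson correlation coefficient between two equal-length lists.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(round(pearson_r(original_share, reuse_share), 2))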
Only 11% of Google images results for Wikimedia Commons digital cultural heritage images were on educational sites like research guides, encyclopedias, and historical timelines. Full definitions of the codes used to categorize reuse context are in Appendix C. Slight variances in what subject matter is most viable for reuse on what type of website can be found as well. While images representing notable people are the most popular reuse type across all websites, maps are almost exclusively found on social sites, whereas images representing historical objects are primarily reused by news sites. Delving further into what subjects are reused most by different types of websites may help cultural heritage institutions pinpoint where their digitization and marketing efforts should lie in order to meet institutional priorities. Wikimedia provides ample guidelines on how wiki media should be shared from wiki platforms, including providing appropriate attribution if required by the media's license. Of the sample set analyzed for this study, a mere 40 results out of a possible 755 non-wiki reuse instances had any type of license or copyright statement available. And in comparing the licenses provided in reuse instances, there were significant discrepancies between these and the licenses on Wikimedia Commons. "Compatible" refers to instances where the Wikimedia Commons object and the reused object had the exact same license. The "semi-compatible" designation was used when slight differences occurred; for example, the Wikimedia Commons license listed CC BY-SA 3.0, whereas the reused instance noted an updated CC BY-SA 4.0 license. The remaining "incompatible" results referred to wholly different licenses being applied, such as Wikimedia Commons marking an image as being in the public domain where another website included a Creative Commons or copyright statement alongside the object. The two images that were copyrighted but published to Wikimedia Commons with permission were reused four times outside of wiki products, but none of the reuse instances included a license or attribution. Finally, a few other unexpected discoveries emerged in this analysis. While only 40 reuse instances provided some sort of license, 147 results, or 19% of non-wiki reuse results, at least included some sort of credit, such as the name of the work and the cultural heritage institution that held it. Of these, 50 credited Wikimedia Commons or Wikipedia in some way, or linked back to the original image on Wikimedia Commons. Also, in analyzing the reuse context of the digital cultural heritage images outside of Wikimedia, only three results appeared to be entirely "misused." These involved the following misidentifications or questionable reuse situations: a news article that uses an unlabeled photo of the 1966 UT Austin Tower shooter Charles Whitman's gun to illustrate new laws for gun amnesty in Canada; a blog post that mislabels an image of Gerald Ford as Richard Nixon; and an image of railway workers laying the last rail of the Union Pacific Railroad in 1869, used to illustrate minimum wage. Overall, reuse of cultural heritage images from Wikimedia Commons was either done without added context or content, as in the case of media galleries, or was done in ways that did not distort or mischaracterize the image being reused.
Institutions may thus wish to focus their Wikimedia Commons donation efforts on images related to notable people, historic photos of unidentified people, and historical events, but should also observe that photographs of historic objects ranked highly in reuse by news organizations. However, this study does not delve into great detail as to the content and context of the images reused. In this sample set, all of the images labeled as "historic object" were photographs of University of Texas shooter Charles Whitman's guns. Does this mean that images of weapons in general might be reused more by news organizations than other topics, or would images of other historic objects be reused as frequently? This question could be tested by conducting reuse analysis on Wikimedia Commons images of both historic weapons and generic images of weapons, or of historic weapons and other historic objects. Additional media reuse research should continue to narrow down what exactly makes one media object more reusable than another. Factors such as notability or fame, uniqueness, presentation, artistic merit, and others may be analyzed to further understand reuse priorities. This study also does not attempt to measure the notability of specific cultural heritage institutions or collections. Previous scholarship documenting cultural heritage institutions voluntarily donating digital images to Wikimedia Commons focuses almost exclusively on large research universities, many of which have internationally recognized collections. It is unknown whether smaller institutions with lesser-known or niche collections would see similar increases in website traffic or similar reuse of their digital images. While this study includes a variety of institution sizes and types, it does not attempt to qualify the notability of these institutions, nor of their collections or individual images. We can, however, see that there is a weak correlation (r=0.27) between how many images were searched from each repository and how many instances of reuse were found, so content and quality of the reused object may be larger factors in determining reuse than the quantity of objects per institution. The research reported here shows that cultural heritage institutions have cause for concern about reuse of their collections without attribution. Only 9% of Creative Commons-licensed images that were reused outside of wiki projects were labeled as Creative Commons in their new context, only 19% of non-wiki reused images had any sort of credit at all, and most that did, did not include a reuse license or public domain statement. Still, at least for images that are in the public domain and don't legally require a license or attribution, perhaps cultural heritage institutions should be less concerned with attribution and more concerned with increasing reuse. Unfortunately, a lack of proper attribution can make tracking reuse difficult, thus impeding the institution's ability to measure the impact of its collections. Strategies such as using RIL to locate instances of reuse without text attribution may be beneficial for image collections, but as of yet the RIL process is very labour-intensive and probably unfeasible for institutions to perform on all of their digital images on a regular basis.
Instead, performing RIL reuse analysis on selected images may be undertaken for specific assessment campaigns, such as to assess reuse of a new collection after a year's time, to show impact for annual reports and reviews, or to highlight the success of marketing campaigns the institution has undertaken related to a collection or object. The DLF-AIG IMLS grant project found that embedded metadata is one of the most-needed pieces of infrastructure for tracking reuse; the Wikimedia Foundation's "Supporting Commons contribution by GLAM institutions" project similarly identified "demonstrating and preserving media provenance" as a priority (O'; Research:Supporting Commons contribution, 2018). Improved infrastructure for embedded or "sticky" metadata may allow reuse assessment without the need for formal attribution. What cultural heritage institutions can begin to do with this research is to determine where their digitization efforts may have the most impact and alignment with institutional goals. The DLF-AIG IMLS grant project found that digital library practitioners had different priorities for where they hoped their digital resources would be reused; for example, some institutions might find more value in reuse by nationally recognized news organizations, others by students and scholars, still others by community groups (O'). These goals will vary depending on the type, size, and mission of the institution the practitioner represents. By beginning to understand what types of Wikimedia Commons digital cultural heritage content are reused most often on what types of websites, practitioners can strategize which of their collections and objects they should focus on donating to Wikimedia Commons to reach the user communities they are most interested in connecting with. While great care was taken in developing and analyzing the codes used for identifying the content and context of reused images, it should be noted that content analysis as a method is highly subjective, though often made less so by involving multiple researchers who "norm" their codes to come to agreement about classification. As this study was undertaken by a sole researcher, elements determined by content analysis may bear a higher level of subjectivity than is desired. This paper contributes to media reuse literature, and to RIL research in particular, by furthering understanding of what content categories are most likely to be reused and where, both within Wikimedia Foundation projects and on the wider web. Digital library practitioners should use the results of this study to develop digitization strategies that prioritize content attractive to the types of websites where reuse would most align with their institutional missions. This research also emphasizes the need for better education and infrastructure related to licensing and rights for digital content reuse, as reused digital cultural heritage images from Wikimedia Commons rarely include attribution or licensing information. The content categories developed here may be combined with content categories found in other RIL studies to begin synthesizing a common code of subjects for assessing image reuse. By continuing to deepen understanding of digital cultural heritage reuse, we can better assess the impact of our collections online and strive to meet the needs of current and potential users in line with institutional priorities and missions.
import React from "react";
import { Container, Row, LegendColor, LineChartEl } from "./style";
import {
Line,
XAxis,
CartesianGrid,
Tooltip,
ResponsiveContainer,
} from "recharts";
import { useTheme } from "../../hooks/useTheme";
import formatAmountValue from "../../utils/formatAmountValue";
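// Monthly series consumed by the chart: one point per month, with the
// entry (gains) and output (expenses) amounts plus theme color names for
// each line.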
interface IHistoryBox {
data: {
monthNumber: number;
month: string;
amountOutput: string;
amountEntry: string;
}[];
lineColorEntry: string;
lineColorOutput: string;
}
export const HistoryBox: React.FC<IHistoryBox> = ({
data,
lineColorEntry,
lineColorOutput,
}) => {
const { theme } = useTheme();
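  // Resolve a semantic color name ("success", "warning", "info", "text",
  // "card") to the active theme's palette; unknown names yield undefined.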
function getColor(color: string): string | undefined {
if (color === "success") return theme.color.success;
if (color === "warning") return theme.color.warning;
if (color === "info") return theme.color.info;
if (color === "text") return theme.color.text;
if (color === "card") return theme.color.card;
}
return (
<Container>
<Row>
History
<div>
<div>
<LegendColor color="info" />
Gains
</div>
<div>
<LegendColor color="warning" />
Expenses
</div>
</div>
</Row>
<ResponsiveContainer>
<LineChartEl data={data} margin={{ right: 10, left: 10 }}>
<CartesianGrid strokeDasharray="3 3" stroke={getColor("card")} />
<XAxis dataKey="month" stroke={getColor("text")} />
<Tooltip formatter={(value) => formatAmountValue(Number(value))} />
<Line
type="monotone"
dataKey="amountEntry"
name="Gains"
stroke={getColor(lineColorEntry)}
strokeWidth={5}
dot={{ r: 5 }}
activeDot={{ r: 8 }}
cursor="pointer"
/>
<Line
type="monotone"
dataKey="amountOutput"
name="Expenses"
stroke={getColor(lineColorOutput)}
strokeWidth={5}
dot={{ r: 5 }}
activeDot={{ r: 8 }}
cursor="pointer"
/>
</LineChartEl>
</ResponsiveContainer>
</Container>
);
};
|
Molecular and Chemical Engineering of Bacteriophages for Potential Medical Applications Recent progress in molecular engineering has contributed to the great progress of medicine. However, there are still difficult problems constituting a challenge for molecular biology and biotechnology, e.g. a new generation of anticancer agents, alternative biosensors or vaccines. As a biotechnological tool, bacteriophages (phages) offer a promising alternative to traditional approaches. They can be applied as anticancer agents, novel platforms in vaccine design, or as target carriers in drug discovery. Phages also offer solutions for modern cell imaging, biosensor construction or food pathogen detection. Here we present a review of bacteriophage research as a dynamically developing field with promising prospects for further development of medicine and biotechnology. Introduction Recent progress in molecular engineering has contributed to the great progress of medicine. Biopharmaceuticals such as hormones, interferons, interleukins, hematopoietic growth factors or therapeutic enzymes constitute a new class of drugs. They have found application in various diseases including anemia, leukemia, multiple sclerosis, diabetes and many others. The beginning of modern biotechnology dates back to 1982, when recombinant human insulin was for the first time introduced for the treatment of diabetes. That success was continued with the production of human growth hormone as the first fusion protein in 1985. Among the promising molecular tools in biotechnology, molecular biology and medicine, phages can be used. Phages have driven progress in such biotechnological branches as biosensors, drug-carrying particles, cancer cell imaging agents, investigation of phage protein structure and function, epitope mapping, studies of protein-protein interactions, determination of inhibitor and enzyme specificity, screening for receptor agonists and antagonists and, finally, vaccine design and anticancer research. These may contribute to the development of alternative methods in medicine for problems that remain serious, such as the high risk of cancer or AIDS. Phage Display Probably the first idea for phage application as a modern biotechnology tool was phage display. Phage display is a molecular technique that allows expression of exogenous proteins on a bacteriophage surface. The first report describing display of a foreign polypeptide on the surface of a bacteriophage particle dates from 1985 (Smith 1985). George P. Smith introduced a fragment of the EcoRI restriction endonuclease coding sequence into the middle section of gene 3 of the non-lytic filamentous phage f1. The new fusion protein P3-EcoRI did not destroy phage infectivity, and the displayed EcoRI retained the antigenic properties of its native form. Filamentous phages, which do not lyse infected bacteria during their propagation cycle, have been most commonly used as phage display platforms. Infection caused by these phages does not cause cell lysis, only "constant production", which roughly halves the growth rate of the bacteria (Czaplicki 2005). However, T4 and T7 phages, which represent lytic Caudovirales, have also been used for phage display. The lytic cycle results in the destruction of the infected cell: after the phage penetrates into the bacterium, replication and expression of the bacterial genetic material change in favor of the phage. After the assembly and maturation of virions, the cell wall is destroyed and viruses can infect other cells (Karam 1994).
Filamentous phage strains such as M13, fd or f1 (Smith and Petrenko 1997) are characterized by a flexible rod shape with a circular ssDNA genome. They infect Escherichia coli via the F pilus. The M13 capsid is built by 2,700 copies of the major coat protein p8 and is capped by p3 (5 copies), p6 (5 copies), p7 (5 copies), and p9 (5 copies). All these coat proteins can be used as fusion targets for display, but the p3 and p8 proteins are used most widely. Protein p8 is limited to displaying short peptide sequences, while p3 allows display of larger insertions. The most popular way to express a foreign peptide or protein on the bacteriophage surface is the fusion proposed by Smith: a gene encoding the foreign protein is fused to one of the M13-related viral coat protein genes. Filamentous phage expression is ideal for oligopeptides and small proteins (Bratkovič 2010); in the case of bigger proteins, this platform is insufficient. This problem has been solved by the introduction of phagemids as special helper display vectors. A phagemid is a plasmid with a phage origin of replication and packaging signal which can express a fusion protein but does not encode any viral structural or replication proteins. Fusion proteins are carried by phagemids, while the majority of the genes required for the formation of phage particles are carried by helper phages that are co-infected together with phagemids into host bacteria (Sidhu 2001). Co-infection of the bacterial host cell by a phagemid and a phage produces hybrid virions displaying only a few copies of the fusion coat protein in addition to the majority of wild-type structural coat proteins. This system is called a "hybrid-phage system" and was created by Smith; his classification is based on the arrangement of the coat proteins. The authors introduced the terms 3, 33 or 3 + 3 (for p3-based display), 6, 66, or 6 + 6 (for p6-based display), and 8, 88, or 8 + 8 (for p8-based display) to differentiate possible protein arrangements (Smith and Petrenko 1997). Types 3, 6 and 8 are the simplest cases: a foreign protein is displayed on each copy of a phage protein in the capsid. Types 3 + 3, 6 + 6, and 8 + 8 engage a combination of the phage and the phagemid, allowing fusion proteins and wild-type proteins to be combined in the same capsid. Types 33, 66 and 88 also allow one to combine fusion proteins and wild-type proteins in the same capsid, but both are expressed from the same phage genome (Bratkovič 2010). Later, in a relatively short time, phage display was developed into a wide range of variations, employing different phage strains and several technological approaches. The multiplicity of phage display variations can now be classified according to diverse aspects of these techniques (Table 1). The phage capsid in phage display can contain only fusion protein, or both fusion and native wild-type proteins. Phage display with fusion proteins can be grouped into two types: permanent fusion of phage and foreign genes in the phage genome, which can be classified as type 3, 6 or 8 according to Smith's classification (described above); it can also be done by deletion of a non-essential phage coat protein followed by the introduction of a selected fusion into the phage capsid. This second type is based primarily on the T4 bacteriophage. The phage T4 capsid is built with two essential proteins, gp23* and gp24*, and two decorative proteins: Hoc (highly antigenic outer capsid protein) and Soc (small outer capsid protein).
The in vivo display system allows fusion of the target, which will be displayed on the phage surface, to the capsid protein. Capsid proteins fused to foreign proteins or peptides are overexpressed in a bacterial system such as E. coli. During assembly, these fusions are incorporated into the phage capsid: the fusion protein (Hoc-target or Soc-target) is built into hoc− or soc− phage by simple mixing. The phage strains used in experiments with supplementary expression vectors carry a deletion or a nonsense mutation in the respective gene, so no native gene products are incorporated into the head during phage assembly. Since Hoc and Soc are not essential head proteins, these defects do not affect phage viability (Ren and Black 1998). This system was used, e.g., to display full-length antigens from human immunodeficiency virus (HIV). In vivo systems have been used on other phages such as λ or T7 phage. One of the limitations of in vivo display is the fact that no control can be exerted over the intracellular expression, structure and assembly of foreign proteins. This problem is solved by the use of in vitro phage display. This system differs from the in vivo one in that target proteins are incorporated into the capsid outside the bacterial cell, on mature bacteriophage particles. In vitro phage display was first reported in the presentation of a 710 kDa anthrax toxin on bacteriophage T4. Phage display with both fusion and native proteins, as its name suggests, engages two types of proteins at the same time: fusion proteins and wild-type protein. The fusion protein is made by fusing a foreign amino acid sequence to the endogenous amino acids of the coat protein. The fusion protein can be expressed from a plasmid (competitive phage display), a phagemid (type 3 + 3) or the phage genome (type 33). Phage Display Libraries The most common applications of phage display constructs are random peptide libraries. A phage library is a collection of phages carrying on their surface foreign proteins or peptides encoded by DNA inserted into the phage genome (e.g. type 3 phage display) (Smith 1993). One of the most important functions of phage libraries is to deliver a diversified pool of elements. Each phage displays on its surface a single type of protein or peptide, but the whole library contains a large number of viruses with many different proteins in total. Each viral vector is capable of infection, and therefore each phage selected from the library with the respective peptide can be separately replicated in bacteria. The method for the production of peptide libraries is based on Zoller and Smith's procedure (Zoller and Smith 1982, 1983); it allows a foreign nucleotide sequence to be introduced into the M13 vector, producing mutant clones capable of displaying a respective peptide. This method can be used to construct libraries of sizes ranging from 10^9 to 10^11. Scholle et al. created libraries with 100 % recombinant phages by the use of negative selection of an amber stop codon at the 5′ end of gene p3 in the filamentous phage. One of the most popular applications of phage display libraries is selection for affinity domains of antibodies. In 1990, McCafferty and colleagues for the first time displayed complete V variable domains of antibodies on the surface of a filamentous phage. Thus they initiated a new way of making antibodies, in which antibody genes are amplified using the polymerase chain reaction (PCR).
Phage display antibody libraries are a combination of the heavy-chain variable domain (VH) with the light-chain variable domain (VL) presented on a phage surface (Pini and Bracci 2000). These two domains together form the variable domain (V domain), which is responsible for antigen binding and unique specificity. The traditional molecular method for producing antibody fragments uses a phagemid vector carrying the V domain from lymphocytes, amplified by PCR (Table 2). There are two categories of antibody libraries: post-immunization libraries and single-pot libraries. The first type contains the immunoglobulin G (IgG) sequence derived from the spleen B cells of immunized animals. V genes are isolated, assembled into functional antibody fragments (mostly scFv or Fab) and inserted into phage library vectors. Post-immunization libraries have high affinity to an antigen; it is, however, necessary to construct a new library for every antigen. Single-pot libraries use B cells from unimmunized donors. Two types of single-pot library have been developed: naïve and synthetic. Naïve libraries are constructed using V gene sequences that have undergone some natural rearrangement, for example those derived from IgM mRNA. Synthetic libraries are built in vitro on rearranged antibody gene segments with some additional sequences; therefore, good knowledge of the complementarity-determining region (CDR) sequences, which are critical for antigen binding, is necessary (Willats 2002). Phage Display Applications Phage display can be used in a variety of applications, as shown by exemplary use in experimental animal or in vivo models, e.g. for epitope mapping, studies of protein-protein interactions, to determine the specificity of inhibitors and enzymes (Diamond 2007), in screening for receptor agonists and antagonists, or in vaccine design (De Berardinis and Haigwood 2004; Ren and Black 1998). Due to the wide range of possibilities offered by phage display, this method is postulated as a tool for seeking solutions to many medical problems. The most popular application is screening for anticancer peptides or proteins. There are many publications describing the selection of peptides by phage display; these peptides are able to influence angiogenesis and tumor cell growth. One of the best characterized and most important factors causing angiogenesis is vascular endothelial growth factor (VEGF); moreover, this factor is responsible for tumor growth stimulation. It is the key mediator of angiogenesis (the formation of new blood vessels) and binds two types of VEGF receptors (VEGFR1 and VEGFR2) (Carmeliet 2005). It has been demonstrated that VEGF is responsible for tumor stimulation and cancer metastasis, and it is therefore a frequent therapeutic target. The VEGF family and its receptor system have been shown to be fundamental regulators in the cell signaling of angiogenesis. The phage display method allows one to select peptides that are capable of inhibiting tumor growth (Borysowski and Górski 2004). From a large human naïve antibody library, four fully human anti-KDR (KDR, kinase insert domain-containing receptor or VEGFR2) antibodies were identified, and it was demonstrated that these antibodies were able to block the KDR/VEGF interaction and neutralize VEGF-induced angiogenic activity.
Other studies described 90Y-labeled nanoparticles targeted to the vasculature with anti-VEGFR antibodies for anti-tumor therapy. Fibroblast growth factor 8b (FGF8b) is the major isoform of FGF8 expressed in prostate cancer, and it correlates with the stage and grade of the disease. Using the phage display method, 12 specific FGF8b-binding phage clones were isolated by screening a phage display heptapeptide library with FGF8b; one of these was named P12. Studies suggested that P12 may have a greater potential to interrupt FGF8b binding to its receptors than the other peptides identified from phage display libraries. Functional analysis indicated that synthetic P12 peptides mediate significant inhibition of FGF8b-induced cell proliferation and blockade of the activation of the Erk1/2 and Akt cascades in both prostate cancer cells and vascular endothelial cells. Cysteine-rich protein 61 (CCN1/Cyr61) has been identified as an important mediator in the proliferation and metastasis of breast cancer; blockade of Cyr61 might be a potent target for breast cancer treatment. An antibody denoted 093G9, developed using the phage display system, exerted an inhibitory effect on the proliferation and migration of a human breast cancer cell line. Additionally, 093G9 also showed significant efficacy in suppressing primary tumor growth and spontaneous lymph node metastasis in vivo in a mouse model. High exposure of selected antigens on the phage display platform and the relative ease of their production make phage display a promising tool for constructing vaccines. This includes the challenge of developing an effective vaccine for HIV (Esparza 2005). Bacteriophage T4 has been proposed as a recombinant platform that allows construction of multicomponent vaccines boosting humoral and cellular responses (Ren and Black 1998). The head of T4 bacteriophage is an icosahedron (120 × 86 nm) with one portal vertex to which the phage tail is attached (Rao and Black 2010). The icosahedral caps are formed by hexamers of gp23* and pentamers of gp24*, which are essential proteins. Hoc and Soc are two nonessential proteins. The nonessential proteins have a number of features that recommend them as alternative vehicles for protein display. Ren and colleagues fused the V3 loop domain of gp120 (HIV-1 envelope glycoprotein) to the Soc protein. The Soc-V3 fusion was expressed in the E. coli system and then bound in vitro to the phage. Soc-V3-displaying phages were highly antigenic in mice and produced antibodies reactive with native gp120. Further, a Hoc-based in vivo assembly system allowed display of the HIV antigens p24-gag, Nef and the gp41 C-peptide trimer, which represent differentiated structures and biological functions. p24 displayed as a Hoc fusion was highly immunogenic in mice in the absence of any external adjuvant, eliciting strong p24-specific antibodies as well as Th1 and Th2 cellular responses with a bias toward the Th2 response. The phage T4 system used in these experiments broadens the range of immune responses; therefore, it was proposed for HIV vaccine development (Ren and Black 1998). Phage display can also be used to design new antibacterial peptides. This may be useful when phages specific to difficult pathogens are not available, as has been shown in the case of phage specificity to Staphylococcus aureus and Helicobacter pylori. S.
aureus causes many diseases by producing toxins; these diseases include pneumonia, endocarditis, meningitis, septicemia, and toxic shock syndrome (Lowy 1998). A large percentage of S. aureus infections are caused by methicillin-resistant S. aureus (MRSA). Development of resistance to available antibiotics has become a serious problem in the treatment of S. aureus infections (Lowy 2003). There is therefore an imperative need to find new types of antibacterial agents. H. pylori is a Gram-negative bacterium which produces the virulence factor urease. This allows survival of the bacteria in the acidic environment of the stomach. H. pylori causes many serious diseases including duodenal ulcers and stomach cancer. Antibiotic therapy has significant limitations, such as high cost and the emergence of antibiotic-resistant strains, generating the need for new ways of treatment. For the selection of new specific peptides capable of binding difficult bacterial pathogens, phage display library screening can be utilized. Phage display libraries are used for the identification of specific peptides and antibodies against pathogen targets. Synthesis of the virulence factors produced by S. aureus is regulated by a quorum-sensing mechanism. S. aureus secretes a protein termed RNAIII activating protein (RAP) which autoinduces toxin production. Young and colleagues showed that mice vaccinated with selected RAP were protected from S. aureus infection, which suggested that RAP is a useful target for selecting potential therapeutic molecules to inhibit this pathogen. Peptides recognizing staphylococcal enterotoxin B (SEB), selected from the M13 phage library, were applied to attenuate this bacterium. SEB is a pyrogenic toxin responsible for staphylococcal food poisoning in humans and has been an attractive choice as a biological aerosol weapon due to its inherent stability and high intoxication effect. Three peptides of high affinity to SEB (WRPLTPESPPA, MNLHDYHRLFWY, QHPQINQTLYRM) were highly active against Staphylococcus. Phage display technology was also used for the identification of peptides able to bind specifically and inhibit H. pylori urease, such as the 24-mer TFLPQPRCSALLRYLSEDGVIVPS and the 6-mer YDFYWW, which can inhibit the activity of urease purified from H. pylori. In another approach, an antibody display was developed for a single variable domain of a heavy-chain antibody against recombinant UreC. The isolated UreC nanobody can specifically detect and bind UreC and inhibit urease activity (nanobodies are isolated variable domains of heavy-chain antibodies). This nanobody could represent a novel class of treatment against H. pylori infection. The identification of inhibitory peptides and nanobodies specific for H. pylori urease may open a new approach for the development of therapeutic drugs. Recently, phage display has been applied in the purification of bacteriophages. New methods combining phage display and chromatography give good results in separating T4 bacteriophages from bacterial proteins, DNA, lipopolysaccharides and even from other phages. Compared to other methods, e.g. gradient centrifugation in cesium chloride or sucrose, which are more time-consuming, this is a very useful technique for phage purification. Moreover, this method yields highly purified phages. There are two methods; one is based on competitive phage display, in which bacterial cells produce both wild-type proteins (expressed from the phage genome) and protein fusions with affinity tags (expressed from expression vectors).
These tags are glutathione S-transferase (GST) or six histidines (His-tag); phage proteins fused to the tags are incorporated into the phage capsid during assembly. Random Mutagenesis of Bacteriophages Random mutagenesis is the process of introducing changes into bacteriophage DNA without pre-designing these changes. Random mutagenesis can be spontaneous or induced. Spontaneous mutagenesis results from errors in DNA replication; this kind of mutant occurs frequently in the environment. Random mutations can also be induced by many physical or chemical factors. Physical factors causing changes in DNA are ionizing radiation, UV radiation and temperature. Acridine dyes, nitrogenous base analogs, nitrous acid, some hydrocarbons and others are chemical inducers of mutations. Mutations induced by physical or chemical mutagens are a very popular approach in research methodologies. They make it possible to examine the functions of genes based on changes in phenotypic characteristics. At the molecular level, random mutagenesis techniques are based on typical kinds of point mutations, including transitions, transversions, frameshift mutations and deletions. The principal strategy of random mutagenesis is the formation of random mutations by chemical or physical mutagens, followed by selection of mutants with altered properties. This technique does not require detailed knowledge of the phage. The main point is the modification of the phage phenotype and the assumption that a mutation in the genome is responsible for the new virus properties. Random mutagenesis is more of a random modification at the molecular level than genetic engineering; however, when well planned, it allows for the selection of a phage even with very sophisticated properties. A good example is provided by the studies of Merril et al. They isolated λ and P22 phage mutants which can circulate longer in mice in comparison to wild-type phages, which was related to a single point mutation located in a major head protein coding gene. Another example of the application of random mutagenesis is the identification and selection of thermostable T4 lysozyme variants. Two variants were found which exhibited increased reversible melting temperatures with respect to the wild-type protein. The results also illustrate the power of random mutagenesis in obtaining variants with a desired phenotype (Pjura and Matthews 1993). UV irradiation is one of the most popular physical methods of induced mutagenesis. T4 phage can be inactivated by UV light at a wavelength of 253.7 nm. Mutation in a single gene, m, that controls the repair process by photoreactivation causes increased UV sensitivity in T4 phage (Harm 1963). Hall and colleagues have applied bacteriophages in studies of the coevolutionary dynamics of phages and their hosts. These authors examined the interaction between bacteria and a phage by the use of a parasitic phage with random mutations in its tail fiber. They demonstrated that random mutations could be a very useful tool for investigating phage influence on the genetic divergence of bacterial hosts. One of the most interesting conclusions was that bacteriophages were more likely to emerge through long-term coevolution with their hosts than through spontaneous adaptation to a single novel host. Site-directed mutagenesis is a phage modification technique leading to the introduction of designed changes at a specific site in the genome. It involves specially designed synthetic oligonucleotides, i.e. primers for PCR.
Mutagenic primers are complementary to part of the engineered DNA template, but they also contain an internal mismatch which codes for a designed change in the selected gene. Specified changes in the nucleotide sequence of a gene lead to modification of the amino acid sequence of the protein, often enabling the introduction or removal of a considerable number of amino acids. Thus, site-directed mutagenesis allows studies of the molecular determinants of protein structure and function. A molecular tool that allows for the introduction of site-directed mutagenesis is the bacteriophage insertion/substitution (I/S) vector system. This system was first developed by Selick et al. for T4 bacteriophage. The method enables the transfer of in vitro-generated mutations from a plasmid into the phage gene. The mutation is first constructed as a cloned insert within the I/S plasmid. Bacteria containing this plasmid are then infected with bacteriophage that carries amber mutations in selected genes. The plasmid integrates the mutation-containing sequence into the phage genome by homologous recombination. Integrant phage mutants are selected by comparison of their growth on amber-suppressing and non-suppressing bacterial strains. This technique allows for the introduction of almost any change in the genes. Therefore, site-directed mutagenesis is a very popular technique that has been used for studying gene function and protein structure/function. Site-directed mutagenesis has helped in understanding the function and structure of some important gene products, such as gene 32 of T4 phage, or the role of the D-loop in ATP-binding cassette proteins of T4 phage such as gp47 (De la Rosa and Nelson 2011). Moreover, it is widely used for engineering disulfide bonds, as in the case of T4 lysozyme, improving its stability (Perry and Wetzel 1984). Site-directed mutagenesis is in fact a molecular tool for constructing recombinant phage libraries which display foreign peptides on their surfaces. It is used for constructing libraries of protein variants in M13 bacteriophage. This powerful technique allows us to obtain phage display libraries that comprise a vast number of mutants, and it has been observed that the size of a phage library is closely correlated with the affinity of the isolated mutants (Ling 2003). Site-directed mutagenesis was recently studied by Yoichi et al. They modified the phage lytic spectrum by changes in the tail fiber protein gp38 of T2 bacteriophage. In that research, homologous recombination between the T2 phage genome and a plasmid encoding the region around genes 37-38 from PP01 phage was used. The authors obtained the recombinant phage T2ppD1, derived from T2 and carrying gp37 and gp38 from PP01 phage. Insertion of the foreign gp37 and gp38 into T2 phage conferred infectivity toward the heterologous host E. coli O157:H7, which is not sensitive to wild-type T2. E. coli K12 strains, the original host of T2, could not be infected by the recombinant T2ppD1 phage. Site-directed mutagenesis could thus be employed for customizing phage host range according to specific needs. This may have practical significance for phage therapy and the identification of bacteria. Bacteriophage recombineering of electroporated DNA (BRED) is a powerful technique allowing for the creation of phage mutants. It has been mainly applied to temperate phages, but developments for lytic phages are also emerging. This approach was first used in mycobacteriophages.
BRED can be used for the construction of unmarked deletions in essential as well as nonessential genes. It allows the creation of in-frame internal deletions, point mutations, nonsense mutations and the addition of gene tags. The BRED strategy involves co-electroporation of the phage DNA template and the targeting substrate into recombineering Mycobacterium smegmatis cells. This strategy offers a very high effectiveness of mutant construction; as reported by the authors, more than 10% of the finally recovered plaques contained the desired mutant (van Kessel and Hatfull 2008). The phage recombineering technique has also been applied to λ phage. There are many potential possibilities offered by BRED technology, mostly for investigations of bacteriophage genetics. Phage genomes can be modified in genes of unknown function, which may allow identification of that function. A new concept for using phages as detection tools is fluorescent and luminescent labeling. The bacterial luciferase gene (lux) was transferred to a phage genome for the first time by Ulitzur and Kuhn. The gene was cloned into the bacteriophage λ genome. Reporter phages carrying the luciferase gene were demonstrated to be effective tools for rapid detection of bacterial host cells following phage infection. The minimum number of cells which can be detected did not exceed ten for E. coli (Ulitzur and Kuhn 1987), one hundred for Salmonella typhimurium, or ten for enterobacterial cells. Loessner et al. developed a new recombinant phage against Listeria, A511::luxAB, by introduction of the luxAB genes into the gene coding for the major capsid phage protein. After infection they observed a high level of luciferase expression in bacterial cells. These bacteria are foodborne pathogens, which gives this new method of identification high practical potential. Another approach that employs light for phage detection is based on green fluorescent protein (GFP) introduced into the phage capsid. This technique combines mutagenesis and the phage display method. The small outer capsid (SOC) protein of PP01 phage was used as a platform to present the marker protein GFP. Fusion of GFP to SOC did not change the host range of the phage; interestingly, binding of the recombinant phage to bacterial cells was even enhanced. Adsorption of the GFP-labeled PP01 phages to the E. coli cell surface enabled visualization of the cells under a fluorescence microscope. The fluorescence of GFP within infected bacteria enables highly sensitive detection. Kaźmierczak et al. proposed T4 phage as a new tool for molecular imaging of bacteriophages in living systems. They used the T4 phage mutant HAP (T4 without the decorative Hoc protein). GFP was fused to the N-terminus of Hoc by in vivo phage display. Fluorescent phages were positively assessed as regards their applicability for detection inside living mammalian cells (by phagocytosis) and tissues (filtering and retention by lymph nodes and spleen). Another application of site-directed mutagenesis is affinity maturation of antibodies or epitope mapping. These approaches usually combine the construction of phage-displayed antibody libraries with mutagenesis. The parent antibody sequence is subjected to mutagenesis, which is followed by optimized selection of affinity-improved variants. These variants are selected against a relevant target. The combination of phage display and site-directed mutagenesis is usually applied to understand the structure, function and interactions between antibodies and antigens.
Chemical Modification of Bacteriophage Particles Most chemical modifications of bacteriophages are based on the conjugation of prosthetic groups to the phage surface. Various chemical compounds can be attached to surface proteins in specific reactions at an appropriate temperature, incubation time and favorable pH. The incubation conditions are relatively non-aggressive, so the phage does not lose its biological properties. Such reactions can target reactive groups including the amino groups of lysine residues, the carboxylic acid groups of aspartic and glutamic acid residues, and the phenol group of tyrosine residues, as well as ester-bond linkers and chemical monolayers. Folic acid, fluorescent markers or even various antibiotics can then be attached to these linkers. Moreover, chemical conjugation allows one to attach polyethylene glycol molecules directly to a phage. Li and colleagues' data showed a potential application of the M13 phage in cell imaging. For dual modification of M13 they chose the most reactive group, tyrosine, as the site where a fluorescent dye and folic acid were attached. Folic acid is one of the most common ligands for cancer cell targeting. Dual-modified M13 showed very good binding to KB cells (a HeLa-contaminant line), thus demonstrating the potential of chemically modified M13 in bioimaging and drug delivery. Chemical modification of phages allows them to be used in targeted drug therapy. A very interesting research paper by Yacoby et al. presented bacteriophage M13 as a platform of targeted drug carriers for the eradication of pathogenic bacteria. The same phage was used for the conjugation of both an antibiotic and IgG antibodies: IgG attached to each P3 protein allowed recognition of the target, while the P8 coat proteins carried the conjugated drug designed to kill bacteria. A schematic representation of antibacterial targeted drug-carrying bacteriophages can be summarized in three steps: preparing a pro-drug; conjugating the pro-drug to the phage; and binding of the drug-carrying phage to the target bacteria with drug release. This research offered a new approach to selective drugs whose specificity is conferred by the attached targeting elements, which may allow the reintroduction of nonspecific drugs that have thus far been excluded from antibacterial use because of toxicity or low selectivity. Drug-carrying phages have high selectivity against bacteria; in general use, they may help to combat emerging bacterial antibiotic resistance. PEGylation is a very noteworthy type of chemical modification. It allows for the conjugation of phage structural proteins with polyethylene glycol (PEG). After PEGylation, bacteriophages do not change their specificity toward bacteria. In vivo and in vitro experiments with mice showed that PEGylation of bacteriophages is effective in delaying virus clearance and achieving longer circulation in non-immunized mice. Additionally, PEGylation can reduce the cellular immune response, such as antigen-specific T cell proliferation. This chemical modification of bacteriophages may have great significance for phage therapy against antibiotic-resistant bacteria. Nowadays, antibiotic-resistant bacteria represent an important medical problem. An increasing number of drug-resistant bacterial infections are very difficult to treat. Bacteriophages are an alternative to antibiotics. They can be used as a useful tool against pathogenic bacteria.
Special properties of PEGylated bacteriophages, such as long persistence in the organism, together with drug-targeting bacteriophages may contribute to the development of phage therapy. Bacteriophages are viruses which can recognize and bind specific bacterial receptors, and covalent bonds can be created between phage particles and gold surfaces. This is used for constructing biosensors by covalent phage immobilization. Detection of E. coli K12 can be achieved by covalent immobilization of T4 bacteriophages onto gold surfaces using a self-assembled monolayer of dithiobis(succinimidyl propionate) (DTSP). E. coli is a natural inhabitant of the intestinal tracts of humans and warm-blooded animals. It is a common foodborne pathogen, and some pathogenic strains of E. coli cause diarrhea, bloody feces, kidney failure, hemolytic uremic syndrome and even death. Bacteriophage T4 can recognize and bind to specific receptors on the E. coli host using the T4 tail spike proteins (Karam 1994; Kutter and Sulakvelidze 2005). A bioassay platform utilizing T4 bacteriophages via a specific receptor has been developed for the detection of E. coli K12 bacteria. Petrenko has pointed out the potential of bacteriophages in the creation of bioselective materials and eventually in constructing phage-derived analytical platforms. Phages can be used as a recognition element in biosensors by using physical adsorption to immobilize the phage on the sensor surface. Filamentous phages, characterized by a simple composition, are most commonly used in biosensor construction. The major desired characteristics of biosensors are sensitivity, selectivity, robustness and prompt performance. Phage layers bind biological agents with high affinity and specificity and generate detectable signals in analytical platforms; for instance, they can be used for the detection of Bacillus anthracis spores and S. typhimurium cells (Petrenko 2008). Biosensors may solve the problem of detecting many pathogenic and foodborne bacteria when their concentration is too low for standard detection methods but still harmful to consumers' health. Conclusions Medicine is still intensively searching for new and imaginative solutions to difficult problems: for example, new generations of anticancer agents that would allow replacement of traditional ones, new types of biosensors that may help in easy detection of still dangerous bacteria, or vaccine development, since we still face the threat of HIV and other difficult pathogens. Phages offer solutions for many of those challenges. They are a very useful tool in medicine and biotechnology, showing encouraging results. Phages can be applied as anticancer agents, as novel platforms in vaccine design, or as target carriers in drug discovery. They offer solutions for modern cell imaging, biosensor construction and food pathogen detection. Bacteriophage research is a dynamically developing field with promising prospects for further development of medicine and biotechnology.
It is known to provide a pipette configured to grasp a pipette tip to transfer a quantity of liquid with the pipette tip. Once the quantity of liquid is transferred, the conventional pipette is known to include an ejector to eject the pipette tip from the pipette. It is important to provide an ejector that is appropriately fitted to accommodate the particular size and design of the desired pipette and/or pipette tip. If inappropriately sized, a new ejector must be located and fitted to accommodate the particular combination of features presented by the pipette and/or pipette tip. As such, there is a need to provide ejectors that may be quickly adjusted to the appropriate size, easily removed for possible servicing or replacement with another appropriate ejector design, and that lock the adjusted position of the ejector member to prevent inadvertent change of an optimum adjusted position.
U.S. Pat. No. 7,264,779 to Viot, hereinafter Viot, discloses a known pipette. As shown in FIG. 1 of Viot, the pipette includes a control button 12, an actuator 14, a connection screw 16 and a knurled wheel 18. The pipette of Viot further includes an ejector rod 20 for separating a cone that is fixed to the pipette. As shown in FIG. 11 of Viot, at its top end, the ejector rod 20 presents a top vertical duct 52 extending to a notch 56 extending horizontally. The knurled wheel 18 can be received in the notch 56 and includes a central threaded bore for forming a screw-and-nut connection with the shank 46 of the screw 16. The screw 16 also has a male coupling portion 38 that may be received in a female coupling portion 22 of the actuator. Col. 5, lines 9-12, discloses that a bayonet connection may be used to connect the ejector rod 20 to the actuator 14. Viot, however, fails to disclose a lock device for inhibiting adjustment of the extended position of the ejector rod. As such, the design of Viot permits inadvertent adjustment of the ejector rod 20 after a desired adjustment is achieved.
U.S. Pat. No. 6,833,114 to Christen et al., hereinafter Christen et al., discloses another pipette with an ejector device 6 including an ejector mechanism 16, 16′ which can be displaced to eject a tip 8 from the pipette. The ejector device 6 includes a device for adjusting position 24, 24′, 33, 33′, 39, 39′ which is arranged in such a way that it is possible to modify the limit position reached by an ejection end 22, 22′ of the ejector device at the end of a stroke. The device for adjusting position 24, 24′ provides adjustment of the ejection end to discrete positions only, without the ability to maintain an adjusted position between the predetermined discrete positions defined by the engagement between the notches 31′ and the bump 32″ illustrated in FIG. 5. The devices for adjusting position 39, 39′ illustrated in FIGS. 7 and 8 likewise provide adjustment of the ejection end only in discrete positions. FIGS. 6 and 9 disclose devices 33, 33′ including a thread 35, 35′. However, like Viot, Christen et al. fails to disclose a lock device for inhibiting adjustment of the extended position of the ejector rod. As such, Christen et al. likewise permits inadvertent adjustment of the ejector rod after a desired adjustment is achieved.
From Polish patent application No. P-381 071 and from international patent application No. PCT/PL2007/000077 is known an exchangeable tip ejection device in a pipette built of a handle 1 and a nozzle 2, and of a pipetting push-button and a drawing-up and discharging mechanism coupled with them. Said ejection device comprises an ejector push-button 3 seated in the upper part of the pipette handle 1, an ejector 4 seated by its lower end onto the pipette nozzle 2, and an assembly for continuous adjustment of the ejector length, which assembly is arranged between the ejector push-button 3 and the ejector 4. The assembly for continuous adjustment of the ejector length is a connection of the "turnbuckle" type. The ejector 4 comprises an ejector rod 6 and an ejector arm 9, whereas the ejector rod 6 is coupled with the ejector push-button 3, the ejector arm 9 is seated by its lower end onto the pipette nozzle 2, and the ejector rod 6 and the ejector arm 9 are coupled with each other, forming a cylindrical joint whose axis of rotation is perpendicular to the longitudinal axis X of the pipette nozzle 2.
As such, there is a need to provide an infinitely adjustable ejection end that can be locked in a desired position. There is also a need to incorporate the lock into an actuator member to reduce parts and simplify the lock design.
There is also a need to provide an ejection mechanism that may provide audible and/or vibrational feedback to indicate when a connected position is achieved between the ejection mechanism and the body of the pipette.
There is also a need to provide at least one stop to limit adjustment of the extended position of the ejector end with respect to the body. |
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.nifi.kerberos;
import java.io.File;
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import org.apache.nifi.annotation.behavior.Restricted;
import org.apache.nifi.annotation.behavior.Restriction;
import org.apache.nifi.annotation.documentation.CapabilityDescription;
import org.apache.nifi.annotation.documentation.Tags;
import org.apache.nifi.annotation.lifecycle.OnEnabled;
import org.apache.nifi.components.PropertyDescriptor;
import org.apache.nifi.components.RequiredPermission;
import org.apache.nifi.components.ValidationContext;
import org.apache.nifi.components.ValidationResult;
import org.apache.nifi.controller.AbstractControllerService;
import org.apache.nifi.controller.ConfigurationContext;
import org.apache.nifi.controller.ControllerServiceInitializationContext;
import org.apache.nifi.processor.util.StandardValidators;
import org.apache.nifi.reporting.InitializationException;
@CapabilityDescription("Provides a mechanism for specifying a Keytab and a Principal that other components are able to use in order to "
+ "perform authentication using Kerberos. By encapsulating this information into a Controller Service and allowing other components to make use of it "
+ "(as opposed to specifying the principal and keytab directly in the processor) an administrative is able to choose which users are allowed to "
+ "use which keytabs and principals. This provides a more robust security model for multi-tenant use cases.")
@Tags({"Kerberos", "Keytab", "Principal", "Credentials", "Authentication", "Security"})
@Restricted(restrictions = {
@Restriction(requiredPermission = RequiredPermission.ACCESS_KEYTAB, explanation = "Allows user to define a Keytab and principal that can then be used by other components.")
})
public class KeytabCredentialsService extends AbstractControllerService implements KerberosCredentialsService {
static final PropertyDescriptor PRINCIPAL = new PropertyDescriptor.Builder()
.name("Kerberos Principal")
.description("Kerberos principal to authenticate as. Requires nifi.kerberos.krb5.file to be set in your nifi.properties")
.addValidator(StandardValidators.NON_BLANK_VALIDATOR)
.expressionLanguageSupported(true)
.required(true)
.build();
static final PropertyDescriptor KEYTAB = new PropertyDescriptor.Builder()
.name("Kerberos Keytab")
.description("Kerberos keytab associated with the principal. Requires nifi.kerberos.krb5.file to be set in your nifi.properties")
.addValidator(StandardValidators.FILE_EXISTS_VALIDATOR)
.expressionLanguageSupported(true)
.required(true)
.build();
private File kerberosConfigFile;
private volatile String principal;
private volatile String keytab;
@Override
protected final void init(final ControllerServiceInitializationContext config) throws InitializationException {
kerberosConfigFile = config.getKerberosConfigurationFile();
}
@Override
protected Collection<ValidationResult> customValidate(final ValidationContext validationContext) {
final List<ValidationResult> results = new ArrayList<>();
// Check that the Kerberos configuration is set
if (kerberosConfigFile == null) {
results.add(new ValidationResult.Builder()
.subject("Kerberos Configuration File")
.valid(false)
.explanation("The nifi.kerberos.krb5.file property must be set in nifi.properties in order to use Kerberos authentication")
.build());
} else if (!kerberosConfigFile.canRead()) {
// Check that the Kerberos configuration is readable
results.add(new ValidationResult.Builder()
.subject("Kerberos Configuration File")
.valid(false)
.explanation("Unable to read configured Kerberos Configuration File " + kerberosConfigFile.getAbsolutePath() + ", which is specified in nifi.properties. "
+ "Please ensure that the path is valid and that NiFi has adequate permissions to read the file.")
.build());
}
return results;
}
@Override
protected List<PropertyDescriptor> getSupportedPropertyDescriptors() {
final List<PropertyDescriptor> properties = new ArrayList<>(2);
properties.add(KEYTAB);
properties.add(PRINCIPAL);
return properties;
}
@OnEnabled
public void setConfiguredValues(final ConfigurationContext context) {
this.keytab = context.getProperty(KEYTAB).evaluateAttributeExpressions().getValue();
this.principal = context.getProperty(PRINCIPAL).evaluateAttributeExpressions().getValue();
}
@Override
public String getKeytab() {
return keytab;
}
@Override
public String getPrincipal() {
return principal;
}
}
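// Usage sketch (assumptions, not part of this file): a processor typically
// exposes this service through a property descriptor and resolves it at
// runtime, along the lines of:
//
//   static final PropertyDescriptor KERBEROS_CREDENTIALS_SERVICE = new PropertyDescriptor.Builder()
//       .name("kerberos-credentials-service")
//       .description("Specifies the Kerberos Credentials Controller Service to use for authentication")
//       .identifiesControllerService(KerberosCredentialsService.class)
//       .required(false)
//       .build();
//
//   final KerberosCredentialsService credentialsService = context.getProperty(KERBEROS_CREDENTIALS_SERVICE)
//       .asControllerService(KerberosCredentialsService.class);
//   final String principal = credentialsService.getPrincipal();
//   final String keytab = credentialsService.getKeytab();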
|
package com.illucit.instatrie.index;
import java.io.Serializable;
import java.util.function.Function;
import java.util.stream.Stream;
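/**
 * Decorating implementation of {@link PrefixSearch} which adapts a delegate
 * index of element type T to a result type U by applying a mapping function
 * to every search hit. Queries are forwarded unchanged; only the result
 * streams are transformed.
 *
 * @param <T> element type of the underlying prefix search
 * @param <U> element type exposed by this mapped view
 */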
public class PrefixSearchMapped<T extends Serializable, U extends Serializable> implements PrefixSearch<U> {
private final PrefixSearch<T> delegate;
private final Function<T, U> mappingFunction;
public PrefixSearchMapped(PrefixSearch<T> delegate, Function<T, U> mappingFunction) {
this.delegate = delegate;
this.mappingFunction = mappingFunction;
}
@Override
public Stream<U> searchStream(String query) {
return delegate.searchStream(query).map(mappingFunction);
}
@Override
public Stream<U> searchExactStream(String query) {
return delegate.searchExactStream(query).map(mappingFunction);
}
}
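// Usage sketch (Person/PersonView are hypothetical types, not part of this library):
//
//   PrefixSearch<Person> index = ...;  // any PrefixSearch implementation
//   PrefixSearch<PersonView> views = new PrefixSearchMapped<>(index, PersonView::fromPerson);
//   views.searchStream("joh").forEach(System.out::println);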
|
import os

import numpy as np


def store_adversarial(file_name, adversarial):
    """Persist an adversarial example under results/ as a .npy file."""
    if adversarial is not None:
        adversarial = check_image(adversarial)  # validate/normalize first
    # Strip any extension so np.save can append its own ".npy" suffix.
    path = os.path.join("results", file_name)
    path_without_extension = os.path.splitext(path)[0]
    np.save(path_without_extension, adversarial)
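# `check_image` is assumed to be defined elsewhere in this codebase; the
# hypothetical sketch below only illustrates the kind of validation such a
# helper presumably performs (shape and value-range checks).
def check_image(image):
    assert image.ndim == 3, "expected an HxWxC image array"
    return np.clip(image, 0, 255).astype(np.float32)

# usage sketch:
# store_adversarial("sample_0001", np.zeros((64, 64, 3), dtype=np.float32))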
From 1979, during his presidential election campaign, Ronald Reagan (Republican) advocated the establishment of a North American free trade area and called for the USA, Canada and Mexico to negotiate such an agreement. He was convinced that it would be beneficial to all three countries.
In 1990 negotiations for the North America Free Trade Agreement (NAFTA) commenced under Canadian Prime Minister Brian Mulroney, US President George H W Bush (Republican), and Mexican President Carlos Salinas de Gortari. The agreement was concluded and signed in 1992. It was ratified under US President Bill Clinton (Democrat) in 1993 and entered into force in 1994. Although there are some challenges, NAFTA has been assessed as beneficial to its three parties.
At the time of its adoption, countries in the Caribbean Basin were exporting textiles and clothing to the USA under its 807 programme. Jamaica, for example, had an agreement for textiles and clothing with the US under which quotas were set. Jamaica exported textiles and clothing to the USA from its free zones. Sugar was also exported to the US under quotas.
Caribbean trade with the USA was generally covered by the Caribbean Basin Initiative (CBI), which was primarily the 1983 Caribbean Basin Economic Recovery Act (CBERA) expanded in 1990. CBERA — aimed at promoting development — allowed Caribbean countries to export goods, with some exemptions, including for textiles and clothing, duty-free on a non-reciprocal basis. This meant that while Caribbean goods entered the US market duty-free, US goods exported to the Caribbean were subject to duties (tariffs). In 1986, under the US Super 807 programme, partial duty-free access was given to some textiles and clothing products.
With the conclusion of NAFTA, Caribbean countries exporting textiles and clothing and sugar to the USA were concerned about the impact of NAFTA provisions on their trade with the USA. They were worried that Mexico, with duty-free access and expanded quotas, particularly for textiles and clothing and sugar, would threaten their access to the US market and investments would also be diverted to Mexico. Caribbean countries, particularly those in Caricom, feared they would not be able to compete with Mexico.
Their concerns seemed justified as Mexico, under NAFTA, received duty-free treatment for textiles and clothing and, for sugar, received an expanded quota as well with the guarantee of further access if they became net exporters of sugar.
Caribbean countries, commencing in 1993, lobbied the US Administration for NAFTA parity (similar access to that of Mexico), citing the negative impact specifically on their textiles and clothing and sugar industries. While the Clinton Administration was sympathetic, various members of Congress opposed NAFTA parity. For these members of Congress the Caribbean case was exaggerated, as they saw no evidence that NAFTA impeded Caribbean exports. Some were opposed to the CBI, feeling that Caribbean countries should give reciprocal access to US exports. As Caribbean countries battled to obtain parity in Washington, DC, the textile and clothing industry in Jamaica was declining.
The countries of the Caribbean were finally accorded NAFTA parity in 2000 with the adoption in Congress of the Caribbean Basin Trade Partnership Act (CBTPA). This Act was further extended by the 2002 US Trade Act. Textiles and clothing received duty-free access to the US market.
By 2005, however, the World Trade Organisation Agreement on Textiles and Clothing had fully liberalised the trade in textiles and clothing ending the system of quotas. The textile and apparel industry in Jamaica, which had employed nearly 40,000 women at its peak, had folded by this time. Without quotas, production in Jamaica could not compete with production in Mexico, other Latin American and Caribbean countries, and the large producers in Asia.
For sugar, some Caribbean countries, such as Trinidad and Tobago and St Kitts and Nevis, have phased out sugar production. Countries such as Jamaica, Guyana, Belize, and Barbados continue to have small US quotas. They remain concerned about Mexico's access to the US sugar market, fearing further erosion of these quotas. US domestic sugar producers, who have always been opposed to Mexico's expanded access under NAFTA, are seen as allies by Caribbean sugar producers. This is ironic, as government subsidies to US sugar/sweetener producers since the 1980s, together with a dispute settlement in the General Agreement on Tariffs and Trade (eg, 1989 Australia vs USA), reduced Caribbean sugar quotas and flexibility in supply. In addition, the Caribbean's main market for sugar has been in the European Union (UK).
US President Donald Trump, describing the 23-year-old NAFTA as the worst trade deal ever for the USA, has insisted that it should be renegotiated or he would withdraw from it. The renegotiation commenced on August 16, 2017. Caribbean countries, given the history, may want to monitor these negotiations to determine whether any changes to the agreement could have positive or negative effects on their interests. |
Food, delicatessen and beverage souvenir shopping: the role of travel experience, trip purpose and destination. Shopping is an important activity of the tourist experience, for example as a vehicle for sense making, status and identity building, and making connections to the local culture. Purchasing food and beverages as souvenirs to bring home has always been an important part of this activity. The aim of this article is to analyze the influence of destination image, tourists' prior travel experience and tourists' trip purpose on tourists' food, delicatessen and beverage souvenir shopping. Tourists' personal income is also included in the research model. This quantitative study examined 405 Swedish tourists traveling to the UK and Spain. Drawing on the sophistication process of tourists and on research on destination image, travel motives and trip purpose, the results show that the most important factors explaining food and beverage souvenir shopping while on vacation are tourists' general travel experience and personal income. These findings both confirm and contradict previous research on tourists' souvenir shopping.
#pragma once
#include <string.h>
#include <glm/gtc/constants.hpp>
#include <glm/gtc/quaternion.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/ext/matrix_relational.hpp>
#include <glm/ext/vector_relational.hpp>
#include <glm/ext/scalar_relational.hpp>
#include <time.h>
#include <json.hpp>
struct Vec2 {
float x = 0;
float y = 0;
};
struct Vec3 {
float x = 0;
float y = 0;
float z = 0;
};
struct Vec4 {
float x = 0;
float y = 0;
float z = 0;
float w = 0;
};
struct Stats
{
float deltaTime = 1;
float frameRate = 1;
int triangles = 0;
int vertCount = 0;
};
struct NoiseLayer {
NoiseLayer() {
noiseType = (char*)malloc(1024);
memset(noiseType, 0, 1024);
strcpy(noiseType, "Simplex Perlin");
strcpy_s(name, "Noise Layer");
strength = 0.0f;
enabled = true;
active = false;
scale = 1;
offsetX = 0;
offsetY = 0;
}
nlohmann::json Save() {
nlohmann::json data;
data["type"] = std::string(noiseType);
data["strength"] = strength;
data["name"] = std::string(name);
data["scale"] = scale;
data["offsetX"] = offsetX;
data["offsetY"] = offsetY;
data["enabled"] = enabled;
data["active"] = active;
return data;
}
	void Load(nlohmann::json data) {
		// noiseType was allocated with malloc, so release it with free, not delete
		if (noiseType)
			free(noiseType);
		noiseType = (char*)malloc(1024);
		memset(noiseType, 0, 1024);
		strcpy(noiseType, std::string(data["type"]).c_str());
		std::string t = std::string(data["name"]);
		// Copy only the string plus terminator; the old memcpy_s read 256
		// bytes past the end of t's buffer.
		memset(name, 0, sizeof(name));
		strncpy(name, t.c_str(), sizeof(name) - 1);
		strength = data["strength"];
		scale = data["scale"];
		offsetX = data["offsetX"];
		offsetY = data["offsetY"];
		enabled = data["enabled"];
		active = data["active"];
	}
	NoiseLayer Clone(){
		NoiseLayer clone;
		// clone.noiseType is already allocated and zeroed by the default
		// constructor; the old code deleted *this* object's buffer instead,
		// leaving a dangling pointer. Just overwrite the clone's contents.
		strcpy(clone.noiseType, noiseType);
		memcpy_s(clone.name, 256, name, 256);
		clone.strength = strength;
		clone.scale = scale;
		clone.offsetX = offsetX;
		clone.offsetY = offsetY;
		clone.enabled = enabled;
		clone.active = active;
		return clone;
	}
char* noiseType;
char name[256];
float strength;
float offsetX, offsetY;
float scale;
bool enabled;
bool active;
};
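// Usage sketch (illustrative only): round-trip a NoiseLayer through its
// JSON serialization, using the nlohmann::json type included above.
//
//   NoiseLayer layer;
//   layer.strength = 0.5f;
//   nlohmann::json data = layer.Save();   // {"type":"Simplex Perlin", ...}
//   NoiseLayer restored;
//   restored.Load(data);                  // restored now mirrors `layer`
//   NoiseLayer copy = restored.Clone();   // deep copy with its own buffers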
struct ActiveWindows {
bool styleEditor = false;
bool statsWindow = false;
bool shaderEditorWindow = false;
bool texturEditorWindow = false;
bool seaEditor = false;
bool elevationNodeEditorWindow = false;
bool contribWindow = false;
bool textureStore = false;
bool osLisc = false;
bool filtersManager = false;
bool foliageManager = false;
bool supportersTribute = false;
bool skySettings = false;
}; |
Validation of the Cognitive Telephone Screening Instrument (COGTEL) for detecting mild cognitive impairment and dementia due to Alzheimer's disease (AD). Cognitive assessment is necessary for diagnosing cognitive impairment. In epidemiologic surveys and genetic family studies, cognitive tests that can be administered over the telephone are valuable tools. The Cognitive Telephone Screening Instrument (COGTEL) is a brief instrument for capturing interindividual differences in cognitive functioning, with the advantage of covering more cognitive domains than traditional screening tools such as the Mini-Mental State Examination (MMSE), as well as differentiating between individual performance levels in healthy older adults. Here, we report first evidence of the utility of the COGTEL in detecting mild cognitive impairment (MCI) and dementia due to Alzheimer's disease (AD).
FORT MYERS, Fla. - Some pet owners have a hard time staying afloat financially. When they are down on their luck fortunately there is a safety net, foster care for animals.
Lexi is a loving German Shepherd. Her owner got evicted, and while her master looks for a new home, a foster family took the animal in. Little did they know Lexi would repay their kindness.
Denise Default lost her house and can't afford a new one with what she is making on disability. Living in a truck, she realized life was too hard on her German Shepherd, Lexi, so Denise looked into foster care.
Cecelia Patterson offered to help, and she's glad she did. A stray bottle rocket on New Year's Eve caught her backyard shed on fire. Lexi was insistent and kept barking till Cecelia got up to see what was wrong.
According to Cecelia, the fire department told her if they had come any later her house would have gone up in flames too.
Lexi's master isn't surprised by her dog's bravery, but she misses her terribly. "I want my family back together, I want her home, but I can't bring her home to live with me in the truck."
"Foster mom" Cecelia understands Denise's pain but says Lexi will always have a safe place to stay. "She has a home for as long as it takes for Denise to find a place. Because she's my hero."
If you're interested in fostering an animal, more information is available on the Lee County Domestic Animal Services website. |
package com.demo;
import jade.core.Agent;
/**
 * FirstAgent: creates a new JADE agent that acts as the message sender.
 * @author YHC
 */
public class FirstAgent extends Agent {
private static final long serialVersionUID = 1L;
@Override
protected void setup() {
// Register the SendBehaviour so this agent sends its message on startup
this.addBehaviour(new SendBehaviour(this));
}
}
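// A minimal sketch of the SendBehaviour referenced above. The real class
// lives elsewhere in this demo; this hypothetical version only illustrates
// the usual JADE pattern (OneShotBehaviour + ACLMessage). The receiver name
// "secondAgent" is an assumption, not part of the original code.
//
// import jade.core.AID;
// import jade.core.behaviours.OneShotBehaviour;
// import jade.lang.acl.ACLMessage;
//
// public class SendBehaviour extends OneShotBehaviour {
//     public SendBehaviour(Agent a) {
//         super(a);
//     }
//
//     @Override
//     public void action() {
//         // Compose a single INFORM message and send it via the owning agent
//         ACLMessage msg = new ACLMessage(ACLMessage.INFORM);
//         msg.addReceiver(new AID("secondAgent", AID.ISLOCALNAME));
//         msg.setContent("hello from FirstAgent");
//         myAgent.send(msg);
//     }
// }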
|
D-004 ameliorates phenylephrine-induced urodynamic changes and increased prostate and bladder oxidative stress in rats. Background: Lower urinary tract symptoms (LUTS) in patients with benign prostatic hyperplasia (BPH) mainly depend on alpha1-adrenoreceptor (α1-ADR) stimulation, but a link with oxidative stress (OS) is also involved. D-004, a lipid extract of Roystonea regia fruits, antagonizes ADR-induced responses and produces antioxidant effects. The objective of this study was to investigate whether D-004 produces antioxidant effects in rats with phenylephrine (PHE)-induced urodynamic changes. Methods: Rats were randomized into eight groups (ten rats/group): a negative vehicle control and seven groups injected with PHE: a positive control, three treated with D-004 (200, 400 and 800 mg/kg) and three others with tamsulosin (0.4 mg/kg), grape seed extract (GSE) (250 mg/kg) and vitamin E (VE) (250 mg/kg), respectively. Results: Effects on urinary total volume (UTV), volume voided per micturition (VM), and malondialdehyde (MDA) and carbonyl group (CG) concentrations in prostate and bladder homogenates were the study outcomes. While VM and UTV were significantly lower in the positive control than in the negative control group, the opposite occurred with prostate and bladder MDA and CG values. D-004 (200-800 mg/kg) significantly increased both VM and UTV, significantly lowered MDA in prostate and bladder homogenates, and reduced CG levels only in the prostate. Tamsulosin significantly increased VM and UTV, but did not change the oxidative variables. GSE and VE did not change the UTV, whereas VE, not GSE, modestly but significantly attenuated the PHE-induced decrease of VM. Conclusions: Single oral administration of D-004 (200-800 mg/kg) was the only treatment that ameliorated the urodynamic changes and reduced the increased oxidative variables in the prostate of rats with PHE-induced prostate hyperplasia. Introduction: Benign prostatic hyperplasia (BPH), an enlargement of the prostate gland, may lead to troublesome lower urinary tract symptoms (LUTS), bladder outlet obstruction and reduced quality of life, and is common in aging men. The pathogenesis of the BPH/LUTS clinical entity involves both hormonal and non-hormonal factors. The increased conversion of testosterone (T) into dihydrotestosterone (DHT), mediated by the activity of prostate 5α-reductase, is the key factor in prostate growth, the static component of this disease. In turn, stimulation of alpha1-adrenoreceptors (α1-ADR), which regulate bladder and prostatic smooth muscle tone, triggers the enhanced contraction of these tissues and leads to LUTS and bladder outlet obstruction in BPH patients. Hence, the pharmacological management of BPH/LUTS mainly includes the use of α1-ADR antagonists, 5α-reductase inhibitors and their combined therapy, although phosphodiesterase inhibitors and non-steroidal anti-inflammatory drugs (NSAIDs) have also been used in BPH. On its side, BPH has also been linked with increased oxidative stress (OS), and free radicals have been claimed to be implicated in the micturition dysfunction observed in patients with BPH/LUTS. Also, it has been suggested that the use of antioxidants would ameliorate micturition dysfunction in patients with BPH.
Sympathomimetic stimulation achieved by phenylephrine (PHE) administration (1-20 mg/kg) to rodents has been shown to induce atypical prostatic hyperplasia (PH), characterized by piling-up with papillary and cribriform patterns and budding-out of epithelial cells, accompanied by urodynamic changes such as reduced volume voided per micturition (VM) and micturition total volume (VT). These PHE-induced changes seem to be mediated by the α1A-ADR, which predominate in the stroma of the rodent ventral prostate, so that PHE could directly modulate prostate stromal growth and indirectly modulate epithelial growth in a paracrine fashion, although the contribution of other indirect effects, including increased OS, has not been ruled out. D-004, a lipid extract of Roystonea regia fruits that contains a mixture of fatty acids, mainly oleic, lauric, palmitic and myristic, has been effective in experimental models of PH. D-004 inhibits prostate 5α-reductase in vitro, and it prevented PH induced with T, not with DHT, in rodents. Also, D-004 effectively antagonizes ADR-mediated responses in vitro and in vivo. The addition of D-004 inhibited PHE-induced contractions in preparations of isolated prostate strips and vas deferens, while its oral administration significantly reduced PHE-induced impairment of micturition and histological changes in rat prostate, indicating that, in vivo, D-004 effectively opposed these PHE-induced responses mediated through urogenital α1A-ADR. Also, D-004 has been demonstrated to produce antioxidant effects on normal and hyperplastic prostate tissue in rats, and on plasma oxidative variables in healthy and BPH men. Nevertheless, the coincidence of the effects of D-004 on OS variables in prostate and bladder tissues and on PHE-induced urodynamic changes in rats has not been explored. In light of this background, this study investigated whether D-004 produces antioxidant effects in rats treated with PHE. The study was conducted according to the Cuban Guidelines for Animal Handling and the Cuban Code of Good Laboratory Practices (GLP). An independent institutional board of the centre approved the study protocol and the use of the animals in the experiments. Treatment methods and dosage: D-004, GSE, VE and tamsulosin were suspended in Tween-65/H2O vehicle (2%). All treatments (including the vehicle) were given as single oral doses by gastric gavage (5 mL/kg of body wt) one hour before inducing the PHE urodynamic impairment. PHE was diluted in saline solution (5 mg/mL) and administered by subcutaneous (s.c.) injection. Rats were randomized into eight groups (ten rats/group): a negative vehicle control and seven groups treated with PHE: a positive control, three treated with D-004 (200, 400 and 800 mg/kg), one with tamsulosin (0.4 mg/kg), one with GSE (250 mg/kg) and one with VE (250 mg/kg). After treatment completion, rats were weighed, anesthetized under an ether atmosphere and sacrificed by complete bleeding from the abdominal aorta. The prostate was immediately separated from the bladder, and both were removed and weighed on a Mettler Toledo analytical balance. Effects on urodynamic variables: One hour after administering the oral treatments, the PHE-treated groups were injected with PHE (5 mg/kg, s.c.). All groups (including the negative control) received a fluid loading dose (distilled water, 5 mL s.c. + 5 mL p.o.) to increase the VM. Thirty min later, rats were placed unrestrained in metabolic cages for one hour, and the urinary total volume (UTV) and VM were measured.
The amelioration of the PHE-induced reduction of VM was the main study outcome. Effects on oxidative variables: Aliquots of whole prostate and bladder tissue were taken and gently homogenized in an ice-cold bath with an Ultra-Turrax homogenizer. Tissue samples were homogenized in 150 mmol/L Tris/HCl buffer (pH 7.4) and 50 mmol/L phosphate buffer (pH 7.4) for determining the TBARS and carbonyl group (CG) levels, respectively. Effect on malondialdehyde (MDA) levels: The concentration of TBARS was determined according to Ohkawa. For that, the reaction mixture (prostate and bladder homogenates) was treated with 0.2 mL of sodium dodecyl sulfate (8.1%), 1.5 mL of acetic acid (20%, pH 3.5) and 1.5 mL of thiobarbituric acid (TBA) (0.8%), and heated to 95 °C for one hour. To prevent the production of TBA reactants, 50 µL of butylated hydroxytoluene (1 mmol/L) was added to the mixtures. After cooling, 5 mL of an n-butanol:pyridine (15:1 v/v) mixture was added with vigorous vortexing, and the mixture was centrifuged at 4,000 rpm for 20 min. The absorbance of the organic layer was measured at 534 nm using a spectrophotometer (Genesys 10 UV). Concentrations of TBARS were determined from a standard curve of MDA bis-(dimethyl acetal) and reported as nmol MDA/mg protein. Protein concentrations were assessed by a modified Lowry method. Effect on protein-linked carbonyl groups (CG): Protein-linked CG levels were determined according to Reznick and Packer. The prostate or bladder homogenates were measured at 280/260 nm to rule out the presence of nucleic acids. In all cases, streptomycin sulphate 1% was added to eliminate nucleic acids. A volume equivalent to 50 mg of protein was then added to 4 mL of dinitrophenylhydrazine (DNPH) solution (10 mmol/L) dissolved in HCl 2.5 mol/L. The mixture was vigorously stirred and placed in the darkness for 1 h, 5 mL of trichloroacetic acid (TCA) at 10% were added, and the mixture was centrifuged at 3,000 rpm for 15 minutes. The protein pellet was washed three times with a mixture of ethanol:ethyl acetate (1:1, v/v) to eliminate the excess of DNPH and then dissolved in 2 mL of guanidine 6 mol/L. Optical density was measured at 450 nm (molar extinction coefficient: 22,000 M⁻¹ cm⁻¹) and the concentration of CG was reported in nmol/mg of protein. Statistical analysis: Comparisons among groups were performed with the Kruskal-Wallis test, and paired comparisons versus the control group with the Mann-Whitney U test. The level of statistical significance was set at 0.05. All analyses were performed using statistical software for Windows (Release 6.0, StatSoft, Inc., USA). The dose-effect relationship was assessed using linear regression and correlation tests with the Primer of Biostatistics program. Results: Decreased VM (P<0.001) and UTV (P<0.05) values were found in the positive control as compared to the negative control group, reductions that were significantly (P<0.001) and markedly attenuated by tamsulosin, the reference substance, which produced a 78% inhibition of the PHE effect on VM (Table 1). Single oral doses of D-004 (200-800 mg/kg) dose-dependently (r=0.999; P<0.05) ameliorated the reduction of VM induced by PHE. While the doses of 400 and 800 mg/kg significantly increased the VM, the lowest dose was ineffective. The magnitude of the effect was moderate, since the highest dose tested (800 mg/kg) increased VM by 48.1% as compared to the positive control. All doses of D-004, however, significantly increased the UTV.
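As an illustration of the carbonyl calculation described in the Methods above, the reported nmol/mg values follow directly from the Beer-Lambert law (A = ε·c·l). A minimal sketch in Python, assuming a 1 cm light path and taking the 2 mL guanidine volume and 50 mg of protein stated above as inputs:

EPSILON = 22000.0  # M^-1 cm^-1, molar extinction coefficient of DNPH adducts

def carbonyl_content(a450, path_cm=1.0, volume_l=0.002, protein_mg=50.0):
    """Protein-linked carbonyl content in nmol per mg protein."""
    molar = a450 / (EPSILON * path_cm)   # mol/L of DNPH-carbonyl adduct
    nmol_total = molar * volume_l * 1e9  # total nmol in the assay volume
    return nmol_total / protein_mg

# e.g. carbonyl_content(0.110) -> 0.2 nmol CG per mg protein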
GSE and VE did not change the UTV, whereas VE, not GSE, modestly (22.1%) but significantly attenuated the PHE-induced decrease of VM (Table 1). Tables 2 and 3 show the effects of the treatments on the oxidative variables (MDA and CG concentrations) in rat prostate and bladder, respectively. The s.c. injection of PHE significantly increased MDA and CG concentrations in both tissues as compared to the negative control group.
In turn, CG concentrations are used as a marker of protein oxidation, another indicator of the levels of OS in tissues. Prostate and bladder MDA and CG concentrations in the positive controls were greater than in the negative controls. Taken together, these findings support that high levels of lipid peroxidation and protein oxidation are linked with PHE-induced urodynamic dysfunction. We believe that the damage induced by PHE (proven by the reduction of VM and UTV) is related, beyond its effects on α1-ADR, to the increased OS on prostate and bladder. The coexistence of the ameliorating effects of D-004 on PHE-induced urodynamic impairment and increased OS markers (MDA and CG) demonstrates, for the first time, the efficacy of D-004 for lowering OS in conditions that mimic the stimulation of α1-ADR, at the same doses reported as effective for preventing T-induced PH in rats. Although this result seems to reinforce the link between urinary dysfunction and OS in this model, it should be noted that proven antioxidants like GSE and VE at 250 mg/kg, effective for lowering oxidative markers, did not modify UTV, and that tamsulosin, highly effective for reducing urodynamic dysfunction, was devoid of effect on the oxidative variables. These facts support that the main efficacy of treatments on this model is related to the antagonism of α1-ADR, not to changes in oxidative variables. Nevertheless, since a modest effect of VE on VM was seen, a door remains open for the contribution of antioxidant effects to alleviating urodynamic dysfunction, mainly in light of the important role of the reactive oxygen species-reactive nitrogen species (ROS-RNS) on the stimulation of lower urinary tract contractions in rats and rabbits. The fact that D-004, GSE and VE were effective in lowering prostate and bladder MDA concentrations, but lowered the concentrations of CG only in the prostate, not in the bladder, remains intriguing, since we have no conclusive explanation for this differential effect between the two tissues. This result, therefore, merits further studies on this target. From a wider perspective, since several experimental and clinical lines of evidence have demonstrated the role of ROS-RNS in the stimulation of lower urinary tract contractions and the benefits of some antioxidant agents on the progression of BPH (35,36), it is very interesting to find that D-004 not only ameliorates the PHE-induced urodynamic dysfunction in rats, but also decreases the oxidative damage induced by PHE on target tissues related to the BPH/LUTS clinical entity. These results suggest that the use of antioxidants could have a protective role against micturition dysfunction due to PHE-induced stimulation of α1-ADR.

Conclusions

Single oral administration of D-004 (200-800 mg/kg) was the only treatment, as opposed to tamsulosin, VE or GSE, able to simultaneously ameliorate the PHE-induced urodynamic dysfunction and the increased prostate OS in rats.
Top seed Roger Federer booked his place in the last 16 of the Australian Open with a straight sets victory over Albert Montanes.
Roger Federer sets up a last-16 encounter with Lleyton Hewitt at the Australian Open after a routine victory over Albert Montanes.
(CNN) -- Roger Federer booked his place in the last 16 of the Australian Open after a routine straight sets win over Spain's Albert Montanes on Saturday to set up a match with home favourite Lleyton Hewitt.
The world number one was relatively untroubled as he claimed a 6-3, 6-4, 6-4 victory over Montanes, and he felt his serve proved to be a crucial factor in the win.
"I thought it was dominated from my side with my serve, which allowed me then to take chance on the return," Federer told the Australian Open Web site.
"It was a pretty straightforward match, really. I don't remember him having any breakpoints. He was playing tough from the baseline and making it hard. I'm happy with the match and was able to serve it out, so it was good."
Federer will now turn his attention to Australian Hewitt who led Marcos Baghdatis 6-0 4-2 before the Cypriot was forced to retire with a shoulder injury.
Third seed Novak Djokovic thrashed Denis Istomin 6-1 6-1 6-2, while sixth seed Nikolay Davydenko claimed a 6-0 6-3 6-4 victory over Juan Monaco.
Davydenko's fourth-round opponent will be Spaniard Fernando Verdasco, who advanced when Stefan Koubek of Austria retired due to sickness after losing the first set 6-1.
France's Jo-Wilfried Tsonga, who was a beaten finalist in 2008, came from a break down in the fourth set to beat Tommy Haas 6-4 3-6 6-1 7-5.
Defending champion Rafael Nadal battled through into the last 16 after being taken to four sets, while fellow title hopefuls Juan Martin Del Potro, Andy Murray and Andy Roddick also progressed.
World number two Nadal progressed after beating German 27th seed Philipp Kohlschreiber 6-4 6-2 2-6 7-5 in three hours and 39 minutes.
The Spaniard will play Ivo Karlovic in the fourth round after the big-serving Croatian defeated 24th-seeded compatriot Ivan Ljubicic 6-3 3-6 6-3 7-6 (9-7).
Kohlschreiber had eliminated two other left-handers in his opening matches, and fought back to stun an out-of-sorts Nadal by winning the third set.
He fought back from a break down to level at 4-4, but paid the price for three wild forehands on his own serve at 5-5 and Nadal duly closed out the match.
Fifth seed Murray had no such problems as he cruised past Frenchman Florent Serra with a 7-5 6-1 6-4 victory, and has yet to drop a set in three matches so far.
The 22-year-old Briton will next play American 33rd seed John Isner, with Nadal waiting for him in the quarterfinals if they both win their fourth-round ties.
But he faces a challenge against the big-serving Isner, who knocked out French 12th seed Gael Monfils 6-1 4-6 7-6 (7-4) 7-6 (7-5) and is the second tallest player on tour.
Murray said: "I have broken serve a lot so far this tournament against guys that have good serves. Kevin Anderson is a good server. I returned well against him. I'm going to need that in the next match.
"Isner is playing really well. He won the tournament in Auckland last week. The guy is 6ft 9in and he has if not the best serve on the tour... I'll have to return well again."
Fourth seed Del Potro was also taken to four sets as he defeated Germany's Florian Mayer 6-3 0-6 6-4 7-5.
The U.S. Open champion will face 14th seed Marin Cilic in the fourth round after the Croatian defeated Swiss 19th seed Stanislas Wawrinka 4-6 6-4 6-3 6-3.
If Argentina's Del Potro wins that match, he will play either seventh seed Andy Roddick or 11th seed Fernando Gonzalez in the quarterfinals.
Roddick, the 2007 losing finalist, advanced after being tested by Spain's Feliciano Lopez before he finally won 6-7 (4-7) 6-4 6-4 7-6 (7-3) in three hours and 32 minutes.
Both men recorded 29 aces, but Lopez suffered through 60 unforced errors, while American Roddick had 21.
Roddick said: "I felt like I was real close to getting on top of the match and making it, turning it, to kind of make it a little bit more comfortable. Just didn't quite get there."
Gonzalez was also on court for more than three hours as he edged out Kazakhstan's Evgeny Korolev 6-7 (7-5) 6-3 1-6 6-3 6-4. |
A New Cell-Level Search Based Non-Exhaustive Approximate Nearest Neighbor (ANN) Search Algorithm in the Framework of Product Quantization

Non-exhaustive search is widely used in approximate nearest neighbor (ANN) search. In this paper, we propose a new cell-level search-based non-exhaustive ANN search algorithm in the framework of product quantization (PQ), called cell-level PQ, to speed up ANN search. The cell-level search is introduced by searching all the PQ cells of the query vector at the cell level, and the length of the candidate list can be significantly reduced with negligible computational cost. Instead of using highly time-consuming coarse quantizers, which are necessary in all existing non-exhaustive ANN search algorithms such as inverted files (IVF) and the inverted multi-index (IMI), our proposed cell-level PQ reuses the PQ cells of the query vector to reject database vectors, so that ANN search in the framework of PQ can be efficiently sped up. In addition, because our proposed cell-level PQ does not need to store the auxiliary indexes of coarse quantizers for each database vector, no extra memory consumption is required. Experimental results on different databases demonstrate that our proposed cell-level PQ can significantly speed up ANN search in the framework of PQ, while the search accuracy remains almost the same as that of standard PQ.
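The abstract gives no pseudocode, so the following is only a rough sketch of the PQ machinery the paper builds on: asymmetric distance computation through per-subspace lookup tables, plus a hypothetical cell_filter standing in for the proposed cell-level rejection step. All names and the specific filtering rule are our assumptions, not the authors' algorithm.

import numpy as np

def adc_distances(query, codebooks, codes):
    """Asymmetric distance computation (ADC) for product quantization.

    query:     (D,) float query vector, viewed as M subvectors of size D // M
    codebooks: (M, K, D // M) per-subspace centroids
    codes:     (N, M) integer PQ codes of the database vectors
    """
    M, K, d = codebooks.shape
    sub = query.reshape(M, d)
    # Lookup table: squared distance of each query subvector to each centroid.
    lut = ((codebooks - sub[:, None, :]) ** 2).sum(axis=-1)   # (M, K)
    # Distance to each database vector = sum of its M table entries.
    return lut[np.arange(M), codes].sum(axis=1)               # (N,)

def cell_filter(codes, query_codes, min_shared=1):
    """Hypothetical cell-level rejection: keep database vectors that share at
    least `min_shared` PQ cells with the quantized query (illustrative rule)."""
    shared = (codes == query_codes[None, :]).sum(axis=1)
    return np.nonzero(shared >= min_shared)[0]

In this reading, cell_filter shortens the candidate list before adc_distances is evaluated over it, which is the general effect the abstract describes.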
// File: app/src/main/java/com/app/tabletopdiceroller/AddNewFavoriteFragment.java
package com.app.tabletopdiceroller;
import android.os.Bundle;
import android.support.annotation.NonNull;
import android.support.annotation.Nullable;
import android.support.v4.app.Fragment;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.Button;
import android.widget.EditText;
import android.widget.TextView;
import com.app.tabletopdiceroller.Objects.Roll;
/**
* This fragment is the screen displayed when creating a new favorite roll and adding it to the preset roll list
*/
public class AddNewFavoriteFragment extends Fragment implements View.OnClickListener {
private static AddNewFavoriteFragment fragmentInstance = null;
private Button backBtn;
private Button createBtn;
private TextView numSides;
private TextView numDice;
private EditText rollNameText;
private static String sides = "N/A";
private static String dice = "N/A";
private int sidesInt = 0;
private int diceInt = 0;
/**
* Creates a new fragment if the current one is null, otherwise returns the current fragment.
* @return The current AddNewFavoriteFragment instance
*/
public static AddNewFavoriteFragment getFragment() {
if (fragmentInstance == null) {
fragmentInstance = new AddNewFavoriteFragment();
}
return fragmentInstance;
}
@Nullable
@Override
public View onCreateView(@NonNull LayoutInflater inflater, @Nullable ViewGroup container, @Nullable Bundle savedInstanceState) {
View view = inflater.inflate(R.layout.fragment_add_new_fav, container, false);
backBtn = view.findViewById(R.id.back_button);
backBtn.setOnClickListener(this);
createBtn = view.findViewById(R.id.confirm_button);
createBtn.setOnClickListener(this);
numSides = view.findViewById(R.id.favDisplayNumSides);
numDice = view.findViewById(R.id.favDisplayNumDice);
rollNameText = view.findViewById(R.id.rollNameText);
numSides.setText(sides);
numDice.setText(dice);
return view;
}
/**
* OnClickListener
* @param v is the view for this fragment
*/
@Override
public void onClick(View v) {
switch (v.getId()) {
case R.id.back_button:
((MainActivity)getActivity()).cancelNewFavorite();
break;
case R.id.confirm_button:
try {
rollNameText.clearFocus();
sidesInt = Integer.parseInt(sides);
diceInt = Integer.parseInt(dice);
String rollName = rollNameText.getText().toString();
Roll roll = new Roll(diceInt, sidesInt, rollName);
((MainActivity)getActivity()).insertRollToDatabase(roll);
((MainActivity)getActivity()).confirmNewRoll(sidesInt, diceInt, rollName);
} catch (Exception e) {
// Catches exception with user input
}
break;
}
}
/**
* Sets the 'sides' variable, which states how many sides there are per dice
* @param s is the updated number of sides
*/
public static void setSides(String s) {
sides = s;
}
/**
* Sets the 'dice' variable, which states how many dice are rolled in this roll
* @param d is the updated number of dice
*/
public static void setDice(String d) {
dice = d;
}
}
|
// Repo: santjuan/dailyBot
package dailyBot.analysis.view;
import java.awt.Dimension;
import java.awt.GridLayout;
import java.util.TimeZone;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JOptionPane;
import javax.swing.WindowConstants;
import dailyBot.analysis.Utils;
import dailyBot.control.DailyProperties;
import dailyBot.model.SignalProvider.SignalProviderId;
public class RMIClientMain extends JFrame
{
private static final long serialVersionUID = 7878714258759106938L;
public RMIClientMain()
{
super("DailyBot");
initialize();
}
public static void attemptSave(SignalProviderId only)
{
for(SignalProviderId id : (only == null ? SignalProviderId.values() : new SignalProviderId[]{only}))
if(JOptionPane.YES_OPTION == JOptionPane.showConfirmDialog(null, "Desea guardar " + id + "?"))
Utils.getFilterSignalProvider(id.ordinal()).writePersistence();
}
private void initialize()
{
GridLayout gridLayout = new GridLayout();
gridLayout.setRows(5);
gridLayout.setColumns(2);
setLayout(gridLayout);
setSize(259, 290);
JButton activo = new JButton("Cambiar activo");
activo.addActionListener(new java.awt.event.ActionListener()
{
public void actionPerformed(java.awt.event.ActionEvent e)
{
new AnalysisFormat((SignalProviderId) JOptionPane.showInputDialog(null, "Seleccione el proveedor", "Proveedor", JOptionPane.QUESTION_MESSAGE, null, SignalProviderId.values(), SignalProviderId.values()[0]));
}
});
add(activo);
for(SignalProviderId id : SignalProviderId.values())
this.add(getSignalProviderButton(id));
JButton salir = new JButton();
salir.setText("Salir");
salir.addActionListener(new java.awt.event.ActionListener()
{
public void actionPerformed(java.awt.event.ActionEvent e)
{
attemptSave(null);
System.exit(0);
}
});
add(salir);
setSize(new Dimension(259, 244));
setDefaultCloseOperation(WindowConstants.DO_NOTHING_ON_CLOSE);
pack();
setVisible(true);
}
private JButton getSignalProviderButton(final SignalProviderId signalProviderId)
{
JButton botonNuevo = new JButton();
botonNuevo.setText(signalProviderId.toString());
botonNuevo.addActionListener(new java.awt.event.ActionListener()
{
public void actionPerformed(java.awt.event.ActionEvent e)
{
new DailyTable(signalProviderId);
new SignalProviderFormat(signalProviderId);
}
});
return botonNuevo;
}
public static void logRMI(String error)
{
JOptionPane.showMessageDialog(null, error);
}
public static void main(String[] args)
{
TimeZone.setDefault(TimeZone.getTimeZone("America/Bogota"));
DailyProperties.setAnalysis(true);
Utils.getRecords();
new RMIClientMain();
}
} |
# Repo: orangeGoran/neuralmonkey-experiments
"""A set of helper functions for TensorFlow."""
from typing import Callable, Iterable, List, Optional, Tuple
# pylint: disable=unused-import
from typing import Dict, Set
# pylint: enable=unused-import
import tensorflow as tf
def _get_current_experiment():
# This is needed to avoid circular imports.
from neuralmonkey.experiment import Experiment
return Experiment.get_current()
def update_initializers(initializers: Iterable[Tuple[str, Callable]]) -> None:
    """Register (variable name, initializer) pairs with the current experiment."""
    _get_current_experiment().update_initializers(initializers)
def get_initializer(var_name: str,
default: Callable = None) -> Optional[Callable]:
"""Return the initializer associated with the given variable name.
The name of the current variable scope is prepended to the variable name.
This should only be called during model building.
"""
full_name = tf.get_variable_scope().name + "/" + var_name
return _get_current_experiment().get_initializer(full_name, default)
def get_variable(name: str,
shape: List[Optional[int]] = None,
dtype: tf.DType = None,
initializer: Callable = None,
**kwargs) -> tf.Variable:
"""Get an existing variable with these parameters or create a new one.
This is a wrapper around `tf.get_variable`. The `initializer` parameter is
treated as a default which can be overriden by a call to
`update_initializers`.
This should only be called during model building.
"""
return tf.get_variable(
name=name, shape=shape, dtype=dtype,
initializer=get_initializer(name, initializer),
**kwargs)
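
A brief usage sketch of the helpers above, assuming an active Neural Monkey experiment and TF1-style variable scopes; the scope and variable names are illustrative only:

# Hypothetical model-building code using the helpers above.
import tensorflow as tf

# Register an initializer override before the variable is first created.
update_initializers([("encoder/embeddings", tf.zeros_initializer())])

with tf.variable_scope("encoder"):
    # Uses the registered override; otherwise falls back to the default below.
    embeddings = get_variable(
        "embeddings", shape=[10000, 512],
        initializer=tf.random_normal_initializer(stddev=0.1))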
|
Measurement of Rapid Variation in Ultrasound Backscattering During Change in Thickness of Tissue Phantom

The cyclic variation in ultrasound integrated backscatter (IB) during one cardiac cycle offers potential for the evaluation of myocardial contractility. Since there is large motion of the heart wall due to the heartbeat, in the conventional method the position of the region of interest (ROI) for calculating the IB is manually set at each timing during one heartbeat. Moreover, the change in the size of the ROI during contraction and relaxation of the myocardium is not considered. In this paper, a new method is proposed for automatic tracking of the position and the size of the ROI. Rapid components, which are detected by increasing the spatial and time resolutions to 1 mm and 200 µs, respectively, depend strongly on the instantaneous velocity of the ROI. These components result from interference between the waves reflected by the ROI and those reflected by scatterers other than the ROI. By separately estimating the bias component, these interference components are eliminated. By applying the proposed method to a sponge phantom, which was cyclically depressed in a water tank, and to the posterior wall of the heart of a healthy subject, the interference components were sufficiently suppressed and the IB signals were obtained with high spatial and time resolution.
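As a rough illustration of the quantity being tracked, integrated backscatter over a region of interest is often computed as the mean squared amplitude of the RF signal within the ROI, expressed in dB relative to a reference segment; the sketch below assumes that common convention, which may differ from the paper's exact estimator.

import numpy as np

def integrated_backscatter_db(rf_roi, rf_reference):
    """Mean backscattered power in the ROI, in dB relative to a reference
    segment (assumed convention only)."""
    power_roi = np.mean(np.asarray(rf_roi, dtype=float) ** 2)
    power_ref = np.mean(np.asarray(rf_reference, dtype=float) ** 2)
    return 10.0 * np.log10(power_roi / power_ref)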
def calculate_stats(self):
for corpus in self.corpora:
corpus['speakers'] = list(set(sp for sp in corpus['dur_by_speaker']) & set(sp for sp in corpus['tok_by_speaker']))
corpus['informants'] = [sp for sp in corpus['speakers'] if not self.is_interviewer(sp)]
corpus['total_sound_dur'] = 0
        if '#TOTAL_SOUND_DURATION' in corpus['dur_by_speaker']:
            corpus['total_sound_dur'] = corpus['dur_by_speaker']['#TOTAL_SOUND_DURATION']
            del corpus['dur_by_speaker']['#TOTAL_SOUND_DURATION']
        corpus['total_sound_dur_str'] = self.str_duration(corpus['total_sound_dur'])
corpus['dur_by_speaker_str'] = {sp: self.str_duration(corpus['dur_by_speaker'][sp])
for sp in corpus['dur_by_speaker']}
corpus['total_dur'] = sum(corpus['dur_by_speaker'][sp]
for sp in corpus['dur_by_speaker'])
corpus['inf_dur'] = sum(corpus['dur_by_speaker'][sp]
for sp in corpus['dur_by_speaker']
if not self.is_interviewer(sp))
corpus['total_tok'] = 0
corpus['total_tok_by_speaker'] = {}
for sp in corpus['tok_by_speaker']:
corpus['total_tok_by_speaker'][sp] = 0
for tok in corpus['tok_by_speaker'][sp]:
corpus['total_tok'] += corpus['tok_by_speaker'][sp][tok]
corpus['total_tok_by_speaker'][sp] += corpus['tok_by_speaker'][sp][tok]
corpus['inf_tok'] = 0
for sp in corpus['tok_by_speaker']:
if self.is_interviewer(sp):
continue
for tok in corpus['tok_by_speaker'][sp]:
corpus['inf_tok'] += corpus['tok_by_speaker'][sp][tok]
corpus['tok_freq'] = {}
for sp in corpus['tok_by_speaker']:
for token in corpus['tok_by_speaker'][sp]:
if token not in corpus['tok_freq']:
corpus['tok_freq'][token] = corpus['tok_by_speaker'][sp][token]
else:
corpus['tok_freq'][token] += corpus['tok_by_speaker'][sp][token]
corpus['freq_tokens'] = [token for token in sorted(corpus['tok_freq'],
key=lambda t: (-corpus['tok_freq'][t], t))][:MAX_FREQ_TOKENS]
corpus['total_dur_str'] = self.str_duration(corpus['total_dur'])
corpus['inf_dur_str'] = self.str_duration(corpus['inf_dur']) |
#ifndef __LOG_LIBRARY_H_
#define __LOG_LIBRARY_H_
#include <stdio.h>
#define MAX_TIME_LEN 100
#define CONVERT_SEC_TO_USEC 1000000
#define MAX_USEC_SLEEP 1000000
char *queryFileByLoc(int fd, int row, int col);
void printQuery(FILE *fLogFile, int fdSrc, char *queryField, int row, int col);
char *generateLogTime(char *timeStr);
void longSleep(long sleepTime);
#endif // __LOG_LIBRARY_H_
|
Follow up by colour Doppler imaging of 102 patients with retinal vein occlusion over 1 year

Background/aim: Retinal vein occlusion (RVO) is one of the most frequent ocular vascular diseases and leads to severe vision impairment. Colour Doppler imaging (CDI) is the first method which allows distinct evaluation of arterial and venous velocities in RVO. CDI is valuable for the diagnosis of RVO and shows the effects of isovolaemic haemodilution. Patients with RVO were monitored by CDI for 1 year in order to clarify venous and arterial involvement in the pathogenesis of this disease. Methods: Patients with RVO were monitored prospectively for 1 year with clinical examinations, fluorescein angiography, and CDI every 3 months. 102 adults referred for RVO of less than 2 months' duration were enrolled. Unaffected eyes were used as controls. The maximum systolic and diastolic flow velocities and the resistance index (RI) were measured in the central retinal artery (CRA), and the maximum and minimum blood flow velocities in the central retinal vein (CRV). Results: During the year of observation, branch retinal vein occlusion (BRVO), ischaemic central retinal vein occlusion (CRVO), and non-ischaemic CRVO had distinct patterns of venous velocity changes. BRVO had a profile similar to that observed in controls. Venous velocities were continuously lower in central forms, with the lowest values in ischaemic occlusion. In contrast, a brief decrease in arterial diastolic velocity was observed in ischaemic CRVO at presentation, correlated with the arteriovenous passage time on fluorescein angiography, but with rapid normalisation. Conclusions: CDI findings were correlated with the type of RVO at all times during follow up. CDI showed persistent impairment of central venous velocity in CRVO, whereas there was a fast initial recovery of arterial velocity values. These results obtained with CDI provide strong evidence of a primary venous mechanism in RVO.
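The resistance index reported for the central retinal artery is conventionally the Pourcelot index, RI = (Vsyst - Vdiast) / Vsyst; a minimal sketch with hypothetical velocities:

def resistance_index(v_systolic, v_diastolic):
    """Pourcelot resistance index from peak systolic and end-diastolic
    velocities (same units, e.g. cm/s)."""
    return (v_systolic - v_diastolic) / v_systolic

# Hypothetical CRA velocities: 10.2 cm/s systolic, 2.6 cm/s diastolic
print(resistance_index(10.2, 2.6))  # ~0.75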
Diagnostic accuracy of fine needle aspiration cytology, triple test and tru-cut biopsy in the detection of breast lesions

INTRODUCTION

Palpable breast masses require a thorough clinical breast examination, imaging, and tissue sampling for a definitive diagnosis to rule out malignancy. Mammographic screening of the same or contralateral breast can also detect malignant lesions in older women. Ultrasonography is particularly valuable in detecting cystic masses and may be used to guide biopsy techniques. Invasive procedures such as core-needle biopsy allow histologic diagnosis, hormone-receptor testing, and differentiation between in situ and invasive disease. Breast masses have a variety of etiologies, benign and malignant. Fibroadenoma is the most common benign breast mass, while invasive ductal carcinoma is the most common malignancy. In this review, an attempt is made to examine the role of FNAC, the triple test and tru-cut biopsy in the detection of breast lesions.
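Diagnostic accuracy of each modality against the histopathological gold standard is conventionally summarized by sensitivity, specificity, predictive values, and overall accuracy; the sketch below shows the standard formulas applied to hypothetical confusion-table counts.

def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic-accuracy summary from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),                 # true positive rate
        "specificity": tn / (tn + fp),                 # true negative rate
        "ppv": tp / (tp + fp),                         # positive predictive value
        "npv": tn / (tn + fn),                         # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Hypothetical counts for FNAC versus final histology:
print(diagnostic_metrics(tp=42, fp=3, tn=50, fn=5))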
/* This type is auto-generated; do not modify. */
package model
import logserverlog "fgame/fgame/logserver/log"
func init() {
logserverlog.RegisterLogMsg((*PlayerTianMoTi)(nil))
}
/* TianMoTi (Heavenly Demon Body) */
type PlayerTianMoTi struct {
PlayerLogMsg `bson:",inline"`
	// Current advancement tier
	CurAdvancedNum int32 `json:"curAdvancedNum"`
	// Tier before the change
	BeforeAdvancedNum int32 `json:"beforeAdvancedNum"`
	// Number of tiers advanced
	AdvancedNum int32 `json:"advancedNum"`
	// Advancement reason code
	Reason int32 `json:"reason"`
	// Advancement reason text
	ReasonText string `json:"reasonText"`
}
func (c *PlayerTianMoTi) LogName() string {
return "player_tianmoti"
}
|
Dawson’s Creek changed the landscape of teen dramas when it premiered on the WB’s midseason lineup in 1998. Complex language, frank talk about mature topics, and the first gay kiss on television were among its lasting legacy.
Scream writer Kevin Williamson created the concept and penned many of the scripts, inspired by his own life. He drew from high school friends and his dreams of Hollywood to create Dawson (James Van Der Beek), Joey (Katie Holmes), Pacey (Joshua Jackson), and Jen (Michelle Williams).
The series launched the core four into the TV stratosphere and made them household names. Meredith Monroe, Kerr Smith, and Busy Philipps all joined the series in later seasons, but left lasting impressions on the audience as well.
Despite a loyal fan following and plenty of success in the eyes of the tiny WB network, a few bumps occurred behind the scenes. The series almost wasn’t on the WB, executives weren’t too keen on some members of the young cast, some of the cast didn’t even get along, and the budget often didn’t allow producers to do what they really wanted.
After poring through old interviews, DVD commentary tracks, and cast reunions, we’ve uncovered 15 Behind-The-Scenes Secrets You Never Knew About Dawson’s Creek.
Since the end of Dawson’s Creek, late addition Busy Philipps has spoken candidly about her time on the show. She even revealed that her hard partying caused script rewrites.
Filming in Wilmington, North Carolina allowed the cast to enjoy letting loose away from the prying eyes of Hollywood. In an interview with Wendy Williams, the actress revealed that she was once so drunk that she dislocated her knee while at a local bar.
After spending some time in the emergency room, Philipps had to stay off her leg for a few weeks. Writers and directors reworked the next two episodes so that her character Audrey spent her time laying in bed or sitting in a chair to accommodate the injury.
Despite the close-knit cast and some of the life-long friendships made on set, not everyone enjoyed their Dawson’s Creek experience.
Monica Keena played Abby Morgan, the high school mean girl in the early days of the show. As a recurring character, Keena spent her time flying back and forth between North Carolina and Los Angeles for work and to see her boyfriend.
When the back and forth became too much for her, Keena asked for her character to be written out. Abby drunkenly fell to her death off a pier as a result.
Likewise, John Wesley Shipp played Mitch Leery, father to the title character, in early seasons of the show. As the show moved the main characters into adulthood, Shipp realized that there would be less for him to do and asked to leave. Mitch then died when he dropped an ice cream cone while driving.
Anyone even slightly familiar with Busy Philipps’ interview style knows that she isn’t shy about telling the truth about behind the scenes conflict.
She spilled the beans that Katie Holmes didn’t speak to her former Dawson’s castmates once she married Tom Cruise, once disliked her Freaks and Geeks costar James Franco, and also hates her former Dawson’s costar Chad Michael Murray.
What exactly transpired between the two isn’t publicly known, but Philipps has discussed disliking Murray from the moment they met in multiple interviews, including a cast appearance at the Paley Center in 2009.
Philipps met Murray on the plane from California to North Carolina when both of them joined the show for the college years. She described him with some choice words that we can’t reprint and explained that she wasn’t worried about burning any bridges there.
Dawson’s Creek saw controversy before the show even began airing. In addition to some executives and reporters wondering if teenagers really use the five-syllable words that Dawson and Joey pulled out in the pilot, sponsors disliked the direction the show planned to take.
Procter and Gamble wasn’t just a sponsor for the show, but also a production partner, with the company receiving a producing credit for their financing of the show. Just before the series aired, the company pulled their sponsorship after hearing about storylines on the way.
Sponsorship for the series disappeared as a result of the mature content in the first few episodes of the series. The pilot episode featured a discussion about just how often Dawson Leery “walked his dog,” and later in the season, Pacey Witter had an affair with a teacher.
However, audiences today wouldn’t bat an eye at what was considered scandalous for Dawson’s Creek.
Though TV audiences know Dawson’s Creek for being one of the teen dramas that put the WB on the map, it didn’t always belong to the network.
Instead, FOX originally purchased the series for a development deal with Kevin Williamson and producer Paul Stupin, but executives changed their minds. Several of the writers for the series reunited at the ATX festival in 2015, and Stupin shared the story with reporters.
It was Stupin who brought the finished pilot script to the network. Though the deal was supposedly done, by the time Christmas break rolled around, the network had decided that it didn’t want the show.
FOX had difficulty keeping Party of Five on the air at the time and decided they didn’t want another teen drama. Stupin recalled the FOX executives wondering if teens really “talked like that” as well.
When Kevin Williamson returned to pen the series finale and its time jump, one character was noticeably absent: Audrey. Busy Philipps didn’t like that one bit.
Philipps starred in seasons five and six as the college roommate of Joey Potter, but when WB asked for a special finale by Williamson, he didn’t include her in the script, instead only sticking with the characters he brought to the show.
Despite the outcome, Philipps’ experience on Dawson’s Creek wasn’t all bad-- she met her best friend Michelle Williams while working on the show. The two remain close and are godmothers of each other’s children.
Paula Cole’s “I Don’t Want To Wait” remains forever linked to the show, but it almost wasn’t the theme song.
Instead, Alanis Morissette’s “Hand In My Pocket” nabbed that honor. The only problem was that the rights to broadcast it were beyond the show’s budget. Producers happened to hear Cole’s song used for a network promo and asked if it was within the budget instead. It was.
When it came to the DVD releases, though, fans purchasing the discs noticed that the music heard there wasn’t what played in the original broadcast.
The music chosen for the broadcast was also too expensive to license for the DVDs. Even for international broadcasts of the show, the music was changed to less expensive options.
A lot of actors claim difficulty seeing themselves on screen, and for Dawson himself, that’s also the case. He refuses to watch the series beyond a certain point.
While the series was on the air, Van Der Beek watched up to season four, but he found the seasons beyond that, which included the characters headed off to university settings, “stressful.” In fact, he hasn’t seen the show past that season at all, including the much talked about series finale.
When Van Der Beek met his future wife, it was a plus that she wasn't familiar with the series. The actor didn’t have to worry about her associating him with a teenage love triangle that still makes adults argue 20 years later, his crying meme, or that ill-conceived relationship with a woman named Eve.
When the series introduced Jack McPhee, he immediately became a new love interest for Joey Potter and a roadblock to the star-crossed Dawson and Joey. He later revealed himself to be gay.
Jack’s storylines were historic for teen television. The writers handled him coming out with sincerity that came from some of their own experiences.
Kerr Smith then participated in the first gay kiss in prime time television. Much of the audience had no idea that Jack was gay when he was introduced, and that’s exactly how Kevin Williamson wanted it.
Williamson, in a DVD commentary track, revealed that he planned for Jack to be gay when the writers originally discussed the character, but told no one because he didn’t want the writing or the actor’s performance bogged down with stereotypes or preconceived notions.
There was no pushback from the network over the decision either. The only note given to the writers was to make sure Jack wasn’t vilified for dating Joey before revealing how he felt.
As Dawson’s Creek neared its end, the writers’ room drafted “Joey Potter and the Capeside Redemption” as the finale, but it ended up not being the series finale at all.
Instead, the episode aired as the penultimate chapter in the lives of the kids from the Creek, and Kevin Williamson wrote his own series finale special.
At that point Williamson had already left the series to focus on another project, but executives at WB weren’t satisfied with the finale they had. WB then reached out to Williamson to ask him to write the finale.
Once Williamson agreed to return, WB gave him nearly free rein to do what he wanted to wrap up loose ends, including jumping the story ahead five years, killing off a major character, and allowing Joey to make her final decision between Dawson and Pacey.
Not long after the series ended, there were rumors that Michelle Williams distanced herself from many of the cast members on the show, especially Katie Holmes, but those rumors seem to be rooted in the actress’ insecurities.
When speaking with Huffington Post in 2012, Williams explained that there were times when the show focused primarily on the love triangle of Dawson-Joey-Pacey so much that there would be less pages for her in the script. A sudden decrease in her workload left her feeling insecure and wondering what she did wrong.
It was Van Der Beek who helped Williams realize that with less of an attachment to the show, she would have the easiest time transitioning into other projects. A few Oscar, Golden Globe, and BAFTA Award nominations later, and he was right.
Jack became a fan favorite and played a large part in the series finale, becoming the guardian of Jen’s little girl, but Kerr Smith disliked one major thing about his character’s final story.
The series closed on Jack’s story by revealing his relationship with Pacey’s big brother Doug. Pacey spent much of the early years of the show teasing Doug for being gay, though his brother always protested. Smith saw the decision to pair the two as something of a cop out -- the only two gay men in Capeside happened to wind up together.
It also raised questions for the audience as to whether Pacey had genuinely believed that his brother was gay all along, or if the constant ribbing led the writers to make the decision in the end.
Every studio and network wants to know if they have the perfect lead actor in a role. Sometimes, the executives with the final say don’t have the same actors in mind as the showrunner, as was the case with Dawson’s Creek.
Though most of the cast was set, the show still needed its Dawson when Sony flew James Van Der Beek out to test in front of several executives. “I don’t know. He didn’t walk in the room like a star,” was the feedback from the audition, according to Van Der Beek himself.
Van Der Beek wasn’t the only star with a rough audition-- Joshua Jackson actually put people to sleep.
Like Van Der Beek, Jackson flew out to audition for the role of Pacey in front of Sony studio executives. Also like Van Der Beek, there were some who didn’t think he made a great impression, though instead of vocalizing it, they fell asleep. One executive in particular slept so hard he began snoring during Jackson’s audition.
Jackson had already been put through the wringer for the series-- he had auditioned for both Dawson and Pacey by that point. He was sure it was his last chance and that the part would go to someone else. Jackson, however, heard not long after that he had nabbed the role. Maybe that executive didn’t get a vote.
The term “jumping the shark” comes from Happy Days employing a stunt that indicated the show overstayed its welcome. James Van Der Beek knows exactly when Dawson’s Creek did just that.
Van Der Beek appeared on Bravo’s Watch What Happens Live With Andy Cohen earlier this year and decided the series had its shark jumping moment pretty early on-- in season two.
He declared the show began going downhill “when I had to wear a wire to implicate Joey’s father for a drug deal gone bad." This means he thinks the series wasted its next four years.
Of course, Van Der Beek did get the scenario a little wrong. It was Joey who wore the wire, getting her own father to confess, as she tried to prove Dawson wrong. The decision strained their friendship almost beyond repair and pushed the show in a whole new direction-- toward Joey and Pacey.
Did you learn something new about what went on behind the scenes of Dawson’s Creek? Did we leave out details everyone should know? Tell us in the comments! |
// Cat is a subclass of the Animal superclass
public class Cat extends Animal {
    // makeSound() overrides Animal's makeSound() method
    @Override
    public void makeSound() {
        // Prints the cat's sound when called
        System.out.println("the cat does meow!");
    }
    // Name of the cat's owner
    public String owner = "Alice";
}
/**
* Creates a backup of the given file or folder by renaming it. The file handle returned is for the renamed backup
* folder. If the file or folder cannot be backed up (i.e. renamed), this method will return null.
*
* @param fileOrFolder the file or folder to rename as a backup
* @return File
* @throws MojoExecutionException thrown if the backup cannot be created because of an error
*/
protected File backup(File fileOrFolder) throws MojoExecutionException {
File backupFileOrFolder = null;
if ((fileOrFolder != null) && fileOrFolder.exists()) {
String suffix = "";
int count = 0;
while ((backupFileOrFolder =
new File( fileOrFolder.getParentFile(), fileOrFolder.getName() + ".bak" + suffix )).exists()) {
suffix = (++count) + "";
}
if (!fileOrFolder.renameTo( backupFileOrFolder )) {
throw new MojoExecutionException(
"Unable to create backup of existing snapshot item: " + fileOrFolder.getName() );
}
}
return backupFileOrFolder;
} |
// Repo: tyler-cai-microsoft/FluidFramework, file: packages/dds/tree/src/flatAttachTree.ts
/*!
* Copyright (c) Microsoft Corporation and contributors. All rights reserved.
* Licensed under the MIT License.
*/
/**
* Allows multiple marks to compete for the same relative sibling.
* The race is represented as an array of ranked lanes where all of the content in the lane i (race[i])
* should appear before all of the contents in lane k for i \< k.
*
* ```
* ============
* A
* +-B
* +-C
* [A<B<C]
* ============
* C
* B-+
* A-+
* [A>B>C]
* ============
* A
* +-B <- 1st insert
* +-C <- 2nd insert
* [A
* {
* <
* [<B]
* [<C]
* }
* ]
* ============
* C
* A-+ <- 1st insert
* B-+ <- 2nd insert
* [
* {
* >
* [A>]
* [B>]
* }
* C]
* ============
* A D
* +---C
* B-+
* [A
* {
* <
* [B>]
* [<C]
* }
* D]
* ============
* A D
* B---+
 * +-C
* [A
* {
* >
* [B>]
* [<C]
* }
* D]
* ============
* A E
* +---C
* B-+-D
* [A
* {
* <
* [B>]
* [<C <D]
* }
* E]
* ============
* A E
* C---+
* B-+-D
* [A
* {
* >
* [B> C>]
* [<D]
* }
* E]
* ============
* ```
*
* This information is needed in original changes to produce a merge outcome with the correct ordering
* of the entries in the race vs other entries in concurrent edits targeting the same index and sibling.
* For example if the original edit includes the insertion of:
* - node X before A with tiebreak LastToFirst
* - node Z before A with tiebreak FirstToLast
* And a concurrent edit inserts:
* - node Y before B
* Then we know that the outcome should be X Y Z B.
* Had the inserts for X and Y been represented as adjacent outside a Race then it would have
* looked as though X had been inserted relative to Z. Since Z belongs after Y in the merge then
* X would have landed right before Z yielding Y X Z B.
*/
|
Assessing Pigment-Based Phytoplankton Community Distributions in the Red Sea

Pigment-based phytoplankton community composition and primary production were investigated for the first time in the Red Sea in February-April 2015 to demonstrate how the strong south-to-north environmental gradients determine phytoplankton community structure in Red Sea offshore regions (along the central axis). Taxonomic pigments were used as size-group markers of pico-, nano- and microphytoplankton. Phytoplankton primary production rates associated with the three phytoplankton size groups (pico-, nano- and microphytoplankton) were estimated using a bio-optical model. Picophytoplankton (Synechococcus and Prochlorococcus spp.) and nanophytoplankton (prymnesiophytes and pelagophytes) were the dominant size groups and contributed 49% and 38%, respectively, of the phytoplankton biomass. Microphytoplankton (diatoms) contributed 13% of the phytoplankton biomass within the productive layer (1.5 Zeu). Sub-basin and mesoscale structures (a cyclonic eddy and mixing) were exceptions to this general trend. In the southern Red Sea, diatoms and picophytoplankton contributed 27% and 31% of the phytoplankton biomass, respectively. This led to higher primary production rates (430 ± 50 mgC m-2 d-1) in this region than in the central Red Sea (CRS) and northern Red Sea (NRS). The cyclonic eddy contained the highest microphytoplankton proportion (45% of TChla) and the lowest picophytoplankton contribution (17% of TChla), while adjacent areas were dominated by pico- and nanophytoplankton. We estimated that the cyclonic eddy is an area of enhanced primary production, with rates up to twice those of the central part of the basin. During the mixing of the water column in the extreme north of the basin, we observed the highest integrated TChla (40 mg m-2) and total primary production rate (640 mgC m-2 d-1), associated with the highest nanophytoplankton contribution (57% of TChla). Microphytoplankton were a major contributor to total primary production (54%) in the cyclonic eddy. The contribution of picophytoplankton (Synechococcus and Prochlorococcus spp.) reached maximum values (49%) in the central Red Sea. Nanophytoplankton seem to provide a ubiquitous, substantial contribution (30-56%). Our results provide new insights into the spatial distribution and structure of phytoplankton groups, and contribute to the understanding and quantification of the carbon cycle in the Red Sea through estimates of primary production associated with pico-, nano- and microphytoplankton.
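Size-class fractions of this kind are usually derived from weighted sums of diagnostic pigments (the approach introduced by Vidussi, Uitz and co-workers). The sketch below shows the general form only; the weights are placeholders labelled as assumptions, since the paper's exact coefficients are not reproduced here.

# Placeholder diagnostic-pigment weights (assumed, not the study's coefficients).
MICRO = {"fucoxanthin": 1.41, "peridinin": 1.41}
NANO = {"hex_fucoxanthin": 1.27, "alloxanthin": 0.60, "but_fucoxanthin": 0.35}
PICO = {"zeaxanthin": 0.86, "total_chl_b": 1.01}

def size_fractions(pigments):
    """Fraction of the weighted diagnostic-pigment pool in each size class."""
    sums = {
        name: sum(w * pigments.get(p, 0.0) for p, w in weights.items())
        for name, weights in (("micro", MICRO), ("nano", NANO), ("pico", PICO))
    }
    total = sum(sums.values())
    return {name: s / total for name, s in sums.items()}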
Indoctrination: 4-H Sells Its Soul to Monsanto and the US Soybean Council
If you’re looking for wholesome activities for your kids, you might be considering something like 4-H. Geared towards an agricultural lifestyle, the 4-H Youth Development Organization seems like a great way for your child to hang out with other kids who are interested in more than the latest TV reality show. Doesn’t this sound great?
4-H is the nation’s largest youth development and empowerment organization, reaching more than 7 million 4-H youth in urban neighborhoods, suburban schoolyards and rural farming communities. Fueled by university-backed curriculum, 4-H’ers engage in hands-on learning activities in the areas of science, healthy living, and food security. (source)
Unfortunately, 4-H has sold out and become irrevocably tainted by its corporate donors. The roster reads like a Who's Who list of evil corporations, including the ever-evil Monsanto, the eugenicists extraordinaire at the Bill & Melinda Gates Foundation, the United Soybean Board (don't forget that 90% of the soybeans produced in America are GMO), Coca-Cola (who spent over $1.5 million to fight against the labeling of GMOs in California), Big Biotech buddies Cargill and DuPont, and Big Pharma representative, Pfizer.
Dr. Joseph Mercola revealed the appalling corporate connections in an article on his site. He wrote eloquently:
The organization is extremely influential to children, impacting their intellectual and emotional development through their numerous programs and clubs. Unfortunately, Monsanto is using its partnership with 4-H as a vehicle to worm its way into your child’s mind in order to influence her developing beliefs and values. Children are like little sponges, soaking up everything they see and hear, which makes them particularly vulnerable to being sucked in by propaganda. And the effects could be life-long—at least they’re intended to be. Indeed you’d be hard-pressed to convince an adult, who from childhood was taught the merits of genetically engineered foods, that there’s anything wrong with such alterations of the food supply. If your child is involved in 4-H, it would be wise to monitor the messages she’s getting, given this organization’s corporate sponsors and alliances. 4-H is really the perfect vehicle for Big Ag to manipulate an entire generation, using tactics not that different from the youth indoctrination strategies employed by political extremists in order to gain children’s trust and then “groom” them however they wish. Think about it—what better way to control the future of our food system than to brainwash 6.8 million impressionable youth into believing that genetically modified organisms (GMOs) are safe and beneficial, if not the answer to all the problems of the world? (source)
Obviously, when an organization is given millions of dollars, they will provide the messages that the corporate sponsors want them to provide. Would they speak out against GMOs when biotech is the one that funds them? Of course not. A quick visit to their website takes you to the “AgriScience” page, where it blatantly says, “The 4-H AgriScience curriculum and supporting programming has been created to cultivate the emerging study of biotechnology and business/economics in the agriculture industry.” (emphasis mine)
The page goes on to discuss the involvement of the US Soybean Council (remember, these aren’t organic soybeans we’re talking about!) with the indoctrination…oops – I mean, curriculum.
National 4-H Council partnered with the United Soybean Board (USB) and five state 4-H programs to conduct AgriScience/Biotechnology programs in ten urban areas of Delaware, Illinois, Indiana, Missouri, and Ohio. In addition to providing an introduction to AgriScience/Biotechnology, youth learned concepts such as Agricultural Literacy, Global Food Security, Sustainability, and about the variety of career paths associated with the field. A total of 82 teenagers were extensively trained, and in turn, reached 620 youth in afterschool and summer programs during 2012. Each of the sites involved biotechnology partners from agribusinesses, commodity groups, and universities. (source)
For those who couldn’t attend and be brainwashed in person, PowerPoint presentations are available as PDFs.
But it gets even worse. Dr. Mercola’s article also points out that Public Enemy Number 1, Monsanto, is actually training 4-H volunteers!
Pro-GMO propaganda would be easy to weave into 4-H’s program since they already occupy the role of teaching children the art of farming, and in their position of authority, children would never question it. Monsanto is now also training tens of thousands of 4-H volunteers, according to an article in 4-Traders: “In 2007, Monsanto expanded its 4-H volunteerism support by funding state and regional development. More than 52,600 volunteers have attended Monsanto-supported forums and training events in 50 states, three US territories and four Extension regional forums.” (source)
Monsanto also boasts of the connection on their website. Read between the lines and you’ll find this to be a rather chilling message.
The motto of the world’s largest youth organization is “To Make the Best Better.” This is honored by Monsanto as we are proud to share and support the motto of 6.8 million youth, aged 5-21, who are involved in 4-H programs annually. 4-H can be found in every county in every U.S. state, as well as the District of Columbia, Puerto Rico and more than 80 countries around the world. As 4-H programs continue to develop youth to reach their fullest potential through developing life skills, learning by doing, and utilizing the knowledge of the land-grant university system, we at Monsanto directly support the program in many ways, including the 4-H Volunteer Initiative, which attracts volunteers who coordinate local community clubs and help to plan and conduct local, regional, state and national 4-H events. (source)
That’s right…Monsanto is directly influencing the developing minds of 6.8 million kids per year. Let that terrifying figure sink in.
If your kids are already involved in 4-H, please take a close look at what they’re being taught, because damage control may well be in order.
Biotech will stop at nothing to dominate American agriculture. From their website called GMOAnswers, which is nothing but a compendium of disinformation, to the use of cartoons to make their ways of farming seem normal and acceptable, this is just another terrifying effort to control the minds of the future farmers of this country so that they believe the benefits of toxic farming methods outweigh the horrific damage caused by it, as was tragically seen on the island petri dish of Molokai, Hawaii.
The corporate sponsorship of 4-H is a match made in hell. A once-positive organization for kids has sold its soul to corporate sponsors, and the insidious brainwashing may well put GMO-tainted food on every plate in America within the next 20 years. If you think that the cancer epidemic is outrageously high now, just wait. The USDA keeps approving the use of ever-more-toxic chemicals, and through 4-H propaganda, an entire generation of future farmers is being taught that this is the best way to feed the world.
Thank you to Survival for Blondes! |
class BiThemeAccess:
    __instance = None

    @staticmethod
    def instance():
        # Lazily create and reuse the shared BiTheme singleton.
        if BiThemeAccess.__instance is None:
            BiThemeAccess.__instance = BiTheme()
        return BiThemeAccess.__instance
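A short usage note for the accessor above, assuming BiTheme is importable in this scope: callers obtain the theme through the class rather than constructing BiTheme directly, so repeated calls return the same object.

theme_a = BiThemeAccess.instance()
theme_b = BiThemeAccess.instance()
assert theme_a is theme_b  # both names point at the single shared BiTheme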
# Repo: Atwinenickson/lendsuphumanresourcemanagement, file: contracts/views.py
from django.contrib import messages
from django.db import IntegrityError
from django.http import HttpResponseRedirect
from django.shortcuts import render
from django.urls import reverse
from contracts.models import Contract, Penalty, Offence, Termination
from contracts.selectors import get_contract, get_terminated_contracts, get_active_contracts, get_employee_contracts, \
get_all_offences, get_all_penalties, get_penalty, get_offence, get_all_terminations, get_termination
from contracts.services import activate, terminate
from employees.procedures import send_notification
from employees.selectors import get_employee, get_active_employees
from ems_admin.decorators import log_activity
from ems_auth.decorators import hr_required
from notification.services import send_notification_generic
from organisation_details.selectors import get_position, get_all_positions
@hr_required
@log_activity
def manage_job_contracts(request):
if request.POST and request.FILES:
reference_number = request.POST.get('reference_number')
position_id = request.POST.get('position')
employee_id = request.POST.get('employee')
effective_date = request.POST.get('effective_date')
expiry_date = request.POST.get('expiry_date')
risk = request.POST.get('risk')
contract_type = request.POST.get('contract_type')
document = request.FILES.get('document')
position = get_position(position_id)
employee = get_employee(employee_id)
try:
new_contract = Contract.objects.create(
reference_number=reference_number,
position=position,
employee=employee,
effective_date=effective_date,
expiry_date=expiry_date,
risk=risk,
type=contract_type,
document=document
)
except IntegrityError:
messages.warning(request, "The reference number needs to be unique")
return HttpResponseRedirect(reverse(manage_job_contracts))
positions = get_all_positions()
employees = get_active_employees()
contracts = get_active_contracts()
context = {
"contracts_page": "active",
"employees": employees,
"positions": positions,
"contracts": contracts,
}
return render(request, 'contracts/manage_job_contracts.html', context)
@log_activity
def terminate_contract(request, contract_id):
contract = get_contract(contract_id)
terminate(contract)
return HttpResponseRedirect(reverse(manage_job_contracts))
@hr_required
@log_activity
def edit_contract_page(request, contract_id):
if request.POST and request.FILES:
reference_number = request.POST.get('reference_number')
position_id = request.POST.get('position')
employee_id = request.POST.get('employee')
effective_date = request.POST.get('effective_date')
expiry_date = request.POST.get('expiry_date')
risk = request.POST.get('risk')
document = request.FILES.get('document')
position = get_position(position_id)
employee = get_employee(employee_id)
contract_list = Contract.objects.filter(id=contract_id)
contract_list.update(
reference_number=reference_number,
position=position,
employee=employee,
effective_date=effective_date,
expiry_date=expiry_date,
risk=risk,
document=document
)
return HttpResponseRedirect(reverse(manage_job_contracts))
contract = get_contract(contract_id)
positions = get_all_positions()
employees = get_active_employees()
context = {
"contracts_page": "active",
"contract": contract,
"employees": employees,
"positions": positions,
}
return render(request, 'contracts/edit_contract.html', context)
@log_activity
def terminated_contracts_page(request):
terminated_contracts = get_terminated_contracts()
context = {
"contracts_page": "active",
"terminated_contracts": terminated_contracts
}
return render(request, 'contracts/terminated_contracts.html', context)
@hr_required
@log_activity
def activate_contract(request, contract_id):
contract = get_contract(contract_id)
activate(contract)
return HttpResponseRedirect(reverse(manage_job_contracts))
@hr_required
@log_activity
def user_contracts_page(request):
user = request.user
employee = user.solitonuser.employee
contracts = get_employee_contracts(employee)
context = {
"contracts_page": "active",
"contracts": contracts,
}
return render(request, 'contracts/user_contracts.html', context)
@hr_required
@log_activity
def manage_offences_page(request):
if request.POST:
name = request.POST.get('name')
employee_id = request.POST.get('employee')
penalty_id = request.POST.get('penalty')
resolved = request.POST.get('resolved')
description = request.POST.get('description')
penalty = get_penalty(penalty_id)
employee = get_employee(employee_id)
try:
new_offence = Offence.objects.create(
name=name,
penalty=penalty,
employee=employee,
resolved=resolved,
description=description
)
send_notification_generic(employee, "Offence Recorded", "You have a new offence recorded")
except IntegrityError:
messages.warning(request, "Integrity problems with trying to add a new offence")
return HttpResponseRedirect(reverse(manage_offences_page))
offences = get_all_offences()
employees = get_active_employees()
penalties = get_all_penalties()
context = {
"contracts_page": "active",
"employees": employees,
"offences": offences,
"penalties": penalties
}
return render(request, 'contracts/manage_offences.html', context)
@hr_required
@log_activity
def edit_offence_page(request, offence_id):
if request.POST:
name = request.POST.get('name')
employee_id = request.POST.get('employee')
penalty_id = request.POST.get('penalty')
resolved = request.POST.get('resolved')
description = request.POST.get('description')
penalty = get_penalty(penalty_id)
employee = get_employee(employee_id)
offence_list = Offence.objects.filter(id=offence_id)
offence_list.update(
name=name,
penalty=penalty,
employee=employee,
resolved=resolved,
description=description
)
return HttpResponseRedirect(reverse(manage_offences_page))
offence = get_offence(offence_id)
offences = get_all_offences()
employees = get_active_employees()
penalties = get_all_penalties()
context = {
"contracts_page": "active",
"employees": employees,
"offences": offences,
"penalties": penalties,
"offence": offence
}
return render(request, 'contracts/edit_offence.html', context)
@hr_required
@log_activity
def delete_offence(request, offence_id):
offence = get_offence(offence_id)
offence.delete()
return HttpResponseRedirect(reverse(manage_offences_page))
@hr_required
@log_activity
def manage_penalties_page(request):
if request.POST:
name = request.POST.get('name')
description = request.POST.get('description')
try:
new_penalty = Penalty.objects.create(
name=name,
description=description
)
except IntegrityError:
messages.warning(request, "There integrity problems while adding new penalty")
return HttpResponseRedirect(reverse(manage_penalties_page))
penalties = get_all_penalties()
context = {
"contracts_page": "active",
"penalties": penalties,
}
return render(request, 'contracts/manage_penalties.html', context)
@hr_required
@log_activity
def edit_penalty_page(request, penalty_id):
if request.POST:
name = request.POST.get('name')
description = request.POST.get('description')
penalty_list = Penalty.objects.filter(id=penalty_id)
penalty_list.update(
name=name,
description=description
)
return HttpResponseRedirect(reverse(manage_penalties_page))
penalty = get_penalty(penalty_id)
context = {
"contracts_page": "active",
"penalty": penalty
}
return render(request, 'contracts/edit_penalty.html', context)
@hr_required
@log_activity
def delete_penalty(request, penalty_id):
penalty = get_penalty(penalty_id)
penalty.delete()
return HttpResponseRedirect(reverse(manage_penalties_page))
@hr_required
@log_activity
def manage_terminations_page(request):
    if request.POST:
        employee_id = request.POST.get('employee')
        termination_type = request.POST.get('type')  # renamed to avoid shadowing the builtin
        termination_letter = request.FILES.get('termination_letter')
        clearance_form = request.FILES.get('clearance_form')
        description = request.POST.get('description')
        employee = get_employee(employee_id)
        try:
            new_termination = Termination.objects.create(
                employee=employee,
                type=termination_type,
                termination_letter=termination_letter,
                clearance_form=clearance_form,
                description=description
            )
            send_notification_generic(employee, "Termination Recorded", "Your termination record has been created")
        except IntegrityError:
            messages.warning(request, "Integrity error while trying to add a new termination")
        return HttpResponseRedirect(reverse(manage_terminations_page))
    terminations = get_all_terminations()
    employees = get_active_employees()
    context = {
        "contracts_page": "active",
        "employees": employees,
        "terminations": terminations,
    }
    return render(request, 'contracts/manage_terminations.html', context)
@hr_required
@log_activity
def edit_termination_page(request, termination_id):
    if request.POST and request.FILES:
        employee_id = request.POST.get('employee')
        termination_type = request.POST.get('type')  # renamed to avoid shadowing the builtin
        termination_letter = request.FILES.get('termination_letter')
        clearance_form = request.FILES.get('clearance_form')
        description = request.POST.get('description')
        employee = get_employee(employee_id)
        termination_list = Termination.objects.filter(id=termination_id)
        try:
            termination_list.update(
                employee=employee,
                type=termination_type,
                termination_letter=termination_letter,
                clearance_form=clearance_form,
                description=description
            )
            return HttpResponseRedirect(reverse(manage_terminations_page))
        except IntegrityError:
            messages.warning(request, "Integrity problems while trying to update the termination")
    termination = get_termination(termination_id)
    terminations = get_all_terminations()
    employees = get_active_employees()
    context = {
        "contracts_page": "active",
        "termination": termination,
        "terminations": terminations,
        "employees": employees
    }
    return render(request, 'contracts/edit_termination.html', context)
@hr_required
@log_activity
def delete_termination(request, termination_id):
    termination = get_termination(termination_id)
    termination.delete()
    return HttpResponseRedirect(reverse(manage_terminations_page))
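

# ---------------------------------------------------------------------------
# Illustrative sketch only: the @hr_required guard used throughout this module
# is imported from elsewhere in the project. A typical implementation of such
# a decorator looks like the following; the group name 'HR' and the function
# name are assumptions, not this project's actual code.
from functools import wraps

from django.http import HttpResponseForbidden


def hr_group_required_sketch(view):
    """Allow access only to users in an 'HR' group (illustrative only)."""
    @wraps(view)
    def wrapper(request, *args, **kwargs):
        if not request.user.groups.filter(name="HR").exists():
            return HttpResponseForbidden("HR access required")
        return view(request, *args, **kwargs)
    return wrapper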
|
/* eslint-disable @typescript-eslint/no-var-requires */
import * as fs from 'fs-extra';
import { Injectable, Logger, NotFoundException } from '@nestjs/common';
import { v4 as uuidv4 } from 'uuid';
import { Model } from 'mongoose';
import {
  combineLatest,
  concatMap,
  concatMapTo,
  from,
  map,
  mapTo,
  Observable,
  of,
  tap,
} from 'rxjs';
import {
  VideoDimension,
  VideoDimensionAddDto,
  VideoDimensionType,
  VideoUpdateDto,
} from '@dark-rush-photography/shared/types';
import { Media, VideoResolution } from '@dark-rush-photography/api/types';
import { DocumentModel } from '../schema/document.schema';
import {
  downloadBlobAsBuffer$,
  getBlobPathWithDimension,
  getBlobPath,
  uploadStreamToBlob$,
  exifVideo$,
  downloadBlobToFile$,
  deleteBlob$,
  resizeVideo$,
  findVideoResolution$,
} from '@dark-rush-photography/api/util';
import { validateEntityFound } from '../entities/entity-validation.functions';
import {
  validateVideoDimensionNotAlreadyExists,
  validateVideoDocumentModelFound,
} from '../content/video-validation.functions';
import { loadVideoDimension } from '../content/video-dimension.functions';
import { ConfigProvider } from './config.provider';

@Injectable()
export class VideoDimensionProvider {
  private readonly logger: Logger;

  constructor(private readonly configProvider: ConfigProvider) {
    this.logger = new Logger(VideoDimensionProvider.name);
  }

  add$(
    id: string,
    entityId: string,
    videoId: string,
    videoDimensionAdd: VideoDimensionAddDto,
    entityModel: Model<DocumentModel>
  ): Observable<VideoDimension> {
    return from(entityModel.findById(entityId)).pipe(
      map(validateEntityFound),
      map((documentModel) =>
        validateVideoDocumentModelFound(videoId, documentModel)
      ),
      map((documentModel) =>
        validateVideoDimensionNotAlreadyExists(
          videoId,
          videoDimensionAdd,
          documentModel
        )
      ),
      concatMap((documentModel) => {
        return from(
          entityModel.findByIdAndUpdate(entityId, {
            videoDimensions: [
              ...documentModel.videoDimensions,
              {
                id,
                entityId,
                videoId,
                type: videoDimensionAdd.type,
                resolution: videoDimensionAdd.resolution,
              },
            ],
          })
        );
      }),
      concatMapTo(this.findOne$(id, entityId, entityModel))
    );
  }

  updateBlobPathAndExif$(
    videoUpdate: VideoUpdateDto,
    videoMedia: Media,
    videoUpdateMedia: Media,
    videoDimension: VideoDimension
  ): Observable<boolean> {
    return downloadBlobToFile$(
      this.configProvider.getConnectionStringFromMediaState(videoMedia.state),
      getBlobPathWithDimension(videoMedia, videoDimension.type),
      videoMedia.fileName
    ).pipe(
      tap(() =>
        this.logger.debug(
          `Exif video dimension ${videoDimension.type} with update`
        )
      ),
      concatMap((filePath) =>
        exifVideo$(
          filePath,
          this.configProvider.getVideoArtistExif(new Date().getFullYear()),
          {
            title: videoUpdate.title,
            description: videoUpdate.description,
          }
        )
      ),
      tap(() =>
        this.logger.debug(
          `Upload video dimension ${videoDimension.type} to new blob path`
        )
      ),
      concatMap((filePath) =>
        uploadStreamToBlob$(
          this.configProvider.getConnectionStringFromMediaState(
            videoUpdateMedia.state
          ),
          fs.createReadStream(filePath),
          getBlobPathWithDimension(videoUpdateMedia, videoDimension.type)
        )
      ),
      tap(() =>
        this.logger.debug(
          `Remove video dimension ${videoDimension.type} at previous blob path`
        )
      ),
      concatMap(() =>
        deleteBlob$(
          this.configProvider.getConnectionStringFromMediaState(
            videoMedia.state
          ),
          getBlobPathWithDimension(videoMedia, videoDimension.type)
        )
      ),
      mapTo(true)
    );
  }

  resize$(
    media: Media,
    videoResolution: VideoResolution,
    entityModel: Model<DocumentModel>
  ): Observable<VideoDimension> {
    const id = uuidv4();
    Logger.log(`Resizing video dimension ${videoResolution.type}`);
    return downloadBlobToFile$(
      this.configProvider.getConnectionStringFromMediaState(media.state),
      getBlobPath(media),
      media.fileName
    ).pipe(
      concatMap((filePath) =>
        resizeVideo$(media.fileName, filePath, videoResolution)
      ),
      concatMap((filePath) =>
        combineLatest([
          of(filePath),
          uploadStreamToBlob$(
            this.configProvider.getConnectionStringFromMediaState(media.state),
            fs.createReadStream(filePath),
            getBlobPathWithDimension(media, videoResolution.type)
          ),
        ])
      ),
      tap(() => Logger.log(`Finding video resolution ${videoResolution.type}`)),
      concatMap(([filePath]) => findVideoResolution$(filePath)),
      tap(() => Logger.log(`Adding video dimension ${videoResolution.type}`)),
      concatMap((resolution) =>
        this.add$(
          id,
          media.entityId,
          media.id,
          {
            type: videoResolution.type,
            resolution,
          },
          entityModel
        )
      ),
      tap(() =>
        Logger.log(`Resizing video dimension ${videoResolution.type} complete`)
      )
    );
  }

  findOne$(
    id: string,
    entityId: string,
    entityModel: Model<DocumentModel>
  ): Observable<VideoDimension> {
    return from(entityModel.findById(entityId)).pipe(
      map(validateEntityFound),
      map((documentModel) => {
        const foundVideoDimension = documentModel.videoDimensions.find(
          (videoDimension) => videoDimension.id === id
        );
        if (!foundVideoDimension)
          throw new NotFoundException('Could not find video dimension');
        return loadVideoDimension(foundVideoDimension);
      })
    );
  }

  findDataUri$(
    media: Media,
    videoDimensionType: VideoDimensionType
  ): Observable<string> {
    return downloadBlobAsBuffer$(
      this.configProvider.getConnectionStringFromMediaState(media.state),
      getBlobPathWithDimension(media, videoDimensionType)
    ).pipe(
      map((buffer) => {
        // Lazily require the datauri parser (hence the eslint-disable above).
        const DatauriParser = require('datauri/parser');
        const parser = new DatauriParser();
        return parser.format('.mp4', buffer).content;
      })
    );
  }
}
|
#!/usr/bin/env python3
# Print only the first two lines of the file.
LINES_TO_PRINT = 2

with open('0902ex.txt') as f:
    for cnt, each_line in enumerate(f):
        if cnt >= LINES_TO_PRINT:
            break
        print(each_line, end='')
|
The Australian IT employment market has been claimed to be at its strongest ever, and is expected to strengthen further over the next 12 months, according to Chris Le Coic, CEO of Australian online employment portal CareersSites.
With the resources boom, an increase in local and foreign investment and the appearance of constant innovations and improvements in the Australian market, job vacancies are far outnumbering the number of qualified IT workers, says Le Coic.
His speculations are derived from a study of national recruiter surveys conducted over the past few months, in which almost one in two employers in the IT industry revealed plans to increase permanent staff levels during the next quarter.
Recruitment has moved from being an employer’s market to becoming far more candidate-centric, Le Coic says, adding that recruiters can no longer afford to be as “dismissive of candidates” as in the past, and need to consider more innovative ways of recruiting people.
Additionally, the competition for skilled candidates is forcing recruiters to make job offers in increasingly short periods of time, he says, noting that recruiters now have to be more efficient in spotting and placing the right candidates in the right jobs.
“Companies used to take up to three weeks to come back with an offer of a second interview ... It is closer to two or three days now, and increasingly, on the spot,” he says.
While shortened interview processes are likely to benefit job hunters, employers may be disadvantaged if the candidates turn out to be unsuitable. Le Coic recommends better organised and structured interviews to ensure a good match between employers and potential recruits.
“It’s just a matter of expediting the process, which is not a bad thing in today’s competitive environment,” he says. |
export enum GuildNsfwLevels {
  DEFAULT,
  EXPLICIT,
  SAFE,
  AGE_RESTRICTED,
} |
Lindokuhle Welemu
Border Bulldogs
Welemu was one of several amateur club players brought into the Border Bulldogs provincial set-up at the start of 2014 after the professional side was declared bankrupt. He was included in their squad for the 2014 Vodacom Cup competition and made his debut in their Round Three match against Kenyan invitational side Simba XV, helping the Border Bulldogs to an 18–17 win, their only victory of the competition. He started three more matches and also came on as a substitute in their match against Boland Cavaliers as the Border Bulldogs finished bottom of the log.
Welemu was retained for their 2014 Currie Cup qualification campaign and he made his debut in the Currie Cup competition in their opening-day 5–52 defeat to Griquas. He made his first Currie Cup start in their next match against the Boland Cavaliers and made a further three appearances from the bench, but the Border Bulldogs lost all six of their matches to qualify to the 2014 Currie Cup First Division. Welemu made one appearance off the bench in the First Division in a 6–27 defeat to the Boland Cavaliers as the Bulldogs finished bottom of the log with a single win all season.
Welemu returned in the 2015 Vodacom Cup, coming on as a reserve in their opening match of the season against the Sharks XV. |
package com.example.sphinx.mix.dagger2;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import javax.inject.Scope;
/**
 * Custom Dagger 2 scope annotation; runtime retention lets the DI graph
 * bind provided instances to a single component's lifecycle.
 *
 * Created by Sphinx on 2017/3/30.
 */
@Scope
@Retention(RetentionPolicy.RUNTIME)
public @interface Dagger2Scope {
}
|
import os
from os.path import normpath


def pyinstaller_datas(cli_args=False):
    """Data files needed to bundle PyWebIO with PyInstaller.

    Returns a list of ``(source, destination)`` tuples, or the equivalent
    ``--add-data`` command-line arguments when ``cli_args`` is True.
    """
    # STATIC_PATH is the module-level path to PyWebIO's static assets.
    datas = [
        (STATIC_PATH, 'pywebio/html'),
        (normpath(STATIC_PATH + '/../platform/tpl'), 'pywebio/platform/tpl')
    ]
    if cli_args:
        args = ''
        for item in datas:
            args += ' --add-data %s%s%s' % (item[0], os.pathsep, item[1])
        return args
    return datas
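

# Illustrative usage (hypothetical build script, not part of this module):
# append the generated --add-data arguments to a PyInstaller command line.
if __name__ == '__main__':
    print('pyinstaller app.py' + pyinstaller_datas(cli_args=True))
|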
export const SET_USER = 'USER:SET_USER';
|
He is the next great Providence College basketball player.
He's probably not going to get all the way there this year, not on a team that features Kris Dunn, but greatness is in his future, right out there over his outstretched hands, just waiting for him to reach out and grab it. If the history of PC basketball long has been written by kids who turned out to be much better than anyone ever thought they were going to be, then Ben Bentil is the latest, this 6-foot-9 sophomore who seems to get better by the week.
Count on it.
On this afternoon, he is sitting in the PC basketball office, where the hallway is a shrine to the school's great basketball story with photos of the players who helped write it.
Earlier in the day, there had been a big announcement of the new state-of-the-art practice facility that the school is going to build, one more symbolic statement that this is a school that's still pointed toward a basketball future and not just looking back at its storied past. But now the afternoon news conference was over and Bentil was talking about the long journey that's taken him to this place, this moment.
And it's a journey, no question about that.
It began back in his native Ghana, where he was the middle of five kids, back when volleyball, not basketball, was his favorite sport. Back when playing college basketball in the United States must have seemed as far away as the stars in the nighttime sky. But he left home for the United States when he was 15 to live with a relative of his mother in Philadelphia, left for the age-old reasons people always have come to America, in search of a better life, a better future.
He went to St. Andrew's, a private school in nearby Wilmington, Del. And it wasn't always easy being a stranger in a strange land, and it wasn't always easy playing a game he didn't really like, not in the beginning anyway.
"It takes time to find out what you love,'' Bentil says quietly.
From the beginning, he was told he had the size and talent to one day play college basketball. But what does that really mean when you're just a kid far from home, just a kid in some strange new country playing some game you weren't all that crazy about in the first place? What was it like when one day a college coach came to see him play and said he was coming back, but he never did, because he thought the kid wasn't good enough?
That became Bentil's motivation.
Still, there was the adjustment to his new country, a huge adjustment. There was the adjustment of learning how to deal with racial inequality. In many ways, there was the adjustment of having to deal with everything.
But he also knew something else, something that, in retrospect, made all the difference.
"It was time to be a man,'' Bentil said.
Certainly, Bentil has become a man on the basketball court, and it's more than that he's very tall with a 235-pound body that easily would fit into an NBA game. He keeps getting better, now making jump shots that he would not have made a year ago. He now has a presence on the court he didn't have a year ago. Maybe more important is the sense that he's getting closer to what he came to this country for in the first place. For opportunity. For the promise of a better life. For all those words on the Statue of Liberty.
"I really appreciate America,'' he says. "In America, there's a lot of opportunity.''
Last Saturday afternoon, when the Friars played Bryant at the Dunk, he was on the bench in the first half because of an ankle sprain he'd suffered the game before. Coach Ed Cooley was trying to get through the game without him. Until Cooley felt he could not, and there was Bentil coming into a game to a huge ovation.
"I heard it and didn't think it was for me,'' he says.
But it was, in this season that is changing his life, this season that is so far from the Ghana of his childhood. He is so much more confident than he was a year ago, a confidence that's made all the difference. And he knows what this game has given him and he knows what this country has given him — that it's giving him the opportunity to have a better life than he could have had back in Ghana.
And every once in a while, when Ben Bentil wakes up in the morning in his dorm room, in those fleeting seconds when he's between sleep and consciousness, it hits him: He is in the United States, with all the promise of his life right in front of him, the next great Providence College basketball player. |
package org.emerjoin.hi.web;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

/**
 * @author <NAME>
 */
public abstract class AbstractTransformable implements Transformable {

    private String markup = "";

    public AbstractTransformable(String html){
        if(html==null||html.isEmpty())
            throw new IllegalArgumentException("Html must not be null or empty");
        this.markup = html;
    }

    protected void validateContent(String content){
        if(content==null||content.isEmpty())
            throw new IllegalArgumentException("Content must not be null or empty");
    }

    public Transformable replaceHtml(String html){
        if(html==null||html.isEmpty())
            throw new IllegalArgumentException("Html must not be null or empty");
        this.markup = html;
        return this;
    }

    public String getHtml(){
        return markup;
    }
}
|
Last week, renowned Canadian film director Atom Egoyan was in Turkey for the first time. Although he had received many invitations from film festivals before, he had preferred not to come. He had legitimate reasons, fears and concerns. ‘Ararat’, the film he shot 13 years ago, had faced intense reaction and repression in Turkey; and could not be screened.
There was a special reason for Egoyan’s visit: He had been asked to be best man at Sera Dink and Eric Nazarian’s wedding. His concerns accompanied his curiosity as he set out towards Turkey. He returned as a host to the country his ancestors had been exiled from. He was surprised, happy and full of emotion. History was being written: “History is not always made by large public statements, but by the conversation between individuals.” We spoke to Egoyan, and discussed the Diaspora and Turkey, and also ‘Ararat’.
This is the first question I have in mind: Why didn’t you come to Turkey before?
I received many invitations from the Istanbul Film Festival. I always asked that the festival send me a letter saying that I can speak freely about the Armenian Genocide and I never got this letter. So I never came. But when I think deeper than that, many Armenians had a resistance to come. I felt that there was a lot of issues that are unresolved in our own minds, which continue to be unresolved. My wife Arsinée Khanjian has been making this trip for many years. I have been incredibly impressed with her openness and yet it was still difficult to understand what she was observing from afar. When the invitation came from Sera Dink and Eric Nazarian’s wedding for us to be best man and maid of honour, I couldn’t say no. I felt this was the perfect opportunity to come. Because I was coming for a family matter, a personal matter. There wasn’t an agenda except celebrating the love of these two extraordinary people. I am very thankful for that. In the days that I spent here, the whole world has opened up. I had heard about it from Arsinée, I was aware that there was a dialogue happening here. But I didn’t understand the nature of it until I arrived.
What is the difference between observing from afar and observing from close-up?
I think for many Armenians who are in the Diaspora, we forget that we have relationships with Turks at all. We forget that because history has cut us off from this culture, and we are frozen in a moment. In a way, we need to be frozen in that moment, because that has become our identity. However, when you are here you understand that there are huge changes and shifts in society. You understand that there are Armenians who have been living with Turks for the past century. There is an organic process occurring where there is a development in this relationship. In the Diaspora, as I said, we are frozen in a moment, and we are also frozen within an agenda. It is part of our formation in the Diaspora that we have to have our host countries acknowledge the history of what happened. But what we saw now, in 2015, is that after the Pope made this strong statement, and after Germany, Austria and Belgium recognized the Genocide, that this pressure is not enough. We have seen that with the Pope statement, Erdoğan has basically said, “It doesn’t matter, I don’t need to listen”. And you realize that this idea of pressuring Turkey from the outside can only go so far. There is now work to be done within the country. As Diasporans, we cannot be involved in that process unless we have a relationship with the community here.
Was there a deeper reason for you to prefer not to come until now?
I did not want to feel small. I have spent so much of my life trying to build an identity, and trying to find a place within my own country. I tried so much to assert who I am. I didn’t want to come back here and feel insignificant. I did not want to come here and feel that all the work we have done in Canada, in the USA, or in France and in Argentina is somehow going to be demolished. In Turkey, I would suddenly be back in a place that I would feel fear; a fear that history has moved on, a fear that everything that I have tried to claim is completely insignificant. But here I saw change, and I met some incredible people.
"When we saw the images in the streets after the Hrant Dink assassination of these large groups of people saying, “We are Hrant! We are Armenians!” In Diaspora, we all understood this to mean that these crowds were saying, “We recognize the Genocide”. We did not understand that these crowds were saying, “We are Hrant, because we want freedom of expression,” and “We are Hrant, because we want to be able to ask questions”.
How could you feel the change in Turkey in such a short period of time you were here?
Because a few things happened to me. I came to this building of the Hrant Dink Foundation and Agos. I saw that we have been here for a really long time. I didn’t feel alien. With Arsinée, we were in a restaurant, we heard a conversation between a Kurdish student and his teacher, and at one point the teacher used the word genocide. It was the first time I heard that word used in Turkey by someone else. I turned around and I said, “I am sorry, I don’t know you, but I have to get involved in this conversation.” And we had this amazing afternoon. He was Kurdish, he was from Diyarbakır, he used the word Diyarbekir, and he said Dikranagerdzi. He was talking about this incredible journey that he had made as a Kurd. He told me when he had found out about the Armenian Genocide. And I understood the fact that until the 1990s the Kurds didn’t even hear about this question. I understood for the first time that there was this process where there was a whole movement towards the civil society that we in the Diaspora don’t understand. I’ll give you a good example. When we saw the images in the streets after the Hrant Dink assassination of these large groups of people saying, “We are Hrant! We are Armenians!” In Diaspora, we all understood this to mean that these crowds were saying, “We recognize the Genocide”. We did not understand that these crowds were saying, “We are Hrant, because we want freedom of expression,” and “We are Hrant, because we want to be able to ask questions”. We are very focused on one issue. But now, 100 years later, we can understand that we also have to focus on the community, where there is still the fear that if the mentality of society does not change, this can happen again. There is still ongoing pressure. We don’t feel this in the Diaspora. But it is dialogue that will change this country, and the truth will be known.
Based on your experience, how do you see the Kurds’ process of facing the Armenian Genocide?
There has been an incredible openness. This young man’s, this Kurdish intellectual’s sincerity was so clear. He was saying that Kurds began to ask questions about the Armenian Genocide after what they experienced in the 80s and 90s. It was extraordinary that he also mentioned their role as accomplices. This is something that they carry as well. He was telling me that sometimes he has talked to people who feel they were punished for being accomplices. It was very open. I was suspicious to be honest. Were they using the issue to gain credit before the international community by using the Armenian issue? It takes a single conversation with somebody who is sincere to erase that suspicion. Why did I meet this person? That can only happen here. If I made a film of it, no one would believe me, but it happened.
Did you come across anything in Turkey that caused disappointment?
I went to an exhibition of Ottoman photography. I was looking for traces of Armenian life. But there was hardly any mention of minorities, and only one mention of Armenians, and that was the 1905 assassination attempt on Sultan Abdul Hamid II. In other words, it was a continuation of the official narrative that portrayed Armenians as terrorists. I then went to a cinema museum. There I saw a statute of a person who was a hero, because he was the first Turk, the first non-ethnic, to have opened a cinema, because everyone before him was a member of an ethnic minority. I thought to myself, ‘What a strange reason to celebrate’. I saw something else there, a poster of a film that was made in 1922. Half the cast is Armenian with Armenian names. This was incredible; you realize how important the Armenian presence was even after 1915. Armenians were still involved in every aspect of the society. The life in Istanbul was very different from the life in provinces, where we were completely eradicated. But they were like public ghosts. I, too, was afraid for many years to come here and become a public ghost.
What is the main difference between the commemoration of the Genocide in the USA and European countries, and its commemoration in Turkey?
In the Diaspora we don’t trust other people to remember for us. We feel we have to remember ourselves; no one can remember us in the way we want to be remembered. This means the process of remembering for us has become more ideological than organic. The process here is perhaps more organic because it is in relationship to the perpetrator also coming to terms. What happens in the West? Politicians remember, they make wonderful speeches, but it is not part of their emotional construction. The Genocide is not part of their narrative. They don’t ask themselves big questions. For instance, in Canada there has been a question raised about Genocide against the First Nations. At no point, do we think the Native Indian leaders would align themselves with Armenians. It’s a separate issue in our minds. We are not building this social construct together, which you could do here.
How has the Centennial of the Genocide been in terms of artistic production?
It is an amazing year; we won Best National Pavilion at the Venice Biennale. It’s amazing that we are able to have an artist like Sarkis showing at the Pavilion of Turkey. We are culturally on track of reconstructing ourselves after this devastation that took place 100 years ago. On the other hand, art isn’t created as part of a program. It is created because individuals feel a certain thing at a certain point in their life at a certain stage in their development as an artist. We expect that suddenly, at the centennial of the Genocide, to have a film on the Genocide that can define everything. But we forget that the destruction continues. And no depiction of catastrophe will ever make up for a 100 years of denial. There is a long history of holocaust films. Some of them are really strong, some of them are terrible. Armenians don’t have access to that tradition. We are still doing what we need to do, but we are still coming to terms with the fact that denial is a defining aspect of our experience. When people say it is a difficult subject to deal with, it is because we need to do so much.
Would you consider making a film about the Genocide?
No. It is interesting that my next film, ‘Remember’ deals with some of these issues through the Holocaust. The protagonist is a Holocaust survivor who has Alzheimer’s. He finds out that a Nazi is responsible for killing his family at Auschwitz. He goes on a mission to kill this person, but he keeps forgetting why. Sometimes he finds himself in hotel room with a gun, but he doesn’t know why.
Which moment you had in Istanbul will be the most unforgettable?
The strongest impression is being best man, and shaking hands with members of the Armenian community. To me it was so emotional. Not only being at the wedding, but also to be in this extraordinary position where I was welcoming everybody in their home, in their church. I looked at every single face, they lived in this community and they managed to exist together. Sharing this moment of joy and union, that was monumental for me. |
Differential Regulation of Estrogen-Inducible Proteolysis and Transcription by the Estrogen Receptor N Terminus ABSTRACT The ubiquitin-proteasome pathway has emerged as an important regulatory mechanism governing the activity of several transcription factors. While estrogen receptor (ER) is also subjected to rapid ubiquitin-proteasome degradation, the relationship between proteolysis and transcriptional regulation is incompletely understood. Based on studies primarily focusing on the C-terminal ligand-binding and AF-2 transactivation domains, an assembly of an active transcriptional complex has been proposed to signal ER proteolysis that is in turn necessary for its transcriptional activity. Here, we investigated the role of other regions of ER and identified S118 within the N-terminal AF-1 transactivation domain as an additional element for regulating estrogen-induced ubiquitination and degradation of ER. Significantly, different S118 mutants revealed that degradation and transcriptional activity of ER are mechanistically separable functions of ER. We find that proteolysis of ER correlates with the ability of ER mutants to recruit specific ubiquitin ligases regardless of the recruitment of other transcription-related factors to endogenous model target genes. Thus, our findings indicate that the AF-1 domain performs a previously unrecognized and important role in controlling ligand-induced receptor degradation which permits the uncoupling of estrogen-regulated ER proteolysis and transcription. |
Iron in the Vegan Diet
by Reed Mangels, PhD, RD
From Simply Vegan 5th Edition updated August, 2018
Summary
Dried beans and dark green leafy vegetables are especially good sources of iron, even better on a per calorie basis than meat. Iron absorption is increased markedly by eating foods containing vitamin C along with foods containing iron. Vegetarians do not have a higher incidence of iron deficiency than do meat eaters.
Iron is an essential nutrient because it is a central part of hemoglobin, which carries oxygen in the blood. Iron deficiency anemia is a worldwide health problem that is especially common in young women and in children.
Iron is found in food in two forms, heme and non-heme iron. Heme iron, which makes up 40 percent of the iron in meat, poultry, and fish, is well absorbed. Non-heme iron, 60 percent of the iron in animal tissue and all the iron in plants (fruits, vegetables, grains, nuts), is less well absorbed. Because vegan diets only contain non-heme iron, vegans should be especially aware of foods that are high in iron and techniques that can promote iron absorption. Recommendations for iron for vegetarians (including vegans) may be as much as 1.8 times higher than for non-vegetarians [1].
Some might expect that since the vegan diet contains a form of iron that is not that well absorbed, vegans might be prone to developing iron deficiency anemia. However, surveys of vegans [2,3] have found that iron deficiency anemia is no more common among vegetarians than among the general population, although vegans tend to have lower iron stores [3].
The reason for the satisfactory iron status of many vegans may be that commonly eaten foods are high in iron, as Table 1 shows. In fact, if the amount of iron in these foods is expressed as milligrams of iron per 100 calories, many foods eaten by vegans are superior to animal-derived foods. This concept is illustrated in Table 2. For example, you would have to eat more than 1700 calories of sirloin steak to get the same amount of iron as found in 100 calories of spinach.
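
The per-100-calorie comparison in Table 2 below is simple arithmetic: milligrams of iron divided by calories, times 100. As a rough illustration, here is that calculation in Python; the calorie figures are approximate USDA values assumed for this example, not taken from the article:

def iron_per_100_kcal(iron_mg, kcal):
    """Milligrams of iron per 100 calories of a food."""
    return iron_mg / kcal * 100

# 1 cup cooked spinach: 6.4 mg iron (Table 1), roughly 41 kcal (assumed)
print(round(iron_per_100_kcal(6.4, 41), 1))   # 15.6, matching Table 2
# 1 cup cooked lentils: 6.6 mg iron (Table 1), roughly 230 kcal (assumed)
print(round(iron_per_100_kcal(6.6, 230), 1))  # 2.9, matching Table 2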
Another reason for the satisfactory iron status of vegans is that vegan diets are high in vitamin C. Vitamin C acts to markedly increase absorption of non-heme iron. Adding a vitamin C source to a meal increases non-heme iron absorption up to six-fold, which makes the absorption of non-heme iron as good as or better than that of heme iron [4].
Fortunately, many vegetables, such as broccoli and bok choy, which are high in iron, are also high in vitamin C so that the iron in these foods is very well absorbed. Commonly eaten combinations, such as beans and tomato sauce or stir-fried tofu and broccoli, also result in generous levels of iron absorption.
It is easy to obtain iron on a vegan diet. Table 3 shows several menus whose iron content is markedly higher than the RDA for iron.
Both calcium and tannins (found in tea and coffee) reduce iron absorption. Tea, coffee, and calcium supplements should be used several hours before a meal that is high in iron [5].
Table 1: Iron Content of Selected Vegan Foods

Food                             Amount       Iron (mg)
Blackstrap molasses              2 Tbsp       7.2
Lentils, cooked                  1 cup        6.6
Tofu                             1/2 cup      6.6
Spinach, cooked                  1 cup        6.4
Kidney beans, cooked             1 cup        5.2
Chickpeas, cooked                1 cup        4.7
Soybeans, cooked                 1 cup        4.5
Tempeh                           1 cup        4.5
Lima beans, cooked               1 cup        4.5
Black-eyed peas, cooked          1 cup        4.3
Swiss chard, cooked              1 cup        4.0
Bagel, enriched                  1 medium     3.8
Black beans, cooked              1 cup        3.6
Pinto beans, cooked              1 cup        3.6
Veggie hot dog, iron-fortified   1 hot dog    3.6
Prune juice                      8 ounces     3.0
Quinoa, cooked                   1 cup        2.8
Beet greens, cooked              1 cup        2.7
Tahini                           2 Tbsp       2.7
Peas, cooked                     1 cup        2.5
Cashews                          1/4 cup      2.0
Brussels sprouts, cooked         1 cup        1.9
Potato with skin                 1 large      1.9
Bok choy, cooked                 1 cup        1.8
Bulgur, cooked                   1 cup        1.7
Raisins                          1/2 cup      1.5
Apricots, dried                  15 halves    1.4
Soy yogurt                       6 ounces     1.4
Veggie burger, commercial        1 patty      1.4
Watermelon                       1/8 medium   1.4
Almonds                          1/4 cup      1.3
Sesame seeds                     2 Tbsp       1.2
Sunflower seeds                  1/4 cup      1.2
Turnip greens, cooked            1 cup        1.2
Millet, cooked                   1 cup        1.1
Broccoli, cooked                 1 cup        1.0
Kale, cooked                     1 cup        1.0
Tomato juice                     8 ounces     1.0

Sources: USDA Nutrient Database for Standard Reference, Legacy, 2018, and manufacturers' information. The RDA for iron is 8 mg/day for adult men and for post-menopausal women and 18 mg/day for pre-menopausal women. Vegetarians (including vegans) may need up to 1.8 times more iron.
Table 2: Comparison of Iron Sources

Food                               Iron (mg/100 calories)
Spinach, cooked                    15.6
Collard greens, cooked             4.5
Lentils, cooked                    2.9
Broccoli, cooked                   1.9
Chickpeas, cooked                  1.7
Sirloin steak, choice, broiled     1.1
Hamburger, lean, broiled           0.8
Chicken, breast roasted, no skin   0.6
Pork chop, pan fried               0.4
Flounder, baked                    0.3
Milk, skim                         0.1

Note that the top iron sources are vegan.
Table 3: Sample Menus Providing Generous Amounts of Iron

Menu 1                                                     Iron (mg)
Breakfast: 1 serving Oatmeal Plus (p. 23)†                 3.8
Lunch:     1 serving Tempeh/Rice Pocket Sandwich (p. 94)   4.7
           15 Dried Apricots                               1.4
Dinner:    1 serving Black-Eyed Peas and Collards (p. 76)  2.1
           1 serving Corn Bread (p. 21)                    2.6
           1 slice Watermelon                              1.4
TOTAL                                                      16.0

Menu 2                                                     Iron (mg)
Breakfast: Cereal with 8 ounces of Soy Milk                1.5
Lunch:     1 serving Creamy Lentil Soup (p. 49)            6.0
           1/4 cup Sunflower Seeds                         1.2
           1/2 cup Raisins                                 1.5
Dinner:    1 serving Spicy Sautéed Tofu with Peas (p. 103) 14.0
           1 cup cooked Bulgur                             1.7
           1 cup cooked Spinach                            6.4
           sprinkled with 2 Tbsp Sesame Seeds              1.2
TOTAL                                                      33.5

†Note: Page numbers refer to recipes in the book Simply Vegan. Additional foods should be added to these menus to provide adequate calories and to meet requirements for nutrients besides iron.
|
Effects of Nanoparticle Surface Treatment on the Crystalline Morphology and Dielectric Property of Polypropylene/Calcium Carbonate Nanocomposites It is well known that the morphology is one of the most important factors to affect the dielectric properties of polymeric nanocomposites. In this work, it is thought that the interfacial interaction between nanoparticles and polymer matrix is one of the most important factors affecting the crystalline morphology of the nanocomposites. So the effects of the surface-treatment of CaCO3 nanoparticle on the crystalline morphology of polypropylene (PP)/CaCO3 nanocomposites were investigated using differential scanning calorimetry (DSC), X-ray diffraction (XRD), polarized optical microscope (POM) facilities. The dielectric properties of the nanocomposites were tested using the dielectric analyzer (DEA) at room temperature. The results showed that the nanoparticles treated with a compound surface-treating agent (CA-2) induced much more beta-phase crystal in the nanocomposites than that with a single surface-treating agent (CA-1). The relations between the dielectric properties and morphology of the nanocomposites were discussed |
def apply_async_and_get_result(self, args=None, kwargs=None, timeout=None, propagate=True, **options):
    """Dispatch the task asynchronously and block until its result is ready.

    A ``timeout`` of ``None`` waits indefinitely; a non-positive timeout
    gives up immediately without waiting on the result backend.
    """
    result = self.apply_async(args=args, kwargs=kwargs, **options)
    if timeout is None or timeout > 0:
        return result.get(timeout=timeout, propagate=propagate)
    else:
        ex = TimeoutError('The operation timed out.')
        result.timeout(ex)  # mark the pending result as timed out before raising
        raise ex
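

# Illustrative usage sketch assuming a Celery-style API; the names below
# (app, BlockingTask, add) are hypothetical and not part of this module.
from celery import Celery, Task

app = Celery('demo', broker='redis://localhost:6379/0',
             backend='redis://localhost:6379/0')


class BlockingTask(Task):
    # Attach the helper above so every task gains the blocking variant.
    apply_async_and_get_result = apply_async_and_get_result


@app.task(base=BlockingTask, bind=True)
def add(self, x, y):
    return x + y


# With a worker running: dispatch and block for up to 10 s for the answer.
# value = add.apply_async_and_get_result(args=(2, 3), timeout=10)
|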
The Hard Copy Observer
The Hard Copy Observer is a regular publication of Lyra Research based in Newtonville, Massachusetts. It is a business (as opposed to consumer) publication targeted at the Printing and Imaging business, and is widely considered the premier authoritative factual source for that industry. Tekrati, a firm in the Analyst Relations business, summarizes The Hard Copy Observer as "a leading publication serving the industry and, in fact, the printer industry bible."
History and profile
Volume 1, Number 1 of "the Observer" was published in October 1991. Its founder as well as original editor and publisher is Charles LeCompte, who remains as publisher (2010). The success of the publication spawned what grew to be a much larger marketing research organization, Lyra Research, headquartered in Newton, Massachusetts. A companion monthly newsletter, The Hard Copy Supplies Journal, was also developed to focus on the consumables portion of the business.
The first edition was 28 pages in length, and featured front-page stories about Xerox, Apple Computer, and Dataproducts. Its "look" was the no-nonsense black-and-white format that for the most part has remained unchanged. Today's subscription rate is $650 a year, compared to 1991's $495 a year, though the average edition's length, in its print form, grew to over 50 pages. Then as now, the publication is subscriber-supported with virtually no advertising.
The print version of the publication ended with the August 2009 issue, and beginning in September 2009, the newsletter content was continuously updated and available via web access.
In April 2012, the Photizo Group acquired Lyra Research, and the online Observer was phased out as it was merged with Photizo's industry news resources. |
P-36 Advance care planning in Norwegian nursing homes: study protocol for a randomised controlled trial. Background: The continuous process of mental impairment in many nursing home patients leads to communication problems that make ethical decisions challenging. To ensure dignity, the communication process about end-of-life care should start while the patient is still capable of actively participating. Aim: To examine the implementation of advance care planning in Norwegian nursing homes (NHs) and to determine whether advance care planning is more effective in improving the quality of life and mental health than the usual care provided to people living in nursing homes. Design: This randomised controlled trial is being carried out in 60 Norwegian long-term-care units in the period 2014-2016. Each unit will be treated as one cluster and will be randomised to either the intervention or the control group. The intervention group follows an educational program and is systematically followed up by phone to support the implementation. The control group receives care as usual. The study entails a 4-month intervention with a 9-month follow-up. Discussion: This study includes collaboration between physician, nurse, patient and next of kin. It will describe the effect of focusing on advance care planning by following an educational program based on international research. It is of high priority to provide more comprehensive training for staff working in nursing homes. The intervention with advance care planning will be part of a larger study named COSMOS, and the educational program combines the most effective research within advance care planning, pain assessment and treatment, medication review, and occupational therapy. |
Modeling the evolution of a fresh sea surface anomaly produced by tropical rainfall The evolution of a rain-produced, fresh surface anomaly observed in the western equatorial Pacific warm pool was modeled by use of the Princeton Ocean Model with Mellor-Yamada turbulent closure. The simulation was forced by a wind stress of 0.12 N m−2 and surface cooling of 225 W m−2. Following spin-up, 34-mm average rainfall was applied over a small portion of the domain. When rainfall ceased, momentum was trapped in a 1-m, fresh surface layer, and turbulent mixing was inhibited below the layer. Seven hours later, the surface density anomaly was reduced by an order of magnitude, and the surface velocity anomaly in the direction of the wind was 0.15 m s−1. The simulations show upwelling at the upwind edge of the anomaly and downwelling at the downwind edge with maxima of 15 and 35 m d−1, respectively. The momentum balance for the velocity component in the direction of the wind indicated that local acceleration, horizontal advection, and vertical diffusion were in approximate balance except at the downwind edge of the anomaly, where pressure gradient terms were also important. The budget of turbulent kinetic energy showed that shear production plus buoyancy production (or destruction) was approximately balanced by dissipation in much of the anomaly; advection and vertical turbulent transport terms were significant at some depths at the edges. The strength of the surface density front at the downwind edge of the anomaly tended to increase with time relative to the front at the upwind edge. |
Gastrointestinal Distress in Pregnancy: Prevalence, Assessment, and Treatment of 5 Common Minor Discomforts Gastrointestinal discomforts are a very common complaint in pregnancy. In fact, most pregnant women will experience at least one discomfort. This article focuses on 5 common conditions that occur in pregnancy: gastroesophageal reflux disease, diarrhea, constipation, hemorrhoids, and pica. While these conditions do occur in men and nonpregnant women, they occur more frequently in pregnancy because of the anatomic and physiologic changes associated with gestation. The type and severity of symptoms can vary from individual to individual, making treatment a challenge for healthcare providers, particularly when caring for pregnant women because the effects of medications and other treatments on the developing fetus are often not extensively studied. While these discomforts are rarely life-threatening, they can cause significant distress and impair quality of life. The goal of this article was to provide a summary of the anatomic and physiological changes during pregnancy that contribute to the increasing incidence of these discomforts and to provide information about each condition including prevalence, symptoms, and treatment modalities. |
package cn.jia.admin.config.db;

import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.springframework.core.annotation.Order;
import org.springframework.stereotype.Component;

/**
 * Dynamic data source switching implemented with a custom annotation plus AOP.
 */
@Aspect
@Order(-10)
@Component
public class DynamicDataSourceAspect {

    @Before("execution(* cn.jia.*.service.*.*(..))")
    public void beforeSwitchDS(JoinPoint point) {
        // Derive the data source key from the module segment of the package
        // name, e.g. cn.jia.user.service.* -> "user".
        String typeName = point.getSignature().getDeclaringTypeName();
        String dataSource = typeName.split("\\.")[2];
        DataSourceContextHolder.setDB(dataSource);
    }

    @Before("execution(* org.springframework.security.oauth2.provider.token.DefaultTokenServices.refreshAccessToken(..))")
    public void beforeWorkflowDS(JoinPoint point) {
        DataSourceContextHolder.setDB("user");
    }

    @Before("execution(* org.camunda.bpm.engine.impl.cfg.ProcessEngineConfigurationImpl.buildProcessEngine(..))")
    public void beforeWorkflowInit(JoinPoint point) {
        DataSourceContextHolder.setDB("workflow");
    }

//    @Before("execution(* org.activiti.engine.impl.asyncexecutor.*.*(..))")
//    public void beforeWorkflowRunJob(JoinPoint point) {
//        DataSourceContextHolder.setDB("workflow");
//    }

/*    @Before("execution(* org.activiti.spring.SpringTransactionInterceptor.*(..))")
    public void beforeWorkflowCommandExecutorImpl(JoinPoint point) {
        DataSourceContextHolder.setDB("workflow");
    }

    @Before("execution(* org.activiti.engine.impl.interceptor.*.*(..))")
    public void beforeWorkflowCommandExecutor(JoinPoint point) {
        DataSourceContextHolder.setDB("workflow");
    }*/

//    @After("execution(* cn.jia.*.service.*.*(..))")
//    public void afterSwitchDS(JoinPoint point) {
//        DataSourceContextHolder.clearDB();
//    }
} |
Mike O’Hanlon is a great colleague, and we have collaborated in the past, including on a book on nuclear arms control. However, we do not agree on everything.
Russia’s seizure and illegal annexation of Crimea in early 2014, followed by its support for armed separatism in eastern Ukraine, dealt a crippling blow to the European security order. The Kremlin’s actions shattered the cardinal rule of this order, going back to the 1975 Helsinki Final Act: European states should not use force to change borders or acquire territory.
Defining a new security order that can restore peace and stability in Eastern Europe poses a big challenge. Mike’s February 27 op-ed in the Wall Street Journal made a bold attempt to answer that challenge.
Mike suggested that NATO forswear further enlargement and proposed establishing a zone of permanently neutral countries, running from the Baltic states to the Black Sea. The zone would include Sweden, Finland, Belarus, Ukraine, Moldova, Georgia, Armenia, and Azerbaijan, plus Cyprus, Serbia, and some other Balkan countries. NATO and Moscow would commit to upholding the security of those countries, and Russia would withdraw its forces from states in the region, after which the West would lift its economic sanctions on Russia.
This is an interesting idea, but it would not work.
But Who Would Want to Belong?
First, a number of the listed countries would not agree to be relegated to such a zone of permanent neutrality. The obvious two are Ukraine and Georgia, both of whom have suffered from Russian aggression in the past 10 years. Others would object as well, including Sweden and Finland, which have recently taken a closer look at NATO in light of Russia’s more provocative posture and would not want to see their future security options circumscribed. Even a country such as Armenia, a member of the Moscow-led Collective Security Treaty Organization, might well object: Russian troops on its territory provide a hedge against Azerbaijan, which is using its energy wealth to strengthen its military capabilities, which Yerevan fears might be used to regain Nagorno-Karabakh.
NATO and Russia would have to consider these views. A “Yalta II,” negotiated over the heads of the Nordic and Eastern European states, would not go far.
Second, were such a neutral zone agreed, NATO would respect it. Would Moscow? The Kremlin has pursued a sphere of influence in the post-Soviet space, including Belarus, Ukraine, Moldova, Georgia, Armenia, and Azerbaijan. While Russia might say that it would respect those countries’ neutrality, we could expect a pattern of interference in their domestic and foreign affairs. In 1994, Russia pledged to respect Ukraine’s sovereignty, territorial integrity, and independence in the Budapest Memorandum, a pledge it blithely ignored in 2014 on the basis that circumstances had changed.
A key challenge of getting back to a more normal relationship between the West and Russia will turn on restoring a European security order that satisfies both sides.
Moscow would not welcome the prospect of countries such as Ukraine, Georgia, and Moldova gravitating towards the European Union. Indeed, Russia’s alarm over Ukraine’s course in 2013 and 2014 was not triggered by a prospect that Kyiv might join NATO (there was no interest then within the alliance in putting Ukraine on a membership track, and Ukraine was not pressing the issue). The triggering concern was Kyiv’s desire to conclude an association agreement with the European Union. The Kremlin recognized that, if Ukraine implemented all the economic and political reforms in the association agreement, it would be forever beyond Moscow’s reach.
A key challenge of getting back to a more normal relationship between the West and Russia will turn on restoring a European security order that satisfies both sides. NATO enlargement into the post-Soviet space is off the table for the foreseeable future. However, trying to create a single zone of neutral states, many of whose members would object to being there and where Russian interference would continue, does not offer the needed answer. |
/* Inc/thread_push.h */
#ifndef _THREAD_PUSH_H_
#define _THREAD_PUSH_H_
#include "cmsis_os.h"
extern osMessageQId mesq_id;
extern osThreadId thread_push_id;
extern void thread_push_entry(void const * arg);
#endif
|
Prevalence and distribution of dermatophytosis lesions on cattle in Plateau State, Nigeria

Abstract
Background and Aim: Dermatophytosis is an infection of the superficial, keratinized structures of the skin, nails, and hair of man and animals caused by a group of fungi called dermatophytes in the genera Trichophyton, Microsporum, and Epidermophyton. The prevalence of dermatophytosis among cattle in Nigeria and Plateau State, in particular, is yet to be fully determined. This study aimed to determine the prevalence and the distribution of dermatophytosis lesions on cattle in Plateau State, Nigeria.
Materials and Methods: Four hundred and thirty-seven cattle showing visible skin lesions suggestive of dermatophytosis were drawn from nine local government areas (three each) from the three senatorial districts of Plateau State, Nigeria. Skin scrapings were aseptically collected in a cross-sectional study in which sampling units were selected using a purposive sampling method. Samples were processed for both direct microscopic examination and isolation of dermatophytes in culture. The isolates were stained with lactophenol cotton blue and identified microscopically based on the size, shape, and arrangement of macro- and micro-conidia. The dermatophytes were further identified by determining the sequences of the internal transcribed spacer regions of their ribosomal DNA. Data were analyzed and presented as percentages, a bar graph, and the Chi-square test of association. p≤0.05 was considered statistically significant.
Results: The overall prevalence rate of bovine dermatophytosis in Plateau State was found to be 11.0%. Trichophyton verrucosum was more frequently isolated (54.2%) than Trichophyton mentagrophytes (45.8%). Age, breed, management practice, and season were significantly associated with the occurrence of the disease (p<0.05).
Conclusion: Dermatophytosis among cattle may be of public health significance in Plateau State, Nigeria. This is the first report on the prevalence and distribution of dermatophytosis lesions on cattle from Plateau State, Nigeria.

Introduction
Dermatophytosis, also known as tinea or ringworm, is an infectious, highly contagious skin disease that affects animals as well as humans. The disease is caused by a group of keratinophilic filamentous fungi called dermatophytes. The dermatophytes are classified into the genera Trichophyton, Microsporum, and Epidermophyton based on conidia morphology and accessory organs. However, De-Hoog et al., in a multilocus phylogenetic study of the family Arthrodermataceae, found the genus Trichophyton to be polyphyletic and proposed nine genera for the dermatophytes including Trichophyton, Epidermophyton, Nannizzia, Paraphyton, Lophophyton, Microsporum, Arthroderma, Ctenomyces, and Guarromyces. Dermatophytosis is an important public health problem worldwide, impacting millions of individuals annually. It is the most frequently encountered dermatologic problem in veterinary practice, affecting a wide range of domestic and wild animals. Ambilo and Melaku, in a study of the major skin diseases affecting cattle in Ethiopia, found that dermatophytosis was the most common skin disease affecting the bovine species. In animal husbandry, losses may be incurred due to the high cost of treatment, weight loss, decrease in milk production, and the poor quality of rawhide materials in view of hides and skins being affected and destroyed by dermatophytes. Consequently, a national vaccination program against the disease was advocated by the Swedish hide industry.
In Nigeria, reports on the prevalence of dermatophytosis in domestic animals and cattle, in particular, are scanty. Earlier studies on dermatophytoses of domesticated animals in different parts of the country, in Ibadan, Oyo State, in Zaria, Kaduna State, in Oyo State, and a more recent report from Nsuka, Enugu State, did not cover cattle. In the study conducted in Enugu, Anambra, Ebonyi, Abia, Imo, Kogi and Delta States of Nigeria, only 55 out of a total of 538 animals in that study were cattle. Apart from the gross underrepresentation of the Nigerian cattle population in the previous studies, only two locations, Zaria, in Kaduna State, and in Kogi State, were from the northern part of the country, which has over 85% of the total cattle population in Nigeria. It is quite obvious, therefore, that the prevalence of bovine dermatophytosis in Nigeria and Plateau State, in particular, is yet to be fully determined. This study investigates and documents the prevalence rates and the distribution of dermatophytosis lesions on cattle in Plateau State, Nigeria.

Materials and Methods
Ethical approval
This type of research (removal of keratinized dead scales, which does not inflict pain to an animal and, in fact, constitutes a form of treatment for the disease) does not require ethical clearance in our laws and regulations.

Study area
The study was conducted on cattle in Plateau State, located in North Central Nigeria. Its coordinates are 9°10'0"N and 9°45'0"E (degrees, minutes, seconds). Sampling locations: Bassa, Jos North, and Jos South Local Government Areas (LGAs) in Plateau North senatorial district; Mangu, Pankshin, and Kanam LGAs in Plateau Central senatorial district; and Wase, Mikang, and Shendam LGAs in Plateau South senatorial district were selected by a simple random sampling technique by balloting. Samples were collected based on a cross-sectional study, in which sampling units were selected using the purposive sampling method. A total of 4753 cattle were physically examined for skin lesions consistent with dermatophytosis. Four hundred and thirty-seven skin scrapings were collected by first cleaning the lesion site using cotton wool soaked in 70% alcohol to remove surface-adhering organisms; the edges of the lesions were then scraped using a sterile scalpel blade into a clean envelope. Age of animal (young < 1 year and adult > 1 year), sex, breed, and the anatomical location of lesions where samples were collected on each animal were recorded. Samples were labeled and transported to the Mycology Laboratory, Department of Veterinary Microbiology, Ahmadu Bello University, Zaria, and stored at room temperature until analyzed.

Laboratory examination
Each specimen was divided into two parts. One part was used for direct microscopic examination and the other portion for the isolation of the etiologic agent in culture. The direct examination was performed as described by Bhatia and Sharma. Briefly, a portion of each specimen was placed in a drop of 20% potassium hydroxide (KOH) on a clean grease-free glass slide and covered with a coverslip. The preparation was gently heated over the flame from a Bunsen burner to facilitate digestion. The slides were examined using the 10× and 40× objectives of a light microscope (Nikon, ECLIPSE-E100, 824592, China) for fungal elements. The presence of hyaline septate hyphae in skin scales or spores inside or outside the hair shaft was considered positive for dermatophytes.
Isolation of dermatophytes was conducted using the method described by Robert and Pihet. Each of the skin samples was also inoculated onto a Petri dish containing Sabouraud's dextrose agar incorporated with cycloheximide at 0.5 mg/ml and chloramphenicol at 16 µg/ml. The specimens were placed directly and pressed into the medium with an inoculating loop to ensure adequate contact between specimen and medium. The plates were sealed with masking tape, incubated at 37°C, and observed for fungal growth every 3 days for a period of 21-30 days.

Identification of fungal isolates
Information on colony features such as pigmentation (surface and reverse sides), topography (flat, heaped, regularly, or irregularly folded), texture (yeast-like, glabrous, powdery, granular, velvety, or cottony), and rate of growth (slow or rapid) was noted. Microscopic identification of dermatophytes was performed as described by Robert and Pihet. Briefly, a portion of mycelium was transferred into a drop of lactophenol cotton blue on a clean grease-free glass slide and teased with a 22 gauge nichrome needle to separate the filaments. A coverslip was placed on the preparation and examined with the 10× and 40× objectives of a light microscope using a much-reduced light for identification. Dermatophytes were identified based on variations in size and shape of macroconidia and microconidia, chlamydoconidia, spiral hyphae, nodular organs, and pectinate branches. The isolates were further identified by determining the sequences of the internal transcribed spacer (ITS) regions of their ribosomal DNA. The DNA was extracted using the ZR Fungal/Bacterial DNA Kit™ Catalog No. D 6005 (Zymo Research Corporation) following the manufacturer's instructions. The PCR amplification of the ITS regions of ribosomal DNA (rDNA) was carried out using the primers ITS-1 (5'-TCCGTAGGTGAACCTGCGG-3') and ITS-4 (5'-TCCTCCGCTTATTGATATGC-3') as the forward and reverse primers, respectively. Amplification products were separated by electrophoresis in 1.5% agarose gels incorporated with ethidium bromide and documented using a gel documentation system (BIORAD Laboratories). The amplified DNA was purified using the Wizard PCR Preps DNA Purification System (Promega) and sequenced with the primers ITS-1 (5'-TCCGTAGGTGAACCTGCGG-3') and ITS-4 (5'-TCCTCCGCTTATTGATATGC-3') using the dye terminator method (Applied Biosystems 377 automatic sequencer). The partial sequences of the ITS1, 5.8S, and partial ITS2 regions of the ribosomal DNA were compared at nucleotide positions 76-587 with the sequences of the ITS1, 5.8S, and ITS2 of Trichophyton verrucosum and Trichophyton mentagrophytes available in GenBank for identification.

Statistical analysis
Data were analyzed using SPSS version 21 (IBM, USA). p≤0.05 was considered statistically significant.

Results
Clinical signs
Skin lesions were discrete, alopecic, circular, circumscribed, crusty, and grayish-white, raised above the skin. Some animals had very few lesions on the head region, especially around the eyes, while in other cases, many lesions involving the head, face, and dewlap were observed (Figure-1).

Laboratory examination
Of 437 skin samples processed, 57 (13.0%) were positive for dermatophytes by direct examination, while 48 (11.0%) were positive by culture. All the samples that were negative through direct examination were also culture negative.
Direct examination of samples revealed non-pigmented, septate hyphae in skin scales, while ectothrix spores were observed on hair. Of the 48 dermatophytes isolated in culture, 22 (45.8%) and 26 (54.2%) were identified as T. mentagrophytes and T. verrucosum, respectively. Colonies of T. mentagrophytes were white, flat, and granular with a yellow reverse color. Microscopically, many globose microconidia arranged in grape-like clusters were observed, together with elongated, pencil-shaped, thin, smooth-walled macroconidia. Colonies of T. verrucosum were generally slow-growing, button- or disk-shaped, white, and glabrous, with a raised center and flat periphery, some submerged growth, and a yellow reverse pigment. Microscopically, broad, irregular hyphae with abundant chlamydoconidia in chains ("chains of pearls") were observed. Comparing the sequences of the ITS regions of the ribosomal DNA of the isolates with the ITS sequences of dermatophytes available in GenBank, 22 of the isolates were identified as T. mentagrophytes and 26 as T. verrucosum, in complete agreement with the culture-based technique.

Prevalence rates of bovine dermatophytosis based on sampling locations in Plateau State
Of the 48 dermatophytes isolated in culture, 28 (58%), including 15 T. verrucosum and 13 T. mentagrophytes, were isolated from cattle in Plateau North senatorial district, while 9 isolates (19%), including four T. verrucosum and five T. mentagrophytes, and 11 isolates (23%), including seven T. verrucosum and four T. mentagrophytes, were isolated from cattle in Plateau Central and Plateau South senatorial districts, respectively. The prevalence rate of bovine dermatophytosis was significantly higher (p<0.05) in Plateau North senatorial district than in Plateau Central and Plateau South senatorial districts. There was no significant difference in the occurrence of the disease between cattle in Plateau Central and those in Plateau South senatorial districts (p>0.05) (Table-1).

Prevalence rates of bovine dermatophytosis based on age, sex, breed, management system, and season in Plateau State
Of the 437 samples analyzed, 253 were obtained from adult cattle (> 1 year) and 184 from young animals (≤ 1 year). Eleven (4.3%) adult and 37 (20.1%) young animals tested positive for dermatophytes. The prevalence rate of dermatophytosis was significantly higher (p<0.05) in young than in adult cattle. Of the 275 male and 162 female animals tested, 27 (9.8%) males and 21 (13.0%) females were positive for dermatophytes. There was no significant difference (p>0.05) in the occurrence of the disease between male and female animals. Of the 251 local Bunaji cattle and 186 Friesian-Bunaji crossbreds tested, 9 (3.6%) of the Bunaji breed and 39 (21.0%) of the Friesian-Bunaji crossbreds were positive for dermatophytes. The prevalence of dermatophytosis was significantly higher (p<0.05) in the Friesian-Bunaji crossbreds than in the local Bunaji breed. Of the 135 samples obtained from confined and 302 from unconfined cattle, 28 (20.7%) confined and 20 (6.6%) unconfined animals were positive for dermatophytes. The prevalence of dermatophytosis was significantly higher (p<0.05) in confined than in unconfined animals. Thirty-nine (15.5%) of 251 cattle examined during the rainy season were positive for dermatophytes, whereas only 9 (4.8%) of the 186 animals examined during the dry season were positive.
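The age-group comparison reported above can be reproduced directly from the counts given; a minimal sketch in Python with SciPy, assuming a standard chi-square test of independence (the text names SPSS, so the exact procedure used is an assumption):

from scipy.stats import chi2_contingency

# 2x2 table from the reported counts: [positive, negative] per age group
table = [[37, 184 - 37],   # young (<= 1 year): 37/184 positive (20.1%)
         [11, 253 - 11]]   # adult (> 1 year): 11/253 positive (4.3%)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.2e}")  # p << 0.05, consistent with the reported significance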
The occurrence of the disease was significantly higher in the rainy than in the dry season (p<0.05) (Table-2).

Distribution of dermatophytosis lesions on the body of cattle based on age
Of the 37 young cattle that were positive for dermatophytes, ringworm lesions were seen most frequently on the head region, especially around the eyes.

Discussion
The overall prevalence rate of bovine dermatophytosis in Plateau State was found to be 11.0%. Of the species isolated, T. verrucosum was more frequently isolated (54.2%) than T. mentagrophytes (45.8%). Age, breed, management practice, and season were found to be significantly associated with the occurrence of the disease (p<0.05). Dermatophytosis lesions occurred more frequently on the head region (70.3%) than on other parts of the body in young cattle, whereas in adult animals the lesions were more common on the back (27.3%). The prevalence rate of 11% for bovine dermatophytosis in this study is similar to the 12.6% reported by Nweze, who carried out a study on dermatophytosis of domestic animals involving 55 cows, among other animals, in seven Nigerian states. However, lower rates ranging from 4.5% to 8% have been reported in Perugia, Italy, while a much higher prevalence rate of 30.6% was reported in Jordan. These variations may be a result of differences in geographical location. The prevalence and causative agents of dermatophytosis may vary from one location to another depending on population density, climatic and socioeconomic conditions, and natural reservoirs. The more frequent isolation of T. verrucosum from ringworm lesions in this study suggests that it may be the most prevalent dermatophyte of cattle in the state. This finding is in agreement with the reports of Shams-Ghahfarokhi et al., Swai and Sanka, and Agnetti et al. that T. verrucosum is the most common cause of ringworm in cattle. It differs, however, from the report of Ranganathan et al., who found that T. mentagrophytes was the predominant cause of bovine dermatophytosis in India. The main lesions of bovine dermatophytosis observed in this study were discrete, alopecic, circular, circumscribed, thickly crusted, grayish-white lesions raised above the skin, consistent with the report of Nweze. It is, however, difficult to clinically distinguish dermatophytosis from other non-mycotic dermatoses, and quite often different dermatophyte species produce similar lesions that are difficult to differentiate by clinical examination. Therefore, identification by direct microscopic examination and in vitro culture is required for appropriate diagnosis. The presence of non-pigmented, septate hyphae in skin scales and ectothrix spores on hair surfaces observed by direct examination in this study agrees with the report of Silveira-Gomes et al., who studied 494 human skin scrapings by direct examination and concluded that the presence of arthroconidia in skin scales is diagnostic of dermatophyte involvement. Although it is a highly efficient screening technique, direct examination is limited in its specificity and sensitivity, as false negatives have been reported in 10-15% of samples. Nasimuddin et al. found that only 34.35% of the 300 skin scrapings processed for mycology were positive by direct examination, while 49.0% were culture positive. Furthermore, it is not possible to identify a fungus to species level by this method.
Since prophylaxis and therapy may vary depending on the species causing the infection, the need to isolate the pathogen in culture and identify it at species level is imperative. Gupta et al. used macroscopic features such as colony pigmentation, topography, texture, and rate of growth, coupled with the microscopic morphology of fungal elements, including the size and shape of macro- and microconidia, spirals, nodular organs, and pectinate branches, for the identification of dermatophyte species. According to Rosen, T. verrucosum should be considered if a fungal colony is slow-growing and white, with a smooth, folded surface. Colonies of T. verrucosum in this report were slow-growing, white, heaped, smooth, and slightly folded, with some submerged growth and a yellow reverse pigment. This observation is consistent with the findings of Rosen and Al-Ani et al. It differs, however, from the report of Shams-Ghahfarokhi et al., who described the growth of T. verrucosum on selective agar for pathogenic fungi as small, button-like, white to cream-colored colonies with a suede-like to velvety surface, a raised center, and a flat periphery with some submerged growth. This difference could be a result of variation in the strains studied and in the media, since colonial characteristics may vary depending on the medium used. The broad, irregular hyphae with abundant chlamydoconidia in chains ("chains of pearls") typical of T. verrucosum observed under the microscope in this study agree with the report of Rippon, who found that chlamydospores of T. verrucosum have thick walls and occur in chains. Most strains of T. verrucosum do not produce conidia; however, microconidia-producing strains have been documented by Yuksel and Ilkit. In this study, the direct examination method yielded a higher number of positive cases (13.0%) than the culture method (11.0%). This is in agreement with the findings of Moreira et al. and of Bhatia and Sharma, who found higher rates by direct examination than by culture when they studied skin samples obtained from dermatophytosis-affected rabbits and humans, respectively, and concluded that direct examination is a rapid and efficient method for the presumptive diagnosis of ringworm infection. However, it is at variance with the findings of Gupta et al., who reported sensitivities of 73.3% and 100% for direct examination and culture, respectively. The lower sensitivity of the culture technique in this report may be due to frequent contamination by saprophytic fungi, which might have prevented the growth of the pathogens. Moreover, the authors do not know whether the animals that were positive for dermatophytes by direct examination but negative by culture had been treated with antifungal agents before sample collection. Perhaps some culture-negative specimens contained residual chemotherapeutic agents from previous topical antifungal therapy. Many antifungal drugs used for the treatment of ringworm are retained for long periods within the horny layer of the epidermis, and drug residues present in the samples may prevent the growth of the causative dermatophytes. Furthermore, since the KOH technique cannot differentiate between viable and non-viable fungal elements, some of these samples might have contained dead dermatophytes. Such non-viable fungi would not grow in culture and may be responsible for false-negative cultures in spite of a positive direct examination.
Robert and Pihet suggested that insufficient material, a very short incubation, or an unsuitable temperature could result in a false-negative culture. The significantly higher prevalence rate of cattle ringworm (p<0.05) in Plateau North senatorial district may be due to its colder, near-temperate climatic conditions compared with Plateau South senatorial district, which has a considerably warmer climate. This is contrary to the generally accepted belief that the disease is more common in environments with warm and humid climatic conditions. It may instead be due to the higher number of crossbred cattle in this area, as crossbred animals are believed to be less resistant to dermatophyte infection than local breeds of cattle. It may also be for the same reason that we found a significantly higher prevalence of the disease (p<0.05) in the Friesian-Bunaji crossbreds than in the local Bunaji breed, thus confirming the report of Swai and Sanka, who found a higher rate in crosses than in the indigenous Tanzanian short-horn zebu. This may be because local breeds are likely to have more specific immunity resulting from frequent exposure to local dermatophyte strains and are hence more resistant to the disease than crossbreds. The significantly higher prevalence of dermatophytosis in calves than in adult cattle (p<0.05) in this report suggests that young animals are more likely to acquire infection than older animals. This observation agrees with the findings of Al-Ani et al., Shams-Ghahfarokhi et al., and Agnetti et al., who reported that fungal infections were more common among cattle less than 6 months of age and that the age of the animal was a statistically significant risk factor associated with dermatophytosis. This may be due in part to the weak specific and non-specific immunity and the high pH of the skin in young animals. Animal susceptibility is determined largely by immunological status, and hence young animals may be more susceptible, since immunity increases with age. Furthermore, adult animals have more subcutaneous adipose tissue; the breakdown of fat into fatty acids and glycerol lowers the skin pH and makes the adult animal less susceptible to fungal infection. In the present study, there was no significant difference in the occurrence of dermatophytosis between male and female cattle (p>0.05). This agrees with the report of Agnetti et al., who studied several factors for their roles in bovine dermatophytosis and concluded that age was the only significant risk factor in animals. A significantly higher prevalence rate was obtained for confined cattle (p<0.05) than for unconfined animals. This is in agreement with the findings of Al-Ani et al. and of Shams-Ghahfarokhi et al., who reported that housing animals in close proximity to each other for long periods in the presence of infected debris was responsible for the high incidence of the disease in winter. This may be because animals in close confinement have restricted movement; the chances of direct contact with one another are higher, especially during the cold season when animals huddle together to keep warm, and transmission consequently increases. A significantly higher prevalence of the disease was obtained during the rainy season (p<0.05) than during the dry season, in agreement with the reports of Shams-Ghahfarokhi et al., Sudan et al., and Bhatia and Sharma.
This may be attributable to the high humidity resulting from heavy rainfall, which favors the multiplication of dermatophytes, thereby predisposing animals to infection. The more common occurrence of ringworm lesions on the head region in calves agrees with the report of Pandey and Cabaret, who studied the distribution of ringworm lesions in cattle naturally infected with T. verrucosum. They observed that periocular lesions were more characteristic of young animals, while bulls had more lesions on the dewlap and intermaxillary space. Much earlier reports also showed similar patterns in the distribution of ringworm lesions on the body of cattle. Gentles and O'Sullivan examined 77 infected animals and found lesions on the head (79%), neck (53%), shoulders (15.5%), back (23.5%), lumbar region (15.5%), and tail (9%). According to Ford, the neck, head, trunk, and limbs were infected in decreasing order, and the preferential sites on the head were around the eyes, followed by the ears, cheeks, face, and muzzle. The reason for the occurrence of more lesions around the eyes and face in young animals is not well understood. However, the habit of licking and grooming by calves could predispose these parts of the animal to infection. Furthermore, in suckling calves these parts of the body are likely to be subject to constant wetting by mammary secretions, especially during suckling, and the maceration resulting from continuous wetness could predispose to fungal infection. Apart from the reports by Ford and by Gentles and O'Sullivan, to the best of the authors' knowledge there are no recent reports in the literature regarding the distribution of dermatophytosis lesions on the body of cattle.

Conclusion
T. verrucosum and T. mentagrophytes were isolated and identified from skin lesions of cattle, with T. verrucosum occurring more frequently (54.2%) than T. mentagrophytes (45.8%). The head region was the preferred site of dermatophyte infection in young animals. Age, breed, management practice, and season were found to be significant risk factors associated with dermatophytosis of cattle in Plateau State, Nigeria.
Synthetic catalysis of amide isomerization. Rotation about the C-N bond in amides can be catalyzed by Brønsted and Lewis acids, as well as by nucleophiles and bases. Catalysis of amide isomerization occurs in biological systems via "rotamase" enzymes; however, the mechanisms by which these proteins operate are not completely understood. We outline investigations that provide experimental support for mechanisms believed to be feasible for the catalysis of amide isomerization and present practical applications that have resulted from this work.
Identification of residues controlling transport through the yeast aquaglyceroporin Fps1 using a genetic screen. Aquaporins and aquaglyceroporins mediate the transport of water and solutes across biological membranes. Saccharomyces cerevisiae Fps1 is an aquaglyceroporin that mediates controlled glycerol export during osmoregulation. The transport function of Fps1 is rapidly regulated by osmotic changes in an apparently unique way and distinct regions within the long N- and C-terminal extensions are needed for this regulation. In order to learn more about the mechanisms that control Fps1 we have set up a genetic screen for hyperactive Fps1 and isolated mutations in 14 distinct residues, all facing the inside of the cell. Five of the residues lie within the previously characterized N-terminal regulatory domain and two mutations are located within the approach to the first transmembrane domain. Three mutations cause truncation of the C-terminus, confirming previous studies on the importance of this region for channel control. Furthermore, the novel mutations identify two conserved residues in the channel-forming B-loop as critical for channel control. Structural modelling-based rationalization of the observed mutations supports the notion that the N-terminal regulatory domain and the B-loop could interact in channel control. Our findings provide a framework for further genetic and structural analysis to better understand the mechanism that controls Fps1 function by osmotic changes. |
Compounds with a similar structural formula and suitable processes for their preparation are described in the Offenlegungsschrift (German laid-open patent application) DE 40 34 785.
Inflammatory bowel disorders frequently lead to colonic pain, digestive disorders and in the worst case to intestinal obstruction. The latter is associated with colic-like pain as a result of a heavy contractile stimulus, stool and wind retention, vomiting and, with increasing duration of the condition, dehydration, rebound tenderness of the abdomen and finally shock.
Functional bowel disorders are attributed to a variety of causes, inter alia an abnormality in the contractility of the smooth intestinal muscles and in gastrointestinal motor activity. Excessive contractile activity and altered coordination of motor activity can cause pain through activation of mechanoreceptors and through transport abnormalities which lead to distension of the intestine. These causes were until now also assumed to explain chest pain which is not of cardiac origin, as well as the pain of irritable bowel syndrome and of dyspepsia which is not associated with an ulcer. Meanwhile, this relationship has been further supported by 24-hour recordings of oesophageal and gastroduodenal motor function in patients who were suffering from chest pain not due to the heart or dyspepsia not associated with an ulcer (Katz, P.O. et al. Ann. Intern. Med. (1987) 106, 593-7). Motor abnormalities can occur in normal controls without symptoms, but can also disappear, whereby a temporal correlation with the patient's symptoms can be shown (Fefer, L. et al. Gastroenterology (1992) 102: A447 (Abstract)).
The treatment of the motor abnormalities with various therapeutically active agents, for example with agents which promote gastrointestinal motility, with anticholinergics, or with calcium channel and cholecystokinin antagonists, is in most cases effective in correcting the motor abnormalities, but it does not always improve the patients' symptoms.
Astronomers may be a step closer to solving the mystery of a strange object seen orbiting the massive black hole at the center of our Milky Way galaxy.
Dubbed G2, the object was first spotted in 2011 and was thought initially to be a gas cloud on the verge of being ripped apart by the black hole, which is known as Sagittarius A*. But when the object stayed intact, some scientists suggested G2 was something else: a pair of binary stars.
But now a team of scientists at the Max Planck Institute for Extraterrestrial Physics in Garching, Germany, has sparked new debate, offering more evidence to support the gas cloud theory.
For their research, the team used a computer model to compare the orbit of G2 to that of G1, another object observed near Sagittarius A* a decade ago.
“We explored the connection between G1 and G2 and find an astonishing similarity in both orbits,” team member Dr. Stefan Gillessen said in a written statement.
High-resolution images of the center of our Milky Way, with x marking the galaxy's black hole. G1 and G2 are shown in blue and red, respectively.
The similarity suggests that the two objects are dense clumps of gas that are part of a larger "gas streamer", sort of like beads on a string, Space.com reported.
“The good agreement of the model with the data renders the idea that G1 and G2 are part of the same gas streamer highly plausible,” Gillessen said in the statement. |
GRP-011 Adherence to Tyrosine Kinase Inhibitor Therapy in Chronic Myeloid Leukaemia

Background
Improved survival associated with tyrosine kinase inhibitor (TKI) treatment has transformed chronic myeloid leukaemia (CML) into a long-term disease, but therapeutic success is challenged by poor medicines adherence. Controlling side effects, in combination with patient education that includes direct communication between the pharmacist and the patient, is an essential component of maximising the benefits of TKI treatment.

Purpose
To estimate adherence to oral chemotherapy and describe side effects of TKI treatment and their impact on adherence in patients with CML.

Materials and Methods
An 18-month retrospective observational study (January 2011 to June 2012) was made of patients diagnosed with CML, selecting those who collected medicines in the pharmacy and were being treated with the selected TKIs (imatinib, dasatinib, nilotinib). The SMAQ interview was used to determine adherence. Adherence data, side effects, and demographic characteristics of the patients were tabulated using Excel. The χ2 test was used for categorical variables and the t-test for normally-distributed continuous variables, using SPSS statistical software.

Results
25 patients were included in the study: 16 men and 9 women, with a mean age of 60 years. Imatinib was the first-line treatment for all patients. The average adherence was 62.5%. Adherence for patients younger than 50 years was 83.3%, versus 55.6% in older patients (p = 0.125). Relating to years of treatment: 70.0% for less than 4 years of treatment, but 57.1% for longer treatment (p = 0.521). Patients with side effects showed less adherence: gastrointestinal disorders (80.0% vs. 64.28%, p = 0.402), musculoskeletal pain (70.0% vs. 42.8%, p = 0.188).

Conclusions
The data suggest that more than one-third of patients are poorly adherent to TKI treatment. Identifying risk factors such as side effects, and educating patients on the need to take medicines as prescribed, is essential to help patients achieve the maximum benefit from their treatment. No conflict of interest.
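For illustration only: the abstract reports percentages but not the underlying counts, so the 2×2 table below is an assumption consistent with those percentages, and Fisher's exact test is used in place of the χ2 test named in the methods because of the small cell counts:

from scipy.stats import fisher_exact

# Hypothetical counts consistent with the reported adherence percentages:
# [adherent, non-adherent] per age group (the exact split by age is assumed).
table = [[5, 1],    # < 50 years: 5/6 adherent (83.3%)
         [10, 8]]   # >= 50 years: 10/18 adherent (55.6%)

odds_ratio, p = fisher_exact(table)
print(p)  # p ~ 0.35 here: non-significant, directionally consistent with the reported p = 0.125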
Multidisciplinary approach to the craniovertebral junction. Historical insights, current and future perspectives in the neurosurgical and otorhinolaryngological alliance

SUMMARY
Historically considered a "nobody's land", craniovertebral junction (CVJ) surgery has recently gained high consideration as a symbol of challenging surgery as well as of selective, top-level qualifying surgery. The alliance between Neurosurgeons and Otorhinolaryngologists has become stronger over time. The CVJ has a unique architecture of bony and neurovascular structures. It is not only distinct from the subaxial cervical spine but also provides a special pattern of cranial flexion, extension, and axial rotation. Stability is provided by a complex combination of osseous and ligamentous supports which allows a large degree of motion. Thorough knowledge of CVJ anatomy and physiology makes it possible to better understand surgical procedures on the occiput, atlas, and axis and the specific diseases that affect the region. Although many years have passed since the beginning of this pioneering surgery, managing lesions situated in the anterior aspect of the CVJ remains a challenging neurosurgical problem. Many studies are available in the literature aiming to examine the microsurgical anatomy of both the anterior and posterior extradural and intradural aspects of the CVJ, as well as the differences among all the possible surgical exposures obtained under a 360° approach philosophy. Herein we provide a short but fairly complete at-a-glance tour across our personal experience and publications and the more recent literature, in order to highlight where this alliance between Neurosurgeon and Otorhinolaryngologist is mandatory, strongly advisable, or unnecessary.

Introduction
Despite the continuous evolution and refinement of operating techniques, the availability of dedicated surgical instruments, and the growing awareness and experience of dedicated surgeons, the treatment of craniovertebral junction (CVJ) pathologies is still a complex challenge. The tricky combination of bony, muscular, and neurovascular vital structures crowded into a deep and narrow space makes surgical approaches to the CVJ hard and risky. Depending on the location of the lesion, surgical approaches have traditionally been directed toward the ventral, dorsal, or lateral aspect of the cervico-medullary junction. The anterior aspect of the CVJ can be approached by the transoral approach (TOA), simple or extended; the endoscopic endonasal approach (EEA), introduced by Kassam 1; and the submandibular approach (SMA), i.e., the retropharyngeal approach, which is indicated only in selected cases. The posterior suboccipital approach (SOA), with intra- and extradural variants along with instrumentation procedures, has traditionally been used for inferior craniectomy, with or without C1-C2 laminectomy, for CVJ lesions. Through the same route it is possible to perform C0-C1-C2 instrumentation procedures with titanium cables, wires, screws, and rods in order to fix and stabilize the CVJ. Intradural lesions located at the ventrolateral aspect of the CVJ can be approached by means of a posterolateral or far lateral approach (FLA), an extension of the suboccipital approach with removal of a variable amount of occipital bone. Extradural lesions of the same region may require an anterolateral or extreme lateral approach (ELA), which allows better control of the entire length of the vertebral artery (VA), the jugular foramen, the lowest cranial nerves, and the jugular-sigmoid complex.
Finally, the posterior midline approach is the most popular in the neurosurgical culture, both for extra- and intradural surgical control of the CVJ and mainly for instrumentation and fusion techniques. Moving from a comparative analysis of the CVJ approaches, and in the wake of our surgical experience 2-7, consisting of more than 40 anterior surgical procedures (including TOA and EEA), more than ten comprising ELA, FLA, and SMA, and more than a hundred posterior instrumentation and fusion procedures, we herein outline the experience matured in our department, which includes an equipped Cranio-Vertebral Junction Laboratory for anatomical dissection 8-10, a II Degree Master Course on Surgical Approaches to the CVJ, and a University Research Center on the CVJ, all mastered and directed by the Senior Authors (MV and GP) along with the Junior Authors (MR and FS) and referring to the Surgical Department / Pole of Medical Interest of our Catholic University of Rome Medical School. In this review we will try to identify and objectify the co-working potential of Neurosurgeons and Otorhinolaryngologists in their common field of interest, CVJ surgery.

Where is the alliance between Neurosurgeons and Otorhinolaryngologists mandatory?
Submandibular anterior approach (SMA)
Terms like anterolateral 11, submandibular 12, anterior high cervical 13, and retropharyngeal pre-vascular 14 have been used to describe a surgical approach to the high cervical spine between the carotid sheath laterally and the pharyngeal constrictor muscles medially. Cloward 15 and Robinson and Smith 16 are generally acknowledged as having established the anterior approach to the cervical spine for the management of disk herniation. McAfee et al. 14 described the retropharyngeal pre-vascular approach using the same fascial plane described by Southwick and Robinson 17. The submandibular retropharyngeal approach provides a direct, perpendicular trajectory to the C2-3 interspace through a "natural" corridor above the superior laryngeal nerve (SLN) and below the hypoglossal nerve. The approach requires very little retraction and, compared with other approaches (especially the ELA), is associated with a lower risk of injury to the hypoglossal, glossopharyngeal, and superior laryngeal nerves. These risks can be further limited using an endoscope-assisted retropharyngeal approach, mainly indicated for lesions involving the clivus. Nevertheless, care must be taken when using the approach in the setting of a prior neck dissection. On the other hand, this route can be burdened by complications such as respiratory dysfunction, pharyngeal fistula, transient hoarseness and dysphagia, dural leakage, hypoglossal and facial nerve paresis, and salivary fistula.

Where is the alliance between Neurosurgeons and Otorhinolaryngologists strongly advisable?
TOA and EEA
The TOA still represents the "gold standard" for the surgical treatment of several conditions resulting in anterior CVJ compression and myelopathy 18. Refinements of the approach were introduced during the late 1970s by Menezes, who outlined several issues that now represent pivotal steps of the approach 19. Nevertheless, some concerns persist, such as the need for temporary tracheostomy and a postoperative nasogastric tube 20.

EEA
Although this approach, conceived in order to overcome these surgical complications, rapidly gained wide attention, a clear predominance over the TOA in the treatment of CVJ pathologies is still a matter of discussion.
In recent years, several papers have reported anatomical studies and surgical experiences with the EEA to target different areas of the midline skull base, including the CVJ 20-28. Starting from these preliminary experiences, further anatomical studies defined the theoretical (radiological) and practical (surgical) craniocaudal limits of the endonasal route (Fig. 2). Our group, on the basis of the clinical experience gained over 30 anterior procedures, both transoral and transnasal, did the same for the transoral approach 32,33 and compared the reliability of the radiological and surgical lines of the two different approaches. Very recently, a cadaveric study attempted to define, with the aid of neuronavigation (Fig. 3), the upper and lower limits of the endoscopic transoral approach 34. This approach appears more consistent with the global rhinological endoscopic experience of the Otorhinolaryngologist up to C1-C2. The TOA is a ventrally directed approach extending from the inferior third of the clivus to the C2-C3 interspace. It allows the shortest, widest, and most direct access to the CVJ among the approaches to this region 35. Extensions of the approach, sometimes necessary to expose more rostrally located pathologies, carry the risk of numerous permanent comorbidities, especially involving the soft palate, and the need for temporary tracheostomy and a nasogastric feeding tube 20. The need to overcome the impact and significance of these comorbidities has led to the development of potentially less invasive techniques, such as the EEA. As widely demonstrated by numerous comparative anatomical and clinical studies, the endoscope also provides improved rostral exposure, brighter illumination, and closer visualization of the surgical target 35, and can also be used during a TOA as a valid complementary tool in a combined procedure. Nevertheless, a recent systematic review and meta-analysis 37, while demonstrating a statistically significantly increased risk of postoperative tracheostomy after TOA compared with EEA, showed a slight, although not statistically significant, tendency toward higher morbidity/mortality with EEA than with TOA (Fig. 4). In order to clearly define the limits of the TOA, our research group devised a radiological "theoretical" line, the Palatine Inferior dental Arch line (PIA), as a reliable predictor of the maximal superior extension of the transoral approach, and then compared the reliability of the radiological and surgical lines of the two different approaches 33. Very recently, a cadaveric study attempted to define, with the aid of neuronavigation, the upper and lower limits of the endoscopic TOA 34. Starting from our previous experimental volumetric studies 32,33 and other recent contributions, we attempted to experimentally exploit the accuracy provided by neuronavigation to further compare the operative sagittal and axial extensions of the transnasal and transoral corridors. Our observations were consistent with a relevant advantage of the TOA over the EEA in all specimens. In agreement with other clinical and experimental studies reported in the literature, we found several advantages of the TOA over the EEA: a wide working area in terms of both craniocaudal and lateral extension, anatomy more familiar to neurosurgeons, and safer top-down drilling of the clivus and odontoid with better detachment of the ligaments (Fig. 5).
On the other hand, excluding some well-known disadvantages and predictable complications appreciable only in the clinical setting, such as working in a contaminated field, CSF leak management, airway swelling, upper airway obstruction, and velopharyngeal insufficiency, our study confirms the relevance of fixed obstacles to the required retraction, such as the tongue and the teeth. The management of the TOA requires the Otorhinolaryngologist to perform the tracheostomy and to cooperate in the surgical exposure and the final reconstruction of the pharyngeal opening.

ELA
Starting from the 1970s, many surgeons developed and introduced new skull base approaches to lesions of the anterolateral CVJ, with several variations and modifications. Hammon in 1972 and thereafter Heros in 1986 described a true lateral suboccipital approach for vertebral and vertebrobasilar aneurysms 38,39. Heros described the combination of a lateral suboccipital craniotomy, C1 laminectomy, and drilling of the occipital condyle (OC). George described medial mobilization of the VA from C2 to its dural entry point, with ligation of the sigmoid sinus and without condyle drilling. Spetzler, Bertalanffy, and Seeger mobilized the VA from C1 to its dural entry point by drilling the C1 facet, the posterior C1 arch, and the posterior lateral third of the OC. In recent years, the extensive use of tools like the neuroendoscope and neuronavigation has greatly improved the safety and efficacy of this and other skull base approaches, as demonstrated by several cadaveric studies. The ELA is a direct lateral approach to the deep anterior portion of the SCM, behind the internal jugular vein and anterior to the VA. It is generally considered a more aggressive extension of the far lateral approach. The term dates from 1990, when Sen and Sekhar described an alternative way to deal with meningiomas and schwannomas located anteriorly at the CVJ 47. The rationale behind this procedure is to allow gross total resection of lesions with significant lateral extension that would otherwise be inaccessible via an anterior or classic FLA. The ELA involves a greater extent of bony removal, skeletonization of the jugular bulb along with the sigmoid sinus (in the transjugular variant), and, more often, VA transposition. These technical nuances widen the surgical corridor overall, but are inherently associated with a higher rate of morbidity and mortality 48,49. The ELA provides good access to the bone and the extradural anterior and lateral space. It can easily be extended caudally to the cervical spine, and it offers simultaneous control of the VA, the cervical segment of the ICA, the lower cranial nerves, and the sigmoid-jugular complex 50. In the ELA, the muscles are detached from their insertions on the transverse process of the atlas. Great attention should be paid to avoiding damage to the VA, the internal jugular vein, and the spinal nerves, which lie under these muscles. The key point for dissection and control of the VA is to preserve the periosteal sheath surrounding it. Our study further confirms that the ELA allows exposure of the whole odontoid process, the inferior clivus, and the medial surface of the contralateral atlanto-occipital joint. In this surgery, the Otorhinolaryngologist's more confident knowledge of the superficial, middle, and deep layers of the neck makes this alliance absolutely advisable.

Where is the alliance between Neurosurgeons and Otorhinolaryngologists unnecessary?
Transcervical anterior approach (TCA)
Wolinsky described an endoscopic transcervical approach for performing odontoidectomy without traversing the oral cavity 51. A recent cadaveric study explored the feasibility of an endoscope-assisted retropharyngeal approach to the CVJ and clivus following submandibular gland resection 52. The neurosurgeon's knowledge of this region, gained through cervical spine surgery, along with the skill obtained in spine traumatology by screwing odontoid fractures under biplanar fluoroscopy, makes him confident, and no surgical alliance seems to be required for this infrequent surgery.

FLA
The FLA nowadays represents a mainstay for the surgical treatment of intradural pathologies at the ventral CVJ. Since the first descriptions by Heros and George 53, extensive discussion and modifications of this approach have been reported in the literature. Several cadaver studies have demonstrated the use and benefits of the endoscope in the FLA. One study 54 divided the surgical corridors for inserting the endoscope into upper, middle, and lower. Cranial nerves VII and VIII, IX and X, and XII respectively form the roof and floor of the three corridors, which provide access to, and observation of, aspects of the brainstem and posterior circulation by means of a 0° lens (upper and middle corridors) and a 30° lens (inferior corridor). Another cadaver study compared 3D endoscopic and microscopic vision in the FLA after partial condylectomy and resection of the jugular tubercle, and concluded that the 3D endoscopic probe is too large and that surgical maneuverability is significantly hampered. Several authors have reported similar benefits of endoscope use in clinical series; these studies report a significant benefit in the endoscope's ability to identify any tumor adherent to the brainstem or clivus amenable to resection 55. For this approach the Neurosurgeon appears to be quite confident, since it can be considered an extension of the classic, well-known PIFP, but in the park-bench position.

SOA
Occipitocervical fusion (OCF), as well as C1-C2 fusion, is indicated for instability at the CVJ. The numerous surgical techniques, which have evolved over 90 years, as well as the unique anatomical and kinematic relationships of this region, present a challenge to the neurosurgeon. The current standard involves internal rigid fixation by polyaxial screws in the cervical spine, contoured rods, and, eventually, an occipital plate. Such an approach obviates the need for postoperative external stabilization, involves a smaller number of spinal segments, and provides 95-100% fusion rates. New surgical techniques, such as occipital condyle screws or transarticular occipito-condylar screws, address limitations of occipital fixation such as variable lateral occipital bone thickness and dural sinus anatomy. As the C0-C1-C2 complex is the most mobile portion of the cervical spine (40% of flexion-extension, 60% of rotation, and 10% of lateral bending), stabilization leads to a substantial reduction of neck movements. Preoperative assessment of vertebral artery anatomical variations and of the feasibility of screw insertion, as well as visualization with intraoperative fluoroscopy, are necessary. Placement of structural and supplemental bone graft around the decorticated bony elements is an essential step of every OCF procedure, as the ultimate goal of stabilization with implants is to provide immobilization until bony fusion can develop.
This historical neurosurgical approach makes the Neurosurgeon absolutely confident, since it is also required for conventional approaches to posterior cranial lesions.

Future perspectives
In recent years, the surgical armamentarium has been enriched with high-definition 4K endoscope 56 and exoscope 57 systems, which potentially provide a wide viewing angle as well as the high-resolution image quality available with an endoscope, with an optical resolution power equal or superior to that of the conventional operating microscope (OM) 57. In particular, the exoscope is a new surgical tool recently conceived in order to overcome some limitations of the OM and the endoscope. The limitations of the former are mainly ergonomic: the size and weight, the ocular-dependent visualization, the continuous need to refocus because of the short depth of field at high magnification, and the need to continuously readjust the OM and the surgeon's body position in order to preserve a perfect stereoscopic picture. The limitations of the endoscope include a short focal distance and a limited field of view that requires placement of the endoscope in the surgical field, with the shaft reducing the available working space. Overall, these limitations are even more evident in complex and narrow anatomical corridors such as those of the CVJ. Besides classic neuronavigation based on preoperative neuroradiological assessment, the OArm intraoperative neuronavigation and imaging system is also worth mentioning. Intraoperative imaging represents another important upgrade in neurosurgery. For spinal surgery in particular, the introduction of the OArm system has made it possible to improve the safety of instrumentation procedures, on the one hand by allowing much more accurate intraoperative neuronavigation than traditional techniques; on the other, the intraoperative imaging setting allows real-time verification of the effectiveness of the procedure, as in cases of medullary decompression or the correct positioning of arthrodesis systems 58. Compared with fluoroscopy, OArm acquisition not only has the obvious advantage of better definition, with resulting easier screw insertion, but also permits direct and indirect intraoperative assessment of bony and ligamentous anterior CVJ decompression. In two of five cases, the craniocaudal decompression was extended after OArm acquisition because the imaging showed, in an absolutely reliable and anatomically detailed way, that it was suboptimal. In our previous experience with fluoroscopic monitoring of the TOA, the use of Iopamiro as a contrast filler of the surgical cavity allowed a reasonably fair, indirect evaluation of possible residual compression at the CVJ; it does not, however, provide real-time visualization. Finally, converting the intraoperative neuronavigated 3D modality into 2D real-time OArm monitoring is very uncomfortable, owing to the poor working space available for the surgeon (also in the presence of the exoscope) and the complex, time-consuming, and ineffective surgical manoeuvres required. The spreading diffusion of such technologies rests on the personal and institutional skills of both Neurosurgeons and Otorhinolaryngologists, who are increasingly devoted to sharing common objectives and operative tools within a common clinical and experimental strategy.

Conclusions
The present paper confirms the irreplaceable role of interdisciplinary co-working in improving the difficult knowledge of the CVJ.
Anatomical dissection in the training of surgeons, especially when approaching one of the most complex anatomical regions such as the CVJ, is possible only by sharing experience and traditions, and it is of paramount importance when dealing with this region. Accurate, multidisciplinary preoperative evaluation of the best corridor of approach, taking care also of all the possible intraoperative, perioperative, and postoperative problems, is nowadays the mainstay of the best treatment for patients affected by pathologies of the CVJ.
package org.vichtisproductions.focusting.func_debug.view;
import android.annotation.TargetApi;
import android.app.Activity;
import android.content.Context;
import android.os.Build;
import android.util.AttributeSet;
import android.widget.Button;
import android.widget.EditText;
import android.widget.FrameLayout;
import android.widget.TextView;
import org.vichtisproductions.focusting.MainApplication;
import org.vichtisproductions.focusting.R;
import org.vichtisproductions.focusting.func_debug.presenter.IAveragesSetViewPresenter;
import org.vichtisproductions.focusting.model.Stage;
import org.vichtisproductions.focusting.utils.TimePickerDialog;
import javax.inject.Inject;
import timber.log.Timber;
/**
* Created by Renier on 2016/05/07.
*/
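// Debug view that lets testers set average usage stats manually; all UI events are delegated to IAveragesSetViewPresenter (MVP-style).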
public class AveragesSetView extends FrameLayout implements IAveragesSetView {
@Inject
IAveragesSetViewPresenter mPresenter;
private TextView tvStageLabel;
private EditText etUnlocks;
private Button btnSOTPick;
private Button btnSFTPick;
private Button btnTotalSOTPick;
private Button btnTotalSFTPick;
private Button btnSetForCurrentStageDayHour;
private Button btnSetForCurrentStageDay;
private Button btnSetForCurrentStage;
private Activity mActivity;
public AveragesSetView(Context context) {
super(context);
init();
}
public AveragesSetView(Context context, AttributeSet attrs) {
super(context, attrs);
init();
}
public AveragesSetView(Context context, AttributeSet attrs, int defStyleAttr) {
super(context, attrs, defStyleAttr);
init();
}
@TargetApi(Build.VERSION_CODES.LOLLIPOP)
public AveragesSetView(Context context, AttributeSet attrs, int defStyleAttr, int defStyleRes) {
super(context, attrs, defStyleAttr, defStyleRes);
init();
}
@Override
public void updateTitleTextWith(String text) {
tvStageLabel.setText(String.format(getContext().getString(R.string.averages_set_view_stage_label), text));
}
public void setStage(Stage stage) {
mPresenter.setStage(stage);
}
public void setActivity(Activity activity) {
mActivity = activity;
}
public void clearActivity() {
mActivity = null;
}
private void init() {
inflate(getContext(), R.layout.averages_set_view, this);
if (!isInEditMode()) {
MainApplication.from(getContext()).getGraph().inject(this);
tvStageLabel = (TextView) findViewById(R.id.tvStageLabel);
etUnlocks = (EditText) findViewById(R.id.etUnlocks);
btnSOTPick = (Button) findViewById(R.id.btnSOTPick);
btnSFTPick = (Button) findViewById(R.id.btnSFTPick);
btnTotalSOTPick = (Button) findViewById(R.id.btnTotalSOTPick);
btnTotalSFTPick = (Button) findViewById(R.id.btnTotalSFTPick);
btnSetForCurrentStageDayHour = (Button) findViewById(R.id.btnSetForCurrentStageDayHour);
btnSetForCurrentStageDay = (Button) findViewById(R.id.btnSetForCurrentStageDay);
btnSetForCurrentStage = (Button) findViewById(R.id.btnSetForCurrentStage);
btnSOTPick.setOnClickListener(buttonClickListener);
btnSFTPick.setOnClickListener(buttonClickListener);
btnTotalSOTPick.setOnClickListener(buttonClickListener);
btnTotalSFTPick.setOnClickListener(buttonClickListener);
btnSetForCurrentStageDayHour.setOnClickListener(buttonClickListener);
btnSetForCurrentStageDay.setOnClickListener(buttonClickListener);
btnSetForCurrentStage.setOnClickListener(buttonClickListener);
}
}
private OnClickListener buttonClickListener = v -> {
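        // Dispatch on view id: the *Pick buttons open a TimePickerDialog and hand the chosen
        // duration to the presenter; the Set* buttons parse the unlock count first, falling
        // back to 0 on malformed input.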
// Timber.d("Button clicked to " + String.valueOf(v.getId()));
if (v.getId() == R.id.btnSOTPick) {
// Timber.d("It was a btnSOTPick");
TimePickerDialog dialog = new TimePickerDialog(mActivity);
dialog.setTimePickerEventListener(new TimePickerDialog.TimePickerEventListener() {
@Override
public void onCancel() {
}
@Override
public void onTimeSelected(long millis) {
mPresenter.avgSOTPicked(millis);
}
});
// Timber.d("All set, let's show.");
dialog.show(getContext());
} else if (v.getId() == R.id.btnSFTPick) {
TimePickerDialog dialog = new TimePickerDialog(mActivity);
dialog.setTimePickerEventListener(new TimePickerDialog.TimePickerEventListener() {
@Override
public void onCancel() {
}
@Override
public void onTimeSelected(long millis) {
mPresenter.avgSFTPicked(millis);
}
});
dialog.show(getContext());
} else if (v.getId() == R.id.btnTotalSOTPick) {
TimePickerDialog dialog = new TimePickerDialog(mActivity);
dialog.setTimePickerEventListener(new TimePickerDialog.TimePickerEventListener() {
@Override
public void onCancel() {
}
@Override
public void onTimeSelected(long millis) {
mPresenter.totalSOTPicked(millis);
}
});
dialog.show(getContext());
} else if (v.getId() == R.id.btnTotalSFTPick) {
TimePickerDialog dialog = new TimePickerDialog(mActivity);
dialog.setTimePickerEventListener(new TimePickerDialog.TimePickerEventListener() {
@Override
public void onCancel() {
}
@Override
public void onTimeSelected(long millis) {
mPresenter.totalSFTPicked(millis);
}
});
dialog.show(getContext());
}
else if (v.getId() == R.id.btnSetForCurrentStageDay) {
try {
mPresenter.avgUnlocksPicked(Integer.parseInt(String.valueOf(etUnlocks.getText())));
} catch (NumberFormatException e) {
e.printStackTrace();
mPresenter.avgUnlocksPicked(0);
}
mPresenter.setForStageDayClicked();
} else if (v.getId() == R.id.btnSetForCurrentStageDayHour) {
try {
mPresenter.avgUnlocksPicked(Integer.parseInt(String.valueOf(etUnlocks.getText())));
} catch (NumberFormatException e) {
e.printStackTrace();
mPresenter.avgUnlocksPicked(0);
}
mPresenter.setForStageDayHourClicked();
} else if (v.getId() == R.id.btnSetForCurrentStage) {
try {
mPresenter.avgUnlocksPicked(Integer.parseInt(String.valueOf(etUnlocks.getText())));
} catch (NumberFormatException e) {
e.printStackTrace();
mPresenter.avgUnlocksPicked(0);
}
mPresenter.setForStageClicked();
}
};
@Override
protected void onAttachedToWindow() {
super.onAttachedToWindow();
if (mPresenter != null) {
mPresenter.setView(this);
mPresenter.onAttached();
}
}
@Override
protected void onDetachedFromWindow() {
super.onDetachedFromWindow();
if (mPresenter != null) {
mPresenter.onDetached();
mPresenter.clearView();
}
}
@Override
public void setAvgSOTText(String text) {
btnSOTPick.setText(text);
}
@Override
public void setAvgSFTText(String text) {
btnSFTPick.setText(text);
}
@Override
public void setTotalSOTText(String text) {
btnTotalSOTPick.setText(text);
}
@Override
public void setTotalSFTText(String text) {
btnTotalSFTPick.setText(text);
}
}
|
What: Just one quarter after thoroughly disappointing analysts, shares of Monolithic Power Systems (Nasdaq: MPWR) surged as much as 12% on better-than-expected earnings.
So what: Profit checked in at $0.18 per share on a non-GAAP basis, $0.04 better than the consensus estimate, according to Yahoo! Finance. Management also announced a $20 million increase in its active stock repurchase program.
Now what: Buybacks have had the desired effect. Diluted shares outstanding dipped to 36.7 million in Q4, from 37.4 million in the prior-year quarter, adding needed juice to the bottom line. Future repurchases will be paid from Monolithic’s war chest, which included $177 million in cash and short-term investments as of Dec. 31.
That’s the good news. The bad is that Monolithic Power has experienced a massive dropoff in revenue growth, which hasn’t yet tarnished direct competitors Analog Devices (NYSE: ADI) and Texas Instruments (NYSE: TXN). Revenue growth will have to return for investors to realize healthy returns from here. |
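A back-of-envelope check of the buyback math, holding profit fixed (the quarterly net income figure below is an assumption, reverse-engineered from the reported $0.18 non-GAAP EPS and 36.7 million diluted shares):

# Python sketch: EPS sensitivity to the diluted share count alone.
net_income = 0.18 * 36.7e6          # assumed quarterly profit, ~$6.6M

for shares in (37.4e6, 36.7e6):     # prior-year vs. latest diluted share count
    print(f"{shares / 1e6:.1f}M shares -> EPS ${net_income / shares:.3f}")
# 37.4M shares -> EPS $0.177
# 36.7M shares -> EPS $0.180 (roughly a 2% lift from the smaller share count alone)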
from flask import Flask, request, jsonify
import json
import hashlib
from dao.mysql_db import Mysql  # same as in rs_news
from entity.user import User  # same as in rs_news
app = Flask(__name__)
from service.LogData import LogData
log_data = LogData()
from service.test_page import PageSize
page_query = PageSize()
@app.route("/recommendation/get_rec_list", methods=['POST'])
def get_rec_list():
if request.method == 'POST':
req_json = request.get_data()
rec_obj = json.loads(req_json)
        page_num = rec_obj['page_num']  # page number
        page_size = rec_obj['page_size']  # items per page
        user_id = rec_obj['user_id']  # user id
        types = rec_obj['type']  # 4 kinds: domestic / variety shows / movies / recommended
try:
data = page_query.get_data_with_page(page_num, page_size)
print(data)
return jsonify({"code": 0, "msg": "请求成功", "data": data, "user_id": user_id, "type": types})
except Exception as e:
print(str(e))
return jsonify({"code": 2000, "msg": "error"})
@app.route("/recommendation/register", methods=['POST', 'GET'])
def register():
if request.method == 'POST':
req_json = request.get_data()
rec_obj = json.loads(req_json)
user = User()
user.username = rec_obj['username']
user.nick = rec_obj['nick']
user.age = rec_obj['age']
user.gender = rec_obj['gender']
user.city = rec_obj['city']
        user.password = str(hashlib.md5(rec_obj['password'].encode()).hexdigest())  # hash the password (MD5)
try:
mysql = Mysql()
sess = mysql._DBSession()
            if sess.query(User.id).filter(User.username == user.username).count() > 0:  # has the user already registered?
                return jsonify({"code": 1000, "msg": "user already exists"})
sess.add(user)
sess.commit()
sess.close()
result = jsonify({"code": 0, "msg": "注册成功"})
return result
except Exception as e:
print(str(e))
return jsonify({"code": 2000, "msg": "error"})
@app.route("/recommendation/login", methods=['POST'])
def login():
if request.method == 'POST':
req_json = request.get_data()
rec_obj = json.loads(req_json)
username = rec_obj['username']
password = str(hashlib.md5(rec_obj['password'].encode()).hexdigest())
try:
mysql = Mysql()
sess = mysql._DBSession()
res = sess.query(User.id).filter(User.username == username, User.password == password)
if res.count() > 0:
for x in res.all():
data = {"userid": str(x[0])}
info = jsonify({"code": 0, "msg": "登录成功", "data":data})
return info
else:
return jsonify({"code": 1000, "msg": "用户名或密码错误"})
except Exception as e:
print(str(e))
return jsonify({"code": 2000, "msg": "error"})
@app.route("/recommendation/likes", methods=['POST']) # 点赞
def likes():
if request.method == 'POST':
req_json = request.get_data()
rec_obj = json.loads(req_json)
user_id = rec_obj['user_id']
content_id = rec_obj['content_id']
title = rec_obj['title']
try:
mysql = Mysql()
sess = mysql._DBSession()
if sess.query(User.id).filter(User.id == user_id).count() > 0:
if log_data.insert_log(user_id, content_id, title, "likes"):
return jsonify({"code": 0, "msg": "点赞成功"})
else:
return jsonify({"code": 1001, "msg": "点赞失败"})
else:
return jsonify({"code": 1000, "msg": "用户名不存在"})
except Exception as e:
return jsonify({"code": 2000, "msg": "error"})
@app.route("/recommendation/collections", methods=['POST'])
def collections():
if request.method == 'POST':
req_json = request.get_data()
rec_obj = json.loads(req_json)
user_id = rec_obj['user_id']
content_id = rec_obj['content_id']
title = rec_obj['title']
try:
mysql = Mysql()
sess = mysql._DBSession()
if sess.query(User.id).filter(User.id == user_id).count() > 0:
if log_data.insert_log(user_id, content_id, title, "collections"):
                    # if log_data.modify_article_detail('news_detial:' + content_id, 'collections'):  # boost article score
                    return jsonify({"code": 0, "msg": "collection successful"})
else:
return jsonify({"code": 1001, "msg": "收藏失败"})
else:
return jsonify({"code": 1000, "msg": "用户名不存在"})
except Exception as e:
return jsonify({"code": 2000, "msg": "error"})
@app.route("/recommendation/get_likes", methods=['POST'])
def getLikes():
if request.method == 'POST':
req_json = request.get_data()
rec_obj = json.loads(req_json)
user_id = rec_obj['user_id']
try:
data = log_data.get_logs(user_id, 'likes')
print(data)
return jsonify({"code": 0, "data": str(data)})
except Exception as e:
return jsonify({"code": 2000, "msg": "error"})
@app.route("/recommendation/get_collections", methods=['POST'])
def getCollections():
if request.method == 'POST':
req_json = request.get_data()
rec_obj = json.loads(req_json)
user_id = rec_obj['user_id']
try:
data = log_data.get_logs(user_id, 'collections')
print(data)
return jsonify({"code": 0, "data": str(data)})
except Exception as e:
print(e)
return jsonify({"code": 2000, "msg": "error"})
if __name__ == '__main__':
app.run(debug=True, host='0.0.0.0', port=10086, threaded=True) |
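A minimal client-side sketch of the register/login flow against this service (assumes the app above is running locally on port 10086, as configured in __main__; the field values are made up):

import requests

BASE = "http://localhost:10086/recommendation"

# Register a user, then log in with the same credentials.
requests.post(BASE + "/register", json={
    "username": "alice", "nick": "Alice", "age": 30,
    "gender": "f", "city": "Beijing", "password": "s3cret",
})
resp = requests.post(BASE + "/login",
                     json={"username": "alice", "password": "s3cret"})
print(resp.json())  # on success: {"code": 0, "msg": "login successful", "data": {"userid": "..."}}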
Pregnane X Receptor Activation Triggers Rapid ATP Release in Primed Macrophages That Mediates NLRP3 Inflammasome Activation

The pregnane X receptor (PXR) is a ligand-activated nuclear receptor that acts as a xenobiotic sensor, responding to compounds of foreign origin, including pharmaceutical compounds, environmental contaminants, and natural products, to induce transcriptional events that regulate drug detoxification and efflux pathways. As such, the PXR is thought to play a key role in protecting the host from xenobiotic exposure. More recently, the PXR has been reported to regulate the expression of innate immune receptors in the intestine and to modulate inflammasome activation in the vasculature. In the current study, we report that activation of the PXR in primed macrophages triggers caspase-1 activation and interleukin-1 release. Mechanistically, we show that this response is nucleotide-binding oligomerization domain, leucine-rich repeat, and pyrin domain-containing 3 (NLRP3)-dependent and is driven by the rapid efflux of ATP and P2X purinoceptor 7 activation following PXR stimulation, an event that involves pannexin-1 gating and is sensitive to inhibition of Src-family kinases. Our findings identify a mechanism whereby the PXR drives innate immune signaling, providing a potential link between xenobiotic exposure and the induction of innate inflammatory responses.

Introduction
The pregnane X receptor (PXR) is a xenobiotic sensor that plays a key role in drug metabolism by regulating the expression of genes that encode enzymes responsible for drug detoxification and efflux. As a member of the nuclear receptor (NR) superfamily, the PXR acts as a ligand-activated transcription factor, regulating gene expression in concert with its heterodimeric binding partner, the retinoid X receptor. In contrast to other NRs, the PXR's ligand-binding domain exhibits a large flexible pocket that accommodates the binding of a variety of structurally unique ligands, including rifamycin antibiotics, pharmaceutical compounds, natural compounds, and contaminants of environmental origin (e.g., bisphenol A, organochlorine pesticides) (Chang and Waxman, 2006; Chang, 2009). The PXR is highly expressed in the liver and in regions of the small and large intestine, and its role in regulating the host's response to exogenous chemicals at these sites has been well characterized, given their exposure to high concentrations of exogenous ligands and xenobiotics. In addition, the PXR has been shown to regulate tissue inflammation through a reciprocal interaction with nuclear factor κ light chain enhancer of activated B cells (NF-κB) (Mencarelli et al., 2011). Indeed, we, and others, have reported that PXR agonists inhibit the release of inflammatory mediators from hepatocytes and intestinal epithelial cells (Mencarelli et al., 2011) by inhibiting NF-κB-dependent signaling events, and can afford protection in experimental models of hepatic and intestinal inflammation (Dou et al., 2013). The PXR can also regulate the expression/function of innate immune pattern recognition receptors within the intestinal epithelium, an effect that contributes to the proper maintenance of intestinal mucosal homeostasis and barrier function. Beyond the gastrointestinal tract, the PXR may contribute to the regulation of inflammation in other cell types (Casey and Blumberg, 2012). In the context of innate immunity, in contrast to its reported anti-inflammatory effects, Wang et al.
(2014a) found that stimulation of the PXR in cultured vascular endothelial cells enhanced the expression of a number of innate immune receptors, including Toll-like receptors 2, 4, and 9, as well as nucleotide-binding oligomerization domain (NOD)-like receptor family members nucleotide-binding oligomerization domain, leucine-rich repeat, and CARD domain-containing 1, and nucleotide-binding oligomerization domain, leucine-rich repeat, and pyrin domain-containing 3 (NLRP3). Furthermore, endothelial cells treated with PXR agonists displayed features of NLRP3 inflammasome activation (2014a). The NLRP3 inflammasome is an innate immune signaling complex that mediates the responses to a variety of pathogen-associated molecular patterns (PAMPs) and endogenous danger-associated molecular patterns (DAMPs) (). The NLRP3 inflammasome's activation following PAMP exposure and the induction of the ensuing innate immune response play key roles in host defense against viral, bacterial, and fungal pathogens (), whereas the recognition of DAMPs and subsequent activation of the NLRP3 inflammasome play an important role in the initiation of sterile inflammatory responses following tissue damage. Interestingly, several studies have reported the independent contributions of the PXR (Wang et al., 2014b) and NLRP3 inflammasome activation () in the pathogenesis of acute liver injury caused by sterile inflammation. In this scenario, the interplay between direct hepatocyte damage, DAMP release, and activation of resident macrophages is thought to contribute to the sterile inflammatory response that propagates acute liver damage (). Although the response of the hepatocyte has been the focus of much research, the role of PXR signaling and the NLRP3 inflammasome within the macrophage, as a driver of inflammation, has not been addressed. In the current study, we sought to test the hypothesis that the PXR plays a critical role in the macrophage by modulating NLRP3 inflammasome activation. In this study, we report that exposing primed human or mouse macrophages to their respective PXR agonists triggers caspase-1 activation and interleukin (IL)-1β secretion through an NLRP3-dependent mechanism. PXR-induced NLRP3 inflammasome activation was abolished by apyrase and selective inhibition of the P2X purinoceptor 7 (P2X7 receptor). Lastly, PXR ligands triggered a rapid and significant release of ATP, an effect that is dependent on pannexin-1 and Src kinase activation. Materials and Methods. Reagents. PXR Agonists. For experiments in mouse macrophages, the rodent-selective PXR agonist pregnenolone 16α-carbonitrile (PCN; Sigma-Aldrich Canada, Oakville, Ontario, Canada) was prepared as described previously. For experiments in human macrophages, the human-selective PXR agonists rifaximin and SR12813 (Sigma-Aldrich Canada) were dissolved in sterile dimethylsulfoxide and added to culture media to attain the final experimental concentrations. As a vehicle control, identical volumes of dimethylsulfoxide were added and did not exceed a concentration of 1% v/v in media. The addition of exogenous ATP (5 mM; Sigma-Aldrich Canada) was used as a positive control in all experiments assessing inflammasome activation, as we have done previously (). Mouse Studies. Peritoneal macrophages were isolated from Nlrp3-/- and PXR-/- mice and their littermate wild-type counterparts (male, 8-10 weeks of age; all bred in house) 48 hours after receiving an i.p.
injection of 4% thioglycollate (BD Biosciences, San Jose, CA), as we have done previously (). Isolated macrophages were plated in complete RPMI media at 5 × 10^5 cells/well of a 24-well plate overnight and stimulated with 100 ng/ml ultra-pure lipopolysaccharide (LPS; Invivogen) in serum-free Opti-MEM for 30 minutes before challenge. For experiments with knockout mice, littermates were used as the wild-type control group for all experiments to control for potential microbiota-dependent differences in phenotype. All studies were approved by the University of Calgary's Health Sciences Animal Care Committee (protocol AC15-0181). All approved activities conform to the guidelines and regulations for laboratory animal use set forth by the Canadian Council for Animal Care. Assessing Inflammasome Activation. Western Blots. Prior to performing inflammasome activation experiments, isolated macrophages were pulsed with ultra-pure LPS (100 ng/ml for 30 minutes; Invivogen/Cedarlane). Phorbol 12-myristate 13-acetate (PMA)-differentiated or LPS-pulsed mouse peritoneal macrophages were treated with their respective PXR agonists. Following the designated treatment period, culture supernatants were collected, cells were washed with ice-cold phosphate-buffered saline, and cell lysates were isolated following incubation of the cells with lysis buffer (150 mM NaCl, 20 mM Tris, pH 7.5, 1 mM EDTA, 1 mM EGTA, 1% Triton X-100, with protease inhibitor cocktail [cOmplete Mini] and phosphatase inhibitor cocktail [PhosSTOP]; Roche/Sigma-Aldrich Canada). Total protein was quantified using the Precision Red Advanced Protein Assay (Cytoskeleton/Cedarlane, Burlington, Ontario, Canada), and sample protein concentration was equalized. Culture supernatant and cell lysate samples were resolved, transferred to polyvinylidene difluoride membranes (0.2-μm pores; Bio-Rad Laboratories, Mississauga, Ontario, Canada), and blotted with the following antibodies: anti-caspase-1 (sc-622, sc-56036, and sc-392736; Santa Cruz Biotechnology, Dallas, TX). Densitometry was performed using ImageJ, and cleaved caspase-1 was expressed as a percentage of pro-caspase-1. Assessing ATP Release. To characterize the mechanism by which PXR agonists triggered inflammasome activation, in some experiments, PMA-differentiated or LPS-pulsed mouse peritoneal macrophages were treated with their respective PXR agonists, culture supernatants were collected, and ATP was quantified using the CellTiter-Glo Luminescent Cell Viability Assay (Promega North America, Madison, WI), as per the manufacturer's instructions and described previously (). Statistical Analysis. All data were assessed for distribution using the D'Agostino-Pearson normality test prior to statistical analysis using GraphPad Prism. Multiple comparisons of parametric data were accomplished using an analysis of variance, followed by Tukey's post hoc test. For nonparametric data, or experiments with small sample sizes (N < 5), a Kruskal-Wallis test was used, followed by a Mann-Whitney test with a Bonferroni correction for multiple comparisons. In all experiments, N denotes individual experiments performed in different cell passages or cells derived from unique animals. Results. PXR Agonists Trigger Caspase-1 Activation and IL-1β Secretion from Macrophages in an NLRP3-Dependent Manner.
Recent reports suggest that the PXR can regulate NLRP3 inflammasome activity in cultured vascular endothelial cells (2014a), but this has not been assessed in macrophages, the prototypical model for innate immune signaling. To test the hypothesis that stimulation of the PXR triggers NLRP3 inflammasome activation in primed macrophages, we first treated LPS-primed peritoneal macrophages or PMA-differentiated THP-1 cells with their respective species-specific PXR ligands at concentrations previously reported to elicit selective responses in other cell types (). In primed mouse macrophages, treatment with the rodent-specific PXR agonist PCN (for 6 hours) was able to trigger the release of IL-1β (Fig. 1A). In human macrophages, stimulation with two structurally unique selective human PXR agonists (SR12813 or rifaximin; for 6 hours) also triggered significant IL-1β secretion (Fig. 1, B and C). Because antagonists of the human and mouse PXR lack specificity and exhibit off-target effects, we used macrophages isolated from PXR-/- mice (Fig. 3A) to validate the role of this receptor in the observed NLRP3 inflammasome responses. In support of our hypothesis, LPS-primed peritoneal macrophages isolated from PXR-/- mice did not exhibit caspase-1 activation or secrete IL-1β in response to the selective mouse PXR agonist PCN (10 and 100 μM; Fig. 3, B-D), highlighting the role of the PXR in the activation of the inflammasome within macrophages. PXR-Mediated NLRP3 Inflammasome Activation Does Not Involve ROS or Intracellular Ca2+. There are a variety of mechanisms that contribute to NLRP3 inflammasome activation, including, but not limited to, the generation of reactive oxygen species (ROS), mobilization of intracellular Ca2+, and ATP-dependent K+ efflux via activation of the P2X7 receptor. In a human colon cancer cell line, the expression of the PXR enhanced sensitivity to oxidative stress that was associated with increased agonist-induced ROS production (). To assess the role of ROS in the activation of the NLRP3 inflammasome by the PXR in macrophages, we used treatment with the broad-spectrum antioxidant diphenyleneiodonium (DPI; 100 μM) (). In PMA-differentiated THP-1 cells, DPI pretreatment had no effect on SR12813- or rifaximin-induced IL-1β secretion (Supplemental Fig. 1A), suggesting that ROS production does not mediate PXR-induced inflammasome activation within macrophages. Activation of the vitamin D receptor (VDR), a nuclear receptor closely related to the PXR, has been linked with increasing intracellular Ca2+ concentrations via an inositol 1,4,5-trisphosphate-dependent process () in epithelial cells, as well as activation of the NLRP3 inflammasome in macrophages (). Furthermore, Ainscough et al. reported that intracellular Ca2+ and its interaction with calmodulin were required for nigericin-induced IL-1β secretion in macrophages. Fig. 1 legend (fragment): [PCN triggers caspase-1 activation,] along with significant IL-1β release from wild-type (WT) peritoneal macrophages (C), effects that were absent in macrophages isolated from Nlrp3-/- mice (A-C). ATP (5 mM; positive control), SR12813 (SR, 4 μM), and rifaximin (Rifx, 5 and 10 μM) trigger caspase-1 activation, along with significant IL-1β release (F) from wild-type PMA-differentiated THP-1 cells, effects that were absent in NLRP3-deficient THP-1 cells (D-F). CL, cell lysate; SN, supernatant. All outcomes were measured after a 6-hour treatment period. N = 3-6; *P < 0.05 for indicated comparison generated by analysis of variance and Tukey's post hoc test; Western blots are representative of three separate experiments.
To test whether intracellular Ca2+ plays a role in NLRP3 inflammasome activation mediated by the PXR, macrophages were pretreated with BAPTA-AM (10 μM), a cell membrane-permeable Ca2+ chelating agent. As reported previously, BAPTA-AM significantly reduced ATP-induced IL-1β secretion (), but had no effect on the activation of the inflammasome by the PXR agonists SR12813 and rifaximin (Supplemental Fig. 1B), suggesting that mobilization of intracellular Ca2+ does not play a role in PXR-mediated NLRP3 inflammasome activation. Activation of the PXR Triggers ATP Release, Which Mediates NLRP3 Inflammasome Activation. Reports have characterized the activation of the NLRP3 inflammasome through the autocrine/paracrine signaling of ATP released into the extracellular environment via pannexin-1 (). Although there is no reported role for the PXR in inducing the release of ATP, bile acids, agents known to activate a variety of nuclear receptors, including the PXR, have been shown to induce ATP release in liver cells (). Furthermore, activation of the farnesoid X receptor (FXR), a nuclear receptor closely related to the PXR, by bile acids induces rapid release of ATP in pancreatic cell lines (). Taken together, we next sought to test the hypothesis that PXR-mediated NLRP3 inflammasome activation could be driven by the release of ATP and subsequent activation of P2X7. First, to identify a role for ATP in our system, we cotreated PMA-differentiated THP-1 cells with apyrase (30 U/ml) to break down extracellular ATP (). As in our previous experiments, SR12813 and rifaximin triggered IL-1β secretion, but this response was abolished in macrophages cotreated with apyrase (Fig. 4A). As extracellular ATP is known to activate the NLRP3 inflammasome by triggering K+ efflux through the P2X7 receptor (), we next treated macrophages with oATP (100 μM), a selective P2X7 receptor antagonist (). As with apyrase cotreatment, oATP significantly attenuated IL-1β secretion in response to PXR agonists (Fig. 4B). Importantly, neither apyrase nor oATP had any effect on cell viability over the course of our experiments (Supplemental Fig. 2). Taken together, these data suggest that the PXR-mediated activation of the NLRP3 inflammasome involves the release of ATP and its subsequent activation of the P2X7 receptor. PXR Stimulation Triggers the Rapid Release of ATP from Macrophages via Pannexin-1. To further strengthen the link between the PXR, ATP release, and NLRP3 inflammasome activation, we directly quantified ATP release from mouse peritoneal macrophages (wild-type versus PXR-/-) and PMA-differentiated THP-1 cells at different time points. In macrophages isolated from wild-type mice, the rodent-specific PXR agonist PCN triggered rapid and significant extracellular ATP release that could be detected within 15 seconds and was sustained for up to 60 seconds before tapering off after 300 seconds (5 minutes). ATP release following PCN treatment was completely absent in cells isolated from PXR-/- mice (Fig. 5A). Similar ATP release responses were observed in THP-1 cells treated with either SR12813 or rifaximin (Fig. 5B). Collectively, these data indicate that the activation of the PXR triggers a rapid release of ATP that fits the kinetics reported for other NLRP3 inflammasome activators ().
The transmembrane channel pannexin-1 has been implicated in ATP release and NLRP3 inflammasome activation in macrophages (), and thus we sought to determine whether this mechanism was at play following PXR stimulation in our studies. First, to assess the role of pannexin-1 in PXR-driven ATP release, PMA-differentiated THP-1 cells were exposed to a pannexin-1-blocking peptide (10Panx; 400 μM) () or a scrambled peptide control (scPanx; 400 μM) prior to stimulation with SR12813 or rifaximin. PXR activation triggered rapid and significant ATP release in the scrambled peptide-treated cells (Fig. 6A), a response that was significantly attenuated by pannexin-1 channel blockade with 10Panx (Fig. 6A), without affecting cell viability (Supplemental Fig. 2). Although nongenomic roles for other NRs have been described, little is known about how the PXR regulates intracellular signaling processes within the cytosol. Others have reported the involvement of Src-family kinases (SFKs) in the cytosolic effects of NRs (Buitrago and Boland, 2010). Interestingly, receptor-mediated gating of pannexin-1 has been shown to involve SFK-dependent phosphorylation events (). To assess the role of SFKs in PXR-driven ATP release and NLRP3 inflammasome activation, we pretreated PMA-differentiated THP-1 cells with the Src-kinase inhibitor PP2 (10 μM) () and exposed them to PXR agonists. Selective inhibition of SFKs abolished the rapid ATP release triggered by SR12813 and rifaximin (Fig. 6B). Taken together, these data suggest that PXR agonists trigger ATP efflux through an SFK- and pannexin-1-dependent process. Discussion. In the current study, we found that stimulation of the PXR in primed macrophages triggered the activation of the NLRP3 inflammasome, resulting in caspase-1 activation and IL-1β secretion. Mechanistically, PXR activation-induced NLRP3 inflammasome signaling was reliant on pannexin-1-dependent ATP release and subsequent stimulation of the P2X7 receptor, a well-characterized driver of inflammasome activation (). In this process, PXR-dependent ATP release occurs as early as 15 seconds following receptor stimulation and involves SFK signaling, suggesting a cytosolic function for the PXR in macrophages. Altogether, our data support a novel role for the PXR in triggering a host-defense response in macrophages, linking xenobiotic sensing and innate immunity in a system that may protect against xenobiotics and other chemical contaminants of foreign origin. The PXR is a member of the nuclear receptor superfamily, which includes such members as the FXR, VDR, glucocorticoid receptor, and retinoid X receptor (). The PXR, which is highly expressed in the liver and in intestinal epithelial cells, is best characterized for its ability to regulate the expression of enzymes involved in drug metabolism, detoxification, and excretion (). The PXR is also expressed in a variety of immune cells, including T cells, macrophages, and dendritic cells (), and its signaling has been reported to modulate their function through mechanisms that are less understood (Casey and Blumberg, 2012). More recently, the PXR's direct and indirect regulation of innate immune signaling has been reported in a number of systems. In a seminal report, Venkatesh et al. found that the PXR functions as a negative regulator of Toll-like receptor gene expression in the intestinal epithelium, thereby indirectly modulating innate immune signaling in the intestinal mucosa.
These data build on a body of literature that suggests that the PXR can negatively regulate NF-κB-dependent inflammatory signaling in a variety of cell types (Xie and Tian, 2006). In contrast to the notion that the PXR exhibits solely anti-inflammatory effects, Wang et al. (2014a) reported that the PXR could enhance NLRP3 inflammasome activity in vascular endothelial cells. The authors found that PXR stimulation upregulated the expression of NLRP3 and pro-IL-1β, and that prolonged stimulation triggered caspase-1 activation and IL-1β processing indicative of NLRP3 inflammasome activation. Although the kinetics of activation reported in this study do not conform to the traditional view of the NLRP3 inflammasome as an expeditious effector, our data suggest that PXR activation can trigger immediate ATP efflux from macrophages, a prerequisite for inflammasome activation in response to a variety of stimuli (). Indeed, the notion that a nuclear receptor can activate intracellular signaling events that culminate in rapid cellular responses is not without precedent. For instance, activation of the FXR by bile acids induces rapid release of ATP in pancreatic cell lines (). Tulk et al. also reported that stimulation of the VDR in primed macrophages triggered rapid NLRP3 inflammasome activation and IL-1β release. Fig. 5. PXR agonists trigger rapid and significant ATP release from mouse and human macrophages. (A) PCN, a mouse PXR agonist, triggers rapid and significant ATP release from LPS-pulsed peritoneal macrophages isolated from wild-type mice, an effect that is absent in cells isolated from PXR-/- mice. N = 3; *P < 0.05 for wild-type (WT) PCN-treated macrophages compared with naive and PXR-/- cells, generated by analysis of variance and Tukey's post hoc test. (B) Human PXR agonists, rifaximin (Rifx) and SR12813 (SR), trigger rapid and significant ATP release from PMA-differentiated THP-1 cells. N = 3; *P < 0.05 for indicated comparison, generated by analysis of variance and Tukey's post hoc test. Our data suggest PXR stimulation involves the rapid gating of pannexin-1 through an SFK-dependent mechanism, resulting in ATP efflux and subsequent P2X7 receptor activation to trigger NLRP3 inflammasome activation. Although our findings support a role for the PXR in initiating cell signaling that activates the NLRP3 inflammasome to induce IL-1β processing and release, others have reported contrasting observations. Sun et al. reported that pretreating hepatocytes with PXR agonists attenuated LPS-induced IL-1β release. Furthermore, a recent report by Wang et al. described a model wherein statins inhibit inflammasome activity in vascular endothelial cells through the PXR-dependent inhibition of NF-κB-driven NLRP3 gene transcription. Some clarity could be provided to these disparities by interpreting them in the context of the temporal nature of inflammasome activation. NLRP3 inflammasome output (i.e., IL-1β processing and release) requires two signals. The first stimulus, termed signal 1, often involves Toll-like receptor-driven NF-κB activation to prime the cells, inducing the expression of the inflammasome components, including NLRP3 and pro-IL-1β (). In order for a functional NLRP3 inflammasome to form, a second signal (also termed signal 2) is required, which usually takes the form of a DAMP or PAMP and results in NLRP3 oligomerization and caspase-1 activation, culminating in IL-1β processing and release ().
The wealth of data implicating the PXR as a negative regulator of NF-κB signaling suggests its inhibitory effect on IL-1β release may be due to its inhibition of signal 1 (i.e., inhibiting the induction of NLRP3 and pro-IL-1β expression). Indeed, Luo et al. reported that pretreating macrophages with baicalein, an agent reported to activate the PXR (), attenuated LPS-induced NLRP3 and pro-IL-1β expression through its inhibition of NF-κB signaling. In this context, baicalein's inhibition of signal 1 attenuated subsequent ATP-induced IL-1β secretion (). It is important to highlight that in our studies we assessed the impact of PXR activation in primed macrophages that had already received signal 1. Thus, in the context of a primed cell that exhibits abundant expression of NLRP3 and pro-IL-1β, PXR stimulation acts as signal 2, gating pannexin-1 to allow the efflux of ATP, which culminates in caspase-1 activation and IL-1β release (Fig. 7). Although the functional impact of the PXR's modulation of NLRP3 inflammasome activation has yet to be elucidated, xenobiotic- and endobiotic-sensing mechanisms are thought to add an additional level of defense in the gastrointestinal tract of multicellular organisms (Dussault and Forman, 2002). For example, Caenorhabditis elegans upregulates xenobiotic response genes and initiates avoidance behaviors in the presence of pathogens and/or specific pathogenic factors in a process believed to enhance survival in the context of infection or exposure to environmental contaminants (Melo and Ruvkun, 2012). Mechanistically, these responses are mediated by a family of nuclear receptors that exhibit functional similarities to the mammalian PXR (). As a ligand-activated receptor, the PXR's flexible binding domain allows a variety of receptor-ligand interactions to occur (Chang and Waxman, 2006; Chang, 2009). Thus, the interplay between xenobiotics and innate immune signaling through the PXR may be broad-reaching. Fig. 7. Proposed model for the events that link PXR activation to NLRP3 inflammasome activation and IL-1β release. Ligation of the PXR (and heterodimerization with the retinoid X receptor) triggers the release of intracellular ATP through pannexin-1 channels. Extracellular ATP then binds to the P2X7 receptor, an event that prompts the assembly of the NLRP3 inflammasome and subsequent activation of caspase-1. Caspase-1 then cleaves pro-IL-1β into its active, secreted form, IL-1β. Furthermore, the impact of endogenous PXR ligands of microbial origin and the regulation of intestinal mucosal inflammasome signaling require further attention. Ultimately, our findings suggest the existence of a complex interplay between xenobiotic-sensing mechanisms and innate immunity that may function as a conserved mechanism to protect the host from exposure to chemical agents of foreign origin. Additional work will be required to determine the functional role for this interplay in the context of health and disease.
// I128Hash hashes a string to an i128 database value, often used as an index for a string in a table.
// It is the most-significant 16 bytes in big-endian of a sha1 hash of the provided string, returned as a hex-string.
// Concretely: the sha1 digest is byte-reversed, the first 8 hex characters are dropped, and the remaining 32 hex characters (16 bytes) are returned with a "0x" prefix (requires "crypto/sha1" and "encoding/hex").
func I128Hash(s string) string {
	sha := sha1.New()
	_, err := sha.Write([]byte(s)) // hash.Hash.Write never returns an error; checked defensively
	if err != nil {
		return ""
	}
	return "0x" + hex.EncodeToString(flip(sha.Sum(nil)))[8:]
}

// flip reverses a byte slice; presumably defined elsewhere in the original
// repository, reproduced here so the snippet is self-contained.
func flip(b []byte) []byte {
	for i, j := 0, len(b)-1; i < j; i, j = i+1, j-1 {
		b[i], b[j] = b[j], b[i]
	}
	return b
}
As the Fire head into the final game of their three-match home stretch at Toyota Park, they stand as the only Eastern Conference team above the playoff line with a negative goal differential. Saturday's visiting opponents, Lee Nguyen and the New England Revolution, sit just behind the Fire in the conference table, only one point back.
In other words, league standings are quite fragile at the moment, and Chicago’s uncharacteristic (and very possibly momentary) placement near the top of the Eastern Conference standings could be undone by the end of this MLS weekend. Even as an optimistic start to the season has lent an increasing sense of hope to the Fire’s playoff picture, only a three-point result from Saturday’s match will keep anxious fans at least somewhat comfortable before the club heads out on a three-match away stint.
That bridge will be crossed next week, however, as this Saturday will see two very evenly matched teams take the field at Toyota Park and battle out what’s looking to be an MLS classic.
Trifectas
A few weeks ago, the Revolution’s Lee Nguyen subtly bragged about his club’s dangerous offensive weapon, branding himself along with teammates Juan Agudelo and Kei Kamara as a “triangle partnership” following the trio’s four goals and one assist in their home opener. Despite the undoubted chemistry among that trifecta, the Fire are operating with a “triangle partnership” of their own.
Dax McCarty has looked sharp all season. After the addition of Bastian Schweinsteiger, the Fire’s last two games have proven that the midfield duo can do some serious damage. Factor in last week’s game-clinching goal from one Nemanja Nikolic, who is finally beginning to find his rhythm at striker, and the trio have accounted for four of the last five consecutive Fire goals.
Defense
Despite the Fire’s below-zero goal difference, goalkeeper Jorge Bava has two shutouts to his name so far this season, including last week’s match, when his stoppage-time save secured a 1-0 Chicago victory. That game followed a 2-2 draw while hosting Columbus, a single-point result that was happily welcomed after the Fire had been shut out by Atlanta 4-0 just two weeks earlier.
Yes, the Fire’s defensive performances have largely lacked anything positively noteworthy all season, but the last three matches show improvement. Furthermore, New England’s Kei Kamara – who, by the way, led his team in scoring last season – was held to a zero stat line in both goals and assists in all three of the Revs’ contests with the Fire in 2016.
A Shift
In Bruce Arena’s takeover of the USMNT, Lee Nguyen was left off the official roster heading into two “must-win” games against Honduras and Panama. Perhaps this is telling of Nguyen’s place in New England as well, a side that missed last season’s playoffs after three straight prior appearances.
Meanwhile, Chicago Fire midfielder Dax McCarty has filled Nguyen’s empty spot on the national team. It could be said that McCarty’s trade to the Fire from his captaincy at New York has finally given head coach Veljko Paunovic the central building block he needed to build his team around. Although usual scoring machine David Accam has been relatively quiet as of late, McCarty’s growing chemistry with German star Schweinsteiger, combined with his proven facilitation of dangerous goal-scoring passes, has propelled the Fire into a palpable playoff position the Men in Red haven’t seen in years.
Final Notes: It’s worth mentioning that the Fire have taken at least a point from the Revs in five straight meetings; moreover, Chicago has lost only one of their last 19 matches at home, including US Open Cup fixtures.
The game kicks off at 4 p.m. this Saturday at Toyota Park.
The number of asylum seekers who are crossing into Canada on foot has been steadily increasing, but the RCMP has yet to charge anyone with illegal entry into Canada this year, federal officials said.
The asylum seekers are arrested as they enter Canada, at which point the RCMP conducts checks to see if they are engaged in criminal acts such as trafficking.
So far this year, none of the hundreds of asylum seekers who have come to Canada have been charged with illegal entry, and they were all transferred to the Canada Border Services Agency. Their status will now be determined by the Immigration and Refugee Board of Canada, federal officials said at a background media briefing on Thursday.
Officials added that it is too early to determine whether the levels of asylum seekers this year will go beyond normal fluctuations, although they insisted they are closely monitoring the situation. One official said it would be "speculative" to state that the coming arrival of spring will lead to sharp rises in numbers this year.
Still, federal officials confirmed that numbers are on the rise. Between Jan. 1 and Feb. 21 of this year, the federal government dealt with nearly 4,000 asylum cases, compared to 2,500 in the same time frame in 2016. So far this year, 435 people were arrested at the border by the RCMP before being transferred to the CBSA.
The federal officials, who spoke on condition that they would not be named, represented the key agencies involved in border issues, namely the RCMP, CBSA and Immigration, Refugees and Citizenship Canada.
They pointed out there have historically been large variations in the arrival of asylum seekers in Canada. In 2001, for example, there were nearly 45,000 cases, compared to 10,400 in 2013. Last year, the officials said Canada dealt with 24,000 asylum claims, including 2,500 people who were intercepted by the RCMP.
"The government of Canada has always managed these fluctuations," a federal official said.
A number of the recent claimants said they were fleeing a climate of intolerance in the United States and the threat of deportation stemming from U.S. President Donald Trump's statements on illegal immigration.
Canadian officials said large numbers of recent asylum seekers have come from Somalia, Djibouti and the Middle East, in addition to countries such as Romania. The largest increase in illegal border crossings has been seen in Quebec, although there have been dramatic stories of people crossing into Manitoba in frigid and life-threatening conditions.
One official said that Canada "does not encourage" anyone to illegally cross the border.
Federal officials said the asylum seekers often arrive in Canada with legal visas from the United States, adding that such facts will be considered in the determination of their status in Canada. The process will take between four and 12 months.
For now, the government has gathered biometric data on all individuals and conducted health assessments before releasing them.
Ottawa still considers the U.S. a "safe third country," as defined by an agreement that requires all asylum claimants to seek protection in the first safe country in which they arrive. However, the agreement "does not apply when someone enters Canada illegally between designated ports of entry," the federal government said. |
# appengine/findit/waterfall/update_analysis_with_flake_info_pipeline.py
# Copyright 2017 The Chromium Authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
from collections import defaultdict
from google.appengine.ext import ndb
from gae_libs.pipeline_wrapper import BasePipeline
from model import result_status
from model.wf_analysis import WfAnalysis
from waterfall import swarming_util
def _GetFlakyTests(task_results):
flaky_failures = defaultdict(list)
for step, step_task_results in task_results.iteritems():
flaky_tests = step_task_results[2]
if flaky_tests:
flaky_failures[step].extend(flaky_tests)
return flaky_failures
@ndb.transactional
def _UpdateAnalysisWithFlakeInfo(
master_name, builder_name, build_number, flaky_tests):
if not flaky_tests:
return False
analysis = WfAnalysis.Get(master_name, builder_name, build_number)
if not analysis or not analysis.result:
return False
all_flaked = swarming_util.UpdateAnalysisResult(analysis.result, flaky_tests)
if all_flaked:
analysis.result_status = result_status.FLAKY
analysis.put()
return True
class UpdateAnalysisWithFlakeInfoPipeline(BasePipeline):
"""A pipeline to update analysis with flake info."""
# Arguments number differs from overridden method - pylint: disable=W0221
def run(
self, master_name, builder_name, build_number, *task_results):
"""
Args:
master_name (str): The master name.
builder_name (str): The builder name.
build_number (str): The build number.
      task_results: Flattened (step_name, step_task_result) pairs from the swarming tasks.
"""
flaky_tests = _GetFlakyTests(dict(task_results))
_UpdateAnalysisWithFlakeInfo(
master_name, builder_name, build_number, flaky_tests)
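# For context, a hypothetical sketch of the data this pipeline consumes. The
# tuple layout follows _GetFlakyTests, which reads the flaky-test list from
# index 2 of each step's task result; the first two slots are placeholders,
# since their exact contents are not shown in this file.
#
#   task_results = (
#       ('browser_tests', ('<task_id>', '<task_state>', ['SuiteA.test1'])),
#       ('unit_tests', ('<task_id>', '<task_state>', [])),
#   )
#
#   _GetFlakyTests(dict(task_results)) == {'browser_tests': ['SuiteA.test1']}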
|
/*
Problem: Ural1319
Algorithm: None
Time: O()
Memory: O()
Detail: Simple
Coded by [BUPT]AkemiHomura
*/
#include <cstdio>
#include <cstring>
using namespace std;
const int MaxN = 100;
int m[101][101], cnt;
int N;
// Fill the N x N board in the order required by the problem, moving down
// the diagonal and wrapping when an edge is reached.
void solve(int r, int c)
{
    m[r][c] = ++cnt;
    if (cnt == N*N) return;
    if (r == N) {solve(N+2-c, 1); return;} // hit the bottom edge: restart from column 1
    if (c == N) {solve(1, N-r); return;}   // hit the right edge: restart from row 1
    solve(r+1, c+1);                       // otherwise continue down the diagonal
}
int main()
{
scanf("%d", &N);
solve(1, N);
for (int i = 1; i <= N; ++i)
{
for (int j = 1; j < N; ++j)
printf("%d ", m[i][j]);
printf("%d", m[i][N]);
puts("");
}
return 0;
}
|
Early diagnosis of the multiple endocrine neoplasia type 2 syndrome: consensus statement Abstract. The diagnosis of medullary thyroid carcinoma by biochemical and genetic testing is possible in families with multiple endocrine neoplasia type 2. At an early stage, total thyroidectomy usually cures the patient. As the clinical penetrance of the autosomal dominantly transmitted multiple endocrine neoplasia type 2 gene is not complete, family screening is indicated for every new patient who presents with apparently sporadic medullary thyroid carcinoma. Problems related to a screening programme and early diagnosis have led the members of the European Community Concerted Action: Medullary Thyroid Carcinoma group to formulate a consensus on biochemical and genetic screening. For biochemical screening, measurement of basal and pentagastrin- and/or calcium-stimulated serum levels of calcitonin by radioimmunoassay is essential, starting at the age of three and continuing annually until 35 years of age. Furthermore, annual screening for pheochromocytoma by measuring the urinary excretion of catecholamines and for hyperparathyroidism by serum calcium determination is indicated. Genetic screening using linked markers can be done with 95% accuracy in informative families when DNA is available from at least two family members proven to be affected. Biochemical screening can thus be reserved for gene carriers, while those at low risk can be reassured. Combined biochemical and genetic screening for multiple endocrine neoplasia type 2 is important and effective for the cure of medullary thyroid carcinoma.
The present invention relates to a novel solid-state thermal process for the preparation of lithium cobaltate (LiCoO2), useful as a cathode material with nonaqueous, solid-state and polymer electrolytes in secondary rocking-chair (intercalation) batteries.
Lithium cobaltate (LiCoO2) is widely used as a cathode in lithium secondary cells in view of its high reversibility toward lithium ions and lower capacity fading compared with LiNiO2 and LiMn2O4 electrodes.
Methods reported in the art for the preparation of the cathode material lithium cobaltate (LiCoO2) disclose the reaction of lithium nitrate, lithium hydroxide, lithium acetate or other lithium salts with cobalt nitrates, oxides, acetates, hydroxides or sulphates by soft-chemistry methods such as the sol-gel process, at temperatures of 350-500°C, for long durations and with multistep preparation procedures. Normally, solid-state thermal synthesis of these oxide materials requires long heating times with intermittent cooling and grinding. Other preparation methods reported in the literature for synthesizing lithium cobaltate include pulsed laser deposition, sputtering and electrostatic spray deposition.
1. "Synthesis and electrochemical properties of LiCoO2 spinel cathodes", S. Choi and A. Manthiram, Journal of the Electrochemical Society, Vol. 149(2) (2002) A162-166.
2. "X-ray absorption spectroscopic study of LiAlyCo1-yO2 cathode for lithium rechargeable batteries", Won-Sub Yoon, Kyung-Keun Lee and Kwang-Bum Kim, Journal of the Electrochemical Society, Vol. 149(?) (2002) A146-151.
3. "High temperature combustion synthesis and electrochemical characterization of LiNiO2, LiCoO2 and LiMn2O4 for lithium ion secondary batteries", M. M. Rao, C. Liebenow, M. Jayalakshmi, M. Wulff, U. Guth and F. Scholz, J. of Solid State Electrochemistry, Vol. 5, Issue 5 (2001) 348-354.
4. "Fabrication of LiCoO2 thin films by sol gel method and characterization as positive electrodes for Li/LiCoO2 cells", M. N. Kim, H. Chung, Y. Park, J. Kim, J. Son, K. Park and H. Kim, Journal of Power Sources, Vol. 99 (2001) 34-40.
5. "Preparation and characterization of high-density spherical LiNi0.8Co0.2O2 cathode material for lithium secondary batteries", Jierong Ying, Chunrong Wan, Changyin Jiang and Yangxing Li, J. of Power Sources, Vol. 99 (2001) 78-84.
6. "Electrochemical characterization of layered LiCoO2 films prepared by electrostatic spray deposition", Won-Sub Yoon, Sung-Ho Ban, Kyung-Keun Lee, Kwang-Bum Kim, Min Gyu Kim and Jay Min Lee, J. of Power Sources, Vol. 97-98 (2001) 282-286.
7. "Emulsion-derived lithium manganese oxide powder for positive electrodes in lithium ion batteries", Chung-Hsin Lu and Shang-Wei Lin, J. of Power Sources, Vol. 93 (2001) 14-19.
8. "Cobalt doped chromium oxides as cathode materials for secondary lithium batteries", Dong Zhang, Branko N. Popov, Yury M. Podrazhansky, Pankaj Arora and Ralph E. White, J. of Power Sources, Vol. 83 (1999) 121-127.
9. "Synthesis and electrochemical studies of spinel phase LiMn2O4 cathode materials prepared by the Pechini process", W. Liu, G. C. Farrington, F. Chaput and B. Dunn, Journal of the Electrochemical Society, Vol. 143, No. 3 (1996) 879-884.
The conventional processes reported above show several disadvantages; generally, any one or all of the following are seen:
1. Side reactions occur, i.e., formation of unexpected and unwanted byproducts.
2. Unreacted material is left behind which acts as impurity.
3. Partial reactions occur.
4. Several steps and long calcination time are needed for preparation.
5. Controlled conditions required.
6. Undesirable phases formed.
It is therefore important to develop processes which overcome the disadvantages enumerated above.
The main object of the present invention is to provide a novel, hitherto unattempted method for the preparation of lithium cobaltate (LiCoO2) which obviates the drawbacks mentioned above.
It is another object of the invention to avoid the multi-step processes, formation of undesirable and unexpected byproducts, and undesirable phases reported in the prior art.
These and other objects of the invention are achieved by the novel process of the invention, comprising a one-step solid-state thermal reaction of lithium oxide and cobalt nitrate.
Accordingly, the present invention relates to a process for the preparation of lithium cobaltate by a one-step solid-state thermal process comprising uniformly mixing lithium oxide (Li2O) and cobalt nitrate (Co(NO3)2) in the solid state, adding a heat-generating material to the mixture and grinding it, and heating the ground mixture at a temperature in the range of 650 to 700°C to obtain the desired lithium cobaltate.
In one embodiment of the invention, the ratio of the Li2O + Co(NO3)2 mixture to the heat-generating material is 1:3.
In another embodiment of the invention, the ground mixture is heated in a furnace for about 8 hours.
In one embodiment of the invention, the Li2O is mixed with Co(NO3)2 in the following ratio:
Li2O : Co(NO3)2 = 1 : 2
In another embodiment of the invention, the heat-generating material is selected from urea and ammonium nitrate.
In yet another embodiment of the invention, an electric furnace is used for heating.
In still another embodiment of the invention, the materials used are all in solid state. |
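The patent does not spell out the overall stoichiometry of the one-step reaction; assuming the nitrate decomposes with evolution of NO2 during the 650-700°C treatment, one balanced candidate reaction (an illustrative sketch, not a claim of the patent) is:

\[ \mathrm{Li_2O} + 2\,\mathrm{Co(NO_3)_2} \;\xrightarrow{650\text{--}700\,^\circ\mathrm{C}}\; 2\,\mathrm{LiCoO_2} + 4\,\mathrm{NO_2}\!\uparrow + \tfrac{1}{2}\,\mathrm{O_2}\!\uparrow \]

On this reading, the urea or ammonium nitrate added in the 1:3 ratio acts as a fuel rather than a reactant, supplying the heat burst that drives the conversion in a single step.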
#include <iostream>
#include <vector>
#include <string>
using namespace std;
int main()
{
vector<int> nums;
int temp_num;
while (1)
{
cin >> temp_num;
        nums.push_back(temp_num); // append each number to the array as it is read
        if (cin.get() == '\n') // a newline means the whole line has been read
break;
}
    vector<int> res(7, 0); // res[r] = maximum subset sum with remainder r (mod 7)
for (auto num : nums)
{
        vector<int> cur = res; // snapshot so each number extends only previous states
for (auto i : cur)
{
res[(i + num) % 7] = max(res[(i + num) % 7], num + i);
}
}
cout << res[0];
return 0;
} |
// src/test/java/eu/galjente/zooplus/user/AuthorityRepositoryIntegrationTest.java
package eu.galjente.zooplus.user;
import eu.galjente.zooplus.user.domain.entity.Authority;
import eu.galjente.zooplus.user.domain.repository.AuthorityRepository;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.orm.jpa.DataJpaTest;
import org.springframework.dao.DataIntegrityViolationException;
import org.springframework.test.context.junit4.SpringRunner;
import static org.assertj.core.api.Assertions.assertThat;
@RunWith(SpringRunner.class)
@DataJpaTest
public class AuthorityRepositoryIntegrationTest {
private final static String ROLE_USER = "ROLE_USER";
private final static String ROLE_TEST = "ROLE_TEST";
@Autowired private AuthorityRepository authorityRepository;
@Test
public void findDefaultRoleForUser() {
//when
Authority found = authorityRepository.findOneByName(Authority.ROLE_USER);
//then
assertThat(found.getName()).isEqualTo(ROLE_USER);
}
@Test(expected = DataIntegrityViolationException.class)
public void trySaveDuplicateAuthority() {
//given
Authority authority = new Authority();
authority.setName(ROLE_USER);
//when
authorityRepository.save(authority);
}
@Test
public void saveNewAuthority() {
//given
Authority authority = new Authority();
authority.setName(ROLE_TEST);
//when
authorityRepository.save(authority);
//then
Authority found = authorityRepository.findOneByName(ROLE_TEST);
assertThat(authority)
.isEqualTo(found);
}
@Test(expected = DataIntegrityViolationException.class)
public void saveAuthorityWithEmptyName() {
//given
Authority authority = new Authority();
//when
authorityRepository.save(authority);
}
}
|
To Trace the Shifting Sands: Community, Ritual, and the Memorial Landscape The memorial landscape is a landscape of tremendous cultural significance. It reinserts sacred stories into public open space: stories that reveal and heal. These stories can have a positive impact on a community and can teach the lessons of history and place. The memorial landscape serves intellectual, emotional, spiritual, and communal functions, including: a) a place for memory, b) a place for mourning, c) a place for reflection and healing, d) a place for ceremony, and e) a place for collective action. Furthermore, specific design elements, such as art, architecture, landscape, and text are typically used to further these functions. Through investigating the memorial typology and its cultural significance, a deeper understanding of the landscape architect's role in this important cultural landscape is defined and clarified. |
Distribution and dispersal of the invasive Asian chestnut gall wasp, Dryocosmus kuriphilus (Hymenoptera: Cynipidae), across the heterogeneous landscape of the Iberian Peninsula Dryocosmus kuriphilus (Hymenoptera: Cynipidae), also known as the Asian chestnut gall wasp, is a non-native invasive species that has recently appeared in many regions of Europe, including the Iberian Peninsula. This species is an important pest of chestnut trees in several regions and is of concern for foresters in these areas. The results of this research revealed 14 different hotspots of infestation of D. kuriphilus and resulted in the development of models that predict the distribution of D. kuriphilus in Spain over the next 37 years (2019-2055). These results indicate a rapid spread in all Spanish chestnut forests and identify areas that are theoretically highly suitable and susceptible to colonization by this cynipid, based on predictions of three different niche models. Although D. kuriphilus is able to induce galls on all chestnut trees, the models indicate that there are differences in the suitability of the different regions for this species. This differential suitability results in some areas having better environmental conditions than others for D. kuriphilus, a factor that should be taken into account in its management and biological control. This study of the current distribution, patterns of dispersal using GIS, and potentially suitable areas for D. kuriphilus using niche models will assist in the management and control of this pest in Spain. INTRODUCTION. The Asian chestnut gall wasp, Dryocosmus kuriphilus Yasumatsu, 1951 (Hymenoptera: Cynipidae), is a cynipid species native to southern China, where it induces galls on all species of Castanea Mill. (Fagaceae). Taxonomically, it is included in the tribe Cynipini (Hymenoptera: Cynipidae), a large group of cynipids commonly known as oak gall wasps, with approximately 1000 species that induce galls on plants in the family Fagaceae, especially Quercus L. Dryocosmus kuriphilus is univoltine, with parthenogenetic thelytokous reproduction, which means that only females are known. The adult females are small insects of approximately 2 mm in length, and their life expectancy is only approximately ten days. Adults emerge from galls between May and August and lay eggs in chestnut buds that will develop into galls the next spring, which prevents the formation of healthy shoots and leaves (). Dryocosmus kuriphilus induces galls on new shoots, stipules and leaves of chestnut (Castanea spp.). It has been reported that D. kuriphilus adversely affects chestnut trees by reducing fruit production by up to 80% (EFSA). Georeferenced records of D. kuriphilus in Spain were combined in two maps showing where this species of cynipid was present, one for 2012 to 2015, the early period of the spread, and another for 2012 to 2018, the current distribution. Most of the records are for the Catalonian region (Fig. 1A), mainly because of the quality of the information provided by MNBC. As the number of records per unit area for Catalonia was higher than that for other regions, a lower number of records was used to homogenize the number of locations where D. kuriphilus was present. This method prevents a possible bias that would result if all 414 locations for Catalonia were used to evaluate the model performance, as it would suppress other data on the niche of D. kuriphilus in this region.
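A minimal sketch of this kind of record thinning (illustrative only: the file and column names and the per-region quota are assumptions, not details taken from the paper, which additionally retained the extreme climatic values for Catalonia rather than sampling purely at random):

import pandas as pd

records = pd.read_csv("dkuriphilus_records.csv")  # hypothetical file: one presence point per row
quota = records.groupby("region").size().min()    # equal share per region, so Catalonia cannot dominate
balanced = (records.groupby("region", group_keys=False)
                   .apply(lambda g: g.sample(n=quota, random_state=1)))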
Maximum and minimum values of each climatic variable for all the records for the Catalonian region were taken into account, and equal proportions of data points were selected for each area (Fig. 2). Based on the information available up to 2018, it is assumed that, in the absence of any control of D. kuriphilus, this cynipid wasp is still present and abundant in areas where it was recorded in previous years. In addition, D. kuriphilus has been able to spread by SDD and colonize adjacent areas. A map of the distribution of C. sativa was created using georeferenced records from different sources: the GBIF (Global Biodiversity Information Facility) (GBIF Data Portal, 2018), MAGRAMA (unpublished data) and BVdb (BVdb, 2018b). Data from GBIF and BVdb were appropriately processed and filtered; possible inconsistencies, such as locations with very low georeferencing accuracy or errors, were deleted, and only the records from the period 1997 to 2018 were selected for homogenization with the other records. All these chestnut tree records were used to predict the spread and potential distribution of D. kuriphilus in Spain. This dataset constitutes an approximation of the current distribution of chestnut trees in Spain, especially the large number of trees of commercial interest and those of public heritage. However, it is assumed that some chestnut trees were not included because they were not georeferenced or were referenced in official, inaccessible data sources. Differences among values of bioclimatic variables for Spain were analyzed using paired Student's t-tests with a Bonferroni correction, based on a similar number of locations for each of the regions considered. Models of dispersal of D. kuriphilus. The models were used to identify chestnut trees that could theoretically be threatened by D. kuriphilus in the next 37 years. These models were constructed using the distribution map of all the records of C. sativa in Spain, the value of the mean annual linear rate of SDD for D. kuriphilus (6.6 km/year), and the maximum annual rate of 11 km/year, based on the results of Gilioli et al. Due to a lack of data regarding the mean rate of dispersal for this cynipid species in Spain, the information derived from this approach was used to determine the annual rate of dispersal in threatened areas. Previous research, such as that of Gilioli et al., provides many parameters describing the dispersal of D. kuriphilus. However, in Spain, research on this pest began only recently, and there are several factors that can affect the dispersal of D. kuriphilus (), which have not yet been measured. Other examples of parameters that can influence the dispersal of D. kuriphilus are wind, precipitation and temperature. In fact, the annual range of high temperatures may affect D. kuriphilus development and activity (Bonsignore & Bernardo, 2018) because, as D. kuriphilus is an ectotherm, low temperatures can limit or reduce its dispersal, while high temperatures can increase its dispersal ability. We aimed to study these aspects because of the lack of previous studies on infestation rates of D. kuriphilus in Spain.
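A sketch of how the annual-dispersal buffering used for these models can be reproduced (file name, coordinate system and time horizon are assumptions; the original analysis was carried out in a GIS rather than in Python):

import geopandas as gpd

pts = gpd.read_file("dkuriphilus_2018.shp").to_crs(epsg=25830)  # metric CRS covering Spain
step_m = 6.6 * 1000  # mean short-distance dispersal, 6.6 km/year

reach = pts.copy()
for year in range(2019, 2029):
    # grow last year's infested area by one annual dispersal step;
    # chestnut records falling inside `reach` count as newly infested in `year`
    reach["geometry"] = reach.buffer(step_m)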
LDD is considered to be the most important factor determining the dispersal of D. kuriphilus, especially the colonization of distant areas (). By this means of dispersal, D. kuriphilus arrived in Catalonia in 2012 (northeast Iberian Peninsula) (EPPO, 2012), spread to other Spanish regions, such as Galicia (northwest Iberian Peninsula) and Málaga (Andalusia, southern Iberian Peninsula), and arrived in Portugal in 2014. Some authors have suggested that the rapid spread of D. kuriphilus is related to the genetics of the European populations of this species (Avtzis & Matosevic, 2013). Chestnut trees in Spain occur in the wet northern part of the country, except the Pyrenees; they are also present in the Midwest and southern half of Spain, although a few trees are also found in various urban areas and gardens throughout Spain. Therefore, all the mentioned areas are likely to be susceptible to D. kuriphilus and should also be monitored and managed. One of the most important tools that can be used for monitoring invasive species in different areas is species distribution modelling, which assesses the potential of a region for invasion (). By overlapping different climatic variables and the current distribution of a species, it is possible to infer the requirements or ecological preferences of D. kuriphilus, such as optimal temperatures (Bonsignore & Bernardo, 2018). Taking this into account, predictions regarding the presence or absence of populations in a particular area can be made using mathematical models that describe the potential distributions of species. For this gall-inducing species, the distribution of its hosts can be used as a predictor of its potential distribution (). Therefore, updating the knowledge on gall wasp occurrence in Spain and providing tools for monitoring its spread may fill the gaps in its geographic distribution and will facilitate management. In this study, we aim to (a) characterize the current distribution of D. kuriphilus in Spain, (b) predict the spread of this species considering only its SDD, assuming that no new LDD events affect its natural dispersal, and (c) identify the most suitable areas in Spanish forests using niche models. Current distribution of D. kuriphilus in Spain. We obtained georeferenced, nation-wide and unpublished records from the Spanish Ministry of Agriculture, the departments of forests and biodiversity of Catalonia and Andalusia, research by Perez-Otero et al., and the Biodiversidad Virtual database (BVdb) (BVdb, 2018a). These 24,916 georeferenced records form the basis of the two distribution maps described above.
A graph of the estimated fitted curves of the infestation rate showed that the area over which C. sativa occurs where D. kuriphilus is absent would decrease due to the possible expansion of the species each year, based on the dispersal values. The estimated area was calculated using the overall trend in D. kuriphilus dispersal, which can stabilize at an asymptote or intersect the y-axis, indicating that it would hypothetically be able to infest all the chestnut trees in this area. These fitted curves were obtained using CurveExpert Professional 2.4. For a better understanding of the incidence of D. kuriphilus and its possible future spread throughout Spain, five different types of predictive distribution models were constructed. The models were a generalized linear model (GLM), a generalized additive model (GAM) (), a random forest (RF) model, a maximum entropy (MaxEnt) model (Phillips & Dudik, 2008) and an environmental coverage model (ECM) (). Furthermore, since all the records refer to galls on chestnut trees, it must be considered that the presence or absence of chestnut trees cannot be treated as a simple variable in the model but has to be treated as a limiting variable (). This means that at any point where chestnut trees were not recorded, the presence of D. kuriphilus galls could also not be recorded, even though dispersing adults of D. kuriphilus could potentially be found there. To solve this problem, this cynipid was considered a priori to be unable to disperse beyond the maximum or mean SDD values. Consequently, buffer areas around infested chestnut trees, or those with a high probability of infestation with the chestnut wasp, were identified. This total buffer area of influence of D. kuriphilus was used as a limit on the preliminary models (Fig. 2). The selection of bioclimatic variables was determined by the phenology of the adult cynipid. The adult stage occurs only for a short period of time, between May and August (), with the highest abundance in Spain recorded in the months of June and July (Gil-Tapetado et al., unpubl. data); therefore, only summer variables were used in this analysis. The chosen summer variables were extracted from WorldClim version 1.4 () at a scale of 30 arc seconds (Table 1). The variables bio08 (Mean Temperature of Wettest Quarter) and bio09 (Mean Temperature of Driest Quarter) were not included in the analysis due to their anomalous pattern in Spain and the great difference in values for areas that are very close together. Later, an iterative variance inflation factor (VIF) analysis () was conducted, which deleted correlated variables (VIF > 5). The chosen, non-correlated bioclimatic variables were bio03 (Isothermality), bio07 (Temperature Annual Range), bio16 (Precipitation of Wettest Quarter) and bio17 (Precipitation of Driest Quarter). ECMs are potential distribution models that take into account the fundamental niche, i.e., the potential suitability of an area, depending on the ranges in the values of the environmental variables for the habitat in which a species is located. Following Jiménez-Valverde et al., all bioclimatic variables were transformed, normalized and checked to verify that there were no discrepancies among them using the software Biomapper (). Then, an iterative ecological niche factor analysis () was performed to obtain factors and eliminate redundant information. The correlations between bioclimatic variables are represented by a similarity dendrogram that includes only those that were not autocorrelated. Subsequently, following the broken-stick criterion used in the program, the first 6 factors were selected.
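The iterative VIF step described above can be reproduced with standard tools; a sketch follows (the climate table and its column names are assumptions):

import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

def drop_collinear(X, threshold=5.0):
    # iteratively drop the predictor with the highest VIF until all VIFs <= threshold
    X = X.copy()
    while True:
        vifs = pd.Series([variance_inflation_factor(X.values, i)
                          for i in range(X.shape[1])], index=X.columns)
        if vifs.max() <= threshold:
            return X
        X = X.drop(columns=vifs.idxmax())

# e.g. climate = DataFrame of bio01..bio19 sampled at presence points;
# drop_collinear(climate) -- the paper retained bio03, bio07, bio16 and bio17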
ECMs are potential distribution models that take into account the fundamental niche, i.e., the potential suitability of an area, depending on the ranges in the values of the environmental variables of the habitat in which a species occurs. Following Jiménez-Valverde et al., all bioclimatic variables were transformed, normalized and checked to verify that there were no discrepancies among them, using the software Biomapper. An iterative ecological niche factor analysis was then performed to obtain factors and eliminate redundant information. The correlations between bioclimatic variables are represented by a similarity dendrogram that includes only those that were not autocorrelated. Subsequently, following the broken-stick criterion used in the program, the first six factors were selected. Using these six factors, a discriminant model (using presences and pseudo-absences) was generated in STATISTICA (StatSoft Inc., 2009). By means of the second Mahalanobis distance (Farber & Kadmon, 2003), the environmental favourability for D. kuriphilus was calculated for each location. A location was considered potentially suitable for this species if its favourability was similar to or higher than the lowest favourability recorded for the localities where the species is present, and a favourability map was generated. Moreover, using the calculated factors and presence data for D. kuriphilus, a raster layer was obtained using the BIOCLIM algorithm implemented in the program DIVA-GIS. Finally, an ECM map was generated in ArcGIS, combining the favourability map and the BIOCLIM model. The GLMs, GAMs and RF models were analyzed using R (Table 2). The AUC values for model evaluation were calculated using a random 30% of the total number of locations where D. kuriphilus is present. A consensus model (CM) was also constructed using averages from the GLM, GAM, MaxEnt model, RF model and ECM. Which models to include in this ensemble was decided according to the per-pixel standard deviation of each candidate consensus model resulting from the combination of different models, and a comparison among models of pixel favourability values based on the Euclidean dissimilarity distance between the individual models. Similar models were included in the CM in order to obtain a prediction of the favourability for this species, whereas dissimilar models were treated as different hypotheses.
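The consensus step reduces to simple per-pixel arithmetic on the stacked model outputs. A minimal numpy sketch, assuming each model has been rasterized onto a common grid with favourability values in [0, 1]; array and model names are hypothetical.

# Consensus model (CM) as the per-pixel mean of the retained models;
# the per-pixel standard deviation flags areas of low consensus (illustrative).
import numpy as np

def consensus(models: dict):
    """models: name -> 2-D favourability array on a common grid, values in [0, 1]."""
    stack = np.stack(list(models.values()))   # shape: (n_models, rows, cols)
    cm = stack.mean(axis=0)                   # consensus favourability per pixel
    disagreement = stack.std(axis=0)          # high SD = low consensus between models
    return cm, disagreement

# e.g. the ensemble described above averaged GLM, ECM, MaxEnt and RF,
# keeping the GAM apart as a separate hypothesis:
# cm, sd = consensus({'glm': glm, 'ecm': ecm, 'maxent': mx, 'rf': rf})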
RESULTS

Current distribution of D. kuriphilus in Spain

The map of the distribution of D. kuriphilus in Spain, which includes all the records available in May 2018, reveals that this species has a disjunct distribution (Fig. 1A), likely caused by LDD due to human activity, such as chestnut forestry, although sporadic isolated trees can become infested as a result of SDD. The hotspots of D. kuriphilus occur in three different regions of the country: one Euro Siberian area and two separate Mediterranean areas, the regions of Catalonia and Málaga. In the early stages of colonization by D. kuriphilus in 2015, there were eleven different areas where the chestnut wasp was present in Spain, eight in the Euro Siberian region and three in the two Mediterranean regions: West Ourense and East Pontevedra, Val do Dubra (A Coruña), A Coruña, East Lugo, central Asturias (in two separate areas, 5 and 6), East Cantabria and West Basque Country, Navarra and East Basque Country, Catalonia (Barcelona and Gerona), Madrid, and Valle del Genal and Sierra de las Nieves (Málaga) (Fig. 1A). Currently, in 2018, the spread of D. kuriphilus by SDD into adjacent areas has occurred: hotspots 1 and 4 have merged into one continuous hotspot (1&4), as have hotspots 5 and 6 (5&6). Hotspot 1&4 now includes the region of El Bierzo which, along with Galicia, is one of the most important chestnut-producing areas. Hotspot 8, in the north, has become a continuous area of infested chestnut trees extending towards the south of France. In addition, there are two new hotspots in the Mediterranean region, Lanjarón and Prades, and one new hotspot in the Euro Siberian region, Alta Sanabria. The wasp has also been recorded infesting trees in the central area of Spain, such as the Tiétar valley (Ávila), El Jerte valley (Cáceres) and Sierra de S. Vicente (Toledo), but the phytosanitary authorities have effectively eliminated this pest from this area by cutting and burning all the early galls; therefore, this area is not included in the distribution or dispersal models.

Table 1. Bioclimatic variables from WorldClim 1.4 (except bio08 and bio09), the altitude, and the codes used to refer to them in this study. The variables used in the environmental coverage model (ECM) are indicated by 1, while those used in the generalized linear (GLM), environmental coverage (ECM), maximum entropy (MaxEnt) and random forest (RF) models are indicated by 2. The mean, standard deviation (SD), maximum (MAX) and minimum (MIN) values for the locations where D. kuriphilus is present in all regions and in three different regions of Spain (Euro Siberian, Catalonia and Málaga) are also included as a description of the area occupied by this cynipid wasp.

Table 1 gives the ranges in altitude and climatic variables recorded for the areas where D. kuriphilus is currently present, which are used as an approximation of the ecological characteristics of this exotic cynipid in Spain. The Madrid hotspot was not included in the comparison of regions since only two records of the presence of D. kuriphilus exist for this hotspot. The comparison of the bioclimatic values for the different regions revealed significant differences between most of the values and wide variation in all bioclimatic variables among the regions, indicating that this species occurs in areas with very different climatic conditions (Table 3).

Models of dispersal of D. kuriphilus

The dispersal model predicts that the area of infested chestnut trees in Spain will increase over the next thirty-seven years (Fig. 1B). The maps also show the rapid spread of this cynipid across Spain from first presence to saturation between the years 2032 and 2041, when the potential occupation curves for the two annual dispersal rates become asymptotic (Fig. 1C). This deceleration of potential occupation would hypothetically leave only 4% of Spanish forests not infested with D. kuriphilus, indicating the theoretical advance and final distribution of this species in Spain. This pattern arises solely because of the disjunct distribution of C. sativa on the Iberian Peninsula, with only the chestnut forests in the centre of the peninsula remaining uninfested via SDD. However, these forests would still be vulnerable to new D. kuriphilus LDD events. The declining curves for mean and maximum dispersal were fitted to a reciprocal form with correlation coefficients of 0.99 for each annual dispersal rate. They show breakpoints at x(6.6 km/year) = 2041 and x(11 km/year) = 2032 for mean and maximum dispersal, respectively, with both models predicting constant dispersal and an asymptote at 52,317.5 km² at the end of this period. Under these predictions, only ≈ 20.7% of the total area of chestnut forest in Spain would remain uninfested by D. kuriphilus. The best-fitting curves for the theoretical decrease in area under mean and maximum dispersal take a reciprocal form in which y is the area of chestnut forest without D. kuriphilus and x is the year; the first equation describes mean dispersal and the second maximum dispersal (a sketch of fitting a curve of this form is given below). The distribution maps for different years (Fig. 1B) show the patterns of distribution of D. kuriphilus throughout Spain, with a considerable increase in the infested area in the northwest corner of the Euro Siberian region.
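The published coefficients of these curves are not reproduced in the text above, so the following sketch only illustrates fitting a curve of the same general (reciprocal) family with scipy; the functional form y = a/(x - b) + c and the sample data are assumptions, not the study's equations.

# Fitting a reciprocal decline of uninfested chestnut-forest area vs. year
# (illustrative; the study's actual coefficients are not reproduced here).
import numpy as np
from scipy.optimize import curve_fit

def reciprocal(x, a, b, c):
    # One common reciprocal form: the area tends to the asymptote c as x grows
    return a / (x - b) + c

years = np.array([2018, 2020, 2022, 2024, 2026, 2028, 2030])            # hypothetical
area_km2 = np.array([240e3, 190e3, 150e3, 120e3, 98e3, 82e3, 70e3])     # hypothetical

popt, _ = curve_fit(reciprocal, years, area_km2, p0=(1e6, 2010, 5e4))
a, b, c = popt
print(f"asymptotic uninfested area ~ {c:,.0f} km^2")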
According to the mean dispersal model, the hotspots in the Euro Siberian region will unite into a continuously infested area in the year 2031.

Models of the distribution of D. kuriphilus

The maps of the potential distribution of D. kuriphilus in Spain predicted by the models are shown in Fig. 3: the GAM (Fig. 3A) and the mean CM of the GLM, ECM, MaxEnt model and RF model (Fig. 3B). The decision to separate the GAM from the CM was based on its higher per-pixel favourability values, compared with the predictions of the other four models, for the west-central areas of Spain, as averaging the total ensemble of these models would result in a very different prediction for areas of west-central Spain. Of the total number of pixels in this west-central area, 69% had standard deviations of between 28 and 42, resulting in areas with low consensus between the models (Fig. 3C). In addition, the GAM was not included in the CM because its prediction differs from those of the other models (mean Euclidean distance = 0.72 between the GAM and the other models). The two final maps of the distribution of D. kuriphilus are similar, but their suitability values per pixel differ (R² = 0.358); the GAM suitability values are mostly higher in the west-central and southern areas of Spain, but lower in the northern part. The range of variation and the suitability predicted by the GAM are, respectively, greater and lower than those predicted by the CM (mean_GAM = 0.37, SD_GAM = 0.31; mean_CM = 0.40, SD_CM = 0.26). Very high suitability values (> 0.85) are predicted by the GAM in practically all areas of west-central and southern Spain where C. sativa is present, whereas the high values predicted by the CM are in the D. kuriphilus hotspots in the Mediterranean regions of Catalonia and Málaga. The map of percentage dissimilarity (Fig. 3C) shows the areas that are predicted similarly by the GAM and CM and those with the lowest discrepancy; for these areas the predictions of the two models are similar. The comparison of the bioclimatic variables associated with D. kuriphilus presence and favourability in the different regions (Table 3) indicates that they are very similar in the Euro Siberian region and differ significantly in the west-central area of Spain.
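A per-pixel comparison of the kind reported above can be sketched as follows, assuming the GAM and CM suitability maps share a common grid; the statistics computed are generic and the variable names hypothetical.

# Per-pixel comparison of two suitability maps (illustrative), echoing the
# R^2 and distance statistics reported above for the GAM and CM.
import numpy as np

def compare_maps(gam, cm):
    mask = ~(np.isnan(gam) | np.isnan(cm))   # compare only shared valid pixels
    g, c = gam[mask], cm[mask]
    r = np.corrcoef(g, c)[0, 1]              # Pearson correlation between maps
    return {
        "r2": r ** 2,                        # cf. R^2 = 0.358 reported above
        "mean_distance": float(np.abs(g - c).mean()),
        "mean_gam": float(g.mean()), "sd_gam": float(g.std()),
        "mean_cm": float(c.mean()), "sd_cm": float(c.std()),
    }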
DISCUSSION

Current distribution of D. kuriphilus in Spain

Dryocosmus kuriphilus effectively colonized Spain by LDD, with at least 11 entry points between the years 2012 and 2015, and was then most likely transported from infested areas by the trade in chestnut trees. However, new hotspots of D. kuriphilus have since been detected, highlighting the role of LDD in the establishment of new hotspots in a region; there are currently a total of 14 such hotspots in Spain. The main cause of the occurrence of multiple hotspots of D. kuriphilus is related to human activity, most likely driven by the transport of Castanea trees and seedlings for commercial purposes related to chestnut forestry. Such transport from areas where this cynipid is known to occur should be controlled and prevented, or restricted to trees that have undergone an appropriate quarantine. Many cases of invasion could be prevented by quarantine, during which imported trees are kept at a sealed location for the period of the annual reproductive cycle of the chestnut wasp. This would be the best way to prevent further LDD of D. kuriphilus to areas where chestnut trees are not yet infested, although it would be difficult given the cryptic nature of the early galls of D. kuriphilus.

It is interesting that D. kuriphilus occurs in Madrid and Pasajes (Guipúzcoa, in the north of Spain), as these areas are isolated and its arrival there must be the result of LDD events into these cities. The same applies to the hotspots at Lanjarón and Prades. Specifically, in the city of Madrid this cynipid is restricted to the Royal Botanical Garden Alfonso XIII of Ciudad Universitaria and the Royal Botanical Garden of Madrid. These locations are surrounded by urban areas, and the last chestnut trees planted in these green spaces, more than 3 years ago, came from El Bierzo (1&4), where this cynipid was first recorded in 2017. This could indicate that LDD in this case is due to a factor other than the transport of chestnut trees, such as the transport of other propagules. In addition, sporadic and intermittent hosts can have a profound effect on the dispersal of D. kuriphilus, and it is likely that other unknown factors affect the spread of cynipids. The presence of D. kuriphilus at Pasajes could have been due to the transport of chestnut trees into this area, although it is also possible that it arrived via a sequence of SDD events from France; however, there are no published georeferenced data showing how the dispersal of this cynipid occurred in this area.

The climatic characteristics associated with the D. kuriphilus records (Table 1), which are assumed to reflect this gall wasp's ecological requirements, all seem to be correlated with the niche of Castanea trees. However, as these records come from three separate areas, different climatic conditions can affect the ecophysiology of this wasp and its host plant. The ecophysiology of the host plant can indirectly affect the cynipid and may also determine the level of suitability of each area; it is therefore a factor that should be taken into account. The differences between the two Mediterranean regions and the Euro Siberian region reflect the high precipitation and lower temperatures of the Euro Siberian region, with the opposite conditions in the other regions. As C. sativa is widely distributed in Spain in areas not subject to prolonged summer droughts, with well-drained, permeable soils, high precipitation and low temperatures, it is likely that chestnut trees in drier areas grow under worse ecological conditions, which affects tree growth and gall formation, and are consequently less suitable for D. kuriphilus (Gil-Tapetado et al., unpubl.). With respect to the two Mediterranean regions, it is not by chance that all sites are similar to one another except in altitude, because of the climatic compensation between latitude and altitude. However, this pattern can also occur in very cold areas in the north of Spain, where frosts occur and the viability of C. sativa and D. kuriphilus is likely to be lower. Overall, the models used in this paper indicate that areas highly suitable for D. kuriphilus exist in different regions, independent of the differences in the variables recorded in each region. This could indicate that all these regions of Spain are suitable for D. kuriphilus (Fig. 3A).

Models of dispersal of D. kuriphilus

The hypothetical distribution of D. kuriphilus presented in this paper (Fig. 1B) depends mainly on the presence of chestnut trees, and is thus very sensitive to changes caused by the introduction of new locations of C. sativa in Spain.
The existence of non-georeferenced chestnut trees is highly likely, and they could worsen the hypothetical trend in dispersal identified in this paper. This possibility highlights the importance of knowing the distribution of C. sativa and indicates that it is indispensable that all these trees in Spain are georeferenced in order to achieve highly accurate monitoring of the status of this pest cynipid. The theoretical final distribution of D. kuriphilus in Spain in the year 2055 indicates that two-thirds of the chestnut trees will be infested by this wasp. These first approximations of the possible future spread of D. kuriphilus in Spain also indicate that the high dispersal ability of D. kuriphilus may be a major concern for chestnut forestry. However, many parameters that affect the dispersal of this wasp are still unknown, and others cannot be taken into account, either because they are analytically complex or because of a lack of information. The parameters that determine the presence of D. kuriphilus are related to its probability of establishment or infestation (Jerde & Lewis, 2007), its intrinsic population growth rate (Neubert & Caswell, 2000) and natural or artificial barriers and elements of resistance, such as wind direction or the presence of urban structures that may affect the rate of dispersal, as well as the frequency of introductions and the size of propagules. Another factor to take into account is the sensitivity of D. kuriphilus to the volatiles produced by chestnut trees, similar to what occurs in other galling insects, such as fig wasps (Chalcidoidea: Agaonidae) and other cynipids, such as Antistrophus rufus Gillette, 1891. These volatiles may affect the dispersal of this species and may attract individuals to specific areas with high densities of chestnut trees or with particular characteristics. Indeed, attractant and repellent volatiles, together with wind speed and direction, are factors that increase the complexity of the patterns of dispersal.

The climatic conditions of the regions included in this study are very different (see Table 3), presumably indicating that D. kuriphilus is not limited by ecological requirements apart from the presence of Castanea trees. As a consequence of LDD events due to human activity in Spain (see Fig. 1A), and since D. kuriphilus can become established in any chestnut forest, this cynipid could occur in better or worse areas relative to its ecological niche. The above limitations may have skewed the results of this study, but they may also indicate that the dispersal of D. kuriphilus is not ecologically restricted except by the spatial configuration of C. sativa trees, and that the maximum distance D. kuriphilus can spread, and the areas it is potentially able to colonize each year, are determined by its SDD. Within this framework, the stabilization of the infested area may occur quickly and, in the absence of control measures and based on the dispersal rate of D. kuriphilus, the spread should end in 2032 if there are no new infestation hotspots due to LDD events. These two types of dispersal are the most important and influential determinants of the patterns of dispersion (Liebhold & Tobin, 2008), because new introductions can modify all the predictions based on SDD. Although all the data on the presence of D. kuriphilus were collected in 2018, the monitoring of specific areas of Spain for D. kuriphilus has continued, especially in the areas of Málaga, West and South Galicia and El Bierzo (1&4).
Predictions of the rates of spread noticeably differ, being greater in reality than those predicted by the models, even though the parameters included in these models are not restrictive.

Models of the distribution of D. kuriphilus

The models are limited in their predictions because many variables were not included due to a lack of data, such as the biotic interactions with autochthonous fauna, chalcid parasitoids of oak gall communities, and the experimental release of the biocontrol species and natural enemy T. sinensis in specific areas, or other variables that are not measurable, such as local wind direction and the fitness of individual chestnut trees. The variables considered are those associated with summer conditions, the only period in the cynipid's cycle when this insect experiences the environment outside the microenvironment of the gall (Yasumatsu, 1951). As cynipid larvae develop within galls, the climatic conditions in other seasons affect the gall tissues of Castanea trees but do not directly affect the wasp, which reduces the influence of some variables in ecological niche models of the requirements of D. kuriphilus.

The different models (Fig. 3A, B) predict different suitability values for different areas of Spain. The GAM (Fig. 3A) indicates that the west-central and southern areas of Spain are the most suitable for D. kuriphilus. However, there are also areas of high suitability near the northern hotspots of D. kuriphilus, which greatly affect the predictions of the model for these areas, perhaps due to a commission error, and decrease the predictive potential of this analysis. Based on this model, the zones in west-central Spain where D. kuriphilus is currently absent, and which are potentially free of SDD events, are the areas likely to experience the greatest theoretical establishment given their high suitability. Thus, preventing LDD events in this area is critical, because D. kuriphilus cannot reach this highly suitable area by SDD. Therefore, special monitoring of the transport of C. sativa trees in these areas should be developed in order to prevent D. kuriphilus spreading into these chestnut forests. In fact, LDD events have been reported for these areas (sites in the provinces of Ávila, Cáceres and Toledo), where developing galls of D. kuriphilus were detected but quickly destroyed by the forestry authorities. Its presence there was not included in the models because the galls were destroyed before adult emergence, making it impossible for the wasp to disperse and become established in the west-central area of Spain.

The CM (Fig. 3B) indicates that the hotspots in Catalonia and Málaga are the areas with the highest suitability; however, most of the northern chestnut forests also have high values. Unlike the GAM, the CM indicates that all of the chestnut areas in the northern part of Spain are very suitable, not only those close to where the pest has been recorded. This may indicate that most of Spain is suitable for D. kuriphilus, although there are differences in the levels of these high values. The CM may indicate that the Mediterranean regions have the best conditions and are the most suitable areas for D. kuriphilus, possibly due to their high or moderate temperatures, which favour the development of this species.
Differences between the Euro Siberian and Mediterranean regions (Tables 1 and 3) include higher temperatures and lower precipitation in the latter and a greater probability of winter frost in the former, which negatively affects chestnut development. Therefore, the concept of suitability used here is based on the environmental conditions that could affect the life cycle of D. kuriphilus, especially those that can modify its development in chestnut forests. As discussed previously, areas meeting the ecological requirements of chestnut trees are also highly suitable for this pest, coinciding with the CM predictions of intermediate precipitation and mild temperatures in all the chestnut forests of Spain (Table 1); these are the optimal habitat conditions for C. sativa.

The map of the percentage dissimilarity between the areas predicted by the two models (Fig. 3C) shows that they agree on the suitability values in most of the region except the west-central areas. Although there are differences in these areas, the models similarly predict high or low suitability values in most zones, even if there are a few differences between them, for example in Málaga and in West and South Galicia and El Bierzo (1&4). The two models agree that certain areas are of high suitability (Fig. 3C), specifically the previously mentioned hotspots. In these areas it is very likely that conditions are optimal for D. kuriphilus, since five ecological niche models indicate that they are highly suitable. It is crucial to manage these highly suitable areas, the main D. kuriphilus hotspots, using biological control based on different methods, such as the introduction of its natural enemy T. sinensis, in order to mitigate this gall wasp's adverse effects on the production of chestnuts in these regions.

That an area is predicted to be of low suitability does not mean that D. kuriphilus will not become established there; instead, it indicates that the vigour or fitness of C. sativa and D. kuriphilus is likely to be reduced or adversely affected by the environment in these areas, and that these populations are more limited ecologically than those in more suitable areas. This low suitability might be reflected in D. kuriphilus inducing smaller or more irregularly shaped galls. In addition, it is possible that the suboptimal conditions in these areas will also prevent T. sinensis from becoming established or thriving there. Although the models do not predict areas of low suitability near hotspots, it is possible that a gradient in high suitability could affect the establishment of T. sinensis. This, together with the information mentioned in Quacchia et al., could explain the differences in the successful establishment of T. sinensis in different areas.

In conclusion, the spread and infestation of chestnut forests by D. kuriphilus in Spain is predicted using models based on the distribution of chestnut forests in this area; in addition, this study is the first attempt to understand the spread and habitat-selection behaviour of D. kuriphilus in this area. The climatic characteristics of the areas where this pest is present do not seem to be limiting, with the only factor restricting its dispersal and distribution being the presence of chestnut trees. The models, however, indicate areas of greater or lesser suitability where D. kuriphilus could occur under different conditions and possibly behave differently.
The distribution models show that, because of the spatial configuration of Castanea forests in this region, approximately half of the chestnut forests in Spain are likely to be colonized by D. kuriphilus as a result of SDD, and they predict two scenarios for the areas that are highly suitable. Areas close to where D. kuriphilus is known to be present are the most likely to be colonized by this pest; however, new LDD events resulting in this pest arriving from another country, the main mode of dispersal of D. kuriphilus between countries, could greatly change the situation.