An expert has warned that as many as 33 people are committing suicide each week.
On a notepad next to her bed, she had written a suicide note.
That is where her mother found her – a 26-year-old Wellington artist, loved by her family and all who knew her.
Her father is speaking out this weekend after a coroner ruled there was not enough evidence to prove "whether she was then mentally capable of forming an intention to take her life", despite hearing the details of the events leading up to her death and the suicide note.
READ MORE:
* Mental Health Foundation says there is a 'small under-counting' of suicides
* Bereaved mum wants to stop suicide taking Kiwi kids like Ollie
* New Zealand needs to talk about suicide, bereaved dad says
* The 10 lessons I learned after my young son killed himself
* The silent treatment: How one mum has been stopped from talking about suicide
* 'Taboo-ness' around suicide needs to end
* Film around suicide in New Zealand seeking funding
"It begs the question whether the numbers are real," said.
"The statistics need to be right so that the right amount of funds are allocated to the mental health sector."
From June 2014 to May 2015, 569 people were officially listed as having died by suicide or suspected suicide – the highest number ever recorded in New Zealand.
Officially, at least 11 people a week die by suicide. But Timaru-based GP Dr Oliver Bourke warned that the true number could be three times higher. "I honestly think people have no idea how many people die by suicide," he said. "These figures are awful."
If a coroner is not satisfied that a death can be ruled a suicide, then it is classified under a different category, such as otherwise self-inflicted, an accident, or the cause is simply declared undetermined.
The coroner who investigated the Wellington woman's death found she had taken her own life, but he could not rule on intent. "If they don't want to call it a suicide, it's got to be classified as self-inflicted," the woman's father said.
"The way I look at it is, to the common person on the street, which is was I was before this, to actually realise that number is wildly different caught me by surprise. If people knew that they'd see it as a bigger issue than we ever imaged."
The man said he was "yet to hear a story where the person is in their right mind before taking their own life".
Former Chief Coroner Neil MacLean said the number of recorded suicides in New Zealand should be taken "with a large grain of salt".
"Some people could fall through the gaps because just relying on raw suicide numbers isn't giving us the true number," he said.
Judge MacLean said each case was judged on whether the evidence proved the person deliberately intended to take their life.
"That would include making a determination that it wasn't just an accident or indulging in risky behaviour without thinking of the consequences," he said.
"The coroner has to be sure they had the mental capacity to take their own life. That has to be established at a pretty high level, you have to be sure. If there is any reasonably plausible alternative that suggests that they didn't have that intent then the coroner shouldn't make that finding.
Asked whether the open rulings could mask the true rate, MacLean said he would "like to think not" as the 16 coroners and the chief coroner were "full time professionals, well-trained in psychological and psychiatric aspects of suicide. They really study this in great detail.
Dr Bourke said that because the threshold for ruling a death a suicide was so high, the real number was far from the reported one.
"When you get clear-cut cases, there would be at least as many again that a coroner couldn't give a ruling on because there is a slight doubt. Those sort of deaths aren't counted as suicide, it's scary."
But the Mental Health Foundation said Dr Bourke's suggestion the figure could be closer to 1500 deaths caused unnecessary alarm and distress to the public. "It is grossly irresponsible and patently wrong to claim that there are 33 deaths by suicide in New Zealand per week," Foundation chief executive Shaun Robinson said.
He conceded there was a "small under-counting" but said over-stating the suicide figures would create a false impression of a worsening crisis.
Meanwhile, the father of the Wellington artist said that not revealing the true statistics prevented people from getting the help they needed from mental health services. "I felt they were stretched and I also felt they were lacking the ability to do the work that needed to be done."
During the coroner's hearing into his daughter's death, the man said the family discovered she had called Lifeline for support on the day she died. But the family had not been told she had contemplated taking her own life.
"We weren't made aware of that call. They regarded it as non-urgent. I would have said that anyone calling that line was straight away a red flag. You don't call the line because you're relaxed, it's because you're contemplating taking your life. We only found out after the whole event occurred."
He said that education and privacy should extend to family and friends.
"When we found out the call was made on the Sunday, we thought she must have taken her life then. But we couldn't contact her because her mobile was turned off. She was isolating herself. If we'd known we would have gone round there," he said.
Instead, after not hearing from her for two days his wife drove to where their daughter was staying and found her lying dead on her bed.
"It sounds like there is a hell of a lot of pressure on mental health services," he said.
"But make family a part of the solution."
* Comment from the Mental Health Foundation has been added to this story.
WHERE TO GET HELP
Lifeline (open 24/7) - 0800 543 354
Depression Helpline (open 24/7) - 0800 111 757
Healthline (open 24/7) - 0800 611 116
Samaritans (open 24/7) - 0800 726 666
Suicide Crisis Helpline (open 24/7) - 0508 828 865 (0508 TAUTOKO). This is a service for people who may be thinking about suicide, or those who are concerned about family or friends.
Youthline (open 24/7) - 0800 376 633. You can also text 234 for free between 8am and midnight, or email talk@youthline.co.nz
0800 WHATSUP children's helpline - phone 0800 9428 787 between 1pm and 10pm on weekdays and from 3pm to 10pm on weekends. Online chat is available from 7pm to 10pm every day at www.whatsup.co.nz.
Kidsline (open 24/7) - 0800 543 754. This service is for children aged 5 to 18. Those who ring between 4pm and 9pm on weekdays will speak to a Kidsline buddy. These are specially trained teenage telephone counsellors.
Your local Rural Support Trust - 0800 787 254 (0800 RURAL HELP)
Alcohol Drug Helpline (open 24/7) - 0800 787 797. You can also text 8691 for free.
For further information, contact the Mental Health Foundation's free Resource and Information Service (09 623 4812).
* This article has been amended to clearly distinguish between self-inflicted deaths and suicides. |
# Read two three-character rows and report whether the second row is the
# first row reversed (outer characters swapped, middle character unchanged).
first = input()
second = input()
if first[0] == second[2] and first[1] == second[1] and first[2] == second[0]:
    ans = 'YES'
else:
    ans = 'NO'
print(ans) |
A survey on Entomobryomorpha (Collembola) fauna in northern forests of Iran. Corresponding author: Masoumeh Shayanmehr, E-mail: Shayanm30@yahoo.com. Copyright © 2018, Yahyapour et al. This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. |
Undiagnosed tuberculosis in a general hospital. Tuberculosis occasionally appears in an obscure form. Six patients are presented in whom the diagnosis was made either at autopsy or after an exploratory laparotomy. The disease had remained undiagnosed because of the paucity of clinical and roentgenologic findings referable to the lungs, an indeterminate tuberculin skin reaction and the misinterpretation of the so-called inactive fibrotic scars. To improve the diagnostic accuracy, repeated skin testing, cultures of liver and bone marrow biopsies and surgical specimens should be more widely used. A therapeutic trial with antituberculosis drugs is a useful diagnostic test in prolonged unexplained fevers. |
"""
Generate PDB file containing periodic box data.
"""
try:
import openmm
from openmm import unit, app
except ImportError: # OpenMM < 7.6
from simtk import openmm, unit
from simtk.openmm import app
prmtop_filename = 'alanine-dipeptide.prmtop'
crd_filename = 'alanine-dipeptide.crd'
pdb_filename = 'alanine-dipeptide.pdb'
# Read topology and positions.
prmtop = app.AmberPrmtopFile(prmtop_filename)
inpcrd = app.AmberInpcrdFile(crd_filename)
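# Sketch (assumption): if the .crd file carries periodic box vectors,
# AmberInpcrdFile exposes them as `inpcrd.boxVectors`; copying them onto the
# topology lets PDBFile.writeFile emit the corresponding CRYST1 record.
if inpcrd.boxVectors is not None:
    prmtop.topology.setPeriodicBoxVectors(inpcrd.boxVectors)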
# Write PDB; the context manager guarantees the file is closed even on error.
with open(pdb_filename, 'w') as outfile:
    app.PDBFile.writeFile(prmtop.topology, inpcrd.positions, file=outfile, keepIds=False)
|
Interleukin 6 (interferon beta 2) and interferon alpha/beta present in postendotoxin serum induce differentiation of murine M1 myeloid leukemia cells. Serum from lipopolysaccharide-treated mice (postendotoxin serum, PES) induces the differentiation of M1 myeloid leukemia cells into mature macrophages, as well as supporting the proliferation of the interleukin 6 (IL6)-dependent B9 hybridoma cells. The kinetics of appearance of these two activities in PES were identical. To determine whether these two activities are due to the presence of the same substance, we tested whether anti-IL6 antibodies could neutralize the differentiation-inducing activity of PES. We found that anti-IL6 antibodies completely neutralized the proliferation of B9 cells and resulted in a 60% neutralization of the differentiation-inducing activity of PES. Anti-interferon alpha/beta (IFN alpha/beta) antibodies neutralized 70% of the differentiation-inducing activity of PES. These data suggest that the differentiation-inducing activity of PES is not limited to IL6, and that PES contains additional factors such as IFN alpha/beta that are capable of inducing differentiation of M1 cells. |
import dgl.nn.pytorch as dglnn
import torch
import torch.nn as nn
from dgl import function as fn
from dgl._ffi.base import DGLError
from dgl.nn.pytorch.utils import Identity
from dgl.ops import edge_softmax
from dgl.utils import expand_as_pair
class Bias(nn.Module):
def __init__(self, size):
super().__init__()
self.bias = nn.Parameter(torch.Tensor(size))
self.reset_parameters()
def reset_parameters(self):
nn.init.zeros_(self.bias)
def forward(self, x):
return x + self.bias
class GCN(nn.Module):
def __init__(self, in_feats, n_hidden, n_classes, n_layers, activation, dropout, use_linear):
super().__init__()
self.n_layers = n_layers
self.n_hidden = n_hidden
self.n_classes = n_classes
self.use_linear = use_linear
self.convs = nn.ModuleList()
if use_linear:
self.linear = nn.ModuleList()
self.bns = nn.ModuleList()
for i in range(n_layers):
in_hidden = n_hidden if i > 0 else in_feats
out_hidden = n_hidden if i < n_layers - 1 else n_classes
bias = i == n_layers - 1
self.convs.append(dglnn.GraphConv(in_hidden, out_hidden, "both", bias=bias))
if use_linear:
self.linear.append(nn.Linear(in_hidden, out_hidden, bias=False))
if i < n_layers - 1:
self.bns.append(nn.BatchNorm1d(out_hidden))
self.dropout0 = nn.Dropout(min(0.1, dropout))
self.dropout = nn.Dropout(dropout)
self.activation = activation
def forward(self, graph, feat):
h = feat
h = self.dropout0(h)
for i in range(self.n_layers):
conv = self.convs[i](graph, h)
if self.use_linear:
linear = self.linear[i](h)
h = conv + linear
else:
h = conv
if i < self.n_layers - 1:
h = self.bns[i](h)
h = self.activation(h)
h = self.dropout(h)
return h
class GATConv(nn.Module):
def __init__(
self,
in_feats,
out_feats,
num_heads=1,
feat_drop=0.0,
attn_drop=0.0,
negative_slope=0.2,
residual=False,
activation=None,
allow_zero_in_degree=False,
norm="none",
):
super(GATConv, self).__init__()
if norm not in ("none", "both"):
            raise DGLError('Invalid norm value. Must be either "none" or "both", but got "{}".'.format(norm))
self._num_heads = num_heads
self._in_src_feats, self._in_dst_feats = expand_as_pair(in_feats)
self._out_feats = out_feats
self._allow_zero_in_degree = allow_zero_in_degree
self._norm = norm
if isinstance(in_feats, tuple):
self.fc_src = nn.Linear(self._in_src_feats, out_feats * num_heads, bias=False)
self.fc_dst = nn.Linear(self._in_dst_feats, out_feats * num_heads, bias=False)
else:
self.fc = nn.Linear(self._in_src_feats, out_feats * num_heads, bias=False)
self.attn_l = nn.Parameter(torch.FloatTensor(size=(1, num_heads, out_feats)))
self.attn_r = nn.Parameter(torch.FloatTensor(size=(1, num_heads, out_feats)))
self.feat_drop = nn.Dropout(feat_drop)
self.attn_drop = nn.Dropout(attn_drop)
self.leaky_relu = nn.LeakyReLU(negative_slope)
if residual:
if self._in_dst_feats != out_feats:
self.res_fc = nn.Linear(self._in_dst_feats, num_heads * out_feats, bias=False)
else:
self.res_fc = Identity()
else:
self.register_buffer("res_fc", None)
self.reset_parameters()
self._activation = activation
def reset_parameters(self):
gain = nn.init.calculate_gain("relu")
if hasattr(self, "fc"):
nn.init.xavier_normal_(self.fc.weight, gain=gain)
else:
nn.init.xavier_normal_(self.fc_src.weight, gain=gain)
nn.init.xavier_normal_(self.fc_dst.weight, gain=gain)
nn.init.xavier_normal_(self.attn_l, gain=gain)
nn.init.xavier_normal_(self.attn_r, gain=gain)
if isinstance(self.res_fc, nn.Linear):
nn.init.xavier_normal_(self.res_fc.weight, gain=gain)
def set_allow_zero_in_degree(self, set_value):
self._allow_zero_in_degree = set_value
def forward(self, graph, feat):
with graph.local_scope():
            if not self._allow_zero_in_degree:
                if (graph.in_degrees() == 0).any():
                    raise DGLError(
                        "There are 0-in-degree nodes in the graph, so the output "
                        "for those nodes will be invalid. Adding self-loops or "
                        "constructing the layer with allow_zero_in_degree=True "
                        "will suppress this check."
                    )
if isinstance(feat, tuple):
h_src = self.feat_drop(feat[0])
h_dst = self.feat_drop(feat[1])
if not hasattr(self, "fc_src"):
self.fc_src, self.fc_dst = self.fc, self.fc
feat_src, feat_dst = h_src, h_dst
feat_src = self.fc_src(h_src).view(-1, self._num_heads, self._out_feats)
feat_dst = self.fc_dst(h_dst).view(-1, self._num_heads, self._out_feats)
else:
h_src = h_dst = self.feat_drop(feat)
feat_src, feat_dst = h_src, h_dst
feat_src = feat_dst = self.fc(h_src).view(-1, self._num_heads, self._out_feats)
if graph.is_block:
feat_dst = feat_src[: graph.number_of_dst_nodes()]
if self._norm == "both":
degs = graph.out_degrees().float().clamp(min=1)
norm = torch.pow(degs, -0.5)
shp = norm.shape + (1,) * (feat_src.dim() - 1)
norm = torch.reshape(norm, shp)
feat_src = feat_src * norm
# NOTE: GAT paper uses "first concatenation then linear projection"
# to compute attention scores, while ours is "first projection then
# addition", the two approaches are mathematically equivalent:
# We decompose the weight vector a mentioned in the paper into
# [a_l || a_r], then
# a^T [Wh_i || Wh_j] = a_l Wh_i + a_r Wh_j
            # Our implementation is much more efficient because we do not need to
# save [Wh_i || Wh_j] on edges, which is not memory-efficient. Plus,
# addition could be optimized with DGL's built-in function u_add_v,
# which further speeds up computation and saves memory footprint.
el = (feat_src * self.attn_l).sum(dim=-1).unsqueeze(-1)
er = (feat_dst * self.attn_r).sum(dim=-1).unsqueeze(-1)
graph.srcdata.update({"ft": feat_src, "el": el})
graph.dstdata.update({"er": er})
# compute edge attention, el and er are a_l Wh_i and a_r Wh_j respectively.
graph.apply_edges(fn.u_add_v("el", "er", "e"))
e = self.leaky_relu(graph.edata.pop("e"))
# compute softmax
graph.edata["a"] = self.attn_drop(edge_softmax(graph, e))
# message passing
graph.update_all(fn.u_mul_e("ft", "a", "m"), fn.sum("m", "ft"))
rst = graph.dstdata["ft"]
if self._norm == "both":
degs = graph.in_degrees().float().clamp(min=1)
norm = torch.pow(degs, 0.5)
shp = norm.shape + (1,) * (feat_dst.dim() - 1)
norm = torch.reshape(norm, shp)
rst = rst * norm
# residual
if self.res_fc is not None:
resval = self.res_fc(h_dst).view(h_dst.shape[0], -1, self._out_feats)
rst = rst + resval
# activation
if self._activation is not None:
rst = self._activation(rst)
return rst
class GAT(nn.Module):
def __init__(
self, in_feats, n_classes, n_hidden, n_layers, n_heads, activation, dropout=0.0, attn_drop=0.0, norm="none"
):
super().__init__()
self.in_feats = in_feats
self.n_hidden = n_hidden
self.n_classes = n_classes
self.n_layers = n_layers
self.num_heads = n_heads
self.convs = nn.ModuleList()
self.linear = nn.ModuleList()
self.bns = nn.ModuleList()
self.biases = nn.ModuleList()
for i in range(n_layers):
in_hidden = n_heads * n_hidden if i > 0 else in_feats
out_hidden = n_hidden if i < n_layers - 1 else n_classes
# in_channels = n_heads if i > 0 else 1
out_channels = n_heads
self.convs.append(GATConv(in_hidden, out_hidden, num_heads=n_heads, attn_drop=attn_drop, norm=norm))
self.linear.append(nn.Linear(in_hidden, out_channels * out_hidden, bias=False))
if i < n_layers - 1:
self.bns.append(nn.BatchNorm1d(out_channels * out_hidden))
self.bias_last = Bias(n_classes)
self.dropout0 = nn.Dropout(min(0.1, dropout))
self.dropout = nn.Dropout(dropout)
self.activation = activation
def forward(self, graph, feat, perturb=None):
h = feat
h = self.dropout0(h)
for i in range(self.n_layers):
conv = self.convs[i](graph, h)
linear = self.linear[i](h).view(conv.shape)
h = conv + linear
if i < self.n_layers - 1:
h = h.flatten(1)
h = self.bns[i](h)
h = self.activation(h)
h = self.dropout(h)
h = h.mean(1)
h = self.bias_last(h)
return h
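# GAT_embed is identical to GAT above, except that the raw features first pass
# through a learnable linear embedding, and an optional additive perturbation
# (e.g. for adversarial feature augmentation) is applied to the embedded
# features in place of the input dropout.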
class GAT_embed(nn.Module):
def __init__(
self, in_feats, n_classes, n_hidden, n_layers, n_heads, activation, dropout=0.0, attn_drop=0.0, norm="none"
):
super().__init__()
self.in_feats = in_feats
self.n_hidden = n_hidden
self.n_classes = n_classes
self.n_layers = n_layers
self.num_heads = n_heads
self.convs = nn.ModuleList()
self.linear = nn.ModuleList()
self.bns = nn.ModuleList()
self.biases = nn.ModuleList()
for i in range(n_layers):
in_hidden = n_heads * n_hidden if i > 0 else in_feats
out_hidden = n_hidden if i < n_layers - 1 else n_classes
# in_channels = n_heads if i > 0 else 1
out_channels = n_heads
self.convs.append(GATConv(in_hidden, out_hidden, num_heads=n_heads, attn_drop=attn_drop, norm=norm))
self.linear.append(nn.Linear(in_hidden, out_channels * out_hidden, bias=False))
if i < n_layers - 1:
self.bns.append(nn.BatchNorm1d(out_channels * out_hidden))
self.bias_last = Bias(n_classes)
self.dropout0 = nn.Dropout(min(0.1, dropout))
self.dropout = nn.Dropout(dropout)
self.activation = activation
self.embed = nn.Linear(self.in_feats, self.in_feats)
def forward(self, graph, feat, perturb=None):
h = feat
h = self.embed(h) if perturb is None else self.embed(h) + perturb
# h = self.dropout0(h)
for i in range(self.n_layers):
conv = self.convs[i](graph, h)
linear = self.linear[i](h).view(conv.shape)
h = conv + linear
if i < self.n_layers - 1:
h = h.flatten(1)
h = self.bns[i](h)
h = self.activation(h)
h = self.dropout(h)
h = h.mean(1)
h = self.bias_last(h)
return h
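# GAT_no_bn shares the GAT architecture above but skips the BatchNorm layers
# in the forward pass; the layers are still constructed, so the parameter
# layout stays compatible with GAT checkpoints.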
class GAT_no_bn(nn.Module):
def __init__(
self, in_feats, n_classes, n_hidden, n_layers, n_heads, activation, dropout=0.0, attn_drop=0.0, norm="none"
):
super().__init__()
self.in_feats = in_feats
self.n_hidden = n_hidden
self.n_classes = n_classes
self.n_layers = n_layers
self.num_heads = n_heads
self.convs = nn.ModuleList()
self.linear = nn.ModuleList()
self.bns = nn.ModuleList()
self.biases = nn.ModuleList()
for i in range(n_layers):
in_hidden = n_heads * n_hidden if i > 0 else in_feats
out_hidden = n_hidden if i < n_layers - 1 else n_classes
# in_channels = n_heads if i > 0 else 1
out_channels = n_heads
self.convs.append(GATConv(in_hidden, out_hidden, num_heads=n_heads, attn_drop=attn_drop, norm=norm))
self.linear.append(nn.Linear(in_hidden, out_channels * out_hidden, bias=False))
if i < n_layers - 1:
self.bns.append(nn.BatchNorm1d(out_channels * out_hidden))
self.bias_last = Bias(n_classes)
self.dropout0 = nn.Dropout(min(0.1, dropout))
self.dropout = nn.Dropout(dropout)
self.activation = activation
def forward(self, graph, feat):
h = feat
h = self.dropout0(h)
for i in range(self.n_layers):
conv = self.convs[i](graph, h)
linear = self.linear[i](h).view(conv.shape)
h = conv + linear
if i < self.n_layers - 1:
h = h.flatten(1)
# h = self.bns[i](h)
h = self.activation(h)
h = self.dropout(h)
h = h.mean(1)
h = self.bias_last(h)
        return h
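# Minimal usage sketch (assumptions: DGL with the PyTorch backend installed;
# graph size, feature width, and hyperparameters below are illustrative only).
# Self-loops are added so GraphConv's "both" normalization sees no
# zero-in-degree nodes.
import dgl
import torch.nn.functional as F

g = dgl.add_self_loop(dgl.rand_graph(100, 500))  # 100 nodes, 500 random edges
feat = torch.randn(100, 16)
model = GCN(in_feats=16, n_hidden=32, n_classes=7, n_layers=3,
            activation=F.relu, dropout=0.5, use_linear=True)
logits = model(g, feat)  # -> tensor of shape (100, 7)
|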
/**
* ScoredPrefixSelectorFactory
*
* @author jwu
* @since 02/12, 2011
*
* 05/19, 2011 - Added default serialVersionUID <br/>
*/
public class ScoredPrefixSelectorFactory<E extends Element> implements SelectorFactory<E> {
private static final long serialVersionUID = 1L;
@Override
public Selector<E> createSelector(String... terms) {
return new ScoredPrefixSelector<E>(terms);
}
} |
A group from Stanford University in the US has created the first computer simulation that mimics the work of an entire living organism – a primitive parasitic bacterium with a tiny genome. Yet the simulation required the power of 128 computers.
By making virtual versions of bacteria, scientists may be able to observe how they behave in certain real-life conditions. This would enable them to come up with more efficient therapies without having to invest too much time and money in laboratory experiments. Computer models are also safer when it comes to dealing with pathogens.
Mycoplasma genitalium is a widespread human pathogen responsible for some urethral and vaginal infections. It also appears to be a perfect model organism for various research applications due to its simple organization. It has one of the smallest known genomes of all living organisms, with a single chromosome containing only 525 genes. By contrast, E. coli – another common bacterium used in laboratory experiments – has 4,288 genes in its DNA.
No wonder that Markus Covert, an assistant professor of bio-engineering, and his research group chose M. genitalium for their computer model. In order to simulate the work of all of its components and their interactions, including the “behavior” of all the 525 genes, the scientists had to bring together various data from over 900 publications. Eventually, they were able to define 28 cellular processes and include each of them in the simulation as a separate submodel – a block of the resulting software.
“These modules then communicated with each other after every step, making for a unified whole that closely matched M. genitalium's real-world behaviors,” the team explains in the top-rated scientific journal Cell.
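A minimal sketch of that modular design (illustrative Python only, not the Stanford group's actual code): each cellular process is a submodel that advances one time step and then shares a unified cell state with the others.

class Submodel:
    """One cellular process module; reads and updates the shared cell state."""
    def step(self, state):
        raise NotImplementedError

class Metabolism(Submodel):
    def step(self, state):
        state['atp'] = state.get('atp', 0) + 1  # toy update rule

class Transcription(Submodel):
    def step(self, state):
        state['mrna'] = state.get('mrna', 0) + state.get('atp', 0) // 2  # toy update rule

def simulate(submodels, state, n_steps):
    # After every step each module's updates become visible to all others
    # through the unified state, mirroring the 28 communicating submodels.
    for _ in range(n_steps):
        for module in submodels:
            module.step(state)
    return state

print(simulate([Metabolism(), Transcription()], {}, 3))
|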
def evaluate(RPNS):
    """Evaluate a comma-delimited Reverse Polish Notation expression."""
    intermediate_results = []
    delimiter = ','
    # Each operator pops y (the top of the stack) first, then x, so x is the
    # earlier operand: "a,b,-" computes a - b.
    operators = {
        '+': lambda y, x: x + y, '-': lambda y, x: x - y, '*': lambda y, x: x * y,
        '/': lambda y, x: int(x / y)  # int() truncates toward zero, unlike //
    }
    for t in RPNS.split(delimiter):
        if t in operators:
            intermediate_results.append(operators[t](intermediate_results.pop(), intermediate_results.pop()))
        else:
            intermediate_results.append(int(t))
    return intermediate_results[-1]
str1 = "3,4,+,2,*,1,+"
print(evaluate(str1))  # ((3 + 4) * 2) + 1 = 15
|
/**
 * Returns a lambda function that creates an instance of a given class and
 * wraps it in a {@link Stream}. If the given class could not be
 * instantiated, i.e. if it throws an {@link InstantiationException}, the given
 * error message is printed to {@link System#err} and an empty stream is
 * returned.
*
* @param <T>
* the class type
* @param errorFormat
* the error format
* @return a lambda function that instantiates a given class and wraps it in
* an instance of {@link Stream}
* @throws NullPointerException
* if {@code errorFormat} is {@code null}
* @see #instantiate()
*/
public static <T> Function<Class<? extends T>, Stream<? extends T>> instantiate(String errorFormat) {
return clazz -> {
try {
return Stream.of(clazz.newInstance());
} catch (InstantiationException | IllegalAccessException e) {
if (errorFormat != null) {
System.err.printf(errorFormat, getName(clazz));
}
return Stream.empty();
}
};
} |
A pillow sham generally refers to an ornamental covering for a bed pillow. In recent years, the popularity of pillow shams has increased. For instance, it is now popular for comforters to be sold in combination with matching pillow shams and curtains in order to tie the decorative features of a bedroom together.
Split back pillow shams generally include a face fabric attached to two overlapping plies of backing fabric. A pillow can be inserted and removed from the sham through an opening located where the two plies of backing fabric overlap. Because the plies of fabric overlap, the pillow remains secured within the sham.
Currently, split back pillow shams are made in several separate steps. For instance, typically the face fabric and the two overlapping backing fabrics are first cut to an appropriate size. Once cut, each of the two backing fabrics is hemmed along an edge where the fabrics are intended to overlap. Next, the three fabric pieces are assembled and sewn together. Once sewn, the sham is then turned right side out and further enhanced if desired.
The above process is typically done manually. Specifically, each step is usually done separately by an individual working at a cutting and sewing station.
Due to the amount of time and expense involved in producing pillow shams according to the above process, it would be desirable if a machine could be developed that could automatically form pillow shams from one or more rolls of material. |
package org.jflex.calc ;
//----------------------------------------------------
// The following code was generated by CUP v0.11a beta 20060608
// Tue Mar 08 15:42:18 CST 2016
//----------------------------------------------------
import java_cup.runtime.*;
/** CUP v0.11a beta 20060608 generated parser.
* @version Tue Mar 08 15:42:18 CST 2016
 * This class receives the token stream from Lexer.java and
 * applies the grammar rules, e.g. building the parse tree.
*/
public class parser extends java_cup.runtime.lr_parser {
/** Default constructor. */
public parser() {super();}
/** Constructor which sets the default scanner. */
public parser(java_cup.runtime.Scanner s) {super(s);}
/** Constructor which sets the default scanner. */
public parser(java_cup.runtime.Scanner s, java_cup.runtime.SymbolFactory sf) {super(s,sf);}
/** Production table. */
protected static final short _production_table[][] =
unpackFromStrings(new String[] {
"\000\016\000\002\002\004\000\002\002\004\000\002\002" +
"\003\000\002\007\002\000\002\003\005\000\002\004\005" +
"\000\002\004\005\000\002\004\003\000\002\005\005\000" +
"\002\005\005\000\002\005\003\000\002\006\005\000\002" +
"\006\003\000\002\006\003" });
/** Access to production table. */
public short[][] production_table() {return _production_table;}
/** Parse-action table. */
protected static final short[][] _action_table =
unpackFromStrings(new String[] {
"\000\027\000\010\011\013\013\010\014\004\001\002\000" +
"\016\004\ufff4\005\ufff4\006\ufff4\007\ufff4\010\ufff4\012\ufff4" +
"\001\002\000\016\004\ufff7\005\ufff7\006\ufff7\007\ufff7\010" +
"\ufff7\012\ufff7\001\002\000\012\002\uffff\011\uffff\013\uffff" +
"\014\uffff\001\002\000\016\004\ufffa\005\ufffa\006\ufffa\007" +
"\022\010\021\012\ufffa\001\002\000\016\004\ufff5\005\ufff5" +
"\006\ufff5\007\ufff5\010\ufff5\012\ufff5\001\002\000\012\002" +
"\031\011\013\013\010\014\004\001\002\000\010\004\ufffe" +
"\005\016\006\017\001\002\000\010\011\013\013\010\014" +
"\004\001\002\000\010\005\016\006\017\012\015\001\002" +
"\000\016\004\ufff6\005\ufff6\006\ufff6\007\ufff6\010\ufff6\012" +
"\ufff6\001\002\000\010\011\013\013\010\014\004\001\002" +
"\000\010\011\013\013\010\014\004\001\002\000\016\004" +
"\ufffb\005\ufffb\006\ufffb\007\022\010\021\012\ufffb\001\002" +
"\000\010\011\013\013\010\014\004\001\002\000\010\011" +
"\013\013\010\014\004\001\002\000\016\004\ufff9\005\ufff9" +
"\006\ufff9\007\ufff9\010\ufff9\012\ufff9\001\002\000\016\004" +
"\ufff8\005\ufff8\006\ufff8\007\ufff8\010\ufff8\012\ufff8\001\002" +
"\000\016\004\ufffc\005\ufffc\006\ufffc\007\022\010\021\012" +
"\ufffc\001\002\000\004\004\027\001\002\000\012\002\ufffd" +
"\011\ufffd\013\ufffd\014\ufffd\001\002\000\012\002\001\011" +
"\001\013\001\014\001\001\002\000\004\002\000\001\002" +
"" });
/** Access to parse-action table. */
public short[][] action_table() {return _action_table;}
/** <code>reduce_goto</code> table. */
protected static final short[][] _reduce_table =
unpackFromStrings(new String[] {
"\000\027\000\014\002\010\003\005\004\011\005\006\006" +
"\004\001\001\000\002\001\001\000\002\001\001\000\002" +
"\001\001\000\002\001\001\000\002\001\001\000\012\003" +
"\027\004\011\005\006\006\004\001\001\000\004\007\025" +
"\001\001\000\010\004\013\005\006\006\004\001\001\000" +
"\002\001\001\000\002\001\001\000\006\005\024\006\004" +
"\001\001\000\006\005\017\006\004\001\001\000\002\001" +
"\001\000\004\006\023\001\001\000\004\006\022\001\001" +
"\000\002\001\001\000\002\001\001\000\002\001\001\000" +
"\002\001\001\000\002\001\001\000\002\001\001\000\002" +
"\001\001" });
/** Access to <code>reduce_goto</code> table. */
public short[][] reduce_table() {return _reduce_table;}
/** Instance of action encapsulation class. */
protected CUP$parser$actions action_obj;
/** Action encapsulation object initializer. */
protected void init_actions()
{
action_obj = new CUP$parser$actions(this);
}
/** Invoke a user supplied parse action. */
public java_cup.runtime.Symbol do_action(
int act_num,
java_cup.runtime.lr_parser parser,
java.util.Stack stack,
int top)
throws java.lang.Exception
{
/* call code in generated class */
return action_obj.CUP$parser$do_action(act_num, parser, stack, top);
}
/** Indicates start state. */
public int start_state() {return 0;}
/** Indicates start production. */
public int start_production() {return 1;}
/** <code>EOF</code> Symbol index. */
public int EOF_sym() {return 0;}
/** <code>error</code> Symbol index. */
public int error_sym() {return 1;}
/* Change the method report_error so it will display the line and
column of where the error occurred in the input as well as the
reason for the error which is passed into the method in the
String 'message'. */
public void report_error(String message, Object info) {
/* Create a StringBuilder called 'm' with the string 'Error' in it. */
StringBuilder m = new StringBuilder("Error");
/* Check if the information passed to the method is the same
type as the type java_cup.runtime.Symbol. */
if (info instanceof java_cup.runtime.Symbol) {
/* Declare a java_cup.runtime.Symbol object 's' with the
information in the object info that is being typecasted
as a java_cup.runtime.Symbol object. */
java_cup.runtime.Symbol s = ((java_cup.runtime.Symbol) info);
/* Check if the line number in the input is greater or
equal to zero. */
if (s.left >= 0) {
/* Add to the end of the StringBuilder error message
the line number of the error in the input. */
m.append(" in line "+(s.left+1));
/* Check if the column number in the input is greater
or equal to zero. */
if (s.right >= 0)
/* Add to the end of the StringBuilder error message
the column number of the error in the input. */
m.append(", column "+(s.right+1));
}
}
/* Add to the end of the StringBuilder error message created in
this method the message that was passed into this method. */
m.append(" : "+message);
/* Print the contents of the StringBuilder 'm', which contains
an error message, out on a line. */
System.err.println(m);
}
/* Change the method report_fatal_error so when it reports a fatal
error it will display the line and column number of where the
fatal error occurred in the input as well as the reason for the
fatal error which is passed into the method in the object
'message' and then exit.*/
public void report_fatal_error(String message, Object info) {
report_error(message, info);
System.exit(1);
}
}
/** Cup generated class to encapsulate user supplied action code.*/
class CUP$parser$actions {
private final parser parser;
/** Constructor */
CUP$parser$actions(parser parser) {
this.parser = parser;
}
/** Method with the actual generated action code. */
public final java_cup.runtime.Symbol CUP$parser$do_action(
int CUP$parser$act_num,
java_cup.runtime.lr_parser CUP$parser$parser,
java.util.Stack CUP$parser$stack,
int CUP$parser$top)
throws java.lang.Exception
{
/* Symbol object for return from actions */
java_cup.runtime.Symbol CUP$parser$result;
/* select the action based on the action number */
switch (CUP$parser$act_num)
{
/*. . . . . . . . . . . . . . . . . . . .*/
case 13: // term ::= ID
{
Integer RESULT =null;
int ileft = ((java_cup.runtime.Symbol)CUP$parser$stack.peek()).left;
int iright = ((java_cup.runtime.Symbol)CUP$parser$stack.peek()).right;
Integer i = (Integer)((java_cup.runtime.Symbol) CUP$parser$stack.peek()).value;
RESULT = i;
CUP$parser$result = parser.getSymbolFactory().newSymbol("term",4, ((java_cup.runtime.Symbol)CUP$parser$stack.peek()), ((java_cup.runtime.Symbol)CUP$parser$stack.peek()), RESULT);
}
return CUP$parser$result;
/*. . . . . . . . . . . . . . . . . . . .*/
case 12: // term ::= NUMBER
{
Integer RESULT =null;
int nleft = ((java_cup.runtime.Symbol)CUP$parser$stack.peek()).left;
int nright = ((java_cup.runtime.Symbol)CUP$parser$stack.peek()).right;
Integer n = (Integer)((java_cup.runtime.Symbol) CUP$parser$stack.peek()).value;
RESULT = n;
CUP$parser$result = parser.getSymbolFactory().newSymbol("term",4, ((java_cup.runtime.Symbol)CUP$parser$stack.peek()), ((java_cup.runtime.Symbol)CUP$parser$stack.peek()), RESULT);
}
return CUP$parser$result;
/*. . . . . . . . . . . . . . . . . . . .*/
case 11: // term ::= LPAREN expr RPAREN
{
Integer RESULT =null;
int eleft = ((java_cup.runtime.Symbol)CUP$parser$stack.elementAt(CUP$parser$top-1)).left;
int eright = ((java_cup.runtime.Symbol)CUP$parser$stack.elementAt(CUP$parser$top-1)).right;
Integer e = (Integer)((java_cup.runtime.Symbol) CUP$parser$stack.elementAt(CUP$parser$top-1)).value;
RESULT = e;
CUP$parser$result = parser.getSymbolFactory().newSymbol("term",4, ((java_cup.runtime.Symbol)CUP$parser$stack.elementAt(CUP$parser$top-2)), ((java_cup.runtime.Symbol)CUP$parser$stack.peek()), RESULT);
}
return CUP$parser$result;
/*. . . . . . . . . . . . . . . . . . . .*/
case 10: // factor ::= term
{
Integer RESULT =null;
int tleft = ((java_cup.runtime.Symbol)CUP$parser$stack.peek()).left;
int tright = ((java_cup.runtime.Symbol)CUP$parser$stack.peek()).right;
Integer t = (Integer)((java_cup.runtime.Symbol) CUP$parser$stack.peek()).value;
RESULT = new Integer(t.intValue());
CUP$parser$result = parser.getSymbolFactory().newSymbol("factor",3, ((java_cup.runtime.Symbol)CUP$parser$stack.peek()), ((java_cup.runtime.Symbol)CUP$parser$stack.peek()), RESULT);
}
return CUP$parser$result;
/*. . . . . . . . . . . . . . . . . . . .*/
case 9: // factor ::= factor DIVIDE term
{
Integer RESULT =null;
int fleft = ((java_cup.runtime.Symbol)CUP$parser$stack.elementAt(CUP$parser$top-2)).left;
int fright = ((java_cup.runtime.Symbol)CUP$parser$stack.elementAt(CUP$parser$top-2)).right;
Integer f = (Integer)((java_cup.runtime.Symbol) CUP$parser$stack.elementAt(CUP$parser$top-2)).value;
int tleft = ((java_cup.runtime.Symbol)CUP$parser$stack.peek()).left;
int tright = ((java_cup.runtime.Symbol)CUP$parser$stack.peek()).right;
Integer t = (Integer)((java_cup.runtime.Symbol) CUP$parser$stack.peek()).value;
RESULT = new Integer(f.intValue() / t.intValue());
CUP$parser$result = parser.getSymbolFactory().newSymbol("factor",3, ((java_cup.runtime.Symbol)CUP$parser$stack.elementAt(CUP$parser$top-2)), ((java_cup.runtime.Symbol)CUP$parser$stack.peek()), RESULT);
}
return CUP$parser$result;
/*. . . . . . . . . . . . . . . . . . . .*/
case 8: // factor ::= factor TIMES term
{
Integer RESULT =null;
int fleft = ((java_cup.runtime.Symbol)CUP$parser$stack.elementAt(CUP$parser$top-2)).left;
int fright = ((java_cup.runtime.Symbol)CUP$parser$stack.elementAt(CUP$parser$top-2)).right;
Integer f = (Integer)((java_cup.runtime.Symbol) CUP$parser$stack.elementAt(CUP$parser$top-2)).value;
int tleft = ((java_cup.runtime.Symbol)CUP$parser$stack.peek()).left;
int tright = ((java_cup.runtime.Symbol)CUP$parser$stack.peek()).right;
Integer t = (Integer)((java_cup.runtime.Symbol) CUP$parser$stack.peek()).value;
RESULT = new Integer(f.intValue() * t.intValue());
CUP$parser$result = parser.getSymbolFactory().newSymbol("factor",3, ((java_cup.runtime.Symbol)CUP$parser$stack.elementAt(CUP$parser$top-2)), ((java_cup.runtime.Symbol)CUP$parser$stack.peek()), RESULT);
}
return CUP$parser$result;
/*. . . . . . . . . . . . . . . . . . . .*/
case 7: // expr ::= factor
{
Integer RESULT =null;
int fleft = ((java_cup.runtime.Symbol)CUP$parser$stack.peek()).left;
int fright = ((java_cup.runtime.Symbol)CUP$parser$stack.peek()).right;
Integer f = (Integer)((java_cup.runtime.Symbol) CUP$parser$stack.peek()).value;
RESULT = new Integer(f.intValue());
CUP$parser$result = parser.getSymbolFactory().newSymbol("expr",2, ((java_cup.runtime.Symbol)CUP$parser$stack.peek()), ((java_cup.runtime.Symbol)CUP$parser$stack.peek()), RESULT);
}
return CUP$parser$result;
/*. . . . . . . . . . . . . . . . . . . .*/
case 6: // expr ::= expr MINUS factor
{
Integer RESULT =null;
int eleft = ((java_cup.runtime.Symbol)CUP$parser$stack.elementAt(CUP$parser$top-2)).left;
int eright = ((java_cup.runtime.Symbol)CUP$parser$stack.elementAt(CUP$parser$top-2)).right;
Integer e = (Integer)((java_cup.runtime.Symbol) CUP$parser$stack.elementAt(CUP$parser$top-2)).value;
int fleft = ((java_cup.runtime.Symbol)CUP$parser$stack.peek()).left;
int fright = ((java_cup.runtime.Symbol)CUP$parser$stack.peek()).right;
Integer f = (Integer)((java_cup.runtime.Symbol) CUP$parser$stack.peek()).value;
RESULT = new Integer(e.intValue() - f.intValue());
CUP$parser$result = parser.getSymbolFactory().newSymbol("expr",2, ((java_cup.runtime.Symbol)CUP$parser$stack.elementAt(CUP$parser$top-2)), ((java_cup.runtime.Symbol)CUP$parser$stack.peek()), RESULT);
}
return CUP$parser$result;
/*. . . . . . . . . . . . . . . . . . . .*/
case 5: // expr ::= expr PLUS factor
{
Integer RESULT =null;
int eleft = ((java_cup.runtime.Symbol)CUP$parser$stack.elementAt(CUP$parser$top-2)).left;
int eright = ((java_cup.runtime.Symbol)CUP$parser$stack.elementAt(CUP$parser$top-2)).right;
Integer e = (Integer)((java_cup.runtime.Symbol) CUP$parser$stack.elementAt(CUP$parser$top-2)).value;
int fleft = ((java_cup.runtime.Symbol)CUP$parser$stack.peek()).left;
int fright = ((java_cup.runtime.Symbol)CUP$parser$stack.peek()).right;
Integer f = (Integer)((java_cup.runtime.Symbol) CUP$parser$stack.peek()).value;
RESULT = new Integer(e.intValue() + f.intValue());
CUP$parser$result = parser.getSymbolFactory().newSymbol("expr",2, ((java_cup.runtime.Symbol)CUP$parser$stack.elementAt(CUP$parser$top-2)), ((java_cup.runtime.Symbol)CUP$parser$stack.peek()), RESULT);
}
return CUP$parser$result;
/*. . . . . . . . . . . . . . . . . . . .*/
case 4: // expr_part ::= expr NT$0 SEMI
{
Object RESULT =null;
// propagate RESULT from NT$0
RESULT = (Object) ((java_cup.runtime.Symbol) CUP$parser$stack.elementAt(CUP$parser$top-1)).value;
int eleft = ((java_cup.runtime.Symbol)CUP$parser$stack.elementAt(CUP$parser$top-2)).left;
int eright = ((java_cup.runtime.Symbol)CUP$parser$stack.elementAt(CUP$parser$top-2)).right;
Integer e = (Integer)((java_cup.runtime.Symbol) CUP$parser$stack.elementAt(CUP$parser$top-2)).value;
CUP$parser$result = parser.getSymbolFactory().newSymbol("expr_part",1, ((java_cup.runtime.Symbol)CUP$parser$stack.elementAt(CUP$parser$top-2)), ((java_cup.runtime.Symbol)CUP$parser$stack.peek()), RESULT);
}
return CUP$parser$result;
/*. . . . . . . . . . . . . . . . . . . .*/
case 3: // NT$0 ::=
{
Object RESULT =null;
int eleft = ((java_cup.runtime.Symbol)CUP$parser$stack.peek()).left;
int eright = ((java_cup.runtime.Symbol)CUP$parser$stack.peek()).right;
Integer e = (Integer)((java_cup.runtime.Symbol) CUP$parser$stack.peek()).value;
System.out.println(" = " + e);
CUP$parser$result = parser.getSymbolFactory().newSymbol("NT$0",5, ((java_cup.runtime.Symbol)CUP$parser$stack.peek()), ((java_cup.runtime.Symbol)CUP$parser$stack.peek()), RESULT);
}
return CUP$parser$result;
/*. . . . . . . . . . . . . . . . . . . .*/
case 2: // expr_list ::= expr_part
{
Object RESULT =null;
CUP$parser$result = parser.getSymbolFactory().newSymbol("expr_list",0, ((java_cup.runtime.Symbol)CUP$parser$stack.peek()), ((java_cup.runtime.Symbol)CUP$parser$stack.peek()), RESULT);
}
return CUP$parser$result;
/*. . . . . . . . . . . . . . . . . . . .*/
case 1: // $START ::= expr_list EOF
{
Object RESULT =null;
int start_valleft = ((java_cup.runtime.Symbol)CUP$parser$stack.elementAt(CUP$parser$top-1)).left;
int start_valright = ((java_cup.runtime.Symbol)CUP$parser$stack.elementAt(CUP$parser$top-1)).right;
Object start_val = (Object)((java_cup.runtime.Symbol) CUP$parser$stack.elementAt(CUP$parser$top-1)).value;
RESULT = start_val;
CUP$parser$result = parser.getSymbolFactory().newSymbol("$START",0, ((java_cup.runtime.Symbol)CUP$parser$stack.elementAt(CUP$parser$top-1)), ((java_cup.runtime.Symbol)CUP$parser$stack.peek()), RESULT);
}
/* ACCEPT */
CUP$parser$parser.done_parsing();
return CUP$parser$result;
/*. . . . . . . . . . . . . . . . . . . .*/
case 0: // expr_list ::= expr_list expr_part
{
Object RESULT =null;
CUP$parser$result = parser.getSymbolFactory().newSymbol("expr_list",0, ((java_cup.runtime.Symbol)CUP$parser$stack.elementAt(CUP$parser$top-1)), ((java_cup.runtime.Symbol)CUP$parser$stack.peek()), RESULT);
}
return CUP$parser$result;
/* . . . . . .*/
default:
throw new Exception(
"Invalid action number found in internal parse table");
}
}
}
|
Localization in a random phase-conjugating medium

We theoretically study reflection and transmission of light in a one-dimensional disordered phase-conjugating medium. Using an invariant imbedding approach, a Fokker-Planck equation for the distribution of the probe light reflectance and expressions for the average probabilities of reflection and transmission are derived. A new crossover length scale for localization of light is found, which depends on the competition between phase conjugation and disorder. For weak disorder, our analytical results are in good agreement with numerical simulations.

Over the last two decades scattering of light from random optical media has received a lot of attention. In passive random media many interesting multiple-scattering effects were discovered, such as enhanced backscattering of light, intensity correlations in reflected and transmitted waves, and Anderson localization. Absorbing or amplifying random optical media have also been investigated. In the latter, the combination of coherent amplification and confinement by Anderson localization leads to amplified spontaneous emission and laser action without using mirrors, which have been observed in laser dyes and semiconductor powders, respectively. These being linear random media, it is interesting to ask what happens in a nonlinear active random medium, such as a disordered phase-conjugating medium (PCM). A PCM consists of a nonlinear optical medium with a large third-order susceptibility χ⁽³⁾, see Fig. 1. The medium is pumped by two intense counterpropagating laser beams of frequency ω₀. When a probe beam of frequency ω₀ + δ is incident on the material, a fourth beam will be generated due to the nonlinear polarization of the medium. This conjugate wave has frequency ω₀ − δ and travels with the reversed phase in the direction opposite to the probe beam. The medium thus acts as a "phase-conjugating mirror". Depending on the characteristics of the PCM, the reflected beam is either stronger or weaker than the incoming one, while the transmitted probe beam is always amplified. It has been shown that phase conjugation also occurs in disordered χ⁽³⁾-media. This raises several interesting questions with respect to reflection and transmission of light at such a disordered medium: how are the amplifying properties of a transparent PCM affected in the presence of disorder? What are the fundamental similarities and differences between a nonlinear random phase-conjugating medium and a linear amplifying or absorbing random medium? Is there a regime in which Anderson localization occurs, and what are the requirements to observe this? These questions and their answers form the subject of this paper. Our starting point is the wave equation describing a one-dimensional (1D) disordered PCM, with E_p(x) and E_c*(x) the slowly-varying amplitudes of the probe and conjugate electric fields, respectively.
The off-diagonal parameter γ ≡ γ₀e^{iφ} ∝ χ⁽³⁾E₁E₂ is the pumping-induced coupling strength between the probe and conjugate waves in the PCM, with E₁ and E₂ the electric field amplitudes of the two pump beams. The disorder is modeled by a randomly fluctuating part ε(x) of the relative dielectric constant. In order to calculate the reflection and transmission coefficients r_p, r_c, t_p and t_c, we use an invariant imbedding approach. Following Ref. we obtain the evolution equations for the probe and conjugate waves in the medium, with k₀ ≡ ω₀/c. Using the boundary conditions from Fig. 1 at x = 0 and x = L then yields the imbedding equations (5). In the absence of phase conjugation, for γ₀ = 0, equations (5a) and (5c) reduce to the well-known imbedding equations for a linear random medium, and r_c = t_c = 0. In the absence of disorder, equations (5b) and (5c) reduce to the evolution equations for r_c and t_p in a transparent PCM, and r_p = t_c = 0. Equations (5) satisfy the energy conservation law R_p + T_p − R_c − T_c = 1, with R_p ≡ |r_p|² the probe reflectance, etc. They form the basis of all our results here. We first derive a Fokker-Planck (FP) equation for the probability distribution of R_p. We set r_p ≡ √R_p e^{iφ_p}, substitute this into (5a) and subsequently into the Liouville equation for the density of points (R_p, φ_p) in phase space, and average over the disorder. Assuming a gaussian distribution for ε(L), with ⟨ε(L)⟩ = 0 and ⟨ε(L)ε(L′)⟩ = g δ(L − L′), where pointed brackets denote an average over the disorder, yields the FP equation for the distribution W. Here l ≡ L/ℓ₀, M(L) is a length-dependent gain-loss function set by the phase conjugation, and ℓ₀⁻¹ ≡ ½gk₀² is the inverse localization length in the absence of phase conjugation. In deriving the FP equation we have neglected angular variations of W (the random-phase approximation, RPA), which applies to the situation of weak disorder, when ℓ₀ ≫ 1/k₀. The FP equation is of the same form as the equation for the probability distribution of the reflectance of a linear active random medium, with the important difference that in the latter case M(L) is an L-independent constant, proportional to the imaginary part of the dielectric constant. In this analogy, our phase-conjugating medium alternates between a linear amplifying (for M(L) < 0) and a linear absorbing (for M(L) > 0) random medium. For M(L) = 0 the well-known FP equation for a passive random medium is retrieved. Multiplying both sides of the FP equation by R_p^n and integrating by parts leads to a recursion relation for the moments of the probe reflectance. For n = 1 and setting ⟨R_p²⟩ ≈ ⟨R_p⟩², integration yields the average probe reflectance ⟨R_p⟩. In the absence of phase conjugation this reduces to ⟨R_p⟩ = 1 − e^{−L/ℓ₀}, and in the absence of disorder ⟨R_p⟩ = 0, as for a transparent PCM. Using equations (5a) and (5b), one can directly obtain an evolution equation for the average Z_{n,m} ≡ ⟨R_p^n R_c^m⟩, equivalent to the moment recursion for m = 0. Solving it yields the conjugate reflectance ⟨R_c⟩. Similarly, one obtains the probe transmittance ⟨T_p⟩ from (5a) and (5c); the conjugate transmittance then follows from the conservation law R_p + T_p − R_c − T_c = 1. In order to test these analytical predictions we have carried out numerical simulations. Using a transfer-matrix method, the equations are discretized on a 1D lattice with lattice constant d, into which disorder is introduced by letting ε(x) fluctuate randomly from site to site. Figures 2-4 show the probe and conjugate reflectance and transmittance as a function of the length L of the medium for various values of the detuning and disorder.
In all cases we took d = 10⁻⁴ m and ω₀ = 10¹⁵ s⁻¹ and typical PCM parameters. Fig. 2 shows how the periodic behavior of R_c and T_p that is characteristic of a transparent PCM becomes "modulated" by an exponentially decaying envelope in the presence of weak disorder. Simultaneously, and with the same periodicity, some probe light is now reflected and some conjugate light transmitted, due to normal reflections in the disordered medium. When the amount of disorder is increased, the oscillatory behavior of the reflectances and transmittances weakens and becomes suppressed for large L, see Fig. 3. The reflected probe and conjugate intensities then both saturate, with lim_{L→∞} R_c = lim_{L→∞} R_p − 1, and T_p and T_c decay to zero (localization). For a transparent PCM the conservation law T_p − R_c = 1 applies, i.e. for each pump photon scattered into the forward (probe) beam in the medium, a photon from the other pump is scattered into the backward (phase-conjugate) beam. In the localization regime of Fig. 3, on the other hand, the conservation law R_p − R_c = 1 applies (cf. Eq. ). Hence T_p has exchanged roles with R_p due to disorder: all pump photons which are absorbed into probe and conjugate beams are now reflected and, despite amplification, the transmitted intensities are suppressed. This suppression has also been found in linear amplifying random media. The saturation of R_c suggests that the phase-conjugate reflected beam arises in the region into which the probe beam penetrates, and that amplification takes place mostly within a localization length of the point of incidence. The behavior of the transmitted intensities with increasing length of the medium is determined by two competing effects: on the one hand, enhancement occurs due to the increased probability of multiple reflections; on the other hand, less light is transmitted due to the increased probability of retroreflection of the incoming probe light. For small L, the latter effect dominates T_p in Fig. 3. As L increases, the increasing amplification of probe light due to multiple scattering takes over, which leads to exponential increase and a maximum in T_p. For larger L still, most of the probe light is reflected, and T_p decreases exponentially to zero, as in a normal disordered medium. The crossover length scale L_c between exponential increase and decrease is set by the competition between disorder and phase conjugation and takes a simple form for δ ≪ γ₀. In the opposite limit of δ ≫ γ₀, phase-conjugate reflection is weak (maximum value of R_c = 0.16) and we retrieve exponential localization, see Fig. 4. Randomness now dominates over phase conjugation and has almost washed out the oscillatory behavior of R_p and T_p. Comparing the numerical results with the analytic ones, we find good agreement (deviations < 5%) for weak disorder as in Fig. 2, and for stronger disorder and weak phase conjugation as in Fig. 4. In the intermediate regime, for γ₀ > 1/ℓ₀, results differ considerably, see inset in Fig. 3. There the RPA and the assumption ⟨R_p²⟩ ≈ ⟨R_p⟩² are not valid and a different approach is needed. In conclusion, we have studied reflection and transmission of light at a 1D disordered phase-conjugating medium in the limit of small disorder, for k₀ℓ₀ ≫ 1. The predicted behavior of the reflectances and transmittances arising from the interplay between amplification and Anderson localization displays features similar to those in a linear disordered amplifying medium.
The main difference is the coupling of two waves in the PCM, which leads to additional interference effects. In future work we intend to investigate the strong disorder regime, where the reflection of the pump beams cannot be neglected and a full nonlinear analysis is required, and to study the distribution of reflection and transmission eigenvalues and the statistical fluctuations in reflectance and transmittance for a multimode 2D or 3D disordered phase-conjugating medium. The latter is relevant to experiments, which mostly employ 3D PCMs, and interesting in the context of random lasers: in a linear disordered amplifying medium the average reflectance becomes infinitely large with increasing amplification, upon approaching the laser threshold. It would be interesting to investigate whether something similar occurs in a disordered PCM, this being a "naturally" amplifying medium and a feasible candidate for nonlinear random lasing. The author gratefully acknowledges stimulating discussions with D. Lenstra. This work was supported by the Netherlands Organisation for Scientific Research (NWO). |
Extracellular Uridine Nucleotides-Induced Contractions Were Increased in Femoral Arteries of Spontaneously Hypertensive Rats. Introduction: Femoral arterial dysfunction, including abnormal vascular responsiveness to endogenous ligands, is often seen in arterial hypertension. Extracellular nucleotides including uridine 5′-diphosphate (UDP) and uridine 5′-triphosphate (UTP) play important roles in homeostasis in the vascular system, including control of vascular tone. However, responsiveness to UDP and UTP in femoral arteries under arterial hypertension remains unclear. The aim of this study was to investigate whether hypertension affects vasoconstrictive responsiveness to UDP and UTP in femoral arteries of spontaneously hypertensive rats (SHRs) and Wistar-Kyoto rats (WKYs) at 7 and 12 months old. Methods: Organ bath experiments were conducted to determine vascular reactivity in isolated femoral arterial rings. Results: In femoral arteries obtained from 12-month-old rats, contractile responses to UDP and UTP were greater in SHRs than in WKYs, not only under intact conditions but also under nitric oxide synthase inhibition, whereas no difference in extracellular potassium-induced vasocontraction was seen between the SHR and WKY groups. Similar contraction trends occurred in femoral arteries obtained from 7-month-old rats. Moreover, contractions induced by UDP and UTP were increased in endothelium-denuded arteries. Cyclooxygenase inhibition decreased the contractions induced by these nucleotides and abolished the differences in responses between the SHR and WKY groups. Conclusions: This study demonstrates the importance of regulation of extracellular uridine nucleotides-induced contractions in hypertension-associated peripheral arterial diseases. |
public class result {
public static void main(String args[]) {
        int a = -2147483648;                   // Integer.MIN_VALUE
        System.out.println(a == -a && a != 0); // true: negating MIN_VALUE overflows back to itself
        System.out.println(1 & 1 - 1 ^ 0);     // 0: precedence gives (1 & (1 - 1)) ^ 0 = 0 ^ 0
}
}
|
import sys
from pathlib import Path
sys.path.insert(0, str(Path(__file__).parent))  # make this script's directory importable
|
Clinical predictors for biochemical failure in patients with positive surgical margin after robotic-assisted radical prostatectomy. Objective: Patients with positive surgical margins (PSMs) after radical prostatectomy for localized prostate cancer have a higher risk of biochemical failure (BCF). We investigated the risk factors for BCF in patients with PSMs after robotic-assisted radical prostatectomy (RARP). Methods: We evaluated 462 patients who underwent RARP in a single medical center from 2006 through 2013. Of them, 61 with PSMs did not receive any treatment before BCF. Kaplan-Meier curves and Cox regression analysis were used to compare patients with (n = 19) and without (n = 41) BCF. Results: Overall, 13.2% of patients had PSMs, and of those, 31.7% experienced BCF during follow-up. The mean follow-up duration was 43.7 months (42.4 vs 46.35 [BCF], p = 0.51). In univariate analyses, the platelet to lymphocyte ratio (6.26 vs 8.02, p = 0.04) differed statistically. When patients were grouped by pathologic grade ≤2 or ≥3 (p = 0.004), the BCF-free survival rates differed significantly. Seminal vesicle invasion also differed significantly (5 vs 7, p = 0.005). Patients with undetectable nadir prostate-specific antigen (PSA) after RARP (BCF rate 4/34) differed statistically from those with detectable PSA after RARP (BCF rate 15/26) (p < 0.001). In the multivariate analysis, the platelet/lymphocyte (P/L) ratio, pathologic grade, and undetectable nadir PSA remained statistically significant. Conclusions: In patients who undergo RARP and have PSMs, a P/L ratio >9 preoperatively, pathologic grade ≥3, and detectable nadir PSA after RARP should be considered adverse features. Early intervention such as salvage radiation therapy or androgen deprivation therapy should be offered to these patients. |
#ifndef __BOARD_ROUTINES_H__
#define __BOARD_ROUTINES_H__
#include "board.h"
#define FileRankTo120SQ(f, r) (21 + (f) + 10 * (r))
extern unsigned char board64to120[64];
extern unsigned char board120to64[120];
extern unsigned char board120toFile[120];
void printBoard(BOARD *cBoard);
// max move string length (5) + 1 for the terminating '\0' character
#define MAX_MOVE_STR_LENGTH 6
void chessMoveToStr(unsigned long long move, char fmtdMove[MAX_MOVE_STR_LENGTH]);
void printBestMove(BOARD *cBoard);
void printMoves(BOARD *cBoard);
void setupEmptyPosition(BOARD *cBoard);
void setupInitialPosition(BOARD *cBoard);
unsigned char checkDrawByRepetition(BOARD *cBoard);
#endif // __BOARD_ROUTINES_H__
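
For illustration, the 120-square "mailbox" mapping implied by FileRankTo120SQ and the lookup tables declared above can be sketched as follows (Python is used here for brevity; the sentinel value 64 for off-board cells is an assumption, not taken from this codebase).

# Sketch of the 10x12 mailbox board: the playable 8x8 grid sits inside a
# 120-cell array with border cells, so off-board squares need no range checks.
def file_rank_to_120(f, r):          # mirrors the FileRankTo120SQ macro
    return 21 + f + 10 * r

board64to120 = [file_rank_to_120(sq % 8, sq // 8) for sq in range(64)]
board120to64 = [64] * 120            # 64 marks off-board cells (assumption)
for sq64, sq120 in enumerate(board64to120):
    board120to64[sq120] = sq64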
|
package testapp.endpoint;
import org.junit.Test;
// Verifies GH issue 295: a GET to /gh/295/done should respond with the body "done".
public class GHIssue295 extends EndpointTester {
@Test
public void test() throws Exception {
url("/gh/295/done").get();
bodyEq("done");
}
}
|
Impulsivity and compulsivity in Internet gaming disorder: A comparison with obsessive–compulsive disorder and alcohol use disorder Background and aims Internet gaming disorder (IGD) is characterized by a loss of control over and a preoccupation with Internet games, leading to repetitive behavior. We aimed to compare the baseline neuropsychological profiles of IGD, alcohol use disorder (AUD), and obsessive–compulsive disorder (OCD) on the spectrum of impulsivity and compulsivity. Methods A total of 225 subjects (IGD, N=86; AUD, N=39; OCD, N=23; healthy controls, N=77) were administered traditional neuropsychological tests, including the Korean version of the Stroop Color–Word test, and computerized neuropsychological tests, including the stop signal test (SST) and the intra–extra dimensional set shift test (IED). Results Within the domain of impulsivity, the IGD and OCD groups made significantly more direction errors in the SST (p=.003, p=.001) and showed significantly delayed reaction times in the color–word reading condition of the Stroop test (p=.049, p=.001). The OCD group showed the slowest reading time in the color–word condition among the four groups. Within the domain of compulsivity, IGD patients showed the worst performance among the groups in IED total trials, which measure attentional set-shifting ability. Conclusions Both the IGD and OCD groups shared impairment in inhibitory control functions as well as cognitive inflexibility. Neurocognitive dysfunction in IGD is linked to the impulsivity and compulsivity features of behavioral addiction rather than to impulse dyscontrol by itself. INTRODUCTION Internet gaming disorder (IGD) was recently included in the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) as "a condition for further study." The clinical diagnosis of IGD is based on behavioral patterns encompassing persistent thoughts about Internet games and persistent use of the Internet to engage in games, leading to significant impairment or distress (American Psychiatric Association, 2013). Symptoms in patients with IGD resemble addiction-specific phenomena comparable with those seen in substance-related addiction, including cravings, withdrawal symptoms such as unpleasant feeling states, and tolerance. Consistent with this notion, many researchers have proposed that IGD be recognized as a behavioral addiction (Dowling, 2014; Pontes, Kiraly, Demetrovics, & Griffiths, 2014). The need to establish diagnostic criteria for IGD as a unique condition, differentiated from substance use and gambling disorders, has been raised. Moreover, debates on the proposed inclusion of gaming disorder in the upcoming ICD-11 have included the specificity of the current operationalization of the IGD construct compared with traditional substance addictions. In spite of such concerns, other researchers claim that the loss of control and continued playing despite negative consequences that define gaming disorder in the ICD-11 proposal have strong general support and fit well within a behavioral addiction framework. Both the DSM-5 and the ICD-11 proposal treat loss of control and continuous harmful behavior as defining features of IGD (Kiraly & Demetrovics, 2017). Hence, attention should be paid to integrating the neurobiological substrates and the clinical phenomena of loss of control and repetitive behavior into alternative theoretical models.
Alcohol use disorder (AUD), a "traditional" substance addictive disorder, shows repeated behavior involving continued excessive use of the substance. Obsessive–compulsive disorder (OCD) is also associated with repetitive compulsive behavior. Most patients with OCD have excessive repetitive behavior, characterized by an inability to delay or inhibit ongoing action, leading to functional impairment. IGD and substance use disorder (SUD) have some phenomenological overlap with OCD in terms of repetitive behaviors. A feature of IGD involves repeated unsuccessful efforts to control gaming behavior. Similarly, patients with SUD cannot resist their impulse toward substance use and continue compulsive substance consumption despite adverse consequences (O'Brien, Volkow, & Li, 2006). Such phenomenological similarities across these disorders in terms of repetitive behavior can be viewed on the spectrum between impulsivity and compulsivity. Traditionally, impulsivity and compulsivity have been proposed as opposite constructs. The impulsivity construct is conceptualized as a tendency to act prematurely without foresight, in a manner that is unduly risky or inappropriate to the situation, whereas compulsivity is related to repetitive behaviors performed in a habitual manner to protect the individual from perceived negative consequences (Curatolo, Paloscia, D'Agati, Moavero, & Pasini, 2009). However, regarding symptoms, disorders characterized by impulsivity often share features with compulsivity (Grant & Kim, 2014). Indeed, it has been proposed that impulsive and compulsive behaviors overlap and often become more intertwined over time. In attempting to understand the neurobiological and psychological processes mediating addictive behavior, researchers have suggested that continued substance use is related not only to an intense urge and craving but also to loss of control and a compulsive pattern. Patients with OCD have difficulty suppressing intrusive thoughts, and their compulsive behavior might arise from such an underlying deficit in inhibitory cognitive control (Purcell, Maruff, Kyrios, & Pantelis, 1998). On a neuroanatomical level, these two constructs may both be explained by a failure of the response-control system mediated by separate but intercommunicating frontal-striatal neural circuits (Dalley, Everitt, & Robbins, 2011). On a neurocognitive level, obsessive–compulsive symptoms seen in OCD have been suggested to result from a failure of inhibitory control or an inability to shift attention from ongoing thoughts or motor activities toward less distressing ones (Greisberg & McKay, 2003). Furthermore, many studies have examined cognitive dysfunction in OCD based on the assumption that compulsive behaviors result from the failure of dysfunctional frontal circuits to inhibit basal ganglia motor or cognitive programs. In an attempt to understand OCD within this impulsive–compulsive spectrum, a previous study proposed that impulsive and compulsive symptoms in OCD reflect cognitive inflexibility as well as impaired motor inhibition, based on cognitive tasks assessing the ability to shift attentional focus and to suppress unwanted motor responses (Chamberlain, Fineberg, Blackwell, Robbins, & Sahakian, 2006). The author also reported that OCD patients showed cognitive inflexibility, as measured by extradimensional set shifting, and motor impulsivity, as measured by stop signal reaction time.
To investigate disrupted underlying neurocognitive processes across behavioral addiction, substance addiction, and OCD, a recent study directly compared pathological gambling (PG), alcohol dependence (AD), and OCD patients with healthy controls (HC) on self-reported and cognitive measures of compulsivity and impulsivity (Bottesi, Ghisi, Ouimet, Tira, & Sanavio, 2015). The authors suggested similarities and differences across the PG, AD, and OCD groups in motor inhibition ability and decision-making processes. In a recent study directly comparing impulsivity and compulsivity in IGD, PG, and AUD patients using neurocognitive measurements, the IGD group was found to share features of impulsivity, rather than compulsivity, with those having other addictive disorders (Choi, Kim, et al., 2014). Taken together, a recent review has raised the issue of directly comparing IGD and OCD at a neurobiological level to provide a more precise conceptualization of IGD between behavioral addiction and impulse-control disorders, as the initial impulsivity followed by compulsivity in behavioral addiction can be differentiated from an impulse-control disorder (Starcevic & Aboujaoude, 2017). In this study, our objective was to investigate two questions: (i) whether IGD patients exhibit greater disinhibition than non-clinical controls in the cognitive and motor domains of impulsivity, and (ii) whether IGD patients show greater cognitive inflexibility than non-clinical controls in the domain of compulsivity. Our second area of interest was to clarify whether IGD patients differ from a non-clinical comparison group with respect to levels of impulsivity and compulsivity, and whether any such difference is unique to IGD or shared by individuals with AUD and OCD. Subjects The sample comprised 86 patients with a diagnosis of IGD, 39 with AUD, 23 with OCD, and 77 HC. IGD and AUD patients were recruited from the outpatient clinic of SMG-SNU Boramae Medical Center in Seoul, South Korea, where they were being treated for excessive Internet gaming or alcohol use. HC subjects were recruited from the local community; they had no history of psychiatric illness and played Internet games less than 2 hr/day. OCD patients were recruited from the OCD outpatient clinic at Seoul National University Hospital (SNUH). All patients with IGD, AUD, and OCD were diagnosed by an experienced psychiatrist according to DSM-5 criteria. Young's Internet Addiction Test was used to assess the severity of IGD. Test items are rated on a 5-point scale ranging from 1 (very rarely) to 5 (very frequently). The Korean version of the Alcohol Use Disorder Identification Test (AUDIT-K) was used to assess the severity of AUD. This scale measures the frequency of alcohol abuse behavior and contains 10 questions, scored on a 4-point Likert scale. The cutoff value for high-risk drinking is above 10 for males and 6 for females. The severity of OCD was assessed with the Yale–Brown Obsessive-Compulsive Scale, a clinician-administered measurement consisting of 10 items. Total scores range from 0 to 40; scores under 7 are considered subclinical. Of the 23 OCD patients, 11 were medicated at the time of testing; all were taking a selective serotonin reuptake inhibitor, and one patient was prescribed a small dose of olanzapine (2.5 mg) as an adjuvant. Seven OCD patients were medication-naive, and five patients had been medication-free for more than 1 month before entering the study. All patients with IGD and AUD were medication-naive for their lifetime.
The Structured Clinical Interview for DSM-IV (SCID-IV) was administered to identify past and present psychiatric illness in the participants. To measure comorbid depression and anxiety, all patients completed the Beck Depression Inventory (BDI; Beck, Ward, Mendelson, Mock, & Erbaugh, 1961) and the Beck Anxiety Inventory (BAI; Beck, Epstein, Brown, & Steer, 1988). The BDI and BAI are 21-item self-report questionnaires for evaluating the severity of depression and anxiety, with items scored from 0 (not at all) to 3 (severely). In the BDI, a total score of 0-9 is considered minimal, 10-18 mild, 19-29 moderate, and 30-63 severe. In the BAI, a total score of 0-9 is considered minimal, 10-16 mild, 17-29 moderate, and 30-63 severe. Exclusion criteria included neurological disease; significant head injury accompanied by loss of consciousness; medical illness with documented cognitive sequelae; sensory impairment; or intellectual disability (IQ < 70). Assessments of impulsivity and compulsivity We used the Cambridge Neuropsychological Test Automated Battery (CANTAB), a neuropsychological assessment battery administered by computer using a touch-sensitive screen. It has been used for neuropsychological research across different populations and to study development in the cognitive domain (Luciana & Nelson, 2002; Roque, Teixeira, Zachi, & Ventura, 2011). Impulsivity was measured using the stop signal test (SST) from the CANTAB, which assesses the ability to inhibit a prepotent response and impulse control (Logan, Schachar, & Tannock, 1997). During the task, participants press the left or right button depending on the direction in which an arrow points. In the second part, an auditory stop signal instructs participants to withhold that response. Net direction errors, the proportion of successful stops, reaction time on go trials, and stop signal reaction time were used as the dependent variables in this study (see http://www.cambridgecognition.com/cantab/). Compulsivity was assessed with the intra–extra dimensional set shift test (IED) from the CANTAB, which measures the ability to shift attentional set. This test examines the ability to inhibit and shift attention between stimulus dimensions (Lawrence, Sahakian, & Robbins, 1998). In this task, two artificial dimensions, color-filled shapes and white lines, are presented. Participants must learn which of two visual stimuli is correct from feedback, advancing to the next stage after six consecutive correct responses. Outcome measures are the number of errors, the number of trials completed, and the number of stages completed (see http://www.cambridgecognition.com/cantab/). Inability to shift attention is an important factor in rigid mental acts and repetitive behavior, leading to an inability to shift attention away from specific thoughts or a behavioral set. We used the Korean Color-Word Stroop Test (K-CWST) as a measure of interference control. In the color-word condition, participants are asked to name the ink color of color-words that differ from the printed color names as quickly as possible. Therefore, they have to inhibit the automatic process of reading during the K-CWST. The trail making test (TMT), which assesses motor planning (type A) and cognitive flexibility related to compulsivity (type B), was also used. The task requires participants to connect a sequence of consecutive targets on a computer screen.
TMT-A requires an individual to connect the presented numbers as quickly as possible, reflecting visuospatial searching ability. TMT-B requires a subject to connect numbers and letters alternately, additionally measuring the capacity for cognitive shifting. Total time in seconds for parts A and B and the number of errors (incorrect lines drawn to a target) were set as dependent variables. Statistical analysis Before the formal analysis, we conducted exploratory data analyses to identify and remove outliers and thereby reduce the possibility of spurious results. We performed analysis of variance (ANOVA) to examine the distinct characteristics of the groups. Analysis of covariance (ANCOVA) and Poisson regression were performed to evaluate group differences. We divided the measures into continuous and discrete variables. ANCOVA was performed to compare continuous variables, including TMT A/B reaction time and K-CWST reading time. Discrete variables, such as the number of TMT A/B errors and the number of K-CWST reading errors, were analyzed using Poisson regression. We set age, IQ, depression (BDI), and anxiety (BAI) scores as covariates for the ANCOVA and Poisson regression; a sketch of this analysis is given below. All statistical analyses were performed using IBM SPSS software (version 21; IBM Inc., Armonk, NY, USA). p values <.05 were considered to indicate statistical significance. Ethics The study was conducted in accordance with the Declaration of Helsinki. The institutional review boards of SMG-SNU Boramae Medical Center and SNUH approved this study. All participants were informed about the study and provided written informed consent. Subject characteristics The demographic and clinical/cognitive characteristics of the participants are presented in Table 1. No statistically significant difference was observed in gender distribution (p =.058) among the four groups, but male participants were predominant in all groups. Clinical/cognitive differences were observed among the IGD, AUD, OCD, and HC groups. The IGD group showed the highest IAT score (p <.001). The AUD group was the oldest (p <.001) and had the highest AUDIT (p <.001), BDI (p <.001), and BAI (p <.001) scores among the four groups. The OCD group showed markedly higher Y-BOCS scores (p <.001) compared with the IGD, AUD, and HC groups. The HC group had the highest IQ scores (p <.001). Neurocognitive performance Impulsivity in neurocognitive measurements. In the domain of impulsivity, both the IGD and OCD groups made significantly more net direction errors on stop-and-go trials in the SST (IGD: mean = 3.929 ± 6.852; OCD: mean = 4.000 ± 5.222) than did HC (mean = 2.000 ± 3.495) in the post-hoc test (p =.004 and p <.001, respectively). The IGD and OCD groups (IGD: mean = 3.071 ± 5.544; OCD: mean = 3.043 ± 3.902) also made more direction errors than HC (mean = 1.493 ± 2.910) on go trials in the SST in the post-hoc test (p =.003 and p =.001, respectively). The findings for the other groups were inconclusive (Table 2, Figure 1). Neurocognitive measurement of compulsivity. In the domain of compulsivity, the IGD group needed more total trials to complete the IED test (IED total trials) compared with the AUD group (IGD: mean = 80.635 ± 19.660; AUD: mean = 75.943 ± 11.757; p =.007) in the post-hoc test with Bonferroni correction (Table 2, Figure 1). Completion times and error rates on TMT parts A and B did not vary by diagnostic group (Table 2).
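As an illustrative aside, the group-comparison analysis described in the Statistical analysis section might be sketched in Python with statsmodels as follows; the file and column names (score variables, group, age, IQ, BDI, BAI) are hypothetical placeholders, not the study's actual data or code.

import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("neurocog.csv")  # hypothetical: one row per subject

# Continuous outcome (e.g., K-CWST reading time): OLS with the group factor
# plus age, IQ, BDI and BAI as covariates (an ANCOVA-style model).
ancova = smf.ols("stroop_cw_time ~ C(group) + age + IQ + BDI + BAI", data=df).fit()
print(ancova.summary())

# Count outcome (e.g., TMT-B errors): Poisson regression with the same covariates.
poisson = smf.poisson("tmt_b_errors ~ C(group) + age + IQ + BDI + BAI", data=df).fit()
print(poisson.summary())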
In the color–word condition of the CWST, which requires participants to name the ink color of a word whose meaning mismatches that color, both the IGD and OCD groups (IGD: mean = 105.470 ± 21.389; OCD: mean = 118.217 ± 36.478) had slower reading times compared with HC (mean = 94.623 ± 17.826) in the post-hoc test with Bonferroni correction (p =.004 and p =.001, respectively). In particular, the OCD group showed the slowest reading time in the color-word condition among the groups (Table 2, Figure 1). DISCUSSION This is the first reported study to identify the neurocognitive characteristics of IGD, AUD, and OCD from the perspective of impulsivity and compulsivity. This study showed behavioral abnormalities in both IGD and OCD in relation to impaired response inhibition and cognitive inflexibility. Regarding response inhibition, the IGD and OCD groups showed worse performance than the HC group on the SST, which taps motor and cognitive inhibition. Therefore, our first hypothesis, concerning impulsivity in IGD, was supported. Regarding compulsivity, the IGD and OCD groups needed more effort to switch attention in the incongruent color-word condition of the Stroop test, reflecting their cognitive inflexibility. This finding supports our second hypothesis, concerning compulsivity in IGD. Previous studies using the SST have reported impaired response inhibition compared with control groups, suggesting behavioral impulsivity in IGD. A chronic course followed by repetitive relapse in addiction may stem from dysfunctional top-down inhibitory circuitry. This impairment may explain why individuals with IGD have difficulty suppressing cravings toward disease-related cues and continue repetitive self-defeating behavior. An increase in response time on the K-CWST may result from response competition in a situation demanding that one inhibit the incorrect, but easier, response. Many researchers have also used the Stroop-related effect to measure the suppression of prepotent responses in substance addiction (Goldstein & Volkow, 2002). Obsessive-compulsive symptoms seen in OCD have been suggested to be examples of inhibitory failure or an inability to shift attention from ongoing thoughts or motor activities toward less distressing ones (Greisberg & McKay, 2003). Based on evidence from neuroimaging and neuropsychological studies, fronto-striatal dysfunction has been implicated in the pathophysiology of OCD. There have been extensive studies on dysfunctional inhibitory control with various measures and paradigms in patients with OCD (Benatti, Dell'Osso, Arici, Hollander, & Altamura, 2014; Krikorian, Zimmerman, & Fleck, 2004; Moritz, Kloss, & Jelinek, 2010). Regarding the Stroop test, prior work suggests that interference control is compromised in individuals with OCD, since OCD patients performed worse than controls in inhibitory prefrontal function tests, including the STOP task, GO/NO-GO task, and Stroop task. Since cognitive flexibility implies the ability to de-automatize automated responses and to adapt cognitive processing strategies to new conditions, the Stroop interference effect is related to cognitive inflexibility (Canas, Quesada, Antol, & Fajardo, 2003; Moore & Malinowski, 2009). Taken together, the increased response time on the CWST in IGD and OCD can be regarded as evidence not only of cognitive inflexibility but also of impaired inhibition of interfering stimuli.
In this study, the IGD group showed the worst performance among the four groups in IED total trials, which measure attentional set shifting, in which attention is required to switch between higher-order modalities (Block, Dhanji, Thompson-Tardif, & Floresco, 2007). As attentional set shifting assesses the ability to adapt behavior flexibly following feedback (Kehagia, Murray, & Robbins, 2010), this finding indicates that persistent damaging behaviors in IGD result from a failure to learn new strategies according to the requirements of a given context. Several studies in SUD have argued that, over the course of addiction, initially impulsive drug use becomes compulsive drug-taking behavior following neuro-adaptation of striatal circuits, notably a shift from ventral striatal to dorsal striatal hyperactivation (Everitt & Robbins, 2005). That is, in the early phase of addiction, individuals initially make risky, but goal-directed, acts to gain immediate pleasure or relief. However, as addiction progresses, the reward effect diminishes, leading to escalating time spent on addictive behaviors. Instead, as compulsive habits develop, stimulus-driven responses can become the driving force behind repetitive behavior (Lubman, Yücel, & Pantelis, 2004). Individuals with IGD have difficulty ending their gaming behavior. Their repetitive action toward gaming-related cues may be explained by their tendency to respond habitually rather than take goal-directed action. Similarly, in OCD, even though stress and anxiety initially lead to the formation of habits, the driving force of compulsive actions may come from habitual and automatic responses rather than the expectation of anxiety relief. Consistent with this, one promising treatment modality in OCD is exposure to a conditioned cue (e.g., a bathroom doorknob) and prevention of the subsequent compulsive action, leading to the subject's gaining control over the external stimulus (Whittal, Thordarson, & McLean, 2005). Thus, in both IGD and OCD, excessive and inflexible behaviors can be explained by stimulus-driven habitual responses with respect to compulsivity. The OCD and HC groups did not differ significantly in IED total trials. One explanation for this finding could be a medication effect in the OCD group, because manipulation of the serotonergic system can affect cognitive functioning. Among the participants in this study, the OCD group included only 12 patients taking prescribed medication at the time of testing. This study has several limitations that need to be considered when interpreting the findings. First, the representativeness of the populations may be a concern. In this study, the AUD group did not show clear deficits compared with the control group in neurocognitive measurements. Second, the sample consisted primarily of male participants. In addition, the medication status of the OCD patients was not controlled in the analysis. Further research should consider including equal proportions of subjects in all groups and ensuring greater homogeneity among patient groups. In this study, we sought to determine commonalities and differences in the neurocognitive characteristics of IGD, AUD, and OCD individuals, all of whom show rigid patterns of behavioral repetition associated with significant impairments in function, viewed from the perspective of impulsivity and compulsivity. Our findings indicate that patients with IGD and OCD share an underlying deficit in inhibitory control and cognitive shifting.
Thus, continuous playing behavior in IGD may reflect difficulty with suppressing cue-initiated responses and responding flexibly to changing conditions. We conclude that the cognitive characteristics of IGD differ in some ways from those of AUD and OCD, but there are also similarities across these conditions. These findings may help in characterizing substance and behavioral addictions more precisely, and in understanding shared neurobiological substrates in addiction and OCD. Initial patient assessment based on measurable neurocognitive characteristics from the impulsivity-compulsivity perspective may offer a more integrated understanding of these disorders than a categorical conceptualization based on certain diagnostic criteria. Such findings would help people to understand their problems from an objective neurobiological perspective, and to enter into a specific cognitive-behavioral change strategy. Specific therapeutic interventions, such as targeting prepotent motor inhibition for out-of-control behavior and targeting cognitive inflexibility for repetitive behavior, could then be adopted. There remains a need for further investigation of the neurobiological correlates of the relationships between IGD, OCD, and other addictive disorders. Funding sources: This work was supported by a grant from the National Research Foundation of Korea (Grant No. 2014M3C7A1062894). Authors' contribution: J-SC was responsible for the study concept and design. Y-JK drafted the manuscript and performed interpretation of data. JAL and SO contributed to statistical analysis of data. SNK, DJK, JEH, and JSK contributed to the acquisition and supervision of data collection. All authors had full access to all the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis. |
Reliability and agreement of adipose tissue fat fraction measurements with water–fat MRI in patients with manifest cardiovascular disease The supraclavicular fat depot is known for the presence of brown adipose tissue. To unravel adipose tissue physiology and metabolism, high-quality and reproducible imaging is required. In this study we quantified the reliability and agreement of MRI fat fraction measurements in supraclavicular and subcutaneous adipose tissue of 25 adult patients with clinically manifest cardiovascular disease. MRI fat fraction measurements were made under ambient temperature conditions using a vendor-supplied mDixon chemical-shift water–fat multi-echo pulse sequence at 1.5 T field strength. Supraclavicular fat fraction reliability (intraclass correlation coefficient for agreement, ICC-agreement) was 0.97 for test–retest, 0.95 for intraobserver and 0.56 for interobserver measurements; the latter increased to 0.88 when ICC-consistency was estimated. Supraclavicular fat fraction agreement displayed mean differences of 0.5% (limits of agreement (LoA) −1.7 to 2.6) for test–retest, −0.5% (LoA −2.9 to 2.0) for intraobserver and 5.6% (LoA 0.4 to 10.8) for interobserver measurements. Median fat fraction was 82.5% (interquartile range (IQR) 78.6–84.0) in supraclavicular adipose tissue and 89.7% (IQR 87.2–91.5) in subcutaneous adipose tissue (p < 0.0001). In conclusion, water–fat MRI has good reliability and agreement for measuring adipose tissue fat fraction in patients with manifest cardiovascular disease. These findings enable research on determinants of fat fraction and enable longitudinal monitoring of fat fraction within adipose tissue depots. Interestingly, even in adult patients with manifest cardiovascular disease, supraclavicular adipose tissue has a lower fat fraction than subcutaneous adipose tissue, suggestive of distinct morphologic characteristics, such as brown adipose tissue.
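A minimal sketch of how the reported reliability coefficients could be computed with the Python pingouin library; the long-format dataframe and its column names are hypothetical placeholders, not the authors' pipeline.

import pandas as pd
import pingouin as pg

# Hypothetical long-format data: one fat-fraction value per subject per session.
df = pd.read_csv("fat_fraction_long.csv")  # columns: subject, session, ff

icc = pg.intraclass_corr(data=df, targets="subject",
                         raters="session", ratings="ff")
# ICC2 (two-way random, single rater) reflects absolute agreement;
# ICC3 (two-way mixed, single rater) reflects consistency.
print(icc[["Type", "ICC", "CI95%"]])
|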
A draft guideline for phase III studies of anticancer drugs was presented, and practical problems with its objectives were discussed. The purpose of a phase III study is to evaluate the clinical usefulness of an anticancer drug in terms of effectiveness and toxicity, and the comparison of new drugs with standard therapy is most important. Comparative studies comprise two types of trial: randomized controlled trials and non-randomized controlled trials. In terms of precision, the randomized controlled trial is the most rational design, but it raises some practical and ethical controversies. On the other hand, non-randomized controlled trials have problems of comparability with respect to prognostic factors. |
// enqueue puts an element at the tail of the queue; it fails if the queue is full.
func (q *CircularQueue) enqueue(v interface{}) bool {
	if q.IsFull() {
		return false
	}
	q.data[q.tail] = v
	q.tail = (q.tail + 1) % q.capacity // advance the tail index with wrap-around
	return true
}
#include <stdio.h>

typedef unsigned u;

u A[222222], w; /* A: 1-indexed input with sentinels at both ends; w: count of violations */

/* chk(i) is 1 when the zigzag order is violated at position i: even positions
   must be strictly greater than their predecessor, odd positions strictly smaller.
   The sentinels A[0] and A[n+1] make the boundary checks vacuous. */
u chk(u i) { return (i & 1) ? (A[i] >= A[i - 1]) : (A[i] <= A[i - 1]); }

/* F(i,j): net change in the number of violations if A[i] and A[j] are swapped.
   Only positions i, i+1, j and j+1 can be affected by the swap. */
u F(u i, u j)
{
	u k, r = 0;
	if (i == j) return 0;
	k = A[i]; A[i] = A[j]; A[j] = k; /* swap */
	r += chk(i) + (i + 1 == j ? 0 : chk(i + 1)) + chk(j) + (j + 1 == i ? 0 : chk(j + 1));
	k = A[i]; A[i] = A[j]; A[j] = k; /* swap back */
	r -= chk(i) + (i + 1 == j ? 0 : chk(i + 1)) + chk(j) + (j + 1 == i ? 0 : chk(j + 1));
	return r; /* wraps modulo 2^32 when the count decreases */
}

int main()
{
	u n, i = 0, j, r = 0;
	*A = -1; /* left sentinel (wraps to UINT_MAX) */
	for (scanf("%u", &n); ++i <= n; w += chk(i)) scanf("%u", A + i);
	A[n + 1] = (n & 1) ? -1 : 0; /* right sentinel, chosen so chk(n+1) is vacuous */
	/* A single swap touches at most 4 positions, so more than 4 violations are unfixable. */
	if (w > 4) { printf("0\n"); return 0; }
	/* Count swaps (i,j) that remove every violation; i must sit next to a violation,
	   and pairs where both indices do are counted only once (j < i is skipped). */
	for (i = 0; ++i <= n;) if (chk(i) || chk(i + 1)) for (j = 0; ++j <= n;)
	{
		if ((chk(j) || chk(j + 1)) && j < i) continue;
		if (F(i, j) == -w) ++r;
	}
	printf("%u\n", r);
	return 0;
}
|
package org.carlspring.strongbox.xml;
import org.carlspring.strongbox.storage.repository.MutableRepository;
import javax.xml.bind.annotation.adapters.XmlAdapter;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
/**
* @author mtodorov
*/
public class RepositoryMapAdapter
extends XmlAdapter<RepositoryMap, Map<String, MutableRepository>>
{
@Override
public RepositoryMap marshal(Map<String, MutableRepository> map)
throws Exception
{
RepositoryMap repositoryMap = new RepositoryMap();
if (map != null)
{
for (Map.Entry<String, MutableRepository> entry : map.entrySet())
{
repositoryMap.getEntries().add(entry.getValue());
}
}
return repositoryMap;
}
@Override
public Map<String, MutableRepository> unmarshal(RepositoryMap repositoryMap)
throws Exception
{
Map<String, MutableRepository> map = new LinkedHashMap<>();
if (repositoryMap != null && repositoryMap.getEntries() != null)
{
List<MutableRepository> entries = repositoryMap.getEntries();
map = new LinkedHashMap<>(entries.size());
for (MutableRepository repository : entries)
{
map.put(repository.getId(), repository);
}
}
return map;
}
}
|
Origin of the stress-induced magnetic anisotropy in an Fe-based nanocrystalline alloy In this study, the presence of structural anisotropy in a stress-annealed Fe-Si-B-Nb-Cu nanocrystalline alloy was investigated, and the origin of the large magnetic anisotropy induced by stress-annealing was discussed. Melt-spun amorphous ribbons with a composition of Fe73.5Si15.5B7Nb3Cu1 were heated to 550 °C. A tensile stress was applied to the ribbons during the entire annealing process. X-ray diffraction profiles of the samples were measured in transmission mode. The results showed that increasing the tensile stress induces a large anisotropy in the lattice-plane spacing. In addition, the results revealed that the origin of the anisotropy energy is the magneto-elastic effect. |
A campaign group has urged the public to have their say in the next phase of the £400m regeneration project on the former Royal Exchange site in Belfast.
The move comes after the council granted approval to demolish several buildings on Royal Avenue to make way for the development earlier this month.
Outline planning permission for the next phase of the project, however, is yet to be decided.
The development, carried out in phases, will include two hotels, the reintegration and refurbishment of seven listed buildings, three new public realm spaces and a 22-27 storey tower block.
Concerns over the project were previously raised at City Hall by several councillors and campaign groups, such as Save Cathedral Quarter and Ulster Architectural Heritage, namely regarding the potential negative impact on heritage, arts and smaller businesses.
Recent court cases and changes to the planning system in the past few years have left the decades-long trail of paperwork regarding the site “an absolute mess”, said Save CQ’s Mura Quigley, who wants the public to write to the council giving their views on the plans.
Previously known as the Royal Exchange project, the site changed hands in January 2016 in an off-market deal between Castlebrooke and Cerberus Capital, the US firm that purchased Nama assets totalling almost £1.5bn in 2014.
Plans were first drawn up to develop the site in 2012; they were altered and resubmitted in 2016, when the site was bought, to allow for a phased delivery.
“In the midst of all this, the planning system was completely overhauled and case law, as we’ve seen, is changing the goalposts every month,” Ms Quigley said.
“You must consult with communities before applications of this scale go through and it’s a three-tiered system.”
The overall site covers 12 acres of land in the North East Quarter of the city centre bound by Royal Avenue, Donegall Street, North Street, Lower Garfield Street and High Street.
Construction for the proposed scheme is estimated to cost around £250 million, with the total investment reported to be close to £400 million.
Mura Quigley said during the pre-application community consultation phase last year the public was not given sufficient information about the development and “everyone just became more confused”.
“People were horrified when they found out what was exactly going on, but it was still very vague,” she said.
“When you come in and put a wrecking ball to streets you lose the character and identity of the area.”
“Heritage was sacrificed in the 2012 planning application, with the loss of North Street Arcade – now it has gone even further,” she said.
"We have full streets getting demolished and a 27-storey tower block erected right next to the Assembly Rooms, one of the oldest parts of Belfast. |
Clint Eastwood paints a broad, meticulous, and shallow portrait of controversial FBI director J. Edgar Hoover in the new biopic J. Edgar. As seen in Eastwood’s output over at least the past several years (and arguably further back), the Hollywood veteran seems content to glide along the surface of his subject rather than probe deeper and ask tough questions. The result for J. Edgar is a movie where at the end of two hours and nineteen minutes you shrug and go, “Yep. That’s a Napoleon Complex.” Despite Leonardo DiCaprio acting his heart out as Hoover, and a fine supporting performance from Armie Hammer, J. Edgar is fascinated with its title character but the fascination runs skin-deep.
J. Edgar Hoover was the first director of the Federal Bureau of Investigation and ran the government agency from 1935 until his death in 1972, and J. Edgar jumps between Hoover’s work in the 1920s and 30s and his later years of the 60s and early 70s. During his time as the agency’s director, he vigilantly fought against Communism and expanded that fight to any organization or person he perceived as disruptive to the status quo, notably Martin Luther King Jr. and John F. Kennedy. He used blackmail and intimidation to maintain power, and did so from the agency’s founding in 1935.
And therein lies one of the major problems of J. Edgar. Hoover is the same character at the beginning of the movie as he is at the end. He never once questions his controversial methods, the war against Communism remained his highest priority throughout his career, and he was (at least in the view of Eastwood and screenwriter Dustin Lance Black) a deeply insecure human being. He was short in stature, believed he was less attractive than other men, and he always surrounded himself with tall, handsome confidants—most notably, Clyde Tolson (Hammer).
Which leads to J. Edgar‘s other major issue. Black crafted the emotional core of the movie around the homosexual relationship between Hoover and Tolson. The historical record strongly implies this relationship existed, but it’s never been confirmed. There’s nothing wrong with making Hoover and Tolson’s secret affair the heart of the story, but Eastwood and Black absolutely botch the execution. Because the movie is framed as Hoover reciting his memoirs to staffers, we’re supposed to see Hoover’s early years as his perception of events. But Eastwood shoots all the young Hoover scenes in the same way, which becomes problematic when we see the burgeoning romance between Hoover and Tolson, and obviously Hoover would never tell this to anyone. The other issue is that the relationship between the two men comes off like Saturday Night Live‘s “The Ambiguously Gay Duo”, so the obvious subtext comes off as comical rather than emotional.
Because Eastwood paints a sober and hollow portrait of Hoover, the movie places a heavy burden on DiCaprio to make the character come alive. In 2004, DiCaprio portrayed another famous American by taking on the role of Howard Hughes in The Aviator. At the time, he seemed too young for the role when it came to playing the scenes with Hughes as an old man. DiCaprio has aged into more mature roles, but he’s not quite in the range necessary for Hoover. The old age make-up is spectacular, but DiCaprio’s bright, shining eyes provide a slight distraction from an otherwise terrific performance. The movie wastes Naomi Watts in the thankless role of Hoover’s secretary, but Hammer holds his own as Tolson and shows that his work in The Social Network wasn’t just a product of Aaron Sorkin’s excellent script.
J. Edgar once again proves that Eastwood is not worthy of the material he’s receiving. His legacy and name recognition allow him to churn out a movie per year, which would be impressive if the movies were good. One could argue the same about Woody Allen’s output, but Allen’s films are personal and original, whereas it feels like Eastwood is stealing stories that other filmmakers could do better. Instead we’re stuck with an over-praised director relying on the performances of his talented actors and hoping that the film’s premise and script can make the movie passably mediocre. By that painfully low standard, J. Edgar qualifies as a success. |
A Flexible Three-in-One Microsensor for Real-Time Monitoring of Internal Temperature, Voltage and Current of Lithium Batteries Lithium batteries are widely used in notebook computers, mobile phones, 3C electronic products, and electric vehicles. However, under a high charge/discharge rate, the internal temperature of a lithium battery may rise sharply, causing safety problems. On the other hand, when a lithium battery is overcharged, the voltage and current may be affected, resulting in battery instability. This study applies micro-electro-mechanical systems (MEMS) technology on a flexible substrate, and develops a flexible three-in-one microsensor that can withstand the harsh internal environment of a lithium battery and instantly measure the internal temperature, voltage and current of the battery. The internal information can then be fed back to the outside in advance for the purpose of safety management, without damaging the lithium battery structure. The proposed flexible three-in-one microsensor should prove helpful for the improvement of lithium battery design or material development in the future. Introduction Many countries are devoted to alleviating global warming and finding coping strategies, especially through the development of green energy. The green energy industry includes wind power, tidal power generation, hydropower and solar power generation. These green energies use pollution-free sources to replace traditional power generation systems that produce greenhouse gases. However, if these power generation systems lack a good energy storage mechanism, the excess energy will be wasted. Therefore, energy storage devices are required to store the excess energy. Lithium batteries are a useful tool for energy storage. Lithium batteries are characterized by portability, high energy density, high operating voltage, wide service temperature range, no memory effect and long life. Hence, they are indispensable energy storage devices at present. However, in the lithium battery charging/discharging process, the anode material and electrolyte undergo electrochemical reactions, which generate a great deal of heat. Overcharge/overdischarge can result in voltage instability and even thermal runaway, as well as safety problems. A new approach, suitable for real-time implementation, was introduced for estimating the non-uniform internal temperature distribution in cylindrical lithium-ion cells, in which a radial 1-D model estimates the distribution using two inputs: the real or imaginary part of the electrochemical impedance of the cell at a single frequency, and the surface temperature. A preliminary calorimetric analysis and the surface temperatures of high-energy lithium-ion batteries indicated that the cells are prone to thermal runaway at temperatures of approximately 175 ~ 185 °C, which can be triggered by the Joule effect of the short circuit that results from the melting of the separator. Galobardes studied the application of C-MEMS as a lithium-ion battery anode, including the protective film, referred to as a solid electrolyte interface (SEI), that forms on carbonaceous materials used as negative electrodes in commercial lithium-ion batteries. Chacko studied the electrothermal model of a polymer lithium battery with LiMn2O4 anode material and graphite cathode material. They also conducted loop tests to draw models of the battery surface temperature profile.
Wiedemann found that different electrolyte concentrations resulted in different voltage distributions during lithium battery charge/discharge. Waag reported that large charge and discharge currents accelerate the aging of lithium batteries. Forgez reported measurement of the internal temperature of a lithium iron phosphate battery in coordination with a commercial thermocouple. Internal temperature measurements and surface temperature measurements of LiFePO4/graphite lithium-ion batteries using the model were validated in current-pulse experiments and a complete charge/discharge of the battery, and were within 1.5 °C. Lee developed a flexible temperature microsensor to embed into a lithium battery. Garay used MEMS techniques to develop an interdigitated electrode geometry with a minimum footprint area of 12 mm² for the medical and biological fields. Pomerantseva showed that the internal stresses of battery electrodes during discharge/charge are important for improving the reliability and cycle lifetime of lithium batteries, using the stress evolution observed in a silicon thin-film electrode incorporated into a MEMS device. Ryan demonstrated thin-film technologies that could produce a NiOOH cathode layer that was of high quality and only 1-5 microns thick, and demonstrated the feasibility of microscopic batteries for MEMS. Mutyala used a flexible polymer produced on glass substrates, later transferred onto thin copper foil, to embed thin-film thermocouples in a lithium-ion battery pouch cell for in-situ temperature monitoring. Sun reported a thermal model that can qualitatively predict the dynamic cell temperature changes that occur when a lithium-ion battery works under adiabatic conditions. Richardson studied a method of estimating the battery cell core and surface temperature using a thermal model coupled with electrical impedance measurements, rather than direct surface temperature measurements; this proved advantageous compared with previous methods of estimating the temperature from impedance. Analysis of lithium battery failure is necessary. The endogenous events of lithium batteries can be observed by real-time monitoring of the internal temperature, voltage and current of the battery, and by analyzing the electrochemical reactions occurring inside the battery and possible failure causes. The findings of this study can be applied to the improvement of lithium battery materials in the future, and can assist lithium battery management systems in monitoring battery conditions and in designing safe failure-protection early-warning systems. Existing commercial temperature, voltage and current sensors are unlikely candidates for embedding in a lithium battery due to their large size, and the probable poor airtightness of their packaging may result in electrolyte leakage, influencing lithium battery performance and safety. Micro-electro-mechanical systems (MEMS) technology is used in this study to develop a flexible three-in-one microsensor which can be embedded in a lithium battery for real-time monitoring of the internal temperature, voltage and current. The proposed design is characterized by good accuracy, high sensitivity and short reaction times, as well as high flexibility and measurement degrees of freedom (DOF). The developed flexible three-in-one microsensor is embedded in a coin cell for real-time monitoring. The reaction inside the lithium battery can be monitored instantly and more accurately by using this method.
The internal temperature uniformity and the voltage and current variation are analyzed microscopically, completing a measuring tool for internal real-time microscopic monitoring and safety diagnosis of lithium batteries. Theory and Design of Microsensors The temperature microsensor used in this study was a resistance temperature detector (RTD). The sensed temperature range is wide, and the linearity is good. The serpentine sensing electrode wire of the RTD was 10 µm wide, with an interval of 10 µm. The voltage microsensor was a miniaturized voltmeter probe, with a size of 135 µm × 100 µm. The sensing principle of the current microsensor was that the resistivity (R) of the analyte and the voltage difference (V) across the analyte were measured; the current through the analyte was then calculated using Ohm's law, V = I × R. The current microsensor consisted of four miniature probes: a set of two voltage-measuring probes and a set of two resistance-measuring probes, with sizes of 135 µm × 100 µm and 155 µm × 100 µm, respectively. The structure and design of the flexible three-in-one microsensor are shown in Figure 1. Fabrication The flexible substrate of this three-in-one microsensor was 50 µm thick polyimide (PI) foil. The foil was cleaned in acetone and methanol. An E-beam evaporator deposited Cr (500 Å) as an adhesion layer and Au as the sensing layer, as shown in Figure 2A,B. The unnecessary Au/Cr film was removed by photolithography with a wet etch to complete the microsensor layout structure, as shown in Figure 2C,D. Finally, polyimide 7505 was spin-coated on the sample as an insulating layer. The voltage and current probes and the sensor pad ends were exposed using a second photolithography process to complete the flexible three-in-one microsensor, as shown in Figure 2E,F. The finished flexible three-in-one microsensor and an optical micrograph are shown in Figure 3. The coin cell for this test was provided by Professor I-Ming Hung at the Department of Chemical Engineering and Materials Science (Yuan Ze University, Taoyuan, Taiwan). The cathode material was lithium titanium oxide (Li4Ti5O12, LTO). The anode material was lithium iron phosphate (LFP). The lithium battery structure consisted of a top cap, anchor, current collection sheet, cathode electrode, separator, anode electrode, bottom cap and electrolyte. The flexible three-in-one microsensors embedded in the lithium battery were numbered sensor 1 and sensor 2. Sensor 1 was embedded between the cathode electrode and the separator, facing the cathode electrode. Sensor 2 was embedded between the anode electrode and the separator, facing the anode electrode, as shown in Figure 4. Flexible Three-in-One Microsensor Correction When the flexible three-in-one microsensor was completed, it was corrected to validate its reliability. After the correction procedure, a lithium battery testing machine and an NI data acquisition unit were used for the lithium battery tests, internal information acquisition and microscopic diagnostic analysis, to determine the differences in the electric properties of cells with and without the flexible three-in-one microsensor. The local temperature, voltage and current changes in the lithium battery were monitored and analyzed instantly under different operating conditions. Figures 5 and 6 show the correction curves of the two temperature microsensors. Each microsensor showed high linearity and high reproducibility over three correction cycles. Table 1 shows the voltage correction data of the voltage microsensor.
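Before turning to the correction data, a minimal numerical sketch of the two sensing principles from the Theory and Design section above: a linear RTD calibration and the Ohm's-law current computation. Every value below is a hypothetical illustration, not measured data from this study.

import numpy as np

# RTD calibration: resistance varies ~linearly with temperature, R(T) = a*T + b.
temps = np.array([20.0, 30.0, 40.0, 50.0])             # reference temperatures (deg C)
resistances = np.array([108.0, 112.1, 116.2, 120.3])   # hypothetical RTD readings (ohm)
a, b = np.polyfit(temps, resistances, 1)               # least-squares line fit

def rtd_temperature(r_measured):
    """Invert the calibration line to turn a resistance into a temperature."""
    return (r_measured - b) / a

def current_from_ohms_law(v_measured, r_measured):
    """With measured voltage difference V and resistance R, Ohm's law gives I = V / R."""
    return v_measured / r_measured

print(rtd_temperature(114.0))           # temperature estimate in deg C
print(current_from_ohms_law(0.5, 2.0))  # 0.25 A for these illustrative values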
The NI measuring instrument measured a dry battery to obtain the voltage reference. The voltage microsensor then measured the same dry battery. The correction difference was obtained by subtracting the voltage measured by the voltage microsensor from the dry battery voltage. The voltage error introduced by the voltage microsensor conductor was 0.001 V ~ 0.006 V, so its influence was low. The current microsensor was corrected using a standard electrical conductivity solution as a reference. The resistivity of the solution was measured by the current microsensor, converted into electrical conductivity, and compared with the theoretical value of the standard electrical conductivity solution to confirm the reliability of the current microsensor. Table 2 compares the resistivity of the standard electrical conductivity solution measured by the current microsensor with the theoretical value; the difference between the measured and theoretical values was less than 1%. Coin Cell Test The electrochemical performance of the Li-ion batteries was tested using CR2032-type coin cells. The cathode (LiFePO4) and anode (Li4Ti5O12) powders were mixed with a binder (polyvinylidene fluoride) and two conducting media (Super-P and KS-4) at a weight ratio of 80:10:5:5 in N-methylpyrrolidinone (NMP) solvent to form the electrode slurry. The mixture was blended in a three-dimensional mixer using Zr balls for 3 h to prepare a uniform slurry. The resultant slurry was then uniformly pasted onto Al (cathode) and Cu (anode) foil substrates with a doctor blade, followed by evaporation of the NMP solvent with a blow dryer. The prepared cathode sheets were dried at 135 °C in a vacuum oven for 12 h and pressed under a pressure of approximately 200 kg·cm−2. The electrode layers were adjusted to a thickness of ~100 µm. The coin cells were assembled in a glove box for electrochemical characterization, using an electrochemical analyzer (CHI 608, CH Instruments, Inc., Austin, TX, USA). In the test cells, Li foil and a porous polypropylene film served as the counter electrode and the separator, respectively. The electrolyte solution was 1.0 M LiPF6 in a mixture of ethylene carbonate, propylene carbonate, and dimethyl carbonate with a weight ratio of 1:1:1. The charge/discharge cycling tests at different C rates (from 0.1 to 10 C) were performed within the stated voltage window at ambient temperature. The lithium battery charging/discharging set CHG-5500C was used for testing the coin cell. The coin cell embedded with flexible three-in-one microsensors was placed on the test carrier. A thermocouple temperature recorder was placed on the lithium battery surface to measure and record the battery surface temperature instantly. The NI Data Acquisition System performed real-time measurements and data acquisition from the flexible three-in-one microsensor. Figure 7 shows the coin cell assembly and instrument mounting. The lithium battery was embedded with two flexible three-in-one microsensors. Figure 7. Coin cell test assembly and instrument mounting. The anode material of the coin cell was LFP, the cathode material was LTO, and the theoretical capacitance value was 170 mAh·g−1. The test conditions included constant current (CC) operation over a charging/discharging voltage range of 0.5 ~ 2.9 V. The six charge/discharge rates, ranging from 0.1 to 10 C, are frequently used for evaluating the performance of Li-ion batteries.
The performance of Li-ion batteries charged at 0.1-0.5 C reflects the capability for general 3C portable electronics, whereas the charge-discharge curves at >2 C serve as a crucial index for evaluating cell performance for EVs and mobile tools. The test conditions were: nominal capacity 120 mAh·g−1 (LiFePO4 cathode), separator (Celgard), maximal charge/discharge rate 10 C, and operating potential range 0.5 ~ 3.0 V. The test process is shown in Table 3. Table 3. Coin cell test process (three cycles at each C-rate: 0.1, 0.2, 0.5, 1, 5 and 10 C); at every C-rate, charging ran from a trigger voltage of 0.5 V to a static voltage of 2.9 V, and discharging from a trigger voltage of 2.9 V to a static voltage of 0.5 V. Figure 8 is the charge-discharge test curve diagram of the coin cell embedded with three-in-one microsensors. The maximum unit cumulative capacity of the 0.1 C charge test was 92.4731 mAh·g−1, and the maximum unit cumulative capacity of the 0.1 C discharge test was 61.7204 mAh·g−1. The calculated irreversible capacity was therefore 30.7527 mAh·g−1, accounting for about 33.26% of the initial value; a worked check of this arithmetic follows below. Table 4 shows the maximum unit cumulative capacity of charge and discharge of the coin cell at various C-rates. Figure 9, Table 5, Figure 10 and Table 6 show the maximum unit cumulative capacity in the various cycles of the coin cell charge/discharge tests. The electrical performance of the lithium battery in terms of maximum unit cumulative capacity was not good. In the 5 C charge-discharge test, the residual capacity still accounted for 45.81% of the initial value, and for 39.77% of the initial value in the 10 C charge-discharge test. The lithium battery did not fail completely, and could complete the overall charge-discharge test process. Table 7 shows the performance of the lithium batteries with and without the three-in-one microsensors. The maximum performance difference between the lithium batteries with and without the three-in-one microsensors was only about 10.32%. The CA ratio is the weight ratio of the anode and cathode materials. In the operation of lithium batteries, the cathode releases lithium ions, and the anode receives them. When the releasing capacity of the cathode is higher than the receptivity of the anode, the lithium ions cannot be completely received by the anode during discharge, so the maximum capacitance value cannot be reached. If the releasing capacity of the cathode is lower than the receptivity of the anode, the anode cannot release lithium ions completely to the cathode during charge. Lithium battery performance is thus influenced. The difference between the CA ratios with and without the three-in-one microsensor was 8.64%. Disregarding this factor, the influence of the three-in-one microsensor on lithium battery performance was 1.68%. Therefore, the flexible three-in-one microsensor embedded in the lithium battery for real-time measurement had only a slight influence on the electrical performance of the battery. Basically, the decrease in specific capacity with increasing C rate can be attributed to polarization, indicating poor electronic conductivity and a slow ionic diffusion rate. On the basis of the experimental results, the degradation of specific capacity at high C rates is minor, e.g., the capacity retention remains at >60% for the ratio of the specific capacity at 5 C to that at 0.1 C.
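A quick arithmetic check of the 0.1 C figures quoted above:

# Verifying the irreversible-capacity arithmetic reported for the 0.1 C test.
charge_capacity = 92.4731     # mAh/g, maximum unit cumulative charge capacity
discharge_capacity = 61.7204  # mAh/g, maximum unit cumulative discharge capacity

irreversible = charge_capacity - discharge_capacity  # 30.7527 mAh/g
fraction = irreversible / charge_capacity            # 0.3326 -> 33.26%

print(f"{irreversible:.4f} mAh/g, {fraction:.2%} of the initial value")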
Persistence Effect Test for the Flexible Three-in-One Microsensors

The total time for the lithium battery charge/discharge test was about 109.8 h: 0.1 C accounted for 60 h, 0.2 C for 30 h, 0.5 C for 12 h, 1 C for 6 h, 5 C for 1.2 h and 10 C for 0.6 h. The monitoring data are shown in Table 8. After the coin cell charge-discharge test, the flexible three-in-one microsensor was temperature-calibrated again, as shown in Figure 11. The correction curve still showed high linearity, suggesting that the flexible three-in-one microsensor is durable and reliable.

Figure 11. Microsensor temperature correction curve before and after the coin cell charge-discharge tests.

Conclusions

In this study, a flexible micro temperature-voltage-current sensor was successfully integrated on polyimide (PI) by using MEMS technology. The total thickness of the three-in-one microsensor is 58 μm. It is characterized by quick response, real-time measurement and good durability. After the temperature, voltage and electrical conductivity correction of the flexible microsensor, the temperature correction curve shows high linearity and good reproducibility, and the voltage and electrical conductivity corrections show that the measurement error of the microsensor is smaller than 1%, proving the reliability of the flexible microsensor in temperature, voltage and current measurements. The flexible three-in-one microsensor was successfully embedded in a coin cell. Comparing the performance of the batteries with and without three-in-one microsensors shows that the microsensor can measure the internal temperature, voltage and current of the coin cell in real time without disturbing the operation of the lithium battery.
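The linearity and reproducibility claims can be quantified with an ordinary least-squares fit to the calibration points; a minimal sketch with made-up illustrative resistance-temperature data (the real calibration values are those of Figure 11):

import numpy as np

# Illustrative (made-up) calibration data: sensor readings at known
# reference temperatures, before and after the charge-discharge cycling.
temps = np.array([20.0, 30.0, 40.0, 50.0, 60.0])           # degC
r_before = np.array([100.0, 103.9, 107.8, 111.7, 115.6])   # ohm
r_after = np.array([100.1, 104.0, 107.8, 111.8, 115.7])    # ohm

for label, r in (("before", r_before), ("after", r_after)):
    slope, intercept = np.polyfit(temps, r, 1)
    r2 = np.corrcoef(temps, r)[0, 1] ** 2
    print(f"{label}: slope = {slope:.4f} ohm/degC, R^2 = {r2:.6f}")
# R^2 staying close to 1 before and after cycling supports the durability claim.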
// These definitions appear consistent with a 64-bit Linux target
// (e.g., SYS_gettid = 186 and MAP_32BIT = 0x40 match x86_64).
pub type c_char = i8;
pub type wchar_t = i32;
s! {
pub struct stat {
pub st_dev: ::dev_t,
pub st_ino: ::ino_t,
pub st_nlink: ::c_ulong,
pub st_mode: ::c_uint,
pub st_uid: ::uid_t,
pub st_gid: ::gid_t,
pub st_rdev: ::dev_t,
pub st_size: ::off64_t,
pub st_blksize: ::c_long,
pub st_blocks: ::c_long,
pub st_atime: ::c_ulong,
pub st_atime_nsec: ::c_ulong,
pub st_mtime: ::c_ulong,
pub st_mtime_nsec: ::c_ulong,
pub st_ctime: ::c_ulong,
pub st_ctime_nsec: ::c_ulong,
__unused: [::c_long; 3],
}
pub struct stat64 {
pub st_dev: ::dev_t,
pub st_ino: ::ino_t,
pub st_nlink: ::c_ulong,
pub st_mode: ::c_uint,
pub st_uid: ::uid_t,
pub st_gid: ::gid_t,
pub st_rdev: ::dev_t,
pub st_size: ::off64_t,
pub st_blksize: ::c_long,
pub st_blocks: ::c_long,
pub st_atime: ::c_ulong,
pub st_atime_nsec: ::c_ulong,
pub st_mtime: ::c_ulong,
pub st_mtime_nsec: ::c_ulong,
pub st_ctime: ::c_ulong,
pub st_ctime_nsec: ::c_ulong,
__unused: [::c_long; 3],
}
}
pub const O_DIRECT: ::c_int = 0x4000;
pub const O_DIRECTORY: ::c_int = 0x10000;
pub const O_NOFOLLOW: ::c_int = 0x20000;
pub const O_LARGEFILE: ::c_int = 0o0100000;
pub const SYS_gettid: ::c_long = 186;
pub const SIGSTKSZ: ::size_t = 8192;
pub const MINSIGSTKSZ: ::size_t = 2048;
pub const MAP_32BIT: ::c_int = 0x40;
High-performance interband cascade lasers emitting between 3.3 and 3.5 microns Semiconductor laser performance in the 3 to 4 micron wavelength region has lagged behind lasers at longer and shorter wavelengths. However, recent advances by the group at the Naval Research Laboratory (NRL) have markedly changed this situation, and in a recent collaboration with the NRL group, we demonstrated high-performance interband cascade lasers at 3.8 microns. In this work, we present results extending this earlier work to shorter wavelengths. In particular, we designed four new interband cascade lasers at target wavelengths between 3.3 and 3.5 microns. Initial testing of broad area devices shows threshold current densities of ~230 A/cm2 at 300 K, almost a factor of two lower than the ~425 A/cm2 obtained on the broad area devices at 3.8 microns. In this paper, we present performance data on these broad area lasers and also data on narrow ridge devices fabricated from the same material.
Wednesday, Jan. 18: the day of the SOPA "blackout" protest. As you may have seen from our coverage, major names in the online world such as Google, Wikipedia, Mozilla and Reddit are censoring their own websites with black bars and blacked-out pages in protest of SOPA and PIPA, two online anti-piracy bills currently under consideration on Capitol Hill.
Lawmakers who support the bills say the Stop Online Piracy Act and the Protect Intellectual Property Act will protect the intellectual property rights of music, movie and TV studios. But the websites and tech giants taking part in the Wednesday blackout argue that SOPA and PIPA would allow for a censoring of the Internet that would forever alter the Web and what we can do, say and publish online.
And it's not just Silicon Valley that's protesting SOPA and PIPA in the day-long blackout -- a few publications that cover the tech world are taking part as well, including Wired and ArsTechnica.
Here's a list of more than 30 websites (and screen shots of each) we've spotted that are protesting today in the form of full-on blackouts or even just making their anti-SOPA and anti-PIPA stances known publicly. If there are a few we've missed, feel free to let us know in the comments.
Images: Screenshots (made using the Mac app LittleSnapper) of websites taking part in the Jan. 18, 2012 protests against SOPA and PIPA by either blacking out their websites, or publishing statements condemning the controversial anti-piracy bills.
A Teaching Model of Hypnosis in Psychiatric-Residency Training A stepwise hypnosis-training model for psychiatric residents is presented as used in the Netherlands. Hypnosis is presented to residents as an intervention that can be incorporated into the treatment of various types of disorders in structured, time-limited units. The model takes into account the usual reluctance and insecurity of the psychiatric resident, who is usually encountering hypnosis for the first time.
package br.com.zup.proposta.bloqueio;
import br.com.zup.proposta.cartao.Cartao;
import javax.persistence.*;
import javax.validation.constraints.NotBlank;
import javax.validation.constraints.NotNull;
import java.time.LocalDateTime;
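/**
* Entity recording a card block event: when it was requested and the
* client IP / User-Agent that triggered it, linked to the affected card.
*/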
@Entity
public class Bloqueio {
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Long id;
@NotNull
@Column(nullable = false)
private LocalDateTime dataBloqueio;
@NotBlank
@Column(nullable = false)
private String ip;
@NotBlank
@Column(nullable = false)
private String userAgent;
@ManyToOne
@JoinColumn(name = "cartao_id")
private Cartao cartao;
@Deprecated
public Bloqueio(){
}
public Bloqueio(@NotBlank String ip, @NotBlank String userAgent, Cartao cartao) {
this.ip = ip;
this.userAgent = userAgent;
this.cartao = cartao;
this.dataBloqueio = LocalDateTime.now();
}
}
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing,
* software distributed under the License is distributed on an
* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
* KIND, either express or implied. See the License for the
* specific language governing permissions and limitations
* under the License.
*/
package org.apache.streams.facebook.provider;
import org.apache.streams.core.StreamsDatum;
import org.apache.streams.facebook.FacebookConfiguration;
import org.apache.streams.facebook.IdConfig;
import org.apache.streams.util.api.requests.backoff.BackOffStrategy;
import org.apache.streams.util.api.requests.backoff.impl.ExponentialBackOffStrategy;
import org.apache.streams.util.oauth.tokens.tokenmanager.SimpleTokenManager;
import org.apache.streams.util.oauth.tokens.tokenmanager.impl.BasicTokenManager;
import com.google.common.annotations.VisibleForTesting;
import facebook4j.Facebook;
import facebook4j.FacebookFactory;
import facebook4j.conf.ConfigurationBuilder;
import org.apache.commons.lang3.StringUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.atomic.AtomicBoolean;
/**
* Abstract data collector for Facebook. Iterates over ids and queues data to be output
* by a {@link org.apache.streams.core.StreamsProvider}
*/
public abstract class FacebookDataCollector implements Runnable {
private static final Logger LOGGER = LoggerFactory.getLogger(FacebookDataCollector.class);
private static final String READ_ONLY = "read_streams";
@VisibleForTesting
protected AtomicBoolean isComplete;
protected BackOffStrategy backOff;
private FacebookConfiguration config;
private BlockingQueue<StreamsDatum> queue;
private SimpleTokenManager<String> authTokens;
/**
* FacebookDataCollector constructor.
* @param config config
* @param queue queue
*/
public FacebookDataCollector(FacebookConfiguration config, BlockingQueue<StreamsDatum> queue) {
this.config = config;
this.queue = queue;
this.isComplete = new AtomicBoolean(false);
this.backOff = new ExponentialBackOffStrategy(5);
this.authTokens = new BasicTokenManager<>();
if (config.getUserAccessTokens() != null) {
for (String token : config.getUserAccessTokens()) {
this.authTokens.addTokenToPool(token);
}
}
}
/**
* Returns true when the collector has finished querying facebook and has queued all data
* for the provider.
* @return isComplete
*/
public boolean isComplete() {
return this.isComplete.get();
}
/**
* Queues facebook data.
* @param data data
* @param id id
*/
protected void outputData(Object data, String id) {
try {
this.queue.put(new StreamsDatum(data, id));
} catch (InterruptedException ie) {
Thread.currentThread().interrupt();
}
}
/**
* Gets a Facebook client. If multiple authenticated users for this app are available
* it will rotate through the users oauth credentials
* @return client
*/
protected Facebook getNextFacebookClient() {
ConfigurationBuilder cb = new ConfigurationBuilder();
cb.setDebugEnabled(true);
cb.setOAuthPermissions(READ_ONLY);
cb.setOAuthAppId(this.config.getOauth().getAppId());
cb.setOAuthAppSecret(this.config.getOauth().getAppSecret());
if (this.authTokens.numAvailableTokens() > 0) {
cb.setOAuthAccessToken(this.authTokens.getNextAvailableToken());
} else {
cb.setOAuthAccessToken(this.config.getOauth().getAppAccessToken());
LOGGER.debug("appAccessToken : {}", this.config.getOauth().getAppAccessToken());
}
cb.setJSONStoreEnabled(true);
if (StringUtils.isNotEmpty(config.getVersion())) {
cb.setRestBaseURL("https://graph.facebook.com/" + config.getVersion() + "/");
}
LOGGER.debug("appId : {}", this.config.getOauth().getAppId());
LOGGER.debug("appSecret: {}", this.config.getOauth().getAppSecret());
FacebookFactory ff = new FacebookFactory(cb.build());
return ff.getInstance();
}
/**
* Queries facebook and queues the resulting data.
* @param id id
* @throws Exception Exception
*/
protected abstract void getData(IdConfig id) throws Exception;
@Override
public void run() {
for ( IdConfig id : this.config.getIds()) {
try {
getData(id);
} catch (InterruptedException ie) {
Thread.currentThread().interrupt();
} catch (Exception ex) {
LOGGER.error("Caught Exception while trying to poll data for page : {}", id);
LOGGER.error("Exception while getting page feed data: {}", ex);
}
}
this.isComplete.set(true);
}
@VisibleForTesting
protected BlockingQueue<StreamsDatum> getQueue() {
return queue;
}
}
Positron Sources for Future High Energy Physics Colliders

Abstract: An unprecedented positron average current is required to meet the luminosity demands of future e+e− high energy physics colliders. In addition, in order to access precision-frontier physics, these machines require positron polarization, enabling the exploration of the polarization dependence of many HEP process cross sections, reducing backgrounds and extending the reach of chiral physics studies beyond the standard model. The ILC has a mature plan for a polarized positron source based on the conversion, in a thin target, of circularly polarized gammas generated by passing the main high-energy e-beam through a long superconducting helical undulator. Compact colliders (CLIC, C3 and advanced accelerator-based concepts) adopt a simplified approach and currently do not plan to use polarized positrons in their baseline design, but could greatly benefit from the development of compact alternative solutions for polarized positron production. Increasing the positron current, improving the polarization purity and simplifying the engineering design are all opportunities where advances in accelerator technology have the potential to make a significant impact. This white paper describes the current status of the field and provides short-term and long-term R&D pathways for polarized positron sources.

Positron sources are a critical element for current and future e+e− colliders, as luminosity requirements push the performance of these sources well beyond the current state of the art. For example, the International Linear Collider (ILC) plans to use average positron currents of 30 μA, nearly two orders of magnitude larger than any other positron source ever realized. In addition, there is a clear demand for high polarization control of the positron beam in order to improve the effective luminosity, reduce the background, and extend the reach of searches for beyond-the-standard-model chiral physics. As discussed in the Snowmass Energy Frontier report on future linear colliders, polarization of both beams is needed to reap the full benefits of the spin dependence of the collision cross sections.
Within the Snowmass process, the importance of this topic has been recognized, as well as the lack of a coherent effort in the US accelerator physics community to tackle the challenges associated with very-high-current production of polarized positrons. It is worth noting that the positron source is one of the future collider components where a relatively small investment (compared to the development of the main linear accelerator) has the potential to yield significant gains in the performance of the machine. In addition, the stand-alone nature of the positron source allows for tests and parallel developments that can be carried out independently of the main collider complex. The generation of polarized positrons has been included in the baseline design of the ILC. The scheme is based on passing the 125 GeV collision beam through a very long, short-period superconducting helical undulator to generate circularly polarized gamma rays that can be converted, using a thin target, into polarized electron-positron pairs. The ILC design is quite mature, at a level well beyond the technical design report, and nearly shovel-ready. Still, the reduction of the ILC center-of-mass energy to 250 GeV implies a lower-than-ideal gamma photon energy of 7-8 MeV from the undulator, on the falling edge of the pair-production cross section. To compensate for this, an extended undulator length (up to 231 m) will be employed to preserve a safety margin (>1.5) in the ratio of output positrons to incoming electrons. This effort shares many commonalities with existing activities in superconducting helical undulator development and characterization for X-ray FELs, which are carried out with DOE Basic Energy Sciences funding at several US National Labs (FNAL, ANL, and LBNL). The dependence of the polarized positron source on the availability of the 125 GeV electron beam has also spurred a parallel effort to develop a conventional positron source based on a 6 GeV electron beam. This conventional high-current source will simplify the commissioning phase of the accelerator by providing a reliable source of positrons. It will also allow developing and testing technical solutions to the challenges associated with the energy deposition in the target and the positron capture section immediately downstream of it. In particular, the target suffers from extremely large heat-deposition rates, exacerbated in the polarized positron production case by the small transverse spot of the gamma-ray beam from the undulator and by the burst temporal format of the drive beam. The design for the ILC baseline source includes a rapidly spinning wheel with state-of-the-art ferrofluidic seals to allow the fast rotation. The capture section after the target must be able to match the very large phase space of the emitted positrons into the small acceptance of the booster linac and, finally, of the damping ring. Quarter-wave transformers, flux concentrators and pulsed solenoids are typical solutions right after the conversion target. Simulations show that the very large magnetic fields achievable with a pulsed solenoid (currently the favored choice for the ILC) can increase the positron capture rate by 30 % with respect to the previously adopted quarter-wave-transformer scheme. In the longer term, the use of advanced solutions based on strong-focusing lenses, such as active plasma lenses, has been considered and deserves further investigation.
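The 7-8 MeV figure can be cross-checked against the first-harmonic energy of a helical undulator, E1 ≈ 2γ²hc/[λu(1+K²)]; in the sketch below, the 11.5 mm period is the value quoted in the undulator section later in this paper, while K = 0.85 is an assumed TDR-like undulator strength, not a number stated in the text:

# First-harmonic photon energy of a helical undulator.
HC_EV_M = 1.23984193e-6  # h*c, eV*m
E_BEAM = 125e9           # drive beam energy, eV
M_E = 0.511e6            # electron rest energy, eV
LAMBDA_U = 11.5e-3       # undulator period, m (from the undulator section below)
K = 0.85                 # assumed undulator strength parameter

gamma = E_BEAM / M_E
e1 = 2 * gamma**2 * HC_EV_M / (LAMBDA_U * (1 + K**2))
print(f"first harmonic: {e1 / 1e6:.1f} MeV")  # ~7.5 MeV, in the quoted 7-8 MeV range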
In parallel to these efforts, other linear collider designs such as CLIC and C3 do not foresee the use of polarized positrons in their baseline, and linear collider schemes based on advanced accelerator concepts are only now beginning to consider the issues related to the acceleration of positrons. Improving the polarization purity and providing overhead in the positron average current are important goals in the development of future polarized positron sources. Two particular schemes are being considered, based on Inverse Compton Scattering (ICS) and on polarized bremsstrahlung. Polarized positron sources based on Inverse Compton Scattering of laser photons off energetic electrons have been proposed for a long time and are recently making a comeback, fueled by the progress in laser technology. For example, scattering a circularly polarized 515 nm laser off 1 GeV electrons yields very energetic polarized gamma rays. In this case, the yield of polarized positrons per incident photon can be up to 3-5 times larger than when using <10 MeV photons. The scheme is hampered by the small cross section of the Compton scattering process, but as GeV electrons are easily available, it is possible to increase the electron current to recover the required photon flux. One of the main outstanding problems of this scheme is the availability of high-average-power laser beams, but continuous progress in laser technology (for example, laser stacking cavities) and new ideas in high-efficiency free-electron lasers open the opportunity for a compact, independent, polarized positron source with high flux and high polarization purity for future collider designs. Polarized bremsstrahlung is the process through which spin-polarized relativistic electrons (typically produced from a strained GaAs lattice with very high, >80 %, polarization purity) can generate polarized positrons. Polarization transfer of up to 80 % has been demonstrated in a recent successful experiment at JLAB, but the efficiency (number of positrons produced per incoming electron) of this process is very small. Still, a collaboration has been formed around the idea of using this approach to generate polarized positrons for nuclear physics experiments. The positron currents from this source (50-100 nA) are still orders of magnitude lower than what is required for linear colliders. Even lower positron production rates can be obtained using high-intensity laser-plasma interactions or isotope decays. These sources could possibly be used to provide a positron beam to test some of the charge asymmetries in high-gradient accelerator schemes.

Positron sources for High Energy Physics colliders: requirements and current status

In all positron sources used for high energy physics, positrons are produced by pair production as secondary beams after a drive beam (typically electrons, but possibly gamma photons) hits a conversion target. The resulting positron distribution has very large angular and energy spreads and is captured transversely and longitudinally to match into an acceleration section and, ultimately, a damping ring, to generate positron beams of sufficiently high quality for the intended application. A cartoon schematic of the various elements of a positron source is shown in Fig. 1. State-of-the-art positron source parameters are summarized in Table 1, which shows that the typical positron flux obtained is around 10^10 e+/s, several orders of magnitude lower than what is required for a linear collider.
The main limit in conventional sources is the heat load on the target, which limits the power of the primary beam. The SuperKEKB positron source is the highest-intensity positron source in operation, thanks to improvements in the drive beam parameters, in the flux concentrator used to capture the beam, and in the positron line diagnostics.

Table 1: Performance of past and existing positron sources (adapted from the literature). Some parameters were not found in the literature and are therefore marked "-".

The requirement for positron polarization adds additional complexity to the design of the source, but it stems directly from the physics demands of future colliders. Having simultaneously polarized e− and e+ beams is, in fact, a very effective tool for direct as well as indirect searches for new physics. Polarized beams offer new powerful analyses, provide added value and, together with the clean and precise environment, optimize the physics potential of an e+e− linear collider. The use of both beams polarized, compared with the configuration with only polarized electrons, can lead to an important gain in statistics and luminosity, reducing the required running time and increasing the search reach of the linear collider. In addition, it allows decreasing, through simple error propagation, the polarization uncertainty originating from the polarimeter. The gain in polarization accuracy is directly transferred to, for instance, the accuracy of the left-right asymmetry measurement and is therefore decisive for getting systematic uncertainties under control. Furthermore, having both beams polarized is important to identify independently and unambiguously the chiral structures of interactions in various processes; several of these tests are not possible with polarized electrons alone (see the literature and references therein). Simultaneously polarized e± beams offer new and additional observables, e.g. double-polarized asymmetries, and allow exploiting even transversely polarized beams, which are powerful for detecting new kinds of interactions (e.g. tensor-like) or new sources of CP violation. Already at the first energy stage of √s = 250 GeV, the availability of polarized positrons would be essential to keep systematics under control, save running time and match the precision promises.

Current plans for the ILC positron source

The positron production scheme for the high-energy linear e+e− collider has been at the center of much debate, with the result that two parallel approaches are currently being pursued: i) a baseline scheme based on passing the high-energy electron beam through a helical undulator to generate an intense photon beam for positron production in a thin target, and ii) a scheme based on the use of a separate, independent electron beamline to create (unpolarized) e+e− pairs in a thick target. The efficiency of positron production in a conversion target, together with the capture and acceleration of the positrons, is low, so in both cases it is a challenge to generate the 1.3 × 10^14 positrons per second required at the ILC collision point (nominal luminosity). However, using a helical undulator allows producing a circularly polarized photon beam, enabling the generation of a longitudinally polarized positron beam, which is the reason it has been selected as the baseline option for the ILC. The high number of positrons required at a linear collider pushes the drive beam intensity up and causes a high thermal load on the target.
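For scale, the 1.3 × 10^14 e+/s requirement quoted above maps directly onto an average current; a quick check, consistent with the tens-of-μA figure in the introduction:

# Average positron current implied by the ILC production requirement.
E_CHARGE = 1.602176634e-19  # elementary charge, C
rate = 1.3e14               # required positrons per second (nominal luminosity)
print(f"average current: {rate * E_CHARGE * 1e6:.1f} uA")  # ~20.8 uA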
The target wheel has to be cooled, as well as rotated at an appropriate speed, in order to distribute the load sufficiently, and the material must withstand the cyclic load at elevated temperatures. Experimental tests were performed with the electron beam of the Mainz microtron (MAMI) to simulate the cyclic load expected during ILC operation. The results of the irradiation tests at MAMI and detailed simulation studies with ANSYS showed that the expected load at the ILC positron target is below the material limits. The irradiated targets were analyzed both via laser-scanning methods and via synchrotron X-ray diffraction, demonstrating that the ILC target will withstand the load. The design includes detailed plans for radiation cooling as well as for rotating the target; see below. Detailed engineering solutions for these issues are still outstanding; however, no technical showstopper is anticipated. The helical undulator is one of the main components of an undulator-based polarized positron source. Parameters of the ILC polarized positron source helical undulator are given below in Table 2. We refer to the literature and references therein for updates since the Technical Design Report.

Undulators for polarized positron production

The required period length is as short as 11.5 mm, which makes the fabrication of a long helical undulator with such a period length challenging. This has been addressed and successfully solved by the UK HeLICal Collaboration. After an intensive R&D phase, the collaboration eventually fabricated and tested a superconducting helical undulator prototype which achieved the required parameters (Table 3). The advantage of employing superconducting magnet technology for building a short-period helical undulator has been demonstrated, and the helical superconducting undulator (HSCU) has become the baseline for the ILC positron source undulator. Here, a helical magnetic field is generated by a pair of helical electromagnetic coils, with currents in opposite directions, wound on the same magnet former. Compared to the alternative approach of using permanent magnets to generate the helical field, the HSCU offers the natural simplicity of winding helical coils combined with a high magnetic field. The HSCU field can be increased further when high-field superconductors like Nb3Sn are employed instead of NbTi. A team at the Advanced Photon Source of Argonne National Laboratory (US) has recently demonstrated experimentally that in a planar SCU the field is increased by at least 20 % over NbTi. Also, the development of HTS-type superconductors is currently a very dynamic field, with a high potential of reaching undulator fields exceeding those of a Nb3Sn undulator. This has been shown in small test planar undulators wound with HTS tape, starting in 2010, which have now reached current densities in the winding higher than in Nb3Sn. The application of Nb3Sn and HTS superconductors in short-period helical undulators is therefore a topic for future R&D with potential for significant impact.

Positron target technology

The average energy deposition in the ILC positron target is about 2-7 kW, depending on the drive beam energy in the undulator, the target thickness and the luminosity (nominal or high). As an example, for ILC250 the average energy deposition in the target is 2 kW. Energy deposition of up to a few kW can be extracted by radiation cooling if the radiating surface is large enough and the heat diffuses fast enough from the area where the beam is incident to a large radiating surface.
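A rough Stefan-Boltzmann estimate illustrates the required radiating area; the 2 kW load and room-temperature cooler are from the text, while the emissivity and the wheel temperatures below are purely illustrative assumptions:

# Radiating area needed to reject power P by thermal radiation:
# P = eps * sigma * A * (T^4 - Tc^4)  =>  A = P / (eps * sigma * (T^4 - Tc^4))
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
EPS = 0.5               # assumed emissivity of the Ti wheel surface
P = 2000.0              # average deposited power for ILC250, W
T_COOLER = 300.0        # water-cooled stationary cooler, K

for t_wheel in (500.0, 700.0, 900.0):
    area = P / (EPS * SIGMA * (t_wheel**4 - T_COOLER**4))
    print(f"T = {t_wheel:.0f} K -> area ~ {area:.2f} m^2")
# The strong T^4 scaling is why the heat must spread from the beam spot
# to a large, hot rim area for radiation cooling to work.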
In this design, the wheel, spinning in vacuum, radiates the heat to a stationary cooler facing the wheel surface. It is easy to keep the stationary cooler at room temperature by water cooling, but it is crucial for this scheme that the heat diffuses from the volume heated by the photon beam to a larger surface area. With a wheel rotation frequency of 2000 rpm, each part of the target rim is illuminated every 6-8 seconds, but this interval of time is not sufficient to distribute the heat load uniformly over a large area. The heat is then accumulated in the rim, with the highest temperatures located in a relatively small region around the beam path. The average temperature distribution was calculated using the ANSYS software package and is shown in Figure 2 for one sector, representative of the track of one bunch train. For the studies of the positron yield optimization, the temperature distribution and the cooling principle, a target wheel designed as a full 1 m-diameter disc of 7 mm thickness, made of Ti-6Al-4V, was assumed. As expected, the radial steady-state temperature in the wheel depends strongly on the radius: due to the heat conductivity of the target material and the T^4 dependence in the Stefan-Boltzmann law, most of the heat is removed close to the rim of the wheel. One should note that by increasing the outer radius of the wheel up to 60 cm, while maintaining the beam impact at r = 50 cm, substantially lower average temperatures can be expected. Thus it is possible to conceive a target wheel consisting of two distinct parts with separate functionalities: i) a 'carrier wheel', designed and optimized in terms of weight, material, moment of inertia, centrifugal forces, stresses and vibrations, etc., and ii) a second unit, the actual Ti target rim. The target units are fitted mechanically to the rim of the carrier wheel in such a way that the cyclic loads, temperature rises and stresses in the target units are not, or only weakly, transmitted to the carrier wheel. This allows the engineering of the carrier wheel to be designed and optimized independently from that of the target proper. A possible layout in Figure 3 shows the main items of the target wheel: the spoked rotating carrier wheel with its magnetic bearings, and the water-cooled stationary coolers. Another interesting development in target technology is the so-called two-stage process for positron production. The first stage is optimized for the generation of photons/gamma rays (for example, using channeling radiation in crystals). The charged particles in the EM shower are separated away using a magnetic field, so that only the photons hit the second-stage target, improving the heat load and the yield for a given drive beam intensity.

Flux concentrator

Most studies of positron capture after the target assumed a pulsed flux concentrator (FC) as the optical matching device (OMD). A promising prototype study for the FC was performed by LLNL. However, detailed studies identified some weak points in this design. The B-field distribution cannot be kept stable over the long bunch-train duration, and therefore the luminosity would vary during the pulse, which is not desired. Further, the particle shower downstream of the target causes a high load on the inner part of the flux concentrator front side which, at least for ILC250, is beyond the recommended material load level. This is mainly caused by the larger opening angle of the photon beam and the wider distribution of the shower particles downstream of the target at ILC250.
Alternatives under discussion are the use of a quarter-wave transformer, a pulsed solenoid or, as an example of new technology, a plasma lens.

Pulsed solenoids

Apart from the matching devices currently in use at positron sources at different facilities, such as flux concentrators and quarter-wave transformers, pulsed solenoid magnets have also been employed, e.g., at LEP. Due to the limited yield that a quarter-wave transformer can provide, interest in using a pulsed solenoid as an optical matching device was recently renewed. To evaluate whether a pulsed solenoid would provide a sufficient yield and a stable magnetic field over 1 ms, and would not cause an excessive amount of heating in the rotating target wheel through induced eddy currents, simulations have been performed in a collaboration involving CERN, the University of Hamburg, DESY, and Helmholtz-Zentrum Hereon. The principal layout is depicted in Fig. 4. A coil of 7 windings, with a tapered inner diameter of 20 mm at the target end and 80 mm at the downstream end, is formed by a square-shaped copper conductor with a circular inner cooling channel. The length of the solenoid is 70 mm. According to simulations made in COMSOL Multiphysics, a peak magnetic field of 5 T is produced by applying a pulsed current with a peak amplitude of 50 kA. This field can be slightly increased by introducing a magnetic shielding made of ferrite around the solenoid. The field deviation over 1 ms is found to be well below 1 % when applying a pulse with a 2 ms sinusoidal rise time, a flat-top current of 1 ms duration and a 2 ms sinusoidal fall time. Using a ferrite shielding also reduces the magnetic field at the target position and therefore reduces the eddy-current heating in the target wheel. This heating was also simulated, and the expected values of the peak and average heat load, as well as the peak force on the target wheel, are well manageable. Similarly, no critical values have been found for the thermal load and mechanical stress in the coil itself. The positron yield of an undulator-based positron source with a pulsed solenoid as the matching device was also simulated. Without ferrite shielding, a yield of 1.9 positrons per electron was simulated at the ILC positron damping ring; with a ferrite shielding, the yield was slightly reduced to 1.7. For comparison, the positron yield using a quarter-wave transformer, which is currently the baseline design option for the ILC, is only 1.1. A further increase of the yield with the pulsed solenoid might be possible through further optimization of the exact coil geometry. In summary, pulsed solenoids are a viable option as a matching device for positron sources compared with current state-of-the-art solutions like quarter-wave transformers and flux concentrators; especially for long bunch trains (as in the case of the ILC undulator-based positron source), simulations indicate that such a device would bring advantages compared with the other options.

Figure 4: Sketch of the pulsed solenoid optical matching device for the ILC undulator-driven positron source.
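As a crude cross-check of the quoted numbers, the infinite-solenoid formula B ≈ μ0NI/L reproduces the right field scale; the geometry below is from the text, and the formula ignores the taper and finite length, which lower the on-axis field toward the simulated 5 T:

import math

# Infinite-solenoid estimate of the peak field: B = mu0 * N * I / L.
MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A
N_TURNS = 7               # windings
I_PEAK = 50e3             # peak pulsed current, A
LENGTH = 0.070            # coil length, m
print(f"B ~ {MU0 * N_TURNS * I_PEAK / LENGTH:.1f} T")  # ~6.3 T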
Plasma lens

An alternative device to capture positrons after the target is an active plasma lens (APL). These focusing elements exhibit several advantages compared to conventional focusing elements like solenoids or quadrupoles:
- due to the azimuthal magnetic field, the focusing is radially symmetric, unlike, e.g., in a quadrupole field;
- the focusing fields are potentially very high, due to the close proximity of the focusing currents and the focused particles;
- the focusing fields are transverse to the main direction of motion, unlike, e.g., in a solenoid;
- space-charge forces between beam particles are mitigated by the quasi-neutrality of the plasma medium;
- scattering of beam particles is low, due to the low density of the conductive medium compared to, e.g., a lithium lens or a magnetic horn.
In the particular application as a positron capture device, the APL has additional advantages over focusing schemes with solenoidal fields such as flux compressors, solenoids or quarter-wave transformers: the fields are localised, i.e., they do not influence the positron target wheel, and the focusing is selective with respect to the particle charge; when the active plasma lens focuses positrons, it simultaneously defocuses co-propagating electrons. These unwanted low-energy pair electrons from the positron source will therefore not be accelerated in the capture linac, which significantly reduces beam losses and radiation at high energies in the downstream accelerator and also renders a dedicated charge-separation chicane and a high-energy electron beam dump unnecessary. The usage of an active plasma lens as a matching device at a positron source was proposed for LEP and again recently for the ILC. Especially due to the advances in the development of high-gradient, beam-quality-preserving active plasma lenses in recent years, their application as focusing elements, rather than as research objects in their own right, is now within reach and in part already realized. Nevertheless, the positron source of the ILC poses several challenges for an active plasma lens used as an optical matching device (OMD), including the close proximity to accelerating cavities, which require ultra-high-vacuum conditions; the large beam size (up to 1.5 mm) and very strong divergence of the positrons; and the challenging time format, with a short bunch separation of 554 ns in a train of >1000 bunches. On the other hand, beam-quality preservation is not a critical issue for an active plasma lens as an OMD, given the low beam quality of the positron bunches at the source. To investigate the possibilities for APLs to meet these challenges, a project has been initiated at the University of Hamburg (UHH) and the Deutsches Elektronen-Synchrotron DESY in Hamburg. First results indicate that an APL indeed allows the positron yield to be increased significantly w.r.t. the quarter-wave transformer baseline design. It should be noted, though, that the simulated APL which allowed such an increased yield had a complex taper. Tapered lenses have been studied before, but the available data are still very limited compared to simpler, linear discharge channels. Studies at UHH and DESY are concentrating on investigating the field distribution within the APL in such a complex geometry and at high repetition rates, both experimentally and in numerical simulations, as well as on the question of whether the yield improvement and the required vacuum levels in nearby accelerating structures can be achieved at the same time. Other groups are also looking into plasma lenses as capture optics for highly divergent beams at the source, and while the requirements of the ILC positron source, e.g. in terms of repetition rate, are certainly very demanding for state-of-the-art APL technology, plasma lenses can still be considered an option for other positron sources with different requirements in the future.
6 Novel approaches to polarized positrons

6.1 Compton-based polarized positron sources

Another attractive and compact solution foresees the use of a high-power laser beam and the Inverse Compton Scattering (ICS) interaction to create such photons. The electron beam requirements in this case are greatly reduced while still reaching higher photon energies. Considering the scattering with a typical laser (λ = 515 nm), the electron energy required to generate 30 MeV photons is around 1.0 GeV, and very small spot sizes have to be maintained only over relatively short interaction lengths (less than a few cm). In 2005, a proof-of-principle experiment for the Compton-scattering-based scheme for polarized positron generation was performed at the KEK Accelerator Test Facility (ATF). Several options for a future linear collider positron source based on Compton scattering have been proposed. Today, they can be classified according to the electron source used for the Compton scattering: the linac scheme, the storage ring scheme (the so-called Compton Ring), and the energy-recovery linac scheme. For all of them, the polarized positron current produced is not sufficient to fulfill the future linear collider requirements; therefore, the application of a multiple-point collision line and multiple stacking of the positron bunches in the damping ring (DR) were investigated. On the other hand, owing to the small size of the Compton (Thomson) cross section, the demands of such a solution on the high-power laser system are extremely challenging. The time format of the ILC beams, for example, consists of a large number of bunches (>1000) per RF macropulse, with macropulse repetition rates of 5-10 Hz. At visible wavelengths, joules of energy are required in order to provide sufficient photon density for the generation of one photon per incoming electron in the laser-beam interactions. The laser system should therefore provide multi-MW-class average power within a burst mode matching the electron bunch time format. Using the additional degree of freedom offered by fast kickers, one can imagine reformatting the positron source to a 30 kHz repetition rate and recreating the collider bunch format only after the DR, easing somewhat the peak and average power requirements on the laser. Notwithstanding the exceptional progress of RF and laser technology in the last decades, even this latter kind of laser system does not yet exist. Various new concepts, such as stacking cavities and optical energy re-circulation, have been proposed to address the lack of a suitable laser source for this application. In Murokh et al., the authors present an alternative approach for an independent high-current polarized positron source based on combining laser-based acceleration with the observation that the electron and laser beams are only minimally degraded in an ICS interaction. The laser pulse can then be used not only to drive the Compton scattering process, but also to accelerate the electrons to the required GeV level for energetic polarized photon production. At the same time, after the ICS interaction point, the kinetic energy stored in the electrons can be recuperated with a high-efficiency Free-Electron Laser (FEL) amplifier operating in the Tapering Enhanced Stimulated Superradiant Amplification (TESSA) regime, replenishing the laser pulse before it is redirected to scatter against the next electron bunch.
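The 1 GeV / 30 MeV pairing quoted above is consistent with the standard head-on backscattering estimate E_gamma ≈ 4γ²E_laser; a quick check (electron recoil and finite collision angle push the practical figure somewhat below this edge):

# Backscattered ICS photon-energy edge: E_gamma ~ 4 * gamma^2 * E_laser.
HC_EV_M = 1.23984193e-6     # h*c, eV*m
e_laser = HC_EV_M / 515e-9  # 515 nm photon energy, ~2.41 eV
gamma = 1.0e9 / 0.511e6     # 1 GeV electrons
print(f"photon edge: {4 * gamma**2 * e_laser / 1e6:.0f} MeV")  # ~37 MeV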
Due to the limited electron beam and laser power requirements of this scheme, the electron current used in the accelerator can be very large and, even with a yield of 0.1 e+/e− after conversion of the gamma rays in the target, positron fluxes of up to 10^15 e+/s could be achieved. (In the case of the Gamma Factory proposed at CERN, which uses partially stripped ion beams and their resonant interactions with laser light, the resonant photon absorption cross section can be up to a factor of 10^9 higher than for ICS off point-like electrons; a proof-of-principle experiment has already been proposed.) It should be emphasized that, because the common technological constraint of all the above-mentioned schemes is the average laser power of the optical systems, Compton-scattering-based polarized positron sources are considered only as alternative solutions for the future collider projects. Presently, this approach is proposed as the preferred option for an upgrade of the CLIC positron source.

Polarized bremsstrahlung

Important topics in nuclear, hadronic, and electroweak physics, including nuclear femtography, meson and baryon spectroscopy, quarks and gluons in nuclei, precision tests of the standard model, and dark-sector searches, may be explored at CEBAF, especially when considering potential upgrades in luminosity, polarized and unpolarized positron beams, and a doubling of the beam energy to 24 GeV. For a positron program, Polarized Electrons for Polarized Positrons (PEPPo) represents a pathway to generate the highly spin-polarized positron beams required. The technique is based upon the electromagnetic shower of electron beams in matter and the two-step process of polarized bremsstrahlung followed by polarized pair creation. Both steps can occur in a single high-Z conversion target, or they can be accomplished using a separate radiator and converter, if desired. Notably, this technique can be applied at any electron accelerator where spin-polarized electron beams are produced, whether at a university or a national lab. The transfer efficiency of spin polarization from the electron beam to the positron beam, defined as the ratio P(e+)/P(e−), can be very high, approaching unity as the momenta of the collected positrons approach the initial electron beam momentum. The technique was first demonstrated at CEBAF, where an 8.2 MeV/c electron beam with polarization 85.2% produced positrons with polarization >82% (see Fig. 5). Collecting the positrons at half the electron beam momentum serves to maximize the figure of merit, defined as I·P^2, with the positrons receiving >60% of the electron beam polarization. In contrast to the positron polarization, the positron yield N(e+)/N(e−) falls precipitously with increasing positron momentum, due to the characteristic bremsstrahlung power spectrum. While this is not a deciding factor for unpolarized positron sources, which select a low-momentum fraction of positrons from the conversion target, a PEPPo-driven polarized positron source must select the high-momentum fraction to provide polarization. Limitations in electron spin polarization and beam intensity likely explain why a PEPPo-based positron source has not been constructed to date. However, this landscape has changed significantly in the last 10 years: electron beam polarization is now routinely ≈ 90%, with average beam currents at the milliampere level. Today, strained-layer superlattice (SSL) photocathodes composed of quantum-well multi-layer heterostructures provide very high spin polarization (>85%) with yields ≈ 6 mA/W of laser light.
And SSL photocathodes fabricated with an integrated diffracted Bragg reflector (to more efficiently absorb optical power) have demonstrated yields >30 mA/W. One may now reasonably imagine providing 100 kW of highly spin-polarized electron beam at energies in the range of 10-100 MeV. In this context, a recent Jefferson Lab LDRD project explored the possibility of generating >100
In summary, PEPPo demonstrated a compact and efficient technique to produce highly spin-polarized positrons, suitable for small- to large-scale accelerator facilities. Advances in GaAs photocathodes capable of producing a high degree of spin polarization at milliampere intensities make this technique viable. It is recommended that the P5 panel support R&D in the areas of high-current polarized electron sources, 100 kW high-power targets, and magnets for the efficient collection of positrons over energies of 10-100 MeV.

High-intensity laser-based positron polarization

Positron production using high-intensity lasers was studied extensively over the last two decades, employing a number of different mechanisms and interaction setups, mostly analytically and in computer simulations, though a number of experimental studies were also reported. The most straightforward setup is the interaction of a moderate-intensity laser with a solid-density target several millimeters thick. Here, the electrons accelerated by the laser at the front surface go through the target, emitting photons along the way due to bremsstrahlung; these photons create electron-positron pairs in the course of their interaction with nuclei. In principle, a high-energy electron beam can be used instead of the laser pulse in such a positron production scheme.

[Figure caption: Points marked with asterisks indicate experimental results from LWFA electron-beam interactions with high-Z foils; in these cases the laser power is not indicated.]

Positron production using high-energy lasers as converters of high-energy photons into electron-positron pairs is based on the effects of strong-field quantum electrodynamics (SFQED). Here, an electron beam interacts with a single high-intensity laser pulse or a combination of several pulses, or a fixed plasma target is used instead of an electron beam: high-energy gammas are emitted by electrons passing through a region of strong EM field via the multi-photon Compton process, and these gammas decay into electron-positron pairs via the multi-photon Breit-Wheeler process (see Fig. 6). We note that the production of electron-positron pairs is very sensitive to the EM field strength: over three orders of magnitude in laser intensity, the number of positrons varies by ten orders of magnitude. There is an advantage to using a high-energy electron beam interacting with a high-intensity laser pulse: the positrons are produced as a collimated beam, and the required laser intensity is much lower. The use of polarized electron beams in the above-mentioned schemes will, first, lead to the production of polarized γ-ray beams and, subsequently, to the production of polarized positron beams, because the multi-photon Compton and Breit-Wheeler processes depend on the spin of the participating particles. However, most of the reported studies use an initially unpolarized electron beam and rely on its polarization during the interaction with a high-intensity laser, which needs to be shaped in a way that breaks the symmetry of the field oscillation to achieve a net polarization.
This can be achieved with a two-color laser pulse or with a laser pulse with a small degree of ellipticity. For example, an initially unpolarized 2-GeV electron beam interacting with a two-color laser pulse, with normalized amplitude a0 = 83 and 25% of its energy in the second harmonic, acquires an average polarization degree of only 8%, whereas the positrons produced have a polarization degree of 60%, because the Breit-Wheeler process depends more strongly on spin than the Compton one. A laser pulse with a small degree of ellipticity can, in principle, generate positron beams with a polarization degree exceeding 80%. In summary, it has been shown theoretically that polarized positron production using high-intensity lasers is achievable; however, the phase space of these positron beams still needs to be characterized in future studies, as do their capture by beam transport systems and their subsequent injection into an accelerator. Proof-of-principle experiments are required to assess the possibility of using such a positron source for compact colliders (CLIC, C3 and advanced accelerator-based concepts).

Electrostatic traps as a test-bed for polarized positron physics

The generation of positron beams is an expensive process requiring significant infrastructure. Experimental tests with positron beams are limited to facilities already equipped with a high-energy, high-intensity electron beam accelerator, a high-power target, and a damping ring for cooling. As a result, very few institutions provide access to positron beams for experimental use. An alternative, compact system for producing polarized positron beams could provide experimental opportunities for testing systems associated with positron beam production and transport. We propose a beamline design utilizing an electrostatic positron trap as a beam source that is comparatively inexpensive and small. The concept is shown in Figure 7. In this proposal, the positrons can be generated either by emission from a β-decay emitter, such as 22Na, which produces roughly 10^9 positrons per second, or by impacting a 5 MeV electron beam on a high-Z target. The positrons pass through a solid-neon moderator, which reduces their energy so that they can be trapped. The electrostatic trap holds the positrons while they accumulate and cool via interaction with a buffer gas. The longitudinal trap potential is shaped by high-voltage rings, and a solenoidal magnetic field provides radial confinement. After the positrons have accumulated in the trap, the trapping potential is changed to accelerate and eject the beam. The beam is both long and non-relativistic when ejected from the trap. The remainder of the beamline is dedicated to compressing the beam and bringing it up to relativistic energies. To accomplish this, a 100 kV electrostatic accelerator is employed, which compresses and accelerates the beam to the point that it can be injected into an S-band cavity. The beam is compressed to a bunch length of 0.2 mm and accelerated to an energy of 17.8 MeV. The entire beamline is inside a 1 T solenoid. The beam is cooled inside a magnetic field and has intrinsic angular momentum L. The effective emittance is given by the quadrature sum of the thermal and angular-momentum contributions, eps_eff = sqrt(eps_th^2 + L^2). With a small thermal emittance, the beam is dominated by angular momentum. Future linear colliders assume that the emittance in the vertical plane is much smaller than in the horizontal plane because the beams are generated in a damping ring.
Our example beamline is capable of producing flat beams for ILC-type applications. While this compact source is not a suitable candidate for future linear colliders, it may be useful for testing positron capture technology or for demonstrating the transport of flat beams.
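As a numerical illustration of the angular-momentum-dominated regime described above: the 1 T solenoid field is from the text, while the beam size and thermal emittance below are purely illustrative assumptions:

import math

# Magnetized-beam effective emittance: eps_eff = sqrt(eps_th^2 + L^2),
# with canonical angular-momentum term L = e * B * sigma^2 / (2 * m_e * c).
E = 1.602176634e-19     # elementary charge, C
M_E = 9.1093837015e-31  # electron mass, kg
C = 2.99792458e8        # speed of light, m/s

B = 1.0         # solenoid field from the text, T
sigma = 1.0e-3  # assumed rms beam size inside the trap, m
eps_th = 5e-6   # assumed normalized thermal emittance, m*rad

L = E * B * sigma**2 / (2 * M_E * C)
eps_eff = math.hypot(eps_th, L)
print(f"L = {L*1e6:.0f} um*rad, eps_eff = {eps_eff*1e6:.0f} um*rad")
# eps_eff ~ L >> eps_th: angular momentum dominates, which is what
# round-to-flat beam transforms exploit to deliver very flat beams.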
import { hashAsync, hash } from "../Math/hash";
const __doOnceIdempotency = new Set<number>();
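// Idempotency keys are hashes of the callback's source text (callback.toString()),
// so two distinct callbacks with identical source share the same "once" slot.
// The set is module-global and never cleared, so entries last for the process lifetime.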
/**
* The doOnce function executes a callback only one time
* @async
*/
export function doOnce<T>(callback: () => T): Promise<T>;
/**
* The doOnce function executes a callback only one time. if the callback function has already been executed once, the error function is executed.
* @async
*/
export function doOnce<T, E>(callback: () => T, err: () => E): Promise<T>;
export function doOnce<T>(callback: () => T, err?: () => any): Promise<T> {
return new Promise((res, rej) => {
hashAsync(callback.toString()).then((idempotency) => {
if (!__doOnceIdempotency.has(idempotency)) {
__doOnceIdempotency.add(idempotency);
res(callback());
} else if (err) {
rej(err());
}
});
});
}
/**
* The doOnce function executes a callback only one time
*/
export function doOnceSync<T>(callback: () => T): T | null;
/**
* The doOnce function executes a callback only one time. if the callback function has already been executed once, the error function is executed.
*/
export function doOnceSync<T, E>(callback: () => T, err: () => E): T | E;
export function doOnceSync<T>(callback: () => T, err?: () => any) {
const idempotency = hash(callback.toString());
if (!__doOnceIdempotency.has(idempotency)) {
__doOnceIdempotency.add(idempotency);
return callback();
} else if (err) {
return err();
}
return null;
}
package org.willemsens.player.exceptions;
public class NetworkClientException extends Exception {
public NetworkClientException(String message) {
super(message);
}
public NetworkClientException(String message, Throwable cause) {
super(message, cause);
}
}
Lee Dickson says Saints are doing things ‘out of the ordinary’ in the bid to retain their Aviva Premiership crown.
The scrum-half believes preventing predictability proved key in the 25-20 victory against Saracens last Saturday.
And Saints will now look to continue keeping teams on their toes as they seek a second successive title.
Jim Mallinder’s men face two more regular season games, knowing a big win against London Welsh on May 9 will secure their top-two spot.
That would set up a home play-off semi-final on May 23, seven days before the league showpiece at Twickenham.
And after overcoming big defeats at Clermont and Exeter to see off Saracens, England scrum-half Dickson is confident Saints are on the right track.
He said: “We started the season well and I hope we have had our hiccup and can look forward to the big games ahead. Play-off games are won on small margins, a few points here and there are big.
“There will be a time in the season when you do not click, but in the last couple of years we have done so at the right time.
“We went back to basics after losing to Clermont and Exeter: sometimes we can be predictable and teams work us out, so we were looking to do something out of the ordinary.
“We put a lot of kicking on (Stephen) Myler against Saracens and it was well executed, with the chase good.
“We played a lot of rugby as well, as we did in last year’s final (win against Sarries).”
PHILADELPHIA, Pa., June 11, 2018 (SEND2PRESS NEWSWIRE) -- The C Diff Foundation is honored to welcome 20+ leading topic experts joined by Dale Gerding, MD, FACP, FIDSA, Professor of Medicine at Loyola University Chicago Stritch School of Medicine in Maywood, Illinois and Research Physician at the Edward Hines Jr. VA Hospital, and Mark Wilcox, B Med Sci, BM, BS, MD, FRCPath, Head of Microbiology and Academic Lead of Pathology at the Leeds Teaching Hospitals (LTHT).
PHILADELPHIA, Pa., Jan. 8, 2018 (SEND2PRESS NEWSWIRE) -- Rittenhouse Capital Advisors (RCA) is a commercial real estate finance advisor with over 60 years of combined banking experience. Currently in its fourth year of operation, Rittenhouse Capital has increased its loan production volume by a minimum of 30 percent year-over-year by delivering creative commercial financing solutions for their real estate investor clients.
PHILADELPHIA, Pa., April 15, 2016 (SEND2PRESS NEWSWIRE) -- Actress, producer and businesswoman, Vivica A. Fox, will join chair of the Darby County PA Democratic Party, Richard Womack Jr, to host the DogonVillage 2016 Democratic National Convention (DNC) Watch Party complete with dinner, dancing, DJ, and a live performance. Themed, 'Celebrating the Black Vote,' the soiree will be held Tuesday July 26, 2016 in the ballroom of the Sheet Metal Workers Union hall on Penns Landing in Philadelphia. |
The aim of integrating more and more functionality in a single integrated circuit (IC) has resulted in a fast and inevitable increase in System-on-Chip (SoC) design complexity. In this scenario, reuse-based design using hardware Intellectual Property (IP) cores has become extremely common. These IP cores are usually in the form of synthesizable Register-Transfer Level (RTL) descriptions in Hardware Description Languages (HDLs), or gate-level designs directly implementable in hardware. This approach of designing complex systems by integrating smaller, tested, and verified reusable modules can reduce the design cycle time dramatically. It is quite common to have SoC designs where multiple IPs from different IP vendors are integrated by the chip designer, and ultimately multiple such chips are integrated by the system designer to build the desired system. Unfortunately, recent trends in IP piracy and reverse-engineering efforts to produce counterfeit ICs have raised serious concerns in the IC design community. |
The content of my posts is my own and may or may not reflect the views of my agency. The posts are solely my opinions, thoughts, and observations, based on my professional experiences.
Currently serving as the Assistant Chief of Training for Federal Fire Ventura County, Navy Region Southwest Fire & Emergency Services, located in southern California. 15 years in the fire service, with 6 of those years as an Active Duty Firefighter for the United States Air Force, holding the positions of Firefighter, Engineer, Captain, Battalion Chief, and Assistant Chief. Credentialed Fire Officer (FO) since 2013 and Chief Fire Officer (CFO) since 2018; hold a Bachelor’s degree in Fire Administration and an Associate’s degree in Fire Science, and am currently pursuing a Master's in Public Administration. CFAI Peer Assessor, Agency Mentor, and Agency Accreditation Manager. |
from django.contrib import admin
from django.urls import path, include
urlpatterns = [
path('admin/', admin.site.urls),
path('doacao/', include('donations.urls', namespace='donations')),
path('', include('core.urls')),
path('conta/', include('users.urls', namespace='users')),
path('member/', include('member.urls', namespace='member')),
path('evento/', include('events.urls', namespace='events')),
]
urlpatterns += [
# API base url
path("api/", include("aprepi.api_router", namespace="api")),
]
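# Illustrative (view names hypothetical): with the namespaces above, URLs are
# reversed as, e.g., reverse('donations:index') in Python, or
# {% url 'events:detail' pk=1 %} in a template, assuming those names are
# defined in the included URLconfs.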
|
Apoptosis in cultured rat hepatocytes: the effects of tumour necrosis factor alpha and interferon gamma. We investigated the cytotoxic effects of tumour necrosis factor alpha (TNF alpha) and interferon gamma (IFN gamma) on rat hepatocytes in culture. Under phase contrast microscopy, we found a small number of dying hepatocytes in control cultures, each having been transformed into a cluster of small spheres. Under transmission electron microscopy, these cells showed the characteristics of apoptosis. TNF alpha and a combination of TNF alpha and IFN gamma exerted a cytotoxic effect, whereas IFN gamma showed no significant cytotoxicity when assessed by neutral red assay and by measuring LDH activity in culture medium. Under phase contrast microscopy, the number of apoptotic cells increased with the addition of either TNF alpha or IFN gamma, and markedly with the addition of both. DNA extracted from apoptotic cells cultured with TNF alpha and IFN gamma was fragmented, and a set of bands of the '200 bp ladder', which is characteristic of the DNA of apoptotic cells, was observed in agarose gel electrophoresis. These findings indicate that cultured hepatocytes die from apoptosis. TNF alpha killed cultured rat hepatocytes by increasing apoptosis, and this effect was potentiated by the addition of IFN gamma, which by itself was also weakly cytotoxic. |
// Code generated by the Pulumi SDK Generator DO NOT EDIT.
// *** WARNING: Do not edit by hand unless you're certain you know what you are doing! ***
package iotsitewise
import (
"context"
"reflect"
"github.com/pkg/errors"
"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)
// Resource schema for AWS::IoTSiteWise::Portal
type Portal struct {
pulumi.CustomResourceState
// Contains the configuration information of an alarm created in an AWS IoT SiteWise Monitor portal. You can use the alarm to monitor an asset property and get notified when the asset property value is outside a specified range.
Alarms AlarmsPropertiesPtrOutput `pulumi:"alarms"`
// The email address that sends alarm notifications.
NotificationSenderEmail pulumi.StringPtrOutput `pulumi:"notificationSenderEmail"`
// The ARN of the portal, which has the following format.
PortalArn pulumi.StringOutput `pulumi:"portalArn"`
// The service to use to authenticate users to the portal. Choose from SSO or IAM. You can't change this value after you create a portal.
PortalAuthMode pulumi.StringPtrOutput `pulumi:"portalAuthMode"`
// The AWS SSO application generated client ID (used with AWS SSO APIs).
PortalClientId pulumi.StringOutput `pulumi:"portalClientId"`
// The AWS administrator's contact email address.
PortalContactEmail pulumi.StringOutput `pulumi:"portalContactEmail"`
// A description for the portal.
PortalDescription pulumi.StringPtrOutput `pulumi:"portalDescription"`
// The ID of the portal.
PortalId pulumi.StringOutput `pulumi:"portalId"`
// A friendly name for the portal.
PortalName pulumi.StringOutput `pulumi:"portalName"`
// The public root URL for the AWS IoT AWS IoT SiteWise Monitor application portal.
PortalStartUrl pulumi.StringOutput `pulumi:"portalStartUrl"`
// The ARN of a service role that allows the portal's users to access your AWS IoT SiteWise resources on your behalf.
RoleArn pulumi.StringOutput `pulumi:"roleArn"`
// A list of key-value pairs that contain metadata for the portal.
Tags PortalTagArrayOutput `pulumi:"tags"`
}
// NewPortal registers a new resource with the given unique name, arguments, and options.
func NewPortal(ctx *pulumi.Context,
name string, args *PortalArgs, opts ...pulumi.ResourceOption) (*Portal, error) {
if args == nil {
return nil, errors.New("missing one or more required arguments")
}
if args.PortalContactEmail == nil {
return nil, errors.New("invalid value for required argument 'PortalContactEmail'")
}
if args.RoleArn == nil {
return nil, errors.New("invalid value for required argument 'RoleArn'")
}
var resource Portal
err := ctx.RegisterResource("aws-native:iotsitewise:Portal", name, args, &resource, opts...)
if err != nil {
return nil, err
}
return &resource, nil
}
// GetPortal gets an existing Portal resource's state with the given name, ID, and optional
// state properties that are used to uniquely qualify the lookup (nil if not required).
func GetPortal(ctx *pulumi.Context,
name string, id pulumi.IDInput, state *PortalState, opts ...pulumi.ResourceOption) (*Portal, error) {
var resource Portal
err := ctx.ReadResource("aws-native:iotsitewise:Portal", name, id, state, &resource, opts...)
if err != nil {
return nil, err
}
return &resource, nil
}
// Input properties used for looking up and filtering Portal resources.
type portalState struct {
}
type PortalState struct {
}
func (PortalState) ElementType() reflect.Type {
return reflect.TypeOf((*portalState)(nil)).Elem()
}
type portalArgs struct {
// Contains the configuration information of an alarm created in an AWS IoT SiteWise Monitor portal. You can use the alarm to monitor an asset property and get notified when the asset property value is outside a specified range.
Alarms *AlarmsProperties `pulumi:"alarms"`
// The email address that sends alarm notifications.
NotificationSenderEmail *string `pulumi:"notificationSenderEmail"`
// The service to use to authenticate users to the portal. Choose from SSO or IAM. You can't change this value after you create a portal.
PortalAuthMode *string `pulumi:"portalAuthMode"`
// The AWS administrator's contact email address.
PortalContactEmail string `pulumi:"portalContactEmail"`
// A description for the portal.
PortalDescription *string `pulumi:"portalDescription"`
// A friendly name for the portal.
PortalName *string `pulumi:"portalName"`
// The ARN of a service role that allows the portal's users to access your AWS IoT SiteWise resources on your behalf.
RoleArn string `pulumi:"roleArn"`
// A list of key-value pairs that contain metadata for the portal.
Tags []PortalTag `pulumi:"tags"`
}
// The set of arguments for constructing a Portal resource.
type PortalArgs struct {
// Contains the configuration information of an alarm created in an AWS IoT SiteWise Monitor portal. You can use the alarm to monitor an asset property and get notified when the asset property value is outside a specified range.
Alarms AlarmsPropertiesPtrInput
// The email address that sends alarm notifications.
NotificationSenderEmail pulumi.StringPtrInput
// The service to use to authenticate users to the portal. Choose from SSO or IAM. You can't change this value after you create a portal.
PortalAuthMode pulumi.StringPtrInput
// The AWS administrator's contact email address.
PortalContactEmail pulumi.StringInput
// A description for the portal.
PortalDescription pulumi.StringPtrInput
// A friendly name for the portal.
PortalName pulumi.StringPtrInput
// The ARN of a service role that allows the portal's users to access your AWS IoT SiteWise resources on your behalf.
RoleArn pulumi.StringInput
// A list of key-value pairs that contain metadata for the portal.
Tags PortalTagArrayInput
}
func (PortalArgs) ElementType() reflect.Type {
return reflect.TypeOf((*portalArgs)(nil)).Elem()
}
type PortalInput interface {
pulumi.Input
ToPortalOutput() PortalOutput
ToPortalOutputWithContext(ctx context.Context) PortalOutput
}
func (*Portal) ElementType() reflect.Type {
return reflect.TypeOf((**Portal)(nil)).Elem()
}
func (i *Portal) ToPortalOutput() PortalOutput {
return i.ToPortalOutputWithContext(context.Background())
}
func (i *Portal) ToPortalOutputWithContext(ctx context.Context) PortalOutput {
return pulumi.ToOutputWithContext(ctx, i).(PortalOutput)
}
type PortalOutput struct{ *pulumi.OutputState }
func (PortalOutput) ElementType() reflect.Type {
return reflect.TypeOf((**Portal)(nil)).Elem()
}
func (o PortalOutput) ToPortalOutput() PortalOutput {
return o
}
func (o PortalOutput) ToPortalOutputWithContext(ctx context.Context) PortalOutput {
return o
}
// Contains the configuration information of an alarm created in an AWS IoT SiteWise Monitor portal. You can use the alarm to monitor an asset property and get notified when the asset property value is outside a specified range.
func (o PortalOutput) Alarms() AlarmsPropertiesPtrOutput {
return o.ApplyT(func(v *Portal) AlarmsPropertiesPtrOutput { return v.Alarms }).(AlarmsPropertiesPtrOutput)
}
// The email address that sends alarm notifications.
func (o PortalOutput) NotificationSenderEmail() pulumi.StringPtrOutput {
return o.ApplyT(func(v *Portal) pulumi.StringPtrOutput { return v.NotificationSenderEmail }).(pulumi.StringPtrOutput)
}
// The ARN of the portal, which has the following format.
func (o PortalOutput) PortalArn() pulumi.StringOutput {
return o.ApplyT(func(v *Portal) pulumi.StringOutput { return v.PortalArn }).(pulumi.StringOutput)
}
// The service to use to authenticate users to the portal. Choose from SSO or IAM. You can't change this value after you create a portal.
func (o PortalOutput) PortalAuthMode() pulumi.StringPtrOutput {
return o.ApplyT(func(v *Portal) pulumi.StringPtrOutput { return v.PortalAuthMode }).(pulumi.StringPtrOutput)
}
// The AWS SSO application generated client ID (used with AWS SSO APIs).
func (o PortalOutput) PortalClientId() pulumi.StringOutput {
return o.ApplyT(func(v *Portal) pulumi.StringOutput { return v.PortalClientId }).(pulumi.StringOutput)
}
// The AWS administrator's contact email address.
func (o PortalOutput) PortalContactEmail() pulumi.StringOutput {
return o.ApplyT(func(v *Portal) pulumi.StringOutput { return v.PortalContactEmail }).(pulumi.StringOutput)
}
// A description for the portal.
func (o PortalOutput) PortalDescription() pulumi.StringPtrOutput {
return o.ApplyT(func(v *Portal) pulumi.StringPtrOutput { return v.PortalDescription }).(pulumi.StringPtrOutput)
}
// The ID of the portal.
func (o PortalOutput) PortalId() pulumi.StringOutput {
return o.ApplyT(func(v *Portal) pulumi.StringOutput { return v.PortalId }).(pulumi.StringOutput)
}
// A friendly name for the portal.
func (o PortalOutput) PortalName() pulumi.StringOutput {
return o.ApplyT(func(v *Portal) pulumi.StringOutput { return v.PortalName }).(pulumi.StringOutput)
}
// The public root URL for the AWS IoT AWS IoT SiteWise Monitor application portal.
func (o PortalOutput) PortalStartUrl() pulumi.StringOutput {
return o.ApplyT(func(v *Portal) pulumi.StringOutput { return v.PortalStartUrl }).(pulumi.StringOutput)
}
// The ARN of a service role that allows the portal's users to access your AWS IoT SiteWise resources on your behalf.
func (o PortalOutput) RoleArn() pulumi.StringOutput {
return o.ApplyT(func(v *Portal) pulumi.StringOutput { return v.RoleArn }).(pulumi.StringOutput)
}
// A list of key-value pairs that contain metadata for the portal.
func (o PortalOutput) Tags() PortalTagArrayOutput {
return o.ApplyT(func(v *Portal) PortalTagArrayOutput { return v.Tags }).(PortalTagArrayOutput)
}
func init() {
pulumi.RegisterInputType(reflect.TypeOf((*PortalInput)(nil)).Elem(), &Portal{})
pulumi.RegisterOutputType(PortalOutput{})
}
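// Example (illustrative only, not generated code): registering a Portal in a
// Pulumi program. The resource name, email, and role ARN below are hypothetical.
//
//	func example(ctx *pulumi.Context) error {
//		portal, err := NewPortal(ctx, "examplePortal", &PortalArgs{
//			PortalName:         pulumi.String("example-portal"),
//			PortalContactEmail: pulumi.String("admin@example.com"),
//			RoleArn:            pulumi.String("arn:aws:iam::123456789012:role/SiteWiseMonitorRole"),
//		})
//		if err != nil {
//			return err
//		}
//		ctx.Export("portalStartUrl", portal.PortalStartUrl)
//		return nil
//	}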
|
Connectomics in Brain Aging and Dementia: The Background and Design of a Study of a Connectome Related to Human Disease

The natural history of Alzheimer's Disease (AD) includes significant alterations in the human connectome, and this disconnection results in the dementia of AD. The organizing principle of our research project is the idea that the expression of cognitive dysfunction in the elderly is the result of two independent processes: first, the neuropathology associated with AD, and second, the neuropathological changes of cerebrovascular disease. Synaptic loss, senile plaques, and neurofibrillary tangles are the functional and diagnostic hallmarks of AD, but it is the structural changes as a consequence of vascular disease that reduce brain reserve and compensation, resulting in an earlier expression of the clinical dementia syndrome. This work is being completed under the auspices of the Human Connectome Project (HCP). We have achieved an equal representation of Black individuals (vs. White individuals) and enrolled 60% women. Each of the participants contributes demographic, behavioral, and laboratory data. We acquire data relative to vascular risk, and the participants also undergo in vivo amyloid imaging and magnetoencephalography (MEG). All of the data are publicly available under the HCP guidelines using the Connectome Coordination Facility and the NIMH Data Archive. Locally, we use these data to address specific questions related to structure, function, AD, aging, and vascular disease in multi-modality studies leveraging the differential advantages of magnetic resonance imaging (MRI), functional magnetic resonance imaging (fMRI), MEG, and in vivo beta amyloid imaging.

INTRODUCTION
The natural history of Alzheimer's Disease (AD) includes significant alterations in the human connectome, and this disconnection results in the dementia of the Alzheimer's type (DAT). Data from structural and functional magnetic resonance imaging (MRI) (Dai and He, 2014), as well as magnetoencephalography (MEG) and electroencephalography, all demonstrate significant changes in neural networks even prior to the onset of clinical dementia. While such changes are not explicit in the popular A/T/N (amyloid/tau/neurodegeneration) model of AD, they appear to be an early consequence of the accumulation of beta amyloid (Busche and Konnerth, 2016), and thus may be an early warning sign of impending neurodegeneration. Indeed, models of the natural history of AD that propose that the loss of synapses is one of the first pathological stages of AD imply changes in the connectome. In 2016 the University of Pittsburgh was awarded funds by the National Institute on Aging under the Connectomes Related to Human Disease initiative of the Human Connectome Project. Our project is organized around the idea that the natural history of AD is affected by multiple independent factors, and that the expression of cognitive dysfunction is the result of independent processes including AD and vascular-related neuropathology. Here we describe the general organization of the Connectomics in Brain Aging and Dementia project, the sampling frame, the brain imaging protocols, and the behavioral/cognitive data that were acquired as part of the study. All of the study data are currently being uploaded to the Connectome Coordination Facility and the NIMH National Data Archive. 
To accomplish the study goals, we acquired neuropsychological data, as well as brain structural and functional (functional MRI, MEG) imaging, and positron emission tomography (PET) imaging of in vivo brain amyloid with Pittsburgh Compound B (PET-PiB). We used different measures of brain function because fMRI and MEG rely on fundamentally different biological processes to generate "signal", and this has the potential to provide critical information about the uncoupling of the neural and vascular components in AD (and possibly in normal aging). Because the MEG signal is derived from post-synaptic currents, and the fMRI signal also includes a vascular response, they may expose different sources of the disconnection (i.e., degeneration vs. vascular). We also acquire a direct measure of cerebrovascular function - an MRI-based measure of cerebral blood flow - as well as a direct measure of AD pathology using in vivo amyloid imaging. These data provide the opportunity to examine the relationship between amyloid deposition and local and distant connectivity among individuals with and without cognitive impairment.

Study Design
This is a longitudinal, community-based study of brain structural and functional connectivity among cognitively normal and cognitively impaired individuals aged 50-89 years.

Recruitment Sources
There are currently two primary portals of entry into the study: the University of Pittsburgh Alzheimer's Disease Research Center and the Pitt + Me web portal (primarily to recruit Black individuals and Whites without college education). Additional individuals were identified through active links with the Heart SCORE Study and the Long Life Family Study, and by word of mouth.

Study Protocol
All study participants are tested/scanned over three days. On Day One, all study enrollees complete the informed consent process and the intake forms. They are then escorted to the MR Research Center (MRRC), where they complete the two fMRI tasks (motor, working memory) and the structural imaging. Following a break, the individuals complete the behavioral tests that are not components of the NIH Toolbox. On Day Two, the participants undergo a brief exam and fasting blood tests. They are then taken back to the MRRC where they undergo diffusion imaging, task-free fMRI, and the language/math task fMRI; they then complete all the NIH Toolbox tests. On Day Three the participants undergo MEG and PET-PiB scanning; this is scheduled approximately one week after the last MRI scanning session (to avoid any interference of the MRI on the MEG data). The participants are escorted to the Center for Advanced Brain Magnetic Source Imaging where they are prepared for the MEG scan and complete task training. Once in the magnetically shielded room, the individuals complete task-free MEG and one task MEG (working memory). Individuals then take a short break while the electrodes for the motor stimulation are placed; they then complete the Language/Math and Motor MEG task scans. Following a break for either a snack or lunch, the participants are escorted to the UPMC PET Facility for their PiB scan.

Neuropsychological Tests and Questionnaires
The individual tests and questionnaires that serve as outcome variables include items from the NIH Toolbox, the PROMIS battery, and additional paper-and-pencil tests (see Supplementary Tables 1-3). The questionnaires cover symptomatology, personality, diet, and exercise. 
Magnetic Resonance Imaging Scanning
We use Siemens Prisma 3-Tesla 64-channel systems equipped with Connectome-level gradients operating at 80 mT/m. They are equipped with fMRI presentation systems including E-Prime, an MR-compatible video projector, and Celeritas response gloves. The MRI scanning is completed in two 90-min sessions over two days. The scan sequences include: T1-weighted MP-RAGE, T2-weighted SPACE image, FLAIR, susceptibility weighted imaging, diffusion tensor imaging, task-free functional MRI, task-based fMRI, and arterial spin labeling (see Supplementary Table 4). The tasks used were those described for the HCP and, with one exception, used the stimuli provided by the HCP; the exception was the N-back task. For that task all of the original photographs of faces were of White individuals; we substituted photos of Black individuals so that half of all of the N-back trials used White faces, and half Black faces. The same race was used for all of the stimuli within a trial (i.e., race could not be used to select responses). All the MRI data are processed locally through the HCP pipeline, as modified to work in the local environment. The raw data are stored on an XNAT server and pushed to a receiving server at Washington University in St. Louis for processing by the Connectome Coordination Facility and eventual upload to the on-line, public HCP database.

Magnetoencephalography Recording
Magnetoencephalography (MEG) studies are completed on an Elekta-Neuromag Vectorview 306 MEG system. The whole-scalp neuromagnetic measurement system uses 102 triple-sensor units - 102 magnetometers and 204 planar gradiometers - in a helmet-shaped array. The locations of three cardinal anatomical landmarks (nasion and two preauricular points) and of four head localization coils are digitized prior to each MEG study using a 3D digitizer (ISOTRAK; Polhemus, Inc., Colchester, VT) to define the subject-specific Cartesian head coordinate system. 30-50 anatomical points are digitized on the head surface to provide for more accurate co-registration of the MEG data with the reconstructed volumetric MR image. Eye movements are measured and recorded simultaneously with the MEG. The MEG sensor unit, the floor-mounted gantry, the subject chair and bed, together with the patient audio-visual monitoring and stimulus delivery systems, are contained in a magnetically shielded room. Once a subject is comfortably positioned in the MEG machine, a short electrical signal is sent to the head coils enabling their localization with respect to the MEG sensor array. The MEG data are acquired at a sampling rate of 1 kHz, with on-line filtering of 0.10-330 Hz. The acquisition includes two memory tasks, as well as 10 min of "resting state" data - 5 min with eyes open followed by 5 min with eyes closed. At the end of the scan, we collect 2 min of "empty room" data to assess the validity of any signal in the test conditions. Recordings are filtered offline using a spatiotemporal filtering algorithm (tSSS, correlation window 0.9, time window 10 s) (Taulu and Simola, 2006) to eliminate magnetic noise originating outside the head and to compensate for head movements. The raw data are stored on an XNAT server and are pushed to the NDA for eventual inclusion in the study database (C3159).

Positron Emission Tomography Amyloid Imaging
The PET amyloid tracer, Pittsburgh Compound B (PiB), is synthesized by a simplified radiosynthetic method based on the captive solvent method. 
High specific activity (> 0.50 Ci/μmol at time of injection) PiB (15 mCi) is injected over 20 s and the participant then relaxes quietly in a chair for ∼25 min, after which they are positioned in the scanner. A windowed transmission scan (10 min) is acquired for attenuation correction, followed by a 30-min PiB PET study (six 300-s frames). The Siemens/CTI ECAT HR+ scanner gantry is equipped with a Neuro-insert (CTI PET Systems) to reduce the contribution of scattered photon events. Positron emission tomography data are reconstructed using filtered back-projection (Fourier rebinning and 2D backprojection with Hann filter: kernel FWHM = 3 mm). Data are corrected for photon attenuation, scatter, and radioactive decay. The final reconstructed PET image resolution is ∼6 mm (transverse and axial) based on in-house point source measurements. The raw data are stored on an XNAT server and are pushed to the NDA for inclusion in the study database (C3159). The data include the dynamic images as well as a single SUV image.

Imaging Data Processing (Local)
All the MRI data are pushed to the HCP CCF XNAT server where they are processed using standard quality control measures and analyzed via the HCP Pipeline. The processed data are made available by the CCF. The MEG and PET data are saved to the NIMH Data Archive as .FIF files (MEG) and DICOM images (PET SUV images). What follows below is the description of the local processing of these data.

Magnetic Resonance Imaging Structural Image Processing
We briefly describe here the HCP Minimal Processing Pipelines that are implemented at the CCF prior to the release of the data. There are three main components to the structural data processing. In the first steps, the goal is to produce a "native" structural space for each subject, align the T1 and T2 images, perform a bias field correction, and co-register the structural volumes into MNI space. The second component, which uses FreeSurfer extensively, segments these volumes into predefined subcortical and cortical regions. It also reconstructs cortical surfaces and performs the standard surface registration to the FreeSurfer atlas. Finally, in the third step all the NIFTI and GIFTI surface files are created that can then be used in the Connectome Workbench. In addition, we also process all the MP-RAGE data through the Computational Anatomy Toolbox (CAT12) for SPM. This process provides the basis for a range of morphological analysis methods, including voxel-based morphometry, surface-based morphometry, deformation-based morphometry, and region- or label-based morphometry.

Positron Emission Tomography Processing
The PET data are processed using the PMOD and Freesurfer software packages. Correction for subject motion during the multi-frame PET scan is performed using a frame-to-frame registration procedure. The PET data are averaged to generate images that correspond to the 50-70 min post-injection uptake. The anatomical T1-weighted MR image is reoriented along the anterior-posterior commissure and the averaged PET images are co-registered to the reoriented MR image. Freesurfer software is used for MR bias field correction, automated ROI parcellation, and tissue segmentation. The Freesurfer ROI parcellations are converted into an ROI template and ROI sampling of the PET images is performed to include anterior cingulate, frontal cortex, parietal, precuneus, lateral temporal cortex, primary visual cortex, hippocampus, anterior ventral striatum, thalamus, pons, and cerebellum. 
Regional standardized uptake value (SUV) measures are computed for PiB by normalizing tissue uptake to the injected radioligand dose and body mass. Each regional SUV is normalized to a reference ROI in the cerebellum to generate the SUV ratio (SUVR). Cortical SUVRs were measured in the anterior cingulate cortex, the superior frontal cortex, orbital frontal cortex, lateral temporal cortex, parietal lobe, precuneus, and the anterior ventral striatum regions and averaged across hemispheres. The volume-weighted average of these seven SUVR values constituted the Global SUVR. The SUVR in each area is compared to a region-specific cut-off determined by sparse k-means clustering; those scores above the cut-off are considered "positive". If any of the regions was considered "PiB positive", then the Global rating was set to positive.

Magnetoencephalography Signal Processing
Ocular, muscular, and jump artifacts are identified using an automatic procedure from the Fieldtrip package. The remaining data are segmented into 4-s epochs of artifact-free activity using only the magnetometer data. An ICA-based procedure is used to remove the electrocardiographic component.

Source Reconstruction. Artifact-free epochs are filtered between 2 and 40 Hz to remove both low-frequency noise and power-line artifact. The epochs are padded with 2 s of real signal on both sides prior to the filtering to prevent edge effects inside the data. The source model consists of 2459 sources placed in a homogeneous 1-cm grid in the MNI template, then linearly transformed to subject space by warping the subject T1-weighted MRI into the MNI template. The lead field is calculated using a single shell (the brain-skull interface) generated from the T1 MRI using Fieldtrip and a modified spherical solution. A Linearly Constrained Minimum Variance beamformer (Van Veen et al.) is used to obtain the source time series by using the computed lead field and building the beamforming filter with the epoch-averaged covariance matrix and a regularization factor of 5% of the average channel power.

Spectral Analysis. The estimated spatial filters are used to reconstruct the source-space time series for each epoch and source location. MEG power spectra are calculated between 2 and 40 Hz for every clean epoch using a Hann taper, with 0.25 Hz steps. The resulting spectra for each trial are averaged to build the final spectrum for each source. The obtained power is normalized by the overall power. The normalized spectra of all the sources in each brain lobe were averaged, obtaining one value per frequency step, brain lobe, and subject. Last, we calculated the relative power per lobe in each of the standard frequency bands: Delta (2-4 Hz), Theta (4-8 Hz), Alpha (8-12 Hz), Beta (12-30 Hz), and Gamma (30-40 Hz).

Genotyping
We are genotyping each study participant for 21 previously identified susceptibility genes, including APOE*4 (see Supplementary Table 5). The genetic information is also uploaded to the NDA but requires special permissions for access.

Measures Related to Risk/Protection From Cognitive Impairment
Each of the study subjects provides additional data related to risk for and protection from cognitive impairment based on studies from our prior research. 
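To make the quantification rules above concrete, the following minimal Python sketch illustrates the SUVR normalization, the global SUVR, the PiB-positivity rule, and the relative band power computation (our own simplification for illustration; the function names, region lists, and numerical cut-offs are placeholders, not the study's actual code):

def suvr(regional_suv, cerebellum_suv):
    """Each regional SUV is normalized to the cerebellar reference ROI."""
    return regional_suv / cerebellum_suv

def global_suvr(region_suvrs, region_volumes):
    """Volume-weighted average of the seven cortical SUVRs."""
    total_volume = float(sum(region_volumes))
    return sum(s * v for s, v in zip(region_suvrs, region_volumes)) / total_volume

def pib_positive(region_suvrs, region_cutoffs):
    """Global rating is positive if any region exceeds its sparse-k-means cut-off."""
    return any(s > c for s, c in zip(region_suvrs, region_cutoffs))

BANDS = {"delta": (2, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (12, 30), "gamma": (30, 40)}

def relative_band_power(freqs, power):
    """Relative power per standard band: band power over total band power."""
    band_power = {band: sum(p for f, p in zip(freqs, power) if lo <= f < hi)
                  for band, (lo, hi) in BANDS.items()}
    total = sum(band_power.values())
    return {band: bp / total for band, bp in band_power.items()}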
With regard to exercise and motor function, each subject wears an activity monitor (Erickson et al., 2013) for five days, and we query them about the amount of walking per week, estimate the number of kilocalories burned per week, and measure gait speed (in addition to the motor tasks used by the NIH Toolbox). Each participant completes the Florida Cognitive Activities Scale to obtain a measure of activities that might affect cognitive and brain health.

Quality Control/Assurance Procedures

Quality Control
Magnetic Resonance Imaging Scanner. The MRRC has QC/QA procedures and American College of Radiology certification in place for all scanners. These include daily signal stability scans for echo planar imaging (1% maximum RMS over a continuous 30-min acquisition with a 64 × 64 matrix size) and daily signal-to-noise measurements with the standard RF head coil. In addition to the daily QC testing of the MRI scanner, each imaging protocol is examined visually prior to submitting it to the local data archive. The scans are checked immediately by a member of the Imaging Team and repeated if necessary.

Positron Emission Tomography Scanner. QC/QA procedures are run according to the University of Pittsburgh PET Facility Standard Operating Procedures HR+ Quality Assurance Task Schedule. The "Daily QC" protocol runs a scan that is compared to the last standard written into the database, that is, the standard written by the Norm 2D and ECF (Customer) protocol. The resulting deviation between scans must be less than 2.5. The protocol uses the internal rod sources of the gantry, so no phantoms are used.

Magnetoencephalography Scanner. The operating status of the Elekta NeuroMag system is tested daily. This includes determining that there is a sufficient level of liquid helium, calibrating and tuning the sensors, and verifying that the magnetic shielding is functioning properly, producing a sufficiently low ambient magnetic interference level.

Neuropsychological Testing. Clinical Team Leader Dr. Snitz trains the staff who are responsible for administering and/or scoring questionnaires or paper-and-pencil tests, as she does within the ADRC.

Quality Assurance
Magnetic Resonance Imaging Scanner. We use the ADNI phantom as a reference tool for our structural and functional images.

Positron Emission Tomography Scanner. The 68Ge phantom is run on a weekly basis to check for changes in the scanner calibration or changes in uniformity. Four times each year the following procedures are performed in order: Full ASIC Bucket Setup; System Normalization; Daily QC; and Scanner/Well Counter Cross Calibration.

Magnetoencephalography Scanner. Prior to and after every scan we record 2 min of empty room data to measure ambient magnetic noise. We complete a simple spectral analysis and then save the raw data and spectra. This allows for monitoring the noise level and system status over time to help identify changes in the background environment.

Neuropsychological Testing. Dr. Snitz reviews the scoring of all questionnaires and paper-and-pencil tests. Every six months a sample of ten protocols will be "double scored" to ensure interrater reliability. Five of these protocols will be repeated annually to check for scoring drift.

PRELIMINARY RESULTS
The data acquired through this protocol are and will continue to be uploaded to the CCF and NDA. However, the team has completed some initial analyses to better describe the participants who had enrolled in the study by March 31, 2020. 
The data provide critical information about the relationship between the breakdown in functional and structural connectivity and the expression of cognitive impairment along the AD-pathology continuum. Because of our unique sampling frame, we have data from participants who are less likely to enroll in biomedical research studies, and this has revealed several aspects of the normal/pathological aging spectrum that were previously under-appreciated. The study was reviewed and approved by the University of Pittsburgh Human Research Protection Office. All participants signed written statements of Informed Consent prior to initiation of any research procedures.

Subjects
A total of 472 individuals inquired about the study and of these, 208 either chose not to enroll or failed the initial screening questions related to MR compatibility (e.g., metal implants) or medical history (e.g., clinical stroke). Twenty-seven individuals were excluded after having signed an informed consent form; as of 31 March 2020, 227 individuals had enrolled in the study. Of these participants, 13 had been diagnosed with DAT; these individuals are not described in this report. Sixty-seven study participants (31%) entered via the ADRC; 97 (45%) came through Pitt + Me, and 27 (13%) were volunteers from the community. Twenty-one participants (10%) entered through HeartScore or the LLFS. We compared the characteristics of the participants initially classified as having normal cognition to those with some degree of impairment. There were two subgroups among the Cognitively Normal participants: those who reported no limitations in their cognition and those who reported significant concerns (SCC). There were also two subgroups among the cognitively impaired participants: those who reported no concerns or loss of abilities (IWOC), and those who reported loss of abilities (i.e., MCI) (see Tables 1-3). The proportion of Black individuals was greater within the cognitively impaired group, as was the proportion reporting being left-handed. As would be expected, the Crystallized and Fluid Intelligence measures from the NIH Toolbox were significantly lower among the impaired participants. The two subgroups of individuals who were cognitively normal did not differ in terms of age, years of education, distribution of men and women, race, or handedness (see Table 2). The MoCA scores were equivalent, but the individuals in the SCC group performed more poorly on the Wide Range Achievement Test. The SCC group reported more cognitive concerns and lower scores on the measure of Meaning and Purpose. The latter indicates more hopelessness, less goal-directedness, less optimism, and weaker feelings that their life is "worthy". Between the two subgroups of individuals with impaired cognition, those in the IWOC group were younger, less well educated, and more likely to self-identify as Black; they had decreased physical endurance (see Table 3). The IWOC group reported significantly better cognitive abilities (higher scores) and fewer cognitive concerns (lower scores) than the people in the MCI group. They reported higher scores on the Meaning and Purpose questions from the PROMIS battery. Finally, when we compared the MCI and IWOC groups, we found that those with MCI had lower scores on the Cognitive Abilities questionnaire, and reported significantly more concerns about their cognition than the IWOC group. 
Structural Magnetic Resonance Imaging Data
We calculated an index of the cortical thickness of critical temporal lobe areas, including the fusiform gyrus, entorhinal cortex, and the inferior and middle temporal gyri, using values taken from the standard output of the HCP pipeline. We then classified each case as "normal" or "atrophic" based on the standard cut-off of 2.70 mm (see Table 4). The mean cortical thickness differed as a function of group (one-way analysis of variance). Furthermore, the rate of abnormal thickness differed significantly between groups (χ² = 7.87, df = 3, V = 0.21, p < 0.05), with the controls and the IWOC having the lowest rates, and the SCC and MCI groups having the highest.

Positron Emission Tomography Pittsburgh Compound B Data
Positron emission tomography (PET) data were available from 176 of the individuals enrolled in the study. Table 4 shows the data, including the mean SUVR for each of the brain regions used for determining amyloid deposition, as well as the global rate of PiB positivity. There is a significant main effect of group (one-way ANOVA) for each of the seven regions of interest (summed across each hemisphere). In addition, the rate of PiB positivity was significantly different across all groups (chi-square test). However, these effects were due to the lower-than-normal SUVRs in each of the six brain regions for the 35 individuals in the IWOC group compared to the healthy controls (all ds > 0.61) and their low rate of PiB positivity (odds ratio = 14.0, 95% CI = 1.8-110, exact test p = 0.002) compared to the controls. Among the normal controls the rate of positivity was greater among the White (51.4%) relative to the Black participants (4.5%; OR = 32.2, 95% CI = 2.7-184; exact test p = 0.0003).

Amyloid/Neurodegeneration Classification
We compared the rates of PiB retention and temporal lobe atrophy as a function of the clinical classification (see Table 5). There was a significant difference in the rates of biomarker abnormality across groups (χ² = 21.5, df = 9, V = 0.21, p < 0.05). Fifty-eight percent of the normal controls were biomarker negative, which is similar to the rates for the SCC (53%) and MCI (49%) groups. By contrast, the IWOC group was 74% biomarker negative. Among the participants with MCI, 29% had only temporal lobe atrophy, while 9.8% had only PiB+ imaging.

Magnetoencephalography Summary Data
One hundred and eighty-six individuals contributed MEG data that met all quality control standards. We examined the relative power across all five MEG frequency bands in regions of interest (ROI) extracted using the AAL templates. The repeated measures (band) analysis of covariance (age) of temporal lobe power by subject group revealed that the SCC group had elevated theta power compared to the other study groups (see Figure 1A), and decreased beta power. There was no significant association (chi-square tests) between elevated theta power (> 75th percentile of normal controls) and race, sex, or APOE*4 status. However, an ANCOVA of temporal lobe theta power revealed a significant interaction between group (NC vs. SCC) and PiB status (positive vs. negative). As can be seen in Figure 1B, theta power in the temporal lobe (adjusted for age) is similar in the normal controls (PiB±) and the PiB- SCC group; power is elevated only in the PiB+ SCC participants.

DISCUSSION
The purpose of this report is to describe the creation of the Connectomics of Brain Aging and Dementia study. 
The MRI brain images are being uploaded to the CCF, and the behavioral and cognitive data, PET-PiB scan regional SUVRs (and raw SUV images), and the raw data from the MEG are being uploaded to the NDA (ID C3159).

Study Advantages, Limitations, Possible Pitfalls, and How to Counteract Them
When this project was initially proposed to the NIH, we specified that the sample would consist of 50% women and 50% Black participants. We further proposed that the 50:50 splits be maintained in each subject group. While we were able to achieve this goal in our sample of healthy controls, some subgroups of participants did not conform to these expectations, which in fact reveals much about the characteristics of those phenotypes. We believe that the single biggest advantage of using data derived from this study, and which will continue to be acquired and deposited for public consumption, is the composition of the study sample.

Table 5. Rates of biomarker abnormality (% of group):
                      NC    SCC   IWOC  MCI
Amyloid Only          21.6  13.3   2.9   9.8
Atrophy Only           9.5   6.7  20.6  29.3
Amyloid and Atrophy   10.8  26.7   2.9  12.2
Cramer's V = 0.21, p < 0.05.

We found that by carefully tailoring our public face on Pitt + Me we were able to recruit individuals across a wide range of socioeconomic strata as well as a high rate of Black volunteers. While many studies successfully enroll Black participants at a rate consistent with the population distribution, we specifically chose to oversample Blacks. The individuals that we ended up enrolling, both White and Black, were frequently new to research, and often had relatively low health-related knowledge. In our view, these are the people who need to be enrolled in studies such as COBRA in order to see the process of aging and neurodegeneration as it exists in the broader community. However, we learned several things about the execution of the protocol that had not been self-evident prior to the study. First, and perhaps most important, these research participants require a great deal more "hands-on" care than the typical research participant. In the end, each participant is assigned to a Research Associate who is, in effect, a concierge. They escort the participant around the medical center for the various procedures. They may be an examiner or interviewer who sits with the participant during neuropsychological testing or completion of healthcare questionnaires. They may take the participant to the cafeteria for lunch, or if time is short, purchase the lunch from the hospital cafeteria. These are also the individuals who make interval telephone calls to maintain the necessary contact with the participants during follow-up. This means that we had underestimated our need for support staff by as much as 50%. We also learned that because many of these individuals were new to research, many of the procedures that we use must be explained to them in ways that differ from those used with the more research-experienced individuals we are accustomed to working with. For example, the PET procedures are explained in more detail, as the notion of injecting radioactive compounds (or any other solution) is not universally accepted without good explanation. To facilitate this process, we talk in terms of the important changes that can occur in the brain with dementia, and explain that we can take a picture of those changes using the injected solution. After our participants have completed their baseline examinations, we send them a signed certificate of participation accompanied by a color image of the surface of the brain using the Freesurfer parcellations. 
Frequently, this results in telephone calls asking us to explain "what it means." One of the Investigators always returns these calls; it is critically important to "give back" to the communities. We also attend monthly gatherings at local Community Engagement Centers - just being present increases our familiarity to the community. We also found that it was important to pay close attention to transportation needs. Many of our participants live in neighborhoods where public transportation is less than ideal (e.g., two or more transfers needed for a 60-min one-way trip). Consequently, we had to develop relationships with ridesharing services to obtain the quality of service that we wanted for our participants. Everyone is met at the door to the hospital by their "concierge," and from there escorted to all of the tasks that they will do during the day. At the end of the day the ride is scheduled, and the "concierge" takes the volunteer back to the lobby and awaits the arrival of the car. Finally, while all imaging researchers are familiar with the problem of incidental findings, the quality of those findings in a study such as this is different from what we have encountered in the past. Many of the individuals in the study had limited healthcare resources which might have identified potential problems; many participants do not have a regular annual physical. However, we have also had instances of more severe brain injury that was a consequence of the participant's living environment. One individual, for example, had suffered a severe closed head injury, and the sequelae were evident on the scan. However, there was no mention of this event despite multiple opportunities during screening and interview. The individual seemed surprised that spending more than three days in the hospital, much of the time in a coma, would result in brain damage. This view is likely due in part to a lack of awareness of health-related issues.

Comments on Preliminary Data
A significant proportion of the participants in this study have never been involved in biomedical research. Thus, our sample likely includes individuals who are typically under-represented in academic research studies and may be more representative of the population at risk for cognitive impairment. This has resulted in the identification of a group of study participants who were cognitively impaired but had no complaints or concerns about their cognitive abilities. Further, we found that the rate of amyloid deposition among those individuals with cognitive impairment (i.e., MCI and IWOC) was lower than expected based on prior analyses. Among the MCI participants, 4/10 individuals (40%) recruited from the ADRC were amyloid positive, whereas only 1/16 (6%) of the individuals recruited via Pitt + Me were amyloid positive (odds ratio = 10.0, 95% confidence interval = 0.92-108, p = 0.055). We had assumed when the project began that participants recruited from the community would be, on average, cognitively normal; the cognitively impaired participants (and those with subjective complaints) would enroll through the ADRC. However, experience revealed a more nuanced picture. The group of individuals with impaired cognition, but who did not complain of changes in their behavior or cognition, deserves special mention. The participants in this group were predominantly Black (85%), which contrasts sharply with the NC (41.7%) and SCC (14.3%) groups. 
Their performance on the tests used for classification was equivalent to that of the MCI participants, but without the complaints necessary for that classification. Indeed, on average the IWOC participants reported better cognitive abilities, and fewer cognitive concerns, than did the cognitively normal controls. The near absence of PiB retention means that these individuals were not, as yet, on the AD pathology spectrum; although with a mean age of 60 years the amyloid cascade may not be well developed, or perhaps other non-amyloid factors may be in play. Given the age range of the IWOC group there is also a high likelihood that these individuals (as well as other Black participants in the study) are the children or grandchildren of the people who migrated from the rural South to cities like Pittsburgh. Growing up Black in a northern city in the 1950s and 1960s was likely associated with poorer educational quality, poor access to medical care and health maintenance, as well as a range of psychosocial consequences of explicit and implicit discrimination. It may be that any racial inequities in the development of cognitive impairments are driven by pervasive institutionalized inequities that shape risk and disadvantage individuals at multiple levels, including biological, environmental, behavioral, and sociocultural. Although these factors have often been referred to as "modifiable individual risk factors," this term fails to recognize that individual risk is influenced by racism and social determinants that are outside of an individual's control. At a population level, Black communities experience racism and more adverse social determinants of health, including negative work, living, and educational conditions, that can lead to long-term negative biological consequences (Braveman et al., 2011). Indeed, neighborhood-level disadvantage was associated with an increased likelihood of AD neuropathology at autopsy. While there are established diagnostic hallmarks of AD, little attention has been paid to the possibility that factors such as neighborhood context may directly and indirectly impact brain changes that alter the connectome, thus resulting in earlier expression of the dementia. To date, little attention has been paid to the possibility that early social structural and social determinants may affect brain structure and function, alter the connectome, and reduce brain reserve and compensation, resulting in the earlier expression of DAT and an apparent increased incidence of dementia among Blacks. Indeed, there needs to be a paradigm shift in the field to focus on collecting the contextual and environmental data that may help disentangle apparent differences due to race; "analyzing findings by race/ethnicity without appropriate contextual data could lead to inaccurate, misleading, or stigmatizing conclusions that may detract from the overall goals of diversity in research: to enhance the accuracy, utility, and generalizability of scientific evidence". This view is supported by decades of research arguing that racial and socioeconomic inequities are not the result of individual behavior or biological factors but rather are due to the structures, institutional practices, and policies which contribute to adverse outcomes and susceptibilities. 
The data included in this project provide investigators around the world with the opportunity to investigate the spectrum of aging and AD effects on the brain and cognition using truly multimodal imaging and detailed cognitive/behavioral evaluations. Genetic analyses will be completed starting at the end of 2020, and those restricted data will be available directly from the study investigators. Longitudinal follow-up of the individuals in the study is underway, and there are plans to enrich the sample of pre-DAT participants and continue follow-up. These data, combined with the main HCP dataset, the HCP Lifespan and Aging datasets, and the other CRHD project related to AD, provide the richest and most comprehensive resource for the neurobiological study of AD and related dementias.

CONCLUSION
The study has two unique characteristics. First, the data are acquired using standard and standardized procedures that are shared by other CRHD studies, including the HCP Lifespan Study. This provides an international, accessible database for all investigators. Second, and more important, are the characteristics of the study sample. We used multiple portals of entry, including customized web sites, that allowed us to achieve our goal of ∼50% Black participants and to reach people who were participating in their first research study. This, we believe, at least partly explains why our measured rates of AD pathology are lower than those in more typical research samples. In addition, we identified a group of participants whose test performance was as poor as that of the MCI participants, but who report few concerns about their cognition; this group is predominantly Black. This leads us to what we believe is the most important implication of our data, and which is a weakness of the study as currently described. Specifically, we, like many others, made the mistake of analyzing findings by race/ethnicity without appropriate contextual data, which "could lead to inaccurate, misleading, or stigmatizing conclusions that may detract from the overall goals of diversity in research: to enhance the accuracy, utility, and generalizability of scientific evidence". Race is a socially determined construct that is not biologically or genetically based (Cooper and David, 1986). In addition to strong data suggesting there are no biologically determined differences between races (Serre and Paabo, 2004), defining race as a social construct has the advantage of capturing the concept of racism more precisely. Racism is thus better defined as a system that structures opportunity based on race, providing unfair advantages and disadvantages on that basis. There is still considerable disagreement on the factors contributing to disparities in many AD-related outcomes, e.g., dementia onset and course. Much of this is likely due to the focus on individual behavior or "lifestyle factors" without consideration for the social, physical, and policy environments that are inextricably linked to the individual and are key to understanding health disparities. Perhaps a better way to place the factors related to AD and dementia into the NIA Health Disparities Framework is to study the interplay between social determinants of health, racism, and AD and dementia. 
Aside from the more direct effects of racism on risk factors, we also believe that racism may have the moderating effect of reducing the impact of the positive social determinants of health (SDOH) (e.g., education, access to health care) and increasing the impact of negative SDOH (e.g., poverty, social isolation). Significant advances in AD and dementia prevention and management will be made as we accumulate more information about SDOH and how racism affects their relationship with resilience, diagnosis, prognosis, and response to treatment. With the exceptions of AC and JB, the order of the authors is alphabetical.

DATA AVAILABILITY STATEMENT
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: HCP Connectome Coordination Facility https://www.humanconnectome.org/study/connectomics-brain-aging-and-dementia; NIMH Data Archive https://nda.nih.gov/edit_collection.html?id=3159.

ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Human Research Protection Office, University of Pittsburgh. The patients/participants provided their written informed consent to participate in this study.

AUTHOR CONTRIBUTIONS
AC and JB made the initial draft of the work while the remaining authors revised it for important intellectual content. All authors made substantial contributions to the conception and design of the work, as well as the acquisition, analysis, and interpretation of data for the work, and had final approval of the version to be published.

FUNDING
This research and the preparation of this manuscript was supported by funds from the National Institute on Aging (UF1-AG051197), as well as the Neuroimaging Core of the |
"""Supports taking in data and returning data from calls to `get_data` and `get_namespaced_data` as CSV."""
import csv
import sys
from io import BytesIO
# TODO Document only supported CSV format
# - No column headers
# - Key is always position 0
# - Attr keys and values alternate in positions 1..n
delimiter = ','
def set_delimiter(c):
    global delimiter
    delimiter = c
lineterminator = '|'
def set_line_terminator(s):
    global lineterminator
    lineterminator = s
quoting = csv.QUOTE_MINIMAL
def set_quoting_none():
    global quoting
    quoting = csv.QUOTE_NONE
def set_quoting_all():
    global quoting
    quoting = csv.QUOTE_ALL
def set_quoting_minimal():
    global quoting
    quoting = csv.QUOTE_MINIMAL
def set_quoting_nonnumeric():
    global quoting
    quoting = csv.QUOTE_NONNUMERIC
quotechar = '"'
def set_quote_char(c):
    global quotechar
    quotechar = c
def deserialize(data):
"""
    * `data` - `string of CSV rows, delimited by the configured line terminator`.
Required function for a data format plugin. Converts data in CSV format to the sofine Python data
structure for data sets. In particular, data passed to `sofine` on `stdin` will be processed with this call.
This data is expected to adhere to this format (assuming the comma is the delimiter):
    Key,attribute_name_1,attribute_value_1,Key,attribute_name_1, ...
    which will be translated to:
    {"Key": [{attribute_name_1: attribute_value_1}, ...],
     ...
    }
"""
ret = {}
schema = []
# Note this hack with lineterminator. The alternative to manually splitting lines is
# to put 'data' into a StringIO, because csv.reader needs an iterable
reader = csv.reader(data.split(lineterminator), delimiter=delimiter, lineterminator='',
quoting=quoting, quotechar=quotechar)
for row in reader:
if not len(row):
continue
# 0th elem in CSV row is data row key
key = row[0]
        key = key.encode('utf-8')
attr_row = row[1:]
ret[key] = [{attr_row[j].encode('utf-8') : attr_row[j + 1].encode('utf-8')}
for j in range(0, len(attr_row) - 1, 2)]
return ret
def serialize(data):
"""
* `data` - `dict mapping string keys to lists of dicts of string keys and arbitrary values`.
    Required function for a data format plugin. Converts data from the sofine Python data
    structure for data sets into CSV. In particular, data passed from `sofine` to `stdout` will be processed with this call.
    This data is expected to adhere to this format (assuming the comma is the delimiter):
    {
     "Key": [{attribute_name_1: attribute_value_1}, ...],
     ...
    }
    which will be translated to:
Key,attribute_name_1,attribute_value_1,Key,attribute_name_1, ...
"""
# Python docs cryptically say the csv Writer should set the 'b' flag on its
# File writer "on platforms that support it." Googling finds that to make this work
# with streams you should use BytesIO. StringIO also works (at least for ASCII).
out_strm = BytesIO()
    writer = csv.writer(out_strm, delimiter=delimiter, lineterminator=lineterminator,
quoting=quoting, quotechar=quotechar)
# Flatten each key -> [attrs] 'row' in data into a CSV row with
# key in the 0th position, and the attr values in an array in fields 1 .. N
for key, attrs in data.iteritems():
row = []
row.append(key)
for attr in attrs:
row.append(attr.keys()[0])
row.append(attr.values()[0])
writer.writerow(row)
ret = out_strm.getvalue()
out_strm.close()
return ret
def get_content_type():
"""
Required for a data format plugin. Returns the value for the HTTP Content-Type header.
"""
return 'text/csv'
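# A minimal round-trip sketch (assumption: this demo block is illustrative only
# and not part of the sofine plugin contract; run the module directly to try it):
if __name__ == '__main__':
    sample = {'AAPL': [{'price': '100'}, {'pe': '12'}]}
    as_csv = serialize(sample)
    print as_csv            # e.g. AAPL,price,100,pe,12|
    print deserialize(as_csv)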
|
The 2008 Survey of Consumer Payment Choice This paper presents the 2008 version of the Survey of Consumer Payment Choice (SCPC), a nationally representative survey developed by the Consumer Payments Research Center of the Federal Reserve Bank of Boston and implemented by the RAND Corporation with its American Life Panel. The survey fills a gap in knowledge about the role of consumers in the transformation of payments from paper to electronic by providing a broad-based assessment of U.S. consumers' adoption and use of nine payment instruments, including cash. The average consumer has 5.1 of the nine instruments, and uses 4.2 in a typical month. Consumers make 53 percent of their monthly payments with a payment card (credit, debit, and prepaid). More consumers now have debit cards than credit cards, and consumers use debit cards more often than cash, credit cards, or checks individually. Cash, checks, and other paper instruments are still popular and account for 37 percent of consumer payments. Most consumers have used newer electronic payments, such as online banking bill payment, but they only account for 10 percent of consumer payments. Security and ease of use are the characteristics of payment instruments that consumers rate as the most important. |
Single stage: dorsolateral onlay buccal mucosal urethroplasty for long anterior urethral strictures using perineal route

ABSTRACT
Objective To assess the outcome of single stage dorsolateral onlay buccal mucosal urethroplasty for long anterior urethral strictures (>4cm long) using a perineal incision.
Materials and Methods From August 2010 to August 2013, 20 patients underwent BMG urethroplasty. The cause of stricture was Lichen sclerosus in 12 cases (60%), instrumentation in 5 cases (25%), and unknown in 3 cases (15%). Strictures were approached through a perineal skin incision and the penis was invaginated into it to access the entire urethra. All the grafts were placed dorsolaterally, preserving the bulbospongiosus muscle, the central tendon of the perineum and the one-sided attachment of the corpus spongiosum. The procedure was considered a failure if the patient required instrumentation postoperatively.
Results Mean stricture length was 8.5cm (range 4 to 12cm). Mean follow-up was 22.7 months (range 12 to 36 months). Overall success rate was 85%. There were 3 failures (meatal stenosis in 1, proximal stricture in 1 and whole length recurrent stricture in 1). Other complications included wound infection, urethrocutaneous fistula, brownish discharge per urethra and scrotal oedema.
Conclusion Dorsolateral buccal mucosal urethroplasty for long anterior urethral strictures using a single perineal incision is simple, safe and easily reproducible by urologists, with a good outcome.

INTRODUCTION
Urethral stricture is a common disease encountered by urologists. Its exact incidence in the Indian population has not been reported. Reconstruction of long and complex anterior urethral strictures is technically demanding. Long anterior strictures with dense focal narrowing and scarred, extremely narrow urethral plates, fistula or infection are best managed with staged procedures. Those with a salvageable urethral plate are being increasingly managed with a single stage repair using genital or non-genital tissue grafts/flaps. Since Suprechko's first description of buccal mucosa used as a graft in 1886, it has become the tissue of choice for urethral reconstruction. Its popularity can be credited to extensive work by Braca and Barbagli. It is readily available and easily harvested with minimal donor site morbidity. Buccal mucosa is hairless, has a thin, elastin-rich epithelium giving it excellent handling characteristics, and a highly vascular lamina propria, which facilitates harvesting and imbibition. The ideal location for BMG onlay has been debated for quite some time. There is now adequate evidence that dorsal onlay has an edge over the ventral onlay technique, especially in the penile urethra. Recently Barbagli and Kulkarni have proposed one-sided mobilization of the urethra with sparing of the central tendon of the perineum and dorsal anterior/lateral placement of the BMG in order to preserve the blood supply to the urethra and the neuro-vascular integrity of the bulbospongiosus muscle, respectively. We present our experience of single stage urethroplasty with dorsolateral onlay of BMG for long strictures of the anterior urethra approached through a perineal incision.

MATERIALS AND METHODS
The study was conducted between August 2010 and August 2013. Approval was taken from the hospital ethical committee. Patients who presented to us with anterior urethral strictures (>4cm measured on RGU) were included in the study.
Each patient was evaluated by detailed history, physical examination, uroflow with post-void residual urine, RGU and VCUG, and other routine investigations necessary for surgery. A suprapubic catheter was placed pre-operatively in those presenting with acute retention of urine and/or with altered renal parameters. The cause of stricture was Lichen sclerosus in 12 cases (60%), instrumentation in 5 cases (25%), and unknown in 3 cases (15%). Exclusion criteria were previous failed urethroplasty, urethral abscess, urethral fistulas and a scarred and unsalvageable urethral plate. Uroflowmetry and measurement of post-void residue were done at 1 month, 3 months and 6 months after surgery and every 6 months for the first 3 years thereafter. Those who had a recurrence of voiding symptoms with objective evidence on uroflow study underwent imaging and/or cystoscopy to identify the site of re-stricture. These cases were considered as treatment failures. The operation was performed under general anesthesia with nasal intubation. Two teams worked simultaneously, one at the donor site and the other at the recipient site. Urethroscopy was performed using a 6-7.5Fr semi-rigid (Karl Storz) ureteroscope and a hydrophilic (Terumo) guide wire was passed into the bladder. A 5Fr ureteric catheter was guided over it and secured with a stitch on the glans. A midline perineal skin incision is made; the bulbar urethra is exposed, preserving the midline tendon of the perineum and the bulbospongiosus muscle (Figure-1). The involved bulbar urethra is dissected off the corpora cavernosa on the left side, so as to leave the right half attached and preserve its lateral blood supply. The penis is invaginated into the perineal incision and the involved segment of penile urethra is similarly dissected off the corpora cavernosa along the left side. On the left side the urethra is partially rotated and the dorsolateral surface is incised, exposing the lumen (Figure-2). The incision is extended for about 1cm beyond the stricture segment at both ends. The proximal and distal lumen is calibrated to ensure adequate patency. In case of strictures extending up to the external urethral meatus, a dorsal meatotomy is performed from the meatus, through the urethra inside the glans, connecting it to the dorsolateral incision in the distal penile urethra. The buccal mucosa is harvested from the inner cheek (one or both sides, depending on the length required). The inner cheek from just inside the labial angle up to the retromolar trigone is marked, keeping 0.5cm away from the opening of the Stensen duct, to obtain a buccal graft of 2.5-3cm width and 6-7cm length. We use a 26 gauge needle to infiltrate dilute (1:200000) adrenaline under the marked portion of the mucosa. The edges are incised, and 2 stay sutures are placed at the distal corners of the graft using 3-0 chromic catgut, for traction. Once the graft is harvested, the raw area is allowed to epithelize secondarily. The graft is defatted, trimmed to an appropriate shape and used as an onlay. We do not perform a primary closure of the mucosal defect. The buccal mucosal graft is trimmed to an appropriate size and is spread and fixed (quilted) over the exposed half of the corpora. The edges of the graft are sutured to the corresponding edges of the opened urethral lumen using 4-0 polygalactin sutures (Figures 3a, 3b and 3c) over a 14Fr silicone Foley's catheter.
In those cases with external urethral involvement, the dorsal meatotomy incision allowed us to widen the narrow meatus/fossa navicularis region and draw the graft in through the glans from the distal urethrotomy and place it right up to the tip of the external meatus (Figures 4a and 4b). After completion of the anastomosis, the wound is closed in layers (Figure-5). The periurethral catheter is left in-situ for 3-4 weeks.

RESULTS
Twenty patients were included in the study (Table-1); ages ranged from 18 to 56 years. Mean stricture length was 8.5±1.395cm (range 4 to 12cm). Mean operative time was 140±11.337 min (range 120-180 min). The mean postoperative Qmax at the 12-month follow-up was 24±3.162mL/sec (range 18-32mL/sec). None of these patients had any significant post-void residual urine. The mean hospital stay was 6.25±1.070 days (range 5-9 days). None of the patients required peri-operative blood transfusions. Mean follow-up was 22.7±4.105 months (range 12 to 36 months). Treatment was successful in 17 (85%) and failed in 3 (15%). These 3 patients presented with decreased flow rates of <9mL/sec after 1-3 months. VCUG revealed a stricture at the proximal end of the graft in 1 (confirmed by urethroscopy), meatal stenosis in 1, and 1 had recurrent stricture along the whole length of the graft. Recurrent stricture was treated by DVIU. Meatal stenosis was managed by a meatotomy. The patient who had recurrent stricture of the whole length was planned for revision urethroplasty but was lost to follow-up. Other complications included scrotal oedema in 3 (17.6%); 3 patients (17.6%) had brownish discharge through the external meatus and 2 (10%) patients had wound infection (Figure-3). One of these patients with wound infection had a urethrocutaneous fistula, which presented to us 3 weeks after catheter removal. None of the patients in our study had postoperative chordee, diverticulum formation or post-void dribble.

DISCUSSION
BMG augmentation urethroplasty has become the standard of care for long urethral strictures. Whether to place the graft dorsally, ventrally or laterally is controversial. Dorsal placement of the graft has the advantage of using the corporal bodies to provide a secure, well vascularized graft bed that helps to prevent protrusion of the graft with resulting pseudo-diverticulum formation. In addition, this spread BMG fixation preserves graft width and hence urethral caliber. On the other hand, a ventral location provides the advantage of ease of exposure and good vascular supply by avoiding circumferential rotation of the urethra. Ventral urethrotomy allows the lumen to be clearly delineated, thus enabling the surgeon to identify mucosal edges, measure the size of the plate, carry out a water tight anastomosis and, if necessary, excise a portion of the stricture and perform dorsal re-anastomosis. Barbagli et al., in 2005, published a retrospective study of 50 cases with bulbar urethral stricture where buccal mucosa graft urethroplasty was done. Grafts were placed as ventral, dorsal and lateral onlay in 17, 27 and 6 patients respectively. After a mean follow-up of 42 months, placement of the graft into the ventral, dorsal or lateral surface of the bulbar urethra showed similar results. Later, in 2008, Barbagli et al. showed that the dorsal urethral surface could be easily approached leaving the bulbospongiosum muscle and central tendon of the perineum intact, thus preserving the branches of the perineal nerves from surgical injury.
The bulbospongiosum muscle is primarily responsible for ejaculation because of its rhythmic contractions with other perineal muscles to expel semen from the urethra. It may also have an important role in expelling urine. Kulkarni et al. published their series of 24 patients in 2009, wherein they described a new technique of one-sided anterior dorsal oral mucosal graft urethroplasty while preserving the lateral vascular supply to the urethra, the central tendon of the perineum, the bulbospongiosum muscle and its perineal innervation, and showed a success rate of 92%. They also reported that factors such as age, cause of stricture, length and prior instrumentation, previously said to have an influence on any kind of urethroplasty, had no effect on the success rate, suggesting that other factors (possibly vascular and neurogenic injury) may play an important role in determining stricture recurrence. In our series of 20 patients, overall success was 85% at a mean follow-up of 22.7 months. We feel that with a single perineal incision and invagination of the penis, adequate exposure of the whole anterior urethra is possible. This approach avoids a separate penile skin incision, making it more cosmetic, and also reduces the chances of development of urethrocutaneous fistulas. One-sided dissection of the anterior urethra from the corpora cavernosa allowed us to visualize the urethral lumen with minimal rotation of the urethra. Also, placement of a guide wire/urethral catheter in the urethral lumen acts as a valuable guide while incising the urethra. We were able to avoid creating false passages, especially in very narrow or scarred portions of the stricture, by this maneuver. None of the patients in our series had post-void dribble following the procedure. All 3 failures occurred in the early days of the study period. The patient who developed meatal stenosis had a Lichen sclerosus stricture involving the external meatus. The dorsal meatotomy incision that we used in such cases for laying the buccal mucosa on the glans portion of the distal urethra was probably of insufficient depth/width. He was treated by a simple meatotomy, which was sufficient. The cause of the stricture at the proximal anastomotic site was similarly a failure to achieve mucosa-to-mucosa approximation of the graft and healthy urethra. This was managed by DVIU and the patient remained symptom free till the end of the follow-up period. The patient with recurrent pan-urethral stricture was a chronic tobacco chewer and had quit only 2 months prior to surgery. This could have resulted in a sub-optimal buccal mucosa graft. Scrotal oedema in 3 (17.6%) was managed conservatively with scrotal support and oral serratiopeptidase twice a day for 3 days. Three patients (17.6%) had brownish discharge through the external meatus that was managed by gently squeezing the shaft from the penoscrotal region to the meatus, which subsided in 3 days. This discharge was probably collected blood that was retained in the urethra during dissection. Two patients (10%) had wound infection and were managed by regular dressings. One of these patients with wound infection had a urethrocutaneous fistula, which presented to us 3 weeks after catheter removal. He underwent reinsertion of a suprapubic catheter and regular dressings. A VCUG done 3 weeks later showed resolution of the fistula tract, so the suprapubic catheter was removed. These patients with wound infection had a prolonged hospital stay. Our results are comparable with those published by Kulkarni et al.
in 2009, using the same technique. Limitations of our study are the small number of patients and the short follow-up period of 22.7 months.

CONCLUSIONS
Dorsolateral placement of a buccal mucosa graft for long anterior strictures is minimally invasive, safe and has good outcomes with short to intermediate length of follow-up. Further studies on larger series of patients are necessary to confirm that preservation of the one-sided lateral vascular supply to the urethra and its entire muscular and neurogenic support reduces the incidence of stricture recurrence, post-void dribble and ejaculatory dysfunction.

ACKNOWLEDGEMENTS
Ethical approval of the study was not required as we followed the standard operating procedures of our hospital. However, we had a discussion in one of our IEC meetings, which also had the opinion that approval was not required. |
import originCreateVuePlugin, { Options } from '@vitejs/plugin-vue'
import qs from 'querystring'
import { SourceMapInput } from 'rollup'
import { Plugin } from 'vite'
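/**
 * Splits a module id into filename and query, and rewrites `.md` ids so that
 * `?excerpt` and `?content` requests resolve to virtual `.excerpt.md` /
 * `.content.md` files, while `?pageData` ids pass through untouched.
 */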
function transformId(
id: string
): {
newId: string
pageData?: true
tag?: 'excerpt' | 'content'
} {
const [filename, rawQuery] = id.split(`?`, 2)
const query = qs.parse(rawQuery || '')
if (filename.endsWith('.md')) {
if (query.pageData !== undefined) {
return { newId: id, pageData: true }
} else if (query.excerpt !== undefined) {
const newFilename = filename.slice(0, -3) + '.excerpt.md'
return {
newId: newFilename + '?' + rawQuery,
tag: 'excerpt'
}
} else if (query.content !== undefined) {
const newFilename = filename.slice(0, -3) + '.content.md'
return {
newId: newFilename + '?' + rawQuery,
tag: 'content'
}
}
}
return { newId: id }
}
function mapSourceMap(map?: SourceMapInput): void {
if (typeof map === 'object' && map !== null) {
if ('file' in map && map.file) {
map.file = map.file.replace(/\.(?:content|excerpt)\.md$/, '.md')
}
if ('sources' in map) {
map.sources = map.sources.map((file) =>
file.replace(/\.(?:content|excerpt)\.md$/, '.md')
)
}
}
}
export default function createVuePlugin(options: Options): Plugin {
const vuePlugin = originCreateVuePlugin(options)
const originVuePluginLoad = vuePlugin.load
if (originVuePluginLoad !== undefined) {
vuePlugin.load = async function (id, ssr) {
const { newId, pageData, tag } = transformId(id)
if (!pageData) {
const replaceTag = tag === undefined ? '.md?' : `.md?${tag}&`
let result = await originVuePluginLoad.call(this, newId, ssr)
// noinspection SuspiciousTypeOfGuard
if (typeof result === 'string') {
result = result.replace(/\.(?:content|excerpt)\.md\?/g, replaceTag)
} else if (typeof result === 'object' && result !== null) {
result.code = result.code.replace(
/\.(?:content|excerpt)\.md\?/g,
replaceTag
)
mapSourceMap(result.map)
}
return result
}
}
}
const originVuePluginTransform = vuePlugin.transform
if (originVuePluginTransform !== undefined) {
vuePlugin.transform = async function (code, id, ssr) {
const { newId, pageData, tag } = transformId(id)
if (!pageData) {
const replaceTag = tag === undefined ? '.md?' : `.md?${tag}&`
let result = await originVuePluginTransform.call(this, code, newId, ssr)
// noinspection SuspiciousTypeOfGuard
if (typeof result === 'string') {
result = result.replace(/\.(?:content|excerpt)\.md\?/g, replaceTag)
} else if (typeof result === 'object' && result !== null) {
if (result.code !== undefined) {
result.code = result.code.replace(
/\.(?:content|excerpt)\.md\?/g,
replaceTag
)
}
mapSourceMap(result.map)
}
return result
}
}
}
const originVueHandleHotUpdate = vuePlugin.handleHotUpdate
if (originVueHandleHotUpdate !== undefined) {
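    // Replace the original hot-update handling for these ids and simply return
    // the affected modules unchanged so Vite reloads them.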
vuePlugin.handleHotUpdate = async function (ctx) {
return ctx.modules
}
}
return vuePlugin
}
|
""" Reimplement of Publicated Model Archiectures applicable in NvTK.
This module provides
1. `NINCNN` - Network in Network CNN model Archiecture
2. `DeepSEA` - DeepSEA architecture (Zhou & Troyanskaya, 2015).
3. `Beluga` - DeepSEA architecture used in Expecto (Zhou & Troyanskaya, 2019).
4. `DanQ` - DanQ architecture (Quang & Xie, 2016).
5. `Basset` - Basset architecture (Kelley, 2016).
and supporting methods.
"""
import logging
import numpy as np
import torch
import torch.nn as nn
from ..Modules import BasicModule
__all__ = ['NINCNN', 'DeepSEA', 'Beluga', 'DanQ', 'Basset']
# TODO update Nvwa model
# class Nvwa(BasicModule):
# def __init__(self, sequence_length, n_genomic_features):
# super().__init__()
class NINCNN(BasicModule):
"""
@misc{https://doi.org/10.48550/arxiv.1312.4400,
doi = {10.48550/ARXIV.1312.4400},
url = {https://arxiv.org/abs/1312.4400},
author = {<NAME> and <NAME> and <NAME>},
keywords = {Neural and Evolutionary Computing (cs.NE), Computer Vision and Pattern Recognition (cs.CV), Machine Learning (cs.LG), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Network In Network},
publisher = {arXiv},
year = {2013},
copyright = {arXiv.org perpetual, non-exclusive license}
}
"""
def __init__(self, sequence_length, n_genomic_features):
super().__init__()
self.conv1 = nn.Sequential(
nn.Conv1d(4, 512, 3, 1),
nn.ReLU(),
nn.Conv1d(512, 512, 1, 1),
nn.ReLU(),
nn.Conv1d(512, 512, 1, 1),
nn.ReLU(),
nn.AvgPool1d(3),
)
self.conv2 = nn.Sequential(
nn.Conv1d(512, 768, 3, 1),
nn.ReLU(),
nn.Conv1d(768, 768, 1, 1),
nn.ReLU(),
nn.Conv1d(768, 768, 1, 1),
nn.ReLU(),
nn.AvgPool1d(3),
)
self.conv3 = nn.Sequential(
nn.Conv1d(768, 1024, 3, 1),
nn.ReLU(),
nn.Conv1d(1024, 1024, 1, 1),
nn.ReLU(),
nn.Conv1d(1024, 1024, 1, 1),
nn.ReLU(),
nn.AvgPool1d(3),
)
self.classifier = nn.Conv1d(1024, n_genomic_features, 1, 1)
self.GAP = nn.AdaptiveAvgPool1d(1)
def forward(self, x):
logging.debug(x.shape)
x = self.conv1(x)
logging.debug(x.shape)
x = self.conv2(x)
logging.debug(x.shape)
x = self.conv3(x)
logging.debug(x.shape)
x = self.classifier(x)
logging.debug(x.shape)
x = self.GAP(x)
logging.debug(x.shape)
x = x.view(x.size(0), -1)
logging.debug(x.shape)
return x
class DeepSEA(BasicModule):
"""
DeepSEA architecture (Zhou & Troyanskaya, 2015).
"""
def __init__(self, sequence_length, n_genomic_features):
"""
Parameters
----------
sequence_length : int
n_genomic_features : int
"""
super(DeepSEA, self).__init__()
conv_kernel_size = 8
pool_kernel_size = 4
self.conv_net = nn.Sequential(
nn.Conv1d(4, 320, kernel_size=conv_kernel_size),
nn.ReLU(inplace=True),
nn.MaxPool1d(
kernel_size=pool_kernel_size, stride=pool_kernel_size),
nn.Dropout(p=0.2),
nn.Conv1d(320, 480, kernel_size=conv_kernel_size),
nn.ReLU(inplace=True),
nn.MaxPool1d(
kernel_size=pool_kernel_size, stride=pool_kernel_size),
nn.Dropout(p=0.2),
nn.Conv1d(480, 960, kernel_size=conv_kernel_size),
nn.ReLU(inplace=True),
nn.Dropout(p=0.5))
reduce_by = conv_kernel_size - 1
pool_kernel_size = float(pool_kernel_size)
self.n_channels = int(
np.floor(
(np.floor(
(sequence_length - reduce_by) / pool_kernel_size)
- reduce_by) / pool_kernel_size)
- reduce_by)
self.classifier = nn.Sequential(
nn.Linear(960 * self.n_channels, n_genomic_features),
nn.ReLU(inplace=True),
nn.Linear(n_genomic_features, n_genomic_features),
nn.Sigmoid())
def forward(self, x):
"""Forward propagation of a batch.
"""
out = self.conv_net(x)
reshape_out = out.view(out.size(0), 960 * self.n_channels)
predict = self.classifier(reshape_out)
return predict
def criterion(self):
"""
The criterion the model aims to minimize.
"""
return nn.BCELoss()
def get_optimizer(self, lr):
"""
The optimizer and the parameters with which to initialize the optimizer.
At a later time, we initialize the optimizer by also passing in the model
parameters (`model.parameters()`). We cannot initialize the optimizer
until the model has been initialized.
"""
return (torch.optim.SGD,
{"lr": lr, "weight_decay": 1e-6, "momentum": 0.9})
class LambdaBase(nn.Sequential):
def __init__(self, fn, *args):
super(LambdaBase, self).__init__(*args)
self.lambda_func = fn
def forward_prepare(self, input):
output = []
for module in self._modules.values():
output.append(module(input))
return output if output else input
class Lambda(LambdaBase):
def forward(self, input):
return self.lambda_func(self.forward_prepare(input))
class Beluga(BasicModule):
"""
DeepSEA architecture used in Expecto (Zhou & Troyanskaya, 2019).
"""
def __init__(self, sequence_length, n_genomic_features):
super(Beluga, self).__init__()
conv_kernel_size = 8
pool_kernel_size = 8
n_hiddens = 32
reduce_by = (conv_kernel_size - 1) * 2 # conv twice
self.n_channels = int(
np.floor(
(np.floor(
(sequence_length - reduce_by) / pool_kernel_size)
- reduce_by) / pool_kernel_size)
- reduce_by)
self.model = nn.Sequential(
nn.Sequential(
nn.Conv2d(4,320,(1, conv_kernel_size)),
nn.ReLU(),
nn.Conv2d(320,320,(1, conv_kernel_size)),
nn.ReLU(),
nn.Dropout(0.2),
nn.MaxPool2d((1, pool_kernel_size),(1, pool_kernel_size)),
nn.Conv2d(320,480,(1, conv_kernel_size)),
nn.ReLU(),
nn.Conv2d(480,480,(1, conv_kernel_size)),
nn.ReLU(),
nn.Dropout(0.2),
nn.MaxPool2d((1, pool_kernel_size),(1, pool_kernel_size)),
nn.Conv2d(480,640,(1, conv_kernel_size)),
nn.ReLU(),
nn.Conv2d(640,640,(1, conv_kernel_size)),
nn.ReLU(),
),
nn.Sequential(
nn.Dropout(0.5),
Lambda(lambda x: x.view(x.size(0),-1)),
nn.Sequential(Lambda(lambda x: x.view(1,-1) if 1==len(x.size()) else x ),nn.Linear(640 * self.n_channels, n_hiddens)),
nn.ReLU(),
nn.Sequential(Lambda(lambda x: x.view(1,-1) if 1==len(x.size()) else x ),nn.Linear(n_hiddens, n_genomic_features)),
),
nn.Sigmoid(),
)
def forward(self, x):
x = x.unsqueeze(2) # update 2D sequences
return self.model(x)
class DanQ(nn.Module):
"""
DanQ architecture (Quang & Xie, 2016).
"""
def __init__(self, sequence_length, n_genomic_features):
"""
Parameters
----------
sequence_length : int
Input sequence length
n_genomic_features : int
Total number of features to predict
"""
super(DanQ, self).__init__()
self.nnet = nn.Sequential(
nn.Conv1d(4, 320, kernel_size=26),
nn.ReLU(inplace=True),
nn.MaxPool1d(
kernel_size=13, stride=13),
nn.Dropout(0.2))
self.bdlstm = nn.Sequential(
nn.LSTM(
320, 320, num_layers=1, batch_first=True, bidirectional=True))
        self._n_channels = int(np.floor(
            (sequence_length - 25) / 13))
self.classifier = nn.Sequential(
nn.Dropout(0.5),
nn.Linear(self._n_channels * 640, 925),
nn.ReLU(inplace=True),
nn.Linear(925, n_genomic_features),
nn.Sigmoid())
def forward(self, x):
"""Forward propagation of a batch.
"""
out = self.nnet(x)
reshape_out = out.transpose(0, 1).transpose(0, 2)
out, _ = self.bdlstm(reshape_out)
out = out.transpose(0, 1)
reshape_out = out.contiguous().view(
out.size(0), 640 * self._n_channels)
predict = self.classifier(reshape_out)
return predict
def criterion(self):
return nn.BCELoss()
def get_optimizer(self, lr):
return (torch.optim.RMSprop, {"lr": lr})
class Basset(BasicModule):
'''Basset architecture (Kelley, 2016).
Deep convolutional neural networks for DNA sequence analysis.
The architecture and optimization parameters for the DNaseI-seq compendium analyzed in the paper.
'''
def __init__(self, sequence_length, n_genomic_features):
super(Basset, self).__init__()
self.model = nn.Sequential(
nn.Sequential(
nn.Conv2d(4,300,(1, 19)),
nn.BatchNorm2d(300),
nn.ReLU(),
nn.MaxPool2d((1, 8),(1, 8)),
nn.Conv2d(300,200,(1, 11)),
nn.BatchNorm2d(200),
nn.ReLU(),
nn.MaxPool2d((1, 8),(1, 8)),
nn.Conv2d(200,200,(1, 7)),
nn.BatchNorm2d(200),
nn.ReLU(),
nn.MaxPool2d((1, 8),(1, 8)),
),
nn.Sequential(
Lambda(lambda x: x.view(x.size(0),-1)),
nn.Sequential(Lambda(lambda x: x.view(1,-1) if 1==len(x.size()) else x ),nn.Linear(4800, 1000)),
nn.BatchNorm1d(1000),
nn.Dropout(0.3),
nn.ReLU(),
nn.Linear(1000, 32),
nn.BatchNorm1d(32),
nn.Dropout(0.3),
nn.ReLU(),
nn.Linear(32, n_genomic_features),
),
nn.Sigmoid(),
)
def forward(self, x):
x = x.unsqueeze(2) # update 2D sequences
return self.model(x)
def architecture(self):
d = {'conv_filters1':300,
'conv_filters2':200,
'conv_filters3':200,
'conv_filter_sizes1':19,
'conv_filter_sizes2':11,
'conv_filter_sizes3':7,
'pool_width1':3,
'pool_width2':4,
'pool_width3':4,
'hidden_units1':1000,
'hidden_units2':32,
'hidden_dropouts1':0.3,
'hidden_dropouts2':0.3,
'learning_rate':0.002,
'weight_norm':7,
'momentum':0.98}
return d
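# A minimal smoke-test sketch (assumption: inputs follow the one-hot
# (batch, 4, sequence_length) convention used by these reimplementations;
# the sequence length and feature count below are illustrative only):
if __name__ == "__main__":
    model = DeepSEA(sequence_length=1000, n_genomic_features=919)
    x = torch.rand(2, 4, 1000)  # two pseudo one-hot encoded sequences
    print(model(x).shape)  # expected: torch.Size([2, 919])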
|
Police have arrested a 29-year-old man in Ubon Ratchathani province for allegedly molesting a woman he met during the Songkran celebrations on April 13.
The man was identified as Pipatpong Singsan of Si Sa Ket province. According to police, he confessed and apologised to the woman, whose name has been withheld, as well as to the people of Ubon Ratchathani, for the assault, saying he was very drunk at the time.
He has been charged with public indecency and faces a fine of not more than Bt200,000 and a jail term of up to 10 years.
The alleged molestation was recorded by a Facebook user who posted the video clip on the social networking site.
The clip showed that the suspect was among a group of Songkran revellers in the back of a pickup truck. He touched the woman and then tried to kiss her.
After the clip went viral, police learnt about the offence, hunted down the suspect and arrested him.
The suspect was earlier arrested on a gun-related charge. He was jailed for nine months and fined Bt6,000. |
import { TilemapStrategyInterface } from './tilemapServiceInterface';
import { injectable } from 'inversify';
@injectable()
export class TilemapService implements TilemapStrategyInterface {
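  /**
   * Returns the value of the named custom property on a Tiled object, or
   * `defaultValue` when the object defines no matching property.
   */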
getProperty<T>(obj: Phaser.Types.Tilemaps.TiledObject, name: string, defaultValue: T = null): T {
if (obj.properties) {
for (const property of obj.properties) {
if (property.name === name) {
return property.value as T;
}
}
}
return defaultValue;
}
}
|
package com.baeldung.persistence.dao;
import org.springframework.data.jpa.domain.Specification;
import com.baeldung.persistence.model.User;
import com.baeldung.web.util.SpecSearchCriteria;
import javax.persistence.criteria.CriteriaBuilder;
import javax.persistence.criteria.CriteriaQuery;
import javax.persistence.criteria.Predicate;
import javax.persistence.criteria.Root;
public class UserSpecification implements Specification<User> {
    private final SpecSearchCriteria criteria;
public UserSpecification(final SpecSearchCriteria criteria) {
super();
this.criteria = criteria;
}
public SpecSearchCriteria getCriteria() {
return criteria;
}
@Override
public Predicate toPredicate(final Root<User> root, final CriteriaQuery<?> query, final CriteriaBuilder builder) {
switch (criteria.getOperation()) {
case EQUALITY:
return builder.equal(root.get(criteria.getKey()), criteria.getValue());
case NEGATION:
return builder.notEqual(root.get(criteria.getKey()), criteria.getValue());
case GREATER_THAN:
return builder.greaterThan(root.get(criteria.getKey()), criteria.getValue().toString());
case LESS_THAN:
return builder.lessThan(root.get(criteria.getKey()), criteria.getValue().toString());
case LIKE:
return builder.like(root.get(criteria.getKey()), criteria.getValue().toString());
case STARTS_WITH:
return builder.like(root.get(criteria.getKey()), criteria.getValue() + "%");
case ENDS_WITH:
return builder.like(root.get(criteria.getKey()), "%" + criteria.getValue());
case CONTAINS:
return builder.like(root.get(criteria.getKey()), "%" + criteria.getValue() + "%");
default:
return null;
}
}
}
|
import random
words = ['hello', 'friend', 'world']
def get_word(words):
    return random.choice(words)
def guess_letter(word, current_word, letter):
    if len(letter) != 1 or not 'a' <= letter <= 'z':
        raise ValueError(f'{letter!r} is not a lowercase letter')
hit = False
for i in range(len(word)):
if word[i] == letter and current_word[i] == '*':
hit = True
current_word[i] = word[i]
return hit
def main():
word = list(get_word(words))
current_word = list('*' * len(word))
mistakes = 0
win = False
while mistakes < 5:
        letter = input('Guess a letter:\n')
        try:
            hit = guess_letter(word, current_word, letter)
        except ValueError as err:
            print(err)
            continue
        if hit:
            print('Hit!')
        else:
            mistakes += 1
            print(f'Missed, mistake {mistakes} out of 5.')
print(f'\nThe word: {"".join(current_word)}\n')
if current_word == word:
win = True
break
if win:
print('You won!')
else:
print('You lost!')
if __name__ == "__main__":
main()
|
Busting
Plot summary
Keneely and Farrell are detectives with the LAPD vice squad. Although they show great talent for breaking up prostitution and drug rings, many of these enterprises are protected by crime boss Carl Rizzo, who exerts his influence throughout the city and the department. Evidence is altered before trial, colleagues refuse to help with basic policework, and the detectives are pushed to pursue other cases—mostly stakeouts on gay bars and public lavatories. After personally confronting Rizzo, Keneely and Farrell are brutally beaten while investigating one of his prostitutes. Frustrated but without any legal options, they resort to harassing Rizzo and his establishments, warding off customers and following his family around the city. Soon, Rizzo is rushed to the hospital for a heart condition. Realizing that he also used a medical emergency as an alibi during a previous drug sale, Keneely and Farrell head to the hospital and discover that drugs are trading hands there, hidden in flower pots. Rizzo escapes in an ambulance, while Keneely and Farrell give chase in another. The chase ends when both ambulances crash; although Keneely holds Rizzo at gunpoint, Rizzo laughs that the evidence against him is circumstantial—and, at most, will result in a light sentence.
The film ends on a freeze-frame of Keneely's face as Rizzo dares him to shoot. In a voice-over, Keneely applies to an employment agency, claiming that he doesn't know why he left his job at the LAPD—finally concluding that he "needed a change."
Production
Robert Chartoff wanted to make another film about vice cops after The New Centurions. He and his producing partner Irwin Winkler hired Peter Hyams to write and direct one off the back of the success of his TV movie, Goodnight, My Love. "I’d made a TV movie of the week that people had liked, and people started coming after me," he recalled. "The producers Robert Chartoff and Irwin Winkler came to me and said they wanted to do a film about vice cops. I said okay, and spent about six months researching it."
Hyams later said "like a journalist, I went around to New York, Boston, Chicago and Los Angeles and spoke with hookers, pimps, strippers and cops and DAs. Every episode in the film was true."
Elliott Gould was offered the lead role after Hyams saw him on The Dick Cavett Show.
In February 1973 Ron Leibman was cast as Gould's partner. However, he was soon fired. Hyams says, "It turned out the contrast between Ron and Elliott Gould was not the same contrast between Robert Blake and Elliott, so it was suggested we go with Robert and I listened." Gould says that while he respected Leibman as an actor, it was he who suggested Leibman be replaced. “I just had a sense that I don’t know if he’s the right partner for me."
Filming started in February 1973. The film was shot over 35 days.
"United Artists was a dream studio," said Hyams. "Once they thought the script and the people making the film were good, they really didn't intrude. They were very encouraging, and fabulous for filmmakers."
Reception
The film was not a popular success.
Vincent Canby of The New York Times wrote, "It's not great but it's a cool, intelligent variation on a kind of movie that by this time can be most easily identified by the license numbers on the cars in its chase sequences ... Mr. Hyams, who wrote and directed 'Busting,' brings off something of a feat by making a contemporary cop film that is tough without exploiting the sort of right-wing cynicism that tells us all to go out and buy our own guns." Gene Siskel of the Chicago Tribune gave the film 2 stars out of 4 and wrote that the disillusionment of the two main characters "is hardly made significant to us," as "the script fails to give either Gould or Blake an opportunity to establish their personal history. Here we have two actors who are strongly identified as rebellious types, and yet the script never once permits them to explain their motivation to become police officers." Arthur D. Murphy of Variety called it "a confused, compromised and clumsy concoction of unmitigated vulgarity" and "a total shambles," with "a couple of well-staged vehicle chases" among the film's few bright spots. Kevin Thomas of the Los Angeles Times slammed the film as "an abomination through and through. It may earn the distinction of insulting both the Police Department and the homosexual citizenry of Los Angeles equally." Thomas explained that "the film's humor is burlesque-based rather than satirical, which means that the unthinking and the bigoted are invited to laugh at some of the most oppressed and persecuted segments of an all-too-hypocritical and ignorant society."
Controversy
The film was criticized for homophobia on the grounds of its depictions of gay characters and the attitudes of the lead characters towards them. In an essay for The New York Times, journalist and gay rights activist Arthur Bell condemned the film for derogatory language used by characters to describe homosexuals, as well as a scene in a gay bar that he called "exploitative, unreal, unfunny and ugly" for its presentation of gay stereotypes. Hyams defended this on the ground it was accurate to the milieu depicted. |
Transportation Security Administration employees, who are tasked with handling security at the nation’s airports, face missing their first payday this Friday due to the ongoing government shutdown.
As the shutdown continues into its third week, some screeners have reportedly called in sick in protest in recent days.
During a press conference at Newark Liberty Airport, U.S. Senators Cory Booker and Robert Menendez, D-N.J., and Congressmen Albio Sires and Donald Payne Jr., D-N.J., made an urgent plea to GOP leadership to put an end to the shutdown.
The shutdown began Dec. 22 when President Donald Trump refused to sign legislation funding the rest of the government unless it included more than $5 billion for a southern border wall that he at one time promised Mexico would pay for. Officials announced Monday that Internal Revenue Service employees who process income tax forms would be recalled to work without pay so that taxpayers wouldn’t be delayed in getting refunds, if they are entitled to one.
Trump said he would address the nation Tuesday in a prime time address on immigration. Menendez urged viewers to fact check statements made by Trump.
“This president hasn’t told the truth, there is a moral urgency to end this shutdown.” Booker said.
The effect the shutdown is having on travel delays, as screeners call out sick, is hard to quantify. The Port Authority, which operates the airports, declined to comment about wait times, referring questions to the TSA.
While TSA wait-time screens in Terminal B at Newark Airport showed a 20-minute wait in standard security lanes and a 5-minute wait at PreCheck express screening, some travelers reported having a worse experience.
Among the metro area airports on Sunday, LaGuardia had the highest wait times, at 52 minutes in standard lanes, compared with 37 minutes at Newark Airport, said James O. Gregory, a TSA spokesman, who did not say whether those times were typical for a busy weekend. JFK had the lowest, at 18 minutes, and passengers who used TSA’s PreCheck program, which pre-screens passengers, saw a 5- to 10-minute wait, he said.
Nationwide, the TSA screened approximately 2.22 million passengers on Sunday, which Gregory called “a historically busy day” due to holiday travel.
“We are grateful to the more than 51,000 officers across the country who remain focused on the mission and are respectful to the traveling public as they continue the important work necessary to secure the nation’s transportation systems,” Gregory said in a statement.
A call to Local 2222 of the American Federation of Government Employees union, which represents TSA workers at the local airports, was not immediately returned.
Menendez said he believes it is still safe to fly, but he said the stress of not having a paycheck or way to pay their bills may create a distraction for TSA screeners on the job. |
/**
* adds the given related entity fields to the list of fields to be
* retrieved. Fields must be from entities that are valid related entities
* to the entity being retrieved.
*
* @param fieldNames
* The names of the fields of the related entity to be retrieved.
* @return returns the current instance to conform to the builder pattern.
* @since 1.0.0
*/
public RestParameters relatedFields(final FieldName... fieldNames) {
Validate.notNull(fieldNames, "fieldNames cannot be null");
Validate.isTrue(fieldNames.length > 0, "fieldNames must contain at least one field name");
for (final FieldName fieldName : fieldNames) {
fields.add(fieldName.getQualifiedName());
}
return this;
} |
In an attempt to boost Thailand's prospects in the world marketplace, the country's prime minister, Thaksin Shinawatra, has emphasized re-invigorating the Thai economy. In 2003, the Thaksin government even enlisted Harvard Business School Professor Michael Porter, a competitiveness guru, to advise Thailand on improving its ability to win business in the global economy. Since then, Thailand has made some progress; it now ranks 29th in the International Institute for Management Development (IMD) worldwide competitiveness ranking.
One of the major challenges in building a more competitive economy is creating a significant human-capital advantage. This is key to Thailand's "becoming a knowledge-based nation" -- a stated goal of the Thaksin government. Achieving this goal requires several important initiatives, including increasing employee productivity, improving employee contributions to profitability, and enhancing how workers are selected and developed and how their talents are used. In effect, the task is to significantly raise the level of engagement and commitment among employees who work in Thailand's many public- and private-sector enterprises.
But if the Thaksin government is counting on Thai employees to fuel a vibrant, progressive economy, it should be forewarned that these efforts may remain stuck in neutral, especially if employee engagement remains at its present levels. A recent Gallup Employee Engagement Index survey in Thailand revealed that "engaged" employees -- a company's most committed and productive workers -- make up only 12% of Thailand's employee population.
Not surprisingly, disengagement has a big impact on the Thai economy. Gallup Organization experts estimate that the lower productivity of disengaged workers costs the Thai economy as much as 98.8 billion Thai baht ($2.5 billion U.S.) each year.
With a base of engaged employees to build on, however, Thailand could well enhance its competitiveness -- both within the Association of Southeast Asian Nations region and worldwide. And boosting employee engagement among not-engaged employees is a good place to start.
But how can Thailand create and sustain the momentum for building engagement? Gallup's research into employee behavior in that country offers some key insights.
Managers must play a key role in the change process. Gallup research shows that managers who engage their employees also significantly enhance their workgroups' success. This remains true whether such business success is measured through sales, revenues, or outcomes, such as reduced turnover and enhanced productivity.
But the real agent of change is, and must be, the employee. The term "employee engagement" suggests that employees willingly contribute their energy and ideas, perform at consistently high levels, and are passionately committed to moving their company forward.
For Thai managers, improving employee engagement is a significant challenge. Thai workplaces have historically followed a traditional production model: a "top-down" or hierarchical system in which managers treat employees as tools to carry out assigned work. But as Thailand moves toward a knowledge-based economy, the management emphasis must shift from seeing employees as cogs in a machine to enlisting workers as active contributors.
Set clear expectations. According to Gallup research, "knowing what is expected" is one of an employee's most basic needs. Employees cannot hope to perform their roles acceptably -- let alone at excellence -- if they don't know what they're supposed to be doing. Yet, according to the Gallup survey, only 1 in 5 Thai employees can strongly agree that they know what is expected of them at work.
Demonstrate a sense of caring. Great managers genuinely care about the people with whom they work. They treat employees as individuals and help them see the connections between the work they do every day and the organization's mission and business results. But few Thai employees (1 in 5) strongly agree that there is someone at work who cares about them. The situation is worst for government officers: Only 1 in 10 feel cared for in their workplace.
Help employees realize their potential. About a third of Thai employees feel that they have an opportunity to do what they do best each day at work. This is a solid percentage, considering that few employees know what is expected of them at work or feel that someone at work cares about them.
According to the Employee Engagement Index survey, about half of the employees in Thailand report that they have a "best friend" at work, while more than a third feel that their organization's mission or purpose makes them feel their job is important. These two survey items are strong indicators of the strength of teamwork in the workplace.
High levels of teamwork are perhaps unique to the cultural ethos of Thai workplaces, which include values such as greang jai (which, in broad terms, refers to an attitude in which individuals restrain their own interests or desires in an effort to benefit others, even at the cost of their own discomfort). Strong friendships with coworkers could help mitigate the effects of an unsupportive manager. However, they could also lead to an unspoken but profound "us versus them" mindset that could polarize managers and employees.
Successfully bridging the gap between the workplace realities of today's Thailand and the demands of growing Thailand's economy will require more than the collective leadership acumen and vision of government leaders and their advisors. As Thailand moves forward, corporate leaders, managers, and employees will share the responsibility for making the dreams and aspirations of a nation real. Thailand's success -- or failure -- in moving to a knowledge-based economy will depend on whether its employees can rise to the challenge and become active participants and co-creators in building the country's economic growth.
Results of this survey on perceptions about work life in Thailand are based on a nationally representative sample of 1,600 Thai citizens and permanent residents between the ages of 18 and 65 who are currently employed full time. This Gallup Poll was conducted using in-person interviews in November 2004. For results based on samples of this size, one can say with 95% confidence that the error attributable to sampling and other random effects could be ±3 percentage points. For findings based on sub-groups, the sampling error would be greater.
The ramifications of matching employees to what they naturally do best are profound. So much so that this aspect of work life emerged as one of the elements that best predict the performance of an employee or team. The authors of the New York Times bestseller 12: The Elements of Great Managing explain.
George McGovern, the former U.S. Senator and 1972 Democratic nominee for president, died early Sunday. He was 90.
McGovern had been moved to hospice care last Monday. His death was announced in a statement from his family.
From the New York Times:
To the liberal Democratic faithful, Mr. McGovern remained a standard-bearer well into his old age, writing and lecturing even as his name was routinely invoked by conservatives as synonymous with what they considered the failures of liberal politics… Elected to the Senate in 1962, Mr. McGovern left no special mark in his three terms, but he voted consistently in favor of civil rights and antipoverty bills, was instrumental in developing and expanding food stamp and nutrition programs, and helped lead opposition to the Vietnam War in the Senate… That was the cause he took into the 1972 election, one of the most lopsided in American history. Mr. McGovern carried only Massachusetts and the District of Columbia and won just 17 electoral votes to Nixon’s 520.
RELATED: TPM slideshow of McGovern’s life and career.
Spatial inequalities in life expectancy within postindustrial regions of Europe: a cross-sectional observational study

Objectives To compare spatial inequalities in life expectancy (LE) in West Central Scotland (WCS) with nine other postindustrial European regions.
Design A cross-sectional observational study.
Setting WCS and nine other postindustrial regions across Europe.
Participants Data for WCS and nine other comparably deindustrialised European regions were analysed. Male and female LEs at birth were obtained or calculated for the mid-2000s for 160 districts within selected regions. Districts were stratified into two groups: small (populations of between 141000 and 185000 people) and large (populations between 224000 and 352000). The range and IQR in LE were used to describe within-region disparities.
Results In small districts, the male LE range was widest in WCS and Merseyside, while the IQR was widest in WCS and Northern Ireland. For women, the LE range was widest in WCS, though the IQR was widest in Northern Ireland and Merseyside. In large districts, the range and IQR in LE were widest in WCS and Wallonia for both sexes.
Conclusions Subregional spatial inequalities in LE in WCS are wide compared with other postindustrial mainland European regions, especially for men. Future research could explore the contribution of economic, social and political factors in reducing these inequalities.

Strengths and limitations of this study
▪ This is an extensive international comparison of contemporary, within-region disparities in life expectancy. It compares 100 small districts and 60 large districts across 10 European regions.
▪ Ecological bias was mitigated by selecting regions with a similar history of deindustrialisation and comparing districts with similar-sized populations.
▪ While the approach taken here partly addressed the scale issue associated with the 'modifiable area unit problem', it was unable to resolve the zoning issue.
▪ The study was unable to say whether more heterogeneous populations or higher levels of social segregation were driving these differences, though the limited evidence we have does not support this view.
▪ The analyses are restricted to one period during the mid-to-late 2000s.
▪ The approach was restricted to describing spatial differences in life expectancy; we cannot draw any conclusions on within-region inequalities by socioeconomic status, rurality or ethnicity.

INTRODUCTION
Reducing inequalities in health has been identified as a priority by governments across Europe. 1 2 While inequalities in health are often described using individual characteristics (eg, socioeconomic class), there is also considerable interest in spatial disparities in health, 3 4 despite a lack of research found by Tyner. 5 All countries exhibit subnational variation in mortality and life expectancy (LE). The pattern is observed for countries as diverse as France, 9 Sweden, 10 Australia 11 and Poland. 12 Almost universally, the geographical gap in these health outcomes is wider for men than women. 13 There are some observed differences in within-country dispersion in LE, with the spatial gap being more pronounced for some nations (eg, USA 14 and UK 15) than others (eg, Germany 16 and Poland 12). Regional inequalities in mortality between English regions, for instance, have been found to be severe and persistent over a 40-year period. 17 Differences are also observed in whether spatial inequality in mortality has been narrowing, static or increasing over time. 13 18 Although the findings are dependent on the size of geographies selected for analysis, 19 there is evidence that inequalities between and within English regions have increased over time. 17 20 Deindustrialisation has been proposed as a mechanism to partly explain these spatial inequalities. Across Europe, there is a clear overlap between former coal mining and industrial areas and districts and regions with the poorest health. 7 21 Riva and Curtis 22 found that areas in England with persistently low or deteriorating employment rates (relative to the national average), often located in ex-industrial regions, had the highest rates of mortality and physical morbidity, even after adjusting for migration and individual characteristics of residents. A number of mechanisms (eg, greater poverty, loss of purpose and status and higher levels of substance misuse) provide plausible links between economic dislocation and health outcomes. 23 24 Making spatial comparisons of health within and between geographies is subject to a number of difficulties. Comparing geographies that have been 'clustered' according to some shared characteristics (such as a similar economic and social history) can partly adjust for this and produce more meaningful results. 25 Geographical comparisons are more valid when the spatial units being compared are of a similar population size and where there is less social diversity within them, since the differences between areas will depend on the degree to which the geographical units of analysis are internally diverse or homogeneous. Units of analysis with larger population sizes or more heterogeneity in their composition are less likely to display differences between areas because of the averaging effect of this greater internal diversity. 19 26 Failing to take this into account may result in misleading comparisons. The present study approaches this issue from a Scottish perspective. Scotland's position as the 'sick man' of Europe (characterised by a slower rate of improvement in LE compared with other West European nations since the 1950s, and a consequent relative deterioration in its international position) has been discussed elsewhere. 27 28 Furthermore, the within-region spatial gap in mortality was greater in Scotland than any other region of Britain. 29 A similar 'faltering' in the pace of improvement in mortality and LE has also been noted for West Central Scotland (WCS), the region of Scotland most affected by deindustrialisation in recent decades, relative to other postindustrial regions. 30 Postindustrial regions are extremely important in epidemiological terms as they tend to exhibit the highest rates of mortality in their parent countries. 31 32 A recent study also suggested that WCS was more spatially divided in terms of mortality than other comparable European postindustrial regions, though the authors did not pursue this question in depth. 31 This paper explores this question in a systematic way, to investigate whether spatial disparities in mortality within WCS are large compared with other European regions, taking industrial heritage and differences in population sizes of subregions into account.
METHODS
This study was informed by the authors' involvement in a larger project which aimed to contribute to an understanding of the poor health observed in one postindustrial region, WCS, in the context of other comparable European regions. WCS is a region of 2.1 million people, centred on the City of Glasgow. Nine other regions, highlighted in other recent epidemiological analyses, 30 32 were selected for comparison with WCS. The regions were chosen through consultation with experts on European history on the basis of their shared historic economic dependence on industries such as coal, steel, shipbuilding and textiles, alongside analysis of their subsequent loss of industrial employment over the past 30-40 years. 30 Table 1 presents summary information on the regions selected. Selecting a range of regions from across East and West Europe allowed contrasts to be made between WCS and European areas with different social and political contexts. The inclusion of UK regions meant that WCS could be compared with areas subject to the same set of socioeconomic policies over the past 30-40 years.

Male and female LEs at birth were obtained from relevant statistical agencies (or, where appropriate, calculated) for the mid-2000s, for 160 districts within the 10 selected regions. Ideally, the data collected would cover an identical time frame for every region; this was not possible or practical here because of variation between countries in the availability of the required small-area statistics. All life tables were constructed in the same way, using all deaths within each district and the resident population of each district. The sources of the LE data for each region are given in table S2 (web only table).

In order to reduce the risk of bias due to differing subregional population sizes (the scale problem), we stratified the regions into two strata. Five regions (Swansea and South Wales Coalfields, Northern Ireland, Nord-Pas-de-Calais, Silesia and Merseyside) had subregional (or district) populations of between 141 000 and 185 000 people. These areas were compared with similarly sized geographies in WCS: Community Health Partnership areas (CHPs). i Three regions (the Ruhr, Saxony and Wallonia) had LE data calculated across 45 'large' districts with populations ranging from 224 000 to 352 000: these were compared with similarly sized WCS Nomenclature of Units for Territorial Statistics (NUTS) 3 areas. Data for Northern Moravia and WCS were available for both strata. For four regions (Northern Ireland, Wallonia, Silesia and Nord-Pas-de-Calais), it was necessary to create pseudodistricts to ensure a more even distribution of population across districts. This process took into account contiguous boundaries and, where possible, the character of districts. LE at birth was then calculated for these new areas using the Chiang method (II), 33 using population and mortality data obtained from the relevant national statistical agencies.

i There were 15 CHP areas in WCS prior to April 2010, when the five Glasgow CHPs were merged into three.

Within regions, we then ranked the subregional (district) populations by their LE, separately for men and women and separately for the large and small subregional populations. We then created line graphs for each stratum of regions to show the size and distribution of subregional populations and their corresponding LEs.
Taking each region separately, we then calculated the range in LE and the IQR, accounting for the population sizes of each subregional district, to describe the within-regional disparities.

RESULTS
Regions with small district data (populations between 141 000 and 185 000)
The districts with the highest male LEs (>77 years at birth) were the rural districts in Northern Ireland, plus the more affluent WCS districts of East Renfrewshire and East Dunbartonshire. The lowest male LEs (<70 years at birth) were in Silesia and in areas of WCS (North and East Glasgow). The districts with the highest levels of female LE (>82.5 years at birth) were all located in Nord-Pas-de-Calais, while the districts with the lowest levels of female LE (<78 years at birth) were in WCS (all five Glasgow districts), Merseyside (City and North Liverpool) and parts of the Silesia region (Ruda Slaska-Swietochlowice and Chorzow-Siemianowice Slaskie).

Within regions, the range in male LE was widest for WCS (8.6 years) and Merseyside (5.9 years) and narrowest in Swansea and the South Wales Coalfields (1.6 years) and Northern Moravia (2.7 years). The IQR in LE for men was widest in WCS and Northern Ireland (2.7 and 2.6 years, respectively), followed by Silesia (2.2 years), and was much less pronounced in the other regions. For women, WCS had the widest range in LE (6.5 years) and Northern Moravia the narrowest (1.6 years). The range of LEs observed for Merseyside districts was also high (5.9 years). The IQR in female LE was highest in Northern Ireland (2 years) and Merseyside (1.9 years) and lowest in Northern Moravia (figure 1).

Regions with large district data (populations between 224 000 and 352 000)
The highest male LEs were found in Saxony, Wallonia and the Ruhr, while the lowest were observed in WCS (Glasgow), Wallonia (Mons) and in Northern Moravia. For women, districts with the highest LE were located in Wallonia and Saxony, while the districts with the lowest LE were found within WCS and Northern Moravia. Within regions, the range in male LE across 'large' districts was widest for WCS (5.3 years), followed by Wallonia (4.8 years), with the Ruhr Valley, Saxony and Northern Moravia less polarised. The IQR in LE was much wider in WCS (3.9 years) than in all other regions. For women, the pattern was similar, with the widest range in LE observed for WCS (3.5 years) and Wallonia (2.5 years), with much less disparity evident in the German and Czech regions (figure 2).

DISCUSSION
Similarly deindustrialised regions in Europe, which share similar economic, social and health problems, 30 32 display different patterns in spatial inequalities in LE.

The present study has four important strengths. First, it provides an original comparison of contemporary, international and within-region disparities in LE. Second, its geographical coverage is extensive: more than 100 small districts and 60 large districts, spanning 10 regions across Western and Eastern Europe. Third, it uses a straightforward metric of health outcomes (LE at birth) that is readily understood. Finally, by attempting to ensure that the areas are of a similar size and have a common experience of industrial development and subsequent deindustrialisation, the potential bias arising from comparisons of differently sized populations and the heterogeneity within regions is reduced.

The study also has a number of limitations. A key challenge in any study of this kind is the 'modifiable area unit problem' (MAUP).
As discussed by Openshaw, 34 the spatial units that can be used to describe individual-level data are usually highly modifiable and their boundaries are often decided on an arbitrary basis. There are a large number of different spatial units that could be used to describe the same data, often producing quite different conclusions. There are two components of the MAUP. First, there is a scale problem, with different results being produced depending on the number of spatial units used in the analysis (eg, census tracts, districts, regions). Second, there is a grouping or zoning problem, reflecting different choices about how very small areas are joined together to create areas of a similar size. In this study, the scale problem has been partly addressed by making comparisons of subregional inequalities at two different geographical levels. The similar findings (of greater spatial inequalities in WCS) at both scales give more confidence that the approach adopted is reasonable. However, the zoning problem remains difficult to resolve without access to individual-level data coded to geographic areas, which are currently not available.

It is important to note that the findings may not apply beyond the selection of postindustrial regions shown here. For example, Hoffman et al, 35 who analysed neighbourhood-level differences in mortality for 15 large European cities, found that inequalities were wider for women than for men, and there was no evidence that within-area inequalities varied between cities.

The methods used to compare spatial inequalities (IQR) could also be criticised as not ideal. Other studies 36 have used the slope index of inequality and relative index of inequality to estimate spatial inequalities in mortality. 37 This would undoubtedly allow for more robust analyses. However, constructing these indices would require robust, internationally comparable measures for ranking all the districts by socioeconomic status. Data limitations make this a difficult task. Europe-wide indicators of material and income deprivation are unavailable for small-area geographies. A prototype European Socio-economic Classification 38 has been developed, but comparable small-area data (from national censuses) for all areas are not yet available. Limited measures of housing tenure and car ownership are available, though these may also reflect different cultural patterns between countries rather than deprivation per se (eg, the different role that renting plays in the German housing market 39 ). Some studies have also questioned whether car ownership is a good indicator of deprivation. 40 41 Measures of unemployment might also be challenged as not fully comparable, due to the large-scale diversion of working-age adults into economic inactivity (eg, disability benefits) during the 1990s across many European countries. 42 Exploring options to overcome these methodological challenges might be a useful avenue for future research.

Data restrictions mean we were unable to explore systematically the degree of social segregation or migration within each region. The spatial inequalities observed could simply reflect greater population heterogeneity between districts within each region. However, evidence comparing WCS with the Ruhr and Nord-Pas-de-Calais does not support this hypothesis. 43 44 Nor can we say how spatial inequalities in LE changed within these regions over time, since the analysis is also confined to a single time period.
Lack of individual-level data and common markers of socioeconomic status meant that this study was also confined to a focus on spatial differences in LE. If data had been available, analysis of inequalities by socioeconomic status or other characteristics (eg, rurality and ethnicity) may have led to different conclusions. For example, in Northern Moravia, the gap in male LE between districts was approximately 5 years, 45 but the gap in LE between the highest- and least-educated men has been enumerated at 16.5 years. 46

The more pronounced spatial inequalities in LE in three of the four UK regions, especially WCS, are notable. What factors might help account for this? As reported elsewhere, despite relatively high levels of mean prosperity and lower unemployment, WCS and the other British regions have higher levels of relative poverty, income inequality and single person and lone parent households compared with postindustrial areas of mainland Europe. 32 There is also a more mixed pattern on some other indicators (eg, social capital and educational attainment). 32 It would be appropriate to consider the sociopolitical context to this. Others have contrasted the UK 'path destructive' road to deindustrialisation, characterised by the growth of a low-wage service sector and reduced social protection, with alternative strategies pursued in mainland Europe. 24 47 It has been argued that a more rapid adoption of neoliberal politics by local government in WCS, alongside greater vulnerability to the deleterious impacts of associated economic policies, might provide some basis for explaining the findings for WCS. 24 48

There may be differences between regions in the homogeneity of the populations and the degree to which there is social segregation. It is possible that the greater disparities observed in WCS could be due to greater social segregation rather than larger socioeconomic inequalities (although the likelihood of this is reduced by the same finding being observed at two different sizes of subregional districts). The limited analyses available (comparing spatial segregation in Nord-Pas-de-Calais and Merseyside with WCS) suggest that this cannot provide a wholly adequate explanation for the results shown here. 31 Nor is it clear that stronger within-region migration (from the unhealthiest to the healthiest districts) in WCS can explain these differences. One comparative study of WCS and the Ruhr (1995-2008) suggests that this pattern took place in both regions and, if anything, seemed to be slightly stronger in the Ruhr than in WCS. 43 This view is supported by Popham et al, 49 who argued that selective out-migration is not the only or most important reason for the wide levels of health inequality seen in the region.

Differences in overall population change might provide a partial explanation. Recent long-run analysis of commune-level data for France by Ghosn et al 50 found that population growth was associated with decreases in relative mortality. Between 1982 and 2005, while most of the regions included in our study saw little change in their population, WCS saw a marked decline, while Saxony saw an even larger loss of its population over a shorter time frame. 30 This might explain why inequalities in LE were wider in the Scottish region, but the much narrower inequalities in Saxony suggest that this may not be the whole story.
It may be that in other countries, 'protective' factors such as lower levels of income inequality (Northern Moravia), 51 higher levels of social capital (the Ruhr) 43 or fewer lone parent or single person households (Nord-Pas-de-Calais), 44 or a more managed deindustrialisation process, which included active labour market policies and re-employment in new industrial sectors, 24 might have partly mitigated the health-damaging effects of deindustrialisation, reducing the extent of spatial inequalities in health. However, as yet unexplained region-specific factors are also likely to play a role. Within the UK, Swansea and South Wales have relatively narrow spatial inequalities in health while WCS has some of the widest. In the former case, this may partly reflect the more homogeneous social mix across ex-mining areas/villages, compared with more metropolitan areas.

Differences in lifestyle factors (ie, worse health behaviours in WCS) could also play a role. This argument is more plausible for alcohol, since levels of consumption and alcohol-related harm are high in WCS for both genders compared with the other regions. 32 For smoking and diet, matters are less clear. Female smoking rates are higher in WCS compared with most regions, but male smoking rates are similar across all regions. 32 Dietary indicators suggest that WCS compares poorly with Nord-Pas-de-Calais but is very similar to Merseyside and Northern Ireland. 31 That said, any explanation based on health behaviours alone would be insufficient, as the underlying causes of these health behaviours would remain unexplained.

Finally, environmental factors, such as air pollution and climate, have also been proposed as possible explanations for health inequalities. Could these factors explain the results? Richardson et al 52 found that while exposure to particulate air pollution (PM10), and the risk of some causes of mortality, was higher in low-income European regions, their mapping also revealed a concentration of the worst areas of pollution in East European regions (including Silesia and Northern Moravia). Although vitamin D deficiency (linked to lower levels of sunlight) may be higher in WCS than in some other regions, the detrimental impacts on health are likely to be observed among older people. 53 Decomposition of the excess mortality observed in WCS compared with European regions shows it to be greatest among the working-age population, especially young men and middle-aged women. 30 It therefore seems less plausible that the observed difference in spatial inequalities can be attributed to environmental factors.

CONCLUSIONS
Subregional spatial inequalities in LE in WCS are wide compared with other postindustrial European regions, even after accounting for differences in the population size of the subregional districts. These spatial inequalities are particularly profound for men. By contrast, within-region spatial inequalities in LE were relatively low in the German and Czech regions. These data generally show similar patterns to those for inequalities by individual educational attainment in the parent countries. 54 Outside the UK, wider determinants of health (such as income distribution, positive social capital and family networks) may have acted to protect health in postindustrial regions. Future research could explore the contribution of these wider determinants of health to reducing spatial inequalities in mortality, especially in WCS.
A multi-wavelength search for bulge millisecond pulsars

More than a decade after its discovery, the Fermi GeV excess is still an exciting subject of research. Thus far, an unresolved population of millisecond pulsars (MSPs) in the Galactic bulge shining in gamma rays is the favorite explanation for the excess, but other explanations exist. Data from the Fermi-LAT have been thoroughly studied and, in order to discriminate between the different hypotheses, a multi-wavelength approach is now needed. In a recent study, we demonstrated that if the GeV excess is caused by an MSP population, about a hundred of them could be detectable in X-rays in a region of 6° × 6° about the Galactic Center. The comparison with X-ray data allowed us to conclude that the MSP hypothesis is not excluded, as we found more than 3000 MSP candidates with a conservative approach. In addition, we selected a few hundred promising candidates with good X-ray spectral knowledge and no optical counterpart. In our new study, we additionally exploit ultraviolet and infrared data to exclude candidates. Finally, we compute a relation between the X-ray and radio luminosity of MSPs, aiming to predict the radio luminosity of our candidates, with the ultimate goal of motivating the radio observations needed to detect a pulsation and confirm a pulsar detection.
import functools
import sys
from ..const import TARGET_ALL
from ..exceptions import NodeNotFound
from ..types import TransformationResult
from ..utils.helpers import VariablesGenerator, warn
from ..utils.tree import find, get_node_position, get_parent, insert_at
from ..utils.snippet import snippet
from .. import ast
from .base import BaseNodeTransformer
from typing import Optional
PY38 = sys.version_info >= (3, 8)
if sys.version_info < (3, 8):
class NamedExpr(ast.AST):
pass
class Constant(ast.AST):
pass
else:
NamedExpr = ast.NamedExpr
Constant = ast.Constant
# The standard walrus operator transformer, this one can only transform more
# basic usage of walrus operators in certain if and while statements.
class WalrusTransformer(BaseNodeTransformer):
"""Compiles:
if (x := 1 // 2):
print(0)
elif (x := 5) and x > 2:
print(x)
else:
print(2)
while buf := sock.recv(4096):
print(buf)
To
x = 1 // 2
if x:
print(0)
else:
x = 5
if x > 2:
print(x)
else:
print(2)
while True:
buf = sock.recv(4096)
if not buf:
break
print(buf)
"""
# Although the walrus operator gets patched into astunparse, autopep8
# doesn't (yet) know how to handle walrus operators correctly, so this
# has to TARGET_ALL.
target = TARGET_ALL
def _get_walruses(self, nodes):
"""
Recursively search for walruses that are most likely safe to be moved
outside the current statement.
"""
if not isinstance(nodes, (tuple, list, map)):
nodes = (nodes,)
for node in nodes:
if isinstance(node, NamedExpr):
yield node
if isinstance(node, ast.Compare):
yield from self._get_walruses(node.left)
yield from self._get_walruses(node.comparators)
elif isinstance(node, ast.BoolOp):
yield from self._get_walruses(node.values[0])
elif isinstance(node, ast.UnaryOp):
yield from self._get_walruses(node.operand)
elif isinstance(node, ast.Call):
yield from self._get_walruses(node.args)
yield from self._get_walruses(map(lambda arg : arg.value,
node.keywords))
def _has_walrus(self, nodes) -> bool:
"""
Returns True if self._get_walruses(nodes) is not empty, otherwise
False.
"""
try:
next(iter(self._get_walruses(nodes)))
return True
except StopIteration:
return False
def _invert_expr(self, node: ast.AST) -> ast.AST:
"""
Prepends an AST expression with 'not' or removes an existing 'not'.
"""
if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.Not):
return node.operand
return ast.UnaryOp(op=ast.Not(), operand=node)
def visit_While(self, node: ast.While) -> ast.While:
"""
Compiles:
while data := sock.recv(8192):
print(data)
To
while True:
if not (data := sock.recv(8192)):
break
print(data)
"""
# If the condition contains a walrus operator, move the test into an
# if statement and let the if handler in transform() deal with it.
if not node.orelse and self._has_walrus(node.test):
self._tree_changed = True
# Remove redundant not statements.
n = self._invert_expr(node.test)
node.body.insert(0, ast.If(test=n, body=[ast.Break()], orelse=[]))
node.test = ast.NameConstant(value=True)
return self.generic_visit(node) # type: ignore
def _has_walrus_any(self, node) -> bool:
"""
Checks if any walrus operators are in node without any sanity checks.
"""
try:
next(iter(find(node, ast.NamedExpr)))
return True
except StopIteration:
return False
def visit_If(self, node: ast.If) -> Optional[ast.AST]:
"""
Compiles:
if test1 and (test2 := do_something()):
pass
if test1 and test2:
pass
To
if test1:
if test2 := do_something():
pass
if test1 and test2:
pass
"""
if node.orelse or not isinstance(node.test, ast.BoolOp) or \
not isinstance(node.test.op, ast.And):
return self.generic_visit(node)
# Split and-s into multiple if statements if they contain walruses.
for i, value in enumerate(node.test.values):
if not i or not self._has_walrus_any(value):
continue
# Split the if statement
self._tree_changed = True
new_values = node.test.values[i:]
if i > 1:
node.test.values = node.test.values[:i]
else:
node.test = node.test.values[0]
if len(new_values) > 1:
test = ast.BoolOp(op=ast.And(), values=new_values) # type: ast.AST
else:
test = new_values[0]
new_if = ast.If(test=test, body=node.body, orelse=[])
node.body = [new_if]
break
return self.generic_visit(node)
# This fixes standalone walrus operators (that shouldn't exist in the first
# place).
def visit_Expr(self, node: ast.Expr) -> Optional[ast.AST]:
"""
Compiles:
(a := 1)
To
a = 1
"""
if isinstance(node.value, NamedExpr):
self._tree_changed = True
new_node = ast.Assign(targets=[node.value.target],
value=node.value.value)
return self.generic_visit(new_node)
return self.generic_visit(node)
def _replace_walruses(self, test: ast.AST):
"""
Replaces walrus operators in the current if statement and yields Assign
expressions to add before the if statement.
"""
for walrus in self._get_walruses(test):
target = walrus.target
if isinstance(target, ast.Name):
target = ast.Name(id=target.id, ctx=ast.Load())
parent = get_parent(self._tree, walrus)
if isinstance(parent, ast.keyword):
parent = get_parent(self._tree, parent)
if isinstance(parent, ast.Compare):
if parent.left is walrus:
parent.left = target
else:
comps = parent.comparators
comps[comps.index(walrus)] = target
elif isinstance(parent, ast.BoolOp):
parent.values[0] = target
elif isinstance(parent, ast.UnaryOp):
parent.operand = target
elif isinstance(parent, ast.If):
parent.test = target
elif isinstance(parent, ast.Call):
if walrus in parent.args:
# Use the Load-context copy of the target, as in the other branches.
parent.args[parent.args.index(walrus)] = target
else:
for kw in parent.keywords:
if kw.value is walrus:
kw.value = target
break
else:
raise AssertionError('Failed to find walrus in Call.')
else:
raise NotImplementedError(parent)
yield ast.Assign(targets=[walrus.target], value=walrus.value)
@classmethod
def transform(cls, tree: ast.AST) -> TransformationResult:
self = cls(tree)
self.visit(tree)
# Do if statement transformations here so values can be set outside of
# the statement, if this is done in visit_If weird things happen.
for node in find(tree, ast.If):
try:
position = get_node_position(tree, node)
except (NodeNotFound, ValueError):
warn('If statement outside of body')
continue
for i, assign in enumerate(self._replace_walruses(node.test)):
self._tree_changed = True
position.holder.insert(position.index + i, assign)
return TransformationResult(tree, self._tree_changed, [])
# A CPython-only fallback. This uses an undocumented feature.
@snippet
def walrus_snippet(ctypes_):
let(ctypes)
let(getframe)
import ctypes_ as ctypes
from sys import _getframe as getframe
def _py_backwards_walrus(name, value):
frame = getframe(1)
frame.f_locals[name] = value
ctypes.pythonapi.PyFrame_LocalsToFast(ctypes.py_object(frame),
ctypes.c_int(0))
del frame
return value
# The fallback walrus operator, this can handle more walrus operators,
# however only works on CPython and if the variable used has been defined
# in the same scope.
class FallbackWalrusTransformer(BaseNodeTransformer):
"""Compiles:
def test(e):
l = None
if (l := len(e)) > 50:
raise TypeError(f'Object too long ({l} characters).')
To
def test(e):
l = None
if _py_backwards_walrus('l', len(e)) > 50:
raise TypeError(f'Object too long ({l} characters).')
"""
target = TARGET_ALL
# Convert standalone NamedExprs
def visit_NamedExpr(self, node: NamedExpr) -> ast.Call:
if not self._tree_changed:
self._tree_changed = True
warn('The fallback named expression transformer has been used, '
'the resulting code will only work in CPython (if at all).')
target = node.target
if not isinstance(target, ast.Name):
raise NotImplementedError
call = ast.Call(func=ast.Name(id='_py_backwards_walrus',
ctx=ast.Load()),
args=[Constant(value=target.id), node.value],
keywords=[])
return self.generic_visit(call) # type: ignore
@classmethod
def transform(cls, tree: ast.AST) -> TransformationResult:
res = super().transform(tree)
if res.tree_changed and hasattr(tree, 'body'):
insert_at(0, tree, walrus_snippet.get_body(ctypes_='ctypes'))
return res
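# ---------------------------------------------------------------------------
# Illustrative usage sketch (an editorial addition, not part of the original
# module). It assumes the package-level ``ast`` imported above is
# parse-compatible with the standard library, and that ``TransformationResult``
# exposes ``tree`` and ``tree_changed`` as suggested by transform() above;
# both are assumptions about the surrounding codebase.
#
#     tree = ast.parse('while (chunk := read()):\n    handle(chunk)')
#     result = WalrusTransformer.transform(tree)
#     if result.tree_changed:
#         ...  # unparse result.tree with the project's unparser
# ---------------------------------------------------------------------------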
Yahoo Inc (NASDAQ:YHOO) will pay about $640 million for automated advertising service BrightRoll, beefing up its ability to sell video ads in real-time to marketers.
The acquisition sustains Chief Executive Officer Marissa Mayer's acquisition spree and sharpens the company's focus on video ads, which it hopes can offset declining Internet ad prices and decelerating growth.
Buying BrightRoll, which is profitable and expected to have revenues of more than $100 million this year, will make Yahoo's video advertising platform the largest in the United States, the company said in a statement on Tuesday.
Interactive Automation of COVID-19 Classification through X-Ray Images using Machine Learning

Machine learning has brought many benefits to humankind by embedding technology in daily human life. When the COVID-19 pandemic hit the world in early 2020, mankind was challenged by the sudden emergence of a virus that cost many lives. With the virus spreading fast, it became a challenge for medical experts to keep their environments free of the virus. Scientists and medical experts raced to find a cure and practicable methods to stop the virus from spreading, ranging from lockdowns to standard operating procedures for daily routines. Studies have also shown that geographical factors make it a great challenge for experts to provide medical attention to infected communities in rural areas. Fortunately, with the help of advanced current technology, scientists and medical experts are able to counter these problems. In this study, an experimental model with an accuracy of 87% is used, and it is deployed to a web server via Python and Flask. The accuracy is achieved by adjusting batch sizes and implementing image augmentation using the Keras ImageDataGenerator feature. This project therefore focuses on utilizing machine learning to classify COVID-19 patients through X-ray images on a web server, which could further improve accessibility for those seeking medical attention.

Keywords: COVID-19 (Coronavirus Disease), Machine Learning (ML)

Ashura binti Hasmadi, Department of Computer and Information Sciences, Universiti Teknologi PETRONAS, Seri Iskandar, Malaysia (email: ashura.hasmadi_23960@utp.edu.my)
Mehak Maqbool Memon, Department of Computer and Information Sciences, Universiti Teknologi PETRONAS, Seri Iskandar, Malaysia (email: mehak_19001057@utp.edu.my)
Manzoor Ahmed Hashmani, Department of Computer and Information Sciences, Universiti Teknologi PETRONAS, Seri Iskandar, Malaysia (email: manzoor.hashmani@utp.edu.my)

INTRODUCTION
According to the WHO, COVID-19 is a life-threatening virus that spreads primarily through droplets of saliva or discharge from the nose or mouth when an infected person coughs or sneezes. Besides being deadly, COVID-19 also spreads quickly in the human body, with infections ranging from asymptomatic to symptomatic. This results in hospitals lacking enough labor to care for the many patients needing aid, as well as an increase in front-liners getting infected themselves. As the number of infected people increases, hospitals are not capable of treating many of them, lacking medical equipment, especially in rural areas. In fact, hospitals have become some of the most dangerous places to visit, and patients are advised not to stay any longer than necessary to avoid exposure to the virus. This problem of providing patients with a safer environment for testing makes it a huge challenge for medical teams to detect the virus in patients.

Scientists have come up with various innovative ways of detecting and identifying COVID-19 in the human body. Generally, COVID-19 can be identified from common symptoms such as fever, cough, fatigue, shortness of breath and loss of smell and taste. As a common medical tool to detect COVID-19, doctors regularly use reverse transcription polymerase chain reaction (RT-PCR), which detects genetic material from the COVID-19 virus in respiratory samples or in blood.
However, there is a shortage of RT-PCR kits, especially in developing countries. Machine learning techniques are therefore used as an alternative, detecting markers of COVID-19 infection in a much more economical way.

II. LITERATURE REVIEW
According to Cleverly et al. and Wong et al., chest X-ray images of COVID-19 patients show consolidation or ground-glass opacities forming on parts of the chest. Another article, from Zhou et al., stated that the same form of consolidation appears on lung CT images. It is also important to note that most COVID-19 patients do not develop pneumonia; a pneumonia-infected lung should therefore be distinguishable from a COVID-19-infected lung. Zhou et al. state that in China, CT is the main form of COVID-19 detection in lungs, as it has a reputably high sensitivity for this purpose. However, another article, from Wong et al., suggests that CT scans are a less preferred approach because the scanners are large and inconvenient to decontaminate. Since COVID-19 is a highly infectious virus that can survive on inanimate objects, medical equipment must be cleaned after each use. This is uneconomical and slows down radiology services for COVID-19 detection in lungs. Cleverly et al. also mention that portable radiology is the most advisable method in these times to minimize the risk of COVID-19 infection.

However, a gap in the articles by Cleverly et al. and Wong et al. is that the observation of the radiology images is done by radiologists. This is rather worrying, as there are not enough radiologists in the world to observe every X-ray image of COVID-19 patients, especially in smaller countries. The challenges faced by radiologists in this pandemic era also endanger their health, as they are required to analyze their patients' results. Asymptomatic infection makes it an even greater challenge for doctors and researchers to provide care for their patients, as the identification process gets harder. The issue with COVID-19 is that the virus spreads primarily via droplets from the nose or mouth, and it spreads even more rapidly in areas that are not well ventilated. Hospitals have to follow strict protocols to stop the infection from spreading, which makes it difficult for front-line workers not only to keep themselves safe and sanitized but also to tend to infected patients. Machines have to be regularly sanitized, and face-to-face meetings still need to take place. This poses a huge challenge for CT scanners, despite their high sensitivity in detecting infection in radiology. To accommodate the scarcity of clean, expensive CT scanners while keeping the environment safe from the spread of the virus, researchers have proposed mobile X-rays as an alternative.

From these studies we can see that, with the help of portable devices, the detection of COVID-19 can be executed faster without risking more lives among front-line workers. With the help of portable technology, medical teams have less to worry about in terms of time, as the virus infects, and can kill, at a concerning rate. The theme of this subtopic is that all of the research and articles agree on using deep learning for the detection of COVID-19 in lung radiology. Research has been done on detecting COVID-19 via deep learning on chest CT images and blood tests.
The machine learning application of blood tests to the detection of COVID-19 in the human body has a methodology distinct from X-ray and CT image detection. The application of machine learning to blood tests mainly uses logistic regression and the random forest classification algorithm. This differs for image-based datasets such as CT-scanned images and X-ray images. Deep learning methods such as VGG-19, COVID-Net, ResNet50+SVM and DarkCovidNet have been developed by researchers to detect COVID-19 infection in image-based datasets. As for X-ray detection, many researchers have trained and tested their models using a collected dataset shared by Dr. Joseph Cohen on his GitHub. Researchers have used many different combinations of datasets throughout the machine learning process for detecting COVID-19 in lung X-ray images, since a lack of data could bias the research. A study by Ozturk et al. was done with Dr. J.P. Cohen's open-access COVID-19 chest X-ray images, using a dataset whose non-COVID images belonged only to children of 1 to 5 years old. Another study, by Sethy et al., used Dr. J.P. Cohen's dataset in combination with other sources and applied the data to 11 pre-trained CNN models.

III. METHODOLOGY
The study has its model built and trained in a Google Colab terminal, with its dataset obtained from Kaggle. After building the model, the model is saved in an h5 file and transported to Visual Studio Code, where the web application interface is implemented with Flask. The image database includes X-rays of Normal, COVID-19 and Viral Pneumonia infected lungs. As of now, there are 3616 images of COVID-19 X-rays, 10.2k images of Normal X-rays and 1345 images of Viral Pneumonia X-rays. All images are in the Portable Network Graphics (PNG) file format and have a resolution of 299x299 pixels. For this research, only Normal and COVID-19 X-ray images are used. The database is constantly updated and is contributed to and filtered by researchers from various regions. The figure below shows a few examples of the chest X-ray images in the dataset.

B. Building and training the model
Google Colab is a free-to-use online terminal for Python programming. It is widely used by students, data scientists and AI researchers for creating machine learning programs. Google Colab offers a wide range of pre-installed libraries and allows users to save their projects in the cloud. It also provides a free GPU and TPU, with certain limitations for free accounts. The Kaggle dataset is imported into the Google Colab interface, unzipped, and placed into directories. The images are then separated into a dataset of images, x_dataset, and a dataset of labels, y_dataset. Since the initial image size in the dataset is 299x299 pixels, the images are resized to 70x70 pixels. This is because the GPU cannot accommodate large images together with large batch sizes, limiting the chances of a higher accuracy. The model uses the train-test-split method, splitting the data into 80% training data and 20% testing data. For this project, the model is built using a Convolutional Neural Network (CNN). A CNN works by feeding the pixels of a 2-dimensional image into neurons in the input layer; the pixels are multiplied by weights and their sum is sent as input to the neurons in the hidden layer.
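As a concrete illustration of the data preparation just described, the following is a minimal sketch that loads the images, resizes them to 70x70 pixels and performs the 80/20 train-test split. The directory layout, file handling and label encoding are illustrative assumptions, not the authors' exact code.

import os
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split

IMG_SIZE = 70  # resized down from the original 299x299, as described above
x_dataset, y_dataset = [], []

# Assumed directory layout: one folder per class after unzipping the dataset.
for label, folder in enumerate(['Normal', 'COVID']):
    for name in os.listdir(folder):
        img = Image.open(os.path.join(folder, name)).convert('RGB')
        x_dataset.append(np.asarray(img.resize((IMG_SIZE, IMG_SIZE))))
        y_dataset.append(label)  # 0 = Normal, 1 = COVID-19 (assumed encoding)

x_dataset = np.asarray(x_dataset, dtype='float32')
y_dataset = np.asarray(y_dataset)

# 80% training / 20% testing split, as used in the paper.
x_train, x_test, y_train, y_test = train_test_split(
    x_dataset, y_dataset, test_size=0.2, random_state=42)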
For this project, the model uses a simple neural network developed from scratch, with 3 convolutional layers and 2 dropout layers arranged sequentially. Sequential here means that the layers of the model are arranged in a sequence. After building the model, image augmentation is used to refine the model's accuracy. ImageDataGenerator is used to augment images without affecting the original data. It creates new data so that the model can be trained with a greater dataset, providing a better validation accuracy. ImageDataGenerator is applied to both training and testing data, passing the data to the training and testing directories.

To ensure a high accuracy for the model, epoch callbacks are necessary. An epoch is one complete pass through the training data. A callback is a Keras feature that can, among other things, stop the model training after a certain accuracy or loss value is obtained and save checkpoints for every epoch of the model. This feature eases the search for the highest accuracy during training, saves time, and helps avoid overfitting. Overfitting happens when the training accuracy is higher than the testing accuracy, and it may cause the model to predict wrongly despite the high accuracy; in this case, the model is unfamiliar with new "testing" images, as it is too accustomed to the trained images. Hence, a callback is used in this project. After the training process, the final model is saved, and the testing loss and testing accuracy are evaluated. (A sketch of this model, augmentation and callback setup is given below.)

C. Visual Studio Code and Flask
To produce a web server that can accept an uploaded image and produce an output from the saved trained model, Visual Studio Code is used to program the Python, HTML and CSS environments for the web server. The web framework used is Flask, as Flask is written in Python and hence easier to configure along with the model. First, a Python file is created. The model is loaded in the app.py file and used to predict on the uploaded image. As for the website's user interface, it is designed simply: the user is required to upload an X-ray image, and the predicted outcome is displayed on the front of the web page itself. The code combines both HTML and CSS in one HTML file. The web framework is served on localhost by running Flask in the terminal; Flask then provides a link that redirects to the local web server.

IV. RESULTS AND DISCUSSION
A. Model building: Image Augmentation
As we all know, there are various methods for refining and optimizing a model. One of the ways is to enlarge the image dataset with more augmented images. However, in some circumstances, image augmentation might not be the answer to a perfectly fitted model. For instance, in this study, COVID-19 is detected in X-ray images by the amount of grey consolidation in the image. If a Normal lung X-ray is augmented at a certain zoom scale, the output can produce what seems to be a consolidated image. Hence, this issue increases loss values. Therefore, to achieve good results when expanding the data via image augmentation, specific image augmentation choices are made to match the way COVID-19 is detected in X-ray images. Changes are made between the amount of zoom range and the brightness range.
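The following is a minimal sketch of the kind of Keras pipeline described above: a small sequential CNN with three convolutional layers and two dropout layers, ImageDataGenerator-based brightness augmentation, and a callback that stops training once a target validation accuracy is reached. It continues from the data-preparation sketch earlier; the exact layer sizes, brightness range and stopping threshold are illustrative assumptions, not the authors' exact configuration.

from tensorflow.keras import layers, models, callbacks
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Small sequential CNN: 3 convolutional layers + 2 dropout layers (sizes assumed).
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(70, 70, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(0.25),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.Flatten(),
    layers.Dropout(0.5),
    layers.Dense(1, activation='sigmoid'),  # binary output: COVID-19 vs Normal
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# Brightness-based augmentation (range values are assumptions), applied to the
# training data; the test generator only rescales.
train_gen = ImageDataGenerator(rescale=1.0 / 255, brightness_range=[0.8, 1.2])
test_gen = ImageDataGenerator(rescale=1.0 / 255)

# Custom callback: stop training once validation accuracy reaches a threshold.
class StopAtAccuracy(callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        if logs and logs.get('val_accuracy', 0) >= 0.87:  # assumed threshold
            self.model.stop_training = True

# x_train/x_test and y_train/y_test come from the train-test split above.
history = model.fit(
    train_gen.flow(x_train, y_train, batch_size=128),
    validation_data=test_gen.flow(x_test, y_test, batch_size=128),
    epochs=50,
    callbacks=[StopAtAccuracy(),
               callbacks.ModelCheckpoint('model.h5', save_best_only=True)],
)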
The reason brightness range is chosen for image augmentation rather than zoom is that brightness can stretch the grayscale X-ray images across many different scales of brightness without creating confusing training or testing data for consolidation detection, unlike zooming. The resulting model accuracy and loss, with 87% validation accuracy, are as shown below.

B. Batch sizes and image sizes
Both batch size and image size are important for obtaining an optimized accuracy value and a well-fitted model. The average image size in the dataset is 299x299 pixels. However, the free GPU provided in Google Colab cannot withstand training and testing on huge images, so sessions crash. Therefore, to proceed with this study, a smaller image size of 70x70 pixels is used. Batch size is also crucial for obtaining greater accuracy, as it is a hyperparameter that refers to the number of training samples processed in one iteration. In this study, 3 batch sizes are compared: 64, 128 and 256. A batch size of 128 and an image size of 70x70 pixels are chosen, together with the configured image augmentation, as they bear a satisfying result of 85% validation accuracy. As epochs are used with callbacks in this study, no adjustment of the number of epochs is necessary to obtain the desired value: training stops once the desired loss and accuracy values are achieved, preventing overfitting issues.

C. Applying to web server
After applying the model in Visual Studio Code and running it in Flask, the web server requires the user to upload an image, and the model provides a predicted result as shown below. (A minimal sketch of such a server is given after the conclusion.)

V. CONCLUSION AND RECOMMENDATION
In this study, the presented idea of using machine learning to classify COVID-19 patients through X-ray images proves to be effective and beneficial for the future of humanity in facing the ever-evolving virus and pandemic. As the results are analyzed, certain methods are identified that specifically suit COVID-19 detection, such as the limitations to image augmentation. This shows that some methods normally used for model optimization are not suitable for X-ray image detection. Although image sizes could not be experimented with using this dataset, a comparison between batch sizes was possible. The comparison made in this study was aimed at obtaining a simple CNN model with a satisfying accuracy that could be implemented on a web server. The local web server also serves this study well, as the study emphasizes the need for portability, safe operating procedures during the pandemic and ease of detection for radiologists. The simple user interface design offers straightforwardness and ease of access to users. The study is limited to COVID-19 and Normal X-ray images; future work can expand to detecting viral pneumonia. Moreover, the study can be improved by moving the web server to an online service, or to a private web server for organizations.
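The web server described in the paper could look roughly like the following minimal Flask sketch. Only the general flow (upload an image, run the saved h5 model, display the prediction) comes from the paper; the route, inline template, preprocessing steps and class-label mapping are illustrative assumptions.

import numpy as np
from flask import Flask, request, render_template_string
from tensorflow.keras.models import load_model
from PIL import Image

app = Flask(__name__)
model = load_model('model.h5')  # the saved model from training

# Minimal inline page; the paper uses a single separate HTML/CSS file instead.
PAGE = """
<h1>COVID-19 X-ray classifier</h1>
<form method="post" enctype="multipart/form-data">
  <input type="file" name="xray"><input type="submit" value="Predict">
</form>
<p>{{ result }}</p>
"""

@app.route('/', methods=['GET', 'POST'])
def index():
    result = ''
    if request.method == 'POST':
        # Resize to the 70x70 input size used during training.
        img = Image.open(request.files['xray']).convert('RGB').resize((70, 70))
        x = np.asarray(img, dtype='float32')[None, ...] / 255.0
        prob = float(model.predict(x)[0][0])
        # Assumes label 1 = COVID-19, matching the earlier sketch's encoding.
        result = 'COVID-19' if prob >= 0.5 else 'Normal'
    return render_template_string(PAGE, result=result)

if __name__ == '__main__':
    app.run()  # serves on localhost, as described in the paper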
A control system for the efficient operation of Bulk Air Coolers on a mine The mining sector is a vital contributor to the economy of South Africa. This sector, however, consumes 15% of the country's electrical energy. Deep mine operations require ventilation and cooling (VC) systems, which can account for up to 25% of the mine's electricity cost. Refrigeration systems provide the cold water and air needed by the VC systems to mine at depths of over 2 km. Electricity cost savings on these refrigeration systems can be achieved by using time-dependent operating schedules. Peak-time electricity usage especially needs to be minimised to maximise these cost savings. The focus of this study was the development of a Bulk Air Cooler (BAC) controller, due to the lack of a controller to regulate these systems in the mining industry. This controller enables equipment to adapt dynamically to environmental changes by monitoring the underground temperature and adhering to input boundaries. The BAC controller was implemented on two sites, to control pumps, chillers and fans. A combined daily peak-time usage reduction of 4.3 MW was achieved on the two sites. This translates to an annual cost saving of R831 973. There is also a clear need to reduce electricity usage during the Eskom peak period. The BAC controller was therefore designed to monitor and control mine refrigeration machines. Equipment can thus be switched off during peak periods, provided the environmental parameters comply with safety regulations.
/*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2, or (at your option)
* any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
#ifndef usbvideo_h
#define usbvideo_h
#include <linux/videodev.h>
#include <media/v4l2-common.h>
#include <linux/usb.h>
#include <linux/mutex.h>
/* Most helpful debugging aid */
#define assert(expr) ((void) ((expr) ? 0 : (err("assert failed at line %d",__LINE__))))
#define USBVIDEO_REPORT_STATS 1 /* Set to 0 to block statistics on close */
/* Bit flags (options) */
#define FLAGS_RETRY_VIDIOCSYNC (1 << 0)
#define FLAGS_MONOCHROME (1 << 1)
#define FLAGS_DISPLAY_HINTS (1 << 2)
#define FLAGS_OVERLAY_STATS (1 << 3)
#define FLAGS_FORCE_TESTPATTERN (1 << 4)
#define FLAGS_SEPARATE_FRAMES (1 << 5)
#define FLAGS_CLEAN_FRAMES (1 << 6)
#define FLAGS_NO_DECODING (1 << 7)
/* Bit flags for frames (apply to the frame where they are specified) */
#define USBVIDEO_FRAME_FLAG_SOFTWARE_CONTRAST (1 << 0)
/* Camera capabilities (maximum) */
#define CAMERA_URB_FRAMES 32
#define CAMERA_MAX_ISO_PACKET 1023 /* 1022 actually sent by camera */
#define FRAMES_PER_DESC (CAMERA_URB_FRAMES)
#define FRAME_SIZE_PER_DESC (CAMERA_MAX_ISO_PACKET)
/* This macro restricts an int variable to an inclusive range */
#define RESTRICT_TO_RANGE(v,mi,ma) { if ((v) < (mi)) (v) = (mi); else if ((v) > (ma)) (v) = (ma); }
#define V4L_BYTES_PER_PIXEL 3 /* Because we produce RGB24 */
/*
* Use this macro to construct constants for different video sizes.
* We have to deal with different video sizes that have to be
* configured in the device or compared against when we receive
* a data. Normally one would define a bunch of VIDEOSIZE_x_by_y
* #defines and that's the end of story. However this solution
* does not allow to convert between real pixel sizes and the
* constant (integer) value that may be used to tag a frame or
* whatever. The set of macros below constructs videosize constants
* from the pixel size and allows to reconstruct the pixel size
* from the combined value later.
*/
#define VIDEOSIZE(x,y) (((x) & 0xFFFFL) | (((y) & 0xFFFFL) << 16))
#define VIDEOSIZE_X(vs) ((vs) & 0xFFFFL)
#define VIDEOSIZE_Y(vs) (((vs) >> 16) & 0xFFFFL)
typedef unsigned long videosize_t;
/*
* This macro checks if the camera is still operational. The 'uvd'
* pointer must be valid, uvd->dev must be valid, we are not
* removing the device and the device has not erred on us.
*/
#define CAMERA_IS_OPERATIONAL(uvd) (\
(uvd != NULL) && \
((uvd)->dev != NULL) && \
((uvd)->last_error == 0) && \
(!(uvd)->remove_pending))
/*
* We use macros to do YUV -> RGB conversion because this is
* very important for speed and totally unimportant for size.
*
* YUV -> RGB Conversion
* ---------------------
*
* B = 1.164*(Y-16) + 2.018*(V-128)
* G = 1.164*(Y-16) - 0.813*(U-128) - 0.391*(V-128)
* R = 1.164*(Y-16) + 1.596*(U-128)
*
* If you fancy integer arithmetics (as you should), hear this:
*
* 65536*B = 76284*(Y-16) + 132252*(V-128)
* 65536*G = 76284*(Y-16) - 53281*(U-128) - 25625*(V-128)
* 65536*R = 76284*(Y-16) + 104595*(U-128)
*
* Make sure the output values are within [0..255] range.
*/
#define LIMIT_RGB(x) (((x) < 0) ? 0 : (((x) > 255) ? 255 : (x)))
#define YUV_TO_RGB_BY_THE_BOOK(my,mu,mv,mr,mg,mb) { \
int mm_y, mm_yc, mm_u, mm_v, mm_r, mm_g, mm_b; \
mm_y = (my) - 16; \
mm_u = (mu) - 128; \
mm_v = (mv) - 128; \
mm_yc= mm_y * 76284; \
mm_b = (mm_yc + 132252*mm_v ) >> 16; \
mm_g = (mm_yc - 53281*mm_u - 25625*mm_v ) >> 16; \
mm_r = (mm_yc + 104595*mm_u ) >> 16; \
mb = LIMIT_RGB(mm_b); \
mg = LIMIT_RGB(mm_g); \
mr = LIMIT_RGB(mm_r); \
}
#define RING_QUEUE_SIZE (128*1024) /* Must be a power of 2 */
#define RING_QUEUE_ADVANCE_INDEX(rq,ind,n) (rq)->ind = ((rq)->ind + (n)) & ((rq)->length-1)
#define RING_QUEUE_DEQUEUE_BYTES(rq,n) RING_QUEUE_ADVANCE_INDEX(rq,ri,n)
#define RING_QUEUE_PEEK(rq,ofs) ((rq)->queue[((ofs) + (rq)->ri) & ((rq)->length-1)])
struct RingQueue {
unsigned char *queue; /* Data from the Isoc data pump */
int length; /* How many bytes allocated for the queue */
int wi; /* That's where we write */
int ri; /* Read from here until you hit write index */
wait_queue_head_t wqh; /* Processes waiting */
};
enum ScanState {
ScanState_Scanning, /* Scanning for header */
ScanState_Lines /* Parsing lines */
};
/* Completion states of the data parser */
enum ParseState {
scan_Continue, /* Just parse next item */
scan_NextFrame, /* Frame done, send it to V4L */
scan_Out, /* Not enough data for frame */
scan_EndParse /* End parsing */
};
enum FrameState {
FrameState_Unused, /* Unused (no MCAPTURE) */
FrameState_Ready, /* Ready to start grabbing */
FrameState_Grabbing, /* In the process of being grabbed into */
FrameState_Done, /* Finished grabbing, but not been synced yet */
FrameState_Done_Hold, /* Are syncing or reading */
FrameState_Error, /* Something bad happened while processing */
};
/*
* Some frames may contain only even or odd lines. This type
* specifies what type of deinterlacing is required.
*/
enum Deinterlace {
Deinterlace_None=0,
Deinterlace_FillOddLines,
Deinterlace_FillEvenLines
};
#define USBVIDEO_NUMFRAMES 2 /* How many frames we work with */
#define USBVIDEO_NUMSBUF 2 /* How many URBs linked in a ring */
/* This structure represents one Isoc request - URB and buffer */
struct usbvideo_sbuf {
char *data;
struct urb *urb;
};
struct usbvideo_frame {
char *data; /* Frame buffer */
unsigned long header; /* Significant bits from the header */
videosize_t canvas; /* The canvas (max. image) allocated */
videosize_t request; /* That's what the application asked for */
unsigned short palette; /* The desired format */
enum FrameState frameState;/* State of grabbing */
enum ScanState scanstate; /* State of scanning */
enum Deinterlace deinterlace;
int flags; /* USBVIDEO_FRAME_FLAG_xxx bit flags */
int curline; /* Line of frame we're working on */
long seqRead_Length; /* Raw data length of frame */
long seqRead_Index; /* Amount of data that has been already read */
void *user; /* Additional data that user may need */
};
/* Statistics that can be overlaid on screen */
struct usbvideo_statistics {
unsigned long frame_num; /* Sequential number of the frame */
unsigned long urb_count; /* How many URBs we received so far */
unsigned long urb_length; /* Length of last URB */
unsigned long data_count; /* How many bytes we received */
unsigned long header_count; /* How many frame headers we found */
unsigned long iso_skip_count; /* How many empty ISO packets received */
unsigned long iso_err_count; /* How many bad ISO packets received */
};
struct usbvideo;
struct uvd {
struct video_device vdev; /* Must be the first field! */
struct usb_device *dev;
struct usbvideo *handle; /* Points back to the struct usbvideo */
void *user_data; /* Camera-dependent data */
int user_size; /* Size of that camera-dependent data */
int debug; /* Debug level for usbvideo */
unsigned char iface; /* Video interface number */
unsigned char video_endp;
unsigned char ifaceAltActive;
unsigned char ifaceAltInactive; /* Alt settings */
unsigned long flags; /* FLAGS_USBVIDEO_xxx */
unsigned long paletteBits; /* Which palettes we accept? */
unsigned short defaultPalette; /* What palette to use for read() */
struct mutex lock;
int user; /* user count for exclusive use */
videosize_t videosize; /* Current setting */
videosize_t canvas; /* This is the width,height of the V4L canvas */
int max_frame_size; /* Bytes in one video frame */
int uvd_used; /* Is this structure in use? */
int streaming; /* Are we streaming Isochronous? */
int grabbing; /* Are we grabbing? */
int settingsAdjusted; /* Have we adjusted contrast etc.? */
int last_error; /* What calamity struck us? */
char *fbuf; /* Videodev buffer area */
int fbuf_size; /* Videodev buffer size */
int curframe;
int iso_packet_len; /* Videomode-dependent, saves bus bandwidth */
struct RingQueue dp; /* Isoc data pump */
struct usbvideo_frame frame[USBVIDEO_NUMFRAMES];
struct usbvideo_sbuf sbuf[USBVIDEO_NUMSBUF];
volatile int remove_pending; /* If set then about to exit */
struct video_picture vpic, vpic_old; /* Picture settings */
struct video_capability vcap; /* Video capabilities */
struct video_channel vchan; /* May be used for tuner support */
struct usbvideo_statistics stats;
char videoName[32]; /* Holds name like "video7" */
};
/*
* usbvideo callbacks (virtual methods). They are set when usbvideo
* services are registered. All of these default to NULL, except those
* that default to usbvideo-provided methods.
*/
struct usbvideo_cb {
int (*probe)(struct usb_interface *, const struct usb_device_id *);
void (*userFree)(struct uvd *);
void (*disconnect)(struct usb_interface *);
int (*setupOnOpen)(struct uvd *);
void (*videoStart)(struct uvd *);
void (*videoStop)(struct uvd *);
void (*processData)(struct uvd *, struct usbvideo_frame *);
void (*postProcess)(struct uvd *, struct usbvideo_frame *);
void (*adjustPicture)(struct uvd *);
int (*getFPS)(struct uvd *);
int (*overlayHook)(struct uvd *, struct usbvideo_frame *);
int (*getFrame)(struct uvd *, int);
int (*startDataPump)(struct uvd *uvd);
void (*stopDataPump)(struct uvd *uvd);
int (*setVideoMode)(struct uvd *uvd, struct video_window *vw);
};
struct usbvideo {
int num_cameras; /* As allocated */
struct usb_driver usbdrv; /* Interface to the USB stack */
char drvName[80]; /* Driver name */
struct mutex lock; /* Mutex protecting camera structures */
struct usbvideo_cb cb; /* Table of callbacks (virtual methods) */
struct video_device vdt; /* Video device template */
struct uvd *cam; /* Array of camera structures */
struct module *md_module; /* Minidriver module */
};
/*
* This macro retrieves callback address from the struct uvd object.
* No validity checks are done here, so be sure to check the
* callback beforehand with VALID_CALLBACK.
*/
#define GET_CALLBACK(uvd,cbName) ((uvd)->handle->cb.cbName)
/*
* This macro returns either callback pointer or NULL. This is safe
* macro, meaning that most of components of data structures involved
* may be NULL - this only results in NULL being returned. You may
* wish to use this macro to make sure that the callback is callable.
* However keep in mind that those checks take time.
*/
#define VALID_CALLBACK(uvd,cbName) ((((uvd) != NULL) && \
((uvd)->handle != NULL)) ? GET_CALLBACK(uvd,cbName) : NULL)
int RingQueue_Dequeue(struct RingQueue *rq, unsigned char *dst, int len);
int RingQueue_Enqueue(struct RingQueue *rq, const unsigned char *cdata, int n);
void RingQueue_WakeUpInterruptible(struct RingQueue *rq);
void RingQueue_Flush(struct RingQueue *rq);
static inline int RingQueue_GetLength(const struct RingQueue *rq)
{
return (rq->wi - rq->ri + rq->length) & (rq->length-1);
}
static inline int RingQueue_GetFreeSpace(const struct RingQueue *rq)
{
return rq->length - RingQueue_GetLength(rq);
}
void usbvideo_DrawLine(
struct usbvideo_frame *frame,
int x1, int y1,
int x2, int y2,
unsigned char cr, unsigned char cg, unsigned char cb);
void usbvideo_HexDump(const unsigned char *data, int len);
void usbvideo_SayAndWait(const char *what);
void usbvideo_TestPattern(struct uvd *uvd, int fullframe, int pmode);
/* Memory allocation routines */
unsigned long usbvideo_kvirt_to_pa(unsigned long adr);
int usbvideo_register(
struct usbvideo **pCams,
const int num_cams,
const int num_extra,
const char *driverName,
const struct usbvideo_cb *cbTable,
struct module *md,
const struct usb_device_id *id_table);
struct uvd *usbvideo_AllocateDevice(struct usbvideo *cams);
int usbvideo_RegisterVideoDevice(struct uvd *uvd);
void usbvideo_Deregister(struct usbvideo **uvt);
int usbvideo_v4l_initialize(struct video_device *dev);
void usbvideo_DeinterlaceFrame(struct uvd *uvd, struct usbvideo_frame *frame);
/*
* This code performs bounds checking - use it when working with
* new formats, or else you may get oopses all over the place.
* If pixel falls out of bounds then it gets shoved back (as close
* to place of offence as possible) and is painted bright red.
*
* There are two important concepts: the frame width/height and the
* V4L canvas width/height. The former is the area requested by
* the application for this very frame. The latter is the largest
* possible frame that we can serve (we advertise that via V4L ioctl).
* The frame data is expected to be formatted as lines of length
* VIDEOSIZE_X(fr->request), with VIDEOSIZE_Y(fr->request) lines in total.
*/
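/*
 * Worked example (illustrative): for a requested frame of 160x120 with
 * V4L_BYTES_PER_PIXEL == 3 (an assumption for RGB24), the pixel at
 * (ix = 10, iy = 5) starts at byte offset 3 * (5 * 160 + 10) = 2430
 * within fr->data.
 */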
static inline void RGB24_PUTPIXEL(
struct usbvideo_frame *fr,
int ix, int iy,
unsigned char vr,
unsigned char vg,
unsigned char vb)
{
register unsigned char *pf;
int limiter = 0, mx, my;
mx = ix;
my = iy;
if (mx < 0) {
mx=0;
limiter++;
} else if (mx >= VIDEOSIZE_X((fr)->request)) {
mx= VIDEOSIZE_X((fr)->request) - 1;
limiter++;
}
if (my < 0) {
my = 0;
limiter++;
} else if (my >= VIDEOSIZE_Y((fr)->request)) {
my = VIDEOSIZE_Y((fr)->request) - 1;
limiter++;
}
/* Use the clamped coordinates (mx, my), not the raw (ix, iy), so an
 * out-of-range pixel can never be written outside the frame buffer. */
pf = (fr)->data + V4L_BYTES_PER_PIXEL*(my*VIDEOSIZE_X((fr)->request) + mx);
if (limiter) {
*pf++ = 0;
*pf++ = 0;
*pf++ = 0xFF;
} else {
*pf++ = (vb);
*pf++ = (vg);
*pf++ = (vr);
}
}
#endif /* usbvideo_h */
|
""" Tests the broker service registry. """
import json
import threading
from nose.plugins.attrib import attr
from nose.tools import nottest
from dxlclient.test.base_test import BaseClientTest
from dxlclient import ErrorResponse, Request, Response
from dxlclient import RequestCallback, ServiceRegistrationInfo, UuidGenerator
# pylint: disable=missing-docstring
@attr('system')
class BrokerServiceRegistryTest(BaseClientTest):
DXL_SERVICE_UNAVAILABLE_ERROR_CODE = 0x80000001
DXL_SERVICE_UNAVAILABLE_ERROR_MESSAGE = \
'unable to locate service for request'
DXL_SERVICE_REGISTRY_QUERY_TOPIC = '/mcafee/service/dxl/svcregistry/query'
MAX_WAIT = 5 * 60
RESPONSE_WAIT = 60
@staticmethod
def normalized_error_code(error_response):
return (0xFFFFFFFF + error_response.error_code + 1) \
if error_response.error_code < 0 else error_response.error_code
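# Worked example (illustrative): the broker may deliver the error code
# as a negative signed 32-bit integer, so -2147483647 normalizes to
# 0xFFFFFFFF + (-2147483647) + 1 == 0x80000001, matching
# DXL_SERVICE_UNAVAILABLE_ERROR_CODE above.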
@nottest
def register_test_service(self, client, service_type=None):
topic = "broker_service_registry_test_service_" + \
UuidGenerator.generate_id_as_string()
reg_info = ServiceRegistrationInfo(
client,
service_type or "broker_service_registry_test_service_" +
UuidGenerator.generate_id_as_string())
callback = RequestCallback()
callback.on_request = \
lambda request: client.send_response(Response(request))
reg_info.add_topic(topic, callback)
client.register_service_sync(reg_info, self.DEFAULT_TIMEOUT)
return reg_info
def query_service_registry(self, client, query):
request = Request(self.DXL_SERVICE_REGISTRY_QUERY_TOPIC)
if not query:
query = {}
request.payload = json.dumps(query)
response = client.sync_request(request, timeout=self.RESPONSE_WAIT)
return json.loads(
response.payload.decode("utf8").rstrip("\0"))["services"]
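# Example query payloads (illustrative, inferred from the helpers
# below): {"serviceId": "<id>"} or {"serviceType": "<type>"}. The
# broker's response payload is a JSON document whose "services"
# member maps service ids to their registration metadata.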
def query_service_registry_by_service_id(self, client, service_id):
response = self.query_service_registry(
client, {"serviceId": service_id})
return response[service_id] if service_id in response else None
def query_service_registry_by_service_type(self, client, service_type):
return self.query_service_registry(
client, {"serviceType": service_type})
def query_service_registry_by_service(self, client, service_reg_info):
return self.query_service_registry_by_service_id(
client, service_reg_info.service_id)
#
# Test querying the broker for services with a specific identifier.
#
@attr('system')
def test_registry_query_by_id(self):
with self.create_client() as client:
client.connect()
reg_info = self.register_test_service(client)
# Validate that the service was initially registered with the
# broker.
self.assertIsNotNone(self.query_service_registry_by_service_id(
client, reg_info.service_id))
client.unregister_service_sync(reg_info, self.DEFAULT_TIMEOUT)
# Validate that the broker no longer lists the service after it
# has been explicitly unregistered.
self.assertIsNone(self.query_service_registry_by_service_id(
client, reg_info.service_id))
#
# Test querying the broker for services based on their type
#
@attr('system')
def test_registry_query_by_type(self):
with self.create_client() as client:
client.connect()
# Register two services (reg_info_1 and reg_info_2) with the same
# service_type and one service (reg_info_3) with a different
# service_type. When querying the registry using reg_info_1's
# service_type, expect entries to be returned for reg_info_1 and
# reg_info_2 but not reg_info_3 (since the service_type for the
# latter would not match the query).
reg_info_1 = self.register_test_service(client)
reg_info_2 = self.register_test_service(client,
reg_info_1.service_type)
reg_info_3 = self.register_test_service(client)
services = self.query_service_registry_by_service_type(
client, reg_info_1.service_type)
self.assertIn(reg_info_1.service_id, services)
self.assertIn(reg_info_2.service_id, services)
self.assertNotIn(reg_info_3.service_id, services)
#
# Test round-robin for multiple services that support the same channel.
#
@attr('system')
def test_round_robin_services(self):
service_count = 10
request_per_service_count = 10
request_to_send_count = service_count * request_per_service_count
request_received_count = [0]
request_to_wrong_service_id_count = [0]
requests_by_service = {}
request_lock = threading.Lock()
topic = UuidGenerator.generate_id_as_string()
with self.create_client() as service_client:
service_client.connect()
def my_request(callback_service_id, request):
with request_lock:
request_received_count[0] += 1
if request.service_id and \
(request.service_id != callback_service_id):
request_to_wrong_service_id_count[0] += 1
if request.service_id in requests_by_service:
requests_by_service[request.service_id] += 1
else:
requests_by_service[request.service_id] = 1
response = Response(request)
service_client.send_response(response)
def create_service_reg_info():
reg_info = ServiceRegistrationInfo(service_client,
"round_robin_service")
callback = RequestCallback()
callback.on_request = \
lambda request: my_request(reg_info.service_id, request)
reg_info.add_topic(topic, callback)
service_client.register_service_sync(reg_info,
self.DEFAULT_TIMEOUT)
return reg_info
reg_infos = [create_service_reg_info() # pylint: disable=unused-variable
for _ in range(service_count)]
with self.create_client() as request_client:
request_client.connect()
for _ in range(0, request_to_send_count):
request = Request(topic)
response = request_client.sync_request(
request, timeout=self.RESPONSE_WAIT)
self.assertNotIsInstance(response, ErrorResponse)
self.assertEqual(request.message_id,
response.request_message_id)
with request_lock:
self.assertEqual(0, request_to_wrong_service_id_count[0])
self.assertEqual(request_to_send_count,
request_received_count[0])
self.assertEqual(service_count, len(requests_by_service))
for service_request_count in requests_by_service.values():
self.assertEqual(request_per_service_count,
service_request_count)
#
# Test routing requests to multiple services.
#
@attr('system')
def test_multiple_services(self):
with self.create_client() as service_client:
service_client.connect()
reg_info_topic_1 = "multiple_services_test_1_" + \
UuidGenerator.generate_id_as_string()
reg_info_1 = ServiceRegistrationInfo(
service_client, "multiple_services_test_1")
def reg_info_request_1(request):
response = Response(request)
response.payload = "service1"
service_client.send_response(response)
reg_info_callback_1 = RequestCallback()
reg_info_callback_1.on_request = reg_info_request_1
reg_info_1.add_topic(reg_info_topic_1, reg_info_callback_1)
service_client.register_service_sync(reg_info_1,
self.DEFAULT_TIMEOUT)
reg_info_topic_2 = "multiple_services_test_2_" + \
UuidGenerator.generate_id_as_string()
reg_info_2 = ServiceRegistrationInfo(
service_client, "multiple_services_test_2")
def reg_info_request_2(request):
response = Response(request)
response.payload = "service2"
service_client.send_response(response)
reg_info_callback_2 = RequestCallback()
reg_info_callback_2.on_request = reg_info_request_2
reg_info_2.add_topic(reg_info_topic_2, reg_info_callback_2)
service_client.register_service_sync(reg_info_2,
self.DEFAULT_TIMEOUT)
with self.create_client() as request_client:
request_client.connect()
response = request_client.sync_request(
Request(reg_info_topic_1), self.DEFAULT_TIMEOUT)
self.assertIsInstance(response, Response)
self.assertEqual(response.payload.decode("utf8"), "service1")
response = request_client.sync_request(
Request(reg_info_topic_2), self.DEFAULT_TIMEOUT)
self.assertIsInstance(response, Response)
self.assertEqual(response.payload.decode("utf8"), "service2")
#
# Test circumventing round-robin of services by specifying a single service
# instance in the request.
#
@attr('system')
def test_specify_service_in_request(self):
service_count = 10
request_count = 100
request_received_count = [0]
request_to_wrong_service_id_count = [0]
requests_by_service = {}
request_lock = threading.Lock()
topic = UuidGenerator.generate_id_as_string()
with self.create_client() as service_client:
service_client.connect()
def my_request(callback_service_id, request):
with request_lock:
request_received_count[0] += 1
if request.service_id and \
(request.service_id != callback_service_id):
request_to_wrong_service_id_count[0] += 1
if request.service_id in requests_by_service:
requests_by_service[request.service_id] += 1
else:
requests_by_service[request.service_id] = 1
response = Response(request)
service_client.send_response(response)
def create_service_reg_info():
reg_info = ServiceRegistrationInfo(
service_client, "registry_specified_service_id_test")
callback = RequestCallback()
callback.on_request = \
lambda request: my_request(reg_info.service_id, request)
reg_info.add_topic(topic, callback)
service_client.register_service_sync(reg_info,
self.DEFAULT_TIMEOUT)
return reg_info
reg_infos = [create_service_reg_info()
for _ in range(service_count)]
with self.create_client() as request_client:
request_client.connect()
for _ in range(0, request_count):
request = Request(topic)
request.service_id = reg_infos[0].service_id
response = request_client.sync_request(
request, timeout=self.RESPONSE_WAIT)
self.assertNotIsInstance(response, ErrorResponse)
self.assertEqual(request.message_id,
response.request_message_id)
with request_lock:
self.assertEqual(0, request_to_wrong_service_id_count[0])
self.assertEqual(request_count, request_received_count[0])
self.assertEqual(1, len(requests_by_service))
self.assertIn(reg_infos[0].service_id, requests_by_service)
self.assertEqual(
request_count, requests_by_service[reg_infos[0].service_id])
#
# Test registering and unregistering the same service
#
@attr('system')
def test_multiple_registrations(self):
service_registration_count = 10
request_received_count = [0]
topic = UuidGenerator.generate_id_as_string()
with self.create_client() as service_client:
service_client.connect()
def my_request(request):
request_received_count[0] += 1
response = Response(request)
service_client.send_response(response)
reg_info = ServiceRegistrationInfo(service_client,
"multiple_registrations_test")
callback = RequestCallback()
callback.on_request = my_request
reg_info.add_topic(topic, callback)
with self.create_client() as request_client:
request_client.connect()
for _ in range(0, service_registration_count):
service_client.register_service_sync(reg_info,
self.DEFAULT_TIMEOUT)
request = Request(topic)
response = request_client.sync_request(
request, timeout=self.RESPONSE_WAIT)
self.assertNotIsInstance(response, ErrorResponse)
self.assertEqual(request.message_id,
response.request_message_id)
service_client.unregister_service_sync(reg_info,
self.DEFAULT_TIMEOUT)
self.assertEqual(service_registration_count,
request_received_count[0])
#
# Test the state of the response when no channel is registered with the
# broker for a service.
#
@attr('system')
def test_response_service_not_found_no_channel(self):
request_received = [False]
topic = UuidGenerator.generate_id_as_string()
with self.create_client() as service_client:
service_client.connect()
def my_request(request):
request_received[0] = True
service_client.send_response(Response(request))
reg_info = ServiceRegistrationInfo(
service_client, "response_service_not_found_no_channel_test")
callback = RequestCallback()
callback.on_request = my_request
reg_info.add_topic(topic, callback)
service_client.register_service_sync(reg_info,
self.DEFAULT_TIMEOUT)
service_client.unsubscribe(topic)
self.assertIsNotNone(
self.query_service_registry_by_service(
service_client, reg_info))
with self.create_client() as request_client:
request_client.connect()
request = Request(topic)
response = request_client.sync_request(
request, timeout=self.RESPONSE_WAIT)
self.assertFalse(request_received[0])
self.assertIsInstance(response, ErrorResponse)
self.assertEqual(reg_info.service_id, response.service_id)
self.assertEqual(
self.DXL_SERVICE_UNAVAILABLE_ERROR_CODE,
BrokerServiceRegistryTest.normalized_error_code(response))
self.assertEqual(self.DXL_SERVICE_UNAVAILABLE_ERROR_MESSAGE,
response.error_message)
self.assertIsNone(self.query_service_registry_by_service(
service_client, reg_info))
#
# Test the state of the response when the broker routes a service request
# to a client which has no matching service id registered.
#
@attr('system')
def test_response_service_not_found_no_service_id_at_client(self):
request_received = [False]
topic = UuidGenerator.generate_id_as_string()
with self.create_client() as service_client:
service_client.connect()
def my_request(request):
request_received[0] = True
service_client.send_response(Response(request))
reg_info = ServiceRegistrationInfo(
service_client,
"response_service_not_found_no_service_id_at_client_test")
callback = RequestCallback()
callback.on_request = my_request
reg_info.add_topic(topic, callback)
service_client.register_service_sync(reg_info,
self.DEFAULT_TIMEOUT)
self.assertIsNotNone(
self.query_service_registry_by_service(
service_client, reg_info))
with self.create_client() as request_client:
request_client.connect()
# Remove the service's registration with the client-side
# ServiceManager, avoiding unregistration of the service from
# the broker. This should allow the broker to forward the
# request on to the service client.
registered_services = service_client._service_manager.services
service = registered_services[reg_info.service_id]
del registered_services[reg_info.service_id]
request = Request(topic)
response = request_client.sync_request(
request, timeout=self.RESPONSE_WAIT)
# Re-register the service with the internal ServiceManager so
# that its resources (TTL timeout, etc.) can be cleaned up
# properly at shutdown.
registered_services[reg_info.service_id] = service
# The request should receive an 'unavailable service' error
# response because the service client should be unable to route
# the request to an internally registered service.
self.assertFalse(request_received[0])
self.assertIsInstance(response, ErrorResponse)
self.assertEqual(reg_info.service_id, response.service_id)
self.assertEqual(
self.DXL_SERVICE_UNAVAILABLE_ERROR_CODE,
BrokerServiceRegistryTest.normalized_error_code(response))
self.assertEqual(self.DXL_SERVICE_UNAVAILABLE_ERROR_MESSAGE,
response.error_message)
self.assertIsNone(self.query_service_registry_by_service(
service_client, reg_info))
|
The authors describe the most important methods in use for the laboratory diagnosis of toxoplasmosis, with special reference to the presence of IgM and to its significance in the diagnosis of acute infection. Statistics of tests carried out on pregnant women at the "Giovanni Lelli" Centre (Rome, Italy) during the years 1981-1984 are quoted. The recommended diagnostic protocol to be followed to prevent congenital infections is given. |
export class Empowered {
public id: number;
public name: string;
public paternalLastname: string;
public maternalLastname: string;
public birthdate: string;
public email: string;
public dni: number;
public phone: number;
} |
Immunoselected STRO-3+ mesenchymal precursor cells reduce inflammation and improve clinical outcomes in a large animal model of monoarthritis

Background The purpose of this study was to investigate the therapeutic efficacy of intravenously administered immunoselected STRO-3+ mesenchymal precursor cells (MPCs) on clinical scores, joint pathology and cytokine production in an ovine model of monoarthritis.

Methods Monoarthritis was established in 16 adult merino sheep by administration of bovine type II collagen into the left hock joint following initial sensitization to this antigen. After 24 h, sheep were administered either 150 million allogeneic ovine MPCs (n=8) or saline (n=8) intravenously (IV). Lameness, joint swelling and pain were monitored and blood samples for leukocytes and cytokine levels were collected at intervals following arthritis induction. Animals were necropsied 14 days after arthritis induction and gross and histopathological evaluations were undertaken on tissues from the arthritic (left) and contralateral (right) joints.

Results MPC-treated sheep demonstrated significantly reduced clinical signs of lameness, joint pain and swelling compared with saline controls. They also showed decreased cartilage erosions, synovial stromal cell activation and angiogenesis. This was accompanied by decreased infiltration of the synovial tissues by CD4+ lymphocytes and CD14+ monocytes/macrophages. Over the 3 days following joint arthropathy induction, the numbers of neutrophils circulating in the blood and plasma concentrations of activin A were significantly reduced in animals administered MPCs.

Conclusions The results of this study have demonstrated the capacity of IV-administered MPCs to mitigate the clinical signs and some of the inflammatory mediators responsible for joint tissue destruction in a large animal model of monoarthritis.

Background Neutrophils play an important role in the initiation and progression of rheumatoid arthritis (RA), where they accumulate in large numbers within the synovium and synovial fluid (SF) of the affected joints. Apart from their potent cytotoxic properties they also contribute to cytokine and chemokine release/activation and regulate immune responses through cell-cell interactions. Recent research has also identified their ability to participate in autoimmune diseases via the production of neutrophil extracellular traps (NETs). Subsequent to the influx of neutrophils, there is an influx of monocytes that mature into tissue macrophages. Both cell types can play both pro- and anti-inflammatory roles. In the autoimmune condition, rheumatoid arthritis, the prominent T cell infiltrate demonstrates that these cells are also key participants. Murine models of arthritis have been widely used to identify many of the pathogenic pathways implicated in inflammation and joint tissue destruction in RA [4]; however, their high metabolic rate, low body mass and short lifecycle have resulted in some mechanistic discrepancies relative to human RA, particularly in their response to treatment with anti-arthritis therapeutic agents. The ovine model of collagen-induced arthritis (CIA) produces a reproducible model of inflammatory arthritis in hock joints over 2 weeks and this can be used to evaluate clinical signs of lameness, synovial fluid and synovial membrane changes and cartilage erosion through the course of the inflammatory response. 
The major advantages of a large animal model such as the sheep is that the anatomy of the joints (in terms of tissue thickness) and the load-bearing stresses transmitted across the peripheral joints are comparable to the human. Since biomechanical factors are important contributors to joint tissue homeostasis and failure, we consider that assessment of lameness in this species is more relevant to the human disease than in rodents. Moreover, the ovine CIA model exhibits pain and swelling of the affected joints shortly after the induction of arthritis, followed by the development of a mild but chronic arthritis. From a practical standpoint, the ovine CIA model also has advantages over rodent models, with respect to the ease of access to target joints for intra-articular injections, collection of multiple fluid samples and the ability to cannulate the lymphatic ducts in order to monitor the cell populations exiting the joint. Adult mesenchymal stem cells (MSCs) are able to differentiate into cells of the mesodermal lineage (including bone, cartilage and tendons) and were first developed therapeutically with a view to utilising their regenerative capacity. However, a large number of studies have now reported the capacity of these cells to modulate immune system functions both in vivo and in vitro. A limited number of studies of MSC therapy in rodent models of collagen-induced arthritis (CIA) have been published, reporting mixed therapeutic and nontherapeutic effects. Differences in MSC source, isolation and culture techniques, dose, route and timing of administration may all account for the variable outcomes observed. Nevertheless, several of these studies have reported promising therapeutic effects including suppression of T cell activation and reductions in pro-inflammatory cytokine production. Mesenchymal precursor cells (MPCs) are a restricted subset of MSCs that, when STRO-1 or STRO-3 immunoselected, demonstrate increased clonogenic, developmental and proliferative capacity compared with unfractionated MSCs. In the present study, we utilized the ovine CIA model of monoarthritis to test the hypothesis that the intravenous administration of a single dose of 150 million allogeneic MPCs would reduce the clinical signs of arthritis, ameliorate the systemic elevation of leukocytes, particularly neutrophils, diminish the infiltration of inflammatory cells, synovial proliferation, stromal activation and cartilage destruction in the affected joints. Animals A total of 16 2-year-old female merino sheep were obtained from a local supplier and were acclimatised to their housing for at least 2 weeks before experiments commenced. The Animal Ethics Committee of the University of Melbourne approved all experimental animal procedures and sample collections (reference number: 1212422.3). One sheep from the MPC-treated group was excluded from the study at necropsy due to concurrent inflammatory disease of the lung that was unrelated to the study, giving final groups of eight control sheep and seven MPC-treated sheep. Arthritis induction The ovine arthritis model was based on the sensitization of the animals to bovine collagen type II (BCII; ). The sheep were allocated randomly into two groups. BCII was refined from bovine tracheal cartilage based on Miller's method using pepsin digestion (3200-4500 units/mg protein; Sigma-Aldrich, St. Louis, MO, USA) and fractional salt precipitation as we have described previously. 
The purity of the preparation was confirmed by biochemical analysis for the hydroxyproline content and Western blotting using a commercial sample of bovine tracheal cartilage type II collagen (Sigma-Aldrich). A single batch was then lyophilised and stored at -20°C and used for all animals. Lyophilised BCII was dissolved aseptically in 50 mM acetic acid at 4°C and reconstituted aliquots were stored at -80°C. The dissolved collagen was emulsified with Freund's Complete Adjuvant (FCA) (Sigma-Aldrich) by mixing equal volumes using a Normject Luer-lock syringe (Henk Sass Wolf, Tuttlingen, Germany) connected to a second Normject Luer-lock syringe with a Popper micro-emulsifying needle. Sheep were sensitized to collagen on day 0 by subcutaneous injection in the flanks with an emulsion of 5 mg/ml BCII in Freund's Complete Adjuvant (FCA). An immunization boost was given on day 14 by subcutaneous (SC) injection of 5 mg/ml BCII in Freund's Incomplete Adjuvant (Sigma-Aldrich). Two weeks later (day 28), arthritis was induced by intra-articular (IA) injection of 100 μg BCII dissolved in 0.5 ml saline into the left hock (tibio-tarsal) joint. Both groups of sheep had arthritis induced in the left hock.

Following thawing, cell counting (using a Neubauer haemocytometer), and determination of viability (trypan blue exclusion method), 150 million MPC were injected into a sterile 0.9% saline drip bag (100 ml) immediately prior to administration into the sheep. The cells were then administered systemically over 30 minutes via a pre-placed jugular intravenous (IV) catheter. A filter was placed in the giving set to trap any cell clumps. Control sheep received the equivalent volume of saline. Treatments were administered 1 day following the intra-articular BCII injection and arthritis induction.

Clinical lameness scoring Clinical lameness was assessed using a semi-quantitative scoring system, as described previously. A 6-point scale was used for lameness, and 4-point scales were used for joint swelling and pain elicited on flexion of the hock. The lameness assessment included the parameters of behaviour, standing posture and gait. The clinical signs for joint swelling were assessed as 0 (none detectable), 1 (barely detectable, but present), 2 (clearly discernible swelling on palpation) and 3 (very marked joint swelling). Pain on flexion was assessed as 0 (none elicited), 1 (slight discomfort on strong flexion), 2 (clear discomfort with strong flexion) and 3 (severe discomfort even with slight flexion and sheep very reluctant to flex the joint). Lameness, joint swelling and pain on flexion were assessed weekly until the IA collagen injections (day 28) and then on days 29, 30, 31, 32, 34, 36, and 42 after the IA injection. All investigators who participated remained blinded to the treatment group allocation.

Sample collection Blood was collected from the jugular vein weekly until day 28, then daily for 3 days following arthritis induction, then every 2-3 days thereafter. Sheep were killed 2 weeks after the induction of arthritis (on day 42), and post mortem examinations were performed. Synovial membranes (SM), cartilage from the articular surface of the talus bone, and synovial fluid were collected at necropsy from all animals. Gross findings were recorded for each tissue and SM were collected from the dorsal region of the left and right joints. 
Part of the SM was fixed in 10% buffered formalin and sent to Gribbles Veterinary Laboratories, Melbourne, VIC, Australia for routine processing and staining (haematoxylin and eosin) for light microscopy. The other half of the SM was placed in OCT compound (Tissue-Tek, Sakura Finetek, Torrance, CA, USA), snap frozen in liquid nitrogen and stored at -80°C for immunohistology.

Blood and synovial fluid cytology Total leukocyte numbers in blood and SF were obtained using an automated cell counter (Coulter Particle Counter, Model Z1; Beckman Coulter, Indianapolis, IN, USA) while the differential cell count was determined on Giemsa-stained blood smears or SF cytospots. The differential cell count was performed counting a minimum of 200 cells under a light microscope. Results are presented as number of cells/ml of blood or SF.

Macroscopic scoring of articular cartilage from hock joints The cartilage on the surface of the talus was assessed using a 5-point scale based on the Osteoarthritis Research Society International (OARSI) recommendations for macroscopic scoring of cartilage pathology, but simplified because the primary cartilage injuries were confined to the central trochlear groove of the talus, rather than the whole joint surface, as we have described previously. The scheme used was: normal cartilage surface = 0; roughened cartilage surface but not deep or extensive fissuring = 1; clear fibrillation and fissuring of surface = 2; full-depth small erosion confined to trochlear groove = 3; full-depth erosions which extended outside the trochlear groove to the adjacent condyles = 4. The mean values from two blinded independent observers were pooled for each treatment group.

Histopathological scoring of synovial tissues from hock joints The scoring system developed for this ovine CIA model was a composite of those used for humans, with modifications for ruminants, and has been described previously. The scoring system assessed three parameters of the synovium, namely intimal hyperplasia, stromal activation and inflammatory infiltrate (scores from 0 to 3), and the final score was a total of these three scores, with a maximal score of 9 points. For consistency, synovial intimal hyperplasia was specifically evaluated at the predominant cell depth. The synovial intima ranged from normal (1-3 cells; 0 points) to moderate diffuse hyperplasia (>6 cells thick in multiple areas; 3 points). Synovial stromal activation and inflammatory infiltrate were based upon those areas with the greatest alterations. The degree of activation ranged from none (0 points) to marked activation with chronic oedema, marked fibrosis and cellularity, including endothelial cells and histiocytes (3 points). Synovial inflammatory infiltration could vary from absence of inflammation (0 points) to marked inflammatory infiltrate, which could include lymphocytes, plasma cells and histiocytes, seen as large aggregations of cells within the synovial stroma, the synovial intima and in a perivascular location (3 points). Large areas of necrosis often accompanied severe inflammation. Each parameter was observed at low power, before evaluation at high power (40× objective). The pathological changes were scored by two blinded observers and if their scores differed by 2 or more points, the sections were examined by a third blinded observer. 
Immunohistochemical staining of synovial tissues from hock joints Frozen sections of SM from the dorsal region of the arthritic and contralateral hock joints were acetone-fixed and stained by indirect immunohistochemistry. The primary monoclonal antibodies (mAbs) used were specific for CD4, CD8 and TCRγδ (86D-127) and were also obtained from A/Prof. Scheerlinck (Centre for Animal Biotechnology). B lymphocytes were identified using mAb specific for CD79acy (HM57, Dako, Glostrup, Denmark) and monocytes and macrophages were identified using mAb specific for CD14 (M-M9, VMRD, Pullman, WA, Australia). Monoclonal antibody specific for Ki-67 (MIB-1, Dako) was used to detect cells in active phases of the cell cycle, while angiogenesis was detected using mAb specific for von Willebrand factor (vWF) (Dako) on endothelial cells. In all cases, isotype-matched non-specific antibodies were used as negative controls. The primary antibodies were detected with a rabbit anti-mouse HRP (Dako) and DAB (Sigma-Aldrich).

Scoring of the immunohistochemically stained tissues from hock joints The SM sections stained for mononuclear inflammatory cell types were scored and assessed on a 7-point scale based on the approximate cell count and/or the size and the numbers of cell clusters. The criteria used were: 0 = no cells in the entire section; 1 = < 10 cells in the entire section; 2 = 10-50 cells and/or 1-2 small clusters of cells; 3 = 50-200 cells and/or > 2 small-medium clusters of cells or numerous cells; 4 = 200-500 cells and/or many small-medium clusters; 5 = 500-1000 cells and/or many medium-large clusters of cells and 6 = > 1000 cells and/or very large clusters of cells. This system was modified for CD14+ cells, where the cell numbers were 10-fold greater (i.e. 0 to > 10,000 cells). The density of blood vessels in the synovium was estimated by counting the average number of vWF-stained vessels in three microscope fields using a 4× objective.

Statistical methods Results are expressed as mean ± SEM unless otherwise indicated. Data were analysed using GraphPad Prism statistical software (version 6.0b; GraphPad Software Inc, La Jolla, CA, USA). Analysis of data between groups at different time points was performed using two-way ANOVA with Sidak's multiple comparison tests. Area under the curve with respect to increase (AUC I) was used to compare changes in plasma cytokines where initial baseline values differed between groups. The AUC I was calculated by the trapezoid rule using GraphPad Prism software, starting from day 29 (immediately prior to MPC or saline administration) and using the value at that point as the baseline. Endpoint data were evaluated using Mann-Whitney or Wilcoxon matched-pairs signed rank tests and statistical significance between groups was accepted at p < 0.05.

Clinical assessment Intra-articular administration of collagen caused a mild to moderate lameness with localised joint swelling and pain on flexion, which was detectable in all sheep after 24 h (Fig. 1). All signs of lameness and inflammation then decreased steadily from day 29 to day 42. Lameness scores were significantly lower in the group treated with MPC from days 31 to 36 inclusive (Fig. 1A), and there was a significant overall improvement in lameness for the MPC treatment group relative to saline controls. Lameness was decreased to a mean score of 1 by day 34 in the treated group while the untreated group still had a mean greater than 1 at day 42. For pain on flexion (Fig. 1B), there was a more rapid improvement in the MPC-treated group compared to the saline group, with almost no pain by day 42. The pain scores were significantly improved between days 31 and 36 inclusive. Figure 1C shows the results of swelling in the hock joint. Although the swelling was mild, the scores for swelling were significantly lower for days 31 and 34 in the MPC-treated group. When combining all three clinical parameters (Fig. 1D), there were again significant improvements in the scores between days 31 and 36.

[Figure 1 caption: There was a significant reduction in all measured parameters and the aggregate score (D) 2 days after MPC treatment. Values were analysed using two-way ANOVA with Sidak's multiple comparison tests and each point represents the mean ± SEM of seven to eight sheep with *, **, *** and **** representing p ≤ 0.05, 0.01, 0.001 and 0.0001 respectively. MPC mesenchymal precursor cells]

Total and differential white cell counts (WCC) in blood There was a significant increase in the total blood leukocyte count following IA arthritis induction in both treatment groups (Fig. 2A) and this was attributable primarily to the neutrophil numbers (Fig. 2B). However, following MPC treatment there was a faster decline in the number of neutrophils in the blood, with significantly lower neutrophil numbers measured for the 2 days following MPC infusion (Fig. 2A and B). Lymphocyte and monocyte numbers in blood showed little change after IA arthritis induction, and treatment had no effect on their numbers (data not shown).

Total and differential white cell counts in synovial fluid The total leukocyte count in the SF of arthritic left joints was significantly higher than in the contralateral right joints, as we have previously reported; however, the neutrophil counts were not significantly different between left and right joints in the MPC-treated sheep. After this period of time the leukocyte counts had decreased such that there were no significant differences between the synovial fluid cell numbers in the left hock joints of treated versus control sheep. However, there was a moderate trend to suggest that the numbers of neutrophils in the synovial fluid of the left hock joints in MPC-treated sheep (3 ± 1% of total cells) were reduced compared with the numbers in saline-treated control sheep (21 ± 8% of total cells; p = 0.09).

Inflammatory markers in blood Activin A While the mean plasma activin A levels at individual time points for the two treatment groups were found not to be significantly different (due mainly to one sheep in the control group that had a 10-20-fold higher level of activin A than other sheep prior to arthritis induction), analysis of areas under the curve (AUC) for each sheep did demonstrate significant differences, as shown in Fig. 3. Using this approach, starting from day 29 (immediately prior to MPC or saline administration), indicated that there was a significant drop in plasma activin A following MPC treatment (38 ± 117 arbitrary units; Fig. 3A) compared with plasma levels in untreated sheep (378 ± 294 arbitrary units; p = 0.021).

IL-17A Plasma IL-17A levels decreased over the first 2 days post treatment in the sheep treated with MPCs. Again, there was variability between individual sheep in the initial baseline levels of this cytokine in both groups that precluded statistical significance using the mean values.
However, analysing the area under the curve for IL-17A plasma levels for each sheep starting from day 29 demonstrated a significant reduction of this cytokine in the MPC treatment group compared to the saline-injected group (p = 0.006; Fig. 3B).

Inflammatory markers in synovial fluid Although there were significantly raised levels of the cytokines activin A, IL-17A and IFN-γ, but not IL-10, in the synovial fluid of left arthritic joints compared to the right contralateral joints, no significant changes could be demonstrated between the saline and MPC treatments (Fig. 4A-C).

Gross joint pathology As shown in Fig. 5, at the time of necropsy (day 42), left hock joints revealed profound pathological changes. These included joint swelling, gross SM thickening and focal cartilage erosion on the articular surface of the talus and distal tibia bones. The magnitude of these changes was most evident in the left hock joints from the saline-treated control sheep (Fig. 5A and C) but was less pronounced in the MPC-injected sheep (Fig. 5B and D).

Scoring of cartilage erosions Cartilage erosions on the articular surface of the talus bone (Fig. 5A-D) were assessed using a macroscopic scoring system and MPC treatment was shown to significantly reduce cartilage erosion in the arthritic joint compared to arthritic joints from saline-treated control sheep (Fig. 6).

Histopathological scoring of synovial tissues from hock joints The histopathology scores for synovial tissues, which included hyperplasia, stromal activation and inflammatory cell infiltration, are shown in Fig. 7. All parameters differed significantly between the arthritic and contralateral synovial tissues within both groups. Treatment with MPC significantly reduced the levels of stromal activation, including lowered fibrosis, cellularity and matrix deposition, in the arthritic left joint compared to the same joint in the saline-treated controls (p < 0.05). The reduction in intimal hyperplasia or inflammatory cell infiltration did not reach statistical significance. The combined total histopathology score also showed a trend towards being lower in the treated group (p = 0.0665; Fig. 7D).

[Figure 2 caption: Effect of MPC administration on total blood leukocyte (A), neutrophil (B), lymphocyte (C) and monocyte (D) counts. Values were analysed using two-way ANOVA with Sidak's multiple comparison tests and each point represents the mean ± SEM of seven to eight sheep with *, ** and *** representing p ≤ 0.05, 0.01 and 0.001, respectively. MPC mesenchymal precursor cells]

Immunohistochemical studies of the synovial tissues The inflammatory cell types (CD4+, CD8+ and TCRγδ+ T cells, B cells, and monocytes/macrophages) examined in the synovial tissues of the arthritic left joints were observed to be more abundant than in the corresponding tissues of the contralateral joints (Fig. 8a-e). Treatment with MPC significantly reduced the number of CD4+ T cells and CD14+ monocytes/macrophages (by 40%) in the synovium of MPC-treated sheep compared to the saline-treated control group (Fig. 8a, e). The other cell subsets (CD8+ and TCRγδ+ T cells, and CD79a+ B cells) were also lower in the MPC-injected group, but they were not statistically different from the saline-treated controls (Fig. 8b-e). Cells in active phases of the cell cycle, as indicated by Ki-67 expression, were also increased in the inflamed joints compared to contralateral joints, but showed no response to MPC treatment (Fig. 8f).
There was, however, a significant reduction of 50% in the number of blood vessels (as measured by vWF expression) in the arthritic synovium of MPC-treated sheep compared with saline-treated controls (Fig. 8g).

Discussion This study evaluated the acute anti-inflammatory activities of immunoselected allogeneic STRO-3+ ovine mesenchymal precursor cells (MPCs) in an ovine model of monoarthritis and clearly demonstrated an anti-inflammatory effect of MPCs following IV administration. In rodent collagen arthritis models, where systemic administration of MSCs has reduced disease severity, this treatment has been associated with the suppression of T cell activation, a reduction in serum pro-inflammatory cytokine expression, and the induction of Foxp3+ regulatory T cells with an immunosuppressive phenotype. MPCs are a restricted subset of MSCs, and this is the first study to demonstrate their effectiveness in a non-rodent, large animal model of collagen-induced arthritis. The significant reduction in clinical scores within 4 days of systemic administration of the MPCs correlated with the marked decline of plasma neutrophil levels that were increased within 24 hours of arthritis induction. The increased blood neutrophil levels in response to CII immunisation have been attributed to the induction of IL-8 by IL-6, and the recruitment of neutrophils from the marginal pool by chemotactic factors including leukotriene B4, C5aR and FcRs in sequence. However, in a parallel study using the same ovine model of CIA but sacrificing the animals 72 days post arthritis induction, leukocyte populations within synovial fluids were elevated, but the levels were not altered by MPC treatment. Presumably, 2 weeks after arthritis induction, most of the inflammatory changes were restricted to the synovium itself, and there was little chemotactic stimulus to promote migration of neutrophils from the synovium into the synovial fluid. Nevertheless, in the present study there was evidence to suggest that the numbers of neutrophils in the synovial fluid of the left hock joints in MPC-treated sheep were reduced compared with the numbers in joints of the saline-treated control sheep. Although SF samples were collected only at the time of necropsy, sampling at earlier time points in the study may have revealed differences in the synovial fluid cellular composition associated with MPC treatment; however, this procedure was excluded from the study protocol due to the risk of inducing inflammatory changes within the joint from repeated joint sampling. Activated neutrophils have been shown to be a major source of the inflammatory marker, activin A. Activin A is a member of the transforming growth factor beta (TGF-β) family, and it is thought to play a major role in inflammation. It stimulates the production of cytokines and iNOS, and regulates or suppresses TH1 and TH2 responses. Increases in plasma concentrations of activin A have been demonstrated in patients suffering various inflammatory conditions.

[Figure 3 caption, partial: "... and IL-17A (B) were compared between the MPC-treated and control group using two-way ANOVA with Sidak's multiple comparison tests. The AUC I for the maximal responses between days 29 and 36 of activin A (A) and IL-17A (B) were compared using Mann-Whitney tests. Each point represents the mean ± SEM for concentrations or mean ± SD for AUC of seven to eight sheep with * and ** representing p ≤ 0.05 and 0.01, respectively. MPC mesenchymal precursor cells"]
In particular, activin A concentrations in the synovial fluid of patients with gout and rheumatoid arthritis are elevated relative to those of patients with osteoarthritis. In such conditions, it has been suggested that activin A promotes proinflammatory macrophages and induces hyperalgesia, and that early suppression of activin A in RA might reduce pain and joint damage. In the present study, plasma activin A levels showed a significant increase in the saline-treated control group soon after arthritis was initiated, with a peak at day 34, while the sheep administered MPC exhibited relatively reduced levels of activin A. Our findings of lowered activin A levels, reduced blood neutrophils and lowered signs of pain on flexion and lameness in the MPC-treated group appear to support the hypothesis that the reduction of blood neutrophilia, with subsequent reduction of activin A and other pro-inflammatory factors, may be an important mechanism by which MPC exert their anti-inflammatory effect. Plasma IL-17A levels decreased over the first 2 days post MPC treatment; however, the IL-17A levels in blood were low and rather variable between individual sheep. IL-17A is a pro-inflammatory cytokine produced primarily by Th17 cells and γδ T cells. In gout and many other inflammatory conditions, assembly of intracellular pattern recognition receptors (NLRs) into the inflammasome complex leads to the enzymic activation of IL-1β and IL-18 into their mature forms, which promotes IL-17 production from the above-mentioned cells. In the joints of arthritic patients this cytokine is thought to promote inflammatory cell infiltration, bone destruction, and synovial fibroblastic activity. Serum and synovial IL-17A levels in human RA patients can be significantly higher compared with normal controls, although IL-17A may not be present in all RA patients. MPC treatment did not appear to have a marked effect on cytokine levels in the synovial fluid of arthritic joints 2 weeks after arthritis induction. Activin A, IL-17A and IFN-γ were all higher in arthritic SF of the saline-treated sheep than in the contralateral joints, but not altered significantly in the MPC-treated group. Only IL-17A showed a possible reduction in SF of MPC-treated sheep, with no IL-17A detected in five sheep, but the effect of treatment was unclear due to the presence of two sheep that had very high levels of IL-17A. The synovial membrane plays a very important role in disease pathogenesis in arthritis, with synovitis and erosion of articular cartilage being key features. The reduced cartilage damage evident in MPC-treated sheep is consistent with an anti-inflammatory mode of action. It is thought that cartilage damage is initially driven by inflammatory cytokines including IL-17A, IL-1β and tumour necrosis factor alpha (TNF-α), acting synergistically to induce the production of matrix metalloproteinases and other proteinases (including aggrecanases), which degrade cartilage. Reduced levels of stromal activation (fibroblast numbers, cellularity and matrix deposition) in the arthritic joints of MPC-treated sheep are important because the synovial stroma is the major site for inflammatory cell recruitment and cell activation. Neutrophils play an important early role in inflammatory arthritis, but in the current acute model they were no longer present in the synovium in significant numbers by the time of necropsy 14 days later, and instead lymphocytes, monocytes and macrophages were the predominant inflammatory cell types.
MPC treatment 1 day after arthritis induction appeared to greatly limit the recruitment of CD4+ T cells and CD14+ monocytes and macrophages (by around 40%) to the synovium. CD4+ T cells and monocytes/macrophages both play an important role in the pathophysiology of arthritis. These cells release a number of chemotactic and other inflammatory cytokines, such as TNF-α, that recruit other inflammatory cells that maintain the arthritic process. While MSC transplantation has been shown to suppress the proliferation of CD4+ T cells in a mouse model of graft-versus-host disease, it was uncertain whether the finding in the current study reflected reduced proliferation or reduced recruitment of cells. Since the number of Ki-67+ cells was similar in saline- and MPC-treated synovium, it is arguable that reduced cellular recruitment may be the more likely mechanism. This would be consistent with findings that MSCs significantly reduce expression of key chemokines for attracting macrophages (MCP-1) in vitro and in a rat model of acute traumatic brain injury, with a corresponding decrease in macrophage infiltration. Hyperplasia of the inflamed synovial tissue is supported by endothelial proliferation and angiogenesis, which in long-standing disease may ultimately result in the formation of an invasive pannus. Angiogenesis within the synovium was significantly reduced with MPC treatment in this study. Angiogenesis is a noted feature of RA and is thought to be associated with angiogenic chemokines and vascular endothelial growth factor. VEGF is produced by monocytes, macrophages and fibroblasts and, together with IL-17A, stimulates angiogenesis leading to cell recruitment and synovitis development. MSC are generally thought to increase angiogenesis via the release of VEGF, and naturally occurring MSC in the synovium of RA patients have been implicated in this process, so the significant reduction in angiogenesis following MPC treatment in the sheep model may be a specific effect of the highly purified MPCs used in this study.

[Figure 7 caption, partial: "Values were compared between the MPC-treated and the control group using paired Wilcoxon (left and right) and unpaired Mann-Whitney (saline and MPC treatment) tests. Lines represent mean ± SEM of seven to eight sheep. * and ** represent significant difference; p ≤ 0.05 and p ≤ 0.01, respectively. B Representative histopathological images of synovial membranes from arthritic hock joints following saline (a-c) or MPC (d-f) treatment. Paraffin-embedded tissues were stained with haematoxylin and eosin. The bars represent 250 μm. MPC mesenchymal precursor cells"]

Conclusions The results of the present study using the ovine model of inflammatory arthritis have confirmed that a single intravenous infusion of 150 million allogeneic MPC per animal was effective in reducing the clinical signs of arthritis and gross pathological changes in the arthritic joint. Treatment also attenuated histopathological changes including synovial stromal tissue activation, CD4+ T cell and monocyte/macrophage accumulation in the synovium and synovial angiogenesis. MPC treatment was associated with significant decreases in blood neutrophilia and associated inflammatory biomarkers such as activin A. This study demonstrates that MPCs were able to profoundly modulate the inflammatory cascade in this ovine model of collagen-induced arthritis, leading to a downregulation of both local and systemic inflammation.
Together, these data support the potential application of MPCs for the treatment of acute tissue inflammation such as arthritis. It appears that MPCs have the ability to reduce the early events in the disease process, which is consistent with current recommendations for early anti-inflammatory intervention in arthritis. The present studies also highlight the potential of using MPCs as a novel biological agent for the management of human RA. However, additional preclinical and clinical studies will clearly be required to establish more precisely the pathways used by MPCs to mediate their therapeutic effects in RA.

[Figure 8 caption, partial: "... Ki-67 (f). Blood vessels were identified by vWF expression (g). The data were compared using paired Wilcoxon (left and right) and unpaired Mann-Whitney (saline and MPC treatment) tests. Lines represent mean ± SEM of six to eight sheep with * and ** representing p ≤ 0.05 and p ≤ 0.01, respectively. B Immunohistology of synovial membranes from arthritic hock joints following saline (left column) or MPC (right column) treatment. The frozen sections were stained with antibodies to CD4 (a, b), CD8 (c, d), TCRγδ (e, f), CD79a for B cells (g, h) and CD14 (i, j). The bars represent 25 μm. MPC mesenchymal precursor cells"]

Availability of data and materials The data that support the findings of this study are available from Mesoblast Ltd, but restrictions apply to the availability of these data, which were used under license for the current study, and so they are not publicly available. Data are, however, available from the authors upon reasonable request and with permission of Mesoblast Ltd.

Authors' contributions AA and LMD contributed equally to this manuscript. AA, LMD and EAW participated in the acquisition of data, analysis and interpretation of data and writing of the manuscript. CK participated in the acquisition of data, analysis and interpretation of data. JVH and BAB contributed to the analysis and interpretation of data. WGK and SRB participated in the study design, acquisition of data, analysis and interpretation of data, and writing of the manuscript. PG and SI contributed to the study design and interpretation of data. All authors read and approved the final manuscript for publication.

Competing interests PG is a consultant to Mesoblast Ltd but does not own stock in the company. SI is an employee of Mesoblast Ltd, owns stock, and has commercial interests in the therapeutic applications of MPC in rheumatic diseases. The other authors have no competing interests.

Consent for publication Not applicable.

Ethics approval The Animal Ethics Committee of the University of Melbourne approved all experimental animal procedures and sample collections. |
Popularization of Legal Knowledge at the Community Level: An Analysis of the Foundations, Problems and Routes of the Rule of Law in Community-Level Public Governance

The rule of law is an important guide for grassroots social governance in China, and its practical foundation lies in two mutually supporting aspects: reconciling social contradictions according to law and guaranteeing the "one core with multiple actors" governance structure according to law. However, the rule of law is not the purpose of grassroots social governance in itself, and its practice cannot be separated from the constraints of Chinese grassroots social traditions such as "no litigation" and treating "public and private alike"; these give rise to problems such as weak legal authority, weak law enforcement and poorly adapted hard law. It is therefore necessary to further promote the legalization of grassroots social governance by strengthening the guidance of Party building at the grassroots level, promoting the coexistence of hard law and soft law, and shifting the emphasis of law enforcement at the grassroots level.

Keywords: public governance at the community level; rule of law; guided by Party building; soft law

I. INTRODUCTION

The rule of law is an important guide for public governance at the community level in China. Since the 18th CPC National Congress, the central leadership has repeatedly stressed improving the modes of public governance and raising the level of the rule of law in public governance. The report of the 18th CPC National Congress stipulates: "accelerate the building of the following mechanisms and systems: a law-based social management system featuring Party Committee leadership, government execution, nongovernmental support and public participation." The 3rd Plenary Session of the 18th CPC Central Committee decided: "we should improve the methods of public governance... insist on rule by law, strengthen legal guarantees and use legal frameworks and methods to reconcile social contradictions according to the rule of law." The 4th Plenary Session also stressed: "promote multilevel and multifield administration according to the law. We will take a systematic, law-based, and holistic approach to governance and try to resolve root causes of problems, if there are any." The report of the 19th CPC National Congress further points out: "improve the law-based public governance model under which Party Committees exercise leadership, government assumes responsibility, nongovernmental actors provide assistance, and the public get involved. We'll strengthen public participation and rule of law in public governance, and make such governance smarter and more specialized."

Then, what is the foundation of the rule of law in public governance at the community level? What problems will be encountered in the process, and what are their origins? And by what route should it be pursued? At present there are few papers on these questions; this paper attempts to answer them.

II. DOUBLE BASIS OF THE NATIONAL POPULARIZATION OF LEGAL KNOWLEDGE TO THE COMMUNITY LEVEL

What problems should public governance at the community level solve? This is the first question we should answer. Theoretically, public governance at the community level pursues two goals: one is order, and the other is vitality. Order means "putting everything and every person in its proper position and having each play its role."
The main goal of community-level public governance is to put everything and every person in the proper position, a position usually defined by traditions, ethics and laws and embodied in environmental order, relational order, security order and so on. Besides order, the community level also needs vitality in order to have more opportunities to develop. Order is inward-facing while vitality is outward-facing; the two work together. President Xi Jinping once pointed out: "national and public governance is a science. We should master the proper manner." Then why does our country need to popularize legal knowledge at the community level? The answer is that many problems arise in community-level public governance. For example, Liang Ping argues that it faces four dilemmas: traditional governance manners and the practical situation prevent the transformation of governance at the community level; the contradiction between the localization and the informality of social governance during the social transition; a rigid fixation on stability in community-level governance; and the decline of the authority of the community level together with the rise of traditional authorities, which weakens enforcement capacity in community-level public governance. We cannot yet give a complete solution to these problems. In my view, the basis for promoting the rule of law in community-level public governance is embodied in at least two aspects: reconciling social contradictions according to the rule of law, and guaranteeing the governance system of one core with many aspects according to the rule of law. The former mainly concerns social order at the community level, and the latter its vitality. A. Reconcile social contradictions according to the rule of law In reconciling social contradictions, the functionality of the rule of law can be fully demonstrated. Since the reform and opening-up, and especially in this new century, many contradictions have arisen. The social blue book for 2013 issued by the Chinese Social Sciences research group shows that "recently, the number of collective incidents caused by social contradictions amounts to tens of thousands, even hundreds of thousands, each year." Some research shows that after per capita GDP surpasses US$1,000, a country enters a period in which social contradictions become prominent, and China has reached that stage. Social contradictions can usually be divided into two categories: one comprises family disputes, neighborhood conflicts, disputes between capital and labor, and disputes between owners and property companies, none of which involve the government; the other comprises contradictions caused by government policies and actions, such as the siting of waste-incineration plants, land requisition and demolition, and improper legislation. Whatever kind of contradiction is involved, once it is improperly handled, the government has to get involved. Traditionally, the government takes the main role in reconciling social contradictions; for instance, the Bureau for Letters and Visits is responsible for resolving such problems. In this process the official, the actual spokesman of the government, may bring his own emotions and even his own interests to bear, so this mode of administration is often criticized as rule of man and arbitrariness. 
When differences of interest are not yet pronounced, social contradictions remain simple, and other social actors such as village heads or religious leaders can still play a role in reconciling them, the government's administrative mode can still work. When the above-mentioned conditions become serious, however, it no longer works properly. What is more, with the government involved in land requisition and demolition, urban-village renovation and the like, the government itself becomes the core of the contradiction. In this situation it is quite improper to resolve such contradictions through the usual administrative mode; the best way is to resolve them according to the law. B. Guarantee the governance structure of one core with many aspects by the rule of law One core with many aspects means a public governance structure under the leadership of the ruling party that includes the ruling party, the government, society and the public. Under this structure, these actors, led by the ruling party, take positive action and make full use of their respective advantages to promote social development. Compared with the traditional single-management structure, it can be regarded as the ideal structure for community-level public governance. The ruling party has also realized that the goals of community-level public governance are hard to achieve through government administration alone, and it has insisted in many important documents that public governance requires the participation and cooperation of many actors. However, this recognized structure must be guaranteed by the rule of law. The most important step is to regulate government powers and strengthen social and individual rights according to law; otherwise the structure cannot take shape. In practice there are many ways to do so: implement government power lists to restrict government powers; reform the registration and management system for social organizations so as to separate certain organizations from their administrative supervisory departments; strengthen information legislation to guarantee every person's rights to know and to participate; and establish the legal status of residential-quarter hearings, coordination and evaluation so that residents can take part in community governance. Of course, any illegal or promise-breaking behavior in public governance should be punished, since such behavior damages the governance structure of "one core with many aspects". To take an example, in November 2018 the Ministry of Civil Affairs issued warnings to the Yongheng Charitable Foundation (for failing to make its internal management system and project capital allocation public and failing to describe related parties) and the More Love Foundation (for failing to describe related parties and their transactions), and placed them on the abnormal list, in accordance with the Regulations for the Management of Foundations and the Measures for Foundation Information Announcement. III. THE PROBLEMS OF THE RULE OF LAW IN COMMUNITY-LEVEL PUBLIC GOVERNANCE AND THEIR ORIGINS Advancing the rule of law in community-level public governance has its logical basis, but its application is not easy. 
Even eastern coastal cities, where the economy is developed, the level of civilization is high and legislation is relatively complete, encounter difficulties and problems, among them weak legal authority, the poor adaptability of hard law and weak law enforcement. A. Weak legal authority Weak legal authority means that the rule of law is not the first choice of governments and society when solving problems; in some cases they simply ignore existing law. It takes two forms: the authority of the rule of law is not strong in the minds of society and the people, and local governments and officials do not pay enough attention to it. The first form is mainly reflected in people choosing to believe in letters and visits rather than in law, and in their tendency to escalate incidents. Yang Xiaojun, comparing data on administrative litigation, reconsideration and petitions from 2003 to 2009, concluded: "compared with legal approaches, people tend to choose letters and visits to defend their own rights." This does great harm to the authority of the rule of law. As some have pointed out: "letters and visits may, of course, give some people justice in their rights and responsibilities, but at the same time they harm the rule of law." The second form is mainly reflected in local officials failing to abide by the law. Some scholars point out that excessive government management, the erosion of social interests, the monopolization of public decisions and the exclusion of public involvement are the main triggers of collective incidents. In fact, it is local governments' illegal actions, such as failing to announce decisions and involve the public in decision-making, failing to perform their duties according to law and failing to regulate officials' behavior, that trigger many social contradictions. For example, many NIMBY incidents arise because the government does not make site-selection policies public, which shows that local governments ignore the fact that making key decisions public is a basic requirement of the rule of law. It should be noted that the public's distrust of law and officials' disrespect for law reinforce each other: if people do not believe in law, governments are encouraged to fall back on the traditional administrative mode and develop the habit of not acting strictly in conformity with law, which in turn deepens the public's distrust of law. B. The poor adaptability of hard law The problems arising in community-level public governance are very complicated, and it is hard to solve them all through existing law alone. Moreover, some hard law faces the problem of poor adaptability. Take collective incidents as an example: the Annual Report on the Development of the Rule of Law in China points out that China has built a legal system that includes the Emergency Response Law and the Regulations on Letters and Visits, but the stipulations and working mechanisms show that in dealing with collective incidents we rely mainly on after-the-fact measures, with the police taking the main responsibility once an incident occurs. To put the law into effect, therefore, many supporting mechanisms are still needed. It is unavoidable that some hard law cannot adapt, especially at the community level during the transition period. 
On the one hand, law is introduced relatively slowly compared with the new issues arising at the community level, for example the supervision of online speech; on the other hand, law cannot reach every corner, for example neighborhood conflicts and disputes arising from dog-keeping. Cao Yunqing gives an example of the insufficiency of resolving family conflicts by law: "According to the law, parents' property can be inherited equally by all their children, but I have visited some places and found the actual situation is not so. Usually the property is inherited by sons, and if there is no son, by nephews. I asked the reasons. Someone told me that a daughter can of course inherit the property, but if she does so, although it is legal, the whole village will blame her." So when we popularize legal knowledge at the community level, we should confine its application to an appropriate scope; otherwise the authority of the rule of law in these places will be damaged. C. Weak law enforcement In popularizing legal knowledge at the community level we face several questions: who will enforce the law, who will supervise it, and how to allocate the costs of enforcement. Many regulations in community-level public governance concern individuals and aim to regulate daily behavior, for example no smoking in public areas, no casual littering and keeping dogs on a leash. All of these require strong enforcement by local government. Owing to a shortage of personnel, the government cannot send inspectors to every community every day; the usual practice is to enforce only when violations are about to cause new contradictions or great danger, supplemented by periodic campaign-style enforcement. As much research shows, campaign-style enforcement actually runs against the spirit of the rule of law. The core of modern law is stability, and stability is realized through legal procedures. Excessive reliance on campaign-style enforcement weakens the authority of law in people's minds and can even create opportunities for potential offenders. These are the three problems in the popularization of legal knowledge at the community level; they are closely related and share largely the same origins. First, the rule of law is a mode, or a tool, of community-level public governance. In the eyes of community-level officials, including judges, the rule of law is not itself the purpose of public governance (order and vitality are). As Suli observed, for judges at the community level, preventing contradictions from intensifying and keeping terrible things from happening count as good outcomes. This is why, when dealing with escalating cases, judges may lean on both parties and sometimes take strict measures (occasionally even illegal to some degree) to settle the matter and satisfy everyone. Second, apart from the rule of law, other methods such as rule of virtue, self-governance and co-governance can be used, and even the traditional administrative mode (including letters and visits) can gain new vitality as times change. Once officials meet difficulties in law enforcement, they tend to fall back on the old methods. 
Finally, during the transition to modernity, the practice of the rule of law cannot escape the constraints of cultural traditions such as "no litigation" and the lack of a clear distinction between what is private and what is public. These traditions shape daily behavior: people may think it shameful, even uncivilized, to go to court, while some, especially in developed cities, ignore government regulations or hand purely private affairs over to the community-level government. IV. THE ROUTE TO PROMOTING THE POPULARIZATION OF LEGAL KNOWLEDGE AT THE COMMUNITY LEVEL IN THE NEW ERA The rule-of-law mode of community-level public governance is obviously very important; some scholars regard it as one of the strategic measures in building a society ruled by law. With the comprehensive advancement of the rule of law, community-level governance by law has new opportunities. The basic methods of advancing the rule of law lie in publicity, education and the changing of viewpoints. Mr. Fei Xiaotong observed: "the establishment of the legal order cannot rest merely on formulating some legal provisions and establishing some courts; the most important thing is to see how people use them." Beyond that, reform of the social structure and of ideas is needed. Apart from publicity and education (a long-term project), we should also speed up the process of the rule of law by strengthening party building at the community level, promoting the coexistence of hard law and soft law, and shifting the emphasis of law enforcement to the community level. A. Strengthen party building at the community level and optimize the political environment for the rule of law The defining feature of community-level public governance is the leadership of the Communist Party of China. To promote the rule of law in community-level public governance, the first step is to strengthen the guidance of party building at the community level so as to optimize the political environment for practicing the rule of law there. As pointed out above, the problem of weak legal authority arises mainly because local governments and officials do not pay enough attention to the law. Strengthening the guidance of party building should therefore focus on making Party members at the community level take the lead in respecting, learning about, observing and applying the law, strengthening their belief in law and establishing the principle of doing everything according to law. The report of the 19th CPC National Congress clearly states: "Every Party organization and every Party member must take the lead on respecting, learning about, observing, and applying the law. No organization or individual has the power to overstep the Constitution or the law; and no one is allowed in any way to override the law with his or her own orders, place his or her authority above the law, violate the law for personal gain, or abuse the law." Only when Party organizations and Party members respect the law can the problem of weak legal authority be solved thoroughly; otherwise the law will continue to be ignored at the community level. 
Specifically, against the background of exercising full and rigorous Party self-governance, emphasis should be placed on improving Party members' awareness of and ability to reconcile social contradictions and optimize the public governance structure through the rule of law, and their awareness of and ability to make public decisions, negotiate with the people and serve the people through the rule of law. B. Promote the coexistence of hard law and soft law and increase the tolerance for flexibility in the rule of law Soft law is a concept defined against hard law. Luo Haocai notes that soft law has many features, such as diversity in its forming subjects and forms, a focus on self-restraint and incentives, an emphasis on openness and negotiation, and enforcement that relies mainly on self-restraint and public opinion. It can regulate behavior, save the costs of legislation and law enforcement, respect social labor and make the regulatory system more flexible. Given the complexity of community-level public governance, it is very hard to reconcile social contradictions and guarantee the governance system of "one core with many aspects" through hard law alone; soft law must also be adopted. Through the coexistence of hard law and soft law, we can solve the problem of the poor adaptability of hard law and strengthen the authority of law at the same time. Xi Jinping stresses: "in public governance, besides the relevant laws and regulations, we should also have citizens' behavior guidelines, village regulations, industry regulations and team charters. These bind organizations and their members and can also serve as grounds for dealing with social affairs." Therefore, in promoting community-level public governance, governments at every level should guide and support enterprises, social organizations and mass self-governing organizations in establishing soft regulations such as residents' self-governance charters. C. Shift the emphasis of law enforcement to the community level In the short term, the problem of weak enforcement capacity at the community level will persist. Apart from reducing the burden of enforcing hard law by establishing soft law, we can also shift the emphasis of enforcement to the community level. Shifting the emphasis of public governance downward is an assignment from the central leadership, and corresponding structures should be established. Tang Shoudong and Sun Ying have discussed working mechanisms for moving services downward (establishing legal service centers, legal support workstations, law hotlines and so on) and moving talent downward (recommending excellent legal professionals to work at the community level). Building on that discussion, this paper holds that the emphasis should be placed on shifting enforcement downward in order to increase enforcement capacity. Simply put, we should build a strong enforcement team covering city management, market supervision (including food and drug supervision) and environmental supervision. The difficulty lies in how to attract young people to join these teams and how to retain them; they should therefore be supported through both top-level design and local innovation, for example by enlarging the teams, providing more opportunities for promotion and raising incomes. |
/*
* Copyright 2016-2021 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License"). You may not use this file except in compliance with
* the License. A copy of the License is located at
*
* http://aws.amazon.com/apache2.0
*
* or in the "license" file accompanying this file. This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
* CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions
* and limitations under the License.
*/
package com.amazonaws.services.mediatailor.model;
import java.io.Serializable;
import javax.annotation.Generated;
import com.amazonaws.protocol.StructuredPojo;
import com.amazonaws.protocol.ProtocolMarshaller;
/**
* <p>
* Access configuration parameters.
* </p>
*
* @see <a href="http://docs.aws.amazon.com/goto/WebAPI/mediatailor-2018-04-23/AccessConfiguration" target="_top">AWS
* API Documentation</a>
*/
@Generated("com.amazonaws:aws-java-sdk-code-generator")
public class AccessConfiguration implements Serializable, Cloneable, StructuredPojo {
/**
* <p>
* The type of authentication used to access content from HttpConfiguration::BaseUrl on your source location.
* Accepted value: S3_SIGV4.
* </p>
* <p>
* S3_SIGV4 - AWS Signature Version 4 authentication for Amazon S3 hosted virtual-style access. If your source
* location base URL is an Amazon S3 bucket, MediaTailor can use AWS Signature Version 4 (SigV4) authentication to
* access the bucket where your source content is stored. Your MediaTailor source location baseURL must follow the
* S3 virtual hosted-style request URL format. For example, https://bucket-name.s3.Region.amazonaws.com/key-name.
* </p>
* <p>
* Before you can use S3_SIGV4, you must meet these requirements:
* </p>
* <p>
* • You must allow MediaTailor to access your S3 bucket by granting mediatailor.amazonaws.com principal access in
* IAM. For information about configuring access in IAM, see Access management in the IAM User Guide.
* </p>
* <p>
* • The mediatailor.amazonaws.com service principal must have permissions to read all top level manifests
* referenced by the VodSource packaging configurations.
* </p>
* <p>
* • The caller of the API must have s3:GetObject IAM permissions to read all top level manifests referenced by your
* MediaTailor VodSource packaging configurations.
* </p>
*/
private String accessType;
/**
* <p>
* AWS Secrets Manager access token configuration parameters.
* </p>
*/
private SecretsManagerAccessTokenConfiguration secretsManagerAccessTokenConfiguration;
/**
* <p>
* The type of authentication used to access content from HttpConfiguration::BaseUrl on your source location.
* Accepted value: S3_SIGV4.
* </p>
* <p>
* S3_SIGV4 - AWS Signature Version 4 authentication for Amazon S3 hosted virtual-style access. If your source
* location base URL is an Amazon S3 bucket, MediaTailor can use AWS Signature Version 4 (SigV4) authentication to
* access the bucket where your source content is stored. Your MediaTailor source location baseURL must follow the
* S3 virtual hosted-style request URL format. For example, https://bucket-name.s3.Region.amazonaws.com/key-name.
* </p>
* <p>
* Before you can use S3_SIGV4, you must meet these requirements:
* </p>
* <p>
* • You must allow MediaTailor to access your S3 bucket by granting mediatailor.amazonaws.com principal access in
* IAM. For information about configuring access in IAM, see Access management in the IAM User Guide.
* </p>
* <p>
* • The mediatailor.amazonaws.com service principal must have permissions to read all top level manifests
* referenced by the VodSource packaging configurations.
* </p>
* <p>
* • The caller of the API must have s3:GetObject IAM permissions to read all top level manifests referenced by your
* MediaTailor VodSource packaging configurations.
* </p>
*
* @param accessType
* The type of authentication used to access content from HttpConfiguration::BaseUrl on your source location.
* Accepted value: S3_SIGV4.</p>
* <p>
* S3_SIGV4 - AWS Signature Version 4 authentication for Amazon S3 hosted virtual-style access. If your
* source location base URL is an Amazon S3 bucket, MediaTailor can use AWS Signature Version 4 (SigV4)
* authentication to access the bucket where your source content is stored. Your MediaTailor source location
* baseURL must follow the S3 virtual hosted-style request URL format. For example,
* https://bucket-name.s3.Region.amazonaws.com/key-name.
* </p>
* <p>
* Before you can use S3_SIGV4, you must meet these requirements:
* </p>
* <p>
* • You must allow MediaTailor to access your S3 bucket by granting mediatailor.amazonaws.com principal
* access in IAM. For information about configuring access in IAM, see Access management in the IAM User
* Guide.
* </p>
* <p>
* • The mediatailor.amazonaws.com service principal must have permissions to read all top level manifests
* referenced by the VodSource packaging configurations.
* </p>
* <p>
* • The caller of the API must have s3:GetObject IAM permissions to read all top level manifests referenced
* by your MediaTailor VodSource packaging configurations.
* @see AccessType
*/
public void setAccessType(String accessType) {
this.accessType = accessType;
}
/**
* <p>
* The type of authentication used to access content from HttpConfiguration::BaseUrl on your source location.
* Accepted value: S3_SIGV4.
* </p>
* <p>
* S3_SIGV4 - AWS Signature Version 4 authentication for Amazon S3 hosted virtual-style access. If your source
* location base URL is an Amazon S3 bucket, MediaTailor can use AWS Signature Version 4 (SigV4) authentication to
* access the bucket where your source content is stored. Your MediaTailor source location baseURL must follow the
* S3 virtual hosted-style request URL format. For example, https://bucket-name.s3.Region.amazonaws.com/key-name.
* </p>
* <p>
* Before you can use S3_SIGV4, you must meet these requirements:
* </p>
* <p>
* • You must allow MediaTailor to access your S3 bucket by granting mediatailor.amazonaws.com principal access in
* IAM. For information about configuring access in IAM, see Access management in the IAM User Guide.
* </p>
* <p>
* • The mediatailor.amazonaws.com service principal must have permissions to read all top level manifests
* referenced by the VodSource packaging configurations.
* </p>
* <p>
* • The caller of the API must have s3:GetObject IAM permissions to read all top level manifests referenced by your
* MediaTailor VodSource packaging configurations.
* </p>
*
* @return The type of authentication used to access content from HttpConfiguration::BaseUrl on your source
* location. Accepted value: S3_SIGV4.</p>
* <p>
* S3_SIGV4 - AWS Signature Version 4 authentication for Amazon S3 hosted virtual-style access. If your
* source location base URL is an Amazon S3 bucket, MediaTailor can use AWS Signature Version 4 (SigV4)
* authentication to access the bucket where your source content is stored. Your MediaTailor source location
* baseURL must follow the S3 virtual hosted-style request URL format. For example,
* https://bucket-name.s3.Region.amazonaws.com/key-name.
* </p>
* <p>
* Before you can use S3_SIGV4, you must meet these requirements:
* </p>
* <p>
* • You must allow MediaTailor to access your S3 bucket by granting mediatailor.amazonaws.com principal
* access in IAM. For information about configuring access in IAM, see Access management in the IAM User
* Guide.
* </p>
* <p>
* • The mediatailor.amazonaws.com service principal must have permissions to read all top level manifests
* referenced by the VodSource packaging configurations.
* </p>
* <p>
* • The caller of the API must have s3:GetObject IAM permissions to read all top level manifests referenced
* by your MediaTailor VodSource packaging configurations.
* @see AccessType
*/
public String getAccessType() {
return this.accessType;
}
/**
* <p>
* The type of authentication used to access content from HttpConfiguration::BaseUrl on your source location.
* Accepted value: S3_SIGV4.
* </p>
* <p>
* S3_SIGV4 - AWS Signature Version 4 authentication for Amazon S3 hosted virtual-style access. If your source
* location base URL is an Amazon S3 bucket, MediaTailor can use AWS Signature Version 4 (SigV4) authentication to
* access the bucket where your source content is stored. Your MediaTailor source location baseURL must follow the
* S3 virtual hosted-style request URL format. For example, https://bucket-name.s3.Region.amazonaws.com/key-name.
* </p>
* <p>
* Before you can use S3_SIGV4, you must meet these requirements:
* </p>
* <p>
* • You must allow MediaTailor to access your S3 bucket by granting mediatailor.amazonaws.com principal access in
* IAM. For information about configuring access in IAM, see Access management in the IAM User Guide.
* </p>
* <p>
* • The mediatailor.amazonaws.com service principal must have permissions to read all top level manifests
* referenced by the VodSource packaging configurations.
* </p>
* <p>
* • The caller of the API must have s3:GetObject IAM permissions to read all top level manifests referenced by your
* MediaTailor VodSource packaging configurations.
* </p>
*
* @param accessType
* The type of authentication used to access content from HttpConfiguration::BaseUrl on your source location.
* Accepted value: S3_SIGV4.</p>
* <p>
* S3_SIGV4 - AWS Signature Version 4 authentication for Amazon S3 hosted virtual-style access. If your
* source location base URL is an Amazon S3 bucket, MediaTailor can use AWS Signature Version 4 (SigV4)
* authentication to access the bucket where your source content is stored. Your MediaTailor source location
* baseURL must follow the S3 virtual hosted-style request URL format. For example,
* https://bucket-name.s3.Region.amazonaws.com/key-name.
* </p>
* <p>
* Before you can use S3_SIGV4, you must meet these requirements:
* </p>
* <p>
* • You must allow MediaTailor to access your S3 bucket by granting mediatailor.amazonaws.com principal
* access in IAM. For information about configuring access in IAM, see Access management in the IAM User
* Guide.
* </p>
* <p>
* • The mediatailor.amazonaws.com service principal must have permissions to read all top level manifests
* referenced by the VodSource packaging configurations.
* </p>
* <p>
* • The caller of the API must have s3:GetObject IAM permissions to read all top level manifests referenced
* by your MediaTailor VodSource packaging configurations.
* @return Returns a reference to this object so that method calls can be chained together.
* @see AccessType
*/
public AccessConfiguration withAccessType(String accessType) {
setAccessType(accessType);
return this;
}
/**
* <p>
* The type of authentication used to access content from HttpConfiguration::BaseUrl on your source location.
* Accepted value: S3_SIGV4.
* </p>
* <p>
* S3_SIGV4 - AWS Signature Version 4 authentication for Amazon S3 hosted virtual-style access. If your source
* location base URL is an Amazon S3 bucket, MediaTailor can use AWS Signature Version 4 (SigV4) authentication to
* access the bucket where your source content is stored. Your MediaTailor source location baseURL must follow the
* S3 virtual hosted-style request URL format. For example, https://bucket-name.s3.Region.amazonaws.com/key-name.
* </p>
* <p>
* Before you can use S3_SIGV4, you must meet these requirements:
* </p>
* <p>
* • You must allow MediaTailor to access your S3 bucket by granting mediatailor.amazonaws.com principal access in
* IAM. For information about configuring access in IAM, see Access management in the IAM User Guide.
* </p>
* <p>
* • The mediatailor.amazonaws.com service principal must have permissions to read all top level manifests
* referenced by the VodSource packaging configurations.
* </p>
* <p>
* • The caller of the API must have s3:GetObject IAM permissions to read all top level manifests referenced by your
* MediaTailor VodSource packaging configurations.
* </p>
*
* @param accessType
* The type of authentication used to access content from HttpConfiguration::BaseUrl on your source location.
* Accepted value: S3_SIGV4.</p>
* <p>
* S3_SIGV4 - AWS Signature Version 4 authentication for Amazon S3 hosted virtual-style access. If your
* source location base URL is an Amazon S3 bucket, MediaTailor can use AWS Signature Version 4 (SigV4)
* authentication to access the bucket where your source content is stored. Your MediaTailor source location
* baseURL must follow the S3 virtual hosted-style request URL format. For example,
* https://bucket-name.s3.Region.amazonaws.com/key-name.
* </p>
* <p>
* Before you can use S3_SIGV4, you must meet these requirements:
* </p>
* <p>
* • You must allow MediaTailor to access your S3 bucket by granting mediatailor.amazonaws.com principal
* access in IAM. For information about configuring access in IAM, see Access management in the IAM User
* Guide.
* </p>
* <p>
* • The mediatailor.amazonaws.com service principal must have permissions to read all top level manifests
* referenced by the VodSource packaging configurations.
* </p>
* <p>
* • The caller of the API must have s3:GetObject IAM permissions to read all top level manifests referenced
* by your MediaTailor VodSource packaging configurations.
* @return Returns a reference to this object so that method calls can be chained together.
* @see AccessType
*/
public AccessConfiguration withAccessType(AccessType accessType) {
this.accessType = accessType.toString();
return this;
}
/**
* <p>
* AWS Secrets Manager access token configuration parameters.
* </p>
*
* @param secretsManagerAccessTokenConfiguration
* AWS Secrets Manager access token configuration parameters.
*/
public void setSecretsManagerAccessTokenConfiguration(SecretsManagerAccessTokenConfiguration secretsManagerAccessTokenConfiguration) {
this.secretsManagerAccessTokenConfiguration = secretsManagerAccessTokenConfiguration;
}
/**
* <p>
* AWS Secrets Manager access token configuration parameters.
* </p>
*
* @return AWS Secrets Manager access token configuration parameters.
*/
public SecretsManagerAccessTokenConfiguration getSecretsManagerAccessTokenConfiguration() {
return this.secretsManagerAccessTokenConfiguration;
}
/**
* <p>
* AWS Secrets Manager access token configuration parameters.
* </p>
*
* @param secretsManagerAccessTokenConfiguration
* AWS Secrets Manager access token configuration parameters.
* @return Returns a reference to this object so that method calls can be chained together.
*/
public AccessConfiguration withSecretsManagerAccessTokenConfiguration(SecretsManagerAccessTokenConfiguration secretsManagerAccessTokenConfiguration) {
setSecretsManagerAccessTokenConfiguration(secretsManagerAccessTokenConfiguration);
return this;
}
/**
* Returns a string representation of this object. This is useful for testing and debugging. Sensitive data will be
* redacted from this string using a placeholder value.
*
* @return A string representation of this object.
*
* @see java.lang.Object#toString()
*/
@Override
public String toString() {
StringBuilder sb = new StringBuilder();
sb.append("{");
if (getAccessType() != null)
sb.append("AccessType: ").append(getAccessType()).append(",");
if (getSecretsManagerAccessTokenConfiguration() != null)
sb.append("SecretsManagerAccessTokenConfiguration: ").append(getSecretsManagerAccessTokenConfiguration());
sb.append("}");
return sb.toString();
}
@Override
public boolean equals(Object obj) {
if (this == obj)
return true;
if (obj == null)
return false;
if (obj instanceof AccessConfiguration == false)
return false;
AccessConfiguration other = (AccessConfiguration) obj;
if (other.getAccessType() == null ^ this.getAccessType() == null)
return false;
if (other.getAccessType() != null && other.getAccessType().equals(this.getAccessType()) == false)
return false;
if (other.getSecretsManagerAccessTokenConfiguration() == null ^ this.getSecretsManagerAccessTokenConfiguration() == null)
return false;
if (other.getSecretsManagerAccessTokenConfiguration() != null
&& other.getSecretsManagerAccessTokenConfiguration().equals(this.getSecretsManagerAccessTokenConfiguration()) == false)
return false;
return true;
}
@Override
public int hashCode() {
final int prime = 31;
int hashCode = 1;
hashCode = prime * hashCode + ((getAccessType() == null) ? 0 : getAccessType().hashCode());
hashCode = prime * hashCode + ((getSecretsManagerAccessTokenConfiguration() == null) ? 0 : getSecretsManagerAccessTokenConfiguration().hashCode());
return hashCode;
}
@Override
public AccessConfiguration clone() {
try {
return (AccessConfiguration) super.clone();
} catch (CloneNotSupportedException e) {
throw new IllegalStateException("Got a CloneNotSupportedException from Object.clone() " + "even though we're Cloneable!", e);
}
}
@com.amazonaws.annotation.SdkInternalApi
@Override
public void marshall(ProtocolMarshaller protocolMarshaller) {
com.amazonaws.services.mediatailor.model.transform.AccessConfigurationMarshaller.getInstance().marshall(this, protocolMarshaller);
}
}
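// --- Usage sketch (illustrative; not part of the generated SDK file) ---
// A minimal example of building an AccessConfiguration for the two access
// modes documented above. It uses only types referenced in this file
// (AccessConfiguration, AccessType, SecretsManagerAccessTokenConfiguration);
// attaching the result to a source-location request is assumed to follow the
// SDK's usual builder pattern.
class AccessConfigurationExample {
    static AccessConfiguration s3SigV4() {
        // S3_SIGV4 is the accepted value documented above; MediaTailor then
        // signs requests to the S3 bucket behind HttpConfiguration::BaseUrl.
        return new AccessConfiguration()
                .withAccessType(AccessType.S3_SIGV4);
    }

    static AccessConfiguration withSecretsManagerToken(SecretsManagerAccessTokenConfiguration tokenConfig) {
        // Alternative: authenticate via an AWS Secrets Manager access token.
        return new AccessConfiguration()
                .withSecretsManagerAccessTokenConfiguration(tokenConfig);
    }
}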
|
Suzeanna and Matthew Brill of Macon were arrested in April on reckless conduct charges after an investigation by the Georgia Division of Family and Children Services determined the pair had allowed their son to smoke marijuana to treat his seizures.
A Special Olympics gymnastics coach has reportedly been charged with sexually abusing two special needs students during a sports trip where they were under the coach's care overnight.
A collective of those with first-hand experience of the prison industrial complex who actively work toward justice for those placed in a second-class citizenship caste system due to their incarceration. This has a wealth of information on it.
Advocates on both sides of the abortion debate said Tuesday that they are stunned police arrested a Georgia woman on murder charges after a hospital social worker told officers she terminated her pregnancy by taking abortion pills.
On May 15, 2014, Mohammad Abu Daher, 16, was fatally shot in the back by an Israeli soldier in the occupied West Bank city of Beitunia. One hour earlier, Israeli forces shot and killed Nadeem Nawara, 17, in the same spot. |
Finding an affordable home isn’t getting any easier for your clients.
In the fourth quarter of 2018, affordability worsened for the fourteenth consecutive quarter, National Bank says in a report released Thursday. The deterioration was the largest in a single quarter in more than a year—the result of rising mortgage rates and home prices.
In the quarter, the benchmark mortgage rate (five-year term) rose 20 basis points and seasonally adjusted home prices increased 0.9%. Financing costs were up for the sixth consecutive quarter, the longest streak of increases since 1999-2000.
The worst deteriorations in affordability were in Victoria, Toronto and Vancouver. The only markets showing an improvement were Calgary and Edmonton.
For example, in Vancouver, mortgage payments as a percent of pre-tax median income are now 101.5% for a representative home. Thus, homes are “even more out of reach for a median income family,” says the report. The deterioration of affordability in the city is less pronounced for condos.
In Victoria, the corresponding figure is about 86% of pre-tax median household income; in Toronto, about 76%; in Montreal, about 36%.
The time required to save for a down payment at a savings rate of 10% of pre-tax income is 340 months in Vancouver, 102 months in Toronto and 34 months in Montreal. Median prices for representative homes in these cities are, respectively, about $1.1 million, $847,000 and $341,000.
In contrast, Edmonton and Calgary experienced declines in time required to save—to 23 and 33 months, respectively. Corresponding median home prices are about $397,000 and $438,000, respectively.
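For readers who want to reproduce the months-to-save arithmetic, a rough sketch follows. The report's exact inputs aren't given here, so the pre-tax household income and the 20% down-payment share below are illustrative assumptions rather than National Bank's methodology.

public final class DownPaymentMath {
    // Months needed to save a down payment when a fixed share of pre-tax
    // income is set aside each month.
    static long monthsToSave(double homePrice, double downPaymentShare,
                             double annualPreTaxIncome, double savingsRate) {
        double target = homePrice * downPaymentShare;
        double savedPerMonth = annualPreTaxIncome * savingsRate / 12.0;
        return (long) Math.ceil(target / savedPerMonth);
    }

    public static void main(String[] args) {
        // Illustrative: a ~$1.1M Vancouver-style benchmark price, 20% down,
        // an assumed $80,000 pre-tax household income, saving 10% of income.
        // Prints 330, in the neighbourhood of the report's 340 months.
        System.out.println(monthsToSave(1_100_000, 0.20, 80_000, 0.10));
    }
}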
Countrywide, affordability generally worsened for condos and non-condos alike.
On the bright side, the report puts Canada’s home affordability in a global context, showing in a graph that Canadian homes are relatively affordable compared to those in cities such as London, New York and Paris. The most expensive city to buy a 645-square-foot, urban home? Hong Kong.
For methodology and details according to city, see the full National Bank report.
Also today, National Bank announced its partnership with a mortgage broker in Quebec.
The bank has partnered with M3 Group, which operates mostly under the banners of Multi-Prêts Mortgages, Mortgage Intelligence and Verico, the bank said Wednesday in a release. The partnership will give M3 Group’s brokers the option of offering the bank’s products to their clients in the province, starting this spring.
The partnership complements the bank’s diversified mortgage distribution model, made up of about 430 branches and a team of mobile specialists, says the release. White label distribution through a partnership model will continue unchanged, it says. |
package org.tat.fni.api.domain.services;
import java.util.List;
import java.util.Optional;
import org.apache.commons.lang3.StringUtils;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;
import org.tat.fni.api.domain.GradeInfo;
import org.tat.fni.api.domain.repository.GradeInfoRepository;
import org.tat.fni.api.exception.DAOException;
import org.tat.fni.api.exception.ErrorCode;
@Service
public class GradeInfoService {
@Autowired
private GradeInfoRepository gradeInfoRepository;
public List<GradeInfo> findAll() {
return gradeInfoRepository.findAll();
}
public List<Object[]> findAllNativeObject() {
return gradeInfoRepository.findAllNativeObject();
}
public List<Object> findAllColumnName() {
return gradeInfoRepository.findAllColumnName();
}
@Transactional
public Optional<GradeInfo> findById(String id) throws DAOException {
    if (StringUtils.isBlank(id)) {
        return Optional.empty();
    }
    // Look the entity up once rather than querying the repository twice.
    Optional<GradeInfo> gradeInfo = gradeInfoRepository.findById(id);
    if (!gradeInfo.isPresent()) {
        throw new DAOException(ErrorCode.SYSTEM_ERROR_RESOURCE_NOT_FOUND, id + " not found in GradeInfo");
    }
    return gradeInfo;
}
}
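// --- Usage sketch (hypothetical caller; not part of this repository) ---
// Resolves a GradeInfo by id, treating a blank id (which findById reports as
// an empty Optional) the same way as an unknown id at the call site. Uses
// only types already imported above.
class GradeInfoLookupExample {
    static GradeInfo requireById(GradeInfoService service, String id) throws DAOException {
        return service.findById(id)
                .orElseThrow(() -> new DAOException(ErrorCode.SYSTEM_ERROR_RESOURCE_NOT_FOUND,
                        id + " not found in GradeInfo"));
    }
}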
|
Wal-Mart and other retailers say they are boosting holiday hiring this year as they prepare for what they hope will be a rosier shopping season.
The ramp-up in temporary hiring could hit more than 800,000 workers in the last months of the year, the highest level since 1999, according to career counseling firm Challenger, Gray & Christmas Inc.
"The last two years saw hiring return to pre-recession levels," said John Challenger, chief executive of the company. "This year, we could see hiring return to levels not seen since the height of the dot-com boom."
The planned hires indicate retailers are expecting a decent but not blockbuster holiday season, a crucial period in which they can bring in as much as 40% of their total annual sales, observers said. Retail sales are also considered a bellwether of consumer spending, which makes up two-thirds of U.S. economic activity.
Wal-Mart Stores Inc. announced plans Thursday to hire 60,000 seasonal workers, including up to 5,000 in Southern California. That's nearly 10% more than the retail giant brought on board last year. The chain has previously said it would keep all its cash registers open at more than 3,800 stores during peak shopping hours from Thanksgiving through Christmas.
Crystal Garcia, a Wal-Mart store manager in West Covina, said many customers had already used the chain's holiday layaway option, which was rolled out Sept. 12. It is the fourth straight year the chain is offering such a service.
"It's actually doing really great," Garcia said. "We have been getting a lot of high-ticket items like PS4s and Xboxes put on layaway, as well as a lot of toys."
Other companies are also increasing their holiday staff.
Kohl's said it would hire more than 67,000 employees to staff its stores, up from about 50,000 last year. FedEx is upping its holiday hiring 25% to 50,000, while UPS said it would add 95,000 seasonal workers, nearly double from last year.
Analysts say falling gas prices and an improved job market have brightened consumers' financial outlook. But many said stagnant incomes will keep people from spending lavishly for presents this year.
Nearly 70% of Americans say they will spend the same as last year during the holidays, and 16% said they planned to spend less, according to Bob Shullman, chief executive of the Shullman Research Center. Only 13% planned to dig deeper into their wallets and splurge more compared with last year.
About half of those surveyed plan to spend less than $500. An additional 19% said they will spend between $500 and $750.
"If we can do as well as we did last year, it will be a good thing," Shullman said. "There is quite a bit of anxiety in the world."
Slow sales in the first half of 2014 prompted the National Retail Federation to drop its annual sales forecast to 3.6%, down from a previous estimate of 4.1%. The trade group hasn't issued its holiday forecast.
Last year, retail sales in November and December rose 3.8% from a year earlier as consumers spent $601.8 billion at stores and websites, the group said.
Observers said delivery companies such as FedEx may not actually be expecting big growth in business this year but want to avoid a repeat of last year's staffing problems.
Both UPS and FedEx were heavily criticized in 2013 after failing to deliver some packages before Christmas amid an Internet shopping surge. UPS and Amazon.com ended up offering refunds to customers who did not get their orders in time.
"People are less likely to trudge to brick-and-mortar stores. They are shopping online," Shullman said. "FedEx and UPS just didn't have enough people last year."
Customers who choose to shop from their laptops and smartphones will help drive down the need for workers at stores and malls during the holidays, analysts said.
Challenger said Target slashed its hiring 20% last year "due in part to more online shopping." Target hasn't revealed its 2014 holiday hiring plans yet.
But such cuts are balanced out by e-commerce heavyweights like Amazon that bring on extra workers during the winter months, Challenger said. |
from sys import stdin

nmbr = lambda: int(stdin.readline())
lst = lambda: list(map(int, stdin.readline().split()))

for _ in range(nmbr()):
    n = nmbr()
    a = lst()
    # Split the 1-based indices of the 2n values by the parity of the value:
    # two odds or two evens always sum to an even number.
    odd = []
    even = []
    for i in range(2 * n):
        if a[i] & 1:
            odd += [i + 1]
        else:
            even += [i + 1]
    # Print n - 1 disjoint index pairs whose element sums are even, pairing
    # within each parity group and stopping once enough pairs are emitted.
    c = 0
    for i in range(1, len(odd), 2):
        if c == n - 1:
            break
        print(odd[i], odd[i - 1])
        c += 1
    for i in range(1, len(even), 2):
        if c == n - 1:
            break
        print(even[i], even[i - 1])
        c += 1
|
The utilization of true potato seed (TPS) as an alternative method of potato production
Potato is grown as a rainfed and irrigated crop in the cooler highlands and mid-altitude regions of the tropics. Its productivity is very low, mainly because healthy seed potatoes are not available to farmers in sufficient quantity. The utilization of true potato seed (TPS) can be considered an alternative method of producing potatoes, thereby alleviating the problems associated with seed potatoes. The first aim of the research reported in this thesis was to study problems related to poor TPS germination. The second aim was to determine the best growing season for raising TPS for seedling tuber production. The third aim was to develop appropriate seed-bed management practices for maximum seedling tuber production. Increasing nitrogen fertilization of the mother plant enhanced the rate of germination of the TPS of the open-pollinated progeny (AL 624) and reduced that of the hybrid progeny (AL 624 x CIP 378371.5), without affecting the final germination percentages (FGP) of either progeny. TPS dormancy was effectively broken by soaking in 1000-1500 ppm GA3 for 8 hours. Treating TPS with water can also break dormancy and maintain about 70% germination; the latter may be considered a cheap, readily available and practical alternative method of breaking TPS dormancy under farmers' conditions. The field and nursery experiments indicated that seedling tuber yields are very low during the rainy season because of late blight (Phytophthora infestans) pressure, shorter sunshine hours and a shorter growing season than in the dry season. Based on the nursery results, in the central highlands of Ethiopia, dry-season production of seedling tubers in a seed-bed substrate mix of 50% forest soil and 50% manure, with 40-80 g N per m2 of bed, was found suitable for producing a maximum number of seedling tubers by direct sowing. Manipulation of seedling population in a seed bed is one method of producing a maximum number of usable seedling tubers. The results revealed that a plant density of 100 plants per m2 of seed bed was optimal for producing up to 1200 seedling tubers, or a total tuber weight of 29 kg per m2, without hampering management operations such as weeding, fertilization and hilling up. The research results show that there is considerable potential to alleviate the problems of seed potatoes by improving TPS germination quality, producing seedling tubers in seed beds and using them as a seed potato source for subsequent growing seasons. |
A new series of Blackadder could be on its way, according to Baldrick actor Tony Robinson.
Richard Curtis and Ben Elton’s classic historical sitcom originally ran for four series between 1983 and 1989, though the cast reunited in 1999 for a one-off special called Blackadder: Back And Forth that was first shown at the Millennium Dome.
However, Robinson also revealed that it could be difficult to tempt back original star Hugh Laurie following his huge success on US TV with House. “The only problem is Hugh’s fee. He’s a huge star now – or so he’d like to think,” Robinson added.
During a 2008 interview, Richard Curtis revealed that back in the day, he and Ben Elton had agreed to set a possible fifth series of Blackadder in the swinging ’60s. “[Blackadder] was going to kill Kennedy and it would have been a mistake. We’d decided on the ’60s because it was such a rich period. It was a time of dodgy entrepreneurs with their fingers in loads of pies,” Curtis said at the time. |
package org.dbsyncer.biz.metric.impl;
import org.dbsyncer.biz.metric.AbstractMetricDetailFormatter;
import org.dbsyncer.biz.vo.MetricResponseVo;
import org.dbsyncer.monitor.model.Sample;
public final class CpuMetricDetailFormatter extends AbstractMetricDetailFormatter {
@Override
public void apply(MetricResponseVo vo) {
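        // The first measurement sample carries CPU usage as a fraction of 1.0;
        // render it as a percentage with two decimals (e.g. 0.4275 -> "42.75%").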
Sample sample = vo.getMeasurements().get(0);
Double value = (Double) sample.getValue();
vo.setDetail(String.format("%.2f", (value * 100)) + "%");
}
} |
Invasion of Vero cells and induction of apoptosis in macrophages by pathogenic Leptospira interrogans are correlated with virulence Interactions of virulent Leptospira interrogans serovar icterohaemorrhagiae strain Verdun with Vero cells (African green monkey kidney fibroblasts) and a monocyte-macrophage-like cell line (J774A.1) were assayed by a double-fluorescence immunolabelling method. Infectivity profiles were investigated according to (i) the duration of contact between leptospires and eukaryotic cells and (ii) the number of in vitro passages after primary isolation from lethally infected guinea pigs. Comparative experiments were conducted with the corresponding high-passage avirulent variant and the saprophytic leptospire Leptospira biflexa Patoc I. In Vero cells, virulent leptospires were quickly internalized from 20 min postinfection, whereas avirulent and saprophytic strains remained extracellularly located. In addition, the virulent strain demonstrated an ability to actively invade the monocyte-macrophage-like J774A.1 cells during the early stages of contact and to induce programmed cell death, as shown by the detection of oligonucleosomes in a quantitative sandwich enzyme immunoassay. In both cellular systems, subsequent in vitro subcultures demonstrated a progressive decrease of the invasiveness, pointing out the necessity of using primocultures of Leptospira for virulence studies. Invasiveness of virulent leptospires was significantly inhibited with monodansylcadaverine, indicating that internalization was dependent on receptor-mediated endocytosis. Invasion of epithelial cells and induction of apoptosis in macrophages may be related to the pathogenicity of Leptospira, and both could contribute to its ability to survive in the host and to escape from the immune response. |
Endodontic infection control routines among general dental practitioners in Sweden and Norway: a questionnaire survey Abstract Objective: The purpose of this study was to investigate endodontic infection prevention and control routines among general dental practitioners in Sweden and Norway. Materials and methods: A questionnaire was sent by email to 1384 general dental practitioners employed in Sweden and Norway. The participants were asked about different aspects of infection prevention and control during endodontic treatment: use of rubber dam, sealing of rubber dam, antibacterial solutions, and use of hand disinfectant and gloves. Results: The response rate was 61.4% (n = 819). 96.9% reported routinely using rubber dam during endodontic treatment, and 88.3% reported always, or sometimes, sealing the area between rubber dam and tooth. Most disinfected the endodontic operative field, but the antibacterial solutions used varied. 11.9% did not use gloves at all during treatment, and 10.5% did not use hand disinfectant during treatment. Conclusions: Most of the general dental practitioners took measures to establish and maintain asepsis during endodontic treatment, which suggests an awareness of the importance of endodontic infection prevention and control. However, the results were self-reported, and there may be a gap between claimed and actual behaviour. Further studies using observational methods are needed to assess how infection control routines are performed in everyday clinical practice. |
Smartphone addiction: psychological and social factors predict the use and abuse of a social mobile application ABSTRACT As smartphones have been revealed as hosts for addictive behaviors, many studies have focused on the relationship between psychological factors and generalized smartphone use without considering the wide range of activities involved. By collecting data from a large sample, this study investigated the relative contribution of psychological and social factors in predicting different usage levels of the social mobile application (app) LINE, which is very popular in Asia. The results indicated that subjective norms and social identity were positive predictors for individuals in the addictive use cluster, while self-esteem and social skills were negative predictors. The predictive power of self-esteem and life satisfaction was positive for heavy use of LINE with no serious problems, while that of subjective norms was negative. No single factor was associated with ordinary use. These results highlight that people deficient in self-esteem and social skills, but eager to obtain others' approval and a sense of belonging, are more inclined to develop an addiction to this app. Those who use LINE heavily with no serious problems are highly satisfied with life; they do not mind acting in accordance with others' expectations, because they have high self-esteem. Ordinary users are not influenced psychologically or socially. This research not only underscores the importance of considering a specific form of application to understand smartphone addiction, but also advances knowledge of the social contributors to this issue. |
/*
* Copyright (c) 2015-2019, Renesas Electronics Corporation
* All rights reserved.
*
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <stdint.h>
#include <common/debug.h>
#include "../qos_common.h"
#include "../qos_reg.h"
#include "qos_init_v3m.h"
#define RCAR_QOS_VERSION "rev.0.01"
#include "qos_init_v3m_mstat.h"
struct rcar_gen3_dbsc_qos_settings v3m_qos[] = {
/* BUFCAM settings */
{ DBSC_DBCAM0CNF1, 0x00044218 },
{ DBSC_DBCAM0CNF2, 0x000000F4 },
{ DBSC_DBSCHCNT0, 0x080F003F },
{ DBSC_DBSCHCNT1, 0x00001010 },
{ DBSC_DBSCHSZ0, 0x00000001 },
{ DBSC_DBSCHRW0, 0x22421111 },
{ DBSC_DBSCHRW1, 0x00180034 },
{ DBSC_SCFCTST0, 0x180B1708 },
{ DBSC_SCFCTST1, 0x0808070C },
{ DBSC_SCFCTST2, 0x012F1123 },
/* QoS Settings */
{ DBSC_DBSCHQOS00, 0x0000F000 },
{ DBSC_DBSCHQOS01, 0x0000E000 },
{ DBSC_DBSCHQOS02, 0x00007000 },
{ DBSC_DBSCHQOS03, 0x00000000 },
{ DBSC_DBSCHQOS40, 0x0000F000 },
{ DBSC_DBSCHQOS41, 0x0000EFFF },
{ DBSC_DBSCHQOS42, 0x0000B000 },
{ DBSC_DBSCHQOS43, 0x00000000 },
{ DBSC_DBSCHQOS90, 0x0000F000 },
{ DBSC_DBSCHQOS91, 0x0000EFFF },
{ DBSC_DBSCHQOS92, 0x0000D000 },
{ DBSC_DBSCHQOS93, 0x00000000 },
{ DBSC_DBSCHQOS130, 0x0000F000 },
{ DBSC_DBSCHQOS131, 0x0000EFFF },
{ DBSC_DBSCHQOS132, 0x0000E800 },
{ DBSC_DBSCHQOS133, 0x00007000 },
{ DBSC_DBSCHQOS140, 0x0000F000 },
{ DBSC_DBSCHQOS141, 0x0000EFFF },
{ DBSC_DBSCHQOS142, 0x0000E800 },
{ DBSC_DBSCHQOS143, 0x0000B000 },
{ DBSC_DBSCHQOS150, 0x000007D0 },
{ DBSC_DBSCHQOS151, 0x000007CF },
{ DBSC_DBSCHQOS152, 0x000005D0 },
{ DBSC_DBSCHQOS153, 0x000003D0 },
};
void qos_init_v3m(void)
{
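/* Note: the early return below appears to intentionally skip QoS
 * initialization on V3M; the settings code after it is retained for
 * reference. */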
return;
rcar_qos_dbsc_setting(v3m_qos, ARRAY_SIZE(v3m_qos), false);
#if !(RCAR_QOS_TYPE == RCAR_QOS_NONE)
#if RCAR_QOS_TYPE == RCAR_QOS_TYPE_DEFAULT
NOTICE("BL2: QoS is default setting(%s)\n", RCAR_QOS_VERSION);
#endif
/* Resource Alloc setting */
io_write_32(QOSCTRL_RAS, 0x00000020U);
io_write_32(QOSCTRL_FIXTH, 0x000F0005U);
io_write_32(QOSCTRL_REGGD, 0x00000004U);
io_write_64(QOSCTRL_DANN, 0x0202020104040200U);
io_write_32(QOSCTRL_DANT, 0x00201008U);
io_write_32(QOSCTRL_EC, 0x00080001U); /* need for H3 ES1 */
io_write_64(QOSCTRL_EMS, 0x0000000000000000U);
io_write_32(QOSCTRL_INSFC, 0x63C20001U);
io_write_32(QOSCTRL_BERR, 0x00000000U);
/* QOSBW setting */
io_write_32(QOSCTRL_SL_INIT, 0x0305007DU);
io_write_32(QOSCTRL_REF_ARS, 0x00330000U);
/* QOSBW SRAM setting */
uint32_t i;
for (i = 0U; i < ARRAY_SIZE(mstat_fix); i++) {
io_write_64(QOSBW_FIX_QOS_BANK0 + i * 8, mstat_fix[i]);
io_write_64(QOSBW_FIX_QOS_BANK1 + i * 8, mstat_fix[i]);
}
for (i = 0U; i < ARRAY_SIZE(mstat_be); i++) {
io_write_64(QOSBW_BE_QOS_BANK0 + i * 8, mstat_be[i]);
io_write_64(QOSBW_BE_QOS_BANK1 + i * 8, mstat_be[i]);
}
/* AXI-IF arbitration setting */
io_write_32(DBSC_AXARB, 0x18010000U);
/* Resource Alloc start */
io_write_32(QOSCTRL_RAEN, 0x00000001U);
/* QOSBW start */
io_write_32(QOSCTRL_STATQC, 0x00000001U);
#else
NOTICE("BL2: QoS is None\n");
#endif /* !(RCAR_QOS_TYPE == RCAR_QOS_NONE) */
}
|
#pragma once
#include "bin_saver.h"
namespace NMemIoInternals {
class TMemoryStream: public IBinaryStream {
TVector<char>& Data;
ui64 Pos;
public:
TMemoryStream(TVector<char>* data, ui64 pos = 0)
: Data(*data)
, Pos(pos)
{
}
~TMemoryStream() override {
} // keep gcc happy
bool IsValid() const override {
return true;
}
bool IsFailed() const override {
return false;
}
private:
int WriteImpl(const void* userBuffer, int size) override {
if (size == 0)
return 0;
Y_ASSERT(size > 0);
if (Pos + size > Data.size())
Data.yresize(Pos + size);
memcpy(&Data[Pos], userBuffer, size);
Pos += size;
return size;
}
int ReadImpl(void* userBuffer, int size) override {
if (size == 0)
return 0;
Y_ASSERT(size > 0);
int res = Min(Data.size() - Pos, (ui64)size);
if (res)
memcpy(userBuffer, &Data[Pos], res);
Pos += res;
return res;
}
};
template <class T>
inline void SerializeMem(bool bRead, TVector<char>* data, T& c, bool stableOutput = false) {
if (IBinSaver::HasNonTrivialSerializer<T>(0u)) {
TMemoryStream f(data);
{
IBinSaver bs(f, bRead, stableOutput);
bs.Add(1, &c);
}
} else {
if (bRead) {
Y_ASSERT(data->size() == sizeof(T));
c = *reinterpret_cast<T*>(&(*data)[0]);
} else {
data->yresize(sizeof(T));
*reinterpret_cast<T*>(&(*data)[0]) = c;
}
}
}
////////////////////////////////////////////////////////////////////////////
class THugeMemoryStream: public IBinaryStream {
TVector<TVector<char>>& Data;
i64 Block, Pos;
bool ShrinkOnRead;
enum {
MAX_BLOCK_SIZE = 1024 * 1024 // Aligned with cache size
};
public:
THugeMemoryStream(TVector<TVector<char>>* data, bool shrinkOnRead = false)
: Data(*data)
, Block(0)
, Pos(0)
, ShrinkOnRead(shrinkOnRead)
{
Y_ASSERT(!data->empty());
}
~THugeMemoryStream() override {
} // keep gcc happy
bool IsValid() const override {
return true;
}
bool IsFailed() const override {
return false;
}
private:
int WriteImpl(const void* userDataArg, int sizeArg) override {
if (sizeArg == 0)
return 0;
const char* userData = (const char*)userDataArg;
i64 size = sizeArg;
i64 newSize = Pos + size;
if (newSize > Data[Block].ysize()) {
while (newSize > MAX_BLOCK_SIZE) {
int maxWrite = MAX_BLOCK_SIZE - Pos;
Data[Block].yresize(MAX_BLOCK_SIZE);
if (maxWrite) {
memcpy(&Data[Block][Pos], userData, maxWrite);
userData += maxWrite;
size -= maxWrite;
}
++Block;
Pos = 0;
Data.resize(Block + 1);
newSize = Pos + size;
}
Data[Block].yresize(newSize);
}
if (size) {
memcpy(&Data[Block][Pos], userData, size);
}
Pos += size;
return sizeArg;
}
int ReadImpl(void* userDataArg, int sizeArg) override {
if (sizeArg == 0)
return 0;
char* userData = (char*)userDataArg;
i64 size = sizeArg;
i64 rv = 0;
while (size > 0) {
int curBlockSize = Data[Block].ysize();
int maxRead = 0;
if (Pos + size > curBlockSize) {
maxRead = curBlockSize - Pos;
if (maxRead) {
memcpy(userData, &Data[Block][Pos], maxRead);
userData += maxRead;
size -= maxRead;
rv += maxRead;
}
if (Block + 1 == Data.ysize()) {
memset(userData, 0, size);
return rv;
}
if (ShrinkOnRead) {
TVector<char>().swap(Data[Block]);
}
++Block;
Pos = 0;
} else {
memcpy(userData, &Data[Block][Pos], size);
Pos += size;
rv += size;
return rv;
}
}
return rv;
}
};
template <class T>
inline void SerializeMem(bool bRead, TVector<TVector<char>>* data, T& c, bool stableOutput = false) {
if (data->empty()) {
data->resize(1);
}
THugeMemoryStream f(data);
{
IBinSaver bs(f, bRead, stableOutput);
bs.Add(1, &c);
}
}
}
template <class T>
inline void SerializeMem(const TVector<char>& data, T& c) {
if (IBinSaver::HasNonTrivialSerializer<T>(0u)) {
TVector<char> tmp(data);
SerializeFromMem(&tmp, c);
} else {
Y_ASSERT(data.size() == sizeof(T));
c = *reinterpret_cast<const T*>(&data[0]);
}
}
template <class T, class D>
inline void SerializeToMem(D* data, T& c, bool stableOutput = false) {
NMemIoInternals::SerializeMem(false, data, c, stableOutput);
}
template <class T, class D>
inline void SerializeFromMem(D* data, T& c, bool stableOutput = false) {
NMemIoInternals::SerializeMem(true, data, c, stableOutput);
}
// Frees memory in (*data)[i] immediately upon its deserialization, thus keeping overall memory consumption low for data + object.
template <class T>
inline void SerializeFromMemShrinkInput(TVector<TVector<char>>* data, T& c) {
if (data->empty()) {
data->resize(1);
}
NMemIoInternals::THugeMemoryStream f(data, true);
{
IBinSaver bs(f, true, false);
bs.Add(1, &c);
}
data->resize(0);
data->shrink_to_fit();
}
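// Minimal usage sketch (illustrative; assumes a type that IBinSaver can
// process, e.g. one providing int operator&(IBinSaver&)):
//
//   TVector<char> buf;
//   TMyData src, dst;            // TMyData is a hypothetical serializable type
//   SerializeToMem(&buf, src);   // serialize src into buf
//   SerializeFromMem(&buf, dst); // deserialize buf back into dst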
|
Systematic mutagenesis reveals critical effector functions in the assembly and dueling of the H1-T6SS in Pseudomonas aeruginosa Pseudomonas aeruginosa is an important human pathogen that can cause severe wound and lung infections. It employs the type VI secretion system (H1-T6SS) as a molecular weapon to carry out a unique dueling response to deliver toxic effectors to neighboring sister cells or other microbes after sensing an external attack. However, the underlying mechanism for such dueling is not fully understood. Here, we examined the role of all H1-T6SS effectors and VgrG proteins in assembly and signal sensing by ectopic expression, combinatorial deletion and point mutations, and imaging analyses. Expression of effectors targeting the cell wall and membrane resulted in increased H1-T6SS assembly. Deletion of individual effector and vgrG genes had minor-to-moderate effects on H1-T6SS assembly and dueling activities. The dueling response was detectable in the P. aeruginosa mutant lacking all H1-T6SS effector activities. In addition, double deletions of vgrG1a with either vgrG1b or vgrG1c and double deletions of effector genes tse5 and tse6 severely reduced T6SS assembly and dueling activities, suggesting their critical role in T6SS assembly. Collectively, these data highlight the diverse roles of effectors in not only dictating antibacterial functions but also contributing differentially to the assembly of the complex H1-T6SS apparatus. |
// GameSystem.cpp
// Dominicus
#include "core/GameSystem.h"
#include <cstdio>
#include <SDL/SDL.h>
#include <sstream>
#include "math/MiscMath.h"
#include "platform/Platform.h"
extern Platform* platform;
GameSystem::StandardEntry GameSystem::getStandard(const char* key) {
if(standards.find(key) == standards.end()) {
std::stringstream logMessage;
logMessage << "Non-existent standards key requested: " << key << ".";
this->log(LOG_FATAL, logMessage.str().c_str());
}
return standards[key];
}
GameSystem::GameSystem() {
// set the build version string
std::stringstream versionStream;
versionStream <<
PROGRAM_IDENTIFIER << " " <<
PROGRAM_VERSION << " " <<
"(" << PROGRAM_BUILDSTRING << ") " <<
PROGRAM_ARCH_STR << " ";
const char* dateString = __DATE__;
std::string monthString = std::string(dateString).substr(0, 3);
int month = 0;
if(strcmp(monthString.c_str(), "Jan") == 0) month = 1;
else if(strcmp(monthString.c_str(), "Feb") == 0) month = 2;
else if(strcmp(monthString.c_str(), "Mar") == 0) month = 3;
else if(strcmp(monthString.c_str(), "Apr") == 0) month = 4;
else if(strcmp(monthString.c_str(), "May") == 0) month = 5;
else if(strcmp(monthString.c_str(), "Jun") == 0) month = 6;
else if(strcmp(monthString.c_str(), "Jul") == 0) month = 7;
else if(strcmp(monthString.c_str(), "Aug") == 0) month = 8;
else if(strcmp(monthString.c_str(), "Sep") == 0) month = 9;
else if(strcmp(monthString.c_str(), "Oct") == 0) month = 10;
else if(strcmp(monthString.c_str(), "Nov") == 0) month = 11;
else month = 12;
const int day = atoi(std::string(dateString).substr(4, 2).c_str());
const int year = atoi(std::string(dateString).substr(7, 4).c_str());
char fullDateString[11];
sprintf(fullDateString, "%04i-%02i-%02i", year, month, day);
buildDate = fullDateString;
versionStream << fullDateString;
versionString = versionStream.str();
// get the display resolution
SDL_VideoInfo* vidInfo = (SDL_VideoInfo*) SDL_GetVideoInfo();
if(vidInfo == NULL)
this->log(LOG_FATAL, "Could not obtain screen resolution from SDL.");
displayResolutionX = (unsigned short int) vidInfo->current_w;
displayResolutionY = (unsigned short int) vidInfo->current_h;
// window and element scaling
setStandard("displayWindowedResolutions", "800x600,1024x768,1152x864,1280x960,1600x1200,2560x1920,3200x2400,4096x3072", "Supported resolutions for windowed mode.");
setStandard("displayWindowedMaxPortion", 0.9f, "Maximum portion of vertical screen resolution to take up in windowed mode.");
std::vector< std::pair<unsigned int, unsigned int> > allowedResolutions = getAllowedWindowResolutions();
std::stringstream allowedResolutionsText;
for(size_t i = 0; i < allowedResolutions.size(); ++i)
allowedResolutionsText << (i > 0 ? "," : "") << allowedResolutions[i].first << "x" << allowedResolutions[i].second;
setStandard("displayWindowedResolutions", allowedResolutionsText.str().c_str());
std::stringstream maximumResolutionText;
maximumResolutionText << allowedResolutions[allowedResolutions.size() - 1].first << "x" <<
allowedResolutions[allowedResolutions.size() - 1].second;
setStandard("displayWindowedResolution", maximumResolutionText.str().c_str());
setStandard("hudBaseElementMargin", 20.0f, "Base value for space between HUD elements in pixels (must be even number).");
setStandard("hudBaseContainerPadding", 10.0f, "Base value for space between HUD elements' external border and content in pixels.");
setStandard("hudBaseButtonPadding", 10.0f, "Base value for space between HUD buttons' external border and content in pixels (vertical padding is 1/2 of this due to automatic padding in font rendering).");
setStandard("hudBaseBigButtonPadding", 12.0f, "Base value for space between large HUD buttons' external border and content in pixels (vertical padding is 1/2 of this due to automatic padding in font rendering).");
setStandard("hudBaseGaugePadding", 20.0f, "Base value for gauge panel padding in pixels.");
setStandard("fontBaseSizeSmall", 12.0f, "Font size for small display in points (1/72 inch).");
setStandard("fontBaseSizeMedium", 18.0f, "Font size for standard display in points (1/72 inch).");
setStandard("fontBaseSizeLarge", 26.0f, "Font size for enlarged display in points (1/72 inch).");
setStandard("fontBaseSizeSuper", 36.0f, "Font size for title display in points (1/72 inch).");
setStandard("logoBaseHeight", 40.0f, "Base logo height in pixels");
setStandard("gaugeImagesBaseHeight", 40.0f, "Base height of gauge images.");
// state standards
setStandard("stateUpdateFrequency", 120.0f, "Number of times per second the core state updates.");
setStandard("stateShipOrbitMargin", 500.0f, "Radius of margin between maximum edge of island and first ship orbit.");
setStandard("stateShipMargin", 150.0f, "Lateral distance between ships orbiting island.");
setStandard("stateShipSpeed", 120.0f, "Ship speed in world units per second.");
setStandard("stateShipEntryTime", 20.0f, "Time it takes between ship introduction and beginning of orbit.");
setStandard("stateShipAddIntervalEasy", 150.0f, "Time for easy level between ships being added to the world.");
setStandard("stateShipAddIntervalMedium", 90.0f, "Time for medium level between ships being added to the world.");
setStandard("stateShipAddIntervalHard", 75.0f, "Time for hard level between ships being added to the world.");
setStandard("stateShipAddIntervalLogarithmicScaleEasy", 1800.0f, "Time in seconds for easy level that it takes to reach critical ship addition rate.");
setStandard("stateShipAddIntervalLogarithmicScaleMedium", 900.0f, "Time in seconds for medium level that it takes to reach critical ship addition rate.");
setStandard("stateShipAddIntervalLogarithmicScaleHard", 600.0f, "Time in seconds for hard level that it takes to reach critical ship addition rate.");
setStandard("stateShipAddIntervalLogarithmicScaleExponent", 4.0f, "Exponent of 2 for logarithmic scale for ship addition rate.");
setStandard("stateMissileSpeed", 100.0f, "Missile speed in world units per second.");
setStandard("stateMissileFiringIntervalEasy", 12.0f, "Wait time for easy level in between missile firings for each ship.");
setStandard("stateMissileFiringIntervalMedium", 10.0f, "Wait time for medium level in between missile firings for each ship.");
setStandard("stateMissileFiringIntervalHard", 8.0f, "Wait time for hard level in between missile firings for each ship.");
setStandard("stateMissileRadiusMultiplier", 1.5f, "Multiplier of missile radius for actual collision area.");
setStandard("stateFortressMinimumTilt", 0.0f, "Minimum tilt angle of fortress turret.");
setStandard("stateFortressMaximumTilt", 45.0f, "Maximum tilt angle of fortress turret.");
setStandard("stateTurretTurnSpeed", 90.0f, "Turning speed of turret in degrees per second.");
setStandard("stateHealthRegenerationRate", 0.0625f, "Portion of fortress health capacity regenerated each second.");
setStandard("stateMissileStrikeDepletion", 0.5f, "Portion of fortress health depleted by one missile strike.");
setStandard("stateAmmoFiringCost", 0.1f, "Portion of total ammunition capacity depleted by firing one shell.");
setStandard("stateAmmoReloadMultiplier", 4.0f, "Multiplier of reload rate for amount of ammo actually needed to counter missile firing rate.");
setStandard("stateShellSpeed", 500.0f, "Shell speed in world units per second.");
setStandard("stateShellExpirationDistance", 1500.0f, "Distance at which shells are deleted.");
setStandard("stateEMPFiringCost", 0.5f, "Portion of total ammunition capacity depleted by firing one EMP.");
setStandard("stateEMPHealthCost", 0.5f, "Portion of total health capacity depleted by firing one EMP.");
setStandard("stateEMPChargingTime", 3.0f, "Time in seconds required for EMP charge.");
setStandard("stateEMPRange", 350.0f, "Radius in world units of EMP blast.");
setStandard("stateEMPDuration", 1.5f, "Time required for one full EMP discharge.");
setStandard("stateTurretRecoilSpeed", 0.125f, "Time it takes for turret to recoil after shot.");
setStandard("stateTurretRecoilRecoverySpeed", 1.0f, "Time it takes for turret to recover from recoil after shot.");
setStandard("stateTurretRecoilDistance", 3.0f, "Distance in world units of turret recoil.");
setStandard("stateKeyDampeningBasePortion", 0.0f, "Portion of movement to make when key initially pressed.");
setStandard("stateKeyDampeningTime", 0.5f, "Time in seconds over which to dampen arrow key presses");
setStandard("stateKeyDampeningExponent", 1.0f, "Power to raise fractional key movement values to when dampening key input");
// input standards
setStandard("inputPollingFrequency", 120.0f, "Number of times per second to poll the input devices.");
setStandard("inputAllowedNameChars", "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890-_ ", "Characters allowed to be used in names on high scores list.");
setStandard("inputDeleteKeyRepeatRate", 0.125f, "Wait time in between deletion of characters when backspace key is held down.");
setStandard("inputDeleteKeyRepeatWait", 0.5f, "Wait time before commencing rapid delete when backspace key is held down.");
// logic standards
setStandard("logicUpdateFrequency", 120.0f, "Number of times per second to update game logic.");
// display and drawing standards
setStandard("displayFramerateLimiting", (float) LIMIT_VSYNC, "How to limit framerate (vsync, fps count, or off).");
setStandard("displayStartFullscreen", false, "Whether or not to start the program in full screen mode.");
setStandard("displayColorDepth", 32.0f, "Color depth of display (may only affect full screen mode).");
setStandard("displayDepthSize", 32.0f, "Size of depth buffer.");
setStandard("displayMultisamplingLevel", 4.0f, "Multisampling level (0, 2, or 4).");
setStandard("colorClear", Vector4(174.0f / 255.0f, 187.0f / 255.0f, 224.0f / 255.0f, 1.0f), "Color of empty space.");
// scene rendering standards
setStandard("renderingPerspectiveFOV", 30.0f, "Field-of-view angle for perspective projection.");
setStandard("renderingPerspectiveBinocularsFOV", 10.0f, "Field-of-view angle for perspective projection while binoculars enabled.");
setStandard("renderingPerspectiveNearClip", 0.5f, "Near clip distance for perspective projection.");
setStandard("renderingPerspectiveFarClip", 9000.0f, "Far clip distance for perspective projection.");
setStandard("waterColor", Vector4(0.025f, 0.05f, 0.15f, 1.0f), "Water color.");
setStandard("horizonColor", Vector4(0.88f, 0.88f, 0.88f, 1.0f), "Horizon color.");
setStandard("baseSkyColor", Vector4(0.76f, 0.88f, 1.0f, 1.0f), "Sky color at approximately halfway up.");
setStandard("apexColor", Vector4(0.08f, 0.24f, 0.4f, 1.0f), "Sky color at apex.");
setStandard("empColorMultiplier", 4.0f, "Values to multiply colors by while under EMP.");
setStandard("terrainDepth", 10.0f, "How far below the water the ground extends.");
setStandard("terrainTextureRepeat", 50.0f, "Number of times to repeat the ground texture over the maximum surface area.");
setStandard("terrainNoiseTextureDensity", 512.0f, "Terrain mixing noise texture resolution.");
setStandard("terrainNoiseTextureRoughness", 0.6f, "Terrain mixing noise texture roughness factor.");
setStandard("terrainNoiseTextureDepth", 4.0f, "Terrain mixing noise texture color depth.");
setStandard("shellDensity", 32.0f, "Number of segments for shell sphere.");
setStandard("missileTrailLength", 100.0f, "Length of missile trail.");
setStandard("explosionRadius", 25.0f, "Radius of missile explosion.");
setStandard("explosionDuration", 2.0f, "Duration in seconds of missile explosion.");
setStandard("explosionSphereDensity", 64.0f, "Density of explosion sphere.");
setStandard("cameraMovementSpeed", 200.0f, "Speed in world units per second of camera movement in roaming mode.");
// HUD standards
setStandard("hudFPSTestFrequency", 1.0f, "Frequency per second of the FPS test.");
setStandard("hudControlSpotSize", 80.0f, "Width and height of control spot in pixels.");
setStandard("hudControlSpotBorder", 5.0f, "Thickness of control spot border in pixels.");
setStandard("hudControlSpotColor", Vector4(0.3f, 0.3f, 0.3f, 0.6f), "Color of control spot.");
setStandard("hudCursorSize", 40.0f, "Width and height of cursor in pixels.");
setStandard("hudCursorColor", Vector4(0.0f, 0.0f, 0.0f, 0.5f), "Color of cursor.");
setStandard("hudControlAreaRadius", 300.0f, "Radius in pixels of allowed cursor movement.");
setStandard("hudCursorPositionExponent", 1.5f, "Raise the cursor fractional position to this exponent for more precision toward middle.");
setStandard("radarSize", 35.0f, "Size of radar panel in percentage of screen height.");
setStandard("radarRefreshSpeed", 1.0f, "Time in seconds for a full radar turn.");
setStandard("radarSpotSize", 6.0f, "Size in pixels of radar missile spots.");
setStandard("radarCenterSpotSize", 8.0f, "Size in pixels of radar missile spots.");
setStandard("radarSpotColor", Vector4(1.0f, 0.0f, 0.0f, 1.0f), "Color of radar missile spots.");
setStandard("radarEMPColor", Vector4(1.0f, 1.0f, 1.0f, 0.5f), "Color of radar missile spots.");
setStandard("radarRadius", 1500.0f, "Radius of radar coverage.");
setStandard("radarViewConeColor", Vector4(1.0f, 1.0f, 1.0f, 0.2f), "Color of radar view cone.");
setStandard("radarViewConeBorderColor", Vector4(1.0f, 1.0f, 1.0f, 0.5f), "Color of border of radar view cone.");
setStandard("hudContainerBorder", 2.0f, "Thickness in pixels of HUD container element borders.");
setStandard("hudContainerSoftEdge", 2.0f, "Thickness in pixels of HUD container element border antialiased portion.");
setStandard("hudContainerInsideColor", Vector4(0.15f, 0.15f, 0.15f, 0.75f), "Background color of HUD container elements.");
setStandard("hudContainerHighlightColor", Vector4(0.863f, 0.863f, 0.863f, 0.247f), "Highlight background color of HUD container elements.");
setStandard("hudContainerBorderColor", Vector4(0.918f, 1.0f, 0.945f, 0.714f), "Border color of HUD container elements.");
setStandard("hudFieldWidth", 20.0f, "Standard field width (in number of '#' characters).");
setStandard("hudFieldColor", Vector4(0.031f, 0.075f, 0.184f, 0.752f), "Background color for inactive text fields.");
setStandard("hudGaugeWidth", 200.0f, "Width of gauges in pixels.");
setStandard("hudGaugeHeight", 30.0f, "Height of gauges in pixels.");
setStandard("hudGaugeBackgroundColor", Vector4(0.3f, 0.3f, 0.3f, 0.6f), "Background color of gauges.");
setStandard("hudGaugeColorFalloff", Vector4(0.3f, 0.3f, 0.3f, 0.75f), "Factor to be multiplied into gauge color for falloff at bottom.");
setStandard("hudGaugeHealthBarColor", Vector4(0.0f, 1.0f, 0.0f, 1.0f), "Color of health gauge.");
setStandard("hudGaugeAmmoBarColor", Vector4(0.0f, 1.0f, 1.0f, 1.0f), "Color of ammunition gauge.");
setStandard("hudGaugeEMPChargingBarColor", Vector4(1.0f, 1.0f, 0.0f, 1.0f), "Color of EMP gauge while charging.");
setStandard("hudGaugeEMPChargedBarColor", Vector4(1.0f, 0.0f, 0.0f, 1.0f), "Color of EMP gauge when charged.");
setStandard("hudGrayOutColor", Vector4(0.0f, 0.0f, 0.05f, 0.75f), "Color of gray screen during pause.");
setStandard("hudStrikeEffectTime", 1.0f, "Time in seconds to display strike effect.");
setStandard("hudMissileIndicatorSize", 50.0f, "Size in pixels of missile indicator box.");
setStandard("hudMissileIndicatorBinocularsFactor", 3.0f, "Factor to multiply missile indicator size by when in binoculars mode.");
setStandard("hudMissileIndicatorColor", Vector4(1.0f, 0.0f, 0.0f, 1.0f), "Color of missile indicator box.");
setStandard("hudMissileArrowColor", Vector4(1.0f, 0.0f, 0.0f, 0.75f), "Color of missile indicator arrows.");
setStandard("hudMissileArrowWidth", 14.0f, "Width of missile indicator arrows in pixels.");
setStandard("hudMissileArrowHeight", 22.0f, "Height of missile indicator arrows in pixels.");
// setStandard("hudMissileArrowBlinkRate", 0.5f, "Blink rate for missile indicator arrows behind player.");
// font standards
setStandard("fontFile", "TitilliumWeb-Bold.ttf", "Font file to load for use by HUD and menus.");
setStandard("fontColorLight", Vector4(1.0f, 1.0f, 1.0f, 1.0f), "Light font color.");
setStandard("fontColorDark", Vector4(0.5f, 0.5f, 0.5f, 1.0f), "Medium font color.");
setStandard("helpTextScreenPortion", 0.9f, "Horizontal portion of screen taken up by help text.");
// audio standards
setStandard("audioTickRate", 50.0f, "Audio manager tick rate.");
setStandard("audioMusicVolume", 0.5f, "Music volume.");
setStandard("audioEffectsVolume", 0.5f, "Audio effects volume.");
setStandard("audioVolumeDropOffDistance", 1500.0f, "Distance it takes for an object's effect volume to fade to zero.");
// general game standards
setStandard("preferencesVersion", 6.0f, "Version of preferences file format.");
setStandard("developmentMode", false, "Whether to enable extra development features.");
setStandard("gameStartingLevel", 1.0f, "Starting difficulty level.");
setStandard("gameMaximumHighScores", 5.0f, "Maximum number of high scores to track.");
setStandard("gameDefaultHighScoreName", "Anonymous", "Default player name for new high score entry.");
setStandard("gameHighScoreName", "", "Stored player name for new high score entry.");
setStandard("islandMaximumWidth", 1000.0f, "Maximum island width.");
setStandard("islandMaximumHeight", 100.0f, "Maximum island height.");
setStandard("islandTerrainBaseDensity", 128.0f, "Density of island terrain tessellation.");
setStandard("islandTerrainDetail", 2.0f, "Island detail level (1, 2, or 3) beyond base density, to be specified by user.");
setStandard("islandTerrainRoughness", 0.5f, "Roughness of island terrain randomization.");
setStandard("islandTerrainGradDist", 0.5f, "Island terrain generation gradual distance factor.");
setStandard("islandTerrainBlends", 4.0f, "Island terrain generation blending factor.");
setStandard("islandTerrainSink", 0.5f, "Island terrain generation sink to sea level factor.");
// text strings
setStandard("textControls", "Move Turret:\tArrow Keys / Mouse Movement\nFire Cannon:\tSpace / Left Mouse Button / Mouse Wheel\nCharge / Fire EMP:\tTab / Right Mouse Button\nToggle Binoculars:\tShift / Middle Mouse Button\nPause / Resume:\tEsc\nToggle Fullscreen:\tF1\nFast Quit:\tF12", "Controls help text.");
setStandard("textInstructions", "You occupy a cannon tower atop an island mountain. You must destroy enemy missiles fired at you by orbiting ships. Your first weapon is a cannon which fires shells, depleting your ammunition reservoir. Your second weapon is an electromagnetic pulse device, which will destroy all missiles within its radius. An electromagnetic pulse requires a charging period, and depletes both your health and ammunition reservoirs.\n\nYou gain a certain number of points (based on the difficulty setting) for each missile you destroy. As the game progresses, the number of enemy ships will increase. When your health level drops below zero, the game is over. Good luck!");
setStandard("textCredits", "Dedicated to <NAME> #6894 of the Phoenix Police Department; EOW October 18, 2010.\n\nCreated by <NAME>.\n\nMusic and sound effects by <NAME> (http://flexstylemusic.com).\n\nThis software uses the Titillium Web font by <NAME> and students of MA course of Visual design.\n\nPortions of this software are copyright (c) 2018 The FreeType Project (www.freetype.org). All rights reserved.\n\nThis software uses the Simple DirectMedia Layer library (http://www.libsdl.org/).", "Credits text.");
// load standards from preferences (or save standard preferences if no file)
if(platform->getPreferenceFloat("preferencesVersion") == getFloat("preferencesVersion")) {
if(getString("displayWindowedResolutions").find(platform->getPreferenceString("displayWindowedResolution")) != std::string::npos)
setStandard("displayWindowedResolution", platform->getPreferenceString("displayWindowedResolution").c_str());
setStandard("displayFramerateLimiting", platform->getPreferenceFloat("displayFramerateLimiting"));
setStandard("displayMultisamplingLevel", platform->getPreferenceFloat("displayMultisamplingLevel"));
setStandard("displayStartFullscreen", platform->getPreferenceFloat("displayStartFullscreen") == 1.0f ? true : false);
setStandard("audioMusicVolume", platform->getPreferenceFloat("audioMusicVolume"));
setStandard("audioEffectsVolume", platform->getPreferenceFloat("audioEffectsVolume"));
setStandard("gameStartingLevel", platform->getPreferenceFloat("gameStartingLevel"));
setStandard("gameHighScoreName", platform->getPreferenceString("gameHighScoreName").c_str());
setStandard("islandTerrainDetail", platform->getPreferenceFloat("islandTerrainDetail"));
setStandard("developmentMode", platform->getPreferenceFloat("developmentMode") == 1.0f ? true : false);
std::string highScoresString = platform->getPreferenceString("highScores");
size_t i = 0;
while(highScoresString.length() > 0 && i < highScoresString.length() - 1 && i != std::string::npos) {
if(i > 0) ++i;
highScores.push_back(highScoresString.substr(i, highScoresString.find('\n', i) - i));
i = highScoresString.find('\n', i);
}
} else {
flushPreferences();
}
// log the build version
std::stringstream buildInfo;
buildInfo << "Game Version: " << versionString;
this->log(LOG_INFO, buildInfo.str().c_str());
}
void GameSystem::log(LogDetail detail, std::string report) {
std::stringstream fullReport;
if(platform != NULL)
fullReport << platform->getExecMills() << " ";
else
fullReport << "0 ";
if(detail == LOG_INFO)
fullReport << "INFO: " << report;
else if(detail == LOG_VERBOSE)
fullReport << "VERBOSE: " << report;
else
fullReport << "FATAL: " << report;
logLines.push_back(fullReport.str());
if(detail == LOG_FATAL) {
Platform::consoleOut(fullReport.str() + "\n");
exit(1);
}
}
Vector4 GameSystem::getColor(const char* key) {
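// Color standards are stored as eight hex digits (RRGGBBAA, two per channel,
// as written by the Vector4 overload of setStandard); parse each pair and
// scale back to [0,1] floats.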
int colors[4];
sscanf(getString(key).c_str(), "%2x%2x%2x%2x", &colors[0], &colors[1], &colors[2], &colors[3]);
return Vector4(
(float) colors[0] / 255.0f,
(float) colors[1] / 255.0f,
(float) colors[2] / 255.0f,
(float) colors[3] / 255.0f
);
}
void GameSystem::setStandard(const char* key, const char* value, const char* description, bool locked) {
StandardEntry entry;
entry.value = value;
if(strcmp(description, "") != 0) entry.description = description;
entry.locked = locked;
standards[key] = entry;
}
void GameSystem::setStandard(const char* key, Vector4 value, const char* description, bool locked) {
char str[9];
sprintf(
str,
"%02x%02x%02x%02x",
(int) (value.x * 255.0f),
(int) (value.y * 255.0f),
(int) (value.z * 255.0f),
(int) (value.w * 255.0f)
);
setStandard(key, str, description, locked);
}
void GameSystem::setStandard(const char* key, float value, const char* description, bool locked) {
std::stringstream str;
str << value;
setStandard(key, str.str().c_str(), description, locked);
}
void GameSystem::setStandard(const char* key, bool value, const char* description, bool locked) {
setStandard(key, (value != false ? "true" : "false"), description, locked);
}
void GameSystem::flushPreferences() {
platform->setPreference("preferencesVersion", getFloat("preferencesVersion"));
platform->setPreference("displayWindowedResolution", getString("displayWindowedResolution").c_str());
platform->setPreference("displayStartFullscreen", getBool("displayStartFullscreen") == true ? 1.0f : 0.0f);
platform->setPreference("displayFramerateLimiting", getFloat("displayFramerateLimiting"));
platform->setPreference("displayMultisamplingLevel", getFloat("displayMultisamplingLevel"));
platform->setPreference("audioMusicVolume", getFloat("audioMusicVolume"));
platform->setPreference("audioEffectsVolume", getFloat("audioEffectsVolume"));
platform->setPreference("gameStartingLevel", getFloat("gameStartingLevel"));
platform->setPreference("gameHighScoreName", getString("gameHighScoreName").c_str());
platform->setPreference("islandTerrainDetail", getFloat("islandTerrainDetail"));
platform->setPreference("developmentMode", (getBool("developmentMode") == true ? 1.0f : 0.0f));
if(highScores.size() == 0) {
platform->setPreference("highScores", "");
} else {
std::stringstream stringStream;
stringStream << "\"";
for(size_t i = 0; i < highScores.size(); ++i)
stringStream << highScores[i] << "\n";
stringStream << "\"";
platform->setPreference("highScores", stringStream.str().c_str());
}
}
unsigned int GameSystem::extractScoreFromLine(std::string scoreString) {
size_t scoreBeginning = scoreString.find('\t');
if(scoreBeginning == std::string::npos)
log(GameSystem::LOG_FATAL, "Could not locate score beginning of high score line: " + scoreString);
size_t scoreEnd = scoreString.substr(scoreBeginning + 1, std::string::npos).find('\t');
if(scoreEnd == std::string::npos)
log(GameSystem::LOG_FATAL, "Could not locate score end of high score line: " + scoreString);
return (unsigned int) atoi(scoreString.substr(scoreBeginning + 1, scoreEnd - scoreBeginning - 1).c_str());
}
std::vector< std::pair<unsigned int, unsigned int> > GameSystem::getAllowedWindowResolutions() {
// return all resolutions within limit of screen resolution factor
std::string resolutionsString = getString("displayWindowedResolutions");
std::vector< std::pair<unsigned int, unsigned int> > resolutions;
size_t stringOffset = 0;
while(stringOffset < resolutionsString.length()) {
std::string resolution = resolutionsString.substr(stringOffset, resolutionsString.find(',', stringOffset) - stringOffset);
std::pair<unsigned int, unsigned int> resolutionPair;
resolutionPair.first = (unsigned int) atoi(resolution.substr(0, resolution.find('x')).c_str());
resolutionPair.second = (unsigned int) atoi(resolution.substr(resolution.find('x') + 1, std::string::npos).c_str());
if(resolutionPair.second <= (unsigned int) ((float) displayResolutionY * getFloat("displayWindowedMaxPortion")))
resolutions.push_back(resolutionPair);
stringOffset += resolution.length() + 1;
}
return resolutions;
}
void GameSystem::applyScreenResolution(std::string resolution) {
std::string resolutionsString = getString("displayWindowedResolutions");
float scalingFactor = atof(resolution.substr(resolution.find('x') + 1, std::string::npos).c_str()) /
atof(resolutionsString.substr(resolutionsString.find('x') + 1, resolutionsString.find(',') - resolutionsString.find('x') + 1).c_str());
// round sizes to nearest even integer
setStandard("hudElementMargin", (float) (roundToInt(getFloat("hudBaseElementMargin") * scalingFactor / 2.0f) * 2));
setStandard("hudContainerPadding", (float) (roundToInt(getFloat("hudBaseContainerPadding") * scalingFactor / 2.0f) * 2));
setStandard("hudButtonPadding", (float) (roundToInt(getFloat("hudBaseButtonPadding") * scalingFactor / 2.0f) * 2));
setStandard("hudBigButtonPadding", (float) (roundToInt(getFloat("hudBaseBigButtonPadding") * scalingFactor / 2.0f) * 2));
setStandard("hudGaugePadding", (float) (roundToInt(getFloat("hudBaseGaugePadding") * scalingFactor / 2.0f) * 2));
setStandard("fontSizeSmall", (float) (roundToInt(getFloat("fontBaseSizeSmall") * scalingFactor / 2.0f) * 2));
setStandard("fontSizeMedium", (float) (roundToInt(getFloat("fontBaseSizeMedium") * scalingFactor / 2.0f) * 2));
setStandard("fontSizeLarge", (float) (roundToInt(getFloat("fontBaseSizeLarge") * scalingFactor / 2.0f) * 2));
setStandard("fontSizeSuper", (float) (roundToInt(getFloat("fontBaseSizeSuper") * scalingFactor / 2.0f) * 2));
setStandard("logoHeight", (float) (roundToInt(getFloat("logoBaseHeight") * scalingFactor / 2.0f) * 2));
setStandard("gaugeImagesHeight", (float) (roundToInt(getFloat("gaugeImagesBaseHeight") * scalingFactor / 2.0f) * 2));
}
|
It’s a sad moment: AIM, AOL’s long-running instant messenger service that was core to many people’s first social experiences on the internet, will shut down once and for all on December 15th. AOL announced the shutdown today, acknowledging that people now communicate in new ways online, so AIM is no longer needed.
“AIM tapped into new digital technologies and ignited a cultural shift, but the way in which we communicate with each other has profoundly changed,” writes Michael Albers, communications products VP at Oath (the Verizon behemoth that consumed AOL).
Time to set your final away message
AOL cut off access to AIM from third-party chat clients back in March, hinting at this eventual shutdown. It’s hard to imagine that many people are still using AIM, so neither that change nor this upcoming shutdown is likely to make a huge difference.
AIM was one of the first and most successful instant messengers, widely used in the late ‘90s and even throughout the 2000s. I was still using AIM to chat with my friends throughout college at the end of the decade, including to stay in touch with my (not-yet) significant other while she was studying abroad.
But with the proliferation of smartphones, everything has changed. Text messaging has taken over from desktop instant messaging apps, and increasingly, we’re seeing other social apps, like Snapchat and Instagram, take over some of that role. For straight messaging, Facebook also makes things much easier, since you’re already connected to everyone you know and can just start up a chat without exchanging arcane things like screen names. In fact, Facebook has multiple billion-user messaging services at this point: Messenger and WhatsApp.
Other classic chat apps have shut down in recent years, too. MSN Messenger shut down in 2014, and Yahoo Messenger shut down last year (although Yahoo also launched a new messaging service under the same name). It was only a matter of time until AIM joined them, but there’s still some nostalgia in seeing it go.
With AIM on its way out the door, now’s your last chance to write that perfect away message. |
import unittest
from PyStacks.PyStacks.template import templateCF
class TestTemplate(unittest.TestCase):
def test_templateCF_ElasticSearch(self):
self.maxDiff = None
resources = {
"elasticsearch": {
"ElasticSearchTest": {
"version": 5.5,
"dedicatedmaster": True,
"instancecount": 4,
"instancetype": "m4.large.elasticsearch",
"mastertype": "m4.large.elasticsearch",
"mastercount": 2,
"zoneid": "testaws",
"zonesuffix": "test.aws",
"ebsoptions": {
"iops": 0,
"size": 60,
"type": "gp2"
},
"snapshotoptions": {
"AutomatedSnapshotStartHour": 0
},
"advancedoptions": {
"rest.action.multi.allow_explicit_index": "true"
},
"policy": {
"Action": "*",
"Effect": "Allow",
"Resource": "*"
}
}
}
}
expected = {
"ElasticSearchTest": {
"Properties": {
"AccessPolicies": {
"Statement": [
{
"Action": "*",
"Effect": "Allow",
"Resource": "*"
}
],
"Version": "2012-10-17"
},
"AdvancedOptions": {
"rest.action.multi.allow_explicit_index": "true"
},
"DomainName": "ElasticSearchTest",
"EBSOptions": {
"EBSEnabled": "true",
"Iops": 0,
"VolumeSize": 60,
"VolumeType": "gp2"
},
"ElasticsearchClusterConfig": {
"DedicatedMasterCount": "2",
"DedicatedMasterEnabled": "true",
"DedicatedMasterType": "m4.large.elasticsearch",
"InstanceCount": "4",
"InstanceType": "m4.large.elasticsearch",
"ZoneAwarenessEnabled": "true"
},
"ElasticsearchVersion": "5.5",
"SnapshotOptions": {
"AutomatedSnapshotStartHour": "0"
}
},
"Type": "AWS::Elasticsearch::Domain"
},
"ElasticSearchTestDNS": {
"Properties": {
"Comment": "ElasticSearchTest Records by default",
"HostedZoneId": {
"Fn::ImportValue": {
"Fn::Sub": [
"${DNSStack}-Route53-testaws-Zone", {
"DNSStack": {
"Ref": "DNSStack"
}
}
]
}
},
"RecordSets": [
{
"Name": "ElasticSearchTest.es.test.aws",
"ResourceRecords": [
{
"Fn::GetAtt": [
"ElasticSearchTest",
"DomainEndpoint"
]
}
],
"SetIdentifier": "ElasticSearchTest.es.test.aws",
"TTL": "60",
"Type": "CNAME",
"Weight": "10"
}
]
},
"Type": "AWS::Route53::RecordSetGroup"
}
}
actual = templateCF(resources, 'resources')
self.assertDictEqual(actual, expected)
if __name__ == '__main__':
unittest.main()
|
package com.foundation.canal.spring.handle;
import com.foundation.canal.spring.parser.CanalConfigBeanDefinitionParser;
import org.springframework.beans.factory.xml.NamespaceHandlerSupport;
/**
* @author fqh
* @version 1.0 2016/9/25
*/
public class CanalClientNamespaceHandler extends NamespaceHandlerSupport {
@Override
public void init() {
registerBeanDefinitionParser("canal-config", new CanalConfigBeanDefinitionParser());
}
}
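/*
 * Illustrative only: once this handler is registered for the canal namespace
 * (via a META-INF/spring.handlers mapping, not shown in this file), an XML
 * element such as the following would be routed to
 * CanalConfigBeanDefinitionParser. The attributes are hypothetical examples:
 *
 *   <canal:canal-config id="canalConfig" ... />
 */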
|
/**
* @Author: DollarKiller
* @Description: Compress strings
* @Github: https://github.com/dollarkillerx
* @Date: Create in 20:38 2019-09-25
*/
package compression
import (
"bytes"
"compress/gzip"
"encoding/base64"
"io/ioutil"
)
type Str struct {
}
func NewStrZip() *Str {
return &Str{}
}
func (s *Str) Zip(str string) string {
var b bytes.Buffer
gz := gzip.NewWriter(&b)
if _, err := gz.Write([]byte(str)); err != nil {
return ""
}
if err := gz.Flush(); err != nil {
return ""
}
if err := gz.Close(); err != nil {
return ""
}
strc := base64.StdEncoding.EncodeToString(b.Bytes())
return strc
}
func (s *Str) Unzip(str string) string {
data, err := base64.StdEncoding.DecodeString(str)
if err != nil {
return ""
}
rdata := bytes.NewReader(data)
rc, err := gzip.NewReader(rdata)
if err != nil {
return ""
}
defer rc.Close()
all, err := ioutil.ReadAll(rc)
if err != nil {
return ""
}
return string(all)
}
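// Example usage (illustrative): round-trip a string through gzip + base64.
//
//   zipper := NewStrZip()
//   packed := zipper.Zip("hello, world")
//   plain := zipper.Unzip(packed) // plain == "hello, world"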
|
import numpy as np
import math
import matplotlib.pyplot as plt
import pandas as pd
from kernel_generalization.utils import gegenbauer
import scipy as sp
import scipy.special
import scipy.optimize
from scipy.special import comb, gamma
from kernel_generalization.utils import neural_tangent_kernel as ntk
###############################################################
################# Use Only These Functions ####################
###############################################################
def f(phi, L):
if L == 1:
return np.arccos(1 / np.pi * np.sin(phi) + (1 - 1 / np.pi * np.arccos(np.cos(phi))) * np.cos(phi))
elif L == 0:
return np.arccos(np.cos(phi))
else:
return f(phi, L - 1)
def NTK(phi, L):
if L == 1:
ntk = np.cos(f(phi, 1)) + (1 - phi / np.pi) * np.cos(phi)
return ntk
else:
a = phi
for i in range(L - 1):
a = f(a, 1)
ntk = np.cos(f(a, 1)) + NTK(phi, L - 1) * (1 - a / np.pi)
return ntk
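# Quick sanity check (illustrative): at phi = 0 every layer contributes
# cos(0) = 1, so the depth-L recursion above gives NTK(0, L) = L + 1,
# e.g. NTK(np.array([0.0]), 3) -> array([4.]).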
def get_gaussian_spectrum(ker_var, dist_var, kmax, dim):
## Sigma is sample variance
## Gamma is kernel variance
sigma = dist_var
gamma = ker_var
a = 1/(4*sigma)
b = 1/(2*gamma)
c = np.sqrt(a**2 + 2*a*b)
A = a+b+c
B = b/A
spectrum = np.array([np.sqrt(2*a/A)**(dim) * B**(k) for k in range(kmax)])
lambda_bar = np.array([B**(k) for k in range(kmax)])
degens = np.array([scipy.special.comb(k+dim-1,dim-1) for k in range(kmax)])
return spectrum, degens, lambda_bar
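# Illustrative usage: for a unit-variance kernel and distribution in 10
# dimensions, eigenvalues decay geometrically in k (lambda_bar[k] = B**k
# with B = b/(a+b+c) as defined above), e.g.:
#   spectrum, degens, lambda_bar = get_gaussian_spectrum(1.0, 1.0, 50, 10)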
def get_kernel_spectrum(layers, sig_w, sig_b, kmax, dim, num_pts=10000, IfNTK = True):
alpha = dim / 2.0 - 1
z, w = sp.special.roots_gegenbauer(num_pts, alpha)
Q = gegenbauer.gegenbauer(z, kmax, dim)
degens = np.array([gegenbauer.degeneracy_kernel(dim, k) for k in range(kmax)])
kernel = np.zeros((len(layers), num_pts))
L = max(layers)+1
theta = np.arccos(z)
KernelNTK, KernelNormalizedNTK, ThetaNTK = ntk.NTK(theta, sig_w, sig_b, L, IfNTK);
for i, layer in enumerate(layers):
kernel[i] = KernelNTK[layer]
scaled_kernel = kernel * np.outer(np.ones(len(layers)), w)
normalization = gegenbauer.eigenvalue_normalization(kmax, alpha, degens)
spectrum_scaled = scaled_kernel @ Q.T / normalization
spectrum_scaled = spectrum_scaled * np.heaviside(spectrum_scaled - 1e-20, 0)
spectrum_true = spectrum_scaled / np.outer(np.ones(len(layers)), degens)
for i in range(len(layers)):
for j in range(kmax - 1):
if spectrum_true[i, j + 1] < spectrum_true[i, j] * 1e-5:
spectrum_true[i, j + 1] = 0
return z, spectrum_true, spectrum_scaled, degens, kernel
def exp_spectrum(s, kmax, degens):
## Here s denotes the s^(-l)
spectrum_scaled = np.array([s**(-l) for l in range(1,kmax)])
spectrum_scaled = np.append([1],spectrum_scaled) ## We add the zero-mode
spectrum_true = spectrum_scaled / degens
return spectrum_true, spectrum_scaled
def power_spectrum(s, kmax, degens):
## Here s denotes the l^(-s)
spectrum_scaled = np.array([l**(-s) for l in range(1,kmax)])
spectrum_scaled = np.append([1],spectrum_scaled) ## We add the zero-mode
spectrum_true = spectrum_scaled / degens
return spectrum_true, spectrum_scaled
def white_spectrum(N):
return np.ones(N)/N
###############################################################
################# For Kernel Spectrum From Mathematica ####################
###############################################################
def ntk_spectrum(file, kmax = -1, layer = None, dim = None, return_NTK = False):
## Obtain the spectrum
data = np.load(file, allow_pickle=True)
eig, eig_real, eig_raw = [data['arr_'+str(i)] for i in range(len(data.files))]
if(kmax != -1):
eig = eig[:,:kmax,:]
eig_real = eig_real[:,:kmax,:]
eig_raw = eig_raw[:,:kmax,:]
## Reconstruct the NTK
num_pts = 10000
Dim = np.array([5*(i+1) for i in range(40)])
alpha = Dim[dim] / 2.0 - 1
z, w = sp.special.roots_gegenbauer(num_pts, alpha)
Q = gegenbauer.gegenbauer(z, kmax, Dim[dim])
k = np.array([i for i in range(kmax)]);
norm = (alpha+k)/alpha
NTK = eig_real[dim,:,layer]*norm @ Q
if(layer != None and dim != None):
if return_NTK:
return eig[dim,:,layer], eig_real[dim,:,layer], NTK
return eig[dim,:,layer], eig_real[dim,:,layer]
if(layer != None and dim == None):
return eig[:,:,layer], eig_real[:,:,layer]
if(layer == None and dim != None):
return eig[dim,:,:], eig_real[dim,:,:]
if(layer == None and dim == None):
return eig[:,:,:], eig_real[:,:,:]
def degeneracy(d,l):
alpha = (d-2)/2
degens = np.zeros((len(l),1))
degens[0] = 1
for i in range(len(l)-1):
k = l[i+1,0]
degens[i+1,:] = comb(k+d-3,k)*((alpha+k)/(alpha))
return degens
def norm(dim,l):
alpha = (dim-2)/2;
area = np.sqrt(np.pi)*gamma((dim-1)/2)/gamma(dim/2);
degen = degeneracy(dim,l)
Norm = area*degen*((alpha)/(alpha+l))**2
## Also another factor of lambda/(n+lambda) comes from spherical harmoincs -> gegenbauer
Norm1 = area*((alpha)/(alpha+l))*degen
#Norm2 = area*((alpha)/(alpha+l))
return [Norm1, degen]
def save_spectrum(directory, dim, deg, layer):
# dim = np.array([5*(i+1) for i in range(20)])
# deg = np.array([i+1 for i in range(100)])
# layer = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9])
layer_str = [str(num) for num in layer]
data = np.zeros((dim.size,deg.size,layer.size))
data_real = np.zeros((dim.size,deg.size,layer.size))
for i in range(dim.size):
data_i = pd.read_csv(directory + str(dim[i]) + ".txt",
delim_whitespace=True,
skipinitialspace=False).T.to_numpy()
##Norm = np.array([norm(dim[i],d) for d in deg])
normalization = norm(dim[i], deg.reshape(len(deg),1))
data_real[i,:,:] = data_i / normalization[0]
data_real[i,:,:] = data_real[i,:,:]*(data_real[i,:,:] > 1e-60)
data[i,:,:] = data_real[i,:,:] * normalization[1]
np.savez(directory+'GegenbauerEigenvalues.npz', data, data_real)
return directory+'GegenbauerEigenvalues.npz'
###############################################################
################# Use Only These Functions ####################
###############################################################
|
package com.soonsoft.uranus.security.authentication;
import com.soonsoft.uranus.security.entity.UserInfo;
import com.soonsoft.uranus.core.Guard;
/**
* IUserManager
*/
public interface IUserManager {
UserInfo getUser(String username);
boolean createUser(UserInfo user);
boolean deleteUser(String username);
default boolean deleteUser(UserInfo user) {
Guard.notNull(user, "the UserInfo is required.");
return deleteUser(user.getUsername());
}
boolean disableUser(UserInfo user);
void resetPassword(UserInfo user);
void changeMyPassword(UserInfo user);
void findMyPassword(UserInfo user);
String encryptPassword(String password, String salt);
} |
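/*
 * Minimal in-memory sketch (illustrative, not part of this library):
 *
 *   class InMemoryUserManager implements IUserManager {
 *       private final java.util.Map<String, UserInfo> users =
 *           new java.util.concurrent.ConcurrentHashMap<>();
 *       public UserInfo getUser(String username) { return users.get(username); }
 *       public boolean createUser(UserInfo user) {
 *           return users.putIfAbsent(user.getUsername(), user) == null;
 *       }
 *       public boolean deleteUser(String username) { return users.remove(username) != null; }
 *       // disableUser/resetPassword/changeMyPassword/findMyPassword would
 *       // mutate the stored UserInfo; encryptPassword would hash password + salt.
 *   }
 */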
// --------------------------------------------------------------------------------------
// <NAME>
//
// --------------------------------------------------------------------------------------
#include "Boss_EvilHurri.hpp"
#include "stdafx.hpp"
// --------------------------------------------------------------------------------------
// Constructor
// --------------------------------------------------------------------------------------
GegnerEvilHurri::GegnerEvilHurri(int Wert1, int Wert2, bool Light) {
Handlung = GEGNER_INIT;
BlickRichtung = RECHTS;
Energy = 6000;
Value1 = Wert1;
Value2 = Wert2;
AnimPhase = 3;
AnimStart = 3;
AnimEnde = 21;
AnimSpeed = 0.4f;
ChangeLight = Light;
Destroyable = true;
OwnDraw = true;
BlinkDirection = 1.0f;
BlinkWert = 0.0f;
}
// --------------------------------------------------------------------------------------
// Custom draw function
// --------------------------------------------------------------------------------------
void GegnerEvilHurri::DoDraw() {
for (int i = 0; i < 4; i++) {
DirectGraphics.SetAdditiveMode();
if (BlickRichtung == LINKS) {
Player[0].PlayerRun.RenderSprite(static_cast<float>(xPos - TileEngine.XOffset),
static_cast<float>(yPos - TileEngine.YOffset), AnimPhase, 0xAA444444,
true);
Player[0].PlayerRun.RenderSprite(static_cast<float>(xPos - TileEngine.XOffset),
static_cast<float>(yPos - TileEngine.YOffset), AnimPhase, 0xFF0022FF,
true);
} else {
Player[0].PlayerRun.RenderSprite(static_cast<float>(xPos - TileEngine.XOffset),
static_cast<float>(yPos - TileEngine.YOffset), AnimPhase, 0xAA444444);
Player[0].PlayerRun.RenderSprite(static_cast<float>(xPos - TileEngine.XOffset),
static_cast<float>(yPos - TileEngine.YOffset), AnimPhase, 0xFF0022FF);
}
}
}
// --------------------------------------------------------------------------------------
// "Bewegungs KI"
// --------------------------------------------------------------------------------------
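// State overview (as implemented in the switch below): GEGNER_INIT waits for
// the camera lock, GEGNER_EINFLIEGEN walks the boss in, GEGNER_STEHEN idles
// and picks the next attack, GEGNER_CRUSHEN/CRUSHEN2 are charging attacks,
// GEGNER_LAUFEN/LAUFEN2 run and shoot, GEGNER_BOMBARDIEREN fires an arc of
// shots, GEGNER_SPECIAL is the lightning attack, GEGNER_AUFRICHTEN(ZWEI)
// walk to a screen edge, GEGNER_AUSSPUCKEN(ZWEI) strafe across the screen
// firing, and GEGNER_EXPLODIEREN is the death sequence.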
void GegnerEvilHurri::DoKI() {
// Animate
SimpleAnimation();
// Show energy
if (Handlung != GEGNER_INIT && Handlung != GEGNER_EXPLODIEREN)
HUD.ShowBossHUD(6000, Energy);
// Center the level view on the boss as soon as it becomes visible
if (Active == true && TileEngine.Zustand == TileStateEnum::SCROLLBAR) {
TileEngine.ScrollLevel(static_cast<float>(Value1), static_cast<float>(Value2),
TileStateEnum::SCROLLTOLOCK); // Center the level on the fist
SoundManager.FadeSong(MUSIC_STAGEMUSIC, -2.0f, 0, true); // Fade out and pause
}
// The mid-boss does not blink as long as the other enemies do
if (DamageTaken > 0.0f)
DamageTaken -= 100 SYNC; // Slowly fade out the red damage tint
else
DamageTaken = 0.0f; // or stop it entirely
// Check whether the player has touched the boss
for (int p = 0; p < NUMPLAYERS; p++)
if (SpriteCollision(xPos, yPos, GegnerRect[GegnerArt], Player[p].xpos, Player[p].ypos, Player[p].CollideRect) ==
true) {
// Player rolling as a wheel? Then bounce off
if (Player[p].Handlung == PlayerActionEnum::RADELN ||
Player[p].Handlung == PlayerActionEnum::RADELN_FALL) {
if (Player[p].xpos < xPos)
Player[p].Blickrichtung = LINKS;
if (Player[p].xpos > xPos)
Player[p].Blickrichtung = RECHTS;
}
// Otherwise deduct energy
else {
Player[p].DamagePlayer(float(10.0 SYNC));
// Push the player away
if (Player[p].xpos < xPos)
Player[p].xpos = xPos + GegnerRect[GegnerArt].left - Player[p].CollideRect.right;
if (Player[p].xpos > xPos)
Player[p].xpos = xPos + GegnerRect[GegnerArt].right - Player[p].CollideRect.left;
}
}
// Is the boss out of energy? Then it blows up
if (Energy <= 100.0f && Handlung != GEGNER_EXPLODIEREN) {
ShakeScreen(5);
AnimPhase = 40;
AnimEnde = 0;
AnimCount = 0.0f;
ySpeed = 0.0f;
xSpeed = 0.0f;
yAcc = 0.0f;
xAcc = 0.0f;
SoundManager.PlayWave(100, 128, 11025, SOUND_EXPLOSION2);
Handlung = GEGNER_EXPLODIEREN;
ActionDelay = 0.0f;
// Fade out and stop the end-boss music
SoundManager.FadeSong(MUSIC_BOSS, -2.0f, 0, false);
}
// Make EvilHurri blink
BlinkWert += BlinkDirection * 30.0f SYNC;
if (BlinkWert <= 0)
BlinkDirection = 1.0f;
if (BlinkWert >= 128)
BlinkDirection = -1.0f;
// Check the edges
if (xPos < float(Value1 + 10) && Handlung != GEGNER_EINFLIEGEN && Handlung != GEGNER_INIT)
xPos = float(Value1 + 10);
if (xPos > float(Value1 + 550) && Handlung != GEGNER_EINFLIEGEN && Handlung != GEGNER_INIT)
xPos = float(Value1 + 550);
// Act according to the current action state
switch (Handlung) {
case GEGNER_INIT: // Wait until the screen has been centered
{
if (TileEngine.Zustand == TileStateEnum::LOCKED) {
// Play the mid-boss music if it is not already playing
// DKS - Added function SongIsPlaying() to SoundManagerClass:
if (!SoundManager.SongIsPlaying(MUSIC_BOSS))
SoundManager.PlaySong(MUSIC_BOSS, false);
// And make the boss appear
Handlung = GEGNER_EINFLIEGEN;
}
} break;
case GEGNER_STEHEN: // Count down the timer and decide on the next action
{
AnimPhase = 0;
AnimStart = 0;
AnimEnde = 0;
xSpeed = 0.0f;
ySpeed = 0.0f;
if (xPos + 70 < pAim->xpos)
BlickRichtung = RECHTS;
if (xPos > pAim->xpos + 70)
BlickRichtung = LINKS;
ActionDelay -= 1.0f SYNC;
if (ActionDelay < 0.0f) {
ActionDelay = 0.0f;
// If the player is far away, crush them
if (PlayerAbstand() > 300) {
if (random(2) == 0) {
Handlung = GEGNER_CRUSHEN;
ActionDelay = 1.0f;
AnimPhase = 48;
if (BlickRichtung == RECHTS)
xSpeed = 40.0f;
else
xSpeed = -40.0f;
} else {
Handlung = GEGNER_CRUSHEN2;
ActionDelay = 1.0f;
AnimPhase = 48;
if (BlickRichtung == RECHTS)
xSpeed = 40.0f;
else
xSpeed = -40.0f;
}
}
// Otherwise fire in a circle
else {
if (random(2) == 0 && xPos > Value1 + 200 && xPos < Value1 + 440) {
Handlung = GEGNER_BOMBARDIEREN;
ActionDelay = 3.0f;
AnimEnde = 0;
AnimStart = 0;
AnimPhase = 35;
BlickRichtung = RECHTS;
}
// or charge at the player while firing
else {
if (random(2) == 0) {
Handlung = GEGNER_LAUFEN;
ActionDelay = 1.0f;
AnimPhase = 3;
AnimStart = 3;
AnimEnde = FRAMES_RUN;
AnimSpeed = 0.4f;
if (BlickRichtung == RECHTS)
xSpeed = 20.0f;
else
xSpeed = -20.0f;
}
else {
Handlung = GEGNER_LAUFEN2;
ActionDelay = 1.0f;
AnimPhase = 3;
AnimStart = 3;
AnimEnde = 21;
AnimSpeed = 0.4f;
if (BlickRichtung == RECHTS)
xSpeed = 20.0f;
else
xSpeed = -20.0f;
}
}
}
}
} break;
case GEGNER_EINFLIEGEN: // EvilHurri walks into the level
{
Energy = 6000;
DamageTaken = 0.0f;
xSpeed = +15.0f;
if (xPos >= Value1 + 30) // Walked in far enough?
{
Handlung = GEGNER_STEHEN;
ActionDelay = 10.0f;
}
} break;
case GEGNER_CRUSHEN: {
ActionDelay -= 1.0f SYNC;
if (ActionDelay < 0.0f) {
ActionDelay = 1.0f;
if (BlickRichtung == RECHTS)
PartikelSystem.PushPartikel(xPos, yPos, EVILSMOKE2);
else
PartikelSystem.PushPartikel(xPos, yPos, EVILSMOKE);
}
if ((BlickRichtung == RECHTS && xPos > pAim->xpos + 140) ||
(BlickRichtung == LINKS && xPos + 140 < pAim->xpos)) {
Handlung = GEGNER_STEHEN;
ActionDelay = 10.0f;
}
} break;
case GEGNER_CRUSHEN2: // Only go to the middle of the screen, then use the lightning attack
{
ActionDelay -= 1.0f SYNC;
if (ActionDelay < 0.0f) {
ActionDelay = 1.0f;
if (BlickRichtung == RECHTS)
PartikelSystem.PushPartikel(xPos, yPos, EVILSMOKE2);
else
PartikelSystem.PushPartikel(xPos, yPos, EVILSMOKE);
}
if ((BlickRichtung == RECHTS && xPos + 35 > Value1 + 320) ||
(BlickRichtung == LINKS && xPos + 35 < Value1 + 320)) {
Handlung = GEGNER_SPECIAL;
AnimPhase = 0;
AnimEnde = 0;
ActionDelay = 0.0f;
xSpeed = 0.0f;
SoundManager.PlayWave(100, 128, 11025, SOUND_BLITZSTART);
}
} break;
case GEGNER_LAUFEN: // Run toward the player
{
// Fire
ActionDelay -= 1.0f SYNC;
if (ActionDelay < 0.0f) {
ActionDelay = 2.5f;
// PartikelSystem.PushPartikel(xPos+30, yPos+28, BULLET);
SoundManager.PlayWave(100, random(255), 8000 + random(4000), SOUND_CANON);
if (BlickRichtung == RECHTS) {
PartikelSystem.PushPartikel(xPos + 50, yPos + 20, SMOKE);
Projectiles.PushProjectile(xPos + 55, yPos + 30, EVILSHOT);
} else {
PartikelSystem.PushPartikel(xPos + 10, yPos + 20, SMOKE);
Projectiles.PushProjectile(xPos, yPos + 30, EVILSHOT2);
}
}
// Jump?
if (pAim->ypos + 10 < yPos && ySpeed == 0.0f && random(10) == 0) {
ySpeed = -43.0f;
yAcc = 8.0f;
yPos -= 2.0f;
AnimPhase = 48;
AnimEnde = 55;
AnimSpeed = 2.0f;
}
if (ySpeed != 0.0f) {
// Landed?
if (blocku & BLOCKWERT_WAND) {
ySpeed = 0.0f;
yAcc = 0.0f;
AnimPhase = 3;
AnimStart = 3;
AnimEnde = 21;
AnimSpeed = 0.4f;
TileEngine.BlockUnten(xPos, yPos, xPosOld, yPosOld, GegnerRect[GegnerArt]);
}
}
if (((BlickRichtung == RECHTS && xPos > pAim->xpos + 70) ||
(BlickRichtung == LINKS && xPos + 70 < pAim->xpos)) &&
ySpeed == 0.0f) {
Handlung = GEGNER_STEHEN;
ActionDelay = 10.0f;
}
} break;
case GEGNER_LAUFEN2: // Run to the middle of the screen and cause mischief there
{
// Fire
ActionDelay -= 1.0f SYNC;
if (ActionDelay < 0.0f) {
ActionDelay = 2.5f;
// PartikelSystem.PushPartikel(xPos+30, yPos+28, BULLET);
SoundManager.PlayWave(100, random(255), 8000 + random(4000), SOUND_CANON);
if (BlickRichtung == RECHTS) {
PartikelSystem.PushPartikel(xPos + 50, yPos + 20, SMOKE);
Projectiles.PushProjectile(xPos + 55, yPos + 30, EVILSHOT);
} else {
PartikelSystem.PushPartikel(xPos + 10, yPos + 20, SMOKE);
Projectiles.PushProjectile(xPos, yPos + 30, EVILSHOT2);
}
}
if ((BlickRichtung == RECHTS && xPos + 35 > Value1 + 320) ||
(BlickRichtung == LINKS && xPos + 35 < Value1 + 320)) {
Handlung = GEGNER_SPECIAL;
AnimPhase = 0;
AnimEnde = 0;
ActionDelay = 0.0f;
xSpeed = 0.0f;
SoundManager.PlayWave(100, 128, 11025, SOUND_BLITZSTART);
}
} break;
// Boss fires in a circle around itself
case GEGNER_BOMBARDIEREN: {
ActionDelay -= 1.0f SYNC;
if (ActionDelay < 0.0f) {
SoundManager.PlayWave(100, random(255), 8000 + random(4000), SOUND_CANON);
PartikelSystem.PushPartikel(xPos + (AnimPhase - 36) * 5, yPos - 23 + abs(AnimPhase - 41) * 8, SMOKE);
// PartikelSystem.PushPartikel(xPos+30, yPos+28, BULLET);
Projectiles.PushProjectile(xPos + 5 + (AnimPhase - 36) * 5, yPos - 10 + abs(AnimPhase - 41) * 8,
EVILROUND1 + (AnimPhase - 35));
ActionDelay = 3.0f;
AnimPhase++;
if (AnimPhase > 46) {
Handlung = GEGNER_STEHEN;
AnimPhase = 0;
ActionDelay = 10.0f;
}
}
} break;
        // Fires a lightning bolt into the sky and shots rain down from above
case GEGNER_SPECIAL: {
ActionDelay += 1.0f SYNC;
if (ActionDelay > 5.0f && ActionDelay < 20.0f && AnimPhase == 0)
AnimPhase = 40;
if (ActionDelay > 25.0f && AnimPhase != 0) {
AnimPhase = 0;
for (int i = 0; i < 30; i++)
PartikelSystem.PushPartikel(xPos + 30 + random(10), yPos + random(10), LASERFUNKE2);
Projectiles.PushProjectile(xPos, yPos - 20, EVILBLITZ);
SoundManager.PlayWave(100, 128, 11025, SOUND_SPIDERGRENADE);
}
            // Flash the screen
if (ActionDelay > 25.0f && ActionDelay < 35.0f) {
int w = 128 - int(ActionDelay - 25.0f) * 12;
RenderRect(0, 0, 640, 480, D3DCOLOR_RGBA(255, 255, 128, w));
}
if (ActionDelay > 70.0f) {
if (random(2) == 0) {
Handlung = GEGNER_AUFRICHTEN;
BlickRichtung = LINKS;
AnimPhase = 3;
AnimStart = 3;
AnimEnde = 21;
xSpeed = -20;
} else {
Handlung = GEGNER_AUFRICHTENZWEI;
BlickRichtung = RECHTS;
AnimPhase = 3;
AnimStart = 3;
AnimEnde = 21;
xSpeed = 20;
}
}
} break;
case GEGNER_AUFRICHTEN: {
if (xPos < Value1 + 20) {
BlickRichtung = RECHTS;
xSpeed = 15.0f;
ActionDelay = 4.0f;
Handlung = GEGNER_AUSSPUCKEN;
}
} break;
case GEGNER_AUFRICHTENZWEI: {
if (xPos >= Value1 + 550) {
BlickRichtung = LINKS;
xSpeed = -15.0f;
ActionDelay = 4.0f;
Handlung = GEGNER_AUSSPUCKENZWEI;
}
} break;
        // Run across the screen while firing upwards
case GEGNER_AUSSPUCKEN: {
            // Reached the edge?
if (xPos >= Value1 + 550) {
xSpeed = 0;
AnimPhase = 0;
ActionDelay = 10.0f;
Handlung = GEGNER_STEHEN;
}
ActionDelay -= 1.0f SYNC;
if (ActionDelay < 0.0f) {
                // Shoot?
if (AnimPhase != 40) {
xSpeed = 0.0f;
AnimEnde = 0;
AnimPhase = 40;
Projectiles.PushProjectile(xPos + 20, yPos + 10, ARCSHOT);
Projectiles.PushProjectile(xPos + 20, yPos + 10, ARCSHOTLEFT);
Projectiles.PushProjectile(xPos + 20, yPos + 10, ARCSHOTRIGHT);
SoundManager.PlayWave(100, 128, 11025, SOUND_SPIDERGRENADE);
ActionDelay = 3.0f;
}
                // or keep running
else {
xSpeed = 15.0f;
AnimPhase = 3;
AnimStart = 3;
AnimEnde = 21;
ActionDelay = 7.0f;
}
}
} break;
        // Run across the screen while firing upwards
case GEGNER_AUSSPUCKENZWEI: {
            // Reached the edge?
if (xPos <= Value1 + 20) {
xSpeed = 0;
AnimPhase = 0;
ActionDelay = 10.0f;
Handlung = GEGNER_STEHEN;
}
ActionDelay -= 1.0f SYNC;
if (ActionDelay < 0.0f) {
                // Shoot?
if (AnimPhase != 40) {
xSpeed = 0.0f;
AnimEnde = 0;
AnimPhase = 40;
Projectiles.PushProjectile(xPos, yPos + 10, ROCKETSPIDER);
SoundManager.PlayWave(100, 128, 11025, SOUND_ROCKET);
ActionDelay = 4.0f;
}
                // or keep running
else {
xSpeed = -15.0f;
AnimPhase = 3;
AnimStart = 3;
AnimEnde = 21;
ActionDelay = 10.0f;
}
}
} break;
case GEGNER_EXPLODIEREN: {
Energy = 80.0f;
ActionDelay += 1.0f SYNC;
if (ActionDelay >= 50.0f)
Energy = 0.0f;
AnimCount -= 1.0f SYNC;
if (AnimCount < 0.0f) {
AnimCount = 2.0f;
SoundManager.PlayWave(100, 128, 8000 + random(4000), SOUND_EXPLOSION1);
PartikelSystem.PushPartikel(xPos - 20 + random(70), yPos + random(80), EXPLOSION_MEDIUM);
PartikelSystem.PushPartikel(xPos - 20 + random(70), yPos + random(80), EXPLOSION_MEDIUM2);
PartikelSystem.PushPartikel(xPos - 50 + random(70), yPos - 30 + random(80), EXPLOSION_BIG);
}
} break;
default:
break;
} // switch
}
// --------------------------------------------------------------------------------------
// EvilHurri explodes
// --------------------------------------------------------------------------------------
void GegnerEvilHurri::GegnerExplode() {
Player[0].Score += 9000;
for (int i = 0; i < 10; i++)
PartikelSystem.PushPartikel(xPos - 20 + random(70), yPos + random(80), SPLITTER);
for (int i = 0; i < 10; i++)
PartikelSystem.PushPartikel(xPos - 20 + random(70), yPos + random(80), EXPLOSION_MEDIUM2);
SoundManager.PlayWave(100, 128, 11025, SOUND_EXPLOSION2);
ScrolltoPlayeAfterBoss();
HUD.BossHUDActive = false;
}
|
Knowledge-based vessel position prediction using historical AIS data

The improvement in Maritime Situational Awareness (MSA), or the capability of understanding events, circumstances and activities within and impacting the maritime environment, is nowadays of paramount importance for safety and security. Extending the coverage of existing technologies such as the Automatic Identification System (AIS) makes it possible to integrate and enrich services and information already available in the maritime domain. In this scenario, the prediction of vessel positions is essential to increase MSA and build the Maritime Situational Picture (MSP), namely the map of the ships located in a certain Area Of Interest (AOI) at a desired time. Integrating de-facto maritime traffic route information into the vessel prediction process has the appealing potential to provide a more accurate picture of what is happening at sea by exploiting the knowledge of historical vessel positioning data. In this paper, we propose a Bayesian vessel prediction algorithm based on a Particle Filter (PF). The system, aided by the knowledge of traffic routes, aims to enhance the quality of the vessel position prediction. Experimental results are presented, evaluating the algorithm in the specific area between the Gibraltar passage and the Dover Strait using real AIS data.
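As a rough illustration of the particle-filter approach the abstract describes, here is a minimal prediction-step sketch in Python. It is not the paper's algorithm: the motion model, the "route prior" that nudges headings toward an assumed traffic-route bearing, and all names and parameters (N_PARTICLES, ROUTE_HEADING, ROUTE_PULL, the noise scales) are invented for illustration.

# Minimal, hypothetical particle-filter sketch for vessel position prediction.
# The motion model propagates each particle along its course with noise; a
# crude "route prior" pulls headings toward an assumed traffic-route bearing.
import math
import random

N_PARTICLES = 500                    # assumed particle count
ROUTE_HEADING = math.radians(45.0)   # assumed de-facto route bearing
HEADING_NOISE = math.radians(5.0)    # assumed process noise on heading
SPEED_NOISE = 0.2                    # knots, assumed
ROUTE_PULL = 0.1                     # strength of the route prior, assumed

def predict(particles, dt_hours):
    """Propagate (x, y, speed, heading) particles forward by dt_hours."""
    out = []
    for x, y, v, h in particles:
        # Nudge heading toward the historical route, then add process noise.
        h += ROUTE_PULL * (ROUTE_HEADING - h)
        h += random.gauss(0.0, HEADING_NOISE)
        v = max(0.0, v + random.gauss(0.0, SPEED_NOISE))
        x += v * dt_hours * math.sin(h)  # x east, y north (nautical miles)
        y += v * dt_hours * math.cos(h)
        out.append((x, y, v, h))
    return out

def estimate(particles):
    """Mean position over all particles."""
    n = len(particles)
    return (sum(p[0] for p in particles) / n,
            sum(p[1] for p in particles) / n)

if __name__ == "__main__":
    # Initialize particles around a last known AIS fix at (0, 0), ~12 kn NE.
    cloud = [(random.gauss(0, 0.1), random.gauss(0, 0.1),
              random.gauss(12.0, 0.5), random.gauss(ROUTE_HEADING, 0.05))
             for _ in range(N_PARTICLES)]
    cloud = predict(cloud, dt_hours=2.0)
    print("predicted position after 2 h:", estimate(cloud))
|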
from auction import email_service


class Owner:
    def __init__(self, email: str) -> None:
        self.email = email

    def new_bid_notify(self, bidder_name: str, amount: int) -> None:
        email_service.send_email(
            self.email,
            f"New bid from {bidder_name} in your auction ({amount})",
        )
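A minimal usage sketch (hypothetical values; assumes the auction package's email_service module is importable and configured):

# Hypothetical usage: notify the owner when a new bid arrives.
owner = Owner(email="seller@example.com")
owner.new_bid_notify(bidder_name="alice", amount=150)
|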
export interface MessageSignRequest {
message: string // Message to be signed
publicKey: string // PublicKey of the signer
signature: string // Signature of the message
callbackURL?: string // eg. https://airgap.it/?data={{data}}
}
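A hypothetical example of a populated request (all field values are placeholders, not real keys or signatures):

// Hypothetical example; values are placeholders only.
const request: MessageSignRequest = {
  message: 'Please sign this message',
  publicKey: '02a1b2c3...',                      // placeholder public key
  signature: '3045022100...',                    // placeholder signature
  callbackURL: 'https://airgap.it/?data={{data}}',
}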
|
/**
* A 3-D triangulated manifold oriented surface, possibly with boundary.
* <p>
* This class currently enables construction of a surface from a set of
* points, using the algorithm of Cohen-Steiner and Da, 2002, A greedy
* Delaunay based surface reconstruction algorithm: The Visual Computer,
* v. 20, p. 4-16.
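 * <p>
 * A minimal usage sketch (hypothetical; assumes arrays x, y, z of point
 * coordinates, each of length n):
 * <pre>
 * TriSurf surf = new TriSurf();
 * TriSurf.Node[] nodes = new TriSurf.Node[n];
 * for (int i=0; i&lt;n; ++i)
 *   nodes[i] = new TriSurf.Node(x[i],y[i],z[i]);
 * surf.addNodes(nodes);
 * TriSurf.FaceIterator fi = surf.getFaces();
 * while (fi.hasNext()) {
 *   TriSurf.Face face = fi.next(); // process each reconstructed triangle
 * }
 * </pre>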
*
* @author Dave Hale, Colorado School of Mines
* @version 2004.06.14, 2007.01.12
*/
public class TriSurf {
/**
* A node, which can be added or removed from the surface.
*/
public static class Node {
/**
* An integer index associated with this node.
* Intended for external use only; the surface does not use it.
*/
public int index;
/**
* A data object associated with this node.
* Intended for external use only; the surface does not use it.
*/
public Object data;
/**
* Constructs a node with specified coordinates.
* @param x the x coordinate.
* @param y the y coordinate.
* @param z the z coordinate.
*/
public Node(float x, float y, float z) {
_meshNode = new TetMesh.Node(x,y,z);
_meshNode.data = this;
}
/**
* Returns the x coordinate of this node.
* @return the x coordinate.
*/
public final float x() {
return _meshNode.x();
}
/**
* Returns the y coordinate of this node.
* @return the y coordinate.
*/
public final float y() {
return _meshNode.y();
}
/**
* Returns the z coordinate of this node.
* @return the z coordinate.
*/
public final float z() {
return _meshNode.z();
}
/**
* Determines whether this node is in the surface.
* @return true, if in surface; false, otherwise.
*/
public boolean isInSurface() {
return _face!=null;
}
/**
* Determines whether this node is on the surface boundary.
* @return true, if on the boundary; false, otherwise.
*/
public boolean isOnBoundary() {
return _edgeBefore!=null;
}
/**
* Returns the edge before this node on the surface boundary.
* Returns null, if this node is not on the surface boundary.
* If on the surface boundary, this is node B of the returned edge.
* @return previous edge; null if node not on surface boundary.
*/
public Edge edgeBefore() {
return _edgeBefore;
}
/**
* Returns the edge after this node on the surface boundary.
* Returns null, if this node is not on the surface boundary.
* If on the surface boundary, this is node A of the returned edge.
* @return next edge; null if node not on surface boundary.
*/
public Edge edgeAfter() {
return _edgeAfter;
}
/**
* Returns the area-weighted average normal vector for this node.
* @return array containing the {X,Y,Z} components of the normal vector.
*/
public float[] normalVector() {
float[] vn = new float[3];
normalVector(vn);
return vn;
}
/**
* Computes the area-weighted average normal vector for this node.
* @param vn array to contain the {X,Y,Z} components of the normal vector.
*/
public void normalVector(float[] vn) {
vn[0] = vn[1] = vn[2] = 0.0f;
FaceIterator fi = getFaces();
while (fi.hasNext()) {
Face face = fi.next();
accNormalVector(face,vn);
}
float x = vn[0];
float y = vn[1];
float z = vn[2];
float s = 1.0f/(float)sqrt(x*x+y*y+z*z);
vn[0] *= s;
vn[1] *= s;
vn[2] *= s;
}
/**
* Returns the number of faces that reference this node.
* @return the number of faces.
*/
public int countFaces() {
int nface = 0;
FaceIterator fi = getFaces();
while (fi.hasNext()) {
fi.next();
++nface;
}
return nface;
}
/**
* Gets an iterator for all faces that reference this node.
* @return the iterator.
*/
public FaceIterator getFaces() {
return new FaceIterator() {
public boolean hasNext() {
return _next!=null;
}
public Face next() {
if (_next==null)
throw new NoSuchElementException();
Face face = _next;
loadNext();
return face;
}
private Face _next = _face;
private boolean _ccw = true;
private void loadNext() {
if (_ccw) {
_next = faceNext(_next);
if (_next==null) {
_ccw = false;
_next = _face;
} else if (_next==_face) {
_next = null;
}
}
if (!_ccw) {
_next = facePrev(_next);
}
}
};
}
public String toString() {
return _meshNode.toString();
}
private TetMesh.Node _meshNode;
private Face _face; // null if node not in surface
private Edge _edgeBefore; // non-null if on surface boundary
private Edge _edgeAfter; // non-null if on surface boundary
private void validate() {
assert _meshNode!=null;
assert _face==null || _face.references(this);
if (_edgeBefore==null) {
assert _edgeAfter==null;
} else {
assert this==_edgeBefore.nodeB();
assert this==_edgeAfter.nodeA();
assert this==_edgeBefore.nodeA().edgeAfter().nodeB();
assert this==_edgeAfter.nodeB().edgeBefore().nodeA();
}
assert _edgeBefore==null && _edgeAfter==null ||
_edgeBefore!=null && this==_edgeBefore.nodeB() &&
_edgeAfter!=null && this==_edgeAfter.nodeA();
}
private void init() {
_face = null;
_edgeBefore = null;
_edgeAfter = null;
}
private void setFace(Face face) {
_face = face;
}
private void setEdgeBefore(Edge edgeBefore) {
_edgeBefore = edgeBefore;
}
private void setEdgeAfter(Edge edgeAfter) {
_edgeAfter = edgeAfter;
}
private Face face() {
return _face;
}
private Face faceNext(Face face) {
if (this==face.nodeA()) {
return face.faceB();
} else if (this==face.nodeB()) {
return face.faceC();
} else {
return face.faceA();
}
}
private Face facePrev(Face face) {
if (this==face.nodeA()) {
return face.faceC();
} else if (this==face.nodeB()) {
return face.faceA();
} else {
return face.faceB();
}
}
private static void accNormalVector(Face face, float[] v) {
Node na = face.nodeA();
Node nb = face.nodeB();
Node nc = face.nodeC();
float xa = na.x();
float ya = na.y();
float za = na.z();
float xb = nb.x();
float yb = nb.y();
float zb = nb.z();
float xc = nc.x();
float yc = nc.y();
float zc = nc.z();
float x0 = xc-xa;
float y0 = yc-ya;
float z0 = zc-za;
float x1 = xa-xb;
float y1 = ya-yb;
float z1 = za-zb;
v[0] += y0*z1-y1*z0;
v[1] += x1*z0-x0*z1;
v[2] += x0*y1-x1*y0;
}
}
/**
* A type-safe iterator for nodes.
*/
public interface NodeIterator {
public boolean hasNext();
public Node next();
}
/**
* A directed edge.
* <p>
* An edge is specified by two nodes A and B. The order of these nodes
* is significant. An edge is directed from A to B.
* <p>
* Every edge has a mate. An edge and its mate reference the same two
* nodes, but in the opposite order, so they have opposite directions.
* Therefore, an edge does not equal its mate.
* <p>
* An edge within the surface has a left and right face. An edge on the
* boundary of the surface has a right face and a null left face. An edge
* on the boundary is linked to the previous and next edge on the boundary.
*
*/
public static class Edge {
public Node nodeA() {
return (Node)_meshEdge.nodeA().data;
}
public Node nodeB() {
return (Node)_meshEdge.nodeB().data;
}
public Face faceLeft() {
return _faceLeft;
}
public Face faceRight() {
return _faceRight;
}
public Node nodeLeft() {
return (_faceLeft!=null)?otherNode(_faceLeft,nodeA(),nodeB()):null;
}
public Node nodeRight() {
return (_faceRight!=null)?otherNode(_faceRight,nodeA(),nodeB()):null;
}
public Edge edgeBefore() {
return nodeA()._edgeBefore;
}
public Edge edgeAfter() {
return nodeB()._edgeAfter;
}
public Edge mate() {
return new Edge(_meshEdge.mate(),_faceRight);
}
public boolean isInSurface() {
return _faceRight!=null;
}
public boolean isOnBoundary() {
return _faceLeft==null;
}
public boolean equals(Object object) {
if (object==this)
return true;
if (object!=null && object.getClass()==getClass()) {
Edge other = (Edge)object;
return other.nodeA()==nodeA() && other.nodeB()==nodeB();
}
return false;
}
public int hashCode() {
return nodeA().hashCode()^nodeB().hashCode();
}
private TetMesh.Edge _meshEdge;
private Face _faceLeft; // null if edge on surface boundary
private Face _faceRight; // null if edge not in surface
private void validate() {
assert _meshEdge!=null;
assert _faceLeft==null || _faceLeft.references(nodeA(),nodeB());
assert _faceRight==null || _faceRight.references(nodeA(),nodeB());
}
private Edge(TetMesh.Edge meshEdge, Face face) {
_meshEdge = meshEdge;
Node nodeA = (Node)meshEdge.nodeA().data;
Node nodeB = (Node)meshEdge.nodeB().data;
Node nodeC = (face!=null)?otherNode(face,nodeA,nodeB):null;
Check.argument(face==null || nodeC!=null,"face references edge");
if (nodeC!=null) {
if (nodesInOrder(face,nodeA,nodeB,nodeC)) {
_faceLeft = face;
_faceRight = face.faceNabor(nodeC);
} else {
_faceLeft = face.faceNabor(nodeC);
_faceRight = face;
}
}
}
}
/**
* A type-safe iterator for edges.
*/
public interface EdgeIterator {
public boolean hasNext();
public Edge next();
}
/**
* One triangular face in the surface.
* Each face references three nodes (A, B, and C), and up to three
* face neighbors (nabors) opposite those nodes. A null nabor denotes
* an edge (opposite the corresponding node) on the surface boundary.
* The nodes A, B, and C are in CCW order.
*/
public static class Face {
/**
* An integer index associated with this face.
* Intended for external use only; the surface does not use it.
*/
public int index;
/**
* A data object associated with this face.
* Intended for external use only; the surface does not use it.
*/
public Object data;
/**
* Returns the node A referenced by this face.
* @return the node A.
*/
public final Node nodeA() {
return (Node)_meshFace.nodeA().data;
}
/**
* Returns the node B referenced by this face.
* @return the node B.
*/
public final Node nodeB() {
return (Node)_meshFace.nodeB().data;
}
/**
* Returns the node C referenced by this face.
* @return the node C.
*/
public final Node nodeC() {
return (Node)_meshFace.nodeC().data;
}
/**
* Returns the face nabor A (opposite node A) referenced by this face.
* @return the face nabor A.
*/
public final Face faceA() {
return _faceA;
}
/**
* Returns the face nabor B (opposite node B) referenced by this face.
* @return the face nabor B.
*/
public final Face faceB() {
return _faceB;
}
/**
* Returns the face nabor C (opposite node C) referenced by this face.
* @return the face nabor C.
*/
public final Face faceC() {
return _faceC;
}
/**
* Returns the mate of this face.
* @return the mate of this face.
*/
public Face mate() {
return new Face(_meshFace.mate());
}
/**
* Returns the node referenced by this face that is nearest to
* the point with specified coordinates.
* @param x the x coordinate.
* @param y the y coordinate.
* @param z the z coordinate.
* @return the node nearest to the point (x,y,z).
*/
public final Node nodeNearest(float x, float y, float z) {
Node na = nodeA();
Node nb = nodeB();
Node nc = nodeC();
double da = distanceSquared(na,x,y,z);
double db = distanceSquared(nb,x,y,z);
double dc = distanceSquared(nc,x,y,z);
double dmin = da;
Node nmin = na;
if (db<dmin) {
dmin = db;
nmin = nb;
}
if (dc<dmin) {
dmin = dc;
nmin = nc;
}
return nmin;
}
/**
* Gets the face nabor opposite the specified node.
* @param node a node.
* @return the face nabor opposite the node.
*/
public final Face faceNabor(Node node) {
if (node==nodeA()) return _faceA;
if (node==nodeB()) return _faceB;
if (node==nodeC()) return _faceC;
Check.argument(false,"node is referenced by face");
return null;
}
/**
* Gets the node in the specified face nabor that is opposite this face.
* @param faceNabor a face nabor.
* @return the node in the specified face nabor.
*/
public final Node nodeNabor(Face faceNabor) {
if (faceNabor._faceA==this) return faceNabor.nodeA();
if (faceNabor._faceB==this) return faceNabor.nodeB();
if (faceNabor._faceC==this) return faceNabor.nodeC();
Check.argument(false,"faceNabor is a nabor of face");
return null;
}
/**
* Computes the circumcenter of this face.
* @param cc array of circumcenter coordinates {xc,yc,zc}.
* @return radius-squared of circumcircle.
*/
public double centerCircle(double[] cc) {
Node na = nodeA();
Node nb = nodeB();
Node nc = nodeC();
double xa = na.x();
double ya = na.y();
double za = na.z();
double xb = nb.x();
double yb = nb.y();
double zb = nb.z();
double xc = nc.x();
double yc = nc.y();
double zc = nc.z();
Geometry.centerCircle3D(xa,ya,za,xb,yb,zb,xc,yc,zc,cc);
double xcc = cc[0];
double ycc = cc[1];
double zcc = cc[2];
double dx = xcc-xc;
double dy = ycc-yc;
      double dz = zcc-zc;
return dx*dx+dy*dy+dz*dz;
}
/**
* Returns the circumcenter of this face.
* @return array of circumcenter coordinates {xc,yc,zc}.
*/
public double[] centerCircle() {
double[] cc = new double[3];
centerCircle(cc);
return cc;
}
/**
* Returns the area of this face.
* @return the area.
*/
public float area() {
return TriSurf.normalVector(_meshFace,(float[])null);
}
/**
* Returns the normal vector for this face.
* @return array containing the {X,Y,Z} components of the normal vector.
*/
public float[] normalVector() {
float[] vn = new float[3];
TriSurf.normalVector(_meshFace,vn);
return vn;
}
/**
* Computes the normal vector and returns the area for this face.
* @param vn array to contain the {X,Y,Z} components of the normal vector.
* @return the area.
*/
public float normalVector(float[] vn) {
return TriSurf.normalVector(_meshFace,vn);
}
/**
* Determines whether this face references the specified node.
* @param node the node.
* @return true, if this face references the node; false, otherwise.
*/
public boolean references(Node node) {
return node==nodeA() || node==nodeB() || node==nodeC();
}
/**
* Determines whether this face references the specified nodes.
* @param node1 a node.
* @param node2 a node.
* @return true, if this face references the nodes; false, otherwise.
*/
public boolean references(Node node1, Node node2) {
Node na = nodeA();
Node nb = nodeB();
Node nc = nodeC();
if (node1==na) {
return node2==nb || node2==nc;
} else if (node1==nb) {
return node2==na || node2==nc;
} else if (node1==nc) {
return node2==na || node2==nb;
} else {
return false;
}
}
/**
* Determines whether this face references the specified nodes.
* @param node1 a node.
* @param node2 a node.
* @param node3 a node.
* @return true, if this face references the nodes; false, otherwise.
*/
public boolean references(Node node1, Node node2, Node node3) {
Node na = nodeA();
Node nb = nodeB();
Node nc = nodeC();
if (node1==na) {
if (node2==nb) {
return node3==nc;
} else if (node2==nc) {
return node3==nb;
} else {
return false;
}
} else if (node1==nb) {
if (node2==na) {
return node3==nc;
} else if (node2==nc) {
return node3==na;
} else {
return false;
}
} else if (node1==nc) {
if (node2==na) {
return node3==nb;
} else if (node2==nb) {
return node3==na;
} else {
return false;
}
} else {
return false;
}
}
private TetMesh.Face _meshFace;
private Face _faceA,_faceB,_faceC;
private int _mark;
private void validate() {
assert _meshFace!=null;
}
/**
* Constructs a new face.
*/
private Face(TetMesh.Face meshFace) {
_meshFace = meshFace;
}
}
/**
* A type-safe iterator for faces.
*/
public interface FaceIterator {
public boolean hasNext();
public Face next();
}
/**
* A dynamically growing list of faces.
*/
public static class FaceList {
/**
* Appends the specified face to this list.
* @param face the face to append.
*/
public final void add(Face face) {
if (_n==_a.length) {
Face[] t = new Face[_a.length*2];
System.arraycopy(_a,0,t,0,_n);
_a = t;
}
_a[_n++] = face;
}
/**
* Removes the face with specified index from this list.
* @param index the index of the face to remove.
* @return the face removed.
*/
public final Face remove(int index) {
Face face = _a[index];
--_n;
if (_n>index)
System.arraycopy(_a,index+1,_a,index,_n-index);
return face;
}
/**
     * Trims this list so that its array length equals the number of faces.
* @return the array of faces in this list, after trimming.
*/
public final Face[] trim() {
if (_n<_a.length) {
Face[] t = new Face[_n];
System.arraycopy(_a,0,t,0,_n);
_a = t;
}
return _a;
}
/**
* Removes all faces from this list.
*/
public final void clear() {
_n = 0;
}
/**
* Returns the number of faces in this list.
* @return the number of faces.
*/
public final int nface() {
return _n;
}
/**
* Returns (by reference) the array of faces in this list.
* @return the array of faces.
*/
public final Face[] faces() {
return _a;
}
private int _n = 0;
private Face[] _a = new Face[64];
}
/**
* Adds the specified node to this surface, if not already present.
* @param node the node.
* @return true, if node was added; false, otherwise.
*/
public synchronized boolean addNode(Node node) {
boolean added = _mesh.addNode(node._meshNode);
if (added)
rebuild();
return added;
}
/**
* Adds the specified nodes to this surface, if not already present.
* @param nodes the nodes.
* @return true, if all nodes were added; false, otherwise.
*/
public synchronized boolean addNodes(Node[] nodes) {
int nnode = nodes.length;
int nadded = 0;
for (int inode=0; inode<nnode; ++inode) {
if (_mesh.addNode(nodes[inode]._meshNode))
++nadded;
}
if (nadded>0)
rebuild();
return nadded==nnode;
}
/**
* Removes the specified node from this surface, if present.
* @param node the node.
* @return true, if node was removed; false, otherwise.
*/
public synchronized boolean removeNode(Node node) {
boolean removed = _mesh.removeNode(node._meshNode);
if (removed)
rebuild();
return removed;
}
/**
* Removes the specified nodes from this surface, if present.
* @param nodes the nodes.
* @return true, if all nodes were removed; false, otherwise.
*/
public synchronized boolean removeNodes(Node[] nodes) {
int nnode = nodes.length;
int nremoved = 0;
for (int inode=0; inode<nnode; ++inode) {
if (_mesh.removeNode(nodes[inode]._meshNode))
++nremoved;
}
if (nremoved>0)
rebuild();
return nremoved==nnode;
}
/**
* Returns the number of nodes in the surface.
* @return the number of nodes.
*/
public int countNodes() {
return _mesh.countNodes();
}
/**
* Returns the number of faces in the surface.
* @return the number of faces.
*/
public int countFaces() {
return _faceMap.size();
}
/**
* Gets an iterator for all nodes in this surface.
* @return the iterator.
*/
public synchronized NodeIterator getNodes() {
return new NodeIterator() {
public final boolean hasNext() {
return _i.hasNext();
}
public final Node next() {
return (Node)_i.next().data;
}
private TetMesh.NodeIterator _i = _mesh.getNodes();
};
}
/**
* Gets an iterator for all faces in this surface.
* @return the iterator.
*/
public synchronized FaceIterator getFaces() {
return new FaceIterator() {
public final boolean hasNext() {
return _i.hasNext();
}
public final Face next() {
return _i.next();
}
private Iterator<Face> _i = _faceMap.values().iterator();
};
}
/**
* Finds the node nearest to the point with specified coordinates.
* @param x the x coordinate.
* @param y the y coordinate.
* @param z the z coordinate.
* @return the nearest node.
*/
public synchronized Node findNodeNearest(float x, float y, float z) {
TetMesh.Node meshNode = _mesh.findNodeNearest(x,y,z);
return (Node)meshNode.data;
}
/**
* Gets an array of face nabors of the specified node.
* @param node the node for which to get nabors.
* @return the array of nabors.
*/
public synchronized Face[] getFaceNabors(Node node) {
FaceList nabors = new FaceList();
getFaceNabors(node,nabors);
return nabors.trim();
}
/**
* Appends the face nabors of the specified node to the specified list.
* @param node the node for which to get nabors.
* @param nabors the list to which nabors are appended.
*/
public synchronized void getFaceNabors(Node node, FaceList nabors) {
clearFaceMarks();
getFaceNabors(node,node._face,nabors);
}
/**
* Returns a face that references the specified node.
* @param node the node.
* @return a face that references the specified node; or null, if
* the node is not in the surface or the surface has no faces.
*/
public Face findFace(Node node) {
return node._face;
}
/**
* Returns a face that references the specified nodes.
* @param node1 a node.
* @param node2 a node.
* @return a face that references the specified nodes; or null,
* if a node is not in the surface or the surface has no faces.
*/
public synchronized Face findFace(Node node1, Node node2) {
Face face = findFace(node1);
if (face!=null) {
// clearFaceMarks();
// return findFace(face,node1,node2);
if (face.references(node2))
return face;
Face face1 = face;
face = node1.faceNext(face1);
while (face!=face1 && face!=null) {
if (face.references(node2))
return face;
face = node1.faceNext(face);
}
if (face==null) {
face = node1.facePrev(face1);
while (face!=face1 && face!=null) {
if (face.references(node2))
return face;
face = node1.facePrev(face);
}
}
}
return null;
}
/**
* Returns a face that references the specified nodes.
* @param node1 a node.
* @param node2 a node.
* @param node3 a node.
* @return a face that references the specified nodes; or null,
* if a node is not in the surface or the surface has no faces.
*/
public synchronized Face findFace(Node node1, Node node2, Node node3) {
Face face = findFace(node1,node2);
// if (face!=null) {
// clearFaceMarks();
// face = findFace(face,node1,node2,node3);
// }
if (face!=null) {
if (face.references(node3))
return face;
face = face.faceNabor(node3);
if (face!=null && face.references(node3))
return face;
}
return null;
}
/**
* Returns a directed edge AB that references the specified nodes.
* @param nodeA a node.
* @param nodeB a node.
* @return a directed edge that references the specified nodes;
* or null, if nodes A and B are not adjacent in the surface.
*/
public synchronized Edge findEdge(Node nodeA, Node nodeB) {
TetMesh.Edge meshEdge = findMeshEdge(nodeA,nodeB);
Edge edge = getEdge(meshEdge);
if (meshEdge!=null && edge==null) {
Face face = findFace(nodeA,nodeB);
if (face!=null) {
Node nodeC = otherNode(face,nodeA,nodeB);
if (nodesInOrder(face,nodeA,nodeB,nodeC))
edge = new Edge(meshEdge,face);
}
}
return edge;
}
///////////////////////////////////////////////////////////////////////////
// private
private static class EdgeFace implements Comparable<EdgeFace> {
Edge edge;
Face face;
double grade;
EdgeFace(Edge edge, Face face, double grade) {
this.edge = edge;
this.face = face;
this.grade = grade;
}
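    // Orders edge-faces by grade (the sorted queue's last element is the
    // best); ties are broken by edge hash code so that distinct edge-faces
    // are unlikely to compare as equal in the sorted set.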
public int compareTo(EdgeFace other) {
double gradeOther = other.grade;
if (grade<gradeOther) {
return -1;
} else if (grade>gradeOther) {
return 1;
} else {
Edge edgeOther = other.edge;
int hash = edge.hashCode();
int hashOther = edgeOther.hashCode();
if (hash<hashOther) {
return -1;
} else if (hash>hashOther) {
return 1;
} else {
return 0;
}
}
}
}
private static final int FACE_MARK_MAX = Integer.MAX_VALUE-1;
// tet mesh
private TetMesh _mesh = new TetMesh();
// mesh faces not yet in surf
private Set<TetMesh.Face> _faceSet = new HashSet<TetMesh.Face>();
// mesh face -> surf face
private Map<TetMesh.Face,Face> _faceMap = new HashMap<TetMesh.Face,Face>();
// mesh edge -> surf edge-face
private Map<TetMesh.Edge,EdgeFace> _edgeMap =
new HashMap<TetMesh.Edge,EdgeFace>();
// edge-face sorted by grade
private SortedSet<EdgeFace> _edgeQueue = new TreeSet<EdgeFace>();
private int _faceMarkRed; // current value of red face mark
private int _faceMarkBlue; // current value of blue face mark
private void validate() {
NodeIterator ni = getNodes();
while (ni.hasNext()) {
Node node = ni.next();
node.validate();
}
FaceIterator fi = getFaces();
while (fi.hasNext()) {
Face face = fi.next();
face.validate();
}
}
/**
* Returns the distance squared between the specified node and a point
* with specified coordinates.
*/
private static double distanceSquared(
Node node, double x, double y, double z)
{
double dx = x-node.x();
double dy = y-node.y();
double dz = z-node.z();
return dx*dx+dy*dy+dz*dz;
}
/**
* Recursively searches for any face that references n1 and n2, given
* a face that references n1. If no such face exists, then returns null.
* Face marks must be cleared before calling this method.
*/
private Face findFace(Face face, Node n1, Node n2) {
if (face!=null) {
mark(face);
Node na = face.nodeA();
Node nb = face.nodeB();
Node nc = face.nodeC();
Face fa = face.faceA();
Face fb = face.faceB();
Face fc = face.faceC();
if (n1==na) {
if (n2==nb || n2==nc ||
fb!=null && !isMarked(fb) && (face=findFace(fb,n1,n2))!=null ||
fc!=null && !isMarked(fc) && (face=findFace(fc,n1,n2))!=null)
return face;
} else if (n1==nb) {
if (n2==nc || n2==na ||
fc!=null && !isMarked(fc) && (face=findFace(fc,n1,n2))!=null ||
fa!=null && !isMarked(fa) && (face=findFace(fa,n1,n2))!=null)
return face;
} else if (n1==nc) {
if (n2==na || n2==nb ||
fa!=null && !isMarked(fa) && (face=findFace(fa,n1,n2))!=null ||
fb!=null && !isMarked(fb) && (face=findFace(fb,n1,n2))!=null)
return face;
} else {
assert false:"n1 is referenced by face";
}
}
return null;
}
/**
* Recursively searches for any face that references n1, n2, and n3,
* given a face that references n1 and n2. If no such face exists, then
* returns null. Face marks must be cleared before calling this method.
*/
private Face findFace(Face face, Node n1, Node n2, Node n3) {
if (face!=null) {
mark(face);
Node na = face.nodeA();
Node nb = face.nodeB();
Node nc = face.nodeC();
Face fa = face.faceA();
Face fb = face.faceB();
Face fc = face.faceC();
if (n1==na) {
if (n2==nb) {
if (n3==nc ||
fc!=null && !isMarked(fc) && (face=findFace(fc,n1,n2,n3))!=null)
return face;
} else if (n2==nc) {
if (n3==nb ||
fb!=null && !isMarked(fb) && (face=findFace(fb,n1,n2,n3))!=null)
return face;
} else {
assert false:"n2 is referenced by face";
}
} else if (n1==nb) {
if (n2==na) {
if (n3==nc ||
fc!=null && !isMarked(fc) && (face=findFace(fc,n1,n2,n3))!=null)
return face;
} else if (n2==nc) {
if (n3==na ||
fa!=null && !isMarked(fa) && (face=findFace(fa,n1,n2,n3))!=null)
return face;
} else {
assert false:"n2 is referenced by face";
}
} else if (n1==nc) {
if (n2==na) {
if (n3==nb ||
fb!=null && !isMarked(fb) && (face=findFace(fb,n1,n2,n3))!=null)
return face;
} else if (n2==nb) {
if (n3==na ||
fa!=null && !isMarked(fa) && (face=findFace(fa,n1,n2,n3))!=null)
return face;
} else {
assert false:"n2 is referenced by face";
}
} else {
assert false:"n1 is referenced by face";
}
}
return null;
}
/**
* Marks the specified face (red). Marks are used during iterations
* over faces. Because faces (e.g., those faces containing a particular
* node) are linked in an unordered structure, such iterations are
* often performed by recursively visiting faces, and marks are used
* to tag faces that have already been visited.
* @param face the face to mark (red).
*/
private void mark(Face face) {
face._mark = _faceMarkRed;
}
/**
* Marks the specified face red.
* This is equivalent to simply marking the face.
* @param face the face to mark red.
*/
private void markRed(Face face) {
face._mark = _faceMarkRed;
}
/**
* Marks the specified face blue.
* @param face the face to mark blue.
*/
private void markBlue(Face face) {
face._mark = _faceMarkBlue;
}
/**
* Determines whether the specified face is marked (red).
* @param face the face.
* @return true, if the face is marked (red); false, otherwise.
*/
private boolean isMarked(Face face) {
return face._mark==_faceMarkRed;
}
/**
* Determines whether the specified face is marked red.
* @param face the face.
* @return true, if the face is marked red; false, otherwise.
*/
private boolean isMarkedRed(Face face) {
return face._mark==_faceMarkRed;
}
/**
* Determines whether the specified face is marked blue.
* @param face the face.
* @return true, if the face is marked blue; false, otherwise.
*/
private boolean isMarkedBlue(Face face) {
return face._mark==_faceMarkBlue;
}
/**
* Clears all face marks, so that no face is marked. This can usually
* be accomplished without iterating over all faces in the mesh.
*/
private synchronized void clearFaceMarks() {
// If the mark is about to overflow, we must zero all the marks.
if (_faceMarkRed==FACE_MARK_MAX) {
Iterator<Face> fi = _faceMap.values().iterator();
while (fi.hasNext()) {
Face face = fi.next();
face._mark = 0;
}
_faceMarkRed = 0;
_faceMarkBlue = 0;
}
// Usually, we simply increment/decrement the mark values.
++_faceMarkRed;
--_faceMarkBlue;
}
/**
* Recursively adds face nabors of the specified node to the specified list.
* The face marks must be cleared before calling this method. This method
* could be made shorter by using another recursive method, but this longer
* inlined version is more efficient.
*/
private void getFaceNabors(Node node, Face face, FaceList nabors) {
if (face!=null) {
mark(face);
nabors.add(face);
Node na = face.nodeA();
Node nb = face.nodeB();
Node nc = face.nodeC();
Face fa = face.faceA();
Face fb = face.faceB();
Face fc = face.faceC();
if (node==na) {
if (fb!=null && !isMarked(fb))
getFaceNabors(node,fb,nabors);
if (fc!=null && !isMarked(fc))
getFaceNabors(node,fc,nabors);
} else if (node==nb) {
if (fc!=null && !isMarked(fc))
getFaceNabors(node,fc,nabors);
if (fa!=null && !isMarked(fa))
getFaceNabors(node,fa,nabors);
} else if (node==nc) {
if (fa!=null && !isMarked(fa))
getFaceNabors(node,fa,nabors);
if (fb!=null && !isMarked(fb))
getFaceNabors(node,fb,nabors);
} else {
assert false:"node is referenced by face";
}
}
}
private Edge getEdge(TetMesh.Edge meshEdge) {
EdgeFace edgeFace = _edgeMap.get(meshEdge);
return (edgeFace!=null)?edgeFace.edge:null;
}
private EdgeFace getEdgeFace(Edge edge) {
return _edgeMap.get(edge._meshEdge);
}
private EdgeFace getBestEdgeFace() {
return (!_edgeQueue.isEmpty())?_edgeQueue.last():null;
}
private EdgeFace getNextEdgeFace(EdgeFace edgeFace) {
SortedSet<EdgeFace> headSet = _edgeQueue.headSet(edgeFace);
return (!headSet.isEmpty())?headSet.last():null;
}
private EdgeFace addEdge(Edge edge) {
// trace("addEdge: edge="+edge);
EdgeFace edgeFace = makeEdgeFace(edge);
assert edgeFace!=null:"edgeFace!=null";
Object edgeFaceOld = _edgeMap.put(edge._meshEdge,edgeFace);
assert edgeFaceOld==null:"edge was not mapped";
boolean added = _edgeQueue.add(edgeFace);
assert added:"edgeFace was not in queue";
return edgeFace;
}
private void removeEdge(Edge edge) {
// trace("removeEdge: edge="+edge);
EdgeFace edgeFace = getEdgeFace(edge);
assert edgeFace!=null:"edgeFace!=null";
Object edgeFaceOld = _edgeMap.remove(edge._meshEdge);
assert edgeFaceOld!=null:"edge was mapped";
boolean removed = _edgeQueue.remove(edgeFace);
assert removed:"edgeFace was in queue";
}
private void addFace(Face face) {
boolean removed = _faceSet.remove(face._meshFace) ||
_faceSet.remove(face._meshFace.mate());
assert removed:"face not already in surface";
Face faceOld = _faceMap.put(face._meshFace,face);
assert faceOld==null:"face not already in surface";
}
private void removeFace(Face face) {
_faceMap.remove(face._meshFace);
}
private void init(Face face) {
trace("init: face="+face);
trace(" meshFace A="+face._meshFace.nodeA());
trace(" meshFace B="+face._meshFace.nodeB());
trace(" meshFace C="+face._meshFace.nodeC());
face._faceA = null;
face._faceB = null;
face._faceC = null;
Node nodeA = face.nodeA();
Node nodeB = face.nodeB();
Node nodeC = face.nodeC();
nodeA.setFace(face);
nodeB.setFace(face);
nodeC.setFace(face);
Edge edgeCB = makeEdge(nodeC,nodeB,face);
Edge edgeBA = makeEdge(nodeB,nodeA,face);
Edge edgeAC = makeEdge(nodeA,nodeC,face);
nodeA.setEdgeBefore(edgeBA);
nodeB.setEdgeBefore(edgeCB);
nodeC.setEdgeBefore(edgeAC);
nodeA.setEdgeAfter(edgeAC);
nodeB.setEdgeAfter(edgeBA);
nodeC.setEdgeAfter(edgeCB);
addEdge(edgeCB);
addEdge(edgeBA);
addEdge(edgeAC);
addFace(face);
}
private void extend(Edge edge, Face face) {
trace("extend: edge="+edge+" face="+face);
trace(" meshEdge A="+edge._meshEdge.nodeA());
trace(" meshEdge B="+edge._meshEdge.nodeB());
trace(" meshFace A="+face._meshFace.nodeA());
trace(" meshFace B="+face._meshFace.nodeB());
trace(" meshFace C="+face._meshFace.nodeC());
assert edge.isOnBoundary();
Node nodeA = edge.nodeA();
Node nodeB = edge.nodeB();
Node nodeC = otherNode(face,nodeA,nodeB);
nodeC.setFace(face);
linkFaces(face,nodeC,edge.faceRight(),edge.nodeRight());
Edge edgeAC = makeEdge(nodeA,nodeC,face);
Edge edgeCB = makeEdge(nodeC,nodeB,face);
nodeA.setEdgeAfter(edgeAC);
nodeB.setEdgeBefore(edgeCB);
nodeC.setEdgeAfter(edgeCB);
nodeC.setEdgeBefore(edgeAC);
removeEdge(edge);
addFace(face);
addEdge(edgeAC);
addEdge(edgeCB);
}
private void fillEar(Edge edge, Face face) {
trace("fillEar: edge="+edge+" face="+face);
trace(" meshEdge A="+edge._meshEdge.nodeA());
trace(" meshEdge B="+edge._meshEdge.nodeB());
trace(" meshFace A="+face._meshFace.nodeA());
trace(" meshFace B="+face._meshFace.nodeB());
trace(" meshFace C="+face._meshFace.nodeC());
Node nodeA = edge.nodeA();
Node nodeB = edge.nodeB();
Node nodeC = otherNode(face,nodeA,nodeB);
Edge edge1 = nodeC.edgeBefore();
Edge edge2 = nodeC.edgeAfter();
Node node1 = edge1.nodeA();
Node node2 = edge2.nodeB();
if (node2==nodeA) {
linkFaces(face,nodeC,edge.faceRight(),edge.nodeRight());
linkFaces(face,nodeB,edge2.faceRight(),edge2.nodeRight());
Edge edgeCB = makeEdge(nodeC,nodeB,face);
nodeC.setEdgeAfter(edgeCB);
nodeB.setEdgeBefore(edgeCB);
nodeA.setEdgeAfter(null);
nodeA.setEdgeBefore(null);
removeEdge(edge);
removeEdge(edge2);
addFace(face);
addEdge(edgeCB);
} else if (node1==nodeB) {
linkFaces(face,nodeC,edge.faceRight(),edge.nodeRight());
linkFaces(face,nodeA,edge1.faceRight(),edge1.nodeRight());
Edge edgeAC = makeEdge(nodeA,nodeC,face);
nodeA.setEdgeAfter(edgeAC);
nodeC.setEdgeBefore(edgeAC);
nodeB.setEdgeAfter(null);
nodeB.setEdgeBefore(null);
removeEdge(edge);
removeEdge(edge1);
addFace(face);
addEdge(edgeAC);
} else {
assert false:"ear is valid";
}
}
private void fillHole(Edge edge, Face face) {
trace("fillHole: edge="+edge+" face="+face);
trace(" meshEdge A="+edge._meshEdge.nodeA());
trace(" meshEdge B="+edge._meshEdge.nodeB());
trace(" meshFace A="+face._meshFace.nodeA());
trace(" meshFace B="+face._meshFace.nodeB());
trace(" meshFace C="+face._meshFace.nodeC());
Edge edgeAB = edge;
Edge edgeBC = edge.edgeAfter();
Edge edgeCA = edge.edgeBefore();
assert edgeAB.isOnBoundary();
assert edgeBC.isOnBoundary();
assert edgeCA.isOnBoundary();
Face faceAB = edgeAB.faceRight();
Face faceBC = edgeBC.faceRight();
Face faceCA = edgeCA.faceRight();
Node nodeA = edgeAB.nodeA();
Node nodeB = edgeBC.nodeA();
Node nodeC = edgeCA.nodeA();
linkFaces(face,nodeA,faceBC,otherNode(faceBC,nodeB,nodeC));
linkFaces(face,nodeB,faceCA,otherNode(faceCA,nodeA,nodeC));
linkFaces(face,nodeC,faceAB,otherNode(faceAB,nodeA,nodeB));
nodeA.setEdgeBefore(null);
nodeB.setEdgeBefore(null);
nodeC.setEdgeBefore(null);
nodeA.setEdgeAfter(null);
nodeB.setEdgeAfter(null);
nodeC.setEdgeAfter(null);
removeEdge(edgeAB);
removeEdge(edgeBC);
removeEdge(edgeCA);
addFace(face);
}
/**
* Returns a valid twin with grade higher than the specified edge-face.
* Returns null, if no such twin exists.
*/
private EdgeFace findTwin(EdgeFace edgeFace) {
// trace("findTwin");
Edge edge = edgeFace.edge;
Face face = edgeFace.face;
double grade = edgeFace.grade;
Node nodeA = edge.nodeA();
Node nodeB = edge.nodeB();
Node nodeC = otherNode(face,nodeA,nodeB);
assert nodeA.isOnBoundary();
assert nodeB.isOnBoundary();
assert nodeC.isOnBoundary();
Node node1 = nodeC.edgeBefore().nodeA();
assert node1!=nodeA;
assert node1!=nodeB;
if (node1.isOnBoundary()) {
Edge edgeTwin = node1.edgeAfter();
assert nodeC==edgeTwin.nodeB();
removeEdge(edgeTwin);
EdgeFace edgeFaceTwin = addEdge(edgeTwin);
Face faceTwin = edgeFaceTwin.face;
double gradeTwin = edgeFaceTwin.grade;
if (faceTwin!=null &&
nodesInOrder(faceTwin,node1,nodeC,nodeB) &&
gradeTwin>grade)
return edgeFaceTwin;
}
Node node2 = nodeC.edgeAfter().nodeB();
assert node2!=nodeA;
assert node2!=nodeB;
if (node2.isOnBoundary()) {
Edge edgeTwin = node2.edgeBefore();
assert nodeC==edgeTwin.nodeA();
removeEdge(edgeTwin);
EdgeFace edgeFaceTwin = addEdge(edgeTwin);
Face faceTwin = edgeFaceTwin.face;
double gradeTwin = edgeFaceTwin.grade;
if (faceTwin!=null &&
nodesInOrder(faceTwin,node2,nodeA,nodeC) &&
gradeTwin>grade)
return edgeFaceTwin;
}
return null;
}
/**
* Glues the specified edge-face to its twin.
*/
private void glue(Edge edge, Face face, Edge edgeTwin, Face faceTwin) {
trace("glue: edge="+edge+" face="+face);
trace(" meshEdge A="+edge._meshEdge.nodeA());
trace(" meshEdge B="+edge._meshEdge.nodeB());
trace(" meshFace A="+face._meshFace.nodeA());
trace(" meshFace B="+face._meshFace.nodeB());
trace(" meshFace C="+face._meshFace.nodeC());
trace(" meshEdgeTwin A="+edgeTwin._meshEdge.nodeA());
trace(" meshEdgeTwin B="+edgeTwin._meshEdge.nodeB());
trace(" meshFaceTwin A="+faceTwin._meshFace.nodeA());
trace(" meshFaceTwin B="+faceTwin._meshFace.nodeB());
trace(" meshFaceTwin C="+faceTwin._meshFace.nodeC());
Node nodeA = edge.nodeA();
Node nodeB = edge.nodeB();
Node nodeC = otherNode(face,nodeA,nodeB);
assert nodeA.isOnBoundary();
assert nodeB.isOnBoundary();
assert nodeC.isOnBoundary();
// Remove edge and its twin; add face and its twin.
removeEdge(edge);
removeEdge(edgeTwin);
addFace(face);
addFace(faceTwin);
// If face is ABC and its twin is ACD, ...
if (faceTwin.references(nodeA)) {
Node nodeD = nodeC.edgeAfter().nodeB();
assert nodeD.isOnBoundary();
// If face twin is a hole, fill it.
if (nodeD.edgeAfter()==nodeA.edgeBefore()) {
Edge edgeDA = nodeD.edgeAfter();
nodeA.setEdgeBefore(null);
nodeD.setEdgeBefore(null);
nodeA.setEdgeAfter(null);
nodeD.setEdgeAfter(null);
removeEdge(edgeDA);
}
// Else face twin is an ear, so fill it.
else {
Edge edgeAD = makeEdge(nodeA,nodeD,faceTwin);
nodeA.setEdgeAfter(edgeAD);
nodeD.setEdgeBefore(edgeAD);
addEdge(edgeAD);
}
// For either hole or ear, ...
Edge edgeCB = makeEdge(nodeC,nodeB,face);
nodeC.setEdgeAfter(edgeCB);
nodeB.setEdgeBefore(edgeCB);
addEdge(edgeCB);
linkFaces(face,nodeB,faceTwin,nodeD);
linkFaces(face,nodeC,edge.faceRight(),edge.nodeRight());
linkFaces(faceTwin,nodeA,edgeTwin.faceRight(),edgeTwin.nodeRight());
}
// Else if face is ABC and its twin is BDC, ...
else if (faceTwin.references(nodeB)) {
Node nodeD = nodeC.edgeBefore().nodeA();
assert nodeD.isOnBoundary();
// If face twin is a hole, fill it.
if (nodeD.edgeBefore()==nodeB.edgeAfter()) {
Edge edgeBD = nodeD.edgeBefore();
nodeB.setEdgeBefore(null);
nodeD.setEdgeBefore(null);
nodeB.setEdgeAfter(null);
nodeD.setEdgeAfter(null);
removeEdge(edgeBD);
}
// Else face twin is an ear, so fill it.
else {
Edge edgeDB = makeEdge(nodeD,nodeB,faceTwin);
nodeD.setEdgeAfter(edgeDB);
nodeB.setEdgeBefore(edgeDB);
addEdge(edgeDB);
}
// For either hole or ear, ...
Edge edgeAC = makeEdge(nodeA,nodeC,face);
nodeA.setEdgeAfter(edgeAC);
nodeC.setEdgeBefore(edgeAC);
addEdge(edgeAC);
linkFaces(face,nodeA,faceTwin,nodeD);
linkFaces(face,nodeC,edge.faceRight(),edge.nodeRight());
linkFaces(faceTwin,nodeB,edgeTwin.faceRight(),edgeTwin.nodeRight());
}
}
private boolean stitch(EdgeFace edgeFace) {
// validate();
Edge edge = edgeFace.edge;
Face face = edgeFace.face;
// assert face!=null;
// assert _faceSet.contains(face._meshFace) ||
// _faceSet.contains(face._meshFace.mate());
// Nodes A and B of edge, and the other node C in the face.
Node nodeA = edge.nodeA();
Node nodeB = edge.nodeB();
Node nodeC = otherNode(face,nodeA,nodeB);
// If face is not valid, replace it with a valid one, if possible.
if (!validForFace(nodeA,nodeB,nodeC)) {
removeEdge(edge);
addEdge(edge);
return true;
}
// If node C is not in the surface, then extend.
if (!nodeC.isInSurface()) {
extend(edge,face);
return true;
}
// Else if node C is on the surface boundary, ...
else if (nodeC.isOnBoundary()) {
// Nabor nodes 1 and 2 of node C, also on the surface boundary.
Node node1 = nodeC.edgeBefore().nodeA();
Node node2 = nodeC.edgeAfter().nodeB();
// If both edge nodes A and B are nabors of node C, fill hole.
if (node1==nodeB && node2==nodeA) {
fillHole(edge,face);
return true;
}
// Else if either node A or node B is a nabor of node C, fill ear.
else if (node1==nodeB || node2==nodeA) {
fillEar(edge,face);
return true;
}
// Else ...
else {
// If face has a valid twin with higher grade, glue.
EdgeFace edgeFaceTwin = findTwin(edgeFace);
if (edgeFaceTwin!=null) {
Edge edgeTwin = edgeFaceTwin.edge;
Face faceTwin = edgeFaceTwin.face;
glue(edge,face,edgeTwin,faceTwin);
return true;
} else {
return false;
}
}
}
// Else the face is not valid, and we should not be here!
else {
assert false:"valid face for extend, fill ear, fill hole, or glue";
return false;
}
}
private void rebuild() {
trace("rebuild");
init();
while (surf())
;
}
private void init() {
trace(" init: ntets="+_mesh.countTets());
_faceSet.clear();
_faceMap.clear();
_edgeMap.clear();
_edgeQueue.clear();
TetMesh.TetIterator ti = _mesh.getTets();
while (ti.hasNext()) {
TetMesh.Tet tet = ti.next();
TetMesh.Node a = tet.nodeA();
TetMesh.Node b = tet.nodeB();
TetMesh.Node c = tet.nodeC();
TetMesh.Node d = tet.nodeD();
TetMesh.Face[] meshFaces = {
new TetMesh.Face(a,b,c,tet),
new TetMesh.Face(b,d,c,tet),
new TetMesh.Face(c,d,a,tet),
new TetMesh.Face(d,b,a,tet),
};
for (int i=0; i<4; ++i) {
TetMesh.Face meshFacei = meshFaces[i];
if (!_faceSet.contains(meshFacei.mate())) {
_faceSet.add(meshFacei);
trace(" init: added face"+meshFacei);
trace(" node A="+meshFacei.nodeA());
trace(" node B="+meshFacei.nodeB());
trace(" node C="+meshFacei.nodeC());
}
}
((Node)a.data).init();
((Node)b.data).init();
((Node)c.data).init();
((Node)d.data).init();
}
trace(" init: _faceSet size="+_faceSet.size());
}
/**
* Creates a part of the surface. Returns true, if any faces were created.
* Typically, this method is called repeatedly until it creates no faces
* and returns false.
*/
private boolean surf() {
int nface = countFaces();
// If no mesh faces left in set, simply return.
if (_faceSet.isEmpty())
return false;
// Among mesh faces in set, find mesh face with smallest circumradius.
TetMesh.Face meshFace = null;
double rrmin = Double.MAX_VALUE;
double[] cc = new double[3];
Iterator<TetMesh.Face> mfi = _faceSet.iterator();
while (mfi.hasNext()) {
TetMesh.Face meshFacei = mfi.next();
double rr = meshFacei.centerCircle(cc);
if (rr<rrmin) {
meshFace = meshFacei;
rrmin = rr;
}
}
assert meshFace!=null;
// Initialize a part of surface with that mesh face.
Face face = new Face(meshFace);
init(face);
// While boundary edges in part remain to be processed, stitch surface.
trace(" surf: stitching");
EdgeFace edgeFace = getBestEdgeFace();
while (edgeFace!=null && edgeFace.face!=null) {
if (stitch(edgeFace)) {
edgeFace = getBestEdgeFace();
} else {
edgeFace = getNextEdgeFace(edgeFace);
}
}
// Remove all faces in set that reference nodes already in surface.
trace(" surf: removing faces");
ArrayList<TetMesh.Face> faceList = new ArrayList<TetMesh.Face>();
mfi = _faceSet.iterator();
while (mfi.hasNext()) {
TetMesh.Face meshFacei = mfi.next();
Node nodeA = (Node)meshFacei.nodeA().data;
Node nodeB = (Node)meshFacei.nodeB().data;
Node nodeC = (Node)meshFacei.nodeC().data;
if (nodeA.isInSurface() || nodeB.isInSurface() || nodeC.isInSurface()) {
faceList.add(meshFacei);
}
}
mfi = faceList.iterator();
while (mfi.hasNext()) {
TetMesh.Face meshFacei = mfi.next();
_faceSet.remove(meshFacei);
}
// We may have more faces.
trace(" surf: more faces = "+(countFaces()>nface));
return countFaces()>nface;
}
private static boolean nodesInOrder(Face face, Node na, Node nb, Node nc) {
Node fa = face.nodeA();
Node fb = face.nodeB();
Node fc = face.nodeC();
return na==fa && nb==fb && nc==fc ||
na==fb && nb==fc && nc==fa ||
na==fc && nb==fa && nc==fb;
}
private static Node otherNode(Face face, Node na, Node nb) {
Node fa = face.nodeA();
Node fb = face.nodeB();
Node fc = face.nodeC();
if (na==fa) {
if (nb==fb) {
return fc;
} else if (nb==fc) {
return fb;
} else {
return null;
}
} else if (na==fb) {
if (nb==fa) {
return fc;
} else if (nb==fc) {
return fa;
} else {
return null;
}
} else if (na==fc) {
if (nb==fa) {
return fb;
} else if (nb==fb) {
return fa;
} else {
return null;
}
} else {
return null;
}
}
private static TetMesh.Node otherNode(
TetMesh.Face face, TetMesh.Node na, TetMesh.Node nb)
{
TetMesh.Node fa = face.nodeA();
TetMesh.Node fb = face.nodeB();
TetMesh.Node fc = face.nodeC();
if (na==fa) {
if (nb==fb) {
return fc;
} else if (nb==fc) {
return fb;
} else {
return null;
}
} else if (na==fb) {
if (nb==fa) {
return fc;
} else if (nb==fc) {
return fa;
} else {
return null;
}
} else if (na==fc) {
if (nb==fa) {
return fb;
} else if (nb==fb) {
return fa;
} else {
return null;
}
} else {
return null;
}
}
private static void linkFaces(
Face face, Node node, Face faceNabor, Node nodeNabor)
{
if (face!=null) {
if (node==face.nodeA()) {
face._faceA = faceNabor;
} else if (node==face.nodeB()) {
face._faceB = faceNabor;
} else if (node==face.nodeC()) {
face._faceC = faceNabor;
} else {
assert false:"node referenced by face";
}
}
if (faceNabor!=null) {
if (nodeNabor==faceNabor.nodeA()) {
faceNabor._faceA = face;
} else if (nodeNabor==faceNabor.nodeB()) {
faceNabor._faceB = face;
} else if (nodeNabor==faceNabor.nodeC()) {
faceNabor._faceC = face;
} else {
assert false:"nodeNabor referenced by faceNabor";
}
}
}
static float normalVector(TetMesh.Face meshFace, float[] v) {
TetMesh.Node na = meshFace.nodeA();
TetMesh.Node nb = meshFace.nodeB();
TetMesh.Node nc = meshFace.nodeC();
double xa = na.x();
double ya = na.y();
double za = na.z();
double xb = nb.x();
double yb = nb.y();
double zb = nb.z();
double xc = nc.x();
double yc = nc.y();
double zc = nc.z();
double x0 = xc-xa;
double y0 = yc-ya;
double z0 = zc-za;
double x1 = xa-xb;
double y1 = ya-yb;
double z1 = za-zb;
double x2 = y0*z1-y1*z0;
double y2 = x1*z0-x0*z1;
double z2 = x0*y1-x1*y0;
double alpha = x2*x2+y2*y2+z2*z2;
double delta = sqrt(alpha);
double scale = (delta>0.0)?1.0/delta:1.0;
if (v!=null) {
v[0] = (float)(x2*scale);
v[1] = (float)(y2*scale);
v[2] = (float)(z2*scale);
}
return (float)(0.5*scale*alpha);
}
static double normalVector(TetMesh.Face meshFace, double[] v) {
TetMesh.Node na = meshFace.nodeA();
TetMesh.Node nb = meshFace.nodeB();
TetMesh.Node nc = meshFace.nodeC();
double xa = na.x();
double ya = na.y();
double za = na.z();
double xb = nb.x();
double yb = nb.y();
double zb = nb.z();
double xc = nc.x();
double yc = nc.y();
double zc = nc.z();
double x0 = xc-xa;
double y0 = yc-ya;
double z0 = zc-za;
double x1 = xa-xb;
double y1 = ya-yb;
double z1 = za-zb;
double x2 = y0*z1-y1*z0;
double y2 = x1*z0-x0*z1;
double z2 = x0*y1-x1*y0;
double alpha = x2*x2+y2*y2+z2*z2;
double delta = sqrt(alpha);
double scale = (delta>0.0)?1.0/delta:1.0;
if (v!=null) {
v[0] = x2*scale;
v[1] = y2*scale;
v[2] = z2*scale;
}
return 0.5*scale*alpha;
}
private static double angle(TetMesh.Face face1, TetMesh.Face face2) {
double[] v1 = new double[3];
double[] v2 = new double[3];
normalVector(face1,v1);
normalVector(face2,v2);
double cos12 = v1[0]*v2[0]+v1[1]*v2[1]+v1[2]*v2[2];
return acos(cos12);
}
private TetMesh.Edge findMeshEdge(Node nodeA, Node nodeB) {
TetMesh.Node meshNodeA = nodeA._meshNode;
TetMesh.Node meshNodeB = nodeB._meshNode;
TetMesh.Edge meshEdge = _mesh.findEdge(meshNodeA,meshNodeB);
if (meshEdge!=null && meshNodeA!=meshEdge.nodeA())
meshEdge = meshEdge.mate();
return meshEdge;
}
private Edge makeEdge(Node nodeA, Node nodeB, Face face) {
TetMesh.Edge meshEdge = findMeshEdge(nodeA,nodeB);
return (meshEdge!=null)?new Edge(meshEdge,face):null;
}
private static final double VV_SLIVER = cos(5.0*PI/6.0);
private static final double VV_LARGE = cos(1.1*PI/2.0);
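  // VV_SLIVER = cos(150 degrees): candidate faces whose normals make an
  // angle of 150 degrees or more with the current face are rejected as
  // slivers. VV_LARGE = cos(99 degrees): beyond this angle a candidate is
  // penalized instead of being graded by circumradius (see makeEdgeFace).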
/**
   * Makes an edge-face for the specified edge. The edge already has
* a mesh face incident on its right side. This method computes the
* best candidate face for the left side of the edge. The edge and
* candidate face form an edge-face pair, with a grade that depends
* on the circumradius of the candidate and on the angle between
* the normal vectors for the right and left faces. Small circumradii
* and small angles correspond to high grades.
*/
private EdgeFace makeEdgeFace(Edge edge) {
assert edge.isOnBoundary();
Node nodeA = edge.nodeA();
Node nodeB = edge.nodeB();
// Mesh face incident on right side of edge is already in surface.
TetMesh.Face meshFace = edge.faceRight()._meshFace;
// Vector normal to mesh face already in surface.
double[] v = new double[3];
normalVector(meshFace,v);
// Mesh nodes A and B of edge.
TetMesh.Edge meshEdge = edge._meshEdge;
TetMesh.Node meshNodeA = meshEdge.nodeA();
TetMesh.Node meshNodeB = meshEdge.nodeB();
// Variables used to find the best mesh face not yet in surface.
double[] cc = new double[3];
double[] vi = new double[3];
double rrBest = Double.MAX_VALUE;
double vvBest = -1.0;
TetMesh.Face mfBest = null;
TetMesh.Face mfMate = meshFace.mate();
// Mesh faces incident on left side of edge. One of these is the
// mate of the right mesh face that is already in the surface.
TetMesh.Face[] meshFaces = _mesh.getFaceNabors(meshEdge);
int n = meshFaces.length;
// Find the best mesh face not yet in the surface.
for (int i=0; i<n; ++i) {
TetMesh.Face mf = meshFaces[i];
// Ignore the mate of the right mesh face already in the surface.
if (mf.equals(mfMate))
continue;
// Node C of the face does not equal nodes A or B of the edge.
TetMesh.Node fa = mf.nodeA();
TetMesh.Node fb = mf.nodeB();
TetMesh.Node fc = mf.nodeC();
Node nodeC;
if (fc==meshNodeA) {
nodeC = (Node)fb.data;
} else if (fc==meshNodeB) {
nodeC = (Node)fa.data;
} else {
nodeC = (Node)fc.data;
}
// If nodes A, B, and C would make a valid face ABC, ...
if (validForFace(nodeA,nodeB,nodeC)) {
// Normal vector of mesh face.
normalVector(mf,vi);
// Dot product equals the cosine of angle between normal vectors.
double vv = v[0]*vi[0]+v[1]*vi[1]+v[2]*vi[2];
// If the angle is not too close to PI, ...
if (vv>VV_SLIVER) {
// Square of mesh face circumradius.
double rr = mf.centerCircle(cc);
// If circumradius is the best found so far, ...
if (rr<rrBest) {
rrBest = rr;
vvBest = vv;
mfBest = mf;
}
}
}
}
// Candidate face corresponding to best mesh face found, or null,
// if no valid candidate. The best candidate has a grade; a null
// best candidate has the lowest possible grade = -2.0.
    assert mfBest==null ||
           !_faceMap.containsKey(mfBest) && !_faceMap.containsKey(mfBest.mate());
Face face = (mfBest!=null)?new Face(mfBest):null;
double grade = (vvBest>VV_LARGE)?1.0/rrBest:vvBest-1.0;
if (grade<=0.0)
face = null;
return new EdgeFace(edge,face,grade);
}
/**
* Determines whether the specified nodes are an internal edge.
* An internal edge is not on the boundary; it has two nabor faces.
*/
private boolean hasInternalEdge(Node nodeA, Node nodeB) {
Face face = findFace(nodeA,nodeB);
if (face==null)
return false;
face = face.faceNabor(otherNode(face,nodeA,nodeB));
if (face==null)
return false;
return true;
}
/**
* Determines whether the specified nodes would form a valid face.
* Assumes that nodes A and B are a boundary edge. Node C is the
* other node that would form the face in question.
*/
private boolean validForFace(Node nodeA, Node nodeB, Node nodeC) {
return !nodeC.isInSurface() ||
nodeC.isOnBoundary() &&
!hasInternalEdge(nodeB,nodeC) &&
!hasInternalEdge(nodeC,nodeA);
}
private static final boolean TRACE = false;
private static void trace(String s) {
if (TRACE)
System.out.println(s);
}
}