# wbia-plugin-pie-v2: wbia_pie_v2/metrics/knn.py
# -*- coding: utf-8 -*-
import numpy as np
from sklearn.neighbors import NearestNeighbors
def predict_k_neigh(db_emb, db_lbls, test_emb, k=5):
    """Get k nearest solutions from the database for test embeddings (query)
    using the k-NearestNeighbors algorithm.

    Input:
        db_emb (float array): database embeddings of size (num_emb, emb_size)
        db_lbls (str or int array): database labels of size (num_emb,)
        test_emb (float array): test embeddings of size (num_emb_t, emb_size)
        k (int): number of predictions to return
    Returns:
        neigh_lbl_un (str or int array): labels of predictions of shape (num_emb_t, k)
        neigh_ind_un (int array): indices of nearest points of shape (num_emb_t, k)
        neigh_dist_un (float array): distances of predictions of shape (num_emb_t, k)
    """
    db_lbls = np.asarray(db_lbls)
    # Set number of nearest points (with duplicated labels)
    k_w_dupl = min(50, len(db_emb))
    nn_classifier = NearestNeighbors(n_neighbors=k_w_dupl, metric='euclidean')
    # NearestNeighbors is unsupervised; it is fit on the embeddings only
    nn_classifier.fit(db_emb)
    # Predict nearest neighbors and distances for test embeddings
    neigh_dist, neigh_ind = nn_classifier.kneighbors(test_emb)
    # Get labels of nearest neighbors
    neigh_lbl = np.zeros(shape=neigh_ind.shape, dtype=db_lbls.dtype)
    for i, preds in enumerate(neigh_ind):
        for j, pred in enumerate(preds):
            neigh_lbl[i, j] = db_lbls[pred]
    # Remove duplicates, keeping only the nearest hit per label
    neigh_lbl_un = []
    neigh_ind_un = []
    neigh_dist_un = []
    for j in range(neigh_lbl.shape[0]):
        indices = np.arange(0, len(neigh_lbl[j]))
        a, b = rem_dupl(neigh_lbl[j], indices)
        neigh_lbl_un.append(a[:k])
        neigh_ind_un.append(neigh_ind[j][b][:k].tolist())
        neigh_dist_un.append(neigh_dist[j][b][:k].tolist())
    return neigh_lbl_un, neigh_ind_un, neigh_dist_un

def pred_light(query_embedding, db_embeddings, db_labels, n_results=10):
    """Get the n nearest solutions from the database for one query embedding
    using the k-NearestNeighbors algorithm.
    """
    neigh_lbl_un, neigh_ind_un, neigh_dist_un = predict_k_neigh(
        db_embeddings, db_labels, query_embedding, k=n_results
    )
    neigh_lbl_un = neigh_lbl_un[0]
    neigh_dist_un = neigh_dist_un[0]
    ans_dict = [
        {'label': lbl, 'distance': dist}
        for lbl, dist in zip(neigh_lbl_un, neigh_dist_un)
    ]
    return ans_dict

def rem_dupl(seq, seq2=None):
    """Remove duplicates from a sequence while keeping the order of elements.
    Optionally filter a second sequence in unison with the first.
    """
    seen = set()
    seen_add = seen.add
    if seq2 is None:
        return [x for x in seq if not (x in seen or seen_add(x))]
    else:
        a = [x for x in seq if not (x in seen or seen_add(x))]
        seen = set()
        seen_add = seen.add
        b = [seq2[i] for i, x in enumerate(seq) if not (x in seen or seen_add(x))]
        return a, b
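The pipeline above (k-NN search, then label de-duplication) can be exercised on toy data. The sketch below is self-contained and illustrative only: the embeddings, labels, and query are made up, and it calls scikit-learn directly rather than the module's helpers.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Toy database: four embeddings belonging to two identities, 'a' and 'b'.
db_emb = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.1, 1.0]])
db_lbls = np.array(['a', 'a', 'b', 'b'])
query = np.array([[0.05, 0.0]])

nn = NearestNeighbors(n_neighbors=len(db_emb), metric='euclidean').fit(db_emb)
dist, ind = nn.kneighbors(query)
lbls = db_lbls[ind[0]]

# Keep only the nearest hit per label, preserving rank order
# (the same effect rem_dupl achieves above).
seen, ranked = set(), []
for lbl, d in zip(lbls, dist[0]):
    if lbl not in seen:
        seen.add(lbl)
        ranked.append((lbl, d))
print([str(lbl) for lbl, _ in ranked])  # ['a', 'b']
```

Each identity appears once, ordered by the distance of its closest database embedding.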
|
The 1980 Republican presidential primaries were the selection process by which voters of the Republican Party chose its nominee for President of the United States in the 1980 U.S. presidential election. Former California Governor Ronald Reagan was selected as the nominee through a series of primary elections and caucuses culminating in the Republican National Convention held from July 14 to July 17, 1980, in Detroit, Michigan.
Primary race
As the 1980 presidential election approached, incumbent Democratic President Jimmy Carter appeared vulnerable. High gas prices, economic stagflation, a renewed Cold War with the Soviet Union following the invasion of Afghanistan, and the Iran hostage crisis that developed when Iranian students seized the American embassy in Tehran all contributed to a general dissatisfaction with Carter's presidency. Likewise, the president faced stiff primary challenges of his own from Senator Ted Kennedy and California Governor Jerry Brown. A large field of Republican challengers soon emerged. Former Governor Ronald Reagan was the early odds-on favorite to win his party's nomination for president after nearly beating incumbent President Gerald Ford just four years earlier. He was so far ahead in the polls that campaign director John Sears decided on an "above the fray" strategy. He did not attend many of the multicandidate forums and straw polls in the summer and fall of 1979.
George H. W. Bush, the former director of the Central Intelligence Agency and chairman of the Republican National Committee, taking a page from the George McGovern/Jimmy Carter playbook, did go to all the so-called "cattle calls", and began to come in first at a number of these events. Along with the top two, a number of other Republican politicians entered the race.
In January 1980, the Iowa Republicans decided to have a straw poll as a part of their caucuses for that year. Bush defeated Reagan by a small margin. Bush declared he had "the Big Mo", and with Reagan boycotting the Puerto Rico primary in deference to New Hampshire, Bush won the territory easily, giving him an early lead going into New Hampshire.
The Nashua debate, the 9th debate between Ronald Reagan (left) and George H. W. Bush (right)
With the other candidates in single digits, the Nashua Telegraph offered to host a debate between Reagan and Bush. Worried that a newspaper-sponsored debate might violate electoral regulations, Reagan subsequently arranged to fund the event with his own campaign money, inviting the other candidates to participate at short notice. The Bush camp did not learn of Reagan's decision to include the other candidates until the debate was due to commence. Bush refused to participate, which led to an impasse on the stage. As Reagan attempted to explain his decision, the editor of the Nashua Telegraph ordered the sound man to mute Reagan's microphone. A visibly angry Reagan responded, "I am paying for this microphone, Mr. Green!" [sic] (referring to the editor Jon Breen).[1][2][3] Eventually the other candidates agreed to leave, and the debate proceeded between Reagan and Bush. Reagan's quote was often repeated as "I paid for this microphone!" and dominated news coverage of the event; Reagan sailed to an easy win in New Hampshire.[4]
Lee Bandy, a writer for the South Carolina newspaper The State stated that heading into the South Carolina primary, political operative Lee Atwater worked to engineer a victory for Reagan: "Lee Atwater figured that Connally was their biggest threat here in South Carolina. So Lee leaked a story to me that John Connally was trying to buy the black vote. Well, that story got out, thanks to me, and it probably killed Connally. He spent $10 million for one delegate. Lee saved Ronald Reagan's candidacy."[5]
Reagan swept the South, and although he lost five more primaries to Bush—including the Massachusetts primary in which he came in third place behind John B. Anderson—the former governor had a lock on the nomination very early in the season. Reagan said he would always be grateful to the people of Iowa for giving him "the kick in the pants" he needed.
Reagan was an adherent to a policy known as supply-side economics, which argues that economic growth can be most effectively created using incentives for people to produce (supply) goods and services, such as adjusting income tax and capital gains tax rates. Accordingly, Reagan promised an economic revival that would benefit all sectors of the population. He said that cutting tax rates would actually increase tax revenues because the lower rates would cause people to work harder as they would be able to keep more of their money. Reagan also called for a drastic cut in "big government" and pledged to deliver a balanced budget for the first time since 1969. In the primaries Bush memorably called Reagan's economic policy "voodoo economics" because it promised to lower taxes and increase revenues at the same time.
Nominee
Withdrew during primaries
Withdrew before primaries
Declined to run
The following potential candidates declined to run for the Republican nomination in 1980.[6][7]
Results
Statewide
Nationwide
Primaries, total popular vote:[8]
The Republican National Convention was held in Detroit, Michigan, from July 14 to July 17, 1980.
|
// sarithay/mock-api
package br.com.elementalsource.mock.generic.mapper;

import br.com.elementalsource.mock.generic.model.Endpoint;
import br.com.elementalsource.mock.infra.component.file.FileJsonReader;
import com.google.gson.Gson;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import org.springframework.web.bind.annotation.RequestMethod;

import java.io.IOException;
import java.util.Optional;

@Component
public class EndpointMapper {

    private static final Logger LOGGER = LoggerFactory.getLogger(EndpointMapper.class);

    private final FileJsonReader fileJsonReader;

    @Autowired
    public EndpointMapper(FileJsonReader fileJsonReader) {
        this.fileJsonReader = fileJsonReader;
    }

    public Optional<Endpoint> mapper(RequestMethod requestMethod, String requestUrl, String fileName) {
        try {
            return fileJsonReader
                .getJsonByFileName(fileName)
                .map(endpointDtoJson -> new Gson().fromJson(endpointDtoJson, EndpointDto.class))
                .map(endpointDto -> endpointDto.toModel(requestMethod, requestUrl))
                .map(endpoint -> new Endpoint.Builder(endpoint).withId(fileName).build());
        } catch (IOException e) {
            LOGGER.error("Cannot map endpoint from file", e);
            return Optional.empty();
        }
    }
}
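A rough Python analogue of the Optional-chained mapping above, assuming the same read, parse, build pipeline. `map_endpoint` and the dict model are illustrative, not the project's API; any I/O or parse failure yields `None`, the analogue of `Optional.empty()`.

```python
import json
import tempfile

def map_endpoint(file_name, method, url):
    # Read and parse the file; any failure maps to None.
    try:
        with open(file_name) as f:
            dto = json.load(f)
    except (OSError, ValueError):
        return None
    # Build the model, carrying the file name as the id (as the Java code does).
    return {'id': file_name, 'method': method, 'url': url, 'body': dto.get('body')}

# Demo with a throwaway JSON file.
with tempfile.NamedTemporaryFile('w', suffix='.json', delete=False) as f:
    json.dump({'body': {'ok': True}}, f)
    path = f.name

ep = map_endpoint(path, 'GET', '/api/demo')
print(ep['body'])  # {'ok': True}
print(map_endpoint('no-such-file-xyz.json', 'GET', '/x'))  # None
```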
|
Speaker Change Detection in Broadcast TV Using Bidirectional Long Short-Term Memory Networks Speaker change detection is an important step in a speaker diarization system. It aims at finding speaker change points in the audio stream. In this paper, it is treated as a sequence labeling task and addressed with bidirectional long short-term memory (Bi-LSTM) networks. The system is trained and evaluated on the broadcast TV subset of the ETAPE database. Results show that the proposed model brings a clear improvement over conventional methods based on BIC and Gaussian divergence. For instance, compared to Gaussian divergence, it produces speech turns that are 19.5% longer on average, at the same level of purity. |
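The abstract above frames change detection as sequence labeling. A minimal sketch of how the labeling target is derived, using a hypothetical frame-level speaker sequence (the Bi-LSTM itself is omitted):

```python
# Hypothetical speaker ids, one per audio frame.
speakers = ['s1', 's1', 's1', 's2', 's2', 's3', 's3', 's3']

# Sequence-labeling target: 1 at frames where the speaker changes, else 0.
labels = [0] + [int(a != b) for a, b in zip(speakers, speakers[1:])]
print(labels)  # [0, 0, 0, 1, 0, 1, 0, 0]
```

A sequence model such as a Bi-LSTM is then trained to predict these per-frame labels from acoustic features.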
Analyzing Welding Performance of Metal Using Artificial Neural Network In recent years, the pace of modernization and construction in China has grown rapidly, promoting the trend toward high-parameter, large-capacity, large-scale welding structures and placing higher requirements on the type and quality of welding materials. Most welding materials are imported into China, mainly because China still follows the traditional design method and the quality of domestic welding materials is low. The design of metal welding materials involves many factors and properties, and there is no fixed functional relationship between the properties and components of the welding materials, which makes metal welding materials difficult to design. The emergence of neural network algorithms provides a new way to analyze the weldability of metal materials. In this paper, a BP (backpropagation) network is used to analyze the welding performance of metals. The tensile test of welded joints is carried out through training test samples. The results show that the tensile strength and yield strength of the metal materials are about 500 MPa and 400 MPa (megapascals), respectively. To further analyze the influence of welding current, electrode pressure, and power-on time on the tensile and shear strength of the metal materials, shear and tension tests were used. With increasing welding current, the shear strength of the spot weld continuously increased; when the welding current reached 10,000 A (amperes), the shear strength decreased rapidly from 24.25 MPa to 18.84 MPa. With prolonged welding time, both tensile strength and shear strength first increase and then decrease. When the welding pressure increases from 32 psi to 48 psi, the tensile strength increases from 16.47 MPa to 24.52 MPa and then decreases continuously to 17.26 MPa, whereas the shear strength first decreases and then increases. |
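As a hedged sketch of the BP (backpropagation) approach described in the abstract above, the toy NumPy network below fits a made-up stand-in for the welding data. The inputs and target are synthetic, not the paper's measurements; it only demonstrates the training mechanics.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in: 3 inputs (current, pressure, time) -> strength.
X = rng.uniform(0, 1, size=(64, 3))
y = (0.5 * X[:, 0] + 0.3 * X[:, 1] - 0.2 * X[:, 2]).reshape(-1, 1)

# One hidden layer with tanh activation, linear output.
W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros((1, 1))

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

_, out0 = forward(X)
loss0 = float(np.mean((out0 - y) ** 2))

lr = 0.1
for _ in range(500):
    h, out = forward(X)
    g_out = 2 * (out - y) / len(X)        # dL/d_out for mean squared error
    gW2 = h.T @ g_out; gb2 = g_out.sum(0, keepdims=True)
    g_h = (g_out @ W2.T) * (1 - h ** 2)   # backpropagate through tanh
    gW1 = X.T @ g_h; gb1 = g_h.sum(0, keepdims=True)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

_, out1 = forward(X)
loss1 = float(np.mean((out1 - y) ** 2))
print(loss1 < loss0)  # True: training reduces MSE
```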
The U.S. has issued five permits in recent weeks for oil drilling in the Gulf of Mexico—hailing some as "new."
But all allow work to resume that was halted last year during BP's Deepwater Horizon spill.
Faced with rising oil prices, melting alternatives, and growing criticism, the U.S. began issuing the permits in February. The fifth went to Chevron yesterday.
Administration officials have championed the permits with statements asserting that things are back to normal, only safer:
“Today’s permit approval further demonstrates industry’s ability to meet and satisfy the enhanced safety requirements associated with deepwater drilling, including the capability to contain a deepwater loss of well control and blowout,” said Michael R. Bromwich, director of the Bureau of Ocean Energy Management, Regulation and Enforcement, yesterday.
“We will continue to review and approve those applications that demonstrate the ability to operate safely in deep water.”
The Bureau's press release insists this drilling is "completely new":
"Today’s is the first deepwater permit approved for completely new exploration since the deepwater drilling moratorium was lifted; this means that this is the first exploratory well drilled into this reservoir or field, which has never produced."
But the press release goes on to say, "Initial drilling on Chevron’s Well #1 began March 2010." And drilling was halted in June, at 80 percent of its target depth, because of the Deepwater Horizon spill. Since then, Chevron's operation has undergone a new and more rigorous review.
Chevron didn't get very far with Well #1 last spring, but it had been in the field and poking holes in the sea floor—in 6,750 feet of water about 215 miles south of the Louisiana coast.
The Administration puts emphasis on the new to blunt criticism that it is holding up domestic energy production. Most media have bitten on the new-drilling hook, and environmentalists have reacted with predictable outrage. Grist called the recent permits "giveaways to polluters."
But industry has been glib, at best: “We look forward to the day when a single permit on plan doesn’t merit a press conference by the Secretary of the Interior," said Erik Milito of the American Petroleum Institute.
Despite its history of record profits, the oil industry faces exhausting obstacles in the Gulf, Milito said in a statement yesterday (pdf), as the government pursues an unofficial moratorium on new drilling:
"The administration has repeatedly decided to pursue policies and actions that delay, defer or deny access and production from our domestic resources."
Of 14 permits submitted for initial exploratory drilling in the Gulf—drilling that would be, in other words, new—one has been withdrawn for modification and 13 are listed as "pending."
On Monday, the Interior Department announced it had approved an exploration plan for Shell, also describing it as new:
"This is the first new deepwater exploration plan approved since the Deepwater Horizon explosion and resulting oil spill."
But later in the same press release we learn:
"The plan is a supplemental exploration plan that proposes activities that were not included in an original exploration plan for the same lease – located in Shell's Auger field – which was approved in 1985."
A plan, too, is a long way from a permit. It describes proposed activities, and once the plan is approved, the applicant can apply for permits to carry out those activities, a process that can take, according to Milito, another decade.
Who's wearing the black hats in this scene—and who's wearing the white hats—depends on whether the beholder is green. Either way, for all of the chatter about new drilling in the Gulf of Mexico, little has changed. |
package com.home.commonClient.control;
import com.home.commonBase.config.game.SceneConfig;
import com.home.commonBase.config.game.enumT.TaskTypeConfig;
import com.home.commonBase.constlist.generate.QuestType;
import com.home.commonBase.constlist.generate.SceneType;
import com.home.commonBase.control.LogicExecutorBase;
import com.home.commonBase.data.quest.TaskData;
import com.home.commonBase.global.BaseC;
import com.home.commonBase.global.CommonSetting;
import com.home.commonClient.global.ClientC;
import com.home.commonClient.part.player.Player;
import com.home.commonClient.scene.base.GameScene;
import com.home.shine.ctrl.Ctrl;
import com.home.shine.global.ShineSetting;
import com.home.shine.support.collection.SMap;
import com.home.shine.support.pool.ObjectPool;
/** Logic executor */
public class LogicExecutor extends LogicExecutorBase
{
/** Player map (keyed by uid) */
private SMap<String,Player> _players=new SMap<>();
/** Scene object pool */
private ObjectPool<GameScene>[] _scenePoolDic;
/** Task objective data pool */
private ObjectPool<TaskData>[] _taskDataPool;
public LogicExecutor(int index)
{
super(index);
}
/** Initialize (pool thread) */
@Override
public void init()
{
super.init();
_scenePoolDic=new ObjectPool[SceneType.size];
for(int i=0;i<SceneType.size;++i)
{
_scenePoolDic[i]=createScenePool(i);
}
// logic section
TaskTypeConfig typeConfig;
_taskDataPool=new ObjectPool[QuestType.size];
_taskDataPool[0]=createTaskDataPool(0);
for(int i=0;i<_taskDataPool.length;++i)
{
if((typeConfig=TaskTypeConfig.get(i))!=null && typeConfig.needCustomTask)
{
_taskDataPool[i]=createTaskDataPool(i);
}
}
}
private ObjectPool<GameScene> createScenePool(int type)
{
ObjectPool<GameScene> re=new ObjectPool<GameScene>(()->
{
GameScene scene=ClientC.factory.createScene();
scene.setType(type);
scene.construct();
return scene;
},CommonSetting.scenePoolSize);
return re;
}
private ObjectPool<TaskData> createTaskDataPool(int type)
{
ObjectPool<TaskData> re=new ObjectPool<TaskData>(()->
{
return BaseC.logic.createTaskData(type);
});
return re;
}
@Override
protected void onFrame(int delay)
{
super.onFrame(delay);
// player section
Object[] table=_players.getTable();
Player player;
for(int i=table.length - 2;i >= 0;i-=2)
{
if(table[i]!=null)
{
player=(Player)table[i + 1];
try
{
player.onFrame(delay);
}
catch(Exception e)
{
Ctrl.errorLog(e);
}
if(player!=table[i + 1])
{
i+=2;
}
}
}
}
/** Get the player count */
public int getPlayerNum()
{
return _players.size();
}
/** Create a scene (actual creation, init not yet called) */
public GameScene createScene(int sceneID)
{
GameScene scene=_scenePoolDic[SceneConfig.get(sceneID).type].getOne();
scene.initSceneID(sceneID);
// bind the executor
scene.setExecutor(this);
return scene;
}
/** Release a scene (already disposed) */
public void releaseScene(GameScene scene)
{
_scenePoolDic[scene.getType()].back(scene);
}
/** Player login (distinct from player enter) (logic thread) */
public void playerLogin(Player player)
{
if(ShineSetting.openCheck)
{
if(_players.contains(player.role.uid))
{
Ctrl.throwError("executor should not already have this player at this point");
}
}
_players.put(player.role.uid,player);
player.system.executorIndex=_index;
// start login
player.system.startLogin();
}
/** Player offline (distinct from player exit) (logic thread) */
public void playerExit(Player player)
{
if(ShineSetting.openCheck)
{
if(!_players.contains(player.role.uid))
{
Ctrl.throwError("executor should have this player at this point");
return;
}
}
_players.remove(player.role.uid);
// intentionally not cleared
//player.system.executorIndex=-1;
}
/** Create task objective data */
public TaskData createTaskData(int type)
{
if(TaskTypeConfig.get(type).needCustomTask)
{
return _taskDataPool[type].getOne();
}
else
{
return _taskDataPool[0].getOne();
}
}
/** Recycle task objective data */
public void releaseTaskData(int type,TaskData data)
{
if(TaskTypeConfig.get(type).needCustomTask)
{
_taskDataPool[type].back(data);
}
else
{
_taskDataPool[0].back(data);
}
}
}
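The per-type pooling used by `LogicExecutor` (`createScenePool` / `createTaskDataPool`) follows a standard object-pool pattern: a factory creates objects on demand, and released objects are kept for reuse. A minimal Python sketch of that pattern (not the project's actual `ObjectPool` API):

```python
class ObjectPool:
    def __init__(self, factory):
        self._factory = factory
        self._free = []

    def get_one(self):
        # Reuse a released object if available, else create a new one.
        return self._free.pop() if self._free else self._factory()

    def back(self, obj):
        # Return an object to the pool for later reuse.
        self._free.append(obj)

pool = ObjectPool(lambda: {'type': 0})
a = pool.get_one()
pool.back(a)
b = pool.get_one()
print(a is b)  # True: the released object is reused instead of reallocated
```

Keeping one pool per type index, as the Java code does, lets each factory bake in type-specific construction while reuse stays O(1).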
|
Mechanisms of early Drosophila mesoderm formation. Several morphogenetic processes occur simultaneously during Drosophila gastrulation, including ventral furrow invagination to form the mesoderm, anterior and posterior midgut invagination to create the endoderm, and germ band extension. Mutations changing the behaviour of different parts of the embryo can be used to test the roles of different cell populations in gastrulation. Posterior midgut morphogenesis and germ band extension are partly independent, and neither depends on mesoderm formation, nor mesoderm formation on them. The invagination of the ventral furrow is caused by forces from within the prospective mesoderm (i.e. the invaginating cells) without any necessary contribution from other parts of the embryo. The events that lead to the cell shape changes mediating ventral furrow formation require the transcription of zygotic genes under the control of twist and snail. Such genes can be isolated by molecular and genetic screens. |
//=============================================================================
// main.cpp
//
// The driver for the command line version of the user interface.
//
// author:
// Dr. <NAME>
// Department of Civil, Environmental, and Geo- Engineering
// University of Minnesota
//
// version:
// 30 June 2017
//=============================================================================
#include <cstring>
#include <ctime>
#include <iostream>
#include "engine.h"
#include "now.h"
#include "numerical_constants.h"
#include "read_data.h"
#include "version.h"
#include "write_results.h"
//-----------------------------------------------------------------------------
int main(int argc, char* argv[]) {
// Check the command line.
switch (argc) {
case 1: {
Usage();
return 0;
}
case 2: {
if ( strcmp(argv[1], "--help") == 0 )
Help();
else if ( strcmp(argv[1], "--version") == 0 )
Version();
else
Usage();
return 0;
}
case 13: {
Banner( std::cout );
break;
}
default: {
Usage();
return 1;
}
}
// Gimiwan <xo> <yo> <k alpha> <k beta> <k count> <h alpha> <h beta> <h count> <radius> <obs file> <wells file> <output root>
// [0] [1] [2] [3] [4] [5] [6] [7] [8] [9] [10] [11] [12]
// Get <xo> and <yo>.
double xo = atof( argv[1] );
double yo = atof( argv[2] );
// Get and check the hydraulic conductivity distribution.
double k_alpha = atof( argv[3] );
double k_beta = atof( argv[4] );
if ( k_beta <= EPS ) {
std::cerr << "ERROR: k_beta = " << argv[4] << " is not valid; 0 < k_beta." << std::endl;
std::cerr << std::endl;
Usage();
return 2;
}
int k_count = atoi( argv[5] );
if ( k_count < 1 ) {
std::cerr << "ERROR: k_count = " << argv[5] << " is not valid; 0 < k_count." << std::endl;
std::cerr << std::endl;
Usage();
return 2;
}
// Get and check the aquifer thickness distribution.
double h_alpha = atof( argv[6] );
double h_beta = atof( argv[7] );
if ( h_beta <= EPS ) {
std::cerr << "ERROR: h_beta = " << argv[7] << " is not valid; 0 < h_beta." << std::endl;
std::cerr << std::endl;
Usage();
return 2;
}
int h_count = atoi( argv[8] );
if ( h_count < 1 ) {
std::cerr << "ERROR: h_count = " << argv[8] << " is not valid; 0 < h_count." << std::endl;
std::cerr << std::endl;
Usage();
return 2;
}
// Get and check the buffer radius.
double radius = atof( argv[9] );
if ( radius < 0 ) {
std::cerr << "ERROR: buffer radius = " << argv[9] << " is not valid; 0 <= buffer radius." << std::endl;
std::cerr << std::endl;
Usage();
return 2;
}
// Read in the observation data from the specified <obs file>.
std::vector<ObsRecord> obs;
try {
obs = read_obs_data( argv[10] );
std::cout << obs.size() << " observation data records read from <" << argv[10] << ">." << std::endl;
}
catch (InvalidObsFile& e) {
std::cerr << e.what() << std::endl;
return 3;
}
catch (InvalidObsRecord& e) {
std::cerr << e.what() << std::endl;
return 3;
}
// Read in the well data from the specified <well file>.
std::vector<WellRecord> wells;
try {
wells = read_well_data( argv[11] );
std::cout << wells.size() << " well data records read from <" << argv[11] << ">." << std::endl;
}
catch (InvalidWellFile& e) {
std::cerr << e.what() << std::endl;
return 3;
}
catch (InvalidWellRecord& e) {
std::cerr << e.what() << std::endl;
return 3;
}
// Execute all of the computations.
Results results;
try {
results = Engine(xo, yo, k_alpha, k_beta, k_count, h_alpha, h_beta, h_count, radius, obs, wells);
}
catch (TooFewObservations& e) {
std::cerr << e.what() << std::endl;
return 4;
}
catch (CholeskyDecompositionFailed& e) {
std::cerr << e.what() << std::endl;
return 4;
}
catch (...) {
std::cerr << "The Gimiwan Engine failed for an unknown reason." << std::endl;
throw;
}
// Write out the results to the specified output data file.
try {
write_results( argv[12], results );
std::cout << "Six output files with root name <" << argv[12] << "> created. " << std::endl;
}
catch (InvalidOutputFile& e) {
std::cerr << e.what() << std::endl;
return 4;
}
// Successful termination.
double elapsed = static_cast<double>(clock())/CLOCKS_PER_SEC;
std::cout << "elapsed time: " << std::fixed << elapsed << " seconds." << std::endl;
std::cout << std::endl;
// Terminate execution.
return 0;
}
|
Flame-Sprayed NiCoCrAlTaY Coatings as Damage Detection Sensors The piezoresistivity of flame-sprayed NiCoCrAlTaY on an electrically insulated surface of a steel substrate was investigated through cyclic extension and compression cycles between 0 and 0.4 mm for 1000 cycles and uniaxial tensile test. The sprayed NiCoCrAlTaY was in grid form with grid thickness of 3 mm and grid length of 30 mm while the electrical insulation was fabricated by flame spraying alumina on the surface of the steel. During mechanical loading, instantaneous electrical resistance measurements were conducted to evaluate the corresponding relative resistance change. Images of the loaded samples were captured for strain calculations through Digital Image Correlation (DIC) technique. After consolidation of the pores within the coating, the behavior of the flame-sprayed NiCoCrAlTaY was consistent and linear within the cyclic compression and extension limits, with strain values of approximately -1000 and +1700, respectively. The coating had a consistent and steady maximum relative resistance change of approximately 5% within both limits. The tensile test revealed that the coating has two gauge factors due to the bi-linearity of the plot of relative resistance change against strain. The progression of damage within the coating layers was analyzed from its piezoresistive response and through back-scattered scanning electron microscopy images. Based on the results, the nickel alloy showed high piezoresistive sensitivity for the duration of the loading cycles, with little or no damage to the coating layers. These results suggest that the flame-sprayed nickel alloy coating has great potential as a surface damage detection sensor. |
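A back-of-envelope check of the sensitivity reported in the abstract above, assuming the quoted strain values are in microstrain (the abstract omits the unit): the gauge factor is GF = (ΔR/R) / ε.

```python
# Assumption: the abstract's strain values are microstrain.
dR_over_R = 0.05      # ~5% maximum relative resistance change
strain = 1700e-6      # +1700 microstrain at the extension limit
gf = dR_over_R / strain
print(round(gf, 1))   # 29.4
```

Under that unit assumption, the implied gauge factor is roughly an order of magnitude above the ~2 typical of metal foil strain gauges, consistent with the abstract's claim of high piezoresistive sensitivity.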
// coral-labs/plugins
// Copyright 2013 The Flutter Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
#import <UIKit/UIKit.h>
@interface UIImage (ios_platform_images)
/// Loads a UIImage from the embedded Flutter project's assets.
///
/// This method loads the Flutter asset that is appropriate for the current
/// screen. If you are on a 2x retina device where usually `UIImage` would be
/// loading `@2x` assets, it will attempt to load the `2.0x` variant. It will
/// load the standard image if it can't find the `2.0x` variant.
///
/// For example, if your Flutter project's `pubspec.yaml` lists "assets/foo.png"
/// and "assets/2.0x/foo.png", calling
/// `[UIImage flutterImageWithName:@"assets/foo.png"]` will load
/// "assets/2.0x/foo.png".
///
/// See also https://flutter.dev/docs/development/ui/assets-and-images
///
/// Note: We don't yet support images from package dependencies (ex.
/// `AssetImage('icons/heart.png', package: 'my_icons')`).
+ (UIImage *)flutterImageWithName:(NSString *)name;
@end
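The variant-selection rule documented above (prefer the scale-specific `2.0x` asset, fall back to the base path) can be sketched in Python. `resolve_asset` and the asset set below are illustrative, not Flutter's API:

```python
import posixpath

def variant_path(asset, scale):
    # 'assets/foo.png' at scale 2.0 -> 'assets/2.0x/foo.png'
    directory, filename = posixpath.split(asset)
    return posixpath.join(directory, f'{scale:.1f}x', filename)

def resolve_asset(asset, scale, available):
    # Prefer the scale-specific variant; fall back to the base asset.
    candidate = variant_path(asset, scale)
    return candidate if candidate in available else asset

available = {'assets/foo.png', 'assets/2.0x/foo.png'}
print(resolve_asset('assets/foo.png', 2.0, available))  # assets/2.0x/foo.png
print(resolve_asset('assets/bar.png', 2.0, available))  # assets/bar.png
```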
|
// Decompiled by Jad v1.5.8g. Copyright 2001 <NAME>.
// Jad home page: http://www.kpdus.com/jad.html
// Decompiler options: packimports(3) annotate safe
package com.comscore.analytics;
import com.comscore.applications.KeepAlive;
import com.comscore.utils.ConnectivityChangeReceiver;
import com.comscore.utils.OfflineMeasurementsCache;
import com.comscore.utils.task.TaskExecutor;
// Referenced classes of package com.comscore.analytics:
// Core
class aa
implements Runnable
{
aa(Core core, boolean flag)
{
b = core;
// 0 0:aload_0
// 1 1:aload_1
// 2 2:putfield #14 <Field Core b>
a = flag;
// 3 5:aload_0
// 4 6:iload_2
// 5 7:putfield #16 <Field boolean a>
super();
// 6 10:aload_0
// 7 11:invokespecial #19 <Method void Object()>
// 8 14:return
}
public void run()
{
if(a && !Core.b(b))
//* 0 0:aload_0
//* 1 1:getfield #16 <Field boolean a>
//* 2 4:ifeq 71
//* 3 7:aload_0
//* 4 8:getfield #14 <Field Core b>
//* 5 11:invokestatic #26 <Method boolean Core.b(Core)>
//* 6 14:ifne 71
{
Core.a(b, true);
// 7 17:aload_0
// 8 18:getfield #14 <Field Core b>
// 9 21:iconst_1
// 10 22:invokestatic #29 <Method boolean Core.a(Core, boolean)>
// 11 25:pop
b.setErrorHandlingEnabled(Core.c(b));
// 12 26:aload_0
// 13 27:getfield #14 <Field Core b>
// 14 30:aload_0
// 15 31:getfield #14 <Field Core b>
// 16 34:invokestatic #32 <Method boolean Core.c(Core)>
// 17 37:invokevirtual #36 <Method void Core.setErrorHandlingEnabled(boolean)>
b.reset();
// 18 40:aload_0
// 19 41:getfield #14 <Field Core b>
// 20 44:invokevirtual #39 <Method void Core.reset()>
b.getConnectivityReceiver().start();
// 21 47:aload_0
// 22 48:getfield #14 <Field Core b>
// 23 51:invokevirtual #43 <Method ConnectivityChangeReceiver Core.getConnectivityReceiver()>
// 24 54:invokevirtual #48 <Method void ConnectivityChangeReceiver.start()>
b.getKeepAlive().start(3000);
// 25 57:aload_0
// 26 58:getfield #14 <Field Core b>
// 27 61:invokevirtual #52 <Method KeepAlive Core.getKeepAlive()>
// 28 64:sipush 3000
// 29 67:invokevirtual #57 <Method void KeepAlive.start(int)>
return;
// 30 70:return
}
if(!a && Core.b(b))
//* 31 71:aload_0
//* 32 72:getfield #16 <Field boolean a>
//* 33 75:ifne 175
//* 34 78:aload_0
//* 35 79:getfield #14 <Field Core b>
//* 36 82:invokestatic #26 <Method boolean Core.b(Core)>
//* 37 85:ifeq 175
{
Core.a(b, false);
// 38 88:aload_0
// 39 89:getfield #14 <Field Core b>
// 40 92:iconst_0
// 41 93:invokestatic #29 <Method boolean Core.a(Core, boolean)>
// 42 96:pop
Core.b(b, b.ag);
// 43 97:aload_0
// 44 98:getfield #14 <Field Core b>
// 45 101:aload_0
// 46 102:getfield #14 <Field Core b>
// 47 105:getfield #60 <Field boolean Core.ag>
// 48 108:invokestatic #62 <Method boolean Core.b(Core, boolean)>
// 49 111:pop
if(Thread.getDefaultUncaughtExceptionHandler() != b.ah)
//* 50 112:invokestatic #68 <Method Thread$UncaughtExceptionHandler Thread.getDefaultUncaughtExceptionHandler()>
//* 51 115:aload_0
//* 52 116:getfield #14 <Field Core b>
//* 53 119:getfield #72 <Field Thread$UncaughtExceptionHandler Core.ah>
//* 54 122:if_acmpeq 135
Thread.setDefaultUncaughtExceptionHandler(b.ah);
// 55 125:aload_0
// 56 126:getfield #14 <Field Core b>
// 57 129:getfield #72 <Field Thread$UncaughtExceptionHandler Core.ah>
// 58 132:invokestatic #76 <Method void Thread.setDefaultUncaughtExceptionHandler(Thread$UncaughtExceptionHandler)>
b.getConnectivityReceiver().stop();
// 59 135:aload_0
// 60 136:getfield #14 <Field Core b>
// 61 139:invokevirtual #43 <Method ConnectivityChangeReceiver Core.getConnectivityReceiver()>
// 62 142:invokevirtual #79 <Method void ConnectivityChangeReceiver.stop()>
b.getKeepAlive().stop();
// 63 145:aload_0
// 64 146:getfield #14 <Field Core b>
// 65 149:invokevirtual #52 <Method KeepAlive Core.getKeepAlive()>
// 66 152:invokevirtual #80 <Method void KeepAlive.stop()>
b.getOfflineCache().clear();
// 67 155:aload_0
// 68 156:getfield #14 <Field Core b>
// 69 159:invokevirtual #84 <Method OfflineMeasurementsCache Core.getOfflineCache()>
// 70 162:invokevirtual #89 <Method void OfflineMeasurementsCache.clear()>
b.f.removeAllEnqueuedTasks();
// 71 165:aload_0
// 72 166:getfield #14 <Field Core b>
// 73 169:getfield #93 <Field TaskExecutor Core.f>
// 74 172:invokevirtual #98 <Method void TaskExecutor.removeAllEnqueuedTasks()>
}
// 75 175:return
}
final boolean a;
final Core b;
}
|
With the introduction of the contrast agent gadolinium-DTPA there were hopes that "MRM" would prove to be the investigatory technique that would largely solve the problems of breast diagnostics. However, after the early years of acceptance, the new method of investigation became a subject of controversy. Nonetheless, MRM today occupies a recognized place in diagnostics for certain indications. It is still true, however, that reliable use of this procedure requires a great deal of experience, since there is a relatively large area of overlap between benign and malignant tumors. Further, the costs are significantly higher than those for conventional methods of investigation. New studies conducted at the Charité, Campus Virchow Medical Center in Berlin, suggest that, if one takes the relevant indications into account, MRM can be economic and contribute significantly to cost reduction. Application of a newly developed software package has shown that the good discrimination in a suspect area resulting from contrast-agent enhancement makes possible a reliable differentiation between malignant and benign tissue changes. A further result was that, when certain boundary conditions are satisfied, a contrast-agent bolus of 0.1 mmol/kg BW is sufficient, making a double dose (0.2 mmol/kg BW) unnecessary. |
Elderly Fall Detection Devices Using Multiple AIoT Biomedical Sensors Abstract Due to the influence of degeneration and chronic diseases, elderly people face a higher chance of fall-related injuries. Falling is one of the accidents most frequently confronted by elderly people, so this issue is worthy of concern. We propose diverse models to analyze falls through a wearable device. We then use Artificial Intelligence of Things (AIoT) biomedical sensors for fall detection to build a system for monitoring falls of elderly people with dementia. The system can meet the safety needs of elderly people by providing communication, position tracking, fall detection, and pre-warning services. The device can be worn on the waist of an elderly person. Moreover, the device can monitor whether or not the person is walking normally, transmit the information to the back-end system, and inform his/her family members via a cellphone app when an accident occurs. Considering the risks of fall tests with elderly people, this study adopts activities of daily living (ADL) to verify the test. According to the test results, the accuracy of fall detection is 93.7%, the false positive rate is 6.2%, and the false negative rate is 6.5%. Improving the accuracy of fall detection and the timely handling of appropriate referrals can be expected to reduce the occurrence of fall-related injuries. JEL classification numbers: D61, I30, O32. Keywords: Fall Detection, AIoT Sensor, Elderly People. |
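The reported rates follow directly from confusion-matrix counts. The sketch below is illustrative only: the counts are hypothetical values chosen to roughly reproduce the reported rates, not the study's actual data.

```python
def detection_metrics(tp, fp, tn, fn):
    """Compute accuracy, false positive rate, and false negative rate
    from confusion-matrix counts of a binary fall detector."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    fpr = fp / (fp + tn)  # ADL events wrongly flagged as falls
    fnr = fn / (fn + tp)  # real falls that the device missed
    return accuracy, fpr, fnr

# Hypothetical counts chosen to roughly match the reported 93.7% / 6.2% / 6.5%.
acc, fpr, fnr = detection_metrics(tp=187, fp=12, tn=182, fn=13)
print(f"accuracy={acc:.1%}  FPR={fpr:.1%}  FNR={fnr:.1%}")
# → accuracy=93.7%  FPR=6.2%  FNR=6.5%
```

Note that the false negative rate is computed over actual falls while the false positive rate is computed over ADL events, so the two denominators differ.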
import json
import os

import pandas as pd

# DataManager and SimpleAnalysis are project-local helpers assumed to be importable here.

def status(data_dir):
print(f'Showing statmech status in {data_dir}')
dm = DataManager(data_dir)
for subdir in dm.subdirs.keys():
print(f'{subdir}: {dm.count(subdir)} files')
info_json = os.path.join(data_dir, 'info.json')
if os.path.isfile(info_json):
with open(info_json) as f:
info_dict = json.load(f)
analysis = SimpleAnalysis(data_dir)
analysis.combine_inputs()
analysis.combine_results()
analysis.calculate_run_time_stats()
estimated_time = pd.to_timedelta(
analysis.estimate_run_time(
dm.load('inputs'),
max(info_dict['max_tau'].values()),
max(info_dict['i_disorder'].values()) + 1,
),
unit='s'
)
actual_time = pd.to_timedelta(
analysis.run_time_stats['total_time'].sum(),
unit='s'
)
print(f'Estimated CPU time {estimated_time}')
print(f'Actual CPU time {actual_time}')
progress = float(
100*actual_time.total_seconds()/estimated_time.total_seconds()
)
print(f'Progress {progress:.2f}%') |
/*
* Copyright 2021-2022 the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springframework.kafka.retrytopic;
import static org.assertj.core.api.Assertions.assertThat;
import static org.assertj.core.api.Assertions.fail;
import java.time.Clock;
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;
import org.junit.jupiter.api.Test;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.DltHandler;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.annotation.RetryableTopic;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaAdmin;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.kafka.listener.ContainerProperties;
import org.springframework.kafka.listener.KafkaConsumerBackoffManager;
import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.kafka.support.converter.ConversionException;
import org.springframework.kafka.test.EmbeddedKafkaBroker;
import org.springframework.kafka.test.context.EmbeddedKafka;
import org.springframework.messaging.handler.annotation.Header;
import org.springframework.retry.annotation.Backoff;
import org.springframework.test.annotation.DirtiesContext;
import org.springframework.test.context.junit.jupiter.SpringJUnitConfig;
import org.springframework.util.backoff.FixedBackOff;
/**
* @author <NAME>
* @since 2.8.4
*/
@SpringJUnitConfig
@DirtiesContext
@EmbeddedKafka
public class RetryTopicExceptionRoutingIntegrationTests {
private static final Logger logger = LoggerFactory.getLogger(RetryTopicExceptionRoutingIntegrationTests.class);
public final static String BLOCKING_AND_TOPIC_RETRY = "blocking-and-topic-retry";
public final static String ONLY_RETRY_VIA_BLOCKING = "only-retry-blocking-topic";
public final static String ONLY_RETRY_VIA_TOPIC = "only-retry-topic";
public final static String USER_FATAL_EXCEPTION_TOPIC = "user-fatal-topic";
public final static String FRAMEWORK_FATAL_EXCEPTION_TOPIC = "framework-fatal-topic";
@Autowired
private KafkaTemplate<String, String> kafkaTemplate;
@Autowired
private CountDownLatchContainer latchContainer;
@Test
void shouldRetryViaBlockingAndTopics() {
logger.debug("Sending message to topic " + BLOCKING_AND_TOPIC_RETRY);
kafkaTemplate.send(BLOCKING_AND_TOPIC_RETRY, "Test message to " + BLOCKING_AND_TOPIC_RETRY);
assertThat(awaitLatch(latchContainer.blockingAndTopicsLatch)).isTrue();
assertThat(awaitLatch(latchContainer.dltProcessorLatch)).isTrue();
}
@Test
void shouldRetryOnlyViaBlocking() {
logger.debug("Sending message to topic " + ONLY_RETRY_VIA_BLOCKING);
kafkaTemplate.send(ONLY_RETRY_VIA_BLOCKING, "Test message to ");
assertThat(awaitLatch(latchContainer.onlyRetryViaBlockingLatch)).isTrue();
assertThat(awaitLatch(latchContainer.annotatedDltOnlyBlockingLatch)).isTrue();
}
@Test
void shouldRetryOnlyViaTopic() {
logger.debug("Sending message to topic " + ONLY_RETRY_VIA_TOPIC);
kafkaTemplate.send(ONLY_RETRY_VIA_TOPIC, "Test message to " + ONLY_RETRY_VIA_TOPIC);
assertThat(awaitLatch(latchContainer.onlyRetryViaTopicLatch)).isTrue();
assertThat(awaitLatch(latchContainer.dltProcessorWithErrorLatch)).isTrue();
}
@Test
public void shouldGoStraightToDltIfUserProvidedFatal() {
logger.debug("Sending message to topic " + USER_FATAL_EXCEPTION_TOPIC);
kafkaTemplate.send(USER_FATAL_EXCEPTION_TOPIC, "Test message to " + USER_FATAL_EXCEPTION_TOPIC);
assertThat(awaitLatch(latchContainer.fatalUserLatch)).isTrue();
assertThat(awaitLatch(latchContainer.annotatedDltUserFatalLatch)).isTrue();
}
@Test
public void shouldGoStraightToDltIfFrameworkProvidedFatal() {
logger.debug("Sending message to topic " + FRAMEWORK_FATAL_EXCEPTION_TOPIC);
kafkaTemplate.send(FRAMEWORK_FATAL_EXCEPTION_TOPIC, "Testing topic with annotation 1");
assertThat(awaitLatch(latchContainer.fatalFrameworkLatch)).isTrue();
assertThat(awaitLatch(latchContainer.annotatedDltFrameworkFatalLatch)).isTrue();
}
private static void countdownIfCorrectInvocations(AtomicInteger invocations, int expected, CountDownLatch latch) {
int actual = invocations.get();
if (actual == expected) {
latch.countDown();
}
else {
logger.error("Wrong number of Listener invocations: expected {} actual {}", expected, actual);
}
}
private boolean awaitLatch(CountDownLatch latch) {
try {
return latch.await(30, TimeUnit.SECONDS);
}
catch (Exception e) {
fail(e.getMessage());
throw new RuntimeException(e);
}
}
static class BlockingAndTopicRetriesListener {
@Autowired
CountDownLatchContainer container;
@KafkaListener(id = "firstTopicId", topics = BLOCKING_AND_TOPIC_RETRY)
public void listen(String message, @Header(KafkaHeaders.RECEIVED_TOPIC) String receivedTopic) {
logger.debug("Message {} received in topic {}", message, receivedTopic);
container.blockingAndTopicsLatch.countDown();
container.blockingAndTopicsListenerInvocations.incrementAndGet();
throw new ShouldRetryViaBothException("Woooops... in topic " + receivedTopic);
}
}
static class DltProcessor {
@Autowired
CountDownLatchContainer container;
public void processDltMessage(Object message) {
countdownIfCorrectInvocations(container.blockingAndTopicsListenerInvocations, 12,
container.dltProcessorLatch);
}
}
static class OnlyRetryViaTopicListener {
@Autowired
CountDownLatchContainer container;
@KafkaListener(topics = ONLY_RETRY_VIA_TOPIC)
public void listenAgain(String message, @Header(KafkaHeaders.RECEIVED_TOPIC) String receivedTopic) {
logger.debug("Message {} received in topic {} ", message, receivedTopic);
container.onlyRetryViaTopicLatch.countDown();
container.onlyRetryViaTopicListenerInvocations.incrementAndGet();
throw new ShouldRetryOnlyByTopicException("Another woooops... " + receivedTopic);
}
}
static class DltProcessorWithError {
@Autowired
CountDownLatchContainer container;
public void processDltMessage(Object message) {
countdownIfCorrectInvocations(container.onlyRetryViaTopicListenerInvocations,
3, container.dltProcessorWithErrorLatch);
throw new RuntimeException("Dlt Error!");
}
}
static class OnlyRetryBlockingListener {
@Autowired
CountDownLatchContainer container;
@RetryableTopic(exclude = ShouldRetryOnlyBlockingException.class, traversingCauses = "true",
backoff = @Backoff(50), kafkaTemplate = "kafkaTemplate")
@KafkaListener(topics = ONLY_RETRY_VIA_BLOCKING)
public void listenWithAnnotation(String message, @Header(KafkaHeaders.RECEIVED_TOPIC) String receivedTopic) {
container.onlyRetryViaBlockingLatch.countDown();
container.onlyRetryViaBlockingListenerInvocations.incrementAndGet();
logger.debug("Message {} received in topic {} ", message, receivedTopic);
throw new ShouldRetryOnlyBlockingException("User provided fatal exception!" + receivedTopic);
}
@DltHandler
public void annotatedDltMethod(Object message, @Header(KafkaHeaders.RECEIVED_TOPIC) String receivedTopic) {
logger.debug("Received message in Dlt method " + receivedTopic);
countdownIfCorrectInvocations(container.onlyRetryViaBlockingListenerInvocations, 4,
container.annotatedDltOnlyBlockingLatch);
}
}
static class UserFatalTopicListener {
@Autowired
CountDownLatchContainer container;
@RetryableTopic(backoff = @Backoff(50), kafkaTemplate = "kafkaTemplate")
@KafkaListener(topics = USER_FATAL_EXCEPTION_TOPIC)
public void listenWithAnnotation(String message, @Header(KafkaHeaders.RECEIVED_TOPIC) String receivedTopic) {
container.fatalUserLatch.countDown();
container.userFatalListenerInvocations.incrementAndGet();
logger.debug("Message {} received in topic {} ", message, receivedTopic);
throw new ShouldSkipBothRetriesException("User provided fatal exception!" + receivedTopic);
}
@DltHandler
public void annotatedDltMethod(Object message, @Header(KafkaHeaders.RECEIVED_TOPIC) String receivedTopic) {
logger.debug("Received message in Dlt method " + receivedTopic);
countdownIfCorrectInvocations(container.userFatalListenerInvocations, 1,
container.annotatedDltUserFatalLatch);
}
}
static class FrameworkFatalTopicListener {
@Autowired
CountDownLatchContainer container;
@RetryableTopic(fixedDelayTopicStrategy = FixedDelayStrategy.SINGLE_TOPIC, backoff = @Backoff(50))
@KafkaListener(topics = FRAMEWORK_FATAL_EXCEPTION_TOPIC)
public void listenWithAnnotation(String message, @Header(KafkaHeaders.RECEIVED_TOPIC) String receivedTopic) {
container.fatalFrameworkLatch.countDown();
container.fatalFrameworkListenerInvocations.incrementAndGet();
logger.debug("Message {} received in second annotated topic {} ", message, receivedTopic);
throw new ConversionException("Woooops... in topic " + receivedTopic, new RuntimeException("Test RTE"));
}
@DltHandler
public void annotatedDltMethod(Object message, @Header(KafkaHeaders.RECEIVED_TOPIC) String receivedTopic) {
logger.debug("Received message in annotated Dlt method!");
countdownIfCorrectInvocations(container.fatalFrameworkListenerInvocations, 1,
container.annotatedDltFrameworkFatalLatch);
throw new ConversionException("Woooops... in topic " + receivedTopic, new RuntimeException("Test RTE"));
}
}
static class CountDownLatchContainer {
CountDownLatch blockingAndTopicsLatch = new CountDownLatch(12);
CountDownLatch onlyRetryViaBlockingLatch = new CountDownLatch(4);
CountDownLatch onlyRetryViaTopicLatch = new CountDownLatch(3);
CountDownLatch fatalUserLatch = new CountDownLatch(1);
CountDownLatch fatalFrameworkLatch = new CountDownLatch(1);
CountDownLatch annotatedDltOnlyBlockingLatch = new CountDownLatch(1);
CountDownLatch annotatedDltUserFatalLatch = new CountDownLatch(1);
CountDownLatch annotatedDltFrameworkFatalLatch = new CountDownLatch(1);
CountDownLatch dltProcessorLatch = new CountDownLatch(1);
CountDownLatch dltProcessorWithErrorLatch = new CountDownLatch(1);
AtomicInteger blockingAndTopicsListenerInvocations = new AtomicInteger();
AtomicInteger onlyRetryViaTopicListenerInvocations = new AtomicInteger();
AtomicInteger onlyRetryViaBlockingListenerInvocations = new AtomicInteger();
AtomicInteger userFatalListenerInvocations = new AtomicInteger();
AtomicInteger fatalFrameworkListenerInvocations = new AtomicInteger();
}
@SuppressWarnings("serial")
public static class ShouldRetryOnlyByTopicException extends RuntimeException {
public ShouldRetryOnlyByTopicException(String msg) {
super(msg);
}
}
@SuppressWarnings("serial")
public static class ShouldSkipBothRetriesException extends RuntimeException {
public ShouldSkipBothRetriesException(String msg) {
super(msg);
}
}
@SuppressWarnings("serial")
public static class ShouldRetryOnlyBlockingException extends RuntimeException {
public ShouldRetryOnlyBlockingException(String msg) {
super(msg);
}
}
@SuppressWarnings("serial")
public static class ShouldRetryViaBothException extends RuntimeException {
public ShouldRetryViaBothException(String msg) {
super(msg);
}
}
@Configuration
static class RetryTopicConfigurations {
private static final String DLT_METHOD_NAME = "processDltMessage";
@Bean
public RetryTopicConfiguration blockingAndTopic(KafkaTemplate<String, String> template) {
return RetryTopicConfigurationBuilder
.newInstance()
.fixedBackOff(50)
.includeTopic(BLOCKING_AND_TOPIC_RETRY)
.dltHandlerMethod("dltProcessor", DLT_METHOD_NAME)
.create(template);
}
@Bean
public RetryTopicConfiguration onlyTopic(KafkaTemplate<String, String> template) {
return RetryTopicConfigurationBuilder
.newInstance()
.fixedBackOff(50)
.includeTopic(ONLY_RETRY_VIA_TOPIC)
.useSingleTopicForFixedDelays()
.doNotRetryOnDltFailure()
.dltHandlerMethod("dltProcessorWithError", DLT_METHOD_NAME)
.create(template);
}
@Bean
public BlockingAndTopicRetriesListener blockingAndTopicRetriesListener() {
return new BlockingAndTopicRetriesListener();
}
@Bean
public OnlyRetryViaTopicListener onlyRetryViaTopicListener() {
return new OnlyRetryViaTopicListener();
}
@Bean
public UserFatalTopicListener userFatalTopicListener() {
return new UserFatalTopicListener();
}
@Bean
public OnlyRetryBlockingListener onlyRetryBlockingListener() {
return new OnlyRetryBlockingListener();
}
@Bean
public FrameworkFatalTopicListener frameworkFatalTopicListener() {
return new FrameworkFatalTopicListener();
}
@Bean
CountDownLatchContainer latchContainer() {
return new CountDownLatchContainer();
}
@Bean
DltProcessor dltProcessor() {
return new DltProcessor();
}
@Bean
DltProcessorWithError dltProcessorWithError() {
return new DltProcessorWithError();
}
@Bean(name = RetryTopicInternalBeanNames.LISTENER_CONTAINER_FACTORY_CONFIGURER_NAME)
public ListenerContainerFactoryConfigurer lcfc(KafkaConsumerBackoffManager kafkaConsumerBackoffManager,
DeadLetterPublishingRecovererFactory deadLetterPublishingRecovererFactory,
@Qualifier(RetryTopicInternalBeanNames
.INTERNAL_BACKOFF_CLOCK_BEAN_NAME) Clock clock) {
ListenerContainerFactoryConfigurer lcfc = new ListenerContainerFactoryConfigurer(kafkaConsumerBackoffManager, deadLetterPublishingRecovererFactory, clock);
lcfc.setBlockingRetriesBackOff(new FixedBackOff(50, 3));
lcfc.setBlockingRetryableExceptions(ShouldRetryOnlyBlockingException.class, ShouldRetryViaBothException.class);
return lcfc;
}
@Bean(name = RetryTopicInternalBeanNames.DESTINATION_TOPIC_CONTAINER_NAME)
public DefaultDestinationTopicResolver ddtr(ApplicationContext applicationContext,
@Qualifier(RetryTopicInternalBeanNames
.INTERNAL_BACKOFF_CLOCK_BEAN_NAME) Clock clock) {
DefaultDestinationTopicResolver ddtr = new DefaultDestinationTopicResolver(clock, applicationContext);
ddtr.addNotRetryableExceptions(ShouldSkipBothRetriesException.class);
return ddtr;
}
}
@Configuration
public static class KafkaProducerConfig {
@Autowired
EmbeddedKafkaBroker broker;
@Bean
public ProducerFactory<String, String> producerFactory() {
Map<String, Object> configProps = new HashMap<>();
configProps.put(
ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
this.broker.getBrokersAsString());
configProps.put(
ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
StringSerializer.class);
configProps.put(
ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
StringSerializer.class);
return new DefaultKafkaProducerFactory<>(configProps);
}
@Bean
public KafkaTemplate<String, String> kafkaTemplate() {
return new KafkaTemplate<>(producerFactory());
}
}
@EnableKafka
@Configuration
public static class KafkaConsumerConfig {
@Autowired
EmbeddedKafkaBroker broker;
@Bean
public KafkaAdmin kafkaAdmin() {
Map<String, Object> configs = new HashMap<>();
configs.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, this.broker.getBrokersAsString());
return new KafkaAdmin(configs);
}
@Bean
public ConsumerFactory<String, String> consumerFactory() {
Map<String, Object> props = new HashMap<>();
props.put(
ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG,
this.broker.getBrokersAsString());
props.put(
ConsumerConfig.GROUP_ID_CONFIG,
"groupId");
props.put(
ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
StringDeserializer.class);
props.put(
ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
StringDeserializer.class);
props.put(
ConsumerConfig.ALLOW_AUTO_CREATE_TOPICS_CONFIG, false);
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
return new DefaultKafkaConsumerFactory<>(props);
}
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> retryTopicListenerContainerFactory(
ConsumerFactory<String, String> consumerFactory) {
ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
ContainerProperties props = factory.getContainerProperties();
props.setIdleEventInterval(100L);
props.setPollTimeout(50L);
props.setIdlePartitionEventInterval(100L);
factory.setConsumerFactory(consumerFactory);
factory.setConcurrency(1);
factory.setContainerCustomizer(
container -> container.getContainerProperties().setIdlePartitionEventInterval(100L));
return factory;
}
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
ConsumerFactory<String, String> consumerFactory) {
ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(consumerFactory);
factory.setConcurrency(1);
return factory;
}
}
}
|
Nathan Glick
Nathan H. Glick (June 10, 1912 - October 16, 2012) was an American artist and illustrator best known for his work as a combat artist depicting aerial battles in World War II. He also worked as art director for Progressive Farmer magazine, and as the illustrator of several books on early Alabama history.
Glick was born in Leeds, Alabama in 1912 but finished high school in Montgomery. He continued his art studies under Eric Pape and George Ennis in New York City and studied animal anatomy under James L. Clarke at the American Museum of Natural History.
During the 1930s, Glick was art director for Paragon Press, a small publisher which issued history works by Alabama state archivist Marie Bankhead Owen. She also commissioned him to design the scenes cast in bronze for the doors of the 1940 Alabama Department of Archives and History building.
During World War II, Glick was assigned as combat artist for the Ninth Air Force. He created dramatic scenes of combat in the skies of North Africa, France, India and the South Pacific. The Air Force's public relations department distributed his drawings for publication in Yank, Stars and Stripes, The Illustrated London News, Life and Parade.
After the end of the war, Glick returned to Birmingham and took a job as art director and illustrator for Progressive Farmer, retiring in 1977. He continued to contribute illustrations of Alabama history to books and helped create a series of fourteen murals for the United States Forest Service's Forest Heritage Center in Broken Bow, Oklahoma. His drawings, lithographs and paintings are also sold through private galleries.
The "Nathan Glick Lifetime Achievement Award for Aviation Art" created by Birmingham's Southern Museum of Flight is named in his honor. |
def parse(fs):
sul = StorageUnitLabel()
value = readBytes(fs, StorageUnitLabel.SU_SEQNUM_LENGTH)
try:
sul._susn = int(value)
except ValueError:
raise Exception('Failed to interpret SequenceNumber of SU from value [{}]'.format(value))
logger.debug("Storage Unit Sequence Number:%s", sul._susn)
value = readAsString(fs, StorageUnitLabel.DLIS_VERSION_LENGTH)
if re.match(r'V1\.[0-9][0-9]', value) is None:
raise Exception('Only supported DLIS version is V1.xx, but got {}'.format(value))
sul._version = int(value[1:2])
logger.debug("DLIS Version:%s", sul._version)
sul._sus = readAsString(fs, StorageUnitLabel.SU_STRUCTURE_LENGTH)
if sul._sus != StorageUnitLabel.SU_STRUCTURE_RECORD:
raise Exception('Unsupported Storage Unit Structure in V1: {} '.format(sul._sus))
logger.debug("Storage Unit Structure:%s", sul._sus)
sul._maxRecordLen = readAsInteger(fs, StorageUnitLabel.MAX_RECORD_LEN_LENGTH)
logger.debug("Maximum Record Length:%s", sul._maxRecordLen)
sul._ssi = readAsString(fs, StorageUnitLabel.SSI_LENGTH)
logger.debug("Storage Set Identifier:%s", sul._ssi)
assert(StorageUnitLabel.LENGTH == fs.tell())
return sul |
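The fixed-width layout that the `parse` function above walks through can also be shown in a self-contained sketch. The field widths follow the 80-byte Storage Unit Label defined by the DLIS/RP66 V1 standard; the names and helper below are illustrative, not the module's actual API.

```python
import re

# RP66 V1 Storage Unit Label: 80 bytes of ASCII in five fixed-width fields.
SUL_FIELDS = [
    ("sequence_number", 4),    # e.g. b"0001"
    ("dlis_version", 5),       # e.g. b"V1.00"
    ("structure", 6),          # b"RECORD" for record-oriented storage
    ("max_record_length", 5),  # e.g. b"08192"
    ("storage_set_id", 60),    # free-form identifier, blank-padded
]

def parse_sul(buf: bytes) -> dict:
    """Split an 80-byte Storage Unit Label buffer into its named fields."""
    if len(buf) != 80:
        raise ValueError(f"SUL must be exactly 80 bytes, got {len(buf)}")
    out, pos = {}, 0
    for name, width in SUL_FIELDS:
        out[name] = buf[pos:pos + width].decode("ascii")
        pos += width
    if re.fullmatch(r"V1\.[0-9][0-9]", out["dlis_version"]) is None:
        raise ValueError(f"unsupported DLIS version {out['dlis_version']!r}")
    out["sequence_number"] = int(out["sequence_number"])
    out["max_record_length"] = int(out["max_record_length"])
    out["storage_set_id"] = out["storage_set_id"].rstrip()
    return out
```

For example, `parse_sul(b"0001V1.00RECORD08192" + b"Default Storage Set".ljust(60))` yields sequence number 1 and a maximum record length of 8192.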
/**
* Method to delete a statistical program by id
* @param command the command to execute
* @return DeleteStatisticalProgramCommand including the DTOBoolean
* @throws AuthorizationException when the user has no rights to delete
*/
public DeleteStatisticalProgramCommand deleteStatisticalProgram(final DeleteStatisticalProgramCommand command)
throws AuthorizationException {
final AccountRole role = AccountRole.valueOf(JWT.decode(command.getJwt()).getClaim("role").asString());
if (role == AccountRole.USER) {
throw new AuthorizationException(ExceptionCodes.NOT_AUTHORIZED);
}
try {
statisticalProgramRepository.deleteById(command.getId());
} catch (Exception e) {
LOG.debug("Error deleting statistical program: " + e.getMessage());
command.getEvent().setData(DTOBoolean.FAIL);
return command;
}
command.getEvent().setData(DTOBoolean.TRUE);
return command;
} |
/*
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.palantir.cassandra.cvim;
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.Collections;
import static org.assertj.core.api.Assertions.assertThat;
import org.junit.Test;
import org.apache.cassandra.config.DatabaseDescriptor;
import org.apache.cassandra.net.MessageIn;
import org.apache.cassandra.net.MessagingService;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.ArgumentMatchers.anyInt;
import static org.mockito.ArgumentMatchers.eq;
import static org.mockito.Mockito.doReturn;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.spy;
import static org.mockito.Mockito.times;
import static org.mockito.Mockito.verify;
public class CrossVpcIpMappingAckVerbHandlerTest
{
private final CrossVpcIpMappingAckVerbHandler handler = spy(new CrossVpcIpMappingAckVerbHandler());
@Test
public void doVerb_invokedByMessagingService() throws UnknownHostException
{
InetAddress remote = InetAddress.getByName("127.0.0.2");
InetAddressHostname targetName = new InetAddressHostname("target");
InetAddressIp targetExternalIp = new InetAddressIp("2.0.0.0");
InetAddressIp targetInternalIp = new InetAddressIp("1.0.0.0");
CrossVpcIpMappingAck ack = new CrossVpcIpMappingAck(targetName, targetInternalIp, targetExternalIp);
MessageIn<CrossVpcIpMappingAck> messageIn = MessageIn.create(remote,
ack,
Collections.emptyMap(),
MessagingService.Verb.CROSS_VPC_IP_MAPPING_ACK,
MessagingService.current_version);
MessagingService.instance().registerVerbHandlers(MessagingService.Verb.CROSS_VPC_IP_MAPPING_ACK, handler);
MessagingService.instance().receive(messageIn, 0, 0, false);
// Potential race condition since MessageDeliveryTask is run in another executor
verify(handler, times(1)).doVerb(eq(messageIn), anyInt());
}
@Test
public void doVerb_invokesCrossVpcIpMappingHandshaker() throws UnknownHostException
{
InetAddress remote = InetAddress.getByName("127.0.0.2");
InetAddressHostname targetName = new InetAddressHostname("target");
InetAddressIp targetExternalIp = new InetAddressIp("2.2.2.2");
InetAddressIp targetInternalIp = new InetAddressIp("127.0.0.1");
InetAddress input = InetAddress.getByName(targetInternalIp.toString());
CrossVpcIpMappingAck ack = new CrossVpcIpMappingAck(targetName, targetInternalIp, targetExternalIp);
MessageIn<CrossVpcIpMappingAck> messageIn = MessageIn.create(remote,
ack,
Collections.emptyMap(),
MessagingService.Verb.CROSS_VPC_IP_MAPPING_ACK,
MessagingService.current_version);
DatabaseDescriptor.setCrossVpcInternodeCommunication(true);
DatabaseDescriptor.setCrossVpcHostnameSwapping(false);
DatabaseDescriptor.setCrossVpcIpSwapping(true);
InetAddress result = CrossVpcIpMappingHandshaker.instance.maybeSwapAddress(input);
assertThat(result.getHostAddress()).isNotEqualTo(targetExternalIp.toString());
handler.doVerb(messageIn, 0);
result = CrossVpcIpMappingHandshaker.instance.maybeSwapAddress(input);
assertThat(result.getHostAddress()).isEqualTo(targetExternalIp.toString());
}
}
|
The Impact of Climate Change on Farm Business Performance in Western Australia. Understanding Farmers' Adaptation Responses and Their Key Characteristics in Response to a Changing and Variable Climate This study examines ten years of financial and production data of 249 farm businesses operating in south-western Australia. It also identifies the behavioural characteristics of the farm operators through a comprehensive socio-managerial survey of each farm business. The study area has a Mediterranean climate, where three quarters of the rainfall is received during the growing season, from April to October. Growers have learned to produce 2 tonnes per hectare of wheat on less than 200 mm of growing-season rainfall. Australia is the driest inhabited continent in the world and is renowned for its climate variability. In addition, evidence is emerging that its southern parts, like south-western Australia, are experiencing a warming, drying trend in their climate. Average annual rainfall over the last thirty years in the study area has declined, and average minimum and maximum temperatures have risen. Moreover, a number of droughts have occurred in the last ten years. This multidisciplinary study examines the business performance of 249 farms from 2002 to 2011 and identifies the strategies farm managers have adopted to adapt to a drying, warming environment. Farms are categorised according to their performance, and their characteristics are compared and contrasted. We find many significant differences between farm performance categories and the adaptation strategies used by the farmers in each category. There are also different socio-managerial and behavioural characteristics between the groups of farmers identified. |
// =====================================================================================================================
// Create derivative calculation on float or vector of float or half
Value* BuilderImplMisc::CreateDerivative(
Value* pValue,
bool isDirectionY,
bool isFine,
const Twine& instName)
{
uint32_t tableIdx = isDirectionY * 2 + isFine;
Value* pResult = nullptr;
if (SupportDpp())
{
static const uint32_t firstDppCtrl[4] =
{
0x55,
0xF5,
0xAA,
0xEE,
};
static const uint32_t secondDppCtrl[4] =
{
0x00,
0xA0,
0x00,
0x44,
};
uint32_t perm1 = firstDppCtrl[tableIdx];
uint32_t perm2 = secondDppCtrl[tableIdx];
pResult = Scalarize(pValue,
[this, perm1, perm2](Value* pValue)
{
Type* pValTy = pValue->getType();
pValue = CreateBitCast(pValue, getIntNTy(pValTy->getPrimitiveSizeInBits()));
pValue = CreateZExtOrTrunc(pValue, getInt32Ty());
Value* pFirstVal = CreateIntrinsic(Intrinsic::amdgcn_mov_dpp,
getInt32Ty(),
{
pValue,
getInt32(perm1),
getInt32(15),
getInt32(15),
getTrue()
});
pFirstVal = CreateZExtOrTrunc(pFirstVal, getIntNTy(pValTy->getPrimitiveSizeInBits()));
pFirstVal = CreateBitCast(pFirstVal, pValTy);
Value* pSecondVal = CreateIntrinsic(Intrinsic::amdgcn_mov_dpp,
getInt32Ty(),
{
pValue,
getInt32(perm2),
getInt32(15),
getInt32(15),
getTrue()
});
pSecondVal = CreateZExtOrTrunc(pSecondVal, getIntNTy(pValTy->getPrimitiveSizeInBits()));
pSecondVal = CreateBitCast(pSecondVal, pValTy);
Value* pResult = CreateFSub(pFirstVal, pSecondVal);
return CreateUnaryIntrinsic(Intrinsic::amdgcn_wqm, pResult);
});
}
else
{
static const uint32_t firstSwizzleCtrl[4] =
{
0x8055,
0x80F5,
0x80AA,
0x80EE,
};
static const uint32_t secondSwizzleCtrl[4] =
{
0x8000,
0x80A0,
0x8000,
0x8044,
};
uint32_t perm1 = firstSwizzleCtrl[tableIdx];
uint32_t perm2 = secondSwizzleCtrl[tableIdx];
pResult = Scalarize(pValue,
[this, perm1, perm2](Value* pValue)
{
Type* pValTy = pValue->getType();
pValue = CreateBitCast(pValue, getIntNTy(pValTy->getPrimitiveSizeInBits()));
pValue = CreateZExtOrTrunc(pValue, getInt32Ty());
Value* pFirstVal = CreateIntrinsic(Intrinsic::amdgcn_ds_swizzle,
{},
{ pValue, getInt32(perm1)});
pFirstVal = CreateZExtOrTrunc(pFirstVal, getIntNTy(pValTy->getPrimitiveSizeInBits()));
pFirstVal = CreateBitCast(pFirstVal, pValTy);
Value* pSecondVal = CreateIntrinsic(Intrinsic::amdgcn_ds_swizzle,
{},
{ pValue, getInt32(perm2) });
pSecondVal = CreateZExtOrTrunc(pSecondVal, getIntNTy(pValTy->getPrimitiveSizeInBits()));
pSecondVal = CreateBitCast(pSecondVal, pValTy);
Value* pResult = CreateFSub(pFirstVal, pSecondVal);
return CreateUnaryIntrinsic(Intrinsic::amdgcn_wqm, pResult);
});
}
pResult->setName(instName);
return pResult;
} |
// --------------------------------------------------------------------------------
// Copyright 2002-2022 Echo Three, LLC
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// --------------------------------------------------------------------------------
package com.echothree.model.control.security.server.transfer;
import com.echothree.model.control.security.common.transfer.PartySecurityRoleTemplateTrainingClassTransfer;
import com.echothree.model.control.security.common.transfer.PartySecurityRoleTemplateTransfer;
import com.echothree.model.control.security.server.control.SecurityControl;
import com.echothree.model.control.training.common.transfer.TrainingClassTransfer;
import com.echothree.model.control.training.server.control.TrainingControl;
import com.echothree.model.data.security.server.entity.PartySecurityRoleTemplateTrainingClass;
import com.echothree.model.data.user.server.entity.UserVisit;
import com.echothree.util.server.persistence.Session;
public class PartySecurityRoleTemplateTrainingClassTransferCache
extends BaseSecurityTransferCache<PartySecurityRoleTemplateTrainingClass, PartySecurityRoleTemplateTrainingClassTransfer> {
TrainingControl trainingControl = Session.getModelController(TrainingControl.class);
/** Creates a new instance of PartySecurityRoleTemplateTrainingClassTransferCache */
public PartySecurityRoleTemplateTrainingClassTransferCache(UserVisit userVisit, SecurityControl securityControl) {
super(userVisit, securityControl);
}
public PartySecurityRoleTemplateTrainingClassTransfer getPartySecurityRoleTemplateTrainingClassTransfer(PartySecurityRoleTemplateTrainingClass partySecurityRoleTemplateTrainingClass) {
PartySecurityRoleTemplateTrainingClassTransfer partySecurityRoleTemplateTrainingClassTransfer = get(partySecurityRoleTemplateTrainingClass);
if(partySecurityRoleTemplateTrainingClassTransfer == null) {
PartySecurityRoleTemplateTransfer partySecurityRoleTemplate = securityControl.getPartySecurityRoleTemplateTransfer(userVisit, partySecurityRoleTemplateTrainingClass.getPartySecurityRoleTemplate());
TrainingClassTransfer trainingClass = trainingControl.getTrainingClassTransfer(userVisit, partySecurityRoleTemplateTrainingClass.getTrainingClass());
partySecurityRoleTemplateTrainingClassTransfer = new PartySecurityRoleTemplateTrainingClassTransfer(partySecurityRoleTemplate, trainingClass);
put(partySecurityRoleTemplateTrainingClass, partySecurityRoleTemplateTrainingClassTransfer);
}
return partySecurityRoleTemplateTrainingClassTransfer;
}
}
|
Foreign nuclear experts on Friday blasted the operator of Japan's crippled Fukushima nuclear plant, with one saying its lack of transparency over toxic water leaks showed "you don't know what you're doing".
The blunt criticism comes after a litany of problems at the reactor site, which was swamped by a quake-sparked tsunami two years ago. The disaster sent reactors into meltdown and forced the evacuation of tens of thousands of residents in the worst atomic accident in a generation.
Earlier this week, Tokyo Electric Power (TEPCO) admitted for the first time that radioactive groundwater had leaked into the sea, confirming long-held suspicions of ocean contamination from the shattered reactors.
"This action regarding the water contamination demonstrates a lack of conservative decision-making process," Dale Klein, former head of the US Nuclear Regulatory Commission (NRC), told a panel in Tokyo.
"It also appears that you are not keeping the people of Japan informed. These actions indicate that you don't know what you are doing...you do not have a plan and that you are not doing all you can to protect the environment and the people."
Klein was invited to attend the TEPCO-sponsored nuclear reform monitoring panel composed of two foreign experts and four Japanese including the company's chief executive.
The utility had previously reported rising levels of cancer-causing materials in groundwater samples from underneath the plant, but maintained it had contained toxic water from leaking beyond its borders.
But the embattled company has now conceded it delayed the release of test results confirming the leaks as Japan's nuclear watchdog heaped doubt on its claims.
"We would like to express our frustrations in your recent activities regarding the water contamination," Klein said.
"We believe that these events detract from the progress that you have made on your clean up and reform for the Fukushima (plant)."
Barbara Judge, chairman of Britain's Atomic Energy Authority, said she was "disappointed and distressed" over the company's lack of disclosure.
"I hope that there will be lessons learned from the mishandling of this issue and the next time an issue arises -- which inevitably it will because decommissioning is a complicated and difficult process -- that the public will be immediately informed about the situation and what TEPCO is planning to do in order to remedy it," she said.
Decommissioning the site is expected to take decades and many area residents will likely never be able to return home, experts say. |
import React, { useState, useEffect, useRef } from "react";
import ChatMessage from "./ChatMessage";
import Splash from "./Splash";
import { useSocket } from "../contexts/SocketContext";
import { useAppSelector } from "../store/hooks";
import { DirectMessage } from "../types/entities";
import axios from "../axios";
import { ResponseData } from "../types";
import { useAuth } from "../contexts/AuthContext";
interface ChatProps {}
const Chat: React.FC<ChatProps> = () => {
const { receiver } = useAppSelector((state) => state.currentChat);
const { user } = useAuth();
const { socket } = useSocket();
const [messages, setMessages] = useState<DirectMessage[]>([]);
const [hasMore, setHasMore] = useState(false);
const chatDivRef = useRef<HTMLDivElement>(null);
const prevScrollHeightRef = useRef<any>({ scrollTop: 0, scrollHeight: 0 });
useEffect(() => {
if (!receiver) return;
const emptyArr: DirectMessage[] = [];
setMessages(emptyArr);
(async () => {
const payload = { receiverId: receiver.id };
const { data: resData } = await axios.post<ResponseData>(
"/api/direct-message/",
payload
);
const { data, ok } = resData;
if (ok) {
data.results.reverse();
setMessages(data.results);
setHasMore(data.hasMore);
const scrollTop =
chatDivRef.current!.scrollHeight - chatDivRef.current!.clientHeight;
chatDivRef.current?.scrollTo({ top: scrollTop });
}
})();
}, [receiver]);
useEffect(() => {
if (!receiver || !socket) return;
const messageReceiver = (message: DirectMessage) => {
setMessages((p) => [...p, message]);
const scrollTop =
chatDivRef.current!.scrollHeight - chatDivRef.current!.clientHeight;
chatDivRef.current?.scrollTo({ top: scrollTop, behavior: "smooth" });
};
socket.emit("join-direct-message", { receiverName: receiver.username });
socket.on("receive-direct-message", messageReceiver);
return () => {
socket.off("receive-direct-message", messageReceiver);
socket?.emit("leave-direct-message", { receiverName: receiver.username });
};
}, [socket, receiver]);
const handleLoadMore = async () => {
if (!hasMore) return;
prevScrollHeightRef.current = {
scrollTop: chatDivRef.current?.scrollTop,
scrollHeight: chatDivRef.current?.scrollHeight,
};
const payload = {
receiverId: receiver!.id,
timestamp: messages[0].createdAt,
id: messages[0].id,
};
const { data: resData } = await axios.post<ResponseData>(
"/api/direct-message/",
payload
);
const { data, ok } = resData;
if (ok) {
data.results.reverse();
setMessages((p) => [...data.results, ...p]);
setHasMore(data.hasMore);
const newScrollPos =
chatDivRef.current!.scrollHeight -
prevScrollHeightRef.current.scrollHeight;
chatDivRef.current!.scrollTo({ top: newScrollPos });
}
};
if (!receiver) {
return (
<div className="flex-grow overflow-x-hidden overflow-y-auto">
<Splash isfullScreen={false} spinner={false} text={splashText} />
</div>
);
}
return (
<div
className="flex-grow overflow-x-hidden overflow-y-auto pb-20"
ref={chatDivRef}
>
{hasMore ? (
<button
className="py-1 px-3 rounded m-1 border-2 border-gray-300 uppercase text-sm hover:bg-gray-300"
onClick={handleLoadMore}
>
Load More
</button>
) : (
<p
style={{ width: "fit-content" }}
className="text-center text-sm bg-yellow-100 p-1 px-3 m-auto mt-2 mb-2 text-gray-600 rounded shadow"
>
Messages are end-to-end encrypted
</p>
)}
{messages.map((msg) => (
<ChatMessage
key={msg.id}
message={{
...msg,
sender: msg.senderId === user!.id ? user! : receiver,
receiver: msg.senderId === user!.id ? receiver! : user!,
}}
/>
))}
</div>
);
};
const splashText =
"Start a chat either by creating a new one or by selecting a previous from the sidebar";
export default Chat;
|
Prevalence of dental trauma and use of mouthguards in professional handball players BACKGROUND/AIM Published data about orofacial injuries and mouthguard use by professional handball players are scarce. The aim of this study was to investigate the prevalence of orofacial trauma and mouthguard use in professional handball players. MATERIALS AND METHODS Data were collected from 100 professional handball players through a questionnaire, which contained 17 questions about age, experience in playing handball, playing position, orofacial trauma experience during the past 12 months, type of injury and mouthguard use. RESULTS Almost half (49%) of the interviewed players experienced head and/or facial trauma during the past year. The most common injuries were soft tissue lacerations (39.6%). Dental injuries occurred in 22% of the participants, with socket bleeding being the most frequent injury (14%). Of the affected teeth, 76.9% were upper incisors. Mouthguards had a statistically significant protective role regarding tooth fractures and tooth avulsion (P=.043). Players who wore a mouthguard were 5.55 times less likely to suffer dental injuries. Almost 76% of dental injuries resulted in complications afterward. Sixty-seven percent of the players knew that mouthguards could prevent injuries, but only 28% used them regularly. Of the players who wore a mouthguard regularly, 76.9% were advised to do so by their dentists. CONCLUSIONS The incidence of head and orofacial injuries among professional handball players is high. Mouthguards prevented severe dental injuries such as tooth fracture and avulsion, but their use was still limited. |
Recombinant plasmid conferring proline overproduction and osmotic tolerance A recombinant plasmid carrying the proBA (pro-74) mutant allele which governs osmotic tolerance and proline overproduction was constructed by using the broad-host-range plasmid vector pQSR49. The physiological, biochemical, and genetic properties of strains carrying the pQSR49 derivatives pMJ101 and pMJ1, mutant and wild type, respectively, were investigated. pMJ101 conferred enhanced osmotolerance compared with strains carrying the wild type, pMJ1. These results are in contrast to those obtained previously with strains carrying recombinant plasmids based on pBR322 that failed to confer the osmotic tolerance phenotype. gamma-Glutamyl kinase (first step in proline biosynthesis) from strains carrying pMJ101 was 200-fold less sensitive to feedback inhibition than was the wild-type enzyme. As expected, the intracellular proline levels of strains carrying pMJ101 were more than an order of magnitude higher than those of the wild type. An analysis of copy number revealed that the pQSR49 constructs were present in the cell at a level six- to eightfold lower than those of the pBR322 recombinants, which may account for the difference in phenotype. We found that the genetic stability of the pQSR49 derivative in a variety of gram-negative bacteria was dependent on the insert orientation and the presence of foreign DNA on the plasmid. These factors may be significant in future studies aimed at expanding the osmotolerance phenotype to a broad range of gram-negative bacteria. |
package strategies;
package strategies;
/*
* ------------------------------------------------------------------------
*
* Copyright (C) 2003 - 2013
* University of Konstanz, Germany and
* KNIME GmbH, Konstanz, Germany
* Website: http://www.knime.org; Email: <EMAIL>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License, Version 3, as
* published by the Free Software Foundation.
*
* This program is distributed in the hope that it will be useful, but
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, see <http://www.gnu.org/licenses>.
*
* Additional permission under GNU GPL version 3 section 7:
*
* KNIME interoperates with ECLIPSE solely via ECLIPSE's plug-in APIs.
* Hence, KNIME and ECLIPSE are both independent programs and are not
* derived from each other. Should, however, the interpretation of the
* GNU GPL Version 3 ("License") under any applicable laws result in
* KNIME and ECLIPSE being a combined program, KNIME GMBH herewith grants
* you the additional permission to use and propagate KNIME together with
* ECLIPSE with only the license terms in place for ECLIPSE applying to
* ECLIPSE and the GNU GPL Version 3 applying for KNIME, provided the
* license terms of ECLIPSE themselves allow for the respective use and
* propagation of ECLIPSE together with KNIME.
*
* Additional permission relating to nodes for KNIME that extend the Node
* Extension (and in particular that are based on subclasses of NodeModel,
* NodeDialog, and NodeView) and that only interoperate with KNIME through
* standard APIs ("Nodes"):
* Nodes are deemed to be separate and independent programs and to not be
* covered works. Notwithstanding anything to the contrary in the
* License, the License does not apply to Nodes, you are not required to
* license Nodes under the License, and you are granted a license to
* prepare and propagate Nodes, in each case even if such Nodes are
* propagated with or for interoperation with KNIME. The owner of a Node
* may freely choose the license terms applicable to such Node, including
* when such Node is propagated with or for interoperation with KNIME.
* ---------------------------------------------------------------------
*
* Created on 01.12.2013 by Andreas
*/
import net.imglib2.RandomAccess;
import net.imglib2.RandomAccessible;
import net.imglib2.type.logic.BitType;
/**
* An implementation of the Algorithm proposed by <NAME>.
*
* @author <NAME>, University of Konstanz
*/
public class HilditchAlgorithm extends Abstract3x3NeighbourhoodThinning {
/**
* Create a new Hilditch strategy. The passed boolean will represent the foreground value of the image.
*
* @param foreground Value determining the boolean value of foreground pixels.
*/
public HilditchAlgorithm(final boolean foreground)
{
super(foreground);
}
/**
* {@inheritDoc}
*/
@Override
public boolean removePixel(final long[] position, final RandomAccessible<BitType> accessible) {
RandomAccess<BitType> access = accessible.randomAccess();
access.setPosition(position);
boolean[] vals = getNeighbourhood(access);
// First condition is to ensure there are at least 2 and at most 6 neighbouring foreground pixels.
int numForeground = 0;
for (int i = 1; i < vals.length; ++i) {
if (vals[i] == m_foreground) {
++numForeground;
}
}
if (!(2 <= numForeground && numForeground <= 6)) {
return false;
}
// Second condition checks for transitions between foreground and background. Exactly 1 such transition
// is required.
int numPatterns = findPatternSwitches(vals);
if (!(numPatterns == 1)) {
return false;
}
// The third and fourth conditions require neighbourhoods of adjacent pixels.
// Access has to be reset to current image-position before moving it, since
// the getNeighbourhood() method moves it to the top-left of the initial pixel.
access.setPosition(position);
access.move(-1, 1);
int p2Patterns = findPatternSwitches((getNeighbourhood(access)));
if (!( (vals[1] == m_background || vals[3] == m_background || vals[7] == m_background) || p2Patterns != 1)) {
return false;
}
access.setPosition(position);
access.move(1, 0);
int p4Patterns = findPatternSwitches((getNeighbourhood(access)));
if (!((vals[1] == m_background || vals[3] == m_background || vals[5] == m_background) || p4Patterns != 1)) {
return false;
}
// If all conditions are met, we can safely remove the pixel.
return true;
}
/**
* {@inheritDoc}
*/
@Override
public ThinningStrategy copy() {
return new HilditchAlgorithm(m_foreground);
}
} |
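The `findPatternSwitches` helper is not shown in this excerpt. The crossing-number test it implements — exactly one background→foreground transition when walking the 8-neighbourhood in circular order — can be sketched as follows (a Python sketch under assumed semantics, not the project's actual code):

```python
def count_transitions(neigh):
    """Count background->foreground (False->True) transitions while
    walking the 8-neighbourhood P2..P9 in circular order.

    Sketch only: the Java findPatternSwitches is assumed to compute
    an equivalent crossing number over the boolean neighbourhood.
    """
    n = len(neigh)
    return sum(1 for i in range(n)
               if not neigh[i] and neigh[(i + 1) % n])
```

With exactly one such transition, the candidate pixel sits on a simple boundary and removing it cannot split the skeleton.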
If you're thinking of booking time off work, you have to be aware of the amount of paid holiday you're entitled to.
Almost the entire British workforce is allowed a minimum of 5.6 weeks paid holiday every year - roughly 28 days for those working a five day week.
That holiday applies whether you work shifts or are on an annual contract, and it continues to accrue even while you're on maternity, paternity or sick leave.
But seeing as you have a right to your holiday, what happens if you are told you can't book time off? Well, a closer look at the law shows that holiday rights are more restricted than you may have thought.
The Mirror put together a guide on everything you need to know about booking time off work.
When your boss absolutely can say "no"
While the total amount of holiday is enshrined in law, that doesn't mean when you take it is entirely up to you.
"It is a common misconception that employees can take annual leave whenever they want," said Alastair Brown, BrightHR chief technological officer.
"Employers can decline annual leave requests where they have a business reason to do so."
Worse, they are also allowed to force you to take holiday at certain times - for example if the office is shut between Christmas and New Year or on bank holidays.
"Many employers will also have rules regarding the maximum length of annual leave that can be taken at one time to prevent staff having long periods of absence from work," Mr Brown explained.
"Additionally, seasonal restrictions may also be enforced to prevent employees from taking annual leave during certain times of the year, for example, during the lead up to Christmas.
"Employers often place a cap on how many employees from each department are able to take annual leave at the same time to ensure service levels are maintained, with this being more important in smaller departments where there are fewer members of staff to cover workplace duties".
Making sure they say "yes"
While bosses can say no to a specific holiday request, they absolutely can't refuse to let you take leave at all. In fact, if they do they face quite strict penalties.
"Employees must be permitted to take their minimum leave entitlement as failure to reasonably allow this could result in costly tribunal claims," Mr Brown explained.
That means that as long as you've checked first (and got in ahead of your colleagues), repeated rejections are something you can take up with Acas.
Generally, you need to request holiday twice as many days ahead as the amount of time you want to take off - so give two days’ notice for one day’s leave. But check your contract to be sure, as it might say something different.
Likewise, bosses who refuse leave requests have to give as much notice as the amount of leave requested - so they have to tell you two weeks before the holiday if you've requested two weeks off.
What if I don't take it all?
Sadly, you don't have an automatic right to carry holiday over to the next year, but many employers let you.
Generally, workers have to take at least four weeks of statutory leave during the leave year, but can carry over any remaining time off if their boss OKs it.
So if you have a five-day working week, you can carry over up to eight days. But - again - this is up to your boss.
The details should be in your employment contract - and it is required by law to have one and give it to employees no later than two months after the start of a job. |
Frank Lloyd Wright called the Alice Millard house in Pasadena “this little house” as a term of endearment. Over time, the nickname La Miniatura has stuck. Both monikers seem ill-fitting for a landmark of such stature today, one that grandson Eric Lloyd Wright called the best of Wright's four concrete block houses in the region.
It boggles the mind to think that the Millard house has been on the market for two years, currently listed at $4,995,000.
If that's out of your price range, at least you can live vicariously through our recently posted article and photo gallery, the latest installment in our Landmark Houses project. |
HYPERPROLACTINEMIC HYPOGONADISM: PREDICTION OF TREATMENT EFFECTIVENESS AND MANAGEMENT Hyperprolactinemia syndrome (HPRL) is one of the most common neuroendocrine diseases, leading to the development of hypogonadism in young women. The aim of the study was to assess the effectiveness of treatment of menstrual and reproductive disorders caused by hyperprolactinemia, depending on the nature of the relationship of gonadotropic hormones. Materials and methods. 98 women of reproductive age were monitored, of which 78 with functional HPRL and 20 healthy women. Clinical-anamnestic, enzyme-linked immunosorbent, instrumental (perimetry, computed tomography), functional and statistical research methods were used. The effectiveness of therapy for menstrual and reproductive disorders caused by HPRL was evaluated depending on the nature of the relationship of gonadotropic hormones (GH). Results. In patients of reproductive age with HPRL there are four types of GH secretion: in the first, LH and FSH levels are reduced; in the second, the level of LH is increased and FSH is reduced; in the third, the level of LH is reduced and FSH is increased; in the fourth, both LH and FSH levels are elevated. As a result of the treatment, ovarian function was restored in 83.6% of patients with the first type of GH secretion, in 66.7% with the second type, and in 37.5% with the third. In the group of women with HPRL and high levels of both GH, normalization of the menstrual cycle and restoration of reproductive function did not occur. Conclusions. The detection of four types of GH secretion in HPRL indicates different pathogenetic features of this pathology, which must be taken into account when prescribing personalized therapy to restore ovarian function and fertility and when deciding on the timely use of assisted reproductive technologies. Key words: hyperprolactinemia, hypogonadism, pathogenetic features, prognosis, therapy. |
"First Detection of PER-Type Extended-Spectrum β-lactamases at Saint Camille Hospital Center of Ouagadougou, Burkina Faso " Resistance to a wide variety of common antimicrobials is observed among clinical strains designated as extended-spectrum β-lactamase (ESBL) producers. They produce enzymatic proteins that effectively inactivate cephalosporins and aztreonam and are a serious global health problem that complicates treatment strategies. Many studies report a high prevalence of ESBL producers among Gram-negative bacilli. The purpose of this work was to identify the PER resistance gene in enterobacterial strains. Gram-negative bacilli resistant to at least one third-generation cephalosporin or Aztreonam, or showing a synergistic image between amoxicillin + clavulanic acid and a third-generation cephalosporin, were isolated during an antibiogram. Antibiotic resistance was detected for the following antibiotics: Ceftriaxone, Cefotaxime, Ceftazidime and Aztreonam. A classical polymerase chain reaction (PCR) analysis of the β-lactamase PER (Pseudomonas Extended Resistance) gene was performed using specific primers in 60 ESBL-producing isolates. Among 250 strains of Gram-negative bacilli collected, 60 strains (24%) showed resistance to the antibiotics used. Stool samples are a major source of ESBL producers. The highest prevalence of resistant strains was observed in Escherichia coli, with a rate of 35%. Among the ESBL-producing isolates, the PER gene was detected in up to 15%, across 6 bacterial species. This study represents the first detection of the PER gene in Burkina Faso. |
package it.polimi.ingsw.message.viewMsg;
import it.polimi.ingsw.message.ViewObserver;
import it.polimi.ingsw.model.card.LeaderCard;
import java.util.ArrayList;
import java.util.List;
/**
* InitializedC ---> VV ---> CLI
*
* "initialization"
* this class represents the msg sent by the controller to the view in the initialization of the game,
* so it is the Initialized Controller that creates it and notifies the view;
* it will be sent to the client, waiting for the response
*
* "remove"
* it is used even in the choice of REMOVE a Leader card not activated yet
*
* "active"
* it is used even in the choice of ACTIVATE a Leader card not activated yet
*/
public class VChooseLeaderCardRequestMsg extends ViewGameMsg {
private ArrayList<Integer> miniDeckLeaderCardFour; // the four cards from which the client has to choose
private String username;
private String whatFor;
public VChooseLeaderCardRequestMsg(String msgContent, ArrayList<Integer> cardToChoose, String username, String whatFor) {
super(msgContent);
miniDeckLeaderCardFour = cardToChoose;
this.username = username;
this.whatFor = whatFor;
}
public ArrayList<Integer> getMiniDeckLeaderCardFour() {
return miniDeckLeaderCardFour;
}
public String getUsername() {
return username;
}
public String getWhatFor() {
return whatFor;
}
@Override
public void notifyHandler(ViewObserver viewObserver) {
viewObserver.receiveMsg(this);
}
}
|
def _log_progress(self):
if self.skip(self.LOGGING_RATE_):
return
logging.info("%8s / %s", self.stage, self.STAGES_PER_SIMULATION_)
time_diff = time.time() - self.time_start
seconds_per_100 = time_diff / self.stage * 100
eta = (self.STAGES_PER_SIMULATION_ - self.stage) / 100 * seconds_per_100
stages_per_min = int(self.stage / (time_diff / 60))
runtime = get_dhm(time_diff)
time_per_1M = get_dhm(time_diff / self.stage * 1000000)
eta = get_dhm(eta)
content = (self.stage, eta, time_per_1M, runtime, stages_per_min)
with open(self.progress_path, "ab") as f:
np.savetxt(f, [content], fmt="%-10s", delimiter="| ") |
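The fragment above calls a `get_dhm` helper that is not defined here. A plausible sketch, assuming it formats a duration in seconds as days/hours/minutes (the exact name and output format are assumptions):

```python
def get_dhm(seconds):
    """Format a duration in seconds as 'Dd HH:MM' (days, hours, minutes).

    Hypothetical helper: the real get_dhm is defined elsewhere in the
    project; only its name and rough purpose are known from the caller.
    """
    seconds = int(seconds)
    days, rem = divmod(seconds, 86400)      # whole days, leftover seconds
    hours, minutes = divmod(rem // 60, 60)  # leftover minutes -> h, m
    return "%dd %02d:%02d" % (days, hours, minutes)

# e.g. get_dhm(90061) -> "1d 01:01"
```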
def _get_digest_from_cache(self, digest):
try:
obj = self._s3cache.Object(self._bucket,
digest.hash + '_' + str(digest.size_bytes))
return remote_execution_pb2.Digest.FromString(obj.get()['Body'].read())
except ClientError as e:
if e.response['Error']['Code'] not in ['404', 'NoSuchKey']:
raise
return None |
New Delhi: State-owned oil companies will soon set up a fund that will finance start-ups based on innovative ideas in the energy sector, oil minister Dharmendra Pradhan said at the Natural Gas Vision 2025 conference organized by the Indian Oil Corp. Ltd (IOC) in New Delhi on Tuesday.
Pradhan said the chiefs of state-run energy firms will take part in the start-up movement in the country, which already has the support of iconic businessmen like Infosys co-founder N.R. Narayana Murthy and chairman emeritus of Tata Sons Ratan Tata. The move is in line with the government’s idea of promoting entrepreneurship, innovation and self employment.
“Indian intellectual capacity is contributing immensely to the global oil and gas economy. We would like the Indian energy market too to benefit from that capacity," Pradhan said, adding that state-run exploration, production, refining and gas infrastructure companies will contribute to the fund. The oil ministry has advised these companies to create a corpus that would be utilized to invest in new businesses with an innovative business model and brings efficiency. Companies will decide how big the fund will be.
“Any start-up needing hand-holding can seek support from the fund. Public sector companies will independently evaluate the commercial viability of the project and take a decision on their equity participation," Pradhan said.
Oil industry leaders welcomed the move. “The minister said this fund would cater to innovation across the entire oil and gas value chain. I guess there will be many takers for it, especially, those who seek to cater to the consumer end of the energy industry," said Ranbir Singh Butola, former chairman of IOC.
Prime Minister Narendra Modi, who launched the start-up initiative on 15 August last year, announced an action plan on 16 January to promote start-ups. These include setting up a dedicated ₹ 10,000 crore fund, income-tax exemption for the first three years, and a simple exit policy for start-ups. |
A few weeks ago, I came home. End of deployment. “Hello USA, oh how I’ve missed you!” It was fantastic. Every homecoming is fantastic. I’ve walked off a ship, walked off a flight line and now walked through an airport security barrier.
Many of our young Sailors and Marines are pushed, pressured, and led to use supplements that claim to help achieve unrealistic or superhuman results.
Four of Major League Baseball’s greatest players gathered here last night, not just to talk baseball, but to reflect on their proud service as war veterans.
[VIDEO] Rear Adm. Guadagnini, Commander, Carrier Strike Group 9 visits the Medical Department of USS Abraham Lincoln.
Marine Cpl. Michael Pride, 29, from Kansas City, Mo., joined the U.S. Marine Corps in May 2007 and was trained as a motor transportation operator. He deployed in March 2008 with 2nd Battalion, 7th Marine Regiment out of Twentynine Palms, to serve in Afghanistan in support of Operation Enduring Freedom (OEF). In Sept. 2008, during his final month in theater, Cpl. Pride was on a convoy in Farah Province when his Humvee was struck by an IED and rolled over, crushing his left arm and mortally wounding his platoon Sergeant.
Shipments are beginning to arrive with more expected by the end of September. I encourage all to receive their vaccination as soon as possible thereafter.
Taking care of our servicemembers calls for enhanced efforts throughout the Departments of Defense and Veterans Affairs, and the community. We’re all in this fight together – as we bring our servicemembers home safe, we want to keep them safe.
In the military, we have learned of the power of partnership in the joint world. But now we need to focus on partnership in the worlds of international, interagency, and private-public activity. |
//package club.tinysme.lsongseven.agent;
//
//import java.util.Objects;
//
//
//public class RedisCatMonitor {
//
// public static final ThreadLocal<RedisCatMonitor> THREAD_LOCAL_CAT_LOG = new ThreadLocal<>();
//
// public static void start(String action, Object data) {
// THREAD_LOCAL_CAT_LOG.remove();
// RedisCatMonitor redisCatMonitor = new RedisCatMonitor(action);
// redisCatMonitor.before(String.valueOf(data));
// THREAD_LOCAL_CAT_LOG.set(redisCatMonitor);
// }
//
// public static void end(boolean success) {
// RedisCatMonitor redisCatMonitor = THREAD_LOCAL_CAT_LOG.get();
// if (Objects.nonNull(redisCatMonitor)) {
// redisCatMonitor.after(success);
// THREAD_LOCAL_CAT_LOG.remove();
// }
// }
//
// private String action;
// private Transaction tranx;
//
// public RedisCatMonitor(String action) {
// this.action = action;
// }
//
// public void before(String data) {
// this.tranx = Cat.newTransaction("RedisTemplateInfo.", this.action);
// this.tranx.addData("key", data);
// }
//
// public void after(boolean success) {
// if (!success) {
// this.tranx.setStatus("failed");
// } else {
// this.tranx.setStatus("0");
// }
// this.tranx.complete();
// }
//}
|
Automated Histogram-Based Brain Segmentation in T1-Weighted Three-Dimensional Magnetic Resonance Head Images Current semiautomated magnetic resonance (MR)-based brain segmentation and volume measurement methods are complex and not sufficiently accurate for certain applications. We have developed a simpler, more accurate automated algorithm for whole-brain segmentation and volume measurement in T1-weighted, three-dimensional MR images. This histogram-based brain segmentation (HBRS) algorithm is based on histograms and simple morphological operations. The algorithm's three steps are foreground/background thresholding, disconnection of brain from skull, and removal of residue fragments (sinus, cerebrospinal fluid, dura, and marrow). Brain volume was measured by counting the number of brain voxels. Accuracy was determined by applying HBRS to both simulated and real MR data. Comparing the brain volume rendered by HBRS with the volume on which the simulation is based, the average error was 1.38%. By applying HBRS to 20 normal MR data sets downloaded from the Internet Brain Segmentation Repository and comparing them with expert segmented data, the average Jaccard similarity was 0.963 and the kappa index was 0.981. The reproducibility of brain volume measurements was assessed by comparing data from two sessions (four total data sets) with human volunteers. Intrasession variability of brain volumes for sessions 1 and 2 was 0.55 +/- 0.56 and 0.74 +/- 0.56%, respectively; the mean difference between the two sessions was 0.60 +/- 0.46%. These results show that the HBRS algorithm is a simple, fast, and accurate method to determine brain volume with high reproducibility. This algorithm may be applied to various research and clinical investigations in which brain segmentation and volume measurement involving MRI data are needed. |
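The reported overlap scores are internally consistent if the "kappa index" here is the Dice coefficient, a common convention in segmentation work: Dice = 2J/(1+J), and J = 0.963 gives ≈ 0.981. A minimal sketch, treating segmentations as sets of voxel indices (function names are ours, not from the paper):

```python
def jaccard(seg, ref):
    """Jaccard similarity between two sets of foreground voxel indices."""
    seg, ref = set(seg), set(ref)
    union = len(seg | ref)
    return len(seg & ref) / union if union else 1.0

def dice_from_jaccard(j):
    """Dice ('kappa') index from a Jaccard similarity: 2J / (1 + J)."""
    return 2.0 * j / (1.0 + j)

# dice_from_jaccard(0.963) -> 0.9811..., matching the reported 0.981
```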
// Test setup:
import { expect } from '@tractor/unit-test';
// Dependencies:
import * as npmlog from 'npmlog';
// Under test:
import { error, info, mute, warn } from './index';
describe('@tractor/logger:', () => {
it('should have the correct heading', () => {
expect(npmlog.heading).to.equal('🚜 tractor');
});
describe('@tractor/logger: mute:', () => {
it('should set the npmlog level to silent', () => {
mute();
expect(npmlog.level).to.equal('silent');
});
});
describe('@tractor/logger: error:', () => {
it('should pass through to npmlog', () => {
const log = jest.spyOn(npmlog, 'log');
error('error');
expect(log.mock.calls).to.deep.equal([['tractor-error', '', 'error']]);
log.mockRestore();
});
it('should handle multiple messages', () => {
const log = jest.spyOn(npmlog, 'log');
error('error', 'details');
expect(log.mock.calls).to.deep.equal([['tractor-error', '', 'error', 'details']]);
log.mockRestore();
});
});
describe('@tractor/logger: info:', () => {
it('should pass through to npmlog', () => {
const log = jest.spyOn(npmlog, 'log');
info('info');
expect(log.mock.calls).to.deep.equal([['tractor-info', '', 'info']]);
log.mockRestore();
});
});
describe('@tractor/logger: warn:', () => {
it('should pass through to npmlog', () => {
const log = jest.spyOn(npmlog, 'log');
warn('warn');
expect(log.mock.calls).to.deep.equal([['tractor-warn', '', 'warn']]);
log.mockRestore();
});
});
});
|
Current chemotherapy offers many options for treating lung cancer, and recent studies have shown that perioperative chemotherapy may improve survival. In this study, we compared 2 groups with locally advanced lung cancers (stage III, T3N0M0, inclusive of ipsilateral PM2, D1 and D2): group A, treated with chemotherapy for downstaging prior to surgery (n = 23), and group B, treated by surgery alone (n = 48). The postoperative 3- and 5-year overall survival rates analyzed using the Kaplan-Meier method were 64.7 and 29.4% for group A and 32.5 and 10% for group B, respectively, a significant difference between the 2 groups. Furthermore, in patients with pN2, the 3-year survival rate was 60% for group A and 36.7% for group B. In view of the progress of chemotherapy, even if a locally advanced lung cancer suspected of invading the pulmonary artery, pulmonary vein or central bronchus is not classified as T4, the patient should undergo induction chemotherapy for downstaging followed by an operation for complete resection of the tumor with preservation of lung function. |
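The survival figures above come from the Kaplan-Meier product-limit estimator. As a hedged, illustrative sketch (the follow-up data below are invented, not the study's data), the estimator can be computed as:

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate.

    times  : follow-up time for each patient
    events : 1 if death observed at that time, 0 if censored
    Returns (time, S(t)) pairs at each observed death time.
    """
    pairs = sorted(zip(times, events))
    n_at_risk = len(pairs)
    s = 1.0
    curve = []
    i = 0
    while i < len(pairs):
        t = pairs[i][0]
        deaths = sum(e for tt, e in pairs if tt == t)
        n_with_t = sum(1 for tt, _ in pairs if tt == t)
        if deaths:
            s *= 1 - deaths / n_at_risk   # survival drops at each death time
            curve.append((t, s))
        n_at_risk -= n_with_t             # deaths and censored leave the risk set
        i += n_with_t
    return curve

# Hypothetical follow-up times (months); 1 = death, 0 = censored.
curve = kaplan_meier([6, 12, 12, 24, 30, 36], [1, 1, 0, 1, 0, 1])
```

In practice a library such as lifelines would be used, together with a log-rank test to compare the two groups.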
/**
* The MIT License (MIT)
* Copyright (c) 2012 <NAME>
* Permission is hereby granted, free of charge, to any person obtaining
* a copy of this software and associated documentation files (the
* "Software"), to deal in the Software without restriction, including
* without limitation the rights to use, copy, modify, merge, publish,
* distribute, sublicense, and/or sell copies of the Software, and to
* permit persons to whom the Software is furnished to do so, subject to
* the following conditions:
*
* The above copyright notice and this permission notice shall be included
* in all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
* OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
* OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
* WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF
* OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
package us.nineworlds.serenity.core.model.impl;
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;
import us.nineworlds.plex.rest.model.impl.Directory;
import us.nineworlds.plex.rest.model.impl.Genre;
import us.nineworlds.plex.rest.model.impl.MediaContainer;
import us.nineworlds.serenity.core.model.SeriesContentInfo;
/**
* @author dcarver
*
*/
public class SeriesMediaContainer extends AbstractMediaContainer {
protected List<SeriesContentInfo> videoList;
/**
* @param mc
*/
public SeriesMediaContainer(MediaContainer mc) {
super(mc);
}
public List<SeriesContentInfo> createSeries() {
videoList = new LinkedList<SeriesContentInfo>();
createSeriesInfo();
return videoList;
}
protected void createSeriesInfo() {
String baseUrl = factory.baseURL();
if (mc != null && mc.getSize() > 0) {
String mediaTagId = Long.valueOf(mc.getMediaTagVersion()).toString();
List<Directory> shows = mc.getDirectories();
if (shows != null) {
for (Directory show : shows) {
TVShowSeriesInfo mpi = new TVShowSeriesInfo();
mpi.setId(show.getRatingKey());
mpi.setMediaTagIdentifier(mediaTagId);
if (show.getSummary() != null) {
mpi.setSummary(show.getSummary());
}
mpi.setStudio(show.getStudio());
if (show.getRating() != null) {
mpi.setRating(Double.parseDouble(show.getRating()));
} else {
mpi.setRating(0d);
}
String burl = factory.baseURL()
+ ":/resources/show-fanart.jpg";
if (show.getArt() != null) {
burl = baseUrl + show.getArt().replaceFirst("/", "");
}
mpi.setBackgroundURL(burl);
String turl = "";
if (show.getBanner() != null) {
turl = baseUrl + show.getBanner().replaceFirst("/", "");
}
mpi.setImageURL(turl);
String thumbURL = "";
if (show.getThumb() != null) {
thumbURL = baseUrl
+ show.getThumb().replaceFirst("/", "");
}
mpi.setThumbNailURL(thumbURL);
mpi.setTitle(show.getTitle());
mpi.setContentRating(show.getContentRating());
List<String> genres = processGeneres(show);
mpi.setGeneres(genres);
int totalEpisodes = 0;
int viewedEpisodes = 0;
if (show.getLeafCount() != null) {
totalEpisodes = Integer.parseInt(show.getLeafCount());
}
if (show.getViewedLeafCount() != null) {
viewedEpisodes = Integer.parseInt(show.getViewedLeafCount());
}
int unwatched = totalEpisodes - viewedEpisodes;
mpi.setShowsUnwatched(Integer.toString(unwatched));
mpi.setShowsWatched(Integer.toString(viewedEpisodes));
mpi.setKey(show.getKey());
videoList.add(mpi);
}
}
}
}
protected List<String> processGeneres(Directory show) {
ArrayList<String> genres = new ArrayList<String>();
if (show.getGenres() != null) {
for (Genre genre : show.getGenres()) {
genres.add(genre.getTag());
}
}
return genres;
}
}
|
/**
* An email message from a 'MAILTO:' or similar QRCode type.
*/
public static class Email extends AutoSafeParcelable {
/**
* Email type.
*/
public static final int UNKNOWN = 0;
public static final int WORK = 1;
public static final int HOME = 2;
@Field(1)
private int versionCode = 1;
@Field(2)
public int type;
@Field(3)
public String address;
@Field(4)
public String subject;
@Field(5)
public String body;
public static Creator<Email> CREATOR = new AutoCreator<>(Email.class);
} |
/**
 * Updates an activity by id.
 * @param activityId the id of the activity to update
 * @param activity the new activity info
 * @param email the email of the user executing the request
 * @return a DTO of the updated activity
 */
@Transactional
@Override
public ActivityDto updateActivity(UUID activityId, ActivityDto activity, String email) {
Activity updateActivity = this.activityRepository.findById(activityId)
.orElseThrow(ActivityNotFoundException::new);
activityRepository.flush();
updateActivity.setTitle(activity.getTitle());
updateActivity.setDescription(activity.getDescription());
updateActivity.setStartDate(activity.getStartDate());
updateActivity.setEndDate(activity.getEndDate());
updateActivity.setSignupStart(activity.getSignupStart());
updateActivity.setSignupEnd(activity.getSignupEnd());
updateActivity.setCapacity(activity.getCapacity());
if (activity.getLevel() != null) updateActivity.setTrainingLevel(getTrainingLevel(activity.getLevel()));
if (activity.getGeoLocation() != null)
setGeoLocation(activity, updateActivity);
if (activity.getEquipment() != null)
setEquipment(activity, updateActivity);
if (!updateActivity.isClosed() && activity.isClosed()) {
closeActivity(activity);
}
updateActivity.setClosed(activity.isClosed());
if (!updateActivity.isInviteOnly() && activity.isInviteOnly()) {
inviteService.inviteBatch(activityId,
registrationService.getRegistratedUsersInActivity(activityId));
}
updateActivity.setInviteOnly(activity.isInviteOnly());
if (activity.getImages() != null){
updateActivity.setImages(activityImageService.updateActivityImage(activity.getImages(), updateActivity));
}
ActivityDto activityDto = modelMapper.map(this.activityRepository.save(updateActivity), ActivityDto.class);
activityDto.setHasLiked(activityLikeService.hasLiked(email, activityDto.getId()));
return addRegisteredAmount(activityDto);
} |
Development of electronic circuits that can operate in high radiation environments such as nuclear power plants Robots can play a vital role in disaster rescue, relief and recovery. These versatile machines can perform tasks humans cannot and can enter into dangerous environments otherwise inaccessible to human workers. However, they are not invincible and there are still some situations where robots and the electronic components they're made of fail. By studying the limits of electronics in these extreme scenarios though, better components can be developed and deployed on robots; keeping these invaluable workers online in the most dangerous and urgent of situations. The ongoing decommissioning work at the devastated Fukushima Daiichi Nuclear Power Plant is an example of this. The earthquake that destroyed the plant happened nearly 11 years ago, yet the clean-up at Fukushima is still ongoing. Long term efforts are needed in order to resolve the issues such as decontamination of the scattered radioactive materials, contaminated water, and decommissioning of the nuclear reactor itself. Associate Professor Kenichiro Takakura, from the Electronics Materials and Devices Research Group at the National Institute of Technology (KOSEN), Kumamoto College, is carrying out research to develop electronic equipment that can be used in a nuclear reactor to assist with the early completion of decommissioning. |
Optical Control of Cardiac Function with a Photoswitchable Muscarinic Agonist Light-triggered reversible modulation of physiological functions offers the promise of enabling on-demand spatiotemporally controlled therapeutic interventions. Optogenetics has been successfully implemented in the heart, but significant barriers to its use in the clinic remain, such as the need for genetic transfection. Herein, we present a method to modulate cardiac function with light through a photoswitchable compound and without genetic manipulation. The molecule, named PAI, was designed by introduction of a photoswitch into the molecular structure of an M2 mAChR agonist. In vitro assays revealed that PAI enables light-dependent activation of M2 mAChRs. To validate the method, we show that PAI photoisomers display different cardiac effects in a mammalian animal model, and demonstrate reversible, real-time photocontrol of cardiac function in translucent wildtype tadpoles. PAI can also effectively activate M2 receptors using two-photon excitation with near-infrared light, which overcomes the scattering and low penetration of short-wavelength illumination, and offers new opportunities for intravital imaging and control of cardiac function.
Keywords: cardiac photoregulation, cholinergic, muscarinic, photopharmacology, dualsteric agonist, two-photon
Introduction
Remote spatiotemporal control of physiological processes may provide novel treatment opportunities. Cardiopathies are paradigmatic in this regard because of the rapid time course and complex integration of electrophysiological and molecular events in very specific areas of the heart. For instance, most cardiac rhythm control strategies rely on antiarrhythmic drugs (AADs) targeting ionic currents, whose effects cannot be regulated spatiotemporally. As a result, AADs often give rise to intolerable side effects, including ventricular pro-arrhythmogenicity, and are only partially effective. 
Overcoming the high failure and complication rates of current therapeutic strategies to treat these diseases will require both patient-personalized determination of the specific physiopathological mechanism and qualitative pharmacological breakthroughs. 1 The application of light and optical techniques in medicine has had a profound impact over the last several decades, in diagnostics, surgery and therapy. 2 In particular, photoexcitation of intrinsic molecules or exogenous light-sensitive agents introduced in the body can affect the tissues and cells within it in various ways, via the generation of heat (photothermal), chemical reactions (photochemical), and biological processes (photobiological/photopharmacological or optogenetic). 2 The potential of light as a therapeutic tool with high spatiotemporal resolution has been recently investigated in the cardiovascular field, particularly for arrhythmias, through optogenetics. However, the application of such genetic techniques to human subjects for therapeutic purposes is still hampered by safety, regulatory and economic hurdles. Unlike optogenetics, photopharmacology relies on the use of exogenous light-regulated small molecules that can photocontrol native targets and that could be tested and approved using standard drug development procedures. These molecules can be used in combination with devices that deliver light to specific locations in the body 7, in order to remotely control drug dosing and duration of action. Since the activity of drugs is structure-dependent, reversible photoresponsive drugs are obtained by the rational introduction of a molecular photoswitch into the structure of a bioactive compound. 8,14 Cardiac function is controlled by the autonomic sympathetic and parasympathetic nervous systems, which act via adrenoceptors and muscarinic acetylcholine receptors (mAChRs), respectively. 
19 In particular, stimulation of β1 and β2 adrenergic receptors increases the heart rate (positive chronotropy) and contractility (positive inotropy), whilst stimulation of M2 mAChRs decreases heart rate and prolongs the atrioventricular conduction time. 20,21 Thus, adrenergic and muscarinic receptors constitute suitable target candidates to control cardiac function with light. Muscarinic acetylcholine receptors (mAChRs) belong to class A G protein-coupled receptors (GPCRs) and are divided into five different subtypes (M1-M5). 22 The M2 receptor is extensively expressed in the heart. All five mAChRs are characterized by a high sequence homology in the orthosteric site located in the transmembrane region. This fact limits the development of subtype-selective orthosteric agonists. On the other hand, the allosteric site located in the extracellular loop is less conserved, thus muscarinic allosteric agents are commonly endowed with a more pronounced subtype-selectivity. 23 A chemical strategy commonly applied to overcome such limitation is the incorporation, within the same molecular structure, of two distinct pharmacophore elements belonging to (a) high-affinity orthosteric agonists and (b) highly selective allosteric ligands. 24 Such dualsteric ligands combine (a) an Iperoxo-like orthosteric agonist moiety and (b) M2-selective allosteric fragments derived from W84 and Naphmethonium (Fig. 1). The incorporation of a photoisomerizable unit into the structure of a dualsteric agonist should enable controlling with light the mutual position of the orthosteric and the allosteric moieties, presumably leading to differences between the two isomers in receptor affinity and efficacy. 
We chose an azobenzene core as photoresponsive component because of the favourable characteristics that azobenzene-based photoswitches normally display for biological purposes in comparison to other photoswitches, such as design flexibility, large changes in geometry upon isomerization, high photostationary states and fatigue resistance, fast photoisomerization rates, and chemical stability, among others. 33 Moreover, the use of arylazo compounds has been proven safe in humans for some approved drugs and food colorants. 33 PAI and NAI were prepared via two subsequent Menshutkin reactions between the azobenzene linker and the corresponding allo-and orthosteric intermediates (5 and 6, 9) (Scheme 1). Compound 3 was synthesized via the typical Mills reaction and successively brominated photochemically to afford the desired linker 4. Notably, this photochemical reaction exempted us from using a radical initiator 34 and gave an excellent yield (96%), proving for the first time that light-induced benzylic halogenations can be conveniently used also for the preparation of such versatile photoswitchable linkers. Compounds 5, 6 and 9 were prepared as previously reported from commercially available starting materials (Scheme 1 and SI). 31 As a prerequisite for a reversible light-dependent control of their biological activity, PAI and NAI need to effectively behave as reversible photoswitches, which means that the photoisomerization should be relatively fast and quantitatively significant in both directions. UV/Vis spectroscopy experiments showed that PAI and NAI have the typical absorption bands of conventional azobenzenes. PAI can be isomerized to the cis form (about 73% conversion) by applying 365 nm light, while it thermally relaxes back to the trans form in several hours at room temperature. It can be also effectively back-isomerized to the trans form by applying white or blue (460 nm) light (83% trans) ( Fig. 2 and SI, Fig. S2). 
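The photostationary-state percentages quoted above (about 73% cis under 365 nm, 83% trans under blue light) and the slow thermal relaxation can be illustrated with a simple first-order kinetic model. The rate constants below are invented for illustration only and are not measurements from this work:

```python
import math

def photostationary_cis_fraction(k_tc, k_ct):
    """Steady-state cis fraction when trans->cis (k_tc) and cis->trans (k_ct)
    photoisomerization rates compete under continuous illumination."""
    return k_tc / (k_tc + k_ct)

def thermal_relaxation(cis0, k_thermal, t):
    """First-order thermal cis->trans decay: cis(t) = cis0 * exp(-k*t)."""
    return cis0 * math.exp(-k_thermal * t)

# Hypothetical effective rates chosen so the PSS matches ~73% cis at 365 nm.
pss = photostationary_cis_fraction(k_tc=0.73, k_ct=0.27)

# Hypothetical thermal rate constant (1/h) giving a half-life of several
# hours: after 2 h in the dark most of the cis population is still present.
remaining = thermal_relaxation(cis0=pss, k_thermal=0.1, t=2.0)
```

Measured photostationary states additionally depend on the isomers' extinction coefficients at the irradiation wavelength and on the quantum yields, which this toy model folds into the two effective rate constants.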
Surprisingly, NAI proved refractory to photoisomerization (only 23% cis after 10 min at 365 nm, SI, Fig. S2.1CD), which shows that rational design of azobenzene-containing ligands does not always afford the expected results. We hypothesized that the absorption and emission properties of the naphthalimide moiety 35 could interfere with its photochromism. Given the unsatisfactory photochromic behaviour of NAI, we selected only PAI for further studies.
PAI allows reversible photo-activation of M2 mAChRs in calcium imaging experiments and molecular docking simulations.
The photopharmacological properties of PAI were first assessed in vitro with real-time calcium imaging assays in transiently transfected HEK cells under 1P-illumination (Fig. 3 and SI). We also tested the non-photoresponsive muscarinic agonist Iperoxo (IPX) 36 as a control (Fig. 3a). The application of trans-PAI (dark-adapted state) induced cytosolic calcium oscillations indicative of M2 agonism, which were reduced by converting PAI to its cis form upon illumination with UV light (365 nm) (Fig. 3bc). Calcium oscillations could be restored after back-isomerizing PAI to the trans configuration using blue light (460 nm). The time course of calcium responses during activation with trans-PAI displayed a diversity of behaviours in individual cells (Fig. 3c), including oscillatory waves, transient peaks, and step responses as previously observed with PLC-activating GPCRs. 12 Quantification of photoresponses (F/F0) to PAI application and 365 nm illumination shows a reduction in the calcium signal induced by UV light pulses (Fig. 3d). Intriguingly, PAI activated M2 mAChRs in the range of picomolar concentrations, similarly to the super-agonist Iperoxo. 
36 Thus, we demonstrated that PAI can effectively activate M2 mAChRs in vitro in its dark-adapted (trans) form and that its activity can be reversibly switched off and on with light. In order to account for the observed photoswitchable activity of PAI in M2 mAChR, we looked for putative differences on the receptor level regarding binding efficacy of cis- and trans-PAI using molecular docking simulations (see SI for details). PAI isomers were docked into their theoretical binding site at the human M2 mAChR (PDB 4MQT). Our results suggested that trans-PAI can bind to the M2 mAChR in a typical dualsteric pose compatible with receptor activation (SI, Fig. S5.1a). 37 In contrast, a flipped orientation is favoured in the case of the cis-isomer (SI, Fig. S5.1b). This binding pose is likely incompatible with receptor activation and provides a possible explanation for the light-dependent efficacy of PAI.
Trans-PAI is more effective than cis-PAI at inducing bradycardia and PR lengthening in rats.
Once established that PAI allows light-dependent reversible activation of M2 mAChRs, we aimed at testing it as an agent to photocontrol cardiac function in vivo. We initially used Wistar rats for our experiments. The intraperitoneal administration of PAI induced progressive bradycardia and PR lengthening in a dose-dependent manner in both configurations (Fig. 4 and SI, Fig. S6.1). These effects were accompanied with variable degrees of systemic parasympathetic effects, such as salivation, urination and defecation. At low doses (≤ 3 µM/kg), both isomers yielded a similar small effect, but remarkably differed at intermediate and high doses. At 10 µM/kg PAI and higher doses, the PR interval was significantly more prolonged in trans; at 30 µM and higher doses, heart rate was also lower in trans. The effects of PAI could not be photoswitched either with blue or with UV light, showing that the ability of light to penetrate murine cardiac tissue at those wavelengths is likely not sufficient to reach the M2 mAChR location. Only the administration of atropine (2 mg) completely reverted bradycardia, PR lengthening and systemic parasympathetic effects in both groups (SI, Fig. S6.1). These results demonstrated an enhanced parasympathetic activity for the trans-isomer, and confirmed in mammals the previous findings observed in cells.
Fig. 4. In vivo effect of trans- and cis-PAI on the cardiac activity of rats. The activity of dark-relaxed (trans, grey plots) and UV-illuminated PAI (cis, purple plots) administered intraperitoneally in anesthetized rats was tested by means of electrocardiography. The heart rate (left panel) and PR interval (right panel) are plotted as a function of increasing doses of both isomers, which induced progressive bradycardia and PR lengthening in a dose-dependent manner. Significant differences between the dark-relaxed and UV-treated PAI were found in the heart rate and PR interval at the higher doses (*p < 0.05; ***p < 0.001 trans vs cis), in agreement with the higher agonist activity of the trans form observed in vitro and in tadpoles. Three rats in the trans-group died because of extreme bradycardia after the 100 µM/kg dose. The effects of PAI were reversible only upon administration of the muscarinic antagonist Atropine.
PAI enables reversible photocontrol of cardiac activity in Xenopus tropicalis tadpoles.
As an alternative to demonstrate reversible control of cardiac function in vivo, we turned to an animal model in which light scattering is known to be low, thus allowing better light penetration. We selected Xenopus tropicalis tadpoles for this purpose since they are translucent and are recognized as an excellent model for studying the human cardiovascular system (Fig. 5). 
38,39 Moreover, we had already successfully used video light microscopy to acquire real-time images of the developing beating heart by digitizing the expanding and contracting blood pool in early translucent hearts (Fig. 5a). 38,40 In the absence of PAI, the cardiac rate of tadpoles remained nearly constant at 2.3 ± 0.1 beats·s−1 during control illumination with UV light and at 2.10 ± 0.01 beats·s−1 in the dark. The variability score (V.S.) was 7.97 ± 0.07. Upon administration of 10 µM trans-PAI, heart rate decreased dramatically (0.41 ± 0.02 beats·s−1 in the trace of Fig. 5e) leading in some cases to cardiac arrest (Fig. 5c). Heart beating recovered progressively upon UV illumination (cis-isomerisation, 1.28 ± 0.02 beats·s−1, Fig. 5f), and was not altered in the dark since thermal relaxation is slow. Some animals displayed less stable cardiac rate during UV periods compared to controls (V.S. of 17.4 ± 0.2 and 8.30 ± 0.02, respectively; SI, Fig. S7.3). Subsequent illumination of the animals with visible light (cis-to-trans isomerization) again reduced cardiac rate and eventually interrupted heart beating (V.S. of 374 ± 47.7, SI, Fig. S7.3). Cardiac activity was restored by later exposure to UV light, and further UV/visible light cycles confirmed the reversibility of the pharmacological effects (see example Supporting Movie S1). Overall, these experiments demonstrated that PAI allows remote and reversible control of heart rate with light in living animals.
Fig. 5 caption (fragment; see also SI, Fig. S6.2). Adding 10 µM trans-PAI under dim light reduces the heart rate in animals 2, 3 and 4. UV illumination isomerizes PAI to the cis form and the heart rate is partially recovered. Dim red light does not isomerize PAI (SI Appendix, Fig. S6.1) and heart rate is relatively stable. White light converts PAI to the trans isomer, causing cardiac arrest in all 4 animals. UV light restores heartbeat in all animals, some displaying an unstable rate. Several white/UV light cycles were repeated in some animals, showing similar effects. e) Quantification of heart rate during the last minute of each period (beats·s−1, n = 4 tadpoles) in control conditions, under white light (trans-PAI) and under UV light (cis-PAI). f) Two-way repeated-measures ANOVA was performed with uncorrected Fisher's LSD test; significance values were established with a p-value = 0.05. Error bars represent standard error of the mean (SEM). The heart rate was significantly higher under UV illumination compared to visible light (p-value < 0.05). Both isomers produced a significant reduction of heart rate in comparison to controls (p-value < 0.001).
Activation of PAI with near infrared (NIR) light.
In order to overcome the scattering and low penetration of violet and visible illumination, we tested whether PAI could be used to activate M2 mAChRs at longer wavelengths. In fact, a critical aspect that must be addressed to unleash the full potential of light-regulated drugs and favour their translation into the clinic is their responsiveness to red or NIR radiation, 44,41 which enables higher penetration through tissue, abolishes photodamage and, in the case of 2P excitation, allows three-dimensional subcellular resolution. PAI has an excellent thermal stability in both configurations (Fig. S2.3) and is photochemically suited for cis-to-trans photoisomerization with NIR light under 2P-excitation, which encouraged us to test its effects in living cells in real-time calcium imaging assays using a confocal microscope equipped with a pulsed laser (Fig. 6). PAI was initially applied in its cis (off) state, which as expected did not produce cytosolic calcium oscillations. Subsequent illumination at 840 nm induced robust calcium responses, as previously observed in calcium imaging experiments for cis-to-trans photoisomerization under 1P-excitation (Fig. 6a). These results are quantified in Fig. 6c. 
The responses (F/F0) obtained for cis-PAI (1P pre-irradiation at 365 nm) are comparable to controls, and 2P-induced isomerization to trans-PAI achieves calcium responses nearly as high as perfusion of iperoxo. It is worth noting that even under NIR excitation PAI maintains an outstanding potency (picomolar) to activate M2 mAChRs, which is rarely observed in photoswitches. 11,44,47,48 Interestingly, 2P microscopy is extensively used for intravital imaging including cardiovascular imaging at subcellular resolution. 49,50 Thus, PAI has a bright future to control cardiac function with light.
Fig. 6 caption (fragment): Error bars are ± SEM. Data were analyzed by using one-way ANOVA with Sidak post hoc test for multiple comparisons for statistical significance (p-value (****) < 0.0001; GraphPad Prism 6).
Conclusion
The rapid and reversible control of cardiac activity is of particular interest in medicine, including the spatiotemporal manipulation of close anatomic structures bearing different electrophysiological functions in the heart. Light-activated cardiac drugs could be selectively enhanced in certain regions of the heart (e.g., preventing undesired pro-arrhythmogenic ventricular effects when atria are targeted), or at certain times (on-demand, i.e., active only during atrial fibrillation or bradycardia). For that purpose, cardiac patches with integrated electronics and electric stimulation 46 could be further equipped with optoelectronic devices for photostimulation. Drug-based cardiac photoregulation techniques offer potential advantages compared to electric stimulation of cardiac muscle, which produces inhomogeneous areas of de- and hyperpolarization, causes faradaic reactions that alter pH, and produces toxic gases (H2, O2, Cl2), all of which would be prevented by light stimulation. To this end, we have developed the first photoswitchable compound that enables control of cardiac activity with light in wildtype animals without genetic manipulation. 
To the best of our knowledge, PAI is also the first photoswitchable M2 mAChR agonist to be reported. Despite the changes introduced in the ligand structure in order to photoregulate its activity, PAI retains the high potency of its parent compounds Iperoxo and P-8-Iper. 31,36 PAI activates M2 receptors in its trans configuration and can be reversibly photoswitched with different wavelengths including NIR light under 2P excitation. Future experiments will aim to demonstrate that PAI enables precise spatiotemporal control of cardiac function in mammals in combination with 2P cardiovascular imaging.
Supporting Information
The Supporting Information is available free of charge on the ACS Publications website. Detailed materials and methods, synthetic procedures, chemical analyses, and any additional data and figures as noted in the text (PDF).
Conflicts of interest
There are no conflicts to declare.
EG designed and performed in vivo experiments in rats. PG conceived the project and designed experiments. FR, CM, and PG wrote the paper with contributions from all authors. All authors have given approval to the final version of the manuscript. Fabio Riefolo and Carlo Matera contributed equally.
Funding Sources
This project has received funding from the EU Horizon 2020 Framework Programme for Research and Innovation under the Specific Grant Agreement 2 (Human Brain Project WaveScalES SGA2 No. 785907), AGAUR/Generalitat de Catalunya (CERCA Programme, 2017-SGR-1442 and 2017-SGR-1548), FEDER funds, ERANET SynBio MODULIGHTOR, Fundaluce foundation, Ramón Areces foundation, MINECO (FPI fellowship BES-2014-068169 and project CTQ2016-80066R), and CATCH-ME (grant agreement no. 633196). CM was supported by the Ermenegildo Zegna Founder's Scholarship. 
LA was supported by the graduate program "Receptor Dynamics - Emerging paradigms for novel drugs" funded by the Elite Network of Bavaria.
ACKNOWLEDGMENTS
The authors are grateful to Jean-Philippe Pin for providing the chimeric Gi/Gq protein clone and to Núria Camarero for helping during preliminary in vitro experiments. CM and FR are grateful to Prof. Marco De Amici for helpful discussion and continuous support. Molecular graphics and analyses were performed with the UCSF Chimera package. Chimera is developed by the Resource for Biocomputing, Visualization, and Informatics at the University of California, San Francisco (supported by NIGMS P41-GM103311). Mass spectrometry was performed at the IRB Barcelona Mass Spectrometry Core Facility, which actively participates in the BMBS European COST Action BM 1403 and is a member of Proteored, PRB2-ISCIII, supported by grant PRB2 (IPT13/0001 - ISCIII-SGEFI / FEDER). |
/// Convenience function allowing an `Automaton` to be created for this `Family` type. Note that this is shorthand
/// for `Automaton::new()`, and therefore `Self::Mode` *must* implement `Default`. See
/// [`Automaton::new()`](struct.Automaton.html#method.new) for more details.
///
/// # Usage
/// ```
/// use mode::*;
/// #
/// # struct SomeFamily;
/// # impl Family for SomeFamily {
/// # type Base = ModeWithDefault;
/// # type Mode = ModeWithDefault;
/// # }
///
/// struct ModeWithDefault { count: u32 }
///
/// impl Mode for ModeWithDefault {
/// type Family = SomeFamily;
/// }
///
/// impl Default for ModeWithDefault {
/// fn default() -> Self {
/// ModeWithDefault { count: 0 }
/// }
/// }
///
/// // Create an Automaton with a default Mode.
/// let mut automaton = SomeFamily::automaton();
/// ```
///
fn automaton() -> Automaton<Self>
where Self::Mode: Default
{
Automaton::new()
} |
Cellular actions of arginine vasopressin in the kidney. Arginine vasopressin (AVP) is a neurohypophyseal peptide hormone that exerts important effects on the kidney. The primary renal effect of AVP is its hydroosmotic action, whereby the hormone decreases water excretion by increasing the water permeability of renal collecting tubules, mediated via adenosine 3',5'-monophosphate (cAMP). We also found that cellular free calcium ([Ca2+]i) is mobilized by AVP in renal collecting tubule cells. In the last couple of years there has been significant progress in AVP research. Important findings on the cloning and expression of AVP receptors and water channels have been reported in the literature. The signal transduction of AVP in glomerular mesangial cells has also been clarified. AVP regulates cell contraction and cell growth in the mesangium. Recent in vivo and in vitro studies have provided information on the mechanisms of the renal effects of AVP, and the complexity of these effects will now be discussed. |
<filename>tests/test_conversation/test_step_1_subject.py
from constants import EXPIRATION_DATE, SUBJECT
def test_start_creating_entry_handler(bot_app, update, context):
return_value = bot_app.call("start_creating_entry", update, context)
assert "Enter your subject or event below" in update.message.reply_text.call_args[0][0]
# We store 'user_id' in 'user_data' for the future reference
assert context.user_data["user_id"] == update.effective_user.id
# Correct step is returned for correct conversation flow
assert return_value == SUBJECT
def test_add_new_entry_handler(bot_app, update, context):
user_input_text = "Buy new socks"
update.message.text = user_input_text # emulate user input of subject
return_value = bot_app.call("add_new_entry", update, context)
assert user_input_text in update.message.reply_text.call_args[0][0]
# We store 'entry' in 'user_data' for the future reference
assert context.user_data["entry"] == user_input_text
# Correct step is returned for correct conversation flow
assert return_value == EXPIRATION_DATE
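The tests above rely on `bot_app`, `update`, and `context` fixtures that are not shown in this file. A minimal sketch of what they might look like, inferred purely from how the tests use them (the real project's `conftest.py` may differ):

```python
# Hypothetical fixture factories for the tests above. The stub shapes
# (reply_text as a MagicMock, user_data as a real dict) are assumptions
# inferred from the assertions in this test file.
from unittest.mock import MagicMock


def make_update(user_id=12345):
    """Stub of a telegram Update: reply_text is a MagicMock so tests can
    inspect call_args, and effective_user.id is a concrete value."""
    update = MagicMock()
    update.effective_user.id = user_id
    return update


def make_context():
    """Stub of a CallbackContext with a real dict for user_data, so
    values stored by handlers can be asserted on."""
    context = MagicMock()
    context.user_data = {}
    return context
```

In a real `conftest.py` these factories would be wrapped with `@pytest.fixture` so pytest injects them into the tests by parameter name.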
|
PT-Sync: COTS Speaker-based Pseudo Time Synchronization for Acoustic Indoor Positioning Positioning with a small number of anchors is an important issue in the study of acoustic indoor positioning systems (AIPS). We present PT-Sync, a novel approach that time-synchronizes a commercial off-the-shelf (COTS) speaker and a mobile device by leveraging acoustic sensing and signals reflected from the floor. PT-Sync enables ranging with one speaker and 2-D positioning with two speakers without any additional hardware. Our proposed time synchronization method requires the height above the floor in its calculation, which we estimate by active acoustic sensing; a unique averaging technique combined with the IMU makes this height estimation robust. PT-Sync can be used in a variety of indoor environments and can time-synchronize with already-installed speakers. Evaluation experiments confirmed a synchronization error of 0.16 ms even at a distance of 6 m from the speaker, i.e., synchronization on the order of microseconds. Furthermore, pedestrian tracking experiments confirmed a positioning error of less than 38.9 cm at the 90th percentile. |
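As a rough sanity check of the numbers reported above (assuming a speed of sound of about 343 m/s in air, a figure not stated in the abstract itself), a 0.16 ms synchronization error translates into only a few centimetres of ranging error:

```python
# Back-of-envelope calculation: the ranging error implied by a given
# clock-synchronization error, assuming sound travels at ~343 m/s in air.
SPEED_OF_SOUND = 343.0  # m/s, assumed (room-temperature air)


def ranging_error_cm(sync_error_s):
    """Distance error in cm caused by a time-sync error given in seconds."""
    return SPEED_OF_SOUND * sync_error_s * 100.0


# The reported 0.16 ms sync error corresponds to roughly 5.5 cm of ranging error,
# consistent with the sub-40 cm tracking accuracy reported at the 90th percentile.
print(round(ranging_error_cm(0.16e-3), 2))
```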
<reponame>JasonXu314/charitable-chads-bot<filename>src/db.service.ts
import { Injectable, Logger } from '@nestjs/common';
import { InsertOneResult, ModifyResult, MongoClient } from 'mongodb';
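// NOTE: `Person` and `Workout` are referenced throughout this file but never
// imported, so they are presumably ambient declarations defined elsewhere in
// the project (e.g. a global types.d.ts). The shapes below are a hypothetical
// sketch inferred from this file's usage, not the project's actual definitions:
//   interface Workout { distance?: number; time?: number; quantity?: number; }
//   interface Person { id: string; name: string; token?: string; workouts: Workout[]; }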
@Injectable()
export class DBService {
private client: MongoClient | null = null;
private clientPromise: Promise<MongoClient> | null = null;
private readonly logger: Logger;
constructor() {
this.logger = new Logger('DB');
this.clientPromise = this.connect();
}
public async addUser(user: Person): Promise<InsertOneResult<Person>> {
const client = await this.connect();
const users = client.db('main').collection<Person>('exercises');
const result = await users.insertOne(user).catch((err) => {
console.error(err);
});
return result || this.addUser(user);
}
public async renameUser(id: string, name: string): Promise<ModifyResult<Person>> {
const client = await this.connect();
const users = client.db('main').collection<Person>('exercises');
const user = await this.getUser(id);
if (!user) {
throw new Error('User does not exist');
}
user.name = name;
const result = await users.findOneAndReplace({ id }, user).catch((err) => {
console.error(err);
});
return result || this.renameUser(id, name);
}
public async getUser(id: string): Promise<Person | null> {
const client = await this.connect();
const users = client.db('main').collection<Person>('exercises');
const result = await users.findOne({ id }).catch((err) => {
console.error(err);
});
		// findOne resolves to null when no document matches; undefined here means
		// the query itself failed (caught above), so fall through and retry.
		if (result === null) {
			return result;
		}
return result || this.getUser(id);
}
public async addWorkout(id: string, workout: Workout): Promise<ModifyResult<Person>> {
const client = await this.connect();
const users = client.db('main').collection<Person>('exercises');
const user = await this.getUser(id);
if (!user) {
throw new Error('User does not exist');
}
user.workouts.push(workout);
const result = await users.findOneAndReplace({ id }, user).catch((err) => {
console.error(err);
});
return result || this.addWorkout(id, workout);
}
public async editWorkout(id: string, idx: number, quantity: number): Promise<ModifyResult<Person>> {
const client = await this.connect();
const users = client.db('main').collection<Person>('exercises');
const user = await this.getUser(id);
if (!user) {
throw new Error('User does not exist');
}
if (user.workouts.length <= idx) {
throw new Error('Workout does not exist');
}
const workout = user.workouts[idx];
if ('distance' in workout) {
workout.distance = quantity;
} else if ('time' in workout) {
workout.time = quantity;
} else if ('quantity' in workout) {
workout.quantity = quantity;
}
const result = await users.findOneAndReplace({ id }, user).catch((err) => {
console.error(err);
});
return result || this.editWorkout(id, idx, quantity);
}
public async deleteWorkout(id: string, idx: number): Promise<ModifyResult<Person>> {
const client = await this.connect();
const users = client.db('main').collection<Person>('exercises');
const user = await this.getUser(id);
if (!user) {
throw new Error('User does not exist');
}
if (user.workouts.length <= idx) {
throw new Error('Workout does not exist');
}
user.workouts.splice(idx, 1);
const result = await users.findOneAndReplace({ id }, user).catch((err) => {
console.error(err);
});
return result || this.deleteWorkout(id, idx);
}
public async setToken(id: string, token: string): Promise<ModifyResult<Person>> {
const client = await this.connect();
const users = client.db('main').collection<Person>('exercises');
const user = await this.getUser(id);
if (!user) {
throw new Error('User does not exist');
}
user.token = token;
const result = await users.findOneAndReplace({ id }, user).catch((err) => {
console.error(err);
});
return result || this.setToken(id, token);
}
public async getUsers(): Promise<Person[]> {
const client = await this.connect();
const users = client.db('main').collection<Person>('exercises');
const currUsers = await users
.find()
.toArray()
.catch((err) => {
console.error(err);
});
return currUsers || this.getUsers();
}
public async dbReset(): Promise<void> {
const client = await this.connect();
const users = client.db('main').collection<Person>('exercises');
const currUsers = await this.getUsers();
currUsers.forEach((user) => (user.workouts = []));
const res = await Promise.all(currUsers.map((user) => users.findOneAndReplace({ id: user.id }, user))).catch((err) => {
console.error(err);
});
		if (!res) {
			// Retry on failure; return the promise so the caller awaits the retry.
			return this.dbReset();
		}
}
private async connect(): Promise<MongoClient> {
if (this.client) {
return this.client;
} else if (this.clientPromise) {
return this.clientPromise;
} else {
const client = await MongoClient.connect(process.env.MONGODB_URL!).catch((err) => {
console.error(err);
});
if (client) {
this.logger.log('Successfully Connected');
this.client = client;
}
return client || this.connect();
}
}
}
|
import unittest
from src.austin_heller_repo.version_controlled_containerized_python_manager import VersionControlledContainerizedPythonManager, VersionControlledContainerizedPythonInstance, DockerContainerInstanceTimeoutException
from austin_heller_repo.git_manager import GitManager
import tempfile
import docker
import time
import json
class VersionControlledContainerizedPythonManagerTest(unittest.TestCase):
def setUp(self) -> None:
docker_client = docker.from_env()
image_names = [
"vccpm_testdockertimedelay",
"vccpm_testdockerspawnscript"
]
for image_name in image_names:
try:
docker_client.containers.get(f"{image_name}").kill()
except Exception as ex:
pass
try:
docker_client.containers.get(f"{image_name}").remove()
except Exception as ex:
pass
try:
docker_client.images.remove(
image=f"{image_name}"
)
except Exception as ex:
pass
docker_client.close()
def test_initialize(self):
temp_directory = tempfile.TemporaryDirectory()
git_manager = GitManager(
git_directory_path=temp_directory.name
)
vccpm = VersionControlledContainerizedPythonManager(
git_manager=git_manager
)
self.assertIsNotNone(vccpm)
temp_directory.cleanup()
def test_run_time_delay_script_timeout_failed(self):
temp_directory = tempfile.TemporaryDirectory()
git_manager = GitManager(
git_directory_path=temp_directory.name
)
vccpm = VersionControlledContainerizedPythonManager(
git_manager=git_manager
)
with vccpm.run_python_script(
git_repo_clone_url="https://github.com/AustinHellerRepo/TestDockerTimeDelay.git",
script_file_path="start.py",
script_arguments=[],
timeout_seconds=5,
is_docker_socket_needed=False
) as vccpi:
with self.assertRaises(DockerContainerInstanceTimeoutException):
vccpi.wait()
temp_directory.cleanup()
def test_run_time_delay_script(self):
temp_directory = tempfile.TemporaryDirectory()
git_manager = GitManager(
git_directory_path=temp_directory.name
)
vccpm = VersionControlledContainerizedPythonManager(
git_manager=git_manager
)
with vccpm.run_python_script(
git_repo_clone_url="https://github.com/AustinHellerRepo/TestDockerTimeDelay.git",
script_file_path="start.py",
script_arguments=[],
timeout_seconds=20,
is_docker_socket_needed=False
) as vccpmi:
vccpmi.wait()
output = vccpmi.get_output()
self.assertEqual(b'{ "data": [ ], "exception": null }\n', output)
temp_directory.cleanup()
def test_run_time_delay_script_after_delay(self):
temp_directory = tempfile.TemporaryDirectory()
git_manager = GitManager(
git_directory_path=temp_directory.name
)
vccpm = VersionControlledContainerizedPythonManager(
git_manager=git_manager
)
with vccpm.run_python_script(
git_repo_clone_url="https://github.com/AustinHellerRepo/TestDockerTimeDelay.git",
script_file_path="start.py",
script_arguments=[],
timeout_seconds=5,
is_docker_socket_needed=False
) as vccpmi:
time.sleep(15)
with self.assertRaises(DockerContainerInstanceTimeoutException):
vccpmi.wait()
temp_directory.cleanup()
def test_recursive_docker_spawn_script(self):
temp_directory = tempfile.TemporaryDirectory()
git_manager = GitManager(
git_directory_path=temp_directory.name
)
vccpm = VersionControlledContainerizedPythonManager(
git_manager=git_manager
)
git_url = "https://github.com/AustinHellerRepo/TestDockerTimeDelay.git"
script_file_path = "start.py"
with vccpm.run_python_script(
git_repo_clone_url="https://github.com/AustinHellerRepo/TestDockerSpawnScript.git",
script_file_path="/app/start.py",
script_arguments=["-g", git_url, "-s", script_file_path, "-t", "20"],
timeout_seconds=30,
is_docker_socket_needed=True
) as vccpi:
vccpi.wait()
output = vccpi.get_output()
temp_directory.cleanup()
print(f"output: {output}")
output_json = json.loads(output.decode())
self.assertEqual(0, len(output_json["data"][0]))
self.assertEqual(git_url, output_json["data"][1])
self.assertEqual(script_file_path, output_json["data"][2])
print(f"Execution time: {output_json['data'][4]}")
|
def max_num_moves(self):
    """Upper bound on the number of moves: one move per board cell."""
    return self.board_size[0] * self.board_size[1] |
Exact correspondence relationship for the expectation values of $r^{-k}$ for hydrogenlike states. An exact correspondence relationship between the classical and quantum-mechanical expectation values of $r^{-k}$ for arbitrary hydrogenic states with quantum numbers n and l is established. The quantum-mechanical result for arbitrary powers of k is explicitly expressed in terms of relatively simple orthogonal polynomials which are intimately related to the Legendre polynomials and which consist of only the minimum of n-l or k+1 terms. The correspondence between the sets of polynomials, and the complete formal analogy of the classical and quantum results, are found to originate in the fact that the correspondence limit of the Pasternack-Kramers recursion relation for the $\langle r^{-k}\rangle$ is the three-term recursion for the Legendre polynomials. |
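For reference, the Pasternack-Kramers recursion mentioned above is conventionally written, for hydrogenic bound states with quantum numbers $n$ and $l$ (a sketch in the standard Kramers form with Bohr radius $a_0$ and nuclear charge $Z=1$; the abstract itself does not display the relation):

$$\frac{s+1}{n^{2}}\,\langle r^{s}\rangle \;-\; (2s+1)\,a_{0}\,\langle r^{s-1}\rangle \;+\; \frac{s}{4}\left[(2l+1)^{2}-s^{2}\right]a_{0}^{2}\,\langle r^{s-2}\rangle \;=\; 0$$

Setting $s=-k$ connects successive negative-power expectation values $\langle r^{-k}\rangle$, which is the three-term structure whose correspondence limit the abstract identifies with the Legendre-polynomial recursion.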
/**
* Called to dump out a node in the search space
*
* @param out the output stream.
* @param state the state to dump
* @param level the level of the state
*/
@Override
protected void startDumpNode(PrintStream out, SearchState state, int level) {
        // Skip HMM states when requested; dump all other states.
        if (!(skipHMMs && state instanceof HMMSearchState)) {
String color = getColor(state);
String shape = "box";
out.println(" node: {" + "title: " + qs(getUniqueName(state))
+ " label: " + qs(state.toPrettyString()) + " color: "
+ color + " shape: " + shape + " vertical_order: " + level
+ '}');
}
} |
I first landed in England in September 2004. I took the underground from Heathrow and sat in the carriage with my luggage, face plastered to the window, as the train made its way through the late summer greenery of west London. Culture shock blended with a counterintuitive sense of ease and familiarity with a country – in fact, a whole hemisphere – that I had never visited. I had lived my entire life in Sudan, Egypt and Saudi Arabia, and had come to the UK to study for a postgraduate degree at the University of London. Over the next weeks, I found the city and its people both bewilderingly cool and enthusiastically welcoming. That duality would go on to be the central theme of my life in the UK – confusing impenetrability accompanied by a yielding accommodation.
I settled in quickly, squatting in a relative’s spare bedroom until I could make arrangements. But I had severely underestimated the expense of London and, already impoverished by the high overseas student tuition fees, I began working while I was studying, my student visa allowing for 20 hours a week. I temped in offices across London, using an A–Z to find my way around. My topography of London is still anchored in the locations of those anonymous office blocks across the city. At the end of my course I extended my student visa in order to finish my dissertation and meanwhile was offered a contract as a research assistant at an investment bank where I had been temping. I went into the interview with precisely £15 to my name. Had the position not paid by the end of the week, I would not have been able to get through the first month.
A few weeks into the job and with a little disposable income for the first time in my life, I rented a room on a Bethnal Green council estate. Standing on the balcony, looking out at east London, I remember thinking that it was a sort of Valhalla. After a year or so, in 2007, a combination of student visa extensions and a partner visa by virtue of a relationship I was in at the time meant that I was granted limited leave to remain (ie with no recourse to public funds). After five years, I would be eligible for permanent residency.
It is hard to describe what it feels like to confront the possibility of leaving a country in which you are settled. I had by then been living, working (in emerging markets private equity) and paying taxes in the UK for nine years and enjoyed all the natural extensions of that investment – a career, close friends, a deep attachment to the place, a whole life. It is almost as if the laws of nature change, like gravity disappears and all the things that root you to your existence lose their shape and float away. I remember thinking, “I can’t leave, I’ve just bought a sofa.” It was a ridiculous thought, but that secondhand sofa from the local flea market was the first item of furniture I had ever bought. Suddenly, it signified the folly of nesting in a country that had no intention of letting me make a home.
In January 2010 David Cameron, backed into a tough stance by the looming election, announced a “no ifs no buts” pledge to bring immigration down to the tens of thousands. Theresa May took the helm at the Home Office in May and immediately set about making as big a dent in the net migration number, then about 244,000, as possible. Despite the Liberal Democrats making an attempt to dilute some of the crueller aspects of immigration law, condemning the “Go home” billboard vans May sent through the streets of London and publicly challenging Cameron on the tens of thousands figure, immigration policies continued to harden. They culminated in the 2013 immigration bill that declared the country would become a “hostile environment” for illegal immigrants.
The resulting legislation represented a fundamental dismantling of the means by which all migrants could challenge Home Office decisions, despite around half of appeals ultimately being successful. By the time the 2015 immigration bill was introduced, the Conservatives, unfettered by coalition, introduced a host of measures that meant a hostile environment policy was surreptitiously rolled out against legal migrants as well.
Unable to tackle EU migration due to freedom of movement, the Home Office, while cutting its numbers of immigration case-workers, focused on non-EU migrants and their families, even when they were legal. “Discretion” – a word that sends chills down the spine of many a Home Office application veteran – became the governing principle. As with Nadir Farsani, a 27-year-old Saudi engineer who has lived in the UK most of his adult life and whose parents have British citizenship. He nevertheless had his student visa rejected by a case worker who decided a quirk of Arabic naming convention meant Nadir’s father’s supporting financial documents were not legitimate. Nadir was not informed nor asked to provide additional evidence and was asked to leave the country. While waiting for his application to be processed, his grandmother in Saudi Arabia fell ill and died. He could not travel to say goodbye.
Since 2010 I have experienced a constant attrition in the ranks of friends who did not have the means or the time to challenge often unfair decisions. Damned by discretion, rather than the law, they left.
The right to appeal decisions was curbed. The tier-1 visa, which had allowed for highly skilled migrants looking for a job or wishing to become self-employed, was abolished. Students’ right to work after graduation was limited and the Life in the UK test became a residency requirement. And British citizens began to be affected. In 2012 May announced rules that allowed only those British citizens earning more than £18,600 to bring their spouse to live with them in the UK. The figure is higher where visa applications are also made for children. She also made it all but impossible for people to bring their non-European elderly relatives to the UK. “Skype families” can spend years on opposite sides of the world, watching their children grow up on video.
Incentivised to reject, the Home Office grew ever more brutal and incompetent. Satbir Singh, CEO of the Joint Council for the Welfare of Immigrants, is one of many British citizens whose application for his spouse to join him in the UK was rejected. They had satisfied all the requirements, but the Home Office lost their documents. In one of JCWI’s cases, a British citizen on a zero hours contract had a nervous breakdown due to the long hours he had to work in order to satisfy the income requirement. He needed hospitalisation but refused – two weeks off would mean that his income would fall under the threshold.
The hostile environment also began to chew up those who had lived their entire lives in the UK. Commonwealth citizens who arrived in the country decades ago have discovered that in a hardened immigration climate they are without the necessary papers. So Paulette Wilson, a 61-year-old former cook in the House of Commons, was sent to Yarl’s Wood immigration removal centre last October and taken to Heathrow for deportation to Jamaica, a country she had not visited since she was 10. A last-minute legal intervention prevented her removal, and, following media coverage in the Guardian, she was granted a residency permit.
In most cases, the speed with which the Home Office capitulates when challenged is a clear giveaway that decisions were made in the hope they would not be appealed. In my case, I appealed my residency extension and prepared a case with a litigation lawyer – only for permanent residency to be granted days before my appearance in court. There was no explanation and we had not provided, yet, any new information. My joy was followed by a nausea of fury. I had bankrupted myself trying to pay the £30,000 legal fees and lived in a constant anguish of instability, paralysed and yet tensed for action, only for the decision to be overturned because it was wrong in the first place, and because the Home Office couldn’t be bothered to fight it.
Forty per cent of cases brought before a judge on appeal are overturned. Consider that this applies only to the small number of individuals who have the means to appeal, and the scale of the wider miscarriage of justice becomes apparent. At one point, the government was proposing that the rule of “deport first, appeal later” that currently applies only to foreign national criminals be applied across the board; thankfully, this was eventually overturned by the supreme court, which declared it unfair and unlawful.
The original sin, the motivation for so much of the inhumanity being visited on applicants, is the “tens of thousands” target, an unrealistic and arbitrary number, backed by no intelligence or research. But the heart of the dysfunction throughout the past eight years isn’t that immigration laws have tightened, it is that they have become unpredictable, as new rules are introduced or scrapped. There have been 45,000 changes to immigration rules since May took over at the Home Office. Both applicants and immigration officials are navigating the system using a map whose contours and geology shift constantly. Farsani compares the process to “climbing a crumbling staircase”.
At the same time, the public tone, led by the Tory populism on immigration, became sharper and the idea that the UK had a soft-touch immigration system grew stronger. By the time the Brexit referendum campaign was under way, the national perception of the country’s immigration rules was in fantasyland. It was surreal to watch when, at the same time, I was unable to secure a residency, let alone a passport.
And the ignorance culminated in Brexit. The mainstreaming of lies was complete. A points-based system? We already have one. It’s called a tier-2 visa and to avail yourself of it you have to have sponsorship and a job offer from a UK employer, as well as sufficient funds to sustain you until your first salary. An NHS surcharge? We already have one. Every non-EU citizen who takes up a job or student position in the UK pays £150-200 before the visa is issued. They also pay national insurance, taxes that go towards the Home Office, plus high and escalating fees to process routine applications – in addition to fees paid to all the outsourced affiliated agencies that administer peripheral processes such as English tests and interviews.
The really dirty secret is that the government can stop non-EU migration dead whenever it wants. Of the 170,000 non-EU migrants who came to the UK in 2016, about 90,000 were granted tier-2 employment. These are visas that we can simply stop issuing. But the economy needs the labour, something the government will not admit, instead choosing to treat applicants as people who somehow manage to come to the country against its will. If anything, the UK needs more non-EU migrants to plug skills gaps, particularly in the NHS – yet doctors offered jobs in hospitals are being blocked from coming to Britain because monthly quotas for skilled worker visas have been oversubscribed.
And, if Brexit finally goes through, into this inflexible immigration system will march three million EU citizens whose status will need to be registered and regularised. It is simply, for those of us who have been through it, a terrifying prospect. And May still doubles down, running the Home Office from Downing Street. In mid-February she overruled the Home Office in order to insist that EU citizens who arrived during a Brexit transition period would not have the automatic right to remain in the country. The move has caused alarm in the Home Office, with government sources admitting that work on a separate registration scheme had “barely begun” and “almost certainly” would not be ready in time. May then backed down.
The cavalier detachment with which these big decisions are made cannot be isolated from the general corporate cheapening of human life that has set in over the past decade. Satbir Singh sees immigration policy as indivisible from this environment. “If you look at what has happened in Britain over the last eight years,” he says, “there’s a thread of institutional degradation that runs through it all. Whether you are waiting for medical treatment, a welfare payment or an immigration decision, we are all clients, standing behind a glass window.” And we were the lucky ones. We weren’t in detention, which almost 28,000 people entered in 2016-17. We weren’t the ones being interviewed while hallucinating and wetting ourselves. We weren’t being handcuffed as we left burning buses.
In 2017, the permanent residency that was granted on appeal qualified me for British citizenship. More than a decade after that moment of pregnant possibility on a balcony in Bethnal Green, and 14 years after excitedly taking in the view of London’s parks on a train from the airport, I was making my way towards my naturalisation with leaden feet. The citizenship had been so shorn of its significance, so stripped of its essential meaning, that the ceremony felt like a formality. And when it was over it felt hollow. My relief was dulled by exhaustion and sadness that becoming the citizen of a country in which I had invested so much had been marred by an extractive, dishonest and punitive system. I now looked forward to only one thing – to never have to think about any of it again.
But the day after the ceremony I was crossing a bridge I had crossed thousands of times before, absentmindedly listening to Talking Heads’ This Must Be the Place. It was one of those cinematic London winter dusks, when the rich colours in the sky cast a benign, almost otherworldly light on the water. And I heard the lyrics – “Home is where I want to be” – for the first time. Every grain in the scene around me sharpened as a welling of belonging stung my eyes.
“They don’t want you to integrate,” Farsani had told me. “They want you to fail so they can point their fingers at you and say, ‘Look, immigrants do not integrate’.” But we do, because the country, in spite of its broken immigration system, slowly, organically, casually, naturalises you in ways that cannot be validated by a Life in the UK test, citizenship ceremony or exhaustive application dossier. But daily this natural, healthy process is being violated, via administrative incompetence and politically instructed cruelty, to fulfil a soundbite “tens of thousands” target the government cannot meet, and is too proud to jettison. |
/*
* JEF - Copyright 2009-2010 Jiyi (<EMAIL>)
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package jef.codegen;
import java.io.File;
import java.io.IOException;
import java.io.PrintStream;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import jef.tools.ClassScanner;
import jef.tools.IOUtils;
import jef.tools.StringUtils;
import jef.tools.resource.ClasspathLoader;
import jef.tools.resource.IResource;
/**
 * Static bytecode-enhancement task for JEF entities. <h3>Purpose</h3> This class
 * provides the {@link #enhance(String...)} method, which performs bytecode
 * enhancement on the Entity classes found under the current classpath.
 *
 * @author jiyi
 * @Date 2011-4-6
 */
public class EntityEnhancer {
private String includePattern;
private String[] excludePatter;
private List<URL> roots;
PrintStream out = System.out;
private EnhanceTaskASM enhancer;
private static final Logger log = LoggerFactory.getLogger(EntityEnhancer.class);
public void setOut(PrintStream out) {
this.out = out;
}
public EntityEnhancer() {
enhancer = new EnhanceTaskASM(new ClasspathLoader());
}
public EntityEnhancer addRoot(URL url){
if(url!=null){
if(roots==null){
roots=new ArrayList<URL>();
}
roots.add(url);
}
return this;
}
	/**
	 * Scans for Entity classes (.class files) under the current classpath and
	 * rewrites those class files via bytecode enhancement.
	 *
	 * @param pkgNames names of the packages to enhance
	 */
public void enhance(final String... pkgNames) {
int n = 0;
if (roots == null || roots.size() == 0) {
IResource[] clss = ClassScanner.listClassNameInPackage(null, pkgNames, true);
for (IResource cls : clss) {
if (cls.isFile()) {
try {
if (processEnhance(cls)) {
n++;
}
} catch (Exception e) {
log.error("Enhance error: {}", cls, e);
continue;
}
}
}
} else {
for (URL root : roots) {
IResource[] clss = ClassScanner.listClassNameInPackage(root, pkgNames, true);
for (IResource cls : clss) {
if (!cls.isFile()) {
continue;
}
try {
if (processEnhance(cls)) {
n++;
}
} catch (Exception e) {
log.error("Enhance error: {}", cls, e);
continue;
}
}
}
}
out.println(n + " classes enhanced.");
}
	/**
	 * Enhances the class with the specified name.
	 * @param className fully qualified class name
	 * @return whether the class was enhanced
	 */
public boolean enhanceClass(String className) {
URL url = this.getClass().getClassLoader().getResource(className.replace('.', '/') + ".class");
if (url == null) {
throw new IllegalArgumentException("not found " + className);
}
try {
return enhance(IOUtils.urlToFile(url), className);
} catch (Exception e) {
throw new RuntimeException(e);
}
}
private boolean enhance(File f, String cls) throws IOException, Exception {
EnhanceTaskASM enhancer = new EnhanceTaskASM(null);
File sub = new File(f.getParentFile(), StringUtils.substringAfterLastIfExist(cls, ".").concat("$Field.class"));
byte[] result = enhancer.doEnhance(IOUtils.toByteArray(f), (sub.exists() ? IOUtils.toByteArray(sub) : null));
if (result != null) {
if (result.length == 0) {
out.println(cls + " is already enhanced.");
} else {
IOUtils.saveAsFile(f, result);
out.println("enhanced class:" + cls); // enhancement complete
return true;
}
}
return false;
}
private boolean processEnhance(IResource cls) throws Exception {
File f = cls.getFile();
File sub = new File(IOUtils.removeExt(f.getAbsolutePath()).concat("$Field.class"));
if (!f.exists()) {
return false;
}
byte[] result = enhancer.doEnhance(IOUtils.toByteArray(f), (sub.exists() ? IOUtils.toByteArray(sub) : null));
if (result != null) {
if (result.length == 0) {
out.println(cls + " is already enhanced.");
} else {
IOUtils.saveAsFile(f, result);
out.println("enhanced class:" + cls); // enhancement complete
return true;
}
}
return false;
}
/**
 * Gets the class name include pattern.
 *
 * @return the include pattern
 */
public String getIncludePattern() {
return includePattern;
}
public EntityEnhancer setIncludePattern(String includePattern) {
this.includePattern = includePattern;
return this;
}
public String[] getExcludePatter() {
return excludePatter;
}
public EntityEnhancer setExcludePatter(String[] excludePatter) {
this.excludePatter = excludePatter;
return this;
}
}
|
1. Field of the Invention
The present invention relates to a semiconductor device with a built-in antenna to be used in a wireless network, such as Zigbee.
2. Description of the Related Art
Zigbee is a new standard for a remote control system which is directed to building/home automation with a low-cost and low-power device which can be used for several years on two AA-size batteries. Zigbee uses the 2.4-GHz band radio frequency divided into 16 channels, so that 255 devices can be connected per network and data can be transferred at a maximum of 250 kbps within 30 meters. While Zigbee has a lower data transfer rate than the recent wireless LAN or Bluetooth (trademark registered by Bluetooth SIG Inc.) which uses the same frequency band, it has the advantage of considerably lower power consumption. In home usage, a network can be built which radio-controls everything with Zigbee, from lighting to a home security system.
As Zigbee puts priority on low cost as compared with Bluetooth, it demands an inexpensive system with fewer components. In particular, there is a demand for a semiconductor device on which a high-frequency power circuit including an antenna is mounted on-chip.
Japanese Patent Laid-Open No. 2001-143039 (Document 1) describes a semiconductor device in which an antenna for wireless communication formed at one portion of a lead frame and a semiconductor integrated circuit chip (hereinafter called “IC chip”) are sealed as an integral unit with an encapsulating resin. This prior art can provide a low-cost semiconductor device which has a non-contact type communication capability and is excellent in productivity.
Hitachi Metals Technical Journal Vol. 17 (2001), “Development of Chip Antennas for Bluetooth Devices” by Hiroyuki Aoyama et al., pp. 67-72 (Document 2), describes a prototype antenna for Bluetooth devices, in which an improved inverted F antenna is formed of a metal conductor on a dielectric circuit board. The antenna for Bluetooth devices is characterized in that the conductor width is made narrower toward the open end from the ground end of the antenna conductor in order to cover a wider band than the conventional inverted F antenna. Further, a part of the ground conductor extends to near the open end of the antenna conductor and power is supplied to one end of the antenna conductor. Document 2 reported that with the antenna conductor formed on the top surface of dielectric ceramics together with the power supply conductor and ground conductor, the actual measurements on the antenna gain showed the intended performance of the antenna.
As the semiconductor device described in Document 1 has the entire antenna for wireless communication buried in the encapsulating resin or dielectric, however, it can acquire sufficient radiation power only in a relatively low frequency band and cannot be used as a small antenna aiming at ensuring low power consumption particularly in a 2.4-GHz band.
While the antenna for Bluetooth devices described in Document 2 can radiate electric waves efficiently with low power consumption, it is a chip antenna formed on the ceramic board so that the antenna cannot be manufactured at a low cost and with a high productivity. |
SUNNYVALE — A man wanted by the FBI for food tampering at a number of Los Angeles-area grocery stores was arrested last week for allegedly pouring bleach onto cartons of eggs and hydrogen peroxide into a tray of rotisserie chickens at a Safeway in Sunnyvale, authorities said.
David Lohr, 48, came to the attention of the Santa Clara County Sheriff’s Office after he was spotted spreading a white powder — later determined to be salt — and hydrogen peroxide aboard a Santa Clara Valley Transportation Authority bus in Sunnyvale on Feb. 6, according to Deputy Michael Low.
Low said the driver cleared the bus of passengers, but Lohr slipped away before deputies arrived. He was later spotted by another driver at a bus stop near El Camino Real and San Antonio Road in Los Altos. Deputies raced to the location and detained him.
Lohr, they learned, had a felony no-bail warrant out of the FBI-Los Angeles Office for food tampering and attempting to tamper with consumer products, according to Low.
Deputies also found salt and several receipts in Lohr’s pockets, which led them to a Safeway in Sunnyvale.
Employees at the grocery store told deputies they discovered an empty bottle of hydrogen peroxide and hydrogen peroxide spilled into a heated tray of rotisserie chickens on Jan. 28. Low said Lohr was also caught on camera pouring bleach onto cartons of eggs. Some bleach was also found on some beer containers.
Bleach contains hazardous material that can irritate the respiratory tract when inhaled and cause nausea, vomiting and diarrhea if ingested.
The sheriff’s office is investigating whether any of the chickens, eggs or beer were sold to the public. It has notified and is working closely with the county’s Department of Environmental Health.
“The quick response and thorough investigation by our deputies prompted by vigilant and observant VTA bus drivers led to the apprehension of this suspect, whose actions had the potential to adversely affect the public’s health and safety,” said Sheriff Laurie Smith in a statement. She also praised the Sunnyvale Department of Public Safety for swiftly clearing the bus and determining that the white powder was not hazardous.
Lohr is suspected of pouring bleach into refrigerators and freezers containing ice and alcoholic beverages at markets in Manhattan Beach, Redondo Beach, Westwood and West Hollywood in December and January. Customers also reported smelling or handling frozen shrimp contaminated by bleach at some of those stores, according to an FBI affidavit.
Lohr was booked into the Santa Clara County Main Jail on the warrant, and new food tampering charges are pending. He appeared in federal court on Feb. 8 and remains in federal custody.
The Santa Clara County Sheriff’s Office is asking anyone with information about the alleged food tampering to call 408-808-4500. Those wishing to remain anonymous can call 408-808-4431. |
/**
* Copyright (c) Meta Platforms, Inc. and affiliates.
*
* This source code is licensed under the MIT license found in the
* LICENSE file in the root directory of this source tree.
*/
#import <MapKit/MapKit.h>
@interface RCTMapOverlay : MKPolyline <MKAnnotation>
@property (nonatomic, copy) NSString *identifier;
@property (nonatomic, strong) UIColor *strokeColor;
@property (nonatomic, assign) CGFloat lineWidth;
@end
|
Neuromorphic Vision Based Control for the Precise Positioning of Robotic Drilling Systems
The manufacturing industry is currently witnessing a paradigm shift with the unprecedented adoption of industrial robots, and machine vision is a key perception technology that enables these robots to perform precise operations in unstructured environments. However, the sensitivity of conventional vision sensors to lighting conditions and high-speed motion sets a limitation on the reliability and work-rate of production lines. Neuromorphic vision is a recent technology with the potential to address the challenges of conventional vision with its high temporal resolution, low latency, and wide dynamic range. In this paper, and for the first time, we propose a novel neuromorphic vision based controller for faster and more reliable machining operations, and present a complete robotic system capable of performing drilling tasks with sub-millimeter accuracy. Our proposed system localizes the target workpiece in 3D using two perception stages that we developed specifically for the asynchronous output of neuromorphic cameras. The first stage performs multi-view reconstruction for an initial estimate of the workpiece's pose, and the second stage refines this estimate for a local region of the workpiece using circular hole detection. The robot then precisely positions the drilling end-effector and drills the target holes on the workpiece using a combined position-based and image-based visual servoing approach. The proposed solution is validated experimentally for drilling nutplate holes on workpieces placed arbitrarily in an unstructured environment with uncontrolled lighting. Experimental results prove the effectiveness of our solution with average positional errors of less than 0.1 mm, and demonstrate that the use of neuromorphic vision overcomes the lighting and speed limitations of conventional cameras.
Introduction
The fourth industrial revolution shows significant emphasis on the automation of high-precision cyber-physical manufacturing and machining processes. Automating such processes offers numerous advantages in terms of performance, productivity, efficiency, and safety, as manual operation is often associated with structural damage, risk of rework, and health hazards. Among other processes, drilling has been studied extensively by academics and practitioners due to its widespread use in various manufacturing activities, especially in the automotive and aerospace industries. High-precision drilling is essential for these industries, as the quality of drilling is highly correlated with the performance and fatigue life of the machined structures. Traditionally, the automation of drilling and similar machining processes has been highly dependent on Computer Numerical Control (CNC) equipment for its high precision and repeatability. However, CNC equipment is limited in functionality and workspace, and requires substantial investment in both equipment and infrastructure. In recent years, industrial robots have been rising as a promising alternative to CNC equipment in machining applications due to their cost efficiency, their wide range of functionality, their ability to operate on large workspace volumes, and their capability to adapt to variations in the environment and workpiece positioning. Despite several successful examples of utilizing robots in industrial machining applications, repeatability remains the main challenge in robotic machining, where errors originate either from the relatively low stiffness of robot joints or the imperfect positioning and localization of a workpiece relative to the robot. These errors can be mitigated by the use of real-time guidance and closed-loop control based on sensory feedback and metrology systems.
Several works in the literature have adopted such approaches for precise position, orientation, and force control in a robotic machining paradigm. Machine vision is amongst the most utilized perception technologies that enable the closed-loop control of robots due to its maturity, availability, and relatively low cost. In one work, a 2D vision system was used to enhance the drilling positional accuracy through the detection and localization of several reference holes in the workpiece, achieving a position accuracy of 0.1 mm. Similarly, another work utilizes feedback from an eye-in-hand camera and uses template matching to localize reference holes for a combined drilling and riveting process, reducing positioning errors to 0.05 mm. A further approach proposed combining 2D camera detection with laser distance sensors to localize reference holes in 3D, and reported an accuracy of 0.3 mm. Similar approaches for the positioning of a drilling tool have been reported, with variations in the underlying perception and reference hole detection algorithms. Several works in the literature focus exclusively on the robust detection of circular holes through contour refinement and model fitting due to its direct impact on the precision of the drilling process. These concepts of vision-based feature detection and workpiece localization are also widely adopted in various other manufacturing tasks. For instance, one system proposed visual guidance for a robotic peg-in-hole application consisting of four cameras: two in an eye-to-hand configuration for the localization of the robotic tool, while the others are in an eye-in-hand configuration and are used for alignment of the tool with reference holes. A multi-view approach was presented for the localization of target objects in a pick-and-place framework with sub-millimeter level accuracy. The versatility of vision systems has also enabled other uses in navigation, guidance, and calibration systems of mobile industrial robots.
All of the aforementioned robotic manufacturing approaches utilize conventional frame-based cameras, which suffer from latency, motion blur, low dynamic range, and poor perception in low-light conditions. Frame-based cameras output intensity images based on a time integration of incident illumination over a fixed exposure period. This integral action introduces latency in perception, and causes blurring in the image when considerable relative motion exists, especially with larger exposure periods. On the other hand, short exposure times greatly degrade image clarity in reduced lighting conditions, and require larger apertures, which leads to a narrow depth of field. These shortcomings of frame-based cameras impose constraints on robot operational speeds, workspace volumes, and ambient lighting conditions, which affect the robustness and productivity of robotic manufacturing processes. Relevant work in the literature attempts to mitigate these problems with conventional cameras by adding supporting sensors, which increases the complexity and cost of the system. The recent neuromorphic vision sensor (also known as the event-based camera) has the potential to address the challenges of conventional machine vision. The pixels of a neuromorphic camera operate independently and respond asynchronously to variations in incident illumination in continuous time, providing low-latency, high dynamic range, and computationally efficient perception. Neuromorphic cameras do not suffer from motion blur, and are robust to varying lighting conditions, making them an appealing choice for a wide variety of applications such as autonomous driving, unmanned aerial vehicle control, object recognition and tracking, localization and mapping, and tactile sensing. In our recent work, we demonstrated the advantages of neuromorphic cameras over their conventional frame-based counterparts for high-speed operation under uncontrolled lighting in a robotic pick-and-place framework.
However, the low resolution of the neuromorphic camera, the assumption of known depth, and the act-to-perceive nature of the event camera resulted in positional errors of up to 2 cm. In this paper, we develop and employ a two-stage neuromorphic vision-based controller to perform a robotic drilling task with sub-millimeter level accuracy. The first stage localizes the target workpiece in 6DoF using a multi-view 3D reconstruction approach and Position Based Visual Servoing (PBVS). The second control stage applies Image Based Visual Servoing (IBVS) to compensate for positional errors using a set of reference holes. Using both control stages, the robot performs peg-in-hole to insert a clamp mandrel (or split-pin) in the reference hole with less than 0.2 mm clearance. The clamp mandrel holds the robotic tool in place, and the robot drills nutplate installation holes on both sides of each reference hole. The capabilities of the neuromorphic camera enable higher-speed operation and robustness to changes in ambient lighting. A video demonstrating the presented robotic drilling solution can be accessed through this link: https://drive.google.com/file/d/1q9QwPvkd7ZcEBcGMIxIVy2r_iRfTKCSe/view?usp=sharing. The contributions of this paper can be summarized as below:
1. For the first time, we present a neuromorphic vision-based control approach for robotic machining applications. The proposed method utilizes the feedback of a neuromorphic camera to precisely align a drilling tool with the target workpiece, and demonstrates advantages over conventional vision-based solutions in high-speed operation and uncontrolled lighting conditions.
2. We devise an event-based multi-view 3D reconstruction method for the 6DoF localization of the workpiece in the environment. This method matches events generated from different poses of the neuromorphic camera and solves for the 3D location of workpiece features using the Direct Linear Transformation (DLT).
3. We develop a novel event-based approach for the detection and tracking of circular objects in the scene. This approach applies an event-based variant of the circle Hough transform in a Bayesian framework to detect and track the location of reference holes in the workpiece.
4. We perform rigorous experimentation to test the precision and performance of the proposed methods. Experimental results validate the use of neuromorphic vision for robotic machining applications with a positional accuracy of 0.1 mm, and prove that our approach overcomes the speed and lighting challenges in conventional vision-based robotic machining approaches.
The remainder of this paper is organized as follows. Section 2 outlines the setup and configuration of the proposed robotic drilling system. Section 3 describes the working principle and functional advantages of neuromorphic cameras. Section 4 explains the event-based multi-view 3D reconstruction and workpiece localization algorithm. Section 5 introduces the event-based circular hole detection and tracking pipeline. Section 6 presents the two-stage vision-based controller employing both PBVS and IBVS. Finally, Section 7 demonstrates both quantitative and qualitative experimental evaluation of the presented approach, which confirms the advantages of using neuromorphic vision for precise robotic processes.
Robotic Drilling Setup
The overall configuration of the robotic nutplate hole drilling system can be seen in Figure 1. The system consists of an industrial robot with an end-effector comprising a drill motor and a neuromorphic vision sensor for perception and guidance. The robotic system drills nutplate installation holes on a workpiece that includes a set of reference holes. We define the following frames of reference, which are used throughout this paper for robot guidance and control: {B}, the robot base coordinate frame; {C}, the vision sensor coordinate frame; and {H_i}, the coordinate frame of the i'th reference hole.
We denote the rotation matrix that maps from a source frame {s} to a target frame {t} by R_s^t ∈ ℝ^(3×3). The position of point b relative to point a described in coordinate frame {c} is given by p_(a,b)^c ∈ ℝ^3. As such, we define the affine transformation matrix T_s^t ∈ ℝ^(4×4) as follows:
T_s^t = [ R_s^t  p^t ; 0 0 0 1 ]
For the remainder of this paper, we consider the transformation between {B} and the robot's end-effector to be known by solving the robot's forward kinematics:
T = f(q), q ∈ ℂ
where f(q) is a nonlinear function representing the robot's kinematics, q are the observed robot joint angles, and ℂ is the robot's configuration space. Furthermore, the transformations between the end-effector and {C} are constant, and can be found using a geometrical calibration procedure. Therefore, T_C^B can be easily computed by combining the forward kinematics and the calibrated transformations. Similarly, the robot's forward kinematics are used to find the end-effector's twist vector ξ ∈ ℝ^6, combining linear and angular velocity components, as follows:
ξ = J(q) q̇
where J(q) ∈ ℝ^(6×n) is the Jacobian matrix, n is the number of robot joints, and J^+ denotes the Moore-Penrose inverse computed using Singular Value Decomposition. Additionally, the camera's twist vector ξ_C can be calculated from ξ using the adjoint representation of the transformation between the end-effector and {C}, denoted by Ad(T), as follows:
ξ_C = Ad(T) ξ
where [p]_× denotes the matrix representation of the cross product for the vector p. In order to perform the drilling operation, the robot requires knowledge of T_(H_i)^B. We solve for this transformation in two stages. First, a multi-view 3D reconstruction approach provides an initial estimate of T_(H_i)^B for all the workpiece holes, as described in Section 4. Then, for each hole, T_(H_i)^B is refined to sub-millimeter accuracy using the circular hole detection and tracking approach presented in Section 5. Following these two perception stages, robot control is performed by two subsequent methods, PBVS and IBVS, which align the end-effector with {H_i} and drill the required holes in the workpiece, as explained in detail in Section 6.
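The kinematic relations above can be sketched in a few lines of NumPy: the forward mapping from joint velocities to the end-effector twist, and the least-squares inverse via the Moore-Penrose pseudoinverse. This is a minimal illustration only; the Jacobian values would come from the actual robot model, which is not given here.

```python
import numpy as np

def end_effector_twist(J, q_dot):
    """Twist (linear + angular velocity) from joint velocities: xi = J(q) q_dot."""
    return J @ q_dot

def joint_velocities(J, xi):
    """Least-squares joint velocities for a desired twist, using the
    Moore-Penrose pseudoinverse (computed internally via SVD)."""
    return np.linalg.pinv(J) @ xi
```

For a redundant robot (n > 6), `joint_velocities` returns the minimum-norm solution, which is the standard behavior of the pseudoinverse.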
Neuromorphic Vision Sensor
The neuromorphic vision sensor, often referred to as an 'event camera', encodes illumination changes in the visual scene as a stream of events e = <x, y, t, p>, where (x, y) represents the pixel coordinates of the change, t is the event's timestamp, and p is the illumination change polarity (either 1 or -1). Unlike conventional frame-based imagers, neuromorphic vision sensors do not operate on a fixed sampling rate; instead, pixels operate asynchronously and respond to logarithmic illumination changes with microsecond resolution. Figure 2 visualizes the differences between neuromorphic event-based cameras and conventional imagers. Let I(x, y, t) denote the illumination intensity at pixel (x, y) and time t; an event e = <x, y, t, p> is triggered as soon as the following condition is met:
|log I(x, y, t) − log I(x, y, t − Δt)| ≥ C
where C is the logarithmic illumination change threshold, and Δt is the time since the last triggered event at (x, y). The working principle of neuromorphic cameras provides substantial advantages over conventional imaging sensors. For instance, neuromorphic cameras offer high temporal resolution and an exceptionally low latency in the order of microseconds, meaning that these sensors do not suffer from motion blur and guarantee timely perception of changes in the scene. Additionally, since independent pixels are self-sampled, neuromorphic cameras have a wide dynamic range (> 120 dB), and are not impeded by the exposure timing complications that arise in frame-based cameras. This enables neuromorphic cameras to offer robust perception across a variety of lighting conditions, including extremely low-light cases. Another practically valuable feature that results from the aforementioned capabilities of neuromorphic vision is the ability to perceive under a very small aperture, leading to a substantially wide depth of field.
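The event-generation condition can be sketched as follows. On a real sensor this comparison is performed per pixel in analog circuitry; here it is a simple software stand-in, and the threshold value is an illustrative assumption.

```python
import numpy as np

def maybe_trigger_event(I_prev, I_now, x, y, t, C=0.2):
    """Emit an event <x, y, t, p> when the logarithmic intensity change
    at pixel (x, y) exceeds the contrast threshold C; otherwise None."""
    dL = np.log(I_now) - np.log(I_prev)
    if abs(dL) >= C:
        return (x, y, t, 1 if dL > 0 else -1)
    return None
```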
In applications such as ours, where the camera is expected to acquire information across a varied depth, this feature can alleviate the need for an autofocus system, which often requires additional hardware and induces uncertainty in the camera projection model. Other advantages of neuromorphic vision include low power consumption and reduced signal redundancy, as only informative data is transmitted in the form of events. Despite the capabilities of neuromorphic vision, the fundamentally different output of these cameras requires computer vision algorithms and processing techniques that differ from those developed for conventional frame-based imaging. It must be noted that neuromorphic cameras use identical optics to conventional cameras. As such, the standard pinhole model can still describe the projection properties of neuromorphic cameras. Following the pinhole model, the mapping between a point in three-dimensional space p = [X, Y, Z]^T described in coordinate frame {C}, and its projection on the image plane (u, v), can be expressed in homogeneous coordinates as follows:
[u, v, 1]^T ∼ K p
where ∼ indicates equality up to an unknown scalar multiplication, and K denotes the camera intrinsic matrix. In this paper, we consider (u, v) to be the pixel coordinates after rectification for tangential and radial distortions.
Neuromorphic Event-based Multi-View Workpiece Localization
This section presents the event-based multi-view 3D reconstruction method used for the 6-DoF localization of the workpiece. As described in Section 4.1, we utilize the camera's projective geometry and the Direct Linear Transformation (DLT) to solve for the location of each reference hole in the workpiece using its corresponding stream of events from multiple camera viewpoints. We establish correspondences between the asynchronous events and reference holes using the space-sweep approach described in Section 4.2. Finally, we use model fitting to determine the correct orientation of the workpiece.
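The pinhole projection can be sketched as below. The intrinsic matrix values are illustrative, not the calibration of the camera used in the paper.

```python
import numpy as np

def project(K, p):
    """Project a 3D point p (expressed in the camera frame) to pixel
    coordinates (u, v) using the pinhole model: [u, v, 1]^T ~ K p."""
    uvw = K @ np.asarray(p, dtype=float)  # homogeneous image coordinates
    return uvw[0] / uvw[2], uvw[1] / uvw[2]
```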
The Direct Linear Transformation
As the camera moves in the environment, a stream of events is generated corresponding to each reference hole in the workpiece. The objective of the Direct Linear Transformation is to determine the holes' positions from their corresponding events generated at different time-steps. At a given time-step and camera pose, the relationship between the position of the i'th reference hole p_(h_i) and its corresponding event in homogeneous coordinates e = [u, v, 1]^T can be described using the pinhole model above. Multiplying both sides of the projection by [e]_×, the matrix representation of the cross product for vector e, yields the following expression:
[e]_× P p̃_(h_i) = 0
where P is the camera projection matrix at that pose and p̃_(h_i) is the hole position in homogeneous coordinates. We define a matrix A ∈ ℝ^(3N×4) that stacks this expression across N observations of the i'th reference hole, so that the following holds:
A p̃_(h_i) = 0
This expression is used to obtain a valid solution for p_(h_i) as a least-squares problem, which can be efficiently solved using Singular Value Decomposition.
The Space-Sweep Method
Reconstruction using the Direct Linear Transformation described in Section 4.1 requires accurate correspondence between features in the environment (reference holes) and the events generated from different camera views. For this objective, we use the space-sweep method, first introduced for frame-based images and later adapted for neuromorphic cameras. This approach utilizes a discretized representation of the volume of interest, denoted by V ∈ ℝ^(w×h×d), where w, h, and d indicate the width, height, and depth of V. Each generated event is then back-projected as a ray passing through V using the camera's pose and projection model. A Disparity Space Image (DSI) is defined that records the density of rays passing through each voxel of V. Local maxima are then extracted from the DSI, and the rays passing through high-density voxels are clustered together and are considered to correspond to the same feature in 3D space.
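The DLT triangulation can be sketched as a minimal NumPy routine. It assumes known 3×4 projection matrices per view (in the paper these come from the robot's forward kinematics and hand-eye calibration); the rows appended to A are the two independent rows of the cross-product constraint above.

```python
import numpy as np

def triangulate_dlt(Ps, pixels):
    """Solve A p = 0 for a 3D point from N projection matrices P_k (3x4)
    and corresponding pixel observations (u_k, v_k), in the least-squares
    sense via SVD. Returns the point in inhomogeneous coordinates."""
    A = []
    for P, (u, v) in zip(Ps, pixels):
        A.append(u * P[2] - P[0])  # rows of the cross-product constraint
        A.append(v * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    X = Vt[-1]                     # null vector = homogeneous solution
    return X[:3] / X[3]
```

With noise-free observations the recovered point matches the ground truth up to numerical precision; with noisy events the SVD gives the algebraic least-squares estimate.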
Consequently, since each ray is defined by a camera pose and an event, the clustering of events and camera poses is inferred directly from these rays, and the DLT can hence be applied to each cluster independently. Figure 3 visualizes the principle of the space-sweep method, and illustrates the different steps involved in this process. We refer interested readers to the original manuscripts for the details of the computationally efficient implementation of the space-sweep method. After estimating the positions of all reference holes p_(h_i), i = 1, ..., N, the orientation of the workpiece is found by fitting the estimated hole positions against a pre-known model of the workpiece, which can be done using an Iterative Closest Point approach. In our case, we only assume that the workpiece is flat, such that all reference holes lie on the same plane; hence we do not require knowledge of the number or positions of holes in the workpiece. We simply fit a plane through the estimated positions of all reference holes and infer the workpiece's orientation from the parameters of the fitted plane. This plane-fitting step is also used to remove any outliers or noise in the localization of holes.
Neuromorphic Event-based Hole Detection and Tracking
The precise detection of circular holes from visual feedback directly affects the positional accuracy of the drilling process. In frame-based vision, several well-established methods exist for detecting circular formations in images, and the Circle Hough Transform (CHT) is amongst the most popular of these methods. In this section, we present an event-based variant of CHT that is appropriate for the asynchronous output of neuromorphic cameras. In conventional CHT, each feature point (e.g. edge point) in the image frame is mapped to the Hough parameter space using the constraint equation given below:
(x_i − a)^2 + (y_i − b)^2 = r^2
where (x_i, y_i) are the pixel coordinates of the i'th edge point, (a, b) are the coordinates of the circle's center, and r is the circle's radius.
As such, a three-dimensional parameter space H ∈ ℝ^3, often referred to as the Hough parameter space, that spans all possible values for a, b, and r is defined. Following the constraint above, each edge point in the image plane represents a hollow cone in H, as depicted in Figure 4. The intersection of multiple cones signals the presence of a circle with parameters that correspond to the intersection's location in H. In practice, H is discretized to form an accumulator array A(a, b, r), and each edge point contributes to A through a voting process. Circle parameters are finally extracted from peaks in A. CHT cannot be directly used with neuromorphic cameras due to their significantly different output from frame-based cameras. CHT establishes correspondence between edge points assuming that they are extracted from the same image frame with an exact temporal match. Neuromorphic cameras, on the other hand, do not output image frames, but an asynchronous stream of events in continuous time, as shown in Figure 2. A naive solution would be to concatenate events within a defined time period to form artificial frames, and apply CHT to these frames. However, the generation of events is dependent on the rate of changes in the visual scene. As such, it would be challenging to determine a single period for event concatenation that is appropriate for all conditions. Figure 9 provides a visualization of this premise, where grouping events at different rates results in contradictory CHT performance at different egomotion velocities. To address these challenges, we formulate an event-based variant of CHT that adopts a Bayesian framework to retain the asynchronous nature of neuromorphic cameras. In our algorithm, the accumulator array A(a, b, r) is considered to be a Probability Mass Function (PMF) that reflects the probability of the existence of a circle for any given values of a, b, and r.
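A minimal sketch of how a single event can vote into such a PMF accumulator follows. The radius set, the angular sampling density, and the blending weight `alpha` (which keeps the accumulator a valid PMF after each event) are illustrative choices, not the paper's tuned values.

```python
import numpy as np

def vote_event(acc, x, y, radii, alpha=0.05):
    """CHT measurement update for one event (x, y): build the event's vote
    distribution over cells (a, b, r) satisfying (x-a)^2 + (y-b)^2 = r^2,
    then blend it into the accumulator as a convex combination so the
    accumulator remains a probability mass function."""
    votes = np.zeros_like(acc)
    a_bins, b_bins, _ = acc.shape
    for ri, r in enumerate(radii):
        for th in np.linspace(0.0, 2 * np.pi, 64, endpoint=False):
            a = int(round(x - r * np.cos(th)))
            b = int(round(y - r * np.sin(th)))
            if 0 <= a < a_bins and 0 <= b < b_bins:
                votes[a, b, ri] += 1.0
    if votes.sum() > 0:
        acc = (1 - alpha) * acc + alpha * votes / votes.sum()
    return acc
```

Feeding events sampled from a circle concentrates probability mass at that circle's center and radius, since every event votes consistently for the true parameters while spurious votes spread thinly.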
Following the principle of recursive Bayesian filtering, our approach for event-based circle detection follows two steps: measurement update and prediction. The measurement update step follows the traditional CHT but in an asynchronous manner, where each event independently votes for regions in A that satisfy the circle constraint. The measurement update is performed whenever a new event is received, and A is normalized so that ∑A = 1 after each update step. The prediction step establishes a temporal continuity between events triggered at different times, and allows for the inference of circle parameters at any point in continuous time. In the prediction step, we update A using the camera's egomotion, which is obtained by solving the forward kinematics of the robot manipulator as described in Section 2. Given the camera's twist vector ξ_C = [v_x, v_y, v_z, ω_x, ω_y, ω_z]^T that describes the camera's velocity in {C}, where (v_x, v_y, v_z) are the linear components and (ω_x, ω_y, ω_z) are the angular components, the velocity in pixel coordinates of a feature point (u, v) can be computed using the image Jacobian as shown below:
u̇ = −(f/Z) v_x + (u/Z) v_z + (u v / f) ω_x − (f + u²/f) ω_y + v ω_z
v̇ = −(f/Z) v_y + (v/Z) v_z + (f + v²/f) ω_x − (u v / f) ω_y − u ω_z
where f is the focal length, and Z is the depth. In our control pipeline, discussed in Section 6, we constrain the camera's motion during the reference hole detection step to a linear 2D motion perpendicular to the camera's optic axis, such that v_z = 0 and ω_x = ω_y = ω_z = 0. This constraint simplifies the expression to:
u̇ = −(f/Z) v_x,  v̇ = −(f/Z) v_y
It is evident from this expression that feature velocities in pixel coordinates are uniform across all pixel locations, since they only depend on the camera's velocity and the depth of the point. We denote this uniform velocity by (u̇, v̇). This translates to uniform motion in the a and b components of A. We define a Gaussian kernel G(a, b) that incorporates this motion into the accumulator array as follows:
G(a, b) = η exp( −½ [a − u̇ Δt, b − v̇ Δt] Σ⁻¹ [a − u̇ Δt, b − v̇ Δt]^T )
where Δt is the time since the last prediction step, Σ is a tunable covariance matrix, and η is a scale factor so that ∑G = 1. Although the simplified image Jacobian expression is deterministic, we select a Gaussian distribution to model uncertainties in u̇, v̇, and the camera's egomotion.
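In sketch form, this prediction amounts to shifting the (a, b) axes of the accumulator by the predicted pixel displacement and blurring with a Gaussian. The sketch below uses an integer-pixel shift and a fixed 7-tap kernel for simplicity; kernel width and sigma are illustrative, and a full implementation would interpolate sub-pixel shifts.

```python
import numpy as np

def predict(acc, du, dv, sigma=1.0):
    """Prediction step: translate the (a, b) axes of the accumulator by the
    integer pixel displacement (du, dv) implied by camera egomotion, then
    blur with a separable Gaussian to model uncertainty; renormalize so
    the result remains a PMF."""
    moved = np.roll(np.roll(acc, du, axis=0), dv, axis=1)
    k = np.exp(-0.5 * (np.arange(-3, 4) / sigma) ** 2)
    k /= k.sum()
    # separable convolution along the a and b axes only (r is untouched)
    out = np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 0, moved)
    out = np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 1, out)
    return out / out.sum()
```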
The prediction step is then realized by the convolution of A with G. This step can be performed whenever an event is triggered or at a fixed rate independent of event generation. In our experiments, the prediction step is applied at a rate of 100 Hz. Finally, the circle parameters are extracted from the highest-probability region of A as follows: (a*, b*, r*) = arg max A(a, b, r). Figure 5: Outline of the different control steps for the proposed neuromorphic vision-based drilling system. The robot performs multiple stages of PBVS and IBVS to scan the environment, align with the workpiece, and drill the desired holes. This process is repeated until all the holes on the workpiece are drilled. Vision Based Robot Controller This section explains the vision-based control logic that utilizes the perception algorithms in Sections 4 and 5 to regulate the robot motion during the drilling procedure. Figure 5 provides an outline of this controller, which consists of two subsequent PBVS and IBVS stages. The PBVS stage guides the end-effector towards initial alignment with the reference holes on the workpiece using the 6-DoF pose estimate from the multi-view detection. IBVS then refines the end-effector alignment to sub-millimeter accuracy using the event-based hole detection algorithm. Both stages are described in detail in Sections 6.1 and 6.2. Position Based Robot Control (PBVS) In the PBVS stage, we consider a known desired pose for the robot's end-effector. This pose is either a pre-defined constant, which is the case during the scanning step, or is computed from knowledge of the reference holes' poses and a pre-defined stance of the end-effector relative to these holes. Concurrently, we define a desired joint-angle vector in the robot's configuration space, which we solve for using the Newton-Raphson inverse kinematics approach of the open-source Kinematics and Dynamics Library (KDL) 1.
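The Newton-Raphson inverse kinematics scheme can be illustrated on a toy planar 2-link arm (the actual system applies KDL's solver to the full 6-DoF manipulator). The link lengths, target, seed, and damping factor are invented for the sketch.

```python
import numpy as np

L1, L2 = 0.6, 0.4  # illustrative link lengths (m)

def fk(q):
    """Forward kinematics: end-effector (x, y) of the 2-link arm."""
    return np.array([L1*np.cos(q[0]) + L2*np.cos(q[0]+q[1]),
                     L1*np.sin(q[0]) + L2*np.sin(q[0]+q[1])])

def jac(q):
    """Analytic Jacobian of fk with respect to the joint angles."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0]+q[1]), np.cos(q[0]+q[1])
    return np.array([[-L1*s1 - L2*s12, -L2*s12],
                     [ L1*c1 + L2*c12,  L2*c12]])

def ik(target, q=np.array([0.3, 0.3]), iters=100, step=0.8):
    """Damped Newton-Raphson iteration: q <- q + step * J^+ (target - fk(q))."""
    for _ in range(iters):
        q = q + step * np.linalg.pinv(jac(q)) @ (target - fk(q))
    return q

q_sol = ik(np.array([0.8, 0.45]))
```

The pseudo-inverse makes each step the least-squares Newton correction; the same iteration generalizes directly to a 6-DoF chain with a 6×n Jacobian.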
Using the current joint angles and the desired ones, we compute a time-parametrized joint-angle trajectory q(t) using the RRT-Connect implementation in the Open Motion Planning Library of MoveIt! 2. Finally, a low-level PID controller regulates each joint to track q(t). Image Based Robot Control (IBVS) The IBVS stage refines the end-effector's position based on the detected hole location in image coordinates. Let ⃗s ∈ ℝ² denote the pixel coordinates of the detected hole, and ⃗s* ∈ ℝ² denote the desired coordinates of these features (e.g. the camera's principal point); we define an error vector ⃗e = ⃗s − ⃗s*, and a control law that exponentially decays this error to zero, where Λ ∈ ℝ²ˣ² is a positive-definite gain matrix. What follows is the generation of joint movements that achieve the desired error dynamics. First, we define the command twist vector ⃗v* that describes the camera's desired velocity. In our case, we constrain the camera's motion to a linear 2D movement perpendicular to the camera's optical axis, such that the desired velocity along the optical axis and the angular components are zero. Hence, the linear components of ⃗v* can be easily computed by inverting the reduced image Jacobian. It must be noted that the depth value is obtained from the reference hole pose estimated in the multi-view 3D localization step. Reference joint angular velocities that result in ⃗v* are then calculated by inverting the manipulator's kinematics. Finally, these reference velocities are tracked using PID control for each individual joint. Once the robot's end-effector is aligned with the target reference hole on the workpiece, the clamp mandrel (see Fig. 6-b) is inserted in the reference hole and the pressure foot clamps up against the workpiece until a target contact force is achieved. These contact forces are estimated from the torques on each of the robot's joints. Afterwards, the clamp mandrel is retracted against the workpiece from the blind side, providing additional clamping force.
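The IBVS law under the stated constraint can be sketched as follows: the pixel error is decayed through the gain matrix, and the reduced image Jacobian is inverted to obtain the planar camera velocity. All numbers (gain, focal length, depth, pixel coordinates) and names are illustrative.

```python
import numpy as np

def ibvs_velocity(p, p_star, gain, f, Z):
    """Compute a planar camera velocity command from the pixel error
    e = p - p*: desired pixel velocity is -gain @ e (exponential decay),
    then invert the reduced Jacobian u_dot = -(f/Z) * v."""
    e = np.asarray(p, float) - np.asarray(p_star, float)
    pix_vel = -gain @ e              # desired feature velocity
    v_cam = -(Z / f) * pix_vel       # planar camera velocity (vx, vy)
    return v_cam

# Hypothetical detection 7 px right and 10 px below the principal point.
v = ibvs_velocity(p=[180, 140], p_star=[173, 130],
                  gain=0.5 * np.eye(2), f=450.0, Z=0.075)
```

With a positive-definite gain the error decays exponentially; here the command moves the camera in the direction that drives the detected hole toward the principal point.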
This two-sided clamping ensures stability during the drilling process and minimizes normality errors using the inherent compliance of the robot manipulator. The robot then proceeds with activating the drill motor and drilling nutplate installation holes on the sides of the reference hole. Experimental Setup and Protocol The presented event-based robotic drilling algorithms were tested on the setup shown in Figure 6. We used Universal Robot's UR10, which provides a repeatability of 0.1 mm, as the primary manipulator. The manipulator was mounted on top of a customized version of the Neobotix MPO-500 robot base. The mobile robot uses two Sick S300 LIDARs and the ROS Navigation Stack to autonomously navigate the factory settings and place the manipulator in the vicinity of the workpiece. This enables the robot to perform multiple drilling jobs and operate across a large workspace without the need for human involvement. Figure 6-b displays the end-effector configuration, which comprises the drill motor and the camera. iniVation's DAVIS346 camera with a spatial resolution of 346x260 was used for visual perception, which provides a neuromorphic event stream in addition to conventional frame-based intensity images. The event stream grants a dynamic range of 120 dB, a latency of 20 µs, and a bandwidth of 12 × 10⁶ events per second. While all operations are conducted solely using the event stream, intensity images are used as a benchmark to assess the performance and advantages of event-based perception. All required computations are executed using an on-board computer with an i7-5530 processor and 4 GB of RAM. Our experimental evaluation focuses on three aspects. In Section 7.2, we assess the accuracy of the event-driven multi-view 3D reconstruction technique and quantify its advantages over conventional frame-based vision under different conditions of lighting and operational speed.
Similarly, we evaluate and benchmark our event-based hole detection and tracking pipeline in Section 7.3. Finally, Section 7.4 presents the evaluation data of the comprehensive drilling experiments. 6-DOF Workpiece Localization In this section, we quantify the accuracy of the proposed event-based multi-view reconstruction approach that provides initial estimates of the position of the workpiece and its reference holes. The setup for these experiments is shown in Figure 7. Ground truth data is obtained using a set of four ArUco fiducials, denoted by Fᵢ, i = 1, …, 4. The fiducials are observed from a static robot state using the intensity image output of the DAVIS346, and the 6-DoF pose of each fiducial is estimated using OpenCV's ArUco library. As the position of each hole relative to each fiducial is known with high accuracy, the ground-truth position of the i-th hole can be computed from the fiducial poses. Consequently, we define the error in the 3D localization of each hole as the mismatch between the hole's estimated location obtained using the multi-view approach and its ground-truth location. This mismatch is averaged across all reference holes in the workpiece to quantify the overall error in 3D workpiece localization. We benchmark the multi-view localization results obtained using the neuromorphic event stream against results obtained using conventional intensity images. For conventional images, we use the standard Canny edge detector to extract the edge points used in the reconstruction. The error is quantified in terms of the Euclidean distance between the estimated hole location and the corresponding ground-truth location extracted using the set of ArUco fiducials. Table 1: Evaluation of the 3D workpiece localization error using both neuromorphic and conventional frame-based vision under different light intensity levels and maximum robot speeds during the scanning movement. Bold indicates the lower error.
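The per-workpiece localization error metric can be sketched as the mean Euclidean mismatch between estimated and ground-truth hole positions; the position arrays below are placeholders.

```python
import numpy as np

def localization_error(est, gt):
    """Average Euclidean distance (same units as the inputs) between
    estimated and ground-truth 3D hole positions, shape (n_holes, 3)."""
    est, gt = np.asarray(est, float), np.asarray(gt, float)
    return float(np.mean(np.linalg.norm(est - gt, axis=1)))

# Single hypothetical hole, estimated 0.1 units off along z.
err = localization_error([[0.0, 0.0, 0.1]], [[0.0, 0.0, 0.0]])
```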
These positional errors are then reduced by the IBVS stage to sub-millimeter errors, as seen in Table 2. The results in Table 1 indicate that at lower speeds and good lighting conditions, multi-view localization using neuromorphic vision and conventional vision provides similar accuracy. The advantages of neuromorphic vision become apparent as the robot's operational speed is increased or at lower light conditions. Under such conditions, perception using conventional cameras becomes challenging due to motion blur and the high latency resulting from increased exposure time. Neuromorphic cameras do not suffer from these shortcomings and, as such, they yield more precise and reliable 3D localization results. Figure 8 visualizes the output of both types of cameras during the multi-view localization experiments, which confirms the reasoning behind the superior performance of neuromorphic vision. The proposed event-based CHT provides consistent performance regardless of the camera's egomotion, while the performance of the standard CHT clearly degrades. In the case of regular images, the standard CHT fails due to excessive motion blur. As for event frames, the proper period at which events should be grouped is highly dependent on the motion in the visual scene, which in turn affects the quality of the CHT detection. Regardless of the vision sensor, the multi-view localization step is not capable of achieving the sub-millimeter accuracy requirements of the drilling process. This signals the need for the second pose refinement step, based on event-based hole detection, which we validate in the following section. Neuromorphic Hole Detection This section evaluates our proposed event-based circle detection and tracking method against the conventional CHT. For evaluation purposes, we test the conventional CHT with both intensity images and artificial image frames generated from the concatenation of events over specified time periods.
We assess each method's capability to track the circular holes in the workpiece under different movement speeds and light intensity levels. Figure 9 visualizes the results of each method at different camera egomotions. The use of intensity images leads to unreliable detection at higher egomotion speeds due to motion blur in the visual scene, which is caused by the working principle of conventional imagers that rely on the time integration of incident light. Using conventional CHT with event frames is sensitive to the time period over which events are grouped. For instance, a small concatenation period leads to a featureless image at slower speeds, while larger periods cause an overpopulated image at high speeds, where features can no longer be accurately extracted. Our event-based variant of the CHT exploits the advantages of neuromorphic vision and takes into account the asynchronous nature of the event stream. As such, it provides precise results regardless of the motion in the scene and does not suffer from the motion blur or latency complications of conventional cameras. The experimental results shown in Figure 10 further reinforce the advantages of neuromorphic vision coupled with our proposed event-based CHT. In these experiments, the speed of the camera was gradually increased and the tracking performance of each method was evaluated. Experiments were conducted at different lighting conditions to assess each method's robustness. For intensity image-based detection, the well-known Kanade-Lucas-Tomasi (KLT) tracker was coupled with conventional CHT to track the detected circular holes. As expected, intensity image-based tracking deteriorates at higher speeds due to motion blur, and entirely fails to detect the hole in low light due to the poor clarity of the image despite the increased exposure time. Neuromorphic vision-based perception, on the other hand, remains consistent despite these variations.
Nutplate Hole Drilling Performance The performance of the overall nutplate hole drilling process is presented in this section. Tests were conducted using the setup shown in Figure 6 with five workpieces placed differently in the environment. The mobile robot autonomously navigates to the front of each workpiece, and our novel neuromorphic visual control pipeline described in Sections 4, 5, and 6 then controls the manipulator to perform the drilling objective. A video demonstration of these experiments can be viewed through this link: https://drive.google.com/file/d/1q9QwPvkd7ZcEBcGMIxIVy2r_iRfTKCSe/view?usp=sharing. We assess the drilling performance in terms of the positional error of the nutplate holes. Table 2 presents the per-hole positional error across the five different workpieces. Figure 10: Comparison between the proposed event-based variant of the CHT detector for the neuromorphic camera and the conventional intensity image-based CHT with a KLT tracker, for the detection and tracking of a circular hole 75 mm in front of the camera at different lighting conditions and egomotion velocities. The use of neuromorphic vision and our proposed circular hole detector clearly provides more reliable and consistent results at high operation speeds or under imperfect lighting. Figure 11 shows an example workpiece after drilling the nutplate holes. Quantitative results show that our proposed neuromorphic vision-based approach is capable of precisely drilling nutplate holes with an average error of less than 0.1 mm. These results conform with the precision requirements of a large variety of processes in the automotive and aerospace manufacturing industries. This validates the use of neuromorphic vision for precise manufacturing tasks and highlights the potential of using neuromorphic cameras for faster and more reliable automated manufacturing.
The obtained results also prove the effectiveness of our proposed algorithms in exploiting the advantages of neuromorphic cameras while addressing several of their challenges in terms of unconventional data output and relatively low resolution. We would like to note that the nature of the performed drilling process, which includes inserting the clamp mandrel in a pilot hole, contributes to further minimizing the positional errors. During this peg-in-hole stage, the end-effector's pose can be driven to better alignment with the reference hole due to the compliance of the manipulator, which can further reduce any errors resulting from the visual guidance process. Conclusions and Future Work In this paper, we presented the first system that employs recent neuromorphic vision technology for robotic machining applications. Table 2: Position error in mm for the nutplate hole drilling experiments across multiple workpieces. In particular, we have developed a complete visual guidance solution that precisely positions the robot relative to the desired workpiece with sub-millimeter accuracy using two consecutive stages of perception and control. The first stage utilizes a multi-view 3D reconstruction approach and PBVS for the initial alignment of the robot's end-effector. Subsequently, the second stage regulates any residual errors using a novel event-based hole detection algorithm and IBVS. We have validated our system experimentally for a nutplate hole drilling application using a collaborative robot manipulator, an iniVation neuromorphic camera, and a customized end-effector. Our quantitative results show that the presented neuromorphic vision-based solution can successfully drill the target holes with an average positional error of less than 0.1 mm.
Our tests also verify that the use of neuromorphic cameras overcomes the lighting, speed, and motion blur challenges associated with the use of conventional frame-based cameras. These results demonstrate the potential of using neuromorphic cameras in precise manufacturing processes, where they can facilitate faster and more reliable production lines. For future work, we aim to improve the normal adjustment and orientation control aspects of our robotic drilling system. In our current system, the only measurement of workpiece orientation is obtained in the multi-view reconstruction step, and any orientation errors resulting from this step are not corrected for, unlike position measurements, which are further refined using circular hole detection. Although the two-sided clamping and the compliance of the collaborative robot can passively drive the end-effector towards better normality with the workpiece, a more precise and reliable normal alignment method is required to expand the range of manufacturing processes our system can perform. To this end, we will investigate the application of visual-tactile sensing for normality control in robotic machining processes.
Environmental Scenarios for the Future Nitrogen Policy in Flanders, Belgium The agricultural sector accounts for two thirds of nitrogen losses in Flanders, Belgium. Since 1991 both the government and the farmers have been taking measures to reduce the nitrogen surplus. Initially, the manure policy was aimed at distributing the manure surplus equally across Flanders. At the same time, the growth of livestock was stopped by a strict licensing policy, which required command and control measures. In recent years, the policy has switched to the use of individual target commitments by farmers. The Flemish manure policy will be tightened even more as a result of international pressures. An ex ante evaluation of possible policy options was carried out using three different scenarios spread out until 2010 (Business As Usual, Additional Measures, and Sustainable Development). To do this, a sector-economic, regionalized, environmental, comparative static, partial equilibrium, mathematical programming model of the Flemish agriculture was developed. The nitrogen emission into the agricultural soil was calculated by means of a regional soil balance. European targets can only be reached with manure processing, reduced fertilizer usage, and a strong reduction of intensive livestock breeding activities. The atmospheric deposition of nitrogen compounds will strongly decrease in 2010 if additional measures are taken. This will also result in a strong reduction of nitrous oxide emissions. Intensive Livestock Breeding in Flanders Agriculture in Flanders is highly mixed. Soilless agricultural farms (intensive pig and poultry farming) exist alongside soil-dependent farms (dairy cattle and arable farming). Regional concentration has taken place and specialization has occurred mainly in soilless livestock farming. In 1999 the production value of Flemish agriculture amounted to 4.25 billion Euro and accounted for 72.9% of the total production value of Belgian agriculture. 
In 1998 agriculture accounted for 1.42% of Belgium's Gross National Product. The main sectors of Flemish agriculture are livestock and horticulture. Households spend 12.6% of their income on food products. In 1999 there were about 1.5 million cows, 7.3 million pigs, and 36 million poultry on 35,000 farms. These animals produce 212 million kg N, 50% coming from cows, 40% from pigs, and 10% from poultry. The agricultural surface of Flanders is 635,000 ha, of which 40% is grassland. In 1998 the manure surplus at farm level reached 60 million kg N, assuming maximum spreading according to the manure limits. One fourth of livestock farms had to deal with a manure surplus purely based on the use of animal manure. In Europe, the highest concentration of animals is found in the Netherlands and Flanders. Animal manure is applied very intensively to agricultural land. The manure surplus situation in certain regions of Flanders is due to an explosive growth of the livestock in the past three decades. Between 1965 and 1990, the number of pigs and poultry more than tripled. Between 1990 and 1998, the growth rate for pigs was 2% per annum and the growth rate for poultry was 5% per annum. Besides this growth, the cause of the surplus problem is the evolution from soil-dependent toward soilless specialized pig and poultry farms. Soilless agricultural farms were stimulated in the 1960s in order to intensify agricultural production. This has led to the concentration of intensive livestock farms close to producers of compound feed, which were usually located near rivers or close to harbors due to their dependence on imported feedstuff. Soilless livestock farms usually have contracts with feed producers. With this form of vertical integration, feed producers try to safeguard their market. In Flanders, approximately 60% of pig production and 90% of veal and poultry production takes place on a contractual basis between feed producers and farmers.
The manure surplus problem is structurally linked with the agro-industrial development of Flemish agriculture. Environmental Effects In 1998 the total nitrogen loss to the environment in Flanders equaled 245 million kg N. It consisted of losses to the air (NOx, N2O, NH3: 123 million kg N), losses to surface water (emissions from agriculture and wastewater treatment plants: 50 million kg N), and a residual compartment equal to 72 million kg N. The latter comprised losses to soil and groundwater and non-estimated losses to surface water and into the air. The shares of the economic sectors in the emissions of nitrogen and phosphorus amounted to 66% for agriculture, 20% for industry, 10% for population, and 4% for traffic. The agricultural sector accounted for 65% of the nitrogen emissions, which makes it the largest nitrogen polluter. Agriculture accounted for 98% of the ammonia emissions, 49% of the nitrogen losses to surface water, and 50% of nitrous oxide emissions. Agriculture is 100% responsible for the residual compartment. Losses to surface water raise nitrogen concentrations. In 1999 the surface water quality met the legal basic quality standard for nitrate nitrogen (10 mg NO3-N/l) at only 70% of the measuring points. The average concentration of nitrate nitrogen (NO3-N) in surface water amounted to 5.2 mg N/l. In 1999 the ammonium nitrogen (NH4-N) concentration met the Flemish basic quality standard of 5 mg NH4-N/l at only 27% of the measuring points, with an annual average concentration of 3.5 mg NH4-N/l. This clearly illustrates the relatively poor water quality in Flanders. Since 1999 a supplementary water quality monitoring network was set up to provide feedback to farmers on the effects of manure practices on surface water quality. Measured concentrations were evaluated against the standard of 11.3 mg NO3-N/l (equal to 50 mg NO3/l), imposed by the European directive on nitrate (EC/91/676).
This standard was exceeded at 60% of the measuring points at least once from July 1999 to June 2000. Peak values of 22 mg NO3-N/l are regularly reached and exceeded. Most (77%) of the atmospheric nitrogen deposition caused by air emissions in Flanders originates from agricultural activities. The share of reduced nitrogen (ammonia) in the atmospheric deposition amounts to 71%. In 1998 the average nitrogen deposition in Flanders was 39 kg/ha. However, more than 70 kg N/ha can be deposited locally. In 1998 the critical load for nutrient nitrogen was exceeded at 55% of 652 representative forest ecosystems. The value of the critical load varied from 7.5 to 13.6 kg N/ha. Nitrogen losses from agriculture are calculated using the soil surface balance approach, according to the OECD methodology. Unlike the OECD method, ammonia emission is taken into account. Fig. 1 illustrates the main streams of the soil surface balance of Flemish agriculture. The input into the soil consists of livestock manure, chemical fertilizers, atmospheric deposition, organic fertilizers (compost, organic waste from the nutrition industry), and bacterial nitrogen fixation in the soil. On the output side, the harvesting of crops and grass and fodder crop production remove nutrients from the soil. Industrial manure processing and manure export are also considered as outputs, but were of no significance in 1998. The imbalance between input and output results in a nitrogen surplus, which is lost to the environment. This surplus consists of ammonia emissions to the air and emissions to the soil. The latter can be broken down into the following streams: emissions to the air by denitrification processes, runoff to surface water, leaching to groundwater, and temporary storage in the soil. In 1998, the nitrogen surplus amounted to 187 million kg N, or 294 kg N/ha (inclusive of ammonia emission), which equaled almost half of the inputs. The output/input efficiency in nitrogen use was 46%.
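The soil surface balance arithmetic, together with the 70 kg N/ha surplus target quoted in the text, can be sketched as follows. Only the manure input (212 million kg N), the surplus (187), and the efficiency (46%) are quoted figures; the remaining stream values are illustrative assumptions chosen to reproduce those aggregates.

```python
def soil_surface_balance(inputs, outputs):
    """Regional soil surface N balance: surplus and output/input efficiency."""
    tin, tout = sum(inputs.values()), sum(outputs.values())
    return {"surplus": tin - tout, "efficiency": tout / tin}

# Illustrative 1998 stream values, in million kg N.
balance = soil_surface_balance(
    inputs={"manure": 212, "mineral_fertilizer": 100, "deposition": 20,
            "organic_fertilizer": 8, "fixation": 6},
    outputs={"harvest_and_fodder": 159, "processing_and_export": 0},
)

# Surplus target derivation: a 300 mm rainfall surplus equals 3,000,000 l/ha;
# at the 11.3 mg NO3-N/l standard and with 50% of leached N denitrified,
# the allowable soil surplus is about 70 kg N/ha.
target_kg = 300 * 10_000 * 11.3 * 2 / 1e6   # ~= 67.8, rounded to 70
```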
A target value of 44 million kg N, or 70 kg N/ha, is set for the surplus, excluding ammonia emission. This is derived from the drinking water quality standard of 11.3 mg NO3-N/l, imposed in the EU directive 91/676. The following assumptions have been made to derive the nitrogen surplus target: 50% of the nitrogen leaking out of the soil is denitrified during migration through the soil, and the annual rainfall surplus equals 300 mm. The European standard is then met at a leaching of up to 70 kg N/ha (3,000,000 l rainfall surplus/ha × 11.3 mg NO3-N/l × 2 ≈ 70 kg N/ha). If an equilibrium is assumed between nitrogen immobilization and mineralization, then a straightforward relationship exists between the soil surface nitrogen surplus and the concentration in ground- and surface water. The discrepancy to the target needs to be minimized by 2003; otherwise the whole territory of Flanders would need to be designated as a vulnerable zone under pressure from the European Commission. This would lead to a restriction of the manure limits from an average of 225 to 170 kg animal N/ha. Table 1 shows a comparison of national soil surface nitrogen balances in the OECD and indicates that Flanders has a balance surplus even higher than the Netherlands. This directive on the protection of water from nitrate pollution from agricultural sources aims to reduce and prevent eutrophication problems of coastal and marine waters. The directive prescribes when and how the member states of the European Union should deal with nitrate problems. It contains directives on how to treat livestock manure and chemical fertilizers in vulnerable regions. The directive prescribes that member states need to: 1. Identify all surface waters and regions that are influenced or could be influenced by nitrate pollution. These regions are identified as regions where agricultural activities cause problems with drinking water quality or eutrophication. 2. Establish an action program for vulnerable regions.
The measures stated in the directive are obligatory in these regions. 3. Set up a code of good agricultural practice to offer a basic level of protection against pollution. This code should contain a minimum number of measures. Farmers need to be informed about this code. The manure policy evolved in three phases. In the first phase (1991 to 1995), manure was seen as a resource, not as a waste, in order to reach the goals of the Nitrate Directive. The value of manure was recognized, primarily because of its nutrient content, but also because of its organic matter (farmyard manure). Manure rules were set up in order to avoid the application of manure in excess of the capacity of the environment. More severe, area-specific rules were required in order to protect some vulnerable areas. For farms with too little land to use the manure produced at the farm, the real solution consists of removing the surplus. Redistribution of nutrients is achieved by spreading slurry on the fields of neighboring farms or by transporting it over longer distances. As the transportation costs were very high, a compulsory long-distance transport system for larger farms was worked out. The targets of this first phase were reached: the transport of livestock manure from regions with a surplus to regions where extra manure could be used increased from 22 million kg N in 1992 to 60 million kg N in 1995. Consequently, the distribution of manure from local livestock concentration areas across the whole territory of Flanders generated additional eutrophication. During the second phase of the manure policy (1996 to 2000), restrictions were imposed at farm level. By means of a licensing policy, farmers were obliged to prove their past and future manure deposition and export (disposal).
At the same time, the livestock sector was given until 2002 to reduce the manure surplus by means of source-oriented measures (forage techniques), effect-oriented alternatives (manure processing), or reduction of the livestock through natural or accelerated release of farmers. To compensate farmers for the negative socioeconomic consequences of this policy, family farms, which are considered to form the backbone of the Flemish agricultural sector, were positively discriminated. Despite these measures, however, it remained impossible to spread all the manure on the available agricultural land. Therefore, in the third phase in 2000, a policy mix was worked out including the reduction of the number of animals, the use of feed concentrates with lower nutrient content, and manure processing (drying, burning, composting) or export of processed manure. The size of the livestock is kept at the 1995 to 1997 level until 2005 by controlling the manure volume produced at farm level. Policy Response The manure policy in Flanders was designed to implement the Nitrate Directive (EC/91/676) of the European Community. An important innovation in this phase is the increased responsibility of the farmers for their manure practices. Farmers are allowed to use fertilizers beyond the manure limits if they can prove that the residual nitrate in the upper soil, which is 90 cm deep, is lower than 90 kg N/ha. This is measured during the period from October 1 to November 15. Farmers who obtain a better result than the nitrate residue regulation are rewarded. By 2002 additional scientific research will clarify whether the targets of the Nitrate Directive can be translated into a parcel-specific, controllable regulation and whether the current regulation is sufficient to prevent the eutrophication of the surface and groundwater. However, nitrogen is lost not only after harvest. 
Research in the Netherlands proved that the residual nitrate regulation is a highly uncertain policy instrument when the high cost, the need for additional control mechanisms by the government, and the low prediction value for the individual farmer are taken into account. MATERIALS AND METHODS The third phase manure policy was evaluated in order to check whether the targets imposed by the Nitrate Directive could be reached on time. Also, additional measures were evaluated against the need to reach the targets after 2002. A brief description of applied methods and all results of this ex ante evaluation are published in the Environment and Nature Report Flanders: Scenarios MIRA-S 2000, which is a more comprehensive policy preparatory document. In order to evaluate the environmental policy of the agricultural sector a sector-economic, regionalized, environmental, comparative static, partial equilibrium, mathematical programming model of the Flemish agriculture was used, called Socio-Economic Agricultural Evaluation System (SELES), in combination with environment modules to calculate air emissions, use of water, energy, and pesticides. SELES consists of two modules: a mathematical programming module called VRAM and an input-output module representing the Flemish economy called VLIO. Only the results of the VRAM module are used to analyze the agricultural nitrogen balance. The mathematical programming module describes in detail all the options available to farmers in eight subregions of Flanders, called regional farms. 
The model is activity based and describes for every region nine livestock activity groups (milking cows, male beef cattle, female beef cattle, pigs, sows, laying hens, broilers, and broiler parents), two roughage activity groups (grassland and maize), nine arable activity groups (cereals, potatoes, sugar beets, cash crops with low nitrogen demand, cash crops with high nitrogen demand, dried vegetables, vegetables with low nitrogen demand, and a rest activity), and two open-air vegetable activity groups (extensively grown and intensively grown). The activity "milking cows" is further split into four different technologies, representing observed differences in milk production per cow and nitrogen input per hectare on the farm. The fruit, horticulture under glass, and ornamental plant growing sectors are each represented by one aggregated activity. The rationale behind the selection of activities and technologies to describe the sectors is based on the potential contribution of these sectors to the manure problem in Flanders. Regional balances for production and consumption within the model are included for intermediates: young stock, animal manure, and roughage. The levels of agricultural activities in VRAM are adjusted in such a way that each regional farm maximizes its profits while complying with constraints imposed by intermediate balances (manure, young stock, and roughage), primary input balances (availability of land and production quota), technical restrictions (mineral balances for animals, fertilization requirements), and environmental policies. The model shows in detail the consequences for regional production, the nutrient surface soil balance, and regional farmers' income. All input and output prices are exogenous, with the exception of the intermediates. Mineral fertilizers and animal manure are modeled as substitutes in a fertilization balance, subject to an exogenous, crop-specific, minimum fertilization requirement.
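The profit-maximizing regional farm can be illustrated with a deliberately tiny linear program in the spirit of VRAM. The two activities, their margins, and the manure coefficient are invented for the sketch; the real model has many more activities, regions, and balance constraints.

```python
from scipy.optimize import linprog

# Toy regional farm: choose pig head count and maize area to maximize gross
# margin, subject to a land limit and a manure-disposal constraint that ties
# animal N production to the spreading capacity of the crop land.
margin = [-40.0, -600.0]          # linprog minimizes, so margins are negated
A_ub = [[0.0, 1.0],               # maize area <= 50 ha
        [12.0, -170.0]]           # 12 kg N/pig <= 170 kg N/ha * maize area
b_ub = [50.0, 0.0]
res = linprog(margin, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)])
pigs, maize = res.x               # optimum: all land used, pigs at N limit
```

At the optimum the manure constraint binds, mirroring how tighter manure limits in the scenarios shrink the feasible livestock level.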
Input costs included in the objective function are purchased concentrates, chemicals, mineral fertilizers, and other inputs (hired labor, seed, etc.). Other elements of the cost function are application and industrial processing costs of animal manure, costs for imports and exports, and interregional transportation costs for tradable intermediates in Flanders. Assumptions for the price evolution are based on simulated effects of the European common agricultural policy rules (Agenda 2000) and farmers' behavioral response to policy changes on a European level. The SELES results on livestock, land use, and animal manure production are used as inputs in the environmental modules. Outputs of these modules are calculated per unit of agricultural activity (animal or hectare). Ammonia emissions are calculated with generic emission coefficients for emission in pastures, in stables, and after manure application. Calculations of nitrous oxide emissions are based on a methodology developed for the Intergovernmental Panel on Climate Change. Nitrogen depositions are modeled using an atmospheric dispersion model. The development of the environmental pressure from the agricultural sector is simulated using four scenarios, starting in 1998 and tentatively using 2010 as the end date. The Autonomous scenario (AUT) is based on the economic development determined by growing markets and technological innovation, on the price policy of the European Common Agricultural Policy in the year 2000, and on the second-phase manure policy. The Business As Usual scenario (BAU) incorporates the third-phase manure policy until 2003. The gradual tightening of manure limits restricts the size of the manure market. Livestock is maintained at the 1998 level until 2005. Manure processing is compulsory for large livestock farms (with more than 7500 kg phosphate production per year). A fixed processing cost per volume is used in the model. 
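The generic-coefficient approach of the emission modules can be sketched as follows: emissions are computed per unit of agricultural activity, split over the pasture, stable, and manure-application stages. The stage coefficients below are invented for illustration and are not the Flemish values used in the study.

```python
# Sketch of a coefficient-based ammonia-emission module.
# Coefficient values are hypothetical, not the study's generic coefficients.
NH3_COEF = {  # kg NH3-N per animal per year, by emission stage
    "pigs":       {"stable": 2.5, "application": 1.8, "pasture": 0.0},
    "dairy_cows": {"stable": 8.0, "application": 6.5, "pasture": 3.0},
}

def ammonia_emission(livestock):
    """livestock: mapping of activity name -> number of animals."""
    return sum(
        n_animals * sum(NH3_COEF[activity].values())
        for activity, n_animals in livestock.items()
    )

herd = {"pigs": 1000, "dairy_cows": 100}
print(ammonia_emission(herd))
```

The nitrous oxide and deposition modules follow the same pattern, with coefficients (or a dispersion model) applied to the livestock and land-use outputs of SELES.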
Animal manure must be spread by low emission techniques such as subsoil injection, shallow injection into slots, or incorporation within 4 h. After 2004 licenses will be renewed on condition that the stables are ventilated with low emission techniques. In the Business As Usual Plus (BAU+) scenario, additional measures are added: tightening of the livestock manure limits to 170 kg N/ha from 2003, strong growth of organic farming (from 0.4% in 1998 to 10% in 2010, expressed in livestock numbers and land use), accelerated introduction of low emission stables, accelerated introduction of low nutrient forage, and the introduction of multiphase foddering for pigs. The last two measures partly neutralize the effect of lower manure limits. Manure processing is no longer compulsory. Additional measures in the Sustainable Development (SD) scenario reflect stronger ecological principles, giving organic farming a 25 to 50% share of total farming. The conversion to organic farming is assumed only for arable land activities, vegetable crops in open air, and milk production. Conversion costs are lowest in these activities and hence conversion is more feasible. Changes in variable productivity, sales prices, and costs are dealt with by applying indices to the values for classic farming. These indices are the ratio of the considered variable for organic farming vs. classic farming. The time horizon of the SD scenario has no realistic meaning, but the scenario is calculated against the background of 2005 (25% organic farming) and 2010 (50% organic farming). The SD scenario shows the effect of strong ecological measures. The outcome of the ex ante evaluation for the other economic sectors is taken into account for the modeling of the future atmospheric nitrogen deposition. Environmental Results Model results include livestock evolution, which is the driving factor for the nitrogen surplus in Flanders. 
One of the driving factors in the evolution of cattle stock numbers is the European price policy. The beef cattle stock decreases by 85% in all scenarios by 2010 due to lower prices and exit from the constrained manure market. These results should be taken as an upper limit, since subsidies by the Belgian government are not taken into account. Moreover, the model assumes that farmers easily shift between activities based on profitability. In the AUT scenario the manure production by cattle is substituted by the more profitable pig breeding activities (+17% in 2010). This substitution is observed in all scenarios, but the degree of substitution depends on the tightening of the manure policy. The total pig stock in the BAU, BAU+, and SD scenarios decreases from 1998 to 2010 by 3, 5, and 24%, respectively. Installation and exploitation costs for additional measures in the BAU+ and SD scenarios are not taken into account in the model. These outcomes should therefore be considered as a minimum decrease. The poultry stock in 2010 increases by 7% in the AUT scenario, but decreases by 12% in the SD scenario. Poultry numbers are relatively stable in the BAU and BAU+ scenarios. The surplus of the nitrogen soil surface balance decreases in the scenarios along with the tightening of the manure policy (Table 2). The driving force in the AUT scenario is the beef cattle stock reduction. The implementation of a manure limit of 170 kg N/ha in the BAU+ scenario is not sufficient to reach the soil surface balance target. The surplus is only lower than the target value of 44 million kg N in the SD scenario. Chemical fertilizers are used less in all scenarios. The driving force here is the rising income of arable land farmers caused by increased acceptance of manure. Moreover, organic farming bans the use of chemical fertilizers. Biological nitrogen fixation is kept constant, although organic farming systems use relatively more legumes. 
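The soil surface balance behind this comparison is simple arithmetic: nitrogen inputs to the soil (animal manure, chemical fertilizer, atmospheric deposition, biological fixation) minus the nitrogen removed by plant production and exports. The figures below are hypothetical and do not reproduce the study's numbers; only the 44 million kg N target value is taken from the text.

```python
# Nitrogen soil-surface balance sketch (million kg N).
# All input/output figures are hypothetical; only the target is from the text.
inputs = {
    "animal_manure": 160.0,
    "chemical_fertilizer": 60.0,
    "atmospheric_deposition": 25.0,
    "biological_fixation": 3.0,
}
outputs = {
    "crop_removal": 150.0,
    "processed_manure_export": 10.0,
}
surplus = sum(inputs.values()) - sum(outputs.values())
print(surplus)            # 88.0 -- part of this surplus is lost as ammonia
target = 44.0             # target value cited in the text (million kg N)
print(surplus <= target)  # False: this hypothetical region misses the target
```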
Decreasing atmospheric depositions in the BAU and BAU+ scenarios are the result of planned or additional emission reduction measures in agriculture, industry, and traffic and transport. The nitrogen deposition decreases from 39 kg N/ha in 1998 to 29 kg N/ha in the BAU scenario and to 25 kg N/ha in the BAU+ scenario. This means that a deposition target of 26 kg N/ha is within reach in 2010. There is still a large discrepancy with ecological targets formulated as critical loads. In 2010 critical loads will be exceeded in 40% of the 652 selected forest ecosystems. To reach a zero excess, deposition should be lower than 14 kg N/ha. This is beyond the time horizon of this evaluation. On the output side of the balance, plant production removes a decreasing amount of nitrogen through the decrease in fodder crop area, both across different scenarios and over time. Manure processing is only realized in the BAU and BAU+ scenarios. Facultative processing leads to lower volumes processed because of the relatively high processing costs. All processed manure is for export. The ammonia emission, which is part of the surplus, is decreasing both across all scenarios and over time, thanks to lower manure production, manure processing, and low emission techniques. The Flemish target is set at 37 million kg N for all sectors and is derived from the national emission ceilings agreed in the Göteborg Protocol (December 1, 1999) to the United Nations Convention on Long-Range Transboundary Air Pollution. Hence the target can be reached easily, since agriculture produces 98% of the emission. An indirect consequence of the manure policy is the important reduction of nitrous oxide emission in all scenarios. This illustrates the integrated effect of this policy. In 2010 reductions are up to 11% in the AUT scenario and 44% in the SD scenario compared to 1998, when 8 million kg N were emitted as nitrous oxide. 
This reduction is the effect of lower livestock manure production and lower use of chemical fertilizers. The effect of low emission spreading techniques on subsoil denitrification is not accounted for. In comparison with other economic sectors, agriculture has a greater potential for emission reduction. By contrast, despite gains in efficiency, all other sectors show an increase in emissions due to volume growth. Discussion of the Model In this research, a regionalized sector model is used to evaluate the possible impact of manure and ammonia policies on the agricultural sector in Flanders. The model has both advantages and disadvantages for the economic problem at hand. An important advantage of the model is that the agricultural sector is described as a whole. This is important because, through the manure and land balances, nitrogen policies not only affect the producers, but also the users of animal manure. A disadvantage is that activities are aggregated to the regional level in order to keep the model and the computation time manageable. One of the consequences of this aggregation is that manure transportation costs within regions are not taken into account. It could be argued that these costs are small compared to application costs and transportation costs between regions. Another consequence is that behavioral and structural differences between farms are not taken into account. Farm models would be more suitable to do this. They would also be more suitable to include more technical options than the ones modeled at the aggregate level. However, changes in aggregate demand and supply and resulting changes of market prices cannot be modeled by farm models. The effects of nitrogen policies go beyond the individual farm and therefore at least the impact on manure and land prices should be taken into account. This requires more aggregate models to model effects of important policy changes. 
Another weakness is that the model focuses on profit maximization by the individual farmer. In reality, the farmers' behavior is more diverse, which explains a different response to policies than the one predicted by the model. It should be noted that the model should not be used as a tool to predict the future, but as a decision support system to choose between different policy options. CONCLUSIONS The ex ante evaluation of nitrogen policy in agriculture shows that the nitrogen surplus can only be reduced by drastic measures. The current and planned policy (BAU scenario) will not be sufficient. Assuming that the application of manure processing will be limited by its high costs, livestock reduction and more efficient use of animal manure become necessary. In the SD scenario this is realized by conversion to organic farming on a large scale. This implies a conversion to extensive (low input/ha) farming, both on arable land and in milk production. Conversion to extensive farming could be realized by subsidies for livestock reduction. This policy instrument would be very expensive for an effective reduction. The effects of nitrogen pollution on vulnerable nature were evaluated for deposition effects, but measures equivalent to the ones implemented in the SD scenario are required as a minimum in order to close the gap to the long-term target. Finally, the manure policy has a positive side effect on the climate change policy through the reduction of nitrous oxide emissions.
Sexual and gender minority populations and suicidal behaviours Despite the sociolegal changes that have signalled greater acceptance of sexual diversity and gender expansive identities in the twenty-first century, worldwide studies highlight the increased risk of mental health problems and suicidality for those with a sexual and/or gender minority status. This chapter discusses this increased risk of suicidality among sexual and gender minority populations across the life course. A recurring theme is that those who are less sure of their status are most at risk. Therefore, the capacity of an individual to overcome internalized minority stress, stigma, guilt, and shame, and to reconcile themselves to, even take pride in, their sexual/gender status, is a significant issue. While sociolegal, cultural, and familial norms may seem beyond the remit of mental health professionals, affirmative and inclusive actions can be taken to support questioning youth, unsure adults, and distrustful older people, and to help improve the resilience and well-being of their LGBTQ+ service-users.
# tests/system/test_integration_acm.py
import time
import unittest
try:
    from urllib.parse import urljoin  # Python 3
except ImportError:
    from urlparse import urljoin  # Python 2
import uuid
import requests
from apmserver import ElasticTest
from beat.beat import INTEGRATION_TESTS
class AgentConfigurationIntegrationTest(ElasticTest):
config_overrides = {
"logging_json": "true",
"kibana_enabled": "true",
"acm_cache_expiration": "1s",
}
def create_service_config(self, settings, name, env=None, _id="new"):
data = {
"service": {"name": name},
"settings": settings
}
if env is not None:
data["service"]["environment"] = env
meth = requests.post if _id == "new" else requests.put
return meth(
urljoin(self.kibana_url, "/api/apm/settings/agent-configuration/{}".format(_id)),
headers={
"Accept": "*/*",
"Content-Type": "application/json",
"kbn-xsrf": "1",
},
json=data,
)
def update_service_config(self, settings, _id, name, env=None):
return self.create_service_config(settings, name, env, _id=_id)
@unittest.skipUnless(INTEGRATION_TESTS, "integration test")
def test_config_requests(self):
service_name = uuid.uuid4().hex
service_env = "production"
bad_service_env = "notreal"
expect_log = []
# missing service.name
r1 = requests.get(self.agent_config_url,
headers={"Content-Type": "application/x-ndjson"},
)
assert r1.status_code == 400, r1.status_code
expect_log.append({
"level": "error",
"message": "error handling request",
"error": "service.name is required",
"response_code": 400,
})
# no configuration for service
r2 = requests.get(self.agent_config_url,
params={"service.name": service_name + "_cache_bust"},
headers={"Content-Type": "application/x-ndjson"},
)
assert r2.status_code == 200, r2.status_code
expect_log.append({
"level": "info",
"message": "handled request",
"response_code": 200,
})
self.assertDictEqual({}, r2.json())
create_config_rsp = self.create_service_config({"transaction_sample_rate": 0.05}, service_name)
create_config_rsp.raise_for_status()
assert create_config_rsp.status_code == 200, create_config_rsp.status_code
create_config_result = create_config_rsp.json()
assert create_config_result["result"] == "created"
# yes configuration for service
r3 = requests.get(self.agent_config_url,
params={"service.name": service_name},
headers={"Content-Type": "application/x-ndjson"})
assert r3.status_code == 200, r3.status_code
# TODO (gr): validate Cache-Control header - https://github.com/elastic/apm-server/issues/2438
expect_log.append({
"level": "info",
"message": "handled request",
"response_code": 200,
})
self.assertDictEqual({"transaction_sample_rate": "0.05"}, r3.json())
# not modified on re-request
r3_again = requests.get(self.agent_config_url,
params={"service.name": service_name},
headers={
"Content-Type": "application/x-ndjson",
"If-None-Match": r3.headers["Etag"],
})
assert r3_again.status_code == 304, r3_again.status_code
expect_log.append({
"level": "info",
"message": "handled request",
"response_code": 304,
})
# no configuration for service+environment
r4 = requests.get(self.agent_config_url,
params={
"service.name": service_name,
"service.environment": bad_service_env,
},
headers={"Content-Type": "application/x-ndjson"})
assert r4.status_code == 200, r4.status_code
expect_log.append({
"level": "info",
"message": "handled request",
"response_code": 200,
})
self.assertDictEqual({}, r4.json())
create_config_with_env_rsp = self.create_service_config(
{"transaction_sample_rate": 0.15}, service_name, env=service_env)
assert create_config_with_env_rsp.status_code == 200, create_config_with_env_rsp.status_code
create_config_with_env_result = create_config_with_env_rsp.json()
assert create_config_with_env_result["result"] == "created"
create_config_with_env_id = create_config_with_env_result["_id"]
# yes configuration for service+environment
r5 = requests.get(self.agent_config_url,
params={
"service.name": service_name,
"service.environment": service_env,
},
headers={"Content-Type": "application/x-ndjson"})
assert r5.status_code == 200, r5.status_code
self.assertDictEqual({"transaction_sample_rate": "0.15"}, r5.json())
expect_log.append({
"level": "info",
"message": "handled request",
"response_code": 200,
})
# not modified on re-request
r5_again = requests.get(self.agent_config_url,
params={
"service.name": service_name,
"service.environment": service_env,
},
headers={
"Content-Type": "application/x-ndjson",
"If-None-Match": r5.headers["Etag"],
})
assert r5_again.status_code == 304, r5_again.status_code
expect_log.append({
"level": "info",
"message": "handled request",
"response_code": 304,
})
updated_config_with_env_rsp = self.update_service_config(
{"transaction_sample_rate": 0.99}, create_config_with_env_id, service_name, env=service_env)
assert updated_config_with_env_rsp.status_code == 200, updated_config_with_env_rsp.status_code
# TODO (gr): remove when cache can be disabled via config
# wait for cache to purge
time.sleep(1.1) # sleep much more than acm_cache_expiration to reduce flakiness
r5_post_update = requests.get(self.agent_config_url,
params={
"service.name": service_name,
"service.environment": service_env,
},
headers={
"Content-Type": "application/x-ndjson",
"If-None-Match": r5.headers["Etag"],
})
assert r5_post_update.status_code == 200, r5_post_update.status_code
self.assertDictEqual({"transaction_sample_rate": "0.99"}, r5_post_update.json())
expect_log.append({
"level": "info",
"message": "handled request",
"response_code": 200,
})
config_request_logs = list(self.logged_requests(url="/config/v1/agents"))
assert len(config_request_logs) == len(expect_log)
for want, got in zip(expect_log, config_request_logs):
self.assertDictContainsSubset(want, got)
class AgentConfigurationKibanaDownIntegrationTest(ElasticTest):
config_overrides = {
"logging_json": "true",
"secret_token": "supersecret",
"kibana_enabled": "true",
"kibana_host": "unreachablehost"
}
@unittest.skipUnless(INTEGRATION_TESTS, "integration test")
def test_config_requests(self):
r1 = requests.get(self.agent_config_url,
headers={
"Content-Type": "application/x-ndjson",
})
assert r1.status_code == 401, r1.status_code
r2 = requests.get(self.agent_config_url,
params={"service.name": "foo"},
headers={
"Content-Type": "application/x-ndjson",
"Authorization": "Bearer " + self.config_overrides["secret_token"],
})
assert r2.status_code == 503, r2.status_code
config_request_logs = list(self.logged_requests(url="/config/v1/agents"))
assert len(config_request_logs) == 2, config_request_logs
self.assertDictContainsSubset({
"level": "error",
"message": "error handling request",
"error": {"error": "invalid token"},
"response_code": 401,
}, config_request_logs[0])
self.assertDictContainsSubset({
"level": "error",
"message": "error handling request",
"response_code": 503,
}, config_request_logs[1])
class AgentConfigurationKibanaDisabledIntegrationTest(ElasticTest):
config_overrides = {
"logging_json": "true",
"kibana_enabled": "false",
}
@unittest.skipUnless(INTEGRATION_TESTS, "integration test")
def test_log_kill_switch_active(self):
r = requests.get(self.agent_config_url,
headers={
"Content-Type": "application/x-ndjson",
})
assert r.status_code == 403, r.status_code
config_request_logs = list(self.logged_requests(url="/config/v1/agents"))
self.assertDictContainsSubset({
"level": "error",
"message": "error handling request",
"error": {"error": "forbidden request: endpoint is disabled"},
"response_code": 403,
}, config_request_logs[0])
# -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'UI-Layouts/qError.ui'
#
# Created: Sat Feb 7 19:49:38 2015
# by: pyside-uic 0.2.15 running on PySide 1.2.1
#
# WARNING! All changes made in this file will be lost!
from PySide import QtCore, QtGui
class Ui_qError(object):
def setupUi(self, qError):
qError.setObjectName("qError")
qError.resize(250, 100)
qError.setMinimumSize(QtCore.QSize(250, 100))
qError.setMaximumSize(QtCore.QSize(250, 100))
self.gridLayout_3 = QtGui.QGridLayout(qError)
self.gridLayout_3.setObjectName("gridLayout_3")
self.gridLayout_2 = QtGui.QGridLayout()
self.gridLayout_2.setSpacing(0)
self.gridLayout_2.setObjectName("gridLayout_2")
self.label_2 = QtGui.QLabel(qError)
font = QtGui.QFont()
font.setPointSize(12)
self.label_2.setFont(font)
self.label_2.setAlignment(QtCore.Qt.AlignCenter)
self.label_2.setObjectName("label_2")
self.gridLayout_2.addWidget(self.label_2, 0, 0, 1, 1)
self.label = QtGui.QLabel(qError)
self.label.setAlignment(QtCore.Qt.AlignLeading|QtCore.Qt.AlignLeft|QtCore.Qt.AlignVCenter)
self.label.setWordWrap(True)
self.label.setIndent(8)
self.label.setObjectName("label")
self.gridLayout_2.addWidget(self.label, 2, 0, 1, 1)
spacerItem = QtGui.QSpacerItem(20, 40, QtGui.QSizePolicy.Minimum, QtGui.QSizePolicy.Expanding)
self.gridLayout_2.addItem(spacerItem, 3, 0, 1, 1)
spacerItem1 = QtGui.QSpacerItem(20, 40, QtGui.QSizePolicy.Minimum, QtGui.QSizePolicy.Expanding)
self.gridLayout_2.addItem(spacerItem1, 1, 0, 1, 1)
self.gridLayout_3.addLayout(self.gridLayout_2, 0, 0, 1, 1)
self.retranslateUi(qError)
QtCore.QMetaObject.connectSlotsByName(qError)
def retranslateUi(self, qError):
qError.setWindowTitle(QtGui.QApplication.translate("qError", "qAndora - ERROR", None, QtGui.QApplication.UnicodeUTF8))
self.label_2.setText(QtGui.QApplication.translate("qError", "Houston we have a problem!!!", None, QtGui.QApplication.UnicodeUTF8))
self.label.setText(QtGui.QApplication.translate("qError", "Most likely you\'ve requested too many playlists. Try back in a little while.", None, QtGui.QApplication.UnicodeUTF8))
On a Theorem of Arrow In a famous article, Arrow studied the allocation of risk-bearing in a competitive economy consisting of I individuals, C commodities and S states. Two schemes were considered. The first scheme was one equipped with S × C complete contingent commodity claims. These claims were assumed to be tradable in the market. The second scheme, in contrast, was equipped with S types of independent money claims and a set of C "spot" markets to be opened upon the occurrence of a particular state. Arrow then arrived at a remarkable conclusion (his Theorem 2) that the two schemes would lead to the same optimal allocation of risk-bearing. The "social significance" of the second scheme, he said, was that "it permits economizing on markets; only S + C markets are needed to achieve the optimal allocation, instead of the SC markets". I wish to argue, however, that the two schemes will not generally lead to the same allocation and consequently that the theorem is false.
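The market-count comparison in the quoted passage can be made concrete; the values S = 10 and C = 20 are chosen purely for illustration.

```latex
% Number of markets under Arrow's two schemes (illustrative values)
\[
\underbrace{S \times C}_{\text{contingent commodity claims}} = 10 \times 20 = 200,
\qquad
\underbrace{S + C}_{\text{securities plus spot markets}} = 10 + 20 = 30 .
\]
```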
// src/juicebox/tools/utils/original/BAMPairIterator.java
/*
* The MIT License (MIT)
*
* Copyright (c) 2011-2020 Broad Institute, Aiden Lab, Rice University, Baylor College of Medicine
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
* THE SOFTWARE.
*/
package juicebox.tools.utils.original;
import htsjdk.samtools.util.CloseableIterator;
import org.broad.igv.sam.Alignment;
import org.broad.igv.sam.ReadMate;
import org.broad.igv.sam.reader.AlignmentReader;
import org.broad.igv.sam.reader.AlignmentReaderFactory;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
/**
* TODO - should this be deleted?
*
* @author <NAME>
* @since 9/24/11
*/
public class BAMPairIterator implements PairIterator {
private AlignmentPair nextPair = null;
private AlignmentPair preNext = null;
private final CloseableIterator<?> iterator;
private final AlignmentReader<?> reader;
// Map of name -> index, built from the reader's sequence dictionary
private Map<String, Integer> chromosomeOrdinals;
public BAMPairIterator(String path) throws IOException {
this.reader = AlignmentReaderFactory.getReader(path, false);
this.iterator = reader.iterator();
// The map was previously never initialized, so advance() dereferenced
// null and could never emit a pair; populate it up front.
this.chromosomeOrdinals = new HashMap<>();
int ordinal = 0;
for (String sequenceName : reader.getSequenceNames()) {
chromosomeOrdinals.put(sequenceName, ordinal++);
}
advance();
}
private void advance() {
while (iterator.hasNext()) {
Alignment alignment = (Alignment) iterator.next();
final ReadMate mate = alignment.getMate();
if (alignment.isPaired() && alignment.isMapped() && alignment.getMappingQuality() > 0 &&
mate != null && mate.isMapped()) {
// Skip "normal" insert sizes
if ((!alignment.getChr().equals(mate.getChr())) || alignment.getInferredInsertSize() > 1000) {
// Each pair is represented twice in the file, keep the record with the "leftmost" coordinate
if ((alignment.getChr().equals(mate.getChr()) && alignment.getStart() < mate.getStart()) ||
(alignment.getChr().compareTo(mate.getChr()) < 0)) {
final String chrom1 = alignment.getChr();
final String chrom2 = mate.getChr();
if (chromosomeOrdinals.containsKey(chrom1) && chromosomeOrdinals.containsKey(chrom2)) {
int chr1 = chromosomeOrdinals.get(chrom1);
int chr2 = chromosomeOrdinals.get(chrom2);
nextPair = new AlignmentPair(chr1, alignment.getStart(), chr2, mate.getStart());
}
return;
}
}
}
}
nextPair = null;
}
public boolean hasNext() {
return preNext != null || nextPair != null;
}
public AlignmentPair next() {
if (preNext == null) {
AlignmentPair p = nextPair;
advance();
return p;
} else {
AlignmentPair p = preNext;
preNext = null;
return p;
}
}
public void remove() {
// Not implemented
}
public void close() {
iterator.close();
try {
reader.close();
} catch (IOException e) {
e.printStackTrace();
}
}
}
Strands of Sand This essay demonstrates that performative writing can incorporate performance research methods such as mystory, psychogeography, and autoethnography into standard prose. The theoretical underpinnings of this essay appear in extensive notes, which may be read alongside or following the text, or perhaps not at all. The narrative alludes to memory, desire, identity, body image, self-concept, and genre in a plot during which the narrator does little more than walk from a beach into a hotel men's room.
// janrtvld/animo-demo
/* eslint-disable no-console */
import type { StepperItem } from '../../../slices/types'
import { motion } from 'framer-motion'
import React from 'react'
export interface Props {
steps: StepperItem[]
stepCount: number
sectionCount: number
colorPrimary: string
colorSecondary: string
}
export const StepperCard: React.FC<Props> = ({ steps, stepCount, sectionCount, colorPrimary, colorSecondary }) => {
const stepViewItems = steps.map((item, idx) => {
// whether this step item belongs to the currently active section
const equalSection = sectionCount === item.section - 1
// section of the previous step (zero-based); defaults to 0 for the first item.
// Note: the previous `steps[idx - 1]?.section - 1 ?? 0` produced NaN for idx === 0,
// because `undefined - 1` is NaN and NaN is not nullish.
const prevStepSection = (steps[idx - 1]?.section ?? 1) - 1
// step count of the previous item, if it belongs to the current section
const prevStepCount = sectionCount === prevStepSection ? steps[idx - 1]?.steps ?? 0 : 0
const on =
// current stepCount is higher than the previous stepCount and
// within the steps for this step item
(stepCount > prevStepCount && stepCount <= item.steps && equalSection) ||
// stepCount is past the steps for this step item
(stepCount > item.steps && equalSection) ||
// sectionCount is past the section of this step item
sectionCount > item.section - 1
return (
<div className="flex flex-row" key={item.id}>
<div className="flex flex-col">
<div
className="rounded-full h-7 w-7 p-3.5 ring-2 border-2 border-white dark:border-animo-darkgrey ring-animo-lightgrey dark:ring-animo-black mx-2 transition transition-all duration-300 "
style={{ background: on ? colorPrimary : colorSecondary }}
/>
{idx !== steps.length - 1 && (
<div className="border-l-2 border-animo-lightgrey dark:border-animo-black border-rounded h-full m-auto" />
)}
</div>
<div className={`flex flex-col mx-2 ${!on && 'opacity-40'}`}>
<h1 className="font-medium">{item.name}</h1>
<div className="my-2 mb-6 text-xs md:text-sm">{item.description}</div>
</div>
</div>
)
})
return (
<motion.div className="flex flex-col bg-white dark:bg-animo-darkgrey rounded-lg p-4 h-auto shadow mb-4">
<div className="flex-1-1 title mb-2">
<h1 className="font-semibold dark:text-white">Follow this path</h1>
<hr className="text-animo-lightgrey" />
</div>
<div className="my-4">{stepViewItems}</div>
</motion.div>
)
}
Michael Higdon
Crewe Alexandra
Higdon progressed from the Crewe Alexandra youth team into their first team in 2003. He was originally used as a midfielder, but due to his height and physical presence, he was converted into a striker by manager Dario Gradi. Higdon's most famous moment for Crewe came against Coventry City in the last game of the 2004–05 season, where Crewe had to win to have a chance of staying up. They went 1–0 down, but Higdon scored the equaliser and Steve Jones went on to score the winner that kept Crewe Alexandra in the Championship.
Falkirk
Higdon joined Scottish Premier League club Falkirk in June 2007 on a free transfer. He had been offered a contract by Crewe, but chose to turn the offer down as he felt he was unlikely to get into the team ahead of Luke Varney and Nicky Maynard. He scored twice on his Falkirk debut against Gretna.
In the final SPL game of 2008–09 for Falkirk, Higdon scored the winning goal to save Falkirk from relegation and thus relegating Inverness Caledonian Thistle.
St Mirren
Higdon joined St Mirren on 24 June 2009 and scored his first goal for the club against Ayr United in the Co-operative Insurance Cup. On 12 December 2009, he scored a spectacular long-range strike against his former club, Falkirk, much to the delight of the home fans.
Higdon found his scoring boots in the 2010–11 season, scoring his first hat-trick in a 3–1 win over Hamilton Academical on 2 April 2011. In the 2010–11 season, Higdon notched a total of 15 goals in 33 appearances for St Mirren.
Motherwell
On 3 June 2011, Higdon left St Mirren to sign for fellow SPL side Motherwell. He made his debut appearance for Motherwell on 23 July, against Inverness Caledonian Thistle at Fir Park. Higdon scored his first goal for Motherwell in a 4–0 League Cup win over Clyde at Broadwood, and his first league goals in a 4–2 win over Dunfermline Athletic. On 22 February 2012, Higdon scored his first Motherwell hat-trick in a 4–3 win over Hibernian at Fir Park, which included a stunning overhead kick.
Higdon got his first goal of the 2012–13 season at Fir Park in an SPL game where he opened the scoring in a 1–1 draw against former club St Mirren with a well placed goal into the corner with his weak foot. Higdon scored frequently during the 2012–13 season, becoming the top Motherwell goalscorer in a season since the Second World War. He ended the season with 26 league goals, including a club record 2 hat-tricks, to finish as the league's top scorer. He was voted PFA Scotland Players' Player of the Year for the 2012–13 season.
On 27 June 2013, Higdon indicated through his agent that he would not return to Motherwell for pre-season training.
NEC Nijmegen
On 8 July 2013, Higdon signed a two-year contract with NEC Nijmegen. NEC technical director Carlos Aalbers cited his target-man qualities and his ability "to score goals at the highest level" as reasons for the signing. Higdon made his competitive debut for NEC on the opening day of the league season, in a 4–1 home loss to Groningen. He scored his first goal for NEC from the penalty spot in a 2–2 home draw against RKC Waalwijk, and netted a second equaliser in a 3–2 home defeat to SBV Vitesse on 29 September. He also scored in a 4–3 defeat at Go Ahead Eagles on 5 October. Higdon hit a double in a 2–1 home win over Heerenveen on 25 October, then fired in an early winner in a 1–0 defeat of FC Eindhoven in the KNVB Cup on 29 October. Higdon then went on a run of eight goals in 11 league games, beginning with goals in 1–1 draws with SBV Vitesse (away) on 26 January and Go Ahead Eagles (at home) on 2 February, followed by the third goal in a 3–1 victory at RKC Waalwijk on 15 February. He next scored four in four league games: a 3–3 draw at PEC Zwolle (1 March), a 3–1 defeat at Roda JC (8 March), and home ties with FC Utrecht (16 March) and Heerenveen (22 March) that both finished 2–2. His final goal of the run came in a 2–1 home defeat to Heracles Almelo (29 March), in which Higdon had opened the scoring.
Sheffield United
On 4 August 2014, Higdon returned to play in England, joining Sheffield United for an undisclosed fee on a two-year deal with the option of a third year. On 9 August 2014, he scored on his début against Bristol City. Higdon netted his second goal for the Blades in a 1–0 win at Leyton Orient to send them through to the fourth round of the League Cup. On 28 October 2014, he netted a late brace in a 2–1 away victory over Milton Keynes Dons, which sent United through to the last eight of the League Cup (the fifth round). In early November, Higdon was ruled out for a month after pulling his hamstring in training. On 1 February 2016, the club cancelled the striker's contract, with Higdon leaving by mutual consent.
Oldham Athletic (loan)
On 23 September 2015, Higdon joined Oldham Athletic on a three-month loan deal. He played 13 games, scoring five goals, before returning to Sheffield United upon the completion of his loan.
/* ex4-4.c */
/* Exercise 4-4. Add commands to print the top element of the stack
 * without popping, to duplicate it, and to swap the top two elements.
 * Add a command to clear the stack.
 *
 * Notes: It may seem more natural to add keywords for the commands,
 * like "top", "dup" and "swap", but that involves a bunch of extra
 * processing to parse those keywords. It is much easier to add a few
 * more single-character operators, such as '@', '"' and '$', as the
 * framework is already set up for that and it's just a case of adding
 * more cases to the switch.
 *
 * We already have a command for "print the top element of the stack",
 * though - '\n' - so there is no need to create another one. */
#include <stdio.h>
#include <stdlib.h>
#include <ctype.h>
#define MAXOP 100
#define NUMBER '0'
#define CLEAR '#'
#define DUP '\"'
#define SWAP '$'
int getop(char s[]);
int getch(void);
void ungetch(char c);
void push(double f);
double pop(void);
double peek(void);
void clear(void);
int
main()
{
    int type;
    double op1;
    double op2;
    char s[MAXOP];

    while ((type = getop(s)) != EOF) {
        switch (type) {
        case NUMBER:
            push(atof(s));
            break;
        case '+':
            push(pop() + pop());
            break;
        case '*':
            push(pop() * pop());
            break;
        case '-':
            op2 = pop();
            push(pop() - op2);
            break;
        case '/':
            op2 = pop();
            if (op2 != 0.0)
                push(pop() / op2);
            else
                printf("error: divide by zero\n");
            break;
        case '%':
            op2 = pop();
            if ((int)op2 != 0)
                push((int)pop() % (int)op2);
            else
                printf("error: modulo by zero\n");
            break;
        case DUP:
            push(peek());
            break;
        case SWAP:
            op1 = pop();
            op2 = pop();
            push(op1);
            push(op2);
            break;
        case CLEAR:
            clear();
            break;
        case '\n':
            printf("\t%.8g\n", peek());
            break;
        default:
            printf("error: unknown command %s\n", s);
            break;
        }
    }
    return 0;
}
int
getop(char s[])
{
    int i, c;

    while ((s[0] = c = getch()) == ' ' || c == '\t')
        ;
    s[1] = '\0';
    if (!isdigit(c) && c != '.' && c != '-')
        return c;    /* not a number */
    i = 0;
    if (c == '-') {
        c = getch();
        if (!isdigit(c) && c != '.') {
            ungetch(c);
            return '-';    /* lone minus: the subtraction operator */
        }
        s[++i] = c;
    }
    if (isdigit(c))    /* collect integer part */
        while (isdigit(s[++i] = c = getch()))
            ;
    if (c == '.')      /* collect fraction part */
        while (isdigit(s[++i] = c = getch()))
            ;
    s[i] = '\0';
    if (c != EOF)
        ungetch(c);
    return NUMBER;
}
#define BUFSIZE 100

char buf[BUFSIZE];
int bufp = 0;

int
getch(void)
{
    return (bufp > 0) ? buf[--bufp] : getchar();
}

void
ungetch(char c)
{
    if (bufp >= BUFSIZE)
        printf("ungetch: too many characters\n");
    else
        buf[bufp++] = c;
}
#define MAXVAL 100

int sp = 0;
double val[MAXVAL];

void
push(double f)
{
    if (sp < MAXVAL)
        val[sp++] = f;
    else
        printf("error: stack full, can't push %g\n", f);
}

double
pop(void)
{
    if (sp > 0)
        return val[--sp];
    else {
        printf("error: stack empty\n");
        return 0.0;
    }
}

double
peek(void)
{
    if (sp > 0)
        return val[sp-1];
    else {
        printf("error: stack empty\n");
        return 0.0;
    }
}

void
clear(void)
{
    sp = 0;
}
Q:
How to indicate sortable elements
I have a list of todo items with 6 properties, of which 4 are sortable: the type (icon), the tags, the stars and the date can be sorted by clicking on them. Name and description are not sortable (no added value). I'm wondering whether, in the look below, it's sufficiently clear to users that they can click, for example on the icon, to sort the elements according to the icon.
The classic way to solve this would be to include a 'sort bar' with carets to indicate ascending/descending sorting.
But I don't like this. It just looks like unnecessary clutter to me, makes things messier and breaks the visual unity, while the functionality can be achieved without it. I have a feeling people would like to see this sort bar included, but I'd like some suggestions on how to make it clear to users that elements are sortable by clicking on them, without the sort bar. Any other recommendations are also welcome. Thank you.
A:
Personally, I think you should stick with what users are most used to, which is the 'sort bar' you have in your second picture.
Alternatively, you can do what sites like Amazon do and just provide the sorting options in a dropdown menu, like so:
These two methods are the most common, and users are likely to be used to them. Deviating too much from them may cause confusion, even if it slightly reduces clutter.
Also, clicking on the icons to sort may confuse users: clicking an icon is expected to open the item itself, as in file managers, image viewers, etc.
A:
I completely agree with the reasoning in Oztaco's answer about expected behaviour, and will add an alternative solution:
You mentioned you want to avoid clutter - you could add a 'sorting' icon such as the one below, which could open a dropdown or expand to show sorting options.
package main

import (
    "bytes"
    "fmt"
    "net"
    "strconv"

    "github.com/aergoio/aergo/config"
    "github.com/aergoio/aergo/consensus/chain"
    "github.com/aergoio/aergo/contract/system"
    "github.com/aergoio/aergo/pkg/component"
    "github.com/gin-gonic/gin"
)
type dumper struct {
    *component.ComponentHub
    cfg *config.Config
}

// NewDumper returns a new dumper object.
func NewDumper(cfg *config.Config, hub *component.ComponentHub) *dumper {
    return &dumper{
        ComponentHub: hub,
        cfg:          cfg,
    }
}
func (dmp *dumper) Start() {
    go dmp.run()
}

func (dmp *dumper) run() {
    hostPort := func(port int) string {
        // Allow access to the debug dump only from the local machine.
        host := "127.0.0.1"
        if port <= 0 {
            port = config.GetDefaultDumpPort()
        }
        return net.JoinHostPort(host, fmt.Sprintf("%d", port))
    }

    r := gin.Default()

    ///////////////////////////////////////////////////////////////////////////
    // Dump Voting Power Rankers
    ///////////////////////////////////////////////////////////////////////////

    // Dump Handler Generator
    dumpFn := func(topN int) func(c *gin.Context) {
        return func(c *gin.Context) {
            var buf bytes.Buffer

            dumpRankers := func() error {
                chain.Lock()
                defer chain.Unlock()
                return system.DumpVotingPowerRankers(&buf, topN)
            }

            if err := dumpRankers(); err != nil {
                c.JSON(400, gin.H{
                    "message": err.Error(),
                })
                return
            }

            c.Header("Content-Type", "application/json; charset=utf-8")
            c.String(200, buf.String())
        }
    }

    // Dump all rankers.
    r.GET("/debug/voting-power/rankers", dumpFn(0))
    // Dump the top n rankers.
    r.GET("/debug/voting-power/rankers/:topn", func(c *gin.Context) {
        topN := 0
        if n, err := strconv.Atoi(c.Params.ByName("topn")); err == nil && n > 0 {
            topN = n
        }
        dumpFn(topN)(c)
    })

    if err := r.Run(hostPort(dmp.cfg.DumpPort)); err != nil {
        svrlog.Fatal().Err(err).Msg("failed to start dumper")
    }
}
# Dice game: count which player rolls the higher number more often.
rounds = int(input())
score = 0
for _ in range(rounds):
    a, b = map(int, input().split())
    if a > b:
        score += 1
    elif a < b:
        score -= 1

if score > 0:
    print("Mishka")
elif score < 0:
    print("Chris")
else:
    print("Friendship is magic!^^")
/**************************************************************************
Copyright (c) 2016, Intel Corporation
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
* Neither the name of Intel Corporation nor the names of its contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
***************************************************************************/
#ifndef EEPROM_CONFIG_H_
#define EEPROM_CONFIG_H_
#include "E1000.h"
// EEPROM power management bit definitions
#define E1000_INIT_CONTROL_WORD1 0x0A
#define E1000_PME_ENABLE_BIT 0x0008
#define E1000_INIT_CONTROL_WORD2 0x0F
#define E1000_APM_PME_ENABLE_BIT 0x8000
#define E1000_LEGACY_APM_ENABLE_BIT 0x0004
#define E1000_LEGACY_FLASH_DISABLE_BIT 0x0100
#define PCH_APM_INIT_CONTROL_WORD 10
#define PCH_APM_ENABLE_BIT 0x0004
#define PCH2_APM_INIT_CONTROL_WORD 0x1A
#define PCH2_APM_ENABLE_BIT 0x0001
#define IOV_CONTROL_WORD_OFFSET 0x25
#define IOV_CONTROL_WORD_IOVENABLE_SHIFT 0
#define IOV_CONTROL_WORD_IOVENABLE_MASK 0x0001
#define IOV_CONTROL_WORD_MAXVFS_SHIFT 5
#define IOV_CONTROL_WORD_MAXVFS_MASK 0x00E0
#define IOV_CONTROL_WORD_MAXVFS_MAX 7
#define E1000_INIT_CONTROL_WORD3 0x24
#define E1000_INIT_CONTROL_WORD3_LANB 0x14
#define E1000_FLASH_DISABLE_BIT 0x0800
#define E1000_FLASH_DISABLE_BIT_ZOAR 0x0080
#define E1000_APM_ENABLE_BIT 0x0400
#define E1000_FLASH_SIZE_WORD_HARTW 0xF
#define E1000_NVM_TYPE_BIT_HARTW 0x1000
#define E1000_HARTW_FLASH_LAN_ADDRESS 0x21
#define E1000_HARTW_EXP_ROM_DISABLE 0x80 /* bit 7 */
#define LAN1_BASE_ADDRESS_82580 0x80
#define LAN2_BASE_ADDRESS_82580 0xC0
#define LAN3_BASE_ADDRESS_82580 0x100
/** Gets LAN speed setting for port
@param[in] UndiPrivateData Pointer to adapter structure
@retval LINK_SPEED_AUTO_NEG We do not support speed settings
@retval LINK_SPEED_AUTO_NEG Default Auto-Negotiation settings
@retval LINK_SPEED_100FULL LAN speed 100 MBit Full duplex
@retval LINK_SPEED_10FULL LAN speed 10 MBit Full duplex
@retval LINK_SPEED_100HALF LAN speed 100 MBit Half duplex
@retval LINK_SPEED_10HALF LAN speed 10 MBit Half duplex
**/
UINTN
EepromGetLanSpeedStatus (
UNDI_PRIVATE_DATA *UndiPrivateData
);
/** Sets LAN speed setting for port
@param[in] UndiPrivateData Driver private data structure
@param[in] LanSpeed Desired LAN speed
@retval EFI_SUCCESS LAN speed set successfully
**/
EFI_STATUS
EepromSetLanSpeed (
UNDI_PRIVATE_DATA *UndiPrivateData,
UINT8 LanSpeed
);
/** Sets WOL (enable/disable) setting
@param[in] UndiPrivateData Driver private data structure
@param[in] Enable Enable/Disable boolean.
@return WOL enabled/disabled according to Enable value.
**/
VOID
EepromSetWol (
UNDI_PRIVATE_DATA *UndiPrivateData,
IN UINT8 Enable
);
/** Sets the override MAC address back to FF-FF-FF-FF-FF-FF to disable it,
  or (in the 82580-like case) restores the factory default MAC address.
@param[in] UndiPrivateData Driver private data structure
@retval EFI_UNSUPPORTED Invalid offset for alternate MAC address
@retval EFI_SUCCESS Default MAC address set successfully
**/
EFI_STATUS
EepromMacAddressDefault (
IN UNDI_PRIVATE_DATA *UndiPrivateData
);
/** Programs the port with an alternate MAC address, and (in 82580-like case)
backs up the factory default MAC address.
@param[in] UndiPrivateData Pointer to driver private data structure
@param[in] MacAddress Value to set the MAC address to.
@retval EFI_UNSUPPORTED Invalid offset for alternate MAC address
@retval EFI_SUCCESS New MAC address set successfully
**/
EFI_STATUS
EepromMacAddressSet (
IN UNDI_PRIVATE_DATA *UndiPrivateData,
IN UINT16 * MacAddress
);
/** Reads the currently assigned MAC address and factory default MAC address.
@param[in] UndiPrivateData Driver private data structure
@param[out] DefaultMacAddress Factory default MAC address of the adapter
@param[out] AssignedMacAddress CLP Assigned MAC address of the adapter,
or the factory MAC address if an alternate MAC
address has not been assigned.
@retval EFI_SUCCESS MAC addresses successfully read.
**/
EFI_STATUS
EepromMacAddressGet (
IN UNDI_PRIVATE_DATA *UndiPrivateData,
OUT UINT16 * DefaultMacAddress,
OUT UINT16 * AssignedMacAddress
);
#define EEPROM_CAPABILITIES_WORD 0x33
#define EEPROM_CAPABILITIES_SIG 0x4000
/** Returns EEPROM capabilities word (0x33) for current adapter
@param[in] UndiPrivateData Points to the driver instance private data
@param[out] CapabilitiesWord EEPROM capabilities word (0x33) for current adapter
@retval EFI_SUCCESS Function completed successfully,
@retval !EFI_SUCCESS Failed to read EEPROM capabilities word
**/
EFI_STATUS
EepromGetCapabilitiesWord (
IN UNDI_PRIVATE_DATA *UndiPrivateData,
OUT UINT16 * CapabilitiesWord
);
/** Checks if it is LOM device
@param[in] UndiPrivateData Points to the driver instance private data
@retval TRUE It is LOM device
@retval FALSE It is not LOM device
@retval FALSE Failed to read NVM word
**/
BOOLEAN
EepromIsLomDevice (
IN UNDI_PRIVATE_DATA *UndiPrivateData
);
/** Updates NVM checksum
@param[in] UndiPrivateData Pointer to driver private data structure
@retval EFI_SUCCESS Checksum successfully updated
@retval EFI_DEVICE_ERROR Failed to update NVM checksum
**/
EFI_STATUS
EepromUpdateChecksum (
IN UNDI_PRIVATE_DATA *UndiPrivateData
);
#endif /* EEPROM_CONFIG_H_ */
Do all aphids benefit from climate warming? An effect of temperature increase on a native species of the temperate climatic zone, Cinara juniperi. Global warming has the potential to affect many animal species, in particular temperature-dependent insects, whose short generation times and high reproductive rates facilitate adaptations to long-term climatic fluctuations. Aphids are model species for studying the association of insect biology with large-scale climate fluctuations because they multiply only within a certain range of temperatures and their rate of development directly depends on temperature. Here, we investigate the effect of climate warming on the phenology and voltinism of the juniper aphid (Cinara juniperi De Geer 1773), a species native to the temperate climate zone in Poland. We also experimentally test for the temperature optimum of the study species. Our study demonstrated that environmental conditions significantly affected the phenology of the juniper aphid. The timing of larval emergence depended on mean temperatures in March, and in warmer years larvae appeared earlier. The emergence of the sexual generation was related to mean temperatures in August, and higher temperatures resulted in later sexual reproduction. Despite the elongation of the entire life cycle by almost 3 months, we observed only one additional generation in warmer years. A possible explanation for this pattern is that the increase in temperature recorded in recent years went beyond the temperature optimum of the study species. Our chamber experiment supports this assumption: juniper aphids developed faster and reproduced more effectively at 20 °C than at 25 °C.
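The direct dependence of insect development rate on temperature noted above is often summarized with a degree-day model. The sketch below is a generic illustration of why warming past a species' optimum yields diminishing returns in voltinism; the thresholds, the 20 °C cap, and the degree-days-per-generation figure are invented for illustration and are not measurements from this study:

```python
def degree_days(daily_mean_temps, lower_threshold=5.0, upper_optimum=20.0):
    """Accumulate growing degree-days, capping each day's contribution at
    an assumed upper temperature optimum (development does not keep
    accelerating once the optimum is exceeded)."""
    total = 0.0
    for t in daily_mean_temps:
        effective = min(t, upper_optimum)
        if effective > lower_threshold:
            total += effective - lower_threshold
    return total

def generations(daily_mean_temps, dd_per_generation=150.0, **thresholds):
    """Rough estimate of how many generations a season's heat budget allows."""
    return int(degree_days(daily_mean_temps, **thresholds) // dd_per_generation)

# Once daily means pass the assumed optimum, extra warmth adds little:
# a much hotter season yields only one additional generation.
print(generations([19.0] * 120), generations([27.0] * 120))  # 11 12
```

Under this (hypothetical) parameterization, an 8 °C warmer season adds only one generation, qualitatively matching the single extra generation the study reports.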
Modelling of Fuel Flow in Climb Phase Through Multiple Linear Regression Based on the Data Collected by Quick Access Recorder (Received: 30 March 2019; Accepted: 21 June 2019). The fuel flow is a key indicator of the performance of an aircraft engine: it helps to identify performance degradation and failure of the engine. This calls for an aircraft engine model that can predict the fuel flow accurately throughout the flight, especially the climb phase. This paper performs stepwise linear regression on the data collected by a quick access recorder (QAR) and creates a model of the fuel flow of a Boeing 737-700 in the climb phase. Firstly, the possible influencing factors of fuel flow were screened based on scatterplots and Pearson correlation coefficients (PCCs). Next, the selected factors were further modified and screened through similarity correction and power correction. On this basis, the fuel flow model for the climb phase was established through stepwise linear regression and corrected in the light of the tolerance and variance inflation factor (VIF) of each variable. The prediction results of the final model were broadly in line with the actual QAR data.
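The pipeline described in the abstract (correlation screening, forward stepwise selection, then a tolerance/VIF collinearity check) can be sketched as follows. This is a generic illustration, not the paper's code: the function names, the thresholds (`min_gain`, `max_vif`) and the toy predictor names are assumptions, and the paper's similarity and power corrections are omitted:

```python
import numpy as np

def fit_ols(X, y):
    """Least-squares fit with an intercept column; returns coefficients."""
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def r_squared(X, y):
    coef = fit_ols(X, y)
    pred = np.column_stack([np.ones(len(y)), X]) @ coef
    return 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)

def vif(X):
    """Variance inflation factor of each column: 1 / (1 - R^2) of that
    column regressed on the remaining columns."""
    return [1.0 / (1.0 - r_squared(np.delete(X, j, axis=1), X[:, j]))
            for j in range(X.shape[1])]

def forward_stepwise(X, y, names, min_gain=0.01, max_vif=10.0):
    """Greedy forward selection: repeatedly add the predictor that most
    improves R^2; stop when the gain is negligible, and reject predictors
    that push any VIF past the collinearity threshold."""
    selected, best_r2 = [], 0.0
    remaining = list(range(X.shape[1]))
    while remaining:
        r2, j = max((r_squared(X[:, selected + [j]], y), j) for j in remaining)
        if r2 - best_r2 < min_gain:
            break
        trial = selected + [j]
        remaining.remove(j)
        if len(trial) > 1 and max(vif(X[:, trial])) > max_vif:
            continue  # too collinear with what is already in the model
        selected, best_r2 = trial, r2
    return [names[j] for j in selected], best_r2

# Toy demo: the response depends on two of three candidate predictors.
rng = np.random.default_rng(0)
alt, spd, extra = rng.normal(size=(3, 200))
y = 3.0 * alt - 2.0 * spd + 0.1 * rng.normal(size=200)
X = np.column_stack([alt, spd, extra])
print(forward_stepwise(X, y, ["altitude", "speed", "extra"]))
```

In the demo, the uninformative `extra` column is dropped because adding it gains almost no R^2, mirroring the screening role the PCC and VIF steps play in the paper.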
import { CreepAgent } from "agents/CreepAgent";
import { SpawnAgent } from "agents/SpawnAgent";
import { TowerAgent } from "agents/TowerAgent";
import { BaseObjective, IdleObjective } from "objectives/BaseObjective";
import { ContinuousHarvesting } from "objectives/ContinuousHarvesting";
import { DefendColony } from "objectives/DefendColony";
import { RefillContainers, RefillSpawnStorage } from "objectives/EnergyHauling";
import { MaintainBuildings } from "objectives/MaintainBuildings";
import { ReachRCL2 } from "objectives/ReachRCL2";
import { ReachRCL3 } from "objectives/ReachRCL3";
import * as cpuUsageEstimator from "utils/cpuUsageEstimator";
import { COLORS, getLogger } from "utils/Logger";
import { RoomPlanner } from "./RoomPlanner";
const logger = getLogger("colony.Battalion", COLORS.colony);
/**
 * Controls a group of agents collaborating to achieve an objective.
 */
export class Battalion {
    public creeps: CreepAgent[] = [];
    public spawn: SpawnAgent;
    public towers: TowerAgent[] = [];
    public roomPlanner: RoomPlanner;
    public objective: BaseObjective;
    public memory: BattalionMemory;
    public name: keyof ColonyBattalionsMemory;

    constructor(name: keyof ColonyBattalionsMemory, spawn: SpawnAgent, roomPlanner: RoomPlanner) {
        this.spawn = spawn;
        this.roomPlanner = roomPlanner;
        this.name = name;

        const roomName = roomPlanner.room.name;
        const roomBattalionMemory = Memory.battalions[roomName] || {};
        this.memory = roomBattalionMemory[name] || {
            objective: {
                name: "IDLE",
            },
        };
        this.objective = this.reloadObjective();
    }

    private reloadObjective(): BaseObjective {
        const objectiveMemory = this.memory.objective;
        switch (objectiveMemory.name) {
            case "REACH_RCL2":
                return new ReachRCL2(this.name);
            case "REACH_RCL3":
                return new ReachRCL3(this.name);
            case "CONTINUOUS_HARVESTING":
                const chMem = objectiveMemory as ContinuousHarvestingMemory;
                return new ContinuousHarvesting(this.name, chMem.miningSpotsPerSource);
            case "REFILL_CONTAINERS":
                return new RefillContainers(this.name);
            case "REFILL_SPAWN_STORAGE":
                return new RefillSpawnStorage(this.name);
            case "IDLE":
                return new IdleObjective(this.name);
            case "MAINTAIN_BUILDINGS":
                return new MaintainBuildings(this.name);
            case "DEFEND_COLONY":
                const defMem = objectiveMemory as DefendColonyMemory;
                return new DefendColony(this.name, defMem.attackLaunched);
        }
    }

    /**
     * Reload a creep agent and associate it to the current colony
     * @param name name of the creep to reload
     */
    public reloadCreep(name: string) {
        try {
            const agent = new CreepAgent(name);
            this.creeps.push(agent);
        } catch (err) {
            logger.warning(`Unable to reload ${name}: ${err} - discarding from battalion.`);
        }
    }

    public assignTower(tower: TowerAgent) {
        tower.memory.battalion = this.name;
        this.towers.push(tower);
    }

    /**
     * Execute the objective and all creep agents that are members of this battalion.
     * Spawn and room agents are not executed, as multiple battalions may reference the same spawn and room.
     */
    public execute() {
        cpuUsageEstimator.notifyStart(`battalions.${this.roomPlanner.room.name}.${this.name}`);
        cpuUsageEstimator.notifyStart(`objective.${this.objective.name}`);
        this.objective.execute(this.creeps, this.roomPlanner, this.spawn, this.towers);
        cpuUsageEstimator.notifyComplete();

        for (const creep of this.creeps) {
            creep.execute();
        }
        for (const tower of this.towers) {
            tower.execute();
        }

        this.requestNewCreepsIfNecessary();
        cpuUsageEstimator.notifyComplete();
    }

    public requestNewCreepsIfNecessary() {
        const pendingSpawnRequests = this.spawn.pendingSpawnRequests(this.name);
        const pendingCount = pendingSpawnRequests.reduce((acc, r) => {
            acc[r.creepProfile] = (acc[r.creepProfile] || 0) + r.count;
            return acc;
        }, {} as { [key in CREEP_PROFILE]: number });
        const creepCount = this.creeps.reduce((acc, r) => {
            acc[r.memory.profile] = (acc[r.memory.profile] || 0) + 1;
            return acc;
        }, {} as { [key in CREEP_PROFILE]: number });
        const desired = this.objective.estimateRequiredWorkForce(this.roomPlanner);
        const desiredCount = desired.reduce((acc, r) => {
            acc[r.creepProfile] = (acc[r.creepProfile] || 0) + r.count;
            return acc;
        }, {} as { [key in CREEP_PROFILE]: number });

        const profiles = Object.keys(desiredCount) as CREEP_PROFILE[];
        for (const profile of profiles) {
            const profilePendingCount = pendingCount[profile] || 0;
            const profileCreepCount = creepCount[profile] || 0;
            const profileDesiredCount = desiredCount[profile] || 0;
            if (profilePendingCount + profileCreepCount < profileDesiredCount) {
                const requestCount = profileDesiredCount - profilePendingCount - profileCreepCount;
                logger.info(
                    `${this}: requesting spawn of ${requestCount} creeps ` +
                        `(desired: ${profileDesiredCount}, existing: ${profileCreepCount}, pending: ${profilePendingCount})`,
                );
                this.spawn.requestSpawn(this.name, requestCount, profile);
            }
        }
    }

    public assignObjective(objective: BaseObjective) {
        this.objective = objective;
    }

    public save() {
        const roomName = this.roomPlanner.room.name;
        logger.debug(`Saving ${this.objective}`);
        Memory.battalions = Memory.battalions || {};
        Memory.battalions[roomName] = Memory.battalions[roomName] || {};
        Memory.battalions[roomName][this.name] = {
            objective: this.objective.save(),
        };

        for (const creep of this.creeps) {
            logger.debug(`Saving ${creep}`);
            creep.save();
        }
        for (const tower of this.towers) {
            logger.debug(`Saving ${tower}`);
            tower.save();
        }
    }

    public toString() {
        return `Battalion ${this.name}`;
    }
}
July 13, 2012. He had been accused of hurling racial insults at an opponent, and the case has been front-page news in Britain. But a magistrate said today that there was doubt about whether Terry's words were meant as an insult.
Fabrication and analysis of metallic nanoslit structures: advancements in the nanomasking method
Abstract. This work advances the fabrication capabilities of a two-step lithography technique known as nanomasking for patterning metallic nanoslit (nanogap) structures with sub-10-nm resolution, below the limit of the lithography tools used during the process. Control over structure and slit geometry is a key component of the reported method, exhibiting the control of lithographic methods while adding the potential for mass-production-scale patterning speed during the secondary step of the process. The unique process allows for fabrication of interesting geometric combinations, such as dual-width gratings, that are otherwise difficult to create with the nanoscale resolution required for applications such as nanoscale optics (plasmonics) and electronics. The method is advanced by introducing a bimetallic fabrication design concept and by demonstrating blanket nanomasking. Here, the need for the secondary lithography step is eliminated, improving the mass-production capabilities of the technique. Analysis of the gap width and edge roughness is reported, with the average slit width measured at 7.4 ± 2.2 nm. It was found that while no long-range correlation exists between the roughness of the two gap edges, there are ranges on the order of tens of nanometers over which the slit edge roughness is correlated or anticorrelated across the gap. This work helps quantify the nanomasking process, which aids future fabrications and leads toward the development of more accurate computational models for the optical and electrical properties of fabricated devices.
Introduction
The ability to create metallic nanostructures, whether via top-down or bottom-up methods, has become increasingly common, if not necessary, for many areas of modern technological development.
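The sidewall-correlation result summarized in the abstract can be illustrated with a windowed Pearson correlation of two digitized gap edges. The code below is a sketch on synthetic profiles (the window length, profile length, and roughness scale are arbitrary assumptions), not the authors' analysis pipeline:

```python
import numpy as np

def windowed_correlation(edge_a, edge_b, window=30):
    """Pearson correlation between two digitized edge-roughness profiles
    inside a sliding window. Short stretches of correlated (r near +1) or
    anticorrelated (r near -1) sidewall roughness show up locally even
    when the correlation over the full slit length is near zero."""
    edge_a = np.asarray(edge_a, dtype=float)
    edge_b = np.asarray(edge_b, dtype=float)
    return np.array([
        np.corrcoef(edge_a[i:i + window], edge_b[i:i + window])[0, 1]
        for i in range(len(edge_a) - window + 1)
    ])

# Synthetic sidewalls: independent roughness plus one short shared bump,
# mimicking a locally correlated stretch of a nanoslit's two edges.
rng = np.random.default_rng(1)
n = 300
shared = np.zeros(n)
shared[100:140] = np.sin(np.linspace(0.0, np.pi, 40))
edge_a = rng.normal(scale=0.3, size=n) + shared
edge_b = rng.normal(scale=0.3, size=n) + shared
r_local = windowed_correlation(edge_a, edge_b)
```

On such profiles the full-length correlation stays modest while the window containing the shared bump shows a markedly higher local correlation, which is the qualitative behavior the paper reports.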
Advances in self-assembling chemical processes, lithographic methods involving accelerating ions and particles, and high-precision deposition and etch techniques have enabled much of this fabrication progress. These advances will enable new and exciting technologies that take advantage of properties such as increased surface area at the nanoscale, high-density arrays of structures, and even Angstrom-scale features or gaps (also referred to as slits) among structures. High-precision nanofabrication may produce a wide range of structures that are beneficial for their chemical, mechanical, electrical, optical, or combined properties across nearly endless applications. Plasmonic nanostructures, which interact with light in unique ways depending on the surrounding materials and device geometry, have been applied to improved electronics performance and enhancement of optical signals. In addition to nanostructures producing electric field enhancement, slits among structures (specifically those approaching 10 nm and smaller) 19 have been shown to further increase the local field strength. One limitation of using nanoslits for this type of plasmonic enhancement in various applications is the difficulty of reliable fabrication at the sub-10-nm scale, which is below the resolution limit of most lithography systems, and over large wafer-scale areas. 25 Existing gap-fabrication methods, such as focused ion beam (FIB) milling, electromigration, mechanical break junctions, 34 or photo/electron/ion beam lithographies, 35,36 have the limitation that they must create gaps serially. It is crucial to consider the time of fabrication for a large area of nanostructures or slits when scaling up the technology for applications beyond pure basic research. A variety of self-assembly techniques have emerged as promising candidates for large-area fabrication of high densities of nanogaps, some even with control over gap sizes.
The ideal fabrication technique would perfectly couple the dimensional control of methods such as lithography with the rapid and large-scale, simultaneous creation of the desired nanostructures and nanoslits. Various techniques make use of a sacrificial layer or layers of material during top-down lithographic processes to improve the resolution or other properties of the fabrication. Even at its advent two hundred years ago, the basic concept of lithography (from the Greek lithos, "stone" and graphein, "to write") relied on a sacrificial resist layer. 11 Resists sensitive to exposure via an electron beam allow the user to take advantage of the smaller wavelength, and therefore the higher diffraction-limited resolution, of accelerated electrons over the ultraviolet light used with photosensitive resists. Sacrificial layers made from metals, semiconductor oxides, or other materials used during top-down lithography processes may provide additional benefits due to specific material properties not present in lithographic resists, which are often organic compounds. One benefit of using different materials is the ability to take advantage of different etchants or solvents that react solely with the desired layer material. The sacrificial layer may simply protect another surface from contamination, 45 fogging, 46 or other nanoscale defects, or it may be implemented to remove these defects from the important surface. 47 Some imprint-type techniques rely on a sacrificial material as a carrier from which the desired structures are removed after processing, or this layer may provide structure to a specific geometric design during fabrication. The addition of a sacrificial aluminum layer prior to FIB milling has been demonstrated to improve the resolution and edge smoothness in a process called metal-assisted FIB. 51 Here, the sacrificial metal layer works to protect the working material from ion-induced damage and redeposition of milled working material.
This technique was used in the production of improved templates for nanoimprint lithography and two-dimensional plasmonic open-ring nanostructure arrays with significantly improved absorption due to increased structural integrity of the patterns. While this and other sacrificial masking techniques provide the mentioned benefits, they do not necessarily directly improve the resolution of the corresponding nanofabrication process. This work describes the nanomasking technique, which is an advanced fabrication method that takes advantage of a unique lithography and deposition process to create nanoslits adjacent to metallic nanostructures to produce nanoscale devices. As we have outlined previously, preliminary results have demonstrated the ability to simultaneously fabricate sub-10-nm slits with a density of over 500 million per square cm. 52 Nanomasking has also been used to simultaneously create gaps in two dimensions, which is not possible with some serial techniques. An additional benefit of nanomasking is that it has been shown to be capable of simultaneously fabricating both sub-10-nm slits and adjacent sub-20-nm metallic structures. 52 The nanomasking technique overcomes the serial-fabrication limitation via a unique multistep lithography process to simultaneously produce many sub-10-nm gaps across a surface. The geometrical control of this technique has been demonstrated as well, in which gaps can be created adjacent to sublithography-limited structures with control over their shapes and sizes. The increased geometric control over nanoscale structures and slits has resulted in the patterning of metallic devices that can be applied to developments in fields such as plasmonics, nanoscale and nonlinear optics, photonic crystals, waveguides, electronics, and microfluidics.
This work improves the nanomasking technique beyond the previous results by introducing a bimetallic nanoslit design and demonstrates blanket nanomasking, which extends the original technique by eliminating one of the lithography steps. Here, we also carefully analyze and quantify the gap wall roughness and sidewall correlation, which reveal important insights for device design and applications.
Nanomasking Fabrication
In Fig. 1, we introduce a capability of the nanomasking technique. Figure 1(a) shows the standard technique, while Fig. 1(b) shows an additional capability of this technique: bimetallic nanoslit fabrication. This advanced method of fabricating two different metals with nanoslit spacing is an innovative concept and has potential for interesting optoelectronic applications. 63 The standard nanomasking process utilizes a two-step lithography process to obtain sub-10-nm gaps with a high degree of control over structure geometries. Figure 1 outlines the key steps in the process for creating (a) an Au grid structure and (b) concentric circular patterns utilizing two different evaporation materials. 63 After the primary structures have been patterned via standard electron beam lithography (EBL) or photolithography, resist development, and evaporation, the first step of the nanomasking process takes place as shown in Fig. 1(i). During the evaporation step of (i), another material layer is evaporated atop the desired material for the primary structures; this layer is, crucially, a metal or other material that will undergo oxidation and expansion under ambient or controlled conditions. In our work, and therefore in the sketches shown in Fig. 1, a Cr layer is used to create this nanoscale mask layer upon oxidation. The overhanging oxidized layer acts to shield the substrate from further material evaporation during the second lithography, development, and evaporation step, as shown in Fig. 1(ii).
The second important criterion for choosing a nanomask layer material is that it must be etchable without damaging the substrate, any necessary adhesion layer, or the other desired primary and secondary evaporation materials. Thus, upon etching the nanomask layer, the resulting patterns consist of the primary and secondary materials separated by nanogaps where they overlap as designed, with the nanogap size being tunable by controlling the oxidation of the mask material. Examples of final structures are shown in Fig. 1(iii). Not shown in Fig. 1 but important for fabrication of Au structures on a Si/SiO2 substrate is a Ti adhesion layer. In the work described here, this was typically 1.0 to 1.5 nm of Ti, with 15 nm of Au as both the primary and secondary evaporation material. Previous work studying the effects of a Ti adhesion layer on the plasmonic response of an Au nanostructure to incident light illumination has shown that the optical enhancement produced by the structures decreases with increasing Ti layer thickness. Therefore, for optical applications, ideally no adhesion layer would be used, but in the case in which it is required, the smallest possible adhesion layer should be used to preserve the optical characteristics of the patterned structures. The thickness of the Au layer has also been found to significantly affect the optical response of a patterned nanostructure, with and without nanoslits. Beyond consideration of the final application, however, these thicknesses should be chosen such that the nanomasking effect can still pattern the desired nanostructures/slits. The thickness of the Cr layer is critical if tight control over the gap width is desired, as the thickness affects the lateral expansion of Cr oxidation that occurs upon exposure to oxygen, affecting the gap width as reported in work by Fursina et al. 74 Nanomasking has successfully demonstrated the fabrication of Au grid structures on a Si/SiO2 substrate as designed and as outlined in Fig.
1. Scanning electron microscopy (SEM) images were taken of the resulting structures, as shown colorized in Fig. 2. Different design widths and spacings of the primary nanowires were found to produce different widths of the adjacent secondary Au structures. With the designs producing a resulting primary width of 165 nm, secondary structures were measured to be 65 nm, as shown in Figs. 2(a) and 2(b). Increasing the primary width to 200 nm resulted in a secondary structure width of 40 nm. The fact that adjusting the primary Au structure width changes the secondary structure width demonstrates the tunability of the resulting grids via the nanomasking technique. This will prove useful in future experiments by allowing designers to strategically pattern geometries that show optimal enhancement in simulations. This will accelerate the learning process in determining the efficiency of the grid structures as SERS substrates, for example. The ability of the nanomasking process to fabricate nanogaps has been demonstrated as a proof of concept for the possible large-scale integration of the technique. This work also demonstrated the simultaneous fabrication of nanogaps and adjacent nanostructures that are both below the lithography resolution limit. This has been described for patterns aligned on the same center point, as with the circle patterns shown in Fig. 1. Varying the overlap of the two patterns, however, has the effect of changing the width of the resulting secondary metal, creating an adjacent sublithography-limited structure separated by a nanoslit. Figure 3 shows colorized SEM images of the result of varying the overlap between a square primary pattern and a rectangular secondary pattern. One larger rectangular and one square pattern with different amounts of overlap are shown in Fig. 3(a).
The higher magnification image (b) shows one case in which the square pattern and rectangular pattern were overlapped so that structures are formed adjacent to the gap; below the square, a 30-nm metal nanostructure was formed with features below the typical lithography limit of the EBL system used in this work (∼60 nm). This highlights the capability to use the nanomasking technique not only to fabricate sub-10-nm slits but also as a means to overcome lithography resolution limits for nanostructures. Next, we show another advancement in the nanomasking technique, blanket nanomasking, which eliminates the need for the secondary lithography, reducing the process to a single lithography step. This has the advantage of time and cost savings in the fabrication process. Here, it is used to create an array of parallel nanowires with different widths separated by nanogaps. The process shown in Fig. 4 begins, again, with patterning a desired geometry using lithography. This work used EBL, yet photolithography could be utilized instead for larger structures and rapid patterning. The desired metal and Cr layer were then deposited, and the Cr was allowed to oxidize (under ambient conditions in this work). This is shown in Fig. 4(a) with parallel Au nanowires that have an oxidized Cr layer overhanging the edge of the Au. From here, a second deposition of Au is all that is needed to produce more parallel nanowires, separated from the primary structures by nanogaps on both sides. The deposition covers the entire sample area, and the Cr mask is still able to produce nanogaps adjacent to the wires. Figure 4(d) shows colorized SEM images of the results of this type of fabrication process. The width of the primary and secondary nanowires could be controlled in a design such as this to optimize the plasmonic response to specific wavelengths of incident light.9 Thus, blanket nanomasking has been shown to be capable of patterning nanogap structures over a large area without the need for a secondary lithography step. The geometrical design possibilities are limited compared to the standard nanomasking process, but blanket deposition over a sample containing nanostructures may prove useful for economical mass production of devices. We have shown that the optical response and plasmonic nature of dual-width nanogap gratings, as shown in Fig. 4(d), can be more beneficial than that of standard single-width structures for photodetector and spectroscopy enhancement applications.75,76 These works help to demonstrate the value of nanomasking fabrication, which allows for the dual-width grating structure with the added benefit of nanoslit separation.

Nanoslit Analysis

Having demonstrated various geometrical fabrication capabilities via nanomasking, we then analyzed the structure of nanoslits created by the process. Three high-resolution SEM images of nanogaps (taken using an FEI Nova Nanolab 200) were studied to determine the gap widths along the length of the gap, as well as the correlation between the two gap edges in each case. The fabrication conditions for the studied patterns included deposition of a 1.5-nm Ti adhesion layer, 15 nm of Au, a 1.5-nm SiO2 separation layer between the Au and Cr, and 15 nm of Cr allowed to oxidize under ambient conditions. The gap widths were determined by measuring the full width at half maximum of the SEM image pixel values along a line drawn perpendicularly across the gap. The location of each edge point was determined in this manner along a gap length, L, of 100 nm. From these edge locations, the gap width, average gap width, deviation from the mean, and edge correlation were calculated. The mean value across the three gaps was found to be 7.4 nm with a standard deviation of 2.2 nm. The analyzed gaps are shown in the SEM images at the top of Fig.
5 with the specific locations for each gap measurement length, L, labeled. The red plots shown in the middle row of Fig. 5 display the gap width deviation from the mean, g, versus L. The red histogram at the right side of Fig. 5 plots the number of occurrences, N, for gap widths over different g ranges. The blue plots display the population correlation coefficient, ρ, versus L, with 1 representing complete correlation and −1 representing complete anticorrelation. From these data, it was found that there are ranges on the order of tens of nanometers along each gap over which the secondary structure edge is highly correlated with the edge position of the primary structure. There are also, however, ranges of the same order of length over which the edges are anticorrelated. Considering the total gap lengths studied, there does not appear to be a net correlation among the edge positions across a given gap. Therefore, with the nanomasking fabrication process, an expected gap width can be patterned with a relatively high degree of accuracy, but the roughness of the gap on the secondary structure edge is not necessarily defined by the primary structure surface roughness. This corroborates previous discussion of the gap roughness for electrical measurements, which found the secondary electrodes to be rougher than the primary ones, attributing this to the added roughness of the CrxOy film.74 The primary structure edge roughness is due to the resolution of the lithography process, the resist development, and structure evaporation. If the secondary structure edge roughness is not correlated with that of the primary structure, then the secondary evaporation and the Cr oxidation steps are the contributing factors of the additional roughness causing the anticorrelation. Using these measured results to incorporate accurate edge roughness into future computational simulations will be highly valuable for precisely predicting and designing nanostructure properties.
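The edge-extraction and correlation analysis described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' analysis code: the half-maximum edge finder, the synthetic Gaussian profile, and the example edge traces are assumptions standing in for real SEM line profiles.

```python
import numpy as np

def fwhm_edges(x, profile):
    """Locate the two half-maximum crossings of an intensity line profile
    drawn perpendicularly across the gap (linear interpolation)."""
    half = profile.max() / 2.0
    above = np.nonzero(profile >= half)[0]
    i, j = above[0], above[-1]
    # Interpolate each crossing between the bracketing samples.
    left = np.interp(half, [profile[i - 1], profile[i]], [x[i - 1], x[i]])
    right = np.interp(half, [profile[j + 1], profile[j]], [x[j + 1], x[j]])
    return left, right

# Synthetic stand-in for an SEM line profile: a smooth bright band of known
# width (real data would be pixel values sampled across the gap).
x = np.linspace(-10.0, 10.0, 2001)
profile = np.exp(-x**2 / (2 * 2.0**2))   # Gaussian, sigma = 2
left, right = fwhm_edges(x, profile)
width = right - left                     # FWHM = 2*sqrt(2*ln 2)*sigma here

# Edge statistics along the gap length L: gap widths, their mean, and the
# Pearson correlation coefficient between the two edge-position traces.
edge_a = 0.3 * np.sin(np.linspace(0, 4 * np.pi, 100))
edge_b = edge_a + 7.4                    # a perfectly correlated example
gaps = edge_b - edge_a
rho = np.corrcoef(edge_a, edge_b)[0, 1]
```

On real images, `edge_a` and `edge_b` would come from applying `fwhm_edges` at each pixel row along L, and anticorrelated regions would show up as negative `rho` over local windows.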
Previous work has shown that gap roughness can improve plasmonic devices, and that accurate models are key to predicting device behavior.77 This work provides a better characterization of the gap width, roughness, and edge correlation, and will be helpful in optimizing nanostructures for applications such as plasmonic nano-optics for enhanced spectroscopy. We plan to incorporate these measured structural results into computational electromagnetic and other simulations, further aiding in the fabrication design and optimization.

Conclusion

A nanomasking fabrication technique has demonstrated the capability of simultaneous fabrication of sub-10-nm slits and sublithography-resolution nanostructures. This work advances the technique by illustrating how hybrid structures can be created using different materials in each evaporation step. The width of the secondary structures can be controlled by the degree of overlap of the two lithography patterns used in the process, and the sizes of both the nanoslits and nanostructures can overcome the resolution limit of the electron beam or photolithography process used. This work advances the nanomasking method even further by introducing blanket nanomasking, where no secondary patterning step is required, as the secondary evaporation simply covers the entire sample. This has been demonstrated to successfully eliminate the secondary lithography step from the process while still producing nanogaps adjacent to multiple structures on a substrate surface. This is another aspect of the method that increases its appeal for applications requiring mass production. An analysis of the resulting gap structure was conducted, with the correlation among gap edges studied as well. It was found that over the length of three gaps fabricated via nanomasking under ambient Cr oxidation conditions, the average gap width is 7.4 ± 2.2 nm.
There was found to be no long-range edge correlation between one side of the gap and the other, but over shorter ranges (tens of nanometers), some gap regions displayed significant correlation, and others displayed significant anticorrelation. These measured values and correlation characteristics reveal parameters that can be useful for predicting nanoscale behavior, enabling more accurate modeling, design, and optimization of nanostructures. |
Partial least squares regression as novel tool for gas mixtures analysis in quartz-enhanced photoacoustic spectroscopy

Gas mixture analysis is a challenging task because of the demand for sensitive and highly selective detection techniques. Partial least squares regression (PLSR) is a statistical method developed as a generalization of standard multilinear regression (MLR), widely employed in multivariate analysis for relating two data matrices even with noisy and strongly correlated experimental data. In this work, PLSR is proposed as a novel approach for the analysis of gas mixture spectra acquired with quartz-enhanced photoacoustic spectroscopy (QEPAS). Results obtained analyzing CO/N2O and CH4/C2H2/N2O gas mixtures are presented. A comparison with the standard MLR approach shows a reduction in prediction errors of up to a factor of 5. |
/***************************************************************
* Name: OfficeMain.h
* Purpose: Defines Application Frame
* Author: <NAME> ()
* Created: 2021-06-26
* Copyright: <NAME> ()
* License:
**************************************************************/
#ifndef OFFICEMAIN_H
#define OFFICEMAIN_H
//(*Headers(OfficeFrame)
#include <wx/button.h>
#include <wx/frame.h>
#include <wx/menu.h>
#include <wx/stattext.h>
#include <wx/statusbr.h>
//*)
class OfficeFrame: public wxFrame
{
public:
OfficeFrame(wxWindow* parent,wxWindowID id = -1);
virtual ~OfficeFrame();
private:
//(*Handlers(OfficeFrame)
void OnQuit(wxCommandEvent& event);
void OnAbout(wxCommandEvent& event);
void OnstudentClick(wxCommandEvent& event);
void OnteacherClick(wxCommandEvent& event);
void OnadminClick(wxCommandEvent& event);
void OnexitClick(wxCommandEvent& event);
//*)
//(*Identifiers(OfficeFrame)
static const long ID_STATICTEXT1;
static const long ID_STUDENT;
static const long ID_STATICTEXT2;
static const long ID_TEACHER;
static const long ID_ADMIN;
static const long ID_EXIT;
static const long idMenuQuit;
static const long idMenuAbout;
static const long ID_STATUSBAR1;
//*)
//(*Declarations(OfficeFrame)
wxButton* admin;
wxButton* exit;
wxButton* student;
wxButton* teacher;
wxStaticText* StaticText1;
wxStaticText* StaticText2;
wxStatusBar* StatusBar1;
//*)
DECLARE_EVENT_TABLE()
};
#endif // OFFICEMAIN_H
|
/**
* Created by sigveh on 10/20/14.
*/
public class Keys {
private int[] doubleKeys;
private int[] intKeys;
public Keys(int[] doubleKeys, int[] intKeys){
this.doubleKeys = doubleKeys;
this.intKeys = intKeys;
}
public int[] getDoubleKeys() {
return doubleKeys;
}
public int[] getIntKeys() {
return intKeys;
}
} |
Quantum phase transition of light in the dissipative Rabi-Hubbard lattice: A dressed-master-equation perspective

In this work, we investigate the quantum phase transition of light in the dissipative Rabi-Hubbard lattice under the framework of the mean-field theory and quantum dressed master equation. The order parameter of photons in the strong qubit-photon coupling regime is derived analytically both at zero and low temperatures. Interestingly, we can locate the localization to delocalization phase transition very well in a wide parameter region. In particular, in the zero-temperature limit the critical tunneling strength generally approaches zero in the deep-strong qubit-photon coupling regime, regardless of the quantum dissipation. This is contrary to previous results based on the standard Lindblad master equation, which yield a finite minimal critical tunneling strength. Moreover, a significant improvement of the critical tunneling is also observed at finite temperature, compared with the counterpart under the Lindblad description. We hope these results may deepen the understanding of the phase transition of photons in the Rabi-Hubbard model.

INTRODUCTION

The microscopic interaction between light and quantum matter is ubiquitous in broad fields ranging from quantum optics and condensed-matter physics to quantum chemistry, and has attracted tremendous attention for many years. It continues to be a hot topic due to the tremendous progress in superconducting qubits, trapped ions, and cold atoms. The simplest paradigm is composed of a two-level system interacting with a single-mode radiation field, theoretically characterized by the seminal quantum Rabi model and its restricted form after the rotating-wave approximation, known as the Jaynes-Cummings (JC) model.
Recently, due to the significant development of circuit quantum electrodynamics (cQED), light-matter coupling has now reached the ultrastrong and even deep-strong coupling regimes, which invalidates the rotating-wave approximation and spurs a plethora of inspiring works. When considering the interplay between the on-site light-matter interaction and intersite photon hopping on a lattice, the Rabi (JC)-Hubbard model is the representative cQED lattice model. One of the most intriguing effects for the light-matter interacting lattice is the quantum phase transition (QPT) of light, i.e., the localization to delocalization transition of photons in the ground state. Specifically, multiple Mott-to-superfluid transitions and a series of Mott lobes are exploited in the JC-Hubbard model, whereas the Mott lobes are strongly suppressed and a single global boundary is unraveled to clarify the localized and delocalized phases in the Rabi-Hubbard model. Meanwhile, the mean-field theory is confirmed to be an efficient and reliable approach to consistently obtain the phase diagram of photons, which reduces the lattice system to the order-parameter-driven single-site model. Practically, a quantum system inevitably interacts with the environment. For the dissipative JC-Hubbard model, the effective non-Hermitian Hamiltonian description leads to the suppression of Mott lobes. For the dissipative Rabi-Hubbard model, Schiró et al. applied the Lindblad master equation (LME) to clarify exotic phases based on the spin-spin correlation function. In quantum optics, it was reported that as the qubit-photon interaction becomes strong, the light-matter hybrid system should be treated as a whole. This implies that the quantum master equation should be microscopically derived in the eigenspace of the hybrid system, rather than in the local-component basis (e.g., qubit and resonator).
This directly results in the emergence of generalized master equations (GME) and the failure of the Lindblad description. Moreover, the finite-time dynamics based on the GME shows significant distinction from the counterpart under the LME. After long-time evolution, the GME is usually reduced to the dressed master equation (DME), where off-diagonal elements of the density matrix of the hybrid system become negligible due to the full thermalization. Hence, the DME can be properly employed to investigate the steady-state behavior of hybrid quantum systems from weak to strong hybridization strengths [15,40,41]. However, to the best of our knowledge, the application of the DME to investigate the steady-state phase transition of light in the dissipative Rabi-Hubbard model currently lacks exploration, even under the mean-field framework. In this work, we apply the DME combined with mean-field theory to study the steady-state phase diagram of the dissipative Rabi-Hubbard model. The QPT of light is exhibited both at zero and finite temperatures. A nontrivial analytical expression of the order parameter is obtained, which relies on a comprehensive set of system parameters. Moreover, an improved boundary to characterize the phase transition of photons is also achieved, compared with the previous work with the Lindblad dissipation. The paper is organized as follows: In Sec. II we briefly introduce the dissipative Rabi-Hubbard model and the DME. In Sec. III we numerically show the phase diagram of the order parameter of photons, and analytically locate the phase boundary. Finally, we give a summary in Sec. IV.

A. The Rabi-Hubbard model

The Rabi-Hubbard lattice system, which is composed of the on-site quantum Rabi model and photon tunneling between nearest-neighboring sites with strength J, is described as H = Σ_n H^Rabi_n − J Σ_⟨m,n⟩ (a†_m a_n + a†_n a_m), where H^Rabi_n denotes the Hamiltonian of the Rabi model at the nth site, which describes the interaction between a qubit and a single-mode photon field.
The Hamiltonian is specified as H^Rabi_n = ω0 a†_n a_n + (Ω/2) σ^z_n + g σ^x_n (a_n + a†_n), where a_n and a†_n are the annihilation and creation operators of the cavity field at the nth site, σ^x_n and σ^z_n are the Pauli operators of the qubit at the nth site, ω0 denotes the frequency of the cavity field, Ω is the energy splitting of the qubit, and g is the qubit-photon coupling strength. Then, we adopt the mean-field theory to simplify the Rabi-Hubbard model to an effective single-site model. Specifically, the photon-hopping term in the lattice Hamiltonian is decoupled as a†_m a_n ≈ ⟨a†_m⟩ a_n + a†_m ⟨a_n⟩ − ⟨a†_m⟩⟨a_n⟩. Consequently, the Hamiltonian of the mean-field Rabi model is given by H_MF = H^Rabi − zJ(ψ* a + ψ a†) + zJ|ψ|², where z is the number of nearest-neighboring sites and ψ = ⟨a⟩ can be regarded as the order parameter. The subindex n is dropped for all sites, because each site shares the same Hamiltonian in the mean-field framework. The emergence of a nonzero order parameter can be used to characterize the QPT of light in the Rabi-Hubbard lattice, i.e., the localization phase to delocalization phase transition. We note that though the reduced mean-field Rabi model can be efficiently solved at steady state, the mean-field theory indeed has its own limitations: it relies on the decoupling approximation, i.e., the effective driving strength zJψ should be weak. Due to the tremendous advance of superconducting-circuit engineering, ultrastrong qubit-resonator coupling has been experimentally detected in cQED, which is theoretically described by the quantum Rabi model. Meanwhile, several large cQED lattices have also been designed, which provide solid ground to simulate the JC-Hubbard model. Hence, we believe the Rabi-Hubbard model could be realized by combining these two components.

B. Quantum dressed master equation

We take quantum dissipation into consideration in this work. Specifically, we include local dissipation where the nth-site mean-field Rabi model is coupled to two individual bosonic thermal baths.
Hence, the total Hamiltonian under the mean-field theory can be expressed as H = H_MF + Σ_u (H^u_B + V_u), with u = q, c. The bosonic thermal baths are described as H^u_B = Σ_k ω_k b†_{u,k} b_{u,k}, where b_{u,k} and b†_{u,k} are the annihilation and creation operators of the boson with frequency ω_k in the uth bath. The interactions between the Rabi-Hubbard model and the bosonic thermal baths are given by V = V_q + V_c, where the interaction terms associated with the qubit and the cavity read V_q = σ_x Σ_k λ_{q,k}(b_{q,k} + b†_{q,k}) and V_c = (a + a†) Σ_k λ_{c,k}(b_{c,k} + b†_{c,k}), respectively, with λ_{q,k} (λ_{c,k}) the coupling strength between the qubit (photon) and the corresponding thermal bath. The system-bath interaction is characterized by the spectral function G_u(ω) = 2π Σ_k λ²_{u,k} δ(ω − ω_k). In this work, we select the Ohmic case to quantify the thermal bath, i.e., G_q(ω) = α_q (ω/Ω) exp(−ω/ω_c), where α_u is the dissipation strength and ω_c is the cutoff frequency. ω_c is considered to be large enough, so the spectral functions are simplified as G_q(ω) = α_q ω/Ω and G_c(ω) = α_c ω/ω0. Next, we assume weak coupling between the quantum system and the bosonic thermal baths. Regarding the system-bath interactions as perturbations, we obtain the GME under the Born-Markov approximation, where the rate is given by γ_u(ω) = G_u(ω)[1 + n_B(ω)], with n_B(ω) = 1/[exp(ω/k_B T) − 1] being the Bose-Einstein distribution function. The projecting operator of the resonator is given by P_c(ω) = Σ_{n,m} ⟨φ_n|(a + a†)|φ_m⟩ δ(ω − E_{n,m}) |φ_n⟩⟨φ_m|, and the projecting operator of the qubit is defined analogously with σ_x in place of a + a†. After a long-time evolution, the off-diagonal density-matrix elements of the mean-field Rabi model expressed in the eigenstate representation are negligible. Then, the populations are decoupled from the off-diagonal elements, which simplifies the pair of projectors P_c(ω) and P_q(ω), with A_c = a + a† and A_q = σ_x. Then, the quantum master equation can be simplified to the DME. Specifically, the DME is expressed in Lindblad-like form in the dressed-state basis [15,40], where ω_{kj} is the transition frequency between two energy levels, and the effective dissipative rates Γ^{kj}_q and Γ^{kj}_c follow from the spectral functions evaluated at ω_{kj}. With the DME, we can self-consistently solve the steady state of the Rabi-Hubbard model.
To be specific, we initially set the order parameter to an arbitrary reasonable value, and find a temporary steady state ρ_ss. Then we calculate the order parameter ψ = Tr{ρ_ss a} for the next-step iteration. This procedure is repeated until a converged steady state and order parameter are achieved. All physical quantities can be calculated within the final steady state.

C. Two-dressed-state approximation

In the limiting regime of deep-strong qubit-photon coupling, low temperature, and weak excitation of the order parameter, we may confine the complete Hilbert space to the subspace spanned only by the ground state |φ_0⟩ and the first-excited state |φ_1⟩. Moreover, we approximately replace the eigenstates |φ_k⟩ of the mean-field Rabi model by the eigenstates of the Rabi model under the adiabatic approximation, built from displaced oscillator states, with σ_x|±x⟩ = ±|±x⟩, |±β⟩ a coherent state, and the displacement coefficient β = g/ω0. Then, the DME is simplified to coupled equations of motion for the reduced density-matrix elements, e.g., ∂ρ_00/∂t = iλ(ρ_01 − ρ_10) plus dissipative terms. Here, the order parameter can be reexpressed as ψ = g(ρ^ss_01 + ρ^ss_10)/ω0 and λ = −2g²zJ(ρ^ss_01 + ρ^ss_10)/ω0², with ρ^ss_10 and ρ^ss_01 being the density-matrix elements at steady state. Moreover, based on these eigenstates, the energy gap and the transition rate can be obtained as Δ ≈ Ω exp(−2g²/ω0²) and γ(Δ), respectively. [In Fig. 1, the red curve shows the analytical boundary of Eq. (18), and the white dashed curve with circles shows the boundary based on the LME.]

A. Numerical analysis of QPT

We first apply the DME combined with mean-field theory to numerically investigate the phase diagram of photons of the Rabi-Hubbard model at zero temperature. By calculating the order parameter |ψ|, the sharp phase transition of photons from the localization to the delocalization phase is clearly exhibited in Fig. 1. The localization phase with vanishing order parameter (i.e., |ψ| = 0) is denoted by the dark blue region, whereas the delocalization phase, characterized by significant excitation of the order parameter, is located in the light green and yellow regions.
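The self-consistent iteration described above can be sketched numerically. The toy below is a deliberately simplified zero-temperature stand-in: instead of solving the DME steady state, it diagonalizes the mean-field Rabi Hamiltonian in a truncated Fock space and iterates ψ → ⟨a⟩ in the ground state; all parameter values are illustrative assumptions.

```python
import numpy as np

# Illustrative parameters (assumed): cavity frequency omega0, qubit
# splitting Omega, coupling g, coordination number z; hbar = 1.
omega0, Omega, g, z = 1.0, 1.0, 1.0, 1
N = 25                                        # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), 1)      # annihilation operator
I2 = np.eye(2)
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.diag([1.0, -1.0])

def order_parameter(J, psi0=0.1, iters=300, tol=1e-10):
    """Iterate psi -> <a> in the ground state of the mean-field Rabi
    Hamiltonian H_MF = H_Rabi - z*J*psi*(a + a^dag), psi kept real."""
    psi = psi0
    for _ in range(iters):
        H = (omega0 * np.kron(a.T @ a, I2)
             + 0.5 * Omega * np.kron(np.eye(N), sz)
             + g * np.kron(a + a.T, sx)
             - z * J * psi * np.kron(a + a.T, I2))
        _, v = np.linalg.eigh(H)
        gs = v[:, 0]                          # ground state (real here)
        psi_new = gs @ np.kron(a, I2) @ gs
        if abs(psi_new - psi) < tol:
            return psi_new
        psi = psi_new
    return psi

# The dressed gap Delta ~ Omega*exp(-2 g^2/omega0^2) gives a critical
# strength J_c ~ omega0^2 * Delta / (4 z g^2), of order 0.03 here.
psi_localized = order_parameter(J=0.005)      # below J_c: |psi| collapses
psi_delocalized = order_parameter(J=0.3)      # above J_c: finite |psi|
```

A faithful steady-state version would replace the ground-state diagonalization with the DME steady-state solve of the text, but the fixed-point structure of the iteration is the same.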
The present phase transition corresponds to Z2 symmetry breaking. As the intersite photon tunneling strength J increases, the critical on-site qubit-photon coupling strength g_c separating the delocalization and localization phases decreases gradually. In particular, as g_c approaches ∞, it is found that the corresponding J_c → 0. We also study the phase boundary at finite temperature, e.g., T = 0.05 ω0 in Fig. 2. The delocalization phase is partially suppressed, particularly in the weak photon-tunneling regime, which leads to a finite J_c for g_c → ∞. Hence, the thermal noise favors the localization phase of photons. These numerical results are qualitatively consistent with previous ones observed in the QPT of light at both the steady state and the ground state, where the competition between the intersite photon tunneling and the on-site qubit-photon coupling is also considered. In the next subsection, we analytically explain the global boundary through the order parameter of photons.

B. Analytical solution of quantum phase transition

Since the phase boundary located by numerical calculation in Fig. 1 lies in the deep-strong coupling regime, where the analytical adiabatic approximation is applicable, we may also derive an analytical solution of the order parameter |ψ| at steady state in the critical regime. Specifically, at steady state (dρ^ss_ij/dt = 0) we obtain relations among the density-matrix elements from Eqs. (12a)-(12b). In the zero-temperature limit, combining Eq. (12b) with Eqs. (15a) and (15b), the order parameter is obtained analytically. Moreover, it is explicitly shown that the emergence of a nonzero order parameter is bounded by an inequality, which has a pronounced consequence for the phase-transition boundary in Fig. 1. Then, we analyze the boundary of the QPT of photons from the analytical perspective. In one previous study of the dissipative Rabi-Hubbard lattice, Schiró et al.
applied the LME to approximately quantify the phase boundary with the critical tunneling strength at zero temperature J crit ≈/d, with d being the dimensional number, which is also reproduced by the white dashed curve with circles in Fig. 1. In the photon-tunneling regime J_crit > 0.06 ω0, their result agrees with the numerical counterpart. However, for strong qubit-photon coupling, one can note that J_crit has a finite minimal value in Ref., i.e., c /2d. This result is inconsistent with the present numerical result in Fig. 1 and those observed in the Rabi-Hubbard model in the absence of quantum dissipation, in which J_crit ∝ Δ vanishes with the increase of g. Here, considering the inequality with Δ and γ(Δ) specified above, we obtain the critical tunneling strength J_c. It is found that in the absence of quantum dissipation, i.e., α_q(c) = 0, J_c is naturally reduced to ω0²Δ/(4zg²), which is quite compatible with the result in Ref.. While turning on the dissipation, the critical tunneling strength is shifted up by the cooperative contribution of the dissipation strengths, i.e., α_q and α_c. However, as the qubit-photon coupling becomes deep-strong, J_c again approaches zero, as indicated by the red curve in Fig. 1. We immediately note that this result is different from that based on the LME in Ref. in the strong qubit-photon interaction regime. Physically, the DME captures the microscopic transitions from the eigenstate |φ^Rabi_1⟩ to |φ^Rabi_0⟩, which are characterized by the rate γ(Δ). Hence, the dynamical transition in the DME generally relies on the energy gap, e.g., by selecting Ohmic (in this work) or super-Ohmic types of thermal baths, which leads to the result that J_c → 0 as g_c → ∞. In contrast, the LME phenomenologically treats γ(Δ) as independent of the energy gap, which generally overestimates the dissipative processes at strong qubit-photon coupling. Therefore, we believe that the DME may generally improve the boundary from the microscopic view.
We also analyze the phase boundary at finite temperatures. From Eqs. (12b), (15a), and (15b), the order parameter |ψ| can be obtained analytically. Meanwhile, the nonzero order parameter is again bounded by an inequality. Hence, we may predict the critical tunneling strength at finite temperature, which naturally reduces to the zero-temperature expression as T approaches zero. From Fig. 2 it is found that J_c (the red solid line) based on the DME agrees well with the numerical result in a wide qubit-photon coupling regime. Hence, the analytical expression of the critical tunneling strength may be helpful to deepen the understanding of finite-temperature phase transitions of photons. However, we should admit that this analytical result deviates from the numerical one in the extremely strong qubit-photon coupling limit (Δ ≈ 0), where J_c ∝ g² exp(4g²/ω0²) is significantly enhanced as n_B ≈ k_B T/Δ. Moreover, the two-dressed-state approximation may break down in the high-temperature regime, and more dressed states should be considered, as in the numerical analysis.

IV. CONCLUSION

In this paper, we study the QPT of light in the Rabi-Hubbard lattice with local dissipation under the framework of the mean-field theory and the quantum dressed master equation. The steady-state phase diagram of photons is numerically calculated, which clearly classifies the localization and delocalization phases. We then analytically obtain an approximate expression of the order parameter in the low-temperature and deep-strong qubit-photon coupling regime, where the mean-field Rabi model is reduced to an effective, nearly degenerate two-dressed-state system. We further analyze the boundary between the two different phases at zero temperature, which is characterized by the critical tunneling strength. The expression of the critical tunneling strength in the absence of quantum dissipation can be naturally reduced to the previous ones, see, e.g., Ref..
While turning on the quantum dissipation, which is characterized by representative thermal baths, e.g., of Ohmic or super-Ohmic type, it is found that the critical tunneling strength approaches zero as the qubit-photon coupling strength becomes deep-strong. This result is generally distinct from the previous work based on the LME, which in contrast has a finite minimal critical tunneling strength. We also predict the critical tunneling strength at finite temperatures, which may be helpful to study the phase transition of light at thermal equilibrium. In the future, it should be interesting to explore the steady-state phase diagram of photons in the Rabi-Hubbard model beyond the simple mean-field framework, e.g., based on the cluster mean-field theory and linked-cluster expansion approach. |
import hashlib

import requests

# NamedTemporaryFile, ProgressHookType, CompletionHookType and
# _CompletionManager are defined/imported elsewhere in this module.
def _download(
    response: requests.Response,
    intermediate_buffer: NamedTemporaryFile,
    chunk_size: int,
    size: int,
    progress_hook: ProgressHookType,
    completion_hook: CompletionHookType,
    sha1: str,
):
    # Stream the response body into the buffer, reporting progress and
    # accumulating a SHA-1 digest as the chunks arrive.
    hex_d = hashlib.sha1()
    with _CompletionManager(completion_hook):
        for chunk in response.iter_content(chunk_size=chunk_size):
            progress_hook(intermediate_buffer.write(chunk), size)
            hex_d.update(chunk)
        # Verify the finished download against the expected checksum.
        if sha1:
            assert sha1 == hex_d.hexdigest(), "Download verification failed." |
// https://serde.rs/container-attrs.html
use serde::{Deserialize, Serialize};
// Serialize to send the data to a client (used at the client side)
// Deserialize to use the data from a client
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct UpdatePasswordRequest {
pub old_password: String,
pub new_password: String,
} |
Daily Walks Train Future Leaders Gemba is a Japanese word meaning the actual place where value-creating work happens. Many leaders use gemba only for solving problems, visiting only when there is an issue. Others practice gemba walks on a daily basis to follow up and monitor the situation. However, Toyota believes that leaders truly develop through daily experiences at the gemba. In reality, gemba is a principle for managing, developing and improving people and processes. It is a valuable tool that helps lean practitioners learn the true facts so they can base management decisions on the actual situation. |
Patients' Awareness of Advanced Disease Status, Psychological Distress and Quality of Life Among Patients With Advanced Cancer: Results From the APPROACH Study, India Background: Prognostic disclosure to patients with advanced cancer facilitates treatment decisions and goals of care discussions. However, the perspectives of patients, families and physicians differ in this regard across different cultures. Non-disclosure of cancer diagnosis or prognosis is commonly observed in family-centric cultures such as India. Aim: To assess the prevalence of and factors associated with cancer patients' awareness of advanced disease status, and its association with quality of life and psychological distress. Methods: Patients for this cross-sectional questionnaire-based survey were recruited from oncology and palliative medicine clinics at a tertiary cancer hospital in India from January 2017 to June 2018. Patients aged ≥ 21 years, aware of their cancer diagnosis and receiving oncology treatment for Stage IV solid cancer were included in the study after obtaining written informed consent. Results: Two hundred patients were enrolled, of whom 146 (73%) were not aware of the stage of their malignancy and 9 (4.5%) believed that their disease was at stage I, II or III. Those who were aware of their advanced cancer stage had more years of education (9.9 years vs 8.1 years, p =.05) and had poorer spiritual wellbeing in the faith domain (adjusted difference −1.6, 95% confidence interval −3.1 to −0.1, p =.03) compared to those who were unaware. Conclusion: It is recommended that future studies explore prognostic understanding in Indian patients according to their socio-cultural, spiritual and educational background. |
def command_load(args):
if isinstance(args.config, list):
for cfg in args.config[:-1]:
new_args = argparse.Namespace(**args.__dict__)
new_args.detached = True
new_args.config = cfg
command_load(new_args)
new_args = argparse.Namespace(**args.__dict__)
new_args.config = args.config[-1]
command_load(new_args)
return
if '.' == args.config:
if config.in_cwd():
configfile = config.in_cwd()[0]
else:
sys.exit('No tmuxp configs found in current directory.')
else:
configfile = args.config
file_user = os.path.join(config_dir, configfile)
file_cwd = os.path.join(cwd_dir, configfile)
if os.path.exists(file_cwd) and os.path.isfile(file_cwd):
print('load %s' % file_cwd)
load_workspace(file_cwd, args)
elif os.path.exists(file_user) and os.path.isfile(file_user):
load_workspace(file_user, args)
else:
logger.error('%s not found.' % configfile) |
Minimally invasive surgery is a surgical approach aimed at reducing the healing time and trauma to a patient as a result of performing surgery on internal organs. In this approach, the treated internal organs are accessed through a small number of incisions in the patient's body. In particular, cannulas or sleeves are inserted through small incisions to provide entry ports through which surgical instruments are passed. Alternatively, access to the area to be treated is obtained using a natural bodily opening (e.g., throat, rectum), a cannula or sleeve is inserted into the bodily opening and the surgical instruments are passed through the cannula/sleeve or the bodily opening and the operable end localized to the treatment site.
The surgical instruments are generally similar to those used in open surgical procedures except they include an extension (e.g., a tubular extension) between the end of the tool entering the surgical field (i.e., the operable end of the tool, instrument or device) and the portion gripped by the surgeon. Because the surgical site or treatment site is not directly visible to the surgeon or other medical personnel when performing a minimally invasive procedure, a visualization tool/guide (e.g., endoscope, laparoscope, laryngoscope, etc.) also is inserted along with the surgical instruments so that, as the surgeon manipulates the surgical instruments outside of the surgical site, he or she is able to view the procedure on a monitor.
The limited motion available at the operable end of current devices, however, creates limitations that necessarily limit that which can be accomplished with the methods and procedures using current devices and systems. Most instruments or devices are rigid and are limited to motions of four (4) degrees of freedom of motion or less about the incision point and in/out translation. Further, the instruments can limit the surgeon's ability to accurately perceive the force/interaction between the instruments and tissues/organs. Some techniques have been established whereby the location of the incision(s) is optimized so as to in effect counter the limitations imposed by the available movement of a given instrument. This approach, however, does not work for all surgical techniques such as those surgical techniques in which access to the treatment or surgical site is accomplished using an existing bodily opening, such as the throat.
Several approaches to distal tool dexterity enhancement have been reported, including designs for catheters or surgical tool manipulation devices based on articulated designs. Many systems and actuation methods are based mainly on wire actuation or on wire-actuated articulated wrists [G. Guthart and K. Salisbury, “The Intuitive™ Telesurgery System: Overview and Application,” IEEE International Conference on Robotics and Automation, pp. 618-621, 2000; M. Cavusoglu, I. Villanueva, and F. Tendick, “Workspace Analysis of Robotics Manipulators for a Teleoperated Suturing Task,” IEEE/RSJ International Conference on Intelligent Robots and Systems, Maui, HI, 2001], or on Shape Memory Alloys (SMAs): bending SMA forceps were suggested for laparoscopic surgery [Y. Nakamura, A. Matsui, T. Saito, and K. Yoshimoto, “Shape-Memory-Alloy Active Forceps for Laparoscopic Surgery,” IEEE International Conference on Robotics and Automation, pp. 2320-2327, 1995]; an SMA-actuated 1 degree of freedom planar bending snake device for knee arthroscopy was described [P. Dario, C. Paggetti, N. Troisfontaine, E. Papa, T. Ciucci, M. C. Carrozza, and M. Marcacci, “A Miniature Steerable End-Effector for Application in an Integrated System for Computer-Assisted Arthroscopy,” IEEE International Conference on Robotics and Automation, pp. 1573-1579, 1997]; and a hyper-redundant SMA-actuated snake for gastro-intestinal intervention was described [D. Reynaerts, J. Peirs, and H. Van Brussel, “Shape Memory Micro-Actuation for a Gastro-Intestinal Intervention System,” Sensors and Actuators, vol. 77, pp. 157-166, 1999]. A two-DoF 5 mm diameter wire-driven snake-like tool using super-elastic NiTi flexure joints also has been described [J. Piers, D. Reynaerts, H. Van Brussel, G. De Gersem, and H. T. Tang, “Design of an Advanced Tool Guiding System for Robotic Surgery,” IEEE International Conference on Robotics and Automation, pp. 2651-2656, 2003].
Also described are actuation methods and systems that use Electro-Active Polymers (EAPs) (e.g., see A. Della Santa, D. Mazzoldi, and DeRossi, “Steerable Microcatheters Actuated by Embedded Conducting Polymer Structures,” Journal of Intelligent Material Systems and Structures, vol. 7, pp. 292-300, 1996). These designs, however, have a number of limitations. The articulated designs limit downsize scalability and complicate the sterilization process, and wire actuation limits the force application capability since the wires can apply only pulling forces (i.e., they buckle when pushed). SMA suffers from hysteresis and low operation frequency due to the time necessary for temperature changes to effect its martensite/austenite transformation. Also, the various catheter designs do not meet the force application capabilities required for surgical tool manipulation.
Other approaches have been reported describing snake-like robots that use a flexible backbone (e.g., see I. Gravagne and I. Walker, “On the Kinematics of Remotely-Actuated Continuum Robots,” IEEE International Conference on Robotics and Automation, pp. 2544-2550, 2000; I. Gravagne and I. Walker, “Kinematic Transformations for Remotely-Actuated Planar Continuum Robots,” IEEE International Conference on Robotics and Automation, pp. 19-26, 2000; C. Li and C. Rahn, “Design of Continuous Backbone, Cable-Driven Robots,” ASME Journal of Mechanical Design, vol. 124, pp. 265-271, 2002; G. Robinson and J. Davies, “Continuum Robots—a State of the Art,” IEEE International Conference on Robotics and Automation, pp. 2849-2853, 1999). These efforts, however, focused on large-scale snake-like robots that used one flexible backbone actuated by wires (see S. Hirose, Biologically Inspired Robots, Snake-Like Locomotors and Manipulators: Oxford University Press, 1993). These designs also have a number of limitations, including that wire actuation in only a pull mode does not allow for large force actuation once the diameter of the snake is downsized to less than 5 mm. Further, when the diameter of the snake-like unit is downsized, its stiffness is relatively low because it relies only on one central backbone supported by wires. This is seen most strongly in the torsional stiffness.
Alternative designs of a 3 DoF wrist for MIS suturing were analyzed and a method was proposed to determine the workspace and to optimize the position of the entry port in the patient's body to provide optimal dexterity [M. Cavusoglu, I. Villanueva, and F. Tendick, “Workspace Analysis of Robotics Manipulators for a Teleoperated Suturing Task,” IEEE/RSJ International Conference on Intelligent Robots and Systems, Maui, HI, 2001]. Also, three architectures of endoscopic wrists: a simple wire actuated joint, a multi-revolute joint wrist, a tendon snake-like wrist; have been analyzed and these joints compared in terms of dexterity and showed the superiority of the snake-like wrist over the other two wrists in terms of dexterity [A. Faraz and S. Payandeh, “Synthesis and Workspace Study of Endoscopic Extenders with Flexible Stem,” Simon Fraser University, Canada 2003].
In chest and abdomen minimally invasive surgery, the entry portals for surgical instruments are usually placed some distance apart, and the instruments approach the operative site from somewhat differing directions. This arrangement makes it possible (though sometimes inconvenient and limiting) for telesurgical systems, such as the DaVinci or Zeus, to use rather large robotic slave manipulators for extracorporeal instrument positioning. The optimal placement of entry portals based on dexterity requirements for particular procedures is an important subject and has recently been addressed by several authors [L. Adhami and E. C. Maniere, “Optimal Planning for Minimally Invasive Surgical Robots,” IEEE Transactions on Robotics and Automation, vol. 19, pp. 854-863, 2003; J. W. Cannon, J. A. Stoll, S. D. Sehla, P. E. Dupont, R. D. Howe, and D. F. Torchina, “Port Placement Planning in Robot-Assisted Coronary Artery Bypass,” IEEE transactions on Robotics and Automation, vol. 19, pp. 912-917, 2003]. In contrast, with minimally invasive surgery of the throat, the size and location of the entry port is pre-determined and no such optimization is possible.
The upper airway of the throat is a long, narrow, and irregularly shaped organ that includes the pharynx (throat), hypopharynx, and larynx, commonly referred to as the voice box. These areas are subject to a variety of benign and malignant growths, paralysis, and scar tissue formation requiring surgical interventions for excision and/or reconstruction. These procedures also often must be performed past the vocal cords closer to the lungs. In order to maintain the voice characteristics, it is very important to be able to reconstruct the vocal cord region as accurately as possible. These procedures (e.g., partial or total laryngectomy, vocal fold repositioning, and laryngotracheal reconstruction) are routinely performed using open surgical techniques at the expense of damaging the integrity of the framework supporting the laryngeal cartilage, muscle, and the connective tissue vital to normal function. A minimally invasive endoscopic procedure is generally preferred over the open procedure, as it would preserve the laryngeal framework integrity, promote faster recovery and frequently overcome the need for tracheostomy.
There is shown in FIGS. 1A, B a conventional minimally invasive system that is used for the performance of laryngeal surgery. As illustrated, the internal regions of the airway are accessed through the use of an array of long instruments (usually ranging between 240 to 350 mm long) through a laryngoscope that is inserted into the patient's mouth and serves as a visualization tool and a guide for surgical instrumentation. The laryngoscope is typically 180 mm long with an oval cross-section usually ranging between 16-20 mm in width at its smallest cross section.
This surgical setup involves the surgeon manipulating several long tools, instruments or devices (for example, one tool for suction and another for tissue manipulation). The conventional instruments or devices, as indicated herein, are constrained by design to provide four (4) degrees of freedom of motion (or less) and also lack tool-tip dexterity. Consequently, such instruments or devices do not provide the surgeon with the tip dexterity required to perform delicate and accurate surgical procedures such as, for example, soft tissue reconstruction and sewing. Further, the vocal folds preclude the performance of surgical procedures past them using such instruments or devices.
Consequently, and due to these limitations, laryngeal minimally invasive surgery is currently limited to simple operations such as microflap elevation, excisional biopsies, and removal of papilloma using laser or powered microdebrider.
Functional reconstructive procedures (e.g., tissue flap rotation or suturing), are not performed in throat minimally invasive surgery, although, reconstruction of the vocal fold structures as accurately as possible is crucial for maintaining the voice characteristics. Suture closure of surgical defects has been shown to reduce scar tissue, shorten healing time, and result in improved laryngeal function and sound production (D. J. Fleming, S. McGuff, and C. B. Simpson, “Comparison of Microflap Healing Outcomes with Traditional and Microsuturing Techniques: Initial Results in a Canine Model,” Ann Otol Rhinol Laryngol., vol. 110, pp. 707-712, 2001; P. Woo, J. Casper, B. Griffin, R. Colton, and C. Brewer, “Endoscopic Microsuture Repair of Vocal Fold Defects,” J. Voice, vol. 9, pp. 332-339, 1995). This seemingly simple operation is very difficult, if not impossible, to perform in laryngeal minimally invasive surgery.
Although laryngeal surgery is exemplary, many other minimally-invasive surgical procedures have similar needs for precise, high dexterity motions in confined spaces within a patient's body. Further, it is often necessary to operate multiple instruments in close proximity to each other, where the instruments are inserted into the body through roughly parallel access paths in confined spaces such as the throat. Further, the physical size of the instruments is often a significant factor in determining the feasibility of procedures. Further, as instruments are constructed to be smaller-and-smaller, designs using conventional approaches involving complicated linkages become more and more difficult and costly to fabricate and are increasingly susceptible to limitations due to backlash and other factors.
It thus would be desirable to provide improved methods and devices for minimally invasive surgeries. It also would be desirable to provide new devices and systems particularly suited and adaptable for use with a wide range of minimally invasive surgical techniques that provide a similar freedom of motion at the treatment site as would be experienced using open surgical procedures or techniques. It also would be desirable to provide such minimally invasive devices and systems that provide such motion in a constrained surgical environment such as that presented in the throat and sinus. It also would be desirable to provide methods for treating any of a number of organs or areas of a body, more specifically the throat and sinus, using such devices and systems. It would be desirable to provide methods and devices that are capable of performing surgery on particularly challenging sites such as the throat. It would be highly desirable to provide methods and devices that are configurable to operate at multiple physical scales, ranging from small to extremely small, without requiring fundamental changes in the design concept. Such methods and devices should overcome the deficiencies of the presently available methods and devices. |
The vasculosome theory. ACKNOWLEDGMENTS This work was supported by National Institutes of Health grants R01 AG-25016, R01 DK-074095, and 1R01 HL104236-01 (to G.C.G.); the American College of Surgeons Franklin H. Martin Faculty Research Fellowship, the Hagey Laboratory for Pediatric Regenerative Medicine, and the Stanford University Child Health Research Institute Faculty Scholar Award (to D.C.W.); and National Institutes of Health grants R01 DE021683-01 and RC2 DE020771, the Oak Foundation, and the Hagey Laboratory for Pediatric Regenerative Medicine (to M.T.L.). |
Does board gender diversity play a significant role in determining firm performance? The aim of this research is to analyse whether board gender diversity plays a vital part in explaining firm performance. The research is performed using static panel analysis on a sample of the largest Croatian manufacturers that operated in the 2015–2019 period. To conduct such an analysis, several variables relating to board characteristics are employed, including the proportion of women in the boardroom, a dummy variable for whether a female is present on the board, the Blau index and the size of the board. Furthermore, a set of firm-specific, industry-oriented and macroeconomic variables is encompassed by the analysis as well, comprising size, liquidity, leverage, inventory management, capital intensity, market structure expressed with the concentration ratio, and the GDP real growth rate. To obtain more robust results, three performance measures are introduced: ROA, ROS and NPM. The findings reveal that size, liquidity, leverage, inventory management, capital intensity and concentration of the industry play an important role in determining firm profitability, whereas we have not found support for greater gender diversity in the boardroom translating into superior performance. Similar results are also provided by a robustness check. The paper contributes to scientific thought in that it adds to the scarce empirical evidence on gender diversity in the manufacturing industry in general, and particularly in the Croatian context. |
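The Blau index mentioned above is a standard heterogeneity measure, 1 − Σ pᵢ²; for a two-category variable such as gender it is 0 for a homogeneous board and peaks at 0.5 for an even split. A minimal sketch (the function name and the example board are ours, not from the study):

```python
def blau_index(proportions):
    # Blau heterogeneity index: 1 minus the sum of squared category shares.
    # 0 means a fully homogeneous group; with two categories the maximum
    # of 0.5 is reached at a 50/50 split.
    return 1.0 - sum(p * p for p in proportions)


# Hypothetical board: 3 women among 10 members.
p_women = 3 / 10
print(round(blau_index([p_women, 1 - p_women]), 2))  # 0.42
```

Because the index is bounded and symmetric, it is often preferred over the raw proportion of women when the question is about diversity rather than female representation per se.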
PARIS — Tunisian-born designer Azzedine Alaia, a fashion iconoclast whose clingy styles helped define the 1980s and who dressed famous women from Hollywood to the White House, has died.
The French Haute Couture Federation announced Alaia’s death on Saturday without providing details. Twitter tributes to his influence on fashion poured in from around the world.
Alaia (AZ’-uh-deen uh-LY’-uh) sometimes was dubbed the “king of cling” for the sculptural, formfitting designs he first popularized during the 1980s and updated over the decades. His clients included women as diverse as Michelle Obama, Lady Gaga, Grace Jones and Greta Garbo.
The couture federation said Alaia was born in 1940, while the Tunisian Culture Ministry said he was born in 1942. The discrepancy could not immediately be explained.
The supermodel Naomi Campbell, who enjoyed a close relationship with Alaia for many years and affectionately called him “papa,” has credited the designer with helping launch her career and taking care of her like a father when she met him in Paris at age 16.
News of his death sparked a flurry of tributes from figures in the fashion and entertainment worlds.
“An architect for the body, a man for femininity, Azzedine Alaia is undeniably the inventor” of an important kind of style, said Sidney Toledano, chief executive of Christian Dior Couture.
As a rare Tunisian with global name recognition, Alaia’s death prompted collective grief in his native Tunisia on Saturday, with tributes pouring in from the business and culture worlds and the government.
The Tunisian Foreign Ministry said in a statement that Alaia was born in the medina, or old town, of the capital Tunis in 1942, and developed a passion for fashion thanks to his sister Hafidha. It said he had expressed a wish to be buried in Tunisia.
“A genius who weaved connections among fashion, architecture and fine arts, sculpting creations to magnify women’s bodies. A free and generous man, loved and admired,” former French Culture Minister Audrey Azoulay, who recently became director of the United Nations cultural agency UNESCO, said in tribute to the designer.
Azzedine Alaia was raised by his grandmother and earned a diploma at the Tunis Fine Arts Institute, according to his house’s website. He arrived in Paris in the 1950s, where he rented a room in a countess’ home in exchange for small jobs.
He learned to sew at Guy Laroche and worked briefly at Christian Dior, and started his own house in 1980. He worked with superstars as well as low-cost retailer Tati, well before H&M popularized that kind of high-low co-operation with well-known designers.
Alaia received offers to take over other fashion houses, but he routinely refused. He sought financial support in the 1990s and kept his company going. It had revenue of 60 million euros last year.
No information was immediately available on survivors or memorial arrangements. |
package cn.yiya.shiji.adapter;
import android.content.Context;
import android.support.v7.widget.RecyclerView;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import android.widget.ImageView;
import android.widget.RelativeLayout;
import android.widget.TextView;
import java.util.ArrayList;
import cn.yiya.shiji.R;
import cn.yiya.shiji.config.Configration;
import cn.yiya.shiji.entity.navigation.CountryListInfo;
import cn.yiya.shiji.entity.navigation.CouponDetailInfo;
import cn.yiya.shiji.utils.SimpleUtils;
/**
* Created by Tom on 2016/4/6.
*/
public class CouponAdapter extends RecyclerView.Adapter<CouponAdapter.CouponViewHolder> {
private Context mContext;
private ArrayList<CountryListInfo> mList;
private ArrayList<CouponDetailInfo> countryCouponList;
private OnItemClickListener onItemClickListener;
public CouponAdapter(Context mContext, ArrayList<CouponDetailInfo> countryCouponList) {
this.mContext = mContext;
this.countryCouponList = countryCouponList;
}
public ArrayList<CountryListInfo> getmList() {
return mList;
}
public void setmList(ArrayList<CountryListInfo> mList) {
this.mList = mList;
}
@Override
public CouponViewHolder onCreateViewHolder(ViewGroup parent, int viewType) {
return new CouponViewHolder(LayoutInflater.from(mContext).inflate(R.layout.coupon_item, parent, false));
}
@Override
public void onBindViewHolder(CouponViewHolder holder, int position) {
final CouponDetailInfo info = countryCouponList.get(position);
RelativeLayout.LayoutParams layoutParams = (RelativeLayout.LayoutParams)holder.ivCouponContent.getLayoutParams();
float scale = 2.3f;
int width = SimpleUtils.getScreenWidth(mContext) - SimpleUtils.dp2px(mContext,32);
layoutParams.height = (int) (width / scale);
holder.ivCouponContent.setLayoutParams(layoutParams);
String imgPath = Configration.COUPON_PATH + "/" + info.getId() + ".png";
info.setCover(imgPath);
holder.tvBrief.setText(info.getBrief());
holder.tvStoreName.setText(info.getStore_name());
holder.tvRange.setText(info.getRange());
holder.tvIndate.setText(info.getStart_time() + " - " + info.getEnd_time());
holder.tvDes.setText(info.getDes());
holder.tvTitle.setText(info.getStore_name());
holder.relativelayout.setOnClickListener(new View.OnClickListener() {
@Override
public void onClick(View v) {
if (onItemClickListener != null) {
onItemClickListener.OnItemClick(info);
}
}
});
}
@Override
public int getItemCount() {
if (countryCouponList == null) {
return 0;
} else {
return countryCouponList.size();
}
}
public ArrayList<CouponDetailInfo> getCountryCouponList() {
return countryCouponList;
}
public void setCountryCouponList(ArrayList<CouponDetailInfo> countryCouponList) {
this.countryCouponList = countryCouponList;
}
public interface OnItemClickListener {
void OnItemClick(CouponDetailInfo info);
}
public void setOnItemClickListener(OnItemClickListener onItemClickListener) {
this.onItemClickListener = onItemClickListener;
}
class CouponViewHolder extends RecyclerView.ViewHolder {
TextView tvTitle, tvBrief, tvRange, tvIndate, tvDes, tvStoreName;
RelativeLayout relativelayout;
ImageView ivCouponContent;
public CouponViewHolder(View itemView) {
super(itemView);
relativelayout = (RelativeLayout) itemView.findViewById(R.id.relativelayout);
tvTitle = (TextView) itemView.findViewById(R.id.tv_coupon_title);
tvDes = (TextView) itemView.findViewById(R.id.tv_des);
tvIndate = (TextView) itemView.findViewById(R.id.tv_indate);
tvRange = (TextView) itemView.findViewById(R.id.tv_range);
tvStoreName = (TextView) itemView.findViewById(R.id.tv_store_name);
tvBrief = (TextView) itemView.findViewById(R.id.tv_brief);
ivCouponContent = (ImageView) itemView.findViewById(R.id.coupon_content);
}
}
}
|
// CommitToVoteSet constructs a VoteSet from the Commit and validator set.
// Panics if signatures from the commit can't be added to the voteset.
// Inverse of VoteSet.MakeCommit().
func CommitToVoteSet(chainID string, commit *Commit, vals *ValidatorSet) *VoteSet {
voteSet := NewVoteSet(chainID, commit.Height, commit.Round, tmproto.PrecommitType, vals)
for idx, commitSig := range commit.Signatures {
if commitSig.Absent() {
continue
}
added, err := voteSet.AddVote(commit.GetVote(int32(idx)))
if !added || err != nil {
panic(fmt.Sprintf("Failed to reconstruct LastCommit: %v", err))
}
}
return voteSet
} |
//---------------------------------------------------------------------------------------
// Copyright (c) 2001-2018 by PDFTron Systems Inc. All Rights Reserved.
// Consult legal.txt regarding legal and license information.
//---------------------------------------------------------------------------------------
package com.pdftron.pdf.interfaces;
import android.support.annotation.NonNull;
import android.support.annotation.Nullable;
import com.pdftron.sdf.Obj;
/**
* Callback interface to be invoked when either a standard rubber stamp or custom rubber stamp has been selected.
*/
public interface OnRubberStampSelectedListener {
/**
* Called when a standard rubber stamp is selected.
*
* @param stampLabel The label of stamp
*/
void onRubberStampSelected(@NonNull String stampLabel);
/**
* Called when a custom rubber stamp is selected.
*
* @param stampObj The option for creating custom rubber stamp
*/
void onRubberStampSelected(@Nullable Obj stampObj);
}
|
OBJECTIVE To study the condition and clinical characteristics of Epstein-Barr virus (EBV) infection in hospitalized children from the Wuhan region. METHODS A total of 14 840 hospitalized children were classified into five age groups: less than 6 months old, 6 months to 1 year old, 1 to 3 years old, 3 to 7 years old and 7 to 15 years old. The antibodies IgM and IgG to EBV capsid antigen (VCA) were detected using ELISA. RESULTS Of the 14 840 hospitalized children, 7 899 were positive for EBV antibodies, an infection rate of 53.23%. The positive rate of VCA-IgM was 4.05% (601/14 840) and that of VCA-IgG was 49.18% (7 298/14 840). The lowest positive rate of VCA-IgM (0.11%) was found in the group of less than 6 months old and the highest positive rate of VCA-IgG (79.83%) was found in the group of 7 to 15 years old. Of the 601 children with positive VCA-IgM, 429 (71.4%) suffered from respiratory tract infection. CONCLUSIONS EBV infection is very common in hospitalized children in Wuhan. Respiratory tract infection is the leading disease in children with positive EBV antibodies. The infection rate of EBV differs between age groups. |
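The percentages reported in the abstract can be reproduced directly from its counts; a quick sanity check in Python (the helper name is ours, the counts are taken from the text above):

```python
def positive_rate(positives, total):
    # Percentage of positives, rounded to two decimals as in the abstract.
    return round(100 * positives / total, 2)


total = 14840
print(positive_rate(7899, total))  # 53.23 -> overall EBV antibody rate
print(positive_rate(601, total))   # 4.05  -> VCA-IgM
print(positive_rate(7298, total))  # 49.18 -> VCA-IgG
print(round(100 * 429 / 601, 1))   # 71.4  -> respiratory infection share of IgM-positives
```

Note that the last figure uses the 601 IgM-positive children, not the full cohort, as its denominator.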
package item
const (
KindModification Kind = "modification"
KindModificationBarrel Kind = "modificationBarrel"
KindModificationBipod Kind = "modificationBipod"
KindModificationCharge Kind = "modificationCharge"
KindModificationDevice Kind = "modificationDevice"
KindModificationForegrip Kind = "modificationForegrip"
KindModificationGasblock Kind = "modificationGasblock"
KindModificationHandguard Kind = "modificationHandguard"
KindModificationLauncher Kind = "modificationLauncher"
KindModificationMount Kind = "modificationMount"
KindModificationMuzzle Kind = "modificationMuzzle"
KindModificationGoggles Kind = "modificationGoggles"
KindModificationGogglesSpecial Kind = "modificationGogglesSpecial"
KindModificationPistolgrip Kind = "modificationPistolgrip"
KindModificationReceiver Kind = "modificationReceiver"
KindModificationSight Kind = "modificationSight"
KindModificationSightSpecial Kind = "modificationSightSpecial"
KindModificationStock Kind = "modificationStock"
)
type Modification struct {
Item
Ergonomics float64 `json:"ergonomicsFP"`
Accuracy float64 `json:"accuracy"`
Recoil float64 `json:"recoil"`
RaidModdable int64 `json:"raidModdable"`
GridModifier GridModifier `json:"gridModifier"`
Slots Slots `json:"slots"`
Compatibility ItemList `json:"compatibility"`
Conflicts ItemList `json:"conflicts"`
}
type ModificationResult struct {
*Result
Items []Modification `json:"items"`
}
func (r *ModificationResult) GetEntities() []Entity {
e := make([]Entity, len(r.Items))
for i, item := range r.Items {
e[i] = item
}
return e
}
// Weapon modifications //
type Barrel struct {
Modification
Length float64 `json:"length"`
Velocity float64 `json:"velocity"`
Suppressor bool `json:"suppressor"`
}
type BarrelResult struct {
*Result
Items []Barrel `json:"items"`
}
func (r *BarrelResult) GetEntities() []Entity {
e := make([]Entity, len(r.Items))
for i, item := range r.Items {
e[i] = item
}
return e
}
var modBarrelFilter = Filter{
"suppressor": {
"true",
"false",
},
}
type Bipod struct {
Modification
}
type BipodResult struct {
*Result
Items []Bipod `json:"items"`
}
func (r *BipodResult) GetEntities() []Entity {
e := make([]Entity, len(r.Items))
for i, item := range r.Items {
e[i] = item
}
return e
}
type Charge struct {
Modification
}
type ChargeResult struct {
*Result
Items []Charge `json:"items"`
}
func (r *ChargeResult) GetEntities() []Entity {
e := make([]Entity, len(r.Items))
for i, item := range r.Items {
e[i] = item
}
return e
}
type Device struct {
Modification
Type string `json:"type"`
Modes []string `json:"modes"`
}
type DeviceResult struct {
*Result
Items []Device `json:"items"`
}
func (r *DeviceResult) GetEntities() []Entity {
e := make([]Entity, len(r.Items))
for i, item := range r.Items {
e[i] = item
}
return e
}
var modDeviceFilter = Filter{
"type": {
"combo",
"light",
},
}
type Foregrip struct {
Modification
}
type ForegripResult struct {
*Result
Items []Foregrip `json:"items"`
}
func (r *ForegripResult) GetEntities() []Entity {
e := make([]Entity, len(r.Items))
for i, item := range r.Items {
e[i] = item
}
return e
}
type GasBlock struct {
Modification
}
type GasBlockResult struct {
*Result
Items []GasBlock `json:"items"`
}
func (r *GasBlockResult) GetEntities() []Entity {
e := make([]Entity, len(r.Items))
for i, item := range r.Items {
e[i] = item
}
return e
}
type Handguard struct {
Modification
}
type HandguardResult struct {
*Result
Items []Handguard `json:"items"`
}
func (r *HandguardResult) GetEntities() []Entity {
e := make([]Entity, len(r.Items))
for i, item := range r.Items {
e[i] = item
}
return e
}
type Launcher struct {
Modification
Caliber string `json:"caliber"`
}
type LauncherResult struct {
*Result
Items []Launcher `json:"items"`
}
func (r *LauncherResult) GetEntities() []Entity {
e := make([]Entity, len(r.Items))
for i, item := range r.Items {
e[i] = item
}
return e
}
type Mount struct {
Modification
}
type MountResult struct {
*Result
Items []Mount `json:"items"`
}
func (r *MountResult) GetEntities() []Entity {
e := make([]Entity, len(r.Items))
for i, item := range r.Items {
e[i] = item
}
return e
}
type Muzzle struct {
Modification
Type string `json:"type"`
Velocity float64 `json:"velocity"`
}
type MuzzleResult struct {
*Result
Items []Muzzle `json:"items"`
}
func (r *MuzzleResult) GetEntities() []Entity {
e := make([]Entity, len(r.Items))
for i, item := range r.Items {
e[i] = item
}
return e
}
var modMuzzleFilter = Filter{
"type": {
"brake",
"combo",
"compensator",
		"suppressor",
},
}
type PistolGrip struct {
Modification
}
type PistolGripResult struct {
*Result
Items []PistolGrip `json:"items"`
}
func (r *PistolGripResult) GetEntities() []Entity {
e := make([]Entity, len(r.Items))
for i, item := range r.Items {
e[i] = item
}
return e
}
type Receiver struct {
Modification
Velocity float64 `json:"velocity"`
}
type ReceiverResult struct {
*Result
Items []Receiver `json:"items"`
}
func (r *ReceiverResult) GetEntities() []Entity {
e := make([]Entity, len(r.Items))
for i, item := range r.Items {
e[i] = item
}
return e
}
type Sight struct {
Modification
Type string `json:"type"`
Magnification []string `json:"magnification"`
VariableZoom bool `json:"variableZoom"`
ZeroDistances []int64 `json:"zeroDistances"`
}
type SightResult struct {
*Result
Items []Sight `json:"items"`
}
func (r *SightResult) GetEntities() []Entity {
e := make([]Entity, len(r.Items))
for i, item := range r.Items {
e[i] = item
}
return e
}
var modSightFilter = Filter{
"type": {
"hybrid",
"iron",
"reflex",
"telescopic",
},
}
type SightSpecial struct {
Sight
OpticSpecial
}
type SightSpecialResult struct {
*Result
Items []SightSpecial `json:"items"`
}
func (r *SightSpecialResult) GetEntities() []Entity {
e := make([]Entity, len(r.Items))
for i, item := range r.Items {
e[i] = item
}
return e
}
var modSightSpecialFilter = Filter{
"type": {
"nightVision",
"thermalVision",
},
}
type Stock struct {
Modification
FoldRectractable bool `json:"foldRectractable"`
}
type StockResult struct {
*Result
Items []Stock `json:"items"`
}
func (r *StockResult) GetEntities() []Entity {
e := make([]Entity, len(r.Items))
for i, item := range r.Items {
e[i] = item
}
return e
}
// Gear modifications //
type Goggles struct {
Modification
Type string `json:"type"`
}
type GogglesResult struct {
*Result
Items []Goggles `json:"items"`
}
func (r *GogglesResult) GetEntities() []Entity {
e := make([]Entity, len(r.Items))
for i, item := range r.Items {
e[i] = item
}
return e
}
var modGogglesFilter = Filter{
"type": {
"nightVision",
"thermalVision",
},
}
type GogglesSpecial struct {
Goggles
OpticSpecial
}
type GogglesSpecialResult struct {
*Result
Items []GogglesSpecial `json:"items"`
}
func (r *GogglesSpecialResult) GetEntities() []Entity {
e := make([]Entity, len(r.Items))
for i, item := range r.Items {
e[i] = item
}
return e
}
// Properties //
type OpticSpecial struct {
Modes []string `json:"modes"`
Color RGBA `json:"color"`
Noise string `json:"noise"`
}
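The `GetEntities` implementations above repeat the same slice-conversion loop for every result type. If the codebase targets Go 1.18 or later, a single generic helper could replace that repetition. A hedged sketch follows — the `Entity` interface here is a hypothetical stand-in (its real method set lives elsewhere in this repo), so this illustrates the pattern rather than a drop-in change:

```go
package main

import "fmt"

// Entity is a placeholder for the interface the result types above
// convert to; the actual interface is defined elsewhere in the repo.
type Entity interface{ Name() string }

// toEntities converts any slice whose element type satisfies Entity
// into []Entity, replacing the per-type loops in each GetEntities method.
func toEntities[T Entity](items []T) []Entity {
	e := make([]Entity, len(items))
	for i, item := range items {
		e[i] = item
	}
	return e
}

// item is a minimal Entity implementation used only for demonstration.
type item struct{ name string }

func (it item) Name() string { return it.name }

func main() {
	es := toEntities([]item{{"a"}, {"b"}})
	fmt.Println(len(es), es[0].Name())
}
```

With such a helper, each `GetEntities` body collapses to `return toEntities(r.Items)`, assuming the item types satisfy the real `Entity` interface (which the existing assignments `e[i] = item` already imply).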
Sergio Fabbrini
Sergio Fabbrini (born 21 February 1949) is an Italian political scientist. He is Dean of the Department of Political Science and Professor of Political Science and International Relations at the Libera Università Internazionale degli Studi Sociali Guido Carli in Rome, where he holds a Jean Monnet Chair. He is the founder and former Director of the LUISS School of Government. He is also a recurrent professor of Comparative Politics at the Institute of Governmental Studies at the University of California, Berkeley.
He contributed to building, and then served as Director of, the School of International Studies at the University of Trento from 2006 to 2009. He was the Editor of the Italian Journal of Political Science (Rivista Italiana di Scienza Politica) from 2004 to 2009. He is also an editorialist for the Italian newspaper "Il Sole 24 Ore".
Background
Fabbrini was born in Pesaro. He did his undergraduate and graduate studies at the University of Trento, Italy. Having started his university studies in 1969, he took a four-year degree in Sociology in 1973, graduating cum laude with a dissertation on the role of the state in Italy's post-Second World War economic miracle. Because Italy had no doctoral programs in the 1970s, he obtained a three-year scholarship (1974–1977), equivalent to a Ph.D. program, to specialize in political economy. His research concerned the place of the state and politics in the theories of the classical political economists, and was published in his 1977 dissertation, "Thinking Over the Theory of Value of Classical Political Economists".
He then received the equivalent of a four-year post-doctoral fellowship (1977–1981) to investigate "The political economy of the welfare state", researching at the Department of Economics at Cambridge University, United Kingdom, and the Department of Economics at the University of Trento. At the beginning of the 1980s, thanks to a NATO Fellowship and an Italian CNR Scholarship, he spent three years researching at the University of California at Riverside and at Berkeley. Since the beginning of the 1990s he has taught periodically at the University of California, Berkeley, in the Department of Political Science and at the Institute of Governmental Studies.
Scholarly contributions
He has published fifteen books, two co-authored books, and fourteen edited or co-edited books or journal special issues, as well as more than two hundred scientific articles and essays in seven languages on comparative and European government and politics, American government and politics, international relations and foreign policy, Italian government and politics, and political theory. According to a 2010 review:
In books and articles over the last decade, Italian political scientist Sergio Fabbrini has been scrambling to understand [the] recent ebb and flow in transatlantic relations. Anti-Americanism in Europe and anti-Europeanism in the United States, Fabbrini argues in America and Its Critics, have challenged the viability of NATO and, prior to Obama’s election, called into question the ability to cooperate on global concerns from terrorism to global warming. At the same time, however, Fabbrini has devoted several articles and an entire book, Compound Democracies, to the thesis that the United States and Europe are converging at an institutional level as examples of what he calls “compound democracies.” Over the long term, in other words, the two political systems are becoming more alike even as the politicians themselves, in the short term, articulate a different set of political values.
Regarding his main recent contributions: (1) he brought the analysis of the European Union (EU) back into a comparative framework; (2) he showed that the EU cannot be analyzed with the categories used for nation states; (3) he developed a more comprehensive distinction between national democracies on the basis of their functional logic and institutional structure; (4) he elaborated the original model of 'compound democracy' to explain the functional logic and institutional structure of democratic unions of states (such as the EU, but also the United States and Switzerland), thus distinguishing between unions of states and nation states; (5) he defined a novel model for understanding political leadership in contemporary governmental systems.
Teaching
He was Jemolo Fellow at Nuffield College, Oxford University. He was Jean Monnet Chair Professor at the Robert Schuman Centre for Advanced Studies and Visiting Professor in the Department of Political and Social Sciences at the European University Institute in Florence. He was a Fulbright Assistant Professor at Harvard University in 1987–1988. He has lectured, among other places, in Canada (Carleton University), Mexico (El Colegio de México, Mexico City), Argentina (University of Buenos Aires and Universidad Abierta Interamericana), Ecuador (Simon Bolivar University, Quito), China (Nanjing University), Japan (Osaka University, Tokyo Imperial University and Sapporo University), Thailand (Chulalongkorn University, Bangkok), the Philippines (University of the Philippines Diliman, Manila), and at several US and European universities.
At the LUISS School of Government he is the director of the Master in International Public Affairs, and he also teaches in other graduate courses offered by the School.
Recognition
He won the 2017 "Spinelli Prize for political editorials on Europe", the 2011 "Capalbio Prize for Europe", the 2009 "Filippo Burzio Prize for the Political Sciences" and the 2006 "Amalfi European Prize for the Social Sciences". He was awarded an honorary professorship by the Universidad Interamericana of Buenos Aires (Argentina). He was the Editor of the nine-volume series "The Institutions of Contemporary Democracies" for the Italian publisher G. Laterza. He is a referee for academic journals such as the "American Political Science Review", "Comparative Political Studies", "Perspectives on Politics", "Political Behavior", "European Journal of Political Research", "West European Politics" and "European Political Science". He was a member of the Steering Committee of the European Consortium for Political Research (ECPR) Standing Group on the European Union. He is currently a member of the executive board of the Research Committee on "European Unification" of the IPSA (International Political Science Association). He is a member of several academic associations and organizations.
Personal life
He is married to Manuela Cescatti; they have two sons.
// mann-software/rulee-ts
import { ObjectValueProvider } from './object-value-provider';
import { mock } from 'jest-mock-extended';
interface ComplexType {
a?: {
c?: {
d?: number;
};
f?: boolean;
};
b: string;
}
test('object value provider', () => {
const obj: ComplexType = { a: { f: true }, b: 'abc' };
const objValProv1 = new ObjectValueProvider<ComplexType, number>(obj,
(obj) => obj?.a?.c?.d ?? null,
(obj, val) => obj.a = { ...obj?.a, c: { ...obj?.a?.c, d: val ?? undefined } } // you may use lodash set for this
);
const objValProv2 = new ObjectValueProvider<ComplexType, string>(obj, (obj) => obj.b, (obj, val) => obj.b = val ?? '');
expect(objValProv1.getValue()).toBe(null);
objValProv1.setValue(42);
expect(objValProv1.getValue()).toBe(42);
expect(obj?.a?.c?.d).toBe(42);
expect(obj?.a?.f).toBe(true);
expect(objValProv2.getValue()).toBe('abc');
objValProv2.setValue('');
expect(objValProv2.getValue()).toBe('');
expect(obj.b).toBe('');
});
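The test above exercises `ObjectValueProvider` purely through its constructor and `getValue`/`setValue` methods. For readers of the test, here is a minimal sketch of what such a getter/setter-based provider might look like, inferred solely from that usage — the actual implementation in rulee-ts may differ, and the class name here is hypothetical:

```typescript
// SimpleValueProvider: a minimal getter/setter-based value provider,
// inferred from the usage in the test above (not the real rulee-ts class).
class SimpleValueProvider<T, V> {
    constructor(
        private readonly obj: T,
        private readonly getter: (obj: T) => V | null,
        private readonly setter: (obj: T, value: V | null) => void,
    ) {}

    // Read the current value from the wrapped object via the getter.
    getValue(): V | null {
        return this.getter(this.obj);
    }

    // Write a value into the wrapped object via the setter.
    setValue(value: V | null): void {
        this.setter(this.obj, value);
    }
}

// Usage mirroring the test: read and write a nested optional property.
const data: { a?: { c?: { d?: number } } } = {};
const provider = new SimpleValueProvider<typeof data, number>(
    data,
    (o) => o.a?.c?.d ?? null,
    (o, v) => { o.a = { ...o.a, c: { ...o.a?.c, d: v ?? undefined } }; },
);
provider.setValue(7);
console.log(provider.getValue()); // 7
```

The spread-based setter preserves sibling properties (as the test's `obj?.a?.f` assertion checks); as the inline comment in the test notes, lodash's `set` could express the same nested write more compactly.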