Short vector code generation for the discrete Fourier transform

In this paper we use a mathematical approach to automatically generate high-performance short vector code for the discrete Fourier transform (DFT). We represent the well-known Cooley-Tukey fast Fourier transform in a mathematical notation and formally derive a "short vector variant". Using this recursion, we generate for a given DFT a large number of different algorithms, represented as formulas, and translate them into short vector code. We then present a vector-code-specific dynamic programming method that searches the space of different implementations for the fastest on the given architecture. We implemented this approach as part of the SPIRAL library generator. On the Pentium III and Pentium 4, our automatically generated SSE and SSE2 vector code compares favorably with the hand-tuned Intel vendor library.
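For reference, the recursion being vectorized is the classical Cooley-Tukey factorization, which in the Kronecker-product formalism used by SPIRAL reads, for a transform size $n = km$,

$$\mathrm{DFT}_{km} = (\mathrm{DFT}_k \otimes I_m)\, T^{km}_m\, (I_k \otimes \mathrm{DFT}_m)\, L^{km}_k,$$

where $T^{km}_m$ is the diagonal matrix of twiddle factors and $L^{km}_k$ is the stride permutation. This is the standard identity only, stated here for context; the paper's "short vector variant" is a derived form of this rule and is not reproduced here.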
The use of personal computers (PCs), personal digital assistants (PDAs), Web-enabled phones, wireline and wireless networks, the Internet, Web-based query systems and engines, and the like has gained relatively widespread acceptance in recent years. This is due, in large part, to the relatively widespread availability of high-speed, broadband Internet access through digital subscriber lines (DSLs) (including asymmetric digital subscriber lines (ADSLs) and very-high-bit-rate digital subscriber lines (VDSLs)), cable modems, satellite modems, wireless local area networks (WLANs), and the like. Thus far, user interaction with PCs, PDAs, Web-enabled phones, wireline and wireless networks, the Internet, Web-based query systems and engines, and the like has been primarily non-voice-based, through keyboards, mice, intelligent electronic pads, monitors, printers, and the like. This has limited the adoption and use of these devices and systems somewhat, and it has long been felt that allowing for accurate, precise, and reliable voice-based user interaction, mimicking normal human interaction, would be advantageous. For example, allowing for accurate, precise, and reliable voice-based user interaction would certainly draw more users to e-commerce, e-support, e-learning, etc., and reduce learning curves.
In this context, “mimicking normal human interaction” means that a user would be able to speak a question into a Web-enabled device or the like and the Web-enabled device or the like would respond quickly with an appropriate answer or response, through text, graphics, or synthesized speech, the Web-enabled device or the like not simply converting the user's question into text and performing a routine search, but truly understanding and interpreting the user's question. Thus, if the user speaks a non-specific or incomplete question into the Web-enabled device or the like, the Web-enabled device or the like would be capable of inferring the user's meaning based on context or environment. This is only possible through multimodal input.
Several software products currently allow for limited voice-based user interaction with PCs, PDAs, and the like. Such software products include, for example, ViaVoice™ by International Business Machines Corp. and Dragon NaturallySpeaking™ by ScanSoft, Inc. These software products, however, allow a user to perform dictation, voice-based command-and-control functions (opening files, closing files, etc.), and voice-based searching (using previously-trained uniform resource locators (URLs)) only after time-consuming, and often inaccurate, imprecise, and unreliable, voice training. Their accuracy rates are inextricably tied to the single user who provided the voice training.
Typical efforts to implement voice-based user interaction in a support and information retrieval context may be seen in U.S. Pat. No. 5,802,526, to Fawcett et al. (Sep. 1, 1998). Typical efforts to implement voice-based user interaction in an Internet context may be seen in U.S. Pat. No. 5,819,220, to Sarukkai et al. (Oct. 6, 1998).
U.S. Pat. No. 6,446,064, to Livowsky (Sep. 3, 2002), discloses a system and method for enhancing e-commerce using a natural language interface. The natural language interface allows a user to formulate a query in natural language form, rather than using conventional search terms. In other words, the natural language interface provides a “user-friendly” interface. The natural language interface may process a query even if there is not an exact match between the user-formulated search terms and the content in a database. Furthermore, the natural language interface is capable of processing misspelled queries or queries having syntax errors. The method for enhancing e-commerce using a natural language interface includes the steps of accessing a user interface provided by a service provider, entering a query using a natural language interface, the query being formed in natural language form, processing the query using the natural language interface, searching a database using the processed query, retrieving results from the database, and providing the results to the user. The system for enhancing e-commerce on the Internet includes a user interface for receiving a query in natural language form, a natural language interface coupled to the user interface for processing the query, a service provider coupled to the user interface for receiving the processed query, and one or more databases coupled to the user interface for storing information, wherein the system searches the one or more databases using the processed query and provides the results to the user through the user interface.
U.S. Pat. No. 6,615,172, to Bennett et al. (Sep. 2, 2003), discloses an intelligent query system for processing voice-based queries. This distributed client-server system, typically implemented on an intranet or over the Internet, accepts a user's queries at the user's PC, PDA, or workstation using a speech input interface. After converting the user's query from speech to text, a two-step algorithm employing a natural language engine, a database processor, and a full-text structured query language (SQL) database is implemented to find a single answer that best matches the user's query. The system, as implemented, accepts environmental variables selected by the user and is scalable to provide answers to a variety and quantity of user-initiated queries.
U.S. Patent Application Publication No. 2001/0039493, to Pustejovsky et al. (Nov. 8, 2001), discloses, in an exemplary embodiment, a system and method for answering voice-based queries using a remote mobile device, e.g., a mobile phone, and a natural language system.
U.S. Patent Application Publication No. 2003/0115192, to Kil et al. (Jun. 19, 2003), discloses, in various embodiments, an apparatus and method for controlling a data mining operation by specifying the goal of data mining in natural language, processing the data mining operation without any further input beyond the goal specification, and displaying key performance results of the data mining operation in natural language. One specific embodiment includes providing a user interface having a control, and receiving, through that control, natural language input describing the goal of the data mining operation. A second specific embodiment identifies key performance results, providing a user interface having a control for communicating information, and communicating a natural language description of the key performance results using the control of the user interface. In a third specific embodiment, input data determining a data mining operation goal is the only input required by the data mining application.
U.S. Patent Application Publication No. 2004/0044516, to Kennewick et al. (Mar. 4, 2004), discloses systems and methods for receiving natural language queries and/or commands and executing the queries and/or commands. The systems and methods overcome some of the deficiencies of other speech query and response systems through the application of a complete speech-based information query, retrieval, presentation, and command environment. This environment makes significant use of context, prior information, domain knowledge, and user-specific profile data to achieve a natural language environment for one or more users making queries or commands in multiple domains. Through this integrated approach, a complete speech-based natural language query and response environment may be created. The systems and methods create, store, and use extensive personal profile information for each user, thereby improving the reliability of determining the context and presenting the expected results for a particular question or command.
U.S. Patent Application Publication No. 2004/0117189, to Bennett (Jun. 17, 2004), discloses an intelligent query system for processing voice-based queries. This distributed client-server system, typically implemented on an intranet or over the Internet, accepts a user's queries at the user's PC, PDA, or workstation using a speech input interface. After converting the user's query from speech to text, a natural language engine, a database processor, and a full-text SQL database are implemented to find a single answer that best matches the user's query. Both statistical and semantic decoding are used to assist and improve the performance of the query recognition.
Each of the systems, apparatuses, software products, and methods described above suffers from at least one of the following shortcomings. Several of the systems, apparatuses, software products, and methods require time-consuming, and often inaccurate, imprecise, and unreliable, voice training. Several of the systems, apparatuses, software products, and methods are single-modal, meaning that a user may interact with each of the systems, apparatuses, software products, and methods in only one way, i.e., each utilizes only a single voice-based input. As a result of this single-modality, there is no context or environment within which a voice-based search is performed, and several of the systems, apparatuses, software products, and methods must perform multiple iterations to pinpoint a result or answer related to the voice-based search.
Thus, what is needed are natural language query systems and methods for processing voice and proximity-based queries that do not require time-consuming, and often inaccurate, imprecise, and unreliable, voice training. What is also needed are natural language query systems and methods that are multimodal, meaning that a user may interact with the natural language query systems and methods in a number of ways simultaneously and that the natural language query systems and methods utilize multiple inputs in order to create and take into consideration a context or environment within which a voice and/or proximity-based search or the like is performed. In other words, what is needed are natural language query systems and methods that mimic normal human interaction, attributing meaning to words based on the context or environment within which they are spoken. What is further needed are natural language query systems and methods that perform only a single iteration to pinpoint a result or answer related to a voice and/or proximity-based search or the like.
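As a rough illustration of the multimodal principle described above (a hypothetical sketch; the names and logic below are illustrative and are not taken from any of the cited patents), a query processor might merge recognized speech with proximity-derived context before a single search is issued:

from dataclasses import dataclass
from typing import Optional

@dataclass
class QueryContext:
    """Hypothetical proximity inputs, e.g. from a location or RFID scan."""
    location: Optional[str] = None
    nearby_object: Optional[str] = None

def interpret(utterance: str, ctx: QueryContext) -> str:
    """Resolve a vague spoken question against multimodal context (sketch)."""
    query = utterance.strip().lower().rstrip("?")
    # A non-specific word such as "this" is completed from the proximity input,
    # so a single iteration can pinpoint the answer.
    if "this" in query.split() and ctx.nearby_object:
        query = query.replace("this", ctx.nearby_object)
    if ctx.location:
        query += f" (context: {ctx.location})"
    return query

# "How much is this?" asked near a coffee maker in aisle 7 becomes a complete query:
print(interpret("How much is this?", QueryContext("aisle 7", "coffee maker")))
# -> "how much is coffee maker (context: aisle 7)"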
// src/app/shared/enums/poster-size.ts
// Available poster width presets; "original" requests the full-resolution image.
export enum PosterSize {
W92 = "w92",
W154 = "w154",
W185 = "w185",
W342 = "w342",
W500 = "w500",
W780 = "w780",
ORIGINAL = "original"
}
// Repository: bzxy/cydia
/**
* This header is generated by class-dump-z 0.2b.
*
* Source: /System/Library/PrivateFrameworks/Stocks.framework/Stocks
*/
#import <Stocks/NSURLConnectionDelegate.h>
#import <Stocks/XXUnknownSuperclass.h>
@class NSURLConnection, NSMutableData, NSString, NSURL;
@interface NetPreferences : XXUnknownSuperclass <NSURLConnectionDelegate> {
NSString *_buildVersion; // 4 = 0x4
NSString *_productVersion; // 8 = 0x8
NSString *_UUID; // 12 = 0xc
NSURL *_serviceURL; // 16 = 0x10
NSURL *_serviceURLGT; // 20 = 0x14
NSURLConnection *_gtButtonLogoConnection; // 24 = 0x18
NSMutableData *_gtButtonLogoData; // 28 = 0x1c
NSURLConnection *_gtBacksideLogoConnection; // 32 = 0x20
NSMutableData *_gtBacksideLogoData; // 36 = 0x24
BOOL _serviceDebugging; // 40 = 0x28
BOOL _isNetworkReachable; // 41 = 0x29
}
@property(assign, nonatomic, getter=isNetworkReachable) BOOL networkReachable; // G=0xefe9; S=0xef75;
@property(readonly, retain) NSString *UUID; // G=0xea01; converted property
@property(readonly, retain) NSURL *serviceURL; // G=0xe7f1; converted property
@property(readonly, retain) NSURL *serviceURLGT; // G=0xe73d; converted property
@property(readonly, assign) BOOL serviceDebugging; // G=0xda41; converted property
+ (id)sharedPreferences; // 0xdb6d
- (id)init; // 0xda51
- (void)setupLogging; // 0xf0e9
// declared property getter: - (BOOL)isNetworkReachable; // 0xefe9
// declared property setter: - (void)setNetworkReachable:(BOOL)reachable; // 0xef75
- (id)_stocksUserAgent; // 0xee79
- (id)_stocksCountryCode; // 0xee1d
- (id)_stocksAcceptLanguage; // 0xed6d
- (void)addStocksHeadersToPostRequest:(id)postRequest; // 0xec85
- (id)financeRequestAttributes; // 0xeb51
// converted property getter: - (id)UUID; // 0xea01
- (BOOL)multipleDataSourcesAllowedForGT; // 0xe965
- (id)_urlStringWithHost:(id)host; // 0xe905
// converted property getter: - (id)serviceURL; // 0xe7f1
// converted property getter: - (id)serviceURLGT; // 0xe73d
- (id)newsServiceURLForStock:(id)stock; // 0xe649
- (id)fullQuoteURLOverrideForStock:(id)stock; // 0xe49d
- (id)backsideLogoURL; // 0xe3b9
- (id)_cacheDirectoryPath; // 0xe349
- (id)logoButtonImage; // 0xe16d
- (id)logoBacksideImage; // 0xdfc5
- (void)connection:(id)connection didReceiveData:(id)data; // 0xdef9
- (void)connection:(id)connection didFailWithError:(id)error; // 0xde6d
- (void)connectionDidFinishLoading:(id)connection; // 0xdc6d
// converted property getter: - (BOOL)serviceDebugging; // 0xda41
- (id)serviceDebuggingPath; // 0xdbb5
@end
// src/UCP.cpp
#include "UCP.h"
#include "MCP.h"
#include "Application.h"
#include "ModuleAgentContainer.h"
// States of the UCP agent's negotiation protocol.
enum State
{
	ST_INIT,
	ST_WAITING_ITEM_REQUEST,
	ST_WAITING_ITEM_CONSTRAINT,
	ST_IDLE,
	ST_FINISHED = 10 // kept at 10 to match the MCP's finished state (see update())
};
UCP::UCP(Node *node, uint16_t requestedItemId, uint16_t contributedItemId, const AgentLocation &uccLocation, unsigned int searchDepth, unsigned int totalSearch) :
Agent(node),
_requestedItemId(requestedItemId),
_contributedItemId(contributedItemId),
_searchDepth(searchDepth),
_totalSearch(totalSearch)
{
	// Store the remaining input parameters and start in the initial state.
	_uccLocation = uccLocation;
	setState(ST_INIT);
	_negotiationAccepted = false;
}
UCP::~UCP()
{
}
void UCP::update()
{
switch (state())
{
case ST_INIT:
SendPacketToUCC();
setState(ST_WAITING_ITEM_REQUEST);
break;
case ST_WAITING_ITEM_REQUEST:
break;
case ST_WAITING_ITEM_CONSTRAINT:
break;
	case ST_IDLE:
	{
		// Wait for the child MCP (if any) to finish its negotiation,
		// then forward its accept/reject decision to the UCC.
		if (_mcp.get() && _mcp->state() == 10) // 10 == the MCP's finished state
		{
			_negotiationAccepted = _mcp->NegotiationAccepted();
			setState(ST_FINISHED);
			SendPacketToUCCAccept();
		}
		break;
	}
case ST_FINISHED:
break;
default:
break;
}
}
void UCP::stop()
{
	// Tear down the search hierarchy hanging below this agent.
	if (_mcp != nullptr)
		_mcp->stop();
	destroy();
}
void UCP::OnPacketReceived(TCPSocketPtr socket, const PacketHeader &packetHeader, InputMemoryStream &stream)
{
const PacketType packetType = packetHeader.packetType;
switch (packetType)
{
	case PacketType::RequestConstraintResponse:
	{
		if (state() == ST_WAITING_ITEM_REQUEST || state() == ST_IDLE)
		{
			PacketConstraintResponse packetData;
			packetData.Read(stream);

			// Helper: reply to the requester with our current accept/reject decision.
			auto sendAcceptNegotiation = [&]() {
				PacketHeader outPacketHead;
				outPacketHead.packetType = PacketType::AcceptNegotiation;
				outPacketHead.srcAgentId = id();
				outPacketHead.dstAgentId = packetHeader.srcAgentId;
				PacketAcceptNegotiation packetNegot;
				packetNegot.acceptedNegotiation = _negotiationAccepted;
				OutputMemoryStream outStream;
				outPacketHead.Write(outStream);
				packetNegot.Write(outStream);
				socket->SendPacket(outStream.GetBufferPtr(), outStream.GetSize());
			};

			if (packetData.itemId == contributedItemId())
			{
				// The requested constraint matches the item we contribute: accept.
				iLog << " - Accept Negotiation: ";
				_negotiationAccepted = true;
				setState(ST_FINISHED);
				sendAcceptNegotiation();
			}
			else
			{
				iLog << " - Search another Negotiation: " << "Search Depth " << _searchDepth;
				if (_totalSearch >= MAX_SEARCH)
				{
					// Global search budget exhausted: finish with a rejection.
					iLog << " - STOP SEARCH: ";
					setState(ST_FINISHED);
					sendAcceptNegotiation();
				}
				else if (_searchDepth == 3)
				{
					// Maximum depth for this branch reached: finish with a rejection.
					iLog << " - MAX SEARCH DEPTH: ";
					setState(ST_FINISHED);
					sendAcceptNegotiation();
				}
				else
				{
					// Delegate: spawn a child MCP to keep searching for the requested item.
					setState(ST_IDLE);
					_searchDepth += 1;
					Node* newNode = new Node(App->agentContainer->allAgents().size());
					_mcp = App->agentContainer->createMCP(newNode, packetData.itemId, contributedItemId(), _searchDepth, _totalSearch);
				}
			}
		}
		else
		{
			wLog << "UCP - OnPacketReceived() - PacketType::RequestConstraintResponse was unexpected in this state";
		}
		break;
	}
	default:
		wLog << "UCP - OnPacketReceived() - Unexpected PacketType.";
		break;
	}
}
bool UCP::SendPacketToUCC()
{
// Create message header and data
PacketHeader packetHead;
packetHead.packetType = PacketType::RequestConstraint;
packetHead.srcAgentId = id();
packetHead.dstAgentId = _uccLocation.agentId;
// Serialize message
OutputMemoryStream stream;
packetHead.Write(stream);
return sendPacketToAgent(_uccLocation.hostIP, _uccLocation.hostPort, stream);
}
bool UCP::SendPacketToUCCAccept()
{
// Create message header and data
PacketHeader packetHead;
packetHead.packetType = PacketType::AcceptNegotiation;
packetHead.srcAgentId = id();
packetHead.dstAgentId = _uccLocation.agentId;
PacketAcceptNegotiation packetNegot;
packetNegot.acceptedNegotiation = _negotiationAccepted;
// Serialize message
OutputMemoryStream stream;
packetHead.Write(stream);
packetNegot.Write(stream);
return sendPacketToAgent(_uccLocation.hostIP, _uccLocation.hostPort, stream);
}
# coding: utf-8
import logging
import requests
import mimetypes
from io import BytesIO
from urllib.parse import urlparse
from datetime import datetime, timedelta
from collections import OrderedDict
from flask_babelex import gettext as _
from flask import (
render_template,
abort,
current_app,
request,
session,
redirect,
jsonify,
url_for,
Response,
send_from_directory,
g,
make_response,
)
from werkzeug.contrib.atom import AtomFeed
from urllib.parse import urljoin
from legendarium.formatter import descriptive_short_format
from . import main
from webapp import babel
from webapp import cache
from webapp import controllers
from webapp.choices import STUDY_AREAS
from webapp.utils import utils
from webapp.utils.caching import cache_key_with_lang, cache_key_with_lang_with_qs
from webapp import forms
from webapp.config.lang_names import display_original_lang_name
from opac_schema.v1.models import Journal, Issue, Article, Collection
from lxml import etree
from packtools import HTMLGenerator
logger = logging.getLogger(__name__)
JOURNAL_UNPUBLISH = _("O periódico está indisponível por motivo de: ")
ISSUE_UNPUBLISH = _("O número está indisponível por motivo de: ")
ARTICLE_UNPUBLISH = _("O artigo está indisponível por motivo de: ")
IAHX_LANGS = dict(
p='pt',
e='es',
i='en',
)
def url_external(endpoint, **kwargs):
url = url_for(endpoint, **kwargs)
return urljoin(request.url_root, url)
class RetryableError(Exception):
    """Recoverable error that does not require changing client-side state,
    e.g. timeouts or errors caused by network partitioning.
    """
class NonRetryableError(Exception):
    """Error that cannot be recovered from without changing client-side
    state, e.g. the requested resource does not exist, the URI is invalid, etc.
    """
def fetch_data(url: str, timeout: float = 2) -> bytes:
try:
response = requests.get(url, timeout=timeout)
except (requests.ConnectionError, requests.Timeout) as exc:
raise RetryableError(exc) from exc
except (requests.InvalidSchema, requests.MissingSchema, requests.InvalidURL) as exc:
raise NonRetryableError(exc) from exc
else:
try:
response.raise_for_status()
except requests.HTTPError as exc:
if 400 <= exc.response.status_code < 500:
raise NonRetryableError(exc) from exc
elif 500 <= exc.response.status_code < 600:
raise RetryableError(exc) from exc
else:
raise
return response.content
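# Usage note (a sketch, not part of the original module): callers are expected
# to retry only on RetryableError, e.g. with a simple backoff loop:
#
#     import time
#     for attempt in range(3):
#         try:
#             content = fetch_data("https://example.org/article.xml")
#             break
#         except RetryableError:
#             time.sleep(2 ** attempt)  # transient failure: wait and retry
#
# NonRetryableError signals a client-side problem (invalid URL, 4xx response)
# and should be propagated rather than retried.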
@main.before_app_request
def add_collection_to_g():
if not hasattr(g, 'collection'):
try:
collection = controllers.get_current_collection()
setattr(g, 'collection', collection)
except Exception:
            # TODO: discuss what to do here
setattr(g, 'collection', {})
@main.after_request
def add_header(response):
response.headers['x-content-type-options'] = 'nosniff'
return response
@main.after_request
def add_language_code(response):
language = session.get('lang', get_locale())
response.set_cookie('language', language)
return response
@main.before_app_request
def add_forms_to_g():
setattr(g, 'email_share', forms.EmailShareForm())
setattr(g, 'email_contact', forms.ContactForm())
setattr(g, 'error', forms.ErrorForm())
@main.before_app_request
def add_scielo_org_config_to_g():
language = session.get('lang', get_locale())
scielo_org_links = {
key: url[language]
for key, url in current_app.config.get('SCIELO_ORG_URIS', {}).items()
}
setattr(g, 'scielo_org', scielo_org_links)
@babel.localeselector
def get_locale():
langs = current_app.config.get('LANGUAGES')
lang_from_headers = request.accept_languages.best_match(list(langs.keys()))
if 'lang' not in list(session.keys()):
session['lang'] = lang_from_headers
if not lang_from_headers and not session['lang']:
        # If the language cannot be detected and the session has no 'lang'
        # key, fall back to the default locale.
session['lang'] = current_app.config.get('BABEL_DEFAULT_LOCALE')
return session['lang']
@main.route('/set_locale/<string:lang_code>/')
def set_locale(lang_code):
langs = current_app.config.get('LANGUAGES')
if lang_code not in list(langs.keys()):
abort(400, _('Código de idioma inválido'))
referrer = request.referrer
hash = request.args.get('hash')
if hash:
referrer += "#" + hash
    # store the language code in the session
session['lang'] = lang_code
return redirect(referrer)
def get_lang_from_session():
"""
Tenta retornar o idioma da seção, caso não consiga retorna
BABEL_DEFAULT_LOCALE.
"""
try:
return session['lang']
except KeyError:
return current_app.config.get('BABEL_DEFAULT_LOCALE')
@main.route('/')
@cache.cached(key_prefix=cache_key_with_lang)
def index():
language = session.get('lang', get_locale())
news = controllers.get_latest_news_by_lang(language)
tweets = controllers.get_collection_tweets()
press_releases = controllers.get_press_releases({'language': language})
urls = {
'downloads': '{0}/w/accesses?collection={1}'.format(
current_app.config['METRICS_URL'],
current_app.config['OPAC_COLLECTION']),
'references': '{0}/w/publication/size?collection={1}'.format(
current_app.config['METRICS_URL'],
current_app.config['OPAC_COLLECTION']),
'other': '{0}/?collection={1}'.format(
current_app.config['METRICS_URL'],
current_app.config['OPAC_COLLECTION'])
}
if (
g.collection is not None
and isinstance(g.collection, Collection)
and g.collection.metrics is not None
and current_app.config['USE_HOME_METRICS']
):
g.collection.metrics.total_journal = Journal.objects.filter(
is_public=True, current_status="current"
).count()
g.collection.metrics.total_article = Article.objects.filter(
is_public=True
).count()
context = {
'news': news,
'urls': urls,
'tweets': tweets,
'press_releases': press_releases,
}
return render_template("collection/index.html", **context)
# ##################################Collection###################################
@main.route('/journals/alpha')
@cache.cached(key_prefix=cache_key_with_lang)
def collection_list():
allowed_filters = ["current", "no-current", ""]
query_filter = request.args.get("status", "")
    if query_filter not in allowed_filters:
query_filter = ""
journals_list = [
controllers.get_journal_json_data(journal)
for journal in controllers.get_journals(query_filter=query_filter)
]
return render_template("collection/list_journal.html",
**{'journals_list': journals_list, 'query_filter': query_filter})
@main.route("/journals/thematic")
@cache.cached(key_prefix=cache_key_with_lang)
def collection_list_thematic():
allowed_query_filters = ["current", "no-current", ""]
allowed_thematic_filters = ["areas", "wos", "publisher"]
thematic_table = {
"areas": "study_areas",
"wos": "subject_categories",
"publisher": "publisher_name",
}
query_filter = request.args.get("status", "")
title_query = request.args.get("query", "")
thematic_filter = request.args.get("filter", "areas")
    if query_filter not in allowed_query_filters:
        query_filter = ""
    if thematic_filter not in allowed_thematic_filters:
        thematic_filter = "areas"
lang = get_lang_from_session()[:2].lower()
objects = controllers.get_journals_grouped_by(
thematic_table[thematic_filter],
title_query,
query_filter=query_filter,
lang=lang,
)
return render_template(
"collection/list_thematic.html",
**{"objects": objects, "query_filter": query_filter, "filter": thematic_filter}
)
@main.route('/journals/feed/')
@cache.cached(key_prefix=cache_key_with_lang)
def collection_list_feed():
language = session.get('lang', get_locale())
collection = controllers.get_current_collection()
title = 'SciELO - %s - %s' % (collection.name, _('Últimos periódicos inseridos na coleção'))
    subtitle = _('10 últimos periódicos inseridos na coleção %s') % collection.name
feed = AtomFeed(title,
subtitle=subtitle,
feed_url=request.url, url=request.url_root)
journals = controllers.get_journals_paginated(
title_query='', page=1, order_by='-created', per_page=10)
if not journals.items:
feed.add('Nenhum periódico encontrado',
url=request.url,
updated=datetime.now())
for journal in journals.items:
issues = controllers.get_issues_by_jid(journal.jid, is_public=True)
last_issue = issues[0] if issues else None
articles = []
if last_issue:
articles = controllers.get_articles_by_iid(last_issue.iid,
is_public=True)
result_dict = OrderedDict()
for article in articles:
section = article.get_section_by_lang(language[:2])
result_dict.setdefault(section, [])
result_dict[section].append(article)
context = {
'journal': journal,
'articles': result_dict,
'language': language,
'last_issue': last_issue
}
feed.add(journal.title,
render_template("collection/list_feed_content.html", **context),
content_type='html',
author=journal.publisher_name,
url=url_external('main.journal_detail', url_seg=journal.url_segment),
updated=journal.updated,
published=journal.created)
return feed.get_response()
@main.route("/about/", methods=['GET'])
@main.route('/about/<string:slug_name>', methods=['GET'])
@cache.cached(key_prefix=cache_key_with_lang_with_qs)
def about_collection(slug_name=None):
language = session.get('lang', get_locale())
context = {}
page = None
if slug_name:
        # a specific page was requested
page = controllers.get_page_by_slug_name(slug_name, language)
if not page:
abort(404, _('Página não encontrada'))
context['page'] = page
else:
        # no slug given: render the list of pages
pages = controllers.get_pages_by_lang(language)
context['pages'] = pages
return render_template("collection/about.html", **context)
# ###################################Journal#####################################
@main.route('/scielo.php/')
@cache.cached(key_prefix=cache_key_with_lang_with_qs)
def router_legacy():
script_php = request.args.get('script', None)
pid = request.args.get('pid', None)
tlng = request.args.get('tlng', None)
allowed_scripts = [
'sci_serial', 'sci_issuetoc', 'sci_arttext', 'sci_abstract', 'sci_issues', 'sci_pdf'
]
if (script_php is not None) and (script_php in allowed_scripts) and not pid:
        # a valid script was given without a pid: the request is invalid
        abort(400, _(u'Requisição inválida ao tentar acessar o artigo com pid: %s') % pid)
elif script_php and pid:
if script_php == 'sci_serial':
# pid = issn
journal = controllers.get_journal_by_issn(pid)
if not journal:
abort(404, _('Periódico não encontrado'))
if not journal.is_public:
abort(404, JOURNAL_UNPUBLISH + _(journal.unpublish_reason))
return redirect(url_for('main.journal_detail',
url_seg=journal.url_segment), code=301)
elif script_php == 'sci_issuetoc':
issue = controllers.get_issue_by_pid(pid)
if not issue:
abort(404, _('Número não encontrado'))
if not issue.is_public:
abort(404, ISSUE_UNPUBLISH + _(issue.unpublish_reason))
if not issue.journal.is_public:
abort(404, JOURNAL_UNPUBLISH + _(issue.journal.unpublish_reason))
if issue.url_segment and "ahead" in issue.url_segment:
                return redirect(
                    url_for('main.aop_toc', url_seg=issue.journal.url_segment), code=301)
return redirect(
url_for(
"main.issue_toc",
url_seg=issue.journal.url_segment,
url_seg_issue=issue.url_segment),
301
)
elif script_php == 'sci_arttext' or script_php == 'sci_abstract':
article = controllers.get_article_by_pid_v2(pid)
if not article:
abort(404, _('Artigo não encontrado'))
            # 'abstract' or None (not False, because False would be converted to the string 'False')
part = (script_php == 'sci_abstract' and 'abstract') or None
if tlng not in article.languages:
tlng = article.original_language
return redirect(url_for('main.article_detail_v3',
url_seg=article.journal.url_segment,
article_pid_v3=article.aid,
part=part,
lang=tlng),
code=301)
elif script_php == 'sci_issues':
journal = controllers.get_journal_by_issn(pid)
if not journal:
abort(404, _('Periódico não encontrado'))
if not journal.is_public:
abort(404, JOURNAL_UNPUBLISH + _(journal.unpublish_reason))
return redirect(url_for('main.issue_grid',
url_seg=journal.url_segment), 301)
elif script_php == 'sci_pdf':
            # access the article's PDF:
article = controllers.get_article_by_pid_v2(pid)
if not article:
abort(404, _('Artigo não encontrado'))
return redirect(
url_for(
'main.article_detail_v3',
url_seg=article.journal.url_segment,
article_pid_v3=article.aid,
format='pdf',
),
code=301
)
else:
            abort(400, _(u'Requisição inválida ao tentar acessar o artigo com pid: %s') % pid)
else:
return redirect('/')
@main.route('/<string:journal_seg>')
@main.route('/journal/<string:journal_seg>')
def journal_detail_legacy_url(journal_seg):
return redirect(url_for('main.journal_detail',
url_seg=journal_seg), code=301)
@main.route('/j/<string:url_seg>/')
@cache.cached(key_prefix=cache_key_with_lang)
def journal_detail(url_seg):
journal = controllers.get_journal_by_url_seg(url_seg)
if not journal:
abort(404, _('Periódico não encontrado'))
if not journal.is_public:
abort(404, JOURNAL_UNPUBLISH + _(journal.unpublish_reason))
utils.fix_journal_last_issue(journal)
    # TODO: restrict this to news related to the journal only
language = session.get('lang', get_locale())
news = controllers.get_latest_news_by_lang(language)
# Press releases
press_releases = controllers.get_press_releases({
'journal': journal,
'language': language})
    # Section list
    # Sections on the journal home page are always kept in English
if journal.last_issue and journal.current_status == "current":
sections = [section for section in journal.last_issue.sections if section.language == 'en']
recent_articles = controllers.get_recent_articles_of_issue(journal.last_issue.iid, is_public=True)
else:
sections = []
recent_articles = []
latest_issue = journal.last_issue
if latest_issue:
latest_issue_legend = descriptive_short_format(
title=journal.title, short_title=journal.short_title,
pubdate=str(latest_issue.year), volume=latest_issue.volume, number=latest_issue.number,
suppl=latest_issue.suppl_text, language=language[:2].lower())
else:
latest_issue_legend = ''
journal_metrics = controllers.get_journal_metrics(journal)
context = {
'journal': journal,
'press_releases': press_releases,
'recent_articles': recent_articles,
'journal_study_areas': [
STUDY_AREAS.get(study_area.upper()) for study_area in journal.study_areas
],
        # the first item in the list is the latest issue.
        # conditional check in case `issues` has no items
'last_issue': latest_issue,
'latest_issue_legend': latest_issue_legend,
'sections': sections if sections else None,
'news': news,
'journal_metrics': journal_metrics
}
return render_template("journal/detail.html", **context)
@main.route('/journal/<string:url_seg>/feed/')
@cache.cached(key_prefix=cache_key_with_lang)
def journal_feed(url_seg):
journal = controllers.get_journal_by_url_seg(url_seg)
if not journal:
abort(404, _('Periódico não encontrado'))
if not journal.is_public:
abort(404, JOURNAL_UNPUBLISH + _(journal.unpublish_reason))
issues = controllers.get_issues_by_jid(journal.jid, is_public=True)
last_issue = issues[0] if issues else None
articles = controllers.get_articles_by_iid(last_issue.iid, is_public=True)
feed = AtomFeed(journal.title,
feed_url=request.url,
url=request.url_root,
subtitle=utils.get_label_issue(last_issue))
feed_language = session.get('lang', get_locale())
feed_language = feed_language[:2].lower()
for article in articles:
        # ######### TODO: review #########
article_lang = feed_language
if feed_language not in article.languages:
article_lang = article.original_language
feed.add(article.title or _('Artigo sem título'),
render_template("issue/feed_content.html", article=article),
content_type='html',
id=article.doi or article.pid,
author=article.authors,
url=url_external('main.article_detail_v3',
url_seg=journal.url_segment,
article_pid_v3=article.aid,
lang=article_lang),
updated=journal.updated,
published=journal.created)
return feed.get_response()
@main.route("/journal/<string:url_seg>/about/", methods=['GET'])
@cache.cached(key_prefix=cache_key_with_lang)
def about_journal(url_seg):
language = session.get('lang', get_locale())
journal = controllers.get_journal_by_url_seg(url_seg)
if not journal:
abort(404, _('Periódico não encontrado'))
if not journal.is_public:
abort(404, JOURNAL_UNPUBLISH + _(journal.unpublish_reason))
latest_issue = utils.fix_journal_last_issue(journal)
if latest_issue:
latest_issue_legend = descriptive_short_format(
title=journal.title, short_title=journal.short_title,
pubdate=str(latest_issue.year), volume=latest_issue.volume, number=latest_issue.number,
suppl=latest_issue.suppl_text, language=language[:2].lower())
else:
latest_issue_legend = None
page = controllers.get_page_by_journal_acron_lang(journal.acronym, language)
context = {
'journal': journal,
'latest_issue_legend': latest_issue_legend,
'last_issue': latest_issue,
'journal_study_areas': [
STUDY_AREAS.get(study_area.upper()) for study_area in journal.study_areas
],
}
if page:
context['content'] = page.content
if page.updated_at:
context['page_updated_at'] = page.updated_at
return render_template("journal/about.html", **context)
@main.route("/journals/search/alpha/ajax/", methods=['GET', ])
@cache.cached(key_prefix=cache_key_with_lang_with_qs)
def journals_search_alpha_ajax():
if not request.is_xhr:
abort(400, _('Requisição inválida. Deve ser por ajax'))
query = request.args.get('query', '', type=str)
query_filter = request.args.get('query_filter', '', type=str)
page = request.args.get('page', 1, type=int)
lang = get_lang_from_session()[:2].lower()
response_data = controllers.get_alpha_list_from_paginated_journals(
title_query=query,
query_filter=query_filter,
page=page,
lang=lang)
return jsonify(response_data)
@main.route("/journals/search/group/by/filter/ajax/", methods=['GET'])
@cache.cached(key_prefix=cache_key_with_lang_with_qs)
def journals_search_by_theme_ajax():
if not request.is_xhr:
abort(400, _('Requisição inválida. Deve ser por ajax'))
query = request.args.get('query', '', type=str)
query_filter = request.args.get('query_filter', '', type=str)
filter = request.args.get('filter', 'areas', type=str)
lang = get_lang_from_session()[:2].lower()
if filter == 'areas':
objects = controllers.get_journals_grouped_by('study_areas', query, query_filter=query_filter, lang=lang)
elif filter == 'wos':
objects = controllers.get_journals_grouped_by('subject_categories', query, query_filter=query_filter, lang=lang)
elif filter == 'publisher':
objects = controllers.get_journals_grouped_by('publisher_name', query, query_filter=query_filter, lang=lang)
else:
return jsonify({
'error': 401,
'message': _('Parámetro "filter" é inválido, deve ser "areas", "wos" ou "publisher".')
})
return jsonify(objects)
@main.route("/journals/download/<string:list_type>/<string:extension>/", methods=['GET', ])
@cache.cached(key_prefix=cache_key_with_lang_with_qs)
def download_journal_list(list_type, extension):
if extension.lower() not in ['csv', 'xls']:
abort(401, _('Parámetro "extension" é inválido, deve ser "csv" ou "xls".'))
elif list_type.lower() not in ['alpha', 'areas', 'wos', 'publisher']:
abort(401, _('Parámetro "list_type" é inválido, deve ser: "alpha", "areas", "wos" ou "publisher".'))
else:
if extension.lower() == 'xls':
mimetype = 'application/vnd.ms-excel'
else:
mimetype = 'text/csv'
query = request.args.get('query', '', type=str)
data = controllers.get_journal_generator_for_csv(list_type=list_type,
title_query=query,
extension=extension.lower())
timestamp = datetime.now().strftime('%Y-%m-%d_%H-%M-%S')
filename = 'journals_%s_%s.%s' % (list_type, timestamp, extension)
response = Response(data, mimetype=mimetype)
response.headers['Content-Disposition'] = 'attachment; filename=%s' % filename
return response
@main.route("/<string:url_seg>/contact", methods=['POST'])
def contact(url_seg):
if not request.is_xhr:
abort(403, _('Requisição inválida, deve ser ajax.'))
if utils.is_recaptcha_valid(request):
form = forms.ContactForm(request.form)
journal = controllers.get_journal_by_url_seg(url_seg)
if not journal.enable_contact:
abort(403, _('Periódico não permite envio de email.'))
recipients = journal.editor_email
if form.validate():
sent, message = controllers.send_email_contact(recipients,
form.data['name'],
form.data['your_email'],
form.data['message'])
return jsonify({'sent': sent, 'message': str(message),
'fields': [key for key in form.data.keys()]})
else:
return jsonify({'sent': False, 'message': form.errors,
'fields': [key for key in form.data.keys()]})
else:
abort(400, _('Requisição inválida, captcha inválido.'))
@main.route("/form_contact/<string:url_seg>/", methods=['GET'])
def form_contact(url_seg):
journal = controllers.get_journal_by_url_seg(url_seg)
if not journal:
abort(404, _('Periódico não encontrado'))
context = {
'journal': journal
}
return render_template("journal/includes/contact_form.html", **context)
# ###################################Issue#######################################
@main.route('/grid/<string:url_seg>/')
def issue_grid_legacy(url_seg):
return redirect(url_for('main.issue_grid', url_seg=url_seg), 301)
@main.route('/j/<string:url_seg>/grid')
@cache.cached(key_prefix=cache_key_with_lang)
def issue_grid(url_seg):
journal = controllers.get_journal_by_url_seg(url_seg)
if not journal:
abort(404, _('Periódico não encontrado'))
if not journal.is_public:
abort(404, JOURNAL_UNPUBLISH + _(journal.unpublish_reason))
    # session language
language = session.get('lang', get_locale())
    # Default ordering used by ``get_issues_by_jid``: "-year", "-volume", "-order"
issues_data = controllers.get_issues_for_grid_by_jid(journal.id, is_public=True)
latest_issue = issues_data['last_issue']
if latest_issue:
latest_issue_legend = descriptive_short_format(
title=journal.title, short_title=journal.short_title,
pubdate=str(latest_issue.year), volume=latest_issue.volume, number=latest_issue.number,
suppl=latest_issue.suppl_text, language=language[:2].lower())
else:
latest_issue_legend = None
context = {
'journal': journal,
'last_issue': issues_data['last_issue'],
'latest_issue_legend': latest_issue_legend,
'volume_issue': issues_data['volume_issue'],
'ahead': issues_data['ahead'],
'result_dict': issues_data['ordered_for_grid'],
'journal_study_areas': [
STUDY_AREAS.get(study_area.upper()) for study_area in journal.study_areas
],
}
return render_template("issue/grid.html", **context)
@main.route('/toc/<string:url_seg>/<string:url_seg_issue>/')
def issue_toc_legacy(url_seg, url_seg_issue):
if url_seg_issue and "ahead" in url_seg_issue:
return redirect(url_for('main.aop_toc', url_seg=url_seg), code=301)
return redirect(
url_for('main.issue_toc',
url_seg=url_seg,
url_seg_issue=url_seg_issue),
code=301)
@main.route('/j/<string:url_seg>/i/<string:url_seg_issue>/')
@cache.cached(key_prefix=cache_key_with_lang_with_qs)
def issue_toc(url_seg, url_seg_issue):
section_filter = None
goto = request.args.get("goto", None, type=str)
if goto not in ("previous", "next"):
goto = None
if goto in (None, "next") and "ahead" in url_seg_issue:
        # redirect to `aop_toc`
return redirect(url_for('main.aop_toc', url_seg=url_seg), code=301)
    # session language
language = session.get('lang', get_locale())
if current_app.config["FILTER_SECTION_ENABLE"]:
        # document section, if one was selected
section_filter = request.args.get('section', '', type=str).upper()
    # fetch the issue
issue = controllers.get_issue_by_url_seg(url_seg, url_seg_issue)
if not issue:
abort(404, _('Número não encontrado'))
if not issue.is_public:
abort(404, ISSUE_UNPUBLISH + _(issue.unpublish_reason))
    # fetch the journal
journal = issue.journal
if not journal.is_public:
abort(404, JOURNAL_UNPUBLISH + _(journal.unpublish_reason))
    # fill in the url_segment of the last_issue
utils.fix_journal_last_issue(journal)
    # goto_next_or_previous_issue (redirects if applicable)
goto_url = goto_next_or_previous_issue(
issue, request.args.get('goto', None, type=str))
if goto_url:
return redirect(goto_url, code=301)
    # fetch the documents
articles = controllers.get_articles_by_iid(issue.iid, is_public=True)
if articles:
        # collect ALL document sections of this table of contents
sections = sorted({a.section.upper() for a in articles if a.section})
else:
        # no documents, hence no sections
sections = []
if current_app.config["FILTER_SECTION_ENABLE"] and section_filter != '':
        # keep only the documents of the selected section
articles = [a for a in articles if a.section.upper() == section_filter]
    # gather the PDF and TEXT languages of each document
has_math_content = False
for article in articles:
article_text_languages = [doc['lang'] for doc in article.htmls]
article_pdf_languages = [(doc['lang'], doc['url']) for doc in article.pdfs]
setattr(article, "article_text_languages", article_text_languages)
setattr(article, "article_pdf_languages", article_pdf_languages)
if 'mml:' in article.title:
has_math_content = True
    # build the bibliographic legend
issue_bibliographic_strip = descriptive_short_format(
title=journal.title, short_title=journal.short_title,
pubdate=str(issue.year), volume=issue.volume, number=issue.number,
suppl=issue.suppl_text, language=language[:2].lower())
context = {
'this_page_url': url_for(
'main.issue_toc',
url_seg=url_seg,
url_seg_issue=url_seg_issue),
'has_math_content': has_math_content,
'journal': journal,
'issue': issue,
'issue_bibliographic_strip': issue_bibliographic_strip,
'articles': articles,
'sections': sections,
'section_filter': section_filter,
'journal_study_areas': [
STUDY_AREAS.get(study_area.upper()) for study_area in journal.study_areas
],
'last_issue': journal.last_issue
}
return render_template("issue/toc.html", **context)
def goto_next_or_previous_issue(current_issue, goto_param):
if goto_param not in ["next", "previous"]:
return None
all_issues = list(
controllers.get_issues_by_jid(current_issue.journal.id, is_public=True))
if goto_param == "next":
selected_issue = utils.get_next_issue(all_issues, current_issue)
elif goto_param == "previous":
selected_issue = utils.get_prev_issue(all_issues, current_issue)
if selected_issue in (None, current_issue):
        # no redirect needed
return None
try:
url_seg_issue = selected_issue.url_segment
except AttributeError:
return None
else:
return url_for('main.issue_toc',
url_seg=selected_issue.journal.url_segment,
url_seg_issue=url_seg_issue)
def get_next_or_previous_issue(current_issue, goto_param):
if goto_param not in ["next", "previous"]:
return current_issue
all_issues = list(
controllers.get_issues_by_jid(current_issue.journal.id, is_public=True))
if goto_param == "next":
return utils.get_next_issue(all_issues, current_issue)
return utils.get_prev_issue(all_issues, current_issue)
@main.route('/j/<string:url_seg>/aop')
@cache.cached(key_prefix=cache_key_with_lang_with_qs)
def aop_toc(url_seg):
section_filter = request.args.get('section', '', type=str).upper()
aop_issues = controllers.get_aop_issues(url_seg) or []
if not aop_issues:
abort(404, _('Artigos ahead of print não encontrados'))
goto = request.args.get("goto", None, type=str)
if goto == "previous":
url = goto_next_or_previous_issue(aop_issues[-1], goto)
if url:
            return redirect(url, code=301)
journal = aop_issues[0].journal
if not journal.is_public:
abort(404, JOURNAL_UNPUBLISH + _(journal.unpublish_reason))
utils.fix_journal_last_issue(journal)
articles = []
for aop_issue in aop_issues:
_articles = controllers.get_articles_by_iid(
aop_issue.iid, is_public=True)
if _articles:
articles.extend(_articles)
if not articles:
abort(404, _('Artigos ahead of print não encontrados'))
sections = sorted({a.section.upper() for a in articles if a.section})
if section_filter != '':
articles = [a for a in articles if a.section.upper() == section_filter]
for article in articles:
article_text_languages = [doc['lang'] for doc in article.htmls]
article_pdf_languages = [(doc['lang'], doc['url']) for doc in article.pdfs]
setattr(article, "article_text_languages", article_text_languages)
setattr(article, "article_pdf_languages", article_pdf_languages)
context = {
'this_page_url': url_for("main.aop_toc", url_seg=url_seg),
'journal': journal,
'issue': aop_issues[0],
'issue_bibliographic_strip': "ahead of print",
'articles': articles,
'sections': sections,
'section_filter': section_filter,
'journal_study_areas': [
STUDY_AREAS.get(study_area.upper())
for study_area in journal.study_areas
],
        # the first item in the list is the latest issue.
'last_issue': journal.last_issue
}
return render_template("issue/toc.html", **context)
@main.route('/feed/<string:url_seg>/<string:url_seg_issue>/')
@cache.cached(key_prefix=cache_key_with_lang)
def issue_feed(url_seg, url_seg_issue):
issue = controllers.get_issue_by_url_seg(url_seg, url_seg_issue)
if not issue:
abort(404, _('Número não encontrado'))
if not issue.is_public:
abort(404, ISSUE_UNPUBLISH + _(issue.unpublish_reason))
if not issue.journal.is_public:
abort(404, JOURNAL_UNPUBLISH + _(issue.journal.unpublish_reason))
journal = issue.journal
articles = controllers.get_articles_by_iid(issue.iid, is_public=True)
feed = AtomFeed(journal.title or "",
feed_url=request.url,
url=request.url_root,
subtitle=utils.get_label_issue(issue))
feed_language = session.get('lang', get_locale())
for article in articles:
        # ######### TODO: review #########
article_lang = feed_language
if feed_language not in article.languages:
article_lang = article.original_language
        feed.add(article.title or _('Artigo sem título'),
render_template("issue/feed_content.html", article=article),
content_type='html',
author=article.authors,
id=article.doi or article.pid,
url=url_external('main.article_detail_v3',
url_seg=journal.url_segment,
article_pid_v3=article.aid,
lang=article_lang),
updated=journal.updated,
published=journal.created)
return feed.get_response()
# ##################################Article######################################
@main.route('/article/<regex("S\d{4}-\d{3}[0-9xX][0-2][0-9]{3}\d{4}\d{5}"):pid>/')
@cache.cached(key_prefix=cache_key_with_lang)
def article_detail_pid(pid):
article = controllers.get_article_by_pid(pid)
if not article:
article = controllers.get_article_by_oap_pid(pid)
if not article:
abort(404, _('Artigo não encontrado'))
return redirect(url_for('main.article_detail_v3',
url_seg=article.journal.acronym,
article_pid_v3=article.aid))
def render_html_from_xml(article, lang, gs_abstract=False):
logger.debug("Get XML: %s", article.xml)
if current_app.config["SSM_XML_URL_REWRITE"]:
result = fetch_data(use_ssm_url(article.xml))
else:
result = fetch_data(article.xml)
xml = etree.parse(BytesIO(result))
generator = HTMLGenerator.parse(
xml, valid_only=False, gs_abstract=gs_abstract, output_style="website")
return generator.generate(lang), generator.languages
def render_html_from_html(article, lang):
html_url = [html
for html in article.htmls
if html['lang'] == lang]
try:
html_url = html_url[0]['url']
except IndexError:
raise ValueError('Artigo não encontrado') from None
result = fetch_data(use_ssm_url(html_url))
html = result.decode('utf8')
text_languages = [html['lang'] for html in article.htmls]
return html, text_languages
def render_html_abstract(article, lang):
abstract_text = ''
for abstract in article.abstracts:
if abstract['language'] == lang:
abstract_text = abstract["text"]
break
return abstract_text, article.abstract_languages
def render_html(article, lang, gs_abstract=False):
if article.xml:
return render_html_from_xml(article, lang, gs_abstract)
elif article.htmls:
if gs_abstract:
return render_html_abstract(article, lang)
return render_html_from_html(article, lang)
else:
        # TODO: fix the tests that expect the ``htmls`` attribute.
        # Ideally this would raise a ValueError.
return '', []
# TODO: remove as soon as the Article.xml values are consistent in the
# database
def use_ssm_url(url):
"""Normaliza a string `url` de acordo com os valores das diretivas de
configuração OPAC_SSM_SCHEME, OPAC_SSM_DOMAIN e OPAC_SSM_PORT.
A normalização busca obter uma URL absoluta em função de uma relativa, ou
uma absoluta em função de uma absoluta, mas com as partes *scheme* e
*authority* trocadas pelas definidas nas diretivas citadas anteriormente.
Este código deve ser removido assim que o valor de Article.xml estiver
consistente, i.e., todos os registros possuirem apenas URLs absolutas.
"""
if url.startswith("http"):
parsed_url = urlparse(url)
return current_app.config["SSM_BASE_URI"] + parsed_url.path
else:
return current_app.config["SSM_BASE_URI"] + url
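# Illustration (hypothetical values; assumes SSM_BASE_URI == "https://ssm.scielo.org"):
#   use_ssm_url("http://old-host/media/assets/doc.pdf") -> "https://ssm.scielo.org/media/assets/doc.pdf"
#   use_ssm_url("/media/assets/doc.pdf")                -> "https://ssm.scielo.org/media/assets/doc.pdf"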
@main.route('/article/<string:url_seg>/<string:url_seg_issue>/<string:url_seg_article>/')
@main.route('/article/<string:url_seg>/<string:url_seg_issue>/<string:url_seg_article>/<regex("(?:\w{2})"):lang_code>/')
@main.route('/article/<string:url_seg>/<string:url_seg_issue>/<regex("(.*)"):url_seg_article>/')
@main.route('/article/<string:url_seg>/<string:url_seg_issue>/<regex("(.*)"):url_seg_article>/<regex("(?:\w{2})"):lang_code>/')
@cache.cached(key_prefix=cache_key_with_lang)
def article_detail(url_seg, url_seg_issue, url_seg_article, lang_code=''):
issue = controllers.get_issue_by_url_seg(url_seg, url_seg_issue)
if not issue:
abort(404, _('Issue não encontrado'))
article = controllers.get_article_by_issue_article_seg(issue.iid, url_seg_article)
if article is None:
article = controllers.get_article_by_aop_url_segs(
issue.journal, url_seg_issue, url_seg_article
)
if article is None:
abort(404, _('Artigo não encontrado'))
req_params = {
"url_seg": article.journal.acronym,
"article_pid_v3": article.aid,
}
if lang_code:
req_params["lang"] = lang_code
return redirect(url_for('main.article_detail_v3', **req_params))
@main.route('/j/<string:url_seg>/a/<string:article_pid_v3>/')
@main.route('/j/<string:url_seg>/a/<string:article_pid_v3>/<string:part>/')
@cache.cached(key_prefix=cache_key_with_lang)
def article_detail_v3(url_seg, article_pid_v3, part=None):
qs_lang = request.args.get('lang', type=str) or None
qs_goto = request.args.get('goto', type=str) or None
qs_stop = request.args.get('stop', type=str) or None
qs_format = request.args.get('format', 'html', type=str)
gs_abstract = (part == "abstract")
if part and not gs_abstract:
abort(404,
_("Não existe '{}'. No seu lugar use '{}'"
).format(part, 'abstract'))
try:
qs_lang, article = controllers.get_article(
article_pid_v3, url_seg, qs_lang, gs_abstract, qs_goto)
if qs_goto:
return redirect(
url_for(
'main.article_detail_v3',
url_seg=url_seg,
article_pid_v3=article.aid,
part=part,
format=qs_format,
lang=qs_lang,
stop=getattr(article, 'stop', None),
),
code=301
)
    except controllers.PreviousOrNextArticleNotFoundError:
if gs_abstract:
abort(404, _('Resumo inexistente'))
abort(404, _('Artigo inexistente'))
except (controllers.ArticleNotFoundError,
controllers.ArticleJournalNotFoundError):
abort(404, _('Artigo não encontrado'))
except controllers.ArticleLangNotFoundError:
return redirect(
url_for(
'main.article_detail_v3',
url_seg=url_seg,
article_pid_v3=article_pid_v3,
format=qs_format,
),
code=301
)
except controllers.ArticleAbstractNotFoundError:
abort(404, _('Recurso não encontrado'))
except controllers.ArticleIsNotPublishedError as e:
abort(404, "{}{}".format(ARTICLE_UNPUBLISH, e))
except controllers.IssueIsNotPublishedError as e:
abort(404, "{}{}".format(ISSUE_UNPUBLISH, e))
except controllers.JournalIsNotPublishedError as e:
abort(404, "{}{}".format(JOURNAL_UNPUBLISH, e))
except ValueError as e:
abort(404, str(e))
def _handle_html():
citation_pdf_url = None
for pdf_data in article.pdfs:
if pdf_data.get("lang") == qs_lang:
citation_pdf_url = url_for(
'main.article_detail_v3',
url_seg=article.journal.url_segment,
article_pid_v3=article_pid_v3,
lang=qs_lang,
format="pdf",
)
break
website = request.url
if website:
parsed_url = urlparse(request.url)
if current_app.config["FORCE_USE_HTTPS_GOOGLE_TAGS"]:
website = "{}://{}".format('https', parsed_url.netloc)
else:
website = "{}://{}".format(parsed_url.scheme, parsed_url.netloc)
if citation_pdf_url:
citation_pdf_url = "{}{}".format(website, citation_pdf_url)
try:
html, text_languages = render_html(article, qs_lang, gs_abstract)
except (ValueError, NonRetryableError):
abort(404, _('HTML do Artigo não encontrado ou indisponível'))
except RetryableError:
abort(500, _('Erro inesperado'))
text_versions = sorted(
[
(
lang,
display_original_lang_name(lang),
url_for(
'main.article_detail_v3',
url_seg=article.journal.url_segment,
article_pid_v3=article_pid_v3,
lang=lang
)
)
for lang in text_languages
]
)
citation_xml_url = "{}{}".format(
website,
url_for(
'main.article_detail_v3',
url_seg=article.journal.url_segment,
article_pid_v3=article_pid_v3,
format="xml",
lang=article.original_language,
)
)
context = {
'next_article': qs_stop != 'next',
'previous_article': qs_stop != 'previous',
'article': article,
'journal': article.journal,
'issue': article.issue,
'html': html,
'citation_pdf_url': citation_pdf_url,
'citation_xml_url': citation_xml_url,
'article_lang': qs_lang,
'text_versions': text_versions,
'related_links': controllers.related_links(article),
'gs_abstract': gs_abstract,
'part': part,
}
return render_template("article/detail.html", **context)
def _handle_pdf():
if not article.pdfs:
abort(404, _('PDF do Artigo não encontrado'))
pdf_info = [pdf for pdf in article.pdfs if pdf['lang'] == qs_lang]
if len(pdf_info) != 1:
abort(404, _('PDF do Artigo não encontrado'))
try:
pdf_url = pdf_info[0]['url']
except (IndexError, KeyError, ValueError, TypeError):
abort(404, _('PDF do Artigo não encontrado'))
if pdf_url:
return get_pdf_content(pdf_url)
        abort(404, _('Recurso do Artigo não encontrado. Caminho inválido!'))
def _handle_xml():
if current_app.config["SSM_XML_URL_REWRITE"]:
result = fetch_data(use_ssm_url(article.xml))
else:
result = fetch_data(article.xml)
response = make_response(result)
response.headers['Content-Type'] = 'application/xml'
return response
if 'html' == qs_format:
return _handle_html()
elif 'pdf' == qs_format:
return _handle_pdf()
elif 'xml' == qs_format:
return _handle_xml()
else:
abort(400, _('Formato não suportado'))
@main.route('/readcube/epdf/')
@main.route('/readcube/epdf.php')
@cache.cached(key_prefix=cache_key_with_lang_with_qs)
def article_epdf():
doi = request.args.get('doi', None, type=str)
pid = request.args.get('pid', None, type=str)
pdf_path = request.args.get('pdf_path', None, type=str)
lang = request.args.get('lang', None, type=str)
if not all([doi, pid, pdf_path, lang]):
abort(400, _('Parâmetros insuficientes para obter o EPDF do artigo'))
else:
context = {
'doi': doi,
'pid': pid,
'pdf_path': pdf_path,
'lang': lang,
}
return render_template("article/epdf.html", **context)
def get_pdf_content(url):
logger.debug("Get PDF: %s", url)
if current_app.config["SSM_ARTICLE_ASSETS_OR_RENDITIONS_URL_REWRITE"]:
url = use_ssm_url(url)
try:
response = fetch_data(url)
except NonRetryableError:
abort(404, _('PDF não encontrado'))
except RetryableError:
abort(500, _('Erro inesperado'))
else:
mimetype, __ = mimetypes.guess_type(url)
return Response(response, mimetype=mimetype)
@cache.cached(key_prefix=cache_key_with_lang_with_qs)
def get_content_from_ssm(resource_ssm_media_path):
resource_ssm_full_url = current_app.config['SSM_BASE_URI'] + resource_ssm_media_path
url = resource_ssm_full_url.strip()
mimetype, __ = mimetypes.guess_type(url)
try:
ssm_response = fetch_data(url)
except NonRetryableError:
abort(404, _('Recurso não encontrado'))
except RetryableError:
abort(500, _('Erro inesperado'))
else:
return Response(ssm_response, mimetype=mimetype)
@main.route('/media/assets/<regex("(.*)"):relative_media_path>')
@cache.cached(key_prefix=cache_key_with_lang)
def media_assets_proxy(relative_media_path):
resource_ssm_path = '{ssm_media_path}{resource_path}'.format(
ssm_media_path=current_app.config['SSM_MEDIA_PATH'],
resource_path=relative_media_path)
return get_content_from_ssm(resource_ssm_path)
@main.route('/article/ssm/content/raw/')
@cache.cached(key_prefix=cache_key_with_lang_with_qs)
def article_ssm_content_raw():
resource_ssm_path = request.args.get('resource_ssm_path', None)
if not resource_ssm_path:
        abort(404, _('Recurso do Artigo não encontrado. Caminho inválido!'))
else:
return get_content_from_ssm(resource_ssm_path)
@main.route('/pdf/<string:url_seg>/<string:url_seg_issue>/<string:url_seg_article>')
@main.route('/pdf/<string:url_seg>/<string:url_seg_issue>/<string:url_seg_article>/<regex("(?:\w{2})"):lang_code>')
@main.route('/pdf/<string:url_seg>/<string:url_seg_issue>/<regex("(.*)"):url_seg_article>')
@main.route('/pdf/<string:url_seg>/<string:url_seg_issue>/<regex("(.*)"):url_seg_article>/<regex("(?:\w{2})"):lang_code>')
@cache.cached(key_prefix=cache_key_with_lang)
def article_detail_pdf(url_seg, url_seg_issue, url_seg_article, lang_code=''):
"""
Padrões esperados:
`/pdf/csc/2021.v26suppl1/2557-2558`
`/pdf/csc/2021.v26suppl1/2557-2558/en`
"""
if not lang_code and "." not in url_seg_issue:
return router_legacy_pdf(url_seg, url_seg_issue, url_seg_article)
issue = controllers.get_issue_by_url_seg(url_seg, url_seg_issue)
if not issue:
abort(404, _('Issue não encontrado'))
article = controllers.get_article_by_issue_article_seg(issue.iid, url_seg_article)
if not article:
abort(404, _('Artigo não encontrado'))
req_params = {
'url_seg': article.journal.url_segment,
'article_pid_v3': article.aid,
'format': 'pdf',
}
if lang_code:
req_params['lang'] = lang_code
return redirect(url_for('main.article_detail_v3', **req_params), code=301)
@main.route('/pdf/<string:journal_acron>/<string:issue_info>/<string:pdf_filename>.pdf')
@cache.cached(key_prefix=cache_key_with_lang_with_qs)
def router_legacy_pdf(journal_acron, issue_info, pdf_filename):
pdf_filename = '%s.pdf' % pdf_filename
journal = controllers.get_journal_by_url_seg(journal_acron)
if not journal:
abort(404, _('Este PDF não existe em http://www.scielo.br. Consulte http://search.scielo.org'))
article = controllers.get_article_by_pdf_filename(
journal_acron, issue_info, pdf_filename)
if not article:
abort(404, _('PDF do artigo não foi encontrado'))
return redirect(
url_for(
'main.article_detail_v3',
url_seg=article.journal.url_segment,
article_pid_v3=article.aid,
format='pdf',
lang=article._pdf_lang,
),
code=301
)
@main.route('/cgi-bin/fbpe/<string:text_or_abstract>/')
@cache.cached(key_prefix=cache_key_with_lang_with_qs)
def router_legacy_article(text_or_abstract):
pid = request.args.get('pid', None)
lng = request.args.get('lng', None)
    if not (text_or_abstract in ['fbtext', 'fbabs'] and pid):
        # a valid endpoint name and a pid are both required
        abort(400, _('Requisição inválida ao tentar acessar o artigo com pid: %s' % pid))
article = controllers.get_article_by_pid_v1(pid)
if not article:
abort(404, _('Artigo não encontrado'))
return redirect(
url_for(
'main.article_detail_v3',
url_seg=article.journal.url_segment,
article_pid_v3=article.aid,
),
code=301
)
# ###############################E-mail share##################################
@main.route("/email_share_ajax/", methods=['POST'])
def email_share_ajax():
if not request.is_xhr:
abort(400, _('Requisição inválida.'))
form = forms.EmailShareForm(request.form)
if form.validate():
recipients = [email.strip() for email in form.data['recipients'].split(';') if email.strip() != '']
sent, message = controllers.send_email_share(form.data['your_email'],
recipients,
form.data['share_url'],
form.data['subject'],
form.data['comment'])
return jsonify({'sent': sent, 'message': str(message),
'fields': [key for key in form.data.keys()]})
else:
return jsonify({'sent': False, 'message': form.errors,
'fields': [key for key in form.data.keys()]})
@main.route("/form_mail/", methods=['GET'])
def email_form():
context = {'url': request.args.get('url')}
return render_template("email/email_form.html", **context)
@main.route("/email_error_ajax/", methods=['POST'])
def email_error_ajax():
if not request.is_xhr:
abort(400, _('Requisição inválida.'))
form = forms.ErrorForm(request.form)
if form.validate():
recipients = [email.strip() for email in current_app.config.get('EMAIL_ACCOUNTS_RECEIVE_ERRORS') if email.strip() != '']
sent, message = controllers.send_email_error(form.data['name'],
form.data['your_email'],
recipients,
form.data['url'],
form.data['error_type'],
form.data['message'],
form.data['page_title'])
return jsonify({'sent': sent, 'message': str(message),
'fields': [key for key in form.data.keys()]})
else:
return jsonify({'sent': False, 'message': form.errors,
'fields': [key for key in form.data.keys()]})
@main.route("/error_mail/", methods=['GET'])
def error_form():
context = {'url': request.args.get('url')}
return render_template("includes/error_form.html", **context)
# ###############################Others########################################
@main.route("/media/<path:filename>/", methods=['GET'])
@cache.cached(key_prefix=cache_key_with_lang)
def download_file_by_filename(filename):
media_root = current_app.config['MEDIA_ROOT']
return send_from_directory(media_root, filename)
@main.route("/img/scielo.gif", methods=['GET'])
def full_text_image():
return send_from_directory('static', 'img/full_text_scielo_img.gif')
@main.route("/robots.txt", methods=['GET'])
def get_robots_txt_file():
return send_from_directory('static', 'robots.txt')
@main.route("/revistas/<path:journal_seg>/<string:page>.htm", methods=['GET'])
def router_legacy_info_pages(journal_seg, page):
"""
Essa view function realiza o redirecionamento das URLs antigas para as novas URLs.
Mantém um dicionário como uma tabela relacionamento entre o nome das páginas que pode ser:
Página âncora
[iaboutj.htm, eaboutj.htm, paboutj.htm] -> #about
[iedboard.htm, eedboard.htm, pedboard.htm] -> #editors
[iinstruc.htm einstruc.htm, pinstruc.htm]-> #instructions
isubscrp.htm -> Sem âncora
"""
page_anchor = {
'iaboutj': '#about',
'eaboutj': '#about',
'paboutj': '#about',
'eedboard': '#editors',
'iedboard': '#editors',
'pedboard': '#editors',
'iinstruc': '#instructions',
'pinstruc': '#instructions',
'einstruc': '#instructions'
}
return redirect('%s%s' % (url_for('main.about_journal',
url_seg=journal_seg), page_anchor.get(page, '')), code=301)
@main.route("/api/v1/counter_dict", methods=['GET'])
def router_counter_dicts():
"""
Essa view function retorna um dicionário, em formato JSON, que mapeia PIDs a insumos
necessários para o funcionamento das aplicações Matomo & COUNTER & SUSHI.
"""
end_date = request.args.get('end_date', '', type=str)
try:
end_date = datetime.strptime(end_date, '%Y-%m-%d')
except ValueError:
end_date = datetime.now()
begin_date = end_date - timedelta(days=30)
page = request.args.get('page', type=int)
if not page:
page = 1
limit = request.args.get('limit', type=int)
if not limit or limit > 100 or limit < 0:
limit = 100
results = {'dictionary_date': end_date,
'end_date': end_date.strftime('%Y-%m-%d %H-%M-%S'),
'begin_date': begin_date.strftime('%Y-%m-%d %H-%M-%S'),
'documents': {},
'collection': current_app.config['OPAC_COLLECTION']}
articles = controllers.get_articles_by_date_range(begin_date, end_date, page, limit)
for a in articles.items:
results['documents'].update(get_article_counter_data(a))
results['total'] = articles.total
results['pages'] = articles.pages
results['limit'] = articles.per_page
results['page'] = articles.page
return jsonify(results)
def get_article_counter_data(article):
return {
article.aid: {
"journal_acronym": article.journal.acronym,
"pid": article.pid if article.pid else '',
"aop_pid": article.aop_pid if article.aop_pid else '',
"pid_v1": article.scielo_pids.get('v1', ''),
"pid_v2": article.scielo_pids.get('v2', ''),
"pid_v3": article.scielo_pids.get('v3', ''),
"publication_date": article.publication_date,
"default_language": article.original_language,
"create": article.created,
"update": article.updated
}
}
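# For illustration, a trimmed /api/v1/counter_dict response assembled by the two
# functions above might look like this (identifiers and dates are hypothetical):
#
# {
#   "collection": "scl",
#   "dictionary_date": "Wed, 01 Jun 2022 00:00:00 GMT",
#   "end_date": "2022-06-01 00-00-00",
#   "begin_date": "2022-05-02 00-00-00",
#   "documents": {
#     "JHjFqQzBQzTMnes7Tvs3N": {
#       "journal_acronym": "abc",
#       "pid": "S0001-37652022000100100",
#       "aop_pid": "",
#       "pid_v1": "",
#       "pid_v2": "S0001-37652022000100100",
#       "pid_v3": "JHjFqQzBQzTMnes7Tvs3N",
#       "publication_date": "2022-05-20",
#       "default_language": "en",
#       "create": "Fri, 20 May 2022 10:00:00 GMT",
#       "update": "Fri, 20 May 2022 10:00:00 GMT"
#     }
#   },
#   "total": 1, "pages": 1, "limit": 100, "page": 1
# }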
@main.route('/cgi-bin/wxis.exe/iah/')
def author_production():
# http://www.scielo.br/cgi-bin/wxis.exe/iah/
# ?IsisScript=iah/iah.xis&base=article%5Edlibrary&format=iso.pft&
# lang=p&nextAction=lnk&
# indexSearch=AU&exprSearch=MEIERHOFFER,+LILIAN+KOZSLOWSKI
# ->
# //search.scielo.org/?lang=pt&q=au:MEIERHOFFER,+LILIAN+KOZSLOWSKI
search_url = current_app.config.get('URL_SEARCH')
if not search_url:
abort(404, "URL_SEARCH: {}".format(_('Página não encontrada')))
qs_exprSearch = request.args.get('exprSearch', type=str) or ''
qs_indexSearch = request.args.get('indexSearch', type=str) or ''
qs_lang = request.args.get('lang', type=str) or ''
    # Each fragment below evaluates to '' when its input is missing, so the
    # final URL only contains the pieces that were actually requested.
    _lang = IAHX_LANGS.get(qs_lang) or ''
    _lang = _lang and "lang={}".format(_lang)
    # author searches (indexSearch == "AU") get the "au:" field prefix
    _expr = "{}{}".format(
        qs_indexSearch == "AU" and "au:" or '', qs_exprSearch)
    _expr = _expr and "q={}".format(_expr.replace(" ", "+"))
    # join the query-string pieces only when both parts are present
    _and = _lang and _expr and "&" or ''
    _question_mark = (_lang or _expr) and "?" or ""
if search_url.startswith("//"):
protocol = "https:"
elif search_url.startswith("http"):
protocol = ""
else:
protocol = "https://"
url = "{}{}{}{}{}{}".format(
protocol, search_url, _question_mark, _lang, _and, _expr)
return redirect(url, code=301)
|
Study protocol: EXERcise and Cognition In Sedentary adults with Early-ONset dementia (EXERCISE-ON) Background Although the development of early-onset dementia is a radical and invalidating experience for both patient and family, there are hardly any non-pharmacological studies that focus on this group of patients. One type of non-pharmacological intervention that appears to have a beneficial effect on cognition in older persons without dementia and older persons at risk for dementia is exercise. In view of their younger age, early-onset dementia patients may be well able to participate in an exercise program. The main aim of the EXERCISE-ON study is to assess whether exercise slows down the progressive course of the symptoms of dementia. Methods/Design One hundred and fifty patients with early-onset dementia are recruited. After completion of the baseline measurements, participants living within a 50 kilometre radius of one of the rehabilitation centres are randomly assigned to either an aerobic exercise program in a rehabilitation centre or a flexibility and relaxation program in a rehabilitation centre. Both programs are applied three times a week for 3 months. Participants living outside the 50 kilometre radius are included in a feasibility study where participants join a daily physical activity program set at home, making use of pedometers. Measurements take place at baseline (entry of the study), after three months (end of the exercise program) and after six months (follow-up). Primary outcomes are cognitive functioning, in particular psychomotor speed and executive functioning; (instrumental) activities of daily living; and quality of life. Secondary outcomes include physical, neuropsychological, and rest-activity rhythm measures. Discussion The EXERCISE-ON study is the first study to offer exercise programs to patients with early-onset dementia. We expect this study to supply evidence regarding the effects of exercise on the symptoms of early-onset dementia, influencing quality of life. Trial registration The present study is registered within The Netherlands National Trial Register (ref: NTR2124). Background Early-onset dementia (EOD; < 66 years) is less common than late-onset dementia (LOD; > 65 years). In the majority of studies, Alzheimer's disease (AD) is the most common subtype of EOD, followed by vascular dementia (VaD) and frontotemporal dementia (FTD). The clinical presentation of EOD, concerning cognitive and behavioural disturbances, is quite heterogeneous and depends on the neuropathological substrate. EOD places a large psychological and economic burden on patients and caregivers because of the patients' prominent role in society (having young children, working) at disease onset. Despite the devastating impact of the disease, few intervention studies focus on this specific group of younger patients. Some pharmacological studies report the inclusion of (a small subset of) EOD patients. Non-pharmacological intervention studies, such as exercise intervention studies, do not report including EOD patients. Notably, there is an increasing number of studies examining the effects of exercise on cognitive and behavioural functioning in sedentary older people, in patients in a very early stage of LOD, and in patients in a moderate stage of LOD. Most evidence of these studies points in the direction of beneficial effects of exercise on cognition in the aging population.
Although, in view of their younger age, many EOD patients may be better able to perform aerobic physical activity compared to healthy elders and LOD patients, certain dementia characteristics such as apathy may lead to sedentary and socially impoverished lifestyles. Indeed, a 'frontal' presentation, i.e. symptoms such as apathy and dysexecutive functioning, is relatively common in EOD patients. A brain region that plays a pivotal role in executive functions is the prefrontal cortex. Interestingly, especially those functions mediated by the prefrontal cortex react positively to increased physical activity. The functioning of other cortical areas such as the parietal lobe also shows a positive relationship with physical activity. Involvement of the prefrontal and parietal cortices in exercise benefits stems from neuroimaging studies in older adults that show greater task-related activity after an exercise intervention in regions of the prefrontal and parietal cortices during an executive function task. It is noteworthy that particularly the prefrontal cortex and the parietal lobe are vulnerable in EOD and may therefore offer a potentially appropriate venue for intervention. Furthermore, EOD patients are relatively young and suffer from fewer physical inconveniences compared to LOD patients, which makes participation in an intensive exercise program feasible and may result in larger benefits of the intervention. The earlier mentioned positive effects of exercise, combined with the characteristics of EOD patients, make the lack of studies examining the effects of physical activity in the EOD population a remarkable observation. An elaborate description of the theory behind this observation is given in our review, also including a brief summary of part of the present study protocol. In the present manuscript we present the detailed description of the entire protocol. This study is the first to investigate the effects of exercise on cognition and behaviour in EOD patients. In the "EXERcise and Cognition In Sedentary adults with Early-ONset dementia study" (EXERCISE-ON study), three different exercise programs are offered to persons with EOD: an aerobic exercise program in a rehabilitation centre, a flexibility and relaxation program in a rehabilitation centre and a daily physical activity program at home using pedometers (for a description see section Interventions). The present study is divided into two parts: a randomized controlled trial (RCT) and a feasibility study. The main aim of this study is to assess, making use of an RCT design, whether exercise slows down the progressive course of the symptoms of dementia, with respect to cognition, in particular psychomotor speed and executive functioning, and (instrumental) activities of daily living ((i)ADL), and may subsequently lead to better quality of life and less caregiver burden. In addition, a feasibility study is conducted to evaluate whether a physical activity program offered at home using pedometers can bring positive effects for EOD patients and their caregivers. Study design This study consists of two parts. The main study is designed as a randomized controlled trial with one hundred EOD patients. In addition, a feasibility study will be conducted with fifty EOD patients (Figure 1). Participants This study will include persons with a diagnosis of EOD: AD, VaD, FTD or other types of dementia. Participants will be recruited primarily in the Alzheimer centre of the VU University medical centre (VUmc) in Amsterdam, the Netherlands.
Secondarily, participants will be recruited through affiliate memory clinics. A careful screening process, including medical history, physical, neurological, and neuropsychological examination as well as laboratory tests, electroencephalogram (EEG) and magnetic resonance imaging (MRI), will lead to the identification of patients with EOD. Diagnoses are made based on a multidisciplinary consensus team conference, according to the clinical criteria of the Diagnostic and Statistical Manual of Mental Disorders-IV-TR (DSM-IV-TR), on the criteria of the National Institute of Neurological and Communicative Disorders and Stroke-Alzheimer's Disease and Related Disorders Association (NINCDS-ADRDA) for probable AD; and on the National Institute of Neurological Disorders and Stroke-Association Internationale pour la Recherche et l'Enseignement en Neurosciences (NINDS-AIREN) for VaD. Initially, participants are recruited by a neurologist during a clinic visit where the patient, caregiver, and other family members are informed of the diagnosis. After provisional consent, the study is explained (verbally and by use of printed material) by the principal investigator (AH), after which formal written consent is obtained. Inclusion criteria are the following: 1) Diagnosis of EOD (onset of complaints < 66 years) (among others: AD, VaD, FTD); 2) Relatively early stage of dementia (Mini Mental State Examination (MMSE) score > 15); 3) Primary caregiver available. Participants will be excluded from participation when they 1) are wheelchair-bound; 2) are diagnosed with a neurodegenerative disease that primarily results in motor impairments, such as Parkinson's disease; 3) are diagnosed with serious cardiovascular disease, such as heart failure; 4) have a history of substance abuse; 5) had a head injury involving loss of consciousness greater than 15 minutes; 6) have a history of major psychiatric illness (e.g. personality disorder, schizophrenia); 7) have severe visual problems; 8) have severe hearing problems; 9) have insufficient proficiency in the Dutch language. Randomization After obtaining written informed consent and completion of the baseline measurement, patients are assigned to one of the exercise programs. Originally, it was planned to randomize patients in one of three exercise programs, with an allocation ratio of 1:1:1. However, during the pilot study, travel distance to the rehabilitation centres appeared to be a constraint for many patients (even when patients were brought by taxi). The randomization procedure had to be adjusted. Patients living within 50 kilometres of one of the participating rehabilitation centres are randomly assigned to either the aerobic exercise program or the flexibility and relaxation exercise program. The allocation ratio is 1:1. To ensure the allocation ratio, block randomization is used (block size is 4; an illustrative code sketch follows at the end of this protocol). Randomization is provided by e-mail. An anonymous list with identification numbers is sent to an independent researcher who is blinded to the identity of the patients. This independent researcher keeps the randomization list. Patients living outside a 50 kilometre radius of one of the rehabilitation centres are included in the daily physical activity program at home using pedometers. Interventions Patients living within a radius of 50 kilometres of one of the rehabilitation centres are randomly assigned to one of the following programs: Aerobic exercise program in a rehabilitation centre This aerobic exercise program aims to improve the cardiorespiratory fitness of the participants.
Activities consist of a warming-up, a core activity, i.e. cycling on a cycling ergometer, and a cooling down. The program runs for three months, takes place three times a week, and is built up in duration and intensity. Group size varies from 2 to 5 participants. This program is guided by a physical therapist (JB). Flexibility and relaxation program in rehabilitation centre The setting of this program is the same as for the aerobic exercise program (duration, frequency, group size, rehabilitation centre, and physical therapist). The difference is in the activities and the intensity of the program. Activities consist of stretching and toning exercises in combination with relaxation exercises. No build-up is used. Patients living outside the 50 kilometre radius from one of the rehabilitation centres are included in the following program: Daily physical activity program at home using pedometers This program takes place at the participant's home. The program is based on the COACH method, developed by the interfaculty Centre for Human Movement Sciences of the University of Groningen (Netherlands) and the Centre for Movement and Research (CBO), Groningen (Netherlands). The COACH method is an evidence-based method to stimulate sedentary individuals to enhance physical activity participation in daily life, using "exercise counselling". The exercise counselling is focused on intrinsic motivation, since this type of motivation is predominantly related to sedentariness and is often disturbed in patients with brain damage. Patients develop a benchmark for physical activity in several phases. Using 3 interviews, following motivational interviewing and goal-setting techniques, the participant's attitude towards physical activity is discussed and changed if necessary. The interviews are conducted by a neuropsychologist (in training). Setting Both the aerobic exercise program and the flexibility and relaxation program are given in two rehabilitation centres in the Netherlands (i.e. Reade, hospital Amstelland, Amstelveen; department of rehabilitation, Jeroen Bosch hospital, 's Hertogenbosch). The third exercise program (pedometer) is offered nation-wide at the patients' homes. Measurements and procedures Each participant will undergo three measurements: baseline measurement (entry of the study, prior to randomization), post measurement (end of the exercise program, three months after baseline measurement), and follow-up measurement (six months after baseline measurement). A measurement consists of a neuropsychological examination lasting approximately 2 hours. Thoroughly trained master's students in Clinical Neuropsychology, blinded to group allocation, will administer the tests. In the week following the neuropsychological examination, the participants wear a pedometer and an Actiwatch activity monitor for one week. Participants who are able to visit one of the rehabilitation centres also receive a 30-minute physical examination. Primary outcome measures Cognitive functioning Alzheimer's Disease Assessment Scale (ADAS)-COG. The ADAS-COG is designed to evaluate cognitive disorders in persons with AD. The ADAS-COG is often used in clinical trials for dementia. Psychomotor speed/executive functioning Trail Making Test (TMT). The TMT consists of two parts. Part A gives a measure for psychomotor speed. The more complex Part B gives an indication of cognitive flexibility, which is a part of executive functioning. (Instrumental) activities of daily living ((i)ADL) Disability Assessment for Dementia (DAD).
To gain insight into the (i)ADL of participants, the DAD questionnaire is administered. The DAD is an informant-report questionnaire specifically designed for persons with a (beginning) dementia. The DAD is administered to the caregiver and provides information with respect to how the patient performs (i)ADLs and whether he/she is taking initiative to do things. Quality of life The Dementia-Quality of Life (D-QOL) is a valid instrument to assess the quality of life in persons with a mild to moderate stage of dementia. The D-QOL is a self-report questionnaire and consists of propositions in 5 subscales: self-esteem, positive affect, negative affect, feelings of belonging, and enjoying the environment. Secondary outcome measures Physical measures Physical fitness is assessed using the Åstrand cycle test, which is a submaximal measure of fitness and is used to estimate the maximal oxygen uptake (VO2max). Walking speed is measured using the 6 Minutes Walk Test. Balance and strength of the lower extremities are measured by the Sit to Stand Test. Finally, the level of physical activity (steps/day) is assessed using a pedometer (OMRON HJ-113). Participants are also asked to report their level of physical activity during the last week. This is assessed using a self-report questionnaire, the Physical Activity Scale for the Elderly (PASE). Neuropsychological measures An extensive neuropsychological battery is administered. Regarding episodic memory, the "Face Recognition" subtest from the Rivermead Behavioural Memory Test (RBMT) is used. Short term memory is measured using the "Digit span forwards", working memory using the "Digit span backwards" from the Wechsler Adult Intelligence Scale-Revised (WAIS-R); word fluency is assessed using "category fluency" (animals and occupations) from the Groninger Intelligence Test (GIT); the Stroop Colour Word Test (short version) is used to measure interference and inhibition. Coding, psychomotor, and processing speed are measured using the subtest "Symbol substitution" from the WAIS-R. Furthermore, visuospatial capacity is assessed by the "Perceptual closure" subtest from the GIT. Finally, self-efficacy is measured using a self-report questionnaire: General Self-efficacy scale (GES): Dutch version. Rest-activity rhythm The rest-activity rhythm is assessed using the Actiwatch activity monitor (Cambridge Neurotechnology Ltd., Cambridge, Great Britain). Demographic and control variables At baseline, demographic information, i.e. age, sex, educational attainment (using the system of Verhage, ranging from 1 (low) to 7 (high)), diagnosis of dementia, number of years since diagnosis, and currently prescribed medications will be collected. To determine whether there has been a change in subjective physical functioning and in the amount of encouragement needed to start exercising, several questions are asked during recruitment. Outcome measures will also be controlled for co-morbid medical conditions (medical chart), depressive symptoms (Centre for Epidemiologic Studies Depression Scale; CES-D), and Apolipoprotein E (ApoE) genotype, in view of possible moderating effects on treatment outcome. Statistical analysis In the statistical analysis, an intention-to-treat approach will be used in order to minimize bias. Differences in the outcome measures between the aerobic exercise program and the flexibility and relaxation program on the three measurement moments will be analysed using a Linear Mixed Model (LMM) with contrasts.
Time and treatment condition will be considered as a within-subjects and a between-subjects variable, respectively. To analyse possible effects of the daily physical activity program at home using pedometers, an LMM is also used, with time as the within-subjects variable. Post-hoc analyses will be conducted to differentiate between persons who were physically active and persons who were physically inactive, based on pedometer outcomes at baseline measurement. Sample size The sample size calculation (using G*Power) is based upon two meta-analyses concerning studies in which the effects of physical training on cognition were investigated in either older adults with a cognitive disorder or dementia or in healthy sedentary older adults. Summary effect sizes in these studies were respectively Hedges' g = 0.57 and g = 0.68, which are considered medium effect sizes. Since in the RCT part of the present study the control group receives an intervention consisting of flexibility and relaxation exercises, in contrast to a control group who does not perform any physical activity, we assume a small effect size instead of a medium effect size (f = .1). For 80 % power, the sample size requirement (using a 5 % significance level) is 82 persons in total. Accounting for participants who withdraw from the study, as is seen in other intervention studies with healthy older adults and patients with dementia, the total study population is targeted at 100 participants. In the feasibility study a small to medium effect size was used (f = .15), in order to avoid Type II errors. Keeping the parameters equal to the situation above, the estimated sample size is 38 persons. Accounting for participants who withdraw from the study, the study population for this part of the study is targeted at 50 persons. Ethical and legal considerations The protocol is reviewed and approved by the medical ethics review committee. Discussion The EXERCISE-ON study will evaluate whether exercise slows down the progressive course of the symptoms of dementia in EOD patients. Characteristics of EOD patients, such as apathy and loss of initiative, together with their age and physique, make them good candidates for a physical activity program. This study has several strengths. Despite the devastating impact of EOD on the lives of patients and their families, few specific treatments are available for EOD patients. First of all, EOD patients are dependent on facilities developed primarily for the elderly, and second, EOD patients are underrepresented in scientific studies. Some pharmacological studies report the inclusion of (a small subset of) EOD patients. To our knowledge, the present study is the first study that offers exercise interventions to EOD patients. Another strength is that the exercise programs are offered in different settings. This is the first study that brings patients suffering from dementia into a rehabilitation setting involving the guidance of an experienced physical therapist. In the rehabilitation centres, patients are exposed to fellow EOD patients. Fellow patients can share experiences and tips, which may help them get a grip on the consequences of a chronic disease. The daily activity program is offered at participants' homes. In particular, this setting facilitates potential implementation in the future. A challenge in this study is the inclusion of a sufficient number of participants. EOD is a rare condition; the proportion of people with EOD varies between 7.3% and 31% in studies from Japan, the UK, and the USA.
Furthermore, it is expected that EOD patients have busier lives and are involved in more activities than patients suffering from dementia at an older age, similar to healthy adults of middle and older age. To overcome this challenge, we include multiple rehabilitation centres and also offer a home-based daily physical activity program to patients who are not able to travel to one of the rehabilitation centres. In summary, the EXERCISE-ON study is an innovative study examining possible beneficial effects of exercise on symptoms, with respect to cognition, (i)ADL, and quality of life, of EOD. Study results may contribute substantially to care facilities for EOD patients. Furthermore, exercise may offer a new set of coping skills for this patient group. Competing interests The authors declare that they have no competing interests. Authors' contributions ES, LE, and PK conceived the idea of this study. ES and LE wrote the grant application of the study. AH, LE and ES developed the intervention programs and the protocol of outcome measures. AH coordinates the study under direct supervision of LE and ES. PS and WMF enable the recruitment and selection of EOD patients in the Alzheimer Center of the VUmc and have an advisory role in the project. PK screens the participants on motor disturbances before participation. JB executes the exercise interventions. MG supplied the daily physical activity program at home using pedometers and trained AH in performing the interviews. AH was the primary author for this manuscript. LE and ES helped draft this manuscript. All authors provided critical feedback and approved the final manuscript.
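The block randomization described in the Methods (1:1 allocation, block size 4) can be sketched in a few lines of Python; this is an illustrative reconstruction, not the trial's actual allocation code:

import random

def block_randomize(n_participants, block_size=4,
                    arms=("aerobic", "flexibility_relaxation")):
    # Each block holds an equal number of assignments per arm, shuffled, so
    # the 1:1 allocation ratio is restored at the end of every block of four.
    assert block_size % len(arms) == 0
    schedule = []
    while len(schedule) < n_participants:
        block = list(arms) * (block_size // len(arms))
        random.shuffle(block)
        schedule.extend(block)
    return schedule[:n_participants]

allocation = block_randomize(100)
|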
#!/usr/bin/python
import os, sys, argparse
from appcast import Appcast, Delta
import urlparse
import datetime
import time
import json
from subprocess import Popen, PIPE
# -----------------
def sign_update(file='', private_key_path=''):
    # Run Sparkle's sign_update helper and return the DSA signature it prints.
    sign_update_script = os.path.join(SPARKLE_BIN_PATH, "sign_update")
    sign_update_call = [sign_update_script, file, private_key_path]
    process = Popen(sign_update_call, stdout=PIPE)
    (output, err) = process.communicate()
    if process.returncode != 0:
        sys.exit("sign_update failed with exit code %d" % process.returncode)
    return output.rstrip()
# -----------------
# Parse incoming arguments
parser = argparse.ArgumentParser(description='Generate sparkle appcast!')
parser.add_argument('config_file')
parser.add_argument('input_archive')
parser.add_argument('-v', '--version', help='the target version number')
parser.add_argument('-vv', '--verbose', action="store_true")
parser.add_argument('output_file')
args = parser.parse_args()
# Are we verbose?
VERBOSE = args.verbose
# Resolve paths
cwd = os.getcwd()
input_archive = os.path.join(cwd,args.input_archive)
config_file = os.path.join(cwd,args.config_file)
output_file = os.path.join(cwd,args.output_file)
# What version is this
version = args.version
if VERBOSE:
print 'Input archive: ', input_archive
print 'Version: ', version
print 'Config: ', config_file
# Read config file
with open(config_file) as data_file:
data = json.load(data_file)
SPARKLE_BIN_PATH = os.path.join(cwd,data["SPARKLE_BIN_PATH"])
PRIVATE_KEY_PATH = os.path.join(cwd,data["PRIVATE_KEY_PATH"])
if VERBOSE:
print '-- sparkle bin: ', SPARKLE_BIN_PATH
print '-- private key path: ', PRIVATE_KEY_PATH
APPCAST_URL = urlparse.urljoin(data["APPCAST_BASE_URL"],data["APPCAST_FILE_NAME"])
if VERBOSE:
print 'APPCAST_URL: ', APPCAST_URL
(_,input_archive_filename) = os.path.split(input_archive)
if VERBOSE:
print '-- input_archive_filename: ', input_archive_filename
APPCAST_LATEST_VERSION_URL = urlparse.urljoin(data["APPCAST_BASE_URL"],data["RELEASES_DIR"])
APPCAST_LATEST_VERSION_URL = urlparse.urljoin(APPCAST_LATEST_VERSION_URL,input_archive_filename)
if VERBOSE:
print '-- APPCAST_LATEST_VERSION_URL: ', APPCAST_LATEST_VERSION_URL
DSA_SIGNATURE = sign_update(file = input_archive, private_key_path = PRIVATE_KEY_PATH)
if VERBOSE:
print "DSA Signature: ", DSA_SIGNATURE
APP_SIZE = os.path.getsize(input_archive)
APPCAST_PUBDATE = time.strftime("%a, %d %b %Y %H:%M:%S %z")  # RFC 822-style pubDate; %Y, not the ISO week-based year %G
## ACTUALLY CREATE THE APPCAST
appcast = Appcast()
appcast.title = data["APPCAST_TITLE"]
appcast.app_name = data["APP_NAME"]
appcast.appcast_url = APPCAST_URL
appcast.appcast_description = data["APPCAST_DESCRIPTION"]
# if APPCAST_RELEASE_NOTES_FILE:
# appcast.release_notes_file = APPCAST_RELEASE_NOTES_FILE
appcast.launguage = data["APPCAST_LANGUAGE"]
appcast.latest_version_number = version
appcast.short_version_string = version
appcast.latest_version_update_description = data["APPCAST_LATEST_VERSION_UPDATE_DESCRIPTION"]
appcast.pub_date = APPCAST_PUBDATE
appcast.latest_version_url = APPCAST_LATEST_VERSION_URL #format_url(url=APPCAST_LATEST_VERSION_URL, title=LATEST_APP_ARCHIVE)
appcast.latest_version_size = APP_SIZE
appcast.latest_version_dsa_key = DSA_SIGNATURE
## write out the appcast
appcast_xml = appcast.render()
with open(output_file, 'w') as f:
f.write(appcast_xml)
# log("create {}".format(output_file))
|
The Impact of Analysts' Forecast Errors and Forecast Revisions on Stock Prices We present a comprehensive analysis of the association between stock returns, quarterly earnings forecast errors, and quarter-ahead and year-ahead earnings forecast revisions. We find that forecast errors and the two forecast revisions have significant effects on stock prices, indicating each conveys information content. Findings also show that the fourth quarter differs from other quarters-the relative importance of the forecast error (quarter-ahead forecast revision) is lower (higher). We also find a marked upward shift over time in the forecast error and forecast revision coefficients, consistent with the I/B/E/S database reflecting an improved quality of both earnings forecasts and actual earnings. |
Sixty-five years ago yesterday, the Bell Labs team made up of William Shockley, John Bardeen, and Walter Brattain created what would soon be named the world's first transistor. It was a breakthrough invention—one that I would argue has been the most important innovation of the last 100 years.
But while the transistor has been incredibly popular and versatile, my guess is that the team that worked on it had no inkling about most of its eventual applications. Indeed, the whole point of the solid-state physics group at Bell Labs, which was formed by Shockley in 1945, was to improve communications and essentially to replace the vacuum tubes and electromechanical switches then used in the phone system. Before the war, phone company researchers had used silicon to help detect vibrations, and afterward, Shockley started working on trying to develop semiconductors—such as silicon and germanium—to come up with a replacement for the vacuum tube. Two members of the group, theoretical physicist Bardeen and experimental physicist Brattain, conducted an experiment on December 16, 1947. They eventually figured out a method of using two gold contacts that were designed to connect to a point on a slab of germanium so the signal came in one contact and increased as it was sent out on the other, in what became known as a "point-contact transistor."
Later, Shockley would create a different approach, called the bipolar junction transistor, which proved much easier to manufacture. A few months later, engineer John Pierce coined the term "transistor," a shortening of "transfer resistor." The transistor was announced on June 30, 1948, and it was a few years before it started to gain traction in the nascent electronics industry, notably in the transistor radio.
Over the years, the transistor made possible the portable radio, much smaller TVs, and, of course, computers as we know them. (The first very large computers had rows and rows of vacuum tubes, but transistors have proven much more reliable and much more compact.) Indeed, personal computers, portable phones, flat-panel televisions, personal media players, smartphones and tablets, networking, and the Internet wouldn't be possible without transistors. It's hard to think of anything that's changed our lives more in the past 50 years.
Perhaps the most amazing thing about transistors is how the industry has been able to shrink them in size, doubling the density roughly every two years in what is known as Moore's Law, after a paper by Intel co-founder Gordon Moore from 1965. The first transistor was about half an inch tall. For comparison, Intel's Ivy Bridge processor, which is about the size of your thumbnail, contains 1.4 billion transistors. That's pretty amazing.
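That doubling cadence is easy to sanity-check with a little arithmetic; here is a back-of-the-envelope sketch in Python, using the Ivy Bridge figure quoted above:

import math

doublings = math.log2(1.4e9 / 1)    # ~30.4 doublings from 1 to 1.4 billion transistors
years = 2012 - 1947                 # first transistor (1947) to Ivy Bridge (2012)
print(round(years / doublings, 1))  # ~2.1 years per doubling, i.e. Moore's Law
|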
Identifying Large Problem/Failed Banks: The Case of Franklin National Bank of New York If the smart money was out of Franklin before its financial difficulties became public knowledge, just what elements of Franklin's balance sheet were the sagacious analysts reading and why weren't the banking authorities aware of this information? The purpose of this paper is to determine what balance-sheet and income-statement figures, if any, could have been arrayed in an ex post early-warning system to spotlight Franklin's developing problems. |
Kalman Filter and Proportional Navigation Based Missile Guidance System Missile guidance involves guiding the missile to its target. Guidance accuracy has a huge impact on the missile's effectiveness. In this paper, the Kalman Filter and the Proportional Navigation (PN) algorithm are used for missile guidance. It is a guidance, navigation, and control (GNC) system for a missile. A fully adaptive Kalman filter that processes a measurement every n milliseconds is designed. The Kalman filter estimates the guidance signals for an augmented PN, the rate of the line of sight (LOS), and the target's normal acceleration. A comparative study for missile guidance was done by using different navigation algorithms and filters.
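As a hedged illustration (not code from the paper), the classical proportional navigation law the abstract builds on commands lateral acceleration proportional to the closing velocity and the line-of-sight rate:

def los_rate(x, y, vx, vy):
    # 2-D line-of-sight rate: d/dt atan2(y, x) = (x*vy - y*vx) / (x^2 + y^2)
    return (x * vy - y * vx) / (x * x + y * y)

def pn_command(nav_gain, closing_velocity, lambda_dot):
    # Classical PN: a_cmd = N' * Vc * lambda_dot
    return nav_gain * closing_velocity * lambda_dot

# Example: target 1 km ahead with slight lateral motion, navigation constant N' = 3
a_cmd = pn_command(3.0, 300.0, los_rate(1000.0, 200.0, -250.0, -10.0))
|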
Australian dentists' educational needs for smoking cessation counseling. BACKGROUND Australian dentists' continuing educational needs and their attitudes towards and self-reported practices related to smoking cessation counseling were examined. METHOD Self-administered questionnaires were received from 149 dentists (83% response rate). RESULTS Many dentists were aware that smoking is a risk factor for the development of oral cancer (n = 128, 86%). Most considered smoking cessation counseling to be part of their professional role (n = 105, 70%). However, few "always" asked about the smoking status of their patients (n = 21, 14%). The dentists' use of specific behavioral techniques known to assist patients to quit also was low. Furthermore, the dentists were as likely to use ineffective (advice to "cut down") as effective (advice to "quit") (p > 0.05) strategies. The respondents were significantly more interested in self-help pamphlets for their patients than in either evidence-based guidelines (McNemar's chi2 = 9.76, df = 1, p < 0.01) or a self-study module about smoking cessation (McNemar's chi2 = 42.0, df = 1, p < 0.001). CONCLUSIONS Continuing education for dentists that combines skills training, patient materials, and epidemiology is likely to be acceptable and effective. |
import os
import shutil
from os.path import dirname

# `with_rw_directory` is assumed to come from the surrounding test utilities.
def with_rw_repo(func):
    def wrapper(self, path):
        # Copy this project's source tree into the temporary directory so the
        # test operates on a disposable, writable clone of the repository.
        src_dir = dirname(dirname(dirname(__file__)))
        assert os.path.isdir(path)
        os.rmdir(path)  # shutil.copytree requires that the target not exist
        shutil.copytree(src_dir, path)
        target_gitdir = os.path.join(path, '.git')
        assert os.path.isdir(target_gitdir)
        return func(self, self.RepoCls(target_gitdir))
    wrapper.__name__ = func.__name__
    return with_rw_directory(wrapper)
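A hypothetical usage sketch (the test class and Repo class are assumptions; the decorator only requires that the test instance expose a RepoCls callable):

class RepoTests(object):
    RepoCls = Repo  # hypothetical repository class taking a .git directory path

    @with_rw_repo
    def test_read_write(self, rw_repo):
        # rw_repo wraps a throw-away copy of the source tree's .git directory
        assert rw_repo is not None
|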
package com.chrismin13.additionsapi.listeners.custom;
import java.util.HashMap;
import java.util.UUID;
import org.bukkit.Bukkit;
import org.bukkit.entity.EntityType;
import org.bukkit.entity.Player;
import org.bukkit.event.EventHandler;
import org.bukkit.event.EventPriority;
import org.bukkit.event.Listener;
import org.bukkit.event.entity.EntityToggleGlideEvent;
import com.chrismin13.additionsapi.AdditionsAPI;
import com.chrismin13.additionsapi.durability.ElytraDurability;
import com.chrismin13.additionsapi.events.elytra.CustomElytraPlayerToggleGlideEvent;
import com.chrismin13.additionsapi.items.CustomItem;
import com.chrismin13.additionsapi.permissions.ElytraPermissions;
import com.chrismin13.additionsapi.utils.ElytraDurabilityTask;
import com.chrismin13.additionsapi.utils.PermissionUtils;
public class CustomElytraPlayerToggleGlide implements Listener {
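	// Maps each gliding player's UUID to the id of the scheduled durability
	// drain task, so the task can be cancelled when the player stops gliding.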
private static HashMap<UUID, Integer> playersGliding = new HashMap<UUID, Integer>();
@EventHandler(priority = EventPriority.MONITOR)
public void onCustomElytraPlayerGlide(CustomElytraPlayerToggleGlideEvent customEvent) {
if (customEvent.isCancelled())
return;
CustomItem cItem = customEvent.getCustomItem();
Player player = customEvent.getPlayer();
UUID playerUUID = player.getUniqueId();
if (cItem.getDurabilityMechanics() instanceof ElytraDurability) {
cancelPlayerGlideDamage(playerUUID);
ElytraDurabilityTask task = new ElytraDurabilityTask(player,
customEvent.getCustomItemStack().getItemStack(), cItem);
task.runTaskTimer(AdditionsAPI.getInstance(), 0L, 20L);
playersGliding.put(playerUUID, task.getTaskId());
}
}
@EventHandler(priority = EventPriority.LOWEST)
public void onCustomElytraPlayerGlideLowest(CustomElytraPlayerToggleGlideEvent customEvent) {
if (customEvent.isCancelled())
return;
CustomItem cItem = customEvent.getCustomItem();
if (!(cItem.getPermissions() instanceof ElytraPermissions))
return;
ElytraPermissions perm = (ElytraPermissions) cItem.getPermissions();
EntityToggleGlideEvent event = customEvent.getEntityToggleGlideEvent();
if (event.getEntity().getType().equals(EntityType.PLAYER)
&& !PermissionUtils.allowedAction((Player) event.getEntity(), perm.getType(), perm.getFlight()))
event.setCancelled(true);
}
public static void cancelPlayerGlideDamage(UUID playerUUID) {
if (playersGliding.containsKey(playerUUID)) {
Bukkit.getScheduler().cancelTask(playersGliding.get(playerUUID));
playersGliding.remove(playerUUID);
}
}
}
|
BERT meets Shapley: Extending SHAP Explanations to Transformer-based Classifiers Transformer-based neural networks offer very good classification performance across a wide range of domains, but do not provide explanations of their predictions. While several explanation methods, including SHAP, address the problem of interpreting deep learning models, they are not adapted to operate on state-of-the-art transformer-based neural networks such as BERT. Another shortcoming of these methods is that their visualization of explanations in the form of lists of most relevant words does not take into account the sequential and structurally dependent nature of text. This paper proposes the TransSHAP method that adapts SHAP to transformer models including BERT-based text classifiers. It advances SHAP visualizations by showing explanations in a sequential manner, assessed by human evaluators as competitive to state-of-the-art solutions. Introduction The recent widespread use of deep neural networks (DNNs) has increased the need for their transparent classification, given that DNNs are black-box models that do not offer introspection into their decision processes or provide explanations of their predictions and biases. Several methods that address the interpretability of machine learning models have been proposed. Model-agnostic explanation approaches are based on perturbations of inputs. The resulting changes in the outputs of the given model are the source of their explanations. The explanations of individual instances are commonly visualized in the form of histograms of the most impactful inputs. However, this is insufficient for text-based classifiers, where the inputs are sequential and structurally dependent. We address the problem of incompatibility of modern explanation techniques, e.g., SHAP (Lundberg and Lee, 2017), and state-of-the-art pretrained transformer networks such as BERT (Devlin et al., 2019). Our contribution is twofold. First, we propose an adaptation of the SHAP method to BERT for text classification, called TransSHAP (Transformer-SHAP). Second, we present an improved approach to visualization of explanations that better reflects the sequential nature of input texts, referred to as the TransSHAP visualizer, which is implemented in the TransSHAP library. The paper is structured as follows. We first present the background and motivation in Section 2. Section 3 introduces TransSHAP, an adapted method for explaining transformer language models such as BERT, which includes the TransSHAP visualizer for improved visualization of the generated explanations. Section 4 presents the results of an evaluation survey, followed by the discussion of results and the future work in Section 5. Background and motivation We first present the transformer-based language models, followed by an outline of perturbation-based explanation methods, in particular the SHAP method. We finish with the overview of visualizations for prediction explanations. BERT (Devlin et al., 2019) is a large pretrained language model based on the transformer neural network architecture (Vaswani et al., 2017). Nowadays, BERT models exist in many mono- and multilingual variants. Fine-tuning BERT-like models to a specific task produces state-of-the-art results in many natural language processing tasks, such as text classification, question answering, POS-tagging, dependency parsing, inference, etc. There are two types of explanation approaches, general and model-specific.
The general explanation approaches are applicable to any prediction model, since they perturb the inputs of a model and observe changes in the model's output. The second type of explanation approaches are specific to certain types of models, such as support vector machines or neural networks, and exploit the internal information available during training of these methods. We focus on general explanation methods and address their specific adaptations for use in text classification, more specifically, in text classification with transformer models such as BERT. The most widely used perturbation-based explanation methods are IME (Štrumbelj and Kononenko, 2010), LIME (Ribeiro et al., 2016), and SHAP (Lundberg and Lee, 2017). Their key idea is that the contribution of a particular input value (or set of values) can be captured by 'hiding' the input and observing how the output of the model changes. In this work, we focus on the state-of-the-art explanation method SHAP (SHapley Additive exPlanations) that is based on the Shapley value approximation principle. Lundberg and Lee noted that several existing methods, including IME and LIME, can be regarded as special cases of this method. We propose an adaptation of SHAP for BERT-like classifiers, but the same principles are trivially transferred to LIME and IME. To understand the behavior of a prediction model applied to a single instance, one should observe perturbations of all subsets of input features and their values, which results in exponential time complexity. Štrumbelj and Kononenko showed that the contribution of each variable corresponds to the Shapley value from the coalition game, where players correspond to input features, and the coalition game corresponds to the prediction of an individual instance. Shapley values can be approximated in time linear in the number of features. The visualization approaches implemented in the explanation methods LIME and SHAP are primarily designed for explanations of tabular data and images. Although the visualization with LIME includes adjustments for text data, the resulting explanations are presented in the form of histograms that are sometimes hard to understand, as Figure 1 shows. The visualization with SHAP for the same sentence is illustrated in Figure 2. Here, the features with the strongest impact on the prediction correspond to longer arrows that point in the direction of the predicted class. For textual data this representation is non-intuitive. Various approaches have been proposed to interpret neural text classifiers. Some of them focus on adapting existing SHAP-based explanation methods by improving different aspects, e.g., the word masking or reducing the feature dimension, while others explore the complex interactions between words (contextual decomposition) that are crucial when dealing with textual data but are ignored by other post-hoc explanation methods. TransSHAP: The SHAP method adapted for BERT Many modern deep neural networks, including transformer networks (Vaswani et al., 2017) such as BERT-like models, split the input text into subword tokens. However, perturbation-based explanation methods (such as IME, LIME, and SHAP) have problems with the text input and in particular subword input, as the credit for a given output cannot be simply assigned to clearly defined units such as words, phrases, or sentences.
In this section, we first present the components of the new methodology and describe the implementation details required to make the explanation method SHAP work with state-of-the-art transformer prediction models such as BERT, followed by a brief description of the dataset used for training the model. Finally, we introduce the TransSHAP visualizer, the proposed visualization method for text classification with neural networks. We demonstrate it using the SHAP method and the BERT model. TransSHAP components The model-agnostic implementation of the SHAP method, named Kernel SHAP, requires a classifier function that returns probabilities. Since SHAP contains no support for BERT-like models that use subword input, we implemented custom functions for preprocessing the input data for SHAP, to get the predictions from the BERT model, and to prepare data for the visualization. Figure 3 shows the components required by SHAP in order to generate explanations for the predictions made by the BERT model. The text data we want to interpret is used as an input to Kernel SHAP along with the special classifier function we constructed, which is necessary since SHAP requires numerical input in a tabular form. To achieve this, we first convert the sentence into its numerical representation. This procedure consists of splitting the sentence into tokens and then preprocessing it. The preprocessing of different input texts is specific to their characteristics (e.g., tweets). The result is a list of sentence fragments (with words, selected punctuation marks and emojis), which serves as a basis for word perturbations (i.e. word masking). Each unique fragment is assigned a unique numerical key (i.e. index). We refer to a sentence, represented with indexes, as an indexed instance. In summary, TransSHAP's classifier function first converts each input instance into a word-level representation. Next, the representation is perturbed in order to generate new, locally similar instances which serve as a basis for the constructed explanation. This perturbation step is performed by the original SHAP. Then the perturbed versions of the sentence are processed with the BERT tokenizer that converts the sentence fragments to subword tokens. Finally, the predictions for the new locally generated instances are produced and returned to the Kernel SHAP explainer. With this modification, SHAP is able to compute the features' impact on the prediction (i.e. the explanation). (A minimal code sketch of such a wrapper is given at the end of this paper.) Datasets and models We demonstrate our TransSHAP method on tweet sentiment classification. The dataset contains 87,428 English tweets with human-annotated sentiment labels (positive, negative and neutral). For tweets we split input instances using the TweetTokenizer function from the NLTK library; we removed apostrophes, quotation marks and all punctuation marks except for exclamation and question marks. We fine-tuned the CroSloEngual BERT model (Ulčar and Robnik-Šikonja, 2020) on this classification task and the resulting model achieved the classification accuracy of 66.6%. Visualization of a prediction explanation for the BERT model To make a visualization of predictions better adapted to texts, we modified the histogram-based visualizations used in IME, LIME and SHAP for tabular data. Figure 4 is an example of our visualization for explaining text classifications. It was inspired by the visualization used by the LIME method but we made some modifications with the aim of making it more intuitive and better adapted to sequences.
Instead of the horizontal bar chart of features' impact on the prediction sorted in descending order of feature impact, we used the vertical bar chart and presented the features (i.e. words) in the order they appear in the original sentence. In this way, the graph allows the user to compare the direction of the impact (positive/negative) and also the magnitude of impact for individual words. The bottom text box representation of the sentence shows the words colored green if they significantly contributed to the prediction and red if they significantly opposed it. Evaluation We evaluated the novel visualization method using an online survey. The targeted respondents were researchers and PhD students not involved in the study who mostly had some previous experience with classifiers and/or their explanation methods. In the survey, the respondents were presented with three visualization methods on the same example: two visualizations were generated by existing libraries, LIME and SHAP, and the third one used our novel TransSHAP library. Respondents were asked to evaluate the quality of each visualization, suggest possible improvements, and rank the three methods (the survey questions are available here: https://forms.gle/icpYvHH78oE2TCJt7). The results of 38 completed surveys are as follows. The most informative features of the visualization layout recognized by the users were the impact each word had on a prediction and the importance of the word contributions shown in a sequential view. The positioning of the visualization elements for each of the three methods was rated on the scale of 1 to 5. Our method achieved the highest average score of 3.66 (63.1% of the respondents rated it with a score of 4 or 5), second best was the LIME method with an average score of 3.13 (39.1% rated it with 4 or 5), and the SHAP method was rated as the worst with an average of 2.42 (81.5% rated it with 1 or 2). Regarding the question of whether they would use each visualization method, LIME scored highest (44.7% voted "Yes"), TransSHAP closely followed (42.1% voted "Yes"), while SHAP was not praised (34.2% voted "Yes"). The overall ranking also corresponds to these results. LIME got the most votes (54.3%), TransSHAP was voted second best (40.0% of votes), and SHAP was the least desirable (5.7% of votes). In addition, we asked the participants to choose the preferred usage of the method out of the given options. The TransSHAP and SHAP methods were considered most useful for the purpose of debugging and bias detection, while the LIME method was also recognized as suitable for explaining a model to other researchers (usage in scientific articles). Conclusion and further work We presented the TransSHAP library, an extension of the SHAP explanation approach for transformer neural networks. TransSHAP offers a novel testing ground for better understanding of neural text classifiers, and will be freely accessible after acceptance of the paper (for review purposes available here: https://bit.ly/2UVY2Dy). The explanations obtained by TransSHAP were quantitatively compared in a user survey, where we assessed the visualization capabilities, showing that the proposed TransSHAP's visualizations were simple, yet informative when compared to existing instance-based visualizations produced by LIME or SHAP. TransSHAP was scored better than SHAP, while LIME was scored slightly better in terms of overall user preference.
However, in specific elements, such as the positioning of the visualization elements, the visualization produced by TransSHAP is slightly better. In further work, we plan to address problems of the perturbation-based explanation process when dealing with textual data. Currently, TransSHAP only supports random sampling from the word space, which may produce unintelligible, grammatically incorrect sentences and overall completely uninformative texts. We intend to take into account specific properties of text data and apply language models in the sampling step of the method. We plan to restrict the sampling candidates for each word based on its part of speech and the general context of the sentence. We believe that better sampling will improve the speed of the explanations and decrease their variance. Furthermore, the explanations could be additionally improved by expanding the features of explanations from individual words to larger textual units consisting of words that are grammatically and semantically linked.
Target cell specificity of the Pasteurella haemolytica leukotoxin is unaffected by the nature of the fatty-acyl group used to activate the toxin in vitro. The leukotoxin (LktA) of Pasteurella haemolytica is active only against cells of ruminant origin. It is synthesised as an inactive protoxin encoded by the lktA gene and post-translationally modified to the active toxin by the product of the lktC gene. The LktA and LktC proteins were expressed separately in Escherichia coli and partially purified. Active LktA was produced in vitro in the presence of LktC and acyl-acyl carrier protein (ACP) charged separately in vitro with a fatty-acyl group. The toxic activity and target cell specificity of LktA and adenylate cyclase toxin (CyaA), a toxin active against a wide variety of mammalian cells, were investigated after activation with ACP charged with different fatty acids. Palmitoyl-ACP produced the most active toxin in both cases and, although other fatty acids were also effective, the fatty acid preference was the same for the in vitro activation of both toxins. Activated LktA remained ruminant cell-specific whichever acyl group was used to acylate the A protoxin.
// Repository: ceveral/generator-ceveral
import {
Token, Type, BaseVisitor, IResult, TranspileOptions, RecordTypeExpression,
PackageExpression, RecordExpression,
AnnotationExpression, PropertyExpression, TypeExpression, ImportTypeExpression,
RepeatedTypeExpression, MapTypeExpression, OptionalTypeExpression,
StringEnumExpression, StringEnumMemberExpression, NumericEnumExpression, NumericEnumMemberExpression,
ExpressionPosition, AnnotatedExpression, ServiceExpression, MethodExpression, AnonymousRecordExpression
} from 'ceveral-compiler';
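// EJS template for a ceveral-compiler transpiler visitor: `<%= className %>`
// is replaced with the concrete class name when the generator scaffolds the file.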
export class <%= className %> extends BaseVisitor {
constructor(public options: TranspileOptions) {
super();
}
    parse(expression: PackageExpression): IResult[] {
        // Visit the package tree and collect the transpiled output.
        const out: string[] = this.visit(expression);
        // NOTE: the generated class is expected to map `out` to IResult entries;
        // returning an empty array keeps this scaffold type-correct until then.
        return [];
    }
visitPackage(expression: PackageExpression): any {
}
visitRecord(expression: RecordExpression): any {
}
visitProperty(expression: PropertyExpression): any {
}
    visitRecordType(expression: RecordTypeExpression): any {
}
visitType(expression: TypeExpression): any {
}
visitImportType(expression: ImportTypeExpression): any {
}
visitOptionalType(expression: OptionalTypeExpression): any {
}
visitRepeatedType(expression: RepeatedTypeExpression): any {
}
visitMapType(expression: MapTypeExpression): any {
}
visitAnnotation(expression: AnnotationExpression): any {
}
visitNumericEnum(expression: NumericEnumExpression): any {
}
visitNumericEnumMember(expression: NumericEnumMemberExpression): any {
}
visitStringEnum(expression: StringEnumExpression): any {
}
visitStringEnumMember(expression: StringEnumMemberExpression): any {
}
visitService(_: ServiceExpression): any {
}
visitMethod(_: MethodExpression): any {
}
visitAnonymousRecord(_: AnonymousRecordExpression): any {
}
}
Integration of Multiple Genomic Data Sources in a Bayesian Cox Model for Variable Selection and Prediction Bayesian variable selection is becoming more and more important in statistical analyses, in particular when performing variable selection in high dimensions. For survival time models in the presence of genomic data, the state of the art is still quite unexplored. One of the more recent approaches suggests a Bayesian semiparametric proportional hazards model for right-censored time-to-event data. We extend this model to directly include variable selection, based on a stochastic search procedure within a Markov chain Monte Carlo sampler for inference. This equips us with an intuitive and flexible approach and provides a way to integrate additional data sources and further extensions. We make use of the possibility of implementing parallel tempering to help improve the mixing of the Markov chains. In our examples, we use this Bayesian approach to integrate copy number variation data into a gene-expression-based survival prediction model. This is achieved by formulating an informed prior based on copy number variation. We perform a simulation study to investigate the model's behavior and prediction performance in different situations before applying it to a dataset of glioblastoma patients and evaluating the biological relevance of the findings. Introduction In cancer research, we often deal with time-to-event endpoints, and as advances in technology enable the systematic collection of different genome-wide data, interest grows in integrative statistical analyses, that is, using more than one information source to obtain a more comprehensive understanding of the biology of diseases and to improve the performance of risk prediction models. Recently, a lot of research has been done in the following three areas: (i) Cox proportional hazards models for survival (or time-to-event) data in high dimensions; (ii) variable selection in high-dimensional problems; and (iii) integrative analyses of several data sources. The novelty of our approach is the combination of recent advances in these three areas in one Bayesian model, as outlined below. To model survival data, Cox developed the semiparametric proportional hazards regression model to account for the relation between covariates and the hazard function. The Cox model has been widely used and analyzed in low-dimensional settings for this purpose; see, for example, Harrell Jr., Klein et al., or Ibrahim et al. In biological applications with genomic data, however, we are often in a high-dimensional setting, that is, having more variables than subjects. Therefore, we need a high-dimensional survival time model. One recent approach in this context suggests a Bayesian version of the Cox model for right-censored survival data, where high dimensions are handled by regularization of the regression coefficient vector imposed by Laplace priors. This corresponds to the lasso penalty (see Tibshirani or Park and Casella), which shrinks regression coefficients towards zero and thus allows parameter inference in problems where the number of variables is larger than the number of subjects. Since the automatic variable selection property of the lasso is lost in fully Bayesian inference, a post hoc approach was adopted to identify the most important variables by thresholding based on the Bayesian Information Criterion.
Since variable selection is a core question in many statistical applications, it has been the subject of a lot of research, and many approaches exist, especially for linear models. In low-dimensional settings and for frequentist inference, the most common procedures are best subset selection and backward or forward selection (Harrell Jr., Hocking). There are 2^p different models to evaluate for best subset selection, which becomes infeasible in higher dimensions (p > 30). In high dimensions, classical backward selection cannot be applied since the full model is not identified, and both backward and forward selection will typically explore only a very small proportion of all possible models. In addition, none of these approaches incorporates shrinkage in the estimation procedure. Bayesian approaches offer a good alternative: they stochastically search over the whole parameter space while implicitly taking into account the model uncertainty; see Held et al. for a recent evaluation study in the context of Cox regression models. One appealing approach often used in regression analyses is the stochastic search variable selection (SSVS) of George and McCulloch, a flexible and intuitive method that makes use of data augmentation for the selection task and incorporates shrinkage. For biological information on the molecular level, many different data sources exist nowadays, and they often provide shared information; for example, the amount of expressed genes being transcribed to different proteins results in different functions of the cells or the body. If unexpected or unusual changes in the expression levels occur, the functionality of the cells can be disturbed. Cancer is often caused by changes in the DNA, for example, single-base mutations or copy number changes in larger genomic regions, which in turn will have an effect on gene expression. Therefore, including such data sources jointly in the analyses can lead to more accurate results, and Bayesian approaches offer a handy pipeline to do so. In our approach we combine the three mentioned tasks in one model: variable selection in a high-dimensional survival time model based on an integrative analysis. In particular, we integrate copy number variation (CNV) data with gene expression data, aiming to jointly use their respective advantages to achieve sparse and well-interpretable models and good prediction performance. We combine the variable selection procedure of George and McCulloch with a Bayesian Cox proportional hazards model and use CNV data for the construction of an informed prior. We make use of parallel tempering to help improve the mixing of the Markov chains and to circumvent the manual tuning of hyperprior parameters. In the following, we describe the details of the model, including the technical details, the sampler with extensions, and diagnostics, in Section 2. Afterwards, we describe the synthetic data as well as the real dataset on glioblastoma, and we state the prior settings needed and chosen for the simulation study as well as for the real data analysis. Before drawing conclusions in Section 4, we describe the most important findings for the application to synthetic and real data, including findings regarding the extracted genes for glioblastoma patients, and discuss the results in Section 3.
Materials and Methods. The time axis is divided into J disjoint intervals, in this case choosing the breaks as the points at which at least one event occurred and defining the last interval so that the last event lies in the middle of it, leading to the grouped data likelihood introduced by Burridge. Here, D = {(s_j, R_j, D_j) : j = 1, ..., J} denotes the observed data, where R_j and D_j are the risk sets and the event sets corresponding to the j-th interval. The increments of the cumulative baseline hazard, h_j = H_0(s_j) − H_0(s_{j−1}), receive independent priors h_j ~ G(α_{0,j} − α_{0,j−1}, c_0), where G(·,·) denotes a Gamma distribution with shape α_{0,j} − α_{0,j−1} and scale c_0, with α_{0,j} = c_0 H*(s_j), j = 1, ..., J, and H*(t) a monotonously increasing function with H*(0) = 0. H* represents an initial estimate for the cumulative baseline hazard function H_0(t). The constant c_0 > 0 specifies how strong the belief in this initial estimate of the cumulative baseline hazard function is. Mostly, a known parametric function is used for H*(t), for example the Weibull form, which then leads to H*(t) = a_0 t^{b_0}. The hyperparameters (a_0, b_0) have to be carefully chosen, though, to avoid convergence problems within the MCMC sampling. The implicit shrinkage of the model and the variable selection are achieved through the stochastic search variable selection procedure of George and McCulloch. Assuming equal variances for the regression coefficients of variables that are included in the model, the prior distribution for β_j conditioned on γ_j, j = 1, ..., p, is as follows:

β_j | γ_j ~ (1 − γ_j) N(0, τ^2) + γ_j N(0, c^2 τ^2),

where the variance parameter τ^2 > 0 is small, c^2 > 1, and γ = (γ_1, ..., γ_p) represents an indicator vector, analogous to the concept of data augmentation (Tanner and Wong, 1987), giving the state of each variable of being in the model or not. Finally, we use a Gibbs sampler to update β, γ, and h iteratively according to the full conditional distributions described above. Extension of MCMC Sampling Procedure. For multimodal posterior distributions, problems may occur during the MCMC sampling, because the areas of the model space with higher posterior probability might be separated by a low-probability region that the MCMC sampler might not manage to overcome. There is therefore a risk that important values are never sampled, because the MCMC sampler never visits the relevant region of the model space. Parallel tempering can alleviate this problem. Even in unimodal situations, parallel tempering can help by broadening the area of the sampling. This is done through the parallel generation of V + 1 different MCMC chains with their own stationary distributions, where at regular intervals (after a predetermined number of MCMC iterations) a swap of the states (i.e., of the current values of all parameters in the model) of two neighboring chains is proposed. The distributions of all chains have the same basic form as the original but are flatter. This is achieved by raising the original density function to the power T^{−1} (T ≥ 1), giving exponents between 0 and 1, where an exponent of 0 (for T → ∞) corresponds to a complete flattening of the distribution and an exponent of 1 corresponds to the desired target. This can improve the sampling performance in two ways: (a) the flattened probability distribution covers more of the parameter space with sufficiently large probability to be reached by the sampler in a given number of iterations, and (b) the "hills" and "valleys" of a multimodal probability density will be less steep, thus reducing the likelihood that the sampler gets stuck in local optima (which in turn improves its mixing performance). For historical reasons, the parameter T is usually referred to as a temperature parameter.
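Before turning to the details of the swap step, a minimal sketch may help to fix ideas. The snippet below (Python with illustrative names; the authors' implementation is in R) draws the inclusion indicators from their full conditionals under the spike-and-slab prior above, with prior inclusion probabilities π_j as used for the selection priors discussed later.

import numpy as np
from scipy.stats import norm

def update_gamma(beta, pi, tau=0.0375, c=20.0, rng=None):
    # Full conditional of gamma_j: weigh the slab N(0, (c*tau)^2) against the
    # spike N(0, tau^2), each scaled by the prior inclusion probability pi_j.
    rng = rng if rng is not None else np.random.default_rng()
    slab = pi * norm.pdf(beta, scale=c * tau)        # gamma_j = 1 component
    spike = (1.0 - pi) * norm.pdf(beta, scale=tau)   # gamma_j = 0 component
    return rng.binomial(1, slab / (slab + spike))

# Coefficients near zero tend to fall into the spike (gamma_j = 0),
# larger ones into the slab (gamma_j = 1):
beta = np.array([0.01, 0.6, -0.4])
pi = np.full(3, 20 / 500)  # uninformative prior: q/p with q = 20, p = 500
print(update_gamma(beta, pi))

The hyperparameter values tau = 0.0375 and c = 20 anticipate the prior settings reported below; everything else in the snippet is an assumption made for illustration only.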
At regular intervals (in our applications after every tenth MCMC iteration), two neighboring chains are selected randomly, and the Metropolis–Hastings acceptance probability is calculated based on the target distributions and the current states of the chains to determine whether a swap of the states between these two chains is accepted. Let π_ch1(·) and π_ch2(·) be the respective target distributions of the selected chains with current parameter states x_ch1 and x_ch2. The acceptance probability of swapping states is given by min{1, ρ} with

ρ = [π_ch1(x_ch2) · π_ch2(x_ch1)] / [π_ch1(x_ch1) · π_ch2(x_ch2)].

Within the Metropolis update, this is compared with a uniform random variable u on the interval [0, 1], where u < min{1, ρ} means that the swap is accepted. The probability of a chain swapping to another state therefore only depends on the current states of the compared chains. In this manuscript, we use log-linear temperature scales T_ch (ch = 0, ..., 5). The original, untempered chain is hence given by ch = 0. The distributions of the tempered versions are determined so that the standard deviation of the normal mixture prior of β | γ is broadened, which is achieved by multiplying the parameter τ in the prior by T_ch (ch = 0, ..., 5). It is recommended to choose the temperatures so that the acceptance rate lies between 20% and 50%, since different studies have shown that rates in this range deliver the most satisfactory results. Prior Settings. For the application of the Bayesian model, several prior specifications are needed. We start with the hyperparameters a_0 and b_0, which are chosen so that H*(t) is similar to the Nelson–Aalen estimator of the cumulative hazard function, which is therefore used to provide an initial guess for H_0(t). For this, we determine the parameters of the Weibull distribution from the survival model estimated on the event times of the training data without covariate information. For the update of the cumulative baseline hazard H_0(t) within the iterations of the MCMC chains, the hyperparameter c_0, which describes the level of certainty associated with H*, has to be specified. We follow the suggestion in the literature to set c_0 = 2. We have previously performed a sensitivity analysis to investigate the influence of the choice of H_0(t), where we found that while there was a notable influence on the posterior estimates of the baseline hazard h, the posterior distributions of β were nearly unchanged. The parameters c and τ of the normal mixture distribution of β_j conditioned on γ_j, that is, π(β_j | γ_j), are set to c = 20 and τ = 0.0375. This implies a standard deviation of cτ = 0.75 for π(β_j | γ_j = 1) and a corresponding 95% probability interval of (−1.47, 1.47). The specifications of the prior probabilities for the selection of the variables are described in Section 2.5, separately for the simulation scenarios and for the glioblastoma data application. Posterior Estimation and Prediction. We report the posterior distributions of β and γ in terms of their posterior means and standard deviations. In order to select the most relevant variables, we choose an inclusion criterion in an automated, data-dependent way that respects the prior model setup, instead of choosing one cutoff for all cases. This is done by first calculating the mean model size k (by rounding the average number of selected variables per iteration). Then we choose the k variables with the highest selection probabilities. We used the empty model, with γ_j = 0 for all j = 1, ..., p, as the starting value of the MCMC chains.
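As a concrete illustration of the tempering extension, the following hedged sketch (Python with illustrative names; the reference implementation is in R) implements the swap step described above: chain ch targets the original log posterior divided by T_ch, and a swap of the states of two neighboring chains is accepted with probability min{1, ρ}.

import numpy as np

rng = np.random.default_rng(7)
T = np.exp(np.linspace(0.0, 1.5, 6))  # log-linear temperature ladder, T[0] = 1

def propose_swap(states, log_post, T, rng):
    # Pick two neighboring chains; log(rho) follows from pi_ch(x) being
    # proportional to exp(log_post(x) / T[ch]).
    ch = rng.integers(0, len(T) - 1)  # chains ch and ch + 1
    lp1, lp2 = log_post(states[ch]), log_post(states[ch + 1])
    log_rho = (lp2 - lp1) / T[ch] + (lp1 - lp2) / T[ch + 1]
    if np.log(rng.uniform()) < min(0.0, log_rho):  # u < min{1, rho}
        states[ch], states[ch + 1] = states[ch + 1], states[ch]
    return states

# Toy usage: scalar states and a standard normal log posterior.
states = [0.1, 0.5, -1.0, 2.0, 0.0, 1.5]
states = propose_swap(states, lambda x: -0.5 * x * x, T, rng)

The temperature ladder, seed, and toy posterior are assumptions; only the acceptance rule mirrors the procedure described in the text.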
The results of the simulation study are based on single MCMC chains with 100,000 iterations each, after removal of 20,000 iterations ("burn-in"). The results for the glioblastoma data application are based on a combined analysis of five Markov chains, each of length 90,000 after removal of 10,000 initial iterations ("burn-in"). For the parallel tempering (only applied to the simulated data), we included four chains with 30,000 iterations each and log-linear temperature scales. We evaluated the mixing and convergence properties of the Markov chains in several ways. We used graphical evaluations of running mean plots of the individual parameters as well as trace plots for summary measures such as the l2-norm of the β vector, the model size, and the log likelihood. Additionally, we calculated the effective sample size (ESS) for each β_j. The R package coda offers a wide variety of graphics and diagnostic measures to assess both the mixing and the diagnostic performance of MCMC chains. We evaluate the prediction accuracy of the models chosen this way by prediction error curves and by computing the integrated Brier score (IBS), comparing them with the reference approach, which is the Kaplan–Meier estimator without any covariates. The Brier score is a strictly proper scoring rule, since it takes its minimum when the true survival probabilities are used as predictions. It therefore measures both discrimination and calibration, contrary to other common evaluation measures such as Harrell's C-index (which only measures discrimination) and the calibration slope (which only measures calibration). The implementation of the model and the evaluations were done in the statistical computing environment R and are available upon request from the authors. Simulated Data. In short, we first simulate the hypothetical survival times t_i* (i = 1, ..., n) that would be observed without the presence of censoring, and the censoring times c_i*, which are generated to be uninformative and a mixture of uniform administrative censoring and exponential loss to follow-up. Note that the scale and shape parameters are chosen such that the survival probabilities at 12 and 36 time units are 0.9 and 0.5, respectively. For more details, we refer to Zucknick et al. Then, for each subject i = 1, ..., n, the individual observed time to event or censoring and the corresponding survival status are defined as t_i = min(t_i*, c_i*) and δ_i = 1(t_i* ≤ c_i*). For both scenarios, we generate a training dataset for model fitting and a test dataset to evaluate the prediction performance of the final models. The generated datasets comprise p = 500 genomic variables and n = 200 subjects. In the sparse setting, the true effects of the prognostic variables are restricted to the first variables of the dataset: the first p_true = 6 variables are simulated to be related to the response (called "predictors" throughout the manuscript). For the nonsparse setting, we randomly generated p_true = 122 effects in the range (−0.8, −0.2) ∪ (0.2, 0.8), equally distributed between the negative and the positive part. Therefore, in this setting, the first p_true = 122 variables of the dataset represent the true predictors. See Tables 1 and 2 for an overview of all simulation scenarios. Prior Inclusion Probabilities. To evaluate the impact of prior information, we investigate three different scenarios for the simulated data. First, we choose an uninformative selection prior (in short: uninformative prior) as π = (q/p, ..., q/p), where q is the a priori expected number of predictors, set to q = 20 here.
With this we can assess the model's behavior if no prior knowledge is present. Second, mimicking the influence of correct prior information, we set the prior probability of the true variables to 0.8 and that of the others to 0.1. Finally, to see what happens if our prior knowledge does not represent the truth, we specify a third prior, setting the prior probabilities of p_true randomly selected nonpredictor variables to 0.8 and those of the remaining variables, which include the true ones, to 0.1. Application to a Glioblastoma Study. To evaluate our model in a real application, we used a dataset of glioblastoma multiforme (GBM) patients retrieved from The Cancer Genome Atlas (TCGA) database. Glioblastoma is the most common and fastest-growing brain tumor in adults. It shows a very poor prognosis, with a median overall survival time of less than 15 months after diagnosis and a two-year survival rate of about 30%. Therefore, a more detailed understanding of the molecular behavior of glioblastoma tumors is sorely needed. Recent publications studying the genomic profile of glioblastoma include the original publication from the TCGA network and the follow-up article by Brennan et al., as well as Sturm et al. We extracted the data from two sources: from the GBM dataset of the TCGA Pancancer dataset (https://www.synapse.org/#!Synapse:syn1710678) and from the derivative DREAM challenge TCGA Pancancer Survival Prediction project (https://www.synapse.org/#!Synapse:syn1710282). Our final dataset comprises 210 subjects, for which we matched the patient survival data and gene expression data (from the DREAM challenge dataset) with their respective CNV data retrieved from the PanCan12 dataset. For the analysis, we selected the p = 1,000 genes (among all genes located on autosomal chromosomes with available annotation information) with the highest variability in their gene expression values across patients, and we matched the copy number variation data to these genes. These 1,000 genes together make up 30% of the total variation in the dataset. The choice of the genes with the largest variance is based on the assumption that genes which do not vary much between subjects will not be helpful in discriminating between patients with poor and good survival prognoses. We randomly split the data at a ratio of 2 : 1 into a training set with n = 140 patients for model fitting and a test set with 70 subjects, which we use for the evaluation of the prediction performance of the final models. Simulation Study. In the simulation study, we use the synthetic data generated as described in Section 2.5.1. Sparse Setting. First, we look at the sparse scenario, in which we generated p_true = 6 true predictors corresponding to the first variables in our setting. For all three prior settings, we observe that variables with an absolute effect of at least 0.5 will generally be selected by the model (Table 1), though the posterior estimates generally overestimate the true values. In Figures 1, 2, and 3, we can see that the true predictors with higher absolute effects of at least 0.5 are always selected, even in the setting where the prior probabilities are wrongly stated (compare Figure 3). The true predictors with smaller absolute effect sizes are less often selected, which is not surprising, since with smaller underlying absolute effect sizes the posterior evidence of being one of the predictors gets weaker.
This shows that in general the model is very robust with regard to wrongly stated prior information (Figure 3) or in the absence of information (Figure 1). The rate of wrongly selected variables does not differ much. However, with prior information that comes close to the truth, even the variables with the smaller absolute effect sizes of 0.25 can be selected by the model, though their posterior selection probability is smaller than one; see Figure 2. This is also confirmed by the prediction error curves and the IBS obtained for the test dataset in Figure 4. The difference in the prediction error curves between settings is not very big, since the identification of the effects is quite distinct in the sparse setting. The area between the curves and the integrated Brier score are the same, with IBS = 0.16, for the uninformative (a) and incorrect (c) priors, and slightly better, with an IBS of 0.13, for the correct informative prior (b). For the sparse setting, the mixing (i.e., the ability of the Gibbs sampler to move around in the model space) is very good, and the results are therefore robust and consistent across the different scenarios (the results of the sparse setting are shown in panels (a, b) of the figures). Because of the good initial mixing performance of the single Markov chains, the incorporation of parallel tempering does not further improve the mixing performance. Therefore, we only show the results for the single-chain setups. For the parallel tempering, we obtained an acceptance rate of around 50% for swapped states of the Markov chains. The MCMC mixing and convergence performances for the implementations with and without parallel tempering are illustrated in Figures 12-15. Figure 12 shows running mean plots that illustrate the development of the posterior mean estimates of the regression coefficients with an increasing number of MCMC iterations. This shows how the estimates stabilize, thereby helping us to assess whether the MCMC sampler has run long enough. The running mean plots for the sparse simulation scenario indicate that the running means of β do not change much after ca. 10,000 MCMC iterations. Figure 13, which shows trace plots for the log likelihood functions, and Figures 14 and 15, which show trace plots for the regression coefficients, are useful for deciding whether the Markov chains are mixing well enough and for seeing whether the MCMC sampler gets stuck in local optima. In addition, they can help with the decision of how long the burn-in period should be, that is, how many MCMC iterations at the start of the sampling process cannot be used for posterior estimation because the sampler has not yet converged to the target distribution. All trace plots indicate very good mixing and show that the Markov chains move very fast (in less than 5,000 MCMC iterations) to the best-performing model regions. Nonsparse Setting. As a second evaluation step, we constructed a nonsparse scenario with p_true = 122 true predictors, again corresponding to the first variables in the simulation setting. As expected, the results are more inconsistent in this case. In the nonsparse setting, the influence of the prior probabilities can be seen very nicely in the posterior selection probabilities (Figures 5, 6, and 7 (c, d), resp.). Variables with higher prior probability show a slight increase in the posterior selection rate.
For the case with correctly specified informative prior probabilities, it can be seen that more of the true predictors are selected and the increase is more obvious than in the other cases (see Table 2). Furthermore, fewer of the nonpredictors are selected. When incorrect information is used to specify the prior probabilities (Figure 7), fewer of the true predictors are selected, as well as more of the false ones that obtained higher probability mass in the beginning. In the uninformative prior setting, the model selects about 11% of the true predictors. With the correct informative prior, 18% of the true predictors are selected, and with incorrect informative priors we only identify 3% correctly (see Table 2). The posterior selection probabilities are shown in Figures 5, 6, and 7, where there is a clearer increase in the selection probabilities for the true predictors and generally smaller probabilities for the remaining nonpredictor variables (Figure 6). Additionally, we can see the impact of prior information more clearly from the prediction error curves obtained for the test data (Figure 8), where the prediction error is lowest for the correct informative prior with an IBS of 0.223 (a), compared to an IBS of 0.233 (b) for the uninformative prior and 0.239 (c) for the incorrect informative prior. Again, we compared the results for the MCMC samplers with and without parallel tempering (see (c, d) of Figures 12-15). Since the nonsparse simulation scenario is more complex than the sparse scenario, we anticipated that the simple MCMC sampler (without parallel tempering) might need more iterations to move into the regions of the model space with the best-performing models, or that the sampler might have problems with poor mixing. Indeed, we observe somewhat slower convergence (up to ca. 5,000 MCMC iterations according to the trace plots in Figures 13-15). Therefore, parallel tempering can potentially be more useful in the nonsparse simulation scenario. However, we find that parallel tempering does not improve the mixing performance of the Markov chains sufficiently to justify the increase in computation time. Figure 9 summarizes the posterior estimates of β and γ for the glioblastoma application. Again, parallel tempering did not improve the Markov chain mixing sufficiently to outweigh the increased computational burden; therefore, we performed the full MCMC runs only without parallel tempering. Glioblastoma. The posterior selection probabilities are quite different for the models with the informative and uninformative selection priors, respectively, as only 3 variables are shared among those with the largest marginal posterior selection probabilities for both priors; see also Figure 10. These are the genes with gene symbols ACMSD (on chromosome 2), SP8 (chromosome 7), and PXDNL (chromosome 8). On average, across all MCMC iterations, the models contained k = 10 variables (uninformative prior) and k = 9 variables (informative prior), respectively. Therefore, for our top models, we select the k variables with the largest posterior selection probabilities. The corresponding variables are highlighted in Figure 9 and their gene names are shown. Table 3 gives an overview of the top genes, including the gene symbols, full names, and posterior selection probabilities. ACMSD can prevent the accumulation of the neuronal excitotoxin quinolinate, which has been implicated in the pathogenesis of several neurodegenerative disorders (https://www.ncbi.nlm.nih.gov/gene/130013, updated 19-Jan-2017).
This agrees with our finding of a negative regression coefficient estimate for ACMSD, since negative coefficients indicate a reduction in the hazard rate with an increase in gene expression. Not much is known about the roles of SP8 (https://www.ncbi.nlm.nih.gov/gene/221833, updated 6-Dec-2016) and PXDNL (https://www.ncbi.nlm.nih.gov/gene/137902, updated 6-Dec-2016) in human cancers or neurological diseases, but genetic variants in SP8 have been associated with psychotic disorders in recent genome-wide association studies in Han Chinese and Japanese populations. While some of the remaining genes are involved in neurological processes or neural development (CALB2, CDH10, ENPP5, and FLRT2), others have been associated with cancer (AKR1B10, CALB2, CDH10, and CYB5R2), but only CYB5R2 has specifically been identified as a potential (epigenetic) marker for glioblastoma prognosis. The prediction performance of the top models is evaluated in terms of the prediction error curves and integrated Brier scores (IBS) on the test dataset; see Figure 11. While the IBS for the model with the uninformative selection prior is not better than the IBS of the reference model (IBS = 0.163), we see a good improvement in the prediction performance for the model with the informative selection prior (IBS = 0.157), and the (test set) prediction error curve for the informative selection prior is lower than the reference prediction error curve, in particular after ca. 12 months. For sampling diagnostics, we refer to Figure 16. It shows the trace plots of the log likelihood functions for all five MCMC chains that were run for sampling from the model with the uninformative selection prior (a) and, correspondingly, for all five MCMC chains used for the informative selection prior (b). The trace plots demonstrate that all Markov chains move very fast (within the first 1,000 MCMC iterations) to a region of the model space where most model log likelihood values lie in the range between ca. −500 and −450. The trace plots also show that the Markov chains do not get stuck in local optima. Conclusion In this manuscript, we have combined a Bayesian Cox model for survival data with a variable selection approach suitable for high-dimensional input data (George and McCulloch, 1993). This approach of framing variable selection via Gibbs sampling over the binary indicator vector γ = (γ_1, ..., γ_p) gave us the opportunity to integrate information from a second data source into the model via the prior distribution for γ. In our application to glioblastoma data, we integrated copy number variation data into a gene-expression-based model for overall survival prognosis, and we found that the inclusion of the copy number data results in better prediction performance on the test dataset. This confirms our findings from the simulation studies that our model setup is able to use the second data source to achieve clear improvements in the prediction accuracy if the second data source truly supplies an informative selection prior, that is, if the variables that are assigned an increased prior selection probability due to information in the secondary data source really are associated (in the main data source) with the response. An incorrect specification of the selection prior, however, might lead to slightly worse prediction performance compared to the uninformative selection prior. In real applications, we will typically not know whether an informative selection prior is specified correctly.
Therefore, it is important to always compare the prediction performance of such an informative prior with the uninformative (standard) prior to see whether or not prediction performance is improved by the prior information. In general, a sensitivity analysis to assess the impact of the choice of priors on the results is a recommended procedure for any Bayesian analysis, especially when using informative priors. The advantage of our fully Bayesian modeling approach compared to frequentist approaches is that we obtain full inference, not only for the posterior distributions of the regression coefficients, but also for the posterior selection probabilities of all the variables. Note that due to the joint modeling we can even obtain posterior inference about the joint selection probabilities of specific sets of variables. In this way, we can explore how the selection of one variable affects the selection probability of another variable, or we can estimate and compare the joint posterior selection probabilities of specific (published) gene signatures, that is, sets of genes that have been identified as prognostic in previous studies. Since we essentially use the Gibbs sampler to perform a stochastic search over the model space of size 2^p (with p easily in the hundreds or thousands), it is not feasible to run the MCMC sampler long enough for reliable posterior estimation in the low-probability regions. However, this is usually not a concern, since we are mostly interested in the variables and models with the highest posterior selection probabilities. Because the stochastic search sampler visits models with a frequency proportional to their posterior selection probability, it is much easier to obtain a sufficient number of MCMC samples for good estimation performance for these high-probability models. In general, there is a trade-off between the computational expense of longer MCMC runs and the improvement in estimation accuracy, both by reducing the MCMC error and by ensuring that the relevant high-probability model regions have been visited with sufficient frequency. Increasing the number of variables considered in the modeling process will also increase the computational expense. A good trade-off is achieved here if the number of variables without predictive value with regard to the survival outcome is kept to a minimum. Our implementation of the algorithm in R has not been optimized with respect to computing performance, and the computing speed could be improved substantially, for example, by using the R package Rcpp and by more efficient memory management. Currently, a single MCMC run in our simulation studies and data application takes ca. one hour per 1,000 MCMC iterations on a 2.6 GHz compute node running Linux with 64 GB memory; all results presented in this manuscript are based on MCMC runs that took a maximum of one week of running time. We found in our applications that the parallel tempering algorithm did not sufficiently improve the mixing performance of the Markov chains (i.e., the ability of the Gibbs sampler to move around in the space of all models) to offset the increase in computation time. The increase in computation time can be minimized by implementing the parallel tempering with true computational parallelization, for example, by running each of the tempered Markov chains on a different node.
In that case, the only increase in computation time comes from the necessary regular exchanges of the states of the Markov chains between neighboring tempered chains. Thus, parallel tempering might be much more favorable in such an implementation. However, note that another trade-off is involved, namely, between the increase in computation time and the improvement in mixing performance due to an increased frequency of state exchanges. See the cited reference for a simple example implementation in R, which illustrates the procedure. Conflicts of Interest The authors declare that they have no conflicts of interest.
A case of wrist tenosynovitis caused by Mycobacterium kansasii in a renal transplant recipient Mycobacterial infection in an organ transplant recipient is a diagnostic and therapeutic challenge. Diagnosis is often delayed, resulting in significant morbidity. Antimicrobial chemotherapy needs careful selection to prevent potentially significant complications, such as organ rejection and dose-related toxicities. We present the case of a 61-year-old Caucasian male kidney transplant recipient with chronic tenosynovitis of the left wrist. Histological findings of the synovial biopsy revealed multinucleated giant cell epithelioid granuloma. Culture of synovial fluid grew Mycobacterium kansasii. Treatment with rifampicin, ethambutol, and clarithromycin proved curative, but the patient developed irreversible ethambutol-related optic neuritis.
Development of a grid computing platform for electric power system applications This paper describes a platform developed for the application of grid computing in electric power systems. Two applications are implemented. One is distributed monitoring and control, implemented using virtual database technology. The other is distributed parallel computing for power system analysis. This paper describes the overall infrastructure of the system. All the computing devices in the electric power system can be wrapped into an integrated whole to realize distributed monitoring and control, providing support for the operation and control of the electric power system, while high-performance, tightly interconnected computers are organized by resource pooling into a computing pool to facilitate distributed parallel computation. The paper first discusses the virtualization of resources and the application of virtual database technology to realize distributed data access and control. Then the structure for realizing the distributed parallel computing is described, in particular the computing pool, resource and task scheduling, the solution of differential-algebraic equations, and graph partitioning. Finally, the test benches built in the laboratory are described and the test results are presented. The test results show that the developed platform can provide performance comparable to other solutions such as MPI, and that it can be further improved.
// angular/src/app/app.component.spec.ts
import { LazyLoadService } from '@abp/ng.core';
import { ComponentFixture, TestBed } from '@angular/core/testing';
import { RouterTestingModule } from '@angular/router/testing';
import { AppComponent } from './app.component';
import { Subject, Observable } from 'rxjs';
import { Component } from '@angular/core';
import { By } from '@angular/platform-browser';
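// Stub component that stands in for the real <abp-loader-bar> so the test
// does not need to pull in the actual theme module.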
@Component({
template: '',
selector: 'abp-loader-bar',
})
class DummyLoaderBarComponent {}
describe('AppComponent', () => {
let component: AppComponent;
let fixture: ComponentFixture<AppComponent>;
let mockLazyLoadService: { load: () => Observable<void> };
let loadResponse$: Subject<void>;
let spy: jasmine.Spy<() => Observable<void>>;
beforeEach(() => {
TestBed.configureTestingModule({
imports: [RouterTestingModule],
declarations: [AppComponent, DummyLoaderBarComponent],
providers: [{ provide: LazyLoadService, useValue: { load: () => loadResponse$ } }],
});
loadResponse$ = new Subject();
fixture = TestBed.createComponent(AppComponent);
component = fixture.componentInstance;
mockLazyLoadService = TestBed.get(LazyLoadService);
spy = spyOn(mockLazyLoadService, 'load');
spy.and.returnValue(loadResponse$);
fixture.detectChanges();
});
describe('LazyLoadService load method', () => {
it('should call', () => {
expect(spy).toHaveBeenCalledWith(
[
'primeng.min.css',
'primeicons.css',
'primeng-nova-light-theme.css',
'fontawesome-all.min.css',
'fontawesome-v4-shims.min.css',
],
'style',
null,
'head',
);
});
});
describe('template', () => {
it('should have the abp-loader-bar', () => {
const abpLoader = fixture.debugElement.query(By.css('abp-loader-bar'));
expect(abpLoader).toBeTruthy();
});
it('should have router-outlet', () => {
const abpLoader = fixture.debugElement.query(By.css('router-outlet'));
expect(abpLoader).toBeTruthy();
});
});
});
Severe pulmonary hypertension and reduced right ventricle systolic function associated with maternal mortality in pregnant uncorrected congenital heart diseases Background Pregnant patients with uncorrected congenital heart disease, especially those who have already developed pulmonary hypertension, have an increased risk of maternal mortality. The severity of pulmonary hypertension and right ventricle function may be associated with higher maternal mortality. The study aimed to investigate the mortality rate of pregnant patients with uncorrected congenital heart disease and the impact of pulmonary hypertension severity on mortality. Methods This is a substudy of the COngenital HeARt Disease in adult and Pulmonary Hypertension Registry. The data of pregnant patients with uncorrected congenital heart disease were analyzed from the registry database. Maternal mortality was recorded, and demographic, clinical, obstetric, and transthoracic echocardiography data were collected. The factors that influenced maternal mortality were analyzed. Statistical significance was determined at a p value < 0.05. Results From 2012 until 2017, there were 78 pregnant congenital heart disease patients. Of them, 56 patients were eligible for analysis. The majority of congenital heart disease was atrial septal defect (91.1%). The maternal mortality rate was 10.7% (6 of 56). Pulmonary hypertension occurred in 48 patients; therefore, the maternal mortality rate among congenital heart disease-pulmonary hypertension patients, with a majority of atrial septal defect, was 12.5% (6 of 48). Among nonsurvivors, 100% suffered from severe pulmonary hypertension, compared to 56.0% of survivors (p = 0.041). Most nonsurvivors had Eisenmenger syndrome (83.3%), significantly more than survivors (22.0%, p = 0.006). Nonsurvivors had significantly worse WHO functional class, reduced right ventricle systolic function, and right heart failure. The modes of maternal death were severe oxygen desaturation (66.7%) and respiratory failure and sepsis (33.3%). Most of the maternal deaths occurred within the 24-h postpartum period. Conclusion The maternal mortality rate among pregnant patients with uncorrected congenital heart disease, with a majority of atrial septal defect, was 10.7%, and among those with congenital heart disease-pulmonary hypertension it was 12.5%. Factors related to maternal mortality were severe pulmonary hypertension, Eisenmenger syndrome, and reduced right ventricle systolic function. Introduction The maternal mortality rate in pregnant women with pulmonary hypertension (PH) is high, ranging from 25 to 56%. 1 In PH associated with congenital heart disease (CHD-PH), the maternal mortality rate varies from 3.8 to 28%. The wide variation in maternal mortality rates is possibly due to disparities in patients' underlying CHDs, treatment modalities, PH-specific medication, pregnancy follow-up, and peripartum care among hospitals/centers. Women with uncorrected and uncomplicated small patent ductus arteriosus (PDA) and uncomplicated atrial or ventricular septal defects (ASD or VSD) have no or only a slightly increased risk of maternal mortality if pregnancy occurs (modified WHO risk class I and II, respectively). 4 Nevertheless, women with CHD-PH are contraindicated for pregnancy and subsequent delivery because it bears a very high risk of maternal mortality (WHO risk class IV). 4 Pregnant women with CHD-PH have difficulty enduring the alterations in hemodynamics during pregnancy and delivery.
Our hospital registry from 2012 to 2017 indicated that a number of pregnant women did not know that they had CHD, because the CHD had not been diagnosed previously, while others with previously diagnosed CHD-PH still chose to become pregnant, although they had been advised to avoid pregnancy. During the period 2012-2017, high-risk pregnancy management in our hospital was conducted through a multidisciplinary approach involving cardiologists, obstetricians, fetomaternal specialists, anesthesiologists, and neonatologists. The important constraints in managing pregnant women with CHD-PH in our hospital were the lack of PH-specific therapy, along with patients' and families' unawareness and delayed presentation to our hospital. This study aimed to investigate the mortality rate of pregnant women with uncorrected CHD and the impact of PH and right ventricle (RV) function on the mortality rate. Study population The COngenital HeARt Disease in adult and Pulmonary Hypertension (COHARD-PH) Registry is a hospital-based clinical registry of adult CHD in the vicinity of the Special Province of Jogjakarta and the southern parts of Central Java, Indonesia. The COHARD-PH registry was initiated in 2012 and is currently still recruiting patients. The registry is centered at Dr Sardjito Hospital, Jogjakarta, Indonesia, a national referral hospital in the region. Until the end of 2017, the COHARD-PH registry comprised more than 800 adult patients with CHD. The initial pilot study of the registry has been published previously. 5 For the current study, the pregnant patients were identified from the COHARD-PH registry database and their data were retrieved for analysis. In the current study, we conducted a retrospective analysis of the COHARD-PH registry database from 2012 to 2017. The inclusion criteria were as follows: pregnant patients, uncorrected septal defects (ASD, VSD, atrioventricular septal defect, and/or PDA), and complete termination of pregnancy at Dr Sardjito Hospital. The exclusion criteria were as follows: patients with other significant heart diseases, patients with existing pulmonary disease, patients with left heart disease, and patients with uncompleted pregnancy during the study period. Patients with other significant heart diseases were those with pulmonary stenosis, aortic stenosis, mitral stenosis, and/or moderate-to-severe regurgitation detected by transthoracic echocardiography (TTE). Patients with existing pulmonary diseases were those with bronchial asthma, chronic obstructive pulmonary disease, and/or interstitial lung disease based on patient history. Patients with left heart diseases were those with a decreased left ventricle ejection fraction (<40%) by TTE, diastolic dysfunction by TTE, or presence of a regional wall motion abnormality by TTE. Informed consent from patients was obtained as part of the complete informed consent in the COHARD-PH registry. Study protocol The demographic, clinical, and obstetric data of the patients were collected from the registry database. The TTE results were also accumulated from the registry database. Recorded baseline data included current gestational age, gravid status, parity status, WHO functional class, peripheral oxygen saturation, chief complaints, the presence of PH, Eisenmenger syndrome, mode of delivery, anesthesia used during delivery, and hospital length of stay. The results of TTE performed during the current admission were retrieved and reviewed.
PH was assessed by TTE criteria, based on the current recommendation using tricuspid regurgitation velocity and echocardiographic signs suggesting PH. 6 Patients with PH were those with a high probability of PH as defined by TTE. 6 We investigated the medical records to assess the outcome of pregnancy from the time of admission until the time of postpartum discharge or the time of death in our hospital. Based on the outcome of pregnancy, patients were divided into survivors and nonsurvivors. Nonsurvivors were patients who died between the time of admission and the postpartum period in our hospital. Survivors were patients who were safely discharged during the postpartum period. The causes of death among nonsurvivors were recorded from the medical records signed by the treating clinicians. The study protocol was approved by the Medical and Health Research Ethics Committee of the Faculty of Medicine, Public Health and Nursing, Universitas Gadjah Mada, Jogjakarta, Indonesia. Statistical analysis For the statistical analysis, a two-group comparison was applied, i.e., survivors versus nonsurvivors. The comparison of continuous data between the two groups was performed with Student's t-test (for normally distributed data) or the Mann-Whitney test (for nonparametric data). The comparison of categorical data between the two groups was performed with the Chi-square test or Fisher's exact test, whichever was applicable. A p value < 0.05 was considered statistically significant. Results From 2012 until the end of 2017, about 800 patients were enrolled in the COHARD-PH registry. Among them, 78 patients were pregnant and underwent pregnancy examination at Dr Sardjito Hospital. Of the 78 patients, four had already undergone defect closure (all were ASD patients, with closure three to seven years previously), three had severe pulmonary stenosis, two had undergone assisted vaginal delivery in district hospitals, seven had not continued the delivery in our hospital and their delivery data were unknown, and six had undergone pregnancy termination by assisted abortion (curettage) at Dr Sardjito Hospital. The four patients with closed defects delivered safely in our hospital (two spontaneous vaginal deliveries (G1P0A0 and G2P1A0), one elective caesarean section (due to a history of caesarean section) (G3P2A0), and one emergency caesarean section (due to fetal distress) (G2P1A0)). The three patients with severe pulmonary stenosis delivered safely (two assisted vaginal deliveries (ASD and severe pulmonary stenosis, G2P0A1; and VSD and severe pulmonary stenosis, G2P0A1) and one elective caesarean section (reason unknown) (ASD and moderate-to-severe pulmonary stenosis, G2P1A0)). The two patients who had delivered in district hospitals were referred to our hospital (one died in our hospital due to severe sepsis (P2A0) and one survived a complication of acute heart failure in our hospital (P3A0)). All six patients who underwent assisted abortion survived (all were ASD with severe PH). Therefore, 56 pregnant patients with uncorrected CHD were analyzed in this study. Figure 1 depicts the patient selection for the current study. The majority of patients had ASD (91.1%). Among the 56 patients, six (10.7%) died during the postpartum period due to cardiac/obstetric complications. PH was detected in 48 patients; therefore, the maternal mortality rate among patients with CHD-PH in our study, which mostly comprised ASD, was 6 out of 48 (12.5%).
The comparison of characteristics between survivors (n = 50) and nonsurvivors (n = 6) is presented in Table 1. There were no differences in terms of age, gestational age, number of pregnancies, or obstetric status between survivors and nonsurvivors. Among nonsurvivors, the majority of patients were primigravida (66.7%). The time to pregnancy termination was significantly longer in nonsurvivors compared to survivors. All of the nonsurvivors were patients with uncorrected ASD (100%). Among nonsurvivors, five patients (83.3%) were unaware of their CHD and first diagnosed with CHD during the current pregnancy, whereas among survivors 31 patients (62.0%) were first diagnosed during the current pregnancy. The most common chief complaint among both survivors and nonsurvivors was dyspnea. As many as 17 survivors (34.0%) had no complaints related to CHD and PH. The PH-specific drugs available in our hospital, as in Indonesia generally, were sildenafil and beraprost. They were given to 32 of the 48 CHD-PH patients (66.7%). Sildenafil (oral, 20 mg t.i.d.) was administered most often, to 50% of patients, and the combination of sildenafil and beraprost (oral, 20 mcg t.i.d.) to 7.4% of patients. All nonsurvivors had been administered sildenafil alone or combined sildenafil and beraprost during the current pregnancy. On average, PH-specific drugs were started at the time of PH diagnosis. Other nonspecific drugs administered during the current pregnancy were furosemide (50% among nonsurvivors) and digoxin (4.7% among survivors). Among all patients, 85.7% had already developed signs of PH. A high probability of PH, based on TTE examination, was diagnosed in 42 (84.0%) survivors and 6 (100%) nonsurvivors (p = 0.226). Among nonsurvivors, all patients (100%) suffered from severe PH based on the TTE result (severe PH was defined as an estimated right ventricle systolic pressure ≥ 90 mmHg). Among survivors, 56.0% had severe PH. The difference in the proportion of severe PH between groups was statistically significant (100% versus 56.0%, p = 0.041). In line with this finding, Eisenmenger syndrome occurred in 83.3% of nonsurvivors, significantly higher than in survivors (22.0%), p = 0.006. The complete TTE results were retrieved for 48 patients and are shown in Table 2. Eight patients had incomplete TTE results performed during the current admission, all of them among the survivors; they had been diagnosed with CHD previously, and TTE had been performed before. Nonsurvivors had significantly reduced TAPSE values, indicating decreasing RV systolic function and imminent right heart failure. LV systolic function was normal and comparable between survivors and nonsurvivors.
For survivors, the majority underwent elective caesarean section (38.0%). The other modes of delivery were assisted vaginal delivery (30.0%), induced vaginal delivery (24.0%), and emergency caesarean section (8.0%). The emergency caesarean sections in survivors were due to obstetric/fetal indications, i.e. one patient due to failed vacuum extraction and two patients due to fetal distress. Intrathecal/epidural labor anesthesia was the most frequent anesthesia used in survivors. For nonsurvivors, general anesthesia and spinal anesthesia were used most frequently. Among nonsurvivors, severe oxygen desaturation was the major cause of death (four out of six patients, 66.7%). The severe oxygen desaturation was most likely due to PH crisis, which was also accompanied by low cardiac output and cardiogenic shock. Sepsis and respiratory failure were the other causes of death, in two out of six nonsurvivors (33.3%). Table 4 shows the individual characteristics of the nonsurvivors.

Discussion

In this study we report that among pregnant patients with uncorrected CHD the maternal mortality rate was 10.7%, and among CHD-PH patients it was 12.5%. Since the majority of patients had ASD, the mortality rate in this study cannot be generalized to all CHD-PH patients. The factors related to maternal mortality were severe PH, Eisenmenger syndrome, and reduced RV systolic function. The proportion of patients with severe PH and Eisenmenger syndrome was significantly higher in nonsurvivors, and reduced RV systolic function was observed in nonsurvivors. The mode of delivery among nonsurvivors was mostly emergency caesarean section. The causes of maternal death were severe oxygen desaturation, respiratory failure, and sepsis. Most of the maternal deaths occurred within 24 h of the postpartum period. The worsened outcome of pregnancy and delivery in patients with CHD-PH is mostly due to a compromised cardiovascular system and worsening PH.7 In cases of uncorrected septal defects, especially with right-to-left or bidirectional shunt, PH is a threatening condition that increases maternal mortality.7 Owing to improvements in peripartum management involving multidisciplinary teams and to the availability of PH-specific medication, maternal mortality in pregnant women with CHD-PH has been reduced to as low as 3.8%.2 However, in our hospital registry, the maternal mortality rate among pregnant patients with CHD-PH was still high. Other registries have reported no or very low maternal mortality, mostly related to severe PH. The most common CHD in our registry is uncorrected ASD.5 All nonsurvivors in our current study had uncorrected ASD and PH. Among congenital defects, ASD is the most common defect encountered during adult life owing to underdiagnosis and late detection. In our registry database, adult ASD is the most common CHD with PH complication.5 These patients presented to the hospital with signs and symptoms of PH and/or right heart failure.5 In this study, 64.8% of patients were first diagnosed with CHD during this pregnancy. Five of the six nonsurvivors were first diagnosed during the current pregnancy; consequently, these patients had received no antenatal care addressing their CHD. Furthermore, most pregnant patients who came to our hospital were already in their third trimester of pregnancy (mean gestational age: 31.1 ± 7.7 weeks).
Among nonsurvivors, the termination of pregnancy was performed straightaway, mostly by emergency caesarean section requiring general anesthesia. Among survivors, by contrast, there was on average a 4.2-week waiting time before planned termination of pregnancy, mostly performed by elective/planned caesarean section. The mode of death among patients with uncorrected CHD-PH during pregnancy and delivery in several previous reports was PH crisis, indicated by a sudden rise in pulmonary artery pressure and hampered blood flow to the left heart, which subsequently caused a low-output state.2 Severely lowered oxygen saturation and reduced systemic blood pressure are hallmarks of PH crisis. The condition is fatal, and currently no effective treatment is available.2 In our current study, severe oxygen desaturation and hemodynamic disturbance consistent with PH crisis occurred in four out of six patients. The postpartum period, especially the first 72 h after delivery, is a critical period for maternal mortality. In our current study, most maternal deaths occurred within 24 h postpartum. Cardiovascular compromise occurs after 20 weeks of gestation, with worsened oxygen desaturation/hypoxemia and heart failure as the most common cardiovascular events.2 Deterioration of WHO functional class may occur with increasing gestational age and is associated with maternal cardiovascular complications and mortality.2,10,11 In our study, deteriorated WHO functional class was observed in nonsurvivors (five patients had WHO class III or IV). In our study, in addition to severe PH, Eisenmenger syndrome was also a predictor of maternal mortality. Almost all nonsurvivors had Eisenmenger syndrome, as indicated by right-to-left shunt defects, cyanosis, and reduced oxygen saturation. Eisenmenger syndrome gradually leads to progressive RV failure. A previous study reported the outcome of Eisenmenger syndrome in pregnancy.12 Similar to our finding, the presence of Eisenmenger syndrome was associated with the worst outcomes, for both maternal and fetal survival.12 RV dysfunction and failure are also associated with worsened outcomes for pregnant women with CHD-PH.13 Reduced TAPSE (mean TAPSE < 19 mm) was observed in nonsurvivors, indicating reduced RV systolic function during the current pregnancy. LV function was still normal (mean LVEF 66.8%) in nonsurvivors; however, RV dysfunction might affect LV function, contributing to reduced cardiac output.14 The unclosed defects and rising mean PAP and PVR during the peripartum period also worsened RV dysfunction and subsequently reduced cardiac output.14 In patients with CHD-PH, there is an increased risk of maternal mortality or severe morbidity. Therefore, pregnancy is contraindicated in patients with CHD-PH. If pregnancy occurs, termination should be discussed by multidisciplinary experts.4 If pregnancy continues, expert counseling is a requisite and intensive cardiac and obstetric monitoring is obligatory throughout pregnancy, delivery, and the postpartum period.4 The mode of delivery and the anesthetic strategy have to be defined by an expert team.4,15 During the study period, our hospital had already developed a multidisciplinary team for high-risk pregnancy, including pregnancy in CHD-PH patients. However, a uniform protocol for CHD-PH or CHD with pregnancy had not been developed.
In our cases, the decision for pregnancy termination and mode of delivery was based on a team decision involving cardiologists, obstetricians, and anesthesiologists. Unfortunately, 11% of patients did not survive despite the multidisciplinary effort to manage them. Most nonsurvivors underwent emergency caesarean section, were on general anesthesia, were primigravida, were first diagnosed during the current pregnancy, had late detection, and were ASD patients. The high number of emergency caesarean sections and general anesthesia was due to maternal indication in patients referred to our emergency unit with a deteriorating pregnancy. These patients had never been examined at our hospital; therefore, no close monitoring had been performed during their antenatal care. The limited availability of PH-specific medication (in our hospital only sildenafil and beraprost were available during the study period) was another factor that may have contributed to maternal mortality. A limitation of this study is that the diagnosis and severity grading of PH were based on TTE examination; the gold standard for PH diagnosis and severity measurement is right heart catheterization, which we did not perform. The small number of nonsurvivors made statistical comparison and conclusions challenging. Another limitation was the number of patients who did not continue pregnancy follow-up and delivery in our hospital despite having been informed of the high-risk pregnancy. We did not have any information regarding the patients lost to follow-up. In conclusion, among pregnant patients with uncorrected CHD, the majority of whom had ASD, the maternal mortality rate was 10.7%, and among CHD-PH patients, again with ASD as the majority, the maternal mortality rate was 12.5%. The factors related to maternal mortality were severe PH, Eisenmenger syndrome, and reduced RV systolic function. The causes of maternal death were severe oxygen desaturation, respiratory failure, and sepsis. Most of the maternal deaths occurred during the postpartum period. |
Comparative analysis of planar and spherical cathodes in gridded electron guns for inductive output tubes The Inductive Output Tube (IOT) is a vacuum electron tube capable of amplifying radio frequency (RF) power with high efficiency. It is popularly used as a high-power source in communication transmitters and particle accelerators operating in the ultra-high frequency (UHF) band. The basic structure of an IOT is similar to that of a klystron, except that the gun and input cavity are two distinctly separated components in a klystron, whereas these two components are integrated in an IOT, which makes the electron gun and input cavity the most critical parts to design and develop. The paper discusses the design results of two electron guns having different shapes of cathode emitting surface - spherical and planar. Both structures have been designed for the same specifications, and a comparative analysis is presented. On the basis of the merits and demerits of these structures, the more preferable design is pointed out for implementation in IOTs. |
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
from six.moves import xrange
import tensorflow as tf
FLAGS = tf.app.flags.FLAGS
tf.app.flags.DEFINE_integer('batch_size', 128, """Number of images to process in a batch.""")
tf.app.flags.DEFINE_string('data_dir', 'data', """Path to the data directory.""")
IMAGE_SIZE = 96
NUM_CLASSES = 2
NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN = 10000
NUM_EXAMPLES_PER_EPOCH_FOR_EVAL = 10000
def read_input_dataset (filename_queue):
HEIGHT = 128
WIDTH = 128
DEPTH = 3
label_bytes = 1
image_bytes = HEIGHT * WIDTH * DEPTH
# Every record consists of a label followed by the image, with a
# fixed number of bytes for each.
record_bytes = label_bytes + image_bytes
# Read a record, getting filenames from the filename_queue. No
# header or footer in the file format, so we leave header_bytes
# and footer_bytes at their default of 0.
reader = tf.FixedLengthRecordReader(record_bytes=record_bytes)
key, value = reader.read(filename_queue)
# Convert from a string to a vector of uint8 that is record_bytes long.
record_bytes = tf.decode_raw(value, tf.uint8)
# The first bytes represent the label, which we convert from uint8->int32.
label = tf.cast (tf.slice (record_bytes, [0], [label_bytes]), tf.int32)
# The remaining bytes after the label represent the image, which we reshape
# from [depth * height * width] to [depth, height, width].
depth_major = tf.reshape(tf.slice(record_bytes, [label_bytes], [image_bytes]), [DEPTH, HEIGHT, WIDTH])
# Convert from [depth, height, width] to [height, width, depth].
uint8image = tf.transpose(depth_major, [1, 2, 0])
return uint8image, label
def _generate_image_and_label_batch(image, label, min_queue_examples, batch_size, shuffle, summary_name):
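    # min_queue_examples sets how many examples must already be buffered before
    # batching begins; larger values let shuffle_batch mix the data better at
    # the cost of memory and startup time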
num_preprocess_threads = 4
if shuffle:
images, label_batch = tf.train.shuffle_batch( [image, label], batch_size=batch_size, num_threads=num_preprocess_threads, capacity=min_queue_examples + 3 * batch_size, min_after_dequeue=min_queue_examples)
else:
images, label_batch = tf.train.batch( [image, label], batch_size=batch_size, num_threads=num_preprocess_threads, capacity=min_queue_examples + 3 * batch_size)
# Display the training images in the visualizer.
tf.image_summary(summary_name, images)
return images, tf.reshape(label_batch, [batch_size])
def distorted_inputs ():
data_dir = FLAGS.data_dir
batch_size = FLAGS.batch_size
filenames = [os.path.join (data_dir, 'train_batch.bin')]
filename_queue = tf.train.string_input_producer(filenames)
train_image, train_label = read_input_dataset (filename_queue)
reshaped_image = tf.cast (train_image, tf.float32)
height = IMAGE_SIZE
width = IMAGE_SIZE
# Image processing for training the network. Note the many random
# distortions applied to the image.
# Randomly crop a [height, width] section of the image.
distorted_image = tf.random_crop(reshaped_image, [height, width, 3])
# Randomly flip the image horizontally.
distorted_image = tf.image.random_flip_left_right(distorted_image)
# Because these operations are not commutative, consider randomizing
# the order of their operation.
distorted_image = tf.image.random_brightness(distorted_image, max_delta=63)
distorted_image = tf.image.random_contrast(distorted_image, lower=0.2, upper=1.8)
# Subtract off the mean and divide by the variance of the pixels.
float_image = tf.image.per_image_whitening(distorted_image)
# Ensure that the random shuffling has good mixing properties.
min_fraction_of_examples_in_queue = 0.4
min_queue_examples = int(NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN * min_fraction_of_examples_in_queue)
print ('Filling queue with %d images before starting to train. '
'This will take a few minutes.' % min_queue_examples)
# Generate a batch of images and labels by building up a queue of examples.
return _generate_image_and_label_batch (float_image, train_label, min_queue_examples, batch_size, shuffle=True, summary_name='train_images')
def inputs (eval_data):
data_dir = FLAGS.data_dir
batch_size = FLAGS.batch_size
if not eval_data:
filenames = [os.path.join (data_dir, 'train_batch.bin')]
num_examples_per_epoch = NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN
else:
filenames = [os.path.join(data_dir, 'test_batch.bin')]
num_examples_per_epoch = NUM_EXAMPLES_PER_EPOCH_FOR_EVAL
filename_queue = tf.train.string_input_producer(filenames)
test_image, test_label = read_input_dataset (filename_queue)
reshaped_image = tf.cast (test_image, tf.float32)
height = IMAGE_SIZE
width = IMAGE_SIZE
# Image processing for evaluation.
# Crop the central [height, width] of the image.
resized_image = tf.image.resize_image_with_crop_or_pad(reshaped_image, height, width)
# Subtract off the mean and divide by the variance of the pixels.
float_image = tf.image.per_image_whitening(resized_image)
# Ensure that the random shuffling has good mixing properties.
min_fraction_of_examples_in_queue = 0.4
min_queue_examples = int(num_examples_per_epoch * min_fraction_of_examples_in_queue)
# Generate a batch of images and labels by building up a queue of examples.
return _generate_image_and_label_batch(float_image, test_label, min_queue_examples, batch_size, shuffle=False, summary_name='test_images')
def input_eval (N):
filenames = [os.path.join (FLAGS.data_dir, 'test_batch.bin')]
filename_queue = tf.train.string_input_producer (filenames)
test_image, test_label = read_input_dataset (filename_queue)
reshaped_image = tf.cast (test_image, tf.float32)
height = IMAGE_SIZE
width = IMAGE_SIZE
images = []
# Randomly crop N times
for i in xrange (N):
distorted_image = tf.random_crop (reshaped_image, [height, width, 3])
float_image = tf.image.per_image_whitening (distorted_image)
images.append (tf.expand_dims (float_image, 0))
return tf.concat (0, images), test_label
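# Minimal usage sketch for the TF 1.x queue-based pipelines above (this assumes
# data/train_batch.bin exists in the fixed-length record format described in
# read_input_dataset); queue runners must be started before batches can be fetched.
if __name__ == "__main__":
    images, labels = distorted_inputs()
    with tf.Session() as sess:
        coord = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(sess=sess, coord=coord)
        try:
            image_batch, label_batch = sess.run([images, labels])
            print(image_batch.shape, label_batch.shape)  # (128, 96, 96, 3) (128,)
        finally:
            coord.request_stop()
            coord.join(threads)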
|
// Copyright (c) 2019-2021 WAZN Project
// Copyright (c) 2014-2019, The Monero Project
//
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without modification, are
// permitted provided that the following conditions are met:
//
// 1. Redistributions of source code must retain the above copyright notice, this list of
// conditions and the following disclaimer.
//
// 2. Redistributions in binary form must reproduce the above copyright notice, this list
// of conditions and the following disclaimer in the documentation and/or other
// materials provided with the distribution.
//
// 3. Neither the name of the copyright holder nor the names of its contributors may be
// used to endorse or promote products derived from this software without specific
// prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#include "TransactionHistorySortFilterModel.h"
#include "TransactionHistoryModel.h"
#include <QDebug>
#include <QtGlobal>
namespace {
/**
* helper to extract scope value from filter
*/
template <typename T>
T scopeFilterValue(const QMap<int, QVariant> &filters, int role, int scopeIndex)
{
if (!filters.contains(role)) {
return T();
}
return filters.value(role).toList().at(scopeIndex).value<T>();
}
/**
* helper to setup scope value to filter
*/
template <typename T>
void setScopeFilterValue(QMap<int, QVariant> &filters, int role, int scopeIndex, const T &value)
{
QVariantList scopeFilter;
if (filters.contains(role)) {
scopeFilter = filters.value(role).toList();
}
while (scopeFilter.size() < 2) {
scopeFilter.append(T());
}
scopeFilter[scopeIndex] = QVariant::fromValue(value);
filters[role] = scopeFilter;
}
}
TransactionHistorySortFilterModel::TransactionHistorySortFilterModel(QObject *parent)
: QSortFilterProxyModel(parent)
{
setDynamicSortFilter(true);
}
QString TransactionHistorySortFilterModel::searchFilter() const
{
return m_searchString;
}
void TransactionHistorySortFilterModel::setSearchFilter(const QString &arg)
{
if (searchFilter() != arg) {
m_searchString = arg;
emit searchFilterChanged();
invalidateFilter();
}
}
QString TransactionHistorySortFilterModel::paymentIdFilter() const
{
return m_filterValues.value(TransactionHistoryModel::TransactionPaymentIdRole).toString();
}
void TransactionHistorySortFilterModel::setPaymentIdFilter(const QString &arg)
{
if (paymentIdFilter() != arg) {
m_filterValues[TransactionHistoryModel::TransactionPaymentIdRole] = arg;
emit paymentIdFilterChanged();
invalidateFilter();
}
}
QDate TransactionHistorySortFilterModel::dateFromFilter() const
{
return scopeFilterValue<QDate>(m_filterValues, TransactionHistoryModel::TransactionTimeStampRole, ScopeIndex::From);
}
void TransactionHistorySortFilterModel::setDateFromFilter(const QDate &date)
{
if (date != dateFromFilter()) {
setScopeFilterValue(m_filterValues, TransactionHistoryModel::TransactionTimeStampRole, ScopeIndex::From, date);
emit dateFromFilterChanged();
invalidateFilter();
}
}
QDate TransactionHistorySortFilterModel::dateToFilter() const
{
return scopeFilterValue<QDate>(m_filterValues, TransactionHistoryModel::TransactionTimeStampRole, ScopeIndex::To);
}
void TransactionHistorySortFilterModel::setDateToFilter(const QDate &date)
{
if (date != dateToFilter()) {
setScopeFilterValue(m_filterValues, TransactionHistoryModel::TransactionTimeStampRole, ScopeIndex::To, date);
emit dateToFilterChanged();
invalidateFilter();
}
}
double TransactionHistorySortFilterModel::amountFromFilter() const
{
return scopeFilterValue<double>(m_filterValues, TransactionHistoryModel::TransactionAmountRole, ScopeIndex::From);
}
void TransactionHistorySortFilterModel::setAmountFromFilter(double value)
{
if (value != amountFromFilter()) {
setScopeFilterValue(m_filterValues, TransactionHistoryModel::TransactionAmountRole, ScopeIndex::From, value);
emit amountFromFilterChanged();
invalidateFilter();
}
}
double TransactionHistorySortFilterModel::amountToFilter() const
{
return scopeFilterValue<double>(m_filterValues, TransactionHistoryModel::TransactionAmountRole, ScopeIndex::To);
}
void TransactionHistorySortFilterModel::setAmountToFilter(double value)
{
if (value != amountToFilter()) {
setScopeFilterValue(m_filterValues, TransactionHistoryModel::TransactionAmountRole, ScopeIndex::To, value);
emit amountToFilterChanged();
invalidateFilter();
}
}
int TransactionHistorySortFilterModel::directionFilter() const
{
return m_filterValues.value(TransactionHistoryModel::TransactionDirectionRole).value<TransactionInfo::Direction>();
}
void TransactionHistorySortFilterModel::setDirectionFilter(int value)
{
if (value != directionFilter()) {
m_filterValues[TransactionHistoryModel::TransactionDirectionRole] = QVariant::fromValue(value);
emit directionFilterChanged();
invalidateFilter();
}
}
void TransactionHistorySortFilterModel::sort(int column, Qt::SortOrder order)
{
QSortFilterProxyModel::sort(column, order);
}
TransactionHistory *TransactionHistorySortFilterModel::transactionHistory() const
{
const TransactionHistoryModel * model = static_cast<const TransactionHistoryModel*> (sourceModel());
return model->transactionHistory();
}
bool TransactionHistorySortFilterModel::filterAcceptsRow(int source_row, const QModelIndex &source_parent) const
{
if (source_row < 0 || source_row >= sourceModel()->rowCount()) {
return false;
}
QModelIndex index = sourceModel()->index(source_row, 0, source_parent);
if (!index.isValid()) {
return false;
}
bool result = true;
// iterating through filters
for (int role : m_filterValues.keys()) {
if (m_filterValues.contains(role)) {
QVariant data = sourceModel()->data(index, role);
switch (role) {
case TransactionHistoryModel::TransactionPaymentIdRole:
result = data.toString().contains(paymentIdFilter());
break;
case TransactionHistoryModel::TransactionTimeStampRole:
{
#if QT_VERSION >= QT_VERSION_CHECK(5, 14, 0)
QDateTime from = dateFromFilter().startOfDay();
QDateTime to = dateToFilter().endOfDay();
#else
QDateTime from = QDateTime(dateFromFilter());
QDateTime to = QDateTime(dateToFilter());
to = to.addDays(1); // including upperbound
#endif
QDateTime timestamp = data.toDateTime();
bool matchFrom = from.isNull() || timestamp.isNull() || timestamp >= from;
bool matchTo = to.isNull() || timestamp.isNull() || timestamp <= to;
result = matchFrom && matchTo;
}
break;
case TransactionHistoryModel::TransactionAmountRole:
{
double from = amountFromFilter();
double to = amountToFilter();
double amount = data.toDouble();
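                // a non-positive bound means that side of the amount filter is unset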
bool matchFrom = from <= 0 || amount >= from;
bool matchTo = to <= 0 || amount <= to;
result = matchFrom && matchTo;
}
break;
case TransactionHistoryModel::TransactionDirectionRole:
result = directionFilter() == TransactionInfo::Direction_Both ? true
: data.toInt() == directionFilter();
break;
default:
break;
}
if (!result) { // stop the loop once filter doesn't match
break;
}
}
}
if (!result || m_searchString.isEmpty())
return result;
    // check each searchable role against the search string
    static const int searchRoles[] = {
        TransactionHistoryModel::TransactionPaymentIdRole,
        TransactionHistoryModel::TransactionDisplayAmountRole,
        TransactionHistoryModel::TransactionBlockHeightRole,
        TransactionHistoryModel::TransactionFeeRole,
        TransactionHistoryModel::TransactionHashRole,
        TransactionHistoryModel::TransactionDateRole,
        TransactionHistoryModel::TransactionTimeRole,
        TransactionHistoryModel::TransactionDestinationsRole
    };
    for (int role : searchRoles) {
        if (sourceModel()->data(index, role).toString().contains(m_searchString)) {
            return true;
        }
    }
    return false;
}
bool TransactionHistorySortFilterModel::lessThan(const QModelIndex &source_left, const QModelIndex &source_right) const
{
return QSortFilterProxyModel::lessThan(source_left, source_right);
}
|
# -*- coding: utf-8 -*-
"""
Tencent is pleased to support the open source community by making 蓝鲸智云PaaS平台社区版 (BlueKing PaaS Community
Edition) available.
Copyright (C) 2017-2020 TH<NAME>, a Tencent company. All rights reserved.
Licensed under the MIT License (the "License"); you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://opensource.org/licenses/MIT
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
"""
from __future__ import absolute_import
import logging
import traceback
from pipeline.exceptions import PipelineException
from pipeline.core.flow.gateway import ConditionalParallelGateway
from pipeline.core.data.hydration import hydrate_data
from pipeline.engine.models import (
Status,
PipelineProcess,
)
from .base import FlowElementHandler
logger = logging.getLogger('celery')
__all__ = ['ConditionalParallelGatewayHandler']
class ConditionalParallelGatewayHandler(FlowElementHandler):
@staticmethod
def element_cls():
return ConditionalParallelGateway
def handle(self, process, element, status):
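        # when the gateway is executed again in a loop, restore the context
        # variables before evaluating the branch conditions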
if status.loop > 1:
process.top_pipeline.context.recover_variable()
try:
hydrate_context = hydrate_data(process.top_pipeline.context.variables)
targets = element.targets_meet_condition(hydrate_context)
except PipelineException as e:
logger.error(traceback.format_exc())
Status.objects.fail(element, ex_data=e.message)
return self.HandleResult(next_node=None, should_return=True, should_sleep=True)
children = []
for target in targets:
try:
child = PipelineProcess.objects.fork_child(parent=process,
current_node_id=target.id,
destination_id=element.converge_gateway_id)
except PipelineException as e:
logger.error(traceback.format_exc())
Status.objects.fail(element, ex_data=e.message)
return self.HandleResult(next_node=None, should_return=True, should_sleep=True)
children.append(child)
process.join(children)
Status.objects.finish(element)
return self.HandleResult(next_node=None, should_return=True, should_sleep=True)
|
On the advice of touch judge Jeff Younis, Sutton sent it to be reviewed by the NRL bunker after he had ruled it as a try on the field. The bunker denied the try as replays appeared to show Morris only marginally failed to ground the ball. So close: Josh Morris goes within centimetres of scoring for NSW. Credit:Getty Images Daley said he had "no idea" about whether Morris' effort should have been ruled a try, but when pressed further didn't hold back on his assessment of Sutton and Cummins. "Put it this way, I'll be asking those two referees to not be officiating in game two," Daley snarled. "There's your story. Based on history and based on that game." Both sides were granted four penalties in the dour affair.
Daley's Queensland counterpart, Kevin Walters, joked "we can get them back in" when asked about the performance of Sutton and Cummins, the latter of whom was the subject of an Anzac Day tirade from Roosters coach Trent Robinson. No try: Robbie Farah speaks to referee Gerard Sutton after the bunker decision to deny Josh Morris a try. Credit:Getty Images "Being honest, we did get some nice calls – favourable calls – and you need them at this level," Walters said. "You make your own luck, too. All of those 50-50 calls certainly went Queensland's way." The game's most celebrated referee, Bill Harrigan, was quick to condemn Daley's attack on the referees, the major talking point of a scrappy affair. "It was disappointing and it's always disappointing to see the questions lead that way," Harrigan told Triple M. "When he does have an opportunity to review it on video he will see [the Morris call] was a dead set no try. That decision was correct.
"What will be interesting is to see how much pull a coach has in this day and age with the appointment of referees." NSW forced a series decider last year after losing the first game in Sydney, but face a monumental task of squaring the series at Suncorp Stadium after emerging victorious at the Melbourne Cricket Ground last year. But Daley and Blues skipper Paul Gallen remained upbeat about their side's chances of securing an unlikely series victory. "The mentality changes to a must-win game in game two, and if we apply that effort and execute a bit better, I think we can level it," Gallen said. Added Daley: "There were some really encouraging signs for us. We're a young team with new blood and we always knew we were going to be better as the series rolled on. |
A New Two-Dimensional Functional Material with Desirable Bandgap and Ultrahigh Carrier Mobility

Two-dimensional (2D) semiconductors with direct, modest bandgaps and ultrahigh carrier mobility are highly desired functional materials for nanoelectronic applications. Herein, we predict that monolayer CaP3 is a new 2D functional material that possesses not only a direct bandgap of 1.15 eV (based on HSE06 computation) but also a very high electron mobility, up to 19930 cm² V⁻¹ s⁻¹, comparable to that of monolayer phosphorene. More remarkably, in contrast to bilayer phosphorene, whose carrier mobility is dramatically reduced compared with its monolayer counterpart, the CaP3 bilayer possesses an even higher electron mobility (22380 cm² V⁻¹ s⁻¹) than the monolayer. The bandgap of 2D CaP3 can be tuned over a wide range, from 1.15 to 0.37 eV (HSE06 values), by controlling the number of stacked CaP3 layers. Besides these electronic properties, 2D CaP3 also exhibits optical absorption over the entire visible-light range. The combined electronic, charge-mobility, and optical properties render 2D CaP3 an exciting functional material for future nanoelectronic and optoelectronic applications.

In this communication, we report a new member of the phosphorene derivative family, namely a 2D CaP3 structure distinctly different from the recently reported 2D GeP3 and 2D InP3.21,22 This new atomic-layered material possesses several fascinating electronic properties compared with previously reported phosphorene derivatives. Note that bulk CaP3, discovered in 1973,26 is a metal phosphide. It is a Ca-P compound consisting of a porous framework of phosphorene with Ca atoms located at P-dimer vacancy sites. Recently, the MP3 (M = Ca, Sr, Ba) series was predicted to be topological nodal-line semimetals.27 Here, our comprehensive density-functional theory (DFT) computations suggest that monolayer CaP3 is a direct-gap semiconductor with a bandgap of 1.15 eV (based on the HSE06 functional). The bandgap of 2D multilayered CaP3 ranges from 1.15 to 0.37 eV (HSE06), depending on the number of layers. Also importantly, 2D CaP3 exhibits strongly anisotropic carrier mobilities, with an electron mobility as high as 22380 cm² V⁻¹ s⁻¹, a value comparable to the predicted hole mobility of phosphorene.

Structure and Stability

The optimized structures of bulk and monolayer CaP3 are shown in Figure 1. Bulk CaP3 exhibits P-1 symmetry, and its crystalline structure can be viewed as a layered porous framework of phosphorene with Ca atoms located at P-dimer vacancy sites. The CaP3 monolayer is constructed by taking one atomic layer from bulk CaP3 along the (0 0 1) direction. The monolayer exhibits the same P-1 symmetry as the bulk. However, some reconstructions of the structure are worth noting, e.g., the distortion of the P-P bonds and the shift of the Ca atoms closer to the plane of the CaP3 sheet, which renders the atomic layer thinner by about 0.49 Å. The lattice constants of the CaP3 monolayer are a = 5.59 Å, b = 5.71 Å, and γ = 81.09°, slightly different from those of the bulk lattice. The CaP3 monolayer also exhibits a puckered configuration, similar to black phosphorene. In the CaP3 monolayer, the P-Ca bond length is 2.83-2.97 Å, while the P-P bond length is 2.21-2.24 Å, shorter than that in phosphorene (2.24-2.28 Å). The electron localization function shows clear ionic bonding between Ca and P (see Figure S1).
Bader charge analysis suggests that each Ca atom transfers about 1.39 electrons to P atoms (see Table S2). For the CaP3 bilayer and trilayer, our DFT calculations show that both prefer the same stacking order as the bulk; the optimized structure of the bilayer is shown in Figure S2. The dynamic stability of the CaP3 monolayer is confirmed by computing the phonon dispersion curves, which show no imaginary phonon modes (Figure 1c). The highest vibration frequency is 443 cm⁻¹, comparable to that of phosphorene (450 cm⁻¹)28 and 2D GeP3 (480 cm⁻¹),21 reflecting the mechanical robustness of the covalent P-P bonds. Thermal stability is also examined via BOMD simulations in which the temperature of the system is controlled at 300, 600, and 1000 K, respectively. The overall structure is still well kept after 5 ps of simulation even when the temperature is raised to 1000 K (see Figure S4). We note that the use of periodic boundary conditions in a BOMD simulation with a relatively small system size can artificially increase the stability of the structure. Still, the simulation at the elevated temperature (1000 K) suggests that the CaP3 monolayer is highly stable, at least at ambient temperature. To examine the ease of mechanical exfoliation of a CaP3 monolayer from the bulk, we compute the cleavage energy Ecl, which characterizes the interlayer coupling strength.29 To this end, a planar fracture is introduced within a unit cell of bulk CaP3. A CaP3 supercell with four layers is used so that the interaction between two neighboring fractures due to the periodic boundary condition can be neglected. As shown in Figure 1, the computed cleavage energy is comparable to that of 2D InP3 (1.32 J/m²),22 so we conclude that it should be quite feasible to exfoliate a CaP3 monolayer from the bulk.

Electronic Structure

The computed electronic structure of monolayer CaP3 is shown in Figure 2a. For 2D CaP3 multilayers, the electronic structure depends strongly on the number of layers, and the HSE06 computation indicates that multilayers of different thickness still retain the direct-bandgap feature (Figure S6). As shown in Figure 2(b) and Table S3, the bandgap decreases with increasing layer number. Hence, we expect that the predicted physical trend in the bandgap change is realistic (a similar conclusion has been drawn for the widely studied phosphorene) and will be confirmed by future experiments.

Optical Absorption

Optical properties of 2D CaP3 have also been computed based on the HSE06 functional.

Carrier Mobility

The carrier mobility of 2D CaP3 is calculated on the basis of the deformation potential (DP) theory,33 which has been widely used to predict the carrier mobility of 2D atomic-layered structures.21-23,34,35 Note that DP-theory-based mobilities tend to overestimate measured mobilities. For phosphorene, the calculated hole mobility based on the DP theory is around 10000 cm² V⁻¹ s⁻¹,14 which overestimates the measured hole mobility of 5200 cm² V⁻¹ s⁻¹.20 In any case, according to the DP theory, the carrier mobility of a 2D system can be described by the following formula:23

μ_2D = e ħ³ C_2D / (k_B T m* m_d E_1²),

where k_B is the Boltzmann constant, T is the temperature (300 K), C_2D is the in-plane elastic modulus, E_1 is the deformation potential constant, m* is the effective mass in the transport direction, and m_d = (m_a* m_b*)^(1/2) is the average effective mass. Because the directions a and b of the CaP3 monolayer unit cell are close to perpendicular to one another, m_d is taken to be (m_a* m_b*)^(1/2) for simplicity. For the CaP3 bilayer, a conventional unit cell is used in the calculation, and the directions a and b are defined in Figure S2.
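As a quick numerical sanity check of the deformation-potential formula above, the short sketch below evaluates it in SI units. The input values (C_2D, E_1, and the effective masses) are hypothetical placeholders for illustration, not the computed CaP3 parameters, which are reported in Table 1.

HBAR = 1.054571817e-34        # reduced Planck constant, J*s
KB = 1.380649e-23             # Boltzmann constant, J/K
E_CHARGE = 1.602176634e-19    # elementary charge, C
M_E = 9.1093837015e-31        # electron rest mass, kg

def dp_mobility_2d(c2d, e1_ev, m_eff, m_avg, temperature=300.0):
    """2D carrier mobility (m^2 V^-1 s^-1) from the DP formula above."""
    e1 = e1_ev * E_CHARGE  # deformation potential constant, eV -> J
    return (E_CHARGE * HBAR**3 * c2d) / (
        KB * temperature * (m_eff * M_E) * (m_avg * M_E) * e1**2)

# hypothetical inputs: C_2D = 50 N/m, E_1 = 1.0 eV, m* = 0.2 m_e, m_d = 0.25 m_e
mu = dp_mobility_2d(c2d=50.0, e1_ev=1.0, m_eff=0.2, m_avg=0.25)
print("mobility: %.0f cm^2 V^-1 s^-1" % (mu * 1e4))  # convert m^2 to cm^2

With these placeholder inputs the formula gives a mobility on the order of 10⁴ cm² V⁻¹ s⁻¹, the same order as the values discussed in the text.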
Our results are summarized in Table 1.

Discussion

Among all phosphorene derivatives reported so far, 2D CaP3 possesses the highest electron mobility, while phosphorene possesses the highest hole mobility. 2D CaP3 can therefore be an excellent complement to phosphorene in terms of electron mobility. Electrons in 2D CaP3 are much more mobile than holes, contrary to the trend for phosphorene and hittorfene, for which holes exhibit the higher mobility. For phosphorene, the carrier mobility decreases rapidly from the monolayer to the bilayer. In contrast, the CaP3 bilayer possesses an even higher electron mobility, up to 2.24 × 10⁴ cm² V⁻¹ s⁻¹ along the a direction, indicating that superior optical absorption together with high mobility can be achieved with thicker CaP3 multilayers. Compared with the recently reported 2D GeP3 or InP3,21,22 despite the apparent similarity in chemical formula, the structure of CaP3 is fundamentally different. Specifically, the structures of 2D GeP3 and InP3 can be regarded as semi-single-atom-thick sheets combining single metal units with P6 hexatomic rings. There is no continuous P framework in those two structures, only P6 clusters, and the coordination of all atoms is 3. In contrast, in the 2D crystalline structure of CaP3 the coordination of Ca and of most P atoms is more than 3. The structure is anisotropic, resulting in anisotropic electronic and mechanical properties. Owing to this dramatically different structure, the electronic properties of 2D CaP3 are fundamentally different from those of 2D GeP3 or InP3, as shown above. In particular, the electron mobilities of 2D GeP3 and InP3 are an order of magnitude lower than that of 2D CaP3 (~10⁴ cm² V⁻¹ s⁻¹). In addition, the CaP3 monolayer possesses a direct bandgap of 1.15 eV (HSE06), higher than that of GeP3 (indirect, 0.55 eV (HSE06)) and InP3 (indirect, 1.14 eV (HSE06)). Thus 2D CaP3 should be more suitable for electronic and optoelectronic applications than 2D GeP3.

Conclusion

Computational methods

All calculations are performed within the framework of DFT, implemented in the Vienna ab initio simulation package (VASP 5.3).36,37 The generalized gradient approximation (GGA) in the form of the Perdew-Burke-Ernzerhof (PBE) functional and projector augmented wave (PAW) potentials are used. The valence electron configuration of P is 3s²3p³, and that of Ca is 3s²3p⁶4s². The dispersion-corrected DFT method (optB88-vdW functional) is used in all structure optimizations,41,42 and has proven reliable for phosphorene systems.14 The vacuum spacing in the supercell is larger than 15 Å so that interactions among periodic images can be neglected. An energy cutoff of 500 eV is adopted for the plane-wave expansion of the electronic wave function. The k-point sampling is carefully examined to ensure that the calculated results are converged. Geometric structures are relaxed until the force on each atom is less than 0.01 eV/Å and the energy convergence criterion of 1 × 10⁻⁵ eV is met. For each system, the supercell is optimized to obtain the lattice parameters of the lowest-energy structure. For bulk CaP3, the optB88-vdW computation predicts lattice constants of a = 5.60 Å, b = 5.68 Å, and c = 5.62 Å, in good agreement with the experimental values of a = 5.59 Å, b = 5.67 Å, and c = 5.62 Å.26 Since the DFT/GGA method tends to underestimate the bandgap of semiconductors, the screened hybrid HSE06 method is also used to examine the band structures.43
With the optB88-vdW optimized structure, the computed bandgap of bulk CaP3 at the HSE06 level of theory is 0.37 eV, consistent with the previous study.27 For phosphorene, the HSE06 functional tends to underestimate the electronic bandgap but gives optical gaps very close to the experimental values, while the more computationally demanding hybrid PBE0 functional gives an electronic bandgap closer to experiment. In any case, PBE0 computations are also performed.44 A detailed discussion of the different DFT methods is given in the Supplemental Information. Bader's atoms-in-molecules (AIM) method (based on topological analysis of the charge density) is used for the charge population calculation.45 The BOMD simulations are performed using the CASTEP 7.0 package,46,47 for which the supercell contains 96 atoms. The PBE functional and ultrasoft pseudopotentials are selected, with an energy cutoff of 280 eV. The BOMD simulations are carried out in the constant-temperature, constant-pressure ensemble. The temperature (300 K, 600 K, or 1000 K) and pressure (1 atm) are controlled using the Nose-Hoover48 and Andersen-Hoover49 methods, respectively. The time step in the BOMD simulations is 1 fs, and each independent BOMD simulation lasts 5 ps. Moreover, to confirm the dynamic stability of the 2D structure, the phonon dispersion spectrum is also computed using the CASTEP 7.0 package with the finite displacement method.

Supporting Information

Lattice constants, bandgaps, band structures, and PDOS of CaP3 bulk and 2D layers; charge analysis (Table S2).

Author Contributions

N.L. and Z.Z. contributed equally to this work.

Notes

There are no conflicts to declare. |
// via/via_system_windows.hpp
/*
* Copyright (c) 2016-2018 Valve Corporation
* Copyright (c) 2016-2018 LunarG, Inc.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
* Author: <NAME> <<EMAIL>>
*/
#ifdef VIA_WINDOWS_TARGET
#pragma once
#include <tuple>
#pragma warning(disable : 4996)
#include <shlwapi.h>
#include <Cfgmgr32.h>
#include "via_system.hpp"
class ViaSystemWindows : public ViaSystem {
public:
ViaSystemWindows();
protected:
virtual bool IsAbsolutePath(const std::string& path) override;
virtual int RunTestInDirectory(std::string path, std::string test, std::string cmd_line) override;
virtual ViaResults PrintSystemEnvironmentInfo();
virtual ViaResults PrintSystemHardwareInfo();
virtual ViaResults PrintSystemExecutableInfo();
virtual ViaResults PrintSystemDriverInfo();
virtual ViaResults PrintSystemLoaderInfo();
virtual ViaResults PrintSystemSdkInfo();
virtual ViaResults PrintSystemImplicitLayerInfo();
virtual ViaResults PrintSystemExplicitLayerInfo();
virtual ViaResults PrintSystemSettingsFileInfo();
virtual void PrintFileVersionInfo(const std::string& json_filename, const std::string& library);
virtual bool CheckExpiration(OverrideExpiration expiration);
virtual std::string GetEnvironmentalVariableValue(const std::string& env_var);
virtual bool ExpandPathWithEnvVar(std::string& path);
private:
bool FindDriverIdsFromPlugAndPlay();
bool FindDriverSpecificRegistryJsons(const std::string& key_name,
std::vector<std::tuple<std::string, bool, std::string>>& json_paths);
bool PrintDriverRegistryInfo(std::vector<std::tuple<std::string, bool, std::string>>& cur_driver_json, std::string system_path,
bool& found_lib);
bool GetFileVersion(const std::string& filename, std::string& version_string);
void PrintUninstallRegInfo(HKEY reg_folder, char* output_string, char* count_string, char* generic_string, char* version_string,
unsigned int& install_count);
bool PrintSdkUninstallRegInfo(HKEY reg_folder, char* output_string, char* count_string, char* generic_string);
bool PrintExplicitLayersRegInfo(std::vector<std::tuple<std::string, bool, std::string>>& cur_layer_json,
ViaSystem::ViaResults& res);
bool PrintImplicitLayersRegInfo(std::vector<std::tuple<std::string, bool, std::string>>& cur_layer_json,
std::vector<std::string>& override_paths, ViaResults& res);
ViaResults FindAndPrintAllExplicitLayersInPath(const std::string& layer_path);
OSVERSIONINFOEX _os_version_info;
SYSTEM_INFO _system_info;
std::vector<std::tuple<std::string, DEVINST>> _device_ids;
};
#endif // VIA_WINDOWS_TARGET |
// repository: blitzcodes/improved-initiative
import * as React from "react";
import * as ReactDOM from "react-dom";
interface OverlayProps {
maxHeightPx?: number;
handleMouseEvents?: (e: React.MouseEvent<HTMLDivElement>) => void;
left?: number;
top?: number;
}
interface OverlayState {
height: number;
}
export class Overlay extends React.Component<OverlayProps, OverlayState> {
constructor(props: OverlayProps) {
super(props);
this.state = {
height: null
};
}
public render() {
const overflowAmount = Math.max((this.props.top || 0) + this.state.height - window.innerHeight + 4, 0);
const style: React.CSSProperties = {
maxHeight: this.props.maxHeightPx || "100%",
left: this.props.left || 0,
top: (this.props.top - overflowAmount) || 0,
};
return <div
className="c-overlay"
style={style}
onMouseEnter={this.props.handleMouseEvents}
onMouseLeave={this.props.handleMouseEvents}>
{this.props.children}
</div>;
}
public componentDidMount() {
this.updateHeight();
}
public componentDidUpdate() {
this.updateHeight();
}
private updateHeight() {
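        // measure the rendered overlay and cache its height so that render()
        // can shift the overlay upward when it would overflow the window bottom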
let domElement = ReactDOM.findDOMNode(this);
if (domElement instanceof Element) {
let newHeight = domElement.getBoundingClientRect().height;
if (newHeight != this.state.height) {
this.setState({
height: newHeight,
});
}
}
}
}
|
/**
* Timus 1585 - Penguins
* Created by Darren on 14-7-9.
*/
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class Q1585 {
public static void main(String[] args) {
BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
try {
int n = Integer.parseInt(in.readLine());
int emperor = 0, little = 0, macaroni = 0;
// Count the number of each penguin
for (int i = 0; i < n; i++) {
char initial = in.readLine().charAt(0);
if (initial == 'E')
emperor++;
else if (initial == 'L')
little++;
else
macaroni++;
}
// Output the species with the most number
if (emperor > little && emperor > macaroni)
System.out.println("Emperor Penguin");
else if (little > emperor && little > macaroni)
System.out.println("Little Penguin");
else
System.out.println("Macaroni Penguin");
} catch (IOException e) {
e.printStackTrace();
}
}
} |
On Monday, Anderson Cooper throws his hat into the daytime talk ring with the premiere of Anderson -- a syndicated, hour-long show that he describes as "an adventure, with interesting people [and] compelling stories."
For his inaugural episode, the newsman sits down with Mitch and Janis Winehouse as they speak -- for the first time -- about their late daughter, Amy. "I think the real conversation about Amy Winehouse is the one we want to have with her family," Cooper says. "We all saw the struggle she was going through and I think everybody can relate to having somebody in their family who has a substance abuse problem."
Now, a clip of Anderson in action has been released -- giving you not only the first look at his new talk show, but also a first look at his exclusive interview with the Winehouse family!
Anderson premieres Monday, September 12th -- click here to find out when the show airs in your area! |
Vasopressin Secretion During Insulin-induced Hypoglycaemia: Exaggerated Response in People with Type 1 Diabetes Insulin hypoglycaemia causes a rise in plasma vasopressin concentrations in man and the rat, and vasopressin stimulates glucagon secretion and increases hepatic glucose output in man. Vasopressin has also been suggested to have an important synergistic role with corticotrophin-releasing factor in the release of adrenocorticotrophic hormone, and a counterregulatory role for the hormone has been proposed. As diminished anterior pituitary hormone responses to hypoglycaemia have been reported in diabetes mellitus, we studied the plasma vasopressin responses to insulin-induced hypoglycaemia in 10 patients with established Type 1 diabetes and 10 matched control subjects. Blood glucose fell from 4.5 ± 0.3 to 1.6 ± 0.1 mmol l⁻¹ (p < 0.001) in the diabetic group and from 4.6 ± 0.2 to 1.5 ± 0.2 mmol l⁻¹ (p < 0.001) in control subjects, with delayed blood glucose recovery in the diabetic patients. Plasma vasopressin rose in the diabetic patients from 0.9 ± 0.2 to 6.9 ± 2.8 pmol l⁻¹ (p < 0.001), a significantly greater rise (p < 0.05) than in the control subjects, 0.8 ± 0.1 to 2.4 ± 1.0 pmol l⁻¹ (p < 0.001). Plasma osmolalities remained unchanged and haemodynamic changes were similar in both groups. There is an exaggerated rise in plasma vasopressin concentrations in diabetic patients in response to insulin-induced hypoglycaemia. The putative mechanisms and potential significance of the exaggerated rise are discussed. |
package skadistats.clarity.platform;
import org.slf4j.Logger;
import skadistats.clarity.LogChannel;
import skadistats.clarity.io.bitstream.BitStream;
import skadistats.clarity.io.bitstream.BitStream32;
import skadistats.clarity.io.bitstream.BitStream64;
import skadistats.clarity.logger.PrintfLoggerFactory;
import skadistats.clarity.platform.buffer.CompatibleBuffer;
import skadistats.clarity.platform.buffer.UnsafeBuffer;
import skadistats.clarity.util.ClassReflector;
import skadistats.clarity.util.ThrowingRunnable;
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodType;
import java.nio.ByteBuffer;
import java.nio.MappedByteBuffer;
import java.util.function.Consumer;
import java.util.function.Function;
public class ClarityPlatform {
private static final Logger log = PrintfLoggerFactory.getLogger(LogChannel.runner);
private static final boolean VM_64BIT = System.getProperty("os.arch").contains("64");
private static final boolean VM_PRE_JAVA_9 = System.getProperty("java.specification.version","9").startsWith("1.");
    // volatile: both fields are lazily initialized via double-checked locking below
    private static volatile Function<byte[], BitStream> bitStreamConstructor;
    private static volatile Consumer<MappedByteBuffer> byteBufferDisposer;
public static Function<byte[], BitStream> getBitStreamConstructor() {
return bitStreamConstructor;
}
public static void setBitStreamConstructor(Function<byte[], BitStream> bitStreamConstructor) {
ClarityPlatform.bitStreamConstructor = bitStreamConstructor;
}
public static Consumer<MappedByteBuffer> getByteBufferDisposer() {
return byteBufferDisposer;
}
public static void setByteBufferDisposer(Consumer<MappedByteBuffer> byteBufferDisposer) {
ClarityPlatform.byteBufferDisposer = byteBufferDisposer;
}
public static BitStream createBitStream(byte[] data) {
if (bitStreamConstructor == null) {
synchronized (ClarityPlatform.class) {
if (bitStreamConstructor == null) {
bitStreamConstructor = determineBitStreamConstructor();
}
}
}
return bitStreamConstructor.apply(data);
}
public static void disposeMappedByteBuffer(MappedByteBuffer buf) {
if (byteBufferDisposer == null) {
synchronized (ClarityPlatform.class) {
if (byteBufferDisposer == null) {
byteBufferDisposer = determineByteBufferDisposer();
}
}
}
byteBufferDisposer.accept(buf);
}
private static Function<byte[], BitStream> determineBitStreamConstructor() {
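        // prefer the fast sun.misc.Unsafe-backed buffers when available,
        // falling back to a portable byte[]-backed implementation otherwise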
if (ClarityPlatform.VM_64BIT) {
if (UnsafeBuffer.available) {
return data -> new BitStream64(new UnsafeBuffer.B64(data));
} else {
return data -> new BitStream64(new CompatibleBuffer.B64(data));
}
} else {
if (UnsafeBuffer.available) {
return data -> new BitStream32(new UnsafeBuffer.B32(data));
} else {
return data -> new BitStream32(new CompatibleBuffer.B32(data));
}
}
}
private static Consumer<MappedByteBuffer> determineByteBufferDisposer() {
// see http://stackoverflow.com/questions/2972986/how-to-unmap-a-file-from-memory-mapped-using-filechannel-in-java
if (VM_PRE_JAVA_9) {
ClassReflector cleanerReflector = new ClassReflector("sun.misc.Cleaner");
// public void clean()
MethodHandle mhClean = cleanerReflector.getPublicVirtual(
"clean", MethodType.methodType(void.class));
ClassReflector bufReflector = new ClassReflector("sun.nio.ch.DirectBuffer");
// public Cleaner cleaner()
MethodHandle mhCleaner = bufReflector.getPublicVirtual(
"cleaner", MethodType.methodType(cleanerReflector.getCls()));
return buf -> runCleaner(
() -> mhClean.invoke(mhCleaner.invoke(buf)),
mhClean, mhCleaner
);
} else {
MethodHandle mhInvokeCleaner = UnsafeReflector.INSTANCE.getPublicVirtual(
"invokeCleaner",
MethodType.methodType(void.class, ByteBuffer.class));
return buf -> runCleaner(
() -> mhInvokeCleaner.invoke(buf),
mhInvokeCleaner
);
}
}
private static void runCleaner(ThrowingRunnable runnable, Object... nonNulls) {
for (Object nonNull : nonNulls) {
if (nonNull == null) {
log.error("Cannot run cleaner because method was not found. Please file an issue!");
return;
}
}
try {
runnable.run();
} catch (Throwable e) {
throw new RuntimeException(e);
}
}
}
|
// repository: skizze-hq/skizze
package server
import (
pb "datamodel/protobuf"
"errors"
"golang.org/x/net/context"
)
func (s *serverStruct) CreateSnapshot(ctx context.Context, in *pb.CreateSnapshotRequest) (*pb.CreateSnapshotReply, error) {
//err := s.manager.Save()
// FIXME: return a snapshot ID
status := pb.SnapshotStatus_FAILED
return &pb.CreateSnapshotReply{Status: &status}, errors.New("snapshots not supported yet")
}
|
The methacrylate and latex corrosion techniques were used to establish that the vascular system of the testes is based on one coherent principle in common domestic mammals. The cone-shaped Plexus pampiniformis consists of numerous venous rami, between 0.25 mm and 1.0 mm in thickness and forming a dense vascular network, which practically encase the spiral-shaped A. spermatica interna (cooling coil principle). The testicular veins and arteries in the Tunica albuginea constitute a somewhat voluminous layer of vessels for dissipation of heat, with rami branching off radially into the testicular parenchyma. Most of the arterial rami with radial penetration of the testicular parenchyma turn towards the surface in the mediastinum testis for tree-shaped ramification. The vascular rami are characterised by countless meanders, primarily for temperature control, pulse flattening for more regular and even blood flow, and blood reflex pumping. |
Biologic monitoring of exposure to organophosphorus pesticides in 195 Italian children.

One hundred ninety-five 6- to 7-year-old children who lived in the municipality of Siena (Tuscany, Italy) underwent biologic monitoring to evaluate urinary excretion of several alkylphosphates that are metabolites of organophosphorus pesticides. We evaluated dimethylphosphate (DMP), dimethylthiophosphate (DMTP), dimethyldithiophosphate (DMDTP), diethylphosphate (DEP), diethylthiophosphate (DETP), and diethyldithiophosphate (DEDTP). We obtained urine samples taken in the children's schools, and each sample was accompanied by a questionnaire about lifestyle and dietary habits. We found DMP and DMTP in detectable concentrations in the greatest number of samples (96 and 94%, respectively). The DMP values were geometric mean (GM) 116.7, with a range of 7.4-1,471.5 nmol/g creatinine. The corresponding DMTP values were GM 104.3 (GSD 2.8), with a range of 4.0-1,526.0 nmol/g creatinine. The DMDTP, DEP, DETP, and DEDTP concentrations were GM 14.1 (GSD 3.0), range 3.3-754.6 nmol/g creatinine, in 34% of the children; GM 33.2 (GSD 2.4), range 5.1-360.1 nmol/g creatinine, in 75% of the children; GM 16.0 (GSD 2.9), range 3.1-284.7, in 48% of the children; and GM 7.7 (GSD 2.1), range 2.3-140.1, in 12% of the children, respectively. The significant variable for urinary excretion of these metabolites in children was pest control operations performed inside or outside the house in the preceding month; the presence of a vegetable garden near the house was rarely significant. The urinary excretion of alkylphosphates in children was significantly higher than in a group of the adult population resident in the same province.

The determination of pesticide residues or metabolites in biologic fluids of the general population has recently been the subject of many articles. Exposure of the general population to pesticides is due to residues in food and drink (dietary exposure), atmospheric dispersal of aerosols and vapors (respiratory exposure), and skin contact with contaminated articles (cutaneous exposure). Skin contamination may sometimes lead to oral nondietary exposure. Pesticide residues in indoor environments are not subject to degradation by sun, rain, and soil microbes and are therefore more persistent than in the environment at large. Children's exposure to pesticides is potentially greater than that of adults for two reasons. First, depending on their age, children may spend much of their time on the floor, where they may come into contact with dust and soil. A substantial quantity of contaminated matter may be ingested through fingers and other objects placed in the mouth. Studies reported by U.S. Environmental Protection Agency investigators estimate that children have a 12-times greater health risk than adults associated with the ingestion of dust and soil. Second, dietary exposure to pesticide residues is also potentially higher for children: in relation to body weight, children drink more water, milk, and fruit juice than adults, and consume a large quantity of fresh foods. Organochlorine compounds were the first to be studied in the general population because of their widespread use, persistence, and effects on health. However, in the last 20 years there has been a considerable increase in the use of less persistent compounds, such as organophosphorus insecticides, which have greater acute toxicity.
The acute effects of the organophosphorus insecticides are well known, but the chronic effects are not well characterized, and the available data are mainly for adults. Little is known about chronic toxicity in children, and no studies have been published on the neurotoxic effects of low levels of children's exposure. Few studies regarding biologic monitoring of exposure of the general population to OPs by urinary alkylphosphate assay have been published, and only one examines children. In that study, urinary excretion of DMTP in children living in families in which at least one member performed pest control operations with OPs was compared with that of a reference group consisting of children who lived far from agricultural environments and who had no member of the family working in agriculture. The present study evaluated urinary excretion of six alkylphosphates in 195 children 6-7 years of age who lived in Siena, a hill town in Tuscany (Italy). The collection of urine samples was accompanied by a questionnaire on lifestyle and dietary habits. We had three specific aims for the study. The first aim was to compare urinary excretion of these metabolites in the general infantile and adult populations. The second aim was to determine whether dietary habits or lifestyle influence the urinary excretion of alkylphosphates by children in a statistically significant manner. The third aim was to determine whether children who ate one meal/day (lunch) at the school mensa, where all plant products (vegetables, fruit, cereals, legumes, vegetable oil, etc.) served were organic, had lower urinary excretion of alkylphosphates than those who ate lunch at home. "Organic" is defined as not treated with pesticides except copper sulfate and sulfur. Methods. Study design and population recruitment. In May 1995, we obtained 195 spot urine samples from 195 children 6-7 years of age who lived in Siena, a hill town in southwest Tuscany (central Italy). Siena has practically no major industries, and the population consists mainly of bank, hospital, and university employees; shopkeepers; and professionals. To obtain the population sample, we held preliminary meetings with the parents of children enrolled in the first and second classes of all Siena elementary schools. At these meetings, we explained the study and the parents were given a questionnaire. The meeting was mainly spent explaining how to fill in the form and answering questions. The parents were then given the date that urine samples would be taken at the schools. Parents who agreed to participate had to return the completed questionnaire on the day of sampling. Urine samples were only obtained from children who returned the form. Some of the children ate one meal/day (lunch) at the school mensa; others, who did not have school in the afternoon, ate all meals at home. Urine sampling. On the day of sampling, health personnel went to the schools, collected the questionnaires, and gave each of the children a polyethylene container for the urine sample. Urine samples were produced between 0900 and 1200 hr. The urine was immediately refrigerated and was frozen as soon as it reached the laboratory. Compilation of the questionnaire. The questionnaire provided the parents' informed consent to their child's enrollment in the study.
The details asked by the questionnaire concerned lifestyle and dietary habits, as follows:
* Surname
* Name
* Sex
* Date of birth
* Weight
* Height
* Address
* Telephone number
* School
* Class
* Father's occupation
* Mother's occupation
* Illnesses and hospitalization of child
* Do you have a garden or vegetable garden?
* Do you keep ornamental plants in the house?
* Do you buy cut flowers for the house?
* Do you keep domestic animals in the house?
* Do you use pesticides inside or outside the house?
* Food and drink ingested the day before urine collection
If the questionnaire was incomplete, the parents were contacted by telephone to obtain the missing information. Analysis of alkylphosphate metabolites. We analyzed alkylphosphates in the urine samples by gas chromatography with flame photometric detection after derivatization with pentafluorobenzyl bromide and purification on SPE columns with CN-bound phase. Table 1 shows the recovery, reproducibility, and detection limits of the six compounds (data from Aprea et al.; recovery was evaluated at an added concentration of 62.5 µg/L; the coefficient of variation refers to the whole analysis, with the six alkylphosphates added to 10 aliquots of the same urine sample, including purification; the detection limit was calculated on the basis of a signal 3 times the background noise; the data are for the sodium and potassium salts of the six alkylphosphates). The calibration curves, obtained by adding the six alkylphosphates to urine, were linear (r > 0.990) for all compounds in the concentration interval between the detection limit and 1,500 µg/L. The analytical results are expressed in nanomoles per gram creatinine. The creatinine assay was performed by the Larsen procedure with a precision of 3.1%. Urinary creatinine concentrations formed a normal distribution in the range of 0.17-1.93 g/L. Statistical analysis. Many urine samples had concentrations below detection limits for some metabolites. Our preliminary analysis therefore consisted of calculating the positivity percentages (% pos), i.e., the percentage of samples above the detection limit for each analyte. Statistical analysis of the samples was then carried out, substituting a value of half the detection limit for nondetectable analytes. We used the Kolmogorov-Smirnov test to check the distribution of samples for the six alkylphosphates; we found a positive asymmetric distribution, which became normal after log transformation. Parametric analysis (multiple regression) was therefore used for subsequent comparisons. We used the Bonferroni/Dunn post hoc test (multiple comparisons) to examine whether the mean values of the dependent variables differed from each other for each level of the factors. Statistical significance was set at α = 0.05. We used some of the information obtained with the questionnaire (diet, occupation of parents, height, weight, and height/weight ratio) for qualitative classification of the population. Age was not considered because the children differed in age by no more than 1 year. The variables considered for subsequent statistical analysis were sex, the presence of a vegetable garden or garden near the house, ornamental plants or cut flowers (taken together) or pets in the house, pest control operations performed in the preceding month, and whether the child ate lunch at school on the day before sampling. The influence of these variables was evaluated for single alkylphosphates and for the sums of the dimethyl (DMP + DMTP + DMDTP), diethyl (DEP + DETP + DEDTP), and all metabolites, expressed in nanomoles per gram creatinine. We named these sums methyl, ethyl, and sum, respectively. To calculate these sums, analytes below the detection limit were counted as a value of half the detection limit. None of the subjects had all of the metabolites below the detection limit.
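As a minimal sketch of the geometric-mean summary just described, assuming Go and only the standard library, the following snippet computes a GM and GSD with half-detection-limit substitution for non-detects; the concentrations and detection limit are illustrative, not data from this study.
package main
import (
"fmt"
"math"
)
// geoStats returns the geometric mean and geometric standard deviation of
// concentrations (nmol/g creatinine). math.NaN() marks a non-detect; any
// non-detect or value below the detection limit is replaced by half the
// detection limit, mirroring the treatment described in the text.
func geoStats(conc []float64, detectionLimit float64) (gm, gsd float64) {
logs := make([]float64, 0, len(conc))
for _, c := range conc {
if math.IsNaN(c) || c < detectionLimit {
c = detectionLimit / 2 // half-LOD substitution
}
logs = append(logs, math.Log(c))
}
var mean float64
for _, l := range logs {
mean += l
}
mean /= float64(len(logs))
var variance float64
for _, l := range logs {
variance += (l - mean) * (l - mean)
}
variance /= float64(len(logs) - 1) // sample variance of the log values
return math.Exp(mean), math.Exp(math.Sqrt(variance))
}
func main() {
// Hypothetical DMTP-like values; NaN marks a non-detect.
conc := []float64{104.3, 56.0, math.NaN(), 230.5, 12.7}
gm, gsd := geoStats(conc, 4.0)
fmt.Printf("GM = %.1f, GSD = %.2f nmol/g creatinine\n", gm, gsd)
}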
Results. Study participation was approximately 67%: of the 291 questionnaires distributed to parents, only 195 were completed and returned. The main reasons for low participation were forgetting to complete or return the form, absence from school, and lack of interest. Of the 195 children, 103 were girls (53%). None of the children had any particular diseases; there were two cases of allergy (one asthma and the other glomerulonephritis). The children were between 6 and 7 years of age; 119 children (61%) were 6 years of age. The most common parental occupation was public servant (31 and 45% for mother and father, respectively), although a small percentage of parents were farmers (1 and 2% for mother and father, respectively). The latter occupation could result in paraoccupational (take-home) exposures of OP pesticides to children. In 115 cases (59.0%), the family had a vegetable garden or garden. In 170 cases the houses contained ornamental plants and/or cut flowers, and in 46 cases (23.6%) the houses contained pets. In 29 cases (14.9%), pest-control operations had been performed in the previous month. The use of OPs was not declared in any questionnaire. Seventy-two children (36.9%) ate lunch at the mensa. Urinary excretion of alkylphosphates (in nanomoles per gram creatinine) is shown in Table 2. The geometric mean (GM) of DMP and DMTP was 3.5-15.2 times higher than that of the other metabolites. The % pos was approximately 95% for these two metabolites, 75% for DEP, and 48 and 12% for DETP and DEDTP, respectively. The highest values of DMTP and DMDTP in the concentration ranges belonged to the same boy. The boy's father was a cook and the mother a white-collar worker. Their house had a vegetable garden, and they kept ornamental plants and cut flowers in the house, as well as pets. They did not state that they used pesticides. The boy did not eat at the mensa, and the day before sampling he consumed meat, fish, bread, pasta, smallgoods, cakes, fruit juice, tea, and bottled mineral water. The highest values of DMP, DEP, DETP, and DEDTP in the concentration ranges were in four boys and in one girl, whose parents were mostly white-collar workers. In all of the cases with the highest values, the house had a vegetable garden or contained ornamental plants/cut flowers or pets. In one case the occasional flea treatment of pets was stated; however, the product did not contain OPs. Two boys ate at the mensa and the other two went home for lunch. All of these children drank bottled mineral water and ate meat, pasta, bread, cheese, vegetables, fresh fruit, and fruit juice. Comparison (Student's t-test) of the data in Table 2 was made with that of a group of 124 adults in the general population of southwest Tuscany; the adult values included a DETP range of 3.4-97.6 (GSD 2) and a DEDTP GM of 13.7 (GSD 1.9) with a range of 6.3-54.9 nmol/g creatinine. The % pos observed in the adults were 87, 99, 48, 82, 73, and 7%, respectively, which correlated well with those of the children.
The results of multiple regression, based on a model that used sex, mensa, vegetable garden, plants and flowers, pest control, and pets as independent variables, are shown in Table 3. The model was significant for DMTP, DMDTP, methyl, and sum, indicating that, together, these variables influenced urinary excretion of the metabolites considered. The R2 values were in the range of 0.044-0.080, and the variance explained by the models was in the range of 4.4-8.0%. GM values and % pos of single analytes and the sums, divided according to variable, are given in Tables 4 and 5. For DMP, DEP, DEDTP, and ethyl, none of the variables was significantly related to urinary concentration (Bonferroni/Dunn post hoc test). The pest-control variable, however, was significantly related to the urinary excretion of DMTP, DETP, methyl, and sum, and the variable garden was related to the urinary excretion of DMDTP (Bonferroni/Dunn post hoc test). Coye et al. reported that residues of DMP and DEP are directly associated with exposure to OPs, whereas DMDTP and DEDTP are less directly associated with exposure because they break down rapidly to the corresponding monosulfates (DMTP and DETP) and phosphates (DMP and DEP). This may explain the low % pos of the two disulfate metabolites (DMDTP and DEDTP) in the urine of the 195 children. Urinary alkylphosphates can be detected in urine at exposure levels much lower than those affecting cholinesterase activity. These metabolites are quickly eliminated, and maximum excretion usually occurs within 24 hr of exposure. Because of this rapid excretion, the data on food and drink consumption obtained with the questionnaire regard the day before sampling. There have been few studies on biologic monitoring of exposure of the general population to OPs based on assays of urinary alkylphosphates, and only one of the studies considered children. The Loewenherz et al. study was conducted in the state of Washington and used biologic monitoring to determine the exposure of children of farm workers to OPs. DMTP emerged as the biologic indicator of exposure. The urinary DMTP values observed in the present study are lower than those of the pest-control operators' children, but slightly above those of the reference child population. The high percentage of positive samples in the present study is due to the detection limit of the analytical method used. A previous study based on analysis of 5,976 samples obtained in the period 1976-1980 from adults and children living in 64 areas of the United States (including the second National Health and Nutrition Examination Survey sampling areas) found lower alkylphosphate positivity percentages; however, the detection limits were 20 µg/L. If we exclude concentrations < 20 µg/L from the results of the present study, the % pos become 26.0% for DMP, 28.0% for DMTP, 2.0% for DMDTP, 0.5% for DEP, 2.0% for DETP, and 0.5% for DEDTP. These percentages are higher than those published for dimethyl metabolites (12% for DMP, 6% for DMTP, and < 1% for DMDTP) and lower than those for diethyl metabolites (7% for DEP, 6% for DETP, and < 1% for DEDTP). The difference is presumably due to the greater use of dimethyl OPs than diethyl OPs in Italy. The results of the present study are significantly higher than those obtained with a population of 124 adults who lived in southwest Tuscany and were sampled in the same period. There may be a number of reasons for this difference. Exposure to pesticide residues in food
may be greater for children than for adults; for example, children tend to eat more fresh products, and they drink more water, milk, and fruit juice than adults in proportion to body weight. The food eaten by the children the day before sampling confirms this observation: approximately 85, 43, 66, 51, 41, and 36%, respectively, had eaten fresh fruit, milk, cooked vegetables, fruit juice, and infusions such as tea. The observation is nevertheless exclusively qualitative, as these foods may or may not be contaminated by different types of pesticides. Consumption of food containing OPs is nevertheless a potential source of human exposure. In a recently published study, daily dietary exposure to chlorpyrifos, diazinon, and malathion in 1990 was evaluated in approximately 120,000 adults in the United States. Women's exposure to chlorpyrifos was GM 0.8 (GSD 1.47), with a range of 0.12-5.6 µg/day; exposure could not be evaluated for the other two pesticides because many samples were below detection limits. There do not seem to be any similar studies in children. The results of the present study showed that when one meal a day was eaten at the school mensa, where organically grown plant products were served, urinary excretion of the six alkylphosphates was not affected (Tables 4 and 5). Other types of exposure may be greater for children than for adults. For example, children may be more exposed than adults to pesticide residues in the house because they play on the floor and put things in their mouths (oral nondietary and cutaneous exposure). Pesticides may be present in house dust (e.g., due to the use of pesticides in the house or in the garden), on dirt brought into the house on shoes or by pets, or on cut flowers and ornamental plants. The results of the present study confirm that pest control operations performed inside or outside the house in the preceding month and the presence of a vegetable garden near the house affect urinary excretion of methyl alkylphosphates in a significant manner. Information from the questionnaires does not indicate the use of OP insecticides at houses with gardens. However, houses with gardens are usually near other houses with gardens, where OP pesticides could be used. These compounds are often used in gardens with flowers such as roses. Pesticides are used for domestic purposes in approximately 90% of homes in the United States. One of the most widely used products is chlorpyrifos, which has replaced compounds such as aldrin, dieldrin, and chlordane. Its main use is against termites, and in many cases spraying is carried out by residents of the homes. Two apartments were evaluated for the accumulation of chlorpyrifos on toys after the safety period. The compound distributes in two phases, may accumulate on toys and other surfaces such as pillows, and may be a considerable source of exposure for 2 weeks after application. The total nondietary dose of chlorpyrifos may reach 208 µg/kg/day in children 3-6 years of age. Potential respiratory exposure was negligible, whereas dermal and oral nondietary doses were 39 and 61% of the total dose. For children who often put their fingers in their mouths, the daily nondietary dose may reach 356 µg/kg/day.
A study carried out in the state of Washington examined whether children between 1 and 6 years of age who belonged to farming families and who lived in farming areas were more exposed to pesticides than children whose parents did not work in agriculture and who did not live in farming areas (reference families). House dust and soil samples were obtained where the children played. The samples were analyzed for four OPs commonly used on fruit trees (azinphos-methyl, chlorpyrifos, parathion, and phosmet). Pesticide concentrations were greater in house dust than in soil samples in all cases. Levels ranged from nondetectable to 930 ng/g in soil and from nondetectable to 17,000 ng/g in dust of houses near orchards or in which the parents worked in agriculture. All four compounds were detectable in 62% of dust samples, and two-thirds of the houses near orchards contained at least one insecticide at concentrations > 1,000 ng/g. Residues were detected less often in reference houses, and concentrations were always < 1,000 ng/g. These results showed that children from farming families have a higher potential exposure than children from nonfarming families. Azinphos-methyl, which is only registered for use in agriculture, was found regularly in the samples, suggesting widespread exposure. Based on urinary concentrations of alkylphosphates, it is difficult to estimate the daily dose of OPs to which the children were exposed, because the same metabolites may be derived from hydrolytic cleavage of various compounds, which may have very different physicochemical, toxicologic, and metabolic characteristics, although they are all phosphoric esters. The problem is further complicated by the fact that absorption may also be due to cutaneous, oral, and respiratory exposure. Measures of urinary alkylphosphates can therefore only be used as a qualitative indication of exposure to OPs. The present results seem quite significant, but our statistical sample was probably too small to evaluate all of the variables considered. Moreover, the classes considered for each variable consisted of different numbers of samples, which may also partially reduce the validity of the significance levels. It seems worthwhile to extend the study to the whole Italian population for confirmation of our findings and to detect differences between areas. In conclusion, the presence of these metabolites in the biologic fluids of adults and children is an excellent indicator of widespread environmental contamination and is more sensitive than evaluations of contamination of environmental matrices (air, water, food, drinks, etc.). In fact, analysis of environmental matrices and food sometimes provides results below detection limits, leading to the erroneous conclusion that these substances are not present in the environment and are therefore not dangerous for humans. The fact that detection limits are not reached for single matrices does not mean that OPs are absent or that they may not occur in increasing concentrations in living organisms at progressively higher positions in the food chain. Because humans do not have a direct relation with a single matrix, but rather with all environmental compartments, they may act as concentrator-accumulators. Humans can therefore be regarded as one of the best indicators of diffuse contamination.
/*
* Unless explicitly stated otherwise all files in this repository are licensed under the Apache-2.0 License.
* This product includes software developed at Datadog (https://www.datadoghq.com/).
* Copyright 2019-Present Datadog, Inc.
*/
// Code generated by OpenAPI Generator (https://openapi-generator.tech); DO NOT EDIT.
package datadog
import (
"encoding/json"
)
// ProcessSummariesMetaPage Paging attributes.
type ProcessSummariesMetaPage struct {
// The cursor used to get the next results, if any. To make the next request, use the same parameters with the addition of the `page[cursor]` parameter.
After *string `json:"after,omitempty"`
// Number of results returned.
Size *int32 `json:"size,omitempty"`
// UnparsedObject contains the raw value of the object if there was an error when deserializing into the struct
UnparsedObject map[string]interface{} `json:"-"`
}
// NewProcessSummariesMetaPage instantiates a new ProcessSummariesMetaPage object
// This constructor will assign default values to properties that have it defined,
// and makes sure properties required by API are set, but the set of arguments
// will change when the set of required properties is changed
func NewProcessSummariesMetaPage() *ProcessSummariesMetaPage {
this := ProcessSummariesMetaPage{}
return &this
}
// NewProcessSummariesMetaPageWithDefaults instantiates a new ProcessSummariesMetaPage object
// This constructor will only assign default values to properties that have it defined,
// but it doesn't guarantee that properties required by API are set
func NewProcessSummariesMetaPageWithDefaults() *ProcessSummariesMetaPage {
this := ProcessSummariesMetaPage{}
return &this
}
// GetAfter returns the After field value if set, zero value otherwise.
func (o *ProcessSummariesMetaPage) GetAfter() string {
if o == nil || o.After == nil {
var ret string
return ret
}
return *o.After
}
// GetAfterOk returns a tuple with the After field value if set, nil otherwise
// and a boolean to check if the value has been set.
func (o *ProcessSummariesMetaPage) GetAfterOk() (*string, bool) {
if o == nil || o.After == nil {
return nil, false
}
return o.After, true
}
// HasAfter returns a boolean if a field has been set.
func (o *ProcessSummariesMetaPage) HasAfter() bool {
if o != nil && o.After != nil {
return true
}
return false
}
// SetAfter gets a reference to the given string and assigns it to the After field.
func (o *ProcessSummariesMetaPage) SetAfter(v string) {
o.After = &v
}
// GetSize returns the Size field value if set, zero value otherwise.
func (o *ProcessSummariesMetaPage) GetSize() int32 {
if o == nil || o.Size == nil {
var ret int32
return ret
}
return *o.Size
}
// GetSizeOk returns a tuple with the Size field value if set, nil otherwise
// and a boolean to check if the value has been set.
func (o *ProcessSummariesMetaPage) GetSizeOk() (*int32, bool) {
if o == nil || o.Size == nil {
return nil, false
}
return o.Size, true
}
// HasSize returns a boolean if a field has been set.
func (o *ProcessSummariesMetaPage) HasSize() bool {
if o != nil && o.Size != nil {
return true
}
return false
}
// SetSize gets a reference to the given int32 and assigns it to the Size field.
func (o *ProcessSummariesMetaPage) SetSize(v int32) {
o.Size = &v
}
func (o ProcessSummariesMetaPage) MarshalJSON() ([]byte, error) {
toSerialize := map[string]interface{}{}
if o.UnparsedObject != nil {
return json.Marshal(o.UnparsedObject)
}
if o.After != nil {
toSerialize["after"] = o.After
}
if o.Size != nil {
toSerialize["size"] = o.Size
}
return json.Marshal(toSerialize)
}
func (o *ProcessSummariesMetaPage) UnmarshalJSON(bytes []byte) (err error) {
raw := map[string]interface{}{}
all := struct {
After *string `json:"after,omitempty"`
Size *int32 `json:"size,omitempty"`
}{}
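// Try a typed decode first; if the payload does not match the expected shape,
// fall back to storing the raw JSON in UnparsedObject instead of failing.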
err = json.Unmarshal(bytes, &all)
if err != nil {
err = json.Unmarshal(bytes, &raw)
if err != nil {
return err
}
o.UnparsedObject = raw
return nil
}
o.After = all.After
o.Size = all.Size
return nil
}
type NullableProcessSummariesMetaPage struct {
value *ProcessSummariesMetaPage
isSet bool
}
func (v NullableProcessSummariesMetaPage) Get() *ProcessSummariesMetaPage {
return v.value
}
func (v *NullableProcessSummariesMetaPage) Set(val *ProcessSummariesMetaPage) {
v.value = val
v.isSet = true
}
func (v NullableProcessSummariesMetaPage) IsSet() bool {
return v.isSet
}
func (v *NullableProcessSummariesMetaPage) Unset() {
v.value = nil
v.isSet = false
}
func NewNullableProcessSummariesMetaPage(val *ProcessSummariesMetaPage) *NullableProcessSummariesMetaPage {
return &NullableProcessSummariesMetaPage{value: val, isSet: true}
}
func (v NullableProcessSummariesMetaPage) MarshalJSON() ([]byte, error) {
return json.Marshal(v.value)
}
func (v *NullableProcessSummariesMetaPage) UnmarshalJSON(src []byte) error {
v.isSet = true
return json.Unmarshal(src, &v.value)
}
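// Example usage (a minimal sketch with a hypothetical payload; fmt is assumed
// to be imported by the caller):
//
// page := NewProcessSummariesMetaPage()
// if err := page.UnmarshalJSON([]byte(`{"after":"abc123","size":25}`)); err != nil {
// panic(err) // only malformed JSON reaches here; mismatched shapes land in UnparsedObject
// }
// if page.HasSize() {
// fmt.Println(page.GetSize()) // 25
// }
// out, _ := page.MarshalJSON()
// fmt.Println(string(out)) // {"after":"abc123","size":25}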
Improved Performance of Polymeric Light-Emitting Diodes with an Electron Blocking Layer In this paper, we describe a new approach for fabricating highly efficient polymeric light-emitting diodes (PLEDs). In the device configuration ITO/HTL/EBL/EML/BaF2:Ca:Al (ITO: indium tin oxide, HTL: hole transport layer, EBL: electron blocking layer, EML: emitting layer), the EBL contains cross-linkable moieties in order to render the layer insoluble during deposition of an additional emitting-polymer layer. Devices with the EBL exhibit strong blue emission and higher efficiencies than devices without it. The synthesis, characterization, device fabrication, and electroluminescence (EL) properties of the devices are presented.
Baseline and Temporal Changes in Sensitivity of Zymoseptoria tritici Isolates to Benzovindiflupyr in Oregon, U.S.A., and Cross-Sensitivity to Other SDHI Fungicides. Zymoseptoria tritici is the causal agent of Septoria tritici blotch (STB), a disease of wheat (Triticum aestivum) that results in significant yield loss worldwide. Z. tritici's life cycle, reproductive system, effective population size, and gene flow put it at high likelihood of developing fungicide resistance. Succinate dehydrogenase inhibitor (SDHI) fungicides (FRAC code 7) were not widely used to control STB in the Willamette Valley until 2016. Field isolates of Z. tritici collected in the Willamette Valley at dates spanning the introduction of SDHIs (2015 to 2017) were screened for sensitivity to four SDHI active ingredients: benzovindiflupyr, penthiopyrad, fluxapyroxad, and fluindapyr. Fungicide sensitivity changes were determined from EC50 values, the fungicide concentration at which fungal growth is decreased by 50%. The benzovindiflupyr EC50 values increased significantly, indicating a reduction in sensitivity, following the adoption of SDHI fungicides in Oregon (P < 0.0001). Additionally, significant reduction in cross-sensitivity among SDHI active ingredients was also observed, with a moderate and significant relationship between penthiopyrad and benzovindiflupyr (P = 0.0002) and a weak relationship between penthiopyrad and fluxapyroxad (P = 0.0482). No change in cross-sensitivity was observed with fluindapyr, which has not yet been labeled in the region. The results document a decrease in SDHI sensitivity in Z. tritici isolates following the introduction of the active ingredients to the Willamette Valley. The reduction in cross-sensitivity observed between SDHI active ingredients highlights the notion that careful consideration is required to manage fungicide resistance and suggests that within-group rotation is insufficient for resistance management.
/*
* Copyright 2020-2022 <NAME>, the original author or authors.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*/
package net.croz.nrich.security.csrf.webflux.filter;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import net.croz.nrich.security.csrf.api.service.CsrfTokenManagerService;
import net.croz.nrich.security.csrf.core.constants.CsrfConstants;
import net.croz.nrich.security.csrf.core.exception.CsrfTokenException;
import net.croz.nrich.security.csrf.core.model.CsrfExcludeConfig;
import net.croz.nrich.security.csrf.core.util.CsrfUriUtil;
import net.croz.nrich.security.csrf.webflux.holder.WebFluxCsrfTokenKeyHolder;
import org.springframework.web.server.ServerWebExchange;
import org.springframework.web.server.WebFilter;
import org.springframework.web.server.WebFilterChain;
import org.springframework.web.server.WebSession;
import reactor.core.publisher.Mono;
import java.util.List;
@Slf4j
@RequiredArgsConstructor
public class CsrfWebFilter implements WebFilter {
private final CsrfTokenManagerService csrfTokenManagerService;
private final String tokenKeyName;
private final String initialTokenUrl;
private final String csrfPingUrl;
private final List<CsrfExcludeConfig> csrfExcludeConfigList;
@Override
public Mono<Void> filter(ServerWebExchange exchange, WebFilterChain chain) {
log.debug("csrfFilter.filter()");
String pathWithinApplication = exchange.getRequest().getPath().pathWithinApplication().value();
Mono<Void> result = chain.filter(exchange);
if (CsrfConstants.EMPTY_PATH.equals(pathWithinApplication)) {
return result;
}
String requestUri = uri(exchange);
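// Resolve the session, failing for CSRF-protected URIs when none exists; then
// branch between excluded URIs, the ping URL, and regular token validation.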
return exchange.getSession().switchIfEmpty(Mono.defer(() -> returnErrorIfCsrfProtectedUri(requestUri))).flatMap(webSession -> {
Mono<Void> csrfActionResult = result;
if (CsrfUriUtil.excludeUri(csrfExcludeConfigList, requestUri)) {
updateLastApiCallAttribute(webSession);
}
else if (requestUri.endsWith(csrfPingUrl)) {
csrfActionResult = handleCsrfPingUrl(exchange, webSession).flatMap(value -> result);
}
else {
csrfTokenManagerService.validateAndRefreshToken(new WebFluxCsrfTokenKeyHolder(exchange, webSession, tokenKeyName, CsrfConstants.CSRF_CRYPTO_KEY_NAME));
updateLastApiCallAttribute(webSession);
}
return csrfActionResult.doOnSuccess(value -> addInitialToken(exchange, webSession));
});
}
private void addInitialToken(ServerWebExchange exchange, WebSession webSession) {
if (uri(exchange).endsWith(initialTokenUrl)) {
String token = csrfTokenManagerService.generateToken(new WebFluxCsrfTokenKeyHolder(exchange, webSession, tokenKeyName, CsrfConstants.CSRF_CRYPTO_KEY_NAME));
exchange.getAttributes().put(CsrfConstants.CSRF_INITIAL_TOKEN_ATTRIBUTE_NAME, token);
updateLastApiCallAttribute(webSession);
}
}
private Mono<Void> handleCsrfPingUrl(ServerWebExchange exchange, WebSession webSession) {
Long lastRealApiRequestMillis = webSession.getAttribute(CsrfConstants.NRICH_LAST_REAL_API_REQUEST_MILLIS);
log.debug(" lastRealApiRequestMillis: {}", lastRealApiRequestMillis);
if (lastRealApiRequestMillis != null) {
long deltaMillis = System.currentTimeMillis() - lastRealApiRequestMillis;
log.debug(" deltaMillis: {}", deltaMillis);
long maxInactiveIntervalMillis = webSession.getMaxIdleTime().toMillis();
log.debug(" maxInactiveIntervalMillis: {}", maxInactiveIntervalMillis);
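// If the session has been idle longer than its maximum inactive interval,
// invalidate it and tell the client to stop pinging.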
if ((maxInactiveIntervalMillis > 0) && (deltaMillis > maxInactiveIntervalMillis)) {
return webSession.invalidate().doOnSuccess(value -> {
log.debug(" sessionJustInvalidated");
exchange.getResponse().getHeaders().add(CsrfConstants.CSRF_PING_STOP_HEADER_NAME, "stopPing");
log.debug(" sending csrf stop ping header in response");
updateLastActiveRequestMillis(exchange, deltaMillis);
});
}
}
updateLastActiveRequestMillis(exchange, 0L);
return Mono.fromRunnable(() -> csrfTokenManagerService.validateAndRefreshToken(new WebFluxCsrfTokenKeyHolder(exchange, webSession, tokenKeyName, CsrfConstants.CSRF_CRYPTO_KEY_NAME)));
}
private void updateLastApiCallAttribute(WebSession webSession) {
webSession.getAttributes().put(CsrfConstants.NRICH_LAST_REAL_API_REQUEST_MILLIS, System.currentTimeMillis());
}
private String uri(ServerWebExchange exchange) {
return exchange.getRequest().getURI().toString();
}
private void updateLastActiveRequestMillis(ServerWebExchange exchange, long deltaMillis) {
exchange.getResponse().getHeaders().add(CsrfConstants.CSRF_AFTER_LAST_ACTIVE_REQUEST_MILLIS_HEADER_NAME, Long.toString(deltaMillis));
}
private Mono<WebSession> returnErrorIfCsrfProtectedUri(String requestUri) {
if (CsrfUriUtil.excludeUri(csrfExcludeConfigList, requestUri) || requestUri.endsWith(csrfPingUrl)) {
return Mono.empty();
}
// The session doesn't exist, but the request must not be passed through unchecked.
return Mono.error(new CsrfTokenException("Can't validate token. There is no session."));
}
}
1. Field of the Invention
The present invention relates generally to an optical transmitter-receiver, and more particularly, to an optical transmitter-receiver in which an optical transmitter and an optical receiver are interconnected such that optical transmission is possible.
2. Description of the Background Art
Optical transmission, in which information is transmitted with light modulated by the information, has been expected to be widely used for future high-speed communication networks due to its low-loss and wideband properties. For example, an optical transmitter-receiver for optically transmitting an electrical signal having a high frequency (hereinafter referred to as a first optical transmitter-receiver), and an optical transmitter-receiver for optically transmitting a baseband signal (hereinafter referred to as a second optical transmitter-receiver), have been proposed. The two optical transmitter-receivers will be specifically described referring to the drawings.
Description is now made of the first optical transmitter-receiver. In recent years, wireless services such as portable telephones and the PHS (Personal Handyphone System) have expanded rapidly. Therefore, utilization of still higher frequencies has been examined. A micro-cell system or a pico-cell system utilizing a millimeter-wave band of approximately 30 GHz to 300 GHz has been examined. In such a cell system, a signal having a high frequency such as a millimeter-wave band is radiated from a large number of base stations connected to a control station, so that a wireless service is provided. The cell system has various advantages. First, a signal in the millimeter-wave band does not easily affect adjacent cells adversely, owing to its propagation loss in space. Second, the signal in the millimeter-wave band has a short wavelength, so that an antenna or the like set in the control station or the like can be miniaturized. Third, the signal in the millimeter-wave band has a high frequency, so that the transmission capacity can be increased. Consequently, it may be possible to provide a high-speed transmission service which is difficult to realize in a conventional wireless service.
In a wireless communication system to which such a cell system is applied, however, a large number of base stations are set up throughout a town. Therefore, the base station must be small in size and low in cost. A first optical transmitter-receiver employing a so-called subcarrier optical transmission system, which has been extensively researched and developed in recent years, may, in some cases, be applied to the wireless communication system. The subcarrier optical transmission system is described in detail in "Microwave and millimeter-wave fiber optic technologies for subcarrier transmission systems" (Hiroyo Ogawa, IEICE Transactions on Communications, Vol. E76-B, No. 9, pp. 1078-1090, September 1993), for example.
In the subcarrier transmission system, the intensity of a main carrier, which is typically unmodulated light, is modulated by a modulated signal so that an optical signal is obtained; in the modulated signal, a subcarrier is modulated by information such as a voice signal and/or an image signal. The change in the intensity of the optical signal uniquely corresponds to the change in the amplitude, the frequency, or the phase of the modulated signal. In the subcarrier optical transmission system, an optical fiber, which is very low in loss, is used. When the modulated signal has a millimeter-wave band, therefore, the modulated signal can be transmitted to a remote location as it is.
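As a simplified single-tone sketch (generic symbols, not the patent's reference numerals), amplitude-modulating an optical carrier of frequency ν0 by a subcarrier of frequency fsc with modulation index m yields exactly the carrier-plus-sidebands spectrum that this description relies on:

$$E(t) = E_0\left[1 + m\cos(2\pi f_{sc}t)\right]\cos(2\pi\nu_0 t) = E_0\cos(2\pi\nu_0 t) + \frac{mE_0}{2}\cos\big(2\pi(\nu_0 + f_{sc})t\big) + \frac{mE_0}{2}\cos\big(2\pi(\nu_0 - f_{sc})t\big)$$

The three terms are the main carrier at ν0 and the upper and lower sidebands at ν0 ± fsc. Because square-law photodetection of the main carrier together with one sideband produces a beat at fsc, while one sideband alone carries the baseband information directly, an optical filter that separates these components allows either signal to be recovered.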
FIG. 17 is a block diagram showing the structure of a typical first optical transmitter-receiver. In FIG. 17, the first optical transmitter-receiver comprises a light source 110, an external optical modulating portion 120, an optical fiber 140, an optical/electrical converting portion 150, a frequency converting portion 1710, and a demodulating portion 1720. The light source 110 and the external optical modulating portion 120 constitute an optical transmitter 101 and are set in a base station, while the optical/electrical converting portion 150, the frequency converting portion 1710, and the demodulating portion 1720 constitute an optical receiver 102 and are set in a control station. FIG. 17 shows the signal path in only one direction, that is, the path from the base station to the control station.
In the first optical transmitter-receiver, an electrical signal to be transmitted from the base station to the control station is typically a modulated electrical signal Smod having a millimeter-wave band, in which a subcarrier is modulated by a baseband signal such as a voice signal and/or an image signal. The modulated electrical signal Smod is inputted to the external optical modulating portion 120 in the optical transmitter 101 through an antenna or an amplifier (not shown) from a portable telephone, a PHS terminal, or the like which moves around outside the base station. The light source 110 oscillates using unmodulated light as a main carrier Mc. The main carrier Mc is also inputted to the external optical modulating portion 120. The external optical modulating portion 120 performs external light-intensity modulation, to modulate the intensity of the inputted main carrier Mc on the basis of the change in the amplitude of the inputted modulated electrical signal Smod, thereby obtaining an optical signal OSmod. The optical signal OSmod outputted from the external optical modulating portion 120 to the optical fiber 140 itself serves as a carrier, conveying the modulated electrical signal Smod through the optical fiber 140 as it is, and is incident on the optical/electrical converting portion 150 in the optical receiver 102. The optical/electrical converting portion 150 performs optical/electrical conversion, to convert the incident optical signal OSmod into an electrical signal including its intensity modulation component. The frequency converting portion 1710 down-converts the electrical signal inputted from the optical/electrical converting portion 150 into an electrical signal having an intermediate frequency band. The demodulating portion 1720 demodulates the information of the baseband signal such as the voice signal and/or the image signal on the basis of the electrical signal having the intermediate frequency band inputted from the frequency converting portion 1710.
Description is now made of the second optical transmitter-receiver for merely optically transmitting a baseband signal. FIG. 18 is a block diagram showing the structure of a typical second optical transmitter-receiver. In FIG. 18, the second optical transmitter-receiver comprises a light source driving portion 1810, a light source 110, an optical fiber 140, and an optical/electrical converting portion 150. The light source driving portion 1810 and the light source 110 constitute an optical transmitter 101, while the optical/electrical converting portion 150 constitutes an optical receiver 102. In the second optical transmitter-receiver, it is assumed that a baseband signal SBB to be transmitted from the optical transmitter 101 to the optical receiver 102 is digital information, which is a voice signal and/or an image signal, for example. The baseband signal SBB is inputted to the light source driving portion 1810. The light source driving portion 1810 drives the light source 110, and modulates the intensity of an optical signal outputted from the light source 110 on the basis of the inputted baseband signal SBB (a direct optical modulation system). The optical signal is transmitted through the optical fiber 140, and is then optical/electrical-converted in the optical/electrical converting portion 150, so that the original baseband signal SBB is obtained. Such an optical transmission technique is general, and is described in Chapter 2 "Practice of Optical Communication System" of "Hikari Tsushin Gijyutsu Dokuhon (Optical Transmission Technical Book)" (edited by Shimada, Ohm Publishing Co., Ltd.), issued in 1980, for example.
However, the optical/electrical converting portion 150 and the frequency converting portion 1710 shown in FIG. 17 must accurately perform optical/electrical conversion and frequency conversion of a signal having a high frequency such as a millimeter-wave band, so that wideband characteristics are required; otherwise the demodulating portion 1720 could not perform accurate demodulation processing. In the first optical transmitter-receiver, therefore, electrical components corresponding to a high frequency band are interconnected. For this connection, a dedicated connector, waveguide, or semirigid cable is used. The waveguide or the semirigid cable is difficult to work freely, so that the first optical transmitter-receiver is difficult to manufacture. A waveguide is necessary in the case of an attempt to transmit a high-frequency electrical signal, such as one in the millimeter-wave band, with low loss; however, the first optical transmitter-receiver then becomes large in size, because a waveguide is larger than a coaxial cable.
As described in the foregoing, the second optical transmitter-receiver (see FIG. 18) is frequently used for transmitting the baseband signal SBB, which is digital information, by wire. On the other hand, it has been examined whether the first optical transmitter-receiver (see FIG. 17) can be applied to a wireless communication system. The first and second optical transmitter-receivers have thus been examined as separate systems because they differ in their applications. An optical transmitter-receiver for simultaneously optically transmitting both a baseband signal and a high-frequency electrical signal has received little examination. If a wavelength division multiplexing technique is used, however, such an optical transmitter-receiver can be constructed. That is, the optical signal outputted from the light source 110 shown in FIG. 18 and the optical signal outputted from the external optical modulating portion 120 shown in FIG. 17 are wavelength-division-multiplexed on the transmission side. The signal obtained by wavelength division multiplexing (WDM) is transmitted through the optical fiber 140 and separated on the optical receiving side, and the signals obtained by the separation are then respectively optical/electrical-converted. Consequently, both signals are simultaneously obtained on the receiving side. However, an optical transmitter-receiver to which a wavelength division multiplexing technique is applied must accurately separate the wavelength-division-multiplexed optical signal on the optical receiving side. Therefore, a plurality of light sources 110 which differ in oscillation wavelength are required, so that significant cost is required to construct the optical transmitter-receiver.
U.S. Pat. No. 5,596,436 discloses an optical transmitter-receiver to which a subcarrier multiplex optical transmission system is applied, and which has apparently similar portions to those in some of the optical transmitter-receivers disclosed in the present application. In the optical transmitter-receiver according to the U.S. Patent, however, modulated electrical signals are first produced by modulating subcarriers by baseband signals using mixers. A multiplexed signal is produced by multiplexing the modulated electrical signals by a combiner 40. An external optical modulator 46 modulates unmodulated light from a laser 44 by the multiplexed signal. The optical transmitter according to the U.S. Patent thus differs in structure from the optical transmitter 101 according to the present invention: a single subcarrier is used in the optical transmitter 101 according to the present invention, while a plurality of subcarriers are used in the optical transmitter according to the U.S. Patent. Consequently, the spectra of the optical signals outputted from the two optical transmitters differ from each other. In the optical signal according to the U.S. Patent, a component of a main carrier and a component of each of the subcarriers are in close proximity to each other on an optical frequency axis. On the other hand, in an optical signal OS according to the present invention (as described later), a component of a main carrier and components of the two sidebands are not in close proximity to each other. Consequently, the optical receiver according to the present invention produces the significant technical effect that a component of a baseband signal SBB can be taken out more simply and accurately than in the U.S. Patent.
Therefore, an object of the present invention is to provide an optical transmitter-receiver capable of optically transmitting a high-frequency electrical signal and being simple in manufacture and small in size.
Another object of the present invention is to provide an optical transmitter-receiver capable of simultaneously optically transmitting both a baseband signal and a high-frequency signal using the same light source.
The first aspect is directed to an optical transmitter-receiver in which an optical transmitter and an optical receiver are interconnected such that optical transmission is possible, characterized by comprising: a double-modulating portion, to which a subcarrier modulated by an electrical signal to be transmitted is inputted from outside, for double-modulating a main carrier, which is unmodulated light having a predetermined optical frequency, by the inputted subcarrier, to produce and output a double-modulated optical signal, an optical spectrum of the double-modulated optical signal outputted from the double-modulating portion including a component of the main carrier at the predetermined optical frequency and further including components of an upper sideband and a lower sideband at frequencies spaced by the frequency of the subcarrier apart from the predetermined optical frequency; an optical filter portion for selectively passing an optical signal including the component of either one of the upper sideband and the lower sideband in the double-modulated optical signal inputted from the double-modulating portion; and an optical/electrical converting portion for optical/electrical-converting the optical signal inputted from the optical filter portion, to obtain the electrical signal to be transmitted; the optical transmitter comprising at least the double-modulating portion, the optical receiver comprising at least the optical/electrical converting portion, and the optical filter portion being included in either one of the optical transmitter and the optical receiver.
According to the first aspect, the optical/electrical converting portion can directly obtain from the optical signal the electrical signal having a relatively low frequency to be transmitted, thereby eliminating the necessity of an electrical component, which is high in cost and is difficult to process, corresponding to a subcarrier band which is a relatively high frequency as in a conventional optical transmission of a subcarrier. Correspondingly, the optical receiver can be constructed simply and at low cost.
A second aspect is characterized in that, in the first aspect, the double-modulating portion comprises a semiconductor laser diode for outputting the main carrier, and at least one external optical modulating portion for amplitude-modulating the main carrier inputted from the semiconductor laser diode, using external optical modulation, by a subcarrier amplitude-modulated by an electrical signal to be transmitted which is inputted from outside.
According to the second aspect, the double-modulating portion is constituted by an existing semiconductor laser diode and an existing external optical modulating portion, so that the optical transmitter-receiver is constructed at low cost.
A third aspect is characterized in that, in the second aspect, the subcarrier amplitude-modulated by the electrical signal to be transmitted is a signal transmitted from outside, the optical transmitter-receiver further comprising an antenna portion for receiving the transmitted signal and supplying it to the double-modulating portion.
According to the third aspect, by comprising an antenna portion that receives the signal transmitted from outside, the optical transmitter-receiver can be easily connected to a wireless transmission system.
A fourth aspect is characterized in that, in the third aspect, the electrical signal to be transmitted is a frequency-multiplexed multichannel signal, and the electrical modulating portion amplitude-modulates the inputted subcarrier by the multichannel signal, to produce and output the modulated electrical signal.
According to the fourth aspect, the optical transmitter-receiver can optically transmit a lot of information.
A fifth aspect is characterized in that in the third aspect, the electrical signal to be transmitted is digital information, and the electrical modulating portion OOK (on-off keying)-modulates the subcarrier by the digital information.
According to the fifth aspect, the optical transmitter-receiver can transmit information high in quality.
A sixth aspect is directed to an optical transmitter-receiver in which an optical transmitter and an optical receiver are interconnected such that optical transmission is possible, comprising: a double-modulating portion, to which a subcarrier modulated by an electrical signal to be transmitted is inputted from outside, for double-modulating a main carrier, which is unmodulated light having a predetermined optical frequency, by the inputted subcarrier, to produce and output a double-modulated optical signal; an optical spectrum of the double-modulated optical signal inputted from the double-modulating portion including a component of the main carrier at the predetermined optical frequency and further including components of an upper sideband and a lower sideband at frequencies spaced by the frequency of the subcarrier apart from the predetermined optical frequency; an optical filter portion for selectively passing an optical signal including the component of either one of the upper sideband and the lower sideband in the double-modulated optical signal inputted from the double-modulating portion; and an optical branching portion for branching the optical signal inputted from the optical filter portion into a first optical signal and a second optical signal and outputting the first and second optical signals;
a first optical/electrical converting portion for optical/electrical-converting the first optical signal inputted from the optical branching portion, to obtain the electrical signal to be transmitted; a second optical/electrical converting portion for outputting as a detection signal an electrical signal obtained by optical/electrical-converting the second optical signal inputted from the optical branching portion; and a wavelength control portion for detecting the average values of the detection signals inputted from the second optical/electrical converting portion at predetermined time intervals, and controlling the wavelength of the double-modulated optical signal outputted from the double-modulating portion on the basis of the maximum value of the detected average values; the optical transmitter comprising at least the double-modulating portion, the optical receiver comprising at least the first optical/electrical converting portion, and the optical filter portion being included in either one of the optical transmitter and the optical receiver.
According to the sixth aspect, as in the first aspect, the optical transmitter-receiver can be constructed simply and at low cost, eliminating the necessity of an electrical component, which is high in cost and difficult to process, corresponding to a subcarrier band at a relatively high frequency. Further, by controlling the wavelength of the double-modulated optical signal, the optical filter portion can constantly output an optical signal that permits precise demodulation.
A seventh aspect is directed to an optical transmitter-receiver in which an optical transmitter and first and second optical receivers are interconnected such that subcarrier optical transmission is possible, characterized in that the optical transmitter comprises: a local oscillating portion for outputting a subcarrier having a predetermined frequency; a double-modulating portion for double-modulating a main carrier, which is unmodulated light having a predetermined optical frequency, by an electrical signal to be transmitted, which is inputted from outside, and the subcarrier inputted from the local oscillating portion, to produce and output a double-modulated optical signal, a spectrum of the double-modulated optical signal outputted from the double-modulating portion including a component of the main carrier at the predetermined optical frequency and further including components of an upper sideband and a lower sideband at frequencies spaced by the frequency of the subcarrier apart from the predetermined optical frequency; and an optical filter portion for dividing the double-modulated optical signal inputted from the double-modulating portion into a first optical signal including the component of either one of the upper sideband and the lower sideband and a second optical signal including the component of the main carrier and the component of the other one of the upper sideband and the lower sideband, to output the first optical signal and the second optical signal; the first optical receiver optical/electrical-converts the first optical signal transmitted from the optical transmitter, to obtain the electrical signal to be transmitted, and the second optical receiver optical/electrical-converts the second optical signal transmitted from the optical transmitter, to output the subcarrier that is modulated by the electrical signal to be transmitted.
The first optical signal includes the component of one of the sidebands included in the double-modulated optical signal obtained by the double modulation, and is optical/electrical-converted by the first optical/electrical converting portion, to be converted into the electrical signal to be transmitted. Further, the second optical signal includes the components of the other sideband and the main carrier in the double-modulated optical signal, and is optical/electrical-converted by the second optical/electrical converting portion, to be converted into a signal in which the subcarrier is modulated by the electrical signal to be transmitted. According to the seventh aspect, both the electrical signal to be transmitted and the signal in which the subcarrier is modulated by the electrical signal to be transmitted can be simultaneously obtained on the receiving side. Further, as is apparent from the foregoing, both signals can be transmitted by a single wave of unmodulated light, so that, according to the seventh aspect, the optical transmitter-receiver can be constructed at low cost without requiring a plurality of light sources as in a wavelength division multiplexing technique.
An eighth aspect is characterized in that in the seventh aspect, the double-modulating portion comprises: an electrical modulating portion for amplitude-modulating the subcarrier inputted from the local oscillating portion by the electrical signal to be transmitted, which is inputted from outside, to produce and output a modulated electrical signal; a light source for outputting the main carrier, which is unmodulated light having a predetermined optical frequency; and an external optical modulating portion for amplitude-modulating the main carrier inputted from the light source by the modulated electrical signal inputted from the electrical modulating portion, to produce the double-modulated optical signal.
According to the eighth aspect, the optical transmitter uses the same light source to simultaneously transmit the electrical signal to be transmitted and the signal in which the subcarrier is modulated by the electrical signal to be transmitted toward the receiving side. Consequently, the optical transmitter-receiver is constructed at low cost.
A ninth aspect is characterized in that in the eighth aspect, the electrical signal to be transmitted is digital information, and the electrical modulating portion OOK (on-off keying)-modulates the subcarrier by the digital information.
According to the ninth aspect, the optical transmitter-receiver can transmit high-quality information.
A tenth aspect is characterized in that in the seventh aspect, the double-modulating portion comprises: a light source for outputting the main carrier, which is unmodulated light having a predetermined optical frequency; a first external optical modulating portion for amplitude-modulating the main carrier inputted from the light source by the subcarrier inputted from the local oscillating portion, to produce and output a modulated optical signal; and a second external optical modulating portion for amplitude-modulating the modulated optical signal inputted from the first external optical modulating portion by the electrical signal to be transmitted, which is inputted from outside, to produce the double-modulated optical signal.
According to the tenth aspect, the optical transmitter uses the same light source to simultaneously transmit the electrical signal to be transmitted and the signal in which the subcarrier is modulated by the electrical signal to be transmitted toward the receiving side. Consequently, the optical transmitter-receiver is constructed at low cost.
An eleventh aspect is characterized in that in the seventh aspect, the double-modulating portion comprises: a light source for outputting the main carrier which is unmodulated light having a predetermined optical frequency; a first external optical modulating portion for amplitude-modulating the main carrier inputted from the light source by the electrical signal to be transmitted, which is inputted from outside, to produce and output a modulated optical signal; and a second external optical modulating portion for amplitude-modulating the modulated optical signal inputted from the first external optical modulating portion by the subcarrier inputted from the local oscillating portion, to produce the double-modulated optical signal.
According to the eleventh aspect, the optical transmitter uses the same light source to simultaneously transmit the electrical signal to be transmitted and the signal in which the subcarrier is modulated by the electrical signal to be transmitted toward the receiving side. Consequently, the optical transmitter-receiver is constructed at low cost.
A twelfth aspect is characterized in that in the seventh aspect, the optical filter portion comprises an optical circulator portion for outputting the double-modulated optical signal inputted from the double-modulating portion as it is, and an optical fiber grating portion for reflecting the component of either one of the upper sideband and the lower sideband in the double-modulated optical signal inputted from the optical circulator portion, to produce the first optical signal and output the produced first optical signal to the optical circulator portion, and passing the component of the main carrier and the component of the other one of the upper sideband and the lower sideband, to produce and output the second optical signal to the second optical receiver, the optical circulator portion further outputting the first optical signal inputted from the optical fiber grating portion as it is to the first optical receiver.
In the twelfth aspect, the optical filter portion is constituted by the optical circulator and the optical fiber grating which are optical components, so that the optical transmitter-receiver is constructed simply and at low cost.
A thirteenth aspect is characterized in that in the seventh aspect, the second optical receiver comprises an antenna portion for radiating to a space the subcarrier that is modulated by the electrical signal to be transmitted which is obtained by the optical/electrical conversion.
The subcarrier modulated by the electrical signal to be transmitted is a signal suitable for wireless transmission. According to the thirteenth aspect, the second optical receiver comprises the antenna portion for radiating the subcarrier to a space, so that the optical transmitter-receiver is easily connected to a wireless transmission system.
A fourteenth aspect is characterized in that in the seventh aspect, the electrical signal to be transmitted is an electrical signal obtained by converting analog information into digital information.
According to the fourteenth aspect, the optical transmitter-receiver can transmit high-quality information.
A fifteenth aspect is characterized in that in the seventh aspect, the electrical signal to be transmitted is a carrier modulated by analog information or digital information, and the frequency of the carrier is an intermediate frequency lower than that of the subcarrier outputted from the local oscillating portion.
When the electrical signal to be transmitted is the above-mentioned electrical signal, the carrier having the intermediate frequency modulated by the analog information or the like and the signal in which the subcarrier is modulated by the carrier having the intermediate frequency are obtained on the receiving side of the optical transmitter-receiver according to the fifteenth aspect. Consequently, the optical transmitter-receiver can perform optical transmission which does not depend on a modulation form.
A sixteenth aspect is characterized in that in the seventh aspect, the electrical signal to be transmitted is obtained by multiplexing a plurality of electrical signals, each of which has an intermediate frequency and is modulated by analog information or digital information, using a predetermined multiplexing technique.
A seventeenth aspect is characterized in that in the sixteenth aspect, the predetermined multiplexing technique is frequency division multiplexing, time division multiplexing, or code division multiplexing.
According to the sixteenth and seventeenth aspects, the optical transmitter-receiver can multiplex a large amount of information and optically transmit the information obtained by the multiplexing.
An eighteenth aspect is directed to an optical transmitter-receiver in which an optical transmitter and first and second optical receivers are interconnected such that subcarrier optical transmission is possible, characterized in that the optical transmitter comprises: a local oscillating portion for outputting a subcarrier having a predetermined frequency; a double-modulating portion for double-modulating a main carrier, which is unmodulated light having a predetermined optical frequency, by an electrical signal to be transmitted, which is inputted from outside, and by the subcarrier inputted from the local oscillating portion, to produce and output a double-modulated optical signal; and an optical branching portion for branching the double-modulated optical signal inputted from the double-modulating portion and outputting double-modulated optical signals obtained by the branching, the first optical receiver comprises a low-pass filter portion for passing a component included in a low frequency band of an electrical signal obtained by optical/electrical-converting the double-modulated optical signal transmitted from the optical transmitter, to output the electrical signal to be transmitted, and the second optical receiver comprises a high-pass filter portion for passing a component included in a high frequency band of an electrical signal obtained by optical/electrical-converting the double-modulated optical signal transmitted from the optical transmitter, to output the subcarrier that is modulated by the electrical signal to be transmitted.
On the receiving side in the eighteenth aspect, as in the seventh aspect, the low-pass filter portion and the high-pass filter portion respectively pass a low frequency band part and a high frequency band part of the electrical signal obtained by optical/electrical-converting the double-modulated optical signal. Therefore, the electrical signal to be transmitted, which is included in the relatively low frequency band, and the subcarrier modulated by that signal, which is included in the relatively high frequency band, can be simultaneously obtained. Further, the optical transmitter-receiver can be constructed at low cost.
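As a hedged illustration of why a simple filter pair suffices (same symbols as the sketch above; not from the original), square-law photodetection of the double-modulated optical signal yields a photocurrent

i(t) \propto \bigl[1 + m_1 s(t)\bigr]^2 \bigl[1 + m_2 \cos(2\pi f_s t)\bigr]^2,

whose expansion contains a baseband term proportional to s(t) and a bandpass term proportional to s(t)\cos(2\pi f_s t) centered at f_s. The low-pass filter portion keeps the former (the electrical signal to be transmitted); the high-pass filter portion keeps the latter (the subcarrier modulated by that signal).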
A nineteenth aspect is characterized in that in the eighteenth aspect, the double-modulating portion comprises: an electrical modulating portion for amplitude-modulating the subcarrier outputted from the local oscillating portion by the electrical signal to be transmitted, which is inputted from outside, to produce and output a modulated electrical signal; a light source for outputting the main carrier, which is unmodulated light having a predetermined optical frequency; and an external optical modulating portion for amplitude-modulating the main carrier outputted from the light source by the modulated electrical signal inputted from the electrical modulating portion, to produce the double-modulated optical signal.
According to the nineteenth aspect, the optical transmitter uses the same light source to simultaneously transmit the electrical signal to be transmitted and the signal in which the subcarrier is modulated by the electrical signal to be transmitted toward the receiving side. Consequently, the optical transmitter-receiver is constructed at low cost.
A twentieth aspect is characterized in that in the nineteenth aspect, the electrical signal to be transmitted is digital information, and the electrical modulating portion OOK (on-off keying)-modulates the subcarrier by the digital information.
According to the twentieth aspect, the optical transmitter-receiver can transmit high-quality information.
A twenty-first aspect is characterized in that in the eighteenth aspect, the double-modulating portion comprises: a light source for outputting the main carrier which is unmodulated light having a predetermined optical frequency; a first external optical modulating portion for amplitude-modulating the main carrier inputted from the light source by the subcarrier inputted from the local oscillating portion, to produce and output a modulated optical signal; and a second external optical modulating portion for amplitude-modulating the modulated optical signal inputted from the first external optical modulating portion by the electrical signal to be transmitted which is inputted from outside, to produce the double-modulated optical signal.
According to the twenty-first aspect, the optical transmitter uses the same light source to simultaneously transmit the electrical signal to be transmitted and the signal in which the subcarrier is modulated by the electrical signal to be transmitted toward the receiving side. Consequently, the optical transmitter-receiver is constructed at low cost.
A twenty-second aspect is characterized in that in the eighteenth aspect, the double-modulating portion comprises: a light source for outputting the main carrier, which is unmodulated light having a predetermined optical frequency; a first external optical modulating portion for amplitude-modulating the main carrier inputted from the light source by the electrical signal to be transmitted, which is inputted from outside, to produce and output a modulated optical signal; and a second external optical modulating portion for amplitude-modulating the modulated optical signal inputted from the first external optical modulating portion by the subcarrier inputted from the local oscillating portion, to produce the double-modulated optical signal.
According to the twenty-second aspect, the optical transmitter uses the same light source to simultaneously transmit the electrical signal to be transmitted and the signal in which the subcarrier is modulated by the electrical signal to be transmitted toward the receiving side. Consequently, the optical transmitter-receiver is constructed at low cost.
A twenty-third aspect is characterized in that in the eighteenth aspect, an antenna portion for radiating to a space is provided downstream of the high-pass filter portion. The antenna portion radiates the subcarrier that is modulated by the electrical signal to be transmitted, which is outputted from the high-pass filter portion.
According to the twenty-third aspect, the optical transmitter-receiver is simply connected to a wireless transmission system, as in the thirteenth aspect.
A twenty-fourth aspect is characterized in that in the eighteenth aspect, the electrical signal to be transmitted is a carrier modulated by analog information or digital information, and the frequency of the carrier is an intermediate frequency lower than that of the subcarrier outputted from the local oscillating portion.
According to the twenty-fourth aspect, when the electrical signal to be transmitted is the above-mentioned electrical signal, the carrier having the intermediate frequency modulated by the analog information or the like and the signal in which the subcarrier is modulated by the carrier having the intermediate frequency are obtained on the receiving side. Consequently, the optical transmitter-receiver can perform optical transmission which does not depend on a modulation form.
A twenty-fifth aspect is characterized in that in the eighteenth aspect, the double-modulating portion modulates the main carrier by the subcarrier inputted from the local oscillating portion using a single sideband amplitude modulation system.
In the twenty-fifth aspect, by applying a single sideband amplitude modulation system, the double-modulated optical signal is not easily affected by wavelength dispersion in an optical fiber serving as an optical transmission line, so that the transmission distance increases.
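For context, a standard fiber-optics result (not taken from the original) quantifies this: with double-sideband modulation the two sidebands accumulate different dispersion phases, and the detected subcarrier power fades approximately as

P_{RF} \propto \cos^2\!\left(\frac{\pi L D \lambda^2 f_s^2}{c}\right),

where L is the fiber length, D the dispersion parameter, \lambda the optical wavelength, f_s the subcarrier frequency and c the speed of light. Single sideband modulation removes the two-sideband interference and hence this periodic fading, which is why the twenty-fifth aspect extends the transmission distance.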
A twenty-sixth aspect is directed to an optical transmitter-receiver in which an optical transmitter and an optical receiver are interconnected such that subcarrier optical transmission is possible, characterized in that the optical transmitter comprises: a local oscillating portion for outputting a subcarrier having a predetermined frequency; and a double-modulating portion for double-modulating a main carrier, which is unmodulated light having a predetermined optical frequency, by an electrical signal to be transmitted, which is inputted from outside, and by the subcarrier inputted from the local oscillating portion, to produce and output a double-modulated optical signal, and the optical receiver comprises an optical/electrical converting portion for optical/electrical-converting the double-modulated optical signal transmitted from the optical transmitter, to output an electrical signal; a distributing portion for distributing the electrical signal inputted from the optical/electrical converting portion into at least two electrical signals; a low-pass filter portion for passing a component included in a low frequency band of the electrical signal obtained by the distribution, to output the electrical signal to be transmitted; and a high-pass filter portion for passing a component included in a high frequency band of the electrical signal obtained by the distribution, to output the subcarrier that is modulated by the electrical signal to be transmitted.
On the receiving side in the twenty-sixth aspect, the low-pass filter portion and the high-pass filter portion respectively pass a low frequency band part and a high frequency band part of the electrical signal obtained by optical/electrical-converting the double-modulated optical signal, as in the seventh aspect. Therefore, the electrical signal to be transmitted, which is included in the relatively low frequency band, and the subcarrier modulated by that signal, which is included in the relatively high frequency band, can be simultaneously obtained. Further, the optical transmitter-receiver can be constructed at low cost.
A twenty-seventh aspect is characterized in that in the twenty-sixth aspect, the double-modulating portion comprises: an electrical modulating portion for amplitude-modulating the subcarrier inputted from the local oscillating portion by the electrical signal to be transmitted, which is inputted from outside, to produce and output a modulated electrical signal; a light source for outputting the main carrier, which is unmodulated light having a predetermined optical frequency; and an external optical modulating portion for amplitude-modulating the main carrier inputted from the light source by the modulated electrical signal inputted from the electrical modulating portion, to produce the double-modulated optical signal.
According to the twenty-seventh aspect, the optical transmitter uses the same light source to simultaneously transmit the electrical signal to be transmitted and the signal in which the subcarrier is modulated by the electrical signal to be transmitted toward the receiving side. Consequently, the optical transmitter-receiver is constructed at low cost.
A twenty-eighth aspect is characterized in that in the twenty-seventh aspect, the electrical signal to be transmitted is digital information, and the electrical modulating portion OOK (on-off keying)-modulates the subcarrier by the digital information.
According to the twenty-eighth aspect, the optical transmitter-receiver can transmit high-quality information.
A twenty-ninth aspect is characterized in that in the twenty-sixth aspect, the double-modulating portion comprises: a light source for outputting the main carrier, which is unmodulated light having a predetermined optical frequency; a first external optical modulating portion for amplitude-modulating the main carrier inputted from the light source by the subcarrier inputted from the local oscillating portion, to produce and output a modulated optical signal; and a second external optical modulating portion for amplitude-modulating the modulated optical signal inputted from the first external optical modulating portion by the electrical signal to be transmitted, which is inputted from outside, to produce the double-modulated optical signal.
According to the twenty-ninth aspect, the optical transmitter uses the same light source to simultaneously transmit the electrical signal to be transmitted and the signal in which the subcarrier is modulated by the electrical signal to be transmitted toward the receiving side. Consequently, the optical transmitter-receiver is constructed at low cost.
A thirtieth aspect is characterized in that in the twenty-sixth aspect, the double-modulating portion comprises: a light source for outputting the main carrier, which is unmodulated light having a predetermined optical frequency; a first external optical modulating portion for amplitude-modulating the main carrier inputted from the light source by the electrical signal to be transmitted, which is inputted from outside, to produce and output a modulated optical signal; and a second external optical modulating portion for amplitude-modulating the modulated optical signal inputted from the first external optical modulating portion by the subcarrier inputted from the local oscillating portion, to produce the double-modulated optical signal.
According to the thirtieth aspect, the optical transmitter uses the same light source to simultaneously transmit the electrical signal to be transmitted and the signal in which the subcarrier is modulated by the electrical signal to be transmitted toward the receiving side. Consequently, the optical transmitter-receiver is constructed at low cost.
A thirty-first aspect is characterized in that in the twenty-sixth aspect, an antenna portion for radiating to a space is provided downstream of the high-pass filter portion. The antenna portion radiates the subcarrier that is modulated by the electrical signal to be transmitted, which is outputted from the high-pass filter portion.
According to the thirty-first aspect, the optical transmitter-receiver is simply connected to a wireless transmission system, as in the thirteenth aspect.
A thirty-second aspect is characterized in that in the twenty-sixth aspect, the electrical signal to be transmitted is a carrier modulated by analog information or digital information, and the frequency of the carrier is an intermediate frequency lower than that of the subcarrier outputted from the local oscillating portion.
According to the thirty-second aspect, the optical transmitter-receiver can perform optical transmission which does not depend on a modulation form, as in the fifteenth aspect.
A thirty-third aspect is characterized in that in the twenty-sixth aspect, the double-modulating portion modulates the main carrier by the subcarrier inputted from the local oscillating portion using a single sideband amplitude modulation system.
In the thirty-third aspect, the double-modulated optical signal is not easily affected by wavelength dispersion in an optical fiber serving as an optical transmission line, so that the transmission distance increases, as in the twenty-fifth aspect.
A thirty-fourth aspect is directed to an optical transmitter-receiver in which an optical transmitter and first and second optical receivers are interconnected such that subcarrier optical transmission is possible, characterized in that the optical transmitter comprises: a local oscillating portion for outputting a subcarrier having a predetermined frequency; a mode-locked light source which is mode-locked on the basis of the subcarrier inputted from the local oscillating portion and oscillates with an optical frequency spacing related to the subcarrier, to produce and output a mode-locked optical signal; an external optical modulating portion for amplitude-modulating the mode-locked optical signal inputted from the mode-locked light source by an electrical signal to be transmitted, which is inputted from outside, to produce and output a double-modulated optical signal; and an optical branching portion for branching the double-modulated optical signal inputted from the external optical modulating portion and outputting double-modulated optical signals obtained by the branching, the first optical receiver comprises a low-pass filter portion for passing a component included in a low frequency band of an electrical signal obtained by optical/electrical-converting the double-modulated optical signal transmitted from the optical transmitter, to output the electrical signal to be transmitted, and the second optical receiver comprises a high-pass filter portion for passing a component included in a high frequency band of an electrical signal obtained by optical/electrical-converting the double-modulated optical signal transmitted from the optical transmitter, to output the subcarrier that is modulated by the electrical signal to be transmitted.
On the receiving side in the thirty-fourth aspect, the low-pass filter portion and the high-pass filter portion respectively pass a low frequency band part and a high frequency band part of the electrical signal obtained by optical/electrical-converting the double-modulated optical signal, as in the seventh aspect. Therefore, the electrical signal to be transmitted, which is included in the relatively low frequency band, and the subcarrier modulated by that signal, which is included in the relatively high frequency band, can be simultaneously obtained. Further, the optical transmitter-receiver can be constructed at low cost.
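As a hedged sketch of the mode-locked source (notation introduced here, not in the original), the laser oscillates on optical modes spaced by the subcarrier frequency,

E(t) \propto \sum_k a_k \cos\bigl(2\pi (f_o + k f_s) t + \phi_k\bigr),

so square-law photodetection produces beat notes between adjacent modes at f_s. Amplitude-modulating this comb by the electrical signal to be transmitted therefore yields both a baseband copy of that signal and an f_s subcarrier modulated by it, which the low-pass and high-pass filter portions separate as described above.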
A thirty-fifth aspect is characterized in that in the thirty-fourth aspect, an antenna portion for radiating to a space is provided downstream of the high-pass filter portion. The antenna portion radiates the subcarrier that is modulated by the electrical signal to be transmitted, which is outputted from the high-pass filter portion.
According to the thirty-fifth aspect, the optical transmitter-receiver is simply connected to a wireless transmission system, as in the thirteenth aspect.
A thirty-sixth aspect is characterized in that in the thirty-fourth aspect, the electrical signal to be transmitted is a carrier modulated by analog information or digital information, and the frequency of the carrier is an intermediate frequency lower than that of the subcarrier outputted from the local oscillating portion.
According to the thirty-sixth aspect, the optical transmitter-receiver can perform optical transmission which does not depend on a modulation form, as in the fifteenth aspect.
A thirty-seventh aspect is directed to an optical transmitter-receiver in which an optical transmitter and an optical receiver are interconnected such that subcarrier optical transmission is possible, characterized in that the optical transmitter comprises: a local oscillating portion for outputting a subcarrier having a predetermined frequency; a mode-locked light source which is mode-locked on the basis of the subcarrier inputted from the local oscillating portion and oscillates with an optical frequency spacing related to the subcarrier, to produce and output a mode-locked optical signal; and an external optical modulating portion for amplitude-modulating the mode-locked optical signal inputted from the mode-locked light source by the electrical signal to be transmitted, which is inputted from outside, to produce and output a double-modulated optical signal, and the optical receiver comprises: an optical/electrical converting portion for optical/electrical-converting the double-modulated optical signal transmitted from the optical transmitter, to output an electrical signal; a distributing portion for distributing the electrical signal inputted from the optical/electrical converting portion into at least two electrical signals; a low-pass filter portion for passing a component included in a low frequency band of the electrical signal obtained by the distribution, to output the electrical signal to be transmitted; and a high-pass filter portion for passing a component included in a high frequency band of the electrical signal obtained by the distribution, to output the subcarrier that is modulated by the electrical signal to be transmitted.
On the receiving side in the thirty-seventh aspect, the low-pass filter portion and the high-pass filter portion respectively pass a low frequency band part and a high frequency band part of the electrical signal obtained by optical/electrical-converting the double-modulated optical signal, as in the seventh aspect. Therefore, the electrical signal to be transmitted, which is included in the relatively low frequency band, and the subcarrier modulated by that signal, which is included in the relatively high frequency band, can be simultaneously obtained. Further, the optical transmitter-receiver can be constructed at low cost.
A thirty-eighth aspect is characterized in that in the thirty-seventh aspect, an antenna portion for radiating to a space is provided downstream of the high-pass filter portion. The antenna portion radiates the subcarrier that is modulated by the electrical signal to be transmitted, which is outputted from the high-pass filter portion.
According to the thirty-eighth aspect, the optical transmitter-receiver is simply connected to a wireless transmission system, as in the thirteenth aspect.
A thirty-ninth aspect is characterized in that in the thirty-seventh aspect, the electrical signal to be transmitted is a carrier modulated by analog information or digital information, and the frequency of the carrier is an intermediate frequency lower than that of the subcarrier outputted from the local oscillating portion.
According to the thirty-ninth aspect, the optical transmitter-receiver can perform optical transmission which does not depend on a modulation form, as in the fifteenth aspect.
A fortieth aspect is directed to an optical transmitter-receiver in which an optical transmitter and first and second optical receivers are interconnected such that optical transmission is possible, wherein the optical transmitter comprises: a first light source for outputting first unmodulated light having a first optical frequency; an external optical modulating portion for amplitude-modulating the first unmodulated light inputted from the first light source by an electrical signal to be transmitted, which is inputted from outside, to produce and output a modulated optical signal; a second light source for outputting second unmodulated light having a second optical frequency, which differs from the first optical frequency by a predetermined optical frequency; an optical multiplexing portion for multiplexing the modulated optical signal inputted from the external optical modulating portion and the second unmodulated light inputted from the second light source such that polarizations of the modulated optical signal and the second unmodulated light coincide with each other, to produce and output an optical signal; and an optical branching portion for branching the optical signal inputted from the optical multiplexing portion and outputting optical signals obtained by the branching, the first optical receiver comprises a low-pass filter portion for passing a component included in a low frequency band of an electrical signal obtained by optical/electrical-converting the optical signal transmitted from the optical transmitter, to output the electrical signal to be transmitted, and the second optical receiver comprises a high-pass filter portion for passing a component included in a high frequency band of an electrical signal obtained by optical/electrical-converting the optical signal transmitted from the optical transmitter, to output the subcarrier that is modulated by the electrical signal to be transmitted.
According to the fortieth aspect, the first unmodulated light is amplitude-modulated by the electrical signal to be transmitted, to produce the modulated optical signal. The modulated optical signal and the second unmodulated light are multiplexed, to produce the optical signal. Although the modulation must be performed twice in the seventh aspect, for example, the optical transmitter in the fortieth aspect performs the modulation only once. By thus reducing the number of modulation steps, low-loss optical transmission can be realized. Further, in the optical transmitter in the fortieth aspect, no electrical component for amplitude-modulating the subcarrier by the electrical signal to be transmitted is required. That is, according to the fortieth aspect, the necessity of an electrical component, which is high in cost and is difficult to process, corresponding to a subcarrier band which is a relatively high frequency is eliminated. Correspondingly, the optical transmitter-receiver can be constructed simply and at low cost.
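A hedged sketch of the heterodyne principle behind the fortieth aspect (symbols introduced here, not in the original): with the modulated light at optical frequency f_1 and the unmodulated light at f_2 = f_1 + \Delta f combined with matched polarizations, the photocurrent

i(t) \propto \bigl|\,[1 + m\,s(t)]\,e^{j 2\pi f_1 t} + e^{j 2\pi f_2 t}\,\bigr|^2

contains a baseband term proportional to s(t) and a beat term at \Delta f whose amplitude follows s(t). The optical frequency difference \Delta f thus plays the role of the subcarrier without any electrical subcarrier modulator, and the beat amplitude is maximal only when the polarizations coincide, which is why the optical multiplexing portion aligns them.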
A forty-first aspect is characterized in that in the fortieth aspect, an antenna portion for radiating to a space is provided downstream of the high-pass filter portion. The antenna portion radiates the subcarrier that is modulated by the electrical signal to be transmitted, which is outputted from the high-pass filter portion.
According to the forty-first aspect, the optical transmitter-receiver is simply connected to a wireless transmission system, as in the thirteenth aspect.
A forty-second aspect is characterized in that in the fortieth aspect, the electrical signal to be transmitted is a carrier modulated by analog information or digital information, and the frequency of the carrier is an intermediate frequency lower than that of the subcarrier outputted from the local oscillating portion.
According to the forty-second aspect, the optical transmitter-receiver can perform optical transmission which does not depend on a modulation form, as in the fifteenth aspect.
A forty-third aspect is directed to an optical transmitter-receiver in which an optical transmitter and an optical receiver are interconnected such that optical transmission is possible, wherein the optical transmitter comprises: a first light source for outputting first unmodulated light having a first optical frequency; an external optical modulating portion for amplitude-modulating the first unmodulated light inputted from the first light source by an electrical signal to be transmitted, which is inputted from outside, to produce and output a modulated optical signal; a second light source for outputting second unmodulated light having a second optical frequency, which differs from the first optical frequency by a predetermined optical frequency; an optical multiplexing portion for multiplexing the modulated optical signal inputted from the external optical modulating portion and the second unmodulated light inputted from the second light source such that polarizations of the modulated optical signal and the second unmodulated light coincide with each other, to produce and output an optical signal; and an optical branching portion for branching the optical signal inputted from the optical multiplexing portion and outputting optical signals obtained by the branching, and the optical receiver comprises: an optical/electrical converting portion for optical/electrical-converting the optical signal transmitted from the optical transmitter, to output an electrical signal; a distributing portion for distributing the electrical signal inputted from the optical/electrical converting portion into at least two electrical signals; a low-pass filter portion for passing a component included in a low frequency band of the electrical signal obtained by the distribution, to output the electrical signal to be transmitted; and a high-pass filter portion for passing a component included in a high frequency band of the electrical signal obtained by the distribution, to output the subcarrier that is modulated by the electrical signal to be transmitted.
According to the forty-third aspect, it is possible to realize low-loss optical transmission as well as to construct the optical transmitter-receiver simply and at low cost.
A forty-fourth aspect is characterized in that in the forty-third aspect, an antenna portion for radiating to a space is provided downstream of the high-pass filter portion. The antenna portion radiates the subcarrier that is modulated by the electrical signal to be transmitted, which is outputted from the high-pass filter portion.
According to the forty-fourth aspect, the optical transmitter-receiver is simply connected to a wireless transmission system, as in the thirteenth aspect.
A forty-fifth aspect is characterized in that in the forty-third aspect, the electrical signal to be transmitted is a carrier modulated by analog information or digital information, and the frequency of the carrier is an intermediate frequency lower than that of the subcarrier outputted from the local oscillating portion.
According to the forty-fifth aspect, the optical transmitter-receiver can perform optical transmission which does not depend on a modulation form, as in the fifteenth aspect.
The foregoing and other objects, features, aspects and advantages of the present invention will become more apparent from the following detailed description of the present invention when taken in conjunction with the accompanying drawings. |
Therapeutic approaches to chronic lymphocytic leukemia. Chronic lymphocytic leukemia is the most common leukemia in the Western Hemisphere and will increase in prevalence as the population continues to age. Partly because of the advanced age of patients and the often prolonged indolent course of disease, therapy has changed little over the past several decades. The clinical course is highly variable and several clinical features have prognostic value. Until recently, therapeutic approaches have ranged from watchful observation to palliative treatment with an alkylating agent alone or combined with a corticosteroid; however, a number of new chemotherapeutic agents (eg, fludarabine, deoxycoformycin, and 2-chlorodeoxyadenosine) have been shown to act effectively against the disease. The availability of effective new agents, combined with a better understanding of the biology and prognostic determinants of this disease, has sparked recent interest in the optimal management of patients with this increasingly common malignancy. |
# repo: mishamsk/nerddiary
from __future__ import annotations
import datetime
import enum
from abc import ABC, abstractmethod
from pathlib import Path
import arrow
import sqlalchemy as sa
from cryptography.fernet import InvalidToken
from pydantic import BaseModel, DirectoryPath, ValidationError
from sqlalchemy.dialects.sqlite import BLOB
from sqlalchemy.sql.expression import Select
from .crypto import EncryptionProdiver
from typing import Any, ClassVar, Dict, List, Tuple, Type
class DataCorruptionType(enum.Enum):
UNDEFINED = enum.auto()
LOCK_WRITE_FAILURE = enum.auto()
INCORRECT_LOCK = enum.auto()
USER_DATA_NO_LOCK = enum.auto()
class DataCorruptionError(Exception): # pragma: no cover
def __init__(self, type: DataCorruptionType = DataCorruptionType.UNDEFINED) -> None:
self.type = type
super().__init__(type)
def __str__(self) -> str:
mes = ""
match self.type:
case DataCorruptionType.INCORRECT_LOCK:
mes = "Lock file didn't match this user_id"
case DataCorruptionType.LOCK_WRITE_FAILURE:
mes = "Failed to create lock file"
case DataCorruptionType.USER_DATA_NO_LOCK:
mes = "User data exists, but lock file is missing"
case _:
mes = "Unspecified data corruption"
return f"Data corrupted: {mes}"
class IncorrectPasswordKeyError(Exception):
pass
class DataProvider(ABC):
name: ClassVar[str]
def __init__(self, params: Dict[str, Any] | None) -> None:
super().__init__()
@classmethod
@property
def supported_providers(cls) -> Dict[str, Type[DataProvider]]:
def all_subclasses(cls) -> Dict[str, Type[DataProvider]]:
            subc = {cl.name: cl for cl in cls.__subclasses__()}
sub_subc = {}
for c in subc.values():
sub_subc |= all_subclasses(c)
return subc | sub_subc
return all_subclasses(cls)
def get_connection(self, user_id: str, password_or_key: str | bytes) -> DataConnection:
"""Creates a data connection for a new or existing user. Checks for correct password/key and data corruption.
If this is a new user (meaning `check_lock_exist()` returns False) a `str` password must be provided. If a lock file exist, either a password or encryption `key` may be provided (see `DataConnection` property `key`).
Throws `DataCorruptionError` with the corresponding `DataCorruptionType` if config or data exist but lock file is missing (data corruption), or if lock file corrupted or couldn't be created for any reason.
Throws `IncorrectPasswordKeyError` if incorrect password or key was provided. Also throws
"""
encr = None
if not self.check_lock_exist(user_id):
if self.check_user_data_exist(user_id):
raise DataCorruptionError(DataCorruptionType.USER_DATA_NO_LOCK)
if not isinstance(password_or_key, str):
raise ValueError("No lock file for this user. A `str` type password must be provided")
encr = EncryptionProdiver(password_or_key)
lock = encr.encrypt(user_id.encode())
if not self.save_lock(user_id, lock):
raise DataCorruptionError(DataCorruptionType.LOCK_WRITE_FAILURE) # pragma: no cover
else:
if not isinstance(password_or_key, str) and not isinstance(password_or_key, bytes):
raise ValueError("Lock file found. Either a `str` password ot `bytes` key must be provided")
lock = self.get_lock(user_id)
assert lock
try:
encr = EncryptionProdiver(password_or_key, init_token=lock, control_message=user_id.encode())
except InvalidToken:
raise IncorrectPasswordKeyError()
except ValueError:
raise DataCorruptionError(DataCorruptionType.INCORRECT_LOCK)
return self._get_connection(user_id, encr)
@abstractmethod
def _get_connection(self, user_id: str, encr: EncryptionProdiver) -> DataConnection:
pass # pragma: no cover
@abstractmethod
def get_user_list(self) -> List[str]:
pass
@abstractmethod
def check_user_data_exist(self, user_id: str, category: str | None = None) -> bool:
pass # pragma: no cover
@abstractmethod
def check_lock_exist(self, user_id: str) -> bool:
pass # pragma: no cover
@abstractmethod
def get_lock(self, user_id: str) -> bytes | None:
pass # pragma: no cover
@abstractmethod
def save_lock(self, user_id: str, lock: bytes) -> bool:
pass # pragma: no cover
@classmethod
@abstractmethod
def _validate_params(cls, params: Dict[str, Any] | None) -> bool:
pass # pragma: no cover
@classmethod
def validate_params(cls, name: str, params: Dict[str, Any] | None) -> bool:
if name not in cls.supported_providers:
raise NotImplementedError(f"Data provider {name} doesn't exist")
return cls.supported_providers[name]._validate_params(params)
@classmethod
def get_data_provider(cls, name: str, params: Dict[str, Any] | None) -> DataProvider:
if name not in cls.supported_providers:
raise NotImplementedError(f"Data provider {name} doesn't exist")
return cls.supported_providers[name](params)
class DataConnection(ABC):
def __init__(
self,
data_provider: DataProvider,
user_id: str,
encryption_provider: EncryptionProdiver,
) -> None:
super().__init__()
self._data_provider = data_provider
self._user_id = user_id
self._encryption_provider = encryption_provider
@property
def user_id(self) -> str:
return self._user_id
@property
def key(self) -> bytes:
return self._encryption_provider.key
@abstractmethod
def store_user_data(self, data: str, category: str) -> bool:
"""Saves serialized config"""
pass # pragma: no cover
@abstractmethod
def get_user_data(self, category: str) -> str | None:
"""Reads serialized config if exists"""
pass # pragma: no cover
@abstractmethod
    def append_log(self, poll_code: str, poll_ts: datetime.datetime, log: str) -> int | None:
"""Appends a single serialized `log` for a given `poll_code`
Args:
poll_code (str): poll code
poll_ts (datetime.datetime): poll timestamp - the date to which this log belongs
            log (str): serialized poll answers (log)
        Returns:
            int | None: the id of the inserted row if the append was successful, otherwise None
"""
pass # pragma: no cover
def update_log(self, id: int, poll_ts: datetime.datetime | None = None, log: str | None = None) -> bool:
"""Updates a log identified by `id` with a new serialized `log`"""
raise NotImplementedError("This provider doesn't support row updates") # pragma: no cover
def get_all_logs(self) -> List[Tuple[int, str, datetime.datetime, str]]:
"""Get all serialized logs"""
return self.get_poll_logs()
def get_log(self, id: int) -> Tuple[int, str, datetime.datetime, str]:
"""Get a single serialized log identified by `id`"""
ret = self.get_logs([id])
if len(ret) == 1:
return ret[0]
else:
raise ValueError("Log id wasn't found")
def get_logs(
self,
ids: List[int],
) -> List[Tuple[int, str, datetime.datetime, str]]:
"""Get a list of serialized logs identified by `ids`"""
raise NotImplementedError("This provider doesn't support retrieving rows") # pragma: no cover
def get_poll_logs(
self,
poll_code: str | None = None,
date_from: datetime.datetime | None = None,
date_to: datetime.datetime | None = None,
max_rows: int | None = None,
skip: int | None = None,
) -> List[Tuple[int, str, datetime.datetime, str]]:
"""Get a list of serialized logs for a given `poll_code` sorted by creation date, optionally filtered by `date_from`, `date_to` and optionally limited to `max_rows`+`skip` starting from `skip` (ordered by date DESC)"""
raise NotImplementedError("This provider doesn't support retrieving rows") # pragma: no cover
def get_last_n_logs(
self,
count: int,
*,
poll_code: str | None = None,
skip: int | None = None,
) -> List[Tuple[int, str, datetime.datetime, str]]:
return self.get_poll_logs(poll_code=poll_code, max_rows=count, skip=skip)
class SQLLiteProviderParams(BaseModel):
base_path: DirectoryPath
class Config:
extra = "forbid"
class SQLLiteProvider(DataProvider):
name = "sqllite"
BASE_URI = "sqlite:///"
DB_FILE_NAME = "data.db"
POLL_LOG_TABLE = "poll_log"
USER_DATA_TABLE = "user_data"
def __init__(self, params: Dict[str, Any]) -> None:
super().__init__(params)
self._params = SQLLiteProviderParams.parse_obj(params)
def _get_connection(self, user_id: str, encr: EncryptionProdiver) -> SQLLiteConnection:
return SQLLiteConnection(self, user_id, encr)
def get_user_list(self) -> List[str]:
ret = []
for file in self._params.base_path.iterdir():
if file.is_dir():
ret.append(str(file.name))
return ret
def check_user_data_exist(self, user_id: str, category: str | None = None) -> bool:
data_path = self._params.base_path.joinpath(user_id, self.DB_FILE_NAME)
db_exists = data_path.exists() and data_path.is_file()
if category is None or not db_exists:
return db_exists
# TODO: check user id is a valid folder path
engine = sa.create_engine(self.BASE_URI + str(self._params.base_path.joinpath(user_id, self.DB_FILE_NAME)))
with engine.connect() as conn:
            # Bind `category` as a parameter to avoid SQL injection.
            result = conn.execute(
                sa.text(f"SELECT count(*) FROM {self.USER_DATA_TABLE} WHERE category = :category"),
                {"category": category},
            )
count = result.scalar()
if count == 1:
return True
else:
return False
def check_lock_exist(self, user_id: str) -> bool:
lock_path = self._params.base_path.joinpath(user_id, "lock")
return lock_path.exists() and lock_path.is_file()
def get_lock(self, user_id: str) -> bytes | None:
if not self.check_lock_exist(user_id):
return None
lock_path = self._params.base_path.joinpath(user_id, "lock")
return lock_path.read_bytes()
def save_lock(self, user_id: str, lock: bytes) -> bool:
assert isinstance(self._params.base_path, Path)
self._params.base_path.joinpath(user_id).mkdir(parents=True, exist_ok=True)
lock_path = self._params.base_path.joinpath(user_id, "lock")
try:
lock_path.write_bytes(lock)
except OSError: # pragma: no cover
return False
return True
@classmethod
def _validate_params(cls, params: Dict[str, Any] | None) -> bool:
try:
SQLLiteProviderParams.parse_obj(params)
except ValidationError:
return False
return True
class SQLLiteConnection(DataConnection):
def __init__(
self,
data_provider: SQLLiteProvider,
user_id: str,
encryption_provider: EncryptionProdiver,
) -> None:
super().__init__(data_provider, user_id, encryption_provider)
base_path = data_provider._params.base_path
base_path.joinpath(self.user_id).mkdir(exist_ok=True)
# TODO: check user id is a valid folder path
self._engine = engine = sa.create_engine(
data_provider.BASE_URI
+ str(data_provider._params.base_path.joinpath(self.user_id, data_provider.DB_FILE_NAME))
)
self._meta = meta = sa.MetaData()
self._poll_log_table = poll_log_table = sa.Table(
data_provider.POLL_LOG_TABLE,
meta,
sa.Column("id", sa.Integer, primary_key=True, index=True, nullable=False),
sa.Column("poll_code", sa.String, index=True, unique=False, nullable=False),
sa.Column("poll_ts", sa.TIMESTAMP(timezone=True), index=True, unique=False, nullable=False),
sa.Column("log", BLOB, nullable=False),
sa.Column("created_ts", sa.TIMESTAMP(timezone=True), nullable=False),
sa.Column("updated_ts", sa.TIMESTAMP(timezone=True), nullable=False),
)
self._user_data_table = user_data_table = sa.Table(
data_provider.USER_DATA_TABLE,
meta,
sa.Column("category", sa.String, primary_key=True, index=True, unique=True, nullable=False),
sa.Column("data", BLOB, nullable=False),
sa.Column("created_ts", sa.TIMESTAMP(timezone=True), nullable=False),
sa.Column("updated_ts", sa.TIMESTAMP(timezone=True), nullable=False),
)
with engine.connect() as conn:
poll_log_table.create(conn, checkfirst=True)
user_data_table.create(conn, checkfirst=True)
def store_user_data(self, data: str, category: str) -> bool:
now = datetime.datetime.now(tz=datetime.timezone.utc)
stmt = self._user_data_table.select().where(self._user_data_table.c.category == category)
new = True
with self._engine.connect() as conn:
row = conn.execute(stmt).first()
if row:
new = False
stmt = None
data_out = self._encryption_provider.encrypt(data.encode())
if new:
stmt = self._user_data_table.insert(
values={
"data": data_out,
"category": category,
"created_ts": now,
"updated_ts": now,
}
)
else:
stmt = self._user_data_table.update(
values={
"data": data_out,
"category": category,
"created_ts": now,
"updated_ts": now,
}
).where(self._user_data_table.c.category == category)
with self._engine.connect() as conn:
result = conn.execute(stmt)
if result.rowcount == 1:
return True
else:
return False
def get_user_data(self, category: str) -> str | None:
stmt = sa.select(self._user_data_table.c.data).where(self._user_data_table.c.category == category) # type: ignore
with self._engine.connect() as conn:
result = conn.execute(stmt)
data = result.scalar()
if data:
return self._encryption_provider.decrypt(data).decode()
else:
return None
def append_log(self, poll_code: str, poll_ts: datetime.datetime, log: str) -> int | None:
now = datetime.datetime.now(tz=datetime.timezone.utc)
log_out = self._encryption_provider.encrypt(log.encode())
stmt = self._poll_log_table.insert(
values={
"log": log_out,
"poll_code": poll_code,
"poll_ts": arrow.get(poll_ts).to("utc").datetime,
"created_ts": now,
"updated_ts": now,
}
)
with self._engine.connect() as conn:
result = conn.execute(stmt)
if result.rowcount == 1:
return result.inserted_primary_key[0]
else:
return None
def _query_and_decrypt(self, stmt: Select) -> List[Tuple[int, str, datetime.datetime, str]]:
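        """Execute `stmt` and decrypt each row's `log`, returning (id, poll_code, poll_ts, log) tuples."""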
ret = []
with self._engine.connect() as conn:
result = conn.execute(stmt)
rows = result.all()
for row in rows:
ret.append(
(
row["id"],
row["poll_code"],
row["poll_ts"],
self._encryption_provider.decrypt(row["log"]).decode(),
)
)
return ret
def get_logs(
self,
ids: List[int],
) -> List[Tuple[int, str, datetime.datetime, str]]:
stmt = self._poll_log_table.select().where(self._poll_log_table.c.id.in_(ids))
return self._query_and_decrypt(stmt)
    def update_log(self, id: int, poll_ts: datetime.datetime | None = None, log: str | None = None) -> bool:
now = datetime.datetime.now(tz=datetime.timezone.utc)
stmt = self._poll_log_table.update().where(self._poll_log_table.c.id == id).values(updated_ts=now)
if log is not None:
log_out = self._encryption_provider.encrypt(log.encode())
stmt = stmt.values(log=log_out)
if poll_ts is not None:
stmt = stmt.values(poll_ts=arrow.get(poll_ts).to("utc").datetime)
with self._engine.connect() as conn:
result = conn.execute(stmt)
if result.rowcount == 1:
return True
else:
return False
def get_poll_logs(
self,
poll_code: str | None = None,
date_from: datetime.datetime | None = None,
date_to: datetime.datetime | None = None,
max_rows: int | None = None,
skip: int | None = None,
) -> List[Tuple[int, str, datetime.datetime, str]]:
if not skip:
skip = 0
stmt = self._poll_log_table.select()
if poll_code:
stmt = stmt.where(self._poll_log_table.c.poll_code == poll_code)
if date_from:
stmt = stmt.where(self._poll_log_table.c.poll_ts >= arrow.get(date_from).to("utc").datetime)
if date_to:
stmt = stmt.where(self._poll_log_table.c.poll_ts <= arrow.get(date_to).to("utc").datetime)
if max_rows:
stmt = stmt.limit(max_rows + skip)
stmt = stmt.order_by(self._poll_log_table.c.poll_ts.desc())
return self._query_and_decrypt(stmt)[skip:]
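
# --- Usage sketch (illustrative only; not part of the original module). ---
# Assumptions: `EncryptionProdiver` (spelled this way in the repo's crypto
# module) accepts a `str` password and derives an encryption key from it, and
# the base directory is writable. The user id, poll code and payloads below
# are hypothetical examples.
if __name__ == "__main__":
    import json
    import tempfile

    with tempfile.TemporaryDirectory() as base:
        provider = DataProvider.get_data_provider("sqllite", {"base_path": base})
        # The first connection for a new user creates the lock file; a `str` password is required.
        conn = provider.get_connection("user-1", "s3cret-password")
        # Store and read back encrypted user data under a category.
        conn.store_user_data(json.dumps({"timezone": "UTC"}), category="config")
        print(conn.get_user_data("config"))
        # Append an encrypted poll log and fetch it back by id.
        row_id = conn.append_log("mood", datetime.datetime.now(tz=datetime.timezone.utc), json.dumps({"mood": 7}))
        if row_id is not None:
            print(conn.get_logs([row_id]))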
|
// x-pack/plugins/security/public/authentication/access_agreement/access_agreement_page.test.tsx
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License;
* you may not use this file except in compliance with the Elastic License.
*/
import React from 'react';
import ReactMarkdown from 'react-markdown';
import { EuiLoadingContent } from '@elastic/eui';
import { act } from '@testing-library/react';
import { mountWithIntl, nextTick } from 'test_utils/enzyme_helpers';
import { findTestSubject } from 'test_utils/find_test_subject';
import { coreMock } from '../../../../../../src/core/public/mocks';
import { AccessAgreementPage } from './access_agreement_page';
describe('AccessAgreementPage', () => {
beforeAll(() => {
Object.defineProperty(window, 'location', {
value: { href: 'http://some-host/bar', protocol: 'http' },
writable: true,
});
});
it('renders as expected when state is available', async () => {
const coreStartMock = coreMock.createStart();
coreStartMock.http.get.mockResolvedValue({ accessAgreement: 'This is [link](../link)' });
const wrapper = mountWithIntl(
<AccessAgreementPage
http={coreStartMock.http}
notifications={coreStartMock.notifications}
fatalErrors={coreStartMock.fatalErrors}
/>
);
expect(wrapper.exists(EuiLoadingContent)).toBe(true);
expect(wrapper.exists(ReactMarkdown)).toBe(false);
await act(async () => {
await nextTick();
wrapper.update();
});
expect(wrapper.find(ReactMarkdown)).toMatchSnapshot();
expect(wrapper.exists(EuiLoadingContent)).toBe(false);
expect(coreStartMock.http.get).toHaveBeenCalledTimes(1);
expect(coreStartMock.http.get).toHaveBeenCalledWith(
'/internal/security/access_agreement/state'
);
expect(coreStartMock.fatalErrors.add).not.toHaveBeenCalled();
});
it('fails when state is not available', async () => {
const coreStartMock = coreMock.createStart();
const error = Symbol();
coreStartMock.http.get.mockRejectedValue(error);
const wrapper = mountWithIntl(
<AccessAgreementPage
http={coreStartMock.http}
notifications={coreStartMock.notifications}
fatalErrors={coreStartMock.fatalErrors}
/>
);
await act(async () => {
await nextTick();
wrapper.update();
});
expect(coreStartMock.http.get).toHaveBeenCalledTimes(1);
expect(coreStartMock.http.get).toHaveBeenCalledWith(
'/internal/security/access_agreement/state'
);
expect(coreStartMock.fatalErrors.add).toHaveBeenCalledTimes(1);
expect(coreStartMock.fatalErrors.add).toHaveBeenCalledWith(error);
});
it('properly redirects after successful acknowledgement', async () => {
const coreStartMock = coreMock.createStart({ basePath: '/some-base-path' });
coreStartMock.http.get.mockResolvedValue({ accessAgreement: 'This is [link](../link)' });
coreStartMock.http.post.mockResolvedValue(undefined);
window.location.href = `https://some-host/security/access_agreement?next=${encodeURIComponent(
'/some-base-path/app/kibana#/home?_g=()'
)}`;
const wrapper = mountWithIntl(
<AccessAgreementPage
http={coreStartMock.http}
notifications={coreStartMock.notifications}
fatalErrors={coreStartMock.fatalErrors}
/>
);
await act(async () => {
await nextTick();
wrapper.update();
});
findTestSubject(wrapper, 'accessAgreementAcknowledge').simulate('click');
await act(async () => {
await nextTick();
});
expect(coreStartMock.http.post).toHaveBeenCalledTimes(1);
expect(coreStartMock.http.post).toHaveBeenCalledWith(
'/internal/security/access_agreement/acknowledge'
);
expect(window.location.href).toBe('/some-base-path/app/kibana#/home?_g=()');
expect(coreStartMock.notifications.toasts.addError).not.toHaveBeenCalled();
});
it('shows error toast if acknowledgement fails', async () => {
const currentURL = `https://some-host/login?next=${encodeURIComponent(
'/some-base-path/app/kibana#/home?_g=()'
)}`;
const failureReason = new Error('Oh no!');
const coreStartMock = coreMock.createStart({ basePath: '/some-base-path' });
coreStartMock.http.get.mockResolvedValue({ accessAgreement: 'This is [link](../link)' });
coreStartMock.http.post.mockRejectedValue(failureReason);
window.location.href = currentURL;
const wrapper = mountWithIntl(
<AccessAgreementPage
http={coreStartMock.http}
notifications={coreStartMock.notifications}
fatalErrors={coreStartMock.fatalErrors}
/>
);
await act(async () => {
await nextTick();
wrapper.update();
});
findTestSubject(wrapper, 'accessAgreementAcknowledge').simulate('click');
await act(async () => {
await nextTick();
});
expect(coreStartMock.http.post).toHaveBeenCalledTimes(1);
expect(coreStartMock.http.post).toHaveBeenCalledWith(
'/internal/security/access_agreement/acknowledge'
);
expect(window.location.href).toBe(currentURL);
expect(coreStartMock.notifications.toasts.addError).toHaveBeenCalledWith(failureReason, {
title: 'Could not acknowledge access agreement.',
});
});
});
|
export * from '@real-system/select';
|
Adaptation to Extreme Rainfall with Open Urban Drainage System: An Integrated Hydrological Cost-Benefit Analysis

This paper presents a cross-disciplinary framework for assessment of climate change adaptation to increased precipitation extremes considering pluvial flood risk as well as additional environmental services provided by some of the adaptation options. The ability of adaptation alternatives to cope with extreme rainfalls is evaluated using a quantitative flood risk approach based on urban inundation modeling and socio-economic analysis of corresponding costs and benefits. A hedonic valuation model is applied to capture the local economic gains or losses from more water bodies in green areas. The framework was applied to the northern part of the city of Aarhus, Denmark. We investigated four adaptation strategies that encompassed laissez-faire, larger sewer pipes, local infiltration units, and open drainage systems in the urban green structure. We found that when taking into account environmental amenity effects, an integration of open drainage basins in urban recreational areas is likely the best adaptation strategy, followed by the pipe enlargement and local infiltration strategies. All three were improvements compared to the fourth strategy of no measures taken.

Introduction

While climate change predictions are inherently uncertain, the predictions of future changes in precipitation patterns seem fairly robust for Northern Europe (van der Linden and Mitchell 2009). The anticipated climate change will affect and increase precipitation extremes, leading to an increase in design intensities of at least 20 % (Madsen and others 2009; Arnbjerg-Nielsen 2012). This poses a challenge to urban drainage design, as future drainage systems will have to deal with increased frequency and volume of storm water flows. As a result, the urban drainage capacity needs to be significantly increased in many parts of Northern Europe, including the case area in Denmark addressed in this study (Arnbjerg-Nielsen and Fleischer 2009). There are, however, increasing concerns that expanding the underground pipe system is not a sustainable long-term solution for climate adaptation, and that attractive alternatives exist (Roy and others 2008; Zevenbergen and others 2008; Wong and Eadie 2000). There is increasing acknowledgment of the potential of decentralized drainage systems based on local treatment, attenuation, re-use, retention, and infiltration of precipitation runoff (Ashley and others 2007; Roy and others 2008; Stahre 2006). Depending on design, such decentralized solutions may promote a more sustainable development by adding to esthetic, social, and environmental values in the urban area. In many respects, a decentralized system can substitute or be integrated into the conventional sewer system. If carefully planned, a decentralized system can be a part of the green infrastructure in an urban area, thus meeting demands for both climate change adaptation and urban recreational services. The idea of decentralized drainage systems has been promoted through, and as part of, the idea of local community activism for climate change adaptation. The focus has been on small-scale systems which local property owners could implement on their own properties, typically by means of underground infiltration units. We will denote these systems local urban drainage systems (LUDS). A common characteristic of LUDS is that they do not impact the urban landscape in ways that provide additional recreational benefits.
In general, LUDS will go unnoticed by the public eye, and LUDS must develop into large-scale systems to have an impact on amenity value. As an alternative strategy to green roofs, water trenches, and rain gardens, one could consider transforming the urban landscape, e.g., by creating small lakes and green spaces. Appropriately designed, such large-scale open urban drainage systems could serve both as places of recreational experience and as significant temporary rainwater storage capacity during extreme rain events. We will name these large-scale systems open urban drainage systems (OUDS), as they are open to the air and to the general public and may provide a range of recreational services which the small-scale LUDS do not. The implementation of LUDS and OUDS is not straightforward. Decision-makers need tools to react to the challenges ahead in an economically rational manner. There have been many visionary demonstrations of the decentralized solutions, but only a few have come with appropriate technical and economic tools to underpin their efficiency (Marsalek and Chocat 2002; Stahre 2006; Wong and Eadie 2000). More efforts are needed to further study their effects on extreme events as well as their costs and benefits (Ashley and others 2007; Hellström and others 2000; Wong and Eadie 2000). Risk-based economic assessment is a fundamental method for climate adaptation assessment; however, the majority of such economic analyses remain in the form of traditional budget cost-benefit analysis (CBA), see, e.g., Gafni, which only accounts for the impacts in a hydrological context. In our study, the expansion of possible approaches to urban storm water management caused us to extend the CBA to include estimates of the welfare economic measures of non-market effects in the form of recreational effects from the proposed OUDS. We evaluate the performance of four distinct strategies to handle the expected changes in extreme rainfall events. The first is a baseline strategy, the laissez-faire strategy, which assumes that urban storm water is to be handled by existing infrastructure only. The second strategy, the business-as-usual (BAU) strategy (Baura 2006), assumes that increased drainage capacity is obtained by means of expansion of sewer pipes and concrete rainwater basins when necessary. The third strategy, the infiltration strategy, builds on a LUDS approach where property owners implement rainwater trenches in their gardens. The LUDS will infiltrate rainwater on a day-to-day basis and will serve as temporary storage capacity during larger rainfall events. The fourth strategy is the OUDS, which exploits the existing green spaces and implements lakes that will temporarily allow for a massive influx of rainwater during a rain event. In short, such OUDS solutions are essentially rainwater basins integrated in pleasant green areas, which provide additional recreational benefits within the urban landscape. The value of the additional recreational amenities from the potential OUDS is estimated using hedonic house price valuation, capturing the value for the surrounding neighborhood. When implementing this strategy in our case study area, it is necessary to convert some private properties into green spaces to provide room for the OUDS. This implies additional costs for obtaining the benefits.
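Formally, the comparison carried out below can be summarized in a single expression (a sketch; the symbols $B_t$, $A_t$, $C_t$, $r$, and $T$ are introduced here for illustration and are not taken from the cited frameworks): with $B_t$ the avoided flood damages, $A_t$ the amenity benefits, $C_t$ the costs in year $t$, $r$ the discount rate, and $T$ the planning horizon,

$$\mathrm{NPV} = \sum_{t=0}^{T} \frac{B_t + A_t - C_t}{(1+r)^t},$$

where $A_t = 0$ in the conventional, purely hydrological CBA, and $A_t > 0$ only for strategies, such as OUDS, that change the recreational quality of the urban landscape.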
To evaluate the performance of the four strategies, we established a cross-disciplinary model which integrated techniques of risk assessment with flood inundation modeling, climate change, environmental evaluation tools, and socio-economic tools to uncover the costs and benefits associated with the strategies. A budget-oriented CBA approach is insufficient as a decision-making tool, as it will be blind to the potential additional non-market services (negative and positive) provided by the urban water infrastructure. Methods The general procedure of the cross-disciplinary framework is shown in Fig. 1. It contains a comprehensive urban inundation model and several detailed economic models. The adaptation scheme describes the anticipated climate change impacts in an area as well as the planned adaptation alternatives. The flood risk analysis is performed on the basis of a flood risk assessment framework estimating both hazard and vulnerability characteristics of the area under the investigated adaptation strategy. Economic valuation of risk reduction is assessed using a step-by-step approach to aggregate the gross benefits and costs of the adaptation strategy in the context of risk reduction. The methodological background of the flood risk analysis and the step-by-step approach is a coherent economic pluvial flood risk assessment framework for evaluation of climate change adaptation options in a hydrological context, developed by Zhou and others. Finally, the environmental economic analysis applies a hedonic valuation approach to capture at least a substantial part of the value of externalities related to the urban water infrastructure. Current State and Development of Urban Infrastructure and City Planning With increasing recognition of climate impacts on urban flood risk, there is a strong need to adapt urban infrastructure to reduce the substantial economic losses from extreme climatic events. When planning a climate change adaptation scheme, several infrastructure development scenarios generally need to be constructed and assessed. A comparative cost-benefit assessment is often necessary to provide decision-makers with a firm basis for selecting the appropriate adaptation solution. Therefore, each scenario will be analyzed through the cross-disciplinary framework to compare their performance in terms of costs and benefits. Flood Risk Analysis and Integration in CBA Flood risk analysis is the fundamental procedure for climate adaptation assessment. To assess the risk level of flooding in an area, an analysis of hazards and vulnerabilities is required. Hazards describe the extreme climatic loadings, such as a range of occurrence probabilities for different flood events and the extent and depth of these floods. In general, each occurrence probability is described by the equivalent return period, which is a statistical measure of the average recurrence interval of an extreme climatic loading (Haynes and others 2008). Vulnerabilities describe the spatial distribution of groups and properties susceptible to flooding and the potential adverse effects caused by exposure of these vulnerabilities to the hazards, e.g., the number of houses flooded or the number of people exposed for a given loading. The flood risk posed by extreme rain events was simulated using a comprehensive 1D-2D coupled urban inundation model. Such a model can simulate one-dimensional pipe flow underground and two-dimensional surface flow patterns.
The pipe flow is simulated by the 1D sewer model and the surface flow is simulated by the 2D overland flow model. There are a number of connections between the two models (e.g., manholes, open channels) allowing water to be exchanged dynamically (Domingo and others 2010; Mark and others 2004; Mike By DHI 2011). Runoff from built-up areas due to precipitation is first collected through subcatchments and generated in the 1D sewer model. As flow increases, water can flow out to the surface through the connections. Depending on the flow conditions, water can also flow back into the sewer system during the modeling process. Input data in the simulation include a description of the rainfall, models of the drainage system, a digital elevation model (DEM), and parameter descriptions for water exchange between the 1D and 2D simulations. The resulting outcomes are a range of flood hazard maps that show the locations of inundation and the simulated maximum water depths for a range of return periods covering the time period during which the strategies are evaluated. In the vulnerability analysis, mainly physical impacts were investigated, such as damage to houses, basements, and roads. Some intangible losses were taken into account, including traffic delay, pollution of recreational sites, and health impacts. With a spatial distribution of the land use and socio-economic data of an area, we used a "threshold principle" to identify the affected damage categories in a GIS-based risk model based on the simulated inundation depth maps from the hazard analysis. Such a threshold principle adopts a binary approach, "flooded or not flooded", due to the lack of sufficient information on stage-depth-damage functions (Kubal and others 2009; Zhou and others 2012). As a result, the damage was identified as a result of exposure of vulnerable properties to the hazards and was modeled depending only on whether the inundation depth exceeds the threshold or not. The threshold level differs between damage categories, and uniform unit costs are assigned to the flooded units when the water depth rises above their critical thresholds. Further details on damage categories, threshold levels, and costs are provided in Zhou and Arnbjerg-Nielsen (submitted). Finally, the damage costs were estimated for the different flood events by multiplying the affected units by the corresponding unit costs. The final outcome was expressed in terms of expected annual damage (EAD) as a measure of the flood risk level of an area. (Fig. 1 The stepwise procedure of the cross-disciplinary framework for evaluating the alternative adaptation strategies.) The flood risk analysis and damage assessment were integrated into a CBA, assessing the performance of each alternative adaptation strategy in the form of net present value, using a discount rate of 3 % (Pearce and others 2006). We adjusted the actual design of each adaptation strategy in the case area in a heuristic manner to maximize the resulting cost-benefit measure of each. The costs in the CBA included the investment expenses of a planned adaptation in this study, e.g., infrastructure construction, and the gross benefits were calculated as saved damage costs by means of EADs from the risk assessment to account for the flood frequency and damage estimation. Environmental Economic Analysis: Hedonic House Price Valuation We used the hedonic house price valuation method to estimate the marginal willingness to pay for proximity to urban green spaces of various types.
Previous studies on hedonic house price valuation have found that amenity services provided by green spaces have clear impacts on property prices in nearby residential areas. Attributes such as tree cover, maintenance, and management have been found to have distinct property price signals, which reflect the underlying preferences for the different attributes within the same general environmental good (Anthon and others 2005; Bark and others 2009; Jiao and Liu 2010; Mansfield and others 2005). Urban green spaces are not a uniform amenity. Accessibility, size, and the presence of a lake and/or tree cover provide different recreational opportunities within the urban green spaces. In the hedonic valuation analysis here, we distinguish between these categories as found empirically relevant, cf. below. The Theoretical Basis of the Hedonic Valuation Method The theoretical foundation of the hedonic valuation method was developed, among others and in particular, by Rosen, and further developed by, e.g., Palmquist (1992, 2005). We refer the reader to these and other references for the details, but here it suffices to explain that the basic idea of the method is that in equilibrium, the price P_n of any given house n can be modeled as a function of a vector z_n that includes all K house characteristics z_nk. The hedonic price function may be formulated as P_n = f(z_n; H), where H is a set of parameters related to the characteristics and specific to the housing market considered. Note that the characteristics may also include environmental attributes and values obtained by ownership of the house, in this context proximity and access to urban green areas. Assuming weak separability with respect to the parameters of interest ensures that the marginal rate of substitution between any two characteristics is independent of the level of all other characteristics. With that assumption in place, the implicit price of a house characteristic z_nk is a measure of the marginal willingness to pay, MWTP = ∂P_n/∂z_nk, for this house characteristic (Palmquist 1992). This allows us to estimate the value of a small change in the environmental good. The hedonic price function only provides information on one point on the households' demand function with respect to the environmental good in question, not the demand schedule for that good. Nevertheless, it is the most reported result in the hedonic literature (Palmquist 2005). However, if a policy brings about a non-marginal change in the environmental amenity in focus, it may likely result in a shift of the hedonic equilibrium due to the implied increase in supply, and the hedonic price function, estimated before the change in amenity supply, will not be able to accurately predict the welfare change in the new equilibrium. However, Bartik demonstrated that an ex-ante-estimated hedonic price function can be used to predict the welfare change of a non-marginal localized amenity change, as this is unlikely to affect the equilibrium in the entire housing market. Too few properties would be affected, which would leave the hedonic price function stable. The interpretation of a non-marginal localized amenity change is therefore similar to that of a marginal non-localized amenity change, and the ex-ante house price function can be used for reliable estimates of the welfare effect of the amenity change. A final comment here is needed on the fact that the hedonic method by construction can only measure values as perceived by house owners.
There may be other users of recreational areas such as those implied by OUDS, who obtain a welfare gain or loss. We briefly discuss this aspect below. The Econometric Methods The functional form of the hedonic house price function is not prescribed by theory. A simple semi-log functional form of the hedonic price function is chosen based on the findings of Cropper and others. Other functional forms were investigated and largely resulted in the same patterns. The house price function was estimated using four different models. One was a simple non-spatial OLS estimation, whereas the three other models contained a spatial autoregressive error term which corrects for the presence of spatial autocorrelation. Due to problems of endogeneity, the spatial models are estimated using maximum likelihood (ML) and the GMM estimator (Kelejian and Prucha 2010). The spatial econometric model follows Anselin's original definition of the spatial error model. It includes a spatial autoregressive error term which corrects for spatial autocorrelation. The specific spatial error model that we arrived at and applied in the valuation can be written as y = Zβ + ε, with ε = λWε + u. Here y is the price of the nth house, which is a function of the vector Z consisting of several structural, neighborhood, and environmental variables not in focus here. Several variables and transformations of these were evaluated to find a set that performed well and enabled us to capture the benefits of various types of green areas and the presence of water in these. It was found that the group of green areas that contained features such as lakes and trees could be aggregated into one. The impacts of proximity to these green areas as well as the impact of their size were captured in the hedonic price function, with the proximity to the nearest green area measured in beeline distance r_access (in 100 m) and size measured in hectares. A second group of urban green spaces was identified as areas without trees or lakes, i.e., typically open grass areas with no other features. The impact of these on the price of nearby properties was captured using the measure r_negative, which is the beeline distance to the nearest such urban green space. It was found that a transformation of this distance as a squared inverse provided the best model fit. This transformation depicts a sharp decline in spatial effect: only the very close neighbors were affected by this second group of green spaces. The inverse distance is also used in other studies, e.g., Anthon and others. In addition, the model contained a term which describes the value of proximity and access to lakes, lake_n. This accessibility measure was defined as the natural log of the beeline distance to the nearest lake. Finally, we allowed for spatial autocorrelation in the error term ε. W is an M × M spatial weight matrix of autocorrelation in errors, and u is assumed i.i.d. The spatial weight matrix W defines the extent of the spatial neighborhood effect at each location. The spatial autoregressive error term in the spatial error model can be understood as a correction term for omitted variables which are shared by the local neighborhood. Area Description The analysis covered two survey areas: an area for the CBA of climate adaptations and an area for estimating the hedonic price function applied in the CBA. The CBA area is restricted to the urban catchment of Risskov, located in the northern part of the center of Aarhus city (see Fig. 2).
Risskov is one of the wealthiest residential areas in Aarhus with high property values. The catchment size is about 377.3 ha. Commercial and industrial activities are marginal in the area. Risskov has several large green spaces and therefore has a great potential for decentralized drainage constructions. The mean annual precipitation is about 650 mm in Risskov and the highest elevation is 70 m above sea level. A separate sewer system conveys storm water from west to the outlets along the eastern coastline. The region has experienced a few precipitation extremes in recent years, e.g., the extreme rain event on May 3, 2005 with around 50 mm rain in 140 min, and the event on August 1, 2006 with around 56.2 mm in 266 min. The area that formed the basis for estimating the hedonic price function covered the entire city of Aarhus. The location of the green spaces is shown in Fig. 2. It is seen that green space is widespread throughout the city of Aarhus. Less than 25 % of all properties in Aarhus are located more than 500 m from the nearest green space. The size of green spaces included in the valuation varies between 1 and 741 ha with a mean of 9.5 ha and a standard deviation of 48 ha. Furthermore, the hedonic valuation involves 12,339 properties sold between 2000 and April 2010. Apartments are not considered in this study as only few apartments within the area would be affected by the location of OUDS and/or new green spaces. In addition, we consider that apartments are a separate housing market, and it would lead to bias if we included them in this analysis. Due to the long planning horizon in this study, potential changes in city environment (e.g., population growth, socio-economic development) are important to include in the analysis. However, the residential catchment is relatively small and well developed and it is, therefore, assumed that there will be no dramatic changes in the city environment in the foreseeable future, see Table 1. Rainfall Input and Socio-Economic Data for Flooding Loss When analyzing runoff from individual rainfall events, the internal spatial and temporal characteristics of precipitation have a large impact on the maximum discharges and antecedent conditions may also be important (Arnbjerg-Nielsen and Harremos 1996;Segond and others 2007). Therefore, the modeling software used to calculate the inundations accepts rainfall input with high spatio-temporal resolution. However, when assessing the average properties of runoff from precipitation extremes from urban catchments simple point estimates of intensity-duration-frequency remains a state-of-the-art approach as indicated by e.g., Arnbjerg-Nielsen and Harremos and Willems and others. The description adopted is, therefore, to use Chicago design storms (CDS) as input rainfall to urban inundation modeling. It is a synthetic rain event constructed to represent a loading of sewer system that corresponds to a prescribed return period for the entire urban catchment. The CDS is estimated based on regional intensity-duration-frequency relationships with inputs of rainfall variables, such as the mean annual precipitation, rainfall location and duration, and return period (Madsen and others 2009). The key assumption of using CDS is that antecedent conditions of the catchment play a minor role in the calculated extend of the floods for extreme precipitation, see Table 1. We have applied CDS rainfall of return periods of 2, 10, 50, and 100 years for hazard map simulation. 
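As a minimal illustration of how such design intensities can be adjusted for climate change, the sketch below scales a design intensity by a climate factor interpolated between the Danish practice values cited in the next paragraph; the function and its log-linear interpolation scheme are our illustrative assumptions, not the method used by the cited software or guidelines.

// Illustrative sketch only: interpolates a climate factor log-linearly in return
// period T (years) between the Danish practice values 1.20 (T=2), 1.30 (T=10),
// and 1.40 (T=100) cited in the following paragraph.
function climateFactor(T: number): number {
  const pts = [{ T: 2, f: 1.2 }, { T: 10, f: 1.3 }, { T: 100, f: 1.4 }];
  if (T <= pts[0].T) return pts[0].f;
  if (T >= pts[2].T) return pts[2].f;
  const [a, b] = T <= pts[1].T ? [pts[0], pts[1]] : [pts[1], pts[2]];
  const w = (Math.log(T) - Math.log(a.T)) / (Math.log(b.T) - Math.log(a.T));
  return a.f + w * (b.f - a.f);
}

// Example: a present-day 50-year design intensity i50 would be scaled to
// climateFactor(50) * i50, i.e., roughly 1.37 * i50 at the end of the horizon.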
The expected increase in precipitation extremes due to climate change and the associated uncertainties have been studied extensively in recent years, as reported by, e.g., Arnbjerg-Nielsen, Larsen and others, and Madsen and others. The current Danish urban drainage design practice suggests a 20, 30, and 40 % increase for the 2-, 10-, and 100-year frequency, respectively, over a 100-year planning horizon. These values are, therefore, used to assess the impacts of climate change in this study. This means that the estimated flood magnitude and frequency of the present return periods will increase in the future. For instance, the investigated 100-year event will become a 20-year event after 100 years. As a result, a significant increase in flood risk is expected due to climate change. The DEM used for inundation modeling is derived from LIDAR data and has a grid resolution of 2 m with a root mean square error of the elevation below 0.05 m. Socio-economic data (e.g., unit costs) together with the applied threshold criteria for flood damage estimation are derived from regional databases on climate adaptation studies, documented by Zhou and others. Strategies for Future Drainage Design The four adaptation strategies considered relevant to the catchment are described in the following subsections, including their assumptions and restrictions. Two types of decision criteria are applied, see Table 1. Decision criterion 1 proposes a uniform service level corresponding to no surcharge at the current 5-year event. This design criterion is prioritized in the case study to achieve an acceptable risk level of flooding in the area. However, in some cases, adaptation based on Decision criterion 1 may lead to very costly and uneconomical solutions because the adaptation strategy is not well suited to solve the problem in particular parts of the catchment. In such cases, Decision criterion 2, the economically optimal approach, is applied to ensure an efficient allocation of investment by weighing both costs and benefits. This means that although, for some of the areas, the minimum service level is not fulfilled, the actual flood damage is expected to be at a level acceptable to society. For a given catchment, critical areas with overloaded manholes are first identified based on inundation modeling. Adaptation measures are subsequently applied to the areas to comply with the service level. Meanwhile, the efficiency of the proposed measures is evaluated to assess the corresponding costs and benefits. Decision criterion 2 is adopted in case the proposed adaptation is not economically beneficial. As a result, the proposed measures for each strategy have been assessed based on a manual heuristic trial-and-error approach which optimizes the efficiency in terms of risk reduction. The Laissez-Faire Strategy: Climate Change Impacts in Risskov The laissez-faire strategy represents a situation in which no adaptation activity is initiated to cope with climate change impacts. Such a strategy may lead to increased costs of flooding in the future. In this study, it serves as a baseline for evaluating the efficiency of the other proposed adaptation scenarios. Table 1 Key assumptions and simplifications of the study: Future changes. Assumption: there will be little change in the city layout and socio-economic conditions; city development and population growth are not considered in this case study. Comment: future changes can be expected due to the long planning horizon; however, it is difficult to tell whether the city will be more vulnerable or resilient. The catchment is relatively small, well developed, and has not changed much over the last decades; the main land use is residential and changes are not foreseen in planning documents. Risk reduction (design criterion). Assumption: a combination of two types of decision criteria: D1, a uniform service level (5-year) based on the equity principle; D2, an economically optimal approach considering both costs and benefits. Comment: depending on topographical and land use conditions, adaptation based on D1 may in some cases be very costly and thus lead to uneconomical solutions; D2 is used to supplement D1 in such cases (Zhou and others 2012). Adaptation alternatives (model setup of infiltration). Assumption: the runoff is directly removed from the selected subcatchments by reducing the imperviousness; the infiltration capacity with regard to rainfall depth and duration, and the soil condition, are assumed to be constant over the entire catchment and sufficiently high to avoid spilling from local infiltration units to the drainage system/overland flow. Comment: the infiltration process is simplified due to a lack of data and advanced models for infiltration simulation. Adaptation alternatives (OUDS). Assumption: the geographic restrictions and legislative limitations of OUDS are not considered; the OUDS have negligible volumes of water at the time of large storms, implying that the entire volume is used to minimize floods while still containing sufficient volume on a day-to-day basis to provide the services provided by the natural systems used in the hedonic price analysis. Comment: it is beyond the scope of this paper to fully assess the complex dynamics of the OUDS, which would require a detailed ecological and hydrological model of the systems; such an analysis would most likely lead to the OUDS requiring more space and/or providing fewer services than the actual natural systems, so the calculated benefits may be an optimistic estimate compared to the other adaptation alternatives. Hedonic valuation (baseline scenario). Assumption: the hedonic valuation estimates of marginal values of additional green-blue spaces can be validly used to assess the value of additional space used for OUDS for the surrounding neighborhood. Comment: this is related to the scale of our scenario, which will induce a change in environmental amenities clearly marginal in relation to the overall supply of such areas in the housing market area underlying the hedonic function. Hedonic valuation (evaluation scope). Assumption: the hedonic method only accounts for costs and benefits reflected in how property prices, and therefore also property taxes, change with changes in, e.g., environmental variables; it is thus only an approximation of the possible social and environmental benefits. In this study, the additional benefits refer to the increase in property values and taxes due to the recreational design of the OUDS. Comment: the major economic benefit from the OUDS design at the neighborhood level is the increased property values and the resulting increase in property taxes; the hedonic method can capture at least a substantial part of such additional values. BAU: Pipe Enlargement Conventional handling of climate change impacts is based on a series of sewer solutions, including optimization of the transport capacity of existing sewers, implementing additional pipes or storage spaces, increasing existing pipe capacity, and so on. We applied pipe enlargement in this study as the BAU scenario to enhance existing sewer capacity for excess flows. This is done by replacing relevant pipelines with larger pipes, see Fig. 3a. The implementation of such a solution in inundation modeling is performed by increasing the pipe diameter of the relevant links in the 1D sewer model. Note that the pipe enlargement solution may potentially have minor impacts on receiving water quality, since the increased urban runoff contains more pollutants from roofs and roads. Additional end-of-pipe solutions may be needed to improve the water quality. The enlargement process may also influence local traffic conditions, including traffic inconvenience, road renovation, etc. However, it is difficult to take all of these impacts into account. In this study, we only assessed the direct impacts in the hydrological context. Local Infiltration This scenario aims at infiltration using water trenches, which have been increasingly sought and promoted in the literature and in demonstration projects (Wong and Eadie 2000; Stahre 2006). The solution slows down and attenuates water flows; however, it may have very limited effects on extreme rain events in some regions due to geological and spatial limitations. As shown in Fig. 3b, local infiltration was implemented in the form of infiltration trenches, with green coverings (e.g., grass, vegetation) on top of the sub-surface devices. However, such details cannot yet be modeled with the available program and models; a simplified approach (Table 1) is thus applied by reducing the imperviousness of selected subcatchments in the runoff component of the 1D model, as a representation of the disconnection of subcatchments and of water infiltrated into the ground. We assume that there is no additional effect due to this approach, even though concerns have been raised about rising groundwater, which in a worst-case assessment could cause widespread basement flooding and structural instability of many tangible assets (Roldin and others 2012). Water contamination from urban pollution has also been raised as a serious issue, which ultimately could result in contamination of groundwater and drinking water (Birch and others 2011). Furthermore, from a welfare economic point of view, there are no additional recreational benefits from local infiltration. This is due to the assumption that all infiltration trenches are implemented as invisible structures under existing green spaces (gardens) in Risskov. As a result, no marginal changes/benefits can be observed by local neighborhoods. Open Urban Drainage System (OUDS) Green spaces in the urban landscape provide amenity services to the surrounding neighborhood in the form of recreational opportunities. The concept of OUDS implies that such a facility is concealed within green recreational sites, which are designed to have the additional function of serving as a temporary detention sink for precipitation. Such a solution opens new aspects of urban drainage design concerning recreational amenities and multiple uses, see Fig. 3c. As a result, the economic performance in terms of cost-recovery may occur at different stages of the planning process compared with conventional solutions.
In the modeling, the OUDS solutions are constructed by creating local depressions/holes in the existing DEM to represent the basin location and size, see Table 1. The potential locations of the green features are first identified based on the inundation modeling. It can be noted that the OUDS solutions are mainly located on the pathway toward, or directly in, the potential flooding zones. The efficiencies of the proposed locations are subsequently evaluated using the flood risk assessment and economic analysis to estimate their net benefits. Priority is given to locations with higher benefits. In doing so, it is possible to achieve a reasonable optimization of OUDS locations based on a trial-and-error approach. Furthermore, the scenario was implemented with two subcategories in the model. This is because, to attain good performance in flood mitigation, some OUDS need to be located in private gardens or spaces. Such OUDS will mainly perform as rainwater basins in the area, while OUDS located in green spaces are assumed to be designed as lakes integrated in the urban landscape. These two settings differ from a socio-environmental point of view and will lead to different impacts on the economic assessment. The feasibility of achieving both the technical functionality of OUDS and the amenity value is not considered in this study, see Table 1. Such systems are studied in many regions of the world, and it remains an issue to ensure that the systems can in fact perform as well as natural systems in terms of continuous provision of positive environmental values and functions. Typical problems are drying out, eutrophication, overgrowing, and/or heavy maintenance requirements. The costs of, e.g., maintaining nutrient balances by removal of excessive plant growth, and installing and maintaining systems that may artificially add water in dry periods, are not included in the cost-benefit analyses presented. Assumptions and Simplifications in the Study Due to the complexity of the cross-disciplinary approach, several important assumptions were made to simplify the integrated analysis in this study, as summarized in Table 1. Overall, the assumptions seem reasonable, and indeed necessary to reach an evaluation of each of the strategies. Based on Table 1, it may appear that the benefits of the OUDS systems are somewhat exaggerated, which should be taken into consideration when comparing strategies. Since it is not possible to quantify the importance of this potential exaggeration in economic terms, we will discuss its importance in qualitative terms as part of the discussion and conclusion of the paper. Flood Risk Assessment The flood hazard maps indicating the current hazards are shown in Fig. 4. The calculated depths are the maximum water depths observed for each of the recurrence intervals indicated in the figure. A 5 × 5 m grid was applied for surface flood modeling to achieve a balance of computing time and accuracy. There is a severe overloading of the sewer system near the outlet in the north center, as marked in the hazard map of the 2-year event. Several local flood-prone areas were identified from the maps, as indicated in the hazard maps of the 100-year recurrence interval. To calculate the damage costs for individual rainfall events, the hazard maps were combined with the land use map in GIS to give a visualized overview of the potential damages in the area.
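A minimal sketch of the damage aggregation just described, using the binary threshold principle and EAD as the integral of event damage over exceedance probability; all category names, thresholds, and unit costs below are hypothetical placeholders, not the study's values.

// Hypothetical damage categories with depth thresholds (m) and unit costs (DKK).
const categories = [
  { name: "basement", threshold: 0.1, unitCost: 60000 },
  { name: "house", threshold: 0.25, unitCost: 180000 },
];

// Binary "flooded or not flooded": a unit counts fully once its simulated
// maximum inundation depth exceeds the category threshold.
function eventDamage(depths: number[], cat: { threshold: number; unitCost: number }): number {
  return depths.filter(d => d > cat.threshold).length * cat.unitCost;
}

// EAD: integrate event damage over exceedance probability p = 1/T with the
// trapezoidal rule, given damages simulated for a set of return periods T.
function ead(events: { T: number; damage: number }[]): number {
  const pts = events.map(e => ({ p: 1 / e.T, d: e.damage })).sort((x, y) => x.p - y.p);
  let total = 0;
  for (let i = 1; i < pts.length; i++) {
    total += 0.5 * (pts[i].d + pts[i - 1].d) * (pts[i].p - pts[i - 1].p);
  }
  return total;
}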
We assessed the number of flooded properties for each damage category on the basis of the GIS map and estimated the total costs as the sum of the individual costs of each damage category. The unit costs used for assessing the costs are identical to the ones used by Zhou and Arnbjerg-Nielsen (submitted). Hedonic Valuation The hedonic price functions included a large number of control variables that cover structural, neighborhood, and environmental characteristics of the property. Each property is geo-coded with its exact location, which enabled very accurate location-based variables describing the neighborhood and environmental characteristics of each property's surroundings. Data on property sales and structural characteristics of the property were obtained from the OIS database (Hansen and Skov-Petersen 2000). The location-based variables were constructed using GRASS (6.4) and ArcGIS (9.3). The GIS data are provided by the National Survey and Cadastre. The model was estimated in R using the spdep and sphet packages (Bivand and others 2011; Gianfranco 2010; R Development Core Team 2011). The parameter estimates of the variables are robust in terms of size (within the same order of magnitude) and significance over three models differing in their modeling of the error term only, see Table 2. The significance levels of the variables vary slightly between the models. The OLS model resulted in highly significant parameter estimates for all parameters of interest. The robust spatial error model has less significant variables, with the log(lake) variable only being significant at the 10 % level. The non-robust error model performs the poorest, with log(lake) being non-significant and r_negative being significant only at the 10 % level. The spatial variables in the OLS model are likely to capture some of the spatial autocorrelation which is not related to the variable itself, and hence the parameters may suffer from an omitted variable bias due to the assumption of an i.i.d. error term. It seems that especially r_negative and log(lake) are sensitive to spatial autocorrelation, being both more significant and having larger parameter values (though not significantly so) than the estimation results of the spatial error models revealed. Due to these observations, we decided to apply the results of the robust GMM model with spatial autocorrelation accounted for. We used a row-standardized 30-nearest-neighbors weight matrix W in the spatial error models, which proved sufficient to account for the autocorrelation revealed by global and local Moran's I tests on the residuals of the simple OLS model, as well as spatial correlogram analysis (Cliff and Ord 1981). The Lagrange multiplier tests for spatial error dependency and spatial lag dependency are both highly significant (Anselin 1988). The robust version of the test indicates that the spatial error model outperforms the spatial lag model. Heteroscedasticity is also present, which further supports the choice of the robust GMM estimator. The dependent variable in the models was the natural log of the house price. Thus, in the robust GMM model, we found that the marginal value of accessibility to the urban green areas which included lakes or tree cover or both decreased by 0.6 % of the property price for every additional 100 m of distance from such an area. The marginal value of an increase in the size of the nearest such urban green area was 0.01 % of the house price for every additional hectare.
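These magnitudes follow directly from the semi-log specification; as a brief worked sketch (the coefficient values shown merely restate the reported percentage effects and are not additional estimates):

$$\ln P = \cdots + \beta_{access}\, r_{access} + \beta_{lake} \ln(d_{lake}) + \cdots \;\Rightarrow\; \frac{\Delta P}{P} \approx \beta_{access}\,\Delta r_{access} \quad\text{and}\quad \frac{\Delta P}{P} \approx \beta_{lake}\,\frac{\Delta d_{lake}}{d_{lake}},$$

so $\beta_{access} \approx -0.006$ per 100 m reproduces the reported 0.6 % price decrease per 100 m of distance, while the log-distance lake term has an elasticity interpretation.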
The urban green areas not including lakes or tree cover affected the very nearby properties negatively, as seen from the parameter for r_negative. On the other hand, access to nearby lakes, including those not integrated in a green area, entered the house price function through the log of distance, which means that a 1 % increase in the distance to a lake will reduce the property value by 1.7 %. While the parameters all have the expected sign and are significant, it should be stressed that the effects they imply are in fact quite small compared with those of, e.g., proximity to forests and similar effects often found in other hedonic studies (e.g., Anthon and others 2005). Nevertheless, because of the high aggregate value of the properties in the areas, the effects of enhanced environmental amenities may still be significant. Laissez-Faire Strategy Owing to climate change, the EAD was estimated to increase from 8.3 to 17.8 MDKK (10^6 Danish kroner) from year 2011 to 2100 if discounting was ignored. That is to say, the total added damage costs due to the anticipated climate change will be 92.7 MDKK in present value if no adaptation is planned for the area. This value can be considered an indication of the level of investment allowed for adaptation from a cost-benefit point of view. In addition, it is noteworthy that the estimated value only reflects the expected damage on an average level, and the real costs may be several times higher in the worst case. Early action can be recommended to tackle climate change. Pipe Enlargement To achieve an acceptable risk reduction by pipe enlargement, in total 2,636 m of pipe need to be enlarged, see Fig. 5. The investment unit costs increase as a function of pipe diameter, with 7,000 DKK/m as an average estimate. The total investment costs for pipe enlargement were calculated to be 24.1 MDKK. It is a one-time payment invested evenly over the first five years of the planning horizon. Moreover, it can be noted that an extra open basin is included for both pipe enlargement and infiltration. This is because the extra water from the overloaded sewer in the north center (Fig. 4, 2-year event) requires intensive adaptation measures in the area if handled by pipe enlargement or infiltration individually. With pipe enlargement, the original EAD in 2100 was reduced from 17.8 to 8.4 MDKK per year. The calculated net benefits of the solution are 147 MDKK over a 100-year planning horizon. Nevertheless, it is noteworthy that adaptation by pipe enlargement requires dramatic changes in the sewer system if lower flood risks are to be achieved. Infiltration It is estimated that a large part of Risskov will need to be disconnected from the sewerage system when applying this strategy. In total, 14.53 ha of impervious area had to be disconnected, corresponding to the roof area of 727 buildings. The areas to be disconnected should be upstream of the inundated areas to be effective in minimizing the flood hazard. The locations of these buildings are shown in Fig. 5. The unit cost for implementing the infiltration trenches is 250 DKK/m² plus 60,000 DKK per property owner. This estimate is based on empirical data from the utility company Bornholm Vand A/S, which in cooperation with local citizens decoupled several streets in the town of Allinge. Using this strategy, the calculated total investment costs are 87.1 MDKK, accounting for two later reinvestments to allow for the short technical lifetime of infiltration devices. The estimated net benefits are around 111 MDKK.
However, we stress that the implemented infiltration is an optimistic scenario, as it does not consider constraints due to low-permeability soils and high groundwater levels in the area. The hydrological response of the infiltration process is also simplified. The practical performance may be much less efficient in reducing the hydrological loading on the sewer system. Open Urban Drainage System (OUDS) Based on the model simulation, 49,558 m³ of storage volume will be required in this strategy; for detailed information see Fig. 5. Unit costs of 745 DKK/m³ are used for estimation of the investment costs (PH-Consult 2006). We divided the OUDS strategy into two subscenarios: OUDS 1 and OUDS 2. In the OUDS 1 scenario, we assume that basins located on private properties will take up parts of the garden of the property. In this scenario, three lakes are located within existing green spaces. Two of the green spaces, initially without lakes or tree cover, had negative impacts on nearby properties. However, in this scenario, their category is changed from being a negative green space to a positive green space in the hedonic valuation after obtaining the lakes. In addition, the five rainwater basins located on private properties took up garden space. In total, 35 properties lose parts of their gardens. In the OUDS 2 scenario, we assume that properties affected by rainwater basins in OUDS 1 are converted into green spaces with smaller permanent lakes. Two of the affected areas are too small to be considered green spaces and will, therefore, still be categorized as rainwater basins, which were found to have no hedonic effects. In total, six new small lakes and three new positive green spaces are located within the survey area, and 35 single-family houses are removed along with their entire property. In the context of flood reduction, the EAD decreased to 6.3 MDKK per year, with implementation costs of 54.5 MDKK in present value. The estimated net benefits from the conventional CBA are 157 MDKK. The welfare changes of the two OUDS scenarios were further calculated based on the results of the robust spatial error model of the hedonic price function. The welfare estimates used properties from all of Aarhus municipality (see Table 3). OUDS 1 provides a potential welfare increase of 223.1 MDKK and OUDS 2 provides a potential welfare increase of 154.0 MDKK, which correspond to 1.48 and 1.03 % increases in the value of affected properties, respectively. In total, 3,450 properties would be affected by the changes in OUDS 1 and OUDS 2. The scale of the change in the urban landscape and the expected welfare change are of a magnitude that can be considered localized in relation to the overall Aarhus housing market, assuming the OUDS solutions are only implemented in Risskov. Thus, the estimated welfare changes of the OUDS should be considered an upper-bound measure, not likely to be valid as a central estimate if OUDS are applied widely across the city. On the other hand, the hedonic method only includes the benefit of these areas as experienced by the local home owners affected directly. There may, in some cases, be effects also for people further away. In the present case, however, the spatial extent of the green areas established is small compared to the overall supply of larger green recreational areas in and around Aarhus. The environmental amenity changes in the two scenarios would be capitalized in the property market if implemented.
In Denmark, part of the property tax is collected as a percentage of the property value. In this situation, part of the resulting welfare change will not be reflected in the house price change, but instead in increasing property taxes acquired by the taxation authorities. Thus, not accounting for property tax will underestimate the true welfare change (Anthon and others 2005). In Aarhus municipality, the property tax is 2.458 % of the property value. The additional value acquired by the municipality over a 100-year period with a discount rate of 3 % will sum to 177 MDKK for OUDS 1 and 122 MDKK for OUDS 2. Summary The estimated cost reductions in the investigated rainfall events and the EAD under climate change impacts are summarized in Table 4. The calculated NPVs of the four strategies based on the traditional CBA and the extended CBA including the hedonic estimation are shown in Table 4 as well. It was found that all investigated adaptation strategies are economically beneficial relative to the laissez-faire alternative. The largest gain was found for the OUDS solutions in this area, and there is a considerable increase in estimated NPV when taking into account the additional environmental amenity benefits that the OUDS imply. (Table 4 notes: the numbers in italics show the added economic benefits due to increased property values in the area and the resulting increase in property taxes; the investment costs are calculated as NPV with a discount rate of 3 % for a 100-year horizon; NPV1 and NPV2 denote the calculated net benefits from the conventional and extended CBA, respectively; three investments were assumed needed over the planning horizon for the infiltration strategy.) Note that this happens in spite of relatively small, but significant, increases in property prices that will occur from, e.g., establishing a new urban green area with a lake or improving existing areas with lakes. Discussion This study compares a laissez-faire strategy of inaction, a traditional business-as-usual enlarged drainage solution, local infiltration solutions, and OUDSs for climate change adaptation. The results indicated that the conventional drainage solution (e.g., pipe enlargement) was cost-efficient in terms of flood risk reduction, yet incapable of integrating other positive perspectives in the drainage facilities, such as amenity values. Rebuilding the pipe system may be relevant in areas where small-scale renovation is required to improve the runoff conditions, or in areas where no open space is available for decentralized solutions. Our results were more supportive of OUDS, which can be considered a significant supplement to, or replacement of, the traditional solutions owing to its positive impacts on recreational and environmental aspects in an urban context. Especially under the influence of climate change and city development, such approaches may prevail over the traditional solutions, since OUDS can be better integrated in the urban landscape to handle excess surface water as well as strengthen the efficiency of multiple land use. Certainly, we also stress that the open drainage solution may not be as relevant and beneficial in areas where access to amenities and water is already widespread (and the marginal value of more amenities is therefore low), or in areas where the costs of space for such a system (e.g., in terms of land) are much higher than those of traditional solutions. In some cases, due to technical reasons (e.g., pollution control, safety issues, and legal constraints), an open drainage solution may not be the appropriate way of adaptation either.
However, it may very well be that in many cases OUDS has the capacity to integrate different recreational activities in the drainage facilities, which is especially relevant in areas lacking blue-green features in a large-scale neighborhood, or areas where multifunctional drainage solutions are required. The assumptions behind the analysis are likely to favor the OUDS solutions, in the sense that the systems will probably require more space to provide as much value as natural systems (increasing space requirements and land and construction costs), or will alternatively be less attractive than the areas the estimates are based on and hence yield less value to the neighborhood (reducing the welfare benefit). However, the numbers are quite unambiguous in the sense that even without taking the recreational gain into account, the OUDS systems are attractive from an economic point of view as a means of flood risk mitigation, and, even without considering the flood risk mitigation, the OUDS systems are economically attractive because of the welfare gain from amenity values. The uncertainties involved in the methods presented in this study are substantial. The 1D-2D coupled model is a "compromise" modeling approach to achieve relatively accurate representations of overland flow dynamics with reasonable, yet extensive, amounts of data and computational requirements. Such an approach involves uncertainties associated with input data, system setup, model parameters, and assumptions (Domingo and others 2010; Freni and others 2010; Koivumaki and others 2010; Timbe and Willems 2004). The setup of the applied adaptation options has been simplified in terms of both modeling simulation and economic assessment, as discussed in Table 1. Nevertheless, the results seem unequivocal in the sense that the differences in net present value between the analyzed strategies are substantial. The results highlight the difficulties in setting up the proper framework for the analysis and how the results should be interpreted. A traditional framing approach would be to consider only the urban drainage sector in the analysis, leading to the result that pipe enlargements and open basins are equally suitable as adaptation measures against increased risk of flooding. When framing the analysis to include the potential benefits of the OUDS, however, this solution turns out to be very likely the best of the options considered. However, the value of the added recreational benefits is estimated under the assumption that only this part of the city will implement OUDS, and hence the change in environmental amenities is marginal in relation to the overall housing market captured in the hedonic function. If the entire city chooses to implement OUDS, the benefits are likely to be smaller than those estimated here, and the estimates should, therefore, also for this reason be considered an upper bound. This is because a widespread implementation of OUDS may affect the housing market's marginal pricing of the environmental benefits offered by OUDS, as the supply change is no longer marginal. Thus, caution should be taken if one wishes to upscale the results presented here. Other environmental costs arising from the different scenarios have not been considered in the present analyses, and little actual information is available that can be linked to the adaptation scenarios presented.
The amount of pollutants present in the different fractions of water will vary between the scenarios and will have very different fates across the proposed scenarios, and hence will present different threats to groundwater quality, the environmental status of recipients, etc. European legislation tends to put high emphasis on surface water, which would tend to favor infiltration and OUDS. However, Danish legislation puts high emphasis on groundwater protection, which would tend to favor traditional sewerage expansion. Thus, adding these additional environmental concerns is likely to draw conclusions in different directions and complicate the overall choice of adaptation action. Conclusions Our results indicate that there is a large potential for studying and implementing OUDS as a means to both mitigate the increased risk of flooding in urban areas and enhance the recreational value of local neighborhoods. The results are based on cross-disciplinary methods where risk assessment of urban floods covers the topics of flood inundation modeling, climate change, environmental evaluation tools, and socio-economic tools, to reveal the costs and benefits associated with our four different climate adaptation strategies. A budget-oriented socio-economic analysis was found to be a sub-optimal approach for decision making, as it will be blind to the potential additional services provided by non-market goods linked with some adaptation scenarios. We find that in the case area, a climate adaptation strategy based on OUDS is better than the other strategies, given the framing of the problem, while a strategy of laissez-faire is the least attractive. Our results indicate that the conceptual framework around the decentralized sewerage system needs to be rethought. Retaining the water on individual properties is a more expensive solution than pipe enlargement and does not provide the recreational benefits of open systems with permanent water bodies, which require that neighborhoods have a joint drainage system. The approach presented in this study is especially suitable for complex evaluations where not only the traditional framing of urban drainage is used, but also a broader perspective is needed. Many studies have dealt with the recreational values of making urban drainage more visible. These studies have discussed the issue in a qualitative manner, but without putting the recreational value on the same monetary scale as traditional engineering methods usually do. This method bridges the gap between the different scales used by engineers, landscape architects, and urban planners and will hopefully, therefore, be a valuable means of choosing between different adaptation options within urban drainage in fully developed cities. |
Studies on lymphatic absorption of 1',2'-(3H)-coenzyme Q10 in rats. The intestinal absorption of 1',2'-(3H)-coenzyme Q10 (3H-Q-10) was studied in rats with a cannulated thoracic duct, and the effect of surface-active agents on the lymphatic absorption of 3H-Q-10 was determined. The amount of radioactivity absorbed via the lymphatics during the first 48 hr was 1 % of the dose after oral administration of 3H-Q-10 dissolved in sesame oil or 20 mM sodium taurocholate, and 1.5 % of the dose when 3H-Q-10 was solubilized with HCO-60. Assuming that the radioactivity recovered from urine (0.481 %) and liver (0.004 %) came from radioactivity absorbed via the portal vein, the total amount of radioactivity absorbed via the portal vein and the lymphatics was approximately 2 % of the dose of 3H-Q-10 solubilized with HCO-60, and the main route of absorption of 3H-Q-10 was the lymphatics. A model for the lymphatic absorption of 3H-Q-10 was proposed through kinetic analysis of the data.
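(A quick consistency check of the stated total for the HCO-60 preparation: 1.5 % via the lymphatics plus 0.481 % in urine and 0.004 % in liver gives 1.5 + 0.481 + 0.004 = 1.985, i.e., approximately 2 % of the dose.)
|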
// Repository: kiroInn/excel-io
import Excel from "exceljs";
import moment from "moment";
import * as _ from "lodash";
// Value types a mapping entry can declare; fillData below dispatches on these.
export const CELL_VALUE_TYPE = {
IMAGE: "image",
DATE: "date",
STRING: "string",
SHEET: "sheet",
SHEET_CAPTURE: "sheet-capture",
VALUE: "value"
};
// Returns the sheet-name part of a "Sheet:Cell" reference (or the whole string if no colon).
function getCellSheet(mp: string): string {
  return mp.split(":").length === 2 ? mp.split(":")[0] : mp;
}
// Returns the cell-position part of a "Sheet:Cell" reference (or the whole string if no colon).
function getCellPosition(mp: string): string {
  return mp.split(":").length === 2 ? mp.split(":")[1] : mp;
}
// Collects the distinct sheet names referenced by the mapping values under `cellKey` ("from" or "to").
export function getUsingSheet(
mappings: Array<object> = [],
cellKey: string
): Array<string> {
const metaCells = _.flatten(
mappings.map(mp =>
_.get(mp, "values", []).map((value: object) => _.get(value, cellKey))
)
);
return Array.from(new Set(metaCells.map(getCellSheet)));
}
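// Illustrative example (hypothetical mapping data): for
//   mappings = [{ values: [{ from: "Data:B2", to: "Out:C3", type: "value" }] }]
// getUsingSheet(mappings, "from") returns ["Data"] and
// getUsingSheet(mappings, "to") returns ["Out"].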
// A mapping describes how values are copied from a source workbook into a target workbook.
interface Mapping {
  values: MappingValue[];
  templateName: string;
}
// A single copy instruction: `from` and `to` are "Sheet:Cell" references (or a sheet
// name/regex for the sheet-level types); `range` is only used for image placement.
interface MappingValue {
  from: string;
  to: string;
  type: string;
  range?: object | undefined;
}
// Copies cells, dates, images, and whole sheets from `from` into `to` according to `mapping.values`.
export function fillData(
from: Excel.Workbook,
to: Excel.Workbook,
mapping: Mapping
) {
const values = _.get(mapping, "values");
_.forEach(values, value => {
const type = _.get(value, "type");
// SHEET_CAPTURE: copy every source sheet whose name matches the `from` regex
// into a target sheet named after the first captured group.
if (type === CELL_VALUE_TYPE.SHEET_CAPTURE) {
const fromSheetNames: string[] = _.map(
_.get(from, "worksheets"),
sheet => sheet.name
);
const matchedSheets = _.filter(
fromSheetNames,
name => name && name.match(value.from)
);
_.each(matchedSheets, sheetName => {
sheetName = `${sheetName}`;
const fromSheet = from.getWorksheet(sheetName);
const toSheetName = _.get(sheetName.match(new RegExp(value.from)), 1, "");
let toSheet = to.getWorksheet(toSheetName);
if (!toSheet) {
toSheet = to.addWorksheet(toSheetName);
}
toSheet.model = Object.assign(fromSheet.model, {
mergeCells: _.get(fromSheet, 'model.merges'),
});
toSheet.name = toSheetName;
_.each(fromSheet.getImages(), image => {
const fromImageId = Number(_.get(image, "imageId"));
const imageId = to.addImage({
buffer: from.getImage(fromImageId).buffer,
extension: "png"
});
toSheet.addImage(imageId, {
tl: {
col: Number(_.get(image, "range.tl.col")),
row: Number(_.get(image, "range.tl.row"))
},
br: {
col: Number(_.get(image, "range.br.col")),
row: Number(_.get(image, "range.br.row"))
}
});
});
});
return true;
}
const fromSheet = from.getWorksheet(getCellSheet(_.get(value, "from")));
if (fromSheet) {
const toSheetName = getCellSheet(_.get(value, "to"));
let toSheet = to.getWorksheet(toSheetName);
if (!toSheet) {
toSheet = to.addWorksheet(toSheetName);
}
if (type === CELL_VALUE_TYPE.SHEET) {
toSheet.model = Object.assign(fromSheet.model, {
mergeCells: _.get(fromSheet, 'model.merges'),
});
toSheet.name = toSheetName;
_.each(fromSheet.getImages(), image => {
const fromImageId = Number(_.get(image, "imageId"));
const imageId = to.addImage({
buffer: from.getImage(fromImageId).buffer,
extension: "png"
});
toSheet.addImage(imageId, {
tl: {
col: Number(_.get(image, "range.tl.col")),
row: Number(_.get(image, "range.tl.row"))
},
br: {
col: Number(_.get(image, "range.br.col")),
row: Number(_.get(image, "range.br.row"))
}
});
});
} else if (type === CELL_VALUE_TYPE.IMAGE) {
const fromImageId = Number(
_.get(_.first(fromSheet.getImages()), "imageId")
);
const imageId = to.addImage({
buffer: from.getImage(fromImageId).buffer,
extension: "png"
});
toSheet.addImage(imageId, _.get(value, "range"));
} else if (type === CELL_VALUE_TYPE.DATE) {
const cellValue = fromSheet.getCell(
getCellPosition(_.get(value, "from"))
).value;
toSheet.getCell(getCellPosition(_.get(value, "to"))).value = moment(
_.get(cellValue, "result")
).format("YYYY-MM-DD");
} else {
toSheet.getCell(
getCellPosition(_.get(value, "to"))
).value = fromSheet.getCell(
getCellPosition(_.get(value, "from"))
).value;
}
}
});
return to;
}
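// Illustrative usage sketch (the file names and mapping below are examples,
// not part of this module):
//
// const from = new Excel.Workbook();
// await from.xlsx.readFile("source.xlsx");
// const to = new Excel.Workbook();
// fillData(from, to, {
//   templateName: "report",
//   values: [
//     { from: "Data:A1", to: "Report:B2", type: CELL_VALUE_TYPE.VALUE },
//     { from: "Data:A2", to: "Report:B3", type: CELL_VALUE_TYPE.DATE },
//     { from: "Charts", to: "Charts", type: CELL_VALUE_TYPE.SHEET }
//   ]
// });
// await to.xlsx.writeFile("out.xlsx");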
export function eliminateFormula(workbook: Excel.Workbook) {
workbook.eachSheet(sheet => {
sheet.eachRow({ includeEmpty: false }, row => {
row.eachCell({ includeEmpty: false }, cell => {
// if(_.get(cell, 'type') === 6){
// const {formula, result} = cell.value;
// console.log(`sheetName:${cell._column._worksheet.name} || position:${cell._address} || formula:${formula} || result:${result}`, cell);
// }
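// exceljs reports formula cells as ValueType.Formula (=== 6); swap the formula
// object for its cached result so the output workbook holds plain values.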
if(_.get(cell, 'type') === 6){
cell.value = _.get(cell, 'value.result');
}
})
})
})
} |
When the military occupied the streets of Harare and parked their tanks on every street corner, the people of Zimbabwe celebrated in anticipation of a better Zimbabwe. The military, under the command of General Chiwenga, announced through S.B. Moyo on national radio and television that His Excellency, the President, was safe and that they were only targeting the criminals around him.
The people celebrated not because they liked the presence of the military on the streets of a country that was not at war; they were celebrating the fall of a dictator, a self-crowned Hitler of Africa in the name of Robert Mugabe. They also celebrated the promise of "targeting the criminals around His Excellency". These criminals, in the words of S.B. Moyo, were the ones who had caused untold suffering and misery among ordinary Zimbabweans, but unfortunately little did we know that seven months down the line, only one small, unfortunate criminal by the name of Samuel Undenge would be prosecuted and sent to jail.
The general population expected the likes of Ignatius Chombo, Saviour Kasukuwere, Grace "Gucci" Mugabe, Jonathan Moyo, Augustine Chihuri, Obert Mpofu, Phillip Chinyangwa, Chivhayo and many others to be behind bars. Unfortunately, none of them has been prosecuted to date. So, who were the criminals that the military were targeting? No wonder the people of Zimbabwe are still suffering and haven't seen any real change: all the criminals are still walking scot-free and still causing untold suffering to the people of Zimbabwe. Some of them are still serving in your current coup cabinet courtesy of your appointment. Mugabe might have gone, but his legacy is still very much alive because of the people still serving in this coup government through your personal appointments.
Your inauguration speech on 24 November 2017 was greeted with so much optimism and anticipation by all Zimbabweans and the world over. But to date it has been only the words that you said, with not much change in your actions. The general population of Zimbabwe hasn't seen much in terms of improvements in the basic things that matter in their lives. Healthcare provision hasn't improved; in fact your deputy even unilaterally fired some fifteen thousand nurses for voicing their concerns through industrial action. Still people cannot get their hard-earned cash from banks. They still have to spend hours in bank queues to get a paltry $30-$50 in bond coins. The cost of most basic goods has in fact gone up since you took office. No significant industries or factories have been opened to create employment for the unemployed youth. You have gone around the world with the "Zimbabwe is open for business" mantra, but we haven't seen any businesses that have benefited the economic recovery of Zimbabwe.
Unemployment is still as high as it was during Mugabe's era. Vending is the only job available to our educated youths. No road map to the recovery of this economy has been evident since you took office. Still no cash in banks. Still no medication in our hospitals. You and your colleagues still fly to neighbouring countries for medical treatment. What about the ordinary Zimbabwean public? Still there isn't enough electricity for domestic consumption. These are basic things you could have prioritised instead of buying cars for chiefs and aspiring Zanu PF MPs.
Mr President, you failed to accept responsibility for Gukurahundi. The least you could have done was to issue an apology to the people of Matabeleland and the Midlands and set up a commission of inquiry as soon as you took office. You should also have said sorry for the part you played in the 2008 post-election violence, and for the part you and the army played in denying the people's president, Morgan Tsvangirai, the right to take over power from Robert Mugabe in 2008.
In your inauguration speech, you said let us not dwell on the past, but why is it that your colleagues like Christopher Mutsvangwa, who is your special advisor, seem to have nothing to offer to the electorate? His slogan is "We went to war, so you should let us rule". We then begin to wonder what special advice he is giving you in private. It is now 38 years after independence, and it seems Zanu PF has only war veterans' credentials to offer to the electorate.
Surely, Mr President, you could have done better, and you had time to prove to the world and to the people of Zimbabwe that you're better equipped than Mugabe to deal with the basic needs of Zimbabweans. You had an opportunity to show the world you were changing course and ready to mend the wrongs that were done by Robert Mugabe and Zanu PF. But as it is, Mugabe may have gone but his legacy is still very much alive. You therefore missed an opportunity to extend your sell-by date, which is due on the 30th of July 2018.
Instead of hitting the ground running by repealing oppressive laws like AIPPA and POSA, you hit the ground running by legalising mbanje. This was your opportunity to concentrate on bread-and-butter issues and implement electoral reforms that would have made this election free, fair and credible. Unfortunately, you chose to ignore them; you will live to regret this missed opportunity to make Zimbabwe great again. |
/**
* Recursively handles each node, using internal logic.
*
* @param __n The initial node.
* @throws NullPointerException On null arguments.
* @since 2018/02/20
*/
private final void __recurse(TrackedThread.Node __n)
throws NullPointerException
{
if (__n == null)
throw new NullPointerException();
this._byindex.add(__n);
int narrowp = this._narrowp,
widep = this._widep;
this._offsets.put(__n, new __Pointer__(narrowp, widep));
TrackedThread.Node[] subs = __n.subNodes();
int n = subs.length;
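// Advance both pointers past this node: an assumed fixed 28-byte node
// header plus one pointer slot per sub-node (3 bytes narrow, 4 bytes wide).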
narrowp += 28 + (n * 3);
widep += 28 + (n * 4);
this._narrowp = narrowp;
this._widep = widep;
for (TrackedThread.Node sub : subs)
this.__recurse(sub);
} |
Good Practices to Encourage Bicycling and Pedestrians on Federal Lands: 11 Components The Paul S. Sarbanes Transit in Parks Technical Assistance Center, sponsored by the Federal Transit Administration, recently completed the report Good Practices to Encourage Bicycling and Pedestrians on Federal Lands. The report was developed for federal land managers interested in creating or expanding bicycle and pedestrian options in their units and looking for more information about successful models and practices. Bicycle and pedestrian transportation has many benefits of interest to federal land managers, including resource protection, reducing greenhouse gas emissions, achieving financial sustainability, and improving visitor enjoyment and health. After reviewing bicycle and pedestrian planning documents from federal land units and selected cities and counties, the technical assistance team identified 11 components of an effective bicycle and pedestrian plan: needs assessment; partnerships; goals, objectives, and performance measures; bicycle and pedestrian network plan; design guidelines; maintenance policy and procedures; pedestrian and bicycle support facilities; cost and funding analysis; encouragement, education, and enforcement programs; evaluation and monitoring; and updates. This paper describes each component and presents illustrative examples. The full report contains additional examples and a more in-depth discussion of each component of the plan. |
package php
import (
"os"
"syscall"
)
// Chdir - Change directory
func Chdir(dir string) error {
return os.Chdir(dir)
}
// Getcwd - Get current directory
func Getcwd() (dir string) {
dir, err := os.Getwd()
if err != nil {
dir = err.Error()
}
return
}
// Closedir - Close directory's handle
func Closedir(fd int) (err error) {
return syscall.Close(fd)
}
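// Illustrative usage (assuming this package is imported as "php"):
//
//	if err := php.Chdir("/tmp"); err != nil {
//		fmt.Println(err)
//	}
//	fmt.Println(php.Getcwd())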
|
Hard x-ray phase imaging using simple propagation of a coherent synchrotron radiation beam. Particularly high coherence of the x-ray beam is associated, on the ID19 beamline at ESRF, with the small angular size of the source as seen from a point of the sample (0.1-1 µrad). This feature makes the imaging of phase objects extremely simple, using a "propagation" technique. The physical principle involved is Fresnel diffraction. Phase imaging is being simultaneously developed as a technique and used as a tool to investigate light natural or artificial materials introducing phase variations across the transmitted x-ray beam. They include polymers, wood, crystals, alloys, composites or ceramics, exhibiting inclusions, holes, cracks, etc. "Tomographic" three-dimensional reconstruction can be performed with a filtered back-projection algorithm, either on the images processed as in attenuation tomography, or on the phase maps retrieved from the images with a reconstruction procedure similar to that used for electron microscopy. The combination of diffraction ("topography") and Fresnel ("phase") imaging leads to new results. |
def copy(self, s3_src_bucket, s3_dest_bucket, s3_object, s3_object_version=None, dryrun=True, public_read=True):
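"""Copy s3_object from s3_src_bucket to s3_dest_bucket, optionally pinning a version.

self._region, self._credentials and self.error() are assumed to be supplied by the
enclosing class. With dryrun=True the copy is only logged, never executed;
public_read=True applies the "public-read" canned ACL to the copied object.
"""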
try:
if not dryrun:
s3 = boto3.resource("s3", region_name=self._region, **self._credentials)
copy_source = {"Bucket": s3_src_bucket, "Key": s3_object}
if s3_object_version:
copy_source["VersionId"] = s3_object_version
extra_args = {}
if public_read:
extra_args["ACL"] = "public-read"
s3.meta.client.copy(copy_source, s3_dest_bucket, s3_object, ExtraArgs=extra_args)
else:
logging.info(
"Dryrun mode enabled. The following file with version %s would have been copied from s3://%s/%s "
"to s3://%s/%s",
s3_object_version or "latest",
s3_src_bucket,
s3_object,
s3_dest_bucket,
s3_object,
)
except Exception as e:
self.error(
f"Failed when copying file {s3_object} with version {s3_object_version or 'latest'} from bucket "
f"{s3_src_bucket} to bucket {s3_dest_bucket} in region {self._region} with error {e}"
)
raise |
PALO ALTO — For the first time ever, Apple (AAPL) released two iPhones simultaneously, but fans did not get double the choice, as a widespread shortage kept supplies low in stores.
The dual iPhone release finished out a tumultuous couple of weeks for the tech giant, which took a hit on the stock market, rebuffed backlash over the pricing of its low-cost iPhone option and discovered a security glitch in its new operating system. The pressure was on for Apple to post big sales of the new iPhone 5C and iPhone 5S, and assure consumers and investors that the Cupertino company still had something revolutionary up its sleeve.
Apple fans turned out in droves to stores across the Bay Area, many camping in line for days in hopes of getting one of the coveted iPhone 5S, the company’s new flagship device loaded with Apple’s latest and greatest. On Palo Alto’s University Avenue, fans got another surprise when the store opened Friday morning — an upbeat Apple CEO Tim Cook, who shook hands, posed for pictures and joined the crowd for a few celebratory cheers.
Cook then paid a visit to the store at Stanford Shopping Center and later sent his first tweet: “Seeing so many happy customers reminds us of why we do what we do.”
First in line to meet Cook and get the new iPhone was a group from San Jose-based Gift A Vet, an organization dedicated to supporting veterans in need. Dorothy Arndt said she had been camped out since Monday, trading 12-hour shifts with vets and colleagues. She left the store with a 16GB iPhone 5S in “space gray” to give to a 53-year-old disabled Santa Clara County veteran struggling to make ends meet on public assistance.
The iPhone 5S rolled out in metallic colors and with super-speedy processing power and upgraded camera features, and the lower-cost, heavier iPhone 5C is offered in a spectrum of candy-coated colors. Along with the U.S., Australia, Canada, China, France, Germany, Hong Kong, Japan, Puerto Rico, Singapore and the U.K. launched the iPhone 5S and 5C on Friday.
But Apple released only a limited number of the coveted iPhone 5S for the worldwide launch and only a handful of those in gold — the color that most fans were lusting after.
“Everyone wanted gold,” said Benjamin Smith of Sunnyvale, who settled for gray at the University Avenue store.
The iPhone 5C was more plentiful, but few fans were willing to pull out their credit cards for what some have called a repackaged version of the older iPhone 5, dressed in colorful plastic. It starts at $99.
Megan Davidson, 21, arrived at the Sprint store in Palo Alto at 3:45 a.m., where she avoided the crowd at the Apple store and was first in line for the 5S.
“I think the 5C is more geared toward kids and cheaper,” she said. “I want the legit Apple phone.”
Shortages of the iPhone 5S were apparent in stores across the Peninsula, with websites showing shipments delayed until October. The location at Stanford Shopping Center ran out of unlocked units — phones sold without the software code that limits them to work only on one wireless carrier — before 9:30 a.m. An employee said the gold phones were first to go and the store had “a very limited number.”
The Best Buy on Almaden Expressway in San Jose had 28 units of the 5S and sold out before 11:30 a.m., said store manager Mark Fragoza. The store had more than twice as many phones when the iPhone 5 launched last year.
The picture wasn’t much better on the other side of the pond. Carriers in the U.K. told the BBC there was a severe shortage of the iPhone 5S, and customers in Asia and Australia were told they’ll have to wait until October.
“Demand for the new iPhones has been incredible and we are currently sold out or have limited supply of certain iPhone 5S models in some stores,” Apple spokesman Bill Evans said Friday.
Opening-weekend sales are crucial for Apple after about a year without releasing a new device while rivals have begun to chip away at Apple’s dominance in the smartphone market. Piper Jaffray analyst Gene Munster said in a note to investors he expected Apple would sell 5 million to 6 million iPhones, including pre-sale orders that started Sept. 13. BTIG analyst Walter Piecyk wrote that he was encouraged because “lines were the strongest we have seen at both Apple and carrier stores” but wouldn’t know until Monday if unit sales would meet his expectations of 6 million during launch weekend.
Apple has reportedly asked its suppliers to increase production of the gold-colored iPhone 5S by an additional one-third after seeing strong demand, people familiar with the situation told The Wall Street Journal on Thursday. What remains unclear is whether there are still manufacturing constraints that may keep supplies low.
Apple did not respond to questions Friday about production.
Contact Heather Somerville at 510-208-6413. Follow her at Twitter.com/heathersomervil. |
Rate of conversion to secondary arthroplasty after femoral neck fractures in 796 younger patients treated with internal fixation: a Swedish national register-based study

Background and purpose - In younger patients with a femoral neck fracture (FNF), internal fixation is the recommended treatment regardless of displacement. Healing complications are often treated with arthroplasty. We determined the rate of conversion to arthroplasty up to 5 years after fixation of either undisplaced FNFs (uFNFs) or displaced FNFs (dFNFs).

Patients and methods - The study was based on prospectively collected data from the Swedish Fracture Register (SFR) and the Swedish Arthroplasty Register (SAR). FNFs in patients aged < 60 treated with parallel pins/screws or sliding hip screws (SHS) registered in SFR 2012-2018 were cross-referenced with conversions to arthroplasty registered in SAR until 2019. The cumulative conversion and mortality rates were determined by Kaplan-Meier analyses and patient- and surgery-dependent risk factors for conversion by Cox regression analyses.

Results - We included 407 uFNFs and 389 dFNFs (median age 52, 59% men). The 1-year conversion rate was 3% (95% CI 1-5) for uFNFs and 9% (CI 6-12) for dFNFs. Corresponding results at 5 years were 8% (CI 5-11) and 25% (CI 20-30). Besides a displaced fracture, age 50-59 was associated with an increased rate of conversion in uFNFs. This older group also had a higher mortality rate, compared with patients aged < 50. There was no sex difference for mortality.

Interpretation - Adults aged under 60 with uFNFs and dFNFs face an 8-25% risk, respectively, of conversion to arthroplasty within 5 years after internal fixation. This is new and pertinent information for surgeons as well as patients.

In younger individuals with femoral neck fractures (FNF), internal fixation (IF) is the recommended treatment alternative. Nevertheless, the risk of healing complications has to be acknowledged; osteonecrosis of the femoral head and non-union are the most common, but the actual rate of conversion to arthroplasty is insufficiently described in younger patients. A population-based study on 796 individuals aged under 50 years found a conversion rate of 14%, but did not distinguish fracture displacement. A smaller case series (n = 122) presented a conversion rate of 22% for displaced FNFs (dFNF).
Besides the obvious need to give correct information on prognosis to younger patients, detailed knowledge on conversion rate is mandatory to underpin a sound treatment strategy. The debate focuses on where to draw the line between internal fixation and hip replacement as primary treatment of a dFNF. Different age limits are proposed; even as low as 45 years has been suggested. Traditions and surgical preferences vary internationally; the Scandinavian countries have had a higher age limit for primary arthroplasty as treatment for FNFs but have gradually shifted from 70 to approximately 60 years. Also, for undisplaced FNFs (uFNF), primary arthroplasty has recently been put forward as an alternative, at least in elderly patients. We designed a national register-based study to determine the rate of conversion to arthroplasty from IF due to uFNFs and dFNFs in patients under the age of 60. Furthermore, we descriptively analyzed mortality and the relationship between conversion rate and sex, age, trauma mechanism, and surgeon's experience.

Study design

This longitudinal cohort study is based on 2 Swedish national registries with prospectively collected data: the Swedish Fracture Register (SFR) and the Swedish Arthroplasty Register (SAR). We followed the STROBE guidelines for reporting the study.

Setting

The SFR started in 2011, and during the study period the coverage for hip fractures increased from 18% to 86% due to an increased number of hospitals reporting to the register. By 2021 all orthopedic departments in Sweden participated in the register, i.e., coverage of 100%, and it comprised 645,000 fractures at the end of 2021. The completeness of the register has been validated, and in 2018 the completeness for femoral fractures was 55%. FNFs are classified in the SFR according to the 2007 AO/OTA classification as undisplaced subcapital (31-B1), transcervical/basicervical (31-B2), and displaced subcapital (31-B3). The accuracy of the fracture classification in the SFR has been validated and was found to be substantial. The injury, fracture classification, and treatment are registered by a physician through individual log-in on the SFR webpage. SAR is the national quality register for hip and knee replacement surgery in Sweden. SAR has a coverage of 100% for all departments performing hip replacement surgery, both public and private. For the years of the current study, the completeness was approximately 98% for total hip arthroplasty (THA), 96% for hemiarthroplasties (HA), and 92% regarding revisions of both THA and HA. By regular co-processing with the population register (the Swedish Tax Agency), any date of death is noted in both register databases.

Participants

Data for all patients aged 18 to 59 years registered with a hip fracture (defined by the ICD codes S72.00, S72.10 and S72.20) in SFR from 2012 to 2018 were extracted and cross-referenced with available data from SAR for each individual from the date of the index fracture until December 31, 2019. The unique individual personal number of each Swedish citizen ensures a reliable match between registers and subsequent surgeries and/or death. Only the 1st registered hip fracture was included in the study; contralateral and subsequent ipsilateral fractures and duplicate registrations were excluded. The uFNFs (AO/OTA 31-B1, Garden 1-2) and dFNFs (AO/OTA 31-B3, Garden 3-4) were further examined for eligibility; other fracture types were excluded.
We identified all available FNFs in the SFR, but the data search did not include any concurrent fractures. As they are specified in the reporting procedure, and identified by their ICD-10 diagnosis codes (M84.4, M84.8, M84.3), pathological, spontaneous, and stress fractures were excluded from the analysis together with peri-implant fractures. Based on the primary treatment, fractures treated with IF (parallel pins/screws or sliding hip screw devices) were identified, and we excluded patients treated with primary arthroplasty, intramedullary nail, other types of plate fixation, or non-surgically from further analysis (Figure 1).

Study variables

We analyzed basic demographic and epidemiological variables (i.e., sex, age, and trauma mechanism) and data on the primary fracture treatment from SFR (i.e., type of IF used and surgeon's experience, defined as performed by either a resident or a specialist), together with the rate of conversion to hip arthroplasty registered in SAR and mortality. Trauma mechanism was defined according to the definition used in SFR: low-energy trauma is same-level falls, and high-energy trauma is caused by a truly high level of energy, such as traffic accidents or falls from a height. Length of follow-up was defined as time from injury date to date of death or end of study period on December 31, 2019.

Study outcomes

The main aim was to determine rates of conversion to arthroplasty after IF of uFNFs and dFNFs at 1, 2, and 5 years. Furthermore, analyses were performed on mortality and associations between conversion to arthroplasty and sex, age, trauma mechanism, and surgeon's experience in the study group.

Statistics

Observations were grouped according to fracture classification (i.e., uFNF or dFNF), sex, and age < 50 or 50-59. Data on continuous variables were assessed for normality and presented as mean or median, depending on normal distribution. We analyzed associations between categorical variables using a chi-square test. Kaplan-Meier analysis was used to determine the rate of conversion to secondary arthroplasty as cumulative reoperation rate (CRR) with 95% confidence interval (CI) at 1, 2, and 5 years after the injury and to estimate mortality rates. We used a Cox proportional hazards regression model to determine hazard ratios (HR) between risk factors for secondary arthroplasty, where female sex, age 50-59, high-energy trauma mechanism, and resident surgeon previously have been described to have increased risk of reoperation and were assumed to be associated with a higher HR (4).

Results

2105 hip fractures were identified in the SFR. After exclusion, 407 uFNFs and 389 dFNFs treated with internal fixation with parallel pins/screws or SHS were analyzed (Figure 1). Patients were aged 20 to 59 years at the time of the fracture, 59% of the fractures occurred in men, and 77% were due to low-energy trauma. Fractures due to high-energy trauma were more prevalent in dFNFs compared with uFNFs. The distribution of parallel pins/screws and SHS was similar in uFNFs and dFNFs. Specialists performed 2/3 of all operations due to FNFs (Table 1).

Discussion

A considerable proportion of young and middle-aged individuals with an FNF can expect a conversion to hip arthroplasty within 5 years post-fracture: 1 in 4 for displaced fractures and 1 in 12 for undisplaced fractures. Our rates of conversion to arthroplasty were comparable to previous reports on younger patients. Stockton et al.
considered their conversion rate to be high and called for improvement in the treatment of FNFs in younger patients. Our results for uFNFs are in close proximity, but we regard the conversion rate to be acceptable and believe it confirms IF as the gold standard for uFNFs in this age group. For patients with dFNFs, on the other hand, outcome after IF is poorer. In our 50-59-year group, there is an immediate and steady increase in the rate of conversion during the entire follow-up, showing a readiness of the surgeons to perform secondary surgery. Surgeons may feel at ease, as other patients in the same age span with symptomatic osteoarthritis are routinely given a hip replacement nowadays, now that the good long-term prognosis for the arthroplasty is better known. Remarkably, the youngest group with dFNF also ended up with a 23% conversion rate at 5 years, albeit their rate was modest during the earliest years, maybe reflecting a more guarded attitude towards arthroplasty in this age group. On the other hand, when 3 of 4 with dFNFs still had their native hip at 5 years, the result in terms of conversion to arthroplasty can be said to be acceptable or even good. Future endeavors should focus on improving the clinical pathway for this group of young patients, for whom this fracture is still unsolved. In elderly patients, the degree of displacement of the FNF, including both posterior and anterior tilt, and fracture comminution have been found to predict failure of IF. Our results confirm that displacement according to Garden is a risk factor for failure leading to conversion arthroplasty in younger patients also. Nevertheless, our conversion rate is much lower than in geriatric patients treated with internal fixation of their dFNFs, where major secondary surgery can be expected in approximately 40%.

Should we lower the age limit for primary arthroplasty?

The rationale for treating younger patients with internal fixation, even if their fracture is displaced, is the theoretical benefits of preserving the femoral head and a fear of multiple revisions of an arthroplasty during a long remaining life span. But if we consider long-term results from RCTs on patients aged over 60, those initially treated with IF never reached superior functional results compared with those treated with arthroplasty. When considering risk of revision of the primary arthroplasty, one should bear in mind that conversion arthroplasties are associated with inferior outcome compared with primary arthroplasties for FNFs. Ideally, those with an inherently higher risk of fixation failure should be identified preoperatively and selected for primary arthroplasty. Otherwise, a focus on realistic expectations and readiness for swift conversion arthroplasty when needed would also be acceptable in the future, given that most young patients' fractures actually do heal. Notably, there was no difference between men and women regarding mortality, although elderly males with hip fractures have a higher risk of dying, and younger women have been reported to have more comorbidities. The 5-year mortality of 16% for those 50-59 years old is noteworthy, and the 1-year mortality of 4% was 10-fold higher compared with the mean mortality rate for the same ages in the general Swedish population during the years of the study. They may in this aspect resemble the elderly, which could speak in favor of a primary arthroplasty rather than internal fixation in those of advanced biological age and an expected shorter survival.
This is supported by an analysis of cost-effectiveness where the lowest age proposed for THA as primary treatment of FNFs was 45 years in patients with multiple comorbidities, whereas it was 54 for healthy patients.

Limitations

That some individuals in the older age span with dFNF were initially selected for primary arthroplasty may affect the conversion rates reported in our study. Assuming that these patients were identified as at particularly high risk of fixation failure, our estimates of the conversion rates are potentially underestimated by this selection bias. The number of parallel implants varies internationally. In line with Scandinavian tradition, 2 pins or screws are used almost exclusively in this cohort. There is little support in the literature that adding extra screws will reduce the risk of redislocation or non-union. That only 6% received an SHS hindered us from testing the suggestion made by the FAITH study, i.e., that SHS could have some benefits in those with displaced fractures. We lack data on whether an open reduction has been performed, but the Swedish tradition is to rely on closed reduction only. Also, the literature has so far not been able to show any clear benefits of open reduction. Indices depicting comorbidities and biological age/frailty would have been desirable variables to analyze, but unfortunately the registers do not include these potentially important risk factors for conversion to arthroplasty. Those selected for primary arthroplasty in our material may represent such a subgroup of frailer individuals.

Strengths

Our study is the largest to date analyzing conversion rate after IF due to uFNFs and dFNFs. We believe our results have good external validity as they reflect everyday practice in non-selected patients and surgeons. We consider conversion to arthroplasty as a marker of a major hip complication. Naturally, other outcomes are valuable, and patient-reported outcome is always preferable. Any kind of reoperation could be relevant to report, but in Sweden valgus osteotomy, core decompression, or vascular grafts are very seldom utilized. Implant removal is a common reoperation, but the severity of the underlying situation is difficult to grade. It can span from routine procedures with no or little discomfort experienced by the patients to major complications such as deep infection or fracture collapse. We also chose our outcome due to the SAR's high completeness and national coverage, leading to a reliable result.

Conclusion

After IF in patients aged < 60, the rate of conversion to arthroplasty for dFNFs was significantly higher than for uFNFs during the entire follow-up. At 5 years, 25% and 8%, respectively, had undergone a conversion to hip arthroplasty. In dFNFs, the conversion rates were similar in all ages. For uFNFs, the conversion rates in patients aged 50-59 were significantly higher than for younger patients. No other risk factors for conversion to arthroplasty could be identified in our material. Mortality rates were markedly higher for patients aged 50-59 but did not differ between men and women or between uFNFs and dFNFs. In perspective, both surgeons and patients should be aware of the risk of conversion to arthroplasty at the time of initial treatment. A clinical implication would be a long-term follow-up scheme and readiness for swift conversion when needed. |
More than 10,000 Long Islanders had their income tax refunds stolen last year, Sen. Charles Schumer (D-N.Y.) said Wednesday.
The senator, citing estimates based on Internal Revenue Service data, said about 4,880 taxpayers in Nassau County and about 5,400 in Suffolk had their identities stolen by criminals who then claimed tax refunds in 2014.
He said the numbers are "probably higher this year."
Nationwide, 2.3 million people were victims of tax refund fraud last year; more than 71,000 were New Yorkers.
The IRS lost more than $5 billion to bogus refund claims in 2013, according to the most recent available data. There were nearly 2 million cases of fraud in 2013, up from 440,000 in 2010, the agency said.
Schumer Wednesday attributed the rise in tax fraud to criminals who are more aggressively seeking to steal refunds, and the failure of some online tax preparers to verify clients' identities.
He said criminals file fraudulent tax returns early in the year, and request that refunds be paid to them via debit cards.
Taxpayers often don't know their refund has been stolen until after they file a return and are told by the IRS that a refund has already been paid. Schumer said it takes the IRS an average of more than 300 days to deliver refunds to fraud victims.
On Wednesday, the official deadline for filing returns, the senator called for the adoption of legislation requiring the IRS to pay refunds to fraud victims within an average of 90 days.
The legislation would also establish identity verification requirements for online tax preparers.
The Identity Theft and Tax Fraud Prevention Act of 2015 was introduced last month by Sen. Bill Nelson (D-Fla.). It has six co-sponsors -- all members of the Democratic minority, including New York's Schumer and Kirsten Gillibrand. A similar bill didn't pass the Senate in 2013-14. |
This invention relates to a cooling unit for an integrated circuit chip mounted on a substrate. The cooling unit is for use in combination with a cooling medium supplying unit.
The cooling unit of the type described is disclosed in U.S. Pat. No. 4,685,211, issued to Takashi Hagihara et al. and assigned to NEC Corporation. This conventional cooling unit comprises a hat, a plurality of pistons attached to the hat, and screws for fixing the pistons to the hat with a gap left between each of the pistons and each of the integrated circuit chips mounted on a substrate. The conventional cooling unit is for use in combination with a cooling medium supplying unit. The cooling medium supplying unit comprises a cooling plate attached to the hat, which has a main path having a main inlet and a main outlet. The main path is for passing the cooling medium. As will later be described more in detail, the conventional cooling unit is incapable of speedily cooling the integrated circuit chips. In addition, the conventional cooling unit is not suitable for providing a large power supply to the integrated circuit chips. |
The 2005 Dietary Guidelines for Americans Adherence Index: development and application. The sixth edition of the Dietary Guidelines for Americans (DGA) was released in January 2005, with revised healthy eating recommendations for all adult Americans. We developed the 2005 Dietary Guidelines Adherence Index (DGAI) as a measure of adherence to the key dietary intake recommendations. Eleven index items assess adherence to energy-specific food intake recommendations, and 9 items assess adherence to "healthy choice" nutrient intake recommendations. Each item was scored from a minimum of 0 to a maximum of 1, depending on the degree of adherence to the recommendation. A score of 0.5 was given for partial adherence on most items or for exceeding the recommendation for energy-dense food items. The DGAI was applied to dietary data collected at the fifth examination of the Framingham Heart Study Offspring Cohort. The mean DGAI score was 9.6 (range 2.5-17.50). Those with higher DGAI scores were more likely to be women, older, multivitamin supplement users, and have a lower BMI and less likely to be smokers. The DGAI demonstrated a reasonable variation in this population of adult Americans, and by design this index was independent of energy consumption. The DGAI also demonstrated face validity based on the observed associations of the index with participant characteristics. Given these attributes, this index should provide a useful measure of diet quality and adherence to the new 2005 Dietary Guidelines for Americans. |
#ifndef IMODEL_HPP
#define IMODEL_HPP
#include "glm/glm.hpp"
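// Engine-facing model interface: a polymorphic base with a virtual destructor
// so concrete models can be owned and destroyed through an IModel pointer.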
class IModel
{
public:
virtual ~IModel() = default;
};
#endif // IMODEL_HPP |
Influence of chain length of pyrene fatty acids on their uptake and metabolism by Epstein-Barr-virus-transformed lymphoid cell lines from a patient with multisystemic lipid storage myopathy and from control subjects. The uptake and intracellular metabolism of 4-(1-pyrene)butanoic acid (P4), 10-(1-pyrene)decanoic acid (P10) and 12-(1-pyrene)dodecanoic acid (P12) were investigated in cultured lymphoid cell lines from normal individuals and from a patient with multisystemic lipid storage myopathy (MLSM). The cellular uptake was shown to be dependent on the fatty-acid chain length, but no significant difference in the uptake of pyrene fatty acids was observed between MLSM and control lymphoid cells. After incubation for 1 h the distribution of fluorescent fatty acids taken up by the lymphoid cell lines also differed with the chain length, most of the fluorescence being associated with phospholipid and triacylglycerols. In contrast with P10 and P12, P4 was not incorporated into neutral lipids. When the cells were incubated for 24 h with the pyrene fatty acids, the amount of fluorescent lipids synthesized by the cells was proportional to the fatty acid concentration in the culture medium. After a 24 h incubation in the presence of P10 or P12, at any concentration, the fluorescent triacylglycerol content of MLSM cells was 2-5-fold higher than that of control cells. Concentrations of pyrene fatty acids higher than 40 microM seemed to be more toxic for mutant cells than for control cells. This cytotoxicity was dependent on the fluorescent-fatty-acid chain length (P12 greater than P10 greater than P4). Pulse-chase experiments permitted one to demonstrate the defect in the degradation of endogenously biosynthesized triacylglycerols in MLSM cells (residual activity was around 10-25% of controls on the basis of half-lives and initial rates of P10- or P12-labelled-triacylglycerol catabolism); MLSM lymphoid cells exhibited a mild phenotypic expression of the lipid storage (less severe than that observed in fibroblasts). P4 was not utilized in the synthesis of triacylglycerols, and thus did not accumulate in MLSM cells: this suggests that natural short-chain fatty acids might induce a lesser lipid storage in this disease. |
News Release: Eyewitness News
January 31, 2015
Tallahassee Police Officers responded to the parking lot of the Villa Del Lago apartments at 2700 West Pensacola Street in reference to a fight just after 3:30am Saturday morning. More officers responded to the scene after gunshots were reported.
Officers located three victims, who were transported to a local hospital to be treated for their injuries. The victims have been identified as Darrel Wyche, 19, Javaris Jones, 20, and a 17-year-old juvenile.
Tallahassee Police have arrested Jerome Thomas, 20, on charges of attempted murder and possession of a firearm by a delinquent. Jairo Lainez Padilla, 20, was arrested on two charges of attempted murder and shooting into an occupied vehicle.
This is a developing story. It will be updated as more details become available. |
India, the second biggest producer of sugar, is likely to surpass Brazil as the world's top sweetener producer soon. According to analysts, this may happen as early as next sugar year, which will begin in October. Last year, India produced 32.3 million tonnes of sugar, and this is expected to go up to 33-35 million tonnes this year.
While India's sugar output is expected to rise due to various factors like subsidy schemes, in Brazil mills are allocating more cane to ethanol production. Also, low investments have affected cane yields in Brazil, which has been the leading sugar producer since 1990. According to indications, sugar output in Brazil may decline by 10 million tonnes to reach 30 million tonnes. Last year, Brazil produced nearly 40 million tonnes of the sweetener.
On the other hand, in India, according to an Icra report, early estimates suggest that production could increase by 10 per cent to around 35 million tonnes in SY18. It will add to the existing sugar surplus in the country. The domestic sugar industry is grappling with a situation of excess supply, which is likely to make the sustainability of the price recovery uncertain. Further, the increase in inventories is likely to result in deterioration in gearing and liquidity indicators for most sugar mills in the near term. While sugar prices have recovered to around Rs 32,500-33,000 per tonne (ex-mill Uttar Pradesh) from a low of Rs 26,500 per tonne in May on the back of recent government measures, the sustainability is uncertain given the oversupply conditions.
“Another year of bumper production, at 35 million tonnes in SY19 as per the preliminary estimates, is likely to be at least 9 million tonnes higher than consumption, adding to the existing sugar surplus in the market.
“Significant increase in sugar production, by around 60 per cent YoY in SY18, is likely to result in closing stocks of 9-9.5 million tonnes even after considering the successful implementation of the 2 million tonnes of exports. However, exporting the entire 2 million tonnes might pose a challenge, given the subdued global sugar prices. Hence, while the prices have recovered after the announcement of a bailout package for the industry, Icra expects pressure on the sugar prices, given the continued oversupply scenario,” said Sabyasachi Majumdar, senior vice-president and group head, Icra Ratings.
India has been struggling to export its surplus as prices in the world market are trading at steep discounts to local prices. The world sugar market, too, will find it hard to absorb more than 2-4 million tonnes of sugar in the next 12 months because of ample sugar stocks, analysts said.
Explaining the domestic pricing scenario, Majumdar said that with sugar realisations likely to be under supply-induced pressure in SY19, margin pressures as well as an increase in cane arrears are likely. Also, the effective increase in the fair and remunerative price (FRP) by 2.5 per cent for SY19 could trigger a rise in the state-advised price set by the governments of some states.
“Apart from causing margin pressures, higher cane prices may further incentivise farmers to keep sowing cane, which could exacerbate supply pressures in the medium term. Notwithstanding the government support, the operating environment for sugar mills in the short term will be challenging. Further, the long-term viability of the sugar mills will remain critically dependent upon ensuring linkage between prices of cane and sugar, especially in the SAP-following states,” said Majumdar. |
Correlation of Life Skills and Academic Achievement of High School Students

The purpose of the study was to examine the correlation of life skills and academic achievement of high school students. The investigator randomly selected a sample of high school students from different areas of the Karaikudi region. The investigator chose the normative survey method. Scales for the life skills and the academic achievement of students at high school level, standardized by the investigator, were used to collect the data. As statistical techniques, the Pearson product-moment correlation method, t-test and F-ratio were employed for analyzing the data. The result reveals that there is a significant relationship between life skills and academic achievement of high school students in the Karaikudi region. Hence the result showed that the students who had received life skills and academic training gained significantly higher scores in life skills and academic achievement. It seems that analyzing students' knowledge based on life skills and academic achievement, especially in the first year of their study (9th standard), is very essential.

INTRODUCTION: Education aims at an all-round development of students' life skills and academic achievement at high school level. But if we consider it in its broader sense, it will be seen that life skills aim not only at physical development and organic health but also at developing social maturity and academic excellence, which cultivate social qualities in the students. The World Health Organization (WHO) defines life skills as the ability for adaptive and positive behaviors that enables individuals to deal effectively with the demands and challenges of everyday life. Achievement is an essential aspect of human life, especially in school life. Socially mature individuals have the confidence to face reality with integrity and are well developed in discriminating power to make appropriate decisions about their personal and social life. Academic achievement is the demonstrated ability to perform, achieve and excel in scholastic activities. Academic excellence has been identified with achieving high grades and superior performance.

Objectives of the study: To study the relationship between life skills and academic achievement of high school students. To find out the significant difference between the mean scores of life skills of high school students in terms of their sex, medium of instruction, and type of school. To find out the significant difference between the mean scores of academic achievement of high school students in terms of their sex, medium of instruction, and type of school.

Hypotheses of the study: There is no significant relationship between the mean scores of life skills and academic achievement of high school students. There is no significant difference between the mean scores of life skills of high school students in terms of their sex, medium of instruction, and type of school. There is no significant difference between the mean scores of academic achievement of high school students in terms of their sex, medium of instruction, and type of school.

Sample of the study: The investigator randomly chose 162 students from various schools in the Karaikudi region, Tamil Nadu state, for the investigation.

Methodology of the study: The normative survey method of research was employed to investigate the relationship and difference in the various variables of the study.

Research Tools: The present study used the following tools:
1. Life Skills scale, developed and standardized by the investigator. 2. Academic Achievement scale, developed and standardized by the investigator.

Statistical techniques used: 1. Karl Pearson's product-moment correlation technique to study the relationship between the variables. 2. Differential analysis (t-test) to find out the significant difference between the variables.

Analysis and Interpretation:

Hypothesis 1: There is no significant relationship between the mean scores of life skills and academic achievement of high school students.

Table 1.1
Variables              N     Mean    S.D.   df    Coefficient of correlation   Level of significance
Life Skills            162   16.82   2.68   158   0.83                         Significant at 0.01 level
Academic Achievement   162   15.04   2.85

From Table 1.1, it is found that the calculated r value (0.83) is greater than the table value at the 0.01 level of significance. Hence our null hypothesis is rejected. So it is concluded that there is a significant relationship between the mean scores of life skills and academic achievement of high school students.

Hypothesis 2: There is no significant difference between the mean scores |
/*******************************************************************************
* Copyright 2014 <NAME>
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
******************************************************************************/
package org.powertac.producer;
import static org.junit.Assert.*;
import static org.mockito.Matchers.*;
import static org.mockito.Mockito.*;
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;
import org.apache.commons.configuration.Configuration;
import org.apache.commons.configuration.MapConfiguration;
import org.joda.time.Instant;
import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.invocation.InvocationOnMock;
import org.mockito.stubbing.Answer;
import org.powertac.common.Broker;
import org.powertac.common.Competition;
import org.powertac.common.CustomerInfo;
import org.powertac.common.Rate;
import org.powertac.common.Tariff;
import org.powertac.common.TariffEvaluator;
import org.powertac.common.TariffSpecification;
import org.powertac.common.TariffSubscription;
import org.powertac.common.TariffTransaction;
import org.powertac.common.TimeService;
import org.powertac.common.WeatherForecast;
import org.powertac.common.WeatherForecastPrediction;
import org.powertac.common.WeatherReport;
import org.powertac.common.config.Configurator;
import org.powertac.common.enumerations.PowerType;
import org.powertac.common.interfaces.Accounting;
import org.powertac.common.interfaces.ServerConfiguration;
import org.powertac.common.interfaces.TariffMarket;
import org.powertac.common.repo.BrokerRepo;
import org.powertac.common.repo.CustomerRepo;
import org.powertac.common.repo.RandomSeedRepo;
import org.powertac.common.repo.TariffRepo;
import org.powertac.common.repo.TariffSubscriptionRepo;
import org.powertac.common.repo.TimeslotRepo;
import org.powertac.common.repo.WeatherForecastRepo;
import org.powertac.common.repo.WeatherReportRepo;
import org.powertac.producer.Producer.PreferredOutput;
import org.powertac.producer.Producer.ProducerAccessor;
import org.powertac.producer.fossil.SteamPlant;
import org.powertac.producer.hydro.RunOfRiver;
import org.powertac.producer.utils.Curve;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.annotation.DirtiesContext;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import com.thoughtworks.xstream.XStream;
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(locations = { "classpath:test-config.xml" })
@DirtiesContext
public class ProducerTest
{
@Autowired
private TimeService timeService;
@Autowired
private Accounting mockAccounting;
@Autowired
private TariffMarket mockTariffMarket;
@Autowired
private ServerConfiguration mockServerProperties;
@Autowired
private TariffRepo tariffRepo;
@Autowired
private CustomerRepo customerRepo;
@Autowired
private TariffSubscriptionRepo tariffSubscriptionRepo;
@Autowired
private TimeslotRepo timeslotRepo;
@Autowired
private WeatherReportRepo weatherReportRepo;
@Autowired
private WeatherForecastRepo weatherForecastRepo;
@Autowired
private BrokerRepo brokerRepo;
@Autowired
private RandomSeedRepo randomSeedRepo;
private Configurator config;
private Instant exp;
private Broker broker1;
private Instant now;
private TariffSpecification defaultTariffSpec;
private Tariff defaultTariff;
private Competition comp;
private List<Object[]> accountingArgs;
@Before
public void setUp ()
{
customerRepo.recycle();
brokerRepo.recycle();
tariffRepo.recycle();
tariffSubscriptionRepo.recycle();
randomSeedRepo.recycle();
timeslotRepo.recycle();
weatherReportRepo.recycle();
weatherReportRepo.runOnce();
reset(mockAccounting);
reset(mockServerProperties);
// create a Competition, needed for initialization
comp = Competition.newInstance("producer-test");
broker1 = new Broker("Joe");
// now = new DateTime(2009, 10, 10, 0, 0, 0, 0,
// DateTimeZone.UTC).toInstant();
now = comp.getSimulationBaseTime();
timeService.setCurrentTime(now);
timeService.setClockParameters(now.toInstant().getMillis(), 720l,
60 * 60 * 1000);
exp = now.plus(TimeService.WEEK * 10);
defaultTariffSpec =
new TariffSpecification(broker1, PowerType.PRODUCTION)
.withExpiration(exp).addRate(new Rate().withValue(0.5));
defaultTariff = new Tariff(defaultTariffSpec);
defaultTariff.init();
defaultTariff.setState(Tariff.State.OFFERED);
tariffRepo.setDefaultTariff(defaultTariffSpec);
when(mockTariffMarket.getDefaultTariff(PowerType.FOSSIL_PRODUCTION))
.thenReturn(defaultTariff);
when(mockTariffMarket.getDefaultTariff(PowerType.RUN_OF_RIVER_PRODUCTION))
.thenReturn(defaultTariff);
when(mockTariffMarket.getDefaultTariff(PowerType.SOLAR_PRODUCTION))
.thenReturn(defaultTariff);
when(mockTariffMarket.getDefaultTariff(PowerType.WIND_PRODUCTION))
.thenReturn(defaultTariff);
accountingArgs = new ArrayList<Object[]>();
// mock the AccountingService, capture args
doAnswer(new Answer<Object>() {
public Object answer (InvocationOnMock invocation)
{
Object[] args = invocation.getArguments();
accountingArgs.add(args);
return null;
}
}).when(mockAccounting)
.addTariffTransaction(isA(TariffTransaction.Type.class),
isA(Tariff.class), isA(CustomerInfo.class),
anyInt(), anyDouble(), anyDouble());
// Set up serverProperties mock
config = new Configurator();
doAnswer(new Answer<Object>() {
@Override
public Object answer (InvocationOnMock invocation)
{
Object[] args = invocation.getArguments();
config.configureSingleton(args[0]);
return null;
}
}).when(mockServerProperties).configureMe(anyObject());
TreeMap<String, String> map = new TreeMap<String, String>();
map.put("common.competition.expectedTimeslotCount", "1440");
Configuration mapConfig = new MapConfiguration(map);
config.setConfiguration(mapConfig);
config.configureSingleton(comp);
}
@Test
public void testFossilSerialize ()
{
SteamPlant plant = new SteamPlant(10000, 2000, -500000);
XStream x = new XStream();
x.autodetectAnnotations(true);
String out = x.toXML(plant);
plant = (SteamPlant) x.fromXML(out);
assertNotNull(plant.customerInfo);
assertNotNull(plant.customerRepo);
assertNotNull(plant.name);
assertNotNull(plant.producerAccessor);
assertNotNull(plant.randomSeedRepo);
assertNotNull(plant.seed);
assertNotNull(plant.tariffEvaluationHelper);
assertNotNull(plant.tariffEvaluator);
assertNotNull(plant.tariffMarketService);
assertNotNull(plant.tariffSubscriptionRepo);
assertNotNull(plant.timeService);
assertNotNull(plant.timeslotRepo);
assertNotNull(plant.weatherForecastRepo);
assertNotNull(plant.weatherReportRepo);
assertTrue(plant.preferredOutput == plant.upperPowerCap);
assertTrue(plant.upperPowerCap != 0);
}
@Test
public void testRunofRiverSerialize ()
{
Curve inputFlow = new Curve();
double[] ys =
{ 1.478, 4.200, 3.147, 1.249, 0.779, 1.658, 3.952, 3.380, 1.911, 1.072,
0.632, 0.422, 0.278, 0.189, 0.282, 5.181, 1.510, 2.466, 2.597, 3.388,
3.731, 2.367, 1.172, 2.233, 7.465, 1.838, 2.575, 4.050, 6.299, 7.258,
2.119, 2.095, 1.188, 0.674, 1.494, 2.088, 1.687, 1.393, 5.438, 1.498,
1.068, 0.958, 7.255, 1.356, 1.442, 0.837, 0.532, 0.451, 0.385, 0.397,
0.266, 0.331, 0.586, 0.684, 4.748, 4.081, 3.892, 2.155, 3.136, 2.657,
4.408, 2.300, 1.063, 2.828, 3.494, 2.103, 2.439, 4.418, 2.645, 1.572,
1.550, 2.999, 3.946, 2.296, 2.155, 2.349, 1.577, 0.909, 0.704, 2.282,
1.450, 0.932, 0.746, 0.484, 0.361, 0.296, 0.253, 0.264, 0.220, 0.203,
0.195, 0.263, 0.341, 1.594, 1.328, 1.058, 2.878, 0.718, 0.528, 0.387,
0.268, 0.220, 0.193, 0.291, 0.253, 0.228, 0.170, 0.149, 0.125, 0.116,
0.130, 0.250, 0.218, 0.156, 0.137, 0.115, 0.105, 0.103, 0.119, 0.406,
0.410, 6.439, 2.978, 1.379, 1.312, 0.616, 0.371, 0.256, 0.214, 0.175,
0.148, 0.119, 0.100, 0.083, 0.077, 0.097, 0.087, 0.084, 0.083, 0.078,
0.080, 0.233, 0.242, 0.257, 0.749, 0.448, 0.274, 0.614, 0.626, 0.283,
0.175, 0.147, 0.114, 0.085, 0.072, 0.065, 0.075, 0.077, 0.065, 0.060,
0.057, 0.194, 0.217, 0.113, 0.088, 0.509, 0.274, 0.146, 0.096, 0.078,
0.088, 0.072, 0.057, 0.048, 0.047, 0.050, 0.059, 0.110, 0.267, 0.137,
0.524, 0.373, 0.612, 0.423, 1.173, 0.693, 0.395, 0.416, 0.274, 0.539,
0.332, 0.196, 0.149, 0.124, 0.997, 1.267, 0.432, 0.386, 0.222, 0.149,
0.114, 0.092, 0.076, 0.063, 0.058, 0.058, 0.054, 0.047, 0.044, 0.043,
0.041, 0.043, 0.126, 0.080, 0.062, 0.053, 0.055, 0.124, 0.086, 0.466,
0.196, 0.137, 0.147, 0.691, 0.512, 0.239, 0.489, 2.408, 4.195, 3.547,
3.645, 3.786, 4.669, 4.040, 2.805, 5.952, 3.559, 2.267, 1.144, 1.705,
5.231, 2.095, 1.159, 0.762, 0.505, 2.967, 2.293, 3.346, 3.407, 4.923,
2.327, 8.239, 4.889, 3.815, 4.096, 1.663, 1.133, 1.565, 1.394, 0.849,
0.805, 2.371, 5.286, 2.031, 2.470, 2.933, 7.810, 3.650, 5.703, 5.035,
4.136, 1.497, 7.962, 8.858, 2.012, 1.258, 2.711, 1.323, 0.748, 0.573,
0.443, 0.350, 0.310, 0.272, 0.688, 0.671, 0.377, 0.411, 1.831, 2.838,
2.787, 3.282, 1.880, 1.703, 1.764, 1.785, 1.773, 1.223, 1.892, 0.968,
0.624, 5.239, 3.996, 3.446, 1.694, 1.553, 0.945, 1.122, 3.178, 4.589,
1.375, 0.980, 3.189, 2.515, 5.211, 2.724, 1.993, 1.457, 8.642, 2.711,
8.083, 4.706, 2.587, 2.694, 2.828, 2.784, 2.419, 1.951, 2.995, 1.503,
1.365, 1.452, 1.010, 0.724, 0.521, 0.430, 0.389, 4.168, 1.603, 0.994,
2.273, 2.480, 1.333, 1.023, 0.599, 0.420, 0.528, 7.982, 6.061, 1.906,
1.112, 7.080, 6.820, 4.021, 1.622, 0.850, 7.273, 3.291, 5.765, 3.105,
2.546, 1.809, 2.215, 4.255, 5.606 };
int i = 1;
for (double y: ys) {
inputFlow.add(i, y);
i++;
}
RunOfRiver plant =
new RunOfRiver(inputFlow, 0, 20, inputFlow, 1000, 60, 1, -1000);
XStream x = new XStream();
x.autodetectAnnotations(true);
String out = x.toXML(plant);
plant = (RunOfRiver) x.fromXML(out);
assertNotNull(plant.customerInfo);
assertNotNull(plant.customerRepo);
assertNotNull(plant.name);
assertNotNull(plant.producerAccessor);
assertNotNull(plant.randomSeedRepo);
assertNotNull(plant.seed);
assertNotNull(plant.tariffEvaluationHelper);
assertNotNull(plant.tariffEvaluator);
assertNotNull(plant.tariffMarketService);
assertNotNull(plant.tariffSubscriptionRepo);
assertNotNull(plant.timeService);
assertNotNull(plant.timeslotRepo);
assertNotNull(plant.weatherForecastRepo);
assertNotNull(plant.weatherReportRepo);
assertTrue(plant.preferredOutput == plant.upperPowerCap);
assertTrue(plant.upperPowerCap != 0);
assertTrue(plant.timeslotLengthInMin == 60);
}
@Test
public void testSubscribeDefault ()
{
final SteamPlant plant = new SteamPlant(10000, 2000, -500000);
assertNull(plant.getCurrentSubscription());
doAnswer(new Answer<Object>() {
@Override
public Object answer (InvocationOnMock invocation) throws Throwable
{
assertTrue((Tariff) invocation.getArguments()[0] == defaultTariff);
assertTrue((CustomerInfo) invocation.getArguments()[1] == plant
.getCustomerInfo());
assertTrue((Integer) invocation.getArguments()[2] == 1);
TariffSubscription sub =
new TariffSubscription(plant.getCustomerInfo(), defaultTariff);
sub.subscribe(1);
tariffSubscriptionRepo.add(sub);
return null;
}
}).when(mockTariffMarket).subscribeToTariff(defaultTariff,
plant.getCustomerInfo(), 1);
plant.subscribeDefault();
assertNotNull(plant.getCurrentSubscription());
assertEquals(plant.getCurrentSubscription().getTariff(), defaultTariff);
}
@Test
public void testEvaluateTariffs ()
{
SteamPlant plant = new SteamPlant(10000, 2000, -500000);
TariffEvaluator te = mock(TariffEvaluator.class);
plant.setTariffEvaluator(te);
plant.evaluateNewTariffs();
verify(te).evaluateTariffs();
ProducerAccessor accessor = mock(ProducerAccessor.class);
when(accessor.generateOutput(any(Tariff.class), anyInt()))
.thenReturn(new PreferredOutput(plant.upperPowerCap, new double[24]));
plant.setProducerAccessor(accessor);
TariffSubscription sub =
new TariffSubscription(plant.customerInfo, defaultTariff);
sub.subscribe(1);
tariffSubscriptionRepo.add(sub);
plant.evaluateNewTariffs();
verify(te, times(2)).evaluateTariffs();
verify(accessor).generateOutput(any(Tariff.class), anyInt());
assertTrue(plant.currentSubscription == sub);
}
@Test
public void testStep ()
{
SteamPlant plant = new SteamPlant(10000, 2000, -500000);
SteamPlant spy = spy(plant);
doNothing().when(spy).consumePower();
spy.step();
verify(spy).consumePower();
}
@Test
public void testCalculateOutput ()
{
// TODO
SteamPlant plant = new SteamPlant(10000, 2000, -500000);
List<WeatherForecastPrediction> predictions =
new ArrayList<WeatherForecastPrediction>();
// 24 identical hourly predictions, one per forecast timeslot
for (int slot = 1; slot <= 24; slot++) {
  predictions.add(new WeatherForecastPrediction(slot, 22, 5, 0.5, 0));
}
assertTrue(predictions.size() == 24);
WeatherForecast forecast =
new WeatherForecast(timeslotRepo.currentSerialNumber(), predictions);
weatherForecastRepo.add(forecast);
Rate r = new Rate().withDailyBegin(9).withDailyEnd(13).withValue(-1.0);
defaultTariff.getTariffSpecification().addRate(r);
assertTrue(defaultTariff.init());
defaultTariff.setState(Tariff.State.OFFERED);
double mon =
defaultTariff.getUsageCharge(timeslotRepo.getTimeForIndex(10), -1, 0);
assertTrue(mon < 0);
mon =
defaultTariff
.getUsageCharge(timeslotRepo.getTimeForIndex(10), -100000, 0);
assertTrue(mon < 0);
mon =
defaultTariff.getUsageCharge(timeslotRepo.getTimeForIndex(10), -100000,
-125000000);
assertTrue(mon < 0);
assertFalse(defaultTariff.isTiered());
double pref =
plant.producerAccessor.generateOutput(defaultTariff, 24).preferredOutput;
double[] out =
plant.producerAccessor.generateOutput(defaultTariff, 24).output;
assertTrue(out.length == 24);
assertEquals(plant.getUpperPowerCap(), pref, 1000);
for (double i: out) {
if (i == 0) {
return;
}
}
fail("Shouldn't be reachable");
}
@Test
public void testCalculateOutputTiered ()
{
SteamPlant plant = new SteamPlant(10000, 2000, -500000);
List<WeatherForecastPrediction> predictions =
new ArrayList<WeatherForecastPrediction>();
// 24 identical hourly predictions, one per forecast timeslot
for (int slot = 1; slot <= 24; slot++) {
  predictions.add(new WeatherForecastPrediction(slot, 22, 5, 0.5, 0));
}
assertTrue(predictions.size() == 24);
WeatherForecast forecast =
new WeatherForecast(timeslotRepo.currentSerialNumber(), predictions);
weatherForecastRepo.add(forecast);
Rate r = new Rate().withTierThreshold(-12 * 500000).withValue(-80);
defaultTariff.getTariffSpecification().addRate(r);
assertTrue(defaultTariff.init());
defaultTariff.setState(Tariff.State.OFFERED);
assertTrue(defaultTariff.isTiered());
double pref =
plant.producerAccessor.generateOutput(defaultTariff, 24).preferredOutput;
double[] out =
plant.producerAccessor.generateOutput(defaultTariff, 24).output;
assertTrue(out.length == 24);
assertTrue(Math.abs(pref) < Math.abs(plant.upperPowerCap));
assertEquals(-250000, pref, 1000);
}
@Test
public void testProducePower ()
{
SteamPlant plant = new SteamPlant(10000, 2000, -500000);
WeatherReport report = new WeatherReport(5, 22, 5, 0.5, 0);
WeatherReportRepo rep = mock(WeatherReportRepo.class);
when(rep.currentWeatherReport()).thenReturn(report);
plant.setWeatherReportRepo(rep);
TariffSubscription sub = mock(TariffSubscription.class);
when(sub.getTariff()).thenReturn(defaultTariff);
plant.setCurrentSubscription(sub);
plant.consumePower();
verify(rep).currentWeatherReport();
verify(sub).usePower(anyDouble());
verify(sub).getTariff();
}
}
|
package com.salton123.qa.kit.topactivity;
import android.content.Context;
import android.content.Intent;
import android.view.View;
import com.salton123.qa.QualityAssistant;
import com.salton123.qa.config.TopActivityConfig;
import com.salton123.qa.constant.BundleKey;
import com.salton123.qa.constant.FragmentIndex;
import com.salton123.qa.kit.Category;
import com.salton123.qa.kit.IKit;
import com.salton123.qa.kit.SimpleKitViewItem;
import com.salton123.qa.ui.UniversalActivity;
import com.zhenai.qa.R;
/**
 * Project: Android
 * Package: com.salton123.qa.kit.topactivity
 * File: TopActivity
 * Created: 2019-04-29 at 12:13
 * Description: information about the Activity currently at the top of the stack
 *
 * @author 阿钟
 */
public class TopActivity implements IKit {
@Override
public int getCategory() {
return Category.TOOLS;
}
@Override
public View displayItem() {
return new SimpleKitViewItem(QualityAssistant.application) {
@Override
public int getName() {
return R.string.dk_kit_top_activity;
}
@Override
public int getIcon() {
return R.drawable.dk_view_check;
}
@Override
public void onClick(Context context) {
Intent intent = new Intent(context, UniversalActivity.class);
intent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK);
intent.putExtra(BundleKey.FRAGMENT_INDEX, FragmentIndex.FRAGMENT_TOP_ACTIVITY);
context.startActivity(intent);
}
};
}
@Override
public void onAppInit(Context context) {
TopActivityConfig.setTopActivityOpen(context, false);
}
}
|
Epidemiology, Haemato-biochemical and Pathological Changes Related to Field Outbreaks of PPR in Small Ruminants in Odisha Background: Odisha experiences sporadic outbreaks of Peste des petits ruminants (PPR) throughout the year, yet the available literature on PPR in the state remains scarce. This is the first detailed investigation in the state, undertaken with the objective of correlating the epidemiological risk factors with the haemato-biochemical and pathological changes in natural field outbreaks occurring in eight different districts. Methods: Fourteen field outbreaks of PPR were evaluated clinically as well as epidemiologically and confirmed through polymerase chain reaction (PCR). Blood, serum, faecal and tissue samples were collected to observe haemato-biochemical and pathomorphological changes and to assess disease severity. Result: The present study found an overall mortality rate of 46.81%. Chi-square analysis revealed the highest significant prevalence among animals aged 7-12 months (46.13%), the Ganjam breed (45.51%) and females (80.49%). Frequent migration across the border areas, along with poor management and helminthic infection, was the major precipitating factor. There was polycythemia along with neutrophilia and lymphopenia. A significant increase in alanine transaminase (ALT), aspartate aminotransferase (AST), K+ and Ca+2, along with creatinine, urea and blood urea nitrogen (BUN), was observed in affected flocks. Antero-ventral consolidation of the lungs, syncytia and the presence of both eosinophilic intranuclear and intracytoplasmic inclusion bodies were the major pathological changes. |
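As an aside for readers less familiar with the statistics, the prevalence comparisons reported above are the kind produced by a chi-square test on a contingency table. A minimal sketch, with invented counts (none of the study's actual data), of how such a test is run with scipy:

# Hypothetical 2x3 table: PPR-affected vs. unaffected animals in three age
# groups; all counts below are made up purely to illustrate the test.
from scipy.stats import chi2_contingency

observed = [[30, 48, 22],   # affected (e.g. 0-6, 7-12, >12 months)
            [70, 52, 78]]   # unaffected
chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2={chi2:.2f}, p={p:.4f}, dof={dof}")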
Physics-based classification of acoustic emission waveforms The classification of acoustic emission source mechanisms based on features related to the physics of acoustic emission signal generation is considered in this paper. Numerically generated acoustic emission waveforms are used for this purpose. Conventional acoustic emission parameters such as rise-time, duration, and frequency content do not effectively characterize acoustic emission waveforms for the purpose of identifying the source mechanisms. Features unique to the different source mechanisms and to the relative positions of the sensor with respect to the source were identified and extracted from numerically obtained acoustic emission waveforms. This feature selection appears to be successful in capturing the differences related to the source mechanisms considered here. Correlation coefficients of the 45 features with the different waveforms were first obtained, and their principal components determined. The dominant principal components were found to adequately characterize the waveforms and relate them to their source mechanisms. Better than 90 percent success was seen when only the first two principal components were employed, even in the noisy signals considered here. |
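The pipeline the abstract describes (feature extraction, principal component analysis, classification on the leading components) can be sketched as follows; the data here are synthetic stand-ins, not the paper's waveforms, and scikit-learn is an assumed tool, not one the paper names:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Synthetic stand-in: 200 waveforms x 45 extracted features, two source classes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, size=(100, 45)),
               rng.normal(0.8, 1.0, size=(100, 45))])
y = np.array([0] * 100 + [1] * 100)

# Keep only the first two principal components, then classify.
clf = make_pipeline(PCA(n_components=2), LogisticRegression())
clf.fit(X, y)
print(f"training accuracy: {clf.score(X, y):.2f}")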
import { Component } from '@angular/core';
import { AdminDataService } from '@sandbox/admin/data';
@Component({
template: `
<div class="text-center">
<h2>Projects</h2>
<ng-container *ngFor="let item of items">
{{ item.name }}
</ng-container>
</div>
`,
})
export class AdminProjectIndexComponent {
public items: any[];
constructor(private readonly data: AdminDataService) {
this.items = this.data.projectFindAll();
}
}
|
Rebeliões e crimes bárbaros na penitenciária agrícola do Monte Cristo (PAMC): a crise no sistema prisional de Roraima / Rebellions and barbaric crimes in the Monte Cristo (PAMC) agricultural penitentiary: the crisis in the Roraima prison system The recent rebellions and crimes with refinements of cruelty committed by prisoners at the Monte Cristo Agricultural Penitentiary (PAMC), in the State of Roraima, are presented as an object of research in the face of the crisis in the local prison system prior to the federal intervention in the respective Penitentiary. In a panoramic view, one sees the most common problems of the Brazilian prison system, such as overcrowding and an entrenched crisis, in addition to an approach to the legal provisions and international treaties that deal with the main rights of prisoners. Through the logical deductive method, based on doctrinal, jurisprudential and normative construction, the research addresses the rebellions that occurred in the PAMC, which indicate prison fragility and the emergence of criminal factions, in addition to the impacts of Venezuelan migration on local prisons. The barbarity of the crimes committed during the latest PAMC rebellions, mostly stemming from faction fights for power in the local prison, and the results of these rebellions are discussed. In view of the facts, the study seeks to understand the role of the State in the face of the prison crisis and the measures taken by the local government to contain rebellions and crimes in the PAMC. Thus, the present study lists possible proposals for resolving internal conflicts. |
def process(self):
    """Run the full parsing pipeline step by step."""
self.assignDates()
self.selectColumns()
self.harmonizeVariables()
self.convertTypes()
self.checkFilterDict()
self.filter()
self.filterConsistentHours()
self.addStrColumns()
self.composeStartAndEndTimestamps()
self.updateEndTimestamps()
self.harmonizeVariablesGenericIdNames()
print('Parsing completed') |
The present invention relates to wireless location systems, particularly for locating persons in distress or emergency situations.
Though the scope of the present invention is far beyond a specific system or a specific application, it may be well understood by elaborating on the case of man overboard (MOB), i.e. a person that accidentally falls overboard a vessel to the sea (or ocean, or lake, or river, etc.).
Over a thousand people are lost at sea every year due to MOB accidents. Fast detection and location of such accidents is crucial, since survival time in water is limited, typically less than two days at 20° C. and less than six hours at 10° C.
A reliable device to detect and locate MOB is required to save lives, but also to provide the sailor with confidence and peace of mind, as well as reducing costs and risks of Search and Rescue (SAR) operations.
The present art provides a reasonable solution for locating ships in danger of being wrecked, airplanes upon emergency landings, and in many cases also individuals in distress. This is typically accomplished by activating emergency radio beacons, detectable and locatable by satellites orbiting around the earth. Still, present SAR systems are less efficient for individuals, specifically MOB.
A major satellite SAR system presently operating worldwide is Cospas-Sarsat. Though the present invention is not limited to this specific system, Cospas-Sarsat is a good example to clarify the present art, as well as the present invention, so it is specifically highlighted here.
Cospas-Sarsat is a satellite communications system to assist SAR of people in distress, all over the world and at any time. The system was launched in 1982 by the USA, Canada, France and the Soviet Union (Russia) and since then, it has been used for thousands of SAR events and has been instrumental in the rescue of over 20,000 lives worldwide. The goal of the system is to detect and locate signals from distress radio beacons and forward the data to ground stations, in order to support all organizations in the world with responsibility for SAR operations, whether at sea, in the air or on land.
The system uses spacecraft: Low Earth Orbit (LEO) and Geostationary (GEO) satellites, and in the future also Medium Earth Orbit (MEO) satellites. Cospas-Sarsat radio beacons transmit in the 406 MHz band (and 121.5 MHz until 2009). The position of the beacon is determined either from the Doppler shift of the received beacon signal or from position data modulated onto the signal, provided by a Global Navigation Satellite System (GNSS) receiver integrated in the radio beacon.
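To give a feel for the Doppler method mentioned above (a back-of-the-envelope sketch only; the satellite velocity is an assumed round number, and the relation ignores relativistic terms):

# First-order Doppler: frequency offset = f0 * v_r / c, where v_r is the
# radial velocity between beacon and satellite; the sign flip at closest
# approach is what lets the ground segment solve for the beacon position.
C = 3.0e8      # speed of light, m/s
F0 = 406.0e6   # Cospas-Sarsat beacon carrier, Hz

def doppler_shift(radial_velocity_mps):
    return F0 * radial_velocity_mps / C

for v in (7500.0, 0.0, -7500.0):   # approaching, closest approach, receding
    print(f"v_r={v:+8.1f} m/s  ->  shift={doppler_shift(v):+9.1f} Hz")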
A detailed description of the Cospas-Sarsat System is provided in the document “Introduction to the Cospas-Sarsat System, C/S G.003”, accessed through—http://cospas-sarsat.org/Documents/gDocs.htm
All Cospas-Sarsat beacons are subject to the same RF specifications, yet may employ a different mechanical structure and different activation method, possibly also slight differences in the data modulated on the signal, usually adapted to different applications, and named accordingly: a) Emergency Position Indicating Radio Beacon (EPIRB) for marine use; b) Emergency Locator Transmitter (ELT) for aviation use; and c) Personal Locator Beacon (PLB) for personal and/or terrestrial use. For the purpose of the present invention, the name "PLB" is mainly used; however, it refers to any type of radio location beacon (not necessarily related to "persons").
When activated, automatically or manually, a Cospas-Sarsat beacon transmits short signals, each about 0.5 seconds long, repetitively every 50 seconds, for at least a day, until its battery drains out.
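The figures above imply a very low transmit duty cycle, which is what makes day-long operation on a small battery plausible; a quick check using only the numbers stated in the text:

BURST_S = 0.5     # transmission length, seconds
PERIOD_S = 50.0   # repetition period, seconds

duty_cycle = BURST_S / PERIOD_S        # 0.5/50 = 1% of the time on-air
bursts_per_day = 24 * 3600 / PERIOD_S  # 1728 bursts over one day
print(f"duty cycle: {duty_cycle:.1%}, bursts per day: {bursts_per_day:.0f}")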
Cospas-Sarsat beacons are already mandatory to carry onboard large ships (>300 Ton) and passenger airplanes. In several countries, also leisure yachts are required to carry such beacons.
There are various products in the market that implement Cospas-Sarsat specified beacons, for example: “ResQFix” provided by ACR (www.acrelectronics.com); “fastfind” provided by McMurdo (www.mcmurdo.co.uk); “SA50’ provided by SIMRAD (www.simradyachting.com).
Still, the problem of man overboard, which is very troublesome in the maritime arena, is not covered well enough by present art, including by present Cospas-Sarsat beacons. Since a MOB accident can happen anytime, an effective MOB device should always be carried by a person onboard, at sea, preferably worn on the body. Indeed, such wearable MOB radio beacons have been introduced to the market, for example: "LIFETAG" by Raymarine (www.raymarine.com); "WAVEFINDER" provided by Viking Life (www.viking-life.com); "MOB i-lert" provided by Ocean Safety (www.oceansafety.com). Yet, these are not satellite compatible beacons, but rather short range transmitters to communicate with a receiver onboard the vessel. Some of these MOB systems can accurately record the time and position of a MOB accident; however, as the vessel sails away and turns back to the recorded MOB position, or a SAR team is dispatched to this last reported location, the victim could have drifted away, even by 100-200 meters, and without an accurate updated position report it could be very difficult to locate and rescue this person in the water, particularly in poor visibility and high sea conditions.
U.S. Pat. No. 6,545,606 to Piri et al. discloses a Device and method for alerting to the need to recover something, identifying it, and determining its location for purposes of recovery. This invention discloses a man overboard beacon, still a low power transmitting beacon (less than 15 mw in average) which is not configured to communicate with satellites.
U.S. Pat. No. 6,362,778 to Neher discloses a Personal location detection system. This invention does not disclose location methods for men overboard, and the disclosed beacon is neither configured to reach any communication satellites.
U.S. patent application 20060196499 to Cannizzaro; Kenneth Peter discloses a Scuba diver surface location, navigational and communication device and method. The disclosed device is configured to operate on local VHF networks, not with communication satellites or in a wide area network. If a person wears or carries a satellite detectable beacon, the MOB could be located accurately, by Cospas-Sarsat for example, but this information is usually not communicated to the very vessel from which the person fell overboard. This is a problem, since when a vessel is in the open sea, away from shore, the vessel from which a man fell overboard is the most relevant source for swift and effective rescue.
Thus, it is mostly desirable to receive onboard the vessel updated location reports from a MOB.
Yet, it is also desirable to communicate such updated location reports from a MOB to the satellite SAR system, since the MOB vessel is not always available for rescue, as in case of a single handed vessel, when persons onboard are not capable of rescuing the MOB (e.g. if the skipper fell overboard), when the vessel itself is in trouble (e.g. fire, wrecking), etc.
Apparently, incorporating present satellite beacons with present wearable MOB devices, could lead to the required solution, i.e. a wearable beacon detectable by both the satellites and the vessel.
Further, such an incorporated system seems straight forward to achieve by shrinking the size of present satellite PLBs, and installing an onboard receiver similar to those carried by Cospas-Sarsat satellites.
But such an efficient dual mode MOB system is not straight forward, for several reasons.
One reason is that design considerations good for few satellite receivers (actually, satellites usually carry transponders, and much of the receiving process is done on earth) are not optimal for mass-produced receivers to be installed onboard ships, and unlike satellites, the vessel receiver may lack a line of sight with an MOB transmitter.
Aspects related to a receiver onboard a vessel configured to detect a satellite compatible MOB transmitter were already considered by the applicant, who proposed a method for “Determining Precise Direction and Distance to a Satellite Radio Beacon”, U.S. patent application Ser. No. 11/836,783, filed on 10 Aug. 2007.
Another reason for this non trivial incorporation is that current satellite PLB antennas are difficult to be conveniently worn by humans, while providing a good RF performance. A worn antenna should enable communicating the satellite, but disturb as less as possible the mariner, in its routine tasks, and especially when in distress.
Aspects related to a wearable antenna for a satellite compatible MOB transmitter were already considered by the applicant, who proposed a “Wrist Worn Communication Device coupled with Antenna Extendable by the Arm”, U.S. patent application Ser. No. 11/938,311, filed on 12 Nov. 2007.
Then, there is the method of activation of the MOB PLB to be effectively solved. Present PLBs are usually activated manually. Obviously, a manual activation of an MOB PLB is not desirable, since the person overboard might be unable to activate the device, being unconscious, or almost frozen, or simply focused on keeping itself above the water level. Alternatively, an automatic activation could be considered, e.g. upon water sensing.
U.S. Pat. No. 5,710,989 to Flood discloses a Water-activated emergency radio beacon.
However, an automatic activation might cause many false alarms, e.g. when a person bearing the PLB innocently jumps to swim by the boat, or washes hands onboard. Furthermore, it would be desirable, that if an MOB is swiftly rescued by the vessel from which he fell overboard, the satellite system would not be alerted, in order to avoid unnecessary SAR operations directed by the satellite system operators.
It is then an object of the present invention to provide a system and device and method for MOB, enabling detecting and locating an MOB by means installed onboard a vessel, as well as by a satellite SAR system (or satellite communication system linked to SAR capable teams).
It is also an object of the present invention to provide a system and device and method for MOB, enabling detecting and locating an MOB by means installed onboard a vessel, as well as by a satellite SAR system, even if that MOB is unconscious.
It is another object of the present invention to provide a system and device and method for MOB, significantly reducing the probability of alerting a satellite SAR system, if the MOB is swiftly rescued by the vessel from which he fell overboard.
It is yet another object of the present invention to provide a system and device and method for MOB, compatible with a satellite system for Search and Rescue, such as Cospas-Sarsat.
As already indicated, the present invention is not limited to the MOB application, nor to Cospas-Sarsat or any other satellite SAR system. There are other scenarios that can benefit from the present invention, some of which are briefly described in the following.
In the military arena, the control over a group of soldiers, during a military operation, is paramount. It would be advantageous that if one of the soldiers is getting away from the group, undesirably, beyond a predefined range, the group commander would be alerted, and provided with this soldier last known location. Further, if this soldier is too distant from the group, it would be advantageous to report this soldier location to a remote headquarters, in order to enable the headquarters to better control the operation and assist this soldier, when required.
A similar logic may apply to a group of tourists, traveling in a foreign country, with a guide. Here, it would be desirable that the guide would be indicated that one of his group members is potentially lost, and provide the guide with the location of his lost sheep, in order to promptly get it back to the herd. However, if the tourist is too distant, e.g. left behind when the group took the bus, then it would be desirable to report its location to a remote station, e.g. the tourist office.
Naturally, the same logic applies to other applications and scenarii, where the location of an object, such as a person, but also an animal, pet, or valuable, is to be monitored, in reference with a fixed location, such as a home or farm or schoolyard, or in reference with a moving point, such as a vehicle or a roaming group. For example, the present invention may assist in locating a senior citizen that leaves home, potentially lost, or a cow that moves away from the corrals, or a car that is unlawfully taken away from the garage.
So, it is as well an object of the present invention to provide a system and device and method for location of an object, in reference with a predefined place, or in reference with a moving point.
It is still an object of the present invention to provide a system and device and method for location of objects such as: person, animal, pet, vehicle, weapon, ammunition, valuable asset.
One requirement which is common to the above mentioned cases of MOB and soldiers and tourists and so on, is that their location should preferably be first locally monitored, accordingly by the vessel, the commander and the tourist guide, and if successfully located at this stage, it is desirable to avoid alerting the remote station, accordingly the satellite SAR system, the military headquarters and the tourist office, as if already treated locally, it is a false alarm for the remote station.
Then, it is also an object of the present invention to provide a system and device and method for location of objects, first by a local monitoring station, and otherwise by a remote monitoring station.
So it is still another object of the present invention to provide a system and device and method for location of objects, reducing the false alarm rate at a remote monitoring station.
For MOB, an automatic activation of the location beacon is paramount, but this is also relevant to other applications, as the soldier and tourist, and is certainly relevant to a lost animal or stolen car.
So, it is nonetheless an object of the present invention to provide a system and device and method for location of objects, with an automatic activation of the device attached to the object.
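A minimal sketch of the two-tier alerting logic these objects call for. Everything here is an assumption for illustration (the grace period, the function names, the transport), not a disclosed embodiment:

import time

LOCAL_WINDOW_S = 120.0   # assumed grace period for an onboard rescue

def notify_local_station(beacon_id):      # hypothetical local link
    print(f"[local] MOB alert for beacon {beacon_id}")

def notify_satellite_system(beacon_id):   # hypothetical satellite uplink
    print(f"[satellite] distress escalated for beacon {beacon_id}")

def alert(beacon_id, water_sensed, locally_acknowledged, activation_time):
    """Escalate to the remote (satellite) system only if the local
    station has not resolved the event within the grace period."""
    if not water_sensed:
        return "idle"
    # Tier 1: alert the local monitoring station immediately.
    notify_local_station(beacon_id)
    if locally_acknowledged:
        return "handled locally"   # no alert upstream, fewer false alarms
    if time.time() - activation_time > LOCAL_WINDOW_S:
        # Tier 2: escalate to the remote monitoring station.
        notify_satellite_system(beacon_id)
        return "escalated"
    return "waiting for local response"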
Other objects and advantages of the invention will become apparent as the description proceeds. |
#include <stdio.h>
#include <string.h>
/* Reverse a string in place by swapping characters from both ends. */
void str_rvs(char a[100])
{
    int j, k;
    char ch;
    int len = (int) strlen(a);
    for (j = 0, k = len - 1; j < k; j++, k--)
    {
        ch = a[j];
        a[j] = a[k];
        a[k] = ch;
    }
}
int main()
{
    char s1[100];
    printf("Enter string :");
    /* fgets replaces the unsafe gets(); strip the trailing newline it keeps */
    if (fgets(s1, sizeof s1, stdin) != NULL)
    {
        s1[strcspn(s1, "\n")] = '\0';
        str_rvs(s1);
        printf("String reverse :");
        puts(s1);
    }
    return 0;
}
|
import json
def filter_geojson(fn, bbox):
    """Return the GeoJSON from file fn, keeping only the features inside bbox (l, b, r, t)."""
    l, b, r, t = bbox
    assert l < r
    assert b < t
    # open the caller-supplied file; a context manager ensures it is closed
    with open(fn) as f:
        js = json.load(f)
    _features = []
    for feature in js['features']:
        lng, lat = feature['geometry']['coordinates']
        if l < lng < r and b < lat < t:
            _features.append(feature)
    js['features'] = _features
    return js
if __name__ == "__main__":
from pprint import pprint
pprint(filter_geojson('usgs_gage_locations.geojson', [-117, 46.5, -116.5, 47]))
|
Anthopleura ballii
Description
Anthopleura ballii has a broad base up to 5 cm (2 in) across and a trumpet-shaped column up to 10 cm (4 in) high. The surface of the column bears forty-eight longitudinal rows of small warts, each tipped with red. The oral disc is wide and there are up to 96 tapering tentacles arranged in five whorls. These are retractable to a limited extent and are flecked with white. The colouring of this species is variable, being some shade of red or yellow, with the tentacles sometimes having an iridescent green sheen. The warts are non-adhesive. This is in contrast to the closely related glaucous pimplet (Anthopleura thallia), which has gravel or debris adhering to the column.
Distribution and habitat
Anthopleura ballii is native to the northeastern Atlantic Ocean and the coasts of Western Europe. It is found on rocky coasts from the intertidal zone down to depths of about 25 metres (82 ft). It usually occurs in crevices, in the holes made by piddocks as they burrow, under boulders and in other concealed locations. It is sometimes attached to pebbles and shells and may be semi-immersed in sand or mud.
Biology
Anthopleura ballii contains unicellular dinoflagellates living inside the tissues. These are species of Symbiodinium and are commonly known as zooxanthellae. They are photosynthetic organisms and provide the sea anemone with nutrients and energy, the products of photosynthesis. This type of arrangement is common in corals and sea anemones in nutrient-deficient tropical seas but is rare in temperate waters, which tend to be nutrient-rich. Researchers have found that the ova become infected with maternal zooxanthellae just before spawning takes place. The zooxanthellae are restricted to one side of the ovum and during the rearrangement of tissues that takes place during the development of the embryo into a planula larva, the zooxanthellae are confined to the endoderm of the larva and to the gastrodermal cells of the adult. This method of acquiring zooxanthellae is unusual. In tropical seas zooxanthellae are frequently liberated into the sea by symbiotic invertebrates and occur in the faeces of predators feeding on symbiotic cnidarians. This means that there is little need for maternal transfer of symbionts. This is not the case in temperate seas where free-living zooxanthellae are scarce.
A. ballii is gonochoric, with individuals being either male or female. It is a broadcast spawner. |
In vitro phenotypes to elvitegravir and dolutegravir in primary macrophages and lymphocytes of clonal recombinant viral variants selected in patients failing raltegravir. OBJECTIVES The cross-resistance profiles of elvitegravir and dolutegravir on raltegravir-resistant variants are still controversial or unavailable in macrophages, and extensive evaluations on wide panels of clonal variants are lacking. Thus, a complete evaluation in parallel with all currently available integrase inhibitors (INIs) was performed. METHODS The integrase coding region was RT-PCR-amplified from patient-derived plasma samples and cloned into an HIV-1 molecular clone lacking the integrase region. Twenty recombinant viruses bearing mutations to all primary pathways of resistance to raltegravir were phenotypically evaluated with each integrase inhibitor in freshly purified CD4+ T cells or monocyte-derived macrophages. RESULTS Y143R single mutants conferred a higher level of raltegravir resistance in macrophages compared with CD4+ T cells (FC 9.55-11.56). All other combinations had similar effects on viral susceptibility to raltegravir in both cell types. Elvitegravir displayed a similar behaviour both in lymphocytes and macrophages with all the tested patterns. When compared with raltegravir, none to modest increases in resistance were observed for the Y143R/C pathways. Dolutegravir maintained its activity and cross-resistance profile in macrophages. Only Q148H/R variants had a reduced level of susceptibility (FC 5.48-18.64). No variations were observed for the Y143R/C (+/-T97A) or N155H variants. CONCLUSIONS All INIs showed comparable antiretroviral activity in both cell types even if single mutations were associated with different levels of susceptibility in vitro to raltegravir and elvitegravir in macrophages. In particular, dolutegravir was capable of inhibiting with similar potency infection of raltegravir-resistant variants with Y143 or N155 pathways in both HIV-1 major cell reservoirs. |
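For readers unfamiliar with the FC (fold change) notation above: it is the ratio of a mutant's 50% inhibitory concentration to the wild-type's. A toy calculation with invented IC50 values (not the study's measurements):

# Invented IC50 values in nM, purely to show how an FC such as 9.55 arises.
ic50_wild_type = 2.0
ic50_mutant = 19.1

fold_change = ic50_mutant / ic50_wild_type
print(f"FC = {fold_change:.2f}")   # 9.55 -> reduced susceptibility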
// WithDecodeHook sets the decode hooks for this decoder
func WithDecodeHook(hooks ...mapstructure.DecodeHookFunc) DecodeOption {
return func(dc *mapstructure.DecoderConfig) {
if len(hooks) == 0 {
dc.DecodeHook = nil
} else if len(hooks) == 1 {
dc.DecodeHook = hooks[0]
} else {
dc.DecodeHook = mapstructure.ComposeDecodeHookFunc(hooks...)
}
}
} |
Emily Ford
Emily Susan Ford (1850–1930), artist and campaigner for women's rights, was born into a Quaker family in Leeds. She trained as an artist at the Slade School of Art and exhibited at the Royal Academy.
Life
Emily Ford was born in Leeds into a politically active Quaker family who moved to Adel Grange in Adel on the outskirts of Leeds when she was 15. Her parents were Robert Lawson Ford (1809–1878) a solicitor and Hannah (née Pease) (1814–1886). Her youngest sister Isabella became a prominent campaigner for the rights of working women. When in Leeds Emily lived at the family home, Adel Grange, but after her older sister Bessie died in 1922, Emily and Isabella moved to Adel Willows, a small property nearby.
Ford attended the Slade School of Art in London from 1875.
From 1873 until 1881 Ford was an active member of the Leeds Ladies' Educational Association, which provided lectures and courses, supervised Cambridge Local Examinations and with other local bodies founded Leeds Girls' High School. Ford was secretary of the association and in 1879 backed a series of lectures on the laws relating to women's property rights and custody of infants. The controversy surrounding these subjects split the association's membership and it was abandoned in 1881. Like other members of the association, such as Alice Cliff Scatcherd, Ford was a member of the Manchester Society for Women's Suffrage which was a hive for activism in the 1880s. The society formed strong links with the Manchester Society of Women Painters, of which Ford was a member. The society was active from 1879 until 1883 and had among its leading members suffragettes such as Susan Dacre, Annie Swynnerton and Jessie Toler Kingsley. Ford became vice-president of the Leeds Suffrage Society where she was an active member and speaker
In 1887 Ford and her sisters Isabella and Bessie became heavily involved in labour politics, focusing on the inequalities of capitalism, class and gender. Ford joined the Leeds Socialist League. Together with her sisters and Scatcherd she supported strikes of women weavers and the tailoresses in 1888 and 1889 with practical assistance and contributions towards the strike fund.
In the early 1880s Ford became interested in spiritualism and joined the Society for Psychical Research. For spiritualists colour invoked spiritual and emotional states. Ford painted The Sphere of Suffering series, in which she depicted the "naked Soul in the Storm Abyss" as a female nude plunging through space while shafts of light break through the clouds above. Ford argued that "people must learn to see Spiritual truth as an artist must learn to see colour".
Ford's religious convictions, feminism and social politics underwent profound change. She converted to Anglicanism, abandoned socialism and instead of focusing on a wide range of issues that concerned women she focused her efforts on women's suffrage. She transferred her suffrage society membership to London and expressed the desire that her art works should be hung "where they could speak". By that time declamatory art by women artists had reached a wide audience outside the institutions of culture and scholarship through the women's suffrage banners. She was baptised into the Anglican Church at All Souls, Blackman Lane Leeds in 1890.
Work
Ford's work was influenced by the Pre-Raphaelite movement, particularly Burne-Jones. After her baptism at All Souls, she gave the church a tall font canopy designed by R. J. Johnson of Newcastle, attached to which are eight panels that she painted herself. Painted in a primitivist Italian style, they depict scenes from the Bible, but the figures in them are portraits of people she knew: her friends, clerics, the church's congregation and herself. The paintings were restored after fundraising and intervention by the Victorian Society. Her painting Towards the Dawn, described as "feminist", was donated to Newnham College in 1890 by her friend Millicent Fawcett.
Ford had a studio in Chelsea that was described by fellow artist Dora Meeson as "a meeting ground for artists, suffragists, people who "did" things". She joined the Artists' Suffrage League and designed a poster for it in 1908. She continued to devote herself to religious art, designing stained-glass windows and painting murals, but also produced posters, banners and shields for the suffrage movements.
/**
* @author Huang Zhaoping
*
*
*/
public class SmsProvider {
private String apiUrl;
private String apiUrlBak;
private String authorization;
private Map<String, String> basicBody;
private ObjectMapper objectMapper = new ObjectMapper();
public SmsProvider(SmsProperties smsProperties) {
apiUrl = StringUtils.isEmpty(smsProperties.getSmsUrl())? ReflectionUtils.tryGetStaticFieldValue("com.yh.csx.bsf.core.base.BsfBaseConfig","smsurl",""):smsProperties.getSmsUrl();
apiUrlBak= StringUtils.isEmpty(smsProperties.getSmsUrlBak())?ReflectionUtils.tryGetStaticFieldValue("com.yh.csx.bsf.core.base.BsfBaseConfig","smsurlbak",""):smsProperties.getSmsUrlBak();
authorization = "Basic " + Base64.getEncoder().encodeToString(((StringUtils.isEmpty(smsProperties.getSmsUser())?ReflectionUtils.tryGetStaticFieldValue("com.yh.csx.bsf.core.base.BsfBaseConfig","smsuser",""):smsProperties.getSmsUser()) + ":" + (StringUtils.isEmpty(smsProperties.getSmsPassword())?ReflectionUtils.tryGetStaticFieldValue("com.yh.csx.bsf.core.base.BsfBaseConfig","smspassword",""):smsProperties.getSmsPassword())).getBytes());
basicBody = new HashMap<>();
basicBody.put("SMS_SERVER", StringUtils.isEmpty(smsProperties.getSmsServer())? ReflectionUtils.tryGetStaticFieldValue("com.yh.csx.bsf.core.base.BsfBaseConfig","smsserver",""):smsProperties.getSmsServer());
basicBody.put("USER_ID", StringUtils.isEmpty(smsProperties.getSmsUser())? ReflectionUtils.tryGetStaticFieldValue("com.yh.csx.bsf.core.base.BsfBaseConfig","smsuser",""):smsProperties.getSmsUser());
}
/**
 * Newly added endpoint:
 * sends an SMS message.
 */
public void sendText(String phone,String content,String systemCode)
{
Assert.hasText(phone, "手机号码不能为空");
Assert.hasText(content, "短信内容不能为空");
Assert.hasText(systemCode, "系统编码不能为空");
Map<String,String> map=new HashMap<String,String>();
map.put("content", content);
map.put("phone", phone);
map.put("systemCode", systemCode);
try {
HttpUriRequest request = createPostRequest(apiUrlBak,map);
try (CloseableHttpClient client = getHttpClient(apiUrlBak)) {
try (CloseableHttpResponse response = client.execute(request)) {
StatusLine status = response.getStatusLine();
if (status.getStatusCode() != 200) {
throw new BsfException("请求短信接口失败: " + status.getStatusCode() + ", Reason: " + status.getReasonPhrase());
}
}
}
}
catch (Exception exp)
{
throw new MessageException(exp);
}
}
/***
 *
 * It is recommended to use the sendText(String phone, String content, String systemCode) method instead.
 */
@Deprecated
public void sendText(String phone, String content) {
sendRequest("/sms/v1/yhsms/industrial", phone, content);
}
public void sendVoiceCode(String phone, String code) {
if (code == null || !code.matches("\\d{4,6}")) {
throw new MessageException("语音验证码内容只能是4~6位数字");
}
sendRequest("/sms/v1/yhsms/voice", phone, code);
}
private void sendRequest(String servicePath, String phone, String content) {
try {
HttpUriRequest request = createRequest(apiUrl, servicePath, phone, content);
try (CloseableHttpClient client = getHttpClient(apiUrl)) {
try (CloseableHttpResponse response = client.execute(request)) {
StatusLine status = response.getStatusLine();
if (status.getStatusCode() != 200) {
throw new BsfException("请求短信接口失败: " + status.getStatusCode() + ", Reason: " + status.getReasonPhrase());
}
}
}
}
catch (Exception exp)
{
throw new MessageException(exp);
}
}
private HttpUriRequest createRequest(String basicUrl, String serviceUrl, String phone, String content) {
if (basicBody == null || basicUrl == null || basicUrl.length() == 0) {throw new IllegalStateException("短信服务未初始化");}
Map<String, String> body = new HashMap<>(basicBody);
body.put("PHONENUMBER", phone);
body.put("CONTENT", content);
String requestBody;
try {
requestBody = objectMapper.writeValueAsString(body);
} catch (JsonProcessingException e) {
throw new BsfException("转换JSON失败");
}
return RequestBuilder.create("POST")
.setUri(basicUrl + serviceUrl)
.addHeader("Authorization", authorization)
.setEntity(new StringEntity(requestBody, ContentType.create("application/json", "UTF-8")))
.build();
}
private CloseableHttpClient getHttpClient(String url) throws Exception {
String protocol = url.length() > 4 ? url.substring(0, 5).toLowerCase() : "";
if ("https".equalsIgnoreCase(protocol)) {
SSLContext ctx = SSLContext.getInstance("TLS");
ctx.init(null, new TrustManager[]{
new X509TrustManager() {
@Override
public void checkClientTrusted(X509Certificate[] chain, String authType) {
}
@Override
public void checkServerTrusted(X509Certificate[] chain, String authType) {
}
@Override
public X509Certificate[] getAcceptedIssuers() {
return null;
}
}
}, null);
SSLConnectionSocketFactory factory = new SSLConnectionSocketFactory(ctx, new String[]{"TLSv1"}, null,
(host, session) -> true);
return HttpClients.custom().setSSLSocketFactory(factory).build();
} else {
return HttpClients.createDefault();
}
}
/**
 *
 * Newly added SMS endpoint request builder.
 */
private HttpUriRequest createPostRequest(String serviceUrl, Map<String,String> data) {
if (basicBody == null || serviceUrl == null || serviceUrl.length() == 0) {throw new IllegalStateException("短信服务未初始化");}
Map<String, String> body = new HashMap<>(basicBody);
body.putAll(data);
String requestBody;
try {
requestBody = objectMapper.writeValueAsString(body);
} catch (JsonProcessingException e) {
throw new BsfException("转换JSON失败");
}
return RequestBuilder.create("POST")
.setUri(serviceUrl)
.addHeader("Authorization", authorization)
.setEntity(new StringEntity(requestBody, ContentType.create("application/json", "UTF-8")))
.build();
}
} |
package push
import (
"encoding/json"
"time"
)
type QuotaReq struct {
Appkey string `json:"appkey"`
Timestamp int64 `json:"timestamp"`
}
type QuotaResp struct {
Ret string `json:"ret"`
Data QuotaData `json:"data"`
}
type QuotaData struct {
VivoSysMsgCount string `json:"vivoSysMsgCount"`
XmAckedCount string `json:"xmAckedCount"`
OppoTotalCount string `json:"oppoTotalCount"`
XmQuotaCount string `json:"xmQuotaCount"`
OppoPushCount string `json:"oppoPushCount"`
VivoMarketMsgCount string `json:"vivoMarketMsgCount"`
OppoRemainCount string `json:"oppoRemainCount"`
}
func (a *App) Quota() (ret QuotaData, err error) {
var result []byte
data := QuotaReq{a.AppKey, time.Now().Unix()}
if result, err = a.Request(Host+QuotaPath, data); err != nil {
return
}
var q QuotaResp
if err = json.Unmarshal(result, &q); err != nil {
return
}
ret = q.Data
return
}
|
"""
MIT License
Copyrights © 2020, <NAME>.
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the “Software”), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
The Software is provided “as is”, without warranty of any kind, express or
implied, including but not limited to the warranties of merchantability, fitness
for a particular purpose and noninfringement. In no event shall the authors or
copyright holders be liable for any claim, damages or other liability, whether
in an action of contract, tort or otherwise, arising from, out of or in
connection with the software or the use or other dealings in the Software.
Except as contained in this notice, the name of <NAME> shall
not be used in advertising or otherwise to promote the sale, use or other
dealings in this Software without prior written authorization from
Philippe-<NAME>osselin.
"""
from .GameMode import GameMode
from state import GameState
from layer import ArrayLayer, UnitsLayer, BulletsLayer, ExplosionsLayer, SoundLayer
from command import MoveCommand, TargetCommand, ShootCommand, MoveBulletCommand, DeleteDestroyedCommand
import pygame
from pygame.math import Vector2
class PlayGameMode(GameMode):
def __init__(self):
super().__init__()
# Game state
self.gameState = GameState()
# Rendering properties
self.cellSize = Vector2(64,64)
# Layers
self.layers = [
ArrayLayer(self.cellSize,"assets/level/ground.png",self.gameState,self.gameState.ground,0),
ArrayLayer(self.cellSize,"assets/level/walls.png",self.gameState,self.gameState.walls),
UnitsLayer(self.cellSize,"assets/level/units.png",self.gameState,self.gameState.units),
BulletsLayer(self.cellSize,"assets/level/explosions.png",self.gameState,self.gameState.bullets),
ExplosionsLayer(self.cellSize,"assets/level/explosions.png"),
SoundLayer("assets/sound/170274__knova__rifle-fire-synthetic.wav","assets/sound/110115__ryansnook__small-explosion.wav")
]
# All layers listen to game state events
for layer in self.layers:
self.gameState.addObserver(layer)
# Controls
self.playerUnit = self.gameState.units[0]
self.gameOver = False
self.commands = [ ]
@property
def cellWidth(self):
return int(self.cellSize.x)
@property
def cellHeight(self):
return int(self.cellSize.y)
def processInput(self):
# Pygame events (close, keyboard and mouse click)
moveVector = Vector2()
mouseClicked = False
for event in pygame.event.get():
            # pygame.QUIT is a window event and must be checked on the event
            # type itself, before any keyboard handling
            if event.type == pygame.QUIT:
                self.notifyQuitRequested()
                break
            elif event.type == pygame.KEYDOWN:
                if event.key == pygame.K_ESCAPE:
                    self.notifyShowMenuRequested()
                    break
                elif event.key == pygame.K_RIGHT:
                    moveVector.x = 1
                elif event.key == pygame.K_LEFT:
                    moveVector.x = -1
                elif event.key == pygame.K_DOWN:
                    moveVector.y = 1
                elif event.key == pygame.K_UP:
                    moveVector.y = -1
            elif event.type == pygame.MOUSEBUTTONDOWN:
                mouseClicked = True
# If the game is over, all commands creations are disabled
if self.gameOver:
return
# Keyboard controls the moves of the player's unit
if moveVector.x != 0 or moveVector.y != 0:
self.commands.append(
MoveCommand(self.gameState,self.playerUnit,moveVector)
)
# Mouse controls the target of the player's unit
mousePos = pygame.mouse.get_pos()
targetCell = Vector2()
targetCell.x = mousePos[0] / self.cellWidth - 0.5
targetCell.y = mousePos[1] / self.cellHeight - 0.5
command = TargetCommand(self.gameState,self.playerUnit,targetCell)
self.commands.append(command)
# Shoot if left mouse was clicked
if mouseClicked:
self.commands.append(
ShootCommand(self.gameState,self.playerUnit)
)
# Other units always target the player's unit and shoot if close enough
for unit in self.gameState.units:
if unit != self.playerUnit:
self.commands.append(
TargetCommand(self.gameState,unit,self.playerUnit.position)
)
if unit.position.distance_to(self.playerUnit.position) <= self.gameState.bulletRange:
self.commands.append(
ShootCommand(self.gameState,unit)
)
# Bullets automatic movement
for bullet in self.gameState.bullets:
self.commands.append(
MoveBulletCommand(self.gameState,bullet)
)
# Delete any destroyed bullet
self.commands.append(
DeleteDestroyedCommand(self.gameState.bullets)
)
def update(self):
for command in self.commands:
command.run()
self.commands.clear()
self.gameState.epoch += 1
# Check game over
if self.playerUnit.status != "alive":
self.gameOver = True
self.notifyGameLost()
else:
oneEnemyStillLives = False
for unit in self.gameState.units:
if unit == self.playerUnit:
continue
if unit.status == "alive":
oneEnemyStillLives = True
break
if not oneEnemyStillLives:
self.gameOver = True
self.notifyGameWon()
def render(self, window):
for layer in self.layers:
layer.render(window)
|
$NetBSD: patch-mapserver.h,v 1.1 2012/12/24 21:09:47 joerg Exp $
--- mapserver.h.orig 2012-12-23 17:16:27.000000000 +0000
+++ mapserver.h
@@ -2614,10 +2614,10 @@ int msSaveRasterBuffer(rasterBufferObj *
int msSaveRasterBufferToBuffer(rasterBufferObj *data, bufferObj *buffer,
outputFormatObj *format);
-inline void msBufferInit(bufferObj *buffer);
-inline void msBufferResize(bufferObj *buffer, size_t target_size);
-MS_DLL_EXPORT inline void msBufferFree(bufferObj *buffer);
-MS_DLL_EXPORT inline void msBufferAppend(bufferObj *buffer, void *data, size_t length);
+void msBufferInit(bufferObj *buffer);
+void msBufferResize(bufferObj *buffer, size_t target_size);
+MS_DLL_EXPORT void msBufferFree(bufferObj *buffer);
+MS_DLL_EXPORT void msBufferAppend(bufferObj *buffer, void *data, size_t length);
struct rendererVTable {
int supports_transparent_layers;
|
/*
Name: XYZ
Version: 1.5.5
Web-site: http://www.qtrpt.tk
Programmer: <NAME>
E-mail: <EMAIL>
Web-site: http://www.aliks-os.tk
Copyright 2012-2015 <NAME>
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
#include "XYZ_Label.h"

XYZLabel::XYZLabel(QWidget *parent) : QLabel(parent) {
    m_bHover = false;
    setCursor(Qt::PointingHandCursor);
}

XYZLabel::XYZLabel(const QString &Text, QWidget *parent) : QLabel(Text, parent) {
    m_bHover = false;
    setCursor(Qt::PointingHandCursor);
}

XYZLabel::~XYZLabel() {
}

void XYZLabel::setHoverText(bool bHover) {
    m_bHover = bHover;
}

void XYZLabel::enterEvent(QEvent *) {
    if (m_bHover) {
        QFont font = this->font();
        font.setUnderline(m_bHover);
        setFont(font);
    }
}

void XYZLabel::leaveEvent(QEvent *) {
    if (m_bHover) {
        QFont font = this->font();
        font.setUnderline(false);
        setFont(font);
    }
}

void XYZLabel::mouseReleaseEvent(QMouseEvent *) {
    emit clicked();
}
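
// --- Usage sketch (assumption, not part of the original file) ---
// Connecting the label's clicked() signal, assuming XYZ_Label.h declares
// clicked() as a signal alongside the setHoverText() shown above.
//
//     XYZLabel *label = new XYZLabel("Open report", parentWidget);
//     label->setHoverText(true);  // underline the text while hovered
//     QObject::connect(label, &XYZLabel::clicked,
//                      [] { qDebug() << "label clicked"; });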
|
When virtually all of the residents of Piedmont, New Mexico, are found dead after the return to Earth of a space satellite, the head of the US Air Force's Project Scoop declares an emergency. Many years prior to this incident, a group of eminent scientists led by Dr. Jeremy Stone (Arthur Hill) had advocated for the construction of a secure laboratory facility that would serve as a base in the event an alien biological life form was returned to Earth from a space mission. Stone and his team - Drs. Dutton, Leavitt and Hall (David Wayne, Kate Reid, and James Olson, respectively) - go to the facility, known as Wildfire, and first try to isolate the life form while determining why two people from Piedmont (an old wino and a six-month-old baby) survived. The scientists methodically study the alien life form, unaware that it has already mutated and presents a far greater danger in the lab, which is equipped with a nuclear self-destruct device should the organism manage to escape. Written by garykmcd |
// SetRemove removes a single specified value from the specified set document.
// WARNING: This relies on Go's interface{} comparison behaviour!
// PERFORMANCE WARNING: This performs full set fetch, modify, store cycles.
func (b *Bucket) SetRemove(key string, value interface{}) (Cas, error) {
	for {
		var setContents []interface{}
		cas, err := b.Get(key, &setContents)
		if err != nil {
			return 0, err
		}

		foundItem := false
		newSetContents := make([]interface{}, 0)
		for _, item := range setContents {
			if item == value {
				foundItem = true
			} else {
				newSetContents = append(newSetContents, item)
			}
		}

		if !foundItem {
			return 0, ErrRangeError
		}

		cas, err = b.Replace(key, newSetContents, cas, 0)
		if err != nil {
			if IsKeyExistsError(err) {
				// The document changed under us; retry the fetch/modify/store cycle.
				continue
			}
			return 0, err
		}
		return cas, nil
	}
}
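
// --- Usage sketch (assumption, not part of the original file) ---
// Bucket construction and the error helpers come from the surrounding
// package; this only illustrates SetRemove's contract.
//
//	cas, err := bucket.SetRemove("team:roster", "alice")
//	switch {
//	case err == ErrRangeError:
//		// "alice" was not a member of the set
//	case err != nil:
//		// the fetch or the CAS-guarded replace failed
//	default:
//		fmt.Println("removed, new cas:", cas)
//	}
|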
/* ExcpOccured/DataTypes: tests/check-0.15.2/config.h */
/*-*- mode:C; -*- */
/* config.h. Generated from build/cmake/config.h.in by cmake configure */
/*
* Check: a unit test framework for C
*
* Copyright (C) 2011 <NAME>
* Copyright (C) 2001, 2002 <NAME>
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* This library is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this library; if not, write to the
* Free Software Foundation, Inc., 59 Temple Place - Suite 330,
* Boston, MA 02111-1307, USA.
*/
#if defined(__osf__)
# define _OSF_SOURCE
#endif
/*
* Ensure we have C99-style int64_t, etc, all defined.
*/
/* First, we need to know if the system has already defined them. */
#define HAVE_INT16_T
#define HAVE_INT32_T
#define HAVE_INT64_T
#define HAVE_INTMAX_T
#define HAVE_UINT8_T
#define HAVE_UINT16_T
#define HAVE_UINT32_T
#define HAVE_UINT64_T
#define HAVE_UINTMAX_T
/* We might have the types we want under other spellings. */
/* #undef HAVE___INT64 */
/* #undef HAVE_U_INT64_T */
/* #undef HAVE_UNSIGNED___INT64 */
/* The sizes of various standard integer types. */
#define SIZE_OF_SHORT 2
#define SIZE_OF_INT 4
#define SIZE_OF_LONG 8
#define SIZE_OF_LONG_LONG 8
#define SIZE_OF_UNSIGNED_SHORT 2
#define SIZE_OF_UNSIGNED 4
#define SIZE_OF_UNSIGNED_LONG 8
#define SIZE_OF_UNSIGNED_LONG_LONG 8
/*
* If we lack int64_t, define it to the first of __int64, int, long, and long long
* that exists and is the right size.
*/
#if !defined(HAVE_INT64_T) && defined(HAVE___INT64)
typedef __int64 int64_t;
#define HAVE_INT64_T
#endif
#if !defined(HAVE_INT64_T) && SIZE_OF_INT == 8
typedef int int64_t;
#define HAVE_INT64_T
#endif
#if !defined(HAVE_INT64_T) && SIZE_OF_LONG == 8
typedef long int64_t;
#define HAVE_INT64_T
#endif
#if !defined(HAVE_INT64_T) && SIZE_OF_LONG_LONG == 8
typedef long long int64_t;
#define HAVE_INT64_T
#endif
#if !defined(HAVE_INT64_T)
#error No 64-bit integer type was found.
#endif
/*
* Similarly for int32_t
*/
#if !defined(HAVE_INT32_T) && SIZE_OF_INT == 4
typedef int int32_t;
#define HAVE_INT32_T
#endif
#if !defined(HAVE_INT32_T) && SIZE_OF_LONG == 4
typedef long int32_t;
#define HAVE_INT32_T
#endif
#if !defined(HAVE_INT32_T)
#error No 32-bit integer type was found.
#endif
/*
* Similarly for int16_t
*/
#if !defined(HAVE_INT16_T) && SIZE_OF_INT == 2
typedef int int16_t;
#define HAVE_INT16_T
#endif
#if !defined(HAVE_INT16_T) && SIZE_OF_SHORT == 2
typedef short int16_t;
#define HAVE_INT16_T
#endif
#if !defined(HAVE_INT16_T)
#error No 16-bit integer type was found.
#endif
/*
* Similarly for uint64_t
*/
#if !defined(HAVE_UINT64_T) && defined(HAVE_UNSIGNED___INT64)
typedef unsigned __int64 uint64_t;
#define HAVE_UINT64_T
#endif
#if !defined(HAVE_UINT64_T) && SIZE_OF_UNSIGNED == 8
typedef unsigned uint64_t;
#define HAVE_UINT64_T
#endif
#if !defined(HAVE_UINT64_T) && SIZE_OF_UNSIGNED_LONG == 8
typedef unsigned long uint64_t;
#define HAVE_UINT64_T
#endif
#if !defined(HAVE_UINT64_T) && SIZE_OF_UNSIGNED_LONG_LONG == 8
typedef unsigned long long uint64_t;
#define HAVE_UINT64_T
#endif
#if !defined(HAVE_UINT64_T)
#error No 64-bit unsigned integer type was found.
#endif
/*
* Similarly for uint32_t
*/
#if !defined(HAVE_UINT32_T) && SIZE_OF_UNSIGNED == 4
typedef unsigned uint32_t;
#define HAVE_UINT32_T
#endif
#if !defined(HAVE_UINT32_T) && SIZE_OF_UNSIGNED_LONG == 4
typedef unsigned long uint32_t;
#define HAVE_UINT32_T
#endif
#if !defined(HAVE_UINT32_T)
#error No 32-bit unsigned integer type was found.
#endif
/*
* Similarly for uint16_t
*/
#if !defined(HAVE_UINT16_T) && SIZE_OF_UNSIGNED == 2
typedef unsigned uint16_t;
#define HAVE_UINT16_T
#endif
#if !defined(HAVE_UINT16_T) && SIZE_OF_UNSIGNED_SHORT == 2
typedef unsigned short uint16_t;
#define HAVE_UINT16_T
#endif
#if !defined(HAVE_UINT16_T)
#error No 16-bit unsigned integer type was found.
#endif
/*
* Similarly for uint8_t
*/
#if !defined(HAVE_UINT8_T)
typedef unsigned char uint8_t;
#define HAVE_UINT8_T
#endif
#if !defined(HAVE_UINT8_T)
#error No 8-bit unsigned integer type was found.
#endif
/* Define intmax_t and uintmax_t if they are not already defined. */
#if !defined(HAVE_INTMAX_T)
typedef int64_t intmax_t;
#define INTMAX_MIN INT64_MIN
#define INTMAX_MAX INT64_MAX
#endif
#if !defined(HAVE_UINTMAX_T)
typedef uint64_t uintmax_t;
#endif
/* Define to 1 if you have the declaration of `INT64_MAX', and to 0 if you
don't. */
/* #undef HAVE_DECL_INT64_MAX */
/* Define to 1 if you have the declaration of `INT64_MIN', and to 0 if you
don't. */
/* #undef HAVE_DECL_INT64_MIN */
/* Define to 1 if you have the declaration of `SIZE_MAX', and to 0 if you
don't. */
/* #undef HAVE_DECL_SIZE_MAX */
/* Define to 1 if you have the declaration of `SSIZE_MAX', and to 0 if you
don't. */
/* #undef HAVE_DECL_SSIZE_MAX */
/* Define to 1 if you have the declaration of `UINT32_MAX', and to 0 if you
don't. */
/* #undef HAVE_DECL_UINT32_MAX */
/* Define to 1 if you have the declaration of `UINT64_MAX', and to 0 if you
don't. */
/* #undef HAVE_DECL_UINT64_MAX */
/* Define to 1 if you have the <errno.h> header file. */
#define HAVE_ERRNO_H 1
/* Define to 1 if you have the `fork' function. */
#define HAVE_FORK 1
/* Define to 1 if you have the `getpid' function. */
#define HAVE_GETPID 1
/* Define to 1 if you have the `gettimeofday' function. */
#define HAVE_GETTIMEOFDAY 1
/* Define to 1 if you have the <inttypes.h> header file. */
#define HAVE_INTTYPES_H 1
/* Define to 1 if you have the <limits.h> header file. */
#define HAVE_LIMITS_H 1
/* Define to 1 if you have the `localtime_r' function. */
#define HAVE_DECL_LOCALTIME_R 1
/* Define to 1 if you have the `localtime_s' function. */
/* #undef HAVE_LOCALTIME_S */
/* Define to 1 if the system has the type `long long int'. */
/* #undef HAVE_LONG_LONG_INT */
/* Define to 1 if you have the `malloc' function. */
#define HAVE_MALLOC 1
/* Define to 1 if you have the `realloc' function. */
#define HAVE_REALLOC 1
/* Define to 1 if you have the `setenv' function. */
#define HAVE_DECL_SETENV 1
/* Define to 1 if you have the <signal.h> header file. */
#define HAVE_SIGNAL_H 1
/* Define to 1 if you have the 'sigaction' function. */
#define HAVE_SIGACTION 1
/* Define to 1 if you have the <stdarg.h> header file. */
#define HAVE_STDARG_H 1
/* Define to 1 if you have the <stdint.h> header file. */
#define HAVE_STDINT_H 1
/* Define to 1 if you have the <stdlib.h> header file. */
#define HAVE_STDLIB_H 1
/* Define to 1 if you have the `strdup' function. */
#define HAVE_DECL_STRDUP 1
/* Define to 1 if you have the <strings.h> header file. */
#define HAVE_STRINGS_H 1
/* Define to 1 if you have the <string.h> header file. */
#define HAVE_STRING_H 1
/* Define to 1 if you have the `strsignal' function. */
#define HAVE_DECL_STRSIGNAL 1
/* Define to 1 if you have the <sys/time.h> header file. */
#define HAVE_SYS_TIME_H 1
/* Define to 1 if you have the <sys/types.h> header file. */
#define HAVE_SYS_TYPES_H 1
/* Define to 1 if you have the <time.h> header file. */
#define HAVE_TIME_H 1
/* Define to 1 if you have the <unistd.h> header file. */
#define HAVE_UNISTD_H 1
/* Define to 1 if you have <windows.h> header file. */
/* #undef HAVE_WINDOWS_H */
/* Define to 1 if you have <synchapi.h> header file. */
/* #undef HAVE_SYNCHAPI_H */
/* Define to 1 if you have the 'InitOnceBeginInitialize' function. */
/* #undef HAVE_INIT_ONCE_BEGIN_INITIALIZE */
/* Define to 1 if you have the 'InitOnceComplete' function. */
/* #undef HAVE_INIT_ONCE_COMPLETE */
/* Define to 1 if the system has the type `unsigned long long'. */
/* #undef HAVE_UNSIGNED_LONG_LONG */
/* Define to 1 if the system has the type `unsigned long long int'. */
/* #undef HAVE_UNSIGNED_LONG_LONG_INT */
/* Define to 1 if the system has the type `wchar_t'. */
/* #undef HAVE_WCHAR_T */
/* Define to 1 if you have the `_getpid' function. */
/* #undef HAVE__GETPID */
/* Define to 1 if you have the `_localtime64_s' function. */
/* #undef HAVE__LOCALTIME64_S */
/* Define to 1 if you have the `_strdup' function. */
/* #undef HAVE__STRDUP */
/* Define 1 if you have pthread support. */
#define HAVE_PTHREAD 1
/* Version number of Check */
/* #undef CHECK_VERSION */
/* The size of `wchar_t', as computed by sizeof. */
/* #undef SIZEOF_WCHAR_T */
/* Define to 1 if strerror_r returns char *. */
/* #undef STRERROR_R_CHAR_P */
/* Define to 1 if you can safely include both <sys/time.h> and <time.h>. */
/* #undef TIME_WITH_SYS_TIME */
/*
* Some platform requires a macro to use extension functions.
*/
/* #undef SAFE_TO_DEFINE_EXTENSIONS */
#ifdef SAFE_TO_DEFINE_EXTENSIONS
/* Enable extensions on AIX 3, Interix. */
#ifndef _ALL_SOURCE
# define _ALL_SOURCE 1
#endif
/* Enable GNU extensions on systems that have them. */
#ifndef _GNU_SOURCE
# define _GNU_SOURCE 1
#endif
/* Enable threading extensions on Solaris. */
#ifndef _POSIX_PTHREAD_SEMANTICS
# define _POSIX_PTHREAD_SEMANTICS 1
#endif
/* Enable extensions on HP NonStop. */
#ifndef _TANDEM_SOURCE
# define _TANDEM_SOURCE 1
#endif
/* Enable general extensions on Solaris. */
#ifndef __EXTENSIONS__
# define __EXTENSIONS__ 1
#endif
#endif /* SAFE_TO_DEFINE_EXTENSIONS */
/* Number of bits in a file offset, on hosts where this is settable. */
/* #undef _FILE_OFFSET_BITS */
/* Define to 1 to make fseeko visible on some hosts (e.g. glibc 2.2). */
/* #undef _LARGEFILE_SOURCE */
/* Define for large files, on AIX-style hosts. */
/* #undef _LARGE_FILES */
/* Define for Windows to use Windows 2000+ APIs. */
/* #undef _WIN32_WINNT */
/* #undef WINVER */
/* Define to empty if `const' does not conform to ANSI C. */
/* #undef const */
/* Define to `int' if <sys/types.h> doesn't define. */
/* #undef clockid_t */
/* Define to `int' if <sys/types.h> doesn't define. */
/* #undef gid_t */
/* Define to `unsigned long' if <sys/types.h> does not define. */
/* #undef id_t */
/* Define to `int' if <sys/types.h> does not define. */
/* #undef mode_t */
/* Define to `long long' if <sys/types.h> does not define. */
/* #undef off_t */
/* Define to `int' if <sys/types.h> doesn't define. */
/* #undef pid_t */
/* Define to `unsigned int' if <sys/types.h> does not define. */
/* #undef size_t */
/* Define to `int' if <sys/types.h> does not define. */
/* #undef ssize_t */
/* Define to `int' if <sys/types.h> does not define. */
/* #undef timer_t */
/* Define to `int' if <sys/types.h> doesn't define. */
/* #undef uid_t */
/* Define to `int' if <sys/types.h> does not define. */
/* #undef intptr_t */
/* Define to `unsigned int' if <sys/types.h> does not define. */
/* #undef uintptr_t */
|
GOCAD TSurf 1
HEADER {
name: OCBA-CBFZ-EAST-Coronado_Bank_east_splay-CFM5_500m
visible: true
*solid*color: 0.149020 0.223529 0.490196 1
name_in_model_list: OCBA-CBFZ-EAST-Coronado_Bank_east_splay-CFM5
}
GOCAD_ORIGINAL_COORDINATE_SYSTEM
NAME " gocad Local"
PROJECTION Unknown
DATUM Unknown
AXIS_NAME X Y Z
AXIS_UNIT m m m
ZPOSITIVE Elevation
END_ORIGINAL_COORDINATE_SYSTEM
PROPERTY_CLASS_HEADER X {
kind: X
unit: m
}
PROPERTY_CLASS_HEADER Y {
kind: Y
unit: m
}
PROPERTY_CLASS_HEADER Z {
kind: Z
unit: m
is_z: on
}
PROPERTY_CLASS_HEADER vector3d {
kind: Length
unit: m
}
TFACE
VRTX 1 462330.3359375 3621316.65625 -1738.4110107421875
VRTX 2 462552.359375 3621200.25 -2031.7490234375
VRTX 3 462468.76171875 3621536.59375 -1975.14453125
VRTX 4 462352.640625 3620927.59375 -1681.5325927734375
VRTX 5 462668.84375 3620771.40625 -2070.3232421875
VRTX 6 462456.49609375 3620441.90625 -1668.69384765625
VRTX 7 462813.359375 3620292.875 -2071.981201171875
VRTX 8 462612.0234375 3619968.15625 -1675.373291015625
VRTX 9 462189.1875 3620119.8125 -1222.9761962890625
VRTX 10 462122.53515625 3620608.625 -1319.046142578125
VRTX 11 461821.6875 3620434.5625 -913.02044677734375
VRTX 12 461816.3125 3619963.25 -760.0758056640625
VRTX 13 462130.21484375 3619706.0625 -993.54315185546875
VRTX 14 462474.83203125 3619622.3125 -1334.132568359375
VRTX 15 462858.96875 3619457.3125 -1699.2091064453125
VRTX 16 462726.66796875 3619101.09375 -1314.7364501953125
VRTX 17 462282.21875 3619301.75 -944.9678955078125
VRTX 18 462455.48046875 3618895.5 -863.77197265625
VRTX 19 462957.27734375 3618615.40625 -1234.6175537109375
VRTX 20 463105.47265625 3618985.9375 -1724.2860107421875
VRTX 21 463202.734375 3619379.96875 -2110.303955078125
VRTX 22 463451.17578125 3618951.90625 -2178.1005859375
VRTX 23 463376.08984375 3618579.4375 -1780.788330078125
VRTX 24 463377.703125 3618220.4375 -1469.6627197265625
VRTX 25 463142.41015625 3618217.0625 -1124.5614013671875
VRTX 26 463447.9296875 3617817 -1187.0977783203125
VRTX 27 463146.77734375 3617872.96875 -803.02691650390625
VRTX 28 463455.859375 3617448.53125 -855.4375
VRTX 29 463777.078125 3617329.1875 -1205.33984375
VRTX 30 463707.0078125 3617781.46875 -1511.1507568359375
VRTX 31 463696.546875 3618165.75 -1854.074951171875
VRTX 32 463736.109375 3618543.71875 -2224.84765625
VRTX 33 464026.796875 3618139.34375 -2268.81103515625
VRTX 34 463995.58203125 3617764.90625 -1893.2371826171875
VRTX 35 464323.94921875 3617739.3125 -2309.6435546875
VRTX 36 464286.87109375 3617345.03125 -1955.6461181640625
VRTX 37 464008.2109375 3617338.6875 -1551.455810546875
VRTX 38 464118.109375 3616772.5 -1255.796142578125
VRTX 39 463766.23046875 3616955.59375 -874.43994140625
VRTX 40 463982.19140625 3616469.5625 -814.24371337890625
VRTX 41 464316.7421875 3616286.25 -1192.4285888671875
VRTX 42 464523.73828125 3616502.96875 -1684.9658203125
VRTX 43 464280.796875 3616948.09375 -1646.0152587890625
VRTX 44 464560.1015625 3616919.75 -2047.872314453125
VRTX 45 464832.71875 3616492.0625 -2169.555908203125
VRTX 46 464838.8203125 3616068.25 -1880.1749267578125
VRTX 47 465105.9453125 3616076.375 -2317.782958984375
VRTX 48 465077.08984375 3615641.28125 -1997.9295654296875
VRTX 49 464799.87109375 3615628.4375 -1554.1632080078125
VRTX 50 464959.6953125 3615169.75 -1616.7410888671875
VRTX 51 465288.37890625 3615246.8125 -2121.32080078125
VRTX 52 465191.29296875 3614762 -1793.5748291015625
VRTX 53 464836.66015625 3614581.9375 -1275.4072265625
VRTX 54 464674.453125 3614988.9375 -1157.962890625
VRTX 55 464602.4140625 3615371.71875 -1147.4029541015625
VRTX 56 464471.79296875 3615795.53125 -1114.9908447265625
VRTX 57 464610.62890625 3616042.5 -1493.481201171875
VRTX 58 464168.41015625 3615998 -747.8350830078125
VRTX 59 463831.375 3616197.3125 -384.93637084960938
VRTX 60 463989.93359375 3615732 -326.18801879882812
VRTX 61 464281.82421875 3615525.1875 -703.90899658203125
VRTX 62 464360.65234375 3615099.71875 -719.02001953125
VRTX 63 464462.17578125 3614653.75 -816.34417724609375
VRTX 64 464601.09375 3614181.375 -882.15771484375
VRTX 65 464943.08203125 3613992.71875 -1133.6461181640625
VRTX 66 465178.51953125 3614229.28125 -1535.9879150390625
VRTX 67 465507.19921875 3614368.03125 -1985.190673828125
VRTX 68 465591.6875 3613955.25 -1818.8800048828125
VRTX 69 465914.57421875 3614057.1875 -2242.169189453125
VRTX 70 465997.4921875 3613697.625 -2059.02685546875
VRTX 71 465720.1484375 3613569.09375 -1651.1513671875
VRTX 72 465347.71484375 3613623.84375 -1293.1324462890625
VRTX 73 465834.33984375 3613110.59375 -1431.2015380859375
VRTX 74 465583.4375 3613043.1875 -1094.2054443359375
VRTX 75 465254.42578125 3613264 -894.3209228515625
VRTX 76 465376.27734375 3612755.1875 -702.68670654296875
VRTX 77 464936.453125 3613152.75 -526.023681640625
VRTX 78 464848.28515625 3613608.8125 -790.40380859375
VRTX 79 464495.24609375 3613408.71875 -360.630859375
VRTX 80 464572.44921875 3612966.4375 -146.16668701171875
VRTX 81 465011.296875 3612791.84375 -361.93008422851562
VRTX 82 465159.2890625 3612386.625 -358.70608520507812
VRTX 83 465526.0390625 3612267.5 -714.31866455078125
VRTX 84 465293.3046875 3611924.34375 -391.84271240234375
VRTX 85 465684.34375 3611791.5625 -772.40350341796875
VRTX 86 465859.4921875 3612120.25 -1057.835693359375
VRTX 87 465731.50390625 3612610.46875 -1062.8038330078125
VRTX 88 466023.21875 3612458.9375 -1362.36572265625
VRTX 89 466115.6484375 3611808.625 -1262.158447265625
VRTX 90 466361.6640625 3612233.21875 -1673.19091796875
VRTX 91 466193.62890625 3612787.25 -1700.69873046875
VRTX 92 466091.2578125 3613307.40625 -1870.1959228515625
VRTX 93 466322.94140625 3613462.28125 -2212.4189453125
VRTX 94 466426.98046875 3613048.6875 -2093.0283203125
VRTX 95 466568.47265625 3612594.71875 -2040.97216796875
VRTX 96 466749.04296875 3612130.9375 -2064.133544921875
VRTX 97 466535.3359375 3611770.21875 -1707.0689697265625
VRTX 98 466351.3046875 3611421.84375 -1406.530029296875
VRTX 99 466717.2421875 3611297.25 -1736.6846923828125
VRTX 100 466510.5625 3610945.6875 -1422.7579345703125
VRTX 101 466141.07421875 3611191.875 -1122.4451904296875
VRTX 102 466298.5390625 3610683.65625 -1131.8262939453125
VRTX 103 466692.9765625 3610477.9375 -1441.054443359375
VRTX 104 466904.234375 3610834.03125 -1755.3612060546875
VRTX 105 467098.91015625 3610373.03125 -1767.3709716796875
VRTX 106 466887.98828125 3610010.8125 -1450.917724609375
VRTX 107 467300.955078125 3609915.5 -1770.0126953125
VRTX 108 467073.4375 3609547.59375 -1440.247802734375
VRTX 109 467503.873046875 3609458.46875 -1771.716064453125
VRTX 110 467737.421875 3609836.0625 -2100.08251953125
VRTX 111 467531.92578125 3610291.875 -2101.99365234375
VRTX 112 467329.591796875 3610749.0625 -2100.15185546875
VRTX 113 467131.296875 3611207.9375 -2093.507568359375
VRTX 114 466940.33984375 3611669.5 -2078.974365234375
VRTX 115 467146.7421875 3612035.1875 -2418.328125
VRTX 116 466951.72265625 3612495.4375 -2407.25634765625
VRTX 117 466759.4921875 3612935.5 -2402.749755859375
VRTX 118 466589 3613311.5625 -2405.43310546875
VRTX 119 466483.1171875 3613610.34375 -2471.611572265625
VRTX 120 466233.80078125 3613790.4375 -2378.556396484375
VRTX 121 466212.83984375 3614062.75 -2553.9326171875
VRTX 122 465997.921875 3614327.5625 -2504.476318359375
VRTX 123 465752.703125 3614547.46875 -2375.84228515625
VRTX 124 465508.51953125 3614871.375 -2252.0927734375
VRTX 125 465599.8125 3615300.84375 -2580.42529296875
VRTX 126 465832.83984375 3614935.21875 -2677.91943359375
VRTX 127 466045.7265625 3614629.3125 -2741.21728515625
VRTX 128 466225.5 3614342.1875 -2745.446044921875
VRTX 129 466449.3515625 3614091.8125 -2800.374267578125
VRTX 130 466445.68359375 3613849.59375 -2623.41748046875
VRTX 131 466432.1171875 3614335.65625 -2944.05322265625
VRTX 132 466308.4375 3614589.96875 -2984.007080078125
VRTX 133 466148.08203125 3614914.625 -3021.67626953125
VRTX 134 465924.26171875 3615305.71875 -2999.912109375
VRTX 135 465658.37890625 3615716.59375 -2905.62548828125
VRTX 136 465367.26953125 3615679.96875 -2461.527587890625
VRTX 137 465383.92578125 3616111.1875 -2768.775146484375
VRTX 138 465112.7109375 3616507.1875 -2628.873291015625
VRTX 139 464850.21875 3616910.25 -2492.388916015625
VRTX 140 464589.23828125 3617324.03125 -2391.47802734375
VRTX 141 467346.466796875 3611576.875 -2423.5341796875
VRTX 142 467548.76171875 3611119.65625 -2425.53857421875
VRTX 143 467754.177734375 3610663.8125 -2423.584228515625
VRTX 144 467960.791015625 3610208.5 -2420.111083984375
VRTX 145 468168.484375 3609753.71875 -2415.273193359375
VRTX 146 467943.310546875 3609380.4375 -2097.68994140625
VRTX 147 468376.595703125 3609299.125 -2409.90771484375
VRTX 148 468148.609375 3608924.53125 -2096.1005859375
VRTX 149 468583.58984375 3608844 -2405.953369140625
VRTX 150 468353.3359375 3608468.34375 -2095.251708984375
VRTX 151 468790.177734375 3608388.6875 -2402.5146484375
VRTX 152 468558.005859375 3608012.125 -2094.557861328125
VRTX 153 468997.240234375 3607933.625 -2398.478759765625
VRTX 154 468763.25 3607556.1875 -2093.140380859375
VRTX 155 468315.423828125 3607630.1875 -1779.4046630859375
VRTX 156 468518.501953125 3607173.78125 -1779.875244140625
VRTX 157 468050.033203125 3607262.28125 -1448.8736572265625
VRTX 158 468243.845703125 3606798.875 -1445.475341796875
VRTX 159 467762.060546875 3606956.15625 -1117.0946044921875
VRTX 160 467932.91796875 3606491.46875 -1099.4814453125
VRTX 161 467489.919921875 3606617.09375 -782.3099365234375
VRTX 162 467304.388671875 3607090 -783.85693359375
VRTX 163 467574.005859375 3607426.90625 -1120.5882568359375
VRTX 164 467853.380859375 3607721.25 -1447.5164794921875
VRTX 165 467377.869140625 3607901.15625 -1115.1871337890625
VRTX 166 467127.66796875 3607550.59375 -786.4158935546875
VRTX 167 466885.9609375 3607212.09375 -467.28338623046875
VRTX 168 466717.1484375 3607680.15625 -471.43313598632812
VRTX 169 466477.10546875 3607332.28125 -150.23625183105469
VRTX 170 466643.1171875 3606863.1875 -149.29281616210938
VRTX 171 467055.546875 3606743.53125 -462.54022216796875
VRTX 172 467223.53515625 3606277.75 -458.86080932617188
VRTX 173 467654.21484375 3606149.6875 -768.8201904296875
VRTX 174 468114.703125 3606052.375 -1101.6534423828125
VRTX 175 468435.837890625 3606341.46875 -1442.67919921875
VRTX 176 468722.29296875 3606716.90625 -1781.3834228515625
VRTX 177 468968.609375 3607100.3125 -2091.587158203125
VRTX 178 469173.65625 3606644.3125 -2090.451171875
VRTX 179 468925.857421875 3606259.96875 -1783.3695068359375
VRTX 180 468631.298828125 3605887.65625 -1445.971923828125
VRTX 181 469123.39453125 3605801.40625 -1783.9998779296875
VRTX 182 468824.19140625 3605422.125 -1447.1829833984375
VRTX 183 468304.59375 3605585.8125 -1102.5150146484375
VRTX 184 467835.259765625 3605695.71875 -771.760986328125
VRTX 185 468014.5 3605226.4375 -766.9063720703125
VRTX 186 468501.216796875 3605079.5625 -1097.589111328125
VRTX 187 469035.703125 3604962.03125 -1467.997314453125
VRTX 188 469320.611328125 3605343.1875 -1788.3319091796875
VRTX 189 469580.9921875 3605731.09375 -2091.92724609375
VRTX 190 469783.32421875 3605273.90625 -2094.5263671875
VRTX 191 469525.83203125 3604885.34375 -1802.13232421875
VRTX 192 469266.234375 3604487.90625 -1504.1278076171875
VRTX 193 468759.583984375 3604572.25 -1146.718505859375
VRTX 194 468213.021484375 3604718.75 -767.79571533203125
VRTX 195 468335.912109375 3604244.21875 -725.07623291015625
VRTX 196 468714.103515625 3604103.28125 -978.94512939453125
VRTX 197 468491.87890625 3603752.53125 -710.92205810546875
VRTX 198 468928.3359375 3603523.96875 -978.82513427734375
VRTX 199 469078.873046875 3604040 -1239.725341796875
VRTX 200 469504.404296875 3603970.125 -1533.07177734375
VRTX 201 469354.744140625 3603457.6875 -1269.247314453125
VRTX 202 469786.84765625 3603415.125 -1562.61865234375
VRTX 203 469645.525390625 3602852.9375 -1296.475341796875
VRTX 204 469216.6796875 3602932.3125 -1001.71484375
VRTX 205 469443.076171875 3602386.1875 -1002.2244262695312
VRTX 206 469887.11328125 3602285.71875 -1323.227783203125
VRTX 207 470064.330078125 3602889.28125 -1610.962158203125
VRTX 208 470335.423828125 3602455.59375 -1717.7855224609375
VRTX 209 470213.89453125 3601989.4375 -1515.9488525390625
VRTX 210 470630.47265625 3602040.28125 -1839.4407958984375
VRTX 211 470436.0859375 3601535.6875 -1581.768798828125
VRTX 212 470902.568359375 3601619.84375 -1931.3212890625
VRTX 213 470688.4375 3601151.71875 -1659.4598388671875
VRTX 214 471196.447265625 3601219.65625 -1969.0142822265625
VRTX 215 471015.173828125 3600805.09375 -1745.6328125
VRTX 216 471488.484375 3600772.40625 -1981.20166015625
VRTX 217 471123.443359375 3600399.15625 -1695.542236328125
VRTX 218 470603.32421875 3600653.90625 -1453.451416015625
VRTX 219 470748.181640625 3600085.46875 -1404.084716796875
VRTX 220 471293.310546875 3600004.09375 -1687.1676025390625
VRTX 221 470925.58984375 3599636.65625 -1416.78515625
VRTX 222 470353.390625 3599646.8125 -1125.617919921875
VRTX 223 470282.103515625 3600299.46875 -1182.34619140625
VRTX 224 469921.939453125 3599943.65625 -945.57763671875
VRTX 225 469898.287109375 3599260.28125 -869.7659912109375
VRTX 226 470312.7578125 3599076.9375 -1037.75634765625
VRTX 227 470668.84765625 3599199.59375 -1216.8204345703125
VRTX 228 471096.822265625 3599161.59375 -1429.585205078125
VRTX 229 470718.115234375 3598675.90625 -1173.803466796875
VRTX 230 470190.55859375 3598641.125 -930.5859375
VRTX 231 469727.724609375 3598635 -724.50262451171875
VRTX 232 469972.798828125 3598152.75 -771.40032958984375
VRTX 233 470460.685546875 3598181.90625 -988.33251953125
VRTX 234 470948.455078125 3598214.4375 -1215.919189453125
VRTX 235 471244.1015625 3598681.21875 -1430.983642578125
VRTX 236 471587.201171875 3599097.375 -1674.3153076171875
VRTX 237 471757.35546875 3598579.34375 -1671.6783447265625
VRTX 238 471446.646484375 3598187.59375 -1451.0950927734375
VRTX 239 471194.16015625 3597782.15625 -1255.1158447265625
VRTX 240 471656.912109375 3597628 -1446.13818359375
VRTX 241 471946.26171875 3598035.96875 -1667.297119140625
VRTX 242 472242.904296875 3598445.03125 -1894.9697265625
VRTX 243 472428.86328125 3597910.90625 -1893.7772216796875
VRTX 244 472110.919921875 3597476.84375 -1639.083984375
VRTX 245 472610.78125 3597390.78125 -1898.6392822265625
VRTX 246 472279.251953125 3596984.5625 -1615.7890625
VRTX 247 471748.09765625 3597002.9375 -1332.6407470703125
VRTX 248 472002.33203125 3596530.625 -1309.843017578125
VRTX 249 472503.65625 3596529.34375 -1601.8443603515625
VRTX 250 472795.47265625 3596928.0625 -1906.6727294921875
VRTX 251 473082.20703125 3597312.09375 -2186.014404296875
VRTX 252 473259.66796875 3596868.3125 -2206.711669921875
VRTX 253 473523.2578125 3597244.90625 -2501.23974609375
VRTX 254 473353.28515625 3597704.875 -2452.265625
VRTX 255 472900.14453125 3597787.3125 -2153.113037109375
VRTX 256 472719.41015625 3598319.375 -2134.699462890625
VRTX 257 472543.67578125 3598835.71875 -2137.656005859375
VRTX 258 472067.41796875 3598973.3125 -1905.9847412109375
VRTX 259 471926.37109375 3599461.46875 -1923.6405029296875
VRTX 260 471440.53125 3599565.34375 -1680.20458984375
VRTX 261 471785.529296875 3599927.625 -1939.3162841796875
VRTX 262 472266.267578125 3599816.28125 -2168.342529296875
VRTX 263 472137.07421875 3600276.65625 -2185.104248046875
VRTX 264 471641.23828125 3600357.9375 -1957.6854248046875
VRTX 265 472003.162109375 3600691.90625 -2204.270751953125
VRTX 266 472497.1953125 3600628.65625 -2416.66748046875
VRTX 267 472359.583984375 3601059.75 -2431.94287109375
VRTX 268 471867.046875 3601036.34375 -2221.18310546875
VRTX 269 471650.482421875 3601324.40625 -2204.10888671875
VRTX 270 472161.833984375 3601427.84375 -2433.25537109375
VRTX 271 472663.349609375 3601457.125 -2636.809814453125
VRTX 272 472377.265625 3601864.59375 -2619.10546875
VRTX 273 471884.498046875 3601789.21875 -2411.8447265625
VRTX 274 471387.171875 3601701.375 -2189.2197265625
VRTX 275 471091.69140625 3602105.75 -2133.966552734375
VRTX 276 470809.251953125 3602523.5 -2044.95458984375
VRTX 277 470542.271484375 3602929.59375 -1944.8585205078125
VRTX 278 470248.890625 3603402.46875 -1867.40576171875
VRTX 279 469953.376953125 3603924.1875 -1830.1837158203125
VRTX 280 469722.94921875 3604424.90625 -1813.9677734375
VRTX 281 469983.998046875 3604815.96875 -2099.546142578125
VRTX 282 470185.884765625 3604357.71875 -2104.02392578125
VRTX 283 470419.509765625 3603876.5625 -2113.23388671875
VRTX 284 470728.15234375 3603400 -2165.619140625
VRTX 285 471004.60546875 3602998.03125 -2243.28076171875
VRTX 286 471283.84765625 3602586.8125 -2318.084228515625
VRTX 287 471579.5 3602178.25 -2376.738525390625
VRTX 288 472068.224609375 3602256.5 -2589.181640625
VRTX 289 471764.14453125 3602651.5625 -2552.75244140625
VRTX 290 471473.052734375 3603055.03125 -2502.94580078125
VRTX 291 471188.04296875 3603461.71875 -2445.096923828125
VRTX 292 470885.748046875 3603855.25 -2386.453857421875
VRTX 293 470647.025390625 3604289.84375 -2374.673583984375
VRTX 294 470438.181640625 3604744.09375 -2380.988525390625
VRTX 295 470233.744140625 3605200.375 -2381.693115234375
VRTX 296 470029.099609375 3605656.5625 -2382.6640625
VRTX 297 469823.55078125 3606112.375 -2384.7822265625
VRTX 298 469377.955078125 3606188 -2090.3154296875
VRTX 299 469617.80078125 3606568.0625 -2387.155517578125
VRTX 300 469411.181640625 3607023.34375 -2390.6357421875
VRTX 301 469204.447265625 3607478.59375 -2394.258544921875
VRTX 302 472834.001953125 3600989.59375 -2631.5908203125
VRTX 303 472965.955078125 3600507.375 -2625.835205078125
VRTX 304 472620.357421875 3600160.65625 -2405.83056640625
VRTX 305 472743.796875 3599681.59375 -2400.952392578125
VRTX 306 472404.1171875 3599327.875 -2155.29541015625
VRTX 307 472877.595703125 3599187.25 -2393.513671875
VRTX 308 473203.994140625 3599536.28125 -2638.465087890625
VRTX 309 473333.599609375 3599053.4375 -2645.179443359375
VRTX 310 473014.60546875 3598706.71875 -2387.49365234375
VRTX 311 473460.087890625 3598569.8125 -2655.63427734375
VRTX 312 473180.333984375 3598188.15625 -2401.7763671875
VRTX 313 473614.798828125 3598098.0625 -2707.100830078125
VRTX 314 473779.302734375 3597630.3125 -2771.507080078125
VRTX 315 473944.65234375 3597161.625 -2823.55908203125
VRTX 316 473700.55078125 3596790.9375 -2521.201904296875
VRTX 317 474111.150390625 3596691.125 -2853.92578125
VRTX 318 473902.080078125 3596312.46875 -2522.798828125
VRTX 319 474291.4375 3596225 -2867.64306640625
VRTX 320 474141.62890625 3595801.53125 -2538.29443359375
VRTX 321 473747.576171875 3595883.8125 -2194.992431640625
VRTX 322 474121.548828125 3595393.90625 -2303.960205078125
VRTX 323 473739.990234375 3595427.03125 -1922.8782958984375
VRTX 324 473313.833984375 3595938.1875 -1856.084228515625
VRTX 325 473372.228515625 3595374.9375 -1576.7598876953125
VRTX 326 473752.275390625 3594978.96875 -1645.524658203125
VRTX 327 473392.78515625 3594937.4375 -1303.7562255859375
VRTX 328 473745.939453125 3594546.8125 -1343.615478515625
VRTX 329 473361.939453125 3594508.0625 -1007.8527221679688
VRTX 330 473036.8359375 3594891.125 -1028.6846923828125
VRTX 331 472812.474609375 3594593.84375 -735.295166015625
VRTX 332 473267.595703125 3594197.90625 -763.3173828125
VRTX 333 473729.248046875 3594071.03125 -1018.9281616210938
VRTX 334 474095.724609375 3594174.21875 -1429.348876953125
VRTX 335 474104.72265625 3594592.46875 -1738.3350830078125
VRTX 336 474110.908203125 3594989.5625 -2036.7857666015625
VRTX 337 474455.548828125 3594994 -2436.941650390625
VRTX 338 474451.896484375 3594594.53125 -2137.293212890625
VRTX 339 474460.3125 3594204.40625 -1824.68505859375 CNXYZ
VRTX 340 474449.03515625 3593801.09375 -1534.0010986328125
VRTX 341 474118.708984375 3593709.28125 -1168.100830078125
VRTX 342 474452.078125 3593377.46875 -1323.6143798828125
VRTX 343 474171.837890625 3593225.5625 -1016.072021484375
VRTX 344 474447.654296875 3593013.53125 -1165.3062744140625
VRTX 345 474402.66015625 3592743.28125 -1037.6204833984375
VRTX 346 474156.244140625 3592745.5625 -853.25299072265625
VRTX 347 473936.400390625 3593081.71875 -787.28302001953125
VRTX 348 473835.08203125 3593541.625 -856.1849365234375
VRTX 349 473383.0078125 3593754 -626.8836669921875
VRTX 350 472897.326171875 3594098 -517.37493896484375
VRTX 351 472448.353515625 3594340.65625 -451.201171875
VRTX 352 472226.619140625 3594798.78125 -592.60186767578125
VRTX 353 471943.099609375 3594472.9375 -338.23046875
VRTX 354 471683.064453125 3594866.90625 -425.1031494140625
VRTX 355 471487.826171875 3594450.59375 -184.74838256835938
VRTX 356 471767.57421875 3594145.9375 -147.34461975097656
VRTX 357 472158.73046875 3594117.84375 -252.47801208496094
VRTX 358 472530.529296875 3593886 -284.21115112304688
VRTX 359 472951.859375 3593624.75 -359.89340209960938
VRTX 360 473240.70703125 3593323.03125 -411.58938598632812
VRTX 361 473597.001953125 3593343.40625 -628.04473876953125
VRTX 362 473692.52734375 3592997.09375 -592.1029052734375
VRTX 363 473455.646484375 3593058.15625 -467.69561767578125
VRTX 364 473641.923828125 3592750.53125 -502.72848510742188
VRTX 365 473427.25 3592807.0625 -390.24740600585938
VRTX 366 473193.71484375 3592956.1875 -301.20388793945312
VRTX 367 472898.078125 3593169.4375 -209.86178588867188
VRTX 368 472537.994140625 3593430.4375 -139.55955505371094
VRTX 369 472215.509765625 3593690.21875 -115.76949310302734
VRTX 370 471994.078125 3593921.90625 -119.40510559082031
VRTX 371 473881.931640625 3592748.0625 -657.37677001953125
VRTX 372 471186.361328125 3594824.90625 -241.72331237792969
VRTX 373 471390.236328125 3595252.9375 -487.097412109375
VRTX 374 471928.19921875 3595248.5625 -695.8536376953125
VRTX 375 472518.771484375 3595136.46875 -899.42498779296875
VRTX 376 472121.884765625 3595640 -975.9609375
VRTX 377 472577.00390625 3595658.71875 -1221.8455810546875
VRTX 378 472280.11328125 3596093.71875 -1282.0325927734375
VRTX 379 472778.005859375 3596094 -1574.065185546875
VRTX 380 473006.88671875 3595701.625 -1506.547119140625
VRTX 381 472947.4609375 3595291 -1222.244873046875
VRTX 382 473012.337890625 3596464.9375 -1891.5728759765625
VRTX 383 471802.37109375 3596078.21875 -1020.69580078125
VRTX 384 471511.134765625 3596505.875 -1052.489013671875
VRTX 385 471221.75 3596916.78125 -1054.3719482421875
VRTX 386 471394.27734375 3597325.78125 -1245.5498046875
VRTX 387 470945.255859375 3597356.125 -1050.128662109375
VRTX 388 470711.923828125 3597749.375 -1030.3880615234375
VRTX 389 470227.697265625 3597717.53125 -816.61907958984375
VRTX 390 469743.822265625 3597684.375 -609.5745849609375
VRTX 391 469979.28515625 3597278.96875 -628.796630859375
VRTX 392 470460.396484375 3597322.6875 -834.77471923828125
VRTX 393 470733.404296875 3596918.5 -841.666259765625
VRTX 394 471017.154296875 3596486.71875 -820.2264404296875
VRTX 395 471321.919921875 3596061.625 -789.0743408203125
VRTX 396 471606.380859375 3595660.90625 -746.63311767578125
VRTX 397 471107.232421875 3595643.59375 -540.1009521484375
VRTX 398 470888.896484375 3595217.71875 -309.9140625
VRTX 399 470604.349609375 3595620.78125 -371.8321533203125
VRTX 400 470830.90625 3596048.5625 -590.3087158203125
VRTX 401 470528.275390625 3596480.75 -620.98443603515625
VRTX 402 470246.935546875 3596880.5 -631.95843505859375
VRTX 403 469747.001953125 3596839.5 -442.60955810546875
VRTX 404 470035.326171875 3596434.3125 -429.330078125
VRTX 405 470329.6953125 3596033.4375 -414.42822265625
VRTX 406 469478.55859375 3597257.8125 -441.26043701171875
VRTX 407 469229.732421875 3597687.75 -412.41888427734375
VRTX 408 469486.150390625 3598132.75 -561.8218994140625
VRTX 409 468988.939453125 3598121.15625 -374.1217041015625
VRTX 410 469240.41796875 3598569.3125 -515.43756103515625
VRTX 411 469442.91015625 3599073.4375 -652.089111328125
VRTX 412 468984.07421875 3598980.875 -460.567626953125
VRTX 413 469086.544921875 3599451.46875 -520.50054931640625
VRTX 414 468640.43359375 3599297 -327.63333129882812
VRTX 415 468625.29296875 3598903.375 -312.57931518554688
VRTX 416 468770.41015625 3598539.65625 -337.20257568359375
VRTX 417 468704.841796875 3599763.15625 -347.21340942382812
VRTX 418 469128.84765625 3600003.40625 -545.32037353515625
VRTX 419 468717.5859375 3600259.75 -318.69412231445312
VRTX 420 469127.875 3600531.53125 -535.3626708984375
VRTX 421 469541.279296875 3600267.78125 -762.69329833984375
VRTX 422 469514.365234375 3599668.3125 -726.82940673828125
VRTX 423 469891.298828125 3600603.9375 -986.04241943359375
VRTX 424 469481.634765625 3600865.875 -748.91571044921875
VRTX 425 469757.7578125 3601245.03125 -993.34649658203125
VRTX 426 470163.5 3600996.4375 -1240.5721435546875
VRTX 427 470011.744140625 3601628.71875 -1266.9161376953125
VRTX 428 469623.29296875 3601854.25 -1002.794921875
VRTX 429 469191.55078125 3601991.5 -707.9454345703125
VRTX 430 469361.271484375 3601466.46875 -726.5347900390625
VRTX 431 469067.5234375 3601094.0625 -507.73773193359375
VRTX 432 468918.564453125 3601612.5 -455.55990600585938
VRTX 433 468615.380859375 3601235.40625 -247.88714599609375
VRTX 434 468718.7734375 3600757 -300.5372314453125
VRTX 435 468461.5 3601704.5 -188.57662963867188
VRTX 436 468736.021484375 3602091.96875 -425.88681030273438
VRTX 437 469002.400390625 3602469.59375 -703.43310546875
VRTX 438 468555.87890625 3602538.625 -420.43490600585938
VRTX 439 468286.3671875 3602169 -154.34219360351562
VRTX 440 468112.201171875 3602634.71875 -143.1170654296875
VRTX 441 468369.33984375 3602986.40625 -410.89962768554688
VRTX 442 468806.162109375 3602921.03125 -700.60357666015625
VRTX 443 468626.966796875 3603298.90625 -686.80712890625
VRTX 444 468209.32421875 3603422.4375 -417.21804809570312
VRTX 445 468042.89453125 3603898.5 -411.65557861328125
VRTX 446 467894.623046875 3604384.0625 -427.00149536132812
VRTX 447 467619.5390625 3604042.6875 -137.65206909179688
VRTX 448 467769.203125 3603568.8125 -132.06686401367188
VRTX 449 467938.451171875 3603101 -141.81932067871094
VRTX 450 467472.001953125 3604517.4375 -146.03642272949219
VRTX 451 467740.59375 3604868.75 -445.98440551757812
VRTX 452 467563.5234375 3605334.6875 -447.71896362304688
VRTX 453 467393.529296875 3605807.625 -453.30984497070312
VRTX 454 466975.4140625 3605925.15625 -147.08445739746094
VRTX 455 466809.21875 3606394.15625 -148.2440185546875
VRTX 456 467141.85546875 3605456.21875 -145.61619567871094
VRTX 457 467308.431640625 3604987.375 -143.97821044921875
VRTX 458 473457.240234375 3596409.90625 -2194.48974609375
VRTX 459 474458.6484375 3595399.75 -2687.6171875
VRTX 460 474448.5859375 3595787.8125 -2855.633056640625
VRTX 461 473090.201171875 3600023.0625 -2629.447021484375
VRTX 462 466311.06640625 3607801.34375 -151.209716796875
VRTX 463 466548.3828125 3608148.25 -475.48831176757812
VRTX 464 466145.0078125 3608270.40625 -152.20611572265625
VRTX 465 466379.38671875 3608616.25 -479.70468139648438
VRTX 466 466782.92578125 3608474.4375 -794.41387939453125
VRTX 467 466608.24609375 3608937.4375 -802.304443359375
VRTX 468 466209.01953125 3609083.71875 -485.370361328125
VRTX 469 465978.91796875 3608739.4375 -153.23690795898438
VRTX 470 465812.81640625 3609208.46875 -154.28465270996094
VRTX 471 466038.5703125 3609551.84375 -490.37530517578125
VRTX 472 466427.73046875 3609419.75 -812.8275146484375
VRTX 473 466799.76953125 3609283.5625 -1101.00439453125
VRTX 474 466657.625 3609742.1875 -1144.7529296875
VRTX 475 467260.33203125 3609101.1875 -1436.077392578125
VRTX 476 467707.955078125 3609001.90625 -1772.1182861328125
VRTX 477 467909.796875 3608544.40625 -1775.269287109375
VRTX 478 467464.296875 3608645.28125 -1445.218017578125
VRTX 479 467659.150390625 3608180.28125 -1445.41357421875
VRTX 480 468112.81640625 3608087.40625 -1777.0205078125
VRTX 481 467200.0546875 3608382.6875 -1128.8448486328125
VRTX 482 466957.953125 3608030.03125 -798.89984130859375
VRTX 483 467006.07421875 3608839.28125 -1121.028564453125
VRTX 484 466477.51171875 3610201 -1147.7891845703125
VRTX 485 466262.96484375 3609885.21875 -832.86273193359375
VRTX 486 465872.3125 3610021 -490.53274536132812
VRTX 487 465650.73046875 3609678.90625 -150.41616821289062
VRTX 488 465489.81640625 3610149.71875 -145.11628723144531
VRTX 489 465713.6640625 3610495.03125 -480.45391845703125
VRTX 490 466104.90625 3610350.96875 -837.57415771484375
VRTX 491 465940.4296875 3610829.53125 -813.3707275390625
VRTX 492 465570.88671875 3610973.125 -455.75234985351562
VRTX 493 465824.38671875 3611347.65625 -819.4697265625
VRTX 494 465433.01953125 3611448.0625 -424.6160888671875
VRTX 495 465060.48046875 3611576.9375 -65.11712646484375
VRTX 496 465203.046875 3611101 -92.450546264648438
VRTX 497 465344.12109375 3610624.71875 -121.55570983886719
VRTX 498 464917.34765625 3612052.75 -38.453708648681641
VRTX 499 464734.8203125 3612513.75 -60.553108215332031
VRTX 500 464423.37109375 3613848.65625 -581.74609375
VRTX 501 464262.85546875 3614298.6875 -488.16693115234375
VRTX 502 464134.875 3614759.125 -385.42852783203125
VRTX 503 464058.08203125 3615240.5 -289.07064819335938
VRTX 504 463644.328125 3616653.375 -452.56375122070312
VRTX 505 463435.83203125 3617097.34375 -502.91595458984375
VRTX 506 463132.96484375 3617486.6875 -437.62774658203125
VRTX 507 462824.39453125 3617872.75 -380.13101196289062
VRTX 508 462853.24609375 3618232.78125 -745.13885498046875
VRTX 509 462490.82421875 3618238.53125 -336.23928833007812
VRTX 510 462600.92578125 3618549.9375 -745.46099853515625
VRTX 511 462196.9609375 3618626.90625 -406.5810546875
VRTX 512 461994.97265625 3619050.28125 -564.134033203125
VRTX 513 461851.50390625 3619499.75 -652.8409423828125
VRTX 514 462979.7421875 3619821.53125 -2058.887939453125
VRTX 515 461825.3203125 3620863.34375 -1053.9713134765625
VRTX 516 462055.28515625 3621019.40625 -1340.37451171875
VRTX 517 461827.61328125 3621211.03125 -1169.1953125
VRTX 518 461930.64453125 3621403.21875 -1327.050048828125
VRTX 519 462136.34765625 3621245.0625 -1490.737060546875
VRTX 520 462073.95703125 3621549.21875 -1509.885009765625
VRTX 521 462271.22265625 3621542.90625 -1742.35498046875
TRGL 4 1 519
TRGL 519 1 520
TRGL 518 517 516
TRGL 516 517 515
TRGL 516 515 10
TRGL 8 514 7
TRGL 34 31 30
TRGL 17 14 13
TRGL 17 13 513
TRGL 17 512 18
TRGL 511 18 512
TRGL 510 18 511
TRGL 510 19 18
TRGL 25 508 27
TRGL 25 19 508
TRGL 506 27 507
TRGL 28 27 506
TRGL 29 28 39
TRGL 39 28 505
TRGL 39 505 504
TRGL 40 39 504
TRGL 72 68 66
TRGL 78 72 65
TRGL 64 78 65
TRGL 61 60 503
TRGL 62 61 503
TRGL 63 62 502
TRGL 510 511 509
TRGL 63 502 501
TRGL 64 63 501
TRGL 64 501 500
TRGL 64 500 78
TRGL 78 500 79
TRGL 101 493 491
TRGL 101 89 493
TRGL 89 85 493
TRGL 81 80 499
TRGL 82 81 499
TRGL 508 19 510
TRGL 82 499 498
TRGL 82 498 84
TRGL 492 494 496
TRGL 494 84 495
TRGL 494 85 84
TRGL 484 106 103
TRGL 484 103 102
TRGL 484 102 490
TRGL 490 486 485
TRGL 486 488 487
TRGL 471 487 470
TRGL 471 485 486
TRGL 474 485 472
TRGL 474 106 484
TRGL 478 476 475
TRGL 481 479 478
TRGL 481 478 483
TRGL 481 483 466
TRGL 486 490 489
TRGL 482 463 168
TRGL 481 482 165
TRGL 481 165 479
TRGL 479 165 164
TRGL 477 480 150
TRGL 508 507 27
TRGL 492 491 493
TRGL 152 480 155
TRGL 157 155 164
TRGL 479 480 477
TRGL 479 477 478
TRGL 478 477 476
TRGL 476 477 148
TRGL 109 146 110
TRGL 13 12 513
TRGL 473 108 474
TRGL 473 472 467
TRGL 468 472 471
TRGL 109 476 146
TRGL 468 471 470
TRGL 467 465 466
TRGL 463 462 168
TRGL 169 168 462
TRGL 304 305 461
TRGL 305 308 461
TRGL 257 310 307
TRGL 322 337 459
TRGL 375 352 331
TRGL 375 330 381
TRGL 516 4 519
TRGL 381 327 325
TRGL 252 458 316
TRGL 324 321 458
TRGL 382 324 458
TRGL 240 386 247
TRGL 390 232 408
TRGL 422 418 413
TRGL 198 201 199
TRGL 452 453 456
TRGL 456 453 454
TRGL 171 455 172
TRGL 172 454 453
TRGL 173 172 453
TRGL 173 453 184
TRGL 472 468 467
TRGL 184 452 185
TRGL 185 451 194
TRGL 446 450 447
TRGL 441 449 440
TRGL 443 441 442
TRGL 441 443 444
TRGL 184 453 452
TRGL 441 444 449
TRGL 172 455 454
TRGL 444 448 449
TRGL 444 445 448
TRGL 445 447 448
TRGL 445 195 446
TRGL 195 445 197
TRGL 197 445 444
TRGL 197 444 443
TRGL 443 198 197
TRGL 198 443 442
TRGL 204 442 437
TRGL 445 446 447
TRGL 442 438 437
TRGL 438 442 441
TRGL 438 441 440
TRGL 436 439 435
TRGL 436 438 439
TRGL 381 330 327
TRGL 429 432 430
TRGL 432 435 433
TRGL 420 434 419
TRGL 424 420 421
TRGL 420 424 431
TRGL 431 430 432
TRGL 430 431 424
TRGL 425 430 424
TRGL 425 428 430
TRGL 428 205 429
TRGL 423 224 223
TRGL 219 218 223
TRGL 426 423 223
TRGL 218 215 213
TRGL 211 426 213
TRGL 206 427 209
TRGL 465 463 466
TRGL 427 425 426
TRGL 426 425 423
TRGL 423 425 424
TRGL 418 420 419
TRGL 418 419 417
TRGL 418 417 413
TRGL 482 166 165
TRGL 414 413 417
TRGL 410 412 416
TRGL 412 415 416
TRGL 412 414 415
TRGL 224 422 225
TRGL 412 411 413
TRGL 428 429 430
TRGL 411 412 410
TRGL 411 410 231
TRGL 408 410 409
TRGL 408 409 407
TRGL 408 407 390
TRGL 406 390 407
TRGL 390 406 391
TRGL 403 391 406
TRGL 400 405 399
TRGL 491 489 490
TRGL 400 401 405
TRGL 401 404 405
TRGL 401 402 404
TRGL 402 391 403
TRGL 392 391 402
TRGL 393 402 401
TRGL 218 213 426
TRGL 393 401 394
TRGL 394 400 395
TRGL 373 396 397
TRGL 376 378 383
TRGL 519 520 518
TRGL 376 383 396
TRGL 518 516 519
TRGL 383 395 396
TRGL 383 384 395
TRGL 384 394 395
TRGL 385 387 393
TRGL 333 341 334
TRGL 387 392 393
TRGL 387 388 392
TRGL 389 391 392
TRGL 389 232 390
TRGL 389 233 232
TRGL 233 389 388
TRGL 234 233 388
TRGL 239 388 387
TRGL 486 489 488
TRGL 239 387 386
TRGL 386 387 385
TRGL 386 385 247
TRGL 247 385 384
TRGL 247 384 248
TRGL 248 383 378
TRGL 249 378 379
TRGL 484 490 485
TRGL 249 379 382
TRGL 379 380 324
TRGL 380 325 324
TRGL 377 380 379
TRGL 377 379 378
TRGL 377 378 376
TRGL 428 206 205
TRGL 394 401 400
TRGL 377 376 375
TRGL 218 426 223
TRGL 375 376 374
TRGL 354 373 372
TRGL 354 372 355
TRGL 346 347 371
TRGL 362 371 347
TRGL 362 364 371
TRGL 369 357 370
TRGL 358 357 369
TRGL 180 183 182
TRGL 20 19 23
TRGL 180 182 181
TRGL 175 179 176
TRGL 75 76 74
TRGL 508 509 507
TRGL 157 156 155
TRGL 306 307 305
TRGL 176 179 178
TRGL 50 51 48
TRGL 196 199 193
TRGL 161 171 172
TRGL 318 320 319
TRGL 123 127 126
TRGL 167 168 169
TRGL 1 521 520
TRGL 244 243 241
TRGL 480 152 150
TRGL 159 161 160
TRGL 15 21 514
TRGL 475 108 473
TRGL 266 302 267
TRGL 321 323 322
TRGL 336 338 337
TRGL 157 158 156
TRGL 152 151 150
TRGL 261 260 259
TRGL 148 149 147
TRGL 310 311 309
TRGL 82 76 81
TRGL 473 474 472
TRGL 40 38 39
TRGL 9 10 11
TRGL 375 331 330
TRGL 180 179 175
TRGL 112 143 142
TRGL 474 484 485
TRGL 113 142 141
TRGL 248 384 383
TRGL 36 140 35
TRGL 324 323 321
TRGL 162 167 171
TRGL 45 138 139
TRGL 256 310 257
TRGL 66 67 52
TRGL 195 194 446
TRGL 437 436 429
TRGL 341 340 334
TRGL 47 137 138
TRGL 159 158 157
TRGL 48 136 47
TRGL 155 154 152
TRGL 377 375 381
TRGL 48 51 136
TRGL 50 55 54
TRGL 34 33 31
TRGL 128 131 132
TRGL 198 196 197
TRGL 120 119 130
TRGL 53 64 65
TRGL 120 130 121
TRGL 121 130 129
TRGL 121 129 128
TRGL 122 121 128
TRGL 14 8 9
TRGL 124 123 126
TRGL 203 204 205
TRGL 124 125 51
TRGL 201 203 202
TRGL 15 514 8
TRGL 252 316 253
TRGL 190 281 295
TRGL 78 75 72
TRGL 124 67 123
TRGL 363 360 366
TRGL 123 67 69
TRGL 463 464 462
TRGL 69 120 121
TRGL 93 118 119
TRGL 160 175 158
TRGL 221 228 260
TRGL 95 116 117
TRGL 475 473 483
TRGL 194 451 446
TRGL 206 207 203
TRGL 154 177 301
TRGL 287 273 288
TRGL 326 335 336
TRGL 96 114 115
TRGL 97 99 114
TRGL 99 113 114
TRGL 92 91 94
TRGL 205 204 437
TRGL 100 104 99
TRGL 508 510 509
TRGL 50 54 53
TRGL 215 214 213
TRGL 99 104 113
TRGL 136 135 137
TRGL 423 421 224
TRGL 254 314 313
TRGL 105 111 112
TRGL 273 272 288
TRGL 105 107 111
TRGL 107 108 109
TRGL 176 177 156
TRGL 106 107 105
TRGL 106 105 103
TRGL 103 105 104
TRGL 70 93 120
TRGL 103 104 100
TRGL 451 457 450
TRGL 127 132 133
TRGL 353 356 357
TRGL 100 102 103
TRGL 280 282 281
TRGL 261 262 263
TRGL 107 110 111
TRGL 253 314 254
TRGL 463 465 464
TRGL 190 295 296
TRGL 150 149 148
TRGL 385 393 394
TRGL 98 99 97
TRGL 50 53 52
TRGL 123 122 127
TRGL 90 97 96
TRGL 492 496 497
TRGL 90 96 95
TRGL 91 90 95
TRGL 236 237 258
TRGL 96 116 95
TRGL 104 112 113
TRGL 51 125 136
TRGL 227 221 222
TRGL 376 396 374
TRGL 91 95 94
TRGL 176 178 177
TRGL 50 52 51
TRGL 474 108 106
TRGL 192 280 191
TRGL 19 16 18
TRGL 100 101 102
TRGL 52 67 124
TRGL 173 161 172
TRGL 380 381 325
TRGL 167 166 168
TRGL 241 243 242
TRGL 92 71 73
TRGL 92 73 91
TRGL 108 107 106
TRGL 96 115 116
TRGL 94 117 118
TRGL 88 89 90
TRGL 422 413 411
TRGL 87 73 74
TRGL 446 451 450
TRGL 209 427 211
TRGL 85 83 84
TRGL 83 82 84
TRGL 26 29 30
TRGL 186 183 185
TRGL 366 360 367
TRGL 88 90 91
TRGL 234 388 239
TRGL 152 154 153
TRGL 193 186 194
TRGL 81 77 80
TRGL 163 165 166
TRGL 93 94 118
TRGL 146 147 145
TRGL 42 44 43
TRGL 230 225 231
TRGL 171 167 170
TRGL 182 183 186
TRGL 187 192 191
TRGL 154 301 153
TRGL 114 141 115
TRGL 427 428 425
TRGL 111 144 143
TRGL 77 75 78
TRGL 55 61 62
TRGL 422 421 418
TRGL 68 69 67
TRGL 17 513 512
TRGL 286 290 285
TRGL 436 437 438
TRGL 211 212 210
TRGL 222 224 225
TRGL 85 494 493
TRGL 73 88 91
TRGL 270 267 271
TRGL 66 52 53
TRGL 66 53 65
TRGL 53 63 64
TRGL 219 217 218
TRGL 173 160 161
TRGL 63 54 62
TRGL 188 189 181
TRGL 255 254 312
TRGL 209 208 206
TRGL 473 467 483
TRGL 159 162 161
TRGL 155 156 154
TRGL 182 188 181
TRGL 351 353 357
TRGL 485 471 472
TRGL 171 170 455
TRGL 159 163 162
TRGL 231 410 408
TRGL 276 286 285
TRGL 353 355 356
TRGL 244 241 240
TRGL 58 59 60
TRGL 122 69 121
TRGL 158 176 156
TRGL 58 40 59
TRGL 327 326 325
TRGL 72 73 71
TRGL 1 3 521
TRGL 56 61 55
TRGL 42 45 44
TRGL 58 41 40
TRGL 41 58 56
TRGL 279 283 282
TRGL 274 287 275
TRGL 89 86 85
TRGL 465 468 469
TRGL 160 173 174
TRGL 124 51 52
TRGL 57 41 56
TRGL 180 181 179
TRGL 203 207 202
TRGL 57 46 42
TRGL 396 373 374
TRGL 49 46 57
TRGL 380 377 381
TRGL 283 284 292
TRGL 87 88 73
TRGL 255 312 256
TRGL 69 70 120
TRGL 49 57 56
TRGL 193 187 186
TRGL 50 49 55
TRGL 123 69 122
TRGL 167 169 170
TRGL 136 137 47
TRGL 411 225 422
TRGL 50 48 49
TRGL 87 86 88
TRGL 146 145 110
TRGL 44 45 139
TRGL 87 74 76
TRGL 343 342 341
TRGL 46 48 47
TRGL 465 469 464
TRGL 16 20 15
TRGL 325 323 324
TRGL 92 93 70
TRGL 4 516 10
TRGL 68 67 66
TRGL 43 44 36
TRGL 164 480 479
TRGL 44 139 140
TRGL 37 43 36
TRGL 71 92 70
TRGL 152 153 151
TRGL 95 117 94
TRGL 306 257 307
TRGL 54 63 53
TRGL 87 83 86
TRGL 34 35 33
TRGL 16 19 20
TRGL 43 37 38
TRGL 402 403 404
TRGL 159 160 158
TRGL 235 234 238
TRGL 42 38 41
TRGL 61 58 60
TRGL 262 261 259
TRGL 214 269 274
TRGL 352 374 354
TRGL 323 326 336
TRGL 136 125 135
TRGL 38 29 39
TRGL 150 151 149
TRGL 29 38 37
TRGL 160 174 175
TRGL 491 492 489
TRGL 255 245 251
TRGL 225 411 231
TRGL 49 48 46
TRGL 6 8 7
TRGL 101 100 98
TRGL 189 296 297
TRGL 110 144 111
TRGL 343 348 347
TRGL 163 164 165
TRGL 37 34 30
TRGL 34 36 35
TRGL 432 429 436
TRGL 24 26 30
TRGL 277 285 284
TRGL 122 128 127
TRGL 159 157 163
TRGL 31 32 23
TRGL 68 71 70
TRGL 24 31 23
TRGL 395 400 397
TRGL 112 111 143
TRGL 127 133 126
TRGL 361 362 347
TRGL 226 222 225
TRGL 57 42 41
TRGL 183 184 185
TRGL 343 344 342
TRGL 9 6 10
TRGL 246 248 249
TRGL 29 26 28
TRGL 412 413 414
TRGL 4 2 1
TRGL 373 398 372
TRGL 26 27 28
TRGL 373 397 398
TRGL 92 94 93
TRGL 26 25 27
TRGL 166 167 162
TRGL 241 242 237
TRGL 87 76 83
TRGL 349 360 361
TRGL 25 26 24
TRGL 491 490 102
TRGL 392 402 393
TRGL 269 273 274
TRGL 112 142 113
TRGL 125 126 134
TRGL 131 128 129
TRGL 328 333 334
TRGL 24 19 25
TRGL 6 4 10
TRGL 19 24 23
TRGL 82 83 76
TRGL 24 30 31
TRGL 458 318 316
TRGL 370 357 356
TRGL 20 23 22
TRGL 492 497 489
TRGL 20 22 21
TRGL 382 458 252
TRGL 20 21 15
TRGL 352 351 331
TRGL 164 155 480
TRGL 158 175 176
TRGL 16 17 18
TRGL 46 47 45
TRGL 395 397 396
TRGL 16 14 17
TRGL 14 16 15
TRGL 495 84 498
TRGL 109 475 476
TRGL 431 432 433
TRGL 14 15 8
TRGL 465 467 468
TRGL 148 147 146
TRGL 389 392 388
TRGL 215 216 214
TRGL 482 168 166
TRGL 77 78 79
TRGL 298 179 181
TRGL 285 291 284
TRGL 14 9 13
TRGL 488 489 497
TRGL 375 374 352
TRGL 192 200 280
TRGL 13 9 12
TRGL 483 467 466
TRGL 9 11 12
TRGL 38 40 41
TRGL 187 191 188
TRGL 113 141 114
TRGL 240 247 244
TRGL 251 252 253
TRGL 163 166 162
TRGL 341 342 340
TRGL 34 37 36
TRGL 120 93 119
TRGL 6 9 8
TRGL 6 7 5
TRGL 101 491 102
TRGL 353 354 355
TRGL 55 62 54
TRGL 268 270 269
TRGL 208 210 276
TRGL 6 5 4
TRGL 89 88 86
TRGL 243 255 256
TRGL 4 5 2
TRGL 1 2 3
TRGL 199 192 193
TRGL 183 174 184
TRGL 224 421 422
TRGL 182 187 188
TRGL 327 328 326
TRGL 212 214 274
TRGL 188 190 189
TRGL 358 369 368
TRGL 77 79 80
TRGL 187 193 192
TRGL 195 193 194
TRGL 361 347 348
TRGL 195 196 193
TRGL 196 198 199
TRGL 199 200 192
TRGL 316 317 315
TRGL 327 329 328
TRGL 329 330 331
TRGL 203 201 204
TRGL 352 353 351
TRGL 278 284 283
TRGL 206 208 207
TRGL 209 210 208
TRGL 410 416 409
TRGL 211 210 209
TRGL 213 212 211
TRGL 281 294 295
TRGL 213 214 212
TRGL 66 65 72
TRGL 274 273 287
TRGL 215 217 216
TRGL 471 486 487
TRGL 218 217 215
TRGL 219 220 217
TRGL 333 348 341
TRGL 322 459 320
TRGL 219 222 221
TRGL 222 219 223
TRGL 346 345 343
TRGL 336 337 322
TRGL 180 175 174
TRGL 222 223 224
TRGL 475 109 108
TRGL 227 222 226
TRGL 203 205 206
TRGL 161 162 171
TRGL 227 228 221
TRGL 11 10 515
TRGL 227 229 228
TRGL 275 286 276
TRGL 227 226 229
TRGL 38 42 43
TRGL 226 230 229
TRGL 226 225 230
TRGL 230 231 232
TRGL 451 452 457
TRGL 230 232 233
TRGL 230 233 229
TRGL 249 248 378
TRGL 201 200 199
TRGL 349 359 360
TRGL 235 229 234
TRGL 235 228 229
TRGL 438 440 439
TRGL 312 313 311
TRGL 235 237 236
TRGL 238 241 237
TRGL 231 408 232
TRGL 432 436 435
TRGL 427 426 211
TRGL 281 282 294
TRGL 320 460 319
TRGL 431 433 434
TRGL 235 238 237
TRGL 239 238 234
TRGL 98 100 99
TRGL 221 260 220
TRGL 240 238 239
TRGL 244 245 243
TRGL 244 246 245
TRGL 418 421 420
TRGL 384 385 394
TRGL 107 109 110
TRGL 89 97 90
TRGL 246 247 248
TRGL 29 37 30
TRGL 246 249 250
TRGL 72 71 68
TRGL 234 229 233
TRGL 332 350 349
TRGL 239 386 240
TRGL 104 105 112
TRGL 74 72 75
TRGL 250 251 245
TRGL 127 128 132
TRGL 219 221 220
TRGL 251 250 252
TRGL 256 312 310
TRGL 201 198 204
TRGL 251 253 254
TRGL 255 251 254
TRGL 344 343 345
TRGL 243 245 255
TRGL 283 292 293
TRGL 242 243 256
TRGL 242 256 257
TRGL 397 400 399
TRGL 242 257 258
TRGL 124 126 125
TRGL 242 258 237
TRGL 280 200 279
TRGL 205 437 429
TRGL 86 83 85
TRGL 236 258 259
TRGL 49 56 55
TRGL 236 259 260
TRGL 62 503 502
TRGL 452 456 457
TRGL 198 442 204
TRGL 236 260 228
TRGL 45 47 138
TRGL 335 338 336
TRGL 260 261 220
TRGL 217 220 264
TRGL 217 264 216
TRGL 264 265 216
TRGL 264 263 265
TRGL 265 263 266
TRGL 265 266 267
TRGL 354 374 373
TRGL 268 265 267
TRGL 304 461 303
TRGL 220 261 264
TRGL 268 216 265
TRGL 268 269 216
TRGL 268 267 270
TRGL 494 495 496
TRGL 270 271 272
TRGL 186 185 194
TRGL 273 270 272
TRGL 483 478 475
TRGL 269 270 273
TRGL 246 250 245
TRGL 214 216 269
TRGL 212 274 275
TRGL 321 320 318
TRGL 210 212 275
TRGL 238 240 241
TRGL 178 299 300
TRGL 210 275 276
TRGL 397 399 398
TRGL 379 324 382
TRGL 110 145 144
TRGL 208 276 277
TRGL 351 357 358
TRGL 321 318 458
TRGL 208 277 207
TRGL 278 207 277
TRGL 207 278 202
TRGL 382 252 250
TRGL 279 202 278
TRGL 202 279 200
TRGL 363 366 365
TRGL 280 281 191
TRGL 423 424 421
TRGL 328 334 335
TRGL 279 278 283
TRGL 278 277 284
TRGL 125 134 135
TRGL 343 347 346
TRGL 97 114 96
TRGL 68 70 69
TRGL 359 367 360
TRGL 101 98 89
TRGL 277 276 285
TRGL 28 506 505
TRGL 266 304 303
TRGL 191 281 190
TRGL 275 287 286
TRGL 287 288 289
TRGL 286 287 289
TRGL 61 56 58
TRGL 286 289 290
TRGL 427 206 428
TRGL 285 290 291
TRGL 284 291 292
TRGL 283 293 282
TRGL 72 74 73
TRGL 282 293 294
TRGL 189 190 296
TRGL 298 189 297
TRGL 189 298 181
TRGL 196 195 197
TRGL 179 298 178
TRGL 163 157 164
TRGL 298 299 178
TRGL 298 297 299
TRGL 476 148 146
TRGL 177 178 300
TRGL 177 300 301
TRGL 156 177 154
TRGL 23 32 22
TRGL 267 302 271
TRGL 40 504 59
TRGL 263 304 266
TRGL 264 261 263
TRGL 262 304 263
TRGL 253 315 314
TRGL 262 305 304
TRGL 262 306 305
TRGL 262 259 306
TRGL 258 306 259
TRGL 81 76 77
TRGL 46 45 42
TRGL 306 258 257
TRGL 330 329 327
TRGL 307 308 305
TRGL 182 186 187
TRGL 307 309 308
TRGL 310 309 307
TRGL 310 312 311
TRGL 249 382 250
TRGL 312 254 313
TRGL 482 481 466
TRGL 316 315 253
TRGL 482 466 463
TRGL 280 279 282
TRGL 318 317 316
TRGL 185 452 451
TRGL 420 431 434
TRGL 235 236 228
TRGL 362 361 363
TRGL 173 184 174
TRGL 318 319 317
TRGL 351 358 350
TRGL 321 322 320
TRGL 325 326 323
TRGL 468 470 469
TRGL 266 303 302
TRGL 329 331 332
TRGL 329 332 333
TRGL 329 333 328
TRGL 326 328 335
TRGL 477 150 148
TRGL 389 390 391
TRGL 126 133 134
TRGL 323 336 322
TRGL 201 202 200
TRGL 335 339 338
TRGL 246 244 247
TRGL 334 339 335
TRGL 180 174 183
TRGL 334 340 339
TRGL 188 191 190
TRGL 348 343 341
TRGL 98 97 89
TRGL 332 349 333
TRGL 331 350 332
TRGL 31 33 32
TRGL 75 77 76
TRGL 331 351 350
TRGL 493 494 492
TRGL 36 44 140
TRGL 353 352 354
TRGL 350 358 359
TRGL 350 359 349
TRGL 320 459 460
TRGL 333 349 348
TRGL 349 361 348
TRGL 363 364 362
TRGL 363 365 364
TRGL 361 360 363
TRGL 359 368 367
TRGL 359 358 368
BSTONE 339
BORDER 522 339 340
END
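The TRGL records above appear to follow the GOCAD TSurf convention: each TRGL line names one triangle by three vertex indices, BSTONE marks a border vertex, BORDER defines a border edge, and END closes the surface. A minimal sketch of a reader for this kind of fragment (the interpretation of the keywords is an assumption based on their names, not a full GOCAD parser):

def parse_tsurf_fragment(lines):
    """Collect triangles, border stones, and borders from TRGL-style records."""
    triangles, bstones, borders = [], [], []
    for line in lines:
        parts = line.split()
        if not parts:
            continue
        if parts[0] == 'TRGL':
            triangles.append(tuple(int(v) for v in parts[1:4]))
        elif parts[0] == 'BSTONE':
            bstones.append(int(parts[1]))
        elif parts[0] == 'BORDER':
            borders.append(tuple(int(v) for v in parts[1:4]))
        elif parts[0] == 'END':
            break
    return triangles, bstones, borders

sample = ["TRGL 66 65 72", "BSTONE 339", "BORDER 522 339 340", "END"]
print(parse_tsurf_fragment(sample))   # ([(66, 65, 72)], [339], [(522, 339, 340)])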
|
def do_work(self):
    """Drain the queue and process items until the worker is stopped."""
    # Assumes "import queue" at module scope and a queue.Queue-like host class.
    while self.is_running:
        item = None
        try:
            # Non-blocking get lets the loop re-check is_running promptly.
            item = self.get_nowait()
        except queue.Empty:
            pass
        if item is not None:
            try:
                self.process_item(item)
            except BaseException as e:
                # Stop the worker on any failure, including KeyboardInterrupt.
                self.stop(e)
            finally:
                # Mark the item consumed even if processing failed.
                self.task_done()
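The loop above assumes a host class that is itself a queue: get_nowait and task_done come from queue.Queue, while is_running, process_item, and stop are the host's own members. A minimal, hypothetical harness showing one way those pieces could fit together (the Worker class and its method bodies are assumptions for illustration, not part of the original):

import queue
import threading

class Worker(queue.Queue):
    """Hypothetical host class: a Queue that drains and processes its own items."""
    def __init__(self):
        super().__init__()
        self.is_running = True
        self.error = None

    def process_item(self, item):
        print("processing", item)  # stand-in for real work

    def stop(self, error=None):
        self.error = error         # record the failure, if any
        self.is_running = False    # ask do_work to exit its loop

# Attach the method defined above to the hypothetical class.
Worker.do_work = do_work

w = Worker()
for i in range(3):
    w.put(i)
t = threading.Thread(target=w.do_work)
t.start()
w.join()    # block until every queued item is marked done
w.stop()    # then signal the loop to exit
t.join()
|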
<filename>client/internal/config_test.go
package internal
import (
"errors"
"os"
"path/filepath"
"testing"
"github.com/netbirdio/netbird/util"
"github.com/stretchr/testify/assert"
)
func TestReadConfig(t *testing.T) {
}
func TestGetConfig(t *testing.T) {
managementURL := "https://test.management.url:33071"
adminURL := "https://app.admin.url"
path := filepath.Join(t.TempDir(), "config.json")
preSharedKey := "preSharedKey"
// case 1: new config has to be generated
config, err := GetConfig(managementURL, adminURL, path, preSharedKey)
	if err != nil {
		t.Fatal(err)
	}
assert.Equal(t, config.ManagementURL.String(), managementURL)
assert.Equal(t, config.PreSharedKey, preSharedKey)
if _, err := os.Stat(path); errors.Is(err, os.ErrNotExist) {
t.Errorf("config file was expected to be created under path %s", path)
}
// case 2: existing config -> fetch it
config, err = GetConfig(managementURL, adminURL, path, preSharedKey)
	if err != nil {
		t.Fatal(err)
	}
assert.Equal(t, config.ManagementURL.String(), managementURL)
assert.Equal(t, config.PreSharedKey, preSharedKey)
// case 3: existing config, but new managementURL has been provided -> update config
newManagementURL := "https://test.newManagement.url:33071"
config, err = GetConfig(newManagementURL, adminURL, path, preSharedKey)
	if err != nil {
		t.Fatal(err)
	}
assert.Equal(t, config.ManagementURL.String(), newManagementURL)
assert.Equal(t, config.PreSharedKey, preSharedKey)
// read once more to make sure that config file has been updated with the new management URL
readConf, err := util.ReadJson(path, config)
	if err != nil {
		t.Fatal(err)
	}
assert.Equal(t, readConf.(*Config).ManagementURL.String(), newManagementURL)
}
|
<filename>axelrod/tests/test_cooperator.py
"""Test for the cooperator strategy."""
import axelrod
from test_player import TestPlayer
class TestCooperator(TestPlayer):
name = "Cooperator"
player = axelrod.Cooperator
stochastic = False
def test_strategy(self):
"""Test that always cooperates."""
P1 = axelrod.Cooperator()
P2 = axelrod.Player()
self.assertEqual(P1.strategy(P2), 'C')
P1.history = ['C', 'D', 'C']
P2.history = ['C', 'C', 'D']
self.assertEqual(P1.strategy(P2), 'C')
class TestTrickyCooperator(TestPlayer):
name = "<NAME>"
player = axelrod.TrickyCooperator
stochastic = False
def test_strategy(self):
"""Test if it tries to trick opponent"""
P1 = axelrod.TrickyCooperator()
P2 = axelrod.Player()
self.assertEqual(P1.strategy(P2), 'C')
P1.history = ['C', 'C', 'C']
P2.history = ['C', 'C', 'C']
self.assertEqual(P1.strategy(P2), 'D')
P1.history.extend(['D', 'D'])
P2.history.extend(['C', 'D'])
self.assertEqual(P1.strategy(P2), 'C')
P1.history.extend(['C']*11)
P2.history.extend(['D'] + ['C']*10)
self.assertEqual(P1.strategy(P2), 'D')
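The assertions above pin down the tricky behaviour: cooperate by default, but defect once the opponent's recent history is free of defections. A minimal sketch of a rule consistent with these tests (inferred from the assertions; the library's actual TrickyCooperator implementation may differ):

def tricky_strategy(opponent_history):
    """Defect when the opponent's last ten moves contain no 'D'; else cooperate."""
    if opponent_history and 'D' not in opponent_history[-10:]:
        return 'D'
    return 'C'

# Replaying the test cases above:
assert tricky_strategy([]) == 'C'                                    # opening move
assert tricky_strategy(['C', 'C', 'C']) == 'D'                       # no recent defection
assert tricky_strategy(['C', 'C', 'C', 'C', 'D']) == 'C'             # opponent just defected
assert tricky_strategy(['C'] * 4 + ['D'] * 2 + ['C'] * 10) == 'D'    # defections outside the window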
|
package edu.purdue.jtk;
import oscP5.OscMessage;
/**
* The MuseMessage class encapsulates the data sent from the Muse headband (or proxy sources) to the Model.
*/
class MuseMessage extends OscMessage implements Comparable<MuseMessage> {
private Double timestamp;
    MuseMessage(double timestamp, String address, Object[] arguments) {
        super(address, arguments);
        this.timestamp = timestamp;
        // Default destination for locally generated messages.
        hostAddress = "127.0.0.1";
        port = 8000;
    }

    MuseMessage(OscMessage oscMessage) {
        super(oscMessage);
        // Incoming messages are stamped with the current wall-clock time in seconds.
        timestamp = System.currentTimeMillis() / 1000.0;
    }
double getTimestamp() {
return timestamp;
}
@Override
public int compareTo(MuseMessage o) {
return timestamp.compareTo(o.timestamp);
}
}
|
A spark plug is attached to, for example, an internal combustion engine (engine) and is used for igniting an air-fuel mixture within a combustion chamber. In general, such a spark plug includes a tubular insulator extending in the direction of an axis; a center electrode inserted into the insulator; a metallic shell provided around the insulator; and a ground electrode provided at a forward end portion of the metallic shell and forming a spark discharge gap in cooperation with the center electrode. The metallic shell and the insulator are fixed together by inserting the insulator into the metallic shell and then, using a predetermined die, applying a load along the direction of the axis to a rear end opening portion of the metallic shell so as to bend that portion inward in the radial direction (i.e., a crimping step).
Further, a known technique enhances the airtightness between the metallic shell and the insulator by disposing talc between them (see, for example, Japanese Patent Application Laid-Open (kokai) No. 2006-92955, “Patent Document 1”).
// DecodeRendezvousRequest decodes a Rendezvous request from a payload.
func DecodeRendezvousRequest(payload []byte) (r Rendezvous, err error) {
request, err := DecodeRequest(payload)
if err != nil {
return Rendezvous{}, err
}
rid, err := jsonparser.GetString(payload, rendezvousIDKey)
if err != nil {
		return Rendezvous{}, err
}
r = Rendezvous{
BaseMoneySocketRequest: request,
RendezvousID: rid,
}
return r, nil
} |
# Letters that read the same in a mirror (axis-symmetric capitals).
MIRROR_LETTERS = set('AHIMOTUVWXY')

def is_mirror(ch):
    return ch in MIRROR_LETTERS

if __name__ == '__main__':
    name = input()
    l, r = 0, len(name) - 1
    # A mirror word must be a palindrome made entirely of mirror letters.
    while l <= r:
        if not is_mirror(name[l]) or name[l] != name[r]:
            print("NO")
            raise SystemExit
        l += 1
        r -= 1
    print("YES")
|
/**
* Some test constants used in this package
*/
public class TestDetails {
public static final String PATH_TO_DATA_FILE = "src/test/resources/TestSuite.xls";
public static final String WRONG_PATH_TO_DATA_FILE = "src/test/resources/TestSute.xls";
public static final String WRONG_BIFF_FILE = "src/test/resources/wrongBiffFile.txt";
public static final String DATA_FILE2 = "DataFile2.xls";
public static final String DATA_FILE1 = "DataFile1.xls";
public static final String DATA_FILE_IN_CLASSPATH = "DataFileInClasspath.xls";
public static final String DATA_FILES_FOLDER = "src/test/resources/dataFilesFolder/";
public static final String FIRST_TEST_SCENARIO = "test_scenario_1";
public static final String SECOND_TEST_SCENARIO = "test_scenario_2";
public static final String THIRD_TEST_SCENARIO = "test_scenario_3";
public static final String FOURTH_TEST_SCENARIO = "test_scenario_4";
public static final String FIFTH_TEST_SCENARIO = "test_scenario_5";
public static final String SIXTH_TEST_SCENARIO = "test_scenario_6";
public static final String SEVENTH_TEST_SCENARIO = "test_scenario_7";
public static final String EIGHTH_TEST_SCENARIO = "test_scenario_8";
    public static final String NINTH_TEST_SCENARIO = "test_scenario_9";
    public static final String TENTH_TEST_SCENARIO = "test_scenario_10";
    // a method with fewer parameters than the Excel data sheet provides
@Test( dataProvider = "ConfigurableDataProvider", dataProviderClass = AtsDataProvider.class)
@TestOptions( dataFileFolder = TestDetails.DATA_FILES_FOLDER, dataFile = TestDetails.DATA_FILE1, dataSheet = "test_scenario_1")
public void test_scenario_1(
String parameter1,
String parameter2 ) {
}
    // a method with more parameters than the Excel data sheet provides
@Test( dataProvider = "ConfigurableDataProvider", dataProviderClass = AtsDataProvider.class)
@TestOptions( dataFileFolder = TestDetails.DATA_FILES_FOLDER, dataFile = TestDetails.DATA_FILE1, dataSheet = "dataSheet_usingTheMethodName")
public void dataSheet_usingTheMethodName(
String parameter1,
String parameter2,
String parameter3,
String parameter4 ) {
}
    // a method whose parameter count matches the Excel data sheet
@Test( dataProvider = "ConfigurableDataProvider", dataProviderClass = AtsDataProvider.class)
@TestOptions( dataFileFolder = TestDetails.DATA_FILES_FOLDER, dataFile = TestDetails.DATA_FILE1, dataSheet = "dataSheet_usingTheMethodName")
public void dataSheet_usingEqualP(
String parameter1,
String parameter2,
String parameter3 ) {
}
} |
package com.utils;
public class Constantes {
public static final String COMA = ",";
    public enum CAMPOS {
FECHA,
TEMPERATURA,
TEMPERATURA_INCERTIDUMBRE,
CIUDAD,
PAIS,
LATITUD,
LONGITUD
}
}
|
Quantitative analysis of the effect of energetic particle bombardment during deposition on texture formation in ZnO films
C-axis parallel-oriented ZnO films are suitable for shear-mode devices. In previous studies, we pointed out that texture formation was induced by ion bombardment during planar RF magnetron sputtering deposition. However, quantitative information on the relationship between ion energy and the amount of ion irradiation has been lacking. In this study, we investigated the effects of energetic ion bombardment during sputtering deposition on texture formation. The distribution of crystalline orientation of the films across the anode plane was compared with the distribution of ion flux in the anode plane. Highly oriented crystallization appeared above the target erosion area, where bombardment by highly energetic O− ions was observed under the low-gas-pressure condition. This information indicates how to obtain better-textured ZnO films for shear-mode devices.
// Longest subarray containing at most k distinct values (two-pointer sliding window).
#include <bits/stdc++.h>
using namespace std;
#define ll long long int
#define pb push_back
#define pii pair<int,int>
#define maxn 100005
#define mod 1000000007
int a[500005], cnt[1000005];
int main(){
    ios::sync_with_stdio(false);
    int n, k;
    cin >> n >> k;
    for(int i = 1; i <= n; i++) cin >> a[i];
    int l = 1;          // left edge of the current window (1-indexed)
    int c = 0;          // number of distinct values in the window
    int ans = 0;
    int al = 1, ar = 1; // bounds of the best window found so far
    for(int i = 1; i <= n; i++){
        cnt[a[i]]++;
        if(cnt[a[i]] == 1) c++;   // a[i] is new to the window
        while(c > k){             // too many distinct values: shrink from the left
            cnt[a[l]]--;
            if(cnt[a[l]] == 0) c--;
            l++;
        }
        int len = i - l + 1;
        if(len > ans){            // keep the longest valid window
            ans = len;
            al = l;
            ar = i;
        }
    }
    cout << al << " " << ar << endl;
    return 0;
}
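The program above is a standard two-pointer sliding window: grow the right edge, and whenever the window holds more than k distinct values, shrink from the left. The same idea in Python, 0-indexed (a sketch of the technique, with a hypothetical function name):

from collections import defaultdict

def longest_at_most_k_distinct(a, k):
    """Return 0-indexed inclusive bounds of the longest window with at most k distinct values."""
    count = defaultdict(int)
    distinct = 0
    best = (0, -1)                      # sentinel: empty window
    left = 0
    for right, x in enumerate(a):
        count[x] += 1
        if count[x] == 1:
            distinct += 1
        while distinct > k:             # shrink until the window is valid again
            count[a[left]] -= 1
            if count[a[left]] == 0:
                distinct -= 1
            left += 1
        if right - left > best[1] - best[0]:
            best = (left, right)
    return best

print(longest_at_most_k_distinct([1, 2, 3, 2, 2, 1, 4], 2))   # (1, 4) -> [2, 3, 2, 2]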