Comparison of AC-Superconducting Multiphase Symmetric-Winding Topologies for Wind Power Generators With PM Rotors In this article, an ac superconducting multiphase symmetric-winding machine is designed for a wind power generator to improve its performance and reduce losses, where four handpicked topological designs were explored and compared. In particular, it is found that using a high phase order with unique phasors further improves the performance. The iron losses are reduced, and the torque ripple is lowered due to the smoother airgap magnetic flux density. Furthermore, a higher least common multiple is achieved due to a better slot-pole combination for fractional-slot concentrated windings without having space subharmonics. Nonetheless, it is shown that creating a smooth airgap magnetic flux density does not improve the ac hysteretic superconducting losses; thus, further research is needed using additional approaches. Moreover, it is found that the Meissner effect is present in the machine and is inversely proportional to the ac hysteretic superconducting losses. Finally, the work shows that a 13-phase ac-superconducting machine can achieve a theoretical limit approaching $\text{101.70 Nm/kg}$ for the torque-to-weight ratio, outperforming classic winding layouts. |
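The slot-pole least-common-multiple criterion mentioned in the abstract can be illustrated with a small calculation (a sketch; the specific slot and pole counts below are illustrative, not taken from the article — a higher LCM means more, smaller cogging-torque periods per revolution):

```python
from math import gcd

def cogging_lcm(slots: int, poles: int) -> int:
    """Least common multiple of the slot and pole counts.

    A higher LCM corresponds to more cogging-torque periods per
    revolution, each of smaller amplitude, i.e. smoother torque.
    """
    return slots * poles // gcd(slots, poles)

# Illustrative comparison of two fractional-slot combinations.
print(cogging_lcm(12, 10))  # 60
print(cogging_lcm(12, 14))  # 84 -> higher LCM, smoother cogging behavior
```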
import json
import os

def write_params(parent_dir):
    # Persist the run configuration so it can be reloaded later.
    params = {
        'num_epochs': FLAGS.num_epochs,
        'layer_id': FLAGS.layer_id,
        'concept1': FLAGS.concept1,
        'concept2': FLAGS.concept2,
        'learning_rate': FLAGS.learning_rate,
        'train_dir': FLAGS.train_dir,
        'mae': FLAGS.mae,
        'val_split': FLAGS.val_split,
        'random_seed': FLAGS.random_seed,
        'adam': FLAGS.adam,
    }
    with open(os.path.join(parent_dir, 'params.json'), 'w') as params_file:
        json.dump(params, params_file) |
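A minimal round-trip check of the helper above. The `FLAGS` stand-in below is hypothetical (the real script presumably defines `FLAGS` via absl or TensorFlow flags), and the helper is reproduced in simplified form so the sketch is self-contained:

```python
import json
import os
import tempfile
from types import SimpleNamespace

# Hypothetical stand-in for the real absl/TF FLAGS object used above.
FLAGS = SimpleNamespace(num_epochs=10, layer_id=3, concept1='stripes',
                        concept2='dots', learning_rate=1e-3,
                        train_dir='/tmp/train', mae=False,
                        val_split=0.1, random_seed=42, adam=True)

def write_params(parent_dir):
    # Same shape as the helper above, condensed so this sketch is self-contained.
    with open(os.path.join(parent_dir, 'params.json'), 'w') as params_file:
        json.dump(vars(FLAGS), params_file)

with tempfile.TemporaryDirectory() as parent_dir:
    write_params(parent_dir)
    with open(os.path.join(parent_dir, 'params.json')) as f:
        restored = json.load(f)
    print(restored['num_epochs'])  # 10
```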
# Copyright (C) 2022 <NAME> <<EMAIL>>
# SPDX-License-Identifier: MIT
#
# pylint: disable=invalid-name
"""Lint rule class to test if all role arguments are specified in meta/argument_specs.yml
"""
import typing
import yaml
import ansiblelint.rules
from yaml.composer import Composer
from yaml.constructor import Constructor
from yaml.nodes import ScalarNode
from yaml.resolver import BaseResolver
from yaml.loader import SafeLoader
from pathlib import Path
from ansiblelint.utils import parse_yaml_from_file, LINE_NUMBER_KEY
if typing.TYPE_CHECKING:
    from ansiblelint.errors import MatchError
    from ansiblelint.file_utils import Lintable
    from typing import Any, Dict, List
ID: str = 'no_unspecified_argument'
SHORTDESC: str = 'All role parameters must have a specification'
DESC: str = r"""Rule to test if all role arguments are specified in meta/argument_specs.yml.
- Notes
- Argument files = roles/**/{vars,defaults}/**/*.ya?ml
- .. seealso:: ansiblelint.config.DEFAULT_KINDS
- Configuration
.. code-block:: yaml
rules:
no_unspecified_argument:
"""
def _lookup_argument_specs(var_file: Path, var_name: str) -> bool:
    """Check whether var_name is declared in the role's meta/argument_specs.yml."""
    meta_data: Dict[str, Any] = {}
    if var_file.is_file():
        argument_specs_path: Path = var_file.parent / ".." / "meta" / "argument_specs.yml"
        argument_specs = str(argument_specs_path)
        if argument_specs_path.is_file() and argument_specs not in meta_data.keys():
            meta_data[argument_specs] = parse_yaml_from_file(argument_specs)
        if argument_specs in meta_data.keys():
            try:
                if meta_data[argument_specs]["argument_specs"]["main"]["options"]:
                    return var_name in meta_data[argument_specs]["argument_specs"]["main"]["options"]
            except KeyError:
                return False
    return False
class LineLoader(SafeLoader):
    """SafeLoader that records the line number of every mapping key."""

    def __init__(self, stream):
        super(LineLoader, self).__init__(stream)

    def compose_node(self, parent, index):
        # the line number where the previous token has ended (plus empty lines)
        line = self.line
        node = Composer.compose_node(self, parent, index)
        node.__line__ = line + 1
        return node

    def construct_mapping(self, node, deep=False):
        node_pair_lst = node.value
        node_pair_lst_for_appending = []
        for key_node, value_node in node_pair_lst:
            shadow_key_node = ScalarNode(tag=BaseResolver.DEFAULT_SCALAR_TAG, value=LINE_NUMBER_KEY + key_node.value)
            shadow_value_node = ScalarNode(tag=BaseResolver.DEFAULT_SCALAR_TAG, value=key_node.__line__)
            node_pair_lst_for_appending.append((shadow_key_node, shadow_value_node))
        node.value = node_pair_lst + node_pair_lst_for_appending
        mapping = Constructor.construct_mapping(self, node, deep=deep)
        return mapping
class NoUnspecifiedArgumentRule(ansiblelint.rules.AnsibleLintRule):
    """
    Rule class to test if all role parameters (defaults, vars) have
    a format specification in meta/argument_specs.yml.
    """
    id = ID
    shortdesc = SHORTDESC
    description = DESC
    severity = 'HIGH'
    tags = [ID, 'metadata', 'readability']

    def matchyaml(self, file: 'Lintable') -> typing.List['MatchError']:
        """Return matches for variables defined in vars files with no specification."""
        results: List["MatchError"] = []
        if file.kind == 'vars':
            with open(str(file.path), 'r') as f:
                variables = yaml.load(f, Loader=LineLoader)
            for var_name in filter(lambda k: not k.startswith(LINE_NUMBER_KEY) and not isinstance(variables[k], dict), variables.keys()):
                if not _lookup_argument_specs(file.path, var_name):
                    results.append(
                        self.create_matcherror(
                            details=f'{self.shortdesc}: {var_name}',
                            filename=file,
                            linenumber=variables[LINE_NUMBER_KEY + var_name],
                        )
                    )
        else:
            results.extend(super().matchyaml(file))
        return results
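For reference, `_lookup_argument_specs` above only inspects the `argument_specs.main.options` mapping, so a minimal `meta/argument_specs.yml` that satisfies the rule looks like this (role and variable names are illustrative):

```yaml
# roles/myrole/meta/argument_specs.yml  (hypothetical role name)
argument_specs:
  main:
    options:
      myrole_port:
        type: int
        default: 8080
        description: Port the service listens on.
```

Any variable defined under the role's `vars/` or `defaults/` directories that does not appear as a key under `options` will be reported by the rule.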
|
NDTV has falsely attributed a tweet to Sushma Swaraj in an article on its website.
The NDTV Twitter account falsely attributed the tweet to Sushma Swaraj; NDTV confirmed last night that it had not reported the story on TV. The unverified tweet pooh-poohs the "Modi wave", and NDTV tweeted it without confirming its source.
Both NDTV and editor-journalist Barkha Dutt apologised on Twitter for the "erroneous tweet". |
#!/usr/bin/env python3
# run with command line -a switch to show animation
import bdsim
sim = bdsim.BDSim(verbose=True)
bd = sim.blockdiagram()
steer = bd.PIECEWISE( (0,0), (3,0.5), (4,0), (5,-0.5), (6,0), name='steering')
speed = bd.CONSTANT(1, name='speed')
bike = bd.BICYCLE(x0=[0, 0, 0], name='bicycle')
tscope = bd.SCOPE(name='theta')
scope = bd.SCOPEXY(scale=[0, 10, 0, 1.2])
bd.connect(speed, bike[0])
bd.connect(steer, bike[1])
bd.connect(bike[0:2], scope)
bd.connect(bike[2], tscope)
bd.compile()
bd.report()
out = sim.run(bd, dt=0.05)
sim.savefigs(bd, format='pdf')
bd.done(block=True)
|
//
//  WMBlock_Handles.h
//  YeahUtils
//
//  Created by WMYeah on 16/9/29.
//  Copyright © 2016 WMYeah. All rights reserved.
//
/*!
 * General-purpose handler block: no return value, no parameters.
 */
typedef void(^WM_Util_handleBlcok_Normal)(void);
/*!
 * Completion handler block.
 *
 * @param complate passed when the block finishes processing
 * @param error    nil on success; other values as defined by the calling interface
 */
typedef void(^WM_Util_handleBlcok_Complate)(BOOL complate, NSError *error);
/*!
 * Handler block with a custom response parameter.
 *
 * @param Response parameter passed through by the block
 * @param error    error information
 */
typedef void(^WM_Util_handleBlcok_Complate_Response)(id Response, NSError *error);
|
NEW DELHI: India is eyeing investments to the tune of Rs 2 lakh crore at Chabahar port in Iran in various infrastructure projects, Union Minister Nitin Gadkari said today. The investments, however, will depend on the outcome of the negotiations on gas price, as Iran has offered to supply natural gas at $2.95 while India wants the rate to be lowered. Meanwhile, three more countries have offered gas to India, which will be examined, Road Transport, Highways and Shipping Minister Gadkari said at an interaction with the media at the Indian Women Press Corps here.

"India is ready to invest Rs 2 lakh crore at Chabahar SEZ in Iran, but the investments would depend on gas prices as India wants them to be lowered," Gadkari said. He added that various Indian companies are ready to invest in Iran in projects ranging from road and rail to shipping and agriculture. The total investment in the projects will be around Rs 2,00,000 crore, Gadkari said. Asked about the development of the port, he said: "Various ministries have given their report to the Shipping Secretary and Prime Minister Narendra Modi will soon take a call on it."

With the US and other western powers easing sanctions against Iran, India has been in talks with Tehran to set up a gas-based urea manufacturing plant at the Chabahar port, besides developing a gas discovery ONGC had made. On the talks on supply of natural gas, Gadkari said that Iran has offered gas to India at $2.95 per million British thermal unit to set up a urea plant at the Chabahar port, but India is negotiating the gas price, demanding it be lowered. The rate offered by Iran is less than half the rate at which India currently imports natural gas from the spot market; long-term supplies from Qatar cost four times the Iranian price. India, which imports around 8-9 million tonnes of the nitrogenous fertiliser, is negotiating for a price of $1.5 per mmBtu with the Persian Gulf nation, a move which, if successful, will see a significant decline in the country's Rs 80,000 crore subsidy for the soil nutrient.

India has already pledged to invest about $85 million in developing the strategic port off Iran's south-eastern coast, which would provide India a sea-land access route to Afghanistan, bypassing Pakistan. "If a urea plant is set up there, it will result in slashing of urea prices in India by 50 per cent and a cut in the huge subsidy on urea, which is Rs 80,000 crore," Gadkari said, adding that he would be visiting Iran soon. In 2013, Iran had offered gas at the rate of 82 cents, less than a dollar, the Minister said. The ministries of Chemicals & Fertilisers and Petroleum are working on the proposed 1.3 million tonnes per annum plant which, once successful, will lead to urea prices coming down by 50 per cent, he had earlier said.

The Minister had visited Tehran in May, and both nations had inked a pact to develop the Chabahar port. Iran's Foreign Minister Mohammad Javad Zarif had also called on Gadkari last month. In August, Gadkari had said that Iran has given "very good offers" to India to develop the integrated Chabahar port, which has a special economic zone (SEZ). |
1. Field of the Invention
The present invention relates generally to systems and methods for performing model-based scanner tuning and optimization and more particularly to optimization of performance of multiple lithography systems.
2. Description of Related Art
Lithographic apparatus can be used in the manufacture of integrated circuits (“ICs”). A mask contains a circuit pattern corresponding to an individual layer of an IC, and this pattern is imaged onto a target portion comprising one or more dies on a substrate of silicon wafer that has been coated with a layer of radiation-sensitive resist material. In general, a single wafer will contain a network of adjacent target portions that are successively irradiated via the projection system, one at a time. In one type of lithographic projection apparatus, commonly referred to as a wafer stepper, each target portion is irradiated by exposing the entire mask pattern onto the target portion in one pass. In step-and-scan apparatus, each target portion is irradiated by progressively scanning the mask pattern under the projection beam in a given reference or “scanning direction” while synchronously scanning the substrate table parallel or anti-parallel to this direction. In a projection system having a magnification factor M (generally <1), the speed V at which the substrate table is scanned will be a factor M times that at which the mask table is scanned. More information with regard to lithographic devices as described herein can be gleaned, for example, from U.S. Pat. No. 6,046,792, incorporated herein by reference.
In a manufacturing process using a lithographic projection apparatus, a mask pattern is imaged onto a substrate that is at least partially covered by a layer of radiation sensitive resist material. Prior to this imaging step, the substrate may undergo various procedures, such as priming, resist coating and soft bake. After exposure, the substrate may be subjected to other procedures, such as a post exposure bake (“PEB”), development, a hard bake and measurement/inspection of the imaged features. This array of procedures is used as a basis to pattern an individual layer of a device, e.g., an IC. Such a patterned layer may then undergo various processes such as etching, ion implantation or doping, metallization, oxidation, chemo mechanical polishing, etc. to finish an individual layer. If several layers are required, then the procedure, or a variant thereof, will have to be repeated for each new layer. Eventually, an array of devices will be present on the substrate wafer. These devices are then separated from one another by a technique such as dicing or sawing and the individual devices can be mounted on a carrier, connected to pins, etc.
A projection system (hereinafter the “lens”) encompasses various types of projection systems, including, for example, refractive optics, reflective optics, and catadioptric systems, and may include one or more lenses. The lens may also include components of a radiation system used for directing, shaping or controlling the projection beam of radiation. Further, the lithographic apparatus may be of a type having two or more substrate tables and/or two or more mask tables. In such multiple stage devices the additional tables may be used in parallel, and/or preparatory steps may be carried out on certain tables while other tables are used for exposure. Twin stage lithographic apparatus are described, for example, in U.S. Pat. No. 5,969,441, incorporated herein by reference.
The photolithographic masks referred to above comprise geometric patterns corresponding to the circuit components to be integrated onto a silicon wafer. The patterns used to create such masks are generated utilizing computer-aided design (“CAD”) programs, this process often being referred to as electronic design automation (“EDA”). Most CAD programs follow a set of predetermined design rules in order to create functional masks. These rules are set by processing and design limitations. For example, design rules define the space tolerance between circuit devices such as gates, capacitors, etc. or interconnect lines, so as to ensure that the circuit devices or lines do not interact with one another in an undesirable way. The design rule limitations are referred to as critical dimensions (“CDs”). A CD of a circuit can be defined as the smallest width of a line or hole or the smallest space between two lines or two holes. Thus, the CD determines the overall size and density of the designed circuit. Of course, one of the goals in integrated circuit fabrication is to faithfully reproduce the original circuit design on the wafer via the mask.
Generally, benefit may be accrued from utilizing a common process for imaging a given pattern with different types of lithography systems, such as scanners, without having to expend considerable amounts of time and resources determining the necessary settings of each lithography system to achieve optimal/acceptable imaging performance. Designers and engineers can spend a considerable amount of time and money determining optimal settings of a lithography system which include numerical aperture (“NA”), σin, σout, etc., when initially setting up a process for a particular scanner and to obtain images that satisfy predefined design requirements. Often, a trial and error process is employed wherein the scanner settings are selected and the desired pattern is imaged and then measured to determine if the output image falls within specified tolerances. If the output image is out of tolerance, the scanner settings are adjusted and the pattern is imaged once again and measured. This process is repeated until the resulting image is within the specified tolerances.
However, the actual pattern imaged on a substrate can vary from scanner to scanner due to the different optical proximity effects (“OPEs”) exhibited by different scanners when imaging a pattern, even when the scanners are identical model types. For example, different OPEs associated with certain scanners can introduce significant CD variations through pitch. Consequently, it is often impossible to switch between scanners and obtain identical imaged patterns. Thus, engineers must optimize or tune the new scanner when a new or different scanner is to be used to print a pattern with the expectation of obtaining a resulting image that satisfies the design requirements. Currently, an expensive, time-consuming trial and error process is commonly used to adjust processes and scanners. |
The high frequency of diseases of the upper gastrointestinal tract (UGIT) in children with congenital heart defects (CHD) and minor anomalies of the heart (MAOH) makes it necessary to improve their treatment tactics. In this regard, we have worked out an optimal scheme of conservative therapy in the observed groups of children. The aim of the treatment tactics has been to level the inflammatory and functional disorders of the UGIT, ultimately contributing to an improved quality of life for patients with CHD and MAOH. |
#!/usr/bin/env python3
# Notebook-style script (# %% cells); run in an IPython/Jupyter session.
import glob
import os
import shutil
import traceback

import h5py
import matplotlib
import matplotlib.mlab
import matplotlib.pyplot as plt
import numpy
import numpy as np
import scipy
import scipy.signal

%matplotlib inline
matplotlib.rcParams['figure.figsize'] = (10.0, 5.0)

from per_file_data_checks_functions import *
checks = {
    'dataset_length': check_dataset_length,
    'mains_frequency': check_mains_frequency,
    'voltage_rms': check_voltage_rms,
    'voltage_values': check_voltage_values,
    'voltage_bandwidth': check_voltage_bandwidth,
    'current_rms': check_current_rms,
    'flat_regions': check_flat_regions,
}
for check_name, check_func in checks.items():
    # Unmodified files at the top level must pass every check.
    files = glob.glob('file-checks-test-data/*.hdf5')
    for file in files:
        with h5py.File(file, 'r', driver='core') as f:
            print(check_name, file)
            check_func(f)
    # Files manipulated for this check must raise ValueError.
    files = glob.glob('file-checks-test-data/{}/*.hdf5'.format(check_name), recursive=True)
    for file in files:
        with h5py.File(file, 'r', driver='core') as f:
            print(check_name, file)
            try:
                print(check_func(f))
                assert False, 'check should have raised ValueError'
            except ValueError as e:
                print(e)
print('SUCCESS')
# %%
files = glob.glob('file-checks-test-data/*.hdf5')
os.makedirs('file-checks-test-data/dataset_length', exist_ok=True)
for file in files:
    shutil.copy(file, 'file-checks-test-data/dataset_length/')
files = glob.glob('file-checks-test-data/dataset_length/*.hdf5')
for file in files:
    with h5py.File(file, 'r+') as f:
        for n in list(f):
            # Shorten every dataset by 42 samples.
            f[n].resize(len(f[n]) - 42, axis=0)
# Give two extra copies a wrong 'frequency' attribute.
shutil.copy(next(file for file in files if 'clear' in file), 'file-checks-test-data/dataset_length/special_clear.hdf5')
with h5py.File('file-checks-test-data/dataset_length/special_clear.hdf5', 'r+') as f:
    f.attrs['frequency'] = 1234
shutil.copy(next(file for file in files if 'medal' in file), 'file-checks-test-data/dataset_length/special_medal.hdf5')
with h5py.File('file-checks-test-data/dataset_length/special_medal.hdf5', 'r+') as f:
    f.attrs['frequency'] = 1234
# %%
files = glob.glob('file-checks-test-data/*.hdf5')
os.makedirs('file-checks-test-data/mains_frequency', exist_ok=True)
for file in files:
    shutil.copy(file, 'file-checks-test-data/mains_frequency/')
files = glob.glob('file-checks-test-data/mains_frequency/*.hdf5')
for file in files:
    with h5py.File(file, 'r+') as f:
        for n in [n for n in list(f) if 'voltage' in n]:
            length = len(f[n][:])
            # Resample to 75% length to shift the apparent mains frequency.
            x = scipy.signal.resample(f[n][:], int(length * 0.75))
            f[n][:len(x)] = x
            f[n][len(x):length] = x[:length - len(x)]
# %%
os.makedirs('file-checks-test-data/voltage_rms', exist_ok=True)
shutil.copy('file-checks-test-data/clear-2017-06-12T11-10-55.327670T+0200-0022211.hdf5', 'file-checks-test-data/voltage_rms/clear-rms.hdf5')
shutil.copy('file-checks-test-data/medal-1-2017-06-12T11-10-33.862780T+0200-0022314.hdf5', 'file-checks-test-data/voltage_rms/medal-rms.hdf5')
shutil.copy('file-checks-test-data/clear-2017-06-12T11-10-55.327670T+0200-0022211.hdf5', 'file-checks-test-data/voltage_rms/clear-mean.hdf5')
shutil.copy('file-checks-test-data/medal-1-2017-06-12T11-10-33.862780T+0200-0022314.hdf5', 'file-checks-test-data/voltage_rms/medal-mean.hdf5')
shutil.copy('file-checks-test-data/clear-2017-06-12T11-10-55.327670T+0200-0022211.hdf5', 'file-checks-test-data/voltage_rms/clear-crest.hdf5')
shutil.copy('file-checks-test-data/medal-1-2017-06-12T11-10-33.862780T+0200-0022314.hdf5', 'file-checks-test-data/voltage_rms/medal-crest.hdf5')
files = glob.glob('file-checks-test-data/voltage_rms/*-rms.hdf5')
for file in files:
    with h5py.File(file, 'r+') as f:
        for n in [n for n in list(f) if 'voltage' in n]:
            # Scale the samples down so the RMS drops out of range.
            f[n][:] = np.rint(f[n][:] * 0.75)
files = glob.glob('file-checks-test-data/voltage_rms/*-mean.hdf5')
for file in files:
    with h5py.File(file, 'r+') as f:
        for n in [n for n in list(f) if 'voltage' in n]:
            # Add a DC offset to shift the mean.
            f[n][:] += int(8 / f[n].attrs['calibration_factor'])
files = glob.glob('file-checks-test-data/voltage_rms/*-crest.hdf5')
for file in files:
    with h5py.File(file, 'r+') as f:
        for n in [n for n in list(f) if 'voltage' in n]:
            # Amplify and clip toward a square wave to distort the crest factor.
            vmin = np.min(f[n][:])
            vmax = np.max(f[n][:])
            signs = f[n][:]
            x = f[n][:].astype('i4') * vmax / 3
            x = np.copysign(x, signs)
            f[n][:] = np.clip(x, vmin * 0.7, vmax * 0.7)
# %%
os.makedirs('file-checks-test-data/voltage_values', exist_ok=True)
shutil.copy('file-checks-test-data/BLOND-50-clear-2016-10-02T00-02-44.043307T+0200-0000443.hdf5', 'file-checks-test-data/voltage_values/clear-bandwidth.hdf5')
shutil.copy('file-checks-test-data/BLOND-50-medal-1-2016-10-02T00-02-09.962358T+0200-0000148.hdf5', 'file-checks-test-data/voltage_values/medal-bandwidth.hdf5')
files = glob.glob('file-checks-test-data/voltage_values/*.hdf5')
for file in files:
    with h5py.File(file, 'r+') as f:
        for n in [n for n in list(f) if 'voltage' in n]:
            # Force all samples to even values so odd values never occur.
            f[n][:] = np.floor_divide(f[n][:], 2) * 2
# %%
os.makedirs('file-checks-test-data/voltage_bandwidth', exist_ok=True)
shutil.copy('file-checks-test-data/BLOND-50-clear-2016-10-02T00-02-44.043307T+0200-0000443.hdf5', 'file-checks-test-data/voltage_bandwidth/clear-bandwidth.hdf5')
shutil.copy('file-checks-test-data/BLOND-50-medal-1-2016-10-02T00-02-09.962358T+0200-0000148.hdf5', 'file-checks-test-data/voltage_bandwidth/medal-bandwidth.hdf5')
shutil.copy('file-checks-test-data/BLOND-50-clear-2016-10-02T00-02-44.043307T+0200-0000443.hdf5', 'file-checks-test-data/voltage_bandwidth/clear-min.hdf5')
shutil.copy('file-checks-test-data/BLOND-50-medal-1-2016-10-02T00-02-09.962358T+0200-0000148.hdf5', 'file-checks-test-data/voltage_bandwidth/medal-min.hdf5')
shutil.copy('file-checks-test-data/BLOND-50-clear-2016-10-02T00-02-44.043307T+0200-0000443.hdf5', 'file-checks-test-data/voltage_bandwidth/clear-max.hdf5')
shutil.copy('file-checks-test-data/BLOND-50-medal-1-2016-10-02T00-02-09.962358T+0200-0000148.hdf5', 'file-checks-test-data/voltage_bandwidth/medal-max.hdf5')
files = glob.glob('file-checks-test-data/voltage_bandwidth/*-bandwidth.hdf5')
for file in files:
    with h5py.File(file, 'r+') as f:
        for n in [n for n in list(f) if 'voltage' in n]:
            # Halve the amplitude to shrink the value bandwidth.
            f[n][:] = np.rint(f[n][:] * 0.5)
files = glob.glob('file-checks-test-data/voltage_bandwidth/*-min.hdf5')
for file in files:
    with h5py.File(file, 'r+') as f:
        for n in [n for n in list(f) if 'voltage' in n]:
            # Push samples below the valid minimum while keeping the maximum.
            vmax = np.max(f[n][:])
            f[n][:] = np.clip(f[n][:].astype('i4') * 2, -2**15 - 1, vmax)
files = glob.glob('file-checks-test-data/voltage_bandwidth/*-max.hdf5')
for file in files:
    with h5py.File(file, 'r+') as f:
        for n in [n for n in list(f) if 'voltage' in n]:
            # Push samples above the valid maximum while keeping the minimum.
            vmin = np.min(f[n][:])
            f[n][:] = np.clip(f[n][:].astype('i4') * 2, vmin, 2**15 - 1)
# %%
os.makedirs('file-checks-test-data/current_rms', exist_ok=True)
shutil.copy('file-checks-test-data/clear-2017-06-12T11-10-55.327670T+0200-0022211.hdf5', 'file-checks-test-data/current_rms/clear-rms.hdf5')
shutil.copy('file-checks-test-data/medal-1-2017-06-12T11-10-33.862780T+0200-0022314.hdf5', 'file-checks-test-data/current_rms/medal-rms.hdf5')
shutil.copy('file-checks-test-data/clear-2017-06-12T11-10-55.327670T+0200-0022211.hdf5', 'file-checks-test-data/current_rms/clear-mean.hdf5')
shutil.copy('file-checks-test-data/medal-1-2017-06-12T11-10-33.862780T+0200-0022314.hdf5', 'file-checks-test-data/current_rms/medal-mean.hdf5')
shutil.copy('file-checks-test-data/clear-2017-06-12T11-10-55.327670T+0200-0022211.hdf5', 'file-checks-test-data/current_rms/clear-crest.hdf5')
shutil.copy('file-checks-test-data/medal-1-2017-06-12T11-10-33.862780T+0200-0022314.hdf5', 'file-checks-test-data/current_rms/medal-crest.hdf5')
files = glob.glob('file-checks-test-data/current_rms/*-rms.hdf5')
for file in files:
    with h5py.File(file, 'r+') as f:
        for n in [n for n in list(f) if 'current' in n]:
            # Scale the current up tenfold so the RMS goes out of range.
            f[n][:] = np.rint(np.clip(f[n][:].astype('i4') * 10, -2**15 - 1, 2**15 - 1))
files = glob.glob('file-checks-test-data/current_rms/*-mean.hdf5')
for file in files:
    with h5py.File(file, 'r+') as f:
        for n in [n for n in list(f) if 'current' in n]:
            # Add a DC offset to shift the mean.
            f[n][:] += int(2 / f[n].attrs['calibration_factor'])
files = glob.glob('file-checks-test-data/current_rms/*-crest.hdf5')
for file in files:
    with h5py.File(file, 'r+') as f:
        frequency = f.attrs['frequency']
        for n in [n for n in list(f) if 'current' in n]:
            # Clip the amplified waveform at the per-second RMS to lower the crest factor.
            s = f[n][:] * f[n].attrs['calibration_factor']
            rms = numpy.max(numpy.sqrt(numpy.mean(numpy.square(s).reshape(-1, frequency), axis=1)))
            f[n][:] = np.rint(np.clip(f[n][:].astype('f4') * 1.5, -rms / f[n].attrs['calibration_factor'], rms / f[n].attrs['calibration_factor']))
# %%
os.makedirs('file-checks-test-data/flat_regions', exist_ok=True)
shutil.copy('file-checks-test-data/BLOND-50-clear-2016-10-02T00-02-44.043307T+0200-0000443.hdf5', 'file-checks-test-data/flat_regions/')
shutil.copy('file-checks-test-data/BLOND-50-medal-1-2016-10-02T00-02-09.962358T+0200-0000148.hdf5', 'file-checks-test-data/flat_regions/')
files = glob.glob('file-checks-test-data/flat_regions/*.hdf5')
for file in files:
    with h5py.File(file, 'r+') as f:
        frequency = f.attrs['frequency']
        for n in list(f):
            length = len(f[n])
            # Overwrite two seconds in the middle of each dataset with a constant.
            f[n][int(length / 2):int(length / 2 + frequency * 2)] = 2000
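The check functions themselves come from `per_file_data_checks_functions` and are not shown here. As an illustration of the pattern the test data above is built to trigger, a windowed RMS check might look like the following (the function name, signature, and thresholds are assumptions for this sketch, not the real implementation):

```python
import numpy as np

def check_voltage_rms_sketch(signal, calibration_factor, frequency,
                             lo=220.0, hi=240.0):
    """Raise ValueError if any one-second window's RMS leaves [lo, hi] volts.

    Hypothetical sketch only: the real check_voltage_rms lives in
    per_file_data_checks_functions and may differ.
    """
    s = np.asarray(signal, dtype='f8') * calibration_factor
    # Split into whole one-second windows and compute per-window RMS.
    windows = s[:len(s) // frequency * frequency].reshape(-1, frequency)
    rms = np.sqrt(np.mean(np.square(windows), axis=1))
    if np.any(rms < lo) or np.any(rms > hi):
        raise ValueError('voltage RMS out of range')
    return rms

# A clean 230 V RMS, 50 Hz sine sampled at 6400 Hz passes the check;
# the same signal scaled by 0.75 (as in the *-rms test files above) fails.
t = np.arange(6400) / 6400.0
clean = 230.0 * np.sqrt(2.0) * np.sin(2 * np.pi * 50 * t)
print(check_voltage_rms_sketch(clean, 1.0, 6400))
```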
|
Role of T cells in non-immediate drug allergy reactions. PURPOSE OF REVIEW Nonimmediate drug hypersensitivity reactions (NI-DHR) constitute the most complex group of drug allergy, with many drugs involved. Both parent drugs and their reactive metabolites can be implicated. Although with some drugs the number of metabolites is limited, with others it is quite extensive and many still remain to be identified. Current diagnostic approaches are insufficient, and realistic approaches that reproduce the pathological response are lacking. RECENT FINDINGS A wider view has now been taken, with the inclusion of several mechanisms that may contribute to drug hypersensitivity reactions (DHR): the classical hapten hypothesis, the danger signal, and the pharmacological interaction. Monitoring the acute response provides relevant information about the mechanisms involved, with the identification of a large number of genes that can be over-expressed or under-expressed in the acute phase of the response. The risk of developing reactions can be assessed through HLA associations. SUMMARY Further knowledge of NI-DHR, including molecular genetics and transcriptomic analysis, has enabled a better understanding and management of these reactions. |
import * as mongoose from 'mongoose';

export const UserSchema = new mongoose.Schema({
  link: String,
  createdAt: String,
  updatedAt: String,
  firstName: String,
  middleName: String,
  lastName: String,
  password: String,
  email: String,
  twets: [
    {
      type: mongoose.Schema.Types.ObjectId,
      ref: 'Twet',
    },
  ],
  tags: [
    {
      type: mongoose.Schema.Types.ObjectId,
      ref: 'Tag',
    },
  ],
});
|
Downregulation of c-kit expression in human endothelial cells by inflammatory stimuli. In recent studies we have shown that the expression of stem cell factor (SCF) in human endothelial cells is regulated by inflammatory processes. Gram-negative bacteria, interleukin-1 (IL-1), and lipopolysaccharide were able to upregulate the expression of SCF in human umbilical vein endothelial cells (HUVEC) (Blood 83:2836, 1994). Interestingly enough, c-kit, the receptor for SCF, is coexpressed on HUVEC, suggesting an autoregulatory mechanism. To investigate the relation of c-kit to inflammatory processes we stimulated HUVEC with IL-1alpha and established an in vitro model of inflammation. Binding experiments with 125I-SCF were performed to study c-kit receptor expression on HUVEC. Scatchard analysis revealed both high-affinity receptors (K(d) approximately 0.36 nmol/L) and low-affinity receptors (K(d) approximately 2.9 nmol/L). Exposure to IL-1alpha led to a significant 50% reduction of c-kit high-affinity receptors, whereas the number of low-affinity receptors was not affected, in comparison to a control group of untreated HUVEC. Furthermore, using Northern blot analysis we studied the regulation of c-kit mRNA expression in HUVEC after stimulation with IL-1alpha. Kinetic experiments showed a time-dependent downregulation of c-kit-specific transcripts. In addition, we cocultured HUVEC with diverse bacterial strains. Experiments were performed over time with 1 x 10 bacteria/mL. Our data showed that, contrary to the previously reported upregulation of SCF mRNA expression, stimulation with Yersinia enterocolitica or with Neisseria meningitidis led to a significant time-dependent downregulation of c-kit mRNA within 3 hours. These data indicate that inflammatory stimuli such as IL-1 or living bacteria activate a mechanism that downregulates c-kit receptor expression in human endothelial cells during the state of inflammation. |
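The Scatchard analysis referred to in the abstract above can be sketched numerically: plotting bound/free against bound for single-site binding gives a line of slope -1/Kd, so a linear fit recovers the dissociation constant (the data below are synthetic, generated from a Langmuir isotherm, not the study's measurements):

```python
import numpy as np

# Synthetic single-site binding data: Kd = 0.36 nmol/L, Bmax = 1.0 (a.u.).
kd_true, bmax = 0.36, 1.0
free = np.linspace(0.05, 5.0, 20)       # free ligand concentration, nmol/L
bound = bmax * free / (kd_true + free)  # Langmuir binding isotherm

# Scatchard linearization: bound/free = Bmax/Kd - bound/Kd.
slope, intercept = np.polyfit(bound, bound / free, 1)
kd_est = -1.0 / slope
print(round(kd_est, 2))  # 0.36
```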
// IMPLEMENTATIONS:
// Extract the shape information from the JSON structure, then create and return the collision object.
moveit_msgs::CollisionObject create_collision_object(const Json::Value &object,
                                                     const std::string &object_id) {
  const auto &type = object["type"].asString();
  const auto &dimensions = object["dimensions"];
  const auto &position = object["position"];
  const auto &orientation = object["orientation"];
  if (type.empty() || dimensions.empty() || position.empty() || orientation.empty()) {
    throw json_field_error("missing json field");
  }

  moveit_msgs::CollisionObject collision_object;
  collision_object.header.frame_id = "world";
  collision_object.id = object_id;

  shape_msgs::SolidPrimitive primitive;
  if (type == "box") {
    primitive.type = primitive.BOX;
    primitive.dimensions.resize(3);
    primitive.dimensions[0] = dimensions["x"].asDouble();
    primitive.dimensions[1] = dimensions["y"].asDouble();
    primitive.dimensions[2] = dimensions["z"].asDouble();
  } else if (type == "sphere") {
    primitive.type = primitive.SPHERE;
    primitive.dimensions.resize(1);  // a sphere has only a radius
    primitive.dimensions[0] = dimensions["r"].asDouble();
  } else if (type == "cylinder") {
    primitive.type = primitive.CYLINDER;
    primitive.dimensions.resize(2);
    primitive.dimensions[0] = dimensions["h"].asDouble();
    primitive.dimensions[1] = dimensions["r"].asDouble();
  } else if (type == "cone") {
    primitive.type = primitive.CONE;
    primitive.dimensions.resize(2);
    primitive.dimensions[0] = dimensions["h"].asDouble();
    primitive.dimensions[1] = dimensions["r"].asDouble();
  } else {
    throw collision_object_creation_error("the type specified in json is not valid");
  }

  geometry_msgs::Pose object_pose;
  object_pose.orientation.w = orientation["w"].asDouble();
  object_pose.orientation.x = orientation["x"].asDouble();
  object_pose.orientation.y = orientation["y"].asDouble();
  object_pose.orientation.z = orientation["z"].asDouble();
  object_pose.position.x = position["x"].asDouble();
  object_pose.position.y = position["y"].asDouble();
  object_pose.position.z = position["z"].asDouble();

  collision_object.primitives.push_back(primitive);
  collision_object.primitive_poses.push_back(object_pose);
  collision_object.operation = collision_object.ADD;
  // Return by value: NRVO applies, and std::move here would inhibit copy elision.
  return collision_object;
} |
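For reference, a JSON payload matching what `create_collision_object` above parses, here for a box (values illustrative):

```json
{
  "type": "box",
  "dimensions": { "x": 0.2, "y": 0.2, "z": 0.4 },
  "position": { "x": 0.5, "y": 0.0, "z": 0.2 },
  "orientation": { "w": 1.0, "x": 0.0, "y": 0.0, "z": 0.0 }
}
```

For `"sphere"` only `dimensions.r` is read; `"cylinder"` and `"cone"` read `dimensions.h` and `dimensions.r`.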
package no.rustelefonen.hap.customviews;

import android.content.Context;
import android.util.AttributeSet;
import android.view.MotionEvent;

/**
 * Created by lon on 19/04/16.
 * NestedScrollView whose scrolling can be toggled on and off.
 */
public class NestedScrollView extends android.support.v4.widget.NestedScrollView {
    private boolean enableScrolling = true;

    public NestedScrollView(Context context, AttributeSet attrs, int defStyle) {
        super(context, attrs, defStyle);
    }

    public NestedScrollView(Context context, AttributeSet attrs) {
        super(context, attrs);
    }

    public NestedScrollView(Context context) {
        super(context);
    }

    @Override
    public boolean onInterceptTouchEvent(MotionEvent ev) {
        return isEnableScrolling() && super.onInterceptTouchEvent(ev);
    }

    @Override
    public boolean onTouchEvent(MotionEvent ev) {
        return isEnableScrolling() && super.onTouchEvent(ev);
    }

    public boolean isEnableScrolling() {
        return enableScrolling;
    }

    public void setEnableScrolling(boolean enableScrolling) {
        this.enableScrolling = enableScrolling;
    }
} |
package language.observation.deterministic.term;
import language.observation.deterministic.TermObservation;
import planner.State;
import structure.AdvancedSet;
public abstract class UnconditionalTermObservation extends TermObservation {
@Override
public AdvancedSet<UnconditionalTermObservation> getObservations(State predictedState) {
return new AdvancedSet<UnconditionalTermObservation>(this);
}
}
|
Can Pokemon GO really improve your mental health?
Ever since its release last week, the augmented-reality app Pokemon Go has inundated our social media feeds. While some of the news has been troubling (think players who've injured themselves as they attempted to navigate neighborhoods while staring at their phones), other reports have been surprisingly positive, with users praising the addictive game as a fun way to get more exercise.
Not familiar with Pokemon Go? The app (which you can download for free in iTunes and Google Play) lets you search for digital Pokemon in real-world locations. In other words, players have to physically go outside and chase down characters like Charizard and Pikachu, who might be hiding in the park, for example, or at the mall.
But aside from prompting physical activity, the game may offer another, more unexpected health benefit: As Buzzfeed reports, Pokemon Go has been encouraging people with mental health issues to spend time outdoors, which has, in turn, boosted their well-being.
For example, an 18-year-old Tumblr user named Ari who suffers from anxiety and depression spent the last three years terrified to leave her house, until Pokemon Go gave her a much-needed push to get out the door: "I walked outside for hours and suddenly found myself enjoying it," she told Buzzfeed. "I had the instant rush of dopamine whenever I caught a Pokemon, and I wanted to keep going."
Ben Michaelis, PhD, an evolutionary clinical psychologist and author of Your Next Big Thing ($10; amazon.com), isn't surprised by these reports. In fact, he's already seen Pokemon Go help one of his own clients. "I think it's a genuinely positive development," he says.
Michaelis believes apps like this one are most likely to help people with mild to moderate cases of anxiety, depression, and agoraphobia.
"The game could provide motivation to go outside and explore the world through a sort of enhanced reality," he explains. "It could also provide people with enough of a distraction from their fears and inner monologue to get them to do something that might be challenging for them."
And, as we already know from research, getting more exercise and spending time outdoors can help alleviate symptoms of anxiety and depression.
That said, "games shouldn't be seen as a cure, but as a useful tool," Michaelis points out. It's still important to work with a mental health professional to treat your condition.
Another cautionary note: "One obvious potential drawback [of the app] is that Pokemon Go could become the only way a person can interact with the world," Michaelis says.
To enjoy the game in a healthy way, he recommends giving yourself a time limit (say 30 minutes a day), and to make sure hunting digital critters isn't the only activity you're doing outside. After you put down your phone, spend some time gardening, or walking or running, he suggests. |
# uva-soln/u11879_multipleof17.py
while True:
x = int(input())
if x == 0:
break
print(int((x%17) == 0)) |
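The direct modulo above is fine in Python; an equivalent alternative is the classic divisibility rule for 17 (a sketch, not the judge's reference solution): drop the last digit and subtract five times it.

```python
def multiple_of_17(n):
    # n is divisible by 17 iff (n // 10) - 5 * (n % 10) is,
    # because 10 * (a - 5b) = n - 51b and 51 = 3 * 17.
    n = abs(n)
    while n >= 100:
        n = abs(n // 10 - 5 * (n % 10))
    return 1 if n % 17 == 0 else 0
```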
Google came to Hartford Monday, offering lessons to business owners, nonprofit groups and individuals who want to reach more customers, learn about visitors to their websites and other digital marketing skills. It’s part of the tech company’s “Grow With Google” initiative reaching out to businesses and individuals looking to improve their online savvy.
The internet gives business a worldwide audience, presenting owners and managers with opportunities to reach customers far beyond their front doors. But business owners and others must pick up new marketing skills to promote their business and learn details about customer tastes and demographics.
Google representatives are set to travel to New Haven on Wednesday and to New London on Friday.
Advertising in the digital era is more difficult than ever, prompting businessman Herb Glick to attend the Google training session at the Hartford Public Library.
Not long ago -- 1987, to be exact, when Glick bought Yush Sign & Display Co. Inc., an East Hartford company that designs and makes signs -- a business owner bought advertising from the local phone company for space in the Yellow Pages.
“It was easy. You didn’t have to visit it. This...you have to visit it every day. It’s a chore,” Glick said, referring to his website.
The nature of his business makes his work difficult, he said. For example, he said, there is no simple answer to the question of cost. With 169 municipalities in Connecticut, scores of sign regulations scramble easy answers about cost. Placement and size also are factors.
“It’s a tough business. That’s why I’m here," Glick said.
Jennifer Cassidy works with Business for Downtown Hartford to help about 40 businesses bring in more customers and boost commerce in the Capital City.
She spent about 15 minutes with a Google trainer for tips on how to promote business and help owners and managers get a presence on Google search. One business, for example, doesn’t come up on Google search, she said, severely restricting its online presence.
“I think there are some opportunities to get businesses in shape,” she said.
Chakai Duany, a Hartford resident working for a master’s degree in business analytics at Central Connecticut State University, attended a session on how to promote businesses on Google maps to reach more customers.
Duany, 33, of Hartford, said she would sit in on the “Reach Customers Online With Google” workshop that includes information on how to get businesses on Google maps to extend their reach.
She’s created her own website that serves as a platform for a blog, “forevermyhair,” on hair care and hair products.
Stephanie Hughes and Zac Camner of the accounting firm blumshapiro attended a session on how to reach customers online.
“We want to better understand how people use our website,” Hughes said.
They also want to gauge what prospective customers are looking for when they do a search for accounting firms and figure out characteristics of possible clients. “They’re not just people clicking on an ad,” Camner said.
Customers also fill out forms describing services they’re seeking and how they found blumshapiro’s website. All of that provides valuable marketing information that needs to be interpreted. The lessons are important for their own sake, too.
“It’s always good to continue our education,” Camner said. |
Your military bureaucracy, hard at work. Nine months ago, Marine Corps Major General Richard Zilmer, the head of coalition forces in western Iraq, sent an "Priority 1" request to the Pentagon, asking for new gear. Today, according to Inside Defense, the Pentagon's Joint Staff said they'd start thinking about it.
He asked for renewable power stations, equipped with "solar panels and wind turbines," instead. Constantly resupplying out-of-the-way bases with fossil fuels was putting troops at risk of "serious and grave casualties" on Iraq's roadways, Zilmer noted in his request.
Not to mention the expense: Factor in transportation and storage, and the price of a gallon of fuel in Iraq can be as high as $400. Green power had become a battlefield necessity.
“Military officials have confirmed that the renewable energy [joint urgent operational need] made it to the Joint Staff from [U.S. Central Command] on March 28,” Joint Staff spokesman Army Lt. Col. Gary Tallman tells *Inside Defense*.
Now, granted, the Pentagon can't magically, instantly start fulfilling every request on a general's wish list. And Zilmer's plea isn't the easiest to satisfy; he asked for some pretty major power supplies. But nine months – just to begin to respond to a battlefield commander's "urgent" request? C'mon. We're at war here. The bureaucrats have got to shuffle paper faster than that.
UPDATE: Meanwhile, the Army's Rapid Equipping Force, seemingly sidestepping the Joint Staff, is sending prototype green-power generators to Iraq and Afghanistan. |
//median function for ArrayList, returns median value of Integer ArrayList. Used for implementing smoothing buffer
public int median(ArrayList<Integer> list) {
ArrayList<Integer> temp = new ArrayList<Integer>(list);
Collections.sort(temp);
return temp.get(temp.size() / 2);
} |
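Note that for even-sized lists this returns the upper of the two middle elements, not their average — usually acceptable for a smoothing buffer. A quick Python sketch mirroring the Java logic:

```python
def median_upper(values):
    # Mirrors the Java helper: sort a copy, take the element at len // 2.
    ordered = sorted(values)
    return ordered[len(ordered) // 2]
```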
One of the more significant changes at Eastbourne Borough this summer will go by largely unnoticed.
The Blue Square premier new boys have bolstered their squad with four decent signings.
Off the field, former Chelsea managing director Colin Hutchinson has joined the club in an advisory role. Amid all that, chief coach Nick Greenwood's conversion from part-time to full-time status has not attracted much attention, but fans should not under-estimate its importance.
Long-serving Greenwood provides the link on and off the pitch and being able to donate a lot more time to his duties will make the club's move to the top tier of non-league football a lot smoother.
Whether Garry Wilson could have continued as manager without Greenwood becoming full-time is something only he could answer.
Borough will be only one of a few clubs in Blue Square premier this season who are not full-time and most of those who have remained part-time, including neighbours Lewes, at least have a full-time manager.
Wilson is an engineering manager by day so his trusty sidekick has given up his job in the fire service.
Greenwood, who turns 50 in a fortnight, has been a fundamental part of Borough's meteoric rise from County League to Blue Square premier since joining the club from Hassocks in 1997.
He said: "This will enable me to take the pressure off Garry. I will be fielding calls, freeing him up from the responsibility of that so he can get on with doing his job during the day.
"I will be putting on extra training sessions for the players when we can because a lot of them have unusual shift patterns. It will also be an opportunity to watch more matches in the professional game that are going on during the week that we should have some sort of attendance at. Garry is the manager and I am the coach but we are a pair and we share the responsibilities jointly.
"As you move up the leagues there are more responsibilities, more things to do. It was bad enough last year when a lot of the managers in Conference south were full-time.
"The whole thing is to do with gearing ourselves up to be more professional and giving ourselves the best possible chance of staying in Conference national.
"I have done 30 years in the fire service. I had an opportunity to stay on and take promotion but basically I couldn't do both. I couldn't take extra responsibilities with the fire service and extra responsibilities of coaching a Conference national side.
"On the night of our play-off win against Hampton I knew I had reached a situation where a decision had to be made but it was something Garry and I had talked about for a while."
Although Greenwood will not go full-time until the start of August, he and Wilson have been working hard all summer trying to get Borough as prepared as they can be for the challenge ahead.
One thing they did was have a meeting with the management team at Salisbury, whose players remained part-time after promotion to the Blue Square premier and enjoyed a superb season last year.
Greenwood said: "Everybody looks to us as a role model but we take advice from others. We have spent a lot of time with our counterparts at Salisbury, Tommy Widdrington and Nick Holmes. We have talked to them and they have told us the pitfalls, things they did well and things they didn't do so well. They are both full-time themselves and they have given us pointers.
"The club have a plan that we have to stay part-time but if we manage to stay in the Conference national next year we will look at the next stage, having some players full-time and some part-time. Ultimately if we stay in that division with part-time players we will be relegated in the end.
"Me doing this is the first step towards that. That has always been the secret of our success at Eastbourne, continually looking at ways to do things better."
Greenwood, who will be assisted in the coaching department by Simon Colbran and Dean Lightwood, initially joined Borough as coach to Steve Richardson and, after a spell in caretaker charge, stayed on when Wilson was appointed in February 1999.
He said: "I worked with Garry in the County League and now we are in exactly the same positions in the Conference national. It has been a great journey and we are really looking forward to the next step."
>How do you rate Nick Greenwood's contribution to Borough's progress? |
// Generated by the protocol buffer compiler. DO NOT EDIT!
// source: google/ads/googleads/v10/enums/ad_group_criterion_status.proto
package com.google.ads.googleads.v10.enums;
public final class AdGroupCriterionStatusProto {
private AdGroupCriterionStatusProto() {}
public static void registerAllExtensions(
com.google.protobuf.ExtensionRegistryLite registry) {
}
public static void registerAllExtensions(
com.google.protobuf.ExtensionRegistry registry) {
registerAllExtensions(
(com.google.protobuf.ExtensionRegistryLite) registry);
}
static final com.google.protobuf.Descriptors.Descriptor
internal_static_google_ads_googleads_v10_enums_AdGroupCriterionStatusEnum_descriptor;
static final
com.google.protobuf.GeneratedMessageV3.FieldAccessorTable
internal_static_google_ads_googleads_v10_enums_AdGroupCriterionStatusEnum_fieldAccessorTable;
public static com.google.protobuf.Descriptors.FileDescriptor
getDescriptor() {
return descriptor;
}
private static com.google.protobuf.Descriptors.FileDescriptor
descriptor;
static {
java.lang.String[] descriptorData = {
"\n>google/ads/googleads/v10/enums/ad_grou" +
"p_criterion_status.proto\022\036google.ads.goo" +
"gleads.v10.enums\032\034google/api/annotations" +
".proto\"z\n\032AdGroupCriterionStatusEnum\"\\\n\026" +
"AdGroupCriterionStatus\022\017\n\013UNSPECIFIED\020\000\022" +
"\013\n\007UNKNOWN\020\001\022\013\n\007ENABLED\020\002\022\n\n\006PAUSED\020\003\022\013\n" +
"\007REMOVED\020\004B\365\001\n\"com.google.ads.googleads." +
"v10.enumsB\033AdGroupCriterionStatusProtoP\001" +
"ZCgoogle.golang.org/genproto/googleapis/" +
"ads/googleads/v10/enums;enums\242\002\003GAA\252\002\036Go" +
"ogle.Ads.GoogleAds.V10.Enums\312\002\036Google\\Ad" +
"s\\GoogleAds\\V10\\Enums\352\002\"Google::Ads::Goo" +
"gleAds::V10::Enumsb\006proto3"
};
descriptor = com.google.protobuf.Descriptors.FileDescriptor
.internalBuildGeneratedFileFrom(descriptorData,
new com.google.protobuf.Descriptors.FileDescriptor[] {
com.google.api.AnnotationsProto.getDescriptor(),
});
internal_static_google_ads_googleads_v10_enums_AdGroupCriterionStatusEnum_descriptor =
getDescriptor().getMessageTypes().get(0);
internal_static_google_ads_googleads_v10_enums_AdGroupCriterionStatusEnum_fieldAccessorTable = new
com.google.protobuf.GeneratedMessageV3.FieldAccessorTable(
internal_static_google_ads_googleads_v10_enums_AdGroupCriterionStatusEnum_descriptor,
new java.lang.String[] { });
com.google.api.AnnotationsProto.getDescriptor();
}
// @@protoc_insertion_point(outer_class_scope)
}
|
A phase II trial of interferon and 5-fluorouracil in patients with advanced renal cell carcinoma: A Southwest Oncology Group study Renal cell carcinoma is a common neoplasm that is often refractory to treatment. It is occasionally responsive to immunomodulating agents including interferon, which enhances the effects of 5-fluorouracil upon cells. Combinations of these two drugs have been most frequently tested in patients with gastrointestinal cancers, with some promising results. Because interferon has activity for renal cell carcinoma, a trial of this combination in patients with this malignancy was undertaken.
'use strict';
const {Client} = require('pg');
export class RareBitsWallet {
client = new Client({
host: 'localhost',
user: 'postgres',
password: '<PASSWORD>',
database: 'oleg'
});
constructor() {
// Connect first, then create the tables once the connection is ready.
this.connect().then(() => this.createTable());
}
async connect() {
await this.client.connect();
}
async createTable() {
return await this.client.query(`
DROP TABLE IF EXISTS ledger;
DROP TABLE IF EXISTS balance;
CREATE TABLE IF NOT EXISTS ledger (
id SERIAL,
user_id integer,
type text,
amount bigint,
new_balance bigint
);
CREATE TABLE IF NOT EXISTS balance (
id SERIAL,
user_id integer,
balance_value bigint
);
INSERT INTO balance (user_id,balance_value) VALUES(
0,
0
);
`);
}
async deposit(amount: number, userId :number) {
const res = await this.client.query(`SELECT balance_value from balance WHERE user_id = ${userId};`);
const balance = res.rows && res.rows[0] && res.rows[0].balance_value;
console.log('balance:', balance);
const newBalance = Number(balance) + amount; // pg returns bigint columns as strings, so coerce before adding
await this.client.query(`INSERT INTO ledger (user_id, type, amount, new_balance) VALUES (${userId}, 'deposit', ${amount}, ${newBalance});`);
return this.client.query(`UPDATE balance SET balance_value = ${newBalance} WHERE user_id = ${userId};`);
}
async wager(amount: number, userId: number) {
const res = await this.client.query(`SELECT balance_value from balance WHERE user_id = ${userId};`);
const balance = res.rows && res.rows[0] && res.rows[0].balance_value;
if(!balance){
return Promise.reject('No balance available.');
}
if(balance < amount){
return Promise.reject('Not enough money in your account.');
}
const calcAmount = Math.random() > 0.5 ? amount : -amount;
const newBalance = Number(balance) + Number(calcAmount);
await this.client.query(`INSERT INTO ledger (user_id, type, amount, new_balance) VALUES (${userId}, 'wager', ${amount}, ${newBalance});`);
return this.client.query(`UPDATE balance SET balance_value = ${newBalance} WHERE user_id = ${userId};`);
}
async withdraw(amount: number, userId: number) {
const res = await this.client.query(`SELECT balance_value from balance WHERE user_id = ${userId};`);
const balance = res.rows && res.rows[0] && res.rows[0].balance_value;
if(!balance){
return Promise.reject('No balance available.');
}
if(balance < amount){
return Promise.reject('Not enough money in your account.');
}
console.log('balance:', balance);
const newBalance = balance - amount;
await this.client.query(`INSERT INTO ledger (user_id, type, amount, new_balance) VALUES (${userId}, 'withdrawal', ${amount}, ${newBalance});`);
return this.client.query(`UPDATE balance SET balance_value = ${newBalance} WHERE user_id = ${userId};`);
}
}
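The table logic above boils down to one invariant: each ledger row stores the balance after the operation, and the balance table mirrors the last row. A toy in-memory Python model of that invariant (the class name and shape are illustrative only, no SQL):

```python
class WalletModel:
    """Toy in-memory model of the ledger/balance tables above."""

    def __init__(self):
        self.balance = 0
        self.ledger = []  # (type, amount, new_balance) rows

    def _record(self, kind, amount, new_balance):
        # Every operation appends a ledger row and syncs the balance.
        self.ledger.append((kind, amount, new_balance))
        self.balance = new_balance

    def deposit(self, amount):
        self._record("deposit", amount, self.balance + amount)

    def withdraw(self, amount):
        if self.balance < amount:
            raise ValueError("Not enough money in your account.")
        self._record("withdrawal", amount, self.balance - amount)
```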
|
Ultraviolet photoelectron studies of biological pyrimidines. The valence electronic structure of cytosine. UV photoelectron spectroscopy and CNDO/S molecular orbital calculations have been employed to investigate the electronic structure of cytosine (I), 1-methylcytosine (II), N,1-dimethylcytosine (III), N,N,1-trimethylcytosine (IV), 3-methylcytosine (V), 1,5-dimethylcytosine (VI), 1,6-dimethylcytosine (VII), 5-methylcytosine (VIII), and 6-methylcytosine (IX). The resolution of the spectra obtained for different members of this series of molecules varies markedly. Of all the molecules investigated the photoelectron bands arising from the five uppermost orbitals are well resolved only for N,1-dimethylcytosine. The variation in the resolution arises partially from the overlapping of bands. Furthermore, spectra obtained for molecules in which labile H atoms are replaced by methyl groups exhibit much better resolution than spectra for other molecules. This observation is probably related to hydrogen bonding effects. For cytosine the spacing of bands occurring in the spectrum is accurately reproduced in the results of CNDO/S calculations carried out on the 1(H) aminooxo tautomeric form of the molecule. In compounds II-IV and VI-IX the spacing of bands and the shifts observed in the spectra are also well predicted by calculations carried out on the aminooxo tautomers. However, for 3-methylcytosine the results indicate that an imino tautomeric form is most stable. For all compounds the CNDO/S calculations indicate that three of the five uppermost orbitals are π orbitals and that two are lone-pair orbitals. In cytosine the first and fifth bands arise from π orbitals while the fourth band arises from a lone-pair orbital. The second and third bands arise from a π and a lone-pair orbital which are strongly overlapping and their ordering remains uncertain.
Introduction. The valence molecular orbital structure of biological purines and pyrimidines plays an important role in determining the biochemical properties of these molecules. Energies and electron distributions associated with the valence orbitals of these molecules influence the manner in which purines and pyrimidines participate in weak bonding interactions as well as in chemical reactions.
#ifndef _Base_H_
#define _Base_H_
class Base
{
protected:
int x, y, dx, dy, ancho, alto, fil, col, id;
public:
Base() : x(0), y(0), dx(0), dy(0), ancho(0), alto(0), fil(0), col(0), id(0) {}
~Base() {}
void Cambiar_X(int nuevo) { x = nuevo; }
void Cambiar_Y(int nuevo) { y = nuevo; }
void Cambiar_DX(int nuevo) { dx = nuevo; }
void Cambiar_DY(int nuevo) { dy = nuevo; }
void Cambiar_ANCHO(int nuevo) { ancho = nuevo; }
void Cambiar_ALTO(int nuevo) { alto = nuevo; }
void Cambiar_FILA(int nuevo) { fil = nuevo; }
void Cambiar_COLUMNA(int nuevo) { col = nuevo; }
void Cambiar_ID(int nuevo) { id = nuevo; }
int Retornar_X() { return x; }
int Retornar_Y() { return y; }
int Retornar_DX() { return dx; }
int Retornar_DY() { return dy; }
int Retornar_ANCHO() { return ancho; }
int Retornar_ALTO() { return alto; }
int Retornar_FILA() { return fil; }
int Retornar_COLUMNA() { return col; }
int Retornar_ID() { return id; }
virtual void Mover(System::Drawing::Graphics ^g) {};
virtual void Mostrar(System::Drawing::Graphics ^g, System::Drawing::Image^img) { g->DrawImage(img, x, y, ancho, alto); };
};
#endif // !_Base_H_
|
package jef.tools.management;
import jef.common.Callback;
/**
 * Handler for catching and processing the TERM signal.
 * @author Administrator
 *
 */
public interface TermHandler {
public void setDoNotExit();
public int getExitStatus();
public void setExitStatus(int exitStatus);
public void addEvent(Callback<Integer,Exception> event);
public void activate();
}
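A rough Python analogue of this interface, using the stdlib signal module (names and behaviour are assumptions, since the Java implementation is not shown):

```python
import signal

class TermHandlerSketch:
    """Rough analogue of the TermHandler interface above."""

    def __init__(self):
        self._exit_status = 0
        self._do_exit = True
        self._events = []

    def set_do_not_exit(self):
        self._do_exit = False

    def get_exit_status(self):
        return self._exit_status

    def set_exit_status(self, status):
        self._exit_status = status

    def add_event(self, callback):
        self._events.append(callback)

    def activate(self):
        # Install the SIGTERM handler; callbacks run when the signal arrives.
        signal.signal(signal.SIGTERM, self._handle)

    def _handle(self, signum, frame):
        for cb in self._events:
            cb(signum)
        if self._do_exit:
            raise SystemExit(self._exit_status)
```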
|
It is ‘incomprehensible’ that the Academy Selsey will be rebuilt without sprinklers.
That is according to the Fire Brigades Union (FBU) and National Education Union (NEU), which have accused the government of a ‘shockingly cavalier’ attitude to fire safety.
In a letter to Damian Hinds, secretary of state for education, the unions criticise the announcement last month that the Selsey secondary school, destroyed by fire in 2016, will be rebuilt without a sprinkler system.
Calling Selsey ‘not an isolated case’, it cites the school at the base of the fire-destroyed Grenfell Tower, built in 2014, as not having sprinklers, along with newly-built schools in Croydon.
Andy Dark, from the FBU, said: “The government’s attitude toward fire safety is shockingly cavalier.”
TKAT, the trust that runs The Academy, said detailed discussion, risk assessment and consultation with experts had been done before it decided not to fit sprinklers in the new school, which is currently being built to be finished by the end of the year.
TKAT CEO Karen Roberts said: “The building design at Selsey is in accordance with our stringent regulations, and we are confident in both the fire prevention and safety procedures in place across all our academies.”
The unions warned that only 35 per cent of new schools built in England since 2010 have been fitted with sprinklers, calling on the government to adopt the policy in Scotland and Wales for all schools rebuilt after fire to have sprinklers installed.
A Department for Education spokesman said: “Schools have a range of fire protection measures and new schools undergo an additional check while being designed.”
l = [0, 1, 1] + [0] * (10**5 - 2)
for i in range(3, 10**5 + 1):
    if i % 4 == 3:
        l[i] = (l[i-1] + l[i-2] + 2 * (i // 4 + 1)) % (10**9 + 7)
    elif i % 4 == 2:
        l[i] = l[i-1]
    else:
        l[i] = (l[i-1] + l[i-2] + 1) % (10**9 + 7)
n = int(input())
print(l[n])
|
class PractitionerTemplate:
def __init__(self):
super().__init__()
def practitioner_default(self, data_list):
keys = []
for data in data_list:
key = f"{data.get('rowid')},{data.get('employee_id')}"
keys.append(key)
return keys
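A self-contained restatement of the key construction above, with sample rows (the data is hypothetical):

```python
def practitioner_keys(data_list):
    # Same key construction as practitioner_default: "rowid,employee_id".
    return [f"{d.get('rowid')},{d.get('employee_id')}" for d in data_list]

rows = [{"rowid": 1, "employee_id": 7}, {"rowid": 2, "employee_id": 9}]
```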
|
Organogenesis of lung and kidney in Thoroughbreds and ponies. Equine lung and kidney organogenesis has not previously been examined with the use of unbiased stereological techniques. The present study examined healthy (control) pony and Thoroughbred lungs and kidneys to establish baseline data of organ development from before birth until maturity at age 3-18 years. Whole left lungs and kidneys were collected from 45 equine postmortem examinations (34 Thoroughbred, 11 pony). Stereological techniques were used to estimate whole kidney, cortex and medulla volume, total glomerular number and volume-weighted mean glomerular volume, lung volume, total terminal bronchiolar duct ending number and total gas exchange surface area. Lungs were demonstrated to be more developed at birth in ponies compared with Thoroughbreds. Thoroughbreds showed continued lung development after birth, a unique micromorphogenic postnatal development. Kidneys were developed equally in ponies and Thoroughbreds. This study has provided data on the baseline development of the equine lung and kidney, which can be used in further studies to examine whether the development of these organs is affected by specific illnesses.
#include "gripper/configuer.h"
using namespace hirop_gripper;
Configure::Configure(std::string fileName)
{
try{
config = YAML::LoadFile(fileName);
}catch(std::exception &e){
IErrorPrint("Load configure file error");
}
}
int Configure::getGripperName(std::string &gripperName){
if(!config["gripperName"]){
IErrorPrint("Get gripper name error: No gripperName node");
return -1;
}
gripperName = config["gripperName"].as<std::string>();
configDebug("Get gripperName: %s", gripperName.c_str());
return 0;
}
int Configure::getPrivateParams(YAML::Node &yamlNode)
{
if(!config["parameters"]){
IErrorPrint("No private parameters");
return 0;
}
yamlNode = config["parameters"];
return 0;
}
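The lookup pattern above (check the node, report, then read it) maps directly onto a dict-based sketch; here a plain dict stands in for the YAML tree and the gripper name is a made-up example:

```python
def get_gripper_name(config):
    """Mirror of Configure::getGripperName: -1 on a missing node,
    otherwise 0 plus the value."""
    if "gripperName" not in config:
        return -1, None
    return 0, config["gripperName"]
```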
|
The Opioid Epidemic and the State of Stigma: A Pennsylvania Statewide Survey Abstract Background: The opioid epidemic is a public health crisis. Among initiatives surrounding treatment and prevention, opioid use disorder (OUD) stigma has emerged as a subject for intervention. Objectives: This study examines overall results and demographic differences of three subscales of a public stigma survey instrument: general attitudes, social distance, and treatment availability and effectiveness. Methods: A statewide sample of Pennsylvanian adults (N=1033) completed an online survey about the opioid epidemic. Weighted percentage level of agreement was reported for each item. To determine significant differences in responding across demographic groups (gender, race, and urban/rural status), multiple one-way ANOVAs were analyzed. Significant differences in the level of agreement and disagreement (p <.05) were reported. Results: The majority of respondents agreed that the opioid epidemic is a problem and that anyone can become addicted to opioids; however, many Pennsylvanians still disagree that OUD is a medical disorder and continue to endorse social distance beliefs of people with OUD. Most participants agreed that there are effective treatments available, and that recovery was possible; however, a large portion of participants were unsure whether specific treatments are effective. Subscale mean differences were significant for gender and age. Conclusions/Importance: Findings highlight that stigmatized attitudes, behaviors, and beliefs about individuals who use opioids are still prevalent and that uncertainty remains about the effectiveness of OUD treatment. OUD interventions should use targeted messaging in order to impact the ongoing opioid crisis. |
/*
Copyright 2019 <NAME> (i-net software)
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
package de.inetsoftware.classparser;
/**
* @author <NAME>
*/
public class ConstantInvokeDynamic implements Member {
private final ConstantNameAndType nameAndType;
/**
* Invoke dynamic info in the constant pool.
* https://docs.oracle.com/javase/specs/jvms/se9/html/jvms-4.html#jvms-4.4.10
*
* @param bootstrapMethodAttrIndex
* a valid index into the bootstrap_methods array of the bootstrap method table
* @param nameAndType
* the name and type
*/
ConstantInvokeDynamic( int bootstrapMethodAttrIndex, ConstantNameAndType nameAndType ) {
this.nameAndType = nameAndType;
}
/**
* {@inheritDoc}
*/
@Override
public String getName() {
return nameAndType.getName();
}
/**
* {@inheritDoc}
*/
@Override
public String getClassName() {
return null;
}
/**
* Get the type of the method. For example "(Ljava.lang.String;)I"
*/
@Override
public String getType() {
return nameAndType.getType();
}
} |
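The string returned by getType() is a JVM method descriptor. A rough Python sketch of splitting such a descriptor into parameter and return types (illustrative only; not part of this parser, and it skips validation of unknown tags):

```python
def parse_method_descriptor(desc):
    """Split a JVM method descriptor such as "(Ljava.lang.String;)I" into
    (parameter types, return type)."""
    assert desc.startswith("(")
    params, i = [], 1
    while desc[i] != ")":
        start = i
        while desc[i] == "[":          # array dimension prefixes
            i += 1
        if desc[i] == "L":             # object type runs up to ';'
            i = desc.index(";", i)
        i += 1
        params.append(desc[start:i])
    return params, desc[i + 1:]
```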
Registration of airborne laser scanning point clouds with aerial images through terrestrial image blocks Integration of airborne laser scanning (ALS) point clouds and aerial images has a great potential for accurate and robust 3D modeling and recognition of objects in our environment. The integration requires, however, an accurate registration of data sources, which cannot be yet achieved by direct georeferencing using both the GPS and IMU. This research paper presents a method for registering aerial images with ALS data and for evaluating the accuracy of existing registration. An aerial image is included into a multi-scale image block, in which relative orientations of terrestrial close range images and aerial images are then known from the bundle block adjustment. Close range images provide more detailed view of possible tie features and also a new perspective compared to aerial images. For the actual registration of ALS data and image block, one or more images of the block can be chosen. Selected images can include only close range images or both close range images and aerial images. For the registration, the interactive orientation method was used. When selected images are registered with ALS data, the exterior orientations of all other images of the block can be calculated from the known relative orientations. Accuracies of interactive orientations were examined using the reference ALS point cloud that was transformed to the known geodetically determined coordinate system. The coordinate transformation was solved by applying the iterative closest point (ICP) method between the ALS data and the photogrammetrically derived 3D model, the absolute orientation of which was known. Before making experiments of interactive registration, the absolute orientation of the image block was changed in order to get incorrect initial orientation. 
The final results of the interactive orientations were compared with the original orientation information from the bundle block adjustment. The comparison indicated that, by including an aerial image in a terrestrial image block, the registration of ALS data and aerial images can be improved or verified. The accuracy of the interactive registration depended on which images were selected for the registration. The maximum differences between the original and interactively solved locations of the aerial image varied between 2.3 and 9 cm.
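The abstract above solves the coordinate transformation with the iterative closest point (ICP) method. As a rough illustration of ICP's iterate-match-update structure, here is a deliberately simplified, translation-only 2D sketch in Python; it is not the authors' implementation, and a real point-cloud registration would also estimate rotation (and often scale) from the matched pairs.

```python
# Translation-only variant of the Iterative Closest Point (ICP) loop:
# repeatedly match each source point to its nearest target point, then
# shift the whole source set by the mean residual of the matches.
# Standard rigid ICP additionally estimates a rotation from the same
# correspondences (e.g. via SVD of the cross-covariance matrix).

def icp_translation(source, target, iterations=20):
    src = [list(p) for p in source]
    for _ in range(iterations):
        # nearest-neighbour correspondence (brute force, squared distance)
        pairs = []
        for p in src:
            q = min(target, key=lambda t: (t[0] - p[0]) ** 2 + (t[1] - p[1]) ** 2)
            pairs.append((p, q))
        # mean offset between matched pairs
        dx = sum(q[0] - p[0] for p, q in pairs) / len(pairs)
        dy = sum(q[1] - p[1] for p, q in pairs) / len(pairs)
        # update step: translate all source points
        for p in src:
            p[0] += dx
            p[1] += dy
    return [tuple(p) for p in src]
```

With a pure translation offset between the clouds, the loop converges in one pass; in the general rigid case the match and update steps alternate until the residual stops shrinking.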
/*
Lightmetrica - Copyright (c) 2019 <NAME>
Distributed under MIT license. See LICENSE file for details.
*/
#include <pch.h>
#include "pylm_test.h"
LM_NAMESPACE_BEGIN(LM_TEST_NAMESPACE)
class PyTestBinder_Math : public PyTestBinder {
public:
virtual void bind(py::module& m) const {
// Python -> C++
m.def("compSum2", [](lm::Vec2 v) -> lm::Float {
return v.x + v.y;
});
m.def("compSum3", [](lm::Vec3 v) -> lm::Float {
return v.x + v.y + v.z;
});
m.def("compSum4", [](lm::Vec4 v) -> lm::Float {
return v.x + v.y + v.z + v.w;
});
m.def("compMat4", [](lm::Mat4 m) -> lm::Float {
lm::Float sum = 0;
for (int i = 0; i < 4; i++) {
for (int j = 0; j < 4; j++) {
sum += m[i][j];
}
}
return sum;
});
// C++ -> Python
m.def("getVec2", []() -> lm::Vec2 {
return lm::Vec2(1, 2);
});
m.def("getVec3", []() -> lm::Vec3 {
return lm::Vec3(1, 2, 3);
});
m.def("getVec4", []() -> lm::Vec4 {
return lm::Vec4(1, 2, 3, 4);
});
m.def("getMat4", []() -> lm::Mat4 {
return lm::Mat4(
1,1,0,1,
1,1,1,1,
0,1,1,0,
1,0,1,1
);
});
}
};
LM_COMP_REG_IMPL(PyTestBinder_Math, "pytestbinder::math");
LM_NAMESPACE_END(LM_TEST_NAMESPACE)
|
Effect of Friction Stir Welding Process on Crystallinity and Degradation of Polypropylene The aim of this study was to investigate the crystallinity changes and degradation of polypropylene due to heat generated by friction stir welding, i.e., heat generated by friction between the rotating tool and the welded materials. The tool pin was rotated at 620 rpm in the welding process. The travelling speed was varied between 7.3 mm/minute and 13 mm/minute. A cylindrical tool pin, 4.5 mm in diameter and 5.7 mm in length, was used in this experiment. The shoulder dimension was 18 mm in diameter and 90 mm in length. A conventional milling machine was used in the friction stir welding process. The crystallinity test was carried out with X-ray diffraction, hardness was measured using a Shore Type-D durometer, and polymer degradation data were obtained by thermogravimetric analysis. The areas compared were the base material, the weld nugget area, and the thermomechanically affected zone. The results showed that there was a change in the percentage of crystallinity in areas that had undergone friction stir welding, and that the change was inversely proportional to the travelling speed of the friction stir welding process. The friction stir welding process affected the initial degradation temperature and the hardness of the polypropylene. This result shows that it is possible to choose specific friction stir welding parameters in order to obtain good weld joint properties.
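The abstract does not state how the percentage of crystallinity was computed from the diffractograms, but a common approach for XRD data is the area-ratio method: the degree of crystallinity is the integrated intensity of the crystalline peaks divided by the total (crystalline plus amorphous) scattered intensity. A minimal sketch, assuming peak deconvolution has already produced the two integrated areas:

```python
def crystallinity_percent(crystalline_area, amorphous_area):
    """Degree of crystallinity from integrated XRD intensities,
    Xc = Ac / (Ac + Aa) * 100, where Ac is the summed area of the
    crystalline peaks and Aa the area of the amorphous halo."""
    total = crystalline_area + amorphous_area
    if total <= 0:
        raise ValueError("total diffracted intensity must be positive")
    return 100.0 * crystalline_area / total
```

For example, a pattern whose crystalline peaks integrate to 60 units against an amorphous halo of 40 units gives 60% crystallinity.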
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.tajo.engine.planner.physical;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.tajo.plan.logical.JoinNode;
import org.apache.tajo.storage.Tuple;
import org.apache.tajo.worker.TaskAttemptContext;
import java.io.IOException;
import java.util.Iterator;
import java.util.List;
public class HashLeftOuterJoinExec extends HashJoinExec {
private static final Log LOG = LogFactory.getLog(HashLeftOuterJoinExec.class);
private final List<Tuple> nullTupleList;
public HashLeftOuterJoinExec(TaskAttemptContext context, JoinNode plan, PhysicalExec leftChild,
PhysicalExec rightChild) {
super(context, plan, leftChild, rightChild);
nullTupleList = nullTupleList(rightNumCols);
}
@Override
public Tuple next() throws IOException {
if (first) {
loadRightToHashTable();
}
while (!context.isStopped() && !finished) {
if (iterator != null && iterator.hasNext()) {
frameTuple.setRight(iterator.next());
return projector.eval(frameTuple);
}
Tuple leftTuple = leftChild.next(); // it comes from a disk
if (leftTuple == null) { // if no more tuples in left tuples on disk, a join is completed.
finished = true;
return null;
}
frameTuple.setLeft(leftTuple);
if (leftFiltered(leftTuple)) {
iterator = nullTupleList.iterator();
continue;
}
// getting corresponding right
TupleList hashed = tupleSlots.get(leftKeyExtractor.project(leftTuple));
Iterator<Tuple> rightTuples = rightFiltered(hashed);
if (!rightTuples.hasNext()) {
//this left tuple doesn't have a match on the right, but in a left outer join we should keep it anyway:
//output the left tuple padded with a null right tuple
iterator = nullTupleList.iterator();
continue;
}
iterator = rightTuples;
}
return null;
}
}
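The control flow above can be summarised in a few lines of Python. This sketch mirrors only the semantics of HashLeftOuterJoinExec — build a hash table on the right input, probe it with each left row, and null-pad unmatched left rows as the nullTupleList path does — not its streaming, filtering, or memory behaviour:

```python
# Hash left outer join over lists of tuples. left_key/right_key extract
# the join key from a row; unmatched left rows are padded with None in
# place of the right-side columns.

def hash_left_outer_join(left, right, left_key, right_key):
    # build phase: hash the right input on its join key
    table = {}
    for r in right:
        table.setdefault(right_key(r), []).append(r)
    # number of right-side columns to null-pad (0 if right is empty)
    right_width = len(right[0]) if right else 0
    out = []
    # probe phase: every left row appears at least once in the output
    for l in left:
        matches = table.get(left_key(l))
        if matches:
            for r in matches:
                out.append(l + r)
        else:
            # no match: keep the left row, null-pad the right columns
            out.append(l + (None,) * right_width)
    return out
```

Probing with `left = [(1, 'a'), (2, 'b')]` and `right = [(1, 'x')]` keyed on the first column yields one joined row for key 1 and a null-padded row for key 2.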
|
Controlled power point tracking for autonomous operation of PMSG based wind energy conversion system With the continuous depletion of conventional sources of energy, Wind Energy Conversion Systems (WECSs) are turning out to be one of the major players with immense potential to meet future energy demands. These WECSs are quickly becoming a primary source of energy in coastal regions and on islands, where they operate in autonomous mode. In this paper, a control strategy for controlled power extraction from a WECS operating in islanded mode is presented. The proposed control strategy enables limited as well as maximum power extraction from WECSs with a desired load voltage profile while minimizing the installation as well as the operating costs associated with the use of expensive batteries in the system. The motive behind using batteries in the system is to facilitate transient stability and enhance reliability. As opposed to pitch angle control, in the present work, real power control is attained by field-oriented control (FOC) of the permanent magnet synchronous generator (PMSG). The operating point of the WECS is decided based on the wind turbine characteristics and the demanded power. Proper decoupling and feed-forward techniques have been deployed to eliminate cross-coupling and mitigate the effect of load side disturbances. Simulations are carried out under varying load demand as well as changing weather conditions to demonstrate the applicability and effectiveness of the proposed control strategy.
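For context, the aerodynamic power a turbine can capture is P = 0.5 * rho * A * Cp * v^3, so a controlled power point tracker can command the smaller of the demanded power and this available maximum. The sketch below only illustrates that limiting logic; the power coefficient of 0.45 and the rotor radius are illustrative assumptions, not figures from the paper, and the paper's actual controller acts through field-oriented control of the PMSG rather than a direct power clip.

```python
import math

def available_power(v_wind, rotor_radius, rho=1.225, cp_max=0.45):
    """Aerodynamic power (W) at wind speed v_wind (m/s):
    P = 0.5 * rho * A * Cp * v^3, with swept area A = pi * r^2.
    rho and cp_max are assumed illustrative values."""
    area = math.pi * rotor_radius ** 2
    return 0.5 * rho * area * cp_max * v_wind ** 3

def power_command(demand, v_wind, rotor_radius):
    """Controlled power point tracking, reduced to its core idea:
    deliver the demanded power when the wind allows it, otherwise
    fall back to the maximum available power."""
    return min(demand, available_power(v_wind, rotor_radius))
```

Note the cubic dependence on wind speed: doubling the wind speed multiplies the available power by eight, which is why the limited-power mode matters at high winds.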
Loss Sensitivity Approach in Evolutionary Algorithms for Reactive Power Planning Abstract This article presents evolutionary algorithm-based optimal reactive power planning. A loss sensitivity approach is developed and implemented using differential evolution, particle swarm optimization, and the genetic algorithm. The objectives are to minimize real power loss and to improve the voltage profile of an interconnected power system. Transmission loss is expressed in terms of voltage increments by relating the control variables, i.e., reactive var generations by the generators, tap positions of transformers, and reactive power injected by the shunt capacitors. Based on the values of the loss sensitivity, corrective action is taken by adding a shunt capacitor at the weak buses identified by weak bus analysis, by controlling reactive generations at the generator buses by judging the sensitivity at these buses, and also by controlling tap changing positions if the tap changing transformers are in between the loss sensitive buses. The solutions obtained by this method are compared with the solutions obtained by each of these evolutionary algorithms separately and with their hybrids with simulated annealing. From the comparisons, it is shown how the sensitivity-based evolutionary technique can be a very useful new tool for reactive power planning.
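As an illustration of one optimizer the article employs, here is a minimal differential evolution (DE/rand/1/bin) loop in Python. The stand-in objective and all parameter values (population size, F, CR, generations) are assumptions for the sketch; in the article the objective would be the network's real power loss evaluated from the control variables (reactive generations, tap positions, shunt injections).

```python
import random

def differential_evolution(loss, bounds, pop_size=20, F=0.5, CR=0.9,
                           generations=150, seed=0):
    """DE/rand/1/bin: for each target vector, build a mutant from three
    distinct population members (a + F * (b - c)), binomially cross it
    over with the target, and keep the trial if it is no worse."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        for i in range(pop_size):
            # three distinct donors, none equal to the target index
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            trial = list(pop[i])
            j_rand = rng.randrange(dim)  # guarantee at least one mutated gene
            for j in range(dim):
                if rng.random() < CR or j == j_rand:
                    lo, hi = bounds[j]
                    # mutate and clip back into the feasible box
                    trial[j] = min(hi, max(lo, a[j] + F * (b[j] - c[j])))
            # greedy selection: loss never increases for any individual
            if loss(trial) <= loss(pop[i]):
                pop[i] = trial
    return min(pop, key=loss)
```

For instance, `differential_evolution(lambda x: x[0]**2 + x[1]**2, [(-5, 5)] * 2)` drives the stand-in loss toward zero; swapping in a load-flow-based loss function and box bounds for the control variables turns the same loop into the planner's search engine.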
Reliability of MR imaging-based virtual cystoscopy in the diagnosis of cancer of the urinary bladder. OBJECTIVE Our purpose was to evaluate MR imaging-based virtual endoscopy in patients with urinary bladder cancer compared with conventional cystoscopy as the gold standard. SUBJECTS AND METHODS Twenty-five patients with urinary bladder cancer diagnosed on conventional cystoscopy underwent MR imaging of the pelvis. Patients were examined without external bladder filling or administration of IV contrast medium. No medications were administered. The data obtained by MR imaging were reconstructed for virtual endoscopy on a workstation. The locations and sizes of tumors were individually determined and compared with results of conventional cystoscopy. RESULTS Twenty-four patients were evaluated; one patient's examination was excluded from analysis because of metallic artifacts. Seventeen patients were diagnosed with a single bladder tumor. Five patients had two tumors each, and two patients had three tumors. Tumor diameter ranged from 0.4 to 6.4 cm. Thirty (90.9%) of 33 tumors detected on cystoscopy were visualized with virtual endoscopy. The detection rate for 23 tumors of 1 cm or greater was 100%. Difficult conditions for conventional cystoscopy, including hematuria, anterior wall involvement, and urethral strictures, had no deleterious impact on virtual cystoscopy. Difficulties in detection on virtual endoscopy were associated with flat bladder tumors with minimal surface elevation. CONCLUSION The results of this study suggest a high reliability in the diagnosis of urinary bladder cancer by MR imaging-based virtual cystoscopy-a noninvasive method, independent of medication or contrast enhancement, that may be of value for screening, primary diagnosis, and surveillance. Virtual MR cystoscopy may be indicated when conventional cystoscopy cannot be performed or is ineffective. |
Tribological Characteristics of Calophyllum inophyllum-Based TMP (Trimethylolpropane) Ester as an Energy-Saving and Biodegradable Lubricant The purpose of this research is an experimental study of Calophyllum inophyllum (CI)-based trimethylolpropane (TMP) ester as an energy-saving and biodegradable lubricant, comparing it with a commercial lubricant and paraffin mineral oil using a four-ball tribometer. CI-based TMP ester is a renewable lubricant that is nonedible, biodegradable, and nontoxic and has net zero greenhouse gas emissions. The TMP ester was produced from CI oil, which has high lubricity properties such as higher density, higher viscosity at both 40°C and 100°C, and a higher viscosity index (VI). Experiments were conducted for 3,600 s with a constant load of 40 kg and a constant sliding speed of 1,200 rpm at temperatures of 50, 60, 70, 80, 90, and 100°C for all three types of lubricant. The results show that CI TMP ester had the lowest coefficient of friction (COF) as well as lower consumption of energy at all test temperatures, but the worn surface roughness average (Ra) and wear scar diameter were higher compared to paraffin mineral oil and the commercial lubricant. Below 80°C, CI TMP ester actually has a higher flash temperature parameter (FTP) than paraffin mineral oil, and as the temperature increases, the FTP of the TMP ester decreases. The worn surfaces of the stationary balls were analyzed by scanning electron microscopy (SEM), and the results show that CI TMP ester has higher wear than paraffin mineral oil and lower wear than the commercial lubricant. However, CI TMP ester is environmentally desirable, competitive with the commercial lubricant, and its use should be encouraged.
package tr.com.poc.temporaldate.bitemporalexample.controller;
import java.math.BigDecimal;
import java.util.List;
import java.util.Optional;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.DeleteMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.ResponseBody;
import org.springframework.web.bind.annotation.RestController;
import io.swagger.annotations.ApiParam;
import lombok.extern.log4j.Log4j2;
import tr.com.poc.temporaldate.bitemporalexample.dto.bitemporalorganization.BitemporalOrganizationSaveOrUpdateRequestDTO;
import tr.com.poc.temporaldate.bitemporalexample.dto.bitemporalorganization.BitemporalOrganizationSaveOrUpdateResponseDTO;
import tr.com.poc.temporaldate.bitemporalexample.dto.common.BitemporalReadRequestDTO;
import tr.com.poc.temporaldate.bitemporalexample.service.BitemporalOrganizationService;
import tr.com.poc.temporaldate.core.util.logging.RestLoggable;
import tr.com.poc.temporaldate.common.Constants;
import tr.com.poc.temporaldate.core.model.BooleanDTO;
import tr.com.poc.temporaldate.core.util.response.RestResponse;
/**
* A Bitemporal Organization Rest Collection Example
*
* @author umut
*/
@RestController
@Log4j2
@RequestMapping(value = "/bitemporal-organization")
@RestLoggable
@ResponseBody
public class BitemporalOrganizationController
{
@Autowired
private BitemporalOrganizationService bitemporalOrganizationService;
/**
* Retrieves all organization data with the given parameter set in {@link BitemporalReadRequestDTO}
* @param toRead input parameters for read criteria
* @return {@link RestResponse} containing a list of {@link BitemporalOrganizationSaveOrUpdateResponseDTO}
*/
@PostMapping(value = "/getAll" , consumes = {MediaType.APPLICATION_JSON_VALUE, MediaType.APPLICATION_XML_VALUE}, produces= {MediaType.APPLICATION_JSON_VALUE, MediaType.APPLICATION_XML_VALUE})
public RestResponse<BitemporalOrganizationSaveOrUpdateResponseDTO> getOrganizationList(@RequestBody BitemporalReadRequestDTO toRead)
{
List<BitemporalOrganizationSaveOrUpdateResponseDTO> allOrganizations = bitemporalOrganizationService.getAllOrganizations(toRead);
log.debug("Organization list retrieved using /bitemporal-organization/getAll rest");
return new RestResponse.Builder<BitemporalOrganizationSaveOrUpdateResponseDTO>(HttpStatus.OK.toString()).withBodyList(allOrganizations).build();
}
/**
* Saves or updates the given {@link BitemporalOrganizationSaveOrUpdateRequestDTO} object
* @param orgId if absent or equal to the undefined constant, a persist operation is done; otherwise an update operation is done
* @param toSaveOrUpdate object to be persisted or updated
* @return {@link BitemporalOrganizationSaveOrUpdateResponseDTO} saved or updated object details
*/
@PostMapping(value = "/saveOrUpdate/{orgId}" , consumes = {MediaType.APPLICATION_JSON_VALUE, MediaType.APPLICATION_XML_VALUE}, produces= {MediaType.APPLICATION_JSON_VALUE,MediaType.APPLICATION_XML_VALUE})
public RestResponse<BitemporalOrganizationSaveOrUpdateResponseDTO> saveOrUpdateOrganization(@ApiParam(required=false) @PathVariable(required=false) Optional<String> orgId, @RequestBody BitemporalOrganizationSaveOrUpdateRequestDTO toSaveOrUpdate)
{
BitemporalOrganizationSaveOrUpdateResponseDTO toReturn = null;
if(!orgId.isPresent() || Constants.UNDEFINED_STR.equalsIgnoreCase(orgId.get()))
{
toReturn = bitemporalOrganizationService.saveOrMergeOrganization(null, toSaveOrUpdate);
log.debug("Organization created with @pid: {}", toReturn.getOrgId());
}
else
{
BigDecimal bd = new BigDecimal(orgId.get());
toReturn = bitemporalOrganizationService.saveOrMergeOrganization(bd, toSaveOrUpdate);
log.debug("Organization updated with @pid: {}", toReturn.getOrgId());
}
return new RestResponse.Builder<BitemporalOrganizationSaveOrUpdateResponseDTO>(HttpStatus.OK.toString()).withBody(toReturn).build();
}
/**
* Deletes organization data matching the given parameter set in {@link BitemporalReadRequestDTO}
* @param toDelete input parameters for delete criteria
* @return {@link RestResponse} wrapping a {@link BooleanDTO} that indicates success
*/
@DeleteMapping(value = "/deleteEntities" , consumes = {MediaType.APPLICATION_JSON_VALUE, MediaType.APPLICATION_XML_VALUE}, produces= {MediaType.APPLICATION_JSON_VALUE, MediaType.APPLICATION_XML_VALUE})
public RestResponse<BooleanDTO> deleteOrganizations(@RequestBody BitemporalReadRequestDTO toDelete)
{
bitemporalOrganizationService.removeOrganizations(toDelete);
log.debug("Organization entities deleted using /bitemporal-organization/deleteEntities rest");
return new RestResponse.Builder<BooleanDTO>(HttpStatus.OK.toString()).withBody(new BooleanDTO(Boolean.TRUE)).build();
}
}
|
How does pre-dialysis education need to change? Findings from a qualitative study with staff and patients Background Pre-dialysis education (PDE) is provided to thousands of patients every year, helping them decide which renal replacement therapy (RRT) to choose. However, its effectiveness is largely unknown, with relatively little previous research into patients' views about PDE, and no research into staff views. This study reports findings relevant to PDE from a larger mixed methods study, providing insights into what staff and patients think needs to improve. Methods Semi-structured interviews in four hospitals with 96 clinical and managerial staff and 93 dialysis patients, exploring experiences of and views about PDE, and analysed using thematic framework analysis. Results Most patients found PDE helpful and staff valued its role in supporting patient decision-making. However, patients wanted to see teaching methods and materials improve and biases eliminated. Staff were less aware than patients of how informal staff-patient conversations can influence patients' treatment decision-making. Many staff felt ill equipped to talk about all treatment options in a balanced and unbiased way. Patient decision-making was found to be complex and patients' abilities to make treatment decisions were adversely affected in the pre-dialysis period by emotional distress. Conclusions Suggested improvements to teaching methods and educational materials are in line with previous studies and current clinical guidelines. All staff, irrespective of their role, need to be trained about all treatment options so that informal conversations with patients are not biased. The study argues for a more individualised approach to PDE which is more like counselling than education and would demand a higher level of skill and training for specialist PDE staff. 
The study concludes that even if these improvements are made to PDE, not all patients will benefit, because some find decision-making in the pre-dialysis period too complex or are unable to engage with education due to illness or emotional distress. It is therefore recommended that pre-dialysis treatment decisions are temporary, and that PDE is replaced with on-going RRT education which provides opportunities for personalised education and on-going review of patients' treatment choices. Emotional support to help overcome the distress of the transition to end-stage renal disease will also be essential to ensure all patients can benefit from RRT education. Background Every year in the United Kingdom, 7000 patients with end-stage chronic kidney disease (CKD stage 5) start a renal replacement therapy (RRT) for the first time. There are five main treatment options open to these patients: transplantation; haemodialysis in a nurse-led unit/hospital; haemodialysis at home; peritoneal dialysis; and conservative care. Each option has different clinical advantages and disadvantages, and different impacts on patients' lives. This makes the selection of RRT quite complex. In line with national policy, renal services promote patient choice of treatment, within a framework of clinically feasible options. Most patients therefore undertake pre-dialysis education (PDE) over a number of months prior to starting RRT, which is designed to help them make a treatment decision. Although practice varies, PDE usually includes one or more one-to-one sessions with a specialist nurse; a group meeting with talks from clinical staff and patients already on RRT; and written and audio-visual materials to take home. Although the importance of PDE was highlighted in 2010 by the European Renal Best Practice Advisory Board, research into its effectiveness is still in its infancy. Studies consistently report that one-third or more of patients do not recall receiving information about treatment options.
Patient dissatisfaction with various aspects of PDE was found in two recent large studies, one in the US and the second in 36 European countries: some patients felt not all treatment options were presented equally; others could not recall being told about options other than their current treatment. Satisfaction with education about transplantation and in-centre haemodialysis is often higher than for PD and home haemodialysis, although a more recent Australian study found no significant differences in knowledge between patients on different types of treatment. Smaller studies suggest that PDE is sub-optimal because: patients lack information or feel choices are limited; education may be provided too late, when patients are too ill to make decisions; individual healthcare professionals have a bias towards/against certain treatments; patient information is too complex or hard to understand; information may not stress that patients have choices or may not consider sufficiently patients' preferences and lifestyles; and patients may not be as involved in treatment decision-making as they would prefer. In the absence of a high-quality evidence base, national guidelines have been developed using consensus-building techniques. In 2014, UK renal experts published three patient education standards covering: the importance of education in supporting patient choice; the need to tailor education to individual needs; and the continuation of education into the RRT treatment phase. In 2015, European experts published quality standards, making recommendations about the content, timing, delivery and evaluation of PDE. To our knowledge, there have been no trials of enhanced PDE designed to address these shortcomings, although there have been two recent trials of RRT decision-aids. In 2011-12, we undertook a 4-site mixed methods study looking at the barriers and success factors for home dialysis treatment and the influence of a target on uptake rates.
Since this study found that PDE was one of three main barriers to increasing the uptake of home dialysis, we subsequently decided to report the findings about PDE in more detail. Given that the original study set out to explore home dialysis, it is possible that this could have skewed the data and findings. However, we consider that the data we present are highly likely to be relevant to all dialysis patients because: patients were asked about their treatment pathways and how choices had been made in general, rather than specifically related to home dialysis; and it was only at the very end of the patient interviews, after talking about PDE, that views about home dialysis were explored. In this article, we report a complete analysis of data related to PDE from staff and patients from the main study, exploring the following question: how effective is PDE from the perspective of patients and staff? Methods The main study used mixed methods to look at quantitative changes in home dialysis uptake rates and qualitative case studies to explore barriers and success factors for home dialysis. The setting was four hospital renal units, selected from seven West Midlands units to achieve a demographic and rural/urban mix. Semistructured one-to-one interviews were undertaken with dialysis patients and clinical and managerial staff. An intellectual framework for the design and analysis of qualitative interviews, which has been reported in detail elsewhere, was derived by mapping systematic review evidence of potential success factors onto an established theoretical model for health system change, and cross-checking this for relevance against renal service guidance. The interview topic guides consisted of a small number of semi-structured open-ended questions designed to prompt the sharing of experiences and views. For patients, the topic guide covered: how patients came to be on dialysis; experiences of pre-dialysis and dialysis pathways; and suggestions for improvement. 
For staff, the topic guide covered: current practice, using the last 2-3 patients as exemplars; how well the pre-dialysis and dialysis pathways work; how the team had been working to increase the uptake of home dialysis; and suggestions for improvement. No direct questions about PDE were asked in either staff or patient interviews. If patients/staff did not spontaneously talk about the pre-dialysis period, they were prompted with an open-ended question about how treatment decisions were made. The patient population was dialysis patients aged 18+ starting their current treatment within 12 months, excluding patients scheduled for surgery within 3 months as they were unlikely to be available or fit to be interviewed. Purposive sampling by age, sex, ethnic group and treatment type was used to achieve maximum diversity. Potential patient participants were approached by phone by renal secretaries, who sent out study information to interested patients who were subsequently contacted by the research team. Staff participants were encouraged to take part and provided with study information via e-mail from the renal clinical lead, with renal secretaries then scheduling interviews. Semi-structured qualitative telephone interviews were undertaken with 20-25 patients per site (November 2011-March 2012) until saturation was achieved. The staff population was clinical staff working with CKD stage 5 patients and managerial staff. Semi-structured qualitative face-to-face interviews were undertaken on-site with 20-30 staff per site (Table 1) (September 2011-April 2012) until saturation was achieved. Interviews lasted for 30-60 min and were undertaken in private with only the interviewer and interviewee present. Brief field notes were made, as appropriate, after each interview. The interviews were shared equally between GC, KA and KS who were all experienced qualitative health service researchers, employed by the University of Birmingham. KS is a specialist in qualitative methods. 
This information was provided to participants via the Participant Information sheet. GC and KA are female and have Ph.Ds. KS is male and has an M.Sc. None of the research team: was clinically qualified/experienced; had any prior or current relationships with the four renal teams or NHS Trusts taking part in the research; had undertaken previous research with end-stage renal patients/staff; or had personal experience or a particular personal interest in the research topic. Tables 2 and 3 summarise the characteristics of eligible and interviewed patients. The eligibility criteria were amended during fieldwork in site 1 to include patients starting treatment within the last 24 months, rather than 12 months, as there were few eligible patients in some sampling categories. No effects were observed from this change, particularly on patients' abilities to recall their treatment experiences. Of the 618 patients who had started their current dialysis treatment within the last 24 months, 101 patients (16%) were invited to interview, 8 refused and 93 were interviewed (21-25 per site). Of the 106 staff invited to interview, 10 refused and 96 were interviewed (20-30 per site). Table 1 details the roles of staff interviewed. There were no withdrawals of patients or staff from the study. All interviews were audiorecorded and were transcribed verbatim by a specialist transcription team. Transcripts were checked by researchers but not participants. The written and audiovisual PDE materials used in each site were also reviewed. Analysis Data were analysed using a form of thematic analysis, the framework method, which has been shown to be useful in conducting healthcare research with a multi-disciplinary team of researchers. This allowed the development of themes to be derived entirely from the raw data to provide rich descriptions of how patients experienced PDE and what might need to improve.
Researchers familiarized themselves with the audio-recordings and transcripts, and analysed a small number of entire transcripts line by line to generate initial codes, which were then compared, refined and agreed on as a team. An analytic framework was developed from the initial group of transcripts and then refined as the full set of transcripts was coded onto a spreadsheet using a matrix of codes and cases. Coding was shared equally between GC, KA and KS with 10% of transcripts coded by two researchers and discrepancies resolved at team meetings. The resulting themes were refined through team discussion. Separate analyses of staff and patient transcripts at each site were then triangulated. Discussion of findings with clinical staff at site-specific feedback meetings led to further refinement, followed by triangulation and synthesis across sites to identify overall study findings. Research team meetings provided the forum for discussing reflexivity and considering how to minimise the influence of individual researchers on the research. For the analysis presented in this article, transcripts were subsequently re-read, checking that a complete data set on PDE had been extracted: to include all direct mentions of PDE and treatment decision-making, and more general comments about the pre-dialysis period; and to exclude any data linked to or arising from prompts about home dialysis. No data were identified for exclusion as a result of this checking. The themes identified for PDE were not specified in advance, but were derived entirely from the data. Results Formal PDE in all four sites included: one or more one-to-one sessions with a specialist nurse; a group information session, including talks from patients on RRT; and written materials/DVDs which patients took home. In several sites, specialist nurses undertook home visits where they discussed treatment options with patients. Doctors also discussed treatment options with patients during out-patient appointments.
Most staff made favourable comments about PDE and valued the role of specialist nursing staff in educating and supporting patients' treatment decisions. Most patients recalled taking up part or all of the formal PDE on offer and reported finding it helpful overall. Three themes related to improving PDE were identified (Table 4): sub-optimal education; different perspectives between patients and staff; and the influence of patient experience. These themes are explored below, using quotes from patients and staff to illustrate them. Sub-optimal education Restricted range of teaching materials and methods Although some patients were critical of the volume and types of information about treatment options they had been given by staff and the methods that were used in PDE, there were no concerns about these issues from staff. Although patients need information to make treatment decisions, some felt they had been unable to use it because the volume and complexity of information meant they were unable to understand or assimilate it: "You get all this information and that's the information overload bit. But you're not really taking it in." Patient 15, site 4 (PD, female, white, aged 18-39). From the staff perspective, the deliberate reliance on written materials was designed so that patients had information to take home and consider over time. However, it seemed that some patients were unable to take advantage of this positive intention. This was particularly the case for one patient, whose dyslexia had not been catered for, leaving her with written educational materials that demanded a high level of reading skills. Although this was only one patient, it is possible that other patients' dissatisfaction with the written materials could be explained partly by difficulties with reading.
Additional limitations in the range of materials were noted: a lack of information for patients whose first language was not English; a paucity of computer-based materials; and low confidence amongst staff in signposting patients to reputable and relevant websites. Another perspective on teaching materials came from patients who thought that they were not 'real' enough, and that this explained why they had struggled to apply the information to their own lives. Seeing different treatments being undertaken by real patients, being able to see and touch the equipment, and being able to talk to patients already on treatment about what it felt like and how they experienced it, were all suggested as ways of improving the education: "And actually see something, you know, like see a haemodialysis machine, a PD, rather than just its different seeing it on the page than actually seeing it in real life." Patient 5, site 1 (PS, female, white, aged 40-64). "I was given lots of stuff but really I needed to go out and see a couple of people to see how it suited them - it doesn't really matter what the nurses say, its how it affects people really, that's why I wanted to go and see them." Patient 9, site 4 (HD, male, white, aged 40-64). This suggests that patients would benefit from the use of a wider range of teaching methods, including interactive methods.

Bias in the presentation of information and treatment options

Whilst some patients thought that all treatment options were presented fairly and with equal emphasis, others felt not all options had been presented to them and that they had only found out about viable alternatives once they were on dialysis. "Erm, yeah there was no preference from the hospital side I don't think I was never pressured to do either." Patient 4, site 3 (PD, male, white, aged 40-64). "She didn't really give me any choice, no. She just recommended dialysis and that was it." Patient 18, site 3 (HD, male, white, 65+).
Following discussion of this issue with clinicians, analysis showed that these patients were evenly distributed across treatment types and did not include a disproportionate number of acute kidney injury patients, for whom choices could have been restricted. There was also a view from patients that staff were overly positive about dialysis, which had not prepared patients well for side-effects or the impacts of treatment on day-to-day life. Staff were also aware of the potential for bias, and that their position as a trusted health professional could potentially lead to them having undue influence over patients. However, all staff groups thought that the first conversation that doctors have with patients about treatment options is crucial in influencing treatment choice. If doctors appear to favour a particular treatment, no amount of PDE can counteract the influence this initial conversation has over patients' eventual choice: "I do think patients do get swayed, particularly by consultants, because they think they know best. I think it's the initial conversation that they have, you know, which I can only presume will be a consultant initial conversation...but if they've had that underpinning by the consultant first, it's then very difficult." Renal ward sister, site 4.

Different perspectives between patients and staff

The importance of informal education

Staff tended to equate PDE with the work undertaken by specialist pre-dialysis nurses, whilst recognising that patients' conversations with doctors also influence treatment decision-making. These opportunities for PDE were similarly recognised and valued by patients. However, patients were equally gathering information and views about treatment options through their informal contacts with staff, for example, chatting to staff whilst waiting for appointments or during in-patient stays.
This emerged as significant because some of these patients talked about these groups of non-specialist staff as being less knowledgeable about the full range of treatment options and often unable to answer patients' questions or signpost them to appropriate sources of information and advice. Only a handful of staff thought likewise: "I think a lot of effective patient education is delivered through everyday conversation and chat." Pre-dialysis nurse, site 4. This same member of staff recognised that this means that all staff, wherever they work, should be equipped to deal with casual questions from patients: "You know somebody might ask a question. Well if staff haven't got the knowledge then that conversation isn't going to go any further." Pre-dialysis nurse, site 4. However, nurses working on the wards and in haemodialysis units reflected that they felt ill-equipped to talk with patients about all treatment options due to a lack of training or experience of the full range of treatment options. For some, this seemed to be explained by the trend towards specialisation, and to remedy this, several sites had recently introduced staff rotations: "They've tended to be employed and they've stuck where they are so when you get newly qualifieds just going straight into haemo. and not even done any ward work, they can't see the whole picture then and can't advise patients on what its like to go on PD because they've not seen it." Senior nurse, site 2. "The staff will be starting to rotate the benefit is that you end up with a renal nurse who knows all about everything who even if they decided to work in haemo. permanently, can at least talk to the patient: 'well this is what PD is about and this is what transplantation is about'." Home therapies nurse, site 2.
It was also apparent that some patients continued to consider treatment options well after they had started dialysis, and carried on gathering information and views about treatment options, some with a view to switching treatment. This highlighted the importance of all staff, irrespective of their role, being able to present all options neutrally and answer basic questions about all types of treatment.

Approaches to treatment decision-making

A second issue, about which staff and patients appeared to have very different perspectives, was how they viewed treatment decision-making. Nearly all staff described a rational fact-based approach to treatment decision-making, where patients would use detailed information to weigh up treatment options: "You must make sure as a doctor or a service that you've given the patients all the information, given them enough time to come to terms with what's required, help them to, support them with their choices." Consultant, site 2. This rational decision-making approach was also reflected in the written PDE materials provided in all sites. It contrasted with the patients, who mostly talked about a more personalised approach of thinking about their own lives and how different treatment options might work for them. For some, there appeared to be one main reason for their choice of treatment, which was often non-clinical and highly relevant to each individual's life: "I made up my mind on doing the PD one because it still allowed me to carry on working." Patient 7, site 1 (PD, male, Indian, aged 18-39). "If I've got the choice I think I would still prefer hospital for the simple reason that there's qualified staff on the scene. If anything goes wrong you've got qualified staff there and with you." Patient 18, site 3 (HD, male, white, 65+). For others, decisions seemed to be influenced primarily by a fear of the unknown or anxiety about making a decision, rather than the rational processes described by staff.
Only one member of staff seemed to fully appreciate the difficulty patients faced when making a choice: "Asking people to make a choice I think is a mixed blessing, 'cos I think it causes extra anxiety and stress to patients when they've got to be making the decision about what to do. And I think some people can't, they just don't feel they can make the decision." Pre-dialysis nurse, site 4.

The influence of patient experience

How other patients can influence decision-making

The influence of other patients on decision-making had several aspects. Firstly, some patients valued having opportunities to talk to other patients, particularly those who were already on dialysis, because they were able to portray what treatment is really like: "Speaking directly to someone who has had it, so you're getting all the unfiltered information...it was useful to be able to speak to a person who had gone through that to give us, you know, warts and all what's going to happen, so that was good." Patient 15, site 4 (PD, female, white, aged 18-39). "I mean, patients can be talked to by professionals, nurses or doctors and what have you, but I think they've got to, you know, another patient, a fellow patient just has that more credibility." Patient 2, site 2 (HHD, male, white, aged 65+). Secondly, some patients thought this helped to balance any biases from staff: "I think the nurses, although they're very good there, they kind of just look at it from one side don't they? If you talk to a patient he's going to tell you how it's sort of happened to him." Patient 4, site 1 (HD, Male, Indian, aged 40-64). Some staff also recognised that pre-dialysis patients can find it very helpful to talk to patients already on RRT: "I suppose successfully, the patients the way they take the information, often the most potent thing is speaking to the patient next door or in the waiting room and they say 'oh you can't have that one I was awful on that'." Consultant, site 2.
However, other staff were more cautious and actively discouraged patient contact, because some patients may have atypical experiences or be biased against certain treatments: "Patients are swayed far more by what they hear from other patients than all the information we give them." PD sister, site 4. Interestingly, none of the patients had been offered pre-dialysis opportunities to talk to other patients, although this had recently been introduced in one site, and a second site would put patients in touch with each other on request.

The impact of distress

The impact of distress on decision-making emerged as a strong theme across all patient groups and sites. Patients described at length the traumatic and frightening nature of the transition to end-stage renal failure. It seemed likely that this might help to explain why so many patients said they found treatment decision-making very difficult, including those patients who had known for years they would need RRT and who might therefore be expected to be well prepared for treatment decision-making: "they were explaining it to me but it just didn't go through me head that I was going to get ill, like. I mean they were very, very nice but I was just too scared." Patient 4, site 2 (HD, female, Indian, 40-64). Some patients were quite critical of the staffs' focus on the factual and clinical aspects of treatment: "So they focus totally on the practical side of things. You're going to die if you don't do it. It's all very black and white, all very aggressive and you know perhaps that works for some people, it certainly doesn't work for me... a huge mental side to it, well I don't know what you'd call it, a psychological element they probably don't quite press." Patient 4, site 3 (PD, male, white, 40-64). However, very few staff appeared to appreciate the potential adverse impact of psychological distress on patients' ability to make treatment decisions.
Just three staff raised this issue in their interviews: "So quite often people are shocked, you know, they just kind of don't know what to think really about anything, and even when they, even if they've had all the information, they start with us, they still need a lot of support, to kind of make the right choices really. I kind of equate it to like the grieving really, they've lost their kidneys and its almost like a death for them they kind of go through all those emotions that come with bereavement really." Dialysis unit nurse manager, site 4. In contrast, one-third of the patients talked about the distress of going onto end-stage treatments, and some had only become open to certain treatment options once they had started on dialysis. Although some staff were aware of this, once patients had started on dialysis there were no additional routine education opportunities for patients in any of the sites, nor routine reviews of treatment choice, which could support patients to revisit their treatment choice: "People might start on one treatment and then six months down the line feel very differently the education is, should be on-going, rather than you have it before you start and then you never have any more. people do change their minds and gain confidence as they get used to a situation" Pre-dialysis nurse, site 4.

Discussion

Although the study found that patients' and staffs' views about PDE were largely favourable, a number of suggestions for improvement and optimisation emerged. The literature supports the findings that some patients thought teaching materials and the way they were used could be biased, and that patients wanted a wider range of teaching methods to be used, particularly active learning methods and seeing dialysis treatments in action.
The diversity in patients' preferences for different teaching methods suggests it would be appropriate for a patient's preferred learning style to be assessed ahead of starting PDE in line with the principles of adult learning. As in other studies, patients were provided with lots of information, and some complained of information overload. They wanted less detailed factual information with more time spent on helping patients to apply information to their own lives, which suggests that PDE may need to be re-balanced away from a reliance on information-giving. Recent initiatives to develop and trial decision support tools may go some way to helping with this. Likewise, opportunities for patients to talk to other patients already on RRT, could help them to envisage what life on dialysis is really like, as noted in previous studies, and help to counter the perception that staff may be biased or overly positive about treatments. However, this would need to be implemented with care, given evidence that patients' stories can bias other patients' treatment choices, irrespective of clinical advice. Whilst some of these improvements to PDE could be relatively easy to implement, the study identified two additional themes which potentially have more fundamental implications for PDE: differences in perspective between staff and patients; and the influence of patient experience. Several important differences in perspective emerged from the data. Our study suggests that staff and patients may not conceptualise PDE in the same way, with staff focussing on formal PDE sessions and discussions during out-patient appointments, whilst patients appear to place additional value on more informal education, arising from conversations with staff and other patients. 
However, for this to contribute positively to patients' treatment decision-making, staff who are not PDE specialists, from across the spectrum of renal services, would need to be informed enough to chat with patients about the full range of treatment options. This was not the case in this study. The small amount of relevant literature suggests this may be hard to achieve, as one study has found that renal nurses' attitudes to RRT options are strongly associated with their own area of expertise and experience, whilst a second study has recommended that all staff who come into contact with patients need experience of all treatment types in order to talk confidently with patients. The second notable difference in perspective between staff and patients was how treatment decision-making was conceptualised. Whilst staff thought patients should or do make decisions using a rational fact-based approach, patients mostly described a process of thinking about the possible impact of dialysis on everyday life and giving one highly individual reason for choosing their treatment. Although some previous studies have stressed the importance of renal patients using information to weigh options, many other studies suggest that RRT treatment decision-making is not as simple as this, and that patients make decisions which accord with the context of their lives, values and identity [25, 27]. The choice of RRT is a complex, serious and time-pressured decision for patients. Studies have found that when faced with these kinds of decisions, as in our study, patients tend to use heuristic or intuitive decision-making strategies, rather than more systematic strategies where information is used to weigh benefits and risks. We also found that many patients characterised treatment decision-making as very difficult or impossible.
This could be explained by previous studies that found that patients may feel too ill in the pre-dialysis period to make a decision, possibly reflecting reduced cognitive functioning associated with reducing kidney function. Our patients' reports of distress or trauma in the transition to RRT, whilst not new and reported in previous studies, may also help to explain these reported difficulties with decision-making. Cancer studies have found that emotional distress can impede patients' understanding of information, whilst the process of PDE itself may also contribute by adding emotional distress. In addition to this aspect of patient experience, our finding that patients valued opportunities to talk to other patients about their treatment experiences and did this informally, is in line with recent studies. Taken together, these findings suggest that PDE undertaken in the pre-dialysis period may not be effective for some patients, and the timing of PDE may need to extend beyond the pre-dialysis period. This may be appropriate for patients who are highly distressed in the pre-dialysis period, or patients who become open to other treatments only once they have themselves started treatment. Although the continuation of education beyond the pre-dialysis period is also supported by systematic review evidence and clinical guidelines, this would be a significant change in practice, as none of the study sites provided on-going education or undertook formal treatment reviews as part of the RRT pathway.

Conclusions

Our findings have highlighted a number of important issues for PDE. The finding that patients want improvements to teaching methods and materials is not new, and demonstrates that PDE may still have some way to go in meeting patients' expectations, despite these issues having been highlighted for 10 years or more.
This would involve specialist staff having access to a more diverse range of educational materials and using teaching methods which suit each patient's learning style. Whilst these improvements would be relatively easy to implement, we also conclude that the approach to PDE needs to change. A much more individualised approach is required which takes account of the wide variation in patients' motivation and interest in making treatment choices. Staff would need to help patients apply information to their own lives, taking account of living circumstances, values and priorities, and consider how psychosocial barriers to preferred treatments might be overcome. This is more akin to counselling than education and would demand a higher level of skill and training for specialist PDE staff. In addition to these improvements to formal PDE, we also conclude that renal units need to recognise that informal education takes place through casual conversations between staff and patients. We therefore recommend that all renal staff should be trained about all treatment options, irrespective of their role in PDE, so that they are more in tune with the complexities and difficulties patients face when considering treatment options. All staff would then also be able to handle patients' informal queries in an informed and unbiased way. Even if the above improvements are made to PDE, we conclude that significant proportions of patients will still not benefit from it. If in the pre-dialysis period, significant numbers of patients find treatment information too complex to process, find decision-making difficult, feel too ill or too distressed to make decisions, and if some patients become more open to some treatment options only once they are on RRT, then education must continue into the RRT treatment phase as a routine part of the pathway.
We also suggest that decisions made in the pre-dialysis period may not be optimal for significant numbers of patients and should therefore be considered temporary, with reviews built into the pathway so that there are structured opportunities for patients to revisit their treatment choices. We conclude that the phrase 'PDE' is a misnomer and argue instead for referring to on-going RRT education which starts in the pre-dialysis period and continues through into dialysis treatment. Finally, we argue for the provision of emotional support both pre-dialysis and in the first year, once RRT has begun. This could be incorporated into education, which would also need to take account of psychosocial barriers to treatment and coping strategies. This could help patients to make decisions that are best for them in the medium-term rather than in response to the very real distress they may experience as they approach the transition to RRT.

Strengths and limitations

The inclusion of four study sites, which varied in geographical location and patient demography, was a strength. The relatively large interview sample sizes lend weight to the findings, alongside the purposive patient sampling, which captured diverse patient experiences. The main limitation is that we did not set out to study PDE as a stand-alone topic. Had we done so, a mixed methods study would have been preferable, so that we could explore findings qualitatively and quantitatively. Another limitation is that sites may not be typical because they were working towards a target for home dialysis uptake. This had led to scrutiny of all aspects of the pathway, including PDE, and it might therefore be expected that PDE was more advanced in these sites compared with the rest of the country. However, the finding that improvements to PDE were still required suggests that there are enduring issues which are likely to be relevant to renal units elsewhere.
Christopher Stowell
Early life and education
Born in New York City, he is the son of Kent Stowell and Francia Russell, who were dancers with New York City Ballet. At the age of four he moved to Germany with his parents, who danced with the Bavarian State Ballet in Munich and then became the artistic directors of the Frankfurt Ballet. He moved back to the United States in 1977, when his parents became the founding artistic directors of the Pacific Northwest Ballet. Christopher trained with the Pacific Northwest Ballet School in Seattle and the School of American Ballet in New York.
Career
Stowell joined the San Francisco Ballet (SFB) in 1985, where he became a principal dancer in 1990. He performed in ballets such as Romeo and Juliet, Swan Lake, The Sleeping Beauty, and Othello. Stowell appeared in most of SFB's productions of George Balanchine's ballets and danced roles created for him by choreographers such as Mark Morris, William Forsythe, James Kudelka, and SFB's artistic director Helgi Tómasson. Other venues he performed at include Lincoln Center in New York, the Kennedy Center in Washington D.C., the Bolshoi Theatre in Moscow, and the Paris Opera. He retired from the San Francisco Ballet in April 2001.
He then worked as a teacher and coach in San Francisco, New York, Europe, and Japan. He choreographed new ballets for the San Francisco Ballet, the Pacific Northwest Ballet, the Pennsylvania Ballet, and the Diablo Ballet.
Stowell joined the Oregon Ballet Theatre as artistic director in July 2003. His additions to the company's repertoire include ballets by Balanchine, Jerome Robbins, Christopher Wheeldon, Paul Taylor, Lar Lubovitch, Frederick Ashton, and Helgi Tómasson. Stowell has also commissioned works from choreographers such as Trey McIntyre, James Kudelka, Julia Adam and Kent Stowell. By the end of the 2010-2011 season, Stowell had added 50 new works, 20 of which were world premieres.
In 2004, Stowell was named one of "25 to Watch" by Dance Magazine. |
The County Commission has approved revisions in the 10-year plan for spending money generated by Brevard County's half-percent sales tax targeted for Indian River Lagoon restoration.
Under the plan commissioners unanimously approved last week, at least $28.1 million more than in the previous version of the plan will go toward projects to convert properties from septic tanks to sewers. Meanwhile, less money will go toward projects to remove muck from the Indian River Lagoon.
Over the past decade, a majority of the lagoon's seagrass — considered the estuary's most ecologically important habitat — died off in the wake of severe algae blooms. The algae is fueled by decades of excess nitrogen and phosphorus from fertilizer, septic tank and leaky sewer systems.
When county commissioners considered approving the 2019 revisions of the Save Our Indian River Lagoon Project Plan in February, they decided to approve a maximum $125 million of the $225 million that was designated for muck-removal and related projects. They delayed action on the other $100 million.
Commissioners asked the advisory Save Our Indian River Lagoon Project Plan Citizen Oversight Committee that reviews sales tax spending issues to reconsider how to spend the other $100 million. The oversight committee met on March 15 to do that review.
Under the revised plan, the remaining $100 million is allocated as follows:
- $28.1 million will fund nine additional septic-to-sewer projects.
- $25.2 million will fund muck removal and related water-treatment projects.
- $46.7 million will remain unallocated until the 2020 plan update, pending the receipt of additional data and additional project proposals.
Among the muck-removal projects that will be stopped are ones in the Pineda Causeway area of the Indian River and the west end of the NASA Causeway over the Indian River, partly due to the higher costs related to the amount of nitrogen the projects would remove from the lagoon.
The net result of the County Commission's action is that the amount set aside for muck-removal in the 10-year sales tax plan decreases by $74.8 million — from $225 million to $150.2 million.
County Commission Vice Chair Bryan Lober introduced the revised lagoon project plan at the April 9 County Commission meeting. Lober also is a member of the Indian River Lagoon Council board of directors.
Some commissioners have contended that the Save Our Indian River Lagoon Project Plan designated too much money for muck removal and not enough for infrastructure projects to prevent the discharge of sewage into the lagoon.
During public comment prior to the County Commission's unanimous vote to approve the revised lagoon sales tax spending plan, five Brevard residents asked the board to approve the revised plan.
George Rosenfield of Suntree urged commissioners to let Natural Resources Management Department Director Virginia Barker and her staff "have their reins, and do continue with their plan."
Rosenfield said the environmental problem related to the lagoon "is not political, but scientific."
Brevard County voters in 2016 voted to implement the lagoon sales tax for a 10-year period, starting in January 2017. The tax is projected to collect a total of $486 million during those 10 years.
From January 2017 through January 2019 — the latest-available figure — a total of $93.95 million has been collected through the lagoon sales tax.
In a related discussion, Lober said he would like to see the Citizen Oversight Committee consider approving use of lagoon sales tax money for a $10.71 million project to remove and replace 3.3 miles of force main on the west side of North Riverside Drive from Eau Gallie Boulevard to Oakland Avenue.
The Florida Department of Environmental Protection has approved the project for a low-interest loan through the State Revolving Fund Program. But Lober said he would like to see whether the money from that loan could be redirected to other, similar pipe projects that would help guard against discharges into the lagoon.
In a memo to county commissioners, Brevard County Utility Services Director Edward Fontanin said the North Riverside Drive project "has been identified as a priority infrastructure replacement project in the Utility Services Capital Improvement Program. The current force main has experienced several breaks along its length that have resulted in sewage spills that directly or indirectly impact the Indian River Lagoon."
In November 2016, the Florida Department of Environmental Protection issued a consent order related to these sewage spills, requiring the county to make the necessary repairs and improvements to eliminate potential discharges in the future.
The removal and replacement of one mile of force main north of Eau Gallie Boulevard along South Patrick Drive was completed in September. Fontanin said the current 3.3-mile project south of Eau Gallie Boulevard will be completed in July 2020, which would complete the requirements of the DEP consent order prior to its December 2020 deadline.
Support local journalism: If you would like to read more government and environmental news, and you are not a subscriber, please consider subscribing. For details, go to floridatoday.com/subscribe.
// stream.cpp: stream benchmarks of vector operations
//
// Copyright (C) 2017-2021 Stillwater Supercomputing, Inc.
//
// This file is part of the universal number project, which is released under an MIT Open Source license.
#include <cstdint>
#include <iostream>
#include <iomanip>
#include <vector>
#include <chrono>
#include <cmath>
// Configure the fixpnt template environment
// first: enable general or specialized fixed-point configurations
#define FIXPNT_FAST_SPECIALIZATION
// second: enable/disable fixpnt arithmetic exceptions
#define FIXPNT_THROW_ARITHMETIC_EXCEPTION 1
#include <universal/number/fixpnt/fixpnt.hpp>
// Configure the cfloat template environment
// first: enable general or specialized cfloat configurations
#define CFLOAT_FAST_SPECIALIZATION
// second: enable/disable cfloat arithmetic exceptions
#define CFLOAT_THROW_ARITHMETIC_EXCEPTION 1
#include <universal/number/cfloat/cfloat.hpp>
// Configure the posit template environment
// first: enable general or specialized posit configurations
//#define POSIT_FAST_SPECIALIZATION
// second: enable/disable posit arithmetic exceptions
#define POSIT_THROW_ARITHMETIC_EXCEPTION 1
#include <universal/number/posit/posit.hpp>
#include <universal/verification/performance_runner.hpp>
#include <universal/verification/test_status.hpp>
#include <universal/verification/test_reporters.hpp>
template<typename Scalar>
void Copy(std::vector<Scalar>& c, const std::vector<Scalar>& a, size_t start, size_t end) {
for (size_t i = start; i < end; ++i) {
c[i] = a[i];
}
}
template<typename Scalar>
void Sum(std::vector<Scalar>& c, const std::vector<Scalar>& a, const std::vector<Scalar>& b, size_t start, size_t end) {
for (size_t i = start; i < end; ++i) {
c[i] = a[i] + b[i];
}
}
template<typename Scalar>
void Scale(std::vector<Scalar>& c, const Scalar& a, const std::vector<Scalar>& b, size_t start, size_t end) {
for (size_t i = start; i < end; ++i) {
c[i] = a * b[i];
}
}
template<typename Scalar>
void Triad(std::vector<Scalar>& c, const std::vector<Scalar>& a, const std::vector<Scalar>& b, size_t start, size_t end) {
constexpr double pi = 3.14159265358979323846;
Scalar alpha(pi);
for (size_t i = start; i < end; ++i) {
c[i] = a[i] + alpha*b[i];
}
}
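// The kernels above are the classic STREAM operations (Copy, Sum/Add, Scale,
// Triad). The Sweep driver below reports throughput in ops/sec; STREAM results
// are conventionally also quoted as effective memory bandwidth. A small helper
// for that conversion is sketched here as an optional addition (not part of
// the original benchmark): Copy touches 2 arrays per element, while Sum,
// Scale, and Triad touch 3 (Scale touches 2).
template<typename Scalar>
double EffectiveBandwidthGBps(size_t nrElements, size_t nrArraysTouched, double elapsedSeconds) {
	// bytes moved = elements processed * arrays touched per element * bytes per element
	double bytesMoved = double(nrElements) * double(nrArraysTouched) * double(sizeof(Scalar));
	return bytesMoved / (elapsedSeconds * 1.0e9);   // decimal GB/sec
}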
void ClearCache() {
constexpr size_t SIZE = (1ull << 27); // 128MB element array of 8byte doubles = 1GB data set
std::vector<double> a(SIZE);
for (size_t i = 0; i < SIZE; ++i) {
a[i] = INFINITY;
}
}
template<typename Scalar>
void Reset(std::vector<Scalar>& v, Scalar resetValue) {
for (size_t i = 0; i < v.size(); ++i) {
v[i] = resetValue;
}
}
template<typename Scalar>
void Sweep(size_t startSample = 13, size_t endSample = 28) {
using namespace std;
using namespace std::chrono;
constexpr double pi = 3.14159265358979323846;
Scalar alpha(pi);
// create storage
size_t leftShift = endSample;
size_t SIZE = (1ull << leftShift);
std::vector<Scalar> a(SIZE), b(SIZE), c(SIZE);
for (size_t i = 0; i < SIZE; ++i) {
a[i] = Scalar(1.0f);
b[i] = Scalar(0.5f);
c[i] = Scalar(0.0f);
}
// benchmark different vector sizes
for (size_t i = startSample; i < endSample; ++i) {
size_t start = 0;
size_t stop = (1ull << i);
Reset(c, Scalar(0));
ClearCache();
steady_clock::time_point begin = steady_clock::now();
Copy(c, a, start, stop);
steady_clock::time_point end = steady_clock::now();
duration<double> time_span = duration_cast<duration<double>> (end - begin);
double elapsed_time = time_span.count();
size_t NR_OPS = (stop - start);
cout << setw(10) << NR_OPS << " copies per " << setw(15) << elapsed_time << "sec -> " << toPowerOfTen(double(NR_OPS) / elapsed_time) << "ops/sec" << endl;
}
for (size_t i = startSample; i < endSample; ++i) {
size_t start = 0;
size_t stop = (1ull << i);
Reset(c, Scalar(0));
ClearCache();
steady_clock::time_point begin = steady_clock::now();
Sum(c, a, b, start, stop);
steady_clock::time_point end = steady_clock::now();
duration<double> time_span = duration_cast<duration<double>> (end - begin);
double elapsed_time = time_span.count();
size_t NR_OPS = (stop - start);
cout << setw(10) << NR_OPS << " adds per " << setw(15) << elapsed_time << "sec -> " << toPowerOfTen(double(NR_OPS) / elapsed_time) << "ops/sec" << endl;
}
for (size_t i = startSample; i < endSample; ++i) {
size_t start = 0;
size_t stop = (1ull << i);
Reset(c, Scalar(0));
ClearCache();
steady_clock::time_point begin = steady_clock::now();
Scale(c, alpha, b, start, stop);
steady_clock::time_point end = steady_clock::now();
duration<double> time_span = duration_cast<duration<double>> (end - begin);
double elapsed_time = time_span.count();
size_t NR_OPS = (stop - start);
cout << setw(10) << NR_OPS << " muls per " << setw(15) << elapsed_time << "sec -> " << toPowerOfTen(double(NR_OPS) / elapsed_time) << "ops/sec" << endl;
}
for (size_t i = startSample; i < endSample; ++i) {
size_t start = 0;
size_t stop = (1ull << i);
Reset(c, Scalar(0));
ClearCache();
steady_clock::time_point begin = steady_clock::now();
Triad(c, a, b, start, stop);
steady_clock::time_point end = steady_clock::now();
duration<double> time_span = duration_cast<duration<double>> (end - begin);
double elapsed_time = time_span.count();
size_t NR_OPS = (stop - start);
cout << setw(10) << NR_OPS << " triads per " << setw(15) << elapsed_time << "sec -> " << toPowerOfTen(double(NR_OPS) / elapsed_time) << "ops/sec" << endl;
}
}
// Regression testing guards: typically set by the cmake configuration, but MANUAL_TESTING is an override
#define MANUAL_TESTING 0
// REGRESSION_LEVEL_OVERRIDE is set by the cmake file to drive a specific regression intensity
// It is the responsibility of the regression test to organize the tests in a quartile progression.
//#undef REGRESSION_LEVEL_OVERRIDE
#ifndef REGRESSION_LEVEL_OVERRIDE
#undef REGRESSION_LEVEL_1
#undef REGRESSION_LEVEL_2
#undef REGRESSION_LEVEL_3
#undef REGRESSION_LEVEL_4
#define REGRESSION_LEVEL_1 1
#define REGRESSION_LEVEL_2 1
#define REGRESSION_LEVEL_3 0
#define REGRESSION_LEVEL_4 0
#endif
int main()
try {
using namespace sw::universal;
std::string test_suite = "STREAM performance measurement";
std::string test_tag = "stream";
//bool reportTestCases = true;
int nrOfFailedTestCases = 0;
std::cout << test_suite << '\n';
#if MANUAL_TESTING
Sweep<float>();
Sweep < fixpnt<8, 4> >();
ReportTestSuiteResults(test_suite, nrOfFailedTestCases);
return EXIT_SUCCESS; // ignore errors
#else
#if REGRESSION_LEVEL_1
Sweep<float>(10, 15);
#endif
#if REGRESSION_LEVEL_2
Sweep<float>(12, 18);
#endif
#if REGRESSION_LEVEL_3
Sweep<float>(13, 22);
#endif
#if REGRESSION_LEVEL_4
Sweep<float>(10, 28);
#endif
ReportTestSuiteResults(test_suite, nrOfFailedTestCases);
return (nrOfFailedTestCases > 0 ? EXIT_FAILURE : EXIT_SUCCESS);
#endif // MANUAL_TESTING
}
catch (char const* msg) {
std::cerr << "Caught ad-hoc exception: " << msg << std::endl;
return EXIT_FAILURE;
}
catch (const sw::universal::universal_arithmetic_exception& err) {
std::cerr << "Caught unexpected universal arithmetic exception: " << err.what() << std::endl;
return EXIT_FAILURE;
}
catch (const sw::universal::universal_internal_exception& err) {
std::cerr << "Caught unexpected universal internal exception: " << err.what() << std::endl;
return EXIT_FAILURE;
}
catch (std::runtime_error& err) {
std::cerr << "Caught unexpected runtime error: " << err.what() << std::endl;
return EXIT_FAILURE;
}
catch (...) {
std::cerr << "Caught unknown exception" << std::endl;
return EXIT_FAILURE;
}
|
Knowledge Production Patterns of Environmental Sociology: A Bibliometric Analysis of Top Journals of Sociology
1. Research Scholar, Department of Sociology, Quaid-i-Azam University Islamabad, Pakistan
2. Assistant Professor, Department of Sociology, Quaid-i-Azam University Islamabad, Pakistan
3. Assistant Professor, Department of Anthropology, Quaid-i-Azam University Islamabad, Pakistan
PAPER INFO: Received: April 22, 2020; Accepted: June 15, 2020; Online: June 30, 2020
ABSTRACT: Sociology's inability to produce knowledge symmetrically across its sub-disciplines has often been under investigation to highlight the academic marginalization of important social issues. This study investigates how the top journals of Sociology have treated the issue of the environment since the 1990s. The published content of six high impact factor journals of Sociology was bibliometrically analyzed for the authorship patterns and the methodological, thematic and geographic focus of environmental issues. Analyzing a total of 203 articles focusing on environmental issues, we found a perpetual increase in environmental articles over time, a geographic focus on European and United States environmental issues, and a methodological divide between qualitative and quantitative methods. The study concludes that environmental Sociology, despite being an important sub-discipline of Sociology, has failed to attract a high proportion of publications in top Sociology journals, which may undermine its academic worth. "many institutions evaluate sociologists' academic performance, and to derive knowledge production trends" (Krogman & Darlington 1996: 44). In other words, credible journal articles serve as a proxy to determine what is important and valuable in the specialized scientific domain and its subfields.
Literature Review
In this section we explore what the gender composition in Sociology has been in the past, what the methodological understandings were, the productivity of the journals, and the geographic concentrations of knowledge production. It is important to note that some of the findings of this paper are unique. For example, the geographic focus of environmental knowledge has never been the topic of any study or sociological analysis that could have enlightened us about this particular area of concern.
Productivity of Journals
There is a dearth of research on the productivity of generalist Sociology journals for environmental articles. On the productivity of environmental articles specifically, the study of Krogman and Darlington reveals that environmental articles were given little space in mainstream Sociology journals, but that this trend changed between 1982 and 1992, when they observed an increase in environmental articles. In the top-tier journals (ASR, AJS and SF) they found only 14 environmental articles out of a total of 3,673 published articles. In the lower tier (Social Problems, Sociological Quarterly, Sociological Perspectives, Rural Sociology, Sociological Spectrum and Sociological Inquiry), however, they found 151 environmental articles out of 4,652 published in total. This data covers the period from 1969 to 1996. Summing both the top and lower tiers, there were a total of 165 environmental articles out of 8,325 in all nine Sociology journals. This tells us that, while the total number of published articles was huge, environmental articles were very few in that time. In the four journals (AJS, ASR, SP, SF) that are also part of our study, they found a total of 41 environmental articles out of 4,628 published. Apart from these findings, no previous study has considered the productivity of Sociology journals in the field of environmental Sociology.
Gender Composition in Authorship
It is well established that scholarly publication is critical to women's career success and also serves as a source of reputation in academic circles. Studies over the past decades have provided a growing literature on the topic of gender and its relationship with knowledge production. Feminist social scientists have explored the area of gender and publication through various dimensions. In this context, previous explanations reveal that lower representation in authorship may be due to multiple factors: for example, the rewards to women and men in scholarly publication, gender differences in publication rates, and gender effects on the publication process in Sociology (Grant and Ward 1991). In addition, women's domestic and professional engagements (Long 1990; Austin & Davis 1985; Cole & Zuckerman 1987) and their marital life have disadvantageous effects on women's publications, although some scholars maintain that marital life has no effect if job status is stable. Apart from that, gender politics influence publication and hence affect visibility within disciplines; reasons include fewer resources, less access to participation, fewer rewards, and fewer chances to join research teams in academia. It is surprising how little importance has been given to such a concerning area which, if it remains unexplored, will make it difficult to mainstream half of our population (Grant & Ward 1991). The underrepresentation of women authors has also been linked to specific journals. For example, Vanderstraeten examined the scientific communication of Sociology journals and their publication practices in the Netherlands and Belgium, exploring the historical transition of some local Sociology journals in Belgium along with their publication practices.
He illustrated the participation of women authors in three of the main journals of Sociology: Tijdschrift voor Sociologie (Journal for Sociology, TvS), Sociologische Gids (Sociological Compass, SG) and Mens & Maatschappij (People & Society, M&M). He reports that female authorship in Sociologische Gids (SG) was 11 percent, at a time, up to the 1960s, when women hardly had access to publication. In the last two decades, however, the share of women's publications fluctuated from 7 to 25 percent. In the other two journals (M&M and TvS), until the 1980s almost no publications by women can be seen, but with the turn of the 21st century the share of women's publications rose to 30 percent in both journals. Most importantly, he elucidated that there is no available list of authors in these journals, so we cannot assume this reflects discrimination against women in publication. In the same manner, Rotchford, Willis and McNamee in their study found a total of 1,082 articles with 1,672 authors in four leading core journals of Sociology over the period 1960-1985: American Journal of Sociology, American Sociological Review, Social Forces and Social Problems. They found that women authors are underrepresented in all four prestigious Sociology journals. Apart from that, they also determined that women tend to use more qualitative methods, while men place much more emphasis on quantitative ones. Davenport and Snyder found that there is gender bias in the citations of Sociology too, basing their findings on the period 1985-1994 and an examination of 25 Sociology journals as indexed in the Social Science Citation Index. More recently, West et al. analyzed the extensive scholarly work available on JSTOR, covering the period from 1665 to 2011. They carried out their study in great detail and covered many disciplines, including Sociology, analyzing the gender composition in different fields and sub-fields.
In most areas of Sociology, men have predominantly dominated topics such as social issues, Sociology of communication, social ties, delinquency and deviance, crime, Sociology of education, social movements, segregation, social structure, religion and so forth. Women, on the other hand, most often wrote on family, sex and sexuality, early childhood, stress coping and household composition. This shows the underrepresentation of women in the Sociology discipline and its sub-fields. Summing up, according to this study, women composed 31.5% of authors in Sociology and men 68.5%. This is a huge gap for scholars who still believe that the representation of women is roughly equal to that of men.
Methodological Divide
In the field of Sociology, quantitative methods have advanced more rapidly than qualitative methods, but the boundaries are melting (Abell 1990). Studies provide ample evidence that there is still a clear dividing line between the two methodological orientations (quantitative and qualitative) among national and international sociologists. For example, Schwemmer and Wieczorek ascertained a methodological divide in Sociology by analyzing 8,737 abstracts of papers from 1995-2017, and they also found a rising trend in the use of quantitative methodologies. In this respect, paradigmatic adherence sometimes entrenches one methodological orientation and draws a sharp line of divide between the two by deriving its own assumptions from the philosophy of science. The United Kingdom is the best example of this paradigmatic strife, where much emphasis is placed on the use of qualitative methods (Bryman 2008; Gage 1989). Another example of this methodological war is Germany, where the so-called 'Positivismusstreit' and the clash of critical rationalism with critical theory led to a methodological and theoretical divide which still exists (Munch 2018).
In a more detailed analysis, the International Benchmarking Review of UK Sociology report of 2010 explained the divide of methodological orientations in the UK. The report points out the innumeracy of British sociologists in quantitative methods, which it attributes to the lack of quantitative training in the statistical measures that other international sociologists use to measure social reality. British sociologists are, however, robust in qualitative methods. The report further states: "A disturbing result of all this is that most British trained sociologists cannot read the quantitative literature in Sociology with any degree of understanding. Furthermore, there appears to be an anti-quant culture - a standard undergraduate methods course will include as much time critiquing the use of quantitative methods as teaching them (although critique presupposes an understanding of what is critiqued). It seems to us that the place to start seriously in quantitative methods training should be at the undergraduate level. Quantitative researchers feel isolated in many (but not all) Sociology departments, which typically have only one or two faculty members with strong quantitative knowledge and may feel more welcome in social policy or education" (BSA, HaPS and ESRC 2010: 23). Evidence of this stark difference and the lack of quantification in Sociology is provided by the ESRC report, which says: "It may be appropriate to provide some quantification of the lack of quantification. A recent assessment of 146 End of Award reports from ESRC Sociology projects found that only 21% of papers were purely quantitative, and an additional 14% mixed qualitative/quantitative, while 62% were qualitative only. To place the issue in international perspective, we compared the distribution of articles published in the 2008 issues of the British Journal of Sociology (BJS) and the American Sociological Review (ASR).
Of articles in the ASR, 66% were quantitative, compared with just 47% of the articles in BJS. This contrast becomes starker when the nationality of the (first) author is considered - for the BJS, most (9/14) of the quantitative articles were by overseas authors, while for the articles by UK authors only 31% were quantitative" (BSA, HaPS and ESRC 2010: 23). The ESRC report paved the way for the debate on 'methodological pluralism' in British Sociology, in which a series of papers was published to trace the trends of methods across time. For example, Payne, Williams and Chamberlain conducted a study to identify methodological orientations, among other indicators, covering three data sources: BSA conference papers, Work, Employment and Society (WES), and some other mainstream journals. In the mainstream journals, quantitative methods accounted for 14.3% while qualitative methods stood at 40.6%; BSA conference papers showed 10.8 percent quantitative and 47.1% qualitative. The percentages for WES are different, with 38.3% quantitative and 40.4% qualitative. They further asserted that there should be no fewer qualitative inquiries, but more quantitative ones. Hence, they did not identify any methodological pluralism in the mainstream British Sociology journals. The above discussion of gender composition represents the general domain of Sociology, not environmental Sociology specifically. In the domain of environmental Sociology, we did not find any relevant study that could have shed light on this, so we consider the general sociological domain in order to have some indirect relevance.
Materials and Methods
This article seeks to demonstrate the knowledge patterns of environmental Sociology in six high impact factor Sociology journals. The study draws upon Krogman and Darlington's research with certain differences. Firstly, their study covers the 27-year period between 1969 and 1996, while the present research covers the period between 1990 and 2018.
Secondly, their study comprised nine Sociology journals across lower and upper tiers. Our study takes six high impact factor journals: American Journal of Sociology (AJS), American Sociological Review (ASR), Annual Review of Sociology (ARS), Social Forces (SF), Social Problems (SP) and British Sociological Association-Sociology (BSA-Sociology). Thirdly, four journals (ASR, AJS, SF and SP) from our study were also subjects of Krogman and Darlington's analysis, which helped us to compare the findings of both time periods. Classical bibliometrics was utilized as the research design to carry out this study. It quantifies the academic outputs of people and institutions, which is then complemented through qualitative explanations (Ball 2017). The current study was based on the output analysis branch of bibliometrics. Output analysis is the quantification of publications from multiple angles. As Ball points out, "The basic parameter for a Bibliometrics output analysis is the amount of academic output by a person, institution, country or other group (aggregated on different levels)" (2017: 19). By counting publications and summing them up, one can gauge the productivity of authors, institutions, geographic regions, countries, journals and research organizations. However, the mere quantification of publications does not reveal much unless it is related to causal and intervening factors. For this purpose, this study also explored other quantifiable factors: (i) gender composition in authorship; (ii) methodological orientations of the articles; (iii) geographic location of the research (by which we mean the location of the environmental issue(s) on which the article was written); and (iv) authors' geographic affiliations (as discerned from their institutional affiliations). Furthermore, the data collection procedure was followed with utmost care for its validity.
We found a total of 203 environmental articles by employing advanced search on the respective websites of all six journals included in the study. The journals were selected for their highest impact factors, as established by the Web of Science in the field of Sociology and depicted in Table 1. Keeping in view the feasibility issues we faced in the absence of a standardized uniform database of journals, we had to rely on the Web of Science to select our sample of journals based on impact factor. These journals are: American Sociological Review (ASR), American Journal of Sociology (AJS), Annual Review of Sociology (ARS), Social Problems (SP), Social Forces (SF) and British Sociological Association-Sociology (BSA-Sociology). It was a difficult task to categorize and isolate environmental articles from others. For that purpose, we first conceptualized the range of environmental issues and then took that conceptualization into a broader framework for the inclusion of environmental articles. In this respect, we used Dunlap and Jorgenson's definition of environmental problems. They are of the view that 'environmental problem' is a common but vague concept. Ecology, according to them, performs functions that are necessary for the working of society and environment, and they conceptualize environmental problems with respect to three such contributing functions. Firstly, the environment provides us with the basic necessities of life, that is, resources, which include food, water, air and shelter. To ecologists, it is the basic function of the environment to provide this "sustenance base": the environment serves as a supply depot for human societies. Dunlap and Jorgenson assert that when the sustenance base of our environment is overused, this translates into environmental problems in the form of shortages and exploitation, which pave the way for environmental destruction.
Secondly, the consumption of our resources produces waste, and humans produce more waste than any other species on earth. The environment thus serves as a 'waste repository', absorbing the waste or turning it into useful or harmless substances. When the waste exceeds this absorptive capacity, it becomes water and air pollution. Finally, humans have to have a place to live. In this regard, the environment provides us with this place, a 'habitat', where we live, work, play and travel (in the form of homes, shops, factories, transportation systems and recreational settings). The environment thus provides us a habitat to live in, but when we overuse it, the result is overpopulation and overcrowding, from a single city to the entire planet. The three functions can also conflict: using one function of the environment may lead to the impairment of the other two. That is, when an area is used for waste dumping, it becomes unsuitable as living space. In like manner, when hazardous material spreads from one area to another, natural resources like water can no longer remain potable for either humans or animals. Finally, when an area of natural habitat is converted to living space for human beings, the area can no longer serve as a supply depot or as a habitat for wildlife. The mismatch among the three functions produces environmental destruction and may cause different forms of environmental problems. We broadly take this definition of environmental problems as the yardstick for extracting environmental articles from the selected journals. Thus, we expect all environmental articles to fall under the umbrella of this conceptualization: all environmental issues will conform to this categorization, and the problems addressed in environmental articles will come under these broader themes of environmental problems. In order to extract environmental articles, we used the publishers' search functions at each of the six journals' web sites.
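The keyword-based extraction and first-pass screening can be sketched programmatically. This is an illustrative sketch only: the record fields, journal codes and sample articles below are hypothetical, and the actual extraction relied on the publishers' own search interfaces followed by manual inspection of each hit against the conceptual framework above.

```python
from dataclasses import dataclass

@dataclass
class Article:
    journal: str   # hypothetical journal code, e.g. "SP"
    year: int
    kind: str      # publication type: "research", "book-review", ...
    title: str
    abstract: str

STEM = "environment"  # stem word entered in the search query
EXCLUDED_KINDS = {"review-article", "book-review", "editorial", "abstract",
                  "case-report", "product-review", "letter", "introduction"}

def passes_filter(a: Article) -> bool:
    """First pass: 1990-2018 window, research articles only, stem-word match."""
    in_window = 1990 <= a.year <= 2018
    is_research = a.kind not in EXCLUDED_KINDS
    has_stem = STEM in (a.title + " " + a.abstract).lower()
    return in_window and is_research and has_stem

def count_by_journal(articles):
    """Output analysis: tally candidate environmental articles per journal."""
    counts = {}
    for a in articles:
        if passes_filter(a):
            counts[a.journal] = counts.get(a.journal, 0) + 1
    return counts

sample = [  # hypothetical records
    Article("SP", 2008, "research", "Environmental justice and waste siting", "..."),
    Article("SF", 1995, "research", "School climate and attainment", "the school environment"),
    Article("AJS", 2012, "book-review", "Review of a climate monograph", "environmental change"),
    Article("ASR", 1985, "research", "Environmental movements", "environmental protest"),
]
print(count_by_journal(sample))  # the 1985 article and the book review are filtered out
```

Note that the second record passes the automatic keyword screen even though it concerns a school environment rather than an ecological one, which is exactly why each hit still required manual inspection against the conceptual framework.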
We began by entering the stem word 'environment' in the search query as "anywhere" to find all possible publications carrying the stem word and its variations. We expected this word to be more inclusive than others for the identification of environmental articles. We filtered our search results to the publication period between 1990 and 2018, excluding review articles, book reviews, editorials, abstracts, case reports, product reviews, letters and introductions. The time period of these data sources ran from January 1990 to December 2018. A couple of factors provide the rationale for selecting this specific duration. First, Krogman and Darlington had already covered the take-off period of environmental Sociology. Second, with the turn of the decade of 1990, environmental Sociology began to establish its institutional roots, when some environmental departments and research centers were established and research papers began to appear in abundance in different peer-reviewed journals (Laska 1993; Dunlap and Catton 1994; Krogman and Darlington 1996). After obtaining the filtered results of our search, we inspected each of the extracted articles to determine whether it conformed to our conceptual framework. The inspection of each article was necessary because the word 'environment' may be used in other contexts as well, such as organizational environment, school environment or political environment. Thus, after this extensive search and filtration, we excluded non-environmental articles based on our conceptualization. The study has some methodological limitations. First, it offers only a quantitative analysis of knowledge production in environmental Sociology and does not extend to judging the quality and reception of the academic publications. Second, it is based only on top-tier Sociology journals (published from the global North) and does not include journals with low impact factors or from the global South.
Third, this research is limited to scholarly knowledge production in the field of Sociology and does not cover other fields or specialized journals. We are conscious of the fact that much environmental research is published outside the category of Sociology journals, sometimes in journals with higher impact factors (e.g. environmental studies, demography and geography). However, including those was beyond the scope of this study.
Results and Discussion
In the following section, results are presented with the help of figures and tables. What stands out in all the findings are the multiple aspects of environmental knowledge, comprising its geographic focus, the productivity of journals with respect to environmental knowledge, methodological focus, gender composition, and the institutional affiliations of the authors. The results of our study reveal that from 1990 through 2018, the six journals published a total of 203 environmental articles, or only about three percent of all the articles (i.e. 6,366) they published.
Figure No. 1 Journals' Productivity
In the above figure, the blue color shows the total number of articles published in each journal while the dark red color indicates environmental articles. Unsurprisingly, the number of environmental articles in comparison to the total articles published is very low. The highest numbers of environmental articles come from two journals, SP (with 65) and SF (with 64), while the third-highest number of environmental articles was published in the Sociology journal (34). The top Sociology journals ASR, AJS and ARS have the lowest numbers of environmental articles. Apart from that, each journal has its specific pattern of publication: ASR, AJS and Sociology are all bi-monthly, ARS publishes one issue per year, and SP and SF are both quarterly peer-reviewed journals. The acceptance rate of papers also matters.
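The headline proportion reported above can be checked with a few lines of arithmetic. The per-journal counts below are the ones stated in the text (SP 65, SF 64, Sociology 34); the remainder is simply the balance of the 203 environmental articles spread across ASR, AJS and ARS, whose individual counts are not given.

```python
TOTAL_ARTICLES = 6366   # all articles published in the six journals, 1990-2018
ENVIRONMENTAL = 203     # environmental articles identified

# Share of environmental articles in the total output.
share = 100 * ENVIRONMENTAL / TOTAL_ARTICLES
print(f"environmental share: {share:.1f}%")  # about 3.2%, i.e. "only three percent"

# Per-journal counts stated in the text; ASR, AJS and ARS together hold the rest.
reported = {"SP": 65, "SF": 64, "Sociology": 34}
remainder = ENVIRONMENTAL - sum(reported.values())
print(f"ASR + AJS + ARS combined: {remainder} articles")
```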
Summing up, the small number of environmental articles in top impact factor journals was also a finding of Krogman and Darlington. Interestingly, in their study they found that the top three journals (AJS, ARS and SF) published only 14 environmental articles collectively in that period, whereas in our study these journals published a total of 94 environmental articles. The numbers show a much higher receptivity to environmental articles now than in the past. In comparing the two studies, we were aware that there are differences in time period and journals, but our study includes four journals from their study, namely AJS, ASR, SF and SP. From another angle, those four of their nine journals published a total of 41 environmental articles out of all 4,628 of their articles. The above (Figure 2) reveals the trend of environmental articles over almost three decades in comparison to non-environmental ones. We can see a huge divide between the two categories, environmental and non-environmental articles. There can be two possible explanations for the behavior of the trend. Firstly, it indicates occasional increases in environmental articles across time, with fifteen articles in each of 2008, 2012 and 2016, and eleven in 1993. Secondly, non-environmental articles peaked in 2004, and there is a general increase in environmental articles from 2004 onward. One possible reason for the rise in both categories in 2004 could be a limitation of our study, as we were not able to find ASR articles before 2004: on the ASR home page, all issues are available online from 2004 to date, but we were not able to locate the data for the years 1990-2003, as the issues of those years were not available online.
Consequently, both categories of articles show an increase, but we cannot deny the rise in environmental articles in the last 10 years of our study's time period. Moreover, some authors (Laska 1993; Dunlap and Catton 1994; Krogman and Darlington 1996) maintain that 1990 was the take-off time for environmental Sociology, as articles related to the environment appeared in different Sociology journals. The trend line above suggests a different view of their argument, since we have seen the rise in environmental articles with the turn of the century, and we predict that environmental debates will further expand to new premises and understandings. In addition, in our aim to address the geographic focus of environmental articles written on numerous dimensions, we found an increasing geographic concentration in two main regions, i.e. the United States (USA) and Canada, and Europe. Figure 3 below displays the geographic locations of the environmental issues upon which these studies have taken place. It is apparent from these figures that more than half of the environmental articles (54.7%) focused on environmental issues of the USA and Canada. It is pertinent to note that environmental discourse started in the USA some decades ago, and more contributions are now coming from there. The second-highest number of environmental articles has been published on the environmental issues of Western Europe, with 13.3%. In contrast, other regions of the world were found to have negligible representation: Asia and Other regions were each the location of 4.9% of articles, and, in like manner, South/Latin America and Global focus each accounted for 2.5%.
It is imperative to clarify what categories like "other", "no location" and "global focus" mean for us. Firstly, the category "no location" includes those environmental articles which have no specific geographic particulars; for instance, some environmental articles have been written on the subject of environmental Sociology, its academic conceptualization and other academic issues. Secondly, the category "global focus" means those environmental articles whose geographic focus was inclined toward a specific country but with a global eye on different countries at a time; for instance, there were two such journal articles which focused on the comparison of different environmental issues in the developing world, developed and less developed countries. The total frequency of such articles was only 5.
Figure No. 3 Geographic Focus of Environmental Issues in the Articles
Finally, the third category "other" means those environmental articles which did not come under our coded regions; only 4.9 percent of articles were detected with such particulars. Summing "other" and "no location" makes a total of 22.1 percent of articles. Overall, the findings suggest more focus on the developed part of the world and less or negligible concentration in regions like South/Latin America and Asia.
Pakistan Social Sciences Review (PSSR) June, 2020 Volume 4, Issue 2
Figure No. 4: Methodological Orientation of Environmental Articles
One of the important and interesting findings of our study was the methodological focus of environmental articles, as shown in the above figure. Before proceeding to the methodological focuses, it is important to understand the classification scheme that was employed. In the majority of the articles, the scholars explain the methods used for approaching the environmental issues.
However, some articles were difficult to categorize as purely quantitative, qualitative, or mixed method. Articles that measured the issue through numerical and statistical data were coded as quantitative. In contrast, those which offered simple description and exploration of the issue with the help of themes and elaborative detail were coded as qualitative. Articles that employed both qualitative and quantitative approaches but did not specify the research design were coded as mixed method, and likewise, studies explicitly employing both quantitative and qualitative methods were considered mixed method. As shown in the figure, more than half of the environmental articles (52.7%) were quantitative, whereas 41.4% were qualitative and only 5.9% used a mixed method approach. Having found this, our next closely related finding concerned what the methodological orientation would be when compared against the regional categorization. We found a clear methodological divide in the sub-field of environmental Sociology literature. This is very interesting with respect to the American and European methodological polarization in environmental Sociology: in the USA, quantitative methods have been applied to environmental issues far more often than qualitative ones, whereas the opposite trend holds in the European region, where an increasing emphasis has been placed on qualitative rather than quantitative methods. The figure below represents these regional methodological divides across the six Sociology journals.
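The reported shares can be reproduced with a simple tally over the coded articles. In the sketch below, the raw counts (107 quantitative, 84 qualitative, 12 mixed, of 203 articles) are our illustrative reconstruction from the reported percentages, not figures taken directly from the coding sheet:

```python
# Illustrative tally of the methodological coding. The counts are
# reconstructed from the reported percentages (52.7% / 41.4% / 5.9%
# of 203 articles), not taken from the original coding sheet.
from collections import Counter

def method_shares(codes):
    """Return {method: percentage of all coded articles}, one decimal place."""
    counts = Counter(codes)
    total = sum(counts.values())
    return {m: round(100 * n / total, 1) for m, n in counts.items()}

codes = ["quantitative"] * 107 + ["qualitative"] * 84 + ["mixed"] * 12
print(method_shares(codes))
# {'quantitative': 52.7, 'qualitative': 41.4, 'mixed': 5.9}
```

The same helper applies unchanged to any of the other categorical codings in the study (geographic focus, gender), since each is just a frequency distribution over one coded variable.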
Figure No. 5: Methodological Focuses across Regions
In the above figure, blue denotes quantitative methods, dark red qualitative, and light green the mixed method approach.
It is important to mention that the 17.2 percent of articles found with no specific location were excluded from this finding, as they serve no linkage with any region. In the USA and Canada, the highest number of articles used quantitative methods (78), against 26 qualitative and only 7 mixed. Western Europe, where we also found a concentration of environmental articles, shows an increasing emphasis on the qualitative, with 19 qualitative, 5 quantitative, and only 3 mixed articles. The remaining regions show slight variation: among the globally focused environmental articles, only quantitative methods were used, with no qualitative or mixed methods. In the same pattern, in Asia these numbers stand at 6 quantitative, 3 qualitative, and 1 mixed, and the "Other" category behaves the same as the Asian region. In addition, one important finding of our study concerned the institutional affiliations of the authors who wrote the environmental articles. To establish this, we simply coded the affiliations from the articles: every journal article mentions its authors' affiliations and addresses. However, we did not assess affiliation through the prestige or rankings of the institutions; for us, the best source for an author's affiliation was the article itself, and where an author's address and affiliation were not mentioned anywhere, we searched Google for the information. Our finding reveals that there were a total of 363 authors across all environmental articles, with 5 being the highest number of co-authors detected on a single paper. The total authors were further classified by the geographic regions of their institutions so that we could elaborate this finding and establish the share of authors with respect to each region.
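The region-by-method counts just described can be collected into a small table to surface the regional divide directly. In this sketch, the "Global focus" quantitative count of 5 is inferred from the 5 globally focused articles being exclusively quantitative; it is an assumption rather than a figure stated outright:

```python
# Region-by-method article counts as reported in the text. The
# "Global focus" quantitative count (5) is inferred, not stated directly;
# "Other" is reported as behaving the same as Asia.
REGION_METHODS = {
    "USA/Canada":     {"quantitative": 78, "qualitative": 26, "mixed": 7},
    "Western Europe": {"quantitative": 5,  "qualitative": 19, "mixed": 3},
    "Global focus":   {"quantitative": 5,  "qualitative": 0,  "mixed": 0},
    "Asia":           {"quantitative": 6,  "qualitative": 3,  "mixed": 1},
    "Other":          {"quantitative": 6,  "qualitative": 3,  "mixed": 1},
}

def dominant_method(counts):
    """Method with the highest article count in a region."""
    return max(counts, key=counts.get)

for region, counts in REGION_METHODS.items():
    print(f"{region}: {dominant_method(counts)}")
```

Running this makes the polarization explicit: every region except Western Europe leans quantitative, which is exactly the USA/Europe divide discussed above.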
In the region-wise share of institutional affiliations, the highest share belongs to the USA, at 80.9%. This is a remarkable outcome. The dominant share of authors from the USA holds true when compared to the geographic focus of environmental issues, as shown in the figure: for 111 articles the geographic focus was the USA/Canada. The United Kingdom (UK) comes second, with 10% of authors' institutional affiliations, while other regions such as Asia, Western Europe, and Canada collectively account for barely 10%. Last but not least, our final finding concerned the gender composition of authorship. At the time of coding we identified the total authorship of all environmental articles and, hand in hand, were able to determine male and female authorship as a distinctive category of findings, one which reveals much for gender and feminist discourses. One cannot ignore this important aspect of many disciplines, where the gender differences in authorship are huge. However, contrary to our expectation, our study reveals that in the environmental Sociology literature of these six journals, men dominate authorship: nearly seventy percent (69%) of authors were male and the remaining 31% were female. One cannot freely assume that these journals are biased in environment-related articles simply by seeing that women have less authorship in the domain. In this regard, it is important to highlight that there could be many possible explanations for this gender difference in authorship. As previous studies show, lower authorship from women may be due to multiple factors, such as holding positions more lucrative than authorship, other life choices being better for them, a lack of access to opportunity, different life situations, and less engagement in academic circles.
In Sociology we see a comparatively considerable female authorship, but in environment-related articles the lower representation of women as authors remains a big question in front of scholars and academicians. We then extended this finding to see the gender composition across the journals. Notwithstanding that women wrote less overall, in the BSA-Sociology journal there was an exactly equal percentage of male and female authors (50% each). Apart from this, in all the other journals we witness a lower proportion of women authors; at least in our findings, the journals which published more articles show a larger gender gap in their authorship. For instance, in the SP journal 77.6% of the authors of environmental articles were male, while only 22.3% were female.
Figure No. 7: Sex of the Authors across Journals
In the same pattern, in the SF journal there were only 29.9% female authors compared to 70% male. Moreover, in AJS there were 27 authors in total, of whom the male and female shares were 74% and 25.9% respectively. We can see that AJS and ASR are nearly the same in gender composition, as are SP and SF. In ARS, 55% of authors were male and only 45% female.
Discussion
Keeping in view our core objectives, this study discovered some interesting results.
We found a total of 203 environmental articles in six top Sociology journals (ASR, AJS, ARS, SF, SP and BSA-Sociology). This finding differs somewhat from that of Krogman and Darlington, who found 165 environment-related journal articles in nine journals (low- and top-tier) over the period from 1969 to 1996. In our study, the total of environmental articles for 1990-1999 was 42, which increased to 72 for 2000-2009; in the final period, from 2010 to 2018, this number jumped to 89. Despite the differences in time period and the gap of years, this finding confirms continued growth through yearly variation. Four of the journals in our sample were also present in Krogman and Darlington's analysis: the American Journal of Sociology, American Sociological Review, Social Problems and Social Forces. They found 41 environmental articles in these four journals, whereas we found 159, which shows a huge variance and a growing trend over time, specifically with regard to these four impact-factor journals. Perhaps the most striking finding of our study was the geographic focus of the environmental articles, which clearly distinguishes the USA/Canada and Europe from other regions and suggests a concentration of environmental discourse there. This may in part be because environmental discourse was initiated in these regions; other parts of the world are far less visible in this context. The small number of environmental articles from other regions raises a question: do those regions have fewer environmental issues, or is it because the journals are all UK- or USA-based that the focus on environmental issues in other parts of the world is smaller?
In response to the last concern: five of the six journals in this study are USA-based and one is UK-based, which is why the geographic focus shows more skewness towards the USA. It is important to clarify here that we did not select these six journals on the basis of any region, country or other geographic location, but through their high impact factors; thus the geographic focus of the environmental articles stands for what the data shows. Hence, only if the journals had been selected on a regional basis, with each region in equal proportion, would it be possible to draw conclusions region by region. One of the traditions used to explain global inequalities in the sciences is decolonizing theory (Kerr 2014; Mignolo 2011, 2018; Santos 2007, 2018). Looking at the geographic focuses of the environmental articles in this study, a decolonizing theorist would possibly interpret that the West maintains its hegemony of knowledge production by not giving space to alternative research forms and explanations. In this regard, Santos used the word epistemicide, which refers to the way the hegemonic structure of global science overlooks peripheral epistemologies and knowledge. These power relations pave the way for an epistemic monoculture in which the West holds the whole structure of knowledge. In the same way, this tradition asserts the need for cognitive justice, which would enable norms of plurality of knowledge by ensuring that peripheral members of the academic community also have a voice and weight (Santos 2007; Visvanathan 1997). More specifically, Schott's study reveals the regional dominance of Western countries in natural science; he too extracted data from the Web of Science. Such geographical skewness is, however, greater in the social sciences and humanities than in the natural sciences (Demeter 2019). Going beyond mere dominance, Nye argues that the social sciences are a means of global control and face a hegemonic bias.
Demeter makes a connection between global knowledge-production patterns and their role in maintaining the power structure of the existing system. He argues that in the social sciences there exists a double-edged Matthew Effect, in which peripheral academics are not in a position to fully participate in the mainstream system of knowledge production, and that 75% of social science studies come from the United States and Western Europe. Analyzing data from the Web of Science, he maintains that the centre of all social science disciplines and their knowledge production is either the US or Western Europe, while other regions hold a very marginalized position. Previous research has likewise explored this uneven regional dominance of academic capital. The most important paper in this respect is that of Bonitz, Bruckner and Scharnhorst, who claim that a very few core countries produce more than any of the peripheral countries. They developed the concept of the Matthew Effect to account not only for the micro-level academic capital of a researcher but also for countries and regions, stating that "a minority of countries, expecting a high number of citations per scientific paper, gains more citations than expected, while the majority of countries, expecting only a low number of citations per scientific paper, achieves fewer citations than expected. In the spirit of Merton, we called this effect the 'Matthew Effect for Countries'" (Bonitz, Bruckner & Scharnhorst 1997: 408). Various other studies have similarly confirmed that the global share of knowledge production in science is unequal: there are very few successful countries, while the majority remain invisible in such practice (see, for example, Lee & Sohn 2016; Makkonen & Mitze 2016; Perc 2014; Schmoch & Schubert 2008). Similarly, the methodological focuses of the environmental articles were also observed in dichotomies.
Owing to this lesser focus on qualitative and greater focus on quantitative methods, our study confirms the methodological divide described by Schwemmer and Wieczorek in the field of environmental Sociology with respect to our six journals. It has been observed across the literature that in the USA there is much emphasis on the quantitative strand of methodology, while in the UK the emphasis is qualitative (see also Kerlin 2000; Bryman 2008; Payne, Williams & Chamberlain 2004; Gage 1989). Of the total 363 authors, the USA's share is 294, Canada's 10, the UK's 37, Asia's 11 and Western Europe's 11. One can easily see that if the highest number of articles were written mostly in the USA, the institutional affiliations will follow the same pattern. It is important to ask why environmental knowledge and its authorship patterns are so disproportionately distributed against other regions; it is high time for environmental sociologists to examine such patterns of knowledge production in detailed analysis, especially in Sociology journals. Of the total authorship, 114 authors were female while the remaining 249 were male. By itself this finding does not reveal much: we only came to know the total male and female authors collectively, and it needs further breakdown into detailed authorship patterns. For instance, do women or men write as first authors? Do women and men co-author papers in environmental Sociology? What are the patterns of first and last authorship? How many women and men write papers alone, as single authors? On the part of our finding, we have seen reflections of a clear sex-based division of academic labour, with male dominance in producing sociological literature on environmental issues: female authors are 31% of the total 363 authors, and for males this percentage is 69%. One can easily be convinced of the unequal preferences in authorship.
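The reported gender shares follow directly from the raw author counts; a minimal check:

```python
# Deriving the reported gender shares (69% / 31%) from the raw
# author counts given in the text: 249 male and 114 female authors.
def share(part, total):
    """Percentage of `part` in `total`, rounded to the nearest integer."""
    return round(100 * part / total)

male, female = 249, 114
total = male + female            # 363 authors in all
print(share(male, total), share(female, total))  # 69 31
```

The same arithmetic reconciles the regional affiliation shares (e.g. 294 of 363 US-affiliated authors yields the reported figure of roughly 81%).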
It remains a question whether female authors simply write less on environmental issues than their male counterparts, or whether the gender-equality discourse in the authorship patterns of scholarly works can be ignored. Prior research has highlighted the gender division in scholarly work, but where the environmental Sociology specialty stands is shown in this study, especially with reference to these top six Sociology journals. Apart from this, our paper is a new addition to the newly emerged conceptual framework of Wilder and Walters, who developed the two concepts of contribution studies and productivity studies. Based on the characteristics of 'contribution studies', our study confirms this framework and adds further value to it. Contribution studies, according to them, are those that evaluate contributions from contributors, such as departments, universities and others, to a well-defined body of literature. Firstly, in these studies a publication outlet has to be identified for inclusion in the analysis; in our study the publication outlets were the six top Sociology journals containing the Sociology literature. Secondly, a contribution study has to have a list of contributors, which in our paper consisted of the authors who published in these six journals; we did not identify each author individually, but as a whole we measured their contribution to the sub-field of environmental Sociology.
Conclusion
The study concludes that environmental Sociology has not been very successful in attracting the attention of Sociologists for knowledge production in the recent past. The asymmetrical distribution across gender, geography and methods suggests that the sub-discipline may fall into crisis if it arbitrarily engages the intellectual community to produce knowledge in a global knowledge market where the academic division of intellectual labour is carefully examined along all the variables involved.
However, the findings of the study have a limited methodological scope. We could analyze publications from only six top-ranked Sociology journals, whereas the knowledge produced in this domain extends to books, chapters and a vast range of other sources. It would be much more interesting to see the patterns of knowledge production in this domain across sources and with a relatively larger scope, where sources from both the global South and North could be taken into account.
// PHCCorso/screeps
import roleCollector from './collector';
import { Activity } from '../../constants/creep';
import roleHarvester from './harvester';
/** Run collector logic unless the creep is mid-harvest; fall back to harvesting when the room has no containers. */
export function collectOrHarvest(creep: Creep) {
if (creep.memory['activity'] != Activity.HARVEST) {
const collectorActivity = roleCollector.run(creep);
if (collectorActivity !== Activity.NONE) {
return collectorActivity;
}
}
if (creep.room.memory['containers'].length == 0) {
return roleHarvester.run(creep);
} else {
return roleCollector.run(creep);
}
}
|
// Kumassy/tunnelto-tcp-vhost
use futures::channel::mpsc::{unbounded, UnboundedSender};
use futures::{SinkExt, StreamExt};
use tokio::net::TcpStream;
use tokio_tungstenite::tungstenite::Message;
use tokio_tungstenite::{MaybeTlsStream, WebSocketStream};
use human_panic::setup_panic;
pub use log::{debug, error, info, warn};
use std::collections::HashMap;
use std::env;
use std::sync::{Arc, RwLock};
mod cli_ui;
mod config;
mod error;
mod introspect;
mod local;
mod update;
pub use self::error::*;
pub use config::*;
pub use tunnelto_lib::*;
use crate::cli_ui::CliInterface;
use crate::introspect::IntrospectionAddrs;
use colored::Colorize;
use futures::future::Either;
use std::time::Duration;
use tokio::sync::Mutex;
pub type ActiveStreams = Arc<RwLock<HashMap<StreamId, UnboundedSender<StreamMessage>>>>;
lazy_static::lazy_static! {
pub static ref ACTIVE_STREAMS:ActiveStreams = Arc::new(RwLock::new(HashMap::new()));
pub static ref RECONNECT_TOKEN: Arc<Mutex<Option<ReconnectToken>>> = Arc::new(Mutex::new(None));
}
#[derive(Debug, Clone)]
pub enum StreamMessage {
Data(Vec<u8>),
Close,
}
#[tokio::main]
async fn main() {
setup_panic!();
let mut config = match Config::get() {
Ok(config) => config,
Err(_) => return,
};
update::check().await;
let introspect_addrs = introspect::start_introspection_server(config.clone());
loop {
let (restart_tx, mut restart_rx) = unbounded();
let wormhole = run_wormhole(config.clone(), introspect_addrs.clone(), restart_tx);
let result = futures::future::select(Box::pin(wormhole), restart_rx.next()).await;
config.first_run = false;
match result {
Either::Left((Err(e), _)) => match e {
Error::WebSocketError(_) | Error::NoResponseFromServer | Error::Timeout => {
error!("Control error: {:?}. Retrying in 5 seconds.", e);
tokio::time::sleep(Duration::from_secs(5)).await;
}
_ => {
eprintln!("Error: {}", format!("{}", e).red());
return;
}
},
Either::Right((Some(e), _)) => {
warn!("restarting in 3 seconds...from error: {:?}", e);
tokio::time::sleep(Duration::from_secs(3)).await;
}
_ => {}
};
info!("restarting wormhole");
}
}
/// Setup the tunnel to our control server
async fn run_wormhole(
config: Config,
introspect: IntrospectionAddrs,
mut restart_tx: UnboundedSender<Option<Error>>,
) -> Result<(), Error> {
let interface = CliInterface::start(config.clone(), introspect.clone());
tokio::time::sleep(std::time::Duration::from_millis(500)).await;
let (websocket, sub_domain) = connect_to_wormhole(&config).await?;
interface.did_connect(&sub_domain);
// split reading and writing
let (mut ws_sink, mut ws_stream) = websocket.split();
// tunnel channel
let (tunnel_tx, mut tunnel_rx) = unbounded::<ControlPacket>();
// continuously write to websocket tunnel
let mut restart = restart_tx.clone();
tokio::spawn(async move {
loop {
let packet = match tunnel_rx.next().await {
Some(data) => data,
None => {
warn!("control flow didn't send anything!");
let _ = restart.send(Some(Error::Timeout)).await;
return;
}
};
if let Err(e) = ws_sink.send(Message::binary(packet.serialize())).await {
warn!("failed to write message to tunnel websocket: {:?}", e);
let _ = restart.send(Some(Error::WebSocketError(e))).await;
return;
}
}
});
// continuously read from websocket tunnel
loop {
match ws_stream.next().await {
Some(Ok(message)) if message.is_close() => {
debug!("got close message");
let _ = restart_tx.send(None).await;
return Ok(());
}
Some(Ok(message)) => {
let packet = process_control_flow_message(
&introspect,
tunnel_tx.clone(),
message.into_data(),
)
.await
.map_err(|e| {
error!("Malformed protocol control packet: {:?}", e);
Error::MalformedMessageFromServer
})?;
debug!("Processed packet: {:?}", packet.packet_type());
}
Some(Err(e)) => {
warn!("websocket read error: {:?}", e);
return Err(Error::Timeout);
}
None => {
warn!("websocket sent none");
return Err(Error::Timeout);
}
}
}
}
async fn connect_to_wormhole(
config: &Config,
) -> Result<(WebSocketStream<MaybeTlsStream<TcpStream>>, String), Error> {
let (mut websocket, _) = tokio_tungstenite::connect_async(&config.control_url).await?;
// send our Client Hello message
let client_hello = match config.secret_key.clone() {
Some(secret_key) => ClientHello::generate(
config.sub_domain.clone(),
ClientType::Auth { key: secret_key },
),
None => {
// if we have a reconnect token, use it.
if let Some(reconnect) = RECONNECT_TOKEN.lock().await.clone() {
ClientHello::reconnect(reconnect)
} else {
ClientHello::generate(config.sub_domain.clone(), ClientType::Anonymous)
}
}
};
info!("connecting to wormhole...");
let hello = serde_json::to_vec(&client_hello).unwrap();
websocket
.send(Message::binary(hello))
.await
.expect("Failed to send client hello to wormhole server.");
// wait for Server hello
let server_hello_data = websocket
.next()
.await
.ok_or(Error::NoResponseFromServer)??
.into_data();
let server_hello = serde_json::from_slice::<ServerHello>(&server_hello_data).map_err(|e| {
error!("Couldn't parse server_hello from {:?}", e);
Error::ServerReplyInvalid
})?;
let sub_domain = match server_hello {
ServerHello::Success {
sub_domain,
client_id,
..
} => {
info!("Server accepted our connection. I am client_{}", client_id);
sub_domain
}
ServerHello::AuthFailed => {
return Err(Error::AuthenticationFailed);
}
ServerHello::InvalidSubDomain => {
return Err(Error::InvalidSubDomain);
}
ServerHello::SubDomainInUse => {
return Err(Error::SubDomainInUse);
}
ServerHello::Error(error) => return Err(Error::ServerError(error)),
};
Ok((websocket, sub_domain))
}
async fn process_control_flow_message(
introspect: &IntrospectionAddrs,
mut tunnel_tx: UnboundedSender<ControlPacket>,
payload: Vec<u8>,
) -> Result<ControlPacket, Box<dyn std::error::Error>> {
let control_packet = ControlPacket::deserialize(&payload)?;
match &control_packet {
ControlPacket::Init(stream_id) => {
info!("stream[{:?}] -> init", stream_id.to_string());
}
ControlPacket::Ping(reconnect_token) => {
log::info!("got ping. reconnect_token={}", reconnect_token.is_some());
if let Some(reconnect) = reconnect_token {
let _ = RECONNECT_TOKEN.lock().await.replace(reconnect.clone());
}
let _ = tunnel_tx.send(ControlPacket::Ping(None)).await;
}
ControlPacket::Refused(_) => return Err("unexpected control packet".into()),
ControlPacket::End(stream_id) => {
// find the stream
let stream_id = stream_id.clone();
info!("got end stream [{:?}]", &stream_id);
tokio::spawn(async move {
let stream = ACTIVE_STREAMS.read().unwrap().get(&stream_id).cloned();
if let Some(mut tx) = stream {
tokio::time::sleep(Duration::from_secs(5)).await;
let _ = tx.send(StreamMessage::Close).await.map_err(|e| {
error!("failed to send stream close: {:?}", e);
});
ACTIVE_STREAMS.write().unwrap().remove(&stream_id);
}
});
}
ControlPacket::Data(stream_id, data) => {
info!(
"stream[{:?}] -> new data: {:?}",
stream_id.to_string(),
data.len()
);
if !ACTIVE_STREAMS.read().unwrap().contains_key(&stream_id) {
local::setup_new_stream(
introspect.forward_address.port(),
tunnel_tx.clone(),
stream_id.clone(),
)
.await;
}
// find the right stream
let active_stream = ACTIVE_STREAMS.read().unwrap().get(&stream_id).cloned();
// forward data to it
if let Some(mut tx) = active_stream {
tx.send(StreamMessage::Data(data.clone())).await?;
info!("forwarded to local tcp ({})", stream_id.to_string());
} else {
error!("got data but no stream to send it to.");
let _ = tunnel_tx
.send(ControlPacket::Refused(stream_id.clone()))
.await?;
}
}
};
Ok(control_packet.clone())
}
|
from django.apps import AppConfig
class LqConfig(AppConfig):
name = 'lq'
|
def permutate(self):
    # One step of a Johnson-Trotter-style walk: the active element swaps with
    # its neighbour in the direction of travel (its "speed"), and the direction
    # of travel reverses once the active element reaches either end.
    if self.__speeds[self.__act_index] == -1:
        # moving left: swap the active element with its left neighbour
        temp = self[self.__act_index - 1]
        temp_speed = self.__speeds[self.__act_index - 1]
        self[self.__act_index - 1] = self[self.__act_index]
        self[self.__act_index] = temp
        self.__speeds[self.__act_index - 1] = self.__speeds[self.__act_index]
        self.__speeds[self.__act_index] = temp_speed
        self.__act_index -= 1
    elif self.__speeds[self.__act_index] == 1:
        # moving right: mirror image of the branch above (this branch and the
        # boundary check below were left empty in the original; reconstructed)
        temp = self[self.__act_index + 1]
        temp_speed = self.__speeds[self.__act_index + 1]
        self[self.__act_index + 1] = self[self.__act_index]
        self[self.__act_index] = temp
        self.__speeds[self.__act_index + 1] = self.__speeds[self.__act_index]
        self.__speeds[self.__act_index] = temp_speed
        self.__act_index += 1
    if self.__act_index == 0 or self.__act_index == (self.size - 1):
        # reverse the direction of travel at the boundaries
        self.__speeds[self.__act_index] *= -1 |
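The `permutate` fragment above follows the shape of the Johnson-Trotter ("plain changes") algorithm: each element carries a direction, the chosen element swaps with the neighbour it faces, and directions flip at the ends. For comparison, a self-contained sketch of the full algorithm (my own reconstruction, independent of the class above):

```python
def johnson_trotter(n):
    """Yield all permutations of 1..n, each differing from the previous
    one by a single swap of adjacent elements (Johnson-Trotter)."""
    perm = list(range(1, n + 1))
    speeds = [-1] * n                 # -1 = looking left, +1 = looking right
    yield perm[:]
    while True:
        # find the largest "mobile" element: one whose neighbour in its
        # direction of travel is smaller than itself
        mobile = -1
        for i, v in enumerate(perm):
            j = i + speeds[i]
            if 0 <= j < n and perm[j] < v and (mobile == -1 or v > perm[mobile]):
                mobile = i
        if mobile == -1:
            return                    # no mobile element left: done
        j = mobile + speeds[mobile]
        perm[mobile], perm[j] = perm[j], perm[mobile]
        speeds[mobile], speeds[j] = speeds[j], speeds[mobile]
        # reverse the direction of every element larger than the moved one
        moved = perm[j]
        for i, v in enumerate(perm):
            if v > moved:
                speeds[i] = -speeds[i]
        yield perm[:]
```

For `n = 3` this yields the six permutations in plain-changes order, each reachable from the previous by one adjacent transposition.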
#!/usr/bin/env python3
# MIT License
#
# Copyright (c) 2017 <NAME>
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
import xml.etree.ElementTree as ET
import argparse
import requests
from urllib.parse import urlparse
import re
import sys
class NRDPClient:
url = ""
token = ""
def __init__(self, url, token):
urlparse(url)
self.token = token
self.url = url
def run(self, args):
xml = self.generate_xml(args)
try:
response = self.send(xml)
except Exception as e:
print("Connection Error occurred: {}".format(str(e),))
return 1
try:
count = self.parse_response(response)
except Exception as e:
print("Failed to parse response: {}".format(str(e),))
return 1
if count > 0:
return 0
else:
return 2
def generate_xml(self, data):
checktype = data.checktype
hostname = data.hostname
servicename = data.service
state = data.state
output_text = data.output
checkresults_tag = ET.Element('checkresults')
checkresult_tag = ET.SubElement(checkresults_tag, 'checkresult')
# cr.set('type', 'service')
if data.service:
checkresult_tag.set('type', 'service')
checkresult_tag.set('checktype', checktype)
hostname_tag = ET.SubElement(checkresult_tag, 'hostname')
hostname_tag.text = hostname
if data.service:
servicename_tag = ET.SubElement(checkresult_tag, 'servicename')
servicename_tag.text = servicename
else:
checkresult_tag.set('type', 'host')
state_tag = ET.SubElement(checkresult_tag, 'state')
state_tag.text = state
output_tag = ET.SubElement(checkresult_tag, 'output')
output_tag.text = output_text
return ET.tostring(checkresults_tag, method='xml')
def send(self, xml):
""" Sends the service/host check to a remote NRDP server """
try:
response = requests.post(self.url, data={'token': self.token, 'cmd': 'submitcheck', 'XMLDATA': xml},
timeout=5)
except requests.exceptions.Timeout as e:
raise Exception("Request timed out") from e
except requests.exceptions.ConnectionError as e:
raise Exception("Failed to connect to server, network error: {}".format(e,)) from e
except requests.exceptions.RequestException as e:
raise Exception("Failed to connect to server: {}".format(e,)) from e
if response.ok:
return response
else:
raise RuntimeError("NRDP server returned HTTP status {}".format(response.status_code))
def parse_response(self, response):
root = ET.fromstring(response.text)
status = root.find('./status')
if status is None:
raise Exception("Failed to get status from response")
match = re.match(r"^\d+$", status.text)
if match is None:
raise Exception("Unsupported status message: " + status.text)
statuscode = int(match.group(0))
if statuscode != 0:
error_message = root.find('./message')
raise Exception("Server returned an error. Status: {}, Message \"{}\"".format(statuscode, error_message.text))
processed_text = root.find('./meta/output')
if processed_text is None:
raise Exception("Failed to get output text from server response: \"{}\"".format(processed_text))
match = re.match(r"^(\d+) checks processed\.$", processed_text.text)
if match is None:
raise Exception("Failed to parse count from output text of server response: \"{}\"".format(processed_text.text))
count = int(match.group(1))
return count
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument('-u', '--url', required=True, help='URL to the NRDP server')
parser.add_argument('-t', '--token', required=True, help='Authentication token to the NRDP agent')
parser.add_argument('-H', '--hostname', required=True, help='Hostname of the host/service check')
parser.add_argument('-s', '--service', help='For service checks, the name of the service associated with the passive check result')
parser.add_argument('-S', '--state', required=True, help='State of the check result (0=OK/UP, 1=WARNING, 2=CRITICAL, 3=UNKNOWN)')
parser.add_argument('-o', '--output', required=True, help='Text output to submit')
# parser.add_argument('-d', '--delim', help='')
parser.add_argument('-c', '--checktype', required=True, help='1 for passive, 0 for active')
args = parser.parse_args()
statuscode = NRDPClient(args.url, args.token).run(args)
sys.exit(statuscode)
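For reference, the check-result payload assembled by `generate_xml` above has the following shape for a service check. This standalone sketch mirrors the element names used in the code; the host name, service name, state, and output values are made up for illustration:

```python
import xml.etree.ElementTree as ET

# Minimal sketch of the XML that NRDPClient.generate_xml() produces for a
# service check (mirrors the code above, not the full NRDP specification).
checkresults = ET.Element('checkresults')
cr = ET.SubElement(checkresults, 'checkresult')
cr.set('type', 'service')
cr.set('checktype', '1')                       # 1 = passive
ET.SubElement(cr, 'hostname').text = 'web01'   # illustrative host
ET.SubElement(cr, 'servicename').text = 'HTTP' # illustrative service
ET.SubElement(cr, 'state').text = '0'          # 0 = OK
ET.SubElement(cr, 'output').text = 'HTTP OK - 200 in 0.12s'
payload = ET.tostring(checkresults, method='xml')
print(payload.decode())
```

The server then receives this string in the `XMLDATA` form field alongside the `token` and `cmd=submitcheck` parameters, as done in `send`.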
|
package com.webank.servicemanagement.dto;
import java.text.ParseException;
import com.webank.servicemanagement.domain.ServiceRequest;
import com.webank.servicemanagement.utils.DateUtils;
public class UpdateServiceRequestRequest {
private String name;
private String reporterRoleId;
private String reporter;
private String reportTime;
private String emergency;
private String description;
private String result;
private String ProcessInstanceId;
private String status;
public static ServiceRequest toDomain(UpdateServiceRequestRequest updateServiceRequestRequest,
ServiceRequest existedServiceRequest) throws ParseException {
ServiceRequest serviceRequest = existedServiceRequest;
if (serviceRequest == null) {
serviceRequest = new ServiceRequest();
}
if (updateServiceRequestRequest.getName() != null) {
serviceRequest.setName(updateServiceRequestRequest.getName());
}
if (updateServiceRequestRequest.getReporter() != null) {
serviceRequest.setReporter(updateServiceRequestRequest.getReporter());
}
if (updateServiceRequestRequest.getReportTime() != null) {
serviceRequest.setReportTime(DateUtils.formatStringToDate((updateServiceRequestRequest.getReportTime())));
}
if (updateServiceRequestRequest.getEmergency() != null) {
serviceRequest.setEmergency(updateServiceRequestRequest.getEmergency());
}
if (updateServiceRequestRequest.getDescription() != null) {
serviceRequest.setDescription(updateServiceRequestRequest.getDescription());
}
if (updateServiceRequestRequest.getResult() != null) {
serviceRequest.setResult(updateServiceRequestRequest.getResult());
}
if (updateServiceRequestRequest.getStatus() != null) {
serviceRequest.setStatus(updateServiceRequestRequest.getStatus());
}
return serviceRequest;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
public String getReporterRoleId() {
return reporterRoleId;
}
public void setReporterRoleId(String reporterRoleId) {
this.reporterRoleId = reporterRoleId;
}
public String getReporter() {
return reporter;
}
public void setReporter(String reporter) {
this.reporter = reporter;
}
public String getReportTime() {
return reportTime;
}
public void setReportTime(String reportTime) {
this.reportTime = reportTime;
}
public String getEmergency() {
return emergency;
}
public void setEmergency(String emergency) {
this.emergency = emergency;
}
public String getDescription() {
return description;
}
public void setDescription(String description) {
this.description = description;
}
public String getResult() {
return result;
}
public void setResult(String result) {
this.result = result;
}
public String getProcessInstanceId() {
return ProcessInstanceId;
}
public void setProcessInstanceId(String processInstanceId) {
ProcessInstanceId = processInstanceId;
}
public String getStatus() {
return status;
}
public void setStatus(String status) {
this.status = status;
}
}
|
Effectiveness of treatment with surfactant in premature infants with respiratory failure and pulmonary infection. INTRODUCTION Surfactant inactivation is present in neonatal pneumonia. MATERIALS AND METHODS One hundred thirty-nine preterm babies with birth weight (BW) ≤ 1250 grams were studied and subdivided into two groups: the RDS Group, with a diagnosis of "simple" RDS (N = 80), and the RDS with Pneumonia Group, consisting of babies with a diagnosis of RDS and a positive BALF culture in the first 24-48 h of life (N = 59). OUTCOMES Surfactant administration seems less effective in the latter group, because a significantly higher number of infants needed a second dose of surfactant, compared to the patients suffering from RDS alone. (www.actabiomedica.it). |
Ectopic pregnancy in the liver. Report of a case and angiographic findings. A 23-year-old woman underwent laparotomy due to physical signs of intra-abdominal bleeding. A 3 X 3 cm bleeding mass adherent to the liver surface was found. Microscopic examination of the removed encapsulated tumour demonstrated an ectopic pregnancy of the liver. Angiography performed on the 10th postoperative day showed a hypervascular lesion in the right liver lobe. The angiographic findings are similar to those previously described in cases of tubal pregnancies. |
Despite the criticism of FC Barcelona's academy, it continues to be rated highly across the world. A sign of that is that important clubs from across Europe are regularly interested in players with a La Masia background.
Josep Capdevila
In some cases, transfers end up being done, in others, there isn't so much activity, but the moves for the players are constant.
The most recent confirmed move is that of Mauodo Diallo, who is now a Roma player. Born in Senegal 15 years ago, he has been at Barça for the last four seasons.
He's a defender who has played to a good level, but he spent last season injured, halting his development at the Catalan club and leading to him missing minutes when he did return.
That prompted his agent, Nunzio Marchione, to look for a new club. And the chosen destination, from a range of offers, has been Roma.
Diallo has signed for three years with the Italian club and will play in the club's Primavera team.
At Roma, he will join another former Barça player, the Paraguayan Tony Sanabria, who is now 19. |
ASHWAUBENON, Wis. (AP) - Cameron Morse had 27 points to help Youngstown State beat Green Bay 92-89 on Friday night, the Penguins’ first road win against the Phoenix since 2003.
Francisco Santiago gave Youngstown State (9-12, 3-5 Horizon) the lead for good at 88-85 on a 3-pointer with 22 seconds left. Kerem Kanter cut the deficit to one for Green Bay (11-8, 5-2), but Brett Frantz made all four free throws to secure it for the Penguins. Trevor Anderson missed a 3-point attempt at the buzzer that would have tied it.
The Penguins had the largest lead of the second half at 64-58 and neither team led by more than four points over the final 13 minutes.
Youngstown State had all five starters reach double figures. Jorden Kaufman and Frantz had 17 points each, Braun Hartfield scored 12 and Santiago had 10.
Kanter led the Phoenix with 21 points and Charles Cooper scored 18. |
import java.util.Scanner;
public class Vita {
public static void main(String... strings) {
Scanner sc = new Scanner(System.in);
int no = sc.nextInt();
int[] val = new int[no];
for (int i = 0; i < no; i++) {
val[i] = sc.nextInt();
}
        if (val[no - 1] == 0 || (no >= 2 && val[no - 1] > val[no - 2] && val[no - 1] != 15)) {
            System.out.println("UP");
        } else if (val[no - 1] == 15 || (no >= 2 && val[no - 1] < val[no - 2] && val[no - 1] != 0)) {
System.out.println("DOWN");
} else {
System.out.println("-1");
}
return;
}
}
|
/**
 * Null-safe conversion from {@code java.util.Date} to {@code java.sql.Date}.
 * @param date the util date to convert, may be null
 * @return the sql date, or null if the input was null
 */
public static java.sql.Date toSqlDate(Date date) {
if (date == null) {
return null;
}
return new java.sql.Date(date.getTime());
} |
Acupuncture for Chronic Neck Pain - a Cohort Study in an NHS Pain Clinic The study investigates the outcome of acupuncture for chronic neck pain in a cohort of patients referred to an NHS chronic pain clinic. One hundred and seventy two patients were selected for acupuncture over a period of 6.5 years. Treatment was given by a single acupuncturist and consisted of a course of needle acupuncture for an average of seven sessions per patient. Treatment outcome was measured by an oral rating scale of improvement at the end of treatment and at follow up six months and one year after treatment. Nineteen patients were withdrawn from treatment for various reasons, two for adverse events. One hundred and fifty three patients were evaluated, of whom 68% had a successful outcome from acupuncture, reporting an improvement in pain of at least 50%. The success rate was higher in patients with a short duration of pain: 85% in patients with pain for up to three months and 78% with pain for up to six months. Long-term follow up showed that 49% of the patients who completed treatment had maintained the benefit after six months, and 40% at one year. The results indicate that acupuncture can be an effective treatment for selected patients with chronic neck pain. |
package promotion
import (
"encoding/xml"
"github.com/bububa/opentaobao/model"
)
/*
Query coupon information received by the buyer in the related app APIResponse
taobao.promotion.coupon.buyer.search
Query coupon information received by the buyer in the related app
*/
type TaobaoPromotionCouponBuyerSearchAPIResponse struct {
model.CommonResponse
TaobaoPromotionCouponBuyerSearchResponse
}
type TaobaoPromotionCouponBuyerSearchResponse struct {
XMLName xml.Name `xml:"promotion_coupon_buyer_search_response"`
RequestId string `json:"request_id,omitempty" xml:"request_id,omitempty"` // unique identifier issued by the platform for each request
// result code
ResultCode string `json:"result_code,omitempty" xml:"result_code,omitempty"`
// error message
ErrorMsg string `json:"error_msg,omitempty" xml:"error_msg,omitempty"`
// whether the invocation succeeded
InvokeResult bool `json:"invoke_result,omitempty" xml:"invoke_result,omitempty"`
// result set
BuyerCouponInfos []BuyerCouponInfo `json:"buyer_coupon_infos,omitempty" xml:"buyer_coupon_infos>buyer_coupon_info,omitempty"`
// total number of matching records, used for pagination
TotalCount int64 `json:"total_count,omitempty" xml:"total_count,omitempty"`
}
|
The present invention relates to a semiconductor device technology and, particularly to a technology which is effective when applied to a semiconductor device having transistors in which gate insulating films have different thicknesses.
In semiconductor devices, there are used integrated circuits each formed of elements having various characteristics, formed over a semiconductor substrate, and electrically coupled to each other with wiring. Integrated circuits include a logic circuit for control, a driving circuit, a memory circuit for storing information, and the like. To allow these integrated circuits to perform desired functions, types of semiconductor elements forming the integrated circuits, a wiring method, and the like are designed.
Examples of the semiconductor elements forming the integrated circuits include a field effect transistor (FET), and the like. The field effect transistor mostly has a metal insulator semiconductor (MIS) structure in which a gate electrode is formed over a semiconductor substrate via an insulating film. Note that, in the case of using a silicon dioxide film or the like as the insulating film, the resulting structure is called a metal oxide semiconductor (MOS) structure. Such MIS field effect transistors (hereinafter simply referred to as MIS transistors) are covered with an interlayer insulating film over the semiconductor substrate, and individually insulated. In addition, contact plugs are formed so as to extend through the interlayer insulating film to be electrically coupled to the terminals of the semiconductor elements. Over the interlayer insulating film, such metal wires as to electrically couple the desired contact plugs to each other are formed.
Examples of semiconductor devices examined by the present inventors include an LCD driver which is a driving semiconductor device for causing a liquid crystal display (LCD) to perform a display operation. The LCD driver has integrated circuits having various functions such as an operation control circuit, a main memory circuit, a nonvolatile memory circuit, and a power source control circuit which are mounted over one chip. Thus, the LCD driver is formed of MIS transistors having various characteristics. In particular, there are a MIS transistor which satisfies a high-speed-operation requirement, a MIS transistor which satisfies a high-breakdown-voltage requirement, a MIS transistor which serves as a component of a nonvolatile memory, and the like.
The MIS transistors that satisfy the respective requirements shown above have gate insulating films of different thicknesses. Qualitatively, a MIS transistor having a thinner gate insulating film is capable of higher-speed operation, while a MIS transistor having a thicker gate insulating film is capable of operation with a higher voltage. In the LCD driver examined by the present inventors, MIS transistors having gate insulating films which differ in thickness in the range of 2 to 100 nm are used in accordance with required characteristics. As a result, the LCD driver examined by the present inventors has a structure including gates of different heights over a semiconductor substrate.
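As a rough back-of-the-envelope illustration of the thickness trade-off described above (not part of the patent text): the gate-oxide capacitance per unit area scales as 1/t_ox, so the 2 nm and 100 nm extremes quoted for the LCD driver differ by a factor of 50 in C_ox, which is one reason thin-oxide MIS transistors switch faster while thick-oxide ones tolerate higher voltages. The permittivity values below are textbook constants for SiO2:

```python
# Back-of-the-envelope only: gate-oxide capacitance per unit area,
# C_ox = eps_0 * eps_r / t_ox, for the 2-100 nm thickness range cited above.
EPS_0 = 8.854e-12      # vacuum permittivity, F/m
EPS_R_SIO2 = 3.9       # relative permittivity of thermally grown SiO2

def oxide_capacitance_per_area(t_ox_nm):
    """Return C_ox in F/m^2 for an oxide thickness given in nanometres."""
    return EPS_0 * EPS_R_SIO2 / (t_ox_nm * 1e-9)

for t_ox in (2, 10, 100):
    c = oxide_capacitance_per_area(t_ox)
    print(f"t_ox = {t_ox:3d} nm -> C_ox = {c:.2e} F/m^2")
```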
For example, in Japanese Unexamined Patent Publication No. 2004-235313 (Patent Document 1), a technology is disclosed which forms a bird's beak of a desired size in each of the end portions of a gate insulating film covering an active region defined by an isolation portion. This allows the provision of a semiconductor device having a gate insulating film with excellent electrical characteristics.
For example, in Japanese Unexamined Patent Publication No. 2005-197652 (Patent Document 2), a technology is disclosed which forms an oxide film for high-voltage element in a high-voltage-element region, and then adjusts a pad-nitride-film strip step in a low-voltage-element/cell region to reduce the height of the oxide film for high-voltage element. This allows a reduction in the level difference between the high-voltage-element region and the low-voltage-element/cell region.
In Japanese Unexamined Patent Publication No. 2008-16499 (Patent Document 3), a technology is disclosed which performs a plasma nitridation process to inhibit the thermal oxidation, by a predetermined thermal oxidation process, of a region of a semiconductor substrate located in a second region, thereby promoting the thermal oxidation, by a predetermined thermal oxidation process, of a region of the semiconductor substrate located in a first region to a position deeper than that reached by the thermal oxidation of the second region. As a result, the position of the upper surface of a first oxide film becomes closer to that of the upper surface of a second oxide film thinner than the first oxide film, and the level difference between the first region and the second region can be significantly reduced.
[Prior Art Documents]
[Patent Documents]
[Patent Document 1]
Japanese Unexamined Patent Publication No. 2004-235313
[Patent Document 2]
Japanese Unexamined Patent Publication No. 2005-197652
[Patent Document 3]
Japanese Unexamined Patent Publication No. 2008-16499 |
if __name__ == "__main__":
    n, d = map(int, input().split())
    arr = list(map(int, input().split()))
    kol = 0
    for i in range(1, len(arr)):
        if arr[i - 1] >= arr[i]:
            # smallest number of d-steps that lifts arr[i] strictly above arr[i-1]
            plus = (arr[i - 1] - arr[i]) // d + 1
            kol += plus
            arr[i] += d * plus
    print(kol) |
Nonlocal correlations in the vicinity of the $\alpha$-$\gamma$ phase transition in iron within a DMFT plus spin-fermion model approach We consider nonlocal correlations in iron in the vicinity of the $\alpha$-$\gamma$ phase transition within the spin-rotationally-invariant dynamical mean-field theory (DMFT) approach, combined with the recently proposed spin-fermion model of iron. The obtained nonlocal corrections to DMFT yield a decrease of the Curie temperature of the $\alpha$ phase, leading to an agreement with its experimental value. We show that the corresponding nonlocal corrections to the energy of the $\alpha$ phase are crucially important to obtain the proximity of energies of $\alpha$ and $\gamma$ phases in the vicinity of the iron $\alpha$-$\gamma$ transformation. Introduction. Iron is one of the substances known from ancient times. Many technologically important applications of iron and its alloys, such as producing steels, are connected with the structural transition between the $\alpha$ phase with a body-centered cubic (bcc) lattice and the $\gamma$ phase with a face-centered cubic (fcc) lattice. In pure iron this transition occurs in the paramagnetic region at 1185 K, slightly above the Curie temperature of 1043 K. The theoretical description of this transition is important from both fundamental and practical points of view. The ground-state properties of the $\alpha$ and $\gamma$ phases were extensively studied 1 by density functional theory (DFT) methods, in particular the local density approximation (LDA) and the generalized gradient approximation (GGA); the disordered local moment (DLM) approach 2 was applied to simulate the paramagnetic state by randomly distributed magnetic moments. The energies of various phases were compared and the respective correct values of magnetic moments at zero temperature were obtained within these studies 1,3,4.
The combination of these methods with the Heisenberg model gave a possibility to treat magnetic correlations (also at finite temperature), and provided an accurate value for the Curie temperature of bcc Fe 5,6, its thermodynamic properties 7,8, and magnon-phonon coupling 9. This combination also resulted in an accurate value for the $\alpha$-$\gamma$ transition temperature as a function of carbon concentration 8. Despite these successes, the described methods do not consider important local correlations in iron, and, therefore, do not provide a comprehensive view on the $\alpha$-$\gamma$ transition. To treat the effect of local correlations we apply in the present paper the combination of dynamical mean-field theory (DMFT) 10 with density functional theory (DFT) methods, usually called LDA+DMFT 11. Previous studies by LDA+DMFT allowed one to obtain the correct values of magnetic moments in the $\alpha$ and $\gamma$ phases, the linear behavior of the temperature dependence of the inverse local ($\alpha$, $\gamma$ phases) and uniform magnetic 12,17,18 ($\alpha$ phase) susceptibilities, and revealed the non-monotonic temperature dependence of the inverse uniform magnetic susceptibility in the $\gamma$ phase in a broad temperature range 14. In most of these studies the Curie temperature of the $\alpha$ phase was found, however, to be substantially overestimated. As a result, the description of the magnetization 12, the temperature of the $\alpha$-$\gamma$ transition 19, and the phonon spectra 20 was provided in units of the calculated Curie temperature. The overestimation of the Curie temperature mainly comes from the DMFT part and is due to using the approximate (density-density) form of the Coulomb interaction 16,17 and neglecting nonlocal correlations in DMFT. To solve the former problem, we apply in the present study the spin-rotationally-invariant DMFT approach 17.
Although the nonlocal corrections to DMFT can be taken into account using, e.g., the dynamic vertex approximation 21, the dual fermion approach 22, or cluster methods 23, these approaches are too computationally expensive to be applied to real multiorbital compounds at the moment. For iron, the nonlocal degrees of freedom can be described within the effective Heisenberg model, which was combined previously with DFT approaches in Refs. 6-9. However, a derivation of this model from microscopic principles, and its combination with a treatment of local correlations within LDA+DMFT, was not considered previously. In the present paper we address the microscopic derivation of an effective Heisenberg model in the presence of local moments and calculate the nonlocal correction to the energy of the $\alpha$ phase near the magnetic phase transition. We show that this correction is crucially important to compare the energies of the $\alpha$ and $\gamma$ phases near the structural phase transition without adjustable parameters. Let us turn first to the LDA+DMFT part. We performed DFT calculations using the full-potential linearized augmented plane-wave method implemented in the ELK code supplemented by the Wannier function projection procedure (Exciting-plus code 24). The Perdew-Burke-Ernzerhof form 25 of GGA was considered. The calculations were carried out with the experimental lattice constant a = 2.91 Å for the $\alpha$ phase in the vicinity of the $\alpha$-$\gamma$ transition 26. The lattice constant for the $\gamma$ phase was set to keep the experimental volume of the unit cell of the $\gamma$ phase. The integration in the reciprocal space was performed using an 18×18×18 k-point mesh. The convergence threshold for the total energy was set to $10^{-6}$ Ry. From the converged DFT results we constructed effective Hamiltonians in the basis of Wannier functions, which were built as a projection of the original Kohn-Sham states to site-centered localized functions as described in Ref. 27, considering spd states.
This differentiates the present approach from that of Ref. 19, where only sd states were taken into account. The difference of DFT total energies obtained in our non-magnetic calculations for the $\alpha$ and $\gamma$ phases is 0.280 eV/at, in agreement with previous DFT studies 19,28,29 resulting in values from 0.24 to 0.3 eV/at. The effect of local correlations is considered within the DMFT approach of Ref. 17, applied to the Hamiltonian $\hat{H} = \hat{H}^{\rm WF}_{\rm DFT} + \hat{H}_{\rm Coul} - \hat{H}_{\rm DC}$, where $\hat{H}^{\rm WF}_{\rm DFT}$ is the effective Hamiltonian in the basis of Wannier functions constructed for states near the Fermi level, $\hat{H}_{\rm Coul}$ is the on-site Coulomb interaction Hamiltonian, and $\hat{H}_{\rm DC}$ is the double-counting correction. This correction was considered in the fully localized limit and had the form $\hat{H}_{\rm DC} = \bar{U}(n_d^{\rm DMFT} - 1/2)$, where $n_d^{\rm DMFT}$ is the number of d electrons in DMFT, and $\bar{U}$ is the average Coulomb interaction in the d shell. We choose the on-site Coulomb and Hund interaction parameters $U \equiv F^0 = 4$ eV and $J_S \equiv (F^2 + F^4)/14 = 0.9$ eV, where $F^0$, $F^2$, and $F^4$ are the Slater integrals, as obtained in Ref. 30 by the constrained density functional theory (cDFT) in the basis of spd Wannier functions. From the uniform magnetic susceptibility of the $\alpha$ phase, we extracted the effective local moment $\mu^2_{{\rm eff},\alpha} = 2.7\mu^2_B$ and the Curie temperature $T_C^{\alpha,{\rm DMFT}} = 1400$ K, in agreement with a previous study 17. As in this study, we expect that the Curie temperature is weakly dependent on Hubbard U and is more sensitive to the Hund's coupling. The DMFT results for the energies are shown in Fig. 1. One can see that the energy of the $\gamma$ phase strongly increases with decreasing temperature. Looking at the partial contributions from kinetic and potential energies, one can see that the increase of the energy of the $\gamma$ phase with decreasing T is due to the strong increase of the kinetic energy, while the potential energy expectedly decreases, reflecting the increase of the instantaneous magnetic moment $\langle \mathbf{S}^2_i \rangle_\gamma$ (Ref. 14).
Although the energy of the γ phase also increases with decreasing T, it saturates in the temperature range 1000−1500 K. Moreover, inspection of the kinetic and potential energies shows tendencies opposite to those in the α phase: the mentioned increase of the total energy upon cooling is provided by a strong increase of the potential energy and a weaker decrease of the kinetic energy. The increase of the potential energy reflects a decrease of the instantaneous moment ⟨S²_i⟩_γ and, compared to the opposite tendency of the α phase, provides a mechanism of stabilization of the α phase at low T. However, this mechanism is not the only contribution, and at the level of DMFT the γ phase is "protected" by the respective decrease of the kinetic energy in the γ phase and its increase in the α phase. Nonlocal corrections. To calculate the nonlocal corrections to the Curie temperature and to the energy of the α phase, we treat the effect of local moments on the energy within the spin-fermion model of Ref. 15, supplemented by a soft spin constraint, where i, l are site and orbital indices, Ĥ = Ĥ_DFT^WF, Σ_ll′ are the DMFT self-energies, S_i corresponds to the spin of the local-moment degrees of freedom, s_i = Σ_l c†_il σ c_il to the spin of the itinerant degrees of freedom, and σ are the Pauli matrices. The first and second lines in Eq. describe the propagation of itinerant electrons and the dynamics of the local moments, the third line corresponds to their interaction via the Hund exchange J_K ≃ (5/7)J_S in the Kanamori parameterization, and the fourth line adds a spin constraint on the local moments, which restricts the size of the moment. This model can be considered as a simplified version of the multiorbital model studied by LDA+DMFT, where the major effect of correlations, the formation of the local moments, is incorporated in the local variables S. The bare local moment propagator is taken as in Ref. 15; χ_loc(iω_n) is the dynamic local susceptibility and χ_irr ≈ 2 μ²_B/eV is the static local two-particle irreducible susceptibility in the α phase.
Decoupling the four-spin interaction in the soft-constraint part of Eq. within bosonic mean-field theory (which implies neglecting critical fluctuations near T_C), we obtain a quadratic constraint term. To second order in J_K we find the corresponding effective model for the spin degrees of freedom (cf. Ref. 15), where χ_q is the static two-particle irreducible susceptibility, which can be calculated as a bubble constructed from the itinerant Green functions 15. The determination of the function λ(T, iω_n) is a rather complicated problem, since it requires knowledge of the S²-S² interaction potential in Eq. We fix its static component by the equality of the obtained static part of the on-site spin correlation function to that obtained in DMFT, μ²_eff,α = 3T χ_loc; the latter is found to be almost temperature independent (contrary to the instantaneous moment ⟨S²_i⟩) in a broad temperature range 14. The corresponding condition involves λ₀ = 4μ²_B χ⁻¹_loc + λ(T, 0). The equation is analogous to the one obtained in the (static) spherical approximation to the classical Heisenberg model. Indeed, this model, treated in the spherical approximation, yields an action of the same form and the corresponding condition, Eq., with μ²_eff,α = 4μ²_B ⟨S²_i⟩_Heis. The equation is also essentially equivalent to the static limit of Eq. up to the local contribution, which does not depend on S_q. The Curie temperature is determined by the vanishing of the gap of the paramagnon spectrum in the same static approximation. In the following we assume the nearest-neighbor approximation J_q = 8J cos(q_x/2) cos(q_y/2) cos(q_z/2), as justified in Refs. 15 and 29. The exchange integral J can be extracted from the Curie temperature without nonlocal corrections (i.e., in DMFT, cf. Refs. 14 and 15). Using this, we find T_C < T_C^α,DMFT. The nonlocal contribution to the energy of the α phase is obtained from Eq.
in the static approximation or from Eq. Since 0 < λ(T, 0) < λ(T_C, 0) at T > T_C, the obtained correction is negative, decreasing the energy; the decrease is maximal at T_C. In principle, the same calculation could be applied to obtain the nonlocal correction to the energy of the γ phase. However, since the corresponding Néel temperature is much lower than T_C (see, e.g., Ref. 14), and pronounced corrections are obtained only in the vicinity of the magnetic transition temperature, we do not expect a substantial correction in that case. Using Eq. we find J₀ = 0.20 eV, which is close to the estimates of Refs. 29 and 15. The corresponding Curie temperature with account of nonlocal correlations, T_C = 1005 K, is in good agreement with the experimental data. The resulting temperature dependence of the energy of the α phase is shown in Fig. 2, together with the energies of the α and γ phases obtained in DMFT. One can see that the obtained nonlocal correction to the energy of the α phase compensates the increase of its kinetic energy upon cooling and makes the energies of the α and γ phases very close in the vicinity of the α-γ transition. This demonstrates, on the one hand, that the nonlocal corrections are crucially important for the description of this transition and, on the other hand, that the proposed method is capable of adequately describing the effect of nonlocal correlations. The description of the α-γ transition can be further improved by, e.g., using a more advanced rotationally invariant quantum impurity solver than in our study (see, e.g., Ref. 16). Another improvement can be made by considering free energies. However, at the moment such calculations are too computationally expensive and beyond the scope of the present paper. We note that our results considerably differ from those of Leonov et al. 19, where the α-γ transition was captured by LDA+DMFT with density-density interaction in units of the overestimated Curie temperature (1600 K).
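The quoted reduction of the Curie temperature (from 1400 K in DMFT to 1005 K with nonlocal corrections) can be checked numerically: for the bcc nearest-neighbor dispersion above, the static spherical approximation lowers a mean-field-like Curie temperature by the bcc Watson integral W ≈ 1.393, and 1400 K / 1.393 ≈ 1005 K. A sketch of this check (identifying the DMFT value with the mean-field Curie temperature of the effective Heisenberg model is an assumption made here for illustration):

```python
import numpy as np

# bcc Watson integral W = <1 / (1 - cos x cos y cos z)> averaged over (0, pi)^3,
# which controls the spherical-approximation reduction T_C = T_C^MF / W
# for the dispersion J_q = 8J cos(q_x/2) cos(q_y/2) cos(q_z/2).
n = 120
x = (np.arange(n) + 0.5) * np.pi / n            # midpoint grid avoids the q = 0 singularity
c = np.cos(x)
prod = c[:, None, None] * c[None, :, None] * c[None, None, :]
W = float(np.mean(1.0 / (1.0 - prod)))          # ~1.393 for the bcc lattice

T_C_DMFT = 1400.0                               # K, mean-field-like DMFT value
T_C = T_C_DMFT / W                              # ~1005 K
print(f"W = {W:.3f}, T_C = {T_C:.0f} K")
```

The singularity of the integrand at q = 0 is integrable in three dimensions, so a midpoint grid converges to the known value W ≈ 1.3932 without special treatment.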
Aside from using the rotationally invariant interaction and considering the absolute temperature dependencies, there are some computational details that differ in our study. In particular, (i) we use the all-electron full-potential LAPW method implemented in the ELK code, resulting in a difference of DFT total energies of 0.280 eV, while Leonov et al. used the pseudopotential Quantum ESPRESSO package, leading to 0.244 eV. (ii) We use the spd Wannier function basis, while only sd states were included by Leonov et al. (iii) We use the Hubbard U = 4 eV, while a much smaller value U = 1.8 eV was employed by Leonov et al. Within the above-mentioned methods we found the energy of the α phase to be at least 0.044 eV below the γ phase in the paramagnetic region for the density-density interaction, yielding a Curie temperature of ∼2150 K, which is larger than the 1600 K value obtained by Leonov et al. and is close to the 1900 K obtained by Lichtenstein et al. 12 Previous LDA+DMFT studies of iron indicated that the Curie temperature is weakly dependent on the Hubbard U (Ref. 30). Therefore we expect that this discrepancy is mainly due to the different Wannier function bases. This is supported by the fact that the DMFT calculations by Lichtenstein et al. were performed with the 3d, 4s, and 4p states included in the basis set (not Wannier functions). Since the total energy is a subtle quantity, we suppose that the discrepancy between our results and those of Leonov et al. can be further influenced by the above-mentioned computational differences (i)-(iii), but we consider our calculation to be more accurate in these respects. To shed light on this point, further studies are required. In conclusion, we have presented a method to evaluate the nonlocal correction to the Curie temperature and the energy obtained in DMFT in the presence of local moments by deriving the spherical-approximation results for the effective Heisenberg model from the spin-fermion model.
We have shown that the obtained results yield energies of the α and γ phases that are very close in the vicinity of the α-γ transition, as is necessary to describe the structural phase transition in iron. The work was supported by a grant of the Russian Science Foundation (Project No. 14-22-00004).
"""Main sequence of functions to run algorithm."""
import time
from typing import Dict
import numpy as np
from k_means.core.algorithm import k_means_algorithm
from k_means.core.data_prep import main_data_engineering
from k_means.core.plotting import plot_simulation
from k_means.utils.mapping import Parameters
def run(message: Dict[str, any]) -> None:
"""Main sequence of functions needed to run the algorithm
and plot the results as a simulation. Helps keep the program organized.
Args:
message: see k_means.resources.input
Returns:
None.
"""
print('\nStarting program')
# get parameters from post request message
parameters = Parameters(message)
# preserving random state
np.random.seed(parameters.seed)
# main data engineering is first
t_0 = time.time()
data_eng, parameters = main_data_engineering(parameters)
# main k-means algorithm, return results
results = k_means_algorithm(parameters, data_eng)
print(f'Main data prep and k means algorithm took: {round(time.time() - t_0, 3)}s')
# plot initial distributions, results and simulation of our k means algorithm
plot_simulation(parameters, data_eng, results)
|
Irish racer Alan Bonner has passed away after an incident during today’s Superbike/Supersport qualifying at the Isle of Man TT.
Bonner, 33, from County Meath in the Republic of Ireland, was involved in an incident at the 33rd Milestone as riders completed their opening lap of the two lap session and unfortunately succumbed to his injuries. Red flags were brought out across the course following the crash.
A seasoned competitor on the road racing scene, Bonner made his mountain course debut at the 2014 TT races and achieved his best result just a year later when he crossed the line 15th in the 2015 Senior TT.
He won a bronze replica in Sunday’s Superbike race with a 28th place finish and also finished this morning’s Superstock race in 30th to secure another bronze replica.
Bonner’s quickest lap of 127.090mph, achieved in 2015, earned him the accolade of being the fastest ever TT rider from the Republic of Ireland.
MCN would like to join the wider racing community in expressing our deepest condolences to Alan’s partner Gemma and his family and friends. |
A fraudster who stole more than £40,000 after meeting his victims on social media has had his hearing to claw back some of the cash he stole adjourned.
Mark Grace, 30, of Corton Road, in Lowestoft, was convicted of four counts of fraud and one of theft and was jailed for six years at Norwich Crown Court back in December last year.
The court heard how he got into relationships with the victims after meeting them through social media and dating apps like Plenty of Fish.
He then convinced the women to take out loans and credit for him, promising he would pay them back, but he ended the relationship after receiving the money.
Jude Durr, for Grace, who did not appear at the short hearing, asked for a further adjournment in the case until May 24.
Judge Katharine Moore agreed to the new timetable and adjourned the proceeds of crime hearing. |
Surrounded by yachts, Malcolm Turnbull made his pitch to retirees.
"We've come here to meet the people that Bill Shorten wants to rob," the prime minister said in picturesque Port Macquarie.
More than 3.6 million Aussies are 65 or older, and that number will keep growing as more baby boomers pass retirement age.
That's a lot of votes.
But right now Shorten isn't actively wooing those millions of retirees, many of whom are pensioners who will probably vote Labor regardless.
He's going after almost 16 million voting-age Australians who can't yet retire to a life of overseas cruises, golf, and long lunches.
"What we want to do is make sure that the government has enough money to pay for hospitals, to pay for education," Shorten told reporters.
"Our population is getting older and, whilst we grow older, we need health services more than ever."
The youth strategy worked for Labor in New Zealand, where Jacinda Ardern won an election in part because of her promise to make things fairer for young people.
Shorten's plan to end cash handouts for share investors is just one of his arguments to young and middle-aged Australians struggling to deal with a high-cost economy.
He's also going after negative gearing, which has allowed wealthy people to get significantly more wealthy at taxpayers' expense.
And he's targeting multinational companies which bring in huge revenue but use complex financial arrangements to avoid paying tax.
Meanwhile Turnbull stuck to his "trickle down economics" plan to cut corporate tax.
"We have to have a competitive company tax rate to attract investment, which will drive jobs," Turnbull said.
But younger Australians have lived through 25 years of uninterrupted economic growth, and watched dozens of multinationals set up shop in Australia despite the allegedly high tax rate.
At the same time they've seen their wages stagnate, job security shrivel, and the rise of the gig economy.
And at the banking royal commission they've heard bankers admit they screwed over mortgagees and didn't play by the rules.
"How can you justify a five per cent tax cut to the big banks when there's a royal commission into their misconduct?" Senator Derryn Hinch told Sky News.
Shorten is banking on a mood for change.
The ACTU this week set out ambitions to shift the economy towards letting the people who create value (the workers) enjoy more of the benefits (the capital).
Unions want industry-wide bargaining, more protections for casual workers, and a stronger industrial umpire.
They say flat wage growth is holding back the economy - and the head of the Reserve Bank agrees.
Maybe the mood for change among those 16 million voters will be enough to get Shorten into The Lodge.
Turnbull can cut through to his retiree base who have lived large off Howard-era welfare for 18 years, but how will he go with middle Australia?
Electricity prices are up. Gas prices are up. Private health insurance is up. School fees are up.
Swinging voters are sitting at home wondering how they will pay their bills, what a corporate tax cut will do for them and why should they care if some people lose a tax break.
Turnbull told the story of a couple in their 80s who would lose $3500 a year if Labor's dividend tax scheme was made law.
"Do you know what they said they'd have to do? Cancel their private health insurance," he told reporters.
Families have been going without private health insurance for years, better sorry than safe when they count the cost.
And the federal government has some responsibility for private health premiums, which have risen dramatically in the past decade.
Turnbull's government has been generally pragmatic, and it might be time for him to use that pragmatism once again.
If the mood for change is there, he can make some changes himself rather than let Labor do it.
Because you don't need to be a wealthy retiree to understand 16 million beats 3.6 million, and a Shorten government will have a mandate to make the tax changes it believes people want. |
import { Injectable } from '@angular/core';
import { QueryEntity } from '@datorama/akita';
import { Observable } from 'rxjs';
import { LocalAttendee } from './twilio.model';
import { TwilioStore, TwilioState } from './twilio.store';
@Injectable({ providedIn: 'root' })
export class TwilioQuery extends QueryEntity<TwilioState> {
constructor(protected store: TwilioStore) {
super(store);
}
get localAttendee() {
return this.getEntity('local') as LocalAttendee;
}
selectLocal() {
return this.selectEntity('local') as Observable<LocalAttendee>;
}
}
|
/*
* Copyright 2016 <NAME>
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package com.autsia.codefreeze.impl;
import com.autsia.codefreeze.CodeFreeze;
import com.autsia.codefreeze.impl.callbacks.DelegatingMethodInterceptor;
import com.autsia.codefreeze.impl.callbacks.EqualsMethodInterceptor;
import com.autsia.codefreeze.impl.callbacks.ExceptionMethodInterceptor;
import com.autsia.codefreeze.impl.callbacks.FreezingMethodInterceptor;
import com.autsia.codefreeze.impl.filters.ImmutabilityCallbackFilter;
import com.google.common.collect.ImmutableList;
import com.google.common.collect.ImmutableMap;
import com.google.common.collect.ImmutableSet;
import net.sf.cglib.proxy.Callback;
import net.sf.cglib.proxy.Enhancer;
import net.sf.cglib.proxy.Factory;
import java.lang.reflect.Modifier;
import java.util.*;
import java.util.concurrent.ConcurrentHashMap;
/**
* CGLIB-based implementation
*/
public class CGLIBCodeFreeze implements CodeFreeze {
private ConcurrentHashMap<Class, Factory> factories = new ConcurrentHashMap<>();
private ConcurrentHashMap<Class, Boolean> enhanceableMap = new ConcurrentHashMap<>();
/**
* {@inheritDoc}
*/
public <T> T freeze(T bean) {
if (bean == null) {
return null;
}
try {
if (bean instanceof List) {
return proxifyList((List) bean);
}
if (bean instanceof Set) {
return proxifySet((Set) bean);
}
if (bean instanceof Map) {
return proxifyMap((Map) bean);
}
return proxifyBean(bean);
} catch (InstantiationException | IllegalAccessException e) {
throw new RuntimeException(e);
}
}
/**
* {@inheritDoc}
*/
public boolean isEnhanceable(Class<?> type) {
if (enhanceableMap.containsKey(type)) {
return enhanceableMap.get(type);
}
boolean isEnhanceable = !Modifier.isFinal(type.getModifiers()) && hasParameterlessConstructor(type);
enhanceableMap.putIfAbsent(type, isEnhanceable);
return isEnhanceable;
}
private boolean hasParameterlessConstructor(Class<?> returnType) {
return Collection.class.isAssignableFrom(returnType)
|| Map.class.isAssignableFrom(returnType)
|| Arrays.stream(returnType.getConstructors()).anyMatch(c -> c.getGenericParameterTypes().length == 0);
}
@SuppressWarnings("unchecked")
private <T> T proxifyList(List list) throws InstantiationException, IllegalAccessException {
ImmutableList.Builder<Object> builder = ImmutableList.builder();
list.stream().forEach(bean -> builder.add(freeze(bean)));
return (T) builder.build();
}
@SuppressWarnings("unchecked")
private <T> T proxifySet(Set set) throws InstantiationException, IllegalAccessException {
ImmutableSet.Builder<Object> builder = ImmutableSet.builder();
set.stream().forEach(bean -> builder.add(freeze(bean)));
return (T) builder.build();
}
@SuppressWarnings("unchecked")
private <T> T proxifyMap(Map map) throws InstantiationException, IllegalAccessException {
ImmutableMap.Builder<Object, Object> builder = ImmutableMap.builder();
map.keySet().stream().forEach(key -> builder.put(freeze(key), freeze(map.get(key))));
return (T) builder.build();
}
@SuppressWarnings("unchecked")
private <T> T proxifyBean(T bean) throws IllegalAccessException, InstantiationException {
if (!isEnhanceable(bean.getClass())) {
return bean;
}
Factory factory = getFactory(bean.getClass());
Callback[] callbacks = getCallbacks(bean);
T newInstance = (T) factory.newInstance(callbacks);
ExceptionMethodInterceptor exceptionCallback = (ExceptionMethodInterceptor) callbacks[1];
// By default the ExceptionMethodInterceptor is not active to allow calling setters in class constructor
// After class instance creation it should be immediately activated
exceptionCallback.setActive(true);
return newInstance;
}
private <T> Callback[] getCallbacks(T bean) {
return new Callback[]{
new EqualsMethodInterceptor(bean),
new ExceptionMethodInterceptor(bean),
new DelegatingMethodInterceptor(bean),
new FreezingMethodInterceptor(this, bean)
};
}
private Factory getFactory(Class<?> classToProxify) throws IllegalAccessException, InstantiationException {
if (factories.containsKey(classToProxify)) {
return factories.get(classToProxify);
}
Factory factory = createFactory(classToProxify);
factories.putIfAbsent(classToProxify, factory);
return factory;
}
private Factory createFactory(Class<?> classToProxify) throws IllegalAccessException, InstantiationException {
Enhancer enhancer = new Enhancer();
enhancer.setSuperclass(classToProxify);
enhancer.setCallbackFilter(new ImmutabilityCallbackFilter(this, classToProxify));
enhancer.setCallbackTypes(new Class[]{
EqualsMethodInterceptor.class,
ExceptionMethodInterceptor.class,
DelegatingMethodInterceptor.class,
FreezingMethodInterceptor.class
});
Class proxyClass = enhancer.createClass();
return (Factory) proxyClass.newInstance();
}
}
|
// This file automatically generated by create_export_module.py
#define NO_IMPORT_ARRAY
#include <NumpyEigenConverter.hpp>
#include <boost/cstdint.hpp>
void import_D_D_uchar()
{
NumpyEigenConverter<Eigen::Matrix< boost::uint8_t, Eigen::Dynamic, Eigen::Dynamic > >::register_converter();
}
void import_D_1_uchar()
{
NumpyEigenConverter<Eigen::Matrix< boost::uint8_t, Eigen::Dynamic, 1 > >::register_converter();
}
void import_D_2_uchar()
{
NumpyEigenConverter<Eigen::Matrix< boost::uint8_t, Eigen::Dynamic, 2 > >::register_converter();
}
void import_D_3_uchar()
{
NumpyEigenConverter<Eigen::Matrix< boost::uint8_t, Eigen::Dynamic, 3 > >::register_converter();
}
void import_D_4_uchar()
{
NumpyEigenConverter<Eigen::Matrix< boost::uint8_t, Eigen::Dynamic, 4 > >::register_converter();
}
void import_D_5_uchar()
{
NumpyEigenConverter<Eigen::Matrix< boost::uint8_t, Eigen::Dynamic, 5 > >::register_converter();
}
void import_D_6_uchar()
{
NumpyEigenConverter<Eigen::Matrix< boost::uint8_t, Eigen::Dynamic, 6 > >::register_converter();
}
void import_1_D_uchar()
{
NumpyEigenConverter<Eigen::Matrix< boost::uint8_t, 1, Eigen::Dynamic > >::register_converter();
}
void import_1_1_uchar()
{
NumpyEigenConverter<Eigen::Matrix< boost::uint8_t, 1, 1 > >::register_converter();
}
void import_1_2_uchar()
{
NumpyEigenConverter<Eigen::Matrix< boost::uint8_t, 1, 2 > >::register_converter();
}
void import_1_3_uchar()
{
NumpyEigenConverter<Eigen::Matrix< boost::uint8_t, 1, 3 > >::register_converter();
}
void import_1_4_uchar()
{
NumpyEigenConverter<Eigen::Matrix< boost::uint8_t, 1, 4 > >::register_converter();
}
void import_1_5_uchar()
{
NumpyEigenConverter<Eigen::Matrix< boost::uint8_t, 1, 5 > >::register_converter();
}
void import_1_6_uchar()
{
NumpyEigenConverter<Eigen::Matrix< boost::uint8_t, 1, 6 > >::register_converter();
}
void import_2_D_uchar()
{
NumpyEigenConverter<Eigen::Matrix< boost::uint8_t, 2, Eigen::Dynamic > >::register_converter();
}
void import_2_1_uchar()
{
NumpyEigenConverter<Eigen::Matrix< boost::uint8_t, 2, 1 > >::register_converter();
}
void import_2_2_uchar()
{
NumpyEigenConverter<Eigen::Matrix< boost::uint8_t, 2, 2 > >::register_converter();
}
void import_2_3_uchar()
{
NumpyEigenConverter<Eigen::Matrix< boost::uint8_t, 2, 3 > >::register_converter();
}
void import_2_4_uchar()
{
NumpyEigenConverter<Eigen::Matrix< boost::uint8_t, 2, 4 > >::register_converter();
}
void import_2_5_uchar()
{
NumpyEigenConverter<Eigen::Matrix< boost::uint8_t, 2, 5 > >::register_converter();
}
void import_2_6_uchar()
{
NumpyEigenConverter<Eigen::Matrix< boost::uint8_t, 2, 6 > >::register_converter();
}
void import_3_D_uchar()
{
NumpyEigenConverter<Eigen::Matrix< boost::uint8_t, 3, Eigen::Dynamic > >::register_converter();
}
void import_3_1_uchar()
{
NumpyEigenConverter<Eigen::Matrix< boost::uint8_t, 3, 1 > >::register_converter();
}
void import_3_2_uchar()
{
NumpyEigenConverter<Eigen::Matrix< boost::uint8_t, 3, 2 > >::register_converter();
}
void import_3_3_uchar()
{
NumpyEigenConverter<Eigen::Matrix< boost::uint8_t, 3, 3 > >::register_converter();
}
void import_3_4_uchar()
{
NumpyEigenConverter<Eigen::Matrix< boost::uint8_t, 3, 4 > >::register_converter();
}
void import_3_5_uchar()
{
NumpyEigenConverter<Eigen::Matrix< boost::uint8_t, 3, 5 > >::register_converter();
}
void import_3_6_uchar()
{
NumpyEigenConverter<Eigen::Matrix< boost::uint8_t, 3, 6 > >::register_converter();
}
void import_4_D_uchar()
{
NumpyEigenConverter<Eigen::Matrix< boost::uint8_t, 4, Eigen::Dynamic > >::register_converter();
}
void import_4_1_uchar()
{
NumpyEigenConverter<Eigen::Matrix< boost::uint8_t, 4, 1 > >::register_converter();
}
void import_4_2_uchar()
{
NumpyEigenConverter<Eigen::Matrix< boost::uint8_t, 4, 2 > >::register_converter();
}
void import_4_3_uchar()
{
NumpyEigenConverter<Eigen::Matrix< boost::uint8_t, 4, 3 > >::register_converter();
}
void import_4_4_uchar()
{
NumpyEigenConverter<Eigen::Matrix< boost::uint8_t, 4, 4 > >::register_converter();
}
void import_4_5_uchar()
{
NumpyEigenConverter<Eigen::Matrix< boost::uint8_t, 4, 5 > >::register_converter();
}
void import_4_6_uchar()
{
NumpyEigenConverter<Eigen::Matrix< boost::uint8_t, 4, 6 > >::register_converter();
}
void import_5_D_uchar()
{
NumpyEigenConverter<Eigen::Matrix< boost::uint8_t, 5, Eigen::Dynamic > >::register_converter();
}
void import_5_1_uchar()
{
NumpyEigenConverter<Eigen::Matrix< boost::uint8_t, 5, 1 > >::register_converter();
}
void import_5_2_uchar()
{
NumpyEigenConverter<Eigen::Matrix< boost::uint8_t, 5, 2 > >::register_converter();
}
void import_5_3_uchar()
{
NumpyEigenConverter<Eigen::Matrix< boost::uint8_t, 5, 3 > >::register_converter();
}
void import_5_4_uchar()
{
NumpyEigenConverter<Eigen::Matrix< boost::uint8_t, 5, 4 > >::register_converter();
}
void import_5_5_uchar()
{
NumpyEigenConverter<Eigen::Matrix< boost::uint8_t, 5, 5 > >::register_converter();
}
void import_5_6_uchar()
{
NumpyEigenConverter<Eigen::Matrix< boost::uint8_t, 5, 6 > >::register_converter();
}
void import_6_D_uchar()
{
NumpyEigenConverter<Eigen::Matrix< boost::uint8_t, 6, Eigen::Dynamic > >::register_converter();
}
void import_6_1_uchar()
{
NumpyEigenConverter<Eigen::Matrix< boost::uint8_t, 6, 1 > >::register_converter();
}
void import_6_2_uchar()
{
NumpyEigenConverter<Eigen::Matrix< boost::uint8_t, 6, 2 > >::register_converter();
}
void import_6_3_uchar()
{
NumpyEigenConverter<Eigen::Matrix< boost::uint8_t, 6, 3 > >::register_converter();
}
void import_6_4_uchar()
{
NumpyEigenConverter<Eigen::Matrix< boost::uint8_t, 6, 4 > >::register_converter();
}
void import_6_5_uchar()
{
NumpyEigenConverter<Eigen::Matrix< boost::uint8_t, 6, 5 > >::register_converter();
}
void import_6_6_uchar()
{
NumpyEigenConverter<Eigen::Matrix< boost::uint8_t, 6, 6 > >::register_converter();
}
|
package com.test;
public class C implements B {
}
|
Natural Attribute-based Shift Detection
Despite the impressive performance of deep networks in vision, language, and healthcare, unpredictable behavior on samples drawn from a distribution different from the training distribution causes severe problems in deployment. For better reliability of neural-network-based classifiers, we define a new task, natural attribute-based shift (NAS) detection, to detect the samples shifted from the training distribution by some natural attribute such as age of subjects or brightness of images. Using the natural attributes present in existing datasets, we introduce benchmark datasets in the vision, language, and medical domains for NAS detection. Further, we conduct an extensive evaluation of prior representative out-of-distribution (OOD) detection methods on NAS datasets and observe an inconsistency in their performance. To understand this, we provide an analysis of the relationship between the location of NAS samples in the feature space and the performance of distance- and confidence-based OOD detection methods. Based on the analysis, we split NAS samples into three categories and further suggest a simple modification to the training objective to obtain an improved OOD detection method that is capable of detecting samples from all NAS categories.
INTRODUCTION
Deep learning has significantly improved the performance in various domains such as computer vision, natural language processing, and healthcare. However, it has been reported that deep classifiers make unreliable predictions on samples drawn from a different distribution than the training distribution (Hendrycks & Gimpel, 2017). Especially, this problem can become severe when the test distribution is shifted from the training distribution by some attributes (e.g., age of subjects or brightness of images), as such a shift could gradually degrade the classifier performance until its malfunction is explicitly visible.
These shifts occur in the real world as a result of a change in a specific attribute. For example, a clinical text-based diagnosis classifier trained in 2021 will gradually encounter increasingly shifted samples as time flows, since writing styles change and new terms are introduced over time. Detection of such samples is a vital task, especially in safety-critical systems such as autonomous vehicle control or medical diagnosis, where wrong predictions can lead to dire consequences. To this end, we take a step forward by proposing a new task of detecting samples shifted by a natural attribute (e.g., age, time) that can easily be observed in the real-world setting. We refer to such shifts as Natural Attribute-based Shifts (NAS), and the task of detecting them as NAS detection. Detection of NAS is both different from, and also more challenging than, out-of-distribution (OOD) detection (Hendrycks & Gimpel, 2017; Sastry & Oore, 2020), which typically evaluates the detection methods with clearly distinguished in-distribution (ID) and OOD samples (e.g., CIFAR10 as ID and SVHN as OOD, which have disjoint labels). In contrast, we aim to detect samples from a natural attribute-based shift within the same label space. Since NAS samples share more features with the ID than typical OOD samples do, identifying the former is expected to be more challenging than the latter. Although OOD detection has some relevance to NAS detection, a comprehensive evaluation of the existing OOD detection methods on natural attribute-based shifts is an unexplored territory. Therefore, in this paper, we perform an extensive evaluation of representative OOD methods on NAS samples. Depending on the task environment, NAS detection can be pursued in parallel to domain generalization (Gulrajani & Lopez-Paz, 2020), which aims to overcome domain shifts (e.g., an image classifier adapting to sketches, photos, art paintings, etc.).
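As a point of reference for the evaluation discussed here, the simplest confidence-based OOD detector (maximum softmax probability; Hendrycks & Gimpel, 2017) scores each input by the largest softmax output of the classifier and flags low-confidence inputs as potentially shifted. A minimal sketch on raw logits follows; the toy logit values and the 0.5 threshold are illustrative choices, not numbers from the paper:

```python
import numpy as np

def msp_score(logits: np.ndarray) -> np.ndarray:
    """Maximum softmax probability per row of a (batch, classes) logit array."""
    z = logits - logits.max(axis=1, keepdims=True)   # shift for numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return p.max(axis=1)

# Toy example: one confident (ID-like) and one ambiguous (shifted-like) sample.
logits = np.array([[5.00, 0.10, -1.00],
                   [0.40, 0.30, 0.35]])
scores = msp_score(logits)
flagged = scores < 0.5   # illustrative threshold on the confidence score
```

The ambiguous second sample ends up flagged while the confident first one passes, which is exactly the behavior a confidence-based detector relies on.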
For example, an X-ray-based diagnosis model should detect images of unusual brightness so that the X-ray machine can be properly configured and the diagnosis model can perform in the optimal setting. In other cases, domain generalization can be preferred, such as when we expect the classifier to be deployed in a less controlled environment (e.g., an online image classifier) for non-safety-critical tasks. In this paper, we formalize NAS detection to enhance the reliability of real-world decision systems. Since there exists no standard dataset for this task, we create new benchmark datasets in the vision, text, and medical domains by adjusting the natural attributes (e.g., age, time, and brightness) of the ID dataset. Then we conduct an extensive evaluation of representative confidence- and distance-based OOD methods on our datasets and observe that none of the methods perform consistently across all NAS datasets. After a careful analysis of where NAS samples reside in the feature space and its impact on the distance- and confidence-based OOD detection performance, we identify the root cause of the inconsistent performance. Following this observation, we define three general NAS categories based on two criteria: the distance between NAS samples and the decision boundary, and the distance between NAS samples and the ID data. Finally, we conduct an additional experiment to demonstrate that a simple modification to the negative log-likelihood training objective can dramatically help the Mahalanobis detector, a distance-based OOD detection method, generalize to all NAS categories. We also compare our results with various baselines and show that our proposed modification outperforms the baselines and is effective across the three NAS datasets. In summary, the contributions of this paper are as follows: We define a new task, Natural Attribute-based Shift detection (NAS detection), which aims to detect the samples from a distribution shifted by some natural attribute.
We create new benchmark datasets and release them to encourage further research on NAS detection. To the best of our knowledge, this is the first work to conduct a comprehensive evaluation of OOD detection methods on shifts based on natural attributes, and we discover that none of the OOD methods perform consistently across all NAS scenarios. We provide a novel analysis relating the location of shifted samples in the feature space to the performance of existing OOD detection methods. Based on this analysis, we split NAS samples into three categories. We demonstrate that a simple yet effective modification to the training objective of deep classifiers enables consistent OOD detection performance across all NAS categories. NATURAL ATTRIBUTE-BASED SHIFT DETECTION We now formalize a new task, NAS detection, which aims to enhance the reliability of real-world decision systems by detecting samples from NAS. We address this task in classification problems. Let D_I = {X, Y} denote the in-distribution data, composed of N training samples with inputs X = {x_1, ..., x_N} and labels Y = {y_1, ..., y_N}. Specifically, x_i ∈ R^d represents a d-dimensional input vector, and y_i ∈ K represents its corresponding label, where K = {1, ..., K} is the set of class labels. The discriminative model f : X → Y learns from the ID dataset D_I to assign label y_i to each x_i. In the NAS detection setting, we assume that an in-distribution sample consists of attributes, some of which can be shifted at test time due to natural causes such as time, age, or brightness. When a particular attribute A (e.g., age), which has a value of a (e.g., 16), is shifted by a degree ε, the shifted distribution can be denoted as D_S^{A=a+ε} = {X', Y'}, where X' = {x'_1, ..., x'_M} and Y' = {y'_1, ..., y'_M} represent the M shifted samples and their labels, respectively.
Importantly, in the NAS setting, although the test distribution is changed from the ID, the label space is preserved as K, the set of class labels in D_I. At test time, the model f might encounter a sample x' from a shifted dataset D_S^{A=a+ε}, and it should be able to identify that the attribute-shifted sample is not from the ID. Figure 1: Facial images from the UTKFace dataset showing the variation with age, and X-ray images with different levels of brightness created from the RSNA Bone Age dataset. NAS DATASET DESCRIPTION In this section, we describe three benchmark datasets, each with a controllable attribute for simulating realistic distribution shifts. Since there exists no standard dataset for NAS detection, we create new benchmark datasets from existing ones by adjusting natural attributes to reflect real-world scenarios. We carefully select datasets from the vision, language, and medical domains containing natural attributes (e.g., year, age, and brightness) that allow us to naturally split the samples. By grouping samples based on these attributes, we can induce natural attribute-based distribution shifts, as described below. Image. We use the UTKFace dataset, which consists of over 20,000 face images with annotations of age, gender, and ethnicity. As shown in Figure 1, facial images visibly vary with age. Therefore, we set the 1,282 facial images of 26-year-olds as D_I. To create the NAS dataset, we vary the age attribute of the UTKFace dataset. To obtain an equal number of samples in each NAS group, age groups with fewer than 200 images are merged until each group has at least 200 samples. This produces 15 groups D_S^{age} for the NAS datasets, varying the age from 25 down to 1 (i.e., D_S^{age=25}, ..., D_S^{age=1}). Text. We use the Amazon Review dataset (He & McAuley, 2016), which contains product reviews from Amazon.
We consider the product category "book" and group its reviews by year to reflect the distributional shift across time. We group reviews by year between 2005 and 2014: the group with 24,000 reviews posted in 2005 is set as D_I, and the 9 groups with reviews after 2005 serve as D_S^{year} (i.e., D_S^{year=2006}, D_S^{year=2007}, ..., D_S^{year=2014}). Each D_S^{year} group contains 1,500 positive reviews and 1,500 negative reviews. We observed that as we move ahead in time, the average review gets shorter and uses more adjectives than in previous years. Due to space constraints, we provide a detailed analysis of the dataset in Section B of the Appendix. Medical. We use the RSNA Bone Age dataset, a real-world dataset that contains left-hand X-ray images of patients along with their gender and age (0 to 20 years). We consider patients in the age group of 10 to 12 years. To reflect the diverse X-ray imaging setups in hospitals, we vary the brightness factor between 0 and 4.5, forming 16 different datasets D_S^{brightness} (i.e., D_S^{brightness=0.0}, D_S^{brightness=0.2}, ..., D_S^{brightness=4.5}); each group contains X-ray images of 200 males and 200 females. Figure 1 presents X-ray images at different brightness levels, showing realistic and continuous distribution shifts. The in-distribution data D_I is composed of 3,000 images with brightness factor 1.0 (unmodified images). CAN OOD DETECTION METHODS ALSO DETECT NAS? In this section, we briefly discuss OOD detection methods and conduct an extensive evaluation of them on our proposed benchmark datasets. OOD DETECTION METHODS In this work, we use three widely used post-hoc and modality-agnostic OOD detection methods: the maximum softmax probability (MSP) (Hendrycks & Gimpel, 2017) and ODIN as confidence-based baselines, and the Mahalanobis detector as a distance-based baseline.
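Before turning to the detectors, note that brightness-shifted groups like D_S^{brightness} above can be generated with a simple multiplicative intensity rescaling. The sketch below assumes images are float arrays in [0, 1] with clipping; this approximates, but may not exactly match, the authors' preprocessing.

```python
import numpy as np

def shift_brightness(images: np.ndarray, factor: float) -> np.ndarray:
    """Scale pixel intensities by `factor`, clipping back to [0, 1]."""
    return np.clip(images * factor, 0.0, 1.0)

def make_nas_groups(images: np.ndarray, factors) -> dict:
    """One shifted dataset D_S^{brightness=f} per brightness factor f."""
    return {float(f): shift_brightness(images, f) for f in factors}

# 16 evenly spaced brightness levels spanning 0 to 4.5 (an assumed spacing)
factors = np.linspace(0.0, 4.5, 16)
```

Factor 1.0 reproduces the unmodified in-distribution images, matching the D_I definition above.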
Note that ODIN and the Mahalanobis detector assume the availability of an OOD validation dataset to tune their hyperparameters. However, for all our experiments, we use variants of these methods that do not access an OOD validation dataset, following Hsu et al. The exact equations and details of how each OOD detection method assigns an OOD score to a given sample are provided in Section A of the Appendix. Figure 2: Comparison of well-known distance-based and confidence-based OOD detection methods for the CE model on our benchmark datasets. Age '26', year '2005', and brightness '1.0' are the in-distribution data in the UTKFace, Amazon Review, and RSNA Bone Age datasets, respectively. The NAS detection performance of these methods is inconsistent across datasets. EXPERIMENTS AND RESULTS We now systematically evaluate the performance of the three OOD detection methods under NAS. We report the AUROC of all OOD detection methods averaged across five random seeds, evaluated on all NAS datasets. Experimental Settings. In the image domain, we train a gender classification model on our UTKFace NAS dataset using a ResNet18 and the cross-entropy loss. In the text domain, we use our Amazon Book Review NAS dataset and train a 4-layer Transformer with the cross-entropy loss for sentiment classification. Lastly, in the medical domain, we use our RSNA Bone Age NAS dataset and train a ResNet18 with the cross-entropy loss to predict gender from the hand X-ray image of a patient. We then evaluate each trained model on the corresponding test set in the image, text, and medical domains, respectively. Further, we evaluate the NAS detection performance of the representative OOD detection methods on their corresponding NAS datasets, which gradually shift with age, year, and brightness, respectively. Results. We present the classification accuracy of the trained models on the ID test sets in Table 2.
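All detection results in this section are reported as AUROC between ID and NAS scores. For reference, the metric can be computed directly from the two score sets with the rank-based (Mann-Whitney) formulation sketched below; this simplified version does not average ranks over ties, which is our shortcut rather than the paper's exact evaluation code.

```python
import numpy as np

def auroc(id_scores: np.ndarray, nas_scores: np.ndarray) -> float:
    """AUROC for separating ID from NAS, where a higher score means
    "more in-distribution"; equals P(score_ID > score_NAS) for a random pair."""
    scores = np.concatenate([id_scores, nas_scores])
    order = scores.argsort()
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)  # 1-based ranks
    n_id, n_nas = len(id_scores), len(nas_scores)
    # Mann-Whitney U statistic for the ID group, normalized to [0, 1]
    u = ranks[:n_id].sum() - n_id * (n_id + 1) / 2
    return float(u / (n_id * n_nas))
```

An AUROC of 1.0 means the detector ranks every ID sample above every NAS sample; 0.5 means chance-level separation.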
We observe that the models trained with the cross-entropy loss obtain high accuracy and perform well on their corresponding tasks. We further examine the effectiveness of the existing representative OOD detection methods on our benchmark datasets in Figure 2. On the UTKFace NAS dataset, samples are detected by ODIN and MSP, the confidence-based methods, but not by the Mahalanobis detector (Figure 2a). On the Amazon Review dataset, NAS samples are detected only by the Mahalanobis detector, while MSP and ODIN fail (Figure 2b); moreover, the scores of the confidence-based methods fall below 50 AUROC. Lastly, Figure 2c shows that inputs from NAS in the RSNA Bone Age dataset are detected well by all three methods. ANALYZING INCONSISTENCY OF OOD DETECTION METHODS In this section, we first study the behavior of NAS samples in the three datasets using PCA visualization. We then analyze the inconsistent performance of the OOD detection methods, considering them in two categories, namely confidence-based and distance-based methods. Lastly, based on the analysis, we conclude this section by defining three NAS categories. ANALYSIS OF THE LOCATION OF NAS SAMPLES As illustrated in Figure 3, we apply principal component analysis (PCA) to the feature representations obtained from the penultimate layer of the models to visualize the movement of NAS samples as we monotonically increase the degree of attribute shift (i.e., age, year, and brightness). Figure 4 further presents the model's prediction confidence across varying degrees of attribute shift. Image. As we gradually change the age, NAS samples move toward the space between the two clusters of ID samples (i.e., the decision boundary), as can be seen in Figure 3. Further, Figure 4a demonstrates that confidence decreases as we increase the degree of attribute shift, indicating that NAS samples move close to the decision boundary.
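The PCA projections used throughout this analysis can be reproduced with a plain SVD over mean-centered penultimate-layer features, as in this sketch (feature extraction itself is model-specific and omitted):

```python
import numpy as np

def pca_project(features: np.ndarray, k: int = 2) -> np.ndarray:
    """Project features onto the top-k principal components.

    The right singular vectors of the centered (N, D) feature matrix
    are the principal axes, ordered by explained variance."""
    centered = features - features.mean(axis=0, keepdims=True)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T
```

Projecting the ID features and each shifted group D_S with the same fitted axes lets one trace the drift of NAS samples as the attribute shift grows.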
Note that the majority of NAS samples still overlap with the ID sample clusters as we change the age. Text. As shown in Figure 3, NAS samples gradually move away from the ID samples (and away from the decision boundary) as the year changes. In contrast to the UTKFace dataset, the confidence gradually increases, as shown in Figure 4b, since the NAS samples move farther from the decision boundary. Medical. Figure 3 demonstrates that as we increase the brightness, NAS samples move to the region between the two classes and also toward the outer edge of the ID sample clusters. Furthermore, as shown in Figure 4c, the relatively decreased confidence indicates that NAS samples lie near the decision boundary as we increase the brightness of the images. COMPARISON BETWEEN CONFIDENCE-BASED AND DISTANCE-BASED OOD DETECTION Confidence-based methods. Figure 2 and Figure 3 illustrate that confidence-based methods achieve high AUROC when NAS samples are near the decision boundary, owing to their low confidence. In the image and medical domains, we observed the NAS samples moving toward the decision boundary with increasing attribute shift, so they are detected by the confidence-based methods. In contrast, in the text domain, the high prediction confidence of the shifted NAS samples degrades the AUROC of the confidence-based methods. Therefore, we conclude that effectively utilizing confidence-based methods on all three NAS datasets would require reducing the confidence of samples outside the ID, i.e., forcing NAS samples to lie near the decision boundary, which is not always possible (e.g., the Amazon Review dataset). Distance-based methods. From Figure 2 and Figure 3, we observe that the distance-based OOD detection method (i.e., the Mahalanobis detector) achieves high AUROC when NAS samples are sufficiently far from the ID samples.
In the text and medical domains, the Mahalanobis detector works well since NAS samples move sufficiently far from the ID samples as the shift increases. In the image domain, however, the method fails to detect NAS samples because, instead of deviating from the ID, they move into the region between the classes. Prior works report that the cross-entropy loss cannot guarantee a sufficient inter-class distance; in other words, representations do not need to be far from the decision boundary to lower the cross-entropy loss. In this regard, we assume that the performance degradation of the Mahalanobis detector is caused by the cross-entropy loss learning latent features that are not separable enough to detect NAS samples located between the classes (e.g., Figure 3). Specifically, if some classes are located close together in the feature space, samples moving between classes (as in the UTKFace dataset) will never be far from the ID: even as NAS samples move away from one ID class cluster, they gradually get closer to another. Figure 5: ID score landscape (brighter regions mean higher ID scores) of the existing OOD detection methods (left: MSP, middle: ODIN, right: Mahalanobis). We use a synthetic 2D dataset to train a 4-layer ResNet. Red points represent ID samples; purple stars, gray diamonds, and orange triangles indicate samples from the different NAS categories. A sample is regarded as NAS when it has a low ID score. Considering the performance of confidence-based and distance-based detection methods, we now divide NAS into three categories based on two criteria: 1) whether the samples are near the decision boundary or not; 2) whether they are close to or far from the in-distribution data.
Since samples far from the decision boundary that overlap with the in-distribution data are not distributionally shifted, our work focuses on the remaining three cases, which cover all possible scenarios of NAS. Without loss of generality, we use a classification task with three classes as a motivating example, depicted in Figure 5. NAS CATEGORIZATION NAS category 1: This category comprises NAS samples located near the decision boundary, between ID samples of different classes; the purple stars in Figure 5 represent samples from this category. Such samples are easily detected by the confidence-based methods, MSP and ODIN, but are harder to detect with the Mahalanobis detector, a distance-based method. NAS category 2: This category consists of NAS samples placed away from both the decision boundary and the ID data, for example, the gray diamonds in Figure 5. The Mahalanobis detector regards such samples as NAS, whereas confidence-based methods fail to detect them since these NAS samples have higher prediction confidence (i.e., a higher ID score) than ID samples. NAS category 3: This category mainly comprises NAS samples located near the decision boundary but far away from the ID data, for example, the orange triangles in Figure 5. Such samples are easily detected by both distance-based and confidence-based OOD detection methods. METHOD FOR CONSISTENT NAS DETECTION PERFORMANCE In this section, we suggest a modification of the training objective for deep classifiers that encourages consistent NAS detection performance on all NAS categories. We then provide experimental results comparing the proposed method against diverse OOD detection methods on the three NAS datasets (UTKFace, Amazon Review, RSNA Bone Age).
6.1 METHOD For an OOD detection method generally applicable to all NAS categories, we suggest a new training objective for deep classifiers comprising a classification loss (L_CE), a distance loss (L_dist), and an entropy loss (L_entropy). The proposed objective improves the performance of the Mahalanobis detector on NAS samples from category 1 without sacrificing performance on NAS samples from the other categories. We focus on improving the distance-based OOD detection method rather than the confidence-based methods, since it is not always possible to force NAS samples to be embedded near the decision boundary, as discussed in Section 5.2. Table 2: In-distribution classification accuracy on the three datasets with the cross-entropy loss and our proposed loss. The proposed training loss is defined as:

L = L_CE + λ1 L_dist + L_entropy. (1)

Note that for the classification loss we use the standard cross-entropy loss, but other losses such as the focal loss could also be used. The distance loss is used to increase the distance between distinct class distributions of ID samples so that NAS samples have more room to move without overlapping with ID clusters, especially in NAS category 1:

L_dist = (1 / √D) Σ_{l=1}^{K} Σ_{l'=l+1}^{K} ||μ_l − μ_{l'}||_2, (2)

where λ1 < 0 is a hyperparameter weighting this term, K is the number of target classes, D is the dimension of the feature representation in the latent space (typically the penultimate layer), and μ_l is the mean vector of the features of samples with label l. Since the value of the vector norm grows with the dimension of the feature space, we normalize the distance by the square root of the feature dimension. As discussed in Section D of the Appendix, we discovered in initial experiments that the distance loss often made the model use a very limited number of latent dimensions to increase the distance between class mean vectors, which degraded NAS detection performance. In other words, adding only L_dist to L_CE caused the latent feature space to collapse into a very small number of dimensions (i.e.
rank-deficient), which caused all NAS samples to be embedded near the ID samples. Therefore, we add an entropy loss to increase the number of features used to represent samples. The entropy loss is defined as:

L_entropy = −λ2 (1/D) Σ_{d=1}^{D} Var(z_(d)) + λ3 Σ_{i≠j} C_ij², (3)

where λ2 > 0 and λ3 > 0 are hyperparameters, z ∈ R^D is the feature representation in the latent feature space (with z_(d) its d-th dimension), Var(·) is the variance, and C_ij is the correlation coefficient between the i-th and j-th dimensions of the feature space. Specifically, C_ij is given by:

C_ij = Cov(z_(i), z_(j)) / (σ(z_(i)) σ(z_(j))), (4)

where Cov(·) and σ(·) are the covariance and the standard deviation, respectively. Intuitively, the first term in Equation 3 encourages each latent dimension to take diverse values, preventing the latent feature space from collapsing into a confined subspace. With the first term alone, however, all latent dimensions might learn correlated information, still leaving the latent space rank-deficient. Therefore, we use the second term in Equation 3 to minimize the correlation between different latent dimensions. Note that minimizing feature correlation has also been used in previous work in different contexts, such as self-supervised learning. RESULTS AND DISCUSSION To demonstrate the effectiveness of the suggested method, we train all classifiers using both the standard cross-entropy loss and our modified loss and compare post-hoc OOD detection methods across the three NAS datasets. Specifically, we present results for the confidence-based methods (MSP and ODIN) and the distance-based method (Mahalanobis distance). We also include a recently proposed OOD detection method (Sastry & Oore, 2020) that computes channel-wise correlations in a CNN with Gram matrices and estimates the deviation of test samples from the training samples to detect OOD samples. Although this method uses distances in the channel-correlation space, we expect it to behave more similarly to the Mahalanobis detector than to the confidence-based methods MSP and ODIN.
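For concreteness, the distance and entropy terms of the objective in Section 6.1 can be sketched in NumPy as below. The sign conventions and the use of a mean over class pairs are our reconstruction of the partially garbled equations, and a real implementation would operate on autograd tensors (e.g., in PyTorch) rather than NumPy arrays.

```python
import numpy as np

def distance_loss(features: np.ndarray, labels: np.ndarray, num_classes: int) -> float:
    """Pairwise L2 distance between class mean vectors, normalized by sqrt(D).

    In the full objective this term is weighted negatively, so minimizing
    the total loss pushes class clusters apart."""
    d = features.shape[1]
    means = np.stack([features[labels == c].mean(axis=0) for c in range(num_classes)])
    pairs = [np.linalg.norm(means[i] - means[j])
             for i in range(num_classes) for j in range(i + 1, num_classes)]
    return float(np.mean(pairs) / np.sqrt(d))

def entropy_loss(features: np.ndarray, lam2: float = 0.1, lam3: float = 1e-4) -> float:
    """Variance term rewards per-dimension spread (negative sign); the
    correlation term penalizes squared off-diagonal correlations."""
    var_term = -features.var(axis=0).mean()
    corr = np.corrcoef(features, rowvar=False)
    off_diag = corr - np.diag(np.diag(corr))
    return float(lam2 * var_term + lam3 * (off_diag ** 2).sum())
```

Scaling the features up increases per-dimension variance while leaving correlations unchanged, so the entropy term decreases, which is exactly the anti-collapse behavior described above.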
We also compare with another recent baseline that exploits the energy score to detect OOD samples. As the method leverages the logit layer to compute the energy score, and the softmax score is also based on the logits, we conjecture that the energy score exhibits detection behavior similar to the confidence-based methods. For a fair comparison, we use only the penultimate layer when evaluating the Mahalanobis detector and the Gram matrix, since our method is developed from the analysis of NAS samples in the penultimate-layer feature space; we also provide results for the ensemble versions that utilize all layers in Section E of the Appendix. We describe the experimental setups and the selection of λ1, λ2, and λ3 in Section C of the Appendix, where the three values can be reasonably chosen without any explicit NAS validation dataset. We also present an ablation study on the effect of the different terms of the proposed loss in Section D of the Appendix, and compare with other recent baselines that propose alternative training objectives for OOD detection in Section E of the Appendix. As shown in Table 2, the in-distribution classification accuracy of the model trained with our suggested loss is comparable to that of the model trained with the cross-entropy loss. Further, we present the NAS detection performance of the baselines and our method in Table 3. As expected, the Gram matrix achieves performance similar to the Mahalanobis detector, showing low AUROC on the UTKFace dataset. Interestingly, even though the energy score is calculated from logit values, which are more tempered than the softmax score, it shows NAS detection performance similar to that of MSP, a confidence-based method. These results show that, like MSP, ODIN, and Mahalanobis, the Gram matrix and energy-based methods also exhibit performance inconsistency on NAS datasets.
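The energy score referenced above is the negative free energy of the logits, −T·logsumexp(z/T); a minimal, numerically stable sketch (oriented so that a higher value means more in-distribution) is shown below. The temperature default is our choice for illustration.

```python
import numpy as np

def energy_score(logits: np.ndarray, temperature: float = 1.0) -> np.ndarray:
    """Per-sample energy-based ID score: T * logsumexp(logits / T).

    This equals the negative of the energy E(x) = -T log sum_j exp(z_j / T),
    so larger values correspond to more in-distribution inputs."""
    z = logits / temperature
    m = z.max(axis=1, keepdims=True)  # subtract max for numerical stability
    return temperature * (m.squeeze(1) + np.log(np.exp(z - m).sum(axis=1)))
```

Because the score is a smooth function of the whole logit vector rather than only its maximum, it is often better calibrated than MSP, although, as reported above, it behaves similarly to the confidence-based methods on NAS data.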
We report additional NAS detection performance on four other metrics often used in the OOD detection community in Section F of the Appendix. It is also readily visible that our proposed training objective makes the Mahalanobis detector a robust NAS detection method for all NAS categories. On UTKFace, we see a dramatic increase in NAS detection performance for the Mahalanobis detector, and on the other two datasets the proposed loss does not decrease its performance. Note that ODIN is more sensitive to OOD samples than the Mahalanobis detector at some brightness levels of the RSNA Bone Age dataset, but it shows inconsistent performance across the three datasets. While these approaches are viable, they are not directly related to the downstream task but instead aim to detect features that differ from the training distribution. In this paper, we mainly focus on methods that utilize classification models to detect OOD samples, since we are chiefly interested in samples that affect decision-making systems. Model uncertainty. A number of previous works measure model uncertainty using methods such as Bayesian neural networks, Monte Carlo dropout (Gal & Ghahramani, 2016), and deep ensembles. Note that, technically, model uncertainty can be used to detect NAS samples, especially those in NAS categories 1 and 3, since sampling model weights from the function space can be seen as redrawing the decision boundary, and NAS samples in categories 1 and 3 are affected heavily by this process. Model uncertainty, however, aims to capture the uncertainty in the model weights rather than to detect OOD samples, making the two rather independent research directions. CONCLUSIONS To enhance the reliability of decision-making systems, we define a new task, Natural Attribute-based Shift (NAS) detection, which aims to detect samples shifted by a natural attribute.
We introduce NAS detection benchmark datasets by adjusting the natural attributes present in existing datasets. Through an extensive evaluation of existing OOD detection methods on the NAS datasets, we observe inconsistent performance depending on the nature of NAS samples. We then analyze this inconsistency by probing the relationship between the location of NAS samples and the performance of existing OOD detection methods. Based on this observation, we suggest a simple remedy that helps the Mahalanobis OOD detection method achieve consistent performance across all NAS categories. We hope our datasets and task inspire fellow researchers to investigate practical methods for identifying NAS, which is crucial for deploying prediction models in real-world systems. A DETAILS OF OOD DETECTION METHODS In this section, we describe the three post-hoc and task-agnostic OOD detection methods in detail, focusing mainly on their formulation and how each method assigns an OOD score to an input sample. A.1 MAXIMUM OF SOFTMAX PROBABILITY (MSP) In this method, the maximum of the softmax probabilities is used as the confidence score (Hendrycks & Gimpel, 2017). Formally, the maximum softmax probability is computed as:

S_MSP(x) = max_c exp(z^(c)) / Σ_{j=1}^{C} exp(z^(j)),

where C is the number of target classes, c is the index of a class, and z^(j) denotes the j-th element of the logit-layer features. A.2 ODIN ODIN utilizes two well-established techniques, temperature scaling and input preprocessing, to increase the difference between the softmax scores of in-distribution and OOD samples. Temperature scaling was originally proposed by Hinton et al. to distill the knowledge in neural networks and was later widely adopted in classification tasks to calibrate prediction confidence. In addition to temperature scaling, the input is preprocessed to increase its softmax score by adding small perturbations obtained by back-propagating the gradient of the loss with respect to the input.
More specifically, the temperature-scaled softmax score of ODIN is computed as:

S(x; T) = max_c exp(z^(c)/T) / Σ_{j=1}^{C} exp(z^(j)/T),

where T ∈ R+ is the temperature-scaling parameter, C is the number of target classes, c is the index of a class, and z^(j) denotes the j-th element of the logit-layer features of input x. During training, T is set to 1. For OOD detection, the input is first preprocessed as:

x̃ = x − ε · sign(−∇_x log S_ŷ(x; T)),

where ε represents the magnitude of the perturbation. Next, the network calculates the calibrated softmax score of the preprocessed input as S(x̃; T) = max_c exp(z̃^(c)/T) / Σ_{j=1}^{C} exp(z̃^(j)/T), where z̃^(j) denotes the j-th element of the logit-layer features of the preprocessed input x̃. Lastly, the modified softmax score is compared to a threshold value: if the score is greater than the threshold, the input is classified as an ID sample, and otherwise as OOD. Originally, T and ε are hyperparameters selected to minimize the false positive rate (FPR) at 95% true positive rate (TPR) on a validation OOD dataset. However, performance saturates when T is greater than 1000, so in general a large value of T is preferred; following this, we fix T = 1000 in our experiments. A.3 MAHALANOBIS DETECTOR To obtain the Mahalanobis distance-based OOD score of a sample, we calculate the Mahalanobis distance from each class cluster to the sample; the distance to the closest class is then used as the confidence score. Specifically, the Mahalanobis score of an input x is defined as:

M(x) = max_c −(f_l(x) − μ_{c,l})^T Σ_l^{−1} (f_l(x) − μ_{c,l}),

where c and l are the class and layer index, respectively, f_l is the l-th layer's feature representation of input x, and μ_{c,l} and Σ_l are the class mean vector and tied covariance estimated on the training data. Note that ODIN and the Mahalanobis detector assume the availability of an OOD validation dataset. However, some recent works report that this assumption limits OOD detection generalizability, since the model becomes biased towards the chosen OOD validation set.
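Minimal NumPy versions of the MSP score and a single-layer Mahalanobis detector, as defined in this appendix, are sketched below; using a pseudo-inverse for a possibly singular tied covariance is our implementation choice rather than something specified by the authors.

```python
import numpy as np

def msp_score(logits: np.ndarray) -> np.ndarray:
    """Maximum softmax probability per sample; low values suggest NAS/OOD."""
    z = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return probs.max(axis=1)

def fit_class_gaussians(features: np.ndarray, labels: np.ndarray, num_classes: int):
    """Class means and tied covariance estimated from training features."""
    means = np.stack([features[labels == c].mean(axis=0) for c in range(num_classes)])
    centered = features - means[labels]
    cov = centered.T @ centered / len(features)
    return means, cov

def mahalanobis_score(x: np.ndarray, means: np.ndarray, cov: np.ndarray) -> float:
    """Negative squared Mahalanobis distance to the closest class mean."""
    prec = np.linalg.pinv(cov)  # pseudo-inverse guards against singular cov
    diffs = means - x
    dists = np.einsum('kd,de,ke->k', diffs, prec, diffs)
    return float(-dists.min())
```

Both functions are post-hoc: they only need the trained network's logits or penultimate features, matching the plug-in evaluation protocol used in the main text.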
In response, this paper evaluates the OOD methods in versions that do not require tuning on an OOD validation dataset. We evaluate ODIN following Hsu et al. We do not ensemble the Mahalanobis detector over all layers with the optimal linear combination, which requires explicit OOD data; instead, we evaluate two versions, one using only the penultimate-layer representation and one summing uniformly over all layers. Therefore, for all our experiments, we use modified OOD detection methods that do not require an OOD validation dataset. B ANALYSIS ON THE TEXT DATASET In this section, we provide a detailed analysis of the text dataset. We use the Amazon Review dataset (He & McAuley, 2016), consider the product category "book", and analyze the impact of time on product reviews. We first examine the impact of time on review length. Figure 6 compares the density plots of review length for each year from 2006 to 2014: as we move ahead in time, the reviews gradually get shorter. Further, Figure 7 presents the distribution of the average ratio of important words in a sequence by year. To obtain important words, we first train a model to classify sentiment polarity with CatBoost on document term-frequency vectors, then extract the top-100 important words by feature importance, and finally select the important words among them manually. We find that the ratio of important words in a sequence gradually increases over time. Based on this analysis, we conclude that features related to the downstream task shift over time. C IMPLEMENTATION DETAILS In this section, we describe the training details, followed by the algorithm we used to select the hyperparameters in our suggested modification of the loss function. We then provide links to download the datasets. C.1 TRAINING DETAILS AND COMPUTING INFRASTRUCTURE Image.
We use a ResNet18 pretrained on ImageNet to train a gender classifier on the UTKFace NAS dataset. After the average-pooling layer, we add a fully-connected network of 2 hidden layers with 128 and 2 units, respectively, with ReLU activations. The network is trained for 100 epochs with a batch size of 64, using the Adam optimizer (Kingma & Ba, 2015) with a learning rate of 3e-5. For data augmentation, the technique of SimCLR is used in all experiments. Text. We experiment with a 4-layer Transformer network, trained for 10 epochs with a batch size of 128, using Adam with a learning rate of 3e-6. Medical. We use a ResNet18 pretrained on ImageNet and add two fully-connected layers with 128 and 2 hidden units and ReLU activations. We train the network to classify gender given the X-ray image. Each model is trained for 30 epochs using the SGD optimizer with a learning rate of 0.01 and momentum of 0.9, with a batch size of 64. All experiments are conducted on a single GeForce RTX 3090 GPU with 24GB of memory. Results are reported as the mean and standard deviation across 5 trials with randomly chosen seeds. C.2 HYPERPARAMETER TUNING Our suggested modification to the training loss mainly comprises a distance loss (L_dist) and an entropy loss (L_entropy). More formally, the training loss is given by:

L = L_CE + λ1 L_dist + L_entropy. (5)

The entropy loss comprises two terms and is defined as:

L_entropy = −λ2 (1/D) Σ_{d=1}^{D} Var(z_(d)) + λ3 Σ_{i≠j} C_ij², (6)

where λ2 > 0 and λ3 > 0 are hyperparameters, z ∈ R^D is the feature representation in the latent feature space, and C_ij is the correlation coefficient between the i-th and j-th dimensions of the feature space. For more details, please refer to Section 6.1. For simplicity, in this section we refer to the first term of the entropy loss as the variance loss and to the second term as the correlation loss.
To obtain the hyperparameters of the different terms in our loss function, we search over values of λ2 and λ3. The hyperparameter of the distance loss, λ1, is set to 0.1, and the hyperparameters of the variance loss and the correlation loss are then chosen by a simple algorithm. Algorithm: We now describe the algorithm used to find the hyperparameters of the variance and correlation losses. First, we calculate the harmonic mean of the variance loss and correlation loss on the training dataset and select the hyperparameters with the lowest harmonic mean. Then, to ensure that the entropy loss prevents the feature-space collapsing problem, we apply singular value decomposition (SVD) to the penultimate features and test whether the sum of singular values, excluding the two largest, is improved by the entropy loss. Concretely, we compare these values against those of a model trained with L_CE + λ1 L_dist; if the selected hyperparameters do not improve them, we reject them and investigate the hyperparameters with the next lowest harmonic mean. The hyperparameters satisfying the above steps are selected for our loss function. Note that we also reject hyperparameters that significantly degrade classification accuracy. In the image domain, this algorithm yields 0.1, 0.1, and 0.0001 for λ1, λ2, and λ3, respectively. In the medical domain, λ3 is 1.0 and the other hyperparameters are the same as in the image domain. In the text domain, 10.0 and 1.0 are used for λ2 and λ3, and the distance loss weight is set to 0.1. C.3 LINKS FOR DATASETS In our work, we use the openly available UTKFace, Amazon Review, and RSNA Bone Age datasets. D ABLATION STUDIES ON DISTANCE AND ENTROPY LOSS In this section, we conduct an ablation study to investigate the effect of each proposed term in the loss function in equation 5.
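The quantity used in the SVD acceptance check above, the singular-value mass outside the top two directions, can be computed directly from the penultimate-layer feature matrix:

```python
import numpy as np

def residual_singular_mass(features: np.ndarray) -> float:
    """Sum of singular values of the feature matrix excluding the two
    largest; a small value signals collapse onto ~2 latent directions."""
    s = np.linalg.svd(features, compute_uv=False)  # sorted descending
    return float(s[2:].sum())
```

Hyperparameter candidates are accepted only if this quantity is larger than for the model trained with L_CE + λ1·L_dist alone, i.e., only if the entropy loss actually restores the rank of the latent space.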
D ABLATION STUDIES ON DISTANCE AND ENTROPY LOSS Table 4 presents an ablation study on the effect of the entropy loss and distance loss in the proposed modification to the training loss. In UTKFace, when training with only L dist or only L entropy attached to L CE, the NAS detection performance using the Mahalanobis OOD detection method improves compared to training with L CE alone, but in neither case does the performance increase monotonically with the variation of Age. However, when both L dist and L entropy are used, there is a large improvement in performance, with the results increasing monotonically as we move from ID towards NAS, which implies that these two losses are mutually beneficial. In the text and medical domains, although there is a slight degradation of AUROC when only L dist is added, the performance is improved by including L entropy in the training objective. Further, to analyze the impact of each loss term on the feature-level representations, we perform a singular value decomposition (SVD) on the penultimate layer of a ResNet18 trained on the UTKFace dataset, similar to Verma et al. When trained with L CE only, the sum of the two largest singular values was 617.47, while the sum of the remaining singular values was 785.54. When trained with L CE + L dist, the sum of the two largest singular values increased (4481.88), while the sum of the remaining singular values decreased (371.86). As discussed in Section 6.1 in the main paper, this indicates that the addition of the distance loss is likely to collapse the latent feature space (i.e., make it rank-deficient), thus possibly decreasing the model's sensitivity to NAS samples. For example, if a model is trained to classify apples and bananas based only on color, it will not be able to tell that a firetruck is a NAS sample. When we train the classifier with L entropy added, the problem is effectively alleviated.
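The collapse diagnostic described above (comparing the sum of the two largest singular values against the sum of the rest) can be sketched directly. The helper name below is illustrative, not from the paper:

```python
import numpy as np

def top2_vs_rest(features):
    """Split the singular-value mass of a feature matrix (N, D) into the
    sum of the two largest singular values and the sum of the remaining
    ones. A large top-2 share relative to the rest suggests a nearly
    rank-deficient (collapsed) feature space."""
    s = np.linalg.svd(features, compute_uv=False)  # sorted descending
    return s[:2].sum(), s[2:].sum()
```

On the paper's numbers, L CE alone gives (617.47, 785.54) while L CE + L dist gives (4481.88, 371.86) — the shrinking second component is exactly the collapse signal this helper measures.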
In this section, we investigate NAS detection performance using additional baselines, including recent work, and compare them with the model trained with our modified loss on three NAS datasets. Firstly, as mentioned in the main paper, we evaluate the model trained with cross-entropy loss using the ensembled version of the Mahalanobis detector and the method that utilizes the Gram matrix for OOD detection (Sastry & Oore, 2020). We also consider other recent baselines for comparison. Recently, Sehwag et al. proposed a unified framework to leverage self-supervised learning for OOD detection. We use this method as a baseline and denote it SSD+. The experiments on SSD+ require a data augmentation strategy. In the image domain, we augment the training dataset following Chen et al. For the text experiments, we construct an augmented training dataset using a back-translation model. We train SSD+ for 100 epochs on the UTKFace and RSNA Bone Age datasets and 50 epochs on the Amazon Review dataset. Some recent methods exploit cosine similarity to detect OOD samples. Hsu et al. suggest decomposing confidence scores and modifying the input preprocessing method of ODIN. For OOD detection, they calculate the cosine similarity between the features from the penultimate layer and the class-specific weights and use the maximum value as an OOD score. Techapanurak et al. also propose computing the cosine similarity between the class weight vector and the penultimate-layer feature of each input to obtain OOD scores. Specifically, they train the model with cross-entropy loss as a standard classification model, but the logit value is set to a scaled cosine similarity. After training, the cosine similarity values between the penultimate-layer feature of the input and the weight vector for each class are calculated, and the maximum cosine similarity value is used as the OOD score for each sample. We name these methods GODIN and Scaled Cosine in Table 6, respectively.
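The cosine-similarity scoring shared by GODIN and Scaled Cosine reduces to a few lines. This is a simplified sketch of that scoring rule only (it omits the scaling, confidence decomposition, and input preprocessing those methods add); the function name and array shapes are assumptions for illustration:

```python
import numpy as np

def cosine_ood_score(feature, class_weights):
    """Maximum cosine similarity between one penultimate-layer feature (D,)
    and the per-class weight vectors (C, D). Higher scores indicate the
    sample looks more in-distribution; OOD samples align with no class."""
    f = feature / (np.linalg.norm(feature) + 1e-12)
    w = class_weights / (np.linalg.norm(class_weights, axis=1, keepdims=True) + 1e-12)
    return float(np.max(w @ f))
```

Thresholding this score (below threshold → flag as OOD/NAS) is the detection rule both baselines build on.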
NAS detection performance of our method and other baselines is provided in Table 6. On the UTKFace dataset, GODIN, Scaled Cosine, and the ensembled version of the Mahalanobis detector have relatively low detection performance compared to the other baselines. Although the Gram matrix shows reasonable detection performance on the UTKFace and RSNA Bone Age NAS groups, this method is hard to apply to the Amazon Book Review dataset because the notion of a Gram matrix is vague in the text domain. Across the three NAS categories, SSD+ has the most robust detection performance among the five baselines; however, our simple remedy in the training loss helps the Mahalanobis detector demonstrate improved performance, particularly on the UTKFace NAS dataset. In conclusion, when compared with the five more recent baselines, our method shows robust and high detection performance on all three NAS datasets, regardless of domain. F QUANTITATIVE RESULTS WITH OTHER METRICS We report the NAS detection performance of the baselines and our method based on four other metrics: Detection Accuracy, AUPR-In, AUPR-Out, and TNR@95%TPR, which are often used in the OOD detection community. We present the detection performance on NAS datasets in the three domains measured by Detection Accuracy, AUPR-In, AUPR-Out, and TNR@95%TPR in Tables 7, 8, 9, and 10, respectively. The results indicate that our simple modification to the training objective improves the performance of the Mahalanobis detector, thus making it a robust NAS detection method for the three NAS categories presented in the paper. Specifically, on UTKFace, we observe that using our suggested training loss results in a significant increase in NAS detection performance using the Mahalanobis detector. At the same time, the suggested method effectively detects the NAS samples in the other two datasets.
In summary, based on the results obtained using five different metrics (AUROC, Detection Accuracy, AUPR-In, AUPR-Out, and TNR@95%TPR), our suggested modification makes the Mahalanobis detector a general NAS detection method for all three NAS categories.
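Of the metrics listed above, TNR@95%TPR is the least self-explanatory, so a minimal sketch may help. It assumes the common convention that higher scores mean "more in-distribution" and that the threshold is set to retain 95% of ID samples:

```python
import numpy as np

def tnr_at_95_tpr(id_scores, ood_scores):
    """TNR@95%TPR: pick the score threshold that keeps 95% of ID samples
    (scores >= threshold are accepted as ID), then report the fraction of
    OOD samples that fall below it (i.e., are correctly rejected)."""
    threshold = np.percentile(id_scores, 5)   # 95% of ID scores lie above this
    return float(np.mean(np.asarray(ood_scores) < threshold))
```

A value of 1.0 means every OOD/NAS sample is rejected while 95% of ID samples are still accepted.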
Application of the DC Offset Cancellation Method and S Transform to Gearbox Fault Diagnosis In this paper, the direct current (DC) offset cancellation and S-transform-based diagnosis method is verified using three case studies. For DC offset cancellation, correlated kurtosis (CK) is used instead of the cross-correlation coefficient to determine the optimal iteration number. Compared to the cross-correlation coefficient, CK greatly enhances the DC offset cancellation ability because of its excellent ability to detect periodic impulse signals. It is proven experimentally that the method can effectively diagnose an implanted bearing fault. However, the proposed method is less effective when bearing and gear faults are present simultaneously, especially for extremely weak bearing faults. In this circumstance, the iteration number for DC offset cancellation is determined directly by the order of the high-speed shaft gear mesh frequency. For the planetary gearbox, the application of the proposed method differs from the fixed-axis gearbox because of its complex structure. For parts with small fault frequencies, such as the planet gear and ring gear, the DC offset cancellation ability is less effective than for the fixed-axis gearbox. In these studies, the S transform is used to display the time-frequency characteristics of the DC-offset-cancellation-processed results; the performance is evaluated, and discussions are given. The fault information can be more easily observed in the time-frequency contour than in the frequency domain.
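The abstract does not reproduce the correlated kurtosis formula, so the sketch below uses the commonly cited definition CK_M(T) = Σ_n (Π_{m=0..M} x[n − mT])² / (Σ_n x[n]²)^(M+1); treat the exact form as an assumption rather than the paper's. CK peaks when the signal contains impulses repeating with period T samples, which is why it suits periodic-impulse (bearing fault) detection:

```python
import numpy as np

def correlated_kurtosis(x, T, M=1):
    """Correlated kurtosis of order M for candidate period T (in samples).
    Multiplies the signal by its own T-shifted copies, so only impulses
    that recur every T samples survive the product; larger CK indicates
    stronger T-periodic impulsiveness."""
    x = np.asarray(x, dtype=float)
    prod = x.copy()
    for m in range(1, M + 1):
        shifted = np.zeros_like(x)
        shifted[m * T:] = x[:-m * T]   # x delayed by m*T samples, zero-padded
        prod = prod * shifted
    return float(np.sum(prod ** 2) / np.sum(x ** 2) ** (M + 1))
```

Evaluating CK over candidate periods (or, here, over iteration numbers of the cancellation step) and keeping the maximizer is the selection rule the abstract attributes to CK.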
Are you READY guys?! The Walking Dead returns TONIGHT (and it’s my birthday!) I swear, whatever happens will just bring me down today. It’s going to be NUTS. (I will say, I think it will follow the comics, but that’s just me. I stick to the source.)
Check out this Walking Dead inspired piece I worked on. This was actually a San Diego Comic Con commission by a lovely pole dancing/ Walking Dead fan! (The girl on the pole is her… of course we would use our skills in a zombie apocalypse!) I love that she requested to fight off walkers with her favorite WD characters!
ENJOY!
ALSO IT’S MY BIRTHDAY!
To celebrate, I’m giving you 20% off your order until the end of the month! Use code “birthday20” at checkout. Shop for books, prints, stickers and more!
shop.poledancingadventures.com |
Clinical Outcomes in Elderly Kidney Transplant Recipients are Related to Acute Rejection Episodes Rather Than Pretransplant Comorbidity Background. Deciding whether an elderly patient with end-stage renal disease is a candidate for kidney transplantation can be difficult. We aimed to evaluate pre- and early posttransplant risk factors that could predict outcome in elderly kidney recipients. Methods. Data from all elderly (≥70 years, n=354), senior (60-69 years, n=577), and control (45-54 years, n=563) patients receiving their first kidney transplant at our center from 1990 to 2005 were retrieved. Patient and graft survival were analyzed in a Cox model addressing the common risk factors including Charlson comorbidity index (CCI), pretransplant dialysis time, and early acute rejection episodes. Results. Acute rejection in the first 90 days, hazard ratio (HR) 1.74 (1.34-2.25); time on dialysis, HR 1.02 (1.01-1.03) per month; and donor age more than 60 years, HR 1.52 (1.14-2.01) predicted mortality in the elderly. CCI score did not predict mortality in the elderly, HR 1.05 (0.98-1.12); but did so both in senior, HR 1.17 (1.08-1.27) and control recipients, HR 1.33 (1.19-1.48). Delayed graft function, HR 3.69 (2.01-6.79); donor age more than 60 years, HR 2.42 (1.30-4.49); and presence of human leukocyte antigen antibodies, HR 3.96 (1.38-11.37) were independent predictors for death-censored graft loss in the elderly. Conclusion. Adequate immunosuppression with a low frequency of rejection episodes improves the outcome for elderly kidney recipients, as does a reduction of time on dialysis. CCI score at transplantation does not seem helpful in the selection of elderly patients for kidney transplantation but plays a significant role in patients under 70 years of age.
#pragma once
// A header file which includes all Tirous Toolbox header files relating to iterators.
#include "../iterator_util.h"
#include "../contiguous_iterator.h"
# Copyright (c) 2017 Microsoft Corporation.
#
# Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
# documentation files (the "Software"), to deal in the Software without restriction, including without limitation the
# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software,
# and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all copies or substantial portions of
# the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO
# THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
# TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# ===================================================================================================================
from __future__ import absolute_import

import six
import tensorflow as tf
from tensorflow.core.framework.summary_pb2 import Summary

from ..visualizer import BaseVisualizer


class TensorboardVisualizer(BaseVisualizer):
    """
    Visualize the generated results in Tensorboard
    """

    def __init__(self):
        super(TensorboardVisualizer, self).__init__()
        gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.01)
        self._session = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
        self._train_writer = None

    def initialize(self, logdir, model, converter=None):
        assert logdir is not None, "logdir cannot be None"
        assert isinstance(logdir, six.string_types), "logdir should be a string"

        if converter is not None:
            assert isinstance(converter, TensorflowConverter), \
                "converter should derive from TensorflowConverter"
            converter.convert(model, self._session.graph)

        self._train_writer = tf.summary.FileWriter(logdir=logdir,
                                                   graph=self._session.graph,
                                                   flush_secs=30)

    def add_entry(self, index, tag, value, **kwargs):
        if "image" in kwargs and value is not None:
            # encode_jpeg builds a graph op, so run it to obtain the JPEG
            # bytes; the image must then be wrapped in a tagged
            # Summary.Value for the Summary proto below to be well-formed.
            image_string = self._session.run(
                tf.image.encode_jpeg(value, optimize_size=True, quality=80))
            summary_value = Summary.Value(
                tag=tag,
                image=Summary.Image(width=value.shape[1],
                                    height=value.shape[0],
                                    colorspace=value.shape[2],
                                    encoded_image_string=image_string))
        else:
            summary_value = Summary.Value(tag=tag, simple_value=value)

        if summary_value is not None:
            entry = Summary(value=[summary_value])
            self._train_writer.add_summary(entry, index)

    def close(self):
        if self._train_writer is not None:
            self._train_writer.close()

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.close()


class TensorflowConverter(object):
    def convert(self, network, graph):
        raise NotImplementedError()
Biological synthesis and characterization of iron sulfide (FeS) thin films from banana peel extract for environmental contamination remediation were studied. Iron chloride, sodium thiosulfate, and ethylenediaminetetraacetate (EDTA) were used as precursor solutions without further purification. Banana peel nanoparticles were extracted, incorporated into synthesized FeS thin films, and analyzed by X-ray diffraction for structural examination, scanning electron microscopy (SEM) for surface morphology, and ultraviolet-visible spectrometry (UV-Vis) and photoluminescence spectrophotometry (PL) for optical characterization. The XRD peaks are indexed to crystalline planes of FeS, and the occurrence of these diffraction peaks identifies the FeS crystal phase of the tetragonal crystal system. SEM micrographs of the films prepared by the biological method show a distribution of grains that covers the surface of the substrate completely and uniformly, whereas the purely deposited films have defects. The photoluminescence, absorbance, and transmittance intensities of the banana peel extract FeS thin film are greater than those of the pure FeS thin films, in which broad and symmetric bands were observed. In the present study, pure FeS thin films and FeS thin films nano-synthesized with banana peel extract were compared. It is observed that the nano-synthesized banana fibre film absorbs more than the pure FeS thin films in solar cell applications. Finally, green synthesis is an eco-friendly, easy, and cheap method for the fabrication of thin films for solar cell applications. Today, environmental challenges such as increased air and water pollution have grown because of population growth and rapid industrial expansion worldwide 1,2.
Semiconductors have become the furthermost significant part of investigations throughout the ancient few years, particularly in the areas of electricity and optoelectronic technology 3. Thin-film technologies are concurrently one of the eldest paintings and one of the latest science associations with thin-film ages to the metallic days of ancient times. Though non-solid and the related matters of interfering insignias have been investigated for over three eras, Integrating thin solid film was perhaps first attained by electrolyses very recently. A composite is a material made up of two or more other materials which give properties, in combination, which is not available from any of the ingredients alone. Nature continues to be generous to mankind by providing all kinds of resources in abundance for his living and existence. In this era of technology, products depend on new varieties of materials that have special characteristics 4,5. Metal composites, plastic and fibre-reinforced polymer composites are playing a vital role in the fields of nanotechnology. The performance of machine components depends mainly on the material that it is made of in the fields of www.nature.com/scientificreports/ automobile, railways, aerospace, structural applications, etc., and the strength to weight ratio of the material plays an important role 6. Due to the improved physical characteristics, the importance of fibers strengthened polymers compounds are gradually substituting numerous of the conservative ingredients 7. Above the previous era, polymers compounds covered with ordinary fiber have been getting consideration, both from the theoretical biosphere as well as from several manufacturing 8. Currently, the applications of natural fibers, particularly in motorized manufacturing, have collective repetition. 
Effectively instigated examples comprise both green fiber thermosets as well as thermo-plastic composite for internal uses such as door boards, shapely parts, and orchestra and tract tables 9. Biological technology is the addition of natural technologies as well as engineering to accomplish the claim of creatures, cells, body's thereof as well as molecules correspondents for yields as well as facilities 10. Biotechnology is multipurpose and has been considered an important area that has significantly impacted numerous technologies depending on the solicitation of bio procedure in engineering, farming, food-processing, medical approach, eco-friendly protection, and resource upkeep 11. This new groundswell of high-tech deviations has strong-minded affected enhancements in numerous segments (preparation of medicines, vitamins, minerals, interferon, yields of fermentation serve as nutrition or drink, energy from renewable basis and contaminations and hereditary engineering applied on plants, animals, humans) since it can give completely novel chances for maintainable preparation of surviving and novel yields and services 12. Additionally, ecological fears help determine the use of biological technology not only for pollution control (decontamination of water, air, and soil) but to preclude pollution and reduce waste in the first place and for ecologically friendly production of chemicals bio-monitoring 12. The preparation of polycrystalline Iron Sulfide thin films via the biological techniques using enzymes, micro-organisms, and bodies of florae, as well as their excerpts, has been recommended as cheap techniques 13. A Nanocomposite material has meaningfully broadened in the previous few years. This term now encompasses a huge diversity of schemes joining one to two as well as three dimensions material with Iron Sulfide constituents variegated at the Nano-meter scales. 
Natural fibres are universal throughout the world in plants such as flax, sisal, banana, hemp, banana, wood, grasses etc. From naturally available fibres, banana fibre is effortlessly obtainable in fabrics as well as fibres systems with fine mechanically and thermally characteristics 14. Cultured banana is resulting from two species of the genus Musa, explicitly from Musa-acuminate and Musabalbisiana. Musa-acuminate originated from Malaysians, while Musa-balbisiana initiated from Indians. Banana in Africa is categorized into three types, comprising East African(mainly desserts) banana, African plantains bananas grown up largely in the centre as well as west African and the East Africans highlands bananas, applied in cooking and beer preparation 15. The banana varieties that have been released by the Ethiopian Institute of Agricultural Research (EIAR) are kept in order as (Ducasse Hybrid, Dwarf Cavendish, Giant Cavendish, poyo, Matoke, Nijiru, Kitawira, Cardaba, Butuzua, Robusta, Grand Nain and Williams-1) having Potential Yield (q/ ha) 16 between 261 and 556. Because of availability in the large area, the researchers have done their study on the Williams-1 banana variety. Banana fibre is biodegradable, cost-effective and lowers compactness fibre with extraordinary precise behaviours. So, banana grounded combination constituents could be applied in industrial, automobile, structural and aerospace applications. Banana fibre is hydrophilic in nature which causes poor wettability with hydrophobic organic matrix resins like polyester when preparing composites 17. The hydrophobicity nature of banana fibre is reduced by chemical modifications like alkalization, lightening etc. These treatments not only decrease the water absorption capacity of the fibre but also increase the wettability of the fibre with resin and improve inter bond between fibre and matrix. The foremost features of banana fibre are celluloses, lignins, hemicelluloses and pectin 18. 
The importance of banana cellulose fibre resulting from yearly renewables resource as a supporting part in polymer atmosphere composite delivers optimistic environmentally welfare with respect to eventual disposable and raw materials. The main advantage of banana cellulose fibers is: a Renewables environment, a wide variety of plasters obtainable throughout the world, Nonfood undeveloped grounded economies, Low-slung energy consumption, cheap, lowest density, Extraordinary precise asset and modulus, great comprehensive repetition of cellulosic based composite, the reprocessing by ignition of cellulose filled composites are easier in contrast with mineral filler systems. The possibility of using banana cellulose fibres as a reinforcing phase has received considerable interest in accumulation, the intrinsic Nanoscale properties of banana fibre cellulose material for developing advanced Nanomaterials and composites 19. This environmentally suitable technique for polycrystalline Iron Sulfide (FeS) preparation draws extra significance and is unusual to the biochemical and physical techniques 15 because of the escaping of the importance of poisonous chemicals as well as maximum energy components in the preparation procedures. Bodies of plant extracts vigorously contribute to the biological decline procedure to change the metallic ions to metals and metallic oxides 16. Green deposition FeS plays a vital role in contradiction of the degradation procedure of engineering dyes because of photocatalytic influence 20. Presently the earth is in fear of airborne and water contamination unconfined from non-renewable energy bases like Coal, natural gases, fossil fuel and gas released from the industry. The excess fluids released out of fabrics pour out to the streams and result in water contamination. This contaminated water is in a straight line drunken by the community and causes diseases such as cholera, amoeba and typhoids. 
Commonly it disturbs human's protection fitness in the biosphere. Solar technology connects solar panels, and renewable and Sustainable energy sources 21. This sustainable energy source starts from natural possessions that endlessly substitute. These contain the sunlight, oceans as well as the power of winds. These types of technologies are considered clean and do not contain carbons since it doesn't emit greenhouse gas. The solar cell is an uncontaminated energy source; it hasn't an environmental impact on nature like the energy originates from fossil coals. When fossil fuels are burned, it releases hazardous carbon poisonous radiations into the atmosphere 10,22. To minimize these harmful wastes and contaminations from the environment, the preparation and manufacture of solar energy from compound semiconductors and thin films are the only elucidations. As presently, prevailing elemental 26.In this concern, we have efficiently produced the FeS thin films from banana peel extract. The photocatalytic degradation of maximum concentrations of crystal violet has been studied for the first time in facet. Materials and methods All chemicals, iron chloride, Sodium thiosulfate and Ethylene-di-amine-tetr-acetate (EDTA), were bought from Sigma Aldrich and used without any distillation. Triple deionized water was served as solvents in all laboratory works. The peel of bananas was collected from the local area, Ethiopian country, Oromia region, and Gudaya Bila, as shown in Fig. 1. This peel of banana was polluting the area, and we collected it to make it dry in normal conditions and grind it to make it powder by using a pistol and mortar. The plant we have used in this report was cultivated in Gudaya Bila, Ethiopia. This study complies with relevant legislation and international, national, and institutional guidelines. Characterization techniques. 
The structure, morphology and optical characteristics of Iron Sulfide (FeS) were investigated through X-ray diffraction (XRD), Scanning electron microscope(SEM) and UV-vis spectrophoto-meter (UV) and Photoluminescence(PL). Figure 2 displays the characterization techniques with utilities. Furthermore, the average crystal size of the peel of the banana collected was calculated using the Scherer equation. where K = 0.94 is Scherer constant, is X-ray wavelength, is Bragg diffraction angle, and is the peak width of the diffraction line at the maximum intensity. Table 1 presents the variation of the measured FeS thin films. Experimental details : green deposition of polycrystalline iron sulfide (FeS) using banana peel extraction. The peel of the banana was collected and dried in the normal condition for 3 weeks. Then by using a mortar with a pestle, grinded to get a powder form. In the chemical bath deposition techniques, a water bath beaker with 0.2 M of 250 ml solution was added to 70 mL of Iron Chloride(99.99% purity), 30 mL of Sodium thiosulfate (99.997% purity), and 0.1 M f 10 mL of EDTA (99.89% purity), was added as complexing agent. After appropriate involvement, this solution reaction was placed on a heater of temperature 180℃ with PH value was adjusted to two (PH = 2 acidic bath) adding a droplet of H2SO4 for the 20 min string time and Plastic substrate inserted vertically, then taken at 120 min as sample 1. Then 10 mg of banana peel extract powder was added to the newly prepared solution with the same step. Finally, the substrate was vertically immersed, and the total time of deposition was 120 min (2 h). After the accomplishment of synthesis time, the sample was taken out of the bath as sample 2 and kept in the oven for further characterization. Result and discussion Structural analysis of polycrystalline FeS thin films. Structure characteristics of the organized banana collected were conducted XRD and given in Fig. 3. 
XRD of FeS prepared by 2.5 mL of 10 mg of banana leaf extract powder was added to the solution (Fig. 3) From XRD patterns, numerous peaks were observed for banana peel extract of FeS thin films and no peak was gained from pure FeS thin films. This shows that the pure FeS thin films are an amorphous structure, and the tetragonal crystal structure was observed for the biosynthesis of FeS thin films from banana peel extract. The crystal size and parameters gained from XRD data are discussed in the table below. Optical characterization of iron sulfide (FeS) thin films. The photoluminescence (PL) spectra of FeS thin films at 180 °C for 2 h prepared using banana peel extracts and pure FeS thin films are displayed in Fig. 4, and spectrums are gained from the U-V; Figs. 5 and 6 show the absorbance and transmittance of excitation wavelength = 300-700 nm. It obviously can be seen that the photo-luminescence strength of banana peel extracts FeS thin film is greater than pure FeS thin films in which wide-ranging and symmetries groups were perceived. This may be because the existence of impurity comes from substrates. The profound emanations in the visible range designate the existence of structural defects 28. A feeble band's emissions in the U-V section are detected at 372-428 nm, conforming to the radiated re-combinations between the animated electron in the conductions band as well as the holes in the valence bands. The optical characteristics allow banana peel extracts FeS thin films are useful for photovoltaic solar cells 33. According to the findings of this research, it is also anticipated to carry out similar studies insight of regulating the shapes of the Nanoparticles and investigating other physical characteristics as reported beforehand on other Nanoscaled metal-sulfide Nanocomposites or oxide 34. Figure 7 displays the scanning-electron-microscopy (SEM) micrographs of the Iron sulfide films deposited using chemical or pure and banana peel extracts. 
As shown in Fig. 7a,b, the films prepared by both the chemical and biological methods show complete coverage of material on the substrate surface. The SEM micrographs of the films prepared by the biological approach show a distribution of grains that covers the surface of the substrate completely and uniformly (Fig. 7a). This result may be due to the quantity of iron source and banana peel extract ions in the solution and is in good agreement with previous reports 35. The FeS thin film deposited through the biological method uniformly covers the substrate. Based on the SEM micrograph, the particles form an agglomerated surface (average size around 21 µm) 36. Comparison of the thin films deposited by the biological method from peel extract shows that the FeS peak intensity increased, demonstrating a better crystal phase for the prepared films 37. These thin films show smaller grains compared with the other films prepared from chemicals only, i.e., the pure ones (Fig. 7b), on whose surface pinholes are observed. This result matches well with previous reports. Conclusion. Iron sulfide (FeS) films were effectively deposited through chemical bath deposition. X-ray diffraction examination shows the polycrystalline nature of the thin films with a tetragonal phase. The thin films prepared with banana peel extract FeS showed a higher number of peaks than the pure FeS thin films. The surface morphology of the biosynthesized films was moderately even and more fully deposited on the substrate than the
It is evidently seen that the photo-luminescence strength of banana peel extract FeS thin film is greater than pure FeS thin films in which broads and symmetries bands are witnessed. These profound emanations in the visible ranges show the presence of structural imperfections. Finally, the use of biosynthesis in the production of thin films as a photo absorber is promising for future energy sources which are clean, portable and simple operations. |
Ardent Donald Trump supporters are expected to turn out Election Day in large numbers, but their support for GOP congressional candidates -- particularly those distancing themselves from the party's presidential nominee -- appears increasingly uncertain.
Dozens of House and Senate candidates bolted from Trump after the recent release of a 2005 audiotape in which he brags about his celebrity status allowing him to make uninvited advances on women.
Rep. Joe Heck, the Republican nominee for an open Senate seat in Nevada, immediately felt the backlash from Republican voters, getting booed at a party unity rally when calling for Trump to quit the race. House Speaker Paul Ryan, though not in a tight race, faced similar heckling at a Wisconsin rally from which Trump was dis-invited.
In Pennsylvania, some Trump loyalists are vowing not to vote for incumbent Republican Sen. Pat Toomey -- who is in a close race with Democratic challenger Katie McGinty and has refused to endorse Trump.
Johansen, who founded the online, Pennsylvania-based Trump group MAG -- for "Make America Great" -- says his dissatisfaction with Toomey started long before he abandoned Trump.
And he has continued the argument -- railing against Democrats and Republicans alike -- through the final weeks of the campaign.
“There is nothing the political establishment will not do -- no lie that they won't tell -- to hold their prestige and power at your expense,” Trump said at a rally last week in Florida.
Rhetoric like this raises concerns that his supporters might not be motivated to vote Republican down the ballot, as Democrats fight to win back the Senate -- and hold out hope for a much-less-likely takeover in the House, where Republicans hold a 30-seat majority.
At the same time, Republicans distancing themselves from Trump are trying to keep the peace with swing and undecided voters.
Johansen and others acknowledge the political tightrope GOP incumbents are trying to walk -- support Trump and bear all of his controversial remarks or abandon him and risk losing votes from the most energized faction of the party.
Missouri GOP Sen. Roy Blunt, in a surprisingly close reelection bid, is sticking with Trump, who leads Democratic rival Hillary Clinton there by 8 percentage points, according to the RealClearPolitics’ poll average.
And at least a couple candidates who split with the billionaire businessman after the tape controversy have since warmed back up to him, including Sen. Deb Fischer of Nebraska.
Asked about his stance on Trump last week, Toomey's office provided this statement: "I have not endorsed Donald Trump and I have repeatedly spoken out against his flawed policies, and his outrageous comments, including his indefensible and appalling comments about women."
He contrasted that against his rival, saying: "Katie McGinty has yet to say a single word against Hillary Clinton’s disastrous policies that have endangered our country, her widespread dishonesty, or the corruption of her behavior with the Clinton Foundation."
The extent to which Trump is truly tied to GOP House and Senate candidates remains to be seen.
“Most Republican voters can tell the difference between a viable candidate and one who is not,” David Payne, a Republican strategist and partner at Vox Global, said Tuesday. “But they could cast a protest vote or not vote at all. There is some danger here, most notably in the Senate."
Oregon Rep. Greg Walden, who leads the National Republican Congressional Committee, last week told an Omaha TV station that surveys across the country suggest roughly 20-to-25 percent of voters connect GOP congressional candidates to Trump.
“There is not that tight of an attachment,” he told ABC affiliate KETV.
Nathan Gonzales, of the Rothenberg & Gonzales Political Report, recently suggested a good bellwether is anti-Trump GOP Rep. John Katko’s bid for a second term in upstate New York, where he has led by as many as 15 percentage points.
“If that race completely turns around, everybody should take note,” he said. |
Mother’s Day is coming up, and wouldn’t a Danzig card be the sweetest gift you could possibly give your mommy, before you go out and kill tonight?
Best Play Ever has these cards which look like a typical Mother’s Day card on the front and open to an illustration of the dark one himself with the text: “Tell your children not to walk my way!”
Adorable!
If your Mom’s some kind of weirdo Glenn Danzig hater, you can always pick her up a different card from the site’s collection. They offer Tupac, Freddie Mercury, Abba, and Spice Girls themed cards for mothers of all walks!
...And to get Mom in the mood for her special day, here’s a very “special” remix of “Mother”:
Previously on Dangerous Minds:
Glenn Danzig Valentine’s Day cards |
Measurement of visceral fat and abdominal obesity by single-frequency bioelectrical impedance and CT: a cross-sectional study

Objectives
The measurement of visceral fat (VF) is clinically important for the identification of individuals at high risk of visceral obesity-related health conditions. Bioelectrical impedance analysis (BIA) is a widely available and frequently used body composition assessment method, but there have been few validation studies for the measurement of VF. This validation study investigated agreement between BIA and CT for the assessment of VF in adults.
Design
Cross-sectional study.
Setting
Between 2015 and 2016 in China.
Participants
A total of 414 adults (119 men and 295 women) aged 40-82 years.
Primary and secondary outcome measures
CT visceral fat area (VFA) was derived at the L2-3 and umbilicus level and VFA cut-offs for visceral obesity applied. BIA measurements of visceral fat level were compared with CT VFA findings using scatter plots and receiver operator characteristic (ROC) curves.
Results
Scatter plots showed poor agreement between BIA and CT-derived visceral fat measurements in both sexes (R=0.387-0.636). ROC curves gave optimum figures for sensitivity and specificity of 65% and 69% in women and 76% and 70% in men, respectively, for BIA to discriminate between adults with normal levels of VF and those with visceral obesity determined by CT.
Conclusion
BIA has limited accuracy for the assessment of VF in adults in practice when compared with the criterion method.

INTRODUCTION
An excess of visceral adipose tissue (VAT) can cause metabolic abnormalities through the secretion of harmful inflammatory adipokines such as interleukin-6, tumour necrosis factor-alpha and macrophage chemoattractant protein-1.
1 In particular, visceral fat increases the risk for development of chronic low-grade inflammation and is involved in the pathogenesis of numerous inflammatory medical conditions including metabolic syndrome, diabetes and cardiovascular disease, 2-4 as well as being an important, independent predictor of all-cause mortality. 4 5 It is therefore clinically important to identify individuals with high levels of visceral fat, so that appropriate interventions can be implemented. Proxy measures of excess fat accumulation such as body mass index (BMI) and waist circumference have been demonstrated to be largely ineffective in identifying visceral obesity, although waist-to-height ratio has shown promise. 6 7 The gold standard methods for the measurement of visceral fat are CT and MRI. Visceral fat area (VFA) based on single-slice imaging of CT/MRI is widely used in research studies 8 9 but rarely used in clinical practice. Several studies have provided cut-off values of VFA for visceral obesity assessment in Japanese, Korean and Chinese populations, recognising a greater amount of visceral adiposity at any given BMI in East Asian populations compared with other ethnic groups such as white, African-Caribbean black and Hispanic populations. 13 However, CT and MRI are limited in large-scale studies or in clinical protocols, due to cost, availability and radiation exposure.

Strengths and limitations of this study
► The agreement of bioelectrical impedance analysis (BIA) with CT for the assessment of visceral fat and abdominal obesity in adults was poor.
► We found improved visceral fat level thresholds in men and women compared to the manufacturer's recommendation.
► In this study, the BIA device was single frequency and therefore findings cannot be generalised to multifrequency BIA devices.

Bioelectrical impedance analysis (BIA) is a widely available, low-cost and non-X-ray-based method, and is used frequently in clinical
practice and research settings to evaluate total body water and body composition. There have been few validation studies of BIA-derived assessments of visceral fat, 14 15 and no study has yet investigated BIA estimates in accord with CT-derived visceral obesity reference cut-points. Therefore, the aim of this study was to investigate agreement between single-frequency BIA and abdominal CT for the assessment of visceral fat and visceral obesity in Chinese adults.

MATERIALS AND METHODS
Study participants
Participants were recruited from community-based population samples of the Changzhou region from the Prospective Urban Rural Epidemiology China Action on Spine and Hip status study. 16 The inclusion and exclusion criteria have been described previously. 16 In addition, for this study, individuals who had hydration abnormalities such as visible oedema, cirrhosis or heart failure were excluded from the study. Visceral fat area was measured at the L2/3 (figure 1A,B) and umbilicus cross-section levels. Details of adipose tissue measurements have been reported previously. 18 In brief, adipose tissue was segmented and mapped in blue with a default threshold, and the outer contour of the abdominal wall was then outlined automatically by the software on each 1 mm-thick slice. All measurements were carried out by two trained and experienced radiologists (CY and RY). The interobserver and intraobserver reliabilities of QCT VFA measurements were good, with intraclass correlation coefficients of 0.996 and 0.990, 8 respectively.
BIA body composition
Body composition was estimated using whole-body, upright, single-frequency (SF)-BIA (Tanita BC-554, Tanita Corp, Tokyo, Japan). All participants were measured in lightweight clothing and standing barefoot on the metal footpads. To measure the bio-impedance, a very low, safe electrical signal is sent from four metal electrodes through the feet to the legs and abdomen.
The Tanita BIA uses SF-BIA at 50 kHz, which predominantly measures extracellular water and approximately 25% of intracellular water. Participant information entered into the system to enable computation of the BIA algorithms included gender, age, height and weight. Body fat mass percentage (BF) and visceral fat level (VFL) were recorded as the mean value of two repeated measurements. The time interval between the BIA and QCT measurements did not exceed 7 days. The Tanita body composition analyser gives a range of VFL ratings between 1 and 59. According to the manufacturer's information, a rating between 1 and 12 indicates a healthy level of visceral fat, whereas a rating between 13 and 59 indicates excess visceral fat. The reproducibility of estimated values using this BIA system has been reported previously. 18 19
Statistical analysis
Statistical analyses were performed using SPSS V.25.0 software (IBM, Armonk, NY, USA) and R V.3.6.2 (R Core Team, R Foundation for Statistical Computing, Vienna, Austria). The measurement data are presented as mean±SD. The Mann-Whitney U test was used for intergroup and subgroup comparisons of baseline characteristics. Spearman's rank correlation coefficients were used to evaluate whether VFL was correlated with other parameters. Pearson correlation coefficients were determined among the anthropometric parameters, body fat variables measured by CT and BF. A correlogram was used to plot a graph of the correlation matrix. In this plot, correlation coefficients are coloured and sized according to their value. Statistical analyses were performed to assess the prevalence of visceral obesity based on BIA VFL (VFL >13) and VFA (VFA >142 cm² for men and 115 cm² for women at the L2/3 level; VFA >111 cm² for men and 96 cm² for women at the umbilical level) 12 by CT.
Scatter plots of VFL against VFA were drawn and receiver operator characteristic (ROC) curves used to determine the sensitivity and specificity of BIA measurements to discriminate between adults with normal levels of visceral fat and those with visceral obesity determined by CT. p<0.05 was considered statistically significant.
Patients and public involvement
Patients and the public were not involved in this study, including data collection, analysis and interpretation.

RESULTS
Anthropometric, body fat percentage and visceral fat parameters are shown in table 1. There were significant differences in height, weight, BMI, body fat percentage, VFL and VFA between women and men. Figure 2 shows the plots of the correlation matrix of body fat composition variables and anthropometric measurements in men (figure 2A) and women (figure 2B). VFL was poorly correlated with VFA and TFA at the L2/3 and umbilicus levels (R=0.387-0.636, all p<0.001) in both genders. The correlation between VFL and BF was good in both sexes (R=0.851 for women and 0.894 for men, p<0.001). BMI and weight showed higher associations with VFA (R=0.586-0.762, all p<0.001) than VFL did (R=0.384-0.565, all p<0.001). Total body fat percentage was poorly associated with VFA and TFA at both levels (R=0.335-0.506, all p<0.001). Table 2 shows BIA and CT-derived fat mass results for the normal-weight and overweight/obesity subgroups. Significant differences (p<0.001) were found between the overweight/obesity and normal-weight subgroups for all body fat composition parameters in both sexes (table 2). Figures 3 and 4 demonstrate the level of agreement between BIA and CT for the identification of visceral obesity in women and men, stratified by BMI, respectively. Approximately 10% of overweight/obese women and no normal-weight women were correctly identified as having high levels of visceral fat by BIA. Conversely, CT imaging identified high levels of visceral fat in 40% of normal-weight women.
In overweight/obese men, the agreement between BIA and CT was slightly better, with BIA correctly identifying 50% of men with visceral obesity in the overweight/obese group, while in normal-weight men, BIA correctly identified only 5% of men with visceral obesity. Figures 5 and 6 show the corresponding ROC curves. A BIA VFL threshold of 8 gave 65% sensitivity and 69% specificity for identifying women with VAT >115 cm² at L2/3. A BIA threshold of 12 gave 76% sensitivity and 70% specificity for identifying men with VAT >142 cm² at L2/3. Overall, there was poor agreement between the two methods for the assessment of visceral obesity.

DISCUSSION
Abdominal adipose tissue can be measured accurately using state-of-the-art imaging techniques such as CT. However, due to the increased ionising radiation and high cost, CT is inappropriate for the measurement and monitoring of abdominal visceral fat in many research and clinical situations. As such, BIA, as a more widely available and low-cost body composition tool, is more feasible, at least in clinical practice. However, we found poor agreement between BIA and CT for the measurement of visceral fat. The correlation coefficients (R=0.387-0.636) for visceral fat between BIA and CT in this study (figure 2A,B) are similar to those reported elsewhere between BIA and MRI (r²=0.13-0.44). 15 At the manufacturer's recommended VFL threshold of 13, the sensitivity and specificity of BIA measurements to discriminate visceral obesity measured by CT VFA were 10% and 97%, respectively, in women and 52% and 90% in men. However, we found improved figures for sensitivity and specificity by choosing different VFL thresholds in men and women. Another study using two whole-body BIA devices and one abdominal BIA device found that agreement between all three BIA devices for visceral fat assessment was better for total fat mass than for visceral fat in both men and women.
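The threshold analysis described above can be sketched in a few lines of code. This is an illustrative computation only — the arrays below are made-up numbers rather than the study data, and the function name is an assumption of this sketch:

```python
def sens_spec(vfl, vfa, vfl_cut, vfa_cut):
    """Sensitivity and specificity of a BIA visceral-fat-level (VFL) cut-off,
    using CT visceral fat area (VFA > vfa_cut) as the reference standard."""
    tp = fp = tn = fn = 0
    for level, area in zip(vfl, vfa):
        obese = area > vfa_cut       # CT reference classification
        positive = level >= vfl_cut  # BIA classification at this cut-off
        if obese and positive:
            tp += 1
        elif obese:
            fn += 1
        elif positive:
            fp += 1
        else:
            tn += 1
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical data: five women, BIA VFL readings and CT VFA (cm^2) at L2/3
vfl = [5, 9, 14, 7, 6]
vfa = [100, 130, 160, 90, 150]
sens, spec = sens_spec(vfl, vfa, vfl_cut=8, vfa_cut=115)
```

An ROC curve is obtained by repeating this computation over every candidate cut-off and plotting sensitivity against 1 − specificity; the optimum thresholds reported in this study correspond to the points that best balance the two.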
20 There has been some recent interest in the potential of locally applied BIA for the quantification of abdominal subcutaneous fat thickness. 21 In addition, two previous studies have reported positive correlations between BIA-derived visceral fat measures and metabolic parameters including blood pressure, lipid profiles and fasting glucose. 22 23 However, it should be considered that the correlation coefficients for visceral fat and metabolic parameters in these studies are relatively low (R=0.2-0.4) and interestingly, the correlation coefficients were better for waist circumference. 22 23 Unlike CT, BIA does not provide a direct measure of fat tissue. 24 BIA most closely estimates body water and there is no direct theoretical relationship between resistance and/or reactance and relative body fatness. 25 The estimation of adiposity from BIA is instead based on empirical relationships from samples of experimental subjects and calculations involve assumptions at several steps. 26 Given the uncertainties surrounding the BIA-VFL calculation process, the significant disagreement between BIA and CT for defining visceral obesity requires further exploration. We recognise several limitations to this study. First, it should be considered that we did not include measurements of waist circumference or waist-to-height ratio, of which the latter has been found to be highly correlated with visceral fat mass using dual-energy X-ray absorptiometry. 7 BMI is used to assess general obesity, while waist circumference is used to assess abdominal obesity. Therefore, it might be better to do the analyses of figures 3 and 4 stratified by waist circumference rather than BMI. Second, in this study, the BIA device was single frequency and therefore findings cannot be generalised to multifrequency BIA devices. Notably, different types of BIA equipment on the market include SF and multifrequency devices, which vary in price. 
The instrument (Tanita BC-554) used in this study is a consumer-grade instrument and is relatively inexpensive (US$170) compared with professional-grade instruments (>US$1000). It is important to acknowledge the wide range of variability in the accuracy of BIA scales, and the comparative validity of SF and multifrequency BIA devices has also been questioned. 26

CONCLUSION
The agreement of BIA with the criterion method, CT, for the assessment of visceral fat and abdominal obesity in adults was poor. Further studies are warranted to improve the predictive value of abdominal BIA relative to the gold standard of CT/MRI before BIA should be accepted for the definition of visceral obesity in practice.
By Amy Goodman with Denis Moynihan
President Barack Obama proclaimed Dec. 15 Bill of Rights Day, praising those first 10 amendments to the U.S. Constitution as “the foundation of American liberty, securing our most fundamental rights — from the freedom to speak, assemble and practice our faith as we please to the protections that ensure justice under the law.” The next day, U.S. District Judge Richard J. Leon called Obama’s surveillance policies “almost Orwellian” in a court order finding the National Security Agency’s bulk collection of Americans’ telephone metadata very likely unconstitutional. If that was not enough, the president’s own task force on the issues, the Review Group on Intelligence and Communications Technologies, delivered its report, which the White House released, with 46 recommendations for changes.
One adviser to the panel, Sascha Meinrath of the Open Technology Institute, was skeptical, telling me that “intelligence-community insiders, administration officials, comprise the entirety of this five-member group. I do not see how you can do a truly independent review of surveillance when so many people are tied in.” The panel is chaired by former CIA Deputy Director Michael Morrell, and is managed under the auspices of the Office of the Director of National Intelligence, run by James Clapper. Clapper is widely considered to have lied in a Senate hearing on this issue. When asked by Sen. Ron Wyden, D-Ore., if the NSA collected phone records on millions or hundreds of millions of Americans, Clapper replied, “No, sir.” Following the Snowden leaks, Clapper admitted to NBC News that his answer was the “least untruthful” manner to say no.
Judge Leon’s ruling relates to just one of several filed after the June disclosures by former NSA contractor Edward Snowden about the vast, global surveillance system vacuuming up personal data from billions of people. A separate federal lawsuit in New York, ACLU v. Clapper, seeks to end the mass surveillance completely, and to have all the data collected so far deleted.
Anthony Romero, the executive director of the American Civil Liberties Union, called Edward Snowden “a patriot,” noting: “As a whistle-blower of illegal government activity that was sanctioned and kept secret by the legislative, executive and judicial branches of government for years, he undertook great personal risk for the public good. And he has single-handedly reignited a global debate about the extent and nature of government surveillance and our most fundamental rights as individuals.”
Jay Carney, Obama’s press secretary, reiterated the White House’s hard line this week: “Mr. Snowden has been accused of leaking classified information, and he faces felony charges here in the United States.”
Currently in Russia, halfway through a year of temporary asylum he was granted there, Edward Snowden this week issued a public letter to the people of Brazil, in hopes of gaining permanent asylum there. In the letter, Snowden wrote, “Six months ago, I stepped out from the shadows of the United States Government’s National Security Agency to stand in front of a journalist’s camera … with open eyes, knowing that the decision would cost me family and my home, and would risk my life. I was motivated by a belief that the citizens of the world deserve to understand the system in which they live.” He continued: “My greatest fear was that no one would listen to my warning. Never have I been so glad to have been so wrong.”
The world continues to listen to Snowden. As he also said in his open letter, “The culture of indiscriminate worldwide surveillance, exposed to public debates and real investigations on every continent, is collapsing.” A recent poll suggests at least 55 percent of those questioned consider Snowden a whistle-blower. Despite the polls, CNN anchor Brooke Baldwin blustered about potential amnesty for Snowden: “This is a hated man, what would he even do here?”
Adopted on Dec. 15, 1791, the Bill of Rights comprises the first 10 amendments to the Constitution. While praising it last week and ticking through “our most fundamental rights,” President Obama failed to mention the Fourth Amendment. It reads:
“The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no warrants shall issue, but upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.”
Perhaps President Obama, the erstwhile constitutional-law professor, should go back and reread that amendment.
Amy Goodman is the host of “Democracy Now!,” a daily international TV/radio news hour airing on more than 1,000 stations in North America. She is the co-author of “The Silenced Majority,” a New York Times best-seller.
© 2013 Amy Goodman
Distributed by King Features Syndicate |
package io.github.karadkar.veggie.user.ui;
import android.os.Bundle;
import android.support.v4.widget.SwipeRefreshLayout;
import android.support.v7.app.AppCompatActivity;
import android.support.v7.widget.LinearLayoutManager;
import android.support.v7.widget.RecyclerView;
import android.view.View;
import android.widget.ProgressBar;
import android.widget.TextView;
import com.greentopli.core.presenter.history.OrderHistoryPresenter;
import com.greentopli.core.presenter.history.OrderHistoryView;
import com.greentopli.core.service.OrderHistoryService;
import com.greentopli.model.OrderHistory;
import java.util.List;
import butterknife.BindView;
import butterknife.ButterKnife;
import io.github.karadkar.veggie.R;
import io.github.karadkar.veggie.user.adapter.OrderHistoryAdapter;
import io.github.karadkar.veggie.user.tool.ProductItemDecoration;
/** Displays the user's past orders; data is loaded via {@link OrderHistoryPresenter}, with pull-to-refresh support. */
public class OrderHistoryActivity extends AppCompatActivity implements OrderHistoryView, SwipeRefreshLayout.OnRefreshListener {
@BindView(R.id.orderHistory_recyclerView)
RecyclerView mRecyclerView;
@BindView(R.id.orderHistory_empty_message)
TextView emptyMessage;
@BindView(R.id.progressbar_orderHistory_activity)
ProgressBar progressBar;
@BindView(R.id.order_history_swipeRefreshLayout)
SwipeRefreshLayout mSwipeRefreshLayout;
private OrderHistoryPresenter mPresenter;
private LinearLayoutManager mLayoutManager;
private OrderHistoryAdapter mAdapter;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_order_history);
ButterKnife.bind(this);
mPresenter = OrderHistoryPresenter.bind(this, getApplicationContext());
mRecyclerView.addItemDecoration(new ProductItemDecoration(getApplicationContext()));
mSwipeRefreshLayout.setOnRefreshListener(this);
}
private void initRecyclerView() {
mAdapter = new OrderHistoryAdapter();
mRecyclerView.setAdapter(mAdapter);
mLayoutManager = new LinearLayoutManager(getApplicationContext());
mRecyclerView.setLayoutManager(mLayoutManager);
}
@Override
public void onHistoryReceived(List<OrderHistory> orderHistoryList) {
initRecyclerView();
mAdapter.addNewData(orderHistoryList);
mSwipeRefreshLayout.setRefreshing(false);
}
@Override
public void onEmpty(boolean show) {
emptyMessage.setVisibility(show ? View.VISIBLE : View.GONE);
mSwipeRefreshLayout.setRefreshing(false);
}
@Override
public void showProgressbar(boolean show) {
mSwipeRefreshLayout.setRefreshing(false);
progressBar.setVisibility(show ? View.VISIBLE : View.GONE);
}
@Override
public void onRefresh() {
OrderHistoryService.start(getApplicationContext());
}
@Override
protected void onDestroy() {
mPresenter.detachView();
super.onDestroy();
}
}
|
The subject matter herein relates generally to lighting devices and, more particularly, to assemblies that house and supply electric current to lighting devices.
Known lighting assemblies include lighting devices that emit light out of the assemblies in desired directions. Some lighting assemblies include light emitting diodes (LEDs) that emit light from a light emitting surface of the assemblies. The assemblies typically include several interconnected components or parts that are used to house the LED and other components used to operate the LED. For example, the LED may be mounted on a circuit board in a housing of the lighting assembly. The housing may be formed of one or more parts, such as heat sinks, optical lenses, additional circuit boards, and the like. Moreover, one or more additional electronic components used to operate the LED may be mounted on the circuit board or on an additional circuit board located in the housing. For example, an LED driver may be mounted to the same circuit board as the LED or to an additional circuit board. The electronic components receive electric current from an external source and use the current to drive, or activate, the LED and cause the LED to emit light from the lighting assembly. The various components in some known lighting assemblies may be secured together using adhesives, latching devices, and the like.
Known lighting assemblies including lighting devices that emit light out of the assemblies in desired directions. Some lighting assemblies include light emitting diodes (LEDs) that emit light from a light emitting surface of the assemblies. The assemblies typically include several interconnected components or parts that are used to house the LED and other components used to operate the LED. For example, the LED may be mounted on a circuit board in a housing of the lighting assembly. The housing may be formed of one or more parts, such as heat sinks, optical lenses, additional circuit boards, and the like. Moreover, one or more additional electronic components used to operate the LED may be mounted on the circuit board or on an additional circuit board located in the housing. For example, an LED driver may be mounted to the same circuit board as the LED or to an additional circuit board. The electronic components receive electric current from an external source and use the current to drive, or activate, the LED and cause the LED to emit light from the lighting assembly. The various components in some known lighting assemblies may be secured together using adhesives, latching devices, and the like.
The LED and electronic components located within the housing may be electronically joined with one another by one or more internal contacts located in the housing. Additionally, the LED and electronic components may be coupled with the external source by one or more external contacts that extend from inside to outside of the housing. The external contacts may be coupled with the external source of electric current to supply the current to the LED and electronic components. In some known lighting assemblies, these contacts, circuit boards, components and LEDs are soldered together during assembly of the lighting assemblies.
In general, as the number of interconnected components and electrical components in the lighting assemblies increases, the complexity and cost of manufacturing the lighting assemblies also increases. For example, some known lighting assemblies include interconnected housing components such as heat sinks, contact housings, optical lenses, and the like, that are secured together by adhesives, such as thermal adhesives. The application of the adhesives increases the cost and time involved in manufacturing the lighting assemblies. Additionally, the manufacturing process of some known lighting assemblies uses several soldering steps to electrically couple the several electronic components. As the number of soldering steps and solder connections between components increases, the cost and complexity involved in manufacturing the lighting assemblies also may increase.
A need exists for lighting assemblies that include fewer components and/or manufacturing steps. Eliminating components and/or manufacturing steps may reduce the complexity and/or cost involved in manufacturing the lighting assemblies. |
Dealing with Aspect-Limited Data Through an Innovative Microwave Imaging Multi-Source Technique: Potentialities and Limitations
In this contribution, an innovative methodology aimed at increasing the amount of scattering data (avoiding further a priori assumptions on the investigation domain) is analyzed. According to the multi-source (MS) approach, the investigation domain is illuminated by means of different probing sources, each of them characterized by a proper (and different) radiation pattern, to induce different scattering interactions able to "show" different "aspects" of the scatterer under test. Integrated with a multi-view strategy and resorting to the iterative multi-scaling procedure, the exploitation of the "source diversity" (through the definition of a suitable multi-source/multi-view cost function) enlarges in a non-negligible fashion the number of retrievable unknowns, enhancing the robustness of the imaging process with respect to noise as well as the stability of the inversion procedure and the reconstruction accuracy. Moreover, the reduction of the ratio between the dimension of the space of unknowns and that of the data implies a decreased sensitivity to false solutions, leading to a more tractable optimization problem. A large number of numerical simulations confirm the effectiveness of the inversion strategy as well as its robustness with respect to noise on the data. Moreover, the results of a comparative study with single-source methodologies further point out the advantages and potentialities of the approach when dealing with aspect-limited data acquisition setups.
The in vitro post-antifungal effect of nystatin on Candida species of oral origin.
The post-antifungal effect (PAFE) is defined as the suppression of growth that persists following limited exposure of yeasts to antimycotics and subsequent removal of the drug. Although limited data are available on the PAFE of nystatin on oral isolates of C. albicans, there is no information on non-albicans Candida species. As nystatin is the commonest antifungal agent prescribed in dentistry, the main aim of this investigation was to measure the PAFE of oral isolates of Candida belonging to six different species (five isolates each of C. albicans, C. tropicalis, C. krusei, C. parapsilosis, C. glabrata and C. guilliermondii) following limited exposure (1 h) to nystatin. The yeasts were examined for the presence of the PAFE after 1 h of exposure to the minimum inhibitory concentration (MIC) of nystatin. The PAFE was determined as the difference in time (h) required for the growth of the drug-free control and the drug-exposed test cultures to increase to the 0.05 absorbance level following removal of the antifungal agent. The mean duration of nystatin-elicited PAFE was lowest for C. albicans (6.85 h) and greatest for C. parapsilosis (15.17 h), while C. krusei (11.58 h), C. tropicalis (12.73 h), C. glabrata (8.51 h) and C. guilliermondii (8.68 h) elicited intermediate values. These findings suggest another intriguing explanation for the persistent, chronic recurrence of oral C. albicans infections despite apparently adequate antifungal drug regimens. The significant variations in nystatin-induced PAFE amongst non-albicans species may also have clinical implications in terms of the nystatin regimens used in the management of these fungal infections.
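The PAFE calculation defined above — the extra time a drug-exposed culture needs, relative to the drug-free control, to regrow to an absorbance of 0.05 — can be expressed as a short sketch. The growth curves below are invented for illustration and the function names are not from the study:

```python
def time_to_threshold(times, absorbances, threshold=0.05):
    """First time point (h) at which absorbance reaches the threshold."""
    for t, a in zip(times, absorbances):
        if a >= threshold:
            return t
    raise ValueError("culture never reached the threshold")

def pafe(control_curve, test_curve, threshold=0.05):
    """PAFE (h) = time for the drug-exposed test culture to reach the
    absorbance threshold minus the time for the drug-free control."""
    t_control = time_to_threshold(control_curve[0], control_curve[1], threshold)
    t_test = time_to_threshold(test_curve[0], test_curve[1], threshold)
    return t_test - t_control

# Invented readings sampled every 2 h: (times, absorbances)
control = ([0, 2, 4, 6], [0.01, 0.03, 0.05, 0.08])
test = ([0, 2, 4, 6, 8, 10, 12], [0.0, 0.0, 0.01, 0.02, 0.03, 0.05, 0.07])
print(pafe(control, test))  # control reaches 0.05 at 4 h, test at 10 h -> PAFE of 6 h
```

With real data the absorbance readings would come from the spectrophotometric growth curves of each isolate, and a species-level PAFE would be the mean over its isolates.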
/***************************************************************************
* Copyright (c) 2014-2015 VMware, Inc. All Rights Reserved.
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
***************************************************************************/
package com.vmware.bdd.apitypes;
import com.google.gson.annotations.Expose;
import com.google.gson.annotations.SerializedName;
public class NetConfigInfo {
@Expose
@SerializedName("port_group_name")
private String portGroupName;
@Expose
@SerializedName("network_name")
private String networkName;
@Expose
@SerializedName("traffic_type")
private NetTrafficType trafficType;
public NetConfigInfo(){
}
public NetConfigInfo(NetTrafficType trafficType, String networkName, String portGroupName) {
this.trafficType = trafficType;
this.networkName = networkName;
this.portGroupName = portGroupName;
}
public String getPortGroupName() {
return portGroupName;
}
public void setPortGroupName(String portGroupName) {
this.portGroupName = portGroupName;
}
public String getNetworkName() {
return networkName;
}
public void setNetworkName(String networkName) {
this.networkName = networkName;
}
public NetTrafficType getTrafficType() {
return trafficType;
}
public void setTrafficType(NetTrafficType trafficType) {
this.trafficType = trafficType;
}
public enum NetTrafficType{
MGT_NETWORK, HDFS_NETWORK, MAPRED_NETWORK
}
}
|
import { XCircle as FeatherXCircle, Props } from 'react-feather';
import * as React from 'react';
export const XCircle: React.FC<Props> = ({ ...rootProps }) => (
<FeatherXCircle data-icon="xcircle" {...rootProps} />
);
|
from torchvision.utils import save_image


def visualize_data(data, data_type, out_file, data2=None, info=None, c1=None, c2=None, show=False, s1=5, s2=5):
    """Visualize a data sample and write it to out_file.

    Dispatches on data_type: 'img' (saved as an image grid), 'voxels',
    'pointcloud' (via the module-level helpers visualize_voxels and
    visualize_pointcloud), or 'idx'/None (no-op).
    """
    if data_type == 'img':
        # save_image expects a batch dimension; add one for single images
        if data.dim() == 3:
            data = data.unsqueeze(0)
        save_image(data, out_file, nrow=4)
    elif data_type == 'voxels':
        visualize_voxels(data, out_file=out_file)
    elif data_type == 'pointcloud':
        visualize_pointcloud(data, out_file=out_file, points2=data2, info=info,
                             c1=c1, c2=c2, show=show, s1=s1, s2=s2)
    elif data_type is None or data_type == 'idx':
        # Index-only samples carry nothing to visualize
        pass
    else:
        raise ValueError('Invalid data_type "%s"' % data_type)
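The string-keyed dispatch above can also be written as a handler table, which keeps the routing logic in one place and makes supporting a new data type a one-entry change (a sketch of the pattern only; the handler names below are placeholders):

```python
def make_dispatcher(handlers, ignore=('idx', None)):
    """Return a function routing (data, data_type) to handlers[data_type].

    data_type values in `ignore` are silently skipped; unknown values
    raise ValueError, matching the if/elif chain above.
    """
    def dispatch(data, data_type, **kwargs):
        if data_type in ignore:
            return None
        try:
            handler = handlers[data_type]
        except KeyError:
            raise ValueError('Invalid data_type "%s"' % data_type)
        return handler(data, **kwargs)
    return dispatch
```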
The influence of smoking, sedentary lifestyle and obesity on cognitive impairment-free life expectancy. BACKGROUND Smoking, sedentary lifestyle and obesity are risk factors for mortality and dementia. However, their impact on cognitive impairment-free life expectancy (CIFLE) has not previously been estimated. METHODS Data were drawn from the DYNOPTA dataset which was derived by harmonizing and pooling common measures from five longitudinal ageing studies. Participants for whom the Mini-Mental State Examination was available were included (N = 8111, 48.6% men). Data on education, sex, body mass index, smoking and sedentary lifestyle were collected and mortality data were obtained from Government Records via data linkage. Total life expectancy (LE), CIFLE and years spent with cognitive impairment (CILE) were estimated for each risk factor and total burden of risk factors. RESULTS CILE was approximately 2 years for men and 3 years for women, regardless of age. For men and women respectively, reduced LE associated with smoking was 3.82 and 5.88 years, associated with obesity was 0.62 and 1.72 years and associated with being sedentary was 2.50 and 2.89 years. Absence of each risk factor was associated with longer LE and CIFLE, but also longer CILE for smoking in women and being sedentary in both sexes. Compared with participants with no risk factors, those with 2 had shorter CIFLE of up to 3.5 years depending on gender and education level. CONCLUSIONS Population level reductions in smoking, sedentary lifestyle and obesity increase longevity and number of years lived without cognitive impairment. Years lived with cognitive impairment may also increase.
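The decomposition underlying the abstract, total life expectancy split into cognitive-impairment-free years (CIFLE) and years lived with cognitive impairment (CILE), is a simple identity; the sketch below uses made-up numbers, not DYNOPTA estimates:

```python
def cile(total_le, cifle):
    """Years lived with cognitive impairment: CILE = LE - CIFLE."""
    if cifle > total_le:
        raise ValueError("CIFLE cannot exceed total life expectancy")
    return total_le - cifle
```

For example, a remaining LE of 20 years with a CIFLE of 17 years implies 3 years lived with impairment, the same order of magnitude as the 2-3 years reported. The identity also explains the conclusion that removing a risk factor can lengthen both CIFLE and CILE: if LE grows faster than CIFLE, the difference grows too.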
// gtrafimenkov/ja2-vanilla-cp
#ifndef __TEXT_H
#define __TEXT_H
#include "SGP/Types.h"
#include "Tactical/ItemTypes.h"
extern wchar_t ShortItemNames[MAXITEMS][80];
extern wchar_t ItemNames[MAXITEMS][80];
extern void LoadAllExternalText(void);
extern const wchar_t *GetWeightUnitString(void);
extern FLOAT GetWeightBasedOnMetricOption(UINT32 uiObjectWeight);
#define AmmoCaliber_SIZE 17
#define BobbyRayAmmoCaliber_SIZE 16
#define WeaponType_SIZE 9
#define TeamTurnString_SIZE 5
#define Message_SIZE 59
#define pTownNames_SIZE 13
#define g_towns_locative_SIZE 13
#define sTimeStrings_SIZE 6
#define pAssignmentStrings_SIZE 35
#define pMilitiaString_SIZE 3
#define pMilitiaButtonString_SIZE 2
#define pConditionStrings_SIZE 9
#define pEpcMenuStrings_SIZE 5
#define pLongAssignmentStrings_SIZE 35
#define pContractStrings_SIZE 7
#define pPOWStrings_SIZE 2
#define pInvPanelTitleStrings_SIZE 3
#define pShortAttributeStrings_SIZE 10
#define pUpperLeftMapScreenStrings_SIZE 4
#define pTrainingStrings_SIZE 4
#define pAssignMenuStrings_SIZE 7
#define pRemoveMercStrings_SIZE 2
#define pAttributeMenuStrings_SIZE 10
#define pTrainingMenuStrings_SIZE 5
#define pSquadMenuStrings_SIZE 21
#define pPersonnelScreenStrings_SIZE 14
#define gzMercSkillText_SIZE 17
#define pTacticalPopupButtonStrings_SIZE 19
#define pDoorTrapStrings_SIZE 5
#define pMapScreenMouseRegionHelpText_SIZE 6
#define pNoiseVolStr_SIZE 4
#define pNoiseTypeStr_SIZE 12
#define pDirectionStr_SIZE 8
#define pLandTypeStrings_SIZE 40
#define gpStrategicString_SIZE 69
#define sKeyDescriptionStrings_SIZE 2
#define gWeaponStatsDesc_SIZE 7
#define gzMoneyAmounts_SIZE 6
#define pVehicleStrings_SIZE 6
#define zVehicleName_SIZE 6
#define const_SIZE 137
#define pExitingSectorHelpText_SIZE 14
#define pRepairStrings_SIZE 4
#define sPreStatBuildString_SIZE 6
#define sStatGainStrings_SIZE 11
#define pHelicopterEtaStrings_SIZE 10
#define gsTimeStrings_SIZE 4
#define sFacilitiesStrings_SIZE 7
#define pMapPopUpInventoryText_SIZE 2
#define pwTownInfoStrings_SIZE 7
#define pwMineStrings_SIZE 14
#define pwMiscSectorStrings_SIZE 7
#define pMapInventoryErrorString_SIZE 5
#define pMapInventoryStrings_SIZE 2
#define pMovementMenuStrings_SIZE 4
#define pUpdateMercStrings_SIZE 6
#define pMapScreenBorderButtonHelpText_SIZE 6
#define pMapScreenBottomFastHelp_SIZE 8
#define pSenderNameList_SIZE 51
#define pDeleteMailStrings_SIZE 2
#define pEmailHeaders_SIZE 3
#define pFinanceSummary_SIZE 12
#define pFinanceHeaders_SIZE 7
#define pTransactionText_SIZE 28
#define pSkyriderText_SIZE 3
#define pMoralStrings_SIZE 6
#define pMapScreenStatusStrings_SIZE 5
#define pMapScreenPrevNextCharButtonHelpText_SIZE 2
#define pShortVehicleStrings_SIZE 6
#define pTrashItemText_SIZE 2
#define pMapErrorString_SIZE 50
#define pMapPlotStrings_SIZE 5
#define pBullseyeStrings_SIZE 5
#define pMiscMapScreenMouseRegionHelpText_SIZE 3
#define pImpPopUpStrings_SIZE 7
#define pImpButtonText_SIZE 26
#define pExtraIMPStrings_SIZE 4
#define pFilesSenderList_SIZE 7
#define pHistoryHeaders_SIZE 5
#define pHistoryStrings_SIZE 78
#define pLaptopIcons_SIZE 8
#define pBookMarkStrings_SIZE 8
#define pDownloadString_SIZE 2
#define gsAtmStartButtonText_SIZE 3
#define pWebPagesTitles_SIZE 35
#define pShowBookmarkString_SIZE 2
#define pLaptopTitles_SIZE 5
#define pPersonnelDepartedStateStrings_SIZE 5
#define pPersonelTeamStrings_SIZE 8
#define pPersonnelCurrentTeamStatsStrings_SIZE 3
#define pPersonnelTeamStatsStrings_SIZE 11
#define pMapVertIndex_SIZE 17
#define pMapHortIndex_SIZE 17
#define pMapDepthIndex_SIZE 4
#define pUpdatePanelButtons_SIZE 2
#define LargeTacticalStr_SIZE 4
#define InsContractText_SIZE 4
#define InsInfoText_SIZE 2
#define MercAccountText_SIZE 9
#define MercInfo_SIZE 11
#define MercNoAccountText_SIZE 3
#define MercHomePageText_SIZE 5
#define sFuneralString_SIZE 12
#define sFloristText_SIZE 14
#define sOrderFormText_SIZE 22
#define sFloristGalleryText_SIZE 5
#define sFloristCards_SIZE 2
#define BobbyROrderFormText_SIZE 26
#define BobbyRText_SIZE 26
#define BobbyRaysFrontText_SIZE 9
#define AimSortText_SIZE 5
#define AimPolicyText_SIZE 6
#define AimMemberText_SIZE 4
#define CharacterInfo_SIZE 12
#define VideoConfercingText_SIZE 15
#define AimPopUpText_SIZE 9
#define AimHistoryText_SIZE 5
#define AimFiText_SIZE 14
#define AimAlumniText_SIZE 5
#define AimScreenText_SIZE 8
#define AimBottomMenuText_SIZE 6
#define SKI_SIZE 14
#define SkiMessageBoxText_SIZE 7
#define zOptionsText_SIZE 9
#define zSaveLoadText_SIZE 21
#define zMarksMapScreenText_SIZE 23
#define pMilitiaConfirmStrings_SIZE 10
#define gpDemoString_SIZE 41
#define gpDemoIntroString_SIZE 6
#define gzMoneyWithdrawMessageText_SIZE 2
#define zOptionsToggleText_SIZE 20
#define gzGIOScreenText_SIZE 16
#define pDeliveryLocationStrings_SIZE 17
#define pPausedGameText_SIZE 3
#define pMessageStrings_SIZE 68
#define pDoctorWarningString_SIZE 2
#define pMilitiaButtonsHelpText_SIZE 4
#define gzLaptopHelpText_SIZE 16
#define gzNonPersistantPBIText_SIZE 10
#define gzMiscString_SIZE 5
#define pNewNoiseStr_SIZE 11
#define wMapScreenSortButtonHelpText_SIZE 6
#define BrokenLinkText_SIZE 2
#define gzBobbyRShipmentText_SIZE 4
#define gzCreditNames_SIZE 15
#define gzCreditNameTitle_SIZE 15
#define gzCreditNameFunny_SIZE 15
#define sRepairsDoneString_SIZE 4
#define zGioDifConfirmText_SIZE 3
#define gzLateLocalizedString_SIZE 58
#define zOptionsScreenHelpText_SIZE 20
#define ItemPickupHelpPopup_SIZE 5
#define TacticalStr_SIZE 137
#define zDealerStrings_SIZE 4
#define zTalkMenuStrings_SIZE 6
#define gMoneyStatsDesc_SIZE 8
#define zHealthStr_SIZE 7
#define SKI_Text_SIZE 14
#define str_stat_list_SIZE 11
#define str_aim_sort_list_SIZE 8
#define zNewTacticalMessages_SIZE 6
// Weapon Name and Description size
#define SIZE_ITEM_NAME 80
#define SIZE_SHORT_ITEM_NAME 80
#define SIZE_ITEM_INFO 240
#define SIZE_ITEM_PROS 160
#define SIZE_ITEM_CONS 160
typedef const wchar_t *StrPointer;
struct LanguageRes {
const StrPointer *AmmoCaliber;
const StrPointer *BobbyRayAmmoCaliber;
const StrPointer *WeaponType;
const StrPointer *Message;
const StrPointer *TeamTurnString;
const StrPointer *pAssignMenuStrings;
const StrPointer *pTrainingStrings;
const StrPointer *pTrainingMenuStrings;
const StrPointer *pAttributeMenuStrings;
const StrPointer *pVehicleStrings;
const StrPointer *pShortAttributeStrings;
const StrPointer *pContractStrings;
const StrPointer *pAssignmentStrings;
const StrPointer *pConditionStrings;
const StrPointer *pTownNames;
const StrPointer *g_towns_locative;
const StrPointer *pPersonnelScreenStrings;
const StrPointer *pUpperLeftMapScreenStrings;
const StrPointer *pTacticalPopupButtonStrings;
const StrPointer *pSquadMenuStrings;
const StrPointer *pDoorTrapStrings;
const StrPointer *pLongAssignmentStrings;
const StrPointer *pMapScreenMouseRegionHelpText;
const StrPointer *pNoiseVolStr;
const StrPointer *pNoiseTypeStr;
const StrPointer *pDirectionStr;
const StrPointer *pRemoveMercStrings;
const StrPointer *sTimeStrings;
const StrPointer *pLandTypeStrings;
const StrPointer *pInvPanelTitleStrings;
const StrPointer *pPOWStrings;
const StrPointer *pMilitiaString;
const StrPointer *pMilitiaButtonString;
const StrPointer *pEpcMenuStrings;
const StrPointer *pRepairStrings;
const StrPointer *sPreStatBuildString;
const StrPointer *sStatGainStrings;
const StrPointer *pHelicopterEtaStrings;
const StrPointer sMapLevelString;
const StrPointer gsLoyalString;
const StrPointer gsUndergroundString;
const StrPointer *gsTimeStrings;
const StrPointer *sFacilitiesStrings;
const StrPointer *pMapPopUpInventoryText;
const StrPointer *pwTownInfoStrings;
const StrPointer *pwMineStrings;
const StrPointer *pwMiscSectorStrings;
const StrPointer *pMapInventoryErrorString;
const StrPointer *pMapInventoryStrings;
const StrPointer *pMovementMenuStrings;
const StrPointer *pUpdateMercStrings;
const StrPointer *pMapScreenBorderButtonHelpText;
const StrPointer *pMapScreenBottomFastHelp;
const StrPointer pMapScreenBottomText;
const StrPointer pMercDeadString;
const StrPointer *pSenderNameList;
const StrPointer pNewMailStrings;
const StrPointer *pDeleteMailStrings;
const StrPointer *pEmailHeaders;
const StrPointer pEmailTitleText;
const StrPointer pFinanceTitle;
const StrPointer *pFinanceSummary;
const StrPointer *pFinanceHeaders;
const StrPointer *pTransactionText;
const StrPointer *pMoralStrings;
const StrPointer *pSkyriderText;
const StrPointer str_left_equipment;
const StrPointer *pMapScreenStatusStrings;
const StrPointer *pMapScreenPrevNextCharButtonHelpText;
const StrPointer pEtaString;
const StrPointer *pShortVehicleStrings;
const StrPointer *pTrashItemText;
const StrPointer *pMapErrorString;
const StrPointer *pMapPlotStrings;
const StrPointer *pBullseyeStrings;
const StrPointer *pMiscMapScreenMouseRegionHelpText;
const StrPointer str_he_leaves_where_drop_equipment;
const StrPointer str_she_leaves_where_drop_equipment;
const StrPointer str_he_leaves_drops_equipment;
const StrPointer str_she_leaves_drops_equipment;
const StrPointer *pImpPopUpStrings;
const StrPointer *pImpButtonText;
const StrPointer *pExtraIMPStrings;
const StrPointer pFilesTitle;
const StrPointer *pFilesSenderList;
const StrPointer pHistoryLocations;
const StrPointer *pHistoryStrings;
const StrPointer *pHistoryHeaders;
const StrPointer pHistoryTitle;
const StrPointer *pShowBookmarkString;
const StrPointer *pWebPagesTitles;
const StrPointer pWebTitle;
const StrPointer pPersonnelString;
const StrPointer pErrorStrings;
const StrPointer *pDownloadString;
const StrPointer *pBookMarkStrings;
const StrPointer *pLaptopIcons;
const StrPointer *gsAtmStartButtonText;
const StrPointer *pPersonnelTeamStatsStrings;
const StrPointer *pPersonnelCurrentTeamStatsStrings;
const StrPointer *pPersonelTeamStrings;
const StrPointer *pPersonnelDepartedStateStrings;
const StrPointer *pMapHortIndex;
const StrPointer *pMapVertIndex;
const StrPointer *pMapDepthIndex;
const StrPointer *pLaptopTitles;
const StrPointer pDayStrings;
const StrPointer *pMilitiaConfirmStrings;
const StrPointer *pDeliveryLocationStrings;
const StrPointer pSkillAtZeroWarning;
const StrPointer pIMPBeginScreenStrings;
const StrPointer pIMPFinishButtonText;
const StrPointer pIMPFinishStrings;
const StrPointer pIMPVoicesStrings;
const StrPointer pPersTitleText;
const StrPointer *pPausedGameText;
const StrPointer *zOptionsToggleText;
const StrPointer *zOptionsScreenHelpText;
const StrPointer *pDoctorWarningString;
const StrPointer *pMilitiaButtonsHelpText;
const StrPointer pMapScreenJustStartedHelpText;
const StrPointer pLandMarkInSectorString;
const StrPointer *gzMercSkillText;
const StrPointer *gzNonPersistantPBIText;
const StrPointer *gzMiscString;
const StrPointer *wMapScreenSortButtonHelpText;
const StrPointer *pNewNoiseStr;
const StrPointer *gzLateLocalizedString;
const StrPointer pAntiHackerString;
const StrPointer *pMessageStrings;
const StrPointer *ItemPickupHelpPopup;
const StrPointer *TacticalStr;
const StrPointer *LargeTacticalStr;
const StrPointer zDialogActions;
const StrPointer *zDealerStrings;
const StrPointer *zTalkMenuStrings;
const StrPointer *gzMoneyAmounts;
const StrPointer gzProsLabel;
const StrPointer gzConsLabel;
const StrPointer *gMoneyStatsDesc;
const StrPointer *gWeaponStatsDesc;
const StrPointer *sKeyDescriptionStrings;
const StrPointer *zHealthStr;
const StrPointer *zVehicleName;
const StrPointer *pExitingSectorHelpText;
const StrPointer *InsContractText;
const StrPointer *InsInfoText;
const StrPointer *MercAccountText;
const StrPointer *MercInfo;
const StrPointer *MercNoAccountText;
const StrPointer *MercHomePageText;
const StrPointer *sFuneralString;
const StrPointer *sFloristText;
const StrPointer *sOrderFormText;
const StrPointer *sFloristGalleryText;
const StrPointer *sFloristCards;
const StrPointer *BobbyROrderFormText;
const StrPointer *BobbyRText;
const StrPointer str_bobbyr_guns_num_guns_that_use_ammo;
const StrPointer *BobbyRaysFrontText;
const StrPointer *AimSortText;
const StrPointer str_aim_sort_price;
const StrPointer str_aim_sort_experience;
const StrPointer str_aim_sort_marksmanship;
const StrPointer str_aim_sort_medical;
const StrPointer str_aim_sort_explosives;
const StrPointer str_aim_sort_mechanical;
const StrPointer str_aim_sort_ascending;
const StrPointer str_aim_sort_descending;
const StrPointer *AimPolicyText;
const StrPointer *AimMemberText;
const StrPointer *CharacterInfo;
const StrPointer *VideoConfercingText;
const StrPointer *AimPopUpText;
const StrPointer AimLinkText;
const StrPointer *AimHistoryText;
const StrPointer *AimFiText;
const StrPointer *AimAlumniText;
const StrPointer *AimScreenText;
const StrPointer *AimBottomMenuText;
const StrPointer *zMarksMapScreenText;
const StrPointer *gpStrategicString;
const StrPointer gpGameClockString;
const StrPointer *SKI_Text;
const StrPointer *SkiMessageBoxText;
const StrPointer *zSaveLoadText;
const StrPointer *zOptionsText;
const StrPointer *gzGIOScreenText;
const StrPointer gzHelpScreenText;
const StrPointer *gzLaptopHelpText;
const StrPointer *gzMoneyWithdrawMessageText;
const StrPointer gzCopyrightText;
const StrPointer *BrokenLinkText;
const StrPointer *gzBobbyRShipmentText;
const StrPointer *zGioDifConfirmText;
const StrPointer *gzCreditNames;
const StrPointer *gzCreditNameTitle;
const StrPointer *gzCreditNameFunny;
const StrPointer pContractButtonString;
const StrPointer gzIntroScreen;
const StrPointer *pUpdatePanelButtons;
const StrPointer *sRepairsDoneString;
const StrPointer str_ceramic_plates_smashed;
const StrPointer str_arrival_rerouted;
const StrPointer str_stat_health;
const StrPointer str_stat_agility;
const StrPointer str_stat_dexterity;
const StrPointer str_stat_strength;
const StrPointer str_stat_leadership;
const StrPointer str_stat_wisdom;
const StrPointer str_stat_exp_level;
const StrPointer str_stat_marksmanship;
const StrPointer str_stat_mechanical;
const StrPointer str_stat_explosive;
const StrPointer str_stat_medical;
const StrPointer *str_stat_list;
const StrPointer *str_aim_sort_list;
const StrPointer *zNewTacticalMessages;
const StrPointer str_iron_man_mode_warning;
};
/** Current language resources. */
extern const LanguageRes *g_langRes;
/* --------------------------------------------------------------------------------------------
*/
/* below are defines that help to keep the original source code intact */
/* --------------------------------------------------------------------------------------------
*/
#define AmmoCaliber (g_langRes->AmmoCaliber)
#define BobbyRayAmmoCaliber (g_langRes->BobbyRayAmmoCaliber)
#define WeaponType (g_langRes->WeaponType)
#define TeamTurnString (g_langRes->TeamTurnString)
#define pAssignMenuStrings (g_langRes->pAssignMenuStrings)
#define pTrainingStrings (g_langRes->pTrainingStrings)
#define pTrainingMenuStrings (g_langRes->pTrainingMenuStrings)
#define pAttributeMenuStrings (g_langRes->pAttributeMenuStrings)
#define pVehicleStrings (g_langRes->pVehicleStrings)
#define pShortAttributeStrings (g_langRes->pShortAttributeStrings)
#define pContractStrings (g_langRes->pContractStrings)
#define pAssignmentStrings (g_langRes->pAssignmentStrings)
#define pConditionStrings (g_langRes->pConditionStrings)
#define pTownNames (g_langRes->pTownNames)
#define g_towns_locative (g_langRes->g_towns_locative)
#define pPersonnelScreenStrings (g_langRes->pPersonnelScreenStrings)
#define pUpperLeftMapScreenStrings (g_langRes->pUpperLeftMapScreenStrings)
#define pTacticalPopupButtonStrings (g_langRes->pTacticalPopupButtonStrings)
#define pSquadMenuStrings (g_langRes->pSquadMenuStrings)
#define pDoorTrapStrings (g_langRes->pDoorTrapStrings)
#define pLongAssignmentStrings (g_langRes->pLongAssignmentStrings)
#define pMapScreenMouseRegionHelpText (g_langRes->pMapScreenMouseRegionHelpText)
#define pNoiseVolStr (g_langRes->pNoiseVolStr)
#define pNoiseTypeStr (g_langRes->pNoiseTypeStr)
#define pDirectionStr (g_langRes->pDirectionStr)
#define pRemoveMercStrings (g_langRes->pRemoveMercStrings)
#define sTimeStrings (g_langRes->sTimeStrings)
#define pLandTypeStrings (g_langRes->pLandTypeStrings)
#define pInvPanelTitleStrings (g_langRes->pInvPanelTitleStrings)
#define pPOWStrings (g_langRes->pPOWStrings)
#define pMilitiaString (g_langRes->pMilitiaString)
#define pMilitiaButtonString (g_langRes->pMilitiaButtonString)
#define pEpcMenuStrings (g_langRes->pEpcMenuStrings)
#define pRepairStrings (g_langRes->pRepairStrings)
#define sPreStatBuildString (g_langRes->sPreStatBuildString)
#define sStatGainStrings (g_langRes->sStatGainStrings)
#define pHelicopterEtaStrings (g_langRes->pHelicopterEtaStrings)
#define sMapLevelString (g_langRes->sMapLevelString)
#define gsLoyalString (g_langRes->gsLoyalString)
#define gsUndergroundString (g_langRes->gsUndergroundString)
#define gsTimeStrings (g_langRes->gsTimeStrings)
#define sFacilitiesStrings (g_langRes->sFacilitiesStrings)
#define pMapPopUpInventoryText (g_langRes->pMapPopUpInventoryText)
#define pwTownInfoStrings (g_langRes->pwTownInfoStrings)
#define pwMineStrings (g_langRes->pwMineStrings)
#define pwMiscSectorStrings (g_langRes->pwMiscSectorStrings)
#define pMapInventoryErrorString (g_langRes->pMapInventoryErrorString)
#define pMapInventoryStrings (g_langRes->pMapInventoryStrings)
#define pMovementMenuStrings (g_langRes->pMovementMenuStrings)
#define pUpdateMercStrings (g_langRes->pUpdateMercStrings)
#define pMapScreenBorderButtonHelpText (g_langRes->pMapScreenBorderButtonHelpText)
#define pMapScreenBottomFastHelp (g_langRes->pMapScreenBottomFastHelp)
#define pMapScreenBottomText (g_langRes->pMapScreenBottomText)
#define pMercDeadString (g_langRes->pMercDeadString)
#define pSenderNameList (g_langRes->pSenderNameList)
#define pNewMailStrings (g_langRes->pNewMailStrings)
#define pDeleteMailStrings (g_langRes->pDeleteMailStrings)
#define pEmailHeaders (g_langRes->pEmailHeaders)
#define pEmailTitleText (g_langRes->pEmailTitleText)
#define pFinanceTitle (g_langRes->pFinanceTitle)
#define pFinanceSummary (g_langRes->pFinanceSummary)
#define pFinanceHeaders (g_langRes->pFinanceHeaders)
#define pTransactionText (g_langRes->pTransactionText)
#define pMoralStrings (g_langRes->pMoralStrings)
#define pSkyriderText (g_langRes->pSkyriderText)
#define str_left_equipment (g_langRes->str_left_equipment)
#define pMapScreenStatusStrings (g_langRes->pMapScreenStatusStrings)
#define pMapScreenPrevNextCharButtonHelpText (g_langRes->pMapScreenPrevNextCharButtonHelpText)
#define pEtaString (g_langRes->pEtaString)
#define pShortVehicleStrings (g_langRes->pShortVehicleStrings)
#define pTrashItemText (g_langRes->pTrashItemText)
#define pMapErrorString (g_langRes->pMapErrorString)
#define pMapPlotStrings (g_langRes->pMapPlotStrings)
#define pBullseyeStrings (g_langRes->pBullseyeStrings)
#define pMiscMapScreenMouseRegionHelpText (g_langRes->pMiscMapScreenMouseRegionHelpText)
#define str_he_leaves_where_drop_equipment (g_langRes->str_he_leaves_where_drop_equipment)
#define str_she_leaves_where_drop_equipment (g_langRes->str_she_leaves_where_drop_equipment)
#define str_he_leaves_drops_equipment (g_langRes->str_he_leaves_drops_equipment)
#define str_she_leaves_drops_equipment (g_langRes->str_she_leaves_drops_equipment)
#define pImpPopUpStrings (g_langRes->pImpPopUpStrings)
#define pImpButtonText (g_langRes->pImpButtonText)
#define pExtraIMPStrings (g_langRes->pExtraIMPStrings)
#define pFilesTitle (g_langRes->pFilesTitle)
#define pFilesSenderList (g_langRes->pFilesSenderList)
#define pHistoryLocations (g_langRes->pHistoryLocations)
#define pHistoryStrings (g_langRes->pHistoryStrings)
#define pHistoryHeaders (g_langRes->pHistoryHeaders)
#define pHistoryTitle (g_langRes->pHistoryTitle)
#define pShowBookmarkString (g_langRes->pShowBookmarkString)
#define pWebPagesTitles (g_langRes->pWebPagesTitles)
#define pWebTitle (g_langRes->pWebTitle)
#define pPersonnelString (g_langRes->pPersonnelString)
#define pErrorStrings (g_langRes->pErrorStrings)
#define pDownloadString (g_langRes->pDownloadString)
#define pBookMarkStrings (g_langRes->pBookMarkStrings)
#define pLaptopIcons (g_langRes->pLaptopIcons)
#define gsAtmStartButtonText (g_langRes->gsAtmStartButtonText)
#define pPersonnelTeamStatsStrings (g_langRes->pPersonnelTeamStatsStrings)
#define pPersonnelCurrentTeamStatsStrings (g_langRes->pPersonnelCurrentTeamStatsStrings)
#define pPersonelTeamStrings (g_langRes->pPersonelTeamStrings)
#define pPersonnelDepartedStateStrings (g_langRes->pPersonnelDepartedStateStrings)
#define pMapHortIndex (g_langRes->pMapHortIndex)
#define pMapVertIndex (g_langRes->pMapVertIndex)
#define pMapDepthIndex (g_langRes->pMapDepthIndex)
#define pLaptopTitles (g_langRes->pLaptopTitles)
#define pDayStrings (g_langRes->pDayStrings)
#define pMilitiaConfirmStrings (g_langRes->pMilitiaConfirmStrings)
#define pDeliveryLocationStrings (g_langRes->pDeliveryLocationStrings)
#define pSkillAtZeroWarning (g_langRes->pSkillAtZeroWarning)
#define pIMPBeginScreenStrings (g_langRes->pIMPBeginScreenStrings)
#define pIMPFinishButtonText (g_langRes->pIMPFinishButtonText)
#define pIMPFinishStrings (g_langRes->pIMPFinishStrings)
#define pIMPVoicesStrings (g_langRes->pIMPVoicesStrings)
#define pPersTitleText (g_langRes->pPersTitleText)
#define pPausedGameText (g_langRes->pPausedGameText)
#define zOptionsToggleText (g_langRes->zOptionsToggleText)
#define zOptionsScreenHelpText (g_langRes->zOptionsScreenHelpText)
#define pDoctorWarningString (g_langRes->pDoctorWarningString)
#define pMilitiaButtonsHelpText (g_langRes->pMilitiaButtonsHelpText)
#define pMapScreenJustStartedHelpText (g_langRes->pMapScreenJustStartedHelpText)
#define pLandMarkInSectorString (g_langRes->pLandMarkInSectorString)
#define gzMercSkillText (g_langRes->gzMercSkillText)
#define gzNonPersistantPBIText (g_langRes->gzNonPersistantPBIText)
#define gzMiscString (g_langRes->gzMiscString)
#define wMapScreenSortButtonHelpText (g_langRes->wMapScreenSortButtonHelpText)
#define pNewNoiseStr (g_langRes->pNewNoiseStr)
#define gzLateLocalizedString (g_langRes->gzLateLocalizedString)
#define pAntiHackerString (g_langRes->pAntiHackerString)
#define pMessageStrings (g_langRes->pMessageStrings)
#define ItemPickupHelpPopup (g_langRes->ItemPickupHelpPopup)
#define TacticalStr (g_langRes->TacticalStr)
#define LargeTacticalStr (g_langRes->LargeTacticalStr)
#define zDialogActions (g_langRes->zDialogActions)
#define zDealerStrings (g_langRes->zDealerStrings)
#define zTalkMenuStrings (g_langRes->zTalkMenuStrings)
#define gzMoneyAmounts (g_langRes->gzMoneyAmounts)
#define gzProsLabel (g_langRes->gzProsLabel)
#define gzConsLabel (g_langRes->gzConsLabel)
#define gMoneyStatsDesc (g_langRes->gMoneyStatsDesc)
#define gWeaponStatsDesc (g_langRes->gWeaponStatsDesc)
#define sKeyDescriptionStrings (g_langRes->sKeyDescriptionStrings)
#define zHealthStr (g_langRes->zHealthStr)
#define zVehicleName (g_langRes->zVehicleName)
#define pExitingSectorHelpText (g_langRes->pExitingSectorHelpText)
#define InsContractText (g_langRes->InsContractText)
#define InsInfoText (g_langRes->InsInfoText)
#define MercAccountText (g_langRes->MercAccountText)
#define MercInfo (g_langRes->MercInfo)
#define MercNoAccountText (g_langRes->MercNoAccountText)
#define MercHomePageText (g_langRes->MercHomePageText)
#define sFuneralString (g_langRes->sFuneralString)
#define sFloristText (g_langRes->sFloristText)
#define sOrderFormText (g_langRes->sOrderFormText)
#define sFloristGalleryText (g_langRes->sFloristGalleryText)
#define sFloristCards (g_langRes->sFloristCards)
#define BobbyROrderFormText (g_langRes->BobbyROrderFormText)
#define BobbyRText (g_langRes->BobbyRText)
#define str_bobbyr_guns_num_guns_that_use_ammo (g_langRes->str_bobbyr_guns_num_guns_that_use_ammo)
#define BobbyRaysFrontText (g_langRes->BobbyRaysFrontText)
#define AimSortText (g_langRes->AimSortText)
#define str_aim_sort_price (g_langRes->str_aim_sort_price)
#define str_aim_sort_experience (g_langRes->str_aim_sort_experience)
#define str_aim_sort_marksmanship (g_langRes->str_aim_sort_marksmanship)
#define str_aim_sort_medical (g_langRes->str_aim_sort_medical)
#define str_aim_sort_explosives (g_langRes->str_aim_sort_explosives)
#define str_aim_sort_mechanical (g_langRes->str_aim_sort_mechanical)
#define str_aim_sort_ascending (g_langRes->str_aim_sort_ascending)
#define str_aim_sort_descending (g_langRes->str_aim_sort_descending)
#define AimPolicyText (g_langRes->AimPolicyText)
#define AimMemberText (g_langRes->AimMemberText)
#define CharacterInfo (g_langRes->CharacterInfo)
#define VideoConfercingText (g_langRes->VideoConfercingText)
#define AimPopUpText (g_langRes->AimPopUpText)
#define AimLinkText (g_langRes->AimLinkText)
#define AimHistoryText (g_langRes->AimHistoryText)
#define AimFiText (g_langRes->AimFiText)
#define AimAlumniText (g_langRes->AimAlumniText)
#define AimScreenText (g_langRes->AimScreenText)
#define AimBottomMenuText (g_langRes->AimBottomMenuText)
#define zMarksMapScreenText (g_langRes->zMarksMapScreenText)
#define gpStrategicString (g_langRes->gpStrategicString)
#define gpGameClockString (g_langRes->gpGameClockString)
#define SKI_Text (g_langRes->SKI_Text)
#define SkiMessageBoxText (g_langRes->SkiMessageBoxText)
#define zSaveLoadText (g_langRes->zSaveLoadText)
#define zOptionsText (g_langRes->zOptionsText)
#define gzGIOScreenText (g_langRes->gzGIOScreenText)
#define gzHelpScreenText (g_langRes->gzHelpScreenText)
#define gzLaptopHelpText (g_langRes->gzLaptopHelpText)
#define gzMoneyWithdrawMessageText (g_langRes->gzMoneyWithdrawMessageText)
#define gzCopyrightText (g_langRes->gzCopyrightText)
#define BrokenLinkText (g_langRes->BrokenLinkText)
#define gzBobbyRShipmentText (g_langRes->gzBobbyRShipmentText)
#define zGioDifConfirmText (g_langRes->zGioDifConfirmText)
#define gzCreditNames (g_langRes->gzCreditNames)
#define gzCreditNameTitle (g_langRes->gzCreditNameTitle)
#define gzCreditNameFunny (g_langRes->gzCreditNameFunny)
#define pContractButtonString (g_langRes->pContractButtonString)
#define gzIntroScreen (g_langRes->gzIntroScreen)
#define pUpdatePanelButtons (g_langRes->pUpdatePanelButtons)
#define sRepairsDoneString (g_langRes->sRepairsDoneString)
#define str_ceramic_plates_smashed (g_langRes->str_ceramic_plates_smashed)
#define str_arrival_rerouted (g_langRes->str_arrival_rerouted)
#define str_stat_health (g_langRes->str_stat_health)
#define str_stat_agility (g_langRes->str_stat_agility)
#define str_stat_dexterity (g_langRes->str_stat_dexterity)
#define str_stat_strength (g_langRes->str_stat_strength)
#define str_stat_leadership (g_langRes->str_stat_leadership)
#define str_stat_wisdom (g_langRes->str_stat_wisdom)
#define str_stat_exp_level (g_langRes->str_stat_exp_level)
#define str_stat_marksmanship (g_langRes->str_stat_marksmanship)
#define str_stat_mechanical (g_langRes->str_stat_mechanical)
#define str_stat_explosive (g_langRes->str_stat_explosive)
#define str_stat_medical (g_langRes->str_stat_medical)
#define str_stat_list (g_langRes->str_stat_list)
#define str_aim_sort_list (g_langRes->str_aim_sort_list)
#define zNewTacticalMessages (g_langRes->zNewTacticalMessages)
#define str_iron_man_mode_warning (g_langRes->str_iron_man_mode_warning)
/* --------------------------------------------------------------------------------------------
*/
enum {
STR_LATE_01,
STR_LATE_02,
STR_LATE_03,
STR_LATE_04,
STR_LATE_05,
STR_LATE_06,
STR_LATE_07,
STR_LATE_08,
STR_LATE_09,
STR_LATE_10,
STR_LATE_11,
STR_LATE_12,
STR_LATE_13,
STR_LATE_14,
STR_LATE_15,
STR_LATE_16,
STR_LATE_17,
STR_LATE_18,
STR_LATE_19,
STR_LATE_20,
STR_LATE_21,
STR_LATE_22,
STR_LATE_23,
STR_LATE_24,
STR_LATE_25,
STR_LATE_26,
STR_LATE_27,
STR_LATE_28,
STR_LATE_29,
STR_LATE_30,
STR_LATE_31,
STR_LATE_32,
STR_LATE_33,
STR_LATE_34,
STR_LATE_35,
STR_LATE_36,
STR_LATE_37,
STR_LATE_38,
STR_LATE_39,
STR_LATE_40,
STR_LATE_41,
STR_LATE_42,
STR_LATE_43,
STR_LATE_44,
STR_LATE_45,
STR_LATE_46,
STR_LATE_47,
STR_LATE_48,
STR_LATE_49,
STR_LATE_50,
STR_LATE_51,
STR_LATE_52,
STR_LATE_53,
STR_LATE_54,
STR_LATE_55,
STR_LATE_56,
STR_LATE_57,
STR_LATE_58
};
enum {
MSG_EXITGAME,
MSG_OK,
MSG_YES,
MSG_NO,
MSG_CANCEL,
MSG_REHIRE,
MSG_LIE,
MSG_NODESC,
MSG_SAVESUCCESS,
MSG_DAY,
MSG_MERCS,
MSG_EMPTYSLOT,
MSG_RPM,
MSG_MINUTE_ABBREVIATION,
MSG_METER_ABBREVIATION,
MSG_ROUNDS_ABBREVIATION,
MSG_KILOGRAM_ABBREVIATION,
MSG_POUND_ABBREVIATION,
MSG_HOMEPAGE,
MSG_USDOLLAR_ABBREVIATION,
MSG_LOWERCASE_NA,
MSG_MEANWHILE,
MSG_ARRIVE,
MSG_VERSION,
MSG_EMPTY_QUICK_SAVE_SLOT,
MSG_QUICK_SAVE_RESERVED_FOR_TACTICAL,
MSG_OPENED,
MSG_CLOSED,
MSG_LOWDISKSPACE_WARNING,
MSG_MERC_CAUGHT_ITEM,
MSG_MERC_TOOK_DRUG,
MSG_MERC_HAS_NO_MEDSKILL,
MSG_INTEGRITY_WARNING,
MSG_CDROM_SAVE,
MSG_CANT_FIRE_HERE,
MSG_CANT_CHANGE_STANCE,
MSG_DROP,
MSG_THROW,
MSG_PASS,
MSG_ITEM_PASSED_TO_MERC,
MSG_NO_ROOM_TO_PASS_ITEM,
MSG_END_ATTACHMENT_LIST,
MSG_CHEAT_LEVEL_ONE,
MSG_CHEAT_LEVEL_TWO,
MSG_SQUAD_ON_STEALTHMODE,
MSG_SQUAD_OFF_STEALTHMODE,
MSG_MERC_ON_STEALTHMODE,
MSG_MERC_OFF_STEALTHMODE,
MSG_WIREFRAMES_ADDED,
MSG_WIREFRAMES_REMOVED,
MSG_CANT_GO_UP,
MSG_CANT_GO_DOWN,
MSG_ENTERING_LEVEL,
MSG_LEAVING_BASEMENT,
MSG_DASH_S, // the old 's
MSG_TACKING_MODE_OFF,
MSG_TACKING_MODE_ON,
MSG_3DCURSOR_OFF,
MSG_3DCURSOR_ON,
MSG_SQUAD_ACTIVE,
MSG_CANT_AFFORD_TO_PAY_NPC_DAILY_SALARY_MSG,
MSG_SKIP,
MSG_EPC_CANT_TRAVERSE,
MSG_CDROM_SAVE_GAME,
MSG_DRANK_SOME,
MSG_PACKAGE_ARRIVES,
MSG_JUST_HIRED_MERC_ARRIVAL_LOCATION_POPUP,
MSG_HISTORY_UPDATED,
};
enum {
STR_LOSES_1_WISDOM,
STR_LOSES_1_DEX,
STR_LOSES_1_STRENGTH,
STR_LOSES_1_AGIL,
STR_LOSES_WISDOM,
STR_LOSES_DEX,
STR_LOSES_STRENGTH,
STR_LOSES_AGIL,
STR_INTERRUPT,
STR_PLAYER_REINFORCEMENTS,
STR_PLAYER_RELOADS,
STR_PLAYER_NOT_ENOUGH_APS,
STR_RELIABLE,
STR_UNRELIABLE,
STR_EASY_TO_REPAIR,
STR_HARD_TO_REPAIR,
STR_HIGH_DAMAGE,
STR_LOW_DAMAGE,
STR_QUICK_FIRING,
STR_SLOW_FIRING,
STR_LONG_RANGE,
STR_SHORT_RANGE,
STR_LIGHT,
STR_HEAVY,
STR_SMALL,
STR_FAST_BURST,
STR_NO_BURST,
STR_LARGE_AMMO_CAPACITY,
STR_SMALL_AMMO_CAPACITY,
STR_CAMO_WORN_OFF,
STR_CAMO_WASHED_OFF,
STR_2ND_CLIP_DEPLETED,
STR_STOLE_SOMETHING,
STR_NOT_BURST_CAPABLE,
STR_ATTACHMENT_ALREADY,
STR_MERGE_ITEMS,
STR_CANT_ATTACH,
STR_NONE,
STR_EJECT_AMMO,
STR_ATTACHMENTS,
STR_CANT_USE_TWO_ITEMS,
STR_ATTACHMENT_HELP,
STR_ATTACHMENT_INVALID_HELP,
STR_SECTOR_NOT_CLEARED,
STR_NEED_TO_GIVE_MONEY,
STR_HEAD_HIT,
STR_ABANDON_FIGHT,
STR_PERMANENT_ATTACHMENT,
STR_ENERGY_BOOST,
STR_SLIPPED_MARBLES,
STR_FAILED_TO_STEAL_SOMETHING,
STR_REPAIRED,
STR_INTERRUPT_FOR,
STR_SURRENDER,
STR_REFUSE_FIRSTAID,
STR_REFUSE_FIRSTAID_FOR_CREATURE,
STR_HOW_TO_USE_SKYRIDDER,
STR_RELOAD_ONLY_ONE_GUN,
STR_BLOODCATS_TURN,
};
enum {
AIR_RAID_TURN_STR,
BEGIN_AUTOBANDAGE_PROMPT_STR,
NOTICING_MISSING_ITEMS_FROM_SHIPMENT_STR,
DOOR_LOCK_DESCRIPTION_STR,
DOOR_THERE_IS_NO_LOCK_STR,
DOOR_LOCK_UNTRAPPED_STR,
DOOR_NOT_PROPER_KEY_STR,
DOOR_LOCK_IS_NOT_TRAPPED_STR,
DOOR_LOCK_HAS_BEEN_LOCKED_STR,
DOOR_DOOR_MOUSE_DESCRIPTION,
DOOR_TRAPPED_MOUSE_DESCRIPTION,
DOOR_LOCKED_MOUSE_DESCRIPTION,
DOOR_UNLOCKED_MOUSE_DESCRIPTION,
DOOR_BROKEN_MOUSE_DESCRIPTION,
ACTIVATE_SWITCH_PROMPT,
DISARM_TRAP_PROMPT,
ITEMPOOL_POPUP_MORE_STR,
ITEM_HAS_BEEN_PLACED_ON_GROUND_STR,
ITEM_HAS_BEEN_GIVEN_TO_STR,
GUY_HAS_BEEN_PAID_IN_FULL_STR,
GUY_STILL_OWED_STR,
CHOOSE_BOMB_FREQUENCY_STR,
CHOOSE_TIMER_STR,
CHOOSE_REMOTE_FREQUENCY_STR,
DISARM_BOOBYTRAP_PROMPT,
REMOVE_BLUE_FLAG_PROMPT,
PLACE_BLUE_FLAG_PROMPT,
ENDING_TURN,
ATTACK_OWN_GUY_PROMPT,
VEHICLES_NO_STANCE_CHANGE_STR,
ROBOT_NO_STANCE_CHANGE_STR,
CANNOT_STANCE_CHANGE_STR,
CANNOT_DO_FIRST_AID_STR,
CANNOT_NO_NEED_FIRST_AID_STR,
CANT_MOVE_THERE_STR,
CANNOT_RECRUIT_TEAM_FULL,
HAS_BEEN_RECRUITED_STR,
BALANCE_OWED_STR,
ESCORT_PROMPT,
HIRE_PROMPT,
BOXING_PROMPT,
BUY_VEST_PROMPT,
NOW_BING_ESCORTED_STR,
JAMMED_ITEM_STR,
ROBOT_NEEDS_GIVEN_CALIBER_STR,
CANNOT_THROW_TO_DEST_STR,
TOGGLE_STEALTH_MODE_POPUPTEXT,
MAPSCREEN_POPUPTEXT,
END_TURN_POPUPTEXT,
TALK_CURSOR_POPUPTEXT,
TOGGLE_MUTE_POPUPTEXT,
CHANGE_STANCE_UP_POPUPTEXT,
CURSOR_LEVEL_POPUPTEXT,
JUMPCLIMB_POPUPTEXT,
CHANGE_STANCE_DOWN_POPUPTEXT,
EXAMINE_CURSOR_POPUPTEXT,
PREV_MERC_POPUPTEXT,
NEXT_MERC_POPUPTEXT,
CHANGE_OPTIONS_POPUPTEXT,
TOGGLE_BURSTMODE_POPUPTEXT,
LOOK_CURSOR_POPUPTEXT,
MERC_VITAL_STATS_POPUPTEXT,
CANNOT_DO_INV_STUFF_STR,
CONTINUE_OVER_FACE_STR,
MUTE_OFF_STR,
MUTE_ON_STR,
DRIVER_POPUPTEXT,
EXIT_VEHICLE_POPUPTEXT,
CHANGE_SQUAD_POPUPTEXT,
DRIVE_POPUPTEXT,
NOT_APPLICABLE_POPUPTEXT,
USE_HANDTOHAND_POPUPTEXT,
USE_FIREARM_POPUPTEXT,
USE_BLADE_POPUPTEXT,
USE_EXPLOSIVE_POPUPTEXT,
USE_MEDKIT_POPUPTEXT,
CATCH_STR,
RELOAD_STR,
GIVE_STR,
LOCK_TRAP_HAS_GONE_OFF_STR,
MERC_HAS_ARRIVED_STR,
GUY_HAS_RUN_OUT_OF_APS_STR,
MERC_IS_UNAVAILABLE_STR,
MERC_IS_ALL_BANDAGED_STR,
MERC_IS_OUT_OF_BANDAGES_STR,
ENEMY_IN_SECTOR_STR,
NO_ENEMIES_IN_SIGHT_STR,
NOT_ENOUGH_APS_STR,
NOBODY_USING_REMOTE_STR,
BURST_FIRE_DEPLETED_CLIP_STR,
ENEMY_TEAM_MERC_NAME,
CREATURE_TEAM_MERC_NAME,
MILITIA_TEAM_MERC_NAME,
CIV_TEAM_MERC_NAME,
// The text for the 'exiting sector' gui
EXIT_GUI_TITLE_STR,
OK_BUTTON_TEXT_STR,
CANCEL_BUTTON_TEXT_STR,
EXIT_GUI_SELECTED_MERC_STR,
EXIT_GUI_ALL_MERCS_IN_SQUAD_STR,
EXIT_GUI_GOTO_SECTOR_STR,
EXIT_GUI_GOTO_MAP_STR,
CANNOT_LEAVE_SECTOR_FROM_SIDE_STR,
MERC_IS_TOO_FAR_AWAY_STR,
REMOVING_TREETOPS_STR,
SHOWING_TREETOPS_STR,
CROW_HIT_LOCATION_STR,
NECK_HIT_LOCATION_STR,
HEAD_HIT_LOCATION_STR,
TORSO_HIT_LOCATION_STR,
LEGS_HIT_LOCATION_STR,
YESNOLIE_STR,
GUN_GOT_FINGERPRINT,
GUN_NOGOOD_FINGERPRINT,
GUN_GOT_TARGET,
NO_PATH,
MONEY_BUTTON_HELP_TEXT,
AUTOBANDAGE_NOT_NEEDED,
SHORT_JAMMED_GUN,
CANT_GET_THERE,
REFUSE_EXCHANGE_PLACES,
PAY_MONEY_PROMPT,
FREE_MEDICAL_PROMPT,
MARRY_DARYL_PROMPT,
KEYRING_HELP_TEXT,
EPC_CANNOT_DO_THAT,
SPARE_KROTT_PROMPT,
OUT_OF_RANGE_STRING,
CIV_TEAM_MINER_NAME,
VEHICLE_CANT_MOVE_IN_TACTICAL,
CANT_AUTOBANDAGE_PROMPT,
NO_PATH_FOR_MERC,
POW_MERCS_ARE_HERE,
LOCK_HAS_BEEN_HIT,
LOCK_HAS_BEEN_DESTROYED,
DOOR_IS_BUSY,
VEHICLE_VITAL_STATS_POPUPTEXT,
NO_LOS_TO_TALK_TARGET,
};
enum {
EXIT_GUI_LOAD_ADJACENT_SECTOR_HELPTEXT,
EXIT_GUI_GOTO_MAPSCREEN_HELPTEXT,
EXIT_GUI_CANT_LEAVE_HOSTILE_SECTOR_HELPTEXT,
EXIT_GUI_MUST_LOAD_ADJACENT_SECTOR_HELPTEXT,
EXIT_GUI_MUST_GOTO_MAPSCREEN_HELPTEXT,
EXIT_GUI_ESCORTED_CHARACTERS_MUST_BE_ESCORTED_HELPTEXT,
EXIT_GUI_MERC_CANT_ISOLATE_EPC_HELPTEXT_MALE_SINGULAR,
EXIT_GUI_MERC_CANT_ISOLATE_EPC_HELPTEXT_FEMALE_SINGULAR,
EXIT_GUI_MERC_CANT_ISOLATE_EPC_HELPTEXT_MALE_PLURAL,
EXIT_GUI_MERC_CANT_ISOLATE_EPC_HELPTEXT_FEMALE_PLURAL,
EXIT_GUI_ALL_MERCS_MUST_BE_TOGETHER_TO_ALLOW_HELPTEXT,
EXIT_GUI_SINGLE_TRAVERSAL_WILL_SEPARATE_SQUADS_HELPTEXT,
EXIT_GUI_ALL_TRAVERSAL_WILL_MOVE_CURRENT_SQUAD_HELPTEXT,
EXIT_GUI_ESCORTED_CHARACTERS_CANT_LEAVE_SECTOR_ALONE_STR,
};
enum {
LARGESTR_NOONE_LEFT_CAPABLE_OF_BATTLE_STR,
LARGESTR_NOONE_LEFT_CAPABLE_OF_BATTLE_AGAINST_CREATURES_STR,
LARGESTR_HAVE_BEEN_CAPTURED,
};
// Insurance Contract.c
enum {
INS_CONTRACT_PREVIOUS,
INS_CONTRACT_NEXT,
INS_CONTRACT_ACCEPT,
INS_CONTRACT_CLEAR,
};
// Insurance Info
enum {
INS_INFO_PREVIOUS,
INS_INFO_NEXT,
};
// Merc Account.c
enum {
MERC_ACCOUNT_AUTHORIZE,
MERC_ACCOUNT_HOME,
MERC_ACCOUNT_ACCOUNT,
MERC_ACCOUNT_MERC,
MERC_ACCOUNT_DAYS,
MERC_ACCOUNT_RATE,
MERC_ACCOUNT_CHARGE,
MERC_ACCOUNT_TOTAL,
MERC_ACCOUNT_AUTHORIZE_CONFIRMATION,
MERC_ACCOUNT_NOT_ENOUGH_MONEY,
};
// MercFile.c
enum {
MERC_FILES_PREVIOUS,
MERC_FILES_HIRE,
MERC_FILES_NEXT,
MERC_FILES_ADDITIONAL_INFO,
MERC_FILES_HOME,
MERC_FILES_ALREADY_HIRED, // 5
MERC_FILES_SALARY,
MERC_FILES_PER_DAY,
MERC_FILES_MERC_IS_DEAD,
MERC_FILES_HIRE_TO_MANY_PEOPLE_WARNING,
MERC_FILES_MERC_UNAVAILABLE,
};
// MercNoAccount.c
enum {
MERC_NO_ACC_OPEN_ACCOUNT,
MERC_NO_ACC_CANCEL,
MERC_NO_ACC_NO_ACCOUNT_OPEN_ONE,
};
// Merc HomePage
enum {
MERC_SPECK_OWNER,
MERC_OPEN_ACCOUNT,
MERC_VIEW_ACCOUNT,
MERC_VIEW_FILES,
MERC_SPECK_COM,
};
// Funerl.c
enum {
FUNERAL_INTRO_1,
FUNERAL_INTRO_2,
FUNERAL_INTRO_3,
FUNERAL_INTRO_4,
FUNERAL_INTRO_5,
FUNERAL_SEND_FLOWERS, // 5
FUNERAL_CASKET_URN,
FUNERAL_CREMATION,
FUNERAL_PRE_FUNERAL,
FUNERAL_FUNERAL_ETTIQUETTE,
FUNERAL_OUR_CONDOLENCES, // 10
FUNERAL_OUR_SYMPATHIES,
};
// Florist.c
enum {
FLORIST_GALLERY,
FLORIST_DROP_ANYWHERE,
FLORIST_PHONE_NUMBER,
FLORIST_STREET_ADDRESS,
FLORIST_WWW_ADDRESS,
FLORIST_ADVERTISEMENT_1,
FLORIST_ADVERTISEMENT_2,
FLORIST_ADVERTISEMENT_3,
FLORIST_ADVERTISEMENT_4,
FLORIST_ADVERTISEMENT_5,
FLORIST_ADVERTISEMENT_6,
FLORIST_ADVERTISEMENT_7,
FLORIST_ADVERTISEMENT_8,
};
// Florist Order Form
enum {
FLORIST_ORDER_BACK,
FLORIST_ORDER_SEND,
FLORIST_ORDER_CLEAR,
FLORIST_ORDER_GALLERY,
FLORIST_ORDER_NAME_BOUQUET,
FLORIST_ORDER_PRICE, // 5
FLORIST_ORDER_ORDER_NUMBER,
FLORIST_ORDER_DELIVERY_DATE,
FLORIST_ORDER_NEXT_DAY,
FLORIST_ORDER_GETS_THERE,
FLORIST_ORDER_DELIVERY_LOCATION, // 10
FLORIST_ORDER_ADDITIONAL_CHARGES,
FLORIST_ORDER_CRUSHED,
FLORIST_ORDER_BLACK_ROSES,
FLORIST_ORDER_WILTED,
FLORIST_ORDER_FRUIT_CAKE, // 15
FLORIST_ORDER_PERSONAL_SENTIMENTS,
FLORIST_ORDER_CARD_LENGTH,
FLORIST_ORDER_SELECT_FROM_OURS,
FLORIST_ORDER_STANDARDIZED_CARDS,
FLORIST_ORDER_BILLING_INFO, // 20
FLORIST_ORDER_NAME,
};
// Florist Gallery.c
enum {
FLORIST_GALLERY_PREV,
FLORIST_GALLERY_NEXT,
FLORIST_GALLERY_CLICK_TO_ORDER,
FLORIST_GALLERY_ADDIFTIONAL_FEE,
FLORIST_GALLERY_HOME,
};
// Florist Cards
enum {
FLORIST_CARDS_CLICK_SELECTION,
FLORIST_CARDS_BACK,
};
// Bobbyr Mail Order.c
enum {
BOBBYR_ORDER_FORM,
BOBBYR_QTY,
BOBBYR_WEIGHT,
BOBBYR_NAME,
BOBBYR_UNIT_PRICE,
BOBBYR_TOTAL,
BOBBYR_SUB_TOTAL,
BOBBYR_S_H,
BOBBYR_GRAND_TOTAL,
BOBBYR_SHIPPING_LOCATION,
BOBBYR_SHIPPING_SPEED,
BOBBYR_COST,
BOBBYR_OVERNIGHT_EXPRESS,
BOBBYR_BUSINESS_DAYS,
BOBBYR_STANDARD_SERVICE,
BOBBYR_CLEAR_ORDER,
BOBBYR_ACCEPT_ORDER,
BOBBYR_BACK,
BOBBYR_HOME,
BOBBYR_USED_TEXT,
BOBBYR_CANT_AFFORD_PURCHASE,
BOBBYR_SELECT_DEST,
BOBBYR_CONFIRM_DEST,
BOBBYR_PACKAGE_WEIGHT,
BOBBYR_MINIMUM_WEIGHT,
BOBBYR_GOTOSHIPMENT_PAGE,
};
// BobbyRGuns.c
enum {
BOBBYR_GUNS_TO_ORDER,
BOBBYR_GUNS_CLICK_ON_ITEMS,
BOBBYR_GUNS_PREVIOUS_ITEMS,
BOBBYR_GUNS_GUNS,
BOBBYR_GUNS_AMMO,
BOBBYR_GUNS_ARMOR, // 5
BOBBYR_GUNS_MISC,
BOBBYR_GUNS_USED,
BOBBYR_GUNS_MORE_ITEMS,
BOBBYR_GUNS_ORDER_FORM,
BOBBYR_GUNS_HOME, // 10
BOBBYR_GUNS_WGHT,
BOBBYR_GUNS_CALIBRE,
BOBBYR_GUNS_MAGAZINE,
BOBBYR_GUNS_RANGE,
BOBBYR_GUNS_DAMAGE,
BOBBYR_GUNS_ROF, // 5
BOBBYR_GUNS_COST,
BOBBYR_GUNS_IN_STOCK,
BOBBYR_GUNS_QTY_ON_ORDER,
BOBBYR_GUNS_DAMAGED,
BOBBYR_GUNS_SUB_TOTAL,
BOBBYR_GUNS_PERCENT_FUNCTIONAL,
BOBBYR_MORE_THEN_10_PURCHASES,
BOBBYR_MORE_NO_MORE_IN_STOCK,
BOBBYR_NO_MORE_STOCK,
};
// BobbyR.c
enum {
BOBBYR_ADVERTISMENT_1,
BOBBYR_ADVERTISMENT_2,
BOBBYR_USED,
BOBBYR_MISC,
BOBBYR_GUNS,
BOBBYR_AMMO,
BOBBYR_ARMOR,
BOBBYR_ADVERTISMENT_3,
BOBBYR_UNDER_CONSTRUCTION,
};
// Aim Sort.c
enum { AIM_AIMMEMBERS, SORT_BY, MUGSHOT_INDEX, MERCENARY_FILES, ALUMNI_GALLERY };
// Aim Policies.c
enum {
AIM_POLICIES_PREVIOUS,
AIM_POLICIES_HOMEPAGE,
AIM_POLICIES_POLICY,
AIM_POLICIES_NEXT_PAGE,
AIM_POLICIES_DISAGREE,
AIM_POLICIES_AGREE,
};
// Aim Member.c
enum {
AIM_MEMBER_FEE,
AIM_MEMBER_CONTRACT,
AIM_MEMBER_1_DAY,
AIM_MEMBER_1_WEEK,
AIM_MEMBER_2_WEEKS,
AIM_MEMBER_PREVIOUS,
AIM_MEMBER_CONTACT,
AIM_MEMBER_NEXT,
AIM_MEMBER_ADDTNL_INFO,
AIM_MEMBER_ACTIVE_MEMBERS,
AIM_MEMBER_OPTIONAL_GEAR,
AIM_MEMBER_MEDICAL_DEPOSIT_REQ,
};
// Aim Member.c
enum {
AIM_MEMBER_CONTRACT_CHARGE,
AIM_MEMBER_ONE_DAY,
AIM_MEMBER_ONE_WEEK,
AIM_MEMBER_TWO_WEEKS,
AIM_MEMBER_NO_EQUIPMENT,
AIM_MEMBER_BUY_EQUIPMENT, // 5
AIM_MEMBER_TRANSFER_FUNDS,
AIM_MEMBER_CANCEL,
AIM_MEMBER_HIRE,
AIM_MEMBER_HANG_UP,
AIM_MEMBER_OK, // 10
AIM_MEMBER_LEAVE_MESSAGE,
AIM_MEMBER_VIDEO_CONF_WITH,
AIM_MEMBER_CONNECTING,
AIM_MEMBER_WITH_MEDICAL, // 14
};
// Aim Member.c
enum {
AIM_MEMBER_FUNDS_TRANSFER_SUCCESFUL,
AIM_MEMBER_FUNDS_TRANSFER_FAILED,
AIM_MEMBER_NOT_ENOUGH_FUNDS,
AIM_MEMBER_ON_ASSIGNMENT,
AIM_MEMBER_LEAVE_MSG,
AIM_MEMBER_DEAD,
AIM_MEMBER_ALREADY_HAVE_20_MERCS,
AIM_MEMBER_PRERECORDED_MESSAGE,
AIM_MEMBER_MESSAGE_RECORDED,
};
// AIM Link.c
// Aim History
enum {
AIM_HISTORY_TITLE,
AIM_HISTORY_PREVIOUS,
AIM_HISTORY_HOME,
AIM_HISTORY_AIM_ALUMNI,
AIM_HISTORY_NEXT,
};
// Aim Facial Index
enum {
AIM_FI_PRICE,
AIM_FI_EXP,
AIM_FI_MARKSMANSHIP,
AIM_FI_MEDICAL,
AIM_FI_EXPLOSIVES,
AIM_FI_MECHANICAL,
AIM_FI_AIM_MEMBERS_SORTED_ASCENDING,
AIM_FI_AIM_MEMBERS_SORTED_DESCENDING,
AIM_FI_LEFT_CLICK,
AIM_FI_TO_SELECT,
AIM_FI_RIGHT_CLICK,
AIM_FI_TO_ENTER_SORT_PAGE,
AIM_FI_DEAD,
};
// AimArchives.
enum {
AIM_ALUMNI_PAGE_1,
AIM_ALUMNI_PAGE_2,
AIM_ALUMNI_PAGE_3,
AIM_ALUMNI_ALUMNI,
AIM_ALUMNI_DONE,
};
// Aim Home Page
enum {
// AIM_INFO_1,
// AIM_INFO_2,
// AIM_POLICIES,
// AIM_HISTORY,
// AIM_LINKS, //5
AIM_INFO_3,
AIM_INFO_4,
AIM_INFO_5,
AIM_INFO_6,
AIM_INFO_7, // 9
AIM_BOBBYR_ADD1,
AIM_BOBBYR_ADD2,
AIM_BOBBYR_ADD3,
};
// Aim Home Page
enum {
AIM_HOME,
AIM_MEMBERS,
AIM_ALUMNI,
AIM_POLICIES,
AIM_HISTORY,
AIM_LINKS,
};
// MapScreen
enum {
MAP_SCREEN_MAP_LEVEL,
MAP_SCREEN_NO_MILITIA_TEXT,
};
enum {
// Coordinating simultaneous arrival dialog strings
STR_DETECTED_SINGULAR,
STR_DETECTED_PLURAL,
STR_COORDINATE,
  // AutoResolve enemy capturing strings
STR_ENEMY_SURRENDER_OFFER,
STR_ENEMY_CAPTURED,
// AutoResolve Text buttons
STR_AR_RETREAT_BUTTON,
STR_AR_DONE_BUTTON,
// AutoResolve header text
STR_AR_DEFEND_HEADER,
STR_AR_ATTACK_HEADER,
STR_AR_ENCOUNTER_HEADER,
STR_AR_SECTOR_HEADER,
// String for AutoResolve battle over conditions
STR_AR_OVER_VICTORY,
STR_AR_OVER_DEFEAT,
STR_AR_OVER_SURRENDERED,
STR_AR_OVER_CAPTURED,
STR_AR_OVER_RETREATED,
STR_AR_MILITIA_NAME,
STR_AR_ELITE_NAME,
STR_AR_TROOP_NAME,
STR_AR_ADMINISTRATOR_NAME,
STR_AR_CREATURE_NAME,
STR_AR_TIME_ELAPSED,
STR_AR_MERC_RETREATED,
STR_AR_MERC_RETREATING,
STR_AR_MERC_RETREAT,
// Strings for prebattle interface
STR_PB_AUTORESOLVE_BTN,
STR_PB_GOTOSECTOR_BTN,
STR_PB_RETREATMERCS_BTN,
STR_PB_ENEMYENCOUNTER_HEADER,
STR_PB_ENEMYINVASION_HEADER,
STR_PB_ENEMYAMBUSH_HEADER,
STR_PB_ENTERINGENEMYSECTOR_HEADER,
STR_PB_CREATUREATTACK_HEADER,
STR_PB_BLOODCATAMBUSH_HEADER,
STR_PB_ENTERINGBLOODCATLAIR_HEADER,
STR_PB_LOCATION,
STR_PB_ENEMIES,
STR_PB_MERCS,
STR_PB_MILITIA,
STR_PB_CREATURES,
STR_PB_BLOODCATS,
STR_PB_SECTOR,
STR_PB_NONE,
STR_PB_NOTAPPLICABLE_ABBREVIATION,
STR_PB_DAYS_ABBREVIATION,
STR_PB_HOURS_ABBREVIATION,
// Strings for the tactical placement gui
  // The four buttons and their help text.
STR_TP_CLEAR,
STR_TP_SPREAD,
STR_TP_GROUP,
STR_TP_DONE,
STR_TP_CLEARHELP,
STR_TP_SPREADHELP,
STR_TP_GROUPHELP,
STR_TP_DONEHELP,
STR_TP_DISABLED_DONEHELP,
// various strings.
STR_TP_SECTOR,
STR_TP_CHOOSEENTRYPOSITIONS,
STR_TP_INACCESSIBLE_MESSAGE,
STR_TP_INVALID_MESSAGE,
STR_PB_AUTORESOLVE_FASTHELP,
STR_PB_DISABLED_AUTORESOLVE_FASTHELP,
STR_PB_GOTOSECTOR_FASTHELP,
STR_BP_RETREATSINGLE_FASTHELP,
STR_BP_RETREATPLURAL_FASTHELP,
// various popup messages for battle,
STR_DIALOG_ENEMIES_ATTACK_MILITIA,
STR_DIALOG_CREATURES_ATTACK_MILITIA,
STR_DIALOG_CREATURES_KILL_CIVILIANS,
STR_DIALOG_ENEMIES_ATTACK_UNCONCIOUSMERCS,
STR_DIALOG_CREATURES_ATTACK_UNCONCIOUSMERCS,
};
// enums for the Shopkeeper Interface
enum {
SKI_TEXT_MERCHADISE_IN_STOCK,
SKI_TEXT_PAGE,
SKI_TEXT_TOTAL_COST,
SKI_TEXT_TOTAL_VALUE,
SKI_TEXT_EVALUATE,
SKI_TEXT_TRANSACTION,
SKI_TEXT_DONE,
SKI_TEXT_REPAIR_COST,
SKI_TEXT_ONE_HOUR,
SKI_TEXT_PLURAL_HOURS,
SKI_TEXT_REPAIRED,
SKI_TEXT_NO_MORE_ROOM_IN_PLAYER_OFFER_AREA,
SKI_TEXT_MINUTES,
SKI_TEXT_DROP_ITEM_TO_GROUND,
};
// ShopKeeperInterface Message Box defines
enum {
SKI_QUESTION_TO_DEDUCT_MONEY_FROM_PLAYERS_ACCOUNT_TO_COVER_DIFFERENCE,
SKI_SHORT_FUNDS_TEXT,
SKI_QUESTION_TO_DEDUCT_MONEY_FROM_PLAYERS_ACCOUNT_TO_COVER_COST,
SKI_TRANSACTION_BUTTON_HELP_TEXT,
SKI_REPAIR_TRANSACTION_BUTTON_HELP_TEXT,
SKI_DONE_BUTTON_HELP_TEXT,
SKI_PLAYERS_CURRENT_BALANCE,
};
// enums for the above text
enum {
SLG_SAVE_GAME,
SLG_LOAD_GAME,
SLG_CANCEL,
SLG_SAVE_SELECTED,
SLG_LOAD_SELECTED,
SLG_SAVE_GAME_OK, // 5
SLG_SAVE_GAME_ERROR,
SLG_LOAD_GAME_OK,
SLG_LOAD_GAME_ERROR,
SLG_GAME_VERSION_DIF,
SLG_DELETE_ALL_SAVE_GAMES, // 10
SLG_SAVED_GAME_VERSION_DIF,
SLG_BOTH_GAME_AND_SAVED_GAME_DIF,
SLG_CONFIRM_SAVE,
SLG_NOT_ENOUGH_HARD_DRIVE_SPACE,
SLG_SAVING_GAME_MESSAGE,
SLG_NORMAL_GUNS,
SLG_ADDITIONAL_GUNS,
SLG_REALISTIC,
SLG_SCIFI,
SLG_DIFF,
};
// OptionScreen.h
// defines used for the zOptionsText
enum {
OPT_SAVE_GAME,
OPT_LOAD_GAME,
OPT_MAIN_MENU,
OPT_DONE,
OPT_SOUND_FX,
OPT_SPEECH,
OPT_MUSIC,
OPT_RETURN_TO_MAIN,
OPT_NEED_AT_LEAST_SPEECH_OR_SUBTITLE_OPTION_ON,
};
// used with the gMoneyStatsDesc[]
enum {
MONEY_DESC_AMOUNT,
MONEY_DESC_REMAINING,
MONEY_DESC_AMOUNT_2_SPLIT,
MONEY_DESC_TO_SPLIT,
MONEY_DESC_PLAYERS,
MONEY_DESC_BALANCE,
MONEY_DESC_AMOUNT_2_WITHDRAW,
MONEY_DESC_TO_WITHDRAW,
};
// used with gzMoneyWithdrawMessageText
enum {
MONEY_TEXT_WITHDRAW_MORE_THEN_MAXIMUM,
CONFIRMATION_TO_DEPOSIT_MONEY_TO_ACCOUNT,
};
// Game init option screen
enum {
GIO_INITIAL_GAME_SETTINGS,
GIO_GAME_STYLE_TEXT,
GIO_REALISTIC_TEXT,
GIO_SCI_FI_TEXT,
GIO_GUN_OPTIONS_TEXT,
GIO_GUN_NUT_TEXT,
GIO_REDUCED_GUNS_TEXT,
GIO_DIF_LEVEL_TEXT,
GIO_EASY_TEXT,
GIO_MEDIUM_TEXT,
GIO_HARD_TEXT,
GIO_OK_TEXT,
GIO_CANCEL_TEXT,
GIO_GAME_SAVE_STYLE_TEXT,
GIO_SAVE_ANYWHERE_TEXT,
GIO_IRON_MAN_TEXT
};
enum {
LAPTOP_BN_HLP_TXT_VIEW_EMAIL,
LAPTOP_BN_HLP_TXT_BROWSE_VARIOUS_WEB_SITES,
LAPTOP_BN_HLP_TXT_VIEW_FILES_AND_EMAIL_ATTACHMENTS,
LAPTOP_BN_HLP_TXT_READ_LOG_OF_EVENTS,
LAPTOP_BN_HLP_TXT_VIEW_TEAM_INFO,
LAPTOP_BN_HLP_TXT_VIEW_FINANCIAL_SUMMARY_AND_HISTORY,
LAPTOP_BN_HLP_TXT_CLOSE_LAPTOP,
LAPTOP_BN_HLP_TXT_YOU_HAVE_NEW_MAIL,
LAPTOP_BN_HLP_TXT_YOU_HAVE_NEW_FILE,
BOOKMARK_TEXT_ASSOCIATION_OF_INTERNATION_MERCENARIES,
BOOKMARK_TEXT_BOBBY_RAY_ONLINE_WEAPON_MAIL_ORDER,
BOOKMARK_TEXT_INSTITUTE_OF_MERCENARY_PROFILING,
BOOKMARK_TEXT_MORE_ECONOMIC_RECRUITING_CENTER,
BOOKMARK_TEXT_MCGILLICUTTY_MORTUARY,
BOOKMARK_TEXT_UNITED_FLORAL_SERVICE,
BOOKMARK_TEXT_INSURANCE_BROKERS_FOR_AIM_CONTRACTS,
};
// enums used for the mapscreen inventory messages
enum {
MAPINV_MERC_ISNT_CLOSE_ENOUGH,
MAPINV_CANT_SELECT_MERC,
MAPINV_NOT_IN_SECTOR_TO_TAKE,
MAPINV_CANT_PICKUP_IN_COMBAT,
MAPINV_CANT_DROP_IN_COMBAT,
MAPINV_NOT_IN_SECTOR_TO_DROP,
};
// the laptop broken link site
enum {
BROKEN_LINK_TXT_ERROR_404,
BROKEN_LINK_TXT_SITE_NOT_FOUND,
};
// Bobby rays page for recent shipments
enum {
BOBBYR_SHIPMENT__TITLE,
BOBBYR_SHIPMENT__ORDER_NUM,
BOBBYR_SHIPMENT__NUM_ITEMS,
BOBBYR_SHIPMENT__ORDERED_ON,
};
enum {
GIO_CFS_NOVICE,
GIO_CFS_EXPERIENCED,
GIO_CFS_EXPERT,
};
enum {
CRDT_CAMFIELD,
CRDT_SHAWN,
CRDT_KRIS,
CRDT_IAN,
CRDT_LINDA,
CRDT_ERIC,
CRDT_LYNN,
CRDT_NORM,
CRDT_GEORGE,
CRDT_STACEY,
CRDT_SCOTT,
CRDT_EMMONS,
CRDT_DAVE,
CRDT_ALEX,
CRDT_JOEY,
NUM_PEOPLE_IN_CREDITS,
};
/* This is from _JA25EnglishText.h */
enum {
TCTL_MSG__RANGE_TO_TARGET,
TCTL_MSG__RANGE_TO_TARGET_AND_GUN_RANGE,
TCTL_MSG__DISPLAY_COVER,
TCTL_MSG__LOS,
TCTL_MSG__IRON_MAN_CANT_SAVE_NOW,
TCTL_MSG__CANNOT_SAVE_DURING_COMBAT,
};
#endif
|
from datetime import datetime
from namex.constants import \
BCProtectedNameEntityTypes, BCUnprotectedNameEntityTypes, XproUnprotectedNameEntityTypes, \
DesignationPositionCodes, LanguageCodes
from .name_analysis_director import NameAnalysisDirector
from namex.utils.common import parse_dict_of_lists
'''
The ProtectedNameAnalysisService returns an analysis response using the strategies in analysis_strategies.py
The response cases are as follows:
- API Returns
- Requires addition of distinctive word
- Requires addition of descriptive word
- Name Contains a Word To Avoid
- Designation Mismatch
- Too Many Words
- Name Requires Consent
- Contains Unclassifiable Word
- Conflicts with the Corporate Database
Notes:
- The 'algorithm' / process we use to analyse names may change in the future
- Using the builder pattern allows us to delegate and isolate custom / changing business logic to the builder,
while exposing a consistent API for consumers of the service.
'''
d = datetime.now() # Was just used for perf analysis
class ProtectedNameAnalysisService(NameAnalysisDirector):
_d = d # Just used for perf
def __init__(self):
super(ProtectedNameAnalysisService, self).__init__()
'''
Set designations in <any> and <end> positions regardless of the entity type.
designation_any_list: Retrieves all designations properly placed anywhere in the name regardless of the entity type.
<Entity type>-Valid English Designations_any Stop
English Designations_any Stop.
designation_end_list: Retrieves all designations properly placed at the end of the name regardless of the entity type
<Entity type>-Valid English Designations_end Stop
English Designations_end Stop.
all_designations: Retrieves misplaced and correctly placed designations.
Note: The previous lists contain designations that are in the correct position. For instance, designations with <end> position
found anywhere are not counted here, they are counted in _set_designations_incorrect_position_by_input_name.
'''
def _set_designations_by_input_name(self):
syn_svc = self.synonym_service
np_svc = self.name_processing_service
# original_name = self.get_original_name()
# Get the first section of the name when a slash exists. For instance, for ARMSTRONG PLUMBING LTD./ ARMSTRONG PLUMBING LIMITEE,
# just take ARMSTRONG PLUMBING LTD. and perform the designation analysis on it.
name_first_part = np_svc.name_first_part
self._designation_any_list = syn_svc.get_designation_any_in_name(name=name_first_part).data
self._designation_end_list = syn_svc.get_designation_end_in_name(name=name_first_part).data
self._all_designations = syn_svc.get_designation_all_in_name(name=name_first_part).data
'''
Set designations with position <end> found anywhere else in the company name; these designations are misplaced.
'''
def _set_designations_incorrect_position_by_input_name(self):
syn_svc = self.synonym_service
tokenized_name = self.get_original_name_tokenized()
#correct_designation_end_list = remove_periods_designation(self._designation_end_list_correct)
correct_designation_end_list = self._designation_end_list_correct
designation_end_misplaced_list = syn_svc.get_incorrect_designation_end_in_name(tokenized_name=tokenized_name,
designation_end_list=correct_designation_end_list).data
self._misplaced_designation_end_list = list(map(lambda x: x.upper(), designation_end_misplaced_list))
def _set_designations_by_entity_type_user(self):
syn_svc = self.synonym_service
entity_type = self.entity_type
entity_type_code = None
if BCProtectedNameEntityTypes(entity_type):
entity_type_code = BCProtectedNameEntityTypes(entity_type)
elif BCUnprotectedNameEntityTypes(entity_type):
entity_type_code = BCUnprotectedNameEntityTypes(entity_type)
elif XproUnprotectedNameEntityTypes(entity_type):
entity_type_code = XproUnprotectedNameEntityTypes(entity_type)
self._eng_designation_any_list_correct = syn_svc.get_designations(entity_type_code=entity_type_code.value,
position_code=DesignationPositionCodes.ANY.value,
lang=LanguageCodes.ENG.value).data
self._eng_designation_end_list_correct = syn_svc.get_designations(entity_type_code=entity_type_code.value,
position_code=DesignationPositionCodes.END.value,
lang=LanguageCodes.ENG.value).data
self._fr_designation_any_list_correct = syn_svc.get_designations(entity_type_code=entity_type_code.value,
position_code=DesignationPositionCodes.ANY.value,
lang=LanguageCodes.FR.value).data
self._fr_designation_end_list_correct = syn_svc.get_designations(entity_type_code=entity_type_code.value,
position_code=DesignationPositionCodes.END.value,
lang=LanguageCodes.FR.value).data
self._eng_designation_all_list_correct = self._eng_designation_any_list_correct + self._eng_designation_end_list_correct
self._eng_designation_all_list_correct.sort(key=len, reverse=True)
self._fr_designation_all_list_correct = self._fr_designation_any_list_correct + self._fr_designation_end_list_correct
self._fr_designation_all_list_correct.sort(key=len, reverse=True)
self._designation_any_list_correct = self._eng_designation_any_list_correct + self._fr_designation_any_list_correct
self._designation_any_list_correct.sort(key=len, reverse=True)
self._designation_end_list_correct = self._eng_designation_end_list_correct + self._fr_designation_end_list_correct
self._designation_end_list_correct.sort(key=len, reverse=True)
'''
Set the corresponding entity type for designations <any> found in name
'''
def _set_entity_type_any_designation(self):
syn_svc = self.synonym_service
# entity_any_designation_dict = self._entity_any_designation_dict
designation_any_list = self._designation_any_list
all_any_designations = syn_svc.get_all_any_designations().data
self._entity_type_any_designation = syn_svc.get_entity_type_any_designation(
entity_any_designation_dict=parse_dict_of_lists(all_any_designations),
all_designation_any_end_list=designation_any_list
).data
'''
Set the corresponding entity type for designations <end> found in name
'''
def _set_entity_type_end_designation(self):
syn_svc = self.synonym_service
# entity_end_designation_dict = self._entity_end_designation_dict
designation_end_list = self._designation_end_list
all_end_designations = syn_svc.get_all_end_designations().data
self._entity_type_end_designation = syn_svc.get_entity_type_end_designation(
entity_end_designation_dict=parse_dict_of_lists(all_end_designations),
all_designation_any_end_list=designation_end_list
).data
def _set_designations(self):
# Set available designations for entity type selected by user (by default designations related to 'CR' entity type)
# _designation_any_list_user and _designation_end_list_user contain the only correct designations
self._set_designations_by_entity_type_user()
# Set _designation_any_list and _designation_end_list based on company name typed by user
# Set _all_designations (general list) based on company name typed by user
# All previous set designations have correct position, but may belong to wrong entity type
self._set_designations_by_input_name()
# Set _misplaced_designation_end_list which contains <end> designations in other part of the name
self._set_designations_incorrect_position_by_input_name()
# Set _entity_type_any_designation for designations found on company name typed by user
self._set_entity_type_any_designation()
# Set _entity_type_end_designation for designations found on company name typed by user
self._set_entity_type_end_designation()
# Set _misplaced_designation_all based on company name typed by user
# self._set_misplaced_designation_in_input_name()
# Set all designations based on entity type typed by user,'CR' by default
self._all_designations_user = self._eng_designation_all_list_correct + self._fr_designation_all_list_correct
#self._all_designations_user_no_periods = remove_periods_designation(self._all_designations_user)
#self._all_designations_user_no_periods.sort(key=len, reverse=True)
'''
do_analysis is an abstract method inherited from NameAnalysisDirector and must be implemented.
This is the main execution call for running name analysis checks.
@:return ProcedureResult[]
'''
def do_analysis(self):
builder = self.builder
list_name = self.name_tokens
# list_dist, list_desc, list_none = self.word_classification_tokens
results = []
# Return any combination of these checks
check_conflicts = builder.search_conflicts(builder.get_list_dist(), builder.get_list_desc(), self.name_tokens,
self.processed_name)
if not check_conflicts.is_valid:
results.append(check_conflicts)
# TODO: Use the list_name array, don't use a string in the method!
# check_words_requiring_consent = builder.check_words_requiring_consent(list_name) # This is correct
check_words_requiring_consent = builder.check_words_requiring_consent(
self.name_tokens, self.processed_name
)
if not check_words_requiring_consent.is_valid:
results.append(check_words_requiring_consent)
# Set designations and run our check
self._set_designations()
check_designation_existence = builder.check_designation_existence(self.get_original_name_tokenized(),
self.get_all_designations(),
self.get_all_designations_user())
if not check_designation_existence.is_valid:
results.append(check_designation_existence)
else:
check_designation_mismatch = builder.check_designation_mismatch(
self.get_original_name_tokenized(),
self.entity_type,
self.get_all_designations(),
self.get_all_designations_user()
#self.get_all_designations_user_no_periods()
)
if not check_designation_mismatch.is_valid:
results.append(check_designation_mismatch)
check_designation_misplaced = builder.check_designation_misplaced(
self.get_original_name_tokenized(),
self.get_misplaced_designation_end()
)
if not check_designation_misplaced.is_valid:
results.append(check_designation_misplaced)
check_special_words = builder.check_word_special_use(self.name_tokens, self.get_original_name())
if not check_special_words.is_valid:
results.append(check_special_words)
return results
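The designation lists built above are repeatedly sorted with `key=len, reverse=True` so that longer, multi-word designations are matched before their shorter substrings. A minimal, self-contained sketch of that longest-match-first idea (the designation list and `find_designation` helper here are illustrative only, not the real synonym-service data or API):

```python
def find_designation(name: str, designations: list) -> str:
    """Return the first designation found in `name`, checking longest first."""
    # Sort longest-first so "LIMITED LIABILITY COMPANY" wins over "LIMITED".
    for designation in sorted(designations, key=len, reverse=True):
        if designation in name.upper():
            return designation
    return ""

# Illustrative designation list (not the real synonym-service data)
DESIGNATIONS = ["LTD.", "LIMITED", "LIMITED LIABILITY COMPANY", "INC."]

print(find_designation("ARMSTRONG PLUMBING LIMITED LIABILITY COMPANY", DESIGNATIONS))
# → LIMITED LIABILITY COMPANY
```

Without the descending-length sort, the substring "LIMITED" would be reported instead of the full "LIMITED LIABILITY COMPANY" designation.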
|
Bangkok – Thailand’s Ministry of Interior has held a meeting with provincial governors to prepare them for the upcoming general election, indicating it is targeting an 80 percent turnout.
The meeting was attended by provincial governors as well as district chiefs from across the nation and included a rehearsal of work to be synchronized on the date of the election. The focus of the gathering was to ensure a unified understanding of the process so that it takes place smoothly.
Deputy Minister of Interior Suthee Makboon reported that 51,427,628 people are eligible to vote on March 24 with 92,837 voting units to be active on the day. The ministry is targeting an 80 percent turnout of voters and no more than two percent of the votes being disqualified. The last general election had a turnout of 75 percent. |
Women engineers lack of precedence: the virgin territory of robotics
In recent history, information and communications technologies (ICTs) have advanced radically and have permeated daily routines. Moreover, modern educational methods encourage the use of ICTs in the learning process. Especially in the teaching of hard sciences such as physics, the use of ICTs is favored because students can more easily understand natural laws and observe the results of experiments in real time. Women engineers are well suited to this kind of educational process, as their feminine nature combines a variety of traits that give them an advantage. It is widely accepted that females outperform males in verbal ability, are raised to be more sensitive, have maternal instincts, and can be extremely supportive not only of same-sex peers but of both genders. These inherent traits make women naturally tuned into the world around them. In a man's world, and more so where ICTs are concerned, women are often discouraged and need to work much harder than men to reach a comparable position. This pressure makes women more active and persistent. According to the Foundation for Economic and Industrial Research report for 2010 to 2011, only 3.7% of entrepreneurs between 18 and 64 years old are women (Ioannidis and Chatzichristou http://www.iobe.gr/index.asp?a_id=853, 2012). In the last two decades, the field of robotics has been advancing more radically than ever. Many distinctive robotic mechanisms have been implemented thanks to innovative ideas and the outburst of technology. During author MT's PhD research, she noticed that few women participate actively in state-of-the-art educational methods involving ICTs. More distinctively, women involved in robotics seem to have been excluded from the production and research process.
The absence of contributions by women engineers in robotics, and in the assistive educational tools it provides, has led to a more masculine approach in the field that may result in stiffer, plainer designs or even less imaginative functionalities. More generally, the stereotypes and biases that exist with regard to gender have shaped the behavior of women and the way they are perceived and treated by society.
Women engineers in a man's world
The excessive use of information and communications technologies has led to an increased demand for scientists, researchers, and engineers in the sector. Modern technology has influenced every company, the public sector, and daily routines. New trends in production lines, new methods in management, and even new approaches to educational processes were established and have improved living standards. Interactive whiteboards, video conferences, remote-controlled laboratories, and simulations of experiments have proved to be among the best educational approaches to knowledge. Many individuals turned toward informatics, the hard sciences, computer engineering, and communications, and this proved to be the right choice because these sectors kept rising throughout the last two decades. At first, only males were interested in these fields, but soon after, females followed. As far as the robotics sector is concerned, it is unfortunately still underrepresented by women (Ioannidis and Chatzichristou 2012). Males have dominated the engineering industry since its beginning, and women, if any, have been excluded one way or another. In an attempt to achieve a higher level of advancement, women have been working remarkably harder and under more pressure than men. As mentioned above, in this male-dominated sector, women feel less valued and less worthy.
In some cases, this feeling of exclusion made women more active, persistent, and willing to sacrifice their social life entirely in order to prove that they are making a valuable contribution to their sector. On the other hand, many women started to feel like second-class employees and never asked for recognition (Yoky 2006). The truth is that it is not just that feeling of exclusion. Women are evaluated based on their performance, while men are evaluated according to their potential skills. During the last decade, there have been few examples of successful women in the engineering field, so few that ambitious young women did not have role models. The opportunities for career advancement were significantly fewer than for men, and this led many women to quit and focus on other fields. The most important obstacle that needed to be overcome was the bias about work-life balance. In many companies, the balance between work and personal life was a drawback only for women. It was largely assumed that women could not work long hours, could not travel for work purposes, and that when someone got married or gave birth, her family was the first, if not the only, priority. Under these circumstances, gender biases were established, and the gap between male and female employees in engineering widened. Results and discussion There are few women engineers employed in large companies. This happens not only due to the organizational culture obstacles or the social biases that have been described above. Reports worldwide show that few women want to study engineering and far fewer want a career in the engineering field. According to the California State University, Long Beach (CSULB)'s College of Engineering, women represent less than 15% of CSULB's engineering population (Engineering Student Success Center 2012) (see Figure 1).
The American Association of Engineering Societies in 2004 reported that only about 10% of the nation's professional engineers and 20% of undergraduate engineering students are women. This fact is an omen of the potential isolation that women students might experience in a male-dominated field. Today, CSULB's College of Engineering is developing more aggressive strategies for increasing the participation of women students in its programs. Similar percentages are encountered around the world. From personal experience, during MT's studies in Computer Engineering at the University of Thessaly, only 20% of the students that graduated each year were women. This indicates the importance of embedding gender into business school curricula worldwide. Conclusions The characteristics of the feminine nature need to be highlighted in order to indicate the importance of women's participation in the engineering sector. This differentiation is crucial in order to provide separate points of view. Reports demonstrate that the feminine nature combines traits in which women outperform men. Their verbal ability is more developed, and they are keen on listening carefully. By nature, women are less aggressive and more patient than men. This is very important in today's hostile business environment. Additionally, they are comfortable with multitasking, and they do not break down when their schedule is full. Maternal instincts are considered a plus, along with intuition and sensitivity. These characteristics make women approach assignments in an entirely different way than men. As a result, gender diversity helps businesses deliver better results. However, women have been underrepresented in the engineering sector for years. Additionally, pay gaps between genders have been a brutal reality (Besse 2009) (see Figure 2). This sector is not only difficult for women to enter but also makes it almost impossible for them to advance their careers.
Empowering women to participate fully can lead to the advancement of productivity and effectiveness, qualitative change in the organizational culture, and, generally, improvement in the quality of life of the community. At present, the goal that needs to be achieved is to support companies in reviewing their organizational culture and establishing new policies to realize women's empowerment. Globalization could be a challenge: not only can companies commit to undertaking this initiative and empowering the role of women, but they must also respect the cultural morals that may apply in local businesses. Robotics: male-dominated grounds The sector of robotics did not grow substantially until the second half of the twentieth century. These autonomous mechanisms have been manufactured in order to perform assignments more accurately, affordably, and reliably than humans. They are used in a wide variety of fields, such as the military, pharmaceuticals, hard sciences, industry, etc. In the last decades, many remarkable robotic mechanisms have been designed and implemented, and some of these are considered massive technological breakthroughs. Robotic laboratories that focus on biological experiments, robotic arms designed for individuals with kinetic disabilities, and robotic mechanisms that substitute human labor are a few examples of these innovative advances. Due to author MT's participation in the 'Smart and Adaptable Information System for supporting Physics Experiments in a Robotic Laboratory' project (SAIS-PEaRL project), she had to engage herself in the field of robotics (SAIS-PEaRL 2010). During her 3 years of research for her PhD, MT noticed that it was a highly male-dominated sector. The vast majority of the female scientists were in the educational fields, informatics, or physical sciences and had an advisory role in robotic implementations, and women represented just 10% of the robotics community.
MT followed the field of robotics because it fascinated her, and she would like to encourage more women to pursue a career in the field. The reasons why women are excluded from the robotics sector might be many and controversial. Robotics combines informatics, engineering, computing, artificial intelligence, and hardware and software technologies. These are considered more masculine fields, and women are not very interested in them. Furthermore, the stereotypes that have been established regarding robotics have discouraged women from pursuing a career in the field. Additionally, there are few female role models in the field. This absence of feminine contribution has resulted in stiffer and plainer designs that lack a female approach. Additionally, one of the reasons why women are not very familiar with high-tech products is actually their absence from the design and implementation of software and hardware solutions. Women are more inclined to engage themselves in issues that provide practical benefits to society. It seems a challenge to shorten this gender gap, but it is crucial that it actually be shortened. More women must be recruited and retained in the field of robotics. At first, it is crucial to change the perception that robotic mechanisms are cold metallic objects. Robotic mechanisms can provide functionalities that improve the quality of life. Women must be inspired by the benefits of robotics. They could be engaged with issues that concern social sensitivity, such as the improvement of the lives of elderly or handicapped individuals. If women engineers are encouraged by their academic communities, they will be more interested and will offer new ideas, talents, and skills to the field of robotics. By bringing their talents to the male-dominated engineering field, women can spark innovation.
Women's initiatives In an attempt to reach more women engineers and to exchange information about engineering developments and challenges across disciplines and countries, many international groups and events have been organized worldwide. The Society of Women Engineers, in conjunction with the Indian Institute of Technology, Bombay, and the Indo-US Science and Technology Forum, launched the symposium 'Women Engineers Leading Global Innovation' in India in August 2012 (Women Engineers Leading Global Innovation 2012). In addition, the PRME Working Group on Gender Equality has been focusing on promoting gender equality in the workplace, marketplace, and community, and UN Women has launched the Women's Empowerment Principles, which offer practical guidance to business and the private sector on how to empower women (Principles for Responsible Management Education 2012). Another community that aims at highlighting the contributions of women in robotics is 'Women in Robotics and Automation towards Human Science, Technology and Society'. It targets students, engineers, and researchers in robotics, automation, human sciences, and technology, and everyone interested in research and development activities conducted by women. Methods Two of the most remarkable case studies which acknowledge that change in the organizational culture is difficult but yet possible are the Deloitte & Touche case studies by the Harvard Business School. During the last two decades, 50% of the recruitments of Deloitte & Touche were women, but 90% of them never stayed long enough to be nominated for partnership. Mike Cook, the CEO, wanted to take action in order to stop losing talented women and make sure that there was no 'glass ceiling' for women at Deloitte (Deloitte 2005; Roessner 1999; Roessner and Kanter 1999) (see Figure 3).
In order to explain why women left the company at a faster rate than men and to develop recommendations to reverse that trend, Cook established the 'Task Force on the Retention and Advancement of Women.' Next, the Task Force hired a consulting company, and after many interviews and extensive analysis, the conclusion was just this: Deloitte was a lousy place for a woman to work. Most women explained that they did not feel valued, that the minute they started a family they were written off, that nobody was willing to invest in them, that they did not have mentors, that they did not see women in leadership positions, and that they did not see role models. On the other hand, the male interviewees explained that it would be awkward to mentor and instruct women due to their sensitive character and also that they could not trust them with important assignments, assuming that they would break down. Most of them assumed that women would not want to travel or stay late at the office due to their commitments to their families. These biases and assumptions had been nourished for years until they came to constitute the organizational culture of Deloitte (Knowledge@Emory 2009). It became a highly male-dominated environment where all the best projects were assigned to males, the opportunities for career advancement of women were faint, and the role models of successful women could be counted on the fingers of one hand. Unfortunately, phenomena like these have been observed in many cases, mainly in sectors related to hard sciences, finance, engineering, and informatics. Many companies have been warding off women due to the organizational culture and the biases that have grown since the first years of women's employment. Today, Deloitte is recognized as a leader in advancing women, thanks to the Initiative for the Retention and Advancement of Women that is still in operation.
The effect of immediate feedback on mathematics learning achievement Classroom realities show that the ability to solve problems is still considered low. Most students still make many mistakes in solving math problems. These mistakes must be directed to the right path to minimize repeated mistakes. Immediate feedback can be a solution to correct these errors. The purpose of this study was to examine the immediate feedback given by the teacher during learning and its relationship to student learning achievement. The participants in this study consisted of 30 seventh-grade students. Data were collected from the final scores on the subject of numbers. Data were analyzed using descriptive statistics and paired sample t-tests to describe the situation before and after giving immediate feedback. Paired sample t-test analysis was also used to describe the relationship between immediate feedback and learning achievement. This study revealed a significant difference in learning achievement with immediate feedback. The final mean values show that the pretest average is higher than the posttest average. There is a positive relationship between immediate feedback and mathematics learning achievement. These findings can serve to focus teachers' attention on providing feedback immediately during learning.
Heike Neumann's review of Teaching ESL/EFL Reading and Writing Teaching ESL/EFL Reading and Writing by Paul Nation is written for in-service and future teachers who want to learn more about encouraging their students' development of reading and writing skills in classrooms of English as a second language (ESL) or English as a foreign language (EFL). It has been conceived and used as a textbook for undergraduate and graduate courses on teaching methods. It offers practical suggestions for the classroom. It is also helpful in the development of new, or the improvement of existing, reading and writing programs. In conjunction with its companion book, Teaching ESL/EFL Listening and Speaking (Nation & Newton, 2008), the book could be used for a comprehensive course on ESL or EFL teaching methods, although Teaching ESL/EFL Reading and Writing can also be used on its own.
SEVEN does not mean NATURAL NUMBER, and children know more than you think Abstract Rips et al.'s critique is misplaced when it faults the induction model for not explaining the acquisition of meta-numerical knowledge: This is something the model was never meant to explain. More importantly, the critique underestimates what children know, and what they have achieved, when they learn the cardinal meanings of the number words one through nine. |