QuESo-process: evaluating the quality of OSS ecosystems To evaluate the quality of open source software ecosystems (OSSECOs), we designed the QuESo-process. This process describes the activities and tasks that support the evaluation of OSSECOs. Our proposal attempts to fill the gap between quality models and their operationalization. To do so, we use the QuESo-model, previously described in earlier work by one of the authors, as the basis for the quality evaluation of OSSECOs. |
The long-awaited car is finally here. The Ferrari 458 Italia is a well-known performance benchmark worldwide, and with a twin-turbo makeover it's ready to set records once again. After five years in production, it's time for a new alpha horse. The Prancing Horse's new ride is expected to be unveiled at the 2015 Geneva Motor Show.
UK's CAR Magazine has announced that the Prancing Horse's brand-new, gallant car will be launched as the M458-T. The name certainly breaks with Ferrari's conventional nomenclature, but this exclusive car deserves a one-of-a-kind name.
The re-engineered 458 will be powered by a twin-turbocharged V8 engine and is anticipated to produce approximately 679 horsepower, surpassing the much-celebrated 458 by a whopping 100 horsepower and outdoing the latest 560-horsepower California T. And with two turbos, the M458-T will definitely set a new standard in the world of sports cars.
The added "T" stands for turbocharged, which isn't quite accurate, as the M458 is twin-turbocharged. The face-lifted model will be the second of its kind, after the California T, to be fitted with a twin-turbo engine.
A number of design changes are also expected alongside the twin-turbo upgrade, to protect the turbochargers, brakes and intercoolers. Visually, the M458-T will keep largely the same design, with minor styling changes. Enhanced air intakes, improved aerodynamics, reshaped bumpers and restyled LED lights will make it all the more appealing and should help it deliver more torque than the Ferrari 458 Italia.
As with the California T, we also expect the Harman Kardon media system with Apple CarPlay support and a rear-view parking camera. Ferrari is also setting the M458-T apart for customers in its top niche category, allowing them to choose from an array of color combinations and specially designed leather seats. |
/*
* Copyright 2021 4Paradigm
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#include "dbms/dbms_server_impl.h"
#include "brpc/server.h"
namespace hybridse {
namespace dbms {
DBMSServerImpl::DBMSServerImpl() : tablet_sdk(nullptr), tid_(0), tablets_() {}
DBMSServerImpl::~DBMSServerImpl() { delete tablet_sdk; }
void DBMSServerImpl::AddTable(RpcController* ctr,
const AddTableRequest* request,
AddTableResponse* response, Closure* done) {
brpc::ClosureGuard done_guard(done);
if (request->table().name().empty()) {
::hybridse::common::Status* status = response->mutable_status();
status->set_code(::hybridse::common::kRequestError);
status->set_msg("table name is empty");
LOG(WARNING) << "create table failed for table name is empty";
return;
}
type::Database* db = nullptr;
{
common::Status get_db_status;
db = GetDatabase(request->db_name(), get_db_status);
if (0 != get_db_status.code()) {
::hybridse::common::Status* status = response->mutable_status();
status->set_code(get_db_status.code());
status->set_msg(get_db_status.msg());
return;
}
if (nullptr == db) {
::hybridse::common::Status* status = response->mutable_status();
status->set_code(hybridse::common::kNoDatabase);
status->set_msg("Database doesn't exist");
return;
}
}
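    // Hold the catalog lock for the remainder of the handler: it guards both the
    // duplicate-name scan below and the later insertion into the database.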
std::lock_guard<std::mutex> lock(mu_);
for (auto table : db->tables()) {
if (table.name() == request->table().name()) {
::hybridse::common::Status* status = response->mutable_status();
status->set_code(::hybridse::common::kTableExists);
status->set_msg("table already exists");
LOG(WARNING) << "create table failed for table exists";
return;
}
}
if (nullptr == tablet_sdk) {
if (tablets_.empty()) {
::hybridse::common::Status* status = response->mutable_status();
status->set_code(::hybridse::common::kConnError);
            status->set_msg("can't connect: tablet endpoint list is empty");
LOG(WARNING) << status->msg();
return;
}
tablet_sdk =
new hybridse::tablet::TabletInternalSDK(*(tablets_.begin()));
if (tablet_sdk == NULL) {
::hybridse::common::Status* status = response->mutable_status();
status->set_code(::hybridse::common::kConnError);
            status->set_msg(
                "Fail to connect to tablet (maybe you should check "
                "tablet_endpoint)");
LOG(WARNING) << status->msg();
return;
}
if (false == tablet_sdk->Init()) {
::hybridse::common::Status* status = response->mutable_status();
status->set_code(::hybridse::common::kConnError);
            status->set_msg(
                "Fail to init tablet (maybe you should check tablet_endpoint)");
LOG(WARNING) << status->msg();
return;
}
}
    // TODO(chenjing): decide later whether the table schema needs to be synced from the tablet
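    // Forward the create-table request to the tablet first; the table is added to
    // the in-memory catalog below only if the tablet-side creation succeeds.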
{
hybridse::common::Status create_table_status;
hybridse::tablet::CreateTableRequest create_table_request;
create_table_request.set_db(request->db_name());
create_table_request.set_tid(tid_ + 1);
// TODO(chenjing): pid setting
create_table_request.add_pids(0);
*(create_table_request.mutable_table()) = request->table();
create_table_request.mutable_table()->set_catalog(request->db_name());
tablet_sdk->CreateTable(&create_table_request, create_table_status);
if (0 != create_table_status.code()) {
::hybridse::common::Status* status = response->mutable_status();
status->set_code(create_table_status.code());
status->set_msg(create_table_status.msg());
return;
}
}
::hybridse::type::TableDef* table = db->add_tables();
// TODO(chenjing): add create time
table->set_name(request->table().name());
for (auto column : request->table().columns()) {
*(table->add_columns()) = column;
}
for (auto index : request->table().indexes()) {
*(table->add_indexes()) = index;
}
::hybridse::common::Status* status = response->mutable_status();
status->set_code(::hybridse::common::kOk);
status->set_msg("ok");
tid_ += 1;
DLOG(INFO) << "create table " << request->table().name() << " done";
}
void DBMSServerImpl::GetSchema(RpcController* ctr,
const GetSchemaRequest* request,
GetSchemaResponse* response, Closure* done) {
brpc::ClosureGuard done_guard(done);
if (request->name().empty()) {
::hybridse::common::Status* status = response->mutable_status();
status->set_code(::hybridse::common::kRequestError);
status->set_msg("table name is empty");
        LOG(WARNING) << "show table failed for table name is empty";
return;
}
type::Database* db = nullptr;
{
common::Status get_db_status;
db = GetDatabase(request->db_name(), get_db_status);
if (0 != get_db_status.code()) {
::hybridse::common::Status* status = response->mutable_status();
status->set_code(get_db_status.code());
status->set_msg(get_db_status.msg());
return;
}
if (nullptr == db) {
::hybridse::common::Status* status = response->mutable_status();
status->set_code(hybridse::common::kNoDatabase);
status->set_msg("Database doesn't exist");
return;
}
}
std::lock_guard<std::mutex> lock(mu_);
for (auto table : db->tables()) {
if (table.name() == request->name()) {
*(response->mutable_table()) = table;
::hybridse::common::Status* status = response->mutable_status();
status->set_code(::hybridse::common::kOk);
status->set_msg("ok");
DLOG(INFO) << "show table " << request->name() << " done";
return;
}
}
    ::hybridse::common::Status* status = response->mutable_status();
    // NOTE: the table is missing here, yet kTableExists is reused as the error code.
    status->set_code(::hybridse::common::kTableExists);
    status->set_msg("table doesn't exist");
    LOG(WARNING) << "show table failed for table doesn't exist";
return;
}
void DBMSServerImpl::AddDatabase(RpcController* ctr,
const AddDatabaseRequest* request,
AddDatabaseResponse* response, Closure* done) {
brpc::ClosureGuard done_guard(done);
if (request->name().empty()) {
::hybridse::common::Status* status = response->mutable_status();
status->set_code(::hybridse::common::kRequestError);
status->set_msg("database name is empty");
DLOG(WARNING) << "create database failed for name is empty";
return;
}
std::lock_guard<std::mutex> lock(mu_);
Databases::iterator it = databases_.find(request->name());
if (it != databases_.end()) {
::hybridse::common::Status* status = response->mutable_status();
status->set_code(::hybridse::common::kDatabaseExists);
status->set_msg("database name exists");
LOG(WARNING) << "create database failed for name existing";
return;
}
::hybridse::type::Database& database = databases_[request->name()];
database.set_name(request->name());
::hybridse::common::Status* status = response->mutable_status();
status->set_code(::hybridse::common::kOk);
status->set_msg("ok");
LOG(INFO) << "create database " << request->name() << " done";
}
void DBMSServerImpl::GetDatabases(RpcController* controller,
const GetDatabasesRequest* request,
GetDatabasesResponse* response,
Closure* done) {
brpc::ClosureGuard done_guard(done);
    // TODO(chenjing): handle case sensitivity of database names
::hybridse::common::Status* status = response->mutable_status();
std::lock_guard<std::mutex> lock(mu_);
for (auto entry : databases_) {
response->add_names(entry.first);
}
status->set_code(::hybridse::common::kOk);
status->set_msg("ok");
}
void DBMSServerImpl::GetTables(RpcController* controller,
const GetTablesRequest* request,
GetTablesResponse* response, Closure* done) {
brpc::ClosureGuard done_guard(done);
type::Database* db = nullptr;
{
common::Status get_db_status;
db = GetDatabase(request->db_name(), get_db_status);
if (0 != get_db_status.code()) {
::hybridse::common::Status* status = response->mutable_status();
status->set_code(get_db_status.code());
status->set_msg(get_db_status.msg());
return;
}
if (nullptr == db) {
::hybridse::common::Status* status = response->mutable_status();
status->set_code(hybridse::common::kNoDatabase);
status->set_msg("Database doesn't exist");
return;
}
}
::hybridse::common::Status* status = response->mutable_status();
std::lock_guard<std::mutex> lock(mu_);
for (auto table : db->tables()) {
response->add_tables()->CopyFrom(table);
}
status->set_code(::hybridse::common::kOk);
status->set_msg("ok");
}
void DBMSServerImpl::InitTable(type::Database* db, Tables& tables) {
    tables.clear();
    // Point at the entries owned by the protobuf message itself; iterating by
    // value here would leave the map holding pointers to loop-local copies.
    for (auto& table : *db->mutable_tables()) {
        tables[table.name()] = &table;
    }
}
type::Database* DBMSServerImpl::GetDatabase(const std::string db_name,
common::Status& status) {
if (db_name.empty()) {
status.set_code(::hybridse::common::kNoDatabase);
status.set_msg("Database name is empty");
LOG(WARNING) << "get database failed for database name is empty";
return nullptr;
}
std::lock_guard<std::mutex> lock(mu_);
Databases::iterator it = databases_.find(db_name);
if (it == databases_.end()) {
status.set_code(::hybridse::common::kNoDatabase);
status.set_msg("Database doesn't exist");
LOG(WARNING) << "get database failed for database doesn't exist";
return nullptr;
}
return &it->second;
}
void DBMSServerImpl::KeepAlive(RpcController* ctrl,
const KeepAliveRequest* request,
KeepAliveResponse* response, Closure* done) {
brpc::ClosureGuard done_guard(done);
std::lock_guard<std::mutex> lock(mu_);
tablets_.insert(request->endpoint());
::hybridse::common::Status* status = response->mutable_status();
status->set_code(::hybridse::common::kOk);
}
void DBMSServerImpl::GetTablet(RpcController* ctrl,
const GetTabletRequest* request,
GetTabletResponse* response, Closure* done) {
brpc::ClosureGuard done_guard(done);
std::lock_guard<std::mutex> lock(mu_);
std::set<std::string>::iterator it = tablets_.begin();
for (; it != tablets_.end(); ++it) {
response->add_endpoints(*it);
}
::hybridse::common::Status* status = response->mutable_status();
status->set_code(::hybridse::common::kOk);
}
} // namespace dbms
} // namespace hybridse
|
N-Acetylaspartylglutamate is not demonstrated to be a selective mGlu3 receptor agonist N-acetylaspartylglutamate (NAAG) is found throughout the mammalian central nervous system. In the 1980s, it was clearly demonstrated with immunohistochemistry that NAAG is present throughout the CNS. NAAG has been shown to be abundant in the central nervous system and it has been proposed to be a neurotransmitter. Additionally, it has been proposed to be a selective agonist of the metabotropic glutamate receptor type 3 (mGlu3) as well as a selective agonist of native NMDA receptors (NMDARs). A number of studies have suggested that NAAG is a neurotransmitter in the mammalian central nervous system. Although there is indeed substantial evidence supporting this hypothesis, and this commentary does not intend to discuss all of those supporting data, neurotransmitters (other than NO or CO), particularly peptides or amino acids, interact with post-synaptic receptors to transduce their signaling. Thus, a key piece of the claim that NAAG is a neurotransmitter is that it interacts with a receptor, namely that it is an agonist at mGlu3 receptors. Wroblewska et al. claimed that NAAG is a highly selective agonist at mGlu3 receptors with an EC50 of 65 μM, measured at a chimeric receptor consisting of the extracellular ligand-binding domain of mGlu3 and the transmembrane domain and carboxy terminus of mGlu1a. Two recent publications have questioned whether NAAG per se has physiological activity at mGlu3 receptors. These two studies independently showed that NAAG, when purified, is not an agonist at mGlu3 receptors. Both of these groups sought to better understand the role of NAAG as a tool compound and potential neurotransmitter. In their efforts, they discovered that when commercial samples of NAAG were purified of glutamate, activity in functional assays disappeared. In work in our laboratories, we assessed two commercial samples of NAAG and found them to have significant glutamate contamination. This work was first motivated by our studies performed at Sibia Neurosciences with human mGlu3 receptors, where we had struggled to demonstrate that purified NAAG (100 μM) was a selective agonist at human mGlu3 receptors using several different assay systems. Chopra et al. demonstrated that unpurified commercial NAAG activated mGlu3 when co-expressed with Gα15, while purified NAAG had no activity in this preparation. Additionally, this study showed that unpurified NAAG could activate rat or human mGlu3 receptors co-expressed with G protein-coupled inwardly rectifying potassium (GIRK) channels in Xenopus oocytes. When NAAG was purified, it no longer had significant activity at mGlu3 receptors; there was trace activity seen in the rat mGlu3 preparation. It should be noted that in the rat and human mGlu3 GIRK assays, the glutamate EC50s were determined to be 28 and 58 nM, respectively. A trace contamination of 0.01-0.02% glutamate in a 100 μM NAAG sample could explain the residual response seen in the rat mGlu3 preparation; the HPLC analysis indicated that the remaining level of glutamate was in this range. In the study by Fricker et al., three commercial samples of NAAG were found to have significant glutamate contamination and were purified by cation exchange chromatography, with purity demonstrated by HPLC and mass spectrometry. Using this purified NAAG, they assessed its
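Aside (not part of the original commentary): the arithmetic behind the contamination argument quoted above is simply
\[ 0.01\% \times 100\,\mu\mathrm{M} = 10\,\mathrm{nM}, \qquad 0.02\% \times 100\,\mu\mathrm{M} = 20\,\mathrm{nM}, \]
which is of the same order as the quoted glutamate EC50 values in the GIRK assays (28 nM at rat mGlu3 and 58 nM at human mGlu3), so a trace glutamate impurity alone could plausibly produce the residual response seen in the rat mGlu3 preparation.
|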
// src/engine/Switch.h (from foxostro/arbarlith2)
/*
Author: <NAME>
E-Mail: mailto:<EMAIL>
Copyright (c) 2006,2007,2009 Game Creation Society
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
* Neither the name of the Game Creation Society nor the
names of its contributors may be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE Game Creation Society ``AS IS'' AND ANY
EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE Game Creation Society BE LIABLE FOR ANY
DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#ifndef _SWITCH_H_
#define _SWITCH_H_
#include "world.h"
#include "Trigger.h"
#include "TriggerPrompt.h"
namespace Engine {
/** Abstract class for a Trigger that is activated through player input */
class Switch : public Trigger
{
public:
GEN_RTTI(Switch, "class Engine::Switch")
public:
/**
Constructs the Switch
@param ID The desired unique ID of the object
*/
Switch(OBJECT_ID ID);
/** Destroy and clear out the object */
virtual void destroy(void);
/** Clear out everything to defaults */
virtual void clear(void);
/**
Updates the object without displaying it
@param deltaTime milliseconds since the last tick
*/
virtual void update(float deltaTime);
/**
Gets the action label string
@return The action label
*/
const string& getActionLabel(void) const
{
return actionLabel;
}
/**
Loads the object state
@param data data source
*/
virtual void load(const PropertyBag &data);
/**
Activate the switch
@param a The actor that uses the switch
*/
void activate(Actor *a);
protected:
/**
Determine whether the proper conditions have been attained for trigger activation.
By default the trigger condition is the mere proximity of the PLAYER
@return true if so, false otherwise
*/
virtual bool pollConditions(void) const;
/** Called in the event of the Trigger activating */
virtual void onTrigger(void);
/**
Called when the switch is triggered and actually used
@param a The actor that uses the switch
*/
virtual void onUse(Actor *a);
/**
Saves the object state to an XML data source, but only if it differs from the default value
@param xml The XML data source returned
@param dataFile The data file containing the default values
@return true if successful, false otherwise
*/
virtual bool saveTidy(PropertyBag &xml, PropertyBag &dataFile) const;
/** Text that describes the Player's action on the switch "Use", "Activate", "Ring the bell" */
string actionLabel;
/** Milliseconds until the option to use the switch expires */
float fadeTimer;
/** The time to give till the option fades */
float time;
/** Handle to the message we set up on the prompt */
TriggerPrompt::HANDLE promptHandle;
};
}; // namespace
#endif
|
18F-FDG PET Detection of a Medullary Thyroid Carcinoma in a Patient With SMZL. Incidental thyroid FDG uptake is not rarely encountered on PET studies. In this case, we present an incidental thyroid focus of F-FDG uptake identified in a patient with splenic marginal zone lymphoma during the baseline and the response assessment PET/CT study that proved to be medullary thyroid carcinoma on subsequent histological examination. |
// CloseTheBox (from icelr4y/CloseTheBox)
package sim.strategy;
import static org.junit.Assert.*;
import java.util.List;
import game.Box;
import game.BoxFactory;
import game.Dice;
import org.junit.Test;
import sim.actions.TileAction;
public class HighTileStrategyTest {
@Test
public void testStrategyForNewBoard() {
Box box = BoxFactory.create();
GameStrategy strategy = new HighTileStrategy();
int numRolls = 1000;
Dice dice = new Dice(2);
for (int i = 0; i<numRolls; i++) {
int roll = dice.roll();
List<TileAction> actions = strategy.getActions(box, roll);
assertFalse("Strategy should always have an option with a fresh board", actions.isEmpty());
}
}
@Test
public void testStrategyWhenRollIsAvailable() {
Box box = BoxFactory.create();
GameStrategy strategy = new HighTileStrategy();
int roll = 9;
List<TileAction> actions = strategy.getActions(box, roll);
System.out.println("Strategy for roll: " + actions);
assertTrue("Strategy should recommend the roll when available", actions.get(0).getValue() == roll);
}
@Test
public void testStrategyWhenRollIsNotAvailable() {
Box box = BoxFactory.create();
box.closeTile(9);
GameStrategy strategy = new HighTileStrategy();
int roll = 9;
List<TileAction> actions = strategy.getActions(box, roll);
System.out.println("Strategy for roll: " + actions);
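        // No assertion here yet: this case currently only exercises getActions() when the rolled tile has already been closed.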
}
}
|
/*
* (C) Copyright 2017-2019 ElasTest (http://elastest.io/)
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*/
package io.elastest.epm.client.test.integration;
import static io.elastest.epm.client.DockerContainer.dockerBuilder;
import static java.lang.invoke.MethodHandles.lookup;
import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertNotNull;
import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.slf4j.LoggerFactory.getLogger;
import static org.springframework.boot.test.context.SpringBootTest.WebEnvironment.RANDOM_PORT;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.slf4j.Logger;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.context.annotation.PropertySource;
import org.springframework.context.annotation.PropertySources;
import org.springframework.test.context.junit.jupiter.SpringExtension;
import com.spotify.docker.client.messages.ImageInfo;
import com.spotify.docker.client.messages.PortBinding;
import io.elastest.epm.client.DockerContainer.DockerBuilder;
import io.elastest.epm.client.service.DockerService;
import io.elastest.epm.client.service.EpmServiceClient;
import io.elastest.epm.client.service.FilesService;
import io.elastest.epm.client.service.ShellService;
import static org.hamcrest.MatcherAssert.assertThat;
/**
* Tests for Docker service.
*
* @author <NAME> (<EMAIL>)
* @since 0.0.1
*/
@ExtendWith(SpringExtension.class)
@SpringBootTest(classes = { DockerService.class, ShellService.class,
EpmServiceClient.class, FilesService.class }, webEnvironment = RANDOM_PORT)
@Tag("integration")
@DisplayName("Integration test for Docker Service")
@EnableAutoConfiguration
@PropertySources({ @PropertySource(value = "classpath:epm-client.properties") })
public class DockerIntegrationTest {
final Logger log = getLogger(lookup().lookupClass());
String image = "elastest/test-etm-test1";
@Autowired
private DockerService dockerService;
@Test
@DisplayName("Ask for Chrome to Docker")
void testDocker() throws Exception {
// Test data (input)
String imageId = "selenium/standalone-chrome-debug:3.5.3";
log.debug("Starting Hub with image {}", imageId);
String containerName = dockerService
.generateContainerName("hub-for-test-");
DockerBuilder dockerBuilder = dockerBuilder(imageId)
.containerName(containerName);
String exposedPort = String.valueOf(4444);
dockerBuilder.exposedPorts(Arrays.asList(exposedPort));
String exposedHubVncPort = String.valueOf(5900);
dockerBuilder.exposedPorts(Arrays.asList(exposedHubVncPort));
// portBindings
Map<String, List<PortBinding>> portBindings = new HashMap<>();
portBindings.put(String.valueOf(exposedPort),
Arrays.asList(PortBinding.of("0.0.0.0",
Integer.toString(dockerService.findRandomOpenPort()))));
portBindings.put(String.valueOf(exposedHubVncPort),
Arrays.asList(PortBinding.of("0.0.0.0",
Integer.toString(dockerService.findRandomOpenPort()))));
dockerBuilder.portBindings(portBindings);
dockerService.pullImage(imageId);
dockerService.createAndStartContainer(dockerBuilder.build());
// Assertions
assertTrue(dockerService.existsContainer(containerName));
// Tear down
        log.debug("Stopping Hub");
dockerService.stopAndRemoveContainer(containerName);
}
@Test
public void inspectImageTest() throws Exception {
this.dockerService.pullImage(image);
ImageInfo imageInfo = this.dockerService.getImageInfoByName(image);
assertNotNull(imageInfo);
}
@Test
public void runStopAndRemoveContainerTest() throws Exception {
String image = "elastest/test-etm-test1";
String containerName = "testContainer" + System.currentTimeMillis() % 1000;
log.info("Starting container {}", containerName);
DockerBuilder dockerBuilder = new DockerBuilder(image);
dockerBuilder.containerName(containerName);
String containerId = this.dockerService
.createAndStartContainer(dockerBuilder.build());
log.info("Container {} started with id {}", containerName,
containerId);
assertTrue(this.dockerService.existsContainer(containerName));
log.info("Stopping container {}", containerName);
this.dockerService.stopDockerContainer(containerId);
log.info("Removing container {}", containerName);
this.dockerService.removeDockerContainer(containerId);
assertFalse(this.dockerService.existsContainer(containerName));
}
}
|
Endobronchial ultrasound transbronchial needle aspiration at Aberdeen Royal Infirmary: the initial experience. Sir, Endobronchial ultrasound transbronchial needle aspiration (EBUS-TBNA) is a more convenient, quicker and less expensive alternative to mediastinoscopy for sampling mediastinal lymph nodes and masses. Since its introduction, it has become well established as a further tool at the disposal of chest physicians by which to accurately diagnose and stage patients with suspected lung cancer and in the evaluation of those with mediastinal lymphadenopathy of uncertain aetiology.1,2 Although its benefits, especially when compared to other investigative tools in lung cancer are well studied, far less is known regarding real-life outcomes during its initial introduction. We therefore wished to highlight the demographics, results and diagnostic sensitivity of EBUS-TBNA in the first consecutive 100 patients in whom this procedure |
/**
* Format the object into a string according to the format rule defined
*/
@SuppressWarnings("unchecked")
public String formatString(Format<?> format, Object value) throws Exception {
String strValue = "";
if (value != null) {
try {
strValue = ((Format<Object>)format).format(value);
} catch (Exception e) {
throw new IllegalArgumentException("Formatting error detected for the value: " + value, e);
}
}
return strValue;
} |
Eric Zuesse
The first big battle of the Trump transition into the White House is occurring over the fundamental issue that had caused the Establishment to repudiate Donald Trump: Which war will America prioritize — the one against jihadists, or the one against Russia (and also against any nation’s leadership — including the leaders of Iran — that is friendly toward Russia)?
A domestic underground war has thus long been raging between Trump and the neoconservatives (the people who want to resume the Cold War as being now a hot war against Russia by overthrowing all governments — e.g., Saddam Hussein, Muammar Gaddafi, Viktor Yanukovych, Bashar al-Assad — favorable toward Russia). It has been raging ever since Trump made clear early this year that he wanted to stop Obama’s war in Syria against Assad and Putin, and start a real war against all of the many jihadist groups that are trying to overthrow the secular Assad, and to eliminate jihadists in every country except the ones that are supporting them, which then would constitute state sponsors of jihadism and thus enemies of the United States. This is a war about war.
This war is right now coming to a head with the breaking-off, on Tuesday November 15th, of Trump’s conciliatory efforts to win the cooperation of the neocons, which group includes virtually the entire Republican Party foreign-affairs Establishment, both military and diplomatic, plus most of the Democratic Party’s foreign-affairs Establishment. These two ‘Establishments’ are actually two teams of one Establishment, and they are, now, after four successive neoconservative U.S. Presidents (Bush I & II, Clinton, and Obama), almost entirely neoconservatives, especially on the Republican side (the Bush side).
Neoconservatism started in earnest on 24 February 1990, when U.S. President George Herbert Walker Bush told his agents that though the Cold War was then ending on the Russian side, it wasn't really going to end on the American side, even though they had all promised the then-Soviet and future Russian President Mikhail Gorbachev that it would. The next President, Bill Clinton, followed through and expanded NATO, and his successors, G.W. Bush and Barack Obama, expanded it even more, and so we now surround Russia with our missiles. We have also overthrown Moscow's friends and allies — Saddam Hussein, Muammar Gaddafi, and Viktor Yanukovych — and are still trying to do that in Syria, in order to weaken Russia still further, to go in then for the kill.
If Trump crosses Party lines in order to bring in the small segment of Democratic Party foreign-affairs Establishment who are not neocons, then he’ll face strong opposition from Republicans in the Senate and House, against passing significant portions of his Defense and State Department initiatives. His Presidency will then be crippled by the refusal of the Washington Establishment (the neocons) to provide the essential information and cooperation in order for the Trump Administration to have any major success in the realms of foreign affairs. The Establishment have lots of essential information and foreign-government contacts without which things cannot be done in international relations. Trump’s Presidency would then be stillborn.
The man who had organized the neoconservative revolt against Trump’s candidacy, and who recently but briefly held out an olive branch to assist the Trump team to select people to run U.S. international relations, Eliot A. Cohen, tweeted on November 15th, “After exchange w Trump team, changed my recommendation: stay away. They’re angry, arrogant, screaming ‘you LOST!’ Will be ugly.”
That’s all-out war: the neocons’ effort to sabotage Trump’s Presidency.
Another leading neocon, Daniel W. Drezner, tweeted later the same day, “Btw, the scariest sentence in that tweet is ‘Flynn and Kushner are now controlling who gets basic posts.’”
That’s referring to retired Lt. Gen. Michael Flynn, whom Obama fired as Director of the Defense Intelligence Agency because Flynn opposed Obama’s prioritizing the anti-Russia war above the anti-jihad war: Flynn favored our working with Assad instead of against all of the jihadists not only ISIS (like Obama demanded).
And that’s also Jared Kushner, Trump’s anti-Palestinian and anti-Iranian Zionist Jewish son-in-law, who is just now learning that all the terrorism that’s been perpetrated against the United States and Europe comes almost 100% not from them (the anti-Zionists), but instead from Iran’s self-declared “existential” enemy, the Saud family who own Saudi Arabia and who have been deeply allied with the U.S. Establishment, or aristocracy, especially America’s billionaires, ever since World War II ended, and who still remain determined to, with U.S. help, conquer Russia, which (even above Iran) is their major competitor in the oil-and-gas markets. Here is what the Trump team don’t know. And, above all, they don’t know that the royal family who own Saudi Arabia were the main financial backers of Al Qaeda and of 9/11. The U.S. government is in the ‘uncomfortable’ position of being allied with the enemies of not only the American public but of every nation that’s not run by fundamentalist Sunnis and Sharia law. We arm them. We defend them. And, on occasion, we get blown up by them.
Trump and his family had better be able to be quick learners of reality and discarders of myths, because in the short time they’ve got left to start running the U.S. government, there’s a lot of U.S. propaganda they’ll have to unlearn, and a lot of hidden history they’ll need to learn to replace it.
They say they want to clean the swamp in Washington, but the swamp includes thousands of people who are refusing to help inform and train their own replacements. The neocons ever since George W. Bush (and here are 450 of the most prominent of them on just the Republican side) have had a virtual monopoly over U.S. foreign policy, and don’t want to relinquish it.
—————
Investigative historian Eric Zuesse is the author, most recently, of They’re Not Even Close: The Democratic vs. Republican Economic Records, 1910-2010, and of CHRIST’S VENTRILOQUISTS: The Event that Created Christianity. |
Teaching cyber-physical systems to computer scientists via modeling and verification The greater versatility and increasingly smaller sizes of computing, sensing, and networking devices have resulted in a new computing paradigm called Cyber-Physical Systems (CPSs), which integrates computation and sensing into physical processes producing a wealth of exciting applications in many domains of life, such as transportation, medicine, and agriculture. In order to equip students with the essential knowledge and skills to be successful in the future, this paradigm requires an expansion in the scope of computer science curricula to enable students to understand and overcome the complexity inherent in CPSs. In this paper, we describe our experience with teaching CPS via a set of course modules that rely heavily on modeling and verification. By using the popular Android platform, we aim to engage students to successfully build CPS applications while enhancing their understanding of intellectually challenging concepts. |
At The NRA Meeting: Come For The Guns, Stay For The Camaraderie
Enlarge this image toggle caption Courtesy David Potter Courtesy David Potter
Ben Pickering can't believe his luck.
"Holy cow," he keeps saying. "Man, that's just incredible. That's just amazing."
Pickering won a drawing for an Ambush rifle, an $1,800 AR-15-style model. Pickering already has a lot of weapons — "I honestly could not count," he says — but he's still excited to be given this new one.
Pickering loves guns, but he's also happy that the National Rifle Association's annual meeting, being held this weekend in Indianapolis, has given him the chance to meet up with family members who live in other states.
The same sort of thing is true for a lot of people. There's plenty of talk about politics and gun laws, with Republican politicians such as Florida Sen. Marco Rubio and Louisiana Gov. Bobby Jindal addressing the crowd on Friday.
But for many among the 70,000 in attendance, meeting up with family, friends and like-minded people is more important than talking about this year's elections. It's also a chance for enthusiasts to check out all the latest in guns, custom barrels and binoculars.
"It's kind of like a big window-shopping thing," says Bill Slike, an engineer from neighboring Noblesville, Ind.
Most people — including Pickering — aren't actually going to walk away from the show with new weapons in hand, due to federal licensing requirements.
But, with the exhibit hall being billed as "nine acres of guns and gear," they can certainly do plenty of comparison shopping. The event is a showcase not only for major manufacturers such as Remington and Smith & Wesson, but smaller companies taking advantage of the chance to show off their wares to thousands of potential customers.
"You can handle the guns and make an informed purchase decision," says safety instructor Sig Swanstrom. "I'm from San Antonio and there are 1.4 million people, but there's no way to make this kind of comparison."
It's A Family Affair
The meeting is free for NRA members, who pay anywhere from $10 to join for a year to $500 for a lifetime membership. The crowd is overwhelmingly but not exclusively white, well into middle age or older, with more men than women.
A few men wear suits and ties, but most people are casually dressed in polos or plaids, some wearing t-shirts with slogans such as "Choose wisely: Glock, paper, scissors."
The aisles are so crowded that it can be difficult to move around, but a few parents patiently push their kids around in strollers. One little boy sits cross-legged against a wall, wearing an orange ball cap that says "Gun Auction" and sucking on a lollipop.
Todd Homan, a gun dealer from St. Henry, Ohio, has brought each of his eight children to an NRA meeting at least once. His son, Charlton, is making a return visit this year.
Back in 2001, when he was 5 weeks old, Charlton was held up on stage at the NRA meeting by his namesake, the actor and NRA President Charlton Heston, who died in 2008.
His parents carry around a small photo album, showing off pictures of that moment. Charlton Homan admits he's "kind of" sick of hearing the story.
Todd Homan says all his kids receive hunting safety instruction. Other parents stress the importance of safety, clearly enjoying sharing their interest in shooting sports with their kids.
"It's been something we've been doing for a fairly long time," says Jimmy Trout, a 16-year-old from Carlisle, Ohio, who's such a fan of the Browning Buck Mark line of weapons that he has its logo shaved into the back of his head.
Step Right Up
Trout's design excited the people at the Browning booth and helped get him a picture with Matt Hughes, a mixed martial arts champion who endorses the product. Trout also picked up an autographed poster from race car driver Jessica Barton at the EAA Corporation booth.
With hundreds of booths at the show, exhibitors try all sorts of things to attract attention. One sword company has its staff dressed up like pirates, while an ammunition manufacturer is holding a drawing for a helicopter pig hunt.
MGI Industries is based in Maine, so it's offering to ship two live lobsters with any rifle purchase.
"You try to entice people to see it, touch it," says Steve Henigan, MGI's vice president of sales and marketing. "We do get a lot of interest, and hopefully that interest turns into sales down the road."
In addition to rifles, slings and cleaning equipment, various companies are trying to interest attendees in other products and services. The NRA Wine Club advertises that "every sip supports the NRA" at its booth; the club pays the association 10 percent of its revenues in exchange for access to the group's massive email list.
At Booth 3603, Lucinda Bailey is trying to get passersby interested in her gardening books and heirloom seeds. "After guns and bullets, you gotta eat, right?" she says.
Maybe Next Time
Unlike many people in attendance, Bailey is not interested in what she calls "the boy toys" — the high-caliber weapons on offer, or the vehicles on display with machine guns mounted on top.
Bailey wanted to find a modest weapon that would help rid her garden of pests. "I found a gun I wouldn't be able to get back home," she says. "I'll be able to get anything from squirrels to feral hogs."
She said she appreciated the chance to quiz numerous makers about their guns. Other people said they were excited simply to be introduced to the latest products. Many expressed the desire to "feel" weapons in their own hands.
For Arthur May, a school security guard from Des Moines, the NRA meeting not only offered an excuse to hang out with an old friend from Wisconsin, but gave him the chance to hold a Thompson submachine gun, a vintage weapon from World War II he'd long admired.
Picking it up convinced him that, although he's a history buff and likes how the Thompson looks, it would never be right for him.
"It's an iconic weapon, but it's too bulky for me," May said. "It saved me $1,800, so I can have other dreams now." |
# Note: this is a method fragment; it assumes the enclosing module already imports
# os, shutil, and multiprocessing, along with the project's config and utils helpers.
def build_branch(self, branch):
build_dir = branch.build_dir()
build_link = os.path.join(config.SRCROOT_PATH, '_build')
if os.path.islink(build_link):
os.remove(build_link)
else:
shutil.rmtree(build_link, ignore_errors=True)
env = os.environ.copy()
env['FBMAKE_BUILD_ROOT'] = build_dir
cpus = multiprocessing.cpu_count()
utils.run_command('/usr/local/bin/fbmake --build-root "%s" '
'--build-flag USE_LOWPTR --distcc=off opt -j%d' % (build_dir, cpus), env=env) |
/*
Copyright (c) 2016 <NAME>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
*/
#include <iostream>
#include <vector>
#include <string>
#include "Zusi3TCP.h"
#include "DebugSocket.h"
#include "WinsockBlockingSocket.h"
void parseDataMessage(const zusi::Node& msg)
{
if (msg.getId() == zusi::MsgType_Fahrpult)
{
for (const zusi::Node* node : msg.nodes)
{
if (node->getId() == zusi::Cmd_DATA_FTD)
{
for (const zusi::Attribute* att : node->attributes)
{
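                // Interpret the raw attribute bytes as a float; this assumes the
                // payload is a 4-byte float in host byte order.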
float value = *(reinterpret_cast<float*>(att->data));
std::cout << "FS Data " << att->getId() << ": " << value << std::endl;
}
}
else if (node->getId() == zusi::Cmd_DATA_OPERATION)
{
for (const zusi::Node* input : node->nodes)
{
if (input->getId() == 1)
{
std::cout << "Tastur Operation:" << std::endl;
for (const zusi::Attribute* att : input->attributes)
{
if(att->getId() <= 0x3)
std::cout << " Parameter " << att->getId() << " = " << *(reinterpret_cast<uint16_t*>(att->data)) << std::endl;
else if(att->getId() == 0x4)
std::cout << " Position = " << *(reinterpret_cast<int16_t*>(att->data)) << std::endl;
}
}
}
}
}
}
}
int main(int argc, char** argv)
{
//Create a socket for message printing
zusi::DebugSocket debug_socket;
//Create connection to server
try {
zusi::WinsockBlockingSocket tcp_socket("127.0.0.1", 1436);
//Subscribe to Fuehrerstand Data
zusi::ClientConnection con(&tcp_socket);
std::vector<zusi::FuehrerstandData> fd_ids{ zusi::Fs_Geschwindigkeit, zusi::Fs_Motordrehzahl, zusi::Fs_DruckBremszylinder };
std::vector<zusi::ProgData> prog_ids{ zusi::Prog_SimStart };
//Subscribe to receive status updates about the above variables
//Not subscribing to input events
con.connect("DumpFtd", fd_ids, prog_ids, true);
std::cout << "Zusi Version:" << con.getZusiVersion() << std::endl;
std::cout << "Connection Info: " << con.getConnectionnfo() << std::endl;
while (true)
{
zusi::Node msg;
if (con.receiveMessage(msg))
{
std::cout << "Received message..." << std::endl;
msg.write(debug_socket); //Print data to the console
parseDataMessage(msg);
std::cout << std::endl;
}
else
{
std::cout << "Error receiving message" << std::endl;
break;
}
}
}
catch (std::runtime_error& e)
{
std::cout << "Network error: " << e.what() << std::endl;
return 1;
}
return 0;
} |
// packages/ts-docs-gen/src/api-items/types/api-type-definition.ts
import { Contracts } from "ts-extractor";
import { TsHelpers } from "ts-extractor/dist/internal";
import { ApiTypeReferenceBase } from "../api-type-reference-base";
import { ReferenceRenderHandler } from "../../contracts/serialized-api-item";
export type TypeDefinitions = Contracts.TypeLiteralTypeDto |
Contracts.MappedTypeDto |
Contracts.FunctionTypeTypeDto |
Contracts.ThisTypeDto |
Contracts.ConstructorTypeDto;
export class ApiTypeDefinition<TKind extends Contracts.ApiReferenceBaseType = TypeDefinitions> extends ApiTypeReferenceBase<TKind> {
public ToText(render: ReferenceRenderHandler = this.DefaultReferenceRenderer): string[] {
if (this.ReferenceItem == null) {
return [this.ApiItem.Text];
}
if (TsHelpers.IsInternalSymbolName(this.ReferenceItem.ApiItem.Name)) {
return this.ReferenceItem.ToText(render);
}
return [render(this.ReferenceItem.Name, this.ApiItem.ReferenceId)];
}
}
|
New research released today from the Insurance Institute for Highway Safety says that vehicular crashes have increased in states where recreational marijuana is legal.
The nonprofit organization took crash data from four states where recreational pot is legal: Nevada, Oregon, Washington and Colorado.
The findings revealed crashes were up as much as 6 percent, when compared with adjacent states that don't have legalized marijuana.
Now, the research doesn't prove marijuana is directly responsible for the increase, but it does show a correlation. The organization’s president says we should all take these numbers as an early warning sign.
There are still a lot of unknowns regarding marijuana in terms of how it affects the human body. For example, when someone is drunk, you can measure their blood alcohol content with a breathalyzer. However, there is no equivalent real-time test for measuring THC, the psychoactive component in marijuana. |
import os

def findFreeFilename(ext):
    # Look in the directory containing this script; os.path.realpath(__file__)
    # alone points at the file itself rather than its parent directory.
    parent = os.path.dirname(os.path.realpath(__file__))
    for i in range(100):
        filename = "temp" + str(i) + "." + ext
        if not os.path.exists(os.path.join(parent, filename)):
            return filename
return filename |
Meta-analysis of macrolide maintenance therapy for prevention of disease exacerbations in patients with noncystic fibrosis bronchiectasis Supplemental Digital Content is available in the text Introduction Noncystic fibrosis (non-CF) bronchiectasis is a chronic respiratory disease characterized by abnormal dilatation and distortion of the bronchi and bronchioles, mainly due to the vicious circle of frequent bacterial infections, chronic airway inflammation, retention of secretions, and airway destruction. During the last 2 decades, the prevalence of bronchiectasis has not decreased as expected with a better control of airway infections. From 2000 to 2007, the prevalence of bronchiectasis in the United States has increased with an annual percentage of 8.74%. The prevalence of bronchiectasis is particularly high among indigenous children in Central Australian, Maori, and Pacific Island in New Zealand, with at least 1470 cases for every 100,000 children. Therefore, there is still a heavy health care burden associated with bronchiectasis worldwide. Patients with non-CF bronchiectasis suffer from recurrent exacerbations, resulting in the destruction of airways and reduced lung function, life quality and lifespan. There is a strong interest in developing approaches that can mitigate this substantial problem. The current therapy of non-CF bronchiectasis mainly includes airway clearance techniques, exercise, inhaled hyperosmolar agents and mucolytic agents, and other anti-inflammatory agents. While the available non-CF treatments can provide some benefits, the above strategies are insufficient and the specific indications of the above strategies are not clearly defined. Better strategies are urgently needed. To the best of our knowledge, the important step of preventing exacerbations is to interrupt the vicious circle of infection, obstruction, inflammation and destruction. Macrolide antibiotics are antibacterial agents with anti-inflammatory and immunomodulatory properties, which indicates that macrolide maintenance treatment may be effective in preventing exacerbations of patients with non-CF bronchiectasis. Since the 1980s, macrolide antibiotics have been used to reduce the incidence of non-CF bronchiectasis exacerbations. In addition, macrolide antibiotics have the advantages of high plasma concentration, long half-life, and broad antimicrobial spectrum. All of these provide the rationale for using macrolide maintenance therapy in patients with non-CF bronchiectasis, which would be the first choice for prevention of non-CF bronchiectasis exacerbations. However, antibiotic maintenance therapy is not currently recommended as part of conventional management to control the disease and macrolides are only recommended in selected patients (e.g., those with frequent exacerbations, ≥3 exacerbations and/or ≥2 hospitalizations in the previous 12 months). Recently, several randomized controlled trials (RCTs) focusing on the benefits of reducing exacerbations have been carried out to evaluate the benefits and safety on macrolides maintenance therapy in non-CF bronchiectasis. However, individual studies obtained varied results. The current meta-analysis aimed to determine the benefits and safety of macrolide maintenance treatment for non-CF bronchiectasis exacerbations. Search strategy We operated a literature search using PubMed, Embase, Web of Science, and the Cochrane Library, with the last report up to May 2018. No language and date restriction were applied. Searches were limited to human only and RCTs. 
We searched the following MeSH terms: "Macrolides," "Macrolide," "Azithromycin," "Erythromycin," "Clarithromycin," "Roxithromycin," "Bronchiectasis," "noncystic fibrosis bronchiectasis," "non-CF bronchiectasis," "NCFB." In addition, the reference lists of the articles were manually searched for other potentially eligible studies. Abstracts published in academic conferences or website materials were excluded. The present meta-analysis was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement. 2.2. Inclusion criteria, exclusion criteria, and study selection The inclusion criteria were: a clinical randomized controlled trial (RCT); aimed to evaluate the benefits or safety of macrolides in comparison with control group (placebo, another class of antibiotic or blank control) in the treatment of patients with non-CF bronchiectasis; reported the number of exacerbations as the outcome; published in peer-reviewed journal. A study was excluded if: it is presented as a review article or protocol; involved patients with other chronic respiratory conditions, such as cystic fibrosis, COPD, or asthma; data could not be obtained by original manuscripts or e-mail to authors. Two reviewers independently managed the study selection, differences were resolved by consensus after discussion. Ethical approval was not necessary because this was a meta-analysis. Assessment of risk of bias in included studies The risk of bias of each study was assessed by 2 authors independently according to the criteria outlined in the Cochrane Handbook for Systematic Reviews of Interventions. Disagreements were resolved by discussion or arbitration involving a third reviewer. The detail domains of risk of bias included: random sequence generation; allocation concealment; blinding of participants and personnel; blinding of outcome assessment; incomplete outcome data; selective reporting; and other bias. Each potential source of bias was graded as high, low, or unclear and the assessment outcome was noted in the "Risk of bias" table. Data extraction and outcome measures The following data were extracted: the year when the study was performed, location, age range of patients, number of patients, diagnostic criteria, total duration of study, macrolides intervention (the type of macrolide, medicine dose, interval of drug administration), reported outcomes, adverse events. We extracted data from the original manuscripts when possible; we also contacted the corresponding authors for original data. The primary outcome measurement was the number of patients stratifying by different exacerbation times, the rate of exacerbation per patient per year and exacerbation-related admissions. A bronchiectasis exacerbation was defined using the study author's criteria, almost all of the criteria included intense coughing, dyspnea, wheezing, fever, chest pain, increased purulent sputum, requirement for oral or intravenous antibiotics. Secondary outcomes were adverse events and macrolide resistance. Data analysis Relative risk (RR) with 95% confidence intervals (95% CI) was used to evaluate dichotomous variables and standard mean differences (SMD) with 95% CI were used to evaluate continuous variables. Sensitivity analysis was performed for missing data. Cochran's X 2 statistics with P value and I 2 was used to measure the heterogeneity among studies. If P <.10 or I 2 > 50%, suggesting significant heterogeneity, a random-effects model was chosen. 
Otherwise, calculations were performed with a fixedeffects model. Subgroup analyses were performed to explore heterogeneity when substantial heterogeneity was presented. For the primary outcome, subgroup analyses were also performed based on: type of macrolides: azithromycin vs erythromycin; children vs adults. For the data that cannot be merged, a descriptive analysis was performed. P<.05 was considered statistically significant. Publication bias was assessed by the funnel plot. All statistical analyses were performed using Review Manager Software (version 5.3; Cochrane Collaboration, Oxford, United Kingdom). Literature review and selection Initial literature searches retrieved 541 articles, 85 articles were included after initial title and abstract screening. Another 75 articles were excluded for various reasons, and 10 RCTs were finally identified for meta-analysis (see figure, supplemental digital content S1, http://links.lww.com/MD/C926, which illustrates the flowchart of the study selection process). The methodological quality of the included studies was evaluated using the risk of bias (see figure, supplemental digital content S2, http://links.lww.com/MD/C926, which illustrates the quality assessment of the studies). Seven trials described random sequence generation, 5 trials described allocation concealment, all the trials reported incomplete outcome data and described the selective reporting, only 4 indicated no other bias. Three trials did not describe blinding of the patients or executors, one trial described no blinding of outcome assessment. Outcomes The detailed outcome measurements of the included trials are shown in Table 2. 3.3.1. Primary outcome: clinic exacerbations 3.3.1.1. Number of patients free from exacerbations. Nine trials (n = 572) reported the number of patients free from exacerbations. Six trials were conducted in adults and the other 3 in children; 4 trials used azithromycin as the intervention, 3 used erythromycin, and the other 2 used roxithromycin. Compared with the standard group, the number of patients without exacerbation in the macrolides group significantly increased during the observation time (RR = 1.56, 95% CI = 1.14-2.14, P =.006; Fig. 1). Since significant heterogeneity was detected (I 2 = 72%), random-effects model were chosen. To identify and measure heterogeneity, specified subgroup analyses were further performed (Fig. 2). There was a significant benefit in azithromycin group (RR = 2.25, 95% CI = 1.67-3.02, P <.00001), but no significant benefit for neither erythromycin (RR = 1.33, 95% CI = 0.92-1.94, P =.13) nor roxithromycin (RR = 1.14, 95% CI = 0.97-1.35, P =.11). The heterogeneity in the subgroup of azithromycin, erythromycin, and roxithromycin exhibited was all unremarkable (I 2 = 0%), suggesting the result in azithromycin, erythromycin, roxithromycin was reliable. There was a significantly benefit of macrolide therapy in reducing the exacerbations both in children (RR 5.03, 95% CI 2.02-12.50, P =.0005) and adults (RR = 1.66, 95% CI = 1.37-2.02, P <.00001). The heterogeneity in the subgroup of children was not significant (I 2 = 45%), suggesting the result in children may be credible; while the heterogeneity exhibited in the subgroup of adults was significant (I 2 = 79%), indicating the result in adults may be not reliable. 
It can be concluded that there was a significant benefit of macrolide therapy in preventing exacerbations for patients with non-CF bronchiectasis and in reducing the number of patients experiencing one or more exacerbations in children, while the significant heterogeneity in adults means that the result in adults should be interpreted with caution.

Table 1. Characteristics of randomized clinical trials included in the meta-analysis.

Table 2. The non-CF exacerbations of the trials included in the meta-analysis.

Severe adverse events mainly included disease requiring surgery, heart failure, stroke, corrected QT interval (QTc) prolongation, and even death. The number of patients who experienced any severe adverse event was significantly reduced in the macrolides group compared with the control group (RR = 0.53, 95% CI = 0.33-0.85, P = .009; Fig. 7). No significant heterogeneity was detected (I² = 0%), so a fixed-effects model was chosen. Three trials reported macrolide resistance, and the rate in the macrolides group was higher than in the control group (RR = 3.59, 95% CI = 2.60-4.96, P < .00001; Fig. 8). No significant heterogeneity was detected (I² = 0%), so a fixed-effects model was chosen. All 3 of these trials used azithromycin; unfortunately, no data on macrolide resistance were reported for erythromycin or roxithromycin. We could therefore conclude that azithromycin was associated with a higher risk of developing resistance, but no data were available for other macrolide agents to allow a comparison. It can be concluded that, compared with the control group, macrolide therapy would not increase the risk of adverse events and would decrease the risk of severe adverse events, but would increase the risk of antimicrobial resistance.

Publication bias

There is an asymmetrical appearance of the funnel plot, with a gap in a bottom corner, for the number of patients free from bronchiectasis exacerbations (see figure, supplemental digital content S7, http://links.lww.com/MD/C926, which illustrates the funnel plot of included trials for the number of patients free from bronchiectasis exacerbations). However, Egger's test did not show a significant publication bias (P = .09). Publication bias for the rate of exacerbation was not assessed due to the limited number of studies.

Discussion

The overall aim of the treatment of patients with non-CF bronchiectasis is to reduce symptoms and prevent exacerbations, and thereby maintain lung function and improve quality of life. As a group of antibiotics with a wide antimicrobial spectrum, macrolides have been demonstrated to prevent exacerbations in patients with CF and COPD. We performed this systematic review and meta-analysis to explore the evidence for a beneficial effect of macrolide treatment on non-CF bronchiectasis. Although 2 previous meta-analyses by Gao et al and Fan et al assessed the benefits and safety of macrolides in patients with non-CF bronchiectasis, the current study assessed macrolide effects from different perspectives and included more eligible articles. For the primary outcome, our study concluded that macrolide therapy could significantly decrease the number of patients with exacerbations and the average number of exacerbations per patient during the observation time. However, we further analyzed the number of patients stratified by number of exacerbations (0, 1, 2, ≥3) and found a significant difference in the number of participants with 0 and ≥3 exacerbations between the macrolide-treated group and the control group.
There was no significant difference for participants with 1 or 2 exacerbations. In contrast to the result of Gao's study, we found that macrolide therapy can reduce the risk of admission for infective exacerbations compared with the control group. Our results suggest that macrolides seem to have a positive net effect among individuals with more frequent exacerbations of non-CF bronchiectasis. In addition, to address the degree of heterogeneity in the findings, subgroup analyses were performed for the number of patients free from exacerbations and the average exacerbations per patient, by adults or children and by azithromycin, erythromycin, or roxithromycin. We concluded that exacerbations could be prevented in children as well as adults, but we could not eliminate the heterogeneity in adults for the number of patients with 0 exacerbations, or in children for the average exacerbations per patient. Due to the limited number of studies, we could not employ further analysis to decrease the heterogeneity. We also concluded that only azithromycin could prevent exacerbations; erythromycin and roxithromycin did not show a significant effect in patients with non-CF bronchiectasis, and no trial on clarithromycin met the inclusion criteria.

Besides the benefits, safety is another important concern for the widespread implementation of macrolides, so we collected data on the adverse events of the included studies. Macrolides may cause gastrointestinal reactions, ototoxicity, and cardiovascular risks, including proarrhythmic effects. The main adverse events included gastrointestinal complications, such as nausea, vomiting, diarrhea, and abdominal discomfort, as well as headache, sinusitis, and rash in some participants. In our study, gastrointestinal adverse events were significantly more common in the macrolide treatment group than in the control group. Diarrhea was the most common side effect, followed by nausea and vomiting; a few patients had abdominal pain. However, the symptoms were mild and could be relieved by expectant treatment, and they did not result in patient withdrawal. Neither ototoxicity nor cardiovascular risks were systematically assessed in the selected studies; only one study monitored electrocardiogram changes and reported QTc prolongation in one participant (1/59) in the macrolides group. Nevertheless, clinicians should monitor the ECG to prevent cardiovascular events. According to the meta-analysis results, the pooled estimate of the number of patients suffering from any adverse event was not significantly different between the macrolides treatment group and the control group (P = .92). Compared with the control group, there was a significant decrease in the number of patients who experienced any severe adverse event in the macrolides treatment group (P = .02). Severe adverse events mainly included disease requiring admission or surgery, heart failure, stroke, QTc prolongation, and even death. Neither Li nor Gao reported severe adverse events in their meta-analyses.

Due to the emergence of multidrug-resistant strains, macrolide resistance is another critical issue for long-term macrolide therapy. Compared with the control group, we observed a significant increase in antimicrobial resistance in the macrolides group. This is in accordance with Li's study, although there was an obvious mistake in the forest plot of antimicrobial resistance in Li's study.
In Li's study, the data reported by Valery et al were extracted as though they were the same as those reported by Wong et al (macrolides group 2/71 vs control group 0/70); however, the data actually reported by Valery et al were macrolides group 19/41 vs control group 4/37. Macrolide resistance has become an increasing concern worldwide, particularly in Asia, and is probably the most hazardous shortcoming of the widespread use of macrolides.

Certain limitations of our study should be noted. Our analyses were based on pooled data from various trials with different macrolide antibiotics, populations, durations and, in particular, heterogeneous illness states (different exacerbation history, age, and disease severity) before the participants were recruited; without more details, we could not employ meta-regression to detect whether the effect size was related to clinical heterogeneity. In addition, the random-effects model and subgroup analyses were used to account for the heterogeneity. Finally, studies recruited participants without a CT scan to confirm the presence of bronchiectasis, which may introduce bias.

Conclusion

In conclusion, the results from this meta-analysis strongly support that macrolide maintenance treatment has a benefit in preventing exacerbations in both adults and children with bronchiectasis compared with controls. Azithromycin showed significant benefits compared with the other macrolides. In addition, macrolides would not increase the risk of adverse events and could decrease the risk of severe adverse events, but would increase the risk of antimicrobial resistance. Nonetheless, randomized controlled trials involving larger patient samples are needed to confirm the optimal agents, dose, and duration, and the potential antimicrobial resistance following macrolide therapy for patients with non-CF bronchiectasis.
// jest.setup.ts
import '@testing-library/jest-dom'
import 'jest-canvas-mock' |
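// Illustrative wiring (assumed, not shown in this file): a Jest config would load this
// setup module through the `setupFilesAfterEnv` option, e.g.
//   module.exports = { setupFilesAfterEnv: ['<rootDir>/jest.setup.ts'] }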
/**
 * Provides array functions.
 */
public final class ArrayFunction {
private ArrayFunction() {
}
/**
* Creates an ARRAY_CONTAINS(expr, value) function that checks whether the given array
* expression contains the given value or not.
*
 * @param expression The expression that evaluates to an array.
* @param value The value to search for in the given array expression.
* @return The ARRAY_CONTAINS(expr, value) function.
*/
@NonNull
public static Expression contains(@NonNull Expression expression, @NonNull Expression value) {
if (expression == null || value == null) {
throw new IllegalArgumentException("expression and value cannot be null.");
}
return new Expression.FunctionExpression("ARRAY_CONTAINS()", Arrays.asList(expression, value));
}
/**
* Creates an ARRAY_LENGTH(expr) function that returns the length of the given array
* expression.
*
 * @param expression The expression that evaluates to an array.
* @return The ARRAY_LENGTH(expr) function.
*/
@NonNull
public static Expression length(@NonNull Expression expression) {
if (expression == null) {
throw new IllegalArgumentException("expression cannot be null.");
}
return new Expression.FunctionExpression("ARRAY_LENGTH()", Arrays.asList(expression));
}
} |
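// Illustrative usage (a sketch; Expression.property() and Expression.string() are assumed
// to exist in the surrounding query-builder API and are not defined in this file):
//
//   Expression hasMusic = ArrayFunction.contains(
//           Expression.property("interests"), Expression.string("music"));
//   Expression interestCount = ArrayFunction.length(Expression.property("interests"));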
// src/redux/reducers/forex.ts
export enum FOREX_TYPES {
SET_DATA_POINTS = 'SET_DATA_POINTS',
SET_NAME = 'SET_NAME',
}
export interface DataPoints {
close: number
open: number
low: number
high: number
date: Date
}
export interface ForexPayload {
currencyName: string
currencyData: DataPoints[]
}
export interface ForexAction {
type: FOREX_TYPES
payload: ForexPayload
}
export interface ForexState extends ForexPayload {}
const initialState: ForexState = {
currencyName: '',
currencyData: [],
}
const forex = (state = initialState, action: ForexAction) => {
switch (action.type) {
case FOREX_TYPES.SET_DATA_POINTS: {
return {
...state,
...{ currencyData: action.payload.currencyData },
}
}
case FOREX_TYPES.SET_NAME: {
return {
...state,
...{ currencyName: action.payload.currencyName },
}
}
default: {
return state
}
}
}
export default forex
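// Illustrative usage: the reducer can be exercised directly (or handed to a Redux store).
//   const next = forex(undefined, {
//     type: FOREX_TYPES.SET_NAME,
//     payload: { currencyName: 'EUR/USD', currencyData: [] },
//   })
//   // next.currencyName === 'EUR/USD'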
|
import json

from bs4 import BeautifulSoup


def preprocess_courses_corpus():
    # Parse the saved course-catalogue HTML and dump it as a JSON list of documents.
    soup = None
    with open('courses_corpus.html', 'r') as infile:
        content = infile.read()
        soup = BeautifulSoup(content, 'html.parser')
    docid = 0
    data = {}
    data['documents'] = []
    main_table = soup.find_all("div", attrs={'class': 'courseblock'})
    for course in main_table:
        docid += 1
        title_blocks = course.find_all('p', attrs={'class': 'courseblocktitle noindent'})
        desc_blocks = course.find_all('p', attrs={'class': 'courseblockdesc noindent'})
        extra_blocks = course.find_all('p', attrs={'class': 'courseblockextra noindent'})
        title = title_blocks[0].text.lstrip('\n') if title_blocks else ''
        description = (desc_blocks[0].text.lstrip('\n') if desc_blocks else '') \
            + ' ' + (extra_blocks[0].text if extra_blocks else '')
        data['documents'].append({
            'docId': docid,
            'title': title.strip(),
            'description': description.strip()
        })
    with open('courses_data.json', 'w') as outfile:
        json.dump(data, outfile)
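
# Example usage (the input and output file names are the ones hard-coded above):
if __name__ == '__main__':
    preprocess_courses_corpus()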
A total of 92 patients with tricuspid valvular disease (TR) underwent surgery from January 1978 to March 1988: DeVega's annuloplasty in 80 patients (87%) and valve replacement in 12 patients (13%). All patients were diagnosed by cardiac catheterization and angiography together with clinical findings, and in recent cases pulsed and color Doppler echocardiography were applied. Eighty-nine of 92 patients (97%) were in NYHA class III or IV before operation. There were 7 early deaths (8.5%) after the DeVega procedure and one (8.3%) after TVR; late deaths were noted in 3 patients (3.6%) after DeVega's procedure and in one (8.3%) after TVR. Two patients required re-operation with TVR 5 and 6 years after the DeVega procedure because of recurrent mitral valvular disease. Seventy-seven of 80 survivors were in NYHA class I or II postoperatively. Twenty-seven randomly selected patients who had undergone DeVega's annuloplasty were investigated by pulsed and color Doppler echocardiography; 17 of them (63%) had no regurgitation and the remaining 10 patients had mild to moderate regurgitation. This study suggests that DeVega's annuloplasty is a simple and reliable procedure for patients with functional TR and results in excellent hemodynamic and functional outcomes postoperatively.
Selection at behavioral, developmental and metabolic genes is associated with the northward expansion of a successful tropical colonizer What makes a species able to colonize novel environments? This question is key to understand the dynamics of adaptive radiations and ecological niche shifts, but the mechanisms that underlie expansion into novel habitats remain poorly understood at a genomic scale. Lizards from the genus Anolis are typically tropical and the green anole (Anolis carolinensis) constitutes an exception since it expanded into temperate North America from subtropical Florida. Thus, we used the green anole as a model to investigate signatures of selection associated with colonization of a new environment, namely temperate North America. To this end, we analyzed 29 whole genome sequences, representing the entire genetic diversity of the species. We used a combination of recent methods to quantify both positive and balancing selection in northern populations, including FST outlier methods, machine learning and ancestral recombination graphs. We naively scanned for genes of interest and assessed the overlap between multiple tests. Strikingly, we identified many genes involved in behavior, suggesting that the recent successful colonization of northern environments may have been linked to behavioral shifts as well as physiological adaptation. These results were robust to recombination, gene length and clustering. Using a candidate genes strategy, we determined that genes involved in response to cold or behavior displayed more frequently signals of selection, while controlling for local recombination rate and gene length. In addition, we found signatures of balancing selection at immune genes in all investigated genetic groups, but also at genes involved in neuronal and anatomical development in Florida. |
def summarize_children(queues: dict) -> dict:
    """Roll leaf metrics up the queue tree, then average each parent's
    queue_load_factor over its number of children."""
    averaged_nodes = set()
    # Leaves: queues with no children.
    edges = tuple(queue for name, queue in queues.items() if not queue.children)
    # First pass: add every leaf's metrics to all of its ancestors.
    for node in edges:
        metrics_to_apply = node.metrics
        while node.parent:
            for metric_name, value in metrics_to_apply.items():
                queues[node.parent].metrics[metric_name] += value
            node = queues[node.parent]
    # Second pass (breadth-first, leaves upward): divide each parent's
    # queue_load_factor by its child count, visiting every parent only once.
    stack = list(edges)
    while stack:
        node = stack.pop(0)
        if node.parent is None:
            continue
        parent = queues[node.parent]
        if parent.name in averaged_nodes:
            continue
        parent.metrics["queue_load_factor"] /= len(parent.children)
        averaged_nodes.add(parent.name)
        stack.append(parent)
    return queues
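
# Illustrative usage with a minimal stand-in node type (the real queue objects are assumed
# to expose .name, .parent, .children and a mutable .metrics mapping):
if __name__ == "__main__":
    class _Queue:
        def __init__(self, name, parent=None, children=(), metrics=None):
            self.name, self.parent, self.children = name, parent, list(children)
            self.metrics = metrics if metrics is not None else {"queue_load_factor": 0.0}

    demo = {
        "root": _Queue("root", children=["a", "b"]),
        "a": _Queue("a", parent="root", metrics={"queue_load_factor": 0.4}),
        "b": _Queue("b", parent="root", metrics={"queue_load_factor": 0.8}),
    }
    summarize_children(demo)
    # root's queue_load_factor is now the mean of its children: (0.4 + 0.8) / 2 == 0.6
    print(demo["root"].metrics["queue_load_factor"])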
Operating Grid-Forming Control on Automotive Reversible Battery Charger Nowadays, the production of electric energy is evolving towards decentralized systems through an increasingly advanced integration of new active loads, such as electric vehicles, and renewable energy sources. This energy transition involves the use of power electronics converters to regulate energy exchanges. In this context, this paper brings the L2EP knowledge on grid-forming control, developed in a high-voltage context, to the ValeoSiemens reversible charger. This entails two significant differences in the sizing of the system: a lower grid connection impedance and the resistive character of the distribution network. To overcome these challenges, this paper proposes two techniques: a virtual impedance to compensate for the low value of the connection impedance, and dynamic decoupling of active and reactive power to account for the resistive effect of the distribution network.
/**
* Copyright (c) 2015 See AUTHORS file
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
*
* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
* Neither the name of the mini2Dx nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
package org.mini2Dx.ui.element;
import org.mini2Dx.core.serialization.annotation.ConstructorArg;
import org.mini2Dx.core.serialization.annotation.Field;
import org.mini2Dx.ui.layout.LayoutRuleset;
import org.mini2Dx.ui.render.AbsoluteModalRenderNode;
import org.mini2Dx.ui.render.ParentRenderNode;
/**
* A {@link Modal} that can be positioned manually
*/
public class AbsoluteModal extends Modal {
@Field(optional=true)
private float x;
@Field(optional=true)
private float y;
/**
* Constructor. Generates a unique ID for this {@link Modal}
*/
public AbsoluteModal() {
this(null);
}
/**
* Constructor
* @param id The unique ID of the {@link AbsoluteModal}
*/
public AbsoluteModal(@ConstructorArg(clazz=String.class, name = "id") String id) {
super(id);
}
@Override
protected ParentRenderNode<?, ?> createRenderNode(ParentRenderNode<?, ?> parent) {
return new AbsoluteModalRenderNode(parent, this);
}
/**
* Returns the x coordinate of this {@link AbsoluteModal}
* @return 0 by default
*/
public float getX() {
return x;
}
/**
* Sets the x coordinate of this {@link AbsoluteModal}
* @param x The x coordinate in pixels
*/
public void setX(float x) {
if(this.x == x) {
return;
}
this.x = x;
if(renderNode == null) {
return;
}
renderNode.setDirty(true);
}
/**
* Returns the y coordinate of this {@link AbsoluteModal}
* @return 0 by default
*/
public float getY() {
return y;
}
/**
* Sets the y coordinate of this {@link AbsoluteModal}
* @param y The y coordinate in pixels
*/
public void setY(float y) {
if(this.y == y) {
return;
}
this.y = y;
if(renderNode == null) {
return;
}
renderNode.setDirty(true);
}
/**
* Sets the x and y coordinate of this {@link AbsoluteModal}
* @param x The x coordinate in pixels
* @param y The y coordinate in pixels
*/
public void set(float x, float y) {
if(this.x == x && this.y == y) {
return;
}
this.x = x;
this.y = y;
if(renderNode == null) {
return;
}
renderNode.setDirty(true);
}
/**
* Returns the width of the {@link AbsoluteModal} as determined by its
* {@link LayoutRuleset}
*
 * @return The width in pixels, or 0 if this element has not yet been rendered
*/
public float getWidth() {
if(renderNode == null) {
return 0f;
}
return renderNode.getOuterWidth();
}
/**
* Returns the height of the {@link AbsoluteModal} as determined by its
* child {@link UiElement}s
*
 * @return 0 if no elements have been added
*/
public float getHeight() {
if(renderNode == null) {
return 0f;
}
return renderNode.getOuterHeight();
}
}
|
Mobility of Antiflorigen and PEBP mRNAs in Tomato-Tobacco Heterografts1 Long-distance movement of tobacco NsCET1 and PEBP mRNAs suggests that acquisition of RNA mobility is an early evolutionary event. Photoperiodic floral induction is controlled by the leaf-derived and antagonistic mobile signals florigen and antiflorigen. In response to photoperiodic variations, florigen and antiflorigen are produced in leaves and translocated through phloem to the apex, where they counteract floral initiation. Florigen and antiflorigen are encoded by a pair of homologs belonging to FLOWERING LOCUS T (FT)- or TERMINAL FLOWER1 (TFL1)-like clades in the phosphatidylethanolamine-binding domain protein (PEBP) family. The PEBP gene family contains FT-, TFL1-, and MOTHER OF FT AND TFL1 (MFT)-like clades. Evolutionary analysis suggests that FT- and TFL1-like clades arose from an ancient MFT-like clade. The protein movement of the PEBP family is an evolutionarily conserved mechanism in many plants; however, the mRNA movement of the PEBP family remains controversial. Here, we examined the mRNA movement of PEBP genes in different plant species. We identified a tobacco (Nicotiana sylvestris) CENTRORADIALIS-like1 gene, denoted NsCET1, and showed that NsCET1 is an ortholog of the Arabidopsis (Arabidopsis thaliana) antiflorigen ATC. In tobacco, NsCET1 acts as a mobile molecule that non-cell-autonomously inhibits flowering. Grafting experiments showed that endogenous and ectopically expressed NsCET1 mRNAs move long distances in tobacco and Arabidopsis. Heterografts of tobacco and tomato (Solanum lycopersicum) showed that, in addition to NsCET1, multiple members of the FT-, TFL1-, and MFT-like clades of tobacco and tomato PEBP gene families are mobile mRNAs. Our results suggest that the mRNA mobility is a common feature of the three clades of PEBP-like genes among different plant species. |
My lovely Wednesday! Here I am, once again smile
I'll start with an honest statement – I wasn't really productive this week. There we go, I've said it! I've been really tired every day and I hardly got anything done. I skipped #screenshotsaturday again, which isn't good for my exposure. I had almost everything ready but unexpected stuff came into my schedule. I must get myself together and start with full force once again, starting tomorrow. Also, I need to sleep more and kill some stress; that should help as well.
Now, to the point – what was done this week anyway
Wannabe #screenshotsaturday entry
I wanted to show a few seconds of the Courier on his way to the Old Hag in stormy weather. I've added water drop splashes but it's really hard to see them in this gif animation. I'll also need to add some fog here, as well as more splashes.
Introducing Golems (prototype name) for the first time
I’ve shared quite some info about current enemies in the past but there was silence about it for some time due to the storyline map development. So this may be a bit more interesting.
I had golems in my game for a really long time but you meet them kinda late in the game. As you know, I don’t have content for the late game done and that’s the reason I wasn’t showing you anything about these. I was talking to my friend about the Courier of the Crypts the other day and got inspired to make some changes to that living piece of metal. I’ve changed the graphics and some other attributes.
When I first looked at the new sprite I saw Arthas in it. A slightly different helmet, a cloak and Frostmourne would result in that for sure. Besides that, I don't think it looks too much like a golem, but as I've mentioned, Golem is a prototype name, and when you hear me talking about golems, that's what I have in mind.
Long ago, I mentioned that almost every enemy acts differently – some are passive, others will shoot at you and some of them are invincible. The golem is one of the invincible enemies, but it has its own weakness.
Can you read the trick out of this image? Golems were designed to block your way with a heavy stone shield (part of the sarcophagus) in one hand. You could easily dodge them, but in some cases you couldn't – the only way to stop them was to throw an oil urn at them. This would stun them for a few seconds, just enough to go past them.
Stunning golems can be useful in many ways, which I won't spoil, at least not for now. To make them a bit more complicated I've decided to add another element: a shield block. This is a new idea which isn't implemented yet and I can't wait to do it. If you throw an oil urn directly at his face, he'll deflect the urn back at you. You'll have to catch him off guard to stun him.
I hope you love the idea behind the golems and the graphics. The graphics are still WIP but they're an improvement.
Storyline map progress
There is not much to show here right now, and most likely I'll stop posting screenshots about it for a while because I really don't want to spoil everything about it. I'll show you my humble minimap (my first minimap in my gamedev career) resembling the storyline map. Don't laugh at it please, it's merely a tool to help me navigate from one side of the map to the other. And no, this isn't a minimap for the game – you won't have one in the game tongue
If you remember the church scene I showed you a while ago, that's the yellow area on the minimap. Just to help you picture it. Some places are not connected, and that's because there are transitions in there from one place to another.
This is it for the week – I'll keep working on the storyline map for now. My plans tell me I should have finished it by now, but I can't help it. Let's hope next week is going to be more productive.
Have fun everyone and may the weekend come soon!
/*
* Copyright (c) 2016 The Khronos Group Inc.
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software source and associated documentation files (the "Materials"),
* to deal in the Materials without restriction, including without limitation
* the rights to use, copy, modify, compile, merge, publish, distribute,
* sublicense, and/or sell copies of the Materials, and to permit persons to
* whom the Materials are furnished to do so, subject the following terms and
* conditions:
*
* All modifications to the Materials used to create a binary that is
* distributed to third parties shall be provided to Khronos with an
* unrestricted license to use for the purposes of implementing bug fixes and
* enhancements to the Materials;
*
* If the binary is used as part of an OpenCL(TM) implementation, whether binary
* is distributed together with or separately to that implementation, then
* recipient must become an OpenCL Adopter and follow the published OpenCL
* conformance process for that implementation, details at:
* http://www.khronos.org/conformance/;
*
* The above copyright notice, the OpenCL trademark license, and this permission
* notice shall be included in all copies or substantial portions of the
* Materials.
*
* THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE MATERIALS OR THE USE OR OTHER DEALINGS IN
* THE MATERIALS.
*
* OpenCL is a trademark of Apple Inc. used under license by Khronos.
*/
#include "icd_dispatch.h"
#include "icd.h"
#include <stdlib.h>
#include <string.h>
// Platform APIs
CL_API_ENTRY cl_int CL_API_CALL
clGetPlatformIDs(cl_uint num_entries,
cl_platform_id * platforms,
cl_uint * num_platforms) CL_API_SUFFIX__VERSION_1_0
{
KHRicdVendor* vendor = NULL;
cl_uint i;
// initialize the platforms (in case they have not been already)
khrIcdInitialize();
if (!num_entries && platforms)
{
return CL_INVALID_VALUE;
}
if (!platforms && !num_platforms)
{
return CL_INVALID_VALUE;
}
// set num_platforms to 0 and set all platform pointers to NULL
if (num_platforms)
{
*num_platforms = 0;
}
for (i = 0; i < num_entries && platforms; ++i)
{
platforms[i] = NULL;
}
// return error if we have no platforms
if (!khrIcdVendors)
{
return CL_PLATFORM_NOT_FOUND_KHR;
}
// otherwise enumerate all platforms
for (vendor = khrIcdVendors; vendor; vendor = vendor->next)
{
if (num_entries && platforms)
{
*(platforms++) = vendor->platform;
--num_entries;
}
if (num_platforms)
{
++(*num_platforms);
}
}
return CL_SUCCESS;
}
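/*
 * Illustrative application-side usage of the entry point above (a sketch, not part of
 * the loader itself): query the platform count first, then fetch the IDs.
 *
 *     cl_uint count = 0;
 *     clGetPlatformIDs(0, NULL, &count);
 *     cl_platform_id *ids = (cl_platform_id *)malloc(count * sizeof(cl_platform_id));
 *     clGetPlatformIDs(count, ids, NULL);
 *     ...
 *     free(ids);
 */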
CL_API_ENTRY cl_int CL_API_CALL
clGetPlatformInfo(cl_platform_id platform,
cl_platform_info param_name,
size_t param_value_size,
void * param_value,
size_t * param_value_size_ret) CL_API_SUFFIX__VERSION_1_0
{
// initialize the platforms (in case they have not been already)
khrIcdInitialize();
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(platform, CL_INVALID_PLATFORM);
return platform->dispatch->clGetPlatformInfo(
platform,
param_name,
param_value_size,
param_value,
param_value_size_ret);
}
// Device APIs
CL_API_ENTRY cl_int CL_API_CALL
clGetDeviceIDs(cl_platform_id platform,
cl_device_type device_type,
cl_uint num_entries,
cl_device_id * devices,
cl_uint * num_devices) CL_API_SUFFIX__VERSION_1_0
{
// initialize the platforms (in case they have not been already)
khrIcdInitialize();
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(platform, CL_INVALID_PLATFORM);
return platform->dispatch->clGetDeviceIDs(
platform,
device_type,
num_entries,
devices,
num_devices);
}
CL_API_ENTRY cl_int CL_API_CALL
clGetDeviceInfo(
cl_device_id device,
cl_device_info param_name,
size_t param_value_size,
void * param_value,
size_t * param_value_size_ret) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(device, CL_INVALID_DEVICE);
return device->dispatch->clGetDeviceInfo(
device,
param_name,
param_value_size,
param_value,
param_value_size_ret);
}
CL_API_ENTRY cl_int CL_API_CALL
clCreateSubDevices(cl_device_id in_device,
const cl_device_partition_property * properties,
cl_uint num_entries,
cl_device_id * out_devices,
cl_uint * num_devices) CL_API_SUFFIX__VERSION_1_2
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(in_device, CL_INVALID_DEVICE);
return in_device->dispatch->clCreateSubDevices(
in_device,
properties,
num_entries,
out_devices,
num_devices);
}
CL_API_ENTRY cl_int CL_API_CALL
clRetainDevice(cl_device_id device) CL_API_SUFFIX__VERSION_1_2
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(device, CL_INVALID_DEVICE);
return device->dispatch->clRetainDevice(device);
}
CL_API_ENTRY cl_int CL_API_CALL
clReleaseDevice(cl_device_id device) CL_API_SUFFIX__VERSION_1_2
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(device, CL_INVALID_DEVICE);
return device->dispatch->clReleaseDevice(device);
}
// Context APIs
CL_API_ENTRY cl_context CL_API_CALL
clCreateContext(const cl_context_properties * properties,
cl_uint num_devices,
const cl_device_id * devices,
void (CL_CALLBACK *pfn_notify)(const char *, const void *, size_t, void *),
void * user_data,
cl_int * errcode_ret) CL_API_SUFFIX__VERSION_1_0
{
// initialize the platforms (in case they have not been already)
khrIcdInitialize();
if (!num_devices || !devices)
{
if (errcode_ret)
{
*errcode_ret = CL_INVALID_VALUE;
}
return NULL;
}
KHR_ICD_VALIDATE_HANDLE_RETURN_HANDLE(devices[0], CL_INVALID_DEVICE);
return devices[0]->dispatch->clCreateContext(
properties,
num_devices,
devices,
pfn_notify,
user_data,
errcode_ret);
}
CL_API_ENTRY cl_context CL_API_CALL
clCreateContextFromType(const cl_context_properties * properties,
cl_device_type device_type,
void (CL_CALLBACK *pfn_notify)(const char *, const void *, size_t, void *),
void * user_data,
cl_int * errcode_ret) CL_API_SUFFIX__VERSION_1_0
{
cl_platform_id platform = NULL;
// initialize the platforms (in case they have not been already)
khrIcdInitialize();
// determine the platform to use from the properties specified
khrIcdContextPropertiesGetPlatform(properties, &platform);
// validate the platform handle and dispatch
KHR_ICD_VALIDATE_HANDLE_RETURN_HANDLE(platform, CL_INVALID_PLATFORM);
return platform->dispatch->clCreateContextFromType(
properties,
device_type,
pfn_notify,
user_data,
errcode_ret);
}
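/*
 * Illustrative caller-side usage (a sketch): the target platform is conveyed through the
 * CL_CONTEXT_PLATFORM property, which khrIcdContextPropertiesGetPlatform() reads above
 * to pick the vendor dispatch table.
 *
 *     cl_context_properties props[] = {
 *         CL_CONTEXT_PLATFORM, (cl_context_properties)platform, 0 };
 *     cl_context ctx = clCreateContextFromType(props, CL_DEVICE_TYPE_GPU,
 *                                              NULL, NULL, &errcode);
 */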
CL_API_ENTRY cl_int CL_API_CALL
clRetainContext(cl_context context) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(context, CL_INVALID_CONTEXT);
return context->dispatch->clRetainContext(context);
}
CL_API_ENTRY cl_int CL_API_CALL
clReleaseContext(cl_context context) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(context, CL_INVALID_CONTEXT);
return context->dispatch->clReleaseContext(context);
}
CL_API_ENTRY cl_int CL_API_CALL
clGetContextInfo(cl_context context,
cl_context_info param_name,
size_t param_value_size,
void * param_value,
size_t * param_value_size_ret) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(context, CL_INVALID_CONTEXT);
return context->dispatch->clGetContextInfo(
context,
param_name,
param_value_size,
param_value,
param_value_size_ret);
}
// Command Queue APIs
CL_API_ENTRY cl_command_queue CL_API_CALL
clCreateCommandQueue(cl_context context,
cl_device_id device,
cl_command_queue_properties properties,
cl_int * errcode_ret) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_HANDLE(context, CL_INVALID_CONTEXT);
return context->dispatch->clCreateCommandQueue(
context,
device,
properties,
errcode_ret);
}
CL_API_ENTRY cl_int CL_API_CALL
clRetainCommandQueue(cl_command_queue command_queue) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(command_queue, CL_INVALID_COMMAND_QUEUE);
return command_queue->dispatch->clRetainCommandQueue(command_queue);
}
CL_API_ENTRY cl_int CL_API_CALL
clReleaseCommandQueue(cl_command_queue command_queue) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(command_queue, CL_INVALID_COMMAND_QUEUE);
return command_queue->dispatch->clReleaseCommandQueue(command_queue);
}
CL_API_ENTRY cl_int CL_API_CALL
clGetCommandQueueInfo(cl_command_queue command_queue,
cl_command_queue_info param_name,
size_t param_value_size,
void * param_value,
size_t * param_value_size_ret) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(command_queue, CL_INVALID_COMMAND_QUEUE);
return command_queue->dispatch->clGetCommandQueueInfo(
command_queue,
param_name,
param_value_size,
param_value,
param_value_size_ret);
}
// Memory Object APIs
CL_API_ENTRY cl_mem CL_API_CALL
clCreateBuffer(cl_context context,
cl_mem_flags flags,
size_t size,
void * host_ptr,
cl_int * errcode_ret) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_HANDLE(context, CL_INVALID_CONTEXT);
return context->dispatch->clCreateBuffer(
context,
flags,
size,
host_ptr,
errcode_ret);
}
CL_API_ENTRY cl_mem CL_API_CALL
clCreateImage(cl_context context,
cl_mem_flags flags,
const cl_image_format * image_format,
const cl_image_desc * image_desc,
void * host_ptr,
cl_int * errcode_ret) CL_API_SUFFIX__VERSION_1_2
{
KHR_ICD_VALIDATE_HANDLE_RETURN_HANDLE(context, CL_INVALID_CONTEXT);
return context->dispatch->clCreateImage(
context,
flags,
image_format,
image_desc,
host_ptr,
errcode_ret);
}
CL_API_ENTRY cl_int CL_API_CALL
clRetainMemObject(cl_mem memobj) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(memobj, CL_INVALID_MEM_OBJECT);
return memobj->dispatch->clRetainMemObject(memobj);
}
CL_API_ENTRY cl_int CL_API_CALL
clReleaseMemObject(cl_mem memobj) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(memobj, CL_INVALID_MEM_OBJECT);
return memobj->dispatch->clReleaseMemObject(memobj);
}
CL_API_ENTRY cl_int CL_API_CALL
clGetSupportedImageFormats(cl_context context,
cl_mem_flags flags,
cl_mem_object_type image_type,
cl_uint num_entries,
cl_image_format * image_formats,
cl_uint * num_image_formats) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(context, CL_INVALID_CONTEXT);
return context->dispatch->clGetSupportedImageFormats(
context,
flags,
image_type,
num_entries,
image_formats,
num_image_formats);
}
CL_API_ENTRY cl_int CL_API_CALL
clGetMemObjectInfo(cl_mem memobj,
cl_mem_info param_name,
size_t param_value_size,
void * param_value,
size_t * param_value_size_ret) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(memobj, CL_INVALID_MEM_OBJECT);
return memobj->dispatch->clGetMemObjectInfo(
memobj,
param_name,
param_value_size,
param_value,
param_value_size_ret);
}
CL_API_ENTRY cl_int CL_API_CALL
clGetImageInfo(cl_mem image,
cl_image_info param_name,
size_t param_value_size,
void * param_value,
size_t * param_value_size_ret) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(image, CL_INVALID_MEM_OBJECT);
return image->dispatch->clGetImageInfo(
image,
param_name,
param_value_size,
param_value,
param_value_size_ret);
}
// Sampler APIs
CL_API_ENTRY cl_sampler CL_API_CALL
clCreateSampler(cl_context context,
cl_bool normalized_coords,
cl_addressing_mode addressing_mode,
cl_filter_mode filter_mode,
cl_int * errcode_ret) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_HANDLE(context, CL_INVALID_CONTEXT);
return context->dispatch->clCreateSampler(
context,
normalized_coords,
addressing_mode,
filter_mode,
errcode_ret);
}
CL_API_ENTRY cl_int CL_API_CALL
clRetainSampler(cl_sampler sampler) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(sampler, CL_INVALID_SAMPLER);
return sampler->dispatch->clRetainSampler(sampler);
}
CL_API_ENTRY cl_int CL_API_CALL
clReleaseSampler(cl_sampler sampler) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(sampler, CL_INVALID_SAMPLER);
return sampler->dispatch->clReleaseSampler(sampler);
}
CL_API_ENTRY cl_int CL_API_CALL
clGetSamplerInfo(cl_sampler sampler,
cl_sampler_info param_name,
size_t param_value_size,
void * param_value,
size_t * param_value_size_ret) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(sampler, CL_INVALID_SAMPLER);
return sampler->dispatch->clGetSamplerInfo(
sampler,
param_name,
param_value_size,
param_value,
param_value_size_ret);
}
// Program Object APIs
CL_API_ENTRY cl_program CL_API_CALL
clCreateProgramWithSource(cl_context context,
cl_uint count,
const char ** strings,
const size_t * lengths,
cl_int * errcode_ret) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_HANDLE(context, CL_INVALID_CONTEXT);
return context->dispatch->clCreateProgramWithSource(
context,
count,
strings,
lengths,
errcode_ret);
}
CL_API_ENTRY cl_program CL_API_CALL
clCreateProgramWithBinary(cl_context context,
cl_uint num_devices,
const cl_device_id * device_list,
const size_t * lengths,
const unsigned char ** binaries,
cl_int * binary_status,
cl_int * errcode_ret) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_HANDLE(context, CL_INVALID_CONTEXT);
return context->dispatch->clCreateProgramWithBinary(
context,
num_devices,
device_list,
lengths,
binaries,
binary_status,
errcode_ret);
}
CL_API_ENTRY cl_program CL_API_CALL
clCreateProgramWithBuiltInKernels(cl_context context,
cl_uint num_devices,
const cl_device_id * device_list,
const char * kernel_names,
cl_int * errcode_ret) CL_API_SUFFIX__VERSION_1_2
{
KHR_ICD_VALIDATE_HANDLE_RETURN_HANDLE(context, CL_INVALID_CONTEXT);
return context->dispatch->clCreateProgramWithBuiltInKernels(
context,
num_devices,
device_list,
kernel_names,
errcode_ret);
}
CL_API_ENTRY cl_int CL_API_CALL
clRetainProgram(cl_program program) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(program, CL_INVALID_PROGRAM);
return program->dispatch->clRetainProgram(program);
}
CL_API_ENTRY cl_int CL_API_CALL
clReleaseProgram(cl_program program) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(program, CL_INVALID_PROGRAM);
return program->dispatch->clReleaseProgram(program);
}
CL_API_ENTRY cl_int CL_API_CALL
clBuildProgram(cl_program program,
cl_uint num_devices,
const cl_device_id * device_list,
const char * options,
void (CL_CALLBACK *pfn_notify)(cl_program program, void * user_data),
void * user_data) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(program, CL_INVALID_PROGRAM);
return program->dispatch->clBuildProgram(
program,
num_devices,
device_list,
options,
pfn_notify,
user_data);
}
CL_API_ENTRY cl_int CL_API_CALL
clCompileProgram(cl_program program,
cl_uint num_devices,
const cl_device_id * device_list,
const char * options,
cl_uint num_input_headers,
const cl_program * input_headers,
const char ** header_include_names,
void (CL_CALLBACK * pfn_notify)(cl_program program, void * user_data),
void * user_data) CL_API_SUFFIX__VERSION_1_2
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(program, CL_INVALID_PROGRAM);
return program->dispatch->clCompileProgram(
program,
num_devices,
device_list,
options,
num_input_headers,
input_headers,
header_include_names,
pfn_notify,
user_data);
}
CL_API_ENTRY cl_program CL_API_CALL
clLinkProgram(cl_context context,
cl_uint num_devices,
const cl_device_id * device_list,
const char * options,
cl_uint num_input_programs,
const cl_program * input_programs,
void (CL_CALLBACK * pfn_notify)(cl_program program, void * user_data),
void * user_data,
cl_int * errcode_ret) CL_API_SUFFIX__VERSION_1_2
{
KHR_ICD_VALIDATE_HANDLE_RETURN_HANDLE(context, CL_INVALID_CONTEXT);
return context->dispatch->clLinkProgram(
context,
num_devices,
device_list,
options,
num_input_programs,
input_programs,
pfn_notify,
user_data,
errcode_ret);
}
CL_API_ENTRY cl_int CL_API_CALL
clUnloadPlatformCompiler(cl_platform_id platform) CL_API_SUFFIX__VERSION_1_2
{
// initialize the platforms (in case they have not been already)
khrIcdInitialize();
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(platform, CL_INVALID_PLATFORM);
return platform->dispatch->clUnloadPlatformCompiler(platform);
}
CL_API_ENTRY cl_int CL_API_CALL
clGetProgramInfo(cl_program program,
cl_program_info param_name,
size_t param_value_size,
void * param_value,
size_t * param_value_size_ret) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(program, CL_INVALID_PROGRAM);
return program->dispatch->clGetProgramInfo(
program,
param_name,
param_value_size,
param_value,
param_value_size_ret);
}
CL_API_ENTRY cl_int CL_API_CALL
clGetProgramBuildInfo(cl_program program,
cl_device_id device,
cl_program_build_info param_name,
size_t param_value_size,
void * param_value,
size_t * param_value_size_ret) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(program, CL_INVALID_PROGRAM);
return program->dispatch->clGetProgramBuildInfo(
program,
device,
param_name,
param_value_size,
param_value,
param_value_size_ret);
}
// Kernel Object APIs
CL_API_ENTRY cl_kernel CL_API_CALL
clCreateKernel(cl_program program,
const char * kernel_name,
cl_int * errcode_ret) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_HANDLE(program, CL_INVALID_PROGRAM);
return program->dispatch->clCreateKernel(
program,
kernel_name,
errcode_ret);
}
CL_API_ENTRY cl_int CL_API_CALL
clCreateKernelsInProgram(cl_program program,
cl_uint num_kernels,
cl_kernel * kernels,
cl_uint * num_kernels_ret) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(program, CL_INVALID_PROGRAM);
return program->dispatch->clCreateKernelsInProgram(
program,
num_kernels,
kernels,
num_kernels_ret);
}
CL_API_ENTRY cl_int CL_API_CALL
clRetainKernel(cl_kernel kernel) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(kernel, CL_INVALID_KERNEL);
return kernel->dispatch->clRetainKernel(kernel);
}
CL_API_ENTRY cl_int CL_API_CALL
clReleaseKernel(cl_kernel kernel) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(kernel, CL_INVALID_KERNEL);
return kernel->dispatch->clReleaseKernel(kernel);
}
CL_API_ENTRY cl_int CL_API_CALL
clSetKernelArg(cl_kernel kernel,
cl_uint arg_index,
size_t arg_size,
const void * arg_value) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(kernel, CL_INVALID_KERNEL);
return kernel->dispatch->clSetKernelArg(
kernel,
arg_index,
arg_size,
arg_value);
}
CL_API_ENTRY cl_int CL_API_CALL
clGetKernelInfo(cl_kernel kernel,
cl_kernel_info param_name,
size_t param_value_size,
void * param_value,
size_t * param_value_size_ret) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(kernel, CL_INVALID_KERNEL);
return kernel->dispatch->clGetKernelInfo(
kernel,
param_name,
param_value_size,
param_value,
param_value_size_ret);
}
CL_API_ENTRY cl_int CL_API_CALL
clGetKernelArgInfo(cl_kernel kernel,
cl_uint arg_indx,
cl_kernel_arg_info param_name,
size_t param_value_size,
void * param_value,
size_t * param_value_size_ret) CL_API_SUFFIX__VERSION_1_2
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(kernel, CL_INVALID_KERNEL);
return kernel->dispatch->clGetKernelArgInfo(
kernel,
arg_indx,
param_name,
param_value_size,
param_value,
param_value_size_ret);
}
CL_API_ENTRY cl_int CL_API_CALL
clGetKernelWorkGroupInfo(cl_kernel kernel,
cl_device_id device,
cl_kernel_work_group_info param_name,
size_t param_value_size,
void * param_value,
size_t * param_value_size_ret) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(kernel, CL_INVALID_KERNEL);
return kernel->dispatch->clGetKernelWorkGroupInfo(
kernel,
device,
param_name,
param_value_size,
param_value,
param_value_size_ret);
}
// Event Object APIs
CL_API_ENTRY cl_int CL_API_CALL
clWaitForEvents(cl_uint num_events,
const cl_event * event_list) CL_API_SUFFIX__VERSION_1_0
{
if (!num_events || !event_list)
{
return CL_INVALID_VALUE;
}
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(event_list[0], CL_INVALID_EVENT);
return event_list[0]->dispatch->clWaitForEvents(
num_events,
event_list);
}
CL_API_ENTRY cl_int CL_API_CALL
clGetEventInfo(cl_event event,
cl_event_info param_name,
size_t param_value_size,
void * param_value,
size_t * param_value_size_ret) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(event, CL_INVALID_EVENT);
return event->dispatch->clGetEventInfo(
event,
param_name,
param_value_size,
param_value,
param_value_size_ret);
}
CL_API_ENTRY cl_int CL_API_CALL
clRetainEvent(cl_event event) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(event, CL_INVALID_EVENT);
return event->dispatch->clRetainEvent(event);
}
CL_API_ENTRY cl_int CL_API_CALL
clReleaseEvent(cl_event event) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(event, CL_INVALID_EVENT);
return event->dispatch->clReleaseEvent(event);
}
// Profiling APIs
CL_API_ENTRY cl_int CL_API_CALL
clGetEventProfilingInfo(cl_event event,
cl_profiling_info param_name,
size_t param_value_size,
void * param_value,
size_t * param_value_size_ret) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(event, CL_INVALID_EVENT);
return event->dispatch->clGetEventProfilingInfo(
event,
param_name,
param_value_size,
param_value,
param_value_size_ret);
}
// Flush and Finish APIs
CL_API_ENTRY cl_int CL_API_CALL
clFlush(cl_command_queue command_queue) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(command_queue, CL_INVALID_COMMAND_QUEUE);
return command_queue->dispatch->clFlush(command_queue);
}
CL_API_ENTRY cl_int CL_API_CALL
clFinish(cl_command_queue command_queue) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(command_queue, CL_INVALID_COMMAND_QUEUE);
return command_queue->dispatch->clFinish(command_queue);
}
// Enqueued Commands APIs
CL_API_ENTRY cl_int CL_API_CALL
clEnqueueReadBuffer(cl_command_queue command_queue,
cl_mem buffer,
cl_bool blocking_read,
size_t offset,
size_t cb,
void * ptr,
cl_uint num_events_in_wait_list,
const cl_event * event_wait_list,
cl_event * event) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(command_queue, CL_INVALID_COMMAND_QUEUE);
return command_queue->dispatch->clEnqueueReadBuffer(
command_queue,
buffer,
blocking_read,
offset,
cb,
ptr,
num_events_in_wait_list,
event_wait_list,
event);
}
CL_API_ENTRY cl_int CL_API_CALL
clEnqueueReadBufferRect(
cl_command_queue command_queue,
cl_mem buffer,
cl_bool blocking_read,
const size_t * buffer_origin,
const size_t * host_origin,
const size_t * region,
size_t buffer_row_pitch,
size_t buffer_slice_pitch,
size_t host_row_pitch,
size_t host_slice_pitch,
void * ptr,
cl_uint num_events_in_wait_list,
const cl_event * event_wait_list,
cl_event * event) CL_API_SUFFIX__VERSION_1_1
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(command_queue, CL_INVALID_COMMAND_QUEUE);
return command_queue->dispatch->clEnqueueReadBufferRect(
command_queue,
buffer,
blocking_read,
buffer_origin,
host_origin,
region,
buffer_row_pitch,
buffer_slice_pitch,
host_row_pitch,
host_slice_pitch,
ptr,
num_events_in_wait_list,
event_wait_list,
event);
}
CL_API_ENTRY cl_int CL_API_CALL
clEnqueueWriteBuffer(cl_command_queue command_queue,
cl_mem buffer,
cl_bool blocking_write,
size_t offset,
size_t cb,
const void * ptr,
cl_uint num_events_in_wait_list,
const cl_event * event_wait_list,
cl_event * event) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(command_queue, CL_INVALID_COMMAND_QUEUE);
return command_queue->dispatch->clEnqueueWriteBuffer(
command_queue,
buffer,
blocking_write,
offset,
cb,
ptr,
num_events_in_wait_list,
event_wait_list,
event);
}
CL_API_ENTRY cl_int CL_API_CALL
clEnqueueWriteBufferRect(
cl_command_queue command_queue,
cl_mem buffer,
cl_bool blocking_read,
const size_t * buffer_origin,
const size_t * host_origin,
const size_t * region,
size_t buffer_row_pitch,
size_t buffer_slice_pitch,
size_t host_row_pitch,
size_t host_slice_pitch,
const void * ptr,
cl_uint num_events_in_wait_list,
const cl_event * event_wait_list,
cl_event * event) CL_API_SUFFIX__VERSION_1_1
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(command_queue, CL_INVALID_COMMAND_QUEUE);
return command_queue->dispatch->clEnqueueWriteBufferRect(
command_queue,
buffer,
blocking_read,
buffer_origin,
host_origin,
region,
buffer_row_pitch,
buffer_slice_pitch,
host_row_pitch,
host_slice_pitch,
ptr,
num_events_in_wait_list,
event_wait_list,
event);
}
CL_API_ENTRY cl_int CL_API_CALL
clEnqueueFillBuffer(cl_command_queue command_queue,
cl_mem buffer,
const void * pattern,
size_t pattern_size,
size_t offset,
size_t cb,
cl_uint num_events_in_wait_list,
const cl_event * event_wait_list,
cl_event * event) CL_API_SUFFIX__VERSION_1_2
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(command_queue, CL_INVALID_COMMAND_QUEUE);
return command_queue->dispatch->clEnqueueFillBuffer(
command_queue,
buffer,
pattern,
pattern_size,
offset,
cb,
num_events_in_wait_list,
event_wait_list,
event);
}
CL_API_ENTRY cl_int CL_API_CALL
clEnqueueCopyBuffer(cl_command_queue command_queue,
cl_mem src_buffer,
cl_mem dst_buffer,
size_t src_offset,
size_t dst_offset,
size_t cb,
cl_uint num_events_in_wait_list,
const cl_event * event_wait_list,
cl_event * event) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(command_queue, CL_INVALID_COMMAND_QUEUE);
return command_queue->dispatch->clEnqueueCopyBuffer(
command_queue,
src_buffer,
dst_buffer,
src_offset,
dst_offset,
cb,
num_events_in_wait_list,
event_wait_list,
event);
}
CL_API_ENTRY cl_int CL_API_CALL
clEnqueueCopyBufferRect(
cl_command_queue command_queue,
cl_mem src_buffer,
cl_mem dst_buffer,
const size_t * src_origin,
const size_t * dst_origin,
const size_t * region,
size_t src_row_pitch,
size_t src_slice_pitch,
size_t dst_row_pitch,
size_t dst_slice_pitch,
cl_uint num_events_in_wait_list,
const cl_event * event_wait_list,
cl_event * event) CL_API_SUFFIX__VERSION_1_1
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(command_queue, CL_INVALID_COMMAND_QUEUE);
return command_queue->dispatch->clEnqueueCopyBufferRect(
command_queue,
src_buffer,
dst_buffer,
src_origin,
dst_origin,
region,
src_row_pitch,
src_slice_pitch,
dst_row_pitch,
dst_slice_pitch,
num_events_in_wait_list,
event_wait_list,
event);
}
CL_API_ENTRY cl_int CL_API_CALL
clEnqueueReadImage(cl_command_queue command_queue,
cl_mem image,
cl_bool blocking_read,
const size_t * origin,
const size_t * region,
size_t row_pitch,
size_t slice_pitch,
void * ptr,
cl_uint num_events_in_wait_list,
const cl_event * event_wait_list,
cl_event * event) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(command_queue, CL_INVALID_COMMAND_QUEUE);
return command_queue->dispatch->clEnqueueReadImage(
command_queue,
image,
blocking_read,
origin,
region,
row_pitch,
slice_pitch,
ptr,
num_events_in_wait_list,
event_wait_list,
event);
}
CL_API_ENTRY cl_int CL_API_CALL
clEnqueueWriteImage(cl_command_queue command_queue,
cl_mem image,
cl_bool blocking_write,
const size_t * origin,
const size_t * region,
size_t input_row_pitch,
size_t input_slice_pitch,
const void * ptr,
cl_uint num_events_in_wait_list,
const cl_event * event_wait_list,
cl_event * event) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(command_queue, CL_INVALID_COMMAND_QUEUE);
return command_queue->dispatch->clEnqueueWriteImage(
command_queue,
image,
blocking_write,
origin,
region,
input_row_pitch,
input_slice_pitch,
ptr,
num_events_in_wait_list,
event_wait_list,
event);
}
CL_API_ENTRY cl_int CL_API_CALL
clEnqueueFillImage(cl_command_queue command_queue,
cl_mem image,
const void * fill_color,
const size_t origin[3],
const size_t region[3],
cl_uint num_events_in_wait_list,
const cl_event * event_wait_list,
cl_event * event) CL_API_SUFFIX__VERSION_1_2
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(command_queue, CL_INVALID_COMMAND_QUEUE);
return command_queue->dispatch->clEnqueueFillImage(
command_queue,
image,
fill_color,
origin,
region,
num_events_in_wait_list,
event_wait_list,
event);
}
CL_API_ENTRY cl_int CL_API_CALL
clEnqueueCopyImage(cl_command_queue command_queue,
cl_mem src_image,
cl_mem dst_image,
const size_t * src_origin,
const size_t * dst_origin,
const size_t * region,
cl_uint num_events_in_wait_list,
const cl_event * event_wait_list,
cl_event * event) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(command_queue, CL_INVALID_COMMAND_QUEUE);
return command_queue->dispatch->clEnqueueCopyImage(
command_queue,
src_image,
dst_image,
src_origin,
dst_origin,
region,
num_events_in_wait_list,
event_wait_list,
event);
}
CL_API_ENTRY cl_int CL_API_CALL
clEnqueueCopyImageToBuffer(cl_command_queue command_queue,
cl_mem src_image,
cl_mem dst_buffer,
const size_t * src_origin,
const size_t * region,
size_t dst_offset,
cl_uint num_events_in_wait_list,
const cl_event * event_wait_list,
cl_event * event) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(command_queue, CL_INVALID_COMMAND_QUEUE);
return command_queue->dispatch->clEnqueueCopyImageToBuffer(
command_queue,
src_image,
dst_buffer,
src_origin,
region,
dst_offset,
num_events_in_wait_list,
event_wait_list,
event);
}
CL_API_ENTRY cl_int CL_API_CALL
clEnqueueCopyBufferToImage(cl_command_queue command_queue,
cl_mem src_buffer,
cl_mem dst_image,
size_t src_offset,
const size_t * dst_origin,
const size_t * region,
cl_uint num_events_in_wait_list,
const cl_event * event_wait_list,
cl_event * event) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(command_queue, CL_INVALID_COMMAND_QUEUE);
return command_queue->dispatch->clEnqueueCopyBufferToImage(
command_queue,
src_buffer,
dst_image,
src_offset,
dst_origin,
region,
num_events_in_wait_list,
event_wait_list,
event);
}
CL_API_ENTRY void * CL_API_CALL
clEnqueueMapBuffer(cl_command_queue command_queue,
cl_mem buffer,
cl_bool blocking_map,
cl_map_flags map_flags,
size_t offset,
size_t cb,
cl_uint num_events_in_wait_list,
const cl_event * event_wait_list,
cl_event * event,
cl_int * errcode_ret) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_HANDLE(command_queue, CL_INVALID_COMMAND_QUEUE);
return command_queue->dispatch->clEnqueueMapBuffer(
command_queue,
buffer,
blocking_map,
map_flags,
offset,
cb,
num_events_in_wait_list,
event_wait_list,
event,
errcode_ret);
}
CL_API_ENTRY void * CL_API_CALL
clEnqueueMapImage(cl_command_queue command_queue,
cl_mem image,
cl_bool blocking_map,
cl_map_flags map_flags,
const size_t * origin,
const size_t * region,
size_t * image_row_pitch,
size_t * image_slice_pitch,
cl_uint num_events_in_wait_list,
const cl_event * event_wait_list,
cl_event * event,
cl_int * errcode_ret) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_HANDLE(command_queue, CL_INVALID_COMMAND_QUEUE);
return command_queue->dispatch->clEnqueueMapImage(
command_queue,
image,
blocking_map,
map_flags,
origin,
region,
image_row_pitch,
image_slice_pitch,
num_events_in_wait_list,
event_wait_list,
event,
errcode_ret);
}
CL_API_ENTRY cl_int CL_API_CALL
clEnqueueUnmapMemObject(cl_command_queue command_queue,
cl_mem memobj,
void * mapped_ptr,
cl_uint num_events_in_wait_list,
const cl_event * event_wait_list,
cl_event * event) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(command_queue, CL_INVALID_COMMAND_QUEUE);
return command_queue->dispatch->clEnqueueUnmapMemObject(
command_queue,
memobj,
mapped_ptr,
num_events_in_wait_list,
event_wait_list,
event);
}
CL_API_ENTRY cl_int CL_API_CALL
clEnqueueMigrateMemObjects(cl_command_queue command_queue,
cl_uint num_mem_objects,
const cl_mem * mem_objects,
cl_mem_migration_flags flags,
cl_uint num_events_in_wait_list,
const cl_event * event_wait_list,
cl_event * event) CL_API_SUFFIX__VERSION_1_2
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(command_queue, CL_INVALID_COMMAND_QUEUE);
return command_queue->dispatch->clEnqueueMigrateMemObjects(
command_queue,
num_mem_objects,
mem_objects,
flags,
num_events_in_wait_list,
event_wait_list,
event);
}
CL_API_ENTRY cl_int CL_API_CALL
clEnqueueNDRangeKernel(cl_command_queue command_queue,
cl_kernel kernel,
cl_uint work_dim,
const size_t * global_work_offset,
const size_t * global_work_size,
const size_t * local_work_size,
cl_uint num_events_in_wait_list,
const cl_event * event_wait_list,
cl_event * event) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(command_queue, CL_INVALID_COMMAND_QUEUE);
return command_queue->dispatch->clEnqueueNDRangeKernel(
command_queue,
kernel,
work_dim,
global_work_offset,
global_work_size,
local_work_size,
num_events_in_wait_list,
event_wait_list,
event);
}
CL_API_ENTRY cl_int CL_API_CALL
clEnqueueTask(cl_command_queue command_queue,
cl_kernel kernel,
cl_uint num_events_in_wait_list,
const cl_event * event_wait_list,
cl_event * event) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(command_queue, CL_INVALID_COMMAND_QUEUE);
return command_queue->dispatch->clEnqueueTask(
command_queue,
kernel,
num_events_in_wait_list,
event_wait_list,
event);
}
CL_API_ENTRY cl_int CL_API_CALL
clEnqueueNativeKernel(cl_command_queue command_queue,
void (CL_CALLBACK * user_func)(void *),
void * args,
size_t cb_args,
cl_uint num_mem_objects,
const cl_mem * mem_list,
const void ** args_mem_loc,
cl_uint num_events_in_wait_list,
const cl_event * event_wait_list,
cl_event * event) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(command_queue, CL_INVALID_COMMAND_QUEUE);
return command_queue->dispatch->clEnqueueNativeKernel(
command_queue,
user_func,
args,
cb_args,
num_mem_objects,
mem_list,
args_mem_loc,
num_events_in_wait_list,
event_wait_list,
event);
}
CL_API_ENTRY cl_int CL_API_CALL
clEnqueueMarkerWithWaitList(cl_command_queue command_queue,
cl_uint num_events_in_wait_list,
const cl_event * event_wait_list,
cl_event * event) CL_API_SUFFIX__VERSION_1_2
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(command_queue, CL_INVALID_COMMAND_QUEUE);
return command_queue->dispatch->clEnqueueMarkerWithWaitList(
command_queue,
num_events_in_wait_list,
event_wait_list,
event);
}
CL_API_ENTRY cl_int CL_API_CALL
clEnqueueBarrierWithWaitList(cl_command_queue command_queue,
cl_uint num_events_in_wait_list,
const cl_event * event_wait_list,
cl_event * event) CL_API_SUFFIX__VERSION_1_2
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(command_queue, CL_INVALID_COMMAND_QUEUE);
return command_queue->dispatch->clEnqueueBarrierWithWaitList(
command_queue,
num_events_in_wait_list,
event_wait_list,
event);
}
CL_API_ENTRY void * CL_API_CALL
clGetExtensionFunctionAddressForPlatform(cl_platform_id platform,
const char * function_name) CL_API_SUFFIX__VERSION_1_2
{
// make sure the ICD is initialized
khrIcdInitialize();
#ifdef WITH_OPENCL_EXTRAS
// return any ICD-aware extensions
#define CL_COMMON_EXTENSION_ENTRYPOINT_ADD(name) if (!strcmp(function_name, #name) ) return (void *)(size_t)&name
// Are these core or ext? This is unclear, but they appear to be
// independent from cl_khr_gl_sharing.
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clCreateFromGLBuffer);
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clCreateFromGLTexture);
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clCreateFromGLTexture2D);
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clCreateFromGLTexture3D);
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clCreateFromGLRenderbuffer);
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clGetGLObjectInfo);
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clGetGLTextureInfo);
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clEnqueueAcquireGLObjects);
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clEnqueueReleaseGLObjects);
// cl_khr_gl_sharing
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clGetGLContextInfoKHR);
// cl_khr_gl_event
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clCreateEventFromGLsyncKHR);
#if defined(_WIN32)
// cl_khr_d3d10_sharing
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clGetDeviceIDsFromD3D10KHR);
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clCreateFromD3D10BufferKHR);
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clCreateFromD3D10Texture2DKHR);
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clCreateFromD3D10Texture3DKHR);
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clEnqueueAcquireD3D10ObjectsKHR);
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clEnqueueReleaseD3D10ObjectsKHR);
// cl_khr_d3d11_sharing
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clGetDeviceIDsFromD3D11KHR);
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clCreateFromD3D11BufferKHR);
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clCreateFromD3D11Texture2DKHR);
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clCreateFromD3D11Texture3DKHR);
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clEnqueueAcquireD3D11ObjectsKHR);
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clEnqueueReleaseD3D11ObjectsKHR);
// cl_khr_dx9_media_sharing
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clGetDeviceIDsFromDX9MediaAdapterKHR);
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clCreateFromDX9MediaSurfaceKHR);
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clEnqueueAcquireDX9MediaSurfacesKHR);
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clEnqueueReleaseDX9MediaSurfacesKHR);
#endif
// cl_ext_device_fission
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clCreateSubDevicesEXT);
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clRetainDeviceEXT);
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clReleaseDeviceEXT);
/* cl_khr_egl_image */
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clCreateFromEGLImageKHR);
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clEnqueueAcquireEGLObjectsKHR);
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clEnqueueReleaseEGLObjectsKHR);
/* cl_khr_egl_event */
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clCreateEventFromEGLSyncKHR);
/* cl_khr_sub_groups */
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clGetKernelSubGroupInfoKHR);
#endif /* ! WITH_OPENCL_EXTRAS */
// fall back to vendor extension detection
    // Validate the platform handle before dispatching; there is no errcode_ret
    // to report CL_INVALID_PLATFORM through, so an invalid handle simply yields NULL.
    KHR_ICD_VALIDATE_HANDLE_RETURN_HANDLE(platform, CL_INVALID_PLATFORM);
return platform->dispatch->clGetExtensionFunctionAddressForPlatform(
platform,
function_name);
}
// Deprecated APIs
CL_API_ENTRY cl_int CL_API_CALL
clSetCommandQueueProperty(cl_command_queue command_queue,
cl_command_queue_properties properties,
cl_bool enable,
cl_command_queue_properties * old_properties) CL_EXT_SUFFIX__VERSION_1_0_DEPRECATED
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(command_queue, CL_INVALID_COMMAND_QUEUE);
return command_queue->dispatch->clSetCommandQueueProperty(
command_queue,
properties,
enable,
old_properties);
}
#ifdef WITH_OPENCL_EXTRAS
CL_API_ENTRY cl_int CL_API_CALL
clCreateSubDevicesEXT(
cl_device_id in_device,
const cl_device_partition_property_ext * partition_properties,
cl_uint num_entries,
cl_device_id * out_devices,
cl_uint * num_devices) CL_EXT_SUFFIX__VERSION_1_1_DEPRECATED
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(in_device, CL_INVALID_DEVICE);
return in_device->dispatch->clCreateSubDevicesEXT(
in_device,
partition_properties,
num_entries,
out_devices,
num_devices);
}
CL_API_ENTRY cl_int CL_API_CALL
clRetainDeviceEXT(cl_device_id device) CL_EXT_SUFFIX__VERSION_1_1_DEPRECATED
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(device, CL_INVALID_DEVICE);
return device->dispatch->clRetainDeviceEXT(device);
}
CL_API_ENTRY cl_int CL_API_CALL
clReleaseDeviceEXT(cl_device_id device) CL_EXT_SUFFIX__VERSION_1_1_DEPRECATED
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(device, CL_INVALID_DEVICE);
return device->dispatch->clReleaseDeviceEXT(device);
}
#endif /* ! WITH_OPENCL_EXTRAS */
CL_API_ENTRY cl_mem CL_API_CALL
clCreateImage2D(cl_context context,
cl_mem_flags flags,
const cl_image_format * image_format,
size_t image_width,
size_t image_height,
size_t image_row_pitch,
void * host_ptr,
cl_int * errcode_ret) CL_EXT_SUFFIX__VERSION_1_1_DEPRECATED
{
KHR_ICD_VALIDATE_HANDLE_RETURN_HANDLE(context, CL_INVALID_CONTEXT);
return context->dispatch->clCreateImage2D(
context,
flags,
image_format,
image_width,
image_height,
image_row_pitch,
host_ptr,
errcode_ret);
}
CL_API_ENTRY cl_mem CL_API_CALL
clCreateImage3D(cl_context context,
cl_mem_flags flags,
const cl_image_format * image_format,
size_t image_width,
size_t image_height,
size_t image_depth,
size_t image_row_pitch,
size_t image_slice_pitch,
void * host_ptr,
cl_int * errcode_ret) CL_EXT_SUFFIX__VERSION_1_1_DEPRECATED
{
KHR_ICD_VALIDATE_HANDLE_RETURN_HANDLE(context, CL_INVALID_CONTEXT);
return context->dispatch->clCreateImage3D(
context,
flags,
image_format,
image_width,
image_height,
image_depth,
image_row_pitch,
image_slice_pitch,
host_ptr,
errcode_ret);
}
CL_API_ENTRY cl_int CL_API_CALL
clUnloadCompiler(void) CL_EXT_SUFFIX__VERSION_1_1_DEPRECATED
{
return CL_SUCCESS;
}
CL_API_ENTRY cl_int CL_API_CALL
clEnqueueMarker(cl_command_queue command_queue,
cl_event * event) CL_EXT_SUFFIX__VERSION_1_1_DEPRECATED
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(command_queue, CL_INVALID_COMMAND_QUEUE);
return command_queue->dispatch->clEnqueueMarker(
command_queue,
event);
}
CL_API_ENTRY cl_int CL_API_CALL
clEnqueueWaitForEvents(cl_command_queue command_queue,
cl_uint num_events,
const cl_event * event_list) CL_EXT_SUFFIX__VERSION_1_1_DEPRECATED
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(command_queue, CL_INVALID_COMMAND_QUEUE);
return command_queue->dispatch->clEnqueueWaitForEvents(
command_queue,
num_events,
event_list);
}
CL_API_ENTRY cl_int CL_API_CALL
clEnqueueBarrier(cl_command_queue command_queue) CL_EXT_SUFFIX__VERSION_1_1_DEPRECATED
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(command_queue, CL_INVALID_COMMAND_QUEUE);
return command_queue->dispatch->clEnqueueBarrier(command_queue);
}
CL_API_ENTRY void * CL_API_CALL
clGetExtensionFunctionAddress(const char *function_name) CL_EXT_SUFFIX__VERSION_1_1_DEPRECATED
{
size_t function_name_length = strlen(function_name);
KHRicdVendor* vendor = NULL;
// make sure the ICD is initialized
khrIcdInitialize();
#ifdef WITH_OPENCL_EXTRAS
// return any ICD-aware extensions
#define CL_COMMON_EXTENSION_ENTRYPOINT_ADD(name) if (!strcmp(function_name, #name) ) return (void *)(size_t)&name
// Are these core or ext? This is unclear, but they appear to be
// independent from cl_khr_gl_sharing.
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clCreateFromGLBuffer);
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clCreateFromGLTexture);
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clCreateFromGLTexture2D);
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clCreateFromGLTexture3D);
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clCreateFromGLRenderbuffer);
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clGetGLObjectInfo);
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clGetGLTextureInfo);
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clEnqueueAcquireGLObjects);
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clEnqueueReleaseGLObjects);
// cl_khr_gl_sharing
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clGetGLContextInfoKHR);
// cl_khr_gl_event
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clCreateEventFromGLsyncKHR);
#if defined(_WIN32)
// cl_khr_d3d10_sharing
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clGetDeviceIDsFromD3D10KHR);
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clCreateFromD3D10BufferKHR);
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clCreateFromD3D10Texture2DKHR);
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clCreateFromD3D10Texture3DKHR);
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clEnqueueAcquireD3D10ObjectsKHR);
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clEnqueueReleaseD3D10ObjectsKHR);
// cl_khr_d3d11_sharing
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clGetDeviceIDsFromD3D11KHR);
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clCreateFromD3D11BufferKHR);
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clCreateFromD3D11Texture2DKHR);
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clCreateFromD3D11Texture3DKHR);
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clEnqueueAcquireD3D11ObjectsKHR);
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clEnqueueReleaseD3D11ObjectsKHR);
// cl_khr_dx9_media_sharing
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clGetDeviceIDsFromDX9MediaAdapterKHR);
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clCreateFromDX9MediaSurfaceKHR);
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clEnqueueAcquireDX9MediaSurfacesKHR);
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clEnqueueReleaseDX9MediaSurfacesKHR);
#endif
// cl_ext_device_fission
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clCreateSubDevicesEXT);
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clRetainDeviceEXT);
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clReleaseDeviceEXT);
/* cl_khr_egl_image */
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clCreateFromEGLImageKHR);
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clEnqueueAcquireEGLObjectsKHR);
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clEnqueueReleaseEGLObjectsKHR);
/* cl_khr_egl_event */
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clCreateEventFromEGLSyncKHR);
/* cl_khr_sub_groups */
CL_COMMON_EXTENSION_ENTRYPOINT_ADD(clGetKernelSubGroupInfoKHR);
#endif /* ! WITH_OPENCL_EXTRAS */
// fall back to vendor extension detection
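    // A vendor extension entry point carries that vendor's suffix at the end of
    // its name; match the suffix and let the owning vendor library resolve it.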
for (vendor = khrIcdVendors; vendor; vendor = vendor->next)
{
size_t vendor_suffix_length = strlen(vendor->suffix);
if (vendor_suffix_length <= function_name_length && vendor_suffix_length > 0)
{
const char *function_suffix = function_name+function_name_length-vendor_suffix_length;
if (!strcmp(function_suffix, vendor->suffix) )
{
return vendor->clGetExtensionFunctionAddress(function_name);
}
}
}
return NULL;
}
#ifdef WITH_OPENCL_EXTRAS
// GL and other APIs
CL_API_ENTRY cl_mem CL_API_CALL clCreateFromGLBuffer(
cl_context context,
cl_mem_flags flags,
GLuint bufobj,
int * errcode_ret) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_HANDLE(context, CL_INVALID_CONTEXT);
return context->dispatch->clCreateFromGLBuffer(
context,
flags,
bufobj,
errcode_ret);
}
CL_API_ENTRY cl_mem CL_API_CALL clCreateFromGLTexture(
cl_context context,
cl_mem_flags flags,
cl_GLenum target,
cl_GLint miplevel,
cl_GLuint texture,
cl_int * errcode_ret) CL_API_SUFFIX__VERSION_1_2
{
KHR_ICD_VALIDATE_HANDLE_RETURN_HANDLE(context, CL_INVALID_CONTEXT);
return context->dispatch->clCreateFromGLTexture(
context,
flags,
target,
miplevel,
texture,
errcode_ret);
}
CL_API_ENTRY cl_mem CL_API_CALL clCreateFromGLTexture2D(
cl_context context,
cl_mem_flags flags,
GLenum target,
GLint miplevel,
GLuint texture,
cl_int * errcode_ret) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_HANDLE(context, CL_INVALID_CONTEXT);
return context->dispatch->clCreateFromGLTexture2D(
context,
flags,
target,
miplevel,
texture,
errcode_ret);
}
CL_API_ENTRY cl_mem CL_API_CALL clCreateFromGLTexture3D(
cl_context context,
cl_mem_flags flags,
GLenum target,
GLint miplevel,
GLuint texture,
cl_int * errcode_ret) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_HANDLE(context, CL_INVALID_CONTEXT);
return context->dispatch->clCreateFromGLTexture3D(
context,
flags,
target,
miplevel,
texture,
errcode_ret);
}
CL_API_ENTRY cl_mem CL_API_CALL clCreateFromGLRenderbuffer(
cl_context context,
cl_mem_flags flags,
GLuint renderbuffer,
cl_int * errcode_ret) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_HANDLE(context, CL_INVALID_CONTEXT);
return context->dispatch->clCreateFromGLRenderbuffer(
context,
flags,
renderbuffer,
errcode_ret);
}
CL_API_ENTRY cl_int CL_API_CALL clGetGLObjectInfo(
cl_mem memobj,
cl_gl_object_type * gl_object_type,
GLuint * gl_object_name) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(memobj, CL_INVALID_MEM_OBJECT);
return memobj->dispatch->clGetGLObjectInfo(
memobj,
gl_object_type,
gl_object_name);
}
CL_API_ENTRY cl_int CL_API_CALL clGetGLTextureInfo(
cl_mem memobj,
cl_gl_texture_info param_name,
size_t param_value_size,
void * param_value,
size_t * param_value_size_ret) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(memobj, CL_INVALID_MEM_OBJECT);
return memobj->dispatch->clGetGLTextureInfo(
memobj,
param_name,
param_value_size,
param_value,
param_value_size_ret);
}
CL_API_ENTRY cl_int CL_API_CALL clEnqueueAcquireGLObjects(
cl_command_queue command_queue,
cl_uint num_objects,
const cl_mem * mem_objects,
cl_uint num_events_in_wait_list,
const cl_event * event_wait_list,
cl_event * event) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(command_queue, CL_INVALID_COMMAND_QUEUE);
return command_queue->dispatch->clEnqueueAcquireGLObjects(
command_queue,
num_objects,
mem_objects,
num_events_in_wait_list,
event_wait_list,
event);
}
CL_API_ENTRY cl_int CL_API_CALL clEnqueueReleaseGLObjects(
cl_command_queue command_queue,
cl_uint num_objects,
const cl_mem * mem_objects,
cl_uint num_events_in_wait_list,
const cl_event * event_wait_list,
cl_event * event) CL_API_SUFFIX__VERSION_1_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(command_queue, CL_INVALID_COMMAND_QUEUE);
return command_queue->dispatch->clEnqueueReleaseGLObjects(
command_queue,
num_objects,
mem_objects,
num_events_in_wait_list,
event_wait_list,
event);
}
CL_API_ENTRY cl_int CL_API_CALL clGetGLContextInfoKHR(
const cl_context_properties *properties,
cl_gl_context_info param_name,
size_t param_value_size,
void *param_value,
size_t *param_value_size_ret) CL_API_SUFFIX__VERSION_1_0
{
cl_platform_id platform = NULL;
// initialize the platforms (in case they have not been already)
khrIcdInitialize();
// determine the platform to use from the properties specified
khrIcdContextPropertiesGetPlatform(properties, &platform);
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(platform, CL_INVALID_PLATFORM);
return platform->dispatch->clGetGLContextInfoKHR(
properties,
param_name,
param_value_size,
param_value,
param_value_size_ret);
}
CL_API_ENTRY cl_event CL_API_CALL clCreateEventFromGLsyncKHR(
cl_context context,
cl_GLsync sync,
cl_int * errcode_ret) CL_API_SUFFIX__VERSION_1_1
{
KHR_ICD_VALIDATE_HANDLE_RETURN_HANDLE(context, CL_INVALID_CONTEXT);
return context->dispatch->clCreateEventFromGLsyncKHR(
context,
sync,
errcode_ret);
}
#if defined(_WIN32)
/*
*
* cl_d3d10_sharing_khr
*
*/
CL_API_ENTRY cl_int CL_API_CALL
clGetDeviceIDsFromD3D10KHR(
cl_platform_id platform,
cl_d3d10_device_source_khr d3d_device_source,
void *d3d_object,
cl_d3d10_device_set_khr d3d_device_set,
cl_uint num_entries,
cl_device_id *devices,
cl_uint *num_devices)
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(platform, CL_INVALID_PLATFORM);
return platform->dispatch->clGetDeviceIDsFromD3D10KHR(
platform,
d3d_device_source,
d3d_object,
d3d_device_set,
num_entries,
devices,
num_devices);
}
CL_API_ENTRY cl_mem CL_API_CALL
clCreateFromD3D10BufferKHR(
cl_context context,
cl_mem_flags flags,
ID3D10Buffer *resource,
cl_int *errcode_ret)
{
KHR_ICD_VALIDATE_HANDLE_RETURN_HANDLE(context, CL_INVALID_CONTEXT);
return context->dispatch->clCreateFromD3D10BufferKHR(
context,
flags,
resource,
errcode_ret);
}
CL_API_ENTRY cl_mem CL_API_CALL
clCreateFromD3D10Texture2DKHR(
cl_context context,
cl_mem_flags flags,
ID3D10Texture2D * resource,
UINT subresource,
cl_int * errcode_ret)
{
KHR_ICD_VALIDATE_HANDLE_RETURN_HANDLE(context, CL_INVALID_CONTEXT);
return context->dispatch->clCreateFromD3D10Texture2DKHR(
context,
flags,
resource,
subresource,
errcode_ret);
}
CL_API_ENTRY cl_mem CL_API_CALL
clCreateFromD3D10Texture3DKHR(
cl_context context,
cl_mem_flags flags,
ID3D10Texture3D *resource,
UINT subresource,
cl_int *errcode_ret)
{
KHR_ICD_VALIDATE_HANDLE_RETURN_HANDLE(context, CL_INVALID_CONTEXT);
return context->dispatch->clCreateFromD3D10Texture3DKHR(
context,
flags,
resource,
subresource,
errcode_ret);
}
CL_API_ENTRY cl_int CL_API_CALL
clEnqueueAcquireD3D10ObjectsKHR(
cl_command_queue command_queue,
cl_uint num_objects,
const cl_mem *mem_objects,
cl_uint num_events_in_wait_list,
const cl_event *event_wait_list,
cl_event *event)
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(command_queue, CL_INVALID_COMMAND_QUEUE);
return command_queue->dispatch->clEnqueueAcquireD3D10ObjectsKHR(
command_queue,
num_objects,
mem_objects,
num_events_in_wait_list,
event_wait_list,
event);
}
CL_API_ENTRY cl_int CL_API_CALL
clEnqueueReleaseD3D10ObjectsKHR(
cl_command_queue command_queue,
cl_uint num_objects,
const cl_mem *mem_objects,
cl_uint num_events_in_wait_list,
const cl_event *event_wait_list,
cl_event *event)
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(command_queue, CL_INVALID_COMMAND_QUEUE);
return command_queue->dispatch->clEnqueueReleaseD3D10ObjectsKHR(
command_queue,
num_objects,
mem_objects,
num_events_in_wait_list,
event_wait_list,
event);
}
/*
*
* cl_d3d11_sharing_khr
*
*/
CL_API_ENTRY cl_int CL_API_CALL
clGetDeviceIDsFromD3D11KHR(
cl_platform_id platform,
cl_d3d11_device_source_khr d3d_device_source,
void * d3d_object,
cl_d3d11_device_set_khr d3d_device_set,
cl_uint num_entries,
cl_device_id * devices,
cl_uint * num_devices)
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(platform, CL_INVALID_PLATFORM);
return platform->dispatch->clGetDeviceIDsFromD3D11KHR(
platform,
d3d_device_source,
d3d_object,
d3d_device_set,
num_entries,
devices,
num_devices);
}
CL_API_ENTRY cl_mem CL_API_CALL
clCreateFromD3D11BufferKHR(
cl_context context,
cl_mem_flags flags,
ID3D11Buffer * resource,
cl_int * errcode_ret)
{
KHR_ICD_VALIDATE_HANDLE_RETURN_HANDLE(context, CL_INVALID_CONTEXT);
return context->dispatch->clCreateFromD3D11BufferKHR(
context,
flags,
resource,
errcode_ret);
}
CL_API_ENTRY cl_mem CL_API_CALL
clCreateFromD3D11Texture2DKHR(
cl_context context,
cl_mem_flags flags,
ID3D11Texture2D * resource,
UINT subresource,
cl_int * errcode_ret)
{
KHR_ICD_VALIDATE_HANDLE_RETURN_HANDLE(context, CL_INVALID_CONTEXT);
return context->dispatch->clCreateFromD3D11Texture2DKHR(
context,
flags,
resource,
subresource,
errcode_ret);
}
CL_API_ENTRY cl_mem CL_API_CALL
clCreateFromD3D11Texture3DKHR(
cl_context context,
cl_mem_flags flags,
ID3D11Texture3D * resource,
UINT subresource,
cl_int * errcode_ret)
{
KHR_ICD_VALIDATE_HANDLE_RETURN_HANDLE(context, CL_INVALID_CONTEXT);
return context->dispatch->clCreateFromD3D11Texture3DKHR(
context,
flags,
resource,
subresource,
errcode_ret);
}
CL_API_ENTRY cl_int CL_API_CALL
clEnqueueAcquireD3D11ObjectsKHR(
cl_command_queue command_queue,
cl_uint num_objects,
const cl_mem * mem_objects,
cl_uint num_events_in_wait_list,
const cl_event * event_wait_list,
cl_event * event)
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(command_queue, CL_INVALID_COMMAND_QUEUE);
return command_queue->dispatch->clEnqueueAcquireD3D11ObjectsKHR(
command_queue,
num_objects,
mem_objects,
num_events_in_wait_list,
event_wait_list,
event);
}
CL_API_ENTRY cl_int CL_API_CALL
clEnqueueReleaseD3D11ObjectsKHR(
cl_command_queue command_queue,
cl_uint num_objects,
const cl_mem * mem_objects,
cl_uint num_events_in_wait_list,
const cl_event * event_wait_list,
cl_event * event)
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(command_queue, CL_INVALID_COMMAND_QUEUE);
return command_queue->dispatch->clEnqueueReleaseD3D11ObjectsKHR(
command_queue,
num_objects,
mem_objects,
num_events_in_wait_list,
event_wait_list,
event);
}
/*
*
* cl_khr_dx9_media_sharing
*
*/
CL_API_ENTRY cl_int CL_API_CALL
clGetDeviceIDsFromDX9MediaAdapterKHR(
cl_platform_id platform,
cl_uint num_media_adapters,
cl_dx9_media_adapter_type_khr * media_adapters_type,
void * media_adapters,
cl_dx9_media_adapter_set_khr media_adapter_set,
cl_uint num_entries,
cl_device_id * devices,
cl_uint * num_devices)
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(platform, CL_INVALID_PLATFORM);
return platform->dispatch->clGetDeviceIDsFromDX9MediaAdapterKHR(
platform,
num_media_adapters,
media_adapters_type,
media_adapters,
media_adapter_set,
num_entries,
devices,
num_devices);
}
CL_API_ENTRY cl_mem CL_API_CALL
clCreateFromDX9MediaSurfaceKHR(
cl_context context,
cl_mem_flags flags,
cl_dx9_media_adapter_type_khr adapter_type,
void * surface_info,
cl_uint plane,
cl_int * errcode_ret)
{
KHR_ICD_VALIDATE_HANDLE_RETURN_HANDLE(context, CL_INVALID_CONTEXT);
return context->dispatch->clCreateFromDX9MediaSurfaceKHR(
context,
flags,
adapter_type,
surface_info,
plane,
errcode_ret);
}
CL_API_ENTRY cl_int CL_API_CALL
clEnqueueAcquireDX9MediaSurfacesKHR(
cl_command_queue command_queue,
cl_uint num_objects,
const cl_mem * mem_objects,
cl_uint num_events_in_wait_list,
const cl_event * event_wait_list,
cl_event * event)
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(command_queue, CL_INVALID_COMMAND_QUEUE);
return command_queue->dispatch->clEnqueueAcquireDX9MediaSurfacesKHR(
command_queue,
num_objects,
mem_objects,
num_events_in_wait_list,
event_wait_list,
event);
}
CL_API_ENTRY cl_int CL_API_CALL
clEnqueueReleaseDX9MediaSurfacesKHR(
cl_command_queue command_queue,
cl_uint num_objects,
const cl_mem * mem_objects,
cl_uint num_events_in_wait_list,
const cl_event * event_wait_list,
cl_event * event)
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(command_queue, CL_INVALID_COMMAND_QUEUE);
return command_queue->dispatch->clEnqueueReleaseDX9MediaSurfacesKHR(
command_queue,
num_objects,
mem_objects,
num_events_in_wait_list,
event_wait_list,
event);
}
#endif /* defined(_WIN32) */
#endif /* ! WITH_OPENCL_EXTRAS */
CL_API_ENTRY cl_int CL_API_CALL
clSetEventCallback(
cl_event event,
cl_int command_exec_callback_type,
void (CL_CALLBACK *pfn_notify)(cl_event, cl_int, void *),
void *user_data) CL_API_SUFFIX__VERSION_1_1
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(event, CL_INVALID_EVENT);
return event->dispatch->clSetEventCallback(
event,
command_exec_callback_type,
pfn_notify,
user_data);
}
CL_API_ENTRY cl_mem CL_API_CALL
clCreateSubBuffer(
cl_mem buffer,
cl_mem_flags flags,
cl_buffer_create_type buffer_create_type,
const void * buffer_create_info,
cl_int * errcode_ret) CL_API_SUFFIX__VERSION_1_1
{
KHR_ICD_VALIDATE_HANDLE_RETURN_HANDLE(buffer, CL_INVALID_MEM_OBJECT);
return buffer->dispatch->clCreateSubBuffer(
buffer,
flags,
buffer_create_type,
buffer_create_info,
errcode_ret);
}
CL_API_ENTRY cl_int CL_API_CALL
clSetMemObjectDestructorCallback(
cl_mem memobj,
void (CL_CALLBACK * pfn_notify)( cl_mem, void*),
void * user_data ) CL_API_SUFFIX__VERSION_1_1
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(memobj, CL_INVALID_MEM_OBJECT);
return memobj->dispatch->clSetMemObjectDestructorCallback(
memobj,
pfn_notify,
user_data);
}
CL_API_ENTRY cl_event CL_API_CALL
clCreateUserEvent(
cl_context context,
cl_int * errcode_ret) CL_API_SUFFIX__VERSION_1_1
{
KHR_ICD_VALIDATE_HANDLE_RETURN_HANDLE(context, CL_INVALID_CONTEXT);
return context->dispatch->clCreateUserEvent(
context,
errcode_ret);
}
CL_API_ENTRY cl_int CL_API_CALL
clSetUserEventStatus(
cl_event event,
cl_int execution_status) CL_API_SUFFIX__VERSION_1_1
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(event, CL_INVALID_EVENT);
return event->dispatch->clSetUserEventStatus(
event,
execution_status);
}
#ifdef WITH_OPENCL_EXTRAS
CL_API_ENTRY cl_mem CL_API_CALL
clCreateFromEGLImageKHR(
cl_context context,
CLeglDisplayKHR display,
CLeglImageKHR image,
cl_mem_flags flags,
const cl_egl_image_properties_khr *properties,
cl_int *errcode_ret)
{
KHR_ICD_VALIDATE_HANDLE_RETURN_HANDLE(context, CL_INVALID_CONTEXT);
return context->dispatch->clCreateFromEGLImageKHR(
context,
display,
image,
flags,
properties,
errcode_ret);
}
CL_API_ENTRY cl_int CL_API_CALL
clEnqueueAcquireEGLObjectsKHR(
cl_command_queue command_queue,
cl_uint num_objects,
const cl_mem *mem_objects,
cl_uint num_events_in_wait_list,
const cl_event *event_wait_list,
cl_event *event)
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(command_queue, CL_INVALID_COMMAND_QUEUE);
return command_queue->dispatch->clEnqueueAcquireEGLObjectsKHR(
command_queue,
num_objects,
mem_objects,
num_events_in_wait_list,
event_wait_list,
event);
}
CL_API_ENTRY cl_int CL_API_CALL
clEnqueueReleaseEGLObjectsKHR(
cl_command_queue command_queue,
cl_uint num_objects,
const cl_mem *mem_objects,
cl_uint num_events_in_wait_list,
const cl_event *event_wait_list,
cl_event *event)
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(command_queue, CL_INVALID_COMMAND_QUEUE);
return command_queue->dispatch->clEnqueueReleaseEGLObjectsKHR(
command_queue,
num_objects,
mem_objects,
num_events_in_wait_list,
event_wait_list,
event);
}
/* cl_khr_egl_event */
CL_API_ENTRY cl_event CL_API_CALL
clCreateEventFromEGLSyncKHR(
cl_context context,
CLeglSyncKHR sync,
CLeglDisplayKHR display,
cl_int *errcode_ret)
{
KHR_ICD_VALIDATE_HANDLE_RETURN_HANDLE(context, CL_INVALID_CONTEXT);
return context->dispatch->clCreateEventFromEGLSyncKHR(
context,
sync,
display,
errcode_ret);
}
#endif /* ! WITH_OPENCL_EXTRAS */
CL_API_ENTRY cl_command_queue CL_API_CALL
clCreateCommandQueueWithProperties(
cl_context context,
cl_device_id device,
const cl_queue_properties * properties,
cl_int * errcode_ret) CL_API_SUFFIX__VERSION_2_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_HANDLE(context, CL_INVALID_CONTEXT);
return context->dispatch->clCreateCommandQueueWithProperties(
context,
device,
properties,
errcode_ret);
}
CL_API_ENTRY cl_mem CL_API_CALL
clCreatePipe(
cl_context context,
cl_mem_flags flags,
cl_uint pipe_packet_size,
cl_uint pipe_max_packets,
const cl_pipe_properties * properties,
cl_int * errcode_ret) CL_API_SUFFIX__VERSION_2_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_HANDLE(context, CL_INVALID_CONTEXT);
return context->dispatch->clCreatePipe(
context,
flags,
pipe_packet_size,
pipe_max_packets,
properties,
errcode_ret);
}
CL_API_ENTRY cl_int CL_API_CALL
clGetPipeInfo(
cl_mem pipe,
cl_pipe_info param_name,
size_t param_value_size,
void * param_value,
size_t * param_value_size_ret) CL_API_SUFFIX__VERSION_2_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(pipe, CL_INVALID_MEM_OBJECT);
return pipe->dispatch->clGetPipeInfo(
pipe,
param_name,
param_value_size,
param_value,
param_value_size_ret);
}
CL_API_ENTRY void * CL_API_CALL
clSVMAlloc(
cl_context context,
cl_svm_mem_flags flags,
size_t size,
cl_uint alignment) CL_API_SUFFIX__VERSION_2_0
{
if (!context) {
return NULL;
}
return context->dispatch->clSVMAlloc(
context,
flags,
size,
alignment);
}
CL_API_ENTRY void CL_API_CALL
clSVMFree(
cl_context context,
void * svm_pointer) CL_API_SUFFIX__VERSION_2_0
{
if (!context || !svm_pointer) {
return;
}
context->dispatch->clSVMFree(
context,
svm_pointer);
}
CL_API_ENTRY cl_int CL_API_CALL
clEnqueueSVMFree(
cl_command_queue command_queue,
cl_uint num_svm_pointers,
void* svm_pointers[],
void (CL_CALLBACK* pfn_free_func)(
cl_command_queue queue,
cl_uint num_svm_pointers,
void* svm_pointers[],
void* user_data),
void* user_data,
cl_uint num_events_in_wait_list,
const cl_event* event_wait_list,
cl_event* event) CL_API_SUFFIX__VERSION_2_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(command_queue, CL_INVALID_COMMAND_QUEUE);
return command_queue->dispatch->clEnqueueSVMFree(
command_queue,
num_svm_pointers,
svm_pointers,
pfn_free_func,
user_data,
num_events_in_wait_list,
event_wait_list,
event);
}
CL_API_ENTRY cl_int CL_API_CALL
clEnqueueSVMMemcpy(
cl_command_queue command_queue,
cl_bool blocking_copy,
void * dst_ptr,
const void * src_ptr,
size_t size,
cl_uint num_events_in_wait_list,
const cl_event * event_wait_list,
cl_event * event) CL_API_SUFFIX__VERSION_2_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(command_queue, CL_INVALID_COMMAND_QUEUE);
return command_queue->dispatch->clEnqueueSVMMemcpy(
command_queue,
blocking_copy,
dst_ptr,
src_ptr,
size,
num_events_in_wait_list,
event_wait_list,
event);
}
CL_API_ENTRY cl_int CL_API_CALL
clEnqueueSVMMemFill(
cl_command_queue command_queue,
void * svm_ptr,
const void * pattern,
size_t pattern_size,
size_t size,
cl_uint num_events_in_wait_list,
const cl_event * event_wait_list,
cl_event * event) CL_API_SUFFIX__VERSION_2_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(command_queue, CL_INVALID_COMMAND_QUEUE);
return command_queue->dispatch->clEnqueueSVMMemFill(
command_queue,
svm_ptr,
pattern,
pattern_size,
size,
num_events_in_wait_list,
event_wait_list,
event);
}
CL_API_ENTRY cl_int CL_API_CALL
clEnqueueSVMMap(
cl_command_queue command_queue,
cl_bool blocking_map,
cl_map_flags flags,
void * svm_ptr,
size_t size,
cl_uint num_events_in_wait_list,
const cl_event * event_wait_list,
cl_event * event) CL_API_SUFFIX__VERSION_2_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(command_queue, CL_INVALID_COMMAND_QUEUE);
return command_queue->dispatch->clEnqueueSVMMap(
command_queue,
blocking_map,
flags,
svm_ptr,
size,
num_events_in_wait_list,
event_wait_list,
event);
}
CL_API_ENTRY cl_int CL_API_CALL
clEnqueueSVMUnmap(
cl_command_queue command_queue,
void * svm_ptr,
cl_uint num_events_in_wait_list,
const cl_event * event_wait_list,
cl_event * event) CL_API_SUFFIX__VERSION_2_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(command_queue, CL_INVALID_COMMAND_QUEUE);
return command_queue->dispatch->clEnqueueSVMUnmap(
command_queue,
svm_ptr,
num_events_in_wait_list,
event_wait_list,
event);
}
CL_API_ENTRY cl_sampler CL_API_CALL
clCreateSamplerWithProperties(
cl_context context,
const cl_sampler_properties * sampler_properties,
cl_int * errcode_ret) CL_API_SUFFIX__VERSION_2_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_HANDLE(context, CL_INVALID_CONTEXT);
return context->dispatch->clCreateSamplerWithProperties(
context,
sampler_properties,
errcode_ret);
}
CL_API_ENTRY cl_int CL_API_CALL
clSetKernelArgSVMPointer(
cl_kernel kernel,
cl_uint arg_index,
const void * arg_value) CL_API_SUFFIX__VERSION_2_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(kernel, CL_INVALID_KERNEL);
return kernel->dispatch->clSetKernelArgSVMPointer(
kernel,
arg_index,
arg_value);
}
CL_API_ENTRY cl_int CL_API_CALL
clSetKernelExecInfo(
cl_kernel kernel,
cl_kernel_exec_info param_name,
size_t param_value_size,
const void * param_value) CL_API_SUFFIX__VERSION_2_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(kernel, CL_INVALID_KERNEL);
return kernel->dispatch->clSetKernelExecInfo(
kernel,
param_name,
param_value_size,
param_value);
}
CL_API_ENTRY cl_int CL_API_CALL
clGetKernelSubGroupInfoKHR(
cl_kernel in_kernel,
cl_device_id in_device,
cl_kernel_sub_group_info param_name,
size_t input_value_size,
const void * input_value,
size_t param_value_size,
void * param_value,
size_t * param_value_size_ret) CL_EXT_SUFFIX__VERSION_2_0
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(in_kernel, CL_INVALID_KERNEL);
return in_kernel->dispatch->clGetKernelSubGroupInfoKHR(
in_kernel,
in_device,
param_name,
input_value_size,
input_value,
param_value_size,
param_value,
param_value_size_ret);
}
CL_API_ENTRY cl_int CL_API_CALL
clSetDefaultDeviceCommandQueue(
cl_context context,
cl_device_id device,
cl_command_queue command_queue) CL_API_SUFFIX__VERSION_2_1
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(context, CL_INVALID_CONTEXT);
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(device, CL_INVALID_DEVICE);
return context->dispatch->clSetDefaultDeviceCommandQueue(
context,
device,
command_queue);
}
CL_API_ENTRY cl_program CL_API_CALL
clCreateProgramWithIL(
cl_context context,
const void * il,
size_t length,
cl_int * errcode_ret) CL_API_SUFFIX__VERSION_2_1
{
KHR_ICD_VALIDATE_HANDLE_RETURN_HANDLE(context, CL_INVALID_CONTEXT);
return context->dispatch->clCreateProgramWithIL(
context,
il,
length,
errcode_ret);
}
CL_API_ENTRY cl_int CL_API_CALL
clGetKernelSubGroupInfo(
cl_kernel kernel,
cl_device_id device,
cl_kernel_sub_group_info param_name,
size_t input_value_size,
const void * input_value,
size_t param_value_size,
void * param_value,
size_t * param_value_size_ret) CL_API_SUFFIX__VERSION_2_1
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(kernel, CL_INVALID_KERNEL);
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(device, CL_INVALID_DEVICE);
return kernel->dispatch->clGetKernelSubGroupInfo(
kernel,
device,
param_name,
input_value_size,
input_value,
param_value_size,
param_value,
param_value_size_ret);
}
CL_API_ENTRY cl_kernel CL_API_CALL
clCloneKernel(
cl_kernel source_kernel,
cl_int * errcode_ret) CL_API_SUFFIX__VERSION_2_1
{
KHR_ICD_VALIDATE_HANDLE_RETURN_HANDLE(source_kernel, CL_INVALID_KERNEL);
return source_kernel->dispatch->clCloneKernel(
source_kernel,
errcode_ret);
}
CL_API_ENTRY cl_int CL_API_CALL
clEnqueueSVMMigrateMem(
cl_command_queue command_queue,
cl_uint num_svm_pointers,
const void ** svm_pointers,
const size_t * sizes,
cl_mem_migration_flags flags,
cl_uint num_events_in_wait_list,
const cl_event * event_wait_list,
cl_event * event) CL_API_SUFFIX__VERSION_2_1
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(command_queue, CL_INVALID_COMMAND_QUEUE);
return command_queue->dispatch->clEnqueueSVMMigrateMem(
command_queue,
num_svm_pointers,
svm_pointers,
sizes,
flags,
num_events_in_wait_list,
event_wait_list,
event);
}
CL_API_ENTRY cl_int CL_API_CALL
clGetDeviceAndHostTimer(
cl_device_id device,
cl_ulong * device_timestamp,
cl_ulong * host_timestamp) CL_API_SUFFIX__VERSION_2_1
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(device, CL_INVALID_DEVICE);
return device->dispatch->clGetDeviceAndHostTimer(
device,
device_timestamp,
host_timestamp);
}
CL_API_ENTRY cl_int CL_API_CALL
clGetHostTimer(
cl_device_id device,
cl_ulong * host_timestamp) CL_API_SUFFIX__VERSION_2_1
{
KHR_ICD_VALIDATE_HANDLE_RETURN_ERROR(device, CL_INVALID_DEVICE);
return device->dispatch->clGetHostTimer(
device,
host_timestamp);
}
Differences in the Slope of the QT-RR Relation Based on 24-Hour Holter ECG Recordings between Cardioembolic and Atherosclerotic Stroke.

Objective
Detecting paroxysmal atrial fibrillation in patients with ischemic stroke presenting in sinus rhythm is difficult because such episodes are often short, and they are also frequently asymptomatic. It is possible that the ventricular repolarization dynamics may reflect atrial vulnerability and cardioembolic stroke. Hence, we compared the QT-RR relation between cardioembolic stroke and atherosclerotic stroke during sinus rhythm.

Methods
The subjects comprised 62 consecutive ischemic stroke patients, including 31 with cardioembolic strokes (71.8±12.7 years, 17 men) and 31 with atherosclerotic strokes (74.8±10.8 years, 23 men). The QT and RR intervals were measured from ECG waves based on a 15-sec averaged ECG during 24-hour Holter recording using an automatic QT analyzing system. The dependence of the QT interval on the RR interval was analyzed using a linear regression line for each subject (QT = A × RR + B, where A is the slope and B is the y-intercept).

Results
The mean slope of the QT-RR relation was significantly greater in cardioembolic stroke than in atherosclerotic stroke (0.187±0.044 vs. 0.142±0.045, p<0.001). The mean QT, RR, and QTc during the 24-hour Holter recordings did not differ between the two groups. An increased slope (≥0.14) of the QT-RR regression line could predict cardioembolic stroke with 97% sensitivity, 55% specificity, and a positive predictive value of 64%.

Conclusion
The increased slope of the QT-RR linear regression line based on 24-hour Holter ECG in patients with ischemic stroke presenting in sinus rhythm may therefore be a simple and useful marker for cardioembolic stroke.

Introduction
In patients with acute ischemic stroke presenting in sinus rhythm, it is sometimes difficult to detect associated paroxysmal atrial fibrillation (AF) because such episodes of paroxysmal AF are often short, and they are also frequently asymptomatic. The ventricular repolarization dynamics are considered to be a marker of ventricular vulnerability. The same channels that determine ventricular repolarization could also affect part of atrial repolarization. Hence, the ventricular repolarization dynamics may also reflect atrial vulnerability. Several previous studies have reported that QTc prolongation is associated with an increased risk of AF and stroke independent of the traditional stroke risk factors. It has been proposed that a prolonged QT exists before and after cardioembolic stroke episodes and that the presence of a prolonged QT may be used as a marker for cardioembolic stroke. However, heart rate correction using the Bazett formula has serious limitations for evaluating the QT interval at lower and higher heart rates. We have previously demonstrated the usefulness of assessing the QT-RR slope and intercept of the ventricular repolarization dynamics using 24-hour Holter ECG recordings. We hypothesized that the assessment of the QT-RR relation can be used as a marker of cardioembolic stroke. In this study, we retrospectively evaluated the QT-RR relation based on a 15-sec averaged ECG during 24-hour Holter ECG recordings in patients who had acute ischemic stroke and compared the QT-RR regression line between cardioembolic and atherosclerotic stroke.
Materials and Methods
This retrospective study consisted of 62 consecutive patients who had acute ischemic stroke, including 31 patients (17 men, 14 women, average age 71.8±12.7 years) with cardioembolic stroke and 31 patients (23 men, 8 women, average age 74.8±10.8 years) with atherosclerotic stroke. Any patients with persistent or permanent AF were excluded. All patients with stroke were diagnosed based on the findings from neurological observations and magnetic resonance imaging (MRI) and/or computed tomography (CT). Intracranial artery luminal stenosis of >50% was considered significant as determined by magnetic resonance angiography. Carotid artery luminal stenosis of >50% was defined as significant stenosis as assessed by ultrasound. Patients underwent continuous in-hospital ECG monitoring for at least 3 days following the stroke episode. Patients who had documented episodes of paroxysmal AF (duration >30 sec) were defined as cardioembolic stroke patients. Patients without any known AF who had significant stenosis of the carotid artery and/or intracranial artery were defined as atherosclerotic stroke patients. Ischemic stroke patients without significant stenosis of either the carotid artery or the intracranial artery were defined as cardioembolic stroke cases even if they had no known episodes of paroxysmal AF. Holter ECGs were recorded for 24 hours within two weeks after an acute stroke episode, and the CM5 lead (the modified chest lead V5) was used for automatic QT measurements because of the morphological stability of the T wave. No subject was being treated with any drugs affecting the QT interval. A digital ECG recording device (FM-180, Fukuda Denshi, Tokyo, Japan) with a sampling rate of 128/sec was used with an automatic measurement system (SCM-8000, Fukuda Denshi). The QT interval was plotted against the RR interval from averaged ECG waves obtained by the summation of consecutive QRS-T complexes during each 15-second period over 24 hours without any episodes of paroxysmal AF. Therefore, a maximum total of 5,760 data points could be obtained for each subject. The analyzing system determined the top and the end of the T wave according to the following algorithm. The top of the T wave was determined as the point where the first derivative of the T wave changed from positive to negative or negative to positive. The end of the T wave was determined by the crossing point between the baseline and the slope fitting the descending part of the T wave using the regression tangent. In each subject, the detection level of the T wave first derivative was set as the average level of the ST segment. The dependence of the QT interval on the RR interval was analyzed for each patient using linear regression (QT = A × RR + B, where A is the slope and B is the intercept). In each subject, visual checks verified the automatic QT interval measurements. The results are presented as the mean ± standard deviation (SD). Unpaired data were analyzed using Student's t-test for continuous variables and the chi-square analysis for categorical variables. A receiver operating characteristic (ROC) curve analysis for predicting cardioembolic stroke was performed to calculate the optimal cutoff value for the slope of the QT-RR regression line. Statistical significance was set at p<0.05. Data were analyzed using the SPSS software program for Windows.
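For reference, the per-subject regression model stated above, together with the Bazett heart-rate correction used for the QTc comparisons reported below (the Bazett formula itself is not given in the text and is quoted here as the standard definition), can be written as:

$$ QT = A \cdot RR + B, \qquad QTc = \frac{QT}{\sqrt{RR}}. $$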
Results
The cardioembolic stroke group consisted of 12 patients with documented episodes of paroxysmal AF and 19 patients without significant stenosis of either the carotid artery or the intracranial artery. The number of patients with frequent episodes of premature atrial contraction (>1,000/24 hours) did not differ between the cardioembolic and atherosclerotic stroke groups (5 vs. 3). There was no significant difference in the patient clinical characteristics between cardioembolic stroke and atherosclerotic stroke (Table 1). A representative QT-RR relationship in a 65-year-old woman with cardioembolic stroke is shown in Fig. 1. She had right hemiplegia and aphasia on admission, with no paroxysmal AF at that time; continuous ECG monitoring later revealed an episode of paroxysmal AF. The slope and intercept of her QT-RR regression were 0.19 and 0.25, respectively. Fig. 2 shows a 73-year-old man with atherosclerotic stroke. On admission he had aphasia. ECG monitoring revealed no episode of paroxysmal AF, but ultrasound showed 92% stenosis of the left carotid artery. The slope and intercept of his QT-RR regression line are shown in Fig. 3. The scatter diagram for cardioembolic stroke was shifted toward the lower right area compared with that for atherosclerotic stroke (Fig. 4). The mean slope of the QT-RR regression line was significantly greater in cardioembolic stroke than in atherosclerotic stroke (0.187±0.044 vs. 0.142±0.045, p<0.001), and the mean intercept was significantly smaller in cardioembolic stroke than in atherosclerotic stroke (0.241±0.045 vs. 0.270±0.036, p<0.05, Table 1). The mean QT, the mean RR, and the mean QTc (Bazett formula) during the 24-hour Holter ECG recording did not differ between cardioembolic and atherosclerotic stroke. In cardioembolic stroke, there were no differences in the clinical characteristics or the QT-RR relations between the patients with and those without documented episodes of paroxysmal AF (Table 2).

Slope of QT-RR relation
An ROC curve analysis for predicting cardioembolic stroke was performed to calculate the optimal cutoff value for the slope of the QT-RR regression line. The area under the curve for the slope of the QT-RR regression line was 0.775, and the optimal cutoff value to predict cardioembolic stroke was 0.14, yielding 97% sensitivity, 55% specificity, a positive predictive value of 64%, and a negative predictive value of 93% (Fig. 5).

Discussion
The major findings of the present study were as follows: the mean slope of the QT-RR relation was significantly greater in cardioembolic stroke than in atherosclerotic stroke; the mean RR, the mean QT, and the mean QTc during the 24-hour Holter ECG recordings did not differ between cardioembolic and atherosclerotic stroke; and an increased slope (≥0.14) of the QT-RR regression line could predict cardioembolic stroke with 97% sensitivity, 55% specificity, and a 64% positive predictive value. We speculate that the QT-RR regression line slope during the 24-hour Holter ECG may therefore be a new and useful marker for cardioembolic stroke.

Slope and intercept relationship of the QT-RR regression line
The QT-RR dynamics were affected both by the QT-RR slope and by the QT-RR intercept. We evaluated the relationship between the slope and the intercept using a scatter plot, and we found a statistically significant negative correlation between the QT-RR slope and intercept among a large number of healthy subjects. This distribution may be related to the differences in background repolarization in each subject.
A combination of the rapid component of the delayed rectifier potassium current (IKr) and the slow component of the delayed rectifier potassium current (IKs) may play an important role in the modulation of ventricular repolarization to heart rate. It is possible that the suppression of IKr mainly increases the QT-RR slope and decreases the intercept; on the other hand, the suppression of IKs is known to mainly decrease the QT-RR slope and increase the intercept. An analysis of the QT-RR slope from the 24-hour Holter ECG is commonly used to evaluate the QT dynamics. An increased slope of the QT-RR regression line has been observed in patients after myocardial infarction, in those with long QT syndrome, dilated cardiomyopathy, or congestive heart failure, and in diabetic patients with autonomic dysfunction. Watanabe et al. reported that a QT-RR slope >0.17 was associated with sudden death in patients with stable chronic heart failure. A QT-RR slope >0.19 was proposed to be an independent risk marker for sudden death in patients with dilated cardiomyopathy. Cygankiewicz et al. demonstrated that a QT-RR slope of >0.22 was associated with increased total mortality in patients with chronic heart failure. Shimono et al. showed an inverse relationship between the high-frequency component of the heart rate variability and the slope of the QT-RR regression in diabetic patients and suggested an association between a steeper slope of the QT-RR regression and diabetic neuropathy.

QT interval and cardioembolic stroke
Several previous studies have reported that QTc prolongation is associated with an increased risk of incident AF independent of traditional AF risk factors. An increased risk of AF in patients with long QT syndrome has also been reported. A prolonged QT interval may be related to enhanced activity of the late Na current, which increases intracellular Ca and triggered automaticity in the atrium. QT prolongation is a well-known predictor of cardiovascular mortality and is also a marker of cardiac disease. Hence, it is possible that a prolonged QT is associated with cardiac disease in itself and is not associated with AF directly. On the other hand, Nielsen demonstrated that not only a longer QT but also a shorter QT is a risk marker of lone AF without underlying structural heart disease (a J-shaped association). A prolonged QTc has also been reported to be a risk marker of ischemic stroke and of the post-stroke prognosis. Hoshino et al. assessed the predictive value of a prolonged QTc for paroxysmal AF detection after acute ischemic stroke. They found the QTc to be significantly longer in patients with paroxysmal AF than in those without.

Slope and intercept relationship of the QT-RR regression line in patients with ischemic stroke
In the present study, the mean slope of the QT-RR regression was significantly greater in cardioembolic stroke than in atherosclerotic stroke, but the mean RR, the mean QT, and the mean QTc did not show any significant difference between them. The QT-RR relationship has considerable intersubject variability, and it is very difficult to obtain an optimal heart rate correction formula that could permit an accurate comparison of the corrected QT intervals. Compared with the conventional QT evaluation using a heart rate correction formula, the slope of the QT-RR relation during the 24-hour Holter ECG may be less affected by the sampling heart rate. A steeper slope of the QT-RR relation suggests not only a longer QT at lower heart rates, but also a shorter QT at higher heart rates.
These findings are compatible with the J-shaped association between the QTc and the risk of AF. Although the mechanism for these associations remains unclear, both the presence of AF in itself and a remodeled left atrium prone to paroxysmal AF may be associated with an increased risk of cardioembolic stroke. A steeper slope of the QT-RR regression has been reported as a surrogate indicator of subclinical atherosclerosis and autonomic nerve dysfunction and consequently could be a predictor of cardiovascular mortality. Most cardioembolic risk factors, such as diabetes mellitus, hypertension, aging, female gender, and congestive heart failure, are associated with a steeper slope of the QT-RR regression. Hence, in association with an increased risk of paroxysmal AF, patients with a steeper slope of the QT-RR regression may have an increased risk of cardioembolic stroke. The present study was retrospective and was limited by the small number of patients investigated. Therefore, further prospective studies with larger numbers of patients are needed to clarify the role of the QT-RR regression slope and intercept relationship as a marker of cardioembolic episodes in patients with acute ischemic stroke. No AF episodes could be detected in about two-thirds of the patients classified as having cardioembolic stroke in the present study. Further ECG monitoring to detect such episodes of AF should be performed in these patients.

Conclusion
The slope of the QT-RR regression line during the 24-hour Holter ECG may thus be a simple and useful marker for cardioembolic stroke.
import React from 'react';
import {
DATA_TRANSIENT_ELEMENT,
isEngine,
isSafari,
Plugin,
} from '@aomao/engine';
import type {
EditorInterface,
NodeInterface,
PluginOptions,
} from '@aomao/engine';
import type { CollapseItemProps } from '../collapse/item';
import ToolbarComponent, { ToolbarPopup } from './component';
import type { ToolbarValue, GroupItemProps } from './component';
import locales from '../locales';
type Config = Array<{
title: React.ReactNode;
items: Array<Omit<CollapseItemProps, 'engine'> | string>;
}>;
export interface ToolbarOptions extends PluginOptions {
config?: Config | false;
popup?: {
items: GroupItemProps[];
};
}
const defaultConfig = (editor: EditorInterface): Config => {
return [
{
title: editor.language.get<string>(
'toolbar',
'commonlyUsed',
'title',
),
items: [
'image-uploader',
'codeblock',
'table',
'file-uploader',
'video-uploader',
'math',
'status',
],
},
];
};
class ToolbarPlugin<
T extends ToolbarOptions = ToolbarOptions,
> extends Plugin<T> {
static get pluginName() {
return 'toolbar';
}
private popup?: ToolbarPopup;
init() {
if (isEngine(this.editor)) {
this.editor.on('keydown:slash', this.onSlash);
this.editor.on('parse:value', this.paserValue);
}
this.editor.language.add(locales);
if (this.options.popup) {
this.popup = new ToolbarPopup(this.editor, {
items: this.options.popup.items,
});
}
}
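	// Exclude the transient toolbar card itself when the editor value is parsed.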
paserValue = (node: NodeInterface) => {
if (
node.isCard() &&
node.attributes('name') === ToolbarComponent.cardName
) {
return false;
}
return true;
};
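	// When "/" is typed at the start of an empty block, insert the toolbar card
	// and move the caret into its keyword input so the user can filter items.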
onSlash = (event: KeyboardEvent) => {
if (!isEngine(this.editor)) return;
const { change } = this.editor;
let range = change.range.get();
const block = this.editor.block.closest(range.startNode);
const text = block.text().trim();
if (text === '/' && isSafari) {
block.empty();
}
if (
'' === text ||
('/' === text && isSafari) ||
event.ctrlKey ||
event.metaKey
) {
if (this.options.config === false) return;
range = change.range.get();
if (range.collapsed) {
event.preventDefault();
const data = this.options.config || defaultConfig(this.editor);
const card = this.editor.card.insert(
ToolbarComponent.cardName,
{},
data,
) as ToolbarComponent<ToolbarValue>;
card.setData(data);
card.root.attributes(DATA_TRANSIENT_ELEMENT, 'true');
this.editor.card.activate(card.root);
range = change.range.get();
				// Select the keyword input node inside the newly inserted toolbar card
const keyword = card.find('.data-toolbar-component-keyword');
range.select(keyword, true);
range.collapse(false);
change.range.select(range);
}
}
};
execute(...args: any): void {
throw new Error('Method not implemented.');
}
destroy() {
this.popup?.destroy();
this.editor.off('keydown:slash', this.onSlash);
this.editor.off('parse:value', this.paserValue);
}
}
export { ToolbarComponent, ToolbarPopup };
export type { ToolbarValue };
export default ToolbarPlugin;
Pre-school Storytime for children ages 2-5 on Thursday mornings April 4, 11, 18, and 25 with Miss Jill at 10:30 a.m. Children will participate in reading, singing, rhymes and fun crafts. Children under 3 will need a parent/caregiver present.
Storytime for School Age Children on Thursday mornings April 4, 11, 18 and 25 at 10:30 a.m. Miss Amy will organize a variety of activities for children ages 6-10. Expect stories, crafts, Legos, games and more! Great for homeschooling families!
Month of the Young Child Storytime, Saturday, April 27 at 11 a.m. with special guest readers Jeff Hornburg, Mayor of the Village of Silver Creek, and Virginia Miller, Assistant Director of Lake Shore Family Center. Join us for stories and a craft to celebrate young children!
“Diets Don’t Work” with Kristian Reiber, Certified Wellness Coach, Thursday, April 4 at 6 p.m. Free and open to the public.
Flowers that do Double Duty with Tina Ames, Thursday, April 11 at 6 p.m. Learn the basics of the popular article “Flowers that do Double Duty” Tina wrote for Mother Earth News. You will learn Tina’s favorite six flowers that are not only beautiful to look at and great for the bees, but do even more. All students will leave with seeds to take home and grow their own, too! Class is free and open to the public. Only 20 spots are available, so register soon!
Erie Traveling Zoo, Friday, April 26 at 3 p.m. All ages will enjoy this meet and greet session with our friends from the zoo! Come get a close-up view of a variety of animals. You can even touch some, too! Please register so we can plan for enough seating.
Eat Smart NY, Monday, April 29 at 6:30 p.m. Free cooking and nutrition workshop for all ages. Learn simple ways to incorporate more fresh and healthy foods into your diet.
LEGO Club meets the first Tuesday of the month at 6:30 p.m. All ages are welcome to join us on April 2 for our monthly building challenge. Registration is appreciated.
Kids Cooking Club meets on the second Monday of the month at 4 p.m. Kids from 8-15 years old are invited to attend and learn to put together a healthy and nutritious meal on April 8. Registration is limited to 15 and is required to ensure enough seating and ingredients for all.
Mine Craft Club meets on the second Tuesday of the month at 6:30 p.m. Kids age 6-15 are invited to join us on April 9 as we cooperatively play, build, craft, explore and create! We’ll be focusing on survival mode and playing together on the library’s network. Registration is limited to 8 participants, so sign up soon!
Teen Book Talk meets the third Tuesday of the month at 7 p.m. All teens age 13-17 are invited to bring a book they are reading or have read and share it with the group. Refreshments are provided; this month, the club will meet April 16.
Tinker Tuesday meets on the fourth Tuesday of the month at 6:30 p.m. All ages are invited to tinker and build with a variety of materials. This month, the group meets on Tuesday, April 23. Registration is appreciated. |
// Request builds an HTTP request from the Rest client's fields. The caller can also provide the
// authentication mode, either BASIC or OAUTH.
// Example: BASIC authentication BasicAuth{Username: "xyz", Password: "abcd"}
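// A minimal, hypothetical usage sketch (the calls that populate the client's base URL, verb,
// headers and query values are assumed to be the package's own builder/setter methods and are
// only indicated here, not spelled out):
//
//	var client Rest
//	// ... set baseurl, verb, uriparams, headers, queryvalues on the client ...
//	req, err := client.Request(&BasicAuth{Username: "xyz", Password: "abcd"})
//	if err != nil {
//		// handle the error
//	}
//	resp, err := http.DefaultClient.Do(req)
//	_ = resp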
func (client *Rest) Request(authoptions ...interface{}) (*http.Request, error) {
if !strings.HasSuffix(client.baseurl, "/") {
client.baseurl = client.baseurl + "/"
}
client.url = client.baseurl
params := strings.Join(client.uriparams, "/")
client.url = client.baseurl + params
	v, err := url.Parse(client.url)
	if err != nil {
		return nil, err
	}
	client.url = v.String()
	// Attach the encoded query string to the parsed URL rather than assigning the full URL to
	// url.URL.Path, which would mangle the scheme and host when the URL is re-serialized.
	v.RawQuery = client.queryvalues.Encode()
	requrl := v.String()
	var req *http.Request
	if client.formdata != nil {
		// Form data is sent URL-encoded in the body; callers normally also add the matching
		// Content-Type header through client.headers.
		req, err = http.NewRequest(client.verb, requrl, strings.NewReader(client.formdata.Encode()))
	} else {
		req, err = http.NewRequest(client.verb, requrl, client.payload)
	}
	if err != nil {
		return nil, err
	}
for key, values := range client.headers {
for _, value := range values {
req.Header.Add(key, value)
}
}
for _, opt := range authoptions {
switch v := opt.(type) {
case *BasicAuth:
req.SetBasicAuth(v.Username, v.Password)
case *OAuth2:
}
}
return req, err
} |
<filename>System/Library/PrivateFrameworks/TelephonyUtilities.framework/TUAudioRoutePreferredRouteOptions.h
/*
* This header is generated by classdump-dyld 1.0
* on Wednesday, March 22, 2017 at 9:05:14 AM Mountain Standard Time
* Operating System: Version 10.1 (Build 14U593)
* Image Source: /System/Library/PrivateFrameworks/TelephonyUtilities.framework/TelephonyUtilities
* classdump-dyld is licensed under GPLv3, Copyright © 2013-2016 by <NAME>.
*/
@interface TUAudioRoutePreferredRouteOptions : NSObject {
BOOL _active;
}
@property (assign,getter=isActive,nonatomic) BOOL active; //@synthesize active=_active - In the implementation block
-(BOOL)isActive;
-(void)setActive:(BOOL)arg1 ;
@end
|
In many everyday applications, such as access control, payment and tracking, devices involved in those applications need to be identified. Devices are typically assigned an identifier for such purposes. Thus, when the time comes for a device to be identified, the device transmits its assigned identifier to a network entity, which takes a decision as to whether the device (or a user thereof) is authorized to access a physical resource, view online content, utilize funds, etc.
In many situations, at least a portion of the pathway between a given device and the network entity might not be secure. For example, RFID, Bluetooth, WiFi, WiMax and the Internet all present potential security risks whereby a malicious individual could detect and copy identifiers transmitted by the given device. Once the malicious individual gains knowledge of the given device's identifier, it is possible that he or she can simulate the given device and potentially gain access to a secured resource, facility or vehicle, conduct unauthorized payments, impersonate the given device, etc.
Thus, an improved approach to the identification of devices would be welcome in the industry. |
Partridge in print and online Eric Partridge's A Dictionary of Slang and Unconventional English, first published in 1937, ran to 8 editions culminating in 1984 and is widely acknowledged as the definitive record of twentieth-century British slang. The New Partridge Dictionary of Slang and Unconventional English (NPD) maintains the tradition impressively, enhanced by a more conventional approach to citing sources, a broader focus to include examples of colloquial and vernacular vocabulary worldwide and prominence given to usage since 1945. A thousand new entries from the UK, USA, Australia, New Zealand, Canada, India, South Africa, Ireland and the Caribbean, and increased representation of the language of social media, document linguistic innovation and/or reflect more sophisticated lexical data capture since the previous print edition of 2006. The 19 pages of introductory text outline criteria for inclusion, describe the structure of entries and provide a fascinating set of observations on slang drawn from Partridge's many published works. With over 60,000 entries the second edition of NPD is complemented for the first time by Partridge Slang Online (PSO), a resource which offers new ways to access and interrogate the data. |
BAGHDAD (Reuters) - Iraq has agreed with Iran to exchange Iraqi food items for Iranian gas and energy supplies, two Iraqi government officials said on Wednesday.
Baghdad is now seeking U.S. approval to allow it to import Iranian gas which is used in its power stations, and needs more time to find an alternative source, they said. The sources are a senior government official and a member of Iraq’s ministerial energy committee.
Washington granted Iraq a waiver to be able to import Iranian gas and energy supplies as well as food items when U.S. sanctions were restored against Iran’s oil sector last week. |
Classification of Amino Acids Using Hybrid Terahertz Spectrum and an Efficient Channel Attention Convolutional Neural Network Terahertz (THz) spectroscopy is the de facto method to study the vibration modes and rotational energy levels of molecules and is a widely used molecular sensor for non-destructive inspection. Here, based on the THz spectra of 20 amino acids, a method that extracts high-dimensional features from a hybrid spectrum combined with absorption rate and refractive index is proposed. A convolutional neural network (CNN) calibrated by efficient channel attention (ECA) is designed to learn from the high-dimensional features and make classifications. The proposed method achieves an accuracy of 99.9% and 99.2% on two testing datasets, which are 12.5% and 23% higher than the method solely classifying the absorption spectrum. The proposed method also realizes a processing speed of 3782.46 frames per second (fps), which is the highest among all the methods in comparison. Due to the compact size, high accuracy, and high speed, the proposed method is viable for future applications in THz chemical sensors. Introduction Terahertz time-domain spectroscopy (THz-TDS) has been applied to non-destructive testing on many substances due to its stronger penetration into dielectric materials compared with visible light and infrared. In frequency domain, the effective band produced by desktop THz spectrometers ranges between 0.1 to 4 THz, and the upper limit of the frequency band can reach 6 THz using asynchronous optical sampling. As the rotational energy levels and some of the vibrational modes of many biomolecules reside in the THz band, the pulse wave generated by the THz spectrometer is sensitive to the molecular structural changes and the binding between molecules, and thus creates a label-free approach to detect substances such as proteins, DNA, and explosives. The molecular rotational resonance spectroscopy utilizes the unique THz spectral signatures of gas samples to analyze isotopic species. However, due to the strong absorption of THz radiation by water and polar solvent, the restriction of dynamics range, and the significant scattering attributed to the ∼10 m to mm wavelength of the THz pulses, many molecular features are difficult to extricate, which largely constrains the prospects of THz-TDS as a competent sensing technology. Another challenge in applying the THz spectroscopy is that many molecular species generate non-distinguishable absorption spectra or even no characteristic features in the effective frequency band, which further restricts the identification based on THz spectroscopy. Many methods have been proposed to extract the characteristic features from the THz spectrum. The principal component analysis (PCA) method is based on the idea that the majority of information of a multi-dimensional matrix can be represented by a few principal eigenvectors so that the minor features such as non-specific jitters are eliminated and matrices bearing similar features are easier to classify. Thereby, PCA can extract major features from the THz spectrum and has been used to identify drugs, soybean oil, and numerous chemicals from a large variety of substances. In addition, partial least-squares (PLS) methods are used to select the absorption wavelength of the molecule and to find spectral intervals with the highest signal-to-noise ratios (SNR), and hence concentrations of substances in mixtures are quantitatively estimated with high precision. 
Several variation of PLS including iPLS, biPLS, mwPLS, and siPLS are employed in the aforementioned studies. Additionally, the generic algorithm (GA) is a method to globally optimize parameters such as the absorption wavelength and to extract optical metrics such as refractive index and absorption rate from the data affected by noise, and is shown to produce better results than PLS and iterative algorithms. However, these methods are based on numerous metrics of the data, and their performance decays largely for spectra with no apparent features. Machine learning is the study of computer algorithms that can improve automatically through experience and by the use of data. As a novel branch of machine learning, neural networks can use numerous filters to extract the high-dimensional characteristics of the input data and study the interconnections between them, which makes it more adaptable to data lack of apparent features. In recent years, many researchers have used machine learning algorithms to classify the THz spectra. Hu et al. compared networks built with convolutional neural network (CNN) and recurrent neural network, and concluded that the identification based on CNN was more accurate and faster for a dataset containing 12 materials. A network combining CNN and bidirectional gated recurrent network (BiGRU) was designed to trace the dynamic changes of the absorption spectrum in time domain, and overcame the difficulties of frequency-domain analysis such as band cutoff but took much longer time to process data due to the larger size of time signal. However, previous research mainly focused on extracting features from 1D data such as absorption spectrum or time-domain spectrum, which did not fully integrate the amplitude and phase information of the pulsed THz wave, and hence restricts the accuracy of prediction. To date, no study associated with the classification of THz spectra has adopted the integrated feature regarding both amplitude and phase such as the combination of absorption rate and refractive index. Here, a study is proposed to demonstrate that the combination of absorption rate and refractive index can reveal high-dimensional spectral information, which is later abstracted as a feature map and passed to a CNN equipped with efficient channel attention (ECA) mechanism for calibrating the interdependence among data channels. A chemical library containing the solid samples of 20 amino acids is established for this study, and lowlying vibrational modes with respect to the samples are used for the classification of spectra. The proposed method realizes 100% prediction accuracy on validation data, 99.9% and 99.2% on two testing datasets. Additionally, comparisons are made among different classification models and indicate that the proposed method is superior in terms of accuracy and processing speed. These characteristics makes the proposed ECA network an ideal algorithm to be integrated on compact THz sensors. Experimental Equipment and Sample Preparation The experimental device includes an ultra-fast femotosecond laser, an optical delay line, emitter and receiver photoconductive antennas (PCAs), a lock-in amplifier, and a computer to control the device and process the signal. The femotosecond laser has a central wavelength of 1560 nm, a pulse width around 100 fs, a repetition rate of 100 MHz, and working power of 80 mW. 
The laser utilizes doped fiber as gain medium, and the generated power is evenly distributed to two channels, which are used for the generation and sampling of the THz radiation respectively. Single-mode optical fibers are used to transmit laser between modules. An illustration of the experimental device is shown in Figure 1. As shown in Figure 1, the emitter PCA is connected to a bias source that provides energy for the generation of THz pulses, which have a width of around 1 ps. Afterwards, the emitted THz beam is collimated and focused by two plano-convex lenses, respectively. Then, the beam transmits through the sample and is collimated and focused by two other plano-convex lenses to the receiver PCA, by which the absorption characteristics of the sample are measured. The temporal shape of the received signal is sampled by a optical delay line of 90 ps range and intensified by a lock-in amplifier connected to the computer. The delay line works at 60 Hz, which samples the time-domain THz spectrum 60 times per second. The system achieves a bandwidth of 4.5 THz and a dynamic range of 80 dB. The 20 amino acid samples used in the experiment were purchased from Shanghai Aladdin Reagent Company. The samples were grounded with a pestle and mortar, and filtered with a 180 mesh sieve to exclude the particles larger than 80 m so that the scattering effect was attenuated. The samples were then mixed with polyethylene in a 1:1 (w/w) ratio and pressurized at 30 MPa for approximately 5 min into solid tablets. The tablets were around 1.2 mm thick and 10 mm in diameter. 5 tablets were made for each amino acid, and each tablet was measured by continuous acquisition that lasted 40 s to accumulate about 2400 signals. The measurement chamber was purged with nitrogen gas to reduced the air humidity to 10%, and the experiment was carried out at 17 C, which was reduced from the room temperature of 25 C by the nitrogen gas. Hybrid Spectrum Combined with Absorption Rate and Refractive Index An advantage of pulsed THz radiation is that the amplitude and the phase of the signal can be extracted simultaneously by Fourier analysis. In particular, the attenuation of amplitude responds to the absorption of the THz wave, and the change in phase reveals the change of refractive index. As shown in Figure 1, this study uses transmission mode to measure the properties of the sample. A reference signal is required to extract the spectral information, explicitly the changes of amplitude and phase, due to the sample. Therefore, the reference signal was measured by removing the sample from the optical path, and the data collected by 10 s continuous acquisition were averaged to reduce the white noise generated by the vibration of the delay line. Meanwhile, the residual noise can be further attenuated with wavelet shrinkage denoising. The wavelet shrinkage denoising follows the idea that wavelet coefficients having small absolute values enclose mostly noise, and the important information is encoded by coefficients having large absolute values. Thereby, removing the small absolute value coefficients and then reconstructing the signal could produce a signal with less noise. Here, a wavelet decomposition with maximum level of 5 is used to denoise the signal, and the sym4 wavelet is selected for the optimal outcome. A comparison between the spectrum of D-Glutamic acid, which was averaged over 40 s continuous acquisition of a single tablet, and the reference signal is shown in Figure 2. 
In Figure 2a, the main pulse of the D-Glutamic acid spectrum is delayed by approximately 3 ps from the main pulse of the reference signal, followed by a series of decaying fluctuations. The delay is attributed to the higher refractive index of the sample, and the decaying fluctuations can be explained by the etalon effect by which the THz pulse is reflected by the front and back sides of the sample changeably. The etalon effect also causes baseline fluctuations in the frequency domain, which is shown in Figure 3. The spectral information carried by the sample can be extracted in the frequency domain, as given by Equation : where F {E(t)} denotes the Fourier transform of the time-domain spectrum E(t), denotes the angular frequency, |()| and () are the amplitude and phase of the frequencydomain spectrum,() respectively. The amplitude and phase changes can be obtained from the transfer function, which follows: where T() is the transfer function, samp () is the frequency-domain spectrum of the sample, re f () is the frequency-domain spectrum of the reference signal, () is the amplitude ratio between the sample and reference spectra, ∆() is the phase change caused by propagation through the sample. Figure 2b,c denote the changes in amplitude and phase regarding the D-Glutamic acid spectrum and the reference spectrum. Based on the transfer function and the assumption that the extinction rate is much smaller than the refractive index in the THz band, the absorption rate and refractive index of the sample can be derived from the Fresnel's law, as given by Equations and : where n() denotes the refractive index, () denotes the absorption rate, d is the thickness of the tablet, c is the speed of light. Figure 3 gives the absorption rate and the refractive index of D-Glutamic acid sample, where the time-domain spectra in Figure 2 were used in the calculation. As shown in Figure 3, the absorption peaks of D-Glutamic acid located at 1.216 THz and 2.038 THz are configured by the absorption rate and refractive index, respectively, which can be explained by the Kramers-Kronig relations by which the real and imaginary parts of the complex refractive index are connected. This feature forms the basis of the input block of the neural network in later discussion. It is noticed that the absorption rate in Figure 3a becomes highly compromised after 2.2 THz due to absorption and scattering, and the peak located at 2.443 THz is completely disguised by noise. However, the region after 2.2 THz in Figure 3b is still explicit, and the peak located at 2.443 THz is discernible, as highlighted by the dashed orange line. This reveals that the refractive index can hold spectral features that are not distinguishable from the absorption rate; thus, the combination of the two metrics would give more explicit description of the spectral information. Table 1 gives the absorption peaks of the 20 amino acids referenced from previous research and the measurements in this study. For the rest of the paper, all spectra are cutoff from 0.1 to 2.5 THz, which contains 240 points given the frequency step of 0.01 THz. The spectral information of all the 20 amino acids will be given in Appendix A. In addition, the effect of combining absorption rate and refractive index can be configured by PCA. As shown in Figure 4a,b, the first two principal components of the absorption rate and the refractive index can not separate different categories. 
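The relations referenced in the passage above take the following standard transmission-mode THz-TDS form (a reconstruction from the definitions in the text, with ω the angular frequency, ρ(ω) the amplitude ratio, Δφ(ω) the phase change, d the tablet thickness and c the speed of light; the original paper's equations may differ in presentation):

\tilde{E}(\omega) = \mathcal{F}\{E(t)\} = |\tilde{E}(\omega)|\, e^{-i\varphi(\omega)}

T(\omega) = \frac{\tilde{E}_{\mathrm{samp}}(\omega)}{\tilde{E}_{\mathrm{ref}}(\omega)} = \rho(\omega)\, e^{-i\,\Delta\varphi(\omega)}

n(\omega) = 1 + \frac{c\,\Delta\varphi(\omega)}{\omega d}

\alpha(\omega) = \frac{2}{d}\,\ln\!\left[\frac{4\,n(\omega)}{\rho(\omega)\,\bigl(n(\omega)+1\bigr)^{2}}\right]

These are the usual thick-sample approximations in which the Fabry-Perot (etalon) echoes mentioned above are not deconvolved.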
In contrast, after stacking the absorption rate and refractive index to a 2D vector and extracting the first principal component, the points belonging to different amino acids form clusters, as highlighted by dashed elliptic circles in Figure 4c. The hybrid spectrum combined with absorption rate and refractive index is explained by Figure 4d and will be used to classify the amino acids in later discussions. N is the size of 1D spectrum, which is 240 in this study. Efficient Channel Attention Network In response to classification of the amino acids and inspired by the works in, a CNN that reshapes the hybrid spectrum to a feature map and identifies it by a convolutional network associated with ECA mechanism is proposed. The structure of the network is illustrated in Figure 5. As shown in Figure 5a, the hybrid spectrum is passed to a convolutional layer that has 32 filters (Conv1), and the outputs of the layer are 32 filtered versions of the input signal. These signals are then stacked to a 2D matrix, so that the input to the next layer Conv2 is a single feature map instead of 32 distinct filtered signals. The input and reshaping layers are combined as the input block. Layers Conv2 and Conv3 use 2D filters to capture relationships across the filtered signals produced by Conv1, and in turn output 32 channels, respectively. Layer Conv4 uses 1 1 kernel to increase the number of channels to 64, which extracts more channel-wise information for ECA module. As shown in Figure 5b, the ECA module is composed of a global pooling layer to reduce the dimension to 1 1 64, a 1D convolutional layer to implement the cross-channel interaction, and an activation layer using sigmoid function to render nonlinearity to channel weights (attention coefficients, coefficients to enforce the interconnection between neighboring channels). A channel-wise multiplication is performed to the output of Conv4 and the attention coefficients, so that the interdependencies between channels are substantially handled. Two fully connected layers of sizes 256 and 128 are used to reduce the number of hyper parameters passed from ECA network, followed by a dense layer using softmax activation function, which outputs the probability of each category, for classification. Batch normalization is applied after each convolutional layer to standardize the outputs of the layer for each mini batch. Pooling layer is added afterwards to reduce the output dimension. The detailed description of ECA can be found in. A brief introduction regarding the attention mechanism and the adaptive kernel size is given here. To address the interaction between channels in a CNN, a mechanism to recalibrate the weights of different channels is required. As mentioned by, ECA is an efficient method to calculate the channel weights without dimensionality reduction. The estimation of the weights follows Equation : where y j i is the input from the adjacent channel j of channel i, w j is the weight for y j i, k is the number of adjacent channels, and k i is the set of neighboring channels. This mechanism not only handles the cross-channel interactions but also avoids complete independence among different groups. As aforementioned, attention are calculated by a 1D convolution with a kernel size of k. Depending on the total number of channels, C, k can be adaptively adjusted for the optimal performance. Equation gives the relationship between C and k: where and b are fitting factors, which are set as = 2, b = 1 in this study; | | odd indicates the nearest odd number. 
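The two ECA relations referenced above (the channel-weight estimate and the adaptive kernel size) can be written out as follows; this is a reconstruction from the definitions given in the text and the standard ECA formulation, with σ the sigmoid function, Ω_i^k the set of k channels adjacent to channel i, C the total number of channels, and γ, b the fitting factors:

\omega_i = \sigma\!\Bigl(\sum_{j \in \Omega_i^{k}} w^{j}\, y_i^{j}\Bigr)

k = \psi(C) = \left|\frac{\log_2 C}{\gamma} + \frac{b}{\gamma}\right|_{\mathrm{odd}}

For γ = 2, b = 1 and C = 64 this evaluates to |6/2 + 1/2|_odd = |3.5|_odd = 3, which is the kernel size used for the 64 channels produced by Conv4.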
According to Equation, k is equal to 3 for the 64 channels given by Conv4. Training Details To make the training data, 100 signals of a tablet are averaged for a single record. Thereby, there are about 24 records for each tablet and 120 records for each amino acid. For the testing data, the signals are averaged 20 and 10 times to form two datasets (Av-erage20 and Average10) that are noisier than the training data, which are designed to test the robustness of the model. The training data were shuffled, and 20% of the data were assigned for validation. The training used stochastic gradient decent (SGD) with an initial learning rate of 10 −3, a learning rate decay of 10 −5, a momentum of 0.9, Nesterov accelerated gradient for faster convergence, and a batch size of 128. Cross-entropy loss function was used to measure the distance between the predicted and target labels during the training process. The training ran for 300 epochs with the initial learning rate, and would come to an early stop if the loss did not decrease for 30 consecutive epochs. Then, the training ran for other 100 epochs with a learning rate of 10 −4 to fine-tune the model. All programs ran on a PC equipped with a RTX 2070 GPU and an 8-core Intel(R) i7 CPU. Metrics The accuracy (Acc) was used to evaluate the classification performance of the model, and precision (Pr) is used to evaluate the classification performance for each category, as shown in Equations and : where i refers to the index of the label, j denotes the index of the predicted label. n ij represents the number of items belong to category i predicted as category j. Similarly, n ii represents the number of correct predictions associated with category i. Here, the number of hyper parameters (#.Param.), the floating point operations per second (FLOPs), training time, and processing speed (test rate, frame per second, fps) are used to evaluate the efficiency of the model. The Effects of Hybrid Spectrum and ECA Module To demonstrate the effectiveness of the proposed network, the effect of hybrid spectrum and ECA module are studied explicitly. First, the absorption rate and the refractive index were separately taken as the training data, and the model was trained as aforementioned. The feature maps produced by the input block of the trained models are shown in Figure 6, where the vertical axis of the image corresponds to the data size of each channel and the horizontal axis of the image corresponds to the number of channels. As seen in Figure 6a,b, the feature maps generated with only absorption rate or refractive index do not have obvious patterns cross different channels. In contrast, the feature map generated with the hybrid spectrum in Figure 6c encloses patterns that signify the connection among channels, so that the following convolutional layers could sufficiently extract the high-dimensional features by applying multiple channels and make more accurate predictions. The results regarding different inputs are given in Table 2, where the accuracy of ECA network with hybrid spectrum as input is 1.2% and 3.4% higher than those loaded with only absorption rate and refractive index on Average20 dataset, and 1.9% and 5.6% higher on the Average10 dataset. Table 2. The precision and accuracy of classification of 20 amino acids on Average20 and Average10 datasets. From a to h are: a. ECA network with absorption rate as input; b. ECA network with refractive index as input; c. plain CNN with hybrid spectrum as input; d. ECA-DDCNN with hybrid spectrum as input; e. 
ECA-Resnet50 with hybrid spectrum as input; f. ECA-Resnet101 with hybrid spectrum as input; g. CNN-BiGRU referred from ; h. CNN referred from. Ours denotes the ECA network with hybrid spectrum as input. The values in the table are percentages. The effect of ECA module is tested by ablation study, where the Conv4 layer and ECA module are removed to test the feasibility of the remaining network, which is denoted as plain CNN in Table 2. The classification accuracy of ECA network is 0.2% higher than that of plain CNN on the Average20 dataset, and 0.3% higher on the Average10 dataset. The classification precision of most categories is also higher for the ECA network. However, the precision of D-Serine is the lowest among all categories, especially for classification by absorption rate or refractive index. This can be explained by Figure 7. As shown in Figure 7a, the higher bound of high-SNR region is around 1.7 THz, followed by the low-SNR features that vary largely for different average times. In Figure 7b, the slope of refractive index is more evident in the training data than in the testing data. This brings difficulties to the trained model to identify the features in testing data, and thus lowers the classification precision. To conclude, the ECA network with hybrid spectrum as input can sufficiently improve the results of classification and achieve considerable accuracy in complex scenarios such as D-Serine. Compare with Other ECA-Based Networks ECA-based networks referred from are compared with the ECA network. The network in Figure 5a first substituted the ECA network for the aforementioned networks, then trained them with the hybrid spectrum. As seen from Table 2, the accuracy of our ECA network is 11.3% higher than the ECA-DDCNN proposed by, 6.8% and 7.7% higher than the ECA-Resnet50 and ECA-Resnet101 proposed by for the Average20 dataset. For the Average10 dataset, the accuracy is 26.2% higher than ECA-DDCNN, 10.5% higher than ECA-Resnet50, and 15.7% higher than ECA-Resnet101. A comparison among different networks is given in Table 3. As illustrated by Table 3, the depths and number of parameters of previously proposed ECA-based networks are much greater than the ECA network, which makes the optimization more complicated, as verified by the longer training times. As the training dataset contains about 2400 records of data, the deeper models may experience overfitting in which the model is too closely aligned to a limited set of data points during the training and hence is less adaptive to the data outside training dataset. Figure 8 displays the training history of each ECA-based network, where the divergence between losses belong to training data and validation data indicates the occurrence of overfitting. Overfitting Overfitting As shown by Figure 8, all the deep networks experienced overfitting during the training, which from one aspect explains their lower classification accuracy. Compare with Other Methods for Amino-Acid Classification In previous studies of amino-acid classification, Ref. proposed a network combining CNN and bidirectional gated recurrent network (BiGRU) to trace the dynamic changes of THz time spectrum, and proposed a CNN inspired by LeNet5 to classify the 2D images composed by frequency spectra. As shown in Table 2, the accuracy of the CNN is 12.5% lower than ECA network on Average20, and 23% lower on Average10. Meanwhile, the accuracy of CNN-BiGRU is 0.2% lower than the ECA network on Average20, but 0.1% higher than ECA network on Average10. 
However, this can be explained by Figure 7, where time spectrum only changes by noise level but maintains all the dynamic features for different average times; on the contrary, the frequency spectrum differs largely for different average times. This helps CNN-BiGRU achieve higher precision for D-Serine. In fact, the classification accuracy of ECA network on 19 amino acids excluded Serine is 99.95% for Average20 and 99.68 for Average10. In contrast, the accuracy of CNN-BiGRU is 99.75% for Average20 and 99.45% for Average10. In real-world applications, the samples may be contained in packages, and thus the time spectrum varies due to shapes and materials of the packages. Meanwhile, the frequency spectrum is less influenced by the environments and hence is widely used in non-destructive inspection. In this regard, the ECA network is more practical in the real world due to its robustness on noisy test data and higher processing speed, i.e., the test rate of ECA network is 3782.46 fps, compared to 84 fps of CNN-BiGRU. Discussion In this paper, the combination of absorption rate and refractive index for extracting high-dimensional information from the THz spectrum was proposed, and an ECA-based CNN was designed to classify the combined spectrum with high accuracy. Compared with other ECA-based networks, the proposed ECA network avoided overfitting during the training and achieved higher accuracy. The accuracy of the proposed method is higher than a previous method that classifies solely absorption spectrum, and slightly lower than the method working with time spectrum due to the changed features among frequency-domain data. The proposed method achieved the highest processing speed among all the methods and smaller size than other ECA-based networks, which makes it a viable option for integration to embedded systems. However, this paper only discussed the classification of pure chemicals, but in real world samples are usually compounds, therefore extending the ECA network to compound classification will be the next step of our investigation. |
import os
import requests
from requests.auth import HTTPBasicAuth
import json
import yaml
JENKINS_ENDPOINT = os.environ['JENKINS_ENDPOINT']
JENKINS_USER = os.environ['JENKINS_USER']
JENKINS_TOKEN = os.environ['JENKINS_TOKEN']
def run_jenkins(job, **params):
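    # Trigger a parameterized build: POST to /job/<name>/build with a form field named "json"
    # whose value is a JSON document containing a "parameter" list of name/value pairs.
    # Returns (ok, response_text), where ok is True for an HTTP 200 or 201 response.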
body=dict(
parameter=[
dict(name=k, value=v)
for k, v
in params.items()
]
)
resp = requests.post(
f'{JENKINS_ENDPOINT}/job/{job}/build',
auth=HTTPBasicAuth(JENKINS_USER, JENKINS_TOKEN),
data=dict(
json=json.dumps(body)
)
)
return resp.status_code in (
requests.codes.ok, requests.codes.created
), resp.text
def instance_status():
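    # Read the console output of the last successful "Provisioning - instance status" build and
    # parse the YAML document printed between the '---' marker and the trailing 'Finished:' line.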
resp = requests.get(
f'{JENKINS_ENDPOINT}/job/Provisioning%20-%20instance%20status/lastSuccessfulBuild/consoleText',
auth=HTTPBasicAuth(JENKINS_USER, JENKINS_TOKEN),
)
resp = resp.text.split('---')[1].split('\nFinished: ')[0]
    resp = yaml.safe_load(resp)
return resp
if __name__ == '__main__':
r = run_jenkins('test_test', param1='David', param2='Moshe')
print(r)
r = instance_status()
print(r)
|
use crate::rtcp::RtcpType;
use pnet_macros::packet;
use pnet_macros_support::types::*;
#[packet]
#[derive(Eq, PartialEq)]
/// Sender report, containing jitter, reception, timing and volume information.
///
/// See the relevant [RTP RFC section](https://tools.ietf.org/html/rfc3550#section-6.4.1).
/// Main payload body is a [`SenderInfo`] block followed by [`rx_report_count`] x [`ReportBlock`].
///
/// A description of fields:
///
/// ## `version`
/// RTP version. Should be `2`.
///
/// ## `padding`
/// Packet contains padding octets which are not part of the payload, but
/// who are counted in [`length`]. The last byte of the payload contains the
/// count of bytes to be ignored from the end (including itself).
///
/// ## `rx_report_count`
/// Number of [`ReportBlock`]s contained. May be `0`.
///
/// ## `packet_type`
/// Must be [`RtcpType::SenderReport`].
///
/// ## `pkt_length`
/// Length of this RTCP packet in 32-bit words, minus one.
/// Includes header and padding. `0` is valid for compound RTCP packets.
///
/// ## `ssrc`
/// SSRC for the source of this packet.
///
/// ## `payload`
/// Bytes of the RTCP body.
///
/// [`SenderInfo`]: struct.SenderInfo.html
/// [`rx_report_count`]: #structfield.rx_report_count
/// [`ReportBlock`]: struct.ReportBlock.html
/// [`length`]: #structfield.length
/// [`RtcpType::SenderReport`]: ../enum.RtcpType.html#variant.SenderReport
pub struct SenderReport {
pub version: u2,
pub padding: u1,
pub rx_report_count: u5,
#[construct_with(u8)]
pub packet_type: RtcpType,
pub pkt_length: u16be,
pub ssrc: u32be,
#[payload]
pub payload: Vec<u8>,
}
#[packet]
#[derive(Eq, PartialEq)]
/// Receiver report, containing jitter and reception information.
///
/// See the relevant [RTP RFC section](https://tools.ietf.org/html/rfc3550#section-6.4.2).
/// Main payload body is [`rx_report_count`] x [`ReportBlock`].
///
/// A description of fields:
///
/// ## `version`
/// RTP version. Should be `2`.
///
/// ## `padding`
/// Packet contains padding octets which are not part of the payload, but
/// who are counted in [`length`]. The last byte of the payload contains the
/// count of bytes to be ignored from the end (including itself).
///
/// ## `rx_report_count`
/// Number of [`ReportBlock`]s contained. May be `0`.
///
/// ## `packet_type`
/// Must be [`RtcpType::ReceiverReport`].
///
/// ## `pkt_length`
/// Length of this RTCP packet in 32-bit words, minus one.
/// Includes header and padding. `0` is valid for compound RTCP packets.
///
/// ## `ssrc`
/// SSRC for the source of this packet.
///
/// ## `payload`
/// Bytes of the RTCP body.
///
/// [`SenderInfo`]: struct.SenderInfo.html
/// [`rx_report_count`]: #structfield.rx_report_count
/// [`ReportBlock`]: struct.ReportBlock.html
/// [`length`]: #structfield.length
/// [`RtcpType::ReceiverReport`]: ../enum.RtcpType.html#variant.ReceiverReport
pub struct ReceiverReport {
pub version: u2,
pub padding: u1,
pub rx_report_count: u5,
#[construct_with(u8)]
pub packet_type: RtcpType,
pub pkt_length: u16be,
pub ssrc: u32be,
#[payload]
pub payload: Vec<u8>,
}
#[packet]
#[derive(Eq, PartialEq)]
/// Sender Info block in a [`SenderReport`].
///
/// A description of fields:
///
/// ## `ntp_timestamp_{second,fraction}`
/// Wallclock time when this report was sent.
///
/// ## `rtp_timestamp`
/// The above timestamp, converted to use the same units/offset
/// as the timestamps appearing in RTP packets.
///
/// ## `pkt_count`
/// Total number of packets sent by this sender since the start of the session.
///
/// Should be reset if SSRC changes.
///
/// ## `byte_count`
/// Total number of *payload* bytes/octets sent by this sender.
///
/// Should be reset if SSRC changes.
///
/// ## `payload`
/// Remainder of the packet.
///
/// [`SenderReport`]: struct.SenderReport.html
/// [`rx_report_count`]: #structfield.rx_report_count
/// [`ReportBlock`]: struct.ReportBlock.html
/// [`length`]: #structfield.length
/// [`RtcpType::SenderReport`]: ../enum.RtcpType.html#variant.SenderReport
pub struct SenderInfo {
pub ntp_timestamp_second: u32be,
pub ntp_timestamp_fraction: u32be,
pub rtp_timestamp: u32be,
pub pkt_count: u32be,
pub byte_count: u32be,
#[payload]
pub payload: Vec<u8>,
}
#[packet]
#[derive(Eq, PartialEq)]
/// Report block in a [`SenderReport`] or [`ReceiverReport`].
///
/// A description of fields:
///
/// ## `ssrc`
/// SSRC of stream which this report concerns.
///
/// ## `fraction_lost`
/// Packet loss as a fixed-point number (*i.e.*, n => n/256).
///
/// ## `cumulative_pkts_lost`
/// Total number of packets from this SSRC that have been lost since
/// reception began.
///
/// ## `cycles`
/// Number of times the source's sequence number has wrapped past its
/// starting value.
///
/// Part of the "extended highest sequence number received".
///
/// ## `sequence`
/// Highest sequence number observed from this source.
///
/// Part of the "extended highest sequence number received".
///
/// ## `interarrival_jitter`
/// Estimated variance of RTP interarrival times.
///
/// See p.40 of [the RFC] for the algorithm used to calculate this.
///
/// ## `last_sr_timestamp`
/// Middle 32 bits of the NTP timestamp of the most recent sender report
/// from this SSRC.
///
/// Will be `0` if no report has been observed.
///
/// ## `last_sr_delay`
/// Delay in fractions of a second (*i.e.*,`n/65536`) since the last sender report
/// from this SSRC was received.
///
/// Set to `0` if and only if no sender report has been observed.
///
/// ## `payload`
/// Remainder of the packet.
///
/// [`SenderReport`]: struct.SenderReport.html
/// [`ReceiverReport`]: struct.ReceiverReport.html
/// [`rx_report_count`]: #structfield.rx_report_count
/// [`ReportBlock`]: struct.ReportBlock.html
/// [`length`]: #structfield.length
/// [`RtcpType::SenderReport`]: ../enum.RtcpType.html#variant.SenderReport
/// [the RFC]: https://tools.ietf.org/html/rfc3550
pub struct ReportBlock {
pub ssrc: u32be,
pub fraction_lost: u8,
pub cumulative_pkts_lost: u24be,
pub cycles: u16be,
pub sequence: u16be,
pub interarrival_jitter: u32be,
pub last_sr_timestamp: u32be,
pub last_sr_delay: u32be,
#[payload]
pub payload: Vec<u8>,
}
|
<reponame>YangJae96/KMU_Visual-SLAM<filename>slam.py
import cv2 as cv
import os
from os.path import abspath, join, dirname
import copy
import logging
import argparse
import time
import sys
import numpy as np
import errno
import io
import json
import gzip
import pickle
import six
import networkx as nx
from opensfm import dataset
from opensfm import extract_metadata
from opensfm import detect_features
from opensfm import match_features
from opensfm import create_tracks
from opensfm import reconstruct
from opensfm import mesh_data
from opensfm import undistort
from opensfm import undistorted_dataset
from opensfm import compute_depthmaps
from opensfm import log
from opensfm import io
from opensfm import exif
from opensfm import types
from opensfm import config
from opensfm import features
from opensfm import tracking
from PIL import Image
logger = logging.getLogger(__name__)
logging.getLogger("Starting Webcam!!").setLevel(logging.WARNING)
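# SLAM wraps the OpenSfM pipeline; the steps below are intended to run in order:
# metadata extraction -> feature detection -> feature matching -> track creation ->
# incremental reconstruction -> (optional meshing) -> undistortion -> depth-map computation.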
class SLAM():
def __init__(self,data):
self.data=data
def Metadata(self):
self.meta_data=extract_metadata.run(self.data)
print("meta_data==", self.meta_data)
def detect_Features(self):
DF=detect_features.detecting_features()
DF.run(self.data)
        # self.data.feature_of_images has been saved
        # self.data.feature_report has been saved
def match_Features(self):
MF=match_features.run(self.data)
def create_tracks(self):
create_tracks.run(self.data)
def reconstruct(self):
reconstruct.run(self.data)
# def mesh(self):
# mesh_data.run(self.data)
def undistorting(self):
undistort.run(self.data)
def compute_depthmaps(self):
compute_depthmaps.run(self.data)
class Command:
name = 'slam'
help = "Starting webcam for image capture"
#if __name__=='__main__'
def add_arguments(self, parser):
parser.add_argument('dataset', help='dataset to process')
parser.add_argument('webcam', help='webcam status 0:off 1: on')
def run(self, args):
print(args)
self.data_path=args.dataset
if(args.webcam=='0'):#webcam off
self.load_image_list(self.data_path)
else: #webcam on
self.save_webcamImage(self.data_path)
#print(self.image_list.keys())
print(self.image_list.keys())
start=time.time()
data=dataset.DataSet(args.dataset,self.image_list)
slam=SLAM(data)
slam.Metadata()
slam.detect_Features()
slam.match_Features()
slam.create_tracks()
slam.reconstruct()
#slam.mesh()
slam.undistorting()
slam.compute_depthmaps()
end=time.time()
recon_time=end-start
print("Reconstruction Time == {}m {}s".format(recon_time//60, recon_time%60))
def load_image_list(self, data_path):
print(data_path)
self._set_image_path(os.path.join(data_path, 'images'))
def _is_image_file(self, filename):
extensions = {'jpg', 'jpeg', 'png', 'tif', 'tiff', 'pgm', 'pnm', 'gif'}
return filename.split('.')[-1].lower() in extensions
def _set_image_path(self,data_path):
self.image_list = {}
if os.path.exists(data_path):
for name in os.listdir(data_path):
name = six.text_type(name)
if self._is_image_file(name):
frame=cv.imread(os.path.join(data_path,name), 1)
print(frame.shape)
self.image_list.update({name:frame})
def save_webcamImage(self, data_path):
cap=cv.VideoCapture(2)
i=0
count=1
self.image_list={}
image_path=os.path.join(data_path,'webcam_images')
if(cap.isOpened()==False):
print("Unable to open the webcam!!")
while(cap.isOpened()):
ret, frame=cap.read()
if ret==False:
break
cv.imshow('camera',frame)
if i%40==0:
img_name = "{}.jpg".format(count-1)
cv.imwrite(os.path.join(image_path, img_name),frame)
#self.image_list.append(frame)
self.image_list.update({img_name:frame})
logging.info('Capturing Images for {}'.format(img_name))
if count%10 ==0:
break
count+=1
if cv.waitKey(1) & 0xFF==27:
break
i=i+1
cap.release()
cv.destroyAllWindows()
# log.setup()
# args=sys.argv[1]
# save_webcamImage(args)
# print(args)
|
<reponame>pedroscaff/bomberman-rs
use amethyst::core::math::Vector3;
use amethyst::core::transform::Transform;
use amethyst::ecs::prelude::{Component, DenseVecStorage};
use amethyst::prelude::*;
use amethyst::renderer::SpriteRender;
use crate::state::{ARENA_HEIGHT, ARENA_WIDTH};
pub const PLAYER_WIDTH: f32 = 12.0;
pub const PLAYER_HEIGHT: f32 = 12.0;
pub const PLAYER_WIDTH_HALF: f32 = PLAYER_WIDTH / 2.0;
pub const PLAYER_HEIGHT_HALF: f32 = PLAYER_HEIGHT / 2.0;
pub struct Player {
pub is_human: bool,
pub number: u8,
pub num_bombs: u8,
}
impl Component for Player {
type Storage = DenseVecStorage<Self>;
}
pub fn init_players(world: &mut World, sprites: &[SpriteRender]) {
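    // Spawn four players, one per arena corner: even indices sit on the left edge, odd on the
    // right, the first two along the bottom and the last two along the top. Only player 0 is
    // human-controlled; every player starts with a single bomb.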
for i in 0..4 {
let x = if i % 2 == 0 {
PLAYER_WIDTH_HALF
} else {
ARENA_WIDTH - PLAYER_WIDTH_HALF
};
let y = if i < 2 {
PLAYER_HEIGHT_HALF
} else {
ARENA_HEIGHT - PLAYER_HEIGHT_HALF
};
let mut transform = Transform::default();
transform.set_translation_xyz(x, y, 0.4);
transform.set_scale(Vector3::new(0.75, 0.75, 1.0));
        let is_human = i == 0;
world
.create_entity()
.with(sprites[2].clone())
.with(Player {
is_human,
number: i,
num_bombs: 1,
})
.with(transform)
.build();
}
}
|
Olson-Chen, C.; Balaram, K.; Hackney, D. "Chlamydia trachomatis and adverse pregnancy outcomes: meta-analysis of patients with and without infection". Maternal and Child Health Journal. 2018; : 1-10. |
Total U.S. rail traffic rose 3.1 percent the week ending Sept. 14 compared to the same period last year, according to a recent report by the Association of American Railroads.
Overall, North American freight traffic broke even year-over-year, with carload traffic going up across the board for all three countries. This shows more companies are relying on freight sourcing to ship their products as consumer confidence bounces back after the recession and domestic manufacturing continues its recovery. U.S. freight carload traffic gained 1.5 percent, while intermodal volume increased even more, at 4.9 percent year-over-year. Beating the U.S., Canadian freight carload traffic increased by 4.9 percent for the week ending Sept. 14 and was up 5.1 percent for intermodal volume. Mexican freight carload volume advanced 6.3 percent, and was surpassed by a 7.5 percent jump in intermodal volume.
The AAR said rail activity has increased for the majority of the carload commodity groups monitored by the association with seven out of 10 posting gains as shipments of motor vehicles and parts moved up 14.4 percent. More consumers are demanding big ticket items like automobiles resulting in a surge in auto sales.
Petroleum and petroleum products are staying on par with this figure as U.S. oil and natural gas production booms.
Energy companies have turned to railways to transport output from the Midwest to refineries, International Business Times reported.
Recently, Jack Gerard, CEO of the American Petroleum Institute, released a statement encouraging President Barack Obama to approve the Keystone XL pipeline, which is slated to transport Canadian oil to be refined at American facilities.
"Refining more Canadian oil at American refineries should be a no brainer," Gerard said. "It will help displace oil from unstable parts of the world and enhance our national security. Americans overwhelmingly support building the Keystone XL pipeline."
Faced with the delay in the pipeline network, John Felmy, chief economist at the API, told International Business Times that he thinks railroading fits in with the organization's shipping goals. |
<filename>lib/Runtime/Library/ArgumentsObject.cpp
//-------------------------------------------------------------------------------------------------------
// Copyright (C) Microsoft. All rights reserved.
// Licensed under the MIT license. See LICENSE.txt file in the project root for full license information.
//-------------------------------------------------------------------------------------------------------
#include "RuntimeLibraryPch.h"
#include "Library/ArgumentsObjectEnumerator.h"
namespace Js
{
BOOL ArgumentsObject::GetEnumerator(JavascriptStaticEnumerator * enumerator, EnumeratorFlags flags, ScriptContext* requestContext, ForInCache * forInCache)
{
return GetEnumeratorWithPrefix(
RecyclerNew(GetScriptContext()->GetRecycler(), ArgumentsObjectPrefixEnumerator, this, flags, requestContext),
enumerator, flags, requestContext, forInCache);
}
BOOL ArgumentsObject::GetDiagValueString(StringBuilder<ArenaAllocator>* stringBuilder, ScriptContext* requestContext)
{
stringBuilder->AppendCppLiteral(_u("{...}"));
return TRUE;
}
BOOL ArgumentsObject::GetDiagTypeString(StringBuilder<ArenaAllocator>* stringBuilder, ScriptContext* requestContext)
{
stringBuilder->AppendCppLiteral(_u("Object, (Arguments)"));
return TRUE;
}
Var ArgumentsObject::GetCaller(ScriptContext * scriptContext)
{
JavascriptStackWalker walker(scriptContext);
if (!this->AdvanceWalkerToArgsFrame(&walker))
{
return scriptContext->GetLibrary()->GetNull();
}
return ArgumentsObject::GetCaller(scriptContext, &walker, false);
}
Var ArgumentsObject::GetCaller(ScriptContext * scriptContext, JavascriptStackWalker *walker, bool skipGlobal)
{
// The arguments.caller property is equivalent to callee.caller.arguments - that is, it's the
// caller's arguments object (if any). Just fetch the caller and compute its arguments.
JavascriptFunction* funcCaller = nullptr;
while (walker->GetCaller(&funcCaller))
{
if (walker->IsCallerGlobalFunction())
{
// Caller is global/eval. If we're in IE9 mode, and the caller is eval,
// keep looking. Otherwise, caller is null.
if (skipGlobal || walker->IsEvalCaller())
{
continue;
}
funcCaller = nullptr;
}
break;
}
if (funcCaller == nullptr || JavascriptOperators::GetTypeId(funcCaller) == TypeIds_Null)
{
return scriptContext->GetLibrary()->GetNull();
}
AssertMsg(JavascriptOperators::GetTypeId(funcCaller) == TypeIds_Function, "non function caller");
CallInfo const *callInfo = walker->GetCallInfo();
uint32 paramCount = callInfo->Count;
CallFlags flags = callInfo->Flags;
if (paramCount == 0 || (flags & CallFlags_Eval))
{
// The caller is the "global function" or eval, so we return "null".
return scriptContext->GetLibrary()->GetNull();
}
if (!walker->GetCurrentFunction()->IsScriptFunction())
{
            // Builtin functions do not have an arguments object - return null.
return scriptContext->GetLibrary()->GetNull();
}
        // Create a new arguments object, every time it is requested, with the actual argument values.
Var args = nullptr;
args = JavascriptOperators::LoadHeapArguments(
funcCaller,
paramCount - 1,
walker->GetJavascriptArgs(),
scriptContext->GetLibrary()->GetNull(),
scriptContext->GetLibrary()->GetNull(),
scriptContext,
/* formalsAreLetDecls */ false);
return args;
}
bool ArgumentsObject::Is(Var aValue)
{
return JavascriptOperators::GetTypeId(aValue) == TypeIds_Arguments;
}
HeapArgumentsObject::HeapArgumentsObject(DynamicType * type) : ArgumentsObject(type), frameObject(nullptr), formalCount(0),
numOfArguments(0), callerDeleted(false), deletedArgs(nullptr)
{
}
HeapArgumentsObject::HeapArgumentsObject(Recycler *recycler, ActivationObject* obj, uint32 formalCount, DynamicType * type)
: ArgumentsObject(type), frameObject(obj), formalCount(formalCount), numOfArguments(0), callerDeleted(false), deletedArgs(nullptr)
{
}
void HeapArgumentsObject::SetNumberOfArguments(uint32 len)
{
numOfArguments = len;
}
uint32 HeapArgumentsObject::GetNumberOfArguments() const
{
return numOfArguments;
}
HeapArgumentsObject* HeapArgumentsObject::As(Var aValue)
{
if (ArgumentsObject::Is(aValue) && static_cast<ArgumentsObject*>(RecyclableObject::FromVar(aValue))->GetHeapArguments() == aValue)
{
return static_cast<HeapArgumentsObject*>(RecyclableObject::FromVar(aValue));
}
return NULL;
}
BOOL HeapArgumentsObject::AdvanceWalkerToArgsFrame(JavascriptStackWalker *walker)
{
// Walk until we find this arguments object on the frame.
// Note that each frame may have a HeapArgumentsObject
// associated with it. Look for the HeapArgumentsObject.
while (walker->Walk())
{
if (walker->IsJavascriptFrame() && walker->GetCurrentFunction()->IsScriptFunction())
{
Var args = walker->GetPermanentArguments();
if (args == this)
{
return TRUE;
}
}
}
return FALSE;
}
//
// Get the next valid formal arg index held in this object.
//
uint32 HeapArgumentsObject::GetNextFormalArgIndex(uint32 index, BOOL enumNonEnumerable, PropertyAttributes* attributes) const
{
if (attributes != nullptr)
{
*attributes = PropertyEnumerable;
}
if ( ++index < formalCount )
{
// None of the arguments deleted
if ( deletedArgs == nullptr )
{
return index;
}
while ( index < formalCount )
{
if (!this->deletedArgs->Test(index))
{
return index;
}
index++;
}
}
return JavascriptArray::InvalidIndex;
}
BOOL HeapArgumentsObject::HasItemAt(uint32 index)
{
// If this arg index is bound to a named formal argument, get it from the local frame.
// If not, report this fact to the caller, which will defer to the normal get-value-by-index means.
if (IsFormalArgument(index) && (this->deletedArgs == nullptr || !this->deletedArgs->Test(index)) )
{
return true;
}
return false;
}
BOOL HeapArgumentsObject::GetItemAt(uint32 index, Var* value, ScriptContext* requestContext)
{
// If this arg index is bound to a named formal argument, get it from the local frame.
// If not, report this fact to the caller, which will defer to the normal get-value-by-index means.
if (HasItemAt(index))
{
*value = this->frameObject->GetSlot(index);
return true;
}
return false;
}
BOOL HeapArgumentsObject::SetItemAt(uint32 index, Var value)
{
// If this arg index is bound to a named formal argument, set it in the local frame.
// If not, report this fact to the caller, which will defer to the normal set-value-by-index means.
if (HasItemAt(index))
{
this->frameObject->SetSlot(SetSlotArguments(Constants::NoProperty, index, value));
return true;
}
return false;
}
BOOL HeapArgumentsObject::DeleteItemAt(uint32 index)
{
if (index < formalCount)
{
if (this->deletedArgs == nullptr)
{
Recycler *recycler = GetScriptContext()->GetRecycler();
deletedArgs = RecyclerNew(recycler, BVSparse<Recycler>, recycler);
}
if (!this->deletedArgs->Test(index))
{
this->deletedArgs->Set(index);
return true;
}
}
return false;
}
BOOL HeapArgumentsObject::IsFormalArgument(uint32 index)
{
return index < this->numOfArguments && index < formalCount;
}
BOOL HeapArgumentsObject::IsFormalArgument(PropertyId propertyId)
{
uint32 index;
return IsFormalArgument(propertyId, &index);
}
BOOL HeapArgumentsObject::IsFormalArgument(PropertyId propertyId, uint32* pIndex)
{
return
this->GetScriptContext()->IsNumericPropertyId(propertyId, pIndex) &&
IsFormalArgument(*pIndex);
}
BOOL HeapArgumentsObject::IsArgumentDeleted(uint32 index) const
{
return this->deletedArgs != NULL && this->deletedArgs->Test(index);
}
#if ENABLE_TTD
void HeapArgumentsObject::MarkVisitKindSpecificPtrs(TTD::SnapshotExtractor* extractor)
{
if(this->frameObject != nullptr)
{
extractor->MarkVisitVar(this->frameObject);
}
}
TTD::NSSnapObjects::SnapObjectType HeapArgumentsObject::GetSnapTag_TTD() const
{
return TTD::NSSnapObjects::SnapObjectType::SnapHeapArgumentsObject;
}
void HeapArgumentsObject::ExtractSnapObjectDataInto(TTD::NSSnapObjects::SnapObject* objData, TTD::SlabAllocator& alloc)
{
this->ExtractSnapObjectDataInto_Helper<TTD::NSSnapObjects::SnapObjectType::SnapHeapArgumentsObject>(objData, alloc);
}
template <TTD::NSSnapObjects::SnapObjectType argsKind>
void HeapArgumentsObject::ExtractSnapObjectDataInto_Helper(TTD::NSSnapObjects::SnapObject* objData, TTD::SlabAllocator& alloc)
{
TTD::NSSnapObjects::SnapHeapArgumentsInfo* argsInfo = alloc.SlabAllocateStruct<TTD::NSSnapObjects::SnapHeapArgumentsInfo>();
AssertMsg(this->callerDeleted == 0, "This never seems to be set but I want to assert just to be safe.");
argsInfo->NumOfArguments = this->numOfArguments;
argsInfo->FormalCount = this->formalCount;
uint32 depOnCount = 0;
TTD_PTR_ID* depOnArray = nullptr;
argsInfo->IsFrameNullPtr = false;
argsInfo->IsFrameJsNull = false;
argsInfo->FrameObject = TTD_INVALID_PTR_ID;
if(this->frameObject == nullptr)
{
argsInfo->IsFrameNullPtr = true;
}
else if(Js::JavascriptOperators::GetTypeId(this->frameObject) == TypeIds_Null)
{
argsInfo->IsFrameJsNull = true;
}
else
{
argsInfo->FrameObject = TTD_CONVERT_VAR_TO_PTR_ID(this->frameObject);
//Primitive kinds always inflated first so we only need to deal with complex kinds as depends on
if(TTD::JsSupport::IsVarComplexKind(this->frameObject))
{
depOnCount = 1;
depOnArray = alloc.SlabAllocateArray<TTD_PTR_ID>(depOnCount);
depOnArray[0] = argsInfo->FrameObject;
}
}
argsInfo->DeletedArgFlags = (this->formalCount != 0) ? alloc.SlabAllocateArrayZ<byte>(argsInfo->FormalCount) : nullptr;
if(this->deletedArgs != nullptr)
{
for(uint32 i = 0; i < this->formalCount; ++i)
{
if(this->deletedArgs->Test(i))
{
argsInfo->DeletedArgFlags[i] = true;
}
}
}
if(depOnCount == 0)
{
TTD::NSSnapObjects::StdExtractSetKindSpecificInfo<TTD::NSSnapObjects::SnapHeapArgumentsInfo*, argsKind>(objData, argsInfo);
}
else
{
TTD::NSSnapObjects::StdExtractSetKindSpecificInfo<TTD::NSSnapObjects::SnapHeapArgumentsInfo*, argsKind>(objData, argsInfo, alloc, depOnCount, depOnArray);
}
}
ES5HeapArgumentsObject* HeapArgumentsObject::ConvertToES5HeapArgumentsObject_TTD()
{
Assert(VirtualTableInfo<HeapArgumentsObject>::HasVirtualTable(this));
VirtualTableInfo<ES5HeapArgumentsObject>::SetVirtualTable(this);
return static_cast<ES5HeapArgumentsObject*>(this);
}
#endif
ES5HeapArgumentsObject* HeapArgumentsObject::ConvertToUnmappedArgumentsObject(bool overwriteArgsUsingFrameObject)
{
ES5HeapArgumentsObject* es5ArgsObj = ConvertToES5HeapArgumentsObject(overwriteArgsUsingFrameObject);
for (uint i=0; i<this->formalCount && i<numOfArguments; ++i)
{
es5ArgsObj->DisconnectFormalFromNamedArgument(i);
}
return es5ArgsObj;
}
ES5HeapArgumentsObject* HeapArgumentsObject::ConvertToES5HeapArgumentsObject(bool overwriteArgsUsingFrameObject)
{
if (!CrossSite::IsCrossSiteObjectTyped(this))
{
Assert(VirtualTableInfo<HeapArgumentsObject>::HasVirtualTable(this));
VirtualTableInfo<ES5HeapArgumentsObject>::SetVirtualTable(this);
}
else
{
Assert(VirtualTableInfo<CrossSiteObject<HeapArgumentsObject>>::HasVirtualTable(this));
VirtualTableInfo<CrossSiteObject<ES5HeapArgumentsObject>>::SetVirtualTable(this);
}
ES5HeapArgumentsObject* es5This = static_cast<ES5HeapArgumentsObject*>(this);
if (overwriteArgsUsingFrameObject)
{
// Copy existing items to the array so that they are there before the user can call Object.preventExtensions,
// after which we can no longer add new items to the objectArray.
for (uint32 i = 0; i < this->formalCount && i < this->numOfArguments; ++i)
{
AssertMsg(this->IsFormalArgument(i), "this->IsFormalArgument(i)");
if (!this->IsArgumentDeleted(i))
{
// At this time the value doesn't matter, use one from slots.
this->SetObjectArrayItem(i, this->frameObject->GetSlot(i), PropertyOperation_None);
}
}
}
return es5This;
}
BOOL HeapArgumentsObject::HasProperty(PropertyId id)
{
ScriptContext *scriptContext = GetScriptContext();
// Try to get a numbered property that maps to an actual argument.
uint32 index;
if (scriptContext->IsNumericPropertyId(id, &index) && index < this->HeapArgumentsObject::GetNumberOfArguments())
{
return HeapArgumentsObject::HasItem(index);
}
return DynamicObject::HasProperty(id);
}
BOOL HeapArgumentsObject::GetProperty(Var originalInstance, PropertyId propertyId, Var* value, PropertyValueInfo* info, ScriptContext* requestContext)
{
ScriptContext *scriptContext = GetScriptContext();
// Try to get a numbered property that maps to an actual argument.
uint32 index;
if (scriptContext->IsNumericPropertyId(propertyId, &index) && index < this->HeapArgumentsObject::GetNumberOfArguments())
{
if (this->GetItemAt(index, value, requestContext))
{
return true;
}
}
if (DynamicObject::GetProperty(originalInstance, propertyId, value, info, requestContext))
{
// Property has been pre-set and not deleted. Use it.
return true;
}
*value = requestContext->GetMissingPropertyResult();
return false;
}
BOOL HeapArgumentsObject::GetProperty(Var originalInstance, JavascriptString* propertyNameString, Var* value, PropertyValueInfo* info, ScriptContext* requestContext)
{
AssertMsg(!PropertyRecord::IsPropertyNameNumeric(propertyNameString->GetString(), propertyNameString->GetLength()),
"Numeric property names should have been converted to uint or PropertyRecord*");
if (DynamicObject::GetProperty(originalInstance, propertyNameString, value, info, requestContext))
{
// Property has been pre-set and not deleted. Use it.
return true;
}
*value = requestContext->GetMissingPropertyResult();
return false;
}
BOOL HeapArgumentsObject::GetPropertyReference(Var originalInstance, PropertyId id, Var* value, PropertyValueInfo* info, ScriptContext* requestContext)
{
return HeapArgumentsObject::GetProperty(originalInstance, id, value, info, requestContext);
}
BOOL HeapArgumentsObject::SetProperty(PropertyId propertyId, Var value, PropertyOperationFlags flags, PropertyValueInfo* info)
{
// Try to set a numbered property that maps to an actual argument.
ScriptContext *scriptContext = GetScriptContext();
uint32 index;
if (scriptContext->IsNumericPropertyId(propertyId, &index) && index < this->HeapArgumentsObject::GetNumberOfArguments())
{
if (this->SetItemAt(index, value))
{
return true;
}
}
// TODO: In strict mode, "callee" and "caller" cannot be set.
// length is also set in the dynamic object
return DynamicObject::SetProperty(propertyId, value, flags, info);
}
BOOL HeapArgumentsObject::SetProperty(JavascriptString* propertyNameString, Var value, PropertyOperationFlags flags, PropertyValueInfo* info)
{
AssertMsg(!PropertyRecord::IsPropertyNameNumeric(propertyNameString->GetSz(), propertyNameString->GetLength()),
"Numeric property names should have been converted to uint or PropertyRecord*");
// TODO: In strict mode, "callee" and "caller" cannot be set.
// length is also set in the dynamic object
return DynamicObject::SetProperty(propertyNameString, value, flags, info);
}
BOOL HeapArgumentsObject::HasItem(uint32 index)
{
if (this->HasItemAt(index))
{
return true;
}
return DynamicObject::HasItem(index);
}
BOOL HeapArgumentsObject::GetItem(Var originalInstance, uint32 index, Var* value, ScriptContext* requestContext)
{
if (this->GetItemAt(index, value, requestContext))
{
return true;
}
return DynamicObject::GetItem(originalInstance, index, value, requestContext);
}
BOOL HeapArgumentsObject::GetItemReference(Var originalInstance, uint32 index, Var* value, ScriptContext* requestContext)
{
return HeapArgumentsObject::GetItem(originalInstance, index, value, requestContext);
}
BOOL HeapArgumentsObject::SetItem(uint32 index, Var value, PropertyOperationFlags flags)
{
if (this->SetItemAt(index, value))
{
return true;
}
return DynamicObject::SetItem(index, value, flags);
}
BOOL HeapArgumentsObject::DeleteItem(uint32 index, PropertyOperationFlags flags)
{
if (this->DeleteItemAt(index))
{
return true;
}
return DynamicObject::DeleteItem(index, flags);
}
BOOL HeapArgumentsObject::SetConfigurable(PropertyId propertyId, BOOL value)
{
// Try to set a numbered property that maps to an actual argument.
uint32 index;
if (this->IsFormalArgument(propertyId, &index))
{
// From now on, make sure that frame object is ES5HeapArgumentsObject.
return this->ConvertToES5HeapArgumentsObject()->SetConfigurableForFormal(index, propertyId, value);
}
// Use 'this' dynamic object.
        // This will use the type handler and convert its objectArray to an ES5Array if it is not already converted.
return __super::SetConfigurable(propertyId, value);
}
BOOL HeapArgumentsObject::SetEnumerable(PropertyId propertyId, BOOL value)
{
uint32 index;
if (this->IsFormalArgument(propertyId, &index))
{
return this->ConvertToES5HeapArgumentsObject()->SetEnumerableForFormal(index, propertyId, value);
}
return __super::SetEnumerable(propertyId, value);
}
BOOL HeapArgumentsObject::SetWritable(PropertyId propertyId, BOOL value)
{
uint32 index;
if (this->IsFormalArgument(propertyId, &index))
{
return this->ConvertToES5HeapArgumentsObject()->SetWritableForFormal(index, propertyId, value);
}
return __super::SetWritable(propertyId, value);
}
BOOL HeapArgumentsObject::SetAccessors(PropertyId propertyId, Var getter, Var setter, PropertyOperationFlags flags)
{
uint32 index;
if (this->IsFormalArgument(propertyId, &index))
{
return this->ConvertToES5HeapArgumentsObject()->SetAccessorsForFormal(index, propertyId, getter, setter, flags);
}
return __super::SetAccessors(propertyId, getter, setter, flags);
}
BOOL HeapArgumentsObject::SetPropertyWithAttributes(PropertyId propertyId, Var value, PropertyAttributes attributes, PropertyValueInfo* info, PropertyOperationFlags flags, SideEffects possibleSideEffects)
{
// This is called by defineProperty in order to change the value in objectArray.
        // We have to intercept this call because, in the case when we are still connected,
        // the objectArray item is not used and we need to update the slot value (which is actually used) instead.
uint32 index;
if (this->IsFormalArgument(propertyId, &index))
{
return this->ConvertToES5HeapArgumentsObject()->SetPropertyWithAttributesForFormal(
index, propertyId, value, attributes, info, flags, possibleSideEffects);
}
return __super::SetPropertyWithAttributes(propertyId, value, attributes, info, flags, possibleSideEffects);
}
// This disables adding new properties to the object.
BOOL HeapArgumentsObject::PreventExtensions()
{
        // It's possible that after a call to preventExtensions the user can still change the attributes;
        // in that case, if we don't convert to the ES5 version now, we would not be able to add items to the objectArray later,
        // which would prevent those attributes from being changed.
// So, convert to ES5 now which will make sure the items are there.
return this->ConvertToES5HeapArgumentsObject()->PreventExtensions();
}
// This is equivalent to .preventExtensions semantics with addition of setting configurable to false for all properties.
BOOL HeapArgumentsObject::Seal()
{
// Same idea as with PreventExtensions: we have to make sure that items in objectArray for formals
// are there before seal, otherwise we will not be able to add them later.
return this->ConvertToES5HeapArgumentsObject()->Seal();
}
// This is equivalent to .seal semantics with addition of setting writable to false for all properties.
BOOL HeapArgumentsObject::Freeze()
{
// Same idea as with PreventExtensions: we have to make sure that items in objectArray for formals
// are there before seal, otherwise we will not be able to add them later.
return this->ConvertToES5HeapArgumentsObject()->Freeze();
}
//---------------------- ES5HeapArgumentsObject -------------------------------
BOOL ES5HeapArgumentsObject::SetConfigurable(PropertyId propertyId, BOOL value)
{
uint32 index;
if (this->IsFormalArgument(propertyId, &index))
{
return this->SetConfigurableForFormal(index, propertyId, value);
}
return this->DynamicObject::SetConfigurable(propertyId, value);
}
BOOL ES5HeapArgumentsObject::SetEnumerable(PropertyId propertyId, BOOL value)
{
uint32 index;
if (this->IsFormalArgument(propertyId, &index))
{
return this->SetEnumerableForFormal(index, propertyId, value);
}
return this->DynamicObject::SetEnumerable(propertyId, value);
}
BOOL ES5HeapArgumentsObject::SetWritable(PropertyId propertyId, BOOL value)
{
uint32 index;
if (this->IsFormalArgument(propertyId, &index))
{
return this->SetWritableForFormal(index, propertyId, value);
}
return this->DynamicObject::SetWritable(propertyId, value);
}
BOOL ES5HeapArgumentsObject::SetAccessors(PropertyId propertyId, Var getter, Var setter, PropertyOperationFlags flags)
{
uint32 index;
if (this->IsFormalArgument(propertyId, &index))
{
return SetAccessorsForFormal(index, propertyId, getter, setter);
}
return this->DynamicObject::SetAccessors(propertyId, getter, setter, flags);
}
BOOL ES5HeapArgumentsObject::SetPropertyWithAttributes(PropertyId propertyId, Var value, PropertyAttributes attributes, PropertyValueInfo* info, PropertyOperationFlags flags, SideEffects possibleSideEffects)
{
// This is called by defineProperty in order to change the value in objectArray.
        // We have to intercept this call because, in the case when we are still connected,
        // the objectArray item is not used and we need to update the slot value (which is actually used) instead.
uint32 index;
if (this->IsFormalArgument(propertyId, &index))
{
return this->SetPropertyWithAttributesForFormal(index, propertyId, value, attributes, info, flags, possibleSideEffects);
}
return this->DynamicObject::SetPropertyWithAttributes(propertyId, value, attributes, info, flags, possibleSideEffects);
}
BOOL ES5HeapArgumentsObject::GetEnumerator(JavascriptStaticEnumerator * enumerator, EnumeratorFlags flags, ScriptContext* requestContext, ForInCache * forInCache)
{
ES5ArgumentsObjectEnumerator * es5HeapArgumentsObjectEnumerator = ES5ArgumentsObjectEnumerator::New(this, flags, requestContext, forInCache);
if (es5HeapArgumentsObjectEnumerator == nullptr)
{
return false;
}
return enumerator->Initialize(es5HeapArgumentsObjectEnumerator, nullptr, nullptr, flags, requestContext, nullptr);
}
BOOL ES5HeapArgumentsObject::PreventExtensions()
{
return this->DynamicObject::PreventExtensions();
}
BOOL ES5HeapArgumentsObject::Seal()
{
return this->DynamicObject::Seal();
}
BOOL ES5HeapArgumentsObject::Freeze()
{
return this->DynamicObject::Freeze();
}
BOOL ES5HeapArgumentsObject::GetItemAt(uint32 index, Var* value, ScriptContext* requestContext)
{
return this->IsFormalDisconnectedFromNamedArgument(index) ?
this->DynamicObject::GetItem(this, index, value, requestContext) :
__super::GetItemAt(index, value, requestContext);
}
BOOL ES5HeapArgumentsObject::SetItemAt(uint32 index, Var value)
{
return this->IsFormalDisconnectedFromNamedArgument(index) ?
this->DynamicObject::SetItem(index, value, PropertyOperation_None) :
__super::SetItemAt(index, value);
}
BOOL ES5HeapArgumentsObject::DeleteItemAt(uint32 index)
{
BOOL result = __super::DeleteItemAt(index);
if (result && IsFormalArgument(index))
{
AssertMsg(this->IsFormalDisconnectedFromNamedArgument(index), "__super::DeleteItemAt must perform the disconnect.");
            // Make sure that the objectArray does not still contain the item.
if (this->HasObjectArrayItem(index))
{
this->DeleteObjectArrayItem(index, PropertyOperation_None);
}
}
return result;
}
//
// Get the next valid formal arg index held in this object.
//
uint32 ES5HeapArgumentsObject::GetNextFormalArgIndex(uint32 index, BOOL enumNonEnumerable, PropertyAttributes* attributes) const
{
return GetNextFormalArgIndexHelper(index, enumNonEnumerable, attributes);
}
uint32 ES5HeapArgumentsObject::GetNextFormalArgIndexHelper(uint32 index, BOOL enumNonEnumerable, PropertyAttributes* attributes) const
{
// Formals:
// - deleted => not in objectArray && not connected -- do not enum, do not advance.
// - connected, if in objectArray -- if (enumerable) enum it, advance objectEnumerator.
        // - disconnected => in objectArray -- if (enumerable) enum it, advance objectEnumerator.
// We use GetFormalCount and IsEnumerableByIndex which don't change the object
// but are not declared as const, so do a const cast.
ES5HeapArgumentsObject* mutableThis = const_cast<ES5HeapArgumentsObject*>(this);
uint32 formalCount = this->GetFormalCount();
while (++index < formalCount)
{
bool isDeleted = mutableThis->IsFormalDisconnectedFromNamedArgument(index) &&
!mutableThis->HasObjectArrayItem(index);
if (!isDeleted)
{
BOOL isEnumerable = mutableThis->IsEnumerableByIndex(index);
if (enumNonEnumerable || isEnumerable)
{
if (attributes != nullptr && isEnumerable)
{
*attributes = PropertyEnumerable;
}
return index;
}
}
}
return JavascriptArray::InvalidIndex;
}
// Disconnects indexed argument from named argument for frame object property.
// Remove association from them map. From now on (or still) for this argument,
// named argument's value is no longer associated with arguments[] item.
void ES5HeapArgumentsObject::DisconnectFormalFromNamedArgument(uint32 index)
{
        AssertMsg(this->IsFormalArgument(index), "DisconnectFormalFromNamedArgument: called for non-formal");
if (!IsFormalDisconnectedFromNamedArgument(index))
{
__super::DeleteItemAt(index);
}
}
BOOL ES5HeapArgumentsObject::IsFormalDisconnectedFromNamedArgument(uint32 index)
{
return this->IsArgumentDeleted(index);
}
// Wrapper over IsEnumerable by uint32 index.
BOOL ES5HeapArgumentsObject::IsEnumerableByIndex(uint32 index)
{
ScriptContext* scriptContext = this->GetScriptContext();
Var indexNumber = JavascriptNumber::New(index, scriptContext);
JavascriptString* indexPropertyName = JavascriptConversion::ToString(indexNumber, scriptContext);
PropertyRecord const * propertyRecord;
scriptContext->GetOrAddPropertyRecord(indexPropertyName->GetString(), indexPropertyName->GetLength(), &propertyRecord);
return this->IsEnumerable(propertyRecord->GetPropertyId());
}
// Helper method, just to avoid code duplication.
BOOL ES5HeapArgumentsObject::SetConfigurableForFormal(uint32 index, PropertyId propertyId, BOOL value)
{
        AssertMsg(this->IsFormalArgument(index), "SetConfigurableForFormal: called for non-formal");
// In order for attributes to work, the item must exist. Make sure that's the case.
        // If we are still connected, we have to copy the value from frameObject->slots; otherwise the value doesn't matter.
AutoObjectArrayItemExistsValidator autoItemAddRelease(this, index);
BOOL result = this->DynamicObject::SetConfigurable(propertyId, value);
autoItemAddRelease.m_isReleaseItemNeeded = !result;
return result;
}
// Helper method, just to avoid code duplication.
BOOL ES5HeapArgumentsObject::SetEnumerableForFormal(uint32 index, PropertyId propertyId, BOOL value)
{
        AssertMsg(this->IsFormalArgument(index), "SetEnumerableForFormal: called for non-formal");
AutoObjectArrayItemExistsValidator autoItemAddRelease(this, index);
BOOL result = this->DynamicObject::SetEnumerable(propertyId, value);
autoItemAddRelease.m_isReleaseItemNeeded = !result;
return result;
}
// Helper method, just to avoid code duplication.
BOOL ES5HeapArgumentsObject::SetWritableForFormal(uint32 index, PropertyId propertyId, BOOL value)
{
        AssertMsg(this->IsFormalArgument(index), "SetWritableForFormal: called for non-formal");
AutoObjectArrayItemExistsValidator autoItemAddRelease(this, index);
BOOL isDisconnected = IsFormalDisconnectedFromNamedArgument(index);
if (!isDisconnected && !value)
{
            // Setting writable to false causes a disconnect.
            // It would be too late to copy the value after setting writable = false, as we would no longer be able to.
            // Since we are still connected, it does not matter what the value is, so it's safe to copy it here (even if SetWritable fails).
this->SetObjectArrayItem(index, this->frameObject->GetSlot(index), PropertyOperation_None);
}
BOOL result = this->DynamicObject::SetWritable(propertyId, value); // Note: this will convert objectArray to ES5Array.
if (result && !value && !isDisconnected)
{
this->DisconnectFormalFromNamedArgument(index);
}
autoItemAddRelease.m_isReleaseItemNeeded = !result;
return result;
}
    // Helper method for SetAccessors, just to avoid code duplication.
BOOL ES5HeapArgumentsObject::SetAccessorsForFormal(uint32 index, PropertyId propertyId, Var getter, Var setter, PropertyOperationFlags flags)
{
AssertMsg(this->IsFormalArgument(index), "SetAccessorsForFormal: called for non-formal");
AutoObjectArrayItemExistsValidator autoItemAddRelease(this, index);
BOOL result = this->DynamicObject::SetAccessors(propertyId, getter, setter, flags);
if (result)
{
this->DisconnectFormalFromNamedArgument(index);
}
autoItemAddRelease.m_isReleaseItemNeeded = !result;
return result;
}
// Helper method for SetPropertyWithAttributes, just to avoid code duplication.
BOOL ES5HeapArgumentsObject::SetPropertyWithAttributesForFormal(uint32 index, PropertyId propertyId, Var value, PropertyAttributes attributes, PropertyValueInfo* info, PropertyOperationFlags flags, SideEffects possibleSideEffects)
{
AssertMsg(this->IsFormalArgument(propertyId), "SetPropertyWithAttributesForFormal: called for non-formal");
BOOL result = this->DynamicObject::SetPropertyWithAttributes(propertyId, value, attributes, info, flags, possibleSideEffects);
if (result)
{
if ((attributes & PropertyWritable) == 0)
{
// Setting writable to false should cause disconnect.
this->DisconnectFormalFromNamedArgument(index);
}
if (!this->IsFormalDisconnectedFromNamedArgument(index))
{
                // Update the value (still connected with the named param), as the above call could have changed it.
BOOL tempResult = this->SetItemAt(index, value); // Update the value in frameObject.
AssertMsg(tempResult, "this->SetItem(index, value)");
}
}
return result;
}
//---------------------- ES5HeapArgumentsObject::AutoObjectArrayItemExistsValidator -------------------------------
ES5HeapArgumentsObject::AutoObjectArrayItemExistsValidator::AutoObjectArrayItemExistsValidator(ES5HeapArgumentsObject* args, uint32 index)
: m_args(args), m_index(index), m_isReleaseItemNeeded(false)
{
AssertMsg(args, "args");
AssertMsg(args->IsFormalArgument(index), "AutoObjectArrayItemExistsValidator: called for non-formal");
if (!args->HasObjectArrayItem(index))
{
m_isReleaseItemNeeded = args->SetObjectArrayItem(index, args->frameObject->GetSlot(index), PropertyOperation_None) != FALSE;
}
}
ES5HeapArgumentsObject::AutoObjectArrayItemExistsValidator::~AutoObjectArrayItemExistsValidator()
{
if (m_isReleaseItemNeeded)
{
m_args->DeleteObjectArrayItem(m_index, PropertyOperation_None);
}
}
#if ENABLE_TTD
TTD::NSSnapObjects::SnapObjectType ES5HeapArgumentsObject::GetSnapTag_TTD() const
{
return TTD::NSSnapObjects::SnapObjectType::SnapES5HeapArgumentsObject;
}
void ES5HeapArgumentsObject::ExtractSnapObjectDataInto(TTD::NSSnapObjects::SnapObject* objData, TTD::SlabAllocator& alloc)
{
this->ExtractSnapObjectDataInto_Helper<TTD::NSSnapObjects::SnapObjectType::SnapES5HeapArgumentsObject>(objData, alloc);
}
#endif
}
|
All in all, Marcin Gortat is a pretty good NBA center. He's big, he's strong, he's got pretty nimble feet for his size. He sets devastating screens, he's an outstanding finisher rolling to the hoop, he's generally OK at being in the right place defensively. He also happens to be a ludicrously enormous and quite terrifying Eastern European man. Look at him. Look at his scary Meg Mucklebones nose, his big heavy awning of a brow. Look at his arms! Those things are bigger than my kids. They're bigger than my legs! Marcin Gortat's arms are scary as hell.
Still, when people talk about terrifying NBA ogres, ol' Gordo tends not to come up in the discussion. It's all, oh, Nikola Pekovic tears car doors off for scary Montenegrin gangsters, and Ivan Johnson could pull my head off like it was nothing, and Pero Antic literally eats babies and I can't even look at him without sobbing and puking. Gortat, by contrast, I dunno, maybe he comes off docile? Good-natured? Like a self-aware goober just playin' basketball, drivin' an armored Humvee, and havin' a good-ass time?
No more. Marcin Gortat has a mohawk now; now, Marcin Gortat is a fucking nightmare.
Look at that badass hair. No one stands between a mohawked Eastern European giant and the hoop. How will the Wizards ever lose again? They will never lose again. The Wizards literally will never lose again.
Here is some footage of Marcin Gortat operating in the post with his new hair:
[embedded GIF]
Oh man. Try defending that shit, suckers.
[DC Sports Bog]
Top image by Jim Cooke |
package comp
import (
"fmt"
"strings"
)
// Controller sends CtrlMessages to nodes and gets replies from them, and simply uses strings as the rpc protocol.
// This file provides utility functions to build and parse strings based on that protocol.
//var regRemoveSpace = regexp.MustCompile("\\s+")
// Parser states used by WithString.
const (
	stateNormal           = iota // inside an unquoted token
	stateInQuoteBackslash        // inside a double-quoted token, immediately after a backslash
	stateInQuote                 // inside a double-quoted token
	stateNone                    // between tokens
)
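// With packs the given argument strings into a command slice.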
func With(args ...string) []string {
return args
}
// WithString parses a string into substrings separated by SPACE or TAB; double quotes are required if a substring contains
// spaces, and a backslash is required for an escaped double quote, for example:
// cmd arg_a "arg with space", "arg with \" escaped \" quote"
func WithString(args string) (result []string, err error) {
i, start := 0, 0
n := len(args)
s := stateNone
for i < n {
switch args[i] {
case ' ', '\t':
if s == stateNormal {
slice := args[start:i]
result = append(result, slice)
s = stateNone
} else if s == stateInQuoteBackslash {
}
case '\\':
if s == stateNone {
s = stateNormal
start = i
} else if s == stateInQuote {
s = stateInQuoteBackslash
}
case '"':
if s == stateNone {
// start quote
s = stateInQuote
start = i
} else if s == stateInQuote {
// end quote
slice := args[start+1 : i] // remove start quote char
slice = strings.Replace(slice, "\\\"", "\"", -1)
result = append(result, slice)
s = stateNone
} else if s == stateInQuoteBackslash {
// in form of "...\"...", just forward
s = stateInQuote
} else {
err = fmt.Errorf("illegal double quote at index %v", i)
return
}
default:
// any character other than above
if s == stateNone {
start = i
s = stateNormal
}
}
i++
}
switch s {
case stateNormal:
slice := args[start:i]
result = append(result, slice)
case stateInQuote, stateInQuoteBackslash:
err = fmt.Errorf("quote is not closed at the end")
}
return
}
// WithConnect dynamically asks the callee to set the data pipe endpoint
func WithConnect(toSession, toName string) []string {
return With("conn", toSession, toName)
}
// WithOk returns a normal reply
func WithOk(args ...string) (r []string) {
r = append(r, "ok")
r = append(r, args...)
return
}
// WithError returns an abnormal reply
func WithError(args ...string) (r []string) {
r = append(r, "err")
r = append(r, args...)
return
}
|
MOTÖRHEAD was forced to cut its concert in Austin, Texas short last night (Tuesday, September 1) due to continued health issues experienced by the band's frontman, Lemmy Kilmister.
According to GlideMagazine.com, MOTÖRHEAD delivered only three songs — "Damage Case", "Stay Clean" and "We Are Motörhead" — before a "fatigued and winded" Lemmy announced the next track, "Metropolis", and then let out a sigh, telling the crowd, "I can't do it." He then left the stage at Emo's with the rest of his band and returned moments later and apologized to a disappointed but supportive audience. He said: "You are one of the best gigs in America, and I would love to play for you, but I can't… So please accept my apologies. Next time, all right?" The house lights and music then came on, before "concerned" and "slightly annoyed" fans began funneling out.
Fan-filmed video footage of MOTÖRHEAD's entire Austin appearance — shot from the front row — can be seen below.
The latest MOTÖRHEAD walk-off follows a similarly abbreviated performance last week in Salt Lake City, where the band cited the high altitude as the reason behind Lemmy's inability to breathe. A show in Denver, at an even higher altitude, was scrapped the following day before MOTÖRHEAD ever took the stage.
Lemmy, who turned 69 years old in December, in 2013 suffered a haematoma (where blood collects outside of a blood vessel), causing the cancelation of a number of the band's European festival shows. The band has since scrapped a couple of tours and, during the 2013 edition of the Wacken Open Air festival in Germany, abandoned its set midway through so Lemmy could be taken to the hospital.
"I've had some health scares," Lemmy told Kerrang! last month, "and I've had to really cut back on smoking and drinking and whatever. But it is what it is. I've had a good life, a good run. I do what I do still. I'm sure I'll die on the road, one way or another."
Asked if he is afraid of death, Lemmy said, "No."
The rocker told Classic Rock he didn't expect to still be here at 30.
"I don't do regrets," he said. "Regrets are pointless. It's too late for regrets. You've already done it, haven't you? You've lived your life. No point wishing you could change it.
"There are a couple of things I might have done differently, but nothing major; nothing that would have made that much of a difference.
"I'm pretty happy with the way things have turned out. I like to think I've brought a lot of joy to a lot of people all over the world. I'm true to myself and I'm straight with people."
Asked if his illness in 2013 has made him more aware of his own mortality, Lemmy said: "Death is an inevitability, isn't it? You become more aware of that when you get to my age. I don't worry about it. I'm ready for it. When I go, I want to go doing what I do best. If I died tomorrow, I couldn't complain. It's been good." |
Involvement of α2-antiplasmin in dendritic growth of hippocampal neurons
The α2-antiplasmin (α2AP) protein is known as a principal physiological inhibitor of plasmin, but we previously demonstrated that it acts as a regulatory factor for cellular functions independent of plasmin. α2AP is highly expressed in the hippocampus, suggesting a potential role for α2AP in hippocampal neuronal functions. However, the role for α2AP was unclear. This study is the first to investigate the involvement of α2AP in the dendritic growth of hippocampal neurons. The expression of microtubule-associated protein 2, which contributes to neurite initiation and neuronal growth, was lower in the neurons from α2AP−/− mice than in the neurons from α2AP+/+ mice. Exogenous treatment with α2AP enhanced the microtubule-associated protein 2 expression, dendritic growth and filopodia formation in the neurons. This study also elucidated the mechanism underlying the α2AP-induced dendritic growth. Aprotinin, another plasmin inhibitor, had little effect on the dendritic growth of neurons, and α2AP induced its expression in the neurons from plasminogen−/− mice. The activation of p38 MAPK was involved in the α2AP-induced dendritic growth. Therefore, our findings suggest that α2AP induces dendritic growth in hippocampal neurons through p38 MAPK activation, independent of plasmin, providing new insights into the role of α2AP in the CNS. |
The Scotch Metaphysics
Volume 28, Number 2, November 2002, pp. 314-318
GEORGE DAVIE. The Scotch Metaphysics: A Century of Enlightenment in Scotland. London and New York: Routledge, 2001. Pp. xi + 241. ISBN 0415242657, cloth, $115.00.
This is the first book in a series entitled Routledge Studies in Nineteenth-Century Philosophy. Its author is well-known in studies of the Scottish Enlightenment; in fact, Davie could be called the dean of such studies. His previous books are The Democratic Intellect: Scotland and her Universities in the Nineteenth Century; The Crisis of the Democratic Intellect; and two volumes of essays on the Scottish Enlightenment. The text of the present book was written in 1953 but was never published until now and was awarded a D.Litt. degree at the University of Edinburgh, which helps explain some things about its style. The book's title, The Scotch Metaphysics, is a phrase "made by George III in the early days of the French Revolution in response to the attempt made by Dundas to overcome the King's scruples about signing the Catholic (one might say Irish) Emancipation Bill, which the government of William Pitt the younger wanted to pass into law". Davie's use of the phrase |
Particle-based temperature measurement coupled with velocity measurement
This review article focuses on the simultaneous particle-based measurement of velocity and temperature. The thermographic PIV with phosphor particles is included in the scope. However, there are many other dye-doped particles that can be used in these methods, some of which are more suitable for measurements of liquid flow and low temperature flow. This review summarized the methods for fabrication, applications, optical properties, and limitations of temperature sensitive particles, and the extensive coverage of the field provided by this review renders it useful as a base for further experimentation. Dopant sensor molecules are also summarized in this review, making it possible to fabricate functional particles that are suitable for the aims of an individual researcher. These particles are excited by a UV-laser and several kinds of them can be excited using a common green laser. Several approaches to velocity and temperature measurements, the mono-color intensity-based method, the ratiometric method and the lifetime method, are explained. The combined methods are limited by velocity conditions; therefore, decoupling the temperature measurement from the velocity measurement is sometimes reasonable. However, combined measurements were also considered in this review. There are many possible sources of error when using these methods; however, the most important factor is the following capacity of the particles, especially in gaseous flow. The deformation of particle images because of the long exposure inherent in the lifetime-based method was discussed quantitatively. Applications of liquid/gas/multiphase flow measurements were introduced. Micro capsules or microbeads are often used for a liquid flow and a microchannel flow. Many previous works use a phosphor for a high temperature gas flow. Uncertainties are different on a case-by-case basis. Accurate calibration is very important in suppressing errors. The development of easy calibration methods or calibration free methods while retaining high accuracy is an important issue in future work. |
/// Construct, given a boxed closure to produce the final value
///
/// Returns the future and a `finish` closure to set the value when done.
pub fn new_box_fnmut<U: 'static>(
mut f: Box<dyn FnMut(&mut U) -> T>,
) -> (Self, Box<dyn FnMut(&mut U)>) {
let target: Rc<RefCell<Option<T>>> = Default::default();
let t2 = target.clone();
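        // Both the returned future and the `finish` closure keep the value slot alive via this shared Rc<RefCell<...>>.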
let finish: Box<dyn FnMut(&mut U)> = Box::new(move |u| *t2.borrow_mut() = Some(f(u)));
(Future(target), finish)
} |
import contextlib
import threading
import warnings
from pprint import PrettyPrinter
import grpc
from google.protobuf.json_format import MessageToDict
@contextlib.contextmanager
def warn_on_rpc_error_context():
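    """Context manager that converts a grpc.RpcError raised in its body into a warning instead of propagating it."""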
try:
yield
except grpc.RpcError as e:
warnings.warn(str(e))
def warn_on_rpc_error(stream):
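    """Yield events from a gRPC stream, downgrading a grpc.RpcError to a warning."""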
with warn_on_rpc_error_context():
for event in stream:
yield event
def cancel_after(stream, after):
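    """Schedule cancellation of the gRPC stream after `after` seconds and return it wrapped so RPC errors only warn."""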
timer = threading.Timer(after, stream.cancel)
timer.start()
return warn_on_rpc_error(stream)
def pprint_message(message):
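    """Pretty-print a protobuf message as a dict, preserving proto field names."""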
pp = PrettyPrinter()
pp.pprint(MessageToDict(message, preserving_proto_field_name=True))
|
#include "common_token.h"
static int self_fill_and_empty(TestAnalyzer_TokenQueue* me,
size_t Size, int CPushN);
static int self_test(int Size, int ContinuousPushN);
static E_UnitTest self_unit_test = E_UNIT_TEST_VOID;
int
main(int argc, char** argv)
{
int count_n = 0;
hwut_info("TokenQueue: Push and Pop;\n"
"CHOICES: plain, text;");
hwut_if_choice("plain") self_unit_test = E_UNIT_TEST_PLAIN;
hwut_if_choice("text") self_unit_test = E_UNIT_TEST_TEXT;
count_n += self_test(1, 1);
count_n += self_test(2, 2);
count_n += self_test(2, 3);
count_n += self_test(3, 2);
count_n += self_test(3, 3);
count_n += self_test(3, 4);
    printf("<terminated %i>\n", count_n);
    return 0;
}
static int
self_test(int Size, int ContinuousPushN)
{
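    /* Build a token queue of capacity 'Size', exercise it with interleaved pushes and pops, then destruct it. */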
TestAnalyzer_TokenQueue me;
struct TestAnalyzer_tag lexer;
int count_n;
printf("\n---( size: %i; )------------------\n", (int)Size);
printf("\n");
TestAnalyzer_TokenQueue_construct(&me, &lexer, Size);
hwut_verify(me.end - me.begin == (ptrdiff_t)Size);
printf("\n");
count_n = self_fill_and_empty(&me, Size, ContinuousPushN);
printf("\n");
hwut_verify(me.end - me.begin == (ptrdiff_t)Size);
TestAnalyzer_TokenQueue_destruct(&me);
printf("\n");
return count_n;
}
static int
self_fill_and_empty(TestAnalyzer_TokenQueue* me, size_t Size, int CPushN)
{
int push_n = 1, pop_n = 1, total_n = -1;
QUEX_TYPE_TOKEN* token_p = (QUEX_TYPE_TOKEN*)0;
char* example[] = { "adelbert", "berta", "caesar", "dagobert" };
TestAnalyzer_lexatom_t* string;
hwut_verify(TestAnalyzer_TokenQueue_is_empty(me));
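    /* Push 'Size' tokens one by one, popping a token whenever (push_n + 1) is a multiple of CPushN. */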
for(push_n=1, pop_n=0; push_n<=Size; ++push_n) {
switch( self_unit_test ) {
case E_UNIT_TEST_PLAIN:
TestAnalyzer_TokenQueue_push(me, 100 * push_n);
break;
case E_UNIT_TEST_TEXT:
string = (TestAnalyzer_lexatom_t*)example[push_n-1];
TestAnalyzer_TokenQueue_push_text(me, 100 * push_n, string,
&string[strlen((char*)string)+1]);
break;
default:
hwut_verify(false);
}
common_print_push(me, push_n, &me->write_iterator[-1]);
if( (push_n+1) % CPushN == 0 ) {
token_p = TestAnalyzer_TokenQueue_pop(me);
++pop_n;
common_print_pop(me, pop_n, token_p);
}
}
total_n = push_n + common_empty_queue(me, pop_n, Size);
hwut_verify(TestAnalyzer_TokenQueue_is_empty(me));
return total_n;
}
|
Proteome Profiling: Pitfalls and Progress
In this review we examine the current state of analytical methods in proteomics. The conventional methodology using two-dimensional electrophoresis gels and mass spectrometry is discussed, with particular reference to the advantages and shortcomings thereof. Two recently published methods which offer an alternative approach are presented and discussed, with emphasis on how they can provide information not available via two-dimensional gel electrophoresis. These two methods are the isotope-coded affinity tags approach of Gygi et al. and the two-dimensional liquid chromatography-tandem mass spectrometry approach as presented by Link et al. We conclude that both of these new techniques represent significant advances in analytical methodology for proteome analysis. Furthermore, we believe that in the future biological research will continue to be enhanced by the continuation of such developments in proteomic analytical technology.
Why do we analyse proteins?
The long-standing paradigm in biology is that DNA synthesizes RNA, which synthesizes protein. Conventional wisdom states that the blueprint for how to assemble a cell is contained in the genetic code, but it is important to realize that the bricks and mortar used in the building process are predominantly proteins. Thus, proteins are the molecules in cells that are directly responsible for maintenance of correct cellular function, and consequently the viability of the organism that contains the cells. In recent years, the simultaneous study of the whole range of proteins expressed in a cell at any given time has become an area of great interest. This has led to the classification of a new subdiscipline of protein chemistry known as 'proteomics', where a proteome is defined as the protein complement expressed by the genome of an organism or cell type. The tools used in the analysis of proteins, however, still lag behind the analogous tools used in the analysis of DNA and RNA. It is relatively facile to undertake identification and quantification of many different DNA or RNA molecules in a single experiment using an array prepared from a single initial sample. This can be done using such techniques as DNA chips and cDNA microarrays, differential display PCR and serial analysis of gene expression. It is simply not possible to perform the same type of experiments at the protein level using two-dimensional gel electrophoresis (2DE), which is the current widely accepted technology in this area despite the fact that it suffers from several major shortcomings. In discussing analytical methods to be used in proteome analysis, consideration must be given to the fact that the number of proteins expressed at one time in a given cellular system is typically in the thousands or tens of thousands. Any attempt to categorize and identify all of these proteins simultaneously must use methods which are as rapid as possible to enable completion of the project within a reasonable time frame. Thus, an idealized proteomics technology would consist of a combination of the following features: high sensitivity, high throughput, the ability to differentiate differentially modified proteins, and the ability to quantitatively display and analyse all the proteins present in a sample. In this review we will compare and contrast the current technology with two recently developed analytical methods that may, with further development, overcome several of the problems inherent in 2D gel electrophoresis.
The 2D electrophoresis approach
The most commonly used technique in global proteome analysis is 2D electrophoresis using isoelectric focusing/sodium dodecyl sulfate-polyacrylamide gel electrophoresis (IEF/SDS-PAGE). In 2DE, proteins are first separated by isoelectric focusing (IEF) and then further resolved by SDS-PAGE in the second, perpendicular, dimension. Separated proteins are visualized by staining or autoradiography to produce a 2D array that can contain thousands of proteins. The identification of individual proteins from polyacrylamide gels, of one or two dimensions, has traditionally been carried out using co-migration with known proteins, immunoblotting, N-terminal sequencing or internal peptide sequencing. In recent years there has been a fundamental shift in the ways such experiments are performed, principally due to the explosive growth of large-scale genomic databases. The current widely used method relies on excising spots from gels, proteolytically digesting the spots, and then extracting the peptides produced. The final stage involves analysing these peptides by mass spectrometry (MS) or tandem mass spectrometry (MS-MS) and then correlating the mass spectral data derived from the peptides with information contained in databases of protein sequence, genomic sequence or expressed sequence tags (ESTs). It is clear that this type of approach will become even more widely used in the near future as complete genome sequence data becomes available for more and more organisms. Complete genome sequences have already been reported for a number of organisms, among them Haemophilus influenzae, Saccharomyces cerevisiae, Escherichia coli, Caenorhabditis elegans and Drosophila melanogaster. The human genome project is, of course, one of the major driving forces in biomedical research in recent years. At the time of writing, the first reports have just been released that the first draft of the human genome sequence is now complete.
The pros and cons of 2D electrophoresis
The disadvantages of 2D electrophoresis are that it is very time-consuming, essentially non-quantitative, does not work well for hydrophobic proteins, and has a limited dynamic range. Large-format gels typically require at least 24 h to complete, and for practical reasons are often completed over the course of several days. Staining of individual 2DE spots can be measured and compared using scanning densitometry, but there are so many caveats attached to the data that the results are of questionable value. Many staining techniques, such as silver staining, suffer from a limited dynamic range, so that the intensity of more abundant spots is not linearly correlated to that of less abundant spots. Moreover, some types of proteins, especially those that are post-translationally modified, can give qualitatively and quantitatively different staining when compared to similar amounts of other proteins. Hydrophobic proteins, particularly those of high molecular weight, are especially problematic in 2D gels because the presence of SDS is incompatible with successful first-dimension IEF. Thus, most IEF sample buffers solubilize a wide range of proteins by including high concentrations of chaotropic salts, such as urea, and lower levels of mild detergents, such as CHAPS.
It should be noted, however, that significant progress in overcoming this particular limitation has been made in recent years, including the development of new detergents with greater solubilizing power, and the selective application of organic solvents to aid in solubilizing hydrophobic proteins. Several studies have shown that the majority of proteins identified in 2DE are the more abundant and the more long-lived proteins in the cell. In a study of more than 150 proteins identified in 2DE of yeast cells, for example, no proteins were identified with a codon bias value of less than 0.1, an arbitrarily defined cut-off indicating low abundance. In contrast, calculated values indicate that over half of the 6000 genes in yeast have a codon bias index of less than 0.1 and thus are unlikely to be seen in 2DE without prior enrichment. Several techniques have been proposed as generic sample pretreatment strategies to increase the total number of spots that can be visualized in 2DE. These include sequential extraction of a sample with buffers of increasing solubilizing power, which generates fractions on the basis of hydrophobicity, and using narrow-range pH gradients for the first dimension IEF, which expands the resolution in a given range. Despite these disadvantages, 2DE remains the method of choice for displaying proteins as the front end of a proteomics project, for two main reasons: first, because it can be used to visualize a very large number of proteins simultaneously; and second, because it can be used in a differential display format. The ability to study complex biological systems in their entirety rather than as a multitude of individual components makes it far easier to discover the many complex relationships between proteins in functioning cells. This type of experiment, where the aim is to catalogue as many of the expressed proteins as possible and build up a database of expressed proteins, is often referred to as a 'proteome project'. Large-scale proteome characterization projects which have been reported include those of microorganisms such as Saccharomyces cerevisiae, Escherichia coli, Haemophilus influenzae, Mycobacterium tuberculosis, Ochrobactrum anthropi, Salmonella enterica, Spiroplasma melliferum, Synechocystis spp., Dictyostelium discoideum and Rhizobium leguminosarum, and tissues including human liver, human plasma, human fibroblasts, human keratinocytes, human bladder squamous cell carcinomas, mouse kidney and rat serum. Additionally, an ambitious attempt to undertake a complete human proteome project has recently been announced by the same research group responsible for one of the major successful efforts in the human genome project. It remains to be seen whether this effort will be quite so successful. One major issue with establishing any proteome characterization project is defining the proteome in question. A single genome can give rise to an essentially infinite number of qualitatively and quantitatively different proteomes, depending on such variables as the stage of the cell cycle, growth and nutrient conditions, temperature and stress response, pathological conditions and strain differences, to name but a few. Another way of expressing the same problem is that genomes are essentially static, while proteomes are, by their very nature, dynamic and, therefore, a 2DE-based proteome project can only represent a snapshot rather than the whole constantly moving picture.
Although 2DE is not strictly quantitative, as noted above, the presence or absence of one or more spots in one gel when compared to another is readily detectable. Using this approach, the state of a cellular system in response to a particular treatment can be assessed using 2DE of samples of each state, which allows for the simultaneous assessment of the effect of the treatment on many proteins at once, rather than measuring, for example, levels of a single enzyme. This type of differential display experiment can be used to directly visualize the proteins which are affected during, for example, cell differentiation, gene knockouts, changes in growth or nutrient conditions, or treatment of cultured cells with a potential therapeutic drug. Once these protein spots are identified, knowledge of the proteins that are directly affected by a particular treatment can identify the biochemical pathways involved and so be of great value in deciding the direction of future research. Thus, in this implementation, proteome analysis is used as a biological assay, rather than as a database as described above. In summary, despite numerous drawbacks and limitations, 2DE remains a powerful and versatile tool in proteome analysis. It is clear, however, that there is room for improvement in the efficiency of analysis, and this may be achieved by both incremental advances in current methods and the development of new technologies.
Isotope-coded affinity tag peptide labelling
The first of the new methodologies that we feel has the potential to have a great impact on proteome research is known as isotope-coded affinity tag (ICAT) peptide labelling. This is an approach that combines accurate quantification and concurrent sequence identification of the individual proteins in complex mixtures. The method is based on a newly synthesized class of chemical reagents (ICATs) used in combination with tandem mass spectrometry. The ICAT reagent contains a biotin affinity tag and a thiol-specific reactive group, which are joined by a spacer domain which is available in two forms: regular and isotopically heavy, which includes eight deuterium atoms. In brief, the method consists of four steps. First, a reduced protein mixture representing one cell state is derivatized with the isotopically light version of the ICAT reagent, while the corresponding reduced protein mixture representing a second cell state is derivatized with the isotopically heavy version of the ICAT reagent. Second, the labelled samples are combined and proteolytically digested to produce peptide fragments. Third, the tagged cysteine-containing peptide fragments are isolated by avidin affinity chromatography. Finally, the isolated tagged peptides are separated and analysed by microcapillary tandem mass spectrometry, which provides both identification of the peptides by fragmentation in MS-MS mode and relative quantitation of labelled pairs by comparing signal intensities in MS mode. It should be noted that a method based on similar principles, which has the one crucial difference of being based on metabolic labelling of cells and is therefore limited to use with organisms which can be successfully cultured, has also been recently reported. There are several advantages of this approach when compared to 2DE. There is no need to run time-consuming 2DE experiments, and the approach is scaleable so that, in theory, a large enough amount of sample can be used to enable analysis of low-abundance proteins.
The method is based on stable isotope labelling of isolated protein samples, so it does not require the use of metabolic labelling or radioactivity. Most important of all, however, is that it provides accurate relative quantification of each peptide identified. For example, if a protein is present at the same level in the two original samples, the amount of each peptide detected will be the same. If, however, a protein is present at a 10-fold higher level in the sample derivatized with the heavy ICAT reagent, then the amount of heavy ICAT-labelled peptide detected will be 10 times greater than the amount of light ICAT-labelled peptide detected. It should be emphasized that, although quantitation by mass spectrometry is often unreliable, in this case the peptides act as mutual internal standards, since they are chemically identical and differ only by eight neutrons, and thereby eliminate potential problems due to differing ionization efficiencies or other physicochemical properties. There are also several obvious disadvantages to this technique as it is currently presented, but all of them appear to be surmountable in the course of future development. The proteins must, first of all, contain cysteine, which is true for an estimated 92% of yeast proteins, for example, and those cysteines must also be flanked by appropriately spaced protease cleavage sites. Moreover, the ICAT tag is a large moiety when compared to the size of some small peptides and thus may interfere with peptide ionization and can greatly complicate mass spectral interpretation. All of these problems may be overcome by designing different reagents with specificity for other peptide side-chains, using a smaller tag group, and using different proteases. In summary, it is sufficient to say that the published application example, involving the identification and quantification of galactose- and glucose-repressed proteins in yeast harvested under different growth conditions, represents a significant advance in proteome analysis. Not only were the proteins affected by different growth conditions unambiguously identified, but also their relative amounts were accurately quantified in the course of the same experiment. It is to be hoped that further research in this area will yield even more promising data in the future.
Multi-dimensional protein identification technique
The second of the new methodologies that we believe represents a significant step forward in proteome analysis is the use of multidimensional liquid chromatography coupled to tandem mass spectrometry (LC-LC-MS/MS). The LC-LC-MS/MS method, as recently reported for use in the analysis of complex mixtures of peptides, is now commonly known by the acronym MudPIT, for multi-dimensional protein identification technique. This method has been previously reported in various incarnations, involving reversed phase columns coupled to either cation exchange columns or size exclusion columns. However, it was only when the technique was employed with a mixed-bed microcapillary column containing strong cation exchange (SCX) and reversed phase (RPC) resins that the true utility of this method was demonstrated. This chromatographic method contains numerous steps, as outlined below. First, a denatured and reduced protein mixture is digested with trypsin to produce peptide fragments. The mixture is loaded onto a microcapillary column containing SCX resin upstream of RPC resin, eluting directly into a tandem mass spectrometer.
A discrete fraction of the absorbed peptides are displaced from the SCX column onto the RPC column using a step gradient of salt, causing the peptides to be retained on the RPC column, while contaminating salts and buffers are washed through. Peptides are then eluted from the RPC column using an acetonitrile gradient, and analysed by MS/MS. This process is repeated using increasing salt concentration to displace additional fractions from the SCX column. This is applied in an iterative manner, typically involving 10-20 steps, and the MS/MS data from all of the fractions are analysed by database searching and combined to give an overall picture of the protein components present in the initial sample. There are several advantages of the MudPIT technique, beginning with the fact that it once again avoids the need for time-consuming 2DE and can, in fact, be run in a fully automated system. The use of two dimensions for chromatographic separation also greatly increases the number of peptides that can be identified from very complex mixtures. For example, analysis of a total yeast cell lysate identified 749 unique peptides, from 189 unique proteins, in a single MudPIT experiment, which is far more than would be expected from a conventional LC-MS/MS experiment. Perhaps the most important point, however, is that the method has a very wide dynamic range and none of the protein solubility problems associated with 2DE since the proteins are all proteolytically digested en masse. This is graphically demonstrated in the published example, where the technique was used to characterize whole protein complexes. The yeast ribosomal 80S complex was found to contain 64 proteins by analysis of 56 discrete spots visible in a 2DE experiment, but an additional 11 proteins were identified by analysing the same sample using MudPIT. The main drawbacks of this approach are concerned with post-experimental analysis. The sheer volume of data collected in a MudPIT experiment consisting of 10-20 cycles of reversed-phase chromatography presents a significant problem in terms of both computing power required to complete database searching and the time required to collate and assemble the data into an understandable format. Moreover, the approach is generally limited to use with organisms that have complete genome sequence data available for searching. For the analysis of a single 2DE spot, it is possible to obtain de novo sequence data of peptides, using either software or manual interpretation or a combination of both. This is, however, a labour-intensive and time-consuming task that could not be practically applied to the number of tandem mass spectra collected in a typical MudPIT experiment. These problems are all readily solvable, which makes this approach even more attractive. Computing resources continue to steadily increase in performance and become more affordable. Mass spectrometric instrumentation and de novo sequencing algorithms will surely improve, making de novo sequencing on a larger scale more practical. Additionally, the technique could be combined with some of the strategies used for improving success in MS-MS-based de novo sequencing experiments, such as employing proteolytic digestion in 18O-enriched water to provide an isotopic end label. And, of course, at some point in the future, complete genomic sequence data will be available for all the major research organisms and therefore de novo sequencing will no longer be required.
In summary, MudPIT represents a viable alternative to 2DE for the analysis of certain complex mixtures. It is a technique that is best suited to rapidly building a proteomic database, rather than being applied in a differential display proteomic assay. This approach is clearly going to become increasingly attractive as a means of extracting as much information as possible in a short time from a protein sample that could represent a complex or even a relatively simple whole organism, particularly one for which large amounts of genomic sequence data are readily available.
Conclusion
Identification and quantification of large numbers of proteins in as short a time as possible will become increasingly important in the future. As we enter the post-genomic era, the search for new enabling technologies will become ever more intense. In this review, we have tried to demonstrate that such techniques and methodologies are becoming available, and are now ready to be used in solving interesting and exciting biological problems. |
package com.minicluster.cluster.node;
import com.google.common.collect.Lists;
import com.minicluster.cluster.service.Reducible;
import org.apache.log4j.Logger;
import java.io.*;
import java.net.InetAddress;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.Collection;
class NodeTcp implements Node {
private static Logger log = Logger.getLogger(NodeTcp.class);
private ServerSocket listener;
private Socket father;
private Collection<Socket> children;
private Integer nodeId;
public NodeTcp(Integer NodeId) {
listener = null;
father = null;
children = Lists.newArrayList();
this.nodeId = NodeId;
}
/**
* create a server socket, bound to any free port.
*/
@Override
public void start() throws IOException {
this.start(0);
}
/**
* create a server socket, bound to the given port. A port of 0 creates a
* socket on any free port.
*/
private void start(Integer port) throws IOException {
listener = new ServerSocket(port);
log.debug("starting the local node = " + getEndpoint());
}
/**
* get the port on which the server socket is listening
*/
private Integer getPort() {
return listener.getLocalPort();
}
/**
* get the hostname on which the server socket is listening
*/
private String getHostname() throws IOException {
return InetAddress.getLocalHost().getHostName();
}
/**
* connect to other nodes
*
* @throws IOException
*/
@Override
public void connect(Endpoint father, Collection<Endpoint> children) throws IOException, ClassNotFoundException {
log.debug("connecting node to the distributed environment");
if (father != null) {
log.debug("waiting for incoming connection");
this.father = listener.accept();
}
if (children != null) {
for (Endpoint childInfo : children) {
this.children.add(new Socket(childInfo.getHostname(), childInfo.getPort()));
}
}
}
/**
* close all sockets
*/
@Override
public void disconnect() throws IOException {
log.debug("disconnecting node from the distributed environment");
if (father != null) {
father.close();
}
if (children != null) {
for (Socket socket : children) {
socket.close();
}
}
log.debug("stopping the local node");
if (listener != null) {
listener.close();
}
}
@Override
public void reduce(Message<? extends Reducible> msg) throws ClassNotFoundException, IOException {
reduceChildren(msg);
sendToFather(msg);
}
@Override
public void broadcast(Message<?> msg) throws ClassNotFoundException, IOException {
receiveFromFather(msg);
sendToChildren(msg);
}
@Override
public Endpoint getEndpoint() throws IOException {
return new Endpoint(this.getPort(), this.getHostname());
}
@Override
public Integer getId() {
return nodeId;
}
@SuppressWarnings({"unchecked", "rawtypes"})
private void reduceChildren(Message<? extends Reducible> msg) throws ClassNotFoundException, IOException {
for (Socket child : children) {
if (child != null) {
log.debug("receiving from child node");
Message<? extends Reducible> m = new Message(msg.getContent());
m.deSerializeContent(child.getInputStream());
msg.getContent().reduce(m.getContent());
}
}
}
private void sendToFather(Message<?> msg) throws IOException {
if (father != null) {
log.debug("sending to father node");
msg.serializeContent(father.getOutputStream());
}
}
private void receiveFromFather(Message<?> msg) throws ClassNotFoundException, IOException {
if (father != null) {
log.debug("receiving from father node");
msg.deSerializeContent(father.getInputStream());
}
}
private void sendToChildren(Message<?> msg) throws IOException {
for (Socket child : children) {
if (child != null) {
log.debug("sending to child node");
msg.serializeContent(child.getOutputStream());
}
}
}
}
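/*
 * Usage sketch (illustrative addition, not part of the original source): wiring one
 * father node and one child node in a single JVM. In connect(), a node that has a
 * father blocks in accept() on its own listener until the father dials in, so the
 * child's connect() runs on a separate thread here. Message/Reducible traffic is
 * omitted because those classes are not shown in this file; once both connect()
 * calls return, reduce()/broadcast() could be exchanged over the established
 * sockets. Assumes this example lives in the same package as NodeTcp and Endpoint.
 */
class NodeTcpWiringExample {
    public static void main(String[] args) throws Exception {
        NodeTcp father = new NodeTcp(0);
        NodeTcp child = new NodeTcp(1);
        father.start();   // bind each node's ServerSocket to a free port
        child.start();
        Endpoint fatherEndpoint = father.getEndpoint();
        Endpoint childEndpoint = child.getEndpoint();

        // The child waits for its father's incoming connection on its own listener;
        // the father endpoint argument only needs to be non-null for that to happen.
        Thread childSide = new Thread(() -> {
            try {
                child.connect(fatherEndpoint, null);
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        childSide.start();

        // The father dials each of its children's endpoints.
        father.connect(null, java.util.Collections.singletonList(childEndpoint));
        childSide.join();

        // Both sides are connected; tear the pair down again.
        child.disconnect();
        father.disconnect();
    }
}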
|
In the past two months, Nabisco Biscuit Company has been celebrating its bicentennial by producing 25 million commemorative packages of three of its brands: Oreo cookies, Premium Saltines, and Honey Maid Grahams. Each collectible series is reminiscent of packages from the early 20th century.
A small bakery, founded in 1792 in Newburyport, Mass., to make hardtack for sailors, eventually joined other bakers to become the National Biscuit Company. |
The New York City Hospitality Alliance said it's another example of an "issue fines first, educate last" policy.
The city has begun cracking down on bars and eateries serving CBD-infused foods and drinks without giving the industry time to digest the recently enacted embargo, the New York City Hospitality Alliance said.
The alliance said it has received many questions over the last couple of days from members confused about an embargo initiated last month that, for the time being, forbids the sale of CBD-infused food and drinks. Owners have received little-to-no guidance on the new rules, and the trade group said several businesses have expressed concerns about just how dramatically the embargo may cut into their revenue.
"This aggressive enforcement is another example of New York City’s regulatory approach: Issue fines first, and educate last," the alliance said in a statement.
Cannabidiol, or CBD, is a marijuana extract that does not contain THC and, therefore, includes no psychedelic effects. The federal government made CBD more widely available for consumer and medicinal use when Congress passed last year's farm bill.
The Hospitality Alliance said it learned about the new regulations Monday while reading a post in Eater, which reported that during an inspection of Fat Cat Kitchen in East Village, the city Health Department "embargoed" or sealed CBD cookies and other goods in bags and prohibited their sale until further notice. Although the eatery was able to maintain possession of the items, co-owner C.J. Holm told the site that inspectors could not provide him with detailed information about related rules.
A Health Department representative said the department has been in communication with city restaurants and bars and alerting them of the embargo. The health code requires that "any nonfood item added to food be approved or generally recognized, among qualified experts, as safe under the conditions of its intended use," according to the department.
"The Health Department takes seriously its responsibility to protect New Yorkers’ health. Until cannabidiol (CBD) is deemed safe as a food additive, the Department is ordering restaurants not to offer products containing CBD,” the agency said in a statement.
A Health Department spokesperson said five establishments have been ordered to stop using CBD as a food additive in their products, but noted that the owners were allowed to keep those products.
The sale of medicinal CBD products, such as prescriptions for epilepsy in small dosage vials, is still permitted in pharmacies. Food products purchased outside the city are not illegal, the Health Department spokesperson said.
The agency did not comment on the Hospitality Alliance's statement.
For one owner of a CBD-themed coffee shop, the new rules came as a complete shock.
Ian Ford, who opened Caffeine Underground, marketed as the state's first CBD cafe, in 2017, said he was unaware of the Health Department's ban until he was asked about it by amNewYork. He estimated that if Caffeine Underground were to stop selling CBD-infused products, the Bushwick cafe would lose a third of its sales.
Ford said he was even more surprised about the news because a health official had inspected the cafe last week and did not raise any concerns about the CBD items on its menu or embargo any of its products.
"As long as you know where you get it and it’s legally sourced, I don’t understand why they would have an issue," Ford said. |
import sys
import numpy as np
try:
from cStringIO import StringIO # python 2.x
pass
except ImportError:
from io import StringIO # python 3.x
pass
from spatialnde.coordframes import concrete_affine
from spatialnde.coordframes import coordframe
from spatialnde.ndeobj import ndepart
from spatialnde.cadpart.rectangular_plate import rectangular_plate
from spatialnde.cadpart.appearance import simple_material
from spatialnde.cadpart.appearance import texture_url
from spatialnde.exporters.x3d import X3DSerialization
from spatialnde.exporters.vrml import VRMLSerialization
if __name__=="__main__":
# define lab frame
labframe = coordframe()
# define camera frame
camframe = coordframe()
# define object frame
objframe = coordframe()
# Define 90 deg relation and .2 meter offset between lab frame and camera frame:
# Define camera view -- in reality this would come from
# identifying particular reference positions within the lab
concrete_affine.fromtransform(labframe,camframe,
np.array( (( 0.0,1.0,0.0),
(-1.0,0.0,0.0),
(0.0,0.0,1.0)),dtype='d'),
b=np.array( (0.0,0.2,0.0), dtype='d'),
invertible=True)
# Define object coordinates relative to lab
concrete_affine.fromtransform(labframe,objframe,
np.array( (( 1.0/np.sqrt(2.0),-1.0/np.sqrt(2.0),0.0),
(1.0/np.sqrt(2.0),1.0/np.sqrt(2.0),0.0),
(0.0,0.0,1.0)),dtype='d'),
b=np.array( (0.0,0.0,0.4), dtype='d'),
invertible=True)
testplate=ndepart.from_implpart_generator(objframe,None,lambda cadpartparams: rectangular_plate(length=.1000, # m
width=.0300,
thickness=.01,
tol=1e-3))
x3dnamespace=None
Appear = simple_material.from_color((.6,.01,.01),DefName="coloring")
shape_els=[]
#shape_els.extend(testplate.X3DShapes(objframe,x3dnamespace=x3dnamespace))
motor=ndepart.fromstl(objframe,None,"/usr/local/src/opencascade/data/stl/shape.stl",tol=1e-6,recalcnormals=False,metersperunit=1.0e-3,defaultappearance=Appear)
#motor_els = motor.X3DShapes(objframe,x3dnamespace=x3dnamespace,appearance=X3DAppear)
#shape_els.extend(motor_els)
# DEFINE ALTERNATE APPEARANCE!!!
moondial_tex=texture_url.from_url("/usr/local/src/h3d/svn/H3DAPI/examples/x3dmodels/moondial/texture.jpg",DefName="Texture") # reference texture from a URL relative to the x3d/vrml we will write
moondial=ndepart.fromx3d(objframe,None,"/usr/local/src/h3d/svn/H3DAPI/examples/x3dmodels/moondial/moondial_orig.x3d",tol=1e-6)
moondial.assign_appearances(moondial_tex)
x3dwriter=X3DSerialization.tofileorbuffer("objout.x3d",x3dnamespace=x3dnamespace)
moondial.X3DWrite(x3dwriter,objframe) #,appearance=X3DAppear)
x3dwriter.finish()
vrmlwriter = VRMLSerialization.tofileorbuffer("objout.wrl")
moondial.VRMLWrite(vrmlwriter,objframe,UVparameterization=None) #,appearance=VRMLAppear)
vrmlwriter.finish()
import dataguzzler as dg
import dg_file as dgf
import dg_metadata as dgm
import scipy.ndimage
# Internal reference to texture for dataguzzler:
dg_moondial_tex=texture_url.from_url("#moondial_tex",DefName="Texture") # reference to dataguzzler channel... really for .dgs file only
moondial.assign_appearances(dg_moondial_tex)
# Generate dataguzzler VRML string
DGVRMLBuf=StringIO()
dgvrmlwriter = VRMLSerialization.tofileorbuffer(DGVRMLBuf)
moondial.VRMLWrite(dgvrmlwriter,objframe,UVparameterization=None) # ,appearance=VRMLAppear)
dgvrmlwriter.finish()
DGX3DBuf=StringIO()
dgx3dwriter = X3DSerialization.tofileorbuffer(DGX3DBuf)
moondial.X3DWrite(dgx3dwriter,objframe,UVparameterization=None) # ,appearance=VRMLAppear)
dgx3dwriter.finish()
wfmdict={}
wfmdict["moondial"]=dg.wfminfo()
wfmdict["moondial"].Name="moondial"
wfmdict["moondial"].dimlen=np.array((),dtype='i8')
dgm.AddMetaDatumWI(wfmdict["moondial"],dgm.MetaDatum("VRML97Geom",DGVRMLBuf.getvalue()))
dgm.AddMetaDatumWI(wfmdict["moondial"],dgm.MetaDatum("X3DGeom",DGX3DBuf.getvalue()))
teximage=scipy.ndimage.imread("/usr/local/src/h3d/svn/H3DAPI/examples/x3dmodels/moondial/texture.jpg",flatten=True).astype(np.float32).T
wfmdict["moondial_tex"]=dg.wfminfo()
wfmdict["moondial_tex"].Name="moondial_tex"
# ***!!!! NOTE: Must adjust contrast on coloring channel
# in dg_scope to get a nice colormap
wfmdict["moondial_tex"].ndim=2
wfmdict["moondial_tex"].dimlen=np.array(teximage.shape,dtype='i8')
wfmdict["moondial_tex"].n=np.prod(teximage.shape)
wfmdict["moondial_tex"].data=teximage[:,::-1] # Flip y axis so it appears correct in scope
dgm.AddMetaDatumWI(wfmdict["moondial"],dgm.MetaDatum("TextureChan_coloring","coloring:0"))
dgf.savesnapshot("objout.dgs",wfmdict)
pass
|
Restoration of ocular volume through the anterior chamber in postextracapsular cataract extraction retinal surgery. To the Editor. To our knowledge, we present herein a new route for restoration of intraocular volume after subretinal fluid drainage in the course of aphakic retinal detachment surgery when the posterior capsule is intact. Subretinal fluid drainage in the course of retinal detachment surgery may render the eye hypotonic. Restoration of intraocular volume and pressure reduces the frequency and extent of postoperative choroidal detachment, particularly in aphakic eyes in which there has been drainage of a large volume of subretinal fluid. 1 Intraocular injection of fluid is also used to flatten retinal folds produced by buckling, and to express subretinal fluid through a choroidal perforation. 2 Isotonic solution, air, gas, or an air–gas mixture is usually used for this purpose. The usual site for intravitreal fluid injections is the pars plana ciliaris in both phakic and aphakic cases, while injection into the anterior chamber is performed only in aphakic
class PingPongThread extends Thread {
final Printer printer;
String message;
public PingPongThread(Printer printer, String message) {
this.printer = printer;
this.message = message;
this.start();
}
@Override
public void run() {
while (true) {
synchronized (printer) {
printer.printMsg(message);
printer.notify();
try {
printer.wait();
} catch (InterruptedException e) {
e.printStackTrace();
}
}
}
}
} |
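/*
 * Minimal driver sketch (illustrative addition, not from the original source). It
 * assumes the Printer class referenced above exists in this package and exposes the
 * printMsg(String) method used in run(), plus a no-arg constructor. Two
 * PingPongThread instances sharing one Printer strictly alternate their messages
 * through the synchronized/notify/wait sequence in run(): each thread prints, wakes
 * the other, then waits for its own turn.
 */
class PingPongExample {
    public static void main(String[] args) {
        Printer sharedPrinter = new Printer();   // assumes a no-arg constructor
        // The PingPongThread constructor starts the thread itself, so creating the
        // two instances is enough to begin the alternating ping/pong output.
        new PingPongThread(sharedPrinter, "ping");
        new PingPongThread(sharedPrinter, "pong");
    }
}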
async def role(self, ctx, user: discord.Member, *, role: discord.Role):
if role in user.roles:
await user.remove_roles(role)
await ctx.send(f'{self.bot.yes} Successfully removed `{role.name}` from `{user}`')
else:
await user.add_roles(role)
await ctx.send(f'{self.bot.yes} Successfully added `{role.name}` to `{user}`') |
The San Diego International Airport stopped all landings after police reported an active shooter in the city’s Little Italy neighborhood.
A man has been sporadically firing in an apartment complex in the area, police said, according to the Associated Press. So far no one has been reported injured. Officers were responding to a domestic violence call when they heard gunshots at the complex a little after 9 a.m. on Wednesday. The shots came within inches of hitting the officers, 10 News reports.
The police said on Twitter that people should “Stay away from all windows until further notice and Shelter in place.”
SWAT teams were in the area and numerous streets in the immediate vicinity were shut down by around 10 a.m. local time, according to the San Diego Union-Tribune.
City Tree Christian School and Washington Elementary School have been put on lockdown as a precautionary measure, the San Diego Police reported on Nextdoor, a neighborhood social network. “Parents are advised to stay out of the area. The children are safe,” the police posted.
The shooting has caused disruption at the San Diego International Airport. “We are allowing departures but we are not landing aircraft due to the active shooter situation under the approach path,” Ian Gregor, a public affairs manager at the Federal Aviation Administration (FAA) told TIME.
BORN GLOBALS IN POLAND: DEVELOPMENT FACTORS AND RESEARCH OVERVIEW
Objective: This paper investigates the concept of the born global, indicates the main attributes and sources of success for enterprises that internationalised early, and demonstrates the scale of the born global phenomenon in Poland. Research Design & Methods: A review of empirical studies is the research method used in the paper. Secondary data from research studies is collected. The article outlines the theoretical background of born globals. A brief overview of the existing research (since 2004) is also provided. The results are compiled to demonstrate the scale of the born global phenomenon in Poland. Findings: A review of quantitative research indicates that the percentage of born globals in Poland can be estimated at 30–50% of all SMEs which are engaged in international activities. Polish managers have the necessary knowledge and traits for early internationalisation; the primary factors limiting the development of born globals are limited financial and organisational resources. Implications/Recommendations: The existing quantitative analyses of born globals do not sufficiently enable the factors that constitute the main barriers to the development of this sector in Poland to be identified. There is still room to plan the appropriate support policies for SMEs on international markets. Contribution: This paper addresses mainstream research on born globals, which are part of broader trends in the modern economy. This new generation of small and medium-sized enterprises, just as economic globalisation, the opening of markets and acceleration in the field of ICT development, is particularly relevant to both developing countries and middle-income economies such as Poland.
Introduction
Advancing globalisation as well as access to the internet and modern technologies are factors that have created new development opportunities for small and medium-sized enterprises related to expansion onto foreign markets. While the main actors of globalisation and internationalisation processes have primarily been large enterprises (transnational corporations), nowadays we are increasingly dealing with a new generation of small and medium-sized enterprises that have been globally active since launching their business activities. These firms, known as born globals or early internationalisation enterprises in the literature, are also gradually developing in Poland. The purpose of this article is to promote familiarisation with the born globals concept and indicate the main attributes and sources of success for these enterprises. The method used in the paper is a review of empirical research. We collect secondary data from research studies and synthesise their findings. The article begins with an outline of the theoretical background of born globals. We also give a brief overview of the existing research (since 2004).
The results have been compiled to demonstrate the scale of the born global phenomenon in Poland.
Theoretical Background of Born Globals
At the outset, it should be noted that born globals are a special case with respect to the internationalisation of small and medium-sized enterprises and a challenge for traditional theories of the internationalisation of firms (Hollensen 2017). The internationalisation of born-global firms does not proceed in stages, sequentially, as in the Uppsala model (Johanson & Wiedersheim-Paul 1975, Johanson & Vahlne 1977). Instead, these firms are global "from inception". Gustafsson and Zasada emphasise that born-global firms usually create products with high added value that were not previously developed in a strong domestic market. The concept of born global was introduced relatively recently in economics literature. It first appeared in a McKinsey report concerning Australian production enterprises. The report claimed that born globals have always perceived the world as a marketplace and the domestic market merely as support for their international activities (Rennie 1993). Australian born globals include technologically advanced firms as well as typical enterprises using well-known technological solutions in their daily work. According to Rasmussen and Madsen, the primary factor in explaining the born global phenomenon is management engagement in the internationalisation process of Australian firms as well as their ability to standardise their product and marketing solutions and concentrate on market niches. Another characteristic trait of these firms was achieving a fast rate of growth compared to other firms in Australia and a high share of production intended for export relative to domestic sales. Despite the traits of born globals being mentioned in the McKinsey report, the first working definition of this concept in an academic publication was presented by Knight and Cavusgil, who described born-global firms as small and technologically oriented entities operating on international markets from their inception. According to these authors, born-global firms usually operate in the small business sector. They employ fewer than 500 employees, earn sales revenues of less than USD 100 million, and use cutting-edge technologies to develop a relatively unique product or innovation process. As time passed and as discussion continued in the academic community about this phenomenon and further research was pursued, new definitions appeared.
Madsen and Servais, for instance, in an attempt to find the most accurate definition of born globals, indicated seven attributes of such firms: 1) firms launched by strong entrepreneurs with strong international experience and perhaps, in addition, a strong product, 2) the extension of the born global phenomenon is positively associated with the degree of market internationalisation, 3) in comparison with other export firms, born globals are more specialised and niche-oriented, 4) the geographical location of activities in born globals is determined by the past experience of founders and partners as well as economic and capability or customer-related factors, 5) in comparison with other export firms, born globals are more likely to rely on supplemental resources provided by other firms; in their distribution channels, they rely more often on hybrid structures, 6) the growth of a born global is positively associated with high innovative skills, including an ability to access effective R&D as well as distributions channels, often in partnerships involving close collaboration via international relationships, 7) firms in countries with small domestic markets have a higher propensity to become born globals than those in countries with large domestic markets. Moreover, countries with a high number of immigrants may have a higher proportion of born globals. Differentiating born globals from other enterprises was the focus adopted by Cavusgil and Knight. These authors identified the following differentiators in the case of born globals: -born globals are characterised by extensive activity on many foreign markets, immediately or shortly after they are founded -most of these enterprises enter foreign markets via exports, which represents their major path to internationalisation. Born globals export their products or services a few years after their establishment and may export one fourth or more of their production; -they have at their disposal limited financial and material resourcesborn globals are young and relatively small firms. Being small enterprises, they dispose of considerably fewer financial, human, and material resources in comparison with large international enterprises that have functioned for a long time on the market and dominate in global trade and investments; -they engage in activities in many industries -many researchers suggest that born globals concentrate exclusively on high-tech industries. However, there are also many studies showing that these firms are also created in traditional branches of the economy (e.g. the metal, furniture or grocery industries); -managers in these industries demonstrate a strong international outlook as well as entrepreneurship directed to the international market -many born-global firms are created by active managers with strong entrepreneurial attitudes. An entrepreneurial approach on the part of management manifests in a dynamic and aggressive attitude, which is the driving force for acquiring new foreign markets. This attitude is associated with a specific vision in terms of management, risk appetite, and competition; -born globals use a differentiation strategy -many born globals fill a market niche by supplying products of varying quality oriented to a specific client segment, the production of which large enterprises show little interest in. The use of a differentiation strategy inspires consumer loyalty to the firm because it supplies them unique products adapted to their needs. 
Consumers as well as firms that communicate demand for certain specialised products create an important opportunity for the growth of small and medium-sized enterprises; -they emphasise the highest product quality -many born globals supply products of more efficient construction and higher quality compared to their competitors. Born globals are often leaders in terms of technology in the industry. They target niches and supply unique products of the highest quality. Indeed, the founding of born-global firms is often associated with the growth of new products or services; -they use a wide range of advanced communication and IT technologies to communicate with partners and consumers around the world at a marginal cost that is almost equal to zero. Advances in IT technologies in principle eliminate borders and expand the scope of business activities around the world; -born globals usually take advantage of intermediaries when distributing goods on foreign markets -firms engage in direct sales of products on a foreign market (export) or use independent intermediaries for this purpose. They also often use external solutions (forwarders) that frequently organise foreign shipments. Export and the use of services provided by independent intermediaries means that born globals conduct their activities on foreign markets in a way that is very flexible. They can enter and withdraw from foreign markets quickly and relatively easily. Born globals with greater experience combine exports with other forms of foreign expansion such as joint ventures or direct foreign investments. The low cost and risk-free nature of exports makes it the most suitable form for small and young firms to enter foreign markets. Cavusgil and Knight also enumerated five factors that foster the creation of born globals. These include: 1) market globalisation, 2) level of advancement in the field of communication and information technologies, 3) level of advancement in the field of technological production, 4) the existence of market niches, 5) the occurrence of global network connections. Moreover, the authors listed seven determining factors in the launch of an internationalisation process: 1) a mechanism that encourages exports in the form of external factors (export pull), 2) a mechanism that drives exports via the operation of an external agent (export push), 3) a monopolistic position on a global scale, 4) the product and market conditions necessary to undertake international engagement, 5) an advantage in terms of the product offered, 6) global network connections, 7) global market niches. Hollensen, in turn, defined a born-global firm as one that from its inception pursues a vision of becoming global and globalises rapidly without any preceding long-term domestic or internationalisation period. According to this author, born globals represent a type of firm that operates at the intersection (compression) of time and space, which allows them to achieve global reach from the very beginning. This "time-space" compression phenomenon means that geographic processes can be reduced and limited to the "here and now" of information exchange and trade around the world as long as the necessary infrastructure, communication, and IT facilities are introduced along with qualified personnel. The most significant distinguishing trait of born globals is that they strive to be managed by the most entrepreneurial visionaries, who perceive the world as one, without any boundaries, from the firm's inception. 
Similarly, Chetty and Campbell-Hunt associate born globals with a group of firms that "from inception" treat the world as one big global market. Andersson, Danilovic and Huang, however, draw attention to the fact that research into born globals concerns first and foremost developed countries located in North America and Europe as well as enterprises in Australia and to a lesser extent emerging markets. In searching for the factors that are decisive in the success of this type of entity, they claim that despite the differences identified, there are also common elements such as: a specific number of entrepreneurs with international vision, experience, knowledge, level of education, cognitive abilities and processes, local and international networks, financial conditions, innovative culture, unique resources, level of market competition, strategies for entering foreign markets, geographical location, and government policy. In the case of less developed countries, the last of the above-mentioned factors has particular significance, namely, an active state policy supporting the activity of young exporters, including tax relief and assistance in organising promotions (e.g. exhibitions) abroad, which can accelerate the internationalisation process of domestic firms. The level of development of native clusters is also important because they are a hotbed of entrepreneurship and innovation, and such an environment favours the emergence of firms implementing a global strategy of operation from the beginning (Andersson, Danilovic & Huang 2015). The developed model contains the essential success factors of born globals in the process of internationalisation. They have been classified and presented from four different perspectives: entrepreneurship, organisational, implemented strategy, and external environment (Figure 1). Of the listed factors, Andersson, Danilovic and Huang assign particular importance to a culture of innovation. The authors believe that without a unique, innovative product, resulting from the use of unique knowledge, it will be difficult for born-global firms to achieve success on the international market. Issues related to born globals are also discussed in the Polish literature on the subject (Pietrasieński 2005, Nowiński 2006, Morawczyński 2007, Przybylska 2010, Duliniec 2011, Jarosiński 2012, Oczkowska 2013, Wach 2015, Limański & Drabik 2017). The definition of born globals proposed by Pietrasieński, for example, referred to enterprises that begin to internationalise their activity from the very beginning of, or shortly after, their inception. He pointed out that in most cases the issue is the movement of products, and less often production factors (processes), to a large number of countries simultaneously or within a short time period. According to Morawczyński, born globals are firms that adopt an international or global strategy after their establishment. Jarosiński claimed that the term born globals concerns only those enterprises that begin the internationalisation of their activity within three years of inception and earn at least 25% of their revenues from international markets. Limański and Drabik, in turn, emphasised that the international competitive advantage of born globals is based on innovative and flexible activities (associated mainly with new technologies, but not excluding the use of traditional technologies) and offering unique products, despite having generally limited resources.
Born Globals in the Polish Economy – the Scale of the Phenomenon and Determinant Factors
During the centrally planned economy, the internationalisation of enterprises in Poland was a sporadic phenomenon, reserved primarily for large state-owned enterprises. Before 1989, internationalisation was primarily limited to exports to neighbouring countries with similar ideologies. The processes associated with the transformation of the socio-economic system followed by accession to the European Union have led Polish enterprises to gradually include expansion onto foreign markets in their growth strategies. It should be noted, however, that since Poland is a large country with a large market compared to other Central and Eastern European countries, the pressure on foreign expansion has been considerably lower. Research into the operation and growth of born-global firms in Poland is carried out by both individuals and certain institutions. One of the first analyses for Poland was undertaken by Morawczyński. In 2004, he conducted a survey among 117 small and medium-sized enterprises operating in the Małopolska province. Based on these responses, he claimed that the pace of internationalisation among SMEs in the Małopolska province was faster than the theory of gradual internationalisation envisaged. More than 25% of firms began exporting within a year of inception. Half of the firms he researched entered a foreign market no later than three years from inception, which means that early internationalisation was observed in the case of approximately 50 of the enterprises included in the research. The research also indicated a relationship between a firm's period of operation and the share of exports in its total sales revenues: in enterprises existing on the market for up to five years, this indicator stood at 55%, while in the case of firms with thirteen or more years of experience, it stood at 75% (Morawczyński 2007, 2008). Other studies on Polish born globals were carried out between 2007 and 2010 by a research team under the direction of Cieślik. A total of 18,896 Polish exporters operating from 1994 to 2003 were included in the research. The study led the researchers to conclude that the phenomenon of early internationalisation among these firms was quite common. As many as 75% began exporting within three years of registration. The researchers distinguished three groups of enterprises: immediate exporters, which began exporting at inception or within the first full year of operation; rapid exporters, which began exporting products during the second or third year after establishment; and delayed exporters, which launched export activities after four years or more. The analysis carried out by Cieślik's team indicated that 46.1% of exporters could be described as immediate exporters, 28.8% as rapid exporters, and only 25.1% as delayed exporters. Research conducted in 2008 by Kraśnicka (Przedsiębiorczość... 2008) also indicates the existence of born-global firms in Poland. One hundred small and medium-sized enterprises operating in the Silesia province were randomly selected for the study. Almost one third of them were already operating on the international market. However, in the case of 12% of these enterprises, the internationalisation process began from the moment of their inception. Empirical research aimed at confirming the existence of born globals in Poland was also carried out by Przybylska.
The author surveyed 53 firms, 18 of which (34%) fulfilled the criteria of being a global enterprise from inception, in other words, firms that were established after 1989, employed fewer than 249 people, entered foreign markets within three years of launching business activity, and gained at least 30% of their revenues from exports. Similar results were obtained by Nowiski and Nowara, who attempted to identify the born globals in Poland. The research sample included 50 small and medium-sized private Polish firms that are engaged in exports. The research showed that the period required for a firm to decide on foreign expansion was 3.9 years on average. Each firm served six foreign markets on average, and most exported their goods to the European Union. The authors classified 30% of the firms surveyed as born globals or enterprises that within three years of inception earn at least 25% of their revenue from export sales. In 2010, Jarosiski also carried out research on this subject. Among the 47 small and medium-sized enterprises included in the study, the author found that 32% satisfied the criteria of born globals. Most bornglobal firms pursued internationalisation rapidly: 73% of firms within the first year of operations, 13% within the second year, and another 13% within the third year. The vast majority (close to 90%) operated only on European markets. On average, a firm operated on 3.8 markets (a median of 3). In conclusion, the author pointed out that although the researched enterprises met the definition of born global with respect to operational criteria, their characteristics diverged from the theoretical definitions. Danik, Kowalik and Krl also analysed born globals. For this research they selected 233 SMEs of which 45% were found to satisfy the criteria of born-global firms. Among Polish born globals, the main areas of activity were: production of food, metal goods, machinery and tools, rubber and artificial fibre goods as well as furniture. The research revealed that the dominant means of entry onto foreign markets was direct exports (as high as 98.1% of responses) and that most of the revenue (70.5%) earned by Polish born globals originated from exports of goods to EU markets. In addition to individuals and research groups, the Polish Agency for Enterprise Development (PARP) has also carried out research into the operation of born-global firms in Poland. Among enterprises that declared an engagement in international activity, more than half (51.8%) of SMEs stated that they had been operating on foreign markets since the start of their business activity. This also applies to almost half of small and medium--sized exporters (49.8%). Quantitative research carried out by PARP only among SMEs, in turn, indicated that more than 33% of the firms surveyed (35%) in the sectors that dominate exports were present on foreign markets since their inception. These results led the researchers to claim that the scale of operation of born-global firms in Poland ranges between 35 and 50% of SMEs engaged in international activity, which amounts to an estimated 58,000 to 83,000 firms (Ewaluacja... 2014). Research carried out by PARP has also shown that the decision by small and medium-sized Polish enterprises to internationalise is motivated by various factors. These can be divided into two groups: internal (related to the firm) and external (related to the environment). 
The major internal factors that influenced the decision to expand onto foreign markets included: the prospect of cooperation with a foreign partner, the possibility to sell products abroad at a higher price than within the domestic country, the desire to avoid dependence on domestic sales, ownership-based relationships with a foreign contractor, and activities to improve the firm's image. In the case of enterprises engaged in exports, these factors were mentioned by 84.4% of entities. Therefore, we can conclude that the most important internal determinant in the internationalisation of Polish enterprises belonging to the SME sector was a strategic drive by firms to increase profitability and achieve market diversification, which, as shown by PARP research, translates into better economic results compared to companies operating only on the domestic market. In the case of external determinants, the most important factors among the researched enterprises were related to the foreign market: high demand for products and fewer administrative and regulatory restrictions; however, with respect to the domestic market the most important factors were high competition and an under-developed market (Ewaluacja... 2014). The quantitative research carried out by PARP is also supported by qualitative studies. These showed that the owner's experiences on foreign markets played a significant role in the decision to expand onto foreign markets. Internationalisation was also fostered by factors such as: knowledge of foreign languages and cultures as well as personal contacts on the part of management. Exporters from the SME sector believed that the most important advantages allowing them to place products on foreign markets included the high quality and competitive prices of their goods. Exporters assigned less importance to factors such as the unique character, modernity or design of their goods. The results obtained by PARP were in line with those obtained by other authors. For example, Kowalik and Baranowska-Prokop, in analysing the determinants of the decision and motives behind the expansion of small and medium-sized Polish enterprises that internationalised rapidly, identified the following main external determinants: the owner's personality, unique knowledge of the foreign market or experience of the board in international business. Other characteristics enumerated in the research included: a network of personal contacts, sales motivations and the firm's organisational culture, a unique advantage in the field of technology, and the board's focus on the development of human resources. In turn, the most important determinants of an internal nature included: the appearance of new business opportunities at the moment of Poland's accession to the EU, the chance to access a network of suppliers and partners among international corporations, geographical proximity, the impact of systemic transformation, greater market potential, and higher prices abroad. The research indicated that the personal characteristics of their founders played a key role in the launch and internationalisation of small and medium--sized Polish firms. Global vision on the part of management from the firm's inception and the use of personal and professional relationships with foreign partners distinguishes born-global firms from traditional exporters. Conclusions The results presented above, obtained by certain authors, indicate that the born global phenomenon is also visible in Poland. 
Enterprises undergoing early internationalisation began to appear in the Polish economy as early as the beginning of the 1990s. This is the same moment that other firms of the same type were developing in other countries. However, research into this phenomenon began considerably later in Poland. Based on a review of selected analyses of the activities of Polish born--global firms, it can be concluded that there are SMEs that actively participate in the process of early internationalisation. A review of the research indicates that the percentage of these types of companies ranges from 30 to 50% of the population. However, these results should be treated with caution due to the different definitions of born globals adopted in individual studies and their methodologies. It is worth pointing out that the population of Polish born globals reflects the structure of enterprises in Poland -the number of enterprises from high technology industries on international markets is small. The research results point to the need for the state to be more involved in supporting the early internationalisation of enterprises. It is apparent that Polish managers have the right entrepreneurial traits and strategic vision of enterprise development, which determine the formation of born globals; however, there are other factors that hinder this process. The main barriers hindering the expansion of Polish enterprises onto foreign markets include a lack of financial and human resources, the low prices foreign contractors want to pay for goods produced by Polish SMEs, the low recognition of Polish products and services abroad, a lack of free production capacity allowing for increases in export production, and the high expectations of foreign customers regarding quality, environmental standards innovation, and the design of products for export (Chilimoniuk--Przedziecka & Klimczak 2014). In addition, weak pressure on the early internationalisation of enterprises may result from a belief that being a born global does not directly or necessarily translate into higher profits or improved business performance. Having a broad global presence also makes born globals particularly vulnerable to financial losses, and these entities suffer disproportionately from internationalisation (Oyson 2018). The role of born-global firms in the Polish economy requires further research. Existing quantitative analyses of born globals do not fully enable the factors constituting the main barriers to the development of this sector in Poland to be identified. Qualitative studies are also necessary, especially case studies. This is necessary in order to plan the appropriate support policies for SMEs on international markets. |
// src/app/media-library/media-library-routing.module.ts
import { NgModule } from '@angular/core';
import { Routes, RouterModule } from '@angular/router';
import {extract, Route} from '@app/core';
import {MediaLibraryComponent} from '@app/media-library/media-library.component';
const routes: Routes = [
Route.withShell([
{ path: 'medialibrary', component: MediaLibraryComponent, data: { title: extract('Media Library') } }
])
];
@NgModule({
imports: [RouterModule.forChild(routes)],
exports: [RouterModule]
})
export class MediaLibraryRoutingModule { }
|
/******************************************************************************
* Copyright (c) 2019 Texas Instruments Incorporated - http://www.ti.com
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
*
* Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
*
* Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the
* distribution.
*
* Neither the name of Texas Instruments Incorporated nor the names of
* its contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
*****************************************************************************/
/** \file board_clock.c
*
* \brief This file contains initialization of wakeup and main PSC
* configuration structures and function definitions to get the number
* of wakeup and main PSC config exists.
*/
#include "board_clock.h"
#include "board_utils.h"
#include <ti/drv/sciclient/sciclient.h>
extern Board_initParams_t gBoardInitParams;
uint32_t gBoardClkModuleMcuID[] = {
TISCI_DEV_MCU_ADC0,
TISCI_DEV_MCU_ADC1,
TISCI_DEV_MCU_CPSW0,
TISCI_DEV_MCU_TIMER0,
TISCI_DEV_MCU_FSS0_HYPERBUS1P0_0,
TISCI_DEV_MCU_FSS0_OSPI_0,
TISCI_DEV_MCU_FSS0_OSPI_1,
TISCI_DEV_WKUP_GPIO0,
TISCI_DEV_WKUP_GPIO1,
TISCI_DEV_WKUP_GPIOMUX_INTRTR0,
TISCI_DEV_MCU_UART0,
TISCI_DEV_MCU_MCAN0,
TISCI_DEV_MCU_MCAN1,
TISCI_DEV_MCU_I2C0,
TISCI_DEV_MCU_I2C1,
TISCI_DEV_WKUP_I2C0,
TISCI_DEV_MCU_SA2_UL0,
TISCI_DEV_WKUP_UART0, //Note: Keep the wakeup UART at end to skip it during clock deinit
};
uint32_t gBoardClkModuleMainID[] = {
TISCI_DEV_DDR0,
TISCI_DEV_TIMER0,
TISCI_DEV_TIMER1,
TISCI_DEV_TIMER2,
TISCI_DEV_TIMER3,
TISCI_DEV_EMIF_DATA_0_VD,
TISCI_DEV_MMCSD0,
TISCI_DEV_MMCSD1,
TISCI_DEV_MMCSD2,
TISCI_DEV_GPIO0,
TISCI_DEV_GPIO1,
TISCI_DEV_GPIO2,
TISCI_DEV_GPIO3,
TISCI_DEV_GPIO4,
TISCI_DEV_GPIO5,
TISCI_DEV_GPIO6,
TISCI_DEV_GPIO7,
TISCI_DEV_PRU_ICSSG0,
TISCI_DEV_PRU_ICSSG1,
TISCI_DEV_UART0,
TISCI_DEV_MCAN0,
TISCI_DEV_MCAN1,
TISCI_DEV_MCAN2,
TISCI_DEV_MCAN3,
TISCI_DEV_MCAN4,
TISCI_DEV_MCAN5,
TISCI_DEV_MCAN6,
TISCI_DEV_MCAN7,
TISCI_DEV_MCAN8,
TISCI_DEV_MCAN9,
TISCI_DEV_MCAN10,
TISCI_DEV_MCAN11,
TISCI_DEV_MCAN12,
TISCI_DEV_MCAN13,
TISCI_DEV_MCASP0,
TISCI_DEV_MCASP1,
TISCI_DEV_MCASP2,
TISCI_DEV_MCASP3,
TISCI_DEV_MCASP4,
TISCI_DEV_MCASP5,
TISCI_DEV_MCASP6,
TISCI_DEV_MCASP7,
TISCI_DEV_MCASP8,
TISCI_DEV_MCASP9,
TISCI_DEV_MCASP10,
TISCI_DEV_MCASP11,
TISCI_DEV_I2C0,
TISCI_DEV_I2C1,
TISCI_DEV_I2C2,
TISCI_DEV_I2C3,
TISCI_DEV_I2C4,
TISCI_DEV_I2C5,
TISCI_DEV_I2C6,
TISCI_DEV_PCIE0,
TISCI_DEV_PCIE1,
TISCI_DEV_PCIE2,
TISCI_DEV_PCIE3,
TISCI_DEV_UFS0,
TISCI_DEV_UART1,
TISCI_DEV_UART2,
TISCI_DEV_UART3,
TISCI_DEV_UART4,
TISCI_DEV_UART5,
TISCI_DEV_UART6,
TISCI_DEV_UART7,
TISCI_DEV_UART8,
TISCI_DEV_UART9,
TISCI_DEV_USB0,
TISCI_DEV_USB1,
TISCI_DEV_VPFE0,
TISCI_DEV_SERDES_16G0,
TISCI_DEV_SERDES_16G1,
TISCI_DEV_SERDES_16G2,
TISCI_DEV_SERDES_16G3,
TISCI_DEV_SERDES_10G0,
TISCI_DEV_SA2_UL0,
TISCI_DEV_GTC0,
};
/**
* \brief Disables module clock
*
* \return BOARD_SOK - Clock disable successful.
* BOARD_FAIL - Clock disable failed.
*
*/
Board_STATUS Board_moduleClockDisable(uint32_t moduleId)
{
Board_STATUS retVal = BOARD_SOK;
int32_t status = CSL_EFAIL;
uint32_t moduleState = 0U;
uint32_t resetState = 0U;
uint32_t contextLossState = 0U;
/* Get the module state.
No need to change the module state if it
is already OFF
*/
status = Sciclient_pmGetModuleState(moduleId,
&moduleState,
&resetState,
&contextLossState,
SCICLIENT_SERVICE_WAIT_FOREVER);
if(moduleState != TISCI_MSG_VALUE_DEVICE_HW_STATE_OFF)
{
status = Sciclient_pmSetModuleState(moduleId,
TISCI_MSG_VALUE_DEVICE_SW_STATE_AUTO_OFF,
(TISCI_MSG_FLAG_AOP |
TISCI_MSG_FLAG_DEVICE_RESET_ISO),
SCICLIENT_SERVICE_WAIT_FOREVER);
if (status == CSL_PASS)
{
status = Sciclient_pmSetModuleRst (moduleId,
0x1U,
SCICLIENT_SERVICE_WAIT_FOREVER);
if (status != CSL_PASS)
{
retVal = BOARD_FAIL;
}
}
else
{
retVal = BOARD_FAIL;
}
}
return retVal;
}
/**
* \brief Enables module clock
*
* \return BOARD_SOK - Clock enable successful.
* BOARD_FAIL - Clock enable failed.
*
*/
Board_STATUS Board_moduleClockEnable(uint32_t moduleId)
{
Board_STATUS retVal = BOARD_SOK;
int32_t status = CSL_EFAIL;
uint32_t moduleState = 0U;
uint32_t resetState = 0U;
uint32_t contextLossState = 0U;
/* Get the module state.
No need to change the module state if it
is already ON
*/
status = Sciclient_pmGetModuleState(moduleId,
&moduleState,
&resetState,
&contextLossState,
SCICLIENT_SERVICE_WAIT_FOREVER);
if(moduleState == TISCI_MSG_VALUE_DEVICE_HW_STATE_OFF)
{
if(gBoardInitParams.pscMode == BOARD_PSC_DEVICE_MODE_NONEXCLUSIVE)
{
status = Sciclient_pmSetModuleState(moduleId,
TISCI_MSG_VALUE_DEVICE_SW_STATE_ON,
(TISCI_MSG_FLAG_AOP |
TISCI_MSG_FLAG_DEVICE_RESET_ISO),
SCICLIENT_SERVICE_WAIT_FOREVER);
}
else
{
status = Sciclient_pmSetModuleState(moduleId,
TISCI_MSG_VALUE_DEVICE_SW_STATE_ON,
(TISCI_MSG_FLAG_AOP |
TISCI_MSG_FLAG_DEVICE_EXCLUSIVE |
TISCI_MSG_FLAG_DEVICE_RESET_ISO),
SCICLIENT_SERVICE_WAIT_FOREVER);
}
if (status == CSL_PASS)
{
status = Sciclient_pmSetModuleRst (moduleId,
0x0U,
SCICLIENT_SERVICE_WAIT_FOREVER);
if (status != CSL_PASS)
{
retVal = BOARD_FAIL;
}
}
else
{
retVal = BOARD_FAIL;
}
}
return retVal;
}
/**
* \brief clock Initialization function for MCU domain
*
* Enables different power domains and peripheral clocks of the MCU.
* Some of the power domains and peripherals will be OFF by default.
* Enabling the power domains is mandatory before accessing using
* board interfaces connected to those peripherals.
*
* \return BOARD_SOK - Clock initialization successful.
* BOARD_INIT_CLOCK_FAIL - Clock initialization failed.
*
*/
Board_STATUS Board_moduleClockInitMcu(void)
{
Board_STATUS status = BOARD_SOK;
uint32_t index;
uint32_t loopCount;
loopCount = sizeof(gBoardClkModuleMcuID) / sizeof(uint32_t);
for(index = 0; index < loopCount; index++)
{
status = Board_moduleClockEnable(gBoardClkModuleMcuID[index]);
if(status != BOARD_SOK)
{
status = BOARD_INIT_CLOCK_FAIL;
break;
}
}
#if defined(BUILD_MCU)
if(status == BOARD_SOK)
{
int32_t ret;
uint64_t mcuClkFreq;
ret = Sciclient_pmGetModuleClkFreq(TISCI_DEV_MCU_R5FSS0_CORE0,
TISCI_DEV_MCU_R5FSS0_CORE0_CPU_CLK,
&mcuClkFreq,
SCICLIENT_SERVICE_WAIT_FOREVER);
if(ret == 0)
{
Osal_HwAttrs hwAttrs;
uint32_t ctrlBitmap;
ret = Osal_getHwAttrs(&hwAttrs);
if(ret == 0)
{
/*
* Change the timer input clock frequency configuration
based on R5 CPU clock configured
*/
hwAttrs.cpuFreqKHz = (int32_t)(mcuClkFreq/1000U);
ctrlBitmap = OSAL_HWATTR_SET_CPU_FREQ;
ret = Osal_setHwAttrs(ctrlBitmap, &hwAttrs);
}
}
if(ret != 0)
{
status = BOARD_INIT_CLOCK_FAIL;
}
}
#endif
return status;
}
/**
* \brief clock Initialization function for MAIN domain
*
* Enables different power domains and peripheral clocks of the SoC.
* Some of the power domains and peripherals will be OFF by default.
* Enabling the power domains is mandatory before accessing using
* board interfaces connected to those peripherals.
*
* \return BOARD_SOK - Clock initialization successful.
* BOARD_INIT_CLOCK_FAIL - Clock initialization failed.
*
*/
Board_STATUS Board_moduleClockInitMain(void)
{
Board_STATUS status = BOARD_SOK;
uint32_t index;
uint32_t loopCount;
loopCount = sizeof(gBoardClkModuleMainID) / sizeof(uint32_t);
for(index = 0; index < loopCount; index++)
{
status = Board_moduleClockEnable(gBoardClkModuleMainID[index]);
if(status != BOARD_SOK)
{
return BOARD_INIT_CLOCK_FAIL;
}
}
return status;
}
/**
* \brief clock de-initialization function for MCU domain
*
* Disables different power domains and peripheral clocks of the SoC.
*
* \return BOARD_SOK - Clock de-initialization successful.
* BOARD_INIT_CLOCK_FAIL - Clock de-initialization failed.
*
*/
Board_STATUS Board_moduleClockDeinitMcu(void)
{
Board_STATUS status = BOARD_SOK;
uint32_t index;
uint32_t loopCount;
loopCount = sizeof(gBoardClkModuleMcuID) / sizeof(uint32_t);
/* (loopCount - 1) to avoid wakeup UART disable which is used by DMSC */
for(index = 0; index < (loopCount - 1); index++)
{
status = Board_moduleClockDisable(gBoardClkModuleMcuID[index]);
if(status != BOARD_SOK)
{
return BOARD_INIT_CLOCK_FAIL;
}
}
return status;
}
/**
* \brief clock de-initialization function for MAIN domain
*
* Disables different power domains and peripheral clocks of the SoC.
*
* \return BOARD_SOK - Clock de-initialization successful.
* BOARD_INIT_CLOCK_FAIL - Clock de-initialization failed.
*
*/
Board_STATUS Board_moduleClockDeinitMain(void)
{
Board_STATUS status = BOARD_SOK;
uint32_t index;
uint32_t loopCount;
loopCount = sizeof(gBoardClkModuleMainID) / sizeof(uint32_t);
for(index = 0; index < loopCount; index++)
{
status = Board_moduleClockDisable(gBoardClkModuleMainID[index]);
if(status != BOARD_SOK)
{
return BOARD_INIT_CLOCK_FAIL;
}
}
return status;
}
|
HASBROUCK HEIGHTS, N.J., Dec. 16, 2015 (GLOBE NEWSWIRE) -- Retail TouchPoints, the industry's go-to source for customer engagement strategies, today announced the winners of the 2015 Customer Engagement Awards.
This year, Retail TouchPoints is recognizing 18 retail companies that are reaching lofty goals with a variety of technologies and campaigns. Across the organization — from the supply chain to the mobile screen — each of this year’s winners has gone the extra mile to delight, surprise and satisfy shoppers. The award winners are ahead of the curve and are achieving business success in this increasingly competitive and challenging marketplace.
"Retail companies have really stepped it up when it comes to customer engagement innovation in 2015," said Debbie Hauss, Editor-In-Chief of Retail TouchPoints. "We were thrilled to receive more than 30 nominations for this year's awards."
Winners include large, national retailers and smaller, regional companies, as well as international selections. Award recipients also vary in their products and services offerings, from specialty apparel and department stores to automotive and gourmet consumables.
Those interested in obtaining further details should visit 18 Retailers Honored With 2015 Customer Engagement Awards, a feature report which provides additional information on each winner.
Retail TouchPoints is an online publishing network for retail executives, with content focused on optimizing the customer experience across all channels. The Retail TouchPoints network is comprised of a weekly newsletter, category-specific blogs, special reports, web seminars, exclusive benchmark research, and a content-rich web site featuring daily news updates and multi-media interviews at http://www.retailtouchpoints.com. The Retail TouchPoints team also interacts with social media communities via Facebook, Twitter and LinkedIn. |
/**
* Monitors the interface and re-starts monitoring or re-opens interface as needed.
*/
protected void carInterfaceRestartIfNeeded() {
int status = mCarInterface.getsStatus();
if (status == ElmInterface.STATUS_OPEN_STOPPED) {
try {
mCarInterface.monitorStart();
} catch (Exception ex) {
Log.e(TAG, "ERROR STARTING CAR INTERFACE MONITORING", ex);
}
} else if (status != ElmInterface.STATUS_OPEN_MONITORING) {
mCarInterface.deviceOpen();
}
updateNotification();
} |
package gamedori.beans.dto;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.text.SimpleDateFormat;
import java.util.Date;
public class NoticeDto {
private int notice_no, member_no, notice_read;
private String notice_title, notice_content, notice_date,member_nick;
public NoticeDto() {
super();
}
// Added: constructor that converts a ResultSet row into a NoticeDto
public NoticeDto(ResultSet rs) throws SQLException {
this.setNotice_no(rs.getInt("notice_no"));
this.setMember_no(rs.getInt("member_no"));
this.setNotice_title(rs.getString("notice_title"));
this.setNotice_content(rs.getString("notice_content"));
this.setNotice_date(rs.getString("notice_date"));
this.setNotice_read(rs.getInt("notice_read"));
}
public String getMember_nick() {
return member_nick;
}
public void setMember_nick(String member_nick) {
this.member_nick = member_nick;
}
public int getNotice_no() {
return notice_no;
}
public void setNotice_no(int notice_no) {
this.notice_no = notice_no;
}
public int getMember_no() {
return member_no;
}
public void setMember_no(int member_no) {
this.member_no = member_no;
}
public int getNotice_read() {
return notice_read;
}
public void setNotice_read(int notice_read) {
this.notice_read = notice_read;
}
public String getNotice_title() {
return notice_title;
}
public void setNotice_title(String notice_title) {
this.notice_title = notice_title;
}
public String getNotice_content() {
return notice_content;
}
public void setNotice_content(String notice_content) {
this.notice_content = notice_content;
}
public String getNotice_date() {
return notice_date;
}
public void setNotice_date(String notice_date) {
this.notice_date = notice_date;
}
// Convenience methods added below:
// [1] getNotice_time() : returns the time portion of the date
// [2] getNotice_day()  : returns the date portion
// [3] getNotice_auto() : returns the time for posts written today, otherwise the date
public String getNotice_time() {
return notice_date.substring(11, 16);
}
public String getNotice_day() {
return notice_date.substring(0, 10);
}
public String getNotice_auto() {
// Date d = new Date();
// Format f = new SimpleDateFormat("yyyy-MM-dd");
// String today = f.format(d);
String today = new SimpleDateFormat("yyyy-MM-dd").format(new Date());
if (getNotice_day().equals(today)) {// if the post was written today
return getNotice_time();
} else {// otherwise
return getNotice_day();
}
}
// Field and getter/setter added for the additional item (reply count)
private int notice_replycount;
public int getNotice_replycount() {
return notice_replycount;
}
public void setNotice_replycount(int notice_replycount) {
this.notice_replycount = notice_replycount;
}
}
|
type Coordinates = {
lat: number;
lon: number;
geohash: string;
};
export type Location = Coordinates & {
id: number;
city_id: string;
name: string;
address: string[];
type: string;
};
export type City = Coordinates & {
id: string;
locale: string;
region_id: number;
name: string;
timezone: string;
image_url: string;
legacy_url_form: string;
full_name: string;
region: Region;
};
type Country = {
code2: string;
locale: string;
code3: string;
name: string;
continent: string;
default_locale: string;
default_currency: string;
population: number;
};
type Region = {
id: number;
locale: string;
country_code2: string;
name: string;
country: Country;
};
|
# examples/underscored/future_statement.py (from the doboy/Underscore repository)
# from __future__ import with_statement
#
# x = 1
from __future__ import with_statement
(__,) = (1,)
_ = __
(x,) = (_,)
|
The Lozoya River Valley, in the Madrid mountain range of Guadarrama, could easily be called "Neanderthal Valley," says the paleontologist Juan Luis Arsuaga.
"It is protected by two strings of mountains, it is rich in fauna, it is a privileged spot from an environmental viewpoint, and it is ideal for the Neanderthal, given that it provided the with good hunting grounds."
This is not just a hypothesis: scientists working on site in Pinilla del Valle, near the reservoir, have already found nine Neanderthal teeth, remains of bonfires and thousands of animal fossils, including some from enormous aurochs (the ancestor of cattle, each the length of two bulls), rhinoceros and fallow deer.
The Neanderthal is a human species that is well known and unknown at the same time. It is well known because numerous vestiges have been found from the time when they lived in Europe, between 200,000 and 30,000 years ago. But it is also unknown because of the many unresolved issues that keep cropping up, including, first and foremost: why did they become extinct just as our current species made an appearance on the continent?
Nobody knows for sure whether the Neanderthal was able to talk, or whether they shared territory with Homo sapiens, or whether both species ignored each other until one - ours - proliferated while the other got lost forever... Scientists in charge of the sites at Pinilla del Valle could make significant contributions to finding the answers to these and other questions about the lives of the Neanderthal people.
"There are around 15 sites in Spain: in the Cantabrian mountain range, along the eastern Mediterranean coast and in Andalusia, but none on the plateau, where there are no limestone formations and no adequate caves to preserve human remains for thousands of years," adds Arsuaga. But Pinilla del Valle is an exception to the rule. "There is limestone here. It was like a cap made of stone under which the Neanderthal presumably took refuge to prepare for the hunt, to craft their tools, to eat... It's not that they lived inside in the sense of a home; they wandered in the fields, and this was probably more like a base camp to take refuge when they needed to."
It is clear that the Neanderthals took care of their dead in some way
"The site, which has great potential, extends some 150 meters and we are now working in three areas: the cave of Camino, the refuge of Navalmaillo and the cave of Des-Cubierta, which cover three different time frames," says Enrique Baquedano, director of the Regional Archeology Museum in Madrid.
It was on the floor of Des-Cubierta that the Neanderthal must have placed the dead body of a small child aged two-and-a-half to three years old. They placed two slabs of stone and an aurochs horn on top, and set the body on fire. Baquedano explains that they found some of the child's teeth - they call it a little girl, although they have no scientific evidence of its gender - as well as a piece of coal that turned up just a few days ago and which will enable precise dating. "Complete burials, with a clear structure that allows [researchers] to reconstruct behaviors, is a very rare thing in any part of the world," says Arsuaga, who is also co-director of the excavations at the major prehistoric site of Atapuerca.
Standing next to him, Baquedano points at the spot where they found the coal from that bonfire, perhaps a ritual of some sort, and which will be subjected to carbon 14-dating techniques.
"We are convinced that it was an intentional deposition of the girl's body; perhaps there were more burials at Neanderthal sites but they were not recognized as such," says the museum director.
The fact is, the Neanderthal took care of their dead in some way. Traces of them have also been found in France and Israel.
Here at the Madrid valley, archeologists and paleontologists get busier as the days go by. A total of 70 people scattered over three sites dig among the sediment with chisels and brushes; they clear through the rock with jackhammers, they wash kilograms of extracted earth so that not even the smallest noteworthy piece will go by unnoticed, and every excavated centimeter is documented. This scientific work has been going on every summer for a decade, "for 40 days, in two shifts," explains César Laplana of the regional museum.
The nine Neanderthal teeth discovered so far are between 60,000 and 90,000 years old, and several of them appeared in what must have been hyena dens, where the animals probably devoured and destroyed the bodies. "Teeth are the most resistant of all organic tissue; they keep better than the rest of the skeleton, and they provide lots of information about the diet, the diseases, and the passage from childhood to adulthood," continues Laplana.
"The Neanderthal lived both in the interglacial and the glacial periods," explains Arsuaga. After an ice age that made half of Europe look like Greenland does today, the interglacial period began around 130,000 years ago with a climate that was actually warmer than today's; then, 85,000 years ago the last ice age began, ending 11,500 years ago. The excavations at Pinilla corresponding to the interglacial period produced many remains of fallow deer (a Mediterranean species), tortoises, porcupines and brown bears, as opposed to the cave bears of the glacial period.
Over at the cave of Des-Cubierta, Javier Somoza, a student at Salamanca University, walks up to Baquedano and shows him an artifact wrapped in white paper: it is a tool that he has just found in the ground. "Yes, I was very excited," says Somoza about the bit of pink quartz.
Thousands of stone tools have already been found. "The best stone for sculpting is flint, but there's none in this area, so they had to make do with what they had handy. So they adapted their technique to quartz. It's worse, but it works and it represents an admirable technological adaptation."
And what about hunting? "They used wooden lances with fire-hardened tips."
"Here, in this valley so full of rich sites, we can find out lots of things about the Neanderthal, their lives and their deaths, their climate, their technology and their economy," concludes the archeologist. "It's just a matter of time." |
The Annual Premium of Life Insurance on the Joint-Life Status Based on the 2011 Indonesian Mortality Table
DOI: 10.24042/djm.v3i3.6761

Life insurance is insurance that protects against risks to someone's life. Joint-life insurance is insurance in which the life-and-death rule is a combination of two or more lives, such as husband-wife or parent-child; if the first death occurs, the premium payment process is stopped. The annual premium is the premium paid every year. In this study, the annual premium is calculated continuously with the equivalence principle, based on the 2011 Indonesian Mortality Table. The calculation shows that the annual premiums for 2 (two) and 3 (three) people do not differ much. The factors that influence the annual premium amount are the duration of the insurance period, the age at signing the policy, the interest rate, life chances, the force of mortality, and the amount of the benefits.

INTRODUCTION
Everyone faces many risks that may cause loss, such as accidents, damage, illness, loss of life, and other unexpected events. According to, life is full of uncertainty. Everyone wants to prosper, and prosperity is disturbed by sickness, disability, or death. A welfare guarantee can be obtained by insuring oneself, with the guarantee given as compensation; it can therefore be concluded that insurance is a guarantee against unexpected events that serves to minimise risk. Insurance that guarantees a person's life is called life insurance. Life insurance protects against the risk to the life of the person who becomes the insured (). Life insurance can be divided into two kinds: life insurance for an individual is called single-life insurance, while insurance for more than one person is joint-life insurance. One type of combined life insurance is joint-life insurance. Joint-life is a status in which the death rule is a combination of two or more lives, such as husband and wife, or parents and children.

An institution that provides such protection for its customers is called an insurance company. According to (), the insurance company publishes a contract that promises to pay a specified amount, equal to or less than the loss, during the policy period. The contract starts from the signing of the insurance policy, at which point the risk is transferred from the insured (customer) to the guarantor (company). Both sides of the insurance policy must obey the rule that the customer must pay a premium, while the company must pay the surviving family (heirs) if a risk occurs; this right is called a benefit. A premium paid all at once when the insurance contract is agreed is called a single premium, but in practice single premiums are rarely used because they are too expensive to pay at once. To make things easier for customers, premium payments can instead be made every year; this payment is called the annual premium. According to (Matvejevs & Matvejevs, 2001), family insurance contracts are applied more often than individual insurance, and family insurance is joint-life insurance. They determine the annual premium for two people on discrete joint-life insurance, and () also determine the annual premium for three people with the same type of insurance.
The discrete premium calculation means that the benefit is paid at the end of the year in which the insured dies, so for a death that occurs at the beginning of a year the heirs must wait a long time before receiving the benefit. Other research on determining the annual premium with discrete calculations can be seen in () and (). Therefore, the authors are motivated to determine the formulation of the annual premium of joint-life insurance for two and three people continuously. The continuous premium calculation means that the benefit can be paid directly when the insured dies, with the same amount of benefit whenever the death occurs within the specified policy period; this differs from the two previous papers, in which the benefit depends on the premiums that have already been paid. This study aims to determine, continuously, the annual premium of joint-life insurance for two people (husband and wife) and for three people (husband, wife, and son).

METHOD
An insurance contract for two people consists of a married couple, with the husband aged x and the wife aged y. Suppose both remain alive until the insurance contract expires after n years, i.e., they reach ages x + n and y + n; in that case they receive a benefit. If one of the couple dies before the contract expires, for example x dies, then y, who remains alive until the end of the contract, receives a benefit every year for the rest of her life starting in that year; vice versa, if y dies, then x receives the benefit. If both x and y die before the contract expires, the heirs receive a benefit as soon as they both have died.

Furthermore, the insurance contract for three people consists of a husband, wife, and son, aged x, y, and z respectively. Suppose all three remain alive until the end of the contract after n years, i.e., they reach ages x + n, y + n, and z + n; then they receive a benefit. If one of the three participants dies before the contract period ends, for example x dies while y and z remain alive until the end of the contract, then y and z receive a benefit; if y dies, then x and z receive a benefit; and if z dies, then x and y receive a benefit. Meanwhile, if x and y die while z remains alive until the end of the contract, then z receives a benefit every year for the rest of his life starting in that year; if x and z die, then y receives the benefit; and if y and z die, then x receives the benefit. If all participants die in the same year before the contract expires, the heirs receive a benefit as soon as the three of them have died.

In this research, the formulation is calculated on the basis of the 2011 Indonesian mortality table, and it uses the symbols of that table. The joint-life survival probability is the probability that persons aged $x_1, x_2, x_3, \ldots$ all remain alive n years later. For a joint-life of two people it is denoted ${}_{n}p_{xy}$ and formulated as follows:

${}_{n}p_{xy} = {}_{n}p_{x} \cdot {}_{n}p_{y} = \dfrac{l_{x+n:y+n}}{l_{xy}}$

whereas for three people it is denoted ${}_{n}p_{xyz}$ and formulated as follows:

${}_{n}p_{xyz} = {}_{n}p_{x} \cdot {}_{n}p_{y} \cdot {}_{n}p_{z} = \dfrac{l_{x+n:y+n:z+n}}{l_{xyz}}$

The force of mortality is the rate at which a person currently aged x dies between ages x and x + ∆x, given that he is alive at age x; it is denoted $\mu(x)$ and approximated from the table by $\mu(x) \approx \dfrac{l_{x-1}-l_{x+1}}{2\,l_{x}}$. Meanwhile, for two people aged x and y and for three people aged x, y, and z it is denoted $\mu(xy)$ and $\mu(xyz)$ respectively, and formulated as follows:

$\mu(xy) \approx \dfrac{l_{x-1:y-1}-l_{x+1:y+1}}{2\,l_{x:y}}, \qquad \mu(xyz) \approx \dfrac{l_{x-1:y-1:z-1}-l_{x+1:y+1:z+1}}{2\,l_{x:y:z}}$

A single premium is a premium that is paid directly when the insurance contract is agreed upon.
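The explicit expressions for the joint-life insurance and annuity values introduced in the following paragraphs did not survive in this copy of the paper. As a reference only, a minimal sketch of the standard continuous joint-life forms they normally take is given here, under the assumption that the paper follows conventional actuarial notation; this is a reconstruction, not a quotation from the paper.

\[
\bar{A}^{\,1}_{xy:\overline{n}|} = \int_{0}^{n} v^{t}\, {}_{t}p_{xy}\, \mu_{xy}(t)\, dt \qquad \text{($n$-year joint-life term insurance, paid at the first death)}
\]
\[
\bar{a}_{xy:\overline{n}|} = \int_{0}^{n} v^{t}\, {}_{t}p_{xy}\, dt \qquad \text{($n$-year temporary joint-life annuity)}
\]
\[
{}_{n|}\bar{a}_{xy} = \int_{n}^{\infty} v^{t}\, {}_{t}p_{xy}\, dt = v^{n}\, {}_{n}p_{xy}\, \bar{a}_{x+n:y+n} \qquad \text{(joint-life annuity deferred $n$ years)}
\]

Here $v = (1+i)^{-1}$ is the discount factor; the three-life versions follow by replacing the status $xy$ with $xyz$.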
Term life insurance provides the benefit if the insured dies while the contract is still in force; if he is still alive at the end of the contract, he does not receive anything. The single premium of joint-life term insurance for two people aged x and y, with a contract of n years, is denoted $\bar{A}^{\,1}_{xy:\overline{n}|}$; for three people aged x, y, and z with an insurance contract of n years it is denoted $\bar{A}^{\,1}_{xyz:\overline{n}|}$.

Pure endowment life insurance provides the benefit if the insured remains alive until the end of the insurance contract; if he dies before then, nothing is paid. The single premium of joint-life pure endowment insurance for two people aged x and y with a contract of n years is denoted $A_{xy:\overline{n}|}^{\;1}$ and formulated as follows:

$A_{xy:\overline{n}|}^{\;1} = v^{n} \cdot {}_{n}p_{xy}$

Meanwhile, for three people aged x, y, and z with an insurance contract of n years it is denoted $A_{xyz:\overline{n}|}^{\;1}$ and formulated as follows:

$A_{xyz:\overline{n}|}^{\;1} = v^{n} \cdot {}_{n}p_{xyz} = v^{n} \cdot \dfrac{l_{x+n:y+n:z+n}}{l_{xyz}}$

An annuity is a series of payments over a certain period. A life annuity is a series of payments made while a person is still alive, so the payments are tied to someone's life or death. The joint-life term annuity for two people aged x and y over n years is denoted $\bar{a}_{xy:\overline{n}|}$; for three people aged x, y, and z with an insurance contract of n years it is denoted $\bar{a}_{xyz:\overline{n}|}$. An annuity whose payments start only after a period of time has passed is called a deferred annuity. A joint-life annuity deferred n years for two people aged x and y is denoted ${}_{n|}\bar{a}_{xy}$, and for three people aged x, y, and z it is denoted ${}_{n|}\bar{a}_{xyz}$.

The premium sought in this study is determined by the principle of equivalence, in which the loss of the guarantor is defined as a random variable of the cash benefit. According to, there is a healthy and fair guideline that "the value of the future premiums = the value of the future benefits", so that neither the insured nor the guarantor is harmed or advantaged. Translating the loss function with the equivalence principle leads to the conclusion that the value of the benefits must be equal to the value of the annual premiums; from this equivalence the amount of the annual premium of the insurance can be determined.

RESULTS AND DISCUSSION
This chapter determines the value of the annual premium and the value of the benefit paid by the guarantor, because both are used to obtain the annual premium formulation of joint-life insurance for two and three people based on the principle of equivalence with continuous calculation; the value of the benefit is the amount paid out by the insurer. Case examples are provided to facilitate understanding of the formulation, as follows:
1. Insurance starting age. The insurance starting age is the age at which a person registers to join the insurance. In this case, for two people (husband and wife) and for three people (husband, wife, and son), the ages are: husband (x) 50 years, wife (y) 45 years, and son (z) 15 years.
2. Insurance coverage period. The insurance coverage period is the length of time the insurance contract is valid. In this case, the insurance coverage period is n = 1-10 years.
3. Interest rate. The interest rate used in this case is 5%.
4. Benefit. If all participants are still alive at the end of the contract, they are given a benefit of 1 rupiah. If one of the participants dies, the premium payment stops, and the participants who remain alive until the end of the contract receive a benefit of 1 rupiah every year for the rest of their lives. Also, if all participants die before the contract ends, the heirs receive a benefit of 1 rupiah. (Each benefit in the formulation is therefore set equal to 1.)

Based on the annual premium formulation for two and three people and the case example above, the values of the annual joint-life insurance premium for two and three people are shown in Table 1, and the corresponding graph is shown in Figure 1. From Table 1 it can be seen that the amount of the joint-life annual premium, with a benefit of 1 rupiah for each possibility, is strongly influenced by the duration of the insurance contract and by the number of people covered. The annual premium for three people is slightly more expensive than for two people, and the shorter the period, the larger the annual premium that must be paid. For example, for a 10-year contract period, two people must pay a premium each year of 0.2797692, while three people pay 0.2935317 for the same benefit amount; seen this way, covering three people is relatively advantageous, because the difference in the annual premium is small. Meanwhile, based on Figure 1, over contract durations of n = 1-10 years the annual premium value decreases every year: for a short-term insurance contract the annual premium is much greater than for a long-term one. The annual premium is also affected by the person's age and by the number of insurance participants, since the annual premium for three people is slightly greater than for two people; however, the difference does not look large because of the survival probabilities involved, and this can be taken into consideration when choosing which type of joint-life insurance, for 2 (two) or 3 (three) people, is needed or more profitable.

CONCLUSIONS AND SUGGESTIONS
Based on the results and discussion in the previous chapter, it can be concluded that the annual joint-life premium for two and three people is determined by the continuous equivalence principle, from which the formulations of the joint-life annual premium for 2 (two) and for 3 (three) people are obtained. The results of the calculations imply that the amount of the joint-life annual premium for two and three people is affected by the mortality table values, the annuity, and the single premium. Moreover, the annual premium for three people is slightly greater than for two people, which can be a consideration in choosing the type of insurance that is desired or more profitable. It can be concluded that the factors that influence the amount of the annual premium are the number of insured, the duration of the insurance contract (period), the age when the policy is signed, the interest rate, life chances, the force of mortality, and the amount of the benefits to be provided.
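To make the equivalence-principle calculation concrete, the following minimal sketch prices only the n-year joint-life pure endowment part of the product for two lives, using a discrete yearly approximation of the continuous quantities. The l_x values, function names, and ages are illustrative placeholders, not figures from the 2011 Indonesian Mortality Table, so the printed number will not match Table 1.

"""Equivalence-principle sketch: annual premium = single premium of the
benefit divided by a temporary joint-life annuity-due (toy data)."""

def joint_np(lx_a, lx_b, x, y, n):
    """n-year joint survival probability for two lives aged x and y."""
    return (lx_a[x + n] / lx_a[x]) * (lx_b[y + n] / lx_b[y])

def annual_premium_pure_endowment(lx_a, lx_b, x, y, n, i=0.05, benefit=1.0):
    v = 1.0 / (1.0 + i)
    # Single premium of the n-year joint-life pure endowment: benefit * v^n * nPxy
    single_premium = benefit * v ** n * joint_np(lx_a, lx_b, x, y, n)
    # Temporary joint-life annuity-due: one unit at the start of each year both are alive
    annuity = sum(v ** t * joint_np(lx_a, lx_b, x, y, t) for t in range(n))
    return single_premium / annuity

if __name__ == "__main__":
    # Toy l_x column (index = age); replace with the real table to reproduce the paper.
    lx = [100000.0 * (0.99 ** age) for age in range(121)]
    print(round(annual_premium_pure_endowment(lx, lx, x=50, y=45, n=10), 7))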
Assessing Urban Accessibility in Monterrey, Mexico: A Transferable Approach to Evaluate Access to Main Destinations at the Metropolitan and Local Levels

Cities demand urgent transformations in order to become more affordable, livable, sustainable, walkable and comfortable spaces. Hence, important changes have to be made in the way cities are understood, diagnosed and planned. The current paper puts urban accessibility at the centre of the public policy and planning agenda, as a transferable approach to transform cities into better living environments. To do so, a practical example of the City of Monterrey, Mexico, is presented at two planning scales: the metropolitan and the local level. Both scales of analysis measure accessibility to main destinations using walking and cycling as the main transport modes. The results demonstrate that the levels of accessibility at the metropolitan level are divergent, depending on the desired destination as well as on the planning processes (both formal and informal) of different areas of the city. At the local level, the Distrito Tec Area is diagnosed in terms of accessibility to assess to what extent it can be considered part of a 15 minutes city. The results show that Distrito Tec lacks the desired parameters of accessibility to all destinations to be a 15 minutes city. Nevertheless, there is a considerable increase in accessibility levels when cycling is used as the main travelling mode. The current research project serves as an initial approach to understanding the accessibility challenges of the city at different planning levels, by providing useful and disaggregated data. Finally, it concludes by providing general recommendations to be considered in planning processes aimed at improving accessibility and sustainability.

Introduction
Currently, planet Earth and all its inhabitants are living through the key moment to stop climate change. According to UN-Habitat, cities are responsible for more than 60% of the world's greenhouse emissions and are thus an important part of the problem. Nevertheless, the authors of the present paper believe that cities are also the solution, but only if immediate and effective actions are taken within them. Urban planners and decision-makers have been exploring new ways to reconfigure and build cities, with the objective of making urban realms more livable, safe, affordable, environmentally friendly and sustainable. However, the rate of change is not fast enough to meet the needs to stop climate change. Certain cities are advancing faster than others; therefore, the current project analyses to what extent the City of Monterrey, Mexico, is advancing in this transition.

In response to the current increasing mobility problems in the Monterrey Metropolitan Zone (MMZ), the present study proposes the analysis of urban accessibility using the Urban Mobility Accessibility Computer (UrMoAC) software developed by the German Aerospace Centre (DLR). The analysis was performed at two scales: (i) the metropolitan level (the entire MMZ) and (ii) the local level, using the Distrito Tec Area (the neighbourhoods in the vicinity of the Tecnológico de Monterrey university campus). Both scales have the objective of measuring accessibility to destinations that most people frequently visit: schools, main employment centres, supermarkets and hospitals. Nevertheless, each scale of analysis has a specific scope and methodology to evaluate accessibility, as will be explained in Section 2.
It is important to mention that the whole social, economic and, to a certain extent, political structure of the city behaves as a metropolitan area, despite the actual political divisions. Therefore, it is key to comprehend the urban needs at the different scales to develop public policies and interventions that respond to specific issues. In contrast to traditional urban planning theories and procedures, taking an approach from urban accessibility has been demonstrated to have an incredible potential to better understand the systemic and complex nature of cities. According to, accessibility can be defined as the "extent to which the land use-transport system enables (groups of) individuals or goods to reach activities or destinations by means of (combination of) transport mode(s)". Furthermore, it has been argued that accessibility consists of four components: (I) transport, (II) land use, (III) temporal and (IV) individual. Accessibility concentrates on studying and evaluating how people access, or not, the different opportunities (destinations) of the urban realm, by taking into account the distributions of activity locations and the available transport modes within a given area. It is important to mention that there are certain variables that affect the peoples' behaviour when accessing opportunities in the city, which go beyond the availability of activity locations and transport alternatives. These variables can be related to social preference (e.g., a family prefers one school over another), demographic (group ages of the population in a specific area), entitlement to health services (whether a person has the right to receive attention at a specific hospital or not), or level of service of the opportunities (this has to do with opening hours, capacity and type of service provided), among others. It is crucial to fully analyse these variables before suggesting or making any interventions in the land use or mobility network of a given area, as they drastically affect the level of accessibility for the local population. Nevertheless, to do so goes beyond the scope of work of the current research project, which aims to provide a preliminary diagnosis of urban accessibility at the metropolitan and local level and relate it, only for the local level, to the 15 minutes city planning approach. "La ville du quart d'heure" or 15 minutes city is an urban concept developed by Carlos Moreno where he imagines a city where every urban dweller can access her/his daily necessities within a maximum of 15 min of travel time by foot or bicycle. The 15 minutes city implies an urban shift from car-oriented cities to proximity-based cities, upon the idea that "quality of urban life is inversely proportional to the amount of time invested in transportation". In this sense, the 15 minutes city concept emphasizes urban planning at the local (neighbourhood) level and concentrates on promoting accessibility rather than mobility. The focus is on diversifying land use to guarantee that every part of the city has enough green space, housing, public services, recreation areas, and jobs, at the local level, instead of developing more or higher capacity transport networks. Micromobility plays a key role in 15 minutes cities, as it promotes people's ability to access all local opportunities by walking or by using a bicycle. Hence, there is a special interest in creating open streets that foster activity in the public space and make walking and cycling safe and comfortable. 
As a result of recent planning trends such as the 15 minutes city and a transition of transport modes from vehicles to active mobility (such as walking, cycling, etc.), accessibility has taken a spotlight in the planning paradigm by encouraging ideas such as mobilising people rather than motorised vehicles; creating access, not mobility; and thinking first at the local level. To analyse to what extent Mexican cities are prepared to become accessible or 15 minutes cities and have a safe, comfortable, and realistic transition to active mobility, it is necessary to understand the transport modes and the availability of activity locations, as well as cultural, economic, political and social factors that may promote or oppose such transformation. Hence, the use of urban accessibility measures represents an adequate approach to comprehend travel times and distances and their implications in a social, economic, and environmental dimension. The results represent a key input to detect accessibility issues and develop solutions for them. They also are usable knowledge that can be easily transmitted to the local population to socialise and cocreate interventions, projects, and public policies that can transform the local environment into a safer, more livable, affordable, accessible and sustainable place. Finally, it must be said that large Latin American cities have specific socio-economic and spatial characteristics that make them very different from cities of similar sizes in other parts of the world. Recognising and working with these specific characteristics has been an important part of the development of the project, to suggest and reach results that are relevant and tailor-made for the nature and context of the MMZ and Distrito Tec. Software Urban Mobility Accessibility Computer (UrMoAC) is an open-source tool developed by the German Aerospace Centre (DLR) to compute accessibility measures within a geographical area, which can be aggregated for variable areas. UrMoAC can calculate, among others, the minimum time and distance required to reach the nearest specified destination (places such as hospitals, schools, parks, industries, etc.) using a specific transport mode or a combination of them (walking, cycling, motorised vehicles, and public transport), while taking into consideration the real road network and its constraints (speed limits, directions, and modes of transport). UrMoAC is a command-line tool written in the Java programming language, which reads its inputs from a PostgreSQL/PostGIS database. UrMoAC was selected as the most suitable tool to perform the required accessibility analysis since it provides accurate and disaggregated results based on real distances and close-to-reality transportation behaviours. Furthermore, the open-source QGis software was used to visualise the information obtained from UrMoAC. Data Processing The current project was based on two scales of analysis. The first is at the metropolitan level, which aims to highlight and demonstrate the diverging levels of accessibility in the MMZ and relate them to relevant urban planning paradigms and socio-economic indicators such as the marginalisation levels. The second is at the local level, considering the Distrito Tec Area. Here, the objective was to analyse the accessibility levels to main employment centres, schools, supermarkets, and hospitals, travelling by foot or bicycle, to determine whether the area can be, or not, considered as a 15 minutes city. 
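Before any accessibility run, origins have to be derived from the block geometries and loaded into the PostgreSQL/PostGIS database that UrMoAC reads. The following is a purely illustrative Python sketch of that preprocessing step; the file name, ID column and coordinate reference system are assumptions, and the exact input schema expected by UrMoAC is described in the tool's own documentation rather than reproduced here.

"""Derive one origin point per INEGI urban block with geopandas (sketch)."""
import geopandas as gpd

def blocks_to_origins(blocks_path: str, id_column: str = "block_id") -> gpd.GeoDataFrame:
    blocks = gpd.read_file(blocks_path).to_crs(epsg=32614)   # metric CRS; UTM 14N covers Monterrey
    origins = blocks[[id_column, "geometry"]].copy()
    origins["geometry"] = origins.geometry.centroid           # one origin point per block
    return origins

# Example usage (paths are placeholders):
# origins = blocks_to_origins("mmz_blocks.shp")
# origins.to_file("origins.gpkg", layer="block_origins", driver="GPKG")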
For the current analysis, UrMoAC was programmed to start the measurement from urban blocks (origins) to several different destinations (hospitals, schools, supermarkets, and main employment centres). It is important to mention that there are socio-economic factors, such as social preference and willingness to pay, that can affect people's decision to access one destination over another. Specifically, higher-income social groups tend to prefer private hospitals and schools to fulfil their health and education needs. In contrast, public services usually have access limitations related to the capacity of the building, level of service, entitlement, opening hours, working days, and demand, among others. Including all these variables in the analysis goes beyond the scope of work of the current research project, as it aims to provide a preliminary approach to urban accessibility based on the supply of land uses and transport networks at different scales. In this sense, only public hospitals and schools were taken into account, embracing the idea that public establishments are available for everyone regardless of their income.

The transport modes used for the computation were either walking (with an average speed of 3.6 km/h) or bicycle (average speed of 12 km/h), to analyse to what extent the MMZ is ready to shift from car-oriented mobility to generating accessibility through micromobility and proximity-based land use planning. The aforementioned average speeds for each transport mode are the ones proposed by the UrMoAC tool. Public transport was not considered, as it requires General Transit Feed Specification (GTFS) data files to run the accessibility computation, which are not currently available for the areas of study analysed in this work.

As mentioned before, the computation was designed to start every measurement from urban blocks. Depending on the measurement desired, the destinations would vary, considering either the closest available destination, a specific number of destinations that can be reached (e.g., the five nearest hospitals), or a 15 min travel time constraint to see how many destinations are available within that travel time. The reasons for generating different measurements were to demonstrate UrMoAC's capacity to compute diverse and relevant results, to integrate social and capacity factors into the measurement (e.g., one school alone cannot fulfil the entire education demand of a given district; therefore, more than one has to be included), and to analyse the results from different approaches that provide useful insights about the accessibility of the study area. For the results to be legible, understandable, and useful, the individual results from each block were aggregated into Basic Geostatistical Areas (AGEBs, by their acronym in Spanish) and then mapped. The AGEBs are the basic aggregation areas used by the National Institute of Statistics and Geography (INEGI, by its acronym in Spanish) census; therefore, they are a unit of analysis widely used for mapping purposes in Mexico.

For the realisation of this project, the following data were gathered: a compendium of the urban geostatistical cartography of Nuevo León (Mexico) state (from 2016, as it is the most recent version), the urban block subdivision of the 18 municipalities that form the MMZ, the National Statistical Directory of Economic Units 2019 (DENUE, by its acronym in Spanish), and the OpenStreetMap road network from 2021.
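Once the tool has produced one travel-time value per origin block, the block-to-AGEB aggregation described above can be reproduced with a few lines of pandas. This is a sketch with assumed column names; UrMoAC's actual output format may differ.

"""Aggregate per-block travel times into AGEB averages (sketch)."""
import pandas as pd

def aggregate_to_agebs(results: pd.DataFrame, lookup: pd.DataFrame) -> pd.DataFrame:
    # `results`: one row per origin block with the travel time (seconds) to the nearest destination.
    # `lookup` : maps every block to the AGEB that contains it.
    merged = results.merge(lookup, on="block_id", how="left")
    agg = (merged.groupby("ageb_id", as_index=False)["travel_time_s"]
                 .mean()
                 .rename(columns={"travel_time_s": "avg_travel_time_s"}))
    agg["avg_travel_time_min"] = agg["avg_travel_time_s"] / 60.0
    return agg

# Toy example:
# results = pd.DataFrame({"block_id": [1, 2, 3], "travel_time_s": [300, 540, 900]})
# lookup  = pd.DataFrame({"block_id": [1, 2, 3], "ageb_id": ["A", "A", "B"]})
# print(aggregate_to_agebs(results, lookup))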
Table 1 shows the databases of the variables (municipalities, blocks, AGEBs, DENUE, and roads), the institutions, and the URLs for all the gathered data in this study. The DENUE is elaborated by INEGI and offers a large compendium of all the economic activities registered within the Mexican territory, which can be categorised according to the North American Industry Classification System (NAICS). The NAICS is a standard created to allow high comparability in business statistics among North American countries (Canada, USA, and Mexico). For this study, the categories displayed in Table 2 were analysed. For the metropolitan-level analysis, the following accessibility measures were performed: number of employments centres with 51 or more employees accessible within a 15 min travel time, the distance and time to the nearest public school (including all levels from preschool to high school), and nearest hospital (including codes 622112, 622212 and 622312). For the Distrito Tec's analysis, the computations were: number of employment centres with 51 or more employees accessible within a 15 min travel time, the number of retail trade supermarkets available within a 15 min travel time, and the time required to access the five nearest schools or hospitals; here, an individual study was conducted for each category mentioned in Table 2. The aforementioned measurements were evaluated under the following (see Table 3) criteria, which were determined in compliance with each transport mode's average speed (according to UrMoAC) and the cumulative opportunity accessibility metric (where more opportunities in the surroundings relate to higher accessibility levels). It is important to highlight the complexity of a standardised rating scale for the number of destinations reachable within a specific time or distance, when comparing different services, as socio-economic and demographic features must be considered to develop an accurate measurement. Therefore, the current research project performed a preliminary approach and diagnosis by evaluating the number of destinations available within a 15 min travel time either by bicycle or walking. Comparisons of accessibility across different services were not evaluated. With 74 metropolitan areas in 2015, as declared by the National Population Council (CONAPO, by its acronym in Spanish), the conformation of large metropolitan areas has been one of the main urbanisation trends in Mexico since the 1950s Century. The creation of metropolitan areas has been fuelled by the centralisation of economic activities, such as industries and services, in large-and, to a lower extent, middle-sized cities. These cities grew demographically from rural to urban migration, led by people who were looking for better employment opportunities and living conditions. Yet, the metropolitan areas have failed to achieve an urban development that can cope, in a sustainable manner, with the social, economic, and environmental demands. The marginalisation index is an indicator used by CONAPO that serves to demonstrate how metropolitan areas have divergent conditions and how planning paradigms and processes have failed to create a city where all inhabitants have the same conditions and access to opportunities. It relates to a lack of social opportunities and the absence of the capacity to generate them, as well as to deprivation and inaccessibility of basic goods and services. This indicator is based on 10 different socio-economic factors : 1. 
Percentage of the population within 6 and 14 y who do not attend school;
2. Percentage of the population 15 y or older who have not completed their basic education;
3. Percentage of the population without entitlement to access the public health system;
4. Percentage of dead children from women between 15 and 49 y;
5. Percentage of particular occupied households without running water;
6. Percentage of particular occupied households without drainage connected to the public network or a septic tank;
7. Percentage of particular occupied households without a toilet connected to running water;
8. Percentage of particular occupied households with dirt floors;
9. Percentage of particular occupied households with some level of overcrowding;
10. Percentage of particular occupied households without a fridge.

Figure 2 shows the marginalisation levels in the MMZ according to the CONAPO, as well as the location of Distrito Tec within the MMZ. It can be observed that most of the AGEBs in the MMZ lie within the low (22.91%) and very low (41.42%) marginalisation levels, with few classified as medium (20.76%), high (4%), or very high (1.63%); 9.19% of the AGEBs are not categorised. It is worth mentioning that the regions with higher marginalisation levels are located on the outskirts of the MMZ, usually places with low levels of accessibility. Clearly, the MMZ is not an exception to the challenges that Mexican metropolitan areas face. There is room for improvement in terms of creating central and well-located social housing, diversifying economic clusters throughout the city to promote decentralisation, and rethinking urban planning concerning land uses to guarantee enough supply to satisfy local needs.

Furthermore, changing demographics have to be taken into account. Figure 3 displays the population distribution in the MMZ as well as in Monterrey Municipality. The MMZ population has a pyramidal shape where most of its inhabitants are 40 y or younger. Even though the distribution of Monterrey Municipality is similar to that of the MMZ, there is a faster shift towards a more rectangular shape, meaning that the city is facing stationary growth. This population trend will result in a decreasing demand for infrastructure related to children (such as schools and kindergartens) and an increase in demand for infrastructure related to elderly persons (such as hospitals and retirement/nursing homes).

The analysis at the metropolitan level has the objective of demonstrating the performance of the MMZ in terms of accessibility to different location types. The results will allow urban planners and policymakers to identify underperforming areas in relation to specific variables. This information can then be translated into a hierarchy-based intervention plan, starting with the areas with the lowest levels of accessibility. Hence, the analysis at the metropolitan level does not relate to the 15 minutes city concept, as that would require a broad analysis of each area of the city and the addition of more variables (destinations). Nevertheless, some of the results can provide useful insights and findings to determine whether certain areas of the city meet the 15 min travel time parameters for a specific variable. The current research project departed from analysing accessibility at a metropolitan scale, to understand and assess the socio-spatial relationships that drive the city. Hence, the two major destinations that promote travel were studied: main employment centres and public schools.
Additionally, a variable for public hospitals was added, as it was considered that access to health is key to promoting better living conditions. Distrito Tec Area By considering a smaller scale of analysis, the level of complexity of the area reduces, allowing incorporating new variables and destinations within the scope of work of the current research project. Therefore, the local area analysis, considering Distrito Tec (See Figure 1) as the study area, can take a preliminary approach to analyse to what extent Distrito Tec meets the requirements of accessibility to different destinations, using walking or cycling as the transport modes, to be considered a 15 minutes city. It is important to mention that Distrito Tec is located within a central area of the city. Hence, in comparison to other areas, it is considered privileged in terms of its surroundings (see Figure 2). Based on this, it is expected that the levels of accessibility within such an area will be higher than the ones of suburban areas. The neighbourhoods surrounding the Distrito Tec Area are heterogeneous and affect the levels of accessibility of the area. The same counts for the mobility patterns. Therefore, it is important to give a brief characterisation of each one of them. According to Figure 4, the northern area has very low to medium marginalisation levels and is mainly residential with single-family zoning. The commercial activities are located on the main avenues and offer some additional services such as community schools. The eastern area has faced important changes since the 1950s, as it used to have a brick factory that was dismantled to allow several urban renewal processes to happen. There is a contrast between the socio-economic levels of the population. In the neighbourhoods in the vicinity of the brick factory, a low-income worker population predominates; in contrast, on the opposite side of the river, there are high-income gated residential communities. The land use variety is low, with most being residential with single-family zoning. The small numbers of commercial places and services tend to be located around the main avenues where vast shopping centres have been built (e.g., Nuevo Sur shopping mall). The western area is the oldest, dating to the 1920s. It is characterised by being a middle-income residential area with single-family zoning. Through time, a heterogeneous land use mixture has emerged in the area, having many commercial activities and services located on the main avenues. The polygon of Distrito Tec was designed based on the neighbourhoods' boundaries, which are located in the vicinity of the university campus. However, the AGEBs do not fully correspond to such limits; thus, some AGEBs will not be completely encompassed within the polygon. For the current analysis, 11 AGEBs were considered as part of the Distrito Tec Area, as presented in Figure 4. The first set of variables to be analysed is access to public kindergartens and schools. For this section, a specific accessibility measure was computed for each education level to understand how the local people, depending on their age, are able of accessing education places. Considering that the closest destination might not be able to address the entire demand of the AGEB, as well as social preference factors, the accessibility measures were evaluated for the five closest locations (kindergartens, primary, secondary and high schools). 
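The "five closest locations" measure used for the Distrito Tec analysis can be illustrated with a small helper: given the travel times from one origin to every destination of a category, keep the five smallest and average them. This is an interpretation of the measure for illustration only, not UrMoAC's internal code.

"""Average travel time to the n nearest destinations (toy sketch)."""
import heapq

def mean_time_to_n_nearest(travel_times_min, n=5):
    nearest = heapq.nsmallest(n, travel_times_min)
    return sum(nearest) / len(nearest) if nearest else float("inf")

# print(mean_time_to_n_nearest([4.0, 12.5, 7.2, 30.1, 9.8, 6.4]))  # -> 7.98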
Figure 5 shows the average distance from every AGEB to the closest public schools using bicycles as the transport mode. It can be observed that most of the AGEBs have a public school in a range of less than 1000 m. This is due to the high density of schools (especially primary and secondary) that most areas of the MMZ have. Building schools has been one of the priorities of the authorities, as there is a high demand for them given that Mexican demographics maintain a pyramid structure. This means that there is a considerable amount of people younger than 30 y, as observed in Figure 3a,b, which shows the age distribution of the population of the MMZ and Monterrey, respectively, according to the INEGI 2020 census. Monterrey Metropolitan Area Interestingly, many suburban areas that are not even physically integrated into the city have high accessibility levels (e.g., some northern and southwest regions). This phenomenon reduces the travel dependence from suburban areas to central areas to access education. The south and southeast of the city have the lowest access levels, which relates to the fact that both are high-income areas where residents tend to prefer private schools and that some of these areas were designed and built as gated communities with only residential land use. Figure 6 compares the performance of the transport modes (bike and walking) in accessing the closest public school at the metropolitan level. The first important thing to highlight is that despite the transport mode used, 75% of the AGEBs can access a school in less than 10 min. This is related to the high density of schools at the metropolitan level, as observed in Figure 5. By comparing the transport modes, it becomes evident that cycling achieves considerably higher levels of accessibility: while it takes 9.7 min for 75% of the AGEBs to reach the closest school by walking, for the same percentage, it only takes 3.5 min by cycling. It is important to mention that Figure 5 is just a preliminary approach to understand accessibility to schools; this variable will be disaggregated in the following section, to analyse how access varies depending on each school's education level. Figure 7 presents the average travel time by AGEB to the closest public hospital by foot. The map shows a considerable divergence of accessibility between AGEBs, with many going beyond the 30 min travel time, mainly due to the low density or nonexistence of hospitals, predominantly in suburban areas. Central areas show high access to hospitals. This can be attributed to the historical conformation of the city, which started at the centre and gradually sprawled. Therefore, central areas have existed for longer periods, allowing the authorities to implement through time the necessary health infrastructures in these areas. In contrast, many suburban areas are relatively new, and some have lacked formal planning procedures (people built homes without having any official permit from the government or public administration), while others have been designed following urban paradigms that purposely create isolated and monofunctional areas, such as gated communities. The southeast of the map represents an excellent example of how planned urban communities can lack access criteria in their planning procedures and how these problems are not exclusive to lower-income areas or highly marginalised ones that have been built without any formal planning. In this sense, what some people think could be a solution (urban planning) can trigger a problem. 
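Comparisons such as the one quoted above (the travel time within which 75% of the AGEBs reach their nearest school: 9.7 min walking versus 3.5 min cycling) reduce to a simple percentile over the aggregated AGEB results. The sketch below shows one way this could be computed; the series contents and names are assumptions, not the study's data.

"""Travel time below which a given share of AGEBs falls (sketch)."""
import pandas as pd

def time_covering_share(ageb_times_min: pd.Series, share: float = 0.75) -> float:
    return float(ageb_times_min.quantile(share))

# walking = pd.Series([...])  # average minutes to the nearest school per AGEB, on foot
# cycling = pd.Series([...])  # same measure by bicycle
# print(time_covering_share(walking), time_covering_share(cycling))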
The COVID-19 pandemic has demonstrated how important it is to have a robust and accessible healthcare system. Therefore, poor access to health services should be unacceptable, as observed in many AGEBs of the MMZ. The MMZ needs to urgently tackle this problem by developing new public health centres in any area of the city that does not meet the desirable access parameters. Figure 8 compares the performance of both transport modes (walking and bicycle) to access the closest public hospital. For this specific variable, the differences between one mode and the other are more evident. This is related to the lower number of destinations available at the metropolitan level (in contrast with Figure 6, the number of hospitals is evidently lower than that of schools). Hence, faster transport modes, such as bicycles, will outperform walking by a considerable margin: for 75% of the AGEBs to reach the closest public hospital travelling by foot, it takes 46.8 min; in contrast, it only takes 14.7 min by bicycle. Figure 9 shows the number of economic activities that employ 51 persons or more (main employment centres) that can be accessed within a 15 min travel time by bicycle. It can be appreciated that most of the AGEBs with higher levels of accessibility (+73 destinations) are located in the central areas of the city, with a gradual decline towards the outskirts. At first sight, such a result might be perceived as unusual, considering that the MMZ is an industrial city and that most industries are located in suburban areas. Nevertheless, industries usually have very large complexes that occupy vast areas of land and make them separate from one another, reducing access to them in short periods. In contrast, even though they are smaller in size and number of employees, the economic activities that concentrate in central areas of the city (such as commerce and services) require smaller areas of land compared to industries and are located closer to each other. The reduced number of main employment centres at the northern, western, eastern, and southeast AGEBs implies that residents in those areas have to travel long distances and periods of time to central areas to work, relying heavily on motorised modes of transport. This traffic has high economic, environmental and social costs for the city and its residents, and can be tackled by creating new clusters of economic activities in areas with low access and by improving transport networks, especially public transport. It is important to isolate and analyse each particular case, as some of these areas lack main employment centres given that, historically, they were independent of the city, but the sprawl has reached them and forced them to integrate with its economy. Some other areas were lacking main employment centres on purpose, as many of the AGEBs located to the southeast of the MMZ were designed as high-income gated communities (especially golf clubs) with only residential land use. As all the residents from these areas require accessing the opportunities located within central areas, they generate demand for public infrastructure that is extremely expensive and that should not exist if planning policies were to assure land use diversity and accessibility parameters before authorising building permits. The three variables previously presented demonstrate how complex urban areas truly are and how inaccurate generalisations can be. 
The present exercise thus serves as a preliminary approach to demonstrate that every city faces many different and particular challenges. Consequently, any solution, in order to generate a beneficial change, must be based on a full understanding of the specific desired conditions in the city. The current research demonstrates that using accessibility measures is extremely useful for identifying specific needs from different areas of the city and that despite the accessibility measure chosen, the data obtained are relevant. The information generated can be used as a departure point for prioritising interventions and public policies depending on the performance of each area on a given variable. By doing so, much time and many resources can be saved. This first section of analysis concludes that the levels of accessibility at the metropolitan level are divergent and that they drastically vary, even in the same location, depending on the measured variable. The results are a relevant input to obtain a general diagnosis of the state of the MMZ in terms of accessibility. These results can also be compared to different socio-economic variables, such as marginalisation rates, to better understand the social, economic, political, and environmental factors that drive the city. This information represents a solid departure point to then analyse what happens at smaller scales, without ignoring that each area of the city is bound to metropolitan interrelationships. Figure 10 shows the average time by AGEB to access the five closest public kindergartens. As most children going to kindergartens are not able to cycle, the accessibility measure was computed for travelling by foot. Even though the number of kindergartens within the polygon is low (four), the surrounding areas at the north, west, and southwest have considerably high densities. In contrast, the areas in the east and the southeast have very low densities. Three of the eleven AGEBs within Distrito Tec meet the constraint to access the closest five kindergartens in a maximum of 15 min, with all three AGEBs in the 11-15 min class. The rest of the AGEBs have considerably higher access times with four AGEBs in the 16-20 min class, three in the 21-25 min class, and one in the 26-30 min class. The lowest accessibility levels are represented in the AGEBs located to the southeast of the polygon. This is due to the relatively low density of destinations in the surrounding areas, especially to the east. Distrito Tec Area As most AGEBs within the polygon do not meet the 15 min travel time criteria, it can be argued that in terms of access to kindergartens by foot, the Distrito Tec Area cannot be considered a 15 minutes city. Nevertheless, additional information should be collected to know how many children in the Distrito Tec Area are within the age range to go to kindergartens. In the case that this number is low, the computation could be made to the closest kindergarten, as this one could host all the local demand. Such a scenario would considerably increase the levels of accessibility. Figure 11 presents the average time by AGEB to access the five closest public primary schools by foot. Again, the density of destinations within the Distrito Tec Area is low, with only three schools. Yet, the surrounding areas at the north, northeast, west, and southwest have considerably high densities. Only three out of eleven AGEBs from the Distrito Tec Area meet travel times of a less than or equal to 15 min. 
These AGEBs are located on two opposite corners of the polygon (northeast and southwest) and can meet the 15 min parameters due to the high number of destinations in the outer nearby areas. In contrast, six AGEBs require a travel time of 16-20 min to reach the closest five destinations. The higher travel time is due to the low number of destinations in central areas of the Distrito Tec Area. Finally, two AGEBs reach the 21-25 min class. Given the previous results, it can be said that in terms of access to public primary schools, the Distrito Tec does not meet the requirements to be considered a 15 minutes city using only foot as the travel mode. However, by using other modes of transportation such as bikes, the travel times would drastically decrease and the accessibility level would increase.

Figure 11. Average time to the 5 closest public primary schools travelling by foot.

As the age groups that attend secondary and high schools usually are able to use bicycles as their main travel mode, the following computations were performed considering bicycle as the transport mode. It is important to bear in mind that the average speed of a person travelling by bike is three-times higher than travelling by foot. Therefore, a much higher number of destinations can be accessed within the same time. Figure 12 presents the average travel time by AGEB to reach the five closest public secondary schools travelling by bike. Even though the density of destinations in the Distrito Tec Area is extremely low (two) and remains low in the outer areas, the accessibility levels are rather high. All AGEBs can access the closest five destinations in a maximum of 10 min of travel time. However, when the computation was run using foot as the travel mode, the accessibility levels drastically dropped (see Figure 13). As the share of trips done by bike is very low for the MMZ (only 0.8%), the current research project assumed that most of the population attending secondary schools uses other transport modes. In this sense, it is important to highlight the benefits, in terms of accessibility, that fostering the use of bicycles as the main travel mode would bring. One key factor to promote the use of bicycles is to create the safety conditions that users require; thus, implementing cycling lanes and open streets would be mandatory. The surrounding areas at the south and east of Distrito Tec have an absolute lack of destinations. As a result, the population living there has to travel through Distrito Tec and other areas to reach a public high school. This phenomenon creates traffic in all the surrounding areas and other negative effects. Again, a computation considering walking as the main travel mode to reach public high schools was included (see Figure 15). The accessibility results are worrying, as all the AGEBs within Distrito Tec, and the surrounding areas, demand a ≥31 min travel time. Hence, regardless of which travel mode is taken, it can be argued that the Distrito Tec Area cannot be considered a 15 minutes city regarding access to public high schools. As seen in the previous examples, education has many additional variables that have to be taken into consideration, such as student capacity, local population age groups, etc., in order to entirely assess how accessibility behaves in the local area. Nevertheless, the accessibility measures are an excellent departure point for understanding the impact of transport modes and services' density on accessibility patterns.
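The other measure used in this section, the average travel time to the k closest facilities (Figures 10 to 12), can be sketched in the same spirit. The toy graph, travel times and facility names below are invented purely for illustration; the real computation runs on the full pedestrian network.

import networkx as nx

# Toy walking network with edge travel times in minutes (all values invented).
walk_net = nx.Graph()
walk_net.add_weighted_edges_from([
    ("origin", "a", 4.0), ("a", "b", 6.0), ("b", "kinder1", 3.0),
    ("a", "kinder2", 9.0), ("origin", "c", 7.0), ("c", "kinder3", 11.0),
    ("b", "kinder4", 8.0), ("c", "kinder5", 5.0), ("a", "kinder6", 15.0),
], weight="time_min")

facilities = [n for n in walk_net.nodes if str(n).startswith("kinder")]

def avg_time_to_k_closest(graph, origin, destinations, k=5, weight="time_min"):
    """Mean travel time from `origin` to its k nearest reachable `destinations`."""
    times = nx.single_source_dijkstra_path_length(graph, origin, weight=weight)
    reachable = sorted(times[d] for d in destinations if d in times)
    return sum(reachable[:k]) / min(k, len(reachable))

print(avg_time_to_k_closest(walk_net, "origin", facilities, k=5), "min on average")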
The next variable is access to commercial activities, specifically to supermarkets. Due to a lack of public markets, as seen in many other Mexican cities, Monterrey's population does most of their grocery shopping at supermarkets. Hence, supermarkets are one of the most frequented destinations among the local population, and access to them is key to guaranteeing satisfactory quality of life. According to the 2020 INEGI census, the average number of inhabitants per household in the MMZ is 3.18 people. This indicator is relevant given that people tend to buy groceries not for one, but for three persons when they go to the supermarket. For that reason, the bought items can be difficult or heavy to carry if the individual travels by foot or bicycle, so many people prefer to use motorised vehicles such as cars or public transport. Nevertheless, if accessibility to supermarkets is high, people would be encouraged to visit supermarkets more often than once per week and buy fewer items per visit so they could avoid the need to use a motorised vehicle. Access to health is probably one of the most important things for an urban dweller, and this has been exemplified throughout the entire COVID-19 pandemic. However, hospitals are very expensive infrastructures to build and run; hence, they usually have a metropolitan or regional radius of incidence in their planning processes (see Table 4). This section analyses what effects these planning processes have at the local level by measuring the average travel time by foot and bicycle to the closest public general hospital. Figure 16 presents the average travel time by AGEB to the closest public general hospital by foot. The map shows that the number of destinations is extremely low, and within the Distrito Tec Area, there are no public general hospitals whatsoever. The surrounding areas of the polygon show a lack of destinations, with the only exceptions at the northeast area with two available hospitals and the southwest area with also two destinations. As a result of the lack of destinations available within Distrito Tec and in most surrounding areas, eight out of eleven AGEBs of the Distrito Tec's polygon require over a 31 min travel time to reach the closest destination. From the three AGEBs left, a travel time between 26 and 30 min is needed to reach the closest public general hospital. These results highlight that the location of hospitals follows metropolitan and regional planning processes, where access at the local scale is not relevant. Additionally, as seen in Figure 16, most of the hospitals tend to cluster. This has to do with the complementarity that one can give to others. A consequence of these localisation patterns is that travel distances from all neighbourhoods that do not have a nearby hospital are considerably high and demand motorised transport modes. From the previous results, it can be said that Distrito Tec Area is far from becoming a 15 minutes city, if people travel only by foot. Figure 17 shows the average travel time by AGEB to the closest public general hospital travelling by bike. In comparison to Figure 16, the travel times, both for Distrito Tec and the surrounding areas, drastically decrease. Nonetheless, many AGEBs still are beyond the 15 min parameter. In the Distrito Tec Area, six AGEBs require a 6-10 min travel time, three an 11-15 min travel time and two a 16-20 min travel time. 
As nine out of eleven AGEBs meet the parameters to access a public hospital in under 15 min of travel time, the Distrito Tec Area could be considered a 15 minutes city. Here, it becomes evident that transport modes dramatically influence access levels and that the location and number of destinations are not the only relevant factors in terms of accessibility. There are two important lessons from the previous results. The first has to do with the importance that infrastructure at different scales has. One possible solution to improve the access to public general hospitals would be to increase the number of small clinics or medical centres throughout the city. By doing this, people could solve most of their basic health issues at local destinations and travel to large hospitals only when they have complications. The second lesson is that if public general hospitals need to have a metropolitan and regional scale and cluster in specific areas, they need to ensure high-quality transit connections. Building roads for cars is not the solution. As 44% of the daily trips from the MMZ are trips to work, employment is the driving force of mobility. Thus, creating employment opportunities for the population at the local level is key to reducing the population's travel distances and times. In this way, negative effects from traffic, such as accidents, pollution and low quality of life, would decrease. It is important to mention that employment exists in a vast variety of ways. For example, there are many small- to middle-sized economic activities (such as grocery shops or dry cleaners) that are located throughout the entire Metropolitan Area; however, each one of them employs a relatively low number of persons (less than 50). In contrast, large economic activities tend to be less common but employ a larger number of people (≥51). These activities in the MMZ are usually industries (most located in suburban areas) and company headquarters or offices (most located in central areas). As seen before, at the metropolitan level, the main employment centres (activities that employ ≥51 persons) are located in central areas of the city. As Distrito Tec is considered a privileged location in terms of centrality, it is expected that the number of main economic activities available at the local level will be high. Figure 18 presents the number of economic activities that employ ≥51 persons (main employment centres) that can be reached from each AGEB within a 15 min travel time by foot. Clearly, most of the main employment centres are located on the two main avenues of Distrito Tec: Av. Eugenio Garza Sada and Av. Revolución. Even though there is a high number of economic activities, the access levels are relatively low. The 11 AGEBs that compose the Distrito Tec Area are in the same access class of 6-17 destinations within a 15 min travel time by foot. From the surrounding areas, the northeast, east, southeast, southwest, and west have very low access levels, all in the class of 0-6 destinations. This has to do with the low number of economic activities located in these areas. Once again, the population living in those areas has to travel to other parts of the city to work, causing traffic, as well as other negative effects. People highly value the possibility to walk to their job; therefore, it is important to decentralise economic activities and create social housing in central areas that permit the local population to easily access their jobs.
Figure 19 shows the number of economic activities that employ ≥51 persons that can be reached by AGEB using a bicycle. As soon as one looks at the map, it is evident that the access levels from the entire area drastically increase. Ten out of eleven AGEBs of the Distrito Tec Area can reach ≥105 destinations and the remaining one 95-105. The surrounding areas also present very high access levels, excluding the east. The eastern area is mainly residential with a gated community urban structure. It is also physically separated from the rest of the city by the river, with a small number of bridges that allow crossing to it. These two factors generate longer-distance trips to access job opportunities, which also results in longer travel times and motorised vehicle dependence. From the results presented in Figures 18 and 19, it can be said that in terms of access to main employment centres, the Distrito Tec Area can be considered a 15 minutes city. Nonetheless, additional information should be gathered to understand the specific type of economic activities located in the area and if the local population has the qualification, interests, and requirements to take those jobs. Figure 20 presents the number of supermarkets that can be reached by foot within a travel time of a maximum of 15 min. There are five supermarkets within the Distrito Tec polygon; in the surrounding areas to the west and northeast, there is a high density. In contrast, to the east and southwest, there are no destinations available. To the south, the density is low with all supermarkets concentrating on Eugenio Garza Sada Avenue. The location of supermarkets has negative effects on the mobility of the Distrito Tec Area, as the population living in the eastern neighbourhoods has to travel to or through Distrito Tec to access a supermarket. From the 11 AGEBs that conform the Distrito Tec Area, 2 cannot access a single supermarket, 6 can access 1 destination, and 3 can access 2 destinations. On that account, most of the AGEBs meet the parameters of the 15 minutes city using foot as the travel mode. Figure 21 shows the number of supermarkets that can be reached by AGEB within a travel time of 15 min using a bicycle. Immediately, one can appreciate that the number of destinations becomes very high for pretty much the entire area, which is related to the fact that cycling speed is three-times higher than walking. By bike, all of the AGEBs within Distrito Tec show high accessibility levels, only one AGEB being in the 31-50 destinations class, one AGEB in the 51-69 class, five AGEBs in the 70-93 class, and three AGEBs in the 94-195 class. Such results demonstrate that the Distrito Tec Area comfortably meets the parameters to be considered a 15 minutes city in terms of access to commercial activities (supermarkets). Additionally, the evidence collected is key to understanding the massive potential that cycling has in the city in terms of accessibility. By using this transport mode, the local population can drastically increase their access to commercial activities without relying on motorised vehicles. Once again, this should promote local authorities to create adequate conditions to make cycling safe and comfortable for the local population. All the previous variables serve as a preliminary approach to understanding to what extent Distrito Tec is ready to become a 15 minutes city in terms of its road network and its destinations' availability. 
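Since the argument repeatedly leans on the roughly three-fold speed difference between cycling and walking, a quick back-of-the-envelope check makes the order of magnitude explicit; the speeds below are assumptions for illustration, not values reported by the study.

# Reachable radius and area within a 15 min budget at assumed speeds.
WALK_KMH, BIKE_KMH, BUDGET_H = 4.5, 13.5, 15 / 60

walk_radius = WALK_KMH * BUDGET_H            # about 1.1 km
bike_radius = BIKE_KMH * BUDGET_H            # about 3.4 km
area_ratio = (bike_radius / walk_radius) ** 2

print(f"walking radius: {walk_radius:.2f} km, cycling radius: {bike_radius:.2f} km")
print(f"covered area grows by a factor of about {area_ratio:.0f}")

A roughly ninefold larger catchment is why the supermarket and employment counts jump so sharply when the computation switches from foot to bicycle.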
Nevertheless, as mentioned repeatedly, further variables and information have to be taken into consideration to provide a much more accurate analysis of the accessibility and the population's needs in the local area. The results presented highlight the importance of thinking locally and not just at a metropolitan or regional scale when infrastructure and urban equipment are planned. They also demonstrate that micromobility, especially via bicycles, has a massive potential to increase local accessibility without, necessarily, having to increase the number of amenities in a given area.

Discussion

The previous section illustrates that the urban planning processes, as well as the social, political, economic, and environmental factors that have structured the MMZ, have failed to fulfil the accessibility conditions that the population requires. Hence, it becomes clear that a change of approach in planning is needed, where high-capacity transit and car-oriented mobility paradigms need to be left behind to adopt an accessibility-based process. In order to move away from motorised vehicle dependence, not just mobility has to be restructured, but also the land use mix throughout entire cities. The localisation of opportunities in the city dictates the mobility patterns of the population; therefore, if cities want to concentrate on moving people rather than vehicles and creating access instead of traffic, a systemic change to the urban structure is required. The previous results demonstrated that accessibility can provide extremely disaggregated data based on specific variables. This information is useful as a starting point for planning processes by identifying priority areas in which to intervene and improve. Nevertheless, additional variables, especially concerning qualitative data (entitlement rights, a specific level of hospital attention, etc.) and social preference, have to be taken into account to obtain a more close-to-reality accessibility model. After assessing and analysing the accessibility levels of the MMZ and of the Distrito Tec Area, the following recommendations regarding planning processes can be made: 1. Rethink the planning of the city starting at the local level. The results demonstrate that the MMZ is extremely heterogeneous and that not all of the population of urban areas has the same needs. Hence, before thinking about what interventions at a large scale have to be made, it is important to understand what is happening at the microlevel. To do so, speaking to citizens and opening citizen participation spaces are crucial to extract relevant information about what the most important needs are. If this process is replicated throughout the entire city, many of the metropolitan issues will be solved without having to perform metropolitan interventions. This is a win-win procedure, as the authorities spend less of their budgets and do so more wisely and local people obtain tailor-made solutions to their problems; 2. Embrace land use mixture. People need more than their house to carry out their daily lives. Therefore, each neighbourhood should have enough land use mixture to allow the local population to access commercial activities, recreation, education, employment, and health. If people can find all these opportunities in the vicinity of their household, mobility and all its negative effects would dramatically decrease. Furthermore, the short distances that people would require to travel would encourage the use of sustainable transport modes such as walking or cycling.
The authorities have a key role in influencing the structure of the city through building permits. Thus, they should not provide any permits to create residential areas unless they ensure that the needed land use mix will be available for the local population to address their daily needs; 3. Promote the use of bicycles and other micromobility vehicles. The results demonstrate that using a bike drastically increases the number of destinations that a person can reach in a given time. Therefore, the improvements that have to be made in terms of mobility and transport networks in the MMZ and at the local level should concentrate on increasing the quality and coverage of public transport for metropolitan journeys and creating open streets with safe and comfortable cycling lanes for short and local trips. By doing so, the number of destinations (opportunities) does not necessarily have to be increased, but only the ease of reaching them. If the MMZ wants to reduce the extreme use of motorised vehicles, the first step to take is to reduce the public space designated to them by creating a safe built environment for other transport users; 4. Accessibility is a social and gender policy. Usually, the groups with the lowest incomes live in areas of the city where accessibility levels are low and marginalisation levels are high; this is not an exception for the MMZ. Therefore, increasing accessibility in such areas will bring massive benefits to such populations as their transportation expenditures (for the specific case of women, this is known as the "Pink Tax") will decrease, as well as their travel distances and times, hence improving their quality of life and sense of belonging to their local community. Additionally, women in Mexican cities usually have more complex travel patterns than men, as they tend to visit many different destinations in one day (trip-chaining), carry heavy things, and accompany other persons (mobility of care) in their trips. Consequently, making all these destinations easier and less expensive to access will bring important improvements to the travel behaviours and patterns of women. Finally, women argue that they usually feel unsafe when using public transport ; thus, using bikes and walking as their main transport modes will make them feel safer in their daily trips. In addition, further research should be conducted to assess the changing needs of an ageing society and accessible mobility in MMZ; 5. Decentralise the city. As in many other cities in the world, most of the main employment centres of the city are located in central areas. This localisation pattern results in a phenomenon where, during the morning, the origin of most trips to go to work is from suburban areas, and the destination is the centre of the city. In the afternoon, most people travel back home, and the mobility pattern is inverted, starting in the centre and ending in suburban areas. Such a way of structuring the city creates traffic, as well as extremely long average distances and travel times, which result in high economic, social, and environmental costs. Hence, it is important to apply planning processes or economic activities (especially commerce and services) that encourage the localisation of such activities in different parts of the city, creating subcentres and reducing the need for metropolitan mobility.; 6. Increase the density of the city. With only 698 people per square kilometre, the MMZ is not a dense city. 
Large sprawling cities with low densities go against accessibility, as there is not enough critical mass (demand) to implement some opportunities throughout the entire city. This generates a dependency on middle to large-distance motorised travel. Considering the nature of the MMZ, it is recommended that the city no longer sprawl. To do so, it has to generate all-income-level vertical housing in central areas. The authorities play a key role in doing so by putting in action economic incentives for developers (such as tax waivers or reductions) to build social vertical housing in central areas and by eradicating any permits to develop large monofunctional suburban residential gated communities. This is an ambitious plan for a city like the MMZ, as it has historically grown horizontally and with most of its population preferring single-family housing schemes rather than apartments. Nevertheless, the transformation of its urban structure to a denser area will bring massive social, economic, and environmental benefits to the local population in the short, middle, and long term; 7. Downsize urban infrastructure and equipment. It has been demonstrated that highcapacity and large-scale destinations such as public general hospitals or shopping centres attract many persons who live in other districts. Therefore, having a larger number of small-scale opportunities dispersed along the city would allow locals to access their daily needs in their neighbourhood. This would drastically reduce middle to long distance mobility, as well as the use of motorised vehicles. It is important to mention that certain destinations, such as specialised hospitals, will still have a metropolitan radius of influence. For these kinds of opportunities, sustainable and diverse transport alternatives have to be guaranteed; 8. Use of technology. One of the major lessons that the COVID-19 pandemic has left is the massive potential that technology has to reshape education, commerce, and employment. This can generate important changes in the urban realm. By encouraging the use of technology in schools, commerce, and employment centres (e-learning, e-commerce, and home offices), mobility can drastically reduce the need to go to a specific location every day. Lowering the number of travelling people reduces the overloading of public transport, as well as the traffic levels, bringing important social, economic and environmental benefits. Additionally, spaces that are no longer needed, such as some office buildings, can be reconverted (taking advantage of their usual central location) to other uses such as commerce, housing, or schools. Finally, it must be said that using technology brings considerable economic benefits to employers, the government, and society, as mobility is expensive for all. As part of the further research that can be performed to complement the current investigation, it will be important to incorporate more socio-economic and demographic factors that help provide a more accurate understanding of the qualitative differences that also affect the degree of accessibility in a given city. By doing so, the relationship between marginalisation levels and other social variables such as income, age, or gender with the level of accessibility will become clearer, providing useful data for developing public policies and urban interventions. 
Considering new accessibility measures to other destinations, such as green areas, universities, minor employment centres and more local commerce such as pharmacies or small shops, can provide very useful data on the local-level configuration of the city in different areas. These additional measures can also help to develop specific public policies based on the topic being analysed; for example, accessibility to green areas. Even though the concept of the 15 minutes city focuses on transport modes such as walking and cycling, it would be relevant to run the accessibility measures using other transport modes, especially public transport and private vehicles. For this to be done, it is important to generate the necessary data, such as GTFS for public transport and a database that provides the average speed for private vehicles per road per time of the day. The previous paragraph encloses a compilation of ideas that could complement and expand the current scope of work. It is important to consider that multiple variables play a dynamic role in the analysis of accessibility in a city, and by understanding this, it becomes clear that the possibilities to analyse accessibility from different perspectives are vast.

Conclusions

The urban realm is in constant and rapid transformation. Every day, new social, economic, and environmental needs arise, and cities have to evolve to satisfactorily overcome the challenges. However, this is not an easy task, as the number of variables and elements that make up the great system known as the city are uncountable. Hence, dealing with complexity has become one of the biggest challenges for policymakers and urban planners. The fast rate of change in cities demands tailor-made solutions for each specific context, in very short periods of time, that respond to short-, medium- and long-range time frames. Traditional planning has failed to do so, and this has to do, to a certain extent, with the approach taken. Humans like to think that a new invention can be the answer to all our problems, and the car was considered so. During the Twentieth Century, the car dominated all urban planning processes, as it was seen as the ultimate alternative for mobilising goods and people within and around urban areas. Nevertheless, car-oriented planning was soon demonstrated to be unsustainable due to the extremely negative social and environmental effects that it has. Taking more holistic, people-centric, and humble approaches to planning can be a more effective and equitable way to deal with complexity. Major urban planning and infrastructure interventions in the urban realm, on the other hand, take a long time to mature, and they can only be assessed after completion. Therefore, iterating planning processes can provide useful information about the rate of success of a given intervention to deal with a specific issue and allow the possibility to adjust the solution before being fully implemented. To do so, a huge amount of data has to be gathered and processed. On that account, technology plays a key role in urban planning, by providing high-quality open data, such as those collected from INEGI, and software, such as UrMoAC, with which urban planners can model at a high level of accuracy the state of a city. This allows a clear understanding of the issues of different parts of a city, permitting the elaboration of a hierarchy-based intervention. Additionally, cities have to have a comparable and, ideally, standardised methodology to assess urban accessibility.
By doing so, benchmarking, as well as the evaluation of public policies and urban interventions, can be directly compared between cities of the same country, region, or even at a worldwide scale. This allows planners to argue to what extent a change is being made and at what rate. The current research project adds to that by using a transferable approach that can be replicated in different cities, making comparison of the results possible. The results from the current research project demonstrate that taking an approach to urban planning based on accessibility provides many benefits in terms of sustainable planning. Accessibility measures permit obtaining highly disaggregated data on the performance of every part of the city regarding a specific variable, highlighting the areas with higher intervention needs and providing results that contrast how different transport modes perform for accessing a specific destination. Thus, using accessibility as the pillar of urban planning will allow cities to transform into more inclusive, environmentally friendly, and comfortable places to live. Most cities in the world are still far from becoming 15 minutes cities, especially when whole metropolitan areas are considered, and the MMZ is not an exception. However, having this travel time frame as an objective should encourage policymakers, citizens and the private sector to transform cities from the local to the metropolitan level. It is also important to understand that in order to truly transform cities into better living environments, not only central areas need intervention, but the whole urban area. Everyone plays a key role in such a transformation, as it is necessary to agree that the current state of cities is going against sustainability and that an urgent urban, social, economic, political and environmental transformation is needed. Funding: This research received funding from the CampusCity Initiative of the Tecnológico de Monterrey. The CampusCity Initiative is funded by Fundación FEMSA. Fundación FEMSA had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Institutional Review Board Statement: Not applicable. Informed Consent Statement: Not applicable. Data Availability Statement: The data used in this study were obtained from public databases and can be found in Table 1.
/**
* Try to create a custom drawable from the given class name, note
* that the custom drawable class must have a public constructor
* that takes a {@link Context} context and a {@link AttributeSet}
* attrs as parameters, if it is difficult to make this constructor,
* you can supply a {@link DrawableFactory} drawable factory to
* folivora instead, which can receive attributes to create and
* configure the drawable manually.
*
* @param drawableName full qualified drawable name
* @param ctx current context
* @param attrs attributes from view tag
* @return a newly created drawable, or null
*/
private static Drawable newCustomDrawable(String drawableName, Context ctx, AttributeSet attrs) {
if (sFailedNames.contains(drawableName)) return null;
if (drawableName.indexOf('.') == -1) return null;
Constructor<? extends Drawable> constructor = sConstructorCache.get(drawableName);
try {
if (constructor == null) {
Class<? extends Drawable> clazz = ctx.getClassLoader().loadClass(drawableName)
.asSubclass(Drawable.class);
constructor = clazz.getConstructor(sConstructorSignature);
constructor.setAccessible(true);
sConstructorCache.put(drawableName, constructor);
}
sConstructorArgs[0] = ctx;
sConstructorArgs[1] = attrs;
return constructor.newInstance(sConstructorArgs);
} catch (ClassNotFoundException cnfe) {
sFailedNames.add(drawableName);
Log.w(TAG, "drawable class [" + drawableName
+ "] not found, Folivora will never try to load it any more");
} catch (NoSuchMethodException nsme) {
sFailedNames.add(drawableName);
final String classSimpleName = drawableName.substring(drawableName.lastIndexOf('.') + 1);
final String msg = "constructor " + classSimpleName + "(Context context, AttributeSet attrs)"
+ " does not exists in drawable class [" + drawableName + "], Folivora will never try to" +
" load it any more";
Log.w(TAG, msg);
} catch (IllegalAccessException iae) {
throw new AssertionError(iae);
} catch (Exception e) {
Log.w(TAG, "exception occurred instantiating drawable [" + drawableName + "]", e);
} finally {
sConstructorArgs[0] = null;
sConstructorArgs[1] = null;
}
return null;
} |
Four companies are competing for the contract to build a wastewater treatment plant in Tehran, following the submission of bids to Tehran Sewerage Company (TSC) in mid-July. The plant is part of a larger scheme to install a comprehensive sewerage network in the capital aimed at providing services to more than 2 million residents in the poorer southern suburbs of the city by 2006 (MEED 1:3:02).
The bidders for the estimated $60 million-70 million contract are Austria’s VA Tech, Generale des Eaux and Degremont, both of France, and Germany’s Walter Bau.
TSC is now reviewing the bids before submitting its recommendations to the World Bank, which is part-financing the scheme. A contract award is expected to be announced within three months.
At least six groups of local and international companies in May prequalified and were approved by the World Bank to bid for the design-build-operate project.
In the first phase, the plant will have a capacity of 450,000 cubic metres a day and will serve a population of 2.1 million. During phase 2 of the five-phase development, which will be implemented between 2006 and 2011, capacity will be doubled to cover 4.2 million people. Phases 3-5 of the wastewater scheme will see the expansion of the network and the construction of an additional plant in southwestern Tehran to serve a population of 10.5 million by 2029.
A team of Austria’s ILF Consulting Engineers, Kuwait-based KEO International Consultants and the local Parsconsult was appointed in 2001 to provide consultancy services and technical assistance to TSC.
A team of ILF and Parsconsult is part of a group of at least five companies bidding for another contract to provide supervision services on the construction of the estimated $44 million eastern trunk main. TSC and the World Bank, which is expected to announce a final decision on the winning proposal in September, are in the process of evaluating the bids.
The award of the contract will be followed by a tender for the construction of the eastern trunk line.
The local Loshan Company last July was awarded the $22 million contract to install the western trunk line, which stretches over a distance of 23 kilometres. Other local companies have been awarded, and are awaiting the award of, contracts worth a total of more than $200 million for laterals and interceptors. Among the larger contracts yet to be awarded is the construction of an 18-kilometre tunnel with a diameter in the range of 2,000-3,000 millimetres.
Implementation costs of phase 1 of the wastewater scheme are estimated at about $340 million, $145 million of which will be provided by the World Bank. The bank in May 2000 agreed to resume lending to Iran after a gap of seven years and despite US pressure. TSC is providing parallel financing of $195 million. |
Assessment of Carbon Stocks in the Topsoil Using Random Forest and Remote Sensing Images. Wetland soils are able to exhibit both consumption and production of greenhouse gases, and they play an important role in the regulation of the global carbon (C) cycle. Still, it is challenging to accurately evaluate the actual amount of C stored in wetlands. The incorporation of remote sensing data into digital soil models has great potential to assess C stocks in wetland soils. Our objectives were (i) to develop C stock prediction models utilizing remote sensing images and environmental ancillary data, (ii) to identify the prime environmental predictor variables that explain the spatial distribution of soil C, and (iii) to assess the amount of C stored in the top 20-cm soils of a prominent nutrient-enriched wetland. We collected a total of 108 soil cores at two soil depths (0-10 cm and 10-20 cm) in the Water Conservation Area 2A, FL. We developed random forest models to predict soil C stocks using field observation data, environmental ancillary data, and spectral data derived from remote sensing images, including Satellite Pour l'Observation de la Terre (spatial resolution: 10 m), Landsat Enhanced Thematic Mapper Plus (30 m), and Moderate Resolution Imaging Spectroradiometer (250 m). The random forest models showed high performance to predict C stocks, and variable importance revealed that hydrology was the major environmental factor explaining the spatial distribution of soil C stocks in Water Conservation Area 2A. Our results showed that this area stores about 4.2 Tg (4.2 Mt) of C in the top 20-cm soils. |
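As a rough illustration of the modelling approach described in this abstract, the sketch below fits a random forest regressor to a table of soil cores with spectral and ancillary predictors and reports cross-validated performance and variable importance. The file name, column names and hyperparameters are hypothetical, not taken from the paper.

import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

cores = pd.read_csv("soil_cores.csv")   # hypothetical field-observation table
predictors = ["spot_ndvi", "etm_band5", "modis_evi", "water_depth", "elevation"]
X, y = cores[predictors], cores["c_stock_kg_m2"]

rf = RandomForestRegressor(n_estimators=500, random_state=42)
print("mean cross-validated R2:", cross_val_score(rf, X, y, cv=10, scoring="r2").mean())

rf.fit(X, y)
# Variable importance points to the environmental factors driving the prediction,
# analogous to the hydrology signal the authors report for the study area.
for name, importance in sorted(zip(predictors, rf.feature_importances_), key=lambda p: -p[1]):
    print(f"{name}: {importance:.2f}")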
Assessment of the Reliability of Different Components of Fuel Measurement Systems Used in Angola

The present work aims to assess the reliability of the different fuel measurement systems used at SNLD filling stations. As objectives, it is intended to propose a methodology for selecting the types and models of pumps to be used and when to replace them. The assessment of the risks in the replacement of a pump and the impact on the environment will also be examined. Although the discussion on pumps is not new in Angola, there are no published records on this subject, particularly on the selection of types and models of systems to be used and when to replace them, using the different scientific tools. For the analysis of this equipment and its reliability, the Weibull distribution, ANOVA analysis and computational tools of probability and statistics with Excel were used. The analysis by these two methods made it possible to certify the results obtained. A benchmarking was carried out at the filling stations of SNLD, Pumangol and Sonangalp to compare the quality of some services. The results achieved were discussed during the internship held at Petrotec. As a result, the components that have the most impact on the maintenance system of the different pumps, the causes of the faults and their respective costs were identified. The most vulnerable components identified are errors in the display, heating in the engine and malfunction of electrovalves.

Introduction

The quality of the service, competitiveness, profitability and customer satisfaction depend, among other factors, on maintenance services, which have a great influence on the availability of fuel measurement systems (pumps). There is a growing discussion about the costs related to the maintenance of the fuel measurement system types and models to be used at a filling station. The aim of this study is to propose a methodology for identifying the reliability of components and selecting the types and models of pumps to be used at filling stations and when to replace them. In order to achieve the general objective, the following aspects were considered: i. Identify and analyse different models of measurement systems; ii. Assess the reliability of the components of the different measurement systems. The fuel measurement system is the physical structure where fuel supply occurs (petrol, diesel) for cars, motorcycles and others. It consists of a general structure composed of the measuring systems (pumps) and their pistols, the element that covers the structure, and the platibanda, which surrounds the structure. SNLD has in operation in its network of PA's 2104 fuel measurement systems, totalling 11984 hoses. The number of base PA's (definitive construction) is 324, plus 80 containerized posts (PAC). The data observed until 2019 in the fuel distribution and marketing segment were limited to Sonangalp, with 16 PA's and 28 PAC, and Pumangol with 77 PA's (Fig. 1). In the market, there are also the Bandeira Branca (BB) stations, which comprise 77 base posts and 386 PAC's. The BB PA's have supply contracts with the distribution companies; with SNLD there are 77 base BB PA's and 386 PAC, while Sonangalp has 28 PAC supply contracts (DPM, SNLD, 2014). Until the independence of the country, the network of filling stations was composed entirely of fuel measurement systems from "Tokheim". In 1995, the brand "Nuovo Pignone" was installed in two PA's. The fuel measurement systems used were manual and mechanical. The mechanical systems were later extended with electronic calculators.
In the period 1975-1996, the SNLD supply station network used the pumps indicated in Table 1. In the period 1997-1999, the SNLD supply station network used the pumps indicated in Table 2. In the period 2010-2018, Sonangol Distribuidora's supply station network used the pumps indicated in Table 3. Other gas companies on the market use Dresserwayne, Euro and Tokheim brand pumps. Small dealers use Tokheim-branded pumps, mostly with mechanical and electronic characteristics.

Development

For this paper, we considered quality tools to support the diagnosis, namely: i. Brainstorming for the preparation of the survey of the main breakdowns of the different types of measurement systems used by SNLD from 1975 to 2018; ii. Registration of malfunctions in the maintenance department of the DPM of SNLD; iii. Information and recognition of the different fuel measurement systems used by SNLD in the period 1975-2018. The Ishikawa (cause-effect) diagram was also used, which allowed brainstorming sessions with multidisciplinary teams. The main malfunctions and problems of the measurement systems are presented in Figure 2. From the information provided and the benchmarking carried out, the major malfunctions were identified in the areas of hydraulics, electrical/electronics, and motors (DPM, SNLD, 2014). The study of the operation of a given piece of equipment or component and of its behaviour with regard to breakdowns in a given period of time must be carried out on the basis of statistical techniques and probability calculation, which are therefore indispensable subjects for maintenance studies (Pinto, Carlos Varela, 2002). Availability is defined as the probability that a component that has undergone maintenance performs its function satisfactorily for a time t. In practice, for components that operate continuously, it is expressed by the percentage of time in which the system is operating. For backup components, it is the probability of success in the operation of the system when demanded (Lafraia, João Ricardo Baruso, 2014). Aspects of statistical quality control were analysed, with particular attention given to the Weibull distribution, the analysis of variance (ANOVA), computational tools of probability and statistics, as well as descriptive statistics applied to management. One of the objectives of applying ANOVA is to perform statistical testing to verify whether there is a difference in the distribution of a measure between three or more groups (Montgomery, Douglas C., 2016). W. Weibull originally proposed the Weibull probability distribution in 1954 in studies related to failure time due to metal fatigue. It is often used to describe the lifetime of industrial products. The Weibull distribution is described by three parameters: i. β: shape parameter; ii. η: scale parameter; iii. γ: position (location) parameter. The shape parameter β influences the Weibull distribution as follows: f(t) grows as t approaches the mode and decreases soon after. The shape factor β indicates the shape of the curve and the characteristic of the failures: i. β < 1: infant mortality; ii. β = 1: random failures (negative exponential function); iii. β > 1: wear-out failures.
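To show what the Weibull analysis of a pump component looks like in practice, the sketch below fits a two-parameter Weibull distribution to a set of failure times and reads off the shape factor and the reliability at a chosen operating time. The failure times are invented; a real analysis would use the down-hours records described in this work, and the fit could equally be done with the Excel tools the authors used.

import numpy as np
from scipy import stats

failure_hours = np.array([210, 340, 415, 500, 620, 760, 910, 1150])  # hypothetical data

# Two-parameter Weibull fit (location gamma fixed at 0): c is the shape beta,
# scale is the characteristic life eta.
beta, _, eta = stats.weibull_min.fit(failure_hours, floc=0)

t = 500.0
reliability = np.exp(-(t / eta) ** beta)   # R(t) = exp(-((t / eta) ** beta))
print(f"shape beta = {beta:.2f}, scale eta = {eta:.0f} h")
print(f"estimated reliability at {t:.0f} h: {reliability:.2f}")
if beta < 1:
    print("beta < 1: infant-mortality behaviour")
elif beta > 1:
    print("beta > 1: wear-out behaviour")
else:
    print("beta = 1: constant failure rate")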
[Table residue: Quantium 210 - mechanical calculator, 2 fuel guns, Qmax = 40 l/min; Quantium 200T - mechanical calculator, 2 fuel guns, Qmax = 40 l/min.]

With regard to the form factor, we have to consider: if β = 1 (constant failure rate), this may be an indication that multiple failure modes are present or that the data collected on the times to failure are suspect. To analyse reliability, for the different fuel measurement systems (pumps) used by SNLD from 1975 to 2018, the most frequent breakdowns obtained from the investigation of existing problems and the non-operating hours (down hours) of each fuel measurement system were considered (DPM, SNLD, 2014). All data were grouped for the 16 (sixteen) components of the 12 (twelve) fuel measurement systems, namely engines, battery error, pistol, suction group, fuses, transformer, lack of communication, engine heating, pressure gauge, electrovalve failure, hose, totalizer and other malfunctions.

Results and Discussion

Using the concepts of design of experiments, it was possible to confirm the Weibull reliability results for the components. For this purpose, systems with reliability equal to or greater than 60% were considered as a starting point. Data on breakdowns for 3 (three) referenced factors and 8 (eight) experiments were considered. Major breakdowns occur in the following areas: i. Electrical/electronic - due to the quality of energy and the battery life cycle; iii. Motors - derived from burning caused by differences in intensity and earthing problems. Below are some graphs illustrating the failure weight of the components of the fuel measurement systems and additional information on the systems studied. From Figure 3 it is clear that the display error (17%), aspiration system (15%) and transformer (11%) are the most critical components. The Tokheim Quantium 200T and 510 and EURO 2000 and 5000 fuel measurement systems showed acceptable reliability parameters, above 60%, due to the low percentage of breakdowns. Regression analysis was carried out in order to confirm the reliability of the data. This allowed evaluating the quality of the samples and incorporating them into cohesive groups.

Conclusions

The reduction of the traditional maintenance structure to a contract and service management structure suggests the conversion of maintenance workers, whose services are performed in full by service providers. Similarly, reducing the number of types and models of systems reduces the number of service providers; the selection of types and models of fuel measurement systems for filling stations should represent one of the major concerns for the profitability of filling stations and the consequent satisfaction of their customers and employees. During the data collection for this work, it was found that 12 different types of fuel measurement systems were used in the period 1975-1996. A better systematisation of the registration of malfunctions, kept up to date in a database, would allow awareness of all the information available in the country, although this maintenance service is mostly outsourced. The existing information on the faults is not systematised in a way that allows a correlation of the litres sold by each fuel measurement system with its registered breakdowns, across the 12 (twelve) varieties of systems and for each maintenance service provider in a given period.
It is suggested to optimize the measurement systems, using the Tokheim Quantium 200T and 510 and Euro 2000 and Euro 5000 systems in the PA's. It is found that the main malfunctions in the measurement systems occur mainly in hydraulics (suction group, meters, electrovalves and motors), as well as in electrical/electronics (energy quality and battery/battery change).

Recommendations

Following the conclusions, it is recommended to: i. Implement a maintenance policy for fuel measurement systems, focused on reliability. This process can increase the efficiency of maintenance and analyse whether, and when, maintenance is technically feasible, as well as its effectiveness (cost-benefit); ii. Adopt the methodology for selecting the types and models of fuel measurement systems whose components have a reliability of at least 60%; iii. In the construction, rehabilitation and replacement of filling station measurement systems, use the proposed pumps, namely the Tokheim Quantium 200T and 510 and Euro 2000 and Euro 5000; iv. The suggested measurement systems can also be used to assess the profitability of PA's with different characteristics throughout the country, which goes beyond the object of this work; v. Register breakdowns in a systematized manner in which the litres sold by each fuel measurement system can be correlated with its registered breakdowns, across the different varieties of systems and for each maintenance service provider in a given period of time throughout the country; vi. Carry out preventive maintenance systematically at filling stations, in order to detect possible failures in fuel measurement systems; vii. On the other hand, and considering that people play a fundamental role in the operation of filling stations, and according to the benchmarking carried out, the mechanisms for motivating workers assigned to the maintenance function should be improved by adopting an HR policy that considers the constant training of personnel, taking into account the dynamics of the industry; viii. Apply the risk analysis map in the replacement of any fuel measurement system in all filling stations throughout the country, thus safeguarding the protection of the environment.

Conflict of interest

The author declares that there is no conflict of interest regarding the publication of this manuscript.
/*
* Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
* or more contributor license agreements. Licensed under the Elastic License
* 2.0 and the Server Side Public License, v 1; you may not use this file except
* in compliance with, at your election, the Elastic License 2.0 or the Server
* Side Public License, v 1.
*/
package org.elasticsearch.gradle.internal.docker;
import java.util.stream.Collectors;
import java.util.stream.IntStream;
/**
* The methods in this class take a shell command and wrap it in retry logic, so that our
* Docker builds can be more robust in the face of transient errors e.g. network issues.
*/
public class ShellRetry {
static String loop(String name, String command) {
return loop(name, command, 4, "exit");
}
static String loop(String name, String command, int indentSize, String exitKeyword) {
String indent = " ".repeat(indentSize);
// bash understands the `{1..10}` syntax, but other shells don't e.g. the default in Alpine Linux.
// We therefore use an explicit sequence.
String retrySequence = IntStream.rangeClosed(1, 10).mapToObj(String::valueOf).collect(Collectors.joining(" "));
StringBuilder commandWithRetry = new StringBuilder("for iter in " + retrySequence + "; do \n");
commandWithRetry.append(indent).append(" ").append(command).append(" && \n");
commandWithRetry.append(indent).append(" exit_code=0 && break || \n");
commandWithRetry.append(indent);
commandWithRetry.append(" exit_code=$? && echo \"").append(name).append(" error: retry $iter in 10s\" && sleep 10; \n");
commandWithRetry.append(indent).append("done; \n");
commandWithRetry.append(indent).append(exitKeyword).append(" $exit_code");
// We need to escape all newlines so that the build process doesn't run all lines onto a single line
return commandWithRetry.toString().replaceAll(" *\n", " \\\\\n");
}
}
|
Internet vs. live tournament poker: How to play?
Should you play a live tournament solely by feel, or instead, take a strict mathematical approach and look to play in any +EV (positive expected value) situation?
This discussion recently came up on a radio show called "Poker Road" between a young, mathematically oriented internet player and one of the hosts of the show who opts to play by feel. At one point, the Internet player said to the host with much disdain, "You just don't know how to think about poker properly." He then went on to state his case in favor of math-based poker, a philosophy that's shared by many other Internet players.
Though it's clear that many of today's internet young guns dissect the game merely from a mathematical perspective, they don't sufficiently consider the people part of the game when they play in live tournaments.
Obviously, you can't see your opponents when playing online. That makes it difficult to get a read on your competition and to exploit their weaknesses. So, to be successful, online players tend to rely on their math skills and think about the game in terms of +EV. And although +EV is not a terrible way to approach a live poker tournament, it isn't quite enough. In a live tourney, there are other considerations in addition to pure mathematics that should be factored into your decisions.
Here's a list of several important factors that I find essential for live tournament poker success. Internet players -- take note.
Is your table full of weak players or is the competition strong? Answer that question and you'll be able to exploit situations where donkeys are present, and to make the proper adjustments necessary to beat tougher opponents.
At an easier table, avoid high-risk situations. It's much better to wait patiently for lower-risk opportunities that will eventually appear. At a tougher table, you'll be forced to take more chances and will need to employ a more mathematical approach.
Where are the big stacks? Where are the tough players? If the big-stacked sharks are seated behind you, look to take on thinner +EV situations, and play them aggressively. Conversely, if you have weak-tight players to your left, take the safer approach and try to win a lot of smaller pots.
Your competitors will likely vary their style of play depending on the stage of a tournament. Adjust your game to those changes. It's common, for example, to see players take on a much more conservative approach as the money bubble nears. That's a good time to kick up your level of aggression in order to exploit this observed tendency.
The rate at which the blinds increase should also influence your play; the faster the blinds escalate the less patient you should be. Conversely, in a slow-paced tournament structure, pass up marginal situations and look to be more selective about the risks that you take.
It's always important to focus on your opponents' state of mind. Look for fatigue, desperation, confidence, and patience. Remember that a player's mental state will usually be affected after he loses a big pot. Use that to your advantage.
Be aware that your opponents are always watching. What have they seen you do recently? What do you think that they think it all means? If you limp into pots, do you think they're fearful of strength, or do they assume that you're playing a garbage hand?
You can never discount the value of mathematical analysis in poker. But to be truly successful in live tournament play, you must start to think about these other considerations even before the first hand is dealt.
Visit www.cardsharkmedia.com/book.html for information on "Hold'em Wisdom for All Players." © 2008 Card Shark Media. All rights reserved.
import * as React from 'react';
import { shallow } from 'enzyme';
import DownloaderItem from './DownloaderItem';
import TestComponentPropUtils from '../../../utils/TestComponentPropUtils';
jest.mock(
'electron',
() => {
const mockIpcMain = {
on: jest.fn().mockReturnThis(),
send: jest.fn().mockReturnThis(),
};
return { ipcRenderer: mockIpcMain };
},
{ virtual: true },
);
describe('DownloaderItem', () => {
it('renders without crashing', () => {
shallow(<DownloaderItem />);
});
it('renders basic react props like id, className, and style as element attributes', () => {
const shallowWrapper = shallow(<DownloaderItem {...TestComponentPropUtils.basicReactProps} />);
TestComponentPropUtils.expectsBasicReactProps(shallowWrapper, false);
});
});
|
A correlation study between corneal endothelial count in patients with different stages of keratoconus using specular microscopy

Objective: To evaluate the corneal endothelial count in patients with keratoconus (KCN) by specular microscopy and correlate it with the stage of keratoconus. Methods: Ninety-three eyes from 61 patients with KCN were included in this cross-sectional study. The eyes were classified into KCN stages 1 to 4 according to the Amsler-Krumeich classification using keratometry obtained by corneal topography and pachymetry readings obtained by specular microscopy. Results: Age ranged from 12 to 43 years, mean ± (standard deviation) 22.1 ± 6.7 years. The average keratometry ranged from 42.25 to 71.4 D (53.0 ± 6.1 D). Pachymetry ranged from 350 to 606 µm (461.7 ± 47.1 µm). Regarding the Amsler classification, 23 patients (24.7%) had stage 1, 24 (25.8%) stage 2, 5 (6.5%) stage 3 and 41 patients (44.1%) stage 4. No linear correlation was observed between mean keratometry and endothelial cell count (Pearson's correlation coefficient = -0.05). In the early to moderate stages of KCN, the mean endothelial cell count was 2738.3 ± 285.4 cell/mm2, while in the advanced KCN group (stages 3 and 4) it was 2670.6 ± 262.7 cell/mm2, p = 0.24. Conclusions: No correlation was found between the endothelial cell count and the KCN stage.

Introduction

Keratoconus (KCN) is a progressive disorder featured by central or paracentral corneal stroma (usually inferior) thinning followed by apical protrusion and irregular astigmatism. According to epidemiological studies, annual estimates of the incidence of new KCN cases range from 1:3000 to 1:80000 a year. Such variation can be attributed to the following factors: different definition criteria applied to the cases, the great sensitivity of modern diagnostic devices, differences in study design, ethnic differences and other differences related to the assessed sample. A recent study has estimated a KCN prevalence in the general population of 1:375 (265 per 100,000) based on its annual incidence, patients' mean age at diagnosis and life expectancy. The literature about KCN is rich in publications on the anterior part of the cornea. These studies investigate the epithelium, Bowman's layer and stroma, and few of them focus on the endothelium and Descemet's membrane. However, changes in corneal microstructure can also lead to changes in the corneal endothelial layer. The corneal endothelium plays an essential role in corneal physiology and transparency. Its morphological data, such as endothelial density, mean cell area, rate of hexagonal cells and coefficient of variation, are important tools to assess corneal physiology. Specular microscopy has application in KCN cases, since it is useful in studies about transient and chronic changes in endothelial cell morphology in contact lens wearers, in pre- and postoperative corneal crosslinking surgical procedures, and in corneal transplants. There are reports in the literature associating KCN with Fuchs endothelial dystrophy. The aims of the current research were to assess corneal endothelial cell counting in patients with keratoconus (KCN) based on specular microscopy and to correlate it to keratoconus stage.
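For readers unfamiliar with the morphometric indices listed above, the short sketch below shows how density, coefficient of variation and hexagonality are derived from a set of segmented cell areas. The numbers are invented placeholders; in practice these values are produced by the specular microscope's own software.

import statistics

cell_areas_um2 = [340, 360, 355, 410, 385, 372, 398, 365, 342, 377]  # invented areas
hexagonal_cells = 6                                                   # invented count

mean_area = statistics.mean(cell_areas_um2)
density_cells_mm2 = 1_000_000 / mean_area                 # 1 mm2 = 1,000,000 um2
cv = statistics.pstdev(cell_areas_um2) / mean_area        # polymegethism index
hexagonality = hexagonal_cells / len(cell_areas_um2)      # pleomorphism index

print(f"mean cell area: {mean_area:.0f} um2")
print(f"endothelial density: {density_cells_mm2:.0f} cells/mm2")
print(f"coefficient of variation: {cv:.2f}")
print(f"hexagonality: {hexagonality:.0%}")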
Methods: This was a cross-sectional review of medical records based on the consecutive inclusion of all patients with keratoconus referred to the Contact Lens sector of Bonsucesso Federal Hospital from January to December 2019 who had undergone corneal topography and corneal specular microscopy exams before the first contact lens trial and adaptation. The Ethics Committee of Bonsucesso Federal Hospital approved the research (n. 57,557,8166,5253). Cases with corneal scarring, hydrops, previous eye surgeries, cornea guttata or Fuchs endothelial dystrophy, and contact lens wearers were excluded from the sample. The following data were collected from the medical records: sex, age, k1 and k2 obtained by corneal topography (Aladdin, Topcon Medical Systems, Inc., Oakland), and endothelial cell count and corneal pachymetry obtained by non-contact specular microscopy (Tomey Corporation, Nagoya, Japan). This specular microscopy device automatically captures a sequence of 15 images during each measurement and counts up to 300 cells per image in the region of interest through automated image processing. The image with the highest contrast and lighting quality is automatically selected by the instrument and subsequently checked manually by the examiner. The automated cell detection and counting implemented in the manufacturer's software was used in this study. Eyes were classified into KCN stages 1 to 4 according to the Amsler-Krumeich classification, using keratometry obtained through corneal topography and pachymetry readings obtained through specular microscopy. SPSS software (version 19.0, IBM, New York) was used to perform the statistical analyses. Continuous variables were expressed as mean ± standard deviation, whereas categorical variables were expressed as frequencies and percentages. Student's t-test was used to compare continuous variables with normal distribution between the early group (stages 1 and 2) and the advanced group (stages 3 and 4). The correlation between endothelial cell count and mean keratometry was assessed through Pearson's correlation coefficient; values of p < 0.05 were considered significant. (A minimal computational sketch of this comparison appears below.)

Results: No linear correlation was observed between mean keratometry and endothelial cell count (Pearson's correlation coefficient: -0.05, p < 0.05). There was no correlation between the steeper keratometry (k2) and endothelial cell count (Pearson's correlation coefficient: -0.08, p < 0.05). The mean endothelial cell count was 2738.3 ± 285.4 cells/mm2 in the early to moderate keratoconus stages (stages 1 and 2), whereas the advanced keratoconus group (stages 3 and 4) recorded 2670.6 ± 262.7 cells/mm2 (p = 0.24) (Figure 1).

Discussion: Specular microscopy is a non-invasive technique, easy to perform, based on corneal endothelium reflection. The specular microscope captures the specular reflection of part of the endothelium and presents an endothelial image that is magnified by an electronic system. Individuals experience a natural decline in endothelial density throughout life, and it can be potentiated by corneal diseases. The mean age in the assessed group was 22 years; this age group is compatible with the age range accounting for the highest keratoconus prevalence and is related to a good endothelial cell reserve. The sample was divided into stages based on the Amsler-Krumeich classification, and there was an attempt to relate endothelial density to the different KCN stages. Few studies in the literature compare endothelial changes at different keratoconus stages, and they present conflicting results.
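As referenced above, a minimal computational sketch of the statistical comparison is shown here for illustration only; it assumes hypothetical per-eye arrays (not the study data) and uses scipy in place of the SPSS procedures described in the Methods.

# Minimal sketch of the analysis described in the Methods (hypothetical placeholder data).
import numpy as np
from scipy import stats

mean_k = np.array([44.0, 47.5, 52.0, 55.0, 61.0])   # mean keratometry in dioptres (placeholder values)
ecc = np.array([2800, 2750, 2700, 2650, 2600])      # endothelial cell count, cells/mm2 (placeholder values)
stage = np.array([1, 2, 2, 3, 4])                   # Amsler-Krumeich stage (placeholder values)

r, p = stats.pearsonr(mean_k, ecc)                  # correlation between mean keratometry and cell count
early = ecc[stage <= 2]                             # stages 1-2
advanced = ecc[stage >= 3]                          # stages 3-4
t, p_t = stats.ttest_ind(early, advanced)           # Student's t-test between the two groups
print(f"Pearson r = {r:.2f} (p = {p:.3f}); t-test p = {p_t:.3f}")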
Had the literature demonstrated a correlation between endothelial abnormality and disease stage in patients with KCN, such a finding could have affected the selection criteria for penetrating or anterior lamellar corneal transplantation. No correlation was found between endothelial density and KCN stage in the current research, and this outcome corroborates the findings of a previous study that assessed 40 eyes from 29 Egyptian patients. That study recorded the following results: a mean of 2404 ± 345 cells/mm2 at stage 1, 2455 ± 331 cells/mm2 at stage 2 and 2214 ± 748 cells/mm2 at stage 3; the correlation coefficient was 0.018 (p = 0.9). Likewise, the study conducted by Niederer et al., which used confocal microscopy, did not record any significant difference in endothelial cell count between KCN stages. The recorded findings were 2510.6 ± 334.4 cells/mm2 at the mild and moderate stages (1 and 2) and 2345.5 ± 331.8 cells/mm2 at the advanced stages (k > 52 D; p = 0.09). Timocin et al. reported that the changes observed in endothelial cell counts in KCN did not depend on central pachymetry and did not correlate with KCN stage. Ostadi-Moghaddam et al. conducted a study assessing specular microscopy in the contralateral eyes of patients with Vogt striae and found no significant difference between them. On the other hand, the study conducted by Goebels et al. with 712 patients, whose mean age was 38 years, reported a significantly lower endothelial cell count in more advanced KCN stages: 2624 ± 300 cells/mm2 at stage 1 and 2401 ± 464 cells/mm2 at stage 4 (p < 0.01). [Table 1. Comparison between endothelial cell count studies at different keratoconus stages carried out through specular microscopy: current study, Uakhan et al., Niederer et al., Timucin et al. and El-Agha et al.] It is important to highlight that most patients included in that study wore contact lenses. More advanced KCN stages can be associated with prolonged contact lens use throughout life. Such a profile can lead to endothelial changes; therefore, the inclusion of patients who wear contact lenses may have influenced the results of that study. However, the difference in endothelial density between KCN stages 1 and 4 remained significant even when only the subgroup with no current history of contact lens use was assessed. Corneal endothelial cell count in KCN has been evaluated with different devices, which showed conflicting results in different studies; this can be a consequence of different data collection protocols. Table 1 shows the comparison between the published articles and the current study. A limitation of the current research lies in the fact that specular microscopy did not capture any image in very curved and/or very thin corneas, because the specular surface area depends on the curvature of the reflecting surface. The more curved the cornea, the smaller the image obtained in specular microscopy devices. There is an additional restriction in the light reflection area due to the proximity of the two concentric surfaces (i.e., epithelium and endothelium). The epithelial surface is highly reflective due to the significant difference in the refractive index between air and the epithelium. The light beam crosses the cornea and reflects at both the epithelial and the endothelial interfaces. The visible specular area is given by the relationship between beam width and corneal thickness.
The viable endothelial area is a rectangle because of the aforementioned restriction, and the corneal curvature radius determines the height of the rectangle. Therefore, very curved and very thin corneas can decrease the exam's reliability by reducing the visible area of the specular reflex or even by making it impossible to capture its image. The current research did not have access to all data from the specular microscopy exam, such as endothelial mosaic pictures and other important indices that are not routinely registered in the medical records of our service, like the coefficient of variation, the hexagonal cell rate and the mean cell area, since this is a medical record review study.

Conclusion: Specular microscopy is important for the follow-up of patients with keratoconus. Despite the evidence in the current research of a lack of direct correlation between disease stage and endothelial cell count, the evaluation of the endothelial cell count is useful for both surgical monitoring and planning. |
From Project Management To Project Unlimited. The years 2004, 2005, and 2006 saw significant downsizing in the Information Technology (IT) industry. In 2006, outsourcing and off-shoring dealt another blow to the IT job market. Enrollment of computer science, computer information systems, and management information systems majors was down. Many of us, the professors who presented papers at the ISECON conferences, would have fewer students to teach; our jobs might not be here. The pedagogy of true/false questions, multiple-choice questions, or short essays no longer worked. One of the hottest jobs today is project manager. Using core project management concepts, namely team building, constant communication, and leadership, students can complete any final project individually or in a group. Final projects were used as assessment tools in advanced business or computer courses. This author taught or mentored all of the courses listed in the current paper. The class evaluations were extremely positive, and the retention of Business, CIS, or MIS majors as well as clients/employees was greatly improved. Project Unlimited becomes the new paradigm for learning. |
Comparative Analysis of Artificial Neural Network Models: Application in Bankruptcy Prediction. This study compares the predictive performance of three neural network methods, namely learning vector quantization, the radial basis function network, and a feedforward network trained with the conjugate gradient optimization algorithm, with the performance of logistic regression and the backpropagation algorithm. All these methods are applied to a dataset of 139 matched pairs of bankrupt and non-bankrupt US firms for the period 1983-1994. The results of this study indicate that the contemporary neural network methods applied in the present study provide superior results to those obtained from the logistic regression method and the backpropagation algorithm. |
package com.example.intelligentdoorlock;
import android.annotation.SuppressLint;
import android.app.TimePickerDialog;
import android.bluetooth.BluetoothSocket;
import android.content.Context;
import android.content.DialogInterface;
import android.content.Intent;
import android.content.SharedPreferences;
import android.graphics.drawable.BitmapDrawable;
import android.graphics.drawable.ColorDrawable;
import android.os.Build;
import android.os.Bundle;
import com.google.android.material.floatingactionbutton.FloatingActionButton;
import com.google.android.material.snackbar.Snackbar;
import com.google.gson.Gson;
import androidx.annotation.NonNull;
import androidx.annotation.RequiresApi;
import androidx.appcompat.app.AlertDialog;
import androidx.appcompat.app.AppCompatActivity;
import androidx.appcompat.widget.Toolbar;
import androidx.core.content.ContextCompat;
import android.os.Handler;
import android.os.Message;
import android.text.InputType;
import android.util.Log;
import android.view.Gravity;
import android.view.LayoutInflater;
import android.view.View;
import android.view.Menu;
import android.view.MenuItem;
import android.view.ViewGroup;
import android.widget.Button;
import android.widget.CompoundButton;
import android.widget.EditText;
import android.widget.PopupWindow;
import android.widget.TextView;
import android.widget.TimePicker;
import android.widget.Toast;
import android.widget.ToggleButton;
import org.apache.commons.io.IOUtils;
import java.io.BufferedWriter;
import java.io.ByteArrayOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.OutputStreamWriter;
import java.nio.charset.StandardCharsets;
import java.util.List;
import java.util.Objects;
import java.util.Scanner;
import java.util.Timer;
import java.util.TimerTask;
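/**
 * Main screen of the intelligent door lock app.
 *
 * Communicates with the lock over a Bluetooth SPP socket shared through the
 * GlobalVarious application object, sends plain-text commands such as
 * "unlock_door", "sys_control open/close", "set_safety_mode <n>" and
 * "set_unlock_direction <n>", and mirrors the lock state in GlobalVarious.
 * Bottom pop-up pickers (CommonPopWindow + PickerScrollView) are used to choose
 * the safety mode and the unlock direction.
 */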
public class MainActivity extends AppCompatActivity implements View.OnClickListener, CommonPopWindow.ViewClickListener {
private static final String TAG = "MainActivity";
private Button button1, button2, button3, button4, button5, button6, idmatch;
private String id_matched, mac_address, message;
private BluetoothSocket socket;
private InputStream mmInStream;
private OutputStream mmOutStream;
private Message message_to_transfer = new Message();
public int safety_mode, unlock_direction;
public static final int UPDATE_TEXT1 = 1, UPDATE_TEXT2 = 2, UPDATE_TEXT3 = 3;
private boolean thread_open_close = false;
private TextView click, click2;
private List<GetConfigReq.DatasBean> datasBeanList, datasBeanList2;
private String categoryName, categoryName2;
private String latest_modified_object = "", latest_modified_content = "";
public long time;
private static final long PERIOD = 19000;
@SuppressLint("HandlerLeak")
private Handler handler = new Handler() {
@Override
public void handleMessage(@NonNull Message msg) {
switch (msg.what) {
case UPDATE_TEXT1:
Toast.makeText(MainActivity.this, "You've received: " + message, Toast.LENGTH_LONG).show();
break;
case UPDATE_TEXT2:
Toast.makeText(MainActivity.this, "Arriving checking point !", Toast.LENGTH_LONG).show();
break;
case UPDATE_TEXT3:
Toast.makeText(MainActivity.this, "读取信息流失败!", Toast.LENGTH_LONG).show();
break;
default:
break;
}
}
};
@SuppressLint("SetTextI18n")
@RequiresApi(api = Build.VERSION_CODES.KITKAT)
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
Toolbar toolbar = findViewById(R.id.toolbar);
setSupportActionBar(toolbar);
initView();
initData();
initListener();
initView2();
initData2();
initListener2();
button1 = findViewById(R.id.button1);
button2 = findViewById(R.id.button2);
button3 = findViewById(R.id.button3);
button4 = findViewById(R.id.button4);
button5 = findViewById(R.id.button5);
button6 = findViewById(R.id.button6);
        //This block presets the parameter values and is only meant for debugging before the sync feature is implemented
        //In the real flow, parameters should be synchronized right after connecting, so this step would not be needed
        //Some of the values here can be treated as defaults for parameters that have not been configured yet
((GlobalVarious) getApplication()).setCurrent_mode("waiting");
((GlobalVarious) getApplication()).setSafety_mode("1");
((GlobalVarious) getApplication()).setUnlock_direction("1");
((GlobalVarious) getApplication()).setOpen_close("close");
((GlobalVarious) getApplication()).setBattery("90");
((GlobalVarious) getApplication()).setLatest_modified("");
time = 0;
Log.d(TAG, "The safety_mode after initialization is " + ((GlobalVarious) getApplication()).getSafety_mode());
Log.d(TAG, "The unlock_direction after initialization is " + ((GlobalVarious) getApplication()).getUnlock_direction());
switch (((GlobalVarious) getApplication()).getSafety_mode()) {
case "0":
categoryName = "设备锁关闭";
break;
case "1":
categoryName = "需要验证";
break;
case "2":
categoryName = "设备锁开启";
break;
default:
Log.e(TAG, "categoryName初始化失败!");
}
switch (((GlobalVarious) getApplication()).getUnlock_direction()) {
case "0":
categoryName2 = "顺时针";
break;
case "1":
categoryName2 = "逆时针";
break;
case "2":
categoryName2 = "顺逆皆可";
break;
default:
Log.e(TAG, "categoryName2初始化失败!");
}
SharedPreferences.Editor editor = getSharedPreferences("data", MODE_PRIVATE).edit();
editor.putString("id_matched", "");
editor.putString("mac_address", "");
editor.apply();
socket = null;
SharedPreferences pref = getSharedPreferences("data", MODE_PRIVATE);
id_matched = pref.getString("id_matched", "");
final ToggleButton toggleButton;
toggleButton = findViewById(R.id.toggleButton);
toggleButton.setChecked(false);
toggleButton.setOnCheckedChangeListener((buttonView, isChecked) -> {
if (System.currentTimeMillis() >= time && System.currentTimeMillis() <= time + PERIOD) {
Toast.makeText(this, "正在开锁中,请稍后操作。", Toast.LENGTH_LONG).show();
} else {
if (isChecked) {
if (!id_matched.equals("")) {
try {
mmOutStream.write("sys_control open".getBytes());
((GlobalVarious) getApplication()).setOpen_close("open");
Toast.makeText(MainActivity.this, "已开启!", Toast.LENGTH_SHORT).show();
} catch (IOException e) {
Toast.makeText(MainActivity.this, "同步失败!", Toast.LENGTH_SHORT).show();
toggleButton.setChecked(false);
}
} else {
Toast.makeText(MainActivity.this, "请先配对装置!", Toast.LENGTH_SHORT).show();
toggleButton.setChecked(false);
}
} else {
if (!id_matched.equals("")) {
try {
mmOutStream.write("sys_control close".getBytes());
((GlobalVarious) getApplication()).setOpen_close("close");
Toast.makeText(MainActivity.this, "已关闭!", Toast.LENGTH_SHORT).show();
} catch (IOException e) {
Toast.makeText(MainActivity.this, "同步失败!", Toast.LENGTH_SHORT).show();
toggleButton.setChecked(true);
}
} else {
Toast.makeText(MainActivity.this, "请先配对装置!", Toast.LENGTH_SHORT).show();
toggleButton.setChecked(false);
}
}
}
});
FloatingActionButton fab = findViewById(R.id.fab);
fab.setOnClickListener(v -> {
if (((GlobalVarious) getApplication()).getLatest_modified().equals("")) {
Snackbar.make(v, "尚未做任何修改"
, Snackbar.LENGTH_LONG).setAction("Action", null).show();
} else {
switch (((GlobalVarious) getApplication()).getLatest_modified()) {
case "open_close":
latest_modified_object = "运行状态";
latest_modified_content = ((GlobalVarious) getApplication()).getOpen_close();
if (latest_modified_content.equals("open")) latest_modified_content = "开机";
if (latest_modified_content.equals("close")) latest_modified_content = "待机";
break;
case "auto_control_open":
latest_modified_object = "定时开机时间";
String[] latest_modified_content_arr1 = (((GlobalVarious) getApplication()).getAuto_control_open()).split("\\s+");
latest_modified_content = latest_modified_content_arr1[1] + ":" + latest_modified_content_arr1[2];
break;
case "auto_control_close":
latest_modified_object = "定时关机时间";
String[] latest_modified_content_arr2 = (((GlobalVarious) getApplication()).getAuto_control_close()).split("\\s+");
latest_modified_content = latest_modified_content_arr2[1] + ":" + latest_modified_content_arr2[2];
break;
case "clear_sys_auto_control_open":
latest_modified_object = "取消定时开机";
break;
case "clear_sys_auto_control_close":
latest_modified_object = "取消定时关机";
break;
case "safety_mode":
latest_modified_object = "安全模式";
latest_modified_content = ((GlobalVarious) getApplication()).getSafety_mode();
switch (latest_modified_content) {
case "0":
latest_modified_content = "设备锁关闭";
break;
case "1":
latest_modified_content = "需要验证";
break;
case "2":
latest_modified_content = "设备锁开启";
break;
default:
}
break;
case "steer_angle":
latest_modified_object = "舵机角度";
latest_modified_content = ((GlobalVarious) getApplication()).getSteer_angle() + "°";
break;
case "unlock_direction":
latest_modified_object = "开锁方向";
latest_modified_content = ((GlobalVarious) getApplication()).getUnlock_direction();
switch (latest_modified_content) {
case "0":
latest_modified_content = "顺时针";
break;
case "1":
latest_modified_content = "逆时针";
break;
case "2":
latest_modified_content = "顺逆皆可";
break;
default:
}
break;
case "indicator_light_mode":
latest_modified_object = "指示灯模式";
latest_modified_content = ((GlobalVarious) getApplication()).getIndicator_light_mode();
break;
default:
}
if (!latest_modified_content.equals("")) {
Snackbar.make(v, "上一次修改:" + latest_modified_object + " " + latest_modified_content
, Snackbar.LENGTH_LONG).setAction("Action", null).show();
} else {
Snackbar.make(v, "上一次修改:" + latest_modified_object
, Snackbar.LENGTH_LONG).setAction("Action", null).show();
}
}
});
idmatch = findViewById(R.id.idmatch);
if (Objects.equals(id_matched, "")) idmatch.setText("未连接设备 ");
else idmatch.setText("门锁ID:" + id_matched + " ");
idmatch.setOnClickListener(v -> {
if (Objects.equals(id_matched, ""))
Toast.makeText(MainActivity.this, "您还尚未配对,请点击下面的“重新配对”按钮进行配对。",
Toast.LENGTH_LONG).show();
else
Toast.makeText(MainActivity.this, "您的id是:" + id_matched +
"\n如果您想要重新配对,请点击下面的“重新配对”按钮。", Toast.LENGTH_LONG).show();
});
Button rematch = findViewById(R.id.rematch);
rematch.setOnClickListener(v -> {
if (System.currentTimeMillis() >= time && System.currentTimeMillis() <= time + PERIOD) {
Toast.makeText(this, "正在开锁中,请稍后操作。", Toast.LENGTH_LONG).show();
} else {
if (!Objects.equals(id_matched, "")) {
AlertDialog.Builder dialog = new AlertDialog.Builder(MainActivity.this);
dialog.setTitle("提示消息");
dialog.setMessage("您确定要重新配对吗?");
dialog.setCancelable(false);
dialog.setPositiveButton("是的", (dialog1, which) -> {
try {
thread_open_close = false;
socket.close();
id_matched = "";
mac_address = "";
((GlobalVarious) getApplication()).setGlobalBlueSocket(null);
((GlobalVarious) getApplication()).setLatest_modified("");
toggleButton.setChecked(false);
Intent intent = new Intent(MainActivity.this, BluetoothActivity.class);
startActivityForResult(intent, 1);
} catch (IOException ignored) {
Toast.makeText(MainActivity.this, "错误!\n关闭socket失败!", Toast.LENGTH_SHORT).show();
}
});
dialog.setNegativeButton("取消", (dialog12, which) -> {
});
dialog.show();
} else {
Intent intent = new Intent(MainActivity.this, BluetoothActivity.class);
startActivityForResult(intent, 1);
}
}
});
if (Objects.equals(id_matched, "")) button1.setEnabled(false);
button1.setOnClickListener(v -> {
if (System.currentTimeMillis() >= time && System.currentTimeMillis() <= time + PERIOD) {
Toast.makeText(this, "正在开锁中,请勿重复点击!", Toast.LENGTH_LONG).show();
} else {
try {
mmOutStream.write("unlock_door".getBytes());
((GlobalVarious) getApplication()).setCurrent_mode("working");
time = System.currentTimeMillis();
Toast.makeText(MainActivity.this, "正在开门中!", Toast.LENGTH_SHORT).show();
} catch (IOException e) {
Toast.makeText(MainActivity.this, "开门失败!\n请检查蓝牙连接后重试。", Toast.LENGTH_SHORT).show();
}
}
});
if (Objects.equals(id_matched, "")) button2.setEnabled(false);
button2.setOnClickListener(v -> {
if (System.currentTimeMillis() >= time && System.currentTimeMillis() <= time + PERIOD) {
Toast.makeText(this, "正在开锁中,请稍后操作。", Toast.LENGTH_LONG).show();
} else {
AlertDialog.Builder dialog = new AlertDialog.Builder(MainActivity.this);
dialog.setTitle("请选择您需要设置的内容:");
dialog.setMessage("提示:若需要取消定时开关机,请前往更多功能——取消定时开关机。");
dialog.setCancelable(false);
dialog.setNegativeButton("定时开机", (dialog1, which) -> {
TimePickerDialog timePickerDialog = new TimePickerDialog(MainActivity.this,
(view, hourOfDay, minute) -> {
try {
mmOutStream.write(("sys_autocontrol open " + hourOfDay + " " + minute).getBytes());
((GlobalVarious) getApplication()).setAuto_control_open(hourOfDay + " " + minute);
Toast.makeText(MainActivity.this, "定时开机设置成功!", Toast.LENGTH_SHORT).show();
} catch (IOException e) {
Toast.makeText(MainActivity.this, "设置失败!", Toast.LENGTH_SHORT).show();
}
}, 0, 0, true);
timePickerDialog.show();
});
dialog.setNeutralButton("取消", (dialog1, which) -> {
});
dialog.setPositiveButton("定时关机", (dialog12, which) -> {
TimePickerDialog timePickerDialog = new TimePickerDialog(MainActivity.this,
(view, hourOfDay, minute) -> {
try {
mmOutStream.write(("sys_autocontrol close " + hourOfDay + " " + minute).getBytes());
((GlobalVarious) getApplication()).setAuto_control_close(hourOfDay + " " + minute);
Toast.makeText(MainActivity.this, "定时关机设置成功!", Toast.LENGTH_SHORT).show();
} catch (IOException e) {
Toast.makeText(MainActivity.this, "设置失败!", Toast.LENGTH_SHORT).show();
}
}, 0, 0, true);
timePickerDialog.show();
});
dialog.show();
}
});
if (Objects.equals(id_matched, "")) button3.setEnabled(false);
button3.setOnClickListener(v -> {
if (System.currentTimeMillis() >= time && System.currentTimeMillis() <= time + PERIOD) {
Toast.makeText(this, "正在开锁中,请稍后操作。", Toast.LENGTH_LONG).show();
} else {
setAddressSelectorPopup(v);
}
});
if (Objects.equals(id_matched, "")) button4.setEnabled(false);
button4.setOnClickListener(v -> {
if (System.currentTimeMillis() >= time && System.currentTimeMillis() <= time + PERIOD) {
Toast.makeText(this, "正在开锁中,请稍后操作。", Toast.LENGTH_LONG).show();
} else {
final EditText inputServer = new EditText(this);
inputServer.setInputType(InputType.TYPE_CLASS_NUMBER);
inputServer.setHint("请输入0~90度之间的值");
inputServer.setGravity(Gravity.CENTER_HORIZONTAL);
AlertDialog.Builder builder = new AlertDialog.Builder(this);
builder.setTitle("请输入舵机角度:").setView(inputServer).setNegativeButton("取消", null);
builder.setPositiveButton("确定", (dialog, which) -> {
if (Integer.parseInt(inputServer.getText().toString()) >= 0 && Integer.parseInt(inputServer.getText().toString()) <= 90) {
try {
mmOutStream.write(("set_steer_angle " + inputServer.getText().toString()).getBytes());
((GlobalVarious) getApplication()).setSteer_angle(inputServer.getText().toString());
Toast.makeText(MainActivity.this, "舵机角度设置成功!", Toast.LENGTH_SHORT).show();
} catch (IOException e) {
Toast.makeText(MainActivity.this, "传输失败,请重试!", Toast.LENGTH_SHORT).show();
}
} else {
Toast.makeText(MainActivity.this, "设置失败!\n请重试,并输入0~90之间的角度。", Toast.LENGTH_SHORT).show();
}
});
builder.show();
}
});
if (Objects.equals(id_matched, "")) button5.setEnabled(false);
button5.setOnClickListener(this::setAddressSelectorPopup2);
if (Objects.equals(id_matched, "")) button6.setEnabled(false);
button6.setOnClickListener(v -> {
Intent intent = new Intent(MainActivity.this, MoreFunctionsActivity.class);
intent.putExtra("time", time);
startActivity(intent);
});
}
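    // Handles the result returned by BluetoothActivity after pairing: stores the lock ID
    // and MAC address, obtains the I/O streams from the shared socket, enables the control
    // buttons and creates the background reader thread (currently gated by thread_open_close,
    // which is left false outside of debugging).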
@SuppressLint("SetTextI18n")
@RequiresApi(api = Build.VERSION_CODES.KITKAT)
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
super.onActivityResult(requestCode, resultCode, data);
if (requestCode == 1) {
if (resultCode == RESULT_OK) {
Toast.makeText(MainActivity.this, "连接成功!", Toast.LENGTH_SHORT).show();
id_matched = data.getStringExtra("id_matched");
mac_address = data.getStringExtra("mac_address");
                //Fetch the globally shared BluetoothSocket instance
socket = ((GlobalVarious) getApplication()).getGlobalBlueSocket();
try {
mmInStream = socket.getInputStream();
} catch (IOException e) {
Toast.makeText(this, "获取InputStream错误!", Toast.LENGTH_SHORT).show();
}
try {
mmOutStream = socket.getOutputStream();
} catch (IOException e) {
Toast.makeText(this, "获取OutputStream错误!", Toast.LENGTH_SHORT).show();
}
SharedPreferences.Editor editor = getSharedPreferences("data", MODE_PRIVATE).edit();
editor.putString("id_matched", id_matched);
editor.putString("mac_address", mac_address);
editor.apply();
idmatch.setText("门锁ID:" + id_matched + " ");
button1.setEnabled(true);
button2.setEnabled(true);
button3.setEnabled(true);
button4.setEnabled(true);
button5.setEnabled(true);
button6.setEnabled(true);
                //Note: set thread_open_close to true when debugging the reader thread
thread_open_close = false;
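                // Background reader thread: while thread_open_close is true it keeps reading the
                // socket input stream, splits each incoming message on whitespace and updates the
                // corresponding GlobalVarious field, posting Toasts through the handler. Note that
                // the actual read is currently stubbed out with the literal "abc def" for debugging.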
new Thread(() -> {
Log.d(TAG, "Having created new thread.");
while (thread_open_close) {
try {
message = null;
mmInStream = socket.getInputStream();
Log.d(TAG, "Having gotten inputStream.");
                            //Experimental code:
/*try {
byte[] buffer = new byte[1024];
int bytes = mmInStream.read(buffer);
message = String.valueOf(bytes);
} catch (IOException e) {
break;
}*/
                            //Option 1:
//message = readStreamToString(mmInStream);
                            //Option 2:
//Scanner scanner = new Scanner(mmInStream, "UTF-8");
//message = scanner.useDelimiter("\\A").next();
//scanner.close();
                            //Option 3:
//message = IOUtils.toString(mmInStream, StandardCharsets.UTF_8);
//check
message = "abc def";
//checking point
                            handler.sendEmptyMessage(UPDATE_TEXT2); // notify the UI thread with a fresh Message
Log.d(TAG, "Having received message.");
String[] order = message.split("\\s+");
Log.d(TAG, "Having divided message.");
switch (order[1]) {
case "sys_control":
((GlobalVarious) getApplication()).setOpen_close(order[2]);
break;
case "sys_autocontrol":
if (order[2].equals("open")) {
((GlobalVarious) getApplication()).setAuto_control_open(order[3] + " " + order[4]);
} else if (order[2].equals("close")) {
((GlobalVarious) getApplication()).setAuto_control_close(order[3] + " " + order[4]);
}
break;
case "set_safety_mode":
((GlobalVarious) getApplication()).setSafety_mode(order[2]);
break;
case "set_steer_angle":
((GlobalVarious) getApplication()).setSteer_angle(order[2]);
break;
case "set_unlock_direction":
((GlobalVarious) getApplication()).setUnlock_direction(order[2]);
break;
case "set_indicator_light_mode":
((GlobalVarious) getApplication()).setIndicator_light_mode(order[2]);
break;
case "sys_battery":
((GlobalVarious) getApplication()).setBattery(order[2]);
break;
case "current_mode":
((GlobalVarious) getApplication()).setCurrent_mode(order[2]);
break;
default:
Log.d(TAG, "Having received message that can not be executed.");
                                    handler.sendEmptyMessage(UPDATE_TEXT1); // surface the unrecognized message to the user
}
} catch (IOException e) {
                                handler.sendEmptyMessage(UPDATE_TEXT3); // report the stream read failure
}
}
}).start();
}
}
}
@Override
public boolean onCreateOptionsMenu(Menu menu) {
// Inflate the menu; this adds items to the action bar if it is present.
Log.d(TAG, "Menu is on_on.");
getMenuInflater().inflate(R.menu.menu_main, menu);
return true;
}
@SuppressLint({"SetTextI18n", "RtlHardcoded"})
@RequiresApi(api = Build.VERSION_CODES.KITKAT)
@Override
public boolean onOptionsItemSelected(MenuItem item) {
// Handle action bar item clicks here. The action bar will
// automatically handle clicks on the Home/Up button, so long
// as you specify a parent activity in AndroidManifest.xml.
switch (item.getItemId()) {
case R.id.update:
SharedPreferences pref = getSharedPreferences("data", MODE_PRIVATE);
id_matched = pref.getString("id_matched", "");
mac_address = pref.getString("mac_address", "");
if (!Objects.equals(id_matched, "")) {
idmatch.setText("门锁ID:" + id_matched + " ");
button1.setEnabled(true);
button2.setEnabled(true);
button3.setEnabled(true);
button4.setEnabled(true);
button5.setEnabled(true);
button6.setEnabled(true);
}
Toast.makeText(this, "完成刷新!", Toast.LENGTH_SHORT).show();
break;
case R.id.help:
if (id_matched.equals("")) {
Toast.makeText(this, "您还尚未连接,请先打开装置,并点击“重新配对”按钮,进行配对!",
Toast.LENGTH_LONG).show();
} else {
Toast.makeText(this, "您已经成功连接装置,可以根据需要,通过下方的按键设置功能。\n" +
"如果您想了解更多关于我们的信息,请点击“更多功能”-“关于我们”。", Toast.LENGTH_LONG).show();
}
break;
default:
}
return true;
}
private void initData() {
        //Simulated backend response used to populate the picker
String response = "{\"ret\":0,\"msg\":\"succes,\",\"datas\":" +
"[{\"ID\":\" 0\",\"categoryName\":\"设备锁关闭\",\"state\":\"1\"}" +
",{\"ID\":\"1\",\"categoryName\":\"需要验证\",\"state\":\"1\"}," +
"{\"ID\":\"2\",\"categoryName\":\"设备锁开启\",\"state\":\"1\"}]}";
GetConfigReq getConfigReq = new Gson().fromJson(response, GetConfigReq.class);
        // ret == 0 means the request succeeded
        if (getConfigReq.getRet() == 0) {
            // Data set backing the scrolling picker
datasBeanList = getConfigReq.getDatas();
}
}
private void initData2() {
        //Simulated backend response used to populate the picker
String response = "{\"ret\":0,\"msg\":\"succes,\",\"datas\":" +
"[{\"ID\":\" 0\",\"categoryName\":\"顺时针\",\"state\":\"1\"}," +
"{\"ID\":\"1\",\"categoryName\":\"逆时针\",\"state\":\"1\"}," +
"{\"ID\":\"2\",\"categoryName\":\"顺逆皆可\",\"state\":\"1\"}]}";
GetConfigReq getConfigReq = new Gson().fromJson(response, GetConfigReq.class);
        // ret == 0 means the request succeeded
        if (getConfigReq.getRet() == 0) {
            // Data set backing the scrolling picker
datasBeanList2 = getConfigReq.getDatas();
}
}
private void initView() {
click = findViewById(R.id.button3);
}
private void initView2() {
click2 = findViewById(R.id.button5);
}
private void initListener() {
click.setOnClickListener(this);
}
private void initListener2() {
click2.setOnClickListener(this);
}
    /**
     * Shows the picker inside a popup anchored to the bottom of the screen
     */
private void setAddressSelectorPopup(View v) {
int screenHeight = getResources().getDisplayMetrics().heightPixels;
CommonPopWindow.newBuilder()
.setView(R.layout.choice_view)
.setAnimationStyle(R.style.AppTheme)
.setBackgroundDrawable(new BitmapDrawable())
.setSize(ViewGroup.LayoutParams.MATCH_PARENT, Math.round(screenHeight * 0.3f))
.setViewOnClickListener(this)
.setBackgroundDarkEnable(true)
.setBackgroundAlpha(0.7f)
.setBackgroundDrawable(new ColorDrawable(999999))
.build(this)
.showAsBottom(v);
}
private void setAddressSelectorPopup2(View v) {
int screenHeight = getResources().getDisplayMetrics().heightPixels;
CommonPopWindow.newBuilder()
.setView(R.layout.choice_view2)
.setAnimationStyle(R.style.AppTheme)
.setBackgroundDrawable(new BitmapDrawable())
.setSize(ViewGroup.LayoutParams.MATCH_PARENT, Math.round(screenHeight * 0.3f))
.setViewOnClickListener(this)
.setBackgroundDarkEnable(true)
.setBackgroundAlpha(0.7f)
.setBackgroundDrawable(new ColorDrawable(999999))
.build(this)
.showAsBottom(v);
}
@Override
public void getChildView(final PopupWindow mPopupWindow, View view, int mLayoutResId) {
switch (mLayoutResId) {
case R.layout.choice_view:
TextView imageBtn = view.findViewById(R.id.img_guanbi);
PickerScrollView addressSelector = view.findViewById(R.id.address);
                // Set the picker data; the first entry is selected by default
                addressSelector.setData(datasBeanList);
                // Listener for the scrolling selection
                addressSelector.setOnSelectListener(pickers -> categoryName = pickers.getCategoryName());
                // Confirm button
imageBtn.setOnClickListener(v -> {
Log.d(TAG, "The safety_mode now is " + safety_mode);
mPopupWindow.dismiss();
switch (categoryName) {
case "设备锁关闭":
safety_mode = 0;
break;
case "需要验证":
safety_mode = 1;
break;
case "设备锁开启":
safety_mode = 2;
break;
default:
safety_mode = -1;
}
Log.d(TAG, "Having defined safety_mode of " + categoryName);
if (safety_mode != -1) {
try {
mmOutStream.write(("set_safety_mode " + safety_mode).getBytes());
((GlobalVarious) getApplication()).setSafety_mode(String.valueOf(safety_mode));
Toast.makeText(MainActivity.this, "设置成功!", Toast.LENGTH_SHORT).show();
} catch (IOException e) {
Toast.makeText(MainActivity.this, "设置失败!", Toast.LENGTH_SHORT).show();
}
} else {
Toast.makeText(MainActivity.this, "设置失败!", Toast.LENGTH_SHORT).show();
}
});
break;
case R.layout.choice_view2:
TextView imageBtn2 = view.findViewById(R.id.img_guanbi2);
PickerScrollView addressSelector2 = view.findViewById(R.id.address2);
                // Set the picker data; the first entry is selected by default
                addressSelector2.setData(datasBeanList2);
                // Listener for the scrolling selection
                addressSelector2.setOnSelectListener(pickers -> categoryName2 = pickers.getCategoryName());
                // Confirm button
imageBtn2.setOnClickListener(v -> {
Log.d(TAG, "Having tapped finish button.");
mPopupWindow.dismiss();
switch (categoryName2) {
case "顺时针":
unlock_direction = 0;
break;
case "逆时针":
unlock_direction = 1;
break;
case "顺逆皆可":
unlock_direction = 2;
break;
default:
unlock_direction = -1;
}
Log.d(TAG, "Having defined unlock_direction of " + categoryName2);
if (unlock_direction != -1) {
try {
mmOutStream.write(("set_unlock_direction " + unlock_direction).getBytes());
                            // Store the newly chosen unlock direction in the shared state
                            ((GlobalVarious) getApplication()).setUnlock_direction(String.valueOf(unlock_direction));
Toast.makeText(MainActivity.this, "设置成功!", Toast.LENGTH_SHORT).show();
} catch (IOException e) {
Toast.makeText(MainActivity.this, "设置失败!", Toast.LENGTH_SHORT).show();
}
} else {
Toast.makeText(MainActivity.this, "设置失败!", Toast.LENGTH_SHORT).show();
}
});
break;
}
}
public static String readStreamToString(InputStream inputStream) throws IOException {
        // Byte array output stream that collects everything read from the input
ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
        // 1024-byte read buffer
byte[] buffer = new byte[1024];
        // Number of bytes read in the current iteration
int len = 0;
        // Read the input stream until it is exhausted
        while ((len = inputStream.read(buffer)) != -1) { // -1 means there is nothing left to read
            byteArrayOutputStream.write(buffer, 0, len); // append what was just read to the output stream
}
        // Convert the collected bytes to a String
        String result = byteArrayOutputStream.toString();
        // Return the resulting string
        return result;
}
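    // Re-shows the given Toast every 3 seconds so it stays on screen, then cancels
    // both the Toast and the repeating timer after cnt milliseconds.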
public void showMyToast(final Toast toast, final int cnt) {
final Timer timer = new Timer();
timer.schedule(new TimerTask() {
@Override
public void run() {
toast.show();
}
}, 0, 3000);
new Timer().schedule(new TimerTask() {
@Override
public void run() {
toast.cancel();
timer.cancel();
}
}, cnt);
}
@Override
public void onClick(View v) {
}
} |
use crate::body::Body;
use crate::body_codec::BodyImpl;
use crate::body_send::BodySender;
use crate::bw::BandwidthMonitor;
use crate::head_ext::HeaderMapExt;
use crate::params::HReqParams;
use crate::uninit::UninitBuf;
use crate::Error;
use crate::AGENT_IDENT;
use crate::{AsyncRead, AsyncWrite};
use bytes::Bytes;
use futures_util::future::poll_fn;
use h2::server::Connection as H2Connection;
use h2::server::SendResponse as H2SendResponse;
use hreq_h1::server::Connection as H1Connection;
use hreq_h1::server::SendResponse as H1SendResponse;
use httpdate::fmt_http_date;
use std::net::SocketAddr;
use std::pin::Pin;
use std::task::Poll;
use std::time::SystemTime;
use tokio_util::compat::Compat;
const START_BUF_SIZE: usize = 16_384;
const MAX_BUF_SIZE: usize = 2 * 1024 * 1024;
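/// Server-side connection wrapper that hides whether the peer negotiated
/// HTTP/1.1 or HTTP/2. For HTTP/2 a `BandwidthMonitor` is carried along so the
/// flow-control window can be resized while requests are being accepted.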
pub(crate) struct Connection<Stream> {
inner: Inner<Stream>,
bw: Option<BandwidthMonitor>,
}
enum Inner<Stream> {
H1(H1Connection<Stream>),
H2(H2Connection<Compat<Stream>, Bytes>),
}
impl<Stream> Connection<Stream>
where
Stream: AsyncRead + AsyncWrite + Unpin,
{
pub fn new_h1(conn: H1Connection<Stream>) -> Self {
Connection {
inner: Inner::H1(conn),
bw: None,
}
}
pub fn new_h2(conn: H2Connection<Compat<Stream>, Bytes>, bw: BandwidthMonitor) -> Self {
Connection {
inner: Inner::H2(conn),
bw: Some(bw),
}
}
pub async fn accept(
&mut self,
local_addr: SocketAddr,
remote_addr: SocketAddr,
) -> Option<Result<(http::Request<Body>, SendResponse), Error>> {
// cheap clone, either None or a Arc<Mutex<_>>
let bw_acc = self.bw.clone();
match &mut self.inner {
Inner::H1(c) => {
if let Some(next) = c.accept().await {
match next {
Err(e) => return Some(Err(e.into())),
Ok(v) => {
let (req, send) = v;
let (parts, recv) = req.into_parts();
let body = Body::new(BodyImpl::Http1(recv), None, false);
let send = SendResponse::H1(send);
return Some(Ok(Self::configure(
parts,
body,
local_addr,
remote_addr,
send,
None,
)));
}
}
}
trace!("H1 accept incoming end");
}
Inner::H2(c) => {
let mut bw_acc = bw_acc.expect("h2 requires bandwidth monitor");
let bw_req = bw_acc.clone();
// piggy-back the bandwidth monitor on accepting requests from connection
let accept_and_bw = poll_fn(move |cx| {
if let Poll::Ready(window_size) = bw_acc.poll_window_update(cx) {
trace!("Update h2 window size: {}", window_size);
c.set_target_window_size(window_size);
c.set_initial_window_size(window_size)?;
};
Pin::new(&mut *c).poll_accept(cx)
});
if let Some(next) = accept_and_bw.await {
match next {
Err(e) => return Some(Err(e.into())),
Ok(v) => {
let (req, send) = v;
let (parts, recv) = req.into_parts();
let body = Body::new(BodyImpl::Http2(recv), None, false);
let send = SendResponse::H2(send);
return Some(Ok(Self::configure(
parts,
body,
local_addr,
remote_addr,
send,
Some(bw_req),
)));
}
}
}
trace!("H2 accept incoming end");
}
};
None
}
fn configure(
mut parts: http::request::Parts,
mut body: Body,
local_addr: SocketAddr,
remote_addr: SocketAddr,
send: SendResponse,
bw: Option<BandwidthMonitor>,
) -> (http::Request<Body>, SendResponse) {
// Instantiate new HReqParams that will follow the request and response through.
let mut hreq_params = HReqParams::new();
hreq_params.mark_request_start();
hreq_params.local_addr = local_addr;
hreq_params.remote_addr = remote_addr;
parts.extensions.insert(hreq_params.clone());
body.set_bw_monitor(bw);
body.configure(&hreq_params, &parts.headers, true);
(http::Request::from_parts(parts, body), send)
}
}
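/// Handle for sending the response to one accepted request, abstracting over the
/// h1 and h2 sender types. `send_response` covers both the success path (streaming
/// the body in chunks) and the error path (a plain 500).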
pub(crate) enum SendResponse {
H1(H1SendResponse),
H2(H2SendResponse<Bytes>),
}
impl SendResponse {
pub async fn send_response(
self,
result: Result<http::Response<Body>, Error>,
req_params: HReqParams,
) -> Result<(), Error> {
match result {
Ok(res) => self.handle_response(res, req_params).await?,
Err(err) => self.handle_error(err).await?,
}
Ok(())
}
fn is_http2(&self) -> bool {
if let SendResponse::H2(_) = self {
return true;
}
false
}
async fn handle_response(
self,
mut res: http::Response<Body>,
req_params: HReqParams,
) -> Result<(), Error> {
//
let mut params = res
.extensions_mut()
.remove::<HReqParams>()
.unwrap_or_else(HReqParams::new);
// merge parameters together
params.copy_from_request(&req_params);
let (mut parts, mut body) = res.into_parts();
        body.configure(&params, &parts.headers, false);
// for small response bodies, we try to fully buffer the data.
if params.prebuffer {
body.attempt_prebuffer().await?;
}
configure_response(&mut parts, &body, self.is_http2());
let res = http::Response::from_parts(parts, ());
let mut body_send = self.do_send(res).await?;
// this buffer should probably be less than h2 window size
let mut buf = UninitBuf::with_capacity(START_BUF_SIZE, MAX_BUF_SIZE);
if !body.is_definitely_no_body() {
loop {
buf.clear();
let amount_read = buf.read_from_async(&mut body).await?;
// Ship it to they underlying http1.1/http2 layer.
body_send.send_data(&buf[0..amount_read]).await?;
if amount_read == 0 {
break;
}
}
}
body_send.send_end().await?;
Ok(())
}
async fn do_send(self, res: http::Response<()>) -> Result<BodySender, Error> {
Ok(match self {
SendResponse::H1(send) => {
let send_body = send.send_response(res, false).await?;
BodySender::H1(send_body)
}
SendResponse::H2(mut send) => {
let send_body = send.send_response(res, false)?;
BodySender::H2(send_body)
}
})
}
async fn handle_error(self, err: Error) -> Result<(), Error> {
warn!("Middleware/handlers failed: {}", err);
let res = http::Response::builder().status(500).body(()).unwrap();
let mut body_send = self.do_send(res).await?;
body_send.send_end().await?;
Ok(())
}
}
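/// Fills in response headers the user did not set explicitly: content-length or
/// chunked transfer-encoding (the latter only for HTTP/1.1), content-type taken
/// from the body, plus `server` and `date`. Representation headers are skipped
/// for 304 responses.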
pub(crate) fn configure_response(parts: &mut http::response::Parts, body: &Body, is_http2: bool) {
let is304 = parts.status == 304;
// https://tools.ietf.org/html/rfc7232#section-4.1
//
// Since the goal of a 304 response is to minimize information transfer
// when the recipient already has one or more cached representations, a
// sender SHOULD NOT generate representation metadata other than the
// above listed fields unless said metadata exists for the purpose of
// guiding cache updates (e.g., Last-Modified might be useful if the
// response does not have an ETag field).
if !is304 {
if let Some(len) = body.content_encoded_length() {
// the body indicates a length (for sure).
let user_set_length = parts.headers.get("content-length").is_some();
if !user_set_length && (len > 0 || !parts.status.is_redirection()) {
parts.headers.set("content-length", len.to_string());
}
} else if !is_http2 && !parts.status.is_redirection() {
// body does not indicate a length (like from a reader),
// and status indicates there really is one.
// we chose chunked.
if parts.headers.get("transfer-encoding").is_none() {
parts.headers.set("transfer-encoding", "chunked");
}
}
if parts.headers.get("content-type").is_none() {
if let Some(ctype) = body.content_type() {
parts.headers.set("content-type", ctype);
}
}
}
if parts.headers.get("server").is_none() {
parts.headers.set("server", &*AGENT_IDENT);
}
if parts.headers.get("date").is_none() {
// Wed, 17 Apr 2013 12:00:00 GMT
parts.headers.set("date", fmt_http_date(SystemTime::now()));
}
}
|
import {
getAgularFieldType,
getColumnName,
getColumnType,
} from "./../../../shared/function";
import { BaseClass } from "../../../shared/sharedClass/baseClass";
import { CellItemModel } from "../../../model/cellModel";
import { StyleType } from "../../../shared/constans";
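/**
 * Code generator for Angular model classes.
 * Builds the `<Module>ListModel` source from columns that have a listViewOrdering
 * and the `<Module>ItemModel` source from columns that have a groupOrdering,
 * mapping database column names and types to Angular field names and types.
 */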
export class ModelTemp extends BaseClass {
masterList: CellItemModel[];
itemModel: string[];
listModel: string[];
constructor(masterList: CellItemModel[]) {
super(masterList);
this.masterList = masterList;
this.itemModel = [];
this.listModel = [];
}
getListModelData(): string[] {
const listModels = this.masterList.filter(
(value) => value.listViewOrdering != null
);
this.listModel.push(
`import { BaseCompanyView } from 'app/core/interfaces/base/baseCompanyView';`
);
this.listModel.push(
`export class ${this.moduleName}ListModel extends BaseCompanyView {`
);
this.listModel.push(` ${this.primaryColumn}: string = null;`);
listModels.forEach((item) => {
const columnName: string = getColumnName(
StyleType.SNAKE,
item.columnName
);
const columnType: string = getAgularFieldType(item.dataType);
this.listModel.push(` ${columnName}: ${columnType} = null;`);
});
this.listModel.push(`}`);
return this.listModel;
}
getItemModelData(): string[] {
const itemModels = this.masterList.filter(
(value) => value.groupOrdering != null
);
this.itemModel.push(
`import { BaseCompanyView } from 'app/core/interfaces/base/baseCompanyView';`
);
this.itemModel.push(
`export class ${this.moduleName}ItemModel extends BaseCompanyView {`
);
this.itemModel.push(` ${this.primaryColumn}: string = null;`);
itemModels.forEach((item) => {
const columnName: string = getColumnName(
StyleType.SNAKE,
item.columnName
);
const columnType: string = getAgularFieldType(item.dataType);
this.itemModel.push(` ${columnName}: ${columnType} = null;`);
});
this.itemModel.push(`}`);
return this.itemModel;
}
}
|
# `first` and `descriptive` are assumed to be the companion modules that provide
# MakeTables() and PoolRecords(); Process() is assumed to be defined elsewhere in
# this module.
import first
import descriptive


def MakeTables(data_dir='.'):
    """Reads survey data and returns the pooled, first-baby and other tables."""
table, firsts, others = first.MakeTables(data_dir)
pool = descriptive.PoolRecords(firsts, others)
Process(pool, 'live births')
Process(firsts, 'first babies')
Process(others, 'others')
return pool, firsts, others |
GaN junctionless trigate field-effect transistor with deep-submicron gate length: characterization and modeling in the RF regime. Radio-frequency (RF) performance of a gallium nitride (GaN)-based junctionless (JL) trigate field-effect transistor (TGFET), or fin-shaped FET (FinFET), is demonstrated along with RF modeling for the first time. The fabricated GaN JL TGFET had a gate length of 350 nm and had no AlGaN/GaN heterojunction, on which conventional high-electron-mobility transistors (HEMTs) are based, so that the channel is fully junctionless. The device with five fin channels exhibits a maximum drain current of 403 mA/mm and a maximum transconductance of 123.6 mS/mm. The maximum cutoff frequency (fT) and maximum oscillation frequency (fmax) are 2.45 and 9.75 GHz, respectively. In order to confirm its potential for high-frequency applications, small-signal modeling has been carried out up to a frequency above the maximum fT. |
EGCG, a major component of green tea, inhibits tumour growth by inhibiting VEGF induction in human colon carcinoma cells. Catechins are key components of teas that have antiproliferative properties. We investigated the effects of green tea catechins on intracellular signalling and VEGF induction in vitro in serum-deprived HT29 human colon cancer cells and in vivo on the growth of HT29 cells in nude mice. In the in vitro studies, (-)-epigallocatechin gallate (EGCG), the most abundant catechin in green tea extract, inhibited Erk-1 and Erk-2 activation in a dose-dependent manner. However, other tea catechins such as (-)-epigallocatechin (EGC), (-)-epicatechin gallate (ECG), and (-)-epicatechin (EC) did not affect Erk-1 or Erk-2 activation at a concentration of 30 µM. EGCG also inhibited the increase in VEGF expression and promoter activity induced by serum starvation. In the in vivo studies, athymic BALB/c nude mice were inoculated subcutaneously with HT29 cells and treated with daily intraperitoneal injections of EC (negative control) or EGCG at 1.5 mg day−1 mouse−1, starting 2 days after tumour cell inoculation. Treatment with EGCG inhibited tumour growth (58%), microvessel density (30%), and tumour cell proliferation (27%) and increased tumour cell apoptosis (1.9-fold) and endothelial cell apoptosis (3-fold) relative to the control condition (P < 0.05 for all comparisons). EGCG may exert at least part of its anticancer effect by inhibiting angiogenesis through blocking the induction of VEGF. © 2001 Cancer Research Campaign http://www.bjcancer.com

Signalling pathways that mediate proliferation also mediate other processes involved in tumour progression (Jung et al, 1999). Tumour growth induced by mitogenic compounds is associated with the activation of several cytosolic proteins, including those involved in the phosphorylation and activation of MAPKs (Cobb and Goldsmith, 1995). Among the three subgroups of MAPKs in mammalian cells, the Erks seem to be the most important for growth factor-induced cell proliferation. Regulators of angiogenesis are also important determinants of tumour growth, and various signal transduction pathways have been implicated in regulating angiogenic factor expression. Vascular endothelial growth factor (VEGF) is the angiogenic factor most closely associated with inducing and maintaining the neovasculature in human colon cancer (Takahashi et al, 1995, 1997; Ellis et al, 1996). Activation of Erk-1 and Erk-2 has been shown to be an important mediator of the up-regulation of VEGF mRNA. Milanini et al demonstrated that activation of Erk-1 and Erk-2 plays a key role in the regulation of VEGF expression via alteration of the AP-2 and Sp1 transcription factors in fibroblasts. Our previous studies on VEGF induction in serum-starved cells revealed a causal role for the activation of Erk, but not of p38, stress-activated protein kinase (SAPK) or Akt (Jung et al, 1999). A recent finding that green tea and one of its components, EGCG, prevents the growth of new blood vessels in animals (Cao and Cao, 1999) suggests a mechanistic link between the consumption of tea and the possible prevention and treatment of angiogenesis-dependent diseases, including cancer. In this study, we investigated the effects of green tea catechins on VEGF expression in vitro and their effects on tumour growth in vivo in human colon cancer xenografts in nude mice.
optimum cutting temperature (OCT) compound from Miles Inc (Elkhart), diaminobenzidine substrate (DAB) and Universal Mount from Research Genetics (Huntsville), Superfrost slides from Fisher Scientific Co (Houston), a terminal deoxynucleotidyl transferase (TdT)-mediated dUTP nick-end labelling (TUNEL) kit from Promega (Madison), and 4′,6-diamidino-2-phenylindole dihydrochloride (DAPI) mount from Vector Laboratories Inc (Burlingame). Antibodies for the immunohistochemical analyses were obtained as follows: rat anti-mouse CD31/PECAM-1 antibody from Pharmingen (San Diego); mouse anti-PCNA clone PC 10 DAKO A/S from Dako Corp (Carpinteria); peroxidase-conjugated goat anti-rat immunoglobulin (IgG) (H+L) and Texas Red- and fluorescein-conjugated goat anti-rat IgG from Jackson Research Laboratories (West Grove); and peroxidase-conjugated rat anti-mouse IgG2a from Serotec Harlan Bioproducts for Science Inc (Indianapolis).

Cell culture: The human colon cancer cell line HT29 was obtained from the American Type Culture Collection (Manassas) and cultured in minimal essential medium (MEM) supplemented with 10% fetal bovine serum (FBS), 2 U ml−1 penicillin and streptomycin, 1 mM sodium pyruvate, 2 mM L-glutamine, vitamins, and nonessential amino acids at 37°C under 5% CO2. Serum deprivation was induced by excluding FBS from the standard culture medium after cells had reached 90-100% confluence.

The membranes were then washed and treated with secondary antibody labelled with horseradish peroxidase (anti-rabbit immunoglobulin antiserum from donkey at a 1:3000 dilution, Amersham, Arlington Heights). Protein bands were visualized with a commercially available chemiluminescence kit (Amersham). To quantify total Erk-1 and -2 protein levels, the membrane was washed with stripping solution (100 mM 2-mercaptoethanol, 2% SDS, and 62.5 mM Tris-HCl (pH 6.7)) for 30 min at 50°C and reprobed with rabbit anti-p44/42 MAPK antiserum (New England Biolabs) at a 1:1000 dilution.

mRNA extraction and Northern blot analysis: Total RNA was extracted from cells by using the Tri Reagent (Molecular Research Center Inc, Cincinnati). Northern blot hybridization was performed as previously described (Jung et al, 1999). In brief, total RNA (25 µg) was subjected to electrophoresis on 1% denaturing formaldehyde-agarose gels, transferred to a Hybond-N+ positively charged nylon membrane (Amersham) overnight by capillary elution, and ultraviolet cross-linked at 120,000 µJ cm−2 by using an ultraviolet Stratalinker 1800 (Stratagene, La Jolla). The blots were prehybridized for 3-4 h at 65°C in rapid hybridization buffer (Amersham), and the membranes were hybridized overnight at 65°C with the cDNA probe for VEGF or glyceraldehyde 3-phosphate dehydrogenase (GAPDH) (see below). The probed nylon membranes were then washed and exposed to radiographic film (GIBCO BRL Life Technologies Inc, Grand Island). A human VEGF-specific 204-bp cDNA probe was a gift from Dr Brygida Berse (Harvard Medical School, Boston), and a GAPDH probe was purchased from the American Type Culture Collection (Manassas). The VEGF probe identifies all of the alternatively spliced forms of VEGF mRNA transcripts. Probes were purified by agarose gel electrophoresis by using the QIAEX Gel Extraction kit (QIAGEN Inc, Chatworth). Each cDNA probe was radiolabeled with [α-32P]deoxyribonucleotide triphosphate by using the random-priming technique with the Rediprime labeling system (Amersham).
VEGF promoter-reporter activity in response to serum starvation: The effect of EGCG on the transcriptional regulation of VEGF by serum starvation of HT29 cells was examined by using transient transfection with a VEGF promoter (luciferase)-reporter construct. Full-length VEGF promoter cDNA, kindly provided by J Abraham (Scios Nova Inc, Mountain View), was subcloned into pGL3 by using standard techniques (Akagi et al, 1998). The following plasmids were used: pGL3-VEGF (containing the human VEGF promoter linked to the firefly luciferase reporter gene) (Promega, Madison), pRLTK (an internal control plasmid containing the herpes simplex thymidine kinase promoter linked to a constitutively active Renilla luciferase reporter gene), and pGL3 (plasmid vector alone as a negative control). HT29 cells (0.5-1.0 × 10^6) were seeded in 6-well plates, and the pRLTK and pGL3-VEGF constructs were cotransfected into cells with the FuGENE 6 Transfection Reagent (Boehringer Mannheim, Indianapolis) as described by the manufacturer. pRLTK and pGL3 were cotransfected as a negative control. After cells were incubated in the transfection medium for 24 h, the medium was changed to standard medium and the cells were incubated for another 24 h, after which they were incubated in serum-free medium for 24 h. To determine whether EGCG could inhibit the increase in VEGF-promoter activity associated with serum deprivation, cells were treated with EGCG 1 h before being exposed to the serum-free condition. Cells were harvested with passive lysis buffer (Dual-Luciferase Reporter Assay System, Promega), and luciferase activity was determined with a single-sample luminometer, as outlined in the manufacturer's protocol.

Tumour cell inoculation and EGCG treatment: Six-week-old male athymic BALB/c nude mice were obtained from the National Cancer Institute's Animal Production Area (Frederick) and were acclimated for 1 week. Mice were then injected subcutaneously with 1 × 10^6 viable HT29 cells. Beginning 2 days later, mice were given daily intraperitoneal injections of 1.5 mg EGCG or EC (control). (Our previous studies have shown no difference between mice injected with EC and those injected with PBS (data not shown), so we elected to use EC as the control condition.) Animals were observed daily for tumour growth, and when tumours appeared they were measured every 3 days. Tumour volume was calculated as 0.5 × length × width² (length > width). All animal studies were conducted in accordance with institutional guidelines approved by the Animal Care and Use Committee of The University of Texas MD Anderson Cancer Center.

Necropsy and tissue preparation: Mice were killed by cervical dislocation 22 days after tumour-cell implantation. The tumours were excised, weighed, and sectioned, and the tumour sections were either embedded in OCT compound and frozen at −70°C or fixed in formalin.

Immunohistochemical analysis: Tumour vessel formation was assessed immunohistochemically by staining the tumour sections for CD31 and proliferating cell nuclear antigen (PCNA) as follows. Formalin-fixed or paraffin-embedded sections were treated by standard deparaffinization, and sections frozen in OCT were treated by fixation in acetone and chloroform; immunohistochemical analyses were then performed as described elsewhere (Shaheen et al, 1999).
Briefly, endogenous peroxidases were blocked with 3% H2O2 in methanol, slides were washed in PBS, incubated for 20 min with protein-blocking solution (PBS supplemented with 1% normal goat serum and 5% normal horse serum), and incubated overnight at 4°C with primary antibodies directed against CD31 or PCNA. Then the slides were washed again, incubated with protein-blocking solution, incubated for 1 h at room temperature with peroxidase-conjugated secondary antibodies, washed, incubated with DAB, washed, counterstained with haematoxylin, washed, mounted with Universal Mount, and dried on a 56°C hot-plate. Negative controls were prepared with the same procedure but without the primary antibody.

Immunofluorescent staining for CD31 and the TUNEL assay: Frozen tumour sections were stained by immunofluorescence for CD31 according to the above protocol with the following modifications. After sections were incubated overnight at 4°C with the primary antibody, washed, and incubated with protein-blocking solution, they were incubated for 1 h at room temperature with a secondary antibody that was conjugated to Texas Red (red fluorescence), washed, and then TUNEL staining was performed according to the manufacturer's protocol. Briefly, the sections were fixed with 4% methanol-free paraformaldehyde, washed, permeabilized with 0.2% Triton X-100, washed, incubated with the kit's equilibration buffer, incubated with a reaction mix containing equilibration buffer, nucleotide mix, and the TdT enzyme at 37°C for 1 h, incubated for 15 min at room temperature with 2× standard saline citrate to stop the TdT reaction, washed, stained with DAPI mount (to visualize the nuclei), and glass coverslips were applied.

Quantification of CD31, PCNA and TUNEL: Tumour vessels and PCNA-positive cells were evaluated by light microscopy, counted in five random 0.159-mm² fields at ×100 magnification, imaged digitally, and processed with Optimas Image Analysis software (Biscan, Edmond). We also quantified apoptosis with immunofluorescence by imaging sections digitally and processing them with Adobe Photoshop software (Adobe Systems, Mountain View). CD31-positive endothelial cells were detected by localized red fluorescence using a rhodamine filter. Tumour and endothelial cell apoptosis was visualized by localized green fluorescence using a fluorescein filter. Nuclei were detected by the blue fluorescence of DAPI with its respective filter. Cell counts were obtained in five random 0.011-mm² fields per slide at ×400 magnification. The percentage of apoptotic cells was determined as (number of apoptotic cells / total number of cells) × 100.

Statistical analysis: Statistical comparisons among groups were made with the Mann-Whitney U-test for non-parametric data or Student's t-test for parametric data (promoter studies) (InStat Statistical Software, San Diego) at the 95% confidence level (P < 0.05 was considered statistically significant).

RESULTS

Erk-1 and -2 activation: Erk-1 and Erk-2 were found to be activated (phosphorylated) in serum-deprived HT29 cells, as previously reported (Jung et al, 1999), whereas the total levels of Erk-1 and Erk-2 did not change significantly. Our previous studies demonstrated that pretreating the cells with 1 mg ml−1 of green tea extract inhibited the Erk-1 and -2 activation that had been induced by serum deprivation (data not shown). Purified EGCG, at 30 µM, markedly inhibited Erk-1 and -2 activation, but EGC, ECG, and EC at this concentration had no effect (Figure 1A).
Next, we treated cells with 0, 10, 30, or 50 µM EGCG under serum-free conditions and found that EGCG inhibited Erk-1 and -2 activation in serum-deprived HT29 cells in a dose-dependent manner (Figure 1B).

VEGF expression

Since Erk-1 and -2 activation is known to be involved in the induction of VEGF by serum deprivation, we next examined the effect of EGCG on VEGF expression. EGCG inhibited VEGF expression in serum-deprived HT29 cells in a dose-dependent manner, but EC did not affect VEGF expression at any dose (Figure 2). Neither EGC nor ECG (at 50 µM concentrations) affected VEGF expression in serum-starved cells (data not shown).

Transcriptional regulation of VEGF

To examine the effect of EGCG on the transcriptional regulation of VEGF induced by serum starvation, we transiently transfected promoter-reporter constructs into HT29 cells. Cells transfected with pGL3-VEGF (promoter-reporter construct) and pRLTK (internal control) demonstrated an increase in VEGF promoter activity secondary to serum starvation. Treating cells with EGCG inhibited the activity of the VEGF promoter in a dose-dependent fashion (Figure 3).

Growth of HT29 cells in vivo

Daily injections of EGCG (1.5 mg mouse⁻¹ day⁻¹) produced no signs of toxicity in athymic mice and effectively suppressed the growth of HT29 cells that had been implanted subcutaneously into those mice. At 22 days after tumour-cell implantation, tumour volume was inhibited by 61% (Figure 4A) and tumour weight was inhibited by 58% (Figure 4B) in the EGCG-treated group. In preliminary studies, we demonstrated that there was no difference in tumour size between mice injected with EC and those injected with PBS (data not shown); therefore we used EC as the control agent.

Tumour angiogenesis and tumour cell proliferation

We used immunohistochemical staining for CD31 to reveal vessel formation in tumour sections. At 22 days after tumour-cell implantation, daily EGCG treatment had decreased the number of tumour vessels by 30% compared with that of controls (Figure 5A). Tumour cell proliferation was evaluated by immunohistochemical staining for PCNA. Tumours from mice treated with EGCG had significantly less tumour cell proliferation (27%) than did controls (Figure 5A).

Figure 3. EGCG inhibits the induction of VEGF promoter activity in colon carcinoma cells. HT29 cells were co-transfected with pGL3-VEGF (a VEGF promoter-luciferase-reporter construct) and pRLTK (control for transfection efficiency); co-transfection of pGL3 and pRLTK was used as a negative control. After 23 h, (-)-epigallocatechin gallate (EGCG) was added, and 1 h thereafter the medium was changed to serum-free medium. Cells were harvested after 24 h of serum starvation, protein was extracted, and luciferase activity was determined. Five separate experiments were done and results were standardized to reporter activity at t = 0 (control; control values were set at 1 or 1.1 to allow for statistical analysis with our computer software program). Bars indicate standard error of the mean. *P < 0.05 (unpaired Student's t-test).

Apoptosis of tumour and endothelial cells

Immunofluorescent TUNEL staining with concurrent staining for CD31 was performed to quantify endothelial apoptosis in tumour sections (Shaheen et al, 1999). EGCG treatment produced a 1.9-fold increase in tumour cell apoptosis and a 3-fold increase in endothelial cell apoptosis compared with those of controls (Figure 5B).
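To make the quantification arithmetic concrete, the short Python sketch below restates the tumour-volume and apoptotic-index calculations described in the Methods. The numeric inputs are hypothetical and are not data from this study; the percent-inhibition helper is one common way of expressing the reported inhibition figures, not a formula stated explicitly in the paper.

# Hedged sketch of the quantification arithmetic; all inputs below are hypothetical.

def tumour_volume(length_mm, width_mm):
    """Tumour volume computed as 0.5 x length x width^2, with length > width."""
    assert length_mm > width_mm, "by convention, length is the longer dimension"
    return 0.5 * length_mm * width_mm ** 2

def apoptotic_index(apoptotic_cells, total_cells):
    """Percentage of apoptotic cells in a counted field."""
    return 100.0 * apoptotic_cells / total_cells

def percent_inhibition(treated_mean, control_mean):
    """Percent reduction of a treated-group mean relative to the control-group mean."""
    return 100.0 * (1.0 - treated_mean / control_mean)

# A 10 x 8 mm tumour: 0.5 * 10 * 8**2 = 320 mm^3.
print(tumour_volume(10, 8))            # 320.0
# 12 TUNEL-positive cells out of 400 counted: apoptotic index of 3%.
print(apoptotic_index(12, 400))        # 3.0
# A treated mean of 390 mm^3 versus a control mean of 1000 mm^3: 61% inhibition.
print(percent_inhibition(390, 1000))   # 61.0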
The differences in microvessel density, tumour cell proliferation and endothelial cell apoptosis between the EC and EGCG groups are illustrated in Figure 6.

DISCUSSION

EGCG is the most abundant of the green tea polyphenols, accounting for more than 40% of the total polyphenolic mixture (Stoner and Mukhtar, 1995). Several molecular mechanisms have been suggested for EGCG's observed anticancer effect, including suppression of ligand binding to the EGF receptor (Liang et al, 1997); inhibition of urokinase (Jankun et al, 1997), protein kinase C (Kitano et al, 1997), and lipoxygenase and cyclooxygenase activities (Stoner and Mukhtar, 1995); and induction of apoptotic cell death and arrest of the cell cycle in tumour cells (Ahmad et al, 1997; Fujiki et al, 1998). In the present study, we found that EGCG inhibited angiogenesis by blocking Erk-1 and Erk-2 activation and VEGF expression.

Figure 6. Appearance of human colon cancer xenograft sections in nude mice. Tumour sections were stained with haematoxylin and eosin (row 1 = ×40 magnification); immunohistochemically for CD31 (row 2 = ×100) and PCNA (row 3 = ×100); and immunofluorescently for TUNEL (row 4 = ×400) and sequential CD31 (red) and TUNEL (green) (row 5 = ×400). (-)-Epigallocatechin gallate (EGCG) treatment led to decreases in the number of tumour vessels (row 2) and tumour cell proliferation (row 3) and an increase in endothelial cell apoptosis (row 5) relative to treatment with (-)-epicatechin (EC) (column 1).

The Erk-1 and Erk-2 pathway is also thought to be essential for cellular growth and differentiation. Blockade of the Mek-Erk pathway suppresses growth of colon tumours in vivo (Sebolt-Leopold et al, 1999). Recently, Erk-1 and Erk-2 signalling has been reported to be an important cascade leading to overexpression of VEGF mRNA (Jung et al, 1999; Milanini et al, 1998). We previously showed that VEGF is up-regulated in serum-starved HT29 cells through the activation of Erk-1 and Erk-2 (Jung et al, 1999). Milanini et al also found that maximal transcriptional activation of VEGF in fibroblasts was Erk-dependent. The EGCG-mediated inhibition of Erk-1 and -2 activation may be an early cellular event that is partly responsible for the anticancer effect of EGCG. The exact mechanism by which EGCG inhibits the activation of Erk-1 and -2 in serum-starved cells is not known. One possible explanation is that EGCG could inhibit kinases that are involved in Erk-1 and -2 activation. EGCG is known to be a strong metal ion chelator (Yang and Wang, 1993). Since some receptor kinases depend on divalent cations for their activity (Mahadevan et al, 1995), EGCG could inhibit the activity of receptor kinases by chelating the divalent cations. VEGF is a potent and unique angiogenic protein that has specific mitogenic and chemotactic effects on vascular endothelial cells. Studies from our laboratory and others suggest that VEGF is the angiogenic factor most closely associated with induction and maintenance of the neovasculature in human colon cancer (Warren et al, 1995; Ellis et al, 1996; Takahashi et al, 1996). Our present results show that treatment of mice with EGCG resulted in marked inhibition of the growth, vascularity, and proliferation of human tumour xenografts in nude mice. We also found that EGCG induced significant endothelial cell apoptosis, a result that supports our earlier contention that VEGF is an in vivo survival factor for tumour endothelium (Shaheen et al, 1999).
These findings suggest that down-regulation of VEGF by EGCG may lead to endothelial cell apoptosis within tumours, which could not only inhibit new blood vessel formation and tumour growth but could also lead to tumour cell apoptosis. These findings are supported by other recent reports that green tea can inhibit tumour growth by suppressing blood vessel growth (Cao and Cao, 1999; Swiercz et al, 1999). These studies have demonstrated that green tea, and more specifically EGCG, can inhibit tumour growth in vivo, possibly by inhibiting the formation of new blood vessels. These findings may partially explain the antineoplastic effects associated with drinking green tea. A complete knowledge of the molecular mechanism or mechanisms involved in the anti-tumour efficacy of green tea polyphenols may be useful in devising better strategies for cancer therapy.
import * as React from 'react'
import * as ReactDOM from 'react-dom'
import Root from './containers/root'

// Minimal placeholder component; currently unused but kept for reference.
class Hello extends React.Component<any, any> {
  render() {
    return <h1>Hi</h1>
  }
}

// Create a mount point and render the application root into it.
const el = document.createElement('div')
document.body.appendChild(el)
ReactDOM.render(<Root />, el)
package tconstruct.library.crafting;

import java.util.*;

import net.minecraft.item.*;
import net.minecraftforge.oredict.OreDictionary;

public class StencilBuilder
{
    public static StencilBuilder instance = new StencilBuilder();

    public List<ItemStack> blanks = new LinkedList<ItemStack>(); // ItemStack does not implement equals, so a Set cannot be used here
    public Map<Integer, ItemStack> stencils = new TreeMap<Integer, ItemStack>();

    /**
     * Returns whether the given ItemStack is a blank pattern and therefore usable for stencil crafting.
     */
    public static boolean isBlank (ItemStack stack)
    {
        for (ItemStack blank : instance.blanks)
            if (OreDictionary.itemMatches(stack, blank, false)) // used only for the stack/metadata comparison, not for ore dictionary lookups
                return true;

        return false;
    }

    /** Registers an ItemStack as a blank pattern usable for stencil crafting. */
    public static void registerBlankStencil (ItemStack itemStack)
    {
        instance.blanks.add(itemStack);
    }

    /** Registers a stencil under the given id, built from the item and metadata. */
    public static void registerStencil (int id, Item item, int meta)
    {
        registerStencil(id, new ItemStack(item, 1, meta));
    }

    /** Registers a stencil under the given id; throws if the id is already taken. */
    public static void registerStencil (int id, ItemStack pattern)
    {
        if (instance.stencils.containsKey(id))
            throw new IllegalArgumentException("[TCon API] Stencil ID " + id + " is already occupied by " + instance.stencils.get(id).getDisplayName());

        instance.stencils.put(id, pattern);
    }

    /** Returns all registered stencils, ordered by id. */
    public static Collection<ItemStack> getStencils ()
    {
        return instance.stencils.values();
    }

    /**
     * Returns the id of the given stencil. If no matching stencil is found, returns -1.
     */
    public static int getId (ItemStack stencil)
    {
        for (Map.Entry<Integer, ItemStack> entry : instance.stencils.entrySet())
            if (OreDictionary.itemMatches(stencil, entry.getValue(), false))
                return entry.getKey();

        return -1;
    }

    /** Returns a copy of the stencil registered under the given id, or null if none exists. */
    public static ItemStack getStencil (int num)
    {
        if (!instance.stencils.containsKey(num))
            return null;

        return instance.stencils.get(num).copy();
    }

    /** Returns the number of registered stencils. */
    public static int getStencilCount ()
    {
        return instance.stencils.size();
    }
}
|
package com.udacity.pricing;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.servlet.AutoConfigureMockMvc;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.junit4.SpringRunner;
import org.springframework.test.web.servlet.MockMvc;
import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.jsonPath;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;
@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
@AutoConfigureMockMvc
public class PricingServiceApplicationTests {

    @Autowired
    private MockMvc mvc;

    /** Verifies that the Spring application context starts without errors. */
    @Test
    public void contextLoads() {
    }

    /** Verifies that the price endpoint returns a USD price for an existing vehicle id. */
    @Test
    public void testGetPrice() throws Exception {
        mvc.perform(get("/services/price?vehicleId=1"))
                .andExpect(status().isOk())
                .andExpect(jsonPath("$.price").exists())
                .andExpect(jsonPath("$.currency").value("USD"))
                .andExpect(jsonPath("$.vehicleId").value(1));
    }
}
|
Rotary drum dryers for drying bulk solids are well-known in the art and generally consist of a horizontal drum which is rotated about its horizontal axis and associated with a heating source for drying material loaded into the drum. Drum dryers are typically heated by firing a burner along the axis of the drum or by directing combustion gas into the drum through one end. In many instances it is desirable to control the flow of gases into and out of the dryer drum. In particular, when drying combustible materials, it is often desirable to control the oxygen content in the drum to prevent combustion of the material being dried.
Rotary dryer drums are used, for example, in the wood processing industry, to dry hogfuel and other wood wastes for boiler fuel or to dry wood strands for strand board production. Such dryer drums are typically sealed to a combustion gas inlet plenum and an exhaust gas outlet plenum. The gases which are supplied by the inlet plenum are typically combustion gases depleted in oxygen. The interior of the drum typically runs at a negative pressure produced by the natural or forced draft of the exhaust gas chimney. Existing dryers utilize resilient, flexible seals which are mounted on the non-moving parts of the dryer and ride on a sealing ring mounted on the rotating drum. These existing drum dryer seals have typically been manufactured of built-up layers of fiberglass or ceramic cloth treated with silicone rubber. While these seals function quite satisfactorily when the dryer is initially introduced into service, over time problems can arise.
The prior art seals are initially resilient and are installed so that they are biased by their resiliency against the sealing ring. The seals, while normally running relatively cool, will on occasion be subject to temperatures as high as 1500 degrees Fahrenheit. Because the drum is normally under negative pressure, hot combustion gases do not normally impinge upon the seals. However, on occasion, the drum outlet can become clogged and combustion gases will back up through the seals subjecting them to high temperatures.
In practice, these conditions have meant that after a few months, the prior art seals lose some of their resiliency and thus are no longer effective at sealing outside air from the interior of the rotating drum. The problem of obtaining a good seal between the dryer and the inlet plenum is further aggravated by the tendency of rotating drum dryers to be slightly eccentric about their axes of rotation. This eccentricity tends to increase as the dryer ages and its trunnions and other components wear.
Another factor that tends to make sealing the rotating drum difficult is the change in drum size with temperature: upon heating, the drum tends to increase in both diameter and length. What is needed is a rotary drum dryer seal which remains resilient and functional after prolonged exposure to high temperatures.
Zero-rating in emerging mobile markets: Free Basics and Wikipedia Zero in Ghana Despite widespread controversy surrounding zero-rating---that is, the practice of subsidizing mobile data---the field suffers from a lack of inquiry into user understanding of and experience with zero-rated services. This paper explores how Ghanaian mobile users interact with zero-rated mobile applications Free Basics and Wikipedia Zero. Based on semi-structured interviews with users and non-users of the applications, I discuss how mobile phone users perceive Free Basics and Wikipedia Zero, what motivates them to use or not use the applications, and how the availability of the applications influences their data-buying strategies. Findings suggest that respondents, including those who did not actively use the applications, understood and experienced Free Basics and Wikipedia Zero in ways divergent from the providers' aim of expanding access to online content and services. |
from django import forms
from datetime import date

from .models import Abonent, Tag


class AbonentForm(forms.ModelForm):
    name = forms.CharField(label='Name')
    birthday = forms.DateField(
        label='date of birth', widget=forms.widgets.SelectDateWidget())
    # email = forms.EmailField(label='email address', widget=forms.widgets.EmailInput())

    class Meta:
        model = Abonent
        fields = ('name', 'birthday')  # , 'email')


class AbonentEditForm(forms.Form):
    name = forms.CharField(label='Name')
    birthday = forms.DateField(
        label='date of birth', widget=forms.widgets.SelectDateWidget())
    address = forms.CharField(label='Address')
    email = forms.EmailField(label='email address', widget=forms.widgets.EmailInput())


class FindContactsForm(forms.Form):
    pattern = forms.CharField(
        label='pattern', required=False,
        help_text='characters to match against (in names, phones, addresses, emails, notes)')
    tags = forms.MultipleChoiceField(
        choices=(), widget=forms.CheckboxSelectMultiple, label='select tags', required=False,
        help_text='one or more tags may be selected to narrow the search')
    date_start = forms.DateField(label='start date', widget=forms.SelectDateWidget(), required=False)
    date_stop = forms.DateField(label='end date', widget=forms.SelectDateWidget(), required=False)

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Build the tag choices and the selectable year range when the form is instantiated,
        # not at import time, so newly added tags appear without a restart and the database
        # is not queried while the module is being loaded (e.g. during migrations).
        year_max = date.today().year + 1
        years = list(range(year_max - 90, year_max))
        self.fields['tags'].choices = [(tag.tag, tag.tag) for tag in Tag.objects.all()]
        self.fields['date_start'].widget = forms.SelectDateWidget(years=years)
        self.fields['date_stop'].widget = forms.SelectDateWidget(years=years)
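A minimal usage sketch for FindContactsForm follows; the view function, template path, and the specific query filters on Abonent (including the tag relation lookup) are illustrative assumptions, not part of the original module.

# Hypothetical view wiring for FindContactsForm; names and field lookups are assumptions.
from django.shortcuts import render

from .forms import FindContactsForm
from .models import Abonent

def find_contacts(request):
    form = FindContactsForm(request.GET or None)
    results = Abonent.objects.none()
    if form.is_valid():
        results = Abonent.objects.all()
        pattern = form.cleaned_data['pattern']
        tags = form.cleaned_data['tags']
        if pattern:
            # Only the name field is matched here; extend with Q objects for phones, addresses, notes.
            results = results.filter(name__icontains=pattern)
        if tags:
            # Assumes a relation from Abonent to Tag that exposes a 'tag__tag' lookup.
            results = results.filter(tag__tag__in=tags).distinct()
    return render(request, 'contacts/find.html', {'form': form, 'results': results})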
|
CHAMBERSBURG, Pa. - A third Democrat has entered the race for the Franklin County Board of Commissioners, with St. Thomas Township resident Cheryl Stearn announcing she will seek one of the two Democratic nominations in the May 15 primary.
"I am passionate about open and responsive county government," Stearn stated in her announcement. Growth is another theme that Stearn, along with other announced candidates, is raising.
"Ensuring and maintaining livable communities is one of my interests," Stearn said. "We are facing an onslaught of development from the south and it's very difficult for the county, school districts, townships and boroughs to handle it."
That growth places strains on public services, which impact fees on development would help pay for, Stearn said, although that would require convincing the state legislature that they are needed.
"I'm interested in maintaining a good environment and safe places to raise families ... This is a beautiful area," Stearn said. Those qualities attracted her and her husband here 34 years ago, she said.
Stearn said she also will take an active role in assuring that the county's social service agencies are well-run. Through the campaign, she said, she also will try to ascertain the issues that are important to residents.
"Part of it will take getting feedback from residents through the campaign to find out what they are passionate about," Stearn said.
Stearn, 60, whose husband, Frank, is a St. Thomas Township supervisor, has been active in two issues within the township in recent years. She was a member of Friends and Residents of St. Thomas (FROST), which led an unsuccessful effort to stop a quarry operation, and was involved in organizing a home rule referendum that was defeated in the November election.
Frank and Cheryl Stearn own Sunrise Electronic Distributing Co. in Chambersburg.
Born in York and raised in Lancaster, Stearn has a degree in English from Lebanon Valley College. She is a Democratic committeewoman and a member of the Chambersburg Hospital Board of Directors.
The other announced Democratic candidates are Waynesboro, Pa., attorney and former borough councilman Clint Barkdoll, and businessman Bob Ziobrowski, a former president of the Chambersburg School Board.
Incumbent Republican Commissioner Bob Thomas, former Waynesboro Borough Council president Douglas Tengler and retired federal analyst Carl Barton have announced they are running for the two GOP nominations in the primary.
The deadline for candidates in all county, municipal and school board races to file nominating petitions is March 6. |
Early blood-brain barrier disruption in human focal brain ischemia

Loss of integrity of the blood-brain barrier (BBB) resulting from ischemia/reperfusion is believed to be a precursor to hemorrhagic transformation (HT) and poor outcome. We used a novel magnetic resonance imaging marker to characterize early BBB disruption in human focal brain ischemia and tested for associations with reperfusion, HT, and poor outcome (modified Rankin score >2). BBB disruption was found in 47 of 144 (33%) patients, having a median time from stroke onset to observation of 10.1 hours. Reperfusion was found to be the most powerful independent predictor of early BBB disruption (p = 0.018; odds ratio, 4.09; 95% confidence interval, 1.28-13.1). HT was observed in 22 patients; 16 (72.7%) of those also had early BBB disruption (p < 0.001; odds ratio, 8.11; 95% confidence interval, 2.85-23.1). In addition to baseline severity (National Institutes of Health Stroke Scale score >6), early BBB disruption was found to be an independent predictor of HT. Because the timing of the disruption was early enough to make it relevant to acute thrombolytic therapy, early BBB disruption as defined by this imaging biomarker may be a promising target for adjunctive therapy to reduce the complications associated with thrombolytic therapy, broaden the therapeutic window, and improve clinical outcome. Ann Neurol 2004