/**
 * Created by ive_marruda on 07/04/17.
 */
@RestController
@RequestMapping("/moip")
public class MoipController {

    @Autowired
    private OrderService orderService;

    @Autowired
    private ConfirmationService confirmationService;

    @RequestMapping(value = "/response", method = RequestMethod.POST)
    @ResponseStatus(HttpStatus.OK)
    public void response(@RequestBody ResponseDTO response) throws InterruptedException {
        System.out.println("Got Response from MOIP: " + response.getResource().getPayment().getStatus());
        PaymentDTO payment = response.getResource().getPayment();
        Order order = orderService.findOrder(payment.getId());
        if (order != null && "AUTHORIZED".equals(payment.getStatus())) {
            order.setStatus(OrderStatus.COMPLETED);
            orderService.updateOrder(order);
        }
        confirmationService.send(payment);
    }

    // @RequestMapping(value = "/response", method = RequestMethod.POST)
    // @ResponseStatus(HttpStatus.OK)
    // public void responseString(@RequestBody String response) throws InterruptedException {
    //     System.out.println("Got Response from MOIP: " + response);
    // }

    @RequestMapping(value = "/response", method = RequestMethod.GET)
    @ResponseStatus(HttpStatus.OK)
    public String response() throws InterruptedException {
        System.out.println("Got Response from MOIP: " + "GET");
        return "Got Response from MOIP: " + "GET";
    }
}
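The controller's core rule is a single state transition: an order is marked completed only when the Moip notification reports status `AUTHORIZED`. A minimal Python sketch of that rule, useful for reasoning about the webhook without the Spring stack (the payload shape and status strings mirror the controller above, but the dictionary-based order store is a hypothetical stand-in for `OrderService`, and confirmation sending is omitted):

```python
# Hypothetical sketch of the webhook's status-transition rule: complete an
# order only when Moip reports the payment as AUTHORIZED. The payload shape
# mirrors the ResponseDTO the controller deserializes; the `orders` dict is
# an assumed stand-in for OrderService, not the project's real API.

def handle_moip_notification(payload, orders):
    """Apply Moip's payment status to a locally stored order, if any."""
    payment = payload["resource"]["payment"]
    order = orders.get(payment["id"])
    if order is not None and payment["status"] == "AUTHORIZED":
        order["status"] = "COMPLETED"
    return order

# Example notification, shaped like the JSON the POST endpoint receives.
payload = {"resource": {"payment": {"id": "PAY-1", "status": "AUTHORIZED"}}}
orders = {"PAY-1": {"status": "WAITING"}}
handle_moip_notification(payload, orders)
print(orders["PAY-1"]["status"])  # -> COMPLETED
```

Note that, like the controller, the sketch ignores notifications for unknown orders and leaves non-authorized payments untouched.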
# pylint: disable=missing-module-docstring
from pandas_transformers.transformers import test


class TestExample:
    """
    Test class
    """

    def test_example(self):
        """
        Test example
        """
        print("example test")


class TestTest:
    """
    Test class
    """

    def test_test(self):
        """
        Testing test
        """
        test()
INHIBITORY ACTIVITY OF ALLIUM SATIVUM L. EXTRACT AGAINST STREPTOCOCCUS PYOGENES AND PSEUDOMONAS AERUGINOSA Background: Infectious diseases are among the most common health problems, and many are caused by bacteria, which fall into two groups based on Gram staining: Gram-positive and Gram-negative. Purpose: Antibiotics are the main therapy for bacterial infections, but over time many bacteria have become resistant to them. Several studies have shown that garlic has an antibacterial effect: it contains allicin, ajoene, saponins, and flavonoids, all of which have antibacterial properties. This study investigated the antibacterial activity of Allium sativum L. extract against the bacteria Streptococcus pyogenes and Pseudomonas aeruginosa. Methods: Garlic extract was prepared by maceration using 96% alcohol as the solvent. The tube dilution method was chosen to assess antibacterial activity, with the aim of determining the minimum inhibitory concentration (MIC) and the minimum bactericidal concentration (MBC). Eight concentrations were used: 2 g/ml, 1 g/ml, 0.5 g/ml, 0.25 g/ml, 0.125 g/ml, 0.0625 g/ml, 0.03125 g/ml, and 0.015625 g/ml, each replicated three times. Results: The extract was so turbid that the MIC could not be determined, and there was no significant difference between before and after treatment. There was no growth of Streptococcus pyogenes at 1 g/ml or of Pseudomonas aeruginosa at 0.5 g/ml; these values indicate the MBC for each bacterium. Conclusion: Garlic (Allium sativum L.) has bactericidal activity and can act as an antibacterial against Streptococcus pyogenes and Pseudomonas aeruginosa; the extract was more effective against Pseudomonas aeruginosa than Streptococcus pyogenes.
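The eight test concentrations listed in the Methods form a two-fold serial dilution starting at 2 g/ml: each tube holds half the concentration of the previous one. A minimal sketch of generating that series (plain Python, no external dependencies):

```python
# Two-fold serial dilution series, as used in the tube dilution method:
# each successive tube holds half the concentration of the previous one.
def dilution_series(start, steps):
    """Return `steps` concentrations, halving at each step."""
    return [start / 2 ** i for i in range(steps)]

series = dilution_series(2.0, 8)  # concentrations in grams per ml
print(series)  # -> [2.0, 1.0, 0.5, 0.25, 0.125, 0.0625, 0.03125, 0.015625]
```

Because the concentrations are exact powers of two, floating-point halving reproduces the published values exactly.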
package advice
import (
"fmt"
"github.com/stretchr/testify/assert"
"github.com/wesovilabs/beyond/advice/internal"
"regexp"
"testing"
)
func TestMatch(t *testing.T) {
fmt.Printf("[TEST] %s\n", t.Name())
cases := []struct {
regExp *regexp.Regexp
matches []string
noMatches []string
}{
{
regExp: internal.NormalizePointcut("*.set*(*)..."),
matches: []string{
"a.setPerson(string)int",
"a.setElement(int)",
"a/b.setCat(string)*int",
},
noMatches: []string{
"a.unsetPerson(string)int",
"a.list(string)(int,*string)",
"a/b.unsetCat(string)(int,*string)",
},
},
{
regExp: internal.NormalizePointcut("*.*(*)..."),
matches: []string{
"a.b(string)int",
"a.b(string)(int,*string)",
"a/b.b(string)(int,*string)",
},
noMatches: []string{
"a/b.c.d(string)",
"a.c.d(string)",
"a/b.b()(int,*string)",
},
},
{
regExp: internal.NormalizePointcut("model/*.*(*)..."),
matches: []string{
"model/a.b(string)int",
"model/a/b.b(string)(int,*string)",
},
noMatches: []string{
"model.b(string)(int,*string)",
},
},
{
regExp: internal.NormalizePointcut("model*.*(*)..."),
matches: []string{
"model/a.b(string)int",
"model/a/b.b(string)(int,*string)",
"model.b(string)(int,*string)",
},
noMatches: []string{
"a/model/a.b(string)int",
},
},
{
regExp: internal.NormalizePointcut("model*.set*(*)..."),
matches: []string{
"model/a.setB(string)int",
"model/a/b.setElement(string)(int,*string)",
"model.set(string)(int,*string)",
},
noMatches: []string{
"a/model/a.b(string)int",
"model/aset(string)int",
},
},
{
regExp: internal.NormalizePointcut("model*.*set*(*)..."),
matches: []string{
"model/a.setB(string)int",
"model/a/b.setElement(string)(int,*string)",
"model/a/b.unsetElement(string)(int,*string)",
"model.set(string)(int,*string)",
"model.setPerson(string)(int,*string)",
},
noMatches: []string{
"a/model/a.b(string)int",
},
},
{
regExp: internal.NormalizePointcut("model.set(*)"),
matches: []string{
"model.set(string)",
"model.set(int)",
"model.set(*int)",
},
noMatches: []string{
"model.set(string,int)int",
"model.set(*int)int",
"model.set()",
"model.set(int)*int",
},
},
{
regExp: internal.NormalizePointcut("model.obj.set(*)"),
matches: []string{
"model.obj.set(string)",
},
noMatches: []string{
"model.obj.set(string,int)",
"model.object.set(string)",
"model.myobj.set(string)",
},
},
{
regExp: internal.NormalizePointcut("model.*obj*.set(*)"),
matches: []string{
"model.obj.set(string)",
"model.object.set(string)",
"model.myobj.set(string)",
},
noMatches: []string{
"model.obj.set(string)(string,string)",
"model.object.set(string)int",
"model.myobj.set(int,string)",
},
},
{
regExp: internal.NormalizePointcut("model.obj*.set(*)"),
matches: []string{
"model.obj.set(string)",
"model.object.set(string)",
},
noMatches: []string{
"model.unobj.set(string)",
"model.Obj.set(string)",
},
},
{
regExp: internal.NormalizePointcut("*.*(...)..."),
matches: []string{
"a.b(string)int",
"a.b(string)(int,*string)",
"a/b.b(string)(int,*string)",
"a/b.b()(int,*string)",
"a/b.b()(int,*github.com/projec/repo.model.Person)",
},
noMatches: []string{
"a/b.c.d(string)",
"a.c.d(string)",
},
},
{
regExp: internal.NormalizePointcut("model.obj.set(*)"),
matches: []string{
"model.obj.set(string)",
"model.obj.set(*int32)",
"model.obj.set(func(string,int))",
"model.obj.set(*github.com/wesovilabs/beyond.model.Person)",
"model.obj.set(map[string]interface{})",
},
noMatches: []string{
"model.obj.set(string,int)",
"model.obj.set(string,map[string]interface{})",
"model.obj.set(string)*int",
},
},
{
regExp: internal.NormalizePointcut("model.obj.set(...)"),
matches: []string{
"model.obj.set()",
"model.obj.set(string)",
"model.obj.set(*int32)",
"model.obj.set(func(string,int))",
"model.obj.set(*github.com/wesovilabs/beyond.model.Person)",
"model.obj.set(map[string]interface{})",
"model.obj.set(string,int)",
"model.obj.set(string,map[string]interface{})",
"model.obj.set(string)",
},
noMatches: []string{},
},
{
regExp: internal.NormalizePointcut("model.obj.set(string,...)"),
matches: []string{
"model.obj.set(string,*int32)",
"model.obj.set(string,func(string,int))",
"model.obj.set(string,*github.com/wesovilabs/beyond.model.Person)",
"model.obj.set(string,map[string]interface{})",
"model.obj.set(string,int)",
"model.obj.set(string,map[string]interface{})",
},
noMatches: []string{
"model.obj.set(string)",
"model.obj.set(*string,int)",
"model.obj.set(int,string)",
},
},
{
regExp: internal.NormalizePointcut("model.obj.set(...,github.com/wesovilabs/beyond.model.Person)"),
matches: []string{
"model.obj.set(string,*int32,github.com/wesovilabs/beyond.model.Person)",
"model.obj.set(string,func(string,int),github.com/wesovilabs/beyond.model.Person)",
"model.obj.set(string,github.com/wesovilabs/beyond.model.Person)",
"model.obj.set(*string,github.com/wesovilabs/beyond.model.Person)",
"model.obj.set(string,map[string]interface{},github.com/wesovilabs/beyond.model.Person)",
},
noMatches: []string{
"model.obj.set(string)",
"model.obj.set(*string,int)",
"model.obj.set(int,string)",
},
},
{
regExp: internal.NormalizePointcut("model.obj.set(int,...,*github.com/wesovilabs/beyond.model.Person)"),
matches: []string{
"model.obj.set(int,string,*int32,*github.com/wesovilabs/beyond.model.Person)",
"model.obj.set(int,string,func(string,int),*github.com/wesovilabs/beyond.model.Person)",
"model.obj.set(int,string,*github.com/wesovilabs/beyond.model.Person)",
"model.obj.set(int,*string,*github.com/wesovilabs/beyond.model.Person)",
"model.obj.set(int,string,map[string]interface{},*github.com/wesovilabs/beyond.model.Person)",
},
noMatches: []string{
"model.obj.set(int,*github.com/wesovilabs/beyond.model.Person)",
"model.obj.set(*int,*string,*github.com/wesovilabs/beyond.model.Person)",
"model.obj.set(int,string,map[string]interface{},github.com/wesovilabs/beyond.model.Person)",
},
},
{
regExp: internal.NormalizePointcut("model.obj.set()..."),
matches: []string{
"model.obj.set()",
"model.obj.set()string",
"model.obj.set()*int32",
"model.obj.set()func(string,int)",
"model.obj.set()*github.com/wesovilabs/beyond.model.Person",
"model.obj.set()map[string]interface{}",
},
noMatches: []string{},
},
{
regExp: internal.NormalizePointcut("model.obj.set()(string,...)"),
matches: []string{
"model.obj.set()(string,*int32)",
"model.obj.set()(string,func(string,int))",
"model.obj.set()(string,*github.com/wesovilabs/beyond.model.Person)",
"model.obj.set()(string,map[string]interface{})",
"model.obj.set()(string,int)",
"model.obj.set()(string,map[string]interface{})",
},
noMatches: []string{
"model.obj.set()(string)",
"model.obj.set()(*string,int)",
"model.obj.set()(int,string)",
},
},
{
regExp: internal.NormalizePointcut("model.obj.set()(...,github.com/wesovilabs/beyond.model.Person)"),
matches: []string{
"model.obj.set()(string,*int32,github.com/wesovilabs/beyond.model.Person)",
"model.obj.set()(string,func(string,int),github.com/wesovilabs/beyond.model.Person)",
"model.obj.set()(string,github.com/wesovilabs/beyond.model.Person)",
"model.obj.set()(*string,github.com/wesovilabs/beyond.model.Person)",
"model.obj.set()(string,map[string]interface{},github.com/wesovilabs/beyond.model.Person)",
},
noMatches: []string{
"model.obj.set()(string)",
"model.obj.set()(*string,int)",
"model.obj.set()(int,string)",
},
},
{
regExp: internal.NormalizePointcut("model.obj.set()(int,...,*github.com/wesovilabs/beyond.model.Person)"),
matches: []string{
"model.obj.set()(int,string,*int32,*github.com/wesovilabs/beyond.model.Person)",
"model.obj.set()(int,string,func(string,int),*github.com/wesovilabs/beyond.model.Person)",
"model.obj.set()(int,string,*github.com/wesovilabs/beyond.model.Person)",
"model.obj.set()(int,*string,*github.com/wesovilabs/beyond.model.Person)",
"model.obj.set()(int,string,map[string]interface{},*github.com/wesovilabs/beyond.model.Person)",
},
noMatches: []string{
"model.obj.set()(int,*github.com/wesovilabs/beyond.model.Person)",
"model.obj.set()(*int,*string,*github.com/wesovilabs/beyond.model.Person)",
"model.obj.set()(int,string,map[string]interface{},github.com/wesovilabs/beyond.model.Person)",
},
},
{
regExp: internal.NormalizePointcut("model.obj.set(func()string)(int,...,*github.com/wesovilabs/beyond.model.Person)"),
matches: []string{
"model.obj.set(func()string)(int,string,*int32,*github.com/wesovilabs/beyond.model.Person)",
"model.obj.set(func()string)(int,string,func(string,int),*github.com/wesovilabs/beyond.model.Person)",
"model.obj.set(func()string)(int,string,*github.com/wesovilabs/beyond.model.Person)",
"model.obj.set(func()string)(int,*string,*github.com/wesovilabs/beyond.model.Person)",
"model.obj.set(func()string)(int,string,map[string]interface{},*github.com/wesovilabs/beyond.model.Person)",
},
noMatches: []string{
"model.obj.set()(int,*github.com/wesovilabs/beyond.model.Person)",
"model.obj.set()(*int,*string,*github.com/wesovilabs/beyond.model.Person)",
"model.obj.set()(int,string,map[string]interface{},github.com/wesovilabs/beyond.model.Person)",
"model.obj.set(func()string)(int,*github.com/wesovilabs/beyond.model.Person)",
"model.obj.set(func()string)(*int,*string,*github.com/wesovilabs/beyond.model.Person)",
"model.obj.set(func()string)(int,string,map[string]interface{},github.com/wesovilabs/beyond.model.Person)",
},
},
}
assert := assert.New(t)
for index, c := range cases {
fmt.Printf("\nScenario number %v:\n", index)
def := &Advice{
regExp: c.regExp,
}
fmt.Printf(" %s\n", c.regExp)
fmt.Printf("[matches]\n")
for _, m := range c.matches {
fmt.Printf(" %s\n", m)
if !assert.True(def.Match(m)) {
t.FailNow()
}
}
fmt.Printf("[no matches]\n")
for _, m := range c.noMatches {
fmt.Printf(" %s\n", m)
if !assert.False(def.Match(m)) {
t.FailNow()
}
}
}
fmt.Println()
}
func Test_addImport(t *testing.T) {
assert := assert.New(t)
inv := adviceInvocation{
imports: []string{"import1", "import2"},
}
inv.addImport("import2")
assert.Len(inv.imports, 2)
}
func Test_Advice_Imports(t *testing.T) {
assert := assert.New(t)
advice := &Advice{
call: &adviceInvocation{
pkg: "mypkg",
imports: nil,
},
}
res := advice.Imports()
assert.Len(res, 1)
assert.Equal("mypkg", res[0])
}
func Test_Advice_Match(t *testing.T) {
assert := assert.New(t)
advice := &Advice{}
res := advice.Match("test")
assert.False(res)
}
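The implementation of `internal.NormalizePointcut` is not shown here, but the table in `TestMatch` above exercises its semantics: `*` matches within one path/name segment, `(*)` matches exactly one argument, and `...` matches any remainder (arguments or return values). As a rough illustration only, a pointcut such as `*.set*(*)...` can be thought of as compiling to a regular expression. The Python sketch below is an assumption-laden approximation of that one pattern, not Beyond's actual translation:

```python
import re

# Rough approximation of the pointcut "*.set*(*)...": any package path,
# a function name starting with "set", exactly one (simple) argument,
# and any return signature. Illustrative only -- not the real output of
# internal.NormalizePointcut.
pointcut_set_one_arg = re.compile(r"^[\w/]*\.set\w*\([^,()]+\).*$")

# Cases taken from the matches/noMatches table above.
assert pointcut_set_one_arg.match("a.setPerson(string)int")
assert pointcut_set_one_arg.match("a/b.setCat(string)*int")
assert not pointcut_set_one_arg.match("a.unsetPerson(string)int")
assert not pointcut_set_one_arg.match("a.list(string)(int,*string)")
```

The key detail the approximation captures is that `set*` is anchored after the package separator, so `unsetPerson` does not match even though it contains `set`.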
/*
* JBoss, Home of Professional Open Source.
* Copyright 2013, Red Hat Middleware LLC, and individual contributors
* as indicated by the @author tags. See the copyright.txt file in the
* distribution for a full listing of individual contributors.
*
* This is free software; you can redistribute it and/or modify it
* under the terms of the GNU Lesser General Public License as
* published by the Free Software Foundation; either version 2.1 of
* the License, or (at your option) any later version.
*
* This software is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with this software; if not, write to the Free
* Software Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
* 02110-1301 USA, or see the FSF site: http://www.fsf.org.
*/
package org.jboss.as.core.model.test.access;
import static org.jboss.as.controller.PathElement.pathElement;
import static org.jboss.as.controller.descriptions.ModelDescriptionConstants.ACCESS;
import static org.jboss.as.controller.descriptions.ModelDescriptionConstants.APPLICATION_CLASSIFICATION;
import static org.jboss.as.controller.descriptions.ModelDescriptionConstants.AUTHORIZATION;
import static org.jboss.as.controller.descriptions.ModelDescriptionConstants.CLASSIFICATION;
import static org.jboss.as.controller.descriptions.ModelDescriptionConstants.CONSTRAINT;
import static org.jboss.as.controller.descriptions.ModelDescriptionConstants.CORE;
import static org.jboss.as.controller.descriptions.ModelDescriptionConstants.CORE_SERVICE;
import static org.jboss.as.controller.descriptions.ModelDescriptionConstants.MANAGEMENT;
import static org.jboss.as.controller.descriptions.ModelDescriptionConstants.RESULT;
import static org.jboss.as.controller.descriptions.ModelDescriptionConstants.SECURITY_REALM;
import static org.jboss.as.controller.descriptions.ModelDescriptionConstants.SENSITIVITY_CLASSIFICATION;
import static org.jboss.as.controller.descriptions.ModelDescriptionConstants.TYPE;
import static org.jboss.as.controller.descriptions.ModelDescriptionConstants.VAULT_EXPRESSION;
import org.jboss.as.controller.PathAddress;
import org.jboss.as.controller.access.constraint.ApplicationTypeConfig;
import org.jboss.as.controller.access.constraint.SensitivityClassification;
import org.jboss.as.controller.access.management.ApplicationTypeAccessConstraintDefinition;
import org.jboss.as.controller.access.management.SensitiveTargetAccessConstraintDefinition;
import org.jboss.as.controller.operations.common.Util;
import org.jboss.as.core.model.test.AbstractCoreModelTest;
import org.jboss.as.core.model.test.KernelServices;
import org.jboss.as.core.model.test.TestModelType;
import org.jboss.as.domain.management.access.ApplicationClassificationConfigResourceDefinition;
import org.jboss.as.domain.management.access.SensitivityResourceDefinition;
import org.jboss.as.model.test.ModelTestUtils;
import org.jboss.dmr.ModelNode;
import org.junit.Assert;
import org.junit.Test;
/**
* Simple test case to test the parsing and marshalling of the <access-control /> element within the standalone.xml
* configuration.
*
* @author <a href="<EMAIL>"><NAME></a>
*/
public class StandaloneAccessControlTestCase extends AbstractCoreModelTest {
private static final String SOCKET_CONFIG = SensitivityClassification.SOCKET_CONFIG.getName();
@Test
public void testConfiguration() throws Exception {
//Initialize some additional constraints
new SensitiveTargetAccessConstraintDefinition(new SensitivityClassification("play", "security-realm", true, true, true));
new SensitiveTargetAccessConstraintDefinition(new SensitivityClassification("system-property", "system-property", true, true, true));
new ApplicationTypeAccessConstraintDefinition(new ApplicationTypeConfig("play", "deployment", false));
KernelServices kernelServices = createKernelServicesBuilder(TestModelType.STANDALONE)
.setXmlResource("standalone.xml")
.validateDescription()
.build();
Assert.assertTrue(kernelServices.isSuccessfulBoot());
String marshalled = kernelServices.getPersistedSubsystemXml();
ModelTestUtils.compareXml(ModelTestUtils.readResource(this.getClass(), "standalone.xml"), marshalled);
//////////////////////////////////////////////////////////////////////////////////
//Check that both set and undefined configured constraint settings get returned
/*
* <sensitive-classification type="play" name="security-realm" requires-addressable="false" requires-read="false" requires-write="false" />
* <sensitive-classification type="system-property" name="system-property" requires-addressable="true" requires-read="true" requires-write="true" />
* system-property sensitive classification default values are false, false, true
*/
System.out.println(kernelServices.readWholeModel());
//Sensitivity classification
//This one is undefined
ModelNode result = ModelTestUtils.checkOutcome(
kernelServices.executeOperation(
Util.getReadAttributeOperation(PathAddress.pathAddress(
pathElement(CORE_SERVICE, MANAGEMENT),
pathElement(ACCESS, AUTHORIZATION),
pathElement(CONSTRAINT, SENSITIVITY_CLASSIFICATION),
pathElement(TYPE, CORE),
pathElement(CLASSIFICATION, SOCKET_CONFIG)), SensitivityResourceDefinition.CONFIGURED_REQUIRES_ADDRESSABLE.getName())));
checkResultExists(result, new ModelNode());
//This one is undefined
result = ModelTestUtils.checkOutcome(
kernelServices.executeOperation(
Util.getReadAttributeOperation(PathAddress.pathAddress(
pathElement(CORE_SERVICE, MANAGEMENT),
pathElement(ACCESS, AUTHORIZATION),
pathElement(CONSTRAINT, SENSITIVITY_CLASSIFICATION),
pathElement(TYPE, "play"),
pathElement(CLASSIFICATION, SECURITY_REALM)), SensitivityResourceDefinition.CONFIGURED_REQUIRES_ADDRESSABLE.getName())));
checkResultExists(result, new ModelNode(false));
// WFCORE-3995 Test write operations on configured-requires-addressable
// This should fail as sensitivity constraint attribute configured-requires-read and configured-requires-write must not be false before writing configured-requires-addressable to true
result = ModelTestUtils.checkFailed(
kernelServices.executeOperation(
Util.getWriteAttributeOperation(PathAddress.pathAddress(
pathElement(CORE_SERVICE, MANAGEMENT),
pathElement(ACCESS, AUTHORIZATION),
pathElement(CONSTRAINT, SENSITIVITY_CLASSIFICATION),
pathElement(TYPE, "play"),
pathElement(CLASSIFICATION, SECURITY_REALM)), SensitivityResourceDefinition.CONFIGURED_REQUIRES_ADDRESSABLE.getName(), true)));
checkResultNotExists(result);
// This should fail as sensitivity constraint attribute configured-requires-read and configured-requires-write must not be false before undefine configured-requires-addressable to its default value true
result = ModelTestUtils.checkFailed(
kernelServices.executeOperation(
Util.getUndefineAttributeOperation(PathAddress.pathAddress(
pathElement(CORE_SERVICE, MANAGEMENT),
pathElement(ACCESS, AUTHORIZATION),
pathElement(CONSTRAINT, SENSITIVITY_CLASSIFICATION),
pathElement(TYPE, "play"),
pathElement(CLASSIFICATION, SECURITY_REALM)), SensitivityResourceDefinition.CONFIGURED_REQUIRES_ADDRESSABLE.getName())));
checkResultNotExists(result);
// WFCORE-3995 Test write operations on configured-requires-read
// This should fail as sensitivity constraint attribute configured-requires-addressable must not be true before writing configured-requires-read to false
result = ModelTestUtils.checkFailed(
kernelServices.executeOperation(
Util.getWriteAttributeOperation(PathAddress.pathAddress(
pathElement(CORE_SERVICE, MANAGEMENT),
pathElement(ACCESS, AUTHORIZATION),
pathElement(CONSTRAINT, SENSITIVITY_CLASSIFICATION),
pathElement(TYPE, "system-property"),
pathElement(CLASSIFICATION, SECURITY_REALM)), SensitivityResourceDefinition.CONFIGURED_REQUIRES_READ.getName(), false)));
checkResultNotExists(result);
// This should fail as sensitivity constraint attribute configured-requires-addressable must not be true before undefine configured-requires-read its default value false
result = ModelTestUtils.checkFailed(
kernelServices.executeOperation(
Util.getUndefineAttributeOperation(PathAddress.pathAddress(
pathElement(CORE_SERVICE, MANAGEMENT),
pathElement(ACCESS, AUTHORIZATION),
pathElement(CONSTRAINT, SENSITIVITY_CLASSIFICATION),
pathElement(TYPE, "system-property"),
pathElement(CLASSIFICATION, SECURITY_REALM)), SensitivityResourceDefinition.CONFIGURED_REQUIRES_READ.getName())));
checkResultNotExists(result);
// This should fail as sensitivity constraint attribute configured-requires-write must not be false before writing configured-requires-read to true
result = ModelTestUtils.checkFailed(
kernelServices.executeOperation(
Util.getWriteAttributeOperation(PathAddress.pathAddress(
pathElement(CORE_SERVICE, MANAGEMENT),
pathElement(ACCESS, AUTHORIZATION),
pathElement(CONSTRAINT, SENSITIVITY_CLASSIFICATION),
pathElement(TYPE, "play"),
pathElement(CLASSIFICATION, SECURITY_REALM)), SensitivityResourceDefinition.CONFIGURED_REQUIRES_READ.getName(), true)));
checkResultNotExists(result);
// This should fail as sensitivity constraint attribute configured-requires-addressable must not be false before undefine configured-requires-read to its default value true
result = ModelTestUtils.checkFailed(
kernelServices.executeOperation(
Util.getUndefineAttributeOperation(PathAddress.pathAddress(
pathElement(CORE_SERVICE, MANAGEMENT),
pathElement(ACCESS, AUTHORIZATION),
pathElement(CONSTRAINT, SENSITIVITY_CLASSIFICATION),
pathElement(TYPE, "play"),
pathElement(CLASSIFICATION, SECURITY_REALM)), SensitivityResourceDefinition.CONFIGURED_REQUIRES_READ.getName())));
checkResultNotExists(result);
// WFCORE-3995 Test write operations on configured-requires-write
// This should fail as sensitivity constraint attribute configured-requires-addressable and configured-requires-read must not be true before writing configured-requires-read to false
result = ModelTestUtils.checkFailed(
kernelServices.executeOperation(
Util.getWriteAttributeOperation(PathAddress.pathAddress(
pathElement(CORE_SERVICE, MANAGEMENT),
pathElement(ACCESS, AUTHORIZATION),
pathElement(CONSTRAINT, SENSITIVITY_CLASSIFICATION),
pathElement(TYPE, "system-property"),
pathElement(CLASSIFICATION, SECURITY_REALM)), SensitivityResourceDefinition.CONFIGURED_REQUIRES_READ.getName(), false)));
checkResultNotExists(result);
//VaultExpression
//It is defined
PathAddress vaultAddress = PathAddress.pathAddress(
pathElement(CORE_SERVICE, MANAGEMENT),
pathElement(ACCESS, AUTHORIZATION),
pathElement(CONSTRAINT, VAULT_EXPRESSION));
result = ModelTestUtils.checkOutcome(
kernelServices.executeOperation(
Util.getReadAttributeOperation(vaultAddress, SensitivityResourceDefinition.CONFIGURED_REQUIRES_READ.getName())));
checkResultExists(result, new ModelNode(false));
//Now undefine it and check again (need to undefine configured-requires-write first)
ModelTestUtils.checkOutcome(
kernelServices.executeOperation(
Util.getUndefineAttributeOperation(vaultAddress, SensitivityResourceDefinition.CONFIGURED_REQUIRES_WRITE.getName())));
ModelTestUtils.checkOutcome(
kernelServices.executeOperation(
Util.getUndefineAttributeOperation(vaultAddress, SensitivityResourceDefinition.CONFIGURED_REQUIRES_READ.getName())));
result = ModelTestUtils.checkOutcome(
kernelServices.executeOperation(
Util.getReadAttributeOperation(vaultAddress, SensitivityResourceDefinition.CONFIGURED_REQUIRES_READ.getName())));
checkResultExists(result, new ModelNode());
//Application classification
//It is defined
PathAddress applicationAddress = PathAddress.pathAddress(
pathElement(CORE_SERVICE, MANAGEMENT),
pathElement(ACCESS, AUTHORIZATION),
pathElement(CONSTRAINT, APPLICATION_CLASSIFICATION),
pathElement(TYPE, "play"),
pathElement(CLASSIFICATION, "deployment"));
result = ModelTestUtils.checkOutcome(
kernelServices.executeOperation(
Util.getReadAttributeOperation(applicationAddress, ApplicationClassificationConfigResourceDefinition.CONFIGURED_APPLICATION.getName())));
checkResultExists(result, new ModelNode(false));
//Now undefine it and check again
ModelTestUtils.checkOutcome(
kernelServices.executeOperation(
Util.getUndefineAttributeOperation(applicationAddress, ApplicationClassificationConfigResourceDefinition.CONFIGURED_APPLICATION.getName())));
result = ModelTestUtils.checkOutcome(
kernelServices.executeOperation(
Util.getReadAttributeOperation(applicationAddress, ApplicationClassificationConfigResourceDefinition.CONFIGURED_APPLICATION.getName())));
checkResultExists(result, new ModelNode());
kernelServices.shutdown();
}
private void checkResultExists(ModelNode result, ModelNode expected) {
Assert.assertTrue(result.has(RESULT));
Assert.assertEquals(expected, result.get(RESULT));
}
private void checkResultNotExists(ModelNode result) {
Assert.assertFalse(result.has(RESULT));
}
}
Former Virginia Sen. Jim Webb is used to standing alone.
On the issue of the Confederate battle flag in the wake of the Charleston, South Carolina, church massacre, the potential Democratic presidential candidate is going his own way once again.
Bucking the bipartisan trend of politicians and corporations coming out against the flag, including South Carolina’s Republican governor and senators, Webb instead is calling for “mutual respect” when considering the symbol.
The flag, which many view as a symbol of hate, became a flashpoint after Dylann Roof, who had been photographed displaying the flag, allegedly killed nine people at a historic black church in an apparently racially motivated attack. The flag flies on the grounds of the state capitol building, but Gov. Nikki Haley and many others have called for its removal.
But Webb suggested people are jumping to conclusions too quickly.
“The Confederate battle flag has wrongly been used for racist and other purposes in recent decades. It should not be used in any way as a political symbol that divides us,” he said of the flag displayed by Robert E. Lee’s army as they marched into battle against the army of the United States.
The former senator, who was a Republican for much of his career, announced an exploratory committee in November but has yet to formally declare whether he will challenge Clinton for the 2016 Democratic nomination. Webb has said his campaign will focus on winning over white men, a demographic among whom the Democratic Party has lost ground in recent years.
Webb was elected to the Senate in 2006 in large part thanks to the apparent bigotry of his opponent, Republican incumbent Sen. George Allen. Allen called a Democratic tracker an obscure racial slur, sported a Confederate flag (including in a campaign TV ad) and reportedly kept a noose in his law office.
Webb, a successful author, left the Senate after one term, suggesting his writerly temperament was not well suited for politics.
/*
 * Copyright (c) 2016-2018 <NAME> (<EMAIL>)
 *
 * Distributed under the MIT License (MIT) (See accompanying file LICENSE.txt
 * or copy at http://opensource.org/licenses/MIT)
 */
#ifndef INCLUDED_PKMN_CONVERSIONS_GB_CONVERSIONS_HPP
#define INCLUDED_PKMN_CONVERSIONS_GB_CONVERSIONS_HPP

#include <pksav/gen1/pokemon.h>
#include <pksav/gen2/pokemon.h>

namespace pkmn { namespace conversions {

    void gen1_pc_pokemon_to_gen2(
        const struct pksav_gen1_pc_pokemon* from,
        struct pksav_gen2_pc_pokemon* to
    );

    void gen1_party_pokemon_to_gen2(
        const struct pksav_gen1_party_pokemon* from,
        struct pksav_gen2_party_pokemon* to
    );

    void gen2_pc_pokemon_to_gen1(
        const struct pksav_gen2_pc_pokemon* from,
        struct pksav_gen1_pc_pokemon* to
    );

    void gen2_party_pokemon_to_gen1(
        const struct pksav_gen2_party_pokemon* from,
        struct pksav_gen1_party_pokemon* to
    );

}}

#endif /* INCLUDED_PKMN_CONVERSIONS_GB_CONVERSIONS_HPP */
FEM. The path of transformation and development The article marks the 90th anniversary of the Kharkiv National University of Civil Engineering and Architecture. The Faculty of Economics and Management (FEM) has been one of the university's main structural divisions for over 25 years. The article describes the most significant stages in the development of the faculty from its creation in 1994 to the present. The faculty was established in response to the new economic relations in the state, the development of the market and entrepreneurship, the widespread adoption of IT, and the need to train management personnel. Historical information is given about the former heads of the faculty, N. Pasechnik, V. Zadriboroda, and D. Cherednik, as well as about the development of the faculty's structure, departments, and specialties. Data are presented on the faculty's international cooperation, the participation of its scientists in international projects, start-ups, conferences, student exchange programs and internships in the European Union, and publishing. Over the years of its existence, the faculty has trained and graduated more than 3000 specialists in a number of areas of economic activity. In addition to citizens of Ukraine, students from more than 12 countries of the near and far abroad have studied at the faculty. The article also outlines the main directions of the scientific activity of the faculty's scientists, the effectiveness of its graduate school, and its personnel policy. Today, 49 candidates and 10 doctors of sciences work at the faculty, most of them graduates of the faculty's own specialties. A number of examples of the innovative activities of the faculty's departments are provided: the introduction of new specialties, joint projects, startups, and the creation of public organizations.
Student preparation today follows 10 educational programs at the bachelor's, master's, and doctoral levels. Information is given on practice bases and department branches, along with a description of the material and technical base, laboratories, and computer classes. The main directions of educational work at the faculty are outlined. In conclusion, a roadmap is presented for developing the faculty and improving its scientific potential and its material and technical base.
package net.nocturne.network.decoders;
import net.nocturne.Settings;
import net.nocturne.game.player.Player;
import net.nocturne.network.Session;
import net.nocturne.stream.InputStream;
import net.nocturne.utils.Logger;
public final class WorldLoginPacketsDecoder extends Decoder {
private Player player;
public WorldLoginPacketsDecoder(Session session, Player player) {
super(session);
this.player = player;
}
@Override
public final int decode(InputStream stream) {
session.setDecoder(-1);
int packetId = stream.readUnsignedByte();
switch (packetId) {
case 26:
return decodeLogin(stream);
default:
if (Settings.DEBUG)
Logger.log(this, "WorldLoginPacketId " + packetId);
session.getChannel().close();
return -1;
}
}
private final int decodeLogin(InputStream stream) {
if (stream.getRemaining() != 0) {
session.getChannel().close();
return -1;
} // switches decoder
session.setDecoder(3, player);
return stream.getOffset();
}
} |
Klotho Protein Deficiency Leads to Overactivation of μ-Calpain* The klotho mouse is an animal model that prematurely shows phenotypes resembling human aging. Here we report that in homozygotes for the klotho mutation (kl−/−), αII-spectrin is highly cleaved, even before the occurrence of aging symptoms such as calcification and arteriosclerosis. Because αII-spectrin is susceptible to proteolysis by calpain, we examined the activation of calpain in kl−/− mice. m-Calpain was not activated, but μ-calpain was activated at an abnormally high level, and an endogenous inhibitor of calpain, calpastatin, was significantly decreased. Proteolysis of αII-spectrin increased with decreasing level of Klotho protein. Similar phenomena were observed in normal aged mice. Our results indicate that the abnormal activation of calpain due to the decrease of Klotho protein leads to degradation of cytoskeletal elements such as αII-spectrin. Such deterioration may trigger renal abnormalities in kl−/− mice and aged mice, but Klotho protein may suppress these processes.
The klotho (kl−/−) mouse shows multiple phenotypes resembling human aging caused by the mutation of a single gene. This mutation is caused by the insertion of ectopic DNA into the regulatory region of the klotho gene. The klotho gene encodes a type I membrane protein that is expressed predominantly in the kidney and brain. The extracellular domain of Klotho protein consists of two internal repeats that share sequence similarity to the β-glucosidases of both bacteria and plants. As a result of a defect in klotho gene expression, the kl−/− mouse exhibits multiple age-associated disorders, such as arteriosclerosis, osteoporosis, skin atrophy, pulmonary emphysema, short life span, and infertility. However, the mechanism by which the klotho gene product suppresses the aging phenomena has not been identified. Analysis of the pathophysiology of kl−/− mice is expected to give clues not only to understanding the mechanisms of individual diseases associated with aging but also the relationship between these mechanisms during human aging. Non-erythroid spectrin is a heterodimeric actin-binding protein that consists of αII- and βII-spectrin and is usually found on the cytoplasmic side of the plasma membrane. It is thought to participate in the establishment and maintenance of cell polarity, shape, and receptor distribution. Recently, it was proposed that spectrin retained and stabilized various proteins at specific regions on the cell surface (6–9). αII-Spectrin has been shown to be cleaved by calpain and/or caspase during apoptosis and necrosis (10–13). Calpain, a calcium-dependent cytosolic cysteine protease, is involved in many physiological and pathological processes (14–16). Calpain mediates proteolysis of various cellular proteins, including cytoskeletal proteins, and causes irreversible cell damage (10–13, 17, 18).
Thus, calpain overactivation may contribute to the pathology of cerebral and cardiac ischemia, Alzheimer's disease, arthritis, and cataract formation. Calpain has been shown to be regulated by both calcium ion and calpastatin. Two types of isozymic calpain, μ-calpain and m-calpain, are ubiquitously distributed in mammalian cells. The former is activated by micromolar concentrations of calcium and the latter is activated by millimolar concentrations of calcium. Calpastatin is an endogenous inhibitor specific for calpain, but is slowly degraded by calpain. Here, we report the cleavage of αII-spectrin due to the continuous activation of μ-calpain in kl−/− mice. Furthermore, we also observe similar phenomena in normal aged mice. EXPERIMENTAL PROCEDURES Preparation of Mouse Tissue Extracts-Kidneys were obtained from 2- and 3-week-old kl+/+, kl+/−, and kl−/− mice and from 4-week-old and 29-month-old C57BL/6 mice. Brain, lung, heart, liver, and kidney were obtained from 4-week-old and 8-week-old kl+/+ and kl−/− mice. Tissue samples were homogenized with 9 volumes (weight/volume) of 10 mM Tris-HCl, pH 7.4, 1 mM EDTA, 250 mM sucrose. After centrifugation at 900 × g for 10 min, the supernatant was subjected to ultracentrifugation at 100,000 × g for 1 h. The supernatants and precipitates were used as the cytosolic fraction and microsomal membrane fraction, respectively. Protein concentration was determined by BCA assay (Pierce). All experimental procedures using laboratory animals were approved by the Animal Care and Use Committee of Tokyo Metropolitan Institute of Gerontology. All efforts were made to minimize the number of animals used and their suffering. Amino Acid Sequencing of 280-kDa Protein-Kidney microsomal fraction (250 μg) from 4-week-old kl+/+ and kl−/− mice was subjected to SDS-PAGE under reducing conditions followed by staining with Coomassie Brilliant Blue R-250.
A protein band of 280 kDa was excised and treated with 0.1 μg of Achromobacter protease I (lysylendopeptidase) at 37°C for 12 h in 0.1 M Tris-HCl, pH 9.0, containing 0.1% SDS and 1 mM EDTA. The peptides were separated on columns of DEAE-5PW (1 × 20 mm; Tosoh, Tokyo, Japan) and CAPCELL PAK C18 UG120 (1 × 100 mm; Shiseido, Tokyo, Japan). Solvent A was 0.085% (v/v) trifluoroacetic acid in distilled water, and solvent B was 0.075% (v/v) trifluoroacetic acid in 80% (v/v) acetonitrile. The peptides were eluted at a flow rate of 30 μl/min using a linear gradient of 1–60% solvent B. Selected peptides were subjected to Edman degradation using a Procise 494 cLC protein sequencer (Applied Biosystems, Foster City, CA) and to matrix-assisted laser desorption ionization time-of-flight mass spectrometry on a Reflex MALDI-TOF (Bruker Daltonics, Billerica, MA) in linear mode using 2-mercaptobenzothiazole as a matrix. Antibodies-Rabbit antibodies specific to the pre- and post-autolytic forms of μ-calpain (anti-pre-μ and anti-post-μ, respectively) were raised against synthetic peptides as described previously. Antibodies specific to the pre- and post-autolytic forms of m-calpain (anti-pre-m and anti-post-m, respectively) were produced using synthetic peptides corresponding to the N-terminal 21 residues (AGIAAKLAKDREAAEGLGSHE) of the intact form and the N-terminal 6 residues (KDREAA) of the autolytic form, respectively. A cysteine residue was added to the C terminus of each peptide so that the antigenic peptide could be conjugated to keyhole limpet hemocyanin. The entire amino acid sequence and the autolytic cleavage site of human m-calpain were obtained from previous reports. Antibodies specific to the calpain-generated N- and C-terminal fragments of αII-spectrin (136 and 148 kDa, respectively) were produced by the peptide antigens QQQEVY (anti-BDP-136) and GAMPRD (anti-BDP-148), respectively (see Fig. 2).
A cysteine residue was added to the N terminus of the QQQEVY peptide or to the C terminus of the GAMPRD peptide. The amino acid sequence and the cleavage site in mouse αII-spectrin by calpain were as determined by others previously. An antibody against domain IV of human calpastatin was produced using a synthetic peptide corresponding to residues 601–630 (AEHRDKLGERDDTIPPEYRHLLDDNGQDKP) with a cysteine residue added to the C terminus. Rabbits were immunized with the antigenic peptide-keyhole limpet hemocyanin conjugates. Affinity purification of polyclonal antibodies was carried out using antigenic peptides immobilized on epoxy-activated Sepharose 6B (Amersham Biosciences, Buckinghamshire, UK). Anti-human-αII-spectrin polyclonal antibody C-20 from goat was purchased from Santa Cruz Biotechnology (Santa Cruz, CA). Anti-Klotho monoclonal antibody (KM2076) from rat was a generous gift from Kyowa Hakko Kogyo Co., Ltd. RESULTS Decrease of αII-Spectrin in the Kidney of kl−/− Mice-To determine whether mouse homozygotes of the klotho gene mutation (kl−/−) have a different pattern of proteins in the kidney, we examined the kidney microsomal fractions from 4-week-old mice by SDS-PAGE. A band of about 280 kDa was found to be significantly weaker in kl−/− mice than in kl+/+ mice (Fig. 1A). Similar results were obtained with five other kl−/− mice. The 280-kDa protein band was subjected to in-gel lysylendopeptidase digestion, and the sequences of two of the resulting peptides were determined to be LQTASDESYK and KHEAFETDFTVHK by a combination of Edman degradation and mass spectrometry. A database search of protein sequences revealed that these peptide sequences were homologous to those of human αII-spectrin (GenBank™ accession number AAB41498). A Western blot using an anti-αII-spectrin antibody (C-20) confirmed that the 280-kDa protein is αII-spectrin and that the reactivity of the antibody was drastically decreased in kl−/− mice (Fig. 1B).
The antibody also stained a 145-kDa band in kl−/− mice, but this band was below the detectable level in kl+/+ mice (Fig. 1B). Because the anti-αII-spectrin antibody recognizes the C terminus of αII-spectrin, it is likely that the 145-kDa band is a C-terminal fragment of αII-spectrin. Although the kidney of 4-week-old kl−/− mice was not morphologically different from that of kl+/+ mice, it did show a small amount of calcification (Fig. 1, C-F). Decrement of αII-Spectrin and Calpastatin with Increased Activation of μ-Calpain-αII-Spectrin was previously shown to be cleaved at a particular site by calpain, yielding 136- and 148-kDa fragments. To determine whether calpain is involved in proteolysis of αII-spectrin in the kidney of kl−/− mice, we prepared specific antibodies to sequences on either side of the cleavage site (Fig. 2). The anti-BDP-136 antibody, which was produced against a sequence (QQQEVY) in the C-terminal region of BDP-136, recognized only the 136-kDa fragment of αII-spectrin. BDP-136 was detected only in kl−/− mice (Fig. 3A). On the other hand, the anti-BDP-148 antibody, which was produced against a sequence (GAMPRD) in the N-terminal region of BDP-148, recognized not only the 148-kDa fragment but also full-length αII-spectrin. BDP-148 was detected only in kl−/− mice (Fig. 3B). These results indicated that αII-spectrin was degraded by calpain in the kidney of kl−/− mice. To determine which of the calpain isozymes were activated in the kl−/− kidney, we made a Western blot of kidney cytosolic fractions using antibodies against four types of calpain: the inactive and active forms of μ-calpain (pre- and post-μ-calpain) and the inactive and active forms of m-calpain (pre- and post-m-calpain). Pre-μ-calpain was detected in kl+/+ mice but not in kl−/− mice (Fig. 3C). Post-μ-calpain was detected in kl−/− mice but not in kl+/+ mice (Fig. 3D). Pre-m-calpain was detected in both kl+/+ and kl−/− mice with no significant difference between them (Fig. 3E).
Post-m-calpain was barely detected in either kl+/+ or kl−/− mice (Fig. 3F). These results indicate that μ-calpain, but not m-calpain, was specifically activated in the kl−/− kidney. Interestingly, calpastatin, which is an endogenous inhibitor of calpain, was barely detected in kl−/− mice (Fig. 3G). The triplet bands at about 122 kDa in Fig. 3G are probably alternative splicing forms of calpastatin. The expression levels of mRNAs of calpastatin (Fig. 3H) and αII-spectrin (Fig. 3I) were not different between kl+/+ and kl−/− mice, which suggests that the decreases of calpastatin and αII-spectrin in kl−/− mice were due to increased degradation rather than a down-regulation of transcription. Activation of μ-Calpain Depends on Klotho Protein Level-To elucidate the relation between the amount of Klotho protein and the degree of μ-calpain activation, we examined the mouse heterozygotes for the klotho mutation (kl+/−). The expression level of Klotho protein in 2-week-old kl+/− mice (Fig. 4A, lane 2) was approximately half that in 2-week-old kl+/+ mice (lane 1). A similar relation was found in 3-week-old mice (lanes 4 and 5). These results showed that the expression level of Klotho protein affected the activation of μ-calpain and the amount of calpastatin (Fig. 4B). To elucidate the process of calpain activation, calpastatin decrement, and αII-spectrin proteolysis, we examined mice that were less than 4 weeks old. In kl−/− mice, pre-μ-calpain and calpastatin were present at low levels at 2 weeks (Fig. 4A, lane 3) but were undetectable at 3 weeks (lane 6), while the amount of cleaved αII-spectrin was much higher at 3 weeks (lane 6) and 4 weeks than at 2 weeks (lane 3). In 2- and 3-week-old kl+/− mice (lanes 2 and 5), the amount of cleaved αII-spectrin was much higher at 3 weeks (lane 5) than at 2 weeks (lane 2), while the levels of pre-μ-calpain and calpastatin at 3 weeks were slightly less than those at 2 weeks.
These findings suggest that: 1) pre-μ-calpain and calpastatin were originally expressed in kl−/− mouse kidney and that μ-calpain was gradually activated and calpastatin was gradually decreased during development, and 2) αII-spectrin was hardly cleaved in the presence of calpastatin, but intensive cleavage of αII-spectrin was observed after the complete disappearance of calpastatin (Fig. 4B). No calcification was observed in 3-week-old kl−/− mice (data not shown), indicating that degradation of αII-spectrin in the kidney of kl−/− mice preceded the occurrence of any tissue damage. Organ-specific Calpain Activation-The susceptibility and degree of proteolysis due to the klotho mutation varied among different organs. Changes in the lung of 4-week-old kl−/− mice (Fig. 5) were similar to those observed in the kidney. The intensity of intact αII-spectrin drastically decreased and lower molecular weight bands newly appeared. In addition, post-μ-calpain, but not pre-μ-calpain, was detected, suggesting that significant proteolysis occurred in the lung. It may be relevant that the first pulmonary emphysematous changes are observed at 4 weeks of age in kl−/− mice. In the heart, partial activation of calpain was observed at 4 weeks, and only post-μ-calpain was detected at 8 weeks. However, αII-spectrin was not cleaved in the heart. These results suggest that the heart has a sufficient amount of calpastatin to prevent αII-spectrin degradation. On the other hand, no αII-spectrin degradation or calpain activation was observed in the brain or liver at 8 weeks. Calpain Activation in Aged Normal Mice-Changes similar to those observed in kl−/− mice occurred in aged normal (C57BL/6) mice. As normal mice aged from 4 weeks to 29 months, the expression of Klotho protein decreased, the activation of μ-calpain increased, the level of calpastatin considerably decreased, and the degradation of αII-spectrin increased (Fig. 6).
Similar changes were observed in five other mice. DISCUSSION Our results show that the aberrant activation of μ-calpain and the decrease of calpastatin in the kidney are caused by the klotho mutation, and such changes lead to the cleavage of αII-spectrin. These phenomena are well correlated with the expression level of Klotho protein. Our results also show that similar changes in μ-calpain, calpastatin, and αII-spectrin occur in normal aged mice. The abnormal activation of μ-calpain in the kidney occurs at an early age: in kl−/− mice, changes in μ-calpain activation and αII-spectrin degradation started to occur one to two weeks before the appearance of abnormal phenotypes, and in kl+/− mice, μ-calpain was gradually activated as they aged, even though these mice have a normal phenotypic appearance. Our finding that μ-calpain, but not m-calpain, was activated in the kidney of kl−/− mice suggests that the concentration of intracellular calcium ions in these mice is in the micromolar range. Normally calpain is activated temporarily, and calpain-catalyzed proteolysis leads to modulation rather than destruction of the substrate proteins. Therefore, continuous activation of μ-calpain is unusual and elucidation of the mechanism is essential to understanding its pathophysiological role. One possible mechanism is that μ-calpain overactivation causes a deficiency of calpastatin, and another is that a decrease in calpastatin causes an increase in μ-calpain activation. Since the transcription levels of calpastatin are the same in kl+/+ and kl−/− mice, the latter possibility is unlikely, but we cannot completely rule it out. μ-Calpain activity, in addition to being regulated by the calcium ion concentration, is usually also regulated by the binding of calpastatin. Thus, a deficiency of calpastatin may induce the overdestruction of substrates such as αII-spectrin by calpain.
The mechanism by which Klotho protein might regulate μ-calpain activity and calpastatin level in the kidney is unknown. However, it is possible that this regulation is mediated by nitric oxide (NO). NO has been shown to inhibit calpain-mediated proteolysis, and systemic NO synthesis is decreased in kl−/− mice. Furthermore, adenovirus-mediated klotho gene delivery increased NO production and ameliorated vascular endothelial dysfunction. It is noteworthy that calpain overactivation in kl−/− mice is not caused by ischemia due to arteriosclerosis, although ischemia would cause overactivation of calpain. A previous study revealed that, in kl−/− mice, arteriosclerosis first appeared around 4 weeks after birth and progressed gradually with age. However, in the lung and kidney in kl−/− mice, arteriosclerosis could not be the cause of overactivation of μ-calpain, because the latter occurred as early as 2–3 weeks. The degree of proteolysis and of activation of calpain caused by the klotho mutation varied among different organs. Both αII-spectrin degradation and calpain activation were observed in the kidney and lung as early as 2–3 weeks. Spectrin was not cleaved in the heart even at 8 weeks, while overactivation of calpain was observed. The time course of activation of calpain in the heart seemed to proceed more slowly than in the lung and kidney. However, it is impossible to examine this possibility, because kl−/− mice die at 8–9 weeks. On the other hand, no αII-spectrin degradation or calpain activation was observed in the brain or liver at 8 weeks. In addition, an organ's susceptibility to the klotho mutation did not necessarily correspond to its expression of klotho mRNA. Taken together, these results suggest that Klotho protein or its metabolites may function as a humoral factor.
In support of this hypothesis, both mice and humans have a secretory form of Klotho protein, and the exogenous klotho gene expressed in the brain and testis could improve systemic aging phenotypes in kl−/− mice. It is important to identify and characterize a target molecule (receptor) that is responsive to Klotho protein or its metabolites. Thus, it may be that the factor most responsible for an organ's sensitivity to the klotho mutation is the density of such a receptor. Our finding that normal aged mice show changes similar to those in kl−/− mice suggests that the decrease of Klotho protein is closely related to aging processes. Recent studies revealed that the expression of the klotho gene was gradually reduced in the rat kidney during long-term hypertension and that calpastatin was also gradually degraded in the kidney of hypertensive rats. Furthermore, humans with chronic renal failure commonly develop multiple complications resembling phenotypes observed in kl−/− mice, and the expression of klotho mRNA and the production of Klotho protein were severely reduced in these patients. Taken together, these results suggest that Klotho protein in the kidney protects against the progression of age-related renal disorders. Based on the above results, we propose that tissue deterioration during aging is caused by a decrease of Klotho protein, which leads to a decrease of calpastatin and activation of μ-calpain, which leads to a degradation of cytoskeletal components such as spectrin. The magnitude of each of these effects correlates with the amount of Klotho expression. A decrease of calpastatin accelerates the activation of μ-calpain and vice versa. Such deterioration may trigger tissue abnormalities in kl−/− mice and aged mice, but Klotho protein may suppress these processes, although the detailed mechanism is not yet clear. Very recently Yoshida et al.
reported that calcium and phosphorus homeostasis could be regulated through Klotho function via the action of 1,25-dihydroxyvitamin D due to the impaired regulation of 1α-hydroxylase gene expression. This deterioration in the vitamin D3 endocrine system may participate in many of the phenotypes in kl−/− mice via toxicity due to increased levels of calcium, phosphorus, and 1,25-dihydroxyvitamin D. It should be noted that when serum concentrations of calcium, phosphorus, and 1,25-dihydroxyvitamin D are restored to normal levels, many of the phenotypes are improved despite Klotho protein deficiency.1 Thus, Klotho protein may be a regulator of calcium homeostasis via the vitamin D3 endocrine system. Alternatively, based on the homology to β-glucosidase, Klotho protein may function as a glycosidase-like enzyme and modify the glycan moieties of ion channels. Since glycosylation appears to be important for the function of ion channels (48–50), the change of glycosylation may affect calcium homeostasis. In any case, the abnormal activation of calpain due to the decrease of Klotho protein, leading to degradation of cytoskeletal elements such as αII-spectrin, is likely to be integral to the pathogenic sequence in kl−/− mice and recapitulates effects seen in normal aging. Future studies are needed to determine the definitive role of Klotho protein in the regulation of calcium metabolism as well as of intracellular calcium concentration. Such studies will also lead to a better understanding of age-related renal abnormalities and to the prevention of renal diseases in the future.
from tornado.httpclient import HTTPRequest, HTTPResponse
from .collection import RequestCollection
class AsyncHTTPStubClient(object):
    def fetch(self, request, callback=None, **kwargs):
        # Accept either an HTTPRequest or a bare URL string, as the real
        # tornado AsyncHTTPClient does.
        if not isinstance(request, HTTPRequest):
            request = HTTPRequest(url=request, **kwargs)
        response_partial = RequestCollection.find(request)
        if response_partial:
            resp = response_partial(request)
        else:
            # No registered stub matches this request: answer with a 404.
            resp = HTTPResponse(request, 404)
        # Guard against the default callback=None, which would otherwise
        # raise a TypeError when invoked.
        if callback is not None:
            callback(resp)
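The stub client above depends on tornado and this repository's `RequestCollection`. The pattern itself is easy to see in isolation; the following is a dependency-free sketch, where `StubCollection`, `FakeRequest`, `FakeResponse`, and `StubClient` are illustrative stand-ins, not part of tornado or this codebase:

```python
# A dependency-free sketch of the stub-client pattern used above.
class FakeRequest:
    def __init__(self, url):
        self.url = url

class FakeResponse:
    def __init__(self, request, code):
        self.request = request
        self.code = code

class StubCollection:
    """Maps URLs to callables that build a response from a request."""
    _stubs = {}

    @classmethod
    def add(cls, url, response_partial):
        cls._stubs[url] = response_partial

    @classmethod
    def find(cls, request):
        return cls._stubs.get(request.url)

class StubClient:
    def fetch(self, request, callback=None, **kwargs):
        # Accept either a request object or a bare URL string.
        if not isinstance(request, FakeRequest):
            request = FakeRequest(url=request, **kwargs)
        response_partial = StubCollection.find(request)
        # Fall back to a 404 when no stub matches, as the real client does.
        resp = response_partial(request) if response_partial else FakeResponse(request, 404)
        if callback is not None:
            callback(resp)
        return resp

StubCollection.add("http://example.test/ok", lambda req: FakeResponse(req, 200))
client = StubClient()
print(client.fetch("http://example.test/ok").code)       # 200 (stubbed)
print(client.fetch("http://example.test/missing").code)  # 404 (unmatched)
```

The essential design choice is that the collection stores a callable rather than a canned response, so a single stub can compute its response from the incoming request.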
|
package com.silvio.practice.advanced.javamd5;
import java.math.BigInteger;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Scanner;
public class Solution {
public static void main(String[] args) {
Scanner in = new Scanner(System.in);
System.out.println(getMD5(in.next()));
}
public static String getMD5(String input) {
try {
final MessageDigest md = MessageDigest.getInstance("MD5");
final byte[] messageDigest = md.digest(input.getBytes());
BigInteger number = new BigInteger(1, messageDigest);
String hashtext = number.toString(16);
while (hashtext.length() < 32) {
hashtext = "0" + hashtext;
}
return hashtext;
}
catch (NoSuchAlgorithmException e) {
throw new RuntimeException(e);
}
}
}
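The Java solution above pads the hex string to 32 characters because `BigInteger.toString(16)` drops leading zeros from the digest. The same concern can be illustrated in Python; `md5_biginteger_style` below is a hypothetical helper mirroring the Java logic, checked against the standard library's `hashlib` and the RFC 1321 test vector:

```python
import hashlib

def md5_biginteger_style(s: str) -> str:
    digest = hashlib.md5(s.encode()).digest()
    # int.from_bytes(...) drops leading zero bytes, mirroring Java's
    # BigInteger.toString(16); zfill re-pads, like the while-loop above.
    hex_no_pad = format(int.from_bytes(digest, "big"), "x")
    return hex_no_pad.zfill(32)

# Known MD5 test vector from RFC 1321: MD5("abc").
print(hashlib.md5(b"abc").hexdigest())   # 900150983cd24fb0d6963f7d28e17f72
assert md5_biginteger_style("abc") == hashlib.md5(b"abc").hexdigest()
```

Without the `zfill` (or the Java `while` loop), any input whose digest begins with a zero byte would produce a hash shorter than 32 characters.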
|
'use strict';
declare var require: any
import Base from "@bitclave/base-client-js";
//required for babel to polyfill regeneratorRuntime
require("babel-polyfill");
// process.on('unhandledRejection', (reason, p) => {
// console.log('Unhandled Rejection at: Promise', p, 'reason:', reason);
// });
(async function() {
var getBase = (async function(passphrase) {
//Initialize Base
var base = new Base("https://base2-bitclva-com.herokuapp.com", 'localhost', '', '');
base.changeStrategy('POSTGRES');
//Create a KeyPair
let keyPair = await base.createKeyPairHelper('').createKeyPair(passphrase);
console.log("\nCreated a keypair for the passphrase: " + passphrase);
console.log("PublicKey:" + keyPair.publicKey);
console.log("PrivateKey:" + keyPair.privateKey);
//Check for existence or create a new account
let account;
try {
console.log("\nChecking if account already exists.");
account = await base.accountManager.checkAccount(passphrase, "somemessage");
console.log("Account already exists: " + JSON.stringify(account));
} catch(e) {
console.log("\nAccount doesn't exist, Creating a new one.");
account = await base.accountManager.registration(passphrase, "somemessage");
console.log("Account created:" + JSON.stringify(account));
}
return base
})
var getPublicKey = (async function(passphrase) {
//Create a KeyPair
let keyPair = await (await getBase(passphrase)).createKeyPairHelper('').createKeyPair(passphrase);
return keyPair.publicKey;
})
console.log('Hello')
let aliceBase = await getBase("alice")
let eveBase = await getBase("eve")
let malloryBase = await getBase("mallory")
let reviewerBase = await getBase("reviewer")
let aliceKey = await getPublicKey("alice")
let eveKey = await getPublicKey("eve")
let malloryKey = await getPublicKey("mallory")
let reviewerKey = await getPublicKey("reviewer")
let data = new Map();
let data1 = new Map();
let data2 = new Map();
// let data = new Base.
var commit_id1 = "commit_1";
var commit_id2 = "commit_2";
var commit_id3 = "commit_3";
data.set(commit_id1, "https://github.com/bitclave/base-tutorial/commit/3993254b7c13d7617b7f2add9cb00c24fd10508a");
data1.set(commit_id2, "https://github.com/bitclave/base-tutorial/commit/64fd301d6d540d04468e43b6e5d2ea5bb870acd6");
data2.set(commit_id3, "https://github.com/bitclave/base-tutorial/commit/8909a0a371d916914e64bb0b894f61552d55989c");
// data.set("lastname", "Doe");
// data.set("email", "<EMAIL>");
// data.set("city", "NewYork");
// Save encrypted data to Base
let encryptedData1 = await aliceBase.profileManager.updateData(data);
let encryptedData2 = await eveBase.profileManager.updateData(data1);
let encryptedData3 = await malloryBase.profileManager.updateData(data2);
console.log("\nUser data is encrypted and saved to Base.");
for (var [key, value] of encryptedData1.entries()) {
console.log("Key:" + key + ", Encrypted Value:" + value);
}
//give access
const grantFields = new Map();
grantFields.set(commit_id1, 0);
await aliceBase.dataRequestManager.grantAccessForClient(reviewerKey, grantFields);
const grantFields1 = new Map();
grantFields1.set(commit_id2, 0);
await eveBase.dataRequestManager.grantAccessForClient(reviewerKey, grantFields1);
const grantFields2 = new Map();
grantFields2.set(commit_id3, 0);
await malloryBase.dataRequestManager.grantAccessForClient(reviewerKey, grantFields2);
// console.log("<NAME>")
// check approval
var temp = await reviewerBase.dataRequestManager.getRequests(reviewerKey, "");
temp.forEach(async function(approval) {
//respond to approval
console.log(await reviewerBase.profileManager.getAuthorizedData(approval.toPk,approval.responseData))
})
})(); |
package io.quarkiverse.zeebe.runtime.tracing;
import static io.quarkus.opentelemetry.runtime.OpenTelemetryConfig.INSTRUMENTATION_NAME;
import static java.lang.String.valueOf;
import java.util.Map;
import javax.annotation.Nullable;
import javax.annotation.Priority;
import javax.interceptor.AroundInvoke;
import javax.interceptor.Interceptor;
import javax.interceptor.InvocationContext;
import io.camunda.zeebe.client.api.response.ActivatedJob;
import io.opentelemetry.api.OpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.SpanKind;
import io.opentelemetry.api.trace.StatusCode;
import io.opentelemetry.context.Context;
import io.opentelemetry.context.Scope;
import io.opentelemetry.context.propagation.ContextPropagators;
import io.opentelemetry.context.propagation.TextMapGetter;
import io.opentelemetry.context.propagation.TextMapPropagator;
@SuppressWarnings("CdiInterceptorInspection")
@Interceptor
@Priority(value = Interceptor.Priority.LIBRARY_BEFORE + 1)
public class ZeebeOpenTelemetryInterceptor {
private final OpenTelemetry openTelemetry;
public ZeebeOpenTelemetryInterceptor(final OpenTelemetry openTelemetry) {
this.openTelemetry = openTelemetry;
}
@AroundInvoke
public Object wrap(InvocationContext ctx) throws Exception {
ActivatedJob job = (ActivatedJob) ctx.getParameters()[1];
Span span = createSpan(openTelemetry, ZeebeTracing.getClass(ctx.getTarget().getClass()),
ZeebeTracing.getSpanName(job, ctx.getMethod()), job);
try (Scope ignored = span.makeCurrent()) {
return ctx.proceed();
} catch (Throwable e) {
span.setStatus(StatusCode.ERROR);
span.setAttribute(ZeebeTracing.JOB_EXCEPTION, e.getMessage());
throw e;
} finally {
span.end();
}
}
private static Span createSpan(OpenTelemetry openTelemetry, String clazz, String spanName, ActivatedJob job) {
ContextPropagators propagators = openTelemetry.getPropagators();
TextMapPropagator textMapPropagator = propagators.getTextMapPropagator();
Context context = textMapPropagator.extract(Context.current(), job.getVariablesAsMap(), new TextMapGetter<>() {
@Override
public Iterable<String> keys(Map<String, Object> data) {
return data.keySet();
}
@Nullable
@Override
public String get(@Nullable Map<String, Object> data, String key) {
if (data == null) {
return null;
}
Object o = data.get(key);
if (o instanceof String) {
return (String) o;
}
return valueOf(o);
}
});
Span span = openTelemetry.getTracer(INSTRUMENTATION_NAME).spanBuilder(spanName).setParent(context)
.setSpanKind(SpanKind.CONSUMER).startSpan();
ZeebeTracing.setAttributes(clazz, job, new ZeebeTracing.AttributeConfigCallback() {
@Override
public void setAttribute(String key, long value) {
span.setAttribute(key, value);
}
@Override
public void setAttribute(String key, String value) {
span.setAttribute(key, value);
}
});
return span;
}
}
|
Estrogenic activity of organic extracts in the effluents treated by the present treatment and the new process. OBJECTIVE To evaluate the estrogenic activity of organic extracts in effluents treated by the present treatment and by the new technique, and to compare the removal efficiency of trace organic pollutants between the two processes. METHODS Solid phase extraction with resin adsorption was adopted to enrich the trace organic pollutants in the water samples; the estrogenic activity of the extracts was then detected using the yeast estrogen screen and the immature rat uterine bioassay. RESULTS The yeast estrogen screen demonstrated that the organic extracts of the new-technique effluent showed estrogenic activity after being concentrated 1000 times, at which point the activity of beta-galactosidase produced by the yeast began to appear; the same phenomenon occurred for the tertiary effluent, the secondary effluent, and the influent at a concentration of 500 times. At the same concentration factor of the extracts, the beta-galactosidase activity of each group could be ranked as: the new-technique effluent < the tertiary effluent < the secondary effluent < the influent. The immature rat uterine bioassay showed a significant difference in the ratio of uterine weight to body weight only between the high-dose group of the influent organic extracts and the negative control (P < 0.05), but not between the other groups. CONCLUSION The estrogenic activity of urban sewage in Zhengzhou was significantly decreased after treatment, but it still poses a potential hazard to the environment. The trace organic pollutants in wastewater were removed more efficiently by the new technique than by the present treatment.
<filename>springloaded/src/test/java/org/springsource/loaded/test/CatcherTests.java
/*
* Copyright 2010-2012 VMware and contributors
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* https://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.springsource.loaded.test;
import org.junit.Assert;
import org.junit.Test;
import static org.junit.Assert.assertEquals;
import org.springsource.loaded.ClassRenamer;
import org.springsource.loaded.ReloadableType;
import org.springsource.loaded.TypeDescriptor;
import org.springsource.loaded.TypeRegistry;
/**
* Checking the computation of catchers.
*
* @author <NAME>
*/
@SuppressWarnings("unused")
public class CatcherTests extends SpringLoadedTests {
/*
* Details on catchers
*
* Four types of method in the super type to think about:
* - private
* - protected
* - default
* - public
*
* And things to keep in mind:
* - private methods are not overridable (invokespecial is used to call them)
* - visibility cannot be reduced, only widened
* - static methods are not overridable
*
* Catching rules:
* - don't need a catcher for a private method, there cannot be code out there that calls it with INVOKEVIRTUAL
* - visibility is preserved except for protected/default, which is widened to public - this enables the executor to call the
* catcher. Doesn't seem to have any side effects (doesn't limit the ability for an overriding method in a further
* subclass to have been declared initially protected).
*/
@Test
public void rewrite() throws Exception {
TypeRegistry typeRegistry = getTypeRegistry("catchers.B");
loadClass("catchers.A");
TypeDescriptor typeDescriptor = typeRegistry.getExtractor().extract(loadBytesForClass("catchers.B"), true);
checkDoesNotContain(typeDescriptor, "privateMethod");
checkDoesContain(typeDescriptor, "0x1 publicMethod");
checkDoesContain(typeDescriptor, "0x1 protectedMethod");
checkDoesContain(typeDescriptor, "0x1 defaultMethod");
ReloadableType rtype = typeRegistry.addType("catchers.B", loadBytesForClass("catchers.B"));
reload(rtype, "2");
}
/**
* Exercising the two codepaths for a catcher. The first 'run' will run the super version. The second 'run' will
* dispatch to our new implementation.
*/
@Test
public void exerciseCatcher() throws Exception {
TypeRegistry registry = getTypeRegistry("catchers..*");
String a = "catchers.A";
String b = "catchers.B";
ReloadableType rtypeA = registry.addType(a, loadBytesForClass(a));
ReloadableType rtypeB = registry.addType(b, loadBytesForClass(b));
Class<?> clazz = loadit("catchers.Runner",
ClassRenamer.rename("catchers.Runner", loadBytesForClass("catchers.Runner")));
assertStartsWith("catchers.B@", runUnguarded(clazz, "runToString").returnValue);
Assert.assertEquals(65, runUnguarded(clazz, "runPublicMethod").returnValue);
Assert.assertEquals(23, runUnguarded(clazz, "runProtectedMethod").returnValue);
rtypeB.loadNewVersion("2", retrieveRename(b, b + "2"));
Assert.assertEquals("hey!", runUnguarded(clazz, "runToString").returnValue);
Assert.assertEquals(66, runUnguarded(clazz, "runPublicMethod").returnValue);
Assert.assertEquals(32, runUnguarded(clazz, "runProtectedMethod").returnValue);
// 27-Aug-2010 - typical catcher - TODO should we shorten some type names/method names to reduce class file size?
// METHOD: 0x0001(public) publicMethod()V
// CODE
		//  GETSTATIC catchers/B.r$type Lorg/springsource/loaded/ReloadableType;
// LDC 0
// INVOKEVIRTUAL org/springsource/loaded/ReloadableType.fetchLatestIfExists(I)Ljava/lang/Object;
// DUP
// IFNULL L0
// CHECKCAST catchers/B__I
// ALOAD 0
// INVOKEINTERFACE catchers/B__I.publicMethod(Lcatchers/B;)V
// RETURN
// L0
// POP
// ALOAD 0
// INVOKESPECIAL catchers/A.publicMethod()V
// RETURN
}
/**
* Now we work with a mixed hierarchy. Type X declares the methods, type Y extends X does not, type Z extends Y
* does.
*/
@Test
public void exerciseCatcher2() throws Exception {
TypeRegistry registry = getTypeRegistry("catchers..*");
String x = "catchers.X";
String y = "catchers.Y";
String z = "catchers.Z";
ReloadableType rtypeX = registry.addType(x, loadBytesForClass(x));
ReloadableType rtypeY = registry.addType(y, loadBytesForClass(y));
ReloadableType rtypeZ = registry.addType(z, loadBytesForClass(z));
Class<?> clazz = loadRunner("catchers.Runner2");
Assert.assertEquals(1, runUnguarded(clazz, "runPublicX").returnValue);
Assert.assertEquals(1, runUnguarded(clazz, "runPublicY").returnValue); // Y does not override
Assert.assertEquals(3, runUnguarded(clazz, "runPublicZ").returnValue);
Assert.assertEquals('a', runUnguarded(clazz, "runDefaultX").returnValue);
Assert.assertEquals('a', runUnguarded(clazz, "runDefaultY").returnValue); // Y does not override
Assert.assertEquals('c', runUnguarded(clazz, "runDefaultZ").returnValue);
Assert.assertEquals(100L, runUnguarded(clazz, "runProtectedX").returnValue);
Assert.assertEquals(100L, runUnguarded(clazz, "runProtectedY").returnValue); // Y does not override
Assert.assertEquals(300L, runUnguarded(clazz, "runProtectedZ").returnValue);
rtypeY.loadNewVersion("2", retrieveRename(y, y + "2"));
Assert.assertEquals(1, runUnguarded(clazz, "runPublicX").returnValue);
Assert.assertEquals(22, runUnguarded(clazz, "runPublicY").returnValue); // now Y does
Assert.assertEquals(3, runUnguarded(clazz, "runPublicZ").returnValue);
Assert.assertEquals('a', runUnguarded(clazz, "runDefaultX").returnValue);
Assert.assertEquals('B', runUnguarded(clazz, "runDefaultY").returnValue); // now Y does
Assert.assertEquals('c', runUnguarded(clazz, "runDefaultZ").returnValue);
// Runner2.runProtectedX invokes x.callProtectedMethod() which simply returns 'protectedMethod()'
Assert.assertEquals(100L, runUnguarded(clazz, "runProtectedX").returnValue);
Assert.assertEquals(200L, runUnguarded(clazz, "runProtectedY").returnValue); // now Y does
Assert.assertEquals(300L, runUnguarded(clazz, "runProtectedZ").returnValue);
}
// TODO are reloadings happening too frequently now that ctors will force them?
protected Class<?> loadRunner(String name) {
return loadit(name, loadBytesForClass(name));
}
}
|
<gh_stars>1-10
import sys
from mpi4py import MPI
from teesd import sourcepath
from parteesd import (
xmlreadbc,
checkerror
)
import abate
comm = MPI.COMM_WORLD
myrank = comm.Get_rank()
numproc = comm.Get_size()
testname = "TEESDReadBC"
def runtest():
path_to_datafile = sourcepath + "/Testing/Data/XML/parxml.xml"
mydict = xmlreadbc(path_to_datafile, comm)
expected_data = 'v1.1.1 v1.1.2 v1.1.3 v1.1.4'
check_value = 0
if mydict['teesd_config']['config1']['value1'] != expected_data:
check_value = 1
print(testname + "(", myrank, "): Failed test.")
print(testname + "(", myrank, "): Expected: ", expected_data)
print(testname + "(", myrank, "): Got: ",
mydict['teesd_config']['config1']['value1'])
check_value = checkerror(check_value)
    return check_value
if __name__ == "__main__":
numargs = len(sys.argv)
resultsfilename = ""
if numargs > 1:
resultsfilename = sys.argv[1]
myrank = comm.Get_rank()
test_pass = runtest()
testresult = {testname: "Pass"}
if test_pass != 0:
testresult = {testname: "Fail"}
if myrank == 0:
abate.updatetestresults(testresult, resultsfilename)
|
import errno
import os

# Note: `app` is assumed to be the web framework's application object
# (e.g. a Flask app), imported elsewhere in this module.
def download_dir() -> str:
downdir = app.config.get(
'DOWNLOAD_FOLDER',
os.path.join(os.getcwd(), "downloads")
)
if not (
os.path.isdir(downdir) and
os.access(downdir, os.R_OK | os.W_OK | os.X_OK)
):
        raise FileNotFoundError(
            errno.ENOENT,
            os.strerror(errno.ENOENT)
            + f": download directory '{downdir}' does not exist or is not accessible",
            downdir,
        )
return downdir |
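The access check above folds existence and permission testing into one call. A small self-contained demonstration of the same pattern (no web-framework `app` involved; the directory here is a temp dir created just for the example, and `require_usable_dir` is a hypothetical helper, not part of the code above):

```python
import errno
import os
import tempfile

def require_usable_dir(path):
    # Directory must exist and be readable, writable, and traversable.
    if not (os.path.isdir(path) and os.access(path, os.R_OK | os.W_OK | os.X_OK)):
        raise FileNotFoundError(
            errno.ENOENT,
            os.strerror(errno.ENOENT) + f": directory '{path}' missing or inaccessible",
            path,
        )
    return path

with tempfile.TemporaryDirectory() as tmp:
    assert require_usable_dir(tmp) == tmp  # exists and accessible

# The temp dir was removed when the with-block exited, so this now fails.
try:
    require_usable_dir(tmp)
except FileNotFoundError as e:
    print("caught:", e.filename)
```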
The effect of child-centered play therapy on children with anger control problems Background: Children's anger and aggressive behaviors become a problem for teachers and parents at home, in the classroom, or on the playground. Pharmacological and psychotherapeutic approaches are recommended for children who cannot control their anger. Child-centered play therapy is one of these approaches. Aim: This study aimed to reveal the effect of child-centered play therapy on children with anger issues. Materials and Methods: The study group consisted of 25 volunteer child clients with anger symptoms, and the control group consisted of 25 volunteer child clients without anger symptoms. Each participant was given child-centered play therapy in 45-min sessions twice a week for 3 weeks during the research process. The Trait Anger-Anger Style Scale was administered to the participants before and after the therapy. Results: The study showed that children with anger issues experienced a significant change and improved after child-centered play therapy: the children became able to control their anger, and improvement was also observed in the verbal and behavioral expression of anger. Conclusion: The results of this study indicate that child-centered play therapy can be an effective treatment option for children with anger issues and aggressive behaviors.
UNLABELLED: The effects of imaging conditions, and measures for their improvement, were examined with regard to recognizing the effects of contrast on images when T1-weighted imaging with selective fat suppression was applied. METHOD: Luminance at the target region was examined before and after contrast imaging using phantoms simulating pre- and post-imaging conditions. A clinical examination was performed on tumors revealed by breast examination, including those surrounded by mammary gland and by fat tissue. RESULTS: When fat suppression was used and imaging contrast was enhanced, the luminance level of fat tumors with the same structure as the prepared phantoms appeared high both before and after contrast imaging, and the effects of contrast were not distinguishable. This observation is attributable to the fact that the imaging conditions before and after contrast imaging were substantially different. To compare pre- and post-contrast images, it is considered necessary to perform imaging with fixed receiver gain and to apply the same imaging method for pre- and post-contrast images by adjusting the post-contrast imaging conditions to those of pre-contrast imaging.
import SerializationContext from "./core/context/SerializationContext";
import { deserializeInternal, serializeInternal } from "./core/Serializer";
import DeserializationContext from "./core/context/DeserializationContext";
import { ReferenceBehavior } from "./core/context/ContextBase";
export interface SerializationSettings {
allowDynamic: boolean
referenceBehavior: ReferenceBehavior
}
export interface DeserializationSettings {
referenceBehavior: ReferenceBehavior
}
export const defaultSerializationSettings: SerializationSettings = {
    allowDynamic: false,
    referenceBehavior: ReferenceBehavior.Error
};
export const defaultDeserializationSettings: DeserializationSettings = {
    referenceBehavior: ReferenceBehavior.Error
};
export async function serializeObject(object: any, settings?: Partial<SerializationSettings>): Promise<any> {
    const mergedSettings = {...defaultSerializationSettings, ...settings};
    const context = new SerializationContext(mergedSettings.allowDynamic, mergedSettings.referenceBehavior);
    return await serializeInternal(object, context);
}
export async function deserializeObject<T>(object: any, cls?: Function, settings?: Partial<DeserializationSettings>): Promise<T> {
    const mergedSettings = {...defaultDeserializationSettings, ...settings};
    const context = new DeserializationContext(cls, mergedSettings.referenceBehavior);
    return await deserializeInternal<T>(object, context);
}
export async function populateObject<T>(object: T, dto: any, cls?: Function, settings?: Partial<DeserializationSettings>): Promise<T> {
    const mergedSettings = {...defaultDeserializationSettings, ...settings};
const context = new DeserializationContext(cls || object.constructor, mergedSettings.referenceBehavior, object);
return await deserializeInternal<T>(dto, context);
} |
<filename>Final.2/Q3/Main.java
package com.intro;
public class Main {
public static void main(String[] args) {
StockData stockData = new StockData();
Visualizer visualizer = new Visualizer();
/*
        This could be IoC (Inversion of Control) for the parameters of the question.
        I am assuming that this main thread is the controller, so this wiring is correct.
*/
stockData.setStockDataChangeListener(visualizer);
        // Test the observer; only setElement(int index, int value) is available since the underlying data is an array.
stockData.setElement(5, 50);
}
}
|
<gh_stars>0
// Copyright 2017 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package com.google.api.ads.dfp.jaxws.v201711;
import javax.xml.bind.annotation.XmlEnum;
import javax.xml.bind.annotation.XmlType;
/**
* <p>Java class for EvaluationStatus.
*
* <p>The following schema fragment specifies the expected content contained within this class.
* <p>
* <pre>
* <simpleType name="EvaluationStatus">
* <restriction base="{http://www.w3.org/2001/XMLSchema}string">
* <enumeration value="CANCELED"/>
* <enumeration value="COMPLETED"/>
* <enumeration value="FAILED"/>
* <enumeration value="IN_PROGRESS"/>
* <enumeration value="INACTIVE"/>
* <enumeration value="SKIPPED"/>
* <enumeration value="INACTIVE_BUT_TRIGGERED"/>
* <enumeration value="UNKNOWN"/>
* </restriction>
* </simpleType>
* </pre>
*
*/
@XmlType(name = "EvaluationStatus")
@XmlEnum
public enum EvaluationStatus {
/**
*
* When a {@link Proposal} is retracted the associated workflow is canceled.
* Including the steps, rules and actions.
*
*
*/
CANCELED,
/**
*
* The entity is in a completed state. If the entity is a workflow, it means that all steps have
* been completed. If the entity is a step, it means all actions in the step have been completed.
* If the entity is a workflow action, it means it has been done.
*
*
*/
COMPLETED,
/**
*
* The entity is in a failed state. If the entity is a workflow, it means that some step has
* failed. If the entity is a step, it means some actions in the step have failed. If the
* entity is a workflow action, it means it has failed.
*
*
*/
FAILED,
/**
*
* The entity is in progress. If the entity is a workflow, it means that some steps have yet to be
* started. If the entity is a step, it means some actions in the step are still in a pending
* state. If the entity is a workflow action, it means the action is ongoing.
*
*
*/
IN_PROGRESS,
/**
*
* The entity has not been started. If the entity is a step, it has not been started by the
* workflow execution process If the entity is a workflow action, it means that the step has not
* been triggered.
*
*
*/
INACTIVE,
/**
*
* The action is skipped because the {@link Proposal} and/or {@link ProposalLineItem proposal line
* items} do not trigger the conditions for the step. This value is only for actions.
*
*
*/
SKIPPED,
/**
*
* The action is triggered because the {@link Proposal} and/or {@link ProposalLineItem proposal
* line items} trigger the conditions for the step, but the step itself has not started yet.
*
*
*/
INACTIVE_BUT_TRIGGERED,
/**
*
* The value returned if the actual value is not exposed by the requested API version.
*
*
*/
UNKNOWN;
public String value() {
return name();
}
public static EvaluationStatus fromValue(String v) {
return valueOf(v);
}
}
|
import numpy as np
from astrometry.util.fits import fits_table

def unwise_tiles_touching_wcs(wcs, polygons=True):
    '''
    Returns the subset of unWISE tiles whose footprints may overlap the
    given *wcs*; if *polygons* is True, the radius-based cut is refined
    with an exact polygon-intersection test in pixel space.
    '''
from astrometry.util.miscutils import polygons_intersect
from astrometry.util.starutil_numpy import degrees_between
from pkg_resources import resource_filename
atlasfn = resource_filename('legacypipe', 'data/wise-tiles.fits')
T = fits_table(atlasfn)
trad = wcs.radius()
wrad = np.sqrt(2.) / 2. * 2048 * 2.75 / 3600.
rad = trad + wrad
r, d = wcs.radec_center()
I, = np.nonzero(np.abs(T.dec - d) < rad)
I = I[degrees_between(T.ra[I], T.dec[I], r, d) < rad]
if not polygons:
return T[I]
tw, th = wcs.imagew, wcs.imageh
targetpoly = [(0.5, 0.5), (tw + 0.5, 0.5),
(tw + 0.5, th + 0.5), (0.5, th + 0.5)]
    # Normalize the winding order of the target polygon: the sign of the
    # CD-matrix determinant tells whether the pixel axes are flipped on the sky.
    cd = wcs.get_cd()
    tdet = cd[0] * cd[3] - cd[1] * cd[2]
    if tdet > 0:
        targetpoly = list(reversed(targetpoly))
    targetpoly = np.array(targetpoly)
keep = []
for i in I:
wwcs = unwise_tile_wcs(T.ra[i], T.dec[i])
cd = wwcs.get_cd()
wdet = cd[0] * cd[3] - cd[1] * cd[2]
H, W = wwcs.shape
poly = []
for x, y in [(0.5, 0.5), (W + 0.5, 0.5), (W + 0.5, H + 0.5), (0.5, H + 0.5)]:
rr,dd = wwcs.pixelxy2radec(x, y)
_,xx,yy = wcs.radec2pixelxy(rr, dd)
poly.append((xx, yy))
if wdet > 0:
poly = list(reversed(poly))
poly = np.array(poly)
if polygons_intersect(targetpoly, poly):
keep.append(i)
I = np.array(keep)
return T[I] |
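The determinant checks above reverse a polygon's vertex order so that both outlines share a consistent winding before `polygons_intersect` is called. As an illustration (not part of legacypipe), the same orientation test can be done directly from the vertices with the shoelace formula:

```python
import numpy as np

def signed_area(poly):
    # Shoelace formula: positive for counter-clockwise vertex order
    # (in a y-up coordinate system), negative for clockwise.
    x, y = poly[:, 0], poly[:, 1]
    return 0.5 * np.sum(x * np.roll(y, -1) - np.roll(x, -1) * y)

def make_ccw(poly):
    # Reverse the vertex order if the polygon is wound clockwise,
    # mirroring the list(reversed(...)) step above.
    return poly[::-1] if signed_area(poly) < 0 else poly

square_ccw = np.array([(0., 0.), (1., 0.), (1., 1.), (0., 1.)])
square_cw = square_ccw[::-1]
print(signed_area(square_ccw))           # 1.0
print(signed_area(make_ccw(square_cw)))  # 1.0 after reorientation
```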
Continuous In-Situ Removal of Butanol from Clostridium acetobutylicum Fermentations via Expanded-bed Adsorption Growing fuel consumption continues to cause crude oil reserves to diminish. Therefore, there exists a need to replace petroleum as the primary fuel derivative. Butanol is a four-carbon alcohol that can effectively replace gasoline without changing the current automotive infrastructure. Additionally, butanol offers the same environmentally friendly benefits as ethanol but possesses a 23% higher energy density. Clostridium acetobutylicum is an anaerobic bacterium that can ferment renewable biomass-derived sugars into butanol. However, this fermentation becomes limited at relatively low butanol concentrations (1.3% w/v), making the process uneconomical. To produce butanol economically, the in-situ product removal (ISPR) strategy is applied to the butanol fermentation. ISPR entails the removal of butanol as it is produced, effectively avoiding the toxicity limit and allowing for increased overall butanol production. This thesis explores the application of ISPR through integration of expanded-bed adsorption (EBA) with C. acetobutylicum butanol fermentations. The goal is to enhance volumetric productivity and to develop a semi-continuous biofuel production process. The hydrophobic polymer resin adsorbent Dowex Optipore L-493 was characterized in cell-free studies to determine the impact of adsorbent mass and circulation rate on butanol loading capacity and removal rate. Additionally, the EBA column was optimized to use a superficial velocity of 9.5 cm/min and a resin fraction of 50 g/L. When EBA was applied to a fed-batch butanol fermentation performed under optimal operating conditions, a total of 25.5 g butanol was produced in 120 h, corresponding to an average yield on glucose of 18.6%. At this level, integration of EBA for in situ butanol recovery enabled the production of 33% more butanol than the control fermentation.
These results are very promising for the production of butanol as a biofuel. Future work will entail the optimization of the fed-batch process for higher glucose utilization and the development of a reliable system for recovering butanol from the resin.
Chemical Engineering Masters Defense
<gh_stars>0
export class SolutionRange {
constructor(
public from: number,
public to: number
) {}
}
|
Synchronous Computer-Mediated Interactions in English: A Case of Indonesian Learners' Communication with English Non-native Speakers This paper aims to investigate to what extent Synchronous Computer-Mediated Communication (SCMC) with non-native speakers of English affects Engineering students' speaking skills, and how the use of SCMC in speaking practice is perceived by the students. Grounded in an ex post facto design, the data were gained from teacher journals and interviews with fifteen vocational secondary students majoring in Engineering who engaged in SCMC for one semester. Findings showed that SCMC did not have a positive effect on improving Engineering-related vocabulary and accurate grammar use, in contrast to the students' pronunciation. The use of SCMC was perceived negatively in terms of interaction with interlocutors and technical issues. However, a few students perceived it positively, finding learning within SCMC enjoyable and feeling motivated to improve their speaking skills through interaction with the non-native speakers of English they met online. INTRODUCTION With the fourth industrial revolution underway, many higher education programs strive to produce graduates who can communicate well in English as an international language, broadening their job opportunities at both the national and international levels. As Kirubahar, Santhi and Subashini stated, the relationship between a person's ability to get and maintain a job and the ability to speak English is very significant. This requirement also applies, without exception, to students majoring in Engineering. Shrestha, Pahari, and Awasthi reported that English is the most essential language in the career of engineering students all over the world.
However, there are still many college students with low English-speaking performance: they are afraid of producing incorrect pronunciation and have difficulty discussing issues related to their major in English due to insufficient knowledge of English words, expressions, and grammar. They also find it quite hard to recognize their true skill level. In addition to those factors, the teacher factor is also a problem: sometimes English lecturers have only general English skills and lack the specific skills needed to teach English in certain majors. Several research studies found that students' failure to learn English in particular disciplines was caused by a lack of appropriately trained English language lecturers in their major (Hoa, 2016; Luo & Garner, 2017; Patra & Mohanty, 2016). To overcome this unfamiliarity with the specialized English required in specific fields, some English lecturers have directed EFL learners to have online English conversations with English-speaking partners who are also interested in improving their speaking performance on similar topics. Several studies have implicitly indicated that the features of Synchronous Computer-Mediated Communication media allow EFL students to access global communication and support improvements in English-speaking production, accuracy, and fluency on specific topics, for example by implementing synchronous online English communication in which learners interact with foreign interlocutors in their fields of interest, such as the importance of English and the Internet, and by having interlocutors give each other feedback (Gurzynski-Weiss & Baralt, 2014).
There are also studies that have examined the implementation of Synchronous Computer-Mediated Communication with native speakers and found several negative and positive perceptions of oral performance: when talking with native speakers via Second Life, one SCMC platform, the EFL learners felt motivated to keep talking and experienced less anxiety about speaking English, and it also made the learners enjoy talking with native speakers of English. SCMC with native speakers of English has also been shown to be effective for improving oral skills (Abrams, 2003; Brown, 2016; Kung & Eslami, 2018; Spring, Kato & Mori, 2019). However, the interlocutors of the EFL learners in those studies were native speakers of English, while interactions between EFL learners and foreigners who are non-native speakers of English, invited from online speaking platforms, still need to be studied for their effects on improving English-speaking skills on Engineering topics and for the learners' perceptions of them. Many affordable online speaking platforms, with their filtering features, allow English language learners, especially EFL learners in Indonesia, to find English-speaking partners who usually come from other countries with small time differences and who are predominantly non-native speakers of English with specific topic preferences, in this case Engineering-related topics. However, research on this concern is very limited and not sufficient to ensure that Engineering-related English conversation between EFL learners and non-native speakers of English found through such online platforms can help the students improve Engineering-related vocabulary and aspects of speaking accuracy, such as grammar and pronunciation, that are considered important to master.
Given the aforementioned importance of English-speaking skills for college students, in this case students majoring in Engineering, and the importance of studying the effectiveness of Synchronous Computer-Mediated Communication between EFL learners majoring in Industrial Engineering and foreigners who are non-native speakers of English with a shared interest in talking about Engineering, this study focuses on one particular practice: Engineering-related English conversation between Engineering students and non-native speakers of English whom they find through an online platform and who list Engineering among their topic preferences. Based on these issues, this study aims to investigate to what extent Synchronous Computer-Mediated Communication with non-native speakers of English affects Engineering students' speaking skills, and how the use of SCMC in speaking practice is perceived by the students. Synchronous Computer-Mediated Communication in Classroom Speaking Practice Many English Language Teaching (ELT) teachers have acknowledged the benefits of technologies in learning and teaching English. They have increasingly been involved in developing collaborative language learning activities using Computer-Mediated Communication (CMC) as a medium for practicing a foreign language (Coverdale-Jones, 2017; Lin, 2014; O'Rourke & Stickler, 2017; Trejos, Pascuas, & Cuellar, 2018). English-speaking practice has frequently been implemented in a synchronous mode. Helm reported that synchronous computer-mediated communication is the most widely used mode in European institutions of higher education. Synchronous Computer-Mediated Communication (SCMC) is also considered useful in the higher education institution where participants in this study try to improve their course-related speaking skills, i.e., Engineering. The implementations of SCMC have been examined from a myriad of different perspectives.
O'Dowd explores the outputs of virtual exchanges in many contexts with varied educational aims and recommends using "virtual exchange" to refer to any program providing online communication among language learners in different parts of the world. O'Dowd's terminology is in line with the particular practice examined here, implemented between EFL learners majoring in Industrial Engineering and foreigners who are non-native speakers of English with similar interests in talking about Engineering, which is the focus of this study. English Oral Interaction through SCMC Many platforms now make it possible to find conversation partners globally, involving users in different geographic locations who interact to engage in learning dialogues (O'Dowd, 2016). This opportunity is often used by English teachers to improve their students' English-speaking skills through oral interactions with foreign interlocutors (Osipov, Volinsky, Nikulchev, & Prasikova, 2016). Studies have also examined the implementation of SCMC with native speakers: when talking with native speakers via SCMC, EFL learners felt motivated to keep talking and experienced less anxiety about speaking English (Iino & Yabuta, 2015; Kruk, 2016; Melchor-Couto, 2017), and it also made the learners enjoy talking with native speakers. SCMC with native speakers has also proved effective for improving oral skills (Abrams, 2003; Brown, 2016; Kung & Eslami, 2018). Based on those studies, and given that it is implemented here with non-native speakers of
Effects of Interacting with Non-Native Speakers of English through SCMC on Vocabularies Use There have been many ways to strive improving EFL learners' English-vocabularies production while speak English, one of which is by involving them practicing English conversation with foreign interlocutors through SCMC. Eguchi found that talking with foreigners through SCMC made EFL learners in Japan produce more utterances and felt more comfortable and being curious talking about culture. Kohn and Hoffstaedter also focused on vocabularies production, using BigBlueButton platform. The result show that it made the students being active and easy to develop topic, increase their English vocabularies production with non-native speakers of English. Abe and Mashiko also found that SCMC application affect English language production produced through audio SCMC. The other studies found similar findings that SCMC improve students' vocabularies use (Abrams, 2003;AbuSeileek & Rabab'ah, 2013). During real-time synchronous communication through SCMC media, there is also less time to think about message content so that the vocabularies production increases (Smith, Alvarez-Torres, & Zhao, 2003). It also helps students to produce many sentences, as Eslami and Kung stated that SCMC interaction between interlocutors also improves English speaking production because it sets less structured and more dynamic discussion. However, Yanguas found different result on vocabularies production between learners using oral SCMC and learners with face-to-face interaction. Nguyen and White compared two modes of exchanges, SCMC versus face-to-face (FTF), it revealed that students with SCMC, collaborating an academic task produced fewer words than students in FTF mode, Loewen and Wolff also found that SCMC does not support any better speaking practice than F2F class. 
Regarding whom EFL learners speak with, anxiety is one cause of reduced English vocabulary production when EFL learners practice conversations with native speakers using SCMC media. Russell also claimed that language learners tend to be nervous when facing and talking with foreigners, so their vocabulary production is less than optimal. AbuSeileek and Qatawneh found that SCMC led language learners to give only short, clear, and unambiguous answers and to ask their interlocutors restricted, closed questions, so their vocabulary did not increase. Across these studies, there is a tendency for EFL learners to feel more comfortable and confident having English conversations with non-native speakers who share similar difficulties in speaking English, which allows them to understand each other's meanings and to get used to speaking English in simple words or sentence forms; as Paetzold stated, non-native speakers tend to use simple sentences and vocabulary when they speak English, and there are also those who prefer to talk to native speakers. Effects of Interacting with Foreigners through SCMC on Grammar and Pronunciation Accuracy Grammar and pronunciation are important skills that need to be mastered in speaking English. So far, improvement in grammar has been difficult to achieve through this technique: Alshahrani pointed out that, in interaction with foreigners, SCMC could not fully improve students' English-speaking skill in terms of grammar, and that grammar seems better improved in face-to-face learning with a teacher in the classroom. Mustafa, in contrast, emphasized that social media networking had a great impact on all speaking components, namely vocabulary, grammar, and pronunciation, for 22 beginner Arab EFL learners who practiced their spoken English with foreign interlocutors using SCMC media.
Jung, Kim, Lee, Cathey, Carver, and Skalicky also found that SCMC was beneficial for improving grammar. SCMC also contributes to improving pronunciation. Hung and Higgins found that SCMC seems particularly effective for pronunciation improvement for Chinese-speaking learners of English and foreigners. Bueno-Alastuey also explored the effects of SCMC on pronunciation across three kinds of oral exchange: NNS-NNS with the same L1, NNS-NNS with different L1s, and NNS-NS. The results showed that NNS-NNS SCMC with different L1s was the most beneficial for pronunciation development. In her further research, Bueno-Alastuey also found that SCMC produced more negotiations and a high quantity of interactional feedback on phonetic triggers in NNS-NNS interaction with different L1s. These studies suggest that speaking accuracy, especially grammar and pronunciation, can be improved when EFL learners interact in English with foreigners, and that EFL learners may do better interacting with non-native English speakers because non-native speakers pay more attention to grammar while using SCMC. In contrast, Kim found that grammar, or the process of constructing good sentences, is more prevalent in F2F classes than in SCMC; foreign interlocutors did not correct grammar or remind each other of grammar mistakes, as Guest stated that grammar does not seem to be taken seriously in online exchange (Advances in Social Science, Education and Humanities Research, volume 546). In conclusion, using SCMC might be more beneficial for improving pronunciation than grammar use: Lin revealed that SCMC has a good effect on pronunciation but may have a negative effect on grammar accuracy, while Ziegler indicated that SCMC has only a small benefit for productive skills. However, Loewen and Isbell found that SCMC does not support pronunciation improvement any better than speaking practice in F2F classes.
Research Design

This study employed an ex post facto research design, since the use of SCMC in speaking practice had already occurred and left data such as a journal written when I instructed and implemented SCMC; as Kerlinger defines it, ex post facto design is used when the independent variable or variables have already occurred. A second reason is that, because of this, I could not manipulate the variables further (Cohen, Manion, and Morrison, 2018). Given the aims of this study, the ex post facto design guided how I analysed the effects of SCMC use on the students' English-speaking skills through the existing data, i.e., the journal; as Kerlinger states, the independent variable is then studied in retrospect for its possible effects on the dependent variable.

Research Setting and Participants

The study was conducted on a particular online distance-learning practice, SCMC, in the English II subject implemented by a tertiary institution in Bandung, Indonesia. To capture the important facets and perspectives related to the phenomenon being studied, the selection of participants was purposeful. Three (3) higher achievers (HA) and twelve (12) lower achievers (LA) out of fifteen students majoring in Engineering, who engaged in Synchronous Computer-Mediated Communication for one semester, were taken as participants.

Data Collection and Analysis

The data collection techniques used in this study were documents, i.e., a teacher journal describing how I instructed and implemented SCMC in the speaking class with reference to the Rencana Pembelajaran Semester, and a semi-structured group interview.
After the data were collected, to find the effects of the SCMC implementation on students' speaking skills and the students' reactions and perceptions towards the implementation as implied in the journal and the interview transcripts, analysis began with coding, grouping the codes from the journal, and displaying them in tables and charts to be analyzed and interpreted in accordance with the research questions. All the analyzed data were interpreted into descriptions, matched, compared, and linked with other research. After the interpretations were corroborated, conclusions were drawn to answer the research questions and considered in the light of other research findings.

Effect of SCMC on Vocabulary Use

From the analysis of the teacher journals and the interview transcripts, this study found that the SCMC implemented by the Engineering students did not indicate positive effects on improving their use of Engineering-related vocabulary. The teacher journal stated that across all nine of the students' recorded SCMC sessions with foreigners, they produced only about one to six Engineering-related vocabulary items. Furthermore, in the interviews they identified factors that prevented them from producing the vocabulary they should have focused on, i.e., Engineering-related vocabulary. Across all the SCMC sessions, production of Engineering-related vocabulary was very limited, and the students also seemed not to enjoy conversation with their foreign interlocutors. This was supported by the students' perceptions, stated in the interview session, that even using general English vocabulary, let alone Engineering-related vocabulary, was very difficult due to anxiety. Six LA students felt that they could produce neither general vocabulary nor Engineering-related vocabulary.
Effect of SCMC on Grammar Accuracy

Data taken from the teacher journals and interviews show that SCMC also did not indicate positive effects on proper grammar application. The teacher journals show that SCMC did not lead the students to use the proper grammar taught through the English 2 materials, with reference to the Rencana Pembelajaran Semester, that should have been applied in conversation with their interlocutors. The students did not maximally apply the materials while conducting SCMC, especially complex and compound sentences and equal, comparative, and superlative degrees. One student asked in his SCMC recording, "what is industrial the best in your country?"; he was trying to apply the superlative degree but could not arrange the words properly. The students also made many grammar mistakes because they arranged English words into sentences the way they arrange words in Bahasa Indonesia, and their vocabulary choice still mirrored Indonesian usage, e.g., "I am new resign" instead of "I quit my job", the basic "old me?" instead of "how old am I?", and, most frequently, errors in the use of verbal and nominal sentences, such as "I am very like duren", "I am not now about...", "You can speak Indonesia?", and "What are you busy now?". Most of the students, both high and low achievers, felt that their interlocutors (non-native English speakers) could not support them in improving their grammar: the interlocutors did not really attend to grammar use and rarely gave feedback on the students' grammar mistakes, so the students found it difficult to improve their grammar.

Effect of SCMC on Pronunciation

Pronunciation is the speaking skill that the Engineering students could improve through this SCMC technique.
The teacher journals stated that all SCMC recordings showed the same result for the students' pronunciation: it was quite understandable, even with Indonesian accents, although some students still pronounced English words the way they pronounce words in Bahasa Indonesia, as in involves, industrial, discuss, assignment, other, university, busy, quality assurance, conversation, agriculture, students, and enough. However, they improved, changing to better pronunciations because they received good corrective feedback from their interlocutors. This was also confirmed by the students in the interview session: they felt pronunciation was the skill they could improve thanks to their foreign interlocutors' feedback recasting their mispronunciations.

Effect of SCMC on Learners' Overall Speaking Performance

Whom the students talked to also affected their English-speaking skills, especially Engineering-related vocabulary use and grammar. The teacher journal stated that the students could hardly find foreign interlocutors interested in talking about Engineering. Three HAs and six LAs said that they always had to steer the conversation towards Engineering topics because their interlocutors had no Engineering background; it was difficult to find interlocutors interested in Engineering. Based on the interviews, the students talked to two kinds of foreign interlocutors: those with English skills the same as or lower than theirs, and those with higher English skills. The first kind limited the students' exploration of Engineering-related vocabulary, because the students always needed to repeat and explain what they had asked.
Meanwhile, talking to interlocutors with higher English skills instead made the students fearful and unable to catch the meaning of what their interlocutors said, especially as the interlocutors spoke with unfamiliar accents. This also kept the students from exploring and applying the materials: they were comfortable using simple sentences and improper grammar, because their interlocutors also did not pay attention to grammar mistakes.

SCMC Use as Perceived by Learners

Anxiety is a condition characterized by tension, nervousness, fear, or worry about doing something, and the students actually felt it while conducting SCMC. I considered anxiety one of the reasons they could not improve their speaking skills, as I wrote in the journals and as the students admitted in the interviews. From the descriptive results for the three skills, the students produced only one to six Engineering-related vocabulary items, dominated by very basic and common words that were repeated in subsequent SCMC sessions, although a few students also produced different words. SCMC seemed to affect their Engineering-related vocabulary use because of anxiety. Six LAs and one HA felt that anxiety made them confused, left them with nothing to say, and led them to answer their foreign interlocutors' questions hastily and to ask questions in English relying only on what came to mind. Six LA students felt that, because of this, they could produce neither many general vocabulary items nor Engineering-related vocabulary, while two LAs said it caused grammar errors and mispronunciation. However, two LAs felt motivated to improve their pronunciation and increase their vocabulary use. In another view, one HA felt that anxiety did not affect the production of Engineering-related vocabulary as long as he could still steer the conversation towards Engineering topics.
In conclusion, the students regarded anxiety as one of the factors affecting their speaking skills: first, it made them confused, left them with nothing to say, and led them to answer their foreign interlocutors' questions hastily and to ask questions relying only on what came to mind; second, their use of Engineering vocabulary was not optimal, followed by grammar errors and mispronunciation.

Challenges of SCMC in Classroom Practice

In the teacher journals, I found that some students reported that bad internet connections prevented them from asking or answering their foreign interlocutors, with constant repetition between them. In the interview transcripts, all HA and LA participants agreed that the unstable internet was one factor preventing them from practicing their speaking skills maximally: they always needed to repeat what they had said, which affected their English-speaking production. Three HAs and seven LAs said the unstable connection caused miscommunication: they could not hear messages clearly, there were misunderstandings, messages were difficult to catch, and the farther away the country of the contact, the worse the quality of the messages heard. Three HAs and three LAs said they always had to ask their foreign interlocutors to repeat what had been said, which caused the conversation to stall. As a result, this led to reduced vocabulary production and grammatical misunderstandings.
Beyond that, five LAs felt that they could not speak English fluently, and five other LAs said they found it difficult to pronounce English words properly; two LAs explained that this was because they had not learned much about English pronunciation before entering university and did not have a good history of learning pronunciation at the elementary, junior, and senior high school levels, so they tended to feel unfamiliar with using English and to have difficulty speaking it. Two LAs tended to have low motivation to improve their English and to be lazy about learning it. One LA felt she was less proficient at using Engineering-related vocabulary, and another tended to lack confidence in his grammar.

Discussion

This study found that SCMC did not indicate positive effects on increasing Engineering-related vocabulary use; the students seemed anxious about talking to their interlocutors, in this case non-native speakers of English. The more afraid they were to speak with foreigners, the weaker the vocabulary they produced; SCMC did not make them feel comfortable producing English vocabulary. This result differs from Kohn and Hoffstaedter and from Abe and Mashiko, who found that SCMC with non-native speakers of English made students active interlocutors, helped them develop topics easily, and increased their English vocabulary production, with many English sentences produced thanks to a similar way of stringing words into sentences. Other factors also arose on the interlocutors' side: a few non-native speakers of English had good English-speaking skills, and that was a problem for the students; the higher the interlocutor's speaking skill, the more confused the students felt talking to them.
Foreigners interested in Engineering were also hard to find on the SCMC platform; most wanted to talk about general topics, so the students found it difficult to improve their Engineering-related vocabulary and were exposed to situations where such vocabulary was hard to use. This study also found that SCMC did not indicate positive effects on proper grammar application. The students did not maximally apply the materials and made many grammar mistakes, because they arranged English words into sentences the way they arrange words in Bahasa Indonesia, and their vocabulary choice still mirrored Indonesian usage, e.g., "I am new resign" instead of "I quit my job", the basic "old me?" instead of "how old am I?", and, most frequently, errors in verbal and nominal sentences such as "I am very like duren", "I am not now about...", "You can speak Indonesia?", and "What are you busy now?". Vinagre and Muñoz also found grammar mistakes during telecollaboration exchanges, mostly in subject-verb agreement. These mistakes did not receive much attention from the students or their interlocutors: the foreign interlocutors did not correct grammar or remind the students of their mistakes, as Guest stated that grammar does not seem to be taken seriously in online exchange. There were no grammar corrections from either side, unlike what Monteiro and Yanguas stated, namely that in video/audio-conferencing speaking skills can improve because of corrective feedback exchanged between interlocutors when mistakes are made.
In this study, most of the students, both high and low achievers, felt that their interlocutors (non-native speakers of English) could not support them in improving their grammar: the interlocutors did not really attend to grammar use and rarely gave feedback on the students' grammar mistakes, so the students found it difficult to improve their grammar, as Alshahrani pointed out that SCMC interaction with foreigners could not fully improve students' English-speaking skill, especially grammar. This was because the students were pushed to use simple sentences to make their interlocutors understand what they said; as Paetzold states, non-native speakers tend to use simple sentences and vocabulary when they speak English. Many interlocutors also had fairly weak English-speaking skills, which forced the students to use simple sentences so that their foreign interlocutors (non-native speakers of English) could understand them. However, some students felt that it was indeed their own lack of oral skills that made proper grammar difficult. This study also found that SCMC had a fairly good effect on pronunciation. Other research agrees: Mustafa emphasized that social media networking had a great impact on all speaking components, including pronunciation, for 22 beginner EFL learners from the Arab world who practiced their spoken English with foreign interlocutors using SCMC media. Hung and Higgins found that SCMC seems particularly effective for pronunciation improvement for Chinese-speaking learners of English and foreigners. Bueno-Alastuey also explored the effects of SCMC on pronunciation across three kinds of oral exchange: NNS-NNS with the same L1, NNS-NNS with different L1s, and NNS-NS.
The results showed that NNS-NNS SCMC with different L1s was the most beneficial for pronunciation development. In her further research, Bueno-Alastuey also found that SCMC produced more negotiations and a high quantity of interactional feedback on phonetic triggers in NNS-NNS interaction with different L1s. The presence of NNSs of English brought its own benefits to the Engineering students' pronunciation, as Bueno-Alastuey's 2010 and 2013 studies found: there is more interest in correcting each other's pronunciation, or something like self-correction. This study likewise found that SCMC between NNSs with different L1s could at least help the students improve their pronunciation; it is the skill most likely to improve through SCMC with non-native speakers of English, who gave corrective feedback when the students mispronounced English words. The students mostly pronounced English words the way they pronounce Bahasa Indonesia words, but they remained intelligible and improved because some of their foreign interlocutors reminded them to change their pronunciation, or simply repeated the word with the proper pronunciation. Feeling anxious while using SCMC kept the students from improving their speaking skills. This study found that anxiety was one cause of the students' lack of confidence to talk in English, as Guest found that students felt anxious and lacked confidence to talk. Macayan, Quinto, Otsuka, and Cueto also found the same: anxiety leads to poor English-speaking performance during SCMC.
In contrast, other students' perceptions in the interviews showed that they in fact felt motivated to improve their oral skills and seemed to enjoy the conversation, in line with research suggesting the use of SCMC for improving speaking skills because it indicated a positive effect on learners' confidence and decreased nervousness; York, Shibata, Tokutake, and Nakayama also found that SCMC created a fun atmosphere for language learning. Those studies present SCMC as a good technique for decreasing speaking anxiety; this study, however, found the opposite. The next issues are technical problems and other factors before and during SCMC that made it difficult for the students to improve their speaking skills. The unstable internet was one factor preventing them from practicing maximally; as Nascimento and Melnyk state, SCMC depends on internet connection quality, and Blake also suggested that we should at least be aware of miscommunication caused by technical problems. Students find it difficult to talk when their internet connection is poor or unstable, because the video or audio quality degrades, so they must always repeat what they have said, which affects their English-speaking production. Ino states that the better students' English proficiency, the better they also manage their strategies in speaking English; in that study, 21 Japanese students of different proficiency levels, majoring in Economics, were given the opportunity to conduct five SCMC sessions via Skype with foreigners over one semester. In contrast, however, Nilayon and Brahmakasikara conducted cross-cultural SCMC English conversations between EFL Thai students and two speaking-practice partners and found that the English-speaking skills of higher-level participants were not much better than those of lower-level participants, who tried very hard to speak English.
The results of that study therefore seemed to show that lower-level learners tend to improve more, so this practice might be a suitable English-speaking practice for lower-level learners, as it seemed to work best with learners at the elementary level. This study finds a result similar to Ino's, stated previously. The students also seemed unable to manage their interaction with their foreign interlocutors because of their basic speaking skills; they admitted that they lacked speaking proficiency and did not have a particularly good English-learning history.

CONCLUSION

In conclusion, SCMC with non-native English speakers did not indicate positive effects on improving Engineering-related vocabulary and accurate grammar use, in contrast with the students' pronunciation. Pronunciation was the one speaking skill improved by the implementation of SCMC. It was also found that the implementation of SCMC for improving oral skills drew both negative and positive perceptions from the Engineering students in this study. The negative perceptions concern the ineffectiveness of the online platform for finding foreign interlocutors interested in talking about Engineering, so the students could not use or improve Engineering-related vocabulary because they were rarely exposed to Engineering talk; the technical problems that interrupted their interaction; and other problems related to their basic speaking skills, which they perceived as not good enough to actively carry a conversation with foreigners through SCMC. The positive perceptions come from a few students' statements that they had fun doing SCMC and were motivated to improve their speaking skills with the foreigners they met online, even though they did not often talk about what they should have mostly talked about, i.e., Engineering.
Oral contraceptive mortality. Results of several recent studies estimating the mortality risk attributable to oral contraceptives (OCs) are reviewed. A 1968 study estimated the pulmonary, cerebral, or coronary thromboembolic mortality risk from OC usage as 2.2 and 4.5/100,000 woman-years, respectively, for those aged 20-34 and 35-44. A retrospective case-control study of deaths of women under 50 placed the oral-contraceptive myocardial infarction mortality risk at 1.1, 8.1, and 20/100,000 woman-years for users aged 20-34, 35-44, and 40-44, respectively. Another study found the standardized excess circulatory mortality rate to be 20/100,000 woman-years for ever-users, 21 for current users, and 18 for ex-users. Among continuous users, the age-standardized rate increased from 12 to 45/100,000 woman-years for use lasting 5 or more years. Smoking was found to increase circulatory mortality considerably. Others have concluded that the excess total mortality rate in ever-users, mainly due to circulatory causes, is larger than previous estimates based only on thromboembolism and myocardial infarction. The British Committee on Safety of Medicines has stated that while recent studies do not indicate the necessity of changing OC warnings, women in the older age groups, especially those who smoke, should be informed of the increased risk.
Extra security measures are being introduced in Iraq ahead of Sunday's election - the first since the US-led invasion nearly two years ago.
An extended curfew has begun, to be followed by border closures and movement restrictions on Saturday.
Insurgents, who have urged a boycott, killed four civilians in a car bomb in Baghdad on Friday as well as attacking polling stations across the country.
Expatriate Iraqis were allowed to start early voting in 14 countries on Friday.
A small US Army helicopter has crashed in south-west Baghdad, two days after a helicopter crash in bad weather killed 31.
The tempo of violence appears to be increasing in the run-up to the poll, the BBC's Paul Wood in Baghdad says.
The violence has led to many candidates campaigning without revealing their names.
Our correspondent, who accompanied two candidates in Baghdad as the campaign drew to a close, says people are taking the threats seriously.
In many places, insurgents carried out mortar, rocket and bomb attacks against polling centres.
For this reason the location of most polling stations remains secret, even as the ballot boxes are distributed by the security forces, our correspondent adds.
Hundreds of police are being deployed to enforce the extra security lasting three days around Sunday's election.
These include closing Iraq's borders, the Baghdad international airport and the banning of civilian vehicles on election day.
A US general in Iraq has been quoted as saying that the multinational force will expand its combat power on the streets by one third to try to make the elections safe.
Sunni insurgents told Iraqis on Thursday to boycott the polls, a day after President Bush urged voters to "defy the terrorists".
The minority Sunni community dominated Iraqi politics during the regime of Saddam Hussein.
But the election is expected to lead to a power shift in favour of majority Shia Muslims.
Sunday's vote will be supervised by 828 international monitors, with a number of foreign embassies providing staff to act as monitors, too.
The expatriate vote is running from Friday to Sunday.
About 280,000 people in 14 countries - from Australia, the Middle East, Europe and North America - may take part.
Some decided not to participate amid fears of possible persecution against themselves and their relatives still living in Iraq, community leaders in Australia say.
Iraqi nationals are also voting in Jordan, the United Arab Emirates, Syria, Turkey, the United States, Britain, Canada, Denmark, France, Germany, the Netherlands and Sweden.
You can watch John Simpson's Panorama programme on the state of Iraq on BBC One on Sunday 30 January at 2215 GMT and on BBC World on Saturday 5 February at 0810, 1210 and 2210 GMT. |
def format_with_args(*args):
    """Build a docstring formatter for parameterized tests that interpolates
    the positional parameters at the given indices into the test's docstring."""
    def formatter(func, _, params):
        pa = params.args
        # Keep only the positional arguments whose indices were requested.
        format_args = [pa[i] for i in range(len(pa)) if i in args]
        return func.__doc__.format(*format_args)
    return formatter
// Reconstruct a hidden string of length n from the 2n-2 given prefixes and
// suffixes (two strings of each length 1..n-1), then print for each input
// string whether it was used as a prefix ('P') or a suffix ('S').
#include <iostream>
#include <string>
#include <vector>
using namespace std;

int main()
{
    int n;
    cin >> n;
    int l = 2 * n - 2;
    vector<string> s(l);
    vector<string> t; // the two strings of length n-1
    for (int i = 0; i < l; i++)
    {
        cin >> s[i];
        if ((int)s[i].size() == n - 1) t.push_back(s[i]);
    }
    // kit[len] holds the (up to two) input strings of each length.
    vector<vector<string>> kit(n, vector<string>(2, ""));
    for (int i = 0; i < l; i++)
    {
        int len = s[i].size();
        if (kit[len][0] == "") kit[len][0] = s[i];
        else kit[len][1] = s[i];
    }
    // Candidate answer: the first length-(n-1) string plus the last character
    // of the second; switch to the other combination if verification fails.
    string ans = t[0];
    ans += t[1][n - 2];
    for (int i = 1; i < n; i++)
    {
        bool ok = false;
        if (kit[i][0] == ans.substr(0, i) && kit[i][1] == ans.substr(n - i, i)) ok = true;
        if (kit[i][1] == ans.substr(0, i) && kit[i][0] == ans.substr(n - i, i)) ok = true;
        if (!ok)
        {
            ans = t[1] + t[0][n - 2];
            break;
        }
    }
    // Classify each input string against the chosen answer; once one string of
    // a given length is labelled, its twin gets the opposite label.
    string answ = "";
    vector<char> mask(n, '0');
    for (int i = 0; i < l; i++)
    {
        int len = s[i].size();
        if (mask[len] == 'S') answ += 'P';
        else if (mask[len] == 'P') answ += 'S';
        else if (s[i] == ans.substr(0, len))
        {
            mask[len] = 'P';
            answ += 'P';
        }
        else if (s[i] == ans.substr(n - len, len))
        {
            mask[len] = 'S';
            answ += 'S';
        }
    }
    cout << answ;
}
|
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('dnsalloc', '0005_auto_20161228_1014'),
]
operations = [
migrations.RenameField(
model_name='service',
old_name='plain_password',
new_name='password',
),
migrations.RenameField(
model_name='service',
old_name='plain_username',
new_name='username',
),
]
|
import * as path from "path";
import * as cdk from "@aws-cdk/core";
import { Bucket, BucketEncryption } from "@aws-cdk/aws-s3";
import { Runtime } from "@aws-cdk/aws-lambda";
import { NodejsFunction } from "@aws-cdk/aws-lambda-nodejs";
import { Rule, Schedule } from "@aws-cdk/aws-events";
import { LambdaFunction } from "@aws-cdk/aws-events-targets";
import { StringParameter } from "@aws-cdk/aws-ssm";
export class AcAlertStack extends cdk.Stack {
constructor(scope: cdk.Construct, id: string, props: cdk.StackProps) {
super(scope, id, props);
const bucket = new Bucket(this, "Bucket", {
bucketName: "ac-alert-bucket",
removalPolicy: cdk.RemovalPolicy.DESTROY,
autoDeleteObjects: true,
encryption: BucketEncryption.S3_MANAGED,
versioned: true,
lifecycleRules: [{ expiration: cdk.Duration.days(30) }],
});
const userNameSsm = StringParameter.fromStringParameterName(
this,
"UserNameSsm",
"/ac-alert/username"
);
const webhookSsm = StringParameter.fromStringParameterAttributes(
this,
"WebHookSsm",
{
parameterName: "/ac-alert/slack-webhook-url",
version: 1,
}
);
const lambda = new NodejsFunction(this, "Handler", {
runtime: Runtime.NODEJS_14_X,
entry: path.join(__dirname, "../src/handler.ts"),
handler: "handler",
environment: {
BUCKET_NAME: bucket.bucketName,
API_URL: "https://kenkoooo.com/atcoder/atcoder-api/v3/user/submissions",
},
timeout: cdk.Duration.seconds(60),
});
bucket.grantReadWrite(lambda);
userNameSsm.grantRead(lambda);
webhookSsm.grantRead(lambda);
new Rule(this, "Rule1", {
ruleName: "ac-alert-rule1",
description: "rule on 10pm",
schedule: Schedule.cron({
minute: "0/30",
hour: "13",
}),
targets: [new LambdaFunction(lambda)],
});
new Rule(this, "Rule2", {
ruleName: "ac-alert-rule2",
description: "rule on 11pm",
schedule: Schedule.cron({
minute: "0/11",
hour: "14",
}),
targets: [new LambdaFunction(lambda)],
});
}
}
|
ORGANIZATION OF THE DISPATCHER'S AUTOMATED WORKPLACE The automation of production and management processes as a tool for improving labor is a perennially relevant problem. The growth of informatization increases the importance of computer technology in management processes. At the present stage of enterprise management automation, the most promising approach is to automate management functions directly at specialists' workplaces. Such systems, called automated workplaces, are widely used in organizational enterprise management. In particular, as a result of automating the transport dispatcher's workplace, an integrated system can collect, analyze, and calculate data and generate accounting documentation, providing better and more complete information for this area. The article describes the organization of a dispatcher's automated workplace: the principle of building an automated workplace is analyzed, and an algorithm for the model's operation is developed based on an analysis of the automated workplace's requirements and tasks.
package org.noear.solon.cloud.extend.aws.s3;
import org.noear.solon.cloud.CloudProps;
/**
* @author noear 2021/4/7 created
*/
public class S3Props {
public static final CloudProps instance = new CloudProps("aws.s3");
}
|
The Effect of Multiple-Choice Test Items Difficulty Degree on the Reliability Coefficient and the Standard Error of Measurement Depending on the Item Response Theory (IRT)

This study aims at identifying the effect of multiple-choice test items' difficulty degree on the reliability coefficient and the standard error of measurement depending on the item response theory (IRT). To achieve the objectives of the study, the (WinGen3) software was used to generate the IRT parameters (difficulty, discrimination, guessing) for four forms of the test. Each form consisted of items with different difficulty coefficient averages (-0.24, 0.24, 0.42, 0.93). The resulting item parameters were utilized to generate the abilities and responses of examinees based on the three-parameter model. These data were converted into a readable file using the (SPSS) and the (BILOG-MG3) software. Then the reliability coefficients for the four test forms, the item parameters, and the item information functions were calculated, and the information function values were used to calculate the standard error of measurement for each item. The results of the study showed that there are statistically significant differences at the level of significance (α ≤ 0.05) between the averages of the values of the standard error of measurement attributed to the difference in the difficulty degree of the items, in favor of the test with the higher difficulty coefficient. The results also found that there are apparent differences between the test reliability coefficients attributed to the difficulty degree of the test according to the three-parameter model, in favor of the form with the average difficulty degree.

Introduction

Achievement tests are considered to be one of the most important various assessment methods that are relied upon when making important decisions concerning the individual and society. The use of tests has spread widely in many areas.
They are designed for various purposes, among which: choosing a person for a job; classification purposes, such as determining the path of learners in proportion to their abilities and skills; and evaluating students' achievement through the grades they obtain in class tests. Thus, it is possible to improve and develop the educational and learning process and move it forward by developing these tests, whether verbal or performative, and improving their ability to measure learning outcomes. Moreover, tests are one of the most important educational methods through which students' performance is assessed, as they provide the final output of the educational process, so they must be prepared carefully, taking into account the factors of objectivity, validity and reliability, so that the tests yield the desired result. (Baghaei & Amrahi, 2011) believe that multiple-choice items are the best type of objective items, and the most common and widespread in achievement tests. They are easy to correct and provide good coverage of the subject matter. The student's score on them is characterized by a high degree of reliability. In addition, they determine the intended learning outcomes to a high degree, although their preparation requires a long time, great effort, and great skill on the part of their authors. Furthermore, multiple-choice items are able to measure learning outcomes at the higher mental levels of the cognitive domain to a degree that exceeds matching items, true-false items, fill-in-the-blank items, and short answers. There are many sources of measurement errors related to the test. Among these sources is the extreme difficulty of the test items compared to the level of the students. This extreme difficulty encourages students to guess randomly, so they do not obtain their true scores, which increases measurement errors and thus affects the value of the test reliability coefficient.
Classical models were used in past decades to design achievement tests. Nevertheless, their benefits were limited due to the method used in analyzing those tests, which was based on the foundations of the traditional theory and the psychometric and statistical concepts associated with it. If we look at the difficulty and discrimination coefficients, we find that they vary according to the average and the range of the ability of the sample members used in calculating these parameters. Thus, the benefit from these coefficients becomes limited to a population similar to the one from which the sample was chosen, as the scores of examinees in a test depend on the sample of the items that the test includes. Measurement scientists have tried to benefit from technological advancement in finding new psychometric methods and solving these problems through what is known as the latent trait theory or the item response theory. Test scores are often assumed to reflect the amount of knowledge the individual possesses, but in fact they do so only imperfectly. These scores include a certain amount of error, which can be an increase or a decrease in the score. The increase comes from obtaining some marks from other sources, such as the degree of test difficulty: when the difficulty of the test increases, this leads students to guess or cheat. On the other hand, the decrease comes from the loss of some knowledge due to forgetfulness. Hence, the apparent score does not reflect the actual amount of knowledge the individual possesses, because it includes a percentage of error which may affect the test reliability. This problem can be overcome by controlling the sources of error. Therefore, this study came to identify the effect of multiple-choice test items' difficulty degree on the reliability coefficient and the standard error of measurement depending on the item response theory IRT.
The Study Problem

Achievement tests, which are considered among the most important measurement tools for determining the performance of examinees, depend on the score obtained by the examinee in the test according to the classical test theory. The tool relied upon in measuring the examinee's performance must be valid and give results and indicators that can be relied upon when making decisions. It is known that the examinee's score on the tests is expected to be sufficient evidence of the extent to which the examinee possesses the skill or knowledge measured in the tests. This means that external variables, such as the difficulty degree of the test items, should not have an effect on performance. Most studies have indicated that when the test items are ordered from easy to difficult with a degree of medium difficulty, taking into account the individual differences of students, this provides reinforcement for the examinee and increases his motivation to answer the items of the test. Thus, the examinee will obtain a higher score when the test is of medium difficulty (Hambleton & Traub, 1974). This would affect the reliability of the test and the standard error of measurement. By reviewing the studies that dealt with this topic, we find that they approached it from the viewpoint of the classical test theory. Therefore, this study came to examine the effect of the test items' difficulty on the test reliability and the standard error of measurement depending on the item response theory IRT. Consequently, this study seeks to answer the following questions.

The Study Questions:
The first question: Are there statistically significant differences between the test reliability coefficients attributed to the difficulty degree of the items according to the three-parameter model in the item response theory?
The second question: Are there statistically significant differences between the values of the standard error of measurement attributed to the difficulty degree of the items according to the three-parameter model in the item response theory?

The Importance of the Study:
The importance of this current study lies in the following:
1. The scarcity of studies that dealt with the effect of the test item's difficulty on the test reliability, depending on the item response theory.
2. The scarcity of studies that dealt with the effect of the test item's difficulty on the standard error of measurement in estimating the item difficulty parameter depending on the item response theory.
3. This study seeks to determine the appropriate difficulty degree of the test items which achieve the best reliability for the test.
4. Providing test authors with the necessary information that helps them build tests with a high degree of reliability by determining the best difficulty degree of the test items.

The Study Objectives:
1. Identifying the potential differences between the test reliability coefficients due to the difficulty degree of the items according to the three-parameter model in the item response theory.
2. Identifying the potential differences between the values of the standard error of measurement due to the difficulty degree of the items according to the three-parameter model in the item response theory.

Terms Definition:
The Test: A measurement tool prepared according to an organized method of several steps that include a set of procedures that are subject to specific conditions and rules, with the aim of determining the degree of an individual's possession of a certain characteristic or ability through his response to a sample of stimuli that represent the characteristic or ability to be measured.
Item Difficulty Parameter (Threshold): The ability level that corresponds to the probability of 0.5 for answering the item correctly when the guessing coefficient is equal to zero (Hambleton & Swaminathan, 1985).

Standard Error of Measurement: A measure of dispersion associated with the estimated ability values (θ̂) for examinees about their true ability value (θ), which is inversely related to the square root of the test information function.

Reliability Coefficient: The ratio of the variance in the true score to the variance in the observed score. It is defined through the test information function, which indicates the accuracy with which a score reflects the examinee's ability.

Test Reliability

Reliability is statistically defined as the ratio of true variance to the total variance, that is, how much of the total variance in scores can be true, whether or not it is related to the measured characteristic. It has been stated that reliability implies objectivity: the results are not substantially affected if the examiner or the grader changes. Consistency may mean that the examinee's mark on a part of the test is related to his score on the test as a whole. It has also been noted that one of the meanings of reliability in measurement is consistency, so if we say that a test achieves the characteristic of reliability, this means that the test measures whatever it measures consistently. Reliability answers this question: Do we get the same score (or close to it) each time this test is administered to this individual? Hence it is possible for a scale to be consistent even though it does not measure the characteristic we wish to measure. Reliability means the consistency and harmony with which the test scores measure the characteristic or the thing that the test was prepared to measure.
As for the validity of the test, it is the extent to which the test measures the characteristic that it is intended to measure. Test reliability is one of the basic components of a good test, as it is assumed that a test gives almost the same results when it is reused at different times. For example, the meter is a reliable instrument because it gives the same results in measuring the length of things. The concept of reliability in the item response theory is related to the item information function, the test information function I(θ), and the standard error of estimating the abilities of the examinees (SEE). It has been shown that the best method for estimating the reliability coefficient is based on the test information function. The relationship between reliability and the test information function can be represented by the equation Rxx = 1 − 1/I(θ), where Rxx denotes the test reliability coefficient and I(θ) denotes the information function. This equation confirms that the relationship between the test information function and the test reliability is one of direct proportionality. To find the value of the empirical reliability coefficients, the statistical program (BILOG-MG3) was used; the value of the empirical reliability coefficient indicates the amount of information we obtain from the test.

Standard Error of Measurement

The reliability coefficient is an estimate of the correlation coefficient between the scores of a group of examinees in a particular test, and the scores of the same examinees in another test that is equivalent to the first test. The higher this coefficient, the greater the consistency of the test in measuring what it is designed to measure. Complete reliability cannot be obtained from a practical point of view, which would be represented by a reliability coefficient of (1.00).
Although the values of the reliability coefficient such as (0.96) or higher are mentioned in reports and some research, most test designers are satisfied if their tests give a reliability coefficient of around (0.90). On the other hand, the reliability coefficient for tests that teachers prepare tends not to reach this value. Another way to look at and interpret the reliability coefficient is by considering it as the ratio of the variance of the true scores to the variance of the observed scores obtained by the examinees. The true score of an individual in a specific test is a hypothetical score by which we mean the average of a large number of scores that the same individual could obtain on similar tests under favorable conditions. The observed score, on the other hand, is the score obtained in a specific test. The difference between the observed score (X) and the true score (T) is called the 'error of measurement' (E). So, the relationship between these scores is:

X = T + E

That is, the observed score (X) of a given test consists of two parts, the first being the true score (T) and the second being the error (E). An individual's observed score in a test differs in most cases from his true score, due to the fact that the observed score is affected by multiple sources of errors. If we assume that we can determine the degree of the random errors that affected the observed score for each individual, then the standard deviation of the error degrees can be found, and the resulting value is called the 'Standard Error of Measurement'. In fact, we cannot find the degree of error for each individual in a group unless the test is repeated on the same individual a large number of times, which is not possible.
Therefore, we cannot find the standard error of the measurement directly, but we can estimate this value if we know the standard deviation of the observed scores, as well as the reliability coefficient of the test scores, using a mathematical formula that can be directly derived from the following equation:

σ²_X = σ²_T + σ²_E, which gives σ²_E = σ²_X (1 − σ²_T / σ²_X)

Since σ²_T / σ²_X is the reliability coefficient (r_xx), then:

σ²_E = σ²_X (1 − r_xx)

By calculating the square root of each side, we find that:

σ_E = σ_X √(1 − r_xx)

That is, the standard error of measurement equals the standard deviation of the observed scores multiplied by the square root of 1 minus the reliability coefficient. The standard error of measurement in the item response theory is related to the information function and to the degree of accuracy in estimating the information function, as the shape of the information function distribution conveys important information. The value of the standard error of measurement is related to the information function by the following equation:

SE(θ) = 1 / √I(θ)

This means that by estimating the standard error of measurement for each ability level, the information function acquires a meaning that helps in understanding the accuracy of the measurement and thus the test reliability. It is clear from the equation that the relationship between the standard error of measurement and the information function is an inverse relationship, so the greater the standard error of measurement, the lower the values of the information function.
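The two estimates of the standard error just described can be made concrete in a few lines of code. This is an illustrative sketch, not the study's actual computation; the input numbers are invented for the example.

```python
import math

def sem_from_reliability(sd_observed, reliability):
    # Classical formula: SEM = sigma_X * sqrt(1 - r_xx)
    return sd_observed * math.sqrt(1.0 - reliability)

def se_from_information(information):
    # IRT formula: SE(theta) = 1 / sqrt(I(theta))
    return 1.0 / math.sqrt(information)

def reliability_from_information(information):
    # Reliability implied by the information function: r = 1 - 1/I(theta)
    return 1.0 - 1.0 / information

# Illustrative values: observed-score SD of 10 and reliability 0.91
print(sem_from_reliability(10, 0.91))     # about 3.0
print(se_from_information(4.0))           # 0.5
print(reliability_from_information(4.0))  # 0.75
```

Note how the inverse relationship described above appears directly: a larger information value yields a smaller standard error and a larger implied reliability.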
It is called item response theory and latent trait theory. The credit for presenting the foundations of the item response theory to those interested in psychometrics and education goes to Lord. IRT with its different models overcomes the problems of selecting items according to classical methods. It provides a method of selecting the items and estimating the ability of the examinee in such a way that the developer of the test can choose the most effective items in a range determined by a cut-off mark on the ability scale that helps to separate the levels of mastery and non-mastery on the scale (Hambleton & Rogers, 1991). The item response theory (the modern theory of measurement) proposes a model for the relationship between an unobservable variable representing the ability intended to be measured by the test, and the probability of a correct response on a given item. This is done through logistic functions linking the examinee's ability and the item parameters to the probability of a correct answer. Multiple models have emerged from the IRT, all of which assume that a single ability underlies performance on the test. The ability can be represented on an infinite continuum, and it is described through the characteristics of the items. The difficulty and ability degrees extend theoretically over a continuum ranging from (-∞) to (+∞), but in practice they range between (-3) and (+3), because it is rare to have values greater than (+3) or less than (-3) (Hambleton & Swaminathan, 1985). Differing models were suggested throughout the historical development of IRT. The first model suggested within the framework of IRT was the Rasch model, which was developed for items rated in two categories and contains only a difficulty parameter.
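The logistic models referred to above share one functional form. In the three-parameter version, the probability of a correct answer is P(θ) = c + (1 − c) / (1 + e^(−Da(θ − b))); fixing c = 0 gives the two-parameter model, and additionally fixing a = 1 gives the Rasch model. A small sketch follows; all parameter values are invented, and D = 1.7 is the usual scaling constant:

```python
import math

def p_correct(theta, a=1.0, b=0.0, c=0.0, D=1.7):
    # Logistic item characteristic curve; a = discrimination,
    # b = difficulty, c = pseudo-guessing lower asymptote.
    return c + (1.0 - c) / (1.0 + math.exp(-D * a * (theta - b)))

# An examinee whose ability equals the item difficulty answers with
# probability 0.5 under the Rasch and two-parameter models ...
p_rasch = p_correct(theta=0.5, b=0.5)
# ... and with probability (1 + c) / 2 once guessing is allowed.
p_3pl = p_correct(theta=0.5, a=1.2, b=0.5, c=0.2)
```

This matches the definition of the difficulty parameter given earlier: the ability level at which the probability of a correct answer is 0.5 when the guessing coefficient is zero.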
A two-parameter model was developed with the inclusion of a discrimination parameter in the Rasch model, and a three-parameter model was developed with the inclusion of a guessing parameter in the two-parameter model (Furr & Bacharach, 2008). As can be understood, the first factor influential in the emergence of different models in the development process of IRT was the number of estimated item parameters. The second factor was the response categories in relation to items. IRT was first developed for items that were rated dichotomously. However, later, the use of the theory was not limited to dichotomously rated items and, thus, models for polytomous items (nominal response model, partial credit model and graded response model) were also included in IRT (Harvey & Hammer, 1999; van der Linden, 2005; Embretson & Reise, 2000). IRT is divided into two categories, parametric and non-parametric models, in terms of the approaches considered in estimating the item characteristic curve. While parametric IRT models assume that the item characteristic curve has normal ogive or logistic properties, non-parametric models do not have an assumption limiting the item characteristic curve to a certain form (Takno, Tsunoda & Muraki, 2015). Mathematical models known as the "latent trait models" emerge from it. Each of those models depends on a mathematical equation that determines the relationship of the individual's performance on an item with the ability that lies behind and explains this performance. The IRT includes the logistic models (Hambleton & Rogers, 1991). The item response theory has a set of features mentioned by (Hambleton & Swaminathan, 1985):
1. Item parameters are independent of the group of examinees used from the population of examinees for whom the test was designed (item-free).
2. Examinee ability estimates are independent of the particular choice of test items used from the population of items that were calibrated (person-free).
3.
There is a statistic indicating the accuracy in estimating the ability of each individual, such as the standard error of estimation. This statistic is different from one individual to another, and it is found for each item. It is not constant for all items. The most important characteristic of this theory in psychological and educational measurement is the possibility of obtaining the item statistics that does not depend on the characteristics of the examinees; and the scores that express the ability of the subjects do not depend on the characteristics of the items. Many studies have been conducted which examined the effect of the difficulty degree of the multiple-choice test items on the reliability coefficient and the standard error of measurement based on the item response theory. Among these studies is (Hambleton & Traub, 1974) which aimed to study the effect of ordering the test items according to their difficulty level on the performance in a math test and on the anxiety generated during the test. In order to achieve the objectives of the study, an achievement test consisting of items was prepared and applied to examinees in order to calculate the items difficulty coefficients. Accordingly, two forms of the test were prepared based on the difficulty coefficients. The items were arranged in ascending order in the first test form, and in descending order in the second test form. Then the two test forms were applied to examinees who were subjected to the Achievement Anxiety Test (AAT). One of the most important findings of the study is that the order of the test items in ascending order leads to higher scores than the scores obtained when the test items are arranged in descending order. Moreover, it was found that the arrangement of the test items affects the examinees' scores in the (AAT). (Crehan, K. D., Haladyna, T. M., & Brewer, B. 
W., 1993) aimed at determining the optimal number of alternatives in the multiple-choice test in terms of difficulty and discrimination coefficients. In order to achieve the study objectives, an achievement test consisting of items was prepared. Two equivalent forms of the test were prepared, one form with three alternatives and the other with four alternatives. The two forms were applied to a random sample of students. The difficulty coefficients were calculated for each form. The average difficulty coefficient was (0.80) for the three-alternative form and (0.77) for the four-alternative form. The averages of discrimination coefficients were (0.35) for the three-alternative form and (0.36) for the four-alternative form. It was found that there were statistically significant differences between the averages of difficulty coefficients in favor of the three-alternative form, as its items were found to be easier than those of the four-alternative form. (Za'al, 2010) aimed to study test anxiety and the arrangement of test items according to their difficulty degree on the achievement of ninth-grade school students in mathematics. To achieve the study objectives, an achievement test in mathematics was prepared, as well as an anxiety test. The study sample consisted of male and female students. Among the most important findings of the study is that there are no statistically significant differences at the level of significance (α = 0.05) between the averages of students' performance on the mathematics test attributed to the arrangement of the test items according to their degree of difficulty. The study also found the presence of statistically significant differences at the level of significance (α = 0.05) among the averages of female students' performance attributed to the arrangement of the test items according to their difficulty degree, in favor of both ascending and random order.
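The classical difficulty and discrimination coefficients compared in these studies can be computed directly from a 0/1 response matrix. The sketch below uses invented data, not the data of any of the cited studies.

```python
import numpy as np

# Invented response matrix: rows = examinees, columns = items (1 = correct).
responses = np.array([
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
    [1, 0, 1, 1],
])

# Difficulty coefficient: proportion of examinees answering the item correctly
# (so higher values mean an easier item).
difficulty = responses.mean(axis=0)

# A simple discrimination index: correlation of each item with the total score.
total_scores = responses.sum(axis=1)
discrimination = np.array([
    np.corrcoef(responses[:, j], total_scores)[0, 1]
    for j in range(responses.shape[1])
])
```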
A study entitled "The effect of arranging the test items on the validity and reliability of the multiple-choice test in mathematics for high school students in Makkah" aimed to identify the most important methods of arranging the test items according to the sequence of course content, as well as the arrangement according to the difficulty coefficients. The study also aimed at determining the best pattern of arrangement for test items and its effect on the validity and reliability of multiple-choice achievement tests. To achieve the objectives of the study, a multiple-choice achievement test in mathematics was prepared for the second secondary grade, consisting of items. One of the most important results of the study is that there are no statistically significant differences between the values of the internal consistency coefficients calculated by the Cronbach Alpha equation attributed to the difficulty degree. (Ma'rouf, 2013) aimed at identifying the effect of arranging the test items according to their difficulty level on the psychometric characteristics of the culture-fair intelligence test. In order to achieve the objectives of the study, four forms of the test were prepared. In the first form, the test items were arranged in ascending order; in the second form, items were arranged in descending order; in the third form, items were arranged randomly; while in the fourth form, items were arranged in a circular order. The test was applied to male and female students. The study found that: i. there were no statistically significant differences between the Cronbach Alpha coefficients for the test scores attributed to the arrangement of the test items according to their level of difficulty; ii. there were statistically significant differences in the split-half reliability coefficients of the test scores according to the Spearman-Brown equation, in favor of the circular arrangement.
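Several of the studies above report internal consistency via the Cronbach Alpha equation, α = k/(k − 1) · (1 − Σσ²ᵢ / σ²ₓ), where k is the number of items, σ²ᵢ the variance of item i, and σ²ₓ the variance of the total scores. A minimal sketch with invented item scores:

```python
import numpy as np

def cronbach_alpha(scores):
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)

# Invented scores of five examinees on four items.
data = np.array([
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
])
print(round(cronbach_alpha(data), 3))
```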
(Ilhan & Guler, 2018) aimed to compare difficulty indices calculated for open-ended items in accordance with the classical test theory (CTT) and the Many-Facet Rasch Model (MFRM). Although theoretical differences between CTT and MFRM occupy much space in the literature, the number of studies empirically comparing the two theories is quite limited. Therefore, this study is expected to be a substantial contribution to the literature. The research data were collected through three teachers rating the answers given by 375 eighth-grade students to ten open-ended questions in a mathematics test. The difficulties of the items in the test were calculated according to CTT and MFRM by using the obtained data, and the consistency between the difficulty indices estimated based on the two theories was tested. While the Microsoft Excel program was used in the analyses for CTT, the FACETS package was employed in the analyses for MFRM. Findings: The research findings showed that CTT and MFRM yielded similar results in terms of difficulty indices of open-ended questions. It was found that, according to both theories, the ten items in the achievement test were ranked as I2, I3, I1, I4, I7, I6=I8, I5, and I9, from easiest to most difficult. Implications for Research and Practice: It may be said that estimating item difficulties according to either CTT or MFRM will not cause any notable differences in terms of the items to be included or excluded in the development of an achievement test with open-ended questions. Method and Procedures This study used the experimental simulation approach. Data were generated using the (WinGen3) software and were studied using the (SPSS) and (BILOG-MG3) software to answer the study questions according to the following steps: Data Generation: First: Generating the test based on the three-parameter model: 1. 
Generating four test forms, each consisting of items with different difficulty coefficient averages (-0.24, 0.24, 0.42, 0.93), according to the three-parameter model using the (WinGen3) software based on the IRT.

Data Analysis:
1. To achieve the objectives of the study, the (WinGen3) software was used to generate data. The item parameters (difficulty, discrimination, guessing) were generated for the four forms of the test.
2. The resulting item parameters were relied upon to generate the abilities of examinees according to the three-parameter model based on the IRT.
3. Using the (SPSS) software to convert these data into a readable file for the (BILOG-MG3) software.
4. Calculating the reliability coefficients for the four test forms and the item parameters.
5. Calculating the information function of the four test forms according to the three-parameter model in the IRT.
6. Utilizing the information function to calculate the standard error of measurement for each item, depending on the equation that links the information function with the standard error of measurement.

Data Goodness of Fit:
The (BILOG-MG3) software was utilized to match individuals and items to the models of the item response theory. The data of examinees were analyzed, and the results indicated that all the items fit the model, as the value of the Chi-Square test (χ²) is not statistically significant at the level of significance (α ≤ 0.05). Moreover, the results of the analysis showed that all the responses of the examinees matched the expectations of the models except for sixteen examinees, for whom the value of Chi-Square (χ²) was statistically significant at the level of significance (α ≤ 0.05).

Results and Discussion

The first question: Are there statistically significant differences between the test reliability coefficients attributed to the difficulty degree of the items according to the three-parameter model in the item response theory?
To answer this question, the test reliability coefficients based on the difficulty degree of the items were found according to the three-parameter model in the item response theory, using the equation Rxx = 1 − 1/I(θ), where Rxx refers to the test reliability coefficient and I(θ) refers to the item information function. The z-test was also used to identify the significance of differences. To find out the significance of the differences between the reliability coefficients, the Fisher equation was used to convert the reliability coefficients into z-values and examine their significance, as shown in the following table. The results show that the differences between the pairwise comparisons of the reliability coefficients for the four test forms were statistically significant at the level of significance (α ≤ 0.05). It is noticed that the second form is the most stable compared to the other forms, followed by the third one. It is also noticed that the test was less reliable when the test was extremely easy or difficult. It is clear that there are no statistically significant differences between the first and fourth forms. This confirms that the reliability coefficient will be best when the test items are arranged in terms of difficulty from easiest to most difficult. This arrangement provides students with motivation to keep trying to answer, as they receive immediate reinforcement from being able to answer the first questions of the test, which are called encouraging questions or shock-absorbing questions. These results are consistent with the results of the (Hambleton & Traub, 1974) and (Za'al, 2010) studies.

The second question: Are there statistically significant differences between the values of the standard error of measurement attributed to the difficulty degree of the items according to the three-parameter model in the item response theory?
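The Fisher transformation step described here can be sketched as follows; the coefficients and sample sizes below are illustrative, not the study's actual values.

```python
import math

def fisher_z(r):
    # Fisher r-to-z transformation: z = 0.5 * ln((1 + r) / (1 - r)) = atanh(r)
    return 0.5 * math.log((1.0 + r) / (1.0 - r))

def z_difference(r1, n1, r2, n2):
    # z statistic for the difference between two independent coefficients.
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    return (fisher_z(r1) - fisher_z(r2)) / se

# Illustrative comparison of two reliability coefficients at n = 500 each.
z = z_difference(0.93, 500, 0.88, 500)
print(abs(z) > 1.96)  # significant at alpha = 0.05, two-tailed
```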
To answer this question, a one-way ANOVA was applied to the values of the standard error of measurement based on the variance in the difficulty degree of the items according to the three-parameter model in the item response theory. Table shows these results: there are statistically significant differences between the means of the values of the standard error of measurement according to the difficulty degree of the items. To find out the direction of the differences and to which test form these differences belong, the Scheffe test for post-hoc comparisons was used. Table shows the results of the post-hoc comparisons: there are statistically significant differences between the averages of the standard errors of measurement for the first and fourth test forms in favor of the fourth test form, which has the higher item difficulty. This may be attributed to the fact that the test form with a higher difficulty coefficient has a higher level of item difficulty, which leads students to cheat or guess. Thus, the student's observed score will be far from his true score (X = T + E), which increases the standard error of measurement for the test items. Also, extreme difficulty in the test items may encourage students to guess randomly, which leads to their scores being close together, so that the group of students appears as a homogeneous group; that is, the test has a weak discriminating ability, which increases the standard error of measurement.

Conclusions and Recommendations

The results of the study showed that the standard error of measurement differs according to the item difficulty degree, in favor of the test with a higher difficulty coefficient. That is, the standard error of measurement increases as the difficulty of the test items increases.
The results also showed that there are clear differences between the test reliability coefficients attributed to the difficulty degree of the test based on the three-parameter model, in favor of the test form with a moderate difficulty degree. This means that the best reliability coefficient was for the test with a moderate item difficulty degree. Based on these results, the study recommends constructing achievement tests of medium difficulty and not relying on extremely difficult tests. The study also recommends conducting further studies based on the one-parameter and two-parameter models.
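The relations used throughout this study, SEM(θ) = 1/√I(θ) and the reliability equation linking R_xx to the information function, together with the Fisher r-to-z comparison of reliability coefficients, can be sketched in a few lines. This is an illustrative sketch only: the information values and sample sizes below are hypothetical, and the reliability formula R = 1 - 1/I assumes abilities scaled to unit variance.

```python
import math

def standard_error(information: float) -> float:
    """SEM at a given ability level: SEM = 1 / sqrt(I)."""
    return 1.0 / math.sqrt(information)

def reliability(information: float) -> float:
    """IRT reliability, R = 1 - 1/I (assumes unit-variance abilities)."""
    return 1.0 - 1.0 / information

def fisher_z(r: float) -> float:
    """Fisher's r-to-z transformation."""
    return 0.5 * math.log((1 + r) / (1 - r))

def z_difference(r1: float, n1: int, r2: float, n2: int) -> float:
    """z statistic for the difference between two independent coefficients."""
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    return (fisher_z(r1) - fisher_z(r2)) / se

# Hypothetical test information values for two forms, 500 examinees each
i_moderate, i_hard = 12.0, 5.0
r1, r2 = reliability(i_moderate), reliability(i_hard)
print(f"SEM: {standard_error(i_moderate):.3f} vs {standard_error(i_hard):.3f}")
print(f"R:   {r1:.3f} vs {r2:.3f}, z = {z_difference(r1, 500, r2, 500):.2f}")
```

Note how more information at a given ability level means a smaller SEM and a higher reliability, which mirrors the finding that the moderately difficult form was the most reliable.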
/*
zbfilter computes the coefficients of a 2-pole IIR band-pass filter.
B,A : double arrays of size 3.
basize : size of the arrays A and B; because this is a zb filter,
this value must always be 3.
fc : center frequency of the band pass filter
fs : sample frequency (e.g. 44100)
bw : bandwidth of the band pass filter
gdb : gain coefficient, in dB.
*/
#include <math.h>

#ifndef PI
#define PI 3.14159265358979323846
#endif

double zbfilter(double* B,double* A,int basize,float fc, float fs, float bw, float gdb){
if ( basize != 3 )
return 0;
double v0 = pow(10.0,gdb/20.0);
double h0 = v0 - 1.0;
double ohmc = (2.0*PI*fc)/fs;
double ohmw = (2.0*PI*bw)/fs;
double d = -cos(ohmc);
double ax;
double tohm = tan(ohmw/2.0);
if (v0 >= 1.0)
ax = (tohm-1.0) / (tohm+1.0);
else
ax = (tohm-v0) / (tohm+v0);
B[0] = -ax;
B[1] = d * ( 1.0 - ax );
B[2] = 1.0;
A[0] = 1.0;
A[1] = -d * ( 1.0 - ax );
A[2] = ax;
#if 0  /* alternate form (applies the gain h0 in the passband);
          enable this instead of the block above if desired */
double e = d * ( 1.0 - ax );
double f = ( 1 + ax ) * h0 / 2.0;
B[0] = 1.0 + f;
B[1] = e;
B[2] = (-ax - f);
A[0] = 1.0;
A[1] = e;
A[2] = -ax;
#endif
return h0/2.0;
} |
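For a quick sanity check of the coefficient formulas, the first coefficient set above can be ported to Python. This is only a sketch: the input values are arbitrary test inputs, and `PI` is taken from `math.pi`.

```python
import math

def zb_coeffs(fc, fs, bw, gdb):
    """Python port of the first coefficient set computed by zbfilter()."""
    v0 = 10.0 ** (gdb / 20.0)
    ohmc = 2.0 * math.pi * fc / fs
    ohmw = 2.0 * math.pi * bw / fs
    d = -math.cos(ohmc)
    tohm = math.tan(ohmw / 2.0)
    if v0 >= 1.0:
        ax = (tohm - 1.0) / (tohm + 1.0)
    else:
        ax = (tohm - v0) / (tohm + v0)
    B = [-ax, d * (1.0 - ax), 1.0]
    A = [1.0, -d * (1.0 - ax), ax]
    return B, A

B, A = zb_coeffs(fc=1000.0, fs=44100.0, bw=200.0, gdb=6.0)
print(B, A)  # note A[0] == 1.0 and B[1] == -A[1] for this form
```

The symmetry B[0] = -A[2] and B[1] = -A[1] follows directly from the allpass-style construction in the C code.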
1. Field of Invention
This invention relates generally to generating three-dimensional image information and more particularly to generating three-dimensional image information using a single imaging path.
2. Description of Related Art
In conventional two-dimensional (2D) imaging, rays of light representing objects in a three-dimensional (3D) scene are captured and mapped onto a 2D image plane, and thus depth information is not recorded. Stereoscopic optical systems are capable of producing images that represent depth information by producing separate images from differing perspective viewpoints. The separate images may be separately presented to respective left and right eyes of a user so as to mimic operation of the human eyes in viewing a real scene and allowing the user to perceive depth in the presented views. The separated or stereo images are generally produced by an optical system having either a pair of spatially separated imaging paths or by using different portions of a single imaging path to produce images having differing perspective viewpoints. The images may then be presented using eyewear that is able to selectively permit the separate images to reach the user's respective left and right eyes. Alternatively, a special display may be configured to project spatially separated images toward the user's respective left and right eyes.
Stereoscopic imaging also finds application in the field of surgery, where a 3D endoscope may be used to provide a 3D view to the surgeon. Stereoscopic imaging may also be useful in remote operations, such as undersea exploration for example, where control of a robotic actuator is facilitated by providing 3D image information to an operator who is located remotely from the actuator. Other applications of stereoscopic imaging may be found in physical measurement systems and in 3D film production equipment used in the entertainment industry.
Incorporating organic materials into lithium ion batteries could lower their cost and make them more environmentally friendly, A*STAR researchers have found. The team has developed an organic-based battery cathode that has significantly improved electrochemical performance compared to previous organic cathode materials. Crucially, the new material is also robust, remaining stable over thousands of battery charge/discharge cycles. |
The influence of French colonial humanism on the study of late antiquity: Braudel, Marrou, Brown Late antiquity is a sub-discipline of history. It is also a particular way of representing the time and space of the past. Studies in late antiquity tend to focus on the culture and society of the late Roman world. This article argues that this way of imagining time and space and people derives from francophone debates about colonial governance that were current in the 1920s and 1930s. This colonial humanism provided the context for two francophone authors whose work heavily influenced the formation of late antiquity: Fernand Braudel and Henri Marrou. This article shows how Braudel and Marrou were influenced by colonial humanism and how this influence shaped the formation of late antiquity. Historiographical accounts of the study of late antiquity have noted a recurring preoccupation with modernity. This article argues that late antiquity is modern to the extent that it is dependent on the colony for its constitution. |
A randomized trial of TLR-2 agonist CADI-05 targeting desmocollin-3 for advanced non-small-cell lung cancer Background A randomized controlled trial to evaluate synergy between taxane plus platinum chemotherapy and CADI-05, a Toll-like receptor-2 agonist targeting desmocollin-3, as a first-line therapy in advanced non-small-cell lung cancer (NSCLC). Patients and methods Patients with advanced NSCLC (stage IIIB or IV) were randomized to cisplatin-paclitaxel (chemotherapy group, N=112) or cisplatin-paclitaxel plus CADI-05 (chemoimmunotherapy group, N=109). CADI-05 was administered a week before chemotherapy, on days 8 and 15 of each cycle, and every month subsequently for 12 months or until disease progression. Overall survival was compared using a log-rank test. Computed tomography was carried out at baseline and at the end of two and four cycles. Response rate was evaluated using Response Evaluation Criteria in Solid Tumors (RECIST) criteria by an independent radiologist. Results As per the intention-to-treat analysis, no survival benefit was observed between the two groups. In a subgroup analysis, an improvement in median survival by 127 days was observed in squamous NSCLC with chemoimmunotherapy (hazard ratio, 0.55; 95% CI 0.32-0.95; P=0.046). In patients receiving the planned four cycles of chemotherapy, there was improved median overall survival by 66 days (299 versus 233 days; hazard ratio, 0.64; 95% CI 0.41 to 0.98; P=0.04) in the chemoimmunotherapy group compared with the chemotherapy group. This was associated with improved survival of 17.48% at the end of 1 year in the chemoimmunotherapy group. Systemic adverse events were identical in both groups. Conclusion There was no survival benefit with the addition of CADI-05 to the combination of cisplatin-paclitaxel in patients with advanced NSCLC; however, the squamous cell subset did demonstrate a survival advantage.
/*
* (c) Copyright <NAME>, Germany. Contact: <EMAIL>.
*
* Created on 17.02.2018
*/
package net.finmath.modelling.descriptor;
import java.time.LocalDate;
import java.util.Map;
import org.junit.Assert;
import org.junit.Test;
import net.finmath.marketdata.model.curves.CurveInterpolation.ExtrapolationMethod;
import net.finmath.marketdata.model.curves.CurveInterpolation.InterpolationEntity;
import net.finmath.marketdata.model.curves.CurveInterpolation.InterpolationMethod;
import net.finmath.marketdata.model.curves.DiscountCurve;
import net.finmath.marketdata.model.curves.DiscountCurveInterpolation;
import net.finmath.modelling.DescribedModel;
import net.finmath.modelling.Product;
import net.finmath.modelling.ProductDescriptor;
import net.finmath.modelling.modelfactory.AssetModelFourierMethodFactory;
import net.finmath.modelling.modelfactory.AssetModelMonteCarloFactory;
import net.finmath.montecarlo.BrownianMotion;
import net.finmath.montecarlo.BrownianMotionLazyInit;
import net.finmath.montecarlo.RandomVariableFromArrayFactory;
import net.finmath.montecarlo.assetderivativevaluation.models.HestonModel.Scheme;
import net.finmath.time.FloatingpointDate;
import net.finmath.time.TimeDiscretization;
import net.finmath.time.TimeDiscretizationFromArray;
/**
* Unit test creating a Heston model and a European option from corresponding model descriptors and product descriptors
* using two different factories: Fourier versus Monte-Carlo.
*
* @author <NAME>
*/
public class HestonModelDescriptorTest {
// Model properties
private static final LocalDate referenceDate = LocalDate.of(2017,8,15);
private static final double initialValue = 1.0;
private static final double riskFreeRate = 0.05;
private static final double volatility = 0.30;
private static final double theta = volatility*volatility;
private static final double kappa = 0.1;
private static final double xi = 0.50;
private static final double rho = 0.1;
// Product properties
private static final double maturity = 1.0;
private static final LocalDate maturityDate = FloatingpointDate.getDateFromFloatingPointDate(referenceDate, maturity);
private static final double strike = 0.95;
// Monte Carlo simulation properties
private final int numberOfPaths = 100000;
private final int numberOfTimeSteps = 100;
private final double deltaT = 0.05;
private final int seed = 31415;
@Test
public void test() {
/*
* Create Heston Model descriptor
*/
final HestonModelDescriptor hestonModelDescriptor = new HestonModelDescriptor(referenceDate, initialValue, getDiscountCurve("forward curve", referenceDate, riskFreeRate), getDiscountCurve("discount curve", referenceDate, riskFreeRate), volatility, theta, kappa, xi, rho);
/*
* Create European option descriptor
*/
final String underlyingName = "eurostoxx";
final ProductDescriptor europeanOptionDescriptor = (new SingleAssetEuropeanOptionProductDescriptor(underlyingName, maturityDate, strike));
/*
* Create Fourier implementation of model and product
*/
// Create Fourier implementation of Heston model
final DescribedModel<?> hestonModelFourier = (new AssetModelFourierMethodFactory()).getModelFromDescriptor(hestonModelDescriptor);
// Create product implementation compatible with Heston model
final Product europeanOptionFourier = hestonModelFourier.getProductFromDescriptor(europeanOptionDescriptor);
// Evaluate product
final double evaluationTime = 0.0;
final Map<String, Object> valueFourier = europeanOptionFourier.getValues(evaluationTime, hestonModelFourier);
System.out.println(valueFourier);
/*
* Create Monte Carlo implementation of model and product
*/
// Create a Brownian motion (it carries its own time discretization)
final BrownianMotion brownianMotion = getBronianMotion(numberOfTimeSteps, deltaT, 2 /* numberOfFactors */, numberOfPaths, seed);
final RandomVariableFromArrayFactory randomVariableFromArrayFactory = new RandomVariableFromArrayFactory();
// Create Monte Carlo implementation of Heston model
final DescribedModel<?> hestonModelMonteCarlo = (new AssetModelMonteCarloFactory(randomVariableFromArrayFactory, brownianMotion, Scheme.FULL_TRUNCATION)).getModelFromDescriptor(hestonModelDescriptor);
// Create product implementation compatible with Heston model
final Product europeanOptionMonteCarlo = hestonModelMonteCarlo.getProductFromDescriptor(europeanOptionDescriptor);
final Map<String, Object> valueMonteCarlo = europeanOptionMonteCarlo.getValues(evaluationTime, hestonModelMonteCarlo);
System.out.println(valueMonteCarlo);
final double deviation = (Double)valueMonteCarlo.get("value") - (Double)valueFourier.get("value");
Assert.assertEquals("Difference of Fourier and Monte-Carlo valuation", 0.0, deviation, 1E-3);
}
/**
* Get the discount curve using the riskFreeRate.
*
* @param name Name of the curve
* @param referenceDate Date corresponding to t=0.
* @param riskFreeRate Constant continuously compounded rate
*
* @return the discount curve using the riskFreeRate.
*/
private static DiscountCurve getDiscountCurve(final String name, final LocalDate referenceDate, final double riskFreeRate) {
final double[] times = new double[] { 1.0 };
final double[] givenAnnualizedZeroRates = new double[] { riskFreeRate };
final InterpolationMethod interpolationMethod = InterpolationMethod.LINEAR;
final InterpolationEntity interpolationEntity = InterpolationEntity.LOG_OF_VALUE_PER_TIME;
final ExtrapolationMethod extrapolationMethod = ExtrapolationMethod.CONSTANT;
final DiscountCurve discountCurve = DiscountCurveInterpolation.createDiscountCurveFromAnnualizedZeroRates(name, referenceDate, times, givenAnnualizedZeroRates, interpolationMethod, extrapolationMethod, interpolationEntity);
return discountCurve;
}
/**
* Create a Brownian motion implementing BrownianMotion from given specs.
*
* @param numberOfTimeSteps The number of time steps.
* @param deltaT The time step size.
* @param numberOfFactors The number of factors.
* @param numberOfPaths The number of paths.
* @param seed The seed for the random number generator.
* @return A Brownian motion implementing BrownianMotion with the given specs.
*/
private static BrownianMotion getBronianMotion(final int numberOfTimeSteps, final double deltaT, final int numberOfFactors, final int numberOfPaths, final int seed) {
final TimeDiscretization timeDiscretization = new TimeDiscretizationFromArray(0.0 /* initial */, numberOfTimeSteps, deltaT);
final BrownianMotion brownianMotion = new BrownianMotionLazyInit(timeDiscretization, numberOfFactors, numberOfPaths, seed);
return brownianMotion;
}
}
|
__author__ = 'anonymous'
from config_loader import ConfigLoader
def main():
loader = ConfigLoader("config.ini",['ubuntu','staging','development'])
print(loader.get('ftp'))
print(loader.get('ftp.name'))
print(loader.get('ftp.enabled'))
print(loader.get('poda'))
print(loader.get('common'))
print(loader.get('common.basic_size_limit'))
if __name__ == "__main__":
main()
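The `ConfigLoader` class itself is not shown here. A minimal sketch of what such a dotted-key loader might look like, inferred purely from the usage above, follows; the class name and `get` semantics are assumptions, and the profile list is accepted but unused in this simplified version.

```python
import configparser

class ConfigLoader:
    """Minimal dotted-key config loader (hypothetical sketch based on usage)."""

    def __init__(self, path, profiles):
        self._parser = configparser.ConfigParser()
        self._parser.read(path)
        self._profiles = profiles  # profile fallback omitted in this sketch

    def get(self, key):
        # "ftp" returns the whole section; "ftp.name" returns one option.
        section, _, option = key.partition(".")
        if section not in self._parser:
            return None
        if not option:
            return dict(self._parser[section])
        return self._parser[section].get(option)

# Demo with an inline config file (hypothetical keys mirroring the usage above)
with open("demo_config.ini", "w") as fh:
    fh.write("[ftp]\nname = my_ftp\nenabled = yes\n")

loader = ConfigLoader("demo_config.ini", ["ubuntu", "staging"])
print(loader.get("ftp.name"))     # my_ftp
print(loader.get("missing.key"))  # None
```

A real implementation would additionally merge values from the listed profiles (e.g. `common`, then environment overrides), which is what the `['ubuntu','staging','development']` argument in the script above suggests.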
|
import { Inject, Pipe, PipeTransform } from '@angular/core';
import { TranslocoService } from './transloco.service';
import { HashMap } from './types';
import { TRANSLOCO_MISSING_HANDLER, TranslocoMissingHandler } from './transloco-missing-handler';
@Pipe({
name: 'translocoParams'
})
export class TranslocoParamsPipe implements PipeTransform {
constructor(
private service: TranslocoService,
@Inject(TRANSLOCO_MISSING_HANDLER) private missingHandler: TranslocoMissingHandler
) {}
transform(value: string, params?: HashMap) {
if (!value) {
this.missingHandler.handle(value, params, this.service.config);
}
return this.service.transpile(value, params);
}
}
|
New Perspectives of Machine Learning in Drug Discovery. Artificial intelligence methods, in particular machine learning, have been playing a pivotal role in drug development, from structural design to clinical trials. This approach is amplifying the impact of computer-aided drug discovery, thanks to the large data sets available for drug candidates and to new, more sophisticated ways of interpreting that information to identify patterns relevant to the scope of a study. In the present review, recent applications related to drug discovery and therapies are assessed, and limitations and future perspectives are analyzed.
/**
* Sort the point array in ascending order of x-coordinate,
* using selection sort.
*
* @param p the array of points to sort in place
*/
public static void selectionSort(final Point[] p)
{
for (int i = 0; i < p.length - 1; i++)
{
int minI = i;
for (int j = i + 1; j < p.length; j++)
{
double
currX = p[j].getX(),
minX = p[minI].getX();
if (currX < minX)
{
minI = j;
}
}
Point temp = p[i];
p[i] = p[minI];
p[minI] = temp;
}
} |
Exploring the Temperature Effect on Enantioselectivity of a Baeyer-Villiger Biooxidation by the 2,5-DKCMO Module: The SLM Approach

Abstract: Temperature is a crucial parameter for biological and chemical processes. Its effect on enzymatically catalysed reactions has been known for decades, and stereo- and enantiopreference are often temperature-dependent. For the first time, we present the temperature effect on the Baeyer-Villiger oxidation of rac-bicyclo[3.2.0]hept-2-en-6-one by the type II Baeyer-Villiger monooxygenase 2,5-DKCMO. In the absence of a reductase, and driven by hydride donation from a synthetic nicotinamide analogue, a clear trend of decreasing enantioselectivity at higher temperatures was observed. Traditional approaches such as the determination of the enantiomeric ratio (E) appeared unsuitable due to the complexity of the system. To quantify the trend, we chose to use Shape Language Modelling (SLM), a tool that allows the reaction to be described at all points in a shape-prescriptive manner. Thus, without knowing the equation of the reaction, the substrate ee can be estimated at any conversion.

Typical time-course of ketone 1 biotransformation

Figure S2. Time-course of the biotransformation of rac-1 at 287 K (plotted: conversion, the two remaining ketone enantiomers, and the lactone yields; the colour code is adjusted to Scheme 1). The standard deviations were calculated from duplicates.

3. The effect of the temperature on conversion and enantioselectivity

Figure S3. Enantioselective BV oxidation of 1 by 2,5-DKCMO from 283 K to 303 K. Conversion versus time in A, ee of 'normal' lactone 2 versus conversion in B.

Shape Language Modeling (SLM)

'Shape Language Modeling' (SLM, MATLAB®) tools were used to build a prescription for the 'best shaped' model of the reaction at 283 K. SLM is a method for the prescription of a curve fit using sets of shape primitives.
The basic idea is to find the function that displays the dataset most appropriately, rather than applying the mathematical model that fits best. The integration of knots at key positions allows designing functions with a general set of characteristics, which are computed between two knots individually. The results of the tested parameter variations applied to the curve "ee of substrate 1 versus conversion" are shown in Figure S4, with the 'best shaped' model in F.

Figure S4. "Shape Language Modeling" (SLM) tests using MATLAB® to identify the best model for the enantioselectivity of the reaction, applying the dataset at 283.15 K. The command slmengine was used as the driving tool for fitting the models using the prescription structures. The resulting curve is red (the data set is shown as blue dots, the knots of the function in green). All models (b) to (l) are based on the following structure: slm = slmengine (x, y, 'plot', 'on', 'increasing', 'on', 'leftvalue', 0, 'rightvalue', 100), with the additions in (b) to (l) described below.

The settings of Figure S4F were applied to the other datasets (287 K to 303 K) as the 'best shape' SLM function among the tested parameters. The result is shown in Figure S5.

Figure S5. Implementation of the 'best shape' SLM function for the plots of ee of 1 over conversion for reactions at 283 K to 303 K (A to F). The following prescriptive model (Figure S4F) was applied for the data sets at the tested temperatures: slm = slmengine (x, y, 'plot', 'on', 'increasing', 'on', 'knots',, 'leftvalue', 0, 'rightvalue', 100). For the legend see Figure S4.

Enantiomeric Ratio E

The Enantiomeric Ratio (E) describes the stereoselectivity of a chemical reaction. Methodologies to determine E are diverse; e.g., it can be determined directly from the ee of the substrate and the conversion (c) according to Equation 1:

E = ln[(1 - c)(1 - ee_s)] / ln[(1 - c)(1 + ee_s)]    (1)

We applied the experimental data for conversion and ee of 1 to determine E by a non-linear least-squares method.
A decreasing E value at higher temperatures was observed, along with a trend towards a poorer fit (R²). As examples, the graphs and results for 283 K and 303 K are shown in Figure S6 and the E values reported in Table S1.

Figure S6. Determination of the E value for reactions at 283 K and 303 K. Plotted: the ee of 1 (experimental data), the theoretical curves from regression, and the theoretical ee of the product when only one product is formed. Regression analysis for the determination of E was based on the experimental values (not the SLM results) gained from duplicates.

The E values for the reactions at temperatures of 283 K to 303 K were also calculated using the ees of 1 determined by the SLM method at 25% and 50% conversion. Applying this methodology, we observed an increasing E value during the reaction in both data sets, as shown in Table S1. SLM was applied to compute the ee of 1 at 25% and 50% conversion from the graphs in Figure S5.

Table S1. E values calculated from ee of 1
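Equation 1, relating E to the conversion and the substrate ee, is assumed here to be the standard Chen expression, E = ln[(1 - c)(1 - ee_s)] / ln[(1 - c)(1 + ee_s)], and is straightforward to evaluate numerically. The values below are illustrative only, not the paper's data.

```python
import math

def enantiomeric_ratio(c: float, ee_s: float) -> float:
    """Enantiomeric ratio E from conversion c and substrate ee,
    both given as fractions (standard Chen expression)."""
    return math.log((1 - c) * (1 - ee_s)) / math.log((1 - c) * (1 + ee_s))

# Illustrative values: 40% substrate ee at 50% conversion
print(f"E = {enantiomeric_ratio(0.50, 0.40):.1f}")  # E = 3.4
```

At a fixed conversion, a higher remaining-substrate ee gives a higher E, which is why a substrate-ee-versus-conversion curve (such as the SLM fits above) is enough to track selectivity.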
Aligning the Stars: Understanding Digital Scholarship Needs to Support the Evolving Nature of Academic Research Digital scholarship centres located within academic libraries are proliferating. This project gathered feedback from library staff and researchers at the University of Calgary to inform the development of a physical space and associated services to support the evolving nature of academic research. Semi-structured interviews were conducted, and the results were analyzed thematically. Common needs identified included access to interdisciplinary collaborators, technologies, and space. The library was beginning to renovate an existing space to support collaboration and, informed by this research, reconfigured and realigned services and expertise to support digital scholarship in a more cohesive manner. This study will be of interest to other academic libraries wishing to develop a digital scholarship centre that is responsive to the needs of their local community. |
/**
* Licensed to the Apache Software Foundation (ASF) under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. The ASF licenses this file
* to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
#include <future>
#include <iostream>
#include <memory>
#include <ostream>
#include <sstream>
#include <string>
#include "hdfs-setrep.h"
#include "internal/set-replication-state.h"
#include "tools_common.h"
namespace hdfs::tools {
Setrep::Setrep(const int argc, char **argv) : HdfsTool(argc, argv) {}
bool Setrep::Initialize() {
auto add_options = opt_desc_.add_options();
add_options("help,h",
"Changes the replication factor of a file at PATH. If PATH is a "
"directory then the command recursively changes the replication "
"factor of all files under the directory tree rooted at PATH.");
add_options(
"replication-factor", po::value<std::string>(),
"The replication factor to set for the given path and its children.");
add_options("path", po::value<std::string>(),
"The path for which the replication factor needs to be set.");
// We allow only one positional argument to be passed to this tool. An
// exception is thrown if multiple arguments are passed.
pos_opt_desc_.add("replication-factor", 1);
pos_opt_desc_.add("path", 1);
po::store(po::command_line_parser(argc_, argv_)
.options(opt_desc_)
.positional(pos_opt_desc_)
.run(),
opt_val_);
po::notify(opt_val_);
return true;
}
bool Setrep::ValidateConstraints() const {
// Only "help" is allowed as single argument.
if (argc_ == 2) {
return opt_val_.count("help");
}
// Rest of the cases must contain more than 2 arguments on the command line.
return argc_ > 2;
}
std::string Setrep::GetDescription() const {
std::stringstream desc;
desc << "Usage: hdfs_setrep [OPTION] NUM_REPLICAS PATH" << std::endl
<< std::endl
<< "Changes the replication factor of a file at PATH. If PATH is a "
"directory then the command"
<< std::endl
<< "recursively changes the replication factor of all files under the "
"directory tree rooted at PATH."
<< std::endl
<< std::endl
<< " -h display this help and exit" << std::endl
<< std::endl
<< "Examples:" << std::endl
<< "hdfs_setrep 5 hdfs://localhost.localdomain:8020/dir/file"
<< std::endl
<< "hdfs_setrep 3 /dir1/dir2" << std::endl;
return desc.str();
}
bool Setrep::Do() {
if (!Initialize()) {
std::cerr << "Unable to initialize HDFS setrep tool" << std::endl;
return false;
}
if (!ValidateConstraints()) {
std::cout << GetDescription();
return false;
}
if (opt_val_.count("help") > 0) {
return HandleHelp();
}
if (opt_val_.count("path") > 0 && opt_val_.count("replication-factor") > 0) {
const auto replication_factor =
opt_val_["replication-factor"].as<std::string>();
const auto path = opt_val_["path"].as<std::string>();
return HandlePath(path, replication_factor);
}
return false;
}
bool Setrep::HandleHelp() const {
std::cout << GetDescription();
return true;
}
bool Setrep::HandlePath(const std::string &path,
const std::string &replication_factor) const {
// Building a URI object from the given path.
auto uri = hdfs::parse_path_or_exit(path);
const auto fs = hdfs::doConnect(uri, true);
if (!fs) {
std::cerr << "Could not connect to the file system." << std::endl;
return false;
}
/*
* Wrap async FileSystem::SetReplication with promise to make it a blocking
* call.
*/
auto promise = std::make_shared<std::promise<hdfs::Status>>();
std::future future(promise->get_future());
auto handler = [promise](const hdfs::Status &s) { promise->set_value(s); };
const auto replication = static_cast<uint16_t>(
std::strtol(replication_factor.c_str(), nullptr, 10));
/*
* Allocating shared state, which includes:
* replication to be set, handler to be called, request counter, and a boolean
* to keep track if find is done
*/
auto state =
std::make_shared<SetReplicationState>(replication, handler, 0, false);
/*
* Keep requesting more from Find until we process the entire listing. Call
* handler when Find is done and request counter is 0. Find guarantees that
* the handler will only be called once at a time so we do not need locking in
* handler_find.
*/
auto handler_find = [fs, state](const hdfs::Status &status_find,
const std::vector<hdfs::StatInfo> &stat_infos,
const bool has_more_results) -> bool {
/*
* For each result returned by Find we call async SetReplication with the
* handler below. SetReplication DOES NOT guarantee that the handler will
* only be called once at a time, so we DO need locking in
* handler_set_replication.
*/
auto handler_set_replication =
[state](const hdfs::Status &status_set_replication) {
std::lock_guard guard(state->lock);
// Decrement the counter once since we are done with this async call.
if (!status_set_replication.ok() && state->status.ok()) {
// We make sure we set state->status only on the first error.
state->status = status_set_replication;
}
state->request_counter--;
if (state->request_counter == 0 && state->find_is_done) {
state->handler(state->status); // Exit.
}
};
if (!stat_infos.empty() && state->status.ok()) {
for (hdfs::StatInfo const &stat_info : stat_infos) {
// Launch an asynchronous call to SetReplication for every returned
// file.
if (stat_info.file_type == hdfs::StatInfo::IS_FILE) {
state->request_counter++;
fs->SetReplication(stat_info.full_path, state->replication,
handler_set_replication);
}
}
}
/*
* Lock this section because handler_set_replication might be accessing the
* same shared variables simultaneously.
*/
std::lock_guard guard(state->lock);
if (!status_find.ok() && state->status.ok()) {
// We make sure we set state->status only on the first error.
state->status = status_find;
}
if (!has_more_results) {
state->find_is_done = true;
if (state->request_counter == 0) {
state->handler(state->status); // Exit.
}
return false;
}
return true;
};
// Asynchronous call to Find.
fs->Find(uri.get_path(), "*", hdfs::FileSystem::GetDefaultFindMaxDepth(),
handler_find);
// Block until promise is set.
const auto status = future.get();
if (!status.ok()) {
std::cerr << "Error: " << status.ToString() << std::endl;
return false;
}
return true;
}
} // namespace hdfs::tools
|
package oracle.dws;
import javax.jws.Oneway;
import javax.jws.WebMethod;
import javax.jws.WebParam;
import javax.jws.WebResult;
import javax.jws.WebService;
import javax.jws.soap.SOAPBinding;
import javax.jws.soap.SOAPBinding.ParameterStyle;
import javax.jws.soap.SOAPBinding.Style;
import javax.xml.bind.annotation.XmlSeeAlso;
import javax.xml.ws.Action;
import javax.xml.ws.FaultAction;
// !DO NOT EDIT THIS FILE!
// This source file is generated by Oracle tools
// Contents may be subject to change
// For reporting problems, use the following
// Version = Oracle WebServices (172.16.17.32.0, build 100408.1504.05443)
@WebService(wsdlLocation="http://192.168.1.72:7001/DWSV0AL1/CompositionService?WSDL",
targetNamespace="oracle/documaker/schema/ws/composition", name="CompositionServicePortType")
@XmlSeeAlso(
{ oracle.dws.types.ObjectFactory.class })
@SOAPBinding(style=Style.DOCUMENT, parameterStyle=ParameterStyle.BARE)
public interface CompositionServicePortType
{
@WebMethod(action="doCallIDS")
@SOAPBinding(parameterStyle=ParameterStyle.BARE)
@Action(input="doCallIDS", fault =
{ @FaultAction(value="oracle/documaker/schema/ws/composition/CompositionServicePortType/doCallIDS/Fault/CompositionFault",
className=oracle.dws.CompositionFault.class) }, output="oracle/documaker/schema/ws/composition/CompositionServicePortType/doCallIDSResponse")
@WebResult(targetNamespace="oracle/documaker/schema/ws/composition",
partName="DoCallIDSResponse", name="DoCallIDSResponse")
public oracle.dws.types.DoCallIDSResponse doCallIDS(@WebParam(targetNamespace="oracle/documaker/schema/ws/composition",
partName="DoCallIDSRequest", name="DoCallIDSRequest")
oracle.dws.types.DoCallIDSRequest DoCallIDSRequest)
throws oracle.dws.CompositionFault;
@WebMethod(action="doCallIDSOneWay")
@SOAPBinding(parameterStyle=ParameterStyle.BARE)
@Action(input="doCallIDSOneWay")
@Oneway
public void doCallIDSOneWay(@WebParam(targetNamespace="oracle/documaker/schema/ws/composition",
partName="DoCallIDSOneWayRequest", name="DoCallIDSOneWayRequest")
oracle.dws.types.DoCallIDSOneWayRequest DoCallIDSOneWayRequest);
}
|
# Backend model and the Flask `app` object are assumed to be defined elsewhere.
from flask import render_template

@app.route('/')  # hypothetical route path
def home():
    backends = Backend.query.all()
    return render_template('home.html', backends=backends)
San Francisco and Los Angeles have plenty of different experiences to offer visitors this autumn and winter.
The sun-drenched shores of California are undoubtedly one of the biggest attractions for making a trip to the west coast of America. However, it is in the Golden State’s cities like San Francisco and LA where you will find a whole lot more than just beautiful beaches.
Beyond the famous Golden Gate Bridge, San Francisco will dazzle you as one of America’s finest food destinations. Renowned chef Alice Waters of the restaurant Chez Panisse was the pioneer of the 1970s farm-to-table movement. The city has a cool, radical edge, having been the birthplace of the “Summer of Love” in the 1960s as well as the gay rights movement.
Alongside the amazing food, there are the city’s famous cable cars, fantastic art galleries and museums, buzzing Mission District and infamous Alcatraz prison. For wine lovers, it is worth taking a trip out to Napa or Sonoma Valley.
Los Angeles is a huge, sprawling city with endless, different experiences and attractions to visit. It is much more than just movies, there are also fantastic galleries, museums and places like Griffith Park that are well worth a visit. With an extended Metro system and Uber, tourists can now access different parts of this exciting, creative city more easily than ever before.
Take a trip down to Fisherman's Wharf (pictured left) on a San Fran tram, or have a stroll through the one million trees in Golden Gate Park (pictured right).
Catherine Barry moved to the US intending to stay for a summer and has spent 25 years in the city where she works as director of Irish Culture Bay Area.
“San Francisco is not your typical American city. It is small enough to explore and get a good feel for in a few days. The weather is a bit weird in the summer, but the months outside summer often bring the best weather.
“It’s a city on a bay so there’s lots of water, bridges, boats and piers. The west end of the city meets the Pacific Ocean with lovely stretches of beaches and there are lovely coastal hikes and whale watching. The Golden Gate Park - pictured above - connects the city with the sea. San Francisco is especially known for its restaurants and the food here is pretty amazing. From Italian in North Beach to Mexican in Mission District to Chinese in Chinatown to all-American diners. I love a great Indian meal in the Tenderloin area, also known as “Tandoor-loin”, or an old fashioned diner meal at the historic John’s Grill, home of Dashiell Hammett’s “The Maltese Falcon”. In Chinatown you could also check out the Great Eastern Restaurant where Obama surprised staff by arriving and ordering dumplings to go.
There are the great well-known art galleries like de Young, Legion of Honor and San Francisco Museum of Modern Art, but there are also interesting, quirky places like the Beat Poets Museum or Haight-Ashbury which is the former home of Janis Joplin, Jimi Hendrix and Grateful Dead.
A great way to see the city is to do a walking tour with City Guides. There have been a ton of famous movies shot here but I always encourage visitors to watch Hitchcock’s Vertigo movie before coming out. Must visits are Fisherman’s Wharf (pictured above), Alcatraz, also the Academy of Sciences in Golden Gate Park. While in this park which reportedly has one million trees, don’t miss the herd of bison or the Conservatory of Flowers which is one of the oldest Victorian glasshouses in the US.
I have to give a shout out for my own Irish Arts and Writers Festival in October where around 20 Irish authors, artists and poets join US counterparts and showcase the best in contemporary Irish arts and literature. This year’s line-up includes Fintan O’Toole and Paul Muldoon.
It can get crowded early at El Techo, a rooftop hotspot at the heart of the Mission district, but the crowds are worth it. Great views, San Francisco-essential wind protection and fantastic Latin American-themed street food and cocktails.
If you can’t make it to Napa, there are more than a dozen wineries within the city limits of San Francisco. They don’t grow their own vines but many have production onsite and the tasting rooms in each are a fantastic education – and a lot of fun. Try the Bluxome Street Winery in Soma.
Track down the Wave Organ, an art installation on a jetty in the bay. Specially built pipes transport sound created by the ocean to an excited audience. It also comes with fantastic views of Alcatraz (pictured above), the city skyline and the Golden Gate bridge.
Strut your stuff down Sunset Boulevard or catch a wave on Malibu Beach.
Adrienne Borlongan was born and raised in Los Angeles. She is the chef and owner of Wanderlust Creamery, a Los Angeles-based artisanal ice-cream business.
“I grew up in a part of LA called “The Valley,” that is, the San Fernando Valley. Before opening Wanderlust Creamery, I worked as a mixologist for SBE, an LA-based hospitality group with nightclubs, restaurants and hotels in Los Angeles, Miami, Las Vegas, and Dubai.
“We make ice-cream inspired by travel; each flavour is based on a specific destination in the world, inspired by places I’ve been or long to visit or childhood memories.
LA is one of the best cities in the world to enjoy life; there are so many experiences that are unique to LA. It’s one of the only cities in the world where you can surf some waves in Malibu (pictured, above right), at sunrise and hit the slopes to ski in the mountains before sunset on the same day. One thing that surprises first time visitors is the city’s vastness; it is way bigger in size than one could imagine.
A typical LA day is bright, sunny, and somewhere around 24 degrees, which can make every day feel like a holiday. A winter “Sunday Funday” in LA should consist of a visit to Smorgasburg LA to sample an assortment of LA’s most impressive food vendors, followed by a stop at The Do Over at Grand Park to experience the city’s best house music.
“If I could give an LA introduction to an out-of-towner, I’d spend half the day driving them down Sunset Boulevard (pictured, above left). You could literally experience all the different facets of life in LA on this one street alone. I’d start in gritty Chinatown, through less shiny Echo Park and watch it turn into trendy Silverlake. Then through the seediness that surrounds the tourist traps in Hollywood, meander along the glamour of the Sunset Boulevard of West Hollywood which transcends to the affluent Beverly Hills.
“I’d detour on Canon Drive between “little Santa Monica” and Sunset for an iconic picture of palm tree-lined streets and fancy cars in front of mansions. Then on to Brentwood and Pacific Palisades, ending at the Pacific Coast Highway, where I’d drive them along the ocean in Malibu and up through Topanga Pass to the Valley. You could easily hire an Uber to do this, but just make sure they make a pitstop at an In-N-Out.
Every day, hundreds of people move here with the feeling they were too big for their hometown or meant for a destiny bigger than one back home. Because of this, LA is home to many bright, talented, progressive and innovative people and it’s very motivating to live here. The downside to this is that you get a lot of “those” personalities; everyone is someone, and if they’re not someone, they must pretend to be, it’s the LA culture. Locals on the other hand, are very laid back and unpretentious. In LA, people should explore all that is not Hollywood. Experience the lifestyle that’s made possible by the weather, and the culture that’s made possible by the city’s diversity.
With the tagline ‘What are you waiting for, we won’t be here forever’, The Last Bookstore is California’s largest second-hand book and vinyl store and a hive of activity for LA’s arts scene. A mix of Victorian drawing room and bohemian chic.
Looking for a great whiskey house in a log cabin with its own stuffed bear? Then Big Foot Lodge might just fit the bill. Described as like drinking in Twin Peaks, this quirky bar on Los Feliz Blvd is stuffed with taxidermy and offbeat LA charm. A popular hangout for local musicians, the sign over the bar reads “Bigfoot doesn’t believe in you either”.
Stranger Things fans will be delighted to know that the Upside Down world from the hit Netflix show will be added to Universal Studios this autumn. The maze attraction is being developed in collaboration with the series creators, the Duffer Brothers. So if you can’t wait for series three, you can visit the world of Eleven and long-lost Barb once it’s launched during the Universal Halloween Horror Nights festival.
/**
* @author Yaroslav Bondarchuk
* Date: 26.12.13
* Time: 11:15
*/
public class HierarchyBrowserSearchClickEvent extends GwtEvent<HierarchyBrowserSearchClickEventHandler> {
public static Type<HierarchyBrowserSearchClickEventHandler> TYPE = new Type<HierarchyBrowserSearchClickEventHandler>();
private Id parentId;
private String parentCollectionName;
private String inputText;
private int recursionDeepness;
public HierarchyBrowserSearchClickEvent(Id parentId, String parentCollectionName, String inputText, int recursionDeepness) {
this.parentId = parentId;
this.parentCollectionName = parentCollectionName;
this.inputText = inputText;
this.recursionDeepness = recursionDeepness;
}
@Override
public Type<HierarchyBrowserSearchClickEventHandler> getAssociatedType() {
return TYPE;
}
@Override
protected void dispatch(HierarchyBrowserSearchClickEventHandler handler) {
handler.onHierarchyBrowserSearchClick(this);
}
public Id getParentId() {
return parentId;
}
public String getParentCollectionName() {
return parentCollectionName;
}
public String getInputText() {
return inputText;
}
public int getRecursionDeepness() {
return recursionDeepness;
}
} |
/**
*
* Class for data exchange between the MATSim data format and postgreSQL databases.<br>
*
* @author dhosse
*
*/
public class MatsimPsqlAdapter {
private static final Logger log = Logger.getLogger(MatsimPsqlAdapter.class);
// only one connection at a time
private static Connection connection;
// private!
private MatsimPsqlAdapter() {}
public static void main(String args[]) {
Scenario scenario = ScenarioUtils.createScenario(ConfigUtils.createConfig());
new PopulationReader(scenario).readFile("/home/dhosse/garmisch/run16/output_plans.xml.gz");
MatsimPsqlAdapter.writeScenarioToPsql(scenario, "09180_2025", "development");
}
/**
*
* This method is equivalent to MATSim's {@link org.matsim.core.scenario.ScenarioUtils#loadScenario(Config)} method.
* Takes data tables contained in the given schema name from the 'simulation' database to generate MATSim scenario data
* (network, population etc.)
*
* @param scenario The MATSim scenario to read the data into.
* @param configuration The database configuration to use.
* @param scenarioName The schema name containing the MATSim data tables.
*/
public static void createScenarioFromPsql(final Scenario scenario, final Configuration configuration, final String scenarioName) {
try {
connection = PsqlAdapter.createConnection(DatabaseConstants.SIMULATIONS_DB);
createNetworkFromTable(scenario.getNetwork(), scenarioName);
createPopulationFromTable(scenario.getPopulation(), scenarioName);
connection.close();
} catch (InstantiationException | IllegalAccessException
| ClassNotFoundException | SQLException e) {
e.printStackTrace();
}
}
public static void writeScenarioToPsql(final Scenario scenario, final String scenarioName, final String railsEnvironment) {
try {
String dbName = RailsEnvironments.valueOf(railsEnvironment).getDatabaseName();
connection = PsqlAdapter.createConnection(dbName);
log.info("Connected to database " + dbName);
plans2Table(scenario.getPopulation(), scenarioName);
writeScenarioMetaData(scenario, scenarioName);
connection.close();
} catch (InstantiationException | IllegalAccessException
| ClassNotFoundException | SQLException e) {
e.printStackTrace();
}
}
/**
*
* @param network The MATSim network object.
* @param tablespace The schema name containing the MATSim nodes and links table.
*/
private static void createNetworkFromTable(final Network network, final String tablespace) {
try {
Statement statement = connection.createStatement();
ResultSet nodesSet = statement.executeQuery("SELECT * FROM " + tablespace + ".nodes;");
while(nodesSet.next()) {
Id<Node> id = Id.createNodeId(nodesSet.getString("id"));
Coord coord = new Coord(nodesSet.getDouble("x_coord"), nodesSet.getDouble("y_coord"));
Node nn = network.getFactory().createNode(id, coord);
network.addNode(nn);
}
nodesSet.close();
ResultSet linksSet = statement.executeQuery("SELECT * FROM " + tablespace + ".links;");
while(linksSet.next()) {
Id<Link> id = Id.createLinkId(linksSet.getString("id"));
Node fromNode = network.getNodes().get(Id.createNodeId(linksSet.getString("from_node_id")));
Node toNode = network.getNodes().get(Id.createNodeId(linksSet.getString("to_node_id")));
Link ll = network.getFactory().createLink(id, fromNode, toNode);
ll.setLength(linksSet.getDouble("length"));
ll.setFreespeed(linksSet.getDouble("freespeed"));
ll.setCapacity(linksSet.getDouble("capacity"));
ll.setNumberOfLanes(linksSet.getInt("permlanes"));
ll.setAllowedModes(CollectionUtils.stringToSet(linksSet.getString("modes")));
NetworkUtils.setOrigId(ll, linksSet.getString("origid"));
NetworkUtils.setType(ll, linksSet.getString("type"));
network.addLink(ll);
}
linksSet.close();
statement.close();
} catch (SQLException e) {
e.printStackTrace();
}
}
/**
*
* @param network
* @param tablespace
*/
public static void network2Table(final Network network, final String tablespace) {
try {
Statement statement = connection.createStatement();
statement.executeUpdate("CREATE SCHEMA IF NOT EXISTS " + tablespace + ";");
statement.executeUpdate("DROP TABLE IF EXISTS " + tablespace + ".nodes;");
statement.executeUpdate("DROP TABLE IF EXISTS " + tablespace + ".links;");
statement.executeUpdate("CREATE TABLE IF NOT EXISTS " + tablespace + ".nodes("
+ "id varchar,"
+ "x_coord double precision,"
+ "y_coord double precision"
+ ");");
statement.executeUpdate("CREATE TABLE IF NOT EXISTS " + tablespace + ".links("
+ "id varchar,"
+ "from_node_id varchar,"
+ "to_node_id varchar,"
+ "length double precision,"
+ "freespeed double precision,"
+ "capacity double precision,"
+ "permlanes double precision,"
+ "oneway integer,"
+ "modes varchar,"
+ "origid varchar,"
+ "type varchar"
+ ");");
writeNodesTable(network.getNodes().values(), tablespace);
writeLinksTable(network.getLinks().values(), tablespace);
} catch (SQLException e) {
e.printStackTrace();
}
}
/**
*
* @param nodes
* @param tablespace
* @throws SQLException
*/
private static void writeNodesTable(final Collection<? extends Node> nodes, String tablespace) throws SQLException {
PreparedStatement stmt = connection.prepareStatement("INSERT INTO " + tablespace + ".nodes (id, x_coord, y_coord) VALUES(?, ?, ?);");
for(Node node : nodes) {
stmt.setString(1, node.getId().toString());
stmt.setDouble(2, node.getCoord().getX());
stmt.setDouble(3, node.getCoord().getY());
stmt.addBatch();
}
stmt.executeBatch();
stmt.close();
}
/**
*
* @param links
* @param tablespace
* @throws SQLException
*/
private static void writeLinksTable(final Collection<? extends Link> links, String tablespace) throws SQLException {
PreparedStatement stmt = connection.prepareStatement("INSERT INTO " + tablespace + ".links (id, from_node_id, to_node_id, length,"
+ "freespeed, capacity, permlanes, oneway, modes, origid, type) VALUES(?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?);");
for(Link link : links) {
stmt.setString(1, link.getId().toString());
stmt.setString(2, link.getFromNode().getId().toString());
stmt.setString(3, link.getToNode().getId().toString());
stmt.setDouble(4, link.getLength());
stmt.setDouble(5, link.getFreespeed());
stmt.setDouble(6, link.getCapacity());
stmt.setDouble(7, link.getNumberOfLanes());
stmt.setInt(8, 1);
stmt.setString(9, CollectionUtils.setToString(link.getAllowedModes()));
stmt.setString(10, NetworkUtils.getOrigId(link));
stmt.setString(11, NetworkUtils.getType(link));
stmt.addBatch();
}
stmt.executeBatch();
stmt.close();
}
/**
*
* @param population
* @param tablespace
*/
public static void plans2Table(final Population population, final String tablespace) {
try {
log.info("Writing plans...");
writePlansTable(population, tablespace);
} catch (SQLException e) {
log.error(e.getMessage());
}
}
/**
*
* @param population
* @param tablespace
* @throws SQLException
*/
private static void writePersonsTable(final Population population, String tablespace) throws SQLException {
PreparedStatement stmt = connection.prepareStatement("INSERT INTO " + tablespace +
".persons (id, age, sex, license, car_avail, employed) VALUES (?, ?, ?, ?, ?, ?)");
ObjectAttributes personAttributes = population.getPersonAttributes();
for(Person person : population.getPersons().values()) {
String personId = person.getId().toString();
Double age = (Double) personAttributes.getAttribute(personId, "age");
String sex = (String) personAttributes.getAttribute(personId, "sex");
Boolean license = (Boolean) personAttributes.getAttribute(personId, "hasLicense");
Boolean carAvail = (Boolean) personAttributes.getAttribute(personId, "carAvail");
Boolean employed = (Boolean) personAttributes.getAttribute(personId, "employed");
stmt.setString(1, personId);
stmt.setDouble(2, age != null ? age : -1);
stmt.setString(3, sex != null ? sex : "");
stmt.setBoolean(4, license != null ? license : false);
stmt.setBoolean(5, carAvail != null ? carAvail : false);
stmt.setBoolean(6, employed != null ? employed : false);
stmt.addBatch();
}
stmt.executeBatch();
stmt.close();
}
/**
*
* @param population
* @param scenario
* @throws SQLException
*/
private static void writePlansTable(final Population population, String scenario) throws SQLException {
PreparedStatement stmt = connection.prepareStatement("INSERT INTO plans (agent_id, started_at, ended_at,"
+ "from_activity_type, to_activity_type, location_start, location_end, mode, scenario_id)"
+ " VALUES(?, ?, ?, ?, ?, ?, ?, ?, ?);");
filterTransitWalkLegs(population);
final CoordinateTransformation ct = TransformationFactory.getCoordinateTransformation(GlobalNames.UTM32N, GlobalNames.WGS84);
for(Person person : population.getPersons().values()) {
stmt.setString(1, person.getId().toString());
Plan plan = person.getSelectedPlan();
for(PlanElement pe : plan.getPlanElements()) {
if(pe instanceof Leg) {
Activity from = (Activity) plan.getPlanElements().get(plan.getPlanElements().indexOf(pe)-1);
Activity to = (Activity) plan.getPlanElements().get(plan.getPlanElements().indexOf(pe)+1);
Leg leg = (Leg) pe;
String mode = interpretLegMode(leg.getMode());
String fromActType = from.getType().contains(".") ? interpretActivityTypeString(from.getType()) : from.getType();
String toActType = to.getType().contains(".") ? interpretActivityTypeString(to.getType()) : to.getType();
double startTime = from.getEndTime() != org.matsim.core.utils.misc.Time.UNDEFINED_TIME ? from.getEndTime() :
leg.getDepartureTime();
double endTime = to.getStartTime() != org.matsim.core.utils.misc.Time.UNDEFINED_TIME ? to.getStartTime() :
leg.getDepartureTime() + leg.getTravelTime();
if(!diurnalCurves.containsKey(mode)) {
List<Integer> list = new ArrayList<>();
for(int i = 0; i < 24; i++) {
list.add(0);
}
diurnalCurves.put(mode, list);
}
List<Integer> list = diurnalCurves.get(mode);
int startHour = (int) startTime / 3600;
int endHour = (int) endTime / 3600;
if(startHour < 0 || endHour < 0 || startHour > 23 || endHour > 23)
continue;
for(int i = startHour; i <= endHour; i++) {
int val = list.get(i);
list.set(i, val+1);
}
diurnalCurves.put(mode, list);
stmt.setTime(2, new Time(TimeUnit.SECONDS.toMillis((long)startTime)));
stmt.setTime(3, new Time(TimeUnit.SECONDS.toMillis((long)endTime)));
stmt.setString(4, fromActType);
stmt.setString(5, toActType);
stmt.setObject(6, new PGgeometry(createWKT(ct.transform(from.getCoord()))));
stmt.setObject(7, new PGgeometry(createWKT(ct.transform(to.getCoord()))));
stmt.setString(8, mode);
stmt.setString(9, scenario);
stmt.addBatch();
}
}
}
try {
stmt.executeBatch();
} catch(BatchUpdateException e) {
System.out.println(e.getNextException().toString());
}
stmt.close();
}
private static Map<String, List<Integer>> diurnalCurves = new HashMap<>();
private static String interpretActivityTypeString(String type) {
if(type.startsWith("home"))
return "home";
else if(type.startsWith("work"))
return "work";
else if(type.startsWith("leis"))
return "leisure";
else if(type.startsWith("educ"))
return "education";
else if(type.startsWith("shop"))
return "shop";
else
return "other";
}
private static void filterTransitWalkLegs(final Population population) {
for(Person person : population.getPersons().values()) {
Plan selectedPlan = person.getSelectedPlan();
List<PlanElement> planElements = selectedPlan.getPlanElements();
for (int i = 0, n = planElements.size(); i < n; i++) {
PlanElement pe = planElements.get(i);
if (pe instanceof Activity) {
Activity act = (Activity) pe;
if (PtConstants.TRANSIT_ACTIVITY_TYPE.equals(act.getType())) {
PlanElement previousPe = planElements.get(i-1);
if (previousPe instanceof Leg) {
Leg previousLeg = (Leg) previousPe;
previousLeg.setMode(TransportMode.pt);
previousLeg.setRoute(null);
} else {
throw new RuntimeException("A transit activity should follow a leg! Aborting...");
}
final int index = i;
PopulationUtils.removeActivity(selectedPlan, index); // also removes the following leg
n -= 2;
i--;
}
}
}
}
for (Person person : population.getPersons().values()){
Plan selectedPlan = person.getSelectedPlan();
List<PlanElement> planElements = selectedPlan.getPlanElements();
for (int i = 0, n = planElements.size(); i < n; i++) {
PlanElement pe = planElements.get(i);
if (pe instanceof Leg) {
String legMode = ((Leg) pe).getMode();
if(legMode.equals(TransportMode.transit_walk)){
((Leg) pe).setMode(TransportMode.walk);
}
}
}
}
}
/**
*
* @param coord
* @return
*/
private static String createWKT(Coord coord) {
return "POINT(" + Double.toString(coord.getX()) + " " + Double.toString(coord.getY()) + ")";
}
private static String interpretLegMode(String mode) {
if(mode.contains("oneway") || mode.contains("twoway") || mode.contains("freefloat")) {
return "carsharing";
} else {
return mode;
}
}
/**
*
* @param population
* @param tablespace
*/
private static void createPopulationFromTable(final Population population, final String tablespace) {
try {
DatabaseTable table = DatabaseConstants.getDatabaseTable(DatabaseConstants.PLANS_TABLE);
Statement statement = connection.createStatement();
ResultSet results = statement.executeQuery("SELECT * from " + tablespace + "." + table.getTableName() + ";");
PopulationFactory factory = population.getFactory();
ObjectAttributes personAttributes = population.getPersonAttributes();
while(results.next()) {
// Create a person from the id
Id<Person> personId = Id.createPersonId(results.getString("id"));
Person current = factory.createPerson(personId);
population.addPerson(current);
// Create the person's attributes
/*
* double, string, boolean, boolean, boolean
*/
Double age = results.getDouble("age");
String sex = results.getString("sex");
String license = results.getString("license");
String carAvail = results.getString("car_avail");
Boolean employed = results.getBoolean("employed");
personAttributes.putAttribute(personId.toString(), "age", age != null ? age : -1);
personAttributes.putAttribute(personId.toString(), "sex", sex != null ? sex : "n");
personAttributes.putAttribute(personId.toString(), "license", license != null ? Boolean.parseBoolean(license) : false);
personAttributes.putAttribute(personId.toString(), "car_avail", carAvail != null ? Boolean.parseBoolean(carAvail) : false);
personAttributes.putAttribute(personId.toString(), "employed", employed != null ? employed : false);
}
results.close();
String sql = new PsqlUtils.PsqlStringBuilder(PsqlUtils.processes.SELECT.name(), tablespace, "plans")
.orderClause("person_id, element_index").build();
results = statement.executeQuery(sql);
Plan plan = null;
Id<Person> lastPersonId = null;
Id<Person> currentPersonId = null;
while(results.next()) {
Person current = population.getPersons().get(Id.createPersonId(results.getString("person_id")));
if(current != null) {
// check for null before dereferencing, and compare Ids by value rather than by reference
currentPersonId = current.getId();
if(!currentPersonId.equals(lastPersonId)) {
plan = factory.createPlan();
lastPersonId = currentPersonId;
current.addPlan(plan);
Boolean isSelected = results.getBoolean("selected");
if(isSelected) {
current.setSelectedPlan(plan);
}
}
String actType = results.getString("act_type");
if(actType != null) {
// Create an activity
double actStartTime = results.getDouble("act_start");
double actEndTime = results.getDouble("act_end");
double maxDuration = results.getDouble("act_duration");
PGgeometry geometry = (PGgeometry) results.getObject("act_coord");
Point point = (Point) geometry.getGeometry();
Activity act = factory.createActivityFromCoord(actType, new Coord(point.getX(), point.getY()));
if(Double.isFinite(actStartTime)) act.setStartTime(actStartTime);
if(Double.isFinite(actEndTime)) act.setEndTime(actEndTime);
if(Double.isFinite(maxDuration)) act.setMaximumDuration(maxDuration);
plan.addActivity(act);
} else {
// Create a leg
plan.addLeg(factory.createLeg(results.getString("leg_mode")));
}
}
}
} catch (SQLException e) {
e.printStackTrace();
}
}
static void writeScenarioMetaData(final Scenario scenario, String scenarioId) {
try {
log.info("Writing scenario metadata...");
AggregatedAnalysis.generate(scenario);
Map<String, String> modeCounts = AggregatedAnalysis.getModeCounts();
Map<String, String> modeDistances = AggregatedAnalysis.getModeDistanceStats();
Map<String, String> modeEmissions = AggregatedAnalysis.getModeEmissionStats();
PreparedStatement statement = connection.prepareStatement("INSERT INTO scenarios (district_id, year, population,"
+ " population_diff_2017,person_km, trips, diurnal_curve, carbon_emissions, seed, created_at, updated_at) "
+ "VALUES(?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?);");
String[][] diurnalCurves = new String[modeDistances.size()*24][3];
int i = 0;
for(Entry<String, List<Integer>> entry : MatsimPsqlAdapter.diurnalCurves.entrySet()) {
int j = 0;
for(Integer integer : entry.getValue()) {
diurnalCurves[i][0] = entry.getKey();
diurnalCurves[i][1] = Integer.toString(j);
diurnalCurves[i][2] = Integer.toString(integer);
j++;
i++;
}
}
String[] scenarioData = scenarioId.split("_");
statement.setString(1, scenarioData[0]);
statement.setInt(2, Integer.parseInt(scenarioData[1]));
statement.setInt(3, scenario.getPopulation().getPersons().size());
statement.setInt(4, 0);
statement.setArray(5, connection.createArrayOf("varchar", createArrayFromMap(modeDistances)));
statement.setArray(6, connection.createArrayOf("varchar", createArrayFromMap(modeCounts)));
statement.setArray(7, connection.createArrayOf("varchar", diurnalCurves));
statement.setArray(8, connection.createArrayOf("varchar", createArrayFromMap(modeEmissions)));
statement.setBoolean(9, true);
statement.setTimestamp(10, new Timestamp(System.currentTimeMillis()));
statement.setTimestamp(11, new Timestamp(System.currentTimeMillis()));
statement.addBatch();
try {
statement.executeBatch();
} catch(BatchUpdateException e) {
log.error(e.getNextException().toString());
}
statement.close();
} catch (SQLException e) {
log.error(e.getMessage());
}
}
private static String[][] createArrayFromMap(Map<String, String> map) {
String[][] array = new String[map.size()][2];
int i = 0;
for(Entry<String, String> entry : map.entrySet()) {
array[i][0] = entry.getKey();
array[i][1] = entry.getValue();
i++;
}
return array;
}
} |
In recent years, there has been proposed a variable light distribution headlight that can automatically change the light distribution thereof according to the running status of a vehicle. A headlight obtained by combining an LED and a liquid crystal shutter is known as such a variable light distribution headlight. However, in the illumination device obtained by combining an LED and a liquid crystal shutter, the LED is a diffusion light source having a large area. Therefore, it is impossible to finely control the light distribution with a refraction/reflection optical system, and it is difficult to deliver a light beam over a long distance. Furthermore, since the LED has a lower emission intensity than the conventional lamp light source, it is necessary to arrange a large number of LEDs in order to obtain a large light quantity as a headlight. This arrangement is expensive and requires a large installation space. Also, with a configuration in which a large number of high-intensity LEDs are arranged, it is necessary to take some measures for heat dissipation.
Patent Literature 1 discloses a vehicle lamp including a light source that emits a coherent light beam, and a hologram device storing a hologram pattern. The hologram pattern has been calculated such that a diffracted light beam reproduced by irradiation with the coherent light beam forms a light distribution pattern for the vehicle lamp with a predetermined light intensity distribution. |
A Decision Support System Based upon Economic Order Quantities for use in a Civil Engineering Contractor A decision support model was created based upon the theory of economic order quantities and maximisation of return for the investor. A new model for maximising return on investment (ROI) is presented which provides more reliable results for low-cost, small-volume items as typified in the construction maintenance environment. The model was incorporated into the materials purchase system of a civil engineering contractor to guide the selection of order quantities. Estimated savings on the overheads of the purchasing process are presented for sample items.
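For context, the classical Wilson EOQ formula that underlies such decision support models can be sketched in a few lines of Python. The cost parameters below are illustrative assumptions for demonstration, not figures from the paper, and this is the textbook formula rather than the paper's modified ROI-maximising model:

```python
import math

def economic_order_quantity(annual_demand, cost_per_order, holding_cost_per_unit):
    """Classic Wilson EOQ: the order quantity Q* that minimises the sum of
    annual ordering cost (annual_demand / Q * cost_per_order) and annual
    holding cost (Q / 2 * holding_cost_per_unit)."""
    return math.sqrt(2.0 * annual_demand * cost_per_order / holding_cost_per_unit)

# Illustrative example: 1000 units/year demand, 50 per order placed,
# 2 per unit per year to hold stock
q_star = economic_order_quantity(1000, 50, 2)
print(round(q_star))  # -> 224
```

For low-cost, small-volume maintenance items the holding and ordering costs are small and uncertain, which is precisely where the abstract argues the classical formula becomes unreliable.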
Cochlear implantation in congenital cochlear abnormalities. Many children have benefited from cochlear implant devices, including those with congenital malformation of the inner ear. The results reported in children with malformed cochleae are very encouraging. We describe 2 cases of Mondini's malformation with severe sensorineural hearing loss. Cochlear implantation was performed and both of them underwent post-implantation speech rehabilitation. Post-implantation, both of them were noted to respond to external sound. But the second case developed facial twitching a few months after the device was switched on. It is important to evaluate the severity of the inner ear deformity and the other associated anomalies in pre-implantation radiological assessment in order to identify problems that may complicate the surgery and subsequent patient management.
Chatbots can be set up in just a few clicks using the company's new myEinstein tool.
Salesforce has updated its Einstein artificial intelligence platform with predictive models and automated chatbots that have been designed with the company's end users in mind.
During the Dreamforce 2017 conference in San Francisco, Salesforce announced the new features that will not require any programming and can be configured in just a few clicks.
To make the setup process easier, the company has also created a new tool called myEinstein that will automate the building, training and deployment of AI models. These models will be based on structured and unstructured Salesforce user data and once the chatbots have been configured, they can be added to Salesforce workflows easily.
Using Einstein Prediction Builder, organisations can build predictive models using any custom field or object from the company's software such as predicting whether or not a customer will move their business to another company.
Businesses can also utilise Einstein Bots to create and deploy customised service chatbots using natural language processing that can deal with a range of routine customer issues to free up their customer support representatives.
We would like to see more businesses take advantage of these new features and create their own chatbots in Salesforce once they become generally available.
Growth of Si-doped GaN Nanowires With Low Density For Power Device Applications In this work, the growth of low-density self-catalyzed n-doped gallium nitride (GaN) nanowires (NWs) on Si substrate has been investigated for power device applications. In the first part of this study, the influence of the growth temperature on the morphology and the density of the NWs has been studied. We have found that the NWs density can be reduced to 1.55 × 10⁹ NWs/cm² at low growth temperature. However, under these conditions, a 1560 nm thick parasitic layer is also grown connecting the NWs by their bottom. To minimize this parasitic growth, we have developed a two-step growth procedure allowing us to maintain the NWs density around 1.91 × 10⁹ NWs/cm², while minimizing the parasitic layer's thickness to 158 nm. In the second part, we have optimized the growth conditions to keep the NW characteristics (low density and thin parasitic layer) while inducing their n-type doping using silicon.
Burn Severity and Albedo Analysis Concerning the Mendocino Complex Fire Wildfires leave significant impacts on many ecosystems. In California, the Mendocino Complex Fire was the second-largest fire in California history. This fire left a sizable burn scar within the Mendocino National Forest. This work examines the normalized burn ratio to quantify burn severity and provide insight into wildfire impacts on land albedo and its recovery. |
import urllib, json, dateutil.parser, os, traceback
from video.utils import build_url
from video.utils.parse_dict import parse_dict
from gdata.youtube.service import YouTubeService as gdata_YouTubeService
import gdata.media
#from gdata.youtube.service import YouTubeError as gdata_YouTubeError
class GetYoutubeVideos:
GDATA_YOUTUBE_VIDEOS_URL='https://gdata.youtube.com/feeds/api/videos'
def __init__(
self,q=None,max_results=20,author=None,orderby='published',
videos_json=None,youtube_id_url=None,
limit_time='all_time'
):
"""
perform search on youtube
parameters:
q: (string) the query to search for
max_results: (int) maximum number of results to return
author: (string) limit to videos uploaded by this youtube user
orderby: (string) how to order the videos, possible values:
relevance, published, viewCount, rating
limit_time: (string) limit to videos uploaded in a certain timeframe
possible values: today, this_week, this_month, all_time
"""
self.videos=[]
if videos_json is None and youtube_id_url is not None:
videos_json=urllib.urlopen(youtube_id_url+'?alt=json').read()
if videos_json is None and q is not None:
params={
'q':q,
'max-results':max_results,
'alt':'json',
'orderby':orderby
}
if author is not None: params['author']=author
if limit_time is not None: params['time']=limit_time
url=build_url(self.GDATA_YOUTUBE_VIDEOS_URL,params)
videos_json=urllib.urlopen(url).read()
if videos_json is not None and len(videos_json)>0:
try:
videos_json=videos_json[videos_json.find('{'):videos_json.rfind('}')+1]
yvideos=json.loads(videos_json)
            except ValueError:
                # url is only set on the search branch, so look it up safely to avoid a NameError
                print "failed to parse youtube response"
                print "youtube_id_url=%r" % youtube_id_url
                print "url=%r" % locals().get('url')
                print "videos_json=%r" % videos_json
                raise
yentries=parse_dict(yvideos,{'feed':'entry'})
if yentries is None:
yentry=parse_dict(yvideos,'entry')
if yentry is None:
yentries=[]
else:
yentries=[yentry]
for yentry in yentries:
video=self._parse_youtube_entry(yentry)
self.videos.append(video)
def _parse_youtube_entry(self,yentry):
video={
'id':parse_dict(yentry,{'id':'$t'}),
'title':parse_dict(yentry,{'title':'$t'},validate={'title':{'type':'text'}}),
'description':parse_dict(yentry,{'content':'$t'},validate={'content':{'type':'text'}}),
}
published=parse_dict(yentry,{'published':'$t'})
if published is not None:
video['published']=dateutil.parser.parse(published)
yauthors=parse_dict(yentry,'author',default=[])
if len(yauthors)>0:
yauthor=yauthors[0]
video['author']=parse_dict(yauthor,{'name':'$t'})
ylinks=parse_dict(yentry,'link',default=[])
for ylink in ylinks:
link=parse_dict(ylink,'href',validate={'type':'text/html','rel':'alternate'})
if link is not None:
video['link']=link
ymediaGroup=parse_dict(yentry,'media$group',default={})
ymediaContents=parse_dict(ymediaGroup,'media$content',default=[])
for ymediaContent in ymediaContents:
embed_url=parse_dict(ymediaContent,'url',validate={'isDefault':'true'})
if embed_url is not None:
video['embed_url']=embed_url
video['embed_url_autoplay']=embed_url+'&autoplay=1'
ymediaThumbnails=parse_dict(ymediaGroup,'media$thumbnail',default=[])
if len(ymediaThumbnails)>0:
ymediaThumbnail=ymediaThumbnails[0]
video['thumbnail480x360']=parse_dict(ymediaThumbnail,'url')
if len(ymediaThumbnails)>1:
ymediaThumbnail=ymediaThumbnails[1]
video['thumbnail90x120']=parse_dict(ymediaThumbnail,'url')
return video
class UploadYoutubeVideo():
isOk=False
errMsg=''
errDesc=''
# http://gdata.youtube.com/schemas/2007/categories.cat
ALLOWED_CATEGORIES=[
"Film","Autos",'Music','Animals','Sports','Sports','Shortmov','Videoblog',
'Games','Comedy','People','News','Entertainment','Education','Howto','Nonprofit',
'Tech','Movies_Anime_animation','Movies','Movies_Comedy','Movies_Documentary',
'Movies_Action_adventure','Movies_Classics','Movies_Foreign','Movies_Horror',
'Movies_Drama','Movies_Family','Movies_Shorts','Shows','Movies_Sci_fi_fantasy',
'Movies_Thriller','Trailers',
]
# title - string (required)
# category - string (required) - from the list of self.ALLOWED_CATEGORIES
# filename - string (required) - path of file to download
# ytService - object (required) - YouTubeService authenticated object
# description - string (optional)
# keywords - string (optional) - comma separated list of keywords
# location - tuple (optional) - coordinates e.g. (37.0,-122.0)
# developerTags - list (optional) - list of developer tags
# isPrivate - boolean (optional)
def __init__(
self, title, category, filename, ytService,
description=None, keywords=None, location=None, developerTags=None,
isPrivate=False
):
if category not in self.ALLOWED_CATEGORIES:
self.errMsg='invalid category'
            self.errDesc='you must specify a category from the following list: '+str(self.ALLOWED_CATEGORIES)
elif len(title)<5:
self.errMsg='invalid title'
self.errDesc='you must specify a title'
elif len(filename)<5 or not os.path.exists(filename):
self.errMsg='invalid filename'
self.errDesc='you must specify a filename to upload'
else:
if description is not None:
description=gdata.media.Description(description_type='plain',text=description)
if keywords is not None:
keywords=gdata.media.Keywords(text=keywords)
if location is not None:
where=gdata.geo.Where()
where.set_location(location)
else:
where=None
if isPrivate:
private=gdata.media.Private()
else:
private=None
mediaGroup=gdata.media.Group(
title=gdata.media.Title(text=title),
description=description,
keywords=keywords,
category=[
gdata.media.Category(
text=category,
scheme='http://gdata.youtube.com/schemas/2007/categories.cat',
label=category
)
],
player=None,
private=private
)
videoEntry=gdata.youtube.YouTubeVideoEntry(
media=mediaGroup,
geo=where
)
if developerTags is not None:
videoEntry.addDeveloperTags(developerTags)
try:
self.newEntry=ytService.InsertVideoEntry(videoEntry, filename)
self.isOk=True
except Exception, e:
self.errMsg='exception in InsertVideoEntry'
self.errDesc=str(e)+' '+traceback.format_exc()
class YouTubeService(gdata_YouTubeService):
def __init__(self,developer_key,authsub_token):
gdata_YouTubeService.__init__(
self,
developer_key=developer_key
)
self.SetAuthSubToken(authsub_token)
|
# tests/tabnet/test_utils.py
import pytest
import torch
class TestGhostBatchNorm1d():
@pytest.mark.parametrize("batch_size, input_size, momentum, virtual_batch_size",
[
(128, 32, 0.1, 4),
(1024, 512, 0.01, 128),
])
def test_statistics_2d(self, batch_size, input_size, momentum, virtual_batch_size):
"""tests ghost batch norm statistics"""
from tabnet.utils import GhostBatchNorm1d
input = torch.randn(size=(batch_size, input_size))
gbn = GhostBatchNorm1d(input_size=input_size, momentum=momentum, virtual_batch_size=virtual_batch_size)
output = gbn(input)
mean = torch.mean(output)
std = torch.std(output)
assert torch.allclose(mean, torch.tensor(0.0))
assert torch.allclose(std, torch.tensor(1.0), atol=1e-4)
@pytest.mark.parametrize("batch_size, sequence_length, input_size, momentum, virtual_batch_size",
[
(128, 10, 32, 0.1, 128),
(1024, 5, 512, 0.01, 128),
(1024, 100, 512, 0.01, 10),
(128, 512, 512, 0.1, 2),
])
def test_statistics_3d(self, batch_size, sequence_length, input_size, momentum, virtual_batch_size):
from tabnet.utils import GhostBatchNorm1d
input = torch.randn(size=(batch_size, sequence_length, input_size))
gbn = GhostBatchNorm1d(input_size=input_size, momentum=momentum, virtual_batch_size=virtual_batch_size)
output = gbn(input)
mean = torch.mean(output, dim=(0, 1))
std = torch.std(output, dim=(0, 1))
# TODO verify proper numerical differences
assert torch.allclose(mean, torch.zeros_like(mean), atol=1e-7)
assert torch.allclose(std, torch.ones_like(std), atol=1e-3)
|
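The `GhostBatchNorm1d` layer exercised by these tests is not shown here. As a rough illustration of the idea the statistics checks rely on — normalizing each virtual batch with its own mean and variance — here is a minimal NumPy sketch (the function name and details are assumptions, not the tabnet implementation; running statistics and the learnable affine parameters are omitted):

```python
import numpy as np

def ghost_batch_norm(x, virtual_batch_size, eps=1e-5):
    """Normalize each chunk of `virtual_batch_size` rows independently (training mode only)."""
    chunks = []
    for start in range(0, len(x), virtual_batch_size):
        vb = x[start:start + virtual_batch_size]
        mean = vb.mean(axis=0)
        var = vb.var(axis=0)
        chunks.append((vb - mean) / np.sqrt(var + eps))
    return np.concatenate(chunks, axis=0)

rng = np.random.default_rng(0)
x = rng.normal(size=(128, 32))
out = ghost_batch_norm(x, virtual_batch_size=4)
# each virtual batch is standardized, so the overall mean is ~0 and the std ~1,
# which is exactly what the allclose assertions in the tests above verify
print(out.mean(), out.std())
```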
Synthesis and spectroscopic properties of platinum(II) terpyridine complexes having an arylborane charge transfer unit. Synthesis, redox, spectroscopic, and photophysical properties of a new class of Pt(II) complexes of the type + are reported, where Ln is 4'-phenyl(dimesitylboryl)-2,2':6',2″-terpyridine (L1) or 4'-duryl(dimesitylboryl)-2,2':6',2″-terpyridine (L2). The free L1 or L2 ligand in CH3CN shows the absorption band responsible for intramolecular charge transfer (CT) from the π-orbital of the aryl group in L1 or L2 (π(aryl)) to the vacant p-orbital on the boron atom (p(B)), in addition to the π–π* absorption in the 2,2':6',2″-terpyridine (tpy) unit. In particular, the L1 ligand shows an intense CT absorption band as compared with L2. Such intramolecular π(aryl)–p(B) CT interactions in L1 give rise to large influences on the redox, spectroscopic, and photophysical properties of +. In practice, + shows strong room-temperature emission in CHCl3 with a quantum yield and lifetime of 0.011 and 0.6 μs, respectively, which has been explained by synergetic effects of Pt(II)-to-L1 MLCT and π(aryl)–p(B) CT interactions on the electronic structures of the complex. In the case of +, the dihedral angle between the planes produced by the tpy and duryl(dimesitylborane) groups is very large (84°) as compared with that between the tpy and phenyl(dimesitylborane) units in + (26–39°), which disturbs electron communication between the Pt(II)-tpy and arylborane units in +. Thus, + is nonemissive at room temperature. The important roles of the synergetic CT interactions in the excited-state properties of the + complex are shown clearly by emission quenching of the complex by a fluoride ion. The X-ray crystal structure of + is also reported. |
Robust Tabu Search Algorithm for Planning Rail-Truck Intermodal Freight Transport In this paper a new efficient tabu search algorithm for assigning freight to intermodal transport connections is developed. Properties of the problem that can be used to design robust heuristic algorithms based on local search methods are also formulated. The quality of solutions produced by the tabu search algorithm is compared with that of the often-recommended greedy approach. Introduction Road transportation is very expensive. Because of that, carrying goods over a long distance is often achieved as a combination of different types of transportation (e.g. with the use of rail, ships and trucks). Such a means of transportation is called intermodal freight transport. Problems related to intermodal transport are intensely studied in Operations Research. Thanks to such activities, transport infrastructure can be better designed and vehicle routes and delivery schedules can be planned more consciously, which results in savings and consequently may lead to lower prices of transported goods. A widely used definition of intermodal freight transport was introduced at the European Conference of Ministers of Transport. It was defined as "the movement of goods in one and the same loading unit or vehicle by successive modes of transport without handling of the goods themselves when changing modes". A second commonly used definition was introduced by C. Macharis and Y. Bontekoning. They define it as "the combination of at least two modes of transport in a single transport chain, without the change of container for the goods, with most of the route travelled by rail, inland waterway or ocean-going vessel and with the shortest possible initial and final journeys by road". More definitions of intermodal freight transport can be found in. 
Problems related to intermodal transport are much more complex to solve than problems which consider a fixed type of transportation (so-called unimodal problems). Moreover, models used to solve unimodal problems are also applied to intermodal transport problems. For this reason, some researchers are convinced that intermodal transportation research is emerging as a new transportation research field, that it is still in the pre-paradigmatic phase and that proper models are not known at the moment (see ). Y. Bontekoning, C. Macharis and J. Trip described the actual state of the intermodal transport research field and identified the actions which should be taken to make it resemble "normal science". They showed that intermodal transport research is in the pre-paradigmatic phase. They found plenty of small research communities that address intermodal transport problems and proposed transforming these small communities into one or two large research communities. C. Macharis and Y. Bontekoning underlined that intermodal transport is a very complex process that involves many decision-makers. They distinguished four types of decision-makers and emphasized that some decisions can have long-term effects (e.g. planning and building railway infrastructure) while others can have short-term effects (temporary changes in a timetable). Decision-makers should work in close collaboration to achieve the best results. Many decisions concern a variety of areas where optimization can be used; several authors have presented overviews of articles and methods used in intermodal transport research and classified them by the type of decision maker and by the time horizon of the operations problem. A. Caris, C. Macharis and G. Janssens proposed new research fields regarding decision support in intermodal freight transport. 
They gave an overview of applications that support the decisions of policy makers, terminal network design, intermodal service network design, intermodal routing, ICT (Information and Communication Technologies) innovations and drayage operations. They pointed out that there is no link between models for terminal network design and intermodal service network design. They recognized the need for solution methods for intermodal freight transport optimization problems that can accept multiple objective functions, transportation mode schedules, economies of scale and demanded times of delivery. Intermodal freight transport, due to its high complexity and the many constraints imposed on solutions, constitutes a challenge for many types of heuristics and metaheuristics. A. Caris and G. Janssens optimized pre- and end-haulage of intermodal container terminals using a heuristic approach. They modeled the problem as a Full Truckload Pickup and Delivery Problem with Time Windows (FTPDPTW), where vehicles carry full truckloads to and from an intermodal terminal. Time windows were used to represent the time interval in which the service at a customer must start. They proposed a two-phase insertion heuristic and an improvement heuristic with three types of neighborhood. The solution is obtained by the two-phase insertion heuristic and afterwards improved by the improvement heuristic. There are some articles devoted to commercial decision support systems (DSS). G. Kelleher, A. El-Rhalibi and F. Arshad described features of PISCES, an integrated system for planning intermodal transport. They presented methods used in PISCES for dealing with triangulation in the pick-up and drop scenario. Another study on a decision support system was published by A. Rizzoli, N. Fornara and L. Gamberdella. They presented the terminal simulator component of the Platform project, funded by the Directorate General VII of the European Community. 
The presented software can model processes taking place in an intermodal road or rail terminal. It was designed on the basis of the discrete-event simulation paradigm. The software user can define the structure of the terminal and various input data. It allows us to check how changing the terminal structure may influence its performance. Research on systems that support decision-making by more than one policymaker is also noteworthy. A. Febraro, N. Sacco and M. Saeednia proposed an agent-based framework for cooperative planning of intermodal freight transport chains. In this system, many actors can work together and negotiate their decisions to achieve a common goal. Transport companies are not willing to share their tariffs on their websites. This information may be hidden due to many factors, e.g. cost changes. Costs can change overnight because of fluctuations in exchange rates, the political situation, etc. Moreover, prices vary from one company to another. Researchers need such data to develop better algorithms. Special price models have therefore been proposed. T. Hanssen, T. Mathisen and F. Jorgensen proposed a generalized transport costs model that can be used to assess mean prices of different types of transport over a given distance. The intermodal transport research field demands a knowledge base that will make it easier for scientists to conduct research on real data. Experimental results will then be more reliable. It should be in the interest of governments and all shipping companies to support the building of such a database. Problem formulation In a planning phase, a transportation management company has to realize a certain number of transport tasks. A transport task is to carry some amount of goods from suppliers to customers. Transport can be organized in two ways: (i) by a single truck, (ii) by intermodal transport (truck-train-truck). Goods are transported in containers or semitrailers adapted to rail transport. 
There are known locations of customers, suppliers and intermodal terminals, and the distances between: (i) intermodal terminals, (ii) intermodal terminals and customers, (iii) suppliers and customers. Of course, in the first case this is the length of the rail route and in the other cases it is the length of the road route. Cargo trains implementing intermodal freight transport follow a schedule of courses. Each course specifies the initial and final intermodal terminal, the time of delivery, the unit cost of the course and the number of free wagons. The number of free wagons is updated online based on reservations made by transportation management companies. Attachment of wagons at intermediate stations is forbidden. The objective of optimization is the assignment of tasks to train courses while minimizing the overall cost of transport. Let J = {1, ..., n} be a set consisting of n transport tasks and let T = {1, ..., t} be the set of railway courses. For each task j ∈ J and each course i ∈ T the distance achieved by road transport d_{j,i} is given. Note that the distance d_{j,i} is the result of summing the distances between the supplier and the initial intermodal terminal and between the final intermodal terminal and the customer. The distance achieved by road transport between supplier and customer for a fixed j ∈ J is marked as d_{j,0}. The railway distance for the course i ∈ T is r_i. The course i ∈ T has l_i ≥ 0 free cargo wagons to load. The overall price for the carriage of freight from the supplier to the customer using cargo train i ∈ T is c_{j,i}. The cost of direct road transport from the supplier to the customer specified in the task j is c_{j,0}. Let the assignment of a course to the task j be marked as a_j, a_j ∈ {0} ∪ T (a_j = 0 if the transport is carried only by truck). The vector a = (a_1, ..., a_n) denotes the assignment of all tasks to the courses. 
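As an illustration, the cost of such an assignment vector can be evaluated, and re-evaluated in constant time after a single-task move or a two-task swap, as in this sketch (the cost matrix is made up; column 0 stands for road-only transport):

```python
# c[j][i] = cost of serving task j via option i; option 0 = road-only.
c = [
    [100, 60, 80],
    [90, 70, 50],
    [120, 65, 110],
]

def cost(a):
    """Total cost of assignment a, where a[j] is the option chosen for task j."""
    return sum(c[j][a[j]] for j in range(len(a)))

def cost_after_reassign(a, total, j, i):
    """O(1) update of the total after assigning task j to option i."""
    return total - c[j][a[j]] + c[j][i]

def cost_after_swap(a, total, j, k):
    """O(1) update of the total after swapping the assignments of tasks j and k."""
    return total - c[j][a[j]] - c[k][a[k]] + c[j][a[k]] + c[k][a[j]]

a = [0, 0, 0]                                 # everything by road
total = cost(a)                               # 100 + 90 + 120 = 310
print(total)                                  # 310
print(cost_after_reassign(a, total, 0, 1))    # 310 - 100 + 60 = 270
```

Recomputing the sum from scratch is O(n) per candidate move; these incremental updates are what make scanning a large neighborhood cheap.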
The total cost of transport for an assignment a is Cost(a) = Σ_{j=1}^{n} c_{j,a_j}. We would like to find an assignment a* for which the total cost of transport is as small as possible, i.e. Cost(a*) = min_{a ∈ A} Cost(a), where A is the set of all possible assignments; since each of the n tasks can be assigned to one of the t courses or to road transport, |A| = (t + 1)^n. A feasible assignment must additionally respect wagon availability: the number of tasks assigned to a course i ∈ T cannot exceed l_i. Properties of the problem In the current subsection we formulate certain properties of the problem, which can be used in the design of efficient heuristic algorithms based on local search methods. The first one relates to the method that allows us to determine the lower bound of the objective function value for the optimal solution; the second one allows us to reduce the number of solutions in the neighborhood by eliminating a subset with worse solutions. Proposition 2. Let a and b be assignments of tasks to the trains such that Cost(a) < Cost(b). Then at least one task has a lower cost of transport in the assignment a than in the assignment b. Proposition 3. Let a′ be the assignment of tasks to the trains resulting from a by assigning the task j to train i; then Cost(a′) = Cost(a) − c_{j,a_j} + c_{j,i}. Proposition 4. Let a′ be the assignment of tasks to the trains resulting from a by interchanging the assignments of tasks j and k; then Cost(a′) = Cost(a) − c_{j,a_j} − c_{k,a_k} + c_{j,a_k} + c_{k,a_j}. Note that if Cost(a) is known, the expressions above can be determined in time O(1). Example A certain logistic company has to realize n = 5 transportation tasks. Transportation may be achieved with t = 3 courses of cargo trains. All trains have l_i = 2 free cargo wagons to load. The transportation costs (road and intermodal) are given in Table 1. It is easy to see that the price of road transport is 9623, whereas the lowest price of transport is LB = 5472 (marked in bold). Transport with the lowest price requires the use of 5 wagons of course 3. This is not possible because rail connection 3 has only two free wagons. In Table 1 a feasible connection (that takes into account the number of free wagons) with total price 5893 is marked in bold. Let us consider a greedy strategy that assigns tasks to the cheapest intermodal freight transport connection. 
First, the tasks with the biggest difference in price between road and rail transport are assigned. Tasks that cannot be carried by rail at the cheapest cost are carried by truck. The described strategy is used in many logistic companies, where the assignment of transport tasks is performed by forwarding agents. A forwarding agent concentrates on finding the cheapest solution of the transport problem, because his salary depends (directly or indirectly) on the income, that is, the difference between the price of transport negotiated with a customer and the real price. Let us assume that road transport prices were negotiated with customers; the profits from intermodal freight transport are, respectively: 905, 707, 739, 875, 925. Tasks 1 and 5 generate the biggest profits, therefore they will be realized by intermodal freight transport. The rest of the tasks will be realized by trucks. An approximation algorithm In order to solve the stated problem we propose a local search algorithm based on the tabu search (TS) approach. The tabu search is one of the best methods of constructing heuristic algorithms. This is confirmed for many optimization problems, both as a main method (for scheduling of tasks, vehicle routing, packing, container loading) and as a key element of higher-level metaheuristics, e.g. the golf method. An algorithm based on this method, in each iteration, searches the neighborhood of the basic solution for the solution with the best objective function value. In every iteration the best solution replaces the basic solution. To prevent search loops, the tabu mechanism is used. It is usually implemented as a list with limited length. In each iteration, selected attributes of subsequently visited solutions are stored. 
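The greedy, savings-ordered strategy described above might be sketched as follows (the data layout, numbers and helper names are illustrative assumptions, not the paper's implementation):

```python
# road[j]: road-only cost of task j; best_train[j] = (course, intermodal cost),
# i.e. the cheapest rail option per task; free_wagons: capacity per course.
road = [1000, 800, 900]
best_train = [(2, 700), (2, 650), (1, 870)]
free_wagons = {1: 2, 2: 1}

def greedy_assign(road, best_train, free_wagons):
    wagons = dict(free_wagons)                # work on a copy of the capacities
    assignment = [0] * len(road)              # 0 = road transport
    # serve the tasks with the biggest rail-vs-road savings first
    order = sorted(range(len(road)),
                   key=lambda j: road[j] - best_train[j][1], reverse=True)
    for j in order:
        course, price = best_train[j]
        if price < road[j] and wagons.get(course, 0) > 0:
            assignment[j] = course
            wagons[course] -= 1
    return assignment

print(greedy_assign(road, best_train, free_wagons))
```

In this toy instance the single wagon of course 2 is claimed by the task with the biggest savings, so the second task falls back to road transport — exactly the myopic behavior the paper's experiments quantify.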
Contents of the list divide the neighborhood into two subsets: a set of forbidden and a set of feasible solutions. Forbidden solutions are not searched, except when a forbidden solution is better than the best solution found so far. The search stops when a given number of iterations without improving the criterion value has been reached or the algorithm has performed a given total number of iterations. Moves and neighborhood The neighborhood of a solution is generated by moves. In our problem, the solution is represented by a vector of task assignments. The neighborhood of a solution a can be created by exchanging the assignments of two tasks or by changing the assignment of a single task. Let EX be a set of such moves and N(EX, a) = {a^v : v ∈ EX} be the neighborhood of solution a generated by the move set EX. For a feasible solution a, every move v ∈ EX generates a feasible solution. Let v = (j, k) be the move that changes the assignment of the task j to the course k. We define the new assignment a^v obtained by executing move v in a = (a_1, ..., a_n) as follows: a^v = (a_1, ..., a_{j−1}, k, a_{j+1}, ..., a_n). We propose a reduction of the neighborhood size to the set of promising moves. The move v is promising if its execution gives a chance to obtain a better solution. From Proposition 2 we have a simple condition for obtaining a better solution: the move v = (j, k) is promising if c_{j,k} < c_{j,a_j}. We will mark the reduced set of moves as V. Computation results The main objective of the experimental studies was to evaluate the usefulness of advanced heuristics in assigning transportation tasks to intermodal transport. The experimental test was carried out on randomly generated data. The set of 120 instances is divided into 12 groups. Each group consists of 10 instances with the same number of tasks n and freight trains t. The study was conducted for groups where the number of tasks n ∈ {50, 100, 200, 500} and the number of cargo trains t ∈ {10, 20, 30}. 
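As a rough sketch of how the reduced move set V and the tabu list fit together, consider the loop below (single-task moves only; the costs, capacities, tenure and aspiration rule are illustrative assumptions, not the paper's TS(V) implementation):

```python
from collections import deque

# c[j][i]: cost of task j on option i (0 = road); cap[i]: free wagons per course.
c = [
    [100, 60, 80],
    [90, 70, 50],
    [120, 65, 110],
    [80, 40, 70],
]
cap = {1: 2, 2: 1}

def cost(a):
    return sum(c[j][a[j]] for j in range(len(a)))

def feasible(a, j, k):
    """Can task j move to option k without exceeding wagon capacity?"""
    if k == 0:
        return True
    used = sum(1 for t, x in enumerate(a) if x == k and t != j)
    return used < cap[k]

def tabu_search(iters=50, tenure=4):
    a = [0] * len(c)                       # start: everything by road
    best, best_cost = list(a), cost(a)
    cur_cost = best_cost
    tabu = deque(maxlen=tenure)            # recently reversed (task, old option) pairs
    for _ in range(iters):
        candidates = []
        for j in range(len(c)):
            for k in range(len(c[j])):
                # reduced set V: only promising moves that lower task j's own cost
                if k != a[j] and c[j][k] < c[j][a[j]] and feasible(a, j, k):
                    delta = c[j][k] - c[j][a[j]]   # O(1) update, as in Proposition 3
                    candidates.append((cur_cost + delta, j, k))
        if not candidates:
            break
        candidates.sort()
        for new_cost, j, k in candidates:
            if (j, k) not in tabu or new_cost < best_cost:  # aspiration criterion
                tabu.append((j, a[j]))
                a[j] = k
                cur_cost = new_cost
                if cur_cost < best_cost:
                    best, best_cost = list(a), cur_cost
                break
        else:
            break                           # every candidate move is tabu
    return best, best_cost

print(tabu_search())
```

On this small instance the loop fills the cheaper courses up to their wagon capacities and routes the remainder by road.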
Railway distances r_i for course i ∈ T were generated from the uniform distribution on , and in intermodal transport the distances achieved by road transport d_{j,i} were generated from the uniform distribution on . A road transport distance is usually shorter than the distance of intermodal transport, thus we determine this distance from the expression d_{j,0} = min_{i=1,...,t} (r_i − d_{j,i}). We assumed unit costs of road transport c_{j,0} = d_{j,0}. The cost of task j ∈ J carried by freight train i ∈ T includes the cost of road transport (d_{j,i}), the cost of handling in an intermodal terminal (h_j), the cost of transporting by freight train depending on the distance (α·r_i), and is expressed by the formula c_{j,i} = d_{j,i} + h_j + α·r_i, where α is the factor of the cost of rail transport relative to the cost of road transport. The research was carried out for three values of the factor α (0.8, 0.65, 0.5) and h_j = 60. Note that for α = 0.8 and transport distance 300 the costs of road and intermodal transport are comparable. The number of free wagons was the same for each cargo train. We considered two levels of wagon availability for loading: (i) a few free wagons: l_i = n/t + 1, (ii) many free wagons: l_i = 2·n/t. The algorithm TS was implemented with the reduced neighborhood V, written in C++ and run on a Lenovo T540p personal computer with an i7-4710 2.5 GHz processor. Further, we wrote a greedy algorithm G (see subsection 2.2 for details) and an algorithm R which computes the total cost of road-only transportation. The algorithm TS performed 1000 iterations and started from the solution in which all tasks were assigned to road transport. Since there are no algorithms for solving the considered problem in the literature, we compared TS, G and R with the lower bound LB. 
For each instance, we defined the following values: Cost(A) — the total cost of transportation of the tasks from set J found by the algorithm A, A ∈ {TS, G, R}; PRD(A) — the mean value of the relative cost of the solution found by algorithm A with respect to the lower bound LB, i.e. PRD(A) = 100% · (Cost(A) − LB)/LB; CPU — the mean computation time (in seconds). The results of the computer computations are summarized in Table 2. The first column contains the number of tasks and freight trains in each instance of the group, the second contains the average relative cost of road transport, the next three columns refer to instances with a small number of available wagons and include: the average relative cost of the solution generated by the greedy algorithm G, the average relative cost of the solution generated by the tabu search algorithm TS, and the average number of transports carried out exclusively by road in the solution generated by the algorithm TS. The other three columns refer to instances with many free cargo wagons. The table shows the results for the different values of the factor. At the beginning of the analysis of the results collected in Table 2, it should be noted that the proposed tabu search algorithm successfully finds task assignments for intermodal transport. The solutions generated by the TS algorithm for intermodal transport with a limited number of free cargo wagons are only a few percent worse than transport with minimum cost, i.e. with an unlimited number of free cargo wagons and trucks. It is easy to notice that the algorithm TS finds significantly better solutions for instances with a large number of wagons to be loaded. According to Table 2, the tabu search heuristic performs significantly better than the greedy heuristic. Its average relative cost does not exceed 3.2% for instances with a large number of free wagons and 8.3% for instances with a small number of free wagons. In the case of the greedy algorithm this cost varies correspondingly from 3.2% to 38.3% and from 6.0% to 52.6%. 
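Assuming the usual percent-relative-deviation definition PRD = 100%·(Cost − LB)/LB, the measure can be sketched in one line; the figures below are the feasible cost and lower bound from the earlier example instance:

```python
def prd(cost_a, lb):
    """Percent relative deviation of a solution's cost from the lower bound LB."""
    return 100.0 * (cost_a - lb) / lb

print(round(prd(5893, 5472), 2))   # example instance: ~7.69% above the lower bound
```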
The superiority of the algorithm TS over the greedy algorithm increases as the factor decreases (i.e. with the increasing attractiveness of intermodal transport). Comparing the cost of road transport and intermodal transport, it can be noted that as the factor decreases, the difference between the cost of road transport and the intermodal one increases. For the factor equal to 0.5 it is close to 70%. The experiment shows that for the highest factor value the profit of using intermodal transport is admittedly smaller (approximately 10%); however, in our opinion, it is still important from the standpoint of business activity. Table 3 shows the average computational time for 9 groups of instances (for groups with n = 50 the computation time was less than 0.1 s). The calculations were performed for two versions of the algorithm TS: TS(V) with the reduced neighborhood V and TS(U) with the full neighborhood U. It is easy to see that the computation time increases with the number of tasks n and the number of cargo trains t. Comparing the computation times of TS(V) and TS(U), it should be highlighted that TS(V) runs 30% to 50% faster than TS(U). The computation time of algorithm TS(V) does not exceed 6 seconds for instances with the biggest number of tasks and cargo trains. Conclusion In this paper, we developed a tabu search algorithm to minimize the total cost of transport. We have considered intermodal transport as an alternative to the most commonly used road transport. We have formulated several properties of the problem, which were used not only to increase the efficiency but also to reduce the computation time of the TS algorithm. We experimentally showed that the global cost optimization of intermodal transport allows us to achieve significantly higher profits than the greedy approach used in practice. The results obtained and our research experience encourage us to extend the proposed ideas to multi-criteria problems generated by intermodal transport. |
KALAMAZOO - Derrick Mitchell says his itch to play football never went away.
The Paw Paw native spent 10 years in the grind of minor league baseball and knocked on the door to the big leagues at the Triple-A level before being released by the Atlanta Braves in 2014.
"The whole 10 years through baseball, I would come back every offseason and think, 'Man, I wonder what it would have been like to go pursue a football career,'" Mitchell said. "I see these guys playing in the fall and it would bring back those Friday nights in high school and playing football and being around the guys. I don't think those thoughts ever left me.
"The reality though, when I finally got released by the Braves, now, 'Am I too old to actually go back, or can we make this a go?' That's when I decided to walk on in 2014, so here we are today. It's been a heckuva ride."
At 30 years old, Mitchell is currently the oldest player in the FBS. He is also a skilled specialist for the Western Michigan football team, leading the nation in punts inside the 20 with 30, compared to just two touchbacks.
"He's No. 1 in the country in putting that ball inside the 20, which is a weapon," said WMU coach Tim Lester. "When you give a team a short field, it's hard to stop anybody. But when you make them earn it, it's hard to go 80-plus yards, so he makes our team much better."
As accurate as he has been at finding the "coffin corner," the left-footed Mitchell has unleashed some booming punts this season, including a long of 64. The 6-foot-2, 205-pound redshirt junior also handles kickoff duties (13 touchbacks) and is the holder for freshman kicker Josh Grant.
"The old guy ... His wisdom is off the charts," Lester said of Mitchell. "He doesn't talk a ton as our punter and our kickoff guy. But when he does, everyone listens, and that's what makes him special... When our 18-year-old kicker goes out there, we've got a 30-year-old to calm him down."
Things are starting to calm down for Mitchell. He's set to graduate in the spring with a degree in business, and he's now a married man after tying the knot with his wife Heather back in August.
"Married life is great. Luckily she supports a 30-year-old husband going to school, playing football and not making any money," Mitchell joked. "All the stress is on her to make those big mortgage payments. Soon enough I'll be able to help out a little bit, but she's been great about it."
Mitchell was a three-sport star at Paw Paw High School, where his father Rick is still the athletic director. He signed a national letter of intent to play baseball at Michigan State, but opted to go the professional route after he was selected by the Philadelphia Phillies in the 23rd round of the 2005 MLB draft.
Mitchell climbed the ladder between the Phillies and Braves farm systems as an infielder, finishing with a career .240 batting average, 88 home runs, 365 RBIs and 101 stolen bases. The right-hander once went yard on Roy Oswalt, and says the best pitcher he ever faced was New York Yankees closer and likely Hall of Famer Mariano Rivera (he struck out on three cutters).
"A lot of people think baseball isn't that tough on an athlete, but it's every day," Mitchell said. "Once the season starts, it's six months straight of baseball. And you are on your feet a lot. Yeah, you might not be getting hit, but there's a lot of wear and tear.
"When we're doing sprints at the end of practice or when you wake up the next day, you're feeling a little sore and wondering, 'Man, how are those young guys feeling right now, or is it just me?' But no complaints, it's all part of the job and it's been a blast."
Mitchell came to WMU's walk-on tryouts under former coach P.J. Fleck and made the team as a quarterback. He was a 27-year-old redshirt freshman during the 2014 season, when Zach Terrell entrenched himself as the starter under center for what was a record-breaking four-year career for the Broncos. With no real path to playing, the former baseball player offered his talents as a kicker.
Spending 10 years in a sport where failing two out of three times is still considered success, Mitchell learned valuable lessons that he continues to apply to himself and his younger Bronco teammates.
"It all translates — competitive sports," Mitchell said. "Just being positive, just trying to encourage these guys who are young — some 18-, 19-year-olds. Just trying to get them to think about all the good things that can happen when you show up and work hard every day." |
The Canadian Ombudsman for Banking Services and Investment (OBSI) today released a list of the most common frauds, in recognition of the closing of fraud-awareness month.
My favourite (that is, the one I most like to warn people about) is the “too good to be true” investment scam.
“If you’re offered a special deal on an investment ‘for you only’, or guaranteed high returns, watch out!” the OBSI wrote in a press release.
I’ve met too many people who actually fall for these scams, often through high-pressure sales tactics that insist this is a great deal that can only be bought in the next 24 hours (or even 24 minutes), that it has to be kept a secret for some reason or another (I’ve heard of some scammers who say the deal has to be kept secret because governments don’t want people to know how easy it is to make money!) and involves sending money offshore. No one (I hope) would fall for a Nigerian letter scam (the Nigerian who needs help getting his multimillions out of the country and has chosen to give you 50 per cent to help him do that) so why do they fall for the no-letter, just fast-talk scam?
As the OBSI says, always buy your investments through a licensed adviser or firm.
Some of the other scams listed may be more difficult to guard against, like debit or credit card fraud, where someone manages to find out your PIN, or identity theft, where fraudsters learn enough about you to be able to get debit and credit cards in your name. Be careful with your PIN and personal records. But even that may not stop those really dedicated to stealing your identity.
OBSI also warns against frauds related to buying and selling. In one case a buyer may offer more money than agreed upon as the sale price and ask for the change in cash (oops, did you say $2,000? I wrote the cheque for $3,000 and don’t have another cheque with me. Can you give me the change?). Online sellers and buyers may also part with their goods or money and get nothing in return.
The bottom line is be careful, ask questions and if it sounds fishy, don’t be afraid to say so. It’s your money.
Read the full OBSI press release here.
Far more common, practised every day, is the fraud by banks. (1) The fine print at the back of the form that changes the agreement. (2) Changing an agreement if you don’t notify them you don’t want the change. When they notify you by mail, you receive it too late to notify them in time. |
The former finance director for Plymouth, Connecticut is accused of embezzling more than $800,000 from the town. (Published Tuesday, Jan. 20, 2015)
The former finance director for Plymouth, Connecticut is accused of embezzling more than $800,000 from the town and using much of the money to build something akin to a museum in his home, filled with Hummel figurines, Annalee dolls, coins, stamps, Coach purses and more.
David J. Bertnagel, 41, of Thomaston, was arrested at his home this morning, appeared in court and was released on a $250,000 bond.
He worked as the town finance director from July 2014 until Oct. 31, 2014, when he was suspended after town officials discovered "improprieties" in the finance department.
Further investigation revealed that $808,029.94 in town funds might be missing, according to the criminal complaint.
Prior to becoming the finance director, Bertnagel, worked in the department for around six years as a part-time employee. According to the court paperwork, he is accused of exploiting a weakness in the payroll software program that allowed him to manually create batches of checks, print them and delete any record of them from the system.
When questioned about the funds on Nov. 10, he admitted to issuing “non-salary payments” from the town to himself, the complaint says.
He claimed he’d reached an agreement with town officials allowing him to withdraw money early from his pension account, but the former mayor denied any agreement ever existed, the complaint says.
At first, Bertnagel said he could not find a copy of the contract and said he spent around half the money on stamp and coin collections, as well as normal household expenses. The other half was in cash or marketable securities, he said, but a review of Bertnagel’s assets showed that much less – around $100,000 – was available.
In all, Bertnagel is accused of issuing 207 checks to himself from October 2011 through October 2014, and spending $101,890 to pay down a mortgage and two lines of credit secured to his home; $136,700 for home repairs, improvements and renovations; $149,188 on credit card expenses; $124,279 to retailers specializing in collectible items, including coins, stamps, Hummel figurines and Annalee dolls; $8,850 to four brokerage firms for stocks and more.
When investigators questioned one of Bertnagel’s friends, she described the house Bertnagel shares with his mother as a “museum” full of collections.
Inside the house, she said, there were more than 200 Coach purses, several Hummel figurines and dolls in a large room on the first floor of the house.
The friend also said one room of the house is dedicated to stamp and coin collections and Bertnagel also has a collection of antique clocks and original artwork depicting the town of Thomaston.
Bertnagel eventually did present a copy of what he claimed was a contract to make early withdrawals from a pension account, but federal investigators determined that it was fake and Bertnagel had likely used an electronic signing machine to add one of the signatures, the complaint says.
Neither Bertnagel nor his attorney was available for comment after court proceedings Tuesday. |
Simulation of the poling of P(VDF-TrFE) with ferroelectric electrodes based on the Preisach model Abstract In this paper, a multi-layered ferroelectric composite is analyzed by using the concepts of the classical Preisach model to describe each constituent material. The results obtained are compared to D-E measurements made in the poling of polyvinylidenefluoride-trifluoroethylene (P(VDF-TrFE)) copolymer film sandwiched between ferroelectric triglycine sulfate crystal (TGS) electrodes. In general, the computer simulations are in good agreement with experimental results. |
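The classical Preisach construction referenced in the abstract can be sketched numerically: the polarization is a weighted superposition of elementary relay hysterons with switching thresholds β ≤ α. The grid resolution, uniform weights, and field sweep below are illustrative assumptions, not the paper's fitted material parameters:

```python
# Minimal scalar Preisach model: P(E) as a sum of relay hysterons.
# Each relay switches up when the field E >= alpha and down when E <= beta
# (beta <= alpha). Uniform weights on a triangular grid -- an assumption
# for illustration only, not the P(VDF-TrFE)/TGS parameters of the paper.
import numpy as np

class PreisachModel:
    def __init__(self, n=40, e_max=1.0):
        a = np.linspace(-e_max, e_max, n)
        self.alpha, self.beta = np.meshgrid(a, a, indexing="ij")
        self.mask = self.beta <= self.alpha          # admissible half-plane
        self.state = -np.ones_like(self.alpha)       # all hysterons "down"

    def apply_field(self, e):
        self.state[(self.alpha <= e) & self.mask] = +1.0   # relays switch up
        self.state[(self.beta >= e) & self.mask] = -1.0    # relays switch down
        return self.polarization()

    def polarization(self):
        return self.state[self.mask].mean()          # normalized to [-1, 1]

m = PreisachModel()
up = [m.apply_field(e) for e in np.linspace(-1, 1, 50)]    # ascending branch
down = [m.apply_field(e) for e in np.linspace(1, -1, 50)]  # descending branch
# The two branches differ at E = 0: that gap is the hysteresis the D-E
# measurements in the paper capture.
```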
Evidence of Dividend Catering Theory in Malaysia: Implications for Investor Sentiment

This study investigates the key determinants of corporate performance in Malaysia. Using panel data of 361 companies listed in Malaysia, the study finds dividend per share, use of debt, number of board members, and last year's performance to be the most significant determinants of corporate performance across four selected industries: trading or services, property, consumer products, and industrial products. This study also finds that dividend per share is influenced by market performance and is followed by last year's dividend and size of the dividend. These findings exhibit the presence of dividend catering incentives. As such, market demand for dividends drives corporate dividends. The study concludes that investor sentiment influences corporate decisions in Malaysia.

Introduction

Dividend policy is a major financial decision. Despite theories suggesting that dividend policy has no significant impact on the changes in corporate value (Miller & Modigliani, 1961), extant studies find that dividends work as a signal and influence asset valuation. It is expected that dividends are perceived as tangible benefits by investors when valuing any company. Investors desire more dividends. However, Fama and French presented the disappearing dividends effect among American investors, claiming that dividends are no longer an important vehicle for attracting investors. In response to this argument, Baker and Wurgler (2004) found that the propensity to pay dividends is driven by the catering incentive. The catering theory of dividends purports that corporations will pay dividends only if they perceive a demand for the same from the market (Baker & Wurgler, 2004). Thus, there will be higher dividend payouts if the market provides a premium for the stock price. It is this premium that creates the catering incentive and thus explains the corporate tendency to pay dividends.
Baker and Wurgler argued that if dividend payment is influenced by stock market performance (or vice versa), investor sentiment would be a major reason behind this causal relationship. Consequently, disequilibrium in the market reveals the tendency to relate dividends to the market value of corporations. This study examines the presence of dividend catering theory and the influential power of dividends in corporate valuation among the listed firms in Malaysia. Similar to Baker and Wurgler, if corporate performance influences corporate dividend payment, the study may conclude that some performance-related motivational force is driving the propensity to pay dividends. The study also investigates the influence of different industries (such as construction and trading) on the determinants of corporate value and dividend catering incentives.

Dividend and Other Determinants of Corporate Valuation

A number of studies in the West and recently in emerging markets have identified a list of determinants for corporate valuation. Theoretically, better quality investments should positively influence the value of corporations in the market (Morgado & Pindado, 2003). The term quality investment, in most of the studies, refers to investments having a positive net present value. Myers found that the effect of debt financing on investment decisions was negative. Jensen argued that investment interacts with availability of free cash flow, agency conflict and corporate financing policies when determining the value of corporations. Companies with higher debt have the opportunity to offer external stakeholders control, providing the corporations with transparency and effective checks and balances. However, involving external stakeholders may result in agency conflict, which could be flagged by investors as a negative signal. On average, however, an increase in positive NPV projects would positively influence corporate value.
Baker, Stein and Wurgler presented similar results by reporting that investment performance depends on how financing decisions are made. The vehicle for corporate financing (debt or equity) has a significant impact on the value of corporations. Fama and French found that U.S. corporations mostly use long-term debt for expansion. These firms rely on equities only during merger and acquisition activities. Investment performance and dividend policy simultaneously influence the effect of external financing on firm performance. Although firms gain external control through debt financing (Berger & Di Patti, 2006), higher dependency on debt financing may result in poor performance, given that the investment decisions are of below average quality (Abor, 2005; 2007; Lang, Ofek, & Stulz, 1996). On the other hand, the use of debt positively influences the performance of reputed firms (Campello, 2006; Harris & Raviv, 1991). The existing literature displays the effect of dividends on corporate value. Dividends work as a signal and carry both positive and negative impacts. Grinblatt, Masulis and Titman found that dividends positively influence corporate value. Baker and Wurgler discovered that dividend premiums are a significant proxy of investor sentiment in the market. They concluded, similarly to Brown and Cliff, that investor sentiment works as a contrarian predictor of future stock returns, thus proving that the global presence of dividend premiums is a determinant. Other studies found a significant negative influence of large board size on firm performance (Eisenberg, Sundgren, & Wells, 1998; Haniffa & Hudaib, 2006). Various proxies measure corporate performance. Many studies use firm performance and firm value (especially market performance and market value) interchangeably.
Return on Asset (ROA) is used as a measure of financial performance (Haniffa & Hudaib, 2006), whereas Tobin's Q is reported as a proxy for financial and market performance in various studies (Chua, Eun, & Lai, 2007). Chua et al. reported that Tobin's Q is the proxy for corporate value as perceived by investors. Tobin explained the Q ratio as the determinant of how investors reward and penalize firms' financial decisions. Thus, Tobin's Q can work as a proxy for financial performance, market performance and investor perception. Other than Tobin's Q, various studies rely on stock price as a measure of market performance. However, due to the frequent volatility of the stock price, which requires additional analysis, making a valid proxy from stock returns to represent firm performance is somewhat questionable.

Empirical Models

The major objectives of this study are to determine the key factors behind corporate valuation among listed firms in Malaysia, to determine the influence of significant factors in different industries, and to determine the presence of the dividend catering incentive in overall corporate valuation and in selected industries. Equation 1 lists a number of determinants along with the proxy for corporate value. Table 2 gives the descriptions of the variables. Haniffa and Hudaib found a significant influence of different industry groups in linking corporate value and corporate governance. Their study uses data from six industries, including the consumer, trading, property, construction, plantation and industrial sectors. This study incorporates the analysis of four significantly large industries: industrial production (IP), consumer products (CP), property (PR), and trading and service (TS). Table 1 provides descriptive statistics on industry groups.
Because the number of companies under each of the four selected industries is suitable for conducting multiple regression analysis, equation 1 will be examined for the four industries to compare the beta coefficients. … Thus, more than an average dividend in the previous year would create a positive demand for dividends in the current year. To examine these three conditions, the study uses the following three equations.

Data and Method

Due to the structural differences of listing requirements … the study uses panel data, which has become increasingly important in developing countries due to the paucity of time series data. In the panel data method, the study can control for cross-section fixed effects. To provide a simple understanding, … (Table 3). Duality is insignificant in almost all sectors except for property. It was interesting to observe a conflict between duality and the lag value of the Q ratio, which may lead to challenging future research on governance and firm performance. … found a negative relationship between the number of board members and value. Conflict of interest is the primary reason behind such relationships. The previous year's performance (Q(t-1)) also influences the current year's performance and boosts the R2 of the estimates. The variable is robust across all of the sectors and is consistent with the suggestions of Haniffa and Hudaib. Table 5 shows that the R2 of the estimates is significantly above conventional norms. Additionally, the Durbin-Watson (DW) statistics are under control. Higher standardized beta coefficients of DPS and Q(t-1) lead to further inquiry on the dividend catering incentive.

Dividend Catering

Dividend catering theory argues that corporations offer dividends if there is market demand for dividend payment. Thus, to examine the existence of dividend catering incentives, equations 2, 3 and 4 should be significant and robust across industries.
Table 6 highlights the tests for these three equations for the total sample and for the four industry groups. One of the major arguments behind dividend catering theory is that market value drives the propensity to pay dividends. Table 6 shows that the Q ratio (the proxy for market performance) significantly influences DPS. Additionally, investors may expect that companies with higher dividends will continue to pay higher dividends. Thus, they will expect higher dividends and, owing to catering incentives, managers should look for sources of income to provide higher dividends. The proxy for higher dividends, DPOUT, significantly influences DPS in all sectors as well as in the total sample. The beta coefficients are also high. Thus, market forces and investor sentiment drive corporations to pay higher dividends. Three of our proxies, through equations 2, 3 and 4, establish that corporate managers time the market for their dividend announcement activity. Baker and Wurgler (2004) theoretically support the performance proxy (DPS and Q in equation 2) and the size proxy (DPS and DPOUT).

A Comprehensive Model

After analyzing the dividend catering incentive, the study revises the preliminary estimates of the key determinants. Table 7 exhibits robust results for DPS, DEBT, BOARD and Q(t-1). The study finds a new variable, DPOUT, significant in explaining the changes in corporate value in Malaysia. Additionally, the R2 and DW statistics for the estimates are satisfactory. Among these variables, Q(t-1) is the most influential, followed by DPS, DEBT, DPOUT and BOARD.

Notes: Beta coefficients are standardized. *** = significant at 1%, ** = at 5%, * = at 10%. Dependent variable: Tobin's Q.
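The fixed-effects panel estimation described in the Data and Method section can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the regressor names follow the paper (DPS, DEBT, BOARD), but the synthetic data, coefficient values, and the manual within-transformation are assumptions, and the lagged Q term is omitted for simplicity.

```python
# Within-transformation (fixed-effects) OLS sketch for a panel model like
#   Q_it = b1*DPS_it + b2*DEBT_it + b3*BOARD_it + firm effect + error.
# Synthetic data; the true betas below are arbitrary illustration values.
import numpy as np

rng = np.random.default_rng(0)
n_firms, n_years, k = 100, 8, 3
firm_effect = rng.normal(0, 2, n_firms)           # unobserved heterogeneity
X = rng.normal(0, 1, (n_firms, n_years, k))       # DPS, DEBT, BOARD (standardized)
beta = np.array([0.5, -0.3, 0.1])
Q = X @ beta + firm_effect[:, None] + rng.normal(0, 0.1, (n_firms, n_years))

# Demean each firm's series to sweep out the fixed effect, then run OLS.
Xw = (X - X.mean(axis=1, keepdims=True)).reshape(-1, k)
Qw = (Q - Q.mean(axis=1, keepdims=True)).reshape(-1)

beta_hat, *_ = np.linalg.lstsq(Xw, Qw, rcond=None)
print(beta_hat)   # recovers approximately [0.5, -0.3, 0.1] despite firm effects
```

The demeaning step is why panel data "controls for cross-section fixed effects": any firm-constant term, observed or not, cancels out before estimation.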
/*********************************************************************
*
* Software License Agreement (BSD License)
*
* Copyright (c) 2015, P.A.N.D.O.R.A. Team.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
*
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above
* copyright notice, this list of conditions and the following
* disclaimer in the documentation and/or other materials provided
* with the distribution.
* * Neither the name of the P.A.N.D.O.R.A. Team nor the names of its
* contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
* FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
* COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
* BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
* CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
* ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGE.
*
* Authors: <NAME>, <NAME>
* <NAME> <<EMAIL>>
*********************************************************************/
#ifndef FLIR_LEPTON_IMAGE_PROCESSING_UTILS_PARAMETERS_H
#define FLIR_LEPTON_IMAGE_PROCESSING_UTILS_PARAMETERS_H
#include "utils/defines.h"
#include <dynamic_reconfigure/server.h>
#include <flir_lepton_image_processing/thermal_cfgConfig.h>
/**
@brief The namespaces for this package
**/
namespace flir_lepton
{
namespace flir_lepton_image_processing
{
/**
@struct Parameters
@brief Provides flexibility by parameterizing variables needed by the
hole detector package
**/
struct Parameters
{
// Blob detection - specific parameters
struct Blob
{
static int min_threshold;
static int max_threshold;
static int threshold_step;
static int min_area;
static int max_area;
static double min_convexity;
static double max_convexity;
static double min_inertia_ratio;
static double max_circularity;
static double min_circularity;
static bool filter_by_color;
static bool filter_by_circularity;
};
// Debug-specific parameters
struct Debug
{
// Show the thermal image that arrives in the thermal node
static bool show_thermal_image;
// In the terminal's window, show the probabilities of candidate rois
static bool show_probabilities;
static bool show_find_rois;
static int show_find_rois_size;
static bool show_denoise_edges;
static int show_denoise_edges_size;
static bool show_connect_pairs;
static int show_connect_pairs_size;
static bool show_get_shapes_clear_border;
static int show_get_shapes_clear_border_size;
};
// Parameters specific to the Thermal node
struct Thermal
{
// The thermal detection method
// If set to 0 process the binary image acquired from temperatures MultiArray
// If set to 1 process the sensor/Image from thermal sensor
static int detection_method;
// The probability extraction method
// 0 for Gaussian function
// 1 for Logistic function
static int probability_method;
static float min_thermal_probability;
// Gausian variables
static float optimal_temperature;
static float tolerance;
// Logistic variables
static float low_acceptable_temperature;
static float high_acceptable_temperature;
static float left_tolerance;
static float right_tolerance;
// Low and High acceptable temperatures for thresholding
static float low_temperature;
static float high_temperature;
};
// Thermal image parameters
struct ThermalImage
{
// Thermal image width and height
static int WIDTH;
static int HEIGHT;
};
// Edge detection specific parameters
struct Edge
{
// canny parameters
static int canny_ratio;
static int canny_kernel_size;
static int canny_low_threshold;
static int canny_blur_noise_kernel_size;
// The opencv edge detection method:
// 0 for the Canny edge detector
// 1 for the Scharr edge detector
// 2 for the Sobel edge detector
// 3 for the Laplacian edge detector
// 4 for mixed Scharr / Sobel edge detection
static int edge_detection_method;
// Threshold parameters
static int denoised_edges_threshold;
// When mixed edge detection is selected, this toggle switch
// is needed in order to shift execution from one edge detector
// to the other.
// 1 for the Scharr edge detector,
// 2 for the Sobel edge detector
static int mixed_edges_toggle_switch;
};
// Image representation specific parameters
struct Image
{
// The thermal sensor's horizontal field of view
static float horizontal_field_of_view;
// The thermal sensor's vertical field of view
static float vertical_field_of_view;
// Depth and RGB images' representation method.
// 0 if image used is used as obtained from the image sensor
// 1 through wavelet analysis
static int image_representation_method;
// Method to scale the CV_32F images to CV_8UC1
static int scale_method;
// Term criteria for segmentation purposes
static int term_criteria_max_iterations;
static double term_criteria_max_epsilon;
};
// Outline discovery specific parameters
struct Outline
{
// The detection method used to obtain the outline of a blob
// 0 for detecting by means of brushfire
// 1 for detecting by means of raycasting
static int outline_detection_method;
// When using raycast instead of brushfire to find the (approximate here)
// outline of blobs, raycast_keypoint_partitions dictates the number of
// rays, or equivalently, the number of partitions in which the blob is
// partitioned in search of the blob's borders
static int raycast_keypoint_partitions;
// Loose ends connection parameters
static int AB_to_MO_ratio;
static int minimum_curve_points;
};
};
} // namespace flir_lepton_image_processing
} // namespace flir_lepton
#endif // FLIR_LEPTON_IMAGE_PROCESSING_UTILS_PARAMETERS_H
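The `Thermal` parameter block above distinguishes two probability-extraction methods (0 = Gaussian, 1 = logistic) over the acceptable temperature range. The header does not contain the formulas themselves, so the sketch below assumes standard Gaussian and double-sigmoid forms; the numeric defaults are placeholders, not values from the package:

```python
# Illustrative probability-extraction curves matching the Thermal parameters
# above. The exact formulas used by the package are not in this header, so
# standard forms are ASSUMED here; temperature defaults are placeholders.
import math

def gaussian_probability(t, optimal_temperature=36.8, tolerance=2.0):
    # Peaks at 1.0 at the optimal temperature, decays with squared distance.
    return math.exp(-((t - optimal_temperature) ** 2) / (2 * tolerance ** 2))

def logistic_probability(t, low=32.0, high=40.0, left_tol=1.0, right_tol=1.0):
    # Near 1.0 between `low` and `high`, falling off outside: the product of
    # a rising and a falling sigmoid, mirroring the left/right tolerances.
    rise = 1.0 / (1.0 + math.exp(-(t - low) / left_tol))
    fall = 1.0 / (1.0 + math.exp((t - high) / right_tol))
    return rise * fall
```

The Gaussian form suits a single target temperature (e.g. human skin), while the logistic form accepts a whole band, which is why the struct carries both parameter sets.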
|
Marvel: Avengers Alliance is announced by Marvel and Playdom.
Marvel: Avengers Alliance has been announced for Facebook users.
The social networking game is being developed by Marvel and Playdom to promote this year's blockbuster Avengers movie.
Marvel: Avengers Alliance features an alternate storyline to Joss Whedon's cinematic release. Villains including Doctor Doom, Loki, The Red Skull, and Magneto attempt to take over Manhattan after an event known as the Pulse rocks the galaxy.
Players assume control of a S.H.I.E.L.D. agent and must recruit a team of superheroes to take down the threat. Spider-Man, Iron Man, Captain America, Thor, Black Widow, Hulk and other Marvel characters will feature in the game.
Gameplay combines team-based combat with role-playing elements and social networking. Facebook users who 'like' the game's official page will be given priority when sign ups commence.
Marvel: Avengers Alliance will be released on Facebook during Q1, 2012. |
use super::pixel::*;
use crate::RGB;
use crate::RGBA;
use core::ops::*;
/// `px + px`
impl<T: Add> Add for RGB<T> {
type Output = RGB<<T as Add>::Output>;
#[inline(always)]
fn add(self, other: RGB<T>) -> Self::Output {
RGB {
r: self.r + other.r,
g: self.g + other.g,
b: self.b + other.b,
}
}
}
/// `px + px`
impl<T> AddAssign for RGB<T> where
T: Add<Output = T> + Copy
{
fn add_assign(&mut self, other: RGB<T>) {
*self = Self {
r: self.r + other.r,
g: self.g + other.g,
b: self.b + other.b,
};
}
}
/// `px - px`
impl<T: Sub> Sub for RGB<T> {
type Output = RGB<<T as Sub>::Output>;
#[inline(always)]
fn sub(self, other: RGB<T>) -> Self::Output {
RGB {
r: self.r - other.r,
g: self.g - other.g,
b: self.b - other.b,
}
}
}
/// `px - px`
impl<T> SubAssign for RGB<T> where
T: Sub<Output = T> + Copy
{
#[inline(always)]
fn sub_assign(&mut self, other: RGB<T>) {
*self = Self {
r: self.r - other.r,
g: self.g - other.g,
b: self.b - other.b,
};
}
}
/// `px - 1`
impl<T> Sub<T> for RGB<T> where
T: Copy + Sub<Output=T>
{
type Output = RGB<<T as Sub>::Output>;
#[inline(always)]
fn sub(self, r: T) -> Self::Output {
self.map(|l| l-r)
}
}
/// `px - 1`
impl<T> SubAssign<T> for RGB<T> where
T: Copy + Sub<Output=T>
{
#[inline(always)]
fn sub_assign(&mut self, r: T) {
*self = self.map(|l| l-r);
}
}
/// `px + 1`
impl<T> Add<T> for RGB<T> where
T: Copy + Add<Output=T>
{
type Output = RGB<T>;
#[inline(always)]
fn add(self, r: T) -> Self::Output {
self.map(|l|l+r)
}
}
/// `px + 1`
impl<T> AddAssign<T> for RGB<T> where
T: Copy + Add<Output=T>
{
#[inline(always)]
fn add_assign(&mut self, r: T) {
*self = self.map(|l| l+r);
}
}
/// `px + px`
impl<T: Add, A: Add> Add<RGBA<T, A>> for RGBA<T, A> {
type Output = RGBA<<T as Add>::Output, <A as Add>::Output>;
#[inline(always)]
fn add(self, other: RGBA<T, A>) -> Self::Output {
RGBA {
r: self.r + other.r,
g: self.g + other.g,
b: self.b + other.b,
a: self.a + other.a,
}
}
}
impl<T, A> AddAssign<RGBA<T, A>> for RGBA<T, A> where
T: Copy + Add<Output = T>,
A: Copy + Add<Output = A>
{
fn add_assign(&mut self, other: RGBA<T, A>) {
*self = Self {
r: self.r + other.r,
g: self.g + other.g,
b: self.b + other.b,
a: self.a + other.a,
};
}
}
/// `px - px`
impl<T: Sub, A: Sub> Sub<RGBA<T, A>> for RGBA<T, A> {
type Output = RGBA<<T as Sub>::Output, <A as Sub>::Output>;
#[inline(always)]
fn sub(self, other: RGBA<T, A>) -> Self::Output {
RGBA {
r: self.r - other.r,
g: self.g - other.g,
b: self.b - other.b,
a: self.a - other.a,
}
}
}
/// `px - px`
impl<T, A> SubAssign<RGBA<T, A>> for RGBA<T, A> where
T: Copy + Sub<Output = T>,
A: Copy + Sub<Output = A>
{
#[inline(always)]
fn sub_assign(&mut self, other: RGBA<T, A>) {
*self = RGBA {
r: self.r - other.r,
g: self.g - other.g,
b: self.b - other.b,
a: self.a - other.a,
}
}
}
/// `px - 1`
/// Works only if alpha channel has same depth as RGB channels
impl<T> Sub<T> for RGBA<T> where
T: Copy + Sub
{
type Output = RGBA<<T as Sub>::Output, <T as Sub>::Output>;
#[inline(always)]
fn sub(self, r: T) -> Self::Output {
self.map(|l| l - r)
}
}
/// `px - 1`
/// Works only if alpha channel has same depth as RGB channels
impl<T> SubAssign<T> for RGBA<T> where
T: Copy + Sub<Output = T>
{
#[inline(always)]
fn sub_assign(&mut self, r: T) {
*self = self.map(|l| l - r);
}
}
/// `px + 1`
impl<T> Add<T> for RGBA<T> where
T: Copy + Add<Output=T>
{
type Output = RGBA<T>;
#[inline(always)]
fn add(self, r: T) -> Self::Output {
self.map(|l| l+r)
}
}
/// `px + 1`
impl<T> AddAssign<T> for RGBA<T> where
T: Copy + Add<Output=T>
{
#[inline(always)]
fn add_assign(&mut self, r: T) {
*self = self.map(|l| l+r);
}
}
/// `px * 1`
impl<T> Mul<T> for RGB<T> where
T: Copy + Mul<Output=T>
{
type Output = RGB<T>;
#[inline(always)]
fn mul(self, r: T) -> Self::Output {
self.map(|l|l*r)
}
}
/// `px * 1`
impl<T> MulAssign<T> for RGB<T> where
T: Copy + Mul<Output=T>
{
#[inline(always)]
fn mul_assign(&mut self, r: T) {
*self = self.map(|l| l*r);
}
}
/// `px * 1`
impl<T> Mul<T> for RGBA<T> where
T: Copy + Mul<Output=T>
{
type Output = RGBA<T>;
#[inline(always)]
fn mul(self, r: T) -> Self::Output {
self.map(|l|l*r)
}
}
/// `px * 1`
impl<T> MulAssign<T> for RGBA<T> where
T: Copy + Mul<Output=T>
{
#[inline(always)]
fn mul_assign(&mut self, r: T) {
*self = self.map(|l| l*r);
}
}
#[cfg(test)]
mod test {
use super::*;
const WHITE_RGB: RGB<u8> = RGB::new(255, 255, 255);
const BLACK_RGB: RGB<u8> = RGB::new(0, 0, 0);
const RED_RGB: RGB<u8> = RGB::new(255, 0, 0);
const GREEN_RGB: RGB<u8> = RGB::new(0, 255, 0);
const BLUE_RGB: RGB<u8> = RGB::new(0, 0, 255);
const WHITE_RGBA: RGBA<u8> = RGBA::new(255, 255, 255, 255);
const BLACK_RGBA: RGBA<u8> = RGBA::new(0, 0, 0, 0);
const RED_RGBA: RGBA<u8> = RGBA::new(255, 0, 0, 255);
const GREEN_RGBA: RGBA<u8> = RGBA::new(0, 255, 0, 0);
const BLUE_RGBA: RGBA<u8> = RGBA::new(0, 0, 255, 255);
#[test]
fn test_add() {
assert_eq!(RGB::new(2,4,6), RGB::new(1,2,3) + RGB{r:1,g:2,b:3});
assert_eq!(RGB::new(2.,4.,6.), RGB::new(1.,3.,5.) + 1.);
assert_eq!(RGBA::new_alpha(2u8,4,6,8u16), RGBA::new_alpha(1u8,2,3,4u16) + RGBA{r:1u8,g:2,b:3,a:4u16});
assert_eq!(RGBA::new(2i16,4,6,8), RGBA::new(1,3,5,7) + 1);
assert_eq!(RGB::new(255, 255, 0), RED_RGB+GREEN_RGB);
assert_eq!(RGB::new(255, 0, 0), RED_RGB+RGB::new(0, 0, 0));
assert_eq!(WHITE_RGB, BLACK_RGB + 255);
assert_eq!(RGBA::new(255, 255, 0, 255), RED_RGBA+GREEN_RGBA);
assert_eq!(RGBA::new(255, 0, 0, 255), RED_RGBA+RGBA::new(0, 0, 0, 0));
assert_eq!(WHITE_RGBA, BLACK_RGBA + 255);
}
#[test]
#[should_panic]
fn test_add_overflow() {
        assert_ne!(RGBA::new(255u8, 255, 0, 0), RED_RGBA+BLUE_RGBA);
}
#[test]
fn test_sub() {
assert_eq!(RED_RGB, (WHITE_RGB - GREEN_RGB) - BLUE_RGB);
assert_eq!(BLACK_RGB, WHITE_RGB - 255);
assert_eq!(RGBA::new(255, 255, 0, 0), WHITE_RGBA - BLUE_RGBA);
assert_eq!(BLACK_RGBA, WHITE_RGBA - 255);
}
#[test]
fn test_add_assign() {
let mut green_rgb = RGB::new(0, 255, 0);
green_rgb += RGB::new(255, 0, 255);
assert_eq!(WHITE_RGB, green_rgb);
let mut black_rgb = RGB::new(0, 0, 0);
black_rgb += 255;
assert_eq!(WHITE_RGB, black_rgb);
let mut green_rgba = RGBA::new(0, 255, 0, 0);
green_rgba += RGBA::new(255, 0, 255, 255);
assert_eq!(WHITE_RGBA, green_rgba);
let mut black_rgba = RGBA::new(0, 0, 0, 0);
black_rgba += 255;
assert_eq!(WHITE_RGBA, black_rgba);
}
#[test]
fn test_sub_assign() {
let mut green_rgb = RGB::new(0, 255, 0);
green_rgb -= RGB::new(0, 255, 0);
assert_eq!(BLACK_RGB, green_rgb);
let mut white_rgb = RGB::new(255, 255, 255);
white_rgb -= 255;
assert_eq!(BLACK_RGB, white_rgb);
let mut green_rgba = RGBA::new(0, 255, 0, 0);
green_rgba -= RGBA::new(0, 255, 0, 0);
assert_eq!(BLACK_RGBA, green_rgba);
let mut white_rgba = RGBA::new(255, 255, 255, 255);
white_rgba -= 255;
assert_eq!(BLACK_RGBA, white_rgba);
}
#[test]
fn test_mult() {
assert_eq!(RGB::new(0.5,1.5,2.5), RGB::new(1.,3.,5.) * 0.5);
assert_eq!(RGBA::new(2,4,6,8), RGBA::new(1,2,3,4) * 2);
}
#[test]
fn test_mult_assign() {
let mut green_rgb = RGB::new(0u16, 255, 0);
green_rgb *= 1;
assert_eq!(RGB::new(0, 255, 0), green_rgb);
green_rgb *= 2;
assert_eq!(RGB::new(0, 255*2, 0), green_rgb);
let mut green_rgba = RGBA::new(0u16, 255, 0, 0);
green_rgba *= 1;
assert_eq!(RGBA::new(0, 255, 0, 0), green_rgba);
green_rgba *= 2;
assert_eq!(RGBA::new(0, 255*2, 0, 0), green_rgba);
}
}
|
Santorini Volcano and its Plumbing System

Santorini Volcano is an outstanding natural laboratory for studying arc volcanism, having had twelve Plinian eruptions over the last 350,000 years, at least four of which caused caldera collapse. Periods between Plinian eruptions are characterized by intra-caldera edifice construction and lower intensity explosive activity. The Plinian eruptions are fed from magma reservoirs at 4-8 km depth that are assembled over several centuries prior to eruption by the arrival of high-flux magma pulses from deeper in the sub-caldera reservoir. Unrest in 2011-2012 involved intrusion of two magma pulses at about 4 km depth, suggesting that the behaviour of the modern-day volcano is similar to the behaviour of the volcano prior to Plinian eruptions. Emerging understanding of Santorini's plumbing system will enable better risk mitigation at this highly hazardous volcano.
package com.productioncell.dummies.v1;
import resources.Piston;
import utils.Addeable;
import utils.Item;
import com.lac.petrinet.components.Dummy;
import com.lac.petrinet.exceptions.PetriNetException;
public class AtrasPistonDummy extends Dummy {
Piston origen;
Addeable destino;
	public AtrasPistonDummy(String tName, Piston pistonOrigen, Addeable pistonDestino) {
super(tName);
this.origen = pistonOrigen;
this.destino = pistonDestino;
}
@Override
protected void execute() throws PetriNetException {
		try {
			// Move the item from the source piston to its destination
			// container, then retract the piston.
			Item item = origen.returnItem();
			destino.addItem(item);
			origen.moveBackward();
} catch (InterruptedException e) {
e.printStackTrace();
} catch (Exception e) {
e.printStackTrace();
}
}
}
|
package org.apache.jsp.advanced;
import javax.servlet.*;
import javax.servlet.http.*;
import javax.servlet.jsp.*;
import org.eclipse.help.internal.webapp.data.*;
public final class tabs_jsp extends org.apache.jasper.runtime.HttpJspBase
implements org.apache.jasper.runtime.JspSourceDependent {
private static java.util.List _jspx_dependants;
static {
_jspx_dependants = new java.util.ArrayList(1);
_jspx_dependants.add("/advanced/header.jsp");
}
public Object getDependants() {
return _jspx_dependants;
}
public void _jspService(HttpServletRequest request, HttpServletResponse response)
throws java.io.IOException, ServletException {
JspFactory _jspxFactory = null;
PageContext pageContext = null;
HttpSession session = null;
ServletContext application = null;
ServletConfig config = null;
JspWriter out = null;
Object page = this;
JspWriter _jspx_out = null;
PageContext _jspx_page_context = null;
try {
_jspxFactory = JspFactory.getDefaultFactory();
response.setContentType("text/html; charset=UTF-8");
pageContext = _jspxFactory.getPageContext(this, request, response,
null, true, 8192, true);
_jspx_page_context = pageContext;
application = pageContext.getServletContext();
config = pageContext.getServletConfig();
session = pageContext.getSession();
out = pageContext.getOut();
_jspx_out = out;
out.write('\n');
request.setCharacterEncoding("UTF-8");
boolean isRTL = UrlUtil.isRTL(request, response);
String direction = isRTL?"rtl":"ltr";
if (new RequestData(application,request, response).isMozilla()) {
out.write("<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.01//EN\">\n");
} else {
out.write("<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.01 Transitional//EN\">\n");
}
out.write("<!------------------------------------------------------------------------------\n");
out.write(" ! Copyright (c) 2000, 2007 IBM Corporation and others.\n");
out.write(" ! All rights reserved. This program and the accompanying materials \n");
out.write(" ! are made available under the terms of the Eclipse Public License v1.0\n");
out.write(" ! which accompanies this distribution, and is available at\n");
out.write(" ! http://www.eclipse.org/legal/epl-v10.html\n");
out.write(" ! \n");
out.write(" ! Contributors:\n");
out.write(" ! IBM Corporation - initial API and implementation\n");
out.write(" ------------------------------------------------------------------------------->");
out.write('\n');
out.write('\n');
LayoutData data = new LayoutData(application,request, response);
WebappPreferences prefs = data.getPrefs();
View[] views = data.getViews();
out.write("\n");
out.write("\n");
out.write("<html>\n");
out.write("<head>\n");
out.write("<meta http-equiv=\"Content-Type\" content=\"text/html; charset=UTF-8\">\n");
out.write("\n");
out.write("<title>");
out.print(ServletResources.getString("Tabs", request));
out.write("</title>\n");
out.write(" \n");
out.write("<style type=\"text/css\">\n");
out.write("\n");
out.write("\n");
out.write("BODY {\n");
out.write("\tmargin:0px;\n");
out.write("\tpadding:0px;\n");
out.write("\theight:100%;\n");
if (data.isMozilla()){
out.write("\n");
out.write("\theight:21px;\n");
}
out.write("\n");
out.write("}\n");
out.write("\n");
out.write("/* tabs at the bottom */\n");
out.write(".tab {\n");
out.write("\tfont-size:5px;");
out.write("\n");
out.write("\tmargin:0px;\n");
out.write("\tpadding:0px;\n");
out.write("\tborder-top:1px solid ThreeDShadow;\n");
out.write("\tborder-bottom:1px solid ");
out.print(data.isMozilla()?prefs.getToolbarBackground():"ThreeDShadow");
out.write(";\n");
out.write("\tcursor:default;\n");
out.write("\tbackground:");
out.print(prefs.getToolbarBackground());
out.write(";\n");
out.write("}\n");
out.write("\n");
out.write(".pressed {\n");
out.write("\tfont-size:5px;");
out.write("\n");
out.write("\tmargin:0px;\n");
out.write("\tpadding:0px;\n");
out.write("\tcursor:default;\n");
out.write("\t");
out.print(prefs.getViewBackgroundStyle());
out.write("\n");
out.write("\tborder-top:0px solid ");
out.print(prefs.getToolbarBackground());
out.write(";\n");
out.write("\tborder-bottom:1px solid ThreeDShadow;\n");
out.write("}\n");
out.write("\n");
out.write(".separator {\n");
out.write("\theight:100%;\n");
out.write("\tbackground-color:ThreeDShadow;\n");
out.write("\tborder-bottom:1px solid ");
out.print(prefs.getToolbarBackground());
out.write(";\n");
out.write("}\n");
out.write("\n");
out.write(".separator_pressed {\n");
out.write("\theight:100%;\n");
out.write("\tbackground-color:ThreeDShadow;\n");
out.write("\tborder-top:0px solid ");
out.print(prefs.getToolbarBackground());
out.write(";\n");
out.write("\tborder-bottom:1px solid ");
out.print(prefs.getToolbarBackground());
out.write(";\n");
out.write("}\n");
out.write("\n");
out.write("A {\n");
out.write("\ttext-decoration:none;\n");
out.write("\tvertical-align:middle;\n");
out.write("\theight:16px;\n");
out.write("\twidth:16px;\n");
if (data.isIE()){
out.write("\n");
out.write("\twriting-mode:tb-rl; ");
out.write('\n');
} else {
out.write("\n");
out.write("\tdisplay:block;");
out.write('\n');
}
out.write("\n");
out.write("}\n");
out.write("\n");
out.write("IMG {\n");
out.write("\tborder:0px;\n");
out.write("\tmargin:0px;\n");
out.write("\tpadding:0px;\n");
out.write("\theight:16px;\n");
out.write("\twidth:16px;\n");
out.write("}\n");
out.write("\n");
out.write("</style>\n");
out.write(" \n");
out.write("<script language=\"JavaScript\">\n");
out.write("\n");
out.write("var isMozilla = navigator.userAgent.indexOf('Mozilla') != -1 && parseInt(navigator.appVersion.substring(0,1)) >= 5;\n");
out.write("var isIE = navigator.userAgent.indexOf('MSIE') != -1;\n");
out.write("var linksArray = new Array (\"linktoc\", \"linkindex\", \"linksearch\", \"linkbookmarks\");\n");
out.write("\n");
out.write("if (isIE){\n");
out.write(" document.onkeydown = keyDownHandler;\n");
out.write("} else {\n");
out.write(" document.addEventListener('keydown', keyDownHandler, true);\n");
out.write("}\n");
out.write("\n");
out.write("/**\n");
out.write(" * Returns the target node of an event\n");
out.write(" */\n");
out.write("function getTarget(e) {\n");
out.write("\tvar target;\n");
out.write(" \tif (isIE)\n");
out.write(" \t\ttarget = window.event.srcElement;\n");
out.write(" \telse\n");
out.write(" \t\ttarget = e.target;\n");
out.write("\n");
out.write("\treturn target;\n");
out.write("}\n");
out.write("\n");
for (int i=0; i<views.length; i++) {
out.write("\n");
out.write("\tvar ");
out.print(views[i].getName());
out.write(" = new Image();\n");
out.write("\t");
out.print(views[i].getName());
out.write(".src = \"");
out.print(views[i].getOnImage());
out.write('"');
out.write(';');
out.write('\n');
}
out.write("\n");
out.write("\n");
out.write("var lastTab = \"\";\n");
out.write("/* \n");
out.write(" * Switch tabs.\n");
out.write(" */ \n");
out.write("function showTab(tab)\n");
out.write("{ \t\n");
out.write("\tif (tab == lastTab) \n");
out.write("\t\treturn;\n");
out.write("\t\n");
out.write("\tlastTab = tab;\n");
out.write("\t\n");
out.write(" \t// show the appropriate pressed tab\n");
out.write(" \tvar buttons = document.body.getElementsByTagName(\"TD\");\n");
out.write(" \tfor (var i=0; i<buttons.length; i++)\n");
out.write(" \t{\n");
out.write(" \t\tif (buttons[i].id == tab) { \n");
out.write("\t\t\tbuttons[i].className = \"pressed\";\n");
out.write("\t\t\tif (i > 0) \n");
out.write("\t\t\t\tbuttons[i-1].className = \"separator_pressed\";\n");
out.write("\t\t\tif (i<buttons.length-1) \n");
out.write("\t\t\t\tbuttons[i+1].className = \"separator_pressed\";\n");
out.write("\t\t} else if (buttons[i].className == \"pressed\") {\n");
out.write("\t\t\tbuttons[i].className = \"tab\";\n");
out.write("\t\t\tif (i > 0) \n");
out.write("\t\t\t\tif (i > 1 && buttons[i-2].id == tab) \n");
out.write("\t\t\t\t\tbuttons[i-1].className = \"separator_pressed\";\n");
out.write("\t\t\t\telse\n");
out.write("\t\t\t\t\tbuttons[i-1].className = \"separator\";\n");
out.write("\t\t\tif (i<buttons.length-1) \n");
out.write("\t\t\t\tif (i<buttons.length-2 && buttons[i+2].id == tab) \n");
out.write("\t\t\t\t\tbuttons[i+1].className = \"separator_pressed\";\n");
out.write("\t\t\t\telse\n");
out.write("\t\t\t\t\tbuttons[i+1].className = \"separator\";\n");
out.write("\t\t}\n");
out.write(" \t }\n");
out.write("}\n");
out.write("\n");
out.write("/**\n");
out.write(" * Handler for key down (arrows)\n");
out.write(" */\n");
out.write("function keyDownHandler(e)\n");
out.write("{\n");
out.write("\tvar key;\n");
out.write("\n");
out.write("\tif (isIE) {\n");
out.write("\t\tkey = window.event.keyCode;\n");
out.write("\t} else {\n");
out.write("\t\tkey = e.keyCode;\n");
out.write("\t}\n");
out.write("\t\t\n");
out.write("\tif (key <37 || key > 39) \n");
out.write("\t\treturn true;\n");
out.write("\t\n");
out.write(" \tvar clickedNode = getTarget(e);\n");
out.write(" \tif (!clickedNode) return true;\n");
out.write("\n");
out.write("\tvar linkId=\"\";\n");
out.write("\tif (clickedNode.tagName == 'A')\n");
out.write("\t\tlinkId=clickedNode.id;\n");
out.write("\telse if(clickedNode.tagName == 'TD')\n");
out.write("\t\tlinkId=\"link\"+clickedNode.id;\n");
out.write("\n");
out.write(" \tif (isIE)\n");
out.write(" \t\twindow.event.cancelBubble = true;\n");
out.write(" \telse\n");
out.write(" \t\te.cancelBubble = true;\n");
out.write(" \tif (key == 38 ) { // up arrow\n");
out.write("\t\tif(linkId.length>4){\n");
out.write("\t\t\tparent.showView(linkId.substring(4, linkId.length));\n");
out.write("\t\t\tclickedNode.blur();\n");
out.write("\t\t\tparent.frames.ViewsFrame.focus();\n");
out.write("\t\t}\n");
out.write(" \t} else if (key == 39) { // Right arrow, expand\n");
out.write(" \t\tvar nextLink=getNextLink(linkId);\n");
out.write("\t\tif(nextLink!=null){\n");
out.write("\t\t\tdocument.getElementById(nextLink).focus();\n");
out.write("\t\t}\n");
out.write(" \t} else if (key == 37) { // Left arrow,collapse\n");
out.write(" \t\tvar previousLink=getPreviousLink(linkId);\n");
out.write("\t\tif(previousLink!=null){\n");
out.write("\t\t\tdocument.getElementById(previousLink).focus();\n");
out.write("\t\t}\n");
out.write(" \t}\n");
out.write(" \t \t\t\t\n");
out.write(" \treturn false;\n");
out.write("}\n");
out.write("\n");
out.write("function getNextLink(currentLink){\n");
out.write("\tfor(i=0; i<linksArray.length; i++){\n");
out.write("\t\tif(currentLink==linksArray[i]){\n");
out.write("\t\t\tif((i+1)<linksArray.length)\n");
out.write("\t\t\t\treturn linksArray[i+1];\n");
out.write("\t\t}\n");
out.write("\t}\n");
out.write("\treturn linksArray[0];\n");
out.write("}\n");
out.write("\n");
out.write("function getPreviousLink(currentLink){\n");
out.write("\tfor(i=0; i<linksArray.length; i++){\n");
out.write("\t\tif(currentLink==linksArray[i]){\n");
out.write("\t\t\tif(i>0)\n");
out.write("\t\t\t\t return linksArray[i-1];\n");
out.write("\t\t}\n");
out.write("\t}\n");
out.write("\treturn linksArray[linksArray.length-1];\n");
out.write("}\n");
out.write("\n");
out.write("</script>\n");
out.write("\n");
out.write("</head>\n");
out.write(" \n");
out.write("<body dir=\"");
out.print(direction);
out.write("\" onload=\"showTab('");
out.print(data.getVisibleView());
out.write("')\">\n");
out.write("\n");
out.write(" <table cellspacing=\"0\" cellpadding=\"0\" border=\"0\" width=\"100%\" height=\"100%\" valign=\"middle\">\n");
out.write(" <tr>\n");
out.write("\n");
for (int i=0; i<views.length; i++)
{
String title = ServletResources.getString(views[i].getName(), request);
if (i != 0) {
out.write("\n");
out.write("\t<td width=\"1px\" class=\"separator\"><div style=\"width:1px;height:1px;display:block;\"></div></td>\n");
out.write("\t");
out.write('\n');
}
out.write("\n");
out.write("\t<td title=\"");
out.print(UrlUtil.htmlEncode(title));
out.write("\" \n");
out.write("\t align=\"center\" \n");
out.write("\t valign=\"middle\"\n");
out.write("\t class=\"tab\" \n");
out.write("\t id=\"");
out.print(views[i].getName());
out.write("\" \n");
out.write("\t onclick=\"parent.showView('");
out.print(views[i].getName());
out.write("')\" \n");
out.write("\t onmouseover=\"window.status='");
out.print(UrlUtil.JavaScriptEncode(title));
out.write("';return true;\" \n");
out.write("\t onmouseout=\"window.status='';\">\n");
out.write("\t <a href='javascript:parent.showView(\"");
out.print(views[i].getName());
out.write("\");' \n");
out.write("\t onclick='this.blur();return false;' \n");
out.write("\t onmouseover=\"window.status='");
out.print(UrlUtil.JavaScriptEncode(title));
out.write("';return true;\" \n");
out.write("\t onmouseout=\"window.status='';\"\n");
out.write("\t id=\"link");
out.print(views[i].getName());
out.write("\"\n");
out.write("\t ");
out.print(views[i].getKey()==View.NO_SHORTCUT?"":"ACCESSKEY=\""+views[i].getKey()+"\"");
out.write(">\n");
out.write("\t <img alt=\"");
out.print(UrlUtil.htmlEncode(title));
out.write("\" \n");
out.write("\t title=\"");
out.print(UrlUtil.htmlEncode(title));
out.write("\" \n");
out.write("\t src=\"");
out.print(views[i].getOnImage());
out.write("\"\n");
out.write("\t id=\"img");
out.print(views[i].getName());
out.write("\"\n");
out.write("\t height=\"16\"\n");
out.write("\t >\n");
out.write("\t </a>\n");
out.write("\t</td>\n");
}
out.write("\n");
out.write(" \n");
out.write(" </tr>\n");
out.write(" </table>\n");
out.write("\n");
out.write("</body>\n");
out.write("</html>\n");
out.write("\n");
} catch (Throwable t) {
if (!(t instanceof SkipPageException)){
out = _jspx_out;
if (out != null && out.getBufferSize() != 0)
out.clearBuffer();
if (_jspx_page_context != null) _jspx_page_context.handlePageException(t);
}
} finally {
if (_jspxFactory != null) _jspxFactory.releasePageContext(_jspx_page_context);
}
}
}
|
Q:
Why not require a bit of reputation to post Chinese characters?
There has been a lot of spamming with Chinese characters on some sites. However, some sites need to have Chinese characters enabled. We also have people asking to enable Chinese characters on a particular site.
I am wondering why we can't make it a privilege to make a post with Chinese characters. It would be logical to put it in 'remove new user restrictions', but I'm not sure if the 10 reputation threshold is enough to prevent the spam completely.
Of course this shouldn't apply to Chinese Stack Exchange.
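Detection itself seems cheap. As a purely illustrative sketch (I have no idea how the engine actually does this), a check over the CJK Unified Ideographs range would catch most of it:

```python
def contains_cjk(text):
    # CJK Unified Ideographs block only; the extension blocks and other
    # Han ranges are ignored in this illustrative check.
    return any('\u4e00' <= ch <= '\u9fff' for ch in text)

print(contains_cjk("Buy cheap 手表 now"))   # True
print(contains_cjk("Plain English post"))  # False
```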
A:
This is something that has been bothering me for a while.
I designed the anti-spam / anti-abuse layer that's been keeping most of this crap out for the last couple of years, and it was designed with 'snow shoe' spammers in mind.
For those of you unfamiliar with the term 'snow shoe' when it comes to spamming, well, consider a snow shoe:
What this effectively does is spread your force over a much larger area, or, if you're a spammer, thousands of infected Windows XP machines or rooted web servers run by lazy hosts that are incredibly great at not looking like what they are.
They've gotten exponentially bigger, and better. Many that you see actually posting have replaced machines with humans that aren't otherwise able to market their skills, or want on-the-job training to learn them. Jeff put it best, it's industrial. We're keeping a lot of it out, but I'm uncomfortable with how we're positioned.
The solution here isn't tossing problematic character sets into a corner to think about what they've done (though we have done this to thwart larger onslaughts) - the solution is to beef up the Bayesian-ess of what we currently have so it trips on the actual content better, without additional inconvenience to passers-by. There are more than several systems in place looking at this that should be better.
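To give a flavour of what "tripping on the actual content" means — and this is a toy sketch, nothing like the real thing, which stays a black box — a naive Bayes filter scores a post by how its tokens compare across known-spam and known-good corpora:

```python
import math
from collections import Counter

class NaiveBayesFilter:
    """Toy Bayesian text filter: scores messages by token likelihoods."""

    def __init__(self):
        self.counts = {"spam": Counter(), "ham": Counter()}
        self.totals = {"spam": 0, "ham": 0}

    def train(self, label, text):
        tokens = text.lower().split()
        self.counts[label].update(tokens)
        self.totals[label] += len(tokens)

    def spam_score(self, text):
        # Log-ratio of P(tokens | spam) to P(tokens | ham), with
        # Laplace smoothing so unseen tokens don't zero things out.
        vocab = len(set(self.counts["spam"]) | set(self.counts["ham"])) or 1
        score = 0.0
        for tok in text.lower().split():
            p_spam = (self.counts["spam"][tok] + 1) / (self.totals["spam"] + vocab)
            p_ham = (self.counts["ham"][tok] + 1) / (self.totals["ham"] + vocab)
            score += math.log(p_spam / p_ham)
        return score  # > 0 leans spam, < 0 leans ham

f = NaiveBayesFilter()
f.train("spam", "cheap replica watches buy now")
f.train("ham", "how do I route travel through Beijing")
print(f.spam_score("buy cheap watches") > 0)       # True
print(f.spam_score("travel through Beijing") < 0)  # True
```

The real system layers many more signals than token counts, but the shape is the point: accumulate evidence from what the content says, not from which character ranges it uses.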
Several ideas are floating around about how to do this. Don't think, even for a second, that this is a problem regex can solve. It can't, and my therapist won't let me talk about that to any further extent.
We're working on it now. I'll update when we've got something more, though (due to the nature of it) - it'll continue to be a bit of a black box. It is a priority, I think we've got what we need to really hit back, but it's a long series of complex changes involved.
My job here is to make them not-so-long. I'm working on it.
A:
While this is tempting, I do not think it is a good idea.
There can be a variety of use cases for using Chinese characters. One site especially affected by Chinese spam is Travel.SE, where one can easily invent a number of valid uses for Chinese characters.
Similarly, most of the computer sites have a valid use for Chinese characters - whenever a computer needs to display Chinese characters, or is displaying Chinese characters.
SE could implement this on a few sites, like the sites about western languages. But it should be OFF by default. |
RHINO (squat)
RHINO was a famous squat in Geneva, Switzerland. It occupied two buildings on the Boulevard des Philosophes in downtown Geneva, a few blocks away from the main campus of the University of Geneva. RHINO housed seventy people until its eviction in July 2007. It had been occupied by the squatters since 1988.
Activities
The RHINO project (which stands for "Retour des Habitants dans les Immeubles Non-Occupés", or 'Return of inhabitants to non-occupied buildings') also operated an independent cinema in its basement, the Cave 12, as well as a bar, restaurant and concert space on the ground floor called Bistro'K.
The two buildings' facades were often decorated with protest art, usually promulgating leftist political messages or generally the right to occupy the buildings. The buildings were instantly recognizable by the large papier-maché red horn installed on the wall.
In 2001, the Mayor of Geneva visited the squat with Bertrand Delanoë (Mayor of Paris) to show him Geneva's alternative culture.
Association
The squatters set up an association to represent themselves. Each individual paid 100 CHF every month to the communal fund, which among other things paid for lawyers.
Eviction
The RHINO organisation often faced legal troubles, and in 2007 it was dissolved by the Swiss Federal Tribunal because of its "illegal aims."
Geneva police then evicted the inhabitants on July 23, 2007. There were 19 arrests and water cannon was used to quell the riots.
The RHINO eviction was a major part of the city's plan to evict all of its squats. Chief prosecutor Daniel Zappelli, who at that time was dealing with at least 27 criminal procedures concerning squatting in Geneva, commented "There comes a time when state authority should be affirmed and restored."
European Court of Human Rights
RHINO appealed the decision to dissolve their organisation and in 2011 won their case at the European Court of Human Rights. The judgement asserted that the eviction violated the article 11 (Freedom of association) rights of the squatters and ordered compensation of 65,651 Euros in respect of pecuniary damage and 21,949 Euros for costs and expenses. |
/*
* Copyright 2015 United States Government, as represented by the Administrator
* of the National Aeronautics and Space Administration. All Rights Reserved.
* 2017-2021 The jConstraints Authors
* SPDX-License-Identifier: Apache-2.0
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package gov.nasa.jpf.constraints.types;
import gov.nasa.jpf.constraints.casts.CastOperation;
public abstract class ConcreteType<T> implements Type<T> {
private final String name;
private final Type<?> superType;
private final T defaultValue;
private final Class<T> canonicalClass;
private final Class<?>[] otherClasses;
private final String[] otherNames;
public ConcreteType(
String name,
Class<T> canonicalClass,
T defaultValue,
Type<?> superType,
String[] otherNames,
Class<?>... otherClasses) {
this.name = name;
this.superType = superType;
this.defaultValue = defaultValue;
this.canonicalClass = canonicalClass;
this.otherClasses = otherClasses;
this.otherNames = otherNames;
}
/* (non-Javadoc)
* @see gov.nasa.jpf.constraints.types.Type#getName()
*/
@Override
public String getName() {
return name;
}
@Override
public String[] getOtherNames() {
return otherNames.clone(); // defensive copy
}
/* (non-Javadoc)
* @see gov.nasa.jpf.constraints.types.Type#getCanonicalClass()
*/
@Override
public Class<T> getCanonicalClass() {
return canonicalClass;
}
/* (non-Javadoc)
* @see gov.nasa.jpf.constraints.types.Type#getOtherClasses()
*/
@Override
public Class<?>[] getOtherClasses() {
return otherClasses.clone(); // defensive copy
}
/* (non-Javadoc)
* @see gov.nasa.jpf.constraints.types.Type#getDefaultValue()
*/
@Override
public T getDefaultValue() {
return defaultValue;
}
/* (non-Javadoc)
* @see gov.nasa.jpf.constraints.types.Type#getSuperType()
*/
@Override
public Type<?> getSuperType() {
return superType;
}
  @Override
  public <O> CastOperation<? super O, ? extends T> cast(Type<O> fromType) {
    if (fromType instanceof ConcreteType) {
      ConcreteType<O> ctype = (ConcreteType<O>) fromType;
      // Try a cast declared by this (target) type first, then fall back
      // to one declared by the source type.
      CastOperation<? super O, ? extends T> castOp = castFrom(ctype);
      if (castOp == null) castOp = ctype.castTo(this);
      return castOp;
    }
    return null;
  }
@Override
public <O> CastOperation<? super O, ? extends T> requireCast(Type<O> fromType) {
CastOperation<? super O, ? extends T> op = cast(fromType);
if (op == null)
throw new IllegalArgumentException(
"Required cast from type " + fromType + ", but none declared");
return op;
}
protected <O> CastOperation<? super T, ? extends O> castTo(Type<O> toType) {
return null;
}
protected <O> CastOperation<? super O, ? extends T> castFrom(Type<O> fromType) {
return null;
}
}
|
Since becoming a town councillor in May last year, Liz Clews has had a lot to learn on her journey so far.
Liz, 54, has lived in Gainsborough for 27 years, after she moved here with her partner who was overseeing the build of West Lindsey Leisure Centre.
Liz works as a specialised fitness instructor.
She said: “I love my job, I love the fact it makes a difference to people not just physically, but mentally and in such a positive way.
“It is so rewarding and I have evolved from being a fitness freak and slimming consultant to nurturing a massive market in mature health, of which I have only just scratched the surface.
“I studied continuously throughout my teaching life and now specialise in COPD, Dementia and Parkinson’s, working partly with a charity called Vitality, which is NHS approved, and partly under West Lindsey Mature Fitness, a group I founded to develop new projects that support vulnerable groups in the town.
“My other love is teaching yoga and pilates, all immense subjects that work so well together. More and more mature people are taking up yoga for wellness; it’s fantastic for everything, physically, mentally and emotionally, and for many health conditions.”
Liz said before she became a town councillor she knew little about what the role would entail.
She said: “I knew nothing about what a Town Councillor entailed; I even thought you got paid.
“To be truthful I don’t look at this as political; the varied list of things that we are asked to make decisions on is for the good of the town.
“I’ve had phone calls about picking up rats, allotment gates, trees, graves and crossings.
“All the jargon that baffles me is educating me. I feel so lucky to be making a difference on things that not only affect the whole town, but also on small decisions like helping someone get a sign at the end of their road.
“It is challenging, and such a commitment, but I will ensure I do my best.”
Liz said since becoming a councillor she has learnt so much.
She said: “I never knew what a motion or campaign really was all about; I didn’t have a clue about various policies. It’s been a scary journey and, from being on the radio and now the telly, I have to pinch myself. No way did I consider this as my role. But I’m loving it and, yes, it was the right decision; I won’t be looking back in wonder.
“I can honestly say my short journey so far has been great. And crikey, am I still learning?”
Liz said her ultimate ambition is to develop support groups for sufferers of both Parkinson’s and Dementia and their carers, and to utilise the John Coupland Hospital.
She said: “You watch this space, it will happen.
“At the age of 54, with a knee op under my belt and many other areas with wear and tear, I realise just how important Mature Health is now, more than ever before.”
To clarify the effects of biochar addition (0.5%, 1.5%, 2.5%, 3.5%) on the emission of carbon dioxide (CO2) and nitrous oxide (N2O), the pH, and the microbial communities of tea garden soil, an indoor incubation experiment was conducted using acidulated tea-planted soil. Results showed that the emissions of CO2 and N2O and the rates of C and N mineralization were increased in the short term after the addition of biochar compared with the control, while the promoting effect weakened as the biochar addition rate increased. The pH, dehydrogenase activity and microbial biomass carbon were increased in the biochar treatments. Phospholipid fatty acid (PLFA) profiles with different markers were measured, and the highest total PLFA was detected in the 1.5% biochar treatment, with significant differences (P < 0.05) compared with the control. In addition, higher levels of the 16:0 and 14:0 (bacteria), 18:1ω9c (fungi) and 10Me18:0 (actinomycetes) groups were observed, and there were significant differences (P < 0.05) in individual phospholipid fatty acids among the different treatments. Taken together, the acidulated tea-planted soil was improved after the addition of biochar, with increases in soil microbial biomass and microbial numbers.
// Unit tests for the foreign key verifier. These include branch coverage of the
// verifier code. Separate conformance tests cover more detailed end-to-end
// tests for different foreign key shapes and initial data states.
class ForeignKeyVerifiersTest : public ::testing::Test {
protected:
void SetUp() override {
ZETASQL_ASSERT_OK(CreateDatabase({R"(
CREATE TABLE T (
A INT64,
B INT64,
C INT64,
) PRIMARY KEY(A))",
R"(
CREATE TABLE U (
X INT64,
Y INT64,
Z INT64,
) PRIMARY KEY(X))"}));
}
absl::Status AddForeignKey() {
return UpdateSchema({R"(
ALTER TABLE U
ADD CONSTRAINT C
FOREIGN KEY(Z, Y)
REFERENCES T(B, C))"});
}
absl::Status CreateDatabase(const std::vector<std::string>& statements) {
ZETASQL_ASSIGN_OR_RETURN(database_, Database::Create(&clock_, statements));
return absl::OkStatus();
}
absl::Status UpdateSchema(absl::Span<const std::string> statements) {
    int successful;
    absl::Status status;
    absl::Time timestamp;
    ZETASQL_RETURN_IF_ERROR(
        database_->UpdateSchema(statements, &successful, &timestamp, &status));
return status;
}
void Insert(const std::string& table, const std::vector<std::string>& columns,
const std::vector<int>& values) {
Mutation m;
m.AddWriteOp(MutationOpType::kInsert, table, columns, {AsList(values)});
ZETASQL_ASSERT_OK_AND_ASSIGN(std::unique_ptr<ReadWriteTransaction> txn,
database_->CreateReadWriteTransaction(
ReadWriteOptions(), RetryState()));
ZETASQL_ASSERT_OK(txn->Write(m));
ZETASQL_ASSERT_OK(txn->Commit());
}
ValueList AsList(const std::vector<int>& values) {
ValueList value_list;
std::transform(
values.begin(), values.end(), std::back_inserter(value_list),
[](int value) { return value == 0 ? NullInt64() : Int64(value); });
return value_list;
}
Clock clock_;
std::unique_ptr<Database> database_;
};
import random
from pecan import expose, response, request
def rand_string(min_length, max_length):
    # Renamed parameters so they no longer shadow the builtins min/max.
    int_gen = random.randint
    string_length = int_gen(min_length, max_length)
    return ''.join([chr(int_gen(ord('\t'), ord('~')))
                    for i in range(string_length)])
body = rand_string(10240, 10240)
class TestController(object):
def __init__(self, account_id):
self.account_id = account_id
@expose(content_type='text/plain')
def test(self):
user_agent = request.headers['User-Agent'] # NOQA
limit = request.params['limit'] # NOQA
response.headers['X-Test'] = 'Funky Chicken'
return body
class HelloController(object):
@expose()
def _lookup(self, account_id, *remainder):
return TestController(account_id), remainder
class RootController(object):
@expose(content_type='text/plain')
def index(self):
response.headers['X-Test'] = 'Funky Chicken'
return body
hello = HelloController()
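For reference, the `rand_string` helper above can be exercised on its own, outside the pecan app, to confirm what it produces: a string whose characters all fall in the inclusive range tab (`\t`) through tilde (`~`). A standalone sketch (the `min_length`/`max_length` names are renamed here to avoid shadowing Python builtins):

```python
import random

def rand_string(min_length, max_length):
    # Same logic as the controller module: pick a random length, then fill
    # with random characters in the inclusive range '\t' .. '~'.
    int_gen = random.randint
    string_length = int_gen(min_length, max_length)
    return ''.join(chr(int_gen(ord('\t'), ord('~')))
                   for _ in range(string_length))

# The module-level body is a fixed 10 KiB blob, since min == max == 10240.
body = rand_string(10240, 10240)
```

Because both bounds are 10240, every response served by `index` and `test` has exactly the same length, which makes the app convenient as a constant-size HTTP benchmark target.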
|
The present disclosure relates to vent apparatus for use in venting the inside of a rotational mold to outside of the mold.
Rotational molding involves heating a flowable material in a hollow mold and rotating the mold to melt and distribute the material over the inside of the mold. Rotational molding is a high temperature, low pressure process and the strength required from the molds is minimal, which results in its ability to produce large, complex parts using a low-cost mold. Further, the low processing pressure involved in rotational molding has the added advantage of producing parts that are virtually stress free.
Rotational molded articles are used for many different commercial or consumer purposes including but not limited to livestock feeders, drainage systems, food service containers, instrument housings, fuel tanks, vending machines, highway barriers, road markers, boats, kayaks, childcare seats, light globes, tool carts, planter pots, playing balls, playground equipment, headrests, truck/cart liners, and air ducts.
The process of rotational molding generally includes placing a flowable material such as, e.g., a polymer usually in a powder form, inside a mold. Often, the mold is composed of two or more parts and totally encloses the powder. Molds may be made out of steel, aluminum, and/or another metal and may be supported by a steel frame. The mold is then placed in an oven and heated for a predetermined amount of time to allow the flowable material to turn into a liquid state. The mold is rotated in two perpendicular axes throughout the rotational molding process. As the mold heats up, the flowable material begins to coalesce to the inside walls of the mold. The heat distribution around the inner surface of the mold may be determined by the outside design of the mold. For example, tin may be used to reduce heat in areas and gas lines may be used to radiate, or deliver, more heat on, or to, other areas similar to a convection oven.
Centrifugal forces additionally contribute to the accumulation of the flowable material around the inside of the mold (e.g., such centrifugal forces may constantly pull the material against the inside surface of the mold as the mold is rotated about the two respective axes). After a selected period of time, the mold may be cooled. Rotation of the mold may continue throughout the cooling process. Once the flowable material (e.g., polymer) has hardened (after the cooling process has completed), rotation can stop, the mold may be opened, and the mold part can be removed from the mold.
Along with the flowable material, gases (e.g., air, oxygen, nitrogen, carbon dioxide, etc.) are located inside the mold during the molding process. The gases may exercise significant rates of thermal expansion in comparison to the flowable materials inside the mold. Since the mold may be sealed tight, the pressure inside the mold may fluctuate (e.g., increase and/or decrease) due to the temperature fluctuations of the gases located inside the mold during the heating and cooling steps of the rotational molding process. The pressure fluctuations may cause “blowholes” and/or deformations in the article being molded.
To counter these pressure fluctuations, a “vent tube” may be placed in the mold to allow the inside of the mold to “breathe” to the outside of the mold. In other words, the vent tube may allow the pressure inside of the mold to equalize with the pressure outside of the mold. Typically, a wad of furnace filter or steel wool is placed in the vent tube to prevent any flowable material (e.g., polymer) from falling out of the mold through the vent tube as the mold rotates. Tape may also be used to cover the end of the vent tube located inside of the mold. The wad of furnace filter or steel wool and/or the tape may be burned off after a selected time period during a heating cycle of the rotational molding process such that, e.g., the vent tube can breathe. Often, such practices may result in clogged vent tubes, which may cause a resistance to airflow (e.g., which may cause improper molding or blowholes).
Many molds may not fully seal at the points where the mold comes together (which may be called parting lines). Often, the mold may vent through the parting lines prior to the flowable material solidifying, or hardening, and thereby blocking the parting lines. If airflow in a vent tube is restricted after the parting lines have become blocked, the air pressure inside the mold will rise as the temperature inside the mold rises. Likewise, as the mold begins to cool, the pressure inside the mold will begin to fall as the temperature inside falls. During the cooling process, the gas inside the mold may be sealed from the outside of the mold as the polymer completely coats the entire inside of the mold. If the vent remains restricted, a vacuum may be created within the molded part that may cause “blowholes” along the parting lines as gas tries to enter the mold to relieve the vacuum. Additionally, as the polymer hardens during the cooling process, the vacuum may suck a portion, or part of the molded article away from the mold wall and cause a deformed or “scrap” part.
Vents may be used as one-way valves, which may be reliant on the pressure differential between the gas inside the mold and the gas outside the mold, and which may provide a positive pressure inside the mold at the end of the heating cycle such that a vacuum may not be created during the cooling phase. Such a one-way valve system may be unable to control pressure that builds up in the mold. A silicone tube used as a vent tube is disclosed in U.S. Pat. App. Pub. No. 2005/0167887 published Aug. 4, 2004 to Rory Jones, which is incorporated herein by reference in its entirety.
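The magnitude of the pressure swing described above can be estimated with the ideal gas law at constant volume (Gay-Lussac's law: P2 = P1 · T2/T1 in kelvin). The sketch below is a back-of-envelope illustration, not part of the disclosure, and the temperatures used are illustrative assumptions:

```python
def sealed_mold_pressure(p_initial_kpa, t_initial_c, t_final_c):
    """Gay-Lussac's law: at constant volume, P2 = P1 * (T2 / T1),
    with temperatures converted to kelvin."""
    t1 = t_initial_c + 273.15
    t2 = t_final_c + 273.15
    return p_initial_kpa * (t2 / t1)

# Heating trapped air in a fully sealed mold from 20 C to 200 C:
p_hot = sealed_mold_pressure(101.325, 20.0, 200.0)  # roughly 163 kPa
# Cooling back toward ambient pulls the internal pressure below
# atmospheric again; that is the vacuum which causes blowholes and
# part deformation if the vent is clogged.
```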
On the issue of the public order concept in the context of criminal law qualification and differentiation of crimes from administrative offenses. We consider the concept of "public order" and emphasize that its protection is reflected in a number of provisions of the Criminal Code of the Russian Federation and the Code of Administrative Offenses of the Russian Federation. Based on doctrinal points of view, a list of acts that infringe on public order is established, since not all the norms of these Codes specify that they are aimed at protecting public order from unlawful infringement. At the same time, the legislator does not offer an interpretation of the concept, although a number of regulatory legal acts governing the protection of public order have been adopted. Therefore, drawing on doctrinal points of view, we propose the following definition: public order is expressed in the observance by individuals of the norms of law and morality in public places, ensuring public peace, the inviolability of the person, the normal functioning of bodies of state power and local self-government, and the activities of public organizations and legal entities. In addition, a position exists in the scientific literature that any crime violates public order (and, consequently, so does any administrative offense); however, based on judicial practice, we conclude that when other crimes and offenses not related to violation of public order are committed, courts do not indicate a violation of public order, and for the acts analyzed it is not always specified in what exactly the violation of public order consists.
The Microphthalmia Transcription Factor (Mitf) Controls Expression of the Ocular Albinism Type 1 Gene: Link between Melanin Synthesis and Melanosome Biogenesis ABSTRACT Melanogenesis is the process that regulates skin and eye pigmentation. Albinism, a genetic disease causing pigmentation defects and visual disorders, is caused by mutations in genes controlling either melanin synthesis or melanosome biogenesis. Here we show that a common transcriptional control regulates both of these processes. We performed an analysis of the regulatory region of Oa1, the murine homolog of the gene that is mutated in the X-linked form of ocular albinism, as Oa1's function affects melanosome biogenesis. We demonstrated that Oa1 is a target of Mitf and that this regulatory mechanism is conserved in the human gene. Tissue-specific control of Oa1 transcription lies within a region of 617 bp that contains the E-box bound by Mitf. Finally, we took advantage of a virus-based system to assess tissue specificity in vivo. To this end, a small fragment of the Oa1 promoter was cloned in front of a reporter gene in an adeno-associated virus. After we injected this virus into the subretinal space, we observed reporter gene expression specifically in the retinal pigment epithelium, confirming the cell-specific expression of the Oa1 promoter in the eye. The results obtained with this viral system are a preamble to the development of new gene delivery approaches for the treatment of retinal pigment epithelium defects. |
<filename>source/lantern/main.cpp
/* Lantern - A path tracer
*
* Lantern is the legal property of <NAME>
* Copyright <NAME> 2015 - 2016
*/
#include "scene/scene.h"
#include "visualizer/visualizer.h"
#include "integrator/integrator.h"
#include "argparse.h"
#include <xmmintrin.h>
#include <pmmintrin.h>
#include <atomic>
#include <cstdio>
#include <thread>
int main(int argc, const char *argv[]) {
_MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);
_MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON);
struct LanternOpts {
const char *ScenePath = "scene.json";
bool Verbose = false;
} options;
const char *const usage[] = {
"lantern [options] [[--] args]",
"lantern [options]",
NULL,
};
struct argparse_option parseOptions[] = {
OPT_HELP(),
OPT_GROUP("Root options"),
OPT_BOOLEAN('v', "verbose", &options.Verbose, "Use verbose logging"),
OPT_GROUP("Basic Options"),
		OPT_STRING('s', "scene", &options.ScenePath, "Path to the scene.json file. If omitted, Lantern will search for 'scene.json' in the working directory"),
OPT_END(),
};
argparse argparse;
argparse_init(&argparse, parseOptions, usage, 0);
argparse_describe(&argparse, "Renders a scene with an interactive preview", "");
argc = argparse_parse(&argparse, argc, argv);
// Load the scene
Lantern::Scene scene;
	if (!scene.LoadSceneFromJSON(options.ScenePath)) {
		printf("Could not load scene file: %s\n", options.ScenePath);
		return 1;
	}
Lantern::FrameBuffer transferFrames[3] = {
Lantern::FrameBuffer(scene.Camera->FrameBufferWidth, scene.Camera->FrameBufferHeight),
Lantern::FrameBuffer(scene.Camera->FrameBufferWidth, scene.Camera->FrameBufferHeight),
Lantern::FrameBuffer(scene.Camera->FrameBufferWidth, scene.Camera->FrameBufferHeight)
};
std::atomic<Lantern::FrameBuffer *> swapBuffer(&transferFrames[1]);
Lantern::Integrator integrator(&scene, &transferFrames[0], &swapBuffer);
Lantern::Visualizer visualizer(&scene, &transferFrames[2], &swapBuffer);
if (!visualizer.Init(scene.Camera->FrameBufferWidth, scene.Camera->FrameBufferHeight)) {
return 1;
}
std::atomic_bool quit(false);
std::thread rendererThread(
[](Lantern::Integrator *_integrator, std::atomic_bool *_quit) {
while (!_quit->load(std::memory_order_relaxed)) {
_integrator->RenderFrame();
}
}, &integrator, &quit);
visualizer.Run();
visualizer.Shutdown();
quit.store(true);
rendererThread.join();
}
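The three `FrameBuffer` objects and the `std::atomic<Lantern::FrameBuffer *>` swap pointer above form a classic triple-buffering handoff: the integrator always owns one buffer to write into, the visualizer owns one to read from, and the third waits in the swap slot; each side trades buffers with a single atomic exchange. A minimal Python sketch of the same protocol (names are illustrative, not from Lantern; Python has no lock-free pointer exchange, so a lock emulates `std::atomic::exchange`):

```python
import threading

class SwapSlot:
    """Emulates std::atomic<T*>::exchange: atomically swap the held
    buffer for a new one and return the old one."""
    def __init__(self, buf):
        self._buf = buf
        self._lock = threading.Lock()

    def exchange(self, new_buf):
        with self._lock:
            old, self._buf = self._buf, new_buf
            return old

# Three buffers: producer writes one, consumer reads another,
# and the third waits in the swap slot.
buffers = [{"frame": None}, {"frame": None}, {"frame": None}]
slot = SwapSlot(buffers[1])

producer_buf = buffers[0]
consumer_buf = buffers[2]

# Producer finishes a frame, then trades its full buffer for the spare.
producer_buf["frame"] = 42
producer_buf = slot.exchange(producer_buf)

# Consumer trades its stale buffer for whatever is newest in the slot.
consumer_buf = slot.exchange(consumer_buf)
```

Neither side ever waits on the other: each trade is a single exchange, so the render thread can run flat out while the visualizer always picks up the most recently completed frame.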
|
For Kimberly Pinkson, founder of EcoMom Alliance, the environment and community go hand in hand. Now, she’s reaching out to her audience through an online marketplace.
Wondering where to shop for organic bath soaps or a green label pair of jeans? Look no further. With the new EcoMom.com website, green shoppers can get easy access to organic and natural everyday products.
It's been an interesting walk to the marketplace for Pinkson. It all started in 2006, after she and a small group of motivated moms attended a 2006 UNEP (UN Environment Programme)-sponsored event.
From there they built a national organization around some 260 self-trained “EcoMom community leaders” who cater parties, lectures and events to "inspire and empower." There are now 11,000 members and EcoMom has partnerships with a number of other non-profit organizations and corporate sponsors.
Q: First of all—why "moms"?
A: The “mom demographic” is such a powerful one as a market force; it’s been reported that moms are worth $2 to $7 trillion. Moms have an incredible power over production, which is why we decided to go with the name. They are also role models. For example, if you’re a mom who recycles at home, your children will grow up seeing and being influenced by that. The name “moms” also evokes caretaking and stewardship.
Q: How can "EcoMom” concepts be incorporated into businesses practices?
A: We find that more companies are starting to look toward us for information, whether it be how they can “green” their office spaces or how much money they could save by going green. Companies can start by implementing actions such as using non-toxic products at work. The office becomes your family during the week since you spend so much time with your co-workers. Offices can start out by doing simple things like recycling, using mugs instead of paper or plastic cups, and holding creative events and parties—and of course, refraining from using plastic utensils during them!
Q: How will these steps help people save money?
A: Fortunately, things that are helpful for the environment are also better for the pocketbook. For example, installing weather stripping to insulate your home will save you more than $200 a year on heating. By doing this, you’re cutting fuel usage while at the same time reducing CO2 emissions and saving money.
Also, go through the house and see what kinds of cleansers you are using. For example, natural products such as rubbing alcohol, lemon juice, vinegar, which are very cheap, work just as well as name brand $5 cleaners. The average home has 150 toxic chemicals. By using natural products as cleaners, this will save you $20 a month.
In terms of transportation, make an effort to carpool more with friends and try to walk more. This will not only save money on gas, but will also be healthier for your body!
Finally, be aware of the foods you buy at your grocery stores. Eating local and more seasonal food will save you money.
Q: What are some difficulties that you face as a nonprofit organization?
A: Since EcoMom was first established, it’s been entirely volunteer work. But we are working right now to secure better funding so that some of the positions we currently have can become salaried positions.
Q: What are the easy steps people can take to green their routine?
A: Take the EcoMom challenge, join the EcoMom Alliance, host an EcoMom party, become an EcoMom leader. We are launching a new “EcoMom Market” on our website in a few days and this is another way to support the alliance. (The website has been launched since the interview.) This is a web-based market and it sells all the things that you need in your daily life, not just soy candles and scarves, but also other organic and natural products. Our goal was to create more practical products and it was to meet the demands of EcoMoms!
Q: What are the common mistakes people make in trying to go green?
A: I think that the biggest mistake that people make is that they approach it with an "all-or- nothing” mindset. Instead of starting just where they can, they say: “I can’t build my home green, so I can’t do anything about it.” Anything you’re doing is better than nothing. People get anxious when they find themselves, for example, using a plastic cup as opposed to a mug, and they feel guilty. I believe going green is a continuous move forward and looking for better answers.
Q: What does your organization need to do better?
A: I think we need to strengthen our EcoMom.com infrastructure so that we can better support community leaders. They’re ready and eager for more; we weren’t prepared for that level of success! The health of the population needs it.
The invention relates to a lateral wall arrangement for laterally bounding the roller gap of a roller press having rolls supported in a machine frame, driven in opposite directions and forming a roller gap, comprising a lateral wall, an assembly device and a suspension for the lateral wall, wherein the lateral wall is supported in a spring-loaded manner by the suspension.
In roller presses for the high-pressure comminution of material to be ground, the material to be comminuted is discharged uniformly onto the roller gap of two rolls rotating in opposite directions, the material to be ground being drawn into the roller gap by the rolls and compacted there. If the compaction is very high, the material structure of the material to be comminuted fractures and forms briquette-like flakes, which leave the roller gap on the side opposite the feed side. These flakes can then be de-agglomerated with the comparatively low expenditure of energy, by which means a comminuted material to be ground can be obtained. As mentioned at the beginning, it is important for the high-pressure comminution to charge the roller gap uniformly with material to be ground, in order that the roller press does not operate as a breaker and therefore exhibit a lower comminution performance. In order to charge the roller gap uniformly with material to be ground, the material is distributed uniformly over the length of the roller gap by a discharge device. It is necessary to devise a boundary in each case at the ends of the rolls in order that the material to be ground does not fall out of the roller press at these points and this thus leads to a non-uniform discharge of material over the length of the roller gap. Such a boundary can in the simplest case be a lateral wall in each case, which each bear closely on the rotating ends of the rolls in the roller press.
However, if one or both rolls of the roller press is movable as a loose roll, in order to be able to carry out deflection movements in the event of a non-uniform discharge of material to be ground, the lateral walls must be able to follow this mobility. Furthermore, the lateral walls interfere during the regular maintenance and the cyclic changing of the rolls, which are stressed highly by wear, since the lateral walls firstly have to be dismantled and each individual dismantling step during a roll change leads to an undesired prolongation of the stoppage time.
German Laid-open Specification DE 3705051 A1 discloses a roller press in which the lateral walls are fixed by a link to the lateral walls of the material discharge device located above the latter and are supported against the ends of the rolls by an outrigger via compression springs received in clamping screws. Because of the compression springs, the lateral walls suspended on the link are capable of following deflection movements of the rolls. This type of suspension has proven worthwhile in operation but is complicated to dismantle during the regular roll change.
In German Patent DE 102007032177 B3, a lateral wall for a roller press is disclosed which is received in a slotted guide. The slotted guide permits the lateral wall to carry out a movement during the dismantling of the rolls in which the lateral wall is moved away from the end of a roll. Between the lateral wall and the slotted guide, a lever connected to the lateral wall in the manner of a rotary joint ensures that the lateral wall is locked in its operating position. However, this locking prevents the lateral wall from deflecting during operation, which means that the lateral wall can be damaged and in the extreme case can be destroyed. |
<reponame>jiturbide/JavaDeveloper2018Beeva
package com.curso.examen01;
public class Q34 {
{}
}
interface Climb {
boolean isTooHigh(int height, int limit);
}
class Climber {
public static void main(String[] args) {
		check((h, l) -> h.append(l).isEmpty(), 5); // does not compile: h and l are inferred as int, and int has no append() method
}
private static void check(Climb climb, int height) {
if (climb.isTooHigh(height, 10))
System.out.println("too high");
else
System.out.println("ok");
}
}
/*
34. What is the result of the following code?
1: interface Climb {
2: boolean isTooHigh(int height, int limit);
3: }
4:
5: public class Climber {
6: public static void main(String[] args) {
7: check((h, l) -> h.append(l).isEmpty(), 5);
8: }
9: private static void check(Climb climb, int height) {
10: if (climb.isTooHigh(height, 10))
11: System.out.println("too high");
12: else
13: System.out.println("ok");
14: }
15: }
A. ok
B. too high
C. Compiler error on line 7.
D. Compiler error on line 10.
E. Compiler error on a different line.
F. A runtime exception is thrown.
R: C
*/ |
/**
* Synchronously handles a message sent to this part's message passing hierarchy. This method may not be invoked on
* the Swing dispatch thread.
*
* @param context The execution context.
* @param initiator The statement or expression in the abstract syntax tree that caused this message to be sent,
* null indicates that WyldCard generated this message.
* @param message The message to be received
* @throws HtException Thrown if an error occurs while handling this message.
*/
default void blockingReceiveMessage(ExecutionContext context, ASTNode initiator, Message message) throws HtException {
ThreadChecker.assertWorkerThread();
CountDownLatch cdl = new CountDownLatch(1);
final HtException[] exception = new HtException[1];
receiveMessage(context, initiator, message, (msg, wasTrapped, error) -> {
if (error != null) {
exception[0] = error;
}
cdl.countDown();
});
try {
cdl.await();
} catch (InterruptedException e) {
Thread.currentThread().interrupt();
}
if (exception[0] != null) {
throw exception[0];
}
} |
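The latch-based pattern above (turn a callback-style `receiveMessage` into a synchronous call, capture any error, and re-throw it on the calling thread) can be sketched in a few lines. Here is a hedged Python equivalent using `threading.Event`; the names `send_async` and `send_blocking` are illustrative, not from WyldCard:

```python
import threading

def send_async(message, on_complete):
    """Stand-in for a callback-style message API: invokes the completion
    callback on a worker thread, passing an error (or None)."""
    def worker():
        error = ValueError("boom") if message == "bad" else None
        on_complete(message, error)
    threading.Thread(target=worker).start()

def send_blocking(message):
    """Block until the async send completes; re-raise any callback error
    on the calling thread, mirroring blockingReceiveMessage above."""
    done = threading.Event()          # plays the role of the CountDownLatch
    holder = {"error": None}          # plays the role of the HtException[1]

    def on_complete(msg, error):
        holder["error"] = error
        done.set()

    send_async(message, on_complete)
    done.wait()
    if holder["error"] is not None:
        raise holder["error"]
    return message
```

As in the Java version, the completion flag and the single-slot error holder are the only shared state; the caller parks on the event instead of polling, and the error crosses the thread boundary by being re-raised after the wait.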
Condition number bounds for problems with integer coefficients. An a priori bound for the condition number associated to each of the following problems is given: general linear equation solving, minimum squares, non-symmetric eigenvalue problems, solving univariate polynomials, and solving systems of multivariate polynomials. It is assumed that the input has integer coefficients and is not on the degenerate locus of the respective problem (i.e. the condition number is finite). The condition numbers are then bounded in terms of the dimension and of the bit-size of the input. In the same setting, bounds are given for the speed of convergence of the following iterative algorithms: QR without shift for the symmetric eigenvalue problem, and Graeffe iteration for univariate polynomials.

Introduction. In most of the numerical analysis literature, the complexity and stability of numerical algorithms are usually estimated in terms of the problem instance dimension and of a 'condition number'. For instance, the complexity of solving an n × n linear system Ax = b is usually estimated in terms of the dimension n (actually the input size is n(n + 1)) and of the condition number κ(A) = ‖A‖₂ ‖A⁻¹‖₂. There is a set of problem instances with κ(A) = ∞, and in most cases it makes no sense to attempt solving those problem instances. There are also problem instances (in our case, matrices) close to the locus of degenerate problem instances. Those will have a large condition number, and will be said to be ill-conditioned. It is usually accepted that ill-conditioned problem instances are hard to solve. Thus, for complexity purposes a problem instance with a large condition number should be considered 'large'. Therefore, when considering problems defined for real inputs, a reasonable measure for the input size would be (in our example): n² log₂ κ(A). (Compare to Formula 2.1 and the paragraph below it. See also the discussion in, Chapter 3, Section 1.)
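For a diagonal matrix, the 2-norm condition number κ(A) = ‖A‖₂ ‖A⁻¹‖₂ reduces to the ratio of the largest to the smallest |diagonal entry|, which makes the "distance to the degenerate locus" behaviour easy to see in a few lines. An illustrative sketch (not from the paper):

```python
def diag_condition_number(diag):
    """kappa(A) = ||A||_2 * ||A^-1||_2 for A = diag(d_1, ..., d_n).
    The singular values of a diagonal matrix are |d_i|, so
    kappa = max|d_i| / min|d_i|."""
    mags = [abs(d) for d in diag]
    if min(mags) == 0:
        return float("inf")  # degenerate locus: A is singular
    return max(mags) / min(mags)

# Integer entries away from the singular locus: modest condition number.
k_good = diag_condition_number([3, -2, 5])   # 5/2 = 2.5
# Shrinking one entry toward 0 drives kappa to infinity.
k_bad = diag_condition_number([1.0, 1e-12])  # about 1e12
```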
Another tradition, derived from classical complexity theory and pervasive in several branches of the literature (such as linear programming), is to consider the subset of problem instances with integer coefficients. Hence the input size is the number of coefficients times the bit-size of the largest coefficient (in absolute value). In this paper, the following classical problems of numerical analysis are considered: 1. Solving a general n × n system of linear equations. 2. The minimal squares problem for a full-rank matrix. 3. The non-symmetric eigenvalue problem. 4. Solution of one univariate polynomial. 5. Solution of a non-degenerate system of n polynomial equations in n variables. All those problems share the feature mentioned above: there is a degenerate locus, and problem instances with real coefficients can be as close to the degenerate locus as wished. This implies that they can be arbitrarily ill-conditioned. However, in Theorems 1 to 5 below, we provide bounds for the condition number of problem instances with integer coefficients that are not in the degenerate locus. Those bounds depend on the dimension (size) of the problem instance and on the bit-size of its coefficients. In the analysis of iterative algorithms, one further considers a certain quantity that can be used to bound the speed of convergence and hence the number of iterations needed to obtain a given approximation. For instance, for power methods (or QR iteration without shift) in the symmetric eigenvalue problem, one can bound the number of steps in terms of the desired accuracy and of the ratio between different eigenvalues. The farther this number is from 1, the faster the convergence. Once again, if the input has real coefficients, this quantity can be arbitrarily close to 1. However, explicit bounds for that quantity will be given for inputs with integer coefficients for: 6. QR iteration without shift for the symmetric eigenvalue problem. 7. Graeffe iteration for solving univariate polynomials.
The reader should be warned that the results herein are worst-case estimates, and are overly pessimistic for application purposes. The main motivation for these results is to convert numerical analysis estimates into 'polynomial time' estimates, not the opposite. Statement of main results. Notation: ‖·‖₂ stands for the 2-norm: if x ∈ Rⁿ or Cⁿ, then ‖x‖₂ = (Σᵢ |xᵢ|²)^{1/2}; if A is a matrix, then ‖A‖₂ = max_{‖x‖₂ = 1} ‖Ax‖₂. 2.1. Linear equation solving. The first problem considered is linear equation solving: given an n × n matrix A and a vector b ∈ Rⁿ, find x ∈ Rⁿ such that Ax = b. Its condition number (with respect to the 2-norm) is defined as κ(A) = ‖A‖₂ ‖A⁻¹‖₂. No originality is claimed for Theorem 1. This result is included for completeness and because its proof is elementary, yet illustrates the principle behind the other results. 2.2. Minimal squares. The second problem in the list is minimal squares fitting. Let A be an m × n matrix, m ≥ n, with full rank, and let b ∈ Rᵐ. One has to find x to minimize ‖Ax − b‖₂². Let r = Ax − b be the residual; we are minimizing ‖r‖₂². A condition number for the linear least squares problem is given on p. 117 (compare Lecture 18 and Section 19.1). Since we do not assume A to be square, we need to give a new definition for κ(A). Let σ_max(A) and σ_min(A) be respectively the largest and the smallest singular values of A. Then set κ(A) = σ_max(A)/σ_min(A). When m = n, this definition is equal to the previous one. The singular locus is now the set of pairs (A, b) such that A does not have full rank (i.e. σ_min(A) = 0) or such that ‖r‖₂ = ‖b‖₂ (i.e. b is orthogonal to the image of A). The result is: Theorem 2. Let A be an m × n matrix with integer coefficients, and assume that A has full rank. Let b ∈ Zᵐ. Set H = max_{i,j}(|A_{ij}|, |b_i|). Then, if b is not orthogonal to the image of A, the condition number is bounded in terms of m, n and H. 2.3. Non-symmetric eigenvalue problem. Let A be an n × n matrix and let λ be a single eigenvalue of A.
The condition number of λ depends on the angle between the left and right eigenvectors: let x, y be respectively right and left norm-1 eigenvectors of A associated to λ: Ax = λx, y*A = λy*, and ‖x‖₂ = ‖y‖₂ = 1. Then the condition number of λ is 1/|y*x|. See Theorem 4.4, p. 149, for references. Theorem 3. Let A be an n × n matrix with integer coefficients, and let λ be a single eigenvalue of A. Then the condition number of λ is bounded in terms of n and the bit-size of the entries of A. 2.4. Solving univariate polynomials. The condition number (in affine space) for solving a univariate polynomial f(x) = Σ_{i=0}^{d} f_i x^i can be defined as on page 228. The degenerate locus is the set of polynomials with a multiple root or with a root at infinity. 2.5. Solving systems of polynomials. A similar condition number exists for systems of polynomials. However, for the purpose of condition number theory, it is usually convenient to homogenize the equations and to study the perturbation theory of the 'roots' in complex projective space. This can also be seen as a change of metric that simplifies the formula of the condition number and of several theorems (see Chapters 10, 12, 13). Let f = (f_1, …, f_n) be a system of polynomials in the variables x_1, …, x_n. We homogenize the system by multiplying each monomial by a suitable power of a new variable x_0. We obtain a system of homogeneous polynomials in n + 1 variables, which we call F = (F_1, …, F_n). The natural space for the roots of F is projective space Pⁿ, defined as the space of all 'rays' (x_0 : x_1 : ⋯ : x_n), where x_0, …, x_n are not all equal to 0. Every finite root (x_1, …, x_n) of f corresponds to the projective root of F given by (1 : x_1 : ⋯ : x_n), and projective roots of F correspond either to a finite root of f or to a root 'at infinity'. Suppose that the coefficients of f (hence of F) are made to depend upon a parameter t. The condition number bounds the absolute speed of the roots of F (in projective space) with respect to the absolute speed of the coefficients of F. Recall that the roots of F are in projective space, so their speed vector belongs to the tangent space TPⁿ.
The condition number of F at a root turns out to be: where ∈ C n+1 is such that ( 0 : : n ) is a root of F (See Proposition 7c in Page 230 of ). We did not define the norm of a polynomial yet. Above,. 2 stands for the unitary invariant norm (See Chapter III-7 or Section 12.1), that is the most reasonable generalization of the 2-norm to spaces of polynomials: Notation. Let G be a homogeneous degree d polynomial in n + 1 variables. Then Let F be a system of homogeneous polynomials. Then With these definitions, the number (F, ) is invariant under scalings of F,, and under the action of the unitary group U(n + 1), where an element Q ∈ SU(n + 1) acts by Q : (F, ) → (F Q, Q). In order to define the condition number of a system of n equations in n variables, we set: where ranges over the roots of F. (Another possibility is to restrict to the nondegenerate roots of F. This would make no difference in this paper). The following theorem is true if one restricts to any subset of the roots of F. Theorem 5. Let f be a system of n polynomial equations in n variables, with integer coefficients. We write H(f ) for the maximum of the absolute value of the coefficients of f, S(f ) for the number of non-zero coefficients of f and D for max d i. Assume that (f ) is finite. Then where c is an universal constant. Unlike the non-symmetric eigenvalue problem, the symmetric eigenvalue problem has absolute condition number always equal to 1 (See Theorem 5.1. See also citePARLETT Fact 1.11 p.16). However, when using an iterative algorithm, the ratio of eigenvalues (A) = min j>i j i may play an important role for estimating convergence. For instance, according to Theorem 28.4, the QR algorithm without shift converges linearly with speed 1 (A). Convergence may get slower when (A) → 1. Therefore one can bound the speed of convergence by bounding Thus it suffices to perform O( 1 0 log 2 1 1 ) iterations to obtain a result with accuracy 1. 
Also, the quantity ρ(A)^{−1} can be interpreted as a condition number for the eigenvectors (see Theorem 5.7, p. 208). We will show here that:

Theorem 6. Let A be an n × n matrix with integer coefficients. Then the logarithm of ρ(A)^{−1} is bounded by a polynomial in the input size of A.

Graeffe iteration. Let f(x) = Π_{i=1}^{d} (x − ζ_i) be a monic univariate polynomial with zeros ζ_1, …, ζ_d. Those zeros can be ordered such that |ζ_1| ≥ |ζ_2| ≥ ⋯ ≥ |ζ_d|. It is explained in the reference cited how to recover the actual roots of f after a certain number of Graeffe iterations, with a good approximation. The number of required iterations depends on the ratios of consecutive root moduli. Unlike in Section 2.6, we do not require here that the roots have different absolute values. We consider also an auxiliary quantity σ(f). By the above definitions, the 'condition number' σ(f)^{−1} is always finite. In order to recover the roots within relative precision ε, the number of Graeffe iterations to perform is O(log σ(f)^{−1} + log ε^{−1}).

For clarity of exposition, we will show that bound under a special hypothesis: all the roots should be distinct positive real numbers. For the general case, see the references cited. Also, all estimates here are 'up to the first order', and quadratic error terms will be discarded. After k steps of Graeffe iteration one obtains the polynomial g(x) = Π_i (x − ζ_i^{2^k}) = Σ_i g_i x^i. Expanding each g_i as the (d − i)-th elementary symmetric function of the ζ_j^{2^k}, one obtains, under the special hypothesis, ζ_i^{2^k} ≃ |g_{d−i}/g_{d−i+1}|, so the roots can be read off from the ratios of consecutive coefficients. Since we assumed the ζ_i are all positive, we can recover them by taking 2^k-th roots. Now we can use the estimate σ(g) = σ(f)^{2^k} to deduce that O(log σ(f)^{−1} + log ε^{−1}) steps are sufficient to obtain a relative precision ε in the roots. Indeed, after k_1 = log_2 σ(f)^{−1} steps, the relevant ratios are already separated. After extra k_2 = log_2(d + 1 + log_2 ε^{−1}) steps, one gets the required accuracy. So we can set k = k_1 + k_2 + 1, the last 1 to get rid of the high-order terms, to deduce that the relative error in each root is smaller than ε. This says that Graeffe iteration is 'polynomial time', in the sense that we can obtain relative accuracy ε of the roots after O(log σ(f)^{−1} + log ε^{−1}) steps.

Background material. The proofs of Theorems 3 to 7 will make use of the absolute multiplicative height function H to bound inequalities involving algebraic numbers.
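Before turning to heights, the special-hypothesis Graeffe recovery just described can be made concrete. The sketch below is an illustration only, not the algorithm of the cited references: it squares the roots k times via polynomial multiplication and reads the roots off ratios of consecutive coefficients.

```python
import numpy as np

def graeffe_step(a):
    """One Graeffe step: from f (ascending coefficients, monic) to the
    monic polynomial whose roots are the squares of the roots of f,
    using f(x) * f(-x) = +/- g(x^2)."""
    a = np.asarray(a, dtype=float)
    a_neg = a * (-1.0) ** np.arange(len(a))   # coefficients of f(-x)
    h = np.convolve(a, a_neg)                 # product polynomial
    g = h[::2]                                # keep even powers only
    return g / g[-1]                          # renormalize to monic

def graeffe_roots(a, k):
    """Recover distinct positive real roots after k Graeffe steps, from
    ratios of consecutive coefficients (the special hypothesis above)."""
    g = np.asarray(a, dtype=float)
    for _ in range(k):
        g = graeffe_step(g)
    ratios = np.abs(g[:-1] / g[1:])[::-1]     # ~ zeta_i ** (2**k), i = 1..d
    return ratios ** (1.0 / 2 ** k)

# f(x) = (x-1)(x-2)(x-3) = x^3 - 6x^2 + 11x - 6, ascending coefficients:
approx = graeffe_roots([-6.0, 11.0, -6.0, 1.0], k=6)
print(approx)   # ~ [3., 2., 1.], largest root first
```

After k = 6 steps the root ratios are raised to the power 64, so the cross terms in the elementary symmetric functions are negligible and the recovered roots are accurate to many digits.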
The construction of the height function H is quite standard in number theory, and we refer the reader to Chapter II or to pages 205–214. For applications to complexity theory, see Chapter 7 and the references cited there. The height function is naturally defined on the projectivization P^n(Q^a) of the algebraic numbers Q^a. It returns a real number ≥ 1. We can also extend it to complex projective space P^n by setting H(P) = ∞ when P ∉ P^n(Q^a). We will adopt this convention in order to simplify the notation of domains and ranges. We can also define the height of matrices, polynomials and systems of polynomials as the height of the vector of all their coefficients. The following properties of heights will be used in the sequel.

First of all, we can explicitly write the height of a vector with integer coefficients: if the coordinates of v are coprime integers, then H(v) = max_i |v_i|. Proposition 1 follows from the construction of the height function. One immediate consequence is that if v ∈ Q^n, then H(v) = max(|m v_i|, |m|), where m is the least common denominator of the v_i's.

We can use the following fact to bound the height of the roots of an integral polynomial. Proposition 2 is Theorem 5 of the reference cited; compare with Theorem 5.9 there, where the coefficients of f are algebraic numbers.

We can use a bound on the height to bound absolute values above and below:

Proposition 3. Let K be an algebraic extension of Q, and let x ∈ K, x ≠ 0. Then H(x)^{−[K:Q]} ≤ |x| ≤ H(x)^{[K:Q]}.

The height of a vector and the heights of its coordinates can be related by Proposition 4. Propositions 3 and 4 follow immediately from the construction of the height function. The height function is invariant under permutation of coordinates, and also:

Proposition 5. Let K be an algebraic extension of Q, and let g ∈ Gal(K/Q). Then for any x ∈ K, H(g(x)) = H(x). Proposition 5 is Lemma 5.10 of the reference cited.

Proposition 6. Let F = (F_1, …) be a system of multi-homogeneous polynomials with algebraic coefficients, where each F_i has degree d_j in the variables P_j. Let the P_j be algebraic. Then the height of the value of F is bounded in terms of H(F), the H(P_j) and the degrees d_j. In the case k = 1, this is similar to Theorem 5.6 of the reference cited (where max S(f_i) is not given explicitly).
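The elementary height computations above (Proposition 1 and its consequence for rational vectors) can be mirrored in a few lines. This is an illustrative sketch; the function names are ours:

```python
from fractions import Fraction
from functools import reduce
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def height(x):
    """Affine height of a rational number: H(p/q) = max(|p|, |q|) in lowest terms."""
    x = Fraction(x)
    return max(abs(x.numerator), x.denominator)

def height_vector(v):
    """H(v) = max(|m*v_i|, |m|), with m the least common denominator of the v_i."""
    fracs = [Fraction(t) for t in v]
    m = reduce(lcm, (f.denominator for f in fracs), 1)
    return max(m, max(abs(int(f * m)) for f in fracs))

print(height(Fraction(3, 7)))               # 7
print(height_vector(["1/2", "3", "5/6"]))   # m = 6, entries 3, 18, 5 -> 18
# A multiplicativity-type bound of the kind used in the sequel:
a, b = Fraction(2, 3), Fraction(5, 7)
print(height(a * b) <= height(a) * height(b))   # True
```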
For the general case see Theorem 4 of the reference cited.

Proposition 7. Let G = (G_1, …) be a system of polynomials with algebraic coefficients, where each G_i has degree at most d_j in the variables Q_j. Let the Q_j be algebraic. Then the height of the value of G satisfies the analogous bound. This is Corollary 1 of the reference cited. Some consequences of this are that H(Σ_{i=1}^{n} x_i) ≤ n Π_{i=1}^{n} H(x_i) and that H(Π_{i=1}^{n} x_i) ≤ Π_{i=1}^{n} H(x_i).

The following fact also follows from the construction of heights:

Proposition 8. If x is a non-zero algebraic number, then H(x^{−1}) = H(x).

Also, it makes sense to bound the height of the roots of a system of polynomials with respect to the height, size and degree of the system. Corollary 6 of the reference cited is:

Proposition 9. Let f_1, …, f_r, r ≤ n, be polynomials in Z[x_1, …, x_n] of degree bounded by d ≥ n and of bounded height, and let V denote the affine algebraic variety defined by f_1 = ⋯ = f_r = 0. Then V has at most d^n isolated points, and their height verifies the bound stated there.

4.1. Proof of Theorem 1. Let A(i, u) be the matrix obtained by replacing the i-th column of A by the vector u. Then, if v = A^{−1}u, Cramer's rule gives v_i = det A(i, u)/det A. Since A has integer coefficients and det A ≠ 0, one can always bound |v_i| ≤ |det A(i, u)|. By Hadamard's inequality, this implies a bound on ‖A^{−1}‖_2. Combining the bounds for ‖A‖_2 and ‖A^{−1}‖_2, one obtains the claimed estimate.

4.2. Proof of Theorem 2. In order to estimate κ(A), we write κ(A) ≤ n^{n/4 + 1/2} H^n m^{n/2}. In order to bound cos θ, we use the assumption that b is not orthogonal to the image of A. Hence ‖A*b‖_2 ≥ 1, and the 'normal equation' A*Ax = A*b implies the required bound.

Proof of Lemma 1. Each coefficient p_i of p(t) = det(B − tI) is, up to sign, a sum Σ_C det C, where C ranges over the (n − i) × (n − i) sub-matrices of B of the form C_{kl} = B_{s_k s_l} for some 1 ≤ s_1 < ⋯ < s_{n−i} ≤ n. Hence each |p_i| can be bounded by Hadamard's inequality.

Lemma 2. Let A be an n × n matrix with integer coefficients and let λ be an eigenvalue of A. Then H(λ) is bounded in terms of n and of the entries of A.

Proof of Lemma 2. Apply Proposition 2 to the polynomial p(t) from Lemma 1.

Lemma 3. Let B be an n × n matrix with integer coefficients. Let q(t) = det(B − tI + t e_n e_n^T) = Σ_i q_i t^i. Then the coefficients q_i satisfy the analogous bound.

Proof of Lemma 3. Let p(t) = det(B − tI) and let r(t) = det(B̃ − tI), where B̃ is the (n − 1) × (n − 1) matrix obtained by deleting the n-th row and the n-th column of B.
Then, by multi-linearity of the determinant, q(t) = p(t) + t r(t). Therefore the bounds for the coefficients of p and r yield the bound for those of q.

Lemma 4. Let A be an n × n matrix with integer coefficients. Let λ be an isolated eigenvalue of A and let x be an eigenvector associated to λ, Ax = λx. Then H(x) is bounded as stated below.

Proof of Lemma 4. Assume without loss of generality that the first n − 1 lines of A − λI are independent. Let M_1, …, M_i, …, M_n be the sub-matrices obtained from A − λI by deleting the last line and the i-th column. Then we can scale x in such a way that x_i = ± det M_i. We have M_n = B_n − λI. By reordering rows and columns, we obtain for each i < n that M_i is of the form B_i − λI + λ e_{n−1} e_{n−1}^T, where B_i is the sub-matrix of A obtained by deleting the last line and the i-th column. Set q^{(i)}(λ) = det M_i. We consider now the morphism

q : P^1 → P^{n−1}, (λ : 1) ↦ (q^{(1)}(λ) : ⋯ : q^{(n)}(λ)).

Then x = q(λ); the first inequality holds because of Proposition 6, and the second because of Lemma 2.

End of the proof of Theorem 3. Proposition 7 implies a bound on H(y*x). By hypothesis y*x ≠ 0. Hence, by Proposition 3, |y*x| is bounded below.

Lemma 5. Let A be an n × n invertible matrix with algebraic coefficients. Then H(A^{−1}) is bounded in terms of H(A).

Proof of Lemma 5. Let A(i, j) be the sub-matrix of A obtained by deleting the i-th row and the j-th column. By Cramer's rule, (A^{−1})_{ji} = ± det A(i, j)/det A. Therefore we should define the degree-n morphism Φ : P^{n²} → P^{n²} on coordinates (A_11 : A_12 : ⋯ : A_nn : 1).

Let ζ ∈ C^{n+1} be a fixed representative of a root of F. Any u ∈ T_ζ P^n can be written as a vector in C^{n+1}, orthogonal to ζ. Computing u = Cv amounts to applying the operator C = (M^{−1})|_{x_{n+1}=0}. Therefore Lemma 5 applies.

Lemma 6. In the conditions of Theorem 5, H(μ(f, ζ)) admits the bound stated below.

Proof of Lemma 6. We apply Proposition 7 to the system defining μ(f, ζ). We can bound H(DF) ≤ D H(F) and H(N) = H(‖ζ‖^2) = H(Σ_i |ζ_i|^2). We can apply Proposition 6 to the map involved. Thus, we can estimate H(μ(f, ζ)).

End of the proof of Theorem 5. By definition of the norm, ‖f‖_2 ≤ S H(f). By Lemma 5 and Lemma 6, we have a bound for the height of μ(f, ζ). Knowing that deg ζ = [Q(ζ) : Q] ≤ D^n, we can use Proposition 3 to deduce an absolute lower bound. According to Proposition 9, the height of ζ is bounded, where c is a universal constant.
Thus, μ(f, ζ) ≤ S H(f) (n + 1) (n + 1) S^n √(n + 1)^{nD−n} n^{2nD−2n} ⋯, where c is a universal constant.

Further comments. As mentioned before, a reasonable definition of the 'real complexity' input size is the number of coefficients of a given problem instance, times the logarithm of its condition number. Theorems 1 to 4 show that the 'real complexity' input size is no worse than a polynomial of the 'classical complexity' input size, for problem instances with integer coefficients. So does Theorem 5, if one considers D^n as part of the input size. It may be possible to replace D^n by the Bézout number Π d_i, which is the number of solutions of a generic system of polynomials. Since the 'real complexity' of the problems considered can be bounded by common numerical-analysis techniques, those theorems provide a scheme to convert 'real complexity' bounds into 'classical complexity' bounds. The same idea is behind Theorems 6 and 7. In the case of the iterative algorithms considered, the number of iterations for obtaining a certain approximation can also be bounded in terms of a 'condition number'. In the case of problem instances with integer coefficients, the 'condition number' is also polynomially bounded in terms of the input size. Those theorems have many features in common, and this is not a coincidence. A more general approach is to interpret the condition number as the inverse of the distance to the degenerate locus. This can be bounded in terms of the height of the problem instance, and in terms of the degenerate locus (degree, dimension, height). However, bounds obtained this way will be no sharper, and possibly worse, than the direct bounds obtained by using the exact expression for the condition number.

This paper was written while the author was visiting MSRI at Berkeley. He wishes to thank MSRI for its generous support. Thanks to Bernard Deconinck, Jennifer Roveno, Paul Gross, Raquel, and very special thanks to Paulo Ney de Souza and family.
import { FederatedCatalogSchema } from './gql/federation';
import { namespace, CatalogConfig, CatalogModule } from './interfaces';
import { CatalogSrvGrpcClient } from './grpc';
import { createFacadeModuleFactory } from '../../utils';
export const catalogModule = createFacadeModuleFactory<CatalogConfig, CatalogModule>(namespace, (facade, config) => {
const catalog = {
client: new CatalogSrvGrpcClient(config.config.client, facade.logger)
};
facade.addApolloService({
name: namespace,
schema: FederatedCatalogSchema(config.config)
});
facade.koa.use(async (ctx, next) => {
ctx.catalog = catalog;
await next();
});
});
|
Every morning in our home, one of the first things I see is a necklace I cherish but have never worn. It is too delicate for that, too meaningful — 264 reminders of what a woman can do when she dares to hope.
Between 1942 and 1945, the U.S. government forced about 120,000 people into internment camps on American soil. Their only crime was to be Japanese-Americans during World War II. They lost their homes and their jobs, and virtually all their material possessions. They were ripped from their lives and herded into nearly a dozen camps, with no idea how long they would be imprisoned or if they would ever again be free.
In 1942, 22-year-old Toshiye Morita was sent to an internment camp in Topaz, Utah, along with her parents and six siblings. This particular camp was built on an ancient lakebed, where thousands of seashells remained. They were the detritus of a lake that no longer existed, but to young women like Morita, they were found treasures.
During her three years in the camp, Morita collected hundreds of tiny shells to string together. It was a painstaking process. She matched the shells in size and shape, and lacquered each with nail polish before threading them one at a time.
In all, Morita made three necklaces from the shells. Her son, Michael F. Ozaki, discovered them only after she had died, at age 94. She had secreted them away in one of the two suitcases she had packed with evidence of her incarceration and labeled: “Don’t Throw Away.” Along with the necklaces, Ozaki found photos of his mother and her family on the day they were released in 1945, and a small plastic ID badge that had allowed her to work in the field.
Michael, a retired pediatrician in California, said his mother had always refused to discuss her years in the camp. “Couldn’t be helped,” was all she would say, in Japanese.
In the midst of such horror and uncertainty, his mother made this thing of beauty. I look at the necklace and marvel at her resolve. I touch the delicate shells and feel her courage. For what is hope if not an act of bravery, a refusal to surrender when the world is closing in?
Toshiye Morita’s necklace also reminds me of our potential for cruelty. What would have happened if American citizens had raised their collective voices to fight for those 120,000 neighbors, friends and fellow citizens?
I know the counterargument: We were a country at war. It was a different time.
What keeps evil alive? Many things, of course. Surely, our indifference is one of them.
In recent months, I have often looked at that single strand of shells and thought of the migrant children who remain in U.S. custody, far away from their parents. At least 500 of the children who were torn from their parents at the border are still here. They have no idea when, or if, they will ever see them again. Some of them are too young to know their parents’ names.
Right now, 12,800 migrant children remain in federally contracted shelters. Many of these children are teenagers from Central America. They fled the most dangerous countries in the world — alone — because their parents were willing to do the unthinkable to save their lives. There’s not a devoted parent in America who wouldn’t do the same thing in the same circumstances and let his or her child go. We should say that out loud. Every day, we should tell someone else.
There’s so much going on in our country, so many concerns hovering like ghosts competing for our attention, that it can be overwhelming. But Toshiye Morita’s necklace reminds me of who we can be.
All those little shells, so perfectly strung together. At the height of her uncertainty, so clearly a sign of hope. |
How Do Trees Grow in Girth? Controversy on the Role of Cellular Events in the Vascular Cambium

Radial growth has long been a subject of interest in tree biology research. Recent studies have brought a significant change in the understanding of some basic processes characteristic of the vascular cambium, a meristem that produces secondary vascular tissues (phloem and xylem) in woody plants. A new hypothesis regarding the mechanism of intrusive growth of the cambial initials, which has been ratified by studies of the arrangement of cambial cells, negates the influence of this apical cell growth on the expansion of the cambial circumference. Instead, it suggests that the tip of the elongating cambial initial intrudes between the tangential (periclinal) walls, rather than the radial (anticlinal) walls, of the initial(s) and its (their) derivative(s) lying ahead of the elongating cell tip. The new concept also explains the hitherto obscure mechanism of the cell event called elimination of initials. This article evaluates these new concepts of cambial cell dynamics and offers a new interpretation of some curious events occurring in the cambial meristem in relation to radial growth in woody plants.

Introduction

With an estimated global forest growing stock of 530.5 billion m³, production of wood and bark by the activity of the vascular cambium, the lateral meristem of woody plants, is one of the most important biological processes on Earth. The cambium exists in the form of a cylinder of multi-layered meristematic cells between the xylem and phloem tissues (Fig. 1a-c). It surrounds the central wood core and is itself surrounded by an outer cylinder of bark along the long axis (root and shoot) of woody plants. In transverse sections of the plant axis, it appears as a multi-layered circle around the xylem, comprising a division zone in the middle, where cell divisions occur, and the differentiation zones of peripheral layers, where derivative cells pass through a variety of processes on the way to attaining their final form and position in the derivative tissues. The division zone generally consists of a layer of cambial initials sandwiched by the layers of xylem mother cells (XMCs) on the inner side and those of phloem mother cells (PMCs) on the outer side (Fig. 1a-c). Forests act as the major terrestrial carbon sinks, and possess large carbon pools mainly in the form of wood produced by cambium activity in the branches, stems and roots of trees. Climate, or the local atmospheric condition, is the main regulator of cambium activity and of the consequent carbon allocation in the woody parts of trees. On being triggered by environmental factors (such as temperature level, water availability, air quality, light intensity and day length), some long-distance hormonal signals and short-range peptide signals jointly regulate the cambial activity. Communications from the endodermis and phloem tissues also influence the proliferation of cambial initial cells (Wang 2020). Interactions between these signaling pathways render vascular development flexible. Xylogenesis, i.e. the process of production of new cells from the cambium and their differentiation into mature, functional wood cells, comprises periclinal divisions of XMCs creating new daughter cells, enlargement of all these cells, deposition of cellulose and hemicellulose to form secondary cell walls, impregnation of the cell walls with lignin, and finally programmed cell death. As new layers of wood cells are produced each year on the inner side of the cambial cylinder, increasing the diameter of the wood core, the circumference of the cambial cylinder is bound to increase.
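The geometric point of the last sentence, that radial growth far outpaces the circumferential widening demanded of each initial, follows from the worked numbers in the Fig. 1 caption; a short re-derivation (values taken from the caption):

```python
import math

# Values from the Fig. 1 caption: the circumference is taken as
# C = 100,000 um for a cambial cylinder of diameter D ~ 31,847 um.
C = 100_000.0        # circumference of the cambial cylinder, um
cell_width = 20.0    # tangential width of one fusiform initial, um
cell_radial = 10.0   # radial dimension of one initial, um

n_initials = C / cell_width          # 5000 initials around the cylinder
dC = 2.0 * math.pi * cell_radial     # circumference gained per new xylem layer
dC_per_cell = dC / n_initials        # widening required of each initial
ratio = cell_radial / dC_per_cell    # radial vs circumferential growth

print(round(dC, 1))           # 62.8 (um)
print(round(dC_per_cell, 4))  # 0.0126 (um)
print(round(ratio))           # about 796; the caption's rounding gives 793.65
```

Each new cell layer thus asks every initial to widen by only about a hundredth of a micrometre, nearly three orders of magnitude less than the accompanying radial increment.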
Surprisingly, many processes related to growth in the radial direction (the wood-core thickness) and the expansion of the cambial circumference remain poorly understood. Morphogenesis in plants is typically coordinated by organizer cells that direct the adjacent stem cells to undergo programmed cell division and differentiation. Using lineage-tracing and molecular genetic studies in roots of Arabidopsis thaliana, Smetana et al. showed that the concept of organizer cells applies to the cambium also, where cells with a xylem identity act as organizer cells and direct the adjacent cambial cells to divide and function as stem cells. Placing a genetic label on individual cambial cells and tracing their derivatives in the poplar stem, Bossinger and Spokevicius found that differentiation of xylem and phloem was not well synchronized and hence is likely to be controlled independently. They observed a frequent loss of cambial initials, but such cell loss was rare in XMCs or PMCs. Further, the period for which the mother cells remained active varied greatly, showing that the time, or the number of cell cycles, for which the mother cells remain active is not pre-determined. Through pulse labeling and genetically encoded lineage tracing in the active cambium of the Arabidopsis thaliana hypocotyl, Shi et al. mapped the activity of cambial initials (identified by them as stem cells) and confirmed that a single bifacial cambial initial generates both xylem and phloem cell lineages. They established different transgenic markers, which defined a proximal, a distal and a central cambium domain, representing the site of xylem formation, the site of phloem formation and the site of strongly proliferating bifacial cambial initials (stem cells), respectively.

Fig. 1 Vascular cambium in a transverse section of stem. A Schematic of a cross-section of the stem axis, showing the correlation between the diameter and circumference of the cambial cylinder. Slanted walls, indicating transformation of periclinal walls into radial ones (white arrowhead), are typical of the area of intrusive growth and elimination of initials. The small rectangle marks the position of the enlarged fragment underneath. One initial has been enlarged to exhibit its typical dimensions. Dashed lines indicate the cambium zone; initial cells are marked in grey. The initial marked with a black circle has increased its circumferential dimension due to intrusive growth, but the change has been compensated by an equal partial elimination of the neighbouring initial. IL, initial layer; Ph, phloem; Xy, xylem; PMC, phloem mother cells; XMC, xylem mother cells. As the diameter of the cambial cylinder (D) is 31,847 μm, its radius r = D/2 = 15,923.5 μm and its circumference C = 2πr ≈ 100,000 μm. Similarly, if the average circumferential dimension (Cᵢ) of a single cambial initial cell is 20 μm, the total number of cambial initials forming the circumference is N = C/Cᵢ = 100,000 μm/20 μm = 5000. Also, if the average radial dimension of an initial is 10 μm, the increase in radius of the cambial cylinder due to deposition of one cell layer on the xylem side is Δr = 10 μm, and the increase in cambial circumference after the addition of one cell layer to the xylem core is ΔC = 2πΔr = 6.28 × 10 μm = 62.8 μm. Moreover, the increase in the circumferential direction of each cambial initial is ΔCᵢ = ΔC/N = 62.8/5000 = 0.0126 μm, and the ratio of radial to circumferential growth of the wood is Δr/ΔCᵢ = 793.65. B Cross-section of Tilia cordata. PMC, phloem mother cells; XMC, xylem mother cells; Ph, phloem; Xy, xylem; R, ray; ST, sieve-tube element. The black arrow indicates an intrusively growing wood fibre; the white arrow points to a companion cell. Asterisks indicate the location of the most probable initial cells. C Cross-section of Pinus sylvestris. T, tracheid; SC, sieve cell; PMC, phloem mother cells; XMC, xylem mother cells; Ph, phloem; Xy, xylem; R, ray. Asterisks indicate the location of the most probable initial cells. Scale bars = 50 μm

Studies undertaken on cambial cell growth during the last two decades (Kojs 2012; Włoch et al. 2013a; Wilczek et al. 2018) have led to a new hypothesis on the mechanism of intrusive growth of the cambial initials. These studies have elucidated certain characteristic features of the cambium from new angles, suggesting that the mechanical strains in the cambial tissue likely affect the processes involved in the formation of wood and the consequent increase in the cambial circumference. This article attempts to compare these newly emerging concepts with the traditional knowledge of cambial dynamics, with special focus on the radial growth of wood in stems and roots and the consequent expansion of the cambial circumference.

The Structure of Vascular Cambium

Whereas only a few cells normally act as initials in apical meristems, cambial initials are extremely numerous and form a sort of layer between the layers of tissue mother cells (PMCs and XMCs) around the wood core of the trunk, branches and roots of woody plants (Fig. 1). They are responsible for the production of wood cells on the inner side (Ajmal and Iqbal 1987) and secondary phloem cells on the outer side (Iqbal and Ghouse 1987; Iqbal and Zahur 1995). The layer of initials is not in fact a perfect layer, owing to the non-parallelism of its component cells (Włoch 1981), but it is a significant demarcation plane between xylem and phloem from the developmental point of view. It has also been described as 'the initial surface' (Włoch and Połap 1994).
Unlike the initial cells of primary meristems, which are normally homogeneous in shape, cambial initials are usually of two types: (a) axially elongate and highly vacuolated fusiform initials, and (b) relatively small, almost isodiametric or radially extended ray initials that often occur in aggregates ( Fig. 2) (Iqbal and Ghouse 1990;Larson 1994;Evert 2006). Fusiform initials produce the axially aligned derivatives such as vessel elements, tracheids, fibres and sieve-tube elements of the secondary vascular tissues; whereas ray initials give rise normally to the radially aligned ray parenchyma. In transverse view, ray parenchyma appear to form radially running rays across the secondary vascular tissues, i.e. xylem and phloem (Fig. 1b, c) (Larson 1994;Iqbal 1995;Lev-Yadun and Aloni 1995). Cambial initials (stem cells) never undergo differentiation but continue to remain initials and produce tissue mother cells (Iqbal 1994(Iqbal, 1995). Based on the form and arrangement of the initial cells, as seen in tangential longitudinal view, the cambium is identified to be storeyed or non-storeyed. It is non-storeyed, if the fusiform initials, relatively long and diversified in shape and length, terminate at varied height levels and neither the fusiform initials nor the rays are arranged in tiers, forming storeys ( Fig. 2b) (Ghouse and Yunus 1974;Woch and Szendera 1989;Evert 2006). On the contrary, the cambium is storeyed if the fusiform initials are relatively short in axial direction and terminate nearly at the same height, thus forming horizontal tiers placed one above the other (Fig. 2a); it is double-storeyed if both fusiform initials and rays (aggregates of ray initials) are arranged in storeyed fashion, forming horizontal bands. The storeyed structure develops ontogenetically from the non-storeyed one (Soh 1990;Larson 1994). 
Cambial initials also undergo directional and dimensional changes unrelated to the formation of storeyed pattern, a feature termed as a 'rearrangement of cambial initials'. This is considered as the mechanism for the formation of grain in the wood, and possibly affects the development of the vessel network (Zimmermann 1983). It seems that the rearrangement of cambial initials helps the cambial adaptation to diverse conditions of both the external and internal environments (). It is assumed that the processes contributing to cell rearrangement include anticlinal divisions, unequal or imperfect periclinal divisions, intrusive growth, elimination of initials, and changes in the ray pattern (Larson 1994;Evert 2006). Mechanical Strains in the Vascular Cambium The cambial cells are exposed to mechanical stresses resulting not only from their turgor but also from the radial growth of secondary vascular tissues (Hejnowicz 1980(Hejnowicz, 1997Kwiatkowska and Nakielski 2011). Radial growth of the wood core pushes the cambial tissue from the inside outwards, stretching the layer of cambial initials and creating conditions for intrusive growth of initials. This, together with the constraints imposed by the bark, generates compressive stress in radial direction, which is received by the cambial layers sandwiched between the wood and the bark (Iqbal and Ghouse 1990;Kwiatkowska and Nakielski 2011). The question of why a delicate and fragile tissue like cambium is not crushed under these circumstances, while the mechanical stress resulting from an increase in the wood's radius is strong enough to cause an expansion in the bark circumference remains unanswered. It is assumed that under specific conditions a tensile stress (in the radial direction) may occur in the vascular cambium, for instance in areas of phloem collapse, often during early spring. This phenomenon has been correlated to the enlargement of vessel-element mother cells (Hejnowicz 1997;Kwiatkowska and Nakielski 2011). 
However, in view of the massive growth of vessel members occurring in spring, it may be argued that it should not be based on unusual conditions. Further, no attention has been paid to numerous cases where intrusive growth of the initials is located close to the growing vessel members, sometimes separated by only a few layers of xylem derivatives. These two phenomena were explained by assuming the presence of two different mechanical conditions in the tissue, i.e. a radial compressive strain in the case of the intrusive growth of cambial initials along the radial walls, and a radial tensile strain in the case of the intrusive growth along the tangential walls of the vessel-element mother cells. Studies in developmental plant biology and biophysics, together with a detailed analysis of a large number of anatomical sections of active cambium, have suggested a reassessment of the mechanical stresses existing in vascular cambium. Studies have also pointed out that the diurnal variation of water balance in plants, negative during the day and positive during the night, is of crucial importance in the process of radial increment of tissues (Kojs and Rusin 2011;Kojs 2012). Transpiration is intense during the daytime, causing a strong negative pressure in vessels. Water flows into vessels from the surrounding living tissues, and hence the water potential of cells in this part of the plant goes down (Klepper 1968). The turgor pressure of tissues decreases and the whole organ (trunk, branch or root) shrinks (Ueda and Shibata 2001). These changes in turgor pressure occur in both the xylem (wood) and phloem (inner bark) (Almras 2008). It is also likely that some preventive measures are induced in the living cells of the vascular tissues to protect these cells from excessive dehydration (Kojs and Rusin 2011). After the sunset, transpiration becomes less intense and water flows back into cells, increasing their water potential, and hence the turgor pressure (Klepper 1968). 
As most of the wood cells are dead cells, with strong and often lignified cell walls, changes in the wood-core diameter due to variations in tissue hydration are How Do Trees Grow in Girth? Controversy on the Role of Cellular less distinct than in the phloem, which consists dominantly of living and osmotically active cells (Molz and Klepper 1973). While measuring the different strains in the secondary xylem and phloem, Almras and Almras et al. noted that phloem layers exert pressure on the xylem cylinder, generating a compressive stress in the delicate meristematic tissue (vascular cambium) located between them, when the turgor pressure of phloem cells in the bark decreases (during the daytime). When the phloem cells regain their turgor (during the night), the phloem layers move away from the wood cylinder, thus generating a tensile stress in the radial direction and causing a radial stretching of cambial cells between the phloem and xylem. The major part of this alteration in shape and size of the cambial cells (elastic deformation) reverses with the beginning of a new day, but a small part of it is retained (plastic deformation), which may be regarded as the net radial increment in the cambial cell dimension (Kojs and Rusin 2011). Periclinal Divisions Cambial cells frequently undergo periclinal cell divisions that occur parallel to the closest organ surface and result in the addition of derivative cell layers. Occasionally, they also experience anticlinal divisions that occur perpendicular to the closest organ surface, adding new cells to the layer of initials and forming new radial files (). Occurrence of anticlinal divisions normally remains confined to the layer of initials, but periclinal divisions occur both in the initial layer as well as in layers of XMCs and PMCs (Butterfield 1975). 
The main activity of cambial cells is their expansion in the radial direction followed by their periclinal division, which reduces the radial thickness of the cell to almost half, but the tangential width remains unaffected (Fig. 1a). The resultant daughter cells, having a radial thickness about half of the thickness of the mother cell, expand radially to regain the original thickness of the mother cell before undergoing the next periclinal division, thus forming a radial file of cells (Bailey 1923;). After the periclinal division of an initial cell, one of the daughter cells maintains the 'initial' status, while the other one acts as a xylem or phloem mother cell, depending on whether it is located inside or outside the initial surface, respectively. While the initial cells maintain their meristematic nature, the xylem or phloem mother cells usually leave the cambial zone after several periclinal divisions and begin to differentiate into xylem elements (tracheids, vessel elements, parenchyma and fibres) or phloem elements (sieve-tube elements, albuminous or companion cells, parenchyma and fibres) (Iqbal 1994(Iqbal, 1995Larson 1994;Evert 2006). Periclinal divisions, together with symplastic growth of cell walls in the radial direction, are considered to be the cause of radial growth in woody plants (Evert 2006). An increase in cell number due to the occurrence of periclinal divisions as such has no direct influence on the radial dimensions of cambial zone, because the increase in cell number is accompanied by a decrease in the radial dimension of the cells. It is the symplastic growth taking place between two successive periclinal divisions, which actually increases the radial dimension (thickness) of the tissue (cambial zone). As fusiform cambial initials are axially elongated and radially flattened cells (see Figs. 1, 2), such cells should divide by transverse division as per the Errera's rule (Kwiatkowska and Nakielski 2011). 
However, cambial cells divide predominantly by periclinal divisions. Transverse divisions with a minimal cell-plate surface are rare in the vascular cambium and occur mainly during ray formation (Cumbie 1967; Evert 2006). Mechanical strains play an important role in the control of division and differentiation of plant cells, and nuclei are sensitive to frequent external mechanical stimulation (Qu and Sun 2008). As per the unified hypothesis of mechano-perception in plant cells proposed by Telewski, the role of mechanical stimuli in plant morphogenesis is certain and beyond any doubt. Studies of isolated plant protoplasts have revealed that the cell-plate orientation depends on the pattern of mechanical strains; it is usually parallel to the orientation of the principal compressive tensors, although in some cases it is perpendicular (Lintilhac and Vesecky 1984; Lynch and Lintilhac 1997). As mentioned earlier, it is commonly accepted that cambial cells are radially compressed (Kwiatkowska and Nakielski 2011), which means that the cell plate should be formed predominantly parallel to the radius, and hence frequent anticlinal divisions should be expected. However, it is the periclinal division that occurs most frequently in cambial cells (Lintilhac and Vesecky 1984; Iqbal 1994; Lynch and Lintilhac 1997). Considering the probable relation between the compressive stress and the cell-plate orientation (parallel to each other), one might think that if periclinal divisions are dominant, the predominant compressive tensor should be oriented in the tangential plane of cambial cells. However, this idea counters most of the reports made hitherto and hence is not viable. Studies on protoplasts suggest that compression in one direction causes tension in the plane perpendicular to the axis of compression (Lintilhac and Vesecky 1984; Lynch and Lintilhac 1997). A study by Louveaux et al. has revealed that the tensile stress defines the orientation of the division plane, i.e.
division plates are located along the local maximum tensile stress in cell walls. The tensile stress in the radial direction, which stretches the radial walls of cambial cells, may likely create a local maximum of tensile stress in these walls, determining the periclinal division of cambial cells. However, such a possibility has hardly been examined so far. Future research in this direction should explain the cause of the peculiar orientation of periclinal divisions of cambial cells, which might stem from the tensile stress that occurs in the radial direction. Another hypothesis suggests that the division plane in cells of apical meristems is normal to the main growth direction (Hofmeister 1863). This rule may also be applied to cambial cells, because their maximal growth occurs in the radial direction. The modification of Hofmeister's rule takes into consideration the principal growth tensor (Hejnowicz and Romberger 1984). However, we have no argument to explain why both the maximal growth direction and the maximal tensor occur in a direction in which the cambial tissue is compressed. Considering the new reports on cambial cell dynamics (Kojs and Rusin 2011; Kojs 2012), it seems plausible that the radial direction of maximal growth, as well as the maximal growth tensor, may be an outcome of the tensile stress in the radial direction that occurs in the cambial tissue during the night time. In this context, studies on cambial tissue culture in vitro should not be lost sight of. Cambial cells grown in vitro do not exhibit radial expansion. They assume an isodiametric shape, which possibly suggests that the shape of fusiform cells is also an outcome of the specific mechanical environment inside the plant (Brown 1964; Brown and Sax 1962). Based on the original work of Mahmood, it has been described by many subsequent workers, including Murmanis (1970, 1977), that after each periclinal division a new primary cell wall develops around the two daughter protoplasts.
As a result of the 'emboxing' phenomenon described by Mahmood, the inner tangential walls of the periclinally dividing initials, which grow considerably thick during phloem formation, become part of the first derivatives on the xylem side when xylogenesis starts. Similarly, the outer tangential walls that thicken gradually during xylem formation, become part of the first derivatives on the phloem side when leptogenesis begins again. This is how the thickness of the tangential walls of cambial initials is maintained during the deposition of secondary vascular tissues. Murmanis, using electron microscopy, confirmed the difference in the thickness of the inner and outer tangential walls of the dividing initials during the phloem and xylem formation. Catesson and Roland, however, observed the deposition of new primary wall only in the area of the developing tangential cell plate. This possibly suggests that the thickening of walls may occur during cytokinesis, interphase or the whole cell cycle. On the other hand, the radial walls keep receiving the extra depositions continuously, with each division of the initial, irrespective of whether tissue formation occurs on the outer or the inner side of the cambial initials. However, the thickness of these walls is simultaneously reduced by their successive stretching in the radial direction during the symplastic expansion of cells after each periclinal division. Thus, thickness of radial walls remains more or less uniform due to addition of wall material on one hand, and radial extension of the wall on the other. In general, radial walls are significantly thicker than tangential ones. Assuming that cambial cells are tensed in the radial direction during the night, which for the most part is an elastic deformation (Kojs and Rusin 2011;Kojs 2012), the thickness of radial cell walls may perhaps be an adaptation to this plausibly strong tension. 
Anticlinal Divisions of Cambial Cells

Any increase in the girth of the wood cylinder (i.e. radial growth) necessitates a corresponding increase in the cambial circumference, which is made possible through anticlinal divisions of cambial initials and a meager symplastic growth in the circumferential direction. The expansion of cambial initials in the circumferential direction, and their anticlinal divisions, help maintain more or less constant cell dimensions, except during the first few years of cambial activity, when fusiform initials increase their tangential dimensions in both storeyed and non-storeyed cambia (Srivastava 1973; Larson 1994). During the formation of a storeyed structure, the dimensions of fusiform initials, especially their length, decrease slightly, accompanied by a unification of their length (b; Wilczek 2012). When the cambial cylinder increases its circumference, the occurrence of longitudinal anticlinal divisions is to be expected. Several types of anticlinal division (oblique, longitudinal, lateral) have been described in the relevant literature (Iqbal and Ghouse 1990; Larson 1994). Oblique anticlinal (pseudotransverse) divisions are predominant in non-storeyed cambia, whereas longitudinal anticlinal divisions characterize the storeyed cambium (Cumbie 1963, 1967, 1984; Butterfield 1972; Krawczyszyn 1977; Rao and Dave 1985). In the mosaic type of cambium, typical of the transition from a non-storeyed structure to a storeyed one, anticlinal division is of an intermediate type, wherein the length of the wall normally covers more than 50% but less than 70% of the total cell length (Krawczyszyn 1977). Lateral divisions of fusiform initials are relatively rare and are often related to ray formation (Larson 1994).
Oblique anticlinal divisions, followed by intense intrusive growth of at least one of the derived sister initials (Bannan 1950;Evert 1961;Cumbie 1967;Srivastava 1973), occur in a domain pattern supposedly causing the rearrangement of cambial initials (Hejnowicz and Krawczyszyn 1969;Hejnowicz 1973;Hejnowicz and Romberger 1973). The number of these anticlinal divisions markedly exceeds the requirement for an adequate expansion of cambial circumference due to the increasing girth of the wood core (Bannan 1950;Evert 1961;Cumbie 1967;Srivastava 1973;Lim and Soh 1997a, b;Bossinger and Spokevicius 2018). The significance of this phenomenon, exclusive to the non-storeyed cambium, is still obscure. The occurrence of excessive anticlinal divisions, followed by the supposed elimination of the initials produced in excess, was equated by Gahan and Mellerowicz et al. to the mechanism of somatic mutation elimination. Nonetheless, the question of why eliminations are so frequent in non-storeyed cambia and just seldom in storeyed cambia cannot be answered. Would it mean that non-storeyed cambium needs to eliminate somatic mutations more often, whereas storeyed cambium evidently does not? Referring to the commonly reported high frequency of oblique anticlinal divisions in the non-storeyed cambium, which is far more than the actual requirement for the due expansion of the cambial circumference, Woch et al. suggested that this excessive cell division could be the result of some specific pattern of mechanical strains occurring in the tissue. If the local maximal tensile stress in cell walls determines the orientation of the division plane (), the occurrence of anticlinal division can possibly be a side effect of local mechanical stresses. The ability of the directed and synchronous intrusive growth in the storeyed cambium results in a rapid, coordinated change of fusiform initials' orientation and inclination. 
Such a coordinated response might cause a relaxation of the shearing strains generated in the cambium (Woch and Poap 1994; a, b). In the non-storeyed cambium, on the other hand, such a rapid rearrangement is not possible, and hence the magnitude of shearing strains exceeds a certain threshold, causing the initiation of excessive anticlinal divisions. In the storeyed cambium, an increase in circumference is usually considered to be an outcome of longitudinal anticlinal divisions (also known as radial longitudinal divisions) and the coordinated symplastic growth of the new sister initials together with all other initials, whereas the rearrangement of initials is ascribed to the unidirectional apical intrusive growth of fusiform initials that occurs in numerous initial cells simultaneously but remains confined to cell ends, causing changes only to the position of the cell tips (Larson 1994; Woch and Poap 1994; b; Evert 2006). The sister fusiform initials produced by radial longitudinal divisions are almost equal in length, but their circumferential dimension (width) is halved; it increases subsequently by means of symplastic growth (Butterfield 1972). Thus, the radial longitudinal divisions in storeyed cambia contribute to the circumferential growth but do not affect the orientation and inclination of fusiform initials, unlike the oblique anticlinal divisions of the non-storeyed cambia. The frequency of these anticlinal divisions is relatively high during the first few years of cambial activity, and decreases in the subsequent years (Bailey 1923; Butterfield 1972; Iqbal 1994; b), reflecting a decline in the relative increment of the cambial circumference (b; Wilczek 2012). The formation of a storeyed structure from a non-storeyed procambium is normally attributed to longitudinal anticlinal divisions (Cumbie 1984; Carlquist 1988; Soh 1990).
It has long been held that the horizontal storeys of fusiform initials are homogeneous, and that the intrusive growth of the initials is insignificant and unable to disturb a storeyed structure (Zagrska-Marek 1984). However, recent studies have revealed that although the directed intrusive growth does not disturb a storeyed structure, it does facilitate the vertical rearrangement of whole packets of initials, leading to the formation of heterogeneous storeys (; Wilczek 2012). This also explains the rapid formation of regular storeys during only those few years of cambial activity when the frequency of anticlinal divisions is relatively low, considering the large number of fusiform initials in a common storey. Tall multiseriate rays, if present, do not obstruct the spreading of regular storeys (b; Wilczek 2012). In several species with a storeyed structure of the cambium, short oblique anticlinal divisions (often covering less than 60% of the cell length) are common in the first few years of cambial activity, when the storeyed structure is formed rapidly, and do not interfere with the process of storey formation (b; Wilczek 2012). The reports also indicate that such short anticlinal divisions become infrequent after regular storeys have been formed. All these facts make one wonder whether the types of anticlinal division really determine the structure of the cambium or whether, on the contrary, it is the structure of the cambium that actually defines the type of anticlinal divisions.

Symplastic Growth of Cambial Cells

Symplastic growth, typical of both the primary and secondary meristems, is a coordinated growth in which the various cells of a tissue grow in unison, keeping the mutual contacts with adjacent cells intact. Symplastic growth of cambial cell walls is anisotropic, being enormous in the radial direction, quite meager in the circumferential direction (causing a slight circumferential expansion), and nil in the axial direction.
The unequal extent of symplastic growth in the radial and circumferential directions is commonly accepted but often underestimated. A precise mathematical analysis of the cambium in a tree trunk of 1 m circumference has revealed that, after a periclinal division, the daughter fusiform cells grow over 8000 times more in the radial direction than in the circumferential direction. In fact, the circumferential increment of an individual initial due to the expansion of tangential walls is less than 0.002 μm, which is even less than the thickness of the cell wall. In the example presented in Fig. 1a, after adding only one layer of cambial cells, hence increasing the cambial radius by 10 μm, the change of circumference per one initial would be equal to ΔCi = ΔC/N = 62.8 μm/5000 = 0.0126 μm, i.e. less than the thickness of their radial walls. The relationship between the rates of growth in the radial and circumferential directions of an individual initial depends on the magnitude of the cambial radius or, more precisely, on the number of the initials participating in the increment of the cambial circumference. In a study of active fusiform cambial initials, the intensely expanding areas of radial walls appeared to be completely devoid of cellulose (Roland 1978). Radial walls of fusiform initials consist mostly of hemicelluloses, as detected by Catesson and Roland and later confirmed by Catesson. This agrees with the commonly accepted view of the maximal growth of cambial cells in the radial direction, which requires their radial walls to be especially suitable for rapid expansion through symplastic growth. Besides, the relative proportion of pectins and hemicelluloses also differs between the active and dormant phases of the cambium, as observed in Populus tomentosa. This may possibly be related to the difference in cell-wall extensibility.
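The disproportion between radial and circumferential growth can be checked with a few lines of arithmetic. Note that the 20 μm tangential width of an initial, used here to estimate the number of initials around a 1 m trunk, is an assumed typical value, not a figure given in the text:

```python
import math

# Sketch of the geometry discussed above: when one new cell layer (taken here as
# 10 um thick) is added around a trunk of 1 m circumference, how much must each
# fusiform initial widen circumferentially?

radius_increment_um = 10.0              # one new cell layer added radially
trunk_circumference_um = 1.0e6          # 1 m trunk circumference
initial_width_um = 20.0                 # assumed tangential width of one initial

circ_increment_um = 2 * math.pi * radius_increment_um   # ~62.8 um in total
n_initials = trunk_circumference_um / initial_width_um  # ~50,000 initials
per_initial_um = circ_increment_um / n_initials         # share per initial

print(round(per_initial_um, 5))                     # ~0.00126 um, below wall thickness
print(round(radius_increment_um / per_initial_um))  # radial growth ~8000 times larger
```

With these assumptions the per-initial circumferential increment comes out near 0.002 μm and the radial-to-circumferential growth ratio near 8000, matching the figures quoted in the text.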
On the other hand, the nature of the tangential walls of fusiform cambial cells is rather cellulosic, with a high content of methylated pectin (Catesson 1990). Such a structure seems to be insusceptible to stretching and expansion. In fact, the extent of symplastic growth of the tangential walls in the circumferential direction is quite insignificant in comparison to that of the radial walls. Catesson and Roland reported that the middle lamellae between the radial walls of fusiform cells were considerably thick, whereas those between the tangential walls of the initials and their closest derivatives were not even discernible. In a study of cytokinesis in cells of the root apical meristem, the chemical composition of the cell plate and the middle lamella was found to be different (Matar and Catesson 1988). It is plausible that the middle lamella gains its mechanical strength over a period of time, and hence the attachment of recently divided cells (by periclinal divisions) is not strong enough to endure the tensile strain emerging rapidly after sunset. The relationship of microtubule arrangement with mechanical stress is well-documented. In the first stage of tracheid differentiation, cellulose microfibrils are arranged longitudinally, whereas new microfibrils are later deposited in transverse orientation, a pattern similar to that observed in the arrangement of cortical microtubules. The longitudinal arrangement of microfibrils, predominant in the first stage of tracheid differentiation, seems to be related to cell expansion in the radial direction. The transverse arrangement of microfibrils possibly occurs when the radial expansion of tracheids has ceased, but a slight longitudinal expansion may continue (Funada 2008). The mechanism regulating the pattern of microtubule deposition, and the reason why cell-wall growth takes place first in one direction and then in another, are yet to be worked out.
Intrusive Growth and Its Location in Cambial Cells

Intrusive growth is meager in apical meristems and in differentiating primary tissues, but quite abundant in the elongate cells of the vascular cambium and their derivatives, i.e. elongating fibres, widening vessel-element mother cells, growing sieve-tube elements and several other cell types (Yunus 1975; Ghouse and Iqbal 1979; Iqbal and Ghouse 1983; Iqbal 1995; Lev-Yadun 2001). Here we focus only on the intrusive growth of the cambial initials. However, recent reports suggest that the same mechanical conditions are likely involved in the intrusive growth of cambial initials, vessel-element mother cells and fibres. The exact location of intrusion of the growing cambial initials has long been a subject of debate. It was initially assumed that intrusive growth contributes to the increment of the cambial circumference, hence the growing tip of the cell intrudes between the radial walls of the neighbouring initials (Larson 1994; Evert 2006). Comprehensive studies of the rearrangement of cambial initials in tumours (Woch 1976) and in normal trunks of numerous trees (Woch and Poap 1994; Woch et al. 2009, 2013a, b) have questioned this long-held and commonly accepted explanation of the mechanism of intrusive growth. They suggest a new hypothesis, which identifies the occurrence of intrusive growth between the tangential walls of neighbouring initials and their closest derivative. This hypothesis (intrusion between tangential walls) also provides a coherent explanation for the 'elimination of initials'. It suggests that the intrusive growth of an initial is associated with an equal elimination (partial or total) of one (or more) neighbouring initials (Woch et al. 2013a) (Fig. 3).
Such a growth is not likely to affect the circumference of the cambium, but it results in the rearrangement of cambial initials, plausibly to relax the strong mechanical shearing strains generated by the growing tissues. Although the difference in the spatial location of intrusive growth may seem to be of minor significance, it in fact leads to a dramatic change in our understanding of the functioning of the vascular cambium. Despite repeated and thorough examinations of cambial structure in numerous studies, no instance of an actual increment of the cambial circumference could be detected during the intrusive growth of the cambial initials (Wilczek 2012).

Fig. 3 A-F Comparison of the implications of the two hypotheses of apical intrusive growth of cambial initials for the arrangement of these initials, as seen in tangential sections. A-C Schematics of three fusiform cambial initials: A Before the occurrence of intrusive growth in the lower initial (marked with grey) in the direction indicated by the arrow; B After the occurrence of intrusive growth in the lower initial, assuming that it occurred between the radial walls of neighbouring initials and caused an increase in the cambial circumference. The intruding cell (marked with dark grey) pushed the contiguous cells sideways, as indicated by arrows, and caused an increase in the cambial circumference equal to the tangential surface of the intruding cell. The previous location of cell walls is indicated with a dotted line; C After the intrusive growth of the lower initial, assuming that it occurred along the tangential walls and caused no increase in the cambial circumference. The gain in the tangential surface of the growing initial (marked with dark grey) is equal to the loss (elimination) of the tangential surface of neighbouring initials. D-F Analysis of the arrangement of cambial cells of Picea abies, as seen in two tangential sections: D phloem mother cells, and E the most likely cambial initials, the sections obtained from positions 10 μm apart from each other; F both D and E seen together after superimposing. The superimposed view exhibits the area occupied by the intrusive growth of initials counterbalanced by an elimination of parts of the neighbouring initials, marked as follows: dark grey: intrusive growth of initial 1, counterbalanced by the partial elimination of initial 2; bright grey: intrusive growth of initial 1, counterbalanced by the partial elimination of two ray initials.

The earlier hypothesis, assuming the occurrence of apical intrusive growth between the radial walls of adjacent initials, was widely accepted and repeatedly mentioned in most of the literature dealing with the dynamics of cambial initials (Iqbal and Ghouse 1990; Larson 1994). This was obviously in line with the concept that the intrusive growth of cambial initials was the main cause of circumferential expansion in the non-storeyed cambia (Cumbie 1963; Hejnowicz and Braski 1966; Iqbal 1994), due to the supposed intrusion of elongating initials between the radial walls of the neighbouring initials. Any such intrusion is possible only when the neighbouring initials lying ahead of the elongating cell tip are cleaved apart via dissolution of their middle lamella, and the intrusively growing initial fills the intercellular microspace thus produced; this would obviously increase the total circumference of the whole group of the initials (Fig. 3a, b). The extent of increase in the cambial circumference due to the apical intrusive growth of a large number of anticlinally divided cells comes out to be much more than is indeed required.
This unusual situation was explained in the earlier literature by assuming that the excessive initials produced by anticlinal divisions were eliminated from the initial surface through their gradual shortening in length by successive unequal periclinal divisions, followed by differentiation of the reduced initials to become part of a derivative tissue (Zagrska-Marek 1984; Fahn 1990; Iqbal 1994; Larson 1994). The phenomenon of the so-called 'elimination of initials' (loss of initials), which is supposed to be of common occurrence in the initial layer of the vascular cambium, also remains poorly explained (Larson 1994). Traditionally, intrusive growth and the elimination of initials have been considered as two separate phenomena, the mechanisms of which could never be explained convincingly. The area of the eliminated initial was supposed to be filled gradually by the intrusively growing adjacent initial (Zagrska-Marek 1984). However, it was never discussed what force pulls or pushes the adjacent initials towards the 'space' made available by the declining initial and helps them develop new connections with the adjacent cells and grow in unison. Moreover, if the spaces previously occupied by the supposedly eliminated initials were later occupied by the intrusively grown tips of the neighbouring initials, then the whole exercise of 'elimination of initials' becomes redundant, because the supposed purpose of the phenomenon was to reduce the undue increase in the cambial circumference caused by excessive intrusive growth. The new concept of intrusive growth, on the other hand, explains that the intrusive growth of an initial (along tangential walls) and the so-called 'elimination of initial' via a simultaneous gradual disappearance of the adjacent initial occur inseparably and have no impact on the magnitude of the cambial circumference (Woch et al. 2013a). As stated above, cambial tissue develops a tensile strain in the radial direction, caused by diurnal changes in tissue water balance.
When this strain crosses some threshold level, it is likely that some areas of the tangential walls become separated and fragments of at least some fusiform cells are detached from each other. When a fusiform initial grows into the space created that way, along the tangential walls of the neighbouring one, the two cells may be envisaged to be in a temporary competition for the same area of the initial surface. In this competition, one initial loses its initial status and is moved away from the initial surface by the symplastically growing tissue. If a whole initial moves away from the initial surface, it would appear to have undergone a total elimination from the initial surface, and the area which was previously occupied by this initial is now occupied by the intrusively growing initial. This new arrangement of initials is established further by the subsequent periclinal divisions, which are unequal, resulting in a smaller (after an elimination) and a larger (due to intrusive growth) initial (Fig. 3a, c). Two tangential sections, of phloem mother cells and of the most probable cambial initials, were superimposed (Fig. 3d-f). Phloem (or xylem) mother cells reflect the arrangement of cambial initials existing when that layer of cells was deposited, and hence may be considered as a record of the past. Initials present the current arrangement of the cambial cells. One fusiform initial has grown intrusively, but the location of the radial walls of neighbouring cells remains identical except for the area of intrusive growth. The intrusively growing initial did not push any other initial away, but instead occupies the area previously occupied by the neighbouring fusiform initial, partially eliminating it from the initial surface. It has also eliminated one ray initial and partially eliminated three others. Similar examples have been presented in numerous studies.
The arrangement of cell walls in the radial file should also be analyzed to determine whether the intrusive growth of cambial initials contributes to the increase in the cambial circumference. As the cambial tissue normally occurs in a more or less cylindrical form, any increase in the circumference of this cylinder has to be linked to a corresponding increment of its radius, following the rules of geometry (Fig. 4). In a single radial row of cambial cells, which is a very small segment of the cambial circumference, it is not possible to observe the curvature of the periclinal walls (Figs. 1, 4) and usually a term 'tangential plane' is used to describe a position of dividing wall in periclinal division. Plant anatomists use this term conventionally while measuring the width of cambial cells on anatomical preparations normally referred to as the tangential, radial and transverse sections. However, from mathematical point of view, the term 'tangential' may perhaps be replaced with 'circumferential' while referring to this dimension of cells in a tissue of cylindrical form such as the vascular cambium. If, according to the previous concept, intrusion of an elongating fusiform initial occurs between the radial walls of contiguous initials, then the intrusive growth of every single cell would make some addition to the cambial circumference. Theoretically we can assume two possibilities: such a sudden increment of circumference could be local, without any impact on neighbouring radial files, or it would have an impact on these files. Any localized increment in a sector of cambial circumference, without a corresponding radial increment of the whole tissue, would likely form a sort of bulge on the initial layer, but nothing like this has ever been observed in microscopic studies. Moreover, in a cylindrical tissue, any increase in the cambial circumference, resulting from the intrusive growth of even one initial, has to be accompanied by radial increment of the tissue. 
Fig. 4 A-I Implications of the two hypotheses regarding the intrusive growth of fusiform cambial initials, as seen in transverse sections: A Relationship between the radial and circumferential increments of the cambium. Cambium with a given radius (r1) and circumference (C1) before the occurrence of intrusive growth. A change in the cambial circumference (ΔC) has to be associated with a proportional (ΔC = 2πΔr) change in its radius (Δr). The proportions between Δr and ΔC are modified for better visibility. C1: inner circle; C2: outer circle. B Based on the old hypothesis that intrusive growth contributes to the increment in the cambial circumference; the arrangement of radial files is disrupted by the intrusion of the growing initial. C Based on the new hypothesis that intrusive growth causes no increment in the cambial circumference; the arrangement of radial files is regular despite the occurrence of intrusive growth of the initial, except for the area covered by the intrusive growth. P1-P2: layers of deposited phloem mother cells; X1-X3: layers of deposited xylem mother cells; radial lines: radial files; CI: thickness of the initial layer; asterisk: intrusively growing initial. D-G Diagrammatic presentation of the impact of intrusive growth on the arrangement of three adjacent radial files of cambial cells: D Arrangement of cambial cells before the occurrence of intrusive growth; E Based on the hypothesis that intrusive growth causes an increment in the cambial circumference: the same three radial files of cambial cells (as in D) after the intrusion of one fusiform initial between the radial walls of neighbouring initials; grey dashed vertical lines: increased tangential dimension of the radial files (see g above the arrows). F and G Based on the hypothesis that intrusive growth causes no increase in the cambial circumference: F The same three radial files of cambial cells (as in D) as seen after the intrusive growth of one fusiform initial; the cambial zone is elastically (e) and plastically (p) tensed in the radial direction (during the night); locations of the forthcoming periclinal divisions are marked with dashed horizontal lines. G The same three radial files (as in F) after periclinal divisions, resulting in the deposition of a new layer of xylem mother cells (X4); the periclinal division marked with an oblique arrow is unequal. H, I Two examples of intrusive growth observed in transverse sections of the Picea abies cambium, displaying no disruption of radial files. The intrusion of the initial (marked with asterisks) does not add to the circumferential dimension of the group of initials, but causes a corresponding reduction in the dimensions of the neighbouring initials (marked with dashed lines). Asterisk: intrusively growing initial; dashed horizontal lines: tangential dimension of radial files.

Suppose the intrusive growth of only one initial adds, for instance, ΔC = 20 μm to the cambial circumference; then the radius of the cambial cylinder should correspondingly increase by 3.18 μm (i.e. Δr = ΔC/2π = 20 μm/(2 × 3.14) ≈ 3.18 μm). A massive occurrence of intrusive growth would imply a proportionally larger total circumferential increment, and hence would have to be accompanied by an adequate simultaneous increase in the cambial radius. Besides, if the growing initial intrudes along the radial walls of adjacent initials, the circumferential dimension of all cambial cells located lateral to the intrusively grown initials should remain constant (unaffected). However, this is never the case, and the tangential width of the initials lateral to the intrusively grown one is invariably and proportionately reduced.
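The circumference-radius coupling invoked in this argument is simply ΔC = 2πΔr, and the 20 μm example can be verified in a couple of lines:

```python
import math

# The geometric constraint discussed above: any addition to the cambial
# circumference must be matched by a radial shift of the whole cylinder.
# The 20 um value follows the worked example in the text.

def radius_increment(circumference_increment_um):
    """Radial increment (um) implied by a given circumferential increment (um)."""
    return circumference_increment_um / (2 * math.pi)

print(round(radius_increment(20.0), 2))  # ~3.18 um for 20 um of circumference
```

The same function makes the scaling argument explicit: many simultaneous intrusions summing to a large ΔC would force a correspondingly large Δr, which is never observed.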
Moreover, in the case of an initial intruding along the radial surface of adjacent initials and adding to the cambial circumference, not only would these adjacent cells be cleaved apart by this intrusion, but the next neighbours would also be pushed laterally, causing a dislocation of initials all along the cambial circumference. That would be the case especially for non-storeyed cambia, where intrusive growth was commonly mentioned as the main way of circumferential increment. This dislocation is bound to disturb the alignment of cambial initials with the radial files of their derivatives (Fig. 4b). It would not be logical to presume that all the other initials might decrease their tangential dimensions in order to maintain the proper alignment of radial files; such an assumption would also nullify the presumed impact of intrusive growth on the cambial circumference. However, according to the new hypothesis, the intrusive growth of an elongating initial occurs between the tangential walls of the neighbouring initial and its immediate derivative and is linked with an equal elimination of the neighbouring initial from the layer of initials (or the initial surface). Therefore, the arrangement of radial files remains intact, except in the area of intrusive growth (Fig. 4c). Figures 4d and e exhibit the impact of intrusive growth estimated on the basis of the old hypothesis, supposing that intrusive growth is the mechanism for the increase in the cambial circumference. This hypothesis does not explain the relationship between the symplastic growth of the whole cambial tissue and the intrusive growth of a particular fusiform initial. As we can see in Fig. 4b, the cambial tissue has increased its circumference and radius, which obviously means that the volume of the xylem cylinder has increased. An increment in cambial radius is commonly accepted to be the result of symplastic growth and periclinal divisions of cambial initials and their derivatives.
It is also well accepted that, in the case of storeyed cambia, anticlinal division and symplastic growth of initials are responsible for the circumferential increment of the cambium. However, in non-storeyed cambia, the same outcome is surprisingly attributed to intrusive growth rather than to symplastic growth. Nonetheless, it has never been explained how the intrusive growth of even one initial would coordinate with the normal symplastic radial expansion of the cambial tissue. Further, the supporters of the concept of 'elimination of initials' should not ignore the fact that every local/sectorial change, taking place anywhere in the cambial cylinder, would have an impact on the overall circumference of the cambium. And this would also cause a proportional change in the radius of the wood and the cambial zone. According to the new hypothesis, intrusive growth is inseparably linked to the symplastic growth of the tissue, because both these features are the outcome of one fundamental process, i.e. the diurnal variation of water balance, which causes a change in phloem radius. This hypothesis asserts that the cambial zone is tensed in the radial direction when the secondary phloem swells, typically during the night. The tension is likely a combination of elastic and plastic strains (Fig. 4f) (Kojs and Rusin 2011;). This radial tension, which was not taken into consideration earlier, explains the assumed formation of space available for intrusive growth between the tangential walls. If the tangential walls of any initial and its immediate derivative are separated (for instance because of shearing strains), the tension would move these cells away from each other. In the space thus produced, one of the neighbouring initials may grow intrusively. The following day, when the phloem decreases its turgor pressure, the radial tension prevailing in the cambial zone during the night will decrease.
The elastic strain is then withdrawn and the plastic strain remains intact in the form of a radial symplastic increment of the cambial tissue (Fig. 4g). As the intrusive growth of one initial is counterbalanced by the corresponding elimination of a neighbouring initial(s) (Fig. 4h, i), no disruption of radial files is visible in the transverse sections of the cambial zone. This corroborates the new hypothesis, which proposes intrusion of the growing initial along the tangential walls of the neighbouring cells, causing no increment in the cambial circumference. In transverse sections of the active cambial tissue, with its cells dividing periclinally and growing symplastically as well as intrusively, one often comes across some slanted walls of the initials that have undergone intrusive growth or elimination, indicating a gradual transformation of their tangential walls into radial walls (Figs. 1,4h,i,5). Such a transformation (of tangential walls into radial ones) has been observed in areas with vigorous intrusive growth in the cambia of both the gymnospermous and dicotyledonous species (;;a). The number of layers of cambial cells in the space surrounded by the slanted walls of the initials reflects the relative duration for which the intrusive growth has been in progress in the given area of cambial tissue. If the intrusive growth has occurred immediately before the sample collection, the slanted walls should be confined within one layer of cells (Fig. 4h, i). However, the number of the cell layers in contact with slanted walls increases with time due to the consistent symplastic growth of cells and the concurrent incidence of periclinal division (Figs. 4,5). Newly produced tangential walls connect the slanted walls, just as they connect the radial ones at the other end. The space surrounded by the slanted walls (i.e. 
the transectional area of the intruding cell) shows unequal periclinal divisions, wherein the dimensions of the cell-plate area change according to the growing dimensions of the cell concerned. In the new radial file formed by the intrusively grown initial, cells and cell plates are wider (in the circumferential direction), whereas in the radial file of the eliminated initial, they are narrower in the circumferential direction (Fig. 5). If the elongating initial has intruded between a neighbouring initial and its immediate derivative, the slanted walls appear on one side of the new radial file and form a shape similar to a triangle (Fig. 5a, c). If, on the contrary, the initial has intruded between two neighbouring initials and their immediate derivatives, the slanted walls form a rhomboidal shape (Fig. 5d). In some cases, intrusive growth occurs not around the tip of the initials, but on their lateral walls. Figure 5b exhibits this condition, where one radial file is eliminated because of the lateral growth of both of the neighbouring initials. Such eliminations of radial files have often been described earlier without any reference to intrusive growth. It may seem curious to assume that the tangential walls of adjacent cambial cells are temporarily separated from each other due to the intrusion of some elongating cells between them, but the assumption finds support in certain observations, such as the unequal distribution of plasmodesmata on the radial and the tangential walls. Plasmodesmata hardly occur on the tangential cell walls of fusiform initials, but are abundant on their radial walls (Catesson 1990; Ehlers and van Bel 2010). This is exactly what would be expected on assuming (a) a frequent separation of tangential walls of cells in a radial file, and (b) a firmness of association between the radial walls of cells of the contiguous radial files.

Fig. 5 Transverse sections of the cambium of Pinus sylvestris, depicting the initials, the tangential walls of which have become slanted due to the intercellular intrusion of neighbouring initials. White arrows indicate slanted walls during their transformation from tangential walls to anticlinal ones. Black arrowheads point to tangential walls formed due to unequal periclinal divisions of cells of a new radial file, whereas white arrowheads show tangential walls formed due to unequal periclinal divisions of cells of the eliminating radial file. Asterisks indicate the most probable initial cell; Black asterisks-intrusively growing initial; E-elimination of a radial file due to the lateral intrusive growth of two neighbouring initials. Scale bars-10 μm. (Similar examples were presented by Jura et al. 2006;)

This uneven distribution and frequency of plasmodesmata makes one think that either the frequent separation of the newly produced tangential walls disrupts the newly formed plasmodesmata, or the plasmodesmata are rarely formed on the tangential walls, possibly as an adaptation to the frequent separation of these walls. One of the characteristic features of meristematic cells is the absence of large vacuoles. However, cambial initials normally possess one big vacuole per cell during the active phase and a few smaller ones during the dormant phase of the cambium (Iqbal and Ghouse 1985). The vacuolar compartment seems to have a role in the process of xylogenesis, but its significance in the cambial initials is still unclear (Arend and Fromm 2003). The intrusive growth of cambial initials may be intense, involving a rapid enlargement of cell volume and wall surface. In this process, further enlargement of an already extended vacuolar system, related to the active uptake of osmotically active solutes, seems to be helpful (Arend and Fromm 2003).
It was proposed that intrusive growth occurs on the edge of the cell with the smallest turgor pressure and hence the intrusively growing cambial initial cannot actually cleave the adjacent cells apart, and can only fill the already existing microspace (Hejnowicz 1980). Intrusive growth should therefore be a passive phenomenon, unable to separate the walls of two neighbouring cells, and hence may occur only in areas with previously formed microspaces between the walls of adjacent cells (Kwiatkowska and Nakielski 2011). The role of vacuoles in the regulation of turgor pressure, which could possibly allow for intense intrusive growth, may be an interesting subject for future research.

Role of Intrusive Growth in Rearrangement of Cambial Initials

The circumferential expansion in the storeyed and non-storeyed cambia is believed to occur by different mechanisms, i.e. through longitudinal anticlinal divisions coupled with the symplastic growth of the resultant sister initials in storeyed cambium, while through oblique anticlinal divisions followed by intrusive growth of the sister initials in the case of non-storeyed cambium (Fahn 1990; Larson 1994; Evert 2006). Such a unique situation would require some curious arrangement for a simultaneous regulation or a periodical switching over between the two different mechanisms, which has never been observed. As the storeyed structure of cambium develops from the non-storeyed procambium during plant ontogeny (Soh 1990; Larson 1994; Evert 2006), one may genuinely expect coexistence of both the storeyed and non-storeyed cambial structures in a given specimen. Furthermore, in the so-called mosaic cambium, both types of the cambial structure do occur together (Krawczyszyn 1977).
The increase of the cambial circumference in such cases should, therefore, involve two different mechanisms, one of which is dependent on symplastic growth and longitudinal divisions, while the other is dependent on the oblique anticlinal divisions and intrusive growth of the initials. Recent reports have shown a non-participation of intrusive growth in the circumferential increment (a, b; ; 2013; a). In such a situation, symplastic growth that follows the anticlinal division is the only alternative that may provide a common and uniform mechanism for the increase in cambial circumference, both in the storeyed and non-storeyed cambia. Cell rearrangement in non-storeyed cambia was ascribed to oblique anticlinal divisions followed by intrusive growth, whereas, typically, in storeyed cambia oblique anticlinal divisions do not occur and intrusive growth was considered insignificant. If oblique anticlinal divisions, followed by intrusive growth of the cells produced, constitute the main mechanism for the rearrangement of cambial initials, the intensity of cell rearrangement should be markedly less in storeyed cambia, which hardly exhibit the occurrence of oblique anticlinal divisions (Larson 1994; Evert 2006). Nonetheless, the most rapid rearrangement is seen in the storeyed cambia, particularly in old trunks where oblique anticlinal divisions are practically absent (Krawczyszyn and Romberger 1979; Kojs et al. 2003, 2004). This also indicates that the significance of oblique anticlinal divisions as the possible mechanism for the rearrangement of cambial initials has been highly overestimated. As a consequence of the new hypothesis, intrusive growth is supposed to be the mechanism responsible for the rearrangement of cambial initials in both the storeyed and non-storeyed types of cambium (a, b; Wilczek 2012; a). An oblique anticlinal division of fusiform initials results in two shorter initials.
Interestingly, numerous such divisions are followed by the elongation of one of the sister fusiform initials produced, often with a total or partial elimination of the other sister initial. The intrusively growing sister initial increases its axial dimensions, but the other sister initial appears to undergo a simultaneous thinning and/or shortening, and may ultimately disappear from the layer of initials, as described in numerous reports (Bannan 1950;Evert 1961;Cumbie 1967;Srivastava 1973;Lim and Soh 1997a, b;Bossinger and Spokevicius 2018). This apparent thinning or shortening of initials is in fact a case of overlapping of the adjacent initial(s) by the intrusively growing initial along the tangential surface of the contiguous cells. This supports the view that the apical intrusive growth of cambial initials takes place along the tangential walls rather than the radial walls of neighbouring cells lying ahead of the elongating cell tip (;). In this situation, despite the fact that the growing initial gains in its tangential dimension (cell width), no increase accrues to the cambial circumference. In the non-storeyed cambia, especially in gymnosperms, intrusive growth covers a larger area of the initial, whereas in storeyed cambia it is confined to a small area around the cell end, merely causing a change in the cell-tip location, although the frequency of such events is very high (Krawczyszyn and Romberger 1979). This could be why the significance of apical intrusive growth for cellular rearrangement in storeyed cambia has been underestimated. Such a rearrangement involves a synchronic intrusive growth at the tip of large groups of cambial initials arranged in storeys, together with the concurrent fusion and splitting of rays (Krawczyszyn and Romberger 1979;Woch and Szendera 1989;a). 
The small but synchronic changes of inclination in large groups of initials arranged in horizontal storeys have an enormous impact on the inclination of the cambial derivatives, which imitate the structural pattern of the cambium. The cambial structure in which the initials change their inclination rapidly and synchronically is called a 'functional storeyed structure' (;). Recent observations and their interpretations suggest that the significance of oblique anticlinal divisions requires a critical reappraisal. It was previously assumed that oblique anticlinal divisions followed by intrusive growth constitute the main mechanism operative behind the rearrangement of cambial initials. The basis of this assumption was the belief that the intrusive growth of elongating cells takes place between the radial walls. Given this, a lack of new oblique radial walls would make the intercellular intrusion of a growing cell tip, and hence the consequent rearrangement of cells, impossible. However, if the newly formed initials elongate by intrusive growth along the tangential surface of adjacent cell(s), and the radial walls have no role in the process, the frequency of anticlinal divisions becomes irrelevant in this regard. Therefore, the supposed relationship between oblique anticlinal divisions and cambial cell rearrangement needs to be re-evaluated.

Conclusions

Recent studies on the radial growth of arborescent plants have brought out the following facts related to cambial cell dynamics:

- The frequent periclinal divisions of cambial cells contribute adequately to the radial expansion of the symplastically growing cambium tissue by adding new layers of derivatives.

- The radial dimension of cambial cells is maintained by symplastic growth of the radial walls of daughter cells after each periclinal division.
- The symplastic growth of cambial cells after periclinal division occurs mainly on the radial walls (in the radial direction), and only meagerly on the tangential walls (in the circumferential direction), corresponding to the ratio between the increase in the radius of the wood core and the resultant increase in the circumference of the cylindrical tissue of the cambium. This is the only mechanism of radial and circumferential growth resulting from the activity of the vascular cambium.

- The much less frequent anticlinal divisions of cambial initials contribute to the required expansion of the cambial circumference and ensure the maintenance of normal tangential dimensions of cambial initials throughout the synchronic symplastic growth of all initial cells.

- The excess of oblique anticlinal divisions, observed normally in non-storeyed cambia, may plausibly be the result of some specific strains in the cambial tissue. They have no direct impact on the dimensions of the cambial cylinder or the arrangement of the cambial initials, because the excess initials produced by these divisions are eliminated from the layer of initials due to the apical intrusive growth of their adjacent initials along the tangential wall surface.

- The intrusive growth of cambial initials, which has long been regarded as the main mechanism of increase in the cambial circumference of non-storeyed cambia, has in fact no role in that process. It occurs (in the axial and tangential directions) between the tangential walls of the adjacent initial and its immediate derivative, and is always counterbalanced by an equal amount of elimination of the neighbouring initials, irrespective of the frequency of anticlinal divisions. It does, however, result in the rearrangement of cambial initials. Such locations of intrusive growth in the cambium indicate a complete or, as a result of unequal periclinal divisions, a partial loss of the initial status.
Thus, there is nothing like 'elimination' or 'loss' of the initial cells in actuality; it is only a case of the cell being pushed from the initial surface of the cambium into the derivative tissue.

- Mechanical strains in the vascular cambium keep changing in diurnal cycles, due to alterations in water balance and hence in the turgor pressure of cells in the derivative tissues.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Intrahepatic cholangiocarcinoma arising 28 years after excision of a type IV-A congenital choledochal cyst: report of a case

This report presents a rare case of intrahepatic cholangiocarcinoma (IHCC) arising 28 years after excision of a type IV-A congenital choledochal cyst. The patient underwent excision of a congenital choledochal cyst (Todani's type IV-A) at 12 years of age, with Roux-en-Y hepaticojejunostomy reconstruction. She underwent a pancreaticoduodenectomy (PD) using the modified Child method at the age of 18, for an infection of a residual congenital choledochal cyst in the pancreatic head. She was referred to this department with a liver tumor 22 years later. A left hemihepatectomy with left-sided caudate lobectomy was performed, and the tumor was pathologically diagnosed to be IHCC. The cause of the current carcinogenesis was presumed to be reflux of pancreatic juice into the residual intrahepatic bile duct after surgery. This case suggests that careful long-term follow-up is important for patients with congenital choledochal cysts, even if a separation-operation was performed at a young age, and especially after PD.

Introduction

Intrahepatic cholangiocarcinoma (IHCC) is the second most common primary liver cancer, and its incidence is increasing. There is a significant association between the presence of congenital choledochal cysts and the development of hepatobiliary malignancies, including IHCC. The cause of carcinogenesis in these cases is presumed to be the reflux of pancreatic juice into the bile duct and the accumulation of mixed bile in the biliary system, caused by an anomalous junction of the pancreaticobiliary duct. The recommended standard surgical treatment is the excision of the dilated extrahepatic bile duct, with a hepaticoenterostomy, to stop the reflux of pancreatic juice. This is called a 'separation-operation.' However, some patients develop biliary cancer long after a separation-operation.
The development of intrahepatic cholangiocarcinoma after pancreaticoduodenectomy has not previously been reported in patients with congenital biliary dilation. This report presents a case of intrahepatic cholangiocarcinoma arising 28 years after the initial separation-operation for a Todani's type IV-A congenital choledochal cyst, and 22 years after pancreaticoduodenectomy for infection of a residual congenital choledochal cyst in the pancreatic head.

Case report

A 40-year-old female was referred to this department for further examination of a liver tumor. She had undergone excision of a congenital choledochal cyst (Todani's type IV-A) at 12 years of age, with Roux-en-Y hepaticojejunostomy reconstruction (Fig. 1a). She underwent a pancreaticoduodenectomy (PD) at the age of 18, using the modified Child method, to treat an infection of a residual congenital choledochal cyst in the pancreatic head (Fig. 1b). The patient had been well with no symptoms since her last operation, so she had not undergone regular follow-up during the 18 years since her second operation. She was diagnosed with a chronic hepatitis C virus (HCV) infection during a medical checkup at the age of 37, and was referred to our hospital for treatment. She received interferon and ribavirin therapy and finally obtained a sustained viral response, after which she underwent regular follow-up. Follow-up dynamic abdominal computed tomography (CT) revealed a low-density tumor in the left medial section (Fig. 2a) at the age of 40. Serum carcinoembryonic antigen (CEA) and DUPAN-2 levels were elevated to 6.3 ng/ml and 360 U/ml, respectively. Positron emission tomography-CT (PET-CT) revealed the maximum standardized uptake value (SUVmax) of the liver tumor to be 9.3, with no signs of lymph node metastasis, intraperitoneal dissemination, or hepatic metastasis.
3D drip-infusion cholangiography-CT (3D-DIC-CT) revealed dilation of the right and left hepatic ducts of up to 20 mm and tumor invasion of the B2+3 bile duct; the hepaticojejunostomy was not affected by the tumor (Fig. 2b). The tumor was diagnosed to be IHCC. She underwent a left hemihepatectomy with left-sided caudate lobectomy, preserving the hepaticojejunostomy that had been established in the previous operation. Intraoperative frozen sections of the cut end of the left hepatic duct and a lymph node of the hilar lesion revealed no metastasis. Amylase levels in the bile juice of the left hepatic duct were 7621 U/L, and bile juice culture detected the presence of Enterococcus faecalis. The cut surface of the tumor was 32 mm in diameter, hard and whitish; the margin was somewhat lobulated and the tumor had invaded the bile duct of the lateral segment (Fig. 3a). The tumor contained atypical cells with chromatin-rich nuclei and showed morphological variety. The atypical cells had a poorly glandular arrangement (Fig. 3b).

Discussion

The development of biliary cancer is a major complication in patients with congenital choledochal cysts, with an incidence of hepatobiliary malignancies associated with congenital choledochal cysts ranging from 2.5 to 28 %. Although the mechanism of carcinogenesis has not been fully elucidated, it has been reported that the carcinogenetic process is caused by repeated damage and restoration of the biliary epithelium by a mutual countercurrent of pancreatic and bile juice. The regenerated epithelium gradually produces a variant accompanied by cellular atypical changes, as well as mutations of the K-ras and p53 genes. These processes may lead to mucosal metaplasia and biliary tract malignancy. Excision of the entire extrahepatic bile duct and hepaticoenterostomy are recommended to prevent the development of biliary carcinoma, because this separates bile from pancreatic juice flow.
However, some patients develop intrahepatic cholangiocarcinoma long after the separation-operation. Kobayashi et al. reported biliary tract cancer before and after separation-operations for patients with congenital biliary dilation, and concluded that the relative risk in the post-surgery group was still higher than in the general population, although it was decreased by approximately 50 % after the separation-operation. This suggests that the epithelium of the remnant bile duct wall may have already progressed to a precancerous stage by the time of surgery, and that genetic changes may have taken place or continued during the postoperative period. Furthermore, all patients in this previous study who developed bile duct carcinoma after surgery had a Todani's type IV-A dilation, characterized by narrowing of the peripheral bile duct and a dilated pathologic bile duct. A complete resection of a dilated pathologic intrahepatic bile duct is not a straightforward procedure, and thus the risk of developing cancer remains high. This was also true in the current case, in which the patient was diagnosed with Todani's type IV-A dilation and the residual dilated intrahepatic duct was detected by 3D-DIC-CT. Patients who undergo biliary-enteric anastomosis are thought to be at risk for developing IHCC after surgery for benign disease, as the reflux of activated pancreatic juice and bacterial contamination can cause chronic inflammation and carcinogenic processes. Tocchi et al. reported that the incidence of cholangiocarcinoma after choledochoduodenostomy or hepaticojejunostomy for benign disease is 7.6 and 1.9 %, respectively; this significant difference occurs because the activated pancreatic juice can more easily flow back into the biliary tract after a choledochoduodenostomy. Therefore, the reflux of activated pancreatic juice might be the strongest carcinogenic factor. Re-exposure to pancreatic juice may have been one of the causes of cancer in the current case.
The residual dilated intrahepatic bile duct appeared to have been stimulated by a mutual countercurrent of pancreatic and bile juice and by intestinal bacteria, because E. faecalis was detected in a culture of the bile juice and amylase levels were 7621 U/L in the bile juice of the left hepatic duct. Therefore, pancreaticogastrostomy is recommended for patients with congenital choledochal cysts after PD, because it is more difficult for pancreatic juice to flow backward to the bile duct after this procedure (Fig. 1c).

(Figure legend: Tumor nuclei were vesicular with coarse chromatin, small nuclei and eosinophilic cytoplasm.)

Re-anastomosis of the hepaticojejunostomy using another Roux-en-Y, to prevent pancreatic juice from flowing backward to the residual dilated right hepatic duct during resection of the IHCC, was planned in the current case because the patient was relatively young. However, the adhesion of the hepatic hilum and jejunum was strong and there was a risk of damage to the right hepatic artery. Therefore, only a left hemihepatectomy with left-sided caudate lobectomy was performed, preserving the hepaticojejunostomy that had been established in the previous operation. Infection with hepatitis B virus or HCV is suggested to be involved in the pathogenesis of IHCC. A large cohort study revealed that HCV infection conferred a more than twofold elevated risk of developing IHCC, while Yamamoto et al. reported that nodular IHCC appears to be related to hepatitis viral infection and could be detected at an early stage by following up cases of chronic hepatitis and cirrhosis. The current patient had not undergone regular follow-up for 18 years after her second operation. However, she underwent periodic medical check-ups after being diagnosed with chronic HCV infection, which allowed the IHCC to be detected early and a potentially curative operation to be performed.
An extensive literature search revealed ten reports describing IHCC arising after surgery for a congenital choledochal cyst (Table 1). The 11 patients, including the current case, comprised five males, five females and one case of unknown gender, ranging in age from 16 to 66 years. The mean period between the primary operation and the development of cancer was 15.3 years (2-34 years), and the type of dilation was Todani's type IV-A in eight patients and type I in one patient. The IHCC was resected successfully in only four of these patients; three had an unresectable advanced tumor and one had a resectable tumor that was inoperable due to poor liver function. A periodic medical check-up is important for detecting the tumor at an early stage, since cholangiocarcinoma is not characterized by distinct clinical symptoms until its late stages. This report presented a case of intrahepatic cholangiocarcinoma arising 28 years after the initial operation of excision of a Todani's type IV-A congenital choledochal cyst with reconstruction by Roux-en-Y hepaticojejunostomy. The patient had multiple possible risk factors for developing IHCC, including a remaining type IV-A congenital biliary dilation, a subsequent modified Child PD that induced re-exposure to pancreatic juice, and chronic HCV infection. Careful long-term follow-up is therefore recommended for high-risk patients, even after a separation-operation.
Quantum-dynamical semigroup generators for proton-spin relaxation in water. Various aspects of a rather general treatment of proton-spin relaxation in water are discussed within the framework of quantum-dynamical semigroup theory for a four-level system coupled to a reservoir in equilibrium. In particular, the specifications of the infinitesimal generator of time evolution, either in Kossakowski or in Davies form, are worked out in detail. With the help of the Lie algebra of SU, the results are used to derive, under suitable simplifications, generalized Bloch equations for the static and alternating-field case. The relevant correlation functions are calculated using conventional approaches but supplemented by taking into account explicitly results from a stochastic model for formation and breaking of hydrogen bridges. A further approximate reduction of the coupled general equations to simpler ordinary Bloch equations leads to an identification of the relevant relaxation times. This approach provides a somewhat different interpretation of rotational correlation times whose numerical values are estimated over a wide temperature range. |
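For reference, the "ordinary Bloch equations" to which the abstract's coupled general equations are approximately reduced have the standard textbook form (written here for the static-field case, with T1 and T2 the longitudinal and transverse relaxation times; this is the conventional form, not a quote from the paper):

```latex
\frac{dM_x}{dt} = \gamma\,(\mathbf{M}\times\mathbf{B})_x - \frac{M_x}{T_2}, \qquad
\frac{dM_y}{dt} = \gamma\,(\mathbf{M}\times\mathbf{B})_y - \frac{M_y}{T_2}, \qquad
\frac{dM_z}{dt} = \gamma\,(\mathbf{M}\times\mathbf{B})_z - \frac{M_z - M_0}{T_1}
```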
Image filtering processor and its applications Nowadays, a low-cost, flexible, and high-performance hardware-software co-design implementation of widely used image filtering methods is very important. In this work, an image filtering processor is implemented through convolution, designed in Verilog Hardware Description Language, together with a softcore microprocessor. The microprocessor is synthesized on an FPGA, and the removal of salt-and-pepper noise is examined. The convolution hardware is designed with and without the DSP48 slice, a dedicated hardware block in the FPGA, and the obtained data are compared. The softcore microprocessor is likewise synthesized on the FPGA for these two designs, and the results are compared. Finally, one selected basic algorithm is implemented on the co-design and PSNR values are given.
//
// Created by whuty on 3/17/22.
//
#ifndef SYSTEMMONITOR_DISPLAYINFORMATION_H
#define SYSTEMMONITOR_DISPLAYINFORMATION_H
#include <iostream>
class DisplayInformation {
public:
static void display_main_histogram();
static void display_memory_load_progress_bar();
static void display_cpu_temperature();
static void display_gpu_temperature();
static void display_uptime();
static void display_memory_information();
};
#endif //SYSTEMMONITOR_DISPLAYINFORMATION_H
|
Working on the Fundamentals of Journalism and Mass Communication Research This special virtual theme issue presents eight articles on methods selected from Journalism & Mass Communication Quarterly (JMCQ) issues published between 2007 and 2016. This collection was selected from articles that focused on developing and assessing the quality of a (new) research method or technique, or articles that examined methodological innovations as part of a study on a substantive issue. A scan of the articles in JMCQ over the 10-year period under study revealed that about 7% (n = 28) focused on advancing research methods or data analysis techniques. The articles selected are prime examples of JMCQ's method articles and deserve renewed attention because of their inspiring approach and the insights they provide. We introduce them briefly below under three categories: methodological issues in content analysis; methodological issues in surveys, interviews, and focus groups; and measurement and scale development.
// Fread handles fread().
//
// Reads an array of count elements, each of size bytes, from the stream and
// stores them in the block of memory pointed to by ptr.
//
// The position indicator of the stream is advanced by the total number of
// bytes read. On success the total number of bytes read is (size*count); a
// short read sets the stream's EOF or error flag, mirroring C's fread.
func Fread(ptr unsafe.Pointer, size, count int32, f *File) int32 {
	buf := make([]byte, size*count)
	n, err := f.OsFile.Read(buf)
	// Copy only the bytes actually read into the caller's memory; the
	// original version copied the whole buffer, overwriting bytes beyond
	// the read with zeros.
	copy(toByteSlice((*byte)(ptr), size*count)[:n], buf[:n])
	if err != nil {
		if err == io.EOF {
			f._flags |= io_EOF_SEEN
		} else {
			f._flags |= io_ERR_SEEN
		}
		// A partial read at EOF still delivered data; only report EOF
		// when nothing was read at all.
		if n == 0 {
			return EOF
		}
	}
	return int32(n)
}
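The toByteSlice helper used above is defined elsewhere in the package; since Go 1.17 the same aliasing view over C-style memory can be built with unsafe.Slice, as in this standalone sketch (byteView is an illustrative name, not the package's API):

```go
package main

import (
	"fmt"
	"unsafe"
)

// byteView returns a []byte aliasing n bytes starting at p, analogous to
// the toByteSlice helper used by Fread: writes through the slice are
// visible in the underlying memory.
func byteView(p *byte, n int32) []byte {
	return unsafe.Slice(p, int(n))
}

func main() {
	// A fixed-size buffer standing in for caller-provided C memory.
	var dst [8]byte
	view := byteView(&dst[0], int32(len(dst)))

	src := []byte("abcdefgh")
	n := copy(view, src) // writes through the alias into dst

	fmt.Println(n, string(dst[:]))
}
```

Because the slice shares storage with the destination buffer, no second copy loop is needed — a single copy moves the data exactly as Fread does.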