import java.io.IOException;

/** Signals that no corresponding record was found for a specified key. */
public class KeyNotFoundException extends IOException {

    public KeyNotFoundException() {
    }

    public KeyNotFoundException(String message) {
        super(message);
    }

    public KeyNotFoundException(String message, Throwable cause) {
        super(message, cause);
    }

    public KeyNotFoundException(Throwable cause) {
        super(cause);
    }
}
def mergeSort(l):
    if len(l) < 2:
        return l
    result, L, R, i, j = [], mergeSort(l[:len(l) // 2]), mergeSort(l[len(l) // 2:]), 0, 0
    while i < len(L) and j < len(R):
        result, j, i = result + ([R[j]] if L[i] > R[j] else [L[i]]), j + (L[i] > R[j]), i + (L[i] <= R[j])
    return result + L[i:] + R[j:]


def bubbleSort(l):
    for i in range(len(l)):
        flag = True
        for j in range(len(l) - 1, i, -1):
            if l[j - 1] > l[j]:
                flag, l[j - 1], l[j] = False, l[j], l[j - 1]
        if flag:  # no swaps on this pass: already sorted
            return l
    return l  # added so the empty list is returned too, not None


def selectionSort(l):
    result = []
    while l:
        min = 0
        for i in range(len(l)):
            if l[i] < l[min]:
                min = i
        result += [l.pop(min)]
    return result


def bitonicSort(l):
    # pad the front with copies of the minimum up to the next power of two
    gap = 2 ** (len(bin(len(l) - 1)) - 2) - len(l)
    l, k = [min(l)] * gap + l, 2
    while k <= len(l):
        j = k // 2
        while j:
            for i in range(len(l)):
                x = i ^ j
                if x > i:
                    if (not(i & k) and l[i] > l[x]) or (i & k and l[i] < l[x]):
                        l[i], l[x] = l[x], l[i]
            j //= 2
        k *= 2
    return l[gap:]


# The next sorting algorithm actually needs some imports
# This is a sorting algorithm in o(n) complexity
# I will run for ints *and* floats (I know count sort is possible but this is for floats too)
# I know it's impossible, but it is
# It is my own invention...
from threading import Thread
from time import sleep

o = []

def threadsort(l):
    global o
    o = []
    max_value = max(l)

    def insert(element):
        global o
        sleep(element / max_value)
        o.append(element)

    for item in l:
        Thread(target=insert, args=(item,)).start()
    sleep(1)
    return o
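A quick self-check, not from the original source: each of the deterministic sorts above should agree with Python's built-in sorted on random input. threadsort relies on the thread scheduler, so it can misorder values whose sleeps finish nearly simultaneously, and it waits only one second for all threads.

import random

if __name__ == "__main__":
    data = [random.randint(1, 99) for _ in range(16)]
    for sort in (mergeSort, bubbleSort, selectionSort, bitonicSort):
        assert sort(list(data)) == sorted(data), sort.__name__
    print(threadsort(list(data)))  # usually sorted, but only by luck of the scheduler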
Lifting Transformer for 3D Human Pose Estimation in Video Despite great progress in video-based 3D human pose estimation, it is still challenging to learn a discriminative single-pose representation from redundant sequences. To this end, we propose a novel Transformer-based architecture, called Lifting Transformer, for 3D human pose estimation, which lifts a sequence of 2D joint locations to a 3D pose. Specifically, a vanilla Transformer encoder (VTE) is adopted to model long-range dependencies of 2D pose sequences. To reduce the redundancy of the sequence and aggregate information from local context, the fully-connected layers in the feed-forward network of VTE are replaced with strided convolutions to progressively reduce the sequence length. The modified VTE is termed the strided Transformer encoder (STE) and is built upon the outputs of VTE. STE not only significantly reduces the computation cost but also effectively aggregates information into a single-vector representation in both a global and a local fashion. Moreover, a full-to-single supervision scheme is employed at both the full-sequence scale and the single-target-frame scale, applied to the outputs of VTE and STE, respectively. This scheme imposes extra temporal smoothness constraints in conjunction with the single-target-frame supervision. The proposed architecture is evaluated on two challenging benchmark datasets, namely Human3.6M and HumanEva-I, and achieves state-of-the-art results with much fewer parameters.
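A minimal PyTorch sketch of the strided-encoder idea described above (my own illustration, not the authors' code; the layer sizes, stride schedule, and max-pool residual path are guesses): a standard encoder layer whose feed-forward sublayer is replaced by strided 1-D convolutions, so each layer halves the sequence length.

import torch
import torch.nn as nn

class StridedEncoderLayer(nn.Module):
    """Transformer encoder layer whose position-wise FFN is replaced by
    strided 1-D convolutions, halving the sequence length per layer."""
    def __init__(self, d_model=256, n_heads=8, stride=2):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        # strided convolutions in place of the usual fully-connected FFN
        self.conv = nn.Sequential(
            nn.Conv1d(d_model, d_model * 2, kernel_size=3, stride=stride, padding=1),
            nn.ReLU(),
            nn.Conv1d(d_model * 2, d_model, kernel_size=1),
        )
        self.pool = nn.MaxPool1d(stride)  # the residual path must shrink too

    def forward(self, x):                        # x: (batch, seq, d_model)
        x = self.norm1(x + self.attn(x, x, x)[0])
        y = self.conv(x.transpose(1, 2))         # (batch, d_model, seq/stride)
        r = self.pool(x.transpose(1, 2))         # downsampled residual
        return self.norm2((y + r).transpose(1, 2))

x = torch.randn(2, 8, 256)             # an 8-frame 2D-pose embedding sequence
print(StridedEncoderLayer()(x).shape)  # torch.Size([2, 4, 256])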
A problem in Euclidean Geometry

I describe below an elementary problem in Euclidean (or Hyperbolic) geometry which remains unsolved more than 10 years after it was first formulated. There is a proof for n = 3 and (when the ball is the whole of 3-space) for n = 4. There is strong numerical evidence for n ≤ 30. Let (x_1, x_2, ..., x_n) be n distinct points inside the ball of radius R in Euclidean 3-space. Let the oriented line x_i x_j meet the boundary 2-sphere in a point t_ij (regarded as a point of the complex Riemann sphere C ∪ {∞}). Form the complex polynomial p_i, of degree n−1, whose roots are the t_ij, j ≠ i: this is determined up to a scalar factor. The open problem is
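A small numerical sketch of the construction (my own illustration; the statement above is cut off, so this covers only the setup): shoot the oriented line from x_i through x_j forward to the sphere of radius R, map the hit point to the Riemann sphere by stereographic projection from the north pole (one conventional choice; the problem statement does not fix one), and collect the roots into p_i.

import numpy as np

def boundary_point(xi, xj, R=1.0):
    """Forward intersection of the oriented line xi -> xj with the sphere |x| = R."""
    d = (xj - xi) / np.linalg.norm(xj - xi)
    # solve |xi + t d|^2 = R^2 for the positive root t
    b, c = np.dot(xi, d), np.dot(xi, xi) - R**2
    t = -b + np.sqrt(b * b - c)   # c < 0 inside the ball, so this root is > 0
    return xi + t * d

def stereographic(p, R=1.0):
    """(x, y, z) on the sphere -> point of C, projecting from the north pole."""
    x, y, z = p
    if np.isclose(z, R):
        return np.inf             # the north pole itself maps to infinity
    return (x + 1j * y) * R / (R - z)

def p_i(points, i, R=1.0):
    """Coefficients (highest degree first) of the degree n-1 polynomial whose
    roots are the t_ij, j != i, in the monic normalisation np.poly uses."""
    roots = [stereographic(boundary_point(points[i], points[j], R), R)
             for j in range(len(points)) if j != i]
    return np.poly(roots)

pts = np.array([[0.1, 0.0, 0.2], [-0.3, 0.4, 0.0], [0.2, -0.1, -0.5]])
print(p_i(pts, 0))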
A cleaner scolded his girlfriend for not cooking one evening last year and she responded by stabbing him in the chest. Wan Yoke Sim, 49, pleaded guilty on Monday to grievously hurting Mr Tay Mui Tong. She was jailed for two years. A district court heard that the couple were then sharing a one-room Housing Board flat in Ang Mo Kio. He returned at 4pm on Aug 13 last year and a quarrel erupted between them when he found no food in the kitchen. He left the flat and returned at 8.30pm with a bottle of rice wine. While he was drinking it, they quarrelled some more. In the midst of swearing at each other and cursing each other's parents and families, Wan, who is jobless, took a fruit knife from the kitchen and stabbed him in the chest. When she saw him bleeding profusely, she panicked and called the police. When they arrived, she claimed that Mr Tay had been attacked by several strangers. As Mr Tay was stretchered off to the ambulance, she broke down and confessed to stabbing him. Mr Tay's right lung had collapsed from the injury and he required emergency surgery at Tan Tock Seng Hospital to stop the bleeding. He was warded for nine days. Pleading for a lenient sentence, lawyer Gloria Lee, who was assigned to the case by the Criminal Legal Aid Scheme of the Law Society, said that her client had a long history of treatment at the Institute of Mental Health.
/** Updates the channel entry list for the panel to match the image. */
private void resetChannelEntriesForPanel(MultiChannelImage impw, PanelListElement panel) {
    if (panel.designation == PanelListElement.MERGE_IMAGE_PANEL) {
        this.setUpChannelEntryForMerge(impw, panel, panel.targetFrameNumber, panel.targetSliceNumber);
    } else {
        setUpChannelEntriesForPanel(impw, panel, panel.targetChannelNumber, panel.targetFrameNumber, panel.targetSliceNumber);
    }
    panel.purgeDuplicateChannelEntries();
    updateChannelOrder(panel);
    if (panel.getChannelLabelDisplay() != null) {
        panel.getChannelLabelDisplay().setParaGraphToChannels();
    }
}
/*---------------------------------------------------------------------------------------------
 *  Copyright (c) <NAME>. All rights reserved.
 *  Licensed under the MIT License. See License.txt in the project root for license information.
 *--------------------------------------------------------------------------------------------*/

import { extHostNamedCustomer } from 'vs/workbench/api/common/extHostCustomers';
import { MainContext, MainThreadKeytarShape, IExtHostContext } from 'vs/workbench/api/common/extHost.protocol';
import { ICredentialsService } from 'vs/workbench/services/credentials/common/credentials';

@extHostNamedCustomer(MainContext.MainThreadKeytar)
export class MainThreadKeytar implements MainThreadKeytarShape {

    constructor(
        _extHostContext: IExtHostContext,
        @ICredentialsService private readonly _credentialsService: ICredentialsService,
    ) { }

    async $getPassword(service: string, account: string): Promise<string | null> {
        return this._credentialsService.getPassword(service, account);
    }

    async $setPassword(service: string, account: string, password: string): Promise<void> {
        return this._credentialsService.setPassword(service, account, password);
    }

    async $deletePassword(service: string, account: string): Promise<boolean> {
        return this._credentialsService.deletePassword(service, account);
    }

    async $findPassword(service: string): Promise<string | null> {
        return this._credentialsService.findPassword(service);
    }

    async $findCredentials(service: string): Promise<Array<{ account: string, password: string }>> {
        return this._credentialsService.findCredentials(service);
    }

    dispose(): void {
        //
    }
}
First-aid knowledge about tooth avulsion among dentists, doctors and lay people. In avulsion, teeth are bodily displaced out of the bony socket. Boys, aged 7-9 years, are most prone to avulsion of maxillary central incisors. Tooth avulsion should ideally be treated with immediate replantation. Because of the urgency in treatment, personnel dealing with this injury should have knowledge about the first-aid treatment. This study was conducted to assess the first-aid knowledge about tooth avulsion among dentists, doctors, students, school teachers and the general public in Lahore, Pakistan. Data were collected using a form with one open-ended question about the first-aid treatment of traumatic avulsion. Immediate replantation of the avulsed tooth was suggested by 10.1% of 377 respondents. Among dentists, 45.8% suggested immediate replantation, with the rest suggesting transport of the tooth to a dentist for replantation. Among all other groups (non-dentists) immediate replantation was suggested by 4.6% and transport to a dentist by 3.3%. Non-dentists in Pakistan, including doctors, have insufficient knowledge about the immediate treatment of tooth avulsion. Dentists, in comparison, have significantly more knowledge, but may need training in selection of the appropriate treatment option and handling and care of the avulsed tooth.
1. Technical Field The present invention relates to the processing of dried fruits to improve their softness retention characteristics, and more particularly to the infusion of raisins with glycerol. 2. Background Information It is well known that raisins and other dried fruits lose enough moisture over time to reduce their softness characteristics beyond desirable limits. This problem is especially pronounced when the dried fruit is mixed with a dry cereal, such as corn or bran flakes. Many methods have been disclosed for minimizing this problem. For example, U.S. Pat. No. 5,439,692 discloses a method in which glycerol is infused into raisins under vacuum in order to improve the softness retention characteristics of the raisins. A problem with this technique is that it requires an apparatus capable of achieving and maintaining a reduced pressure of about 35 mm of Hg. The reduced pressure requirement adds to the cost and complexity of the process for treating dried fruit with glycerol. What is needed is a simpler and less expensive method of infusing dried fruit with glycerol.
import { DeleteResponse } from "../../../../types/common";

declare const _default: (app: any) => any;
export default _default;

export declare type AdminUploadRes = {
    uploads: any[];
};

export declare type AdminDeleteUploadRes = DeleteResponse;

export * from "./create-upload";
import tensorflow as tf
import math as m


def gaussian_loglike_loss(sigma=1.):
    sigma = tf.convert_to_tensor(sigma, dtype=tf.float32)

    @tf.function
    def loss_fun(y_true, y_pred):
        # GAUSSIAN LOG-LIKELIHOOD (negated). Note that sigma only enters
        # through the commented-out constant term, so the active part of
        # the loss is a plain summed squared error.
        beh = tf.cast(y_pred, dtype=tf.float32)
        targets = tf.cast(y_true, dtype=tf.float32)
        mse = 0.5 * tf.reduce_sum(tf.keras.backend.square(targets - beh))
        # constant = tf.reduce_sum(tf.ones_like(
        #     targets) * (sigma * tf.math.log(2 * m.pi)))
        return mse  # + constant  -> -loglik

    return loss_fun
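A hedged usage sketch (the model and data here are placeholders, not from the original source): the factory returns a Keras-compatible loss closure, so it can be passed straight to compile.

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.compile(optimizer="adam", loss=gaussian_loglike_loss(sigma=1.0))

x = tf.random.normal((32, 4))   # toy inputs
y = tf.random.normal((32, 1))   # toy targets
model.fit(x, y, epochs=1, verbose=0)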
/*
 * This file is part of Burningwave Core.
 *
 * Author: <NAME>
 *
 * Hosted at: https://github.com/burningwave/core
 *
 * --
 *
 * The MIT License (MIT)
 *
 * Copyright (c) 2019 <NAME>
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated
 * documentation files (the "Software"), to deal in the Software without restriction, including without
 * limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
 * the Software, and to permit persons to whom the Software is furnished to do so, subject to the following
 * conditions:
 *
 * The above copyright notice and this permission notice shall be included in all copies or substantial
 * portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT
 * LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO
 * EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN
 * AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE
 * OR OTHER DEALINGS IN THE SOFTWARE.
 */
package org.burningwave.core.io;

import static org.burningwave.core.assembler.StaticComponentContainer.ManagedLoggersRepository;
import static org.burningwave.core.assembler.StaticComponentContainer.Methods;
import static org.burningwave.core.assembler.StaticComponentContainer.Paths;
import static org.burningwave.core.assembler.StaticComponentContainer.Streams;
import static org.burningwave.core.assembler.StaticComponentContainer.ThreadHolder;
import static org.burningwave.core.assembler.StaticComponentContainer.Throwables;

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.StandardOpenOption;
import java.util.Arrays;
import java.util.Collection;
import java.util.Iterator;
import java.util.Optional;
import java.util.UUID;

import org.burningwave.core.Closeable;
import org.burningwave.core.Component;
import org.burningwave.core.ManagedLogger;
import org.burningwave.core.assembler.StaticComponentContainer;
import org.burningwave.core.function.Executor;

public class FileSystemHelper implements Component {
    private String name;
    private File mainTemporaryFolder;
    private String id;
    private Scavenger scavenger;

    private FileSystemHelper(String name) {
        this.name = name;
        id = UUID.randomUUID().toString() + "_" + System.currentTimeMillis();
    }

    public static FileSystemHelper create(String name) {
        return new FileSystemHelper(name);
    }

    public void clearBurningwaveTemporaryFolder() {
        delete(Arrays.asList(getOrCreateBurningwaveTemporaryFolder().listFiles()));
    }

    public void clearMainTemporaryFolder() {
        delete(getOrCreateMainTemporaryFolder());
    }

    public File getOrCreateBurningwaveTemporaryFolder() {
        return getOrCreateMainTemporaryFolder().getParentFile();
    }

    public File getOrCreateMainTemporaryFolder() {
        if (mainTemporaryFolder != null && mainTemporaryFolder.exists()) {
            return mainTemporaryFolder;
        }
        synchronized (this) {
            if (mainTemporaryFolder != null && mainTemporaryFolder.exists()) {
                return mainTemporaryFolder;
            }
            return mainTemporaryFolder = Executor.get(() -> {
                File toDelete = File.createTempFile("_BW_TEMP_", "_temp");
                File tempFolder = toDelete.getParentFile();
                File folder = new File(tempFolder.getAbsolutePath() + "/" + "Burningwave" + "/" + id);
                if (!folder.exists()) {
                    folder.mkdirs();
                    folder.deleteOnExit();
                }
                toDelete.delete();
                return folder;
            });
        }
    }

    public File getOrCreatePingFile() {
        File pingFile = new File(Paths.clean(getOrCreateBurningwaveTemporaryFolder().getAbsolutePath() + "/" + id + ".ping"));
        if (!pingFile.exists()) {
            Executor.run(() -> pingFile.createNewFile());
            pingFile.deleteOnExit();
        }
        return pingFile;
    }

    public File createTemporaryFolder(String folderName) {
        return Executor.get(() -> {
            File tempFolder = new File(getOrCreateMainTemporaryFolder().getAbsolutePath() + "/" + folderName);
            if (tempFolder.exists()) {
                tempFolder.delete();
            }
            tempFolder.mkdirs();
            return tempFolder;
        });
    }

    @Override
    public File getOrCreateTemporaryFolder(String folderName) {
        return Executor.get(() -> {
            File tempFolder = new File(getOrCreateMainTemporaryFolder().getAbsolutePath() + "/" + folderName);
            if (!tempFolder.exists()) {
                tempFolder.mkdirs();
            }
            return tempFolder;
        });
    }

    public void delete(Collection<File> files) {
        if (files != null) {
            Iterator<File> itr = files.iterator();
            while (itr.hasNext()) {
                File file = itr.next();
                if (file.exists()) {
                    delete(file);
                }
            }
        }
    }

    public boolean delete(File file) {
        if (file.isDirectory()) {
            File[] files = file.listFiles();
            if (files != null) { // some JVMs return null for empty dirs
                for (File fsItem : files) {
                    delete(fsItem);
                }
            }
        }
        if (!file.delete()) {
            file.deleteOnExit();
            return false;
        }
        return true;
    }

    public void deleteOnExit(File file) {
        if (file.isDirectory()) {
            File[] files = file.listFiles();
            if (files != null) { // some JVMs return null for empty dirs
                for (File fsItem : files) {
                    deleteOnExit(fsItem);
                }
            }
        }
        file.deleteOnExit();
    }

    public boolean delete(String absolutePath) {
        return delete(new File(absolutePath));
    }

    public void deleteOnExit(String absolutePath) {
        deleteOnExit(new File(absolutePath));
    }

    public void startSweeping() {
        if (scavenger == null) {
            synchronized (this) {
                if (scavenger == null) {
                    scavenger = new Scavenger(this, getTemporaryFileScavengerThreadName(), 3600000, 30000);
                }
            }
        }
        scavenger.start();
    }

    public void stopSweeping() {
        if (scavenger != null) {
            scavenger.stop();
        }
    }

    private String getTemporaryFileScavengerThreadName() {
        return Optional.ofNullable(name).map(nm -> nm + " - ").orElseGet(() -> "") + "Temporary file scavenger";
    }

    @Override
    public void close() {
        if (this != StaticComponentContainer.FileSystemHelper ||
            Methods.retrieveExternalCallerInfo().getClassName().equals(StaticComponentContainer.class.getName())
        ) {
            Scavenger scavenger = this.scavenger;
            if (scavenger != null) {
                scavenger.close();
            }
            closeResources(() -> id == null, () -> {
                clearMainTemporaryFolder();
                this.scavenger = null;
                id = null;
                mainTemporaryFolder = null;
            });
        } else {
            Throwables.throwException("Could not close singleton instance {}", this);
        }
    }

    public static class Scavenger implements ManagedLogger, Closeable {
        private String name;
        private FileSystemHelper fileSystemHelper;
        private long deletingInterval;
        private long waitInterval;
        private File burningwaveTemporaryFolder;
        long lastDeletionStartTime;

        private Scavenger(FileSystemHelper fileSystemHelper, String name, long deletingInterval, long waitInterval) {
            this.fileSystemHelper = fileSystemHelper;
            this.deletingInterval = deletingInterval;
            this.waitInterval = waitInterval;
            this.burningwaveTemporaryFolder = fileSystemHelper.getOrCreateBurningwaveTemporaryFolder();
            this.name = name;
        }

        public boolean isAlive() {
            return ThreadHolder.isAlive(name);
        }

        void pingAndDelete() {
            try {
                setPingTime(fileSystemHelper.getOrCreatePingFile().getAbsolutePath());
            } catch (Throwable exc) {
                ManagedLoggersRepository.logError(getClass()::getName, "Exception occurred while setting ping time on file " + fileSystemHelper.getOrCreatePingFile().getAbsolutePath());
                ManagedLoggersRepository.logError(getClass()::getName, exc.getMessage());
                ManagedLoggersRepository.logInfo(getClass()::getName, "Current execution id: {}", fileSystemHelper.id);
            }
            if (System.currentTimeMillis() - lastDeletionStartTime > deletingInterval) {
                lastDeletionStartTime = System.currentTimeMillis();
                for (File fileSystemItem : burningwaveTemporaryFolder.listFiles()) {
                    if (!fileSystemItem.getName().equals(fileSystemHelper.getOrCreateMainTemporaryFolder().getName()) &&
                        !fileSystemItem.getName().equals(fileSystemHelper.getOrCreatePingFile().getName())
                    ) {
                        try {
                            try {
                                if (fileSystemItem.isDirectory()) {
                                    File pingFile = new File(burningwaveTemporaryFolder.getAbsolutePath() + "/" + fileSystemItem.getName() + ".ping");
                                    long pingTime = getCreationTime(fileSystemItem.getName());
                                    if (pingFile.exists()) {
                                        pingTime = getOrSetPingTime(pingFile);
                                    }
                                    if (System.currentTimeMillis() - pingTime >= deletingInterval) {
                                        delete(fileSystemItem);
                                    }
                                } else if (fileSystemItem.getName().endsWith("ping")) {
                                    long pingTime = getOrSetPingTime(fileSystemItem);
                                    if (System.currentTimeMillis() - pingTime >= deletingInterval) {
                                        delete(fileSystemItem);
                                    }
                                }
                            } catch (Throwable exc) {
                                ManagedLoggersRepository.logWarn(getClass()::getName, "Exception occurred while cleaning temporary file system item '{}'", fileSystemItem.getAbsolutePath());
                                if (fileSystemItem.getName().contains("null")) {
                                    ManagedLoggersRepository.logInfo(getClass()::getName, "Trying to force deleting of '{}'", fileSystemItem.getAbsolutePath());
                                    delete(fileSystemItem);
                                } else {
                                    throw exc;
                                }
                            }
                        } catch (Throwable exc) {
                            ManagedLoggersRepository.logError(getClass()::getName, "Could not delete '{}' automatically. To avoid this error remove it manually", fileSystemItem.getAbsolutePath());
                            ManagedLoggersRepository.logInfo(getClass()::getName, "Current execution id: {}", fileSystemHelper.id);
                        }
                    }
                }
            }
        }

        public void start() {
            lastDeletionStartTime = -1;
            ThreadHolder.startLooping(name, true, Thread.MIN_PRIORITY, thread -> {
                pingAndDelete();
                thread.waitFor(waitInterval);
            });
        }

        long getOrSetPingTime(File pingFile) throws IOException {
            long pingTime = -1;
            try {
                pingTime = getPingTime(pingFile);
            } catch (Throwable exc) {
                ManagedLoggersRepository.logError(getClass()::getName, "Exception occurred while getting ping time on file " + pingFile.getAbsolutePath());
                ManagedLoggersRepository.logError(getClass()::getName, exc.getMessage());
                ManagedLoggersRepository.logInfo(getClass()::getName, "Current execution id: {}", fileSystemHelper.id);
                pingTime = setPingTime(pingFile.getAbsolutePath());
                ManagedLoggersRepository.logInfo(getClass()::getName, "Ping time reset to {} for file {}", pingTime, pingFile.getAbsolutePath());
            }
            return pingTime;
        }

        long setPingTime(String absolutePath) throws IOException {
            long pingTime = System.currentTimeMillis();
            Files.write(
                java.nio.file.Paths.get(absolutePath),
                (String.valueOf(pingTime) + ";").getBytes(),
                StandardOpenOption.TRUNCATE_EXISTING
            );
            return getPingTime(new File(absolutePath));
        }

        Long getCreationTime(String resourceName) {
            return Long.valueOf(resourceName.split("_")[1]);
        }

        void delete(File resource) {
            fileSystemHelper.delete(resource.getAbsolutePath());
        }

        long getPingTime(File pingFile) throws IOException {
            long pingTime;
            try (InputStream pingFileAsInputStream = new FileInputStream(pingFile)) {
                StringBuffer content = Streams.getAsStringBuffer(pingFileAsInputStream);
                pingTime = Long.valueOf(content.toString().split(";")[0]);
            }
            return pingTime;
        }

        public void stop() {
            ThreadHolder.stop(name);
        }

        @Override
        public void close() {
            closeResources(() -> burningwaveTemporaryFolder == null, () -> {
                stop();
                burningwaveTemporaryFolder = null;
                fileSystemHelper = null;
            });
        }
    }
}
Teens are wild in the streets. There is no longer any respect for authority. Manners are yesterday’s news. And elderly people fear for their lives when they step out of their doorway. And yet the Daily News’ Denis Hamill is upset that the Appellate court ruled that a spanking did not constitute excessive corporal punishment. The 8-year-old was cursing up a storm at a party and the parent spanked her. What did he expect the parent to do, give the kid a time out and take away her cellphone for a week? Give me a break. Before all of you get in my face, I don’t believe that a parent should beat a child, but a smack on the fanny is not a beating. Let’s face it — all the Dr. Spock methods really haven’t worked. There is more disrespect now than ever before. I would never dream of answering my parents back, or any authority figure, and that includes teachers, police officers, elderly people — hell no, not even the postman. And all it took was a little fear that my parents might spank me if I did. Did this mar my psyche for all eternity? I can honestly say I have no long-lasting psychological defects stemming from a whack on the backside when I was 10 because I disrespected my mother. In fact, as a result of that whack on the backside I grew up having a very healthy respect for authority and I stayed on the straight and narrow. Can we say that for most of the youth of today? The job of a parent is to set ground rules, nurture his child, raise her with respect and tolerance of others, and provide realistic boundaries for his children to follow, so that the child grows to be a responsible adult and an asset to the community. If a spanking will accomplish that, so be it. Not every child is a star athlete, nor a mathematical genius. Not every child is destined for greatness in the history books. But every child is destined to grow up and become a useful part of society, to benefit this world, and leave it a better place after they have gone. Children are no longer taught the golden rule, nor how to respect anyone or anything. How dare he equate a parent spanking a child with the horrible beating death of Myles Dobson, the 4-year-old boy who was tortured and murdered by a psychotic babysitter. Sorry, Mr. Hamill, but a smack on the backside does not an abusive parent make. Yes, there will always be someone who is violent. Yes, Mr. Hamill, there will always be tragedies in this world where innocent children are hurt. But not every parent is an abusive fiend who lives to torture their child, nor is spanking aberrational behavior. For millennia, parents have spanked their children when it was warranted. It did not result in a world of abused, damaged, and psychologically marred individuals who grew up to be depraved, wanton felons. Maybe a child should be a little afraid of that whack on the backside. Maybe, Mr. Hamill, the streets would be a little safer for the elderly people who now walk in fear, if someone had taught the young thugs out there some respect, or at least a little fear of a whack on the backside with a wooden spoon. Joanna DelBuono writes about national issues — like child rearing — every Wednesday on BrooklynDaily.com. E-mail her at jdelbuono@cnglocal.com.
#include <yuno.h>
#include <test.h>

void test_yunopipe() {
  test_yunopipe1();
  test_yunopipe2();
}
def compute_position(self):
    self.position = self.atom.get_cell().get_periodic_image(
        self.atom.get_position_cartesian(),
        self.supercell[0], self.supercell[1], self.supercell[2])
/**
 * Execute DML Statement
 *
 * @param sqlNodeAndOptions Parsed DML object
 * @param headers extra headers map for minion task submission
 * @return BrokerResponse is the DML executed response
 */
public BrokerResponse executeDMLStatement(SqlNodeAndOptions sqlNodeAndOptions,
    @Nullable Map<String, String> headers) {
  DataManipulationStatement statement = DataManipulationStatementParser.parse(sqlNodeAndOptions);
  BrokerResponseNative result = new BrokerResponseNative();
  switch (statement.getExecutionType()) {
    case MINION:
      AdhocTaskConfig taskConf = statement.generateAdhocTaskConfig();
      try {
        Map<String, String> tableToTaskIdMap = getMinionClient().executeTask(taskConf, headers);
        List<Object[]> rows = new ArrayList<>();
        tableToTaskIdMap.forEach((key, value) -> rows.add(new Object[]{key, value}));
        result.setResultTable(new ResultTable(statement.getResultSchema(), rows));
      } catch (IOException e) {
        result.setExceptions(ImmutableList.of(QueryException.getException(QueryException.QUERY_EXECUTION_ERROR, e)));
      }
      break;
    case HTTP:
      try {
        result.setResultTable(new ResultTable(statement.getResultSchema(), statement.execute()));
      } catch (Exception e) {
        result.setExceptions(ImmutableList.of(QueryException.getException(QueryException.QUERY_EXECUTION_ERROR, e)));
      }
      break;
    default:
      result.setExceptions(ImmutableList.of(QueryException.getException(QueryException.QUERY_EXECUTION_ERROR,
          new UnsupportedOperationException("Unsupported statement - " + statement))));
      break;
  }
  return result;
}
Boris Yeltsin circling over Shannon diplomatic incident

On 30 September 1994, Boris Yeltsin, then President of the Russian Federation, was scheduled for an official state visit to the Republic of Ireland but failed to get off his plane when it landed at Shannon Airport. The incident embarrassed the Irish government, in particular Taoiseach Albert Reynolds, who was left standing at the foot of the stairs to Yeltsin's plane, and raised questions about Yeltsin's health and fitness to serve.

Yeltsin's return from the United States and Irish planning

Boris Yeltsin travelled to New York to address the United Nations General Assembly on 26 September 1994. He then travelled to Seattle, Washington, to promote trade relations between the United States and Russia. After delivering a speech Yeltsin departed for Moscow via Shannon. The choice of Shannon was symbolic; in 1980 Aeroflot started operations there as its most westerly non-NATO hub in Europe. The trip was scheduled at short notice. Reynolds was in Australia on official business when he learned of the intended stopover. He cut his visit short and returned to Ireland, landing at Shannon Airport just a few hours before Yeltsin's scheduled arrival. Thirty-one official vehicles waited on the runway to provide transportation to a formal reception at Dromoland Castle. The Irish Defence Forces brought in the band of the Southern Command and deployed one hundred soldiers of the 12th Infantry Battalion to serve as an honor guard.

Incident

Around 12:30 p.m. IST an aircraft bearing the Russian advance party landed at Shannon. The official delegation, including Reynolds, the Russian ambassador to Ireland, Nikolai Kozyrev, Bertie Ahern (the Minister for Finance), Brian Cowen (the Minister for Energy), and Willie O'Dea (Minister of State), went to the runway to greet Yeltsin's plane, which was expected ten minutes later. The assembled dignitaries waited but Yeltsin's plane did not land. Airport officials reported that it was circling over Shannon and County Clare. After circling for an hour the plane landed. However, Yeltsin did not appear when the plane's door opened. An Aeroflot official informed Kozyrev that Yeltsin was unwell and that the vice premier, Oleg Soskovets, would meet with the Irish delegation. Kozyrev was able to board the plane but was unable to see Yeltsin. Alexander Korzhakov, Yeltsin's bodyguard, told Kozyrev that Yeltsin was "very tired." Kozyrev returned to the runway and informed Reynolds that Yeltsin would not be making an appearance due to poor health. Reynolds replied, "Well now, if he is sick, there is nothing we can do about it. I am willing to talk to the Russian President's representative, but Mr. Yeltsin, my guest, is on Irish soil, and I cannot miss the opportunity to go on board the airplane for five minutes, shake the president's hand and wish him a speedy recovery." The Russians rejected this suggestion. Reynolds then agreed to meet Soskovets and gave orders for a meeting to be held in the airport. The Irish government commandeered the VIP lounge of Delta Air Lines to serve as the venue. Reynolds and Soskovets held a brief meeting and Yeltsin's plane departed immediately after it concluded.

Aftermath

The immediate reaction in Ireland was uniformly negative.
Yeltsin's problems with alcohol were well known, and the national and international media assumed that he had been too drunk to disembark from the plane (although in 2010 Tatyana Yumasheva, Yeltsin's daughter, suggested that her father had suffered a heart attack on the plane). The Irish Times ran a cartoon on its front page the next day which depicted a bottle of vodka bouncing down mobile stairs while an onlooker states "At last a message from President Yeltsin." A large photo of Reynolds standing on the runway waiting for Yeltsin was printed on page three. The Irish Independent ran a photograph on its front page of Reynolds standing on the tarmac looking at his watch. Its editorial page stated, "When a statesman occupying one of the most pivotal positions imaginable neglects basic courtesy and insults his hosts, searching questions must be asked about his fitness to hold office." Upon his return to Moscow, Yeltsin stated that he had merely overslept: "I feel excellent. I can tell you honestly, I just overslept. The security services did not let in the people who were due to wake me - of course I will sort things out and punish them." The Irish Press criticized the lack of respect shown to Reynolds and suggested that Yeltsin's excuses be taken with "a large measure of vodka." Reynolds subsequently claimed that he had used the incident to extract favours from Yeltsin regarding the operation of Aer Rianta International in Russia. Maxine David at the University of Surrey supports Reynolds's assessment. She notes that in the years following the Shannon incident Russian-Irish commercial aviation links strengthened. The term "circling over Shannon" briefly became a euphemism in Ireland to describe the condition of a person who has had too much to drink.
This post originally appeared on Ozy.com. Researchers found that, in general, 32 percent of heavy Facebook users consider leaving their spouse. Facebook in particular is “a positive, significant predictor of divorce rate and spousal troubles,” the study notes. Of course, there are some limits to this finding — it’s all about correlation. But the study’s authors feel they’re noticing something that’s genuinely statistically significant. As usage of the social media site rose across 43 states, they found that a 20 percent bump in Facebook use equated with a greater-than-2-percent bump in divorce rates between 2008 and 2010. Researchers looked at numbers from Texas, specifically, and found the larger correlation was true there, too. Among non-social-media users, about 16 percent pondered leaving their mates at some point. Social media users doubled that number. While previous studies suggested that Facebook and its ilk make it easier for people to cheat on their spouses, the authors of the new study suggest that men and women troubled by their marriage may turn to social media for emotional support (as opposed to just looking for a little somethin’ on the side). Data aside, the message is to trust your gut: If your sweetheart seems more attached to Instagram than to you, it’s probably time to take stock of your relationship — before one of you is tweeting that it’s over.
A common type of truck tailgate lift comprises a rectangular load platform pivotally connected at a forward edge to the swingable rear ends of a pair of parallelogram linkages. The forward ends of the linkages are secured to the bed, frame or chassis of the truck, and a power means, typically an electro-hydraulic system, is provided for raising and lowering the load platform relative to the bed of the truck. The parallelogram linkages maintain a substantially horizontal, active attitude of the load platform during freight handling operations. When not used for loading purposes, the load platform is turned upwardly to a substantially vertical attitude, a transit position in which it may also serve as the closer for the tailgate opening of the truck body. In the past, a variety of mechanisms have been developed for turning the load platform between horizontal and vertical positions, using either the power cylinder or cylinders that raise and lower the liftgate or a dedicated, separate power cylinder or cylinders. One commonly used approach in the former arrangement uses a spaced-apart pair of cam blocks on the rearwardly facing vertical surface of the sill of the truck bed, which are engageable by a spaced pair of cam followers mounted to the forward edge of the load platform. The present invention is an improvement over all of these systems.
/* This file belongs to the LibM2 library (http://github.com/imermcmaps/LibM2)
 * Copyright (c) 2013, iMer (www.imer.cc)
 * All rights reserved.
 * Licensed under the BSD 3-clause license (http://opensource.org/licenses/BSD-3-Clause)
 */
#ifndef __LIBM2_ADDR_HPP
#define __LIBM2_ADDR_HPP
#include "revision.hpp"
#if REVISION == R34083
#include "addr/34083.hpp"
#endif
#endif // __LIBM2_ADDR_HPP
Occupied, please call for appointment. 3 bedroom, 2.5 bath on 19 acres - perfect for horses & other farm animals. Quiet dead-end street. Open floor plan, separate master. Half bath in garage makes it nice when working outdoors.
Feasibility Study of Lamellar Keratoplasty in a Murine Model Purpose: In contrast to penetrating keratoplasty (PK), the donor cornea in lamellar keratoplasty (LK) remains separated from the host aqueous humor. There is debate about the relative merits of each approach, but experimental comparisons have never been performed in animal models. Therefore, the authors developed a murine LK model. Methods: For allogeneic PK and LK surgeries, corneas of C57BL/6 mice were transplanted to BALB/c mice, assessed by slit lamp, and scored for opacity, edema, and neovascularization up to 46 d post-transplantation. Additional PK or LK surgeries were performed for histological assessment. Results: The graft rejection rate was lower in LK than in PK (69.2 vs. 100%), as was neovascularization (84.6 vs. 100%). In LK, inflammatory cells infiltrated primarily the button; in PK, heavier infiltration was observed throughout the cornea. Conclusions: This study demonstrates the feasibility of LK in mice and presents data suggesting that the inflammatory response in LK differs from that in PK.
package org.codehaus.modello.plugin.java;

/*
 * Copyright 2001-2006 The Apache Software Foundation.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import org.codehaus.modello.AbstractModelloJavaGeneratorTest;
import org.codehaus.modello.core.ModelloCore;
import org.codehaus.modello.model.Model;

import java.util.Properties;

/**
 * BiDirectionalOverrideJavaGeneratorTest
 *
 * @author <a href="mailto:<EMAIL>"><NAME></a>
 */
public class BiDirectionalOverrideJavaGeneratorTest
    extends AbstractModelloJavaGeneratorTest
{
    public BiDirectionalOverrideJavaGeneratorTest()
    {
        super( "bidirectional" );
    }

    public void testJavaGenerator()
        throws Throwable
    {
        ModelloCore modello = (ModelloCore) lookup( ModelloCore.ROLE );

        Model model = modello.loadModel( getXmlResourceReader( "/models/bidirectional-override.mdo" ) );

        Properties parameters = getModelloParameters( "1.0.0", false );

        modello.generate( model, "java", parameters );

        compileGeneratedSources();

        verifyCompiledGeneratedSources( "JavaVerifier" );
    }
}
/* Function:    tinybool sdp_attr_fmtp_payload_valid(void *sdp_ptr)
 * Description: Returns true or false depending on whether an fmtp
 *              attribute was specified with the given payload value
 *              at the given level. If it was, the instance number of
 *              that attribute is returned.
 * Parameters:  sdp_ptr      The SDP handle returned by sdp_init_description.
 *              level        The level to check for the attribute.
 *              cap_num      The capability number associated with the
 *                           attribute if any. If none, should be zero.
 *              inst_num     The attribute instance number to check.
 *              payload_type The payload type to look for.
 * Returns:     TRUE or FALSE.
 */
tinybool sdp_attr_fmtp_payload_valid (void *sdp_ptr, u16 level, u8 cap_num,
                                      u16 *inst_num, u16 payload_type)
{
    u16          i;
    sdp_t       *sdp_p = (sdp_t *)sdp_ptr;
    sdp_attr_t  *attr_p;
    u16          num_instances;

    if (sdp_verify_sdp_ptr(sdp_p) == FALSE) {
        return (FALSE);
    }

    if (sdp_attr_num_instances(sdp_ptr, level, cap_num, SDP_ATTR_FMTP,
                               &num_instances) != SDP_SUCCESS) {
        return (FALSE);
    }

    for (i = 1; i <= num_instances; i++) {
        attr_p = sdp_find_attr(sdp_p, level, cap_num, SDP_ATTR_FMTP, i);
        if ((attr_p != NULL) &&
            (attr_p->attr.fmtp.payload_num == payload_type)) {
            *inst_num = i;
            return (TRUE);
        }
    }

    return (FALSE);
}
Inter/intraframe coding of color TV signals for transmission at the third level of the digital hierarchy The data rate of the third level of the digital hierarchy is regarded as particularly economical for the international exchange of television programs via satellite. Nonetheless, the exchange is complicated by the fact that this level has not been standardized on a world-wide scale: a rate of 32 Mbit/s is used in Japan, 34 Mbit/s in Europe, and 44 Mbit/s in North America. The present paper details a procedure which allows digital TV signals of studio standard to be transmitted on 32-, 34-, or 44-Mbit/s channels by means of sampling rate conversion and adaptive source coding. For the case where an international link consists of sections using different data rates, a transcoding method is described and tested in computer simulations which converts the third-level rates into one another without cascading DPCM decoders and encoders.
Comparative effect of propofol versus sevoflurane on renal ischemia/reperfusion injury after elective open abdominal aortic aneurysm repair

Background: Renal injury is a common cause of morbidity and mortality after elective abdominal aortic aneurysm (AAA) repair. Propofol has been reported to protect several organs from ischemia/reperfusion (I/R) induced injury. We performed a randomized clinical trial to compare propofol and sevoflurane for their effects on renal I/R injury in patients undergoing elective AAA repair. Materials and Methods: Fifty patients scheduled for elective AAA repair were randomized to receive propofol anesthesia in group I or sevoflurane anesthesia in group II. Urinary kidney-specific proteins (N-acetyl-beta-D-glucosaminidase, alpha-1-microglobulin, glutathione transferase-pi, and glutathione transferase-alpha) were measured within 5 min of starting anesthesia as a baseline (T0), at the end of surgery (T1), 8 h after surgery (T2), 16 h after surgery (T3), and 24 h postoperatively (T4). Serum pro-inflammatory cytokines (tumor necrosis factor-α and interleukin-1β) were measured at the same time points. In addition, serum creatinine and cystatin C were measured before starting surgery as a baseline and at days 1, 3, and 6 after surgery. Results: Postoperative urinary concentrations of all measured kidney-specific proteins and serum pro-inflammatory cytokines were significantly lower in the propofol group. In addition, serum creatinine and cystatin C were significantly lower in the propofol group compared with the sevoflurane group. Conclusion: Propofol significantly reduced renal injury after elective open AAA repair, and this could have clinical implications in situations of expected renal I/R injury.

Introduction

Renal injury, a result of the hemodynamic changes after aortic cross-clamping and ischemia/reperfusion (I/R) injury after declamping, is a common cause of morbidity and mortality after elective abdominal aortic aneurysm (AAA) repair. In one series, acute renal failure was reported in 6.7% of patients after elective open AAA repair and is an independent predictor of death. Multiple factors are involved in the etiology of renal injury during infrarenal AAA procedures. Aortic cross-clamping below the kidney triggers renal vasoconstriction that is associated with a redistribution of blood flow from the medullary to the cortical compartment. This renal vasoconstriction may be due to changes in the humoral and neurogenic factors that regulate renal blood flow or may be triggered by aortic cross-clamping induced turbulence in blood flow inside the aorta at the level of the kidney. In addition, an increase in renin activity may be induced by aortic cross-clamping. The subsequent reduction in renal blood flow may expose the cells of the renal tubules to ischemic injury. Furthermore, the renal tubular cells suffer I/R injury after release of the clamping, which is accompanied by a neutrophil-mediated systemic response. The pathophysiology of renal I/R injury is complex; the triggering of lipid peroxidation and the formation of free radicals have been shown to be major factors. Several studies have proven that propofol increases antioxidant capacity in different tissues.
There is some laboratory evidence to suggest that propofol may provide protection to the kidney through modulation of the systemic inflammatory response. Sevoflurane has been reported to be nephrotoxic in rats, while some recent studies have suggested that sevoflurane is protective against renal I/R injury in mice. Sevoflurane toxicity in humans has been examined by several investigations. Some human studies have reported that sevoflurane anesthesia resulted in an increase in the excretion of markers of renal injury, indicating potential nephrotoxicity, whereas other studies reported no effect. The purpose of this investigation was to compare the renal effect of intravenous (IV) anesthesia with propofol against inhalation anesthesia with sevoflurane in patients undergoing AAA repair.

Materials and Methods

This prospective randomized blinded study was performed on 50 American Society of Anesthesiologists class II or III patients scheduled for elective infrarenal AAA repair. The study was carried out between February 2012 and April 2014. Written informed consent was obtained from the patients and the Institutional Review Board of Minoufiya Faculty of Medicine approved the study (Ref: 11/A213/352). The study was registered with PACTR201505001095139. All the operations were performed by the same surgical team through a mini-laparotomy approach. Thorough clinical evaluation, electrocardiogram, echocardiography, and laboratory investigations were performed as a routine diagnostic check-up. Patients were excluded from the study if they needed concomitant procedures other than AAA repair, had experienced an acute coronary syndrome within 3 months, or were >85 years of age. Bisoprolol was prescribed preoperatively in a dose of 5 mg daily in the absence of contraindications (heart rate below 60 bpm or systolic blood pressure <100 mm Hg). Cardiac medications were continued up to the day of surgery. The patients were randomly allocated to receive propofol (n = 25) or sevoflurane (n = 25) anesthesia using a random number table generated by Microsoft Excel. An independent statistician was assigned to perform central randomization to ensure proper concealment of the study management from the patients and investigators until the release of the final statistical results. In the propofol group, general anesthesia was induced with propofol 1.5-2 mg/kg and fentanyl 3 µg/kg. Tracheal intubation was facilitated by administration of cis-atracurium 0.1 mg/kg. Anesthesia was maintained with a continuous infusion of propofol 4-6 mg/kg/h and cis-atracurium 2 µg/kg/min. In the sevoflurane group, anesthesia was induced as above but maintained with sevoflurane 1 MAC and cis-atracurium 2 µg/kg/min. A bispectral index electrode (BIS-Sensor, Aspect Medical Systems, USA) was positioned on the patient's forehead to monitor depth of anesthesia; the BIS value was kept between 45 and 55 in both groups by modulating the propofol infusion rate or sevoflurane concentration. In the operating room, a radial arterial catheter and multiple peripheral IV catheters were inserted. Heart rate, arterial blood pressure, and oxygen saturation were continuously monitored during the whole procedure. Fluid loading was performed with 1.0 L of 6% 130/0.4 hydroxyethyl starch (Voluven) infusion. Fluid and blood replacements were adjusted to maintain the patient's hematocrit value above 30%. Norepinephrine and nicardipine were used if required (if mean arterial blood pressure changed by more than 20%) to maintain hemodynamic stability.
Normothermia was maintained with fluid warming and forced-air warming (Bair Hugger). Blood glucose was kept normoglycemic (3.9-8.3 mmol/L). One analyst was blinded with respect to the drug under study during the procedure by covering the lines, infusion pump, and gas analyzer, and by numeric codes during the whole process of data evaluation. Furthermore, the physicians who were charged with the postoperative care of patients and with their discharges from the intensive care unit (ICU) and hospital were effectively blinded to the study design. The patients stayed in the ICU until return to their preoperative physiological homeostasis, including stable hemodynamics, adequate ventilation, normothermia, and satisfactory pain control. Hospital discharge was guided by the ability to ambulate independently and to tolerate oral feeding. Epidural analgesia was performed before starting anesthesia at the T8-T10 level by inserting an epidural catheter (Braun Perifix 18 ba and a microporous filter). A test dose of 4 ml of 1% lidocaine with epinephrine 5 µg/ml was used to test for intrathecal or intravascular injection, respectively. Epidural block activation was performed by injecting 12 ml of bupivacaine hydrochloride 0.25%. A further 4 ml was injected 2 h later as a maintenance dose and every hour thereafter for postoperative epidural analgesia. IV acetaminophen was also used postoperatively if needed.

Assessment of kidney function

All patients had a bladder catheter. The following assays were performed on urine specimens taken within 5 min of starting anesthesia as a baseline (T0), at the end of surgery (T1), 8 h after surgery (T2), 16 h after surgery (T3), and 24 h postoperatively (T4): N-acetyl-beta-D-glucosaminidase (beta-NAG), analyzed by a spectrophotometric method, normal value 0-7 U/L, intra- and inter-assay coefficients of variation 3.9%; alpha-1-microglobulin (alpha-1-M), assessed by immunonephelometry, normal value <14 mg/L, intra- and inter-assay coefficients of variation 3.7%; glutathione transferase-pi (GST-pi), measured by enzyme immunoassay, normal value 12-15 µg/L, intra- and inter-assay coefficients of variation 4.6%; and GST-alpha, measured by enzyme immunoassay, normal value 3.5-11 µg/L, intra- and inter-assay coefficients of variation 3.5%. In addition, the plasma pro-inflammatory cytokines tumor necrosis factor-α (TNF-α) and interleukin-1β (IL-1β) were measured at the same time points. Blood samples were immediately centrifuged and the serum separated, divided into aliquots, placed in Eppendorf tubes, and frozen at -80°C until assay. Commercial ELISA kits were used for the determination of TNF-α and IL-1β (enzyme-linked immunosorbent assay kit; Biomed, Diepenbeek, Belgium). Recordings were carried out in triplicate on a plate reader (GEST, General ELISA System Technology, Menarini Labs, Badalona, Spain) for the automatic ELISA technique. The lower limits of detection of the assays for TNF-α and IL-1β were 10.7 and 4.2 pg/ml, respectively. Intra- and inter-assay coefficients of variation for TNF-α and IL-1β were below 8%.

Statistical analysis

Continuous variables are expressed as mean (standard deviation) and categorical variables are reported as percentages. Statistical analyses were performed using Statistica for Windows version 10.0 software.
A preliminary study had demonstrated that, for patients scheduled for elective AAA repair at our hospital, the mean value of urinary beta-NAG was 2.5 (0.5) U/L, alpha-1-M was 4.9 (1.4) mg/L, GST-pi was 13.7 (3.4) µg/L, and GST-alpha was 4.9 (1.3) µg/L. With a two-sided type I error of 5% and study power at 80%, a sample size of 25 patients in each group was found sufficient to demonstrate a difference in the urinary kidney-specific proteins (0.6 U/L for beta-NAG, 1.5 mg/L for alpha-1-M, 3.6 µg/L for GST-pi, and 1.5 µg/L for GST-alpha). The Kolmogorov-Smirnov test was used to verify normal distribution of data. Distribution of residuals was tested to confirm that analysis of variance (ANOVA) was appropriate to our data. Data were analyzed on an intention-to-treat basis using two-way ANOVA for repeated measures, followed by the Student-Newman-Keuls test if a difference between groups had been detected. Changes over time in non-normally distributed data sets were analyzed by Friedman repeated-measures ANOVA on ranks. P < 0.05 was considered statistically significant (SigmaStat, Systat Software, Richmond, USA).

Results

Baseline characteristics and operative characteristics, including cross-clamp time, operating time, vasopressor requirements, and hemoglobin concentration changes, were comparable in both groups. Patients in the propofol group had lower urinary concentrations of all measured kidney-specific proteins and lower serum creatinine and cystatin C, in addition to lower serum pro-inflammatory cytokines, as follows. Serum creatinine at day 1 after surgery was significantly higher than baseline values in both groups but was significantly lower in the propofol group when compared with the sevoflurane group (Table 3). The overall two-way ANOVA analysis of the groups was significant (F = 8.66 and P = 0.001). Post-hoc test results are shown in Table 4. The overall two-way ANOVA analysis of serum cystatin C was also significant (F = 22.41 and P = 0.001 at day 1, F = 52.46 and P = 0.001 at day 3). Cystatin C was significantly higher than the baseline value in the propofol group at day 1 after surgery and returned to near-baseline values thereafter, whereas it was significantly higher than the baseline value in the sevoflurane group at both days 1 and 3 and was higher than the comparable values in the propofol group. Again, the results of post-hoc tests are shown in Table 4. Serum pro-inflammatory cytokine results are shown in Table 5.

Discussion

In this randomized trial, propofol reduced the risk of perioperative renal impairment in patients undergoing AAA repair, as manifested by changes in kidney-specific proteins and lower serum creatinine and cystatin C levels. Several studies have reported that propofol can provide protection against I/R injury while sevoflurane could not provide such protection. Several possible mechanisms have been proposed. Sánchez-Conde et al. compared the abilities of propofol and sevoflurane to modulate inflammation and oxidative stress to the kidney caused by supra-renal aortic cross-clamping. Compared to sevoflurane, propofol administration led to the modulation of markers of inflammation and decreased NF-kappa B expression. Wang et al. demonstrated a protective effect of propofol in renal I/R injury in rats and suggested that this was due to the induction of heme oxygenase-1 expression.
In a study comparing propofol against sevoflurane for their effects on the systemic inflammatory response during aortic surgery in pigs, propofol anesthesia was associated with less neutrophil infiltration, lower plasma pro-inflammatory cytokine levels, lower production of oxygen free radicals, less lipid peroxidation, and reduced inducible nitric oxide synthase activity. Another mechanism for the renal protective effect of propofol was reported in a study by Feng et al., where pretreatment with 5 µg/ml propofol protected human proximal renal tubular epithelial cells against anoxia-reoxygenation injury at clinically relevant concentrations by regulating the expression of apoptosis-related genes. Assad et al. reported that protection by propofol was probably due to a preconditioning effect and was at least in part mediated by KATP channels. Obal et al. compared the effect of preconditioning with sevoflurane and preconditioning with short episodes of ischemia on renal I/R injury in the rat in vivo. They reported that sevoflurane could not preserve renal function or attenuate cell damage in the rat in vivo. Higuchi et al. compared the effects of high- and low-flow sevoflurane and isoflurane anesthesia on renal function and on markers of nephrotoxicity in humans. Increased urinary beta-NAG excretion was seen in the low-flow and high-flow sevoflurane groups, but not in the isoflurane group (P < 0.01); however, it was not associated with any changes in blood urea nitrogen, creatinine, or creatinine clearance. In contrast to our findings, some studies have reported renal protection by sevoflurane. Lee et al. reported that sevoflurane protects against renal I/R injury in mice via the transforming growth factor-β1 (TGF-β1) pathway and stated that this protection was absent in mice deficient in TGF-β1 signaling, a fact that can explain why sevoflurane protection is controversial in different studies. In another study, Lee et al. demonstrated sevoflurane protection against renal I/R injury in cultured human proximal tubular cells and reported that this protection acted through activation of the TGF-β1 signaling pathways. Equipotent doses of volatile anesthetics (desflurane, halothane, isoflurane, or sevoflurane) were compared against injectable anesthetics (pentobarbital or ketamine) in rats subjected to renal I/R. Rats treated with volatile anesthetics had lower plasma creatinine and reduced renal necrosis 24-72 h after injury compared with rats anesthetized with pentobarbital or ketamine. Annecke et al. compared the effects of sevoflurane and propofol on I/R injury after thoracic-aortic occlusion in pigs. Serum markers of cellular injury (lactate dehydrogenase, aspartate transaminase, and alanine aminotransferase) were lower with sevoflurane; however, these markers are not specific to the kidney. Some studies have reported that hydroxyethyl starch infusion may have a negative impact on kidney function.
In our study, both groups were comparable regarding the volume of hydroxyethyl starch infused, which should eliminate its effect on our results. One limitation of the present study is the use of urinary alpha-1-M as a marker of renal tubular dysfunction (since its increase suggests impaired reabsorption by the tubules and hence tubular dysfunction). It should be emphasized that increased alpha-1-M, and the tubular dysfunction it indicates, is not necessarily a bad sign, as it suggests temporary dysfunction of the tubules, less renal tubular work, and less tubular oxygen consumption. In fact, increased alpha-1-M can be argued to be renoprotective through reducing renal workload, and hence oxygen demand, at a time of reduced oxygen delivery. Our study suffers from an additional number of limitations. With the exception of serum creatinine, the renal function measures used are at best subclinical markers of renal dysfunction and injury, and the clinical application of this work would require clinically used measures of renal function with greater patient numbers. Conclusion We have demonstrated that the use of propofol significantly reduced renal injury after elective open AAA repair; this could have clinical implications in situations of expected renal I/R injury, such as in patients suffering from hemorrhagic, traumatic, or septic shock, and in certain surgical procedures including renal transplantation and abdominal aortic surgery.
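As promised in the Methods above, here is a minimal sketch that approximates the sample-size reasoning as a two-sample t-test power calculation. The t-test framing, the statsmodels dependency and the marker table are my assumptions: the paper does not say which software or formula was used, and its figure of 25 patients per group presumably also allows for the repeated-measures design and dropout.

# Approximate the sample-size reasoning from the Methods as a two-sample
# t-test power calculation (an assumption; the paper's exact method is unstated).
from statsmodels.stats.power import TTestIndPower

# Baseline SD and minimum detectable difference per urinary marker,
# taken from the preliminary-study figures quoted in the text.
markers = {
    "beta-NAG (U/L)":   (0.5, 0.6),
    "alpha-1-M (mg/L)": (1.4, 1.5),
    "GST-pi (ug/L)":    (3.4, 3.6),
    "GST-alpha (ug/L)": (1.3, 1.5),
}

analysis = TTestIndPower()
for name, (sd, diff) in markers.items():
    d = diff / sd  # Cohen's d: detectable difference in SD units
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.80,
                             alternative="two-sided")
    print(f"{name}: d = {d:.2f}, n per group ~ {n:.0f}")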
def commit_buf(self, board):
    # Promote the freshly written buffer table to be the live table for
    # this board: drop the stale live table, rename the buffer into its
    # place, and commit both statements as one step.
    # Table names cannot be bound as SQL parameters, hence the string
    # formatting; table_name() and buf_table_name() are expected to
    # return trusted identifiers.
    self.conn.execute("drop table if exists %s" % table_name(board))
    self.conn.execute("alter table %s rename to %s" % (buf_table_name(board), table_name(board)))
    self.conn.commit()
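For context, a self-contained sketch of the buffer-then-swap pattern this method implements, using sqlite3 directly. The table_name and buf_table_name helpers here are hypothetical stand-ins for the ones the method assumes, and the schema and data are invented for illustration.

import sqlite3

# Hypothetical stand-ins for the helpers commit_buf() relies on.
def table_name(board):
    return "posts_%s" % board

def buf_table_name(board):
    return "posts_%s_buf" % board

conn = sqlite3.connect(":memory:")
board = "g"

# Writers fill the buffer table while readers keep using the live table...
conn.execute("create table %s (id integer, body text)" % buf_table_name(board))
conn.execute("insert into %s values (1, 'hello')" % buf_table_name(board))

# ...then the buffer is swapped in as the live table in one commit,
# exactly as commit_buf() does.
conn.execute("drop table if exists %s" % table_name(board))
conn.execute("alter table %s rename to %s" % (buf_table_name(board), table_name(board)))
conn.commit()

print(conn.execute("select * from %s" % table_name(board)).fetchall())  # [(1, 'hello')]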
package net.physiodelic.model;

import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;

import java.io.Serializable;

/**
 * Created by joris on 22/04/17.
 * Simple pojo to hold address information
 */
@Data
@NoArgsConstructor
@AllArgsConstructor
public class Address implements Serializable {

    private static final long serialVersionUID = 6760640046199377121L;

    private String streetOne;
    private String streetTwo;
    private String stateProvince;
    private String city;
    private String country;
    private String postalCode;
}
// That's All Folks !!
Behavior of a persistent current qubit in a time-dependent electromagnetic field This paper considers the behavior of a model persistent current qubit in the presence of a time-dependent electromagnetic field. A semi-classical approximation for the electromagnetic field is used to solve the time-dependent Schrodinger equation (TDSE) for the qubit, which is treated as a macroscopic quantum object. The qubit is described by a Hamiltonian involving the enclosed magnetic flux Φ and the electric displacement flux Q, which obey the quantum mechanical commutation relation. The paper includes a brief summary of recent work on quantum mechanical coherence in persistent current circuits, and the solution of the TDSE in superconducting rings. Of particular interest is the emergence of strongly non-perturbative behavior that corresponds to transitions between the energy levels of the qubit. These transitions are due to the strong coupling between the electromagnetic fields and the superconducting condensate and can appear at frequencies not predicted by conventional methods based on perturbations around the energy eigenstates of the time-independent system. The relevance of these non-perturbative processes to the operation of quantum logic gates based on superconducting circuits, and the effect of the resultant nonlinearities on the environmental degrees of freedom coupled to the qubit, are considered.
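For reference, the commutation relation invoked above is the standard conjugate-variable relation between the flux and charge operators of a superconducting loop. The hatted operator notation is mine, since the abstract itself does not display the formula:

$[\hat{\Phi}, \hat{Q}] = i\hbar$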
Executives tend to approach negotiations from two angles: they need to make it work with their external partner, and sell the deal internally. A working paper from Harvard Business School's James Sebenius points to a third element missing from previous research and practice: helping to resolve a partner's internal hang-ups and roadblocks. Sebenius calls these "Level 2 barriers": the factions or concerns 'behind the table' that can derail a deal, even if it's of mutual benefit to both parties. The paper focuses on diplomatic negotiations, but its insights are relevant to business leaders. When negotiators focus exclusively on the business issues of a deal, they can be particularly ill-informed about the internal politics of the other side. Remembering who's behind the scenes can speed up a deal and avoid costly delays. Deals can be structured to appeal to particular interests. A small concession informed by the other side's internal conflicts can go a long way. Public appearances and signals can be more important than they seem. A quick deal might make opponents feel marginalized, so public shows of argument and reconciliation might help. Negotiators can also work with one another to frame the deal in a way that makes it easier to sell internally. As complex as the business part of a deal can get, it's important to remember that everybody has to make things work internally. Making that an explicit part of a negotiation can benefit both sides.
Identification of survivin as a promising target for the immunotherapy of adult B-cell acute lymphoblastic leukemia B-cell acute lymphoblastic leukemia (B-ALL) is a rare heterogeneous disease characterized by a block in lymphoid differentiation and a rapid clonal expansion of immature, non-functioning B cells. Adult B-ALL patients have a poor prognosis, with a less than 50% chance of survival after five years and a high relapse rate after allogeneic haematopoietic stem cell transplantation. Novel treatment approaches are required to improve the outcome for patients, and the identification of B-ALL-specific antigens is essential for the development of targeted immunotherapeutic treatments. We examined twelve potential target antigens for the immunotherapy of adult B-ALL. RT-PCR indicated that only survivin and WT1 were expressed in B-ALL patient samples (7/11 and 6/11, respectively) but not normal donor control samples (0/8). Real-time quantitative (RQ)-PCR showed that survivin was the only antigen whose transcript exhibited significantly higher expression in the B-ALL samples (n = 10) compared with healthy controls (n = 4) (p = 0.015). Immunolabelling detected SSX2, SSX2IP, survivin and WT1 protein expression in all ten B-ALL samples examined, but survivin was not detectable in healthy volunteer samples. To determine whether these findings were supported by the analyses of a larger cohort of patient samples, we performed metadata analysis on an already published microarray dataset. We found that only survivin was significantly over-expressed in B-ALL patients (n = 215) compared to healthy B-cell controls (n = 12) (p = 0.013). We have shown that survivin is frequently transcribed and translated in adult B-ALL, but not in healthy donor samples, suggesting that this patient group may be a promising candidate for survivin-mediated immunotherapy. INTRODUCTION Acute lymphoblastic leukemia (ALL) is characterized by an excess of lymphoblasts of either the B- or T-lineage. If untreated, the disease progresses rapidly and can be fatal within weeks to months. Adult patients with ALL who have had an allogeneic haematopoietic stem cell transplant (allo-HSCT) have an improved overall survival (OS) rate of 27-65%, compared with 15-45% in the absence of allo-HSCT. While the improvement in survival post-allogeneic HSCT may in part be due to the use of intensive chemotherapy and radiotherapy (such as total body irradiation) as conditioning, there does appear to be an increased survival advantage following HSCT using reduced-intensity conditioning schedules in older patients and those with co-morbid risk factors. This suggests that post-transplant mechanisms, probably immunological in nature, play an important role in disease control, with graft-versus-leukemia (GvL) activity effective in the eradication of residual disease. The 'GvL effect' has been demonstrated in other haematological malignancies, particularly chronic myeloid leukemia (CML), acute myeloid leukemia (AML) and myeloma, with the identification of probable immunological targets such as minor histocompatibility antigens, tumor-specific antigens and cancer-testis antigens (CTAs). A number of therapies have been, and are being, developed to target CD19, CD20, CD22 and/or CD52 on adult B-ALL patient blasts (recently reviewed in ). One of the most promising antibody therapies is blinatumomab, which at the end of phase III clinical trials was shown to prolong survival by months in patients with relapsed or refractory disease. 
In addition, anti-CD19 chimeric antigen receptor-modified T cell (CAR-T-19) therapies have been shown to be able to induce complete remissions. Such studies demonstrate the potential for immunotherapy to treat patients with B-ALL, with novel antigens providing additional targets that can be used to stimulate immune-mediated destruction of escape variants. Our own previous studies of adult B-ALL CD8+ T cells and their recognition of known leukemia antigens/epitopes did not identify the same frequency/presence of antigen-specific T-cell populations as seen in myeloid leukemia patients at disease diagnosis. The large numbers of affected lymphoblasts in the bone marrow of adult B-ALL patients may lead to a lack of immune-competent B and T cells in the periphery and may explain the general lack of tumor antigens identified to date. We examined the expression of a panel of cancer antigens in adult B-ALL to determine whether any would be promising targets for the immunotherapy of this difficult-to-treat disease. Reverse transcription-polymerase chain reaction (RT-PCR) analysis of cell lines, patient samples and healthy donors We examined the expression of twelve antigens (BCP-20, G250, HAGE, END, NY-ESO-1, PASD1, p68 RNA helicase, SSX2, SSX2IP, survivin, tyrosinase and WT1), identified as promising through a review of the literature, in human cancer cell lines to demonstrate consistency with previously published data and to optimise our assays (Supplementary Table 1). These results provided positive and negative controls for the expression of each antigen (Table 2A). We then examined the expression of the same twelve antigens in thirteen samples from eleven adult B-ALL patients (including twelve samples taken from patients prior to the start of any treatment) and eight healthy volunteers (Table 1). No suitable sample was available from ALL003 for RT-PCR analysis. RT-PCR analysis showed that two antigens were expressed in B-ALL patient samples (Table 2B) but not healthy donor samples: survivin (7/11 B-ALL patients) and WT1 (6/11 B-ALL patients), with no detectable antigen expression in the eight healthy volunteer samples (Figure 1; Table 2B). All other genes studied (BCP-20, END, G250, HAGE, NY-ESO-1, p68 RNA helicase, SSX2IP and tyrosinase) were detectable in patient samples and healthy volunteers, except PAS domain-containing protein 1 (PASD1) and SSX2, which were not detected in either. Due to limited sample availability, we chose six of the antigens, either differentially expressed in patients compared with normal controls (survivin, WT1 and END) or of particular interest to our group (PASD1, SSX2, SSX2IP), for further investigation by qPCR. qPCR analysis of antigen expression in B-ALL and healthy donor samples A two-way ANOVA test was used to determine whether there was a statistical difference in transcript expression of END, PASD1, SSX2, SSX2IP, survivin and WT1, as determined by qPCR, in B-ALL patients (ALL001-8, 11 and 14) compared with healthy volunteers. Survivin had significantly higher expression in seven of the ten B-ALL patients analysed, compared to healthy controls (p = 0.015) (Figure 2A); its median CT value in patients (7.19) was much lower than that in normal controls (12.81). WT1 was expressed by three out of ten adult B-ALL patients (Figure 2B); however, the median CT values of B-ALL patients and normal controls, 12.88 and 12.81 respectively, were almost equal. 
Therefore, there was no significant difference detected by the two-way ANOVA test between these two groups. Expression of PASD1 and synovial sarcoma, X breakpoint 2 (SSX2) was not detected in any of the adult B-ALL patients or healthy volunteers (Figures 2C and 2D). Nine out of ten patients expressed SSX2IP (Figure 2E), while seven out of ten expressed END (Figure 2F). Although the expression of these genes was high, their transcripts were also found in three of five healthy volunteers. Immunolabelling of antigen expression in B-ALL using immunocytochemistry The cell lines K562, OCI-LY3 and MDA-MB-231 were used to demonstrate the effectiveness of immunolabelling to detect the expression of END, PASD1, SSX2, SSX2IP, survivin, and WT1 (Table 3). Four out of five antigens had a cytoplasmic and nuclear localisation, while WT1 was only found in the cytoplasm of the K562 cells (Figure 3). The immunoreactivity score of both survivin and WT1 was moderate, while SSX2 and SSX2IP showed weak labelling in K562 (Table 3). END was not expressed in the K562 cell line. The OCI-LY3 cell line was used as an extra control for the expression of PASD1 and showed high levels of PASD1 in the cytoplasm and near the cell membrane. END was moderately expressed on the surface of MDA-MB-231 cells grown on coverslips, confirming the findings of previous studies. Healthy volunteers expressed SSX2 at high levels, two of six had detectable SSX2IP expression at moderate levels, and WT1 at moderate levels. The ICC experiments were performed twice, with controls, due to limited samples being available, but the results were reproducible. Gene expression analysis Bioinformatic analysis of the publicly available gene expression data set GSE38403 indicated that survivin (p = 0.013) was significantly over-expressed in the B-ALL patient cohort (n = 215) compared to healthy B-cell controls (n = 12). Furthermore, of the twelve candidate genes investigated, only p68 RNA helicase, SSX2IP, survivin and WT1 showed significant differences in expression when compared across individual cytogenetic groups (Table 5). Elevated END or survivin expression was significantly associated with the t(9;21) translocation, while p68 RNA helicase, SSX2IP, survivin and WT1 expression were associated with different 11q23/MLL abnormalities. We did examine whether there was a correlation between OS and event-free survival (EFS) and the expression of each gene, but none achieved significance; the closest was SSX2-interacting protein (SSX2IP), with an association with OS at a p value of 0.078. DISCUSSION Most patients with adult B-ALL achieve first remission with conventional treatment; however, many relapse, with high associated mortality. There is an acute need for therapies that can remove minimal residual disease and delay, if not prevent, relapse for these patients. To this end, we have investigated twelve known leukemia antigens for their expression in adult B-ALL. [Figure and table legends: + indicates immunolabelling of the test protein; actin acted as the positive control; Ms and Rb isotype control antibodies (or omission of the primary antibody) acted as negative controls for non-specific binding; all images were taken at 400X magnification and are representative of at least two independent experiments. Genes that were not expressed were assigned a CT value of 40;
the higher the CT value, the lower the antigen expression, and all antigen CT values were lower than that of the reference gene GAPDH. Streaked dots, representing patient sample ALL004, were outliers that do not represent antigen expression. P-values were determined using a two-way ANOVA test; ns, not significant. Immunoreactivity scores: 0 = negative; 1-29 = weak; 30-143 = moderate; 144-228 = high; >228 = very high.] Survivin and WT1 were the only antigens detected in patients but not healthy volunteers by RT-PCR and ICC, but only survivin showed a statistically significant elevation in expression between the adult B-ALL cohort and the healthy volunteer group by qPCR. This adds to a growing body of studies that have shown an association between survivin expression and ALL. Survivin is upregulated in a large number of solid tumors and haematological malignancies, including AML and ALL. It acts as a dual regulator of both apoptosis and cell cycle progression, and is a member of the inhibitors of apoptosis proteins (IAP) family. Survivin plays a role in the cells' escape from apoptotic pathways and is considered an important mechanism facilitating leukaemogenesis and the resistance of tumors to chemotherapy. Survivin overexpression has been shown to initiate haematologic malignancies in transgenic mice, while its synthesis and degradation are controlled in a cell cycle-dependent course, with transcription increasing during G1 and peaking in G2-M; this supports its role in the regulation of the mitotic spindle checkpoint. A study by Esh and colleagues showed that knockdown of survivin mRNA via short-hairpin RNA or a locked antisense oligonucleotide reduced its gene expression, increased apoptosis in leukemia cell lines and accumulated the cells in the sub-G1 phase of the cell cycle. In addition, silencing of the survivin gene in an ALL xenograft animal model improved chemotherapeutic responses, while overexpression of the survivin gene has been associated with poor prognosis in paediatric ALL patients. Mori and colleagues examined survivin expression in ALL patients using RT-PCR and found survivin expression in 11 of 16 ALL patients, but not in normal bone marrow (BM) cells. Yang et al. also identified an elevation in vascular endothelial growth factor (VEGF) levels that coincided with survivin levels in 40 ALL patients by RT-PCR and western blotting. Due to its limited expression in normal non-foetal tissues, survivin is a highly attractive immunotherapeutic target. When analysing five HLA-A2-positive adult ALL patient samples on a pMHC array, we did not detect any survivin-specific T cells bound to either of the HLA-A2-restricted survivin epitopes examined (survivin 5-11 or 96-104), even at a detection sensitivity of at least 0.02% of the CD8+ population. However, other groups have found that survivin-specific T cell responses can be expanded in a number of pre-clinical and clinical settings; most recently, a Phase II multi-epitope vaccine of five survivin peptides with adjuvant resulted in the expansion of survivin-specific T cell responses in patients with solid cancers. 
Although survivin was listed as one of the top 15 prioritised antigens by virtue of its therapeutic function, immunogenicity, specificity and oncogenicity, among other features, it has been shown to be downregulated in both chronic myeloid leukemia (CML)- and AML-derived leukaemic stem cells (LSCs). Although this negatively impacts the value of survivin for the immunotherapy of myeloid leukemia, its epitopes have been characterised for immunotherapy (recently reviewed in ) and are likely to be useful in the generation of antitumor responses that may lead to epitope spreading and, at the least, could prolong the remission period and enable the administration of LSC-targeting treatments. Our previous serological analysis of recombinant cDNA expression libraries (SEREX) using AML patient sera identified SSX2IP and PASD1, as well as SSX2IP's interacting partner, SSX2, as potential targets for the immunotherapy of myeloid leukemia. Both PASD1 and SSX2 are CTAs that are expressed in cancer cells and immunologically protected sites. This restricted expression makes CTAs attractive targets for immunotherapy, as targeting them should not lead to catastrophic auto-immune responses against healthy tissue. However, PASD1 and SSX2 transcripts were not detected in any of the adult B-ALL patients or healthy controls, although all of the adult B-ALL patient samples examined showed positive immunolabelling for SSX2 at moderate to high levels. This indicates that SSX2 protein expression may be worthy of further investigation in B-ALL patient samples, and our results suggest a lack of correlation between detectable SSX2 transcription and the presence of SSX2 protein. [Table 5 legend: only those antigens with a significant association between their expression and the clinical features analysed are shown; NA, no abnormality; NS, not significant; single cytogenetic abnormalities are flagged; **, highly significant; ***, very highly significant.] In contrast to our own previous studies, SSX2IP transcripts were found in patient samples and healthy control PB by RT-PCR and qPCR, suggesting improved sensitivity in our detection of SSX2IP transcripts over the last decade, as the RT-PCR primers and reagents remained unchanged. Surprisingly, NY-ESO-1 transcripts were also found in B-ALL patient samples and normal donor PB samples by RT-PCR. Both findings require further investigation but may reflect the rapid proliferation and enhanced turnover of white blood cells compared with many other healthy cell types, or a technical error on our part, although we used multiple controls to ensure we did not have any contaminating gDNA in our cDNA samples and, in addition, used previously published primers with our own validated techniques. BCR/ABL, a hallmark of CML, has also been found in 10-30% of tested healthy adults, increasing in prevalence with donor age. The Philadelphia chromosome translocation product has been shown to be essential for the development of CML, yet it remains present in healthy donors without causing disease, suggesting an as-yet-undefined requirement for additional events to achieve full transformation to the malignant phenotype. Ismail et al. suggested this occurrence was due to an accumulation of DNA damage with age, and it may be a feature of the rapidly proliferating, high-turnover "normal" white blood cells. Another of the potential immunotherapeutic targets for B-ALL that we investigated was the LAA WT1. Inoue et al. 
demonstrated consistently increased WT1 expression levels in most myeloid and lymphoid acute leukemias via RT-PCR. These results were confirmed by Cilloni et al., who detected WT1 overexpression in all 48 ALL samples at diagnosis (BM and PB) using qPCR. They showed that WT1 expression was detectable, but that WT1 transcript levels in normal BM and PB were extremely low and often below the qPCR detection limit. Therefore, WT1 is a promising marker to discriminate between normal and leukaemic haematopoiesis, and is effective in establishing the presence, persistence and/or reappearance of leukaemic blasts for diagnosis or detection of minimal residual disease (MRD). Our qPCR results demonstrated WT1 mRNA expression in three out of the ten adult B-ALL patients, but in none of the healthy volunteers. In summary, our study has demonstrated the value of pursuing survivin as a target for the immunotherapy of adult B-ALL, through our demonstration of its transcription and translation as early as disease diagnosis. This is a rare disease with high associated mortality. The fact that most patients can achieve first remission provides a time point during which residual tumor cells may be targeted by immunotherapy. The reduction or, ideally, removal of MRD could provide an opportunity to delay, if not prevent, relapse, benefitting patient survival. A number of clinical trials are underway that target survivin, including those using immunotherapy protocols or survivin inhibitors. Our study has identified a patient group who would likely benefit from their application, and this warrants further investigation. MATERIALS AND METHODS Cell lines and patient samples Human cancer cell lines were used to measure the expression of the antigens and optimise assays. All were obtained from ATCC (Sigma-Aldrich Co. Ltd) and grown in RPMI 1640 or DMEM media (Sigma-Aldrich Company Ltd., Dorset, U.K.) containing 10% foetal bovine serum (FBS) and 1% penicillin and streptomycin (both Thermo Fisher Scientific, Leicestershire, UK), in a humidified incubator at 37°C with 5% CO2. K562 was positive for the expression of most antigens examined (Supplementary Table 2), as described previously. Sixteen samples were collected from 14 adult B-ALL patients at various treatment time points, but predominantly at diagnosis and pre-treatment (Table 1A), from the Departments of Haematology at University Hospital Southampton NHS FT, Portsmouth Hospitals NHS Trust and the Royal Devon and Exeter Foundation Trust, following informed consent and local ethical approval (REC 07/H0606/88). Leukaemic blasts and mononuclear cells were isolated from PB and/or BM in EDTA. White blood cells were also isolated from age- and sex-matched normal donor PB following informed consent and local ethical approval (LREC 228/02/T). Identification of antigens for study in B-ALL Due to a lack of known antigens that can act as targets for the immunotherapy of B-ALL, we identified a list of antigens of potential interest for further study. We had previously identified PASD1, SSX2IP and HAGE expression in presentation AML and examined presentation acute leukemia patients for T cells that recognized epitopes within G250, NY-ESO-1, tyrosinase, p68 RNA helicase, WT1 and survivin. We examined SSX2 because of its known interaction with SSX2IP, BCP-20 based on its expression in solid tumors, and END based on its detection in paediatric leukemia and association with patient outcome. 
RT-PCR analysis of antigen expression in patient and normal donor samples To evaluate the expression of the most promising antigens in normal and malignant tissues, we isolated RNA from BM and PB samples using the QIAGEN RNeasy kit (QIAGEN Ltd.). mRNA was DNase I-treated (Roche Products Ltd, Herts, U.K.), cleaned using an RNeasy kit (Qiagen), checked on a 1% agarose-TBE gel and quantified using a spectrophotometer. We prepared cDNA using the MBI Fermentas RevertAid First Strand cDNA synthesis kit (MBI Fermentas Ltd, Helena BioSciences Ltd, Sunderland, U.K.), using the random hexamer primers. Sequencing of PCR products After gel electrophoresis of the PCR products and image capture, bands were excised from the agarose gel and placed in a sterile 1.5 ml Eppendorf tube. These products were extracted from the gel bands using the PCR gel extraction kit (Qiagen). Where available, products from three independent PCR reactions on the same template were sent for Sanger sequencing to the DNA sequencing facility at the University of Cambridge. We analysed each sequence using Finch TV software and BLASTN to compare similarity between the PCR products and their target cDNA sequences. qPCR analysis qPCR was performed using SYBR Green technology with the QuantiTect Primer Assays and QuantiNova SYBR Green PCR kit (all Qiagen) to investigate the relative expression of six TAAs in ten adult B-ALL samples (ALL001-8, 11 and 14), as well as GAPDH as a control for sample loading and the quality of the cDNA. Each primer was tested on at least one human cancer cell line known to express the antigen of interest (Supplementary Table 2), based on previously published studies. To control for contamination within the qPCR reagents, a no-cDNA control was included on every qPCR plate, whereby cDNA was replaced by RNase-free H2O. In addition, each sample was plated in triplicate on the 96-well qPCR plate (Applied Biosystems, USA) to identify any outliers in the dataset. The reaction volumes were 10 µL 2X QuantiNova SYBR Green PCR master mix, 0.1 µL ROX reference dye, 2 µL primer assay and 6.9 µL RNase-free H2O, making a total volume of 19 µL added to each well of the 96-well plate. The thermocycler (StepOne Plus Real-Time PCR system, Applied Biosystems) ran a heating step for 2 min at 95°C, followed by 40 cycles, each set to denature for 5 s at 95°C and to anneal and extend primers in a combined step for 10 s at 60°C. This was immediately followed by a melt curve stage of 15 s at 95°C, 1 min at 60°C and 15 s at 95°C, to verify the specificity of the amplification, e.g. no non-specific primer-dimer formation. Data were compared using StepOne software v2.0 (Applied Biosystems) and the comparative CT method. When comparing antigen expression in B-ALL to healthy controls, the results were normalised to the GAPDH reference gene (ΔCT = CT antigen - CT GAPDH); a worked sketch of this calculation is given after the Methods, below. All qPCR data were analysed with a two-way ANOVA test for pairwise comparisons, using Partek Genomic Software (Partek Inc., USA). Immunocytochemistry Leukocytes were isolated following a 30 min incubation of PB and/or BM samples with red cell lysis buffer (155 mM NH4Cl, 10 mM KHCO3, 0.1 mM EDTA), after which leukocytes were pelleted by centrifugation for 10 min at 800 × g. Leukocytes or cells from lines were resuspended in PBS at 5 × 10^6/ml, with 10 µl of cells spotted at each of two sites on glass microscope slides. Slides were double-wrapped in saran wrap and stored at −20°C until required for use. 
Immunocytochemistry was performed as described previously, using antibodies as detailed in Supplementary Table 1. Due to the cellular localisation of END on the cell surface, the MDA-MB-231 cell line was grown to 50-70% confluence on glass coverslips. The culture medium from each well was then aspirated and the coverslips were rinsed in PBS. The coverslips were air-dried for 4-6 h, wrapped in saran wrap and stored in a −20°C freezer. Actin was used as a positive control for the successful performance of ICC, while isotype, no-primary and no-secondary antibody immunolabelling acted as negative controls, used to detect non-specific staining. Lillie-Mayer haematoxylin was used as a counterstain. Gene expression analysis To determine the relative expression of the antigens of interest in a larger cohort of adult B-ALL samples and healthy controls, we performed metadata analysis on publicly available microarray expression data (GSE38403), comprising 215 adult B-cell ALL and 12 normal pre-B samples. The CEL files were downloaded and imported into the Partek Genomic Suite, normalised using RMA, and subjected to ANOVA analysis, which was filtered on significance for the generation of the gene lists. Author contributions LFB: designed and performed experiments, analysed data, processed samples, prepared figures, wrote paper; PS: processed samples, designed and performed experiments, analysed data, prepared figures; SEB, LO, HW, CHL and ES: collected and processed samples; AHB: provided essential reagents, contributed to writing of paper; KIM: analysed data, prepared figures, contributed to writing of paper; KHO: designed experiments, collected samples, contributed to writing of paper; BG: collected and processed samples, designed and performed experiments, analysed data, wrote paper.
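Returning to the comparative CT method described in the qPCR analysis above, here is the promised minimal sketch of the normalisation. The CT values are hypothetical, chosen only to echo the median ΔCT values reported in the Results (7.19 for patients vs 12.81 for controls), and the 2^-ΔΔCT fold-change step is the textbook convention, my addition rather than a calculation the paper reports.

# Minimal sketch of the comparative CT normalisation described in the Methods.
def delta_ct(ct_target, ct_gapdh):
    # Normalise the target gene to the GAPDH reference: dCT = CT(target) - CT(GAPDH)
    return ct_target - ct_gapdh

def fold_change(dct_patient, dct_control):
    # Textbook 2^-ddCT convention: expression of patient relative to control.
    return 2 ** -(dct_patient - dct_control)

# Hypothetical triplicate-mean CT values for survivin and GAPDH.
patient_dct = delta_ct(ct_target=26.0, ct_gapdh=18.8)   # ~7.2, cf. reported median 7.19
control_dct = delta_ct(ct_target=31.6, ct_gapdh=18.8)   # ~12.8, cf. reported median 12.81
print(fold_change(patient_dct, control_dct))            # ~48-fold higher in the patient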
package lee.code.code_16__3Sum_Closest;

import java.util.*;
import lee.util.*;

/**
 * 16. 3Sum Closest
 *
 * difficulty: Medium
 * @see https://leetcode.com/problems/3sum-closest/description/
 * @see description_16.md
 * @Similar Topics
 * -->Two Pointers https://leetcode.com//tag/two-pointers
 * -->Array https://leetcode.com//tag/array
 * @Similar Problems
 * -->3Sum Smaller https://leetcode.com//problems/3sum-smaller
 * -->3Sum https://leetcode.com//problems/3sum
 * Run solution from Unit Test:
 * @see lee.codetest.code_16__3Sum_Closest.CodeTest
 * Run solution from Main Judge Class:
 * @see lee.code.code_16__3Sum_Closest.C16_MainClass
 */
class Solution {
    public int threeSumClosest(int[] nums, int target) {
        // Sort once, then for each anchor element use two pointers to close
        // in on the remaining pair, tracking the closest sum seen so far.
        Arrays.sort(nums);
        int closest = nums[0] + nums[1] + nums[2];
        for (int i = 0; i < nums.length - 2; i++) {
            int lo = i + 1, hi = nums.length - 1;
            while (lo < hi) {
                int sum = nums[i] + nums[lo] + nums[hi];
                if (Math.abs(sum - target) < Math.abs(closest - target)) {
                    closest = sum;
                }
                if (sum < target) {
                    lo++;
                } else if (sum > target) {
                    hi--;
                } else {
                    return sum; // an exact match cannot be beaten
                }
            }
        }
        return closest;
    }
}

class Main1 {
    public static void main(String[] args) {
        // Example: nums = [-1, 2, 1, -4], target = 1 -> closest sum is 2.
        System.out.println(new Solution().threeSumClosest(new int[]{-1, 2, 1, -4}, 1));
    }
}
// #pragma mark - Device Manager module API


static status_t
rescan_node(device_node* node)
{
	// Rescanning is not supported by this driver, so report failure.
	return B_ERROR;
}
#include <stdio.h>

// Check whether n reads the same forwards and backwards by repeatedly
// comparing its leading and trailing digits.
int isPalindrome(int n)
{
    // Find the appropriate divisor to extract the leading digit.
    int divisor = 1;
    while (n / divisor >= 10)
        divisor *= 10;

    while (n != 0) {
        int leading = n / divisor;
        int trailing = n % 10;

        // If the first and last digits are not the same, it is not a palindrome.
        if (leading != trailing)
            return 0;

        // Remove the leading and trailing digits from the number.
        n = (n % divisor) / 10;

        // Reduce the divisor by a factor of 100, as two digits were dropped.
        divisor = divisor / 100;
    }
    return 1;
}

int main()
{
    int N;
    scanf("%d", &N); // no trailing space: a space here would make scanf block waiting for extra input

    // Strip trailing zeros first; they have no leading-digit counterpart.
    while (N % 10 == 0 && N != 0)
        N = N / 10;

    if (isPalindrome(N))
        printf("YES\n");
    else
        printf("NO\n");
    return 0;
}
#include <stdio.h>
#include <string.h>
#include <sys/time.h>
#include "sonar.h"
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "freertos/queue.h"
#include "freertos/semphr.h"
#include "esp_err.h"
#include "esp_log.h"
#include "driver/rmt.h"
#include "driver/periph_ctrl.h"
#include "soc/rmt_reg.h"
#include "driver/gpio.h"

#define RMT_CLK_DIV 100                                  /* RMT counter clock divider */
#define RMT_TX_CARRIER_EN 0                              /* Disable carrier */
#define rmt_item32_tIMEOUT_US 9500                       /*!< RMT receiver timeout value (us) */
#define RMT_TICK_10_US (80000000 / RMT_CLK_DIV / 100000) /* RMT counter value for 10 us (source clock is the APB clock) */
#define ITEM_DURATION(d) ((d & 0x7fff) * 10 / RMT_TICK_10_US) /* RMT item duration in microseconds */

const gpio_num_t RMT_TX_GPIO_NUM = (gpio_num_t)32; /* trigger pin */
const gpio_num_t RMT_RX_GPIO_NUM = (gpio_num_t)33; /* echo pin */
const rmt_channel_t RMT_TX_CHANNEL = (rmt_channel_t)1;
const rmt_channel_t RMT_RX_CHANNEL = (rmt_channel_t)0;
const rmt_carrier_level_t RMT_CARRIER_LEVEL = (rmt_carrier_level_t)1;
const rmt_idle_level_t RMT_IDLE_LEVEL = (rmt_idle_level_t)0;
const rmt_mode_t RMT_MODE = (rmt_mode_t)0;

size_t rx_size = 0;
RingbufHandle_t rb = NULL;
rmt_item32_t item;

/* Configure the RMT transmitter that generates the trigger pulse. */
static void HCSR04_tx_init()
{
    rmt_config_t rmt_tx;
    rmt_tx.channel = RMT_TX_CHANNEL;
    rmt_tx.gpio_num = RMT_TX_GPIO_NUM;
    rmt_tx.mem_block_num = 1;
    rmt_tx.clk_div = RMT_CLK_DIV;
    rmt_tx.tx_config.loop_en = false;
    rmt_tx.tx_config.carrier_duty_percent = 50;
    rmt_tx.tx_config.carrier_freq_hz = 3000;
    rmt_tx.tx_config.carrier_level = RMT_CARRIER_LEVEL;
    rmt_tx.tx_config.carrier_en = RMT_TX_CARRIER_EN;
    rmt_tx.tx_config.idle_level = RMT_IDLE_LEVEL;
    rmt_tx.tx_config.idle_output_en = true;
    rmt_tx.rmt_mode = RMT_MODE;
    rmt_config(&rmt_tx);
    rmt_driver_install(rmt_tx.channel, 0, 0);
}

/* Configure the RMT receiver that times the echo pulse. */
static void HCSR04_rx_init()
{
    rmt_config_t rmt_rx;
    rmt_rx.channel = RMT_RX_CHANNEL;
    rmt_rx.gpio_num = RMT_RX_GPIO_NUM;
    rmt_rx.clk_div = RMT_CLK_DIV;
    rmt_rx.mem_block_num = 1;
    rmt_rx.rmt_mode = RMT_MODE_RX;
    rmt_rx.rx_config.filter_en = true;
    rmt_rx.rx_config.filter_ticks_thresh = 100;
    rmt_rx.rx_config.idle_threshold = rmt_item32_tIMEOUT_US / 10 * (RMT_TICK_10_US);
    rmt_config(&rmt_rx);
    rmt_driver_install(rmt_rx.channel, 1000, 0);
}

void sonarInit()
{
    HCSR04_tx_init();
    HCSR04_rx_init();

    /* One 10 us high pulse triggers the sensor; for a single-shot item the
       trailing low duration does not matter. */
    item.level0 = 1;
    item.duration0 = RMT_TICK_10_US;
    item.level1 = 0;
    item.duration1 = RMT_TICK_10_US;

    rmt_get_ringbuf_handle(RMT_RX_CHANNEL, &rb);
    rmt_rx_start(RMT_RX_CHANNEL, 1);
}

double sonarGetDistanceCm()
{
    /* Send the trigger pulse and wait for it to finish. */
    rmt_write_items(RMT_TX_CHANNEL, &item, 1, true);
    rmt_wait_tx_done(RMT_TX_CHANNEL, portMAX_DELAY);

    /* Wait for the echo pulse captured by the receiver; bail out on timeout
       instead of dereferencing a NULL pointer. */
    rmt_item32_t *echo = (rmt_item32_t *)xRingbufferReceive(rb, &rx_size, 1000);
    if (echo == NULL) {
        return -1.0; /* no echo received: out-of-band error value */
    }

    /* Echo high time (us) times the speed of sound, halved for the round trip. */
    double distance = 340.29 * ITEM_DURATION(echo->duration0) / (1000 * 1000 * 2); /* meters */
    vRingbufferReturnItem(rb, (void *)echo);
    return (distance * 100); /* centimeters */
}
def _rank_terms(self, terms, **kwargs):
    """Rank the candidate terms by their probability under the topic language model.

    Expects a 'topic_language_model' keyword argument; every term is scored,
    so the full list comes back reordered from most to least probable.
    """
    topic_language_model = kwargs.get('topic_language_model', None)
    # Score each candidate term against the smoothed topic model...
    ranker = QueryRanker(smoothed_language_model=topic_language_model)
    ranker.calculate_query_list_probabilities(terms)
    # ...and return all of them, ranked (asking for the top len(terms) keeps everything).
    return ranker.get_top_queries(len(terms))
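A hypothetical call site, for orientation only: 'generator' and 'topic_lm' are illustrative names and not part of the codebase above, which only implies that the topic model must be whatever QueryRanker accepts as a smoothed language model.

# Illustrative usage; 'generator' owns _rank_terms and 'topic_lm' is a
# smoothed topic language model built elsewhere (both names hypothetical).
ranked = generator._rank_terms(['probabilistic', 'retrieval', 'model'],
                               topic_language_model=topic_lm)
print(ranked)  # the same terms, ordered from most to least probable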
/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
 * Purpose: Given a set of 3-D coordinates in x,y,z order, traverse
 *          through all frames of all specified files, and return a
 *          matrix containing all of the real values at the given voxel
 *          coordinate.
 *
 *          The dimensions of the returned matrix is:
 *          (number_of_files * number_of_frames)
 *
 *          Note that passing a single 3-D volume will return a 1x1 matrix.
 */
SEXP read_voxel_from_files(SEXP filenames_, SEXP voxCoords_, SEXP noFiles_, SEXP noFrames_) {

	mihandle_t minc_volume;
	int n_dimensions;
	int output_ndx;
	hsize_t hSlab_start[MI2_MAX_VAR_DIMS];
	hsize_t hSlab_count[MI2_MAX_VAR_DIMS];
	const char *dimorder3d[] = { "zspace","yspace","xspace" };
	const char *dimorder4d[] = { "time", "zspace","yspace","xspace" };

	if ( R_DEBUG_rmincIO ) Rprintf("read_voxel_from_files: start ...\n");

	std::vector<std::string> filenames = Rcpp::as<std::vector<std::string> >(filenames_);
	Rcpp::IntegerVector voxCoords(voxCoords_);
	int no_files = Rcpp::as<int>(noFiles_);
	int no_frames = Rcpp::as<int>(noFrames_);

	hSlab_start[0] = 0;
	hSlab_start[1] = (hsize_t) voxCoords[0];
	hSlab_start[2] = (hsize_t) voxCoords[1];
	hSlab_start[3] = (hsize_t) voxCoords[2];

	int no_rows = no_files;
	int no_cols = (no_frames == 0) ? 1 : no_frames;
	int outBuf_num_entries = no_rows * no_cols;

	if ( R_DEBUG_rmincIO )
		Rprintf("DEBUG: read_voxel_from_files: Attempting buffer allocation: %d entries [%d bytes]\n",
				outBuf_num_entries, outBuf_num_entries * sizeof (double));
	std::vector<double> outBuf_read_buffer;
	try {
		outBuf_read_buffer.resize(outBuf_num_entries);
	} catch (std::bad_alloc &e) {
		Rprintf("Exception caught in read_voxel_from_files: %s\n", e.what());
		Rprintf("Error allocating aggregate output read buffer: %d %d-byte entries\n",
				outBuf_num_entries, sizeof (double));
		Rcpp::NumericVector error_vector(0);
		return(error_vector);
	}

	int hSlab_num_entries = no_cols;
	if ( R_DEBUG_rmincIO )
		Rprintf("DEBUG: read_voxel_from_files: Attempting buffer allocation: %d entries [%d bytes]\n",
				hSlab_num_entries, hSlab_num_entries * sizeof (double));
	std::vector<double> hSlab_read_buffer;
	try {
		hSlab_read_buffer.resize(hSlab_num_entries);
	} catch (std::bad_alloc &e) {
		Rprintf("Exception caught in read_voxel_from_files: %s\n", e.what());
		Rprintf("Error allocating read buffer: %d %d-byte entries\n",
				hSlab_num_entries, sizeof (double));
		Rcpp::NumericVector error_vector(0);
		return(error_vector);
	}

	int result;
	for( int i=0; i < no_files; ++i ) {

		if ( R_DEBUG_rmincIO )
			Rprintf("Debug: read_voxel_from_files: Processing file %s ... \n", filenames[i].c_str());

		result = miopen_volume(filenames[i].c_str(), MI2_OPEN_READ, &minc_volume);
		if (result != MI_NOERROR)
			Rf_error("Error opening input file: %s.\n", filenames[i].c_str());

		// set the apparent order to something conventional ...
		// ... first need to get the number of dimensions
		if ( R_DEBUG_rmincIO ) Rprintf("Debug: read_voxel_from_files: Setting the apparent order ... ");
		if ( miget_volume_dimension_count(minc_volume, MI_DIMCLASS_ANY, MI_DIMATTR_ALL, &n_dimensions) != MI_NOERROR )
			Rf_error("\nError returned from miget_volume_dimension_count.\n");
		if ( R_DEBUG_rmincIO ) Rprintf("%d dimensions detected ... \n", n_dimensions);

		if ( n_dimensions == 3 ) {
			result = miset_apparent_dimension_order_by_name(minc_volume, 3, const_cast<char **>(dimorder3d));
		} else if ( n_dimensions == 4 ) {
			result = miset_apparent_dimension_order_by_name(minc_volume, 4, const_cast<char **>(dimorder4d));
		} else {
			Rf_error("Error file %s has %d dimensions and we can only deal with 3 or 4.\n",
					filenames[i].c_str(), n_dimensions);
		}
		if ( result != MI_NOERROR )
			Rf_error("Error returned from miset_apparent_dimension_order_by_name while setting apparent order for %d dimensions.\n", n_dimensions);

		// read the hyperslab
		if ( no_frames > 0 ) {
			// read a hyperslab across all frames (i.e. over time)
			hSlab_count[0] = no_frames;
			hSlab_count[1] = hSlab_count[2] = hSlab_count[3] = 1;
			if ( R_DEBUG_rmincIO )
				Rprintf("hSlab_count [0..3] = %d, %d, %d, %d\n",
						hSlab_count[0], hSlab_count[1], hSlab_count[2], hSlab_count[3]);

			result = miget_real_value_hyperslab(minc_volume, MI_TYPE_DOUBLE,
												hSlab_start, hSlab_count, &hSlab_read_buffer[0]);
			if ( result != MI_NOERROR )
				Rf_error("Error in miget_real_value_hyperslab: %s.\n", filenames[i].c_str());

			// move values from hyper-slab buffer to output buffer ...
			// ... we're doing this because R expects data in column-major order,
			// ... and we're writing the frame values as rows
			for ( int ndx=0; ndx < no_frames; ++ndx) {
				output_ndx = (no_files * ndx) + i;
				outBuf_read_buffer[output_ndx] = hSlab_read_buffer[ndx];
			}

		} else {
			// no frames (i.e. a 3-d volume)
			if ( R_DEBUG_rmincIO ) {
				Rprintf("Debug: About to read value in 3-D volume\n");
				Rprintf("hSlab_start[1]: %lu\n", hSlab_start[1]);
				Rprintf("hSlab_start[2]: %lu\n", hSlab_start[2]);
				Rprintf("hSlab_start[3]: %lu\n", hSlab_start[3]);
			}
			result = miget_real_value(minc_volume, &hSlab_start[1], 3, &hSlab_read_buffer[0]);
			if ( result != MI_NOERROR ) {
				Rf_error("Error in miget_real_value. File: %s.\n", filenames[i].c_str());
			}
			outBuf_read_buffer[i] = hSlab_read_buffer[0];
		}

		// done with this volume, so close it
		miclose_volume(minc_volume);
	}

	// clean-up and then return vector
	if ( R_DEBUG_rmincIO ) Rprintf("read_voxel_from_files: returning ...\n");
	R_CheckStack();
	return(Rcpp::wrap(outBuf_read_buffer));
}
Nipsey Hussle's suspected killer, Eric Holder, is being held in solitary confinement, TMZ reports today (April 3). According to the celebrity news site, police fear for the 29-year-old Los Angeles man's life, so that's why he's in solitary, a place he will reportedly be for a long time. On an additional note, Holder has reportedly just had his bail set at $7,040,000. Nipsey Hussle was shot and killed on Sunday (March 31), and police quickly identified Holder as the primary suspect. Speaking at a press conference yesterday, Los Angeles Police Department Chief Michel Moore explained the LAPD's belief that Nipsey's death was the result of a personal dispute between Holder and Nipsey rather than a gang-related matter. "At this point of our investigation based on witness statements and the background of those that we've identified, we believe this to be a dispute between Mr. Hussle and Mr. Holder," Moore said at the conference. "I'm not going to go over the conversation, but it appears to be a personal matter between the two." Yesterday (April 2), Holder was arrested for allegedly shooting Hussle.
// ReadPacketMetas reads a pcap file and calls callback for each packet meta that was successfully read.
func ReadPacketMetas(name string, callback func(*PacketMeta)) {
	ReadPackets(name, func(p gopacket.Packet) {
		md, err := NewPacketMeta(p)
		if err == nil {
			callback(md)
		}
	})
}
The La Jolla resident said each interview celebrates the personal and professional accomplishments of her guests, "and that means what they did and how they did it; it's getting into the body of the person and trying to show as much of them as you can to other people," she said. Her inaugural show featured Stanford Penner, UCSD Professor Emeritus in the Mechanical and Aerospace Engineering Department. Other guests have included Neal Ash, Chair of the USO at Lindbergh Field, and USO San Diego Board Chair Charlotte Jacobs. "To my knowledge, I did not know of anybody who was really concentrating on seniors, and we're a big group now and we have a lot to offer," she said. Case in point: her interview with 92-year-old Penner. "Dr. Penner was going to Vienna every other month to represent the United States but no one really knows that much about him," she said, adding she's been trying to book Ann Romney, author and wife of former presidential candidate Mitt Romney, and diet guru and author Jenny Craig. The site features podcasts on the topics of business, community, lifestyle, military, politics, sports and technology. Millard, who said she is the least tech-savvy of any of her friends, doesn't even have an e-mail address, but enjoys other new technologies. She said someone she plays bridge with uses Google all the time, and that many of her friends have iPads. "The iPad is truly the thing of the future," she said. She also loves her cell phone. For her message notification, she has a whistling sound, which went off during her interview with La Jolla Light. "I'll tell you what, when I first got the phone and heard that sound I thought, 'at my age, I can't believe it!'" thinking it was someone whistling at her. Now that she has some technology figured out, Millard can focus on her podcast, which she started because she was bored with retirement. "(Being retired) wasn't any fun, but now that I'm doing this, life is fun again. You have a mission to get up in the morning, you have to be places, it's like being young again," she said.
/** * Licensed under the GNU LESSER GENERAL PUBLIC LICENSE, version 2.1, dated February 1999. * * This program is free software; you can redistribute it and/or modify * it under the terms of the latest version of the GNU Lesser General * Public License as published by the Free Software Foundation; * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU Lesser General Public License for more details. * * You should have received a copy of the GNU Lesser General Public License * along with this program (LICENSE.txt); if not, write to the Free Software * Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA. */ package org.jamwiki.parser.jflex; import java.util.ArrayList; import java.util.List; import java.util.Map; import java.util.regex.Matcher; import java.util.regex.Pattern; import org.apache.commons.lang3.StringUtils; import org.apache.commons.lang3.math.NumberUtils; import org.jamwiki.model.Namespace; import org.jamwiki.parser.ParserException; import org.jamwiki.parser.ParserInput; import org.jamwiki.parser.WikiLink; import org.jamwiki.parser.image.ImageBorderEnum; import org.jamwiki.utils.Utilities; import org.jamwiki.utils.WikiLogger; /** * Handle image galleries of the form <gallery>...</gallery>. */ public class GalleryTag implements JFlexCustomTagItem { private static final WikiLogger logger = WikiLogger.getLogger(GalleryTag.class.getName()); // match image dimensions of the form "450px". note that "?:" is a regex non-capturing group. private static Pattern IMAGE_DIMENSION_PATTERN = Pattern.compile("([0-9]+)[ ]*(?:px)?", Pattern.CASE_INSENSITIVE); private static final int DEFAULT_IMAGES_PER_ROW = 4; private static final int DEFAULT_THUMBNAIL_MAX_DIMENSION = 120; private String tagName = "gallery"; /** * Given a list of image links to display in the gallery, generate the * gallery HTML. */ private String generateGalleryHtml(ParserInput parserInput, Map<String, String> attributes, List<String> imageLinks) throws ParserException { if (imageLinks.isEmpty()) { // empty gallery tag return ""; } int width = this.retrieveDimension(attributes, "widths", DEFAULT_THUMBNAIL_MAX_DIMENSION); int height = this.retrieveDimension(attributes, "heights", DEFAULT_THUMBNAIL_MAX_DIMENSION); int perRow = NumberUtils.toInt(Utilities.getMapValueCaseInsensitive(attributes, "perrow"), DEFAULT_IMAGES_PER_ROW); int count = 0; StringBuilder result = new StringBuilder("{| class=\"gallery\" cellspacing=\"0\" cellpadding=\"0\"\n"); String caption = Utilities.getMapValueCaseInsensitive(attributes, "caption"); if (!StringUtils.isBlank(caption)) { result.append("|+ ").append(caption.trim()).append("\n"); } result.append("|-\n"); for (String imageLink : imageLinks) { count++; if (count != 1 && count % perRow == 1) { // new row result.append("|-\n"); } result.append("| [["); result.append(imageLink).append('|'); result.append(ImageBorderEnum._GALLERY).append('|'); result.append(width).append('x').append(height).append("px"); result.append("]]\n"); } // add any blank columns that are necessary to fill out the last row if ((count % perRow) != 0) { for (int i = (perRow - (count % perRow)); i > 0; i--) { result.append("| &#160;\n"); } } result.append("|}"); return result.toString(); } /** * Process the contents of the gallery tag into a list of wiki link objects * for the images in the gallery. 
*/ private List<String> generateImageLinks(ParserInput parserInput, String content) throws ParserException { List<String> imageLinks = new ArrayList<String>(); if (!StringUtils.isBlank(content)) { String[] lines = content.split("\n"); String imageLinkText; WikiLink wikiLink; for (String line : lines) { imageLinkText = "[[" + line.trim() + "]]"; try { wikiLink = JFlexParserUtil.parseWikiLink(parserInput, null, imageLinkText); } catch (ParserException e) { // failure while parsing, the user may have entered invalid text logger.info("Invalid gallery entry " + line); continue; } if (!wikiLink.getNamespace().getId().equals(Namespace.FILE_ID)) { // not an image continue; } imageLinks.add(line); } } return imageLinks; } /** * Return the tag name. If the tag is "<custom>" then the tag name is "custom". */ public String getTagName() { return this.tagName; } /** * Set the tag name. If the tag is "<custom>" then the tag name is "custom". */ public void setTagName(String tagName) { this.tagName = tagName; } /** * Initialize the tag with any key-value params passed in from the configuration. */ public void initParams(Map<String, String> initParams) { } /** * Parse a gallery tag of the form <gallery>...</gallery> and return the * resulting wiki text output. */ public String parse(JFlexLexer lexer, Map<String, String> attributes, String content) throws ParserException { // get the tag contents as a list of wiki syntax for image thumbnails. List<String> imageLinks = this.generateImageLinks(lexer.getParserInput(), content); // generate the gallery wiki text return this.generateGalleryHtml(lexer.getParserInput(), attributes, imageLinks); } /** * Utility method for converting a dimension of the form "50px" to an integer. */ private int retrieveDimension(Map<String, String> attributes, String key, int defaultValue) { String value = Utilities.getMapValueCaseInsensitive(attributes, key); if (StringUtils.isBlank(value)) { return defaultValue; } Matcher matcher = IMAGE_DIMENSION_PATTERN.matcher(value.trim()); if (matcher.find()) { value = matcher.group(1); } return NumberUtils.toInt(value, defaultValue); } }
Mayor Rahm Emanuel on Friday made it clear he has no intention of doing away with a program that allows each of the city's 50 aldermen to decide how to spend their own $1.3 million pots of money on construction projects. His defense of the so-called menu money came the day after Inspector General Joseph Ferguson recommended axing the program and instead letting the city Department of Transportation make those decisions. Emanuel said the program allows residents, through their elected representatives, to have direct input on how the money is spent. "I actually think you want the neighborhood input and you want the aldermanic input," Emanuel said after announcing an $18.2 million rehab of the CTA's Quincy Loop station. "And I think that's a good way to go, and I don't think those ideas should be generated out of downtown. I think they actually should come from the residents that make up our many, many different neighborhoods." Emanuel, a former U.S. representative, contended that Congress "has been totally, 100 percent dysfunctional" since reformers ended earmarks, which let lawmakers individually greenlight pet projects. Left unsaid by the mayor was the political battle he'd have on his hands if he tried to ax the menu program, which was launched under then-Mayor Richard M. Daley about 20 years ago, not long after a near-majority of aldermen staged a council floor revolt demanding more money for local street repairs. Taking away the menu money would further diminish the power of aldermen, who under Emanuel and Daley before him have seen their influence erode. Emanuel moved to a grid-based garbage pickup system that lessened the role of ward superintendents, and Daley launched a 311 system that let residents seek services directly from City Hall rather than through their alderman. When Emanuel first took office at City Hall in mid-2011, there was some talk of eliminating the menu program, but he kept it after aldermen made it clear they wouldn't take that sitting down. He did set firmer guidelines about how the money could be spent. The bulk of the money is spent on street, alley, sidewalk, street light and bike path improvements, but millions of dollars a year also go to items like police surveillance cameras, basketball courts, spray pools, murals, decorative garbage cans and flower baskets. hdardick@chicagotribune.com Twitter @ReporterHal
Using explicit discourse rules to guide video enrichment Video content analysis and named entity extraction are increasingly used to automatically generate content annotations for TV programs. A potential use of these annotations is to provide an entry point to background information that users can consume on a second screen. Automatic enrichments are, however, meaningless when it is unclear to the user what they can do with them and why they would want to. We propose to contextualize the annotations with an explicit representation of discourse in the form of scene templates. Through content rules, these templates are populated with the relevant annotations. We illustrate this idea with an example video and annotations generated in the LinkedTV project.
Fine-needle aspiration of adult small-round-cell tumors studied with flow cytometry Immunophenotypic study is critical for the diagnosis of adult small-round-cell tumors (SRCTs). We describe three patients with Ewing's sarcoma/primitive neuroectodermal tumor (ES/PNET) and one patient with neuroblastoma in which flow cytometry immunophenotyping (FCI) on the fine-needle aspirate (FNA) and bone marrow aspirate (BMA) demonstrated an abnormal population of cells that were CD45− and CD16/CD56+. Four patients with a mean age of 30 years, three male and one female, clinically suspicious for a lymphoma or SRCT are described. FNA, BMA, and biopsy specimens were obtained for routine cytologic and histologic evaluation. Fresh tissue was studied by FCI. In all cases, the cytology smears showed small cells with round nuclei, slightly irregular nuclear membranes, fine chromatin, and scant cytoplasm. FCI showed CD16/56+ and CD45− neoplastic cells in all cases. In one case, 76% of these cells were CD99+. The diagnoses of ES/PNET were confirmed by immunohistochemical, ultrastructural, and cytogenetic studies. ES/PNET in FNA and BMA can be efficiently and rapidly diagnosed by combining cytologic examination with FCI using a panel including CD45, CD16/56, and CD99. Diagn. Cytopathol. 2004;31:147-154. © 2004 Wiley-Liss, Inc.
Roles of hepatic stellate cells in acute liver failure: From the perspective of inflammation and fibrosis Acute liver failure (ALF) usually results in hepatocellular dysfunction and coagulopathy and carries a high mortality rate. Hepatic stellate cells (HSCs) are best known for their role in liver fibrosis. Although some recent studies revealed that HSCs might participate in the pathogenesis of ALF, the accurate mechanism is still not fully understood. This review focuses on the recent advances in understanding the functions of HSCs in ALF and reveals both protective and promotive roles during the pathogenesis of ALF: HSC activation participates in the maintenance of cell attachment and the architecture of liver tissue via extracellular matrix production and assists liver regeneration by producing growth factors; and HSC inflammation plays a role in relaying inflammation signaling from sinusoids to parenchyma via secretion of inflammatory cytokines. A better understanding of the roles of HSCs in the pathogenesis of ALF may lead to improvements and novel strategies for treating ALF patients. INTRODUCTION Liver failure, including acute, chronic and acute-on-chronic liver failure, is a rare but dramatic clinical syndrome characterized by massive hepatocyte death and overactivation of hepatic inflammation. Acute liver failure (ALF), characterized by a rapid deterioration of liver function without pre-existing liver disease, usually results in hepatocellular dysfunction and coagulopathy and carries a high mortality rate. The main causes of ALF include viral hepatitis, ischemia and drug-induced toxicity. Currently, ALF continues to be a huge therapeutic challenge and, apart from liver transplantation, few effective therapies are available. Hepatic stellate cells (HSCs) are resident mesenchymal cells that have features of resident fibroblasts and pericytes and account for 15% of total resident cells in the normal human liver. HSCs are one of the key nonparenchymal components in the sinusoid, with multiple functions in the liver, and are known for their roles in fibrosis. Under physiological conditions, HSCs exhibit a quiescent state and contain numerous vitamin A lipid droplets. Upon liver injury, HSCs lose lipid-rich granules and transdifferentiate into active myofibroblast-like cells characterized by the expression of α-SMA, production of extracellular matrix (ECM) and release of cytokines. Although the involvement of HSCs in liver fibrosis is well recognized, few studies have examined their roles in ALF. Some recent studies have indicated that the blockade of fibrosis by depleting activated HSCs in an acetaminophen (APAP)-induced mouse ALF model resulted in significantly more severe liver damage and a lower survival rate. However, due to the dramatic clinical course of ALF, the role of HSC activation in the process of ALF is still unclear. HSCs comprise approximately one-third of nonparenchymal cells and constitute the liver sinusoid together with sinusoidal endothelial cells and Kupffer cells (KCs). Upon stimulation by the gut microbiota and microbial byproducts in septic liver injury, KCs and sinusoidal endothelial cells produce inflammatory cytokines in the sinusoidal lumen and serve as the first gate against inflammatory stimuli in the portal circulation. Although the role of HSC activation in liver fibrosis has been widely accepted and attracts much attention, whether and how HSCs participate in hepatic inflammation has not been examined. 
Anatomically, HSCs seem well positioned to respond to inflammatory stimuli from the sinusoids. Recent studies have revealed that activated HSCs may release inflammatory cytokines such as interleukin (IL)-1β and IL-18. HSCs from both humans and rodents produce inflammatory cytokines promoting hepatocellular carcinoma and immune-mediated hepatitis. However, how HSCs participate in hepatic inflammation, and whether and how HSC inflammation is involved in the pathogenesis of ALF, are still unknown (Figure 1).

PATHOGENESIS OF ALF
To date, ALF remains a life-threatening syndrome with a high mortality rate, and is characterized by massive hepatocyte death and overactivation of hepatic inflammation.

Cell death and regeneration in ALF
Hepatocyte injury and subsequent cell death are important during the pathogenesis of ALF. Two different types of programmed cell death are thought to be involved in this process: apoptosis and necrosis. Apoptosis is defined by chromatin condensation, nuclear fragmentation, cell shrinkage, blebbing of the plasma membrane, and the formation of apoptotic bodies that contain nuclear or cytoplasmic material; necrosis, an alternative to apoptotic cell death, is considered to be a toxic process with the characteristics of cytoplasmic swelling, dilation of organelles, and mechanical rupture of the plasma membrane. The relative contribution of apoptosis and necrosis during liver failure remains controversial. Studies have shown that a variety of injurious stimuli induce apoptosis at low doses, while the same stimuli may result in necrosis at higher doses. The etiology may also alter the type of cell death in ALF: necrosis is considered a prominent death pathway of hepatocytes in drug-induced ALF, whereas apoptosis is often found in viral- and toxin-mediated liver failure. Clinicians have observed that some ALF patients may recover spontaneously, and the clinical outcomes largely depend on the balance between hepatocyte loss and regeneration. Under mild conditions, lost cells can quickly be replaced by neighboring healthy hepatocytes via replication in an attempt to restore hepatic architecture and function. However, the regenerative capacity of the remaining hepatocytes may not be sufficient upon extensive injury and massive hepatocyte death, and the resident liver progenitor cells (LPCs) are then activated to take over the role of hepatocytes in hepatic regeneration. However, for many liver failure patients, even the regenerative process by LPCs is inadequate to match the rapid process of hepatocyte death and dramatic deterioration in liver function, which means that apart from liver transplantation, few effective therapies exist. To date, the mechanisms promoting hepatic cell death and the processes mediating liver regeneration are not fully understood.

Hepatic inflammation in ALF
Overactivation of hepatic inflammation is another important characteristic of ALF. Clinically, ALF shares many features with severe sepsis, including a systemic inflammatory response and progression to multi-organ failure. Patients with ALF often present with endotoxemia and increased serum lipopolysaccharide (LPS) levels due to increased gut permeability. LPS can cause the release of a wide variety of inflammatory mediators and contribute to the pathogenesis of various diseases, including ALF. Studies have also found elevated plasma inflammatory cytokines, such as IL-1β, IL-6, IL-8 and tumor necrosis factor (TNF)-α, in ALF patients.
Moreover, approximately 60% of ALF patients fulfill the criteria for systemic inflammatory response syndrome irrespective of the presence or absence of infection. Inflammasome activation serves as a double-edged sword, contributing to the protective antimicrobial response but also to cell death when excessively active during the pathogenesis of various diseases. Inflammation is a common element in the pathogenesis of most liver diseases, and ALF is now recognized as an inflammation-mediated hepatocellular injury process. During the disease process of ALF, inflammation first participates in the initiation and amplification steps leading to cell injury and hepatocyte death; these injured/dead hepatocytes then release damage-associated molecular patterns that can drive inflammasome activation, directly perpetuate further cell death, and mediate additional organ failure, forming a vicious circle. Studies have shown that inhibition of hepatic inflammation can successfully delay or prevent the progression of ALF. However, the mechanisms promoting hepatic inflammation during ALF are still not fully understood.

Hepatic fibrosis and HSCs in ALF
Liver fibrosis is a highly conserved and coordinated wound-healing process aimed at maintaining organ integrity, which results from acute or chronic liver injury and is usually associated with excess hepatocellular death. Chronic liver injury is usually accompanied by progressive hepatocyte apoptosis and subsequent liver fibrogenesis. In chronic liver injury, fibrosis is widely acknowledged as a damaging process, which results in cirrhosis, portal hypertension and liver cancer. ALF is associated with massive short-term hepatocyte death provoked by excessive apoptosis and necrosis and, consequently, deterioration of liver function. When the disease is not fatal, the liver has a unique capacity to recover via proliferation and regeneration, and HSC activation has also been found to participate in the pathogenesis of ALF. However, data on the roles of fibrosis during the pathogenesis of ALF are still scarce. HSC activation is the central step during liver fibrogenesis, and HSCs are known for their role in the initiation, progression and regression of hepatic fibrosis. A recent study has shown that fibrogenic cells, including HSCs and myofibroblasts, are activated early after acute/chronic liver injury to produce ECM components. The engulfment of hepatocyte-derived apoptotic bodies formed during liver failure was shown to promote the expression of fibrogenic genes in HSCs. Moreover, Dechêne et al found that ALF was accompanied by active hepatic fibrogenesis and revealed a positive correlation between liver stiffness, hepatocyte death and HSC activation, which suggests that fibrosis is an attempt to repair liver damage in response to ALF. In addition, a decrease in liver stiffness was observed in these ALF patients during the remission stage of the disease. Our previous data indicated that this short-term occurrence of fibrosis during the progression stage of ALF is a potentially beneficial response by the liver and serves as a scaffold to support the parenchyma and maintain hepatic integrity. Thus, liver fibrosis may play a protective role during ALF.
Clinical data have revealed that patients with chronic liver disease are less sensitive to the deleterious effects of toxic compounds owing to elevated levels of fibrosis: patients with long-term elevated liver enzyme levels are less sensitive to the hepatotoxicity of statins, and patients with chronic liver disease have shown increased tolerance to APAP compared with healthy individuals. Moreover, in experimental mouse models, Osawa et al showed that mice with bile duct ligation-induced fibrosis were more resistant to the lethal effect of Fas. Acute and chronic injury can both induce HSC activation and subsequent ECM accumulation. In the pathogenesis of ALF, ECM has been shown to protect hepatocytes from death through the maintenance of cell attachment and the architecture of liver tissue. However, the mechanism by which ECM protects hepatocytes from death remains complex. In a recent study, collagen I, the most abundant form of collagen in both normal and pathologic livers, was shown to increase resistance to various injurious stimuli and protect hepatocytes from apoptotic or necrotic death via activation of the ERK1/2-MAPK signaling pathway. In addition, some adaptor molecules, such as the integrins, focal adhesion kinase, integrin-linked kinase, PINCH and others, are also likely to contribute to hepatocyte survival. Matrix metalloproteinases are a family of proteinases that are capable of degrading all ECM proteins. A recent study revealed that IL-1β induced the production of matrix metalloproteinases during liver failure, which provoked the collapse of sinusoids via ECM degradation and led to parenchymal cell death and loss of liver function in response to hepatic toxins. Taken together, HSC activation leads to hepatic fibrosis, which participates in the maintenance of cell attachment and the architecture of liver tissue and protects hepatocytes from injurious stimuli via ECM production.

Liver regeneration and HSCs in ALF
The liver is constantly exposed to various insults in the body that may induce cell injury or even death, and the ability to regenerate is of importance to maintain liver homeostasis. It is known that the key strategy for the treatment of ALF is to reduce hepatocyte death and stimulate hepatocyte regeneration. Liver regeneration is the process by which the liver is able to replace lost liver tissue via growth from the remaining tissue. Liver regeneration driven by epithelial cell (including hepatocytes and LPCs) proliferation is a highly controlled process regulated by a complex signaling network and has important implications for stimulating hepatic recovery and improving survival during liver failure. The induction of liver regeneration depends on cross-talk between epithelial cells and nonparenchymal cells, especially HSCs. HSCs are liver-specific mesenchymal cells that play vital roles in promoting liver fibrosis and maintaining hepatic homeostasis. There is growing evidence to show that HSCs have a profound impact on the proliferation, differentiation and morphogenesis of other hepatic cell types during liver development and regeneration. HSCs are in direct contact with hepatocytes and LPCs, and their close anatomic relationship in the space of Disse suggests that HSCs are part of the local "stem cell niche" for hepatocytes and LPCs. Activated HSCs have been shown to assist liver regeneration by producing growth factors, which can modulate the proliferation of both hepatocytes and LPCs around them.
Conditioned medium collected from HSCs at an early stage of liver regeneration in a 2-acetylaminofluorene/partial hepatectomy injury model was found to contain high levels of hepatocyte growth factor and epidermal growth factor, which target and act primarily on epithelial cells. These factors may directly enhance the proliferation of hepatocytes and LPCs. It has also been shown that early-activated HSC-derived paracrine factors can evoke an enhanced liver-protective response in APAP-induced ALF in mice by promoting LPC proliferation. In addition, depletion of activated HSCs has been shown to correlate with severe liver damage and abnormal liver regeneration in APAP-induced acute liver injury in mice. We hypothesize that HSCs may assist liver regeneration during liver failure by producing growth factors.

Hepatic inflammation and HSCs
Inflammation is one of the most characteristic features of chronic liver disease of viral, alcoholic, fatty and autoimmune origin. Inflammation is typically present at different disease stages and is associated with the pathogenesis of cirrhosis, hepatocellular carcinoma and ALF. Fibrosis is a highly conserved response to hepatic injury occurring in diseases with hepatocellular death. A number of studies have focused on explaining the links between inflammation and fibrosis. Hepatocyte injury followed by inflammation and activation of the innate immune system leads to liver fibrosis mediated by HSC activation. HSCs are quiescent in the normal liver and become activated upon liver injury. HSCs have been characterized as the main effector cells in liver fibrogenesis and receive a wide range of signals from injured/dead hepatocytes and liver immune cells, predominantly KCs. KC-derived transforming growth factor-β1 activates HSCs and is the most potent fibrogenic agonist. KCs also enhance liver fibrosis by promoting activated HSC survival in an NF-κB-dependent manner. The cross-talk between KCs and HSCs has been shown to be mediated by inflammatory cytokines, including IL-1β and TNF-α. In addition, inhibition of IL-1β led to significantly increased apoptosis of HSCs and decreased liver fibrosis. Studies have shown that inflammatory cytokines, such as IL-1β and IL-6, are produced in activated HSCs. HSCs of murine or human origin are highly responsive to LPS and other pro-inflammatory stimuli, resulting in the activation of proinflammatory signaling pathways and the subsequent production of inflammatory chemokines/cytokines. This positive inflammatory feedback loop then maintains a sustained inflammatory process and ensures the survival and activation of HSCs.

Hepatic inflammation and HSCs in ALF
ALF is characterized by elevated inflammation. ALF shares many features with severe sepsis, including a systemic inflammatory response and progression to multi-organ failure. Two main mouse models are now used to study ALF: the LPS/D-galactosamine and Concanavalin A (Con A) models. Intraperitoneal injection of LPS activates immune cells located in the circulation and the sinusoids, and these activated cells produce large amounts of inflammatory cytokines and chemokines, resulting in massive hemorrhagic liver injury or even hepatocyte death. D-galactosamine is a hepatotoxic agent, which inhibits protein synthesis and is usually used together with LPS to create ALF mouse models.
A recent study showed that compared with wild-type mice, HSC-depleted mice presented with decreased cytokine and chemokine expression and attenuated liver injury after LPS/D-galactosamine administration. Con A is a lectin (carbohydrate-binding protein) extracted from the jack bean (Canavalia ensiformis). An intravenous injection of Con A constitutively activates intrahepatic and systemic immune cells, resulting in excessive inflammatory cytokine and chemokine production. In a Con A-induced liver injury mouse model, inflammatory cytokines, including TNF-α and interferon-γ, caused massive hepatocyte necrosis with dense infiltration of leukocytes. A recent study on a Con A-induced liver injury model showed that HSCs received inflammatory signals generated in the sinusoids and relayed them to the liver parenchyma. Thus, we hypothesize that HSCs have important roles in hepatic inflammation during the pathogenesis of ALF. Our recent work showed that during the pathogenesis of ALF, reactive oxygen species activate the NLRP3 inflammasome and promote inflammation in HSCs. We also revealed that LPS treatment induced reactive oxygen species generation in HSCs via mitophagy inhibition. Studies have suggested that in hepatocytes, reactive oxygen species play important roles in the pathophysiology of diseases, including ALF. Injured/dead hepatocytes greatly increase oxidative stress during liver failure, which in turn contributes to inflammation and further hepatocyte loss, and impedes regeneration. Taken together, these data suggest that HSC inflammation is involved in the pathogenesis of ALF by producing inflammatory cytokines upon stimulation and relaying inflammation signaling from the sinusoids to the parenchyma (Figure 2).

CONCLUSION
ALF is a life-threatening disease with a high mortality rate. Hepatocyte death and overactivation of hepatic inflammation are its two main characteristics. HSCs play both protective and promotive roles during the pathogenesis of ALF: first, HSC activation participates in the maintenance of cell attachment and the architecture of liver tissue via ECM production; second, HSC activation assists liver regeneration by producing growth factors; and third, HSC inflammation plays a role in relaying inflammation signaling from the sinusoids to the parenchyma via the secretion of inflammatory cytokines. A better understanding of the roles of HSCs in the pathogenesis of ALF will lead to improvements and novel strategies for the treatment of patients with ALF.
package SignalHandler

import (
	"fmt"
	"log"
	"os"
	"os/signal"
)

// SignalHandler is a basic, abstract signal handler that is implementable in most Go applications without much effort
type SignalHandler struct {
	signalChannel chan os.Signal
	callbacks     map[os.Signal][]func() error
	outputFunc    func(v ...interface{})
}

// ConstructSignalHandler will create a new SignalHandler
func ConstructSignalHandler() *SignalHandler {
	return &SignalHandler{
		signalChannel: make(chan os.Signal, 1),
		callbacks:     make(map[os.Signal][]func() error, 10),
		outputFunc:    log.Println,
	}
}

// SetOutput will set the output function used to print errors should they occur
func (handler *SignalHandler) SetOutput(outputFunction func(v ...interface{})) {
	handler.outputFunc = outputFunction
}

// Start will listen, async, in a separate goroutine.
func (handler *SignalHandler) Start() {
	go handler.Listen()
}

// Listen will wait for signals to come in, go through registered signal functions and execute them
func (handler *SignalHandler) Listen() {
	for {
		receivedSignal := <-handler.signalChannel
		for _, signalFunction := range handler.callbacks[receivedSignal] {
			err := signalFunction()
			if err != nil {
				handler.outputFunc(fmt.Sprintf("SignalHandler: error during signal %v func: %v", receivedSignal, err))
			}
		}
	}
}

// RegisterSignalFunction allows you to specify a function to be called when a signal is received by the signal handler
func (handler *SignalHandler) RegisterSignalFunction(sig os.Signal, callback func() error) {
	if len(handler.callbacks[sig]) == 0 {
		// First callback for this signal: clear any default handling, then route the signal to our channel.
		signal.Ignore(sig)
		signal.Notify(handler.signalChannel, sig)
	}
	handler.callbacks[sig] = append(handler.callbacks[sig], callback)
}
package backend

import (
	"path/filepath"

	"github.com/tidwall/buntdb"

	"github.com/btcsuite/btcd/wire"
	"github.com/vertiond/verthash-one-click-miner/miners"
	"github.com/vertiond/verthash-one-click-miner/payouts"
	"github.com/vertiond/verthash-one-click-miner/pools"
	"github.com/vertiond/verthash-one-click-miner/util"
	"github.com/vertiond/verthash-one-click-miner/wallet_doge"
	"github.com/wailsapp/wails"
)

type Backend struct {
	runtime             *wails.Runtime
	wal                 *wallet.Wallet
	settings            *buntdb.DB
	pendingSweep        []*wire.MsgTx
	minerBinaries       []*miners.BinaryRunner
	rapidFailures       []*miners.BinaryRunner
	pool                pools.Pool
	payout              payouts.Payout
	network             string
	walletAddress       string
	customAddress       string
	refreshBalanceChan  chan bool
	refreshHashChan     chan bool
	refreshRunningState chan bool
	stopMonitoring      chan bool
	stopHash            chan bool
	stopBalance         chan bool
	stopUpdate          chan bool
	stopRunningState    chan bool
	prerequisiteInstall chan bool
	alreadyRunning      bool
}

func NewBackend(alreadyRunning bool) (*Backend, error) {
	backend := &Backend{
		refreshBalanceChan:  make(chan bool),
		refreshHashChan:     make(chan bool),
		refreshRunningState: make(chan bool),
		stopHash:            make(chan bool),
		stopBalance:         make(chan bool),
		stopRunningState:    make(chan bool),
		stopMonitoring:      make(chan bool),
		stopUpdate:          make(chan bool),
		prerequisiteInstall: make(chan bool),
		minerBinaries:       []*miners.BinaryRunner{},
		rapidFailures:       []*miners.BinaryRunner{},
	}
	if alreadyRunning {
		backend.alreadyRunning = true
		return backend, nil
	}
	db, err := buntdb.Open(filepath.Join(util.DataDirectory(), "settings.db"))
	if err != nil {
		return nil, err
	}
	backend.settings = db
	return backend, nil
}

func (m *Backend) ResetPool() {
	m.pool = pools.GetPool(m.GetPool(), m.GetTestnet())
}

func (m *Backend) ResetPayout() {
	m.payout = pools.GetPayout(m.pool, m.GetPayout(), m.GetTestnet())
}

func (m *Backend) ResetNetwork() {
	m.network = m.GetNetwork()
}

func (m *Backend) ResetCustomAddress() {
	m.customAddress = m.GetCustomAddress()
}

func (m *Backend) ResetWalletAddress() {
	m.walletAddress = m.Address()
}

func (m *Backend) WailsInit(runtime *wails.Runtime) error {
	// Save runtime
	m.runtime = runtime
	go m.PrerequisiteProxyLoop()
	go m.UpdateLoop()
	return nil
}

func (m *Backend) OpenDownloadUrl(url string) {
	util.OpenBrowser(url)
}

func (m *Backend) AlreadyRunning() bool {
	return m.alreadyRunning
}

func (m *Backend) Close() {
	m.runtime.Window.Close()
}
Stochastic volatility: approximation and goodness-of-fit test Let $X$ be the unique solution started from $x_0$ of the stochastic differential equation $dX_t=\theta(t,X_t)dB_t+b(t,X_t)dt$, with $B$ a standard Brownian motion. We consider an approximation of the volatility $\theta(t,X_t)$, the drift being considered as a nuisance parameter. The approximation is based on a discrete-time observation of $X$, and we study its rate of convergence as a process. A goodness-of-fit test is also constructed.
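The abstract does not spell out the estimator, but a standard way to approximate $\theta(t,X_t)^2$ from a discrete observation is a local quadratic-variation average: over a short window, squared increments $(X_{t_{i+1}}-X_{t_i})^2/\Delta$ concentrate around $\theta^2$, while the drift contributes only $O(\Delta)$ per increment. The Python sketch below is a minimal illustration of that idea under an assumed model, not the paper's actual statistic; the window size k, the simulated theta/b, and all names are assumptions for the demo.

import numpy as np

rng = np.random.default_rng(0)

# Simulate X on the grid t_i = i/n by Euler-Maruyama for a known theta (demo only):
# dX_t = theta(t, X_t) dB_t + b(t, X_t) dt
n = 10_000
dt = 1.0 / n
theta = lambda t, x: 0.5 + 0.25 * np.sin(x)   # assumed "true" volatility
b = lambda t, x: -0.1 * x                     # drift, treated as a nuisance

X = np.empty(n + 1)
X[0] = 1.0  # x_0
for i in range(n):
    t = i * dt
    X[i + 1] = X[i] + b(t, X[i]) * dt + theta(t, X[i]) * np.sqrt(dt) * rng.standard_normal()

# Local quadratic-variation estimate of theta(t_i, X_{t_i})^2:
# average the squared increments over a rolling window of k steps, divided by dt.
k = 50
sq_inc = np.diff(X) ** 2 / dt
theta2_hat = np.convolve(sq_inc, np.ones(k) / k, mode="valid")

# Compare against the true theta^2 at the window start (rough alignment for the demo).
true_theta2 = theta(np.arange(len(theta2_hat)) * dt, X[:len(theta2_hat)]) ** 2
print("mean abs error:", np.abs(theta2_hat - true_theta2).mean())

A goodness-of-fit test in this spirit would compare theta2_hat against a parametric candidate for theta^2, but the precise normalization and limit law are what the paper itself establishes.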
// Arduino-ESP8266-AzureIoTHub-MQTT-CameraMonitoring
#ifndef Eventhub_h
#define Eventhub_h

#include "Arduino.h"
#include "sha256.h"
#include "Base64.h"
#include "IoTHub.h"
#include <WiFiClientSecure.h>

class Eventhub : public IoT {
public:
    String createSas(char* key, String url);
    void initialiseHub();

private:
    // Azure Event Hub settings
    const char* EVENT_HUB_END_POINT = "/ehdevices/publishers/nodemcu/messages";
};

#endif
/**
 * Abstraction level for the following constructors:
 * <ul>
 * <li>{@link TLPageFull}: pageFull#556ec7aa</li>
 * <li>{@link TLPagePart}: pagePart#8e3f9ebe</li>
 * </ul>
 *
 * This class is generated by Mono's TL class generator
 */
public abstract class TLAbsPage extends TLObject {
    protected TLVector<TLAbsPageBlock> blocks;
    protected TLVector<TLAbsPhoto> photos;
    protected TLVector<TLAbsDocument> documents;

    public TLAbsPage() {
    }

    public TLVector<TLAbsPageBlock> getBlocks() {
        return blocks;
    }

    public void setBlocks(TLVector<TLAbsPageBlock> blocks) {
        this.blocks = blocks;
    }

    public TLVector<TLAbsPhoto> getPhotos() {
        return photos;
    }

    public void setPhotos(TLVector<TLAbsPhoto> photos) {
        this.photos = photos;
    }

    public TLVector<TLAbsDocument> getDocuments() {
        return documents;
    }

    public void setDocuments(TLVector<TLAbsDocument> documents) {
        this.documents = documents;
    }
}
import { Component } from '@angular/core';
import { Chart } from 'angular-highcharts';

@Component({
  templateUrl: 'scatter-plot.html'
})
export class ScatterPlotPage {
  chart = new Chart(<any>{
    chart: { type: 'scatter', zoomType: 'xy' },
    title: { text: 'Height Versus Weight of 507 Individuals by Gender' },
    subtitle: { text: 'Source: Heinz 2003' },
    xAxis: { title: { enabled: true, text: 'Height (cm)' }, startOnTick: true, endOnTick: true, showLastLabel: true },
    yAxis: { title: { text: 'Weight (kg)' } },
    legend: { layout: 'vertical', align: 'left', verticalAlign: 'top', x: 100, y: 70, floating: true, backgroundColor: '#FFFFFF', borderWidth: 1 },
    plotOptions: { scatter: { marker: { radius: 5, states: { hover: { enabled: true, lineColor: 'rgb(100,100,100)' } } }, states: { hover: { marker: { enabled: false } } }, tooltip: { headerFormat: '<b>{series.name}</b><br>', pointFormat: '{point.x} cm, {point.y} kg' } } },
    series: [{ name: 'Female', color: 'rgba(223, 83, 83, .5)',
      data: [[161.2, 51.6], [167.5, 59.0], [159.5, 49.2], [157.0, 63.0], [155.8, 53.6], [170.0, 59.0], [159.1, 47.6], [166.0, 69.8], [176.2, 66.8], [160.2, 75.2],
      [172.5, 55.2], [170.9, 54.2], [172.9, 62.5], [153.4, 42.0], [160.0, 50.0], [147.2, 49.8], [168.2, 49.2], [175.0, 73.2], [157.0, 47.8], [167.6, 68.8],
      [159.5, 50.6], [175.0, 82.5], [166.8, 57.2], [176.5, 87.8], [170.2, 72.8], [174.0, 54.5], [173.0, 59.8], [179.9, 67.3], [170.5, 67.8], [160.0, 47.0],
      [154.4, 46.2], [162.0, 55.0], [176.5, 83.0], [160.0, 54.4], [152.0, 45.8], [162.1, 53.6], [170.0, 73.2], [160.2, 52.1], [161.3, 67.9], [166.4, 56.6],
      [168.9, 62.3], [163.8, 58.5], [167.6, 54.5], [160.0, 50.2], [161.3, 60.3], [167.6, 58.3], [165.1, 56.2], [160.0, 50.2], [170.0, 72.9], [157.5, 59.8],
      [167.6, 61.0], [160.7, 69.1], [163.2, 55.9], [152.4, 46.5], [157.5, 54.3], [168.3, 54.8], [180.3, 60.7], [165.5, 60.0], [165.0, 62.0], [164.5, 60.3],
      [156.0, 52.7], [160.0, 74.3], [163.0, 62.0], [165.7, 73.1], [161.0, 80.0], [162.0, 54.7], [166.0, 53.2], [174.0, 75.7], [172.7, 61.1], [167.6, 55.7],
      [151.1, 48.7], [164.5, 52.3], [163.5, 50.0], [152.0, 59.3], [169.0, 62.5], [164.0, 55.7], [161.2, 54.8], [155.0, 45.9], [170.0, 70.6], [176.2, 67.2],
      [170.0, 69.4], [162.5, 58.2], [170.3, 64.8], [164.1, 71.6], [169.5, 52.8], [163.2, 59.8], [154.5, 49.0], [159.8, 50.0], [173.2, 69.2], [170.0, 55.9],
      [161.4, 63.4], [169.0, 58.2], [166.2, 58.6], [159.4, 45.7], [162.5, 52.2], [159.0, 48.6], [162.8, 57.8], [159.0, 55.6], [179.8, 66.8], [162.9, 59.4],
      [161.0, 53.6], [151.1, 73.2], [168.2, 53.4], [168.9, 69.0], [173.2, 58.4], [171.8, 56.2], [178.0, 70.6], [164.3, 59.8], [163.0, 72.0], [168.5, 65.2],
      [166.8, 56.6], [172.7, 105.2], [163.5, 51.8], [169.4, 63.4], [167.8, 59.0], [159.5, 47.6], [167.6, 63.0], [161.2, 55.2], [160.0, 45.0], [163.2, 54.0],
      [162.2, 50.2], [161.3, 60.2], [149.5, 44.8], [157.5, 58.8], [163.2, 56.4], [172.7, 62.0], [155.0, 49.2], [156.5, 67.2], [164.0, 53.8], [160.9, 54.4],
      [162.8, 58.0], [167.0, 59.8], [160.0, 54.8], [160.0, 43.2], [168.9, 60.5], [158.2, 46.4], [156.0, 64.4], [160.0, 48.8], [167.1, 62.2], [158.0, 55.5],
      [167.6, 57.8], [156.0, 54.6], [162.1, 59.2], [173.4, 52.7], [159.8, 53.2], [170.5, 64.5], [159.2, 51.8], [157.5, 56.0], [161.3, 63.6], [162.6, 63.2],
      [160.0, 59.5], [168.9, 56.8], [165.1, 64.1], [162.6, 50.0], [165.1, 72.3], [166.4, 55.0], [160.0, 55.9], [152.4, 60.4], [170.2, 69.1], [162.6, 84.5],
      [170.2, 55.9], [158.8, 55.5], [172.7, 69.5], [167.6, 76.4], [162.6, 61.4], [167.6, 65.9], [156.2, 58.6], [175.2, 66.8],
[172.1, 56.6], [162.6, 58.6], [160.0, 55.9], [165.1, 59.1], [182.9, 81.8], [166.4, 70.7], [165.1, 56.8], [177.8, 60.0], [165.1, 58.2], [175.3, 72.7], [154.9, 54.1], [158.8, 49.1], [172.7, 75.9], [168.9, 55.0], [161.3, 57.3], [167.6, 55.0], [165.1, 65.5], [175.3, 65.5], [157.5, 48.6], [163.8, 58.6], [167.6, 63.6], [165.1, 55.2], [165.1, 62.7], [168.9, 56.6], [162.6, 53.9], [164.5, 63.2], [176.5, 73.6], [168.9, 62.0], [175.3, 63.6], [159.4, 53.2], [160.0, 53.4], [170.2, 55.0], [162.6, 70.5], [167.6, 54.5], [162.6, 54.5], [160.7, 55.9], [160.0, 59.0], [157.5, 63.6], [162.6, 54.5], [152.4, 47.3], [170.2, 67.7], [165.1, 80.9], [172.7, 70.5], [165.1, 60.9], [170.2, 63.6], [170.2, 54.5], [170.2, 59.1], [161.3, 70.5], [167.6, 52.7], [167.6, 62.7], [165.1, 86.3], [162.6, 66.4], [152.4, 67.3], [168.9, 63.0], [170.2, 73.6], [175.2, 62.3], [175.2, 57.7], [160.0, 55.4], [165.1, 104.1], [174.0, 55.5], [170.2, 77.3], [160.0, 80.5], [167.6, 64.5], [167.6, 72.3], [167.6, 61.4], [154.9, 58.2], [162.6, 81.8], [175.3, 63.6], [171.4, 53.4], [157.5, 54.5], [165.1, 53.6], [160.0, 60.0], [174.0, 73.6], [162.6, 61.4], [174.0, 55.5], [162.6, 63.6], [161.3, 60.9], [156.2, 60.0], [149.9, 46.8], [169.5, 57.3], [160.0, 64.1], [175.3, 63.6], [169.5, 67.3], [160.0, 75.5], [172.7, 68.2], [162.6, 61.4], [157.5, 76.8], [176.5, 71.8], [164.4, 55.5], [160.7, 48.6], [174.0, 66.4], [163.8, 67.3]] }, { name: 'Male', color: 'rgba(119, 152, 191, .5)', data: [[174.0, 65.6], [175.3, 71.8], [193.5, 80.7], [186.5, 72.6], [187.2, 78.8], [181.5, 74.8], [184.0, 86.4], [184.5, 78.4], [175.0, 62.0], [184.0, 81.6], [180.0, 76.6], [177.8, 83.6], [192.0, 90.0], [176.0, 74.6], [174.0, 71.0], [184.0, 79.6], [192.7, 93.8], [171.5, 70.0], [173.0, 72.4], [176.0, 85.9], [176.0, 78.8], [180.5, 77.8], [172.7, 66.2], [176.0, 86.4], [173.5, 81.8], [178.0, 89.6], [180.3, 82.8], [180.3, 76.4], [164.5, 63.2], [173.0, 60.9], [183.5, 74.8], [175.5, 70.0], [188.0, 72.4], [189.2, 84.1], [172.8, 69.1], [170.0, 59.5], [182.0, 67.2], [170.0, 61.3], [177.8, 68.6], [184.2, 80.1], [186.7, 87.8], [171.4, 84.7], [172.7, 73.4], [175.3, 72.1], [180.3, 82.6], [182.9, 88.7], [188.0, 84.1], [177.2, 94.1], [172.1, 74.9], [167.0, 59.1], [169.5, 75.6], [174.0, 86.2], [172.7, 75.3], [182.2, 87.1], [164.1, 55.2], [163.0, 57.0], [171.5, 61.4], [184.2, 76.8], [174.0, 86.8], [174.0, 72.2], [177.0, 71.6], [186.0, 84.8], [167.0, 68.2], [171.8, 66.1], [182.0, 72.0], [167.0, 64.6], [177.8, 74.8], [164.5, 70.0], [192.0, 101.6], [175.5, 63.2], [171.2, 79.1], [181.6, 78.9], [167.4, 67.7], [181.1, 66.0], [177.0, 68.2], [174.5, 63.9], [177.5, 72.0], [170.5, 56.8], [182.4, 74.5], [197.1, 90.9], [180.1, 93.0], [175.5, 80.9], [180.6, 72.7], [184.4, 68.0], [175.5, 70.9], [180.6, 72.5], [177.0, 72.5], [177.1, 83.4], [181.6, 75.5], [176.5, 73.0], [175.0, 70.2], [174.0, 73.4], [165.1, 70.5], [177.0, 68.9], [192.0, 102.3], [176.5, 68.4], [169.4, 65.9], [182.1, 75.7], [179.8, 84.5], [175.3, 87.7], [184.9, 86.4], [177.3, 73.2], [167.4, 53.9], [178.1, 72.0], [168.9, 55.5], [157.2, 58.4], [180.3, 83.2], [170.2, 72.7], [177.8, 64.1], [172.7, 72.3], [165.1, 65.0], [186.7, 86.4], [165.1, 65.0], [174.0, 88.6], [175.3, 84.1], [185.4, 66.8], [177.8, 75.5], [180.3, 93.2], [180.3, 82.7], [177.8, 58.0], [177.8, 79.5], [177.8, 78.6], [177.8, 71.8], [177.8, 116.4], [163.8, 72.2], [188.0, 83.6], [198.1, 85.5], [175.3, 90.9], [166.4, 85.9], [190.5, 89.1], [166.4, 75.0], [177.8, 77.7], [179.7, 86.4], [172.7, 90.9], [190.5, 73.6], [185.4, 76.4], [168.9, 69.1], [167.6, 84.5], [175.3, 64.5], [170.2, 69.1], [190.5, 
108.6], [177.8, 86.4], [190.5, 80.9], [177.8, 87.7], [184.2, 94.5], [176.5, 80.2], [177.8, 72.0], [180.3, 71.4], [171.4, 72.7], [172.7, 84.1], [172.7, 76.8], [177.8, 63.6], [177.8, 80.9], [182.9, 80.9], [170.2, 85.5], [167.6, 68.6], [175.3, 67.7], [165.1, 66.4], [185.4, 102.3], [181.6, 70.5], [172.7, 95.9], [190.5, 84.1], [179.1, 87.3], [175.3, 71.8], [170.2, 65.9], [193.0, 95.9], [171.4, 91.4], [177.8, 81.8], [177.8, 96.8], [167.6, 69.1], [167.6, 82.7], [180.3, 75.5], [182.9, 79.5], [176.5, 73.6], [186.7, 91.8], [188.0, 84.1], [188.0, 85.9], [177.8, 81.8], [174.0, 82.5], [177.8, 80.5], [171.4, 70.0], [185.4, 81.8], [185.4, 84.1], [188.0, 90.5], [188.0, 91.4], [182.9, 89.1], [176.5, 85.0], [175.3, 69.1], [175.3, 73.6], [188.0, 80.5], [188.0, 82.7], [175.3, 86.4], [170.5, 67.7], [179.1, 92.7], [177.8, 93.6], [175.3, 70.9], [182.9, 75.0], [170.8, 93.2], [188.0, 93.2], [180.3, 77.7], [177.8, 61.4], [185.4, 94.1], [168.9, 75.0], [185.4, 83.6], [180.3, 85.5], [174.0, 73.9], [167.6, 66.8], [182.9, 87.3], [160.0, 72.3], [180.3, 88.6], [167.6, 75.5], [186.7, 101.4], [175.3, 91.1], [175.3, 67.3], [175.9, 77.7], [175.3, 81.8], [179.1, 75.5], [181.6, 84.5], [177.8, 76.6], [182.9, 85.0], [177.8, 102.5], [184.2, 77.3], [179.1, 71.8], [176.5, 87.9], [188.0, 94.3], [174.0, 70.9], [167.6, 64.5], [170.2, 77.3], [167.6, 72.3], [188.0, 87.3], [174.0, 80.0], [176.5, 82.3], [180.3, 73.6], [167.6, 74.1], [188.0, 85.9], [180.3, 73.2], [167.6, 76.3], [183.0, 65.9], [183.0, 90.9], [179.1, 89.1], [170.2, 62.3], [177.8, 82.7], [179.1, 79.1], [190.5, 98.2], [177.8, 84.1], [180.3, 83.2], [180.3, 83.2]] }] }); }
#include "stdio.h" #include "stdlib.h" #include "GfxUtils.h" #include "GfxRenderer.h" static GLenum StringToCullFace(const char *szString) { if (szString) { if (!stricmp(szString, "GL_FRONT")) return GL_FRONT; if (!stricmp(szString, "GL_BACK")) return GL_BACK; if (!stricmp(szString, "GL_FRONT_AND_BACK ")) return GL_FRONT_AND_BACK; } return GL_BACK; } static GLenum StringToFrontFace(const char *szString) { if (szString) { if (!stricmp(szString, "GL_CW")) return GL_CW; if (!stricmp(szString, "GL_CCW")) return GL_CCW; } return GL_CCW; } static GLenum StringToDepthFunc(const char *szString) { if (szString) { if (!stricmp(szString, "GL_NEVER")) return GL_NEVER; if (!stricmp(szString, "GL_LESS")) return GL_LESS; if (!stricmp(szString, "GL_EQUAL")) return GL_EQUAL; if (!stricmp(szString, "GL_LEQUAL")) return GL_LEQUAL; if (!stricmp(szString, "GL_GREATER")) return GL_GREATER; if (!stricmp(szString, "GL_NOTEQUAL")) return GL_NOTEQUAL; if (!stricmp(szString, "GL_GEQUAL")) return GL_GEQUAL; } return GL_LESS; } static GLenum StringToMinFilter(const char *szString) { if (szString) { if (!stricmp(szString, "GL_LINEAR")) return GL_LINEAR; if (!stricmp(szString, "GL_LINEAR_MIPMAP_LINEAR")) return GL_LINEAR_MIPMAP_LINEAR; if (!stricmp(szString, "GL_LINEAR_MIPMAP_NEAREST")) return GL_LINEAR_MIPMAP_NEAREST; if (!stricmp(szString, "GL_NEAREST")) return GL_NEAREST; if (!stricmp(szString, "GL_NEAREST_MIPMAP_LINEAR")) return GL_NEAREST_MIPMAP_LINEAR; if (!stricmp(szString, "GL_NEAREST_MIPMAP_NEAREST")) return GL_NEAREST_MIPMAP_NEAREST; } return GL_LINEAR_MIPMAP_NEAREST; } static GLenum StringToMagFilter(const char *szString) { if (szString) { if (!stricmp(szString, "GL_LINEAR")) return GL_LINEAR; if (!stricmp(szString, "GL_NEAREST")) return GL_NEAREST; } return GL_LINEAR; } static GLenum StringToAddressMode(const char *szString) { if (szString) { if (!stricmp(szString, "GL_REPEAT")) return GL_REPEAT; if (!stricmp(szString, "GL_CLAMP_TO_EDGE")) return GL_CLAMP_TO_EDGE; } return GL_REPEAT; } static GLenum StringToBlendSrcFactor(const char *szString) { if (szString) { if (!stricmp(szString, "GL_ZERO")) return GL_ZERO; if (!stricmp(szString, "GL_ONE")) return GL_ONE; if (!stricmp(szString, "GL_SRC_COLOR")) return GL_SRC_COLOR; if (!stricmp(szString, "GL_ONE_MINUS_SRC_COLOR")) return GL_ONE_MINUS_SRC_COLOR; if (!stricmp(szString, "GL_DST_COLOR")) return GL_DST_COLOR; if (!stricmp(szString, "GL_ONE_MINUS_DST_COLOR")) return GL_ONE_MINUS_DST_COLOR; if (!stricmp(szString, "GL_SRC_ALPHA")) return GL_SRC_ALPHA; if (!stricmp(szString, "GL_ONE_MINUS_SRC_ALPHA")) return GL_ONE_MINUS_SRC_ALPHA; if (!stricmp(szString, "GL_DST_ALPHA")) return GL_DST_ALPHA; if (!stricmp(szString, "GL_ONE_MINUS_DST_ALPHA")) return GL_ONE_MINUS_DST_ALPHA; if (!stricmp(szString, "GL_CONSTANT_COLOR")) return GL_CONSTANT_COLOR; if (!stricmp(szString, "GL_ONE_MINUS_CONSTANT_COLOR")) return GL_ONE_MINUS_CONSTANT_COLOR; if (!stricmp(szString, "GL_CONSTANT_ALPHA")) return GL_CONSTANT_ALPHA; if (!stricmp(szString, "GL_ONE_MINUS_CONSTANT_ALPHA")) return GL_ONE_MINUS_CONSTANT_ALPHA; if (!stricmp(szString, "GL_SRC_ALPHA_SATURATE")) return GL_SRC_ALPHA_SATURATE; } return GL_SRC_ALPHA; } static GLenum StringToBlendDstFactor(const char *szString) { if (szString) { if (!stricmp(szString, "GL_ZERO")) return GL_ZERO; if (!stricmp(szString, "GL_ONE")) return GL_ONE; if (!stricmp(szString, "GL_SRC_COLOR")) return GL_SRC_COLOR; if (!stricmp(szString, "GL_ONE_MINUS_SRC_COLOR")) return GL_ONE_MINUS_SRC_COLOR; if (!stricmp(szString, "GL_DST_COLOR")) return GL_DST_COLOR; 
if (!stricmp(szString, "GL_ONE_MINUS_DST_COLOR")) return GL_ONE_MINUS_DST_COLOR; if (!stricmp(szString, "GL_SRC_ALPHA")) return GL_SRC_ALPHA; if (!stricmp(szString, "GL_ONE_MINUS_SRC_ALPHA")) return GL_ONE_MINUS_SRC_ALPHA; if (!stricmp(szString, "GL_DST_ALPHA")) return GL_DST_ALPHA; if (!stricmp(szString, "GL_ONE_MINUS_DST_ALPHA")) return GL_ONE_MINUS_DST_ALPHA; if (!stricmp(szString, "GL_CONSTANT_COLOR")) return GL_CONSTANT_COLOR; if (!stricmp(szString, "GL_ONE_MINUS_CONSTANT_COLOR")) return GL_ONE_MINUS_CONSTANT_COLOR; if (!stricmp(szString, "GL_CONSTANT_ALPHA")) return GL_CONSTANT_ALPHA; if (!stricmp(szString, "GL_ONE_MINUS_CONSTANT_ALPHA")) return GL_ONE_MINUS_CONSTANT_ALPHA; if (!stricmp(szString, "GL_SRC_ALPHA_SATURATE")) return GL_SRC_ALPHA_SATURATE; } return GL_ONE_MINUS_SRC_ALPHA; } CGfxMaterial::CGfxMaterial(GLuint name) : m_name(name) , m_pProgram(NULL) , refCount(0) { m_state.bEnableCullFace = GL_TRUE; m_state.bEnableDepthTest = GL_TRUE; m_state.bEnableDepthWrite = GL_TRUE; m_state.bEnableColorWrite[0] = GL_TRUE; m_state.bEnableColorWrite[1] = GL_TRUE; m_state.bEnableColorWrite[2] = GL_TRUE; m_state.bEnableColorWrite[3] = GL_TRUE; m_state.bEnableBlend = GL_FALSE; m_state.bEnablePolygonOffset = GL_FALSE; m_state.cullFace = GL_BACK; m_state.frontFace = GL_CCW; m_state.depthFunc = GL_LESS; m_state.srcBlendFactor = GL_SRC_ALPHA; m_state.dstBlendFactor = GL_ONE_MINUS_SRC_ALPHA; m_state.polygonOffsetFactor = 0.0f; m_state.polygonOffsetUnits = 0.0f; } CGfxMaterial::~CGfxMaterial(void) { Free(); } GLuint CGfxMaterial::GetName(void) const { return m_name; } void CGfxMaterial::Lock(void) { refCount++; } void CGfxMaterial::Unlock(bool bFree) { if (refCount > 0) { refCount--; } if (refCount == 0 && bFree) { CGfxRenderer::GetInstance()->FreeMaterial(this); } } void CGfxMaterial::Bind(void) const { if (m_pProgram) { m_pProgram->UseProgram(); BindState(); BindUniforms(m_pProgram); BindTextures(m_pProgram, 0); } } void CGfxMaterial::BindState(void) const { if (m_state.bEnableCullFace) { glEnable(GL_CULL_FACE); } else { glDisable(GL_CULL_FACE); } if (m_state.bEnableDepthTest) { glEnable(GL_DEPTH_TEST); } else { glDisable(GL_DEPTH_TEST); } if (m_state.bEnableDepthWrite) { glDepthMask(GL_TRUE); } else { glDepthMask(GL_FALSE); } if (m_state.bEnableBlend) { glEnable(GL_BLEND); } else { glDisable(GL_BLEND); } if (m_state.bEnablePolygonOffset) { glEnable(GL_POLYGON_OFFSET_FILL); } else { glDisable(GL_POLYGON_OFFSET_FILL); } glCullFace(m_state.cullFace); glFrontFace(m_state.frontFace); glDepthFunc(m_state.depthFunc); glBlendFunc(m_state.srcBlendFactor, m_state.dstBlendFactor); glPolygonOffset(m_state.polygonOffsetFactor, m_state.polygonOffsetUnits); glColorMask(m_state.bEnableColorWrite[0] ? GL_TRUE : GL_FALSE, m_state.bEnableColorWrite[1] ? GL_TRUE : GL_FALSE, m_state.bEnableColorWrite[2] ? GL_TRUE : GL_FALSE, m_state.bEnableColorWrite[3] ? 
GL_TRUE : GL_FALSE); } void CGfxMaterial::BindUniforms(CGfxProgram *pProgram) const { for (const auto &itUniform : m_pUniformVec1s) { itUniform.second->Apply(); pProgram->BindUniformBuffer(itUniform.first, itUniform.second->GetBuffer(), itUniform.second->GetSize()); } for (const auto &itUniform : m_pUniformVec2s) { itUniform.second->Apply(); pProgram->BindUniformBuffer(itUniform.first, itUniform.second->GetBuffer(), itUniform.second->GetSize()); } for (const auto &itUniform : m_pUniformVec3s) { itUniform.second->Apply(); pProgram->BindUniformBuffer(itUniform.first, itUniform.second->GetBuffer(), itUniform.second->GetSize()); } for (const auto &itUniform : m_pUniformVec4s) { itUniform.second->Apply(); pProgram->BindUniformBuffer(itUniform.first, itUniform.second->GetBuffer(), itUniform.second->GetSize()); } for (const auto &itUniform : m_pUniformMat4s) { itUniform.second->Apply(); pProgram->BindUniformBuffer(itUniform.first, itUniform.second->GetBuffer(), itUniform.second->GetSize()); } } void CGfxMaterial::BindTextures(CGfxProgram *pProgram, GLuint indexUnit) const { for (const auto &itTexture : m_pTexture2ds) { if (pProgram->BindTexture2D(itTexture.first, itTexture.second->GetTexture(), m_pSamplers.find(itTexture.first)->second->GetSampler(), indexUnit)) { indexUnit++; } } for (const auto &itTexture : m_pTexture2dArrays) { if (pProgram->BindTextureArray(itTexture.first, itTexture.second->GetTexture(), m_pSamplers.find(itTexture.first)->second->GetSampler(), indexUnit)) { indexUnit++; } } for (const auto &itTexture : m_pTextureCubeMaps) { if (pProgram->BindTextureCubeMap(itTexture.first, itTexture.second->GetTexture(), m_pSamplers.find(itTexture.first)->second->GetSampler(), indexUnit)) { indexUnit++; } } } bool CGfxMaterial::Load(const char *szFileName) { /* <Material> <Cull enable="" cull_face="" front_face="" /> <Depth enable_test="" enable_write="" depth_func="" /> <Color enable_write_red="" enable_write_green="" enable_write_blue="" enable_write_alpha="" /> <Blend enable="" src_factor="" dst_factor="" /> <Offset enable="" factor="" units="" /> <Program vertex_file_name="" fragment_file_name="" /> <Texture2D file_name="" name="" min_filter="" mag_filter="" address_mode="" /> <Texture2DArray file_name="" name="" min_filter="" mag_filter="" address_mode="" /> <TextureCubeMap file_name="" name="" min_filter="" mag_filter="" address_mode="" /> <Uniform1f name="" value="" /> <Uniform2f name="" value="" /> <Uniform3f name="" value="" /> <Uniform4f name="" value="" /> </Material> */ try { Free(); LogOutput("LoadMaterial(%s)\n", szFileName); { char szFullPath[260]; CGfxRenderer::GetInstance()->GetMaterialFullPath(szFileName, szFullPath); TiXmlDocument xmlDoc; if (xmlDoc.LoadFile(szFullPath) == false) throw 0; TiXmlNode *pMaterialNode = xmlDoc.FirstChild("Material"); if (pMaterialNode == NULL) throw 1; if (LoadState(pMaterialNode) == false) throw 2; if (LoadProgram(pMaterialNode) == false) throw 3; if (LoadTexture2D(pMaterialNode) == false) throw 4; if (LoadTexture2DArray(pMaterialNode) == false) throw 5; if (LoadTextureCubeMap(pMaterialNode) == false) throw 6; if (LoadUniformVec1(pMaterialNode) == false) throw 7; if (LoadUniformVec2(pMaterialNode) == false) throw 8; if (LoadUniformVec3(pMaterialNode) == false) throw 9; if (LoadUniformVec4(pMaterialNode) == false) throw 10; } return true; } catch (int) { Free(); return false; } } bool CGfxMaterial::LoadState(TiXmlNode *pMaterialNode) { try { LogOutput("\tLoadState ... 
"); { if (TiXmlNode *pCullNode = pMaterialNode->FirstChild("Cull")) { m_state.bEnableCullFace = pCullNode->ToElement()->AttributeBool("enable"); m_state.cullFace = StringToCullFace(pCullNode->ToElement()->AttributeString("cull_face")); m_state.frontFace = StringToFrontFace(pCullNode->ToElement()->AttributeString("front_face")); } if (TiXmlNode *pDepthNode = pMaterialNode->FirstChild("Depth")) { m_state.bEnableDepthTest = pDepthNode->ToElement()->AttributeBool("enable_test"); m_state.bEnableDepthWrite = pDepthNode->ToElement()->AttributeBool("enable_write"); m_state.depthFunc = StringToDepthFunc(pDepthNode->ToElement()->AttributeString("depth_func")); } if (TiXmlNode *pColorNode = pMaterialNode->FirstChild("Color")) { m_state.bEnableColorWrite[0] = pColorNode->ToElement()->AttributeBool("enable_write_red"); m_state.bEnableColorWrite[1] = pColorNode->ToElement()->AttributeBool("enable_write_green"); m_state.bEnableColorWrite[2] = pColorNode->ToElement()->AttributeBool("enable_write_blue"); m_state.bEnableColorWrite[3] = pColorNode->ToElement()->AttributeBool("enable_write_alpha"); } if (TiXmlNode *pBlendNode = pMaterialNode->FirstChild("Blend")) { m_state.bEnableBlend = pBlendNode->ToElement()->AttributeBool("enable"); m_state.srcBlendFactor = StringToBlendSrcFactor(pBlendNode->ToElement()->AttributeString("src_factor")); m_state.dstBlendFactor = StringToBlendDstFactor(pBlendNode->ToElement()->AttributeString("dst_factor")); } if (TiXmlNode *pOffsetNode = pMaterialNode->FirstChild("Offset")) { m_state.bEnablePolygonOffset = pOffsetNode->ToElement()->AttributeBool("enable"); m_state.polygonOffsetFactor = pOffsetNode->ToElement()->AttributeFloat1("factor"); m_state.polygonOffsetUnits = pOffsetNode->ToElement()->AttributeFloat1("units"); } } LogOutput("OK\n"); return true; } catch (int) { LogOutput("Fail\n"); return false; } } bool CGfxMaterial::LoadProgram(TiXmlNode *pMaterialNode) { try { LogOutput("\tLoadProgram ... "); { TiXmlNode *pProgramNode = pMaterialNode->FirstChild("Program"); if (pProgramNode == NULL) throw 0; const char *szVertexFileName = pProgramNode->ToElement()->AttributeString("vertex_file_name"); const char *szFragmentFileName = pProgramNode->ToElement()->AttributeString("fragment_file_name"); if (szVertexFileName == NULL || szFragmentFileName == NULL) throw 1; m_pProgram = CGfxRenderer::GetInstance()->CreateProgram(szVertexFileName, szFragmentFileName); if (m_pProgram->IsValid() == false) throw 2; } LogOutput("OK\n"); return true; } catch (int err) { LogOutput("Fail(%d)\n", err); return false; } } bool CGfxMaterial::LoadTexture2D(TiXmlNode *pMaterialNode) { try { TiXmlNode *pTextureNode = pMaterialNode->FirstChild("Texture2D"); if (pTextureNode == NULL) return true; LogOutput("\tLoadTexture2D ... 
"); { do { const char *szFileName = pTextureNode->ToElement()->AttributeString("file_name"); const char *szName = pTextureNode->ToElement()->AttributeString("name"); if (szFileName == NULL || szName == NULL) throw 0; GLuint name = HashValue(szName); GLenum minFilter = StringToMinFilter(pTextureNode->ToElement()->AttributeString("min_filter")); GLenum magFilter = StringToMagFilter(pTextureNode->ToElement()->AttributeString("mag_filter")); GLenum addressMode = StringToAddressMode(pTextureNode->ToElement()->AttributeString("address_mode")); if (minFilter == GL_INVALID_ENUM || magFilter == GL_INVALID_ENUM || addressMode == GL_INVALID_ENUM) throw 1; if (m_pTexture2ds.find(name) != m_pTexture2ds.end()) { throw 2; } if (m_pProgram->IsTextureValid(name)) { m_pSamplers[name] = CGfxRenderer::GetInstance()->CreateSampler(minFilter, magFilter, addressMode); m_pTexture2ds[name] = CGfxRenderer::GetInstance()->LoadTexture2D(szFileName); m_pTexture2ds[name]->Lock(); } } while (pTextureNode = pMaterialNode->IterateChildren("Texture2D", pTextureNode)); } LogOutput("OK\n"); return true; } catch (int err) { LogOutput("Fail(%d)\n", err); return false; } } bool CGfxMaterial::LoadTexture2DArray(TiXmlNode *pMaterialNode) { try { TiXmlNode *pTextureNode = pMaterialNode->FirstChild("Texture2DArray"); if (pTextureNode == NULL) return true; LogOutput("\tLoadTexture2DArray ... "); { do { const char *szFileName = pTextureNode->ToElement()->AttributeString("file_name"); const char *szName = pTextureNode->ToElement()->AttributeString("name"); if (szFileName == NULL || szName == NULL) throw 0; GLuint name = HashValue(szName); GLenum minFilter = StringToMinFilter(pTextureNode->ToElement()->AttributeString("min_filter")); GLenum magFilter = StringToMagFilter(pTextureNode->ToElement()->AttributeString("mag_filter")); GLenum addressMode = StringToAddressMode(pTextureNode->ToElement()->AttributeString("address_mode")); if (minFilter == GL_INVALID_ENUM || magFilter == GL_INVALID_ENUM || addressMode == GL_INVALID_ENUM) throw 1; if (m_pTexture2dArrays.find(name) != m_pTexture2dArrays.end()) { throw 2; } if (m_pProgram->IsTextureValid(name)) { m_pSamplers[name] = CGfxRenderer::GetInstance()->CreateSampler(minFilter, magFilter, addressMode); m_pTexture2dArrays[name] = CGfxRenderer::GetInstance()->LoadTexture2DArray(szFileName); m_pTexture2dArrays[name]->Lock(); } } while (pTextureNode = pMaterialNode->IterateChildren("Texture2DArray", pTextureNode)); } LogOutput("OK\n"); return true; } catch (int err) { LogOutput("Fail(%d)\n", err); return false; } } bool CGfxMaterial::LoadTextureCubeMap(TiXmlNode *pMaterialNode) { try { TiXmlNode *pTextureNode = pMaterialNode->FirstChild("TextureCubeMap"); if (pTextureNode == NULL) return true; LogOutput("\tLoadTextureCubeMap ... 
"); { do { const char *szFileName = pTextureNode->ToElement()->AttributeString("file_name"); const char *szName = pTextureNode->ToElement()->AttributeString("name"); if (szFileName == NULL || szName == NULL) throw 0; GLuint name = HashValue(szName); GLenum minFilter = StringToMinFilter(pTextureNode->ToElement()->AttributeString("min_filter")); GLenum magFilter = StringToMagFilter(pTextureNode->ToElement()->AttributeString("mag_filter")); GLenum addressMode = StringToAddressMode(pTextureNode->ToElement()->AttributeString("address_mode")); if (minFilter == GL_INVALID_ENUM || magFilter == GL_INVALID_ENUM || addressMode == GL_INVALID_ENUM) throw 1; if (m_pTextureCubeMaps.find(name) != m_pTextureCubeMaps.end()) { throw 2; } if (m_pProgram->IsTextureValid(name)) { m_pSamplers[name] = CGfxRenderer::GetInstance()->CreateSampler(minFilter, magFilter, addressMode); m_pTextureCubeMaps[name] = CGfxRenderer::GetInstance()->LoadTextureCubeMap(szFileName); m_pTextureCubeMaps[name]->Lock(); } } while (pTextureNode = pMaterialNode->IterateChildren("TextureCubeMap", pTextureNode)); } LogOutput("OK\n"); return true; } catch (int err) { LogOutput("Fail(%d)\n", err); return false; } } bool CGfxMaterial::LoadUniformVec1(TiXmlNode *pMaterialNode) { try { TiXmlNode *pUniformNode = pMaterialNode->FirstChild("Uniform1f"); if (pUniformNode == NULL) return true; LogOutput("\tLoadUniformVec1 ... "); { do { const char *szName = pUniformNode->ToElement()->AttributeString("name"); if (szName == NULL) throw 0; GLuint name = HashValue(szName); GLfloat value = pUniformNode->ToElement()->AttributeFloat1("value"); if (m_pUniformVec1s.find(name) != m_pUniformVec1s.end()) { throw 1; } if (m_pProgram->IsUniformValid(name)) { if (m_pUniformVec1s[name] = new CGfxUniformVec1) { m_pUniformVec1s[name]->SetValue(value); } } } while (pUniformNode = pMaterialNode->IterateChildren("Uniform1f", pUniformNode)); } LogOutput("OK\n"); return true; } catch (int err) { LogOutput("Fail(%d)\n", err); return false; } } bool CGfxMaterial::LoadUniformVec2(TiXmlNode *pMaterialNode) { try { TiXmlNode *pUniformNode = pMaterialNode->FirstChild("Uniform2f"); if (pUniformNode == NULL) return true; LogOutput("\tLoadUniformVec2 ... "); { do { const char *szName = pUniformNode->ToElement()->AttributeString("name"); if (szName == NULL) throw 0; GLuint name = HashValue(szName); GLfloat value[2]; pUniformNode->ToElement()->AttributeFloat2("value", value); if (m_pUniformVec2s.find(name) != m_pUniformVec2s.end()) { throw 1; } if (m_pProgram->IsUniformValid(name)) { if (m_pUniformVec2s[name] = new CGfxUniformVec2) { m_pUniformVec2s[name]->SetValue(value[0], value[1]); } } } while (pUniformNode = pMaterialNode->IterateChildren("Uniform2f", pUniformNode)); } LogOutput("OK\n"); return true; } catch (int err) { LogOutput("Fail(%d)\n", err); return false; } } bool CGfxMaterial::LoadUniformVec3(TiXmlNode *pMaterialNode) { try { TiXmlNode *pUniformNode = pMaterialNode->FirstChild("Uniform3f"); if (pUniformNode == NULL) return true; LogOutput("\tLoadUniformVec3 ... 
"); { do { const char *szName = pUniformNode->ToElement()->AttributeString("name"); if (szName == NULL) throw 0; GLuint name = HashValue(szName); GLfloat value[3]; pUniformNode->ToElement()->AttributeFloat3("value", value); if (m_pUniformVec3s.find(name) != m_pUniformVec3s.end()) { throw 1; } if (m_pProgram->IsUniformValid(name)) { if (m_pUniformVec3s[name] = new CGfxUniformVec3) { m_pUniformVec3s[name]->SetValue(value[0], value[1], value[2]); } } } while (pUniformNode = pMaterialNode->IterateChildren("Uniform3f", pUniformNode)); } LogOutput("OK\n"); return true; } catch (int err) { LogOutput("Fail(%d)\n", err); return false; } } bool CGfxMaterial::LoadUniformVec4(TiXmlNode *pMaterialNode) { try { TiXmlNode *pUniformNode = pMaterialNode->FirstChild("Uniform4f"); if (pUniformNode == NULL) return true; LogOutput("\tLoadUniformVec4 ... "); { do { const char *szName = pUniformNode->ToElement()->AttributeString("name"); if (szName == NULL) throw 0; GLuint name = HashValue(szName); GLfloat value[4]; pUniformNode->ToElement()->AttributeFloat4("value", value); if (m_pUniformVec4s.find(name) != m_pUniformVec4s.end()) { throw 1; } if (m_pProgram->IsUniformValid(name)) { if (m_pUniformVec4s[name] = new CGfxUniformVec4) { m_pUniformVec4s[name]->SetValue(value[0], value[1], value[2], value[3]); } } } while (pUniformNode = pMaterialNode->IterateChildren("Uniform4f", pUniformNode)); } LogOutput("OK\n"); return true; } catch (int err) { LogOutput("Fail(%d)\n", err); return false; } } void CGfxMaterial::Free(void) { for (auto &itTexture : m_pTexture2ds) { itTexture.second->Unlock(false); CGfxRenderer::GetInstance()->FreeTexture(itTexture.second); } for (auto &itTexture : m_pTexture2dArrays) { itTexture.second->Unlock(false); CGfxRenderer::GetInstance()->FreeTexture(itTexture.second); } for (auto &itTexture : m_pTextureCubeMaps) { itTexture.second->Unlock(false); CGfxRenderer::GetInstance()->FreeTexture(itTexture.second); } for (auto &itUniform : m_pUniformVec1s) { delete itUniform.second; } for (auto &itUniform : m_pUniformVec2s) { delete itUniform.second; } for (auto &itUniform : m_pUniformVec3s) { delete itUniform.second; } for (auto &itUniform : m_pUniformVec4s) { delete itUniform.second; } for (auto &itUniform : m_pUniformMat4s) { delete itUniform.second; } m_pProgram = NULL; m_pSamplers.clear(); m_pTexture2ds.clear(); m_pTexture2dArrays.clear(); m_pTextureCubeMaps.clear(); m_pUniformVec1s.clear(); m_pUniformVec2s.clear(); m_pUniformVec3s.clear(); m_pUniformVec4s.clear(); m_pUniformMat4s.clear(); } void CGfxMaterial::SetEnableCullFace(bool bEnable, GLenum cullFace, GLenum frontFace) { m_state.bEnableCullFace = bEnable; m_state.cullFace = cullFace; m_state.frontFace = frontFace; } void CGfxMaterial::SetEnableDepthTest(bool bEnable, GLenum depthFunc) { m_state.bEnableDepthTest = bEnable; m_state.depthFunc = depthFunc; } void CGfxMaterial::SetEnableDepthWrite(bool bEnable) { m_state.bEnableDepthWrite = bEnable; } void CGfxMaterial::SetEnableColorWrite(bool bEnableRed, bool bEnableGreen, bool bEnableBlue, bool bEnableAlpha) { m_state.bEnableColorWrite[0] = bEnableRed; m_state.bEnableColorWrite[1] = bEnableGreen; m_state.bEnableColorWrite[2] = bEnableBlue; m_state.bEnableColorWrite[3] = bEnableAlpha; } void CGfxMaterial::SetEnableBlend(bool bEnable, GLenum srcFactor, GLenum dstFactor) { m_state.bEnableBlend = bEnable; m_state.srcBlendFactor = srcFactor; m_state.dstBlendFactor = dstFactor; } void CGfxMaterial::SetEnablePolygonOffset(bool bEnable, GLfloat factor, GLfloat units) { m_state.bEnablePolygonOffset 
= bEnable; m_state.polygonOffsetFactor = factor; m_state.polygonOffsetUnits = units; } bool CGfxMaterial::IsEnableBlend(void) const { return m_state.bEnableBlend; } CGfxProgram* CGfxMaterial::GetProgram(void) { return m_pProgram; } CGfxSampler* CGfxMaterial::GetSampler(const char *szName, GLenum minFilter, GLenum magFilter, GLenum addressMode) { GLuint name = HashValue(szName); if ((m_pProgram == NULL) || (m_pProgram && m_pProgram->IsTextureValid(name))) { if (m_pSamplers[name] == NULL) { m_pSamplers[name] = CGfxRenderer::GetInstance()->CreateSampler(minFilter, magFilter, addressMode); } return m_pSamplers[name]; } return NULL; } CGfxTexture2D* CGfxMaterial::GetTexture2D(const char *szName) { GLuint name = HashValue(szName); if ((m_pProgram == NULL) || (m_pProgram && m_pProgram->IsTextureValid(name))) { if (m_pTexture2ds[name] == NULL) { m_pTexture2ds[name] = new CGfxTexture2D(HashValue(szName)); } return m_pTexture2ds[name]; } return NULL; } CGfxTexture2DArray* CGfxMaterial::GetTexture2DArray(const char *szName) { GLuint name = HashValue(szName); if ((m_pProgram == NULL) || (m_pProgram && m_pProgram->IsTextureValid(name))) { if (m_pTexture2dArrays[name] == NULL) { m_pTexture2dArrays[name] = new CGfxTexture2DArray(HashValue(szName)); } return m_pTexture2dArrays[name]; } return NULL; } CGfxTextureCubeMap* CGfxMaterial::GetTextureCubeMap(const char *szName) { GLuint name = HashValue(szName); if ((m_pProgram == NULL) || (m_pProgram && m_pProgram->IsTextureValid(name))) { if (m_pTextureCubeMaps[name] == NULL) { m_pTextureCubeMaps[name] = new CGfxTextureCubeMap(HashValue(szName)); } return m_pTextureCubeMaps[name]; } return NULL; } CGfxUniformVec1* CGfxMaterial::GetUniformVec1(const char *szName) { GLuint name = HashValue(szName); if ((m_pProgram == NULL) || (m_pProgram && m_pProgram->IsUniformValid(name))) { if (m_pUniformVec1s[name] == NULL) { m_pUniformVec1s[name] = new CGfxUniformVec1; } return m_pUniformVec1s[name]; } return NULL; } CGfxUniformVec2* CGfxMaterial::GetUniformVec2(const char *szName) { GLuint name = HashValue(szName); if ((m_pProgram == NULL) || (m_pProgram && m_pProgram->IsUniformValid(name))) { if (m_pUniformVec2s[name] == NULL) { m_pUniformVec2s[name] = new CGfxUniformVec2; } return m_pUniformVec2s[name]; } return NULL; } CGfxUniformVec3* CGfxMaterial::GetUniformVec3(const char *szName) { GLuint name = HashValue(szName); if ((m_pProgram == NULL) || (m_pProgram && m_pProgram->IsUniformValid(name))) { if (m_pUniformVec3s[name] == NULL) { m_pUniformVec3s[name] = new CGfxUniformVec3; } return m_pUniformVec3s[name]; } return NULL; } CGfxUniformVec4* CGfxMaterial::GetUniformVec4(const char *szName) { GLuint name = HashValue(szName); if ((m_pProgram == NULL) || (m_pProgram && m_pProgram->IsUniformValid(name))) { if (m_pUniformVec4s[name] == NULL) { m_pUniformVec4s[name] = new CGfxUniformVec4; } return m_pUniformVec4s[name]; } return NULL; } CGfxUniformMat4* CGfxMaterial::GetUniformMat4(const char *szName) { GLuint name = HashValue(szName); if ((m_pProgram == NULL) || (m_pProgram && m_pProgram->IsUniformValid(name))) { if (m_pUniformMat4s[name] == NULL) { m_pUniformMat4s[name] = new CGfxUniformMat4; } return m_pUniformMat4s[name]; } return NULL; } GLuint CGfxMaterial::GetTextureUnits(void) const { GLuint numTexUnits = 0; numTexUnits += m_pTexture2ds.size(); numTexUnits += m_pTexture2dArrays.size(); numTexUnits += m_pTextureCubeMaps.size(); return numTexUnits; }
How Do You Tell Which Parents Are Abusive and Which Are the Victims of Abusive Children?

Members of estranged parents' forums say their adult children are abusive. They claim verbal abuse, emotional abuse, and deliberate mind games; many claim financial abuse; a few claim extortion, harassment, even physical assaults. Members diagnose their children with alcoholism, drug addiction, Borderline Personality Disorder, Narcissistic Personality Disorder, and sociopathy, all conditions that can make adult children just as much of a threat to their parents as abusive parents are to adult children.[1]

This is exactly the sad picture you'd expect if estranged parents' forums were gathering places for parents victimized by abusive children. It's also exactly what you'd expect if you're familiar with the acronym DARVO.

So how do you tell the abusers from the victims? That's an excellent question, and one I'm working to answer. Without the chance to interview both parents and children, then check up on their stories, it's impossible to get a real answer to this question; and even then, some abusers are skilled enough to convince anyone that they're the wronged party. However, it's possible to come to tentative conclusions. My working principles are:

1. Abusiveness is not an either/or situation. Abusive parents can have abusive children. In fact, abusive parents are more likely to have abusive children. So it's not a simple matter of determining that one party is abusive and calling it a day.

2. Abusers lie. Bear that in mind at all times—when reading both parents' and children's accounts. (This is the point I stumble over the most because I'm biased toward the children.)

3. If a person's own writing shows that they lie, rewrite reality, or otherwise engage in cognitive distortions, they're abusive. Period. Instant kill shot. The only exception is if they catch themselves distorting, correct it, and reflect upon it. That suggests that they have abusive tendencies, but are working to improve themselves in a most un-abuserlike manner. Unfortunately, that also means they're not entirely trustworthy, and can still cause pain to those around them; so if anyone is reading this list to decide whether someone in their life is toxic, a) please don't and b) go with your gut to decide whether the person is safe to be around.

4. Look for patterns of distorted beliefs. Common beliefs that show up in estranged parents' posts are:
- My child is responsible for my happiness.
- My child is permanently subordinate to me.
- My child wants to control me.
- Any limits my child sets on me are a power play that I must resist.
- My child's decision to ignore my advice or make a choice I disapprove of is a sign of immaturity.
- My child was most real and true to himself when he was a preschooler (and had not begun to defy me).
- I am the best friend my child will ever have./I am my child's only true friend.
- My child is living only half a life if he or she doesn't have a relationship with me.
- If the relationship had any good times at all, the child has no justification for breaking it off.
- If I put up with a certain level of mistreatment from my own parents, then my child should put up with the same level of mistreatment from me.
- My pain is the complete justification for why my child should resume a relationship with me.
- Children have no right to break off relationships with their parents.
- Refusing to have a relationship with me is abusive.

5. Is the abuse offensive or defensive? Is one party tracking down the other party to abuse them?
Or does the abuse happen only when one party insists upon contacting the other party? If a daughter drops by her mother's house for a visit and ends up shoving and punching her mother, there's an excellent chance that her abuse is offensive. If a mother drops by her daughter's house despite requests for no contact, and the daughter ends up shoving and punching her mother, the abuse is defensive—and is probably self-defense, not abuse.

[1] The one form of abuse members don't claim is elder abuse. Most of them aren't yet dependent on others for care, and the few who are have other caretakers or are in the care of social services. It's hard to abuse someone you don't see.

Updated 5/4/2015
Tamoxifen-induced radiation recall dermatitis: three calls from Egypt

Tamoxifen is a selective estrogen receptor modulator used, alongside radiotherapy, for hormone receptor-positive breast cancer. Tamoxifen-induced radiation recall dermatitis (RRD) was first described in 1992; however, only seven cases have been reported since.1 Herein, we report three Egyptian patients with tamoxifen-induced RRD to highlight its diagnostic and therapeutic aspects.
#
# Copyright (C) 2022 Databricks, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from typing import Tuple

import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.dates import AutoDateLocator, AutoDateFormatter


def plot(history_pd: pd.DataFrame, forecast_pd: pd.DataFrame,
         xlabel: str = 'ds', ylabel: str = 'y',
         figsize: Tuple[int, int] = (10, 6)):
    """
    Plot the forecast. Adapted from prophet.plot.plot. See
    https://github.com/facebook/prophet/blob/ba9a5a2c6e2400206017a5ddfd71f5042da9f65b/python/prophet/plot.py#L42.

    :param history_pd: pd.DataFrame of history data.
    :param forecast_pd: pd.DataFrame with forecasts and optionally confidence interval, sorted by time.
    :param xlabel: Optional label name on X-axis
    :param ylabel: Optional label name on Y-axis
    :param figsize: Optional tuple width, height in inches.
    :return: A matplotlib figure.
    """
    history_pd = history_pd.sort_values(by=["ds"])
    history_pd["ds"] = pd.to_datetime(history_pd["ds"])
    fig = plt.figure(facecolor='w', figsize=figsize)
    ax = fig.add_subplot(111)
    fcst_t = forecast_pd['ds'].dt.to_pydatetime()
    ax.plot(history_pd['ds'].dt.to_pydatetime(), history_pd['y'], 'k.',
            label='Observed data points')
    ax.plot(fcst_t, forecast_pd['yhat'], ls='-', c='#0072B2', label='Forecast')
    if "yhat_lower" in forecast_pd and "yhat_upper" in forecast_pd:
        ax.fill_between(fcst_t, forecast_pd['yhat_lower'],
                        forecast_pd['yhat_upper'], color='#0072B2',
                        alpha=0.2, label='Uncertainty interval')
    # Specify formatting to workaround matplotlib issue #12925
    locator = AutoDateLocator(interval_multiples=False)
    formatter = AutoDateFormatter(locator)
    ax.xaxis.set_major_locator(locator)
    ax.xaxis.set_major_formatter(formatter)
    ax.grid(True, which='major', c='gray', ls='-', lw=1, alpha=0.2)
    ax.set_xlabel(xlabel)
    ax.set_ylabel(ylabel)
    ax.legend()
    fig.tight_layout()
    return fig
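A minimal usage sketch for the function above. The frames, column values and file name here are made up for illustration; any DataFrames with the ds/y and ds/yhat columns described in the docstring should work:

import pandas as pd

# Illustrative data only: 30 days of history and a 7-day forecast.
history = pd.DataFrame({
    "ds": pd.date_range("2022-01-01", periods=30, freq="D"),
    "y": range(30),
})
forecast = pd.DataFrame({
    "ds": pd.date_range("2022-01-31", periods=7, freq="D"),
    "yhat": [float(v) for v in range(30, 37)],
})

fig = plot(history, forecast, ylabel="demand")
fig.savefig("forecast.png")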
import { forwardRef, Ref, ReactElement, useContext } from 'react'
import { createStyles, makeStyles } from '@material-ui/core/styles'
import DialogTitle from '@material-ui/core/DialogTitle'
import DialogContent from '@material-ui/core/DialogContent'
import DialogActions from '@material-ui/core/DialogActions'
import Dialog from '@material-ui/core/Dialog'
import { TransitionProps } from '@material-ui/core/transitions'
import Slide from '@material-ui/core/Slide'

import { parametersContext } from '../context/useParameters'
import ConfirmButton from '../atoms/ConfirmButton'
import useIsButtonDisable from '../hooks/useIsButtonDisable'
import useDialogOpen from '../hooks/useDialogOpen'
import PreviewDocxText from '../molecules/PreviewDocxText'
import ExportDialogButtons from '../molecules/ExportDialogButtons'
import IsPrintPaidLeaveRequestForm from '../atoms/IsPrintPaidLeaveRequestForm'

const useStyles = makeStyles((theme) =>
  createStyles({
    root: {
      display: 'flex',
      flexWrap: 'wrap',
      flexDirection: 'row',
      justifyContent: 'center',
    },
    title: {
      textAlign: 'center',
    },
    confirm: {
      marginBottom: theme.spacing(3),
    },
    buttonArea: {
      display: 'flex',
      flexWrap: 'wrap',
      flexDirection: 'row',
      justifyContent: 'center',
    },
  })
)

const Transition = forwardRef(function Transition(
  // eslint-disable-next-line react/require-default-props
  props: TransitionProps & { children?: ReactElement },
  ref: Ref<unknown>
) {
  // eslint-disable-next-line react/jsx-props-no-spreading
  return <Slide direction="up" ref={ref} {...props} />
})

export default function ExportDialog(): ReactElement {
  const context = useContext(parametersContext)
  const { daysOfPaidLeaveRemaining } = context
  const classes = useStyles()
  const { isOpen, handleOpen, handleClose } = useDialogOpen()

  return (
    <div className={classes.root}>
      <div className={classes.confirm}>
        <ConfirmButton
          text="確認"
          isDisable={useIsButtonDisable(context)}
          onClickFunction={handleOpen}
        />
      </div>
      <Dialog
        open={isOpen}
        TransitionComponent={Transition}
        keepMounted
        onClose={handleClose}
        fullWidth
        maxWidth="md"
      >
        <DialogTitle className={classes.title}>プレビュー</DialogTitle>
        <DialogContent>
          <PreviewDocxText />
          <IsPrintPaidLeaveRequestForm
            daysOfPaidLeaveRemaining={daysOfPaidLeaveRemaining}
          />
        </DialogContent>
        <DialogActions className={classes.buttonArea}>
          <ExportDialogButtons handleClose={handleClose} />
        </DialogActions>
      </Dialog>
    </div>
  )
}
/**
 * Extend the page blob file if we are close to the end.
 */
private void conditionalExtendFile() {

  // Maximum size we will grow the page blob to (1 TB).
  final long MAX_PAGE_BLOB_SIZE = 1024L * 1024L * 1024L * 1024L;

  // If the blob is already at the maximum size, don't try to extend it.
  if (currentBlobSize == MAX_PAGE_BLOB_SIZE) {
    return;
  }

  // Only extend when we are within one maximum-size write of the end.
  if (currentBlobSize - currentBlobOffset <= MAX_RAW_BYTES_PER_REQUEST) {
    CloudPageBlob cloudPageBlob = (CloudPageBlob) blob.getBlob();
    long newSize = currentBlobSize + configuredPageBlobExtensionSize;
    if (newSize > MAX_PAGE_BLOB_SIZE) {
      newSize = MAX_PAGE_BLOB_SIZE;
    }

    // Resize with up to three attempts and a quadratic back-off
    // (2, 8, then 18 seconds between attempts).
    final int MAX_RETRIES = 3;
    int retries = 1;
    boolean resizeDone = false;
    while (!resizeDone && retries <= MAX_RETRIES) {
      try {
        cloudPageBlob.resize(newSize);
        resizeDone = true;
        currentBlobSize = newSize;
      } catch (StorageException e) {
        LOG.warn("Failed to extend size of " + cloudPageBlob.getUri(), e);
        try {
          Thread.sleep(2000 * retries * retries);
        } catch (InterruptedException e1) {
          Thread.currentThread().interrupt();
        }
      } finally {
        retries++;
      }
    }
  }
}
// EntryToDomainTreeIndex returns the index of a DomainTreeEntry in the domain tree
// for the specified domain (after domain name normalization).
//
// Returns an error if the domain tree does not exist or if the specified
// certificate could not be found in the domain tree.
func (dm *DomainMap) EntryToDomainTreeIndex(entry DomainTreeEntry, domain string) (uint64, error) {
	tree, err := dm.GetDomainTree(domain)
	if err != nil {
		return 0, err
	}
	return tree.EntryToDomainTreeIndex(entry)
}
import React from "react";
import { Story } from "@storybook/react/types-6-0";

import SWAPTheme from "../SWAPTheme/SWAPTheme";
import { CardProps } from "./Card.types";
import Card from "./Card";
import Typography from "../Typography/Typography";
import Chip from "../Chip/Chip";
import SWAPSpace from "../SWAPSpace/SWAPSpace";

export default {
  title: "Card",
  component: Card,
  parameters: {
    docs: {
      description: {
        component: " ",
      },
    },
  },
};

const Demo: Story<CardProps> = (args) => {
  const [loading, setLoading] = React.useState(true);
  React.useEffect(() => {
    setTimeout(() => {
      setLoading(false);
    }, 2000);
  }, []);
  return (
    <SWAPTheme>
      <Card {...args} />
      <Typography variant="h4" style={{ margin: "24px 0 16px" }}>
        Example
      </Typography>
      <div style={{ display: "flex", gap: 40 }}>
        <Card
          width={360}
          loading={loading}
          children={
            <>
              <div style={{ display: "flex", justifyContent: "space-between" }}>
                <Typography variant="body2" color="black800">
                  2020/12/07
                </Typography>
                <Chip variant="success" label="已付款" />
              </div>
              <SWAPSpace size={8} />
              <Typography>
                iOS App 案件,要交付完整設計和介面規格書,並確認素材都有輸出
              </Typography>
              <SWAPSpace size={12} />
              <div style={{ display: "flex", justifyContent: "space-between" }}>
                <Typography variant="body1" color="black800">
                  50 薪資所得
                </Typography>
                <Typography variant="subtitle">SP 56,000</Typography>
              </div>
            </>
          }
          buttons={[
            { title: "封存請款單", onClick: () => {} },
            {
              title: (
                <div style={{ display: "flex", alignItems: "center" }}>
                  {icon("#4862CC")}
                  <span>瀏覽請款單</span>
                </div>
              ),
              onClick: () => {},
              variant: "text",
            },
          ]}
        />
        <Card
          width={328}
          loading={loading}
          children={
            <>
              <Typography>專家方案自動扣款</Typography>
              <SWAPSpace size={8} />
              <div style={{ display: "flex", justifyContent: "space-between" }}>
                <Typography variant="body1" color="black800">
                  2020/12/07
                </Typography>
                <Typography variant="subtitle">TWD 720</Typography>
              </div>
            </>
          }
          buttons={[
            {
              title: (
                <div style={{ display: "flex", alignItems: "center" }}>
                  {icon("#4B4B4B")}
                  <span>發票</span>
                </div>
              ),
              onClick: () => {},
            },
            {
              title: (
                <div style={{ display: "flex", alignItems: "center" }}>
                  {icon("#4B4B4B")}
                  <span>勞報單</span>
                </div>
              ),
              onClick: () => {},
            },
            {
              title: (
                <div style={{ display: "flex", alignItems: "center" }}>
                  {icon("#4B4B4B")}
                  <span>請款單</span>
                </div>
              ),
              onClick: () => {},
            },
          ]}
        />
      </div>
    </SWAPTheme>
  );
};

const icon = (color: string) => (
  <svg
    width="21"
    height="20"
    viewBox="0 0 21 20"
    fill="none"
    xmlns="http://www.w3.org/2000/svg"
  >
    <path
      d="M4.91667 2.5C3.99167 2.5 3.25 3.24167 3.25 4.16667V15.8333C3.25 16.7583 3.99167 17.5 4.91667 17.5H16.5833C17.5083 17.5 18.25 16.7583 18.25 15.8333V4.16667C18.25 3.24167 17.5083 2.5 16.5833 2.5H4.91667ZM4.91667 4.16667H16.5833V15.8333H4.91667V4.16667ZM6.58333 5.83333V7.5H14.9167V5.83333H6.58333ZM6.58333 9.16667V10.8333H14.9167V9.16667H6.58333ZM6.58333 12.5V14.1667H12.4167V12.5H6.58333Z"
      fill={color}
    />
  </svg>
);

export const 認識 = Demo.bind({});
認識.args = {
  loading: false,
  children: (
    <div
      style={{
        width: "100%",
        height: 140,
        backgroundColor: "#ececec",
        display: "flex",
        alignItems: "center",
        justifyContent: "center",
        color: "#6f6f6f",
        fontSize: 14,
        fontWeight: 700,
      }}
    >
      請替換 Body 內容
    </div>
  ),
  buttons: [
    {
      title: (
        <div style={{ display: "flex", alignItems: "center", gap: 4 }}>
          {icon("#4b4b4b")}
          <span>Button1</span>
        </div>
      ),
    },
    {
      title: (
        <div style={{ display: "flex", alignItems: "center", gap: 4 }}>
          {icon("#4b4b4b")}
          <span>Button2</span>
        </div>
      ),
    },
  ],
};
/// Returns a KeyEvent with the given parameters.
///
/// # Parameters
/// * `event`: The keyboard event to process.
/// * `event_time`: The time in nanoseconds when the event was first recorded.
fn create_key_event(
    event: &keyboard_binding::KeyboardEvent,
    event_time: input_device::EventTime,
) -> fidl_ui_input3::KeyEvent {
    let modifier_state: FrozenModifierState =
        event.get_modifiers().unwrap_or(Modifiers::from_bits_allow_unknown(0)).into();
    let lock_state: FrozenLockState =
        event.get_lock_state().unwrap_or(LockState::from_bits_allow_unknown(0)).into();
    fx_log_debug!(
        "ImeHandler::create_key_event: key:{:?}, modifier_state: {:?}, lock_state: {:?}, event_type: {:?}",
        event.get_key(),
        modifier_state,
        lock_state,
        event.get_event_type(),
    );
    // Don't override the key meaning if already set, e.g. by prior stage.
    let key_meaning = event.get_key_meaning().or(keymaps::US_QWERTY.apply(
        event.get_key(),
        &modifier_state,
        &lock_state,
    ));
    fidl_ui_input3::KeyEvent {
        timestamp: Some(event_time.try_into().unwrap_or_default()),
        type_: event.get_event_type().into(),
        key: event.get_key().into(),
        modifiers: event.get_modifiers(),
        lock_state: event.get_lock_state(),
        key_meaning,
        ..fidl_ui_input3::KeyEvent::EMPTY
    }
}
Shakespeare's Semiotics and the Problem of Falstaff

In this article, I contend that the Henry IV plays evoke the plurivocity of language in order to show not only the multiplicity of possible interpretations but, more importantly, the location of those interpretations within the audience. The plays' use of allusion and parody necessarily forces the audience into interpretive acts because both devices rely on the audience's prior knowledge and on their ability to make implicit connections between this knowledge and the text being delivered. I focus on two particular uses that demonstrate this interpretive burden well: first, allusions to Falstaff as Sir John Oldcastle, and second, Falstaff's pervasive biblical parody. I then argue that this audience-centered hermeneutic constructed within the texts dramatizes post-Reformation England's contentious religious and political battle over lay access to the scriptures.
#include <bits/stdc++.h>
using namespace std;

typedef long long int llint;
typedef unsigned long long int ullint;
typedef short int sint;

#define endn "\n"

// Solve
// 0 0 0
// 0 0 0
void test(void) {
    int n, m, i, j;
    cin >> n >> m >> i >> j;
    int x1, y1, x2, y2;
    if ((i == 1 && j == 1) || (i == n && j == m)) {
        x1 = 1, y1 = m;
        x2 = n, y2 = 1;
    } else {
        x1 = 1, y1 = 1;
        x2 = n, y2 = m;
    }
    cout << x1 << " " << y1 << " " << x2 << " " << y2 << endn;
}

int main(void) {
    ios_base::sync_with_stdio(false);
    cin.tie(NULL);
    cout.tie(NULL);
    int t;
    cin >> t;
    while (t--) {
        test();
    }
    return 0;
}

// Solved By: shihab4t
// Monday, July 05, 2021 | 10:37:32 AM (+06)
/**
 * Returns an AnimatedSprite built from the same frames as another animated sprite.
 * This isn't a deep clone; only the frames and FPS of the original sprite are copied.
 */
public static AnimatedSprite fromAnimatedSprite(AnimatedSprite other) {
    AnimatedSprite sprite = new AnimatedSprite(other.frames, other.sampleSize);
    sprite.setFPS(other.fps);
    return sprite;
}
Data Mining Analysis of Moving Object Based on Apriori Algorithm

Against the background of reform in university physical education, the sports community continually innovates its methods of study and instruction; how to analyze those teaching methods is the key issue to be addressed. In table tennis teaching, effectiveness is conventionally studied by comparative analysis built on traditional testing: specific subjects train for a fixed period and their results before and after testing are compared. This process demands substantial manpower and material resources and is time-consuming. This article applies an improved Apriori algorithm and data mining technology to systematically analyze a representative sample of table tennis teaching data, and uses the resulting strong association rules for comparative analysis. In this way teaching data can be analyzed effectively for table tennis instruction, giving technical support for classification management and providing reasonable, effective and targeted data and suggestions for table tennis teaching.
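Since the abstract does not spell out the mechanics, here is a toy sketch of the core Apriori idea it builds on: counting itemset support and keeping only frequent itemsets. This is a plain illustration of the classic algorithm, not the paper's improved variant, and the skill names in the example data are invented:

from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Toy Apriori: return itemsets whose support is at least min_support."""
    n = len(transactions)
    sets = [set(t) for t in transactions]
    candidates = {frozenset([item]) for t in sets for item in t}
    frequent = {}
    while candidates:
        # Count the fraction of transactions containing each candidate.
        support = {c: sum(c <= t for t in sets) / n for c in candidates}
        kept = {c: s for c, s in support.items() if s >= min_support}
        frequent.update(kept)
        # Join surviving k-itemsets to build (k+1)-item candidates.
        keys = list(kept)
        candidates = {a | b for a, b in combinations(keys, 2)
                      if len(a | b) == len(a) + 1}
    return frequent

# Invented example: skills practiced together in table tennis sessions.
sessions = [{"serve", "footwork"}, {"serve", "smash"},
            {"serve", "footwork", "smash"}]
for itemset, s in frequent_itemsets(sessions, min_support=0.6).items():
    print(sorted(itemset), round(s, 2))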
#include "stdafx.h"
#include "ComponentRegistry.h"
#include "StackGame.h"
#include "Engine/Engine.h"

ME_APPLICATION_MAIN(StackGame)
/**
 * A special tool for destroying an element
 *
 * @author Adrien Boitelle
 * @version 1.0
 */
public class DeleteElementTool extends Element {

	private static final long serialVersionUID = 1L;

	/**
	 * Create a tool that will delete any element dragged to it
	 *
	 * @param pos The position of the tool
	 * @param width The width of the tool
	 * @param height The height of the tool
	 */
	public DeleteElementTool(Position pos, int width, int height) {
		this.pos = pos;
		this.width = width;
		this.height = height;
	}

	@Override
	public void Draw(String viewName, Position ref) {
		Draw(viewName, ref, width, height);
	}

	@Override
	public Element Clone() {
		return null;
	}

	@Override
	public int getSurfaceWidth() {
		return width;
	}

	@Override
	public int getSurfaceHeight() {
		return height;
	}

	@Override
	public void Draw(String viewName, Position ref, int fit_width, int fit_height) {
		// Swap to the highlighted bin icon while the cursor hovers over the tool.
		if (!isClicked(App.appController.CurrentMousePos()))
			App.view.drawImage(viewName, "assets/img/bin.png", ref.x, ref.y, fit_width, fit_height);
		else
			App.view.drawImage(viewName, "assets/img/bin_hover.png", ref.x, ref.y, fit_width, fit_height);
	}
}
In-training assessment using direct observation of single-patient encounters: a literature review

We reviewed the literature on instruments for work-based assessment in single clinical encounters, such as the mini-clinical evaluation exercise (mini-CEX), and examined differences between these instruments in characteristics and feasibility, reliability, validity and educational effect. A PubMed search of the literature published before 8 January 2009 yielded 39 articles dealing with 18 different assessment instruments. One researcher extracted data on the characteristics of the instruments and two researchers extracted data on feasibility, reliability, validity and educational effect. Instruments are predominantly formative. Feasibility is generally deemed good and assessor training occurs sparsely but is considered crucial for successful implementation. Acceptable reliability can be achieved with 10 encounters. The validity of many instruments is not investigated, but the validity of the mini-CEX and the clinical evaluation exercise is supported by strong and significant correlations with other valid assessment instruments. The evidence from the few studies on educational effects is not very convincing. The reports on clinical assessment instruments for single work-based encounters are generally positive, but supporting evidence is sparse. Feasibility of instruments seems to be good and reliability requires a minimum of 10 encounters, but no clear conclusions emerge on other aspects. Studies on assessor and learner training and studies examining effects beyond happiness data are badly needed.

Introduction

The mini-clinical evaluation exercise (mini-CEX) is widely used for assessment in single work-based encounters of clinical competence at the top of Miller's pyramid: the 'does' level. Currently, assessment of clinical competence is receiving increasing attention, particularly in postgraduate training, and assessment of authentic performance is considered the main challenge. Reliable and valid performance measurements that can serve as a gold standard for clinical assessment have as yet not been achieved. Developed for the evaluation of a multitude of clinical competencies, the mini-CEX is a single-encounter instrument to be used by professionals in conducting work-based assessment of actual clinical performance. It was originally developed in 1995 in the USA for the evaluation of internal medicine residents' clinical skills, and its principal characteristics are direct observation of real patient encounters, easy and instant use in day-to-day practice, applicability in a broad range of settings and immediate feedback to the learner after the encounter. These characteristics make the mini-CEX an educational tool that can help learners to gain insight into the strengths and weaknesses of their clinical performance. It can be used to assess multiple competencies, such as communication and professionalism. Typically, the mini-CEX and similar instruments use global assessment scales, provide space for narrative comments and allow for feedback presented by a moderator in a post-encounter review session. Since the mini-CEX was first introduced, several comparable instruments have been developed for use in undergraduate and postgraduate medical education, including, among many others, 'longitudinal evaluation of performance', 'structured clinical observation' (Lane and Gottlieb 2000) and the 'clinical encounter card'.
To our knowledge, however, no review has compared the characteristics and key qualities of these instruments. Feasibility, reliability, validity and educational effects are the core elements in determining the utility of assessment methods (van der Vleuten 1996). The only review of the validity of instruments for work-based clinical assessment was published in October 2009. The authors conclude that many tools are available, but evidence on their validity and descriptions of educational outcomes are scarce. We reviewed the literature on instruments for single-encounter work-based clinical assessment, like the mini-CEX. These instruments appear to hold promise for clinical assessment but too little is known about their characteristics and feasibility, reliability, validity and educational effects. We addressed the following research questions:

1. What are the similarities and differences between the characteristics of clinical assessment instruments, such as the mini-CEX?
2. What is known about the feasibility, validity, reliability and educational effects of these clinical assessment instruments?

Methods

We conducted two searches of the PubMed database for papers on clinical assessment instruments published before 8 January 2009, the first aimed at identifying papers dealing with the principal characteristics of work-based assessment instruments. Our second PubMed search was limited to articles published between November 1995 (publication date of the first paper on the mini-CEX) and 8 January 2009, and used the text words: mini clinical evaluation exercise OR mini-CEX OR mCEX OR clinical evaluation exercise. In addition, we manually searched the reference lists of the included articles for relevant articles.

We used the following inclusion criteria:

- the instrument is used by professionals to assess directly observed performance
- the instrument is used in authentic patient encounters
- the instrument uses a generic and global assessment scale
- the instrument allows for feedback immediately after the assessment
- the instrument is used in a postgraduate or undergraduate medical programme.

And we applied the following exclusion criteria:

- the instrument is used for peer-, patient- or self-assessment
- the instrument only assesses technical skills
- the instrument is used in simulated encounters (as opposed to authentic encounters)
- the instrument (only) assesses a 'long case' (Wass and van der Vleuten 2004)
- the instrument reports results as a letter or comment
- no abstract is available.

Articles were selected by four researchers (LvdE, EP, AK and HM). In an initial selection round, two researchers independently selected articles based on the title only. Any disagreements were resolved by discussion. Next, the abstracts of the articles selected in the first round were independently judged by two researchers. Any disagreements on inclusion or exclusion were resolved in a meeting of three researchers. In the final selection round the full text of the remaining articles was read by LvdE or EP.

Data extraction

Data relating to the following characteristics of the assessment instruments were extracted from each article by one researcher (LvdE or EP):

- setting, summative or formative assessment
- type of encounters (e.g. in-patient, out-patient), assessor and learner
- subject of assessment
- rating scale, criteria for the allocation of marks, frame of reference
- the assessment form
- type of feedback (quantitative/qualitative)
- assessor training
- learner instruction.
Next, two of four researchers (LvdE, EP, AK and HM) extracted data related to the aspects addressed by the second research question: feasibility, reliability, validity and educational effect. Two of four researchers (LvdE, EP, AK and HM) analyzed each article to determine whether these four aspects were evaluated, which research methods were used and the outcomes of the study. If there was disagreement, a third or fourth researcher also read the article and consensus was reached through discussion. The data are presented in tables (appendices 1-6) that are available on https://www.umcn.nl/Onderwijs/IWOO/VOHA/Pages/OnderzoekbijdeVOHA.aspx. If an instrument was the subject of more than one article, additional articles were only included if they contained new information about the aspects of interest. Based on the tables, the researchers identified highlights and interesting results for each characteristic, which are reported in the results section.

Descriptive analysis

The initial search yielded 349 articles. Of these, 261 were excluded based on the title, a further 50 were excluded based on the abstract and another 19 were eliminated after the reading of the full article. This left a total of 19 articles. The second search yielded 34 articles. After exclusion of five, nine and five articles based on title, abstract and full text, respectively, 15 articles from the second search met the criteria. The manual search of reference lists yielded another 5 articles. The resulting 39 articles dealt with 18 different assessment instruments (among others, Alves de Lima et al.; Cook and Beckman 2009; Dowson and Hassell 2006; Golnik and Goldenhar 2005; Hatala and Norman 1999; Kogan et al. 2003; Kogan and Hauer 2006; Lane and Gottlieb 2000; Norcini et al. 1997; Norcini et al. 2003; Norcini and Burch 2007; Nyman and Sheridan 1997; Ross 2002; Shayne et al. 2006), which are listed in Table 1.

Characteristics

The instruments included in the review assess a wide range of competencies or combinations of competencies. Some allow coverage of broad content and can be used in all kinds of clinical situations; others assess content that is limited to a particular setting, e.g. a palliative care or psychiatry clerkship. All instruments itemize content globally, but some are more detailed than others (items such as 'open-ended questions' versus 'patient communication'). Most items relate to the 'medical expertise', 'communication' and 'professionalism' competencies from the Canadian Medical Educational Directives for Specialists (CanMEDS). Some items relate to the CanMEDS competence 'management skills'. Generally, the instruments appear to be flexible with regard to content. They can be used to assess a multitude of competencies and are easily attuned to a specific educational context.

Most instruments are (intended to be) used for formative purposes. It is consistent with this purpose that almost all instruments ask for qualitative, narrative feedback to be provided in writing or orally. Additionally, almost all instruments require quantitative feedback on a rating scale. These scales vary widely, ranging from dichotomized scores of 'satisfactory' and 'unsatisfactory' to an 11-point scale. A minority of the instruments (four with small and three with large scales) provide criteria for the allocation of marks or behavioural anchors. Only one study examines the effects of different rating scales (Cook and Beckman 2009) by comparing the results for 9- and 5-point scales.
Inter-rater reliability was similar for both scales, but the 9-point scale showed better agreement with previously established levels of competence of a performance on video (the scripted competence level). Based on the assumption that previously established competence levels are accurate, the 9-point scale was better able to accurately classify learners' competence as unsatisfactory or superior. A reference norm for competence rating is specified in no more than eight instruments: five use an 'end of training' norm and three a 'class level' norm. However, norm selection is not based on evidence and authors generally state few or no arguments to support their choice of rating scale or frame of reference, thereby leaving much freedom of interpretation to assessors.

Assessors almost always receive some form of training before an instrument is implemented. Training involves verbal instruction or a workshop, but it is uncommon for training effects to be evaluated. The only authors to do so are Cook et al., who evaluated the effects of a workshop on error training, performance dimension training, behavioural observation training and frame of reference training using lecture, video and facilitated discussion. They found no improvement in inter-rater reliability of mini-CEX scores in a group of assessors who had attended the workshop compared to a control group receiving no training. Generally, learner instruction receives scant attention. If learners are instructed at all they receive verbal or written instructions, but no studies evaluate the effects. In conclusion, instruments show considerable variation in content, rating scale, frame of reference, assessor training and learner instruction. There is a striking paucity of research on these characteristics, which are merely described in the majority of studies without evidence to support their value.

Feasibility

Studies of feasibility mostly focus on completion rates of the instruments or users' satisfaction. Feasibility is generally qualified as good but no clear criteria are set in advance and results vary. Durning et al. and Torre et al., for example, report completion rates of 96.4% and 100%, respectively, but Turnbull et al. conclude that feasibility is good with a response rate of only 23%. Conclusions regarding the feasibility of the various instruments, with the exception of the mini-CEX, are based on single studies. When more studies are available, the results are both negative and positive. Wilkinson et al. attribute feasibility problems to lack of time and the fact that the procedure is experienced as time consuming. Alves de Lima et al. blame poor feasibility on inadequate implementation. They conclude that assessment instruments must be well integrated within the curriculum and part of the routine of practice, and additionally propose that workshops are a better way to implement an instrument than written instructions. Clearly, further studies are needed to unravel the instruments' feasibility issues.

Reliability

Generalizability or reproducibility was studied for four instruments in eight studies. The results are presented in Table 2. We used the Spearman-Brown formula to calculate the average reliability coefficient for all instruments. For most of them acceptable reliability (≥0.8) can be achieved with a sample of 10 encounters. In other words, reliability seems achievable with a feasible sample of encounters. For some studies, we could not determine the number of assessors involved.
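For reference, the Spearman-Brown formula used here predicts the reliability R_n of a composite of n parallel encounters from the single-encounter reliability r as R_n = n*r / (1 + (n-1)*r). A quick illustrative computation (the single-encounter reliability of 0.30 is an assumed value, not a figure from Table 2):

def spearman_brown(r, n):
    """Predicted reliability of the mean over n parallel encounters."""
    return n * r / (1 + (n - 1) * r)

# Assumed single-encounter reliability, purely for illustration.
r_single = 0.30
for n in (1, 5, 10, 20):
    print(n, round(spearman_brown(r_single, n), 2))
# With r = 0.30, ten encounters already yield a composite reliability of
# about 0.81, in line with the ~10-encounter rule of thumb reported here.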
The study by Margolis et al. is the only one to examine reliability with different numbers of assessors. The results show that one assessor taking 10 encounters is much less reliable than 10 assessors taking one encounter each (0.39 and 0.83, respectively). This outcome is contradicted by Nair et al., who conclude that the mini-CEX is reliable (0.88) with one assessor and eight encounters. However, this study did not explicitly examine the effects of different numbers of assessors. More research is needed to systematically tease out sources of variance in reliability to enable well-founded recommendations with regard to the required numbers of (different) assessors and encounters. Ringsted et al. explain the low inter-rater reliability of their 'global rating form in anaesthesiology' by staff being unfamiliar with the instrument's underlying concept. They suggest that intensive assessor training might improve reliability results, but the opposite conclusion is put forward by Cook et al. This conflicting evidence underlines the need for more research into inter-rater reliability and how it is affected by assessor training.

Validity

Criterion validity of the mini-CEX and the 'clinical encounter card' was evaluated by comparisons of the results with those of instruments of proven validity. For the mini-CEX, strong and significant correlations were found with results on the Royal College of Physicians and Surgeons of Canada Comprehensive Examination in Internal Medicine (RCPSC-IM), a high-stakes assessment of clinical competence. Correlations were 0.73 with the subscale 'structured oral', 0.67 with the subscale 'bedside station' and 0.72 with the subscale 'written examination'. In addition, strong correlations are reported between mini-CEX scores and corresponding scores on a monthly evaluation form and 'in-training examination scores'. The 'clinical encounter card' showed significant positive correlations with learners' 'clinical performance ratings', 'final grades' and scores on an important summative examination (National Board of Medical Examiners). Interestingly, no correlations are reported between the 'clinical encounter card' and an objective structured clinical examination (OSCE).

A number of studies infer construct validity from an increase in ratings over time. Kogan et al. report an increase in mean scores on the mini-CEX during one year. Links et al. found significant improvement in skills as manifested in pre- and post-observations, using the 'clinical skills assessment form'. Prescott-Clements et al. report improvement in ratings on 'longitudinal evaluation of performance' in the course of 1 year. In conclusion, the validity of the mini-CEX and the 'clinical encounter card' appears to be supported by strong and significant correlations with other assessment instruments. For some other instruments positive indications for construct validity are reported, but for most instruments evidence of validity remains to be provided.

Educational effect

Some studies evaluated educational effect by eliciting learners' or assessors' attitudes towards the use of the instrument, but none of the studies examined educational effects by measuring improvement of clinical skills or the quality of patient care. Although authors emphasize the formative nature of assessment procedures, they examine effects on learning and performance by evaluating users' subjective judgements or perceived satisfaction. For the most part, the reported effects are positive.
Learners rated the value of 'structured clinical observation' four on a five-point scale (Lane and Gottlieb 2000) and rated the 'clinical skills assessment form' as the second most valuable component of their clerkship in terms of assisting skill acquisition. Outcomes of a student questionnaire on 'bedside formative assessment' show that 95.6% recognize its learning value, 70% acknowledge the informative, advisory and motivational role of feedback and 71.9% report that the assessment stimulated them to do more preparatory reading. However, outcomes like learning behaviour, transfer of skills to new situations or improvement of patient care are not investigated, although they are crucial for the evaluation of educational impact. Currently, educational effects are a neglected area of assessment research, which should be given much greater priority in future research.

Discussion

As for the similarities and differences between the characteristics of the instruments, the main conclusion is that there is huge variation in the competencies being assessed, rating scales, frame of reference, assessor training and learner instruction. Unfortunately, there is hardly any sound research reported on these characteristics. Authors describe rating scales, frames of reference and assessor training but fail to elaborate on rationales and usually do not investigate their value. Consequently, assessment characteristics remain implicit and interpretation is largely left to assessors. This will inevitably have a profound effect on instruments' measurement characteristics.

Almost all the instruments discussed in this review originated after the introduction of the mini-CEX at the Medical College of Pennsylvania, Philadelphia. An exception is the 'clinical skills assessment form', an observation exercise that was introduced in the psychiatric clerkship at McMaster University, Canada as early as 1984, well before the publication of the first paper on the mini-CEX. It is interesting to note that this early appearance on the medical education scene of a predecessor of the mini-CEX apparently failed to make much of an impact either in the literature or in educational practice. Perhaps the time was not ripe then for this type of instrument.

Some information on the feasibility, validity, reliability and educational effect of the instruments we studied emerges from the review. Conclusions regarding feasibility are generally positive. Despite the absence of direct compelling evidence, we are inclined to conclude that training may be the key to effective implementation of instruments because it can improve the quality of their use. The value of these instruments lies mainly in the process of formative feedback and thus in the feedback skills of assessors and the extent to which they pay serious attention to this process. Much of what is assessed is left implicit and is up to the discretion of assessors (Norcini and Burch 2007). Assessors need training to reliably rate learners' performance and discriminate between performance levels. For learners, too, training may play an important role, although no direct evidence is available to support this. It seems likely that learner training can increase feasibility and educational effect.

Criterion validity was only evaluated for the mini-CEX and the 'clinical encounter card', and these instruments showed strong and significant correlations with other assessment instruments. Construct validity was inferred from three studies showing that ratings increased over time.
Otherwise, like Kogan et al.'s review of validity, our review reveals a general lack of evidence of validity. The outcomes of reliability studies suggest that around 10 encounters suffice for a reproducible outcome. This is somewhat surprising. In terms of testing time (time of one medical consultation) 10 encounters compares favourably with the samples needed for other standardized and objectified assessment formats (Van der Vleuten and Schuwirth 2005), although one would expect poor reliability of an instrument characterized by absence of explicit characteristics. Apparently (different) assessors pick up measurement information that is relatively generalizable across individual encounters, while at the same time broad sampling across assessors evens out the effects of assessor subjectivity. Good reliability is no guarantee for the absence of bias, however, and, due to their quite subjective nature, instruments like the mini-CEX may actually be quite vulnerable to bias. All this requires further investigation. We also need more evidence regarding factors that contribute to (un)reliability and the extent of this contribution to underpin recommendations on sound sampling strategies.

Evidence on educational effect is lacking as well. No studies examined whether instruments improve learning, clinical skills or the quality of patient care. Given the formative nature of the instruments, effects on learning and performance are more or less the prime objective of this type of assessment. Existing research typically evaluates perceptions of users, and although the outcomes are overwhelmingly positive, they do not provide compelling evidence for learning effects. More rigorous research will have to elucidate the educational effects of clinical work-based assessment.

An important conclusion from our review appears to be that instruments for authentic work-based assessment of single clinical encounters should not be evaluated outside the context of the curriculum or other assessment instruments. Assessment by one instrument can only be a part of the whole story. The 'competence based assessment, rheumatology', for example, was not valid when applied in isolation (Dowson and Hassell 2006). It should be used as a component of a spectrum of assessment instruments that complement each other. While optimization of the feasibility, validity, reliability and educational effect of individual instruments is important, it is equally, if not more, important to look from a broader perspective at the respective unique contributions of different instruments to the assessment of clinical competence (Van der Vleuten and Schuwirth 2005). Assessment procedures should be integrated within the curriculum and preferably also be an integral part of routine practice (Alves de Lima et al.).

It should be noted that we included articles in the review on the basis of the subjects they addressed, not the quality of their research. Some bias may have arisen because we did not systematically judge research quality. In so far as the articles report on feasibility, validity, reliability or educational effect, the conclusions are mostly positive. This absence of negative or critical outcomes could be suggestive of publication bias. It cannot be ruled out that studies on inadequate instruments were not published. Although single-encounter clinical assessment instruments appear to be received positively in the literature, this positive reception is based on relatively limited empirical justification.
Results on the most extensively evaluated aspects, feasibility and reliability, support the viability of the format and the use of a minimum of 10 encounters to attain reliability. However, there is an obvious need for further, and especially more scientifically rigorous, research on all the characteristics that we studied. We also need further research on basic characteristics like rating scales, narrative feedback, frame of reference, etc. Although a call for more and better research may be the sad conclusion from most reviews, it is unfortunately equally applicable to single-encounter work-based clinical assessment instruments. Open Access This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited.
Düsseldorf-Unterrath–Düsseldorf Airport Terminal railway

History

On 27 October 1975, in preparation for the upgrading of the line between Cologne and Duisburg for the introduction of S-Bahn services, a new line from Düsseldorf-Derendorf Dp junction to Düsseldorf Airport Terminal station opened for passenger traffic. For this purpose an existing siding had been duplicated, electrified and extended to the Terminal C building. At the time the line crossed the north-bound track of freight line 2670 at grade on the approach to Düsseldorf-Rath station, until grade-separated access to Unterrath station was built in the late 1980s. On 27 May 1990, a northern access line was put into operation so that trains from the Ruhr area could approach the terminal station directly from the former Kalkum station. After the disastrous fire at the airport in 1996 and the subsequent short-term closure of the terminal, a temporary platform was opened halfway along the line, called Düsseldorf Airport Departure Terminal E. It was located near the temporary Terminal E, established only for departures, and operated until the opening of the Düsseldorf Airport long-distance station on 26 May 2000.

Operations

With the opening of the station on the main line from Düsseldorf to Duisburg, the connecting line from the north became superfluous. Reversing in the underground station was too time-consuming for through trains such as the former S-Bahn line S 21, which ran parallel to line S 1. Because of initial problems with the installation and acceptance of the Skytrain, a bus shuttle was established at first. The Skytrain is a driverless suspended monorail train, which commenced operations on 1 July 2002 and now connects the long-distance station via two intermediate stops at P4 parking station and Terminal A/B to the Terminal C station.

Rail services

The track is served by line S 11 of the Rhine-Ruhr S-Bahn, operating as follows: Düsseldorf Flughafen Terminal – Düsseldorf-Unterrath – Düsseldorf-Derendorf – Düsseldorf Hbf – Neuss Hbf – Dormagen – Cologne Hbf – Bergisch Gladbach.
Comparison between radiocephalic and brachiocephalic arteriovenous fistula in octogenarians: A retrospective single center study.

PURPOSE The number of older patients who need vascular access for end-stage renal disease is rapidly increasing. However, determining the optimal vascular access for older patients is difficult. We aimed to compare the outcomes of radiocephalic (RC) and brachiocephalic (BC) arteriovenous fistula (AVF) in patients aged >80 years.

METHODS This study included 94 patients undergoing hemodialysis who underwent the procedure for the first time between 2013 and 2019 in Korea University Guro Hospital. The primary outcomes were primary patency (PP) and cumulative patency (CP). The secondary outcomes were maturation failure and death with functional vascular access.

RESULTS Of the 94 patients (mean age, 83.9±2.97 years), 66 (70.2%) and 28 (29.8%) patients belonged to the RC and BC AVF groups, respectively. One-year PP was worse in the RC AVF group than in the BC AVF group (59.6% vs. 87.4%, p=0.013). However, no significant difference was observed in 1-year CP between the groups (87.4% vs. 91.2%, p=0.441). The unassisted maturation rate was higher in the BC AVF group than in the RC AVF group (96.4% vs. 74.2%, p=0.011). During follow-up (649±612 days), only 6 (6.4%) patients required secondary fistula placement. Eighteen patients (19.1%), all of whom had functional AVF, died.

CONCLUSION BC AVF showed better PP and a smaller number of interventions than RC AVF in octogenarians. Therefore, BC AVF could be a primary choice of vascular access in octogenarian patients. However, further research is warranted to confirm these findings.
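The abstract does not name the statistical test behind these p-values, but patency comparisons of this kind are commonly made with Kaplan-Meier estimates and a log-rank test. A rough sketch using the lifelines package (all durations and event indicators below are simulated for illustration, not the study's data):

import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)

# Simulated follow-up times (days) and loss-of-patency indicators
# (1 = patency lost, 0 = censored) for the two assumed groups.
rc_days = rng.exponential(600, size=66)   # radiocephalic AVF group
bc_days = rng.exponential(900, size=28)   # brachiocephalic AVF group
rc_event = rng.integers(0, 2, size=66)
bc_event = rng.integers(0, 2, size=28)

result = logrank_test(
    rc_days, bc_days,
    event_observed_A=rc_event,
    event_observed_B=bc_event,
)
print(result.p_value)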
# coding: utf-8

"""
    Onshape REST API

    The Onshape REST API consumed by all clients.  # noqa: E501

    The version of the OpenAPI document: 1.113
    Contact: <EMAIL>
    Generated by: https://openapi-generator.tech
"""

from __future__ import absolute_import
import re  # noqa: F401
import sys  # noqa: F401

import six  # noqa: F401
import nulltype  # noqa: F401

from onshape_client.oas.model_utils import (  # noqa: F401
    ModelComposed,
    ModelNormal,
    ModelSimple,
    date,
    datetime,
    file_type,
    int,
    none_type,
    str,
    validate_get_composed_info,
)

try:
    from onshape_client.oas.models import btm_database_parameter2229
except ImportError:
    btm_database_parameter2229 = sys.modules[
        "onshape_client.oas.models.btm_database_parameter2229"
    ]
try:
    from onshape_client.oas.models import btm_parameter_appearance627
except ImportError:
    btm_parameter_appearance627 = sys.modules[
        "onshape_client.oas.models.btm_parameter_appearance627"
    ]
try:
    from onshape_client.oas.models import btm_parameter_array2025
except ImportError:
    btm_parameter_array2025 = sys.modules[
        "onshape_client.oas.models.btm_parameter_array2025"
    ]
try:
    from onshape_client.oas.models import btm_parameter_blob_reference1679
except ImportError:
    btm_parameter_blob_reference1679 = sys.modules[
        "onshape_client.oas.models.btm_parameter_blob_reference1679"
    ]
try:
    from onshape_client.oas.models import btm_parameter_boolean144
except ImportError:
    btm_parameter_boolean144 = sys.modules[
        "onshape_client.oas.models.btm_parameter_boolean144"
    ]
try:
    from onshape_client.oas.models import btm_parameter_configured2222
except ImportError:
    btm_parameter_configured2222 = sys.modules[
        "onshape_client.oas.models.btm_parameter_configured2222"
    ]
try:
    from onshape_client.oas.models import btm_parameter_derived864
except ImportError:
    btm_parameter_derived864 = sys.modules[
        "onshape_client.oas.models.btm_parameter_derived864"
    ]
try:
    from onshape_client.oas.models import btm_parameter_enum145
except ImportError:
    btm_parameter_enum145 = sys.modules[
        "onshape_client.oas.models.btm_parameter_enum145"
    ]
try:
    from onshape_client.oas.models import btm_parameter_feature_list1749
except ImportError:
    btm_parameter_feature_list1749 = sys.modules[
        "onshape_client.oas.models.btm_parameter_feature_list1749"
    ]
try:
    from onshape_client.oas.models import btm_parameter_foreign_id146
except ImportError:
    btm_parameter_foreign_id146 = sys.modules[
        "onshape_client.oas.models.btm_parameter_foreign_id146"
    ]
try:
    from onshape_client.oas.models import btm_parameter_invalid1664
except ImportError:
    btm_parameter_invalid1664 = sys.modules[
        "onshape_client.oas.models.btm_parameter_invalid1664"
    ]
try:
    from onshape_client.oas.models import btm_parameter_lookup_table_path1419
except ImportError:
    btm_parameter_lookup_table_path1419 = sys.modules[
        "onshape_client.oas.models.btm_parameter_lookup_table_path1419"
    ]
try:
    from onshape_client.oas.models import btm_parameter_material1388
except ImportError:
    btm_parameter_material1388 = sys.modules[
        "onshape_client.oas.models.btm_parameter_material1388"
    ]
try:
    from onshape_client.oas.models import btm_parameter_quantity147
except ImportError:
    btm_parameter_quantity147 = sys.modules[
        "onshape_client.oas.models.btm_parameter_quantity147"
    ]
try:
    from onshape_client.oas.models import btm_parameter_query_list148
except ImportError:
    btm_parameter_query_list148 = sys.modules[
        "onshape_client.oas.models.btm_parameter_query_list148"
    ]
try:
    from onshape_client.oas.models import btm_parameter_query_with_occurrence_list67
except ImportError:
    btm_parameter_query_with_occurrence_list67 = sys.modules[
        "onshape_client.oas.models.btm_parameter_query_with_occurrence_list67"
    ]
try:
    from onshape_client.oas.models import btm_parameter_reference2434
except ImportError:
    btm_parameter_reference2434 = sys.modules[
        "onshape_client.oas.models.btm_parameter_reference2434"
    ]
try:
    from onshape_client.oas.models import btm_parameter_string149
except ImportError:
    btm_parameter_string149 = sys.modules[
        "onshape_client.oas.models.btm_parameter_string149"
    ]


class BTMParameter1(ModelNormal):
    """NOTE: This class is auto generated by OpenAPI Generator.
    Ref: https://openapi-generator.tech

    Do not edit the class manually.

    Attributes:
      allowed_values (dict): The key is the tuple path to the attribute
          and the for var_name this is (var_name,). The value is a dict
          with a capitalized key describing the allowed value and an allowed
          value. These dicts store the allowed enum values.
      attribute_map (dict): The key is attribute name
          and the value is json key in definition.
      discriminator_value_class_map (dict): A dict to go from the discriminator
          variable value to the discriminator class name.
      validations (dict): The key is the tuple path to the attribute
          and the for var_name this is (var_name,). The value is a dict
          that stores validations for max_length, min_length, max_items,
          min_items, exclusive_maximum, inclusive_maximum, exclusive_minimum,
          inclusive_minimum, and regex.
      additional_properties_type (tuple): A tuple of classes accepted
          as additional properties values.
    """

    allowed_values = {}

    validations = {}

    additional_properties_type = None

    @staticmethod
    def openapi_types():
        """
        This must be a class method so a model may have properties that are
        of type self, this ensures that we don't create a cyclic import

        Returns
            openapi_types (dict): The key is attribute name
                and the value is attribute type.
        """
        return {
            "bt_type": (str,),  # noqa: E501
            "import_microversion": (str,),  # noqa: E501
            "node_id": (str,),  # noqa: E501
            "parameter_id": (str,),  # noqa: E501
        }

    @staticmethod
    def discriminator():
        return {
            "bt_type": {
                "BTMParameterQuantity-147": btm_parameter_quantity147.BTMParameterQuantity147,
                "BTMParameterLookupTablePath-1419": btm_parameter_lookup_table_path1419.BTMParameterLookupTablePath1419,
                "BTMParameterMaterial-1388": btm_parameter_material1388.BTMParameterMaterial1388,
                "BTMParameterEnum-145": btm_parameter_enum145.BTMParameterEnum145,
                "BTMParameterDerived-864": btm_parameter_derived864.BTMParameterDerived864,
                "BTMParameterBoolean-144": btm_parameter_boolean144.BTMParameterBoolean144,
                "BTMParameterFeatureList-1749": btm_parameter_feature_list1749.BTMParameterFeatureList1749,
                "BTMParameterConfigured-2222": btm_parameter_configured2222.BTMParameterConfigured2222,
                "BTMParameterString-149": btm_parameter_string149.BTMParameterString149,
                "BTMDatabaseParameter-2229": btm_database_parameter2229.BTMDatabaseParameter2229,
                "BTMParameterReference-2434": btm_parameter_reference2434.BTMParameterReference2434,
                "BTMParameterForeignId-146": btm_parameter_foreign_id146.BTMParameterForeignId146,
                "BTMParameterQueryList-148": btm_parameter_query_list148.BTMParameterQueryList148,
                "BTMParameterBlobReference-1679": btm_parameter_blob_reference1679.BTMParameterBlobReference1679,
                "BTMParameterQueryWithOccurrenceList-67": btm_parameter_query_with_occurrence_list67.BTMParameterQueryWithOccurrenceList67,
                "BTMParameterArray-2025": btm_parameter_array2025.BTMParameterArray2025,
                "BTMParameterInvalid-1664": btm_parameter_invalid1664.BTMParameterInvalid1664,
                "BTMParameterAppearance-627": btm_parameter_appearance627.BTMParameterAppearance627,
            },
        }

    attribute_map = {
        "bt_type": "btType",  # noqa: E501
        "import_microversion": "importMicroversion",  # noqa: E501
        "node_id": "nodeId",  # noqa: E501
        "parameter_id": "parameterId",  # noqa: E501
    }

    @staticmethod
    def _composed_schemas():
        return None

    required_properties = set(
        [
            "_data_store",
            "_check_type",
            "_from_server",
            "_path_to_item",
            "_configuration",
        ]
    )

    def __init__(
        self,
        _check_type=True,
        _from_server=False,
        _path_to_item=(),
        _configuration=None,
        **kwargs
    ):  # noqa: E501
        """btm_parameter1.BTMParameter1 - a model defined in OpenAPI

        Keyword Args:
            _check_type (bool): if True, values for parameters in openapi_types
                will be type checked and a TypeError will be raised if the
                wrong type is input. Defaults to True
            _path_to_item (tuple/list): This is a list of keys or values to
                drill down to the model in received_data when deserializing
                a response
            _from_server (bool): True if the data is from the server
                False if the data is from the client (default)
            _configuration (Configuration): the instance to use when
                deserializing a file_type parameter. If passed, type conversion
                is attempted If omitted no type conversion is done.
            bt_type (str): [optional]  # noqa: E501
            import_microversion (str): [optional]  # noqa: E501
            node_id (str): [optional]  # noqa: E501
            parameter_id (str): [optional]  # noqa: E501
        """
        self._data_store = {}
        self._check_type = _check_type
        self._from_server = _from_server
        self._path_to_item = _path_to_item
        self._configuration = _configuration

        for var_name, var_value in six.iteritems(kwargs):
            if (
                var_name not in self.attribute_map
                and self._configuration is not None
                and self._configuration.discard_unknown_keys
                and self.additional_properties_type is None
            ):
                # discard variable.
                continue
            setattr(self, var_name, var_value)

    @classmethod
    def get_discriminator_class(cls, from_server, data):
        """Returns the child class specified by the discriminator"""
        discriminator = cls.discriminator()
        discr_propertyname_py = list(discriminator.keys())[0]
        discr_propertyname_js = cls.attribute_map[discr_propertyname_py]
        if from_server:
            class_name = data[discr_propertyname_js]
        else:
            class_name = data[discr_propertyname_py]
        class_name_to_discr_class = discriminator[discr_propertyname_py]
        return class_name_to_discr_class.get(class_name)
Here's what I'm looking forward to, or am at least curious about, coming out tomorrow.

Phantom Stranger #1
The zero issue really did not do it for me, but the Green Lantern book turned out to be pretty good, or at least good enough for me to stick around for the next issue. I'm hoping Phantom Stranger can pull the same trick.

Uncanny Avengers #1
Rick Remender and John Cassaday are a team that automatically gets my attention, no matter how absurd or short-lived the book. In this particular case, though, I'm giving the book a shot simply because Remender can make anything worth reading for at least a few issues, and if it really stinks on toast, at least the art will be pretty.

MacGyver: Fugitive Gauntlet #1
I'm picking this up not least because the idea of MacGyver in a comic book makes me smile. As silly as the show was (and is), it's actually fairly well suited to comics, and apparently this book is co-written by the creator of the show, so you know it won't be something stupid like Robot MacGyver. Just overly inventive silliness. Secondly, the idea of Image running a licensed book is kind of amusing, in its own way. But Image has repeatedly shown, especially lately, a strong editorial staff and good taste in material; even the stuff I don't like, I can see why Image went ahead with. So, there's some actual promise, here.

Let me know what you're looking forward to, with the list here, and we'll talk some comics.
Is Apple's iOS Losing Its Grip on the Enterprise Market? New data shows Android and Windows tablets are threatening the iPad's dominance. Apple (NASDAQ: AAPL) is in need of some changes to its mobile enterprise strategy. The latest data from Good Technology's Mobility Index Report shows that Android put a significant dent in Apple's enterprise activations market share in Q2 2015. Tablets running on Google's mobile operating system are quickly closing in on Apple's iOS, and even Windows tablets have significantly increased market share over the past year. While Apple still maintains its lead, the company can no longer assume enterprise customers automatically prefer its devices over the competition.
PETALING JAYA: Land and General Bhd (L&G), the master developer of the Bandar Sri Damansara township, plans to redevelop its club facilities and, if successful, would gain access to 36 acres of prime freehold commercial land. However, it first needs to get the consent of the members of the club, an exercise the company has already initiated. Managing director Low Gay Teck told StarBiz that while they were still in the initial stages of drawing up plans for the land that the clubhouse sits on, it was time to unlock its potential value. “We are proposing a commercial development, and since the clubhouse sits on a prime piece of commercial land, redeveloping it will unlock its potential value. “We may redesign the facility into a modern-day clubhouse, as we believe an upgraded version is required to cater to the current residents’ needs,” said Low, adding that it would be integrated into the proposed commercial development, which consists of serviced apartments, shops and offices. Bandar Sri Damansara is a mature 1,200-acre freehold township developed since the 1990s, and the two-storey Bandar Sri Damansara Club was one of the lifestyle amenities built for residents. At that time, the community bought the membership for RM5,000 for a 30-year term. Today, the facility has about 1,700 members, and the fee for a new member is RM2,408 for the first year and RM1,908 for the second year upon renewal of membership. Low said L&G would propose to buy back the remaining unutilised term and compensate residents accordingly, based on the entitlement in the trust deed. Primarily involved in the property investment and development business, L&G has a history going back nearly 50 years. The low-profile firm sits on a cash pile of about RM500mil and has made it clear that it is on the hunt to acquire more land to replenish its land bank. L&G has undeveloped land with a gross development value (GDV) of RM3.3bil, unbilled sales of RM29.7mil, and new developments in the pipeline in excess of RM2bil. This would keep the company busy for the next seven to 10 years, said Low. “Contrary to people’s belief, L&G has over the years transformed, and we are familiar with the market needs in the various areas. “We are in a position to develop mid- to upper-range products, and that augurs well for us to source more land bank within the Klang Valley area,” he noted. Low said the company hoped to conclude the acquisition of 112 acres in U10, Shah Alam, within the next two months. L&G entered into a share sale agreement with Pembinaan Jaya Megah to buy the latter’s land for RM92.5mil in June 2015. “This is a mature piece of land, and we plan to develop terrace and semi-detached homes with some commercial units. “This will give us a more balanced product portfolio of landed as well as high-rise properties,” he noted, adding that the project would commence in 2017 with an estimated GDV of RM1.2bil. On the property market outlook, Low forecast that the market could pick up in 2017. Low said L&G’s unencumbered land had all been carefully sourced, while it kept abreast of the latest designs for its products. “As we progress with Damansara Foresta phases 2, 3 and 4, we are optimistic that take-up rates for these phases will be positive, based on the response to phase 1,” he said. On catalysts for growth, Low noted that L&G would be developing land pockets for quick turnaround, adding that the continuing developments in Sena Parc, U10 and Foresta Damansara would be the company’s “bread and butter” for the next five to 10 years.
L&G has a 2,500-acre land bank consisting of rubber and oil palm plantations in Ladang Kerling, Ulu Selangor, which it intends to develop into a township, according to Low. However, the company is still at the stage of submitting papers for this. “The demographics have changed over the years, and an average income earner can no longer afford landed properties. “Taking this into account, house buyers may opt for properties away from the city,” he noted, adding that Ladang Kerling is accessible via the Lembah Beringin toll, but would also be linked via other routes. Meanwhile, on its recurring income, Low said this was derived from the rental of 8trium’s 108,000 sq ft of retail space and Sekolah Sri Bestari in Bandar Sri Damansara, with each contributing over RM2mil and an average of RM5mil per year overall. Its 13-storey office building in Putrajaya, which it bought from Mayland Parkview Sdn Bhd two years ago for over RM70mil, is still vacant but carries a guaranteed yield of 5% for the first two years, according to him. Mayland Parkview is the largest shareholder, with a 31.03% stake. L&G’s net profit for the first quarter ended June 30, 2016 was down 51% to RM10.28mil, on the back of a more than 80% drop in revenue to RM11.9mil.
Concerned about the growth of legal cannabis businesses sprouting up in low-income areas, Los Angeles County leaders on Tuesday approved an effort seeking to protect communities from any negative impacts on neighborhoods. The motion, authored by Supervisors Mark Ridley-Thomas and Hilda Solis, asks the county’s Department of Public Health and the Office of Cannabis Management to craft a model that would emphasize “health equity” in those communities where legal cannabis businesses may soon open. Already, illegal medical marijuana dispensary owners have balked at any attempt at enforcement. Across unincorporated Los Angeles County, there were 75 illegal medical marijuana dispensaries in April. Of those, 29 were closed, but 31 opened soon after, a county official told the board Tuesday. The unincorporated areas cover more than 2,600 square miles, or 65 percent of Los Angeles County, and about 1 million people live in them. In Solis’ district, which includes vast parts of East Los Angeles, there were 38 dispensaries in April. Of those, 14 were closed by county enforcement, but another 14 opened. “So it really is like whack-a-mole,” Solis said, referring to a term that was used earlier this year when the board first heard about the issue. A set of regulations for legal recreational marijuana is set to be introduced before the end of the year by the county’s Office of Cannabis Management. California voters approved Proposition 64 in 2016, legalizing recreational marijuana. The sale of adult-use cannabis is set to begin in early 2018. Tuesday’s approval of the motion means that a discretionary hearing process will take place for cannabis retailers, assessing, among other factors, the impact of a retailer on its neighborhood. For example, the neighborhood would be examined for its number of liquor stores, graduation rates, crime statistics and health outcomes. The community also would be invited to the hearing process. “The discretionary hearing process should empower the County hearing body to place conditions on the issuance of a cannabis retail license to mitigate any potential negative health outcomes, or to deny the issuance of the license if these conditions will not be sufficient to mitigate the impacts,” according to the motion. A report from several county departments on the feasibility of some elements of the motion is due back in 60 days.
In vitro and in vivo investigation of the effects of polydimethylsiloxane and paeonol modification on the biocompatibility of carbon/carbon composites Carbon debris and the resulting inflammatory reaction are major disadvantages of carbon/carbon (C/C) composites in repairing bone damage or impairment. These issues cause infection after orthopaedic and implantation surgeries. Therefore, enhancing the biocompatibility of carbon materials as implantable medical materials is investigated through the surface modification of orthopaedic scaffold materials. In this work, polydimethylsiloxane (PDMS) was introduced onto C/C composites to produce PDMS-C/C composites. The use of PDMS-C/C not only prevented the carbon debris but also enhanced the mechanical flexibility. In addition, paeonol (Pae) was coated on the PDMS-C/C with the aim of improving the antibacterial properties and biocompatibility of PDMS-C/C. In vitro evaluations of the bacteria at 2 and 4 h indicated that Pae-PDMS-C/C exhibited an improved antibacterial effect, reaching 32.3% at 4 h, and greater cell adhesion and proliferation activity than C/C and PDMS-C/C. Importantly, the in vivo study demonstrated that the implantation of Pae-PDMS-C/C efficiently promoted new bone formation based on an evaluation of a 3D reconstruction and histological observations. The in vivo and in vitro studies illustrated that Pae-PDMS-C/C possesses better biological effects and biocompatibility and will help to expand the application of carbon materials in medical implantation.
#pragma once
#include "common/Constants.hpp"
#include <string_view>
namespace Constants
{
#if SERVER
constexpr std::int32_t MAX_PUSHTICK = 0;
#else
constexpr std::int32_t MAX_PUSHTICK = 125;
#endif
#if !SERVER
constexpr std::int32_t WEP_RESTRICT_WIDTH = 64;
constexpr std::int32_t WEP_RESTRICT_HEIGHT = 64;
constexpr std::int32_t GOS_RESTRICT_WIDTH = 16;
constexpr std::int32_t GOS_RESTRICT_HEIGHT = 16;
#endif
}; // namespace Constants
#include "Constants.cpp.h"
n = input()  # number of coins (read but not needed below)
nums = sorted(map(int, input().split()))
cnt, part, total = 0, 0, sum(nums)
# Take the largest remaining coins until the taken part strictly
# exceeds what is left (part > total - part).
while part <= total - part:
    part += nums.pop()
    cnt += 1
print(cnt)
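Why taking the largest coins first is optimal: any set of k coins whose sum strictly exceeds the remainder can be replaced, coin for coin, by the k largest coins without decreasing the taken sum, so the greedy count is minimal. A small self-contained restatement of the same routine, with invented test values:

def min_coins_to_majority(values):
    """Smallest number of coins whose sum strictly exceeds the rest."""
    remaining = sorted(values)
    part, total, cnt = 0, sum(values), 0
    while part <= total - part:
        part += remaining.pop()  # always take the largest coin left
        cnt += 1
    return cnt

assert min_coins_to_majority([3, 1, 2]) == 2     # take 3 and 2: 5 > 1
assert min_coins_to_majority([5, 5, 5, 5]) == 3  # 15 > 5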
Pharmacokinetics of Magnesium Bolus Therapy in Cardiothoracic Surgery. OBJECTIVE To investigate the pharmacokinetics of a 20 mmol magnesium bolus in regards to serum and urinary magnesium concentration, volume of distribution, and half-life. DESIGN Prospective, experimental study. SETTING A university-affiliated teaching hospital. PARTICIPANTS Twenty consecutive cardiac surgery patients treated with magnesium bolus therapy for prevention of arrhythmia. INTERVENTIONS A 20-mmol bolus of magnesium sulfate was administered intravenously. MEASUREMENTS AND MAIN RESULTS Median magnesium levels increased from 1.04 (interquartile range 0.94-1.23) mmol/L to 1.72 (1.57-2.14) mmol/L after 60 minutes of magnesium infusion (p < 0.001) but decreased to 1.27 (1.21-1.36) and 1.16 (1.11-1.21) mmol/L after 6 and 12 hours, respectively. Urinary magnesium concentration increased from 6.3 (4.2-14.5) mmol/L to 19.1 (7.4-34.5) mmol/L after 60 minutes (p < 0.001), followed by 22.7 (18.4-36.7) and 15 (8.4-19.7) mmol/L after 6 and 12 hours, respectively. Over the 12-hour observation period, the cumulative urinary magnesium excretion was 19.1 mmol (95.5% of the dose given). The median magnesium clearance was 10 (4.7-15.8) mL/min and increased to 14.9 (3.8-20.7; p = 0.934) mL/min at 60 minutes. The estimated volume of distribution was 0.31 (0.28-0.34) L/kg. CONCLUSION Magnesium bolus therapy after cardiac surgery leads to a significant but short-lived increase of magnesium serum concentration due to renal excretion and distribution, and the magnesium balance is neutral after 12 hours.
Forget baked — the Big Apple is going to be fried starting this afternoon. The city will suffer and sweat for the next three days, with temperatures expected to soar into the 90s. The temperature in Central Park will reach a high of 92 degrees today — and will get even more torrid tomorrow and Wednesday, according to AccuWeather.com. Temperatures are expected to reach 95 degrees on both those days, before the heat wave takes a bit of a break with a high of 89 degrees on Thursday. In response, the city opened cooling centers this morning in all five boroughs to help people cope with the high temperatures. Exposure to extreme temperatures can lead to heat-related illnesses, especially among the elderly and people with chronic medical conditions, the city warned. State officials issued an air quality advisory about ozone for this afternoon, from 1 p.m. through 11 p.m. The advisory covers the city, Long Island and parts of the Hudson Valley.
Optimal trade execution under price-sensitive risk preferences We consider the problem of how to close a large asset position in an illiquid market in such a way that very high liquidation costs are unlikely. To this end we introduce a discrete-time model that provides a simple device for designing and controlling the distribution of the revenues/costs from unwinding the position. By appealing to dynamic programming we derive semi-explicit formulas for the optimal execution strategies. We then present a numerical algorithm for approximating optimal execution rates as functions of the price. We provide error bounds and prove convergence. Finally, examples for the liquidation of forward positions in illiquid energy markets illustrate the efficiency of the algorithm.
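The abstract appeals to dynamic programming for semi-explicit execution rules. Purely as an illustration of that backward-induction flavor (and not the paper's actual model, price process, or risk functional), here is a toy discrete-time liquidation DP with an assumed quadratic temporary impact and an assumed quadratic inventory penalty; all names and parameters are invented:

import numpy as np

def toy_execution_dp(X=50, T=4, eta=0.01, lam=0.002):
    """Toy backward-induction liquidation schedule: X shares over T steps.

    Assumed cost model (illustration only): selling v shares costs
    eta*v**2 (temporary impact); carrying q shares one more period costs
    lam*q**2 (risk penalty); leftover inventory at the horizon is dumped
    in one block at cost eta*q**2.
    """
    # V[q] = cost-to-go with q shares left, initialized at the horizon.
    V = np.array([eta * q * q for q in range(X + 1)], dtype=float)
    policy = []
    for t in range(T - 1, -1, -1):
        newV = np.full(X + 1, np.inf)
        act = np.zeros(X + 1, dtype=int)
        for q in range(X + 1):
            for v in range(q + 1):  # candidate sale at this step
                c = eta * v * v + lam * (q - v) * (q - v) + V[q - v]
                if c < newV[q]:
                    newV[q], act[q] = c, v
        V = newV
        policy.append(act)
    policy.reverse()

    # Roll the policy forward from full inventory to read off the schedule.
    q, schedule = X, []
    for act in policy:
        v = int(act[q])
        schedule.append(v)
        q -= v
    schedule.append(q)  # forced terminal sale of whatever remains
    return schedule

print(toy_execution_dp())  # entries sum to X; shape depends on eta vs. lam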
/****************************************************************************************
 * @author: kzvd4729 created: Sep/16/2018 17:05
 * solution_verdict: Accepted language: GNU C++14
 * run_time: 280 ms memory_used: 10400 KB
 * problem: https://codeforces.com/contest/1041/problem/C
 *
 * Greedy sweep with an ordered set: each loop iteration either schedules one
 * break or opens a new day, and every day schedules at least one break, so the
 * loop runs at most 2n times and the whole solution is O(n log n).
 ****************************************************************************************/
#include<bits/stdc++.h>
#define long long long
using namespace std;
const int N=1e6,inf=2e9;
int ans[N+2];
int main()
{
    ios_base::sync_with_stdio(0);cin.tie(0);
    int n,m,d;cin>>n>>m>>d;
    set<pair<int,int> >st; // (minute, original index) of unscheduled breaks
    for(int i=1;i<=n;i++)
    {
        int x;cin>>x;
        st.insert({x,i});
    }
    st.insert({inf,inf}); // sentinel: "nothing fits later today"
    int day=1,tm=0;       // current day; earliest minute still usable on it
    while(true)
    {
        if(st.size()==1)break; // only the sentinel left: everything scheduled
        // first unscheduled break at minute >= tm+1, i.e. strictly more than
        // d minutes after the break scheduled previously on this day
        pair<int,int>p=*st.upper_bound({tm+1,-1});
        if(p.first==inf)day++,tm=0; // nothing fits today: open a new day
        else
        {
            st.erase(p);tm=p.first+d; // take it and enforce the gap of d
            ans[p.second]=day;
        }
    }
    cout<<day<<endl;
    for(int i=1;i<=n;i++)
        cout<<ans[i]<<" ";
    cout<<endl;
    return 0;
}
import { IWebPartContext } from '@microsoft/sp-webpart-base'; export interface IAlpacaManagementProps { description: string; farmSize: number; context: IWebPartContext; }
Last year a bill was introduced in the Senate proposing to eliminate the commission by July of this year. The state House and Gov. Pat McCrory’s administration, along with advocacy groups, successfully saved the commission in the current budget.
/** * \file IfxCpu_Intrinsics.h * \ingroup IfxLld_Cpu_Intrinsics Intrinsics * * \version iLLD_1_0_1_11_0 * \copyright Copyright (c) 2013 Infineon Technologies AG. All rights reserved. * * * IMPORTANT NOTICE * * * Use of this file is subject to the terms of use agreed between (i) you or * the company in which ordinary course of business you are acting and (ii) * Infineon Technologies AG or its licensees. If and as long as no such * terms of use are agreed, use of this file is subject to following: * Boost Software License - Version 1.0 - August 17th, 2003 * Permission is hereby granted, free of charge, to any person or * organization obtaining a copy of the software and accompanying * documentation covered by this license (the "Software") to use, reproduce, * display, distribute, execute, and transmit the Software, and to prepare * derivative works of the Software, and to permit third-parties to whom the * Software is furnished to do so, all subject to the following: * The copyright notices in the Software and this entire statement, including * the above license grant, this restriction and the following disclaimer, must * be included in all copies of the Software, in whole or in part, and all * derivative works of the Software, unless such copies or derivative works are * solely in the form of machine-executable object code generated by a source * language processor. * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT. IN NO EVENT * SHALL THE COPYRIGHT HOLDERS OR ANYONE DISTRIBUTING THE SOFTWARE BE LIABLE * FOR ANY DAMAGES OR OTHER LIABILITY, WHETHER IN CONTRACT, TORT OR OTHERWISE, * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER * DEALINGS IN THE SOFTWARE. 
* * \defgroup IfxLld_Cpu_Intrinsics Intrinsics * \ingroup IfxLld_Cpu_Std * */ #ifndef IFXCPU_INTRINSICS_H #define IFXCPU_INTRINSICS_H /******************************************************************************/ #include "Ifx_Types.h" #if defined(__DCC__) #include "IfxCpu_IntrinsicsDcc.h" #elif defined(__HIGHTEC__) #include "IfxCpu_IntrinsicsGnuc.h" #elif defined(__TASKING__) #include "IfxCpu_IntrinsicsTasking.h" #elif defined(__ghs__) #include "IfxCpu_IntrinsicsGhs.h" #else #error Compiler unsupported #endif #define IFX_ALIGN_8 (1) // Align on 8 bit Boundary #define IFX_ALIGN_16 (2) // Align on 16 bit Boundary #define IFX_ALIGN_32 (4) // Align on 32 bit Boundary #define IFX_ALIGN_64 (8) // Align on 64 bit Boundary #define IFX_ALIGN_128 (16) // Align on 128 bit Boundary #define IFX_ALIGN_256 (32) // Align on 256 bit Boundary #define Ifx_AlignOn256(Size) ((((Size) + (IFX_ALIGN_256 - 1)) & (~(IFX_ALIGN_256 - 1)))) #define Ifx_AlignOn128(Size) ((((Size) + (IFX_ALIGN_128 - 1)) & (~(IFX_ALIGN_128 - 1)))) #define Ifx_AlignOn64(Size) ((((Size) + (IFX_ALIGN_64 - 1)) & (~(IFX_ALIGN_64 - 1)))) #define Ifx_AlignOn32(Size) ((((Size) + (IFX_ALIGN_32 - 1)) & (~(IFX_ALIGN_32 - 1)))) #define Ifx_AlignOn16(Size) ((((Size) + (IFX_ALIGN_16 - 1)) & (~(IFX_ALIGN_16 - 1)))) #define Ifx_AlignOn8(Size) ((((Size) + (IFX_ALIGN_8 - 1)) & (~(IFX_ALIGN_8 - 1)))) #define Ifx_COUNTOF(x) (sizeof(x) / sizeof(x[0])) //______________________________________________________________________________ /** Convert context pointer to address pointer * \param[in] cx context pointer * \return address pointer */ IFX_INLINE void *__cx_to_addr(uint32 cx) { uint32 seg_nr = __extru(cx, 16, 4); return (void *)__insert(seg_nr << 28, cx, 6, 16); } /** Convert address pointer to context pointer * \param[in] addr address pointer * \return context pointer */ IFX_INLINE uint32 __addr_to_cx(void *addr) { uint32 seg_nr, seg_idx; seg_nr = __extru((int)addr, 28, 4) << 16; seg_idx = __extru((int)addr, 6, 16); return seg_nr | seg_idx; } /******************************************************************************/ IFX_INLINE void __ldmst_c(volatile void *address, unsigned mask, unsigned value) { *(volatile uint32 *)address = (*(volatile uint32 *)address & ~(mask)) | (mask & value); } /** 32bit load operation */ IFX_INLINE uint32 __ld32(void *addr) { return *(volatile uint32 *)addr; } /** 32bit store operation */ IFX_INLINE void __st32(void *addr, uint32 value) { *(volatile uint32 *)addr = value; } /** 64bit load operation */ IFX_INLINE uint64 __ld64(void *addr) { return *(volatile uint64 *)addr; } /** 64bit store operation */ IFX_INLINE void __st64(void *addr, uint64 value) { *(volatile uint64 *)addr = value; } /** 64bit load operation which returns the lower and upper 32bit word */ IFX_INLINE void __ld64_lu(void *addr, uint32 *valueLower, uint32 *valueUpper) { register uint64 value; value = __ld64(addr); *valueLower = (uint32)value; *valueUpper = (uint32)(value >> 32); } /** 64bit store operation which stores a lower and upper 32bit word */ IFX_INLINE void __st64_lu(void *addr, uint32 valueLower, uint32 valueUpper) { register uint64 value = ((uint64)valueUpper << 32) | valueLower; __st64(addr, value); } /******************************************************************************/ #endif /* IFXCPU_INTRINSICS_H */
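The Ifx_AlignOnN macros above round a size up to the next multiple of a power-of-two alignment via the standard (size + a - 1) & ~(a - 1) identity. A quick check of the arithmetic, written in Python for brevity but mirroring the C macros exactly:

def align_up(size: int, a: int) -> int:
    """Round size up to the next multiple of a (a must be a power of two)."""
    assert a > 0 and a & (a - 1) == 0
    return (size + a - 1) & ~(a - 1)

assert align_up(13, 8) == 16    # Ifx_AlignOn64(13): IFX_ALIGN_64 is 8 bytes
assert align_up(16, 8) == 16    # already-aligned values are unchanged
assert align_up(1, 32) == 32    # Ifx_AlignOn256(1): IFX_ALIGN_256 is 32 bytes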
import {
  Controller,
  Get,
  Post,
  Body,
  Patch,
  Param,
  Delete,
} from '@nestjs/common';
import { UsersService } from './users.service';
import { CreateUserDto } from './dto/create-user.dto';
import { UpdateUserDto } from './dto/update-user.dto';
import { ApiOperation, ApiResponse, ApiTags } from '@nestjs/swagger';
import { User } from './entities/user.entity';
import { Public } from '../metadata.definition';

@Controller('users')
export class UsersController {
  constructor(private readonly usersService: UsersService) {}

  @Post()
  @Public()
  @ApiOperation({ summary: 'Create User' })
  @ApiResponse({ status: 403, description: 'Forbidden.' })
  create(@Body() createUserDto: CreateUserDto) {
    return this.usersService.create(createUserDto);
  }

  @Get()
  @ApiOperation({ summary: 'Find All users' })
  @ApiResponse({
    status: 200,
    description: 'found users',
    type: [User],
  })
  findAll() {
    return this.usersService.findAll();
  }

  @Get(':id')
  @ApiOperation({ summary: 'Find One user' })
  @ApiResponse({
    status: 200,
    description: 'Find one user',
    type: User,
  })
  // Route params always arrive as strings; convert before calling the service.
  findOne(@Param('id') id: string) {
    return this.usersService.findOne(+id);
  }

  @Patch(':id')
  @ApiOperation({ summary: 'Update user' })
  @ApiResponse({
    status: 200,
    description: 'Updated user',
    type: User,
  })
  update(@Param('id') id: string, @Body() updateUserDto: UpdateUserDto) {
    return this.usersService.update(id, updateUserDto);
  }

  @Delete(':id')
  @ApiOperation({ summary: 'Delete user' })
  @ApiResponse({
    status: 200,
    description: 'Deleted user',
    type: User,
  })
  remove(@Param('id') id: string) {
    return this.usersService.remove(id);
  }
}
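A quick, hypothetical smoke test of the routes above; it assumes the Nest app listens on localhost:3000 and that CreateUserDto accepts name and email fields (neither assumption is confirmed by this file):

import requests  # assumed dependency for this sketch

BASE = "http://localhost:3000/users"  # assumed host and port

# create (the public route), then exercise the read/update/delete routes
created = requests.post(BASE, json={"name": "Ada", "email": "ada@example.com"})
print(created.status_code)

print(requests.get(BASE).json())                      # findAll
print(requests.get(f"{BASE}/1").json())               # findOne
requests.patch(f"{BASE}/1", json={"name": "Ada L."})  # update
requests.delete(f"{BASE}/1")                          # remove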
Marc Trestman talks with Josh McCown and Jay Cutler during the Bears’ 27-21 win over the Giants at Soldier Field on October 10, 2013. (Photo by Jonathan Daniel/Getty Images)

(CBS) Brian Urlacher knows who he wants starting at quarterback for the Bears moving forward. And it’s not who you might think. Urlacher, an analyst for FOX Sports 1, was asked Tuesday if he thinks Jay Cutler should regain his starting role over Josh McCown once healthy. “I think Jay might be healthy right now,” Urlacher said. “If you watch him move around, and some of the film they have of him earlier this week in a pregame, he might be healthy right now. “It’s going to be awfully hard to take Josh out now the way he’s playing,” Urlacher said. “If you look at the numbers since he came in — 13 touchdowns, one pick, they’re 3-2 — the yards are there, the wins are there, he’s making great decisions with the football, and I know he’s got great players around him.” When asked again, Urlacher doubled down on his comments. “In my opinion, (McCown) should be the guy, he should be the starting quarterback for the Bears. Even if Jay Cutler’s healthy. You can’t take a guy who’s this hot out of the football game. If I was on that team, I would have a hard time with them taking him out. … He’s making all the right decisions and the football’s going where it’s supposed to go in that offense.”
Exposure and accumulation of cadmium in populations from Japan, the United States, and Sweden.

Studies were carried out in Japan, the United States, and Sweden regarding comparability of analytical methods for cadmium, daily intake of cadmium via food, daily amount of cadmium in feces, concentrations of cadmium in different tissues and the body burden of cadmium, urinary excretion of cadmium, and cadmium concentrations in blood. It was found that the cadmium intake via food among adults is about 35 μg/day in Japan (Tokyo) and about 17 μg/day in the U.S. (Dallas) and Sweden (Stockholm). It varies with age in a way similar to calorie intake. Body burden increases rapidly with age. The half-time of cadmium is longer in muscles than in liver or kidneys. In the cross-sectional population samples studied (smokers and nonsmokers mixed), the average cadmium body burden at age 45 was about 21 mg in Japan, 9 mg in the U.S., and 6 mg in Sweden. Among nonsmokers in the U.S. and Sweden, the body burden at age 45 was about 5-6 mg. The difference in average body burden for smokers and nonsmokers is explained by differences in smoking habits. Cadmium excretion in urine was closely correlated with body burden, and about 0.005-0.01% of body burden is excreted daily in urine. Cadmium concentration in the blood was a good indicator of average recent intake over a 3-month period. Neither blood cadmium nor urine cadmium changed immediately after an increase of exposure level.

Introduction

Health Effects of Cadmium

Health effects of environmental cadmium have received considerable interest in recent years. Extensive reviews of toxicological aspects of cadmium have been published. The risks from occupational exposure to cadmium fumes or cadmium dust have long been well documented, and programs to monitor exposure and effects exist in most industrialized countries. Health effects of cadmium exposure in the general environment were acknowledged because of the occurrence of itai-itai disease and the high prevalence of proteinuria in cadmium-exposed areas of Japan. The acute and chronic health effects of cadmium exposure via air or food have been described in much detail. Acute exposure to high levels of cadmium in air may give a lethal pneumonitis, whereas chronic exposure to lower air levels may produce primarily emphysema and proteinuria. Acute exposure to cadmium via food causes vomiting, diarrhea, and abdominal pain, whereas the major effect of chronic exposure via food is renal tubular dysfunction. The cadmium-induced proteinuria is a sign of such dysfunction. It had been estimated that a long-term daily cadmium intake via food of 200-300 μg may be associated with an increased prevalence of tubular proteinuria. The "normal" daily intakes were estimated to vary between 15 and 75 μg, depending on country.

The Need for Research

When this cooperative study was planned, very few accurate data on the daily intake of cadmium were available. It was known that the U.S. industrial use of cadmium had doubled every decade since the beginning of this century, and that only a few percent of this Cd was recycled. The industrial use of zinc had increased in a similar way. This will also mean an increase in Cd exposure, as Zn and Cd are closely related both in nature and in industrial operations. Cadmium in air around point sources was known to contaminate soil and water, and cadmium in fertilizers and sewage sludge could be expected to increase the cadmium concentration in soil.
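A back-of-the-envelope check on the excretion figure quoted in the abstract: if only 0.005-0.01% of the body burden leaves via urine each day, then under an assumed first-order elimination the biological half-time is

$$ t_{1/2} = \frac{\ln 2}{k} \approx \frac{0.693}{(0.5\text{–}1)\times 10^{-4}\ \text{day}^{-1}} \approx 6900\text{–}13900\ \text{days} \approx 19\text{–}38\ \text{years}, $$

which is consistent with the "very long (decades) half-time" inferred below from the age-related rise in renal-cortex concentrations. (Treating urine as the only elimination route overstates the half-time somewhat, since fecal excretion adds to k.)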
Evidence has also accumulated concerning cadmium in cigarettes, showing that cigarette smoking could be an important source of exposure. In order to evaluate the risks of toxic effects in the general population, additional data on present daily cadmium intake were needed, as well as data on absorption, distribution, and excretion of cadmium. The autopsy data available showed that cadmium concentrations in renal cortex increased from almost none at birth to about 30-70 μg/g at age 50. The age-related rate of increase indicated a very long (decades) half-time of cadmium in renal cortex. The rapid increase of industrial use of cadmium and zinc pointed to the need to attempt retrospective measurement of daily intake and tissue levels. The limited data available in 1972 did not show any definite relationship between either blood or urinary cadmium and body burden or kidney concentration. Further data elucidating these relationships would be of value in order to select ways of monitoring individual intakes and kidney concentrations in the future. Research into the health effects of cadmium was underway in Japan, the U.S., and Sweden. Some of these studies were conducted on a bilateral basis among the three countries. Furthermore, the exposure situation could be expected to vary between the countries, the exposure being highest in Japan and lowest in Sweden, with the U.S. in between. These facts, taken together, indicated the desirability of a cooperative study involving all three countries simultaneously, using as far as possible comparable methods and taking advantage of the different situations in the three countries. In this general introduction and in the introduction to each section, reference will be made mainly to publications before this study was planned, in order to indicate the level of knowledge at the time of the study. Later publications will be referred to in the discussion sections. Studies carried out as a part of this cooperative project in each of the three countries are referred to by using the name of the country. It is well recognized, however, that all studies were limited to a particular geographic area within each country. There may be regional differences in cadmium concentrations in foodstuffs, tissues, etc., and the areas chosen may or may not be representative of the whole country.

Design of the Cooperative Study

The planning of the cooperative study occurred at a meeting in Tokyo in 1972. A protocol was set up which included six main areas for study (Fig. 1). The present cadmium exposure via food was to be measured by cross-sectional studies on personal exposure estimated by fecal cadmium concentration. Inhalation exposure via cigarette smoking was to be estimated indirectly from data on body burden of cadmium. Environmental exposure would also be evaluated retrospectively by analyzing old food and cigarette specimens. Body burden was to be estimated based on measurement of cadmium concentrations in kidney cortex, liver, pancreas, and muscle from autopsy specimens of cases of sudden and accidental death. Duplicate sets of autopsy specimens were to be stored in tissue banks in order to facilitate future studies of time-related changes in body burden. Excretion of cadmium from whole body and from renal cortex should be evaluated from cross-sectional studies on urinary cadmium excretion and comparison with body burdens and kidney concentrations.
The relationships between daily intake and blood and urinary cadmium were to be studied by analyzing consecutive specimens from persons with sudden increases in daily intake, e.g., newly employed workers. All through these studies, comparisons of analytical results in the different laboratories were to be carried out. The atomic absorption techniques used in each laboratory would be compared with each other and with an independent technique such as neutron activation analysis. Such method studies should be performed on the different biological specimens to be analyzed. Throughout the study period, annual meetings between representatives of the three research groups have been held, at which time the program has been discussed. Based on accumulated experience, the protocol has been amended, partly to improve the program and partly because of infeasibility of certain studies. From the beginning it was expected that there would be problems encountered in performing certain studies. As will be seen below, however, most of the original plans were followed. An outline of studies performed and laboratories participating is seen in Table 1. A consensus on the content of this report was reached at a meeting in Florida, October 1976, with participants from all project groups. The final draft was circulated to all members of the cooperative study.

Problems of Analysis

Since the purpose of this cooperative study was to estimate the cadmium exposure of nonoccupationally exposed populations in the three participating countries, it was imperative to assess the comparability of analytical methods used by participating laboratories in these countries. This assessment of comparability was complicated by both the variety of materials to be analyzed, e.g., food (especially grains), feces, tissues (e.g., kidney, liver), urine, and blood, and the range of expected values, from <1 ng/g to >100 μg/g. Although at the time of the agreement all participating laboratories were using methods based on atomic absorption spectrophotometry (AA), details of sample preparation and extraction procedures did vary between the laboratories. In Table 2 the major procedures and their abbreviations are listed. For most materials it was anticipated that AA would have a high accuracy, but it was known that materials with very low cadmium concentrations like urine and blood were difficult to analyze. A comparison of results of analysis of cadmium in urine between six laboratories in Japan had shown a variation of ±100% from the overall means. Normal urinary cadmium excretion was about 1 μg/24 hr in some studies, but up to 40-100 μg/24 hr in other studies. Interferences in cadmium analysis caused by sodium chloride and other salts may explain the very high values in some studies.

Methods of Interlaboratory Cross-Check of Analysis

The comparability of cadmium determinations by the participating labs was assessed by sending aliquots of samples of grains (wheat and rice), liver, blood, muscle, feces, and urine to each laboratory for analysis. Spiked water samples were also provided to the participating labs to investigate the accuracy of the final step of analysis. This comparison of cadmium analysis was carried out during the course of the rest of the cooperative studies. When the reasons for differences in analytical results could be identified, the methods were changed accordingly. In some cases it was then too late to repeat the initial study.
The occurrence of problems of this kind, as well as the details of preparation of samples and the number of samples analyzed, will be presented in the appropriate sections below. The samples for method studies I and III (Table 1) were prepared and distributed by KI in Sweden. Keio prepared and distributed samples for method study II, and EPA/SWRI carried this out for method study IV. The grain samples were dried. The liver, muscle, and feces samples were lyophilized, and the blood, urine, and water samples were stored in a refrigerator during preparation. All containers used for preparation or storage of samples had been washed or checked for cadmium contamination before use. Samples of sufficient size were prepared for the preparation of aliquots for all the laboratories participating in the study. The samples were homogenized by shaking and mixing before aliquots were taken. Each aliquot was given a unique code number, and the people carrying out the analysis did not know the codes of the different samples, which should ensure blind analysis.

(Footnotes to Table 2: a A, acid ashing procedure; E, extraction procedure. b Instrumental methods: AA, atomic absorption spectrophotometry; F-AA, regular flame AA; /D2, deuterium background correction; CR-AA, "carbon rod" flameless AA; HGA-AA, "heated graphite atomizer" flameless AA; ES, emission spectroscopy. c Ashing procedures: LT, low temperature (<200°C); HT, high temperature (>400°C); dry, dry ashing; wet, wet ashing with acids. If acids are named, this would be the wet ashing procedure. d Solvents: DDTC, diethyl dithiocarbamate; MIBK, methyl isobutyl ketone; APDC, ammonium pyrrolidine dithiocarbamate; Dith, dithizone; Chlor, chloroform. e For samples with solution concentrations <0.1 μg/g, HGA-AA/D2 was used instead.)

Other laboratories than KI, Keio, and EPA/SWRI were included in the method studies in order to further check the validity of analysis. Most of these laboratories used AA, but one used destructive neutron activation analysis (NA), which was used as a "reference" method. For details about the methods used, the reader is referred to a separate report of the method studies (Kjellstrom and Linnman, to be published) and to Table 2. The "reference" method (NA) was carried out in a similar way for all types of materials. The specimen was irradiated, and nonradioactive carrier cadmium was added. After chemical and electrolytic separation of cadmium from other constituents, radioactivity was measured. Recovery was estimated by weighing the separated cadmium and comparing the amount to the added amount of carrier. A good agreement between AA and other techniques would indicate that a reasonable degree of confidence could be placed in the results. In this section of the report only the results of the studies dealing with the water samples will be discussed, since the other results will be discussed in connection with the epidemiological studies undertaken.

Comparison of Results in Final Analytical Step

Forty standard water solutions with additions of cadmium were distributed from the Karolinska Institute to 22 laboratories, not only in Japan, the U.S., and Sweden, but also in other European countries than Sweden. Sodium chloride had been added to 10 of the solutions and phosphate to another 10. In most laboratories, AA after extraction in an organic solvent, or flameless AA with background correction, was used. Two laboratories used anodic stripping voltammetry and another laboratory used NA. The correlation coefficient between expected and observed values varied between 0.94 and 1.00. The results of these two "extreme" laboratories are depicted in Figure 2. Neither of these laboratories took part in any other cooperative study than the method study. When a large number of specimens is included, even a correlation coefficient of 0.94 implies a considerable scatter. One would expect a better agreement between true and measured values than what is shown in Figure 2 for one of the laboratories, if a good method is used with skill for analysis of standard water solutions. The laboratory in Figure 2 with r = 1.00 used F-AA after APDC/MIBK extraction for analysis, whereas the other laboratory used HGA-AA without other sample preparation than dilution. In the laboratories listed in Table 1 (participants in the cooperative study) and the NA laboratory, the correlation coefficients were above 0.96; the average recovery was between 90 and 103%. There was no laboratory where analysis of the solutions with addition of sodium chloride or phosphate gave systematic differences from the expected values. Thus, all methods used avoided the expected interferences. It was concluded that systematic differences between analytical results of the biological samples would not be caused by systematic errors in the final analytical step.

The major part of the general population's daily cadmium intake in Japan, the U.S., and Sweden comes via ingestion of food. Drinking water normally contributes very little. Also the contribution from ambient air is small, even around point sources. Cigarette smoking alone may cause a respiratory cadmium uptake similar to the uptake from food (15, 16).

Present Environmental Exposure

When estimating the average daily cadmium intake via food, basically two different approaches can be used. One is to measure cadmium in food and the other is to measure cadmium in feces. The latter is feasible because only about 6% of ingested cadmium is absorbed. Animal experiments have shown that less than 0.05% of the body burden is excreted daily via the gastrointestinal tract. Assuming that most of the daily cadmium intake comes via food, at steady state the excreted amount cannot exceed the 6% absorbed. Cadmium analysis of individual foodstuffs and calculation of daily cadmium intake from data in dietary surveys, or cadmium analysis of homogenized total daily diet samples, are the common ways of estimating daily intake by analysis of food. In the latter case, the diet samples could either consist of a set mixture of commonly used foods (market basket method) or of duplicates from actual diets consumed by persons in a study group (total diet collect method). Reported daily cadmium intakes using the food approach range from about 26 μg Cd/day in the U.S. to … μg Cd/day in Japanese "low-exposure" areas. Estimates of daily cadmium intake via food can be obtained from data on cadmium in feces in two studies before 1972. The reported values were 31 μg Cd/day in West Germany and 30-47 μg Cd/day in the U.S. In this cooperative study the daily cadmium content of feces was used as the principal method for estimating cadmium intake via food. Some data on cadmium in whole diet samples will also be presented.

Methods of Analysis

One comparative study of analysis of feces was carried out (Table 1). Aliquots of six homogenized lyophilized specimens were prepared and distributed from KI to the seven laboratories (2 g to each). The specimens were coded to ensure blind analysis. At Keio, triplicate 2-g samples were wet-ashed in HNO3/H2SO4.
Cadmium was extracted with DDTC/MIBK and analyzed with regular flame AA using deuterium background correction. At EPA/SWRI, duplicate 5-g samples were wet-ashed in H2SO4/H2O2. Cadmium was extracted with a combination of potassium iodide and Amberlite in decane. The final analysis was made with regular flame AA using deuterium background correction. The KI method used duplicate 2-g samples (in the epidemiological study five 10-g samples from each specimen were used) which were dry-ashed at 450°C for 30 hr. Cadmium in the ash was dissolved with 1-M HNO3 and analyzed with flame AA with deuterium background correction. Thus, the methods used were similar in many respects, and so were the results of the analytical comparison (Fig. 3). The results of neutron activation (NA) analysis agreed well with the AA results. Only four of the 23 results in Figure 3 fell outside the ranges of ± one standard deviation away from the sample means. The scatter of results from three other laboratories participating in the comparison tended to be greater than for the three laboratories mentioned above (Fig. 3), but the overall averages for all laboratories were close to the averages of the four laboratories (Fig. 3). The average ratio between individual results from Keio and the sample averages (for the four laboratories included in Fig. 3) was 1.02. The corresponding ratios were 0.90 for EPA/SWRI, 1.07 for NA, and 1.01 for KI. It was concluded that the agreement between the methods was acceptable. Analytical differences would not affect epidemiological comparisons by more than a maximum of about 10%. In the epidemiological study Keio had changed from wet ashing to dry ashing (450°C for 30 hr) of the feces specimens. An intralaboratory comparison of the two procedures on 10 specimens had given an average ratio between dry-ashed and wet-ashed aliquots of 0.97 and a correlation coefficient of 0.90, which was considered not to affect significantly the interlaboratory comparison of epidemiological results.

Fecal Cadmium Content

In Japan, 24-hr feces specimens were collected for four consecutive days from 19 male and 17 female students in the age group 18-24 years, from 11 children under age 5, and from two 54-year-old persons. All were living in Tokyo. The samples were weighed, mixed with a stick, and lyophilized before analysis. No data on smoking habits were collected. In the U.S., feces specimens were collected on two occasions from 86 persons in the age range 2-59. All were male volunteers in various occupations, living in Dallas, Texas. In order to estimate the average daily fecal amount, an individual daily amount was calculated by taking half of the total amount for the two collects. This was considered acceptable because most Western people normally only defecate once a day. No fecal markers were used in this study. The samples were stored frozen. Before analysis the samples were thawed and thoroughly mixed with a glass stick. Information about smoking, medical, and work histories was collected in a standard questionnaire. In the Swedish study 80 persons working at KI, or their friends and relatives, participated. The age range was 5-69 years, including 10 men in each 10-year age group and 10 women in the age group 20-29 years. During three consecutive days, 24-hr specimens of feces were collected. The daily fecal amount for each participant was estimated as one third of the total 3-day amount. No fecal markers were used.
After weighing and homogenization with a glass stick, subspecimens were taken for analysis from each 24-hr specimen. The smoking habits were recorded via a standard questionnaire. In all three studies, persons with occupational exposure to cadmium were excluded. The analysis of feces was carried out according to the methods described in the previous section. The age-group average cadmium concentration in feces was calculated based on individual feces specimens. It was seen in the Swedish study that the cadmium concentrations in fecal samples had a distribution closer to a log-normal than to a normal distribution, whereas the distribution of weights of feces samples was closer to a normal distribution. The evaluation of these agreements was made by comparing sums of squared deviations between the distributions. These findings were confirmed in the American study. In the following treatise, geometric means (and standard deviations) will be used for cadmium concentrations in feces, and arithmetic means will be used for fecal amounts and daily fecal cadmium amounts. The average fecal amounts in different age groups (Table 3) tended to be lowest in Japan and highest in the U.S., with Sweden in between. The differences were small. Except for the youngest age group, there did not seem to be much variation with age. It was assumed that the calculated average fecal amounts would reflect the daily amounts of feces in each country. An earlier report from the U.S. gave the daily average fecal amount for healthy adults as 115 g. This is slightly lower than most of the figures in Table 3, which, for instance, may be explained by changes in nutritional habits or differences between the groups studied. The ratios between cadmium concentrations in feces in Japan and the U.S. or Sweden were 2.5-3.5 (Table 3). There were no significant changes with age, but there was a tendency in the U.S. and Sweden for a slight decrease with age. When the age-specific average daily fecal cadmium amounts are compared (Fig. 4), it is again seen that the data from the U.S. and Sweden are similar. The variation with age in Sweden was shown to follow closely the variation of total daily energy intake with age. The results for Japanese men (Fig. 4) were about twice as high as the results from the other countries. In all the groups where comparisons between men and women could be made (Fig. 4), the daily fecal cadmium amount among women was about two thirds of the amount among men. This may reflect sex differences in energy intake. A limited study of seven male medical students from Gifu, Japan, was carried out in connection with the cooperative study. Five consecutive 24-hr specimens were collected from each student. The average daily fecal cadmium varied between 41 and 79 μg, with an overall average of 56 μg. The laboratory in Gifu participated in the interlaboratory cross-check study of cadmium analysis (Kjellstrom and Linnman, to be published) and their results agreed well with the other laboratories. One additional study (EPA/SWRI) had been carried out in the U.S. on 216 volunteers from Houston in the age range 20-49. Overnight specimens of feces were collected, and 24-hr fecal amounts were not measured. The analytical method was different from the one used for the study in Dallas described above. The geometric average fecal cadmium concentrations in ten-year age groups varied between 0.17 and 0.23 μg/g wet weight. These appear to be higher than those listed in Table 3.
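Because the concentrations are treated as log-normal, the group averages discussed above are geometric means with geometric standard deviations; a minimal sketch of that computation (the sample values are invented, not taken from Table 3):

import math

def geometric_mean_sd(values):
    """Geometric mean and geometric SD of positive values (log-normal fit)."""
    logs = [math.log(v) for v in values]
    mu = sum(logs) / len(logs)
    var = sum((x - mu) ** 2 for x in logs) / (len(logs) - 1)
    return math.exp(mu), math.exp(math.sqrt(var))

# Invented example values: fecal Cd concentrations in ug/g dry weight.
gm, gsd = geometric_mean_sd([0.12, 0.25, 0.18, 0.40, 0.22])
print(round(gm, 3), round(gsd, 3))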
Some of the volunteers were parking attendants and may therefore have been occupationally exposed to cadmium in car exhaust fumes. Because of differences in analytical method and lack of data on total fecal amounts, daily fecal cadmium amounts were not calculated. It was concluded that the average daily fecal cadmium at age 45 would be about 40 μg in Japan, about 19 μg in the U.S., and about 18 μg in Sweden. These figures will be used for comparison with cadmium concentrations in different tissues. The average daily fecal cadmium was slightly higher among smokers than among nonsmokers. In Sweden the 15 nonsmokers in the age range 20-59 had an average of 15.9 μg Cd/day, whereas the 25 former and present smokers had 19.1 μg Cd/day. In the U.S., the difference between smokers and nonsmokers was 2.8 μg Cd/day. In Sweden, 1.4 μg of the difference between smokers and nonsmokers could be explained by a greater average fecal output among smokers than among nonsmokers, and a difference of 1.8 μg Cd/day in feces would be the result of smoking itself. This includes both the amount cleared from the respiratory tract that is swallowed and the amount that is excreted via bile and the intestinal cells. In the U.S. the smokers had a smaller average fecal amount than the nonsmokers (155 g as compared to 165 g), and the difference explained by smoking itself would therefore be 3.8 μg Cd/day, slightly greater than the 2.8 μg Cd/day given above.

Cadmium in Food

In order to assess the agreement between daily intake estimates using the feces method and the food analysis method, Keio collected total diet specimens for 20 consecutive days from Keio hospital in Tokyo (Iwao et al., to be published). Standard patient diets were prepared by hospital dieticians, and all the food for each day was combined and homogenized. The average daily energy content in these specimens was 9.6 MJ and the average daily wet weight of food was 2400 g (= 530 g dry weight). Twenty specimens were analyzed with the same method as for feces. The average cadmium concentration was 0.07 μg/g (dry weight, S.D. = 0.03), which corresponds to a daily total amount of cadmium in food of 35 μg. This agrees well with the figures given above for the cadmium amount in feces of adults from Tokyo. No other studies on present cadmium intake via food were carried out as a part of the cooperative study, although during the course of the study some other analyses of foodstuffs took place in the participating laboratories. The average daily cadmium intake among adult men in Sweden was estimated at 17.2 μg Cd/day based on national average food consumption data and analysis of cadmium in wheat, vegetables, milk, and meat products (the four main food items). This is very close to the average daily fecal amount of cadmium (18 μg) reported above.

Cadmium in Tobacco

Samples of 18 different brands of cigarettes were analyzed by KI with the same AA method that was used for grains and tissues (see other sections below). Brands sold in Sweden and Finland contained between 1 and 1.9 μg Cd/cigarette, whereas those sold in Japan contained between 1.6 and 2.3 μg Cd/cigarette. No cigarettes sold in the U.S. were analyzed. The cadmium in mainstream smoke was collected on filters with a smoking machine that automatically smokes one cigarette at a time. The puff frequency can be varied. The puff size is 35 ml. Depending on puff frequency (1, 2, or 3 puffs/min), the cadmium amount in the mainstream smoke of one brand of cigarette varied between 0.14 and 0.19 μg Cd/cigarette (22 determinations).
In this particular brand of cigarette the average total amount was 1.5 μg Cd/cigarette. Thus, about 10% of the cadmium amount in the cigarette would be inhaled.

Past Environmental Exposure

Because of the long half-time of cadmium in the critical organ, cross-sectional studies of cadmium concentrations in different tissues will show both age-related variations and cohort-related variations, depending on changing average daily cadmium intake with time. Studies of changes of daily cadmium intake with time would improve the accuracy of estimations of half-time based on age-related variations and could be of value for prognosis of future changes. With "past exposure" we mean the exposure decades previously. The earliest extensive reports on cadmium in food came in the early 1960's (35, 36). The data referred to the U.S., and subsequent reports are available from the same country (27, 37). The estimates of daily cadmium intake via food vary between 4 and 71 μg/day for the different studies, but there was no distinct trend with time. The target populations in early and late studies were not necessarily comparable, and differences in analytical methods may influence the validity of comparisons. It was the aim of the cooperative study to collect and analyze specimens of old foodstuffs like rice, wheat grains, tea leaves, and canned food in each of the three countries. In Japan and the U.S., only a few specimens were found, and the account below is based mainly on the Swedish results.

Methods of Analysis

Two method studies of analysis of grain were carried out (method studies I and III, Table 1). In method study I, cadmium analysis was compared on 10 wheat and 10 rice specimens distributed by KI to eight laboratories. The following methods were used by those participating in the past exposure study. At IPH, 2-g samples of grain were dry-ashed at low temperature (125°C for 4 hr). The ashes were dissolved in HNO3 and cadmium analyzed with regular flame AA. Keio used a similar method; EPA/SL used optical emission spectroscopy after a combination of dry- and wet-ashing, and KI used HGA-AA/D2 after dry-ashing of duplicate 4-g samples at 450°C for 30 hr. The analytical results of low-level (<0.2 μg/g) specimens of wheat and rice showed a good agreement between NA and AA at KI and two other laboratories not participating in the cooperative study (Fig. 5: comparison of cadmium analyses by AAS for Sweden, Japan, and the USA, in ng Cd/g wet weight). Spark source mass spectrometry was utilized in another laboratory (United States National Bureau of Standards), also with good agreement. However, AA analysis at Keio, IPH, and EPA/SL gave consistently 2-15 times higher values than the other laboratories. The AA method used at the Karolinska Institute had been studied in detail by addition of radioactive cadmium. It had also been previously compared to NA on a large number of samples, whereupon the agreement was very good. Thus, there was reason to believe that the methods used at Keio, IPH, and EPA/SL gave erroneously high values. One year later, another method study was performed (method study III, Table 1) in which ten specimens of rice and five specimens of wheat were analyzed by eight laboratories (three outside the cooperative study). At Keio the analytical procedure had been changed so that D2 background correction was used with the flame AA, and wet-ashing (HNO3/HClO4) was now used at IPH instead of low-temperature dry-ashing.
EPA/SWRI participated instead of EPA/SL in this second method study. EPA/SWRI leached the cadmium from 3 g rice or wheat with 10 ml 1% HNO3 for 25 hr at room temperature. Cadmium was extracted from the leach solution by DDTC/MIBK and analyzed by F-AA/D2. The results are depicted in Figures 6 and 7. IPH still consistently had high values. On average the results of rice analysis at IPH were 3.7 times higher than the sample averages for all laboratories. The corresponding figure for wheat analysis was 1.47. Losses caused by a deficient ashing procedure may explain these results. For the other three laboratories (Keio, NA, and …) the agreement was satisfactory (Figs. 6 and 7). On average, the results were 0.94 to 1.27 times the sample averages, depending on material and laboratory. Three old Japanese rice samples were analyzed by IPH, and 15 old American grain samples were analyzed by EPA/SL. The results were similar to results from recent specimens, but due to great variations in analytical results compared to the other laboratories, the time-related changes could not be evaluated.

Old Food Specimens

After an enquiry in Sweden to museums and agricultural research laboratories, and advertisements to the general public, 322 old specimens of grains (mainly wheat) were received, as well as old specimens of home-canned vegetables (n = 276), mushrooms, and other foodstuffs.

Old Tobacco Specimens

Fifteen specimens of cigarettes sold in Sweden between 1918 and 1970 were found in a tobacco museum. The cadmium concentrations were analyzed by KI by the AA method also used for grains and tissues. The results ranged between 1.0 and 6.5 μg Cd/cigarette, and there was no clear tendency with time.

Present Body Burden

The body burden of cadmium in a "standard American man" (70 kg body weight) has been calculated as 30 mg. Friberg et al. calculated from the limited available data that in Europe the corresponding figure would be 10-18 mg and in some Japanese "nonpolluted" areas, 40-80 mg. Because of the highly cumulative nature of cadmium, body burden estimates and estimates of population-average cadmium concentrations in the critical organ (kidney cortex) are of greater value for assessing the risk of cadmium-induced tubular damage in a particular population than the present daily intake levels. The relationship between present and past exposure levels, body burden, and blood or urine cadmium concentrations was not well known at the start of the cooperative study. Accurate and comparable data on cadmium body burden under different exposure situations in different countries were therefore in great need. It had been estimated that about a third of the body burden of cadmium is in the kidneys and a sixth is in the liver after long-term low-level exposure. These organs were selected as major indicators of body burden, and in addition cadmium concentrations were measured in muscles, blood, and pancreas. For obvious reasons the specimens of internal organs had to be collected at autopsies. In vivo neutron activation analysis is a new, promising analysis method which may in the future enable us to carry out population studies of cadmium concentrations in different tissues of living people. Only persons who had died from sudden or accidental death were included in the cooperative study. This type of selection avoids inclusion of people with long-term illness that may have caused deterioration of kidneys and possible concomitant rapid changes in cadmium concentrations in the kidneys.
On the other hand, there may be an overrepresentation of smokers in the group studied because they have higher mortality rates for accidents and sudden death than nonsmokers. Higher cadmium body burdens and higher urinary cadmium excretions have been found among smokers than among nonsmokers. This agrees with the finding that cigarette smoking can contribute significantly to the daily absorbed amount of cadmium (see section "Present Environmental Exposure" above). Individual data on smoking habits are therefore important when measuring cadmium body burden.

Methods of Analysis

Keio, EPA/SWRI, and KI participated in the epidemiological study of cadmium concentrations in liver, kidney, muscles, and pancreas. Keio used 1-g specimens wet-ashed in HNO3/H2SO4/HClO4. Cadmium was extracted with DDTC/MIBK and analyzed with flame AA by use of D2 background correction. EPA/SWRI used duplicate 1-g specimens that were dry-ashed at low temperature (125°C for 4 hr) in oxygen. The ashes were dissolved in HNO3 and cadmium was analyzed with flame AA and use of D2 background correction. KI used duplicate 2-g specimens that were dry-ashed at high temperature (450°C for 30 hr). The ashes were dissolved in HNO3, and analysis was carried out as for EPA/SWRI. For low-level specimens (< 0.1 µg Cd/g) HGA-AA was used instead of F-AA at EPA/SWRI and KI. NA was carried out as described above on 1-g specimens.

In a limited method study (No. II, Table 1) three specimens each of frozen liver and kidney cortex were sent from Keio to KI and EPA/SWRI. There was a 10%-36% difference in average results. The differences may partly be explained by different degrees of drying of the specimens at analysis. Furthermore, the methods used in this method study were not exactly the same as those that were described above and were used in the epidemiological study. A more extensive method study was therefore carried out.

[Figure 9 residue: AA analysis, µg Cd/g liver.]

Aliquots of 10 lyophilized liver specimens were distributed by KI to the four laboratories mentioned above and to an additional three laboratories (method study III, Table 1). There was a good agreement between all laboratories. The correlation coefficients (in the range 2-10 µg Cd/g lyophilized liver) between NA and Keio, EPA/SWRI and KI were +0.95, +0.97, and +0.98, respectively. As is seen in Figure 9 all these AA laboratories had slightly lower results than NA. On the average, the results from Keio were 1.02 times the sample averages for all laboratories. The corresponding figures for EPA/SWRI, NA, and KI were 0.98, 1.10, and 0.90. Aliquots of six lyophilized muscle specimens were distributed from EPA/SWRI to Keio, KI, and the NA laboratory (method study IV, Table 1). There was a greater variation in the muscle analysis results (Fig. 10) than in the results of liver analysis. The cadmium concentrations in muscle are about 10 times lower than the liver, and it is more difficult to achieve a high and consistent accuracy at this level.

Ideally, these method studies should have been followed by modification of the methods and further interlaboratory comparisons of analysis until a very close agreement was found. Only after that should epidemiological studies begin by using verified analytical methods. Unfortunately this was not feasible in the present study, and intercountry comparisons must be made with the results of the method studies in mind.

It was concluded that for tissues with high cadmium levels (liver, kidney) there was a close agreement of analysis results (up to 10% systematic differences). Average analysis results of muscle differed up to about 40%, and results of blood analysis
differed even more. For each type of tissue the average results from Japanese laboratories tended to be higher than results from other laboratories, and this has to be taken into consideration when the results of the epidemiological studies are evaluated.

Cadmium in Liver and Kidney

In the epidemiological studies, samples from autopsies in Tokyo, Dallas, and Stockholm were analyzed. As mentioned above, liver and kidney cortex were collected at autopsies of cases of accidental or sudden death. The samples of liver (1-2 g) were taken from the left lobe and the samples of kidney cortex (1-2 g) were dissected as a 5 mm thick slice of the lower pole of one of the kidneys. The following account includes only those cases for which both liver and kidney cortex were analyzed. Further, only those data were included that were based on the analytical methods that had been cross-checked as discussed in the methods section above. The numbers of people studied that fulfilled these criteria were 157 in Tokyo (men and women, age range 1-79), 164 in Dallas (men only, age range 10-59), and 285 in Stockholm (men and women, age range 2-89). The detailed data are given in Tables 4 and 5.

It was shown in the Swedish study that cadmium concentrations in both liver and kidney cortex follow log-normal distributions. The geometric averages and standard deviations are therefore given in the tables along with the arithmetic averages and standard deviations, which would be more comparable with averages given in earlier publications. In each of the age- and sex-groups from each country, a log-normal distribution fitted the data better than a normal distribution.

As is seen in Tables 4 and 5, in most of the age groups where comparisons between data for males and females can be made, the averages for females were slightly higher. Because data only on males were collected in the U.S., the three-country comparisons will be based on the male data. In Japan no individual smoking habit data could be collected. No stratification for smoking habits was carried out in the initial comparisons between countries, but the possible influence of differences in smoking habits will be discussed below. The proportion of smokers in the Swedish group was the same for men and women, about 80%.

Figure 12 shows how the average cadmium concentration in liver increases with age in all three countries. There is a leveling off with increasing age, but it is difficult to estimate at what age this takes place. In kidney cortex (Fig. 13) there is a continuing increase up to age 40-60 and then a decreasing concentration with age. The age-related changes agreed with earlier reports from the three countries. Our data showed only small differences between Sweden and the U.S., whereas the Japanese results were generally about 4-5 times higher (Figs. 12 and 13).

[Footnotes to Tables 4 and 5: Tables 4 and 5 include only the cases for whom both liver and kidney cortex were analyzed. b These averages are not exactly the same as those reported by Johnson et al. The table was based on preliminary data, some of which were later corrected by Johnson et al. Further, only cases for which both liver and kidney data were available were included here, and Johnson et al. used a logit-transformation before calculation of the average. c Arithmetic mean and standard deviation. d Geometric mean and standard deviation.]
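Since Tables 4 and 5 report geometric alongside arithmetic statistics, a minimal sketch of how the geometric mean and geometric standard deviation are obtained from log-normally distributed concentrations may be useful. The numbers below are illustrative placeholders, not study data.

import math

def geometric_stats(values):
    """Return (geometric mean, geometric SD) of positive values."""
    logs = [math.log(v) for v in values]
    n = len(logs)
    mean_log = sum(logs) / n
    # sample standard deviation of the log-transformed values
    sd_log = math.sqrt(sum((x - mean_log) ** 2 for x in logs) / (n - 1))
    return math.exp(mean_log), math.exp(sd_log)

# hypothetical kidney cortex concentrations (ug Cd/g wet weight)
sample = [8.0, 12.0, 15.0, 22.0, 35.0]
gm, gsd = geometric_stats(sample)
print(f"geometric mean = {gm:.1f} ug/g, geometric SD = {gsd:.2f}")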
Based on Tables 4 and 5 the ratios between cadmium concentrations in kidney cortex and liver were calculated (Table 6). In Japan and Sweden the ratios increase with age up to about age 40 and then decrease, reflecting the age-related changes in kidney cortex and liver cadmium concentrations. In the U.S. the ratios increase continuously with age from age 10 to 59. In each age group the ratios in Sweden are higher than the ratios in Japan. The U.S. ratios tend to be in between. It is known from animal experiments and autopsy data from industrial workers that an increasing proportion of the body burden of cadmium will be in the liver at increasing exposure levels. The differences in kidney cortex to liver ratios between the three countries seen in this study may be related to the different exposure levels.

Cadmium in Muscles

Samples of abdominal wall muscle were collected at autopsies from similar groups of people in the three countries. In the U.S. the study group was identical to the group from which liver and kidney cortex specimens were collected (males, ages 10-59, n = 164) (Table 7). In Japan and Sweden the study groups for muscle analysis were only partly the same as the groups for liver analysis. Muscle specimens from 208 men and women in the age range 1-79 in Japan and 61 men and women in the age range 18-69 in Sweden were studied.

Both in Japan and Sweden the women had, in most age groups, higher average cadmium concentrations in muscle than men (Table 7). Among the men there was a continuous increase in cadmium concentration with age (Fig. 14) in each of the three countries. The results indicate that the half-time of cadmium in muscles is very long (several decades) and even longer than the half-time in kidney cortex (Fig. 13). The differences between the countries follow the same pattern as for liver and kidney cortex. In the U.S. the cadmium concentrations in muscle are about twice as high as in Sweden, and in Japan they are about 5-10 times as high as in Sweden. Even though the method study for muscle did not give as good interlaboratory agreement as the method study for liver, only a small part of these differences could be explained by systematic differences in analytical results as described above. There are no published extensive studies of cadmium in muscles with which these data can be compared.

Cadmium in Blood

In the cooperative study, cadmium concentrations in blood were studied mainly with the aim of elucidating how blood cadmium reflected daily intake or body burden. No systematic cross-sectional studies covering a large age range were therefore carried out. Some data were collected that can be used to calculate the contribution to cadmium body burden from blood.

In Japan (SE laboratory) vein blood cadmium was analyzed for 213 male newspaper factory workers in the age range 20-55 years. They had no occupational cadmium exposure. The overall arithmetic average was 4.5 ng Cd/g blood (S.D. = 2.6 ng/g). No data on smoking habits had been collected. It was reported that the background correction in the analysis (F-AA/D2 after extraction) amounted to about 50% of the total absorption in the AA analysis. This again points to the problems of analysis of cadmium in blood discussed above.

In the U.S., 216 males and females (age range 18-53 years) from Houston were studied by SWRI in connection with a survey of lead exposure from automobile exhausts. The group included policemen, garage attendants, and housewives. Individual data about smoking habits were collected.
The analysis of cadmium in vein blood was carried out with the Delves cup AA technique instead of the one used for the interlaboratory cross-check of cadmium analysis (EPA/SWRI). The overall arithmetic average cadmium concentration in blood was 4.9 ng/g (S.D. = 1.5 ng/g) for the 127 men and 6.5 ng/g (S.D. = 2.4 ng/g) for the 89 women. There were no obvious variations with age of cadmium concentration in blood within this age range. The 77 smoking men had 5.2 ng Cd/g blood as compared with 4.5 ng Cd/g blood for the 50 nonsmoking men. The 49 smoking women had 6.0 ng Cd/g blood and the 40 nonsmoking women had 7.2 ng Cd/g blood. A systematic effect of smoking on blood cadmium could therefore not be seen in these data.

In Sweden 39 newly employed workers in a cadmium battery factory were studied. Venous blood samples were collected before cadmium exposure began. The overall arithmetic average was 4.5 ng Cd/g blood (S.D. = 2.1 ng/g) when the method described above was used (KI). All samples were also analyzed with a Delves cup AA method. The overall average result using this method was 3.1 ng Cd/g blood. There seemed to be a systematic difference of 1-2 ng/g between these two methods at KI. There was no tendency for a difference between the 27 men and the 12 women in the group, but the smokers had on average higher results than the nonsmokers. The average results for the HGA-AA method were 3.0 ng/g (nonsmokers) and 4.8 ng/g (smokers), and for the Delves cup AA method 1.4 ng/g (nonsmokers) and 3.4 ng/g (smokers).

Due to the uncertainties of the comparability of the different analytical techniques, no quantitative comparison between the three countries can be made. The Swedish data seem to be lower than the other data, but the difference is about the same as the difference in analytical results seen in the interlaboratory cross-check. The results are of the same magnitude as reliable data in earlier reports. Some other earlier reports reviewed by Friberg et al. gave cadmium concentrations in blood that were obviously erroneously high due to inaccurate analytical methods. Higher blood cadmium concentrations among smokers than among nonsmokers have also been reported. It was found that nonsmoking adults in Sweden had an average of about 0.5 ng Cd/g blood and smokers had an average of about 2 ng Cd/g (Delves cup AA).

For the calculation of the contribution of blood cadmium to cadmium body burden, it was decided to use a range instead of a single number. It was estimated that in each of the three countries the average adult cadmium concentration in blood would be between 1 and 6 ng Cd/g. A smoker would have about a 1 ng Cd/g higher value than a nonsmoker.

Cadmium in Other Tissues

Analysis of other tissues, like pancreas and fat, was carried out to a more limited degree than the analysis of liver and kidney cortex. The pancreas cadmium levels in Sweden were similar to the liver cadmium levels, and the fat cadmium levels were similar to the muscle cadmium levels (Kjellstrom and Elinder, to be published). In the body burden estimate below, pancreas was included as a separate tissue even though its weight is very small. The analytical comparability between the three countries was assumed to be the same as for liver. Detailed data will not be given here, as they have been published elsewhere.
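For scale, the blood range just estimated (1-6 ng Cd/g) translates into only micrograms of body burden, against total burdens measured in milligrams; a one-line check, where the blood mass (roughly 5500 g, from "reference man") is our assumption:

$$m_{\mathrm{Cd,\,blood}} = c_{\mathrm{blood}} \times m_{\mathrm{blood}} \approx (1\text{-}6\ \mathrm{ng/g}) \times 5500\ \mathrm{g} \approx 5.5\text{-}33\ \mu\mathrm{g}.$$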
In the group of men between 30-59 years (the group used for the body burden calculations below) the geometric average cadmium concentrations (wet weight) in pancreas were 2.2 µg/g (Japan), 0.70 µg/g (U.S.), and 0.50 µg/g (Sweden).

In order to estimate the contribution to cadmium body burden from tissues other than liver, kidney cortex, muscle, blood, and pancreas, it was assumed that the ratio between the cadmium concentration in these other tissues and the concentration in muscles was the same as in the report by Sumino et al. They analyzed the cadmium concentration (flame AA after extraction in DDTC/isopropyl acetone) in 19 different tissues from 30 Japanese (age range 15-65 years) living in a nonpolluted area. The estimated average weights for each tissue were given, and by multiplying these weights with average cadmium concentrations, a weighted average cadmium concentration in "other tissues" (15 tissues; excluding liver, kidney, pancreas, and muscle) could be calculated. The ratio between this weighted average cadmium concentration in "other tissues" (0.19 µg Cd/g) and the average cadmium concentration in muscles was 0.64.

Calculation of Body Burden

Our aim was to calculate the body burden of an average 45-year-old man in the 1970's for each country. Cadmium concentrations in the different tissues in the age range 30-59 years were used for the calculations. In Figure 15 the distributions of data for the three countries were plotted. Cadmium concentrations in urine were also included for comparison. It is seen that between each of the four tissues included in the figure there is roughly one order of magnitude difference in cadmium concentrations. Most of the observed distributions fit very well to log-normal distributions.

The initial calculation of body burden is based on the whole group studied regardless of smoking habits. It was assumed that the cadmium concentrations in kidney cortex were 50% higher than in the whole kidney. The weights of the different tissues in the U.S. and Sweden are those given for "reference man". Corresponding weights for an average Japanese person were given by Sumino et al. The concentrations in kidneys, liver, pancreas, and muscles used in the calculation were based on the geometric average for 30-59 year old men as reported by each laboratory (Fig. 15). For a Japanese 45-year-old man the cadmium body burden is the highest (about 21 mg). The American cadmium body burden is about 8.7 mg and the Swedish about 6.4 mg (Table 8). These estimates all refer to mixed smoker-nonsmoker populations as they occurred in the epidemiological studies. Only a fraction of the differences between the countries could be caused by analytical differences (see above). (A numerical sketch of this weighted-sum arithmetic is given below.)

Past Body Burden

As was mentioned in the section on past exposure, the long half-times of cadmium in many body tissues make it important to study secular changes of exposure or body burden in order to properly evaluate findings in cross-sectional studies. The body burdens of cadmium in old people reflect both recent exposure and past exposure, but data about present body burdens alone cannot be used to estimate how the exposure levels have changed with time. The only feasible way to study past body burden would be to analyze tissue specimens from autopsies carried out long ago. The storage procedure must be suitable so that losses or contamination do not occur.

Cadmium in Old Kidney Specimens

Old tissue specimens for analyses in the cooperative study were collected only in Sweden.
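The numerical sketch referenced under "Calculation of Body Burden" above follows. All tissue weights and concentrations are illustrative placeholders, not the values of Tables 10 and 11; only two conversions are taken from the text, namely the cortex-to-whole-kidney factor (cortex concentrations 50% higher than the whole kidney) and the "other tissues" concentration (0.64 times the muscle concentration).

# Sketch of the weighted-sum body burden arithmetic described above.
# Weights and concentrations are placeholders, NOT the Table 10/11 values.
kidney_cortex_conc = 30.0    # ug Cd/g wet weight, placeholder
muscle_conc = 0.05           # ug Cd/g, placeholder

# (tissue, assumed weight in g, assumed concentration in ug Cd/g)
tissues = [
    ("kidneys", 300.0, kidney_cortex_conc / 1.5),    # whole kidney = cortex / 1.5
    ("liver", 1800.0, 1.5),
    ("pancreas", 100.0, 0.5),
    ("muscle", 28000.0, muscle_conc),                # 40% of a 70 kg body
    ("blood", 5500.0, 0.003),                        # 3 ng/g expressed in ug/g
    ("other tissues", 20000.0, 0.64 * muscle_conc),  # ratio taken from the text
]

body_burden_ug = sum(weight * conc for _, weight, conc in tissues)
print(f"body burden = {body_burden_ug / 1000:.1f} mg")  # ~10.8 mg with these inputs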
Thirty-three specimens of adult human kidneys from autopsies during 1880-1899 were found in anatomical museums. The specimens had been stored in alcohol or formalin and were all in good anatomical condition. Small (1 g) samples of kidney cortex were taken from each kidney so as not to damage the specimens. Cadmium concentrations in these samples were analyzed at KI with the AA method for kidney described above. The cadmium content of the storage liquids was also analyzed by the same method. The cadmium concentrations in these liquids were generally very much lower than the concentrations in the kidneys. A two-year experiment with two fresh kidneys stored in ethanol showed that losses of cadmium from tissue to storage liquid were small. By also taking the individual kidney weights and storage liquid weights into account, it was estimated that any possible losses of cadmium from the kidneys to the liquid must have been very small.

In order to avoid bias caused by different moisture content of fresh kidneys and old kidneys, dry-weight-based values were used for the comparison between new and old kidneys. The geometric mean cadmium concentration in kidney cortex of the 33 adults from the 19th century was 15 µg Cd/g dry weight (95% confidence limits of the mean: … µg/g). The corresponding figure for 39 nonsmoking adults who died in 1974 was 57 µg Cd/g dry weight (95% confidence interval: 46-71 µg/g). These data support the data in the section on past exposure showing an increased concentration of cadmium in certain foodstuffs (Fig. 7) with time.

Urinary Excretion after Long-Term Exposure

Both animal and human data indicate that urine is one of the major excretion media for cadmium. Fecal excretion of cadmium has not been quantified in humans, but in animals it is of the same magnitude as urinary excretion. Urinary excretion of cadmium has been used in a number of studies to estimate exposure in both occupationally exposed groups and groups exposed via food. The relationship between urinary excretion and body burden or exposure in human beings is not very well known, however. Furthermore, animal experiments have shown that cadmium excretion increases drastically when cadmium-induced renal tubular damage occurs.

The aim of the urinary cadmium excretion analyses in this cooperative study was to compare urinary excretion with present daily intake and present body burden in comparable general population groups. Clustered samples according to age group were selected from the three countries (Tokyo, Dallas, and Stockholm). The people in the autopsy studies had died from accidental or sudden death. There may therefore be a higher proportion of smokers in the autopsy groups (see section on Present Body Burden, above) than in the groups of the general population in which urinary excretion was studied. Otherwise there was no reason to believe that the groups selected for the body burden studies and the urinary excretion studies would have different average cadmium intakes.

Methods of Analysis

In method study I (Table 1), samples of urine with a cadmium concentration range from less than 1 µg/l. to 30 µg/l. were analyzed in four laboratories
by using AA and in one laboratory by using NA. Generally these were relatively high cadmium concentrations in urine, and there was a good correlation between the AA laboratories and the NA laboratory in the whole group of specimens (r = 0.96-0.98). However, in the two specimens with the lowest cadmium concentrations (< 1 µg/l.), NA could not detect cadmium and there was a great difference among the AA laboratories (including Keio, IPH, and KI).

In "normal" urines the cadmium concentrations are usually below 1 µg/l., and it was therefore decided to carry out a further method study on the low-level samples. Eight frozen urine specimens were distributed from KI to 11 other laboratories as a part of method study III. All specimens were from "normal" Swedes and the expected cadmium concentrations were < 1 µg/l. NA could not detect cadmium in any of the specimens. Two of the AA laboratories gave results one order of magnitude higher than all the other AA laboratories and were excluded from the comparison in Figure 16.

Keio used 100 ml samples that were wet-ashed in HNO3/H2SO4. Cadmium was extracted with dithizone/chloroform and analyzed with regular F-AA. At SWRI, cadmium was extracted directly from 10 ml urine with APDC/MIBK and analyzed with HGA-AA/D2. KI used 25 ml urine that was both dry-ashed at 450°C for 30 hr and wet-ashed in HNO3. From an acid solution of the ashes cadmium was extracted with APDC/MIBK and analyzed with HGA-AA/D2. The individual results for these three laboratories are given in Figure 16. There was good agreement between Keio and KI (r = 0.84), with an average for all specimens of 0.51 µg/l. at Keio and 0.45 µg/l. at KI. The scatter of the SWRI results is greater, but the average of all eight specimens, 0.60 µg/l., is close to the average of the other laboratories.

The epidemiological studies were carried out with these methods, but an additional comparison of analysis at SWRI and KI was carried out (method study IV, Table 1). In this study SWRI distributed […]. All the urinary concentrations in the epidemiological studies of the Japanese and the Swedish group were corrected for specific gravity to the average specific gravity in the Swedish group (1.020). Specific gravity was not measured in the American group, but the average for a similar group studied earlier was 1.021. No corrections for specific gravity were done for the American data. The average for the Japanese group in the cooperative study was also 1.021.

Cadmium in Urine

In Japan a sample of 609 persons in the age range 0-90 years was studied. These were people coming to test their urines in a health center in Tokyo because high cadmium concentrations in soil had been found in the area where they lived. However, it was found that consumption of local food was rare, and there were no indications that the daily cadmium intake in their area was higher than in other parts of Tokyo. All specimens were analyzed by Keio with the method given above. This study was carried out before the cooperative studies were started in the other countries. No data on smoking habits were collected.

In the U.S., 87 men from Dallas in the age range 1-70 were studied. They were volunteers among hospital staff and service club members. None of them had occupational exposure to cadmium. The urines were analyzed by SWRI with the method given above.

In Sweden a sample of 130 persons was selected for the study in the following way. From a roster of nonsmoking-concordant monozygotic male twin pairs living in Stockholm, persons were contacted until five complete volunteer pairs in each 10-year age group from 10-69 years were found. In the same way 10 complete female nonsmoking pairs in the age group
40-59 and 10 complete male smoking-discordant pairs in the age group 40-59 were selected. Ten volunteers under age 10 and in each of the age groups 70-79 and 80-89 were also studied. All specimens were analyzed by KI with the method given above.

The distributions of individual urinary cadmium concentrations within any one age group fitted more closely to log-normal than to normal distributions. This was tested in the Swedish study and is seen also in Figure 15 for the age group 30-59 years. In Table 9 the results are given for the three countries both as arithmetic and geometric means and standard deviations. There is a tendency for increasing urinary cadmium concentrations with age, which is more clearly shown in Figure 17 for men from the three countries. In the age groups 0-9 and 40-59, where both female and male data are available in Japan and Sweden, there were systematically higher average concentrations for women than for men (Table 9). An earlier American study of 216 men and women from Houston had not shown such a difference between the sexes.

Smoking habits do influence urinary cadmium excretion. In the age range 40-59, 10 smokers had on average 100% higher values than 10 nonsmokers, corresponding to a difference of about 0.3 µg/l. In […] nonsmokers had an average of 0.67 µg/l., giving a difference of 0.33 µg/l. An earlier American study had given similar differences between smokers and nonsmokers.

The urinary cadmium concentrations in the Japanese group are, depending on the age group, two to five times higher than in the group of nonsmoking Swedes (Fig. 17). The difference can be explained only to a small degree by the inclusion of smokers in the Japanese group. The American group has results in between the Japanese and Swedish groups.

Blood and Urine as Indicators of Exposure and Body Burden

Due to the long half-time of cadmium in the critical organ as well as in several other tissues, it would be of value for epidemiological studies and for individual occupational health monitoring to be able to measure exposure levels, total dose, body burden, and the cadmium concentration in critical organs via some easily accessible indicator medium like urine, blood, hair, nails, or feces. The best way to evaluate these relationships would be to carry out longitudinal studies of cadmium levels in various tissues of people with sudden changes in their cadmium exposure. Cross-sectional studies comparing exposure levels, body burdens, and cadmium concentrations in indicator media could also be of value for quantifying the relationships.

One aim in the cooperative study was to analyze cadmium in blood and urine from newly employed cadmium workers at regular intervals during one year after employment. In each of the three countries as many workers as feasible, but not more than 25, should be followed up during one year of exposure. Unfortunately the study could only be carried out in Sweden.

Longitudinal Study of a High-Exposure Population

During one year there were 17 newly employed workers in a Swedish cadmium-nickel battery factory who could be followed during the whole first year of exposure. Morning urine specimens and blood specimens were collected three times before employment, twice a week during the first two weeks, twice a month during the next two months, and then once a month up to one year after the start of employment. There were nine women and eight men, and the age range was 18-53 years.
Samples of dust in factory air were collected for 8 hr with personal portable sampling devices on membrane filters on the same days as the blood and urine collections for three of the participants. Cadmium concentrations in blood and urine were measured by KI with the methods described above, and in the dust samples with AA analysis after dissolution of the membrane filters in HNO3.

The average cadmium concentration in blood before exposure started was 2.9 ng Cd/g for the three nonsmokers and 4.6 ng Cd/g for the 11 smokers. The corresponding concentrations in urine were 0.6 and 0.7 µg Cd/g creatinine. An example of how the cadmium levels in blood and urine changed with time is given in Figure 18. The blood level increased progressively during the first 3 months and then leveled off. No obvious change in urine level took place during the first year of exposure. The pattern of change was similar in most of the other workers studied, but there was a considerable individual variation in the quantitative increase of average blood levels. The increase was greater among smokers than among nonsmokers. The short-term variations in cadmium concentrations in air did not seem to influence the cadmium concentrations in blood and urine. The sudden increase in average exposure caused the changes in the blood levels. The average cadmium concentration in air of this factory at the time of the study was about 50 µg Cd/m3 air, and 95% of the dust particles had an MMAD less than 5 µm. A one-compartment exponential model was fitted by a nonlinear regression procedure to the blood data. The median half-time of cadmium in blood was 77 days (range 8-14300 days). The detailed data from this study will be published elsewhere (Kjellstrom et al., to be published).

It was concluded that cadmium concentrations in blood would be an indicator of the average recent cadmium exposure over a time period of 1-3 months, whereas urine would not be a good indicator of recent exposure, at least not under the exposure conditions of this study. The half-time of cadmium in blood may be a reflection of accumulation in the lungs or in the blood cells.

The data collected also make it possible to examine how daily intake, body burden, and excretion interrelate. For calculations of body burden or urinary excretion, age-specific data on average tissue weights and daily urine volume are needed. Because the American data on cadmium levels in the various tissues were limited to males, it was decided to make the calculations based on male data in all three countries. From general biological handbooks the data in Table 10 were collated. For some tissues interpolations had to be made. It was assumed that muscle weight was 40% of body weight at all ages, even though this value refers to adults. Weights for the individual tissues among Japanese were not available, and it was assumed that the weight distribution between tissues was the same as for Caucasians. Urine volumes were assumed to be the same in all three countries: 1 liter/24 hr among young adults, with lower volumes in childhood and old age. The volumes given in Table 10 were estimated from data in Documenta Geigy 1970, as well as data reviewed by Elinder et al. The cadmium concentrations (geometric averages for nonsmoking men) used for the calculation of body burdens and daily intake are given in Table 11. For the younger age groups in the U.S. and Sweden, estimates of certain tissue concentrations were based on the assumption that all subjects were nonsmokers and that the increase with age would follow the same pattern as in Japan.
The cadmium concentration in blood was assumed to be 3 ng/g in all countries and all ages. The daily intake via food in the U.S. and Sweden could be estimated roughly as the cadmium amount in feces, because the fecal data were based on data for groups of nonsmokers. For Japan no individual information on smoking habits was collected, but the data were included for comparison.

It is seen in Figure 19 that there is a good agreement between the daily cadmium amount in urine and body burden, and poor agreement between these two variables and the estimated daily cadmium intake via food. The pattern is the same in all three countries, even though smokers were included in the Japanese data. About 0.005-0.01% of body burden is excreted daily in urine.

For the 30-59 year age group, a comparison between nonsmoking and smoking men was carried out in order to quantify the role of smoking habits as a determining factor for cadmium body burden and excretion. In the U.S., sufficient data for both smoking categories on cadmium in kidney cortex, liver, muscle, and urine were available. It is seen in Figure 20 that for each tissue the geometric mean concentration is about twice as high for smokers as for nonsmokers. In Sweden some data on smokers and nonsmokers were also available. Using the weights given in Table 10, the body burdens for smokers and nonsmokers (age 45) in the U.S. and Sweden were calculated. Urinary excretions, daily intake, and cadmium concentrations in blood were also estimated from the data given earlier in this report. A comparison between the different variables (Table 12) shows how the smokers have higher levels than the nonsmokers. In an average smoker of age 45 in the U.S. and Sweden, tobacco smoking in itself accounts for about half the body burden and half the urinary excretion. In Sweden there were not enough data on muscle from smokers and nonsmokers, so an estimate was made based on overall muscle data (Table 12). The calculated body burdens were higher for smokers than for nonsmokers. Only nine muscle samples from nonsmokers were analyzed, however.

In the U.S. there were sufficient data to calculate the additional body burden due to smoking in each age group (Table 13). The calculation was based on the differences in average tissue cadmium concentrations between smokers and nonsmokers. The accumulation with age would correspond to a constant intake from smoking up to the highest age group, where the slight decrease may reflect a shorter overall smoking duration (cohort effect). There were not enough data on urine and feces to estimate smoking-specific values. The additional body burden due to smoking at age 45 would be about 4-6 mg (Tables 12 and 13). The body burden of cadmium for a nonsmoking Swede or American at age 45 would be about 5-6 mg. In Japan at this age about 75% of the male population smoke on average 24 cigarettes per day. If we assume that the smoking-related 4-6 mg of the American body burden is the result of smoking 20 cigarettes per day, the part of the average Japanese body burden (21 mg, Table 8) that is related to smoking can be estimated […].

[Table footnotes: a These figures were estimated from the whole group studied (including those for whom smoking habits were unknown). The average for this group was 40 ng/g. The ratio between smokers and nonsmokers was set the same as for pancreas. In the American data the muscle ratio and pancreas ratio between smokers and nonsmokers were similar.
b The age group 20-59 was used in order to increase the study populations and to get a more reliable estimate of the average fecal cadmium amount. a Calculated in the same way as the body burdens in Table 11. The additional cadmium concentration in blood from smoking is assumed to be 1 ng/g.]

This can explain the differences between the body burden estimates in Table 8 and Table 12.

General Discussion and Conclusions

The study showed that there was a good agreement between the different laboratories in the final step of the analysis, and therefore any differences between analytical results of the individual materials would be caused by losses or interferences in the preparatory chemical steps of analysis, or matrix effects in the final analysis. The method studies showed that in tissues with high levels (about 1 µg/g) of cadmium, like the liver, the agreement between analysis at different laboratories was good. Analysis of muscles, with average levels about 0.1 µg Cd/g, did not give such good agreement, and further development work on methods is necessary. Accurate muscle analysis data are of great importance to further quantify the half-time in muscles, which seems to be much longer than in liver and kidney.

In foodstuffs with low cadmium concentrations (0.01-0.1 µg/g), large differences in analytical results occurred. Data that are not accompanied by valid method studies must be evaluated cautiously. One useful approach is to compare completely different analytical techniques. Such a comparison showed that it is possible to get a good agreement between atomic absorption and neutron activation analysis at levels above 0.01 µg Cd/g. In tissues with low concentrations (about 1 ng/g), such as urine and blood, the matrix effects on analysis were still a problem. Urine analysis gives comparable results in different laboratories, whereas blood analysis may still give considerable differences. However, when data on blood cadmium concentrations from different groups of people are produced within one laboratory using the same method, relative differences between such groups may still be used for evaluations.

The studies on daily intake of cadmium by analysis of the cadmium amount in feces showed that within each country the cadmium concentration in feces tends to be relatively constant regardless of age. The daily amounts of feces varied with age in a similar fashion as energy intake and were similar in the three countries. The daily amount of cadmium in feces from Japan was about twice as high as in the U.S. and Sweden. Due to the low gastrointestinal absorption, these data would be representative of the differences in daily cadmium intake via food. There were indications that the daily cadmium intake in Sweden may have increased during the 20th century, but no comparable data were available from Japan and the U.S. The data from Sweden were not conclusive, however, and further studies of old food and tissue specimens would be of value.

The cadmium concentrations in kidney cortex, liver, pancreas, and muscle, as well as the calculated body burden, showed similar differences as the daily intake, but there was a tendency for a higher ratio between the data from Japan and the other data than was found for daily intake. The rapid accumulation with age of cadmium in liver and kidney cortex seen in earlier studies was confirmed. There was a continuous accumulation in muscles even at old age, which indicates that the half-time in muscle is longer than in kidney cortex.
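The half-time statements above correspond to the one-compartment exponential model fitted to the longitudinal blood data earlier in this report. Written out (the notation is ours, not the report's):

$$C(t) = C_0 + \Delta C \left(1 - e^{-\lambda t}\right), \qquad \lambda = \frac{\ln 2}{T_{1/2}},$$

where $C_0$ is the pre-exposure concentration, $\Delta C$ the asymptotic increase after a step change in exposure, and $T_{1/2}$ the half-time (median 77 days for cadmium in blood in the Swedish factory study).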
Further autopsy studies of cadmium in muscles are necessary, and in such studies other major tissues like fat, bone, and skin should be analyzed with sensitive methods that can determine changes at the 0.001-0.01 µg Cd/g level. The tissue specimens from the autopsies carried out in this cooperative study are stored frozen in tissue banks. They can be used for analyses in the future.

Women in many age groups had higher tissue cadmium concentrations than men. In the Swedish group there was the same proportion of smokers among men and women. Differences in smoking habits between the sexes cannot explain the differences in tissue concentrations. Whether the higher values for women are the result of higher cadmium intakes per unit body weight, higher absorption rates, or lower excretion rates is not known.

There seemed to be a greater proportion of the body burden in liver among the Japanese than among the Americans and Swedes, possibly reflecting a different distribution of cadmium in the body at high exposure levels. From the American and Swedish data it was estimated that smokers would get an additional body burden at age 45 of about 4 mg through absorption of cadmium from tobacco smoke. With the rough assumption that the contribution to body burden from smoking would be the same in all countries, it was calculated that the body burdens of a nonsmoking adult male would be 16-17 mg in Japan and 5-6 mg in the U.S. and Sweden. The daily cadmium intakes via food were about 35 µg in Japan and 17 µg in the other countries.

The cadmium concentration in urine among adults, after correction for individual variations in specific gravity, had a log-normal distribution. Urinary cadmium excretions increased with age. In older age groups from Japan there was a slight decrease in urinary excretion of cadmium which did not occur in Sweden. On a group basis, after long-term low-level exposure, urinary excretion of cadmium was a good indicator of body burden. After sudden changes in exposure level, cadmium concentration in blood was a better indicator of recent intake than cadmium concentration in urine. The comparisons between daily intakes of cadmium via food and body burdens, as well as the data on urinary excretion, did not indicate any change in the whole-body half-time of cadmium depending on exposure level.
#ifndef OCNUMPYTOOLS_H_
#define OCNUMPYTOOLS_H_

// A few helper functions/defines to help deal with NumPy
// (and the older Python Numeric)

#include "ocport.h"

OC_BEGIN_NAMESPACE

// Convert from a Val tag to a NumPy dtype name
inline const char* OCTagToNumPy (char tag, bool supports_cx_int=false)
{
  switch (tag) {
  case 's': return "int8";
  case 'S': return "uint8";
  case 'i': return "int16";
  case 'I': return "uint16";
  case 'l': return "int32";
  case 'L': return "uint32";
  case 'x': return "int64";
  case 'X': return "uint64";
  case 'b': return "bool";
  case 'f': return "float32";
  case 'd': return "float64";
  case 'F': return "complex64";
  case 'D': return "complex128";
  default:
    if (supports_cx_int) {
      switch (tag) {
      case 'c': return "complexint8";
      case 'C': return "complexuint8";
      case 'e': return "complexint16";
      case 'E': return "complexuint16";
      case 'g': return "complexint32";
      case 'G': return "complexuint32";
      case 'h': return "complexint64";
      case 'H': return "complexuint64";
      default: break;
      }
    }
    throw runtime_error("No corresponding NumPy type for Val type");
  }
  return 0;
}

// Convert from a NumPy dtype name to an OC tag
inline char NumPyStringToOC (const char* tag, bool supports_cx_int=false)
{
  char ret = '*';
  if (tag==NULL || tag[0]=='\0') {
    throw runtime_error("No corresponding OC tag for NumPy tag");
  }
  typedef AVLHashT<string, char, 16> TABLE;
  static TABLE* lookup = 0;
  if (lookup == 0) {
    // Build the lookup table lazily; any thread that reaches here will
    // do a commit.  The table is intentionally never freed.
    TABLE& temp = *new TABLE();
    temp["bool"]   = 'b';
    temp["int8"]   = 's';
    temp["uint8"]  = 'S';
    temp["int16"]  = 'i';
    temp["uint16"] = 'I';
    temp["int32"]  = 'l';
    temp["uint32"] = 'L';
    temp["int64"]  = 'x';
    temp["uint64"] = 'X';
    temp["float32"] = 'f';
    temp["float64"] = 'd';
    temp["complex64"]  = 'F';
    temp["complex128"] = 'D';
    temp["complexint8"]   = 'c';
    temp["complexuint8"]  = 'C';
    temp["complexint16"]  = 'e';
    temp["complexuint16"] = 'E';
    temp["complexint32"]  = 'g';
    temp["complexuint32"] = 'G';
    temp["complexint64"]  = 'h';
    temp["complexuint64"] = 'H';
    lookup = &temp;
  }
  /// AVLHashTIterator<string, char, 16> it(*lookup);
  /// while (it()) {
  ///   cout << it.key() << ":" << it.value() << endl;
  /// }

  // If the name is found, return the corresponding tag, otherwise -1
  // to indicate failure.
  string tagger = tag;
  if (lookup->findValue(tagger, ret)) {
    if (supports_cx_int) {
      return int_1(ret);
    }
    // Complex integer dtypes are rejected when they are not supported.
    if (tagger.find("complex") != string::npos &&
        tagger.find("int") != string::npos) {
      return -1;
    } else {
      return int_1(ret);
    }
  } else {
    return -1;
  }
}

OC_END_NAMESPACE

#endif // OCNUMPYTOOLS_H_
Conventionally, surface light emitters have mainly been used as displays, as seen in the backlight light source devices of liquid crystal display devices. In recent years, there has been a growing movement to use such surface light emitters as gobos (light-blocking screens) for building materials, amusement, and the like. In such a case, the gobo is required to act as a transparent plate when the light source is turned off and, when the light source is turned on, to act as a gobo by means of light emitted transversally across the plate surface, so as to block the view behind it. In conventional liquid crystal displays, a transmissive liquid crystal requires a non-transparent backlight device, and a reflective liquid crystal practically requires a reflective plate. In either case, therefore, the display device as a whole has been non-transparent.

For surface light emitters, two configurations are known: one in which a scattering function is incorporated by convexo-concave structures, dot printing, or the like on the light guide surface, as seen in the backlight light source devices of liquid crystal display devices (Patent Literature 1), and one in which light-diffusing particles with a small refractive index difference Δn between the refractive index of the substrate and that of the light-diffusing particles are included inside the light guide (Patent Literature 2). In these configurations, the light guide may be opaque, or the haze value in the thickness direction of the light guide may be large, when the light source is turned off. Therefore, although a shading effect can be obtained when the light source is turned off, it has been difficult for the light guide to act as a transparent plate at the time of extinction.
Daniel Alfei

Swansea City
On 8 January 2011, Alfei made his professional debut for Swansea City in a 4–0 victory against Colchester United in the FA Cup, where he was named man of the match. Alfei made his league debut a week later, as an 88th-minute substitute against Crystal Palace. Alfei also played in Swansea's FA Cup fourth round tie against Leyton Orient in 2011. He signed a new three-year contract in April 2011. On 9 May 2013, Alfei signed a new contract with Swansea until June 2016. He was released in May 2016.

Wrexham loan
On transfer deadline day in January 2012, he signed for Conference National side Wrexham on a season-long loan. He made his debut in a 4–1 home win against Hayes & Yeading United. He returned to Swansea at the end of the 2011–12 season. In October 2012, Alfei re-joined Wrexham on loan until January 2013. Wrexham then extended Alfei's loan until the end of the 2012–13 season.

Portsmouth loan
On 2 January 2014, Alfei joined League Two club Portsmouth on loan for one month. On 31 January 2014, Alfei's loan was extended until the end of the season.

Northampton Town loan
On 2 July 2014, Alfei signed for League Two club Northampton Town on a season-long loan. Alfei made 14 appearances for Northampton before his loan was ended on 2 January 2015.

Mansfield Town loan
In February 2016, Alfei joined League Two club Mansfield Town on loan until the end of the season. Alfei went on to make 12 appearances for the Stags.

Aberystwyth Town
Alfei joined Aberystwyth Town on a free transfer in 2016. After making 18 appearances for the club, Alfei was released at the end of the season.

Yeovil Town
On 28 July 2017, Alfei signed for League Two club Yeovil Town on a two-year contract. In only his fourth appearance for Yeovil, Alfei suffered a ruptured anterior cruciate ligament which ruled him out for the remainder of the 2017–18 season. He was released by Yeovil at the end of the 2017–18 season.

Llanelli Town
On 31 August 2018, Alfei signed for Welsh Premier League side Llanelli Town.

International career
Alfei represented and captained the Wales under-19 team and has played several times for the Wales under-21 team. In January 2013 he was selected in the Wales under-21 squad for the friendly match against Iceland on 6 February 2013.
package edu.gy.personalmanagersystem.service;

import com.github.pagehelper.PageInfo;
import edu.gy.personalmanagersystem.pojo.Honor;

import java.util.List;

/**
 * Service interface for managing Honor records.
 *
 * @InterfaceName: HonorService
 * @Author: <NAME>
 * @Date: 2019-04-22 16:11
 * @Version: 1.0
 **/
public interface HonorService {

    /** Page through all honors. */
    PageInfo<Honor> getAll(Integer pageNum);

    /** Look up a single honor by its key. */
    Honor getHonorByKey(Integer honorid);

    /** Insert one honor record. */
    int addHonor(Honor honor);

    /** Update an existing honor's information. */
    int updateHonorInfo(Honor honor);

    /** Page through honors matching the given example, ordered by rule. */
    PageInfo<Honor> getByItem(Honor honor, String rule, Integer pageNum);

    /** Batch-insert a list of honors. */
    int addHonors(List<Honor> honorList);

    /** Delete an honor by number. */
    int deleteHonor(Integer number);

    /** Page through honors matching a fuzzy (LIKE) query. */
    PageInfo<Honor> getByLikes(Honor honor, Integer pageNum);
}
Toward porting Astrophysics Visual Analytics Services to the European Open Science Cloud

The European Open Science Cloud (EOSC) aims to create a federated environment for hosting and processing research data to support science in all disciplines without geographical boundaries, such that data, software, methods and publications can be shared as part of an Open Science community of practice. This work presents the ongoing activities related to the implementation of visual analytics services, integrated into EOSC, towards addressing the needs of the diverse astrophysics user communities. These services rely on visualisation to manage the data life cycle process under the FAIR principles, integrating data processing for imaging and multidimensional map creation and mosaicing, and applying machine learning techniques for the detection of structures in large-scale multidimensional maps.

Introduction

The European Open Science Cloud 3 (EOSC) initiative was proposed by the European Commission in 2016 to build a competitive data and knowledge economy in Europe, with the vision of enabling a new paradigm of transparent, data-driven science as well as accelerating innovation driven by Open Science. In astrophysics, data (and metadata) management, mapping, and structure detection are fundamental tasks involving several scientific and technological challenges. A typical astrophysical data infrastructure includes several components: very large observatory archives and surveys, and rich databases containing several types of metadata (e.g. describing a multitude of observations), frequently produced through long and complex pipelines and linking to outcomes within scientific publications as well as journals and bibliographic databases.

In this context, visualisation plays a fundamental role throughout the data life cycle in astronomy and astrophysics, starting from research planning and moving to observing processes or simulation runs, quality control, qualitative knowledge discovery, and quantitative analysis. The main challenge is to integrate visualisation services within common scientific workflows in order to provide appropriate supporting mechanisms for data findability, accessibility, interoperability, and reusability (the FAIR principles).

Large-scale sky surveys are usually composed of large numbers of individual tiles (2D images or 3D data cubes), each one mapping a limited portion of the sky. This tessellation derives from the observing process itself, when a telescope with a defined field of view is used to map a wide region of the sky by performing multiple pointings. Although it is simpler for an astronomer to handle single-pointing datasets for analysis purposes, this strongly limits the results for objects extending over multiple adjacent tiles and hampers the possibility of having a large-scale view of a particular phenomenon (e.g. the Galactic diffuse emission). Tailored services are required to map and mosaic such data for scientific exploitation in a way that preserves their native characteristics (both in 2D and 3D). Additionally, the astrophysics community produces data at very high rates, and the quantity of collected and stored data is increasing at a much faster rate than the ability to analyse them in order to find specific structures to study.
Due to the sharp increase in data volume and complexity, a suite of automated structure detection services exploiting machine learning techniques is required; consider, as an example, the ability to recover and classify diffuse emission and to extract compact and extended sources.

This work presents the ongoing activities related to the implementation of services, integrated into EOSC, towards addressing the diverse astrophysics user communities' needs for: (i) putting visualisation at the centre of the data life cycle process while underpinning this with the FAIR principles; (ii) integrating data processing for imaging and multidimensional map creation and mosaicing; and (iii) exploiting machine learning techniques for automated structure detection in large-scale multidimensional maps.

Background and Related Works

Innovative developments in data processing, archiving, analysis, and visualisation are nowadays unavoidable to deal with the data deluge expected in next-generation facilities for astronomy, such as the Square Kilometre Array 4 (SKA). The increased size and complexity of the archived image products will raise significant challenges in the source extraction and cataloguing stage, requiring more advanced algorithms to extract scientific information in a mostly automated way. Traditional data visualisation performed on local or remote desktop viewers will also be severely challenged in the presence of very large data volumes, demanding more efficient rendering strategies, possibly decoupling visualisation and computation, for example by moving the latter to a distributed computing infrastructure.

The analysis capabilities offered by existing image viewers are currently limited to the computation of image/region statistical estimators or histograms, and to data retrieval (images or source catalogues) from survey archives. Advanced source analysis, from extraction to catalogue cross-matching and object classification, is unfortunately not supported, as the graphical applications are not interfaced with batch source finding applications. On the other hand, source finding often requires visual inspection of the extracted catalogue, for example to select particular sources, reject false detections, or identify the object classes. Integration of source analysis capabilities into data visualisation tools could therefore significantly improve and speed up the cataloguing process of large surveys, boosting astronomers' productivity and shortening publication times.

As we approach the SKA era, two main challenges are to be faced in the data visualisation domain: scalability, and knowledge extraction from data and its presentation to users. The present capability of visualisation software to interactively manipulate input datasets will not be sufficient to handle the image data cubes expected from SKA (about 200-300 TB at full spectral resolution). This expected volume of data will require innovative visualisation techniques and a change in the underlying software architecture models to decouple the computation part from the visualisation. This is, for example, the approach followed by new-generation viewers such as CARTA, which uses a tiled rendering method in a client-server model. In CARTA, storage and computation are carried out on high-performance remote clusters, whereas visualisation of processed products takes place on the client side exploiting modern web features, such as GPU-accelerated rendering.
However, the expected volume and complexity of SKA data will demand not only enhanced visualisation capabilities but also, principally, efficient extraction of meaningful knowledge, allowing the discovery of new, unexpected results. The ability to extract scientific value from large amounts of data indeed represents the ultimate SKA challenge. To address such needs under a unified framework, visual analytics (VA) has recently emerged as the science of analytical reasoning facilitated by interactive visual interfaces. VA aims to develop techniques and tools to support researchers in synthesising information and deriving insights from massive, dynamic, unclear, and often conflicting data. To achieve this goal, VA integrates methodologies from information, geospatial, and scientific analytics, and also takes advantage of techniques developed in the fields of data management, knowledge representation and discovery, and statistical analytics.

In this context, new developments have recently been made for astronomy. As an example, the encube framework was developed to enable astronomers to interactively visualise, compare, and query subsets of spectral cubes from survey data. encube provides a large-scale comparative visual analytics framework tailored for use with large tiled displays and advanced immersive environments like the CAVE2 (a modern hybrid 2D and 3D virtual reality environment).

VisIVO Visual Analytics

VisIVO Visual Analytics is an integrated suite of tools focused on handling massive and heterogeneous volumes of data coming from cutting-edge Milky Way surveys that span the entire Galactic Plane, homogeneously sampling its emission over the whole electromagnetic spectrum. The tool accesses data previously processed by data mining algorithms and advanced analysis techniques, providing highly interactive visual interfaces that offer scientists the opportunity for in-depth understanding of massive, noisy, and high-dimensional data. Alongside data collections, the tool also exposes the scientific knowledge derived from the data, including information related to filamentary structures, bubbles, and compact sources.

EOSCPilot Science Demonstrator

The EOSCpilot project 5 supported the first phase in the development of the European Open Science Cloud, bringing together stakeholders from research infrastructures and e-Infrastructure providers, and engaging with funders and policy makers to propose and trial EOSC's governance framework. The VisIVO project was selected as a science demonstrator functioning as a high-profile pilot that integrates astrophysical data and visual analytics services and infrastructures, showing interoperability with other scientific domains such as Earth sciences and life sciences. Therefore, the connection with the European Open Science Cloud has been thoroughly investigated, exploiting several services developed within the European Grid Initiative (EGI), such as federated authentication and authorization and the federated cloud for analysis and archiving services. The visual analytics application has been further extended by exploiting EOSC technologies for the archive services, as well as intensive analysis employing the ViaLactea Science Gateway 6.

[Fig. 2. Architecture of the VisIVO EOSC Science Demonstrator implementation and employed services.]

Figure 2 shows the overall architecture of the VisIVO EOSC Science Demonstrator implementation and the employed services.
The Archiving Services (including the knowledge base) have been deployed within the EGI Federated Cloud to ensure FAIR access to survey data and related metadata. The Cloud Gateway has been integrated with the EGI Check-in 7 proxy service, to enable connection from the federated Identity Providers, and with the EGI Federated Cloud 8, to expand the computing capabilities by making use of a dedicated Virtual Appliance stored in the EGI Applications Database 9. The virtual appliance was exploited for massive calculation of spectral energy distributions, but may be expanded for more advanced types of analysis in the future. Furthermore, we have also implemented a lightweight version of the science gateway framework, developing an ad-hoc RESTful API, named Cloud for Astrophysics GatEways (CAGE) and available on GitHub 10, to expose a simple set of functionalities for defining pipelines and executing scientific workflows on any Cloud resource, hiding all the underlying infrastructure (a minimal client sketch is given after the service list below).

Future Works: further EOSC Exploitation

The H2020 NEANIAS project 11 has been recently approved by the European Commission to address the Prototyping New Innovative Services challenge set out among the foreseen actions of the recent Roadmap for EOSC. NEANIAS will drive the co-design, delivery, and integration into EOSC of innovative thematic services, derived from state-of-the-art research assets and practices in three major sectors: underwater research, atmospheric research and space research. Each thematic service will not only address its community-specific needs, but will also enable the transition of the respective community to the EOSC concept and Open Science principles. From a technological perspective, NEANIAS will deliver a rich set of services that are designed to be flexible and extensible; they will be able to accommodate the needs of communities beyond their original definition and to adapt to neighbouring cases, fostering reproducibility and re-usability. The foreseen services related to the astrophysics visual analytics are:

- The FAIR Data Management and visualisation service will deliver an advanced operational solution for data management and visualisation of space FAIR data. It will provide tools that enable efficient and scalable visual discovery, exposed through advanced interaction paradigms exploiting virtual reality.
- The Map Making and Mosaicing of Multidimensional Space Images service will deliver a user-friendly cloud-based version of the already existing workflow for map making and mosaicing of multidimensional map images, based on open-source software such as Unimap and Montage. It will create multidimensional space maps through novel mosaicing techniques for a variety of prospective users/customers (e.g., mining and robotic engineers, mobile telecommunications companies, space scientists).
- The Structure Detection on Large Scale Maps with Machine Learning service will deliver a user-friendly cloud-based solution for innovative structure detection (e.g. compact/extended sources, filaments), extending the CAESAR and CuTEx tools with state-of-the-art machine learning frameworks and techniques. The delivered structure detection capabilities will expand the targeted users' opportunities for efficiently identifying and classifying specific structures of interest.
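As an illustration of how a pipeline-oriented REST API such as CAGE can be driven from a client, the following minimal Python sketch defines and runs a workflow over HTTP. Note that the base URL, endpoint paths, payload fields and token handling are hypothetical placeholders invented for this example, not the actual CAGE interface; the real API is documented in the GitHub repository.

import requests

# Hypothetical deployment URL and token (e.g. obtained via an EGI Check-in
# federated login); the endpoints below are illustrative placeholders only.
BASE_URL = "https://cage.example.org/api/v1"
HEADERS = {"Authorization": "Bearer <token>"}

# 1. Define a pipeline as an ordered list of processing steps.
pipeline = {
    "name": "sed-analysis",
    "steps": [{"tool": "sed-fitter", "params": {"catalogue": "vialactea-sources"}}],
}
resp = requests.post(f"{BASE_URL}/pipelines", json=pipeline, headers=HEADERS)
resp.raise_for_status()
pipeline_id = resp.json()["id"]

# 2. Submit an execution of the pipeline on the configured cloud resources.
run = requests.post(f"{BASE_URL}/pipelines/{pipeline_id}/runs", headers=HEADERS)
run.raise_for_status()

# 3. Poll the run until the workflow reaches a terminal state.
status = requests.get(f"{BASE_URL}/runs/{run.json()['id']}", headers=HEADERS).json()
print(status["state"])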
Figure 3 shows the main workflow foreseen to exploit the EOSC ecosystem for visualisation, source finding and classification of Big Data images and 3D spectral data cubes coming from multiwavelength Galactic Plane surveys. The user will employ the Visual Analytics tool to import data from the data management services, appropriately mapped and mosaiced. The tool will exploit the source finding applications to extract sources from the data. Optionally, expert users may employ the Visual Analytics tool and/or the Virtual Reality application to interactively classify the sources by visual inspection. The results of this supervised classification can be stored in the data services, enriching the training set for the Deep Learning networks. The extracted sources are then automatically classified with Deep Learning algorithms, and the results are stored within the user's data space and optionally published for re-use by other users and/or to enrich the training set.

Conclusion

We presented the ongoing activities related to the implementation of services, integrated into EOSC, towards addressing the diverse astrophysics user communities' needs for visual analytics. The preliminary demonstration implementation developed within the H2020 EOSCpilot project has been summarized. Forthcoming activities to be developed within the H2020 NEANIAS project have been presented, outlining tailored services for FAIR data management and visualisation, multidimensional map creation and mosaicing, and machine learning-supported automated source detection in multidimensional maps.
// GetTicketComments returns the comments of a Yandex.Tracker ticket
// identified by its key.
func (t *Tracker) GetTicketComments(ticketKey string) (comments TicketComments, err error) {
	request := t.client.R().SetHeaders(t.headers)

	resp, err := request.Get(ticketUrl + ticketKey + ticketComments)
	if err != nil {
		return
	}
	defer resp.RawBody().Close()

	// Surface any non-200 response to the caller, using the raw body as
	// the error message ("%s" avoids a non-constant format string).
	if resp.StatusCode() != 200 {
		return comments, fmt.Errorf("%s", resp.Body())
	}

	err = json.Unmarshal(resp.Body(), &comments)
	return comments, err
}
from django.shortcuts import render, redirect
from django.core.paginator import Paginator, EmptyPage, PageNotAnInteger

from app.models import Girl
from app.rating import rating

import random


def home(request):
    """
    The file name is used directly as the image id; no need for
    anything as complex as hashing for now.
    """
    # Sample two contenders from the 1000 highest-rated girls.
    top_girls = Girl.objects.all().order_by('-rating')[:1000]
    img1, img2 = random.sample(list(top_girls), 2)
    return render(request, 'app/home.html', {'img1': img1, 'img2': img2})


def vote(request):
    """
    Handle a user's vote. To keep users voting, the result is not shown
    here; a dedicated ladder page displays the rankings instead.
    """
    winner = Girl.objects.get(pk=request.POST['winner'])
    loser = Girl.objects.get(pk=request.POST['loser'])
    winner.rating, loser.rating = rating(winner.rating, loser.rating)
    winner.save()
    loser.save()
    return redirect('/')


def ladder(request):
    girls_list = Girl.objects.all().order_by('-rating')  # sort by rating, descending
    paginator = Paginator(girls_list, 20)

    page = request.GET.get('page')
    try:
        girls = paginator.page(page)
    except PageNotAnInteger:
        girls = paginator.page(1)
    except EmptyPage:  # page number out of range: deliver the last page
        girls = paginator.page(paginator.num_pages)

    return render(request, 'app/ladder.html', {'girls': girls})
Sorption Properties of Chitosan in the Refining of Rough Indium

The degree of purity of cathode deposits during the electrochemical refining of rough indium depends on the content of impurity metals in the electrolyte. In this work, an additional sorption purification of the refining electrolyte was carried out in order to reduce the content of impurity metals such as cadmium, lead and copper. Chitosan was used as the sorbent owing to its high sorption capacity with respect to heavy metal ions. The concentrations of the studied metals before and after sorption were determined by differential pulse anodic stripping voltammetry (DPASV). The experimental results made it possible to calculate the amount of metal sorbed by chitosan and the efficiency of its removal. The Langmuir and Freundlich adsorption models were applied to describe the equilibrium isotherms, and the isotherm constants were determined. The Langmuir model agrees very well with the experimental data. Inductively coupled plasma optical emission spectroscopy (ICP-OES) was used to determine the presence of impurity metals and the degree of purity of the electrorefined indium. The use of chitosan as a sorbent in the purification of rough indium makes it possible to reduce the concentration of impurity metals in cathode deposits and to increase the content of the base metal to 99.9994%.

Introduction

The practical application of indium in the space, nuclear and aviation industries, in the production of liquid crystal screens and photocells, and in microelectronics is due to its properties such as strength, plasticity and corrosion resistance. These properties are inherent only in indium of high purity. Existing technologies for obtaining high-purity indium are multistage and require the use of combined methods. Electrolysis is a universal method for purifying metals from metallic and non-metallic impurities, characterized by high productivity and the ability to automate the process. In the electrorefining of indium, the removal of impurity metals by traditional methods often leads to an insufficient reduction in their concentration. Therefore, one of the promising directions in the refining of metals is the use of the electrochemical method in combination with others: the electrolyte is continuously purified from the impurities accumulating during electrolysis, including a sorption stage, i.e. passing the solution through a layer of activated carbon and ion-exchange resin. To purify the copper refining electrolyte, three types of sorbent were proposed in : activated carbon, zeolite and chelate resin, of which the third proved to be very effective. The main requirements for a sorbent used in the electrorefining of metals are chemical purity, stability and high sorption characteristics with respect to impurities. As a sorbent that meets these requirements, we have chosen chitosan, a deacetylated chitin derivative that exhibits high sorption properties in the removal of heavy metal ions from industrial wastewater. The purpose of this work is to exploit the sorption properties of chitosan in the purification of indium produced in Kazakhstan at the Ust-Kamenogorsk enterprise of KazTsink JSC. Atomic absorption spectroscopy, electrochemical methods of analysis, and optical emission and mass spectrometry with inductively coupled plasma (ICP-OES, ICP-MS) are widely used in the study of the sorption characteristics of sorbents.
Methods of differential pulse anodic stripping voltammetry are often used to determine trace amounts of substances because of their high sensitivity and selectivity. In connection with this, differential pulse voltammetry (DPV) was used in this research to study the sorption properties of chitosan. In order to increase the selectivity, sensitivity and efficiency of the determination of small contents of impurity metals in the indium refining electrolyte, the surface of the glassy carbon electrode was modified with a thin layer of metallic mercury.

Experimental

The voltammograms were recorded in a differential-pulse regime using a Metrohm Autolab potentiostat-galvanostat. The working electrode was a mercury-film glassy carbon rotating disc electrode, and the auxiliary electrode was a platinum plate. All potentials were measured relative to an Ag/AgCl (3.5 M KCl) reference electrode. Electrochemical deposition of mercury on the surface of the glassy carbon electrode was carried out in accordance with GOST standard P 51301-99 (ex-situ). Preconcentration of the impurity metals (copper, lead, cadmium) was carried out from standard solutions (GSO) in the concentration range 10⁻⁷ M to 10⁻⁵ M. The supporting electrolyte was a 1 M sodium chloride solution. The pH of the solutions (~2) was set by acidification with HCl (37%) from Sigma Aldrich. Before each measurement, the electrolyte was purged with argon for 5 min. Preconcentration was carried out at potentials more negative than the peak potentials. The rest period of the solution was 15 s, the pulse amplitude was 50 mV and the scan rate was 10 mV/s. All experiments were performed at 25 °C. The sorbent was chitosan from Sigma Aldrich. The equilibrium sorption of Cu²⁺, Pb²⁺ and Cd²⁺ ions was carried out by contacting 0.1 g of chitosan with 200 ml of a solution containing metal ions in the concentration range 10⁻⁵ M to 10⁻⁴ M in conical flasks for 90 min on a shaker. The mixture was filtered and the filtrate was analyzed for the content of metal ions by means of a calibration graph constructed from the differential pulse voltammograms.

Results and discussion

To determine the small concentrations of impurity metals in the refining electrolyte, the differential pulse voltammetry (DPV) method was used. The detection limits for DPV are about 10⁻¹⁰-10⁻⁹ M, while for irreversible systems the loss in sensitivity is not so great. In comparison with traditional static mercury drop electrodes, film electrodes achieve higher sensitivity and better peak resolution during anodic stripping voltammetric analysis. The sensitivity of the determination depends on the thickness of the film, i.e. the amount of deposited mercury. Initially, anodic voltammograms of standard solutions of copper, lead and cadmium, shown in Fig. 1, were obtained for the construction of calibration curves. The inset graphs show the calibration curves of the peak current of the electro-oxidation of the studied metals versus their concentration in the electrolyte. Determination of the content of metal ions by anodic stripping voltammetry is based on the dissolution of the concentrate from the anode, which leads to the appearance of a current peak whose height depends on the concentration of the metal in the amalgam. As can be seen from Fig. 2, the current values of the anodic peak (i_p) increase with increasing concentration of metal ions.
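For illustration, the following short Python sketch shows how such a calibration line is obtained by linear regression and then used to read off an unknown concentration. The peak currents and concentrations are made-up values, not the measurements of this work.

import numpy as np

# Illustrative calibration data (NOT the measured values of this work):
# standard concentrations in mol/L and the corresponding anodic peak
# currents in microamperes read from the voltammograms.
c_std = np.array([1e-7, 5e-7, 1e-6, 5e-6, 1e-5])   # mol/L
i_peak = np.array([0.8, 4.1, 8.3, 41.0, 82.5])      # uA

# Least-squares fit of the straight calibration line i_p = a*c + b.
a, b = np.polyfit(c_std, i_peak, 1)

# Concentration of an unknown sample from its measured peak current.
i_sample = 25.0  # uA
c_sample = (i_sample - b) / a
print(f"estimated concentration: {c_sample:.2e} mol/L")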
The height of the peak on the differential pulse voltammogram is directly proportional to the concentration of the electroactive substance c_a and depends, among other factors, on the pulse amplitude E_A and the pulse duration t_p. In stripping voltammetry the Randles-Sevcik equation is used, which at 25 °C can be written as

i_p = 2.69 × 10⁵ n^(3/2) A D^(1/2) v^(1/2) c_a

where n is the number of electrons, A the surface area of the electrode, D the diffusion coefficient of the analyte in the amalgam, c_a the concentration of the analyte in the amalgam, and v the scan rate. When a thin mercury film electrode (TMFE) is used, the peak current is proportional to the scan rate v and depends on the surface area of the mercury film A_F and the metal concentration in the film at the electrolysis time t_e.

Chitosan was used as the sorbent in this work, as it possesses high sorption properties with respect to the studied metals, chemical stability, ecological safety and ease of regeneration. The equilibrium concentrations of the studied ions before and after sorption were determined by the DPASV method. The experimental results are shown in Fig. 2, from which a significant decrease in the content of impurity metals in the electrolyte is evident. The results obtained made it possible to calculate the amount of sorbed metal and the efficiency of its removal using the following equation:

Q = (C_i - C_e) V / W

where Q is the amount of substance sorbed from the solution, V the volume of the solution, C_i the concentration before sorption, C_e the concentration after sorption, and W the mass of the sorbent. The calculated values are given in Table 1. The experimental results were simulated using simple adsorption isotherms, namely the classical Langmuir and Freundlich equations. In addition to these, we also applied the Temkin and Dubinin-Radushkevich equations for interpreting the experimental adsorption data.

The Langmuir model

The Langmuir adsorption isotherm expresses the relation between the amount of adsorption Q_e (mol/kg) and the equilibrium concentration of the adsorbate in the liquid phase C_e (mol/m³):

Q_e = Q_o K_L C_e / (1 + K_L C_e)

where C_e is the equilibrium concentration of the adsorbate (mg/l), Q_e is the amount of adsorbed metal per gram of adsorbent at equilibrium (mg/g), Q_o is the maximum monolayer capacity (mg/g), and K_L is the Langmuir isotherm constant (L/mg). The equation constants were calculated from the slope and intercept of the lines plotted in the coordinates of the linearised equation, 1/Q_e versus 1/C_e (Fig. 3):

1/Q_e = 1/Q_o + (1 / (Q_o K_L)) (1/C_e)

The Freundlich model

The Freundlich adsorption isotherm is usually used to describe the adsorption characteristics of a heterogeneous surface. Since the adsorption centres in this model have different energies, the active sorption centres with maximum energy are filled first. Such data often correspond to the empirical equation proposed by Freundlich:

Q_e = K_f C_e^(1/n)

where K_f is the equilibrium constant of the Freundlich equation (mg/g), n is the adsorption intensity, C_e is the equilibrium concentration of the adsorbate (mg/l), and Q_e is the amount of adsorbed metal per gram of adsorbent at equilibrium (mg/g). Figure 4 presents the experimental data on the adsorption of cadmium, copper and lead by chitosan in the coordinates of the linear Freundlich equation, logQ_e versus logC_e:

logQ_e = logK_f + (1/n) logC_e

The equilibrium constants were calculated from the slope and the intercept of the lines with the ordinate axis on this graph. The parameters K_f and 1/n of the Freundlich equation were determined in this way.
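A compact way to carry out such a graphical determination is an ordinary least-squares fit of the linearised isotherms. The Python sketch below (with made-up equilibrium data, not the measurements of this work) illustrates the procedure and the comparison of the two models by their correlation coefficients.

import numpy as np

# Illustrative equilibrium data (NOT the measurements of this work):
# equilibrium concentrations C_e (mg/L) and sorbed amounts Q_e (mg/g).
C_e = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
Q_e = np.array([2.1, 3.6, 5.5, 7.4, 8.9])

# Langmuir, linearised as 1/Q_e = 1/Q_o + (1/(Q_o*K_L)) * (1/C_e)
slope, intercept = np.polyfit(1.0 / C_e, 1.0 / Q_e, 1)
Q_o = 1.0 / intercept           # maximum monolayer capacity (mg/g)
K_L = intercept / slope         # Langmuir constant (L/mg)

# Freundlich, linearised as log Q_e = log K_f + (1/n) * log C_e
slope_f, intercept_f = np.polyfit(np.log10(C_e), np.log10(Q_e), 1)
K_f = 10.0 ** intercept_f       # Freundlich constant (mg/g)
n = 1.0 / slope_f               # adsorption intensity

# Compare the two models by the correlation coefficient R^2 of each fit.
for name, x, y in [("Langmuir", 1 / C_e, 1 / Q_e),
                   ("Freundlich", np.log10(C_e), np.log10(Q_e))]:
    r = np.corrcoef(x, y)[0, 1]
    print(f"{name}: R^2 = {r**2:.4f}")

print(f"Q_o = {Q_o:.2f} mg/g, K_L = {K_L:.3f} L/mg, K_f = {K_f:.2f}, n = {n:.2f}")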
The experimental results cannot be described by the Temkin and Dubinin-Radushkevich isotherms because of the small values of the correlation coefficients. Table 2 shows the parameters of the Langmuir and Freundlich equations calculated graphically. Comparison of the tabulated data shows that the sorption of Cd²⁺, Cu²⁺ and Pb²⁺ ions by chitosan is best described by the Langmuir model (the largest R²). This indicates that the sorption of the studied metals satisfies the boundary conditions for the applicability of the Langmuir model with monomolecular coverage.

Electrolysis

Electrodeposition of indium was carried out in chloride electrolytes at an indium chloride concentration of 0.5 mol/l and a current density of 30 mA/cm². The electrolysis conditions were chosen on the basis of previous works. Rough indium served as the anode, and a titanium plate as the cathode. The sorbent was located in a polymer partition separating the cathode and anode spaces. A scheme of the electrolysis installation is shown in Fig. 5. The content of impurity metals in the obtained pure indium samples was determined by the ICP-OES method. Samples of solutions of the indium cathode deposits were prepared for analysis as follows: after electrolysis, samples of the electrodeposited indium were dissolved in nitric acid (osm) and diluted with bidistilled water to a certain volume. The results of the analysis are presented in Table 3. The content of impurity metals in the cathode deposits decreases, leading to an increase in the purity of the indium. The obtained results allow us to recommend chitosan as an effective sorbent of impurity metals in the electrolyte during the refining of other metals as well. "-" means that the metal content is below the detection limit (30 µg/L).

Conclusions

The sorption properties of chitosan in chloride electrolytes containing ions of cadmium, copper and lead have been studied. The content of the studied metals was determined before and after sorption by the DPASV method. The amount of sorbed metal and the removal efficiency were calculated for all the studied impurities. The experimental results were simulated using the Langmuir and Freundlich sorption isotherms, with the highest regression coefficient corresponding to the Langmuir model. Electrorefining of rough indium with the use of chitosan as a sorbent was carried out. The results of the analysis indicate a significant decrease in the content of the studied impurity metals in the cathode deposits. Thus, chitosan is an effective sorbent for reducing the content of cadmium, copper and lead ions in the indium refining electrolyte.
/** * Find a position that can be selected (i.e., is not a separator). * * @param position The starting position to look at. * @param lookDown Whether to look down for other positions. * @return The next selectable position starting at position and then searching either up or * down. Returns {@link #INVALID_POSITION} if nothing can be found. */ @Override int lookForSelectablePosition(int position, boolean lookDown) { final ListAdapter adapter = mAdapter; if (adapter == null || isInTouchMode()) { return INVALID_POSITION; } final int count = adapter.getCount(); if (!mAreAllItemsSelectable) { if (lookDown) { position = Math.max(0, position); while (position < count && !adapter.isEnabled(position)) { position++; } } else { position = Math.min(position, count - 1); while (position >= 0 && !adapter.isEnabled(position)) { position--; } } } if (position < 0 || position >= count) { return INVALID_POSITION; } return position; }
Having a birthday around the holidays was never easy and, with every successive year, it felt more and more as if celebrating my birthday got thrown into the December holiday mix as an afterthought. But now, Decembers are becoming the hardest month of the year to endure. The most obvious reasons are physical: the temperature drops; here in Kansas, it rains and snows a lot more; the colors outside my window turn from the greens, yellows and blues of summer to the browns, grays and tans of winter, with the occasional white on the rare days that it snows. I spend more time indoors, trying to stay warm and dry. The hills and trees I can see seem still, silent and lifeless. I feel myself becoming more distant and disconnected as the color leaches from the world outside these walls. The chasm between me and the outside world feels like it’s getting wider and wider, and all I can do is let it happen. I realize that my friends and family are moving on with their lives even as I’m in an artificially imposed stasis. I don’t go to my friends’ graduation ceremonies, to their engagement parties, to their weddings, to their baby showers or their children’s birthday parties. I miss everything – and what I’m missing gets more routine and middle-aged with each passing year. The changes that occur as I sit here can raise doubts about my very existence. I have no recent snapshots of myself and no current selfies, just old Facebook photos, grainy trial photos and mugshots to show for the last six years of my life. When everyone is obsessed with Twitter, Instagram, SnapChat and WhatsApp, it begins to feel like I don’t exist in some very real, important way. Living in a society that says “Pics or it didn’t happen”, I wonder if I happened. I sometimes feel less than empty; I feel non-existent. Still, I endure. I refuse to give up. I open the mail I receive – which spikes in December, as people send me birthday and then Christmas cards, but I get letters and well-wishing cards all year – and am happily reminded that I am real and that I do exist for people outside this prison. And I celebrate, too, this time of year, in my own little way: I make phone calls to family, I write letters, I treat myself with the processed foods and desserts I all but gave up during my gender transition. This holiday season is the first since I won the right to begin hormone therapy for that gender transition, which I began in February. The anti-androgen and estrogen I take is reflected in my external appearance, finally: I have softer skin, less angular facial features and a fuller figure. Even though I’m still not allowed to grow my hair to the female standard in prison – a battle I’ll continue to fight with the ACLU in 2016 – I know that my struggles pale in comparison to those faced by many vulnerable queer and transgender people. Despite more mainstream visibility, identification and even celebration of queer and trans people, the reality for many is that they face at least as many, if not more, obstacles as I do in transitioning and living their lives with dignity. And, however improbably, I have hope this holiday season. With my appeals attorneys, Nancy Hollander and Vince Ward, I expect to submit my first brief to the US army court of criminal appeals next year, in support of my appeal to the 2013 court-martial convictions and sentence. Whatever happens, it will certainly be a long path. 
There may well be other Decembers like this one, where I feel at times so far away from everyone and everything. But when faced with bleakness, I won’t give up. And I’ll try to remember all the people who haven’t given up on me.
EM-based Underwater Localization in Stratified Medium

Acoustic waves in an underwater environment do not necessarily travel in straight lines due to sound speed variations, which poses a set of challenges for underwater acoustic localization. In this paper, we consider acoustic localization in a stratified underwater medium based on time of arrival (TOA) measurements. We assume that the sound speed profile (SSP) depends only on depth, i.e. the medium is vertically stratified, and we adopt a multi-layer depth-dependent SSP model. In practice, however, it is uncertain which layer the source originates from. We propose an expectation-maximization (EM) based underwater localization approach that handles this uncertainty in measurement origin. The approach largely simplifies data association and reduces the complexity of localization induced by sound speed variations. In addition, the Cramér-Rao lower bound (CRLB) of this problem is derived. We illustrate the effectiveness of the proposed algorithm by locating several sources in different layers.
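To make the layer-uncertainty idea concrete, here is a heavily simplified Python sketch of an EM loop of this general flavour. It is not the paper's algorithm: it assumes straight-line propagation with a single effective sound speed per candidate layer, Gaussian TOA noise, and invented receiver positions and TOA values. The E-step computes the posterior over the layer hypotheses; the M-step re-estimates the source position by minimising the responsibility-weighted squared TOA residuals.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Known receiver positions (m) and measured TOAs (s); illustrative values.
receivers = np.array([[0., 0., 0.], [500., 0., 10.],
                      [0., 500., 20.], [500., 500., 5.]])
toa = np.array([0.42, 0.31, 0.38, 0.27])

c_layers = np.array([1480.0, 1500.0, 1520.0])  # effective sound speed per layer (m/s)
sigma = 1e-3                                    # TOA noise std (s)

def predicted_toa(p, c):
    # Straight-ray travel time from source position p to every receiver.
    return np.linalg.norm(receivers - p, axis=1) / c

p = np.array([250.0, 250.0, 50.0])  # initial source position guess
for _ in range(20):
    # E-step: posterior probability of each layer hypothesis (flat prior).
    loglik = np.array([norm.logpdf(toa, predicted_toa(p, c), sigma).sum()
                       for c in c_layers])
    w = np.exp(loglik - loglik.max())
    w /= w.sum()

    # M-step: minimise the expected squared TOA residuals over p.
    def q(p_):
        return sum(wk * ((toa - predicted_toa(p_, c)) ** 2).sum()
                   for wk, c in zip(w, c_layers))
    p = minimize(q, p, method="Nelder-Mead").x

print("estimated source position:", p, "layer posterior:", w)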
from classes.stickers import Stickers import os from dotenv import load_dotenv load_dotenv() SESSION_NAME = os.getenv('SESSION_NAME_2') API_ID = os.getenv('API_ID_2') API_HASH = os.getenv('API_HASH_2') stickers = Stickers(SESSION_NAME, API_ID, API_HASH) stickers.import_from_file()