1. Field of the Invention

This invention relates to apparatus for cleaning gutters and the like and, more particularly, to a gutter cleaning device operated by an individual standing below the gutter.

2. Prior Art Relating to the Disclosure

Eaves trough gutters are designed to carry away runoff water from the roof of a building. Quite often, however, debris such as leaves, dirt, needles, roofing material granules and the like accumulates in the gutter and is not flushed away by the flow of water in the gutter. Accumulated debris often sticks to the interior surfaces of the gutter, inhibiting the free flow of water and causing further accumulation of debris. If the debris is not periodically removed, the gutter soon becomes clogged, causing backup and restricted flow of water. During heavy rainfall, runoff water often overflows the gutter, runs down the side of the building, and may seep into the eaves. The accumulation of debris and standing water in wooden gutters promotes deterioration. Both wooden and sheet metal gutters which are filled with debris and accumulated water or ice weigh a substantial amount, which strains their mountings and works the fasteners loose from the building. The most direct way of cleaning a gutter is for a maintenance person to mount a ladder, manually scrape or brush the debris loose from the interior of the gutter, and flush with water or otherwise remove the accumulated debris. Use of a ladder is hazardous, particularly when the gutters are located high above ground level. Various gutter cleaning devices have heretofore been available which permit a maintenance person to stand on the ground and remotely clean and remove the debris from a gutter. U.S. Pat. No. 2,910,711 discloses a gutter cleaner having an elongated tubular handle with a reversely bent upper end to which is mounted a flexible scraper extending in one direction and a water nozzle facing in the opposite direction. U.S. Pat. No. 3,041,655 discloses a gutter cleaner having a similar handle to which is mounted a flat scraper blade which extends in one direction along the gutter and which has a water channel formed along its top surface. Both of these prior gutter cleaners are adapted to operate in only one direction along a gutter, and their single scrapers are neither contoured to match the gutter shape nor able to easily pitch debris from the gutter.
Wheat germ agglutinin (WGA) reduces ADH-induced water flow and induces cell surface changes in epithelial cells of frog urinary bladder The functional and structural changes induced by apical exposure of frog urinary bladder to wheat germ agglutinin (WGA, 100 µg/ml) have been investigated, and the possible correlation between these effects is discussed. Bladders apically exposed to WGA for 30 min to 3 hr exhibit a marked reduction of their response to antidiuretic hormone (ADH) challenge and of their hydroosmotic reactivity. Structural changes triggered by WGA treatment are: 1) apical invaginations of the plasma membrane, interpreted as endocytotic in nature, taking into account the results of carbohydrate cytochemical detection and horseradish peroxidase (HRP) exposure; 2) cytoskeleton disorganization and microvilli collapse. These phenomena do not interfere with cortical granule traffic and are independent of ADH challenge: they occur in ADH-stimulated bladders as well as in bladders at rest. These findings could be interpreted as follows: binding of the divalent lectin WGA to its specific cell-coat receptors would induce changes in the apical membrane structure, which in turn could provoke disorganization and disruption of the apical cytoskeletal elements associated with the plasma membrane. Reduction of the bladder response to ADH challenge could result from reduced recycling of aggrephores, as they are associated with cytoskeletal elements in the subapical cytoplasm. Collapse of microvilli and endocytotic events could also result from apical cytoskeleton disruption, as microvilli are sustained by bundles of actin filaments interconnected with apical cytoskeleton filaments and as the plasma membrane is associated with the apical cytoskeleton. However, these two last events evidently occur in both ADH-challenged and non-challenged bladders.
/*
 * Assign the next available spill slot to `ivl'.
 */
void Vxls::assignSpill(Interval* ivl) {
  assertx(!ivl->fixed() && ivl != ivl->leader());

  if (ivl->var->slot != kInvalidSpillSlot) return;

  auto& used_spill_slots = spill_info.used_spill_slots;

  auto const assign_slot = [&] (size_t slot) {
    ivl->var->slot = slot;
    ++spill_info.num_spills;

    // Mark the slot (and its pair, for wide values) as occupied, and grow
    // the high-water mark of used slots.
    spill_slots[slot] = kMaxPos;
    if (!ivl->var->wide) {
      used_spill_slots = std::max(used_spill_slots, slot + 1);
    } else {
      used_spill_slots = std::max(used_spill_slots, slot + 2);
      spill_slots[slot + 1] = kMaxPos;
    }
  };

  // First-fit search: a slot is usable once the position stored for it
  // (the point at which it becomes free) is not after this interval's start.
  if (!ivl->var->wide) {
    for (size_t slot = 0, n = spill_slots.size(); slot < n; ++slot) {
      if (ivl->leader()->start() >= spill_slots[slot]) {
        return assign_slot(slot);
      }
    }
  } else {
    // Wide values need two adjacent slots, scanned in even-aligned pairs.
    for (size_t slot = 0, n = spill_slots.size() - 1; slot < n; slot += 2) {
      if (ivl->leader()->start() >= spill_slots[slot] &&
          ivl->leader()->start() >= spill_slots[slot + 1]) {
        return assign_slot(slot);
      }
    }
  }

  // No free slot: dump state and punt this compilation unit.
  ONTRACE(kVasmRegAllocDetailLevel,
          dumpVariables(variables, spill_info.num_spills));
  TRACE(1, "vxls-punt TooManySpills\n");
  TRACE_PUNT("LinearScan_TooManySpills");
}
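The control flow above is easier to see in isolation. Below is a minimal sketch of the same first-fit policy in Python; `free_at`, `start`, and `wide` are hypothetical stand-ins for `spill_slots`, the interval's start position, and the wide-value flag, and the sketch illustrates the technique rather than reproducing HHVM's implementation.

```python
MAX_POS = float("inf")  # stand-in for kMaxPos: slot never becomes free again

def assign_spill(free_at, start, wide):
    """First-fit spill-slot search (sketch).

    free_at[i] is the position at which slot i becomes free; a slot is
    usable when the interval's start is at or after that position.
    Wide values occupy two adjacent, even-aligned slots.
    Returns the chosen slot index, or None to signal "too many spills".
    """
    if not wide:
        for slot, free_pos in enumerate(free_at):
            if start >= free_pos:
                free_at[slot] = MAX_POS          # mark occupied
                return slot
    else:
        for slot in range(0, len(free_at) - 1, 2):
            if start >= free_at[slot] and start >= free_at[slot + 1]:
                free_at[slot] = free_at[slot + 1] = MAX_POS
                return slot
    return None  # caller punts, as the C++ version does

# Example: 4 slots, all free from position 0; a wide value takes slots 0-1.
slots = [0, 0, 0, 0]
assert assign_spill(slots, start=10, wide=True) == 0
assert assign_spill(slots, start=10, wide=False) == 2
```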
UAV-to-Ground Multi-Hop Communication Using Backpressure and FlashLinQ-Based Algorithms The use of Unmanned Aerial Vehicles (UAVs) for remote sensing and surveillance applications has become increasingly popular in recent decades. This paper investigates the communication between a UAV and a final control center (CC), using static relays located on the ground, to overcome the intermittent connectivity between the two end points due to the UAV flight. Backpressure and FlashLinQ routing and scheduling algorithms are jointly applied to this scenario. Backpressure has been shown to be able to stabilize any input traffic within the network capacity region without requiring knowledge of traffic arrival rates and channel state probabilities. FlashLinQ is used in the scheduling phase to derive a maximal feasible subset of links which can coexist in a given slot without causing harmful interference to each other. Moreover, to overcome backpressure's limitation of long end-to-end delays, we propose a modified algorithm in which relays are selected depending on their proximity to the CC and on the UAV trajectory. Through extensive simulations, we demonstrate that, compared to the benchmark solution based on backpressure, the proposed algorithm is able to reduce delay significantly without any loss in throughput.
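The backpressure rule the abstract relies on can be stated compactly: each link is weighted by its queue-backlog differential, and the scheduler activates a non-interfering set of links maximizing total weight. Below is a minimal sketch, assuming single-commodity traffic and precomputed interference-free link sets (as FlashLinQ would supply); all names are illustrative, not from the paper.

```python
def backpressure_weights(queues, links):
    """Weight each directed link (u, v) by its backlog differential.

    queues maps node -> queued packets; a link is only worth serving
    when the sender's backlog exceeds the receiver's.
    """
    return {(u, v): max(queues[u] - queues[v], 0) for (u, v) in links}

def schedule(queues, feasible_sets):
    """Pick the interference-free link set with the largest total weight.

    feasible_sets would come from a scheduler such as FlashLinQ, which
    selects links that can coexist in a slot without harmful interference.
    """
    w = backpressure_weights(queues, {l for s in feasible_sets for l in s})
    return max(feasible_sets, key=lambda s: sum(w[l] for l in s))

# Toy example: UAV -> relays r1, r2 -> control center CC.
queues = {"uav": 8, "r1": 3, "r2": 5, "cc": 0}
feasible = [{("uav", "r1"), ("r2", "cc")}, {("uav", "r2"), ("r1", "cc")}]
print(schedule(queues, feasible))  # serves the larger backlog differentials
```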
Motor Nonlinearities in Electrodynamic Loudspeakers: Modelling and Measurement This paper studies the motor nonlinearities of a classical electrodynamic loudspeaker. Measurements show the dependence of the voice-coil inductance on its position and on the driving current, as well as the dependence of the force factor on the coil position. These measurements allow the parameters of a model proposed for these dependences to be tuned. Time- and frequency-domain analysis of the model helps explain both the harmonic and intermodulation distortions observed in classical loudspeakers.
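A common way to capture the dependences the paper measures is to expand the force factor and inductance as low-order polynomials of coil displacement. The sketch below is a minimal illustration with invented coefficients, not the paper's fitted model; it shows how an asymmetric Bl(x) bends a sinusoidal drive into harmonic distortion.

```python
import numpy as np

# Hypothetical polynomial models of the motor nonlinearities: the force
# factor Bl varies with displacement x, and the inductance Le varies with
# both x and the drive current i (all coefficients are made up).
def Bl(x):
    return 6.0 - 8.0 * x - 300.0 * x**2        # T·m, asymmetric in x

def Le(x, i):
    return 1e-3 * (1.0 - 0.5 * x - 0.02 * i)   # H

# Drive the coil sinusoidally and inspect the force spectrum: the
# x-dependence of Bl converts a pure tone into distortion products.
t = np.linspace(0, 1, 4096, endpoint=False)
x = 1e-3 * np.sin(2 * np.pi * 50 * t)          # 50 Hz excursion, 1 mm peak
i = 0.5 * np.sin(2 * np.pi * 50 * t)
force = Bl(x) * i                              # Lorentz force F = Bl(x)·i
spectrum = np.abs(np.fft.rfft(force))
print(np.argsort(spectrum)[-3:])               # 50 Hz plus DC/2nd-harmonic products
```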
This invention relates generally to noise reduction panels and, more particularly, to a method and system for improving a vibratory response of noise reduction panels. At least some known acoustic panels used to line the fan flowpath of a turbine engine for noise reduction may be exposed to a high vibratory forcing function, much of which can be due to the aerodynamic shock waves of fan-blade passing. The initial design intent is to make the panels and their supporting structure stiff enough that they do not respond to this stimulus. Weight and/or maintainability design constraints sometimes undermine this design intent. For example, a bolted-on panel is preferable to a panel that is bonded to the fan case for maintainability, allowing easy replacement of damaged panels in service. Also, to reduce weight, panel section properties may be minimized. These added design constraints may reduce the installed panel stiffness, causing it to have a small frequency margin from the driving excitation. This may result in a forced vibratory response that may cause excessive alternating stress in the panels and/or their supporting fasteners. For example, a forward acoustic panel of some known engines is a composite laminate structure that is bolted to a radially inner surface of the engine fan containment case, just forward of the fan blades. The forward end of each panel is supported by bolts that span the arc covered by the panel. The aft end is supported by insertion of a lip formed in the panel into a mating groove of the fan case. Between these supports the panels are free to vibrate, restricted only by elastomeric spacers bonded to the outer surface of the panel and residing in the small radial gap between the panel and the inner surface of the fan case. When the vibration amplitude of the unsupported portion of the panels exceeds the gap between the spacers and the fan case, the spacers act as springs in compression and add stiffness to the overall panel. However, since the spacers have very little damping, they act as almost purely elastic springs, dissipating very little vibrational energy. The overall effect of the spacers is not enough to make the panel unresponsive to blade-passing stimulus.
# lafontaine/feature_director/features/image/color_counter.py
import numpy as np

from lafontaine.feature_director.feature.single_frame_feature import SingleFrameFeature
from lafontaine.feature_director.feature_result.single_frame_feature_result import SingleFrameFeatureResult
from lafontaine.helpers.frame import Frame


class ColorCounter(SingleFrameFeature):
    def __init__(self, color_count, frames):
        self.color_count = color_count
        super().__init__('ColorCounter', frames)

    @staticmethod
    def get_unique_colors(frame):
        # Flatten the HxWxC image to a list of pixels, then deduplicate rows.
        return np.unique(frame.image.reshape(-1, frame.image.shape[2]), axis=0)

    def check_feature(self, frame: Frame):
        unique_colors = self.get_unique_colors(frame)
        frame_color_count = len(unique_colors)
        return SingleFrameFeatureResult(frame_color_count >= self.color_count,
                                        self.frames,
                                        self.feature_id)
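For context, a minimal usage sketch: the feature flags frames whose palette meets a threshold. The `FakeFrame` below is a hypothetical stand-in for `lafontaine.helpers.frame.Frame` (only the `image` attribute is exercised), and running it assumes the real package is importable so that `SingleFrameFeature.__init__` can set `feature_id`.

```python
import numpy as np

# Hypothetical stand-in for lafontaine's Frame: ColorCounter only reads `image`.
class FakeFrame:
    def __init__(self, image):
        self.image = image

# A 2x2 RGB frame containing exactly three distinct colors.
image = np.array([[[255, 0, 0], [0, 255, 0]],
                  [[0, 255, 0], [0, 0, 255]]], dtype=np.uint8)

counter = ColorCounter(color_count=3, frames=1)
result = counter.check_feature(FakeFrame(image))  # passes: 3 unique colors >= 3
```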
Several individuals have taught structural composites using fluoropolymers reinforced with continuous filament fibers. These composites have high strength and good chemical stability. For example, Gentile et al. in U.S. Pat. No. 5,069,959 teaches a composite comprising a fluoropolymer resin matrix reinforced with continuous filament aligned fibers for use in corrosive high temperature environments. Gentile's matrix PFA fluorocarbon resin is reinforced with continuous filament fibers, and the composite has a flex modulus above 5 million psi. Gentile et al. in U.S. Pat. No. 4,975,321 teaches a composite comprising a fluoropolymer resin matrix reinforced with continuous filament aligned fibers for use in corrosive high temperature environments. The continuous fibers used are Hercules AS4 continuous graphite filaments coated with ethylene tetrafluoroethylene copolymer resins, although the composite may contain other resins. The continuous filament fiber may also be glass fibers or aramid fibers. R. H. Michel in U.S. Pat. No. 4,422,992 teaches an extrusion process for blending carbon fibers and tetrafluoroethylene copolymers and the formation of laminates therefrom. W. Novis Smith, Jr. et al. in U.S. Pat. No. 5,082,721 provides a fabric for protective garments, which fabric has high tensile fibers bonded by a film layer, which film comprises at least one of multiple polyhalogenated resins, with ethylene-vinyl alcohol copolymers bonded on the bottom surface of the fabric. The high tensile fabrics utilized include polyamides such as Kevlar.RTM., polyphenylene/polyphenylene oxide filaments, such as Nomex.RTM. fibers, and carbonaceous polymeric materials such as oxidized polyacrylonitrile fibers, and blends thereof. The polyhalogenated resins of Smith et al. include fluorinated ethylene perfluoroalkyl-vinylether copolymer resins (PFA) and perfluoroethylene perfluorinated propylene copolymer (FEP). Smith, in U.S. Pat. No. 4,970,105, also teaches a fabric for use in the manufacture of protective garments, containers and covers comprising an inner layer of a tear-resistant high tensile fabric and a film layer bonded on at least one surface of the fabric comprising a multiply polyhalogenated resin. Again the polyhalogenated resins are the fluorinated resins described above. The high tensile fibers utilized include polyamides such as Kevlar, the Nomex nylons, and PET (polyethylene terephthalate) fibers, and blends of these fibers with polybenzimidazoles and oxidized polyacrylonitrile fibers (carbon fibers). Finally, Fukuda et al. in U.S. Pat. No. 4,818,640 discloses a carbonaceous composite product, primarily used for fuel cell electrodes, which is produced by joining carbonaceous materials together by melt adhesion of tetrafluoroethylene resins or with tetrafluoroethylene resins mixed with highly electro-conductive carbon blacks. Although the above efforts have formed composites, these composites are not taught to be useful in the containment of highly acidic components and particularly in the containment of toxic waste materials which may also contain highly acidic materials. It is a goal of the instant invention to provide a chemically resistant, acid resistant composite material which can hold highly acidic and/or toxic wastes in appropriately designed containers for a period of at least 20 years.
It is also a goal of the instant invention to provide a chemically resistant, and particularly acid resistant, composite material by compositing certain matrix resins having good thermal stability when so composited, such matrix resins including thermoplastic or thermosetting resins, with carbon fibers and with selected borosilicate glass particulates. It is a goal of the invention to provide a composite having good thermal stability, high strength, and both acid resistance and chemical resistance for toxic wastes. Such a composite would include a matrix resin capable of thermal stability of at least 200.degree. C. within the cured composite, with the matrix resin structurally reinforced with selected carbon fibers and having interspersed therein borosilicate glass particulates, which particulates provide additional acid stability.
package com.paleblue.persistence.milkha.mapper;

import static com.paleblue.persistence.milkha.util.Preconditions.checkNotNull;

import java.util.Arrays;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

import com.paleblue.persistence.milkha.dto.TransactionLogItem;
import com.paleblue.persistence.milkha.dto.TransactionStatus;
import com.amazonaws.services.dynamodbv2.model.AttributeDefinition;
import com.amazonaws.services.dynamodbv2.model.AttributeValue;
import com.amazonaws.services.dynamodbv2.model.ScalarAttributeType;

public class TransactionLogItemMapper extends HashOnlyMapper<TransactionLogItem> {

    public static final String TRANSACTION_ID_KEY_NAME = "transactionId";
    public static final String TRANSACTION_STATUS_KEY_NAME = "transactionStatus";
    public static final String WAIT_PERIOD_BEFORE_SWEEPER_UNLOCK_MILLIS = "waitPeriodBeforeSweeperUnlockMillis";
    public static final String WAIT_PERIOD_BEFORE_SWEEPER_DELETE_MILLIS = "waitPeriodBeforeSweeperDeleteMillis";
    public static final String UNLOCKED_BY_SWEEPER = "unlockedBySweeper";
    public static final String TRANSACTION_LOG_TABLE_NAME = "TransactionLog";
    private static final String CREATE_SET_KEY_NAME = "createSet";
    private static final String DELETE_SET_KEY_NAME = "deleteSet";

    @Override
    public Map<String, AttributeValue> marshall(TransactionLogItem item) {
        checkNotNull(item);
        Map<String, AttributeValue> attributeMap = new HashMap<>();
        attributeMap.put(TRANSACTION_ID_KEY_NAME, new AttributeValue(item.getTransactionId()));
        attributeMap.put(TRANSACTION_STATUS_KEY_NAME, new AttributeValue(item.getTransactionStatus().name()));
        attributeMap.put(WAIT_PERIOD_BEFORE_SWEEPER_UNLOCK_MILLIS,
                new AttributeValue().withN(String.valueOf(item.getWaitPeriodBeforeSweeperUnlockMillis())));
        attributeMap.put(WAIT_PERIOD_BEFORE_SWEEPER_DELETE_MILLIS,
                new AttributeValue().withN(String.valueOf(item.getWaitPeriodBeforeSweeperDeleteMillis())));
        attributeMap.put(UNLOCKED_BY_SWEEPER, new AttributeValue().withBOOL(item.isUnlockedBySweeper()));
        if (item.getCreateSet() != null && !item.getCreateSet().isEmpty()) {
            attributeMap.put(CREATE_SET_KEY_NAME, toAttributeValue(item.getCreateSet()));
        }
        if (item.getDeleteSet() != null && !item.getDeleteSet().isEmpty()) {
            attributeMap.put(DELETE_SET_KEY_NAME, toAttributeValue(item.getDeleteSet()));
        }
        return attributeMap;
    }

    @Override
    public TransactionLogItem unmarshall(Map<String, AttributeValue> attributeMap) {
        checkNotNull(attributeMap);
        String transactionId = attributeMap.get(TRANSACTION_ID_KEY_NAME).getS();
        TransactionStatus transactionStatus = TransactionStatus.valueOf(attributeMap.get(TRANSACTION_STATUS_KEY_NAME).getS());
        Long waitPeriodBeforeSweeperUnlockMillis = Long.parseLong(attributeMap.get(WAIT_PERIOD_BEFORE_SWEEPER_UNLOCK_MILLIS).getN());
        Long waitPeriodBeforeSweeperDeleteMillis = Long.parseLong(attributeMap.get(WAIT_PERIOD_BEFORE_SWEEPER_DELETE_MILLIS).getN());
        boolean unlockedBySweeper = false;
        if (attributeMap.containsKey(UNLOCKED_BY_SWEEPER)) {
            unlockedBySweeper = attributeMap.get(UNLOCKED_BY_SWEEPER).getBOOL();
        }
        Map<String, List<Map<String, AttributeValue>>> createSet = null;
        if (attributeMap.containsKey(CREATE_SET_KEY_NAME)) {
            createSet = commitSetFromAttributeValue(attributeMap.get(CREATE_SET_KEY_NAME).getM());
        }
        Map<String, List<Map<String, AttributeValue>>> deleteSet = null;
        if (attributeMap.containsKey(DELETE_SET_KEY_NAME)) {
            deleteSet = commitSetFromAttributeValue(attributeMap.get(DELETE_SET_KEY_NAME).getM());
        }
        return new TransactionLogItem(transactionId, transactionStatus, waitPeriodBeforeSweeperUnlockMillis,
                waitPeriodBeforeSweeperDeleteMillis, unlockedBySweeper, createSet, deleteSet);
    }

    @Override
    public String getTableName() {
        return TRANSACTION_LOG_TABLE_NAME;
    }

    @Override
    public String getHashKeyName() {
        return TRANSACTION_ID_KEY_NAME;
    }

    @Override
    public Map<String, AttributeValue> getPrimaryKeyMap(String transactionId) {
        return Collections.singletonMap(TRANSACTION_ID_KEY_NAME, new AttributeValue(transactionId));
    }

    @Override
    public List<AttributeDefinition> getAttributeDefinitions() {
        return Arrays.asList(
                new AttributeDefinition(TRANSACTION_ID_KEY_NAME, ScalarAttributeType.S),
                new AttributeDefinition(TRANSACTION_STATUS_KEY_NAME, ScalarAttributeType.S),
                new AttributeDefinition(WAIT_PERIOD_BEFORE_SWEEPER_UNLOCK_MILLIS, ScalarAttributeType.N),
                new AttributeDefinition(WAIT_PERIOD_BEFORE_SWEEPER_DELETE_MILLIS, ScalarAttributeType.N),
                new AttributeDefinition(UNLOCKED_BY_SWEEPER, ScalarAttributeType.N),
                new AttributeDefinition(CREATE_SET_KEY_NAME, "M"),
                new AttributeDefinition(DELETE_SET_KEY_NAME, "M"));
    }

    private AttributeValue toAttributeValue(Map<String, List<Map<String, AttributeValue>>> commitSet) {
        Map<String, AttributeValue> tableToKeys = new HashMap<>();
        for (Map.Entry<String, List<Map<String, AttributeValue>>> entry : commitSet.entrySet()) {
            List<AttributeValue> keys = entry.getValue().stream()
                    .map(key -> new AttributeValue().withM(key))
                    .collect(Collectors.toList());
            tableToKeys.put(entry.getKey(), new AttributeValue().withL(keys));
        }
        return new AttributeValue().withM(tableToKeys);
    }

    private Map<String, List<Map<String, AttributeValue>>> commitSetFromAttributeValue(Map<String, AttributeValue> rawMap) {
        Map<String, List<Map<String, AttributeValue>>> tableToKeys = new HashMap<>();
        rawMap.forEach((tableName, keyList) -> tableToKeys.put(tableName,
                keyList.getL().stream().map(AttributeValue::getM).collect(Collectors.toList())));
        return tableToKeys;
    }
}
Congenital "neurovascular hamartoma" of the skin. A possible marker of malignant rhabdoid tumor. Distinct congenital, benign, probably hamartomatous, lesions of the upper dermis were noted in two children who subsequently developed malignant rhabdoid tumors. The dermal lesions, which we have named "neurovascular hamartomas" were characterized by a proliferation of capillaries in a background of bland spindle cells with possible neural features. In one child the malignant rhabdoid tumor was located in the kidney, and a synchronous primitive neuroectodermal tumor of the central nervous system was the cause of his death. The other infant had two neurovascular hamartomas, and a malignant rhabdoid tumor arose in contiguity with the deepest portion of the larger of the two hamartomas. An axillary lymph node metastasis rapidly developed in this child followed by widespread metastases and death 3 months later. Neuroectodermal differentiation was observed immunohistochemically or ultrastructurally in all rhabdoid tumors and in the tumor of the brain. This is the first report of a unique congenital benign dermal lesion that appears to be associated with malignant rhabdoid tumors in very young children. A genetic abnormality of neuroectodermal differentiation may underlie the development of these neoplasms.
Drug treatment in pregnancy. When considering drug therapy in pregnancy, the risk of treatment for the embryo/fetus has to be weighed against the risk to the mother and the child of carrying out no treatment. This is of particular relevance in certain conditions like diabetes, epilepsy or AIDS, where the risk of embryopathy is increased when no treatment is carried out and the available drugs are potentially teratogenic. However, carefully selected drugs and close-meshed monitoring may even decrease the risk for the child. In many instances, unintentional drug exposure occurs in the period before the pregnancy has been diagnosed. This may lead to additional diagnostic measures or even abortion of an otherwise wanted child. In both situations, planned and unintentional drug exposure during pregnancy, insufficient information is available on the clinical conditions relevant here and the specific drugs involved. Identification of potential teratogenic effects of a new drug takes place during the early development phase. However, animal models may not be representative of specifically human characteristics, e.g. deficiencies in enzymes. Since drug treatment is generally best avoided during pregnancy, pharmacokinetic studies in this population are rare. However, physiological changes known to be relevant for some drugs do occur during pregnancy. In order to improve knowledge on the pharmacokinetics of drugs in pregnancy, population pharmacokinetic approaches may represent a solution. Intensive efforts to investigate the efficacy and safety of drugs during pregnancy are necessary. Since controlled clinical trials are usually not feasible for ethical reasons, intensified collection of case reports as well as epidemiological studies are warranted to gain sufficient information for the counselling of pregnant women.
package com.github.yinfujing.dubbo.spring.boot.actuate;

import com.github.yinfujing.dubbo.spring.boot.autoconfigure.DubboAutoConfiguration;
import com.github.yinfujing.dubbo.spring.boot.demo.DemoServiceImpl;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.actuate.autoconfigure.EndpointAutoConfiguration;
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.test.context.ActiveProfiles;
import org.springframework.test.context.junit4.SpringRunner;

import static junit.framework.TestCase.assertEquals;

@RunWith(SpringRunner.class)
@SpringBootTest(classes = {
        ZookeeperHealthIndicator.class,
        DubboAutoConfiguration.class,
        DemoServiceImpl.class,
        EndpointAutoConfiguration.class
})
@ActiveProfiles({"dubbo-standard", "dubbo-consumer", "dubbo-provider"})
public class ZookeeperHealthIndicatorTest {

    @Autowired
    private ZookeeperHealthIndicator zookeeperHealthIndicator;

    @Test
    public void getZookeeperUrls() throws Exception {
        Health.Builder builder = new Health.Builder();
        zookeeperHealthIndicator.getZookeeperUrls().add("10.1.1.1");
        zookeeperHealthIndicator.doHealthCheck(builder);
        zookeeperHealthIndicator.setConnectionTimeout(10);
        assertEquals("UP {10.1.1.1=Unable to connect to zookeeper server within timeout: 1000, 10.1.1.234:2181=true, 10.1.1.153:2181=true}",
                builder.build().toString());
    }
}
ISSA RAE is on top of her game right now. Following the success of her HBO show Insecure and film roles in The Hate U Give and Little, the 34-year-old is continuing to scale new heights in Hollywood. However, the Senegalese American actress revealed that she sometimes has to change her language in order not to be labelled as mean or difficult. Speaking at a Beautycon panel discussing ‘Black Girl Magic on the Big Screen,’ Rae said: “I have to sugarcoat because I know that the environment I’m in will label me as the angry or difficult black woman. “It is frustrating, at times, to feel like you have to constantly work on how you’re going to present something, so you are able to work again in the future.” During the panel, Rae talked about her excitement about starring in Little alongside fellow black actresses Regina Hall and Marsai Martin. Little is a body-swap comedy centered on Jordan [Regina Hall], a take-no-prisoners tech mogul who torments her long-suffering assistant, April [Issa Rae], and the rest of her employees on a daily basis. She soon faces an unexpected threat to her personal life and career when she magically transforms into a 13-year-old version of herself [Marsai] right before a career-changing presentation.
// CalculatePercentage calculates the amount of coins for the given percentage.
func (coins Coins) CalculatePercentage(percentage uint) Coins {
	c := coins.NoNil()
	p := big.NewInt(int64(percentage))

	theta := new(big.Int)
	theta.Mul(c.ThetaWei, p)
	theta.Div(theta, Hundred)

	tfuel := new(big.Int)
	tfuel.Mul(c.TFuelWei, p)
	tfuel.Div(tfuel, Hundred)

	return Coins{
		ThetaWei: theta,
		TFuelWei: tfuel,
	}
}
Reduced Aeration Affects the Expression of the NorB Efflux Pump of Staphylococcus aureus by Posttranslational Modification of MgrA ABSTRACT We previously showed that at acid pH, the transcription of norB, encoding the NorB efflux pump, increases due to a reduction in the phosphorylation level of MgrA, which in turn leads to a reduction in bacterial killing by moxifloxacin, a substrate of the NorB efflux pump. In this study, we demonstrated that reduced oxygen levels did not affect the transcript levels of mgrA but modified the dimerization of the MgrA protein, which remained mostly in its monomeric form. Under reduced aeration, we also observed a 21.7-fold increase in the norB transcript levels after 60 min of growth that contributed to a 4-fold increase in the MICs of moxifloxacin and sparfloxacin for Staphylococcus aureus RN6390. The relative proportions of MgrA in monomeric and dimeric forms were altered by treatment with H2O2, but incubation of purified MgrA with extracts of cells grown under reduced but not normal aeration prevented MgrA from being converted to its dimeric DNA-binding form. This modification was associated with cleavage of a fragment of the dimerization domain of MgrA without change in MgrA phosphorylation and an increase in transcript levels of genes encoding serine proteases in cells incubated at reduced aeration. Taken together, these data suggest that modification of MgrA by proteases underlies the reversal of its repression of norB and increased resistance to NorB substrates in response to reduced-aeration conditions, illustrating a third mechanism of posttranslational modification, in addition to oxidation and phosphorylation, that modulates the regulatory activities of MgrA.
/**********************************************************************
 * $Id: EnhancedPrecisionOp.h 2556 2009-06-06 22:22:28Z strk $
 *
 * GEOS - Geometry Engine Open Source
 * http://geos.refractions.net
 *
 * Copyright (C) 2005-2006 Refractions Research Inc.
 *
 * This is free software; you can redistribute and/or modify it under
 * the terms of the GNU Lesser General Public Licence as published
 * by the Free Software Foundation.
 * See the COPYING file for more information.
 *
 **********************************************************************
 *
 * Last port: precision/EnhancedPrecisionOp.java rev. 1.9 (JTS-1.7)
 *
 **********************************************************************/

#ifndef GEOS_PRECISION_ENHANCEDPRECISIONOP_H
#define GEOS_PRECISION_ENHANCEDPRECISIONOP_H

#include <geos/export.h>
#include <geos/platform.h> // for int64

// Forward declarations
namespace geos {
	namespace geom {
		class Geometry;
	}
}

namespace geos {
namespace precision { // geos.precision

/** \brief
 * Provides versions of Geometry spatial functions which use
 * enhanced precision techniques to reduce the likelihood of robustness
 * problems.
 */
class GEOS_DLL EnhancedPrecisionOp {

public:

	/** \brief
	 * Computes the set-theoretic intersection of two
	 * Geometrys, using enhanced precision.
	 *
	 * @param geom0 the first Geometry
	 * @param geom1 the second Geometry
	 * @return the Geometry representing the set-theoretic
	 *         intersection of the input Geometries.
	 */
	static geom::Geometry* intersection(
			const geom::Geometry *geom0,
			const geom::Geometry *geom1);

	/**
	 * Computes the set-theoretic union of two Geometrys,
	 * using enhanced precision.
	 *
	 * @param geom0 the first Geometry
	 * @param geom1 the second Geometry
	 * @return the Geometry representing the set-theoretic
	 *         union of the input Geometries.
	 */
	static geom::Geometry* Union(
			const geom::Geometry *geom0,
			const geom::Geometry *geom1);

	/**
	 * Computes the set-theoretic difference of two Geometrys,
	 * using enhanced precision.
	 *
	 * @param geom0 the first Geometry
	 * @param geom1 the second Geometry
	 * @return the Geometry representing the set-theoretic
	 *         difference of the input Geometries.
	 */
	static geom::Geometry* difference(
			const geom::Geometry *geom0,
			const geom::Geometry *geom1);

	/**
	 * Computes the set-theoretic symmetric difference of two
	 * Geometrys, using enhanced precision.
	 *
	 * @param geom0 the first Geometry
	 * @param geom1 the second Geometry
	 * @return the Geometry representing the set-theoretic symmetric
	 *         difference of the input Geometries.
	 */
	static geom::Geometry* symDifference(
			const geom::Geometry *geom0,
			const geom::Geometry *geom1);

	/**
	 * Computes the buffer of a Geometry, using enhanced precision.
	 * This method should no longer be necessary, since the buffer
	 * algorithm now is highly robust.
	 *
	 * @param geom0 the first Geometry
	 * @param distance the buffer distance
	 * @return the Geometry representing the buffer of the input Geometry.
	 */
	static geom::Geometry* buffer(
			const geom::Geometry *geom,
			double distance);
};

} // namespace geos.precision
} // namespace geos

#endif // GEOS_PRECISION_ENHANCEDPRECISIONOP_H

/**********************************************************************
 * $Log$
 * Revision 1.2  2006/04/06 14:36:52  strk
 * Cleanup in geos::precision namespace (leaks plugged, auto_ptr use, ...)
 *
 * Revision 1.1  2006/03/23 09:17:19  strk
 * precision.h header split, minor optimizations
 *
 **********************************************************************/
Federal government employees will be granted flexible working hours to attend parent-teacher meetings or school activities, it has been announced. Employees will be able to take up to three hours out of their day to be part of their children’s activities, which includes graduation ceremonies. The UAE cabinet adopted the plan as part of the back to school policy launched under the National Programme for Happiness and Quality of Life. "The UAE government is keen to promote and consolidate social and family ties in the community," said Ohoud bint Khalfan Al Roumi, Minister of State for Happiness and Well-being. "This involves allowing parents to be a part of their children’s school activities in order to achieve our bigger goal of preserving a cohesive society." More than 94,000 students and 28,000 federal government employees are expected to benefit as a result of the plan, which is designed to help parents balance their work and home life and increase their productivity.
Worsening Heart Failure and Recurrent Untreated Ventricular Tachycardia in a CRT-D Treated Patient: Why A 71-year-old man with a history of severe nonischemic cardiomyopathy, ventricular tachycardia (VT), and chronic advanced heart failure was hospitalized with worsening heart failure and recurrent syncope 3 months after undergoing implantable cardioverter-defibrillator (ICD) generator replacement because of battery depletion. At the time, sensing, pacing, and impedance values of the atrial, right ventricular (RV), and left ventricular (coronary sinus) (LV) leads were appropriate; however, defibrillation was not tested because of hemodynamic instability. The patient was discharged the same day on standard heart failure medications, and the device was not analyzed until this hospitalization. During hospitalization the patient had several episodes of syncope during self-terminating episodes of rapid (190-200 bpm) VT (Fig. 1) that were not treated by the patient's device. The device (Guidant model H179, St. Paul, MN, USA) settings were as follows:
'use strict';

import { Parser } from './parser';
import { CommandFactory } from '../commands/command-factory';
import { TeleportCommand } from '../commands/teleport-command';
import { GameState } from '../state/game-state';

const verbSynonyms = ['teleport', 'port', 'portal'];

/**
 * Note: Developer Command a.k.a. cheat
 * Parses the input text and decides whether to return a teleport command.
 * The teleport keyword should be followed by the id of the node you want to teleport to.
 *
 * @class TeleportParser
 */
export class TeleportParser extends Parser {
    constructor(private commandFactory: CommandFactory) {
        super();
    }

    parseInput(inputText: string): TeleportCommand {
        if (!inputText) {
            return null;
        }

        const words = inputText.toLowerCase().match(/\b(\w+)\b/g);

        // Expect exactly two words: a teleport verb followed by a node id.
        if (words && words.length === 2 && verbSynonyms.indexOf(words[0]) !== -1) {
            const nodeId = Number(words[1]);
            return this.commandFactory.createTeleportCommand(nodeId);
        }

        return null;
    }
}
#include <iostream>
#include <string>
#include <vector>
#include <cmath>
#include <algorithm>
using namespace std;
using ll = long long;
ll mod = 1000000007;

int main(int argc, char const *argv[]) {
  int n, m;
  std::cin >> n >> m;

  // Read m intervals; store as {right, left} so sorting orders by right end.
  std::vector<std::vector<int>> v(m, std::vector<int>(2));
  for (size_t i = 0; i < m; i++) {
    std::cin >> v[i][1] >> v[i][0];
  }
  sort(v.begin(), v.end());

  // Greedy hitting set: process intervals by right endpoint; if no chosen
  // point already stabs (left, right], place one at the right endpoint.
  std::vector<int> kill;
  for (size_t i = 0; i < v.size(); i++) {
    bool flag = true;
    for (size_t j = 0; j < kill.size(); j++) {
      if (v[i][1] < kill[j] && kill[j] <= v[i][0]) {
        flag = false;
        break;
      }
    }
    if (flag) {
      kill.push_back(v[i][0]);
    }
  }

  std::cout << kill.size() << '\n';
  return 0;
}
What Is The Value Of A Star When Choosing A Provider For Total Joint Replacement? A Discrete Choice Experiment. The past decade witnessed a rapid rise in the public reporting of surgeon- and hospital-specific quality-of-care measures. However, patients' interpretations of star ratings and their importance relative to other considerations (for example, cost, distance traveled) are poorly understood. We conducted a discrete choice experiment in an outpatient setting (an academic joint arthroplasty practice) to study trade-offs that patients are willing to make in choosing a provider for a hypothetical total joint arthroplasty. Two hundred consecutive new patients presenting for hip or knee pain in 2018 were included. The average patient was willing to pay $2,607 and $3,152 extra for an additional hospital or physician star, respectively, and an extra $11.45 to not travel an extra mile for arthroplasty care. History of prior surgery and prior experience with rating systems reduced the relative value of an incremental star by $539.25 and $934.50, respectively. Patients appear willing to accept significantly higher copayments for higher quality of care, and surgeon quality seems relatively more important than hospital quality. Further study is needed to understand the value and trust patients place in publicly reported hospital and surgeon quality ratings.
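The willingness-to-pay figures imply concrete exchange rates between quality, distance, and cost that a reader can verify directly. Here is a minimal worked example using the numbers reported above; the dollar figures are from the abstract, while the derived ratios are simple arithmetic rather than reported results.

```python
# Reported willingness-to-pay estimates (USD).
hospital_star = 2607.00
physician_star = 3152.00
per_mile = 11.45

# Implied exchange rates: how many extra miles of travel one additional
# star is "worth" to the average patient.
print(hospital_star / per_mile)   # ~228 miles per hospital star
print(physician_star / per_mile)  # ~275 miles per physician star

# Prior exposure to rating systems discounts an incremental star's value.
print(physician_star - 934.50)    # ~2217.50 USD for an experienced rater
```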
package alg.ninegrid;

import org.junit.Assert;
import org.junit.Test;

public class NineGridTestCase2 {

    @Test
    public void test001() {
        int[][] initGrid = {
                { 7, 8, 0, 2, 3, 0, 1, 0, 5 },
                { 6, 0, 1, 0, 0, 0, 7, 0, 0 },
                { 9, 5, 0, 0, 0, 0, 0, 0, 0 },
                { 3, 0, 0, 0, 0, 1, 6, 5, 0 },
                { 0, 2, 0, 5, 0, 0, 0, 0, 0 },
                { 0, 0, 0, 0, 0, 0, 0, 0, 0 },
                { 0, 9, 6, 8, 7, 0, 3, 2, 0 },
                { 0, 0, 0, 0, 6, 0, 9, 0, 8 },
                { 0, 4, 0, 0, 0, 0, 0, 7, 6 } };
        SolutionRecursion solution = new SolutionRecursion(initGrid);
        int result = solution.calculate();
        Assert.assertEquals(NineGrid.EXECUTE_SUCCESS, result);
        System.out.println(solution.output());
    }

    @Test
    public void test002() {
        int[][] initGrid = {
                { 0, 0, 8, 3, 0, 9, 1, 0, 0 },
                { 9, 0, 0, 0, 6, 0, 0, 0, 4 },
                { 0, 0, 7, 5, 0, 4, 8, 0, 0 },
                { 0, 3, 6, 0, 0, 0, 5, 4, 0 },
                { 0, 0, 1, 0, 0, 0, 6, 0, 0 },
                { 0, 4, 2, 0, 0, 0, 9, 7, 0 },
                { 0, 0, 5, 9, 0, 7, 3, 0, 0 },
                { 6, 0, 0, 0, 1, 0, 0, 0, 8 },
                { 0, 0, 4, 6, 0, 8, 2, 0, 0 } };
        SolutionRecursion solution = new SolutionRecursion(initGrid);
        int result = solution.calculate();
        Assert.assertEquals(NineGrid.EXECUTE_SUCCESS, result);
        System.out.println(solution.output());
    }

    @Test
    public void test003() {
        int[][] initGrid = {
                { 8, 9, 0, 1, 7, 0, 0, 0, 0 },
                { 6, 0, 4, 9, 0, 0, 0, 8, 0 },
                { 0, 0, 0, 0, 0, 0, 6, 0, 0 },
                { 0, 0, 0, 0, 0, 2, 4, 3, 0 },
                { 0, 3, 0, 4, 0, 8, 0, 1, 0 },
                { 0, 1, 2, 7, 0, 0, 0, 0, 0 },
                { 0, 0, 9, 0, 0, 0, 0, 0, 0 },
                { 0, 5, 0, 0, 0, 7, 9, 0, 4 },
                { 0, 0, 0, 0, 6, 9, 0, 2, 7 } };
        SolutionRecursion solution = new SolutionRecursion(initGrid);
        int result = solution.calculate();
        Assert.assertEquals(NineGrid.EXECUTE_SUCCESS, result);
        System.out.println(solution.output());
    }
}
G. W. Reynolds Gilbert Westacott Reynolds (10 October 1895 Bendigo - 7 April 1967 Mbabane), was a South African optometrist and authority on the genus Aloe. Gilbert Reynolds arrived in Johannesburg with his parents in 1902, where his father started business as an optician. He received his education at St John's College where he was Victor Ludorum. After the outbreak of World War I he enlisted and saw active service in South West Africa and Nyasaland with the rank of captain. Having qualified as optometrist he joined his father's practice in 1921. Reynolds developed a keen interest in the bulbs and succulents of South Africa at about this time. When he started his own country practice about 1930, he was able to travel extensively and gradually narrowed his interests to Aloe. Reynolds was guided in the early stages of his research by Dr I. C. Verdoorn and Dr R. A. Dyer of the Botanical Research Institute in Pretoria, later becoming the authority on Aloe and having an extensive knowledge of the genus in the field and under cultivation. To gather material for his book, he explored the entire country, collecting specimens, gathering data and taking photographs of the plants in their natural habitats. General Smuts, himself an avid collector and experienced botanist, wrote the foreword to the book. Before the publication of Reynolds' work, no comprehensive guide to the aloes had been compiled, except for various writings and monographs which did not attempt a complete coverage. He spent four weeks at Kew towards the end of 1960, checking the taxonomy, type specimens and identifications.
An approach to improved energy efficient hybrid clustering in wireless sensor networks In wireless sensor networks (WSNs), the hierarchical clustering approach gives an efficient solution to the goal of maximizing network lifetime by minimizing energy utilization. In this paper, we introduce a cluster head (CH) selection process and a cluster formation algorithm based on re-selection of CHs, called the Improved Energy Efficient Hybrid Clustering Scheme (IEEHCS). In the proposed scheme, energy efficient CHs are selected by a centralized algorithm based on remaining energy, node density and minimum separation distance, reducing control message overheads. The key idea is that the CH role is repeated with the same settings or shifted to an eligible member node instead of re-clustering the whole network at every round. In this way, IEEHCS reduces the frequency of updating CHs, avoids unnecessary re-clustering in every round and saves a significant amount of node energy. Simulation results demonstrate that IEEHCS effectively reduces energy consumption and prolongs the network lifetime (time to first node death) by up to 45.39% and 11.36% over LEACH-C and EEHCS, respectively, in certain network settings.
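The selection criterion described above combines three quantities per candidate node. The sketch below illustrates one plausible centralized scoring pass; the equal weighting of energy and density and the minimum-separation check are illustrative assumptions, since the paper's exact formula is not reproduced here.

```python
import math

def select_cluster_heads(nodes, num_ch, min_sep):
    """Centralized CH selection sketch: rank nodes by remaining energy and
    local density, then enforce a minimum separation between chosen CHs.

    nodes: list of dicts with 'pos' (x, y), 'energy', 'density'.
    """
    def score(n):
        # Assumed scoring: equal weight to residual energy and node density.
        return n["energy"] + n["density"]

    heads = []
    for n in sorted(nodes, key=score, reverse=True):
        if all(math.dist(n["pos"], h["pos"]) >= min_sep for h in heads):
            heads.append(n)
        if len(heads) == num_ch:
            break
    return heads

# Toy network: high-scoring, well-separated nodes become cluster heads;
# the node at (0, 0) is skipped because it sits too close to (1, 1).
nodes = [
    {"pos": (0, 0),  "energy": 0.9, "density": 5},
    {"pos": (1, 1),  "energy": 0.8, "density": 6},
    {"pos": (10, 0), "energy": 0.7, "density": 4},
]
print([h["pos"] for h in select_cluster_heads(nodes, num_ch=2, min_sep=5)])
```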
from sklearn.metrics import confusion_matrix
from sklearn.tree import DecisionTreeClassifier
# from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
import itertools
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import nltk

nltk.download('punkt')

dftrain = pd.read_csv("../data/sample_data.csv")
urls = dftrain['url']

with open("../data/corpus_set.txt", 'r') as f:
    corpus_set = f.read().split(" ")


def count(tkn, tokens):
    # Count occurrences of tkn in the token list.
    c = 0
    for token in tokens:
        if tkn == token:
            c += 1
    return c


def extract_features(text):
    # Tokenize into lowercase runs, uppercase runs, and non-word characters,
    # then project the local counts onto the fixed global corpus vocabulary.
    pattern = r'[a-z]+|[A-Z]+|\W+'
    tokens = nltk.regexp_tokenize(text, pattern=pattern)
    counts = list()
    for token in list(set(tokens)):
        counts.append(count(token, tokens))
    local_bow = {k: v for k, v in zip(list(set(tokens)), counts)}
    global_bow = dict()
    for tk in corpus_set:
        if tk in local_bow.keys():
            global_bow[tk] = local_bow[tk]
        else:  # token not present in this URL
            global_bow[tk] = 0
    return np.array(list(global_bow.values()))


def plot_confusion_matrix(cm, classes,
                          normalize=False,
                          title='Confusion matrix',
                          cmap=plt.cm.Blues):
    """
    This function prints and plots the confusion matrix.
    Normalization can be applied by setting `normalize=True`.
    """
    if normalize:
        cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
        print("Normalized confusion matrix")
    else:
        print('Confusion matrix, without normalization')

    print(cm)

    plt.imshow(cm, interpolation='nearest', cmap=cmap)
    plt.title(title)
    plt.colorbar()
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes, rotation=45)
    plt.yticks(tick_marks, classes)

    fmt = '.2f' if normalize else 'd'
    thresh = cm.max() / 2.
    for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
        plt.text(j, i, format(cm[i, j], fmt),
                 horizontalalignment="center",
                 color="white" if cm[i, j] > thresh else "black")

    plt.ylabel('True label')
    plt.xlabel('Predicted label')
    plt.tight_layout()


X = np.array([extract_features(url) for url in urls])
print("Prepared X, shape: ", X.shape)
# print(X)
del urls

labels = dftrain.replace({"bad": 1, "good": 0})['label']
del dftrain
y = labels.values
print("Prepared y, shape: ", y.shape)
# print(y)
del labels

xtr, xts, ytr, yts = train_test_split(X, y, test_size=0.3)

model = DecisionTreeClassifier()
model.fit(xtr, ytr)

# scores = cross_val_score(model, X, y, cv=3)
scores = model.score(xts, yts)
print("Validation score:", scores)

y_pred = model.predict(xts)

# Compute confusion matrix
cnf_matrix = confusion_matrix(yts, y_pred)
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=['Benign', 'Malicious'],
                      title='Confusion matrix')
plt.show()
Fox has found a solution for the potential challenges of marketing the awkwardly named upcoming film Neighborhood Watch -- with its unintentional echo of the Trayvon Martin case -- by just changing the name altogether. We noted that the studio was having some headaches after their deliberately obscure marketing campaign led some to worry the Ben Stiller comedy about aliens could be misinterpreted as having something to do with the horrible killing of an unarmed teenager by a neighborhood watchman. So the studio's just going to call the film The Watch, according to Reuters. That doesn't sound like an awesome title in general, but it's much, much better and sort of resolves the difficulty at hand. Maybe someday we'll look back and declare 2012 the year of poorly named alien flicks.
Hepatic Encephalopathy: A Diagnosis for the Individual but an Experience for the Household

Abstract: Hepatic encephalopathy (HE) is a common complication of cirrhosis that results in unpredictable neuropsychiatric symptoms and increases the risk of death and disability. In the current issue of Clinical and Translational Gastroenterology, Fabrellas et al. report on a qualitative study that assesses the psychological impact of HE on both patients and their informal caregivers. Both patients and caregivers report diminished quality of life driven by disruptive anxiety and feelings of fear and sorrow. There is a need to optimize therapy for encephalopathy and to address the shared psychological impact of HE experienced by both patients and caregivers. Although it presents with a spectrum of severity, even the presence of minimal hepatic encephalopathy (HE) impacts every aspect of a patient's life. HE disrupts one's sleep, driving, daily functioning, and earning potential and carries with it the burden of repeated hospitalizations and diminished survival. Unsurprisingly, these effects on the patient spill over and affect the quality of life (QoL), health, and functioning of one's caregivers. Measuring patient- and caregiver-reported outcomes in cirrhosis provides essential insights into the subjective impact of the disease. However, our understanding of contributors to the psychological burden of HE remains incomplete. Missing from the literature are the voices of those experiencing the debilitating consequences of HE. Fabrellas et al. extend our knowledge of the scale and scope of the burden of HE on patients and their caregivers with an important study that gives voice directly to those who are affected.

STUDY FINDINGS

Using a mixed-methods study design, the authors enrolled 15 patients with a history of HE and their informal caregivers to complete validated QoL scales (the Medical Outcomes Study Short Form 36 and, additionally for caregivers, the Zarit Burden Index). In all areas of the physical and mental components of the Medical Outcomes Study Short Form 36, both patients and their caregivers reported markedly lower health-related QoL scores compared with established norms. Caregiver burden was also exceedingly high (mean Zarit Burden Index 51). Semistructured interviews revealed high expressions of fear, anxiety, sorrow, and anger in nearly half of patients and in one-third of their caregivers. Even more alarming was the disclosure from most participants that the entity of HE was unknown to them before its actual occurrence. In summary, these informative data provide us with 3 major opportunities to improve the care we deliver to patients and their caregivers (Table 1).

IDENTIFYING AT-RISK PERSONS

Improving the burden of HE begins with proper identification of those at risk so that we can prepare patients and mitigate known precipitants. This starts with risk stratification, which includes the use of scores such as the Bilirubin-Albumin-Beta-Blocker-Statin score and screening for covert HE with tools such as the Animal Naming Test or EncephalApp Stroop. In addition, routine care for persons with cirrhosis should include frequent review of medications and efforts to minimize risky medications, such as benzodiazepines, opiates, and proton pump inhibitors. Finally, nutritional interventions to achieve protein targets (e.g., 1 g/kg actual bodyweight) are recommended to reduce the risk of sarcopenia and forestall HE episodes.
REDUCING RISK

Secondary prophylaxis of HE relies on both pharmacologic and nonpharmacologic strategies. Lactulose, rifaximin, and optimized nutrition all reduce the risk of recurrent HE and improve patient-reported outcomes. However, as highlighted by Fabrellas et al., treatment must also address the psychological burden experienced by patients and their caregivers. Clinicians should inquire about, validate, and address the worries and fears experienced by their patients. When indicated, clinicians should then refer patients and their caregivers for counseling and consider interventions, such as mindfulness training, that improve mood, sleep, and caregiver burden. Additional, scalable interventions are also needed. We, for example, have launched a randomized controlled trial to assess the impact of resilience training and of emotional disclosure through diary-keeping on caregiver burden.

EDUCATION

Fabrellas et al. also found that patients and caregivers felt totally unprepared during the first occurrence of HE because of minimal or no awareness of HE being a complication that they were at risk of experiencing. Although striking, this is consistent with other studies. When assessed, patients' knowledge regarding the natural progression of cirrhosis is consistently low; however, improvement can be achieved through structured patient education. Similarly, there is a need for education of frontline providers to improve the recognition of covert encephalopathy in order to promote both timely referral and treatment.

CONCLUSION

The time is now to recognize the deleterious effects of HE on patient- and caregiver-reported health-related QoL. After identification, we must implement treatment strategies that not only address the disease but also reduce the burden placed on the household.
import React from "react";
import {Chart} from "$models/ChartModel";
import {Button, Modal, Space} from "antd";
import {CHART_SINGLE_VIEW, METRIC_EDIT} from "$constants/Route";
import Metric from "$components/Metric/Metric";
import {CopyToClipboard} from "react-copy-to-clipboard";
import {EditOutlined, ShareAltOutlined} from "@ant-design/icons/lib";
import ChartMetricView from "$components/Chart/ChartMetricView/ChartMetricView";
import {messageHandler} from "$utils/message";
import {APP_BASE_URL} from "$constants/index";

interface ChartPreviewProps {
    chart: Chart;
    visible: boolean;
    onOk: any;
}

/**
 * Metric preview modal (指标模态预览框).
 * UI strings: 关闭 = Close, 分享 = Share, 编辑 = Edit, 链接已复制 = Link copied.
 */
const ChartPreview: React.FC<ChartPreviewProps> = props => {
    const {chart, onOk, visible} = props;
    const chartSingleViewURL = `${APP_BASE_URL}${CHART_SINGLE_VIEW}/${chart.id}`;

    const actions = <Space>
        <Button onClick={onOk}>关闭</Button>
        <CopyToClipboard text={chartSingleViewURL}
                         onCopy={() => messageHandler("success", "链接已复制")}>
            <Button type="primary" icon={<ShareAltOutlined />}>分享</Button>
        </CopyToClipboard>
        <Button type="primary" icon={<EditOutlined />} href={`${METRIC_EDIT}/${chart.id}`} target="_blank">编辑</Button>
    </Space>;

    return (
        <Modal
            width="80%"
            closable={false}
            onOk={onOk}
            onCancel={onOk}
            bodyStyle={{padding: 0}}
            visible={visible}
            zIndex={100}
            footer={actions}
            destroyOnClose={true}
        >
            <Metric chart={chart} title={chart.title} />
            {chart.targets && (<ChartMetricView targets={chart.targets} />)}
        </Modal>
    );
};

export default ChartPreview;
An approach to understanding the interaction of hope and desire for explicit prognostic information among individuals with severe chronic obstructive pulmonary disease or advanced cancer. BACKGROUND Physicians often report that they are reluctant to discuss prognosis for life-threatening illnesses with patients and family out of concern for destroying their hope, yet there is little empirical research describing how patients and family incorporate their needs for hope with desires for prognostic information. OBJECTIVE We conducted a qualitative study to examine the perspectives of patients, family, physicians, and nurses on the simultaneous need for supporting hope and discussing prognosis. METHODS We conducted in-depth longitudinal qualitative interviews with patients with either advanced cancer or severe chronic obstructive pulmonary disease (COPD), along with their family, physicians, and nurses. We used principles of grounded theory to analyze the transcripts and evaluated a conceptual model with four diagrams depicting different types of approaches to hope and prognostic information. RESULTS We interviewed 55 patients, 36 family members, 31 physicians, and 25 nurses representing 220 hours of interviews. Asking patients directly "how much information" they wanted was, by itself, not useful for identifying information needs, but in-depth questioning identified variability in patients' and family members' desires for explicit prognostic information. All but 2 patients endorsed at least one of the diagrams concerning the interaction of hope and prognostic information and some patients described moving from one diagram to another over the course of their illness. Respondents also described two different approaches to communication about prognosis based on the diagram selected: two of the four diagrams suggested a direct approach and the other two suggested a cautious, indirect approach. CONCLUSIONS This study found important variability in the ways different patients with life-limiting illnesses approach the interaction of wanting support for hope and prognostic information from their clinicians. The four-diagram approach may help clinicians understand individual patients and families, but further research is needed to determine the utility of these diagrams for improving communication about end-of-life care.
Opioid Overdose Prevention Initiatives on the College Campus: Critical Partnerships between Academe and Community Experts Citation: Steiker LH. Opioid Overdose Prevention Initiatives on the College Campus: Critical Partnerships between Academe and Community Experts. J Drug Abuse. 2016, 2:2. Attention has recently turned to the epidemic of opioid overdose deaths in our country. From the President and political platforms to community groups to legislators, the crisis is being illuminated and changes are being effected. Presently, all but five states (AZ, KS, MO, MT, WY) have passed legislation designed to improve layperson naloxone access. Naloxone hydrochloride is a generic, non-narcotic opioid antagonist that blocks the brain cell receptors activated by opioids. It is a fast-acting drug that, when administered during an overdose, blocks the effects of opioids on the brain and restores breathing within two to three minutes of administration. It is not psychoactive, has no potential for abuse, and side effects are rare. Naloxone, available in injectable and inhalable forms, makes opioid overdose prevention effective. This prescription drug epidemic has been widespread on college campuses. Between 1993 and 2005, the proportion of college students using prescription drugs went up dramatically: use of opioids such as Vicodin, Oxycontin, and Percocet increased by 343%. In addition, 50% of college students are offered a prescription drug for nonmedical purposes by their sophomore year. Opioids are becoming the college drug of choice. Studies suggest that the problem is most prevalent among highly selective urban colleges. Since 1991, fatal overdoses from prescription painkillers have more than tripled. Students embrace the misconception that prescription drugs are "safer" than illegal narcotics; the staggering increase in such deaths illuminates this faulty logic. Intervention has been slow due to denial, lack of awareness and resources, and stigma. There are also misconceptions that heroin and fentanyl are not present on college campuses. However, they are becoming more and more available and pervasive, especially with the availability of heroin in powder form, black tar heroin, and synthetic fentanyl, which is 50-100 times more powerful than morphine. Some campuses are making progress on opioid overdose prevention efforts. Due to the epidemic and rash of overdose deaths, campuses are challenged to educate students, faculty and staff about overdose and prevention. Some are starting with campus police departments. Others are working through university health services and Resident Assistants in dorms. The University of Washington is placing Naloxone kits next to fire extinguishers in case of emergencies. Others are distributing information through the Internet, e.g. the GetNaloxoneNow.org training for college students. Some states and municipalities, including parts of New York City and Boston, have already launched programs to equip law enforcement with naloxone. Twenty colleges/university systems have their Police Departments trained and carrying Naloxone. We can and should continue to rely on EMS to respond to overdoses, but not to the exclusion of others who may be first on the scene, often law enforcement personnel.
Student, staff, and faculty trainings can be very basic, and should provide the following learning goals:

- Participants will be able to recognize the signs of an opioid overdose.
- Participants will know the effective response to an overdose (calling 911 and rescue breathing or chest compressions) and how to evaluate the situation.
- All participants should have knowledge of naloxone and the recent related legislation in their state.
- Participants will observe and be able to perform rescue breathing.

Ideally, participants should be evaluated for efficacy. Some trainings require the certification provided on the GetNaloxoneNow.org website prior to participants receiving their Naloxone (which may be available with standing orders in local pharmacies or from local harm reduction coalitions, depending on the area). There are trainers and templates available, and programs can be tailored to the culture and needs of each campus and community. A 2013 cost-benefit analysis published in the Annals of Internal Medicine concludes that Naloxone distribution "is likely to reduce overdose deaths and is cost-effective, even under markedly conservative assumptions". Specifically, the study found that an average of one life would be saved for every 164 Naloxone kits distributed. Before the epidemic, Naloxone was fairly inexpensive, but now that there is demand for the drug, prices are rising. Foundations, such as the Clinton Foundation, are working to ensure that it is available and accessible. However, universities may have to be creative about submitting grants for overdose prevention and Naloxone. Every second counts in the medical emergency of an overdose. With appropriate training, administering naloxone is safe and simple. Professionals, students, resident assistants, and campus employees should have the training and the necessary tool, naloxone, to make a difference when it matters most. Many law enforcement officers and first responders are already trained in using AEDs (automated external defibrillators) or in administering CPR (cardiopulmonary resuscitation). Adding naloxone to their set of tools will undoubtedly help save lives. College campuses have an obligation to provide education and resources to make young adults and campus employees aware of the dangers of misusing opioids and how to intervene when an overdose occurs. Teams must find ways to collaborate and achieve effective communication and networking between community experts, academic specialists, researchers, students, faculty, staff and university administrators. Overdose prevention trainings exist and can be tailored for university groups. The greatest challenge is weaving policy, practice, and research for further impact on college campuses. We need to act quickly. There are lives to be saved.
Newly minted Sen. Scott Brown has officially made himself an embarrassment to his state. He was shameless enough to portray the recent tragedy in Texas (the plane that was flown into the Internal Revenue Service building) as an example of voter anger and frustration. Not only that, but according to Mr. Brown, it is the same voter anger and frustration that put him in office. I'm unsure why Mr. Brown felt the need to defend a man who, instead of seeking help for his obvious mental illness, decided to commit an act of domestic terrorism. If the voters of Massachusetts are paying attention, they will realize what a tremendous mistake they made in trusting this man to represent them.
Where did thousands of dollars' worth of energy drinks go? They're in this trailer, the FBI says. Law enforcement found the truck, but the trailer and its stamina-boosting cargo are still missing. TAMPA, Fla. — Where did thousands of dollars' worth of energy drinks go? The FBI is offering a couple thousand dollars for information that leads to their recovery. Someone stole a semi-truck and its trailer during the overnight hours of Feb. 2-3 in the area of East Broadway Avenue and 50th Street in Tampa, according to a news release. The FBI says the cargo contained some $65,000 worth of energy drinks. Authorities found the truck in Broward County, Florida, but the trailer and beverages are nowhere to be found. The trailer has an identification number of LRG #5347 with Florida tag 2277CS. Anyone with information is asked to call the FBI Tampa field office at 813-253-1000 or send a tip to tips.fbi.gov to claim the $2,000 reward.
import { useEffect } from "react";
import { useDispatch, useSelector } from "react-redux";
import { useRouter } from "next/router";
import { RootState } from "../store";
import { tokenSlice } from "../store/token";
import { userInfoSlice } from "../store/user";
import { axios } from "../utils/axios";

// Verifies the stored JWT on mount; attempts a refresh on failure and
// redirects to the login page when no valid session can be restored.
const LoginCheck = () => {
  const dispatch = useDispatch();
  const token = useSelector((state: RootState) => state.token.token);
  const router = useRouter();

  useEffect(() => {
    if (router.pathname === "/login") return;

    if (token.jwt == null) {
      // No token at all: clear any stale user info and go to login.
      // (Fixed: the reset action was created but never dispatched.)
      dispatch(userInfoSlice.actions.reset());
      router.push("/login?redirect=" + router.pathname);
      return;
    }

    axios
      .post("/v1/auth/jwt/verify", { token: token.jwt })
      .then(() => console.log("token OK")) // translated from "TOKEN問題なし"
      .catch(() =>
        axios
          .post("/v1/auth/jwt/refresh", { refresh: token.refresh })
          .then((res: any) => {
            dispatch(
              tokenSlice.actions.updateToken({
                jwt: res.data.access,
                refresh: token.refresh,
              })
            );
          })
          .catch(() => {
            // Refresh failed too: the session is gone.
            dispatch(userInfoSlice.actions.reset());
            router.push("/login?redirect=" + router.pathname);
          })
      );
  }, []);

  return <></>;
};

export default LoginCheck;
Kumbhaka
Kumbhaka is the retention of the breath in the hatha yoga practice of pranayama. It has two types: accompanied (by breathing), whether after inhalation or after exhalation, and, the ultimate aim, unaccompanied. That state is kevala kumbhaka, the complete suspension of the breath for as long as the practitioner wishes.
Breath retention
The name kumbhaka is from Sanskrit कुम्भ kumbha, a pot, comparing the torso to a vessel full of air. Kumbhaka is the retention of the breath in pranayama, either after inhalation, the inner or Antara Kumbhaka, or after exhalation, the outer or Bahya Kumbhaka (also called Bahir Kumbhaka). According to B.K.S. Iyengar in Light on Yoga, kumbhaka is the "retention or holding the breath, a state where there is no inhalation or exhalation". Sahit or Sahaja Kumbhaka is an intermediate state, when breath retention becomes natural, at the stage of withdrawal of the senses, Pratyahara, the fifth of the eight limbs of yoga. Kevala Kumbhaka, when inhalation and exhalation can be suspended at will, is the extreme stage of Kumbhaka, "parallel with the state of Samadhi", or union with the divine, the last of the eight limbs of yoga, attained only by continuous long-term pranayama and kumbhaka exercises. The 18th-century Joga Pradipika states that the highest breath control, which it defines as inhaling to a count (mātrā) of 8, holding to a count of 19, and exhaling to a count of 9, confers liberation and Samadhi. The Yoga Institute recommends sitting in a meditative posture such as Sukhasana for Kumbhaka practice. After a full inhalation lasting 5 seconds, it suggests retaining the air for 10 seconds, exhaling smoothly, and then taking several ordinary breaths. It recommends five such rounds per pranayama session, increasing the time of retention as far as is comfortable by one second each week of practice.
Historical purpose
The yoga scholar Andrea Jain states that while pranayama in modern yoga as exercise consists of synchronising the breath with movements (between asanas), in ancient texts like the Bhagavad Gita and the Yoga Sutras of Patanjali, pranayama meant "complete cessation of breathing", for which she cites Bronkhorst 2007. The Yoga Sutras state:
[D]istractions ... act as barriers to stillness. ... One can subdue these distractions by ... pausing after breath flows in or out. — Yoga Sutras, 1:30-34, translated by Chip Hartranft
With effort relaxing, the flow of inhalation and exhalation can be brought to a standstill; this is called breath regulation. — Yoga Sutras, 2:49, translated by Chip Hartranft
According to the scholar-practitioner of yoga Theos Bernard, the ultimate aim of pranayama is the suspension of breathing, "causing the mind to swoon". Swami Yogananda writes, "The real meaning of Pranayama, according to Patanjali, the founder of Yoga philosophy, is the gradual cessation of breathing, the discontinuance of inhalation and exhalation". The yoga scholars James Mallinson and Mark Singleton write that "pure breath-retention" (without inhalation or exhalation) is the ultimate pranayama practice in later hatha yoga texts. They give as an example the account in the c. 13th century Dattātreyayogaśāstra of kevala kumbhaka (breath retention unaccompanied by breathing). They note that this is "the only advanced technique" of breath-control in that text, stating that in it the breath can be held "for as long as one wishes".
The Dattātreyayogaśāstra states that kevala kumbhaka gives magical powers, allowing the practitioner to do anything: Once unaccompanied [kevala] breath-retention, free from exhalation and inhalation, is mastered, there is nothing in the three worlds that is unattainable. — Dattātreyayogaśāstra 74 The 15th century Hatha Yoga Pradipika states that the kumbhakas force the breath into the central sushumna channel (allowing kundalini to rise and cause liberation). The 18th century Gheranda Samhita states that death is impossible when the breath is held in the body. Mallinson and Singleton note that sahita kumbhaka, the intermediate state which is still accompanied (the meaning of sahita) by breathing, was described in detail. They write that the Goraksha Sataka describes four sahita kumbhakas, and that the Hatha Yoga Pradipika describes another four. They point out, however, that these supposed kumbhakas differ in their styles of breathing, giving the example of the buzzing noise made while breathing in bhramari.
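The counted practice schedules described above translate directly into a simple pacing aid. Below is a minimal sketch of a console timer for the Yoga Institute's progressive protocol (5 s inhalation, 10 s initial retention, five rounds per session, retention lengthened by one second per week of practice); the function name, prompts, and the assumed 4 s per resting breath are illustrative assumptions, not part of any yoga text:

import time

def kumbhaka_session(weeks_practised=0, rounds=5,
                     inhale_s=5, base_hold_s=10, rest_breaths=3):
    """Console pacing aid for the sahita kumbhaka schedule above.
    Retention grows by one second per week of practice."""
    hold_s = base_hold_s + weeks_practised
    for r in range(1, rounds + 1):
        print(f"Round {r}: inhale for {inhale_s} s")
        time.sleep(inhale_s)
        print(f"  retain for {hold_s} s")
        time.sleep(hold_s)
        print("  exhale smoothly, then take a few ordinary breaths")
        time.sleep(rest_breaths * 4)  # assumption: ~4 s per resting breath

kumbhaka_session(weeks_practised=2)  # third week of practice: 12 s retention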
/**
 * WebdavFile
 *
 * @author S. Koulouzis, Piter T. de Boer
 */
public class WebdavFile extends VFile {
    private static ClassLogger logger;

    static {
        logger = ClassLogger.getLogger(WebdavFile.class);
        logger.setLevelToDebug();
    }

    // === Instance ===

    private WebdavResource webdavResource;
    private WebdavFileSystem webdavFileSystem;

    public WebdavFile(WebdavFileSystem webdavFileSystem, VRL vrl, WebdavResource webdavResource) {
        super(webdavFileSystem, vrl);
        this.webdavResource = webdavResource;
        this.webdavFileSystem = webdavFileSystem;
    }

    @Override
    public boolean exists() throws VlException {
        return webdavResource.exists();
    }

    @Override
    public long getLength() throws VlException {
        return webdavResource.getGetContentLength();
    }

    @Override
    public boolean create(boolean ignoreExisting) throws VlException {
        VFile file = webdavFileSystem.createFile(getVRL(), ignoreExisting);
        return (file != null);
    }

    @Override
    public long getModificationTime() throws VlException {
        return webdavResource.getGetLastModified();
    }

    @Override
    public boolean isReadable() throws VlException {
        try {
            AclProperty res = webdavResource.aclfindMethod(getVRL().getPath());
            if (res != null) {
                Ace[] ace = res.getAces();
                // Fixed: the original tested (ace != null || ace.length > 1), which
                // dereferences a potential null and skips single-entry ACLs.
                if (ace != null && ace.length > 0) {
                    for (int i = 0; i < ace.length; i++) {
                        logger.debugPrintf("ACL: %s\n", ace[i].getPrincipal());
                    }
                }
            }
        } catch (HttpException e) {
            throw new VlException(e);
        } catch (IOException e) {
            throw new VlIOException(e);
        }
        // TODO: derive readability from the ACL entries logged above;
        // until then this conservatively reports the file as not readable.
        return false;
    }

    @Override
    public boolean isWritable() throws VlException {
        // TODO Auto-generated method stub
        return false;
    }

    @Override
    public VRL rename(String newNameOrPath, boolean nameIsPath) throws VlException {
        VRL destination;
        if (nameIsPath || newNameOrPath.startsWith("/")) {
            // absolute path: replace the whole path part of the VRL
            destination = getVRL().copyWithNewPath(newNameOrPath);
        } else {
            // plain name: rename within the parent directory
            destination = getVRL().getParent().append(newNameOrPath);
        }
        return webdavFileSystem.move(getVRL(), destination, false);
    }

    public InputStream getInputStream() throws VlException {
        try {
            return webdavResource.getMethodData();
        } catch (HttpException e) {
            throw new VlException(e);
        } catch (IOException e) {
            throw new VlIOException(e);
        }
    }

    public OutputStream getOutputStream() throws VlException {
        // TODO Auto-generated method stub
        return null;
    }

    public boolean delete() throws VlException {
        return webdavFileSystem.delete(getVRL(), true);
    }
}
package SuperRainbowReefGame;

import java.awt.image.BufferedImage;

class Dynamic extends GraphicOBJxIMG {

    double movementRate;

    Dynamic() {
    }

    Dynamic(BufferedImage graphic, int a, int b, double movementRate) {
        super(a, b, graphic, null);
        this.movementRate = movementRate;
    }

    double acqMovementRate() {
        return movementRate;
    }
}
Conceptual decoding from word graphs on the MEDIA human-machine dialogue corpus (Décodage conceptuel à partir de graphes de mots sur le corpus de dialogue homme-machine MEDIA). Within the framework of the French evaluation program MEDIA on spoken dialogue systems, this paper presents the methods proposed at the LIA for the robust extraction of basic conceptual constituents (or concepts) from an audio message. The conceptual decoding model proposed follows a stochastic paradigm and is directly integrated into the Automatic Speech Recognition (ASR) process. This approach allows us to keep the probabilistic search space on sequences of words produced by the ASR module and to project it onto a probabilistic search space of sequences of concepts. The experiments carried out on the MEDIA corpus show that the performance reached by our approach is better than that of the traditional sequential approach, which first looks for the best sequence of words before looking for the best sequence of concepts.
Reaction cuvettes have been provided in the past with a burstable reagent compartment, a reaction flow passage between the inlet end and exit aperture, and a filter across the flow passage. Examples can be found, e.g., in WO 86/00704. However, the cuvette in the latter operates by bursting the seal between the two sheets defining the reagent compartment, rather than by bursting the container wall. Such a construction is also used in other cuvettes having burstable compartments, e.g., those described in U.S. Pat. Re. No. 29,725 and EPA 381501. Such burstable seals can produce problems of indeterminate sealing strength. That is, the force needed to burst the seal is not uniform from cuvette to cuvette, due to variations in the sealing conditions (bonding temperatures and/or pressures). As a result, the temporary seal can fail unexpectedly or prematurely, leading to unsatisfactory results. Therefore, there has been a need prior to this invention to provide such a cuvette wherein the burst strength of the compartment is more predictable and uniform.
import numpy as np

from layers import (FullyConnectedLayer, ReLULayer,
                    softmax_with_cross_entropy, l2_regularization, softmax)


class TwoLayerNet:
    """Neural network with two fully connected layers."""

    def __init__(self, n_input, n_output, hidden_layer_size, reg):
        """
        Initializes the neural network

        Arguments:
        n_input, int - dimension of the model input
        n_output, int - number of classes to predict
        hidden_layer_size, int - number of neurons in the hidden layer
        reg, float - L2 regularization strength
        """
        self.reg = reg
        self.fc_layer1 = FullyConnectedLayer(n_input, hidden_layer_size)
        self.relu1 = ReLULayer()
        self.fc_layer2 = FullyConnectedLayer(hidden_layer_size, n_output)

    def compute_loss_and_gradients(self, X, y):
        """
        Computes total loss and updates parameter gradients
        on a batch of training examples

        Arguments:
        X, np array (batch_size, input_features) - input data
        y, np array of int (batch_size) - classes
        """
        # Clear parameter gradients aggregated from the previous pass
        for param in self.params().values():
            param.grad = np.zeros_like(param.value)

        # Forward pass
        out1 = self.fc_layer1.forward(X)
        out1_relu = self.relu1.forward(out1)
        out2 = self.fc_layer2.forward(out1_relu)
        loss, d_preds = softmax_with_cross_entropy(out2, y)

        # Backward pass fills the parameter gradients layer by layer
        d_out1_relu = self.fc_layer2.backward(d_preds)
        d_out1 = self.relu1.backward(d_out1_relu)
        self.fc_layer1.backward(d_out1)

        # L2 regularization on the weights of both layers
        for layer in (self.fc_layer1, self.fc_layer2):
            reg_loss, reg_grad = l2_regularization(layer.params()['W'].value, self.reg)
            loss += reg_loss
            layer.params()['W'].grad += reg_grad

        return loss

    def predict(self, X):
        """
        Produces classifier predictions on the set

        Arguments:
        X, np array (test_samples, num_features)

        Returns:
        y_pred, np.array of int (test_samples)
        """
        out1 = self.fc_layer1.forward(X)
        out1_relu = self.relu1.forward(out1)
        out2 = self.fc_layer2.forward(out1_relu)
        probs = softmax(out2)
        return np.argmax(probs, axis=1)

    def params(self):
        return {
            'W1': self.fc_layer1.params()['W'],
            'B1': self.fc_layer1.params()['B'],
            'W2': self.fc_layer2.params()['W'],
            'B2': self.fc_layer2.params()['B'],
        }
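A quick smoke test of the class above (the synthetic data, layer sizes, and regularization strength are illustrative assumptions; the layers module must be importable exactly as in the original code):

import numpy as np

# 20 random samples with 32 features, 4 target classes
X = np.random.randn(20, 32)
y = np.random.randint(0, 4, size=20)

model = TwoLayerNet(n_input=32, n_output=4, hidden_layer_size=16, reg=1e-3)
loss = model.compute_loss_and_gradients(X, y)
print("initial loss:", loss)                 # roughly ln(4) ≈ 1.39 plus the L2 term
print("predicted classes:", model.predict(X))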
Michael Hicks (musicologist)
Michael Dustin Hicks (born 1956) is an American professor of music, poet and artist who has studied a broad array of topics, although his work on music and The Church of Jesus Christ of Latter-day Saints has been ground-breaking in that field. Hicks was born and raised in California. He holds a bachelor's degree from Brigham Young University (BYU) and a DMA from the University of Illinois at Urbana-Champaign. He has been on the music faculty at BYU since 1984, and a full professor there since 1996. Hicks's first book was Mormonism and Music: A History (1989). This work received awards from both the Mormon History Association and the Association of Mormon Letters. In 1990 his work Sixties Rock: Garage, Psychedelic and Other Satisfactions was published. This book received significant coverage in Music and History: Bridging the Disciplines, edited by Jeffrey H. Jackson and Stanley C. Pelkey. His book Henry Cowell: Bohemian was published in 2002. In 2012 his work Christian Wolff was published. In 2015 his work The Mormon Tabernacle Choir: A Biography was published. All these works have been published by the University of Illinois Press. Hicks has created a variety of chamber and solo works. From 2007 to 2010 Hicks was editor of the journal American Music, published by the University of Illinois (not to be confused with the Journal of the Society for American Music, which used to be published as American Music).
p53 localizes to the centrosomes and spindles of mitotic cells in the embryonic chick epiblast, human cell lines, and a human primary culture: An immunofluorescence study. Immunofluorescent staining of mitotic centrosomes and spindles by anti-p53 antibodies was observed in the embryonic chick epiblast by epifluorescence microscopy and in three human cancer cell lines, an SV40-immortalized cell line, and a normal human fibroblast culture by confocal microscopy. In the chick epiblast, the centrosomes stained from early prophase through to the formation of the G1 nuclei and the spindle fibers stained from prophase through to telophase. In the human cells, the staining was observed from late prophase to telophase. The epiblast was stained by the anti-p53 antibodies DO-1, Ab-6, and Bp53-12. The human cells were also stained by these antibodies as well as by other anti-p53 antibodies. Preabsorption of DO-1 and Bp53-12 with purified tubulin did not diminish the immunostaining, showing that the antibodies were not reacting with tubulin in the mitotic centrosomes and spindles. The immunostaining in the chick epiblast was very clearly localized to the mitotic centrosomes and spindles, revealing a cytoplasmic location for p53 during mitosis and accounting for earlier reports of an association between p53, tubulin, and centrosomes. The localization of p53 to the spindle supports an involvement of p53 in spindle function.
import freetype
import numpy as np


class Bitmapper:
    """Given a font file, render utf-8 characters onto a bitmap canvas
    and return the result as a numpy array.
    """
    face = freetype.Face("NotoSansMonoCJKtc-Regular.otf")
    max_rows = 48
    max_width = 48
    # TODO(nwan): This seems to work better than 48*48, must be pertinent to
    # the widest glyph in the font, but this seems to get us close to a 48*48
    # for the small number of chinese characters I've tried.
    face.set_char_size(max_rows * 64)

    def render(self, c):
        # render only a single character; str replaces the original
        # Python 2 unicode check
        assert isinstance(c, str)
        assert len(c) == 1

        # create np array from bitmap buffer
        self.face.load_char(c)
        bitmap = self.face.glyph.bitmap
        buffer = np.array(bitmap.buffer).reshape([bitmap.rows, -1])

        # reshape np array into (max_rows x max_width) by padding right and
        # bottom; <= allows glyphs that exactly fill the canvas
        assert buffer.shape[0] <= self.max_rows, \
            "%r has too many rows: %r" % (c, buffer.shape[0])
        assert buffer.shape[1] <= self.max_width, \
            "%r has too many columns: %r" % (c, buffer.shape[1])
        buffer = np.pad(buffer,
                        ((0, self.max_rows - bitmap.rows),
                         (0, self.max_width - bitmap.width)),
                        mode='constant')
        return buffer
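A short usage sketch for the class above (the sample character is an arbitrary assumption; the font file must be present next to the script, as in the original):

bm = Bitmapper()
glyph = bm.render("中")   # any single CJK character covered by the font
print(glyph.shape)         # (48, 48)
print(glyph.max())         # 0-255 grayscale coverage values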
Measurement of Regional 2D Gas Transport Efficiency in Rabbit Lung Using Hyperpolarized 129Xe MRI
While hyperpolarized xenon-129 (HXe) MRI offers a wide array of tools for assessing functional aspects of the lung, existing techniques provide only limited quantitative information about the impact of an observed pathology on overall lung function. By selectively destroying the alveolar HXe gas phase magnetization in a volume of interest and monitoring the subsequent decrease in the signal from xenon dissolved in the blood inside the left ventricle of the heart, it is possible to directly measure the contribution of that saturated lung volume to the gas transport capacity of the entire lung. In mechanically ventilated rabbits, we found that both xenon gas transport and transport efficiency exhibited a gravitation-induced anterior-to-posterior gradient that disappeared or reversed direction, respectively, when the animal was turned from supine to prone position. Further, posterior ventilation defects secondary to acute lung injury could be re-inflated by applying positive end expiratory pressure, although at the expense of decreased gas transport efficiency in the anterior volumes. These findings suggest that our technique might prove highly valuable for evaluating lung transplants and lung resections, and could improve our understanding of optimal mechanical ventilator settings in acute lung injury.
dynamically as a function of time after DP saturation with a narrow bandwidth RF pulse or, more recently, as a combination of the two by analyzing multiple data sets acquired with different flip angles or flip angle/repetition time (TR) combinations 29,30. Yet each region of the lungs contributes only an individual, unknown share of the overall amount of gas transport from the alveoli to the arterial blood, which depends on the distributions of ventilation and blood flow, as well as the local efficacy of alveolar-capillary diffusion. Thus, while existing DP MRI techniques may be sufficiently sensitive to localize regional abnormalities in pulmonary gas exchange, they provide only limited information about the actual impact of an observed pathology on overall lung function. The large chemical shift difference between GP and DP means that the dynamics of HXe accumulation in one or more tissue compartments of the lung can easily be quantified, but the measurement of actual net xenon transport by the pulmonary circulation is less straightforward. Upon inhalation, xenon quickly dissolves in the alveolar septal walls and capillary blood in proportion to their volume and the xenon solubility for each physiological compartment. Once these compartments are saturated with xenon, the xenon magnetization reaches steady state conditions. Despite continuous removal of xenon from the alveoli by the blood stream, this volume is immediately replenished by fresh xenon from the alveolar GP, such that the net DP magnetization appears to be approximately constant and the DP signal in any given pixel within the lung parenchyma will no longer reflect blood flow; however, even with non-equilibrium measurements, it is challenging to accurately extract directional transport processes. One way out of this dilemma is to quantify the xenon DP signal once it has left the lung parenchyma. Since essentially all xenon gas that is taken up by the blood stream passes through the heart before it is distributed throughout the body, the left atrium and ventricle represent excellent central reporting sites for this purpose.
If the xenon DP in the heart following saturation of the xenon GP magnetization in a volume of interest is compared to the heart signal without GP saturation, the contribution of the saturated region to the total pulmonary gas transport can be determined in a completely non-invasive and model-free manner 29. In this work, we investigated the feasibility of such an approach by characterizing regional pulmonary gas transport in rabbits, both as a function of animal position and in a model of acute lung injury following acid aspiration. Although only a single DP resonance can be resolved in rabbits at 1.5 T, one advantage of our proposed method is that it is entirely imaging-based and does not depend on specifically identifying xenon bound to hemoglobin. It is therefore applicable in any species.
Figure 1. Schematic of the image analysis in a GP-saturation data set. The heart and aortic arch were manually segmented from the acquisition with the largest anterior GP saturation volume (bottom row), and the mask was applied to all other images in the set. The contributions of lung regions L1-L4 to the pulmonary gas transport were calculated as the difference in DP signal within the heart mask between consecutive acquisition pairs (top to bottom). The residual heart signal following the GP saturation with the largest volume was assigned to the remaining ventilated lung volume (bottom row).
Methods
Animal Studies. Five New Zealand rabbits (3.5-4.5 kg) were anesthetized (intraperitoneal ketamine and xylazine) and tracheotomized. Peripheral veins were accessed to maintain general anesthesia (Propofol), and 15 ml/kg per hour of saline was given for hydration and to stabilize hemodynamics. Animals were mechanically ventilated using a custom-built ventilator, with FiO2 0.3, tidal volume 6 ml/kg, and respiratory rate 40 breaths per minute, while body temperature was supported by a circulating warm water pad. Animals were euthanized at the end of the imaging procedures. All experiments were approved by and performed in accordance with the guidelines established by the University of Pennsylvania Institutional Animal Care and Use Committee and the NIH guidelines for the care and use of laboratory animals. Images were obtained during breath holds at EE or EI respiratory phases. Animals were studied in five experimental settings: (1) to explore the impact of the GP saturation, one supine rabbit was scanned with the flip angle for saturating the GP in the right lung incremented from 15° to 120°; (2) to show the effect of subject orientation, one rabbit was studied in prone and supine position at PEEP 0 cm H2O; (3) to test the reproducibility of the measurements, a supine rabbit was imaged three times at EE with a PEEP of 0 cm H2O; identical sets of 4 GP saturation bands were created by incrementally shifting a 50 mm regional saturation slab from posterior to anterior in 1 cm steps; (4) to measure the impact of PEEP and respiratory phase, one rabbit was studied supine at both EI and EE with PEEP 0 cm H2O, and at EE with PEEP 5 and 10 cm H2O, with and without GP saturation; (5) to investigate the effects of mild focal injury, two rabbits received direct endo-bronchial instillation of hydrochloric acid (HCl, 0.75 ml/kg, pH 1.25) through a catheter (OD 1/16") wedged into a bronchus. In this group, images were acquired before and after lung injury at PEEP 0 and 5 cm H2O.
Gas Polarization and Administration.
Enriched xenon gas (87% xenon-129) was polarized by collisional spin exchange with an optically pumped rubidium vapor using a prototype commercial system (XeBox-E10; Xemed, LLC, Durham, NH) that provided gas polarizations of 40-50%. Immediately before MR data acquisition, 1.25-1.5 L of HXe gas was dispensed into a Tedlar bag (Jensen Inert Products, Coral Springs, FL) inside a pressurizable cylinder that was subsequently connected to and controlled by the ventilator. At the beginning of the imaging study, animals were ventilated with 30% oxygen and 70% HXe (6 ml/kg tidal volume). After inhalation of the gas mixture for up to 3 breaths, ventilation was suspended for up to 7 s at either EI or EE for image acquisition.
HXe Data Acquisition.
Imaging was performed on a 1.5-Tesla commercial whole-body scanner (Magnetom Avanto; Siemens Medical Solutions, Malvern, PA, USA) that had been modified by the addition of a broadband amplifier to permit operation at the resonant frequency of 17.6 MHz. The RF coil was a custom xenon-129 transmit/receive birdcage design (Stark Contrast, Erlangen, Germany), positioned to cover the whole chest of the animal. Low-resolution proton MR scout images were obtained with the built-in body coil. The RF excitation flip angle was calibrated during an initial 5 s breath hold, during which 32 xenon spectra were acquired. The TR for the first 16 acquisitions was set to 100 ms, and was extended to 200 ms for the second 16 acquisitions. Exponential decay functions were fitted to the integral of the GP amplitude in the phased, real spectra of both sets, and the T1-corrected flip angle was calculated 31. The ratio between the nominal and the measured flip angle was used to set the reference voltage for all subsequent studies. At 1.5 T, all xenon resonances in the rabbit lung are merged into a single peak separated from the GP resonance by approximately 200 ppm. To image these two frequency bands individually, we implemented a 2D projection acquisition similar to the one previously described in detail 8. Briefly, the pulse sequence is based on a standard RF-spoiled gradient echo sequence with a non-selective 700-µs Gaussian RF excitation pulse centered 200 ppm downfield from the gas resonance that predominantly excites the DP region. However, the RF pulse was sufficiently short to excite the GP resonance as well, albeit with an amplitude 2.5% that of the DP resonance. This scaling relationship was established through a calibration acquisition that measured each k-space line twice: once with a 40° excitation pulse centered at the DP resonance, and once with a 2° excitation flip angle centered at the GP resonance. The GP signal for the 2° GP excitation was approximately twice as large as for the 40° DP excitation. To destroy any DP magnetization taken up prior to the data collection and thereby achieve a steady state condition for the DP signal, the sequence was first preceded by two 2-ms Gaussian RF saturation pulses. Next, a series of 700-µs Gaussian RF excitation pulses was applied for 1.5 s using the same TR as the image acquisition. All preparatory RF pulses were also centered at 200 ppm. The sampling was 65% asymmetric with a bandwidth of 110 Hz per pixel, which, at the main field strength of 1.5 T and a gyromagnetic ratio for xenon-129 of 11.78 MHz/T, yielded a 32-pixel separation between the GP and DP images in the readout direction. Other sequence parameters included: matrix size 28 × 80 (interpolated to 112 × 320); TR/TE 200/2.6 ms; FOV 220-238 mm; flip angle at 200 ppm 30-40°.
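The two-TR flip-angle calibration described above reduces to a few lines of arithmetic. The sketch below is a minimal illustration under an assumed mono-exponential model, S_n = S_0 · [cos θ · exp(−TR/T1)]^n; the function name and the use of a log-linear fit are assumptions, not the authors' code (which fitted exponential decays per reference 31):

import numpy as np

def t1_corrected_flip_angle(gp_tr1, gp_tr2, tr1=0.100, tr2=0.200):
    """Estimate the true flip angle theta (deg) and T1 (s) from two series of
    GP signal integrals acquired with repetition times tr1 and tr2 (s)."""
    n = np.arange(len(gp_tr1))
    # log-linear fits give the per-shot decay rate r = ln(cos(theta)) - TR/T1
    r1 = np.polyfit(n, np.log(gp_tr1), 1)[0]
    r2 = np.polyfit(n, np.log(gp_tr2), 1)[0]
    T1 = (tr2 - tr1) / (r1 - r2)          # the flip-angle term cancels out
    theta = np.degrees(np.arccos(np.exp(r1 + tr1 / T1)))
    return theta, T1

# synthetic self-check: theta = 25 deg, T1 = 20 s
theta0, T1_0 = np.radians(25.0), 20.0
n = np.arange(16)
s1 = (np.cos(theta0) * np.exp(-0.100 / T1_0)) ** n
s2 = (np.cos(theta0) * np.exp(-0.200 / T1_0)) ** n
print(t1_corrected_flip_angle(s1, s2))    # ≈ (25.0, 20.0)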
These flip angle/TR combinations offer a sufficiently low DP signal depolarization with each RF pulse and a sufficiently long delay time between consecutive RF pulses to ensure that enough DP magnetization can accumulate in the heart to provide an adequate signal-to-noise ratio. The 30°/200 ms acquisition is equivalent to the application of a 90° RF pulse that would destroy the entire DP magnetization every TR90°,equiv = 1.5 s (for 40°/200 ms, TR90°,equiv = 0.9 s) 32. Acquisitions with shorter TR but similar TR90°,equiv are feasible, albeit at the price of reduced measurement accuracy. Each study consisted of an acquisition without GP saturation to generate a reference data set, followed by three identical acquisitions following a regional GP saturation at the beginning of the breath hold. Regional GP saturation was performed using a 50 mm saturation slab positioned along the anterior-posterior axis of the animal. The slab position was shifted twice in 10 mm increments between acquisitions. The flip angle of the GP RF saturation pulse was approximately 90°. To investigate the impact of RF saturation pulse flip angle on the DP signal change in the heart, the former was varied between 15° and 120°, in 15° increments, while the saturation slab covered the entire right lung.
Data Analysis.
All image reconstruction, post-processing and data analysis was performed using customized MATLAB (MathWorks, Natick, MA, USA) scripts. The asymmetrically sampled k-space data was filled using a Homodyne algorithm 33 before Fourier transform. A summary diagram of the image analysis is shown in Fig. 1. Within the DP image with the largest anterior GP saturation volume, the heart and the aortic arch were manually segmented. These segmentation masks were then applied to the other three DP images in the set, allowing a selective extraction of the xenon DP signal from these volumes. In one study, the manual segmentation was repeated by three different operators for each image in order to confirm the robustness of the procedure with respect to operator uncertainties. To obtain the ventilated volume, the left and right lungs were manually delineated in the non-saturated GP images, and the mask thus obtained was used to segment the lungs in the GP maps of the measurements acquired with regional GP saturation. The large airways were excluded from the segmentation. All GP-saturated acquisitions were normalized to the unsaturated reference data by applying the GP segmentation mask of the GP-saturated acquisition to the reference measurement. The ratio of the median GP signals within these two masked images was then used to scale the GP-saturated images. To measure the contribution of each GP-saturated region to the total pulmonary gas transport, the difference ΔDP_H of the normalized DP signal inside the heart mask between the four acquisitions of a set was calculated. The functional efficiency of the regional gas transport in a lung volume was determined as the ratio between ΔDP_H and the corresponding difference ΔGP of the total GP signal.
Results
Figure 2 demonstrates the response of the DP magnetization to the application of an RF saturation pulse selective for the right lung (green box in left-hand panel of Fig. 2A). The residual GP signal within the saturation slab is proportional to the cosine of the flip angle of the saturation pulse such that, for instance, an ideal 90° flip angle would destroy the entire GP magnetization inside the slab while leaving the GP magnetization outside the slab untouched.
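The per-slab analysis described in the Data Analysis section above amounts to differencing the heart DP signal across the nested saturation acquisitions. A minimal sketch of that bookkeeping follows (array layout, function name, and example values are illustrative assumptions; inputs are the already-normalized, mask-extracted signals):

import numpy as np

def slab_contributions(dp_heart, gp_lung):
    """dp_heart: heart-mask DP signal for [reference, sat1, sat2, sat3],
    acquired with incrementally larger anterior GP saturation volumes.
    gp_lung: corresponding total GP signal within the segmented lungs."""
    dp = np.asarray(dp_heart, dtype=float)
    gp = np.asarray(gp_lung, dtype=float)
    d_dp = -np.diff(dp)            # ΔDP_H: heart signal drop from each new slab
    d_gp = -np.diff(gp)            # GP signal removed by each new slab
    contribution = d_dp / dp[0]    # fractional share of total gas transport
    efficiency = d_dp / d_gp       # regional gas transport efficiency
    residual = dp[-1] / dp[0]      # assigned to the remaining ventilated lung
    return contribution, efficiency, residual

# illustrative numbers only, not measured data
print(slab_contributions([1.00, 0.88, 0.70, 0.35], [1.00, 0.75, 0.50, 0.25]))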
Any subsequently acquired maps of the DP magnetization distribution will then reflect the pulmonary gas uptake and transport of the manipulated GP magnetization distribution. This effect is shown by increasing the flip angle of the GP saturation of the right lung from 0° to 120°, which changed the intensity of the 2D-projection DP signal in both the parenchyma of that same volume and in the downstream vasculature and left heart (Fig. 2A). No changes in the DP signal within the parenchyma of the left lung were observable. In Fig. 2B, the DP amplitudes for the left and right lungs as well as the heart are plotted as a function of the flip angle of the GP RF saturation pulse. As expected, the DP signal behavior of the right lung approximated a cosine curve with a minimum at 90°. However, the DP signal in the heart continued to decline up to the maximum achievable flip angle of 120°. Figure 3 illustrates how pulmonary xenon gas transport can be assessed by incrementally shifting the position of the GP saturation slab across the lung: upon saturation of their GP magnetization, lung volumes that made large contributions to the pulmonary gas transport reduced the DP signal in the heart more significantly than volumes with small contributions. In supine rabbits, saturation of the posterior GP in particular diminished the heart signal so drastically that subsequent saturation increments were difficult to analyze. Regardless of animal orientation, the GP saturation steps were therefore conducted in ventral-to-dorsal direction in all subsequent studies. The repeatability of the segmentation process was tested preliminarily by manually segmenting the heart and lungs three times without finding any significant differences in segmentation size of the heart (64.7 ± 2.1 pixels) or the numerical results (Fig. 4A). Slightly larger variations were seen when the same sets of GP saturation bands were applied three times in the same animal (Fig. 4B). A more robust repeatability test will be conducted as our measurements are performed in additional animals under a variety of study conditions. To further evaluate the sensitivity of our technique through comparison to known physiological responses 34, we investigated the gas transport behavior of a rabbit in supine versus prone orientation (Fig. 5). In supine position, a strong vertical gradient of ΔDP_H was observed, with the most dependent (dorsal) saturation volume contributing almost 50% of the total transport, approximately five times as much as the most non-dependent volume. This gradient largely disappeared when the animal was turned prone (Fig. 5B), and all four saturation volumes contributed between 20% and 30%. However, the direction of the gas transport efficiency gradient was reversed between supine and prone positioning, with ventral predominance in the prone position (Fig. 5C). Figure 6 depicts the impact of respiratory phase (end-expiratory, EE, versus end-inspiratory, EI) and applied positive end-expiratory pressure (PEEP) on the distribution of the DP signal in the lungs of supine rabbits. Unsurprisingly, higher associated intrapulmonary pressure caused a larger cross section of the lung to appear in the GP and DP 2D projection images. In contrast, the apparent size of the left heart and the normalized DP signal intensities showed an inverse relationship to intrapulmonary pressure. In particular, EI measurements with PEEP resulted in such low DP signals in the heart that the impact of regional GP saturation could not be reliably quantified.
These measurements were therefore excluded from further analysis. The quantitative results for the remaining experiments in Fig. 6 are displayed in Fig. 7. Although no explicit pressure readings were available, the EI PEEP 0 cm H2O measurement slotted in between the EE PEEP 5 cm H2O and EE PEEP 10 cm H2O experiments based on both gas transport contribution and efficiency curves. The absolute gas transport efficiency decreased by a factor of approximately 7 throughout the lung as intrapulmonary pressure rose. At the regional level, the contribution to gas transport and efficiency of the anterior-most saturation segment increased, but both parameters declined in the posterior-most segment. However, interpretation of this latter aspect was confounded by the fact that the absolute location of the GP saturation slabs was fixed while the lung expanded with pressure. As an initial test for the utility of our GP saturation technique, we evaluated the impact of a focal lung injury on the collected functional metrics (Fig. 8). At baseline, the regional variations in pulmonary gas transport and efficiency at PEEP 0 cm H2O and 5 cm H2O exhibited the expected anterior-to-posterior gradient. Approximately 1 hour after HCl instillation, a ventilation defect became apparent in the posterior region of the injured right lung (white arrow in Fig. 8A). The reduced ventilation also manifested itself as a reduction in the contribution to the gas transport (Fig. 8B) and transport efficiency (Fig. 8C) in the posterior GP saturation volume. When a PEEP of 5 cm H2O was applied, the collapsed region of the lung was re-inflated. At the same time, the gas transport contributions and efficiency of the posterior-most lung region increased dramatically, even exceeding their respective baseline values, albeit at the expense of a decreased gas transport efficiency in the more anterior volumes.
Discussion
This study set out to demonstrate the utility of regionally saturating the GP magnetization in selected airspaces of the lung as a means for assessing the contribution of that volume to the total pulmonary gas transport. In particular, we used the decrease in DP signal in the left heart following GP saturation relative to measurements without saturation as a non-invasive, conveniently accessible metric for quantifying regional lung function. In this initial implementation of our technique, we took advantage of the regional saturation feature in the scanner product software to position saturation slabs over the volume of interest and to destroy the GP magnetization with RF pulses centered at the xenon-129 gas resonance frequency prior to the start of data acquisition. Once imprinted, macroscopic GP saturation largely persists for the remainder of the breath hold; there are therefore no practical constraints on the available time for creating more complex patterns. The degree to which the GP magnetization in the selected saturation volume is attenuated is a function of the applied effective flip angle and reaches its maximum for flip angles around 90°. Due to rapid xenon gas exchange between the alveolar airspaces and the lung parenchyma, saturation of the GP magnetization results in an almost instantaneous depolarization of xenon dissolved in the lung parenchyma as well. However, xenon magnetization farther downstream, such as in the major pulmonary veins, the left heart and the arteries, only washes out over several hundreds of milliseconds.
Thus, to ensure that the steady state conditions associated with the created ventilation pattern had been established prior to the actual data acquisition, we first applied additional DP saturation pulses and a series of dummy RF excitation pulses with the same TR as during imaging. Residual GP and DP magnetization will be apparent in the saturation volume for saturation flip angles exceeding 90° (Fig. 2a), but will be 180° out of phase with the magnetization from lung regions unaffected by the GP saturation pulse. These two DP magnetization pools partially cancel each other out when they mix in the vasculature and the heart, further reducing the combined DP signal in these regions, although the total DP signal magnitude in the lung parenchyma is increasing (Fig. 2b). The RF saturation pulses in the product software could not be used to induce saturation flip angles in excess of 120° with our RF coil. Future refinement of our technique will permit the application of 180° inversion pulses within the saturation volume, which would double the sensitivity of the measurement relative to 90° saturation pulses, and which would be particularly useful for investigating smaller saturation volumes. A coarse lung function profile can be calculated by incrementally moving a broad saturation slab across the lung during consecutive measurements (Fig. 3). Due to the resulting crisp saturation slab profile, such an approach should be superior to the theoretically equivalent method of positioning narrow saturation bands over the lung, since the latter could introduce a slab thickness-dependent band profile into the analysis as an additional confounding factor. In a supine rabbit, most of the gas uptake and transport occurs in the posterior regions of the lung. Thus, positioning the saturation slab so as to saturate the GP magnetization in the posterior volume of both lungs results in a large drop of the DP signal in the heart. This effect complicates the segmentation of the heart and impedes the quantification of small relative signal changes. Advancing the saturation slab in an anterior-to-posterior direction therefore usually yields superior results compared to advancing from posterior to anterior. Rotated saturation slabs might prove to be advantageous as a means of minimizing partial volume effects for one-sided lung pathologies. Although there is relatively little overlap between the lung parenchyma and the heart in axial projections in rabbits, the segmentation of the heart and vasculature can be greatly facilitated by saturating the GP magnetization in the anterior part of the lung, thus removing all DP background signal originating in the parenchyma (e.g. bottom rows of Fig. 3A). The segmented heart can then be used to isolate the same region within the images without regional saturation. We showed that the change in xenon DP signal in the heart (ΔDP_H) following regional GP saturation and the functional efficiency within the saturation volume are insensitive to operator-related variations in the manual segmentation process, and that measurement reproducibility in the same animal is high (Fig. 4A,B). In our measurements, the regional lung function in a healthy rabbit was not only impacted by its orientation (supine vs prone), but also by the inflation level of the lung, i.e., when during the respiratory cycle the breath hold was induced (EE vs EI), as well as the amount of PEEP applied.
Flipping a rabbit from supine into a more natural, prone position resulted in an asymmetric redistribution of lung function, as demonstrated in Fig. 5. While in supine position, the contributions of the lung regions to gas uptake exhibited a strong, gravity-dependent gradient. In prone position, on the other hand, gas uptake appeared to be almost homogeneously distributed in the vertical direction. However, this may be the result of larger tissue content in ventral vs. dorsal saturation slabs. We are currently investigating a tissue-volume-independent metric in which gas uptake and transport is normalized by local tissue volume measurements via proton MRI or CT, or by using the relative contribution of the saturation slab to the total DP signal as a proxy. The functional efficiency gradient, on the other hand, followed gravity in both the supine and the prone positions.
PEEP is an important tool for mitigating the risk of atelectasis and improving oxygenation in mechanically ventilated patients. Nevertheless, the effectiveness of PEEP and the optimal ventilator settings for maximizing its benefits while minimizing potentially detrimental side effects remain areas of great interest 35. Fortunately, the impact of various lung inflation levels on pulmonary ventilation and gas transport are easily discernible with our technique (Figs 6 and 7). For one, the higher intrapulmonary pressure associated with increased lung inflation, either in the form of EE versus EI breath holds or as PEEP, compresses the heart and results in a smaller left ventricle size in the images. Although not directly quantifiable with our measurements, it stands to reason that at elevated alveolar pressure the pulmonary vasculature partially collapses and the blood flow rate is reduced 36. The latter also results in a prolonged gas transit time from the alveolar airspaces to the heart; for the selected flip angle and TR, our technique is very sensitive to any such time delays. As a consequence, the gas transport efficiency varies dramatically throughout the respiratory cycle: up to a factor of ~7 (Fig. 7B) between PEEP 0 and PEEP 10 cm H2O for breath holds at EE. Figure 7A also indicates pressure-dependent changes in the spatial distribution of the gas transport contributions. However, this interpretation could be misleading, as we advanced the positions of the saturation slabs in fixed increments and not with respect to their anatomical location. For future studies, it might be advantageous to distribute a fixed number of GP saturation slabs evenly across the entire lung volume. While it can be expected that the application of PEEP always leads to large reductions in gas transport efficiency in healthy lungs, the situation could be altogether different in injured or diseased lungs. Under such circumstances, higher intrapulmonary pressure can drastically increase the lung volume involved in gas exchange, more than compensating for decreased regional functional efficiency. As an initial demonstration of our method's sensitivity to changes in gas transport in an acutely injured lung imaged with and without PEEP, we used a rabbit acid aspiration model. We found that lung function is spatially redistributed following administration of the acid (Fig. 8), most likely due to perfusion changes in response to the insult.
Of particular interest, however, was the observation that the injury-induced ventilation defect in the right posterior lung resulted in greatly reduced gas transport efficiency in the posterior-most slice of the lung. A PEEP of 5 cm H2O re-inflated the collapsed lung and caused the gas transport efficiency of that region to spike, even exceeding its baseline value, but reduced functional efficiency in the presumably less severely injured anterior lung volumes. This finding indicates that there should exist an identifiable, inflation-dependent maximum in gas-transport efficiency, and emphasizes the large potential benefit of using our method to find optimal PEEP settings in acute lung injury. However, it is important to note that the current efficiency measurements were based on 2D acquisitions, and so did not take GP concentration changes due to lung expansion in the third dimension into account. In the future, we will add 3D volumetric scans to our study protocol to correct for differences in ventilated lung volume at different PEEP levels. Knowledge of this parameter will also allow us to investigate whether a change in functional lung efficiency is partially compensated for by an offsetting change in lung volume. In this study, we showed that HXe DP MRI in conjunction with regional GP saturation can be used to gain additional insights into lung function in the form of pulmonary gas transport and its efficiency. The difference between these parameters and existing hyperpolarized-gas techniques is that regional deficiencies in ventilation, gas exchange, or perfusion are all captured simultaneously and integrated into one metric. In contrast with existing DP xenon MRI techniques, reported values do not reflect only the xenon distribution in the parenchyma, but instead the actual xenon transport. While existing methods can localize regional abnormalities, they cannot quantify the impact of any given abnormality on overall lung function or provide much insight into potential compensatory mechanisms by comparatively healthy lung regions. Although it can be expected that the measured xenon gas exchange dynamics will differ from those for oxygen and carbon dioxide, they are susceptible to the same pathological changes in physiology, e.g. abnormal ventilation patterns, surface-to-volume ratio, septal wall thickness, etc. Therefore, regionally abnormal lung function affecting oxygen and carbon dioxide exchange should also be reflected in the xenon gas exchange and transport in a similar manner. In disease, some parts of the lungs may be responsible for most gas exchange function, while others may contribute only minimally or be functionally silent. Information about this condition could improve the management of patients with chronic lung diseases (e.g. in the evaluation for lung transplant or lung reduction surgery), and aid preparations for lung cancer resection. In patients with acutely injured lungs, smaller portions of parenchyma (the "baby lung") perform all gas exchange 37. In this situation, mapping gas uptake and transport with our GP saturation technique could provide a better marker of disease severity than arterial blood gases, which are notoriously inaccurate and heavily affected by regional variability in lung performance.
Although our method in its current form already appears very sensitive to alterations in lung function due to positioning and acute lung injury, more extensive studies in both animals and human subjects will be required to optimize its sensitivity and evaluate its full potential.
Data Availability
The datasets generated and/or analysed during the current study are available from the corresponding author on reasonable request.
// source file: pkg/controller/vernemq/deployment.go
package vernemq

import (
	"fmt"

	vernemqv1alpha1 "github.com/vernemq/vmq-operator/pkg/apis/vernemq/v1alpha1"
	appsv1 "k8s.io/api/apps/v1"
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// makeDeployment builds the bundler Deployment, owned by the VerneMQ instance
// so that it is garbage-collected along with it.
func makeDeployment(instance *vernemqv1alpha1.VerneMQ) *appsv1.Deployment {
	boolTrue := true
	spec := makeDeploymentSpec(instance)
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{
			Name:        deploymentName(instance.Name),
			Namespace:   instance.Namespace,
			Annotations: instance.ObjectMeta.Annotations,
			OwnerReferences: []metav1.OwnerReference{
				{
					APIVersion:         instance.APIVersion,
					BlockOwnerDeletion: &boolTrue,
					Controller:         &boolTrue,
					Kind:               instance.Kind,
					Name:               instance.Name,
					UID:                instance.UID,
				},
			},
		},
		Spec: *spec,
	}
}

// makeDeploymentSpec resolves the bundler image (defaults, then tag, then SHA,
// then an explicit image override, in increasing order of precedence) and
// assembles the pod template.
func makeDeploymentSpec(instance *vernemqv1alpha1.VerneMQ) *appsv1.DeploymentSpec {
	if instance.Spec.BundlerBaseImage == "" {
		instance.Spec.BundlerBaseImage = defaultBundlerBaseImage
	}
	if instance.Spec.BundlerVersion == "" {
		instance.Spec.BundlerVersion = defaultBundlerVersion
	}
	bundlerImage := fmt.Sprintf("%s:%s", instance.Spec.BundlerBaseImage, instance.Spec.BundlerVersion)
	if instance.Spec.BundlerTag != "" {
		bundlerImage = fmt.Sprintf("%s:%s", instance.Spec.BundlerBaseImage, instance.Spec.BundlerTag)
	}
	if instance.Spec.BundlerSHA != "" {
		bundlerImage = fmt.Sprintf("%s@sha256:%s", instance.Spec.BundlerBaseImage, instance.Spec.BundlerSHA)
	}
	if instance.Spec.BundlerImage != nil && *instance.Spec.BundlerImage != "" {
		bundlerImage = *instance.Spec.BundlerImage
	}

	podLabels := map[string]string{"app": "vmq-bundler"}
	podAnnotations := map[string]string{}

	return &appsv1.DeploymentSpec{
		Selector: &metav1.LabelSelector{
			MatchLabels: podLabels,
		},
		Template: v1.PodTemplateSpec{
			ObjectMeta: metav1.ObjectMeta{
				Labels:      podLabels,
				Annotations: podAnnotations,
			},
			Spec: v1.PodSpec{
				Containers: []v1.Container{
					{
						Name:  "vmq-bundler",
						Image: bundlerImage,
						Ports: []v1.ContainerPort{
							{
								Name:          "http",
								ContainerPort: 80,
								Protocol:      v1.ProtocolTCP,
							},
						},
						Env: []v1.EnvVar{
							{
								Name:  "BUNDLER_CONFIG",
								Value: makeBundlerConfig(instance),
							},
							{
								Name:  "HTTP_PORT",
								Value: "80",
							},
						},
					},
				},
			},
		},
	}
}

// makeBundlerConfig renders a rebar.config that pulls in the configured
// external plugins plus the vmq_k8s plugin.
func makeBundlerConfig(instance *vernemqv1alpha1.VerneMQ) string {
	config := `
{plugins, [
    {rebar3_cargo, {git, "https://github.com/benoitc/rebar3_cargo", {ref, "379115f"}}}
]}.
{deps, [
`
	for _, p := range instance.Spec.ExternalPlugins {
		config = config + fmt.Sprintf("{%s, {git, \"%s\", {%s, \"%s\"}}},\n", p.ApplicationName, p.RepoURL, p.VersionType, p.Version)
	}
	config = config + `
    {vmq_k8s, {git, "https://github.com/vernemq/vmq-operator", {branch, "master"}}}
]}.
`
	return config
}
//
//  ViewController.h
//  OPENGLES_CH8_3
//
//  Created by Gguomingyue on 2017/10/17.
//  Copyright © 2017 Gguomingyue. All rights reserved.
//

#import <GLKit/GLKit.h>

@interface ViewController : GLKViewController

- (IBAction)takeSelectedEmitterFrom:(UISegmentedControl *)sender;
- (IBAction)takeSelectedTextureFrom:(UISegmentedControl *)sender;

@end
Technologies in diabetes--the fifth ATTD yearbook. The fifth Advanced Technologies and Treatments for Diabetes (ATTD) yearbook comes to you in its now already traditional form. Published data reviewed by the leading experts this year almost suggest that the philosophical stratagem of closed-loop insulin delivery may indeed be close to a product introduced into clinical practice. Despite the proven clinical benefit of continuous subcutaneous insulin infusion and continuous glucose monitoring, psychological and behavioral barriers still limit their benefits. When the two are linked together, and at least partially controlled by an algorithm, their users suddenly become relieved of a fragment of their daily responsibilities, and it almost looks like the initial benefit comes at no psychological or behavioral cost. The clinical reader may decide if there is enough scientific ground for some cautious optimism, and the industry may find reasons for the crucial decision of bringing a closed-loop insulin delivery system to the market. All of this is revealed in the broadest frame of 14 chapters covering all that is new and advanced in the ever-expanding field of diabetes. The major advantage of ATTD remains its completely open character without any formal structure that would set limits or rules. Free flow of creative ideas and friendly exchange of different opinions provide the background for cooperative creativity between medical care professionals, scientists, engineers, investors, and managers. The ATTD yearbook touches base with handpicked data and sets the ground for plans founded on the best available knowledge. The ATTD web page and the publisher, Mary Ann Liebert, Inc., generously provide the electronic text of the ATTD yearbook free to all. The 11,200 hits and downloads in the last year speak for themselves. The global reach of the ATTD yearbook fulfills one of the fundamental missions of the ATTD: the free exchange and distribution of knowledge and clinical experience to each and every member of our diabetes community. Finally, it is the dream that creates the future; the ATTD yearbook, through its contributors and associate editors, fosters this ongoing dream and provides some fertile soil for growing it into a reality for our patients.
/**
 * Abstract class that defines interface to use for converting
 * "raw" {@link StorableKey} instances into higher level key
 * abstractions, constructing such keys, and calculating
 * routing hash codes.
 */
public abstract class EntryKeyConverter<K extends EntryKey>
{
    /*
    /**********************************************************************
    /* Factory/conversion methods
    /**********************************************************************
     */

    /**
     * Method called to reconstruct an {@link EntryKey} from raw bytes.
     */
    public abstract K construct(byte[] rawKey);

    /**
     * Method called to reconstruct an {@link EntryKey} from raw bytes.
     */
    public abstract K construct(byte[] rawKey, int offset, int length);

    /**
     * Method called to construct a "refined" key out of raw
     * {@link StorableKey}
     */
    public abstract K rawToEntryKey(StorableKey key);

    /**
     * Optional method for converting external textual key representation
     * into internal one.
     * Useful when exposing keys to AJAX interfaces, or debugging.
     *
     * @since 0.8.7
     */
    public abstract K stringToKey(String external);

    /**
     * Optional method for converting key into external String representation
     * (one that can be converted back using {@link #stringToKey}, losslessly).
     * Useful when exposing keys to AJAX interfaces, or debugging.
     *
     * @since 0.8.7
     */
    public abstract String keyToString(K key);

    /**
     * Optional method for converting a raw key into external String
     * representation. Useful when exposing keys to AJAX interfaces,
     * or debugging.
     *
     * @since 0.8.7
     */
    public abstract String rawToString(StorableKey key);

    /*
    /**********************************************************************
    /* Hash code calculation
    /**********************************************************************
     */

    /**
     * Method called to figure out raw hash code to use for routing request
     * regarding given content key.
     */
    public abstract int routingHashFor(K key);

    public abstract int contentHashFor(ByteContainer bytes);

    /**
     * Method that will create a <b>new</b> hasher instance for calculating
     * hash values for content that can not be handled as a single block.
     */
    public abstract IncrementalHasher32 createStreamingContentHasher();

    /*
    /**********************************************************************
    /* Path encoding/decoding
    /**********************************************************************
     */

    /**
     * Method for appending key information into path, using given path builder.
     */
    public abstract <B extends RequestPathBuilder<B>> B appendToPath(B pathBuilder, K key);

    /**
     * Method for extracting key information from the path, using given path builder
     * that contains relevant remainder of path (i.e. servlet and operation parts
     * have been handled).
     */
    public abstract <P extends DecodableRequestPath> K extractFromPath(P pathBuilder);

    /*
    /**********************************************************************
    /* Helper methods
    /**********************************************************************
     */

    /**
     * Helper method that will 'clean up' raw hash, so that it
     * is always a non-zero positive value.
     */
    protected int _truncateHash(int hash)
    {
        if (hash > 0) {
            return hash;
        }
        if (hash == 0) { // need to mask 0
            return 1;
        }
        hash = hash & 0x7FFFFFFF;
        // Fixed: Integer.MIN_VALUE masks to 0, which must be remapped as well
        return (hash == 0) ? 1 : hash;
    }
}
Listen to the bonkers places people have used their mobiles. We learn that the Dancing On Ice champion, Beth Tweddle, has been on a "long journey", and that One Direction are in trouble for tweeting fans to show them their tattoos to be in with a chance of starring in the next 1D movie. Apparently they quickly deleted the tweet for fear of being seen to encourage underage inkings! However, Harry Styles has also had a new tattoo on his stomach - listen to Neil's thoughts on it!
Evolutionary Developmental Biology of Nonflowering Land Plants Current phylogenetic studies indicate that the closest relatives to land plants are among the charophycean green algae, which have a gametophyte-dominant life cycle and lack a multicellular sporophyte generation. Land plants evolved a multicellular sporophyte generation and organs with complex tissue and cell differentiation. A number of genes in nonflowering plants homologous to the developmental genes in flowering plants have been isolated, and their roles during evolution have been hypothesized from expression patterns. Genetic manipulation is necessary to show the gene function implicated by expression patterns. In addition to studies in established model organisms, such as Physcomitrella patens, the development of genetic manipulation techniques in more taxa should be addressed. Genomics has accelerated the identification of genes and enabled the determination of the presence/absence of homologous genes, which would be the basis for the detection of gene losses and acquisitions. The process of phenotypic evolution should ultimately be explained by gene acquisitions and losses and changes in gene functions.
Prevalence of Lipoatrophy and Mitochondrial DNA Content of Blood and Subcutaneous Fat in HIV-1-Infected Patients Randomly Allocated to Zidovudine- or Stavudine-Based Therapy Introduction Mitochondrial toxicity resulting from mitochondrial DNA (mtDNA) depletion is suggested to be involved in the pathogenesis of lipodystrophy. Methods We cross-sectionally assessed lipodystrophy both clinically and radiographically in patients who, 4 years before, had been enrolled in a randomized comparative trial of stavudine- or zidovudine-based therapy. mtDNA content was measured in peripheral blood mononuclear cells (PBMCs) and subcutaneous adipose tissue from the thigh and back. Results Twenty-eight of the 45 patients enrolled in the original trial were included. Despite comparable exposure to stavudine or zidovudine (51 and 50 months, respectively), lipoatrophy prevalence by intent-to-treat analysis was significantly greater in stavudine recipients (82 vs 9%, P=0.0001). Likewise, those allocated to stavudine had significantly less peripheral fat. In an analysis restricted to patients who had remained on randomly allocated nucleoside reverse transcriptase inhibitors (NRTIs), mtDNA in PBMCs decreased after the start of treatment in both groups (P<0.0001) (-73% for stavudine and -67% for zidovudine, P=0.11), resulting in significantly lower levels in patients with lipoatrophy (P=0.007). The mtDNA content in subcutaneous adipose tissue from the thigh, but not from the back, was significantly lower in patients allocated to stavudine compared to zidovudine (P=0.01). mtDNA in adipose tissue from either location did not differ significantly between those with or without lipoatrophy. Discussion This study objectively confirms that regimens containing stavudine are associated with a greater risk of lipoatrophy than those containing zidovudine. mtDNA in PBMCs markedly declined with both treatments and was lowest in patients with lipoatrophy. The lack of difference in mtDNA in adipose tissue from patients with as opposed to without lipoatrophy may have been masked by a relative preponderance of stromal and vascular tissue in the subcutaneous tissue samples from these patients, combined with compensatory mitochondrial proliferation in remaining adipocytes. However, our findings may also suggest that the different risk of lipoatrophy observed between NRTIs cannot solely be explained by differences in mtDNA depletion directly at the level of peripheral adipose tissue.
<reponame>caxenie/embedded-ser2eth-converter
/*
 * adc.c
 *
 *  Created on: Jan 16, 2011
 *      Author: <NAME>
 *  File with the configuration and data-acquisition functions specific to the ADC
 */
#include <avr/io.h>
#include <util/delay.h>

// 0x60 = 01100000
#define ADC_VREF_TYPE 0x60

void init_adc(void);
unsigned char read_adc(unsigned char);

void init_adc(void)
{
	// ADC initialization
	// ADC Clock frequency: 1000.000 kHz
	// ADC Voltage Reference: AVCC pin
	// Only the 8 most significant bits of
	// the AD conversion result are used
	ADMUX=ADC_VREF_TYPE & 0xff;// 01100000 & 11111111 = 01100000
	// 0 1 1 ... AVCC with external capacitor at AREF pin (bits 7 and 6) and ADC Left Adjust Result (bit 5)
	ADCSRA=0x84;// 10000100 --> ADC Enable (bit 7) and Division Factor 16 (bits 2 1 0 combination)
}

// Read the 8 most significant bits (ADCH register)
// of the AD conversion result
unsigned char read_adc(unsigned char adc_input)
{
	ADMUX = adc_input | (ADC_VREF_TYPE & 0xff);
	/* Delay needed for the stabilization of the ADC input voltage */
	_delay_us(10.0);
	// Start the AD conversion
	ADCSRA|=0x40;// 01000000 --> ADSC bit set = AD start conversion
	// Wait for the AD conversion to complete
	while ((ADCSRA & 0x10)==0);// Test the ADIF bit (AD interrupt flag) for AD conversion finish
	ADCSRA|=0x10;// writing 1 to ADIF clears the flag for the next conversion
	return ADCH;// return the conversion result ADCH
}
<filename>spectate/models.py # SEE END OF FILE FOR LICENSE import inspect import itertools from .utils import Sentinel from .base import Model, Control __all__ = ["Structure", "List", "Dict", "Set", "Object", "Undefined"] Undefined = Sentinel("Undefined") class Structure(Model): def _notify_model_views(self, events): for e in events: if "new" in e: new = e["new"] if isinstance(new, Model): self._attach_child_model(new) if "old" in e: old = e["old"] if isinstance(old, Model): self._remove_child_model(old) super()._notify_model_views(events) class List(Structure, list): """A :mod:`spectate.mvc` enabled ``list``.""" _control_setitem = Control( "__setitem__", before="_control_before_setitem", after="_control_after_setitem" ) _control_delitem = Control( "__delitem__", before="_control_before_delitem", after="_control_after_delitem" ) _control_insert = Control( "insert", before="_control_before_insert", after="_control_after_insert" ) _control_append = Control("append", after="_control_after_append") _control_extend = Control( "__init__, extend", before="_control_before_extend", after="_control_after_extend", ) _control_pop = Control( "pop", before="_control_before_pop", after="_control_after_delitem" ) _control_clear = Control( "clear", before="_control_before_clear", after="_control_after_clear" ) _control_remove = Control( "remove", before="_control_before_remove", after="_control_after_delitem" ) _control_rearrangement = Control( "sort, reverse", before="_control_before_rearrangement", after="_control_after_rearrangement", ) def _control_before_setitem(self, call, notify): index = call["args"][0] try: old = self[index] except KeyError: old = Undefined return index, old def _control_after_setitem(self, answer, notify): index, old = answer["before"] new = self[index] if new is not old: notify(index=index, old=old, new=new) def _control_before_delitem(self, call, notify): index = call["args"][0] return index, self[index:] def _control_after_delitem(self, answer, notify): index, old = answer["before"] for i, x in enumerate(old): try: new = self[index + i] except IndexError: new = Undefined notify(index=(i + index), old=x, new=new) def _control_before_insert(self, call, notify): index = call["args"][0] return index, self[index:] def _control_after_insert(self, answer, notify): index, old = answer["before"] for i in range(index, len(self)): try: o = old[i] except IndexError: o = Undefined notify(index=i, old=o, new=self[i]) def _control_after_append(self, answer, notify): notify(index=len(self) - 1, old=Undefined, new=self[-1]) def _control_before_extend(self, call, notify): return len(self) def _control_after_extend(self, answer, notify): for i in range(answer["before"], len(self)): notify(index=i, old=Undefined, new=self[i]) def _control_before_pop(self, call, notify): if not call["args"]: index = len(self) - 1 else: index = call["args"][0] return index, self[index:] def _control_before_clear(self, call, notify): return self.copy() def _control_after_clear(self, answer, notify): for i, v in enumerate(answer["before"]): notify(index=i, old=v, new=Undefined) def _control_before_remove(self, call, notify): index = self.index(call["args"][0]) return index, self[index:] def _control_before_rearrangement(self, call, notify): return self.copy() def _control_after_rearrangement(self, answer, notify): old = answer["before"] for i, v in enumerate(old): if v != self[i]: notify(index=i, old=v, new=self[i]) class Dict(Structure, dict): """A :mod:`spectate.mvc` enabled ``dict``.""" _control_setitem = Control( 
"__setitem__, setdefault", before="_control_before_setitem", after="_control_after_setitem", ) _control_delitem = Control( "__delitem__, pop", before="_control_before_delitem", after="_control_after_delitem", ) _control_update = Control( "__init__, update", before="_control_before_update", after="_control_after_update", ) _control_clear = Control( "clear", before="_control_before_clear", after="_control_after_clear" ) def _control_before_setitem(self, call, notify): key = call["args"][0] old = self.get(key, Undefined) return key, old def _control_after_setitem(self, answer, notify): key, old = answer["before"] new = self[key] if new != old: notify(key=key, old=old, new=new) def _control_before_delitem(self, call, notify): key = call["args"][0] try: return key, self[key] except KeyError: # the base method will error on its own pass def _control_after_delitem(self, answer, notify): key, old = answer["before"] notify(key=key, old=old, new=Undefined) def _control_before_update(self, call, notify): if len(call["args"]): args = call["args"][0] if inspect.isgenerator(args): # copy generator so it doesn't get exhausted args = itertools.tee(args)[1] new = dict(args) new.update(call["kwargs"]) else: new = call["kwargs"] old = {k: self.get(k, Undefined) for k in new} return old def _control_after_update(self, answer, notify): for k, v in answer["before"].items(): if self[k] != v: notify(key=k, old=v, new=self[k]) def _control_before_clear(self, call, notify): return self.copy() def _control_after_clear(self, answer, notify): for k, v in answer["before"].items(): notify(key=k, old=v, new=Undefined) class Set(Structure, set): """A :mod:`spectate.mvc` enabled ``set``.""" _control_update = Control( [ "__init__", "clear", "update", "difference_update", "intersection_update", "add", "pop", "remove", "symmetric_difference_update", "discard", ], before="_control_before_update", after="_control_after_update", ) def _control_before_update(self, call, notify): return self.copy() def _control_after_update(self, answer, notify): new = self.difference(answer["before"]) old = answer["before"].difference(self) if new or old: notify(new=new, old=old) class Object(Structure): """A :mod:`spectat.mvc` enabled ``object``.""" _control_attr_change = Control( "__setattr__, __delattr__", before="_control_before_attr_change", after="_control_after_attr_change", ) def __init__(self, *args, **kwargs): for k, v in dict(*args, **kwargs).items(): setattr(self, k, v) def _control_before_attr_change(self, call, notify): return call["args"][0], getattr(self, call["args"][0], Undefined) def _control_after_attr_change(self, answer, notify): attr, old = answer["before"] new = getattr(self, attr, Undefined) if new != old: notify(attr=attr, old=old, new=new)
Overview on the Treatment Technology of Municipal Solid Wastes in China This paper analyzes the three dominant approaches to municipal solid waste (MSW) treatment in China. We compare the advantages and disadvantages of these three approaches and analyze the present situation of MSW treatment, and the problems it faces, both in China and abroad. Finally, we consider the future development of MSW treatment: each single method has many restrictions, so comprehensive treatment is the key to achieving the goal of turning MSW into a resource.
An island whisky distillery has announced plans to install its own on-site malting operation as part of a multimillion-pound infrastructure investment - safeguarding 80 jobs. A three-year, £10.5 million refurbishment programme is under way at the home of Bunnahabhain single malt whisky on Islay. The first spirit other than whisky to come out of Glenfarclas, one of Scotland’s oldest distilleries, has gone on sale at Harrods for £1,300 a bottle. Accountants have raised a dram to the chancellor as they predict a boost in new distillery development next year. A north-east distillery has raised a glass in toast to a double success after winning two accolades in the region’s annual tourism Oscars. Plans to build the UK’s most northerly whisky distillery are progressing after an investment drive was launched to raise around £4.5 million for the project based on the island of Unst. Whisky firm William Grant and Sons is working on £30-million-plus plans to expand Glenfiddich Distillery. Whisky giant Whyte and Mackay is to cut 21 jobs - a fifth of the workforce - at its Invergordon distillery as it implements a £15 million modernisation programme. Visitors to the Highland’s oldest working distillery have increased by more than a third in the last year as Scottish whisky tourism takes off. The community-owned Ross-shire distillery, GlenWyvis, has launched a new gin and unveiled fresh branding for its products. The new boss of Tomatin Distillery, near Inverness, is toasting sales success in the USA and Germany, as well as the UK, after achieving double digit annual growth. A north-east distillery is to be used to inspire businesses to consider developing within food tourism across the region. The parent company of whisky specialist Gordon & MacPhail has announced plans for a new multi-million-pound distillery near Grantown which it expects to become a “significant local employer” and tourist attraction. Ambitious £10million plans to build the first distillery on the Hebridean island of South Uist in 174 years – just a stone’s throw from the setting of Whisky Galore – have been unveiled. As whisky producers go, it certainly does its bit per square foot. She was a trailblazer in an industry which has traditionally been dominated by men. Construction of a new pipeline bringing a continual, year-round gas supply to a Speyside distillery has been completed a month ahead of schedule. Glenmorangie has marked 175 years of whisky creation with the announcement of plans for a new still house in the Highlands. A couple who started off distilling gin as a hobby from their garden shed have grown it into a successful business which is now employing two members of staff. A family-run craft distillery in Aberdeenshire has just celebrated the launch of its second product. A whisky distillery worker has been banned from driving for three years after getting behind the wheel while more than five times the legal alcohol limit. Pirates are usually associated with the Caribbean but one Orcadian seadog will have his name branded on the newest alcoholic offering from the islands’ first rum distillery. The family-owned Moray firm behind the world’s best-selling single malt whisky saw turnover and profits rocket after core brands did well during 2016. A Moray distillery’s flagship product has picked up another international award. Moray Council has approved plans to rejuvenate a rural community by creating a distillery and heritage centre, which celebrates its role in the birth of the whisky industry. 
A start-up craft distillery on Royal Deeside is claiming at least one and possibly two innovative firsts after producing its first bottles of a spirit once banned because of its “dangerous” reputation. The community group behind plans to build a new distillery in Moray have marked a milestone in the development by releasing images of a whisky-themed play area. Whisky bosses have raised their glasses to councillors' approval of an extension of car parking at Talisker's distillery at Carbost on Skye.
How to Diagnose Early 5-Azacytidine-Induced Pneumonitis: A Case Report Interstitial pneumonitis is a classical complication of many drugs. Pulmonary toxicity due to 5-azacytidine, a deoxyribonucleic acid methyltransferase inhibitor and cytotoxic drug, has rarely been reported. We report a 67-year-old female myelodysplastic syndrome patient treated with 5-azacytidine at the conventional dosage of 75 mg/m2 for 7 days. One week after starting treatment she developed moderate fever along with dry cough, and subsequently her temperature rose to 39.5 °C. She was placed on broad-spectrum antibiotics based on the protocol for febrile neutropenia, including ciprofloxacin 750 mg twice daily, ceftazidime 1 g three times daily (tid), and sulfamethoxazole/trimethoprim 400 mg/80 mg tid. High-resolution computed tomography of the chest disclosed diffuse bilateral opacities with ground-glass shadowing and bilateral pleural effusion. Mediastinal and hilar lymph nodes were moderately enlarged. Polymerase chain reaction (PCR) tests for Mycobacterium tuberculosis, Pneumocystis jiroveci, and cytomegalovirus were negative. Cultures, including viral and fungal cultures, were all negative. A diagnosis of drug-induced pneumonitis was considered and, given the bronchoalveolar lavage findings negative for infection, corticosteroid therapy was given at a dose of 1 mg/kg body weight. Within 4 weeks, the patient became afebrile and was discharged from hospital. Development of symptoms in relation to drug administration, unexplained fever, a negative workup for infection, and a marked response to corticosteroid therapy were found in our case. An explanation could be a delayed type of hypersensitivity (type IV) with activation of CD8+ T cells, which could possibly explain most of the symptoms. We have developed a decision algorithm to support the timely diagnosis of 5-azacytidine-induced pneumonitis, with the aim of limiting antibiotic overuse and setting up emergency treatment. Introduction Pneumonitis, often called interstitial lung disease or ILD, is a possible manifestation of many antineoplastic and other drugs, with several ILD subtypes being described in association with drugs. Pulmonary toxicity from 5-azacytidine, a deoxyribonucleic acid (DNA) methyltransferase inhibitor which also exerts cytotoxic effects, has rarely been reported, although the drug has been used since 1982. 5-Azacytidine acts as a hypomethylating agent of the γ-globin suppressor gene to induce fetal hemoglobin in thalassemia and, since 2000, to treat high-risk myelodysplastic syndrome (MDS) and acute myelogenous leukemia (AML) with low blast counts. Here, we report a case of 5-azacytidine-associated pneumonitis, review the literature, and develop a diagnostic algorithm for this rare condition to avoid delay in medical care and misuse of antibiotics. Based on the above data, high-risk MDS was considered. The patient underwent appropriate tests concerning eligibility for allogeneic stem cell transplantation. She received the first cycle of 5-azacytidine at the conventional dosage of 75 mg/m2 for 7 days from September 28, 2015. One week after starting 5-azacytidine, she developed moderate fever along with dry cough and, subsequently, her temperature rose to 39.5°C. She was hospitalized on October 11, 2015. Vital signs and pulse oximetry were normal.
She was placed on broad-spectrum antibiotics based on the protocol for febrile neutropenia, including ciprofloxacin 750 mg twice daily, ceftazidime 1 g three times daily (tid), and sulfamethoxazole/trimethoprim 400 mg/80 mg tid. Fever did not abate. All routine bacteriological investigations were negative. Procalcitonin levels were within the normal range. The chest and sinus radiographs were normal, as were precipitins against Aspergillus and titers against cytomegalovirus (CMV) and Epstein-Barr virus (EBV). CMV antigenemia was negative. An interferon-γ release assay was negative. Marrow re-aspiration revealed a blast count of 22%, suggesting transformation towards acute myeloid leukemia. During her second week in hospital, on October 22, 2015, the patient complained of dyspnea. Blood gas analysis showed a PaO2 of 59 mmHg and a PaCO2 of 29 mmHg. Pulse oxygen saturation was 91% (room air). High-resolution computed tomography (HRCT) of the chest disclosed diffuse bilateral opacities with ground-glass shadowing and bilateral pleural effusion (Fig. 1). Mediastinal and hilar lymph nodes were moderately enlarged. The patient was transferred to the intensive care unit on October 23 for bronchoalveolar lavage (BAL), which showed 170 red blood cells/mm3 and 10 white blood cells/mm3. Polymerase chain reaction (PCR) tests for Mycobacterium tuberculosis, Pneumocystis jiroveci, and CMV were negative. An immunofluorescence test for Pneumocystis was also negative. Cultures, including viral and fungal cultures, were all negative. The patient was maintained on antibiotics. A diagnosis of drug-induced pneumonitis was considered and, given the BAL findings negative for infection, corticosteroid therapy was given at a dose of 1 mg/kg body weight on October 28. Within 4 days, a significant improvement in clinical status and imaging was noted. A repeat chest computed tomography (CT) scan at 1 week also showed significant improvement. Temperature was normal and C-reactive protein returned to normal within 1 week. Following 2 days of quick steroid tapering, the patient again developed fever. Left upper chest pain corresponding to a lobulated pleural effusion was noted, and 1200 mL of serosanguinous fluid was removed via chest tube. The pleural fluid was a predominantly neutrophilic exudate containing 4 g/dL of protein. Corticosteroids were maintained and antibiotics were discontinued. The patient remained afebrile and was discharged from hospital on November 9. She eventually received a haploidentical bone marrow transplant on December 23, 2015. The diagnosis of drug-induced pneumonitis rests on a history of drug exposure, clinical and imaging findings, bronchoalveolar lavage, exclusion of other lung conditions, improvement following drug discontinuation, and recurrence of symptoms upon rechallenge with the drug. In the present case, we were reluctant to readminister the drug, as the risk of doing so is poorly known. The Naranjo probability score in this case was 6, consistent with a probable adverse reaction. In our case, despite steroid use, symptoms relapsed and were characterized by serosanguinous pleural effusion. Serosanguinous pleural exudates with polymorphonuclear leukocyte predominance without bacteriological evidence of infection may be a manifestation of pleurisy such as in lupus erythematosus, which might be induced by the drug in question. Mechanisms for drug-induced ILD are direct cytotoxicity, hypersensitivity, oxidative stress, release of cytokines and thus pyrogens, and lastly impaired repair by type II pneumocytes.
The chronology of events, the unexplained fever, and the response of the clinical and radiological signs to steroids are consistent with a hypersensitivity pneumonitis. 5-Azacytidine is a cytosine analog and a potent inhibitor of DNA methyltransferase, with a hypomethylating effect in vivo and in vitro. Unlike gemcitabine, although cytotoxic at high doses, at low doses it is capable of inducing differentiation and hypomethylation. Hence, profound myelosuppression or direct lung injury like capillary leak syndrome is not encountered during 5-azacytidine toxicity. The role of oxidative stress is still unclear, although there are a few reports concerning induction of necrosis in vitro by 5-azacytidine. Oxidative stress could contribute to the T-cell response by inhibiting ERK pathway signaling in T cells. Recently we observed drug-associated ILD in two patients treated with an experimental inhibitor of DNA methyltransferase, suggesting a common class effect. Unlike with oxaliplatin, anaphylactic reactions are extremely rare with 5-azacytidine. Few patients develop symptoms during the administration of chemotherapy. Although an elevated IgE level was reported in one case by Nair et al., the evidence is not sufficient to conclude a type I reaction. Most patients develop symptoms within a week to a month after administration of 5-azacytidine. Although histopathological evidence is rarely obtainable in immunocompromised patients with hematological malignancy, Sekhri et al. presented a bronchocentric granuloma in their report. Hence, another plausible explanation could be a delayed type of hypersensitivity (type IV) with activation of CD8+ T cells, which could explain most of the symptoms. This could possibly occur during a relative immune reconstitution phase in an immunocompromised patient. The pulmonary fibrosis may be due to DNA hypomethylation causing direct upregulation of type I collagen synthesis. Sanders et al. suggested that DNA methylation is important in idiopathic pulmonary fibrosis (IPF), as an altered DNA methylation profile was demonstrated in their experiment. Moreover, there are reports suggesting that epigenetic priming by 5-azacytidine confers transdifferentiating properties on various cells. However, it is difficult to establish a relationship at present. Our diagnostic algorithm is based on that for drug-induced interstitial lung disease (DILD), and is not specific to 5-azacytidine (Fig. 2). Any febrile condition in patients with worsening pulmonary symptoms despite broad-spectrum antibiotics should arouse suspicion of DILD. HRCT and BAL are crucial, as 5-azacytidine-induced pneumonitis remains a diagnosis of exclusion, like many other DILDs. Some nonspecific immunological tests could be helpful, like levels of p-ANCA (antineutrophil cytoplasmic antibody) and ANA (antinuclear antibody). Prompt consultation with a pulmonary care unit is of utmost utility. Conclusions A high degree of vigilance is advised in order to entertain the diagnosis in a timely manner, since the condition can be fatal. We now utilize a decision algorithm for the timely diagnosis of 5-azacytidine-induced ILD, to limit antibiotic overuse and to set up emergency treatment. Compliance with Ethical Standards Conflict of interest S.C. Misra, L. Gabriel, E. Nacoulma, G. Dine, and V. Guarino declare that they have no conflict of interest. Funding No financial support was received for the preparation of this manuscript.
Informed consent Written informed consent was obtained from the patient for publication of this case report and any accompanying images. A copy of the written consent may be requested for review from the corresponding author. Open Access This article is distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/), which permits any noncommercial use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
The present invention relates to a data analysis system. Measurement instruments are used to execute various measurement tasks in order to measure any kind of physical parameter. As a result of a measurement, measurement data is output by the measurement instrument. Such measurement data may include values of physical parameters such as concentrations of components of a sample, intensity values of a fluorescence measurement, etc. This information can be displayed to a user via a graphical user interface for evaluation of the data. An example of such a measurement instrument is a coupled liquid chromatography and mass spectroscopy device (for instance the 1200 Series LC/MSD of Agilent Technologies). DE 10 2007 000 627 A1 discloses a device which has a processing unit, e.g. a CPU, for processing measured data from liquid chromatography measurements and mass spectrometer measurements such that the processed data are represented in two dimensions. Parameters characterizing the measurements, such as retention time and the mass spectrometer spectrum, are represented in the two dimensions, with the latter parameter correlated with the former parameter. The processing unit is arranged such that data of an original sample, i.e. a fluid sample, and data of fragments of the sample are represented in two dimensions. Niels-Peter Vest Nielsen, Jens Michael Carstensen, Jon Smedsgaard, "Aligning of single and multiple wavelength chromatographic profiles for chemometric data analysis using correlation optimized warping", Journal of Chromatography A, 805 (1998) 17-35, discloses that the use of chemometric data processing is becoming an important part of modern chromatography. Most chemometric analyses are performed on reduced data sets using areas of selected peaks detected in the chromatograms, which means a loss of data and introduces the problem of extracting peak data from the chromatographic profiles. These disadvantages shall be overcome by using the entire chromatographic data matrix in chemometric analyses, but it is then necessary to align the chromatograms, as small unavoidable differences in experimental conditions cause minor changes and drift. The method uses the entire chromatographic data matrices and does not require any preprocessing, e.g. peak detection. It relies on piecewise linear correlation optimized warping (COW) using two input parameters which can be estimated from the observed peak width. COW is demonstrated on constructed single-trace chromatograms and on single and multiple wavelength chromatograms obtained from HPLC diode array detection analyses of fungal extracts. WO 2005/106920 discloses a method of mass spectrometry which comprises determining a first physico-chemical property and a second physico-chemical property of components, molecules or analytes in a first sample, wherein said first physico-chemical property comprises the mass or mass to charge ratio and said second physico-chemical property comprises the elution time, hydrophobicity, hydrophilicity, migration time, or chromatographic retention time. A first physico-chemical property and a second physico-chemical property of components, molecules or analytes in a second sample is determined, wherein said first physico-chemical property comprises the mass or mass to charge ratio and said second physico-chemical property comprises the elution time, hydrophobicity, hydrophilicity, migration time, or chromatographic retention time.
Data relating to components, molecules or analytes in said first sample is probabilistically associated, clustered or grouped with data relating to components, molecules or analytes in said second sample. For the management of such measurement data, a user interface may be appropriate for visualizing corresponding data items to a user in a way that enables a technically sound evaluation of the measurement data. In this respect, conventional data analysis systems may be inconvenient to use.
<reponame>kaivol/drone-stm32f4-hal<gh_stars>1-10
mod usart1;
mod usart2;
#[cfg(any(
    stm32_mcu = "stm32f405",
    stm32_mcu = "stm32f407",
    stm32_mcu = "stm32f412",
    stm32_mcu = "stm32f413",
    stm32_mcu = "stm32f417",
    stm32_mcu = "stm32f427",
    stm32_mcu = "stm32f429",
    stm32_mcu = "stm32f437",
    stm32_mcu = "stm32f446",
    stm32_mcu = "stm32f469",
))]
mod usart3;
#[cfg(any(
    stm32_mcu = "stm32f405",
    stm32_mcu = "stm32f407",
    stm32_mcu = "stm32f413",
    stm32_mcu = "stm32f417",
    stm32_mcu = "stm32f427",
    stm32_mcu = "stm32f429",
    stm32_mcu = "stm32f437",
    stm32_mcu = "stm32f446",
    stm32_mcu = "stm32f469",
))]
mod uart4;
#[cfg(any(
    stm32_mcu = "stm32f405",
    stm32_mcu = "stm32f407",
    stm32_mcu = "stm32f413",
    stm32_mcu = "stm32f417",
    stm32_mcu = "stm32f427",
    stm32_mcu = "stm32f429",
    stm32_mcu = "stm32f437",
    stm32_mcu = "stm32f446",
    stm32_mcu = "stm32f469",
))]
mod uart5;
#[cfg(any(
    stm32_mcu = "stm32f401",
    stm32_mcu = "stm32f405",
    stm32_mcu = "stm32f407",
    stm32_mcu = "stm32f410",
    stm32_mcu = "stm32f411",
    stm32_mcu = "stm32f412",
    stm32_mcu = "stm32f413",
    stm32_mcu = "stm32f427",
    stm32_mcu = "stm32f429",
    stm32_mcu = "stm32f446",
    stm32_mcu = "stm32f469",
))]
mod usart6;
#[cfg(any(
    stm32_mcu = "stm32f405",
    stm32_mcu = "stm32f407",
    stm32_mcu = "stm32f417",
    stm32_mcu = "stm32f427",
    stm32_mcu = "stm32f437",
    stm32_mcu = "stm32f469",
))]
mod uart7;
#[cfg(any(
    stm32_mcu = "stm32f405",
    stm32_mcu = "stm32f407",
    stm32_mcu = "stm32f413",
    stm32_mcu = "stm32f417",
    stm32_mcu = "stm32f427",
    stm32_mcu = "stm32f437",
    stm32_mcu = "stm32f469",
))]
mod uart8;
#[cfg(any(stm32_mcu = "stm32f413",))]
mod uart9;
#[cfg(any(stm32_mcu = "stm32f413",))]
mod uart10;
pub use self::usart1::*;
pub use self::usart2::*;
#[cfg(any(
    stm32_mcu = "stm32f405",
    stm32_mcu = "stm32f407",
    stm32_mcu = "stm32f412",
    stm32_mcu = "stm32f413",
    stm32_mcu = "stm32f417",
    stm32_mcu = "stm32f427",
    stm32_mcu = "stm32f429",
    stm32_mcu = "stm32f437",
    stm32_mcu = "stm32f446",
    stm32_mcu = "stm32f469",
))]
pub use self::usart3::*;
#[cfg(any(
    stm32_mcu = "stm32f405",
    stm32_mcu = "stm32f407",
    stm32_mcu = "stm32f413",
    stm32_mcu = "stm32f417",
    stm32_mcu = "stm32f427",
    stm32_mcu = "stm32f429",
    stm32_mcu = "stm32f437",
    stm32_mcu = "stm32f446",
    stm32_mcu = "stm32f469",
))]
pub use self::uart4::*;
#[cfg(any(
    stm32_mcu = "stm32f405",
    stm32_mcu = "stm32f407",
    stm32_mcu = "stm32f413",
    stm32_mcu = "stm32f417",
    stm32_mcu = "stm32f427",
    stm32_mcu = "stm32f429",
    stm32_mcu = "stm32f437",
    stm32_mcu = "stm32f446",
    stm32_mcu = "stm32f469",
))]
pub use self::uart5::*;
#[cfg(any(
    stm32_mcu = "stm32f401",
    stm32_mcu = "stm32f405",
    stm32_mcu = "stm32f407",
    stm32_mcu = "stm32f410",
    stm32_mcu = "stm32f411",
    stm32_mcu = "stm32f412",
    stm32_mcu = "stm32f413",
    stm32_mcu = "stm32f427",
    stm32_mcu = "stm32f429",
    stm32_mcu = "stm32f446",
    stm32_mcu = "stm32f469",
))]
pub use self::usart6::*;
#[cfg(any(
    stm32_mcu = "stm32f405",
    stm32_mcu = "stm32f407",
    stm32_mcu = "stm32f417",
    stm32_mcu = "stm32f427",
    stm32_mcu = "stm32f437",
    stm32_mcu = "stm32f469",
))]
pub use self::uart7::*;
#[cfg(any(
    stm32_mcu = "stm32f405",
    stm32_mcu = "stm32f407",
    stm32_mcu = "stm32f413",
    stm32_mcu = "stm32f417",
    stm32_mcu = "stm32f427",
    stm32_mcu = "stm32f437",
    stm32_mcu = "stm32f469",
))]
pub use self::uart8::*;
#[cfg(any(stm32_mcu = "stm32f413",))]
pub use self::uart9::*;
#[cfg(any(stm32_mcu = "stm32f413",))]
pub use self::uart10::*;
Integrating XQuery and P2P in MonetDB/XQuery* In a numerical control unit which has an interpolation function for generating command pulses in response to a distribution command and a servo system for driving a movable part of a machine in accordance with the command pulses, a comparison is made between a preset value and the error of the servo system a predetermined period of time after the distribution command is switched off; when the error of the servo system is larger than the preset value, an alarm signal is produced to indicate the occurrence of a malfunction.
/**
 * ItemInstance is the class for an instance of an Item.
 * @author Adam
 */
public class ItemInstance extends WorldObject {
	/**
	 * The ItemInstance's Item.
	 */
	public Item item;
	private int count=1;
	/**
	 * Makes a new ItemInstance given its item.
	 */
	public ItemInstance(Item item) {
		this.item=item;
	}
	/**
	 * Makes a new ItemInstance given its item and quantity.
	 */
	public ItemInstance(Item item, int quan) {
		this.item=item;
		setQuantity(quan);
	}
	/**
	 * Makes a new ItemInstance given its item and coordinates.
	 */
	public ItemInstance(Item item, int x, int y) {
		this.x=x;
		this.y=y;
		this.item=item;
	}
	/**
	 * Makes a new ItemInstance given its item, coordinates and quantity.
	 */
	public ItemInstance(Item item, int x, int y, int quan) {
		this.x=x;
		this.y=y;
		this.item=item;
		setQuantity(quan);
	}
	/**
	 * Returns a boolean stating if the ItemInstance is stackable.
	 */
	public boolean getStacks(){
		return item.stacks;
	}
	/**
	 * Sets the number of an Item an ItemInstance has.
	 */
	public int setQuantity(int quantity){
		if(item.stacks){
			count=quantity;
		}
		return count;
	}
	/**
	 * Returns the number of its Item an ItemInstance has.
	 */
	public int getQuantity(){
		return count;
	}
	public boolean getBlocking() {
		return item.blocking;
	}
	public boolean getOpaque() {
		return item.opaque;
	}
	/**
	 * Returns the cost of the ItemInstance.
	 */
	public int getValue(){
		return item.cost*count;
	}
	/**
	 * Returns a boolean stating if the ItemInstance can be carried.
	 */
	public boolean getCarriable() {
		return item.carriable;
	}
	/**
	 * Parses an ItemInstance from an AMLBlock.
	 */
	public ItemInstance parseStatic(AMLBlock block) {
		x=block.getParameterInt("x", x);
		y=block.getParameterInt("y", y);
		//avoid declaring team in map
		count=block.getParameterInt("quantity", count);
		return this;
	}
	public void collideWith(Entity entity) {
		if(item.onCollide!=null){
			//has a collide callback
			if(item.onCollide.command.equals("delete")){
				world.removeItem(this);
			}else{
				if(item.onCollide.command.equals("changeTo")){
					Item it=world.catalog.getItem(item.onCollide.getParameterString("type", item.name));
					if(it!=null){
						item=it;
					}
				}else if(item.onCollide.command.equals("replace")){
					String with=item.onCollide.getParameterString("with", item.name);
					String[] values = with.split("\\.");
					Vector<ItemInstance> its=new Vector<ItemInstance>();
					for (String value : values) {
						int index=value.indexOf("_");
						int count=1;
						if(index!=-1){
							try{
								count=Integer.parseInt(value.substring(index+1));
							}catch(NumberFormatException e){
								count=1;
							}
							value=value.substring(0, index);
						}
						Item it=world.catalog.getItem(value);
						if(it!=null){
							its.add(new ItemInstance(it, x, y, count));
						}
					}
					World wor=world;
					world.removeItem(this);
					for(int n=0;n<its.size();n++){
						wor.addItem(its.get(n));
					}
				}
			}
		}
	}
	/**
	 * Gets the text description of the ItemInstance.
	 */
	public String getDescription() {
		String str="";
		//name
		if(item.stacks){
			str+=item.name+" ("+count+")"+"\n";
		}else{
			str+=item.name+"\n";
		}
		//effect
		if(!item.effect.equals("")){
			//effect
			str+="effect: "+item.effect+" ("+item.min+"-"+item.max+")\n";
		}else if(item.equip==0){
			//damage
			str+="damage: ("+item.min+"-"+item.max+")\n";
		}
		//armor
		if(item.armor!=0){
			str+="armor: "+item.armor+"\n";
		}
		//value
		if(count==1){
			str+="value: "+item.cost;
		}else{
			str+="value: "+count+"x"+item.cost;
		}
		return str;
	}
	/**
	 * Returns the String of the AMLBlock that can be parsed to return the ItemInstance.
*/ public String save() { AMLBlock block=new AMLBlock(); block.command="Item"; block.addParameter("name", item.name); block.addParameter("x", x+""); block.addParameter("y", y+""); block.addParameter("quantity", count+""); return block.encode(); } }
/*
** This (sqlite3BeginBenignMalloc()) is called by SQLite code to indicate that
** subsequent malloc failures are benign. A call to sqlite3EndBenignMalloc()
** indicates that subsequent malloc failures are non-benign.
*/
void sqlite3BeginBenignMalloc(void){
  wsdHooksInit;
  if( wsdHooks.xBenignBegin ){
    wsdHooks.xBenignBegin();
  }
}
Reducing Uncertainty in the American Community Survey through Data-Driven Regionalization The American Community Survey (ACS) is the largest survey of US households and is the principal source for neighborhood-scale information about the US population and economy. The ACS is used to allocate billions in federal spending and is a critical input to social scientific research in the US. However, estimates from the ACS can be highly unreliable. For example, in over 72% of census tracts, the estimated number of children under 5 in poverty has a margin of error greater than the estimate. Uncertainty of this magnitude complicates the use of social data in policy making, research, and governance. This article presents a heuristic spatial optimization algorithm that is capable of reducing the margins of error in survey data via the creation of new composite geographies, a process called regionalization. Regionalization is a complex combinatorial problem. Here, rather than focusing on the technical aspects of regionalization, we demonstrate how to use a purpose-built open source regionalization algorithm to process survey data in order to reduce the margins of error to a user-specified threshold. Introduction In 2010 the American Community Survey (ACS) replaced the long form of the decennial census as the principal source for geographically detailed information about the population and economy of the United States. The ACS produces estimates for thousands of variables at a variety of geographic scales, the smallest of which (the block group) divides the US like a jigsaw puzzle into 217,740 pieces. The ACS releases estimates annually; however, for smaller areas these annual estimates are based on 3 or 5 years of data collection. This increase in frequency comes at a cost: the ACS data are terribly imprecise. For some policy-relevant variables, like the number of children in poverty, the estimates are almost unusable-in the 2007-2011 ACS, of the 56,204 tracts for which a poverty estimate for children under 5 was available, 40,941 (72.8%) had a margin of error greater than the estimate. For example, the ACS indicates that Census Tract 196 in Brooklyn, New York has 169 children under 5 in poverty ± 174 children, suggesting that somewhere between 0 and 343 children in the area live in poverty. At the census tract scale, the margins of error on ACS data are on average 75 percent larger than those of the corresponding decennial long form estimate. The imprecision in the ACS is especially vexing because the survey is used to allocate nearly $450 billion in federal spending each year. For example, the US Treasury Department's New Markets Tax Credit (NMTC) provides a federal tax credit for investment in low-income communities. Since its inception in 2000 the NMTC has distributed over $36 billion in tax credits; unfortunately, the census tracts targeted by this program are especially ill-served by the ACS. Spielman, Folch, and Nagle show that there is a strong association between tract-level median household income and data quality. The practical implication is that some places which arguably should qualify for public assistance are disqualified and vice versa: imprecision in public data has real social implications. It was well understood, before the adoption of the ACS, that the ACS would have higher margins of error than earlier decennial censuses. However, the difference in quality between the ACS and the decennial long form has far exceeded initial expectations.
The particular reasons for this decline in data quality are complex and are discussed in detail elsewhere. This paper focuses on a way to fix the data, that is, to reduce the margins of error in the ACS data to some user-specified quality threshold. The method presented here is explicitly spatial: it reengineers census geography by combining tracts (or block groups) into larger "regions." These regions, because they have a larger effective sample size, have a smaller margin of error. The process of building regions is computationally complex and fraught with conceptual (and practical) challenges. The algorithm that we present an overview of here has been previously described in the technical literature; the aim in this article is to illustrate how spatial optimization procedures can be used to improve the usability of small area estimates from the ACS (or any other survey). In the balance of this paper, we explain these challenges, present the region-building algorithm, and provide empirical results demonstrating the algorithm's utility across a broad range of variables and geographic locations. The algorithm is open source and freely available at (https://github.com/geoss/ACS_Regionalization). Existing Strategies to Reduce the MOE in Survey Data As the name suggests, the ACS is a survey. It aims to build population-level estimates based on information from a sample of the US population. The "populations" for which the ACS produces estimates are geographically defined and range in size from approximately 1500 people (block groups) to administrative units such as cities, counties, states, and the nation. The estimates for any given geographic area are created via a combination of data (completed questionnaires) and statistical methods (weighting and variance estimation). In 2012, 3,539,552 households were contacted by the ACS, resulting in 2,375,715 completed surveys (a 67% response rate). The number of completed surveys seems substantial until one considers the number of geographic zones for which estimates are produced. In 2012, the most recent year for which data were available, this response rate translates into 32 responses per census tract and 11 responses per block group on average per year. At the tract and block group scale these data are pooled into multiyear estimates, giving an average of 135 (median 124) completed surveys per tract over the 5-year period from 2007 through 2011. However, the ACS produces over 1400 tables of estimates per tract, making this average of 135 responses seem woefully inadequate. The ACS estimates describe geographically bounded populations, so the number of completed surveys within any given geographic area is largely a function of the population and the response rate. As one's geographic zone of interest grows in size, the number of completed surveys increases. Zones with larger numbers of completed surveys have high-quality estimates. Thus for larger geographic scales like large counties and cities the ACS estimates are excellent and provide high-quality annual data, but for small areas like tracts and block groups the estimates are poor. The US Census Bureau publishes margins of error (MOE) to accompany each estimated variable in the ACS. The published margins of error reflect a 90 percent confidence interval, a range of values that is overwhelmingly likely to contain the true population-level value for a given variable.
These MOEs are published on the same scale as the variable-that is, the margin of error on an income variable is expressed in dollars and the margin of error on a count of people is expressed as a number of people. This makes it difficult to directly compare the amount of uncertainty in a variable on a dollar scale with one on a count scale. For this reason we use a statistic called the coefficient of variation (CV), which is calculated as CV_ij = (MOE_ij / 1.645) / x_ij, where i and j index areal units and variables, respectively, x_ij is the published estimate, and dividing the MOE by 1.645 converts the 90 percent margin of error into a standard error. The CV is an imperfect but useful statistic because it gives a standardized measure of uncertainty that can be interpreted as the share of the estimate that the error represents-higher CV implies greater uncertainty. There is no CV level that is universally accepted as "too high," but a comprehensive report on the ACS describes a range of 0.10 to 0.12 as a "reasonable standard of precision for an estimate" (p. 64). Although uncertainty in the ACS can be high, data users often have few, if any, alternatives; so researchers, planners, and policymakers must proceed using the currently available data. The US Census Bureau (USCB) offers two strategies for data users confronting high-uncertainty data: "while it is true that estimates with high CVs have important limitations, they can still be valuable as building blocks to develop estimates for higher levels of aggregation. Combining estimates across geographic areas or collapsing characteristic detail can improve the reliability of those estimates as evidenced by reductions in the CVs" (p. A-13). The first strategy, "collapsing detail," and the second strategy, "combining geographic areas," work by effectively increasing the sample size supporting a given estimate. If, for example, the CV on income for African-Americans in a census tract is too high, one could "collapse" detail by considering income for all residents of the tract (as opposed to just the subset of people who identify as African-American). However, this strategy is not viable for all variables. For example, the 2007-2011 ACS estimates of the number of people living in poverty show that in 3835 tracts the MOE is greater than the estimate, and in 35,737 tracts (approximately 50% of all tracts) the MOE is 50% or more of the estimate. While the ACS poverty estimates are very poor, they cannot be collapsed because no coarser level of detail is available. The second strategy, combining geographic areas together into a "region," works via a similar mechanism (boosting the number of completed surveys supporting an estimate): a group of census tracts will contain more completed surveys than a single tract, and thus will usually yield a reduced margin of error. This grouping strategy allows users to maintain attribute detail while achieving higher-quality estimates. This procedure is illustrated in Fig. 1. In the figure the squares represent census tracts and the rectangles show regions (combinations of two tracts). The color of the unit corresponds to the estimated per capita income, with blue representing high income and yellow representing low income. Per capita income simply divides aggregate income by the population. Each of the input tracts has an estimated population of 5000 ± 822 people (i.e., a CV of 0.1); aggregate income varies by tract but has a constant CV of 0.3. As Fig. 1 shows, it is possible to significantly alter the geographic distribution of a variable via aggregation.
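To make the CV arithmetic concrete, the short Python sketch below combines two hypothetical tract estimates using the standard Census Bureau approximations: the estimate of a sum is the sum of the estimates, the MOE of a sum is the square root of the sum of the squared MOEs, and the CV is the standard error (MOE/1.645) divided by the estimate. The numbers echo the 5000 ± 822 population example from Fig. 1 and are illustrative, not real ACS records.

import math

def cv(estimate, moe, z=1.645):
    """Coefficient of variation: standard error (MOE/z) over the estimate."""
    return (moe / z) / estimate

def combine(estimates, moes):
    """Aggregate tract estimates with the USCB approximation:
    sum the estimates; MOE of the sum = sqrt(sum of squared MOEs)."""
    agg_est = sum(estimates)
    agg_moe = math.sqrt(sum(m ** 2 for m in moes))
    return agg_est, agg_moe

# Two hypothetical tracts, each with 5000 +/- 822 people (CV = 0.1).
est, moe = combine([5000, 5000], [822, 822])
print(est, round(moe), round(cv(est, moe), 3))  # 10000, 1162, 0.071

Note how the combined region's CV drops from 0.1 to roughly 0.071: this is the mechanism by which aggregation buys precision.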
That is, one can induce geographic patterns in the aggregate data that do not exist in the input data. For example, in the lower right of Fig. 1 a high-income neighborhood is combined with a low-income neighborhood, and while this reduces the margin of error it creates a green middle-income neighborhood type that did not exist in the input data. A map can be "broken" by aggregations that mix dissimilar types of neighborhoods, thus creating new types of regions. In contrast, the regions in the lower left maintain the same pattern as the original tract map. It has long been known that such aggregation effects can have a profound impact on analytical outcomes. The Census Bureau's recommendation to combine geographic areas does not include a framework for solution quality, nor does it include a warning to users about the analytical implications of modifying areal units. However, it is clear that, naively applied, the geographic aggregation strategy carries a real risk of generating spurious or at least questionable analytical results. The problem is compounded when one considers a multivariate case, because an aggregation that preserves patterns in one variable may "break" patterns in others. Complicating matters even more, if one expanded the four tracts in Fig. 1 to an entire metropolitan area that contains thousands of tracts, there would be millions (or more) of possible aggregations. A final wrinkle is that within metropolitan areas there can be substantial tract-to-tract and estimate-to-estimate variability in the quality of data. A particular attribute will not have a constant CV in all census tracts (see Table 3)-some tracts may have good poverty estimates whereas other nearby tracts may not. A single tract may have a good poverty estimate but a poor income estimate. Thus it is unnecessary to apply a "collapsing detail" or "aggregating geography" strategy for every census tract. When applied naively to large areas, the aggregation strategies recommended by the USCB to reduce the MOE will have a tendency to over-correct the problem. Since some input geographies will have reliable estimates across all variables of interest, these areas should not be combined with other tracts, because doing so would result in an unnecessary loss of geographic detail. This article presents a multivariate algorithm for finding the "best" possible combination of tracts into new regions. The algorithm accepts a variety of inputs from the user, including a list of variables and a data quality threshold (CV). Given a large multivariate map of census geographies, it will enumerate a representative subset of the millions of possible combinations of tracts into regions. The algorithm employs an optimization procedure to bring all variables under the user-specified quality threshold; for example, all variables must have a CV of less than 0.10. The algorithm attempts to minimize the amount of aggregation and maximize the quality of the output regions; it will group tracts together only when grouping is necessary, and avoids "poor" solutions (as in the lower right of Fig. 1) through an objective function that penalizes intraregion heterogeneity. We view this process as both an art and a science and thus provide both quantitative and visual procedures for assessing the quality of the algorithm's solutions. Using this algorithm requires one to sacrifice geographic detail for attribute precision; however, as we show in subsequent sections, the magnitude of this trade-off is controlled by the user.
Regionalization Most Americans conceptualize "the South" or "New England" as regions. Montello describes a region as a geographic category whose defining characteristic is that the entities it contains are in some way similar to each other and differentiated from entities in other categories. The process of regionalization (the identification of regions) is akin to drawing lines on a map that delineate the spatial extent and thus the membership of a region. In the case of New England this would mean grouping states or counties by circumscribing them within some boundary. Experts or residents who agree on the existence of a region will often differ on the exact boundaries that define it. Regionalization is a general term that covers procedures in which n areas, such as census tracts, are grouped into p regions. The concept is similar to clustering, in which n observations are grouped into p "clusters" on the basis of similarity. Regionalization simply adds a spatial contiguity constraint to clustering algorithms, meaning that a region is a set of census tracts each of which touches at least one other member of the region. The p regions therefore cover the same territory as the n areas, but do so using fewer spatial units. For almost any real-world problem, there are far more potential groupings of areas into regions than can be tested to find the optimal solution; therefore heuristic algorithms must be employed to search the solution space intelligently. The heuristic algorithm described below identifies geographically contiguous clusters of tracts that are as homogeneous as possible across a user-specified set of attributes. Census tracts are themselves often seen as substantively meaningful regions that group together residents into "neighborhoods". Given the substantive meaning ascribed to census tracts, and their widespread use, the Census Bureau decided to maintain the decennial geographic system of block groups and tracts for the ACS. Our algorithm does not discard the old system of census tracts but builds new regions based upon combinations of existing areas. This requires one to abandon those areas in favor of a new geography. While many users of census data are attached to tracts and see them as substantive units of analysis, we believe that such attachments are unwise given the quality of the ACS data. Even if census tracts are substantively important geographies that structure urban space, for many variables the data quality is so poor that it becomes impossible to differentiate areas on important characteristics like wealth, race, ethnicity, etc. Computing Regions with a User-Specified Uncertainty The computational regionalization algorithm developed here has three goals: Reduce the margin of error on input variables to meet or exceed a user-specified threshold. Avoid grouping dissimilar areas together, i.e., do not break the pattern on the map. Group together as few tracts as necessary to meet user-specified data quality thresholds. To achieve the first goal, we require that every attribute in every region has a CV below a user-specified threshold. These thresholds can be global, so that all variables meet or exceed a user-specified threshold c (i.e., CV ≤ c), or variable specific, so that for a set of J variables a user specifies a 1 × J vector of CV or MOE targets. In addition the user can specify a maximum, a minimum, or a range for the population of output regions.
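As an illustration of the contiguity constraint described above, the sketch below checks whether a candidate region is spatially contiguous by testing whether its tracts induce a connected subgraph of the tract adjacency graph. The adjacency list is hypothetical; in practice it would be derived from the tract geometries (e.g., rook or queen contiguity).

import networkx as nx

# Hypothetical adjacency between five tracts.
adjacency = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3, 5], 5: [4]}

G = nx.Graph((tract, nbr) for tract, nbrs in adjacency.items() for nbr in nbrs)

def is_contiguous(region):
    """A candidate region is feasible only if its tracts form a connected subgraph."""
    return nx.is_connected(G.subgraph(region))

print(is_contiguous({1, 2, 4}))  # True: 1-2 and 2-4 touch
print(is_contiguous({1, 5}))     # False: tracts 1 and 5 do not touch

This connectivity test is what separates regionalization from ordinary clustering: a candidate grouping that fails it is rejected no matter how homogeneous its attributes are.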
The second goal is achieved through the use of an objective function that aims to minimize intraregion heterogeneity. The objective function is simply the sum of the squared deviations (SSD) from the mean of the region for each variable: SSD = Σ_r Σ_{i∈r} Σ_j (x_ij − x̄_rj)², (1) where r indexes regions, i indexes the tracts assigned to region r, j indexes the variables, and x̄_rj is the mean of variable j across the tracts in region r. There is some debate in the literature about objective functions for regionalization. Martin, Nolan, and Tranmer have argued that minimizing the intraregion heterogeneity as in equation 1 does not necessarily maximize the interregion heterogeneity. That is, the objective of regionalization should be to ensure that one creates internally homogeneous regions that are strongly differentiated from other regions. This approach, however, requires an arbitrary decision on how to weight inter- and intraregion composition. The third goal is accomplished by maximizing the number of regions created from the input map of tracts, subject to user-specified constraints. By maximizing the number of output regions, we minimize aggregation. We have adapted the max-p regionalization algorithm to achieve these goals. The max-p algorithm operates in two phases. The first phase proceeds by selecting a census tract at random from all the tracts, and designates this as a region seed. Seeds can be chosen at random or via a number of other initialization procedures. Folch and Spielman have found that a purely random selection of seeds yields the most homogeneous regions, and that approach is used here. Tracts contiguous to the seed are added one-by-one to the seed tract to build up the region. Once the set of tracts adjacent to the seed tract has been exhausted, the set of tracts eligible to join the region is expanded to include tracts contiguous to the tracts previously added. The strategy of building concentrically outward from the seed was adopted to ensure that the initialization produces compact regions as opposed to sinuous "gerrymanders." Region construction stops and tracts are no longer added to the seed once the region satisfies all of the user-specified criteria (i.e., meeting or exceeding the CV and/or population thresholds). If a randomly selected seed meets all user-specified constraints, then adding tracts to it is not necessary. A region can therefore be made up of one or more census tracts. Once that region is complete, another seed is chosen from the set of unassigned tracts, and the construction process repeats. This procedure iterates until no other feasible regions can be built. This typically results in a set of leftover census tracts. These leftover tracts are then added to existing regions, after verification that the newly expanded region still meets the user-specified constraints. A feasible solution is one in which each tract is assigned to a single region, and each region meets the CV and/or population constraints. Each run of this phase is very fast and can be repeated thousands of times. From this large set of feasible partitions the "best" partition is taken, where the best partition is the one with the most regions. In the case of a tie in the number of regions, we select the solution with the lowest SSD. The second phase of the max-p algorithm swaps tracts between spatially adjacent regions in an effort to reduce the aggregate attribute heterogeneity within the regions, as measured by the sum of the squared deviations from the mean of the region (see equation 1).
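A minimal numpy sketch of the SSD objective in equation (1): given a z-scored attribute matrix and a region label for each tract, it sums the squared deviations of each tract from its region's mean on every variable. The data and labels are illustrative, not drawn from the ACS.

import numpy as np

def ssd(X, labels):
    """Sum of squared deviations from region means (equation 1).
    X: (n_tracts, n_vars) z-scored attributes; labels: region id per tract."""
    total = 0.0
    for r in np.unique(labels):
        block = X[labels == r]                       # tracts assigned to region r
        total += ((block - block.mean(axis=0)) ** 2).sum()
    return total

X = np.random.default_rng(0).normal(size=(6, 3))     # 6 tracts, 3 variables
print(ssd(X, np.array([0, 0, 1, 1, 1, 2])))          # lower = more homogeneous regions

In the swap phase described above, a candidate swap is accepted only if it lowers this value while keeping every region feasible.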
The max-p is a heuristic optimization algorithm; Folch and Spielman show that, using an internal-variance-minimizing objective function like SSD, it finds the minimum-variance partition of the input map over 95 percent of the time. Areas are swapped iteratively, and each iteration tries to identify the best of all feasible swaps of a single tract between regions. A feasible swap is one that does not change the number of regions identified in the first phase (regions cannot be created in the optimization phase) and that ensures that all regions remain feasible after the swap. A tabu search strategy is used to prevent backtracking to earlier solutions and to avoid getting trapped in suboptimal solutions. Stopping criteria prevent the algorithm from continuing to search once further improvement appears unlikely or once some user-specified maximum number of swaps occurs.

In both the initialization and the optimization phases we rely on equations provided by the US Census Bureau (USCB) to calculate the region-level CV for each input variable. The general approach is to combine the standard errors of the input variables across the tracts in a region. For derived estimates such as average household income (a ratio) or percent Asian (a proportion), one has to consider the standard errors of both the numerator and the denominator. The procedure is fairly straightforward and is well described in the Bureau's documentation.

Data

The algorithm accepts a set of ACS variables and their margins of error as input data. The ACS is widely used in the social sciences, and in an effort to illustrate the utility of our approach across a wide variety of social-scientific domains we construct four attribute scenarios: "general," "poverty," "transportation," and "housing" (see Table 1). The data themselves come from the 2007-2011 ACS. For the examples that follow, we constructed 18 data sets for each scenario, where each data set describes a Metropolitan Statistical Area (MSA). We chose the 18 MSAs manually to represent both the range of US cities (population sizes and growth rates) and geographic variation within the United States (by selecting two cities from each of the nine US census divisions) (see Table 2).

Data Preparation

In addition to the substantive decisions on the goals, constraints, data, and algorithm discussed above, a number of practical decisions are needed to allow the approach to run smoothly. We remove from the analysis all tracts that do not have households. These tend to be uninhabited places such as large parks or bodies of water, or institutional locations such as large prisons or employment centers. This exclusion is necessary because we measure various ratios and proportions, and zero-household tracts or tracts with missing attributes would force a divide-by-zero operation that would derail the algorithm.

Another problematic issue is the MOE on attributes with a zero estimate. The approach used by the USCB to compute MOEs does not accommodate zero estimates, so all zero estimates in each state receive the same MOE. For example, in the 2007-2011 data Ohio has 934 tracts with zero public transit commuters, and each of these estimates has an MOE of ±89; in contrast, 57 tracts have exactly one transit commuter, but their MOEs range from just 2 to 5 (see 2007-2011 American Community Survey table B08134). Because of the high MOEs on zero estimates, we simply reset them to zero. While this assumes no uncertainty in the estimate, it is preferable to the implausibly high MOE in the published data.
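For reference, the Census Bureau approximations behind the region-level CVs described above reduce to a few lines; the fallback from the proportion formula to the ratio formula when the square root is undefined follows the published guidance. The function name and argument layout here are ours, and inputs are standard errors (a published 90 percent MOE converts to an SE via division by 1.645).

    import math

    def proportion_se(num, num_se, den, den_se):
        # Approximate standard error of a derived proportion p = num / den.
        p = num / den
        under_root = num_se ** 2 - (p ** 2) * (den_se ** 2)
        if under_root < 0:                       # fall back to the ratio formula
            under_root = num_se ** 2 + (p ** 2) * (den_se ** 2)
        return math.sqrt(under_root) / den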
Since the ranges of the input data are quite heterogeneous (e.g., dollars, number of rooms, percentages), we standardize the input data using z-scores. There are multiple standardization procedures that could potentially be used: Tarpey suggests that the optimal transformation for clustering applications is one in which the between-cluster variance is maximized, while Steinley has argued that the choice of standardization procedure is unlikely to have an overall detrimental effect on classification performance. Because we compute the sum of squared deviations from the mean in our objective function, it is important that these deviations be on the same scale; otherwise a variable on, say, a dollar scale would have much more impact on the objective function than one on a ratio scale.

An additional challenge is potentially redundant information in the input vectors. For example, in the transportation scenario we expect vehicles per person and percent who drove alone to be correlated. To account for this redundancy, principal components are calculated on the standardized data, and each of the resulting components is included in the analysis but weighted according to the amount of variance it explains. This approach allows us to capture 100 percent of the variance in the input data while ensuring that correlated variables do not have a disproportionate impact on the objective function. The intention is to use all the information, but to give more weight to those components that contribute more to the overall variation in the data. One might be able to avoid the use of principal components by manually weighting variables, and this may make sense in certain applications; for the demonstration below, however, we decided to avoid such a subjective exercise.

When an estimate is very low, CVs tend to be extraordinarily high, as the Ohio transit commuter example illustrates. In places where the estimate for a particular variable is very low we remove the CV constraint; specifically, we do so for variables that are proportions where the estimate is less than 5 percent. For example, if the estimated percent African-American in a census tract were less than 5 percent, the CV constraint for that variable would be removed, because it would be very difficult to reduce the CV without building a very large region. Thus in some regions, for some variables, it is possible for the CV to exceed the user-specified threshold. This approach is based both on a pragmatic desire to prevent rare phenomena from dominating the classification and on a recommendation in Citro and Kalton, which states that a hard CV threshold "does not apply in some instances: specifically, for estimates of proportions that are less than 5 percent of a population group in an area. The formula for estimating the coefficient of variation is very unstable for estimates of small proportions, and the estimated coefficients can be misleadingly large" (pp. 67, 72). For example, exurban locations tend to have few transit options, so the CVs on the share of workers using transit tend to be quite high; if we did not ignore the CV in such places, a region would need to contain many tracts in order to meet the user-specified threshold.

Geographic irregularities can also confound the algorithm. Since a region must consist of spatially contiguous tracts, islands can make it hard to find feasible solutions.
Tracts located on islands are not contiguous to the mainland and may not be able to form a region that meets user-specified targets: the number of tracts from which a region can be built is limited, and an island may not contain enough tracts to meet the user-specified thresholds. In the case of Lido Isle and Balboa Island in the Los Angeles MSA, we create an artificial link to the mainland based on bridge locations. In contrast, we entirely exclude Grand Island in the Buffalo MSA, since it is on the edge of the MSA and is somewhat distinct from the more urban mainland communities. These are admittedly arbitrary decisions, but ones that an analyst familiar with an area can likely make on the basis of local context.

Evaluation of Results

It is fairly simple to show that the regions produced by the algorithm achieve a user-specified uncertainty threshold; demonstrating that the resulting regions do not alter spatial patterns in the input data is a more difficult task. We have developed a suite of statistical and visual evaluation tools that allow a user to evaluate output from the algorithm both objectively and subjectively.

We use two statistical metrics to quantify the amount of information retained through regionalization. The first summary statistic is simply the number of tracts per region. Higher values mean that, on average, more tracts must be grouped together to form feasible regions. This measure is useful for comparing solutions across MSAs (which all have a different number of input tracts). It can also be useful in variable selection: one might be considering multiple poverty scenarios, each defined by a different ensemble of variables, and each yielding a different average number of tracts per region. If the scenarios were substantively similar, one might choose the set that yielded the smallest number of tracts per region, since that set maximizes the geographic resolution of the output by minimizing the amount of aggregation necessary to meet constraints. When comparing a set of possible solutions for a single MSA, this statistic reduces to the total number of regions.

The second metric (S_j) attempts to quantify information loss through aggregation. It is measured by comparing the region-level estimate for each variable to the corresponding estimates for the tracts that constitute the region. The statistic S_j measures whether the region-level estimates for a given variable are within the margins of error of their constituent tracts. If a region-level estimate is within the margin of error of all its constituent tracts, then no information is lost through aggregation; information loss increases as the 90 percent confidence intervals of more and more tract-level estimates fail to overlap with the region's estimate. Formally, for each tract i within each region k, we evaluate whether the difference between the tract's attribute value, a_ij, and the region's attribute value, a_kj, is within the tract's margin of error, e_ij:

S_j = (1/n) Σ_k Σ_{i∈k} I(|a_ij − a_kj| ≤ e_ij)

where I(·) equals 1 when the condition holds and 0 otherwise. The true cases are summed and divided by the total number of tracts (n); S_j therefore indicates the share of all tracts that are assigned to a region with no information loss for attribute j. A global version, S, can be computed as the weighted average of S_j over all the attributes; S provides a single value for the overall success of the solution.
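The S_j computation itself is simple. In the sketch below, region_of maps each tract index to its region and region_est holds the region-level estimates for attribute j; these are illustrative names, not the published API.

    def s_j(tract_est, tract_moe, region_of, region_est):
        # Share of tracts whose region-level estimate falls inside the tract's
        # own 90 percent margin of error, i.e., no information loss for attribute j.
        hits = sum(
            abs(a - region_est[region_of[i]]) <= e
            for i, (a, e) in enumerate(zip(tract_est, tract_moe))
        )
        return hits / len(tract_est)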
Visually, a user with local knowledge will find maps of region boundaries and thematic maps an important evaluation tool. Fig. 2 shows the spatial pattern of estimates and CVs for the percent of the population with a bachelor's degree or higher, at both the tract (input) and region (output) scales, for Washington, DC. The top choropleth maps show estimates (using the same class breaks), and the lower row shows the CVs for those estimates. Green regions of the lower maps have high-quality estimates; brown or red regions have poor estimates. Generally, the macrospatial pattern of higher educational attainment in the northwest and lower attainment in the southeast is preserved by the regionalization, but the CVs are markedly improved.

A second visual evaluation tool is an examination of the region boundaries. Fig. 3 shows the results from the general scenario for a section of the city. A user with local knowledge could evaluate the coherence of the solution; that is, whether the regions seem like reasonable divisions of the city, or whether the regions mix different types of neighborhoods (as in the lower right of Fig. 1).

Another visual strategy is to plot tract-level estimates against region-level estimates on a scatter plot, as in Fig. 4. Each point in the figure represents a tract. The position of the point is determined by the tract-level estimate from the ACS (x-axis) and the region estimate determined by the algorithm (y-axis). The color of the dot shows its initial condition: green points indicate tracts that meet or exceed the user-specified CV threshold (0.12 in this case), and red points are tracts that need to be fixed by the algorithm. In the large graph in Fig. 4a, tracts where the tract-level margin of error includes the region-level estimate are depicted with a solid dot, and tracts where the region-level estimate is outside the tract-level MOE are shown with a hollow dot. The ratio of solid points to all points equals the S_j value for that attribute. This diagnostic plot does not work with count estimates (e.g., the number of children under 5), because counts for groups of tracts (regions) will always be higher than counts for individual tracts. The horizontal bars link the constituent tracts of a region. Ideally, the horizontal bars would be short and centered on the 45-degree line, an indication that the region contains similar tracts and that the region- and tract-level estimates are similar. The axes of the plots are linked to the observed range of tract-level estimates; thus a plot like Fig. 4d, which shows the ratio of housing costs to income for homeowners, illustrates that the range of observed values at the tract level is greater than the range at the region level. Similarly, there is more variance in the tract-level estimates than in the region-level estimates; the reduction in variance and compression of the range is illustrated by the lack of points above 0.35 on the y-axis. Aggregation necessarily reduces variance, but an ideal solution keeps this compression to a minimum.

Open Source Code, Data, and Results

The algorithm is fully open source, was developed in Python (regionalization) and R (evaluation), and is available free on GitHub (https://github.com/geoss/ACS_Regionalization). Additionally, all of the results described in the subsequent section, and the code to reproduce all charts and figures, are available on GitHub.
The algorithm uses only open-source free software and relies heavily on the PySAL library; maps and figures are produced in R using the ggplot2 library. While the use of these tools requires some programming experience, the GitHub site includes a step-by-step tutorial that should allow users with minimal programming experience to apply the methods outlined here. In addition, shapefiles and input data for each of the scenarios for each of the MSAs have been posted to GitHub.

Table 3 presents data from the poverty scenario for a selection of four adjacent census tracts from the Logan Square area of Chicago. The first row of the table shows estimates for housing cost as a share of income for homeowners, a measure of housing affordability. From the estimates alone, tract 222800 appears to be the least affordable, with a rate of 63.3 percent; however, this tract also has an MOE of 114.5 percent, indicating that we are 90 percent certain that the true value is within the range of 0 to 177.8 percent, so this tract could actually be the highest or the lowest for the percent of homeowner income spent on housing. The table also shows that high uncertainty on one attribute does not entail high uncertainty on all attributes for that tract. Tract 222800 has the highest CV for two of the attributes, but it has one of the lowest CVs for percent employed. Tract 222900 is the most consistent across attributes in terms of CV, but none of its attributes has a CV below the recommended threshold of 0.12, while the other three tracts all have at least one CV below the threshold.

The challenges of measuring poverty in Chicago are not confined to these selected tracts. Only 78 of the 2,210 tracts in the MSA meet Citro and Kalton's recommended CV threshold of 0.12 on all five attributes. Table 4 shows the full distribution, and the more positive result that approximately 62 percent of the tracts meet the threshold for at least three of the five attributes. Table 5 shows that the pattern in Table 4 can be partially explained by variation in the overall quality of estimation of specific attributes; the attribute housing cost as share of income for renters, for example, meets the 0.12 CV threshold in only 194 tracts (8.8 percent):

Attribute                                    Number of tracts meeting the 0.12 CV threshold
Housing cost as share of income (owners)     800
Housing cost as share of income (renters)    194
Children above poverty                       1,248
Total above poverty                          2,056

Demonstration

With this diagnostic information in hand, one solution might be to collapse the owner and renter affordability estimates into one overall affordability measure. The weakness of this approach is that owning and renting are quite different: owners and renters may differ in substantive ways other than tenure, and collapsing the variables might mask differences that are important from a policy perspective. For our purposes, we assume that keeping these two high-uncertainty attributes separate is advantageous. Similarly, we assume that all of the variables in the poverty scenario are necessary. Our aim here is more pedagogical than empirical, and a different set of variables would not substantively alter the illustration of the algorithm that follows.

Given the variables in Table 3, a maximum CV of 0.12, and no region-level population constraints, the regionalization algorithm produces 256 regions for the Chicago MSA; on average, this is 8.6 tracts per region.
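As a check on the Table 3 arithmetic above, the tract 222800 interval and its CV can be reproduced directly:

    est, moe = 63.3, 114.5                   # estimate and 90 percent MOE, in percent
    lo, hi = max(0.0, est - moe), est + moe  # confidence interval, truncated at zero
    cv = (moe / 1.645) / est                 # about 1.10, versus the 0.12 target
    print(f"90% CI: [{lo:.1f}, {hi:.1f}], CV: {cv:.2f}")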
The accuracy measure S_j, the share of tracts whose attribute value is "close" to the corresponding region attribute value, shows good results in general: S_j ranges from 0.758 for the proportion of the total population in poverty to 0.897 for housing cost as share of income (owners). The overall accuracy (S) is 0.836. These results are summarized graphically in Fig. 4. As noted above, the horizontal bars linking the constituent tracts of a region should ideally be short and centered on the 45-degree line. The large share of red in Fig. 4d (lower right) indicates that this variable is particularly uncertain at the tract scale, especially in contrast to the variable in Fig. 4b (lower left). In Fig. 4, wide bars are generally terminated by a red point, a tract with an uncertain estimate, an indication that the tract may not be as different from contiguous areas as the plot makes it appear.

Variations in User Selections

In the previous example the number of attributes and the maximum CV value were fixed. In this section we look first at the effect of varying the maximum allowable uncertainty in the regionalization solution. Lower levels of uncertainty give more confidence in the estimates but require more aggregation of tracts, leading to larger regions. To illustrate the impact of the user-specified CV threshold, we rerun the Chicago scenario described above with five different maximum CV values (0.05, 0.10, 0.15, 0.20, and 0.40). Table 6 shows a dramatic reduction in the number of regions as the CV threshold decreases. When the maximum CV is set to 0.40, a level generally considered too high for research, there are on average 1.4 tracts per region. At the most restrictive level, CV = 0.05, there are approximately 43 tracts in the average region. When the CV threshold is set at 0.05 there is a significant loss of spatial resolution: the Chicago MSA is described by only 51 geographic zones. If one is willing to accept more uncertainty in the data, there are significant gains in spatial granularity; at a CV threshold of 0.40 the MSA is described by 1,573 zones. The loss in attribute information is not as dramatic as the loss in spatial information: even at the most stringent uncertainty level, 78.5 percent of the tract estimates are located within the margin of error of their assigned region.

Next we consider the impact of changing the number of attributes. Again using the Chicago poverty scenario, we compute a regionalization solution for a single attribute, two attributes, and so on, up to all five attributes, holding the CV threshold constant at 0.12. Attributes are added sequentially, so that the variable with the lowest tract-level CV is added first and subsequent variables are added until the worst performer is included. Table 7 shows the order in which variables are added and the accuracy rate by variable (S_j) for each solution. Percent employed has a relatively low tract-level CV (see Table 5), which is reflected in a single-attribute solution with an accuracy level of 0.991 and 2,021 regions (Table 8). As more attributes are included in the regionalization, the regions need to accommodate attributes with different spatial patterns in their CVs; as a result the accuracy level (S_j) for percent employed steadily declines, though it remains relatively high at 0.832. This decline in accuracy holds for all attributes as more attributes are added.
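Both experiments in this section amount to simple parameter sweeps over the same driver. In the sketch below, regionalize is a hypothetical wrapper name standing in for whatever function runs the full algorithm, not the published entry point:

    for target in (0.05, 0.10, 0.15, 0.20, 0.40):
        regions = regionalize(tracts, cv_target=target)      # hypothetical wrapper
        print(target, len(regions), len(tracts) / len(regions))  # tracts per region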
Table 8 shows that the average S_j declines and region size grows as more attributes are added.

Comparisons Across MSAs and Scenarios

To provide perspective on variation by city type and attribute type, we applied the algorithm to 18 MSAs (Table 2) and four scenarios (Table 1), 72 cases in total. In all cases we used a CV threshold of 0.12 and no population constraints. Fig. 5 summarizes the results in terms of the two metrics: accuracy (S) on the y-axis and areas per region on the x-axis. What is clear from the plot is that differences in attribute bundle are more powerful in determining the general form of the solution than differences in MSA: all the results from a particular scenario are clustered together, while the results for a particular MSA are scattered around the plot. This is not to say that MSA does not matter. A two-way ANOVA comparing the scenario-level and the MSA-level means of S and tracts per region rejects the null hypotheses that scenario-level means are the same (significant at the 0.001 level) and that, within scenarios, MSA-level means are the same (significant at the 0.01 level).

Discussion

The regionalization method presented above is a Band-Aid for the ACS data; that is, it addresses an immediate problem (data quality) without getting at the root causes of the problem. The causes of the problems with the ACS data are complex and range from the statistical to the political. Given that systemic fixes are not likely to be forthcoming, we have tried to create a broadly applicable, intuitive, and usable method for post-processing the public-use ACS data. The method we have presented is not ideal for all situations; in some cases abandoning existing census geographies may not be feasible, and in those cases alternate methods, like Bayesian map smoothing, could be considered.

Moreover, there are some problems with our approach that warrant discussion. We rely heavily on the point estimates in the construction of regions. First, without access to the raw surveys it is not possible to calculate the exact MOE for the new regions; the methods we use are the best available and are recommended by the US Census Bureau. Second, the objective function does not account for the reliability of an estimate; it simply uses the published estimates for each variable selected by the user. While we do use the published MOEs to determine the feasibility of a region, the MOEs do not factor into the objective function. The algorithm is written in such a way that it is relatively easy to replace the objective function, and the code is open source; one could create a new objective function that accounted for uncertainty in the input data, although we were unable to identify one.

Each run of our algorithm will produce a different set of regions. Users of the algorithm must select a solution from a set of solutions that meet the user-specified constraints. In the preceding analyses we simply selected the solution with the lowest intraregion heterogeneity, but alternative criteria could have been used. For example, local knowledge could guide the selection: a user with an understanding of a metropolitan area might determine that one set of boundaries was a more coherent partition of the landscape. One could also try to select solutions that had desired geometric properties (such as compactness). A potentially fruitful area of future research is the application of methods like heuristic concentration, which attempt to identify an optimal solution utilizing multiple outputs from a heuristic optimization procedure.
From run to run, the solutions are not entirely different: within a large set of potential solutions, some tracts are consistently grouped together while other tracts flip-flop between regions. These stable areas can be seen as "natural" regions, groups of tracts that share characteristics, and the tracts that flip-flop between regions may be edges of, or transition zones between, natural regions. We have tried to develop a statistical measure of the stability of region assignment; the stability of solutions from stochastically seeded algorithms is a long-running concern in the literature. While it would be nice to know how stable a solution was, or which tracts tended to be grouped together, such knowledge would not substantively alter the application of the algorithm, since all tracts must be assigned to a region.

Data-driven geographies of the sort created by the algorithm raise a more vexing issue. If geographies are designed around data, and the data change, should the geography change? A set of regions that works well for one release of the ACS might not achieve user-specified CV targets for the next release. On the one hand, it seems sensible to design regions that maximize the utility of the data; on the other, it seems foolish to create ephemeral geographies that change from year to year. Moreover, having one set of regions for transportation problems and another set for housing-related problems may be problematic for certain uses. Using tracts as the building blocks of regions ameliorates these concerns to some extent, because tracts are relatively stable and can therefore always be recombined. For longitudinal comparisons, regions created with one ACS release could be used to aggregate census tracts from prior (or later) releases of the ACS. This approach allows historical continuity, but it raises questions about the statistical optimality and substantive coherence of regions created using different data releases; in places with highly dynamic populations this concern may be more pronounced. However, these same concerns exist for the tracts themselves. If the "optimal" set of regions changes with a new release of the ACS data, it is possible to retabulate the older data with the new regions. Census tract boundaries do change over time; however, it is possible to account for these boundary changes using the tract relationship files published by the USCB concurrently with boundary changes.

Conclusion

The American Community Survey, as the primary source of data about US neighborhoods, has important implications for social policy and social science, yet in their current form ACS data are unusable for many purposes. The geographic hierarchy of census units has not evolved to match the reality of the new ACS estimates: tracts simply yield too few completed surveys to provide high-quality estimates, while counties (and cities, and even towns) are simply too large for many geographic and social-scientific questions. The Census Bureau's recommendation that data users "combine geographic areas" can be seen as a statement about the suitability of the current census geographic system for many types of analysis. Unfortunately, this recommendation was accompanied neither by a set of guidelines for what constitutes a good aggregation nor by a set of tools to help users aggregate. New York City, working manually and using local knowledge, created its own "Neighborhood Tabulation Areas," which have a minimum population of 15,000 people.
Our algorithm accomplishes a similar end, and our diagnostics provide users some guidance in the aggregation process. The algorithm allows people to create bespoke geographic units of analysis. These custom divisions of space are a big conceptual change from the relatively static, general-purpose census tracts that have been in wide use for over 60 years, but we believe this conceptual shift is necessary given the quality of the tract-level estimates published by the ACS. Using the algorithm requires a tradeoff that is not appropriate in all situations or for all audiences: one must be willing to reduce the number of geographic units of analysis. In the overwhelming majority of cases, however, this compromise does not sacrifice information.

Reengineering geographic units does, however, open a Pandora's box of statistical issues. In the late 1970s, researchers showed that it is possible to generate a nearly perfect negative (−0.99) or positive (+0.99) correlation between the same variables in the same study region simply by changing the shape and scale of the geographic units. Yule and Kendall noticed a similar phenomenon decades earlier and wondered whether the associations they observed in reaggregated data were "real" or "illusory." These aggregation effects are a real concern and can be difficult to anticipate in statistical models. However, the alternative to regionalization is using data that in many cases fail to meet even the most liberal standards of fitness for use.
package org.gooru.nucleus.search.indexers.app.repositories.activejdbc;

import java.util.List;

import io.vertx.core.json.JsonArray;
import io.vertx.core.json.JsonObject;

public interface TaxonomyCodeRepository {

    static TaxonomyCodeRepository instance() {
        return new TaxonomyCodeRepositoryImpl();
    }

    JsonObject getTaxonomyCode(String codeId);

    JsonObject getCode(String codeId);

    Long getStandardLtsCountByFramework(String frameworkCode);

    JsonArray getLTCodeByFrameworkAndOffset(String frameworkCode, Integer limit, Long offset);

    JsonArray getStandardCodeByFrameworkAndOffset(String frameworkCode, Integer limit, Long offset);

    Long getStandardCountByFramework(String frameworkCode);

    Long getLTCountByFramework(String frameworkCode);

    Long getStandardLtsCount();

    JsonArray getLTCodeByOffset(Integer limit, Long offset);

    JsonArray getStandardCodeByOffset(Integer limit, Long offset);

    Long getStandardCount();

    Long getLTCount();

    JsonArray getStdLTCodeByFrameworkAndOffset(String frameworkCode, Integer limit, Long offset);

    JsonObject getGutCode(String codeId);

    List<String> getAllStandardByDomain(String domainId, String fw);
}
package ir.sk.algorithm.others;

import ir.sk.helper.Difficulty;
import ir.sk.helper.DifficultyType;
import ir.sk.helper.complexity.SpaceComplexity;
import ir.sk.helper.complexity.TimeComplexity;
import ir.sk.helper.paradigm.Backtracking;

/**
 * Given a partially filled 9×9 2D array ‘grid[9][9]’, the goal is to assign digits (from 1 to 9)
 * to the empty cells so that every row, column, and subgrid of size 3×3 contains exactly one
 * instance of the digits from 1 to 9.
 *
 * Created by sad.kayvanfar on 9/5/2021.
 */
@Difficulty(type = DifficultyType.HARD)
@Backtracking
@TimeComplexity("O(9^(n*n))")
@SpaceComplexity("O(n*n)")
public class Sudoku {

    private int[][] board;
    private int size;

    public Sudoku(int[][] board) {
        this.board = board;
        this.size = board.length;
    }

    public boolean solveSudoku() {
        int row = -1;
        int col = -1;
        boolean isEmpty = true;
        for (int i = 0; i < size; i++) {
            for (int j = 0; j < size; j++) {
                if (board[i][j] == 0) {
                    row = i;
                    col = j;
                    // We still have some remaining missing values in Sudoku
                    isEmpty = false;
                    break;
                }
            }
            if (!isEmpty) {
                break;
            }
        }

        // No empty space left
        if (isEmpty) {
            return true;
        }

        // Else for each-row backtrack
        for (int num = 1; num <= size; num++) {
            if (isSafe(board, row, col, num)) {
                board[row][col] = num;
                if (solveSudoku()) {
                    // print(board, n);
                    return true;
                } else {
                    // replace it
                    board[row][col] = 0;
                }
            }
        }
        return false;
    }

    private boolean isSafe(int[][] board, int row, int col, int num) {
        // Row has the unique (row-clash)
        for (int d = 0; d < board.length; d++) {
            // Check if the number we are trying to place is already present in that row
            if (board[row][d] == num) {
                return false;
            }
        }

        // Column has the unique numbers (column-clash)
        for (int r = 0; r < board.length; r++) {
            // Check if the number we are trying to place is already present in that column
            if (board[r][col] == num) {
                return false;
            }
        }

        // Corresponding square has unique number (box-clash)
        int sqrt = (int) Math.sqrt(board.length);
        int boxRowStart = row - row % sqrt;
        int boxColStart = col - col % sqrt;
        for (int r = boxRowStart; r < boxRowStart + sqrt; r++) {
            for (int d = boxColStart; d < boxColStart + sqrt; d++) {
                if (board[r][d] == num) {
                    return false;
                }
            }
        }

        // if there is no clash, it's safe
        return true;
    }

    public void print() {
        // We got the answer, just print it
        for (int r = 0; r < size; r++) {
            for (int d = 0; d < size; d++) {
                System.out.print(board[r][d]);
                System.out.print(" ");
            }
            System.out.print("\n");
            if ((r + 1) % (int) Math.sqrt(size) == 0) {
                System.out.print("");
            }
        }
    }
}
Barnsley have been linked with a move for Bournemouth winger Connor Mahoney. A potential loan deal is thought to be on the table for the highly-rated 20-year-old. Reds boss Paul Heckingbottom is believed to have tried to sign the former Blackburn Rovers and Accrington Stanley player last season. Mahoney has represented England at Under-17, U-18 and U-20 level but has yet to make his Premier League debut for the Cherries. He did, however, start in a 2-2 home FA Cup clash with Wigan earlier this month before being substituted at half-time and featured for 64 minutes in a 3-0 loss in the away replay at the DW Stadium. Nottingham Forest and Celtic have been linked with the player in the past.
import React from 'react';
import { SafeAreaView, ScrollView, StyleSheet, View } from 'react-native';
import { Stack } from 'react-native-spacing-system';
import { Button, LabeledButton } from 'semantic-ui-react-native';

const App = () => {
  return (
    <SafeAreaView style={{ flex: 1 }}>
      <ScrollView contentInsetAdjustmentBehavior="automatic" style={styles.scrollView}>
        <View style={{ alignItems: 'center', justifyContent: 'center', margin: 10 }}>
          <View style={{ flexDirection: 'row', flexWrap: 'wrap' }}>
            <Button title="Save" style={{ margin: 5 }} />
            <Button title="Save" disabled style={{ margin: 5 }} />
            <Button title="Save" color="primary" style={{ margin: 5 }} />
            <Button title="Save" color="primary" disabled style={{ margin: 5 }} />
            <Button title="Save" color="secondary" style={{ margin: 5 }} />
            <Button title="Save" color="secondary" disabled style={{ margin: 5 }} />
            <Button title="Save" color="red" style={{ margin: 5 }} />
            <Button title="Save" color="red" disabled style={{ margin: 5 }} />
          </View>
          <View style={{ flexDirection: 'row' }}>
            <Button title="Add Favorite" color="secondary" iconName="heart" iconType="AntDesign" fluid circular style={{ marginRight: 10 }} />
            <Button title="Add Favorite" color="secondary" iconName="heart" iconType="AntDesign" disabled />
          </View>
          <View style={{ flexDirection: 'row', marginVertical: 10 }}>
            <Button outline title="Add Friend" color="secondary" iconName="user" iconType="FontAwesome" style={{ marginRight: 10 }} />
            <Button loading outline title="Add Friend" color="red" iconName="user" iconType="FontAwesome" fluid style={{ marginRight: 10 }} />
            <Button disabled loading outline title="Add Friend" color="red" iconName="user" iconType="FontAwesome" />
          </View>
          <Stack size={10} />
          <Button color="secondary" iconName="basket" iconType="MaterialCommunityIcons" />
          <Stack size={10} />
          <Button circular color="secondary" iconName="basket" iconType="MaterialCommunityIcons" />
          <Stack size={10} />
          <LabeledButton label="Like" labelIcon="heart" labelIconType="AntDesign" title="2,048" />
          <Stack size={10} />
          <LabeledButton loading label="Like" labelIcon="heart" labelIconType="AntDesign" title="2,048" />
          <Stack size={10} />
          <LabeledButton loading disabled label="Like" labelIcon="heart" labelIconType="AntDesign" title="2,048" />
          <Stack size={10} />
          <LabeledButton pointing label="Like" labelIcon="heart" labelIconType="AntDesign" title="2,048" labelRight />
          <Stack size={10} />
          <LabeledButton label="Like" labelIcon="heart" labelIconType="AntDesign" title="2,048" labelRight />
          <Stack size={10} />
          <LabeledButton labelIcon="heart" labelIconType="AntDesign" title="2,048" labelRight />
          <Stack size={10} />
          <LabeledButton pointing color="red" label="Like" labelIcon="heart" labelIconType="AntDesign" title="2,048" />
          <Stack size={10} />
          <LabeledButton outline pointing color="primary" label="Forks" labelIcon="fork" labelIconType="AntDesign" title="1,048" />
          <Stack size={10} />
          <LabeledButton circular outline pointing color="primary" label="Forks" labelIcon="fork" labelIconType="AntDesign" title="1,048" />
          <Stack size={10} />
          <LabeledButton labelIcon="pause" labelIconType="Fontisto" title="Pause" />
        </View>
      </ScrollView>
    </SafeAreaView>
  );
};

const styles = StyleSheet.create({
  scrollView: { flex: 1, height: '100%' }
});

export default App;
/*
 * Copyright (c) 2021, WSO2 Inc. (http://www.wso2.org) All Rights Reserved.
 *
 * WSO2 Inc. licenses this file to you under the Apache License,
 * Version 2.0 (the "License"); you may not use this file except
 * in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */

package io.ballerina.stdlib.serdes;

import com.google.protobuf.ByteString;
import com.google.protobuf.Descriptors.Descriptor;
import com.google.protobuf.Descriptors.FieldDescriptor;
import com.google.protobuf.DynamicMessage;
import com.google.protobuf.InvalidProtocolBufferException;
import io.ballerina.runtime.api.TypeTags;
import io.ballerina.runtime.api.creators.TypeCreator;
import io.ballerina.runtime.api.creators.ValueCreator;
import io.ballerina.runtime.api.types.ArrayType;
import io.ballerina.runtime.api.types.Field;
import io.ballerina.runtime.api.types.RecordType;
import io.ballerina.runtime.api.types.Type;
import io.ballerina.runtime.api.types.UnionType;
import io.ballerina.runtime.api.utils.StringUtils;
import io.ballerina.runtime.api.values.BArray;
import io.ballerina.runtime.api.values.BError;
import io.ballerina.runtime.api.values.BMap;
import io.ballerina.runtime.api.values.BObject;
import io.ballerina.runtime.api.values.BString;
import io.ballerina.runtime.api.values.BTypedesc;
import io.ballerina.stdlib.serdes.protobuf.DataTypeMapper;

import java.util.Collection;
import java.util.HashMap;
import java.util.Map;

import static io.ballerina.stdlib.serdes.Utils.SERDES_ERROR;
import static io.ballerina.stdlib.serdes.Utils.createSerdesError;

/**
 * Deserializer class to generate Ballerina value from byte array.
 */
public class Deserializer {

    static final String ATOMIC_FIELD_NAME = "atomicField";
    static final String ARRAY_FIELD_NAME = "arrayfield";
    static final String NULL_FIELD_NAME = "nullField";
    static final String SCHEMA_NAME = "schema";
    static final String UNION_FIELD_NAME = "unionelement";
    static final String UNION_TYPE_IDENTIFIER = "ballerinauniontype";
    static final String UNION_FIELD_SEPARATOR = "__";

    static final String UNSUPPORTED_DATA_TYPE = "Unsupported data type: ";
    static final String DESERIALIZATION_ERROR_MESSAGE = "Failed to Deserialize data: ";
    static final String MISSING_ENTRY_IN_DATATYPE = "Missing entry in datatype for ";
    static final String UNSUPPORTED_UNION_TYPE = "Unsupported union type";

    /**
     * Creates an anydata object from a byte array after deserializing.
     *
     * @param des Deserializer object.
     * @param encodedMessage Byte array corresponding to encoded data.
     * @param dataType Data type of the encoded value.
     * @return anydata object.
     */
    public static Object deserialize(BObject des, BArray encodedMessage, BTypedesc dataType) {
        Descriptor schema = (Descriptor) des.getNativeData(SCHEMA_NAME);
        Object object = null;
        try {
            DynamicMessage dynamicMessage = generateDynamicMessageFromBytes(schema, encodedMessage);
            object = dynamicMessageToBallerinaType(dynamicMessage, dataType, schema);
        } catch (BError e) {
            return e;
        } catch (InvalidProtocolBufferException e) {
            return createSerdesError(DESERIALIZATION_ERROR_MESSAGE + e.getMessage(), SERDES_ERROR);
        }
        return object;
    }

    private static DynamicMessage generateDynamicMessageFromBytes(Descriptor schema, BArray encodedMessage)
            throws InvalidProtocolBufferException {
        return DynamicMessage.parseFrom(schema, encodedMessage.getBytes());
    }

    private static Object dynamicMessageToBallerinaType(DynamicMessage dynamicMessage, BTypedesc typedesc,
                                                        Descriptor schema) {
        Type type = typedesc.getDescribingType();
        if (type.getTag() <= TypeTags.BOOLEAN_TAG) {
            FieldDescriptor fieldDescriptor = schema.findFieldByName(ATOMIC_FIELD_NAME);
            return getBallerinaPrimitiveValueFromMessage(dynamicMessage.getField(fieldDescriptor));
        } else if (type.getTag() == TypeTags.UNION_TAG) {
            FieldDescriptor fieldDescriptor = schema.findFieldByName(ATOMIC_FIELD_NAME);
            DynamicMessage dynamicMessageForUnion = (DynamicMessage) dynamicMessage.getField(fieldDescriptor);
            return getBallerinaUnionTypeValueFromMessage(dynamicMessageForUnion, type, schema);
        } else if (type.getTag() == TypeTags.ARRAY_TAG) {
            ArrayType arrayType = (ArrayType) type;
            Type elementType = arrayType.getElementType();
            FieldDescriptor fieldDescriptor = schema.findFieldByName(ARRAY_FIELD_NAME);
            schema = fieldDescriptor.getContainingType();
            return getBallerinaArrayValueFromMessage(dynamicMessage.getField(fieldDescriptor), elementType, schema, 1);
        } else if (type.getTag() == TypeTags.RECORD_TYPE_TAG) {
            Map<String, Object> mapObject = getBallerinaRecordValueFromMessage(dynamicMessage, type, schema);
            return ValueCreator.createRecordValue(type.getPackage(), type.getName(), mapObject);
        } else {
            throw createSerdesError(UNSUPPORTED_DATA_TYPE + type.getName(), SERDES_ERROR);
        }
    }

    private static Object getBallerinaPrimitiveValueFromMessage(Object value) {
        if (value instanceof String) {
            return StringUtils.fromString((String) value);
        }
        return value;
    }

    private static Object getBallerinaArrayValueFromMessage(Object value, Type type, Descriptor schema,
                                                            int unionFieldIdentifier) {
        if (value instanceof ByteString) {
            ByteString byteString = (ByteString) value;
            return ValueCreator.createArrayValue(byteString.toByteArray());
        } else {
            Collection collection = (Collection) value;
            BArray bArray = ValueCreator.createArrayValue(TypeCreator.createArrayType(type));
            for (Object element : collection) {
                if (type.getTag() == TypeTags.STRING_TAG) {
                    bArray.append(StringUtils.fromString((String) element));
                } else if (type.getTag() == TypeTags.ARRAY_TAG) {
                    ArrayType arrayType = (ArrayType) type;
                    Type elementType = arrayType.getElementType();
                    String fieldName;
                    if (elementType.getTag() == TypeTags.UNION_TAG) {
                        fieldName = UNION_FIELD_NAME + unionFieldIdentifier;
                    } else if (elementType.getTag() == TypeTags.ARRAY_TAG) {
                        fieldName = UNION_FIELD_NAME + unionFieldIdentifier;
                        unionFieldIdentifier++;
                    } else {
                        fieldName = elementType.getName();
                    }
                    Descriptor nestedSchema = schema.findNestedTypeByName(fieldName);
                    DynamicMessage nestedDynamicMessage = (DynamicMessage) element;
                    FieldDescriptor fieldDescriptor = nestedSchema.findFieldByName(fieldName);
                    Object nestedArrayContent = nestedDynamicMessage.getField(fieldDescriptor);
                    BArray nestedArray = (BArray) getBallerinaArrayValueFromMessage(nestedArrayContent, elementType,
                            nestedSchema, unionFieldIdentifier);
                    bArray.append(nestedArray);
                } else if (type.getTag() == TypeTags.UNION_TAG) {
                    DynamicMessage dynamicMessageForUnion = (DynamicMessage) element;
                    bArray.append(getBallerinaUnionTypeValueFromMessage(dynamicMessageForUnion, type, schema));
                } else if (type.getTag() == TypeTags.RECORD_TYPE_TAG) {
                    Map<String, Object> mapObject =
                            getBallerinaRecordValueFromMessage((DynamicMessage) element, type, schema);
                    bArray.append(ValueCreator.createRecordValue(type.getPackage(), type.getName(), mapObject));
                } else {
                    bArray.append(element);
                }
            }
            return bArray;
        }
    }

    private static Map<String, Object> getBallerinaRecordValueFromMessage(DynamicMessage dynamicMessage, Type type,
                                                                          Descriptor schema) {
        Map<String, Object> map = new HashMap<>();
        for (Map.Entry<FieldDescriptor, Object> entry : dynamicMessage.getAllFields().entrySet()) {
            String fieldName = entry.getKey().getName();
            Object value = entry.getValue();
            if (value instanceof DynamicMessage) {
                DynamicMessage nestedDynamicMessage = (DynamicMessage) value;
                String fieldType = nestedDynamicMessage.getDescriptorForType().getName();
                String[] processFieldName = fieldType.split("_");
                String unionCheck = processFieldName[processFieldName.length - 1];
                if (unionCheck.contains(UNION_TYPE_IDENTIFIER)) {
                    Descriptor unionSchema = schema.findNestedTypeByName(fieldType);
                    Type unionType = null;
                    RecordType recordType = (RecordType) type;
                    for (Map.Entry<String, Field> member : recordType.getFields().entrySet()) {
                        if (member.getKey().equals(fieldName)) {
                            unionType = member.getValue().getFieldType();
                            break;
                        }
                    }
                    map.put(fieldName, getBallerinaUnionTypeValueFromMessage(nestedDynamicMessage, unionType,
                            unionSchema));
                } else {
                    Map<String, Object> nestedMap =
                            getBallerinaRecordValueFromMessage(nestedDynamicMessage, type, schema);
                    String recordTypeName = getRecordTypeName(type, fieldName);
                    BMap<BString, Object> nestedRecord =
                            ValueCreator.createRecordValue(type.getPackage(), recordTypeName, nestedMap);
                    map.put(fieldName, nestedRecord);
                }
            } else if (value instanceof ByteString || entry.getKey().isRepeated()) {
                if (!(value instanceof ByteString)) {
                    Type elementType = getArrayElementType(type, fieldName);
                    Object handleArray = getBallerinaArrayValueFromMessage(value, elementType, schema, 1);
                    map.put(fieldName, handleArray);
                } else {
                    Object handleArray = getBallerinaArrayValueFromMessage(value, type, schema, 1);
                    map.put(fieldName, handleArray);
                }
            } else if (DataTypeMapper.getProtoTypeFromJavaType(value.getClass().getSimpleName()) != null) {
                Object handlePrimitive = getBallerinaPrimitiveValueFromMessage(value);
                map.put(fieldName, handlePrimitive);
            } else {
                throw createSerdesError(UNSUPPORTED_DATA_TYPE + value.getClass().getSimpleName(), SERDES_ERROR);
            }
        }
        return map;
    }

    private static String getRecordTypeName(Type type, String fieldName) {
        RecordType recordType = (RecordType) type;
        for (Map.Entry<String, Field> entry : recordType.getFields().entrySet()) {
            Type fieldType = entry.getValue().getFieldType();
            if (fieldType.getTag() == TypeTags.RECORD_TYPE_TAG && entry.getKey().equals(fieldName)) {
                return fieldType.getName();
            } else if (fieldType.getTag() == TypeTags.RECORD_TYPE_TAG) {
                return getRecordTypeName(fieldType, fieldName);
            }
        }
        throw createSerdesError(DESERIALIZATION_ERROR_MESSAGE + MISSING_ENTRY_IN_DATATYPE + fieldName, SERDES_ERROR);
    }

    private static Type getArrayElementType(Type type, String fieldName) {
        RecordType recordType = (RecordType) type;
        for (Map.Entry<String, Field> entry : recordType.getFields().entrySet()) {
            if (entry.getValue().getFieldType().getTag() == TypeTags.ARRAY_TAG
                    && entry.getKey().equals(fieldName)) {
                ArrayType arrayType = (ArrayType) entry.getValue().getFieldType();
                return arrayType.getElementType();
            } else if (entry.getValue().getFieldType().getTag() == TypeTags.RECORD_TYPE_TAG) {
                return getArrayElementType(entry.getValue().getFieldType(), fieldName);
            }
        }
        throw createSerdesError(DESERIALIZATION_ERROR_MESSAGE + MISSING_ENTRY_IN_DATATYPE + fieldName, SERDES_ERROR);
    }

    private static Object getBallerinaUnionTypeValueFromMessage(DynamicMessage dynamicMessage, Type type,
                                                                Descriptor schema) {
        for (Map.Entry<FieldDescriptor, Object> entry : dynamicMessage.getAllFields().entrySet()) {
            Object value = entry.getValue();
            if (entry.getKey().getName().equals(NULL_FIELD_NAME) && (Boolean) value) {
                return null;
            }
            if (value instanceof DynamicMessage) {
                DynamicMessage dynamicMessageForUnion = (DynamicMessage) entry.getValue();
                Type recordType = getCorrespondingElementTypeFromUnion(type, entry.getKey().getName());
                Map<String, Object> mapObject =
                        getBallerinaRecordValueFromMessage(dynamicMessageForUnion, recordType, schema);
                return ValueCreator.createRecordValue(recordType.getPackage(), recordType.getName(), mapObject);
            } else if (value instanceof ByteString || entry.getKey().isRepeated()) {
                Type elementType = getCorrespondingElementTypeFromUnion(type, entry.getKey().getName());
                return getBallerinaArrayValueFromMessage(value, elementType, schema, 1);
            } else {
                return getBallerinaPrimitiveValueFromMessage(entry.getValue());
            }
        }
        throw createSerdesError(DESERIALIZATION_ERROR_MESSAGE + UNSUPPORTED_UNION_TYPE, SERDES_ERROR);
    }

    private static Type getCorrespondingElementTypeFromUnion(Type type, String fieldName) {
        UnionType unionType = (UnionType) type;
        String typeFromFieldName = fieldName.split(UNION_FIELD_SEPARATOR)[0];
        for (Type memberType : unionType.getMemberTypes()) {
            if (memberType.getTag() == TypeTags.ARRAY_TAG) {
                ArrayType arrayType = (ArrayType) memberType;
                String elementType;
                if (DataTypeMapper.getProtoTypeFromTag(arrayType.getElementType().getTag()) != null) {
                    elementType = DataTypeMapper.getProtoTypeFromTag(arrayType.getElementType().getTag());
                } else {
                    elementType = arrayType.getElementType().getName();
                }
                if (typeFromFieldName.equals(elementType)) {
                    return arrayType.getElementType();
                }
            } else if (memberType.getTag() == TypeTags.RECORD_TYPE_TAG) {
                RecordType recordType = (RecordType) memberType;
                if (recordType.getName().equals(typeFromFieldName)) {
                    return recordType;
                }
            }
        }
        throw createSerdesError(DESERIALIZATION_ERROR_MESSAGE + UNSUPPORTED_UNION_TYPE, SERDES_ERROR);
    }
}
One of Arkansas's top politicians, State Senate Majority Leader Jim Hendren, a Republican, is using unpaid, forced inmate labor to work at his plastics company, which makes dock floats for Home Depot and Walmart, according to Prison Legal News. Shocking? Sure. Illegal? Well, it depends on whom you ask. Prison labor, where inmates earn nothing or close to nothing, is used to man call centers, manufacture equipment for the US military, and otherwise put small businesses around the country out of business because they simply can't compete with an entity that has few or no labor costs. It's the American way of doing business. The odd thing about the program that Hendren is taking advantage of is that many judges and politicians, especially in the south, consider it to be "progressive." For example, courts in Oklahoma and Arkansas send men to the Drug and Alcohol Recovery Program (DARP) as an alternative to prison, and there they are supposed to receive drug treatment and counseling. A recent investigation by the Center for Investigative Reporting, however, found that there is no treatment or counseling and that prisoners serve simply as free labor for private industry. Indeed, in the DARP program, prisoners work full-time jobs in factories and chicken processing plants, companies pay a discounted rate to the rehabs for the labor, and literally none of that money is passed on to the prisoners, either as salary or for counseling. It's slave labor. If they refuse to do the work, they are moved from the drug rehab to a state prison. Hendren, for his part, isn't shying away from what he's doing. He bragged to the press recently, "I've been creating jobs for over 20 years. A country cannot survive if it cannot feed itself and make things." He added that he's "proud to give kids in drug rehab programs a second chance." A lawsuit may soon change all that. Mark Fochtman, a former rehab prisoner, filed suit in an Arkansas court, saying that he was forced to work in Hendren's company on a production line that melted plastic into dock floats and boat slips. In his affidavit, he said, "The environment was very caustic working around melted plastics. Because of the work environment, the turnover rate during my time was high." He said that if DARP workers got hurt on the job and couldn't work, they were kicked out of the program and sent to prison. Others just worked through the pain to avoid prison. Another prisoner, Dylan Willis, who is also a plaintiff in the suit, said that his face, arms, and legs are still covered with burn scars from molten plastic that shot out of a machine. Willis said his supervisors shrugged off his injuries as "cosmetic" and gave him some Neosporin. Hendren is well connected in Arkansas politics. Besides being the Senate Majority Leader, he is Governor Asa Hutchinson's nephew. His father, Kim, with whom he started the company, also is a Republican state legislator. If all of this sounds illegal, it likely is. In 2014, the Arkansas Department of Community Corrections revoked DARP's license to house parolees after discovering that the program refused to pay workers the minimum wage. As a result, Arkansas prisons are no longer supposed to send parolees to the program. The courts, however, continue to do so in violation of the law, but with no consequences. Of course, the same thing happens in the federal prison system, too. Federal Prison Industries, also known as UNICOR, a wholly-owned US government corporation, was created in 1934 as a labor program for federal prisoners. 
Like Hendren's company, it forces prisoners to manufacture goods for sale to a variety of US government agencies and departments. When I was incarcerated at the Federal Correctional Institution at Loretto, Pennsylvania, we had a UNICOR factory that manufactured high-speed cable for the US Navy. So much of it was deemed to be substandard that the plant was closed twice during my short 23-month stay there. The most obvious problems, then, are twofold: slave labor doesn't make for quality production, and private manufacturers can't compete with an organization that has a payroll of almost nothing. Using forced labor in private industry ought to be illegal everywhere in the country. Indeed, society would be better off if prisoners were paid a real wage. They could then pay whatever restitution they may have, whether to victims or to the government, and they could save money that they then could use to get back on their feet once they're released from prison. But that won't happen. There is no "prisoner lobby" on Capitol Hill. And no member of Congress or the state legislatures will win any votes by advocating that convicted criminals be paid even the minimum wage. It's a vicious cycle that will repeat itself until a courageous judge finally puts an end to it.
import { Type } from "../models/type";

export const fire = new Type({ name: "Fire" });
export const water = new Type({ name: "Water" });
export const plant = new Type({ name: "Plant" });
export const electric = new Type({ name: "Electric" });
The power of one: benefits of individual self-expansion

The self-expansion model suggests that the acquisition of new identities, capabilities, perspectives, and resources primarily occurs in the context of romantic relationships and that self-expanding activities have numerous benefits for relationships. However, self-expansion can theoretically occur outside of a relational context, yet little is known about the benefits of self-expanding activities for individuals. Across six experimental studies, we examined (a) whether nonrelational novel, exciting, and interesting activities produce self-expansion and (b) whether engaging in nonrelational self-expanding activities results in greater exerted effort. In Studies 1 and 2, individuals who engaged in novel, exciting, and interesting activities experienced greater self-expansion than those who engaged in control activities. In Studies 3–6, individuals who engaged in high self-expanding activities exerted more effort on cognitive and physical tasks than those who engaged in low self-expanding activities, and this effect was not due to depleted self-regulatory resources, altered mood, or changes in self-esteem (Studies 5 and 6).
IT WOULD be considered nothing but harmless fun in Australia, but it’s considered deeply disrespectful in Thailand. TWO American tourists who ran a popular travel social media account called “Traveling Butts” have been arrested in Thailand after posting a photo of themselves at a famous Buddhist temple with their rear ends exposed. Joseph and Travis Dasilva, both 38, caused an uproar in Buddhist-majority Thailand last week after their photo taken at Bangkok’s Wat Arun was widely shared, prompting a police investigation. The couple, whom police identified only by their first names, maintained a since-deleted Instagram account where they posed for photos at famous tourist destinations around the world with their bottoms exposed. The account had more than 14,000 followers. Police said the men each paid a fine of 5000 baht ($A200) at a police station near the temple and have been handed over to immigration authorities. Police are also considering charging the men under Thailand’s computer crimes act as the image was posted online, Bangkok Yai district police station’s deputy chief Wisit Suwan said. The controversial law is often used against political activists to stifle free speech online or against those accused of insulting Thailand’s monarchy. It carries punishments of up to five years in prison.
# Imports reconstructed so the snippet runs standalone; in colour-science this
# helper lives alongside plot_chromaticity_diagram in colour.plotting.diagrams,
# and the typing aliases come from colour.hints.
import matplotlib.pyplot as plt

from colour.colorimetry import MultiSpectralDistributions
from colour.hints import Any, Boolean, Sequence, Tuple, Union
from colour.plotting.diagrams import plot_chromaticity_diagram


def plot_chromaticity_diagram_CIE1960UCS(
    cmfs: Union[
        MultiSpectralDistributions,
        str,
        Sequence[Union[MultiSpectralDistributions, str]],
    ] = "CIE 1931 2 Degree Standard Observer",
    show_diagram_colours: Boolean = True,
    show_spectral_locus: Boolean = True,
    **kwargs: Any,
) -> Tuple[plt.Figure, plt.Axes]:
    # Delegate to the generic diagram plotter, forcing the CIE 1960 UCS method.
    settings = dict(kwargs)
    settings.update({"method": "CIE 1960 UCS"})

    return plot_chromaticity_diagram(
        cmfs, show_diagram_colours, show_spectral_locus, **settings
    )
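A minimal usage sketch for the function above; the observer name is the function's own default, and the output filename is illustrative:

```python
# Hypothetical usage: render the CIE 1960 UCS diagram and save it; the
# returned figure/axes allow further customisation before saving.
figure, axes = plot_chromaticity_diagram_CIE1960UCS(
    "CIE 1931 2 Degree Standard Observer",
    show_spectral_locus=True,
)
figure.savefig("cie_1960_ucs.png", dpi=150)  # filename is illustrative
```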
Albert Pike Early life and education Albert Pike was born in Boston, Massachusetts, on December 29, 1809, the son of Benjamin and Sarah (Andrews) Pike, and spent his childhood in Byfield and Newburyport, Massachusetts. His colonial ancestors settled the area in 1635, and included John Pike (1613–1688/1689), the founder of Woodbridge, New Jersey. He attended school in Newburyport and Framingham until he was 15. In August 1825, he passed entrance exams at Harvard University, though when the college requested payment of tuition fees for the first two years, he chose not to attend. He began a program of self-education, later becoming a schoolteacher in Gloucester, North Bedford, Fairhaven and Newburyport. Pike was an imposing figure; six feet tall and 300 pounds, with hair that reached his shoulders and a long beard. In 1831, he left Massachusetts to travel west, first stopping in Nashville, Tennessee and later moving to St. Louis, Missouri. There he joined an expedition to Taos, New Mexico, devoted to hunting and trading. During the excursion his horse broke free and ran, forcing Pike to walk the remaining 500 miles to Taos. After this, he joined a trapping expedition to the Llano Estacado in New Mexico and Texas. Trapping was minimal and, after traveling about 1,300 miles (650 on foot), he finally arrived at Fort Smith, Arkansas, in December 1832. Pike's relative, Jacob, married Bethina Jones, daughter of the Chief of the Choctaw Nation. Jacob and Bethina's son, Benjamin M. Pike, was fluent in several Indian dialects and served as representative between the Native American tribes in Oklahoma and the government of the United States of America. Journalist and lawyer Settling in Arkansas in 1833, Pike taught in a school and wrote a series of articles for the Little Rock Arkansas Advocate under the pen name of "Casca." The articles were sufficiently well received for him to be asked to join the newspaper's staff. Under Pike's administration, the Advocate promoted the viewpoint of the Whig Party in a politically volatile and divided Arkansas. After marrying Mary Ann Hamilton in 1834, he purchased the newspaper. He was the first reporter for the Arkansas Supreme Court. He wrote a book (published anonymously), titled The Arkansas Form Book, which was a guidebook for lawyers. Pike began to study law and was admitted to the bar in 1837, selling the Advocate the same year. He also made several contacts among the Native American tribes in the area. He specialized in claims on behalf of Native Americans against the federal government. In 1852, he represented the Creek Nation before the Supreme Court in a claim regarding ceded tribal land. In 1854 he advocated for the Choctaw and Chickasaw, although compensation later awarded to the tribes in 1856 and 1857 was insufficient. These relationships were to influence the course of his Civil War service. Additionally, Pike wrote on several legal subjects. He also continued writing poetry, a hobby he had begun in his youth in Massachusetts. His poems were highly regarded in his day, but are now mostly forgotten. Several volumes of his works were privately published posthumously by his daughter. In 1859, he received an honorary Master of Arts degree from Harvard. Mexican–American War When the Mexican–American War started, Pike joined the Regiment of Arkansas Mounted Volunteers (a cavalry regiment) and was commissioned as a troop commander with the rank of captain in June 1846. With his regiment, he fought in the Battle of Buena Vista. Pike was discharged in June 1847. 
He and his commander, Colonel John Selden Roane, had several differences of opinion. This situation finally led to an "inconclusive" duel between Pike and Roane on July 29, 1847, near Fort Smith, Arkansas. Although several shots were fired in the duel, nobody was injured, and the two were persuaded by their seconds to discontinue it. After the war, Pike returned to the practice of law, moving to New Orleans for a time beginning in 1853. He wrote another book, Maxims of the Roman Law and Some of the Ancient French Law, as Expounded and Applied in Doctrine and Jurisprudence. Although unpublished, this book increased his reputation among his associates in law. He returned to Arkansas in 1857, gaining some amount of prominence in the legal field. At the Southern Commercial Convention of 1854, Pike said the South should remain in the Union and seek equality with the North, but if the South "were forced into an inferior status, she would be better out of the Union than in it." His stand was that states' rights superseded national law, and he supported the idea of a Southern secession. This stand is made clear in his pamphlet of 1861, "State or Province, Bond or Free?" American Civil War In 1861, Pike penned the lyrics to "Dixie to Arms!" At the beginning of the war, Pike was appointed as Confederate envoy to the Native Americans. In this capacity he negotiated several treaties, one of the most important being with Cherokee chief John Ross, which was concluded in 1861. At the time, Ross agreed to support the Confederacy, which promised the tribes a Native American state if it won the war. Ross later changed his mind and left Indian Territory, but the succeeding Cherokee government maintained the alliance. Pike was commissioned as a brigadier general on November 22, 1861, and given a command in the Indian Territory. With Gen. Ben McCulloch, Pike trained three Confederate regiments of Indian cavalry, most of whom belonged to the "civilized tribes", whose loyalty to the Confederacy was variable. Although initially victorious at the Battle of Pea Ridge (Elkhorn Tavern) in March 1862, Pike's unit was later defeated in a counterattack after falling into disarray. When Pike was ordered in May 1862 to send troops to Arkansas, he resigned in protest. As in the previous war, Pike came into conflict with his superior officers, at one time drafting a letter to Jefferson Davis complaining about his direct superior. After Pea Ridge, Pike was faced with charges that his Native American troops had scalped soldiers in the field. Maj. Gen. Thomas C. Hindman also charged Pike with mishandling of money and material, ordering his arrest. Both of these charges were later found to be considerably lacking in evidence; nevertheless, Pike, facing arrest, escaped into the hills of Arkansas, sending his resignation from the Confederate States Army on July 12. He was at length arrested on November 3 under charges of insubordination and treason, and held briefly in Warren, Texas. His resignation was accepted on November 11, and he was allowed to return to Arkansas. Death and legacy Pike died on April 2, 1891, in Washington, D.C., at the age of 81, and was buried at Oak Hill Cemetery. Burial was against his wishes; he had left instructions for his body to be cremated. In 1944, his remains were moved to the House of the Temple, headquarters of the Southern Jurisdiction of the Scottish Rite. A memorial to Pike is located in the Judiciary Square neighborhood of Washington, D.C. 
He is the only Confederate military officer with an outdoor statue in Washington, D.C., and in 2019 Delegate Eleanor Holmes Norton called for it to be removed. The Albert Pike Memorial Temple is an historic Masonic lodge in Little Rock, Arkansas; the structure is listed on the National Register of Historic Places. Figure in conspiracy theory Pike has become a key figure for conspiracy theorists. Some people claim that stories about Pike, including his "forecast" of three world wars, are bogus and derive from the Taxil hoax. In the 2007 movie National Treasure: Book of Secrets, Albert Pike is mentioned as a Confederate general to whom a missive from Queen Victoria is addressed. Poetry As a young man of letters, Pike wrote poetry, and he continued to do so for the rest of his life. At 23, he published his first poem, "Hymns to the Gods." Later work was printed in literary journals such as Blackwood's Edinburgh Magazine and local newspapers. His first collection of poetry, Prose Sketches and Poems Written in the Western Country, was published in 1834. He later gathered many of his poems and republished them in Hymns to the Gods and Other Poems (1872). After his death these were published again in Gen. Albert Pike's Poems (1900) and Lyrics and Love Songs (1916). The poem "The Old Canoe" was also long attributed to Pike. The attribution apparently arose because, around the time the poem was circulating in the press, probably without credit, one of Pike's political foes composed a doggerel about him that was likewise called "The Old Canoe"; its subject was a canoe in which Pike left Columbia, Tennessee, when he was a young man practicing law there. Pike told Senator Edward W. Carmack that he was not the author of "The Old Canoe," and could not imagine how he ever got the credit for it. The rightful author was Emily Rebecca Page.
Digital Technologies in Italian Cultural Institutions The application of digital technologies plays a crucial role in offering solutions to enhance the economic potential of the cultural sector, through new modalities for distribution and reception of cultural experiences. The question of whether and how ICT adds value to collections, museums, and cultural sites and promotes access and communication with users/visitors is an open one. This chapter aims to provide empirical evidence on the effects of technological innovations on the economic performance of cultural institutions. To this end, the authors use data from Italy's statistical office covering the universe of Italian cultural organizations in the year 2018. The findings suggest that new digital technologies play a role in enhancing the value and relevance of cultural heritage and its influence on the socio-economic context.
Does the nature of schools matter? An exploration of selected school ecology factors on adolescent perceptions of school connectedness. BACKGROUND Connectedness to school is a significant predictor of adolescent health and academic outcomes. While individual predictors of connectedness have been well-described, little is known about school-level factors which may influence connectedness. A school's ecology, or its structural, functional, and built aspects, coupled with interpersonal interactions, may also help to enhance adolescent connectedness. AIM This study aims to identify school ecological characteristics which predict enhanced connectedness in secondary school. SAMPLE Data from 5,159 Grade 8 students (12-13 years) from 39 randomly selected schools were tracked until the end of Grade 9 (13-14 years). METHOD Students' self-reported school, teacher, and family connectedness, mental health and peer relationships were measured at two time points. Accounting for school-level clustering, student- and school-level ecological characteristics were modelled on self-reported school connectedness in Grades 8 and 9. RESULTS Students' higher school connectedness in Grades 8 and 9 was influenced by greater levels of family connectedness, fewer classroom and peer problems, less difficult secondary school transition, fewer emotional problems, and greater prosocial skills. Seven school-level ecological variables were significantly associated with school connectedness after controlling for student-level predictors. At the school-level, priority for pastoral care and students' aggregated writing skills scores significantly predicted concurrent and future enhanced connectedness. CONCLUSIONS Interventions to improve students' school connectedness should address individual student characteristics and school functional features such as pastoral care strategies and helping students to achieve greater academic outcomes. Future studies should focus on the cumulative longitudinal influence of school ecological and student-level predictors of school connectedness.
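The clustering adjustment described in the METHOD paragraph can be illustrated with a random-intercept model. A minimal sketch in Python using statsmodels; the column names (school_id, connect_g9, family_conn, peer_problems, emotional_problems) are hypothetical stand-ins for the study's variables, not its actual dataset:

```python
# Hedged sketch: students nested within schools, modelled with a school-level
# random intercept. Column names below are illustrative only.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.read_csv("connectedness.csv")  # hypothetical file

model = smf.mixedlm(
    "connect_g9 ~ family_conn + peer_problems + emotional_problems",
    data,
    groups=data["school_id"],  # accounts for school-level clustering
)
result = model.fit()
print(result.summary())
```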
from selenium import webdriver
from selenium.webdriver.firefox.firefox_binary import FirefoxBinary

url = 'https://www.siarh.unicamp.br/concurso/LoginInscricao.jsf;jsessionid=C77E7A6C8A3536700BD4A84BE45F0A6E?modoParam=MANUTENCAO'

# Launch Firefox through geckodriver and open the login page.
binary = FirefoxBinary('firefox')
driver = webdriver.Firefox(firefox_binary=binary, executable_path=r'/usr/local/bin/geckodriver')
driver.get(url)

# Fill in the CPF field and submit the first form.
cpfElem = driver.find_element_by_name('formulario:cpf')
cpfElem.send_keys('13788904852')
buttonBuscar = driver.find_element_by_class_name("rh-btn")
buttonBuscar.click()

# Fill in the password and submit the second form.
senha = driver.find_element_by_name('formulario:senha')
senha.send_keys('<PASSWORD>')
buttonBuscar = driver.find_element_by_class_name("rh-btn")
buttonBuscar.click()
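One fragile point in the script above is that it interacts with elements as soon as they are looked up, assuming the page has already re-rendered after the first submit. A hedged variant using Selenium's explicit waits (same page and field names assumed from the script):

```python
# Sketch only: wait for the password field to appear after the first submit
# instead of assuming the page is ready.
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

wait = WebDriverWait(driver, 10)  # poll for up to 10 seconds
senha = wait.until(
    EC.presence_of_element_located((By.NAME, 'formulario:senha'))
)
senha.send_keys('<PASSWORD>')  # placeholder kept from the original script
```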
Virtual Design and Interactive Demonstration of the Product Based on Sketchup In order to design industrial products efficiently, Google's free software, SketchUp, was used to design and study a product that can transform from a single chair into a two-person bench, and the SketchyPhysics plug-in was used to produce a dynamic demonstration of the designed product. The design cycle is short, and the model is easy to adjust and operate. SketchUp and its plug-ins are adequate for the design and presentation of industrial products.
import mongoose from "mongoose";
import Station, { IStation, StationType } from "model/Station";

const validStation: IStation = new Station({
  number: 333,
});

describe("Station model", () => {
  beforeAll(async () => {
    await mongoose.connect(process.env.MONGO_URL ?? "", {
      useCreateIndex: true,
      useNewUrlParser: true,
      useUnifiedTopology: true,
    });
  });

  afterAll(async () => {
    // Await the close so Jest does not exit with an open handle.
    await mongoose.connection.close();
  });

  afterEach(async () => {
    await Station.deleteMany({});
  });

  it("Throws an error if station is created without parameters", async () => {
    const station: IStation = new Station();
    await expect(Station.create(station)).rejects.toThrowError();
  });

  it("Should create a new station with station number", async () => {
    expect.assertions(5);
    const station: IStation = new Station(validStation);
    const spy = jest.spyOn(station, "save");

    const savedStation: IStation = await station.save();

    expect(spy).toHaveBeenCalled();
    expect(savedStation).toMatchObject({
      number: expect.any(Number),
      enabled: expect.any(Boolean),
      stationType: expect.any(String),
    });
    expect(savedStation.number).toBe(333);
    expect(savedStation.enabled).toBe(true);
    expect(savedStation.stationType).toBe(StationType.Regular);
  });
});
/* Adds a new icon record to the list */
static BOOL add_icon(NOTIFYICONDATAW *nid)
{
    struct icon *icon;

    WINE_TRACE("id=0x%x, hwnd=%p\n", nid->uID, nid->hWnd);

    if ((icon = get_icon(nid->hWnd, nid->uID)))
    {
        WINE_WARN("duplicate tray icon add, buggy app?\n");
        return FALSE;
    }

    /* HEAP_ZERO_MEMORY already zero-initializes the allocation, so no
     * separate ZeroMemory call is needed. */
    if (!(icon = HeapAlloc(GetProcessHeap(), HEAP_ZERO_MEMORY, sizeof(*icon))))
    {
        WINE_ERR("out of memory\n");
        return FALSE;
    }

    icon->id      = nid->uID;
    icon->owner   = nid->hWnd;
    icon->display = -1;

    /* Arm the periodic timer when the list goes from empty to non-empty. */
    if (list_empty( &icon_list )) SetTimer( tray_window, 1, 2000, NULL );
    list_add_tail(&icon_list, &icon->entry);

    modify_icon( icon, nid );
    /* Show the icon unless the caller explicitly asked for it to be hidden. */
    if (!((nid->uFlags & NIF_STATE) && (nid->dwStateMask & NIS_HIDDEN))) show_icon( icon );
    return TRUE;
}
Developmental mechanisms underlying variable, invariant and plastic phenotypes. BACKGROUND Discussions of phenotypic robustness often consider scenarios where invariant phenotypes are optimal and assume that developmental mechanisms have evolved to buffer the phenotypes of specific traits against stochastic and environmental perturbations. However, plastic plant phenotypes that vary between environments or variable phenotypes that vary stochastically within an environment may also be advantageous in some scenarios. SCOPE Here the conditions under which invariant, plastic and variable phenotypes of specific traits may confer a selective advantage in plants are examined. Drawing on work from microbes and multicellular organisms, the mechanisms that may give rise to each type of phenotype are discussed. CONCLUSION In contrast to the view of robustness as being the ability of a genotype to produce a single, invariant phenotype, changes in a phenotype in response to the environment, or phenotypic variability within an environment, may also be delivered consistently (i.e. robustly). Thus, for some plant traits, mechanisms have probably evolved to produce plasticity or variability in a reliable manner.
package org.rs2server.rs2.content;

import org.rs2server.rs2.event.Event;
import org.rs2server.rs2.model.Animation;
import org.rs2server.rs2.model.DialogueManager;
import org.rs2server.rs2.model.Graphic;
import org.rs2server.rs2.model.Location;
import org.rs2server.rs2.model.World;
import org.rs2server.rs2.model.boundary.Boundary;
import org.rs2server.rs2.model.boundary.BoundaryManager;
import org.rs2server.rs2.model.container.Container;
import org.rs2server.rs2.model.container.Equipment;
import org.rs2server.rs2.model.minigame.fightcave.FightCave;
import org.rs2server.rs2.model.minigame.warriorsguild.WarriorsGuild;
import org.rs2server.rs2.model.player.Player;

public class Jewellery {

	private static final Animation GEM_PRE_CAST_ANIMATION = Animation.create(714);
	private static final Graphic GEM_PRE_CAST_GRAPHICS = Graphic.create(308, 48, 100);
	private static final Animation TELEPORTING_ANIMATION = Animation.create(715);

	/* Each row is {current item id, item id after using a charge, message}. */
	private static Object[][] GLORY_DATA = {
			{1706, 1704, "You use your amulet's last charge."},
			{1708, 1706, "Your amulet has one charge left."},
			{1710, 1708, "Your amulet has two charges left."},
			{1712, 1710, "Your amulet has three charges left."},
			{11976, 1712, "Your amulet has four charges left."},
			{11978, 11976, "Your amulet has five charges left."},
			{19707, 19707, "Your amulet is forever charged."},
	};

	private static Object[][] SLAYER_RING_DATA = {
			{11873, 4155, "Your slayer ring crumbles to dust."},
			{11872, 11873, "Your slayer ring has one charge left."},
			{11871, 11872, "Your slayer ring has two charges left."},
			{11870, 11871, "Your slayer ring has three charges left."},
			{11869, 11870, "Your slayer ring has four charges left."},
			{11868, 11869, "Your slayer ring has five charges left."},
			{11867, 11868, "Your slayer ring has six charges left."},
			{11866, 11867, "Your slayer ring has seven charges left."},
	};

	private static Object[][] RING_OF_DUELING_DATA = {
			{2566, -1, "Your ring of dueling crumbles to dust."},
			{2564, 2566, "Your ring of dueling has one charge left."},
			{2562, 2564, "Your ring of dueling has two charges left."},
			{2560, 2562, "Your ring of dueling has three charges left."},
			{2558, 2560, "Your ring of dueling has four charges left."},
			{2556, 2558, "Your ring of dueling has five charges left."},
			{2554, 2556, "Your ring of dueling has six charges left."},
			{2552, 2554, "Your ring of dueling has seven charges left."},
	};

	private static Object[][] GAMES_NECKLACE_DATA = {
			{3867, -1, "Your games necklace crumbles to dust."},
			{3865, 3867, "Your games necklace has one charge left."},
			{3863, 3865, "Your games necklace has two charges left."},
			{3861, 3863, "Your games necklace has three charges left."},
			{3859, 3861, "Your games necklace has four charges left."},
			{3857, 3859, "Your games necklace has five charges left."},
			{3855, 3857, "Your games necklace has six charges left."},
			{3853, 3855, "Your games necklace has seven charges left."},
	};

	public enum GemType {
		GLORY, RING_OF_DUELING, SLAYER_RING, GAMES_NECKLACE
	}

	public static boolean rubItem(Player player, int slot, int itemId, boolean operating) {
		if (player.getRFD().isStarted() || FightCave.IN_CAVES.contains(player)
				|| BoundaryManager.isWithinBoundaryNoZ(player.getLocation(), "PestControl")
				|| BoundaryManager.isWithinBoundaryNoZ(player.getLocation(), "PestControlBoat")) {
			player.getActionSender().sendMessage("You can't teleport from here!");
			return false;
		}
		if (!operating || slot == Equipment.SLOT_AMULET) {
			if (itemId == 1704) {
				player.getActionSender().sendMessage("The amulet has lost its charge.");
player.getActionSender().sendMessage("It will need to be recharged before you can use it again."); return true; } if (itemId > 1704 && itemId <= 1712) { itemId -= 1704; if (itemId % 2 == 0) { //Its an equal number.. int divided = itemId / 2; DialogueManager.openDialogue(player, 1712); player.getJewellery().setGem(GemType.GLORY, divided, operating); return true; } } else if (itemId == 11976 || itemId == 11978) { itemId -= 11966; if (itemId % 2 == 0) { //Its an equal number.. int divided = itemId / 2; DialogueManager.openDialogue(player, 1712); player.getJewellery().setGem(GemType.GLORY, divided, operating); return true; } } else if (itemId >= 3852 && itemId <= 3867) { itemId -= 3851; if (itemId % 2 == 0) { //Its an equal number.. int divided = itemId / 2; DialogueManager.openDialogue(player, 1718); player.getJewellery().setGem(GemType.GAMES_NECKLACE, 9 - divided, operating); return true; } } } if (!operating || slot == Equipment.SLOT_RING) { if (itemId >= 2552 && itemId <= 2566) { itemId -= 2550; if (itemId % 2 == 0) { //Its an equal number.. int divided = itemId / 2; DialogueManager.openDialogue(player, 1722); player.getJewellery().setGem(GemType.RING_OF_DUELING, 9 - divided, operating); return true; } } else if (itemId >= 11866 && itemId <= 11873) { itemId = 11874 - itemId; System.out.println(itemId); if (itemId >= 0) { DialogueManager.openDialogue(player, 1353); player.getJewellery().setGem(GemType.SLAYER_RING, itemId, operating); return true; } } } return false; } /** * Notice: This may be set, without being used at all. (Eg. if the player clicks rub, but then clickes the minimap). * Sets how many charges the glory have left, so we can display the correct message. * * @param gemId The id of the specific gem, Glory = 1, Ring of Dueling = 2, Games Necklace = 3. * @param charge The charge our gem have left. :) * @param isOperating true if you're operating the gem, false if not. */ public void setGem(GemType gem, int charge, boolean isOperating) { this.gem = gem; this.gemCharge = charge; this.operate = isOperating; } /** * Gets the players currently rubbed GemType. * * @return The players GemType. */ public GemType getGemType() { return gem; } /** * Used for teleporting, while using a glory/ring of duelling/games necklage. * * @param player The player, who's using a gem for teleporting. * @param location The location of where to go. (Edgewille, Castle wars, Duel arena). */ public void gemTeleport(final Player player, final Location location) { if (gem == null || gemCharge == -1 || player.getCombatState().isDead()) { return; } /* * Prevents mass clicking them. */ if (player.getSettings().getLastTeleport() < 3000) { return; } if (player.getRFD().isStarted() | FightCave.IN_CAVES.contains(player) || BoundaryManager.isWithinBoundaryNoZ(player.getLocation(), "PestControl") || BoundaryManager.isWithinBoundaryNoZ(player.getLocation(), "PestControlBoat")) { player.getActionSender().sendMessage("You can't teleport from here!"); return; } if (WarriorsGuild.IN_GAME.contains(player)) { WarriorsGuild.IN_GAME.remove(player); } if (player.getDatabaseEntity().getPlayerSettings().isTeleBlocked()) { player.getActionSender().sendMessage("A magical force stops you from teleporting."); return; } player.getInstancedNPCs().clear(); Container con = operate ? player.getEquipment() : player.getInventory(); switch (gem) { /* * Player is using a glory. 
*/ case GLORY: if (Location.getWildernessLevel(player, player.getLocation()) > 30) { player.getActionSender().sendMessage("You cannot teleport above level 30 wilderness."); return; } Object[] data1 = GLORY_DATA[gemCharge - 1]; if (!con.replace((Integer) data1[0], (Integer) data1[1])) { return; } player.getActionSender().sendMessage("<col=7f00ff>" + (String) data1[2]); break; /* * Player is using a Ring of Dueling. */ case RING_OF_DUELING: /*if (Location.getWildernessLevel(player, player.getLocation()) > 20) { player.getActionSender().sendMessage("You cannot teleport above level 20 wilderness."); return; }*/ Object[] data2 = RING_OF_DUELING_DATA[gemCharge - 1]; if (!con.replace((Integer) data2[0], (Integer) data2[1])) { return; } player.getActionSender().sendMessage("<col=7f00ff>" + (String) data2[2]); break; /* * Player is using a Games Necklace. */ case GAMES_NECKLACE: /*if (Location.getWildernessLevel(player, player.getLocation()) > 20) { player.getActionSender().sendMessage("You cannot teleport above level 20 wilderness."); return; }*/ Object[] data3 = GAMES_NECKLACE_DATA[gemCharge - 1]; if (!con.replace((Integer) data3[0], (Integer) data3[1])) { return; } player.getActionSender().sendMessage("<col=7f00ff>" + (String) data3[2]); break; /* * Player is using a Slayer ring */ case SLAYER_RING: /*if (Location.getWildernessLevel(player, player.getLocation()) > 20) { player.getActionSender().sendMessage("You cannot teleport above level 20 wilderness."); return; }*/ Object[] data4 = SLAYER_RING_DATA[gemCharge - 1]; if (!con.replace((Integer) data4[0], (Integer) data4[1])) { return; } player.getActionSender().sendMessage("<col=7f00ff>" + (String) data4[2]); break; } player.setCanBeDamaged(false); player.resetBarrows(); player.playAnimation(GEM_PRE_CAST_ANIMATION); player.playGraphics(GEM_PRE_CAST_GRAPHICS); World.getWorld().submit(new Event(1800) { public void execute() { player.setTeleportTarget(location); player.playAnimation(TELEPORTING_ANIMATION); player.setCanBeDamaged(true); this.stop(); } }); player.getSettings().setLastTeleport(System.currentTimeMillis()); } /** * What is our gemtype? */ private GemType gem = null; /** * What is the current gem charge? */ private int gemCharge = -1; /** * Are we operating our glory/ring of dueling/games necklace? */ private boolean operate = false; }
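The id arithmetic in rubItem is easier to see in isolation: charged glory ids run in steps of two from 1704 (uncharged) up to 1712 (four charges), so (itemId - 1704) / 2 recovers the charge count, which then indexes the charge table. A toy Python restatement (ids come from the tables above; the helper name is ours):

```python
# Illustrative only: mirrors the (itemId - base) / 2 trick used in rubItem.
GLORY_BASE = 1704  # uncharged amulet of glory

def glory_charges(item_id: int) -> int:
    offset = item_id - GLORY_BASE
    if offset < 0 or offset > 8 or offset % 2 != 0:
        raise ValueError("not a standard charged-glory id")
    return offset // 2  # 1706 -> 1, 1708 -> 2, 1710 -> 3, 1712 -> 4

assert [glory_charges(i) for i in (1704, 1706, 1708, 1710, 1712)] == [0, 1, 2, 3, 4]
```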
Velocity Profiling of Multiphase Flows Using Capacitive Sensor Sensitivity Gradient Velocity profiling of a flow involves the task of determining the velocity vector at every point in a given flow volume. A new method is proposed for velocity profiling of multiphase flows based on electrical capacitance volume tomography (ECVT) sensors. The proposed method utilizes a mapping between the change in measured capacitances and the displacement of flow that is effected by the spatial gradient of the sensitivity distribution. This novel mapping not only avoids the need for costly image cross correlations but also is fully compatible with existing ECVT sensor and image reconstruction algorithms. Simulation and measurement results are provided to demonstrate the proposed method.
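The mapping the abstract describes can be caricatured in a linearised toy form: if S(x) is the sensitivity distribution of one capacitance channel, displacing the flow by v·Δt changes that channel's reading by roughly ΔC ≈ Δt (∇S · v), so stacking many channels gives a small least-squares problem for v. A deliberately simplified numpy sketch; all symbols and numbers here are illustrative, not the paper's notation or data:

```python
import numpy as np

# Toy setup: 8 capacitance channels, each with a known sensitivity gradient
# averaged over the voxel of interest (shape 8 x 3). Purely illustrative.
rng = np.random.default_rng(0)
grad_S = rng.normal(size=(8, 3))      # assumed sensitivity gradients
v_true = np.array([0.4, -0.1, 1.2])   # "true" local velocity (m/s)
dt = 1e-3                             # frame interval (s)

# Forward model: capacitance change per channel for displacement v * dt.
delta_C = grad_S @ (v_true * dt)

# Inverse step: recover the velocity by least squares.
v_est, *_ = np.linalg.lstsq(grad_S * dt, delta_C, rcond=None)
print(v_est)  # ~ [0.4, -0.1, 1.2]
```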
The field of the present invention relates to spatially selective material processing using a laser. In particular, apparatus and methods are shown and described in which a laser system is employed to spatially selectively remove a metal coating from a polymer substrate without damaging the polymer substrate and without leaving resolidified molten metal residue on the substrate. A wide variety of spatially selective material processing techniques have been developed, using lasers, applied to metal, applied to polymers, or applied to other materials. Selected examples include: U.S. Pat. No. 3,720,784 entitled “Recording and display method and apparatus” issued Mar. 13, 1973 to Maydan et al; U.S. Pat. No. 4,000,492 entitled “Metal film recording media for laser writing” issued Dec. 28, 1976 to Willens; U.S. Pat. No. 4,752,455 entitled “Pulsed laser microfabrication” issued Jun. 21, 1988 to Mayer; U.S. Pat. No. 5,093,279 entitled “Laser ablation damascene process” issued Mar. 3, 1992 to Andreshak et al; U.S. Pat. No. 5,104,480 entitled “Direct patterning of metals over a thermally inefficient surface using a laser” issued Apr. 14, 1992 to Wojnarowski et al; U.S. Pat. No. 5,569,398 entitled “Laser system and method for selectively trimming films” issued Oct. 29, 1996 to Sun et al; U.S. Pat. No. 6,036,809 entitled “Process for releasing a thin-film structure from a substrate” issued Mar. 14, 2000 to Kelly et al; U.S. Pat. No. 6,183,588 entitled “Process for transferring a thin-film structure to a substrate” issued Feb. 6, 2001 to Kelly et al; U.S. Pat. No. 6,531,679 entitled “Method for the laser machining of organic materials” issued Mar. 11, 2003 to Heerman et al; U.S. Pat. No. 6,833,222 entitled “Method and apparatus for trimming a pellicle film using a laser” issued Dec. 21, 2004 to Buzerak et al; U.S. Pat. No. 6,949,215 entitled “Method for processing a three-dimensional structure by laser” issued Sep. 27, 2005 to Yamada et al; U.S. Pat. No. 7,106,507 entitled “Flexible wire grid polarizer and fabricating method thereof” issued Sep. 12, 2006 to Lee et al; U.S. Pat. No. 7,176,053 entitled “Laser ablation method for fabricating high performance organic devices” issued Feb. 13, 2007 to Dimmler; U.S. Pat. No. 7,220,371 entitled “Wire grid polarizer and method for producing same” issued May 22, 2007 to Suganuma; U.S. Pat. No. 7,332,263 entitled “Method for patterning an organic light emitting diode device” issued Feb. 19, 2008 to Addington et al; U.S. Pat. No. 7,692,860 entitled “Wire grid polarizer and method of manufacturing the same” issued Apr. 6, 2010 to Sato et al; E. Hunger, H. Pietsch, S. Petzoldt and E. Matthias; “Multishot ablation of polymer and metal films at 248 nm”; Applied Surface Science, Vol. 54, pp. 227-231 (1992); Matthias Bolle and Sylvain Lazare; “Ablation of thin polymer films on Si or metal substrate with the low intensity UV beam of an excimer laser or mercury lamp: advantages of ellipsometric rate measurements”; Applied Surface Science, Vol. 54, pp. 471-476, (1992); J. Krüger and W. Kautek; “Femtosecond-pulse laser processing of metallic and semiconducting thin films”; Laser-Induced Thin Film Processing, J. J. Dubowski, ed; Proc. SPIE Vol. 2403, p. 436 (1995); P. Simon and J. Ihlemann; “Machining of submicron structures on metals and semiconductors by ultrashort UV-laser pulses”; Applied Physics A, Vol. 63, p. 505 (1996); S. Nolte, C. Momma, H. Jacobs, A. Tünnermann, B. N. Chichkov, B. Wellegehausen, and H. 
Welling; “Ablation of metals by ultrashort laser pulses”; Journal of the Optical Society of America B, Vol. 14, No. 10, pp. 2716-2722 (October 1997); Itsunari Yamada, Kenji Kintaka, Junji Nishii, Satoshi Akioka, Yutaka Yamagishi, and Mitsunori Saito; “Mid-infrared wire-grid polarizer with silicides”; Optics Letters, Vol. 33, No. 3, pp. 258-260 (10 Sep. 2008); Itsunari Yamada, Junji Nishii, and Mitsunori Saito; “Modeling, fabrication, and characterization of tungsten silicide wire-grid polarizer in infrared region”; Applied Optics, Vol. 47, No. 26, pp. 4735-4738 (2008); Andrew C. Strikwerda, Kebin Fan, Hu Tao, Daniel V. Pilon, Xin Zhang, and Richard D. Averitt; “Comparison of birefringent electric split-ring resonator and meanderline structures as quarter-wave plates at terahertz frequencies”; Optics Express, Vol. 17, No. 1, pp. 136-149 (5 Jan. 2009); and Yong Ma, A. Khalid, Timothy D. Drysdale, and David R. S. Cumming; “Direct fabrication of terahertz optical devices on low-absorption polymer substrates”; Optics Letters, Vol. 34, No. 10, pp. 1555-1557 (15 May 2009). Maydan (U.S. Pat. No. 3,720,784) discloses use of pulsed output of a visible laser to form holes of varying sizes in a thin bismuth film on a transparent polyester film. Each hole is formed by a single pulse (3-20 nJ, 20-30 ns, 5-10 μm beam size), which heats the bismuth film to beyond its melting point (272° C.) over an area that is approximately proportional to the pulse energy, and surface tension draws the molten metal toward the periphery of the newly formed hole. The molten material resolidifies, leaving a crater-like rim around the hole. The size of each hole is determined by the area that was melted, hence by the energy delivered by the corresponding laser pulse. Each of Dimmler (U.S. Pat. No. 7,176,053) and Addington (U.S. Pat. No. 7,332,263) discloses processing organic transistors or LEDs using UV lasers, in which all layers of a structure (e.g., metal, organic, and oxide) absorb the laser radiation and are melted.
Boy, oh boy! A lot of people had strong feelings about yesterday’s video, and they let me hear about it. The reaction was almost unanimous. I’ve made a video in response to everything that I’ve heard over the past 12 hours: There are a couple of things that I can be a bit stubborn about, but I’m definitely not stubborn enough to defend a feature when the overwhelming majority of the fan-base tells me that it’s ridiculous. A developer who doesn’t listen to criticism is a developer digging his own grave. I take feedback very seriously, and this definitely won’t be an exception. There’s a possibility that I may eventually revisit this topic at some point in the future, but it definitely won’t happen unless I can present a thorough plan for how I’d make the feature fit in and feel thematically appropriate. …hmm…I wonder…what if the player was given the option of infiltrating the loan shark’s office and setting up a bunch of traps, so that when he arrives with his henchmen, they’re injured or incapacitated? That might be a more believable way for a schoolgirl to take down a bunch of grown men, while keeping things stealthy…hmm… In other news, I’ve uploaded a new build. Here are the details:

- Removed the debug command that would trigger Kokona’s expulsion cut-scene, because way too many people were pressing it by accident and then reporting it as a bug.
- Fixed bug that would cause bizarre issues if Yandere-chan tried to arrange a meeting on the school rooftop at the same time that Kokona was accepting a phone call.
- It is now possible to have the “Offer Help” conversation with Kokona behind the school and in the storage room, in addition to the rooftop.
- Fixed bug that would allow the player to use the “Offer Help” command on Kokona…after pushing Kokona off of the school rooftop.
- Fixed bug that would cause the player to switch between their phone and their rival’s stolen phone outside of the appropriate situation.
- The player is no longer able to peek into Info-chan’s room when Info-chan is dropping an object out of her window.
- Attempted to fix bug that would cause Kuu Dere to sometimes become stuck on a door on the rooftop.
- Replaced Sakyu Basu’s poor-quality ring removal animation with a higher quality animation.
- Fixed bug that would cause Musume’s hair to not have an outline in Yandere Vision.
- Added a texture for Kokona’s phone so that it’s not just a purple rectangle.
- Replaced the texture of Yandere-chan’s phone with a higher-quality one.
<gh_stars>0 /* * MIT License * * Copyright (c) 2022 MASES s.r.l. * * Permission is hereby granted, free of charge, to any person obtaining a copy * of this software and associated documentation files (the "Software"), to deal * in the Software without restriction, including without limitation the rights * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell * copies of the Software, and to permit persons to whom the Software is * furnished to do so, subject to the following conditions: * * The above copyright notice and this permission notice shall be included in all * copies or substantial portions of the Software. * * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE * SOFTWARE. */ /************************************************************************************** * <auto-generated> * This code was generated from a template using JCOReflector * * Manual changes to this file may cause unexpected behavior in your application. * Manual changes to this file will be overwritten if the code is regenerated. * </auto-generated> *************************************************************************************/ package org.bouncycastle.crypto.agreement.jpake; import org.mases.jcobridge.*; import org.mases.jcobridge.netreflection.*; import java.util.ArrayList; // Import section import org.bouncycastle.crypto.agreement.jpake.JPakePrimeOrderGroup; import org.bouncycastle.crypto.IDigest; import org.bouncycastle.crypto.IDigestImplementation; import org.bouncycastle.security.SecureRandom; import org.bouncycastle.crypto.agreement.jpake.JPakeRound1Payload; import org.bouncycastle.crypto.agreement.jpake.JPakeRound2Payload; import org.bouncycastle.crypto.agreement.jpake.JPakeRound3Payload; import org.bouncycastle.math.BigInteger; /** * The base .NET class managing Org.BouncyCastle.Crypto.Agreement.JPake.JPakeParticipant, BouncyCastle.Crypto, Version=1.8.2.0, Culture=neutral, PublicKeyToken=null. * <p> * * See: <a href="https://docs.microsoft.com/en-us/dotnet/api/Org.BouncyCastle.Crypto.Agreement.JPake.JPakeParticipant" target="_top">https://docs.microsoft.com/en-us/dotnet/api/Org.BouncyCastle.Crypto.Agreement.JPake.JPakeParticipant</a> */ public class JPakeParticipant extends NetObject { /** * Fully assembly qualified name: BouncyCastle.Crypto, Version=1.8.2.0, Culture=neutral, PublicKeyToken=null */ public static final String assemblyFullName = "BouncyCastle.Crypto, Version=1.8.2.0, Culture=neutral, PublicKeyToken=null"; /** * Assembly name: BouncyCastle.Crypto */ public static final String assemblyShortName = "BouncyCastle.Crypto"; /** * Qualified class name: Org.BouncyCastle.Crypto.Agreement.JPake.JPakeParticipant */ public static final String className = "Org.BouncyCastle.Crypto.Agreement.JPake.JPakeParticipant"; static JCOBridge bridge = JCOBridgeInstance.getInstance(assemblyFullName); /** * The type managed from JCOBridge. 
See {@link JCType} */ public static JCType classType = createType(); static JCEnum enumInstance = null; JCObject classInstance = null; static JCType createType() { try { String classToCreate = className + ", " + (JCOReflector.getUseFullAssemblyName() ? assemblyFullName : assemblyShortName); if (JCOReflector.getDebug()) JCOReflector.writeLog("Creating %s", classToCreate); JCType typeCreated = bridge.GetType(classToCreate); if (JCOReflector.getDebug()) JCOReflector.writeLog("Created: %s", (typeCreated != null) ? typeCreated.toString() : "Returned null value"); return typeCreated; } catch (JCException e) { JCOReflector.writeLog(e); return null; } } void addReference(String ref) throws Throwable { try { bridge.AddReference(ref); } catch (JCNativeException jcne) { throw translateException(jcne); } } /** * Internal constructor. Use with caution */ public JPakeParticipant(java.lang.Object instance) throws Throwable { super(instance); if (instance instanceof JCObject) { classInstance = (JCObject) instance; } else throw new Exception("Cannot manage object, it is not a JCObject"); } public String getJCOAssemblyName() { return assemblyFullName; } public String getJCOClassName() { return className; } public String getJCOObjectName() { return className + ", " + (JCOReflector.getUseFullAssemblyName() ? assemblyFullName : assemblyShortName); } public java.lang.Object getJCOInstance() { return classInstance; } public void setJCOInstance(JCObject instance) { classInstance = instance; super.setJCOInstance(classInstance); } public JCType getJCOType() { return classType; } /** * Try to cast the {@link IJCOBridgeReflected} instance into {@link JPakeParticipant}, a cast assert is made to check if types are compatible. * @param from {@link IJCOBridgeReflected} instance to be casted * @return {@link JPakeParticipant} instance * @throws java.lang.Throwable in case of error during cast operation */ public static JPakeParticipant cast(IJCOBridgeReflected from) throws Throwable { NetType.AssertCast(classType, from); return new JPakeParticipant(from.getJCOInstance()); } // Constructors section public JPakeParticipant() throws Throwable { } public JPakeParticipant(java.lang.String participantId, char[] password, JPakePrimeOrderGroup group, IDigest digest, SecureRandom random) throws Throwable, system.ArgumentException, system.ArgumentOutOfRangeException, system.ArgumentNullException, system.InvalidOperationException, system.PlatformNotSupportedException, system.ArrayTypeMismatchException, system.IndexOutOfRangeException, system.NotSupportedException, system.ObjectDisposedException, system.RankException { try { // add reference to assemblyName.dll file addReference(JCOReflector.getUseFullAssemblyName() ? assemblyFullName : assemblyShortName); setJCOInstance((JCObject)classType.NewObject(participantId, password, group == null ? null : group.getJCOInstance(), digest == null ? null : digest.getJCOInstance(), random == null ? 
null : random.getJCOInstance())); } catch (JCNativeException jcne) { throw translateException(jcne); } } public JPakeParticipant(java.lang.String participantId, char[] password, JPakePrimeOrderGroup group) throws Throwable, system.ArgumentNullException, system.IndexOutOfRangeException, system.PlatformNotSupportedException, system.ArgumentOutOfRangeException, system.ArgumentException, system.InvalidOperationException, system.NotSupportedException, system.InvalidCastException, system.RankException, system.ArrayTypeMismatchException, org.bouncycastle.security.SecurityUtilityException, system.OverflowException, system.threading.tasks.TaskSchedulerException, system.ObjectDisposedException, system.threading.AbandonedMutexException { try { // add reference to assemblyName.dll file addReference(JCOReflector.getUseFullAssemblyName() ? assemblyFullName : assemblyShortName); setJCOInstance((JCObject)classType.NewObject(participantId, password, group == null ? null : group.getJCOInstance())); } catch (JCNativeException jcne) { throw translateException(jcne); } } public JPakeParticipant(java.lang.String participantId, char[] password) throws Throwable, system.ArgumentNullException, system.ArgumentOutOfRangeException, system.PlatformNotSupportedException, system.ArgumentException, system.InvalidOperationException, system.IndexOutOfRangeException, system.RankException, system.ArrayTypeMismatchException, org.bouncycastle.security.SecurityUtilityException, system.OverflowException, system.ObjectDisposedException, system.threading.AbandonedMutexException { try { // add reference to assemblyName.dll file addReference(JCOReflector.getUseFullAssemblyName() ? assemblyFullName : assemblyShortName); setJCOInstance((JCObject)classType.NewObject(participantId, password)); } catch (JCNativeException jcne) { throw translateException(jcne); } } // Methods section public JPakeRound1Payload CreateRound1PayloadToSend() throws Throwable, system.ArgumentException, system.ArgumentOutOfRangeException, system.IndexOutOfRangeException, system.PlatformNotSupportedException, system.NotSupportedException, system.ArgumentNullException, system.ObjectDisposedException, system.InvalidOperationException, system.RankException, system.ArrayTypeMismatchException, system.ArithmeticException, system.FormatException { if (classInstance == null) throw new UnsupportedOperationException("classInstance is null."); try { JCObject objCreateRound1PayloadToSend = (JCObject)classInstance.Invoke("CreateRound1PayloadToSend"); return new JPakeRound1Payload(objCreateRound1PayloadToSend); } catch (JCNativeException jcne) { throw translateException(jcne); } } public JPakeRound2Payload CreateRound2PayloadToSend() throws Throwable, system.ArgumentException, system.ArgumentOutOfRangeException, system.IndexOutOfRangeException, system.PlatformNotSupportedException, system.NotSupportedException, system.ArgumentNullException, system.ObjectDisposedException, system.InvalidOperationException, system.RankException, system.ArrayTypeMismatchException, system.ArithmeticException, system.FormatException { if (classInstance == null) throw new UnsupportedOperationException("classInstance is null."); try { JCObject objCreateRound2PayloadToSend = (JCObject)classInstance.Invoke("CreateRound2PayloadToSend"); return new JPakeRound2Payload(objCreateRound2PayloadToSend); } catch (JCNativeException jcne) { throw translateException(jcne); } } public JPakeRound3Payload CreateRound3PayloadToSend(BigInteger keyingMaterial) throws Throwable, system.ArgumentException, 
system.ArgumentOutOfRangeException, system.IndexOutOfRangeException, system.PlatformNotSupportedException, system.NotSupportedException, system.ArgumentNullException, system.ObjectDisposedException, system.InvalidOperationException, system.RankException, system.ArrayTypeMismatchException, system.FormatException { if (classInstance == null) throw new UnsupportedOperationException("classInstance is null."); try { JCObject objCreateRound3PayloadToSend = (JCObject)classInstance.Invoke("CreateRound3PayloadToSend", keyingMaterial == null ? null : keyingMaterial.getJCOInstance()); return new JPakeRound3Payload(objCreateRound3PayloadToSend); } catch (JCNativeException jcne) { throw translateException(jcne); } } public BigInteger CalculateKeyingMaterial() throws Throwable, system.ArgumentException, system.ArgumentOutOfRangeException, system.IndexOutOfRangeException, system.PlatformNotSupportedException, system.NotSupportedException, system.ArgumentNullException, system.ObjectDisposedException, system.InvalidOperationException, system.RankException, system.ArrayTypeMismatchException, system.FormatException, system.ArithmeticException { if (classInstance == null) throw new UnsupportedOperationException("classInstance is null."); try { JCObject objCalculateKeyingMaterial = (JCObject)classInstance.Invoke("CalculateKeyingMaterial"); return new BigInteger(objCalculateKeyingMaterial); } catch (JCNativeException jcne) { throw translateException(jcne); } } public void ValidateRound1PayloadReceived(JPakeRound1Payload round1PayloadReceived) throws Throwable, system.ArgumentException, system.ArgumentOutOfRangeException, system.IndexOutOfRangeException, system.PlatformNotSupportedException, system.NotSupportedException, system.ArgumentNullException, system.ObjectDisposedException, system.InvalidOperationException, system.RankException, system.ArrayTypeMismatchException, org.bouncycastle.crypto.CryptoException, system.FormatException, system.ArithmeticException { if (classInstance == null) throw new UnsupportedOperationException("classInstance is null."); try { classInstance.Invoke("ValidateRound1PayloadReceived", round1PayloadReceived == null ? null : round1PayloadReceived.getJCOInstance()); } catch (JCNativeException jcne) { throw translateException(jcne); } } public void ValidateRound2PayloadReceived(JPakeRound2Payload round2PayloadReceived) throws Throwable, system.ArgumentException, system.ArgumentOutOfRangeException, system.IndexOutOfRangeException, system.PlatformNotSupportedException, system.NotSupportedException, system.ArgumentNullException, system.ObjectDisposedException, system.InvalidOperationException, system.RankException, system.ArrayTypeMismatchException, system.ArithmeticException, org.bouncycastle.crypto.CryptoException, system.OutOfMemoryException, system.FormatException { if (classInstance == null) throw new UnsupportedOperationException("classInstance is null."); try { classInstance.Invoke("ValidateRound2PayloadReceived", round2PayloadReceived == null ? 
null : round2PayloadReceived.getJCOInstance()); } catch (JCNativeException jcne) { throw translateException(jcne); } } public void ValidateRound3PayloadReceived(JPakeRound3Payload round3PayloadReceived, BigInteger keyingMaterial) throws Throwable, system.ArgumentException, system.ArgumentOutOfRangeException, system.IndexOutOfRangeException, system.PlatformNotSupportedException, system.NotSupportedException, system.ArgumentNullException, system.ObjectDisposedException, system.InvalidOperationException, system.RankException, system.ArrayTypeMismatchException, org.bouncycastle.crypto.CryptoException, system.OutOfMemoryException, system.FormatException { if (classInstance == null) throw new UnsupportedOperationException("classInstance is null."); try { classInstance.Invoke("ValidateRound3PayloadReceived", round3PayloadReceived == null ? null : round3PayloadReceived.getJCOInstance(), keyingMaterial == null ? null : keyingMaterial.getJCOInstance()); } catch (JCNativeException jcne) { throw translateException(jcne); } } // Properties section public int getState() throws Throwable { if (classInstance == null) throw new UnsupportedOperationException("classInstance is null."); try { return (int)classInstance.Get("State"); } catch (JCNativeException jcne) { throw translateException(jcne); } } // Instance Events section }
MACON, Ga. — Medical marijuana supporters in Georgia were hoping for something different from the federal government, not its recent ruling that cannabis should remain off-limits. Georgians like Janea Cox of Monroe County want to be able to get medical cannabis just like other prescriptions instead of breaking the law to seek therapies for themselves or their loved ones, The Telegraph reported. It was difficult to hear news of Thursday's ruling from the federal Drug Enforcement Administration, Cox told the Macon newspaper. The agency decided that marijuana will remain on the list of most dangerous drugs, which includes heroin and LSD, The Associated Press reported. Cox's daughter Haleigh, 7, takes a liquid made in part from cannabis to treat the symptoms of a severe seizure disorder that can stop the little girl's breathing. She said her daughter now sometimes goes days without a seizure, is learning things and can sit up on her own. "Get past the stigma," Cox told the Macon newspaper. "Our kids are not smoking this. It's an oil and it's saving our kids." Blaine Cloud, a father from Smyrna, and his wife Shannon Cloud are among the most vocal medical cannabis activists in Georgia. Their 11-year-old daughter Alaina has used medical cannabis for a seizure disorder. "They say it has no medical benefit, but there is proof all over the world that it does," Blaine Cloud told The Telegraph. The Drug Enforcement Administration said its decision came after a lengthy review and consultation with the Health and Human Services Department, which said marijuana "has a high potential for abuse" and "no accepted medical use." The decision means that pot will remain illegal for any purpose under federal law, despite laws in 25 states and the District of Columbia that have legalized pot for either medicinal or recreational use. The author of Georgia's medical cannabis law, state Rep. Allen Peake, R-Macon, said it's time for the state to make sure residents can get safe, regulated medical cannabis under the oversight of doctors. "To me it's pure insanity to continue to say that there is no medicinal value in a product that has been recognized by at least 25 states to have some medicinal value," said Peake, referring to states that allow medical cultivation. Cultivation has been a hard sell in Georgia. In hearings over the years at the state Capitol, some of the state's top law enforcement officers have argued that they cannot endorse breaking the law and if cannabis has medical value, then it needs to be proven through trials just like any other drug. Some also say medical grows would be used to cover illegal recreational grows, and medical cannabis could be a slippery slope to the recreational kind. Republican Gov. Nathan Deal has been an opponent and has said he's not convinced Georgia could control in-state cannabis cultivation. Nationally, the Obama administration's position on marijuana started to ease in 2013, when the Justice Department notified Colorado and Washington that it would not interfere with state laws so long as the drug was kept out of the hands of children, off the black market and away from federal property, the AP reported. Colorado and Washington were the first two states to legalize pot for recreational use and sales. Advocates saw that policy statement as the first step to an end of the federal prohibition of marijuana. 
But that hope was quickly diminished as administration officials, including the head of the White House-run Office of National Drug Control Policy, repeatedly said publicly that they still considered marijuana a dangerous drug that had no place in the legal market, the AP reported. Thursday's announcement was seen as another blow to those hoping the federal government would change pot laws.
Marriage and Love in England 1300–1840, by Alan Macfarlane; The Patriarch's Wife: Literary Evidence and the History of the Family, by Margaret J. M. Ezell. These two works demonstrate just how far historians have shifted their perspective on the history of the family since the late 1970s, when the pessimistic pronouncements of Lawrence Stone and Edward Shorter reigned almost unchallenged. Both Macfarlane and Ezell claim that the early modern family was less patriarchal and mercenary, more egalitarian and affectionate, than has been suspected heretofore. Both studies focus their attention on England, although Macfarlane ventures comparisons from as far afield as Nigeria, India, and China. The two works differ in all other respects: methodology, source material, time scale, and treatment of such fundamental variables as class and gender. Macfarlane has decided to sacrifice depth of analysis for length of time scale. Marriage and Love in England covers the four centuries from Chaucer to Malthus, with occasional forays back to the tribes of Tacitus' Germania. This "enormously long" span has necessitated a simplification of other parameters, including social and geographical variations, ideological factors such as religion and politics, and class and gender differences.
// ParseTarget parses a "||"-separated list of targets, each of the form
// "host:port", a bare host, a bare port, or empty (wildcard).
// It relies on the net, strconv and strings packages, and on the Target and
// TargetHost types defined elsewhere in this package.
func ParseTarget(t string) (Target, error) {
	ts := strings.Split(strings.Replace(t, " ", "", -1), "||")
	targets := []TargetHost{}
	for _, t := range ts {
		var port uint16
		var host string
		if t == "" {
			// Empty entry: any host, any port.
			host = ""
			port = uint16(0)
		} else if strings.Contains(t, ":") {
			// "host:port": resolve via the TCP resolver (handles host names).
			tAddr, err := net.ResolveTCPAddr("tcp", t)
			if err != nil {
				return Target{}, err
			}
			host = tAddr.IP.String()
			port = uint16(tAddr.Port)
		} else if strings.Contains(t, ".") {
			// Bare host (contains a dot): any port.
			host = t
			port = uint16(0)
		} else {
			// Bare port: any host.
			host = ""
			port64, err := strconv.ParseUint(t, 10, 64)
			if err != nil {
				return Target{}, err
			}
			port = uint16(port64)
		}
		targets = append(targets, TargetHost{
			Host: host,
			Port: port,
		})
	}
	return Target{
		TargetHosts: targets,
	}, nil
}
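The accepted grammar is easier to see stripped of Go plumbing. A small Python mirror of the same branching; it skips the DNS resolution and IPv6 handling that net.ResolveTCPAddr provides, and the dict keys are illustrative stand-ins for the Target/TargetHost fields:

```python
# Illustrative restatement of ParseTarget's branching (no DNS resolution).
def parse_target(spec: str):
    targets = []
    for part in spec.replace(" ", "").split("||"):
        if part == "":
            host, port = "", 0            # wildcard entry
        elif ":" in part:
            host, _, port_s = part.rpartition(":")
            port = int(port_s)            # "host:port"
        elif "." in part:
            host, port = part, 0          # bare host, any port
        else:
            host, port = "", int(part)    # bare port, any host
        targets.append({"host": host, "port": port})
    return {"target_hosts": targets}

print(parse_target("example.com:8080||9090||10.0.0.1"))
```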
/**
 * Configure the authenticator and create service definition information.
 *
 * <p>The first time config() is called it contacts the endpoint it was
 * created with to get information about the security service and other
 * related service endpoints and creates a service definition. Later calls
 * will return this saved service definition.
 *
 * <p>The service definition is a JSON object of the form:
 * <pre>
 * {
 *   "type" : "streams",
 *   "externalClient" : "true",
 *   "service_token" : "...",
 *   "service_token_expire" : T,
 *   "connection_info" : {
 *     "serviceRestEndpoint" : "...",
 *     "serviceBuildEndpoint" : "..."
 *   }
 * }
 * </pre>
 * where the token expiry time is in milliseconds from the UNIX Epoch, and
 * the connection endpoints are URLs. Only service endpoints that were found
 * are included, so if the required endpoint is not found here it must be
 * determined some other way (eg. via an environment variable).
 *
 * <p>The method may return null if there is no configured security service
 * or the authenticator was unable to use it, or to find other services.
 *
 * @param verify Verify the TLS / SSL certificate for the request.
 * @return A JSON service definition or {@code null}.
 *
 * @throws IOException
 */
public JsonObject config(boolean verify) throws IOException {
    if (cfg != null) {
        return cfg;
    }
    return config(RestUtils.createExecutor(!verify));
}
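Given the JSON shape documented above, a consumer's main job is to check the token expiry before using the endpoints. A hedged Python sketch; the field names come straight from the javadoc, while the function name and example values are ours:

```python
import json
import time

def service_is_usable(cfg_text: str) -> bool:
    """Return True if the service definition's token is still valid."""
    cfg = json.loads(cfg_text)
    # 'service_token_expire' is documented as milliseconds since the epoch.
    expire_ms = cfg.get("service_token_expire", 0)
    return expire_ms / 1000.0 > time.time()

example = '''{
  "type": "streams",
  "externalClient": "true",
  "service_token": "...",
  "service_token_expire": 4102444800000,
  "connection_info": {"serviceRestEndpoint": "https://example/rest"}
}'''
print(service_is_usable(example))  # True until the (illustrative) expiry
```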
#pragma once #include "contextmasks.h" #include "Modules/threading/taskqueue.h" namespace Engine { namespace Threading { struct MADGINE_CLIENT_EXPORT FrameLoop : Threading::TaskQueue { public: FrameLoop(); FrameLoop(const FrameLoop &) = delete; virtual ~FrameLoop(); FrameLoop &operator=(const FrameLoop&) = delete; virtual bool singleFrame(std::chrono::microseconds timeSinceLastFrame = 0us); std::chrono::nanoseconds fixedRemainder() const; void addFrameListener(FrameListener* listener); void removeFrameListener(FrameListener* listener); void shutdown(); bool isShutdown() const; protected: bool sendFrameStarted(std::chrono::microseconds timeSinceLastFrame); bool sendFrameRenderingQueued(std::chrono::microseconds timeSinceLastFrame, ContextMask context = ContextMask::SceneContext); bool sendFrameFixedUpdate(std::chrono::microseconds timeSinceLastFrame, ContextMask context = ContextMask::SceneContext); bool sendFrameEnded(std::chrono::microseconds timeSinceLastFrame); virtual std::optional<Threading::TaskTracker> fetch_on_idle() override; private: std::vector<FrameListener*> mListeners; std::chrono::high_resolution_clock::time_point mLastFrame; std::chrono::microseconds mTimeBank{ 0 }; static constexpr std::chrono::microseconds FIXED_TIMESTEP{ 15000 }; }; } }
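The mTimeBank / FIXED_TIMESTEP pair in the header suggests the classic fixed-timestep accumulator: each frame deposits elapsed time into the bank, fixed updates are drained in 15 ms steps, and fixedRemainder() exposes the leftover for interpolation. A minimal Python sketch of that pattern; the loop structure is an assumption from the field names, not taken from the header:

```python
import time

FIXED_TIMESTEP = 0.015  # 15 ms, mirroring the header's 15000 microseconds

def run_frames(frame_update, fixed_update, frames=10):
    """Toy fixed-timestep loop: variable-rate frames, fixed-rate simulation."""
    time_bank = 0.0
    last = time.perf_counter()
    for _ in range(frames):
        now = time.perf_counter()
        dt, last = now - last, now
        frame_update(dt)                  # per-frame work (input, rendering)
        time_bank += dt
        while time_bank >= FIXED_TIMESTEP:
            fixed_update(FIXED_TIMESTEP)  # deterministic simulation step
            time_bank -= FIXED_TIMESTEP
        # time_bank is now the "fixed remainder": usable for interpolation.

run_frames(lambda dt: None, lambda dt: None)
```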
/*------------------------------------------------------------------------*/
/*  Copyright 2010, 2011 Sandia Corporation.                               */
/*  Under terms of Contract DE-AC04-94AL85000, there is a non-exclusive    */
/*  license for use of this work by or on behalf of the U.S. Government.   */
/*  Export of this program may require a license from the                  */
/*  United States Government.                                              */
/*------------------------------------------------------------------------*/

#include <stk_mesh/fixtures/GridFixture.hpp>

#include <Shards_BasicTopologies.hpp>

#include <stk_util/parallel/Parallel.hpp>

#include <stk_mesh/base/MetaData.hpp>
#include <stk_mesh/base/BulkData.hpp>
#include <stk_mesh/base/Entity.hpp>
#include <stk_mesh/base/GetEntities.hpp>
#include <stk_mesh/fem/FEMMetaData.hpp>
#include <stk_mesh/fem/FEMHelpers.hpp>

/*
The following fixture creates the mesh below
1-16 Quadrilateral<4>
17-41 Nodes

17---18---19---20---21
|  1 |  2 |  3 |  4 |
22---23---24---25---26
|  5 |  6 |  7 |  8 |
27---28---29---30---31
|  9 | 10 | 11 | 12 |
32---33---34---35---36
| 13 | 14 | 15 | 16 |
37---38---39---40---41
*/

namespace stk {
namespace mesh {
namespace fixtures {

GridFixture::GridFixture(stk::ParallelMachine pm)
  : m_spatial_dimension(2)
  , m_fem_meta( m_spatial_dimension, fem::entity_rank_names(m_spatial_dimension) )
  , m_bulk_data( stk::mesh::fem::FEMMetaData::get_meta_data(m_fem_meta), pm )
  , m_quad_part( fem::declare_part<shards::Quadrilateral<4> >(m_fem_meta, "quad_part") )
  , m_dead_part( m_fem_meta.declare_part("dead_part") )
{}

GridFixture::~GridFixture()
{}

void GridFixture::generate_grid()
{
  const unsigned num_nodes = 25;
  const unsigned num_quad_faces = 16;
  const unsigned p_rank = m_bulk_data.parallel_rank();
  const unsigned p_size = m_bulk_data.parallel_size();
  const EntityRank element_rank = m_fem_meta.element_rank();
  std::vector<Entity*> all_entities;

  // assign ids, quads, nodes, then shells
  // (we need this order to be this way in order for our connectivity setup to work)
  std::vector<unsigned> quad_face_ids(num_quad_faces);
  std::vector<unsigned> node_ids(num_nodes);
  {
    unsigned curr_id = 1;
    for (unsigned i = 0; i < num_quad_faces; ++i, ++curr_id) {
      quad_face_ids[i] = curr_id;
    }
    for (unsigned i = 0; i < num_nodes; ++i, ++curr_id) {
      node_ids[i] = curr_id;
    }
  }

  // Note: This block of code would normally be replaced with a call to stk_io
  // to generate the mesh.

  // declare entities such that entity_id - 1 is the index of the
  // entity in the all_entities vector
  {
    const PartVector no_parts;
    const unsigned first_quad = (p_rank * num_quad_faces) / p_size;
    const unsigned end_quad = ((p_rank + 1) * num_quad_faces) / p_size;

    // declare faces
    PartVector face_parts;
    face_parts.push_back(&m_quad_part);
    const unsigned num_nodes_per_quad = 4;
    // (right-hand rule) counterclockwise:
    const int stencil_for_4x4_quad_mesh[num_nodes_per_quad] = {0, 5, 1, -5};
    for (unsigned i = first_quad; i < end_quad; ++i) {
      unsigned face_id = quad_face_ids[i];
      unsigned row = (face_id - 1) / num_nodes_per_quad;
      Entity& face = m_bulk_data.declare_entity(element_rank, face_id, face_parts);

      unsigned node_id = num_quad_faces + face_id + row;

      for (unsigned chg_itr = 0; chg_itr < num_nodes_per_quad; ++chg_itr) {
        node_id += stencil_for_4x4_quad_mesh[chg_itr];
        Entity& node = m_bulk_data.declare_entity(fem::FEMMetaData::NODE_RANK, node_id, no_parts);
        m_bulk_data.declare_relation(face, node, chg_itr);
      }
    }
  }
}

} // fixtures
} // mesh
} // stk
package br.edu.infnet.raphaelbgr.lightcontrol.model;

import com.pi4j.io.gpio.*;
import com.pi4j.io.gpio.event.GpioPinListener;

import java.util.Collection;
import java.util.List;
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

/**
 * A no-op stand-in for a Pi4J digital output pin, useful for running the
 * application without real GPIO hardware. State-changing methods do
 * nothing, queries return inert defaults, and only the pin name is kept.
 */
public class FakeGpioPinDigitalOutput implements GpioPinDigitalOutput {

    private final String pinName;

    public FakeGpioPinDigitalOutput(String pinName) {
        this.pinName = pinName;
    }

    @Override public void high() { }
    @Override public void low() { }
    @Override public void toggle() { }

    @Override public Future<?> blink(long delay) { return null; }
    @Override public Future<?> blink(long l, TimeUnit timeUnit) { return null; }
    @Override public Future<?> blink(long delay, PinState blinkState) { return null; }
    @Override public Future<?> blink(long l, PinState pinState, TimeUnit timeUnit) { return null; }
    @Override public Future<?> blink(long delay, long duration) { return null; }
    @Override public Future<?> blink(long l, long l1, TimeUnit timeUnit) { return null; }
    @Override public Future<?> blink(long delay, long duration, PinState blinkState) { return null; }
    @Override public Future<?> blink(long l, long l1, PinState pinState, TimeUnit timeUnit) { return null; }

    @Override public Future<?> pulse(long duration) { return null; }
    @Override public Future<?> pulse(long l, TimeUnit timeUnit) { return null; }
    @Override public Future<?> pulse(long duration, Callable<Void> callback) { return null; }
    @Override public Future<?> pulse(long l, Callable<Void> callable, TimeUnit timeUnit) { return null; }
    @Override public Future<?> pulse(long duration, boolean blocking) { return null; }
    @Override public Future<?> pulse(long l, boolean b, TimeUnit timeUnit) { return null; }
    @Override public Future<?> pulse(long duration, boolean blocking, Callable<Void> callback) { return null; }
    @Override public Future<?> pulse(long l, boolean b, Callable<Void> callable, TimeUnit timeUnit) { return null; }
    @Override public Future<?> pulse(long duration, PinState pulseState) { return null; }
    @Override public Future<?> pulse(long l, PinState pinState, TimeUnit timeUnit) { return null; }
    @Override public Future<?> pulse(long duration, PinState pulseState, Callable<Void> callback) { return null; }
    @Override public Future<?> pulse(long l, PinState pinState, Callable<Void> callable, TimeUnit timeUnit) { return null; }
    @Override public Future<?> pulse(long duration, PinState pulseState, boolean blocking) { return null; }
    @Override public Future<?> pulse(long l, PinState pinState, boolean b, TimeUnit timeUnit) { return null; }
    @Override public Future<?> pulse(long duration, PinState pulseState, boolean blocking, Callable<Void> callback) { return null; }
    @Override public Future<?> pulse(long l, PinState pinState, boolean b, Callable<Void> callable, TimeUnit timeUnit) { return null; }

    @Override public void setState(PinState state) { }
    @Override public void setState(boolean state) { }
    @Override public boolean isHigh() { return false; }
    @Override public boolean isLow() { return false; }
    @Override public PinState getState() { return null; }
    @Override public boolean isState(PinState state) { return false; }

    @Override public GpioProvider getProvider() { return null; }
    @Override public Pin getPin() { return null; }

    @Override public void setName(String name) { }
    @Override public String getName() { return pinName; }

    @Override public void setTag(Object tag) { }
    @Override public Object getTag() { return null; }

    // The fake keeps no properties; getProperty ignores defaultValue on purpose.
    @Override public void setProperty(String key, String value) { }
    @Override public boolean hasProperty(String key) { return false; }
    @Override public String getProperty(String key) { return null; }
    @Override public String getProperty(String key, String defaultValue) { return null; }
    @Override public Map<String, String> getProperties() { return null; }
    @Override public void removeProperty(String key) { }
    @Override public void clearProperties() { }

    @Override public void export(PinMode mode) { }
    @Override public void export(PinMode mode, PinState defaultState) { }
    @Override public void unexport() { }
    @Override public boolean isExported() { return false; }

    @Override public void setMode(PinMode mode) { }
    @Override public PinMode getMode() { return null; }
    @Override public boolean isMode(PinMode mode) { return false; }

    @Override public void setPullResistance(PinPullResistance resistance) { }
    @Override public PinPullResistance getPullResistance() { return null; }
    @Override public boolean isPullResistance(PinPullResistance resistance) { return false; }

    @Override public Collection<GpioPinListener> getListeners() { return null; }
    @Override public void addListener(GpioPinListener... listener) { }
    @Override public void addListener(List<? extends GpioPinListener> listeners) { }
    @Override public boolean hasListener(GpioPinListener... listener) { return false; }
    @Override public void removeListener(GpioPinListener... listener) { }
    @Override public void removeListener(List<? extends GpioPinListener> listeners) { }
    @Override public void removeAllListeners() { }

    @Override public GpioPinShutdown getShutdownOptions() { return null; }
    @Override public void setShutdownOptions(GpioPinShutdown options) { }
    @Override public void setShutdownOptions(Boolean unexport) { }
    @Override public void setShutdownOptions(Boolean unexport, PinState state) { }
    @Override public void setShutdownOptions(Boolean unexport, PinState state, PinPullResistance resistance) { }
    @Override public void setShutdownOptions(Boolean unexport, PinState state, PinPullResistance resistance, PinMode mode) { }
}
Measuring Success of Water Reservoir Project by Using Delphi and Priority Evaluation Method

Traditionally, project success has been defined in terms of time, cost and quality. Extending the traditional triangle to include stakeholder and end-user factors provides a more complete view of project success. The aim of the study is to determine the factors that can be used to assess the success of water reservoir projects. Project success factors from previous studies were first identified and then narrowed down to the critical success factors by evaluating their appropriateness. The Delphi method was applied, and a one-day seminar was conducted with a group of experts involved in the construction of water reservoir projects in a water company agency. An initial questionnaire was administered during the seminar, through brainstorming and discussion sessions, to identify potential success factors. Following the seminar, a questionnaire survey was distributed to the participants to establish the level of importance of the factors. From the feedback of thirty expert opinions and fourteen returned expert surveys, seven refined clusters of project success factors were identified: Clear Realistic Objectives; Quality Factor; Time Factor; Cost Factor; Deliverable; Legacy System; and Safety, Health and Environment. A template for measuring project success was produced based on the Priority Evaluation Method. Finally, five actual projects in the water company agency were used to demonstrate the application of the template, specifically for water reservoir projects, and to determine the successfulness of each project. The results show that the main factors behind an unsuccessful water reservoir project in this case are unclear realistic objectives, deliverable issues and the time factor.

Introduction

In order for a construction project to succeed, each phase of the project should be managed efficiently and effectively. The critical success factors of a construction project should be prioritized so that management efforts can be exerted in the most balanced way. Research has previously been carried out on methods for prioritizing project critical success factors; however, these approaches lack consideration of the degree of satisfaction, specifically for water reservoir construction projects. Satisfaction values should be considered equally with importance values. The aim of the study is to determine the critical factors that can be used to evaluate the success of water reservoir projects. The objectives of the study are: (i) to identify project success factors in general; (ii) to determine critical project success factors for water reservoir projects; and (iii) to develop a template for measuring the success of water reservoir projects in a water company. The template produced is suitable for use only on projects within that agency. The water company is a water provider and can contribute to achieving UN Sustainable Development Goal (SDG) 6: Ensure availability and sustainable management of water and sanitation for all.

Methodology

Descriptive measures are used in this research methodology. Descriptive measures comprise two components: a person competent to judge the work performed, and a list of factors by which the quality of the work can be judged. The methodology of this study can be briefly explained as a funnel concept of identifying success factors.
All the project success factors from previous studies are then narrowed down for measuring project success. While these factors may not be quantifiable (that is, measurable in numbers), they should be verifiable. Verifiable measures provide a way to measure those aspects of performance for which numbers do not work well (Crawford and Cabanis-Brewin, 2006; Chan, 2001). Of the 21 journals reviewed, only 6 relate to construction projects. Hundreds of factors were identified and listed.

Objective 2: To determine critical project success factors for water reservoir project

The Delphi method is an interactive process to collect and distil the anonymous judgments of experts using a series of data collection and analysis techniques interspersed with feedback. The Delphi process in this research was initially planned to include surveys by questionnaire, ending when a certain agreement was reached among the members of the Delphi panel. This study adopted brainstorming and discussion of potential success factors by a group of experts in a one-day seminar and workshop to identify the potential critical success factors of construction projects, and the Delphi method to confirm the identified critical success factors.

A one-day seminar and workshop on Project Auditing for Measuring Project Performance and Delivery was organised by the agency and attended by a group of experts in construction projects (Figures 1 and 2). All the data collected are within the scope of a group of personnel of the agency only, which is situated in Johor, Malaysia. The respondents were a group of experts including Senior Officers, Project Managers and Engineers, mostly involved in construction projects in the agency. Thirty of them attended the one-day seminar and acted as respondents in the survey.

The Initial Questionnaire Survey aimed to identify the potential critical success factors of construction projects suitable for their agency. From the perspective of the experts, a shortlist of factors was captured. Factors considered not important were eliminated from the list by consensus opinion in the workshop. Moreover, a subjective question was included in the discussion to capture any factors beyond those initially listed; if any new factors emerged, they were to be added to the subsequent surveys. The process continued until an agreed set of project success factors for the evaluation of projects in the agency was reached. The project success factors resulting from the workshop were, at this stage, grouped into ten clusters.

The Follow-up Questionnaire Survey aimed to establish the level of importance of each of the criteria presented as elements that can influence the success factor evaluation for construction projects. This survey was a follow-up from the one-day seminar and workshop with the group of construction experts in the agency. All the factors presented in this questionnaire survey had been agreed by the seminar participants as 'important'; however, the level of their importance needed to be established so that fair weightings could be assigned to the factors used to evaluate project success.
The questionnaire survey was sent by email, and fourteen respondents returned their feedback by post. Detailed results of the importance level of each factor in the clusters can be seen in Table 1.

Data Analysis

Data collected from the respondents were analysed using Microsoft Office Excel. The Relative Importance Index (RII), which evaluates each question relative to the other questions in the survey, was used in the analysis. The data were also translated into pictorial and graphical form for better understanding and an overall view (Figure 3).

The RII weighting is based on the responses to the questionnaire surveys and is governed by the formula

RII = ΣW / (A × N)

where W is the Likert scale value chosen by a respondent for a factor, A is the highest possible scale value, and N is the total number of respondents. The index value for any given factor is not more than 1; the higher the value of RII, the higher the importance of the factor compared to the others. These values were calculated using the Excel spreadsheet.

Objective 3: To develop a template for measuring success of water reservoir project in a water company

Analysis of critical success factors (CSFs) uses the Priority Evaluation Model introduced by Yu and Kwon, consisting of three steps, as shown in Figure 5. The Measuring Project Success Template was designed using this Priority Evaluation Model. The first template was designed based on the ten clusters of levelled project success factors, and its result consists of four levels of achievement, depending on the Priority Index (PI) calculation. A "priority CSF" list consists of those CSFs that have statistically significant gaps between importance (I) and satisfaction (S) values. This step determines the rank of the CSFs in the priority CSF list: the priority of CSFs is determined by a Priority Index (PI) calculated from the gap between the I and S values.

Refined Measuring Project Success Template

The results from the questionnaire and the first template were returned to the agency and reviewed by top management personnel, who found that the list of agreed factors was too long and needed to be refined and modified. As discussed, it was decided to simplify the template and compress the project success factors into seven clusters containing fewer factors, namely: Clear Realistic Objectives; Quality Factor; Time Factor; Cost Factor; Deliverable; Legacy System; and Safety, Health and Environment. This refined template has been used to measure project performance in the agency. It was also decided to fix the importance level of each factor at a rating of seven (extremely important). The modified template is shown in Table 2.

Application of the Measuring Project Success Template

Finally, the Measuring Project Success template was used to evaluate five water reservoir construction projects in the agency; the project performance result is generated automatically in the template. The measurements were conducted by the officers involved in project auditing in the agency, together with the researcher, and the finalised template was applied by three assessors. The projects involved are the construction of reservoirs, ancillary works and pipelines scattered around Johor, Malaysia. These five actual projects have different objectives, costs, contract periods, scopes and locations. The projects measured are as below.
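A minimal sketch of the RII calculation described above, in Go; the ratings are illustrative rather than the study's survey data, and the 7-point scale is an assumption for the example:

package main

import "fmt"

// rii computes the Relative Importance Index, RII = sum(W) / (A * N),
// where W are the Likert ratings given by respondents, A is the highest
// possible rating, and N is the number of respondents.
func rii(ratings []int, highest int) float64 {
	sum := 0
	for _, w := range ratings {
		sum += w
	}
	return float64(sum) / float64(highest*len(ratings))
}

func main() {
	// Illustrative ratings from 14 respondents on an assumed 7-point scale.
	timeFactor := []int{7, 6, 7, 5, 7, 6, 6, 7, 5, 7, 6, 7, 7, 6}
	fmt.Printf("RII(time factor) = %.3f\n", rii(timeFactor, 7)) // 0.908
}

Factors are then ranked by RII, with values closer to 1 indicating greater relative importance.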
Projects 1, 2, 3 and 4 were found to be Satisfactory, whereas Project 5 was a Failure (Table 3).

Table 3. The Priority Index (PI) and Achievement Level of the five water reservoir projects.

Conclusion

The main factors behind the unsuccessful water reservoir project in this case are: (i) unclear realistic objectives; (ii) deliverable issues; and (iii) the time factor. At the planning stage, clear realistic objectives should address project economy, an end product that delivers benefits, and a design that meets the actual objectives. In terms of deliverables, the unsuccessful project did not meet customer satisfaction and acceptance: it did not serve its intended purposes, provide better customer service, or fulfil serviceability for future requirements. The project was also not completed within the contractual period. The finalised template has helped the agency as a tool to measure project success, and the agency has decided to use it to assess the next fifteen projects. If another agency wishes to use this template, the whole methodology process needs to be repeated to establish suitable project success factors. This study met its objectives within its scope and limitations, and the template has been successfully applied by the agency.
1. Field of the Invention

This invention relates to electronic information surveillance and security systems and, more specifically, to a Method and System for Collecting and Surveying Radio Communications From a Specific Protected Area of Operations in or Around a Compound.

2. Description of Related Art

The twenty-first century has seen a radical increase in the importance of both physical and communications security. Many facilities and operations areas (military bases, intelligence buildings, financial buildings, airports, etc.) need greater radio communications security than ever before. An example of the need for greater radio communications security is the operations area in and around airports. Today someone with a transmitter near the end of an airport runway can cause material damage and loss of life by attempting to disrupt communications between airport towers, TRACONs and the airplanes themselves. Even traditionally sensitive areas such as military bases and intelligence buildings need the ability to detect and monitor radio communications in various areas inside them. For example, various areas, like prisons, are mandated by their security policy to be off-limits for cell phone use; isolating and monitoring cell phone transmissions in those areas would automatically enforce the policy. What is needed, therefore, in order to enhance radio communications security for sensitive areas of operation is an invention that has 1) the ability to quickly localize and monitor all radio communications, and 2) the ability to be calibrated to instantly determine the location of transmission sources for specific frequencies. Both must be applied to make such a system accurate. The user of this invention can use the system to isolate and monitor transmissions simply by specifying the geographic area of the transmission source to be monitored.

The term "calibration" is used in the Electronic Warfare environment to profile a land or air vehicle-based electronic detection system. The approach used to calibrate such a vehicle is to circle the vehicle with an electronic transmitter. The transmitter is periodically, at known locations, caused to transmit (potentially at different frequency bands of interest). The transmission detection equipment mounted inside of the vehicle is used to detect and record these transmissions. In this manner, blind spots, areas having anomalous reflective characteristics, and any other non-standard signal behavior will be detected and incorporated into the profile of the sensing equipment. Once calibrated, the vehicle's equipment performance is known and should not change unless there are equipment or structural changes made to the vehicle. This type of calibration has never been done to create a profile for the electronic transmission characteristics of a physical compound or installation. An electronic transmission surveillance system monitoring a physical compound or installation would be much more accurate if the effects of the buildings, equipment, and other such things were known and taken into account by the transmission localizing system. It is this information that is the subject of the present invention.
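The calibration walk described above reduces to recording, for each known transmitter position and frequency, whether and how strongly the installation's sensors responded, then keeping those records as a per-band profile in which blind spots and reflective anomalies show up as gaps or outliers. The Go sketch below illustrates one way such a profile could be organized; every name and the structure itself are illustrative assumptions, as the patent text specifies no implementation:

package main

import "fmt"

// CalSample records one calibration transmission: where the test
// transmitter was, the frequency it used, and how the installation's
// detection equipment responded.
type CalSample struct {
	BearingDeg float64 // transmitter bearing from a reference axis
	FreqMHz    float64
	Detected   bool
	SignalDB   float64
}

// BuildProfile groups samples by frequency band so each band's coverage
// can be inspected for blind spots and anomalous signal behavior.
func BuildProfile(samples []CalSample) map[float64][]CalSample {
	profile := make(map[float64][]CalSample)
	for _, s := range samples {
		profile[s.FreqMHz] = append(profile[s.FreqMHz], s)
	}
	return profile
}

func main() {
	samples := []CalSample{
		{BearingDeg: 0, FreqMHz: 121.5, Detected: true, SignalDB: -62},
		{BearingDeg: 90, FreqMHz: 121.5, Detected: true, SignalDB: -70},
		{BearingDeg: 180, FreqMHz: 121.5, Detected: false}, // a blind spot
	}
	for freq, ss := range BuildProfile(samples) {
		fmt.Printf("%.1f MHz: %d calibration samples\n", freq, len(ss))
	}
}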
Ultrastructure of the Drosophila larval salivary gland cells during the early developmental stages. I. Morphological studies

Larval salivary gland cells of seven Drosophila species from the melanogaster group were studied during the early third-instar period. Similar cytoplasmic organization was seen in both the distal and proximal parts of the gland. The cytoplasm contained a large number of free ribosomes, but only a few rough endoplasmic reticulum profiles; the nucleolus was very large. Golgi complexes consisted mainly of vesiculated cisternae. Small secretory granules (diameter 0.23–0.32 μm) are produced during this period and in some species contain both granular and filamentous material. These granules appeared to be secreted by a peculiar apocrine-type secretion after enclosure of the granules into microvillar lacunae. A digestive function is attributed to the secretory material. During the third instar, a close association between the salivary gland and the fat body was also observed. The physiological significance of this association seems to be related to the transfer of nutrients, enzymes, or membranous materials from the fat body to the salivary gland.
/**
 * This is more like a functional test, exercising several sub-systems
 * of ArgoUML, including persistence, kernel and model.
 * It is composed of the following steps:
 * <ol>
 * <li>create a model with a class in it, then assert that the class is
 * found in the project;</li>
 * <li>save the model as an XMI file;</li>
 * <li>load the model and create a project around it, then assert that
 * the class is found again.</li>
 * </ol>
 *
 * @throws Exception when any of the activities fails
 */
public void testCreateSaveAndLoadYieldsCorrectModel() throws Exception {
    Project project = ProjectManager.getManager().makeEmptyProject();
    ProjectManager.getManager().setCurrentProject(project);

    Object model = Model.getModelManagementFactory().getRootModel();
    assertNotNull(model);

    // Build a class "Foo" with an Integer-typed attribute.
    Object classifier = Model.getCoreFactory().buildClass("Foo", model);
    assertNotNull(project.findType("Foo", false));

    Object intType = project.findType("Integer", false);
    assertNotNull(intType);
    Object attribute = Model.getCoreFactory().buildAttribute2(classifier, intType);
    Model.getCoreHelper().setName(attribute, "profileTypedAttribute");
    checkFoo(project.findType("Foo", false));

    // Save the project as an XMI file.
    File file = File.createTempFile("ArgoTestCreateSaveAndLoad", ".xmi");
    XmiFilePersister persister = new XmiFilePersister();
    project.preSave();
    persister.save(project, file);
    project.postSave();

    // Throw the model away, then load it back from the file.
    Model.getUmlFactory().delete(classifier);
    ProjectManager.getManager().removeProject(project);
    project = ProjectManager.getManager().makeEmptyProject();
    ProjectManager.getManager().setCurrentProject(project);

    persister = new XmiFilePersister();
    project = persister.doLoad(file);

    Object attType = checkFoo(project.findType("Foo", false));
    assertEquals("Integer", Model.getFacade().getName(attType));

    file.delete();
    ProjectManager.getManager().removeProject(project);
}
#pragma once

#include <vector>
#include <string>
#include <filesystem>

#include <glm/glm.hpp>

#include <Engine/Api.hpp>
#include <Engine/Types.hpp>
#include <Engine/FileWatcher.hpp>

namespace Engine::Graphics {

	struct ENGINE_API ShaderStageInfo {
		std::string VertexPath;
		std::string FragmentPath;
		std::string ComputePath;
		std::string GeometryPath;
		std::string TessellationControl;
		std::string TessellationEvaluate;

		bool Empty() {
			return VertexPath.empty() &&
				FragmentPath.empty() &&
				ComputePath.empty() &&
				GeometryPath.empty() &&
				TessellationControl.empty() &&
				TessellationEvaluate.empty();
		}
	};

	struct ENGINE_API ShaderUniform {
		int Location = -1;
		std::string Name;
		unsigned int Type;
	};

	class Shader {
		bool m_IsDirty;
		unsigned int m_Program;
		ShaderStageInfo m_ShaderStages;
		std::vector<ShaderUniform> m_Uniforms = { {} };
		EngineUnorderedMap<std::string, FileWatcher*> m_Watchers;

#if USE_STRING_ID
		EngineUnorderedMap<StringId::Storage, ShaderUniform*> m_NamedUniforms;
#else
		EngineUnorderedMap<std::string, ShaderUniform*> m_NamedUniforms;
#endif

		void Destroy();
		void CreateShaders();
		void CacheUniformLocations();
		void WatchShader(std::string path, bool watch = true);
		unsigned int CreateShader(const std::string& source, const unsigned int type);
		void ShaderSourceChangedCallback(std::string path, FileWatchStatus changeType);

	public:
		ENGINE_API Shader();
		ENGINE_API Shader(ShaderStageInfo stageInfo);
		ENGINE_API ~Shader();

		ENGINE_API ShaderStageInfo& GetStages();
		ENGINE_API void UpdateStages(ShaderStageInfo stageInfo);

		ENGINE_API void Bind();
		ENGINE_API void Unbind();

		ENGINE_API unsigned int GetProgram();
		ENGINE_API unsigned int GetUniformCount();

		ENGINE_API void Set(int& location, int value) const;
		ENGINE_API void Set(int& location, bool value) const;
		ENGINE_API void Set(int& location, float value) const;
		ENGINE_API void Set(int& location, double value) const;
		ENGINE_API void Set(int& location, glm::vec2 value) const;
		ENGINE_API void Set(int& location, glm::vec3 value) const;
		ENGINE_API void Set(int& location, glm::vec4 value) const;
		ENGINE_API void Set(int& location, glm::mat3 value) const;
		ENGINE_API void Set(int& location, glm::mat4 value) const;

		ENGINE_API void Set(std::string locationName, int value);
		ENGINE_API void Set(std::string locationName, bool value);
		ENGINE_API void Set(std::string locationName, float value);
		ENGINE_API void Set(std::string locationName, double value);
		ENGINE_API void Set(std::string locationName, glm::vec2 value);
		ENGINE_API void Set(std::string locationName, glm::vec3 value);
		ENGINE_API void Set(std::string locationName, glm::vec4 value);
		ENGINE_API void Set(std::string locationName, glm::mat3 value);
		ENGINE_API void Set(std::string locationName, glm::mat4 value);

		/// <returns>Information about the uniform at location, or an invalid struct if outside of bounds</returns>
		ENGINE_API ShaderUniform* GetUniformInfo(int location);

		/// <returns>Information about the uniform at locationName, or an invalid struct if not found</returns>
		ENGINE_API ShaderUniform* GetUniformInfo(std::string& locationName);
	};
}
#pragma once

// Generates a hash from a UNICODE_STRING using the SDBM hashing algorithm.
UINT32 GetHash(__in UNICODE_STRING *pStr);

/*
    Sends an access request to the client application.
    pAccessData must be allocated with ExAllocatePoolWithTag or a similar
    function; it will be freed by this function.
*/
NTSTATUS ApRequestAccess(__in PACCESS_DATA pAccessData);

// Determines whether the specified path is under protection.
PAP_PROTECTED_ENTRY ApIsUnderProtect(__in PRTL_GENERIC_TABLE pGenericTable, __in UNICODE_STRING *pName);

// Converts an NTSTATUS to its string representation. Used with the DbgPrintStatus macro.
PCHAR Status2String(__in NTSTATUS status);
package csw.framework.javadsl.components;

import akka.actor.typed.javadsl.ActorContext;
import akka.actor.typed.javadsl.Adapter;
import akka.stream.ActorMaterializer;
import akka.stream.ThrottleMode;
import akka.stream.javadsl.Sink;
import akka.stream.javadsl.Source;
import csw.common.components.command.ComponentStateForCommand;
import csw.common.components.framework.SampleComponentState;
import csw.framework.CurrentStatePublisher;
import csw.framework.javadsl.JComponentHandlers;
import csw.framework.models.JCswContext;
import csw.command.client.messages.TopLevelActorMessage;
import csw.params.commands.*;
import csw.location.api.models.TrackingEvent;
import csw.params.core.generics.Key;
import csw.params.core.models.Id;
import csw.params.javadsl.JKeyType;
import csw.params.core.generics.Parameter;
import csw.params.core.states.CurrentState;
import csw.params.core.states.StateName;
import csw.command.client.CommandResponseManager;
import csw.logging.api.javadsl.ILogger;

import java.time.Duration;
import java.util.concurrent.*;

import static csw.common.components.command.ComponentStateForCommand.*;

public class JSampleComponentHandlers extends JComponentHandlers {

    // Demonstrating logger accessibility in Java Component handlers
    private ILogger log;
    private CommandResponseManager commandResponseManager;
    private CurrentStatePublisher currentStatePublisher;
    private CurrentState currentState = new CurrentState(SampleComponentState.prefix(), new StateName("testStateName"));
    private ActorContext<TopLevelActorMessage> actorContext;

    JSampleComponentHandlers(ActorContext<TopLevelActorMessage> ctx, JCswContext cswCtx) {
        super(ctx, cswCtx);
        this.currentStatePublisher = cswCtx.currentStatePublisher();
        this.log = cswCtx.loggerFactory().getLogger(getClass());
        this.commandResponseManager = cswCtx.commandResponseManager();
        this.actorContext = ctx;
    }

    @Override
    public CompletableFuture<Void> jInitialize() {
        log.debug("Initializing Sample component");
        try {
            Thread.sleep(100);
        } catch (InterruptedException ignored) {
        }
        return CompletableFuture.runAsync(() -> {
            //#currentStatePublisher
            CurrentState initState = currentState.add(SampleComponentState.choiceKey().set(SampleComponentState.initChoice()));
            currentStatePublisher.publish(initState);
            //#currentStatePublisher
        });
    }

    @Override
    public void onLocationTrackingEvent(TrackingEvent trackingEvent) {
    }

    @Override
    public CommandResponse.ValidateCommandResponse validateCommand(ControlCommand controlCommand) {
        if (controlCommand.commandName().equals(hcdCurrentStateCmd())) {
            // This is special because test doesn't want these other CurrentState values published
            return new CommandResponse.Accepted(controlCommand.runId());
        } else if (controlCommand.commandName().equals(crmAddOrUpdateCmd())) {
            return new CommandResponse.Accepted(controlCommand.runId());
        } else {
            // All other tests
            CurrentState submitState = currentState.add(SampleComponentState.choiceKey().set(SampleComponentState.commandValidationChoice()));
            currentStatePublisher.publish(submitState);

            // Special case to accept failure after validation
            if (controlCommand.commandName().equals(failureAfterValidationCmd())) {
                return new CommandResponse.Accepted(controlCommand.runId());
            } else if (controlCommand.commandName().name().contains("failure")) {
                return new CommandResponse.Invalid(controlCommand.runId(), new CommandIssue.OtherIssue("Testing: Received failure, will return Invalid."));
            } else {
                return new CommandResponse.Accepted(controlCommand.runId());
            }
        }
    }

    @Override
    public CommandResponse.SubmitResponse onSubmit(ControlCommand controlCommand) {
        // Adding item from CommandMessage paramset to ensure things are working
        if (controlCommand.commandName().equals(crmAddOrUpdateCmd())) {
            return crmAddOrUpdate((Setup) controlCommand);
        } else {
            CurrentState submitState = currentState.add(SampleComponentState.choiceKey().set(SampleComponentState.submitCommandChoice()));
            currentStatePublisher.publish(submitState);
            return processSubmitCommand(controlCommand);
        }
    }

    @Override
    public void onOneway(ControlCommand controlCommand) {
        if (controlCommand.commandName().equals(hcdCurrentStateCmd())) {
            // Special handling for oneway to test current state
            processCurrentStateOnewayCommand((Setup) controlCommand);
        } else {
            // Adding item from CommandMessage paramset to ensure things are working
            CurrentState onewayState = currentState.add(SampleComponentState.choiceKey().set(SampleComponentState.oneWayCommandChoice()));
            currentStatePublisher.publish(onewayState);
            processOnewayCommand(controlCommand);
        }
    }

    private CommandResponse.SubmitResponse processSubmitCommand(ControlCommand controlCommand) {
        publishCurrentState(controlCommand);
        if (controlCommand.commandName().equals(immediateCmd())) {
            return new CommandResponse.Completed(controlCommand.runId());
        } else if (controlCommand.commandName().equals(immediateResCmd())) {
            Parameter<Integer> param = JKeyType.IntKey().make("encoder").set(22);
            Result result = new Result(controlCommand.source().prefix()).add(param);
            return new CommandResponse.CompletedWithResult(controlCommand.runId(), result);
        } else if (controlCommand.commandName().equals(ComponentStateForCommand.matcherCmd())) {
            processCommandWithMatcher(controlCommand);
            return new CommandResponse.Started(controlCommand.runId());
        } else if (controlCommand.commandName().equals(failureAfterValidationCmd())) {
            return processCommandWithoutMatcher(controlCommand);
        } else if (controlCommand.commandName().equals(ComponentStateForCommand.longRunningCmd())) {
            return processCommandWithoutMatcher(controlCommand);
        }
        return new CommandResponse.Completed(controlCommand.runId());
    }

    //#addOrUpdateCommand
    private CommandResponse.SubmitResponse crmAddOrUpdate(Setup setup) {
        // This simulates some worker task doing something that finishes after onSubmit returns
        Runnable task = new Runnable() {
            @Override
            public void run() {
                commandResponseManager.addOrUpdateCommand(new CommandResponse.Completed(setup.runId()));
            }
        };

        // Wait a bit and then set CRM to Completed
        ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();
        executor.schedule(task, 1, TimeUnit.SECONDS);

        // Return Started from onSubmit
        return new CommandResponse.Started(setup.runId());
    }
    //#addOrUpdateCommand

    private void processCurrentStateOnewayCommand(Setup setup) {
        //#subscribeCurrentState
        Key<Integer> encoder = JKeyType.IntKey().make("encoder");
        int expectedEncoderValue = setup.jGet(encoder).orElseThrow().head();
        CurrentState currentState = new CurrentState(prefix(), new StateName("HCDState")).add(encoder().set(expectedEncoderValue));
        currentStatePublisher.publish(currentState);
        //#subscribeCurrentState
    }

    private void processOnewayCommand(ControlCommand controlCommand) {
        publishCurrentState(controlCommand);
        if (controlCommand.commandName().equals(ComponentStateForCommand.matcherCmd())) {
            processCommandWithMatcher(controlCommand);
        }
        // Nothing else done in oneway
    }

    private void processCommandWithMatcher(ControlCommand controlCommand) {
        Source.range(1, 10)
                .map(i -> {
                    currentStatePublisher.publish(new CurrentState(controlCommand.source(), new StateName("testStateName")).add(JKeyType.IntKey().make("encoder").set(i * 10)));
                    return i;
                })
                .throttle(1, Duration.ofMillis(100), 1, ThrottleMode.shaping())
                .runWith(Sink.ignore(), ActorMaterializer.create(Adapter.toUntyped(actorContext.getSystem())));
    }

    private CommandResponse.SubmitResponse processCommandWithoutMatcher(ControlCommand controlCommand) {
        if (controlCommand.commandName().equals(failureAfterValidationCmd())) {
            // Set CRM to Error after 1 second
            sendCRM(controlCommand.runId(), new CommandResponse.Error(controlCommand.runId(), "Unknown Error occurred"));
            return new CommandResponse.Started(controlCommand.runId());
        } else {
            Parameter<Integer> parameter = JKeyType.IntKey().make("encoder").set(20);
            Result result = new Result(controlCommand.source().prefix()).add(parameter);

            // Set CRM to Completed after 1 second
            sendCRM(controlCommand.runId(), new CommandResponse.CompletedWithResult(controlCommand.runId(), result));
            return new CommandResponse.Started(controlCommand.runId());
        }
    }

    private void sendCRM(Id runId, CommandResponse.SubmitResponse response) {
        Runnable task = new Runnable() {
            @Override
            public void run() {
                commandResponseManager.addOrUpdateCommand(response);
            }
        };

        // Wait a bit and then update the CRM
        ScheduledExecutorService executor = Executors.newSingleThreadScheduledExecutor();
        executor.schedule(task, 1, TimeUnit.SECONDS);
    }

    private void publishCurrentState(ControlCommand controlCommand) {
        CurrentState commandState;
        if (controlCommand instanceof Setup) {
            commandState = new CurrentState(SampleComponentState.prefix(), new StateName("testStateSetup"))
                    .add(SampleComponentState.choiceKey().set(SampleComponentState.setupConfigChoice()))
                    .add(controlCommand.paramSet().head());
        } else {
            commandState = currentState.add(SampleComponentState.choiceKey().set(SampleComponentState.observeConfigChoice())).add(controlCommand.paramSet().head());
        }

        // DEOPSCSW-372: Provide an API for PubSubActor that hides actor based interaction
        currentStatePublisher.publish(commandState);
    }

    @Override
    public CompletableFuture<Void> jOnShutdown() {
        return CompletableFuture.runAsync(() -> {
            CurrentState shutdownState = currentState.add(SampleComponentState.choiceKey().set(SampleComponentState.shutdownChoice()));
            currentStatePublisher.publish(shutdownState);
        });
    }

    @Override
    public void onGoOffline() {
        CurrentState offlineState = currentState.add(SampleComponentState.choiceKey().set(SampleComponentState.offlineChoice()));
        currentStatePublisher.publish(offlineState);
    }

    @Override
    public void onGoOnline() {
        CurrentState onlineState = currentState.add(SampleComponentState.choiceKey().set(SampleComponentState.onlineChoice()));
        currentStatePublisher.publish(onlineState);
    }
}
# Generated by Django 3.0.7 on 2020-09-16 08:44

from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('data_ocean', '0011_auto_20200915_1923'),
    ]

    operations = [
        migrations.AddField(
            model_name='registryupdatermodel',
            name='unzip_file_arch_length',
            field=models.PositiveIntegerField(blank=True, default=0),
        ),
        migrations.AddField(
            model_name='registryupdatermodel',
            name='unzip_file_name',
            field=models.CharField(blank=True, max_length=255, null=True),
        ),
        migrations.AddField(
            model_name='registryupdatermodel',
            name='unzip_file_real_length',
            field=models.PositiveIntegerField(blank=True, default=0),
        ),
        migrations.AddField(
            model_name='registryupdatermodel',
            name='unzip_message',
            field=models.CharField(blank=True, max_length=255, null=True),
        ),
        migrations.AddField(
            model_name='registryupdatermodel',
            name='unzip_status',
            field=models.BooleanField(blank=True, default=False),
        ),
    ]
package com.egoveris.deo.web.satra.monitor;

import com.egoveris.deo.model.model.SolicitudQuartzDTO;
import org.zkoss.zul.Label;
import org.zkoss.zul.Listcell;
import org.zkoss.zul.Listitem;
import org.zkoss.zul.ListitemRenderer;

public class QuartzItemRenderer implements ListitemRenderer {

    public void render(Listitem listitem, Object data, int arg2) throws Exception {
        SolicitudQuartzDTO solicitud = (SolicitudQuartzDTO) data;
        listitem.setValue(data);
        addListcell(listitem, solicitud.getNombreJob());
        addListcell(listitem, solicitud.getNombreTrigger());
        addListcell(listitem, solicitud.getEstado());
        if (solicitud.getNextFireTime() != null) {
            addListcell(listitem, solicitud.getNextFireTime().toString());
        } else {
            addListcell(listitem, null);
        }
        addListcell(listitem, solicitud.getGrupo());
        addListcell(listitem, solicitud.getProximoReintento());
        if (solicitud.getCronExpression() != null) {
            addListcell(listitem, solicitud.getCronExpression().toString());
            listitem.setCheckable(false);
            listitem.setDisabled(false);
        } else {
            addListcell(listitem, null);
        }
    }

    private void addListcell(Listitem listitem, String value) {
        Listcell lc = new Listcell();
        Label lb = new Label(value);
        lb.setParent(lc);
        lc.setParent(listitem);
    }
}