I am proud to announce that I have officially sold my Nissan Skyline GT-R. This means I will no longer accidentally open the passenger door to my normal car when I’m trying to go somewhere. You’d know all about my Skyline sale if you followed me on Twitter, because I announced that I was auctioning it on Bring a Trailer, where it garnered a sale price of $21,000. You’d also know all about the Skyline if you followed along throughout my last year of ownership here on Jalopnik, where I’ve done some exciting things with it, like drive backwards through a fast food drive-thru, compare it to a new Nissan GT-R, measure its horsepower at a dyno, and get pulled over twice in one night by suspicious police officers. But before it goes away for good, I’ve decided to write one more column that sums up my experiences owning one of the most famous Japanese cars of all time. There’s also a video, which I highly suggest you watch, unless you work in one of those offices where they don’t allow people to view moving pictures. In that case, I highly suggest you take a bathroom break and watch it on your phone as you sit on the toilet. I’ve now owned 21 cars in my lifetime—everything from a Prius to a Ferrari —and I’m happy to report that the Skyline easily finds itself among my top five in terms of driving experience. When I got the Skyline, I figured it would be a typical 1980s-era Nissan: it would creak and rattle, it would wallow around bumps with saggy suspension, and it would feel underpowered to the extreme. In fact, I don’t think I’ve ever been as apathetic to a car purchase in my entire life as I was when I bought the Skyline, except for the time I found out I’d be getting my brother’s old Cube. But the truth is the Skyline’s driving experience was exactly the opposite of what I was expecting. There was no wallowing. There was no slowness. There was no loose steering. This car was tight, composed, and exceptionally flat when cornering. There weren’t even any rattles. In fact, the whole driving experience reminded me of a Porsche, which I never really thought a 25-year-old Nissan would do. And while it wasn’t lightning fast by any means, I think 300 horsepower is just about the magic number for a car like this: enough to have some fun on the road, but not enough to violate every legal speed limit on earth with one four-second push of the throttle, like my old Cadillac CTS-V Wagon could. The only real drawback to the Skyline’s driving experience was its most obvious difference from a normal car: right-hand drive. I hated it. Although a lot of Americans with imported cars say they eventually get used to it, I never did—probably because I didn’t drive the Skyline very much. It was always a concerted effort to stay in the correct part of the lane, to remember that the turn signal stalk was on the opposite side, and to shift with my left hand. Admittedly, this is a minor drawback, and I’d never let it stop me from buying this car, but it’s worth noting for all of you who want to be mad JDM tyte, yo. But the excellent driving experience wasn’t the most surprising thing about owning the Skyline. By far, the most surprising thing about owning the Skyline was the attention that it received from virtually everyone I encountered. Now that I’m free of Skyline ownership, I can admit my own feelings about the car’s appearance: about a year ago, I’m not sure I would’ve noticed a silver R32 Skyline GT-R on the road. 
To me, the thing looks like a Maxima with circular tail lights: it blends in to the extreme, with few unique lines to distinguish it from any other boxy 1980s Nissan you see driving around. A year ago, I would’ve been more likely to turn my head and watch a garden-variety Porsche 911 go by than an R32 GT-R. But not everyone feels this way. I know this because I received an engine rev from virtually every single New Jersey resident with a Chevy Cobalt SS, or a Dodge Neon SRT-4, or an old WRX, all of whom immediately picked up on exactly what I was driving. And it wasn’t just car people: random people on the street would stop me and ask why I was driving on the wrong side of the car, whether I imported the car from Britain, and—occasionally—whether I was currently on a long road trip from Europe. I never thought I would use the phrase “I imported it from Japan” so many times in one year. In the end, I think this car got more attention than my Ferrari, and in a good way. I always hated when people would approach me at gas stations in my Ferrari and ask “How much does it cost?” or “What do you do for a living?” But when I got approached in the Skyline at a gas station, it was usually by a guy with a high-mileage E36 M3 who wanted to talk about cars, not money. There’s nothing better than a 20-minute gas station conversation with a car enthusiast where you both realize you previously owned a first-generation Cadillac CTS-V. Beyond the attention, Skyline ownership was a lot like owning any regular car. I had no trouble getting it titled and registered thanks to the good folks at Japanese Classics, who sold me the car and did all the legwork for me. It took regular pump gas, it never really had any issues, and it never required me to take unusual angles to enter parking lots (like the Ferrari) or measure the distance between two posts to make sure it would fit somewhere (like the Hummer). The only unusual part about the ownership experience was insurance: my regular insurance company wouldn’t handle it, so I had to go through Hagerty, an excellent classic car insurer with a great reputation. They issued me a policy, but there was a mileage restriction—and I had to keep the car in a locked single-car garage, which I don’t have at my home. As a result, I stored the car about 15 miles from my house. Optimistically, I paid extra for a 6,000-mile insurance policy, but with the inconvenience of the car being so far away, I barely covered 2,000 miles. A very unexpected part of Skyline ownership came from writing about it on the Internet. By now, virtually everyone on Jalopnik understands what I’m doing: every year, I ask you to help me choose a car, then I buy one of your top suggestions, then I write about it for a year, then I sell it. But the Skyline had a life of its own on YouTube and other car websites well beyond Jalopnik, where “Skyline people” couldn’t seem to understand why I was making so many videos with the car. In fact, dozens of people accused me of “bragging” about owning a Skyline, which I found especially amusing. Prior to this accusation, I didn’t think it was possible to brag about owning a 25-year-old car worth less than a new Corolla. The reality is that I wrote about the Skyline less than I wrote about my Hummer—and “Hummer people” didn’t seem to mind. In fact, when I went to sell my Hummer on the Hummer forums, everyone was very complimentary. 
And so, I believe I’ve discovered something about the Skyline that holds true about the E46 BMW M3, stanced Golfs, and countless other cars: owning the car is a lot of fun. As long as you don’t go to any of the owner meetups. Many of you probably want to know what it cost me to own my Skyline for a year. I know this because many of you have e-mailed me and asked: What did it cost you to own a Skyline for a year? Well, here’s the situation: I bought the car in March for $20,995, which included all import costs. I sold the car last week for $21,000. This represents a five-dollar profit, which I used to purchase two packs of Skittles. I took out all the yellows, if anyone in the Philadelphia area would like to share. Of course, this doesn’t quite tell the whole story, but it tells most of it. My insurance policy was $1,600 for the year, but I sold the car after nine months, so I expect to see a quarter of that back. My storage fees were about $180 per month, though you probably won’t have to pay this if you’re not a city-dweller with an aging Range Rover occupying your only parking space. And… that was it. Yes, there were taxes and registration fees, just like any normal car. And yes, I put fuel into it. But the car never needed one repair, never had one problem, and never cost me anything that I wasn’t expecting. In fact, the biggest cost—beyond insurance, and taxes, and fuel, and other stuff you’d pay for any normal car—was a Dodge Challenger: the one-way rental car I used to pick up the Skyline back in March. After a year of writing about anything, it’s time to move on. I couldn’t wait to sell my Ferrari. I couldn’t wait to sell my Hummer. And now, after more than two-dozen columns and videos dissecting every aspect of Skyline ownership, I’m pretty excited to move along this car, too. But I have to say that I’m really glad I got the chance to spend a year with my Skyline, because it totally changed my opinion about this car. And the next time I see one in traffic, I promise you that I’ll turn my head to watch it go by.
#pragma once #include "dataconsts.h" #include <array> namespace Component { struct StatusEffects { std::array<RoseCommon::StatusEffect, RoseCommon::MAX_STATUS_EFFECTS> effects; }; }
/* * Copyright (C) by Argonne National Laboratory * See COPYRIGHT in top-level directory */ #include <mpi.h> #include <stdlib.h> #include <stdio.h> #include <string.h> static void handle_error(int errcode, const char *str) { char msg[MPI_MAX_ERROR_STRING]; int resultlen; MPI_Error_string(errcode, msg, &resultlen); fprintf(stderr, "%s: %s\n", str, msg); MPI_Abort(MPI_COMM_WORLD, 1); } #define CHECK(fn) {int errcode; errcode = (fn); if (errcode != MPI_SUCCESS) handle_error(errcode, #fn); } static int hint_check(MPI_Info info_used, const char *key, const char *expected) { char value[MPI_MAX_INFO_VAL + 1]; int flag; CHECK(MPI_Info_get(info_used, key, MPI_MAX_INFO_VAL, value, &flag)); if (strcmp(expected, value)) { fprintf(stderr, "expected value \"%s\" for key \"%s\" got \"%s\"\n", expected, key, value); return 1; } return 0; } int main(int argc, char **argv) { setenv("ROMIO_HINTS", argv[1], 1); MPI_File fh; MPI_Info info_used, info_mine; int nr_errors = 0; MPI_Init(&argc, &argv); MPI_Info_create(&info_mine); MPI_Info_set(info_mine, "romio_cb_read", "disable"); CHECK(MPI_File_open(MPI_COMM_WORLD, argv[1], MPI_MODE_RDONLY, info_mine, &fh)); CHECK(MPI_File_get_info(fh, &info_used)); nr_errors += hint_check(info_used, "ind_rd_buffer_size", "49"); nr_errors += hint_check(info_used, "romio_no_indep_rw", "true"); if (nr_errors == 0) printf(" No Errors\n"); CHECK(MPI_Info_free(&info_mine)); CHECK(MPI_Info_free(&info_used)); CHECK(MPI_File_close(&fh)); MPI_Finalize(); return nr_errors; }
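For context, ROMIO reads system-wide hints from the file named by the ROMIO_HINTS environment variable, one hint name and value per line. A hints file consistent with the checks in this test might look like the following (hypothetical contents; the actual file is supplied by the test harness as argv[1]):

ind_rd_buffer_size 49
romio_no_indep_rw true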
<filename>Point.cpp<gh_stars>1-10 #include "Point.h" // Create a point from coordinates Point::Point(double _x, double _y) { x = _x; y = _y; mass = 10.0; fixed = false; road = false; } // Created a point from coordinates and define whether it is fixed Point::Point(double _x, double _y, bool _fixed) { x = _x; y = _y; mass = 10.0; fixed = _fixed; road = false; } // Create a point on the road Point::Point(double _x, double _y, bool _fixed, bool _road) { x = _x; y = _y; mass = 10.0; fixed = _fixed; road = _road; }
def read_instances(self, rows): instances = [self.read_fields(rows)] if self.label_field: for instance in instances: instance[LABEL_KEY] = self.chunk_func(instance[self.label_field]) return instances
Electronic Arts' exclusive deal with Disney to make games based on Star Wars should have been a godsend—a contractual coup that could carry the company forward for years to come. Instead, it appears to be more akin to the Executor trying to parallel park in front of the Death Star: According to this CNBC report, EA has shed $3 billion in stock value since the launch of Star Wars Battlefront 2, and the fallout could impact the future direction of its entire business. EA's share price is down 8.5 percent since the beginning of November, compared to an increase of five percent for Take-Two, and 0.7 percent for Activision, over the same period. It's not a meltdown—EA's stock price is still up 39 percent over 2017, having reached an all-time high in October. But backlash against in-your-face loot boxes lit a fire, and EA's decision to turn them off—temporarily—only emphasized the game's apparent inability to function well without them. Initial sales have reportedly flagged, but the real problem is the loot box boondoggle. EA reported earlier this year (via GamesIndustry) that the microtransaction-driven FIFA Ultimate Team is now worth $800 million annually, but the need to clean up the Battlefront 2 mess could see that figure cut. In a recent investors note, Cowen and Co. analyst Doug Creutz said, "We think the time has come for the industry to collectively establish a set of standards for MTX implementation, both to repair damaged player perceptions and avoid the threat of regulation." Interestingly, while there have been reports of Disney's displeasure with EA over this whole mess, EA CFO Blake Jorgensen said at the Credit Suisse Annual Technology, Media and Telecom Conference (via Polygon) that it was actually a commitment to Star Wars canon and "realism" that led it to fill Battlefront 2 loot boxes with credits, Star Cards, and similar items, rather than throwaway cosmetics that don't have any impact on gameplay. "The one thing we're very focused on and they are extremely focused on is not violating the canon of Star Wars," he said. "It's an amazing brand that’s been built over many, many years, and so if you did a bunch of cosmetic things, you might start to violate the canon, right? Darth Vader in white probably doesn't make sense, versus in black. Not to mention you probably don't want Darth Vader in pink. No offense to pink, but I don't think that's right in the canon." "So, there might be things that we can do cosmetically, and we’re working with Lucas[film] on that. But coming into it, it wasn’t as easy as if we were building a game around our own IP where it didn’t really matter. It matters in Star Wars, because Star Wars fans want realism." It's an admirable commitment to the integrity of the brand, I suppose, but I can't help thinking that there was probably a better way to go about it. Also, I suspect that these guys would disagree.
Ventriculo-atrial (VA) conduction was studied by ventricular stimulation at increasing rates and atrial mapping in 126 patients: those without ventricular preexcitation (WPW) or supraventricular tachycardia (SVT) (Group I: 60 cases), those with the WPW syndrome with or without SVT (Group II: 30 cases), those with SVT without WPW (Group III: 53 cases), and those with short PR intervals (Group IV: 3 cases). In Group I, 22 patients had VA block, 10 had concealed accessory pathways and 28 had nodal VA conduction. In Group II, 2 patients had VA block, 6 had nodal VA conduction and 22 had preferential retrograde conduction through the Kent bundle. In Group III, 9 patients had concealed Kent bundles and 24 had nodal retrograde conduction. In Group IV, the results were varied. The characteristics of retrograde VA conduction therefore often differ from those of anterograde conduction. Fifty-two attacks of SVT were recorded; in the 18 cases of Group II, 5 septal, 3 right lateral and 8 left lateral Kent bundles and 2 intranodal reentries were demonstrated. In the 33 cases of SVT without overt WPW (Group III), a concealed accessory pathway was demonstrated in 9 cases. In all, approximately half (25 out of 52) of the SVT attacks were due to reentry involving an accessory pathway, which was concealed in about one third of cases (9 out of 25) and more often situated on the left border than on the right or in the septum.
Performance Enhancement of the Attitude Estimation of a Small Quadrotor by Vision-based Marker Tracking

Abstract: The accuracy of a small, low-cost CCD camera is insufficient to provide data for precisely tracking unmanned aerial vehicles (UAVs). This study shows how a UAV can hover over a human-designated tracking object using a CCD camera rather than imprecise GPS data. To realize this, UAVs need to recognize their attitude and position in known as well as unknown environments, and their localization should occur naturally. Estimating a UAV's attitude through environment recognition is therefore one of the most important problems for UAV hovering. In this paper, we describe a method for estimating the attitude of a UAV using image information from a marker on the floor. The method combines the position observed from GPS sensors with the attitude estimated from images captured by a fixed camera. Using the a priori known path of the UAV in world coordinates and a perspective camera model, we derive the geometric constraint equations that relate the image-frame coordinates of the floor marker to the estimated UAV attitude. Since these equations are based on the estimated position, measurement error is always present; the proposed method uses the error between the observed and estimated image coordinates to localize the UAV. A Kalman filter scheme is applied, and its performance is verified by image processing results and experiments.

Key Words: Quadcopter, CCD camera, GPS, Attitude estimation, Template matching.
Received: Mar. 22, 2015. Revised: Apr. 5, 2015. Accepted: Sep. 28, 2015.
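As a rough illustration of the filtering step described above (a minimal sketch, not the authors' implementation), a discrete Kalman filter can combine a prediction of the UAV state with a measurement derived from the marker's image coordinates; the innovation is exactly the error between observed and predicted image coordinates. The state layout, matrices, and noise values below are all hypothetical.

import numpy as np

# Minimal linear Kalman filter step. x holds attitude-related state (e.g. roll,
# pitch, yaw); z is the marker's observed image coordinates; H is a (linearized)
# map from state to predicted image coordinates. All values are placeholders.
def kf_step(x, P, z, F, H, Q, R):
    x_pred = F @ x                      # predict from motion model / known path
    P_pred = F @ P @ F.T + Q
    innov = z - H @ x_pred              # observed minus predicted image coords
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ innov          # correct the attitude estimate
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy example: 3-state attitude vector, 2-D image-coordinate measurement
x, P = np.zeros(3), np.eye(3)
F, Q = np.eye(3), 0.01 * np.eye(3)
H = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
R = 0.5 * np.eye(2)
z = np.array([0.02, -0.01])             # made-up observed marker coordinates
x, P = kf_step(x, P, z, F, H, Q, R)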
/** * SRP-6 Session Key (K). * <p> * This variable is computed as: * <pre> * K = H_Inteleave(S) * </pre> * where {@code S} is a shared secret and {@code H_Interleave} a special * hashing function, used to generate the session key that is twice as long. * <p> * This implementation closely follows the algorithm described in RFC 2945, * <a href="https://tools.ietf.org/html/rfc2945#section-3.1">Section 3.1 - Interleaved SHA</a>. * <p> * The underlying hash function need not be SHA-1. */ public final class SRP6SessionKey implements Bytes { /** * The hash function used to perform the hashing inside the H_Interleave * algorithm. */ private final ImmutableMessageDigest hashFunction; /** SRP-6 variable: shared secret (S). */ private final SRP6IntegerVariable sharedSecret; /** * The byte order to use when converting {@code sharedSecret} to a byte * sequence. */ private final ByteOrder byteOrder; /** * Creates a new SRP-6 Session Key from the specified hash function, shared * secret and its preferred byte order. * * @param hashFunction the hash function used to perform the hashing inside * the H_Interleave algorithm * @param sharedSecret SRP-6 variable: shared secret (S) * @param byteOrder the byte order to use when converting * {@code sharedSecret} to a byte sequence */ @SuppressWarnings("checkstyle:hiddenfield") public SRP6SessionKey( final ImmutableMessageDigest hashFunction, final SRP6IntegerVariable sharedSecret, final ByteOrder byteOrder ) { this.hashFunction = hashFunction; this.sharedSecret = sharedSecret; this.byteOrder = byteOrder; } @Override public byte[] asArray() { byte[] t = sharedSecret.bytes(byteOrder).asArray(); int off = t.length % 2; int halfSize = t.length / 2; byte[] e = new byte[halfSize]; byte[] f = new byte[halfSize]; for (int i = off; i < halfSize; i++) { e[i - off] = t[2 * i - off]; f[i - off] = t[2 * i + 1 - off]; } byte[] g = hashFunction.update(e).digest(); byte[] h = hashFunction.update(f).digest(); byte[] res = new byte[g.length + h.length]; for (int i = 0; i < res.length / 2; i++) { res[2 * i ] = g[i]; res[2 * i + 1] = h[i]; } return res; } }
Apical leakage of resin-based root canal sealers with a new computerized fluid filtration meter. In this in vitro study, the apical leakage of three root canal sealers (AH Plus, Diaket, and EndoREZ) was evaluated using a new computerized fluid filtration meter. Forty-five extracted human premolar teeth with a single root and canal were used. The coronal part of each tooth was removed and the root canals were prepared with GT Rotary files using the crown-down technique. The roots were randomly divided into three groups of 15 samples, filled with one of the test materials and gutta-percha cones by the cold lateral condensation technique, and stored at 37 °C and 100% humidity for 7 days. One week later, apical root segments of 10 ± 0.05 mm were attached to the computerized fluid filtration meter. Apical leakage was quantified in µL/cmH2O/min. Statistical analysis indicated that root fillings with Diaket in combination with the cold lateral condensation technique showed lower apical leakage than the others (p < 0.05). In addition, the new computerized fluid filtration meter allowed easy quantitative measurement of leakage. As it is a newly developed device for measuring apical leakage of endodontic sealers, its reliability still needs to be tested.
import confuse class AlertMangerConfig(): SETTINGS_FILE = "conf.yml" TEMPLATE = { 'global': { 'log_level': str, 'lang': str, 'port': int, 'prometheus': str, 'ssl_verification': bool }, 'telegram':{ 'cli': { 'api_id': int, 'api_hash': str, 'session': str, }, 'admins': confuse.Sequence([ int ]) } } def __init__(self): source = confuse.YamlSource(self.SETTINGS_FILE) self._settings = confuse.RootView([source]) self._settings = self._settings.get(self.TEMPLATE) @property def webport(self): return self._settings['global']['port'] @property def lang(self): return self._settings['global']['lang'] @property def logLevel(self): return self._settings['global']['log_level'] @property def ssl_verification(self): return self._settings['global']['ssl_verification'] @property def cliSession(self): return self._settings['telegram']['cli']['session'] @property def cliApiId(self): return self._settings['telegram']['cli']['api_id'] @property def cliApiHash(self): return self._settings['telegram']['cli']['api_hash'] @property def admins(self): return self._settings['telegram']['admins'] @property def prometheus(self): return self._settings['global']['prometheus'] _config = AlertMangerConfig()
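For reference, the TEMPLATE above implies a conf.yml shaped roughly as follows; every value shown is a placeholder, not a real credential or shipped default.

# conf.yml (hypothetical values matching the TEMPLATE above)
global:
  log_level: INFO
  lang: en
  port: 8080
  prometheus: http://localhost:9090
  ssl_verification: true
telegram:
  cli:
    api_id: 123456
    api_hash: "0123456789abcdef0123456789abcdef"
    session: alertmanager
  admins:
    - 11111111
    - 22222222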
import copy import json import numpy as np import os import pytest import random import tempfile from ark.mibi import tiling_utils import ark.settings as settings from ark.utils import misc_utils from ark.utils import test_utils _TMA_TEST_CASES = [False, True] _RANDOMIZE_TEST_CASES = [['N', 'N'], ['N', 'Y'], ['Y', 'Y']] _MOLY_RUN_CASES = ['N', 'Y'] _MOLY_INTERVAL_SETTING_CASES = [False, True] def test_read_tiling_param(monkeypatch): # test 1: int inputs # test an incorrect response then a correct response user_inputs_int = iter([0, 1]) # make sure the function receives the incorrect input first then the correct input monkeypatch.setattr('builtins.input', lambda _: next(user_inputs_int)) # simulate the input sequence for sample_tiling_param = tiling_utils.read_tiling_param( "Sample prompt: ", "Sample error message", lambda x: x == 1, dtype=int ) # assert sample_tiling_param was set to 1 assert sample_tiling_param == 1 # test 2: str inputs # test an incorrect response then a correct response user_inputs_str = iter(['N', 'Y']) # make sure the function receives the incorrect input first then the correct input monkeypatch.setattr('builtins.input', lambda _: next(user_inputs_str)) # simulate the input sequence for sample_tiling_param = tiling_utils.read_tiling_param( "Sample prompt: ", "Sample error message", lambda x: x == 'Y', dtype=str ) # assert sample_tiling_param was set to 'Y' assert sample_tiling_param == 'Y' def test_read_tma_region_input(monkeypatch): # define a sample fovs list sample_fovs_list = test_utils.generate_sample_fovs_list( fov_coords=[(0, 0), (100, 100), (100, 100), (200, 200)], fov_names=["TheFirstFOV", "TheFirstFOV", "TheSecondFOV", "TheSecondFOV"] ) # define sample region_params to read data into sample_region_params = {rpf: [] for rpf in settings.REGION_PARAM_FIELDS} # basic error check: odd number of FOVs provided with pytest.raises(ValueError): sample_fovs_list_bad = sample_fovs_list.copy() sample_fovs_list_bad['fovs'] = sample_fovs_list_bad['fovs'][:3] # use the dummy user data to read values into the params lists tiling_utils._read_tma_region_input( sample_fovs_list_bad, sample_region_params ) # basic error check: start coordinate cannot be greater than end coordinate with pytest.raises(ValueError): # define a sample fovs list sample_fovs_list_bad = test_utils.generate_sample_fovs_list( fov_coords=[(100, 100), (0, 0), (0, 0), (100, 100)], fov_names=["TheFirstFOV", "TheFirstFOV", "TheSecondFOV", "TheSecondFOV"] ) # use the dummy user data to read values into the params lists tiling_utils._read_tma_region_input( sample_fovs_list_bad, sample_region_params ) # set the user inputs, also tests the validation check for num and spacing vals for x and y user_inputs = iter([300, 300, 100, 100, 3, 3, 1, 1, 'Y', 300, 300, 100, 100, 3, 3, 1, 1, 'Y']) # override the default functionality of the input function monkeypatch.setattr('builtins.input', lambda _: next(user_inputs)) # use the dummy user data to read values into the params lists tiling_utils._read_tma_region_input( sample_fovs_list, sample_region_params ) # assert the values were set properly assert sample_region_params['region_start_x'] == [0, 100] assert sample_region_params['region_start_y'] == [0, 100] assert sample_region_params['fov_num_x'] == [3, 3] assert sample_region_params['fov_num_y'] == [3, 3] assert sample_region_params['x_fov_size'] == [1, 1] assert sample_region_params['y_fov_size'] == [1, 1] assert sample_region_params['x_intervals'] == [[0, 50, 100], [100, 150, 200]] assert 
sample_region_params['y_intervals'] == [[0, 50, 100], [100, 150, 200]] assert sample_region_params['region_rand'] == ['Y', 'Y'] def test_read_non_tma_region_input(monkeypatch): # define a sample fovs list sample_fovs_list = test_utils.generate_sample_fovs_list( fov_coords=[(0, 0), (100, 100)], fov_names=["TheFirstFOV", "TheSecondFOV"] ) # define sample region_params to read data into sample_region_params = {rpf: [] for rpf in settings.REGION_PARAM_FIELDS} sample_region_params.pop('x_intervals') sample_region_params.pop('y_intervals') # set the user inputs user_inputs = iter([3, 3, 1, 1, 'Y', 3, 3, 1, 1, 'Y']) # override the default functionality of the input function monkeypatch.setattr('builtins.input', lambda _: next(user_inputs)) # use the dummy user data to read values into the params lists tiling_utils._read_non_tma_region_input( sample_fovs_list, sample_region_params ) # assert the values were set properly assert sample_region_params['region_start_x'] == [0, 100] assert sample_region_params['region_start_y'] == [0, 100] assert sample_region_params['fov_num_x'] == [3, 3] assert sample_region_params['fov_num_y'] == [3, 3] assert sample_region_params['x_fov_size'] == [1, 1] assert sample_region_params['y_fov_size'] == [1, 1] assert sample_region_params['region_rand'] == ['Y', 'Y'] @pytest.mark.parametrize('tma', _TMA_TEST_CASES) def test_generate_region_info(tma): sample_region_inputs = { 'region_start_x': [1, 1], 'region_start_y': [2, 2], 'fov_num_x': [3, 3], 'fov_num_y': [4, 4], 'x_fov_size': [5, 5], 'y_fov_size': [6, 6], 'region_rand': ['Y', 'Y'] } if tma: sample_region_inputs['x_interval'] = [[100, 200, 300], [100, 200, 300]] sample_region_inputs['y_interval'] = [[200, 400, 600], [200, 400, 600]] # generate the region params sample_region_params = tiling_utils.generate_region_info(sample_region_inputs) # assert both region_start_x's are 1 assert all( sample_region_params[i]['region_start_x'] == 1 for i in range(len(sample_region_params)) ) # assert both region_start_y's are 2 assert all( sample_region_params[i]['region_start_y'] == 2 for i in range(len(sample_region_params)) ) # assert both num_fov_x's are 3 assert all( sample_region_params[i]['fov_num_x'] == 3 for i in range(len(sample_region_params)) ) # assert both num_fov_y's are 4 assert all( sample_region_params[i]['fov_num_y'] == 4 for i in range(len(sample_region_params)) ) # assert both x_fov_size's are 5 assert all( sample_region_params[i]['x_fov_size'] == 5 for i in range(len(sample_region_params)) ) # assert both y_fov_size's are 6 assert all( sample_region_params[i]['y_fov_size'] == 6 for i in range(len(sample_region_params)) ) # assert both randomize's are 0 assert all( sample_region_params[i]['region_rand'] == 'Y' for i in range(len(sample_region_params)) ) if tma: # assert x_interval set properly for TMA assert all( sample_region_params[i]['x_interval'] == [100, 200, 300] for i in range(len(sample_region_params)) ) # assert y_interval set properly for TMA assert all( sample_region_params[i]['x_interval'] == [100, 200, 300] for i in range(len(sample_region_params)) ) else: # assert x_interval not set for non-TMA assert all( 'x_interval' not in sample_region_params[i] for i in range(len(sample_region_params)) ) # assert y_interval not set for non-TMA assert all( 'y_interval' not in sample_region_params[i] for i in range(len(sample_region_params)) ) @pytest.mark.parametrize('tma', _TMA_TEST_CASES) def test_set_tiling_params(monkeypatch, tma): # define a sample set of fovs if tma: sample_fovs_list = 
test_utils.generate_sample_fovs_list( fov_coords=[(0, 0), (100, 100), (100, 100), (200, 200)], fov_names=["TheFirstFOV", "TheFirstFOV", "TheSecondFOV", "TheSecondFOV"] ) else: sample_fovs_list = test_utils.generate_sample_fovs_list( fov_coords=[(0, 0), (100, 100)], fov_names=["TheFirstFOV", "TheSecondFOV"] ) sample_moly_point = test_utils.generate_sample_fov_tiling_entry( coord=(14540, -10830), name="MoQC" ) # set the user inputs user_inputs = iter([3, 3, 1, 1, 'Y', 3, 3, 1, 1, 'Y', 'Y', 'Y', 1]) # override the default functionality of the input function monkeypatch.setattr('builtins.input', lambda _: next(user_inputs)) # bad fov list path provided with pytest.raises(FileNotFoundError): tiling_utils.set_tiling_params('bad_fov_list_path.json', 'bad_moly_path.json') with tempfile.TemporaryDirectory() as temp_dir: # write fov list sample_fov_list_path = os.path.join(temp_dir, 'fov_list.json') with open(sample_fov_list_path, 'w') as fl: json.dump(sample_fovs_list, fl) # bad moly path provided with pytest.raises(FileNotFoundError): tiling_utils.set_tiling_params(sample_fov_list_path, 'bad_moly_path.json') # write moly point sample_moly_path = os.path.join(temp_dir, 'moly_point.json') with open(sample_moly_path, 'w') as moly: json.dump(sample_moly_point, moly) # run tiling parameter setting process with predefined user inputs sample_tiling_params, moly_point = tiling_utils.set_tiling_params( sample_fov_list_path, sample_moly_path, tma=tma ) # assert the fovs in the tiling params are the same as in the original fovs list assert sample_tiling_params['fovs'] == sample_fovs_list['fovs'] # assert region start x and region start y values are correct sample_region_params = sample_tiling_params['region_params'] fov_0 = sample_fovs_list['fovs'][0] fov_1 = sample_fovs_list['fovs'][1] assert sample_region_params[0]['region_start_x'] == fov_0['centerPointMicrons']['x'] assert sample_region_params[1]['region_start_x'] == fov_1['centerPointMicrons']['x'] assert sample_region_params[0]['region_start_y'] == fov_0['centerPointMicrons']['y'] assert sample_region_params[1]['region_start_y'] == fov_1['centerPointMicrons']['y'] # assert fov_num_x and fov_num_y are all set to 3 assert all( sample_region_params[i]['fov_num_x'] == 3 for i in range(len(sample_region_params)) ) assert all( sample_region_params[i]['fov_num_y'] == 3 for i in range(len(sample_region_params)) ) # assert x_fov_size and y_fov_size are all set to 1 assert all( sample_region_params[i]['x_fov_size'] == 1 for i in range(len(sample_region_params)) ) assert all( sample_region_params[i]['y_fov_size'] == 1 for i in range(len(sample_region_params)) ) # assert randomize is set to Y for both fovs assert all( sample_region_params[i]['region_rand'] == 'Y' for i in range(len(sample_region_params)) ) # assert moly run is set to Y assert sample_tiling_params['moly_run'] == 'Y' # assert moly interval is set to 1 assert sample_tiling_params['moly_interval'] == 1 # for TMAs, assert that the x interval and y intervals were created properly if tma: # TheFirstFOV assert sample_region_params[0]['x_intervals'] == [0, 50, 100] assert sample_region_params[0]['y_intervals'] == [0, 50, 100] # TheSecondFOV assert sample_region_params[1]['x_intervals'] == [100, 150, 200] assert sample_region_params[1]['y_intervals'] == [100, 150, 200] def test_generate_x_y_fov_pairs(): # define sample x and y pair lists sample_x_range = [0, 5] sample_y_range = [2, 4] # generate the sample (x, y) pairs sample_pairs = tiling_utils.generate_x_y_fov_pairs(sample_x_range, sample_y_range) assert 
sample_pairs == [(0, 2), (0, 4), (5, 2), (5, 4)] @pytest.mark.parametrize('randomize_setting', _RANDOMIZE_TEST_CASES) @pytest.mark.parametrize('moly_run', _MOLY_RUN_CASES) @pytest.mark.parametrize('moly_interval_setting', _MOLY_INTERVAL_SETTING_CASES) def test_create_tiled_regions_non_tma(randomize_setting, moly_run, moly_interval_setting): sample_fovs_list = test_utils.generate_sample_fovs_list( fov_coords=[(0, 0), (100, 100)], fov_names=["TheFirstFOV", "TheSecondFOV"] ) sample_region_inputs = { 'region_start_x': [0, 50], 'region_start_y': [100, 150], 'fov_num_x': [2, 4], 'fov_num_y': [4, 2], 'x_fov_size': [5, 10], 'y_fov_size': [10, 5], 'region_rand': ['N', 'N'] } sample_region_params = tiling_utils.generate_region_info(sample_region_inputs) sample_tiling_params = { 'fovFormatVersion': '1.5', 'fovs': sample_fovs_list['fovs'], 'region_params': sample_region_params } sample_moly_point = test_utils.generate_sample_fov_tiling_entry( coord=(14540, -10830), name="MoQC" ) sample_tiling_params['moly_run'] = moly_run sample_tiling_params['region_params'][0]['region_rand'] = randomize_setting[0] sample_tiling_params['region_params'][1]['region_rand'] = randomize_setting[1] if moly_interval_setting: sample_tiling_params['moly_interval'] = 3 tiled_regions = tiling_utils.create_tiled_regions( sample_tiling_params, sample_moly_point ) # retrieve the center points center_points = [ (fov['centerPointMicrons']['x'], fov['centerPointMicrons']['y']) for fov in tiled_regions['fovs'] ] # define the center points sorted actual_center_points_sorted = [ (x, y) for x in np.arange(0, 10, 5) for y in np.arange(100, 140, 10) ] + [ (x, y) for x in np.arange(50, 90, 10) for y in np.arange(150, 160, 5) ] # if moly_run is Y, add a point in between the two runs if moly_run == 'Y': actual_center_points_sorted.insert(8, (14540, -10830)) # add moly points in between if moly_interval_setting is set if moly_interval_setting: if moly_run == 'N': moly_indices = [3, 7, 11, 15, 19] else: moly_indices = [3, 7, 12, 16, 20] for mi in moly_indices: actual_center_points_sorted.insert(mi, (14540, -10830)) # easiest case: the center points should be sorted if randomize_setting == ['N', 'N']: assert center_points == actual_center_points_sorted # if there's any sort of randomization involved else: if moly_run == 'N': fov_1_end = 10 if moly_interval_setting else 8 else: fov_1_end = 11 if moly_interval_setting else 9 # only the second run is randomized if randomize_setting == ['N', 'Y']: # ensure the fov 1 center points are the same for both sorted and random assert center_points[:fov_1_end] == actual_center_points_sorted[:fov_1_end] # ensure the random center points for fov 2 contain the same elements # as its sorted version misc_utils.verify_same_elements( computed_center_points=center_points[fov_1_end:], actual_center_points=actual_center_points_sorted[fov_1_end:] ) # however, fov 2 sorted entries should NOT equal fov 2 random entries # NOTE: due to randomization, this test will fail once in a blue moon assert center_points[fov_1_end:] != actual_center_points_sorted[fov_1_end:] # both runs are randomized else: # ensure the random center points for fov 1 contain the same elements # as its sorted version misc_utils.verify_same_elements( computed_center_points=center_points[:fov_1_end], actual_center_points=actual_center_points_sorted[:fov_1_end] ) # however, fov 1 sorted entries should NOT equal fov 1 random entries # NOTE: due to randomization, this test will fail once in a blue moon assert center_points[:fov_1_end] != 
actual_center_points_sorted[:fov_1_end] # ensure the random center points for fov 2 contain the same elements # as its sorted version misc_utils.verify_same_elements( computed_center_points=center_points[fov_1_end:], actual_center_points=actual_center_points_sorted[fov_1_end:] ) # however, fov 2 sorted entries should NOT equal fov 2 random entries # NOTE: due to randomization, this test will fail once in a blue moon assert center_points[fov_1_end:] != actual_center_points_sorted[fov_1_end:] @pytest.mark.parametrize('randomize_setting', _RANDOMIZE_TEST_CASES) @pytest.mark.parametrize('moly_run', _MOLY_RUN_CASES) @pytest.mark.parametrize('moly_interval_setting', _MOLY_INTERVAL_SETTING_CASES) def test_create_tiled_regions_tma_test(randomize_setting, moly_run, moly_interval_setting): sample_fovs_list = test_utils.generate_sample_fovs_list( fov_coords=[(0, 0), (100, 100), (100, 100), (200, 200)], fov_names=["TheFirstFOV", "TheFirstFOV", "TheSecondFOV", "TheSecondFOV"] ) sample_region_inputs = { 'region_start_x': [0, 50], 'region_start_y': [100, 150], 'fov_num_x': [2, 4], 'fov_num_y': [4, 2], 'x_fov_size': [5, 10], 'y_fov_size': [10, 5], 'x_intervals': [[0, 50, 100], [100, 150, 200]], 'y_intervals': [[0, 50, 100], [100, 150, 200]], 'region_rand': ['N', 'N'] } sample_region_params = tiling_utils.generate_region_info(sample_region_inputs) sample_tiling_params = { 'fovFormatVersion': '1.5', 'fovs': sample_fovs_list['fovs'], 'region_params': sample_region_params } sample_moly_point = test_utils.generate_sample_fov_tiling_entry( coord=(14540, -10830), name="MoQC" ) sample_tiling_params['moly_run'] = moly_run sample_tiling_params['region_params'][0]['region_rand'] = randomize_setting[0] sample_tiling_params['region_params'][1]['region_rand'] = randomize_setting[1] if moly_interval_setting: sample_tiling_params['moly_interval'] = 5 tiled_regions = tiling_utils.create_tiled_regions( sample_tiling_params, sample_moly_point, tma=True ) # retrieve the center points center_points = [ (fov['centerPointMicrons']['x'], fov['centerPointMicrons']['y']) for fov in tiled_regions['fovs'] ] # define the center points sorted actual_center_points_sorted = [ (x, y) for x in np.arange(0, 150, 50) for y in np.arange(0, 150, 50) ] + [ (x, y) for x in np.arange(100, 250, 50) for y in np.arange(100, 250, 50) ] # if moly_run is Y, add a point in between the two runs if moly_run == 'Y': actual_center_points_sorted.insert(9, (14540, -10830)) # add moly points in between if moly_interval_setting is set if moly_interval_setting: if moly_run == 'N': moly_indices = [5, 11, 17] else: moly_indices = [5, 12, 18] for mi in moly_indices: actual_center_points_sorted.insert(mi, (14540, -10830)) # easiest case: the center points should be sorted if randomize_setting == ['N', 'N']: assert center_points == actual_center_points_sorted # if there's any sort of randomization involved else: if moly_run == 'N': fov_1_end = 10 if moly_interval_setting else 9 else: fov_1_end = 11 if moly_interval_setting else 10 # only the second run is randomized if randomize_setting == ['N', 'Y']: # ensure the fov 1 center points are the same for both sorted and random assert center_points[:fov_1_end] == actual_center_points_sorted[:fov_1_end] # ensure the random center points for fov 2 contain the same elements # as its sorted version misc_utils.verify_same_elements( computed_center_points=center_points[fov_1_end:], actual_center_points=actual_center_points_sorted[fov_1_end:] ) # however, fov 2 sorted entries should NOT equal fov 2 random entries # NOTE: due to 
randomization, this test will fail once in a blue moon assert center_points[fov_1_end:] != actual_center_points_sorted[fov_1_end:] # both runs are randomized else: # ensure the random center points for fov 1 contain the same elements # as its sorted version misc_utils.verify_same_elements( computed_center_points=center_points[:fov_1_end], actual_center_points=actual_center_points_sorted[:fov_1_end] ) # however, fov 1 sorted entries should NOT equal fov 1 random entries # NOTE: due to randomization, this test will fail once in a blue moon assert center_points[:fov_1_end] != actual_center_points_sorted[:fov_1_end] # ensure the random center points for fov 2 contain the same elements # as its sorted version misc_utils.verify_same_elements( computed_center_points=center_points[fov_1_end:], actual_center_points=actual_center_points_sorted[fov_1_end:] ) # however, fov 2 sorted entries should NOT equal fov 2 random entries # NOTE: due to randomization, this test will fail once in a blue moon assert center_points[fov_1_end:] != actual_center_points_sorted[fov_1_end:]
/// Split off rows from the top /// /// The returned sub-area will cover the given number of rows. Those rows /// will be taken from the area on which the function is called. /// pub fn split_top(&mut self, rows: u16) -> Area<'a, &'_ mut DrawHandle<'a, W>, W> { let row_b = std::cmp::min(self.row_a.saturating_add(rows), self.row_b); self.row_a = std::cmp::min(row_b, self.row_b); Area { handle: self.handle.borrow_mut(), row_a: self.row_a, col_a: self.col_a, row_b, col_b: self.col_b, phantom: Default::default(), } }
Alessandra Belloni Biography Alessandra Belloni was born in Rome, daughter of a marble stoneworker and a mother from a musical family of Rocca di Papa. Her maternal grandfather worked as a baker but was famous in his village for playing music, especially tambourine. At the age of 17 in 1971, Belloni moved to New York to visit her sister and pursue a career in music. In 1974 she studied acting at HB Studio and briefly acted in films, playing a Turkish princess in the film Fellini's Casanova, for which she learned belly dance. At New York University she studied under Dario Fo. She studied voice with Michael Warren and Walter Blazer. While back in Italy, she decided that film was not to be her career. Federico Fellini advised her: Io ho notato che tu sei molto seria... sei un'artista. Tu non sei come quelle che stanno qua. Sai che te dico: che tu devi torna' a New York. Non resta' qua perché se io avessi potuto fare quello che faccio qua in America, se sapessi meglio l'inglese, l'avrei fatto. (I notice that you are very serious... you're an artist. You are not like the other girls here... You know what I say? You've got to return to New York. Don't stay here because if I had been able to do in America what I do here, if I knew English better, I would have done it.) In 1980 she co-founded, with John La Barbera, the music, folk dance, and theater group I Giullari di Piazza ('the town square players'), which has performed in the United States and Europe, and is in residency at the Cathedral of St. John the Divine. She studied traditional tambourine technique with the Sicilian percussionist Alfio Antico, renowned for his ability to create "magical and primordial" atmospheres. She has also worked with percussionist Glen Velez, the leading figure in the revival of frame drumming in America, whom she met in 1982 through her work in Bread and Puppet Theater. Velez gave her a book, La Terra del Rimorso (The Land of Remorse) by Ernesto De Martino, which inspired her to study Apulian tarantism. Beginning in 1984, Belloni has taken part in the feast of San Rocco of Torrepaduli in Salento, Apulia, a traditional summer event for tambourines and pizziche. Her former husband's name was Dario Bollini. Work The traditional dance she teaches is presented as an ancient healing ritual for women suffering from repressed sexuality, abuse, powerlessness, and the feeling of being caught in a web that binds them. Belloni emphasizes that tarantella music and dance, as popularly known today around the world, is different from its origins in Apulian folk culture, going back to the ancient Greeks. The "spider bite" or tarantismo, being psychosexual injury, formerly called hysteria, affects women with depression and loneliness, and can be healed by drumming and dancing the ancient pizzica tarantata. Belloni teaches that a woman suffering from such "bite" is called a tarantata, and the music and dance for healing it is called pizzica, referring to the spider's bite. By dancing oneself into ecstatic trance with the support of the community, a sufferer seeks to expel the "venom" and be restored to health. Belloni has integrated her work studying women's therapeutic dance and drumming with traditions of the Black Madonna in several countries, including the Madonna of Montevergine in Campania. Like many before her, she connects devotion to the Black Madonna with pre-Christian Goddess worship dating back to the ancient Greeks, who colonized Apulia as part of Magna Graecia long before the rise of Rome. 
Belloni has brought into her art influences from song and drumming of Brazil, invoking Yemanjá and Oshun. For example, in "Canto di Sant'Irene" she sings "In Calabria è Maria / Yemanjá è a Bahia" ('In Calabria she is Mary; she is Yemanjá in Bahia'). In connection with Yemanjá, Belloni plays the Remo ocean drum, which produces ocean wave-like sounds. In addition to revivifying traditional music, she has composed an oeuvre of original songs inspired by Black Madonnas and others on Brazilian themes, as well as love songs after the Italian tradition. She has also worked to integrate her music with African drumming, saying: "It appeals to me, because I do believe that this form of music therapy and this healing drumming originated in Southern Italy, but in the history most of it comes from Africa, because Africa is closer to us than Northern Europe. All of this connected in the ancient times, and it makes a lot of sense to do it now in America, because it really is like continuing the normal evolution of this music." Belloni conducts dance therapy workshops, titled "Rhythm is the Cure," in New York, Italy, and other places in the United States and Europe. Participants dress in white with red sashes. Since 2009, she has led a New York City troupe of women drummers named the Daughters of Cybele. Belloni's stage shows Rhythm Is the Cure and Tarantella Spider Dance are productions combining music, tammuriata dance and drumming, drama, fire dance, and aerial dance. Spider Dance starts with the birth of Spider Woman and traces the development of pizzica tarantata tradition from ancient Greek rituals of Cybele and Dionysus to Italian folk tradition. The Remo drum company produces a signature line of Alessandra Belloni tambourines depicting the Black Madonna of Montserrat.
<filename>examples/docs/validators/ts/vanilla/async-record-validator/src/form-validation.ts<gh_stars>100-1000 import { ValidationSchema, createFormValidation } from '@lemoncode/fonk'; import { processValidator } from './custom-validator'; const validationSchema: ValidationSchema = { record: { process: [processValidator], }, }; export const formValidation = createFormValidation(validationSchema);
<filename>python-pscheduler/pscheduler/pscheduler/limitprocessor/identifier/localif.py """ Identifier Class for localif """ import ipaddress import netifaces def data_is_valid(data): """Check to see if data is valid for this class. Returns a tuple of (bool, string) indicating valididty and any error message. """ if type(data) == dict and len(data) == 0: return True, None return False, "Data is not an object or not empty." class IdentifierLocalIF(): """ Class that holds and processes identifiers """ def __init__(self, data # Data suitable for this class ): valid, message = data_is_valid(data) if not valid: raise ValueError("Invalid data: %s" % message) self.cidrs = [] for iface in netifaces.interfaces(): ifaddrs = netifaces.ifaddresses(iface) if netifaces.AF_INET in ifaddrs: for ifaddr in ifaddrs[netifaces.AF_INET]: if 'addr' in ifaddr: self.cidrs.append(ipaddress.ip_network(unicode(ifaddr['addr']))) if netifaces.AF_INET6 in ifaddrs: for ifaddr in ifaddrs[netifaces.AF_INET6]: if 'addr' in ifaddr: #add v6 but remove stuff like %eth0 that gets thrown on end of some addrs self.cidrs.append(ipaddress.ip_network(unicode(ifaddr['addr'].split('%')[0]))) def evaluate(self, hints # Information used for doing identification ): """Given a set of hints, evaluate this identifier and return True if an identification is made. """ try: ip = ipaddress.ip_network(unicode(hints['requester'])) except KeyError: return False # TODO: Find out of there's a more hash-like way to do this # instead of a linear search. This would be great if it # weren't GPL: https://pypi.python.org/pypi/pytricia for cidr in self.cidrs: if cidr.overlaps(ip): return True return False # A short test program if __name__ == "__main__": ident = IdentifierLocalIF({}) for ip in [ "127.0.0.1", "::1", "10.1.1.1", "172.16.31.10", "10.0.0.7" ]: print ip, ident.evaluate({ "requester": ip })
The present invention relates to an extendable column, and more particularly to an extendable column that uses screws to extend and retract a plurality of axially elongated sections which telescope within each other. In the design of a large deployable parabolic antenna for space application, a need arose for an extendable column capable of extending up to ten times its initial length. This capability is necessary to enable the antenna to be collapsed for transport aboard the space shuttle. A variety of extension designs were investigated, including systems using cables, hydraulics, and airjacks, but all proved inadequate. In space applications, the use of hydraulic or airjack systems is precluded by the risk of leaks and subsequent system failure. Cable systems are complex and prone to failure, especially in a weightless environment where opposing sets of cables are required. The present invention uses screws to extend sections which telescope within each other. Each section includes a set of screws interconnected by a chain and sprockets to enable simultaneous rotation. Rotating the screws in one section telescopes the next inner section, whose threaded bosses engage the screws. The section screws are driven by a common motor, selectively engaged through clutch mechanisms, to extend the column to the desired length. The use of screws to interface the extended sections provides excellent structural integrity, low-compliance joints, and zero backlash, since once extended the sections are bolted tightly together. The column is also capable of supporting both compressive and tensile loads during extension and retraction, affording a high degree of safety when operated under load. An object of the present invention is an extendable column capable of extending up to ten times its original length. A further object of the invention is an extendable column with low-compliance joints and zero backlash. A further object of the invention is an extendable column capable of supporting loads during extension and retraction. A further object of the invention is an extendable column having a plurality of structural sections which telescope within each other, with screws threadingly engaging an adjacent section to extend or retract the column by rotation of the screws. Another object of the invention is an extendable column which yields the foregoing advantages and which utilizes a single motor and a plurality of clutches to selectively extend or retract the structural sections of the column. Another object of the invention is an extendable column which yields the foregoing advantages and which utilizes chains and sprockets to drivingly connect the motor with the clutches. Other objects and advantages of the present invention will be readily apparent from the following description and drawings, which illustrate preferred embodiments of the present invention.
Growth Volatility and Equity Market Liberalization If there are benefits to international risk sharing, consumption growth variability should decrease following the liberalization of the equity market. In addition, markets with open equity markets should display lower consumption growth variability than closed markets, everything else equal. However, the recent literature on financial liberalization suggests that volatile capital flows lead to increased output and consumption growth variability post liberalization. In this article, we examine the effects of equity market liberalization on GDP and consumption growth variability. Excluding the 1997-2000 years, dominated by the consequences of the South-East Asia crisis, we find an economically and statistically significant decrease in both GDP and consumption growth variability post liberalization. When the 1997-2000 years are taken into account, the negative volatility response of consumption growth is weakened and is no longer significant for a sample of emerging markets, but remains significant for a larger set of countries. These results hold for both total and idiosyncratic consumption growth volatility. JEL Classification: E32, F30, F36, F43, G15, G18, G28 ∗ We thank Susan Collins for inspiring this paper. We appreciate the comments of Andrew Frankel. Send correspondence to: Campbell R. Harvey, Fuqua School of Business, Duke University, Durham, NC 27708. Phone: +1 919.660.7768, E-mail: cam.harvey@duke.edu.
<filename>BookManager/src/com/java1234/view/LogOnFrm.java<gh_stars>1-10 package com.java1234.view; import java.awt.EventQueue; import java.awt.Font; import java.awt.event.ActionEvent; import java.awt.event.ActionListener; import java.sql.Connection; import javax.swing.GroupLayout; import javax.swing.GroupLayout.Alignment; import javax.swing.ImageIcon; import javax.swing.JButton; import javax.swing.JFrame; import javax.swing.JLabel; import javax.swing.JOptionPane; import javax.swing.JPanel; import javax.swing.JPasswordField; import javax.swing.JTextField; import javax.swing.UIManager; import javax.swing.border.EmptyBorder; import com.java1234.dao.UserDao; import com.java1234.model.User; import com.java1234.util.DbUtil; import com.java1234.util.StringUtil; public class LogOnFrm extends JFrame { private JPanel contentPane; private JTextField userNameTxt; private JPasswordField passwordTxt; private DbUtil dbUtil=new DbUtil(); private UserDao userDao=new UserDao(); /** * Launch the application. */ public static void main(String[] args) { EventQueue.invokeLater(new Runnable() { public void run() { try { LogOnFrm frame = new LogOnFrm(); frame.setVisible(true); } catch (Exception e) { e.printStackTrace(); } } }); } /** * Create the frame. */ public LogOnFrm() { //改变系统默认字体 Font font = new Font("Dialog", Font.PLAIN, 15); java.util.Enumeration keys = UIManager.getDefaults().keys(); while (keys.hasMoreElements()) { Object key = keys.nextElement(); Object value = UIManager.get(key); if (value instanceof javax.swing.plaf.FontUIResource) { UIManager.put(key, font); } } setResizable(false); setTitle("管理员登录"); setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); setBounds(100, 100, 734, 468); contentPane = new JPanel(); contentPane.setBorder(new EmptyBorder(5, 5, 5, 5)); setContentPane(contentPane); JLabel label = new JLabel("图书管理系统"); label.setFont(new Font("宋体", Font.BOLD, 23)); label.setIcon(new ImageIcon(LogOnFrm.class.getResource("/images/logo.png"))); JLabel label_1 = new JLabel("用户名:"); label_1.setFont(new Font("宋体", Font.PLAIN, 20)); label_1.setIcon(new ImageIcon(LogOnFrm.class.getResource("/images/userName.png"))); JLabel label_2 = new JLabel("密 码:"); label_2.setFont(new Font("宋体", Font.PLAIN, 20)); label_2.setIcon(new ImageIcon(LogOnFrm.class.getResource("/images/password.png"))); userNameTxt = new JTextField(); userNameTxt.setColumns(10); passwordTxt = new JPasswordField(); JButton button = new JButton("登录"); button.addActionListener(new ActionListener() { public void actionPerformed(ActionEvent e) { loginActionPerformed(e); } }); button.setFont(new Font("宋体", Font.PLAIN, 18)); button.setIcon(new ImageIcon(LogOnFrm.class.getResource("/images/login.png"))); JButton button_1 = new JButton("重置"); button_1.setFont(new Font("宋体", Font.PLAIN, 18)); button_1.addActionListener(new ActionListener() { public void actionPerformed(ActionEvent e) { resetValueActionPerformed(e); } }); button_1.setIcon(new ImageIcon(LogOnFrm.class.getResource("/images/reset.png"))); GroupLayout gl_contentPane = new GroupLayout(contentPane); gl_contentPane.setHorizontalGroup( gl_contentPane.createParallelGroup(Alignment.LEADING) .addGroup(Alignment.TRAILING, gl_contentPane.createSequentialGroup() .addGroup(gl_contentPane.createParallelGroup(Alignment.TRAILING) .addGroup(Alignment.LEADING, gl_contentPane.createSequentialGroup() .addGap(249) .addComponent(label)) .addGroup(Alignment.LEADING, gl_contentPane.createSequentialGroup() .addGap(175) .addGroup(gl_contentPane.createParallelGroup(Alignment.LEADING) 
.addGroup(gl_contentPane.createSequentialGroup() .addComponent(button) .addGap(60) .addComponent(button_1)) .addGroup(gl_contentPane.createSequentialGroup() .addGroup(gl_contentPane.createParallelGroup(Alignment.LEADING) .addComponent(label_1) .addComponent(label_2)) .addGap(33) .addGroup(gl_contentPane.createParallelGroup(Alignment.LEADING) .addComponent(passwordTxt, GroupLayout.DEFAULT_SIZE, 193, Short.MAX_VALUE) .addComponent(userNameTxt, GroupLayout.DEFAULT_SIZE, 193, Short.MAX_VALUE)))))) .addGap(257)) ); gl_contentPane.setVerticalGroup( gl_contentPane.createParallelGroup(Alignment.LEADING) .addGroup(gl_contentPane.createSequentialGroup() .addGap(61) .addComponent(label) .addGap(36) .addGroup(gl_contentPane.createParallelGroup(Alignment.BASELINE) .addComponent(label_1) .addComponent(userNameTxt, GroupLayout.PREFERRED_SIZE, GroupLayout.DEFAULT_SIZE, GroupLayout.PREFERRED_SIZE)) .addGap(56) .addGroup(gl_contentPane.createParallelGroup(Alignment.BASELINE) .addComponent(label_2) .addComponent(passwordTxt, GroupLayout.PREFERRED_SIZE, 21, GroupLayout.PREFERRED_SIZE)) .addGap(60) .addGroup(gl_contentPane.createParallelGroup(Alignment.BASELINE) .addComponent(button) .addComponent(button_1)) .addContainerGap(74, Short.MAX_VALUE)) ); contentPane.setLayout(gl_contentPane); //设置窗体JFram居中显示 this.setLocationRelativeTo(null); } /** * 登录事件处理 * @param e */ private void loginActionPerformed(ActionEvent e) { // TODO Auto-generated method stub String userName=this.userNameTxt.getText(); String password=new String(this.passwordTxt.getPassword()); if(StringUtil.isEmpty(userName)) { JOptionPane.showMessageDialog(null,"用户名不能为空!"); return; } if(StringUtil.isEmpty(password)) { JOptionPane.showMessageDialog(null,"密码不能为空!"); return; } User user=new User(userName,password); Connection con=null; try { con=dbUtil.getCon(); User currentUser=userDao.login(con,user); if(currentUser!=null) { dispose(); //销毁当前窗体 new MainFrm().setVisible(true); //弹出MainFrm窗体 } else { JOptionPane.showMessageDialog(null, "用户名或密码错误!"); } } catch (Exception e1) { // TODO Auto-generated catch block e1.printStackTrace(); }finally { try { dbUtil.closeCon(con); } catch (Exception e1) { // TODO Auto-generated catch block e1.printStackTrace(); } } } /** * 重置事件处理 * @param e */ private void resetValueActionPerformed(ActionEvent evt) { // TODO Auto-generated method stub this.userNameTxt.setText(""); this.passwordTxt.setText(""); } }
/**
 * This file is part of Ark Cpp Client.
 *
 * (c) <NAME> <<EMAIL>>
 *
 * For the full copyright and license information, please view the LICENSE
 * file that was distributed with this source code.
 **/

#ifndef HELPERS_H
#define HELPERS_H

#include <string>
#include <cstring>

#if (defined ARDUINO || defined ESP8266 || defined ESP32)

#define USE_IOT

#include <Arduino.h>
#include <pgmspace.h>

// undef the C macros to allow the C++ STL to take over.
// This is to have compatibility with various board implementations of the STL.
#undef min
#undef max

template <typename T>
std::string toString(T val) {
  return String(val).c_str();
};

const static inline std::string toString(uint64_t input) {
  std::string result;
  uint8_t base = 10;
  do {
    char c = input % base;
    input /= base;
    (c < 10) ? c += '0' : c += 'A' - 10;
    result = c + result;
  } while (input);
  return result;
}

#else

template <typename T>
std::string toString(T val) {
  return std::to_string(val);
}

#endif

#endif
/**
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements. See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License. You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package org.apache.yoko.rmi.impl;

import java.io.IOException;

class RMIStubDescriptor extends ValueDescriptor {
    RMIStubDescriptor(Class type, TypeRepository repository) {
        super(type, repository);
    }

    @Override
    protected String genRepId() {
        final Class[] ifaces = type.getInterfaces();
        if (ifaces.length != 2 || ifaces[1] != org.apache.yoko.rmi.util.stub.Stub.class) {
            throw new RuntimeException("Unexpected RMIStub structure");
        }
        final String ifname = ifaces[0].getName();
        final int idx = ifname.lastIndexOf('.');
        return ((idx < 0)
                ? String.format("RMI:_%s_Stub:0", ifname)
                : String.format("RMI:%s_%s_Stub:0", ifname.substring(0, idx + 1), ifname.substring(idx + 1)));
    }

    //
    // Override writeValue/readValue, such that only the superclass'
    // state is written. This ensures that fields in the proxy are
    // not included on the wire.
    //
    @Override
    protected void writeValue(ObjectWriter writer, java.io.Serializable val) throws IOException {
        _super_descriptor.writeValue(writer, val);
    }

    @Override
    protected void readValue(ObjectReader reader, java.io.Serializable value) throws IOException {
        _super_descriptor.readValue(reader, value);
    }
}
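As a worked illustration of the repository-ID format produced by genRepId above (a short Python sketch used only to show the string construction; the interface names here are made up):

def rmi_stub_rep_id(interface_name):
    # Mirror genRepId: keep any package prefix, prepend '_' to the simple
    # class name and append '_Stub:0'.
    idx = interface_name.rfind('.')
    if idx < 0:
        return "RMI:_%s_Stub:0" % interface_name
    return "RMI:%s_%s_Stub:0" % (interface_name[:idx + 1], interface_name[idx + 1:])

print(rmi_stub_rep_id("Foo"))              # RMI:_Foo_Stub:0
print(rmi_stub_rep_id("com.example.Foo"))  # RMI:com.example._Foo_Stub:0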
A Class of Constrained Adaptive Beamforming Algorithms Based on Uniform Linear Arrays A new class of adaptive beamforming algorithms is proposed based on a uniformly spaced linear array by constraining its weight vector to a specific conjugate symmetric form. The method is applied to the well-known reference signal based (RSB) beamformer and the linearly constrained minimum variance (LCMV) beamformer as two implementation examples. The effect of the additional constraint is equivalent to adding a second step in the derived adaptive algorithm. However, a difference arises for the RSB case since no direction-of-arrival (DOA) information of the desired signal is available, which leads to a two-stage structure for incorporating the imposed constraint. Compared to the traditional algorithms, the proposed ones can achieve a faster convergence speed and a higher steady state output signal-to-interference-plus-noise ratio, given the same stepsize.
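As a hedged illustration of the kind of structural constraint described in the abstract (not the paper's actual constrained RSB/LCMV update equations), a conjugate-symmetric weight vector w for an N-element uniform linear array satisfies w = J w*, where J is the N-by-N exchange matrix. A minimal NumPy sketch of projecting an arbitrary weight vector onto that subspace:

import numpy as np

def conjugate_symmetric_projection(w):
    # Exchange (reversal) matrix J and the projection
    # w -> (w + J conj(w)) / 2 onto the conjugate-symmetric subspace.
    J = np.flipud(np.eye(len(w)))
    return 0.5 * (w + J @ np.conj(w))

w = np.random.randn(8) + 1j * np.random.randn(8)   # unconstrained weights
w_cs = conjugate_symmetric_projection(w)
# The projected vector is unchanged by reversing and conjugating it.
assert np.allclose(w_cs, np.flipud(np.conj(w_cs)))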
Above: A Vengevine from Magic: The Gathering.

Card battling series Magic: The Gathering is going free-to-play in its latest iteration, Magic: Origins, which drops on Xbox One, PC, and iPad this July, with a PlayStation 4 version to follow. Magic creator Wizards of the Coast is promising “limitless free-to-play with 100 percent earnable content” for Magic: Origins, meaning you shouldn’t have to spend any money to unlock cards if you don’t want to — provided you have plenty of time on your hands.

In going completely free-to-play, Magic: Origins is adopting a similar model to its big competitor, Hearthstone: Heroes of Warcraft. Hearthstone lets players buy card packs with real money or earn them through playing, and it now has over 25 million players across PC, iPad, and Android tablets. The game’s success helped publisher Activision reach record digital sales of $2.2 billion in 2014.

The previous title in the series, Magic 2015: Duels of the Planeswalkers, allowed players to build their own decks for the first time. It also drew criticism from fans, however, for being a $9.99 game with additional microtransactions: some of the extra cards could only be bought and weren’t unlockable through gameplay.

Magic: Origins sees the return of the popular Two-Headed Giant multiplayer mode, which pits pairs of players against each other. It will also include a solo battle mode with “virtually endless” A.I. opponents, a quest system with new individual and community challenges each week, and improved deck-building facilities with step-by-step deck construction guidance for those who need it.
<filename>src/components/Form/Item/CheckItem.tsx import { Observer, useObserver } from "@/hooks"; import { ItemConfig } from '@/stores'; import { Option } from '@/utils'; import Checkbox, { CheckboxGroupProps } from 'antd/lib/checkbox'; // import Switch, { SwitchProps } from 'antd/lib/switch'; // import 'antd/lib/switch/style/css'; import 'element-ui/lib/theme-chalk/switch.css'; import { pullAll } from 'lodash'; import * as React from 'react'; // import { SlotContext } from '@/utils/SlotUtils'; import { ScopedSlot } from '../../../utils/SlotUtils'; import { useFormItemConfig } from '../hooks/useItemConfig'; import { OFormItemCommon } from '../Interface/FormItem'; export interface ICheckItemProps extends CheckboxGroupProps, OFormItemCommon { } export type CheckScopedSlot<FM = object, VALUE = any> = (props: { col: { data: VALUE, item: Option, index: number, props: FM }, onChange: any, value: boolean, config: ItemConfig }) => React.ReactElement declare const option: Option; declare const i: number; const checkSlots = (itemConfig: ItemConfig, onChange: any, other: any) => { const store = itemConfig.useOptionsStore() const slotFactory = (options: Option, index: number) => ({ col: { data: other.value, item: options, index, props: itemConfig.formStore.formSource }, value: other.value && other.value.includes(options.value), onChange(checked: boolean) { if (checked) { const nextV = Utils.isArrayFilter(other.value) || [] nextV.push(options.value) onChange(nextV) } else { onChange(pullAll([...Utils.castArray(other.value)], [options.value])) } }, config: itemConfig }) return ( <div className='el-checkbox-group'> <For index='i' each='option' of={store.displayOptions}> <ScopedSlot key={option.value} name={itemConfig.slot} {...slotFactory(option, i)} /> </For> </div> ) } export const useCheckItem = ({ code, onChange, onBlur, ...other }: ICheckItemProps, ref: any) => { return useObserver(() => { const itemConfig = useFormItemConfig().itemConfig // const { scopedSlots } = React.useContext(SlotContext) if (itemConfig.useSlot) { return checkSlots(itemConfig, onChange, other) } const store = itemConfig.useOptionsStore() const { displayOptions } = store return ( <Observer>{() => ( <Checkbox.Group ref={ref} {...other} style={{ width: '100%' }} onChange={onChange} options={displayOptions as any}> {/* <Observer>{ () => displayOptions.map(option => { return <Observer key={option.value}>{() => <Checkbox value={option.value}>{option.label}</Checkbox>}</Observer> }) as any }</Observer> */} {/* <Row> <Col span={8}><Checkbox value="A">A</Checkbox></Col> <Col span={8}><Checkbox value="B">B</Checkbox></Col> <Col span={8}><Checkbox value="C">C</Checkbox></Col> <Col span={8}><Checkbox value="D">D</Checkbox></Col> <Col span={8}><Checkbox value="E">E</Checkbox></Col> </Row> */} </Checkbox.Group> )}</Observer> ) }, 'useCheckItem') } interface SwitchProps { value?: number | string | boolean disabled?: boolean width?: number onIconClass?: string offIconClass?: string onText?: string offText?: string onColor?: string offColor?: string onValue?: number | string | boolean offValue?: number | string | boolean name?: string onChange?(value: number | string | boolean): void } export interface ISwitchItemProps extends SwitchProps, OFormItemCommon { } export const useSwitchItem: React.FunctionComponent<ISwitchItemProps> = ({ code, ...other }: ISwitchItemProps) => { // const itemConfig = useFormItemConfig() // console.log(other, itemConfig) return <span></span>//<Switch onText='' offText='' {...other} /> }
import requests import time from addict import Dict from .resources.categories import ranking_dict from .resources.utils import OSRSXp from .base import OSRSBase class Highscores(OSRSBase): """Highscores This class obtains and formats the data requested from the runescape Highscores for OldSchool Args: username str: Target Username for Account target str: The target Highscores to lookup username. Defaults to `default` - Accepted Values: [default, ironman, ultamite, hardcore_ironman, seasonal, tournament, deadman] Returns: None """ def __init__(self, username, target='default'): super(Highscores, self).__init__(target) self.username = username self.target_url = self._OSRSBase__request_build(player=username) self.skill = dict() self.minigame = dict() self.boss = dict() self.__instantiate() def __process_data(self): """__process_data Formats the returned raw string value into consumable self.* attributes self.skill self.minigame self.boss Args: self Returns: None """ skill = dict() minigame = dict() boss = dict() xp_calc = OSRSXp() count = 0 for _ in ranking_dict: data = self.data[count].split(',') info = dict() if ranking_dict[count]['type'] == 'skill': info = { 'rank': data[0], 'level': data[1], 'xp': data[2], 'xp_to_level': xp_calc.level_to_xp(int(data[1])+1)-int(data[2]) } skill[ranking_dict[count]['name']] = info elif ranking_dict[count]['type'] == 'minigame': info = { 'rank': data[0], 'score': data[1], } minigame[ranking_dict[count]['name']] = info elif ranking_dict[count]['type'] == 'boss': info = { 'rank': data[0], 'kills': data[1], } boss[ranking_dict[count]['name']] = info setattr(self, ranking_dict[count]['name'], Dict(info)) count += 1 self.skill = skill self.minigame = minigame self.boss = boss def __instantiate(self): """__instantiate Runs a full query and process request on the target URL for username/target provided. Args: self Returns: None """ max_retries = 5 retry = 0 while True: self.data = requests.get(self.target_url).content.decode('utf-8').split('\n') if len(self.data) < 80: if retry >= max_retries: raise ValueError("No data loaded!") else: retry += 1 else: break self.time = time.time() self.__process_data() def update(self): """update Updates existing information by making a new request to the OSRS Highscores Args: self Returns: None """ self.__instantiate()
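A minimal usage sketch, assuming the package is importable as laid out above; "SomePlayer" is a placeholder username, and the attributes mirror the dictionaries built in __process_data:

hs = Highscores("SomePlayer", target="default")
print(hs.skill)      # per-skill dicts with 'rank', 'level', 'xp', 'xp_to_level'
print(hs.minigame)   # per-minigame dicts with 'rank', 'score'
print(hs.boss)       # per-boss dicts with 'rank', 'kills'
hs.update()          # re-query the OSRS Highscores and re-parse the response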
/** Test loading of the fuel names from main properties/config. */
public static void testConfiguredFuelNames() {
    assertTrue("Must be able to load the main properties", MainProperties.getTimestamp() > 0);
    final Map<String, String> configuredFuelNames = FUELINSTUtils.getConfiguredFuelNames();
    assertNotNull(configuredFuelNames);
    assertTrue(configuredFuelNames.size() > 0);
    assertNotNull(configuredFuelNames.get("INTIFA2"));
    assertTrue(configuredFuelNames.get("INTIFA2").length() > 0);
    for (String k : configuredFuelNames.keySet()) {
        assertTrue(FUELINSTUtils.FUEL_NAME_REGEX.matcher(k).matches());
    }
}
/**
 * A sum expression. Represents addition of two expressions.
 * @author (Kevin Dittmar)
 * @author (William Ezekiel)
 * @author (Joseph Alacqua)
 * @version Mar 29 2016
 */
public class Sum extends Exp {
    /**
     * Create a Sum.
     * @param l the left expression
     * @param r the right expression
     */
    public Sum(Exp l, Exp r) {
        left = l;
        right = r;
    }

    /**
     * Accept a visitor.
     * @param v a visitor
     * @return the object produced by the visit
     */
    public Object accept(Visitor v) {
        return v.visit(this);
    }
}
Epithelial hyperplasia and malignant change in congenital lung cysts. Patients with congenital lung cysts are at increased risk of developing carcinoma, but the mechanisms concerned are not clear. The case of a young adult who developed a bronchioloalveolar carcinoma associated with a cystic congenital adenomatoid malformation is reported. The adjacent lung showed an unusual intra-alveolar hyperplasia of mucous cells. Two further cases of congenital adenomatoid malformation are also described; both patients presented in infancy and showed similar mucous cell hyperplasia in alveoli surrounding the cysts. In all three cases the staining characteristic of the mucus was identical with that of normal bronchial mucous glands. It is suggested that the benign proliferation represents a premalignant lesion. Its presence in infants shows that it may occur at an early age and reinforces the need for early removal of congenital lung cysts.
Bayesian comparison of nonstandard cosmologies using type Ia supernovae and BAO data We use the most recent type Ia supernovae (SNe Ia) observations to perform a statistical comparison between the standard $\Lambda$CDM model and its extensions and some alternative cosmologies: namely, the Dvali--Gabadadze--Porrati (DGP) model, a power-law $f(R)$ scenario in the metric formalism and an example of vacuum decay cosmology in which the dilution of pressureless matter is attenuated with respect to the usual $a^{-3}$ scaling due to the interaction of the dark matter and dark energy fields. We perform a Bayesian model selection analysis using the \textsc{MultiNest} algorithm. To obtain the posterior distribution for the parameters of each model, we use the joint light-curve analysis (JLA) SNe Ia compilation containing 740 events in the interval $0.01<z<1.3$ along with current measurements of baryon acoustic oscillations (BAO). The JLA data are analyzed with the SALT2 light-curve fitter and the model selection is then performed by computing the Bayesian evidence of each model and the Bayes factor of the $\Lambda$CDM cosmology related to the other models. The results indicate that the JLA data alone are unable to distinguish the standard $\Lambda$CDM model from some of its alternatives but its combination with current measurements of baryon acoustic oscillations shows up an ability to distinguish them. In particular, the DGP model is practically not supported by both the BAO and the joint JLA + BAO data sets compared to the standard scenario. Finally, we provide a rank order for the models considered. I. INTRODUCTION Almost two decades ago, distance measurements of type Ia supernovae (SNe Ia) provided the first direct evidence for a late-time cosmic acceleration. Nowadays, this phenomenon is also confirmed from independent data, such as, for instance, the most recent measurements of the baryon acoustic oscillations (BAO) from galaxy surveys (see, e.g., for a recent review on BAO measurements). From the theoretical side, however, the absence of a firm physical mechanism responsible for the present acceleration of the Universe has given rise to a number of alternative explanations. In general, mechanisms of cosmic acceleration are explored in two different ways: either introducing a new field in the framework of the Einstein's general theory of relativity (GR), the dark energy, or introducing modifications in GR at very large scales. In the general relativistic framework, the simplest explanation is to posit the existence of a cosmological constant, a spatially homogeneous component whose pressure and energy density are related by p = w, with the equation of state (EoS) parameter w = −1. However, as is well known, the standard CDM model (cosmological constant plus cold dark matter) provides a good fit for a large number of observational data sets without addressing some important theoretical issues, such as the fine-tuning of the value and the cosmic coincidence problems. If the cosmological term is null or it is not decaying in the course of the expansion, as discussed in the vacuum decay or (t) cos- * thoven@on.br chandrachaniningombam@astro.unam.mx alcaniz@on.br mologies, an alternative possibility (which also does not address the above issues) is to assume the presence of an extra degree of freedom in the form of a minimally coupled scalar field (quintessence field). 
Among other things, what observationally may distinguish or (t) from is the time dependency of the EoS parameter of quintessence fields, whose behavior has been parametrized phenomenologically by several authors (see, e.g., and references therein). The observed cosmic acceleration can also be seen as the first evidence of a breakdown of GR on large scales rather than a manifestation of another ingredient in the cosmic budget. The most usual examples of cosmologies derived from modified or extended theories of gravity include f (R) models, in which terms proportional to powers of the Ricci scalar R are added to the Einstein-Hilbert Lagrangian, and higher dimensional braneworld models, in which extra dimension effects drive the current cosmic acceleration by changing the energy balance in a modified Friedmann equation. Since very little is known about the nature of the physical mechanism driving the cosmic acceleration, an important way to improve our understanding of this phenomenon is to use cosmological observations to constrain and select its many approaches. In this paper we use the most recent SNe Ia observations, the joint light-curve analysis (JLA) SNe Ia compilation containing 740 events in the interval 0.01 < z < 1.3, to perform Bayesian model selection analysis using the MultiNest algorithm. We consider in our analysis different classes of cosmological models and show that a joint analysis involving SNe Ia and BAO data is able to distinguish between the standard cosmology and some of its alternatives. We organized this paper as follows. In Sec. II we present the cosmological models considered in our analysis. The Bayesian framework of model selection is briefly discussed in Sec. III. The data sets and methodology used in the analysis are discussed in Secs. IV and V, respectively. We present and discuss the model comparison results in Sec. VI. We summarize our main conclusions in Sec. VII. II. NONSTANDARD COSMOLOGICAL MODELS As mentioned earlier, the late-time cosmic acceleration is usually explored in two different ways: either including an extra component in the right-hand side of Einstein's field equations or modifying gravity at large scales. In this work, we select models of both cases under the framework of a flat Friedmann-Robertson-Walker (FRW) metric. In what follows, we briefly discuss the scenarios considered in our analysis. A. Dark energy models with constant equation of state General relativistic scenarios with a constant dark energy EoS w generalize the standard CDM model in which w = −1. In what follows, we refer to this model as the wCDM model. The corresponding Friedmann equation for this cosmology is given by where E(z) = H(z)/H 0 is the normalized Hubble parameter and m,0 and de,0 correspond, respectively, to the current values of clustered matter (baryonic and dark) and dark energy density parameters, which obey the normalization condition de,0 = 1 − m,0. Dynamical dark energy models A more general case can be studied by allowing the equation of state of the dark energy component to vary as a function of the cosmological scale factor a. In this case, the Friedmann equation takes the form To discriminate the dynamical dark energy (the time-varying nature of EoS) from that of a cosmological constant, we consider two kinds of w(a) parametrizations. First, we consider the Chevallier-Polarski-Linder (CPL) parametrization, given by where w 0 stands for the EoS's value today whereas w a describes its time evolution. For this parametrization, the last term of Eq. 
is written as As discussed in, the above parametrization cannot be extended to the entire history of the Universe since it blows up exponentially in the future (a → ∞) for w a > 0. Therefore, we also consider a second dynamical dark energy parametrization suggested by Ref., which is well behaved over the entire cosmic evolution and mimics a linear-redshift evolution at low redshift. For this parametrization (referred to it as BA parametrization), the last term of Eq. can be written as Previous studies have shown that bounds on the w 0 and w a parameters allow this dark energy component to remain subdominant at z >> 1. For details about the classification of different dark energy behaviors using parametrization, we refer the reader to Ref.. B. Vacuum decay model An interesting attempt to account for the cosmological constant problems has also been discussed in the context of interacting dark matter and dark energy cosmologies. A number of ideas have been examined along these lines (see, e.g., [7,9,10, and references therein). The model analyzed in our study has a time-dependent cosmological term (t) in which the vacuum energy density decays with the expansion of the Universe as where the determines the diluting power of the dark matter density dm with respect to the usual a −3 as dm ∝ a −3+. Depending upon the positive or negative values of, the energy is transferred either from dark energy to dark matter or vice versa, respectively. In such scenarios, dark matter is no longer independently conserved, such tha The Friedmann equation for this class of models is given by where,0 =,0 − 3 dm,0 /(3 − ). There is an extra degree of freedom compared to the standard CDM model due to such interaction (for more details on this class of models, see Ref. ). C. f (R)-gravity models The simplest extension of general relativity can be obtained by considering additional terms proportional to powers of the Ricci scalar R in the Einstein-Hilbert Lagrangian, the so-called f (R) gravity. Differently from general relativistic scenarios, f (R) cosmology can naturally drive an accelerating cosmic expansion without introducing a dark energy field. We consider the Einstein-Hilbert action in the Jordan frame including f (R) function of the Ricci scalar as where k 2 = 8G (G is a bare gravitational constant) and S matter represents the action of the matter minimally coupled to gravity. We assume the metric formalism, in which the connections are assumed to be the Christoffel symbols and the variation of the action is taken with respect to the metric. In a flat FRW spacetime, the field equations for the action are given by where a prime denotes derivative with respect to R (we refer the reader to Refs. for more on f (R) cosmologies). In what follows, we consider the power-law f (R) model which satisfies all the viability conditions of f (R) models, as discussed by Ref., and reduces to the CDM model for n = 0 and = 6 de,0. D. DGP model The Dvali-Gabadadze-Porrati (DGP) model is an example of an alternative approach which governs cosmic acceleration via modification of Einstein's general relativity, driven by higher dimensional theories. In this model, our four-dimensional Universe is confined to a three-dimensional brane, embedded in a five-dimensional bulk spacetime with an infinite extra dimension. 
The energy-momentum tensor only resides on the brane surface whereas the gravitational field equations are driven by the five-dimensional Einstein tensor and the four-dimensional Einstein tensor of the induced metric on the brane. Only gravity is allowed to propagate off the 3-brane into the bulk and this induced effect on the brane leads to an accelerated expansion. A crossover length scale, where the interaction between the effective four-dimensional and five-dimensional gravities Table I. Summary of models considered in the analysis along with the free parameters. Model Equation takes place, is given by r c = M 2 Pl /2M 3 5, and the Friedmann equation is modified as where is the energy density of the cosmic fluid. Note that in the limit of H ∼ r −1 c, a self-accelerating solution is attained asymptotically, which is the main feature of this model (see Refs. for details). The above equation can be rewritten as Here r c represents the density parameter associated with the crossover scale, r c = 1/(4r 2 c H 2 0 ). Under the flat FRW framework, the normalization condition is given by r c = . For analysis involving BAO data we add a radiation term,,0 = 2.469 10 −5 h −2, to all Friedmann equations above. A summary of the cosmological models considered in our analysis is given in Table I. III. BAYESIAN MODEL SELECTION Bayesian inference is a way to describe the relationship between the model (or hypotheses), the data and the prior information about the model parameters. In a parameter estimation problem, the starting point for Bayesian data analysis is to compute the joint posterior for a set of free parameters given the data, D, through Bayes' theorem, P(|D, M) = L(D|, M) P(|M)/E(D|M), where P, L, P and E are the shorthands for the posterior, the likelihood, the prior and the evidence, 1 respectively. In short, Bayes' theorem updates our previous knowledge about some model parameters in the light of a given data set. It is important to note that the evidence E, the denominator of the Bayes' theorem, is just a normalization constant and is 1 Also called Bayesian evidence, marginal likelihood or model likelihood. Therefore, the evidence is the average value of the likelihood L over the entire model parameter space that is allowed before we observe the data. The most important characteristic of the evidence is its application of Occam's razor to the model selection problem. It rewards the models that fit the data well and are also predictive, moving the average of the likelihood in Eq. towards higher values than in the case of a model which fits poorly or is not very predictive (or is either too complex or has a large number of parameters). This concept has been widely applied in cosmology (see, e.g, ). It is used to discriminate two competing models by taking the ratio which is also known as the Bayes' factor of the model M i relative to the model M j (called the reference model in this work). If each model is assigned equal prior probability, the Bayes factor gives the posterior odds of the two models. To rank the models of interest, we adopted the scale showed in Table II to interpret the values of ln B i j = ln (E i /E j ) in terms of the strength of the evidence of a chosen reference model. This scale, suggested by Ref., is a revised and more conservative version of the Jeffreys scale. Note that the labels attached to the Jeffreys scale are empirical: it depends on the problem being investigated. 
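Writing $\theta$ for the parameter vector of a model $M$, the relations described above take the standard form

$$P(\theta\mid D, M)=\frac{\mathcal{L}(D\mid\theta, M)\,P(\theta\mid M)}{E(D\mid M)},\qquad E(D\mid M)=\int \mathcal{L}(D\mid\theta, M)\,P(\theta\mid M)\,\mathrm{d}\theta,$$

and the Bayes factor of model $M_i$ relative to $M_j$ is

$$B_{ij}=\frac{E(D\mid M_i)}{E(D\mid M_j)},$$

which, for equal prior model probabilities, equals the posterior odds of $M_i$ over $M_j$.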
Thus, for an experiment for which | ln B i j | < 1, the evidence in favor of the model M i is usually interpreted as inconclusive (see Ref. for a more complete discussion about this scale). Note also that ln B i j < −1 means support in favor of the model M j. In this work, we take CDM as the reference model M j, so the subscripts i and j are omitted hereafter. A. Type Ia supernovae In this work, we focus primarily on current distance measurements of SNe Ia to perform an observational comparison of the cosmologies discussed in the previous section. We use the JLA sample which is an extension of the compilation provided by Ref. (referred to as the C11 compilation), containing a set of 740 spectroscopically confirmed SNe Ia. JLA is a compilation of several low-redshift (z < 0.1) samples, the full three-year SDSS-II supernova survey sample within redshift 0.05 < z < 0.4, the first three years data of the SNLS survey up to redshift z < 1 and a few high-redshift Hubble Space Telescope SNe in the interval 0.216 < z < 1.755. The photometry of SDSS and SNLS was recalibrated and the SALT2 model is retrained using the joint data set. Theoretically, the distance modulus predicted by the homogeneous and isotropic, flat FRW universe is given by with the luminosity distance d L defined as where E(z) = H(z)/H 0 is the normalized Hubble parameter. However, from the observational point of view, the distance modulus of a type Ia supernova is obtained by a linear relation from its light curve, where m B represents the observed peak magnitude in restframe B band, x 1 is the time stretching of the light curve, and c is the supernova color at maximum brightness. These three light-curve parameters m B, x 1 and c have different values for each supernova and are derived directly from the light curves. The nuisance parameters and are assumed to be constants for all the supernovae, but different for different cosmological models. Following directly Ref., we also assume a with where C corresponds to the covariance matrix of the distance modulus, estimated accounting for various statistical and systematic uncertainties. The light-curve fit statistical uncertainties, the systematic uncertainties associated with the calibration, the light-curve model, the bias correction and the mass step uncertainty are described in detail in Sec. 5 of Ref., whereas the systematic uncertainties related to the peculiar velocity corrections and the contamination of the Hubble diagram by non-Ia are described briefly in Ref.. The uncertainty in redshift due to peculiar velocities, the uncertainty in magnitudes due to gravitational lensing, and the intrinsic deviation in magnitudes are also taken into account while calibrating it. Using the JLA sample, claimed to have provided the most restrictive constraints so far, i.e., w = −1.027 ± 0.055 (assuming w = constant) and m,0 = 0.295 ± 0.034 (for a flat CDM model). Therefore, it is interesting to perform a similar analysis for nonstandard cosmological models, calibrating the data to each cosmology and checking their constraining power on the model parameters. B. Baryon acoustic oscillations Besides the JLA supernovae data set, we also consider in our analysis the measurements of BAO in the galaxy distribution. The BAO in the primordial plasma have striking effects on the anisotropies of the cosmic microwave background (CMB) and the large scale structure of matter. 
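Before turning to the BAO constraints in detail, a minimal sketch of the theoretical distance modulus entering the SNe Ia likelihood described above, for a flat wCDM background; the values of H0, the matter density and w below are placeholders, not the paper's fits:

import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458  # speed of light in km/s

def E(z, om=0.3, w=-1.0):
    # Normalized expansion rate H(z)/H0 for flat wCDM.
    return np.sqrt(om * (1 + z)**3 + (1 - om) * (1 + z)**(3 * (1 + w)))

def distance_modulus(z, H0=70.0, om=0.3, w=-1.0):
    # mu(z) = 5 log10(d_L / 10 pc), with d_L = (1 + z) (c/H0) * integral of dz'/E(z').
    dc, _ = quad(lambda zp: 1.0 / E(zp, om, w), 0.0, z)
    d_l = (1 + z) * (C_KM_S / H0) * dc      # luminosity distance in Mpc
    return 5.0 * np.log10(d_l) + 25.0       # +25 converts Mpc to units of 10 pc

print(distance_modulus(0.5))                # ~42.3 mag for these placeholder values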
The measurements of the characteristic scale of the BAO in the correlation function of matter distribution provide a powerful standard ruler to probe the angular-diameter distance versus redshift relation and the Hubble parameter evolution. This distanceredshift relation can be obtained from the matter power spectrum and calibrated by the CMB anisotropy data. Usually, the BAO distance constraints are reported as a combination of the angular scale and the redshift separation. This combination is obtained by performing a spherical average of the BAO scale measurement and is given by where is the volume-averaged distance and D C (z) = z 0 dz /H(z ) is the comoving angular-diameter distance. In Eq., r s (z drag ) is the radius of the comoving sound horizon at the drag epoch z drag when photons and baryons decouple, where c s (z) = c/ 3 1 + (3 b,0 /4,0 )(1 + z) −1 is the sound speed in the photon-baryon fluid, and b,0 = 0.022765h −2 and,0 = 2.469 10 −5 h −2 are the present values of baryon and photon density parameters, respectively, as given by Ref.. Table III shows the BAO distance measurements employed in this work. In addition to this data, we also include three correlated measurements of d z (z = 0.44) = 0.073, d z (z = 0.6) = 0.0726 and d z (z = 0.73) = 0.0592 from the WiggleZ survey, with the following inverse covariance matrix: Using the same methodology applied to the JLA SNe Ia compilation, we also consider a multivariate Gaussian likelihood for the BAO data set. For each survey listed in the first column of the Table III, the chi square is given by where d z,survey and d z (z survey, ) are the observed and theoretical d z, respectively, and survey is the error associated with each observed value. However, for the WiggleZ data the chi square is of the form Then, the BAO likelihood is directly obtained by the product of the individual likelihoods as Similarly, the joint likelihood for the JLA SNe Ia compilation and the BAO data is given by L joint = L JLA L BAO. V. METHODOLOGY While the idea of the Bayes' theorem is simple to understand, the computation of the posterior and the evidence can be difficult both analytically, since the necessary integrals cannot be evaluated in closed form, and numerically, meaning that the integrations can be very time consuming when the dimension of the parametric space is large. To solve this problem, a widely used practice is to sample from the posterior by applying MCMC techniques (we refer the reader Table IV. Priors on the free parameters of each model used to compute the model's evidence. Note that N, 2 denotes a Gaussian prior with mean and variance 2, and U (a, b) denotes the normalized uniform prior for which P(x|M) = 1/(b − a) for a ≤ x ≤ b and P(x|M) = 0 otherwise. In this work, we applied an algorithm relying on PyMultiNest 2, a Python 3 interface for the nested sampling (NS) algorithm MultiNest 4. NS is designed to directly estimate the relation between the likelihood function and the prior mass, thus obtaining the evidence (and its uncertainty) immediately by summation. It also computes the samples from the posterior distribution as an optional byproduct. To compute the evidence values we used the most accurate importance nested sampling (INS) instead of the vanilla NS method, requiring an INS global log-evidence tolerance of 0.1 as a convergence criterion. 
Moreover, to improve the accuracy in the estimate of the evidence, we have chosen to perform all analysis by working with a set of 1000 live points, instead of the MultiNest's default value of 400, so that the number of samples for all posterior distributions was of order O(10 4 ). It is worth mentioning that Bayesian inference (both parameter estimation and model selection) depends on the priors P(|M) chosen for the free parameters. This property accounts for each model's predictive power, turning this dependence in a feature, rather than a defect of Bayesian inference. Although in Bayesian parameter estimation the use of uniform (flat) priors can be reasonable in some cases, this 2 https://johannesbuchner.github.io/PyMultiNest. 3 https://www.python.org. 4 https://ccpforge.cse.rl.ac.uk/gf/project/multinest. kind of prior can lead to some issues in a model comparison problem. Uniform priors with different domain ranges change the evidence and can potentially affect the Bayes factor between two competing models if the models have nonshared parameters. To use well-motivated priors we considered values that reflect our current state of knowledge about the parameters of the models investigated. These values are shown in Table IV. 5 We applied uniform priors on the parameters related to the JLA data set (,, M B and ∆ M ) since they are common to all models, and so the arbitrary multiplicative constant for these priors cancels out in all Bayes factors. These uniform priors are centered at the best fit values given by the results involving the JLA data set (stat + sys) as displayed in Table X of Ref., and have ranges arbitrarily chosen to be 20 times larger than the respective standard deviations as given in that table, a conservative choice to encompass the predictions of all models considered in this work. For the same reason, we adopted the conservative Gaussian priors m,0 = 0.3 ± 0.1 and dm,0 = 0.26 ± 0.1, since we have fixed b,0 = 0.022765h −2 (see Sec. IV B). These priors are consistent with model-independent estimates from relative peculiar velocity measurements for pairs of galaxies. VI. RESULTS In Fig. 1 we show the parametric space of m,0 and the nuisance parameters, and ∆ M for the standard CDM model. These results were obtained using the JLA SNe Ia sample considering the priors shown in Table IV, as described in the last section. As shown in the figure, our results are in good agreement with those of Ref. (see Fig. 9 and Table X of that reference for comparison). Similar plots for the other cosmological models considered in this analysis are not shown for brevity. Our main results are summarized in Table V where the first, second and third subtables correspond to the results obtained using the JLA SNe sample alone, BAO measurements alone and a joint analysis of SNe and BAO, respectively. These results were obtained considering the priors shown in Table IV. We first observe that the current SNe Ia data alone cannot rule out any of the cosmological models studied in this analysis. The joint analysis with BAO data seems to be more effective to this end. This is clearly seen in the last subtable of Table V, where one can note that, among all, the most dramatic change in the rank of the models with the inclusion of the BAO data in the analysis happens for the DGP model. 
Although, as discussed above, one cannot make any conclusions about the evidence of this model in comparison to CDM from the SNe Ia data alone, the joint analysis with BAO measurements reveals that this scenario is strongly disfavored with respect to the CDM model. Using the results from this joint analysis, we see from Eq. that, by assuming that the DGP and CDM models exhaust the model space 6 and keeping their prior probabilities as equal, the probability of the DGP model is not greater than 2.1 10 −15, and the posterior odds in favor of the CDM model are not less than ∼ 10 13 : 1. To have some insight into why the DGP model is so significantly disfavored by the data described in Sec. IV, we show in Fig. 2 the predictive 68% credible intervals for the evolution of E(z) of all models. We can see a notable tension involving the evolutions for the DGP model and the evolutions related to the other models, which may explain why this scenario is ruled out in our analyses. Regarding the other models, we also note that, with the exception of the Bayes factor related to the DGP model, the joint analysis involving JLA and BAO data shifts all the ranges of the Bayes factors towards a better support for the alternative models compared to CDM. One can easily observe these shifts from the graphical representation of the ranges of all Bayes factors, displayed in Fig. 3. The above results should be compared with the ranking order provided by Ref.. These authors used 103 SNe Ia from the Sloan Digital Sky Survey-II Supernova Survey along with two data points of the CMB/BAO ratio to rank a number of alternative cosmologies, some of which are also considered in the current study. Their analysis was performed using two different SNe Ia light-curve fitting, i.e., MLCS and SALT2, and the model ranking was done using the Bayesian Another similar work was done in Ref.. The authors compared several nonstandard cosmological models by performing a maximum likelihood analysis combining 307 SNe Ia from the Union08 compilation with constraints from BAO and CMB measurements. Although sharing only three models with our work (namely, the CDM, wCDM and DGP cosmologies), the ranking order displayed in Table I of Ref. seems to be more consistent with our results, showing that the wCDM model is better ranked than the standard CDM scenario, while the DGP alternative performs worse compared to all other models studied in their analyses. VII. CONCLUSIONS Given the current state of uncertainty that remains over the physical mechanism behind the observed acceleration of the Universe, an important way to improve our under-standing of this phenomenon is to use cosmological observations to constrain its different approaches. In this paper, we have performed a Bayesian model selection statistics to rank some nonstandard cosmological models in the light of the most recent SNe Ia (JLA compilation) and BAO data. Our analyses have shown that the JLA data alone are unable to distinguish between the standard CDM scenario from some specific examples of coupled quintessence cosmologies , modified gravity models and simple parametrizations of the dark energy component. On the other hand, while not being able to distinguish most of the alternative models considered in this work, the current BAO measurements can strongly rule out the flat DGP model (see Table V). We have also shown that, when a joint analysis involving SNe Ia and BAO data is performed, the evidence for the DGP model is weakened with respect to the CDM model. 
The result of this joint analysis shows that the DGP scenario becomes even more strongly disfavored with respect to the standard cosmology, with ln B = −33.892 ± 0.082, whereas the analysis using the BAO data alone provides ln B = −8.506 ± 0.012 and ln B = −0.295 ± 0.045 from the JLA data alone. These results are consistent with some of the previous studies done using different statistics and data sets (see, e.g., Refs. ). Finally, an important aspect worth emphasizing concerns the ranking position of the decaying vacuum cosmology considered in our analysis. As discussed earlier (see Sec. II B), in this kind of model the dark energy field interacts with the pressureless component of dark matter in a process that violates adiabaticity and that constitutes a phenomenological attempt at alleviating the coincidence problem. We have found that this scenario provides an excellent fit to both SNe Ia observations and SNe Ia plus current baryon acoustic oscillation measurements.
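As a worked example of how the quoted Bayes factors map onto model probabilities, assume (as in Sec. VI) that only the two models are in play and that they have equal priors; then $P(\mathrm{DGP}\mid D)=B/(1+B)\simeq B=e^{\ln B}$, and taking the upper end of the quoted value, $e^{-33.892+0.082}\approx 2.1\times10^{-15}$, so the posterior odds in favour of $\Lambda$CDM are of order $1/B\approx 5\times10^{14}:1$, consistent with the conservative "not less than $\sim10^{13}:1$" quoted in the text.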
Q: Two threads are waiting on a mutex. Which one is unblocked? Suppose there are three threads A, B, C and thread A has acquired a mutex lock and it is processing. If threads B and C try to acquire the same mutex, they will be blocked according to the mutex lock concept. But once thread A completes, which of the threads will be unblocked? Is there any scheduling policy? A: The short answer is "it depends". If there is truly nothing to distinguish thread B from thread C, then the answer on most scheduler implementations will likely be either "could be B or C, and you can't predict which one in advance", or "the one which tried to acquire the mutex first" (i.e. first-come, first-served). However, there may be something which distinguishes them in some way, and that's where it gets more interesting. In particular, if the implementation supports priority scheduling, one of the threads may have a higher priority than the other, in which case the higher-priority thread should be scheduled first. Sometimes the programmer will decide that thread B should have a higher priority than thread C. Some schedulers assign higher priorities to tasks which have earlier deadlines, or periodic tasks with shorter periods (known as rate-monotonic scheduling). In other circumstances, even if two threads have the same nominal priority, the scheduler may give one a higher effective priority than the other. For example, in a non-realtime system, a scheduler may decide that one thread is more of a background task than the other (e.g. in a pre-emptive multitasking environment, a thread which seems to exhaust its quantum is probably a "background thread"), and so should have a lower effective priority than a more "interactive" thread. This is a common heuristic to make a system more responsive (or appear more responsive!) because interactive jobs are given priority. Another common scenario is if one of the threads is already holding a priority-aware synchronisation object (say, a priority inheritance mutex or a priority ceiling mutex). In that case, the thread may have its effective priority raised to ensure that it releases the critical object more quickly.
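A small Python illustration of the "you can't predict which one" point for a plain mutex with no priority support; the wake-up order of B and C below is an implementation detail, not a guarantee:

import threading
import time

lock = threading.Lock()
order = []

def worker(name):
    with lock:                 # B and C both block here while A holds the lock
        order.append(name)

lock.acquire()                 # "A" grabs the mutex first
b = threading.Thread(target=worker, args=("B",))
c = threading.Thread(target=worker, args=("C",))
b.start()
c.start()
time.sleep(0.1)                # give B and C time to block on the lock
lock.release()                 # "A" finishes its work
b.join()
c.join()
print(order)                   # often ["B", "C"], but the ordering is not guaranteed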
def accelerate(self, speed, slew_rate, steps, orientation):
    if not self._enabled:
        self._log.info('cannot accelerate: motors disabled.')
        return
    self._log.debug('accelerating...')
    if orientation is Orientation.PORT:
        self._log.info('starting port motor with {:>5.2f} speed for {:d} steps...'.format(speed, steps))
        self._port_motor.accelerate(speed, slew_rate, steps)
    else:
        self._log.info('starting starboard motor with {:>5.2f} speed for {:d} steps...'.format(speed, steps))
        self._stbd_motor.accelerate(speed, slew_rate, steps)
    self._log.debug('accelerated.')
Measuring the five elastic constants of a nematic liquid crystal elastomer

ABSTRACT The determination of the mechanical properties of materials is important for building physical models and for describing the mechanisms that convert diverse stimuli into mechanical response. It is critical for the design of nanoscale transducers, sensors, and actuators such as motors, pumps, artificial muscles and medical microrobots. This paper reports the measurement of the five independent elastic constants of a transversely isotropic liquid crystal elastomer. We express the five elastic constants in terms of strains and stresses, then measure these and determine the moduli for three different nematic elastomer samples.
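For context, the stress–strain relation of a transversely isotropic solid (symmetry axis along $z$) is conventionally written with five independent stiffness constants; the paper's own symbols for its moduli are not reproduced here, so the generic $C_{ij}$ notation is used purely as an illustration:

$$\begin{pmatrix}\sigma_{xx}\\\sigma_{yy}\\\sigma_{zz}\\\sigma_{yz}\\\sigma_{xz}\\\sigma_{xy}\end{pmatrix}=\begin{pmatrix}C_{11}&C_{12}&C_{13}&0&0&0\\C_{12}&C_{11}&C_{13}&0&0&0\\C_{13}&C_{13}&C_{33}&0&0&0\\0&0&0&C_{44}&0&0\\0&0&0&0&C_{44}&0\\0&0&0&0&0&\tfrac{1}{2}(C_{11}-C_{12})\end{pmatrix}\begin{pmatrix}\varepsilon_{xx}\\\varepsilon_{yy}\\\varepsilon_{zz}\\2\varepsilon_{yz}\\2\varepsilon_{xz}\\2\varepsilon_{xy}\end{pmatrix},$$

with $C_{11}$, $C_{12}$, $C_{13}$, $C_{33}$ and $C_{44}$ as the five independent constants.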
from drawing_tool import Model_Drawing

img = 1563
filename = "DiscNetModel"
image = 'IMG_%d_IMG.png' % img
gt = 'IMG_%d_ANN.png' % img
sparse = 'IMG_%d_SPARSE.png' % img

model = Model_Drawing(image)
model.add_conv(kernel=(7, 7, 16), to="(0,0,0)", offset="(0,0,0)")
model.add_pool(stride=4)
model.add_conv(kernel=(6, 6, 32))
model.add_pool(stride=4)
model.add_conv(kernel=(5, 5, 48))
model.add_pool(stride=4)
model.add_conv(kernel=(4, 4, 64))
model.add_pool(stride=2)
model.add_conv((1, 1, 1), name="END", caption="Discriminator Logits")
model.add_image(sparse, offset="(0,0,0)", to="END", opacity=0.5)
model.generate(filename)
<reponame>aykutkes2/P1001 //#include "pch.h" #undef UNICODE #define LOCATION_DTKM 1 #define GATEWAY_NO 1 #define WIN32_LEAN_AND_MEAN #define _CRT_SECURE_NO_WARNINGS // Need to link with Ws2_32.lib #pragma comment (lib, "Ws2_32.lib") #include<iostream> #include<cstdio> #include<fstream> #include<sstream> #include<string> #include<cstdlib> #include<conio.h> #include<windows.h> #include <stdlib.h> #include <stdio.h> #include <time.h> #include <AY_Printf.h> #include <AY_Functions.h> #include <AY_ClientSocket.h> #include <AY_ClientPrjtBased.h> #include <AY_Client.h> #include <AY_Memory.h> #include <process.h> #include <pcap.h> // #define BUF ((struct uip_tcpip_hdr *)&buff[UIP_LLH_LEN]) /*---------------------------------------------------------------------------*/ uint16_t mhtons(uint16_t val) { return _HTONS(val); } /*---------------------------------------------------------------------------*/ static uint16_t chksum(uint16_t sum, uint8_t *data, uint16_t len) { uint16_t t; uint8_t *dataptr; uint8_t *last_byte; dataptr = data; last_byte = data + len - 1; while (dataptr < last_byte) { /* At least two more bytes */ t = (dataptr[0] << 8) + dataptr[1]; sum += t; if (sum < t) { sum++; /* carry */ } dataptr += 2; } if (dataptr == last_byte) { t = (dataptr[0] << 8) + 0; sum += t; if (sum < t) { sum++; /* carry */ } } /* Return sum in host byte order. */ return sum; } /*---------------------------------------------------------------------------*/ uint16_t uip_chksum(uint16_t *data, uint16_t len) { return mhtons(chksum(0, (uint8_t *)data, len)); } /*---------------------------------------------------------------------------*/ uint16_t uip_ipchksum(uint8_t *buff) { uint16_t sum; sum = chksum(0, (uint8_t *)&buff[UIP_LLH_LEN], UIP_IPH_LEN); return (sum == 0) ? 0xffff : mhtons(sum); } /*---------------------------------------------------------------------------*/ static uint16_t upper_layer_chksum(uint8_t proto, uint8_t *buff) { uint16_t upper_layer_len; uint16_t sum; upper_layer_len = mhtons(*(uint16_t *)&BUF->len[0]) - UIP_IPH_LEN;// (((uint16_t)(BUF->len[0]) << 8) + BUF->len[1]) - UIP_IPH_LEN; /* First sum pseudoheader. */ /* IP protocol and length fields. This addition cannot carry. */ sum = upper_layer_len + proto; /* Sum IP source and destination addresses. */ sum = chksum(sum, (uint8_t *)&BUF->srcipaddr[0], 2 * sizeof(ip_address)); /* Sum TCP header and data. */ sum = chksum(sum, (uint8_t *)&buff[UIP_IPH_LEN + UIP_LLH_LEN], upper_layer_len); return (sum == 0) ? 
0xffff : mhtons(sum); } /*---------------------------------------------------------------------------*/ uint16_t uip_tcpchksum(uint8_t *buff) { return upper_layer_chksum(UIP_PROTO_TCP, buff); } /*---------------------------------------------------------------------------*/ uint16_t uip_udpchksum(uint8_t *buff) { return upper_layer_chksum(UIP_PROTO_UDP, buff); } /* prototype of the packet handler */ void _packet_handler(u_char *param, const struct pcap_pkthdr *header, const u_char *pkt_data); static pcap_if_t *alldevs; static pcap_t *fs; static Ui32 MyNetMask; //static struct bpf_program My_fcode; static u_char SocketBuff[1500]; static udp_headerAll MyUDP_header; typedef struct _AYSCKT_Thread{ void *pMyHandler; pcap_t *pfp; struct bpf_program My_fcode; }AYSCKT_Thread; //AYSCKT_Thread *pThrd; AYSCKT_Thread Thrd[2]; void ListenSocket_Thread(void *pParams) { //pThrd = &Thrd;// (AYSCKT_Thread *)pParams; printf("In thread function \n"); //printf("\nlistening on %s...\n", d->description); /* start the capture */ pcap_loop(Thrd[0].pfp, 0, /*_packet_handler*/(pcap_handler)Thrd[0].pMyHandler, NULL); printf("Thread function ends \n"); _endthread(); } void ListenOtherIPs_Thread(void *pParams) { //pThrd = &Thrd;// (AYSCKT_Thread *)pParams; printf("In thread function \n"); //printf("\nlistening on %s...\n", d->description); /* start the capture */ pcap_loop(Thrd[1].pfp, 0, (pcap_handler)Thrd[1].pMyHandler, NULL); printf("Thread function ends \n"); _endthread(); } void ListenSocket_ThreadOrj(void *pParams) { printf("In thread function \n"); //printf("\nlistening on %s...\n", d->description); /* start the capture */ pcap_loop(fs, 0, _packet_handler, NULL); printf("Thread function ends \n"); _endthread(); } int AY_ClientFilterSet(pcap_t *fp, bpf_program *pfcode, char *pfilter, Ui32 netmask) { //struct bpf_program fcode; //compile the filter if (pcap_compile(fp, pfcode, pfilter, 1, netmask) < 0) { fprintf(stderr, "\nUnable to compile the packet filter. 
Check the syntax.\n"); printf("\n %s\n", pfilter); /* Free the device list */ return PCAP_ERROR; } printf("\n %s\n", pfilter); //set the filter if (pcap_setfilter(fp, pfcode) < 0) { fprintf(stderr, "\nError setting the filter.\n"); /* Free the device list */ return PCAP_ERROR; } return 1; } int AY_ClientFilterFree(struct bpf_program *pfcode) { pcap_freecode(pfcode); return 1; } int AY_ClientStartThread(void *pCallBack) { HANDLE hThread; hThread = (HANDLE)_beginthread((_beginthread_proc_type)pCallBack, 16 * 1024/*0*/, 0); return 1; } int AY_ClientFilterSetA(Ui08 idx, char *pfilter) { //pcap_t *fp = (pcap_t *)pFp; AY_ClientFilterSet(Thrd[idx].pfp, &Thrd[idx].My_fcode, pfilter, MyNetMask); if (idx == 0) { AY_ClientStartThread((void *)ListenSocket_Thread); } else { AY_ClientStartThread((void *)ListenOtherIPs_Thread); } return 1; } int AY_ClientFilterFreeA(Ui08 idx) { pcap_breakloop(Thrd[idx].pfp); //pcap_close(Thrd.pfp); //pcap_freecode(&My_fcode); return 1; } int AY_ClientFilterFreeB(Ui08 idx) { pcap_breakloop(Thrd[idx].pfp); pcap_close(Thrd[idx].pfp); pcap_freecode(&Thrd[idx].My_fcode); if (idx == 0) { pcap_freealldevs(alldevs); } return 1; } int AY_ClientSocket_Init(Ui08 idx, Ui08 *pMAC, Ui08 *pAdr, Ui16 rPort, char *pfilter, void *pCallBack, Ui32 _A) { //==============================// //pcap_if_t *alldevs; pcap_if_t *d; pcap_addr_t *a,*b; int i = 0; int ListenPort = 0; char DevFound = 0; Ui32 j=0; //==============================// pcap_t *fp; char errbuf[PCAP_ERRBUF_SIZE]; char packet_filter[40] = "udp dst port "; char packet_filterIP[40] = "ip src host "; d = NULL; a = NULL; b = NULL; /* Retrieve the device list */ if (pcap_findalldevs(&alldevs, errbuf) == -1) { fprintf(stderr, "Error in pcap_findalldevs: %s\n", errbuf); return PCAP_ERROR; } for (pcap_if_t* pInterface(alldevs); pInterface != 0; pInterface = pInterface->next) { if ((pInterface->flags & PCAP_IF_LOOPBACK) != 0) // Skip loopback interfaces { continue; } for (d = alldevs; d != NULL; d = d->next) { printf("%s:", d->name); for (a = d->addresses; a != NULL; a = a->next) { if (a->addr->sa_family == AF_INET) { printf(" %s", inet_ntoa(((struct sockaddr_in*)a->addr)->sin_addr)); if (*((Ui32 *)pAdr) == 0) { if ((Ui32)(((struct sockaddr_in*)a->addr)->sin_addr.S_un.S_addr)) { //if (j == _A) { // printf("\n\n_A = %d I will use this device !!!\n\n",_A); // printf(" %s \n\n", inet_ntoa(((struct sockaddr_in*)a->addr)->sin_addr)); b = a; //e = d; DevFound = 1; //} j++; } } else if (*((Ui32 *)pAdr) == (Ui32)(((struct sockaddr_in*)a->addr)->sin_addr.S_un.S_addr)) { b = a; DevFound = 1; } } } printf("\n"); if (DevFound) { a = b; printf("\n\n_A = %d I will use this device !!!\n\n",_A); printf(" %s \n\n", inet_ntoa(((struct sockaddr_in*)a->addr)->sin_addr)); goto L_DevFound; } } } L_DevFound: //========================================================================// if (AY_Client_DynamicIP) { bpf_u_int32 ip_raw; /* IP address as integer */ bpf_u_int32 subnet_mask_raw; /* Subnet mask as integer */ /* Get device info */ if (pcap_lookupnet(d->name, &ip_raw, &subnet_mask_raw, errbuf) < 0) { fprintf(stderr, "Error in pcap_lookupnet: %s\n", errbuf); return PCAP_ERROR; } *(((Ui32 *)pAdr) + _SUBNET_) = subnet_mask_raw; *(((Ui32 *)pAdr) + _MASK_) = ip_raw; *(((Ui32 *)pAdr) + _GW_) = ip_raw + 0x01000000;///< not good! 
} else { *(((Ui32 *)pAdr) + _MASK_) = *((Ui32 *)&CngFile.NetworkSubnetMask); *(((Ui32 *)pAdr) + _GW_) = *((Ui32 *)&CngFile.NetworkGatewayIp); *(((Ui32 *)pAdr) + _SUBNET_) = *((Ui32 *)&CngFile.NetSubnetIp); } printf("Subnet address: %s\n", AY_ConvertIPToStrRet((Ui08 *)(((Ui32 *)pAdr) + _SUBNET_), &errbuf[0])); printf("Subnet mask: %s\n", AY_ConvertIPToStrRet((Ui08 *)(((Ui32 *)pAdr) + _MASK_), &errbuf[0])); printf("Gateway address: %s\n", AY_ConvertIPToStrRet((Ui08 *)(((Ui32 *)pAdr) + _GW_), &errbuf[0])); //==========================================================================================================// if ((a != NULL)&& (d != NULL)) { *((Ui32 *)pAdr) = (Ui32)(((struct sockaddr_in*)a->addr)->sin_addr.S_un.S_addr); /* Open the adapter */ if ((fp = pcap_open_live(d->name/*argv[1]*/, // name of the device 65536, // portion of the packet to capture. It doesn't matter in this case 1, // promiscuous mode (nonzero means promiscuous) 1000, // read timeout errbuf // error buffer )) == NULL) { fprintf(stderr, "\nUnable to open the adapter. %s is not supported by WinPcap\n", d->name/*argv[1]*/); return PCAP_ERROR; } //*pFp = (void *)fp; if (d->addresses != NULL) { /* Retrieve the mask of the first address of the interface */ MyNetMask = ((struct sockaddr_in *)(d->addresses->netmask))->sin_addr.S_un.S_addr; } else /* If the interface is without addresses we suppose to be in a C class network */ MyNetMask = 0xffffff; // select filter type if (pfilter != nullptr) { ListenPort = 0; //strcpy(packet_filter, pfilter); } else if(rPort != 0) { ListenPort = 1; AY_ConvertUi32AddToStrRet(rPort, packet_filter); pfilter = &packet_filter[0]; } else { ListenPort = 0; strcat(packet_filterIP, inet_ntoa(((struct sockaddr_in*)a->addr)->sin_addr)); pfilter = &packet_filterIP[0]; } if (AY_ClientFilterSet(fp, &Thrd[idx].My_fcode, pfilter, MyNetMask) < 0) { pcap_freealldevs(alldevs); return PCAP_ERROR; } Thrd[idx].pfp = fp; Thrd[idx].pMyHandler = pCallBack; if (idx == 0) { AY_ClientStartThread((void *)ListenSocket_Thread); } else { AY_ClientStartThread((void *)ListenOtherIPs_Thread); } //===================================================// } else { //pcap_freealldevs(alldevs); return PCAP_ERROR; } //pcap_freealldevs(alldevs); return 1; } int UDP_header_init(udp_headerAll * UDP_header) { // Ethernet Header UDP_header->_ethHeader.type = mhtons(UIP_ETHTYPE_IP); // IP Header UDP_header->_ipHeader.ver_ihl = 0x45; ///< Version:4 Length:20 UDP_header->_ipHeader.tos = 0x00; ///< Not ECN-Capable Transport UDP_header->_ipHeader.identification = mhtons(0x6f63); ///< identification UDP_header->_ipHeader.flags_fo = mhtons(0x0000); ///< Fragment offset UDP_header->_ipHeader.ttl = 0x80; ///< time to live 128 UDP_header->_ipHeader.proto = 0x11; ///< UDP // UDP Header // END return 1; } int UDP_header_load(udp_headerAll * UDP_header, uip_eth_addr dest, ip_address daddr, Ui16 dport, uip_eth_addr src, ip_address saddr, Ui16 sport) { // Ethernet Header memcpy(&UDP_header->_ethHeader.dest.addr[0], &dest.addr[0], 6); memcpy(&UDP_header->_ethHeader.src.addr[0], &src.addr[0], 6); // IP Header memcpy(&UDP_header->_ipHeader.saddr, (u_char *)&saddr, 4); ///< sorce address memcpy(&UDP_header->_ipHeader.daddr, (u_char *)&daddr, 4); ///< destination address // UDP Header UDP_header->_udpHeader.sport = mhtons(sport); ///< source port UDP_header->_udpHeader.dport = mhtons(dport); ///< destination port // END return 1; } int UDP_packet_send(Ui08 idx, udp_headerAll * UDP_header, Ui08 *pBuff, int len) { int i = sizeof(udp_headerAll); Ui08 *ptr = 
(Ui08 *)_AY_MallocMemory(len + sizeof(udp_headerAll));///< max packet size udp_headerAll *pHdr = (udp_headerAll *)&ptr[0]; //pcap_t *fp = (pcap_t *)pFp; //i = sizeof(udp_headerAll); memcpy(&ptr[0], UDP_header, i); memcpy(&ptr[i], pBuff, len); i += len; pHdr->_ipHeader.tlen = mhtons(i - 14); ///< length 100 bytes pHdr->_ipHeader.crc = 0; ///< header checksum pHdr->_udpHeader.len = mhtons(len + 8); ///< length 72 + 8 bytes pHdr->_udpHeader.crc = 0; ///< checksum pHdr->_ipHeader.crc = ~(uip_ipchksum(&ptr[0])); ///< header checksum pHdr->_udpHeader.crc = ~(uip_udpchksum(&ptr[0])); ///< checksum /* Send down the packet */ if (pcap_sendpacket(Thrd[idx].pfp, ptr, i) != 0) { fprintf(stderr, "\nError sending the packet: %s\n", pcap_geterr(Thrd[idx].pfp)); _AY_FreeMemory((unsigned char*)ptr); return PCAP_ERROR; } printf("Data Has been Sent !!!! Count = %d", i); _AY_FreeMemory((unsigned char*)ptr); return 1; } int AY_ClientSocket_main(void)//(int argc, char **argv) { HANDLE hThread; //==============================// pcap_if_t *alldevs; pcap_if_t *d; int inum; u_int netmask; char packet_filter[20] = "udp dst port ";// "ip and udp"; //struct bpf_program fcode; //==============================// pcap_t *fp; char errbuf[PCAP_ERRBUF_SIZE]; u_char packet[114]; int i = 0; //udp_headerAll *pHdr; const u_char MAC[2][6] = { {0xff,0xff,0xff,0xff,0xff,0xff}, {0x4c,0xcc,0x6a,0xec,0x5d,0x94} }; const u_char IPs[2][6] = { {0xc0,0xa8,0x02,0xac}, {0xff,0xff,0xff,0xff} }; const uint16_t PortNo = 1982; const uint16_t SPortNo = 19820; //==============================// /* Retrieve the device list */ if (pcap_findalldevs(&alldevs, errbuf) == -1) { fprintf(stderr, "Error in pcap_findalldevs: %s\n", errbuf); exit(1); } for (pcap_if_t* pInterface(alldevs); pInterface != 0; pInterface = pInterface->next) { if ((pInterface->flags & PCAP_IF_LOOPBACK) != 0) // Skip loopback interfaces { continue; } for (pcap_if_t *d = alldevs; d != NULL; d = d->next) { printf("%s:", d->name); for (pcap_addr_t *a = d->addresses; a != NULL; a = a->next) { if (a->addr->sa_family == AF_INET) printf(" %s", inet_ntoa(((struct sockaddr_in*)a->addr)->sin_addr)); } printf("\n"); } } /* Print the list */ for (d = alldevs; d; d = d->next) { printf("%d. %s", ++i, d->name); if (d->description) { printf(" (%s) ------ (%s)\n", d->description, d->name); } else printf(" (No description available)\n"); if ((d->flags & PCAP_IF_LOOPBACK)) { printf(" LOOPBACK !!!\n"); } if ((d->flags & PCAP_IF_UP)) { printf(" UP !!!\n"); } if ((d->flags & PCAP_IF_RUNNING)) { printf(" RUNNING !!!\n"); } switch ((d->flags & PCAP_IF_CONNECTION_STATUS)) { case PCAP_IF_CONNECTION_STATUS_UNKNOWN: printf(" CONNECTION_STATUS_UNKNOWN !!!\n"); break; case PCAP_IF_CONNECTION_STATUS_CONNECTED: printf(" CONNECTION_STATUS_CONNECTED !!!\n"); break; case PCAP_IF_CONNECTION_STATUS_DISCONNECTED: printf(" CONNECTION_STATUS_DISCONNECTED !!!\n"); break; case PCAP_IF_CONNECTION_STATUS_NOT_APPLICABLE: printf(" CONNECTION_STATUS_NOT_APPLICABLE !!!\n"); break; } } if (i == 0) { printf("\nNo interfaces found! 
Make sure WinPcap is installed.\n"); return -1; } printf("Enter the interface number (1-%d):", i); scanf_s("%d", &inum); /* Check if the user specified a valid adapter */ if (inum < 1 || inum > i) { printf("\nAdapter number out of range.\n"); /* Free the device list */ pcap_freealldevs(alldevs); return -1; } /* Jump to the selected adapter */ for (d = alldevs, i = 0; i < inum - 1; d = d->next, i++); //==============================// /* Open the adapter */ if ((fp = pcap_open_live(d->name/*argv[1]*/, // name of the device 65536, // portion of the packet to capture. It doesn't matter in this case 1, // promiscuous mode (nonzero means promiscuous) 1000, // read timeout errbuf // error buffer )) == NULL) { fprintf(stderr, "\nUnable to open the adapter. %s is not supported by WinPcap\n", d->name/*argv[1]*/); return 2; } //============== RECEIVE =================================// /* Open the adapter */ if ((fs = pcap_open_live(d->name/*argv[1]*/, // name of the device 65536, // portion of the packet to capture. It doesn't matter in this case 1, // promiscuous mode (nonzero means promiscuous) 1000, // read timeout errbuf // error buffer )) == NULL) { fprintf(stderr, "\nUnable to open the adapter. %s is not supported by WinPcap\n", d->name/*argv[1]*/); return 2; } if (d->addresses != NULL) /* Retrieve the mask of the first address of the interface */ netmask = ((struct sockaddr_in *)(d->addresses->netmask))->sin_addr.S_un.S_addr; else /* If the interface is without addresses we suppose to be in a C class network */ netmask = 0xffffff; //compile the filter if (pcap_compile(fs, &Thrd[0].My_fcode, AY_ConvertUi32AddToStrRet(PortNo, packet_filter)/*packet_filter*/, 1, netmask) < 0) { fprintf(stderr, "\nUnable to compile the packet filter. Check the syntax.\n"); printf("\n %s\n", packet_filter); /* Free the device list */ pcap_freealldevs(alldevs); return -1; } printf("\n %s\n", packet_filter); //set the filter if (pcap_setfilter(fs, &Thrd[0].My_fcode) < 0) { fprintf(stderr, "\nError setting the filter.\n"); /* Free the device list */ pcap_freealldevs(alldevs); return -1; } hThread = (HANDLE)_beginthread(ListenSocket_ThreadOrj, 16 * 1024/*0*/, NULL); //===================================================// UDP_header_init(&MyUDP_header); UDP_header_load(&MyUDP_header, *((uip_eth_addr *)&MAC[0][0]), *((ip_address *)&IPs[1][0]), PortNo, *((uip_eth_addr *)&MAC[1][0]), *((ip_address *)&IPs[0][0]), SPortNo); /* Fill the rest of the packet */ for (i = 0; i < 114; i++) { packet[i] = (u_char)i; } Thrd[0].pfp = fp; Thrd[0].pMyHandler = _packet_handler; while (1) { UDP_packet_send(0, &MyUDP_header, &packet[0], 114); } pcap_close(fp); return 0; } /* Callback function invoked by libpcap for every incoming packet */ void _packet_handler(u_char *param, const struct pcap_pkthdr *header, const u_char *pkt_data) { struct tm *ltime; char timestr[16]; ip_header *ih; udp_header *uh; u_int ip_len; u_short sport, dport; time_t local_tv_sec; /* * unused parameter */ (VOID)(param); /* convert the timestamp to readable format */ local_tv_sec = header->ts.tv_sec; ltime = localtime(&local_tv_sec); strftime(timestr, sizeof timestr, "%H:%M:%S", ltime); /* print timestamp and length of the packet */ printf("%s.%.6d len:%d ", timestr, header->ts.tv_usec, header->len); /* retireve the position of the ip header */ ih = (ip_header *)(pkt_data + 14); //length of ethernet header /* retireve the position of the udp header */ ip_len = (ih->ver_ihl & 0xf) * 4; uh = (udp_header *)((u_char*)ih + ip_len); /* convert from network byte order to 
host byte order */ sport = ntohs(uh->sport); dport = ntohs(uh->dport); /* print ip addresses and udp ports */ printf("%d.%d.%d.%d.%d -> %d.%d.%d.%d.%d\n", ih->saddr.byte1, ih->saddr.byte2, ih->saddr.byte3, ih->saddr.byte4, sport, ih->daddr.byte1, ih->daddr.byte2, ih->daddr.byte3, ih->daddr.byte4, dport); }
# Print the input entirely in the case (upper or lower) that occurs more often;
# ties favour lowercase.
name = input('')
lower = name.lower()
upper = name.upper()
p_low = 0
q_up = 0
l = list(map(chr, range(97, 123)))                  # 'a'..'z'
t = [chr(x) for x in range(ord('A'), ord('Z') + 1)]  # 'A'..'Z'
for k in range(0, len(name)):
    for j in range(0, 26):
        if name[k] == l[j]:
            p_low = p_low + 1
        elif name[k] == t[j]:
            q_up = q_up + 1
if q_up > p_low:
    print(upper)
else:
    print(lower)
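For comparison only, the same behaviour can be expressed more compactly with Python's built-in string predicates; this is an equivalent sketch for ASCII input, not part of the original snippet:

# Equivalent one-pass version; isupper()/islower() also match non-ASCII letters,
# which only matters for non-ASCII input. Ties still favour lowercase.
name = input('')
upper_count = sum(1 for ch in name if ch.isupper())
lower_count = sum(1 for ch in name if ch.islower())
print(name.upper() if upper_count > lower_count else name.lower())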
CHAPTER 3: THE CARDIORESPIRATORY SYSTEM
Of the poliomyelitis patients with respiratory paralysis, 20-64 per cent have remained dependent on mechanical respiratory assistance. Many thorough cardiorespiratory studies have been made on respirator patients in the acute stage of the disease to control and to determine the effects of artificial respiration (2, 9, 10, 15, 24, 33, 54, 63). Various respiratory complications have occurred in conjunction with the alarming initial phase of the illness in over 50 per cent of the cases, but later pulmonary complications have been rarer. Patients who have convalesced from respiratory paralysis have not had any physiologically significant pulmonary defects produced by mechanical respirators or by poliomyelitis per se. In deaths from acute poliomyelitis there has often been myocarditis (44, 53, 61) or cardiovascular complications in which prolonged insufflation pressure might have impeded venous return, causing shock. The cause of death in chronic poliomyelitic respirator patients has been reported to lie predominantly in the respiratory system.
Cost-effectiveness of Screening for Atrial Fibrillation Using Wearable Devices

Key Points
Question: Is population-based atrial fibrillation (AF) screening using wearable devices cost-effective?
Findings: In this economic evaluation of 30 million simulated individuals with an age, sex, and comorbidity profile matching the US population aged 65 years or older, AF screening using wearable devices was cost-effective, with the overall preferred strategy identified as wearable photoplethysmography, followed conditionally by wearable electrocardiography with patch monitor confirmation (incremental cost-effectiveness ratio, $57,894 per quality-adjusted life-year). The cost-effectiveness of screening was consistent across multiple scenarios, including strata of sex, screening at earlier ages, and with variation in the association of anticoagulation with risk of stroke associated with screening-detected AF.
Meaning: This study suggests that contemporary AF screening using wearable devices may be cost-effective.

Event-related costs
For clinical events modeled (e.g., ischemic stroke, intracranial hemorrhage, and major bleeding), upfront costs were stratified by severity and obtained from the Agency for Healthcare Research and Quality (https://hcupnet.ahrq.gov/#setup) as follows: First, we extracted separate cost statistics for all International Classification of Diseases, 10th revision (ICD-10) diagnosis codes corresponding to the event of interest. Then, we sorted the costs in ascending order and divided them into quantiles equal in number to the categories of severity (e.g., tertiles for mild/moderate/severe groupings). Within each quantile, we utilized the mean hospital cost as the base case cost for the event at the corresponding severity level. The lower and upper bounds were set as the minimum and maximum cost values observed within the quantile.

In cases where one has multiple competing event-related costs, either the most relevant cost is incurred, or the maximum of the costs is incurred. For example, a history of stroke is associated with a maintenance cost for chronic post-stroke care. If a recurrent acute stroke occurs, only the upfront cost corresponding to the new stroke is invoked (since it is greater than the maintenance cost associated with chronic post-stroke care), with no additional maintenance cost.

Drug/visit costs
In cases where anticoagulation was stopped due to a history of major bleeding, or in accordance with modeled discontinuation rates, we assumed that the monthly drug cost would stop accumulating until the treatment regimen was resumed. We also assumed that physician visits for acute events (e.g., major bleeding) would also fulfill potential maintenance visit requirements. For example, if an individual on anticoagulation has a physician visit secondary to an acute bleed, that individual's next annual physician visit for anticoagulation maintenance would be no less than one year after the acute bleed.

Screening costs
For discrete screening modalities, namely single-lead ECG, 12-lead ECG, pulse palpation, and patch monitor, a one-time screening cost was incurred if and only if the test was performed. For costs associated with wrist-worn wearable screening, a one-time upfront cost was incurred upon the start of screening (corresponding to initial purchase of the device), and an additional cost of replacing the device every five years was applied as long as the given strategy called for continued wearable screening.
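Returning to the severity-tier cost derivation described under "Event-related costs" above, a minimal sketch of that procedure follows; the function name and the cost figures are invented for illustration and are not taken from the study:

import numpy as np

def severity_cost_tiers(costs, n_severity_levels):
    # costs: one mean hospital cost per ICD-10 diagnosis code for the event.
    # Sort ascending, split into as many quantile bins as there are severity
    # categories, then report (base case = mean, lower = min, upper = max) per bin.
    sorted_costs = np.sort(np.asarray(costs, dtype=float))
    bins = np.array_split(sorted_costs, n_severity_levels)
    return [(b.mean(), b.min(), b.max()) for b in bins]

# Illustrative (made-up) costs for six diagnosis codes, split into tertiles:
print(severity_cost_tiers([4200, 5100, 6550, 8900, 12300, 20750], 3))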
For all screening strategies, a one-time nurse visit cost was incurred upon screening. Also, for strategies involving a wrist-worn wearable followed by a confirmatory patch monitor, an additional nursing visit cost was incurred after an abnormal wearable signal for prescription and application of the patch monitor. Lastly, a physician visit cost was incurred for all instances where an ultimate diagnosis of AF was made (either true or false positive), corresponding to diagnosis counseling and prescription of anticoagulation if appropriate (i.e., no history of major bleeding).

Modeling of paroxysmal AF
Given the lack of reliable data regarding the test characteristics of wearable devices for detecting paroxysmal AF over longer durations of monitoring (i.e., months to years), we modeled the temporal effect of screening via a wearable device as follows. We applied literature-based values for the estimated prevalence of paroxysmal AF among individuals with screen-detected AF (59%). We then utilized estimates of the average AF burden among individuals with paroxysmal AF (4.5%). We assumed that the average AF burden follows a uniform distribution on the order of days (i.e., an individual with an AF burden of 4.5% would be expected, on average, to spend 4.5% of each day in AF). Then, the probability that an individual will not experience a single AF episode over t days is (1 - 0.045)^t, and the probability that an individual will experience at least one AF episode over t days is the complement, 1 - (1 - 0.045)^t. We then applied the known static test characteristics of the wearable device to the probability of observing AF within each cycle of simulation (i.e., one month or 30 days). For example, an individual with AF wearing a watch for 3 months would have a probability of the device being exposed to an AF episode after one cycle of 1 - (1 - 0.045)^30, or 0.749. If this individual is wearing a W-PPG (sensitivity 95.3%, specificity 99.7%), they will be diagnosed with AF with probability 0.749 * 0.953, or 0.714, after one cycle. As with other screening modalities, if a diagnosis of AF is not made, and the screening strategy under evaluation includes continued screening, then the screening process will repeat as dictated by the length of the screening interval being evaluated. In this case of 3-month screening, screening would continue for three cycles, with a probability of being diagnosed with AF of 0.714 after each cycle, and an overall probability of being diagnosed with AF of 1 - (1 - 0.714)^3, or 0.977. Although the data provided by a recent study by Diedrichsen et al. are insufficient to primarily inform test characteristics over the durations required to model wearable screening approaches, we were able to validate that our approach described above resulted in comparable estimates of sensitivity for paroxysmal AF at 30 days, after allowance for the uncertainty in AF burden, which we modeled in probabilistic sensitivity analyses (a numeric sketch of this per-cycle calculation is given below).

Sensitivity analysis assumptions
In cases where uncertainty in model parameters could not be estimated based on the available published literature, we varied point estimates by +/-20% when performing both one-way and probabilistic sensitivity analyses.
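The per-cycle arithmetic described under "Modeling of paroxysmal AF" can be reproduced with the short sketch below, using the burden, cycle length, and sensitivity values quoted above; it illustrates the stated formulas only and is not code from the study:

def detection_prob_per_cycle(af_burden=0.045, cycle_days=30, sensitivity=0.953):
    # Probability of >=1 AF episode during the cycle under the uniform daily-burden
    # assumption, multiplied by the device's static sensitivity.
    p_exposed = 1 - (1 - af_burden) ** cycle_days
    return p_exposed * sensitivity

def cumulative_detection_prob(n_cycles, **kwargs):
    # Probability of an AF diagnosis after n_cycles of repeated screening.
    p = detection_prob_per_cycle(**kwargs)
    return 1 - (1 - p) ** n_cycles

print(detection_prob_per_cycle())    # ~0.714, as in the worked example
print(cumulative_detection_prob(3))  # ~0.976-0.977, matching the example up to rounding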
Simulation size determination
To determine a sufficient cohort size for the base case simulation, taking into account first-order uncertainty (i.e., Monte Carlo error), we followed the guidelines provided by the ISPOR-SMDM Modeling Good Research Practices Task Force Working Group-6 [8]. Specifically, we tested results at increasing sample sizes from 10 million to 50 million and noted the comparative clinical effectiveness of all 8 screening strategies with respect to no screening, i.e., d(QALY), as well as the cost-effectiveness results for all 5 cases. We report these values in the tables below. At a precision of 0.001 (i.e., 100 QALYs per 100,000 persons), one can see that d(QALY) is well stabilized at simulation sizes at or above 30 million (Table B). Further, the preferred cost-effectiveness strategy remained the same for all simulation sizes, and the ICER stabilizes at a precision of $100,000 at or above 30 million (Table C). As a result, we utilized a simulation size of 30 million for the base case analysis.

eMethods
Defined using NHANES 2013-2016 health interviews. Coronary heart disease was considered present if a person reported "yes" to being told by a healthcare professional that he or she had coronary heart disease, angina or angina pectoris, heart attack, or myocardial infarction. Those who answered "no" but were diagnosed with angina based on the Rose questionnaire were also included.

* denotes baseline condition
ICER = incremental cost-effectiveness ratio; Freq = frequency; PP = pulse palpation; 12L = 12-lead electrocardiogram; PPG = wearable photoplethysmography; 1L = wearable single-lead electrocardiogram; PM = patch monitor; AF = atrial fibrillation; RR = relative risk; OAC = oral anticoagulant; NOAC = novel oral anticoagulant
// ui/theme/radius.ts
export const radius = {
  default: '4px',
  small: '3px',
  medium: '6px',
};
<filename>rlpyt/samplers/parallel/gpu/action_server.py import numpy as np from rlpyt.agents.base import AgentInputs from rlpyt.utils.synchronize import drain_queue from rlpyt.utils.logging import logger EVAL_TRAJ_CHECK = 20 # [steps]. class ActionServer: def serve_actions(self, itr): obs_ready, act_ready = self.sync.obs_ready, self.sync.act_ready step_np, agent_inputs = self.step_buffer_np, self.agent_inputs for t in range(self.batch_spec.T): for b in obs_ready: b.acquire() # Workers written obs and rew, first prev_act. # assert not b.acquire(block=False) # Debug check. if self.mid_batch_reset and np.any(step_np.done): for b_reset in np.where(step_np.done)[0]: step_np.action[b_reset] = 0 # Null prev_action into agent. step_np.reward[b_reset] = 0 # Null prev_reward into agent. self.agent.reset_one(idx=b_reset) action, agent_info = self.agent.step(*agent_inputs) step_np.action[:] = action # Worker applies to env. step_np.agent_info[:] = agent_info # Worker sends to traj_info. for w in act_ready: # assert not w.acquire(block=False) # Debug check. w.release() # Signal to worker. for b in obs_ready: b.acquire() assert not b.acquire(block=False) # Debug check. if "bootstrap_value" in self.samples_np.agent: self.samples_np.agent.bootstrap_value[:] = self.agent.value( *agent_inputs) if np.any(step_np.done): # Reset at end of batch; ready for next. for b_reset in np.where(step_np.done)[0]: step_np.action[b_reset] = 0 # Null prev_action into agent. step_np.reward[b_reset] = 0 # Null prev_reward into agent. self.agent.reset_one(idx=b_reset) # step_np.done[:] = False # Worker resets at start of next. for w in act_ready: assert not w.acquire(block=False) # Debug check. def serve_actions_evaluation(self, itr): obs_ready, act_ready = self.sync.obs_ready, self.sync.act_ready step_np, step_pyt = self.eval_step_buffer_np, self.eval_step_buffer_pyt traj_infos = list() self.agent.reset() agent_inputs = AgentInputs(step_pyt.observation, step_pyt.action, step_pyt.reward) # Fixed buffer objects. for t in range(self.eval_max_T): if t % EVAL_TRAJ_CHECK == 0: # (While workers stepping.) traj_infos.extend(drain_queue(self.eval_traj_infos_queue, guard_sentinel=True)) for b in obs_ready: b.acquire() # assert not b.acquire(block=False) # Debug check. for b_reset in np.where(step_np.done)[0]: step_np.action[b_reset] = 0 # Null prev_action. step_np.reward[b_reset] = 0 # Null prev_reward. self.agent.reset_one(idx=b_reset) action, agent_info = self.agent.step(*agent_inputs) step_np.action[:] = action step_np.agent_info[:] = agent_info if self.eval_max_trajectories is not None and t % EVAL_TRAJ_CHECK == 0: self.sync.stop_eval.value = len(traj_infos) >= self.eval_max_trajectories for w in act_ready: # assert not w.acquire(block=False) # Debug check. w.release() if self.sync.stop_eval.value: logger.log("Evaluation reach max num trajectories " f"({self.eval_max_trajectories}).") break if t == self.eval_max_T - 1 and self.eval_max_trajectories is not None: logger.log("Evaluation reached max num time steps " f"({self.eval_max_T}).") for b in obs_ready: b.acquire() # Workers always do extra release; drain it. assert not b.acquire(block=False) # Debug check. for w in act_ready: assert not w.acquire(block=False) # Debug check. 
return traj_infos class AlternatingActionServer: """Two environment instance groups may execute partially simultaneously.""" def serve_actions(self, itr): obs_ready_pair = self.obs_ready_pair act_ready_pair = self.act_ready_pair step_np_pair = self.step_buffer_np_pair agent_inputs_pair = self.agent_inputs_pair # Can easily write overlap and no overlap of workers versions. for t in range(self.batch_spec.T): for alt in range(2): step_h = step_np_pair[alt] for b in obs_ready_pair[alt]: b.acquire() # Workers written obs and rew, first prev_act. # assert not b.acquire(block=False) # Debug check. if self.mid_batch_reset and np.any(step_h.done): for b_reset in np.where(step_h.done)[0]: step_h.action[b_reset] = 0 # Null prev_action into agent. step_h.reward[b_reset] = 0 # Null prev_reward into agent. self.agent.reset_one(idx=b_reset) action, agent_info = self.agent.step(*agent_inputs_pair[alt]) step_h.action[:] = action # Worker applies to env. step_h.agent_info[:] = agent_info # Worker sends to traj_info. for w in act_ready_pair[alt]: # Final release. # assert not w.acquire(block=False) # Debug check. w.release() # Signal to worker. for alt in range(2): step_h = step_np_pair[alt] for b in obs_ready_pair[alt]: b.acquire() # assert not b.acquire(block=False) # Debug check. if "bootstrap_value" in self.samples_np.agent: self.bootstrap_value_pair[alt][:] = self.agent.value(*agent_inputs_pair[alt]) if np.any(step_h.done): for b_reset in np.where(step_h.done)[0]: step_h.action[b_reset] = 0 step_h.reward[b_reset] = 0 self.agent.reset_one(idx=b_reset) self.agent.toggle_alt() # Value and reset method do not advance rnn state. for b in self.sync.obs_ready: assert not b.acquire(block=False) # Debug check. for w in self.sync.act_ready: assert not w.acquire(block=False) # Debug check. def serve_actions_evaluation(self, itr): obs_ready, act_ready = self.sync.obs_ready, self.sync.act_ready obs_ready_pair = self.obs_ready_pair act_ready_pair = self.act_ready_pair step_np_pair = self.eval_step_buffer_np_pair agent_inputs_pair = self.eval_agent_inputs_pair traj_infos = list() self.agent.reset() stop = False for t in range(self.eval_max_T): if t % EVAL_TRAJ_CHECK == 0: # (While workers stepping.) traj_infos.extend(drain_queue(self.eval_traj_infos_queue, guard_sentinel=True)) for alt in range(2): step_h = step_np_pair[alt] for b in obs_ready_pair[alt]: b.acquire() # assert not b.acquire(block=False) # Debug check. for b_reset in np.where(step_h.done)[0]: step_h.action[b_reset] = 0 # Null prev_action. step_h.reward[b_reset] = 0 # Null prev_reward. self.agent.reset_one(idx=b_reset) action, agent_info = self.agent.step(*agent_inputs_pair[alt]) step_h.action[:] = action step_h.agent_info[:] = agent_info if (self.eval_max_trajectories is not None and t % EVAL_TRAJ_CHECK == 0 and alt == 0): if len(traj_infos) >= self.eval_max_trajectories: for b in obs_ready_pair[1 - alt]: b.acquire() # Now all workers waiting. self.sync.stop_eval.value = stop = True for w in act_ready[alt]: w.release() break for w in act_ready_pair[alt]: # assert not w.acquire(block=False) # Debug check. w.release() if stop: logger.log("Evaluation reached max num trajectories " f"({self.eval_max_trajectories}).") break # TODO: check exit logic for/while ..? if not stop: logger.log("Evaluation reached max num time steps " f"({self.eval_max_T}).") for b in obs_ready: b.acquire() # Workers always do extra release; drain it. assert not b.acquire(block=False) # Debug check. for w in act_ready: assert not w.acquire(block=False) # Debug check. 
return traj_infos class NoOverlapAlternatingActionServer: def serve_actions(self, itr): obs_ready = self.sync.obs_ready obs_ready_pair = self.obs_ready_pair act_ready_pair = self.act_ready_pair step_np, step_np_pair = self.step_buffer_np, self.step_buffer_np_pair agent_inputs, agent_inputs_pair = self.agent_inputs, self.agent_inputs_pair for t in range(self.batch_spec.T): for alt in range(2): step_h = step_np_pair[alt] for b in obs_ready_pair[alt]: b.acquire() # Workers written obs and rew, first prev_act. # assert not b.acquire(block=False) # Debug check. if t > 0 or alt > 0: # Just don't do the very first one. # Only let `alt` workers go after `1-alt` workers done stepping. for w in act_ready_pair[1 - alt]: # assert not w.acquire(block=False) # Debug check. w.release() if self.mid_batch_reset and np.any(step_h.done): for b_reset in np.where(step_h.done)[0]: step_h.action[b_reset] = 0 # Null prev_action into agent. step_h.reward[b_reset] = 0 # Null prev_reward into agent. self.agent.reset_one(idx=b_reset) action, agent_info = self.agent.step(*agent_inputs_pair[alt]) step_h.action[:] = action # Worker applies to env. step_h.agent_info[:] = agent_info # Worker sends to traj_info. for alt in range(2): step_h = step_np_pair[alt] for b in obs_ready_pair[alt]: b.acquire() # assert not b.acquire(block=False) # Debug check. if alt == 0: for w in act_ready_pair[1]: # assert not w.acquire(block=False) # Debug check. w.release() if "bootstrap_value" in self.samples_np.agent: self.bootstrap_value_pair[alt][:] = self.agent.value(*agent_inputs_pair[alt]) if np.any(step_h.done): for b_reset in np.where(step_h.done)[0]: step_h.action[b_reset] = 0 step_h.reward[b_reset] = 0 self.agent.reset_one(idx=b_reset) self.agent.toggle_alt() # Value and reset method do not advance rnn state. def serve_actions_evaluation(self, itr): obs_ready, act_ready = self.sync.obs_ready, self.sync.act_ready obs_ready_pair = self.obs_ready_pair act_ready_pair = self.act_ready_pair step_np, step_np_pair = self.eval_step_buffer_np, self.eval_step_buffer_np_pair agent_inputs = self.eval_agent_inputs agent_inputs_pair = self.eval_agent_inputs_pair traj_infos = list() self.agent.reset() step_np.action[:] = 0 # Null prev_action. step_np.reward[:] = 0 # Null prev_reward. # First step of both. alt = 0 step_h = step_np_pair[alt] for b in obs_ready_pair[alt]: b.acquire() # assert not b.acquire(block=False) # Debug check. action, agent_info = self.agent.step(*agent_inputs_pair[alt]) step_h.action[:] = action step_h.agent_info[:] = agent_info alt = 1 step_h = step_np_pair[alt] for b in obs_ready_pair[alt]: b.acquire() # assert not b.acquire(block=False) # Debug check. for w in act_ready_pair[1 - alt]: # assert not w.acquire(block=False) # Debug check. w.release() action, agent_info = self.agent.step(*agent_inputs_pair[alt]) step_h.action[:] = action step_h.agent_info[:] = agent_info for t in range(1, self.eval_max_T): if t % EVAL_TRAJ_CHECK == 0: # (While workers stepping.) traj_infos.extend(drain_queue(self.eval_traj_infos_queue, guard_sentinel=True)) for alt in range(2): step_h = step_np_pair[alt] for b in obs_ready_pair[alt]: b.acquire() # assert not b.acquire(block=False) # Debug check. for w in act_ready_pair[1 - alt]: # assert not w.acquire(block=False) # Debug check. w.release() for b_reset in np.where(step_h.done)[0]: step_h.action[b_reset] = 0 # Null prev_action. step_h.reward[b_reset] = 0 # Null prev_reward. 
self.agent.reset_one(idx=b_reset) action, agent_info = self.agent.step(*agent_inputs_pair[alt]) step_h.action[:] = action step_h.agent_info[:] = agent_info if self.eval_max_trajectories is not None and t % EVAL_TRAJ_CHECK == 0: self.sync.stop_eval.value = len(traj_infos) >= self.eval_max_trajectories if self.sync.stop_eval.value: for w in act_ready_pair[1 - alt]: # Other released past loop. # assert not w.acquire(block=False) # Debug check. w.release() logger.log("Evaluation reached max num trajectories " f"({self.eval_max_trajectories}).") break # TODO: check logic when traj limit hits at natural end of loop? for w in act_ready_pair[alt]: # assert not w.acquire(block=False) # Debug check. w.release() if t == self.eval_max_T - 1 and self.eval_max_trajectories is not None: logger.log("Evaluation reached max num time steps " f"({self.eval_max_T}).") for b in obs_ready: b.acquire() # Workers always do extra release; drain it. # assert not b.acquire(block=False) # Debug check. return traj_infos
Growth of Bali Bulls on Rations Containing Sesbania grandiflora in Central Lombok, Indonesia

Part of the Plant Sciences Commons and the Soil Science Commons. This document is available at https://uknowledge.uky.edu/igc/22/1-2/39

The 22nd International Grassland Congress (Revitalising Grasslands to Sustain Our Communities) took place in Sydney, Australia from September 15 through September 19, 2013. Proceedings Editors: David L. Michalk, Geoffrey D. Millar, Warwick B. Badgery, and Kim M. Broadfoot. Publisher: New South Wales Department of Primary Industry, Kite St., Orange, New South Wales, Australia.

Introduction
The demand for meat in Indonesia is currently growing by up to 8% per year, with beef cattle fattening identified as a major livestock industry. Bali cattle (Bos javanicus) account for almost 27% of total beef cattle in Indonesia; they are the predominant breed in the eastern islands and are highly favored by smallholder farmers for their high fertility, low calf mortality and generally higher price at markets. Lombok in west Nusa Tenggara is one of the biggest suppliers of Bali cattle in Indonesia. A major constraint to improving the overall productivity of Bali cattle is their slow growth rate, due to a lack of readily available, inexpensive, high-quality protein sources. Fodder tree legumes, such as sesbania (Sesbania grandiflora), offer a fast-growing, low-cost source of protein (Evans and Rotar 1987). Farmers in Lombok have established a unique and productive integrated farming system by planting sesbania trees along the bunds of rice paddies, providing forage and timber without significantly compromising rice yield (Dahlanuddin and Shelton 2005). As only the central part of Lombok is intensively planted with sesbania, a collaborative project funded by the Australian Centre for International Agricultural Research (ACIAR) is underway aiming to: (a) characterize the existing cattle fattening systems; and (b) assess the impact of differing levels of sesbania feeding on the growth rate of Bali bulls from weaning to maturity (about 30 months old). (Correspondence: Dahlanuddin, Faculty of Animal Science, University of Mataram, Jalan Majapahit, Mataram, Mataram City, West Nusa Tenggara 83125, Indonesia. Email: dahlan_travel@yahoo.com)

Objective 1 − Pre-trial
Three typical cattle fattening groups were selected in central Lombok in the hamlets of Montong Oboq, Bun Prie and Repok Nyerot. Commencing March 2012, animal weights, feed regimes and sale prices were monitored regularly to understand the fattening profiles of the 3 groups.

Objective 2 − Feeding trial
Within each of these groups, a semi-controlled feeding trial was begun in July 2012, using 20 male Bali calves with an average age of 7.6 ± 0.4 months and mean live weight (LW) of 90 ± 5.8 kg. Bulls were randomly allocated to the 3 villages in August 2012. Farmers were requested to feed sesbania to these bulls at rates up to 20% (fresh weight) of total diet in Montong Oboq, 40% in Bun Prie and 60% in Repok Nyerot. A rice bran supplement of 0.5 kg fresh weight/100 kg LW was supplied for farmers at Bun Prie and Repok Nyerot, where higher levels of sesbania were being fed. The actual amounts and proportions of different feeds offered were recorded on 6 consecutive days in March 2013. Live weight was measured monthly.

Results and Discussion
The pre-trial profiles of the 3 groups are presented in Table 1.
Farmers in Repok Nyerot achieved the highest daily gains and sale weights, but the monthly profit margin was slightly lower than for those in Montong Oboq. The higher gains were thought to be due to higher levels of sesbania feeding, and this aspect was tested in the subsequent feeding trial. Montong Oboq had the longest fattening period (12.5 ± 1.3 months), as they started with the lightest bulls (119 ± 15 kg). In the feeding trial, farmers could not achieve the recommended levels of sesbania feeding (Table 2). Despite similar proportions of sesbania in the diet, daily gains were higher at Repok Nyerot (0.50 kg/hd/d) than at Bun Prie (0.34 kg/hd/d); gains at Montong Oboq were 0.35 kg/hd/d, where sesbania feeding was lowest and rice bran was not fed. Differences in growth rates may have been related to variation in feeding practices by individual farmers, i.e., total dry matter offered per day and differing quality of the grass offered. These data offer a basic understanding of sesbania feeding systems in Indonesia and their productivity. Growth rates were comparable with previously recorded data, namely 0.38 kg/d for bull calves of similar age fed 30% sesbania; however, they were much higher than the 0.2 kg/d achieved in traditional fattening systems comprising diets of predominantly local grass species.

Conclusion
Although some difficulties occurred with this on-farm research, the study suggests that the inclusion of sesbania in the fattening diet can boost animal growth rates. The trial will continue to monitor the growth path on-farm until the bulls reach maturity.
package ru.job4j.classes;

public class Doctor extends Profession {
    private Diagnose diagnose = new Diagnose();

    // Returns the doctor's diagnosis for the given patient.
    public Diagnose heal(Patient patient) {
        return this.diagnose;
    }
}
<reponame>cc8848/report package com.report.web.admin.controller; import java.util.List; import java.util.Map; import javax.servlet.http.HttpServletRequest; import org.apache.commons.lang3.StringUtils; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.stereotype.Controller; import org.springframework.web.bind.annotation.RequestMapping; import org.springframework.web.bind.annotation.ResponseBody; import com.alibaba.fastjson.JSON; import com.report.biz.admin.service.GroupService; import com.report.common.dal.admin.constant.Constants; import com.report.common.dal.admin.entity.vo.GroupModel; import com.report.common.model.AjaxJson; import com.report.common.model.DataGrid; import com.report.common.model.PageHelper; import com.report.common.model.ResultCodeConstants; import com.report.common.model.SessionUtil; import lombok.extern.slf4j.Slf4j; /** * 组别Controller * @author lishun * @since 2017年4月8日 上午10:50:53 */ @Slf4j @Controller @RequestMapping("/group") public class GroupController { @Autowired private GroupService groupService; @RequestMapping(value = "/group.htm") public String index(HttpServletRequest request) { return "group/groupList"; } @RequestMapping(value = "/findGroupList.htm") @ResponseBody public DataGrid findGroupList(HttpServletRequest request, GroupModel groupModel, PageHelper pageHelper) { log.debug("findGroupList GroupModel[{}], PageHelper[{}]", JSON.toJSONString(groupModel), JSON.toJSONString(pageHelper)); groupModel.setCurrentMemberGroupCode(SessionUtil.getUserInfo().getGroupCode()); return groupService.findGroups(pageHelper, groupModel); } @RequestMapping(value = "/findAllGroupNames.htm") @ResponseBody public List<Map<String, String>> findAllGroupNames() { return groupService.findGroupNamesByCurrentMemberId(SessionUtil.getUserInfo().getMember().getId()); } @RequestMapping(value = "/addOrUpdateGroup.htm") @ResponseBody public AjaxJson addOrUpdateGroup(GroupModel groupModel, HttpServletRequest request) { AjaxJson json = new AjaxJson(); if (StringUtils.isBlank(groupModel.getGroupCode()) || StringUtils.isBlank(groupModel.getGroupName())) { json.setErrorNo(ResultCodeConstants.RESULT_INCOMPLETE); return json; } // 判断当前是更新还是新增 // 更新:判断当前组编码和数据库中的组编码是否一致 // 新增:判断组编码是否已经存在 int status = Constants.FAIL; if (groupModel.getId() != null) { // 更新操作 if (!groupService.isSameGroupCode(groupModel.getId(), groupModel.getGroupCode())) { json.setErrorNo(ResultCodeConstants.RESULT_GROUP_CODE_CANNOT_BE_MODIFIED); json.setErrorInfo("组编码不能修改!"); return json; } status = groupService.updateGroup(groupModel, SessionUtil.getUserInfo().getMember().getId(), request.getRemoteAddr()); if (status == Constants.FAIL) { json.setStatus(status); json.setErrorInfo("更新失败!"); return json; } } else { if (groupService.isGroupCodeExists(groupModel)) { json.setErrorNo(ResultCodeConstants.RESULT_GROUP_IS_EXISTS); json.setErrorInfo("组编码已经存在!"); return json; } status = groupService.saveGroup(groupModel, SessionUtil.getUserInfo().getMember().getId(), request.getRemoteAddr()); if (status == Constants.FAIL) { json.setStatus(status); json.setErrorInfo("保存失败!"); } } return json; } @RequestMapping(value = "/deleteGroup.htm") @ResponseBody public AjaxJson deleteGroup(Long id, HttpServletRequest request) { AjaxJson ajaxJson = null; // 只有权限管理员能够执行 if (!SessionUtil.getUserInfo().isAdmin()) { ajaxJson = new AjaxJson(); ajaxJson.setErrorNo(ResultCodeConstants.RESULT_PER_ADMIN_HAS_PRIV); return ajaxJson; } return groupService.deleteGroupById(id, SessionUtil.getUserInfo().getMember().getId(), 
request.getRemoteAddr()); } }
OP0199 TRP Channels Overexpression Contributes to Inflammasome Activation in Clavicular Cortical Hyperostosis?

Background: Clavicular cortical hyperostosis (CCH) is a sterile inflammatory bone disorder of unknown etiology, clinically characterized by pain and/or swelling of the clavicle. It has been regarded as a variant of chronic nonbacterial/recurrent multifocal osteomyelitis (CNO/CRMO), but due to the lack of other inflammatory sites and of recurrence it could also be regarded as a separate disease in the spectrum.

Objectives: Identification of specific gene expression patterns in CCH patients.

Methods: Total RNA was isolated from whole blood of 18 new-onset, untreated CCH patients and 8 healthy controls. DNA microarray gene expression analysis was performed in 5 CCH and 4 control patients, along with bioinformatic analysis of the retrieved data. Carefully selected differentially expressed genes (TRPM2, TRPM3, TRPM7, CASP2, MEFV, STAT3, EIF5A, ERBB2, TLR4, NLRP3, CD24, MYST3) were analyzed by qRT-PCR in all participants of the study.

Results: Microarray results and bioinformatic analysis revealed 974 differentially expressed genes, while qRT-PCR analysis showed significantly higher expression of TRPM3 and TRPM7, and lower expression of ERBB2.

Conclusions: Microarray data analysis revealed that the majority of differentially expressed genes in CCH patients are involved in various inflammatory processes, while qRT-PCR analysis confirmed a statistically significant expression change in 3 genes. Among them, TRPM3 and TRPM7 are members of the transient receptor potential (TRP) gene superfamily, which encodes proteins that act as multimodal sensor cation channels for a wide variety of stimuli, one of which is environmental temperature that, in the case of CCH, could be elicited by overuse of the sterno-clavicular joint (SCJ). Upon stimulation, TRP channels transduce electrical and/or Ca2+ signals. Dysfunctions in Ca2+ signaling due to altered TRP channel function can have strong effects on a variety of cellular and systemic processes, including the activation and regulation of the inflammasomes, which are reported to be involved in CRMO pathogenesis. ERBB2, the third gene with a significant expression change, belongs to a family of genes that encode widely expressed cell-surface growth factor receptors. Recently it has been shown that ErbB activation promotes protective cellular outcomes during inflammation; hence lower expression of this gene could cause damage due to inflammation. Based on the results of these and previous studies, we hypothesize that CCH could be an autoinflammatory disease induced by SCJ overuse, TRP channel overexpression, inflammasome activation and reduced protection during inflammation.

References
Borzutzky A, Stern S, Reiff A, et al. Pediatric chronic nonbacterial osteomyelitis. Pediatrics. 2012 Nov;130:e1190-1197.
Shimizu S, Takahashi N, Mori Y. TRPs as chemosensors (ROS, RNS, RCS, gasotransmitters). Handbook of Experimental Pharmacology. 2014;223:767-794.
Latz E, Xiao TS, Stutz A. Activation and regulation of the inflammasomes. Nature Reviews Immunology. 2013 Jun;13:397-411.
Scianaro R, Insalaco A, Bracci Laudiero L, et al. Deregulation of the IL-1beta axis in chronic recurrent multifocal osteomyelitis. Pediatric Rheumatology Online Journal. 2014;12:30.
Frey MR, Brent Polk D. ErbB receptors and their growth factor ligands in pediatric intestinal inflammation. Pediatric Research. 2014 Jan;75:127-132.

Disclosure of Interest: None declared
export { default as Playlist } from './Playlist'
from ..argumentation_theory.argumentation_theory import ArgumentationTheory
from .labels import Labels


class LabelerInterface:
    """
    Interface for labelers. Actual Labeler-objects inherit from this class.
    They should all have a label method.
    """

    def label(self, argumentation_theory: ArgumentationTheory) -> Labels:
        """
        Assign Labels to each Literal and Rule in the ArgumentationTheory.

        :param argumentation_theory: ArgumentationTheory that should be labelled.
        :return: Labels for the ArgumentationTheory.
        """
        pass
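For illustration, a concrete labeler would subclass this interface roughly as sketched below; the class name and the Labels constructor arguments are assumptions made for the sketch, not taken from the actual package:

class ExampleLabeler(LabelerInterface):
    """Sketch of a concrete labeler; the real labelling logic is omitted."""

    def label(self, argumentation_theory: ArgumentationTheory) -> Labels:
        # Assumption: Labels can be built from (literal_labels, rule_labels) dicts;
        # adapt to the real Labels API.
        labels = Labels(dict(), dict())
        # ...walk the theory's literals and rules here, assigning each a status...
        return labels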
# Copyright 2022 Cerebras Systems. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ The trainer class. """ import numpy as np import tensorflow as tf from tensorflow.compat.v1.train.experimental import DynamicLossScale from tensorflow.python.keras.mixed_precision.experimental.loss_scale_optimizer import ( LossScaleOptimizer, ) from tensorflow.python.training.experimental.loss_scale_optimizer import ( MixedPrecisionLossScaleOptimizer, ) from modelzoo.common.tf.layers.utils import summary_layer from modelzoo.common.tf.optimizers.AdamWOptimizer import AdamWOptimizer from modelzoo.common.tf.optimizers.GradAccumOptimizer import GradAccumOptimizer from modelzoo.common.tf.optimizers.LossScale import ( CSDynamicLossScale, wrap_optimizer, ) class Trainer: """ The trainer class that builds train ops based on the given configuration parameters. :param dict params: Trainer configuration parameters. :param bool tf_summary: Summaries flag. :param bool mixed_precision: Mixed precision flag. """ def __init__(self, params, tf_summary=False, mixed_precision=False): # Optimizer params self._optimizer = None self._optimizer_type = params["optimizer_type"].lower() self._momentum = params.get("momentum", 0.9) self._beta1 = params.get("beta1", 0.9) self._beta2 = params.get("beta2", 0.999) self._epsilon = float( params.get("epsilon", 1e-05 if mixed_precision else 1e-08) ) self._use_bias_correction = params.get("use_bias_correction", False) self._weight_decay_rate = params.get("weight_decay_rate", 0.0) self._exclude_from_weight_decay = params.get( "exclude_from_weight_decay", ["LayerNorm", "layer_norm", "bias", "Bias"], ) self._rmsprop_decay = params.get("rmsprop_decay", 0.9) self._rmsprop_momentum = params.get("rmsprop_momentum", 0.0) # Learning rate params self._lr_params = params["learning_rate"] if not isinstance(self._lr_params, (float, str, dict, list)): raise ValueError( f"Learning rate must be a float, a dict, or a list of dicts. " f"Got {type(self._lr_params)}" ) # Loss scaling self._loss_scaling_factor = params.get("loss_scaling_factor", 1.0) if isinstance( self._loss_scaling_factor, str ) and self._loss_scaling_factor not in ['dynamic', 'tf_dynamic']: raise ValueError( "Loss scaling factor must be either numeric or " "one of the string values ['dynamic, 'tf_dynamic']. " f"Instead got {self._loss_scaling_factor}." 
) # Dynamic loss scaling (DLS) params required by # CS-supported and tf native DLS optimizers self._initial_loss_scale = params.get("initial_loss_scale", 2.0 ** 15) self._steps_per_increase = params.get("steps_per_increase", 2000) # Extra DLS params required only by CS-supported # DLS optimizer (loss_scaling_factor=='dynamic') self._min_loss_scale = params.get("min_loss_scale", 2.0 ** -14) self._max_loss_scale = params.get("max_loss_scale", 2.0 ** 15) self._overflow_tolerance = params.get("overflow_tolerance", 0.05) # Gradient clipping params self._max_gradient_norm = params.get("max_gradient_norm", 0) self._max_gradient_value = params.get("max_gradient_value", 0) # Gradient accumulation params self._grad_accum_steps = params.get("grad_accum_steps", 1) # Util params self._log_summaries = params.get("log_summaries", False) self._log_grads = params.get("log_grads", False) self._log_hists = params.get("log_hists", False) self._disable_lr_steps_reset = params.get( "disable_lr_steps_reset", False ) self._denormal_range = 2 ** -14 if mixed_precision else 2 ** -126 self._gradient_global_norm = None self._loss_scale_value = None self.tf_summary = tf_summary self._ws_summary = params.get("ws_summary", False) def build_train_ops(self, loss): """ Setup optimizer and build train ops. :param Tensor loss: The loss tensor :return: Train ops """ self._optimizer = self.build_optimizer() grads_and_vars = self._optimizer.compute_gradients( tf.cast(loss, tf.float32) ) if self._log_summaries: self._gradient_global_norm = tf.linalg.global_norm( [g for (g, v) in grads_and_vars] ) if not self.is_grad_accum(): tf.compat.v1.summary.scalar( 'train/unclipped_grad_norm', self._gradient_global_norm ) if self._ws_summary: gradient_num_zeros = tf.reduce_sum( [ tf.reduce_sum(tf.cast(tf.equal(g, 0.0), tf.float32)) for (g, v) in grads_and_vars ], ) tf.compat.v1.summary.scalar( "train/grad_num_zeros", gradient_num_zeros ) self._params_global_norm = tf.linalg.global_norm( [v for (g, v) in grads_and_vars] ) tf.compat.v1.summary.scalar( 'train/params_norm', self._params_global_norm ) # This code mimics the CS1 dynamic loss scaling # kernel implementation, where the global norm of # weight gradients is computed to detect NaN/Inf and # is reused in gradient clipping. This saves compute # and simplifies kernel matching. if isinstance( self._optimizer, (LossScaleOptimizer, MixedPrecisionLossScaleOptimizer), ): # CSDynamicLossScale checks the global norm of the weight # gradients to determine whether a NaN/Inf has occurred when its # update() method is called. If its global_norm field is set, then # it will just check that value instead of recomputing the norm. if hasattr(self._optimizer.loss_scale, 'global_norm'): if self._gradient_global_norm is None: self._gradient_global_norm = tf.linalg.global_norm( [g for (g, v) in grads_and_vars] ) self._optimizer.loss_scale.global_norm = ( self._gradient_global_norm ) clipped_grads_and_vars = self.clip_gradients( grads_and_vars, global_norm=self._gradient_global_norm ) global_step = tf.compat.v1.train.get_or_create_global_step() train_op = self._optimizer.apply_gradients( clipped_grads_and_vars, global_step, ) if self._log_summaries and self._log_grads and not self.is_grad_accum(): # Log the scaled gradients for (g, v) in grads_and_vars: if "kernel" in v.name: self.log_training_summaries( self._rescale(g), v.name, f"kernel_grads" ) elif "bias" in v.name: self.log_training_summaries( self._rescale(g), v.name, f"bias_grads" ) return train_op def build_optimizer(self): """ Setup the optimizer. 
:returns: The optimizer """ lr = self.get_learning_rate() if self._log_summaries and not self.is_grad_accum(): tf.compat.v1.summary.scalar('train/lr', lr) optimizer = None if self._optimizer_type == "sgd": optimizer = tf.compat.v1.train.GradientDescentOptimizer( learning_rate=lr, name="SGD", ) elif self._optimizer_type == "momentum": optimizer = tf.compat.v1.train.MomentumOptimizer( learning_rate=lr, momentum=self._momentum, name="SGDM", ) elif self._optimizer_type == "adam": optimizer = tf.compat.v1.train.AdamOptimizer( learning_rate=lr, beta1=self._beta1, beta2=self._beta2, epsilon=self._epsilon, name="Adam", ) elif self._optimizer_type == "adamw": optimizer = AdamWOptimizer( learning_rate=lr, weight_decay_rate=self._weight_decay_rate, beta1=self._beta1, beta2=self._beta2, epsilon=self._epsilon, exclude_from_weight_decay=self._exclude_from_weight_decay, use_bias_correction=self._use_bias_correction, name="AdamW", ) elif self._optimizer_type == "rmsprop": optimizer = tf.compat.v1.train.RMSPropOptimizer( learning_rate=lr, use_locking=False, centered=False, decay=self._rmsprop_decay, momentum=self._rmsprop_momentum, name="RMSProp", ) else: raise ValueError(f'Unsupported optimizer {self._optimizer_type}') # Set up loss scale loss_scale = None if self.uses_dynamic_loss_scaling(): if self._loss_scaling_factor == 'dynamic': # Explicit Cerebras System optimized dynamic loss scaling loss_scale = CSDynamicLossScale( initial_loss_scale=self._initial_loss_scale, increment_period=self._steps_per_increase, multiplier=2.0, min_loss_scale=self._min_loss_scale, max_loss_scale=self._max_loss_scale, overflow_tolerance=self._overflow_tolerance, ) else: # For any Cerebras System run, DynamicLossScale will be # automatically replaced with CSDynamicLossScale loss_scale = DynamicLossScale( initial_loss_scale=self._initial_loss_scale, increment_period=self._steps_per_increase, multiplier=2.0, ) self._loss_scale_value = loss_scale() if self._log_summaries and not self.is_grad_accum(): tf.compat.v1.summary.scalar( 'train/loss_scale', self._loss_scale_value ) if self.tf_summary: summary_layer(tf.cast(self._loss_scale_value, tf.float16)) elif self.uses_static_loss_scaling(): loss_scale = self._loss_scaling_factor self._loss_scale_value = self._loss_scaling_factor # Wraps optimizer with: # V1 optimizer: # loss_scale_optimizer_v1.MixedPrecisionLossScaleOptimizer # V2 optimizer (i.e. Keras): # loss_scale_optimizer_v2.LossScaleOptimizer # MixedPrecisionLossScaleOptimizer returns unscaled grads # Some may be NaNs, in which case, apply_gradients() won't apply # them and may adjust the loss scaling factor if loss_scale is not None: optimizer = wrap_optimizer(optimizer, loss_scale=loss_scale) # Wraps optimizer with GradAccumOptimizer # for gradient accumulation if self.is_grad_accum(): optimizer = GradAccumOptimizer( optimizer, grad_accum_steps=self._grad_accum_steps ) return optimizer def get_learning_rate(self): """ Define the learning rate schedule. Currently supports: - constant - exponential - linear - polynomial - piecewise constant - inverse exponential time decay (not supported natively) learning_rate can be specified in yaml as: - a single float for a constant learning rate - a dict representing a single decay schedule - a list of dicts (for a series of decay schedules) :returns: the learning rate tensor """ def _get_scheduler(schedule_params, step): """ Parses a dict of learning rate scheduler specifications and returns a learning rate tensor. 
:param dict schedule_params: A dict with a "scheduler" key (e.g., schedule_params["scheduler"] = "Exponential") and all params schedulers of that type need. :param tf.Tensor step: The step that the scheduler should use to calculate the learning rate. :returns: The learning rate tensor. """ scheduler = schedule_params["scheduler"] if scheduler == "Constant": return tf.constant( schedule_params["learning_rate"], dtype=tf.float32 ) elif scheduler == "Exponential": return tf.compat.v1.train.exponential_decay( schedule_params["initial_learning_rate"], step, schedule_params["decay_steps"], schedule_params["decay_rate"], staircase=schedule_params.get("staircase", False), ) elif scheduler == "PiecewiseConstant": return tf.compat.v1.train.piecewise_constant( step, boundaries=schedule_params["boundaries"], values=schedule_params["values"], ) elif scheduler == "Polynomial" or scheduler == "Linear": power = ( 1.0 if scheduler == "Linear" else schedule_params.get("power", 1.0) ) return tf.compat.v1.train.polynomial_decay( learning_rate=float( schedule_params["initial_learning_rate"] ), global_step=step, decay_steps=schedule_params["steps"], end_learning_rate=schedule_params["end_learning_rate"], power=power, cycle=schedule_params.get("cycle", False), ) elif scheduler == "Cosine": return tf.compat.v1.train.cosine_decay( learning_rate=schedule_params["initial_learning_rate"], global_step=step, decay_steps=schedule_params["decay_steps"], alpha=schedule_params.get("alpha", 0.0), ) else: raise ValueError(f"Unsupported LR scheduler {scheduler}") # handle a constant learning rate # scientific notation (e.g. "1e-5") parsed as string in yaml if isinstance(self._lr_params, (float, str)): return tf.constant(float(self._lr_params), dtype=tf.float32) global_step = tf.compat.v1.train.get_or_create_global_step() # handle a single decay schedule if isinstance(self._lr_params, dict): return _get_scheduler(self._lr_params, global_step) # handle a list of decay schedules assert isinstance(self._lr_params, list) if len(self._lr_params) == 1: return _get_scheduler(self._lr_params[0], global_step) total_steps = 0 schedule_sequence = [] # if disable_lr_steps_reset is True, global_step will not be offset, # meaning that schedules will overlap rather than occur sequentially. # helps replicate Google's LR schedules on BERT. step_reset_mask = 1 - int(self._disable_lr_steps_reset) for i, schedule_params in enumerate(self._lr_params): # default argument needed so that schedule is captured in for loop # see https://docs.python.org/3/faq/programming.html#id10 schedule_fn = lambda sp=schedule_params, ts=total_steps: _get_scheduler( sp, global_step - (ts * step_reset_mask) ) if i == len(self._lr_params) - 1: break # all schedules except final become cases, `decay_steps` is used # by cosine decay schedule currently if ( "steps" not in schedule_params and "decay_steps" not in schedule_params ): raise ValueError( "Non-final LR schedules must specify number of steps." 
) # one of two cases to enable schedules if "steps" in schedule_params: total_steps += schedule_params["steps"] elif ( "decay_steps" in schedule_params and schedule_params["scheduler"] == "Cosine" ): # add this case for cosine decay schedule total_steps += schedule_params["decay_steps"] schedule_sequence.append( (tf.less(global_step, total_steps), schedule_fn) ) # final schedule becomes the default return tf.case(schedule_sequence, default=schedule_fn) def clip_gradients(self, grads_vars, global_norm=None): """ Performs basic gradient clipping: - by global norm if self._max_gradient_norm is set - by value if self._max_gradient_value is set, to the symmetric range (-self._max_gradient_value, self._max_gradient_value) :param Tensor grads_vars: List of ``(grad, var)`` tuples """ # clip by norm if self._max_gradient_norm: if self._max_gradient_value: raise ValueError( "Gradients can be clipped by norm or by value, but not both. " "Do not set both max_gradient_norm and max_gradient_value." ) if self._max_gradient_norm < 0: raise ValueError( f"max_gradient_norm cannot be negative. Got " f"{self._max_gradient_norm}" ) gradients = [g for (g, v) in grads_vars] clipped_gradients, _ = tf.clip_by_global_norm( gradients, self._max_gradient_norm, use_norm=global_norm, ) grads_vars = [ (clipped_gradients[i], grads_vars[i][1]) for i in range(len(gradients)) ] # clip by value elif self._max_gradient_value: if self._max_gradient_value < 0: raise ValueError( f"max_gradient_value cannot be negative. Got " f"{self._max_gradient_value}" ) for i, (g, v) in enumerate(grads_vars): clipped_gradient = tf.clip_by_value( g, -self._max_gradient_value, self._max_gradient_value ) grads_vars[i] = (clipped_gradient, v) return grads_vars def log_training_summaries(self, tensor, name, family): """ Make summaries for training. 
Plotting summaries for - Sparsity of tensor - Histogram of tensor (on log scale) - Denormals in tensor - Norm of tensor :param Tensor tensor: tensor to plot summaries for :param str name: name of the tensor to plot summaries for :param str family: family that the tensor belongs to (kernel / bias) """ tf.compat.v1.summary.scalar( f"sparsity_{family}/{name}", ( 1.0 - tf.math.count_nonzero(tensor, dtype=tf.float32) / tf.size(tensor, out_type=tf.float32) ), ) tf.compat.v1.summary.scalar( f"denormal_{family}/{name}", ( tf.reduce_sum( tf.cast( tf.math.logical_and( tf.math.less(tf.abs(tensor), self._denormal_range), tf.math.not_equal(tensor, 0), ), tf.float32, ) ) / tf.size(tensor, out_type=tf.float32) ), ) tf.compat.v1.summary.scalar( f"norm_{family}/{name}", tf.linalg.global_norm([tensor]) ) if self._log_hists: tf.compat.v1.summary.histogram( f"{family}/{name}", tf.math.log(tf.cast(tf.abs(tensor), tf.float32) + 2.0 ** -50) / tf.math.log(2.0), ) def _rescale(self, g): """ Scale the gradients for plotting :param Tensor g: tensor to scale """ try: output = g * tf.cast(self._loss_scale_value, g.dtype) except Exception as e: tf.compat.v1.logging.debug(e) output = g return output def is_grad_accum(self): return True if self._grad_accum_steps > 1 else False def uses_loss_scaling(self): return ( self.uses_dynamic_loss_scaling() or self.uses_static_loss_scaling() ) def uses_dynamic_loss_scaling(self): return self._loss_scaling_factor in ['dynamic', 'tf_dynamic'] def uses_static_loss_scaling(self): return ( not isinstance(self._loss_scaling_factor, str) and np.isscalar(self._loss_scaling_factor) and not np.isclose(self._loss_scaling_factor, 1.0) ) @property def gradient_global_norm(self): return self._gradient_global_norm @property def loss_scale_value(self): return self._loss_scale_value @property def grad_accum_steps(self): return self._grad_accum_steps @property def log_summaries(self): return self._log_summaries @property def optimizer(self): return self._optimizer
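As a usage sketch for the Trainer class above: a hypothetical params dictionary whose keys mirror those read in __init__ and get_learning_rate; the specific values are illustrative choices, not defaults from the repository:

# Hypothetical configuration; keys mirror those read by Trainer.__init__ above.
params = {
    "optimizer_type": "adamw",
    "learning_rate": [
        {"scheduler": "Linear", "initial_learning_rate": 1.0e-4,
         "end_learning_rate": 0.0, "steps": 10000},
    ],
    "loss_scaling_factor": "dynamic",
    "max_gradient_norm": 1.0,
    "grad_accum_steps": 1,
    "log_summaries": False,
}
trainer = Trainer(params, tf_summary=False, mixed_precision=True)
# Inside a TF1-style model_fn one would then build the op from the loss tensor:
# train_op = trainer.build_train_ops(loss)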
Sal Caccavale

Playing career

Caccavale played high school soccer from 1999 to 2002 for the West Islip Lions and collegiately at American University, where he finished as the seventh leading scorer in school history. He led the team in points in 2006 and was named First Team All-Patriot League three straight years. In January 2007, the New York Red Bulls picked Caccavale in the second round of the 2007 MLS Supplemental Draft, 19th overall. Caccavale made his MLS debut on May 19, 2007, coming on as a substitute in the 88th minute. He made a quick impact, scoring a goal 4 minutes after coming on, the final goal in a 4-0 win for New York over Columbus Crew. Caccavale also appeared in 9 matches for the Red Bulls reserves in 2007, registering 3 assists, before being released by the team on January 23, 2008. He is the all-time Major League Soccer record holder for goals per minute played: because he came on at the end of the game, he is credited with only two minutes played, and with one goal scored he averaged the equivalent of 45 goals over a 90-minute game. Caccavale was announced as a Monarchs player on 10 February 2009. He last played for the Real Maryland Monarchs in the USL Second Division before retiring in the winter of 2009.

Coaching career

After retiring as a player, Caccavale took a job as a youth coach with the Bethesda Soccer Club. He also works as the sophomore coach for the soccer team at Woodrow Wilson High School.
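As a rough check of the goals-per-minute figure quoted above (taking the two minutes and one goal credited to him as the inputs), the scaling is simply:

1 goal / 2 minutes × 90 minutes = 45 goals per 90 minutes.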
// Source file: 20211SVAC/G28/OLC2_VJ2021_PY1/src/scriptPath.tsx
import { Objeto } from './Expresiones/Objeto';
import { Entorno } from './AST/Entorno';
import gramaticaA from './GramaticaXPATH/XPATH_A';
import gramaticaD from './GramaticaXPATH/XPATH_D';
import gramaticaXMLA from './GramaticaXML/gramaticaA';
import gramaticaXMLD from './GramaticaXML/gramaticaD';

var Salidas = 1;

export default function ejecutarCodigo(entrada: string, tipo: string, entradaxml: string) {
    Salidas = 1;
    if (tipo === 'a') {
        const XPATH = gramaticaA.parse(entrada);
        const XML = gramaticaXMLA.parse(entradaxml);
        darPadre(XML[0].listaObjetos);
        var Supp = [];
        Supp.push(XML[0].listaObjetos);
        var Salida = ConsultasAscendente(XPATH.Consultas, Supp);
        var retorno = { XPATH: XPATH, XML: XML, Salida: Salida };
        return retorno;
    } else {
        const XPATH2 = gramaticaD.parse(entrada);
        const XML2 = gramaticaXMLD.parse(entradaxml);
        darPadre(XML2[0].listaObjetos);
        var Salida2 = '';
        var retorno2 = { XPATH: XPATH2, XML: XML2, Salida: Salida2 };
        return retorno2;
    }
}

// Assign each child object its parent, recursively.
const darPadre = (p: Objeto) => {
    for (const i in p.listaObjetos) {
        p.listaObjetos[i].padre = p;
        darPadre(p.listaObjetos[i]);
    }
};

// Run every ascending query and concatenate the textual output.
function ConsultasAscendente(Consultas: any, objetos: any): any {
    var SalidaConsulta = '';
    for (var i = 0; i < Consultas.length; i++) {
        SalidaConsulta += ConsultaAscendente(Consultas[i], objetos);
    }
    return SalidaConsulta;
}

function ConsultaAscendente(Consulta: any, objetos: any): any {
    if (Consulta.Relativo === '' || Consulta.Relativo === '/') {
        return AR(Consulta, objetos);
    } else if (Consulta.Relativo === '//') {
        return AN(Consulta, objetos);
    }
    return '';
}

// Ascending, relative query
function AR(Consulta: any, objetos: any): any {
    switch (Consulta.Posicion.Valor) {
        case 'ID': {
            return ARI(Consulta, objetos, Consulta.Posicion.Hijos[0].Valor);
        }
        case '*': {
            break;
        }
        case 'node()': {
            break;
        }
        case 'text()': {
            break;
        }
        case '@': {
            break;
        }
        case '.': {
            break;
        }
        case '..': {
            break;
        }
    }
    return '';
}

// Ascending, relative query by ID
function ARI(Consulta: any, objetos: any, ID: string): any {
    var ListaObjetos = [];
    for (var i = 0; i < objetos.length; i++) {
        if (objetos[i].identificador === ID) {
            ListaObjetos.push(objetos[i]);
        }
    }
    if (ListaObjetos.length > 0) {
        if (Consulta.Predicado.Valor === '') {
            if (Consulta.Secuencia === '') {
                var texto = '';
                for (var i = 0; i < ListaObjetos.length; i++) {
                    texto += Salidas + '. ' + ImprimirTextoA(ListaObjetos[i]) + '\n';
                    Salidas += 1;
                }
                return texto;
            } else {
                var texto = '';
                for (var i = 0; i < ListaObjetos.length; i++) {
                    texto += ConsultaAscendente(Consulta.Secuencia, ListaObjetos[i].listaObjetos);
                }
                return texto;
            }
        }
    }
    return '';
}

// Collect the text content of an object, descending into children if needed.
function ImprimirTextoA(objeto: any): any {
    var texto = '';
    if (objeto.texto != '') {
        for (var i = 0; i < objeto.texto.length; i++) {
            texto += objeto.texto[i] + ' ';
        }
    } else {
        for (var i = 0; i < objeto.listaObjetos.length; i++) {
            texto += ImprimirTextoA(objeto.listaObjetos[i]);
        }
    }
    return texto;
}

// Ascending, non-relative query
function AN(Consulta: any, objetos: any): any {
    switch (Consulta.Posicion.Valor) {
        case 'ID': {
            break;
        }
        case '*': {
            break;
        }
        case 'node()': {
            break;
        }
        case 'text()': {
            break;
        }
        case '@': {
            break;
        }
        case '.': {
            break;
        }
        case '..': {
            break;
        }
    }
    return '';
}
package com.jqhee.latte.core.delegates.web.event;

import com.jqhee.latte.core.util.log.LatteLogger;

/**
 * @author: wuchao
 * @date: 2017/11/29 22:44
 * @description: logs the parameters of events that have no registered handler.
 */
public class UndefineEvent extends Event {

    @Override
    public String execute(String params) {
        LatteLogger.e("UndefineEvent", params);
        return null;
    }
}
Double-stranded DNA Binding Domain of Poly(ADP-ribose) Polymerase-1 and Molecular Insight into the Regulation of Its Activity* Poly(ADP-ribose) polymerase-1 (PARP-1) modifies various proteins, including itself, with ADP-ribose polymers (automodification). Polymer synthesis is triggered by binding of its zinc finger 1 (Zn1) and 2 (Zn2) to DNA breaks and is followed by inactivation through automodification. The multiple functional domains of PARP-1 appear to regulate activation and automodification-mediated inactivation of PARP-1. However, the roles of these domains in activation-inactivation processes are not well understood. Our results suggest that Zn1, Zn2, and a domain identified in this study, the double-stranded DNA binding (DsDB) domain, are involved in DNA break-dependent activation of PARP-1. We found that binding of the DsDB domain to double-stranded DNA and DNA break recognition by Zn1 and Zn2, whose actual binding targets are likely to be single-stranded DNA, lead to the activation of PARP-1. In turn, the displacement of single- and double-stranded DNA from Zn2 and the DsDB domain caused by ADP-ribose polymer synthesis results in the dissociation of PARP-1 from DNA breaks and thus its inactivation. We also found that the WGR domain is one of the domains involved in the RNA-dependent activation of PARP-1. Furthermore, because zinc finger 3 (Zn3) has the ability to bind to single-stranded RNA, it may have an indirect role in RNA-dependent activation. PARP-1 functional domains, which are involved in oligonucleic acid binding, therefore coordinately regulate PARP-1 activity depending on the status of the neighboring oligonucleic acids. Based on these results, we proposed a model for the regulation of PARP-1 activity. It has been demonstrated that binding of PARP-1 to double strand DNA breaks (DSB) or single strand DNA breaks (SSB) triggers ADP-ribose synthesis. PARP-1 activation also occurs through its binding to the linker DNA of nucleosomes and upon activation of transcription. In addition to DNA, PARP-1 is capable of binding to RNA. Thus, PARP-1 could have the ability to recognize diverse oligonucleic acid structures. ADP-ribose synthesis leads to PARP-1 inactivation through automodification. These complex regulations of PARP-1 activity are carried out by at least six functional domains of PARP-1. PARP-1 is a 110-kDa enzyme with a modular architecture of multiple functional domains (see Fig. 1A). The most N-terminal end of PARP-1 is the DNA break binding (DBD) domain, which contains zinc finger 1 (Zn1), a 20-residue linker peptide, and zinc finger 2 (Zn2), which has over 80% homology with Zn1. Binding of these zinc fingers to DSBs, SSBs and the linker DNA of nucleosomes is essential to activate PARP-1. Although it has been suggested that Zn1 recognizes DSBs and that Zn2 has a binding preference for SSBs, both fingers are required for full activation of PARP-1. It is, however, not well known how Zn1 and Zn2 recognize diverse types of DNA structures. Zn2 is linked to zinc finger 3 (Zn3) by a 26-residue peptide. This zinc finger was identified in recent years as being involved in PARP-1-PARP-1 homodimer formation. Zn3 mutants are not activated by DNA breaks, indicating that functional homodimerization of PARP-1 through Zn3 is required for efficient activation of PARP-1. 
Following Zn3, an 80-residue peptide, which does not form any particular ternary structure, connects Zn3 with a domain called BRCT, involved in the interaction between PARP-1 and other enzymes, including XRCC1 and topoisomerase I (19-21). Between the BRCT and the WGR domain, there is a highly basic 60-residue peptide in which the lysine and glutamic acid residues are known as ADP-ribose polymer attachment sites (33). Because this peptide does not form any particular ternary structure either, it is susceptible to protease digestion. The precise function of the WGR domain is not known, although a recent report suggests that it is involved in the regulation of the catalytic activity of PARP-1. The C-terminal end of PARP-1 is the catalytic (CAT) domain, connected to the WGR domain by a 10-residue peptide. This domain is involved in ADP-ribose synthesis. It appears that PARP-1 controls various fundamental cellular processes by coordinating the action of PARP-1 domains. Despite the accumulation of knowledge regarding such domains, the underlying mechanisms involved in PARP-1 activity regulation have not been fully elucidated. We thus performed characterization of Zn1, Zn2, Zn3, and the WGR domain. Furthermore, we have identified the double-stranded DNA binding domain (DsDB), which corresponds to the highly basic 60-residue peptide located between the BRCT and WGR domains. This domain, in its unbound form, inhibited the elongation of ADP-ribose polymers. Binding of the domain to dsDNA released the inhibition, allowing long ADP-ribose polymer formation. Our results suggest that the DsDB domain is one of the key domains involved in regulation of ADP-ribose polymer synthesis. Together with the characterization results of the PARP-1 zinc fingers and WGR domain, we proposed a model for PARP-1 activity regulation.

Binding Assay-Single-stranded DNA (ssDNA) and single-stranded RNA (ssRNA) were incubated with ATP and polynucleotide kinase. ADP-ribose polymers labeled with 32P were prepared by incubation of PARP-1 (100 pmol) and NAD (10 μCi) in 50 μl of reaction mixture containing 1 mM NAD, 10 mM Tris-HCl, pH 7.5, and 1 mM MgCl2 for 30 min at 30°C. PARP-1 was then digested by proteinase K, and 32P-labeled ADP-ribose polymers were precipitated with ethanol. Reactions were carried out with 30 pmol of probes and various amounts of PARP-1 domains or fragments in 10-μl buffer aliquots containing 25 mM Tris-HCl, pH 8.0, 50 mM NaCl, and 1 mM MgCl2 for 15 min at 23°C. Proteins were fractionated on 7% native acrylamide gels. 32P activity was visualized by autoradiography and quantified using a Typhoon 9200 imager (GE Healthcare) or an AlphaImager (Packard).

Surface Plasmon Resonance Biosensor-Carboxyl surface sensor chips were purchased from Reichert Analytical Instruments. To create the nickel-nitrilotriacetic acid surface, chips were washed with 1 mM NaOH. Then, the surface of the chips was soaked with 200 mM 1-ethyl-3-(3-dimethylaminopropyl)carbodiimide and 50 mM N-hydroxysuccinimide for 10 min at room temperature, followed by washing with 20 mM sodium acetate. Sensor chips were incubated with 1 mg/ml (S)-N-(5-amino-1-carboxypentyl)iminodiacetic acid hydrate for 30 min at 4°C. After washing, the surface was treated with 1 M ethanolamine for 10 min at room temperature. Then, the surface was treated with 1 mM NaOH for 10 min and then incubated with 40 mM NiSO4 for 10 min at room temperature. After the surface was rinsed with 150 mM NaCl, the chips were used for analysis.
A Reichert SR7000 was employed for analysis after immobilization of Zn1 or Zn2 to the sensor surface by injecting 1 mg/ml Zn1 or Zn2 at a flow rate of 0.005 ml/min. Then, the response was measured by injecting 25, 50, 75, 100, and 200 nM poly(dC) or poly(rC) at a flow rate of 1 ml/min. Binding constants of poly(dC) and poly(rC) to Zn1 or Zn2 were determined by the Prism and Scrubber programs.

Poly(ADP-ribosyl)ation Assay-Poly(ADP-ribosyl)ation assays were carried out with dumbDNAs and PARP-1 domains or fragments in a 10-μl reaction mixture containing 1 μCi of NAD, 1 mM NAD, 10 mM Tris-HCl, pH 7.5, 100 mM NaCl, and 1 mM MgCl2 at 37°C for 15 min. Proteins were fractionated on an SDS-polyacrylamide gel. Stacking gel was not used. 32P activity was visualized by autoradiography and quantified using a Typhoon 9200 imager.

RESULTS

ssDNAs as Potential Binding Targets of Zn1 and Zn2-DNA nicks, DNA gaps, and DSBs, including blunt, 3′-protruding, and 5′-protruding ends, are known binding targets of the DBD domain. This domain is also capable of binding to DNA stem loops, implying that the domain recognizes oligonucleic acid structures that commonly appear in DNA breaks and in DNA stem loops, such as ssDNA. Binding of the DBD domain to DNA breaks is a key step in PARP-1 activation, but its mechanism is not well understood. We thus began this study by characterizing Zn1 and Zn2 binding to ssDNA. When recombinant Zn1 and Zn2 (Fig. 1, A and B) were incubated with 30-nucleotide synthetic 32P-labeled ssDNAs, i.e. poly(dA), poly(T), and poly(dC) (except poly(dG), which forms G-quadruplexes), discrete retarded bands were formed (Fig. 2A). On the other hand, neither recombinant showed significant binding to poly(rA), poly(U), and poly(rC) (Fig. 2B). We also used Zn3 and found that Zn3 is capable of binding to ssRNA (Fig. 2B). These results suggest that ssDNA is one of the potential binding targets of Zn1 and Zn2.

Cross-binding of ssDNA by the DBD Domain-Binding of Zn1 and Zn2 to ssDNA suggests that the DBD domain contains two ssDNA binding sites. To study whether the domain is capable of binding to two ssDNA molecules, we carried out cross-binding assays. As illustrated in Fig. 2C, this assay was designed to pull down a 32P-labeled oligonucleic acid, using avidin-Sepharose, when the labeled oligonucleic acid binds to one functional zinc finger and a biotinylated oligonucleic acid binds to the other. For example, 32P-labeled poly(rA) was pulled down when PARP-1 was incubated with biotinylated poly(dC), demonstrating the binding of Zn3 to poly(rA) and of Zn1 or Zn2 to poly(dC). 32P-labeled poly(U) is expected to be pulled down with biotinylated poly(U) if Zn3 forms a functional homodimer and if each Zn3 in the homodimer binds ssRNA. Indeed, 32P-labeled poly(U) was pulled down, suggesting that both Zn3 domains are capable of binding to ssRNA. In the case of the DBD domain, 32P-labeled poly(dC) was pulled down by avidin-Sepharose upon incubation of the domain with biotinylated poly(dC). Two zinc fingers in the DBD domain can thus indeed independently bind to ssDNA.

Binding Affinity of Zn1 and Zn2 to ssDNA-We then employed a surface plasmon resonance biosensor and poly(dC) to investigate whether Zn1 and Zn2 have the necessary affinity to form stable complexes with ssDNA. As summarized in Table 1, the KD of Zn1 was 7.9 × 10⁻⁸ M. Although Zn2, which has a secondary role in DNA break binding (33), had a lower affinity for poly(dC) than Zn1, the KD of Zn2 was still 1.6 × 10⁻⁷ M.
Consistent with the binding assay results (Fig. 2B), Zn1 and Zn2 showed negligible affinity for poly(rC). It has been reported that the binding affinity of PARP-1 toward DNA breaks is about 2.6 × 10⁻⁹ to 1.1 × 10⁻¹⁰ M. Although the affinity of Zn1 and Zn2 for poly(dC) is about 30- to 60-fold lower than that of PARP-1 for DNA breaks, these results suggest that Zn1 and Zn2 have sufficient affinity to establish stable complexes with ssDNAs. Thus, ssDNAs, which are formed at DNA break ends and at the loop region of DNA stem loops, can be binding targets of Zn1 and Zn2.

DsDB Domain-Although we have demonstrated that Zn1 and Zn2 bind to ssDNAs, biochemical studies of PARP-1 have indicated that ssDNAs serve only as weak activators of PARP-1. One of the explanations for this lack of full activation of PARP-1 by ssDNAs is that PARP-1 can distinguish whether it binds to free ssDNAs or to ssDNAs formed in dsDNA structures such as DNA breaks. Previously, Ikejima et al. suggested the presence of a dsDNA binding domain in the 89-kDa fragment of PARP-1 that is involved in the activation of PARP-1. Thus, we have studied whether such a domain indeed existed by preparing dsDNA with a minimal length of loop (dumbDNA) (Fig. 3A) and by carrying out DNA mobility assays.

FIGURE 2. Analysis of Zn1, Zn2, and Zn3 binding to ssDNA and ssRNA probes. A, binding assays were carried out with 32P-labeled poly(dA), poly(T), or poly(dC) (30 pmol) and Zn1, Zn2, or Zn3. Retarded bands are indicated by arrowheads. N, no protein. B, instead of ssDNA probes, 32P-labeled poly(rA), poly(U), and poly(rC) (30 pmol) were used. C, biotinylated poly(dC) or poly(U) (0.25 pmol) and 32P-labeled poly(rA), poly(U), or poly(dC) (0.25 pmol) were used. After incubation of a biotinylated and a 32P-labeled oligonucleic acid with PARP-1, Zn3, or the DBD domain, the biotinylated oligonucleic acid was pulled down by avidin-Sepharose. The 32P-labeled oligonucleic acid was analyzed using urea-10% polyacrylamide gels.

Zn1, Zn2, the DBD domain, Zn3, and the WGR-CAT fragment did not show any significant binding to dumbDNA (Fig. 3A). On the other hand, incubation of dumbDNA with the DsDB-WGR-CAT fragment, which contained a highly basic 60-residue peptide between the BRCT and the WGR domains, resulted in the formation of a smear that migrated near the origin. The DsDB-WGR-CAT fragment therefore bound to dumbDNA. Because dumbDNA contained small terminal loops, we then used dumbDNA that lacked these loops. Even in the absence of the loops, the DsDB-WGR-CAT fragment still bound dumbDNA (Fig. 3B). On the other hand, the DsDB-WGR-CAT fragment did not show any specific binding to ssDNA (data not shown), and double-stranded RNA does not appear to be a preferred binding target for the DsDB domain (Fig. 3C). Thus, these results suggest that the dsDNA binding domain indeed exists, as Ikejima et al. predicted. We referred to this domain as the DsDB domain.

Activation of PARP-1 by dumbDNA-(AT)-Binding of Zn1 and Zn2 of the DBD domain to ssDNA and binding of the DsDB domain to dsDNA imply that a DNA structure composed of a junction between ssDNA and dsDNA could activate PARP-1. Thus, we prepared dumbDNA with a T-loop (dumbDNA-T) and dumbDNA containing an AT-rich region (dumbDNA-(AT)), which can be destabilized at 37°C, thereby producing a single-stranded region (Fig. 4A).
Although the DBD domain did not show significant binding to dumbDNA, it had the ability to bind to dumbDNA-T, dumbDNA-(AT), and dumbDNA-T(AT), which contained both the T-loop and the AT-rich region (Fig. 4B). Consistent with the ability of the DBD domain to bind dumbDNAs, the poly(ADP-ribosyl)ation activity of PARP-1 was promoted, particularly by dumbDNA-(AT) and dumbDNA-T(AT) (Fig. 4C). Although DNA breaks were still the best PARP-1 activators, the promotion of automodified PARP-1 synthesis by these dumbDNAs suggests that activation of PARP-1 occurs through binding of Zn1 and Zn2 to ssDNA and of the DsDB domain to dsDNA at the junction between ssDNA and dsDNA.

WGR Domain and RNA-dependent Activation of PARP-1-We then characterized the WGR domain, which is located at the N terminus of the DsDB domain and has been suggested to be involved in the regulation of the CAT domain. Because this domain contains a WGR consensus peptide sequence that is found in RNA-metabolizing enzymes, we investigated whether the WGR domain has the ability to interact with RNA by employing the WGR-CAT fragment and the CAT domain. Results of binding assays with ssRNA and ssDNA probes demonstrated that the WGR-CAT fragment is in fact capable of binding to poly(rA), poly(U), and poly(rC), whereas ssDNAs are not preferred binding targets of the WGR-CAT fragment (Fig. 5A). Because the CAT domain alone did not show significant binding to ssRNA, these results suggest that the WGR domain has the ability to bind to ssRNA. Furthermore, as shown in Fig. 5B, incubation of ssRNA with the WGR-CAT fragment led to activation of ADP-ribose polymer synthesis, whereas ssDNA induced less activation than ssRNA, indicating that the CAT domain can be activated through binding of the WGR domain to ssRNA. The WGR domain is thus involved in RNA-dependent activation of PARP-1.

Negative Regulation of ADP-ribose Polymer Synthesis by the DsDB Domain-As shown in Fig. 5, C and D, poly(rA) activates ADP-ribose polymer synthesis when poly(ADP-ribosyl)ation assays are carried out with PARP-1, the 89-kDa fragment, the DsDB-WGR-CAT fragment, and the WGR-CAT fragment. Typically, automodified PARP-1, the 89-kDa fragment, and the DsDB-WGR-CAT fragment, which migrated around their original molecular masses, were produced due to the attachment of short poly(ADP-ribose) polymers to these proteins. The DsDB domain thus inhibited the formation of long ADP-ribose polymers. In the poly(ADP-ribosyl)ation assay with poly(rA), the DsDB domain remained unbound to dsDNA. Thus, we then investigated the effect on ADP-ribose polymer synthesis of the DsDB domain binding to dsDNA by employing dumbDNA. As shown in Fig. 5E, in the presence of dumbDNA, an automodified DsDB-WGR-CAT fragment that migrated near the origin was in fact produced, allowing the formation of long ADP-ribose polymers by binding of the DsDB domain to dsDNA. These results suggest that the DsDB domain controls ADP-ribose polymer synthesis; this domain negatively regulates synthesis of ADP-ribose polymers when it does not bind to dsDNA, thereby limiting PARP-1 to the synthesis of only short ADP-ribose polymers, whereas binding of the domain to dsDNA allows PARP-1 to produce long ADP-ribose polymers.

Displacement of ssDNA and dsDNA from Zn2 and the DsDB Domain, Respectively, by ADP-ribose Polymers-The last step in the PARP-1 activation-inactivation processes is dissociation of automodified PARP-1 from DNA breaks. The mechanism of such dissociation is not well known.
Because our current results suggest that PARP-1 is retained at DNA breaks through binding of its functional domains to ssDNA and dsDNA, we have studied the effect of ADP-ribose polymers on the binding of Zn1, Zn2, Zn3, the DsDB domain, and the WGR domain to their target oligonucleic acids. We first tested whether PARP-1 functional domains bind to ADP-ribose polymers. Results shown in Fig. 6A indicate that Zn2, Zn3, the DsDB domain, and the WGR domain can indeed bind to ADP-ribose polymers. Therefore, we have carried out displacement assays with these functional domains. In the case of Zn2, poly(dA)-Zn2 complexes were preformed, and then ADP-ribose polymers were added to the reaction (Fig. 6B). Such an addition led to the dissolution of the preformed complexes (Fig. 6B), suggesting that ADP-ribose polymers displace poly(dA) from Zn2. Because Zn2 binds to ADP-ribose polymers that are more than 50 residues in length (Fig. 6C), this dissociation could occur only following long ADP-ribose polymer formation. Preformed complexes of Zn3 or the WGR-CAT fragment with poly(rC) were not well dissolved by the polymers (Fig. 6B). Thus, binding of Zn3 or the WGR domain is unlikely to be affected by ADP-ribose polymer formation. In the case of the DsDB domain, preformed dsDNA-DsDB domain complexes were dissolved by the addition of ADP-ribose polymers (Fig. 6D). Distinct from Zn2, however, the DsDB domain has a preference for ADP-ribose polymers of around 30 ADP-ribose residues in length (Fig. 6C). These results demonstrate that one of the mechanisms involved in the dissociation of PARP-1 from DNA breaks is the displacement of ssDNA and dsDNA from Zn2 and the DsDB domain, respectively, by ADP-ribose polymers.

FIGURE 4 legend (beginning truncated): "… an AT-rich region (dumbDNA-(AT)), or both a T-loop and an AT-rich region (dumbDNA-T(AT)) were prepared. The Tm of the AT-rich region is 24°C. Thus, the region is expected to be destabilized at 37°C. B, DNA mobility assays with ethidium bromide-1.5% agarose gels were carried out with the DBD domain. Various dumbDNAs (10 pmol) were used. XC, xylene cyanol. C, poly(ADP-ribosyl)ation assays were carried out with various dumbDNAs (800 fmol) and PARP-1 in the presence of NAD. For assays with DNA breaks, DSBs corresponding to 800 fmol were added. Automodified PARP-1 was fractionated with SDS-10% polyacrylamide gels. Labeled PARP-1 was visualized by autoradiography, and 32P activity was quantified by a Typhoon scanner. Standard deviations are shown."

DISCUSSION

PARP-1 is one of the most highly investigated enzymes. Despite that fact, the underlying molecular mechanisms that control PARP-1 activity have not been fully elucidated due to the lack of critical information regarding the characteristics of PARP-1 domains. We thus performed the current study to obtain such information and, as summarized in Fig. 7A, our results suggest that: 1) Zn1 and Zn2 have the ability to bind to ssDNA; 2) the DsDB domain is involved in dsDNA binding; 3) the unbound form of the DsDB domain inhibits the synthesis of long ADP-ribose polymers; 4) Zn1, Zn2, and the DsDB domain play a role in DNA break- and DNA loop-dependent activation of PARP-1; 5) the WGR domain has the ability to bind to ssRNA, leading to RNA-dependent activation of PARP-1; and 6) ADP-ribose polymers displace ssDNA and dsDNA from Zn2 and the DsDB domain, respectively. Based on these results and on reports published by others, we are proposing two models, i.e. models for Zn1 and Zn2 binding to DNA breaks and DNA loops and for PARP-1 activity regulation.
Model for Zn1 and Zn2 Binding to DNA Breaks and DNA Loops-The DBD domain binds to a variety of DNA breaks, including blunt ends, 5′-protruding and 3′-protruding ends, DNA nicks, DNA gaps, DNA stem loops, and dsDNA at the linker region of nucleosomes (2, 11, 24, 29, 39-41). Recognition and binding of such diverse types of DNA structures are carried out by Zn1 and Zn2, although the underlying mechanism of this binding remains unclear. Because we found that Zn1 and Zn2 have the ability to bind to ssDNA (Fig. 2), we propose the ssDNA binding model. As illustrated in Fig. 7B (SSB), when Zn1 binds to one DNA strand at a SSB, the second zinc finger, Zn2, can bind to the complementary DNA strand. As Zn1 is tandemly connected to Zn2 by only about 2 nm of peptide (20 peptide residues, corresponding to about 5 bp), binding of Zn1 and Zn2 to DNA strands in the manner illustrated in Fig. 7B would create DNA bending. Binding of Zn1 and Zn2 to DSBs could occur in a similar manner as binding to SSBs (Fig. 7B). Furthermore, this model can predict that the DBD domain has a preference for either 3′-protruding or 5′-protruding ends, as Zn1 could more efficiently recognize ssDNA protruding from DSB ends. Indeed, it has been reported that 3′-protruding ends serve as better activators of PARP-1 and that the DBD domain has a higher binding affinity toward 3′-protruding ends than blunt or 5′-protruding ends. Thus, 3′-ssDNA created at the DSB ends could be a binding target of Zn1. These ssDNA binding models are also consistent with the notion that the DBD domain recognizes a junction between ssDNA and dsDNA (Fig. 4). Any DNA containing such a junction could be a binding target for the DBD domain. ssDNA at DNA stem loops and internal loops may thus serve as binding targets for Zn1 and Zn2 (Fig. 7B).

Model for PARP-1 Activity Regulation, Basal Status-It has been reported that PARP-1 forms functional homodimers. Several domains, including the DBD and DsDB domain, have been identified as homodimerization domains by biochemical investigations. However, structural analysis of Zn3 reveals that the zinc ribbon fold of Zn3 is potentially involved in PARP-1-PARP-1 homodimerization, although a recent report suggests that such homodimerization occurs in the crystal lattice but not in solution. Langelier et al. thus suggest that homodimerization of PARP-1 through the zinc ribbon fold occurs upon PARP-1 binding to DNA breaks or when the concentration of PARP-1 is high enough to allow the formation of Zn3 dimers. Pion et al. indeed found that the DBD domain is able to bind to two SSBs. However, the DBD domain that was used in their study did not contain Zn3. Thus, it is not yet clear whether multiple PARP-1 domains are involved in PARP-1-PARP-1 homodimerization.

FIGURE 6. Binding of ADP-ribose polymer to PARP-1 functional domains. A, binding assays were carried out with 32P-labeled ADP-ribose polymers (30 pmol) and PARP-1 domains. N, no protein; PAR, ADP-ribose polymers. B, Zn2 (12 pmol) was preincubated with 32P-labeled poly(dA) (30 pmol) for 15 min at room temperature. Alternatively, Zn3 and the WGR-CAT fragment (12 pmol) were incubated with 32P-labeled poly(rC) (30 pmol). Then, ADP-ribose polymers were added. The resulting samples were analyzed by 7.5% native acrylamide gel electrophoresis. C, binding assays were carried out with 32P-labeled ADP-ribose polymers (30 pmol) with either FLAG-tagged Zn1 or FLAG-tagged Zn2 (20 pmol). Then, these fingers were pulled down by anti-FLAG M2 affinity gel. After washing of the precipitates, zinc fingers were denatured, and 32P-labeled ADP-ribose polymers were analyzed using urea-10% polyacrylamide gels. When the FLAG-tagged WGR-CAT or DsDB-WGR-CAT fragment was used, 20 pmol of poly(dC) was included in the reaction mixture, and fractionation of 32P-labeled ADP-ribose polymers was carried out by using urea-15% polyacrylamide gels. For urea-10% polyacrylamide gels, xylene cyanol (XC) and bromphenol blue (BPB) migrated around 50 and 10 ADP-ribose residues in length. For urea-15% polyacrylamide gels, xylene cyanol and bromphenol blue migrated around 40 and 8 ADP-ribose residues in length. D, the DsDB-WGR-CAT fragment was incubated with dumbDNA (10 pmol) for 15 min at 37°C, and then ADP-ribose polymers were added. After a 15-min incubation at 37°C, DNA mobility assays were carried out on ethidium bromide-1.5% agarose gels.

In our model, we illustrated two PARP-1 molecules in a head-to-tail arrangement in a manner proposed by Langelier et al. (Fig. 7C) because cross-binding of two ssRNA molecules by Zn3 suggests the formation of Zn3-Zn3 homodimers (Fig. 2). However, the two PARP-1 molecules are separated, as there is no conclusive evidence for the existence of preformed PARP-1-PARP-1 homodimers. In the basal status, the catalytic activity of PARP-1 is tightly regulated, as it only shows extremely weak poly(ADP-ribosyl)ation activity in the absence of its activators (e.g. Fig. 5C, lane 1). It appears that one or more of the PARP-1 domains thus directly or indirectly suppress ADP-ribose polymer formation. The DsDB domain, which was identified in this study, could be involved in such suppression by inhibiting the synthesis of ADP-ribose polymers (Fig. 7C). The WGR domain could also be involved in the suppression, as it has a role in the regulation of PARP-1 catalytic activity. Other than its involvement in PARP-1-PARP-1 homodimerization and chromatin compaction, the functional roles of the Zn3 finger in the basal status of PARP-1 are not yet clear. However, it is plausible that Zn3 has a role in the retention of PARP-1 on nascent RNA, as Zn3 can bind to ssRNA (Fig. 2). Furthermore, Fakan et al. previously observed that PARP-1 binds to RNA stem loops, Tulin and Spradling have reported accumulation of PARP-1 at Drosophila puffs upon activation of transcription, and it has been demonstrated that PARP-1 is concentrated in transcriptionally active nucleoli. In addition, the KD of PARP-1 for RNA stem loops is 1.0 × 10⁻¹⁰ M, which allows the formation of highly stable complexes with RNA. Thus, in its basal status, a fraction of PARP-1 in the nucleus may accumulate at actively transcribed genes through Zn3 binding to nascent RNA.

RNA-dependent Activation of PARP-1-Binding of the WGR domain to ssRNA (Fig. 5) leads to RNA-dependent activation of PARP-1 (Fig. 7C). This activation is unlikely to cause the formation of heavily automodified PARP-1, as the DsDB domain could suppress long ADP-ribose polymer formation (Fig. 5). The functional role of this weak poly(ADP-ribosyl)ation of PARP-1 is, however, not well understood. Perhaps this activation is required so that PARP-1 can reach a form that can be efficiently automodified by ADP-ribose polymers upon its activation by DNA breaks or DNA loops.
Indeed, the first ADP-ribose residue needs to be attached to glutamic acid, aspartic acid, or lysine, which are ADP-ribose polymer attachment sites (33). This weak activation may thus be involved in PARP-1 priming. Alternatively, RNA-dependent activation of PARP-1 could have a role in the regulation of nascent RNA elongation, as suppression of RNA synthesis occurs upon binding of PARP-1 to RNA stem loops. Such suppression is related to the negative transcription elongation factors, negative elongation factor (NELF) and 5,6-dichloro-1-β-D-ribofuranosylbenzimidazole sensitivity inducing factor (DSIF), which are now known as critical transcriptional regulators involved in divergent transcription. The functional role of this activation still remains to be elucidated, and the role of Zn3, which has the ability to bind to ssRNA, in the RNA-dependent activation is still not clear. However, it is evident that this activation plays critical roles in nascent RNA synthesis, as reports suggest the presence of a link between transcription and PARP-1.

Activation of PARP-1 by DNA Breaks and DNA Loops-Following the models shown in Fig. 7B, we illustrated that Zn1 and Zn2 bind to ssDNAs at a DNA break site, that the DsDB domain recognizes dsDNA, and that PARP-1 forms a homodimer through the zinc ribbon fold (Fig. 7C; the illustration is based on PARP-1 binding to a SSB). Suppression of ADP-ribose polymer synthesis by the DsDB domain is released by its binding to dsDNA (Fig. 5E), allowing automodification of PARP-1 with long ADP-ribose polymers. Because the homodimerization of PARP-1 brings the CAT domain close to the automodification site, the efficiency of PARP-1 automodification can be significantly promoted. In our model, we illustrated that the DsDB domain of the second PARP-1 also binds to dsDNA and that the two PARP-1 molecules interact through the BRCT domain, which has been suggested to have a role in PARP-1-PARP-1 homodimer formation, whereas the second pair of zinc fingers remains unbound. Although it is not known whether this pair of zinc fingers has functional roles, it may be involved in cross-binding of two DSB ends if a PARP-1-PARP-1 homodimer is formed at the DSB. Audebert et al. and Wang et al. in fact suggest that the efficiency of DSB repair is promoted through homodimerization of PARP-1. A similar model could be applied for PARP-1 activation by its binding to DNA loops or DNA structures containing a junction of ssDNA and dsDNA, e.g. internal loops.

FIGURE 7. A model for the binding of the DBD domain to DNA breaks and DNA loops and the regulatory mechanisms of PARP-1 activity. A, a summary of our results is shown. Zn1 and Zn2 bind to ssDNA. Zn3 has the ability to bind to ssRNA in addition to ssDNA. The DsDB domain is involved in recognition of dsDNA. When the domain does not bind to dsDNA, it suppresses ADP-ribose polymer synthesis. By binding of this domain to dsDNA, the suppression is removed. The WGR domain binds to ssRNA, resulting in activation of the CAT domain. Upon the formation of ADP-ribose polymers, ssDNA and dsDNA are displaced from Zn2 and the DsDB domain, respectively, by the polymers. B, a model for the DBD domain binding to DNA breaks and DNA loops is illustrated. Zn1 binds to one DNA strand, and Zn2 holds the complementary strand of SSBs. Because Zn1 and Zn2 are connected by a peptide of only 20 residues, binding of both fingers to DNA strands could bend DNA. In a similar manner, Zn1 and Zn2 can bind to DSBs, DNA internal loops, and DNA terminal loops. C, in the basal status, poly(ADP-ribosyl)ation activity of PARP-1 is suppressed by other PARP-1 domains. The DsDB domain is one of such domains, which has a role in inhibition of long ADP-ribose polymer formation. Two PARP-1 molecules are illustrated. Binding of ssRNA to the WGR domain activates the CAT domain. However, only short polymers are produced due to the suppression of ADP-ribose polymer synthesis by the DsDB domain. When Zn1 and Zn2 bind to a SSB, PARP-1 forms a functional homodimer. PARP-1 is aligned with the DNA strand through binding of the DsDB domain to dsDNA, leading to the activation of the CAT domain. Binding of the DsDB domain to dsDNA releases the suppression of ADP-ribose polymer formation, allowing the CAT domain to produce long ADP-ribose polymers. PARP-1 could bind to DSBs and to DNA loops in a similar manner as to SSBs. ADP-ribose polymers then displace ssDNA and dsDNA from Zn2 and the DsDB domain, respectively, resulting in dissociation of PARP-1 from DNA breaks. ADP-ribose polymers are then degraded by poly(ADP-ribose) glycohydrolase (PARG).

Activation of PARP-1 by this type of DNA is likely equivalent to the DNA damage-independent PARP-1 activation, which occurs upon binding of PARP-1 to the linker DNA of nucleosomes and perhaps at transcription promoters. It has been demonstrated that PARP-1 is recruited to transcriptional promoters through interaction with transcription co-activators, e.g. NF-κB. Although it is not known whether recruited PARP-1 is indeed activated, recent studies suggest that the transcription of a subset of genes is promoted, whereas that of others is suppressed, by ADP-ribose polymer formation, thereby indicating that DNA structures that can activate PARP-1 are indeed formed at transcriptional promoters. Such structures could be produced at other regions of chromatin, as PARP-1 can be activated by nucleosome linker DNA, leading to chromatin remodeling. A recent report in fact suggests that a chromatin-remodeling factor, ALC1, which has a role in transcription and DNA repair, is recruited to the remodeling site through its binding to ADP-ribose polymers. This observation also implies that chromatin remodeling per se occurs as a downstream event of PARP-1 activation, thereby suggesting that DNA structures that activate PARP-1 are produced by other mechanisms. Although it has been demonstrated that activation of PARP-1 by DNA breaks triggers DNA damage-dependent chromatin remodeling through recruitment of ALC1 to DNA damage sites, how the DNA structures that are involved in DNA break-independent activation of PARP-1 are formed is not known. Perhaps such structures could be formed or exposed following the progression of RNA polymerases, as accumulation of PARP-1 at actively transcribed regions of chromatin has been reported. Once PARP-1 is activated, histones, in addition to PARP-1 itself, are poly(ADP-ribosyl)ated. ADP-ribose polymer binding factors, e.g. ALC1, and DNA repair factors, XRCC1 and aprataxin and PNK-like factor (APLF), could thus be recruited to the polymers produced on PARP-1 or histones. If these factors bind to automodified PARP-1, they could be carried away from chromatin-remodeling sites upon dissociation of automodified PARP-1 from DNA. If binding of these factors to ADP-ribose polymers formed on histones occurs, they would be able to remain at the site.
Nevertheless, it is not known how these factors exert their function after their binding to ADP-ribose polymers and how PARP-1, particularly homodimerized PARP-1, plays a regulatory role in transcription and DNA repair in the chromatin context.

Dissociation of Automodified PARP-1 from DNA Breaks-Automodified PARP-1 then dissociates from DNA breaks or DNA loops. Displacement of PARP-1 domains from ssDNA and dsDNA could explain the mechanism of such dissociation (Fig. 7C). Our results suggest that binding of ADP-ribose polymers to the DsDB domain leads to displacement of the domain from dsDNA (Fig. 6). Zn2 is also displaced from ssDNA by the polymers. Because the DsDB domain binds to shorter lengths of ADP-ribose polymers (30 ADP-ribose residues) than Zn2 (longer than 50 ADP-ribose residues), dissociation of the DsDB domain from dsDNA might occur prior to the dissociation of Zn2 from ssDNA. Although it has been assumed that electrostatic repulsion between DNA and ADP-ribose polymers is involved in the dissociation, the dissociation mechanism of automodified PARP-1 from DNA breaks or DNA loops could thus be explained by our model. Then, ADP-ribose polymers are degraded by poly(ADP-ribose) glycohydrolase, bringing PARP-1 back to its basal status. In this study, we have investigated the mechanisms of PARP-1 activation and found that five out of seven PARP-1 functional domains are involved in oligonucleic acid binding. This suggests that PARP-1 initiates ADP-ribose polymer synthesis depending on the status of neighboring oligonucleic acids. Although PARP-1 is an enzyme that catalyzes posttranslational protein modification by ADP-ribose polymers, the primary roles of PARP-1 may be more related to its ability to survey unique oligonucleic acid structures to regulate various fundamental cellular processes, including DNA repair, transcription, and chromatin remodeling (4-14, 60).
//Small wrapper class
private static final class SetupCard {

    private final View mSetupCard;
    private NyaaFansubGroup mNyaaFansubGroup;
    private CheckBox res480CheckBox;
    private CheckBox res720CheckBox;
    private CheckBox res1080CheckBox;
    private View mResolutionsContainer;
    private EditText mEpisodeEditText;
    private SparseArrayCompat<CheckBox> mResCheckboxSparseArray;
    private Spinner modeSpinner;
    private boolean mResAvailable = false;
    private NyaaEntry.Resolution mDefaultRes = null;

    public SetupCard(View setupCardView) {
        this.mSetupCard = setupCardView;
    }

    public void setDefaultRes(NyaaEntry.Resolution mDefaultRes) {
        this.mDefaultRes = mDefaultRes;
    }

    public void initViews() {
        mEpisodeEditText = (EditText) mSetupCard.findViewById(R.id.libs_et_episode);
        mSetupCard.findViewById(R.id.libs_btn_firstep).setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                mEpisodeEditText.setText("1");
            }
        });
        mSetupCard.findViewById(R.id.libs_btn_currep).setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                final int ep = mNyaaFansubGroup.getLatestEpisode();
                mEpisodeEditText.setText(String.valueOf(ep));
            }
        });
        mSetupCard.findViewById(R.id.libs_btn_nextep).setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                final int ep = mNyaaFansubGroup.getLatestEpisode();
                mEpisodeEditText.setText(String.valueOf(ep + 1));
            }
        });

        //Checkboxes
        mResolutionsContainer = mSetupCard.findViewById(R.id.libs_ll_mid);
        res480CheckBox = (CheckBox) mResolutionsContainer.findViewById(R.id.libs_btn_480);
        res720CheckBox = (CheckBox) mResolutionsContainer.findViewById(R.id.libs_btn_720);
        res1080CheckBox = (CheckBox) mResolutionsContainer.findViewById(R.id.libs_btn_1080);
        mResCheckboxSparseArray = new SparseArrayCompat<>(3);
        mResCheckboxSparseArray.append(NyaaEntry.Resolution.R480.ordinal(), res480CheckBox);
        mResCheckboxSparseArray.append(NyaaEntry.Resolution.R720.ordinal(), res720CheckBox);
        mResCheckboxSparseArray.append(NyaaEntry.Resolution.R1080.ordinal(), res1080CheckBox);

        modeSpinner = (Spinner) mSetupCard.findViewById(R.id.libs_spinner_modes);
    }

    public int getVisibility() {
        return mSetupCard.getVisibility();
    }

    public void setVisibility(int visibility) {
        mSetupCard.setVisibility(visibility);
    }

    public void setData(NyaaFansubGroup nyaaFansubGroup) {
        this.mNyaaFansubGroup = nyaaFansubGroup;
    }

    //by updating views upon list item clicked, we do not need to reset their states fully due to overriding.
    public void updateViews() {
        if (mNyaaFansubGroup != null) {
            mEpisodeEditText.setText(String.valueOf(mNyaaFansubGroup.getLatestEpisode()));
            //if(mNyaaFansubGroup.getResolutions())
            List<NyaaEntry.Resolution> resolutions = mNyaaFansubGroup.getResolutions();
            mResAvailable = false;
            if (resolutions.size() > 0) {
                for (final NyaaEntry.Resolution resolution : mNyaaFansubGroup.getResolutions()) {
                    if (resolution != null) {
                        final CheckBox checkBox = mResCheckboxSparseArray.get(resolution.ordinal());
                        if (checkBox != null) {
                            mResAvailable = true;
                            checkBox.setVisibility(View.VISIBLE);
                            //Auto checked if default res is set
                            if (mDefaultRes != null && mDefaultRes == resolution) {
                                checkBox.setChecked(true);
                            } else if (checkBox.isChecked()) {
                                checkBox.setChecked(false);
                            }
                        }
                    }
                }
            }
            if (!mResAvailable || resolutions.size() == 0) {
                mResolutionsContainer.setVisibility(View.GONE);
            } else if (mResolutionsContainer.getVisibility() != View.VISIBLE) {
                mResolutionsContainer.setVisibility(View.VISIBLE);
            }
        }
    }

    public NyaaFansubGroup applyFinalData(Context context) {
        final NyaaFansubGroup finalNyaaFansubGroup = new NyaaFansubGroup(mNyaaFansubGroup.getGroupName());
        if (mEpisodeEditText.getText() != null && mEpisodeEditText.length() > 0) {
            finalNyaaFansubGroup.setLatestEpisode(Integer.parseInt(mEpisodeEditText.getText().toString()));
        } else {
            Toast.makeText(context, R.string.error_noepsentered, Toast.LENGTH_LONG).show();
            return null;
        }
        if (mResAvailable) //only applicable if any res is available to be selected
        {
            boolean selectAny = false;
            for (NyaaEntry.Resolution resolution : NyaaEntry.Resolution.values()) {
                final CheckBox checkBox = mResCheckboxSparseArray.get(resolution.ordinal());
                if (checkBox != null) {
                    if (checkBox.isChecked()) {
                        selectAny = checkBox.isChecked();
                        finalNyaaFansubGroup.getResolutions().add(resolution);
                    }
                }
            }
            if (!selectAny) {
                Toast.makeText(context, R.string.error_noresselected, Toast.LENGTH_LONG).show();
                return null;
            }
        }
        finalNyaaFansubGroup.setModes(modeSpinner.getSelectedItemPosition());
        finalNyaaFansubGroup.setSeriesTitle(mNyaaFansubGroup.getSeriesTitle());
        finalNyaaFansubGroup.setId(mNyaaFansubGroup.getId());
        finalNyaaFansubGroup.setTrustCategory(mNyaaFansubGroup.getTrustCategory());
        //TODO:return modes as well
        return finalNyaaFansubGroup;
    }
}
The Role of Chemokines in Shaping the Balance Between CD4+ T Cell Subsets and Its Therapeutic Implications in Autoimmune and Cancer Diseases

Chemokines are key activators of adhesion molecules and also drivers of leukocyte migration to inflammatory sites, and are therefore mostly considered proinflammatory mediators. Many studies, including ours, imply that targeting the function of several key chemokines, but not many others, could effectively suppress inflammatory responses and inflammatory autoimmunity. Along with this, a single chemokine named CXCL10 could be used to induce antitumor immunity, and thereby suppress myeloma. Our working hypothesis is that some chemokines differ from others in that, aside from being chemoattractants for leukocytes and effective activators of adhesion receptors, they possess additional biological properties that make them driver chemokines. We came up with this notion when studying the interplay between CXCR4 and CXCL12 and between CXCR3 and its three ligands: CXCL9, CXCL10, and CXCL11. The current mini-review focuses on these ligands and their biological properties. First, we elaborate on the role of cytokines in directing the polarization of effector and regulatory T cell subsets and the plasticity of this process. Then, we extend this notion to chemokines while focusing on CXCL12 and the CXCR3 ligands. Finally, we elaborate on the potential clinical implications of these studies for therapy of autoimmunity, graft-versus-host disease, and cancer.

the polarization of effector TH1/TH17 cells into IL-10 producing Tr1-like cells. The current review focuses on these findings and their biological significance.

CYTOKINES THAT REGULATE THE BALANCE BETWEEN CD4+ T CELL SUBSETS AS DRIVERS AND REGULATORS OF INFLAMMATION

Cytokines are involved in the induction of inflammatory responses by two different, yet complementary, mechanisms: the first includes a direct effect aimed at destroying invading microbes. Two cytokines that have a major role in this function are tumor necrosis factor alpha (TNF-α) and IL-1. Consequently, during inflammatory autoimmunity, they are thought to be key mediators of the harmful anti-self destructive response and are, therefore, major targets for therapy of these diseases. The other mechanism includes directing the functional development (polarization) of CD4+ T cell subsets, and thereby the dynamics of the inflammatory process. The notion that the cytokine milieu at the site of inflammation drives T-cell polarization came from early studies showing that while IL-12 skews the TH1/TH2 balance toward IFN-γ high IL-4 low TNF-producing TH1 cells, IL-4 shifts this balance toward IFN-γ low IL-4 high TH2 cells, capable of restraining the inflammatory activities of TH1 cells. Along with this notion, Leonard et al. showed that blocking IL-12 inhibits experimental autoimmune encephalomyelitis (EAE) by shifting the TH1/TH2 balance toward TH2. Another cytokine that has been associated with shifting the TH1/TH2 balance toward TH1 is IL-18 (IGIF). Following this publication, we observed that targeted neutralization of this cytokine suppresses autoimmunity by shifting the TH1/TH2 balance toward TH2, and also that targeted expression of its natural inhibitor, IL-18 binding protein, also suppresses the disease by the same mechanism. A major concern in applying therapies aiming at shifting the TH1/TH2 balance toward TH2 is that the latter are also a subtype of effector T cells that promote IL-4-dependent immunity.
Thus, shifting anti-self immunity from TH1 to TH2 might result in an unexpected form of self-destructive immunity. In 2005, IL-17-expressing T cells (TH17 cells) were proposed to be a third, independent TH-cell lineage with a role in inflammatory and autoimmune diseases. The key cytokines that drive the polarization of these cells vary between rodents and humans. In mice, IL-6 together with transforming growth factor-beta (TGF-β) is likely to induce TH17 cells at early stages of their polarization (together with IL-21), followed by stabilization by IL-23, whereas in humans the combination of IL-1 and IL-6, but not TGF-β, is the key driver of TH17 polarization. More recently, it has been proposed that TH17 cells may also hold anti-inflammatory properties due to potential expression of CD39 and CD73 ectonucleotidases, leading to adenosine release and the subsequent suppression of CD4+ and CD8+ T cell effector functions. The activity of effector T cells is tightly regulated by regulatory T cells, which fall into two major subtypes; the first expresses the master regulator forkhead box protein 3 (FOXP3), which has a major role in directing their biological properties. They suppress the activities of effector T cells and of inflammatory macrophages by various mechanisms, thus maintaining self-tolerance. Aside from nTregs, FOXP3-positive T cells could be polarized from FOXP3-negative T cells (in vitro) in the presence of transforming growth factor (TGF-β). In 1997, Maria Grazia Roncarolo and her coworkers discovered the reciprocal FOXP3-negative IL-10 high-producing Tr1 cells, which also play a major part in the maintenance of self-tolerance. These cells can be polarized in vitro either by IL-10 + IL-2 or by the combination of IL-10 + rapamycin, and in humans by IL-10 + IFN-α.

CYTOKINES AND THE PLASTICITY OF CD4+ T CELL SUBSETS

The first evidence for potential plasticity in CD4+ T cell subsets was provided by Anderson et al. in 2007, showing that during chronic cutaneous leishmaniasis TH1 cells may gain a Tr1-like phenotype and largely produce IL-10. It is not known whether these IL-10 high cells are indeed Tr1 cells or just IL-10 high-producing CD4+ T cells; at that time, biomarkers that could distinguish Tr1 cells from other IL-10 high CD4+ T cells had not yet been identified. It was later shown that IL-27, together with TGF-β, could repolarize TH1 cells into Tr1 cells. As for FOXP3+ Tregs, Chen et al. have shown that coculture with TGF-β may transform FOXP3− CD4+ T cells into FOXP3+ Tregs, also known as induced Tregs (iTregs). The stability of iTregs in vivo is still questionable. More recent studies focused on the plasticity between TH17 cells and FOXP3+ Tregs. It appears that expression of Foxp3 by iTreg cells or IL-17 by TH17 cells may not be stable and that there is a great degree of flexibility in their differentiation options, as they emerge from an overlapping developmental program. Much of the attention has been devoted to exploring the transition from TH17 to iTregs, though a very recent study showed that the inflammatory environment in autoimmune arthritis induces conversion of a subset of Foxp3+ T cells into interleukin-17-producing cells that contribute to disease pathogenesis. These findings should be taken into consideration in designing future therapies aiming at redirecting the polarization of T cell subsets.

THE ROLE OF CHEMOKINES IN DRIVING THE FUNCTIONAL DEVELOPMENT (POLARIZATION) OF CD4+ T CELL SUBSETS: ARE THERE "DRIVER" CHEMOKINES?
Chemokines are small (~8-14 kDa), structurally related chemotactic cytokines that regulate cell trafficking through interactions with specific seven-transmembrane G-protein-coupled receptors (GPCRs). One of the important features of GPCRs is their ability to transmit diverse signaling cascades upon binding different ligands. This large family of related molecules is classified on the basis of structural properties, regarding the number and position of conserved cysteine residues, to give two major (CXC and CC) and two minor (C and CX3C) chemokine subfamilies (Table 1). Most of the attention has been drawn to the key role of these chemotactic mediators in promoting the lymphocyte migration processes critical for the onset of inflammation, with special interest in inflammatory autoimmune diseases. Reviewing the results of the many studies in which single chemokines or their receptors were targeted reveals a major paradox: even though most of the 50 known chemokines can direct the migration of the same leukocytes, targeted neutralization of only one chemokine, such as CCL2, CCL3, CCL5, or CXCL10, is sufficient to suppress the entire inflammatory process (10). Therefore, the question that begs an answer is why other chemokines that also attract the same type of leukocyte to the autoimmune site do not compensate for the absence of this single chemokine. In addition, it is also not clear why neutralization of as few as eight to 10 of the 50 different chemokines can effectively suppress the attacks in autoimmune inflammatory diseases (10). Hence, what are the attributes of this limited number of chemokines that make them so important in the regulation of inflammatory processes? A partial explanation for this paradox could be that these chemokines might have other biological actions that are associated with these autoimmune inflammatory diseases. This includes directing the mobilization of various cell types from the bone marrow to the blood and later their colonization of the inflammatory site, induction of selective migration to specific organs, directing the development of cell subtypes (such as CD4+ T cell polarization), or potentiation of innate immune cells. The current review focuses on the role of chemokines in the balance of T cell subsets. CXCL10 is a key driver of TH1 and possibly TH17 polarization and has, therefore, been a major target for neutralization in different autoimmune diseases. More recently, we identified two different CXC chemokines that possess anti-inflammatory properties. CXCL12 is an important chemokine that participates in the regulation of tissue homeostasis, immune surveillance, cancer development, and inflammatory responses. It is believed that under non-inflammatory conditions, the continuing expression of CXCL12 in tissues that are partially segregated from the immune system, such as the CNS, is important for directing the entry of leukocytes to these sites, as part of immune surveillance. We have previously shown that, aside from this activity, which by its nature could be proinflammatory, CXCL12 also drives the polarization of CXCR4+ macrophages into IL-10 high M2c-like macrophages that hold anti-inflammatory properties, and also of effector CD4+ T cells (CXCR4+) into IL-10 high Tr1 cells. This may explain why its administration during late stages of EAE leads to rapid remission. Based on the above, we thought of generating an Ig-based stabilized protein (CXCL12-Ig) for therapy of various inflammatory autoimmune diseases.
Nevertheless, the major involvement of this chemokine in various biological functions, such as the homing of stem cells to the bone marrow, the homeostasis of neutrophils, and angiogenesis, precludes its use as a stabilized chemokine for the therapy of autoimmune diseases. CXCL11 AS A NOVEL "DRIVER" ANTI-INFLAMMATORY CHEMOKINE Is CXCL12 an exception, or are there other chemokines with anti-inflammatory properties? One of the important features of GPCRs is their ability to transmit diverse signaling cascades upon binding different ligands. The Nobel laureate Robert J. Lefkowitz and his team previously raised the concept that different ligands binding the same G protein-coupled receptor may induce diverse signaling cascades, resulting in distinct biological activities. Even though the mechanistic basis of this feature is not fully understood, its biological and clinical implications are highly significant. We have investigated the interplay between CXCR3 and its three ligands, CXCL9, CXCL10, and CXCL11, in directing the polarization of CD4+ T cells. We observed that while CXCL9 and CXCL10 skew T cell polarization toward TH1/TH17 effector cells, CXCL11 drives CD4+ T cell polarization into IL-10-producing Tr1 cells. We also uncovered the signaling basis of this biased response and learned that it is Gi-independent. While CXCL10/CXCR3 interactions drive effector TH1 polarization via STAT1, STAT4, and STAT5 phosphorylation, CXCL11/CXCR3 binding induces an immunotolerizing state that is characterized by IL-10high (Tr1) and IL-4high (TH2) cells and mediated via p70 S6 kinase/mTOR in STAT-3- and STAT-6-dependent pathways. CXCL11 binds CXCR3 with higher affinity than CXCL10, suggesting that CXCL11 has the potential to mediate and restrain inflammatory autoimmunity (Figure 1). This may explain, in part, why CXCR3-deficient mice develop an extremely severe form of EAE and type 1 diabetes mellitus (T1DM). NOVEL APPROACH FOR CHEMOKINE-BASED THERAPY OF INFLAMMATORY AUTOIMMUNITY, GvHD, AND CANCER Thus far, much effort has been spent exploring the therapeutic potential of targeting the interactions between chemokines and their receptors for treating various autoimmune diseases and cancers. These approaches include antibody-based therapies against single chemokines or their receptors, targeted DNA vaccines that amplify the natural autoantibody titer against chemokines, soluble chemokine receptor-based therapies, and small-molecule antagonists of chemokine receptors. Some of these approaches have been tested in human clinical trials, thus far with very limited success. It is believed that the major limitation of anti-chemokine- or anti-chemokine receptor-based therapies is the redundancy among chemokines and their enhanced in vivo production once they are neutralized. The discovery of chemokines with anti-inflammatory properties opens the door to an alternative approach of using stabilized chemokines for the therapy of autoimmunity and graft-versus-host disease (GVHD). Could stabilized chemokines also be used for the therapy of cancer?
Studies that were initiated in experimental models and recently extended to patients suffering from melanoma showed that blocking the interaction between the immunosuppressive receptor programmed cell death-1 (PD-1), which is largely expressed on FOXP3+ T cells, and its coreceptor on antigen-presenting cells (PD-L1), using an anti-PD-1 mAb (nivolumab) or an anti-PD-L1 mAb, suppressed the function of tumor-infiltrating Tregs and thereby enhanced antitumor immunity, suppressing tumor development and progression. Another successful approach for enhancing antitumor immunity against melanoma involved the administration of a mAb (ipilimumab) that blocks cytotoxic T-lymphocyte-associated antigen 4 (CTLA-4) to potentiate the antitumor T-cell response. Very recently, combined therapy with anti-PD-1 (nivolumab) and anti-CTLA-4 (ipilimumab) showed improved efficacy in treating melanoma. The observation that CXCL10 enhances effector T cell activities motivated us to explore CXCL10-Ig-based therapy in cancer. Very recently, we showed that, in a clinical setup of myeloma, administration of CXCL10-Ig could indeed be used for immunotherapy of this disease and that, aside from enhancing antitumor immunity, it directly suppresses tumor growth. In line with this study, Barreira da Silva et al. very recently showed that inhibition of DPP4 enzymatic activity enhanced tumor rejection by preserving biologically active CXCL10 and increasing trafficking into the tumor by lymphocytes expressing the counter-receptor CXCR3. We are now exploring combined therapies of CXCL10-Ig with anti-PD-1 or anti-CTLA-4 in a melanoma setting. Another chemokine that might serve as a target for cancer therapy is CCL1. Its receptor, CCR8, is highly expressed on FOXP3+ Tregs and has been implicated in their targeted attraction. In line with this, Hoelzinger et al. showed that targeting CCL1 might enhance antitumor immunity. We are now examining whether its stabilized form (CCL1-Ig) could be used for the therapy of inflammatory autoimmunity. CONCLUSION The current review focuses on the involvement of chemokines in directing the polarization and biological function of CD4+ T cells. Thus far, most of the attention has been devoted to the role of cytokines in this process. From a clinically oriented perspective, the finding that chemokines may also polarize Tregs (so far, our data show relevance only for FOXP3-negative Tregs) opens a window of opportunity for using stabilized chemokines for the therapy of inflammatory autoimmunity and GVHD, as well as of cancer. The basic rationale is that stabilized forms of chemokines that induce Tr1-like cells, among them CXCL12 and CXCL11, could be used for the therapy of autoimmunity and GVHD, whereas stabilized CXCL10 would be used for cancer therapy. We find some major differences between CXCL12 and CXCL11 as potential tolerizing chemokines: CXCL12 also confers anti-inflammatory properties on macrophages, whereas CXCL11 also polarizes IL-4high TH2 cells. We assume that CXCL11 may be the better drug candidate, since CXCL12 is involved in many biological activities aside from immunoregulation, such as neutrophil homeostasis and stem cell homing. ETHICS STATEMENT All experimental work described in the manuscript was approved by the ethics committee of the Technion, according to the NIH guidelines.
FUNDING This study was funded by the Israel Science Foundation (ISF), the ICRF, the Israel Cancer Association, and the Collek fund of the Technion.
Business In Burma, Cheap SIM Card Draw May Herald Telecoms Revolution The SIM card lottery is a first tentative step into a telecoms revolution which could be a game changer for economic growth and political reform. RANGOON—Introduced a decade and a half ago under Burma’s former military rulers, SIM cards sold for as much as $7,000 apiece. Today, they still cost more than $200. From Thursday, lucky winners of a lottery-style sale may get one for as little as $2. This is telecoms deregulation, Burma-style. The lottery is a first tentative step into a telecoms revolution that has transformed societies and spurred economic growth across the globe—and could be a game changer for Burma, emerging from decades of isolation and mismanagement that have left it Asia’s second-poorest nation after Afghanistan. State-owned Burma Post and Telecommunications is selling 350,000 SIM cards through a public lottery, and plans to offer additional batches on a monthly basis. Yatanarpon Teleport, a joint venture between local private firms and the government, holds the country’s only other telecoms license, for now. On June 27, the government is due to announce the winners of two new 15-year telecoms licenses up for grabs to international companies. Such is the untapped potential—analysts say Burma is the least connected nation on earth, bar maybe North Korea—that more than 90 international companies and consortia expressed interest in tendering for the two mobile licenses. The Telecommunications Operator Tender Evaluation and Selection Committee whittled that down to 12 applicants to pre-qualify to bid, including India’s Bharti Airtel, Japan’s KDDI Corp, South Africa’s MTN, Singapore Telecommunications, Norway’s Telenor, a group backed by billionaire George Soros, and China Mobile, which has teamed up with Vodafone. “The bid round is seen as one of the most exciting green-field opportunities available globally in the telecoms sector,” said Marae Ciantar, a Singapore-based lawyer with international law firm Allens, who has advised multinational telecoms companies seeking to invest in Burma. Getting Connected Burma’s military rulers neglected the telecoms sector, building only a skeleton infrastructure capable of handling the few subscribers who could afford SIM cards. Sanctions imposed in response to human rights abuses in what is also known as Burma barred western telecoms firms and others from operating there. But those sanctions have eased since Burma’s government embarked on reforms that include releasing political prisoners and allowing civilians into politics. The government says mobile penetration is around 9 percent—though Swedish telecoms giant Ericsson last year put the figure at less than 4 percent—and President Thein Sein, a former general and member of the ruling junta, has set a goal of 80 percent penetration by 2015. “The market potential … is clearly very substantial,” said Allens’ Ciantar. Ericsson estimated the total economic impact of the mobile sector in Burma could potentially be as high as 7.4 percent of GDP over the first three years after the new licenses are issued. And telecoms firms are not alone in beating a path to the underdeveloped Southeast Asian nation of some 60 million people. In February, Danish brewer Carlsberg said it was returning to Burma after sanctions forced it to leave in the mid-1990s. 
Energy companies from Canada, the United States, Britain, Australia, Japan, China and elsewhere are in the running for exploration licenses, and more foreign companies are now visiting the country, sizing up its potential. Economic, Political Boost Experience elsewhere shows developing telecommunications can spur economic growth, and may also encourage political reform. In its report last year, Ericsson said mobile networks “encourage the growth of small businesses and increase their efficiency,” while mobile access “could also play an important role in enabling basic human rights and in driving increased transparency in society.” David Butcher, a telecoms consultant who last year studied the sector in Burma for the Asian Development Bank, said affordable mobile networks have the potential to boost rural economies. “Some calculations have shown that the impact of having a mobile phone can be to increase rural incomes by about 20 percent,” he told Reuters. Vodafone said that if it wins one of the mobile licenses it plans to roll out its M-Pesa system, which allows financial transactions via mobile phone. The system provides access to financial services for people in underdeveloped areas with little or no banking infrastructure, and is commonly used by workers in cities to send money back to their home villages. “In Kenya, mobile money was the game changer in bringing financial services to the middle class and the poor,” said Nick Read, Vodafone’s regional chief executive for Africa, the Middle East and Asia Pacific. “In 2006, only 20 percent of Kenyan adults had access to financial services, but by the end of 2010 that share had jumped to 75 percent.” The SIM cards to be sold off this week will only be compatible with MECTel top-up cards issued by the military-owned Myanma Economic Corporation (MEC) and distributed through their authorized outlets in big cities. In later batches, SIM cards will change to GSM, the global standard for mobile communications. Test Case The bidding process for the telecoms licenses is seen as a test case for the government’s approach to managing investment in other sectors. “Potential investors in Burma, and advisers to international investors … have been watching the bid process closely and with great interest,” said Ciantar, adding it has so far been “very transparent.” Butcher said that during his research last September there were concerns that some companies were trying to influence the process. “There was a strong suspicion when I was there that people were trying to get at the minister to make them the favored son who would be given the license for a substantial payment,” he said. In January, the government launched an unprecedented corruption investigation into dozens of officials at the telecoms ministry, including former minister Thein Tun, who stepped down that month for unexplained reasons. Eight senior officials were reconfirmed in their jobs in March. While the advent of affordable telecoms holds potential for grand social and economic change, people on the streets of Rangoon, Burma’s biggest city, have simpler expectations. “I can connect with my friends and passengers, and my family can call me if there’s an emergency at home,” said Kai Saw Lin, who earns just 4,000 kyat (about US $4.50) a day driving a bicycle taxi. But he added that even if he wins one of the SIM cards, he might not be able to afford the handset to put it in. Additional reporting by Aung Hla Tun.
import os

from dotenv import load_dotenv, find_dotenv

# Load environment variables from the nearest .env file (if any).
load_dotenv(find_dotenv())


class cred():
    # Credentials are read once, at import time, from the environment.
    API_TOKEN = os.getenv('API_TOKEN')
    owner_id = os.getenv('OWNER_ID')
    my_db = os.getenv('DB_NAME')
    my_user = os.getenv('USER_NAME')
    my_pass = os.getenv('<PASSWORD>')  # variable name redacted in the source
    my_host = os.getenv('HOST_NAME')
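A minimal usage sketch of the cred class above, assuming a .env file sits next to the script and defines the variables the class reads; the module filename (credentials.py) is a hypothetical choice, and the password variable name is redacted in the source:

# .env (hypothetical contents)
# API_TOKEN=abc123
# OWNER_ID=42
# DB_NAME=mydb
# USER_NAME=admin
# HOST_NAME=localhost

from credentials import cred  # assuming the class above is saved as credentials.py

def main():
    # Values are read once at import time; missing variables come back as None.
    if cred.API_TOKEN is None:
        raise RuntimeError("API_TOKEN is not set in the environment")
    print("Connecting to", cred.my_host, "as", cred.my_user)

if __name__ == "__main__":
    main()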
Experimental investigation of the structure of 124Ce Gamma-ray transitions have been observed for the first time in 124Ce using the Daresbury Recoil Separator. The excitation energy of the first excited state, 142 keV, implies a deformation ε2 ≈ 0.31. This value confirms the trend to higher deformation for the more neutron-deficient cerium isotopes, but it is larger than that predicted by recent calculations.
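As a rough plausibility check (not the authors' method), the empirical Grodzins systematics relate the energy of the first 2+ state to the quadrupole deformation. A small sketch, assuming the standard form E(2+)·β2² ≈ 1225·A^(−7/3) MeV and the approximate conversion β2 ≈ 1.06·ε2:

# Back-of-the-envelope check using the Grodzins estimate; the constant 1225 MeV and
# the epsilon-to-beta conversion factor are textbook approximations, not values from the paper.
A = 124              # mass number of 124Ce
eps2 = 0.31          # quoted deformation
beta2 = 1.06 * eps2  # approximate epsilon_2 -> beta_2 conversion

e2_plus_mev = 1225.0 / (A ** (7.0 / 3.0)) / beta2 ** 2
print(f"Estimated E(2+) ~ {e2_plus_mev * 1000:.0f} keV (measured: 142 keV)")
# The estimate lands in the same range as the measured 142 keV, consistent with the quoted deformation.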
package io.github.ulisse1996.jaorm.entity.relationship;

import io.github.ulisse1996.jaorm.entity.Result;

import java.util.*;
import java.util.function.Function;

public class Relationship<T> {

    private final Class<T> entityClass;
    private final Set<Node<T>> nodeSet;

    public Relationship(Class<T> entityClass) {
        this.entityClass = entityClass;
        this.nodeSet = new HashSet<>();
    }

    public void add(Node<T> node) {
        this.nodeSet.add(node);
    }

    public Set<Node<T>> getNodeSet() { //NOSONAR
        return Collections.unmodifiableSet(this.nodeSet);
    }

    public Class<T> getEntityClass() {
        return entityClass;
    }

    public static class Node<T> {

        private final Function<T, ?> function;
        private final boolean opt;
        private final boolean collection;
        private final List<EntityEventType> events;
        private final Class<?> linkedClass;

        public Node(Class<?> linkedClass, Function<T, ?> function, boolean opt,
                    boolean collection, EntityEventType... events) {
            this.function = function;
            this.opt = opt;
            this.collection = collection;
            this.events = Arrays.asList(events);
            this.linkedClass = linkedClass;
        }

        public Class<?> getLinkedClass() {
            return linkedClass;
        }

        public boolean matchEvent(EntityEventType eventType) {
            return events.contains(eventType);
        }

        public boolean isCollection() {
            return collection;
        }

        public boolean isOpt() {
            return opt;
        }

        @SuppressWarnings("unchecked")
        public Result<Object> getAsOpt(T entity) {
            return (Result<Object>) function.apply(entity);
        }

        @SuppressWarnings("unchecked")
        public Collection<Object> getAsCollection(T entity) {
            Collection<Object> res = (Collection<Object>) function.apply(entity);
            return Optional.ofNullable(res).orElse(Collections.emptyList());
        }

        public Object get(T entity) {
            return function.apply(entity);
        }
    }
}
Notified body consensus statements. The ability of governments, official bodies, industry and others to uniformly interpret the European Directives for medical devices will have a profound effect on the success of the European system regulating these products. This is particularly important for Notified Bodies. This article will discuss Notified Body consensus statements. An article in the near future will discuss Notified Body recommendations.
The synapse: structure and function The Synapse summarizes recent advances in cellular and molecular mechanisms of synaptic transmission and provides new insights into neuronal plasticity and the cellular basis of neurological diseases. Part 1 provides an in-depth look at structural differences and distribution of various pre- and post-synaptic proteins found at glutamatergic synapses; and Part 2 is dedicated to dendritic spines and their associated perisynaptic glia, which together constitute the tripartite synapse. The spines are portrayed as major sites for calcium sequestration and local protein synthesis. Part 3 highlights the important regional and cellular differences between glutamatergic transmission and that of neurotransmitters such as dopamine and acetylcholine that are commonly found in axon terminals without synaptic membrane specializations; and Part 4 provides an overview of the synapse from the time of formation to degeneration under the powerful influence of aging or hormonal decline that leads to severe deficits in cognitive function. Each chapter is illustrated with drawings and images derived from calcium imaging, electron microscopic immunolabeling, or electrophysiology. This book is a valuable reference for neuroscientists and clinical neurologists in both research and clinical settings. It is a comprehensive reference focused on the structure and function of the synapse. It covers the links between the synapse and neural plasticity and the cellular basis of neurologic disease. It includes detailed coverage of dendritic spines and associated perisynaptic glia (the tripartite synapse). It includes in-depth coverage of synapse degeneration due to aging or hormonal decline related to severe cognitive impairment.
import builtins
import dataclasses
import warnings
from functools import partial
from typing import Any, Dict, List, Optional, Sequence, Tuple, Type, cast

from pydantic import BaseModel
from pydantic.fields import ModelField
from typing_extensions import Literal

from graphql import GraphQLResolveInfo

import strawberry
from strawberry.arguments import UNSET
from strawberry.experimental.pydantic.conversion import (
    convert_pydantic_model_to_strawberry_class,
)
from strawberry.experimental.pydantic.fields import get_basic_type
from strawberry.experimental.pydantic.utils import get_private_fields
from strawberry.field import StrawberryField
from strawberry.object_type import _process_type, _wrap_dataclass
from strawberry.schema_directive import StrawberrySchemaDirective
from strawberry.types.type_resolver import _get_fields
from strawberry.types.types import TypeDefinition

from .exceptions import MissingFieldsListError, UnregisteredTypeException


def replace_pydantic_types(type_: Any):
    origin = getattr(type_, "__origin__", None)
    if origin is Literal:
        # Literal does not have types in its __args__ so we return early
        return type_
    if hasattr(type_, "__args__"):
        new_type = type_.copy_with(
            tuple(replace_pydantic_types(t) for t in type_.__args__)
        )

        if isinstance(new_type, TypeDefinition):
            # TODO: Not sure if this is necessary. No coverage in tests
            # TODO: Unnecessary with StrawberryObject
            new_type = builtins.type(
                new_type.name,
                (),
                {"_type_definition": new_type},
            )

        return new_type

    if issubclass(type_, BaseModel):
        if hasattr(type_, "_strawberry_type"):
            return type_._strawberry_type
        else:
            raise UnregisteredTypeException(type_)

    return type_


def get_type_for_field(field: ModelField):
    type_ = field.outer_type_
    type_ = get_basic_type(type_)
    type_ = replace_pydantic_types(type_)

    if not field.required:
        type_ = Optional[type_]

    return type_


def type(
    model: Type[BaseModel],
    *,
    fields: Optional[List[str]] = None,
    name: Optional[str] = None,
    is_input: bool = False,
    is_interface: bool = False,
    description: Optional[str] = None,
    directives: Optional[Sequence[StrawberrySchemaDirective]] = (),
    all_fields: bool = False,
):
    def wrap(cls):
        model_fields = model.__fields__
        fields_set = set(fields) if fields else set([])

        if fields:
            warnings.warn(
                "`fields` is deprecated, use `auto` type annotations instead",
                DeprecationWarning,
            )

        existing_fields = getattr(cls, "__annotations__", {})
        fields_set = fields_set.union(
            set(name for name, typ in existing_fields.items() if typ is strawberry.auto)
        )

        if all_fields:
            if fields_set:
                warnings.warn(
                    "Using all_fields overrides any explicitly defined fields "
                    "in the model, using both is likely a bug",
                    stacklevel=2,
                )
            fields_set = set(model_fields.keys())

        if not fields_set:
            raise MissingFieldsListError(cls)

        all_model_fields: List[Tuple[str, Any, dataclasses.Field]] = [
            (
                name,
                get_type_for_field(field),
                StrawberryField(
                    python_name=field.name,
                    graphql_name=field.alias if field.has_alias else None,
                    default=field.default if not field.required else UNSET,
                    default_factory=(
                        field.default_factory if field.default_factory else UNSET
                    ),
                    type_annotation=get_type_for_field(field),
                    description=field.field_info.description,
                ),
            )
            for name, field in model_fields.items()
            if name in fields_set
        ]

        wrapped = _wrap_dataclass(cls)
        extra_fields = cast(List[dataclasses.Field], _get_fields(wrapped))
        private_fields = get_private_fields(wrapped)

        all_model_fields.extend(
            (
                (
                    field.name,
                    field.type,
                    field,
                )
                for field in extra_fields + private_fields
                if field.type != strawberry.auto
            )
        )

        # Sort fields so that fields with missing defaults go first
        # because dataclasses require that fields with no defaults are defined
        # first
        missing_default = []
        has_default = []
        for field in all_model_fields:
            if field[2].default is dataclasses.MISSING:
                missing_default.append(field)
            else:
                has_default.append(field)

        sorted_fields = missing_default + has_default

        # Implicitly define `is_type_of` to support interfaces/unions that use
        # pydantic objects (not the corresponding strawberry type)
        @classmethod  # type: ignore
        def is_type_of(cls: Type, obj: Any, _info: GraphQLResolveInfo) -> bool:
            return isinstance(obj, (cls, model))

        cls = dataclasses.make_dataclass(
            cls.__name__,
            sorted_fields,
            bases=cls.__bases__,
            namespace={"is_type_of": is_type_of},
        )

        _process_type(
            cls,
            name=name,
            is_input=is_input,
            is_interface=is_interface,
            description=description,
            directives=directives,
        )

        model._strawberry_type = cls  # type: ignore
        cls._pydantic_type = model  # type: ignore

        def from_pydantic(instance: Any, extra: Dict[str, Any] = None) -> Any:
            return convert_pydantic_model_to_strawberry_class(
                cls=cls, model_instance=instance, extra=extra
            )

        def to_pydantic(self) -> Any:
            instance_kwargs = dataclasses.asdict(self)
            return model(**instance_kwargs)

        cls.from_pydantic = staticmethod(from_pydantic)
        cls.to_pydantic = to_pydantic

        return cls

    return wrap


input = partial(type, is_input=True)
interface = partial(type, is_interface=True)
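A short usage sketch of the decorator defined above. The import paths suggest this module is what strawberry exposes as strawberry.experimental.pydantic, but that exposure, the model, and the field names below are assumptions made for illustration:

import strawberry
from pydantic import BaseModel
from strawberry.experimental import pydantic as strawberry_pydantic  # assumed location of the module above

class UserModel(BaseModel):
    id: int
    name: str
    nickname: str = "anonymous"

@strawberry_pydantic.type(model=UserModel, all_fields=True)
class User:
    pass

# Convert between the two representations using the helpers attached by the decorator.
user = User.from_pydantic(UserModel(id=1, name="Ada"))
round_tripped = user.to_pydantic()

@strawberry.type
class Query:
    @strawberry.field
    def me(self) -> User:
        return User.from_pydantic(UserModel(id=1, name="Ada"))

schema = strawberry.Schema(query=Query)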
/**
 * Convert a given matrix into a magic square at minimal cost.
 * @param s input matrix to convert
 * @return the conversion cost, or -1 if the input is invalid
 */
static int formingMagicSquare(int[][] s) {
    int result = -1;
    if (validateInput(s)) {
        // Reference magic square (stored in the shared field for later use).
        magicSquare = new int[][]{
                {4, 9, 2},
                {3, 5, 7},
                {8, 1, 6}
        };
        // All eight 3x3 magic squares (rotations and reflections of the base square),
        // flattened row by row.
        int magicSquares[][] = {
                {4, 9, 2, 3, 5, 7, 8, 1, 6},
                {4, 3, 8, 9, 5, 1, 2, 7, 6},
                {2, 9, 4, 7, 5, 3, 6, 1, 8},
                {2, 7, 6, 9, 5, 1, 4, 3, 8},
                {8, 1, 6, 3, 5, 7, 4, 9, 2},
                {8, 3, 4, 1, 5, 9, 6, 7, 2},
                {6, 7, 2, 1, 5, 9, 8, 3, 4},
                {6, 1, 8, 7, 5, 3, 2, 9, 4},
        };
        result = Integer.MAX_VALUE;
        for (int i = 0; i < 8; i++) {
            // Cost of transforming s into the i-th magic square: sum of absolute
            // cell-by-cell differences.
            int temp = Math.abs(s[0][0] - magicSquares[i][0]) + Math.abs(s[0][1] - magicSquares[i][1])
                    + Math.abs(s[0][2] - magicSquares[i][2]) + Math.abs(s[1][0] - magicSquares[i][3])
                    + Math.abs(s[1][1] - magicSquares[i][4]) + Math.abs(s[1][2] - magicSquares[i][5])
                    + Math.abs(s[2][0] - magicSquares[i][6]) + Math.abs(s[2][1] - magicSquares[i][7])
                    + Math.abs(s[2][2] - magicSquares[i][8]);
            result = Math.min(temp, result);
        }
    }
    return result;
}
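The hard-coded table above relies on the fact that exactly eight 3x3 magic squares exist (the rotations and reflections of one base square). A quick brute-force check of that claim, written as a standalone sketch rather than a port of the method above:

from itertools import permutations

def is_magic(sq):
    # sq is a flat tuple of the digits 1..9, read row by row.
    rows = [sq[0:3], sq[3:6], sq[6:9]]
    cols = [sq[0::3], sq[1::3], sq[2::3]]
    diags = [(sq[0], sq[4], sq[8]), (sq[2], sq[4], sq[6])]
    return all(sum(line) == 15 for line in rows + cols + diags)

magic = [p for p in permutations(range(1, 10)) if is_magic(p)]
print(len(magic))  # 8 -- matches the hard-coded table
for m in magic:
    print(m)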
package com.megvii.zhimasdk.b;

import android.annotation.TargetApi;
import android.os.Build;
import android.util.Log;

public class l implements m {

    boolean a = true; // logging enabled
    int b = 2;        // minimum level that will be logged

    @TargetApi(8)
    private void c(String paramString1, String paramString2, Throwable paramThrowable) {
        Log.wtf(paramString1, paramString2, paramThrowable);
    }

    public void a(int paramInt, String paramString1, String paramString2) {
        a(paramInt, paramString1, paramString2, null);
    }

    public void a(int paramInt, String paramString1, String paramString2, Throwable paramThrowable) {
        // The decompiled source contained an empty `if ((a()) && (a(paramInt))) {}` block and an
        // unreachable trailing Log.i call; the control flow below is a reconstruction of the
        // presumed intent (skip logging when disabled or below the threshold, INFO for level 4).
        if (!a() || !a(paramInt)) {
            return;
        }
        switch (paramInt) {
            case 2:
                Log.v(paramString1, paramString2, paramThrowable);
                return;
            case 3:
                Log.d(paramString1, paramString2, paramThrowable);
                return;
            case 4:
                Log.i(paramString1, paramString2, paramThrowable);
                return;
            case 5:
                Log.w(paramString1, paramString2, paramThrowable);
                return;
            case 6:
                Log.e(paramString1, paramString2, paramThrowable);
                return;
            case 8:
                if (Integer.valueOf(Build.VERSION.SDK).intValue() > 8) {
                    c(paramString1, paramString2, paramThrowable);
                } else {
                    Log.e(paramString1, paramString2, paramThrowable);
                }
                return;
            case 7:
            default:
                return;
        }
    }

    public void a(String paramString1, String paramString2) {
        a(2, paramString1, paramString2);
    }

    public void a(String paramString1, String paramString2, Throwable paramThrowable) {
        a(5, paramString1, paramString2, paramThrowable);
    }

    public boolean a() {
        return this.a;
    }

    public boolean a(int paramInt) {
        return paramInt >= this.b;
    }

    public void b(String paramString1, String paramString2) {
        a(2, paramString1, paramString2);
    }

    public void b(String paramString1, String paramString2, Throwable paramThrowable) {
        a(6, paramString1, paramString2, paramThrowable);
    }

    public void c(String paramString1, String paramString2) {
        a(4, paramString1, paramString2);
    }

    public void d(String paramString1, String paramString2) {
        a(5, paramString1, paramString2);
    }

    public void e(String paramString1, String paramString2) {
        a(6, paramString1, paramString2);
    }
}

/* Location: /Users/gaoht/Downloads/zirom/classes2-dex2jar.jar!/com/megvii/zhimasdk/b/l.class
 * Java compiler version: 6 (50.0)
 * JD-Core Version: 0.7.1
 */
Editorial: Recent Advances in Understanding the Basic Mechanisms of Atrial Fibrillation Using Novel Computational Approaches Affiliations: Auckland Bioengineering Institute, University of Auckland, Auckland, New Zealand; School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom; Department of Cardiology, University Heart Center Hamburg, Hamburg, Germany; Royal Melbourne Hospital, Melbourne, VIC, Australia; Department of Cardiology, University of Melbourne, Melbourne, VIC, Australia; Department of Medicine and Therapeutics, Faculty of Medicine, Chinese University of Hong Kong, Hong Kong, China; Li Ka Shing Institute of Health Sciences, Faculty of Medicine, Chinese University of Hong Kong, Hong Kong, China; IMB, UMR 5251, University of Bordeaux, Pessac, France; IHU Liryc, Electrophysiology and Heart Modeling Institute, Fondation Bordeaux University, Pessac, France. WHERE WE ARE AT REGARDING ATRIAL FIBRILLATION Atrial fibrillation (AF) is the most common sustained heart rhythm disturbance and is associated with substantial morbidity and mortality. The current prevalence of AF is ∼2% of the general population worldwide and is projected to more than double in the following decades, becoming a global epidemic due to the aging population and the increasing incidence of heart failure and other comorbidities such as hypertension and diabetes. Current clinical treatment for AF is suboptimal. Ablation treatment for persistent and permanent AF, and for AF with concurrent cardiac diseases, is disappointing, with long-term success rates of <30% for single ablation procedures (Nishida and Nattel, 2014). Furthermore, anti-arrhythmic drugs (AADs) often lose their efficacy and have side effects (Woods and Olgin, 2014). The poor clinical outcomes are primarily due to a lack of basic understanding of the AF mechanism and of quantitative tools to optimize treatment strategies in a clinical setting. Novel computational approaches and techniques are playing an important role in our understanding and treatment of AF. Multi-scale computer models of the human atria have been used to investigate the important role of fibrosis in AF and have consistently demonstrated that AF is perpetuated by re-entrant circuits persisting in the fibrotic boundary zones. Moreover, models have been applied to propose efficient ablation and AAD treatments for AF. To improve patient outcomes, novel computational analysis-aided ablation strategies have also been proposed. Narayan et al. identified stable AF re-entrant drivers in patients using phase singularity analysis and atrial cellular restitution properties and demonstrated, in their Focal Impulse and Rotor Modulation (FIRM) trial, that it was possible to reverse AF in 80.3% of patients by directly targeting these regions. In addition to the FIRM trial, Haissaguerre et al. studied 103 patients with persistent AF using a non-invasive ECG imaging (ECGI) approach and concluded that AF is sustained by localized, spatially stable drivers, with targeted ablation leaving 85% of patients free from AF at 12 months post ablation. These high success rates are yet to be confirmed in a multi-center randomized clinical trial; indeed, the recent REAFFIRM clinical trial, presented during a late-breaking session at Heart Rhythm 2019, failed to provide evidence of the superiority of the FIRM approach over pulmonary vein isolation. Meanwhile, machine learning is proving to be a promising tool for helping us to understand AF.
For example, deep convolutional neural networks have been used to classify AF from single-lead ECGs () and to reconstruct 3D left atrial chambers from gadolinium-enhanced MRIs () with superior performance. The aim of this Research Topic was to collect a series of reviews and original research articles presenting recent advances toward a better understanding and treatment of AF through the development or use of: structure-detailed computer modeling; biophysics-based atrial cellular modeling; signal processing and clinical mapping; and meta-analysis and clinical studies. A total of 27 accepted articles were published under this Research Topic. Here in this editorial, we will summarize the new knowledge and approaches generated, and discuss how these can contribute to an improved understanding of AF mechanisms and clinical treatment, as well as how they may shape future research directions. CRITICAL INSIGHTS LEARNED FROM STRUCTURE-DETAILED COMPUTER MODELING Improvements in clinical imaging and mapping allow detailed characterization of atrial anatomy, structure and electrophysiology. Computer models of atrial electrical activation provide a powerful computational framework for understanding the structure-function relationship that underlies atrial reentrant arrhythmias. Atrial structure, including wall thickness, fibrosis, and myofiber orientation, have been suggested to dictate the locations of AF re-entrant drivers in explanted human heart studies (;Zhao et al.,, 2017. Of all atrial structures, fibrosis, the hallmark of structural remodeling, has been investigated extensively in this Research Topic. Clayton studied the effect of the spatial scale (size) of simulated fibrosis on electrical propagations by smoothly varying the diffusion coefficient in 2D atrial tissue models. His study concludes that the spatial scale of fibrosis has important effects on both dispersion of recovery and vulnerability to re-entry. The Aslanidi group evaluated the effects of both atrial wall thickness and fibrosis on AF re-entrant drivers using two sets of computer models, a simple model of an atrial tissue slab with a step change in wall thickness and a synthetic fibrosis patch, and a set of 3D patient-specific computer models based on MRI (Roy et al.). In the slab model, they observed that an AF re-entrant driver drifted toward and along the regions with changes/gradients in wall thickness. Furthermore, they discovered that additional patchy fibrosis would pull the AF re-entrant driver toward it, and that the locations of AF re-entrant drivers were determined by both fibrosis and wall thickness gradients. On the other hand, results from the patient-specific computer models suggested that the interaction between wall thickness and fibrosis plays a very important role in the right atrium due to extensive trabecular structure, whilst fibrosis performs a more decisive role in the left atrium due to a comparably smaller trabecular structure and more extensive fibrotic remodeling (Roy et al.). In another study conducted by Stephenson et al. using micro-CT imaging and anatomically accurate computer modeling, morphological substrates for atrial arrhythmogenesis were discovered in archived human hearts with atrioventricular septal defect. To directly link computer modeling to clinical treatment, Boyle et al. 
have carried out a multi-modal assessment of the arrhythmogenic propensity of the fibrotic substrate in patients with persistent AF by comparing locations of AF driver regions found in patient-specific computer simulations to those detected by the clinical FIRM approach. They discovered that computer modeling successfully detected most AF driver regions that were identified and ablated using the FIRM approach. The interaction and impact of atrial structural and electrical remodeling on electrical propagation were also investigated in this Research Topic. The Vigmond group have studied the effects of fibrosis and wavelength on the locations of AF reentrant drivers using bi-layer atrial models (Saha et al.). They observed that AF re-entrant drivers became more unstable with decreasing wavelength and that driver locations were largely influenced by the degree and distribution of fibrosis as well as the choice of implementation approach. Zhao et al. modeled the loss of lateral connections in atrial myocytes due to fibrotic remodeling and investigated the relative contributions of the sodium and L-type calcium currents to transverse propagation using a simple computer model of two parallel atrial myocyte strands. They discovered that although transverse propagation depends on both sodium and calcium currents, their relative contribution and sensitivity to channel blockage depends on the distribution of transverse connections. Fibrosis is important but structural remodeling involves many factors. Recent experiments suggest that adipocytes lead to a 69-87% increase in action potential duration in neighboring cells as well as an increase in resting membrane potential by 2.5 to 5.5 mV (De Coster et al.). The Panfilov group investigated the electrical interaction of fat and normal myocytes using multi-scale computer models and concluded that adipose remodeling may induce spiral wave dynamics to a complex arrhythmia (De Coster et al.). Besides, Bueno-Orovio and Ugarte et al. developed a novel approach to model cardiac structural heterogeneity by using a fractional diffusion for the description of cardiac conduction. Their studies remind us that the current cardiac modeling approach itself is not perfect and needs improvement. Dillon-Murphy et al. presented a novel patient-specific modeling workflow for characterizing the thermal-fluid dynamics in the human atria. This is a potentially useful tool for evaluating ablation treatment and minimizing stroke risks. BIOPHYSICS-BASED ATRIAL CELLULAR MODELING The vast majority of patients with AF are treated pharmacologically. However, AADs are often ineffective in ∼40% of AF patients (). Cardiac cellular models were widely used to improve our understanding of electrical remodeling and to facilitate AAD design and development. In this Research Topic, Sutanto et al. presented a novel integrative approach by combining an experimental animal study, confocal imaging and computer modeling to study the effects of the subcellular distribution of ryanodine receptors (RyR2) and L-type Ca 2+ channels on Ca 2+ transient properties and spontaneous Ca 2+ release events (SCaEs) in atrial cardiomyocytes. They discovered that SCaEs preferentially arise from regions of high local RyR2 expression and the propagation of Ca 2+ waves is modulated by the distance between RyR2 bands. 
On the other hand, incorporation of axial tubules in various amounts and locations reduces Ca2+-transient time to peak, and selective hyperphosphorylation of RyR2 around axial tubules increases the number of spontaneous waves (Sutanto et al.). These novel findings significantly enhance our understanding of the atrial structure-function relationship at the subcellular level. In another modeling study, Colman et al. developed a human atrial cell model derived from a single congruent data source, which offers a unique approach for directly relating the model to the experiment. There are also two important review articles devoted to the modeling of atrial cellular electrophysiology and pharmacotherapy. The Grandi group review recent advances in statistical and computational techniques, i.e., population-based and sample-specific modeling, for simulating physiological variability when building cellular computer models of cardiac electrophysiology in both physiological and diseased conditions (Ni et al.), and the Koivumäki group detail the unique aspects of AF pathophysiology, modeling approaches for drug testing, and how heterogeneity and variability can be incorporated into AF-specific models (Vagos et al.). INSIGHTS ON SIGNAL PROCESSING AND CLINICAL MAPPING Ineffective signal processing and atrial mapping approaches impede our understanding of AF mechanisms and the identification of effective targets for treatment. To determine accurate intracardiac maps, the Rappel group investigated AF re-entrant drivers using phase maps from patients with persistent AF in the presence of various forms of signal contamination (Vidmar et al.). They conclude that domains of low-fidelity electrograms can be produced at rotational cores, which are most sensitive to far-field activation. By contrast, based on atrial electrograms collected using ECGI from patients with persistent AF, Meo et al. have utilized a new approach to measure AF complexity, a non-dipolar component index, and have correlated this with ablation outcomes and AF pathophysiology. Finally, the Zhang group developed a new 2D convolutional neural network for automatic detection of AF using the MIT-BIH ECG database with superior performance (He et al.). Animal models and computer simulations are often utilized to validate atrial mapping and signal processing. The Schotten group mapped 12 goats with persistent AF for 3-4 weeks using a 249-electrode array and analyzed the AF episodes collected from the left atrial free wall to quantify its degree of spatiotemporal stationarity (van Hunnik et al.). They discovered that AF properties were stationary; however, they argue that this could not be attributed to stable recurrent conduction patterns. Instead, they postulate that the structural properties of the atria may explain the very variable conduction patterns underlying stationary AF properties. A 64-channel basket mapping catheter was used in the FIRM trials and is now widely used in clinics for patients with AF; however, it remains uncertain how reliable this clinical mapping tool is. Alessandrini et al. have developed a computer modeling framework to evaluate basket-catheter-guided AF ablation. They discovered that a stable re-entrant driver needs a high-density mapping catheter (<3 mm) and a short distance to the atrial surface (<10 mm) for accurate mapping. Finally, the Ganesan group review information theory, such as Shannon entropy, and its application to AF mapping, in the hopes of better pinpointing effective targets (Dharmaprani et al.).
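The Shannon entropy mentioned above is, in this mapping context, essentially the entropy of the amplitude histogram of an electrogram segment. A minimal sketch of that computation on a synthetic signal (arbitrary bin count, not the exact pipeline used in the cited studies):

import numpy as np

def shannon_entropy(signal, n_bins=64):
    # Bin the signal amplitudes and compute -sum(p * log2 p) over non-empty bins.
    counts, _ = np.histogram(signal, bins=n_bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Synthetic example: an organized (sinusoidal) vs. a disorganized (noisy) electrogram.
t = np.linspace(0, 1, 2000)
organized = np.sin(2 * np.pi * 8 * t)
disorganized = organized + np.random.default_rng(0).normal(0, 1.0, t.size)

print(shannon_entropy(organized), shannon_entropy(disorganized))
# The disorganized trace typically yields the higher entropy, which is the intuition
# behind using entropy-based measures to localize disorganized activation.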
META-ANALYSIS AND CLINICAL STUDIES In this Research Topic, there are four original meta-analysis articles. Through a pooled analysis of a total of 17 studies including 5,169 participants, Chen et al. found that the adenosine test and elimination of dormant conduction provoked by adenosine may not improve the long-term success rate in AF patients that undergo circumferential pulmonary vein isolation. Their study raises a serious question about the clinical usage of adenosine to unmask dormant conduction of pulmonary veins as potential reconnection sites. The Tse group has systematically compared AF recurrence rates and complication rates between a novel ablation approach (circular irrigated radiofrequency ablation) and conventional ablation techniques based on 161 original publications (Li et al.). They found that the performance between the two is comparable though circular irrigated radiofrequency ablation has a higher mortality. Filos et al. conducted a scoping review by mapping existing literature in the field of atrial models and their associations with AF to synthesize the vast knowledge toward the mechanism between AF-related P-wave morphologies and atrial computer models. The final meta-analysis study was aided by a novel machine learning approach (Xiong et al.). The growth in medical research publications is accelerating across the board; therefore, there is an urgent need to develop an intelligent automated approach, such as machine learning, to facilitate the identification and selection of relevant articles for meta-analysis. The Zhao group developed a novel machine learning approach to assist in the screening of potentially relevant articles for large-scale metaanalyses and systematic review (Xiong et al.). Their approach led to a 87% reduction in the number of publications needed for manual screening. More importantly, their study demonstrates that diabetes mellitus is a strong, independent risk factor for AF, particularly for women. It is always important to link or interpret computational approaches and their results back to clinical settings. There are three clinical review papers devoted to this area. Stiles et al. reviewed computational approaches for detecting AF substrates, ranging from complex fractionated atrial electrograms (CFAEs), dominant frequency, ECGI, FIRM, and fibrosis-guided ablation to risk factor modification. Clearly, some of these approaches did not work that well, as demonstrated by recent high-profile clinical studies (), due to our lack of understanding of AF mechanisms. Cheniti et al. focus on reviewing the AF mechanisms that are further obscured and complicated by intermingled multilevel atrial remodeling, various concurrent conditions such as genetic factors (PITX2), obesity/metabolic syndrome, and the limitations of each mapping/imaging/ablation methodology. Bohne et al. systematically review the structural, electrical, and autonomic remodeling underlying elevated AF in diabetes mellitus conditions. Further studies are required to investigate the inter-relationship among obesity, diabetes mellitus, and metabolic syndrome, as well as the role of insulin resistance in AF. CONCLUSIONS AND FUTURE DIRECTIONS The articles collected under this Research Topic advance our understanding of atrial structural and electrical remodeling, presenting recent progress on the development of computational modeling, signal processing, atrial mapping, and machine learning approaches, as well as how the gap between basic and clinical studies is being bridged. 
There is a growing body of evidence supporting a more integrative approach by combining new and established computational and experimental/clinical approaches to improve our understanding and treatment of AF. More importantly, computer modeling of AF will need to be truly multiscale, going from subcellular genetic changes to tissue-level fibrosis to organ-scale geometry and electrical connectivity. AF is a complex disease; therefore, future work should extend the current paradigm to investigate upstream mechanisms and therapy, such as the genetic factors (PITX2) and concurrent clinical conditions (metabolic syndrome). Finally, in the world of meta-data and wearable technology, more effective computational approaches, such as machine learning and large physiological and clinical datasets will need to be used to aid traditional approaches toward further advancements in this exciting research area. Together, these methods will no doubt be an important part of global efforts to tackle this most common, yet elusive, cardiac disease. AUTHOR CONTRIBUTIONS JZ wrote the draft. The remaining authors provided comments and edits. All authors approved the final version of this article. FUNDING Sources of support are the National Institutes of Health grants (HL115580 and HL135109 to JZ); the Health Research Council of New Zealand (16/385 to JZ); the British Heart Foundation (PG/15/8/31130 to OA); the French Government as part of the Investments of the Future program managed by the National Research Agency (ANR) (Grant reference ANR-10-IAHU-04 to EV).
Empagliflozin and Clinical Outcomes in Patients With Type 2 Diabetes Mellitus, Established Cardiovascular Disease, and Chronic Kidney Disease Background: Empagliflozin, a sodium-glucose cotransporter 2 inhibitor, reduced cardiovascular morbidity and mortality in patients with type 2 diabetes mellitus and established cardiovascular disease in the EMPA-REG OUTCOME trial (Empagliflozin Cardiovascular Outcome Event Trial in Type 2 Diabetes Mellitus Patients). Urinary glucose excretion with empagliflozin decreases with declining renal function, resulting in less potency for glucose lowering in patients with kidney disease. We investigated the effects of empagliflozin on clinical outcomes in patients with type 2 diabetes mellitus, established cardiovascular disease, and chronic kidney disease. Methods: Patients with type 2 diabetes mellitus, established cardiovascular disease, and estimated glomerular filtration rate (eGFR) ≥30 mL·min−1·1.73 m−2 at screening were randomized to receive empagliflozin 10 mg, empagliflozin 25 mg, or placebo once daily in addition to standard of care. We analyzed cardiovascular death, hospitalization for heart failure, all-cause hospitalization, and all-cause mortality in patients with prevalent kidney disease (defined as eGFR <60 mL·min−1·1.73 m−2 and/or urine albumin-creatinine ratio >300 mg/g) at baseline. Additional analyses were performed in subgroups by baseline eGFR (<45, 45 to <60, 60 to <90, ≥90 mL·min−1·1.73 m−2) and baseline urine albumin-creatinine ratio (>300, 30 to ≤300, <30 mg/g). Results: Of 7020 patients treated, 2250 patients had prevalent kidney disease at baseline, of whom 67% had a diagnosis of type 2 diabetes mellitus for >10 years, 58% were receiving insulin, and 84% were taking angiotensin-converting enzyme inhibitors or angiotensin receptor blockers. In patients with prevalent kidney disease at baseline, empagliflozin reduced the risk of cardiovascular death by 29% compared with placebo (hazard ratio [HR], 0.71; 95% confidence interval [CI], 0.52-0.98), the risk of all-cause mortality by 24% (HR, 0.76; 95% CI, 0.59-0.99), the risk of hospitalization for heart failure by 39% (HR, 0.61; 95% CI, 0.42-0.87), and the risk of all-cause hospitalization by 19% (HR, 0.81; 95% CI, 0.72-0.92). Effects of empagliflozin on these outcomes were consistent across categories of eGFR and urine albumin-creatinine ratio at baseline and across the 2 doses studied. The adverse event profile of empagliflozin in patients with eGFR <60 mL·min−1·1.73 m−2 was consistent with the overall trial population. Conclusions: Empagliflozin improved clinical outcomes and reduced mortality in vulnerable patients with type 2 diabetes mellitus, established cardiovascular disease, and chronic kidney disease. Clinical Trial Registration: URL: https://www.clinicaltrials.gov. Unique identifier: NCT01131676.
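A small sketch of how the reported hazard ratios and 95% confidence intervals can be turned into approximate standard errors and p-values; this is a standard back-calculation on the log scale, not a figure reported by the trial itself:

import math

def z_and_p_from_hr_ci(hr, lo, hi):
    # Work on the log scale: SE(log HR) ~ (log(hi) - log(lo)) / (2 * 1.96).
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
    z = math.log(hr) / se
    # Two-sided p-value from the normal approximation.
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Cardiovascular death in patients with prevalent kidney disease: HR 0.71 (0.52-0.98).
print(z_and_p_from_hr_ci(0.71, 0.52, 0.98))  # roughly z ~ -2.1, p ~ 0.03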
SIRT1 Regulates the Human Alveolar Epithelial A549 Cell Apoptosis Induced by Pseudomonas Aeruginosa Lipopolysaccharide Background: Sirtuin1 (SIRT1) is an NAD+-dependent deacetylase that plays an inhibitory role in cell apoptosis, which is associated with p53 deacetylation. Lipopolysaccharide (LPS) is a key virulence factor produced by Pseudomonas aeruginosa and plays an important role in mediating the interactions between the bacterium and its host. However, the effect of SIRT1 in the regulation of LPS-induced human alveolar epithelial A549 cells apoptosis is unknown. Methods: Cell viability, apoptosis and reactive oxygen species (ROS) production were first examined in A549 cells that were treated with LPS. Relative cell signaling pathways were further explored by western blot analysis. Results: Exposure of A549 cells to LPS decreased cell viability in a concentration- and time- dependent manner. LPS stimulated cell apoptosis and ROS production while inhibiting the expression of SIRT1 in A549 cells. Activation of SIRT1 by exposure to resveratrol significantly reversed the effects of LPS on A549 cells. In contrast, inhibition of SIRT1 by nicotinamide had the opposite effects enhancing cell apoptosis and ROS production. Conclusion: SIRT1 plays an important role in regulating the human alveolar epithelial A549 cell apoptosis process induced by LPS.
/**
 * Reads key information from the given file.
 * Returns an empty map if the file does not yet exist or cannot be deserialized.
 */
@SuppressWarnings("unchecked")
public static Map<String, Map<String, String>> readKeysInfo(String fname) {
    File keysInfoFile = new File(fname);
    if (!keysInfoFile.exists()) {
        try {
            keysInfoFile.createNewFile();
        } catch (IOException ex) {
            LOGGER.log(Level.SEVERE, null, ex);
        }
    }
    try (FileInputStream fis = new FileInputStream(keysInfoFile);
         ObjectInputStream in = new ObjectInputStream(fis)) {
        return (Map<String, Map<String, String>>) in.readObject();
    } catch (IOException | ClassNotFoundException ex) {
        // A freshly created (empty) file also ends up here, since it contains no serialized object.
        LOGGER.log(Level.SEVERE, null, ex);
    }
    return new HashMap<>();
}
The cardiovascular and metabolic effects of bench stepping exercise in females. The purpose of this investigation was to measure cardiovascular and metabolic responses to 20 min continuous bouts of "choreographed" bench stepping exercise in healthy females. Four frequently used bench heights were employed in a cross-over design: 15.2 cm (6 inches, B-6), 20.3 cm (8 inches, B-8), 25.4 cm (10 inches, B-10), and 30.5 cm (12 inches, B-12). Oxygen uptake (VO2) responses were significantly more pronounced in direct relationship to the bench height: B-12 > B-10 > B-8 > B-6 (P < 0.05). Mean responses for VO2 ranged from 28.4 ml·kg−1·min−1 for B-6 to 37.3 ml·kg−1·min−1 for B-12. Interestingly, no difference was revealed for heart rate and the respiratory exchange ratio between B-12 and B-10 despite a higher VO2 for B-12 (B-12, B-10 > B-8 > B-6, P < 0.05). The incorporation of 0.91 kg (2 lb) hand weights with exercise on the 20.3 cm bench elicited a modest but statistically significant increase in VO2 compared with no hand weights. No significant increase in VO2 was revealed for conditions that employed 0.45 kg (1 lb) hand weights. The results demonstrate that aerobic bench stepping is an exercise modality that provides sufficient cardiorespiratory demand for enhancing aerobic fitness and promoting weight loss in females.
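For readers who think in METs rather than oxygen uptake, a quick conversion of the reported mean VO2 values using the conventional definition of 1 MET = 3.5 ml·kg−1·min−1 (a standard approximation, not a figure from the study):

# Approximate MET equivalents of the mean VO2 responses reported above.
MET_ML_KG_MIN = 3.5  # conventional resting value

for bench, vo2 in {"B-6": 28.4, "B-12": 37.3}.items():
    print(f"{bench}: {vo2} ml/kg/min ~ {vo2 / MET_ML_KG_MIN:.1f} METs")
# B-6 works out to roughly 8 METs and B-12 to roughly 10-11 METs, i.e. vigorous-intensity exercise.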
package com.melink.open.api.model;

import com.fasterxml.jackson.annotation.JsonIgnore;
import com.melink.microservice.utils.GUIDGenerator;

import java.io.Serializable;

/**
 * The persistent class for the open_app_prop database table.
 */
public class OpenAppProp implements Serializable {

    private String guid;
    private String name;
    private String value;
    private String openAppId;

    @JsonIgnore
    private OpenApp openApp;

    public OpenAppProp() {
        this.guid = GUIDGenerator.generate();
    }

    public String getGuid() {
        return guid;
    }

    public void setGuid(String guid) {
        this.guid = guid;
    }

    public String getName() {
        return this.name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public String getValue() {
        return this.value;
    }

    public void setValue(String value) {
        this.value = value;
    }

    public String getOpenAppId() {
        return openAppId;
    }

    public void setOpenAppId(String openAppId) {
        this.openAppId = openAppId;
    }

    public OpenApp getOpenApp() {
        return openApp;
    }

    public void setOpenApp(OpenApp openApp) {
        this.openApp = openApp;
    }
}
Construction and evaluation of multitracer small-animal PET probabilistic atlases for voxel-based functional mapping of the rat brain. UNLABELLED Automated voxel-based or predefined volume-of-interest (VOI) analysis of rodent small-animal PET data is necessary for optimal use of information because the number of available resolution elements is limited. We have mapped metabolic (18F-FDG), dopamine transporter (DAT; 18F-FECT, 2'-[18F]fluoroethyl (1R-2-exo-3-exe)-8-methyl-3-(4-chlorophenyl)-8-azabicyclo-octane-2-carboxylate), and dopaminergic D2 receptor (11C-raclopride) small-animal PET data onto a 3-dimensional T2-weighted MRI rat brain template oriented according to the rat brain Paxinos atlas. In this way, ligand-specific templates for sensitive analysis and accurate anatomic localization were created. Registration accuracy and test-retest and intersubject variability were investigated. Also, the feasibility of individual rat brain statistical parametric mapping (SPM) was explored for 18F-FDG and DAT imaging of a 6-hydroxydopamine (6OHDA) model of Parkinson's disease. METHODS Ten adult Wistar rats were scanned repetitively with multitracer small-animal PET. Registrations and affine spatial normalizations were performed using SPM2. On the MRI template, a VOI map representing the major brain structures was defined according to the stereotactic atlas of Paxinos. 18F-FDG data were count normalized to the whole-brain uptake, whereas parametric DAT and D2 binding index images were constructed by reference to the cerebellum. Registration accuracy was determined using random simulated misalignments and vectorial mismatching. RESULTS Registration accuracy was between 0.24 and 0.86 mm. For 18F-FDG uptake, intersubject variation ranged from 1.7% to 6.4%. For 11C-raclopride and 18F-FECT data, these values were 11.0% and 5.3%, respectively, for the caudate-putamen. Regional test-retest variability of metabolic normalized data ranged from 0.6% to 6.1%, whereas the test-retest variability of the caudate-putamen was 14.0% for 11C-raclopride and 7.7% for 18F-FECT. SPM analysis of 3 individual 6OHDA rats showed severe hypometabolism in the ipsilateral sensorimotor cortex (P ≤ 0.0004) and a striatal decrease in DAT availability (P ≤ 0.0005, corrected). CONCLUSION MRI-based small-animal PET templates facilitate accurate assessment and spatial localization of rat brain function using VOI or voxel-based analysis. Regional intersubject and test-retest variations found in this study, as well as registration errors, indicate that accuracy comparable to the human situation can be achieved. Therefore, small-animal PET with advanced image processing is likely to play a useful role in detailed in vivo molecular imaging of the rat brain.
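The "binding index by reference to the cerebellum" mentioned above is, in its simplest form, a target-to-reference uptake ratio minus one; a schematic illustration of that computation on made-up voxel values (the actual parametric-image construction in the paper may be more involved):

import numpy as np

def binding_index(target_uptake, cerebellum_uptake):
    # Simple reference-region ratio: (target - reference) / reference.
    return target_uptake / cerebellum_uptake - 1.0

# Hypothetical mean uptake values (arbitrary units) in caudate-putamen vs. cerebellum,
# e.g. two control animals and one 6-OHDA-lesioned side.
striatum = np.array([2.4, 2.1, 0.9])
cerebellum = np.array([1.0, 1.0, 1.0])
print(binding_index(striatum, cerebellum))  # the lesioned side shows a clearly reduced index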
import { PotionFactory } from "./PotionFactory";
import { Potion } from "../models/Items/Potions/Potion";
import { BagType } from "../enums/BagType";
import { ItemType } from "../enums/ItemType";
import { Item } from "../models/Items/Item";
import { SettingFactory } from "./SettingFactory";
import { ArmorType } from "../enums/ArmorType";
import { Armor } from "../models/Items/Armor";
import { WeaponType } from "../enums/WeaponType";
import { Weapon } from "../models/Items/Weapon";
import { WeaponFactory } from "./WeaponFactory";
import { SettingType } from "../enums/SettingType";
import { Setting } from "../models/Settings/Setting";
import { SpellFactory } from "./SpellFactory";
import { SpellType } from "../enums/SpellType";
import { Spell } from "../models/Spells/Spell";
import { PotionType } from "../enums/Potions/PotionType";
import { ArmorFactory } from "./ArmorFactory";
import { PotionColor } from "../enums/Potions/PotionColor";

/**
 * Item factory that creates any item (potion, armor, weapon, setting, or spell)
 * based off the item enums, delegating to the specialized factories.
 */
export class ItemFactory {

    /**
     * Create any potion based off type and color
     * @param type Potion to be created
     * @param color Color of the potion
     */
    public static CreatePotion(type: PotionType, color: PotionColor): Potion {
        return PotionFactory.Create(type, color);
    }

    /**
     * Create any armor based off type
     * @param type Armor to be created
     */
    public static CreateArmor(type: ArmorType): Armor {
        return ArmorFactory.Create(type);
    }

    /**
     * Create any weapon based off type
     * @param type Weapon to be created
     */
    public static CreateWeapon(type: WeaponType): Weapon {
        return WeaponFactory.Create(type);
    }

    /**
     * Create any setting based off type
     * @param type Setting to be created
     */
    public static CreateSetting(type: SettingType): Setting {
        return SettingFactory.Create(type);
    }

    /**
     * Create any spell based off type
     * @param type Spell to be created
     */
    public static CreateSpell(type: SpellType): Spell {
        return SpellFactory.Create(type);
    }
}
Say cheese? The connections between positive facial expressions in student identification photographs and health care seeking behavior This study examined whether positive facial expressions in student identification photographs were connected with a health-relevant behavior: visits to a health care center in the last year for preventive and non-preventive (e.g. illness, injury) purposes. Identification photographs were coded for degree of smile. Smiling participants were more likely to have sought preventive care versus those not smiling in their photographs, but there was no difference in non-preventive (i.e. ill health) visits. This study shows for the first time that smiling in photographs may be related to healthy behavior and complements past work connecting smiling to positive psychosocial and health outcomes.
import * as p from "@qramana/qramana";

// Find the |11> state with a single Grover iteration on two qubits.
const q1 = new p.Qubit({ value: 0 });
const q2 = new p.Qubit({ value: 0 });

// Prepare a uniform superposition.
q1.h();
q2.h();

// Apply the Grover iteration.
// Oracle: flip the phase of the target |11> state.
q2.controlledZ(q1);

// Diffusion: flip amplitudes around the average state.
q1.h();
q2.h();
q1.x();
q2.x();
q2.controlledZ(q1);
q1.x();
q2.x();
q1.h();
q2.h();

// Show results.
const m1 = q1.measure();
const m2 = q2.measure();
console.log("Measurement result: q1=" + m1 + ", q2=" + m2);
AFP/Getty Images President Donald Trump is blaming Democrats for obstructing his nominees, even as he lags other presidents in acting to fill high-level posts. President Donald Trump is blaming Democrats for the slow pace of confirming his nominees, even as he lags his predecessors in tapping people for high-level positions. In a tweet on Tuesday morning, Trump said Democrats “can’t win so all they do is slow things down & obstruct!” He said only 48 of 197 nominees have been confirmed. The Senate Democrats have only confirmed 48 of 197 Presidential Nominees. They can't win so all they do is slow things down & obstruct! — Donald J. Trump (@realDonaldTrump) July 11, 2017 As of June 30, Trump had nominated 242 people to key executive posts, according to a report by the Congressional Research Service. That compares to 336 and 379 nominated during the same period by Presidents Barack Obama and George W. Bush, respectively. There are more than 1,200 positions that require Senate confirmation, including cabinet secretaries, agency directors and ambassadors, as the Partnership for Public Service notes in its political appointee tracker. Republicans control the Senate and thus consideration for nominees. But Democrats can use the filibuster to slow down the process. On Monday, White House Legislative Affairs Director Marc Short said Senate Minority Leader Chuck Schumer had run “an unprecedented campaign of obstruction” against Trump’s nominees for high-ranking government positions. Schumer, a New York Democrat, said in response that “no administration in recent memory has been slower in sending nominees to the Senate.” Also read: Trump taps Randal Quarles to be Fed’s top banking regulator.
package jdbcext.types;

import java.math.BigDecimal;
import java.util.Date;

public interface OracleStruct extends StructBuilder, DisposableType {

    OracleStructDescriptor getStructDescriptor();

    void setValue(final String fieldName, final Object value);

    void setStringValue(final String fieldName, String value);

    void setIntegerValue(final String fieldName, Integer value);

    void setLongValue(final String fieldName, Long value);

    void setBigDecimalValue(final String fieldName, BigDecimal value);

    void setDateValue(final String fieldName, Date value);

    Object getValue(final String fieldName);

    <T> T getValue(final String fieldName, Class<T> type);

    String getStringValue(final String fieldName);

    Integer getIntegerValue(final String fieldName);

    Long getLongValue(final String fieldName);

    BigDecimal getBigDecimalValue(final String fieldName);

    Date getDateValue(final String fieldName);
}
# Count judge verdicts (AC / WA / TLE / RE) read from standard input.
AC = 0
WA = 0
TLE = 0
RE = 0

# First line: number of verdicts; following lines: one verdict each.
a_list = [input() for j in range(int(input()))]

for a in a_list:
    if a == "AC":
        AC += 1
    elif a == "WA":
        WA += 1
    elif a == "TLE":
        TLE += 1
    else:
        RE += 1

print(f"AC x {AC}\nWA x {WA}\nTLE x {TLE}\nRE x {RE}")
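For what it's worth, the same tally can be written more compactly with collections.Counter; this is an optional rewrite sketch, assuming the same input format as above:

# Sketch of an equivalent tally using collections.Counter (optional rewrite, same I/O format).
from collections import Counter

n = int(input())
counts = Counter(input() for _ in range(n))
for verdict in ("AC", "WA", "TLE", "RE"):
    print(f"{verdict} x {counts[verdict]}")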
Attenuation in Superconducting Circular Waveguides We present an analysis on wave propagation in superconducting circular waveguides. In order to account for the presence of quasiparticles in the intragap states of a superconductor, we employ the characteristic equation derived from the extended Mattis-Bardeen theory to compute the values of the complex conductivity. To calculate the attenuation in a circular waveguide, the tangential fields at the boundary of the wall are first matched with the electrical properties (which includes the complex conductivity) of the wall material. The matching of fields with the electrical properties results in a set of transcendental equations which is able to accurately describe the propagation constant of the fields. Our results show that although the attenuation in the superconducting waveguide above cutoff (but below the gap frequency) is finite, it is considerably lower than that in a normal waveguide. Above the gap frequency, however, the attenuation in the superconducting waveguide increases sharply. The attenuation eventually surpasses that in a normal waveguide. As frequency increases above the gap frequency, Cooper pairs break into quasiparticles. Hence, we attribute the sharp rise in attenuation to the increase in random collision of the quasiparticles with the lattice structure. Introduction Circular waveguides have been widely used in radio telescopes to channel signals to the receiver circuits. The front end receiver noise temperature is determined by a number of factors. These include the mixer noise temperature T M, the conversion loss C loss, the noise temperature of the first IF amplifier T F and the coupling efficiency between the IF port of the junction and the input port of the first IF amplifier IF. Walker et al. have performed a comparison among different waveguide receivers. It was found that the deterioration in system performance is partly affected by the increase in conversion loss C loss. Since signals from distant sources are usually extremely faint, it is therefore important to ensure that the conversion loss C loss of the mixer circuit could be kept to its minimal. To minimize the loss of the signals, the availability of a highly efficient waveguide is certainly central to the development of the receiver circuit. Most waveguides implemented in the receiver system are fabricated using copper. Due to the weak intensity of the signals, however, the attenuation level in standard metallic waveguides may actually cause significant degradation to the signals. Superconductors are known to feature low loss. It is, therefore, interesting to perform an investigation on wave propagation in superconducting circular waveguides. In and, analysis on the performance of superconducting circular waveguides has been performed based on Mattis-Bardeen theory. Since the equations are derived from Bardeen, Cooper and Schrieffer BCS weak coupling theory, it takes into account the presence of the gap energy. According to the BCS theory, the electronic states in the immediate vicinity of the Fermi energy E F have their energy pushed away from E F. Hence, no quasiparticle state exists within the gap energy. Recent findings show, however, that this may not be true. Experimental measurements actually suggested that intragap states exist within the gap energy -. In and, Noguchi et al. have modified Mattis-Bardeen theory to account for the presence of the intragap states. 
Measurements on the surface resistance of the superconductor were found to agree with those computed using this extended Mattis-Bardeen theory. Since the new equations are able to give a more realistic behavior of a superconductor, we have applied them in to analyze wave propagation in superconducting rectangular waveguides. Here, we extend further our approach to the case of a superconducting circular waveguide. We apply the complex conductivity of a superconductor, derived using the extended Mattis-Bardeen theory, onto the equations presented in which calculate the attenuation constant of a circular waveguide. In order to present a complete scheme, we briefly outline the extended Mattis-Bardeen theory and the characteristic equations in in the following sections. Superconducting complex conductivity Due to the existence of the gap energy 2∆(T) in a superconductor, the conductivity of the material is complex and can be expressed as where 1 and 2 represent the quasiparticle and Cooper-pair currents in a superconductor, respectively. In order to take into account the existence of the intragap states, Noguchi et al. suggested to express the gap energy as a complex variable, i.e. ∆ = ∆ 1 + j∆ 2, where ∆ 1 and ∆ 2 are real,. Here, ∆ 1 can be expressed with the real gap energy given below, T is the operating temperature, T c the critical temperature of the superconductor and E = 1.781 is the Euler's constant. By extending Mattis-Bardeen theory, the new complex conductivity is derived as follows ( where E r and E i are, respectively, the real and imaginary parts of the complex quasiparticle excitation energy E and n is the normal conductivity of the material at room temperature. The function, ( ) gives the Fermi-Dirac statistics, where k is the Boltzmann's constant. Niobium Nb has been widely used in the fabrication of the Superconductor-Insulator-Superconductor SIS mixer in millimeter/submillimeter radio receivers. Here, we employ Nb as the wall material of the circular waveguide. The critical temperature T c, energy gap at 0 K 2∆ and normal conductivity n of Nb are, respectively, given as 9.2 K, 3.05 meV and 1.57 10 7 S/m. In an actual Nb film, the imaginary part of the energy gap ∆ 2 is found to be 10 −4 of its real part ∆ 1,. Propagation in a circular waveguide where is the angular frequency, the permeability of the wall material and is the permittivity of free space. For a superconducting waveguide, the conductivity in can be found by solving. By letting the determinant of the coefficients in vanish; we obtain the following transcendental equation ( ) where J n () denotes the Bessel function of the first kind, J n '() its derivative,, n the order of the Bessel function, k the wavenumber in free space and k z is the wave propagation constant. The propagation constant k z = z -j z is a complex variable which comprises both the phase constant z and attenuation constant z. By extracting the imaginary part of k z, the attenuation constant z can therefore be obtained. Results and discussion By numerically solving, the attenuation constant of a Nb circular waveguide with radius a = 8.1 mm, operating at both room temperature and under the critical temperature T c at T = 4.2 K is calculated. Here, we have applied the Powell Hybrid root-searching algorithm to determine the roots of. To solve for the integrals in the complex gap energy ∆ and the complex conductivity, we have applied the algorithms in the SLATEC mathematical library. Fig. 
2 illustrates the overall attenuation of the dominant TE11 mode from frequency f = 0 to 1.5 THz. It can be observed from the figure that the superconducting waveguide behaves differently at different range of frequencies. To analyze the behavior of the waveguide, we separate the attenuation into 3 parts, i.e. the attenuation at (i) frequency f below cutoff f c (f < f c ), (ii) f above cutoff but below the gap frequency f g (f c < f < f g ) and (iii) f above f g (f > f g ). Fig. 3 depicts the attenuation of TE11 mode below the cutoff frequency f c ; whereas, Figs. 4 to 6 depicts the attenuation above f c. As can be observed from the figures, at frequency f below cutoff f c, the attenuation in the superconducting waveguide is somewhat higher than that operating at normal state. On the other hand, when f increases above f c, the attenuation in the superconducting waveguide below the gap frequency f g decreases dramatically. As can be seen in Figs. 4 and 5, the attenuation in the superconducting waveguide turns out to be considerably lower than its other counterpart which is operating at room temperature. It is worthwhile noting that, although the attenuation we found here is low, it is finite. This is in contrast to the results shown in and, where the attenuation above f c (but below f g ) is found to be infinitesimal. Since and (which applied Mattis-Bardeen theory) assume that quasiparticles do not exist in a superconductor, while our method accounts for their presence, it is apparent that the attenuation found here is contributed by the quasiparticles at the intragap states. Ideally, a lossless waveguide behaves like a high pass filter where signals below the cutoff frequency f c cease to propagate through the waveguide. Above f c however, the attenuation in the lossless waveguide decreases sharply, allowing signals to propagate with negligible loss. Hence, it can be clearly seen from Figs. 2 to 5 that a superconducting circular waveguide behaves closer to a lossless waveguide than a normal waveguide. During superconducting state, the density of quasiparticles in the material is low. These quasiparticles are mainly those in the intragap states within the gap energy. Hence, energy loss due to collisions with the lattice structure is very low as well. This allows the superconducting waveguide to behave closer to a perfect waveguide, which is ideally lossless. As the frequency f increases above the gap frequency f g, the photon energy exceeds the gap energy. With sufficient absorption of energy, Cooper pairs break into quasiparticles. Hence, the waveguide operating below the critical temperature T c starts to lose its superconductivity. As can be clearly seen from Fig. 6, at f above f g, the attenuation in the waveguide at 4.2 K increases significantly. In fact, it surpasses that found in a normal waveguide. We attribute this phenomenon to the increase of random collision between the quasiparticles and the lattice structure at the wall, resulting in higher conduction loss in the waveguide. The results found in the superconducting circular waveguide agree with those of the superconducting rectangular waveguide. They, therefore, corroborate the findings in. Conclusion We have performed an analysis on superconducting circular waveguides based on the extended Mattis-Bardeen theory. In contrast with those found in literatures, our results show that the loss above cutoff in a superconducting waveguide is not infinitesimal. 
Although the loss turns out to be considerably lower than that in a normal waveguide, it is certainly finite. We attribute this phenomenon to the presence of quasiparticles in the intragap states within the gap energy. Above the gap frequency, the photon energy exceeds the gap energy and Cooper pairs break into quasiparticles. The waveguide operating below the critical temperature Tc therefore loses its superconducting characteristics and the loss increases significantly. Our results show that the loss of the superconducting waveguide eventually surpasses that found in a normal waveguide.
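Numerically, the attenuation calculation described above comes down to locating a complex root k_z of the transcendental field-matching equation and reading the attenuation constant from its imaginary part (k_z = beta_z - j*alpha_z). The following is a minimal sketch of that step only; the characteristic function is a hypothetical stand-in, since the actual equation depends on the Bessel-function field matching and the superconductor's complex conductivity, which are not reproduced here. The 'hybr' method is SciPy's Powell hybrid algorithm, matching the root-search approach mentioned in the paper.

import numpy as np
from scipy.optimize import root

def characteristic(kz, k, a):
    # Hypothetical stand-in for the transcendental field-matching equation F(kz) = 0.
    # The real equation involves Bessel functions of the radial wavenumber
    # kr = sqrt(k^2 - kz^2) and the wall's complex conductivity.
    kr = np.sqrt(k**2 - kz**2 + 0j)
    return np.cos(kr * a) - 0.001j * kz

def solve_kz(k, a, kz_guess):
    # Split the complex unknown into (real, imag) so a real-valued root finder applies.
    def f(x):
        val = characteristic(x[0] + 1j * x[1], k, a)
        return [val.real, val.imag]
    sol = root(f, [kz_guess.real, kz_guess.imag], method='hybr')  # Powell hybrid
    return sol.x[0] + 1j * sol.x[1]

# Hypothetical numbers: 100 GHz signal, 8.1 mm waveguide radius.
kz = solve_kz(k=2 * np.pi * 100e9 / 3e8, a=8.1e-3, kz_guess=1500.0 + 0.1j)
beta_z, alpha_z = kz.real, -kz.imag  # k_z = beta_z - j*alpha_z
print("phase constant:", beta_z, "attenuation constant:", alpha_z)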
// src/pages/MyItems.tsx
import React from "react";
import { Container } from "react-bootstrap";

function MyItems() {
  return <Container>MyItems</Container>;
}

export default MyItems;
/**
 * List of RDBMS engines.
 */
public static final class RDBMSEngines {

    private RDBMSEngines() {
        throw new AssertionError();
    }

    public static final String MYSQL = "mysql";
    public static final String DERBY = "derby";
    public static final String MSSQL = "mssqlserver";
    public static final String ORACLE = "oracle";
    public static final String DB2 = "db2";
    public static final String HSQLDB = "hsqldb";
    public static final String POSTGRESQL = "postgresql";
    public static final String SYBASE = "sybase";
    public static final String H2 = "h2";
    public static final String INFORMIX_SQLI = "informix-sqli";
    public static final String GENERIC = "Generic";
}
/// Obtain a symbol from this library of the specified type.
pub(crate) unsafe fn symbol<'library, F>(
    &'library self,
    name: &std::ffi::CStr,
) -> Result<Symbol<'library, F>, String> {
    let inner = libc::dlsym(self.handle, name.as_ptr());
    if inner.is_null() {
        return Err(dlerror());
    }
    Ok(Symbol {
        inner,
        _library: Default::default(),
        _type: Default::default(),
    })
}
Divergent effects of muscarinic receptor subtype gene ablation on murine colon tumorigenesis reveals association of M3R and zinc finger protein 277 expression in colon neoplasia Background M3 and M1 subtype muscarinic receptors are co-expressed in normal and neoplastic intestinal epithelial cells. In mice, ablating Chrm3, the gene encoding M3R, robustly attenuates intestinal tumor formation. Here we investigated the effects of Chrm1 gene ablation, alone and in combination with Chrm3 ablation. Methods We used wild-type, Chrm1-/-, Chrm3-/- and combined Chrm1-/-/Chrm3-/- knockout (dual knockout) mice. Animals were treated with azoxymethane, an intestine-selective carcinogen. After 20 weeks, colon tumors were counted and analyzed histologically and by immunohistochemical staining. Tumor gene expression was analyzed using microarray and results validated by RT-PCR. Key findings were extended by analyzing gene and protein expression in human colon cancers and adjacent normal colon tissue. Results Azoxymethane-treated Chrm3-/- mice had fewer and smaller colon tumors than wild-type mice. Reductions in colon tumor number and size were not observed in Chrm1-/- or dual knockout mice. To gain genetic insight into these divergent phenotypes we used an unbiased microarray approach to compare gene expression in tumors from Chrm3-/- to those in wild-type mice. We detected altered expression of 430 genes, validated by quantitative RT-PCR for the top 14 up- and 14 down-regulated genes. Comparing expression of this 28-gene subset in tumors from wild-type, Chrm3-/-, Chrm1-/- and dual knockout mice revealed significantly reduced expression of Zfp277, encoding zinc finger protein 277, in tissue from M3R-deficient and dual knockout mice, and parallel changes in Zfp277 protein expression. Notably, mRNA and protein for ZNF277, the human analogue of Zfp277, were increased in human colon cancer compared to adjacent normal colon, along with parallel changes in expression of M3R. Conclusions Our results identify a novel candidate mouse gene, Zfp277, whose expression pattern is compatible with a role in mediating divergent effects of Chrm3 and Chrm1 gene ablation on murine intestinal neoplasia. The biological importance of this observation is strengthened by finding increased expression of ZNF277 in human colon cancer with a parallel increase in M3R expression. The role of zinc finger protein 277 in colon cancer and its relationship to M3R expression and activation are worthy of further investigation. Background Activation of muscarinic receptors and downstream signaling was shown to stimulate proliferation of cells derived from lung, breast, prostate, colon, and skin cancers, and muscarinic receptors are frequently over-expressed in these common cancers. Hence, it is highly likely that activation of muscarinic receptor signaling plays a fundamentally important role in neoplastic transformation and progression. Of five cholinergic muscarinic receptor subtypes, designated M1R -M5R, human colon cancer cells express primarily M3R. M3R activation stimulates cell proliferation, survival, migration and invasion [6, key hallmarks of neoplasia. Human colon cancer cells also produce and release acetylcholine at concentrations capable of activating M3R and stimulating cell proliferation, identifying the capacity for autocrine and paracrine stimulation of M3R signaling. Jointly, these in vitro studies provided strong evidence that M3R expression and signaling are particularly important in the progression of colon neoplasia. 
Mice with targeted knockout of genes encoding each of the five muscarinic receptor subtypes (Chrm1 -Chrm5) are useful for investigating their biological functions. We showed that ablating Chrm3, the gene encoding M3R, attenuates colon neoplasia in mice treated with azoxymethane (AOM), a colon-selective carcinogen. Compared to AOM-treated WT mice, AOM-treated Chrm3 knockout mice had 40% and 60% reductions in tumor number and size, respectively. Similar results were obtained using Apc min/+ mice, a genetic model of intestinal neoplasia. These findings suggested to us that treatments directed at reducing M3R expression, activation or downstream signaling might be useful to prevent or treat colon neoplasia. Indeed, Apc min/+ mice treated with scopolamine butylbromide, an inhibitor of muscarinic receptor activation, developed fewer intestinal tumors than vehicle-treated control mice. Many cell types co-express muscarinic receptor subtypes. Using in situ hybridization, we demonstrated expression of mRNA for both M1R and M3R in murine gastric and colonic epithelial cells. Likewise, human colon cancer cells used to investigate in vitro actions of muscarinic receptors and ligands express a mixture of M3R and M1R, with a predominance of M3R. Whereas expression of multiple muscarinic receptor subtypes in the same cell type is likely to provide growth and survival advantages, it can also result in complex, unpredictable interactions. This was apparent when we examined the impact of Chrm1 and Chrm3 co-expression in gastric chief cells that synthesize and release the proenzyme pepsinogen. M3R deficiency did not alter carbamylcholine (carbachol)-induced pepsinogen release, but M1R deficiency resulted in a 25% decrease in pro-enzyme release. Strikingly, in mice deficient in both M1R and M3R, carbachol-induced pepsinogen secretion was totally abolished. These observations motivated us to examine the role of M1R (Chrm1) expression in colon neoplasia and to determine whether knocking out both M1R (Chrm1) and M3R (Chrm3) in the same animal (hereafter called dual KO mice) would more effectively attenuate AOMinduced colon neoplasia than M3R (Chrm3) knockout alone. We also took advantage of these murine models to identify genes whose expression levels might be relevant to resulting colon tumor phenotypes. As a group dual KO mice were extremely frail, necessitating a modified study design; whereas AOM treatment was started in six-week-old Chrm3 -/and Chrm1 -/mice, AOM treatment was delayed in dual KO mice until they were 11 to 12-weeks-old and better able to tolerate AOM treatment. Even with this modification, at 11-12 of age dual KO mice weighed~25% less than WT mice ( Figure 1B), and in contrast to the other genotypes dual KO mice lost weight during the first six weeks of AOM treatment; at 20 weeks they still weighed~30% less than WT mice ( Figure 1B). Although the gross anatomical appearance of colons in knockout mice was normal, we detected a modest but statistically significant reduction in colon length ( Figure 1C). These differences probably reflect the lower body weights of Chrm3 -/-, Chrm1 -/and dual KO compared to WT mice ( Figure 1B). Colon length was not significantly different when compared within the three groups of knockout mice. Microscopic review of H&E-stained tissue sections from untreated (no AOM or vehicle) mice with the four muscarinic receptor genotypes by a senior gastrointestinal pathologist (CD) revealed no differences in colon epithelial morphology ( Figure 1D). 
Mice treated with vehicle alone did not develop colon neoplasia (not shown). Representative photographs in Figure 2A show robust colon tumor formation in AOMtreated mice. At the 20-week end-point, only colons from Chrm3 -/mice had reduced tumor burden compared to WT mice (Figure 2A). Tumor measurements revealed 28 and 74% reduction, respectively, in tumor number and size for Chrm3 -/compared to WT mice (P < 0.05 and P < 0.005, respectively) ( Figure 2B and C), consistent with our previous work. When colon tumors were stratified by volume ( Figure 2D), a shift in tumor size in Chrm3 -/mice towards smaller lesions (< 2 mm 3 ) became evident. We were surprised to observe that genetic ablation of Chrm1 did not alter the number or volume of colon tumors in Chrm1 -/mice compared to WT mice (Figure 2A-D). Even more surprising were the outcomes in dual KO mice; tumor number and volume were only slightly reduced compared to those in WT and Chrm1 -/mice (Figure 2A-D). That is, concomitant genetic ablation of Chrm1 in Chrm3 -/mice appeared to mitigate reductions in both colon tumor number and volume that we repeatedly observed in AOM-treated Chrm3 -/compared to WT mice (Figure 2A-D). As shown in Figure 2E, in WT, Chrm1 -/and dual KO mice the majority of colon tumors were adenocarcinomas. In contrast, Chrm3 -/mice had nearly equivalent numbers of adenomas and adenocarcinomas. Although adenomas were numerically less frequent in dual KO compared to WT mice ( Figure 2E), this was not a significant difference (P = 0.1). Likewise, there were no significant differences in the numbers of adenocarcinomas per section when comparing Chrm1 -/-, dual KO and WT mice. Conversely, the 76% reduction in adenocarcinomas in colons from Chrm3 -/compared to WT mice was highly significant (P < 0.001) ( Figure 2E). These findings suggest that the major impact of M3R deficiency in AOM-treated mice is to block progression of colon adenomas to adenocarcinomas. We evaluated the multiplicity of adenocarcinomas per section; 56% of Chrm3 -/mouse colons had no adenocarcinomas and only one Chrm3 -/mouse had more than one colon adenocarcinoma. In contrast, more than 50% of WT, Chrm1 -/and dual KO mice had multiple (two to seven) adenocarcinomas per section and only two of 20 WT (10%) and one of 12 dual KO (8%) mice had no adenocarcinomas (P < 0.01 for reduced multiplicity of tumors in colons from Chrm3 -/mice vs. colons from the other three genotypes). Colons from all Chrm1 -/mice contained at least one adenocarcinoma. We considered the possibility that modifying our protocol to delay AOM treatment of frail dual KO mice until they were 12 weeks old might have impacted outcomes -AOM treatment in the three other genotypes started when mice were six weeks old. To exclude this as a confounder, we started AOM treatment in WT mice at six (N = 25) or 12 (N = 16) weeks of age. Twenty weeks Figure 1 Study protocol, animal weights, colon length and histological appearance of colon sections from mice with different Chrm genotypes. A: Schematic of study design; WT, Chrm3 -/-, Chrm1 -/and dual KO male mice were treated with intraperitoneal injection of AOM (10 mg/kg) or an equal volume of vehicle (phosphate buffered saline) weekly for 6 weeks and followed for a total of 20 weeks. At 20 weeks, animals were euthanized and colon tumor number and size, and mucosal markers of proliferation and apoptosis were measured. B: Weights of AOM-treated mice during the 20-week study (mean ± S.E.). 
C: Colon length of AOM-treated mice was measured following euthanasia at 20 weeks (mean ± S.E.). D No morphological differences were seen in hematoxylin and eosin (H&E)-stained microscopic sections of normal colon tissue from WT, Chrm3 -/-, Chrm1 -/and dual KO mice. Size bars = 50 micrometers. after starting AOM treatment there was no difference in tumor number or volume when comparing mice that started AOM treatment at age six versus 12 weeks (Additional file 1). We concluded that the failure to observe reduced tumor number and size in AOM-treated dual KO mice cannot be attributed to the delay in initiating AOM treatment. Effect of M1R, M3R and dual knockout on tumor cell proliferation and apoptosis To determine whether changes in tumor number and size resulted from differences in cell proliferation and apoptosis, we examined Ki67 and activated caspase-3 staining, respectively. Figure 3A shows representative micrographs of Ki67 staining in adenomas from AOMtreated WT, Chrm3 -/-, Chrm1 -/and dual KO mice. Compared to adenomas from AOM-treated WT mice, Ki67 staining was significantly reduced in those from Chrm3 -/and dual KO but not Chrm1 -/mice ( Figure 3). Figure 3B shows 58% reduction in Ki67-positive cells in adenomas from Chrm3 -/compared to those from WT mice (P < 0.01). Ki67 staining was reduced 42% in dual KO mice (P < 0.05 compared to WT mice). Thus, although Ki67 staining was significantly reduced in dual KO compared to WT mice, this reduction was less than that observed in Chrm3 -/mice ( Figure 3B), suggesting that in AOM-treated mice concomitant ablation of Chrm1 mitigates anti-proliferative effects of Chrm3 gene ablation. Figure 3C shows representative micrographs of activated caspase-3 staining in adenomas from AOMtreated WT, Chrm3 -/-, Chrm1 -/and dual KO mice. The number of apoptotic cells in adenomas from AOMtreated mice was almost two orders of magnitude lower than the number of proliferating cells (compare scales for vertical axes in Figure 3B and D). Apoptotic cells were reduced in adenomas from Chrm3 -/and dual KO compared to WT mice (P < 0.01 and < 0.001, respectively). In contrast, apoptotic cells were not significantly different in adenomas from Chrm1 -/and WT mice ( Figure 3D). Genes differentially expressed in colon tumors from WT and Chrm3 -/mice Tissues obtained in these experiments provided an opportunity to identify novel genes and signaling pathways potentially underlying tumor-promoting actions of M3R activation in the colon provide strong evidence that M3R acts as a tumor promoter, the present observations newly suggest that in the colon M1R acts as a tumor suppressor. This putative role for M1R is unmasked by combined M1R and M3R deficiency in dual KO mice where tumor formation is similar to that observed in WT mice. Thus, M1R deficiency appears to negate the beneficial effects of knocking out only M3R. This finding may also explain why treating Apc min/+ mice with a non-selective muscarinic receptor inhibitor, scopolamine butylbromide, attenuates intestinal tumor formation less efficaciously than M3R gene ablation. Scopolamine butylbromide treatment blocks both M3R and M1R activation, thereby mimicking combined M3R and M1R deficiency in dual KO mice. These considerations intimate therapeutic promise for a pharmacological approach to block M3R activation while at the same time augmenting M1R activation. 
The unanticipated findings regarding the impact of Chrm3 and Chrm1 knockout on AOM-induced colon neoplasia provided an opportunity to explore M3R-regulated changes in tumor gene expression. We identified a set of genes expressed differentially in tumors from WT and M3R-deficient mice. Then, we used tumors from M1R-deficient mice as controls to exclude non-specific changes in gene expression due to neoplastic transformation but not a specific consequence of altered M3R expression. We employed a gene microarray comprising 19,100 target genes to identify 430 genes with expression levels significantly altered in tumors from M3Rdeficient compared to WT mice (strategy schematized in Additional file 3); a dataset further refined by increasing statistical stringency and using qPCR to validate results. These combined approaches identified 14 promising genes with meaningful changes in expression in tumors from M3R-deficient compared to WT mice ( Figure 4B). We concede this gene discovery approach has limitations. It may have identified only a fraction of the genome; many genes are turned off or encode proteins required for survival in specific amounts that do not change. Also, protein expression may be regulated by mechanisms that do not involve altered mRNA levels. Financial constraints limited expression profiling experiments to a relatively small number of observations under identical conditions and for the same reason limited further investigation of candidate genes to the relatively small subset of 14 genes shown in Figure 4B, thereby reducing statistical power. We may have missed important but subtle changes in gene expression. Even so, we believe that confirmatory results from our qPCR experiments provided reliable measures of changes in the expression levels of this 14gene subset in both tumors and normal colon from WT, Chrm3 -/-, Chrm1 -/and dual KO mice. Changes in expression of only one gene, Zfp277, achieved statistical significance with matching changes in expression of the corresponding protein. Based on the stringent overall approach (Additional file 3), we are confident that our work identifies a novel role for Zfp277 as an M3R-regulated gene pertinent to the progression of intestinal neoplasia. Confidence in this conclusion was bolstered by detecting over-expression of both mRNA and protein for ZNF277, the human analogue of Zfp277, in human colon cancer samples and, importantly, that this over-expression mirrored that of CHRM3 and M3R. Literature and gene bank searches revealed little regarding the function of ZNF277 (NIRF4), which is expressed in multiple tissues, including the proximal colon. The protein product, zinc finger protein 277, is reported to play a role in cellular senescence and protection against genomic instability and cancer. ZNF277 over-expression is reported in other cancerschronic lymphocytic leukemia, well-differentiated renal cell carcinoma, and germ cell and endocrine tumors. Hence, a novel, hitherto unrecognized role for ZNF277 in colon cancer biology is certainly plausible. We found no previous reports of an association between either Zfp277 or ZNF277 expression and muscarinic receptor expression or activation. In future work, we plan to use in vitro and in vivo models to explore the molecular mechanisms underlying the association between M3R and ZNF277 expression. 
Conclusions Our results identify a novel candidate mouse gene, Zfp277, whose expression pattern is compatible with a role in mediating divergent effects of Chrm3 and Chrm1 gene ablation on murine intestinal neoplasia. Although finding an association between ZNF277 and M3R overexpression in human colon cancer is reassuring, we do not currently understand the molecular mechanism underlying this interaction. Future work will use both in vitro and in vivo approaches to address these questions and elucidate the functional role of ZNF277 and its interaction with M3R in colon cancer. Although it is currently unclear why M3R and M1R, which both signal by stimulating phospholipid turnover and changes in cell calcium, have such divergent effects on colon neoplasia, their contrary roles suggest that jointly targeting these receptors may have therapeutic potential. Based on these considerations, we are optimistic that improved understanding of the role of muscarinic receptors in neoplasia will continue to yield novel therapeutic targets for colon cancer. Animals Chrm3 -/and Chrm1 -/mice were generated from the same mixed genetic background (129S6/SvEvTac X CF1: 50%/ 50%) as described previously. Dual KO mice on the same genetic background were generated by mating homozygous Chrm1 -/and Chrm3 -/mutant mice. For all experiments, only male mice were used and agedmatched WT mice of the same genetic background served as controls. Mice were housed under identical conditions in a pathogen-free room, had free access to commercial rodent chow and water, and were acclimatized in the vivarium for at least one week before experiments. These studies were approved by the University of Maryland School of Medicine Institutional Animal Care and Use, and the Baltimore VA Research and Development Committees. Human tissues To examine M3R (CHRM3) and ZNF277 gene and protein expression, we used archived pre-existing de-identified surgical specimens of colon cancer and adjacent normal colon epithelium (approved by the University of Maryland School of Medicine Institutional Review Board and the Baltimore VA Research and Development Committee). Study design For the initial 6 weeks of treatment, 94 mice (25 WT, 20 Chrm3 -/-, 23 Chrm1 -/and 26 dual KO mice) received weekly intraperitoneal injections of azoxymethane (AOM; Midwest Research Institute; 10 mg/kg body weight) and 21 mice (5 WT, 5 Chrm3 -/-, 5 Chrm1 -/and 6 dual KO mice) received an equal volume of vehicle (phosphatebuffered saline) ( Figure 1A). As in our previous study, AOM and vehicle treatment in WT, Chrm3 -/and Chrm1 -/mice was started when animals were 6-weeks old. In dual KO mice, AOM and PBS treatments were initiated at 12 weeks of age. All animals were euthanized 20 weeks after initiating AOM injections. Colon length was measured, and segments were opened longitudinally and placed flat on microscope slides. Tumors were identified by visual inspection and photographed (Nikon SMZ1500 dissecting microscope). Tumor size was measured using calipers and tumor volume calculated using: volume = (length width 2 ). Histological and immunohistochemical staining analysis Tissues were fixed in 4% paraformaldehyde and paraffinembedded. Five-micrometer sections were stained with hematoxylin and eosin. Adenomas and adenocarcinomas were defined according to consensus recommendations by the Mouse Models of Human Cancers Consortium. 
As markers of cell proliferation and apoptosis, we used immunohistochemical staining for Ki67 and activated caspase-3, respectively (antibodies from Cell Signaling Technology). Only complete crypts were evaluated and investigators were masked to genotype and treatment group. To identify corresponding changes in protein expression for relevant genes identified by microarray and qPCR, formalin-fixed paraffin-embedded tumor sections were immunostained with a specific antibody against both mouse Zfp277 and human ZNF277 from Santa Cruz Biotech (Santa Cruz, CA), and a specific antibody from Alomone Labs (Jerusalem, Israel) against both mouse and human M3R. Tumor sections were examined with a Nikon 80i photomicroscope at 200 magnification. Sections were first reviewed and scored by a senior pathologist (CD) masked to tissue origin and immunostaining was then quantified using Image-Pro Plus software (version 5.1; Media Cybernetics, Silver Spring, MD). To minimize variation, all tumor sections were examined and photographed using the same microscope settings. Microarray performance and analyses After resection, murine tissue was immediately stored in RNAlater (Ambion) at -80°C. Total RNA was extracted using the RNeasy kit from Qiagen. RNA was digested using the RNase-Free DNase set. The quality of total RNA was tested and confirmed using a Bioanalyzer 2100 (Expression Analysis, Inc., Durham, NC). The microarray assay was performed by Expression Analysis, Inc. using the MouseWG-6 v2.2 Expression BeadChip (Illumina, San Diego, CA). This chip covers the whole mouse genome, > 19,100 unique, curate genes targeting a total of 45,281 transcripts. Results were analyzed using the cubic spline normalization method without background subtraction. In comparing changes in mRNA expression in Chrm3 -/vs. WT mouse colon tumors, statistical significance cutoff levels were set for individual transcripts at P ≤ 0.05 (false discovery rate) and enrichment scores ≥ ± 1.3; changes in gene mRNA that met these thresholds were deemed to be differentially expressed. Microarray data represent results of tissue from three different mice per genotype. Results were submitted to the National Center for Biotechnology Information (NCBI) Gene Expression Omnibus (GEO) database (GSE43444). Quantitative RT-PCR (qPCR) First-strand cDNAs were synthesized from 5 g RNA (Superscript III First Strand Synthesis System for RT-PCR, Invitrogen). qPCR was then performed using 50 ng cDNA, the SYBR Green PCR Master Mix (Applied Biosystems), and forward and reverse primers (final concentration 0.5 M in sample volumes of 20 l). Primers (Additional file 4) were designed to span introns using the National Center for Biotechnology Information nucleotide database SIM-4 gene alignment program and on-line software (http://www.genscript.com/index.html). qPCR was performed using the 7900HT Fast System (ABI) with Power SYBR Green Master Mix (ABI). PCR conditions included 5 min at 95°C followed by 37 cycles of 95°C for 15 seconds, 60°C for 20 seconds, and 72°C for 40 seconds and a final cycle at 95°C for 15 seconds, 60°C for 15 seconds, and 95°C for 15 seconds. PCR data were analyzed using ABI instrument software SDS 2.1. Expression of candidate genes in each group of mice was normalized to glyceraldehyde 3-phosphate dehydrogenase (Gapdh). For human samples, expression of CHRM3 and ZNF277 was normalized to 2 -microglobulin (B2M), a preferable housekeeping gene for analysis of colon cancer. 
qPCR data were evaluated using the comparative CT (2^(-ΔΔCT)) method. Statistical analysis Student's unpaired t-test was used to determine statistical significance. The strength of linear association between two variables was quantified using Spearman's rank correlation coefficient and that of non-linear associations by Pearson's chi-squared test. P values ≤ 0.05 were considered significant.
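As a concrete illustration of the comparative CT calculation referenced above (a sketch with hypothetical CT values, not data from the study), relative expression of a target gene, normalized to a housekeeping gene and to a control sample, is computed as 2^(-ΔΔCT):

# Sketch of the comparative CT (2^(-ddCT)) calculation; the CT values below are hypothetical.
def fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    d_ct_sample = ct_target_sample - ct_ref_sample    # normalize to housekeeping gene
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control                # normalize to the control sample
    return 2 ** (-dd_ct)

# Example: target CT about 2 cycles lower in tumor than in normal tissue
# (relative to the reference gene) corresponds to a ~4-fold increase in expression.
print(fold_change(24.0, 18.0, 26.0, 18.0))  # 4.0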
import java.util.*;
import java.util.stream.Stream;

public class DfsNoRecursion {

    // Iterative depth-first search using an explicit stack and per-node edge cursors.
    public static void dfs(List<Integer>[] graph, int root) {
        int n = graph.length;
        int[] curEdge = new int[n];
        int[] stack = new int[n];
        stack[0] = root;
        for (int top = 0; top >= 0; ) {
            int u = stack[top];
            if (curEdge[u] == 0) {
                System.out.println(u);
            }
            if (curEdge[u] < graph[u].size()) {
                int v = graph[u].get(curEdge[u]++);
                if (curEdge[v] == 0) {
                    stack[++top] = v;
                }
            } else {
                --top;
            }
        }
    }

    // Usage example
    public static void main(String[] args) {
        List<Integer>[] g = Stream.generate(ArrayList::new).limit(3).toArray(List[]::new);
        g[0].add(1);
        g[1].add(0);
        g[0].add(2);

        dfs(g, 0);
        System.out.println();
        dfs(g, 1);
        System.out.println();
        dfs(g, 2);
    }
}
// NewSchemaValidationError returns a new diag.Message based on SchemaValidationError.
func NewSchemaValidationError(r *resource.Instance, err error) diag.Message {
	return diag.NewMessage(
		SchemaValidationError,
		r,
		err,
	)
}
Q: If Scarlet Witch of the MCU has the ability to fly, why didn't she attempt to fly earlier? Apparently Scarlet Witch does have the ability to fly. She's clearly levitating in this particular scene from Avengers: Age of Ultron, which I didn't notice until I watched AoU for the 4th time today. I know she has the ability to fly in the comics, but why didn't she attempt to fly prior to this scene in AoU? A: Her ability to levitate herself comes out of nowhere in the movie, I assume to enhance her dramatic entrance in the final scene. So there's no real explanation for why she doesn't ever try to do it earlier. In particular, there were times when levitating herself would have been very useful, such as when she needed Vision to fly her out of the train car as Sokovia fell. However, Wanda's major character development during the course of the movie is that she starts out, under Ultron's guidance, as a misguided villain, until her time spent with the rest of the Avengers eventually redeems her. You will notice that, during the first half of the movie, she almost exclusively uses her mental manipulation power. It's only after she starts spending time with people like Captain America and Hawkeye that she starts to embrace her other powers, like stopping the train or using hex-bolts on Ultron clones. It seems likely that the Avengers have been helping her explore her abilities and become better at using them. It may be as simple as her not knowing she could levitate until someone else convinced her to try.
Plastic Biliary Stent Migration During Multiple Stents Placement and Successful Endoscopic Removal Using Intra-Stent Balloon Inflation Technique: A Case Report and Literature Review Patient: Male, 77 Final Diagnosis: Biliary neoplasm Symptoms: Medication: Clinical Procedure: Biliary stent removal using intra-stent balloon inflation techniqueextraction Specialty: Gastroenterology and Hepatology Objective: Diagnostic/therapeutic accidents Background: Late migration of a plastic biliary stent after endoscopic placement is a well known complication, but there is little information regarding migration of a plastic stent during multiple stents placement. Case Report: A white man was hospitalized for severe jaundice due to neoplastic hilar stenosis. Surgical eligibility appeared unclear on admission and endoscopy was carried out, but the first stent migrated proximally at the time of second stent insertion. After failed attempts with various devices, the migrated stent was removed successfully through cannulation with a dilation balloon. Conclusions: The migration of a plastic biliary stent during multiple stents placement is a possible complication. In this context, extraction can be very complicated. In our patient, cannulation of a stent with a dilation balloon was the only effective method. Background Multiple plastic biliary stents (PBS) placement is a widely accepted procedure for management of hilar neoplasms or dilation of benign biliary stricture. Although this is a complex procedure, it is associated with few immediate adverse events. One of the main complications is proximal migration of the first stent as further stents are pushed in. There have been few reports on this complication, and the approach to resolve migration is inspired by the more numerous experiences in the extraction of stents that migrated late after endoscopic placement. However, the predisposing factors for stent migration during placement probably differ from those reported in the post-placement migration and some suggested techniques may not be equally effective if migration occurs during the process of placement. Case Report A 77-year-old white male was admitted to our hospital due to painless jaundice (total bilirubin: 27.53 mg/dl, direct bilirubin: 23.68 mg/dl, ALP: 240 UI/l (VN <125 UI/l), gGT: 424 UI/l (VN <56 UI/l). CT and MRI showed stenosis of the hepatic hilum, type 1 of the Bismuth-Corlette classification. Abdominal lymphadenopathy or other localizations in hepatic parenchyma were absent; however, the CT scans also showed a mass of undetermined nature in the chest. In consideration of the unclear indication for surgery, intense jaundice, and pruritus, we decided to place a plastic biliary stent and subsequently investigate the thoracic mass. An ERCP was performed 2 days after admission. We accessed the bile duct after biliary precut with fistulotomy; the stenosis was passed by a 0.035-inch guidewire and the main intrahepatic right duct was injected without obtaining visualization of the left ducts. Because we were worried about the possible presence of a stenosis type II of Bismuth Corlette classification, we decided to place 2 stents in both intrahepatic major ducts. We first introduced a plastic stent 9-cm long and 10-french diameter (Preload Advantx stent, Boston Scientific, Marlborough, Massachusetts, USA). At the end of placement, this stent was immediately beyond the stenosis but the distal flap was close to the edge of the fistulotomy ( Figure 1). 
Subsequently, we attempted to place a second plastic stent 12cm long and 7-french diameter (Preload Advantx stent, Boston Scientific, Marlborough, Massachusetts, USA) in the left main duct. The stenosis was severe and while we were pushing, we observed sudden migration of the first stent in the main biliary duct and its disappearance from endoscopic view (Figure 2). At first we tried to extract the migrated stent, according to Caponi et al., through cannulation with a sphincterotome, but this maneuver failed due to the tight stenosis ( Figure 3A). We then tried to extract the stent with a snare over the same guidewire, but the maneuver was aborted due to the limited space within the choledochus ( Figure 3B). At this point the procedure was stopped due to inability to extract the migrated stent ( Figure 3C). After a few days a second ERCP was carried out. A guidewire was passed through the proximally migrated stent, and a dilation balloon with a basal diameter of 5.8 french, an inflated diameter of 6 mm, and a length of 4 cm (Hurricane Rx Biliary Balloon Dilatation Catheter; Boston Scientific, Marlborough, Massachusetts, USA) was coaxially inserted over the guidewire and advanced to the distal portion of the stent, according to a technique inspired by Odemis et al.. The balloon was at first inflated manually with a syringe, but the traction force was too low; subsequently the balloon was inflated up to 11 atm with an inflation system (Alliance II Boston Scientific, Marlborough, Massachusetts, USA) according to standard dilation procedure, and the balloon-stent system was withdrawn through the operative channel while the other stent remained in place ( Figure 4A-C). Finally, we passed a plastic prosthesis 12-cm long and 10-french diameter (Preload Advantx stent, Boston Scientific, Marlborough, Massachusetts, USA) by the guidewire formerly used for balloon extraction. The intra-hepatic stents were correctly positioned at the end of the procedure ( Figure 5). Several days later, the thoracic mass was biopsied and the histological results were benign. Surgical resection was performed 2 weeks later and the patient remained in good health after being sent home. Discussion Risk factors for migration of a PBS during multiple stents placement are not well known. In our case we hypothesize that the risk of migration was increased due the following: stent placement without dilation, use of a first stent closely corresponding to the distance between papilla and proximal margin of the stenosis, attempts to push the stent (although we used a deployment device) with the possibility of suspending stent placement, and (perhaps) inadequate control of the procedure. Although we considered these possible risks, we know that stent migration during placement of multiple PBS is possible even in expert hands and we found suggestions in the literature to prevent such complications. Hamada et al. introduced a guidewire from the distal end of the first PBS through the distal side hole toward the third portion of the duodenum before inserting the second stent, using an "anchorwire technique". Dumonceau et al. recommend initially A B C placing longer stents, or a dilation balloon inflated alongside stents already in place, to decrease the risk of proximal stent migration during further stenting. Nonetheless, the reports on this type of migration are very few with respect to literature regarding late migration of a PBS. 
Late migration complicates 3.5% to 5% of PBS placements, especially if a single PBS is placed during treatment of benign mild stenosis or in absence of stenosis. Our case demonstrates that a tight malignant stenosis increases risk of proximal migration during multiple PBS placement. Late migration is usually managed with snare, basket, Soehendra retrieval catheter, forceps, or balloon dilation. The choice of retrieval technique depends on several factors, including biliary ductal dilation, depth of stent migration, distal stent impaction, and biliary stricture distal to the migrated stent. In a study focused on delayed PBS migration published last year by Kawaguchi et al., the grasping technique using a basket or snare was effective for pig-tailed or thin (7-french) straight stents, whereas the guidewire cannulation technique by balloon catheter, cannula, or stent retrieval was effective for thick (>10-french) straight stents. Tarnasky et al., in their historic article, reported that cannulating the stent lumen with a guidewire is often the best approach in patients with a biliary stricture or a non-dilated duct. In patients with a dilated duct, directly grasping the stent with a wire basket, snare, or forceps or indirect balloon traction is usually preferable. We think that reports focused on delayed PBS migration are also useful in migration during multiple PBS placements. However, some maneuvers suggested in late migration might not be forceful enough to extract the migrated stent while leaving the other stents in place, as seen in our case. In our procedure, the extraction of the migrated stent was finally achieved by performing a procedure inspired by Odemis et al., but they used a stone extraction balloon, while we used a dilation balloon. Granata et al. recently reported use of a dilation balloon to resolve a delayed stent migration with a technique very similar to ours. We think that a dilation balloon, as used by Granata and in our case, is generally preferable because it develops a traction force higher than that achieved with a stone extraction balloon. Dilation balloons are designed for use with high inflation pressure, which was mandatory in our case. We observed in vitro that a light inflation pressure anchored the dilation balloon inside the stent (Figure 6), but in vivo we needed an 11-atm pressure inside the balloon to withdraw the stent through the stenosis. Conclusions Proximal migration of a PBS during multiple stents placement is a rare complication. This situation differs from delayed stent migration because in our case the force that keeps the stent in an improper position is greater due the presence of more stents inside the stenosis. We think that initially placing longer stents, using a dilation balloon inflated alongside stents already in place, or using anchor-wire technique can prevent plastic biliary stent migration. To the best of our knowledge this is the first case report describing migration during multiple PBS placements and reporting techniques used to resolve it. In our case cannulation with a balloon dilation and forceful inflation to nominal dilation pressure was the only way to extract the migrated stent and we feel this is the best approach in such a situation.
package gamifikator.model;

import org.bson.types.ObjectId;
import org.mongodb.morphia.annotations.Id;
import org.mongodb.morphia.annotations.Property;

public class MongoDBObject {

    @Id
    @Property("id")
    private ObjectId id;

    public MongoDBObject() {
        super();
    }

    public ObjectId getId() {
        return id;
    }

    public void setId(ObjectId id) {
        this.id = id;
    }
}
from mlapp.handlers.databases.sql_alchemy_handler import SQLAlchemyHandler


class MySQLHandler(SQLAlchemyHandler):
    def __init__(self, settings):
        """
        Initializes the MySQLHandler with its MySQL-specific connection string.
        :param settings: settings from `mlapp > config.py` depending on handler type name.
        """
        super(MySQLHandler, self).__init__(settings)
        self.connection_string = 'mysql+pymysql://{0}:{1}@{2}:{3}/{4}'.format(
            self.connections_parameters['user_id'],
            self.connections_parameters['password'],
            self.connections_parameters['hostname'],
            str(self.connections_parameters['port']),
            self.connections_parameters['database_name'])
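A minimal usage sketch, assuming the SQLAlchemyHandler base class exposes the passed settings as self.connections_parameters (as the subclass code implies); the settings values below are hypothetical placeholders, not part of mlapp:

# Hypothetical example settings; the keys mirror those read by MySQLHandler above.
settings = {
    'user_id': 'app_user',
    'password': 'secret',
    'hostname': 'localhost',
    'port': 3306,
    'database_name': 'analytics',
}

handler = MySQLHandler(settings)
print(handler.connection_string)
# Expected form: mysql+pymysql://app_user:secret@localhost:3306/analytics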
CNN no longer believes in “Believer,” the non-fiction series it launched earlier this year with Reza Aslan, the Iranian-American author and religious scholar. “Believer” would have entered a second season if it had been picked up. CNN has had to grapple with other incidents of hosts and talent expressing political opinions. Late last month, the Time Warner-owned network said it would no longer feature comedienne Kathy Griffin as part of its annual New Year’s Eve broadcast, a day after pictures surfaced on social media of Griffin holding a bloody head resembling President Trump. Aslan had been seen as a notable addition to the ranks of CNN’s various non-fiction series. He co-founded BoomGen Studios, a production shingle centered on content from and about the Middle East, in 2006. He served as a consulting producer on the HBO drama “The Leftovers” and has penned best-selling books such as “Zealot: The Life and Times of Jesus of Nazareth” and “No God but God: The Origins, Evolution, and Future of Islam.” In episodes of “Believer,” Aslan examined various faiths and doctrines around the world, spending time with Orthodox Jews in Israel and a group of Indian cannibals who, in the series’ first episode, ate cooked human brain tissue.
Medium supplementation and thorough optimization to induce carboxymethyl cellulase production by Trichoderma reesei under solid state fermentation of nettle biomass Abstract In the present study, the production of cellulase by Trichoderma reesei under solid-state fermentation of nettle biomass was promoted through supplementation of the culture media with carbonaceous additives and comprehensive optimization of the cultivation via the Taguchi method. CMCase activities of about 5.5–6.1 U/gds were obtained by fermentation of the autoclave-pretreated biomass, selected from among various chemical and physical pretreatments. Then, several additives, including Tween 80, betaine, carboxymethyl cellulose, and lactose, were added individually or in combination to the culture media to induce enzyme production. The results proved that such additives could act as either inducers or inhibitors. Furthermore, CMCase activity surprisingly increased to 14.0 U/gds by supplementing the fermentation medium with the optimal mixture of additives, comprising 0.08 mg/gds Tween 80, 0.4 mg/gds betaine, and 0.2 mg/gds carboxymethyl cellulose. Factor screening according to a Plackett–Burman design confirmed that the levels of urea and MgSO4 among the basal medium constituents, as well as the pH of the medium, significantly affected CMCase production. By optimizing the levels of these factors, a CMCase activity of 18.8 U/gds was obtained, which was noticeably higher than that of fermentation of the raw nettle. The applied procedure can be promisingly used to convert nettle biomass into valuable products.
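To illustrate the kind of two-level factor screening described above (a sketch only; the design matrix and responses below are hypothetical and are not the study's Plackett–Burman data), main effects can be estimated as the difference between the mean response at each factor's high and low settings:

import numpy as np

# Hypothetical two-level (+1/-1) screening design for three factors
# (urea, MgSO4, pH) with hypothetical CMCase responses (U/gds).
design = np.array([
    [+1, +1, -1],
    [+1, -1, +1],
    [-1, +1, +1],
    [-1, -1, -1],
])
response = np.array([15.2, 13.8, 17.1, 12.5])

factors = ["urea", "MgSO4", "pH"]
for j, name in enumerate(factors):
    high = response[design[:, j] == +1].mean()
    low = response[design[:, j] == -1].mean()
    print(f"{name}: main effect = {high - low:+.2f} U/gds")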
Effects of Divided Attention on fMRI Correlates of Memory Encoding

Performing a secondary task concurrently with a study task has a detrimental effect on later memory for studied items. To investigate the mechanisms underlying this effect, the processing resources available for an incidental encoding task were varied by manipulating secondary task difficulty. fMRI data were acquired as volunteers (n = 16) made animacy decisions to visually presented study words while concurrently performing either an easy or a hard auditory monitoring task. Subsequent memory effects (greater activity at study for words later remembered versus words later forgotten) were identified in the left ventral inferior frontal gyrus and the left anterior hippocampus. These effects did not vary according to whether the encoding task was performed concurrently with the easy or the hard secondary task. However, as secondary task difficulty increased, study-item activity declined and auditory-item activity increased in dorsolateral prefrontal and superior parietal regions that have been implicated in the support of executive and control functions. The findings suggest that dividing attention during encoding influences the probability of engaging the encoding operations that support later episodic memory, but does not alter the nature of the operations themselves. The findings further suggest that the probability of engaging these encoding operations depends on the level of general processing resources engaged in service of the study task.
Effective Merger Review: A Question for Australian Courts?

There is increasing global concern about the effectiveness of merger control in competition law. Globally, concerns about rising market concentration and, in particular, the effect of consolidation by digital platform businesses have prompted numerous inquiries and articles exploring whether competition laws are effective in addressing concerns about their anticompetitive impact in relation to mergers. Australia's approach to merger control makes it an outlier in a number of ways. Its major approval procedure, informal clearance, is outside the scope of the Competition and Consumer Act 2010 (Cth). Formal decisions are generally heard in courts. Of note, under the current 'likely substantial lessening of competition' test, which became operative in 1993, the Australian Competition and Consumer Commission (ACCC) has not successfully proven in court that a merger would be likely to infringe the law. This article examines the methodology of Australian courts in applying this test, including the judicial approach to acceptance and assessment of economic and non-economic evidence. It suggests approaches to enable consideration of the best evidence available. This analysis is in the context of amendments to the merger system recently proposed by the ACCC. We conclude that there are significant challenges in determining whether a merger is anticompetitive and that changes to the relevant methodology are necessary. This might be done by adopting the ACCC proposals or by a reconsideration of the merger factors and the approach to applying them.
A selective review of recent North American long-term followup studies of schizophrenia. North American outcome studies of schizophrenia conducted within the past quarter century are reviewed if their minimum average followup is 10 years and they meet at least some modern design criteria. Ten such investigations are described and summarized. Taken as a whole, they demonstrate that schizophrenia can be a chronic disease whose outcome on the average is worse than that of other major mental illnesses. It is associated with an increased risk for suicide, physical illness, and mortality. The schizophrenic process, however, is not relentlessly progressive, as originally described, but appears to plateau after 5-10 years of manifest illness. Overall, outcome is heterogeneous, but much of the variance can be linked to sample characteristics, including expressions of psychopathology (broad vs. narrow diagnostic criteria, subtypes, and comorbidity), dimensions of chronicity (length of manifest illness, treatment resistance, age of onset, and institutionalization), and other predictor variables (gender, marital status, socioeconomic status, physical setting, and premorbid health). Long-term followup studies have yet to demonstrate clearly any effect of treatment on the natural history of schizophrenia. Finally, these studies support a broad definition of schizophrenia.
Different concentrations of silicon (Si) were applied to flowering Chinese cabbage (Brassica campestris L. ssp. chinensis var. utilis Tsen et Lee) to study their effects on anthracnose occurrence, flower stalk formation, and Si uptake and accumulation. The results indicated that Si could obviously control the occurrence of anthracnose, and the effect was genotype-dependent. The plants of the susceptible cultivar treated with 2.5 mmol/L Si and those of the resistant cultivar treated with 0.5 mmol/L Si exhibited the highest resistance to Colletotrichum higginsianum, with the lowest disease index and the highest flower stalk yield. Si application also obviously affected the quality of the flower stalk. For the susceptible cultivar, Si application promoted the synthesis of chlorophyll, crude fiber and vitamin C, and induced the formation of soluble sugars. The contents of chlorophyll and crude fiber increased with increasing Si level. For the resistant cultivar, the chlorophyll content increased while the vitamin C content decreased with increasing Si level, but Si application had less effect on the contents of crude fiber and soluble sugars. For both cultivars, Si application did not have a significant effect on the contents of crude protein and soluble protein but remarkably increased the Si accumulation in plant leaves, and the leaf Si content significantly increased with increasing Si level. The Si granules deposited in leaf tissues were not equal in size and were distributed unevenly in epidermis tissues. It was concluded that the accumulation of Si in leaves could increase the resistance of the plant to anthracnose, but there was no linear correlation between the accumulated amount of Si and the resistance.
#ifndef NAN_TEST_H
#define NAN_TEST_H

#include "../../lib/nan.h"

namespace addons {

class Test : public Nan::ObjectWrap {
 public:
  Test();
  ~Test();

  v8::Local<v8::Value> ToValue() {
    // const int argc = 1;
    // v8::Local<v8::Value> argv[argc] = {Nan::New(this->m_ref)};
    // v8::Local<v8::Object> result = Nan::NewInstance(Nan::New(Test::constructor)->GetFunction(), argc, argv).ToLocalChecked();
    v8::Local<v8::Object> result =
        Nan::NewInstance(Nan::New(Test::constructor)->GetFunction()).ToLocalChecked();
    this->Wrap(result);
    return result;
  }

 public:
  static NAN_MODULE_INIT(Init);

 private:
  static NAN_CONSTRUCTOR(constructor);
  static NAN_NEW(New);
  static NAN_METHOD(run);
};

}  // namespace addons

#endif  // NAN_TEST_H
import { ICredentials, IPendingAssetFolder } from '../../interfaces';

export declare class AssetFolder {
    private apiClient;
    private credentials;
    private data;
    constructor(credentials: ICredentials, data: IPendingAssetFolder);
    readonly name: string;
    readonly id: number;
    generate(): Promise<void>;
    private sync;
}
//# sourceMappingURL=AssetFolder.d.ts.map
Underweight, overweight, obesity, and excess deaths. In their study of deaths associated with underweight, overweight, and obesity, Dr Flegal and colleagues1 conclude that excess mortality due to obesity and overweight is much lower than previously reported. We believe that their analysis is flawed and misleading. A major challenge in such studies is that low weight is often due to underlying chronic disease, which may exist for many years before death. Thus, lean persons are a mix of smokers, healthy active persons, and those with chronic illness (due to the direct effects of disease on weight and sometimes purposeful weight loss motivated by diagnosis of a serious illness). Their analysis does not successfully disentangle this diverse group.
COSTA MESA, Calif. — Greg Hayworth, 44, made a good living in his home state, California, from real estate and mortgage finance. Then that business crashed, and early last year the bank foreclosed on the house his family was renting, forcing their eviction. Now the Hayworths and their three children represent a new face of homelessness in Orange County: formerly middle income, living week to week in a cramped motel room. “I owe it to my kids to get out of here,” Mr. Hayworth said, recalling the night they saw a motel neighbor drag a half-naked woman out the door while he beat her. As the recession has deepened, longtime workers who lost their jobs are facing the terror and stigma of homelessness for the first time, including those who have owned or rented for years. Some show up in shelters and on the streets, but others, like the Hayworths, are the hidden homeless — living doubled up in apartments, in garages or in motels, uncounted in federal homeless data and often receiving little public aid. The Hayworths tried staying with relatives but ended up last September at the Costa Mesa Motor Inn, one of more than 1,000 families estimated to be living in motels in Orange County alone. They are among a lucky few: a charity pays part of the $800-a-month charge while Mr. Hayworth tries to recreate a career. The family, which includes a 15-year-old daughter, shares a single room and sleeps on two beds. With most possessions in storage, they eat in two shifts, on three borrowed plates — all that one jammed cabinet can hold. His wife, Terri, has health problems and, like many other families, they cannot muster the security deposit and other upfront costs of renting a new place. Motel families exist by the hundreds in Denver, along freeway-bypassed Route 1 on the Eastern Seaboard, and in other cities from Chattanooga, Tenn., to Portland, Ore. But they are especially prevalent in Orange County, which has high rents, a shortage of public housing and a surplus of older motels that once housed Disneyland visitors. “The motels have become the de facto low-income housing of Orange County,” said Wally Gonzales, director of Project Dignity, one of dozens of small charities and church groups that have emerged to assist families, usually helping a few dozen each and relying on donations of food, clothing and toys. In the past, motel families here were mainly drawn from the chronically struggling. In 1998, an exposé of neglected motel children by The Orange County Register prompted creation of city task forces and promises of help. But in recent months, schools, churches and charities report a different sort of family showing up. “People asking for help are from a wider demographic range than we’ve seen in the past, middle-income families,” said Terry Lowe, director of community services in Anaheim, Calif. The motels range from those with tattered rugs and residents who abuse alcohol and drugs to newer places with playgrounds and kitchenettes. With names like the Covered Wagon Motel and the El Dorado Inn, they look like any other modestly priced stopover inland from the ritzy beach towns. But walk inside and the perception immediately changes. In the evening, the smell of pasta sauce cooked on hot plates drifts through half-open doors; in the morning, children leave to catch school buses. Families of three, six or more are squeezed into a room, one child doing homework on a bed, jostled by another watching television. Children rotate at bedtime, taking their turns on the floor. 
Some families, like the Malpicas, in a motel in Anaheim, commandeer a closet for baby cribs. The Garza family moved to the Costa Mesa Inn in August, after the husband, Johnny, lost his job at Target, his wife, Tamara, lost her job at Petco, and they were evicted from their two-bedroom rental. Their 9-year-old daughter now shares a bed with two younger brothers, their toys and schoolbooks piled on the floor. The couple’s baby boy, born in April, sleeps in a small crib. Rental aid from federal and county programs reaches only a small fraction of needy families, said Bob Cerince, coordinator for homeless and motel residents services in Anaheim, who estimated the families at more than 1,000. President Obama’s stimulus package may give hope to more people and blunt the projected rise of families who could end up in motels and shelters, said Nan Roman, president of the National Alliance to End Homelessness in Washington. The package allows $1.5 billion for homeless prevention, including help with rent and security deposits. Schools have made special efforts to help children in displaced families stay in class, and some send social workers to connect families with counseling services and food aid. Wendy Dallin, the liaison for the homeless in one of Anaheim’s seven school districts, said that in the last three months she had learned of 38 newly homeless families, bringing the total she knew of in her district to 376. About 48 of those families are living in motels, Ms. Dallin said, with the rest in shelters, renting a room or garage, staying with relatives or living in cars. At the same time, in California’s budget crisis, some school social workers are being laid off. By necessity, most cities here have been lax in enforcing occupancy codes. Still, a source of turmoil for motel families is a California rule that after 28 days, residents are considered tenants, gaining legal rights of occupancy. Some motels force families to move every month, while others make families stay in a different room for a day or two. Many motel residents have at least one working parent and pay $800 to $1,200 a month for a room. Yet even those with jobs can become mired in motel life for years because of bad credit ratings and the difficulty of saving the extra months’ rent and security deposits to secure an apartment. Paris Andre Navarro, 47, knows how hard it can be to climb back. She and her husband used to have good jobs and an apartment in Garden Grove, near Anaheim. But they have spent the last three years with their 11-year-old daughter in the El Dorado Inn. The bottom fell out when her husband’s medical problems forced him to leave his job as a computer technician and her home-care job ended. They were evicted and moved into the motel, and she started working the night shift at Target. Last year, when Ms. Navarro’s husband started a telemarketing job, they thought they might escape. That hope evaporated when her hours at Target were cut in half. What with the $241 weekly rent, the cost of essentials and a $380 car payment, they cannot save. “Now we’re just living paycheck to paycheck,” Ms. Navarro said. Their daughter, Crystal, tries to sound stoical. “What I miss most is having a pet,” she said. The motel does not allow pets, so she gave away her cat and kittens. Greg Hayworth, whose family has spent six dispiriting months in the Costa Mesa Inn, tried working in sales but has had trouble finding a lasting job. 
Paul Leon, a former nurse who formed the Illumination Foundation to aid motel families, has promised to help with a security deposit when the Hayworths are able to move out. Mr. Hayworth’s teenage daughter has had the roughest time because of the lack of privacy. She is too embarrassed to take friends home, and is uncomfortable dressing in front of her brothers, who are 10 and 11. Not long ago, she was attacked at school by classmates who mocked her for living in a motel. An article on March 11 about homeless families living in California motels, using information from Greg Hayworth, a member of one such family, referred incorrectly to Mr. Hayworth’s educational background. Mr. Hayworth neither graduated from Syracuse University nor ever enrolled there. An earlier version of this story contained an erroneous hyperlink for Project Dignity’s Web site. The correct Web address is projectdignity.org.
William Holliday (rugby league)

Background

Bill Holliday was born in Whitehaven, Cumberland, England.

International honours

Bill Holliday won caps for Great Britain while at Whitehaven in 1964 against France, and in 1965 against France and New Zealand (3 matches); while at Hull Kingston Rovers he won caps in 1966 against France and France (sub), and in 1967 against Australia (3 matches). Bill Holliday captained Great Britain in 1967 against Australia (3 matches).

County Cup Final appearances

Bill Holliday played left-second-row, i.e. number 11, in Hull Kingston Rovers' 25-12 victory over Featherstone Rovers in the 1966 Yorkshire County Cup Final during the 1966–67 season at Headingley Rugby Stadium, Leeds, on Saturday 15 October 1966. He played left-prop, i.e. number 8, in Hull Kingston Rovers' 8-7 victory over Hull F.C. in the 1967 Yorkshire County Cup Final during the 1967–68 season at Headingley Rugby Stadium, Leeds, on Saturday 14 October 1967. He played left-second-row in Swinton's 11-2 victory over Leigh in the 1969 Lancashire County Cup Final during the 1969–70 season at Central Park, Wigan, on Saturday 1 November 1969. He also played as an interchange/substitute, i.e. number 15 (replacing second-row Rod Smith), in the 11-25 defeat by Salford in the 1972 Lancashire County Cup Final during the 1972–73 season at Central Park, Wigan, on Saturday 21 October 1972.

Player's No.6 Trophy Final appearances

Bill Holliday played left-prop, i.e. number 8, and scored 2 conversions in Rochdale Hornets' 16-27 defeat by Warrington in the 1973–74 Player's No.6 Trophy Final during the 1973–74 season at Central Park, Wigan, on Saturday 9 February 1974. Holliday had secured the quarter-final victory for Rochdale over Leeds with a drop goal from just inside the attacking half, giving Hornets a 7 points to 5 lead.

Honoured at Whitehaven

Bill Holliday is a Whitehaven Hall of Fame inductee.

Genealogical information

Bill Holliday is the father of the rugby league footballer Les Holliday, and of the rugby league footballer Mike Holliday, who played in the 1980s for Swinton and Leigh.
(Reuters) - Everton boss Marco Silva is not worried about receiving a hostile reception at Watford on Saturday as he returns to Vicarage Road as an opposing manager for the first time since he was sacked last year. Silva joined Watford on a two-year contract before the start of the 2017-18 season, but was soon the subject of an approach from Everton. Watford then suffered a poor run of results and Silva was eventually sacked. “I think it’s not the time to make or to do reflection on last season. We are in the middle of a season, it is not the moment to talk about the situation,” Silva told reporters on Friday. Everton have lost three of their last four league games and have found goals hard to come by in recent weeks. The club’s leading scorer Richarlison is struggling for form having scored only two goals in all competitions since the turn of the year. The Brazilian forward, who played under Silva at Watford, did not start in the defeat to Manchester City on Wednesday and Silva explained the need to manage his minutes to avoid exhaustion with Everton playing three games in eight days. “We have to manage the physical condition of some players. Our job with Richarlison is to put him in the best conditions,” Silva said. “When I made the decision about the starting 11 for the last match it was because I thought it was our best starting 11 for that match.
# test/test_case_when.py
import pandas as pd

from kungfu_pandas import case_when


def test_case_when_simple(groups_df):
    """Case when simple"""
    out = (
        groups_df
        .pipe(case_when, {
            lambda d: d['x'] == 0: 0,
            lambda d: (d['x'] == 1) & (d['group'] == 'a'): 1,
            lambda d: (d['x'] == 1) & (d['group'] == 'b'): 2,
            lambda d: d['x'] >= 3: 3,
        })
    )
    pd.testing.assert_series_equal(
        out,
        pd.Series([1, None, 3, 0, 0, 2])
    )


def test_case_when_order(groups_df):
    """Case when order matters"""
    out = (
        groups_df
        .pipe(case_when, {
            lambda d: d['x'] >= 0: 0,
            lambda d: d['x'] >= 1: 1,
        })
    )
    pd.testing.assert_series_equal(
        out,
        pd.Series([0.0] * 6)
    )


def test_case_when_empty(empty_df):
    """Case when empty"""
    pd.testing.assert_series_equal(
        case_when(empty_df, {lambda d: d['x'] == 0: 0.0}),
        pd.Series(dtype='float64')
    )


def test_case_when_list_tuple(groups_df):
    """Case when order matters, using list of tuples as an argument"""
    out = (
        groups_df
        .pipe(case_when, [
            (lambda d: d['x'] >= 0, 0),
            (lambda d: d['x'] >= 1, 1),
        ])
    )
    pd.testing.assert_series_equal(
        out,
        pd.Series([0.0] * 6)
    )
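For context, here is a minimal sketch of what a case_when helper consistent with these tests could look like. It is an illustrative reimplementation inferred from the test expectations (the first matching condition wins, unmatched rows become NaN, and conditions may be supplied as a dict or a list of tuples); it is not necessarily the actual kungfu_pandas implementation.

# Illustrative sketch only -- inferred from the tests above, not the real
# kungfu_pandas.case_when source.
import numpy as np
import pandas as pd


def case_when(df, cases):
    """Return a float Series where each row takes the value attached to the
    first condition it satisfies; rows matching no condition stay NaN."""
    pairs = list(cases.items()) if isinstance(cases, dict) else list(cases)
    result = pd.Series(np.nan, index=df.index, dtype='float64')
    assigned = pd.Series(False, index=df.index)
    for condition, value in pairs:
        mask = condition(df) & ~assigned  # earlier conditions take precedence
        result[mask] = value
        assigned |= mask
    return result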
/**
 * \brief Init function for ReceiveWinDivert
 *
 * ReceiveWinDivertThreadInit sets up receiving packets via WinDivert.
 *
 * \param tv pointer to generic thread vars
 * \param initdata pointer to the interface passed from the user
 * \param data out-pointer to the WinDivert-specific thread vars
 */
TmEcode ReceiveWinDivertThreadInit(ThreadVars *tv, const void *initdata, void **data)
{
    SCEnter();
    TmEcode ret = TM_ECODE_OK;

    WinDivertThreadVars *wd_tv = (WinDivertThreadVars *)initdata;
    if (wd_tv == NULL) {
        SCLogError(SC_ERR_INVALID_ARGUMENT, "initdata == NULL");
        SCReturnInt(TM_ECODE_FAILED);
    }

    WinDivertQueueVars *wd_qv = WinDivertGetQueue(wd_tv->thread_num);
    if (wd_qv == NULL) {
        SCLogError(SC_ERR_INVALID_ARGUMENT, "queue == NULL");
        SCReturnInt(TM_ECODE_FAILED);
    }

    SCMutexLock(&wd_qv->filter_init_mutex);

    /* reuse an existing filter handle if this queue is already initialized */
    if (wd_qv->filter_handle != NULL && wd_qv->filter_handle != INVALID_HANDLE_VALUE) {
        goto unlock;
    }

    TAILQ_INIT(&wd_tv->live_devices);

    if (WinDivertCollectFilterDevices(wd_tv, wd_qv) == TM_ECODE_OK) {
        WinDivertDisableOffloading(wd_tv);
    } else {
        SCLogWarning(SC_ERR_SYSCALL, "Failed to obtain network devices for WinDivert filter");
    }

    wd_qv->filter_handle = WinDivertOpen(wd_qv->filter_str, wd_qv->layer, wd_qv->priority,
                                         wd_qv->flags);
    if (wd_qv->filter_handle == INVALID_HANDLE_VALUE) {
        WinDivertLogError(GetLastError());
        ret = TM_ECODE_FAILED;
        goto unlock;
    }

unlock:
    /* on success, publish the shared handle and the thread context */
    if (ret == 0) {
        wd_tv->filter_handle = wd_qv->filter_handle;
        *data = wd_tv;
    }
    SCMutexUnlock(&wd_qv->filter_init_mutex);
    SCReturnInt(ret);
}
Screening the key genes of hair follicle growth cycle in Inner Mongolian Cashmere goat based on RNA sequencing

Abstract

The Inner Mongolian Cashmere goat is an excellent local breed selected for the dual purpose of cashmere and meat. There are three lines of Inner Mongolian Cashmere goat: Erlangshan, Alashan and Aerbasi. Cashmere is a precious textile raw material with a high price. Cashmere is derived from the secondary hair follicle (SHF), while hair is derived from the primary hair follicle (PHF). The growth cycle of the SHF of the cashmere goat is 1 year, and it can be divided into three different stages: anagen, catagen and telogen. In this study, we tried to identify important factors influencing the SHF growth cycle in skin tissue from Inner Mongolian Cashmere goats by RNA sequencing (RNA-Seq). Three female Aerbasi Inner Mongolian Cashmere goats (2 years old) were used as experimental samples in this study. Skin samples were collected in September (anagen), December (catagen) and March (telogen) from the dorsal side of the cashmere goats. Results showed that over 511 396 044 raw reads and 487 729 890 clean reads were obtained from the sequence data. In total, 51 differentially expressed genes (DEGs), including 29 downregulated genes and 22 upregulated genes, were enriched in the anagen–catagen comparing group. The catagen–telogen comparing group contained 443 DEGs, with 117 downregulated genes and 326 upregulated genes. In the telogen–anagen comparing group, 779 DEGs were enriched, including 582 downregulated genes and 197 upregulated genes. Gene ontology (GO) annotation of the DEGs from the different growth cycle periods showed that the enriched GO terms were mostly related to the transformation of cells and proteins. The Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment results indicated that metabolic processes have a great impact on the SHF growth cycle. Based on a comprehensive analysis of the differentially expressed genes, GO enrichment and KEGG enrichment, we found that FGF5, FGFR1 and RRAS had an effect on the hair follicle growth cycle. The results of this study may provide a theoretical basis for further research on the growth and development of the SHF in Inner Mongolian Cashmere goats.

Introduction

China is the largest cashmere producer in the world, with the output of cashmere accounting for about 50 % of the world's total production. In addition, the production of cashmere in Inner Mongolia accounts for about 40 % of the total output of the whole country. Owing to its excellent quality, cashmere from Inner Mongolian Cashmere goats is very expensive and competitive in the textile industry. The Inner Mongolian Cashmere goat is a local breed that provides both cashmere and meat and can be divided into three lines: Erlangshan, Alashan and Aerbasi. These goats live tenaciously in the semiarid steppe of the Inner Mongolia Autonomous Region. Because of the harsh living environment, the fiber diameter of cashmere is usually less than 16 μm, which helps keep the body warm in the cold winter. Cashmere is derived from the secondary hair follicle (SHF) of the cashmere goat, while hair is derived from the primary hair follicle (PHF). The hair follicle is a special tissue in the skin which has its own growth cycle and affects the growth of cashmere. A growth cycle mainly consists of three distinct stages: hair follicles begin to develop in the growth stage (anagen), stop growing during the regression stage (catagen), and then atrophy in the rest stage (telogen); finally, hair follicles re-enter a new cycle of growth.
The growth cycle of hair follicles is influenced by heredity, environment, climate, nutrition and so on. Generally, the SHF growth cycle of Inner Mongolian Cashmere goats is 1 year. Li et al. observed skin tissue sections of Inner Mongolian Cashmere goats for 1 year and concluded that SHF anagen is from April to November, catagen is from December to January, and telogen is from February to March. However, the growth cycle of the PHF in cashmere goats is different from that of the SHF. In order to meet market demand, scientific methods have been used to select cashmere goats with finer fiber and higher yield for breeding, so as to obtain more high-quality cashmere. With the continuous development of science and technology, molecular breeding will help scientists to speed up the breeding process of the Inner Mongolian Cashmere goat. RNA-Seq is a novel high-throughput sequencing-based approach for global transcriptome mapping, which was first proposed and applied in yeast in 2008. The transcriptome has temporal and spatial specificity, meaning that gene expression varies between tissues and time periods. In the past decade, RNA-Seq technology has been applied in many species, and scientists have developed several methods to analyze such expression differences and explain the underlying biological phenomena. Researchers have found many important factors affecting the growth cycle of cashmere goats, such as the MAPK signaling pathway, the Wnt signal transduction pathway, the fibroblast growth factor (FGF) family, the bone morphogenetic protein (BMP) family, the Notch signal transduction pathway and so on (Geng et al.; Wang et al.; Jin et al.). In this study, Inner Mongolian Cashmere goat skin samples were sequenced, and the influencing factors and their interactions were explored. By comparing differentially expressed genes (DEGs) among anagen, catagen and telogen, we tried to provide a fresh viewpoint on the hair follicle growth cycle of cashmere goats. Functional annotation analysis was used to locate influencing factors. Real-time quantitative polymerase chain reaction (qRT-PCR) was used to verify the DEGs, and a network diagram of the interaction of the various factors was constructed.

Ethics statement

In this study, skin samples were collected in accordance with the International Guiding Principles for Biomedical Research Involving Animals and were approved by the Animal Ethics Committee of the Inner Mongolia Academy of Agriculture and Animal Husbandry Sciences, which is responsible for animal care and use in the Inner Mongolia Autonomous Region of China. No specific permissions were required for these activities, and the study did not involve endangered or protected species.

Skin sample preparation for RNA-seq and qRT-PCR validation

Three female Aerbasi Inner Mongolian Cashmere goats at 2 years old from a stud farm (Inner Mongolia Jin Lai Livestock Technology Company, Hohhot, Inner Mongolia) were used in this study. All cashmere goats were raised under feeding practices according to the cashmere goat standard. Skin samples were collected in September (anagen), December (catagen) and March (telogen) of the SHF cycle; the sampling site was the upper one-third of the left scapula along the mid-dorsal and mid-abdominal line. For each goat, we used procaine for local anesthesia to reduce animal pain. After hair shearing and alcohol disinfection, approximately 1 cm² of skin tissue was grasped with sterile forceps and quickly cut near the tip using sterile scalpel blades.
Each clipping was obtained immediately adjacent to the location of the previous shearing. Yunnan Baiyao powder (Yunnan Baiyao Group Co., Ltd., China) was applied immediately to stop the bleeding. Then the samples were quickly put into liquid nitrogen and finally stored at −80 °C until RNA extraction.

RNA extraction, quantification and qualification

Total RNA was extracted with TRIzol (Invitrogen) following the manufacturer's protocol. In addition, RNA degradation and contamination were monitored on 1 % agarose gels. The purity was checked using the NanoPhotometer® spectrophotometer (IMPLEN, CA, USA). RNA concentration was measured using the Qubit® RNA Assay Kit in a Qubit® 2.0 fluorometer (Life Technologies, CA, USA). RNA integrity was assessed using the RNA Nano 6000 Assay Kit of the Bioanalyzer 2100 system (Agilent Technologies, CA, USA).

Library preparation for transcriptome sequencing

A total amount of 3 μg of RNA per sample was used as input material for the RNA sample preparations. Sequencing libraries were generated using the NEBNext® Ultra™ RNA Library Prep Kit for Illumina® (NEB, USA) following the manufacturer's recommendations, and index codes were added to attribute sequences to each sample. Briefly, mRNA was purified from total RNA using poly-T oligo-attached magnetic beads. Fragmentation was carried out using divalent cations under elevated temperature in NEBNext First Strand Synthesis Reaction Buffer (5X). First strand complementary DNA (cDNA) was synthesized using random hexamer primer and M-MuLV Reverse Transcriptase (RNase H-). Second strand cDNA synthesis was subsequently performed using DNA Polymerase I and RNase H. Remaining overhangs were converted into blunt ends via exonuclease or polymerase activities. After adenylation of the 3' ends of the DNA fragments, NEBNext Adaptor with hairpin loop structure was ligated to prepare for hybridization. In order to select cDNA fragments of preferentially 250–300 bp in length, the library fragments were purified with the AMPure XP system (Beckman Coulter, Beverly, USA). Then 3 μL of USER Enzyme (NEB, USA) was used with size-selected, adaptor-ligated cDNA at 37 °C for 15 min, followed by 5 min at 95 °C before PCR. Then PCR was performed with Phusion high-fidelity DNA polymerase, Universal PCR primers and Index (X) primer. Finally, PCR products were purified (AMPure XP system) and library quality was assessed on the Agilent Bioanalyzer 2100 system.

Clustering and sequencing

The clustering of the index-coded samples was performed on a cBot cluster generation system using the TruSeq PE Cluster Kit v3-cBot-HS (Illumina) according to the manufacturer's instructions. After cluster generation, the library preparations were sequenced on an Illumina HiSeq platform and 125 bp or 150 bp paired-end reads were generated.

Data analysis and quality control

Raw data (raw reads) in fastq format were first processed through in-house Perl scripts. In this step, clean data (clean reads) were obtained by removing reads containing adapters, reads containing poly-N and low-quality reads from the raw data. At the same time, the Q20, Q30 and GC contents of the clean data were calculated (Q20 and Q30 are Phred-score-based measures of sequencing quality, and GC content is the percentage of G and C bases in the sequences). All the downstream analyses were based on the clean data with high quality.

Reads mapping to the reference genome

Reference genome and gene model annotation files were downloaded directly from the genome website.
The index of the reference genome was built using Bowtie v2.2.3, and paired-end clean reads were aligned to the reference genome using TopHat v2.0.12. We selected TopHat as the mapping tool because TopHat can generate a database of splice junctions based on the gene model annotation file and thus gives a better mapping result than other non-splice-aware mapping tools.

Quantification of gene expression level

HTSeq v0.6.1 was used to count the number of reads mapped to each gene, and then the FPKM (fragments per kilobase of transcript per million mapped reads) of each gene was calculated based on the length of the gene and the read count mapped to it. FPKM, the expected number of fragments per kilobase of transcript sequence per million base pairs sequenced, accounts for the effects of both sequencing depth and gene length on the read count and is currently the most commonly used method for estimating gene expression levels.

Differential expression analysis

Differential expression analysis of the three conditions or groups (three biological replicates per condition) was performed using the DESeq R package (1.18.0). DESeq provides statistical routines for determining differential expression in digital gene expression data using a model based on the negative binomial distribution. The resulting P values were adjusted using the approach by Benjamini and Hochberg for controlling the false discovery rate. Genes with an adjusted P value < 0.05 found by DESeq were assigned as differentially expressed.

GO and KEGG enrichment analysis of differentially expressed genes

GO enrichment analysis of differentially expressed genes was implemented with the GOseq R package, in which gene length bias was corrected. GO terms with a corrected P value (q value) less than 0.05 were considered significantly enriched by differentially expressed genes. KEGG is a database resource for understanding high-level functions and utilities of the biological system, such as the cell, the organism and the ecosystem, from molecular-level information, especially large-scale molecular datasets generated by genome sequencing and other high-throughput experimental technologies (http://www.genome.jp/kegg/, last access: 29 November 2019). We used KOBAS software to test the statistical enrichment of differentially expressed genes in KEGG pathways.

Validation of RNA-Seq data and gene expression levels

We used qRT-PCR to validate the RNA-Seq data and gene expression levels in this study. After we extracted total RNA (TRIzol, Invitrogen) from the experimental goat skin samples in the three periods, we synthesized cDNA from the mRNA (PrimeScript™ RT reagent Kit with gDNA Eraser (Perfect Real Time), TaKaRa). The primers we used were designed and synthesized by Sangon Biotech, based on the mRNA sequences published in the NCBI database. SYBR Green (TaKaRa) was used in the qRT-PCR. β-actin acted as the internal reference, and DEG expression levels were calculated by the 2^(−ΔΔCt) method (where Ct is the threshold cycle). The reaction system and conditions of the qRT-PCR were based on the kit protocol. The annealing temperature (Tm) was based on the primer design. Results were analyzed with SAS 9.2.

Results

After extracting total RNA from the three female Inner Mongolian Cashmere goats in the key stages of anagen, catagen and telogen, we constructed a total of nine transcriptome libraries of cashmere goat skin samples and sequenced the RNA. Over 511 396 044 raw reads and 487 729 890 clean reads were obtained from the sequence data.
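Before turning to the results, the short sketch below makes two quantities from the methods concrete: the FPKM value and the 2^(−ΔΔCt) relative expression calculation. It is a generic illustration of the standard formulas, not the authors' analysis code, and all names and numbers in it are hypothetical.

# Generic illustration of the FPKM and 2^(-ddCt) formulas; not the
# authors' pipeline code. All names and values below are hypothetical.

def fpkm(read_count, gene_length_bp, total_mapped_fragments):
    """Fragments per kilobase of transcript per million mapped fragments."""
    return read_count * 1e9 / (gene_length_bp * total_mapped_fragments)


def relative_expression(ct_target, ct_reference, ct_target_calib, ct_reference_calib):
    """2^(-ddCt) fold change of a target gene versus an internal reference
    (e.g. beta-actin), relative to a calibrator sample."""
    d_ct_sample = ct_target - ct_reference
    d_ct_calibrator = ct_target_calib - ct_reference_calib
    return 2 ** -(d_ct_sample - d_ct_calibrator)


# Example: a 2 kb gene with 400 mapped fragments in a library of 20 million
# mapped fragments has FPKM = 400 * 1e9 / (2000 * 2e7) = 10.0
print(fpkm(400, 2000, 20_000_000))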
Quality control results showed that the Q20 of each sample was more than 93 % and the Q30 more than 85 %, and the GC content was between 54.77 % and 57.75 % (Table 1). Numbers 1–3 in the sample names represent the three cashmere goats. The quality control results indicated that the sequencing results were reliable and could be used for subsequent data analysis. After we mapped the clean reads to the goat reference genome (Capra hircus ARS1; Table 2), we compared each growth cycle stage. There were three comparing groups in our research: anagen to catagen, catagen to telogen and telogen to anagen. By limiting the q value to < 0.05, we found, in total, 51 DEGs including 29 downregulated genes and 22 upregulated genes in the first group. In the second group, there were 443 DEGs in total, containing 117 downregulated genes and 326 upregulated genes. In the third group, there were 779 DEGs including 582 downregulated genes and 197 upregulated genes. In the second group, most DEGs were upregulated, while in the third group downregulated genes played the greater part. A Venn diagram of the DEGs is shown in Fig. 1; the largest number of DEGs was found between telogen and anagen, and the smallest between anagen and catagen. These data show that gene expression in telogen differs greatly from that in the other periods. To analyze the expression patterns of genes showing conserved expression between anagen, catagen and telogen, we performed hierarchical clustering to group the genes according to similarities in their patterns of gene expression (Fig. 2). Through hierarchical clustering, it can be seen that anagen and catagen cluster together and have similar gene expression patterns, while the gene expression pattern of telogen is quite different from those of anagen and catagen. After comparing the DEGs between each group, GO annotations were analyzed (see the Supplement). In the first group, for biological process, upregulated DEGs were mostly enriched in fatty acid beta-oxidation using acyl-CoA dehydrogenase (GO:0033539), acute inflammatory response (GO:0002526) and acute-phase response (GO:0006953); downregulated DEGs were mostly enriched in regulation of cell shape (GO:0008360), injection of substances into other organisms (GO:0035737) and envenomation resulting in modification of morphology or physiology of other organisms (GO:0035738). Interestingly, both up- and downregulated DEGs were enriched in protein complex assembly (GO:0006461) and protein complex biogenesis (GO:0070271). In order to examine the molecular interactions, reactions and relations of the DEGs in the three groups, KEGG pathway enrichment was also analyzed. DEGs in these three groups were mainly enriched in pathways related to metabolism (Fig. 3): most DEGs in the first and second groups were enriched in the metabolic pathway (chx01100), whereas most DEGs in the third group were enriched in phagosome (chx04145). The results of the KEGG enrichment indicated that the metabolic pathway had a great impact on the SHF growth cycle. Combining the DEG, GO and KEGG analysis data, we identified three genes, fibroblast growth factor 5 (FGF5), fibroblast growth factor receptor 1 (FGFR1) and RAS related (RRAS), which may be related to hair follicle growth and development through the MAPK signaling pathway, and verified their expression levels by qRT-PCR. The primer sequence information is shown in Table 3.
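As an aside on the DEG bookkeeping described above, the snippet below sketches how a DESeq-style results table could be filtered at an adjusted P value of 0.05 and split into up- and downregulated genes. The column names (padj, log2FoldChange) follow common DESeq output conventions and are assumptions here, not a description of the authors' actual files.

# Hypothetical sketch of the DEG filtering step; the file path and column
# names are assumed DESeq-style conventions, not the authors' actual output.
import pandas as pd

results = pd.read_csv('anagen_vs_catagen_deseq_results.csv')  # hypothetical path

degs = results[results['padj'] < 0.05]
upregulated = degs[degs['log2FoldChange'] > 0]
downregulated = degs[degs['log2FoldChange'] < 0]

print(len(degs), len(upregulated), len(downregulated))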
Results showed that the qRT-PCR expression trends of these three genes were basically the same as those from RNA-Seq (Fig. 4). It can be seen that the expression trend of FGF5 is opposite to that of FGFR1 and RRAS. It is possible that there is a negative regulatory relationship between FGF5 and FGFR1, and a positive regulatory relationship between FGFR1 and RRAS. FGF5 was highly expressed in anagen and expressed at a lower level in telogen. FGFR1 showed the opposite pattern, with higher expression in telogen and lower expression in anagen. Based on the expression of these genes and the MAPK pathway, we drew an interactive network control chart (Fig. 5). The interactive network control chart showed that FGF5, FGFR1 and RRAS were located at essential places in the MAPK signal pathway. Firstly, FGF5, together with other factors, directly activates FGFR1, which is a receptor on the cell membrane. FGFR1 may combine with GRB2 and SOS and then activate RRAS. Finally, after a series of activation and phosphorylation interactions, cell proliferation and differentiation are affected, which in turn regulates the periodic growth of hair follicles.

Discussion

Since the 1960s, researchers have tried to explore the periodic changes in the SHF of cashmere goats, from the phenotype to the molecular mechanism. Gene expression at the RNA level varies with time and tissue, so transcriptome sequencing is a direct way to explore gene expression changes. Therefore, we studied the growth cycle of the SHF in cashmere goats from the perspective of the transcriptome. In a recent study, we chose hair follicles plucked from the dorsal side as experimental samples to analyze the SHF and PHF of cashmere goats in catagen and telogen, while excluding the effects of other factors. We identified a set of differentially expressed known and novel genes in hair follicles, such as STC2, ROR2 and VEGFA, which may be related to hair cycle growth and other physiological functions. However, in another recent study, researchers found that stem cells in the hair follicle are regulated by the adjacent intrafollicular micro-environmental niche, and that this niche is also modulated dynamically by extra-follicular macro-environmental signals. Therefore, in this study we used skin in anagen, catagen and telogen as samples for transcriptome sequencing. Geng et al. studied the hair follicle development and cycling of the Shaanbei white cashmere goat. They found that a large number of DEGs were mainly related to the cellular process, cell and cell part, binding, biological regulation and metabolic process among the different stages of hair follicle development. In addition, the Wnt, Shh, TGF-β and Notch signal pathways may be involved in the development of hair follicles. In this study, there were 51 DEGs between anagen and catagen, 443 DEGs between catagen and telogen, and 779 DEGs between telogen and anagen. The most DEGs were found between telogen and anagen, while the fewest were between anagen and catagen. This may mean that, as the hair follicle moves from the growth stage to the resting phase, changes take place gradually inside the skin, whereas when the hair follicle moves from the resting phase into a new round of the growth phase, the internal molecular state of the skin changes greatly. We also found that upregulated genes accounted for the majority of DEGs between catagen and telogen, while downregulated genes accounted for the majority between telogen and anagen. This may indicate that the expression level of most genes in the skin was consistent with the hair follicle activity.
The mechanism of SHF growth in cashmere goats is still in the exploratory stage. Hair follicle growth and development is an extremely complicated process, and it is influenced by many internal and external factors. It has been reported that hormone levels, light duration and nutrition, as well as some other important factors, have an important effect on the growth and development of hair follicles in cashmere goats. Melatonin is one of the most important hormones that affect the growth and development of the SHF in cashmere goats. It can promote the initiation and maturation of secondary follicles and increase their population, and the beneficial effect of melatonin on the secondary follicle population remains throughout the cashmere goat's whole life. Recently, the mechanisms of the melatonin effect on the SHF of cashmere goats were reported; they include enhancement of the activities of antioxidant enzymes such as superoxide dismutase and glutathione peroxidase (GSH-Px), elevated total antioxidant capacity, upregulated anti-apoptotic Bcl-2 expression, and downregulated expression of the pro-apoptotic proteins Bax and caspase-3. Daily light exposure also plays a large role in SHF growth. Exposure to a short photoperiod extended the anagen phase of the cashmere goat hair follicle and increased cashmere production. Assessments of tissue sections indicated that the short photoperiod significantly induced cashmere growth. When the daily light exposure of cashmere goats was reduced to 7 h, SHF activity and melatonin concentration in July, as well as cashmere fiber length and fiber weight in October, were significantly increased compared with the natural daily photoperiod. From a nutritional aspect, researchers found that there was a tendency or a significant interaction effect of Cu and Mo on cashmere growth (P = 0.076) or diameter (P < 0.05), which might be accomplished by changing the numbers of secondary follicles and active secondary follicles, as well as the secondary-to-primary follicle ratio. Researchers also found that dietary supplementation with essential oils-cobalt (EOC) significantly promoted cashmere goat hair fiber quality (P < 0.05). Because the growth and development of the SHF of cashmere goats is a very complex process, many scientists are still exploring the important factors that affect it and its growth and development mechanism. In this study, the expression of some genes from the MAPK signal pathway differed between periods. Therefore, it can be predicted that these DEGs have an impact on cell proliferation and differentiation through the MAPK signal pathway and thus affect the growth and development of hair follicles. Previous studies found that the MAPK signal pathway is an important pathway for the growth and development of hair follicles in mammals. Akilli Öztürk et al. demonstrated an essential role of Gab1 upstream of MAPK in the regulation of the hair cycle and the self-renewal of hair follicle stem cells in mice (Akilli Öztürk et al.). Liu et al. found that inhibition of the MAPK-ERK-Mfn2 axis abrogated the protective effects of Sirt1 on hair follicle stem cell survival, migration and proliferation. Zhang et al. analyzed the transcriptomes of cashmere goats and milk goats and discovered that the MAPK signal pathway was involved in hair follicle cycling in both cashmere and milk goats. Jin et al. suggested that LAMTOR3 influences the character of cashmere fiber, and that it might regulate hair follicle development and cashmere growth by inducing the MAPK signaling pathway.
Platelet-rich plasma (PRP) is an innovative treatment for androgenic alopecia in the early stages of development; PRP might promote the proliferation of dermal papilla cells by activating the MAPK and Akt signal pathways. Recently, researchers found that the MAPK signal pathway not only affects hair follicle growth and development but also affects poultry feather growth and development. Fang et al. revealed that the altered genes, or the targets of altered miRNAs, were involved in multiple biological processes and pathways, including the MAPK signal pathway, which is related to feather growth and development. All of this research demonstrates the importance of the MAPK signal pathway in hair follicle growth and development. Our study indicated that FGF5, FGFR1 and RRAS influence the growth and development of the hair follicle in the Inner Mongolian Cashmere goat through the MAPK signal pathway. This will provide a theoretical basis for further research on the growth and development of the SHF in Inner Mongolian Cashmere goats.

Conclusions

In this study, we tried to find important influencing factors of the SHF growth cycle in skin tissue from Inner Mongolian Cashmere goats by RNA-Seq. As a result, over 511 396 044 raw reads and 487 729 890 clean reads were obtained from the sequence data. In total, 51 DEGs, including 29 downregulated genes and 22 upregulated genes, were enriched between anagen and catagen. After comparing catagen to telogen, we obtained 443 DEGs in total, containing 117 downregulated genes and 326 upregulated genes. In the telogen–anagen comparing group, 779 DEGs, including 582 downregulated genes and 197 upregulated genes, were found. DEGs in the different growth cycle periods were annotated by GO analysis in each comparing group. The GO terms were mostly related to the transformation of cells and proteins. The KEGG enrichment results indicated that metabolic processes had a great impact on the SHF growth cycle. The comprehensive analysis of DEGs, GO enrichment and KEGG enrichment in our study indicated that FGF5, FGFR1 and RRAS influence the growth and development of the hair follicle in Inner Mongolian Cashmere goats through the MAPK signal pathway.

Data availability. All data files are available from the SRA database (https://www.ncbi.nlm.nih.gov/sra, last access: 18 May 2020, accession number SUB6509124; Gong, 2020).

Author contributions. RS and JL designed the experiments, GG and XQ carried them out, and LZ and XY analyzed the data. RS and GG prepared the article with contributions from all co-authors.