https://www.coursehero.com/file/p31gmrjj/Gas-Pressure-Pressure-force-area-psi-pressure-per-square-inch-Atmospheric/
[ "# Gas pressure pressure force area psi pressure per\n\n• Notes\n• 41\n\nThis preview shows page 6 - 22 out of 41 pages.\n\nChem 1035 Chapter 5 7 of 41Mercury Barometer: Device used to measure atmospheric pressure\nChem 1035 Chapter 5 8 of 41Gas Laws:Relationships Between Gas Pressure, Volume,Temperature, and MolesPressure vs. Volume:Boyle's Law: At constant temperature, the volume of a fixed mass of gas isInversely proportional to the pressure.P VV 1/P\nChem 1035 Chapter 5 9 of 41V 1 / p OR. PV = constant\nChem 1035 Chapter 5 10 of 41Sample data:Volume (mL)Pressure (torr)11(torr)PressureP × V (torr∙mL)20.07800.001281.56×10415.010380.0009631.56×10410.015600.0006411.56×1045.031120.0003211.56×104\nChem 1035 Chapter 5 11 of 41Boyle’s Law:\nChem 1035 Chapter 5 12 of 41Volume vs. Temperature:Charles' Law:At constant pressure, the volume of a fixed mass of gas is Directly proportional to the Kelvin temperature.Volume (L) HeV TV/T = constantCH4K = C* + 273H2O H2Absolute zero:Lowest obtainable temp(no motion, zero volume)V1/T1= V2/T2\nChem 1035 Chapter 5 13 of 41T (K)\nChem 1035 Chapter 5 14 of 41Sample Data:V/T = constant\nChem 1035 Chapter 5 15 of 41\nChem 1035 Chapter 5 16 of 41Pressure vs. 
Temperature:Amonton's Law: At constant volume, the pressure of a fixed amount of gas isGay- lusac’sDirectly proportional to the Kelvin temperature.P/T = constantP1/ T1= P2/T2P ^ , T ^Avogadro's Law: Equal volumes of different gases at the same temperature andpressure contain the same number of particlesV m (moles)\nChem 1035 Chapter 5 17 of 41\nChem 1035 Chapter 5 18 of 41Ideal Gas EquationCombine these laws:Boyle's Law: PV = kconstant tempp1v1= p2v2Charles' Law: V T or V = kTconstant pressurev/t = v/tAmonton's Law: P T or P = kTconstant volumep/t =p/tAvogadro's Law: V nV = knCombined gas law : P1V1/ T1= P2V2/ T2PV nT\nChem 1035 Chapter 5 19 of 41\nChem 1035 Chapter 5 20 of 41R = gas constant (0.0821 L x atm / molx K)\nChem 1035 Chapter 5 21 of 41Calculations with the Ideal Gas Law:Assume constant temp\n•", null, "•", null, "•", null, "" ]
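The relationships above can be checked numerically. The Boyle's-law rows are the sample data from the table; the ideal-gas numbers (1.00 atm, 22.4 L, 0 °C) are illustrative values, not from the notes.

```python
# Quick numerical check of the gas laws above.

R = 0.0821  # gas constant, L·atm/(mol·K)

# Boyle's law: at constant T, P × V is the same for every row of the table.
volumes_mL = [20.0, 15.0, 10.0, 5.0]
pressures_torr = [780.0, 1038.0, 1560.0, 3112.0]
products = [p * v for p, v in zip(pressures_torr, volumes_mL)]
print([f"{pv:.3g}" for pv in products])  # each close to 1.56e4 torr·mL

# Ideal gas equation: solve PV = nRT for n (moles).
P_atm, V_L, T_K = 1.00, 22.4, 0.0 + 273  # K = °C + 273
n = (P_atm * V_L) / (R * T_K)
print(f"n = {n:.3f} mol")  # roughly 1 mol at these conditions
```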
[ null, "https://www.coursehero.com/assets/img/doc-landing/start-quote.svg", null, "https://www.coursehero.com/assets/img/doc-landing/start-quote.svg", null, "https://www.coursehero.com/assets/img/doc-landing/start-quote.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.60493255,"math_prob":0.95951813,"size":1880,"snap":"2021-31-2021-39","text_gpt3_token_len":640,"char_repetition_ratio":0.22974414,"word_repetition_ratio":0.060402684,"special_character_ratio":0.35159576,"punctuation_ratio":0.125,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98226005,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-21T13:36:23Z\",\"WARC-Record-ID\":\"<urn:uuid:3dde8ecb-638d-4ef5-8508-3788204510bf>\",\"Content-Length\":\"281040\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a2089829-2a3d-47d0-a4c6-1ed0386442a5>\",\"WARC-Concurrent-To\":\"<urn:uuid:1a4153fe-cb4d-414f-8089-9e9455a6a848>\",\"WARC-IP-Address\":\"104.17.93.47\",\"WARC-Target-URI\":\"https://www.coursehero.com/file/p31gmrjj/Gas-Pressure-Pressure-force-area-psi-pressure-per-square-inch-Atmospheric/\",\"WARC-Payload-Digest\":\"sha1:2COPE4AVWIHD5HRDVYK2GQA4XVUI4FXO\",\"WARC-Block-Digest\":\"sha1:RZ6V2RTWJ46UY5LY7FYEJGWWWDRV5TOS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057225.38_warc_CC-MAIN-20210921131252-20210921161252-00058.warc.gz\"}"}
https://cs.stackexchange.com/questions/70490/minimizing-transportation-cost-through-a-network-multiple-source-sinks
[ "# Minimizing transportation cost through a network, multiple source/sinks\n\nI have the following problem:\n\nInputs:\n\n• A finite collection $$V$$ of vertices\n• A metric cost function $$c:V^2\\to\\Bbb R$$ (interpreted as the edge cost of a complete graph on $$V$$)\n• An amount of resource $$s:V\\to\\Bbb R$$ on each vertex, satisfying $$\\sum_{u\\in V}s(u)=0$$\n\nThe problem is to determine a transfer function $$f:V^2\\to\\Bbb R^{\\ge0}$$ such that $$\\sum_{v\\in V}(f(v,u)-f(u,v))=s(u)$$ for all $$u$$, which minimizes $$\\sum_{u,v}f(u,v)c(u,v)$$.\n\nThe story to tell is that the points represent places with some amount of resource (if $$s(u)$$ is positive) or with some need of the resource (if $$s(u)$$ is negative), and $$c(u,v)$$ is the cost of sending a unit of resource from $$u$$ to $$v$$. We assume that every vertex can send resource to any other vertex, and the direct path is usually better than passing through an intermediate point. How do we minimize the cost of sending everything from where it is to where it is needed?\n\nWhat is the name of this problem, and do there exist efficient algorithms to solve it? It seems similar to the minimum cost flow problem, but this only has one source and one sink, has maximum capacities for all the edges, and often needs multiple node paths for the resource, while here the direct path is always at least as good as a longer path (because the cost function is a metric).\n\nThis is an instance of the minimum-cost circulation problem. Replace each vertex $v$ by two vertices $v_\\text{in}$,$v_\\text{out}$. Each edge $(u,v)$ is replaced by $(u_\\text{out},v_\\text{in})$, with capacity (upper bound) given by $c(u,v)$ and lower bound 0. Then, add an edge $(v_\\text{in},v_\\text{out})$ with both the lower bound and upper bound set to $s(v)$." ]
https://www.groundai.com/project/risk-minimization-in-structured-prediction-using-orbit-loss/
[ "Risk Minimization in Structured Prediction using Orbit Loss\n\n# Risk Minimization in Structured Prediction using Orbit Loss\n\nDanny Karmon\nDept. of Computer Science\nBar-Ilan University, Israel\ndanny.karmon@biu.ac.il\nJoseph Keshet\nDept. of Computer Science\nBar-Ilan University, Israel\njoseph.keshet@biu.ac.il\n###### Abstract\n\nWe introduce a new surrogate loss function called orbit loss in the structured prediction framework, which has good theoretical and practical advantages. While the orbit loss is not convex, it has a simple analytical gradient and a simple perceptron-like learning rule. We analyze the new loss theoretically and state a PAC-Bayesian generalization bound. We also prove that the new loss is consistent in the strong sense; namely, the risk achieved by the set of the trained parameters approaches the infimum risk achievable by any linear decoder over the given features. Methods that are aimed at risk minimization, such as the structured ramp loss, the structured probit loss and the direct loss minimization require at least two inference operations per training iteration. In this sense, the orbit loss is more efficient as it requires only one inference operation per training iteration, while yields similar performance. We conclude the paper with an empirical comparison of the proposed loss function to the structured hinge loss, the structured ramp loss, the structured probit loss and the direct loss minimization method on several benchmark datasets and tasks.\n\n## 1 Introduction\n\nThere are three main differences between binary classification problems and structured prediction problems. First, the input to a binary classifier is a feature vector of a fixed length and the output is restricted to two possible labels, whereas in structured prediction both the input and the output are structured objects (a graph, an acoustic speech utterance, a sequence of words, an image). 
Second, the structured output space is potentially exponentially large (all possible phoneme or word sequences, all possible taxonomy graphs, all possible human poses, etc.). And third, while in binary classification the system's performance is evaluated using the error rate, i.e., the 0-1 loss, in structured prediction each task often has its own evaluation metric or cost, such as word error rate in speech recognition, the BLEU score in machine translation, the NDCG score in information retrieval, or the intersection-over-union score in visual object segmentation. Some of these are involved functions, which are non-decomposable in the output space.

There is significant literature on learning parameters for structured prediction and graphical models. Ultimately, the goal in learning is to find the model parameters so as to minimize the expected cost, or risk, where the expectation is taken with respect to a random draw of input-output pairs from a fixed but unknown distribution. Since the expectation cannot be evaluated because the underlying probability is unknown, and since the cost is often a non-convex combinatorial function (which is hard to minimize directly), the learning problem is formulated as an optimization problem where the parameters are found by minimizing a trade-off between a measure of the goodness of fit (loss) to the training data and a regularization term. In discriminative training, the loss function should be directly related to the cost between the model prediction and the target label, averaged over the training set.

The most common approaches to structured prediction, namely the structured perceptron, the structural support vector machine (SVM) and conditional random fields (CRF), do not directly minimize the risk. The structured perceptron (Collins, 2002) solves a feasibility problem, which is independent of the cost.
In structural SVM (Joachims et al., 2005) the measure of goodness is a convex upper bound to the cost called the structural hinge loss. It is based on a generalization of the binary SVM hinge loss to the structured case, and there is no guarantee for the risk. While there exist generalization bounds for the structured hinge loss (e.g., McAllester, 2006; Taskar et al., 2003), they all include terms which are not directly related to the cost, such as the Hamming loss, and inherently the structured hinge loss cannot be consistent, as it fails to converge to the performance of the optimal linear predictor in the limit of infinite training data (McAllester, 2006). In CRFs the measure of goodness is the log loss function, which is independent of the cost (Lafferty et al., 2001). Smith and Eisner (2006) tried to address this shortcoming of CRFs and proposed to minimize the risk under the Gibbs measure. While it seems that this loss function is consistent, we are not aware of any formal analysis.

Recently, several works have focused on directly minimizing the expected cost. In particular, McAllester et al. (2010) presented a theorem stating that a certain perceptron-like learning rule, involving feature vectors derived from cost-augmented inference, directly corresponds to the gradient of the risk. Direct loss minimization needs two inference operations per training iteration and is extremely sensitive to its hyper-parameter. Do et al. (2008) generalized the notion of the ramp loss from binary classification to structured prediction and proposed a loss function which is a non-convex bound to the cost, and was found to be a tighter bound than the structured hinge loss function. The structured ramp loss also needs two inference operations per training iteration. Keshet et al. (2011) generalized the notion of the binary probit loss to the structured prediction case.
The gradient of this non-convex loss function can be approximated by averaging over samples from the unit-variance isotropic normal distribution, where for each sample an inference with a perturbed weight vector is computed. In order to gain stability in the gradient computation, hundreds to thousands of inference operations are required per training iteration, hence the update rule is computationally heavy.

The goal of this work is to propose a new learning update rule for structured prediction which results in fast training on the one hand and aims at minimizing the risk on the other. We define a new loss function, called the orbit loss, whose gradient has a simple closed analytical form, very close to the structured perceptron update rule. We state a finite-sample generalization bound for this loss function and show that it is consistent in the strong sense. That is, for any feature map (finite or infinite dimensional) the loss function yields predictors approaching the infimum risk achievable by any linear predictor over the given features. The update rule of this new loss involves one inference operation per training iteration, similar to the structured perceptron or the structural SVM, and is hence faster (per training iteration) than ramp, probit and direct loss minimization. In a series of experiments we show that the new loss function performs similarly to other approaches that were designed to minimize the risk.

The paper is organized as follows. In Section 2 we state the problem formally. In Section 3 we introduce the new surrogate loss function and its update rule. In Section 4 we present the analysis of our new method, including proofs for both consistency and a generalization bound. In Section 5 we present a set of experiments and compare the new learning rule to other algorithms.
We conclude the paper in Section 6.

## 2 Formal Settings

We formulate the structured supervised learning problem by setting $\mathcal{X}$ to be an abstract set of all possible input objects and $\mathcal{Y}$ to be an abstract set of all possible output targets. We assume that the input objects and the target labels are drawn from an unknown joint distribution $\rho$. We define a set of fixed mappings $\phi : \mathcal{X}\times\mathcal{Y} \to \mathbb{R}^d$, called feature functions, from the set of input objects and target labels to a real vector of length $d$.

Here we consider a linear decoder with parameters $w \in \mathbb{R}^d$, such that the parameters weight the feature functions. We denote the score of label $y$ by $w\cdot\phi(x,y)$, given the input $x$. The decoder predicts the label with the highest score:

$$\hat{y}_w(x) = \operatorname{argmax}_{y\in\mathcal{Y}}\; w\cdot\phi(x,y) \tag{1}$$

Ideally, we would like to find the parameters $w$ that optimize the risk for unseen data. Formally, we define the cost function, $\ell(y, y')$, to be a non-negative measure of error when predicting $y'$ instead of $y$ as the label of $x$. We assume that $\ell(y,y) = 0$ for all $y\in\mathcal{Y}$. Often the desired evaluation metric is a utility function that needs to be maximized (like BLEU or NDCG), and then we define the cost to be 1 minus the evaluation metric.

Our goal is to minimize the risk:

$$w^* = \operatorname{argmin}_w\; \mathbb{E}_{(x,y)\sim\rho}\big[\ell(y, \hat{y}_w(x))\big]. \tag{2}$$

Since the distribution $\rho$ is unknown, we use a training set $S = \{(x_i, y_i)\}_{i=1}^m$ of $m$ examples that are drawn i.i.d. from $\rho$, and replace the expectation in (2) with a mean over the training set and a regularization factor. The cost is often a combinatorial non-convex quantity, which is hard to minimize, hence it is replaced with a surrogate loss, denoted $\bar{\ell}$. Different algorithms use different surrogate loss functions.
Overall, the objective function of (2) transforms into the following objective function:

$$w^* = \operatorname{argmin}_w\; \frac{1}{m}\sum_{i=1}^m \bar{\ell}(w, x_i, y_i) + \frac{\lambda}{2}\|w\|^2, \tag{3}$$

where $\lambda$ is a trade-off parameter between the loss term and the regularization factor.

## 3 Orbit Loss

Denote by $\Delta\phi(y, y')$ the difference between the feature functions of the labels $y$ and $y'$, respectively:

$$\Delta\phi(y, y') = \phi(x, y) - \phi(x, y').$$

Define by $\delta\phi(y, y')$ the normalized version of $\Delta\phi(y, y')$ as follows:

$$\delta\phi(y, y') = \begin{cases} \Delta\phi(y, y')/\|\Delta\phi(y, y')\| & \text{if } y \neq y' \\ 0 & \text{if } y = y'. \end{cases} \tag{4}$$

The orbit surrogate loss function is formally defined as follows:

$$\bar{\ell}_{\text{orbit}}(w, x, y) = \mathbb{P}_{\varepsilon\sim\mathcal{N}(0,1)}\big[\varepsilon > w\cdot\delta\phi(y, \hat{y}_w)\big]\;\ell(y, \hat{y}_w). \tag{5}$$

That is, the orbit loss is equal to the cost multiplied by the probability that the prediction score, perturbed by a small normal random number, is greater than the score of the target label $y$.

We now derive the gradient-based learning rule for this loss function, which helps to describe some of its properties. The loss has a simple analytical gradient:

$$\nabla_w\Big[\mathbb{P}_{\varepsilon\sim\mathcal{N}(0,1)}[\varepsilon > w\cdot\delta\phi(y,\hat{y})]\,\ell(y,\hat{y})\Big] = \nabla_w\Big[\frac{1}{\sqrt{2\pi}}\int_{w\cdot\delta\phi(y,\hat{y})}^{\infty} e^{-z^2/2}\,dz\;\ell(y,\hat{y})\Big] \tag{6}$$

$$= -\frac{1}{\sqrt{2\pi}}\,e^{-|w\cdot\delta\phi(y,\hat{y})|^2/2}\;\ell(y,\hat{y})\;\delta\phi(y,\hat{y}). \tag{7}$$

The update rule of the orbit loss is the following:

$$w \leftarrow (1-\eta\lambda)\,w + \eta\,e^{-|w\cdot\delta\phi(y,\hat{y}_w)|^2/2}\;\ell(y,\hat{y}_w)\;\delta\phi(y,\hat{y}_w). \tag{8}$$

Note that when the prediction label is close to the target label in terms of the decoding score, that is, when the term $|w\cdot\delta\phi(y,\hat{y}_w)|$ is relatively small, the exponent is close to 1. Under this condition the update rule becomes

$$w \leftarrow (1-\eta\lambda)\,w + \eta\,\ell(y,\hat{y}_w)\,\delta\phi(y,\hat{y}_w), \tag{9}$$

which generalizes the regularized structured perceptron's update rule (Collins, 2002; Zhang et al., 2014), namely

$$w \leftarrow (1-\eta\lambda)\,w + \eta\,\mathbb{1}\{y\neq\hat{y}_w\}\,\delta\phi(y,\hat{y}_w), \tag{10}$$

where $\mathbb{1}\{\pi\}$ is an indicator function, which equals 1 if the predicate $\pi$ holds and 0 otherwise.

A nice property of this update rule is that the cost function does not need to be decomposable in the size of the output.
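The update rule (8) can be sketched in a few lines of NumPy. The multiclass reduction, feature map, and toy data below are our own illustration (not the paper's experimental setup); only the update itself follows the equations above.

```python
import numpy as np

def phi(x, y, n_classes):
    """Joint feature map (Kesler construction): x placed in block y."""
    out = np.zeros(n_classes * x.size)
    out[y * x.size:(y + 1) * x.size] = x
    return out

def predict(w, x, n_classes):
    """The argmax decoder of eq. (1)."""
    scores = [w @ phi(x, y, n_classes) for y in range(n_classes)]
    return int(np.argmax(scores))

def orbit_update(w, x, y, cost, n_classes, eta=0.1, lam=1e-4):
    """One step of the orbit update rule, eq. (8)."""
    y_hat = predict(w, x, n_classes)
    dphi = phi(x, y, n_classes) - phi(x, y_hat, n_classes)
    norm = np.linalg.norm(dphi)
    if norm > 0:
        dphi = dphi / norm                      # delta-phi, eq. (4)
    gate = np.exp(-np.abs(w @ dphi) ** 2 / 2.0)  # exponent term of eq. (8)
    return (1 - eta * lam) * w + eta * gate * cost[y, y_hat] * dphi

# Toy data: 3 well-separated Gaussian clusters, 0-1 cost matrix.
rng = np.random.default_rng(0)
n_classes, dim = 3, 5
cost = 1.0 - np.eye(n_classes)
centers = rng.normal(size=(n_classes, dim))
w = np.zeros(n_classes * dim)
for _ in range(200):
    y = int(rng.integers(n_classes))
    x = centers[y] + 0.1 * rng.normal(size=dim)
    w = orbit_update(w, x, y, cost, n_classes)

errors = sum(predict(w, centers[y] + 0.1 * rng.normal(size=dim), n_classes) != y
             for y in range(n_classes) for _ in range(20))
print("test errors out of 60:", errors)
```

With a 0-1 cost the gated update closely matches the perceptron rule (10); the gate only matters when the score gap $|w\cdot\delta\phi|$ is large.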
Decomposable cost functions are needed in order to solve the cost-augmented inference that is used in the training of structural SVMs (Joachims et al., 2005; Ranjbar et al., 2013), direct loss minimization (McAllester et al., 2010), or structured ramp loss (Do et al., 2008). It means that cost functions like word error rate or intersection-over-union can be used without being approximated.

Another property of the orbit loss is its similarity to the structured probit loss (Keshet et al., 2011). The probit loss was derived from the concept of a stochastic decoder in the PAC-Bayesian framework (McAllester, 1998, 2003) and was shown to have both good theoretical properties and practical advantages (Keshet et al., 2011). The structured probit loss is defined as follows:

$$\bar{\ell}_{\text{probit}}(w,x,y) = \mathbb{E}_{\epsilon\sim\mathcal{N}(0,I)}\big[\ell(y, \hat{y}_{w+\epsilon})\big], \tag{11}$$

where $\epsilon$ is a $d$-dimensional isotropic normal random vector. Note that the orbit loss (5) can be written as follows:

$$\mathbb{P}_{\varepsilon\sim\mathcal{N}(0,1)}\big[\varepsilon > w\cdot\delta\phi(y,\hat{y}_w)\big]\,\ell(y,\hat{y}_w) = \mathbb{P}_{\epsilon\sim\mathcal{N}(0,I)}\big[-\epsilon\cdot\delta\phi(y,\hat{y}_w) > w\cdot\delta\phi(y,\hat{y}_w)\big]\,\ell(y,\hat{y}_w). \tag{12}$$

The last equation holds since the inner product of an isotropic normal random vector with a unit-norm vector is a zero-mean unit-variance normal random variable. Writing the probability as the expectation of an indicator function, we have

$$\mathbb{E}_{\epsilon\sim\mathcal{N}(0,I)}\big[\mathbb{1}\{(w+\epsilon)\cdot\delta\phi(y,\hat{y}_w) < 0\}\big]\,\ell(y,\hat{y}_w). \tag{13}$$

Assuming, for a radius small enough, that the prediction does not change within a ball centered at $w$, we can bring the cost function into the expectation term, that is,

$$\mathbb{E}_{\epsilon\sim\mathcal{N}(0,I)}\big[\mathbb{1}\{(w+\epsilon)\cdot\delta\phi(y,\hat{y}_{w+\epsilon}) < 0\}\,\ell(y,\hat{y}_{w+\epsilon})\big] = \mathbb{E}_{\epsilon\sim\mathcal{N}(0,I)}\big[\ell(y,\hat{y}_{w+\epsilon})\big], \tag{14}$$

which is the structured probit loss.

## 4 Analysis

In this section we analyze the orbit loss. We derive a generalization bound based on PAC-Bayesian theory: we start by upper-bounding the probit loss with the orbit loss, and then plug the result into a PAC-Bayesian generalization bound.
Then we show that the decoder's parameters, estimated by optimizing the regularized orbit loss in the limit of infinite data, approach the infimum risk achievable by any linear decoder.

Recall that the structured probit loss is defined as:

$$\bar{\ell}_{\text{probit}}(w,x,y) = \mathbb{E}_{\epsilon\sim\mathcal{N}(0,I)}\big[\ell(y, \hat{y}_{w+\epsilon})\big]. \tag{15}$$

The following theorem states a generalization bound for the probit loss function (Keshet et al., 2011).

###### Theorem 1 (Generalization of probit loss).

For a fixed $\gamma > 0$ we know that, with a probability of at least $1-\delta$ over the draw of the training data, the following holds simultaneously for all $w$:

$$\mathbb{E}_{(x,y)\sim\rho}\big[\bar{\ell}_{\text{probit}}(w,x,y)\big] \le \frac{1}{1-\frac{1}{2\gamma}}\left(\frac{1}{m}\sum_{i=1}^m \bar{\ell}_{\text{probit}}(w,x_i,y_i) + \frac{\gamma}{2m}\|w\|^2 + \frac{\gamma}{m}\ln\frac{1}{\delta}\right). \tag{16}$$

Later this generalization bound will help us state a similar bound for the orbit loss.

We now analyze the orbit loss. Assume the score of the predicted label is separated from the score of its closest different label by at least a constant $\eta > 0$:

$$\min_{y'\neq\hat{y}}\; w\cdot\delta\phi(\hat{y},y') \ge \eta. \tag{17}$$

The following lemma upper-bounds the probit loss with the orbit loss.

###### Lemma 2.

For a finite $\eta > 0$ and a cost function satisfying $0\le\ell(y,y')\le 1$ for all $y,y'$, the following holds:

$$\bar{\ell}_{\text{probit}}(w/\sigma,x,y) \le \bar{\ell}_{\text{orbit}}(w/\sigma,x,y) + \sigma, \tag{18}$$

for $\sigma$ small enough.

For brevity of the explanation we call $\hat{y}_w$ the predicted label and $\hat{y}_{w+\epsilon}$ the perturbed label. The idea behind the proof is to split the structured probit loss into the case in which the predicted label and the perturbed label are the same, and the case in which they differ. We show that the probability of the labels being equal is upper-bounded by the orbit loss, and the probability of the labels being different is upper-bounded by an exponential term that approaches zero as the noise scale $\sigma$ approaches zero.

###### Proof.

From the law of total expectation we have

$$\mathbb{E}_{\epsilon\sim\mathcal{N}(0,I)}\big[\ell(\hat{y}_{w+\epsilon},y)\big] \le \mathbb{E}_\epsilon\big[\mathbb{1}\{\hat{y}_{w+\epsilon}=\hat{y}_w\}\,\ell(\hat{y}_{w+\epsilon},y)\big] + \mathbb{P}_\epsilon\big[\hat{y}_{w+\epsilon}\neq\hat{y}_w\big], \tag{19}$$

where we upper-bound the cost by 1 in the second term.

First, let us focus on the first term of the inequality.
For this term $\hat{y}_{w+\epsilon} = \hat{y}_w$, which means that

$$\mathbb{E}_\epsilon\big[\mathbb{1}\{\hat{y}_{w+\epsilon}=\hat{y}_w\}\,\ell(\hat{y}_{w+\epsilon},y)\big] = \mathbb{P}_\epsilon\big[\hat{y}_{w+\epsilon}=\hat{y}_w\big]\,\ell(\hat{y}_w,y). \tag{20}$$

By definition of the inference rule (1), for any vector $w'$ we have $w'\cdot\delta\phi(\hat{y}_{w'},y')\ge 0$ for all $y'$. Therefore the probability that $\hat{y}_{w+\epsilon}=\hat{y}_w$ can be expressed as follows:

$$\mathbb{P}_\epsilon\big[\hat{y}_{w+\epsilon}=\hat{y}_w\big] = \mathbb{P}_\epsilon\big[(w+\epsilon)\cdot\delta\phi(\hat{y}_{w+\epsilon},\hat{y}_w) \le 0\big] \tag{21}$$

which, in turn, can be expressed as

$$\mathbb{P}_\epsilon\big[w\cdot\delta\phi(\hat{y}_{w+\epsilon},\hat{y}_w) \le -\epsilon\cdot\delta\phi(\hat{y}_{w+\epsilon},\hat{y}_w)\big] \le \mathbb{P}_\epsilon\big[w\cdot\delta\phi(y,\hat{y}_w) \le -\epsilon\cdot\delta\phi(\hat{y}_{w+\epsilon},\hat{y}_w)\big], \tag{22}$$

where replacing $\delta\phi(\hat{y}_{w+\epsilon},\hat{y}_w)$ with $\delta\phi(y,\hat{y}_w)$ on the left-hand side of the inequality increases the event size, thereby increasing the probability. Replacing the inner product of an isotropic normal random vector with a unit-norm vector by a zero-mean unit-variance normal random variable, we get:

$$\mathbb{P}_\epsilon\big[w\cdot\delta\phi(y,\hat{y}_w) \le -\epsilon\cdot\delta\phi(\hat{y}_{w+\epsilon},\hat{y}_w)\big] = \mathbb{P}_{\varepsilon\sim\mathcal{N}(0,1)}\big[\varepsilon > w\cdot\delta\phi(y,\hat{y}_w)\big]. \tag{23}$$

The second term of the right-hand side of (19) can be expressed as follows:

$$\mathbb{P}_\epsilon\big[\hat{y}_{w+\epsilon}\neq\hat{y}_w\big] = \mathbb{P}_\epsilon\big[(w+\epsilon)\cdot\delta\phi(\hat{y}_{w+\epsilon},\hat{y}_w) > 0\big].$$

We have

$$\mathbb{P}_\epsilon\big[(w+\epsilon)\cdot\delta\phi(\hat{y}_{w+\epsilon},\hat{y}_w) > 0\big] = \mathbb{P}_\epsilon\big[\epsilon\cdot\delta\phi(\hat{y}_{w+\epsilon},\hat{y}_w) > w\cdot\delta\phi(\hat{y}_w,\hat{y}_{w+\epsilon})\big] \tag{24}$$

$$\le \mathbb{P}_\epsilon\big[\epsilon\cdot\delta\phi(\hat{y}_{w+\epsilon},\hat{y}_w) > \eta\big]. \tag{25}$$

We finalize the proof by bounding the last equation for a $\sigma$-scaled version of $\epsilon$:

$$\mathbb{P}_{\epsilon\sim\mathcal{N}(0,I)}\big[\sigma\epsilon\cdot\delta\phi(\hat{y}_{w+\sigma\epsilon},\hat{y}_w) > \eta\big] = \mathbb{P}_{\varepsilon\sim\mathcal{N}(0,1)}\big[\sigma\varepsilon > \eta\big] \le \exp\!\Big(-\frac{\eta^2}{2\sigma^2}\Big) = \frac{\sigma}{m}, \tag{26}$$

where the first equation holds since the inner product of an isotropic normal random vector with a unit-norm vector is a zero-mean unit-variance normal random variable; and the second equation holds for $\sigma$ small enough. Using the union bound over the draw of a sample of size $m$ concludes the proof. ∎

Plugging Lemma 2 into the bound of Theorem 1, we get the following generalization bound for the orbit loss.

###### Theorem 3 (Generalization of orbit loss).

For a fixed $\gamma > 0$, and assuming (17) holds with $\eta > 0$, we know that with a probability of at least $1-\delta$ over the draw of the training data the following holds simultaneously for all $w$ and for all $\sigma > 0$:

$$\mathbb{E}_{(x,y)\sim\rho}\big[\bar{\ell}_{\text{probit}}(w/\sigma,x,y)\big] \le \frac{1}{1-\frac{1}{2\gamma}}\left(\frac{1}{m}\sum_{i=1}^m \bar{\ell}_{\text{orbit}}(w/\sigma,x_i,y_i) + \frac{\gamma}{2m\sigma^2}\|w\|^2 + \sigma + \frac{\gamma}{m}\ln\frac{1}{\delta}\right). \tag{27}$$

We will now prove that the orbit loss is consistent.
We start with the observation that when the norm of the weight vector goes to infinity, the orbit loss approaches the cost:

###### Lemma 4.

$$\lim_{\alpha\to\infty} \bar{\ell}_{\text{orbit}}(\alpha w, x, y) = \ell(y, \hat{y}_w), \tag{28}$$

assuming that $w\cdot\delta\phi(y,\hat{y}_w) < 0$ for all $y\neq\hat{y}_w$.

###### Proof.

Recall that $w\cdot\delta\phi(\hat{y}_w, y)\ge 0$, and therefore $w\cdot\delta\phi(y,\hat{y}_w)\le 0$. Also note that scaling the parameters does not change the prediction, $\hat{y}_{\alpha w} = \hat{y}_w$. We have:

$$\lim_{\alpha\to\infty} \mathbb{P}_{\varepsilon\sim\mathcal{N}(0,1)}\big[\varepsilon > \alpha w\cdot\delta\phi(y,\hat{y}_w)\big]\,\ell(y,\hat{y}_w) = \mathbb{P}_{\varepsilon\sim\mathcal{N}(0,1)}\big[\varepsilon > -\infty\big]\,\ell(y,\hat{y}_w) = \ell(y,\hat{y}_w). \tag{29}$$

Consider the following training objective:

$$\hat{w}_m = \operatorname{argmin}_w\; \frac{1}{m}\sum_{i=1}^m \bar{\ell}_{\text{orbit}}(w, x_i, y_i) + \frac{\lambda_m}{2m}\|w\|^2. \tag{30}$$

###### Theorem 5 (Consistency of orbit loss).

For $\hat{w}_m$ defined by (30), if the sequence $\lambda_m$ increases without bound, and the sequence $\lambda_m/m$ converges to zero, then with a probability of one over the draw of the infinite sample we have:

$$\lim_{m\to\infty} \mathbb{E}_{(x,y)\sim\rho}\big[\bar{\ell}_{\text{probit}}((\ln m)\,\hat{w}_m,x,y)\big] = \inf_w \mathbb{E}_{(x,y)\sim\rho}\big[\ell(y,\hat{y}_w(x))\big]. \tag{31}$$

###### Proof.

Set $\sigma = 1/\ln m$, $\gamma = \lambda_m/\ln^2 m$, and $\delta = 1/m^2$ into the bound (27). We decompose the comparator into a scalar $\alpha$, corresponding to its norm, and a unit-norm vector $w^*$. Last, using a Chernoff bound we upper-bound the empirical orbit loss by its expectation plus $\sqrt{\ln m/m}$ to get

$$\mathbb{E}_{(x,y)\sim\rho}\big[\bar{\ell}_{\text{probit}}((\ln m)\hat{w}_m,x,y)\big] \le \mathbb{E}_{(x,y)\sim\rho}\big[\bar{\ell}_{\text{probit}}((\ln m)\alpha w^*,x,y)\big] \tag{32}$$

$$\le \frac{1}{1-\frac{\ln^2 m}{2\lambda_m}}\left(\mathbb{E}_{(x,y)\sim\rho}\big[\bar{\ell}_{\text{orbit}}(\alpha w^*,x,y)\big] + \sqrt{\frac{\ln m}{m}} + \frac{\lambda_m\alpha^2}{2m} + \frac{1}{\ln m} + \frac{2\lambda_m}{m\ln m}\right), \tag{33}$$

where $\alpha w^*$ is the minimizer of the right-hand side of the bound in (27), as well as of the optimization problem (30). Taking the limit as the number of examples approaches infinity on both sides, we have

$$\lim_{m\to\infty}\mathbb{E}_{(x,y)\sim\rho}\big[\bar{\ell}_{\text{probit}}((\ln m)\hat{w}_m,x,y)\big] \le \mathbb{E}_{(x,y)\sim\rho}\big[\bar{\ell}_{\text{orbit}}(\alpha w^*,x,y)\big]. \tag{34}$$

Noting that

$$\mathbb{E}_{(x,y)\sim\rho}\big[\bar{\ell}_{\text{probit}}(w,x,y)\big] \ge \inf_w \mathbb{E}_{(x,y)\sim\rho}\big[\ell(y,\hat{y}_w)\big], \tag{35}$$

and letting $\alpha$ approach infinity using Lemma 4 concludes the proof. ∎

## 5 Experiments

We evaluated the performance of the orbit loss by executing a number of experiments on several domains and tasks and compared the results with other approaches that are aimed at risk minimization, namely direct loss minimization (McAllester et al., 2010), structured ramp loss (Do et al., 2008), and structured probit loss (Keshet et al., 2011).
For reference, we also present results for the structured perceptron, as we wanted to stress the empirical differences between the update rule in (9) and the one in (10), as well as for the structured hinge loss.

### 5.1 MNIST

In our first experiment we tested the orbit update rule on a multiclass problem. MNIST is a dataset of handwritten digit images (10 classes). It is divided into a training set of 50,000 examples, a test set of 10,000 examples and a validation set of 10,000 examples. We preprocess the data by normalizing the input images and reducing the dimension from the original 784 attributes to 100 using PCA.

We used the orbit update rule as in (8). We defined the weight vector $w$ as a concatenation of 10 weight vectors, each corresponding to one of the 10 digits. The update rule for example $(x_i, y_i)$ can be simplified based on Kesler's construction (Crammer and Singer, 2001) as follows:

$$w_{y_i} \leftarrow (1-\eta\lambda)\,w_{y_i} + \eta\,e^{-|w_{y_i}\cdot x_i - w_{\hat{y}}\cdot x_i|^2/2}\,\ell(\hat{y},y_i)\,x_i$$

$$w_{\hat{y}} \leftarrow (1-\eta\lambda)\,w_{\hat{y}} - \eta\,e^{-|w_{y_i}\cdot x_i - w_{\hat{y}}\cdot x_i|^2/2}\,\ell(\hat{y},y_i)\,x_i$$

$$w_r \leftarrow (1-\eta\lambda)\,w_r \qquad \text{for all } r \neq y_i, \hat{y}$$

Note that the exponent values throughout training were very close to 1 and, practically, the update rule (9) could be used.

To properly evaluate the orbit loss we ran the experiment with two different cost functions for $\ell$: the 0-1 loss and a semi-randomized cost matrix. We did so because the update rule (9) is identical to the structured perceptron update rule under the 0-1 loss.

In the first case, we set the learning rate $\eta$ (decayed with the iteration number) and the regularization parameter $\lambda$. We also trained a multiclass perceptron and a multiclass SVM (Crammer and Singer, 2001). All of the hyper-parameters were chosen on the validation set. In all of the experiments we ran 4 epochs over the training data and used a linear kernel.

The results are given in Table 1 and suggest that there is a slight advantage for the orbit loss over the other algorithms.
Recall that we previously showed that the perceptron is a special case of the orbit loss under this setting, in which $\ell$ is the 0-1 loss; hence the only difference between the results in the table is due to the regularization factor used with the orbit loss.

As mentioned above, this experiment was executed once again, setting the cost function $\ell$ to be a semi-randomized matrix. We generated a randomized cost matrix of size 10 × 10, such that the elements on the diagonal were all 0, and the rest of the elements were chosen uniformly at random to be either 1 or 2. We trained a multiclass perceptron, a multiclass SVM, and the orbit loss, using the following hinge loss for the cost function:

$$\bar{\ell}_{\text{hinge}}(w,x,y) = \max_{\hat{y}}\,\big[\ell(y,\hat{y}) - w_y\cdot x + w_{\hat{y}}\cdot x\big] \tag{36}$$

To ensure reliability, we ran the second experiment for each algorithm with 10 different sampled matrices and averaged the results. The results are presented in Table 2. They show a clear advantage for the orbit loss update rule with regard to the task loss. The reason is that the orbit loss can take advantage of minimizing a non-0-1 loss, as compared to the perceptron.

### 5.2 Phoneme alignment

Our next experiment focused on phoneme alignment, which is used as a tool in developing speech recognition and text-to-speech systems. This is a structured prediction task: the input represents a speech utterance and consists of a pair of a sequence of acoustic feature vectors (mel-frequency cepstral coefficients) and a sequence of phonemes, where each phoneme symbol is drawn from a finite set of phoneme symbols. The lengths of the two sequences can differ for different inputs, although typically the acoustic sequence is significantly longer than the phoneme sequence. The goal is to generate an alignment between the two sequences in the input. The output is a sequence of integers, where the $k$-th integer $y_k$ gives the start frame in the acoustic sequence of the $k$-th phoneme in the phoneme sequence.
Hence the $k$-th phoneme starts at frame $y_k$ and ends at frame $y_{k+1}-1$.

For this task we used the TIMIT speech corpus, for which there are published benchmark results (Brugnara et al., 1993; Keshet et al., 2007; Hosom, 2009). We divided a portion of the TIMIT corpus (excluding the SA1 and SA2 utterances) into three disjoint parts containing 1500, 1796 and 400 utterances, respectively. The first part was used to train a phoneme frame-based classifier which, given a pair of a speech frame and a phoneme, returns the level of confidence that the phoneme was uttered in that frame. The output of the classifier is then used along with other features as a seven-dimensional feature map, as described in Keshet et al. (2007).

The seven-dimensional weight vector was trained on the second set of aligned utterances with the $\tau$-insensitive loss

$$\ell(y,\hat{y}) = \frac{1}{|y|}\sum_k \max\{|y_k - \hat{y}_k| - \tau, 0\}, \tag{37}$$

with $\tau$ = 10 ms. This cost measures the average disagreement between all of the boundaries of the desired alignment sequence and the boundaries of the predicted alignment sequence, where a disagreement of less than $\tau$ is ignored.

We trained the system with the orbit update rule; the structured perceptron update rule; the structural SVM optimized using stochastic gradient descent (Shalev-Shwartz et al., 2011); structured ramp loss; and the direct loss minimization algorithm, on a reduced training set of 150 examples (out of 1796) and a reduced validation set of 100 examples (out of 400). We were not able to train the system with the probit loss in a reasonable time.

The results are given in Table 3. The results in the first 4 columns should be read as the percentage of predictions that were within a given tolerance of the manual annotation; the higher the better. The last column of the table is the actual loss computed by (37); the smaller the better.
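The alignment cost of eq. (37) is simple to implement directly. The boundary sequences below are made-up example values, and the tolerance is given in frames here rather than milliseconds.

```python
def alignment_cost(y, y_hat, tau):
    """tau-insensitive alignment cost, eq. (37): average boundary
    disagreement, ignoring deviations of at most tau."""
    assert len(y) == len(y_hat)
    return sum(max(abs(yk - yk_hat) - tau, 0)
               for yk, yk_hat in zip(y, y_hat)) / len(y)

y_true = [3, 10, 20, 31]   # annotated phoneme start frames (illustrative)
y_pred = [4, 13, 20, 30]   # predicted start frames
print(alignment_cost(y_true, y_pred, tau=1))  # only the 3-frame miss counts
```

Note that this cost is non-decomposable only through the max/threshold per boundary; it needs no cost-augmented inference under the orbit update, which is the point made in Section 3.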
In those results the orbit update rule outperforms the other algorithms and yields state-of-the-art results.

We would like to note that, as in the MNIST experiment, the exponent values in the update rule were very close to 1 and, practically, the update rule (9) could be used.

### 5.3 Vowel duration

In the problem of vowel duration measurement we are provided with a speech signal which includes exactly one vowel, preceded and followed by consonants (i.e., CVC). Our goal is to predict the vowel duration accurately. Precise measurement of vowel duration in a given context is needed in many phonological experiments and is currently done manually (Heller and Goldrick, 2014).

The speech signal is represented as a sequence of acoustic features $\bar{x} = (x_1,\ldots,x_T)$, where each $x_i$ ($1 \le i \le T$) is a $d$-dimensional vector representing acoustic parameters, such as high and low energy, pitch, voicing, correlation coefficient, and so on (we extract $d = 22$ acoustic features every 5 msec). We denote the domain of the feature vectors by $\mathcal{X}$. The length $T$ of the input signal varies from one signal to another, thus $T$ is not fixed. We denote by $\mathcal{X}^*$ the set of all finite-length sequences over $\mathcal{X}$. In addition, we denote by $t_b$ and $t_e$ the vowel onset and offset times, respectively, where $1 \le t_b < t_e \le T$. For brevity we set $t = (t_b, t_e)$. The typical duration of an utterance is around 2 sec. There were 116 feature functions that described the typical duration of a vowel, the mean high energy before and after the vowel onset, and so on. The cost function we use is:

$$\ell(\hat{t}, t) = \big[|\hat{t}_b - t_b| - \tau_b\big]_+ + \big[|\hat{t}_e - t_e| - \tau_e\big]_+, \tag{38}$$

where $[x]_+ = \max\{x, 0\}$, and $\tau_b$, $\tau_e$ are pre-defined constants. The above function measures the absolute differences between the predicted and the manually annotated vowel onsets and offsets.
Since the manual annotations are not exact, we allow a mistake of τ_b frames at the vowel onset and τ_e frames at the offset.\n\nWe trained the system using the orbit update rule; the structured perceptron update rule; structured ramp loss; probit loss with the expectation approximated by a mean of 100 random samples; and direct loss minimization with ε = −1.52. All of these hyper-parameters were chosen on the validation set. The results are presented in Table 4 for different values of τ_b and τ_e in the cost function. It can be seen that the orbit loss is close to direct loss minimization (differences of a frame or two on average) and is better than the other approaches. Also note that, as described earlier, the efficiency of the orbit loss is similar to the structured perceptron update and better than the other approaches.\n\n## 6 Discussion and Future Work\n\nWe introduced a new surrogate loss function that offers an efficient and effective learning rule. We gave a qualitative theoretical analysis presenting a PAC-Bayesian generalization bound and a consistency theorem. Despite the fact that the consistency property concerns the training performance when the number of training examples is large, the proposed loss function was shown to perform well on several tasks, even when the training set was of small or medium size.\n\nIn terms of theoretical properties, we think that the theoretical analysis can be improved; in particular, we would like to have a better upper bound on the probit loss in terms of the orbit loss, as expressed in Lemma 2, which depends on the minimal distance between the predicted label and its closest neighbor label. 
In any case, it is clear that when the norm of the weight vector becomes large relative to the norm of the noise, inference with the weight vector and inference with the perturbed weight vector both lead to the same predicted label with high probability.\n\nThis work is part of our research on surrogate loss functions in the structured prediction setting. We believe that in order to understand what good loss functions are, we have to understand the interrelationships between them. While we showed some relations between the orbit loss, the perceptron and the probit loss, we still think that more work should be done. We are especially interested in understanding the connection between the orbit, the probit, and the direct loss minimization approach.\n\n## References\n\n• Brugnara et al. (1993) Brugnara, F., Falavigna, D., and Omologo, M. (1993). Automatic segmentation and labeling of speech based on hidden Markov models. Speech Communication, 12:357–370.\n• Collins (2002) Collins, M. (2002). Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In Conference on Empirical Methods in Natural Language Processing.\n• Crammer and Singer (2001) Crammer, K. and Singer, Y. (2001). On the algorithmic implementation of multiclass kernel-based vector machines. Journal of Machine Learning Research, 2:265–292.\n• Do et al. (2008) Do, C., Le, Q., Teo, C.-H., Chapelle, O., and Smola, A. (2008). Tighter bounds for structured estimation. In Advances in Neural Information Processing Systems (NIPS) 22.\n• Heller and Goldrick (2014) Heller, J. R. and Goldrick, M. (2014). Grammatical constraints on phonological encoding in speech production. Psychonomic Bulletin & Review, 21(6):1576–1582.\n• Hosom (2009) Hosom, J.-P. (2009). Speaker-independent phoneme alignment using transition-dependent states. Speech Communication, 51:352–368.\n• Joachims et al. (2005) Joachims, T., Tsochantaridis, I., Hofmann, T., and Altun, Y. (2005). 
Large margin methods for structured and interdependent output variables. Journal of Machine Learning Research, 6:1453–1484.\n• Keshet et al. (2011) Keshet, J., McAllester, D., and Hazan, T. (2011). PAC-Bayesian approach for minimization of phoneme error rate. In International Conference on Acoustics, Speech, and Signal Processing (ICASSP).\n• Keshet et al. (2007) Keshet, J., Shalev-Shwartz, S., Singer, Y., and Chazan, D. (2007). A large margin algorithm for speech and audio segmentation. IEEE Transactions on Audio, Speech, and Language Processing, 15(8):2373–2382.\n• Lafferty et al. (2001) Lafferty, J., McCallum, A., and Pereira, F. (2001). Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning (ICML), pages 282–289.\n• McAllester (2003) McAllester, D. (2003). Simplified PAC-Bayesian margin bounds. In Proceedings of the Sixteenth Annual Conference on Computational Learning Theory.\n• McAllester (2006) McAllester, D. (2006). Generalization bounds and consistency for structured labeling. In Schölkopf, B., Smola, A. J., Taskar, B., and Vishwanathan, S., editors, Predicting Structured Data, pages 247–262. MIT Press.\n• McAllester et al. (2010) McAllester, D., Hazan, T., and Keshet, J. (2010). Direct loss minimization for structured prediction. In Advances in Neural Information Processing Systems (NIPS) 24.\n• McAllester (1998) McAllester, D. A. (1998). Some PAC-Bayesian theorems. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory.\n• Ranjbar et al. (2013) Ranjbar, M., Lan, T., Wang, Y., Robinovitch, S. N., Li, Z.-N., and Mori, G. (2013). Optimizing nondecomposable loss functions in structured prediction. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(4):911–924.\n• Shalev-Shwartz et al. (2011) Shalev-Shwartz, S., Singer, Y., Srebro, N., and Cotter, A. (2011). 
Pegasos: Primal estimated sub-gradient solver for SVM. Mathematical Programming, 127(1):3–30.\n• Smith and Eisner (2006) Smith, D. A. and Eisner, J. (2006). Minimum risk annealing for training log-linear models. In Proc. of the COLING/ACL, pages 787–794.\n• Taskar et al. (2003) Taskar, B., Guestrin, C., and Koller, D. (2003). Max-margin Markov networks. In Advances in Neural Information Processing Systems (NIPS) 17.\n• Zhang et al. (2014) Zhang, K., Fujian, P., Su, J., and Zhou, C. (2014). Regularized structured perceptron: A case study on Chinese word segmentation, POS tagging and parsing. In The 14th Conference of the European Chapter of the Association for Computational Linguistics (EACL), page 164.
https://justcode.ikeepstudying.com/2015/07/swift%E4%B8%AD%E6%96%87%E6%95%99%E7%A8%8B%EF%BC%88%E5%8D%81%E4%BA%8C%EF%BC%89-%E4%B8%8B%E6%A0%87/
# Swift Tutorial (Part 12): Subscripts\n\n1. Subscript syntax\n\n```swift\nsubscript(index: Int) -> Int {\n    get {\n        // return an appropriate subscript value here\n    }\n    set(newValue) {\n        // perform a suitable setting action here\n    }\n}\n```\n\nThe type of newValue is the same as the return type of the subscript. As with computed properties, you can choose not to specify the setter's parameter: when you omit it, a default parameter named newValue is provided to the setter automatically.\n\nAs with read-only computed properties, a read-only subscript can drop the get keyword:\n\n```swift\nsubscript(index: Int) -> Int {\n    // return an appropriate subscript value here\n}\n```\n\nFor example:\n\n```swift\nstruct TimesTable {\n    let multiplier: Int\n    subscript(index: Int) -> Int {\n        return multiplier * index\n    }\n}\nlet threeTimesTable = TimesTable(multiplier: 3)\nprintln("six times three is \\(threeTimesTable[6])")\n// prints "six times three is 18"\n```\n\n2. Using subscripts\n\n```swift\nvar numberOfLegs = ["spider": 8, "ant": 6, "cat": 4]\nnumberOfLegs["bird"] = 2\n```\n\nSwift's dictionary key-value subscript takes and returns an optional type. For the numberOfLegs dictionary above, the value returned is Int?, an optional Int. The dictionary uses an optional subscript type because not every key has a corresponding value. You can also delete a key by assigning it nil.\n\n3. Subscript options\n\nSubscripts may take more than one parameter, as in this Matrix example:\n\n```swift\nstruct Matrix {\n    let rows: Int, columns: Int\n    var grid: [Double]\n    init(rows: Int, columns: Int) {\n        self.rows = rows\n        self.columns = columns\n        grid = Array(count: rows * columns, repeatedValue: 0.0)\n    }\n    func indexIsValidForRow(row: Int, column: Int) -> Bool {\n        return row >= 0 && row < rows && column >= 0 && column < columns\n    }\n    subscript(row: Int, column: Int) -> Double {\n        get {\n            assert(indexIsValidForRow(row, column: column), "Index out of range")\n            return grid[(row * columns) + column]\n        }\n        set {\n            assert(indexIsValidForRow(row, column: column), "Index out of range")\n            grid[(row * columns) + column] = newValue\n        }\n    }\n}\n```\n\nCreate a matrix and assign through the subscript:\n\n```swift\nvar matrix = Matrix(rows: 2, columns: 2)\nmatrix[0, 1] = 1.5\nmatrix[1, 0] = 3.2\n```\n\nThe bounds check in indexIsValidForRow:\n\n```swift\nfunc indexIsValidForRow(row: Int, column: Int) -> Bool {\n    return row >= 0 && row < rows && column >= 0 && column < columns\n}\n```\n\nmeans that any out-of-range access triggers the assertion:\n\n```swift\nlet someValue = matrix[2, 2]\n// this triggers an assert, because [2, 2] is outside of the matrix bounds\n```
https://datasig.ac.uk/2021-02-03-ayush-bharti
# Ayush Bharti\n\n### Abstract\n\nRadio channel modeling aims at replicating the behavior of the environment in which a radio signal propagates. Stochastic models of the radio channel are necessary simulation tools for designing and testing communication systems. These stochastic models simulate time-series data that is driven by an underlying point process whose points are not observable, thus rendering the likelihood function intractable. Estimating the parameters of the underlying point process is therefore a challenging task. We attempt to tackle this problem using approximate Bayesian computation (ABC), which is a likelihood-free inference framework. ABC relies on comparing summary statistics of the simulated data and the observed data in some distance metric. Parameter values that yield simulated data "close" to the observed data form a sample from the approximate posterior distribution. We make use of the maximum mean discrepancy, which is a notion of distance between probability distributions, as the distance metric in ABC. The proposed method is able to accurately estimate the parameters of stochastic channel models in simulation as well as when applied to real measurements.
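As a rough illustration of the ABC idea described in the abstract (not the author's code: the simulator here is a toy Gaussian model and the distance a simple difference of means, whereas the talk uses a stochastic channel model and the maximum mean discrepancy):

```python
import random
import statistics

def simulate(theta, n=200, rng=random):
    # toy stand-in for the channel simulator: data depends on parameter theta
    return [rng.gauss(theta, 1.0) for _ in range(n)]

def distance(a, b):
    # summary-statistic distance; the talk uses the maximum mean discrepancy
    return abs(statistics.mean(a) - statistics.mean(b))

def abc_rejection(observed, prior_draw, n_draws=2000, eps=0.1, seed=0):
    """Keep parameter draws whose simulated data lies 'close' to the observed
    data; the accepted draws form a sample from the approximate posterior."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_draw(rng)
        if distance(simulate(theta, rng=rng), observed) < eps:
            accepted.append(theta)
    return accepted
```

With observed data simulated at θ = 2 and a uniform prior on [0, 5], the accepted draws concentrate around 2, exactly the "keep parameters whose simulations look like the data" mechanism the abstract describes.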
https://taudevin-engineering.co.uk/interests/generating-and-counting-regular-perfect-number-squares/
## GENERATING AND COUNTING REGULAR PERFECT NUMBER SQUARES (An extension to the Siam system)\n\nA number square is an array of numbers in which the rows, columns and diagonals all add up to the same total. Here are three examples (and there is a small selection of other types of square in the last section).\n\nIn each of the above examples the numbers used are sequential, starting from one up to the number of cells in each square (e.g. 1 to 25 in a five by five). Just using these numbers there are hundreds of ways to place the numbers in the square, and most of these will produce all sorts of random totals for the rows and columns. In the illustrations all rows and columns sum to a specific number. So one might be tempted to ask how on earth we can easily choose where to start and how to continue in order to produce perfect squares, and how many there will be. And how do we decide what makes a perfect square? This latter question is the easiest to answer! The three by three is not a perfect square: the main diagonal sums to 15 (8+5+2=15), but the broken diagonals do not (1+7+4=12, and 6+3+9=18). On the other hand, the broken diagonals of the four by four do sum to 34: 5+9+12+8=34, and 4+6+13+11=34. The four by four is looking pretty perfect. Yet it is still better! The corners sum to 34: 10+15+1+8=34. In fact this square is so perfect that there are numerous groups of four numbers all totalling 34.\n\nThe five by five is also a perfect square: the broken diagonals (6+25+14+3+17=65, etc.), the corners and centre (23+15+1+17+9=65), and other groups of five (10+1+24+17+13=65) all give 65!\n\nSo now to the more difficult question of where to start in laying out the numbers. The four by four is particularly interesting because there is an elegant pattern to the array. It also appears to be the only pattern, but there are ways to produce different squares from the same pattern. If we look at the field of numbers below, the four by four square is outlined. 
The numbers in red indicate the pattern used to create the square.\n\nThe numbers coloured red form a diamond pattern, and the numbers coloured green form a similar diamond downwards, but remember to make the horizontal numbers total 9, e.g. 4+5=9; 3+6=9; 7+2=9; and 1+8=9, otherwise it will not work! You will also see that the numbers 9, 10, 11, 12 form the same diamond upwards to the left, and the numbers 13, 14, 15, 16 downwards to the left, making the sum 25 all the way (viz. 12+13=25; 14+11=25; 15+10=25; 16+9=25). The 25 added to the 9 makes the required total of 34. It is, however, only necessary to remember the first diamond and that the horizontal pairs sum to nine. This is because, once you have laid the numbers one to eight and guessed where to put the nine, the rest can be placed by logic, as illustrated below. Once the question mark is calculated there is only one number to calculate alongside, and so on until you need to use a diagonal. The best guess for the nine is in the columns that sum to ten, the more central square being the better choice.\n\nThe first set uses a slight variation of the numbers, viz. the two and the three are reversed.\nThe second set shows that you can use different sequences of numbers.\nYou are now in a position to find out which of the positions for the nine produce perfect or just interesting squares.\nYou will need to remember that the diagonals need to add up to 34 to complete the squares.\nIn some cases it is only a broken diagonal that totals 34. These I call imperfect squares, because only one or two of the diagonals total 34.\nIn fact there are only three different arrangements of numbers producing perfect squares. The internet says 48, but that is because each number field generates sixteen squares, which to me are all the same. 
You may well think you have found more if you experiment with different number sequences in the diamond, but if you look at the numbers surrounding the 'one', you will see that very different-looking squares are actually the same as one of the three, but with the numbers rotated or reflected or both. The three are illustrated below, with a non-different variant!\n\nTHE PATTERN FOR A FIVE BY FIVE.\n\nAs the size of square increases, the difficulty of finding the right combination of numbers for a perfect square also increases. But the Chinese have the answer, as always. They observed that the numbers in perfect squares were often related by the movement two squares up and one square right. And having observed this relationship they used it in the game of chess as the knight's move. It was probably observed in the three by three square first of all. As you can see (page 1), one to two, two to three, seven to eight, and eight to nine are all related by the knight's move. And, amazingly enough, four to five and five to six!!!! But to see this latter relationship properly, you need to put two more three by three squares alongside the first square, one to the left and one to the right. This placing of identical squares alongside each other is crucial to the use of the knight's-move pattern for the generation of perfect squares. The number field produced by this assemblage of squares is shown above for the four by four square. The nine shown boxed (on the number field on page one) is in an identical square and in the same relative position.\nTo construct a perfect five by five square, you simply place the numbers into the grid using the knight's move, two up and one to the right. Start in the coloured square and try not to be confused by the other numbers milling around there. One to two, two to three, and then, to position the four, we need to utilise the concept of adjacent squares, as illustrated in the field above. 
This means that, in the square imagined below the main square, the three is positioned, as shown, in the top row of that square. Three to four, four to five, and we have arrived back at the one. To create a number field we just repeat the sequence one to five, as illustrated in the appropriate places in the field.\nIt is evident that there are twenty choices for placing the six in the 5 by 5 square once the first five numbers are laid (twenty-five minus the five numbers just inserted). We cannot continue the sequence, since the six falls where the one is! (The one is, as we started, in the bottom left corner of the square, and this square is repeated above and to the right of the centre square.) To produce a perfect square the six may be placed below the five, as illustrated. The number chain then continues until the eleven falls on the six; but do not panic, just remember that you placed the six below the five, so place the eleven below the ten. And continue using the knight's move until the sixteen falls on top of the eleven. Then just place the sixteen below the fifteen, and continue on until the twenty-one falls on top of the sixteen, then place the twenty-one below the twenty. The resulting square, barring accidents, is perfect. It is illustrated in the field above, and in its correct position in\nthat field. Completing the unfinished centre square by moving numbers to their correct positions is left to you!\n\nCOUNTING HOW MANY PERFECT SQUARES ARE POSSIBLE\n\nSince there are twenty positions for placing the six, it is evident that finding perfect squares is still a bit of a problem. It will be noted that once the position of the six is chosen (as below the five, for example), then that position is chosen each time we reach a multiple of five. It has been found by trial and error that this is the simplest way.\nLet us first see what happens if we choose a different position for the six. This time we will start with the one in the middle. 
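The construction just described (start with 1 in the bottom-left corner; knight's move two up and one to the right, wrapping round; when the next cell is already occupied, drop into the cell directly below the number just placed) can be sketched and checked in a few lines of Python; the function names are ours:

```python
def knight_magic(n=5):
    """Lay 1..n*n with the knight's move (two up, one right, wrapping);
    when baulked, place the next number directly below the last one."""
    grid = [[0] * n for _ in range(n)]
    r, c = n - 1, 0                        # the 1 goes in the bottom-left corner
    for k in range(1, n * n + 1):
        grid[r][c] = k
        nr, nc = (r - 2) % n, (c + 1) % n  # knight's move, wrapping round
        if grid[nr][nc]:                   # baulked: drop below instead
            nr, nc = (r + 1) % n, c
        r, c = nr, nc
    return grid

def is_pan_magic(grid):
    """Rows, columns and every (broken) diagonal must sum to the constant."""
    n = len(grid)
    m = n * (n * n + 1) // 2
    return (all(sum(row) == m for row in grid)
        and all(sum(grid[i][j] for i in range(n)) == m for j in range(n))
        and all(sum(grid[i][(i + k) % n] for i in range(n)) == m for k in range(n))
        and all(sum(grid[i][(k - i) % n] for i in range(n)) == m for k in range(n)))
```

For n = 5 this reproduces a perfect square with the constant 65, including all the broken diagonals.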
And put the six alongside and to the left of the five; the eleven alongside and to the left of the ten, etc. On completion we find that the main diagonals total 20+23+1+9+12=65 and 21+11+1+16+6=55: not a very pretty arrangement! However, the broken diagonal 8+23+13+3+18=65 gives the total we are looking for; this means that if we make twenty-three the centre of the square, as illustrated, we have a fairly pretty arrangement, but not a perfect square. I call these interesting or imperfect squares.\n\nYou might like to try putting the six alongside the five and to the right, not left. This produces a disaster of a square!!\n\nThere is a simple pattern for positioning the six in order to arrive at perfect squares, and the pattern is best seen if we start, once again, with the number one, but this time in the centre of the square:\n\nIf the six is placed on a square labelled 'N', NO SQUARE IS PRODUCED.\nIf the six is placed on a square labelled 'I', INTERESTING SQUARES CAN BE SELECTED (by selecting the appropriate number to be the centre of the square, as illustrated by the number fields above).\nIf the six is placed on a square labelled 'P', PERFECT SQUARES ARE GENERATED.\nThe eleven, sixteen, and twenty-one, when baulked by existing numbers, may similarly only be placed in squares marked 'P' if perfect squares are to result.\n\nSo far we have only placed the baulked numbers in the same relative position at each change: e.g. the baulked numbers are always placed under the five, ten, fifteen and twenty, or always placed alongside, or always placed at the same fixed position from these 'break' numbers. This is a simple (but not general) rule for making perfect squares and works particularly well as the size of square rises above a five by five. However, with a five by five square, the numbers after the break numbers may be placed in any of the squares marked as P. 
This means there are four positions for the six, three positions for the eleven, and two for the sixteen; there must therefore be 4×3×2=24 perfect squares using this knight's move.\nAre there any more? Well, you may have guessed: as in the four by four square, we can change the first five numbers around, e.g. 1, 2, 3, 5, 4. There is a limiting factor, however, that stops us making thousands of different squares. If we look back to the first example of a five by five and, starting from the one, move two along to the right and one down, we see that the number sequence we have laid is not 1, 2, 3, 4, 5 but 1, 3, 5, 2, 4. Similarly, as we move around, we see the sequences 1, 5, 4, 3, 2 and 1, 4, 2, 5, 3. We have thus laid four number sequences in each of our twenty-four squares. This reduces the number possible considerably!! By my calculations there are only 144 different perfect squares (you could say 144 times 25, i.e. 3,600, but they are not different number fields). Since the one is fixed, there are only four numbers to be re-arranged, giving 4×3×2×1=24 orderings; as each square realises four of them, that leaves just six distinct sequences to be used in each of the twenty-four squares: 144 in total. Your confirmation is invited. E-mail: squares@taudevin-engineering.co.uk\nGaspalou (see last pages) seems to have found irregular squares, and surely some irregular five by fives could exist?\nAll five by five perfect squares appear to be most perfect squares; see definition below, page 11.\n\nWHAT IS THE TOTAL REQUIRED IN A LARGER SQUARE?\n\nIf we consider the three by three, we have used the numbers one to nine; if we add these up and divide by three (there are three rows and three columns), we obtain the total:\n1+2+3+4+5+6+7+8+9=45; 45/3 = 15. Now adding up 25 numbers for a five by five is a bit laborious. There is a quick way. In a three by three, nine times ten = 90, which when divided by two gives 45!! The total we were looking for. 
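The shortcut just used generalises to any size of square: multiply the number of cells by one extra, halve, and divide by the side. In code (the function name is ours):

```python
def magic_constant(n):
    # n*n cells sum to n^2(n^2+1)/2; shared over n rows this is n(n^2+1)/2
    return n * (n * n + 1) // 2
```

It gives 15, 34, 65 and 175 for sides 3, 4, 5 and 7, matching the totals quoted in this article.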
In a four by four, sixteen times seventeen is a large number, but as we are about to divide by two we need only multiply seventeen by eight, which is easier and makes 136; dividing by four (for the four rows and columns) gives us the 34 we need. So in general, we just multiply the number of cells in the square by one extra, divide by two, and then divide by the side of the square.\nSo for a five by five we have 25×26/(2×5), which may be simplified (with or without a calculator) to 5×13 = 65. For a seven by seven, we have 49×50/(2×7) = 7×25 = 175.\n\nHOW DO WE GET THE SQUARE TO GIVE A DIFFERENT TOTAL?\n\nThis is easy for some numbers, but not for others. For a five by five, if we require a total of 70 then we simply add one to every number in the square, which means we start at 2 and finish at 26. In general, if we require a different total, say 72, then deduct 65 from the 72 = 7; divide by five = 1 remainder 2. Add one to every number, and an extra two (the remainder) to a selected five. The square will stay pretty if you use a group of five as laid initially; see below, where the figures in red have had two added to them.\n\nTo make an even higher total, say 96, deduct 65 from the 96 = 31; divide this by five = 6 remainder 1. Thus you add six to every number and one extra to a selected five numbers.\n\nGENERATING SEVEN BY SEVEN SQUARES\n\nThe same system is used as for a five by five square. The pattern for positioning the eight is shown below. Remember to use the same move for fourteen to fifteen as you did for seven to eight, and continue with the same movement at each of the multiples of seven.\n\nHOW MANY SEVEN BY SEVEN PERFECT SQUARES CAN BE FOUND?\nThis is a bit of a problem. With the five by fives it was simple, but there are some differences in the seven by seven. Applying the same reasoning, it would appear that for each of the eighteen positions that produce perfect squares, it would be possible to use any of them in any order. 
This appears not to be the case. Once you have chosen a yellow-coloured square for the continuation, it seems that you must use the remaining yellows for the other continuations; mixing yellows and other colours (red or green) does not produce perfect squares. With this reasoning the eighteen potential squares appear to reduce to three!! If this is the case, then my conjecture is that there are three choices for the eight, using the patterns as illustrated at P1, P2, or P3. The numbers one to seven can be placed in 720 sequences (e.g. 1, 3, 2, 4, 5, 6, 7). Each square is produced by two sequences: for example 1, 2, 3, 4, 5, 6, 7 produces the same square as 1, 7, 6, 5, 4, 3, 2, as can be demonstrated by writing them out; the one is a rotation of the other about a diagonal. The numbers 1, 8, 15, 22, 29, 36, 43 can be placed in 720 sequences in their coloured squares only; these all appear to produce different squares. There are three patterns of coloured squares available, as above (P1, P2, and P3); there are therefore 360×720×3 = 777,600 perfect squares.\n\nGaspalou (see later pages) seems to have found perfect squares which do not follow the regular pattern. How do you count them?\nI would be delighted to hear from anyone who has a better method for counting the squares, or a more logical way of doing it from the generating pattern, or any other method for generating perfect squares: squares@taudevin-engineering.co.uk.\nLooking again at the diagram above, it would appear that the extended knight's move and the double extended knight's move do not produce original squares. Each 'a' square is covered by the shown positioning of 1 to 7, producing mirror images. Each 'b' position produces the same square as the 'a' position, but with a different number sequence; e.g. the number sequence for the square shown is 1, 4, 7, 3, 6, 2, 5. On the right is shown the sequence 1, 2, 3, 4, 5, 6, 7, giving the sequence shown: 1, 6, 4, 2, 7, 5, 3. Evens then odds!!! Thus the extended knight's move does not produce more squares! 
Similarly for the double extended knight's move: it just gives a different number sequence.\nIt also appears that the P3 position produces 'most perfect squares' as defined by Ollerenshaw and Brée (see page 10), which I now like to call exceptionally perfect squares, provided the pattern of seven is as on a playing card, i.e. with the row of three vertical, viz. the stars in the above square, page 7.\n\nOTHER SIZES OF SQUARE\n\nIt would be a very perceptive question to ask what happened to the six by six. It would be equally alarming to mention an eight by eight, or a nine by nine.\nThe simple method outlined above works when the side of the square is a prime number. It sort of works for odd numbers, but usually, if the odd number is not prime, then perfect squares do not fall out. 49 by 49 is an exception: it produces perfect squares, whereas 25 by 25 does not!\nA six by six perfect square has not yet been achieved as far as I can ascertain. There are examples of eight by eight squares, but I have not discerned an easily recognised pattern in any of these. The knight's move is in evidence, but not consistently. Some of these 'problem' squares are illustrated later.\n\nTHE ELEVEN BY ELEVEN SQUARE\n\nThere are TWO possible knight moves: the normal, starting the first run of numbers at 'A', and the single extended, starting at 'H'. Illustrated in the grid below are two groups of letters, A, B, C, D and E, F, G, H. Inspection shows that using A, B, C, or D for the numbers produces some sort of symmetrical image, so just one needs to be used.\nSimilarly for E, F, G, and H.\nEach of these moves generates a series of perfect squares, just like the seven by seven, using any of a number of continuation positions for the 12, 23, 34, etc. There are eleven of these continuation positions, coloured red and dark blue. However, only seven are available with any one knight move, since three are repeats ('A', 'C', and 'G') and one is used by the first lay. 
Based on the seven by seven we instantly arrive at the number of possible squares as:-
Base moves = 2
Continuation positions = 7
Continuation sequences = 1,814,400
Base number sequences also = 10*9*8*7*6*5*4*3 = 1,814,400
A grand total of regular squares = 1,814,400*1,814,400*7*2 = 46,088,663,040,000

THIRTEEN BY THIRTEEN

Starts @ A, E, F
Follow on @ B, C, D, E, F, G, H, J, K
There are three groups of letters: A, B, C, D; E, H, J, K; and F, G.
There are therefore three choices for start.
There are nine choices for follow on.
There are 3,113,510,400 number sequences.
Therefore there are 1,548,737,096,417,280,000 possible squares.

THE NINE BY NINE SQUARE

This, as mentioned, is a problem child. Because of its symmetries perfect squares are hard to find and are produced by breaking the symmetry in special ways. Here is an example of a Hendricks square, culled from the internet; all rows, columns and diagonals sum to 369. It is generated using the number sequence 1,2,5,6,4,7,8,9,3.

The continuations are not as for the five by five and seven by seven, since the 46 and 64 are placed on normally forbidden squares, probably to break the symmetry. The Margossian family of these squares is also on the net!
There are lots of number sequences which generate perfect squares, but the standard sequence 1,2,3,4,5,6,7,8,9 does not appear to work. It is possible to generate additional squares from the same number sequence simply by alternating the positions of the follow on numbers. The coloured squares show the sequence 1, 19, 73, 64, 55, 28, 46, 37, 10.
The sequences formed by selecting each alternate number also appear to work whenever I have tried them, e.g. 1, 73, 55, 46, 10, 19, 64, 28, 37. There would thus appear to be six in each family for one sequence, and since the selection of alternating numbers appears to work for the basic sequence also, this makes 36 in each family!
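The counts quoted for the seven by seven and eleven by eleven can be checked arithmetically. The sketch below (mine, not the author's) only reproduces the multiplications; it does not prove that every generated square is distinct and perfect.

```python
from math import prod

# 7 by 7: 720 orderings of 1..7, halved because each square arises from two
# sequences, times 720 coloured-square sequences, times 3 patterns (P1-P3).
seven_by_seven = (720 // 2) * 720 * 3
assert seven_by_seven == 777_600

# 11 by 11: base number sequences 10*9*8*7*6*5*4*3, one factor each for the
# base lay and the continuation lay, times 7 continuation positions, times
# 2 knight moves.
seqs = prod(range(3, 11))        # 10*9*8*7*6*5*4*3
assert seqs == 1_814_400
eleven_by_eleven = seqs * seqs * 7 * 2
assert eleven_by_eleven == 46_088_663_040_000
```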
I have so far culled 15 sequences from the net; these are appended at the end.
In line with the other squares the patterns are as shown below:-
A most perfect square is defined as a pan diagonal square where any group of 'n' symmetrically placed numbers (n = side of square) also sums to the constant. In a four by four and five by five it is a 'domino' pattern. The seven by seven is illustrated on page 7. The nine by nine is a nice solid block of threes. Other patterns are a bit complicated! Some of the perfect nine by nine squares are most perfect, despite reports to the contrary.

POSITIONS FOR SECOND NUMBER: there are six, labelled a, b, c, d, e, f. Repeated letters indicate that the position just generates an already counted number sequence and not a new square. Follow on, or break point, numbers are shown for an example; they differ for some number sequences!

Since there are six positions to choose from there would appear to be 36 times 6 possible squares, and, as shown below, since there are also six positions for the break numbers this must make 36 times 6 times 6: 1,296 squares. With the 15 sequences collected this makes 19,440; but since each square contains two sequences this reduces to 9,720. I have no idea how working sequences are found, so ideas would be most welcomed.

Positions for the break point numbers: there are six, a, c, d, e, f, g.

A SELECTION OF 'INTERESTING' SQUARES

As indicated above, number squares have been known for thousands of years. In the days when adding up was a bit of a problem for most people, number squares had an aura of mystery. The four by four first illustrated above is called a Jalna square, as it was written up over the gateway into that Indian city (Dana Mackenzie says on a temple in Khajuraho). The first square illustrated below is known as Dürer's square, as it appeared in his painting 'Melancholia'; it has the year of the painting, 1514, in the bottom row.
The second square is attributed to Jupiter, in 'A New View over Atlantis' by J. Michell. The third square is attributed to Mars, in the same book, and the fourth to the Sun. Apparently he is just quoting a work by Cornelius Agrippa (1486-1535), as rather more elegantly explained in the Geocities website's collection of 'strange magic squares' (now unfortunately elsewhere!). I thought the sun version was first published by Fermat in 1640; maybe he just knew of Cornelius Agrippa.

The first square below is one of Benjamin Franklin's. The second is from the Boys Own Book of Conjuring, 1870.

MORE INTERESTING (PERFECT) SQUARES

Examples of nine by nine squares with number sequences after Margossian and Hendricks; some with extended 'knight' move.

Single knight move, sequence: 1,2,3,9,7,8,5,6,4.

Extended knight move, Margossian sequence: 1,2,5,6,4,7,8,9,3, and continuations.

Extended knight move, sequence: 1,2,3,9,7,8,5,6,4. Margossian continuations.

Pattern and sequence after Hendricks. Sequence: 1,9,5,6,2,7,8,4,3.

A Margossian square, again very perfect.

Another particularly perfect square. Extended knight's move, sequence 1,2,5,6,4,7,8,9,3.

Two squares from the internet after Gaspalou, not fitting the regular pattern as far as I can discern at present.

Number sequences for nine by nine perfect squares.

Seventeen by seventeen:-

The following symmetries exist:-
A, B, C, D.
E, H, J, M.
F, G, K, L.
N, T.
There are thus only four starts: A, E, F, & N.
Continuations at B, C, D, E, F, G, H, J, K, L, M, N.
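For readers who want to experiment, a knight's-move construction of the kind described above can be sketched and verified in a few lines of Python. The step coefficients below are my own illustrative choice, not necessarily the layout in the figures; the checks confirm the result is pandiagonal (all rows, columns, and broken diagonals sum to the constant).

```python
# Build a 7x7 square by stepping through the grid with two knight-like
# offsets; cell (i, j) gets a "base" digit and a "coloured" digit.
n = 7
M = [[n * ((i + 2 * j) % n) + ((i + 3 * j) % n) + 1 for j in range(n)]
     for i in range(n)]

target = n * (n * n + 1) // 2            # magic constant: 175 for n = 7
# Every number 1..49 appears exactly once.
assert sorted(v for row in M for v in row) == list(range(1, n * n + 1))
for i in range(n):
    assert sum(M[i]) == target                        # rows
    assert sum(M[j][i] for j in range(n)) == target   # columns
for k in range(n):                                    # all broken diagonals
    assert sum(M[i][(i + k) % n] for i in range(n)) == target
    assert sum(M[i][(k - i) % n] for i in range(n)) == target
```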
https://mathsgee.com/36993/which-number-s-is-are-equal-to-its-their-square
Which number(s) is (are) equal to its (their) square?

If $x$ is the number to find, its square is $x^{2}$.
$x$ is equal to its square, hence: $x=x^{2}$.
Solve the above equation by factoring. First write it with the right side equal to zero.
\begin{aligned} &x-x^{2}=0 \\ &x(1-x)=0 \end{aligned}
Solutions: $x=0$ and $x=1$.
1) $x=0$: its square is $0^{2}=0$. Hence $x$ and its square are equal.
2) $x=1$: its square is $1^{2}=1$. Hence $x$ and its square are equal.
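The two solutions can also be confirmed with a quick brute-force check (an illustrative sketch, not part of the original answer):

```python
# Every integer in a wide range satisfying x == x**2; only 0 and 1 qualify.
solutions = [x for x in range(-1000, 1001) if x == x ** 2]
assert solutions == [0, 1]
```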
https://ro.scribd.com/document/345725318/capacitors-in-series-and-parallel-combinations-electronics-post
# 12/12/2016 Capacitors In Series and Parallel Combinations - Electronics Post

Capacitors In Series and Parallel Combinations
Sasmita, October 12, 2015

Capacitors are one of the standard components in electronic and electrical circuits. However, complicated combinations of capacitors often occur in practical circuits.

It is, therefore, useful to have a set of rules for finding the equivalent capacitance of some general arrangements of capacitors.

The equivalent capacitance of any complicated arrangement can be determined by the repeated application of two simple rules, which relate to capacitors connected in series and in parallel.

Capacitors in Series

Capacitors are said to be connected in series when they are effectively daisy-chained together in a single line.

Consider two capacitors connected in series, i.e. in a line such that the positive plate of one is attached to the negative plate of the other, as shown in the figure above.

In fact, let us suppose that the positive plate of capacitor 1 is connected to the input wire, the negative plate of capacitor 1 is connected to the positive plate of capacitor 2, and the negative plate of capacitor 2 is connected to the output wire.

Now the question arises: what is the equivalent capacitance between the input and output wires?

In this connection, it is important to realize that the charge Q stored in the two capacitors is the same.

This can be explained as follows:

Let us consider the internal plates, i.e. the negative plate of capacitor 1 and the positive plate of capacitor 2.

These plates are physically disconnected from the rest of the circuit, so the total charge on them must remain constant.

Assuming that these plates carry zero charge when zero potential difference is applied across the two capacitors, it follows that in the presence of a non-zero potential difference the charge +Q on the positive plate of capacitor 2 must be balanced by an equal and opposite charge -Q on the negative plate of capacitor 1.

Since the negative plate of capacitor 1 carries a charge -Q, the positive plate of capacitor 2 carries a charge +Q to balance it.

The potential drops, V1 and V2, across the two capacitors are, in general, different. However, the sum of these drops equals the total potential drop V applied across the input and output wires:

V = V1 + V2 = Q/C1 + Q/C2

Hence, the equivalent capacitance CT = Q/V satisfies

1/CT = 1/C1 + 1/C2

The reciprocal of the equivalent capacitance of two capacitors connected in series is the sum of the reciprocals of the individual capacitances.

For capacitors connected in series, the equivalent capacitance equation can be generalized to:

1/CT = 1/C1 + 1/C2 + ... + 1/Cn

Example
Find the overall capacitance and the individual rms voltage drops across two capacitors, each of 47 nF, in series when connected to a 12 V a.c. supply.

Solution:

Total capacitance: CT = (C1 × C2)/(C1 + C2) = (47 × 47)/(47 + 47) nF = 23.5 nF

Voltage drop across each of the two identical 47 nF capacitors: V = 12 V / 2 = 6 V

Capacitors in Parallel

Capacitors are said to be connected in parallel when both of their terminals are respectively connected to each terminal of the other capacitor or capacitors.

Consider two capacitors connected in parallel: i.e., with the positively charged plates connected to a common input wire, and the negatively charged plates attached to a common output wire, as shown in the figure above.

What is the equivalent capacitance between the input and output wires?

In this case, the potential difference V across the two capacitors is the same, and is equal to the potential difference between the input and output wires.

However, the total stored charge Q is divided between the two capacitors, since it must distribute itself such that the voltage across them is the same.

Since the capacitors may have different capacitances, C1 and C2, the charges Q1 = C1 V and Q2 = C2 V may also be different, where Q = Q1 + Q2 is the total stored charge.

It follows that the equivalent capacitance is:

CT = Q/V = (Q1 + Q2)/V = C1 + C2

The equivalent capacitance of two capacitors connected in parallel is the sum of the individual capacitances.

For capacitors connected in parallel, the equation for equivalent capacitance can be generalized to:

CT = C1 + C2 + ... + Cn

Example
Calculate the combined capacitance of the following capacitors, each with a capacitance of 47 nF, when they are connected together in a parallel combination.

Solution:

Total capacitance: CT = the sum of the individual 47 nF capacitances.
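The two rules and the series worked example can be captured in a short Python sketch. The function names are mine, and the parallel check assumes two 47 nF capacitors purely for illustration (the number of capacitors in the article's parallel example was given in a figure).

```python
def c_series(caps):
    # 1/CT = 1/C1 + 1/C2 + ... for capacitors in series
    return 1 / sum(1 / c for c in caps)

def c_parallel(caps):
    # CT = C1 + C2 + ... for capacitors in parallel
    return sum(caps)

# Series example from the text: two 47 nF capacitors across a 12 V supply.
ct = c_series([47e-9, 47e-9])
assert abs(ct - 23.5e-9) < 1e-12

# The same charge sits on both capacitors, so identical capacitors split
# the supply voltage equally: 6 V each.
q = ct * 12.0
assert abs(q / 47e-9 - 6.0) < 1e-9

# Parallel: capacitances simply add (two 47 nF give 94 nF).
assert abs(c_parallel([47e-9, 47e-9]) - 94e-9) < 1e-12
```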
https://brilliant.org/discussions/thread/nmtc-problem-3a/
# NMTC Problem 3a

The Fibonacci Sequence is defined by $F_0 = 1$, $F_1 = 1$ and $F_n=F_{n-1}+F_{n-2}$. Prove that $7{F_{n+2}^3}-{F_n^3}-{F_{n+1}^3}$ is divisible by $F_{n+3}$.

This is a part of my set NMTC 2nd Level (Junior) held in 2014.

Note by Siddharth G
5 years, 11 months ago
Sort by:

Let us take $F_{n}=x$ and $F_{n+1}=y \Rightarrow F_{n+2}=x+y \Rightarrow F_{n+3}=x+2y$. Now, let us expand the given expression $7F_{n+2}^{3}-F_{n}^{3}-F_{n+1}^{3}$ in terms of $x$ and $y$. We get $7(x+y)^{3}-x^{3}-y^{3}$. Simplifying that, we get $6x^{3}+6y^{3}+21xy(x+y)$. Taking $(x+y)$ common (using $x^{3}+y^{3}=(x+y)(x^{2}-xy+y^{2})$) we get $(x+y)(6x^{2}-6xy+6y^{2}+21xy)=(x+y)(6x^{2}+15xy+6y^{2})=(x+y)(x+2y)(6x+3y)$. But $(x+2y)=F_{n+3}$. Hence proved :):)

- 5 years, 11 months ago

Absolutely right. This is what I tried, but the $X,Y$ substitution, Mindblowing :D

- 5 years, 11 months ago

thanx!!

- 5 years, 11 months ago

Perfect! I didn't go for x,y and faced problems in factorizing. PS: Small Typo at the end.

- 5 years, 11 months ago

thanx!!

- 5 years, 11 months ago

yes thanku!!

- 5 years, 11 months ago

The substitution certainly helped make it easier to manipulate.

Staff - 5 years, 11 months ago

Yes, however, can we do this with induction?

- 5 years, 11 months ago

Possibly, but I don't see why that would work, nor do I see a way to start.

There is very little here to motivate a solution by induction. Knowing divisibility by $F_n$ doesn't tell you anything about divisibility by $F_{n+1}$.

Staff - 5 years, 11 months ago

How did you do?

- 5 years, 11 months ago

Not well, 1b 5a, b 6b and half of the 8th were good. I left 1a half done. How was your paper?

- 5 years, 11 months ago

Sigh, at least you had one full question to your credit.
I got the first one fully, messed up the second one (after getting half of it), the third one wasn't salubrious (I tried both, got to almost the answers), the 4th I didn't do, 5a I didn't know, 5b I got it, 6 - I wrote Yes and No alternatingly :P, the 7th I almost got it but lost it due to a calculation error (You won't believe I put 40 cubed = 16000 :( ). The 8th I am not sure of its accuracy...

Now, you clearly know I sucked more than you did. :(

- 5 years, 11 months ago

Same problem with 3a. BTW are you sure that 6a is 'Yes'?

- 5 years, 11 months ago

Oh no no, I didn't understand it properly but gave some stupid explanation and put Yes. WBU? How did you do that question?

- 5 years, 11 months ago

6a. For the transformation, we needed $\Delta B=+2$ and $\Delta O=0$. However, with the allowed changes, these conditions were contradicting each other.

- 5 years, 11 months ago

Well, as I said... I wasn't even considering that question right :P. How was NTSE? (Range of marks..?)

- 5 years, 11 months ago

Terrible! Expecting 114/140; expected cutoff ~123

- 5 years, 11 months ago

Oh, I am sorry for bringing that up. How did you predict the cutoff so soon? (Institute?)... And, could you tell me the number of seats for general cat. in Delhi?

- 5 years, 11 months ago

Not really, mostly from the marks of competitive peers. The no. of seats was 60 last year; however, I think they are going to increase it. FIITJEE predicts it to be near 119 due to this.

- 5 years, 11 months ago

Oh I see. Be optimistic though :)

- 5 years, 11 months ago

Nah, just moving on. Are you giving RMO?

- 5 years, 11 months ago

Yes I am. But NMTC has almost, you know, shattered the vital mathematical force in me :3

- 5 years, 11 months ago

Good. xD. It ought to be that good, else you wouldn't qualify in TN. Expecting 130+, let's see (the results).

- 5 years, 11 months ago
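The factorisation argument in the accepted solution can also be sanity-checked numerically, a quick sketch rather than a substitute for the proof:

```python
# With F0 = F1 = 1, verify that F(n+3) divides
# 7*F(n+2)**3 - F(n)**3 - F(n+1)**3 for the first forty values of n.
F = [1, 1]
while len(F) < 43:
    F.append(F[-1] + F[-2])

for n in range(40):
    value = 7 * F[n + 2] ** 3 - F[n] ** 3 - F[n + 1] ** 3
    assert value % F[n + 3] == 0
```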
https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=RVBS_FLES
# LES

Performs a less than or equal to comparison on elements of two dynamic arrays.

## Synopsis

```LES(dynarray1,dynarray2)
```

### Arguments

dynarray1, dynarray2: Expressions that resolve to dynamic arrays.

## Description

The LES function compares each corresponding numeric element from two dynamic arrays and determines if the first value is less than or equal to the second value. It returns a dynamic array of boolean values in which each element comparison is represented. It returns a 1 if the dynarray1 element value is less than or equal to the dynarray2 element value. It returns a 0 if the dynarray1 element value is greater than the dynarray2 element value.

LES removes signs and leading and trailing zeros from element values before making the comparison. If an element is missing, or has a null string or a non-numeric value, LES assigns it a value of 0 for the purpose of this comparison.

If the two dynamic arrays have different numbers of elements, the returned dynamic array has the number of elements of the longer dynamic array. By default, the shorter dynamic array is padded with 0 value elements for the purpose of comparison. You can also use the REUSE function to define behavior when specifying two dynamic arrays with different numbers of elements.

For two elements to be compared, they must be on the same dynamic array level. For example, you cannot compare a value mark (@VM) dynamic array element to a subvalue mark (@SM) dynamic array element.

## Examples

The following example uses the LES function to return a less than or equal to comparison for each of the elements in dynamic arrays a and b:

```a=10:@VM:-22:@VM:-33:@VM:45
b=10:@VM:-23:@VM:0:@VM:44
PRINT LES(a,b)
! returns 1ý0ý1ý0```
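The behaviour described can be mimicked in a short Python sketch. This is a hypothetical helper, not InterSystems code: it follows the signed comparison implied by the worked example, pads the shorter array with 0, and treats missing or non-numeric elements as 0.

```python
def _num(v):
    # Null-string or non-numeric elements count as 0 for the comparison.
    try:
        return float(v)
    except (TypeError, ValueError):
        return 0.0

def les(dynarray1, dynarray2):
    n = max(len(dynarray1), len(dynarray2))
    a = list(dynarray1) + [0] * (n - len(dynarray1))
    b = list(dynarray2) + [0] * (n - len(dynarray2))
    return [1 if _num(x) <= _num(y) else 0 for x, y in zip(a, b)]

# Mirrors the documentation example: LES(a,b) returns 1, 0, 1, 0.
assert les([10, -22, -33, 45], [10, -23, 0, 44]) == [1, 0, 1, 0]
```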
https://www.r-bloggers.com/the-fanplot-package-for-r/
# The fanplot package for R

August 13, 2012

My fanplot package has gone up on CRAN. Here is an online version of the vignette.

### Introduction

The fanplot package contains a collection of R (R Development Core Team, 2012) functions to effectively display plots of sequential distributions such as probabilistic forecasts. The plotting of distributions is based around two functions. The first, pn, calculates the percentiles for a set of sequential distributions over a specified time period. The second, fan, plots the calculated percentiles of the sequential distributions. The resulting plot is a set of coloured polygons, with shadings corresponding to the percentile values.

This document illustrates these two core functions using MCMC simulation results from fitted stochastic volatility models. These MCMC simulations can be recreated from the data and BUGS model also contained in the fanplot package, via the R2OpenBUGS package of Sturtz et al. (2005). These are first shown for a dataframe type object without time series attributes, followed by plots based on a ts object.

### Volatility Plots

To illustrate the basics of the fanplot package consider the svpdx data contained in the tsbugs (Abel, 2013) package. This contains information on the log return of the Pound-Dollar exchange rate from 2nd October 1981 to 28th June 1985. For a view of the first part of the data, use the head function, after loading the tsbugs package.

> library("tsbugs")
> head(svpdx)
date pdx
1 1981-10-02 -0.3555316
2 1981-10-05 1.4254090
3 1981-10-06 -0.4439399
4 1981-10-07 1.0256500
5 1981-10-08 1.6775790
6 1981-10-09 0.3690041

This data is a dataframe object, and it is difficult to express as a standard ts object due to the irregular nature of the data.
It is best to plot it initially without an x-axis, which can then be added later using the axis function:

> #plot
> plot(svpdx$pdx, type = "l", xaxt = "n", xlab = "Time", ylab = "Return")
> #x-axis
> svpdx$rdate <- format(svpdx$date, format = "%b %Y")
> mth <- unique(svpdx$rdate)
> qtr <- mth[seq(1, length(mth), 3)]
> axis(1, at = match(qtr, svpdx$rdate), labels = qtr, cex.axis = 0.55)
> axis(1, at = match(mth, svpdx$rdate), labels = FALSE, tcl = -0.2)

If the xaxt argument were not used in the plot function and the labels not added later, the x-axis would be based on the row number of each observation in the svpdx data, i.e. an index sequence from 1 to 945.

To produce the x-axis with date information in the above code, a new column is added to svpdx for the month-year combination of each observation. Objects mth and qtr are then created to mark each month and quarter in the data series respectively. Major axis ticks are then plotted on the unseen 1 to 945 index for the beginning of every quarter with a corresponding label, whilst minor axis ticks are also plotted for the beginning of every month.

Meyer and Yu (2002) used the above Pound-Dollar exchange rate data to fit various stochastic volatility models in WinBUGS (Lunn et al., 2000). One such stochastic model they fitted to the data is contained in my1.txt in the model directory of the fanplot package.
The model of Meyer and Yu (2002) can be refitted in BUGS via R, using the R2OpenBUGS package (Sturtz et al., 2005).\n\n> library(\"R2OpenBUGS\")\n> # write model file:\n> my1.bug <- dget(system.file(\"model\", \"my1.txt\", package = \"fanplot\"))\n> write.model(my1.bug, \"my1.txt\")\n> # take a look:\n> file.show(\"my1.txt\")\n> # run openbugs\n> my1<-bugs(data=list(n=length(svpdx$pdx),y=svpdx$pdx),\ninits=list(list(phistar=0.975,mu=0,itau2=50)),\nparam=c(\"mu\",\"phi\",\"tau\",\"theta\"),\nmodel=\"my1.txt\",\nn.iter = 11000, n.burnin = 1000, n.chains = 1)\n\n\nHere, the same initial parameter values as Meyer and Yu (2002) are set. One chain of the MCMC simulation is run for 11000 iterations, with the first 1000 used for burn in. The resulting bugs object contains the MCMC simulation results for the parameters in the stochastic volatility model, including the time dependent volatility parameters ($\\theta_t$). This set of sequential posterior distributions is of interest when studying the variation in the data over time.\n\nThe fanplot package can effectively display the entire posterior distribution of $\\theta_t$. 
The MCMC simulations of the volatility $\\theta_t$ can be obtained from my1 and stored in a new th.mcmc object,\n\n> th.mcmc <- my1$sims.list$theta\n\n\nA plot of the entire posterior distribution of $\\theta_t$ first requires a calculation of the percentiles over all $t$ using the pn function,\n\n> library(\"fanplot\")\n> th.pn <- pn(sims = th.mcmc)\n\n\nThis produces a pn type object, where rows represent the time index and columns the percentiles calculated.\n\n> head(th.pn[,c(1:3, 97:99)])\n1% 2% 3% 97% 98% 99%\n[1,] -1.05400 -0.999006 -0.9811 -0.2069 -0.1615000 -0.091476\n[2,] -1.00907 -0.947200 -0.9127 -0.2230 -0.1829000 -0.116800\n[3,] -1.01600 -0.945000 -0.9030 -0.1869 -0.1776000 -0.151200\n[4,] -1.02800 -0.932900 -0.8911 -0.1714 -0.1466000 -0.082590\n[5,] -1.00600 -0.917500 -0.8740 -0.1367 -0.0973464 -0.047270\n[6,] -1.05101 -0.935700 -0.8998 -0.1297 -0.0543700 0.029130\n\n\nEvery percentile between the 1st and 99th is calculated by default; however, more or fewer percentiles can be calculated via the p argument in the pn function. The number of percentiles calculated has a direct impact on the plotting of the sequential distributions, as we shall see later. Additional arguments to control the indexing of the rows, which are of use when the time series is at regular intervals, will also be discussed later.\n\nIn order to plot the th.pn percentile object the plot area must first be set. This can be done using the type = \"n\" argument in plot. Both the xlim and ylim arguments need to be set appropriately. In the code below, the xlim argument is set between 1 and 945, the length of th.pn. The ylim argument is set to the range of th.pn to enable all percentiles calculated to be included in the plot area.\n\nOnce the plot area is set up, the fan function can be used to add the th.pn object. 
Each percentile in the plot is represented by a different shade of the default colour scale in the fan.col argument of fan. In addition, contour lines are drawn on every decile, with labels for these deciles added to the right hand side.\n\n> #empty plot\n> plot(NULL, type = \"n\", xlim = c(1, 945), ylim = range(th.pn), ylab = \"Theta\")\n> fan(th.pn)\n\nThe fan.txt function can add more labels to a set of sequential distributions plotted using fan. When used in addition to the fan function, labels for closely spaced deciles can be controlled to allow a more spacious display of labels in comparison to those shown in the above plot. It can also be used to add text labels to percentiles that are not of a unit of 10, such as the 1st or 99th. The code below demonstrates these features. First an empty plot area is created with the x-axis changed to dates using the same method as in the plot of the original data. Second, the percentiles of the sequential distribution are plotted for $\\theta_t$ with no text labels (setting txt = NA) and contour lines for the 1st and 99th percentiles alongside some selected deciles. The ln argument in the fan function is set to include contour lines at percentiles where future text labels are to be added.\n\n> #empty plot with x-axis added later\n> plot(NULL, type = \"l\", xlim = c(1, 945), xlab = \"Time\", xaxt = \"n\", ylim = range(th.pn), ylab = \"Theta\")\n> axis(1, at = match(qtr, svpdx$rdate), labels = qtr, cex.axis = 0.55)\n> axis(1, at = match(mth, svpdx$rdate), labels = FALSE, tcl = -0.2)\n> fan(th.pn, txt = NA, ln = c(1,10,30,50,70,90,99))\n> #add text labels for percentiles\n> fan.txt(th.pn, pn.r = c(1,10,50,90,99))\n> fan.txt(th.pn, pn.l = c(1,30,70,99))\n\nThe colour of the percentiles in the sequential distributions can be easily altered from the default heat.colors scheme. A new set of graded colours can be passed to the fan function using the fan.col argument. 
The number of colours should be half the number of percentiles (columns) in the pn object. New graded colour schemes can be constructed in a number of ways. For example, using colorRampPalette, a new shading from blue to white, via grey, can be created.\n\n> pal <- colorRampPalette(c(\"royalblue\", \"grey\", \"white\"))\n\n\nUsing this palette, 50 colours (approximately half the number of percentiles calculated in pn) can be defined:\n\n> fancol <- pal(50)\n\n\nFor a change, this new colour scheme is used to plot the posterior distribution of the standard deviation over time, $\\sigma_t$, which is derived as such:\n\n> sigma.pn <- pn(sims = sqrt(exp(th.mcmc)))\n\n\nThe new colour scheme can then be passed to the fan function for sigma.pn, with contour lines on selected percentiles:\n\n> #empty plot with x-axis added later\n> plot(NULL, type = \"l\", xlim = c(1, 945), xlab = \"Time\", xaxt = \"n\", ylim = range(sigma.pn), ylab = \"Standard Deviation\")\n> axis(1, at = match(qtr, svpdx$rdate), labels = qtr, cex.axis = 0.55)\n> axis(1, at = match(mth, svpdx$rdate), labels = FALSE, tcl = -0.2)\n> fan(sigma.pn, fan.col = fancol, ln = c(1, 10, 50, 90, 99))\n\n### Model Fits\n\nTo illustrate plots in the fanplot package that use time series objects (ts) based on regularly spaced data, a stochastic volatility model is fitted to the change of the population growth rate of England and Wales, similar to that in Abel et al. (2010).\n\nPopulation data from 1841 to 2007 from the Human Mortality Database (2012) are included in the fanplot package (ew). 
The growth rate on which the stochastic volatility model is based can be derived following Rogers (1995) as such\n\n> r <- ts(ew[2:167]/ew[1:166]-1, start=1841)\n\n\nMean stationarity can be obtained by differencing the series\n\n> y <- diff(r)\n\n\nUsing this differenced growth rate time series, we can build a BUGS stochastic volatility model using the tsbugs package.\n\n> pop.bug <- sv.bugs(y, k=25, sim=TRUE,\nsv.mean.prior2 = \"dgamma(0.000001,0.000001)\",\nsv.ar.prior2 = \"dunif(-0.999, 0.999)\")\n\n\nThe sv.bugs function specifies that 25 future values be forecast and that simulations of the model be taken. Specifications for alternative prior distributions for the parameters in the volatility process are also stated. This BUGS model can be run using R2OpenBUGS,\n\n> library(\"R2OpenBUGS\")\n> # write model file:\n> writeLines(pop.bug$bug, \"pop.txt\")\n> # take a look:\n> file.show(\"pop.txt\")\n> # run openbugs\n> pop <- bugs(data = pop.bug$data,\ninits = list(list(psi0.star=exp(12), psi1=0.5, itau2=0.5)),\nparam = c(\"psi0\", \"psi1\", \"tau\", \"y.new\", \"y.sim\"),\nmodel = \"pop.txt\",\nn.iter = 11000, n.burnin = 1000, n.chains = 1)\n\n\nAs was the case with the exchange rate data, one chain of the MCMC simulation is run for 11000 iterations, with the first 1000 used for burn in. The resulting bugs object contains the MCMC simulation results for the parameters in the stochastic volatility model, including the time dependent model fits ($E(y_t)$), forecasts of the future population growth rate ($\\hat{r}_{t+h|t}$) and population ($\\hat{p}_{t+h|t}$).\n\nThe fanplot package can efficiently display these sequential distributions in a number of ways. 
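The two transforms above, a growth rate followed by a first difference, are easy to sanity-check outside R. A short Python sketch with a toy population series (illustrative numbers, not the ew data):

```python
# Toy population series standing in for the England & Wales counts (ew).
ew = [100.0, 102.0, 104.04, 105.0, 107.1]

# Growth rate: r[t] = ew[t+1] / ew[t] - 1   (R: ew[2:167]/ew[1:166] - 1)
r = [ew[t + 1] / ew[t] - 1 for t in range(len(ew) - 1)]

# Mean stationarity via first differences: y = diff(r)
y = [r[t + 1] - r[t] for t in range(len(r) - 1)]

print([round(v, 4) for v in r])  # [0.02, 0.02, 0.0092, 0.02]
```

Each transform shortens the series by one observation, which is why the pn object later runs from 1843 rather than 1841.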
Consider the MCMC simulation of the model fit, which can be extracted from pop using,\n\n> y.mcmc <- pop$sims.list$y.sim\n\n\nA plot of the entire posterior distribution of the model fits requires the calculation of percentiles over all $t$ using the pn function.\n\n> y0 <- tsp(y)\n> y.pn <- pn(sims = y.mcmc, start = y0)\n\n\nHere, the corresponding start date is also given so that the pn object takes the relevant time series properties when plotted. This saves the effort required earlier, for the irregularly spaced exchange rate data, of setting up date labels on the x-axis. The time series properties are stored in the tsp attributes of the new y.pn object.\n\n> str(y.pn)\npn [1:165, 1:99] -0.00357 -0.00367 -0.00405 -0.00467 -0.00588 ...\n- attr(*, \"dimnames\")=List of 2\n..$ : NULL\n..$ : chr [1:99] \"1%\" \"2%\" \"3%\" \"4%\" ...\n- attr(*, \"tsp\")= num [1:3] 1843 2007 1\n\n\nThese attributes can be directly used to set up the x-axis limits in the plot area, alongside y-axis limits based on the difference in the growth rate, which was the basis for the stochastic variance model. The fan function can then be used to add the sequential posterior distributions with contour lines at the 1st, 10th, 90th and 99th percentiles using the ln argument. Note, the fan function will only draw and label the contour lines given in the ln argument. If the user wishes to subdue this and add text labels later, they can do so by adding txt = NA, as was demonstrated earlier. The original data can also be plotted on top of the posterior distributions using the lines function:\n\n> #empty plot\n> plot(NULL, type = \"l\", xlim = range(time(y.pn)), xlab = \"Time\", ylim = range(y), ylab = \"Expected Model Fit\")\n> fan(y.pn, ln = c(1, 10, 90, 99))\n> lines(diff(r), lwd = 2)\n\nA coarser set of colours can be plotted by creating a pn object with fewer percentiles. 
This is done by defining the p argument of the pn function with only the percentiles for which colour changes are desired.\n\n> y.pn2 <- pn(sims = y.mcmc, p = c(1, 20, 40, 60, 80, 99), start = y0)\n\n\nNote, that elements of p will ultimately be adjusted to be symmetric around 50. For example, if a user set p = c(1, 40, 80), a pn object identical to the one above would be returned. This allows the user, if desired, to only define percentiles either above or below 50.\n\nThe new y.pn2 object can be plotted using the default arguments for contour lines and text labels. Lines are never drawn for percentiles not calculated in the pn object. As a result, in the plotting of y.pn2 there are no contour lines on the 10th, 30th, 50th, 70th and 90th deciles.\n\n> #empty plot\n> plot(NULL, type = \"l\", xlim = range(time(y.pn)), xlab = \"Time\", ylim = range(diff(r)), ylab = \"Expected Model Fit\")\n> fan(y.pn2)\n> lines(diff(r), lwd = 2)\n\n### Forecast Fans\n\nTo illustrate the plotting of forecast fans we use the MCMC predictive distributions in the pop object. These are used to derive posterior predictive distributions of the population growth rate and population total using the diffinv function,\n\n> ynew.mcmc <- pop$sims.list$y.new\n> rnew.mcmc <- apply(ynew.mcmc, 1, diffinv, xi = tail(r,1))\n> rnew.mcmc <- t(rnew.mcmc[-1,])\n>\n> pnew.mcmc <- apply(1+rnew.mcmc, 1, cumprod) * tail(ew,1)\n> pnew.mcmc <- t(pnew.mcmc)\n\n\nPercentiles for rnew.mcmc can be derived as\n\n> r0 <- tsp(r)\n> rnew.pn <- pn(sims = rnew.mcmc, start = r0 + 1)\n\n\nNote, that as rnew.mcmc is a simulation of forecasts, the start argument is set to the year after the last observation of the population growth rate. Forecast fans are plotted after estimating the percentile objects, much in the same way as they were for the volatility and model fit in the previous sections. 
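The forecast post-processing above, diffinv to undo the differencing and a cumulative product of (1 + r) to compound growth into population, can be sketched with toy numbers. Everything below is illustrative; the real inputs come from the bugs object:

```python
# One hypothetical MCMC draw of forecast differences, plus toy last values.
y_new = [0.001, -0.002, 0.0015]  # forecast of diff(r)
r_last = 0.005                   # tail(r, 1): last observed growth rate
p_last = 50.0                    # tail(ew, 1): last population, in millions

# diffinv with xi = r_last: cumulative sums anchored at the last observed
# rate; the anchor itself is dropped, as in rnew.mcmc[-1, ].
r_new, acc = [], r_last
for d in y_new:
    acc += d
    r_new.append(acc)

# Population forecast: cumprod of (1 + r) scaled by the last observed count.
p_new, pop = [], p_last
for g in r_new:
    pop *= 1 + g
    p_new.append(pop)

print([round(v, 4) for v in r_new])  # [0.006, 0.004, 0.0055]
```

Repeating this over every MCMC draw gives the matrices whose column-wise percentiles form the forecast fan.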
However, some extra care must be taken in setting up the xlim to allow space for the fan to appear in the right hand side of the plotting area. For the population growth rate this is demonstrated alongside a small section of the underlying simulation data (the first 30 MCMC samples of the predictive distribution) used to calculate the percentiles.\n\n> par(mfrow = c(1 ,2))\n> #sample of underlying simulation data\n> plot(r, ylim = range(r), xlim = c(1940, 2040), lwd = 2, ylab = \"Population Growth Rate\")\n> for (i in 1:30) lines(ts(rnew.mcmc[i, ], r0 + 1), col = \"grey\")\n> #plot r\n> plot(r, ylim = range(r), xlim = c(1940, 2040), lwd = 2, ylab = \"Population Growth Rate\")\n> fan(rnew.pn)\n\nThe anchor argument in pn can be utilised to bridge the gap between the predictive distribution fan and the final data point. The associated starting point for creating a time series type object for plotting must also be adjusted to account for the anchoring. For pnew.mcmc the two alternative sets of percentile calculations, with or without anchoring, can be derived as such:\n\n> p0 <- tsp(ew)\n> pnew.pn <- pn(sims = pnew.mcmc/1e+06, start = p0 + 1)\n> pnew.pn2 <- pn(sims = pnew.mcmc/1e+06, p = c(1, 10, 40, 50), anchor = tail(ew,1)/1e+06, start = p0)\n\n\nFor pnew.pn2 only a few percentiles are calculated, which will provide a coarser set of shades in the plotting of the predictive distribution. In addition, the anchor is set to the last observed population count, hence the start point is now on the last observation p0, not at p0 + 1. In both calculations simulations are divided by one million to provide easier interpretation.\n\nBoth pn objects can be plotted side by side using the fan function. 
Contour line colours can be altered directly using the ln.col argument for the display of pnew.pn2 on the right hand side below, in comparison to the default plot on the left hand side.\n\n> par(mfrow = c(1 ,2))\n> #plot ew\n> plot(ew/1e+06, ylim = c(40, 80), xlim = c(1940, 2040), lwd = 2, ylab = \"Population (m)\")\n> fan(pnew.pn)\n> #plot ew\n> plot(ew/1e+06, ylim = c(40, 80), xlim = c(1940, 2040), lwd = 2, ylab = \"Population (m)\")\n> fan(pnew.pn2, ln.col = \"black\")\n\n### References\n\nAbel, G. J. (2013). tsbugs: Create time series BUGS models. Retrieved 26 February 2013, from http://cran.r-project.org/web/packages/tsbugs.\nAbel, G. J., J. Bijak, and J. Raymer (2010). A comparison of official population projections with Bayesian time series forecasts for England and Wales. Population Trends, 95–114.\nHuman Mortality Database (2012). Available at http://www.mortality.org. University of California, Berkeley (USA) and Max Planck Institute for Demographic Research (Germany).\nLunn, D. J., A. Thomas, N. Best, and D. Spiegelhalter (2000, October). WinBUGS – A Bayesian modelling framework: Concepts, structure, and extensibility. Statistics and Computing 10 (4), 325–337.\nMeyer, R. and J. Yu (2000). BUGS for a Bayesian analysis of stochastic volatility models. Econometrics Journal 3 (2).\nR Development Core Team (2012). R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing.\nRogers, A. (1995). Multiregional Demography: Principles, Methods and Extensions (1 ed.). John Wiley & Sons.\nSturtz, S., U. Ligges, and A. Gelman (2005). R2WinBUGS: a package for running WinBUGS from R. Journal of Statistical Software 12 (3)." ]
https://www.colorhexa.com/013f04
[ "# #013f04 Color Information\n\nIn an RGB color space, hex #013f04 is composed of 0.4% red, 24.7% green and 1.6% blue. Whereas in a CMYK color space, it is composed of 98.4% cyan, 0% magenta, 93.7% yellow and 75.3% black. It has a hue angle of 122.9 degrees, a saturation of 96.9% and a lightness of 12.5%. #013f04 color hex could be obtained by blending #027e08 with #000000. Closest websafe color is: #003300.\n\n• R 0\n• G 25\n• B 2\nRGB color chart\n• C 98\n• M 0\n• Y 94\n• K 75\nCMYK color chart\n\n#013f04 color description : Very dark lime green.\n\n# #013f04 Color Conversion\n\nThe hexadecimal color #013f04 has RGB values of R:1, G:63, B:4 and CMYK values of C:0.98, M:0, Y:0.94, K:0.75. Its decimal value is 81668.\n\nHex triplet: 013f04 `#013f04`\nRGB: 1, 63, 4 `rgb(1,63,4)`\nRGB percent: 0.4, 24.7, 1.6 `rgb(0.4%,24.7%,1.6%)`\nCMYK: 98, 0, 94, 75\nHSL: 122.9°, 96.9, 12.5 `hsl(122.9,96.9%,12.5%)`\nHSV: 122.9°, 98.4, 24.7\nWeb safe: 003300 `#003300`\nCIE-LAB: 22.196, -31.071, 28.135\nXYZ: 1.812, 3.570, 0.708\nxyY: 0.297, 0.586, 3.570\nCIE-LCH: 22.196, 41.916, 137.838\nCIE-LUV: 22.196, -20.709, 26.134\nHunter-Lab: 18.895, -15.949, 11.003\nBinary: 00000001, 00111111, 00000100\n\n# Color Schemes with #013f04\n\n• #013f04\n``#013f04` `rgb(1,63,4)``\n• #3f013c\n``#3f013c` `rgb(63,1,60)``\nComplementary Color\n• #1d3f01\n``#1d3f01` `rgb(29,63,1)``\n• #013f04\n``#013f04` `rgb(1,63,4)``\n• #013f23\n``#013f23` `rgb(1,63,35)``\nAnalogous Color\n• #3f011d\n``#3f011d` `rgb(63,1,29)``\n• #013f04\n``#013f04` `rgb(1,63,4)``\n• #23013f\n``#23013f` `rgb(35,1,63)``\nSplit Complementary Color\n• #3f0401\n``#3f0401` `rgb(63,4,1)``\n• #013f04\n``#013f04` `rgb(1,63,4)``\n• #04013f\n``#04013f` `rgb(4,1,63)``\nTriadic Color\n• #3c3f01\n``#3c3f01` `rgb(60,63,1)``\n• #013f04\n``#013f04` `rgb(1,63,4)``\n• #04013f\n``#04013f` `rgb(4,1,63)``\n• #3f013c\n``#3f013c` `rgb(63,1,60)``\nTetradic Color\n• #000000\n``#000000` `rgb(0,0,0)``\n• #000d01\n``#000d01` `rgb(0,13,1)``\n• #012602\n``#012602` `rgb(1,38,2)``\n• #013f04\n``#013f04` `rgb(1,63,4)``\n• #015806\n``#015806` `rgb(1,88,6)``\n• #027107\n``#027107` 
`rgb(2,113,7)``\n• #028a09\n``#028a09` `rgb(2,138,9)``\nMonochromatic Color\n\n# Alternatives to #013f04\n\nBelow, you can see some colors close to #013f04. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #0e3f01\n``#0e3f01` `rgb(14,63,1)``\n• #083f01\n``#083f01` `rgb(8,63,1)``\n• #033f01\n``#033f01` `rgb(3,63,1)``\n• #013f04\n``#013f04` `rgb(1,63,4)``\n• #013f09\n``#013f09` `rgb(1,63,9)``\n• #013f0e\n``#013f0e` `rgb(1,63,14)``\n• #013f14\n``#013f14` `rgb(1,63,20)``\nSimilar Colors\n\n# #013f04 Preview\n\nThis text has a font color of #013f04.\n\n``<span style=\"color:#013f04;\">Text here</span>``\n#013f04 background color\n\nThis paragraph has a background color of #013f04.\n\n``<p style=\"background-color:#013f04;\">Content here</p>``\n#013f04 border color\n\nThis element has a border color of #013f04.\n\n``<div style=\"border:1px solid #013f04;\">Content here</div>``\nCSS codes\n``.text {color:#013f04;}``\n``.background {background-color:#013f04;}``\n``.border {border:1px solid #013f04;}``\n\n# Shades and Tints of #013f04\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
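The shade and tint mixing described above is just a linear blend of RGB channels. A minimal Python helper (not part of this page) reproduces the claim near the top that a 50/50 blend of #027e08 with #000000 gives #013f04:

```python
def mix(hex1, hex2, w=0.5):
    """Blend two '#rrggbb' colours; w is the weight given to hex1."""
    c1 = [int(hex1[i:i + 2], 16) for i in (1, 3, 5)]
    c2 = [int(hex2[i:i + 2], 16) for i in (1, 3, 5)]
    return "#" + "".join(f"{round(w * a + (1 - w) * b):02x}"
                         for a, b in zip(c1, c2))

print(mix("#027e08", "#000000"))        # #013f04 (a 50% shade)
print(mix("#013f04", "#ffffff", 0.75))  # a 25% tint toward white
```

Mixing toward #000000 produces the shade column and mixing toward #ffffff the tint column shown below.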
In this example, #000500 is the darkest color, while #f1fff1 is the lightest one.\n\n• #000500\n``#000500` `rgb(0,5,0)``\n• #001802\n``#001802` `rgb(0,24,2)``\n• #012c03\n``#012c03` `rgb(1,44,3)``\n• #013f04\n``#013f04` `rgb(1,63,4)``\n• #015205\n``#015205` `rgb(1,82,5)``\n• #026606\n``#026606` `rgb(2,102,6)``\n• #027908\n``#027908` `rgb(2,121,8)``\n• #028c09\n``#028c09` `rgb(2,140,9)``\n• #03a00a\n``#03a00a` `rgb(3,160,10)``\n• #03b30b\n``#03b30b` `rgb(3,179,11)``\n• #03c60d\n``#03c60d` `rgb(3,198,13)``\n• #03d90e\n``#03d90e` `rgb(3,217,14)``\n• #04ed0f\n``#04ed0f` `rgb(4,237,15)``\n• #09fb15\n``#09fb15` `rgb(9,251,21)``\n• #1cfb27\n``#1cfb27` `rgb(28,251,39)``\n• #30fc3a\n``#30fc3a` `rgb(48,252,58)``\n• #43fc4c\n``#43fc4c` `rgb(67,252,76)``\n• #56fc5e\n``#56fc5e` `rgb(86,252,94)``\n• #6afd71\n``#6afd71` `rgb(106,253,113)``\n• #7dfd83\n``#7dfd83` `rgb(125,253,131)``\n• #90fd95\n``#90fd95` `rgb(144,253,149)``\n• #a4fea8\n``#a4fea8` `rgb(164,254,168)``\n• #b7feba\n``#b7feba` `rgb(183,254,186)``\n• #cafecd\n``#cafecd` `rgb(202,254,205)``\n• #ddfedf\n``#ddfedf` `rgb(221,254,223)``\n• #f1fff1\n``#f1fff1` `rgb(241,255,241)``\nTint Color Variation\n\n# Tones of #013f04\n\nA tone is produced by adding gray to any pure hue. In this case, #1f211f is the least saturated color, while #013f04 is the most saturated one.\n\n• #1f211f\n``#1f211f` `rgb(31,33,31)``\n• #1c241c\n``#1c241c` `rgb(28,36,28)``\n• #1a261a\n``#1a261a` `rgb(26,38,26)``\n• #172918\n``#172918` `rgb(23,41,24)``\n• #152b16\n``#152b16` `rgb(21,43,22)``\n• #122e14\n``#122e14` `rgb(18,46,20)``\n• #103011\n``#103011` `rgb(16,48,17)``\n• #0d330f\n``#0d330f` `rgb(13,51,15)``\n• #0b350d\n``#0b350d` `rgb(11,53,13)``\n• #08380b\n``#08380b` `rgb(8,56,11)``\n• #063a08\n``#063a08` `rgb(6,58,8)``\n• #033d06\n``#033d06` `rgb(3,61,6)``\n• #013f04\n``#013f04` `rgb(1,63,4)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #013f04 is perceived by people affected by a color vision deficiency. 
This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population" ]
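The conversion figures quoted on this page can be reproduced with a few lines of standard-library Python; the rounded results match the RGB, CMYK and HSL values above:

```python
import colorsys

hexcode = "013f04"
r, g, b = (int(hexcode[i:i + 2], 16) for i in (0, 2, 4))  # (1, 63, 4)

# CMYK from RGB: black from the brightest channel, the rest relative to it.
mx = max(r, g, b)
k = 1 - mx / 255
c, m, y = (mx - r) / mx, (mx - g) / mx, (mx - b) / mx

# HSL via the standard library (note colorsys returns h, l, s in that order).
h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)

print(round(c, 3), round(m, 3), round(y, 3), round(k, 3))  # 0.984 0.0 0.937 0.753
print(round(h * 360, 1), round(s, 3), round(l, 3))         # 122.9 0.969 0.125
```

Multiplying the CMYK fractions by 100 gives the 98.4% cyan, 0% magenta, 93.7% yellow and 75.3% black figures quoted at the top of the page.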
https://stats.stackexchange.com/questions/136389/derivation-of-cumulative-binomial-distribution-expression/136393
[ "# Derivation of cumulative Binomial Distribution expression\n\nI have a binomial distribution, with Random Variable Y and n trials. r is an integer. How can I show that P(Y ≥ r) = P(Y ≤ n − r)? I think it involves using the binomial theorem. I have tried expressing P(Y ≥ r) as $$\\sum_{x = 0}^{n} \\binom{n}{x} p^x(1-p)^{n-x} - \\sum_{x=0}^{r-1} \\binom{n}{x} p^x(1-p)^{n-x}$$ and also: $$\\sum_{x=r}^{n} \\binom{n}{x} p^x(1-p)^{n-x} = \\sum_{x=0}^{n-r} \\binom{n}{x+r} p^{x+r}(1-p)^{n-(x+r)}$$ but I am unsure of how to proceed next. Can someone give me a hint?\n\n• +1 for the clear question and asking for a hint. If this is for homework or for self-study, you should add the self-study tag to your question (see stats.stackexchange.com/tags/self-study/info). – Patrick Coulombe Feb 5 '15 at 3:31\n• You need to change some $(p-1)$s to $(1-p)$. With questions like these it is often worth substituting some values in to try to understand why the equation works - if you tried a $p\\neq0.5$ you'd see that actually it doesn't! – Silverfish Feb 5 '15 at 8:48\n\nI think you'll find it difficult to prove, because it is not true unless $p = 0.5$.\n\nConsider the simple case of $n = 1$, $r = 1$. Then:\n\n$P(Y \\geq r) = P(Y = 1) = p$\n\n$P(Y \\leq n - r) = P(Y = 0) = 1 - p$.\n\nStarting with the LHS of your second equation, if you substitute $j = n - x$ and use ${n \\choose j} = {n \\choose {n-j}}$, you should be able to verify that\n\n$P(Y \\geq r) = \\sum_{j=0}^{n-r} {n \\choose j} (1-p)^j p^{n-j}$,\n\nwhich is $P(X \\leq n-r)$ if $X$ is a binomial random variable with success probability $(1-p)$.\n\nThis makes sense intuitively if you think of $Y$ as the number of successes and $X$ as the number of failures: if there are at least $r$ successes, there must be no more than $n-r$ failures.\n\n• I am still unclear on how you managed to change the upper and lower bounds of the sum? 
– roro172 Feb 5 '15 at 20:59\n• Counting upwards from $x=r$ to $x=n$ is equivalent to counting downwards from $j=n-r$ to $j=0$ ($j$ being the 'distance' between $x$ and $n$). You can just substitute $j = n - x$ into the limits of the sum: $\\sum_{x=r}^n \\equiv \\sum_{j=n-r}^{n-n} \\equiv \\sum_{j=n-r}^0 \\equiv \\sum_{j=0}^{n-r}$ – Mark Feb 5 '15 at 21:40" ]
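The accepted answer is easy to verify numerically: the identity as stated holds only at $p = 0.5$, while the reindexed form $P(Y \ge r) = P(X \le n - r)$, with $X$ counting failures (success probability $1 - p$), holds for every $p$. A short Python check (my addition, not part of the thread):

```python
from math import comb

def p_ge(n, r, p):
    """P(Y >= r) for Y ~ Binomial(n, p)."""
    return sum(comb(n, x) * p**x * (1 - p)**(n - x) for x in range(r, n + 1))

def p_le(n, r, p):
    """P(Y <= r) for Y ~ Binomial(n, p)."""
    return sum(comb(n, x) * p**x * (1 - p)**(n - x) for x in range(r + 1))

n, r = 10, 7
print(abs(p_ge(n, r, 0.5) - p_le(n, n - r, 0.5)) < 1e-12)  # True: holds at p = 0.5
print(abs(p_ge(n, r, 0.3) - p_le(n, n - r, 0.3)) < 1e-12)  # False: fails otherwise
print(abs(p_ge(n, r, 0.3) - p_le(n, n - r, 0.7)) < 1e-12)  # True: failure-count form
```

The third line is exactly the answer's substitution $j = n - x$: summing the same terms from the other end of the distribution.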
https://library.wolfram.com/infocenter/Articles/9238/
[ "Title: Cosmological Evolution of Statistical System of Scalar Charged Particles\n\nAuthors: Yurii Ignat'ev, A. A. Agathonov, Mikhail Mikhailov, Dmitry Ignatyev\n\nJournal / Anthology: Astrophysics and Space Science\nYear: 2015\nVolume: 357\nIssue: 61\n\nDescription: In the paper we consider the macroscopic model of plasma of scalar charged particles, obtained by means of the statistical averaging of the microscopic equations of particle dynamics in a scalar field. On the basis of kinetic equations, obtained from averaging, and their strict integral consequences, a self-consistent set of equations is formulated which describes the self-gravitating plasma of scalar charged particles. The corresponding closed cosmological model was obtained and numerically simulated for the case of a one-component degenerate Fermi gas and a two-component Boltzmann system. It was shown that the results depend weakly on the choice of statistical model. Two specific features of the cosmological evolution of a statistical system of scalar charged particles were obtained with respect to the cosmological evolution of the minimal interaction models: the appearance of giant bursts of invariant cosmological acceleration in the time interval 8·10^3 to 2·10^4 t_Pl, and strong heating (3 to 8 orders of magnitude) of the statistical system at the same times. 
The presence of such features can modify the quantum theory of generation of cosmological gravitational perturbations.\n\nSubjects:\n• Science > Physics > Astrophysics\n• Science > Physics > Relativity Theory" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9212498,"math_prob":0.8715738,"size":1358,"snap":"2020-34-2020-40","text_gpt3_token_len":282,"char_repetition_ratio":0.15878877,"word_repetition_ratio":0.07239819,"special_character_ratio":0.21281296,"punctuation_ratio":0.053140096,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95412123,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-03T21:20:49Z\",\"WARC-Record-ID\":\"<urn:uuid:e79baf6d-f5c0-4ad1-b15d-020d062c4f99>\",\"Content-Length\":\"53569\",\"Content-Type\":\"application/http; 
msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:675b7cc8-8854-4ace-a4c1-9a9f21ddc2d8>\",\"WARC-Concurrent-To\":\"<urn:uuid:68d12dcf-1f97-4dd5-8157-bfa631a2c5d4>\",\"WARC-IP-Address\":\"140.177.205.65\",\"WARC-Target-URI\":\"https://library.wolfram.com/infocenter/Articles/9238/\",\"WARC-Payload-Digest\":\"sha1:T2H2DV6QSQWSN7FGPIULVPMPRHRZJTAF\",\"WARC-Block-Digest\":\"sha1:ZVAGC7U5CM66IS7G7EBX5OQKFWNV24ET\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439735833.83_warc_CC-MAIN-20200803195435-20200803225435-00328.warc.gz\"}"}
https://pro-coder.tech/exploring-linear-diophantine-equation/
[ "## Exploring Linear Diophantine Equation\n\nThis is a well-known method to solve linear equations with 2 or more variables. The specialty of this method is that it provides integer solutions to a linear equation of the form a*x + b*y + c*z + ……… = constant.\n\nProblems involving the use of LDEs generally deal with two variables. We are going to look at a few interesting problems today:\n\n### Problem – 01 D. Two Arithmetic Progressions\n\nThe problem is about finding the number of possible values of x.\n\nHere x must satisfy –\n\na1*k’ + b1 = a2*l’ + b2\n\na1*k’ – a2*l’ = b2-b1\n\nThis can be treated as an LDE – A*x + B*y = C where x = k’ and y = l’. The only restriction that we have is that a1*x + b1 must be between L and R (inclusive) and x,y must be >= 0.\n\nNow, we know that solutions to this equation exist only if C%gcd(A,B) = 0, and that the solutions are:\n\nx = x’ + B*t/gcd(A,B) and y = y’ – A*t/gcd(A,B) where (x’,y’) are the solutions given by the extended Euclidean algorithm for Ax + By = gcd(A,B), multiplied by C/gcd(A,B)\n\n```//Extended GCD starts here\nll solx,soly,GCD;\nvoid extended_version(ll a, ll b){\n// Base Case\nif (b==0){\nsolx=1,soly=0,GCD=a;\nreturn;\n}\n//Recursive Case\nextended_version(b,a%b);\nll x1 = solx, y1 = soly;\nsolx = y1;\nsoly = x1 - (a/b)*y1;\n}\n//Extended GCD ends here\n```\n\nAnd now we will look at the solution to the original problem\n\n```ll solx,soly,GCD;\nvoid extended_gcd(ll a, ll b){\n// Base Case\nif (b==0){\nsolx=1,soly=0,GCD=a;\nreturn;\n}\n//Recursive Case\nextended_gcd(b,a%b);\nll x1 = solx, y1 = soly;\nsolx = y1;\nsoly = x1 - (a/b)*y1;\n}\nvoid solve()\n{\nll a1,b1,a2,b2,L,R;\ncin >> a1 >> b1 >> a2 >> b2 >> L >> R;\nll A=a1,B=-1*a2,C=b2-b1;\nextended_gcd(A,B);\nif (GCD < 0){\nGCD = -1*GCD;\nsolx *= -1;\nsoly *= -1;\n}\nif (C%GCD){\ncout << 0 << endl;\nreturn;\n}\nll l = ceil_div(L-b1,a1), r = floor_div(R-b1,a1);\nif (r < l){\ncout << 0 << endl;\nreturn;\n}\nl = max(0ll,l);\nll pl = ceil_div(C*solx - r*GCD,a2);\nll ph = floor_div(C*solx - 
l*GCD,a2);\nl = ceil_div(L-b2,a2), r = floor_div(R-b2,a2);\nl = max(0ll,l);\nif (r < l){\ncout << 0 << endl;\nreturn;\n}\npl = max(pl, ceil_div(C*soly - r*GCD,a1));\nph = min(ph,floor_div(C*soly - l*GCD,a1));\nif (ph < pl){\ncout << 0 << endl;\nreturn;\n}\ncout << ph-pl+1;\n}\n```" ]
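The extended-GCD recurrence above can also be sketched in Python. `solve_lde` below is a hypothetical helper (not from the post) that returns one particular integer solution of A*x + B*y = C, or None when C % gcd(A, B) != 0:

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return (a, 1, 0)
    g, x1, y1 = extended_gcd(b, a % b)
    # Back-substitute: gcd(b, a % b) == b*x1 + (a % b)*y1
    return (g, y1, x1 - (a // b) * y1)

def solve_lde(A, B, C):
    """One integer solution (x, y) of A*x + B*y == C, or None if none exists."""
    g, x, y = extended_gcd(A, B)
    if C % g != 0:
        return None          # solutions exist only when gcd(A, B) divides C
    k = C // g
    return (x * k, y * k)    # scale the Bezout pair by C/gcd(A, B)
```

The full family of solutions is then (x + B*t/g, y - A*t/g) for integer t, matching the formulas quoted in the post.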
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.75675714,"math_prob":0.9997274,"size":2175,"snap":"2021-21-2021-25","text_gpt3_token_len":775,"char_repetition_ratio":0.10041455,"word_repetition_ratio":0.1557789,"special_character_ratio":0.38988507,"punctuation_ratio":0.17992425,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99990976,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-13T11:22:50Z\",\"WARC-Record-ID\":\"<urn:uuid:ef0519d2-59e8-4be2-bb8c-8ef1ff25be15>\",\"Content-Length\":\"63836\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4df890af-33f5-4043-844b-e25e357020b6>\",\"WARC-Concurrent-To\":\"<urn:uuid:80aacac7-fde8-4cc7-8171-0bdfabcf2034>\",\"WARC-IP-Address\":\"88.211.101.190\",\"WARC-Target-URI\":\"https://pro-coder.tech/exploring-linear-diophantine-equation/\",\"WARC-Payload-Digest\":\"sha1:5U35QRPQK35VAH2M7BD5VVCAR5J4Y4O4\",\"WARC-Block-Digest\":\"sha1:MOAIQAFIDAOTNTGM5KD4GEJDFXNQPALM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243989916.34_warc_CC-MAIN-20210513111525-20210513141525-00518.warc.gz\"}"}
https://www.glassdoor.com/Interview/member-of-technical-staff-ii-interview-questions-SRCH_KO0,28.htm
[ "Member of Technical Staff II Interview Questions | Glassdoor\n\n# Member of Technical Staff II Interview Questions\n\n24\n\nMember of technical staff ii interview questions shared by candidates\n\n## Top Interview Questions\n\nSort: RelevancePopular Date\n\n### Member of Technical Staff II at VMware was asked...\n\nMay 17, 2012\n Database design question regarding user-account management, authorization etc.1 AnswerAn architecture similar to LDAP can be used.\n\n### Member of Technical Staff II at Charles Stark Draper Laboratory was asked...\n\nAug 7, 2014\n It was more of a casual conversation of me describing my project experiences answering related questions.Be the first to answer this question\n\n### Member of Technical Staff II, Software Engineer at Hughes Network Systems was asked...\n\nMay 1, 2015\n General Java questionsBe the first to answer this question\n\n### Member of Technical Staff II at Tintri was asked...\n\nFeb 28, 2017\n Lots of python (OOPs, Decorators, Inheritance in Python), Python Vs Java, Binary tree recursive, 1 AnswerHad prepared well for python hence could clear this round.\n\n### Member of Technical Staff II at Tintri was asked...\n\nFeb 28, 2017\n Start with Basic Algorithms but focus was on how you improve the performance of it. With very very high scale of data. Testcases to test the algorithm1 AnswerAlgorithm was easy but scaling has twists. It was fun when i got the answer in last minute\n\n### Member of Technical Staff II Software at Panasonic Avionics Corporation was asked...\n\nJul 12, 2019\n What's the most challenge project and why?1 AnswerDo you have a question about Panasonic Avionics Corporation? Don't ask just anyone for information, ask an employee from Panasonic Avionics Corporation. They're all waiting at Rooftop Slushie. https://wwww.rooftopslushie.com\n\n### Member of Technical Staff II at VMware was asked...\n\nMay 17, 2012\n Given a number, print the list of prime numbers that multiply to that number. Eg. 
given 132, print -> 2,2,3,11 1 Answerfrom numpy import prod x = 132 prim = def primo(x): for i in range(3,x+1): count = 0 for j in range(2,i): if i%j == 0: count+=1 else: pass if count == 0: prim.append(i) primo(x) #print prim fact = [] def facto(x): count = 0 for i in range(2,x): if x%i == 0: fact.append(i) else: count +=1 if len(fact) ==0 and count >0: fact.append(x) return fact facto(x) #print fact f = [] def finish(fact, prim): for i in fact: if i in prim: # print i f.append(i) finish(fact, prim) #print f if prod(f)!= x: #print \"true\" pad = [] wy = x//prod(f+pad) wy1 = wy #print wy1 while prod(pad) < wy: #print \"Tru\" for i in f: #print \"i\", (i) #print \"wy1\", (wy1) #print prod(pad) if wy1%i==0 and wy1 != 1: pad.append(i) wy1 = wy1/i #print \"wy1\", wy1 #print \"pad\", pad else: pass f +=pad k=sorted(f) for x in k: print x\n\n### Member of Technical Staff II at EchoStar was asked...\n\nAug 26, 2015\n Q: What are subnetsBe the first to answer this question\n\n### Data Analyst at Anthem was asked...\n\nFeb 20, 2018\n Leadership Skills?Be the first to answer this question\n\n### Member of Technical Staff II at VMware was asked...\n\nMay 17, 2012\n Given a list of integers L and a number X, find 3 numbers in L that sum-up to X. 1 Answerx = [1,2,3,4,5,6,7,8] z = 10 y = [] for i in range(len(x)): for j in range(len(x)): for k in range(len(x)): if (i!=j) and (j!=k) and (i!=k): if set([x[i],x[j],x[k]]) not in y: y.append(set([x[i],x[j],x[k]])) for i in y: if sum(i) == z: print i\n110 of 24 Interview Questions" ]
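The factorization answer quoted above is garbled by the page extraction. A much shorter way to get the prime factors of a number (e.g. 132 -> 2, 2, 3, 11) is trial division — a sketch, not the candidate's original code:

```python
def prime_factors(n):
    """Return the prime factors of n (with multiplicity) by trial division."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:   # divide out each factor as many times as it occurs
            factors.append(d)
            n //= d
        d += 1
    if n > 1:               # whatever remains is itself prime
        factors.append(n)
    return factors
```

For example, `prime_factors(132)` returns `[2, 2, 3, 11]`.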
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8207723,"math_prob":0.69309235,"size":754,"snap":"2019-35-2019-39","text_gpt3_token_len":148,"char_repetition_ratio":0.23066667,"word_repetition_ratio":0.07964602,"special_character_ratio":0.1949602,"punctuation_ratio":0.01923077,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95394117,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-22T01:54:37Z\",\"WARC-Record-ID\":\"<urn:uuid:23ec33ea-4c75-4d7c-8113-21d7089c2b1a>\",\"Content-Length\":\"174943\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:84487fda-7637-479d-b9f5-dcf61c33b679>\",\"WARC-Concurrent-To\":\"<urn:uuid:57a47da5-d380-483e-bc4c-4a6f7a3dd206>\",\"WARC-IP-Address\":\"104.17.90.51\",\"WARC-Target-URI\":\"https://www.glassdoor.com/Interview/member-of-technical-staff-ii-interview-questions-SRCH_KO0,28.htm\",\"WARC-Payload-Digest\":\"sha1:WRROAPMABPDVOWZXHR4H2LNDB5QLF2B6\",\"WARC-Block-Digest\":\"sha1:DQD5JEAHLMND6SREH6DXAIR6ESFFAXFQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514574765.55_warc_CC-MAIN-20190922012344-20190922034344-00354.warc.gz\"}"}
http://textbooks.math.gatech.edu/ila/parametric-form.html
[ "##### Objectives\n1. Learn to express the solution set of a system of linear equations in parametric form.\n2. Understand the three possibilities for the number of solutions of a system of linear equations.\n3. Recipe: parametric form.\n4. Vocabulary word: free variable.\n\n# Subsection1.3.1Free Variables\n\nThere is one possibility for the row reduced form of a matrix that we did not see in Section 1.2.\n\n##### Example(A System with a Free Variable)\n\nConsider the linear system\n\nWe solve it using row reduction:\n\nThis row reduced matrix corresponds to the linear system\n\nIn what sense is the system solved? We rewrite as\n\nFor any value of there is exactly one value of and that make the equations true. But we are free to choose any value of\n\nWe have found all solutions: it is the set of all values where\n\nThis is called the parametric form for the solution to the linear system. The variable is called a free variable.\n\nGiven the parametric form for the solution to a linear system, we can obtain specific solutions by replacing the free variables with any specific real numbers. For instance, setting in the last example gives the solution and setting gives the solution\n\n##### Definition\n\nConsider a consistent system of equations in the variables Let be a row echelon form of the augmented matrix for this system.\n\nWe say that is a free variable if its corresponding column in is not a pivot column.\n\nIn the above example, the variable was free because the reduced row echelon form matrix was\n\nIn the matrix\n\nthe free variables are and (The augmented column is not free because it does not correspond to a variable.)\n\n##### Recipe: Parametric form\n\nThe parametric form of the solution set of a consistent system of linear equations is obtained as follows.\n\n1. Write the system as an augmented matrix.\n2. Row reduce to reduced row echelon form.\n3. Write the corresponding (solved) system of linear equations.\n4. 
Move all free variables to the right hand side of the equations.\n\nMoving the free variables to the right hand side of the equations amounts to solving for the non-free variables (the ones that come from pivot columns) in terms of the free variables. One can think of the free variables as being independent variables, and the non-free variables being dependent.\n\n##### Implicit Versus Parameterized Equations\n\nThe solution set of the system of linear equations\n\nis a line in as we saw in this example. These equations are called the implicit equations for the line: the line is defined implicitly as the simultaneous solutions to those two equations.\n\nThe parametric form\n\ncan be written as follows:\n\nThis is called a parameterized equation for the same line. It is an expression that produces all points of the line in terms of one parameter,\n\nOne should think of a system of equations as being an implicit equation for its solution set, and of the parametric form as being the parameterized equation for the same set. The parametric form is much more explicit: it gives a concrete recipe for producing all solutions.\n\nYou can choose any value for the free variables in a (consistent) linear system.\n\nFree variables come from the columns without pivots in a matrix in row echelon form.\n\n# Subsection1.3.2Number of Solutions\n\nThere are three possibilities for the reduced row echelon form of the augmented matrix of a linear system.\n\n1. The last column is a pivot column. In this case, the system is inconsistent. There are zero solutions, i.e., the solution set is empty. For example, the matrix\ncomes from a linear system with no solutions.\n2. Every column except the last column is a pivot column. In this case, the system has a unique solution. For example, the matrix\ntells us that the unique solution is\n3. The last column is not a pivot column, and some other column is not a pivot column either. 
In this case, the system has infinitely many solutions, corresponding to the infinitely many possible values of the free variable(s). For example, in the system corresponding to the matrix\nany values for and yield a solution to the system of equations." ]
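The recipe above (row reduce, read off the pivot columns, treat the remaining columns as free variables) can be sketched numerically. This toy `rref` over exact fractions is an illustration, not part of the textbook; the augmented matrix at the end is a made-up example:

```python
from fractions import Fraction

def rref(rows):
    """Reduced row echelon form; returns (matrix, pivot column indices)."""
    m = [[Fraction(x) for x in row] for row in rows]
    pivots = []
    r = 0
    for c in range(len(m[0])):
        # Find a row at or below r with a nonzero entry in column c.
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue                      # no pivot in this column
        m[r], m[piv] = m[piv], m[r]       # swap it into position
        m[r] = [x / m[r][c] for x in m[r]]  # scale the pivot to 1
        for i in range(len(m)):           # clear the rest of the column
            if i != r and m[i][c] != 0:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
        if r == len(m):
            break
    return m, pivots

# Augmented matrix of  x + y + z = 1,  y - z = 2:
M, piv = rref([[1, 1, 1, 1], [0, 1, -1, 2]])
free = [c for c in range(3) if c not in piv]  # variable columns without pivots
```

Here `piv` is `[0, 1]` and `free` is `[2]`: z is the free variable, and the reduced rows read x = -1 - 2z, y = 2 + z, which is exactly the parametric form.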
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8871838,"math_prob":0.9963866,"size":3914,"snap":"2021-43-2021-49","text_gpt3_token_len":798,"char_repetition_ratio":0.19232737,"word_repetition_ratio":0.07646177,"special_character_ratio":0.19289729,"punctuation_ratio":0.08847185,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99902105,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-02T04:39:18Z\",\"WARC-Record-ID\":\"<urn:uuid:c27a3dee-20ea-4c7c-be3e-e8b6c6a9bc35>\",\"Content-Length\":\"78965\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5de67f99-60da-47ee-abcc-069f92e08961>\",\"WARC-Concurrent-To\":\"<urn:uuid:19c40a4c-99a1-4bae-831b-1c0b0ec9df67>\",\"WARC-IP-Address\":\"130.207.188.152\",\"WARC-Target-URI\":\"http://textbooks.math.gatech.edu/ila/parametric-form.html\",\"WARC-Payload-Digest\":\"sha1:5MDEETAD2YR62IUG5CETO75VGGNQILNS\",\"WARC-Block-Digest\":\"sha1:FBR6PV44H4NKTVIKBTALUD4A7ULLN2TU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964361064.69_warc_CC-MAIN-20211202024322-20211202054322-00092.warc.gz\"}"}
https://groupprops.subwiki.org/w/index.php?title=Special:ExportRDF/Characteristic_direct_factor_of_abelian_group&amp;syntax=rdf
[ "]> 2019-07-17T00:36:09+00:00 Characteristic direct factor of abelian group 0 en 2013-07-14T20:15:58Z 2456488.3444213 Characteristic direct factor of abelian group 0 1 list 4 [[Stronger than::Characteristic direct factor of nilpotent group]] [[Weaker than::Characteristic direct factor of abelian group]] Characteristic direct factor of abelian group 0 1 broadtable 4 [[Stronger than::Characteristic direct factor of nilpotent group]] [[Weaker than::Characteristic direct factor of abelian group]] Characteristic direct factor of abelian group 0 1 list 4 [[Stronger than::Fully invariant direct factor]] [[Weaker than::Characteristic direct factor of abelian group]] Characteristic direct factor of abelian group 0 1 broadtable 4 [[Stronger than::Fully invariant direct factor]] [[Weaker than::Characteristic direct factor of abelian group]] Characteristic direct factor of abelian group 0 1 list 4 [[Stronger than::Characteristic direct factor]] [[Weaker than::Characteristic direct factor of abelian group]] Characteristic direct factor of abelian group 0 1 broadtable 4 [[Stronger than::Characteristic direct factor]] [[Weaker than::Characteristic direct factor of abelian group]] Characteristic direct factor of abelian group 0 1 list 4 [[Stronger than::Fully invariant subgroup of abelian group]] [[Weaker than::Characteristic direct factor of abelian group]] Characteristic direct factor of abelian group 0 1 broadtable 4 [[Stronger than::Fully invariant subgroup of abelian group]] [[Weaker than::Characteristic direct factor of abelian group]] Characteristic direct factor of abelian group 0 1 list 4 [[Stronger than::Characteristic subgroup of abelian group]] [[Weaker than::Characteristic direct factor of abelian group]] Characteristic direct factor of abelian group 0 1 broadtable 4 [[Stronger than::Characteristic subgroup of abelian group]] [[Weaker than::Characteristic direct factor of abelian group]] Characteristic direct factor of abelian group 0 1 list 4 [[Stronger 
than::Direct factor of abelian group]] [[Weaker than::Characteristic direct factor of abelian group]] Characteristic direct factor of abelian group 0 1 broadtable 4 [[Stronger than::Direct factor of abelian group]] [[Weaker than::Characteristic direct factor of abelian group]] Characteristic direct factor of abelian group 0 1 list 4 [[Stronger than::Fully invariant subgroup]] [[Weaker than::Characteristic direct factor of abelian group]] Characteristic direct factor of abelian group 0 1 broadtable 4 [[Stronger than::Fully invariant subgroup]] [[Weaker than::Characteristic direct factor of abelian group]] Characteristic direct factor of abelian group 0 1 list 4 [[Stronger than::Characteristic subgroup]] [[Weaker than::Characteristic direct factor of abelian group]] Characteristic direct factor of abelian group 0 1 broadtable 4 [[Stronger than::Characteristic subgroup]] [[Weaker than::Characteristic direct factor of abelian group]] Characteristic direct factor of abelian group 0 1 list 4 [[Stronger than::Direct factor]] [[Weaker than::Characteristic direct factor of abelian group]] Characteristic direct factor of abelian group 0 1 broadtable 4 [[Stronger than::Direct factor]] [[Weaker than::Characteristic direct factor of abelian group]] Characteristic direct factor of abelian group Fully invariant direct factor 0 en Fully invariant direct factor Characteristic direct factor of nilpotent group 0 en Characteristic direct factor of nilpotent group Abelian fully invariant subgroup 0 en Abelian fully invariant subgroup Fully invariant direct factor of abelian group 0 en Fully invariant direct factor of abelian group Weaker than 132 en Weaker than" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.64938915,"math_prob":0.75703794,"size":3653,"snap":"2019-26-2019-30","text_gpt3_token_len":875,"char_repetition_ratio":0.22855577,"word_repetition_ratio":0.87982833,"special_character_ratio":0.22064057,"punctuation_ratio":0.13131313,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9641495,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-17T00:36:09Z\",\"WARC-Record-ID\":\"<urn:uuid:8ab00b36-ae86-42ba-9614-1a9d35a752f6>\",\"Content-Length\":\"33028\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:61b228f9-f961-4321-9684-d4efa5bb8b90>\",\"WARC-Concurrent-To\":\"<urn:uuid:82dfd262-1a14-47d2-982c-f92c3f5ed1b7>\",\"WARC-IP-Address\":\"96.126.114.7\",\"WARC-Target-URI\":\"https://groupprops.subwiki.org/w/index.php?title=Special:ExportRDF/Characteristic_direct_factor_of_abelian_group&amp;syntax=rdf\",\"WARC-Payload-Digest\":\"sha1:P3KWPTGWL7DK5RGFSM57337KJOTH5VR4\",\"WARC-Block-Digest\":\"sha1:RA5ALC73CWDQ3MOGNPNHGBLNZUREYPCQ\",\"WARC-Identified-Payload-Type\":\"application/xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195525004.24_warc_CC-MAIN-20190717001433-20190717023433-00009.warc.gz\"}"}
https://practicaldev-herokuapp-com.global.ssl.fastly.net/rohithv07/leetcode-114-flatten-binary-tree-to-linked-list-1ej8
[ "Leetcode 114. Flatten Binary Tree to Linked List\n\nProblem Statement\n\nGiven the root of a binary tree, flatten the tree into a \"linked list\":\n\nThe \"linked list\" should use the same TreeNode class where the right child pointer points to the next node in the list and the left child pointer is always null.\nThe \"linked list\" should be in the same order as a pre-order traversal of the binary tree.\n\nTest Cases\n\nExample 1:\n\nInput: root = [1,2,5,3,4,null,6]\nOutput: [1,null,2,null,3,null,4,null,5,null,6]\nExample 2:\n\nInput: root = []\nOutput: []\nExample 3:\n\nInput: root = \nOutput: \n\nConstraints:\n\nThe number of nodes in the tree is in the range [0, 2000].\n-100 <= Node.val <= 100\n\nAlgorithm :\n\n1. We need to move every left child node to the right and append the right child that is already present to the end.\n2. If there is a left child node, first store the current right child in a variable and set root.right to the left child. Then set the left child to null.\n3. The stored right child is then appended to the end of the left subtree that was just attached.", null, "4. Do recursion keeping the root node as root.right. 
recursion(root.right).", null, "Complexity :\n\nWe visit each node once, so the time complexity is O(n); the space complexity is O(H), where H is the height of the tree, because of the recursion stack.\n\nCode :\n\n/**\n* Definition for a binary tree node.\n* public class TreeNode {\n* int val;\n* TreeNode left;\n* TreeNode right;\n* TreeNode() {}\n* TreeNode(int val) { this.val = val; }\n* TreeNode(int val, TreeNode left, TreeNode right) {\n* this.val = val;\n* this.left = left;\n* this.right = right;\n* }\n* }\n*/\nclass Solution {\npublic void flatten(TreeNode root) {\nif (root == null) {\nreturn;\n}\nif (root.left != null) {\nTreeNode temp = root.right;\nroot.right = root.left;\nroot.left = null;\nTreeNode current = root.right;\nwhile (current.right != null) {\ncurrent = current.right;\n}\ncurrent.right = temp;\n}\nflatten(root.right);\n}\n}" ]
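The same algorithm reads almost line for line in Python; this is a sketch mirroring the Java above, with a minimal `TreeNode` class assumed:

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def flatten(root):
    """Flatten the tree in place into a right-pointer 'linked list' in preorder."""
    if root is None:
        return
    if root.left is not None:
        tail = root.right                 # save the current right subtree
        root.right, root.left = root.left, None
        cur = root.right
        while cur.right is not None:      # walk to the end of the moved subtree
            cur = cur.right
        cur.right = tail                  # re-attach the saved right subtree
    flatten(root.right)
```

Running it on the Example 1 tree [1,2,5,3,4,null,6] yields the chain 1 -> 2 -> 3 -> 4 -> 5 -> 6 with every left pointer null.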
[ null, "https://res.cloudinary.com/practicaldev/image/fetch/s--i0kFs68m--/c_imagga_scale,f_auto,fl_progressive,h_420,q_auto,w_1000/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/skhtqoa1oo1on3a9yw10.png", null, "https://res.cloudinary.com/practicaldev/image/fetch/s--67GpR-SJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a5cv9itcb8rvu9mv1do2.png", null, "https://res.cloudinary.com/practicaldev/image/fetch/s--nR0whH_E--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_880/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r6eol5ei6s9dl2a3eh9f.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7326948,"math_prob":0.98915285,"size":1866,"snap":"2022-05-2022-21","text_gpt3_token_len":471,"char_repetition_ratio":0.16809882,"word_repetition_ratio":0.0,"special_character_ratio":0.29581994,"punctuation_ratio":0.20300752,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99041075,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-17T23:03:31Z\",\"WARC-Record-ID\":\"<urn:uuid:6deed27c-3588-42ac-b2b8-3892393ffc35>\",\"Content-Length\":\"99036\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5a43788c-92fa-41d6-b579-85b711bfad32>\",\"WARC-Concurrent-To\":\"<urn:uuid:649db34c-691b-46cd-bb43-48adf497d3c1>\",\"WARC-IP-Address\":\"146.75.33.194\",\"WARC-Target-URI\":\"https://practicaldev-herokuapp-com.global.ssl.fastly.net/rohithv07/leetcode-114-flatten-binary-tree-to-linked-list-1ej8\",\"WARC-Payload-Digest\":\"sha1:BG2D5YIT2XDWJFA2HOO3W3IRZ2IM34UK\",\"WARC-Block-Digest\":\"sha1:3IWWSKZZHESLWKVY4EODU4MHONKKMAVW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320300624.10_warc_CC-MAIN-20220117212242-20220118002242-00346.warc.gz\"}"}
http://javascript.askforanswer.com/ruhezaijavascriptzhongqingkongshuzu.html
[ "# 如何在JavaScript中清空数组?\n\n2020/09/15 01:31 · javascript ·  · 0评论\n\n``A = [1,2,3,4];``\n\n(这是我对问题的原始回答)\n\n``A = [];``\n\n``````var arr1 = ['a','b','c','d','e','f'];\nvar arr2 = arr1; // Reference arr1 by another variable\narr1 = [];\nconsole.log(arr2); // Output ['a','b','c','d','e','f']``````\n\n``A.length = 0``\n\n``A.splice(0,A.length)``\n\n``````while(A.length > 0) {\nA.pop();\n}``````\n\n``A.length = 0;``\n\n``````function clearArray(array) {\nwhile (array.length) {\narray.pop();\n}\n}``````\n\nFYI MapSet define `clear()``clear()`对于Array似乎是合乎逻辑的\n\nTypeScript版本:\n\n``````function clearArray<T>(array: T[]) {\nwhile (array.length) {\narray.pop();\n}\n}``````\n\n``````describe('clearArray()', () => {\ntest('clear regular array', () => {\nconst array = [1, 2, 3, 4, 5];\nclearArray(array);\nexpect(array.length).toEqual(0);\nexpect(array).toEqual(undefined);\nexpect(array).toEqual(undefined);\n});\n\ntest('clear array that contains undefined and null', () => {\nconst array = [1, undefined, 3, null, 5];\nclearArray(array);\nexpect(array.length).toEqual(0);\nexpect(array).toEqual(undefined);\nexpect(array).toEqual(undefined);\n});\n});``````\n\n`A.splice(0, A.length);`\n\n1. 说“ `A = []`是答案”是无知的,而且绝对不正确。`[] == []`错误的\n\n这是因为这两个数组是两个单独的,独立的对象,它们具有自己的两个标识,在数字世界中占据了自己的空间,每个空间都是自己的。\n\n• 您不会带来新的功能,就好像您已经完成了要求的操作一样。\n• 相反,您清空垃圾箱。\n• 您不要用新的空罐替换装满的罐,也不要从装满的罐上取标签“ A”并将其粘贴到新的罐上。 `A = [1,2,3,4]; A = [];`\n\n``A.length = 0;``\n\n1. 此外,在罐头变空之前,不需要手动清除垃圾!您被要求一遍完全清空现有的垃圾箱,直到罐子变空之前不要捡垃圾,如下所示:\n\n``````while(A.length > 0) {\nA.pop();\n}``````\n2. 
也不要将左手放在垃圾箱的底部,而将右手放在垃圾箱的顶部,这样就可以拉出其内容:\n\n``A.splice(0, A.length);``\n\n``A.length = 0;``\n\nhttp://jsperf.com/array-clear-methods/3\n\n``````a = []; // 37% slower\na.length = 0; // 89% slower\na.splice(0, a.length) // 97% slower\nwhile (a.length > 0) {\na.pop();\n} // Fastest``````\n\n``````Array.prototype.clear = function() {\nthis.splice(0, this.length);\n};``````\n\n``````var list = [1, 2, 3];\nlist.clear();``````\n\n``````if (!Array.prototype.clear) {\nArray.prototype.clear = function() {\nthis.splice(0, this.length);\n};\n}``````\n\n``````var arr = [];\n\nfor (var i = 0; i < 100; i++) {\narr.push(Math.random());\n}\n\nfor (var j = 0; j < 1000; j++) {\nwhile (arr.length > 0) {\narr.pop(); // this executes 100 times, not 100000\n}\n}``````\n\nhttp://jsperf.com/empty-javascript-array-redux\n\n``var arr = [1, 2, 3, 4, 5]; //the array``\n\n``arr.length = 0; //change the length``\n\n``[] //result``\n\n``````/* could be arr.pop() or arr.splice(0)\ndon't need to return as main array get changed */\n\nfunction remove(arr) {\nwhile(arr.length) {\narr.shift();\n}\n}``````\n\n``arr.splice(0, arr.length); //[]``\n\n``arr = []; //[]``\n\n``````Array.prototype.remove = Array.prototype.remove || function() {\nthis.splice(0, this.length);\n};``````\n\n``arr.remove(); //[]``\n\n(有关上述标签的介绍,您可以在此处查看\n\nStackoverflow迫使我复制jsfiddle,因此它是:\n\n``````<html>\n<script>\nvar size = 1000*100\ndocument.getElementById(\"quantifier\").value = size\n}\n\nfunction scaffold()\n{\nconsole.log(\"processing Scaffold...\");\na = new Array\n}\nfunction start()\n{\nsize = document.getElementById(\"quantifier\").value\nconsole.log(\"Starting... 
quantifier is \" + size);\nconsole.log(\"starting test\")\nfor (i=0; i<size; i++){\na[i]=\"something\"\n}\nconsole.log(\"done...\")\n}\n\nfunction tearDown()\n{\nconsole.log(\"processing teardown\");\na.length=0\n}\n\n</script>\n<body>\n<span style=\"color:green;\">Quantifier:</span>\n<input id=\"quantifier\" style=\"color:green;\" type=\"text\"></input>\n<button onclick=\"scaffold()\">Scaffold</button>\n<button onclick=\"start()\">Start</button>\n<button onclick=\"tearDown()\">Clean</button>\n<br/>\n</body>\n</html>``````\n\n``````Array.prototype.clear = function() {\nthis.length = 0;\n};``````\n\n``a = []; ``\n\n``````var a=[1,2,3];\nvar b=a;\na=[];\nconsole.log(b);// It will print [1,2,3];``````\n\n``a.length = 0;``\n\n``a.splice(0,a.length)``\n\n``````while(a.length > 0) {\na.pop();\n}``````\n\n`A.splice(0);`\n\n``````var originalLength = A.length;\nfor (var i = originalLength; i > 0; i--) {\nA.pop();\n}``````\n\n``const numbers = [1, 2, 3]``\n\n``numbers = []``\n\n``numbers.length = 0``", null, "• `length`:您可以设置length属性以随时截断数组。通过更改数组的length属性扩展数组时,实际元素的数量会增加。\n• `pop()` :pop方法从数组中删除最后一个元素,并返回返回删除的值。\n• `shift()`:shift方法将移除第零个索引处的元素,并将连续索引处的值向下移位,然后返回移除的值。\n\n``````var arr = ['77'];\narr.length = 20;\nconsole.log(\"Increasing : \", arr); // (20) [\"77\", empty × 19]\narr.length = 12;\nconsole.log(\"Truncating : \", arr); // (12) [\"77\", empty × 11]\n\nvar mainArr = new Array();\nmainArr = ['1', '2', '3', '4'];\n\nvar refArr = mainArr;\nconsole.log('Current', mainArr, 'Refered', refArr);\n\nrefArr.length = 3;\nconsole.log('Length: ~ Current', mainArr, 'Refered', refArr);\n\nmainArr.push('0');\nconsole.log('Push to the End of Current Array Memory Location \\n~ Current', mainArr, 'Refered', refArr);\n\nmainArr.poptill_length(0);\nconsole.log('Empty Array \\n~ Current', mainArr, 'Refered', refArr);\n\nArray.prototype.poptill_length = function (e) {\nwhile (this.length) {\nif( this.length == e ) break;\n\nconsole.log('removed last element:', 
this.pop());\n}\n};``````\n\n• `new Array() | []`: create an array at a new memory location, using either the `Array constructor` or an `array literal`\n\n``````mainArr = []; // a new empty array is addressed to mainArr.\n\nvar arr = new Array('10'); // Array constructor\narr.unshift('1'); // add to the front\narr.push('15'); // add to the end\nconsole.log(\"After Adding : \", arr); // [\"1\", \"10\", \"15\"]\n\narr.pop(); // remove from the end\narr.shift(); // remove from the front\nconsole.log(\"After Removing : \", arr); // [\"10\"]\n\nvar arrLit = ['14', '17'];\nconsole.log(\"array literal « \", indexedItem( arrLit ) ); // {0,14}{1,17}\n\nfunction indexedItem( arr ) {\nvar indexedStr = \"\";\narr.forEach(function(item, index, array) {\nindexedStr += \"{\"+index+\",\"+item+\"}\";\nconsole.log(item, index);\n});\nreturn indexedStr;\n}``````\n• `slice()`: by using the slice function we obtain a shallow copy of the elements of the original array, with a new memory address, so that any modification to cloneArr will not affect the actual array.\n\n``````var shallowCopy = mainArr.slice(); // this is how to make a copy\n\nvar cloneArr = mainArr.slice(0, 3);\nconsole.log('Main', mainArr, '\\tCloned', cloneArr);\n\ncloneArr.length = 0; // Clears current memory location of an array.\nconsole.log('Main', mainArr, '\\tCloned', cloneArr);``````\n\n``````let xs = [1,2,3,4];\nfor (let i in xs)\ndelete xs[i];``````\n\n``````xs\n=> Array [ <4 empty slots> ]\n\n[...xs]\n=> Array [ undefined, undefined, undefined, undefined ]\n\nxs.length\n=> 4\n\nxs\n=> ReferenceError: reference to undefined property xs``````\n\n``````public emptyFormArray(formArray:FormArray) {\nfor (let i = formArray.controls.length - 1; i >= 0; i--) {\nformArray.removeAt(i);\n}\n}``````" ]
[ null, "https://i.stack.imgur.com/nChy7.png", null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.5747282,"math_prob":0.9736748,"size":8616,"snap":"2020-45-2020-50","text_gpt3_token_len":4310,"char_repetition_ratio":0.13063167,"word_repetition_ratio":0.040462427,"special_character_ratio":0.30501392,"punctuation_ratio":0.2618888,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97761744,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-29T08:29:45Z\",\"WARC-Record-ID\":\"<urn:uuid:c77e1c7c-e49a-474d-a01d-e3d12b0b7796>\",\"Content-Length\":\"102674\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:87052781-97ec-46b8-97ff-f4b33d073e34>\",\"WARC-Concurrent-To\":\"<urn:uuid:57149eb4-0460-408a-8161-0ef621f3cc8a>\",\"WARC-IP-Address\":\"144.168.58.239\",\"WARC-Target-URI\":\"http://javascript.askforanswer.com/ruhezaijavascriptzhongqingkongshuzu.html\",\"WARC-Payload-Digest\":\"sha1:5Y7ZPE7C5LGUDBGDC5ZQYC5VKF7CSNRQ\",\"WARC-Block-Digest\":\"sha1:OU2BUX6PWZ37ORL4UGZYBOGSKD2GMTPU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107903419.77_warc_CC-MAIN-20201029065424-20201029095424-00629.warc.gz\"}"}
https://www.physicsforums.com/threads/free-surface-charges-on-concentric-cylinders.954252/
[ "# Free surface charges on concentric cylinders\n\n## Homework Statement\n\nConsider an infinitely long cylindrical rod with radius a carrying a uniform charge density ##\\rho##. The rod is surrounded by a co-axial cylindrical metal-sheet with radius b that is connected to ground. The volume between the sheet and the rod is filled with a dielectric, ##\\epsilon##.\n\nCalculate the free and bound surface charges at r=a and r=b\n\n## The Attempt at a Solution\n\nI tried to use the discontinuity in E.\n\nFrom Gauss: ##\\nabla \\cdot E = \\frac{\\rho}{\\epsilon_0} \\Rightarrow \\oint E \\cdot ds = \\frac{Q}{\\epsilon_0} ##\n\n##\\Rightarrow (E_{above}-E_{below}) = \\frac{\\sigma_b}{\\epsilon_0}##.\n\nFor r = a, ##\\sigma_b = \\frac{Q}{2 \\pi s*l} (\\frac{1}{\\epsilon_r} - 1)##\n\nSo now we have the bound charge per unit area.\n\nBy integrating we can get the total bound surface charge, ##\\sigma_{bt} = Q(\\frac{1}{\\epsilon_r}-1)##.\n\nI'm not sure this is right. But if it is, how do I now get the ##free## surface charge?\n\nAnd I do remember that either the bound charge or free charge should be zero for a grounded material, but not which one.\n\n•", null, "Delta2\n\n## Answers and Replies\n\nCharles Link\nHomework Helper\nGold Member\n2020 Award\nI think you omitted a step: ## \\nabla \\cdot D=\\rho_{free}=0 ## at and around ## r=a ##, so that ## D_{above}=D_{below} ## which gives ## \\epsilon_o \\epsilon_r E_{above}=\\epsilon_o E_{below} ##. (And then ## E_{below} ## is computed from knowing ## \\rho ## which was given). ## \\\\ ## To get the polarization surface charge density at ## r=b ##, the polarization charge per unit length at ## r=b ## must be equal and opposite that at ## r=a ##. (Recommend don't use ## \\sigma_b ## at ## r=a ##. Better to call it ## \\sigma_{pa} ##.(## p ## for polarization). ## \\\\ ## The free charge at ## r=b ## is the easiest. 
If the outside is grounded, by symmetry, the net charge enclosed must be zero to have ## E =0 ## outside of the coaxial metal layer, as well as inside this metallic layer. The inside surface of the metallic layer at ## r=b ## will thereby have some charge.## \\\\ ## Additional item: For charge per unit length and surface charge per unit length, suggest using ## \\lambda ##. Do not use ## \\sigma ## for that. ## \\sigma ## is a surface charge density per unit area. e.g. For the core, you would have ## \\lambda_{core}=\\rho \\, \\pi \\, a^2 ##. Also ## \\lambda_{pa}=\\sigma_{pa} \\, 2 \\pi \\, a ##, etc.\n\nLast edited:\n•", null, "Philip Land and Delta2\nDelta2\nHomework Helper\nGold Member\nI am a bit lost here, the central cylindrical rod is conducting/metal or non conducting? And the charge density given as ##\\rho## is equal to ##\\rho_{free}## or not?\n\nCharles Link\nHomework Helper\nGold Member\n2020 Award\n@Delta² The central rod is ## \\rho_{free} ## and non-conducting. It's free charge, (as opposed to polarization type charge), but is not free to move. It is embedded in the rod. ## \\\\ ## And it's not a polarization type charge that forms as the result of dipoles in the material.\n\n•", null, "Delta2\nDelta2\nHomework Helper\nGold Member\n@Delta² The central rod is ## \\rho_{free} ## and non-conducting. It's free charge, (as opposed to polarization type charge), but is not free to move. It is embedded in the rod. ## \\\\ ## And it's not a polarization type charge that forms as the result of dipoles in the material.\nEhm, I don't understand then why in your first equation you say ##\\rho_{free}=0## at ##r=a##, ok its not free to move, but still it should be ##\\rho_{free}=\\rho## at ##r=a## , right or wrong?\n\nI think you omitted a step: ## \\nabla \\cdot D=\\rho_{free}=0 ## at and around ## r=a ##, so that ## D_{above}=D_{below} ## which gives ## \\epsilon_o \\epsilon_r E_{above}=\\epsilon_o E_{below} ##. 
(And then ## E_{below} ## is computed from knowing ## \\rho ## which was given). ## \\\\ ## To get the polarization surface charge density at ## r=b ##, the polarization charge per unit length at ## r=b ## must be equal and opposite that at ## r=a ##. (Recommend don't use ## \\sigma_b ## at ## r=a ##. Better to call it ## \\sigma_{pa} ##.(## p ## for polarization). ## \\\\ ## The free charge at ## r=b ## is the easiest. If the outside is grounded, by symmetry, the net charge enclosed must be zero to have ## E =0 ## outside of the coaxial metal layer, as well as inside this metallic layer. The inside surface of the metallic layer at ## r=b ## will thereby have some charge.## \\\\ ## Additional item: For charge per unit length, suggest using ## \\lambda ##. Do not use ## \\sigma ## for that. ## \\sigma ## is a surface charge density per unit area. e.g. For the core, you would have ## \\lambda_{core}=\\rho \\, \\pi \\, a^2 ##. Also ## \\lambda_{pa}=\\sigma_{pa} \\, 2 \\pi \\, a ##, etc.\nBy surface charge ##density## do you mean surface charge, because it's only the surface charge I want.\n\nIs polarization charge same as bound charge? I failed to understand how to use the ##\\lambda## concept.\n\nHowever, I did a new calculation, I did a little wrong above.\n\nSee picture.\n\nSo now I think I have the total surface charge for both the cylinders. However, there's still bound charge present. How do I differentiate bound and free charge, so I eventually can extract only the free surface charge from the total surface charge?", null, "#### Attachments\n\nCharles Link\nHomework Helper\nGold Member\n2020 Award\nYour calculations are leaving off an important step or two in the solution. I recommend you use the equation ## \\nabla \\cdot D=\\rho_{free} ## where ## D=\\epsilon_o \\epsilon_r E ##. You can do the calculation just using ## E ## and deriving everything else from ## -\\nabla \\cdot P=\\rho_p ##, but it's easier to use ## D ##. 
## \\\\ ## With this equation, Gauss' law reads ## \\int D \\cdot dA=Q_{free} ##. You seem to get the correct answer for ## \\sigma_pa ##, but you aren't showing the steps to get there. ## \\\\ ## And yes, your equation ## E_{above}-E_{below}=\\frac{\\sigma_{pa}}{\\epsilon_o} ## is correct. ## \\\\ ## And, yes, it is bound polarization charge, usually called just simply polarization charge. ## \\\\ ## Using the ## D ## form of Gauss' law with a pillbox around ## r=a ## gives: ## \\\\ ## ## \\epsilon_o \\epsilon_r E_{above}-\\epsilon_o E_{below}=0 ##, so that ## E_{above}=\\frac{E_{below}}{\\epsilon_r} ##. ## \\\\ ## This gives ## E_{below}(\\frac{1}{\\epsilon_r}-1)=\\frac{\\sigma_{pa}}{\\epsilon_o} ##. ## \\\\ ## ## E_{below} ## is readily found: (Edit) ## E_{below} 2 \\pi \\, a L=\\frac{\\rho \\pi a^2 \\, L }{\\epsilon_o} ##. ## \\\\ ## Now we can simply solve for ## \\sigma_{pa} ##. (I think you got that part correct). ## \\\\ ## Because this is polarization of the dielectric between ## r=a ## and ## r=b ##, ## \\lambda_{pa} L=-\\lambda_{pb} L ##. ## \\\\ ## (It can be shown that ## -\\nabla \\cdot P=\\rho_p=0 ## inside the dielectric, so that the only polarization charge is surface polarization charges. The net polarization charge must be zero. Alternatively, ## \\int D \\cdot dA =Q_{free} ## so that ##\\epsilon_o \\epsilon_r E(r) 2 \\pi r=\\rho \\, \\pi a^2 ##. We can compute ## \\nabla \\cdot E(r)=\\frac{\\rho_{total}}{\\epsilon_o}=0 ## in the dielectric, (google \"divergence in cylindrical coordinates\"), so that ## \\rho_p =0 ## in the dielectric. ) ## \\\\ ## This is why you need the surface charge per unit length.## \\\\ ## Now ## \\lambda_{pa}=\\sigma_{pa} 2 \\pi a ## and ## \\lambda_{pb}=\\sigma_{pb} 2 \\pi b ##. You need these last two relations to solve for ## \\sigma_{pb} ## from ## \\sigma_{pa} ##.\n\nLast edited:\n•", null, "Delta2 and Philip Land" ]
[ null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/svg+xml;charset=utf-8,%3Csvg xmlns%3D'http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg' width='450' height='600' viewBox%3D'0 0 450 600'%2F%3E", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.75574356,"math_prob":0.58564544,"size":1077,"snap":"2021-04-2021-17","text_gpt3_token_len":309,"char_repetition_ratio":0.12581547,"word_repetition_ratio":0.0,"special_character_ratio":0.30083567,"punctuation_ratio":0.078431375,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9932206,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-10T12:29:45Z\",\"WARC-Record-ID\":\"<urn:uuid:096d03eb-675d-4c46-93dd-8707399d88ad>\",\"Content-Length\":\"85511\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:55af733c-48c0-4002-94b1-5e2abf334815>\",\"WARC-Concurrent-To\":\"<urn:uuid:da5e2474-9db3-4098-992e-0f47fdf59339>\",\"WARC-IP-Address\":\"104.26.14.132\",\"WARC-Target-URI\":\"https://www.physicsforums.com/threads/free-surface-charges-on-concentric-cylinders.954252/\",\"WARC-Payload-Digest\":\"sha1:Q7K3KDS4DCTKEROBHF6LVAT5JXREUYRB\",\"WARC-Block-Digest\":\"sha1:4JT5PVK3LRO6MWMKOII6EMIUG774SMBC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038056869.3_warc_CC-MAIN-20210410105831-20210410135831-00075.warc.gz\"}"}
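For readers skimming the thread above, the quoted steps can be collected into closed-form answers. This is only a summary sketch of the algebra already given in the replies (uniform free charge density ## \rho ## in the rod of radius ## a ##, dielectric ## \epsilon_r ## for ## a<r<b ##, grounded sheet at ## r=b ##); signs and prefactors should be checked against the thread itself.

```latex
% Field just inside the dielectric, from Gauss' law at r = a:
E_{below}\,2\pi a L = \frac{\rho\,\pi a^2 L}{\epsilon_o}
\;\Rightarrow\; E_{below} = \frac{\rho a}{2\epsilon_o},
\qquad E_{above} = \frac{E_{below}}{\epsilon_r}

% Bound (polarization) surface charge at r = a:
\sigma_{pa} = \epsilon_o\left(E_{above}-E_{below}\right)
            = \frac{\rho a}{2}\left(\frac{1}{\epsilon_r}-1\right)

% Bound surface charge at r = b (equal and opposite charge per unit length):
\sigma_{pb} = -\,\sigma_{pa}\,\frac{a}{b}
            = \frac{\rho a^2}{2b}\left(1-\frac{1}{\epsilon_r}\right)

% Free surface charge: none at r = a (the rod's charge is a volume charge);
% at r = b the grounded sheet must cancel the enclosed free charge per unit
% length \lambda = \rho\,\pi a^2, so
\sigma_{fb} = -\,\frac{\rho a^2}{2b}
```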
https://mathoverflow.net/questions/84622/is-the-maximum-tree-path-length-distributed-lognormally-in-the-limit
[ "# Is the maximum tree-path length distributed lognormally (in the limit) ?\n\nConsider a full binary tree with $k>10$ levels. Let the lengths of individual edges in this tree be i.i.d. random variables with finite moments. Then total lengths of the $2^{k-1}$ source-to-sink paths in this tree are approximately Gaussian by the CLT, regardless of the edge-length distribution. We are interested in the limit distribution ($k\\rightarrow\\infty$) of the maximum path length in the tree.\n\nOur numerical simulation built 100K independent trees ($k=15$) with $(a)$ uniform and $(b)$ Gaussian edge lengths. The resulting distributions for $(a)$ and $(b)$ did not look qualitatively different and were somewhat skewed to the right. Lognormal distributions provided very close fits --- better than Gumbel and Airy. If lognormal is indeed the limit distribution, we would appreciate references or suggestions on proving this analytically.\n\nUnless I misunderstood your question, this can be entirely rephrased in terms of branching random walks. This goes as follows: at time 0 there is 1 individual at position 0. Each individual gives birth to two descendants, whose position is the position of the parent plus a jump, where all jumps are i.i.d. random variable. You are asking about the maximum position at time $k$, $M_k$.\nThis is a much studied problem, with deep links to traveling wave partial differential equations such as the Fisher-KPP equation. (Eg, in the space-time continuous case where branching random walk is replaced by branching Brownian motion, the function $u(t,x) = \\mathbb{P}(M_t >x)$ solves the KPP equation with initial condition $u(0,x) = 1_{x<0}$.)\nSee this recent paper http://front.math.ucdavis.edu/1101.1810 by Elie Aidekon, which provides complete answers to your question under minimal assumptions on the jump distribution. The main result is then that $M_k - ck + (3/2) \\log k$ converges to a random variable, where $c$ is a constant that is easy to compute. 
The distribution of the limiting random variable doesn't have to be either Gumbel or lognormal." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9015541,"math_prob":0.9956521,"size":848,"snap":"2019-35-2019-39","text_gpt3_token_len":186,"char_repetition_ratio":0.13270143,"word_repetition_ratio":0.0,"special_character_ratio":0.22287735,"punctuation_ratio":0.087248325,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9989907,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-17T23:33:32Z\",\"WARC-Record-ID\":\"<urn:uuid:17954d15-90cc-4d5c-85ba-922a8a46d7e7>\",\"Content-Length\":\"117582\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ef40db93-003f-4647-b930-a6f9d0bdaa32>\",\"WARC-Concurrent-To\":\"<urn:uuid:2f407d61-7df4-499b-9e17-2f7ea5fe708c>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://mathoverflow.net/questions/84622/is-the-maximum-tree-path-length-distributed-lognormally-in-the-limit\",\"WARC-Payload-Digest\":\"sha1:L3NQO2VZ65VUMATN5G5E65DIJS73A62Q\",\"WARC-Block-Digest\":\"sha1:PRTVZBSZWMP5PHU6W4NTFMJ7QKRSPTMS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514573124.40_warc_CC-MAIN-20190917223332-20190918005332-00229.warc.gz\"}"}
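Aidekon's result quoted above is easy to probe numerically. The sketch below simulates a binary branching random walk with i.i.d. Gaussian jumps and reports the maximum leaf position `M_k`; the depth, jump distribution, and the formula for the speed `c` used here are illustrative assumptions (for standard Gaussian jumps the linear speed works out to `sqrt(2 ln 2)`), not values taken from the paper.

```python
import math
import random

def max_branching_walk(k, jump_sigma=1.0, seed=0):
    """Simulate a binary branching random walk for k generations.

    Each individual splits into two children, each displaced by an
    independent N(0, jump_sigma^2) jump.  Returns (M_k, leaf_positions).
    """
    rng = random.Random(seed)
    positions = [0.0]
    for _ in range(k):
        # every current individual produces two displaced children
        positions = [p + rng.gauss(0.0, jump_sigma)
                     for p in positions
                     for _ in range(2)]
    return max(positions), positions

k = 12
m_k, leaves = max_branching_walk(k)

# For Gaussian jumps the speed is c = sigma * sqrt(2 ln 2), so M_k should be
# of order c*k (up to the -(3/2) log k correction mentioned in the question).
c = math.sqrt(2.0 * math.log(2.0))
print(f"M_{k} = {m_k:.2f}, c*k = {c * k:.2f}")
```

Note that the number of leaves grows as `2^k`, so this brute-force sketch is only practical for small `k`.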
https://americanring.com/(X(1)S(gf1iyr45vlffzs555e1ote45))/products/snap-ring.aspx?item=US-950
[ "", null, "", null, "", null, "### US-950\n\nDisplay Units: inches |  metric\nRequest Quick Quote for this part.\n Ring Specs (D) Free Diameter: 9.170 +0.000 / -0.070 in. (t) Thickness: 0.076 +0.002 / -0.002 in. (b) Radial Wall: 0.345 +0.004 / -0.008 in. Groove Specs: (B) Application Diameter: 9.500 in. (G) Groove Diameter: 9.263 +0.008 / -0.008 in. (W) Groove Width: 0.082 +0.005 / -0.000 in. Groove Depth: 0.119 in. Other Specs Approximate Weight per 1000: (m) Material Thickness: 0.075 in. (N) Ring Number of Turns: 1.000 (Pg) Theoretical Thrust Load Capacity - Groove Yield:Notes: Yield Strength of Groove Material (Ys): 45,000 psi. Calculated using a safety factor (K) of 2Equation:Pg = [ B * d * Ys * pi ] / K    = [ (9.500 in.) * (0.119 in.) * (45,000 psi) * 3.14 ] / 2 79,869 lbs. (Pr) Theoretical Thrust Load Capacity - Ring Shear:Notes: Ring Material: Carbon Spring Steel (SAE 1070-1090). Shear Strength of Ring Material* (Ss): 120,000 psi. Calculated using a safety factor (K) of 3Equation:Pr = [ B * t * Ss * pi ] / K    = [ (9.500 in.) * (0.076 in.) * (120,000 psi) * 3.14 ] / 3*Shear Strength of Material value is for Carbon Spring Steel material only. 90,683 lbs. Industry Equivalent Part Number(s):US-950, CL-950, VS-950" ]
[ null, "https://americanring.com/images/sub_banners/AR_1020x200_5.jpg", null, "https://americanring.com/images/product_spec_us.jpg", null, "https://americanring.com/images/product_app_shaft_spiral.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5393927,"math_prob":0.98503536,"size":793,"snap":"2022-27-2022-33","text_gpt3_token_len":284,"char_repetition_ratio":0.12040558,"word_repetition_ratio":0.16438356,"special_character_ratio":0.443884,"punctuation_ratio":0.22033899,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9575002,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-26T17:54:22Z\",\"WARC-Record-ID\":\"<urn:uuid:a4393374-22c8-4532-a31e-63c1ff1ffbc6>\",\"Content-Length\":\"18111\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6eee000c-97d0-4dbe-9e67-ff78b2711cae>\",\"WARC-Concurrent-To\":\"<urn:uuid:0f8dc179-c3bf-4372-b181-11184e400761>\",\"WARC-IP-Address\":\"172.67.134.163\",\"WARC-Target-URI\":\"https://americanring.com/(X(1)S(gf1iyr45vlffzs555e1ote45))/products/snap-ring.aspx?item=US-950\",\"WARC-Payload-Digest\":\"sha1:ZUH64QPFI4JE25UM5RK6IPNPNLPTUSDR\",\"WARC-Block-Digest\":\"sha1:YL2EOBMJVE6TIZTSK6IUE6GI76N5NIZ6\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103271763.15_warc_CC-MAIN-20220626161834-20220626191834-00362.warc.gz\"}"}
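The two thrust-load formulas in the table above can be checked with a few lines of arithmetic. The sketch below reproduces the page's own numbers; all values and symbols are taken directly from the listing, and pi is taken as 3.14 because that is what the page's worked equations use.

```python
# Theoretical thrust load capacities for ring US-950,
# using the equations as printed on the page (pi taken as 3.14).
PI = 3.14

B = 9.500     # application diameter, in.
d = 0.119     # groove depth, in.
t = 0.076     # ring thickness, in.
Ys = 45_000   # yield strength of groove material, psi
Ss = 120_000  # shear strength of ring material, psi

# Groove yield, safety factor K = 2:  Pg = B * d * Ys * pi / K
Pg = B * d * Ys * PI / 2
# Ring shear, safety factor K = 3:    Pr = B * t * Ss * pi / K
Pr = B * t * Ss * PI / 3

print(f"Pg = {Pg:,.0f} lbs")  # page lists 79,869 lbs
print(f"Pr = {Pr:,.0f} lbs")  # page lists 90,683 lbs
```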
https://cloudstack.ninja/muhammad-alif/overlapping-pyplot-imshow-plot-in-single-grid/
[ "# Overlapping pyplot.imshow() plot in single grid\n\nI want to plot these figures. For the first figure:\n\nFigure1.png\n\nFor the code snippet:\n\n``````import matplotlib.pyplot as plt\nrow = 30\ncol = row\nareas_pher = [[-0.01 for i in range(col)] for j in range(row)]\n\nfor i in range(1, row-1):\n    for j in range(1, col-1):\n        areas_pher[i][j] = 0\nmax_pher = 20\nfor pher in range(1, col-1):\n    areas_pher[pher] = max_pher * pher/max_pher\nplt.imshow(areas_pher, cmap=\"Blues_r\", vmin=0, vmax=max_pher)\n``````\n\nAnd the second figure:\n\nFigure2.png\n\nFor the code snippets:\n\n``````import matplotlib.pyplot as plt\nimport numpy as np\n\nareas_ant = [[6 for i in range(col)] for j in range(row)]\nprob_ant = 0.05\n\nfor i in range(1, row-1):\n    for j in range(1, col-1):\n        rand = np.random.rand()\n        if rand < prob_ant:\n            areas_ant[i][j] = 5\nplt.imshow(areas_ant, cmap=\"Blues\")\n``````\n\nMy question: what is the best way to overlap these figures, to produce an illustration like the image below?\n\nOverlapping Figures Illustration.png" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8022455,"math_prob":0.9875322,"size":1180,"snap":"2020-45-2020-50","text_gpt3_token_len":339,"char_repetition_ratio":0.13860545,"word_repetition_ratio":0.1183432,"special_character_ratio":0.28898305,"punctuation_ratio":0.15510204,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99931586,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-05T00:00:41Z\",\"WARC-Record-ID\":\"<urn:uuid:0c7b2888-b7bb-4f80-ab2d-4cec06a64d0e>\",\"Content-Length\":\"62528\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e49385ca-7755-4031-b709-bb27192f12dd>\",\"WARC-Concurrent-To\":\"<urn:uuid:d3e91526-ee24-4c75-8d66-1344e1fed6ab>\",\"WARC-IP-Address\":\"192.0.78.178\",\"WARC-Target-URI\":\"https://cloudstack.ninja/muhammad-alif/overlapping-pyplot-imshow-plot-in-single-grid/\",\"WARC-Payload-Digest\":\"sha1:NG3FYQPES3EO4TDYJ3OQIK26KEND7X66\",\"WARC-Block-Digest\":\"sha1:4KAQJSL7WM74AVC3MOYCUUXUZIZBKJKD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141745780.85_warc_CC-MAIN-20201204223450-20201205013450-00267.warc.gz\"}"}
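One common answer to the question above (offered here as a sketch, not the canonical solution) is to draw both arrays on the same axes: plot the pheromone field first, then plot the ant layer with its "empty" cells masked out so the background shows through. The field shapes, the value 5 marking an ant, and the 0.05 probability follow the snippets in the question; the colormaps and variable names are assumptions.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so the example runs anywhere
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
row = col = 30
max_pher = 20

# Pheromone field: a top-to-bottom gradient, roughly as in the question.
pher = np.tile(np.arange(col, dtype=float), (row, 1))

# Ant field: value 5 marks an ant, anything else is "empty".
ants = np.full((row, col), 6)
ants[rng.random((row, col)) < 0.05] = 5

fig, ax = plt.subplots()
ax.imshow(pher, cmap="Blues_r", vmin=0, vmax=max_pher)

# Mask the non-ant cells so the pheromone layer shows through them.
ant_layer = np.ma.masked_where(ants != 5, ants)
ax.imshow(ant_layer, cmap="autumn", vmin=0, vmax=6)

# fig.savefig("overlay.png")  # or plt.show()
```

An alternative is to pass `alpha=` to the second `imshow` call, which blends the layers instead of hiding the empty cells entirely.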
https://www.bsynchro.com/m7b2uyfd/decision-making-under-risk-questions-and-answers-54cf56
[ "8.5. In such cases, the problem is classified as decision making under risk . 150) + 0.3 (Rs. 450) (8.7). ‘Do not In­vest’, i.e., E(U2) = 0. If the original payoff table is stated in terms of losses or costs, the decision-maker will then select the smallest loss for each event and subtract this value from each row entry. The optimal decision would still be the same, viz., ordering 200 units; thus the manager’s decision is not very much sensitive to changes in the proba­bility assignments. Risk is objective but uncertainty is subjective; risk can be measured or quantified but uncertainty cannot be. Thus, a situation of complete uncertainty prevails. Let us consider a simple competitive market where the demand (average revenue curve) faced by a seller is a horizontal straight line. Exam 10 October 2014, questions. Therefore, marginal utility measures the satisfac­tion the individual receives from a small increase in his stock of wealth. 300 (CE = Rs. All other trademarks and copyrights are the property of their respective owners. Mr. X’s EMV from playing this gamble is Rs. If Mylo adopts a maximin approach to decision-making, which daily supply level will he choose? We illustrate the concept in table 8.6 below: If we adopt the simple EMV criterion, a cursory glance would make project B apparently seem to be the best possible choice. It is not possible for you to wait for some time to study the nature (or determine the level) of demand, nor can you place more than one order. Concept of Decision-Making Environment 2. To illustrate, a discount rate of 10% becomes a discount factor of 1.46 [= (1.10)4] by the end of four years, and the 13% rate becomes 1.63 [=(1.13)4]. In case of two or more projects (alternatives) having unequal costs or benefits (payoffs) the CV is undoubtedly a preferable measure of relative risk. 300 and if demand were 200 units, he would order 200 and the payoff would be Rs. 167.50, Rs. 
The conversion of a payoff matrix to a regret matrix is very easy. Recall that the word ‘margin’ always refers to anything extra. It is gratifying to note that the expected utility approach to decision problems under risk ac­commodates both factors and provides a logical way to arrive at decisions. The decision maker is able to assign probabilities based on the occurrence of the states of nature. The minimax regret has been proposed by Sav­age. all of the unknown outcomes of a decision, a complex thought process in decision-making, a decision that will definitely have a negative outcome, the possible negative outcomes of a decision. Therefore they would decide not to participate in this type of gamble characterized by highly uncer­tain outcome against an unlimited payment (that has to be made if the gamble is accepted). It is because the total cost is Rs. Fig. Therefore, by using the maximization of expected value criterion, the inventory manager would choose A2, i.e., order 200 units. If the decision-maker analyses the expected values of each of the actions, he arrives at the decision to select the option which is having the highest ex­pected value, i.e., option 2 in this example. Thus the lottery is equivalent to tossing an unbiased coin. If a head appears in the first toss Mr. X owes Mr. Y Rs. Four major criteria that are based entirely on the payoff matrix approach are: In those situations where the decision-maker is willing to assign subjective proba­bilities to the possible outcomes, the two other cri­teria are. a. It is zero for the alternative action. Since the first decision (A1) has the highest ex­pected value it will be taken. A risk neutral decision maker will always prefer C to A or B. c. A risk seeking decision maker will always prefer C to A or B. d. All of the above are correct. The two competitors may not have the same approximate utilities (with a negative sign). 
Now, in the context of our NPV model we may assert that risk aversion is reflected in the fact that any decision that a firm makes will sure­ly change its risk level — the degree of risk to which it is exposed. 150) + 0.2 (Rs. So the maximization of EMV criterion is not a reliable guide in predicting the strategic action or strategic choice of an individual in a given decision environment. -4000) x .80 = Re. However, in real life most people prefer to play safe and avoid risk. He is con­sidering whether or not to make long-term invest­ment for introducing the product in the market. The Society for Education in India (in short SEI) had been engaged in running primary schools in different parts of the country since 1950s. If this happens, such a value is called a saddle point. Based on this estimation of probabilities, the ex­pected payoff can be computed as follows: A1 (100) = 0.5 (Rs. 300 (Rs. Positive payoff implies profit and negative pay-off implies loss. In this article we will discuss about Managerial Decision-Making Environment:- 1. In the context of decision problems whose uncer­tain possible outcomes constitute rupee payments with known probabilities of occurrence, it has been observed by many that a simple preference for higher rupee amounts is not sufficient to explain the choices (that is, decisions) made by various in­dividuals. 200; if demand were going to be 150 units, he would place order for 200 units with a payoff of Rs. Choose an answer and hit 'next'. Decision-Making under Uncertainty (70985) Akademisches Jahr. The maximum regret values for each of the ac­tion or actions are presented below: The smallest possible regret (or minimum opportu­nity loss) would be incurred by ordering 200 units. 10 per shirt, if 200 or more are ordered, the cost is Rs. The player is supposed to receive or win 2n rupees as soon as the first head appears on the n-th toss. Share Your Word File There will also be a cost saving of Rs. 
Thus, the criterion is conservative in nature and is well-suited to firms whose very survival is at stake because of losses. In other words, even if the returns from project B are higher on average than that of A, the former exhibits greater varia­bility. The most obvious defect of the CE approach, outlined above, is that it requires the specification of a util­ity function so that risk premium can be numerical­ly measured or quantified. For example, we know that if we toss an unbiased coin, one of two equally likely outcomes (i.e., either head or tail) occur, and the probability of each outcome is prede­termined. So B chooses the minimax criterion. Under un­certain conditions the profits in the numerator, Rt – Ct = Pt, are really the expected value of the profits each year. This minimises A’s pay­off and therefore maximises his own. ACCA BT F1 MA F2 FA F3 LW F4 Eng PM F5 TX F6 UK FR F7 AA F8 FM F9 SBL SBR INT SBR UK AFM P4 APM P5 ATX P6 UK AAA P7 INT AAA P7 UK. In the final analysis, the inventory manager can easily toss out the A3 option, but he must still bear the burden of choosing A1 or A2 in the face of uncertain demand. Recall that risk is characterized as a state in which the decision-­maker has only imperfect information about the decision environment, i.e., the impact of all of the available alternatives. Thus, the prediction is that actual monetary values of the possible outcomes of the gamble fail to reflect the true preference of a representative individual for these outcomes. In our T-shirt example the minimum payoffs associated with each of the actions are presented below: If the decision-maker is a pessimist and assumes that nature will always be niggardly and uncharit­able the optimal decision would be to order 100 T- shirts because this action maximizes the minimum payoff. Uncertainty refers to a state in which the decision-maker lacks even the information to assign subjective probabili­ties. It is just a retail store selling readymade gar­ments. 
of only Rs. (8.1) assuming an alpha value of 0.25 are presented below: Thus, the decision-maker would choose A1, i.e., or­der 100 T-shirts. Since different share­holders are involved and they have different util­ity functions, which are not directly comparable, it is virtually impossible to arrive at a group utility function. Question 1 1.5 Pts • Decision Making Under Risk Means That: The Decision Maker Does Not Know The Alternatives Available. Bernoulli observed that gamblers did not respond to the expected ru­pee prices in games of chances. 9 per shirt; and if 300 or more shirts are ordered the cost is Rs. 65 lessons This re­veals the increasing marginal utility hypothesis The implication of this hypothesis is simple enough: as the individual’s wealth increases, he receives more extra utility from each extra rupee that he receives. (b) By reference to a theoretical probability distri­bution (such as the binomial distribution, Poisson distribution or normal distribution). 300), then his risk premium (RP) can be defined as: In such a situation Mr. Hari is willing to pay Rs. For simplicity, we assume that the prod­uct is perishable. For this reason it is necessary to look at the probability dis­tribution of the random variable, which is a listing- of the possible outcomes with the associated proba­bilities of those outcomes. Before publishing your Articles on this site, please read the following pages: 1. 125. 8.9 illustrates the relationship between K* and project risk. Under a state of risk, the decision maker has incomplete information about available alternatives but has a good idea of the probability of outcomes for each alternative. When these probabilities are known or can be estimated, the choice of an optimal action, based on these probabilities, is termed as decision making under risk. MC Question 16 - September 2016. 600? The implication is that the price that the firm faces is not stable. 
Pulmonary aspiration is defined by the inhalation of oro-pharyngeal or gastric contents into the larynx and the respiratory tract. Thus the optimal decision would be to accept the project, i.e., invest in the product. We can now compare the figures in brack­ets — (Rs. Table 8.2 depicts the regret matrix for the T-shirt invent­ory problem. Mainstream economics and finance is dominated by models of decision- making under risk under the rationality axioms, where modern macroeconomics has its analytical roots in the general equilibrium framework of Kenneth Arrow and Gerard Debreu (Arrow and Debreu, 1954). Decision theory involving 2 or more decision makers is known as game theory. They calculate expected utility in the same way expected value is calculated by multiplying the utility of each outcome by its probability of occur­rence, and then summing up the whole thing, thus: This criterion apparently appears to be very ef­fective. Risk Analysis 4. It is based on the belief that nature is unkind and that the decision-maker therefore should determine the worst possible out­come for each of the actions and select the one yielding the best of the worst (maximin) results. Chapter 4 Decision Analysis 97 includes risk analysis. You will receive your score and answers at the end. This criterion is, how­ever, criticized on the ground that the assumption of equally likely events may be incorrect and the user of this criterion must consider the basic validi­ty of the assumption. It may also be that the opponent’s utilities are not known at all: The decision problem would then have to be treated under uncertainty. The Decision Maker Is Generally Ignorant About The Whole Problem He Is Trying To Solve. The states of nature occur passively and in­dependently of the strategies chosen. A decision tree is used for sequential decision-making. By assigning subjective probabilities, the decision maker is, in essence, converting an uncertain situa­tion into a situation of risk. 
The price of tea next week may also be random owing to unforeseen shifts in supply and demand. Step 2: Developing a set of potential responses or viable solutions. Since the product is perishable, the manager has to sell all the output rather than store some of it for future sales. Since there are constant changes in market conditions and in the number (range) of competitive (rival) products, it is not possible to repeat the experiment under the same conditions hundreds of times. Fig. 8.8 presents the decision tree associated with the problem faced by Mr. Ram. The results of such computations are presented in Table 8.10 below: it is clear that construction of the prototype using conventional materials (A1) is the least risky alternative. The specific consequence or outcome depends not only on the decision (A1, A2, or A3) that is made but also on the event (D1, D2, or D3) that occurs. Suppose Mr. Hari has purchased a lottery ticket that offers a 50-50 gamble. If profit maximization does not appear to be a sensible goal, one has to search out or identify another objective function for the firm. True, expected value is a mathematical average: the mean of a probability distribution that neatly summarizes an entire distribution of outcomes. If the firm has to choose between alternative methods of operation, one with high expected profits and high risk and another with smaller expected profits and lower risk, will the higher expected profits be sufficient to neutralize the high degree of risk involved? Since NPV analysis uses a compounding factor in the denominator, (1+r)^t, incorporating a risk adjustment factor in the denominator to deflate future values heightens this compounding. However, in order to measure the riskiness of the three alternatives, Mr. Ram computes the standard deviation of each of the alternatives.
Hence, it involves more risk. When decisions are based on the EMV criterion, it is implicitly assumed that the decision-maker is able to withstand the short-run fluctuations and is a continuous participant in comparable EMV decision problems. However, the real commercial world is characterized by uncertainty. Thus, the inventory manager knows the maximum amount that he would pay for a perfect prediction of demand. Such objective probability is couched in terms of relative frequency. Thus even if two alternatives have the same EMV, the decision maker would choose the option having the least dispersion (or maximum concentration). Uncertainty does not suggest that the decision-maker has no knowledge at all. Firstly, in a large organization, whose utility function has to be used remains an open question. For the T-shirt example, the probability assigned to each of the three events would be 0.33, and the expected monetary value (EMV) would follow from equation (8.6). Since the events are mutually exclusive, the sum of their probabilities is equal to 1. The Rs. 4,000 represents the cost of production and marketing. Suppose that you have the following payoff matrix: select the optimal action by applying the maximin, maximax, Hurwicz (with a coefficient of optimism of 0.3), minimax regret and Laplace criteria.
If the future event that will occur could be predicted with certainty, the decision-maker would merely look down the column and select the optimal decision. For project A it is 0.183 and for project B, 0.297. We may now illustrate the concept. With external economies, such games could arise. The utility function is characterized by diminishing marginal utility of money. As another example, let us consider the following discrete probability distribution of prices. Alternatively, he may be a risk-lover, in which case he would not exit the game (part with the lottery ticket) unless he received more than Rs. 500 per ticket. Since profit is a random variable, the concept of maximum profit becomes meaningless. We simply calculate the standard deviations for projects A and B as the square roots of the variances σA² and σB². It means to choose one risk over another. Thus we can say that a payoff matrix provides the decision-maker with quantitative measures of the payoff for each possible consequence and for each alternative under consideration. It is interesting to note that this is the same decision (that is, indifference) as was obtained in the first part with the EMV criterion. If the conflict of interest is not complete, the game is called a non-zero-sum game. To a rational decision-maker, the value of information can be treated as the difference between what the payoff would be with the information currently available and the payoff that would be earned if he were to know with certainty the outcome prior to arriving at a decision. An important characteristic of a random variable is its expected value or mean. Whatever strategy B chooses, A will try to maximise his own pay-offs. Managers are required to examine the risk associated with each project before making a decision.
The manufacturer of these T-shirts has imposed a condition on you: you have to order in batches of 100. To build the regret matrix, all we have to do is to subtract each entry in the payoff matrix from the largest entry in its column. The average price is arrived at by multiplying each possible price by the probability of its occurrence and adding up the results. There will be interaction between the players, the basis of which is conflict of interest. The Hurwicz approach focuses on an index based on the derivation of a coefficient known as the coefficient of optimism. The expected opportunity loss criterion differs from the EMV in the sense that it involves the use of the regret matrix. For example, if the inventory manager knew, before arriving at the decision, that actual demand were going to be 100 units, the optimal decision would be to order 100 units. Thus Mr. Hari's average or expected payoff in this game can be computed. The decision-maker thus attaches his best estimate of the 'true' probability to each possible outcome. Find out his optimal strategy considering that (a) he is a partial optimist (Hurwicz criterion, with a coefficient of optimism of 60%), (b) he is an extreme pessimist (Savage criterion) and (c) he is a subjectivist (Laplace criterion). By putting the values of cash flow (X), expected value (EMV), and assigned probability from Table 8.6 into equation (8.13), we are in a position to quantify this risk. The maximax approach is known as the criterion of optimism because it is based on the assumption that nature is benevolent (kind). The payoffs are measured in terms of profit. Fig. 8.1 illustrates this observation, which corroborates the diminishing marginal utility hypothesis. In the EMV formula, the Xs refer to the payoffs from each event and the weights to the probabilities associated with each of the payoffs. Secondly, in the case of large private firms characterized by separation of ownership from management, whose utility function (the managers' or the shareholders') has to be used is another question. In actual conditions, a large number of problems involve states of nature. Therefore, by using the maximization of expected utility criterion, the rational entrepreneur would decide against the project. However, a closer scrutiny of the cash flows also reveals that project A has a smaller expected value but, at the same time, shows less variation and, according to our yardstick, appears to be less risky. In some cases, however, the relative frequency (also known as the classical) interpretation of probability does not work because repeated trials are not possible.
Secondly, complex problems arise in measuring the utility function of an individual. The regret values in Table 8.2 represent the difference in value between what one obtains for a given action and a given event and what one could obtain if one knew beforehand that the given event was, in fact, the actual event. He estimates that the probabilities associated with each of these outcomes are 0.25, 0.50 and 0.25, respectively. One may, for instance, ask what is the probability of successfully introducing a new breakfast food (like Maggi). Here, for the sake of simplicity, we consider only two probability distributions. (Try to guess why.) With complete conflict of interest the game is a zero-sum game. You have to decide how many men's T-shirts to order for the summer season. Fig. 8.2 makes one thing clear at least: when demand is random, the actual price is subject to a probability distribution. But the major defect of the EMV criterion is that it can obscure the presence of abnormally high potential losses or exceptionally attractive potential gains. We can now compare the monetary figures, (Rs. 500) and (Re. 0), in the upper tree with the expected utility figures, (-0.25) and (0), in the lower tree. The discussion falls under three headings: the decision-making environment under uncertainty, under risk analysis, and under certainty equivalents. Mr. Ram has the option of simultaneously pursuing the development of both prototypes. Expected value of perfect information (EVPI): so far our stress has been on selecting an alternative on the basis of information currently possessed by the decision-maker.
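The EMV and dispersion calculations referred to throughout follow one recipe: multiply each payoff by its probability and sum for the EMV, then use the probability-weighted squared deviations for the variance. A minimal sketch in Python (the payoffs and probabilities below are hypothetical, not taken from the article's tables):

```python
# Hypothetical three-event alternative: P(D1), P(D2), P(D3) and the payoffs in Rs.
probs = [0.25, 0.50, 0.25]
payoffs = [100.0, 400.0, 600.0]

# EMV: probability-weighted average of the payoffs.
emv = sum(p * x for p, x in zip(probs, payoffs))

# Risk: probability-weighted squared deviations around the EMV.
variance = sum(p * (x - emv) ** 2 for p, x in zip(probs, payoffs))
std_dev = variance ** 0.5

# Coefficient of variation: dispersion per rupee of expected value.
cv = std_dev / emv

print(emv, round(std_dev, 2), round(cv, 4))
```

The coefficient of variation is the index of relative risk used when comparing projects with different expected values, as with projects A and B in the text.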
Therefore, the entrepreneur with a linear utility function would show indifference between the two alternative actions when attempting to maximise expected utility; if his certainty equivalent exactly equalled the EMV, he would be risk-neutral. If the higher expected profits are sufficient to compensate for the extra risk, the riskier alternative will surely be preferred; otherwise the low-risk project or method of operation should be accepted. Finally, let us consider a situation in which the entrepreneur has a linear utility function, as shown in Fig. 8.16. Looking at the worst-case scenario and what can possibly go wrong with each decision is a good way to understand the pros and cons of different choices. A payoff matrix is a nice way of summarizing the interactions of the various alternative actions and events. There are two routes to measuring probability: the first is deductive and goes by the name a priori measurement; the second is based on statistical analysis of data and is called a posteriori measurement. By contrast, the RADR method focuses on the denominator of the valuation model. Equation (8.1) indicates that the more optimistic the decision maker, the larger will be the Hurwicz index value, and vice versa. Hence Mr. Ram is faced with a perplexing dilemma: a trade-off between risk and profitability. The slope of the utility function at any point measures marginal utility. The R&D engineers have succeeded in identifying two approaches, one utilizing conventional materials and another using a newly developed chip; the newly designed chip involves more risk, and if both prototypes are developed an additional cost of Rs. 107,000 has to be incurred, which exceeds the budgetary limit. Project B is characterized by a greater degree of risk than project A: using the coefficient of variation as the index of relative risk, alternative A is treated as less risky than alternative B, and the probabilities of losses for projects A and B are, respectively, 0.001 and 0.002. Most decision-makers (managers, investors) are risk averters who prefer to play safe and avoid risk; a risk lover, by contrast, is eager and willing to accept gambles, while a risk-indifferent decision-maker relies on expected values alone. On this basis, decision-makers are classified into three types: risk-averter, risk-indifferent and risk-lover. The distinction between risk and uncertainty was first drawn by F. H. Knight, who noted that risk is objective; yet once subjective probabilities are assigned (one may say, for example, "I feel the probability of this outcome is 0.6"), an uncertain problem can be analysed as a problem of risk. Under the Laplace criterion the possible outcomes are treated as equally likely, and under the minimax-regret (Savage) criterion the decision-maker should attempt to minimise his maximum regret. In a zero-sum game one player's gain is the other player's loss, and whatever strategy the opponent chooses, each player will try to maximise his own payoff; it is dissimilar utilities that give rise to non-zero-sum games. The question that remains is how much the decision-maker would pay for additional information: the expected value of perfect information sets the upper limit on the worth of gathering further information before arriving at a decision.
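The uncertainty criteria that recur throughout this piece (maximin, maximax, minimax regret) can be spelled out mechanically. A hedged sketch in Python, using a small hypothetical payoff matrix rather than the one in the article's exercise:

```python
# Hypothetical payoff matrix: keys are actions, values are payoffs
# under three states of nature (columns).
payoffs = {
    "A1": [700, 300, 150],
    "A2": [500, 450, 300],
    "A3": [300, 280, 320],
}

maximin = max(payoffs, key=lambda a: min(payoffs[a]))  # best of the worst outcomes
maximax = max(payoffs, key=lambda a: max(payoffs[a]))  # best of the best outcomes

# Minimax regret: regret = column maximum minus the entry; pick the action
# whose largest regret is smallest.
n = 3
col_max = [max(row[j] for row in payoffs.values()) for j in range(n)]
regret = {a: [col_max[j] - row[j] for j in range(n)] for a, row in payoffs.items()}
minimax_regret = min(regret, key=lambda a: max(regret[a]))

print(maximin, maximax, minimax_regret)
```

Note how the criteria disagree: the pessimist (maximin) and the optimist (maximax) can pick different actions from the same matrix, which is exactly the point the article makes about the decision-maker's attitude toward risk.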
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9400803,"math_prob":0.9306096,"size":34637,"snap":"2021-04-2021-17","text_gpt3_token_len":7129,"char_repetition_ratio":0.14812462,"word_repetition_ratio":0.04803801,"special_character_ratio":0.21009326,"punctuation_ratio":0.12133252,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97358507,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-11T14:56:25Z\",\"WARC-Record-ID\":\"<urn:uuid:149deedd-9a89-48d6-8505-ef0b01d02e56>\",\"Content-Length\":\"46176\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d65f4604-8073-4904-bc98-2b68be91d317>\",\"WARC-Concurrent-To\":\"<urn:uuid:9e69de21-6603-4537-8ad7-73d17dc3cf21>\",\"WARC-IP-Address\":\"23.97.216.47\",\"WARC-Target-URI\":\"https://www.bsynchro.com/m7b2uyfd/decision-making-under-risk-questions-and-answers-54cf56\",\"WARC-Payload-Digest\":\"sha1:BDWPIU5US54YWR64TKDVNMLYJFTTH2U6\",\"WARC-Block-Digest\":\"sha1:6KHAZOMFBRGLXGTI5XMGRJAEIPCZZDN5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038064520.8_warc_CC-MAIN-20210411144457-20210411174457-00195.warc.gz\"}"}
https://stacks.math.columbia.edu/tag/00RZ
[ "Lemma 10.131.16. Suppose $R \\to S$ is of finite type. Then $\\Omega _{S/R}$ is a finitely generated $S$-module.\n\nProof. This is very similar to, but easier than, the proof of Lemma 10.131.15. $\\square$" ]
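A sketch of the standard argument (my paraphrase under the usual finite-type presentation, not the Stacks Project's own proof text, which defers to Lemma 10.131.15):

```latex
% Since R \to S is of finite type, choose a presentation
%   S = R[x_1, \dots, x_n]/I.
% For the polynomial ring,
\Omega_{R[x_1, \dots, x_n]/R}
  \;=\; \bigoplus_{i=1}^{n} R[x_1, \dots, x_n]\, \mathrm{d}x_i ,
% and \Omega_{S/R} is a quotient of
%   \Omega_{R[x_1, \dots, x_n]/R} \otimes_{R[x_1, \dots, x_n]} S,
% hence it is generated as an S-module by \mathrm{d}x_1, \dots, \mathrm{d}x_n.
```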
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8018053,"math_prob":0.95338595,"size":579,"snap":"2022-40-2023-06","text_gpt3_token_len":145,"char_repetition_ratio":0.09043478,"word_repetition_ratio":0.0,"special_character_ratio":0.2642487,"punctuation_ratio":0.1322314,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9666904,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-02T13:13:17Z\",\"WARC-Record-ID\":\"<urn:uuid:39b90ccd-22ea-4421-8fea-911a8e4502bb>\",\"Content-Length\":\"14028\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:86b15ed0-aed7-4ccd-ba68-f8090b21a90c>\",\"WARC-Concurrent-To\":\"<urn:uuid:6f59c936-ccc8-43bd-8284-f1fca392d163>\",\"WARC-IP-Address\":\"128.59.222.85\",\"WARC-Target-URI\":\"https://stacks.math.columbia.edu/tag/00RZ\",\"WARC-Payload-Digest\":\"sha1:ZVLEQDTSXS5VF6SVCVF5PM7TZCUDWQAG\",\"WARC-Block-Digest\":\"sha1:D636E4IPNJCQUMDKHU5X6XK7FSGVYLPS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337322.29_warc_CC-MAIN-20221002115028-20221002145028-00783.warc.gz\"}"}
https://www.zadmei.com/zpzhblgp.html
[ "# Merging Two Sorted Lists in Python\n\n## Merging two sorted lists in Python\n\n``````list1 = [2,5,7,8,9]\nlist2 = [0,1,3,4,6]\n``````\n\n``````Sorted List: [0,1,2,3,4,5,6,7,8,9]\n``````\n\n## A naive approach to merging two sorted lists in Python\n\n``````firstList = [2, 7, 8, 9]\nsecondList = [0, 1, 4, 6]\nresult = []\ni, j = 0,0\nwhile i < len(firstList) and j < len(secondList):\nif firstList[i] < secondList[j]:\nresult.append(firstList[i])\ni = i+1\nelse:\nresult.append(secondList[j])\nj = j+1\nresult = result + firstList[i:] + secondList[j:]\nprint (\"Sorted List: \" + str(result))\n``````\n\n``````Sorted List: [0, 1, 2, 4, 6, 7, 8, 9]\n``````\n\n## Merging two sorted lists with the `heapq.merge()` method in Python\n\nThe `heapq` module in Python refers to a heap queue. This module, `heapq`, is mainly used to implement priority queues in Python.\n\nThe `heapq` module contains the `merge()` function, which takes multiple sorted lists as arguments and returns a single combined, merged list.\n\n``````from heapq import merge\nfirst = [2, 7, 8, 9]\nsecond = [0, 1, 4, 6]\nres = list(merge(first, second))\nprint(\"Merged Sorted list: \", str(res))\n``````\n\n``````Merged Sorted list: [0, 1, 2, 4, 6, 7, 8, 9]\n``````\n\n## Merging two sorted lists with the `sorted()` function in Python\n\nPython's `sorted()` function sorts the list or tuple supplied as an argument. It always returns a new sorted list without changing the original sequence.\n\n``````first = [2, 7, 9]\nsecond = [0, 1, 4]\nresult = sorted(first+second)\nprint(\"List after sorting: \", str(result))\n``````\n\n``````List after sorting: [0, 1, 2, 4, 7, 9]\n``````\n\n## Summary\n\nThe `sorted()` function is one of the built-in functions that can sort the concatenated lists, while `heapq.merge()` is a method for merging two already-sorted lists in Python. Both can do the job in a single line." ]
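One detail worth adding to the article above: since Python 3.5, `heapq.merge()` also accepts `key` and `reverse` keyword arguments, so already-sorted sequences of richer records can be merged lazily without re-sorting. A small sketch (the tuples here are illustrative):

```python
from heapq import merge

# Two lists of (score, label) pairs, each already sorted by score.
a = [(1, "a"), (4, "d")]
b = [(2, "b"), (3, "c")]

# merge() is lazy: it yields one item at a time instead of building the
# whole result in memory, which matters for large inputs.
merged = list(merge(a, b, key=lambda pair: pair[0]))
print(merged)  # [(1, 'a'), (2, 'b'), (3, 'c'), (4, 'd')]
```

This is the main practical advantage of `heapq.merge()` over `sorted(a + b)`: the inputs never need to fit in memory at once, and the per-item cost is O(log k) for k input sequences rather than a full O(n log n) sort.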
[ null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.97109926,"math_prob":0.9974,"size":2639,"snap":"2023-40-2023-50","text_gpt3_token_len":1773,"char_repetition_ratio":0.12675522,"word_repetition_ratio":0.07258064,"special_character_ratio":0.2519894,"punctuation_ratio":0.18226601,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98202884,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-27T16:06:10Z\",\"WARC-Record-ID\":\"<urn:uuid:9b33db9f-1cb5-4d73-85e1-a622d4bec362>\",\"Content-Length\":\"93839\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9850dd6c-7121-41f1-8b1d-4fb92b7c4237>\",\"WARC-Concurrent-To\":\"<urn:uuid:0401c8d1-2510-43d2-9eff-7383d8e49d6f>\",\"WARC-IP-Address\":\"8.45.176.227\",\"WARC-Target-URI\":\"https://www.zadmei.com/zpzhblgp.html\",\"WARC-Payload-Digest\":\"sha1:QPVAPRHLG2Q7INF252P4CHE76SVMOR2K\",\"WARC-Block-Digest\":\"sha1:JYLLXEPGSGMFEIVV5Z2VLJVA2M36O22V\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510300.41_warc_CC-MAIN-20230927135227-20230927165227-00253.warc.gz\"}"}
https://www.bartleby.com/solution-answer/chapter-5-problem-10rq-principles-of-economics-2e-2nd-edition/9781947172364/what-is-the-formula-for-calculating-elasticity/0dc35803-7236-11e9-8385-02ee952b546e
[ "Chapter 5, Problem 10RQ", null, "### Principles of Economics 2e\n\n2nd Edition\nSteven A. Greenlaw; David Shapiro\nISBN: 9781947172364\n\n# What is the formula for calculating elasticity?\n\nTo determine\n\nWrite the formula for calculating the elasticity.\n\nExplanation\n\nThe percentage formula for calculating the elasticity is:\n\ne = (% change in quantity of good) / (% change in price of good)\n\nWhen the percentage changes are not given directly, the elasticity is computed from the underlying quantities and prices." ]
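The explanation above cuts off before stating the second formula, so here is a hedged sketch of the method commonly used when percentage changes are not given directly: the midpoint (arc) approach, which measures each change against the average of the two points so the answer is the same in both directions. The numbers are illustrative, not from the textbook:

```python
def price_elasticity(q1, q2, p1, p2):
    """Midpoint (arc) elasticity of demand: percentage changes are taken
    relative to the average of the two quantities/prices."""
    pct_q = (q2 - q1) / ((q1 + q2) / 2)
    pct_p = (p2 - p1) / ((p1 + p2) / 2)
    return pct_q / pct_p

# Quantity falls from 100 to 80 as price rises from 4 to 5 (hypothetical).
e = price_elasticity(q1=100, q2=80, p1=4, p2=5)
print(round(e, 3))  # -1.0, i.e. unit elastic over this arc
```

A negative value is expected for ordinary demand curves; textbooks often report the absolute value and call |e| > 1 elastic and |e| < 1 inelastic.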
[ null, "https://www.bartleby.com/static/search-icon-white.svg", null, "https://www.bartleby.com/static/close-grey.svg", null, "https://www.bartleby.com/static/solution-list.svg", null, "https://www.bartleby.com/isbn_cover_images/9781947172364/9781947172364_largeCoverImage.jpg", null, "https://www.bartleby.com/isbn_cover_images/9781947172364/9781947172364_largeCoverImage.jpg", null, "https://www.bartleby.com/static/logo.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.77012455,"math_prob":0.8987355,"size":2476,"snap":"2019-43-2019-47","text_gpt3_token_len":526,"char_repetition_ratio":0.22694175,"word_repetition_ratio":0.10610932,"special_character_ratio":0.16882068,"punctuation_ratio":0.07225434,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99838483,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-13T19:25:20Z\",\"WARC-Record-ID\":\"<urn:uuid:a59fdeac-0dd4-4958-bfe3-0c5ab2f0d8f9>\",\"Content-Length\":\"308180\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4772052d-1df4-4708-9839-6d65ed72aaff>\",\"WARC-Concurrent-To\":\"<urn:uuid:9eaf7b90-da57-4898-a2d0-e6ae71b97733>\",\"WARC-IP-Address\":\"99.84.181.62\",\"WARC-Target-URI\":\"https://www.bartleby.com/solution-answer/chapter-5-problem-10rq-principles-of-economics-2e-2nd-edition/9781947172364/what-is-the-formula-for-calculating-elasticity/0dc35803-7236-11e9-8385-02ee952b546e\",\"WARC-Payload-Digest\":\"sha1:IX5S67CJZSLZBJMSJTOHQ7LGO6NYKWXX\",\"WARC-Block-Digest\":\"sha1:YYHU3D3USDAV3XPJNAHGPQ2D2ACIRCHG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496667333.2_warc_CC-MAIN-20191113191653-20191113215653-00161.warc.gz\"}"}
https://academy.techdata.com/au/training/course/0e079g-spvc/
[ "# Overview\n\nContains PDF course guide, as well as a lab environment where students can work through demonstrations and exercises at their own pace.\n\nThis course provides an introduction to supervised models, unsupervised models, and association models. This is an application-oriented course and examples include predicting whether customers cancel their subscription, predicting property values, segment customers based on usage, and market basket analysis.\n\nIf you are enrolling in a Self Paced Virtual Classroom or Web Based Training course, before you enroll, please review the Self-Paced Virtual Classes and Web-Based Training Classes on our Terms and Conditions page, as well as the system requirements, to ensure that your system meets the minimum requirements for this course. http://www.ibm.com/training/terms\n\n# Audience\n\n• Data scientists\n• Clients who want to learn about machine learning models\n\n# Objective\n\nIntroduction to machine learning models\n\n• Taxonomy of machine learning models\n• Identify measurement levels\n• Taxonomy of supervised models\n• Build and apply models in IBM SPSS Modeler\n\nSupervised models: Decision trees - CHAID\n• CHAID basics for categorical targets\n• Include categorical and continuous predictors\n• CHAID basics for continuous targets\n• Treatment of missing values\n\nSupervised models: Decision trees - C&R Tree\n• C&R Tree basics for categorical targets\n• Include categorical and continuous predictors\n• C&R Tree basics for continuous targets\n• Treatment of missing values\n\nEvaluation measures for supervised models\n• Evaluation measures for categorical targets\n• Evaluation measures for continuous targets\n\nSupervised models: Statistical models for continuous targets - Linear regression\n• Linear regression basics\n• Include categorical predictors\n• Treatment of missing values\n\nSupervised models: Statistical models for categorical targets - Logistic regression\n• Logistic regression basics\n• Include 
categorical predictors\n• Treatment of missing values\n\nAssociation models: Sequence detection\n• Sequence detection basics\n• Treatment of missing values\n\nSupervised models: Black box models - Neural networks\n• Neural network basics\n• Include categorical and continuous predictors\n• Treatment of missing values\n\nSupervised models: Black box models - Ensemble models\n• Ensemble models basics\n• Improve accuracy and generalizability by boosting and bagging\n• Ensemble the best models\n\nUnsupervised models: K-Means and Kohonen\n• K-Means basics\n• Include categorical inputs in K-Means\n• Treatment of missing values in K-Means\n• Kohonen networks basics\n• Treatment of missing values in Kohonen\n\nUnsupervised models: TwoStep and Anomaly detection\n• TwoStep basics\n• TwoStep assumptions\n• Find the best segmentation model automatically\n• Anomaly detection basics\n• Treatment of missing values\n\nAssociation models: Apriori\n• Apriori basics\n• Evaluation measures\n• Treatment of missing values\n\nPreparing data for modeling\n• Examine the quality of the data\n• Select important predictors\n• Balance the data\n\nShow details\n\n# Course Outline\n\nIntroduction to machine learning models\n• Taxonomy of machine learning models\n• Identify measurement levels\n• Taxonomy of supervised models\n• Build and apply models in IBM SPSS Modeler\n\nSupervised models: Decision trees - CHAID\n• CHAID basics for categorical targets\n• Include categorical and continuous predictors\n• CHAID basics for continuous targets\n• Treatment of missing values\n\nSupervised models: Decision trees - C&R Tree\n• C&R Tree basics for categorical targets\n• Include categorical and continuous predictors\n• C&R Tree basics for continuous targets\n• Treatment of missing values\n\nEvaluation measures for supervised models\n• Evaluation measures for categorical targets\n• Evaluation measures for continuous targets\n\nSupervised models: Statistical models for continuous targets - Linear 
regression\n• Linear regression basics\n• Include categorical predictors\n• Treatment of missing values\n\nSupervised models: Statistical models for categorical targets - Logistic regression\n• Logistic regression basics\n• Include categorical predictors\n• Treatment of missing values\n\nSupervised models: Black box models - Neural networks\n• Neural network basics\n• Include categorical and continuous predictors\n• Treatment of missing values\n\nSupervised models: Black box models - Ensemble models\n• Ensemble models basics\n• Improve accuracy and generalizability by boosting and bagging\n• Ensemble the best models\n\nUnsupervised models: K-Means and Kohonen\n• K-Means basics\n• Include categorical inputs in K-Means\n• Treatment of missing values in K-Means\n• Kohonen networks basics\n• Treatment of missing values in Kohonen\n\nUnsupervised models: TwoStep and Anomaly detection\n• TwoStep basics\n• TwoStep assumptions\n• Find the best segmentation model automatically\n• Anomaly detection basics\n• Treatment of missing values\n\nAssociation models: Apriori\n• Apriori basics\n• Evaluation measures\n• Treatment of missing values\n\nAssociation models: Sequence detection\n• Sequence detection basics\n• Treatment of missing values\n\nPreparing data for modeling\n• Examine the quality of the data\n• Select important predictors\n• Balance the data", null, "" ]
[ null, "https://academy.techdata.com/assets/promo-icons/share.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.77455735,"math_prob":0.7232358,"size":5291,"snap":"2021-21-2021-25","text_gpt3_token_len":1062,"char_repetition_ratio":0.17798373,"word_repetition_ratio":0.79373366,"special_character_ratio":0.17841618,"punctuation_ratio":0.052197803,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98089594,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-13T18:42:11Z\",\"WARC-Record-ID\":\"<urn:uuid:4ddbc210-27bf-4328-a7f9-464fa4dd0ad1>\",\"Content-Length\":\"34298\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:34d9fdb2-6dce-4366-b23c-aed5966a98e9>\",\"WARC-Concurrent-To\":\"<urn:uuid:cc798019-3563-41fe-be9e-51d1a7c4186a>\",\"WARC-IP-Address\":\"34.210.88.0\",\"WARC-Target-URI\":\"https://academy.techdata.com/au/training/course/0e079g-spvc/\",\"WARC-Payload-Digest\":\"sha1:RWZKRJIWZDIMT3554RPTKRBJAWMIVEXX\",\"WARC-Block-Digest\":\"sha1:AHIHOYQFPHU2EW5IXZ7FKUF66H5HGHGC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243991943.36_warc_CC-MAIN-20210513173321-20210513203321-00626.warc.gz\"}"}
https://stats.stackexchange.com/questions/447036/clarification-on-the-iid-assumption-in-machine-learning-who-is-sampled-from-whe
[ "# Clarification on the IID assumption in machine learning: who is sampled from where, and who is independent with who?\n\nSo there are a couple of questions on IID assumption on this stackexchange,\n\nOn the importance of the i.i.d. assumption in statistical learning\n\nRealistically, does the i.i.d. assumption hold for the vast majority of supervised learning tasks?\n\nHow can the IID assumption be checked in a given dataset?\n\nHow to generate data in order to fit the i.i.d. assumption in many machine learning applications?\n\nWhat exactly is p(x,y) in the context of iid assumption in machine learning?\n\nI just want to clarify, mathematically, what it means for people say that a data set $$\\{(x_i, y_i)\\}_{i =1, \\ldots, n}$$ is i.i.d. (or sampled in an i.i.d. fashion)\n\nSo my question is simply, how does this translate mathematically?\n\nMy current working definition:\n\n• (Identically distributed) Each sample $$(x_i,y_i)$$ is assumed to be sampled from a joint probability distribution $$p(x_i,y_i)$$ or in other words, each $$(x_i, y_i)$$ is the realization of the random variable $$(X,Y)\\sim p_{X,Y}(x_i,y_i)$$\n\n• (Independent) For $$i \\neq j$$, the realization $$(x_i,y_i)$$ is generated independently from $$(x_j, y_j)$$\n\nWhy I am unsatisfied/unsure about my definitions:\n\n1. For identically distributed, this question seems to say that each sample is drawn from the probability distribution over all the sample. That is, we assume that each data point $$(x_i, y_i)$$ is generated from a random variable $$(X_i, Y_i)$$, and that $$(x_i,y_i)$$ is not sampled from the distribution of $$(X_i, Y_i) \\sim p_{X,Y}(x_i,y_i)$$ but from the distribution of all random variables $$(X_1, \\ldots, X_n, Y_1, \\ldots, Y_n) \\sim p(x_1, \\ldots, x_n, y_1, \\ldots, y_n)$$ (subscript omitted). 
Which is different from the definition I have given above.\n\nThis is exactly what the notation $$({\\bf{X}}_i,y_i) \\sim \\mathbb{P}({\\bf{X}},y), \\forall i=1,...,N$$ means in the linked question.\n\nWhich one is correct?\n\n1. For independence: since $$(x_i, y_i)$$ and $$(x_j,y_j)$$ are just pairs of vectors, which are not random variables, hence we cannot speak about their independence. So to me, independence means that given $$(x_i, y_i)$$ and $$(x_j,y_j)$$ are generated by two (pairs of) random variables, $$(X_i, Y_i)$$ and $$(X_j, Y_j)$$, then the joint distribution $$p_{X_i, X_j, Y_i, Y_j}(x_i, x_j, y_i, y_j)$$ can be decomposed into $$p_{X_i, Y_i}(x_i, y_i)p_{X_j, Y_j}(x_j, y_j)$$.\n\nIs this correct?\n\nJust want to be mathematically rigorous about things. Any reference will help me!!!\n\n• Your question(s) isn't very clear. Are you suggesting something is contradictory? Nothing you stated looks wrong. Jan 30, 2020 at 4:51\n• @robsmith11 I want to mathematically express the sentence \"given an i.i.d. dataset\". I gave my definition. I found conflicting definitions, or definitions at various level of clarity/precision. I want to reconcile these definitions. Jan 30, 2020 at 5:08\n\nThese are essentially the same definition\n\nMy current working definition:\n\n• (Identically distributed) Each sample $$(x_i,y_i)$$ is assumed to be sampled from a joint probability distribution $$p(x_i,y_i)$$ or in other words, each $$(x_i, y_i)$$ is the realization of the random variable $$(X,Y)\\sim p_{X,Y}(x_i,y_i)$$\n\n• (Independent) For $$i \\neq j$$, the realization $$(x_i,y_i)$$ is generated independently from $$(x_j, y_j)$$\n\nThe first part of your definition describes the marginal distribution of each data point. The second part describes the relationship between the datapoints; given this information, one can uniquely obtain the joint distribution of the entire dataset.\n\n1. 
For identically distributed, this [question] seems to say that each sample is drawn from the probability distribution over all the sample. That is, we assume that each data point $$(x_i, y_i)$$ is generated from a random variable $$(X_i, Y_i)$$, and that $$(x_i,y_i)$$ is not sampled from the distribution of $$(X_i, Y_i) \\sim > p_{X,Y}(x_i,y_i)$$ but from the distribution of all random variables $$(X_1, \\ldots, X_n, Y_1, \\ldots, Y_n) \\sim p(x_1, \\ldots, x_n, y_1, > \\ldots, y_n)$$ (subscript omitted). Which is different from the definition I have given above.\n\nThis definition of i.i.d. is describing the joint distribution of the entire data set. Note that we can obtain the marginal distribution from the joint distribution just by integrating out the other terms, and that we can sample from the marginal distribution by sampling from the joint distribution then ignoring the other terms. Thus, both formulations are the same.\n\n1. For independence: since $$(x_i, y_i)$$ and $$(x_j,y_j)$$ are just pairs of vectors, which are not random variables, hence we cannot speak about their independence. So to me, independence means that given $$(x_i, y_i)$$ and $$(x_j,y_j)$$ are generated by two (pairs of) random variables, $$(X_i, Y_i)$$ and $$(X_j, Y_j)$$, then the joint distribution $$p_{X_i, X_j, Y_i, Y_j}(x_i, x_j, y_i, y_j)$$ can be decomposed into $$p_{X_i, Y_i}(x_i, y_i)p_{X_j, Y_j}(x_j, y_j)$$. Is this correct?\n\nYes.\n\n• Thank you! I will carefully look over your answer Jan 30, 2020 at 5:31" ]
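Both parts of the definition can be illustrated numerically. The sketch below assumes a concrete joint law — X ~ N(0, 1), Y = 2X + N(0, 0.5²), my choice rather than anything from the question — and produces each pair as one independent draw from it. Within a pair, x_i and y_i are strongly correlated (the joint distribution p(x, y) couples them); across pairs, consecutive draws are uncorrelated (independence over i):

```python
import random

def corr(a, b):
    """Pearson correlation, written out so the sketch is self-contained."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b)) / n
    va = sum((u - ma) ** 2 for u in a) / n
    vb = sum((v - mb) ** 2 for v in b) / n
    return cov / (va * vb) ** 0.5

rng = random.Random(0)
n = 100_000

# Each (x_i, y_i) is ONE draw from the same joint law p(x, y):
#   X ~ N(0, 1),  Y = 2X + N(0, 0.5^2).
# Draw i never looks at draw j, so the samples are independent.
xs, ys = [], []
for _ in range(n):
    x = rng.gauss(0.0, 1.0)
    y = 2.0 * x + rng.gauss(0.0, 0.5)
    xs.append(x)
    ys.append(y)

print(round(corr(xs, ys), 2))             # 0.97: x_i and y_i dependent within a pair
print(abs(corr(xs[:-1], xs[1:])) < 0.02)  # True: draw i tells nothing about draw i+1
```

Note that identical dependence *within* each pair coexisting with independence *between* pairs is exactly the factorization in point 2: the joint density of the whole dataset splits into a product of identical per-pair densities.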
Array\n(\n => 117.230.189.119\n)\n\n => Array\n(\n => 124.123.147.187\n)\n\n => Array\n(\n => 216.151.184.250\n)\n\n => Array\n(\n => 49.15.133.16\n)\n\n => Array\n(\n => 49.15.220.74\n)\n\n => Array\n(\n => 157.37.221.246\n)\n\n => Array\n(\n => 176.124.233.112\n)\n\n => Array\n(\n => 118.71.167.40\n)\n\n => Array\n(\n => 182.185.213.161\n)\n\n => Array\n(\n => 47.31.79.248\n)\n\n => Array\n(\n => 223.179.238.192\n)\n\n => Array\n(\n => 79.110.128.219\n)\n\n => Array\n(\n => 106.210.42.111\n)\n\n => Array\n(\n => 47.247.214.229\n)\n\n => Array\n(\n => 193.0.220.108\n)\n\n => Array\n(\n => 1.39.206.254\n)\n\n => Array\n(\n => 123.201.77.38\n)\n\n => Array\n(\n => 115.178.207.21\n)\n\n => Array\n(\n => 37.111.202.92\n)\n\n => Array\n(\n => 49.14.179.243\n)\n\n => Array\n(\n => 117.230.145.171\n)\n\n => Array\n(\n => 171.229.242.96\n)\n\n => Array\n(\n => 27.59.174.209\n)\n\n => Array\n(\n => 1.38.202.211\n)\n\n => Array\n(\n => 157.37.128.46\n)\n\n => Array\n(\n => 49.15.94.80\n)\n\n => Array\n(\n => 123.25.46.147\n)\n\n => Array\n(\n => 117.230.170.185\n)\n\n => Array\n(\n => 5.62.16.19\n)\n\n => Array\n(\n => 103.18.22.25\n)\n\n => Array\n(\n => 103.46.200.132\n)\n\n => Array\n(\n => 27.97.165.126\n)\n\n => Array\n(\n => 117.230.54.241\n)\n\n => Array\n(\n => 27.97.209.76\n)\n\n => Array\n(\n => 47.31.182.109\n)\n\n => Array\n(\n => 47.30.223.221\n)\n\n => Array\n(\n => 103.31.94.82\n)\n\n => Array\n(\n => 103.211.14.45\n)\n\n => Array\n(\n => 171.49.233.58\n)\n\n => Array\n(\n => 65.49.126.95\n)\n\n => Array\n(\n => 69.255.101.170\n)\n\n => Array\n(\n => 27.56.224.67\n)\n\n => Array\n(\n => 117.230.146.86\n)\n\n => Array\n(\n => 27.59.154.52\n)\n\n => Array\n(\n => 132.154.114.10\n)\n\n => Array\n(\n => 182.186.77.60\n)\n\n => Array\n(\n => 117.230.136.74\n)\n\n => Array\n(\n => 43.251.94.253\n)\n\n => Array\n(\n => 103.79.168.225\n)\n\n => Array\n(\n => 117.230.56.51\n)\n\n => Array\n(\n => 27.97.187.45\n)\n\n => Array\n(\n => 137.97.190.61\n)\n\n => Array\n(\n 
=> 193.0.220.26\n)\n\n => Array\n(\n => 49.36.137.62\n)\n\n => Array\n(\n => 47.30.189.248\n)\n\n => Array\n(\n => 109.169.23.84\n)\n\n => Array\n(\n => 111.119.185.46\n)\n\n => Array\n(\n => 103.83.148.246\n)\n\n => Array\n(\n => 157.32.119.138\n)\n\n => Array\n(\n => 5.62.41.53\n)\n\n => Array\n(\n => 47.8.243.236\n)\n\n => Array\n(\n => 112.79.158.69\n)\n\n => Array\n(\n => 180.92.148.218\n)\n\n => Array\n(\n => 157.36.162.154\n)\n\n => Array\n(\n => 39.46.114.47\n)\n\n => Array\n(\n => 117.230.173.250\n)\n\n => Array\n(\n => 117.230.155.188\n)\n\n => Array\n(\n => 193.0.220.17\n)\n\n => Array\n(\n => 117.230.171.166\n)\n\n => Array\n(\n => 49.34.59.228\n)\n\n => Array\n(\n => 111.88.197.247\n)\n\n => Array\n(\n => 47.31.156.112\n)\n\n => Array\n(\n => 137.97.64.180\n)\n\n => Array\n(\n => 14.244.227.18\n)\n\n => Array\n(\n => 113.167.158.8\n)\n\n => Array\n(\n => 39.37.175.189\n)\n\n => Array\n(\n => 139.167.211.8\n)\n\n => Array\n(\n => 73.120.85.235\n)\n\n => Array\n(\n => 104.236.195.72\n)\n\n => Array\n(\n => 27.97.190.71\n)\n\n => Array\n(\n => 79.46.170.222\n)\n\n => Array\n(\n => 102.185.244.207\n)\n\n => Array\n(\n => 37.111.136.30\n)\n\n => Array\n(\n => 50.7.93.28\n)\n\n => Array\n(\n => 110.54.251.43\n)\n\n => Array\n(\n => 49.36.143.40\n)\n\n => Array\n(\n => 103.130.112.185\n)\n\n => Array\n(\n => 37.111.139.202\n)\n\n => Array\n(\n => 49.36.139.108\n)\n\n => Array\n(\n => 37.111.136.179\n)\n\n => Array\n(\n => 123.17.165.77\n)\n\n => Array\n(\n => 49.207.143.206\n)\n\n => Array\n(\n => 39.53.80.149\n)\n\n => Array\n(\n => 223.188.71.214\n)\n\n => Array\n(\n => 1.39.222.233\n)\n\n => Array\n(\n => 117.230.9.85\n)\n\n => Array\n(\n => 103.251.245.216\n)\n\n => Array\n(\n => 122.169.133.145\n)\n\n => Array\n(\n => 43.250.165.57\n)\n\n => Array\n(\n => 39.44.13.235\n)\n\n => Array\n(\n => 157.47.181.2\n)\n\n => Array\n(\n => 27.56.203.50\n)\n\n => Array\n(\n => 191.96.97.58\n)\n\n => Array\n(\n => 111.88.107.172\n)\n\n => Array\n(\n => 
113.193.198.136\n)\n\n => Array\n(\n => 117.230.172.175\n)\n\n => Array\n(\n => 191.96.182.239\n)\n\n => Array\n(\n => 2.58.46.28\n)\n\n => Array\n(\n => 183.83.253.87\n)\n\n => Array\n(\n => 49.15.139.242\n)\n\n => Array\n(\n => 42.107.220.236\n)\n\n => Array\n(\n => 14.192.53.196\n)\n\n => Array\n(\n => 42.119.212.202\n)\n\n => Array\n(\n => 192.158.234.45\n)\n\n => Array\n(\n => 49.149.102.192\n)\n\n => Array\n(\n => 47.8.170.17\n)\n\n => Array\n(\n => 117.197.13.247\n)\n\n => Array\n(\n => 116.74.34.44\n)\n\n => Array\n(\n => 103.79.249.163\n)\n\n => Array\n(\n => 182.189.95.70\n)\n\n => Array\n(\n => 137.59.218.118\n)\n\n => Array\n(\n => 103.79.170.243\n)\n\n => Array\n(\n => 39.40.54.25\n)\n\n => Array\n(\n => 119.155.40.170\n)\n\n => Array\n(\n => 1.39.212.157\n)\n\n => Array\n(\n => 70.127.59.89\n)\n\n => Array\n(\n => 14.171.22.58\n)\n\n => Array\n(\n => 194.44.167.141\n)\n\n => Array\n(\n => 111.88.179.154\n)\n\n => Array\n(\n => 117.230.140.232\n)\n\n => Array\n(\n => 137.97.96.128\n)\n\n => Array\n(\n => 198.16.66.123\n)\n\n => Array\n(\n => 106.198.44.193\n)\n\n => Array\n(\n => 119.153.45.75\n)\n\n => Array\n(\n => 49.15.242.208\n)\n\n => Array\n(\n => 119.155.241.20\n)\n\n => Array\n(\n => 106.223.109.155\n)\n\n => Array\n(\n => 119.160.119.245\n)\n\n => Array\n(\n => 106.215.81.160\n)\n\n => Array\n(\n => 1.39.192.211\n)\n\n => Array\n(\n => 223.230.35.208\n)\n\n => Array\n(\n => 39.59.4.158\n)\n\n => Array\n(\n => 43.231.57.234\n)\n\n => Array\n(\n => 60.254.78.193\n)\n\n => Array\n(\n => 122.170.224.87\n)\n\n => Array\n(\n => 117.230.22.141\n)\n\n => Array\n(\n => 119.152.107.211\n)\n\n => Array\n(\n => 103.87.192.206\n)\n\n => Array\n(\n => 39.45.244.47\n)\n\n => Array\n(\n => 50.72.141.94\n)\n\n => Array\n(\n => 39.40.6.128\n)\n\n => Array\n(\n => 39.45.180.186\n)\n\n => Array\n(\n => 49.207.131.233\n)\n\n => Array\n(\n => 139.59.69.142\n)\n\n => Array\n(\n => 111.119.187.29\n)\n\n => Array\n(\n => 119.153.40.69\n)\n\n => Array\n(\n => 
49.36.133.64\n)\n\n => Array\n(\n => 103.255.4.249\n)\n\n => Array\n(\n => 198.144.154.15\n)\n\n => Array\n(\n => 1.22.46.172\n)\n\n => Array\n(\n => 103.255.5.46\n)\n\n => Array\n(\n => 27.56.195.188\n)\n\n => Array\n(\n => 203.101.167.53\n)\n\n => Array\n(\n => 117.230.62.195\n)\n\n => Array\n(\n => 103.240.194.186\n)\n\n => Array\n(\n => 107.170.166.118\n)\n\n => Array\n(\n => 101.53.245.80\n)\n\n => Array\n(\n => 157.43.13.208\n)\n\n => Array\n(\n => 137.97.100.77\n)\n\n => Array\n(\n => 47.31.150.208\n)\n\n => Array\n(\n => 137.59.222.65\n)\n\n => Array\n(\n => 103.85.127.250\n)\n\n => Array\n(\n => 103.214.119.32\n)\n\n => Array\n(\n => 182.255.49.52\n)\n\n => Array\n(\n => 103.75.247.72\n)\n\n => Array\n(\n => 103.85.125.250\n)\n\n => Array\n(\n => 183.83.253.167\n)\n\n => Array\n(\n => 1.39.222.111\n)\n\n => Array\n(\n => 111.119.185.9\n)\n\n => Array\n(\n => 111.119.187.10\n)\n\n => Array\n(\n => 39.37.147.144\n)\n\n => Array\n(\n => 103.200.198.183\n)\n\n => Array\n(\n => 1.39.222.18\n)\n\n => Array\n(\n => 198.8.80.103\n)\n\n => Array\n(\n => 42.108.1.243\n)\n\n => Array\n(\n => 111.119.187.16\n)\n\n => Array\n(\n => 39.40.241.8\n)\n\n => Array\n(\n => 122.169.150.158\n)\n\n => Array\n(\n => 39.40.215.119\n)\n\n => Array\n(\n => 103.255.5.77\n)\n\n => Array\n(\n => 157.38.108.196\n)\n\n => Array\n(\n => 103.255.4.67\n)\n\n => Array\n(\n => 5.62.60.62\n)\n\n => Array\n(\n => 39.37.146.202\n)\n\n => Array\n(\n => 110.138.6.221\n)\n\n => Array\n(\n => 49.36.143.88\n)\n\n => Array\n(\n => 37.1.215.39\n)\n\n => Array\n(\n => 27.106.59.190\n)\n\n => Array\n(\n => 139.167.139.41\n)\n\n => Array\n(\n => 114.142.166.179\n)\n\n => Array\n(\n => 223.225.240.112\n)\n\n => Array\n(\n => 103.255.5.36\n)\n\n => Array\n(\n => 175.136.1.48\n)\n\n => Array\n(\n => 103.82.80.166\n)\n\n => Array\n(\n => 182.185.196.126\n)\n\n => Array\n(\n => 157.43.45.76\n)\n\n => Array\n(\n => 119.152.132.49\n)\n\n => Array\n(\n => 5.62.62.162\n)\n\n => Array\n(\n => 103.255.4.39\n)\n\n 
=> Array\n(\n => 202.5.144.153\n)\n\n => Array\n(\n => 1.39.223.210\n)\n\n => Array\n(\n => 92.38.176.154\n)\n\n => Array\n(\n => 117.230.186.142\n)\n\n => Array\n(\n => 183.83.39.123\n)\n\n => Array\n(\n => 182.185.156.76\n)\n\n => Array\n(\n => 104.236.74.212\n)\n\n => Array\n(\n => 107.170.145.187\n)\n\n => Array\n(\n => 117.102.7.98\n)\n\n => Array\n(\n => 137.59.220.0\n)\n\n => Array\n(\n => 157.47.222.14\n)\n\n => Array\n(\n => 47.15.206.82\n)\n\n => Array\n(\n => 117.230.159.99\n)\n\n => Array\n(\n => 117.230.175.151\n)\n\n => Array\n(\n => 157.50.97.18\n)\n\n => Array\n(\n => 117.230.47.164\n)\n\n => Array\n(\n => 77.111.244.34\n)\n\n => Array\n(\n => 139.167.189.131\n)\n\n => Array\n(\n => 1.39.204.103\n)\n\n => Array\n(\n => 117.230.58.0\n)\n\n => Array\n(\n => 182.185.226.66\n)\n\n => Array\n(\n => 115.42.70.119\n)\n\n => Array\n(\n => 171.48.114.134\n)\n\n => Array\n(\n => 144.34.218.75\n)\n\n => Array\n(\n => 199.58.164.135\n)\n\n => Array\n(\n => 101.53.228.151\n)\n\n => Array\n(\n => 117.230.50.57\n)\n\n => Array\n(\n => 223.225.138.84\n)\n\n => Array\n(\n => 110.225.67.65\n)\n\n => Array\n(\n => 47.15.200.39\n)\n\n => Array\n(\n => 39.42.20.127\n)\n\n => Array\n(\n => 117.97.241.81\n)\n\n => Array\n(\n => 111.119.185.11\n)\n\n => Array\n(\n => 103.100.5.94\n)\n\n => Array\n(\n => 103.25.137.69\n)\n\n => Array\n(\n => 47.15.197.159\n)\n\n => Array\n(\n => 223.188.176.122\n)\n\n => Array\n(\n => 27.4.175.80\n)\n\n => Array\n(\n => 181.215.43.82\n)\n\n => Array\n(\n => 27.56.228.157\n)\n\n => Array\n(\n => 117.230.19.19\n)\n\n => Array\n(\n => 47.15.208.71\n)\n\n => Array\n(\n => 119.155.21.176\n)\n\n => Array\n(\n => 47.15.234.202\n)\n\n => Array\n(\n => 117.230.144.135\n)\n\n => Array\n(\n => 112.79.139.199\n)\n\n => Array\n(\n => 116.75.246.41\n)\n\n => Array\n(\n => 117.230.177.126\n)\n\n => Array\n(\n => 212.103.48.134\n)\n\n => Array\n(\n => 102.69.228.78\n)\n\n => Array\n(\n => 117.230.37.118\n)\n\n => Array\n(\n => 175.143.61.75\n)\n\n => 
Array\n(\n => 139.167.56.138\n)\n\n => Array\n(\n => 58.145.189.250\n)\n\n => Array\n(\n => 103.255.5.65\n)\n\n => Array\n(\n => 39.37.153.182\n)\n\n => Array\n(\n => 157.43.85.106\n)\n\n => Array\n(\n => 185.209.178.77\n)\n\n => Array\n(\n => 1.39.212.45\n)\n\n => Array\n(\n => 103.72.7.16\n)\n\n => Array\n(\n => 117.97.185.244\n)\n\n => Array\n(\n => 117.230.59.106\n)\n\n => Array\n(\n => 137.97.121.103\n)\n\n => Array\n(\n => 103.82.123.215\n)\n\n => Array\n(\n => 103.68.217.248\n)\n\n => Array\n(\n => 157.39.27.175\n)\n\n => Array\n(\n => 47.31.100.249\n)\n\n => Array\n(\n => 14.171.232.139\n)\n\n => Array\n(\n => 103.31.93.208\n)\n\n => Array\n(\n => 117.230.56.77\n)\n\n => Array\n(\n => 124.182.25.124\n)\n\n => Array\n(\n => 106.66.191.242\n)\n\n => Array\n(\n => 175.107.237.25\n)\n\n => Array\n(\n => 119.155.1.27\n)\n\n => Array\n(\n => 72.255.6.24\n)\n\n => Array\n(\n => 192.140.152.223\n)\n\n => Array\n(\n => 212.103.48.136\n)\n\n => Array\n(\n => 39.45.134.56\n)\n\n => Array\n(\n => 139.167.173.30\n)\n\n => Array\n(\n => 117.230.63.87\n)\n\n => Array\n(\n => 182.189.95.203\n)\n\n => Array\n(\n => 49.204.183.248\n)\n\n => Array\n(\n => 47.31.125.188\n)\n\n => Array\n(\n => 103.252.171.13\n)\n\n => Array\n(\n => 112.198.74.36\n)\n\n => Array\n(\n => 27.109.113.152\n)\n\n => Array\n(\n => 42.112.233.44\n)\n\n => Array\n(\n => 47.31.68.193\n)\n\n => Array\n(\n => 103.252.171.134\n)\n\n => Array\n(\n => 77.123.32.114\n)\n\n => Array\n(\n => 1.38.189.66\n)\n\n => Array\n(\n => 39.37.181.108\n)\n\n => Array\n(\n => 42.106.44.61\n)\n\n => Array\n(\n => 157.36.8.39\n)\n\n => Array\n(\n => 223.238.41.53\n)\n\n => Array\n(\n => 202.89.77.10\n)\n\n => Array\n(\n => 117.230.150.68\n)\n\n => Array\n(\n => 175.176.87.60\n)\n\n => Array\n(\n => 137.97.117.87\n)\n\n => Array\n(\n => 132.154.123.11\n)\n\n => Array\n(\n => 45.113.124.141\n)\n\n => Array\n(\n => 103.87.56.203\n)\n\n => Array\n(\n => 159.89.171.156\n)\n\n => Array\n(\n => 119.155.53.88\n)\n\n => Array\n(\n => 
222.252.107.215\n)\n\n => Array\n(\n => 132.154.75.238\n)\n\n => Array\n(\n => 122.183.41.168\n)\n\n => Array\n(\n => 42.106.254.158\n)\n\n => Array\n(\n => 103.252.171.37\n)\n\n => Array\n(\n => 202.59.13.180\n)\n\n => Array\n(\n => 37.111.139.137\n)\n\n => Array\n(\n => 39.42.93.25\n)\n\n => Array\n(\n => 118.70.177.156\n)\n\n => Array\n(\n => 117.230.148.64\n)\n\n => Array\n(\n => 39.42.15.194\n)\n\n => Array\n(\n => 137.97.176.86\n)\n\n => Array\n(\n => 106.210.102.113\n)\n\n => Array\n(\n => 39.59.84.236\n)\n\n => Array\n(\n => 49.206.187.177\n)\n\n => Array\n(\n => 117.230.133.11\n)\n\n => Array\n(\n => 42.106.253.173\n)\n\n => Array\n(\n => 178.62.102.23\n)\n\n => Array\n(\n => 111.92.76.175\n)\n\n => Array\n(\n => 132.154.86.45\n)\n\n => Array\n(\n => 117.230.128.39\n)\n\n => Array\n(\n => 117.230.53.165\n)\n\n => Array\n(\n => 49.37.200.171\n)\n\n => Array\n(\n => 104.236.213.230\n)\n\n => Array\n(\n => 103.140.30.81\n)\n\n => Array\n(\n => 59.103.104.117\n)\n\n => Array\n(\n => 65.49.126.79\n)\n\n => Array\n(\n => 202.59.12.251\n)\n\n => Array\n(\n => 37.111.136.17\n)\n\n => Array\n(\n => 163.53.85.67\n)\n\n => Array\n(\n => 123.16.240.73\n)\n\n => Array\n(\n => 103.211.14.183\n)\n\n => Array\n(\n => 103.248.93.211\n)\n\n => Array\n(\n => 116.74.59.127\n)\n\n => Array\n(\n => 137.97.169.254\n)\n\n => Array\n(\n => 113.177.79.100\n)\n\n => Array\n(\n => 74.82.60.187\n)\n\n => Array\n(\n => 117.230.157.66\n)\n\n => Array\n(\n => 169.149.194.241\n)\n\n => Array\n(\n => 117.230.156.11\n)\n\n => Array\n(\n => 202.59.12.157\n)\n\n => Array\n(\n => 42.106.181.25\n)\n\n => Array\n(\n => 202.59.13.78\n)\n\n => Array\n(\n => 39.37.153.32\n)\n\n => Array\n(\n => 177.188.216.175\n)\n\n => Array\n(\n => 222.252.53.165\n)\n\n => Array\n(\n => 37.139.23.89\n)\n\n => Array\n(\n => 117.230.139.150\n)\n\n => Array\n(\n => 104.131.176.234\n)\n\n => Array\n(\n => 42.106.181.117\n)\n\n => Array\n(\n => 117.230.180.94\n)\n\n => Array\n(\n => 180.190.171.5\n)\n\n => Array\n(\n 
=> 150.129.165.185\n)\n\n => Array\n(\n => 51.15.0.150\n)\n\n => Array\n(\n => 42.111.4.84\n)\n\n => Array\n(\n => 74.82.60.116\n)\n\n => Array\n(\n => 137.97.121.165\n)\n\n => Array\n(\n => 64.62.187.194\n)\n\n => Array\n(\n => 137.97.106.162\n)\n\n => Array\n(\n => 137.97.92.46\n)\n\n => Array\n(\n => 137.97.170.25\n)\n\n => Array\n(\n => 103.104.192.100\n)\n\n => Array\n(\n => 185.246.211.34\n)\n\n => Array\n(\n => 119.160.96.78\n)\n\n => Array\n(\n => 212.103.48.152\n)\n\n => Array\n(\n => 183.83.153.90\n)\n\n => Array\n(\n => 117.248.150.41\n)\n\n => Array\n(\n => 185.240.246.180\n)\n\n => Array\n(\n => 162.253.131.125\n)\n\n => Array\n(\n => 117.230.153.217\n)\n\n => Array\n(\n => 117.230.169.1\n)\n\n => Array\n(\n => 49.15.138.247\n)\n\n => Array\n(\n => 117.230.37.110\n)\n\n => Array\n(\n => 14.167.188.75\n)\n\n => Array\n(\n => 169.149.239.93\n)\n\n => Array\n(\n => 103.216.176.91\n)\n\n => Array\n(\n => 117.230.12.126\n)\n\n => Array\n(\n => 184.75.209.110\n)\n\n => Array\n(\n => 117.230.6.60\n)\n\n => Array\n(\n => 117.230.135.132\n)\n\n => Array\n(\n => 31.179.29.109\n)\n\n => Array\n(\n => 74.121.188.186\n)\n\n => Array\n(\n => 117.230.35.5\n)\n\n => Array\n(\n => 111.92.74.239\n)\n\n => Array\n(\n => 104.245.144.236\n)\n\n => Array\n(\n => 39.50.22.100\n)\n\n => Array\n(\n => 47.31.190.23\n)\n\n => Array\n(\n => 157.44.73.187\n)\n\n => Array\n(\n => 117.230.8.91\n)\n\n => Array\n(\n => 157.32.18.2\n)\n\n => Array\n(\n => 111.119.187.43\n)\n\n => Array\n(\n => 203.101.185.246\n)\n\n => Array\n(\n => 5.62.34.22\n)\n\n => Array\n(\n => 122.8.143.76\n)\n\n => Array\n(\n => 115.186.2.187\n)\n\n => Array\n(\n => 202.142.110.89\n)\n\n => Array\n(\n => 157.50.61.254\n)\n\n => Array\n(\n => 223.182.211.185\n)\n\n => Array\n(\n => 103.85.125.210\n)\n\n => Array\n(\n => 103.217.133.147\n)\n\n => Array\n(\n => 103.60.196.217\n)\n\n => Array\n(\n => 157.44.238.6\n)\n\n => Array\n(\n => 117.196.225.68\n)\n\n => Array\n(\n => 104.254.92.52\n)\n\n => Array\n(\n => 
39.42.46.72\n)\n\n => Array\n(\n => 221.132.119.36\n)\n\n => Array\n(\n => 111.92.77.47\n)\n\n => Array\n(\n => 223.225.19.152\n)\n\n => Array\n(\n => 159.89.121.217\n)\n\n => Array\n(\n => 39.53.221.205\n)\n\n => Array\n(\n => 193.34.217.28\n)\n\n => Array\n(\n => 139.167.206.36\n)\n\n => Array\n(\n => 96.40.10.7\n)\n\n => Array\n(\n => 124.29.198.123\n)\n\n => Array\n(\n => 117.196.226.1\n)\n\n => Array\n(\n => 106.200.85.135\n)\n\n => Array\n(\n => 106.223.180.28\n)\n\n => Array\n(\n => 103.49.232.110\n)\n\n => Array\n(\n => 139.167.208.50\n)\n\n => Array\n(\n => 139.167.201.102\n)\n\n => Array\n(\n => 14.244.224.237\n)\n\n => Array\n(\n => 103.140.31.187\n)\n\n => Array\n(\n => 49.36.134.136\n)\n\n => Array\n(\n => 160.16.61.75\n)\n\n => Array\n(\n => 103.18.22.228\n)\n\n => Array\n(\n => 47.9.74.121\n)\n\n => Array\n(\n => 47.30.216.159\n)\n\n => Array\n(\n => 117.248.150.78\n)\n\n => Array\n(\n => 5.62.34.17\n)\n\n => Array\n(\n => 139.167.247.181\n)\n\n => Array\n(\n => 193.176.84.29\n)\n\n => Array\n(\n => 103.195.201.121\n)\n\n => Array\n(\n => 89.187.175.115\n)\n\n => Array\n(\n => 137.97.81.251\n)\n\n => Array\n(\n => 157.51.147.62\n)\n\n => Array\n(\n => 103.104.192.42\n)\n\n => Array\n(\n => 14.171.235.26\n)\n\n => Array\n(\n => 178.62.89.121\n)\n\n => Array\n(\n => 119.155.4.164\n)\n\n => Array\n(\n => 43.250.241.89\n)\n\n => Array\n(\n => 103.31.100.80\n)\n\n => Array\n(\n => 119.155.7.44\n)\n\n => Array\n(\n => 106.200.73.114\n)\n\n => Array\n(\n => 77.111.246.18\n)\n\n => Array\n(\n => 157.39.99.247\n)\n\n => Array\n(\n => 103.77.42.132\n)\n\n => Array\n(\n => 74.115.214.133\n)\n\n => Array\n(\n => 117.230.49.224\n)\n\n => Array\n(\n => 39.50.108.238\n)\n\n => Array\n(\n => 47.30.221.45\n)\n\n => Array\n(\n => 95.133.164.235\n)\n\n => Array\n(\n => 212.103.48.141\n)\n\n => Array\n(\n => 104.194.218.147\n)\n\n => Array\n(\n => 106.200.88.241\n)\n\n => Array\n(\n => 182.189.212.211\n)\n\n => Array\n(\n => 39.50.142.129\n)\n\n => Array\n(\n => 
77.234.43.133\n)\n\n => Array\n(\n => 49.15.192.58\n)\n\n => Array\n(\n => 119.153.37.55\n)\n\n => Array\n(\n => 27.56.156.128\n)\n\n => Array\n(\n => 168.211.4.33\n)\n\n => Array\n(\n => 203.81.236.239\n)\n\n => Array\n(\n => 157.51.149.61\n)\n\n => Array\n(\n => 117.230.45.255\n)\n\n => Array\n(\n => 39.42.106.169\n)\n\n => Array\n(\n => 27.71.89.76\n)\n\n => Array\n(\n => 123.27.109.167\n)\n\n => Array\n(\n => 106.202.21.91\n)\n\n => Array\n(\n => 103.85.125.206\n)\n\n => Array\n(\n => 122.173.250.229\n)\n\n => Array\n(\n => 106.210.102.77\n)\n\n => Array\n(\n => 134.209.47.156\n)\n\n => Array\n(\n => 45.127.232.12\n)\n\n => Array\n(\n => 45.134.224.11\n)\n\n => Array\n(\n => 27.71.89.122\n)\n\n => Array\n(\n => 157.38.105.117\n)\n\n => Array\n(\n => 191.96.73.215\n)\n\n => Array\n(\n => 171.241.92.31\n)\n\n => Array\n(\n => 49.149.104.235\n)\n\n => Array\n(\n => 104.229.247.252\n)\n\n => Array\n(\n => 111.92.78.42\n)\n\n => Array\n(\n => 47.31.88.183\n)\n\n => Array\n(\n => 171.61.203.234\n)\n\n => Array\n(\n => 183.83.226.192\n)\n\n => Array\n(\n => 119.157.107.45\n)\n\n => Array\n(\n => 91.202.163.205\n)\n\n => Array\n(\n => 157.43.62.108\n)\n\n => Array\n(\n => 182.68.248.92\n)\n\n => Array\n(\n => 157.32.251.234\n)\n\n => Array\n(\n => 110.225.196.188\n)\n\n => Array\n(\n => 27.71.89.98\n)\n\n => Array\n(\n => 175.176.87.3\n)\n\n => Array\n(\n => 103.55.90.208\n)\n\n => Array\n(\n => 47.31.41.163\n)\n\n => Array\n(\n => 223.182.195.5\n)\n\n => Array\n(\n => 122.52.101.166\n)\n\n => Array\n(\n => 103.207.82.154\n)\n\n => Array\n(\n => 171.224.178.84\n)\n\n => Array\n(\n => 110.225.235.187\n)\n\n => Array\n(\n => 119.160.97.248\n)\n\n => Array\n(\n => 116.90.101.121\n)\n\n => Array\n(\n => 182.255.48.154\n)\n\n => Array\n(\n => 180.149.221.140\n)\n\n => Array\n(\n => 194.44.79.13\n)\n\n => Array\n(\n => 47.247.18.3\n)\n\n => Array\n(\n => 27.56.242.95\n)\n\n => Array\n(\n => 41.60.236.83\n)\n\n => Array\n(\n => 122.164.162.7\n)\n\n => Array\n(\n => 
71.136.154.5\n)\n\n => Array\n(\n => 132.154.119.122\n)\n\n => Array\n(\n => 110.225.80.135\n)\n\n => Array\n(\n => 84.17.61.143\n)\n\n => Array\n(\n => 119.160.102.244\n)\n\n => Array\n(\n => 47.31.27.44\n)\n\n => Array\n(\n => 27.71.89.160\n)\n\n => Array\n(\n => 107.175.38.101\n)\n\n => Array\n(\n => 195.211.150.152\n)\n\n => Array\n(\n => 157.35.250.255\n)\n\n => Array\n(\n => 111.119.187.53\n)\n\n => Array\n(\n => 119.152.97.213\n)\n\n => Array\n(\n => 180.92.143.145\n)\n\n => Array\n(\n => 72.255.61.46\n)\n\n => Array\n(\n => 47.8.183.6\n)\n\n => Array\n(\n => 92.38.148.53\n)\n\n => Array\n(\n => 122.173.194.72\n)\n\n => Array\n(\n => 183.83.226.97\n)\n\n => Array\n(\n => 122.173.73.231\n)\n\n => Array\n(\n => 119.160.101.101\n)\n\n => Array\n(\n => 93.177.75.174\n)\n\n => Array\n(\n => 115.97.196.70\n)\n\n => Array\n(\n => 111.119.187.35\n)\n\n => Array\n(\n => 103.226.226.154\n)\n\n => Array\n(\n => 103.244.172.73\n)\n\n => Array\n(\n => 119.155.61.222\n)\n\n => Array\n(\n => 157.37.184.92\n)\n\n => Array\n(\n => 119.160.103.204\n)\n\n => Array\n(\n => 175.176.87.21\n)\n\n => Array\n(\n => 185.51.228.246\n)\n\n => Array\n(\n => 103.250.164.255\n)\n\n => Array\n(\n => 122.181.194.16\n)\n\n => Array\n(\n => 157.37.230.232\n)\n\n => Array\n(\n => 103.105.236.6\n)\n\n => Array\n(\n => 111.88.128.174\n)\n\n => Array\n(\n => 37.111.139.82\n)\n\n => Array\n(\n => 39.34.133.52\n)\n\n => Array\n(\n => 113.177.79.80\n)\n\n => Array\n(\n => 180.183.71.184\n)\n\n => Array\n(\n => 116.72.218.255\n)\n\n => Array\n(\n => 119.160.117.26\n)\n\n => Array\n(\n => 158.222.0.252\n)\n\n => Array\n(\n => 23.227.142.146\n)\n\n => Array\n(\n => 122.162.152.152\n)\n\n => Array\n(\n => 103.255.149.106\n)\n\n => Array\n(\n => 104.236.53.155\n)\n\n => Array\n(\n => 119.160.119.155\n)\n\n => Array\n(\n => 175.107.214.244\n)\n\n => Array\n(\n => 102.7.116.7\n)\n\n => Array\n(\n => 111.88.91.132\n)\n\n => Array\n(\n => 119.157.248.108\n)\n\n => Array\n(\n => 222.252.36.107\n)\n\n => 
Array\n(\n => 157.46.209.227\n)\n\n => Array\n(\n => 39.40.54.1\n)\n\n => Array\n(\n => 223.225.19.254\n)\n\n => Array\n(\n => 154.72.150.8\n)\n\n => Array\n(\n => 107.181.177.130\n)\n\n => Array\n(\n => 101.50.75.31\n)\n\n => Array\n(\n => 84.17.58.69\n)\n\n => Array\n(\n => 178.62.5.157\n)\n\n => Array\n(\n => 112.206.175.147\n)\n\n => Array\n(\n => 137.97.113.137\n)\n\n => Array\n(\n => 103.53.44.154\n)\n\n => Array\n(\n => 180.92.143.129\n)\n\n => Array\n(\n => 14.231.223.7\n)\n\n => Array\n(\n => 167.88.63.201\n)\n\n => Array\n(\n => 103.140.204.8\n)\n\n => Array\n(\n => 221.121.135.108\n)\n\n => Array\n(\n => 119.160.97.129\n)\n\n => Array\n(\n => 27.5.168.249\n)\n\n => Array\n(\n => 119.160.102.191\n)\n\n => Array\n(\n => 122.162.219.12\n)\n\n => Array\n(\n => 157.50.141.122\n)\n\n => Array\n(\n => 43.245.8.17\n)\n\n => Array\n(\n => 113.181.198.179\n)\n\n => Array\n(\n => 47.30.221.59\n)\n\n => Array\n(\n => 110.38.29.246\n)\n\n => Array\n(\n => 14.192.140.199\n)\n\n => Array\n(\n => 24.68.10.106\n)\n\n => Array\n(\n => 47.30.209.179\n)\n\n => Array\n(\n => 106.223.123.21\n)\n\n => Array\n(\n => 103.224.48.30\n)\n\n => Array\n(\n => 104.131.19.173\n)\n\n => Array\n(\n => 119.157.100.206\n)\n\n => Array\n(\n => 103.10.226.73\n)\n\n => Array\n(\n => 162.208.51.163\n)\n\n => Array\n(\n => 47.30.221.227\n)\n\n => Array\n(\n => 119.160.116.210\n)\n\n => Array\n(\n => 198.16.78.43\n)\n\n => Array\n(\n => 39.44.201.151\n)\n\n => Array\n(\n => 71.63.181.84\n)\n\n => Array\n(\n => 14.142.192.218\n)\n\n => Array\n(\n => 39.34.147.178\n)\n\n => Array\n(\n => 111.92.75.25\n)\n\n => Array\n(\n => 45.135.239.58\n)\n\n => Array\n(\n => 14.232.235.1\n)\n\n => Array\n(\n => 49.144.100.155\n)\n\n => Array\n(\n => 62.182.99.33\n)\n\n => Array\n(\n => 104.243.212.187\n)\n\n => Array\n(\n => 59.97.132.214\n)\n\n => Array\n(\n => 47.9.15.179\n)\n\n => Array\n(\n => 39.44.103.186\n)\n\n => Array\n(\n => 183.83.241.132\n)\n\n => Array\n(\n => 103.41.24.180\n)\n\n => Array\n(\n => 
104.238.46.39\n)\n\n => Array\n(\n => 103.79.170.78\n)\n\n => Array\n(\n => 59.103.138.81\n)\n\n => Array\n(\n => 106.198.191.146\n)\n\n => Array\n(\n => 106.198.255.122\n)\n\n => Array\n(\n => 47.31.46.37\n)\n\n => Array\n(\n => 109.169.23.76\n)\n\n => Array\n(\n => 103.143.7.55\n)\n\n => Array\n(\n => 49.207.114.52\n)\n\n => Array\n(\n => 198.54.106.250\n)\n\n => Array\n(\n => 39.50.64.18\n)\n\n => Array\n(\n => 222.252.48.132\n)\n\n => Array\n(\n => 42.201.186.53\n)\n\n => Array\n(\n => 115.97.198.95\n)\n\n => Array\n(\n => 93.76.134.244\n)\n\n => Array\n(\n => 122.173.15.189\n)\n\n => Array\n(\n => 39.62.38.29\n)\n\n => Array\n(\n => 103.201.145.254\n)\n\n => Array\n(\n => 111.119.187.23\n)\n\n => Array\n(\n => 157.50.66.33\n)\n\n => Array\n(\n => 157.49.68.163\n)\n\n => Array\n(\n => 103.85.125.215\n)\n\n => Array\n(\n => 103.255.4.16\n)\n\n => Array\n(\n => 223.181.246.206\n)\n\n => Array\n(\n => 39.40.109.226\n)\n\n => Array\n(\n => 43.225.70.157\n)\n\n => Array\n(\n => 103.211.18.168\n)\n\n => Array\n(\n => 137.59.221.60\n)\n\n => Array\n(\n => 103.81.214.63\n)\n\n => Array\n(\n => 39.35.163.2\n)\n\n => Array\n(\n => 106.205.124.39\n)\n\n => Array\n(\n => 209.99.165.216\n)\n\n => Array\n(\n => 103.75.247.187\n)\n\n => Array\n(\n => 157.46.217.41\n)\n\n => Array\n(\n => 75.186.73.80\n)\n\n => Array\n(\n => 212.103.48.153\n)\n\n => Array\n(\n => 47.31.61.167\n)\n\n => Array\n(\n => 119.152.145.131\n)\n\n => Array\n(\n => 171.76.177.244\n)\n\n => Array\n(\n => 103.135.78.50\n)\n\n => Array\n(\n => 103.79.170.75\n)\n\n => Array\n(\n => 105.160.22.74\n)\n\n => Array\n(\n => 47.31.20.153\n)\n\n => Array\n(\n => 42.107.204.65\n)\n\n => Array\n(\n => 49.207.131.35\n)\n\n => Array\n(\n => 92.38.148.61\n)\n\n => Array\n(\n => 183.83.255.206\n)\n\n => Array\n(\n => 107.181.177.131\n)\n\n => Array\n(\n => 39.40.220.157\n)\n\n => Array\n(\n => 39.41.133.176\n)\n\n => Array\n(\n => 103.81.214.61\n)\n\n => Array\n(\n => 223.235.108.46\n)\n\n => Array\n(\n => 
171.241.52.118\n)\n\n => Array\n(\n => 39.57.138.47\n)\n\n => Array\n(\n => 106.204.196.172\n)\n\n => Array\n(\n => 39.53.228.40\n)\n\n => Array\n(\n => 185.242.5.99\n)\n\n => Array\n(\n => 103.255.5.96\n)\n\n => Array\n(\n => 157.46.212.120\n)\n\n => Array\n(\n => 107.181.177.138\n)\n\n => Array\n(\n => 47.30.193.65\n)\n\n => Array\n(\n => 39.37.178.33\n)\n\n => Array\n(\n => 157.46.173.29\n)\n\n => Array\n(\n => 39.57.238.211\n)\n\n => Array\n(\n => 157.37.245.113\n)\n\n => Array\n(\n => 47.30.201.138\n)\n\n => Array\n(\n => 106.204.193.108\n)\n\n => Array\n(\n => 212.103.50.212\n)\n\n => Array\n(\n => 58.65.221.187\n)\n\n => Array\n(\n => 178.62.92.29\n)\n\n => Array\n(\n => 111.92.77.166\n)\n\n => Array\n(\n => 47.30.223.158\n)\n\n => Array\n(\n => 103.224.54.83\n)\n\n => Array\n(\n => 119.153.43.22\n)\n\n => Array\n(\n => 223.181.126.251\n)\n\n => Array\n(\n => 39.42.175.202\n)\n\n => Array\n(\n => 103.224.54.190\n)\n\n => Array\n(\n => 49.36.141.210\n)\n\n => Array\n(\n => 5.62.63.218\n)\n\n => Array\n(\n => 39.59.9.18\n)\n\n => Array\n(\n => 111.88.86.45\n)\n\n => Array\n(\n => 178.54.139.5\n)\n\n => Array\n(\n => 116.68.105.241\n)\n\n => Array\n(\n => 119.160.96.187\n)\n\n => Array\n(\n => 182.189.192.103\n)\n\n => Array\n(\n => 119.160.96.143\n)\n\n => Array\n(\n => 110.225.89.98\n)\n\n => Array\n(\n => 169.149.195.134\n)\n\n => Array\n(\n => 103.238.104.54\n)\n\n => Array\n(\n => 47.30.208.142\n)\n\n => Array\n(\n => 157.46.179.209\n)\n\n => Array\n(\n => 223.235.38.119\n)\n\n => Array\n(\n => 42.106.180.165\n)\n\n => Array\n(\n => 154.122.240.239\n)\n\n => Array\n(\n => 106.223.104.191\n)\n\n => Array\n(\n => 111.93.110.218\n)\n\n => Array\n(\n => 182.183.161.171\n)\n\n => Array\n(\n => 157.44.184.211\n)\n\n => Array\n(\n => 157.50.185.193\n)\n\n => Array\n(\n => 117.230.19.194\n)\n\n => Array\n(\n => 162.243.246.160\n)\n\n => Array\n(\n => 106.223.143.53\n)\n\n => Array\n(\n => 39.59.41.15\n)\n\n => Array\n(\n => 106.210.65.42\n)\n\n => Array\n(\n => 
59.94.147.62\n)\n\n => Array\n(\n => 119.160.117.10\n)\n\n => Array\n(\n => 43.241.146.105\n)\n\n => Array\n(\n => 39.59.87.75\n)\n\n => Array\n(\n => 119.160.118.203\n)\n\n => Array\n(\n => 39.52.161.76\n)\n\n => Array\n(\n => 202.168.84.189\n)\n\n => Array\n(\n => 103.215.168.2\n)\n\n => Array\n(\n => 39.42.146.160\n)\n\n => Array\n(\n => 182.182.30.246\n)\n\n => Array\n(\n => 122.173.212.133\n)\n\n => Array\n(\n => 39.51.238.44\n)\n\n => Array\n(\n => 183.83.252.51\n)\n\n => Array\n(\n => 202.142.168.86\n)\n\n => Array\n(\n => 39.40.198.209\n)\n\n => Array\n(\n => 192.135.90.151\n)\n\n => Array\n(\n => 72.255.41.174\n)\n\n => Array\n(\n => 137.97.92.124\n)\n\n => Array\n(\n => 182.185.159.155\n)\n\n => Array\n(\n => 157.44.133.131\n)\n\n => Array\n(\n => 39.51.230.253\n)\n\n => Array\n(\n => 103.70.87.200\n)\n\n => Array\n(\n => 103.117.15.82\n)\n\n => Array\n(\n => 103.217.244.69\n)\n\n => Array\n(\n => 157.34.76.185\n)\n\n => Array\n(\n => 39.52.130.163\n)\n\n => Array\n(\n => 182.181.41.39\n)\n\n => Array\n(\n => 49.37.212.226\n)\n\n => Array\n(\n => 119.160.117.100\n)\n\n => Array\n(\n => 103.209.87.43\n)\n\n => Array\n(\n => 180.190.195.45\n)\n\n => Array\n(\n => 122.160.57.230\n)\n\n => Array\n(\n => 203.192.213.81\n)\n\n => Array\n(\n => 182.181.63.91\n)\n\n => Array\n(\n => 157.44.184.5\n)\n\n => Array\n(\n => 27.97.213.128\n)\n\n => Array\n(\n => 122.55.252.145\n)\n\n => Array\n(\n => 103.117.15.92\n)\n\n => Array\n(\n => 42.201.251.179\n)\n\n => Array\n(\n => 122.186.84.53\n)\n\n => Array\n(\n => 119.157.75.242\n)\n\n => Array\n(\n => 39.42.163.6\n)\n\n => Array\n(\n => 14.99.246.78\n)\n\n => Array\n(\n => 103.209.87.227\n)\n\n => Array\n(\n => 182.68.215.31\n)\n\n => Array\n(\n => 45.118.165.140\n)\n\n => Array\n(\n => 207.244.71.81\n)\n\n => Array\n(\n => 27.97.162.57\n)\n\n => Array\n(\n => 103.113.106.98\n)\n\n => Array\n(\n => 95.135.44.103\n)\n\n => Array\n(\n => 125.209.114.238\n)\n\n => Array\n(\n => 77.123.14.176\n)\n\n => Array\n(\n => 
110.36.202.169\n)\n\n => Array\n(\n => 124.253.205.230\n)\n\n => Array\n(\n => 106.215.72.117\n)\n\n => Array\n(\n => 116.72.226.35\n)\n\n => Array\n(\n => 137.97.103.141\n)\n\n => Array\n(\n => 112.79.212.161\n)\n\n => Array\n(\n => 103.209.85.150\n)\n\n => Array\n(\n => 103.159.127.6\n)\n\n => Array\n(\n => 43.239.205.66\n)\n\n => Array\n(\n => 143.244.51.152\n)\n\n => Array\n(\n => 182.64.15.3\n)\n\n => Array\n(\n => 182.185.207.146\n)\n\n => Array\n(\n => 45.118.165.155\n)\n\n => Array\n(\n => 115.160.241.214\n)\n\n => Array\n(\n => 47.31.230.68\n)\n\n => Array\n(\n => 49.15.84.145\n)\n\n => Array\n(\n => 39.51.239.206\n)\n\n => Array\n(\n => 103.149.154.212\n)\n\n => Array\n(\n => 43.239.207.155\n)\n\n => Array\n(\n => 182.182.30.181\n)\n\n => Array\n(\n => 157.37.198.16\n)\n\n => Array\n(\n => 162.239.24.60\n)\n\n => Array\n(\n => 106.212.101.97\n)\n\n => Array\n(\n => 124.253.97.44\n)\n\n => Array\n(\n => 106.214.95.176\n)\n\n => Array\n(\n => 102.69.228.114\n)\n\n => Array\n(\n => 116.74.58.221\n)\n\n => Array\n(\n => 162.210.194.38\n)\n\n => Array\n(\n => 39.52.162.121\n)\n\n => Array\n(\n => 103.216.143.255\n)\n\n => Array\n(\n => 103.49.155.134\n)\n\n => Array\n(\n => 182.191.119.236\n)\n\n => Array\n(\n => 111.88.213.172\n)\n\n => Array\n(\n => 43.239.207.207\n)\n\n => Array\n(\n => 140.213.35.143\n)\n\n => Array\n(\n => 154.72.153.215\n)\n\n => Array\n(\n => 122.170.47.36\n)\n\n => Array\n(\n => 51.158.111.163\n)\n\n => Array\n(\n => 203.122.10.150\n)\n\n => Array\n(\n => 47.31.176.111\n)\n\n => Array\n(\n => 103.75.246.34\n)\n\n => Array\n(\n => 103.244.178.45\n)\n\n => Array\n(\n => 182.185.138.0\n)\n\n => Array\n(\n => 183.83.254.224\n)\n\n => Array\n(\n => 49.36.246.145\n)\n\n => Array\n(\n => 202.47.60.85\n)\n\n => Array\n(\n => 180.190.163.160\n)\n\n => Array\n(\n => 27.255.187.221\n)\n\n => Array\n(\n => 14.248.94.2\n)\n\n => Array\n(\n => 185.233.17.187\n)\n\n => Array\n(\n => 139.5.254.227\n)\n\n => Array\n(\n => 103.149.160.66\n)\n\n => 
Array\n(\n => 122.168.235.47\n)\n\n => Array\n(\n => 45.113.248.224\n)\n\n => Array\n(\n => 110.54.170.142\n)\n\n => Array\n(\n => 223.235.226.55\n)\n\n => Array\n(\n => 157.32.19.235\n)\n\n => Array\n(\n => 49.15.221.114\n)\n\n => Array\n(\n => 27.97.166.163\n)\n\n => Array\n(\n => 223.233.99.5\n)\n\n => Array\n(\n => 49.33.203.53\n)\n\n => Array\n(\n => 27.56.214.41\n)\n\n => Array\n(\n => 103.138.51.3\n)\n\n => Array\n(\n => 111.119.183.21\n)\n\n => Array\n(\n => 47.15.138.233\n)\n\n => Array\n(\n => 202.63.213.184\n)\n\n => Array\n(\n => 49.36.158.94\n)\n\n => Array\n(\n => 27.97.186.179\n)\n\n => Array\n(\n => 27.97.214.69\n)\n\n => Array\n(\n => 203.128.18.163\n)\n\n => Array\n(\n => 106.207.235.63\n)\n\n => Array\n(\n => 116.107.220.231\n)\n\n => Array\n(\n => 223.226.169.249\n)\n\n => Array\n(\n => 106.201.24.6\n)\n\n => Array\n(\n => 49.15.89.7\n)\n\n => Array\n(\n => 49.15.142.20\n)\n\n => Array\n(\n => 223.177.24.85\n)\n\n => Array\n(\n => 37.156.17.37\n)\n\n => Array\n(\n => 102.129.224.2\n)\n\n => Array\n(\n => 49.15.85.221\n)\n\n => Array\n(\n => 106.76.208.153\n)\n\n => Array\n(\n => 61.2.47.71\n)\n\n => Array\n(\n => 27.97.178.79\n)\n\n => Array\n(\n => 39.34.143.196\n)\n\n => Array\n(\n => 103.10.227.158\n)\n\n => Array\n(\n => 117.220.210.159\n)\n\n => Array\n(\n => 182.189.28.11\n)\n\n => Array\n(\n => 122.185.38.170\n)\n\n => Array\n(\n => 112.196.132.115\n)\n\n => Array\n(\n => 187.156.137.83\n)\n\n => Array\n(\n => 203.122.3.88\n)\n\n => Array\n(\n => 51.68.142.45\n)\n\n => Array\n(\n => 124.253.217.55\n)\n\n => Array\n(\n => 103.152.41.2\n)\n\n => Array\n(\n => 157.37.154.219\n)\n\n => Array\n(\n => 39.45.32.77\n)\n\n => Array\n(\n => 182.182.22.221\n)\n\n => Array\n(\n => 157.43.205.117\n)\n\n => Array\n(\n => 202.142.123.58\n)\n\n => Array\n(\n => 43.239.207.121\n)\n\n => Array\n(\n => 49.206.122.113\n)\n\n => Array\n(\n => 106.193.199.203\n)\n\n => Array\n(\n => 103.67.157.251\n)\n\n => Array\n(\n => 49.34.97.81\n)\n\n => Array\n(\n => 
49.156.92.130\n)\n\n => Array\n(\n => 203.160.179.210\n)\n\n => Array\n(\n => 106.215.33.244\n)\n\n => Array\n(\n => 191.101.148.41\n)\n\n => Array\n(\n => 203.90.94.94\n)\n\n => Array\n(\n => 105.129.205.134\n)\n\n => Array\n(\n => 106.215.45.165\n)\n\n => Array\n(\n => 112.196.132.15\n)\n\n => Array\n(\n => 39.59.64.174\n)\n\n => Array\n(\n => 124.253.155.116\n)\n\n => Array\n(\n => 94.179.192.204\n)\n\n => Array\n(\n => 110.38.29.245\n)\n\n => Array\n(\n => 124.29.209.78\n)\n\n => Array\n(\n => 103.75.245.240\n)\n\n => Array\n(\n => 49.36.159.170\n)\n\n => Array\n(\n => 223.190.18.160\n)\n\n => Array\n(\n => 124.253.113.226\n)\n\n => Array\n(\n => 14.180.77.240\n)\n\n => Array\n(\n => 106.215.76.24\n)\n\n => Array\n(\n => 106.210.155.153\n)\n\n => Array\n(\n => 111.119.187.42\n)\n\n => Array\n(\n => 146.196.32.106\n)\n\n => Array\n(\n => 122.162.22.27\n)\n\n => Array\n(\n => 49.145.59.252\n)\n\n => Array\n(\n => 95.47.247.92\n)\n\n => Array\n(\n => 103.99.218.50\n)\n\n => Array\n(\n => 157.37.192.88\n)\n\n => Array\n(\n => 82.102.31.242\n)\n\n => Array\n(\n => 157.46.220.64\n)\n\n => Array\n(\n => 180.151.107.52\n)\n\n => Array\n(\n => 203.81.240.75\n)\n\n => Array\n(\n => 122.167.213.130\n)\n\n => Array\n(\n => 103.227.70.164\n)\n\n => Array\n(\n => 106.215.81.169\n)\n\n => Array\n(\n => 157.46.214.170\n)\n\n => Array\n(\n => 103.69.27.163\n)\n\n => Array\n(\n => 124.253.23.213\n)\n\n => Array\n(\n => 157.37.167.174\n)\n\n => Array\n(\n => 1.39.204.67\n)\n\n => Array\n(\n => 112.196.132.51\n)\n\n => Array\n(\n => 119.152.61.222\n)\n\n => Array\n(\n => 47.31.36.174\n)\n\n => Array\n(\n => 47.31.152.174\n)\n\n => Array\n(\n => 49.34.18.105\n)\n\n => Array\n(\n => 157.37.170.101\n)\n\n => Array\n(\n => 118.209.241.234\n)\n\n => Array\n(\n => 103.67.19.9\n)\n\n => Array\n(\n => 182.189.14.154\n)\n\n => Array\n(\n => 45.127.233.232\n)\n\n => Array\n(\n => 27.96.94.91\n)\n\n => Array\n(\n => 183.83.214.250\n)\n\n => Array\n(\n => 47.31.27.140\n)\n\n => Array\n(\n => 
47.31.129.199\n)\n\n => Array\n(\n => 157.44.156.111\n)\n\n => Array\n(\n => 42.110.163.2\n)\n\n => Array\n(\n => 124.253.64.210\n)\n\n => Array\n(\n => 49.36.167.54\n)\n\n => Array\n(\n => 27.63.135.145\n)\n\n => Array\n(\n => 157.35.254.63\n)\n\n => Array\n(\n => 39.45.18.182\n)\n\n => Array\n(\n => 197.210.85.102\n)\n\n => Array\n(\n => 112.196.132.90\n)\n\n => Array\n(\n => 59.152.97.84\n)\n\n => Array\n(\n => 43.242.178.7\n)\n\n => Array\n(\n => 47.31.40.70\n)\n\n => Array\n(\n => 202.134.10.136\n)\n\n => Array\n(\n => 132.154.241.43\n)\n\n => Array\n(\n => 185.209.179.240\n)\n\n => Array\n(\n => 202.47.50.28\n)\n\n => Array\n(\n => 182.186.1.29\n)\n\n => Array\n(\n => 124.253.114.229\n)\n\n => Array\n(\n => 49.32.210.126\n)\n\n => Array\n(\n => 43.242.178.122\n)\n\n => Array\n(\n => 42.111.28.52\n)\n\n => Array\n(\n => 23.227.141.44\n)\n\n => Array\n(\n => 23.227.141.156\n)\n\n => Array\n(\n => 103.253.173.79\n)\n\n => Array\n(\n => 116.75.231.74\n)\n\n => Array\n(\n => 106.76.78.196\n)\n\n => Array\n(\n => 116.75.197.68\n)\n\n => Array\n(\n => 42.108.172.131\n)\n\n => Array\n(\n => 157.38.27.199\n)\n\n => Array\n(\n => 103.70.86.205\n)\n\n => Array\n(\n => 119.152.63.239\n)\n\n => Array\n(\n => 103.233.116.94\n)\n\n => Array\n(\n => 111.119.188.17\n)\n\n => Array\n(\n => 103.196.160.156\n)\n\n => Array\n(\n => 27.97.208.40\n)\n\n => Array\n(\n => 188.163.7.136\n)\n\n => Array\n(\n => 49.15.202.205\n)\n\n => Array\n(\n => 124.253.201.111\n)\n\n => Array\n(\n => 182.190.213.246\n)\n\n => Array\n(\n => 5.154.174.10\n)\n\n => Array\n(\n => 103.21.185.16\n)\n\n => Array\n(\n => 112.196.132.67\n)\n\n => Array\n(\n => 49.15.194.230\n)\n\n => Array\n(\n => 103.118.34.103\n)\n\n => Array\n(\n => 49.15.201.92\n)\n\n => Array\n(\n => 42.111.13.238\n)\n\n => Array\n(\n => 203.192.213.137\n)\n\n => Array\n(\n => 45.115.190.82\n)\n\n => Array\n(\n => 78.26.130.102\n)\n\n => Array\n(\n => 49.15.85.202\n)\n\n => Array\n(\n => 106.76.193.33\n)\n\n => Array\n(\n => 
103.70.41.30\n)\n\n => Array\n(\n => 103.82.78.254\n)\n\n => Array\n(\n => 110.38.35.90\n)\n\n => Array\n(\n => 181.214.107.27\n)\n\n => Array\n(\n => 27.110.183.162\n)\n\n => Array\n(\n => 94.225.230.215\n)\n\n => Array\n(\n => 27.97.185.58\n)\n\n => Array\n(\n => 49.146.196.124\n)\n\n => Array\n(\n => 119.157.76.144\n)\n\n => Array\n(\n => 103.99.218.34\n)\n\n => Array\n(\n => 185.32.221.247\n)\n\n => Array\n(\n => 27.97.161.12\n)\n\n => Array\n(\n => 27.62.144.214\n)\n\n => Array\n(\n => 124.253.90.151\n)\n\n => Array\n(\n => 49.36.135.69\n)\n\n => Array\n(\n => 39.40.217.106\n)\n\n => Array\n(\n => 119.152.235.136\n)\n\n => Array\n(\n => 103.91.103.226\n)\n\n => Array\n(\n => 117.222.226.93\n)\n\n => Array\n(\n => 182.190.24.126\n)\n\n => Array\n(\n => 27.97.223.179\n)\n\n => Array\n(\n => 202.137.115.11\n)\n\n => Array\n(\n => 43.242.178.130\n)\n\n => Array\n(\n => 182.189.125.232\n)\n\n => Array\n(\n => 182.190.202.87\n)\n\n => Array\n(\n => 124.253.102.193\n)\n\n => Array\n(\n => 103.75.247.73\n)\n\n => Array\n(\n => 122.177.100.97\n)\n\n => Array\n(\n => 47.31.192.254\n)\n\n => Array\n(\n => 49.149.73.185\n)\n\n => Array\n(\n => 39.57.147.197\n)\n\n => Array\n(\n => 103.110.147.52\n)\n\n => Array\n(\n => 124.253.106.255\n)\n\n => Array\n(\n => 152.57.116.136\n)\n\n => Array\n(\n => 110.38.35.102\n)\n\n => Array\n(\n => 182.18.206.127\n)\n\n => Array\n(\n => 103.133.59.246\n)\n\n => Array\n(\n => 27.97.189.139\n)\n\n => Array\n(\n => 179.61.245.54\n)\n\n => Array\n(\n => 103.240.233.176\n)\n\n => Array\n(\n => 111.88.124.196\n)\n\n => Array\n(\n => 49.146.215.3\n)\n\n => Array\n(\n => 110.39.10.246\n)\n\n => Array\n(\n => 27.5.42.135\n)\n\n => Array\n(\n => 27.97.177.251\n)\n\n => Array\n(\n => 93.177.75.254\n)\n\n => Array\n(\n => 43.242.177.3\n)\n\n => Array\n(\n => 112.196.132.97\n)\n\n => Array\n(\n => 116.75.242.188\n)\n\n => Array\n(\n => 202.8.118.101\n)\n\n => Array\n(\n => 49.36.65.43\n)\n\n => Array\n(\n => 157.37.146.220\n)\n\n => Array\n(\n => 
157.37.143.235\n)\n\n => Array\n(\n => 157.38.94.34\n)\n\n => Array\n(\n => 49.36.131.1\n)\n\n => Array\n(\n => 132.154.92.97\n)\n\n => Array\n(\n => 132.154.123.115\n)\n\n => Array\n(\n => 49.15.197.222\n)\n\n => Array\n(\n => 124.253.198.72\n)\n\n => Array\n(\n => 27.97.217.95\n)\n\n => Array\n(\n => 47.31.194.65\n)\n\n => Array\n(\n => 197.156.190.156\n)\n\n => Array\n(\n => 197.156.190.230\n)\n\n => Array\n(\n => 103.62.152.250\n)\n\n => Array\n(\n => 103.152.212.126\n)\n\n => Array\n(\n => 185.233.18.177\n)\n\n => Array\n(\n => 116.75.63.83\n)\n\n => Array\n(\n => 157.38.56.125\n)\n\n => Array\n(\n => 119.157.107.195\n)\n\n => Array\n(\n => 103.87.50.73\n)\n\n => Array\n(\n => 95.142.120.141\n)\n\n => Array\n(\n => 154.13.1.221\n)\n\n => Array\n(\n => 103.147.87.79\n)\n\n => Array\n(\n => 39.53.173.186\n)\n\n => Array\n(\n => 195.114.145.107\n)\n\n => Array\n(\n => 157.33.201.185\n)\n\n => Array\n(\n => 195.85.219.36\n)\n\n => Array\n(\n => 105.161.67.127\n)\n\n => Array\n(\n => 110.225.87.77\n)\n\n => Array\n(\n => 103.95.167.236\n)\n\n => Array\n(\n => 89.187.162.213\n)\n\n => Array\n(\n => 27.255.189.50\n)\n\n => Array\n(\n => 115.96.77.54\n)\n\n => Array\n(\n => 223.182.220.223\n)\n\n => Array\n(\n => 157.47.206.192\n)\n\n => Array\n(\n => 182.186.110.226\n)\n\n => Array\n(\n => 39.53.243.237\n)\n\n => Array\n(\n => 39.40.228.58\n)\n\n => Array\n(\n => 157.38.60.9\n)\n\n => Array\n(\n => 106.198.244.189\n)\n\n => Array\n(\n => 124.253.51.164\n)\n\n => Array\n(\n => 49.147.113.58\n)\n\n => Array\n(\n => 14.231.196.229\n)\n\n => Array\n(\n => 103.81.214.152\n)\n\n => Array\n(\n => 117.222.220.60\n)\n\n => Array\n(\n => 83.142.111.213\n)\n\n => Array\n(\n => 14.224.77.147\n)\n\n => Array\n(\n => 110.235.236.95\n)\n\n => Array\n(\n => 103.26.83.30\n)\n\n => Array\n(\n => 106.206.191.82\n)\n\n => Array\n(\n => 103.49.117.135\n)\n\n => Array\n(\n => 202.47.39.9\n)\n\n => Array\n(\n => 180.178.145.205\n)\n\n => Array\n(\n => 43.251.93.119\n)\n\n => Array\n(\n => 
27.6.212.182\n)\n\n => Array\n(\n => 39.42.156.20\n)\n\n => Array\n(\n => 47.31.141.195\n)\n\n => Array\n(\n => 157.37.146.73\n)\n\n => Array\n(\n => 49.15.93.155\n)\n\n => Array\n(\n => 162.210.194.37\n)\n\n => Array\n(\n => 223.188.160.236\n)\n\n => Array\n(\n => 47.9.90.158\n)\n\n => Array\n(\n => 49.15.85.224\n)\n\n => Array\n(\n => 49.15.93.134\n)\n\n => Array\n(\n => 107.179.244.94\n)\n\n => Array\n(\n => 182.190.203.90\n)\n\n => Array\n(\n => 185.192.69.203\n)\n\n => Array\n(\n => 185.17.27.99\n)\n\n => Array\n(\n => 119.160.116.182\n)\n\n => Array\n(\n => 203.99.177.25\n)\n\n => Array\n(\n => 162.228.207.248\n)\n\n => Array\n(\n => 47.31.245.69\n)\n\n => Array\n(\n => 49.15.210.159\n)\n\n => Array\n(\n => 42.111.2.112\n)\n\n => Array\n(\n => 223.186.116.79\n)\n\n => Array\n(\n => 103.225.176.143\n)\n\n => Array\n(\n => 45.115.190.49\n)\n\n => Array\n(\n => 115.42.71.105\n)\n\n => Array\n(\n => 157.51.11.157\n)\n\n => Array\n(\n => 14.175.56.186\n)\n\n => Array\n(\n => 59.153.16.7\n)\n\n => Array\n(\n => 106.202.84.144\n)\n\n => Array\n(\n => 27.6.242.91\n)\n\n => Array\n(\n => 47.11.112.107\n)\n\n => Array\n(\n => 106.207.54.187\n)\n\n => Array\n(\n => 124.253.196.121\n)\n\n => Array\n(\n => 51.79.161.244\n)\n\n => Array\n(\n => 103.41.24.100\n)\n\n => Array\n(\n => 195.66.79.32\n)\n\n => Array\n(\n => 117.196.127.42\n)\n\n => Array\n(\n => 103.75.247.197\n)\n\n => Array\n(\n => 89.187.162.107\n)\n\n => Array\n(\n => 223.238.154.49\n)\n\n => Array\n(\n => 117.223.99.139\n)\n\n => Array\n(\n => 103.87.59.134\n)\n\n => Array\n(\n => 124.253.212.30\n)\n\n => Array\n(\n => 202.47.62.55\n)\n\n => Array\n(\n => 47.31.219.128\n)\n\n => Array\n(\n => 49.14.121.72\n)\n\n => Array\n(\n => 124.253.212.189\n)\n\n => Array\n(\n => 103.244.179.24\n)\n\n => Array\n(\n => 182.190.213.92\n)\n\n => Array\n(\n => 43.242.178.51\n)\n\n => Array\n(\n => 180.92.138.54\n)\n\n => Array\n(\n => 111.119.187.26\n)\n\n => Array\n(\n => 49.156.111.31\n)\n\n => Array\n(\n => 
27.63.108.183\n)\n\n => Array\n(\n => 27.58.184.79\n)\n\n => Array\n(\n => 39.40.225.130\n)\n\n => Array\n(\n => 157.38.5.178\n)\n\n => Array\n(\n => 103.112.55.44\n)\n\n => Array\n(\n => 119.160.100.247\n)\n\n => Array\n(\n => 39.53.101.15\n)\n\n => Array\n(\n => 47.31.207.117\n)\n\n => Array\n(\n => 112.196.158.155\n)\n\n => Array\n(\n => 94.204.247.123\n)\n\n => Array\n(\n => 103.118.76.38\n)\n\n => Array\n(\n => 124.29.212.208\n)\n\n => Array\n(\n => 124.253.196.250\n)\n\n => Array\n(\n => 118.70.182.242\n)\n\n => Array\n(\n => 157.38.78.67\n)\n\n => Array\n(\n => 103.99.218.33\n)\n\n => Array\n(\n => 137.59.220.191\n)\n\n => Array\n(\n => 47.31.139.182\n)\n\n => Array\n(\n => 182.179.136.36\n)\n\n => Array\n(\n => 106.203.73.130\n)\n\n => Array\n(\n => 193.29.107.188\n)\n\n => Array\n(\n => 81.96.92.111\n)\n\n => Array\n(\n => 110.93.203.185\n)\n\n => Array\n(\n => 103.163.248.128\n)\n\n => Array\n(\n => 43.229.166.135\n)\n\n => Array\n(\n => 43.230.106.175\n)\n\n => Array\n(\n => 202.47.62.54\n)\n\n => Array\n(\n => 39.37.181.46\n)\n\n => Array\n(\n => 49.15.204.204\n)\n\n => Array\n(\n => 122.163.237.110\n)\n\n => Array\n(\n => 45.249.8.92\n)\n\n => Array\n(\n => 27.34.50.159\n)\n\n => Array\n(\n => 39.42.171.27\n)\n\n => Array\n(\n => 124.253.101.195\n)\n\n => Array\n(\n => 188.166.145.20\n)\n\n => Array\n(\n => 103.83.145.220\n)\n\n => Array\n(\n => 39.40.96.137\n)\n\n => Array\n(\n => 157.37.185.196\n)\n\n => Array\n(\n => 103.115.124.32\n)\n\n => Array\n(\n => 72.255.48.85\n)\n\n => Array\n(\n => 124.253.74.46\n)\n\n => Array\n(\n => 60.243.225.5\n)\n\n => Array\n(\n => 103.58.152.194\n)\n\n => Array\n(\n => 14.248.71.63\n)\n\n => Array\n(\n => 152.57.214.137\n)\n\n => Array\n(\n => 103.166.58.14\n)\n\n => Array\n(\n => 14.248.71.103\n)\n\n => Array\n(\n => 49.156.103.124\n)\n\n => Array\n(\n => 103.99.218.56\n)\n\n => Array\n(\n => 27.97.177.246\n)\n\n => Array\n(\n => 152.57.94.84\n)\n\n => Array\n(\n => 111.119.187.60\n)\n\n => Array\n(\n => 
119.160.99.11\n)\n\n => Array\n(\n => 117.203.11.220\n)\n\n => Array\n(\n => 114.31.131.67\n)\n\n => Array\n(\n => 47.31.253.95\n)\n\n => Array\n(\n => 83.139.184.178\n)\n\n => Array\n(\n => 125.57.9.72\n)\n\n => Array\n(\n => 185.233.16.53\n)\n\n => Array\n(\n => 49.36.180.197\n)\n\n => Array\n(\n => 95.142.119.27\n)\n\n => Array\n(\n => 223.225.70.77\n)\n\n => Array\n(\n => 47.15.222.200\n)\n\n => Array\n(\n => 47.15.218.231\n)\n\n => Array\n(\n => 111.119.187.34\n)\n\n => Array\n(\n => 157.37.198.81\n)\n\n => Array\n(\n => 43.242.177.92\n)\n\n => Array\n(\n => 122.161.68.214\n)\n\n => Array\n(\n => 47.31.145.92\n)\n\n => Array\n(\n => 27.7.196.201\n)\n\n => Array\n(\n => 39.42.172.183\n)\n\n => Array\n(\n => 49.15.129.162\n)\n\n => Array\n(\n => 49.15.206.110\n)\n\n => Array\n(\n => 39.57.141.45\n)\n\n => Array\n(\n => 171.229.175.90\n)\n\n => Array\n(\n => 119.160.68.200\n)\n\n => Array\n(\n => 193.176.84.214\n)\n\n => Array\n(\n => 43.242.177.77\n)\n\n => Array\n(\n => 137.59.220.95\n)\n\n => Array\n(\n => 122.177.118.209\n)\n\n => Array\n(\n => 103.92.214.27\n)\n\n => Array\n(\n => 178.62.10.228\n)\n\n => Array\n(\n => 103.81.214.91\n)\n\n => Array\n(\n => 156.146.33.68\n)\n\n => Array\n(\n => 42.118.116.60\n)\n\n => Array\n(\n => 183.87.122.190\n)\n\n => Array\n(\n => 157.37.159.162\n)\n\n => Array\n(\n => 59.153.16.9\n)\n\n => Array\n(\n => 223.185.43.241\n)\n\n => Array\n(\n => 103.81.214.153\n)\n\n => Array\n(\n => 47.31.143.169\n)\n\n => Array\n(\n => 112.196.158.250\n)\n\n => Array\n(\n => 156.146.36.110\n)\n\n => Array\n(\n => 27.255.34.80\n)\n\n => Array\n(\n => 49.205.77.19\n)\n\n => Array\n(\n => 95.142.120.20\n)\n\n => Array\n(\n => 171.49.195.53\n)\n\n => Array\n(\n => 39.37.152.132\n)\n\n => Array\n(\n => 103.121.204.237\n)\n\n => Array\n(\n => 43.242.176.153\n)\n\n => Array\n(\n => 43.242.176.120\n)\n\n => Array\n(\n => 122.161.66.120\n)\n\n => Array\n(\n => 182.70.140.223\n)\n\n => Array\n(\n => 103.201.135.226\n)\n\n => Array\n(\n => 
202.47.44.135\n)\n\n => Array\n(\n => 182.179.172.27\n)\n\n => Array\n(\n => 185.22.173.86\n)\n\n => Array\n(\n => 67.205.148.219\n)\n\n => Array\n(\n => 27.58.183.140\n)\n\n => Array\n(\n => 39.42.118.163\n)\n\n => Array\n(\n => 117.5.204.59\n)\n\n => Array\n(\n => 223.182.193.163\n)\n\n => Array\n(\n => 157.37.184.33\n)\n\n => Array\n(\n => 110.37.218.92\n)\n\n => Array\n(\n => 106.215.8.67\n)\n\n => Array\n(\n => 39.42.94.179\n)\n\n => Array\n(\n => 106.51.25.124\n)\n\n => Array\n(\n => 157.42.25.212\n)\n\n => Array\n(\n => 43.247.40.170\n)\n\n => Array\n(\n => 101.50.108.111\n)\n\n => Array\n(\n => 117.102.48.152\n)\n\n => Array\n(\n => 95.142.120.48\n)\n\n => Array\n(\n => 183.81.121.160\n)\n\n => Array\n(\n => 42.111.21.195\n)\n\n => Array\n(\n => 50.7.142.180\n)\n\n => Array\n(\n => 223.130.28.33\n)\n\n => Array\n(\n => 107.161.86.141\n)\n\n => Array\n(\n => 117.203.249.159\n)\n\n => Array\n(\n => 110.225.192.64\n)\n\n => Array\n(\n => 157.37.152.168\n)\n\n => Array\n(\n => 110.39.2.202\n)\n\n => Array\n(\n => 23.106.56.52\n)\n\n => Array\n(\n => 59.150.87.85\n)\n\n => Array\n(\n => 122.162.175.128\n)\n\n => Array\n(\n => 39.40.63.182\n)\n\n => Array\n(\n => 182.190.108.76\n)\n\n => Array\n(\n => 49.36.44.216\n)\n\n => Array\n(\n => 73.105.5.185\n)\n\n => Array\n(\n => 157.33.67.204\n)\n\n => Array\n(\n => 157.37.164.171\n)\n\n => Array\n(\n => 192.119.160.21\n)\n\n => Array\n(\n => 156.146.59.29\n)\n\n => Array\n(\n => 182.190.97.213\n)\n\n => Array\n(\n => 39.53.196.168\n)\n\n => Array\n(\n => 112.196.132.93\n)\n\n => Array\n(\n => 182.189.7.18\n)\n\n => Array\n(\n => 101.53.232.117\n)\n\n => Array\n(\n => 43.242.178.105\n)\n\n => Array\n(\n => 49.145.233.44\n)\n\n => Array\n(\n => 5.107.214.18\n)\n\n => Array\n(\n => 139.5.242.124\n)\n\n => Array\n(\n => 47.29.244.80\n)\n\n => Array\n(\n => 43.242.178.180\n)\n\n => Array\n(\n => 194.110.84.171\n)\n\n => Array\n(\n => 103.68.217.99\n)\n\n => Array\n(\n => 182.182.27.59\n)\n\n => Array\n(\n => 
119.152.139.146\n)\n\n => Array\n(\n => 39.37.131.1\n)\n\n => Array\n(\n => 106.210.99.47\n)\n\n => Array\n(\n => 103.225.176.68\n)\n\n => Array\n(\n => 42.111.23.67\n)\n\n => Array\n(\n => 223.225.37.57\n)\n\n => Array\n(\n => 114.79.1.247\n)\n\n => Array\n(\n => 157.42.28.39\n)\n\n => Array\n(\n => 47.15.13.68\n)\n\n => Array\n(\n => 223.230.151.59\n)\n\n => Array\n(\n => 115.186.7.112\n)\n\n => Array\n(\n => 111.92.78.33\n)\n\n => Array\n(\n => 119.160.117.249\n)\n\n => Array\n(\n => 103.150.209.45\n)\n\n => Array\n(\n => 182.189.22.170\n)\n\n => Array\n(\n => 49.144.108.82\n)\n\n => Array\n(\n => 39.49.75.65\n)\n\n => Array\n(\n => 39.52.205.223\n)\n\n => Array\n(\n => 49.48.247.53\n)\n\n => Array\n(\n => 5.149.250.222\n)\n\n => Array\n(\n => 47.15.187.153\n)\n\n => Array\n(\n => 103.70.86.101\n)\n\n => Array\n(\n => 112.196.158.138\n)\n\n => Array\n(\n => 156.241.242.139\n)\n\n => Array\n(\n => 157.33.205.213\n)\n\n => Array\n(\n => 39.53.206.247\n)\n\n => Array\n(\n => 157.45.83.132\n)\n\n => Array\n(\n => 49.36.220.138\n)\n\n => Array\n(\n => 202.47.47.118\n)\n\n => Array\n(\n => 182.185.233.224\n)\n\n => Array\n(\n => 182.189.30.99\n)\n\n => Array\n(\n => 223.233.68.178\n)\n\n => Array\n(\n => 161.35.139.87\n)\n\n => Array\n(\n => 121.46.65.124\n)\n\n => Array\n(\n => 5.195.154.87\n)\n\n => Array\n(\n => 103.46.236.71\n)\n\n => Array\n(\n => 195.114.147.119\n)\n\n => Array\n(\n => 195.85.219.35\n)\n\n => Array\n(\n => 111.119.183.34\n)\n\n => Array\n(\n => 39.34.158.41\n)\n\n => Array\n(\n => 180.178.148.13\n)\n\n => Array\n(\n => 122.161.66.166\n)\n\n => Array\n(\n => 185.233.18.1\n)\n\n => Array\n(\n => 146.196.34.119\n)\n\n => Array\n(\n => 27.6.253.159\n)\n\n => Array\n(\n => 198.8.92.156\n)\n\n => Array\n(\n => 106.206.179.160\n)\n\n => Array\n(\n => 202.164.133.53\n)\n\n => Array\n(\n => 112.196.141.214\n)\n\n => Array\n(\n => 95.135.15.148\n)\n\n => Array\n(\n => 111.92.119.165\n)\n\n => Array\n(\n => 84.17.34.18\n)\n\n => Array\n(\n => 
49.36.232.117\n)\n\n => Array\n(\n => 122.180.235.92\n)\n\n => Array\n(\n => 89.187.163.177\n)\n\n => Array\n(\n => 103.217.238.38\n)\n\n => Array\n(\n => 103.163.248.115\n)\n\n => Array\n(\n => 156.146.59.10\n)\n\n => Array\n(\n => 223.233.68.183\n)\n\n => Array\n(\n => 103.12.198.92\n)\n\n => Array\n(\n => 42.111.9.221\n)\n\n => Array\n(\n => 111.92.77.242\n)\n\n => Array\n(\n => 192.142.128.26\n)\n\n => Array\n(\n => 182.69.195.139\n)\n\n => Array\n(\n => 103.209.83.110\n)\n\n => Array\n(\n => 207.244.71.80\n)\n\n => Array\n(\n => 41.140.106.29\n)\n\n => Array\n(\n => 45.118.167.65\n)\n\n => Array\n(\n => 45.118.167.70\n)\n\n => Array\n(\n => 157.37.159.180\n)\n\n => Array\n(\n => 103.217.178.194\n)\n\n => Array\n(\n => 27.255.165.94\n)\n\n => Array\n(\n => 45.133.7.42\n)\n\n => Array\n(\n => 43.230.65.168\n)\n\n => Array\n(\n => 39.53.196.221\n)\n\n => Array\n(\n => 42.111.17.83\n)\n\n => Array\n(\n => 110.39.12.34\n)\n\n => Array\n(\n => 45.118.158.169\n)\n\n => Array\n(\n => 202.142.110.165\n)\n\n => Array\n(\n => 106.201.13.212\n)\n\n => Array\n(\n => 103.211.14.94\n)\n\n => Array\n(\n => 160.202.37.105\n)\n\n => Array\n(\n => 103.99.199.34\n)\n\n => Array\n(\n => 183.83.45.104\n)\n\n => Array\n(\n => 49.36.233.107\n)\n\n => Array\n(\n => 182.68.21.51\n)\n\n => Array\n(\n => 110.227.93.182\n)\n\n => Array\n(\n => 180.178.144.251\n)\n\n => Array\n(\n => 129.0.102.0\n)\n\n => Array\n(\n => 124.253.105.176\n)\n\n => Array\n(\n => 105.156.139.225\n)\n\n => Array\n(\n => 208.117.87.154\n)\n\n => Array\n(\n => 138.68.185.17\n)\n\n => Array\n(\n => 43.247.41.207\n)\n\n => Array\n(\n => 49.156.106.105\n)\n\n => Array\n(\n => 223.238.197.124\n)\n\n => Array\n(\n => 202.47.39.96\n)\n\n => Array\n(\n => 223.226.131.80\n)\n\n => Array\n(\n => 122.161.48.139\n)\n\n => Array\n(\n => 106.201.144.12\n)\n\n => Array\n(\n => 122.178.223.244\n)\n\n => Array\n(\n => 195.181.164.65\n)\n\n => Array\n(\n => 106.195.12.187\n)\n\n => Array\n(\n => 124.253.48.48\n)\n\n => Array\n(\n 
49.36.186.0\n)\n\n => Array\n(\n => 202.181.5.4\n)\n\n => Array\n(\n => 45.118.165.144\n)\n\n => Array\n(\n => 171.96.157.133\n)\n\n => Array\n(\n => 222.252.51.163\n)\n\n => Array\n(\n => 103.81.215.162\n)\n\n => Array\n(\n => 110.225.93.208\n)\n\n => Array\n(\n => 122.161.48.200\n)\n\n => Array\n(\n => 119.63.138.173\n)\n\n => Array\n(\n => 202.83.58.208\n)\n\n => Array\n(\n => 122.161.53.101\n)\n\n => Array\n(\n => 137.97.95.21\n)\n\n => Array\n(\n => 112.204.167.123\n)\n\n => Array\n(\n => 122.180.21.151\n)\n\n => Array\n(\n => 103.120.44.108\n)\n\n)\n```\nArchive for February, 2008: Gramma Ray's Blog\n Layout: Blue and Brown (Default) Author's Creation\n Home > Archive: February, 2008\n\n# Archive for February, 2008\n\n## Score at the store!!\n\nFebruary 28th, 2008 at 04:36 am\n\nI scored today guys. Have you ever heard of Harry & Davids??? They are a big corp who specialize in fruits, nuts, gift baskets, etc. Well, their headquarters is right here...pretty darn close to where I work. So I stopped in their store on my lunch break today and they had baskets on sale super cheap!! What's cool is that they are great quality. I bought a couple of those picnic baskets with the lifting lids, fully lined for \\$9.99...I bought wood bed trays ( for breakfast in bed) white with pretty scalloped edges for \\$4.99 each...easter baskets for \\$2.99-\\$3.99 ...and some great red totes that I can use for gift bags at christmas for \\$2.47...WOW. I love living near this company because they sell their overstocks at ridiculously low prices after the holidays. This stuff all normally sold for \\$30-75 originally. wow.\n\nI thought I would put a great teapot for one, napkin and some teas and cookies together for the bed tray... or a vase with a rose and a magazine and coffees...you get the idea. I am going to keep one of the picnic baskets. 
I've wanted one forever, but never felt justified spending so much..but for \\$9.99 I am thrilled!!\n\nThe red totes...oh I love this...I bought 8 and will watch for sales all year-- when the whole family goes for our week at the big rental house the week after Christmas, I will give everyone one as a care package to use that week...candles, toiletries, deck of cards, etc...too fun!!\n\nI love good deals...REALLY LOVE GREAT DEALS!!!\n\n## Diagnosis...and a year back in the 401K results\n\nFebruary 23rd, 2008 at 03:36 am\n\nI recently received the diagnosis on my recurring pinched nerve. This has occurred twice..and each time causes about 2-3 months of spasms and pain in my right arm. I have a moderate to severe degeneration in my c6-7 vertebrae. Chances of recurrence are high and I am a candidate for back surgery...ugh.\n\nI am choosing to take my chances and see if it recurs...at which point I will see about an MRI to nail down whether it is a simple bone shave --- or more intrusive. (neither sounds fun to me)...\n\nThe upside of life is exciting. I have been back in the workforce for 11 months...and my 401k is on track to hit 10K by my one year anniversary!! YEAH!\n\nThings have been very busy, but also very good.\n\n## Fingers crossed the hub's back is fixed...\n\nFebruary 20th, 2008 at 02:18 am\n\nThe hub had his back procedure today. Instead of another surgery, they went in with a needle and moved the nerve and injected something to keep the nerve from being pinched again. We will know within a day or two if it is indeed fixed. Please keep your fingers crossed for him.\n\nWe did stop on the way home and lunch at Olive Garden...I love their chicken and gnocchi...yum. It was a definite splurge..both financially and wellness plan wise. But YUM. We spent \\$30 (incl tip)\n\nI re-booked the house at the resort for my birthday weekend. Not because I necessarily believe in spending a lot on my birthday- I do not. 
But because it was so much fun the last time and I figure by June we will be ready to get away again. Last year, we would have just hopped in the truck and pulled the RV somewhere peaceful...but the kids are living in ours now- since a tree fell on theirs last year. So, a new type of vacationing...and one I am enjoying quite a bit!!\n\nIn April, DD1, my mom and I are flying down to So. Cali for a wedding. An expense that has to come out of the vacation fund.\n\nI am also starting to plan the hub's 40th birthday party. I have never thrown him a party and so this year we are going all out. My DIL's mom has a great party house..pool table, swimming pool, volleyball court, horseshoes, etc...and so I am renting it from her and we are going to have a luau...complete with a pig in the ground...grass skirts, fruity drinks, tiki torches...\n\nAnd then, we are renting a LARGE home for New Years as a Christmas gift to the entire family. I felt we should have something besides the one year anniversary of papa's death to anticipate...and I think it is working. Everyone is focusing on taking a big family vacation this year.\n\nQuite a few expenses this year to be sure...but all budgeted and being saved for. (Sometimes I feel like I am out of my league with all of you who have budgeted your vacations down to a science.) But for now, this is working for us. The upside is that we will be cooking all meals at home during the house stays...and I get to stay at a friend's for free the week of the wedding.\n\n## Finance Blog--or LIFE blog....\n\nFebruary 18th, 2008 at 07:09 pm\n\nI look back at many of my blogs and realize that my blog is not so much a financial blog as it is a life blog. 
I always try to incorporate something financial into my posts...but usually finance is in the back seat of what I'm blogging.\n\nHow wonderful (for me) it is to be able to read about those two years off to be with my family...and how we've welcomed new babies...financially caught up and then took a few steps backward...\n\nAnd then those last few days saying goodbye to my dad...wow. Those of you who have read my blog know me better than most that I call \"friends\" in \"real\" life.\n\nAnd I have a record of some very important life lessons...right here at saving advice!! And even more important is all of your uplifting, supportive and inspiring responses.\n\nWhich is exactly why I credit all of you for helping me find my way back to a careful spending plan now that things have settled down. I could have gone off the deep end and started spending...but you all and your kind words, encouraging support and gentle nudges in the right direction have helped me get right back on track. I think it is wonderful that we have a community of friends who support each other whether it is with financial advice, weight loss support, medical worries, child rearing, thrifty ideas, recipes, ...basically LIFE.\n\nSo much more than just a financial blog...how lucky for all of us that stumbled into this community...HUGS to you all!\n\n## Spendy week...Cheap Weekend\n\nFebruary 18th, 2008 at 12:56 am\n\nMy good friend from Cali was here for a few days last week. She and I have been friends since we were 12. While she was here we went out to dinner a couple of times and then went wine tasting...we had a blast. The hub paid for dinner one night out of the wood business account...so only one of the meals out came out of the household account. 
I did treat her to the afternoon of wine tasting and bought her a couple of bottles of wine...but that all came out of my saved allowance....so, this weekend we ate all meals at home and did not spend much money...actually, all we spent was \\$4.50 when we went to the movies today. We had gift cards...but by the time we paid for the tickets and got popcorn and soda...we owed \\$4.50. I wish I could say it was worth it...but I didn't really enjoy the movie (I Am Legend) and \\$5.75 for a \"large\" popcorn makes my head spin...but it was the family outing that DH and DD3 wanted--so I won't complain too much.\n\nThe household account is still on track..which is a relief.\n\nThe past two days have been absolutely GORGEOUS...sunny, 60's...quite nice after months of rain and snow. I am off for President's Day tomorrow...and am sure hoping for more of that sunshine!!!\n\nTonight...we are bbq'ing steaks and I have potatoes in the oven...so a yummy meal that will feed 5 for under \\$10!!\n\nAll in all, a great few days....blessed indeed!\n\n## How can it be?\n\nFebruary 17th, 2008 at 05:53 am\n\nOdd.\n\nFebruary. The shortest month of the year. And we will receive as many, and in some cases MORE paychecks this month than any other.\n\nNot sure how that happened...but most months I get two (bi-weekly) and the hub gets four (weekly).\n\nBut in this, the shortest month of the year...I will get THREE and the hub will get FIVE. (corrected)\n\nand that, friends....adds up to almost \\$2k extra this month!! woo-hooo!!!\n\nI will be using the extra to catch up, put \\$\\$ down on vacation, and put some into savings.\n\nThis is also the month we get our year end bonus...which should hopefully mean another \\$1200 or so to stick back!!\n\nIt feels good to be catching up a little. 
I just think it is odd that February is such a profitable month....\n\n## Wellness check\n\nFebruary 10th, 2008 at 05:17 am\n\nI mentioned in my previous blog that we are having a challenge at work to get healthy since we are now a self-insured employer.\n\nSo...I am now at square one..I weigh too much, I have high cholesterol and I am borderline diabetic. At least I know where I am starting...everything measured can be reduced with a good diet and exercise. so....this week has been the start of a low carb, sensible protein diet.\n\nI have an eight month timeline to get healthier..and that is what I fully intend to do...so here is what I have incorporated into my diet..\n\nbye-bye coffee stand mochas...rather, brewed coffee with non-fat vanilla creamer...also- add 1 string cheese and a low fat or non-fat yogurt for breakfast (I never ate breakfast before)\n\nLunch...black beans on two corn tortillas with lettuce and salsa...\n\ndinner 3-4 oz of low fat meat with veggies and a carb w/ fat (bread and butter)..or salad dressing.\n\nsnacks= fruit or juice.\n\nand 1-2 glasses of red wine each night...\n\nQuite a change from the menu plan before..\n\nLots of water...and no regular soda (an occasional diet soda is ok)\n\n8 months to go...\n\nI hope to lose 50 lbs in 8 months....starting weight...well, I know what that is..I will post progress as I go.\n\nGo Ray.\n\nIf I am successful, I will receive at least \\$50...which will go into a challenge account...I'll be thinking about a challenge account between now and then...\n\n## Frozen body parts?\n\nFebruary 6th, 2008 at 04:55 am\n\nThis past weekend, the hub, DD3 and I rented a house in a resort a few hours from us. The house was cheaper than a hotel room...and it included a hot tub, movies, puzzles, games, free internet and long distance, and all the snow we could ever enjoy.\n\nHave you ever hot tubbed in a snowstorm??? wow. 
The hard part was getting out and running into the house before body parts froze and fell off.\n\nWe spent 4 days / 3 nights...watched TV, enjoyed the scenery, tubbed, put a puzzle together, bbq'd and visited.\n\nAny idea how nice a little mini-vacation like that is after you've been through---well, some very tough times? wow. We really savored it.\n\ncost for 4 days=\\$400 + food, which we cooked at 'home' (no meals out).\n\nThe hub learned today that he will not have surgery, but rather a shot of something to help with the blown disk. (please let it work!)\n\nMy bills have been coming in for the physical therapy after my last pinched nerve...and the sweetest words known to these eyes....\"Patient has no financial obligation for this bill, paid in full\" (so dual insurance does have its value!!)\n\nTomorrow. My job became self insured this year...so tomorrow, we have an opportunity to join (for free) an 8 month wellness program to help us get healthy. I will have to be weighed (gads) pricked, measured (wt..?) checked and fill out a personal evaluation...but when it is all said and done, I will receive a roadmap to help me get from here to healthy in 8 short months. If I succeed, there is at least a \\$50 bill waiting for me to spend however I want...not to mention a cache of wonderful prizes to be won.\n\nSo, tomorrow is day one. Pray for me...I am a non-recovering junk food junkie lately.\n\nLife seems to be slowly turning around and promising to be rich and rewarding once again.. It is amazing how your perspectives change on what is important and what is trivial.\n\nJust let me make it through the weigh-in without fainting or breaking the scale...lol. Then, it should be downhill to health and....well, we shall see!!" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9668079,"math_prob":0.9988556,"size":23108,"snap":"2021-43-2021-49","text_gpt3_token_len":5573,"char_repetition_ratio":0.09097992,"word_repetition_ratio":0.9579173,"special_character_ratio":0.24922104,"punctuation_ratio":0.18603362,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9993104,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-19T09:05:36Z\",\"WARC-Record-ID\":\"<urn:uuid:8ceade50-deca-4c8e-af20-579d549a669d>\",\"Content-Length\":\"509957\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bcc8979b-8f18-445f-bbd0-d2bd51a8bfd3>\",\"WARC-Concurrent-To\":\"<urn:uuid:3770fc2d-386e-45c4-b751-e98400c06b20>\",\"WARC-IP-Address\":\"173.231.200.26\",\"WARC-Target-URI\":\"https://thriftyray.savingadvice.com/2008/02/\",\"WARC-Payload-Digest\":\"sha1:GGGR2EB6BAO37O3JZPFQESOFLOQFVEGE\",\"WARC-Block-Digest\":\"sha1:DVDLYWSHZUVXQ6ILSC6M5ERQBQVITDBX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585246.50_warc_CC-MAIN-20211019074128-20211019104128-00394.warc.gz\"}"}
https://www.tutorialspoint.com/how-to-sort-a-list-using-comparator-with-method-reference-in-java-8
[ "# How to sort a list using Comparator with method reference in Java 8?\n\nJava 8 introduced changes in the Comparator interface that allow us to compare two objects. These changes help us to create comparators more easily. The first important method added is the comparing() method. This method receives as a parameter a Function that determines the value to be compared and creates a Comparator. Another important method is the thenComparing() method. This method can be used to compose Comparators.\n\nIn the below example, we can sort a list by the first name with the comparing() method and then by the last name with the thenComparing() method of the Comparator interface.\n\n## Example\n\nimport java.util.*;\n\npublic class MethodReferenceSortTest {\n   public static void main(String[] args) {\n      List<Employee> emp = new ArrayList<Employee>();\n      // populate the list to sort (values taken from the output below)\n      emp.add(new Employee(25, \"Raja\", \"Ramesh\"));\n      emp.add(new Employee(28, \"Jai\", \"Dev\"));\n      emp.add(new Employee(30, \"Sai\", \"Adithya\"));\n      emp.add(new Employee(35, \"Chaitanya\", \"Krishna\"));\n      emp.add(new Employee(23, \"Ravi\", \"Chandra\"));\n\n      // sort by first name, then last name, using method references\n      emp.stream().sorted(Comparator.comparing(Employee::getFirstName)\n                                    .thenComparing(Employee::getLastName))\n                  .forEach(System.out::println);\n   }\n}\n\n// Employee class\nclass Employee {\n   int age;\n   String firstName;\n   String lastName;\n   public Employee(int age, String firstName, String lastName) {\n      super();\n      this.age = age;\n      this.firstName = firstName;\n      this.lastName = lastName;\n   }\n   public int getAge() {\n      return age;\n   }\n   public void setAge(int age) {\n      this.age = age;\n   }\n   public String getFirstName() {\n      return firstName;\n   }\n   public void setFirstName(String firstName) {\n      this.firstName = firstName;\n   }\n   public String getLastName() {\n      return lastName;\n   }\n   public void setLastName(String lastName) {\n      this.lastName = lastName;\n   }\n   @Override\n   public String toString() {\n      return \"Employee [age=\" + age + \", firstName=\" + firstName + \", lastName=\" + lastName + \"]\";\n   }\n}\n\n## Output\n\nEmployee [age=35, firstName=Chaitanya, lastName=Krishna]\nEmployee [age=28, firstName=Jai, lastName=Dev]\nEmployee [age=25, firstName=Raja, lastName=Ramesh]\nEmployee [age=23, firstName=Ravi, lastName=Chandra]\nEmployee [age=30, firstName=Sai, lastName=Adithya]" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.67109376,"math_prob":0.4015696,"size":3758,"snap":"2023-14-2023-23","text_gpt3_token_len":855,"char_repetition_ratio":0.18327118,"word_repetition_ratio":0.09601449,"special_character_ratio":0.2373603,"punctuation_ratio":0.1503268,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9714613,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-25T18:00:47Z\",\"WARC-Record-ID\":\"<urn:uuid:b723a04e-6bf8-45fb-94d2-75b964529908>\",\"Content-Length\":\"42142\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0384d97f-6b07-4c0c-895e-0317490be5a4>\",\"WARC-Concurrent-To\":\"<urn:uuid:1d57fcf5-f99d-4e70-b80a-13d22c492fdb>\",\"WARC-IP-Address\":\"192.229.210.176\",\"WARC-Target-URI\":\"https://www.tutorialspoint.com/how-to-sort-a-list-using-comparator-with-method-reference-in-java-8\",\"WARC-Payload-Digest\":\"sha1:3HV7CQVKMOU4YBYMFQRKIESXOZCUIRBG\",\"WARC-Block-Digest\":\"sha1:EMGJZVEB4DXMXUWP6SY2CLBRWDGFDBBO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296945368.6_warc_CC-MAIN-20230325161021-20230325191021-00271.warc.gz\"}"}
http://zjzhenxing.cn/qspevdu_d001020008
[ "", null, "Recommended manufacturer mass-producing large red copper elbows | Recommended high-quality multi-purpose elbows\n\nBrand: 鑫雷,,\n\nPlace of origin: 忻城县(城关镇)\n\nPrice: negotiable\n\n绍兴市上虞鑫雷制冷设备厂 (Shaoxing Shangyu Xinlei Refrigeration Equipment Factory)\n\nGold member:", null, "Main products: copper elbows, aluminum elbows, stainless steel elbows, copper tees, closed-tower elbows\n\n•", null, "Aluminum elbow manufacturer - wholesale of well-regarded copper bends in Shaoxing\n\nBrand: 鑫雷,,\n\nPlace of origin: 忻城县(城关镇)\n\nPrice: negotiable\n\n绍兴市上虞鑫雷制冷设备厂\n\nGold member:", null, "Main products: copper elbows, aluminum elbows, stainless steel elbows, copper tees, closed-tower elbows\n\n•", null, "Closed-tower elbow manufacturer - affordable copper elbow supply\n\nBrand: 鑫雷,,\n\nPlace of origin: 忻城县(城关镇)\n\nPrice: negotiable\n\n绍兴市上虞鑫雷制冷设备厂\n\nGold member:", null, "Main products: copper elbows, aluminum elbows, stainless steel elbows, copper tees, closed-tower elbows\n\n•", null, "Zhejiang epoxy anti-static flooring - for coatings, look for 郝伟地坪 epoxy anti-static flooring\n\nBrand: 郝伟地坪,,\n\nPlace of origin: 忻城县(城关镇)\n\nPrice: negotiable\n\n诸暨市郝伟地坪工程有限公司 (Zhuji Haowei Flooring Engineering Co., Ltd.)\n\nGold member:", null, "Main products: epoxy flooring, epoxy self-leveling flooring, wear-resistant flooring, foundation leak grouting, wall leak grouting\n\n•", null, "Manufacturer of custom-machined copper U-tubes for heating pipework | Reasonably priced copper fittings wholesale in Shaoxing\n\nBrand: 鑫雷,,\n\nPlace of origin: 忻城县(城关镇)\n\nPrice: negotiable\n\n绍兴市上虞鑫雷制冷设备厂\n\nGold member:", null, "Main products: copper elbows, aluminum elbows, stainless steel elbows, copper tees, closed-tower elbows\n\n•", null, "Zhejiang mobile retractable dust-removal booth for grinding large castings\n\nBrand: 新迈\n\nPlace of origin: 凤山县(凤城镇)\n\nPrice: negotiable\n\n山东新迈节能环保科技有限公司 (Shandong Xinmai Energy-Saving and Environmental Protection Technology Co., Ltd.)\n\nBusiness model: manufacturer\n\nMain products: waste gas treatment equipment, dust treatment equipment, spray/bake paint booths\n\n•", null, "Zhejiang Shaoxing guidable crash cushions - TS-grade crash cushion manufacturer\n\nBrand: 格拉瑞斯\n\nPlace of origin: 宜州市\n\nPrice: negotiable\n•", null, "Price: negotiable\n\n绍兴市上虞鑫雷制冷设备厂\n\nGold member:", null, "Main products: copper elbows, aluminum elbows, stainless steel elbows, copper tees, closed-tower elbows\n\n•", null, "Price: negotiable\n\n绍兴市上虞鑫雷制冷设备厂\n\nGold member:", null, "Main products: copper elbows, aluminum elbows, stainless steel elbows, copper tees, closed-tower elbows\n\n•", null, "Beijing transfer labels - cost-effective transfer product information\n\nBrand: 华彩,,\n\nPlace of origin: 忻城县(城关镇)\n\nPrice: negotiable\n\n绍兴市华彩印花材料有限公司 (Shaoxing Huacai Printing Materials Co., Ltd.)\n\nGold member:", null, "Main products: flocking paper, flocking paste, release paste, transfer flocking heat-press decals, heat-press machines\n\n• Didn't find a suitable Shaoxing supplier? You can post a purchase request\n\nDidn't find a Shaoxing supplier that meets your requirements? You can search wholesale companies\n\n### Newest registered manufacturers\n\nRelated products:\nRecommended manufacturer of mass-produced large red copper elbows; Aluminum elbow manufacturer; Closed-tower elbow manufacturer; Zhejiang epoxy anti-static flooring; Manufacturer of custom copper U-tubes for heating pipework; Zhejiang mobile dust-removal booth for large castings; Shaoxing guidable crash cushions; Manufacturer of special pipe fittings for home-improvement building-materials machinery; Professional production and wholesale of air-conditioning refrigeration components; Beijing transfer labels" ]
[ null, "http://image-ali.bianjiyi.com/1/2018/0720/09/15320513534373.jpg", null, "http://www.qiye.net/Public/Images/ForeApps/grade2.png", null, "http://image-ali.bianjiyi.com/1/2018/0720/09/15320507900458.jpg", null, "http://www.qiye.net/Public/Images/ForeApps/grade2.png", null, "http://image-ali.bianjiyi.com/1/2018/0720/09/1532051968441.jpg", null, "http://www.qiye.net/Public/Images/ForeApps/grade2.png", null, "http://image-ali.bianjiyi.com/1/2018/1206/17/15440872493665.jpg", null, "http://www.qiye.net/Public/Images/ForeApps/grade2.png", null, "http://image-ali.bianjiyi.com/1/2018/0720/09/15320505651549.jpg", null, "http://www.qiye.net/Public/Images/ForeApps/grade2.png", null, "http://imagebooksir.258fuwu.com/images/business/20191118/9/3664948501574041297.jpeg", null, "http://imagebooksir.258fuwu.com/images/business/2019416/17/4506979151555407242.jpeg", null, "http://image-ali.bianjiyi.com/1/2018/0720/09/1532049887274.jpg", null, "http://www.qiye.net/Public/Images/ForeApps/grade2.png", null, "http://image-ali.bianjiyi.com/1/2018/0720/10/15320523484761.jpg", null, "http://www.qiye.net/Public/Images/ForeApps/grade2.png", null, "http://image-ali.bianjiyi.com/1/2017/1125/14/5a1911231430d.png", null, "http://www.qiye.net/Public/Images/ForeApps/grade2.png", null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.5493832,"math_prob":0.4711275,"size":669,"snap":"2019-43-2019-47","text_gpt3_token_len":848,"char_repetition_ratio":0.2586466,"word_repetition_ratio":0.0,"special_character_ratio":0.23318386,"punctuation_ratio":0.2795031,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9873433,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36],"im_url_duplicate_count":[null,5,null,null,null,null,null,null,null,null,null,null,null,3,null,null,null,null,null,null,null,2,null,1,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-18T17:55:57Z\",\"WARC-Record-ID\":\"<urn:uuid:4f453eba-d986-43d6-8561-4d697f83caa0>\",\"Content-Length\":\"99107\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6f9325b0-0d19-431b-b957-c3c952e6e6fa>\",\"WARC-Concurrent-To\":\"<urn:uuid:d2e0edb8-65a6-4898-9cb6-a429711cf0dc>\",\"WARC-IP-Address\":\"154.213.171.152\",\"WARC-Target-URI\":\"http://zjzhenxing.cn/qspevdu_d001020008\",\"WARC-Payload-Digest\":\"sha1:RUE7MUTZJTG2IORFZR5V6654XEV23KSY\",\"WARC-Block-Digest\":\"sha1:5PWRLDVIRLFWJODZJWMIIGIKALG5EWMJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496669809.82_warc_CC-MAIN-20191118154801-20191118182801-00046.warc.gz\"}"}
https://programs.wiki/wiki/typescript-type-declarations.html
[ "# TypeScript type declarations\n\n## basic type\n\n• type declaration\n\n• Type declaration is a very important feature of TS\n\n• The type of variables (parameters, formal parameters) in TS can be specified by type declaration\n\n• After specifying the type, when assigning a value to the variable, the TS compiler will automatically check whether the value conforms to the type declaration, assign the value if it matches, or report an error\n\n• In short, the type declaration sets the type for the variable, so that the variable can only store a certain type of value, preventing the variable from being arbitrarily assigned to any type of value\n\n• grammar:\n\n• ```let variable: type;\n\nlet variable: type = value;\n\nfunction fn(parameter: type, parameter: type): type{\n...\n}\n```\n• automatic type judgment\n\n• TS has an automatic type judgment mechanism\n• When the declaration and assignment of variables are carried out at the same time, the TS compiler will automatically determine the type of the variable\n• So if your variable declaration and assignment are performed at the same time, you can omit the type declaration\n• type:\n\ntypeexampledescribe\nnumber 1, -33, 2.5 any number\nstring 'hi', \"hi\", hi any string\nboolean true,false boolean true or false\nLiteral itself The value of the restricted variable is the value of the literal\nany * any type\nunknown * type-safe any\nvoid Null value (undefined) no value (or undefined)\nnever no value cannot be any value\nobject {name:'Monkey King'} any JS object\narray [1,2,3] Arbitrary JS array\ntuple [4,5] Element, TS new type, fixed-length array\nenum enum{A, B} Enumeration, a new type in TS\n\nExample:\n\n```//#Simple example of region type declaration\n\n// declare a variable a,Also specify its type as number\nlet a: number;\n\n// a type is set to number,in the future use a can only be numeric\na = 10;\na = 33;\n// a = 'hello'; // This line of code will report an error, because the type of variable a is number 
and cannot be assigned a string\nlet b: string;\nb = 'hello';\n// b = 123;\n\n// After declaring the variable, assign it directly\n// let c: boolean = false;\n// If a variable is declared and assigned at the same time, TS Can automatically perform type detection on variables\nlet c = false;\nc = true;\n\n//#endregion\n\n//#region\n// JS The function in does not consider the type and number of parameters\nfunction sum1(a, b){\nreturn a + b;\n}\nsum1(2,\"2\");//can be called successfully\n// console.log(sum(123, 456)); // 579\n// console.log(sum(123, \"456\")); // \"123456\"\n// :number means that the returned value must be a number\nfunction sum(a: number, b: number): number{\nreturn a + b;\n}\n\nlet result = sum(123, 456);//Can only pass parameters strictly according to type\n\n//#endregion```\n\nLiteral\n\n• You can also use literals to specify the type of variables, and you can determine the value range of variables through literals\n\nExample:\n\n```// You can also directly use literals for type declarations\nlet a1: 10;// definition a1 The value is 10 and cannot be changed, similar to a constant\n\n// can use | to connect multiple types (union types)\n// express b can be\"male\",can also be\"female\"\nlet b: \"male\" | \"female\";\nb = \"male\";\nb = \"female\";\n\n// express c can be boolean,can also be string\nlet c: boolean | string;\nc = true;\nc = 'hello';```\n\nany\n\n```// any Represents any type, and a variable setting type is any After that, it is equivalent to closing the variable TS type detection for\n// use TS hour,Not recommended for use any type\n// let d: any;\n\n// If you declare a variable without specifying a type, then TS The parser will automatically determine the type of the variable as any (implicit any)\nlet d;\nd = 10;\nd = 'hello';\nd = true;```\n\nunknown\n\ntype-safe any\n\nExample:\n\n```// unknown represents a value of unknown type\nlet e: unknown;\ne = 10;\ne = \"hello\";\ne = true;\n\nlet s:string;\n\n// d is of type any,It can 
be assigned to any variable\ns = d;// correct\n\ne = 'hello';\ns=e;// mistake unknown Variables of type cannot be directly assigned to other variables\n// unknown is actually a type-safe any\n// unknown Variables of type cannot be directly assigned to other variables\nif(typeof e === \"string\"){\ns = e;\n}```\n\nenum\n\n```enum Color {\nRed,\nGreen,\nBlue,\n}\nlet c: Color = Color.Green;\n\nenum Color {\nRed = 1,\nGreen,\nBlue,\n}\nlet c: Color = Color.Green;\n\nenum Color {\nRed = 1,\nGreen = 2,\nBlue = 4,\n}\nlet c: Color = Color.Green;```\n\ntype assertion\n\nIn some cases, the type of the variable is very clear to us, but the TS compiler is not clear. At this time, the type assertion can be used to tell the compiler the type of the variable. The assertion has two forms:\n\n```// Type assertions, which can be used to tell the parser the actual type of a variable\n/*\n* grammar:\n* variable as type\n* <Type > Variable\n*\n* */\n// The first\ns = e as string;\n// the second\ns = <string>e;```\n\nExample:\n\n```//The first\nlet someValue: unknown = \"this is a string\";\nlet strLength: number = (someValue as string).length;\n\n//the second\nlet someValue: unknown = \"this is a string\";\nlet strLength: number = (<string>someValue).length;```\n\nTags: TypeScript\n\nPosted by hamza on Fri, 25 Nov 2022 00:52:39 +1030" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.68870234,"math_prob":0.98459905,"size":4941,"snap":"2023-40-2023-50","text_gpt3_token_len":1190,"char_repetition_ratio":0.16690297,"word_repetition_ratio":0.07735426,"special_character_ratio":0.27929568,"punctuation_ratio":0.15030675,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9770279,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-27T19:17:56Z\",\"WARC-Record-ID\":\"<urn:uuid:cbb5c76e-f36c-421f-aaa6-c260ef5167fb>\",\"Content-Length\":\"17083\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c9bfa1ea-8e90-4b91-830a-9695b04c0253>\",\"WARC-Concurrent-To\":\"<urn:uuid:b3b16e7f-411a-4ec1-ac38-02709af0c978>\",\"WARC-IP-Address\":\"178.238.237.47\",\"WARC-Target-URI\":\"https://programs.wiki/wiki/typescript-type-declarations.html\",\"WARC-Payload-Digest\":\"sha1:P73RGP2XVUVYC5XSUTQEL2VT5PWGF2WL\",\"WARC-Block-Digest\":\"sha1:MHBGX2GKHYHMDA64FI4PXZUYSRRIKUGT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510319.87_warc_CC-MAIN-20230927171156-20230927201156-00334.warc.gz\"}"}
https://answers.opencv.org/questions/59263/revisions/
[ "Ask Your Question\n\n# Revision history [back]\n\n### iterate through a matrice, int to uchar problem\n\nI'm trying to modify in a for a loop a Mat. I'm following this article. However the following code does not work :\n\ncv::Mat tmp(20, 20, CV_32F, cv::Scalar(0,0,0));\n\ncv::MatIterator_<uchar> new_mat= tmp.begin<uchar>();\nfor( int i = 0; i< (tmp.size().width) ;i++){\nfor(int j= 0;j< tmp.size().height;j++){\n\n(*new_mat) = 1;\nstd::cout << (float) (*new_mat) << \", \";\nnew_mat++;\n}\nstd::cout <<std::endl;\n}\nstd::cout << std::endl << std::endl;\n\nstd::cout << tmp << std::endl;\n\nstd::string win = \"Local Maxima\";\ncv::imshow(win, tmp);\n\n\nPrinting (float) (*new_mat) give a Matrix of 1 as explected but tmp if full of 1.4012985e-45 values and I can't seem to understand why. My guess is that it as to do with the way I'm assigning the value but I can't grasp the real problem here.\n\nAny help is appreciated." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.82743293,"math_prob":0.9205367,"size":787,"snap":"2021-21-2021-25","text_gpt3_token_len":249,"char_repetition_ratio":0.12899107,"word_repetition_ratio":0.0,"special_character_ratio":0.36848792,"punctuation_ratio":0.29353234,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9958168,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-16T09:03:58Z\",\"WARC-Record-ID\":\"<urn:uuid:465b2593-f8a2-4edf-9783-19bcd344c76c>\",\"Content-Length\":\"15708\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:946646c9-0d0b-44e8-b180-9d85caf34d0f>\",\"WARC-Concurrent-To\":\"<urn:uuid:3ad2b231-c6c6-45ae-ae18-9083fbc732e3>\",\"WARC-IP-Address\":\"5.9.49.245\",\"WARC-Target-URI\":\"https://answers.opencv.org/questions/59263/revisions/\",\"WARC-Payload-Digest\":\"sha1:IKIHUIGPL2RVMPPSLBPBWAAAQHLGU46W\",\"WARC-Block-Digest\":\"sha1:4YLEI5G4I7BFLUSZXXVLTSP33ACHUV55\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487622234.42_warc_CC-MAIN-20210616063154-20210616093154-00080.warc.gz\"}"}
https://voer.edu.vn/m/orbital-hybridization/cc72cae3
[ "Tài liệu\n\n# Orbital hybridization\n\nScience and Technology\n\nIn the Valence Bond model, atomic orbitals s and p can be mixed to yield a set of hybrid orbitals for forming sigma bonds with neighbor atoms for example of the sp3 and sp2 hybrid orbitals below.\n\nThis provides a means to ensemble molecular structure from individual atoms such as in MolDesign tool in Avisto.  Let examine the concept of orbital hybridization from the molecular orbital theory, i.e. analyzing molecular orbitals from semi-empirical MO calculations using tools in Avisto.\n\nThe procedure below is for using tools in Avisto.  Dowload them at Astonis.\n\nProcedure:\n\n1. Use MolDesign to build BeH2, HCCH, BH3, H2C=CH2, CH4, and H3C-CH3.\n\n2. Use Basic QChem Edu, Basic QChem, or MopacGUI Cloud or Pro to search for stable structures for these molecules.  If use MopacGUI Cloud or Pro also select options to calculate localized MO and perform hybridization analysis under that properties tab.\n\n3. Use PsiViewer to analyze the molecular orbitals both delocalized and localized forms.\n\n4. Open the output file in the Files type to view the results of the hybridization analysis.\n\nExample:  BeH2\n\nAfter using Basic QChem Edu to search for a stable structure of BeH2, view the results in PsiViewer.\n\n1. Delocalized molecular orbtials\n\nThe figure shows both occupied delocalized\n\nand  MO and also the HOMO-LUMO gap (in eV).\n\n2. Select option to plot localized MO in PsiViewer.  The two delocalized MO's above are mixed to produce two equivalent localized MO's showing the two Be-H sigma bonds.\n\nNote the the orbital energies of localized MO's have no physical meaning as those of delocalized MO's.\n\n3. Using option in PsiViewer to turn-off the contribution of Hydrogen atoms in the localized MO's.  This yields two Be hybrid sp orbitals.\n\n4. 
Open the output file in the Files tab of PsiViewer and find the table 'Sigma-Pi bond-order matrix'\n\nSIGMA-PI BOND-ORDER MATRIX\n\nS-SIGMA    P-SIGMA     P-PI     S-SIGMA    S-SIGMA                            Be  1          Be  1       Be  1       H  2       H  3------------------------------------------------------------------ S-SIGMA  Be 1   0.996282 P-SIGMA  Be 1   0.000000   0.961062   P-PI  Be      1   0.000000   0.000000   0.000000 S-SIGMA  H  2   0.498141   0.480531   0.000000   0.983320 S-SIGMA  H  3   0.498141   0.480531   0.000000   0.004648   0.983320\n\nAlong the diagonal matrix, the first two numbers indicate that Be makes two sigma bonds from an s and p orbitals and no pi bond.  This also gives the degree of orbital hybridization to be a sp type.\n\nYou can repeat the lesson for other molecules to learn about sp2 and sp3 hybridization.\n\nTải về\nĐánh giá:\n0 dựa trên 0 đánh giá\nNội dung cùng tác giả\n\nNội dung tương tự" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7591136,"math_prob":0.8525953,"size":2499,"snap":"2019-51-2020-05","text_gpt3_token_len":656,"char_repetition_ratio":0.15951903,"word_repetition_ratio":0.019900497,"special_character_ratio":0.2777111,"punctuation_ratio":0.116,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9629989,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-11T04:01:10Z\",\"WARC-Record-ID\":\"<urn:uuid:a98ba9b6-d420-45bc-ab5b-035be499f3b4>\",\"Content-Length\":\"24383\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0904419e-e720-4b59-b8ca-c712677cf098>\",\"WARC-Concurrent-To\":\"<urn:uuid:bde5e723-3ccd-494f-864a-3d35e2ac2cd1>\",\"WARC-IP-Address\":\"115.146.126.85\",\"WARC-Target-URI\":\"https://voer.edu.vn/m/orbital-hybridization/cc72cae3\",\"WARC-Payload-Digest\":\"sha1:VJR2POWIB5LZVDVN62W6HEAGJQLHZ6TU\",\"WARC-Block-Digest\":\"sha1:PN7QKNJFCEBXDAZ3BZLYIFPYZIV4DVFY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540529745.80_warc_CC-MAIN-20191211021635-20191211045635-00415.warc.gz\"}"}
https://codeaccepted.wordpress.com/2014/04/08/depth-and-breadth-first-search/
[ "# Algorithm #9 : Depth- and Breadth- First Search\n\nThis post is about the graph traversal algorithms, Depth First Search (DFS) and Breadth First Search (BFS).\nBFS and DFS are one of the algorithms for graph exploration. Graph exploration means discovering the nodes of a graph by following the edges. We start at one node and then follow edges to discover all nodes in a graph. The choice of first node may be arbitrary or problem specific.\n\nThe difference between BFS and DFS is the order in which the nodes of a graph are explored.\nIf you are not already familiar with BFS and DFS in theory, I recommend that you read about them. Because I’m going to focus more on their implementation here.\n\nIn a nutshell, DFS continues on one path and explores it completely before going down another path.\nBut in BFS we progress equally in all possible paths.\n\nThe following gifs will give you a good general idea about the two.\nHere, the nodes are numbered according to the order in which they are explored.\n\n[Both the images were taken from commons.wikimedia.org]\n\nIMPLEMENTATION:\nWe can use any of the four graph representation methods that I introduced in my post Representation of Graphs. In this post we’ll use Adjacency list and assume that the input is the edges in form of pairs of positive integers (i.e. Type 2, if you refer to my post). The nodes are numbered from 1 to n.\n\nFor DFS:\nDFS has to be implemented using a stack data structure. 
As recursion uses the internal stack, we can use recursion as follows:

```int adjlist[100][100]={0};
int degree[100]={0};
int done[100]={0};//this array marks if a node has already been explored
void dfs(int at)
{
if(done[at]==1)//if the node has already been explored, then return
return;
printf(\"At node %d\\n\",at);
done[at]=1;
int i=0;
while(i<degree[at])//for each of the edges on this node
{
dfs(adjlist[at][i]);
i++;
}
return;
}
int main()
{
int n,m,i,a,b;
scanf(\" %d %d\",&n,&m);
i=0;
while(i<m)
{
scanf(\" %d %d\",&a,&b);
adjlist[a][degree[a]]=b;
degree[a]++;
adjlist[b][degree[b]]=a;
degree[b]++;
i++;
}
dfs(1);//start with any node. node 1 is the first node here
return 0;
}

```

For BFS:
BFS needs a Queue data structure for its implementation. Here I use an array queue[] and integers front and rear to implement the Queue.

```
int main()
{
int n,m,i,a,b;
int adjlist[100][100]={0};
int degree[100]={0};
scanf(\" %d %d\",&n,&m);
i=0;
while(i<m)
{
scanf(\" %d %d\",&a,&b);
adjlist[a][degree[a]]=b;
degree[a]++;
adjlist[b][degree[b]]=a;
degree[b]++;
i++;
}
int queue[100],front=0,rear=0;
int done[100]={0};//this array marks if a node has already been explored
int at;
queue[rear]=1;//start with any node. node 1 is the first node here
rear++;
done[1]=1;
while(front!=rear)
{
at=queue[front];
printf(\"At node %d\\n\",at);
front++;
for(i=0;i<degree[at];i++)
{
if(done[adjlist[at][i]]!=1)
{
queue[rear]=adjlist[at][i];
rear++;
done[adjlist[at][i]]=1;
}
}
}
return 0;
}

```

The array done[] is used to mark the nodes that have already been visited. This has to be done to stop the code from re-discovering already visited nodes and running forever.

Above is a bare-bones implementation of the two algorithms. They do nothing more than exploring the graph. 
Apart from only exploring the graph, DFS and BFS can also be used to compute other information too.
For example, if we have a tree as input, we can modify the above DFS code to compute the depth of each node in the tree, and also the size of the sub-tree rooted at each node.

```int adjlist[100][100]={0};
int degree[100]={0};
int depth[100]={0};
int sizeofsubtree[100]={0};
int done[100]={0};//this array marks if a node has already been explored
int dfs(int at,int currentdepth)
{
if(done[at]==1)//if the node has already been explored, then return
return 0;
depth[at]=currentdepth;
printf(\"Node %d at depth %d\\n\",at,depth[at]);
done[at]=1;
int i=0,size=1;//initialised to 1 as the current node is also part of the sub-tree rooted at it
while(i<degree[at])//for each of the edges on this node
{
size+=dfs(adjlist[at][i],currentdepth+1);
i++;
}
sizeofsubtree[at]=size;
return sizeofsubtree[at];
}
int main()
{
int n,m,i,a,b;
scanf(\" %d %d\",&n,&m);
i=0;
while(i<m)
{
scanf(\" %d %d\",&a,&b);
adjlist[a][degree[a]]=b;
degree[a]++;
adjlist[b][degree[b]]=a;
degree[b]++;
i++;
}
dfs(1,0);//start with the root node. assuming that node 1 is the root node here
i=1;
printf(\"Size of subtrees:\\n\");
while(i<=n)
{
printf(\"Rooted at %d: %d\\n\",i,sizeofsubtree[i]);
i++;
}
return 0;
}
```

A second variable currentdepth is passed to each dfs() instance that represents the depth of the current node. Notice that in the recursive call, currentdepth+1 is passed to dfs(), because a child of the current node has a depth one more than its parent.

Each recursive instance of dfs() returns the size of the sub-tree rooted at a node.
At every node, we sum up the values returned by dfs() for each child node (this is done in the recursive call inside the while loop). The size of a sub-tree rooted at a node is the summation of the sizes of the sub-trees rooted at its children + 1. 
This is how the size of all sub-trees is computed.

COMPARISON BETWEEN DFS AND BFS:
If all the nodes of a graph have to be discovered, then BFS and DFS both take an equal amount of time. But, if we want to search for a specific node, the two algorithms may differ in execution time.

DFS is more risky compared to BFS. If a node has more than one edge leading from it, the choice of which edge to follow first is arbitrary.
As we don’t have any intelligent way of choosing which edge to follow first, it may be possible that the required node is present down the first edge that we choose, and it is also possible that the required node is present down the last edge that we choose from that node. In the former case, DFS will find the node very quickly, but in the latter case DFS will take a lot of time. If we take a wrong path at some node, DFS will have to completely traverse the whole path before it can go down another path. That’s why DFS is more risky than BFS.

In BFS, all paths are explored equally. So, in some cases the search may be a little slower than DFS, but the advantage of BFS is that it doesn’t arbitrarily favor one path over the other.

BFS is very useful in problems where you have to find the shortest path. This is because BFS explores closer nodes first. So, when we find the node the first time, we can be sure that this is the shortest path to it. Whereas in DFS, we’ll have to find all possible paths and then select the shortest path.
BFS can also be used in checking if the graph is bipartite.

DFS is useful in problems where we have to check the connectivity of a graph and in topological sorting.

Suppose we have an infinite graph. If we use DFS to find a specific node, the search will never end if the node is not in the first path that the algorithm chooses. But, given sufficient time, BFS will be able to find it.

COMPLEXITY OF BFS AND DFS:
The complexity of DFS and BFS is O(E), where E is the number of edges.
Of course, the choice of graph representation also matters. If an adjacency matrix is used, they will take O(N^2) time (N^2 is the maximum number of edges that can be present). If an adjacency list is used, DFS/BFS will take O(E) time.

## 10 comments

1.", null, "Siddharth says:

awesome sir. still refer to anyone wanting to learn coding dfs, bfs. 🙂" ]
[ null, "https://2.gravatar.com/avatar/88252bb59b29ad9d8d066f09f17978cc", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.84478384,"math_prob":0.9608051,"size":7304,"snap":"2019-13-2019-22","text_gpt3_token_len":1866,"char_repetition_ratio":0.11726028,"word_repetition_ratio":0.09810387,"special_character_ratio":0.27354875,"punctuation_ratio":0.13810112,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9980953,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-05-19T22:25:09Z\",\"WARC-Record-ID\":\"<urn:uuid:726d843b-ab4e-4072-85ca-33a2f86f30f6>\",\"Content-Length\":\"69289\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:deaaeda7-f25c-4463-8994-f4c83278aa85>\",\"WARC-Concurrent-To\":\"<urn:uuid:9ef6d27b-3fa8-4fe2-9ee7-321a7dec86cd>\",\"WARC-IP-Address\":\"192.0.78.12\",\"WARC-Target-URI\":\"https://codeaccepted.wordpress.com/2014/04/08/depth-and-breadth-first-search/\",\"WARC-Payload-Digest\":\"sha1:QWD4SGD2D4O5B2ETKIIJOKQY7WWAPW6S\",\"WARC-Block-Digest\":\"sha1:EHAVYS65435CEN22OENHBG6X3SPHTCVZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-22/CC-MAIN-2019-22_segments_1558232255182.37_warc_CC-MAIN-20190519221616-20190520003616-00140.warc.gz\"}"}
https://electronics.stackexchange.com/questions/190569/problem-with-driving-the-mosfet-through-optocoupler
[ "# problem with driving the mosfet through optocoupler", null, "I want to run the mosfet (irf540n) through opto coupler(4n25) for driving the motor. I refered some formula from this website, but my circuit is not working.\n\nI calculated opto coupler current limting resistor by using this formula:\n\nRf = (vin-Vf)/If = (5v-1.15)/10ma = 385R\n\nthen calculated the mosfet gate resistor using\n\n5v/150ma=34R\n\nbut this circuit is not working.\n\nI tried this circuit too but not working.\n\nThanks for your answer, now I attached the image of my schematic. Please check it.", null, "• You can do all calculations you want but if your circuit is not correct it will never work. Show us your schematic first. – Bimpelrekkie Sep 15 '15 at 6:52\n• You have to charge/discharge the gate capacitance to turn off/on, using just one gate resitor wont work, because MOSFET aint BJT. – Marko Buršič Sep 15 '15 at 6:56\n• @MarkoBuršič Well, we don't know whether there is a pull-up (or pull-somewhere) resistor in that secret circuit … – CL. Sep 15 '15 at 9:18\n• R2 is way too small in your second schematic. It is only there to pull the gate low when the opto-coupler is off. Try something around 10K. – Tut Sep 16 '15 at 11:50\n• For more on determining the pull-down resistor value, see: Calculating the pulldown resistance for a given MOSFET's gate – Tut Sep 16 '15 at 12:07\n\n## 1 Answer", null, "simulate this circuit – Schematic created using CircuitLab\n\nIs a circuit that should work for you. D1 and Q1 are used together to form the opto isolator. The values are just ball park, the schematic is just meant for topology purposes.\n\n• Where did you get the load resistor value from? – Andy aka Sep 15 '15 at 8:25\n• @Andyaka It's just balk park values. I was going for topology not values. 
– vini_i Sep 15 '15 at 8:27\n• sir ,actually i dont know to calculate gate resistor please say me how to find it – vivek j Sep 16 '15 at 8:00\n• the gate resistor needs to be just large enough to keep the inrush current in check. The maximum rating for the 4N25 is 150ma, so at 5v i would make the resistor (5/0.15=33.3) about 33ohms. – vini_i Sep 16 '15 at 10:33" ]
[ null, "https://i.stack.imgur.com/qlRHv.jpg", null, "https://i.stack.imgur.com/m2zbL.jpg", null, "https://i.stack.imgur.com/iKiUj.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9195696,"math_prob":0.8237947,"size":490,"snap":"2019-35-2019-39","text_gpt3_token_len":138,"char_repetition_ratio":0.12345679,"word_repetition_ratio":0.025641026,"special_character_ratio":0.2632653,"punctuation_ratio":0.10309278,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9760351,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,5,null,5,null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-19T18:55:09Z\",\"WARC-Record-ID\":\"<urn:uuid:cfcf65c4-20c1-4c59-9d4d-ef4e6553623d>\",\"Content-Length\":\"143380\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6f11ce1f-9d1b-4798-9a3d-f7ef8899c83b>\",\"WARC-Concurrent-To\":\"<urn:uuid:e3ba8fee-4788-4cbf-8961-2916eaf713eb>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://electronics.stackexchange.com/questions/190569/problem-with-driving-the-mosfet-through-optocoupler\",\"WARC-Payload-Digest\":\"sha1:SPKSXVTUHSGNSMQRG4ZJNCOS3PUQE2Z6\",\"WARC-Block-Digest\":\"sha1:TAGFOYQNBMEWRENJXRCC5ATDWND7VAYG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027314904.26_warc_CC-MAIN-20190819180710-20190819202710-00091.warc.gz\"}"}
https://argoprep.com/blog/what-is-1-6-as-a-decimal/
[ "## Want to practice?\n\nIn the fraction 1/6 , the number 1 is\n\n• Numerator\n• Denominator\n• Decimal\n• Fraction\n\nIn the fraction 1/6 , the number 6 is\n\n• Numerator\n• Denominator\n• Decimal\n• Fraction\n\nWhat is the equivalent fraction to 1/6?\n\n• 4/24\n• 8/32\n• 8/24\n• 4/30\n\nWhat is the equivalent fraction to 1/6?\n\n• 3/12\n• 3/18\n• 9/52\n• 2/18\n\nFractions are used to denote values other than whole numbers. A fraction has two parts separated by a line. The number in the top is called the numerator and the number at the bottom is called the denominator.\n\nThe numerator is the number of parts that are represented and the denominator is the number of parts in the whole.\n\nFractions can also be denoted as decimals.\n\nThe long division method is used to convert the fraction to its decimal form.\n\nIn this method, we use the numerator as the dividend and the denominator as the divisor.\n\nWe can see that the number 1 is not divisible by 6. Hence we can add a decimal point to the quotient and add 0 to the dividend to make it divisible.\n\nWe can see that the same remainder 6 is repeated after the first step. Stop the division as the digits in the quotient are repeated.\n\nThe answer to the fraction is 0.16666…\n\nTo make the answer into a terminating decimal form we write the number with a bar over the digits that are repeated.\n\n0.16666…. is written as\n\n## Remember\n\nHere are some common terms you should be familiar with.\n\n• In the fraction , the number 1 is the dividend (our numerator)\n• The number 6 is our divisor (our denominator)\n\nENTER BELOW FOR ARGOPREP'S FREE WEEKLY GIVEAWAYS. EVERY WEEK!\nFREE 100\\$ in books to a family!\nSee Related Worksheets:\n\"Twelve\" Robots Spelling\nWorksheets\n(0)\nSome number words are challenging, and that is definitely the case with \"twelve\". It's worthy of a solid revie...\nCrawling Up the Doubles Tree\nWorksheets\n(0)\nThese creepy-crawlies love doubles plus two addition problems! 
Students will get some great practice matching ...\nKindergarten\nIce Cream Party - Draw It!\nWorksheets\n(0)\nThis delicious worksheet is the perfect introduction to addition. Kindergarteners will love it! They'll read t...\nOnly Tens Allowed!\nWorksheets\n(0)\nAdding double-digit numbers is a new adventure for many first grade students. Using whole tens to begin is a g...\nWorksheets\n(0)\nYour artists will have a ball with this blank canvas! This one-page, five-problem worksheet requires children ...\nDinos and Perimeters\nWorksheets\n(0)\nThe dinosaurs are roaring about this perimeter resource! Learners will love finding the perimeter for various ...\nRelevant Blogs:\n\nShare good content with friends and get 15% discount for 12-month subscription\n\nExplore our workbooks:\n\n{{#post_arr}}\n{{#h_block}}\n{{/h_block}}" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.872909,"math_prob":0.9572534,"size":1573,"snap":"2021-43-2021-49","text_gpt3_token_len":407,"char_repetition_ratio":0.18291906,"word_repetition_ratio":0.10135135,"special_character_ratio":0.27082008,"punctuation_ratio":0.07594936,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9829009,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-11-30T03:58:48Z\",\"WARC-Record-ID\":\"<urn:uuid:2020391b-1dcc-4e2f-8fab-422bb7356b2e>\",\"Content-Length\":\"193802\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:288cd48d-5ae6-423a-a26c-20e249ff254d>\",\"WARC-Concurrent-To\":\"<urn:uuid:645851ce-0d61-4c2a-bb4f-e05252fa2fba>\",\"WARC-IP-Address\":\"35.245.210.119\",\"WARC-Target-URI\":\"https://argoprep.com/blog/what-is-1-6-as-a-decimal/\",\"WARC-Payload-Digest\":\"sha1:SK44T5FONKXIBMCTB6MJXP27A2DXNYCF\",\"WARC-Block-Digest\":\"sha1:3H5FM4QWGKV6QA2AFVVFJCSVOIF7IXOV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964358903.73_warc_CC-MAIN-20211130015517-20211130045517-00049.warc.gz\"}"}
https://rkm.com.au/CALCULATORS/CALCULATOR-cone.html
[ "all calculators / Cone CALCULATOR: ...enter known values for cone radius & height ... then click GET button. No letters or units (e.g. enter 22 not 22 cm) Formula / equation: ENTER Cone Radius > = r ENTER Cone Height (h) > = h Cone Side = = s (length of slope / slant / side) Cone Circumference = = 2 π r (circumference of base of cone) Cone Base Area = = π r2 (area of circle that forms base of cone) Cone Curved Surface Area = = π r s (surface area of the sloping wall of the cone) TOTAL CONE AREA = = π r2 + π r s (area of base + area of wall) CONE VOLUME = = 1/3 (π r2)h (one third base area x height)", null, "Figure shows a cone with equations (formulae) for circumference, circle area, surface area, and conical volume. This cone also available as a 3D stereo anaglyph. This calculator will work out the surface area of a cone and the volume of a cone if you enter the cone radius and height. Only enter numbers (e.g. enter 22 not 22 cm). If you try to enter a unit of measure (e.g. 22 metres, 4 miles, 10 cm) you will get an NAN (Not A Number) error appear in each box. When you have entered the numbers, click the GET button. SURFACE AREA OF A CONE is the area of base (π r 2) + area of the sloping wall π r s. VOLUME OF A CONE is the area of the base (π r 2) times the height (h) divided by three = 1/3 π r 2 h. To convert between lengths (e.g. centimeters to inches) see our Length & Distance Converter. For weights and volumes (especially in recipes) see our Recipe Converter. Sphere: Calculate surface area and volume of a sphere.", null, "Russell Kightley Media\nPO Box 9150, Deakin, ACT 2600, Australia. Mobile phone Australia 0405 17 64 71\nemail RKM" ]
[ null, "https://rkm.com.au/CALCULATORS/calculator-images/MATHS-CONE-equations-white-500.png", null, "https://rkm.com.au/LOGOS/RKM-logo-stone.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.80849797,"math_prob":0.99322075,"size":544,"snap":"2021-31-2021-39","text_gpt3_token_len":161,"char_repetition_ratio":0.14074074,"word_repetition_ratio":0.051282052,"special_character_ratio":0.30514705,"punctuation_ratio":0.096296296,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9868244,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,3,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-24T17:28:49Z\",\"WARC-Record-ID\":\"<urn:uuid:36e6d4b6-7bfe-48d6-8659-bc2d3e107fd3>\",\"Content-Length\":\"13535\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4748daed-2c4b-4290-8892-0bf78f6513bf>\",\"WARC-Concurrent-To\":\"<urn:uuid:16436719-d835-46d2-b25b-c7a8cf7c3793>\",\"WARC-IP-Address\":\"160.153.51.198\",\"WARC-Target-URI\":\"https://rkm.com.au/CALCULATORS/CALCULATOR-cone.html\",\"WARC-Payload-Digest\":\"sha1:KBN77AHHRDNZFPZILOMUQ2AO54DDSDGL\",\"WARC-Block-Digest\":\"sha1:ZCDTH3NEYPU6TKA3RPSLUTRBJIEJHILB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057564.48_warc_CC-MAIN-20210924171348-20210924201348-00573.warc.gz\"}"}
https://maxdemarzi.com/2012/02/21/max-flow-with-gremlin-and-transactions/
[ "## Max Flow with Gremlin and Transactions", null, "The maximum flow problem was formulated by T.E. Harris as follows:\n\nConsider a rail network connecting two cities by way of a number of intermediate cities, where each link of the network has a number assigned to it representing its capacity. Assuming a steady state condition, a nd a maximal flow from one given city to the other.\n\nBack in the mid 1950s the US Military had an interest in finding out how much capacity the Soviet railway network had to move cargo from the Western Soviet Union to Eastern Europe. This lead to the Maximum Flow problem and the Ford–Fulkerson algorithm to solve it.\n\nIf you’ve been reading the Neo4j Gremlin Plugin documentation, you’ll remember it has a section on Flow algorithms with Gremlin. Let’s add a couple of things and bring this example to life.\n\nWe’re going to be modeling a simple railway system that needs to transport cargo from California to Illinois. A couple of direct routes exist, and additionally a route going through Texas. 
First step is to create our graph:\n\n```def create_graph\nneo = Neography::Rest.new\ngraph_exists = neo.get_node_properties(1)\nreturn if graph_exists && graph_exists['name']\n\nstates = [{:name => \"California\", :coordinates => [-119.355165,35.458606]},\n{:name => \"Illinois\", :coordinates => [ -88.380238,41.278216]},\n{:name => \"Texas\", :coordinates => [ -97.388631,30.943149]}]\n\ncommands = states.map{ |n| [:create_node, n]}\n\nstates.each_index.map do |n|\ncommands << [:add_node_to_index, \"states_index\", \"name\", states[n][:name], \"{#{n}}\"]\nend\n\ncommands << [:create_relationship, \"connected\", \"{#{0}}\", \"{#{1}}\", {:capacity => 1}]\ncommands << [:create_relationship, \"connected\", \"{#{0}}\", \"{#{1}}\", {:capacity => 2}]\ncommands << [:create_relationship, \"connected\", \"{#{0}}\", \"{#{2}}\", {:capacity => 1}]\ncommands << [:create_relationship, \"connected\", \"{#{2}}\", \"{#{1}}\", {:capacity => 3}]\n\nbatch_result = neo.batch *commands\nend\n```\n\nYou’ve seen me do this a few times already, so I won’t spend too much time on it. Just notice we’re adding the states names to an index, and using the Batch REST command to create it all at once. We’ll write our max_flow method next:\n\n```def max_flow\nneo = Neography::Rest.new\nneo.execute_script(\"source = g.idx('states_index')[[name:'California']].iterator().next();\nsink = g.idx('states_index')[[name:'Illinois']].iterator().next();\n\nmax_flow = 0;\ng.setMaxBufferSize(0);\ng.startTransaction();\n\nsource.outE.inV.loop(2){\n!it.object.equals(sink)}.\npaths.each{\nflow = it.capacity.min();\nmax_flow += flow;\nit.findAll{\nit.capacity}.each{\nit.capacity -= flow}\n};\ng.stopTransaction(TransactionalGraph.Conclusion.FAILURE);\n\nmax_flow;\")\nend\n```\n\nLet’s take a closer look at a few things. We use the index to look up our source (start) and sink (end) nodes, and use iterator().next() to get the first node from the Gremlin Groovy Pipeline returned by the index lookup. 
We also create a variable max_flow where our answer will go.\n\n```source = g.idx('states_index')[[name:'California']].iterator().next();\nsink = g.idx('states_index')[[name:'Illinois']].iterator().next();\nmax_flow = 0;\n```\n\nWe then set the transaction mode to manual by setting the MaxBufferSize to zero and start a new transaction. I’ll explain why in a minute.\n\n```g.setMaxBufferSize(0);\ng.startTransaction();\n```\n\nFrom our source, we go to a neighboring node looping these two outE.inV steps until we reach the sink node.\n\n```source.outE.inV.loop(2){\n!it.object.equals(sink)}.\n```\n\nFor each path we find the lowest capacity along the edges we traversed using the min() function and add it to the max_flow variable we created earlier.\n\n```paths.each{\nflow = it.capacity.min();\nmax_flow += flow;\n```\n\nThen we subtract the flow from the capacity property of each of the edges in our path. Take note we are actually altering data in this step.\n\n```it.findAll{\nit.capacity}.each{\nit.capacity -= flow}\n};\n```\n\nAt the end we return max_flow which has the answer to our question.\n\n```max_flow;\n```\n\nIf you tried to run this method again, or tried to run a similar method using different sinks and sources that traveled over these nodes you’ll have a problem. The capacities were modified and will most likely be zero or show the residual capacity of the transportation network we built.\n\nSo to prevent this we stop the transaction with a Failure. The changes we made to capacity are not committed and the graph stays the way it was.\n\n```g.stopTransaction(TransactionalGraph.Conclusion.FAILURE);\n```\n\nWe can visualize this example using D3.js and its Geo Path projections:", null, "As usual, all code is available on github. The max flow and related problems manifest in many ways. 
Water or sewage through underground pipes, passengers on a subway system, data through a network (the internet is just a series of tubes!), roads and highway planning, airline routes, even determining which sports teams have been eliminated from the playoffs.\n\nTagged , , , , ,\n\n## 2 thoughts on “Max Flow with Gremlin and Transactions”\n\n1.", null, "eugene says:\n\nDid you get a chance to run any performance benchmarks on this?\n\n2.", null, "emiretsk says:\n\nDid you get a change to run any performance benchmarks on this?" ]
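For readers who want to play with the idea outside of Gremlin, here is a minimal Python sketch of the same greedy path-saturation the traversal above performs, on the post's example network (my own illustration, not code from the post; note that full Ford–Fulkerson also pushes flow back along residual edges, which this sketch omits):

```python
# The example network: two direct CA->IL routes plus one via Texas.
# Parallel edges are allowed, so edges are kept as individual records.
edges = [
    {"from": "California", "to": "Illinois", "capacity": 1},
    {"from": "California", "to": "Illinois", "capacity": 2},
    {"from": "California", "to": "Texas",    "capacity": 1},
    {"from": "Texas",      "to": "Illinois", "capacity": 3},
]

def find_path(source, sink, visited=()):
    """Depth-first search for any path of edges with remaining capacity."""
    if source == sink:
        return []
    for e in edges:
        if e["from"] == source and e["capacity"] > 0 and e["to"] not in visited:
            rest = find_path(e["to"], sink, visited + (source,))
            if rest is not None:
                return [e] + rest
    return None

def max_flow(source, sink):
    total = 0
    while (path := find_path(source, sink)) is not None:
        flow = min(e["capacity"] for e in path)  # bottleneck along the path
        for e in path:
            e["capacity"] -= flow                # saturate, like the Gremlin loop
        total += flow
    return total

result = max_flow("California", "Illinois")
print(result)  # 1 + 2 + 1 = 4
```

Just like in the post, the capacities are destroyed by the computation: calling max_flow a second time immediately returns 0, which is exactly why the Gremlin version wraps the traversal in a transaction and rolls it back with TransactionalGraph.Conclusion.FAILURE.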
[ null, "https://maxdemarzidotcom.files.wordpress.com/2012/02/railway_network.png", null, "https://maxdemarzidotcom.files.wordpress.com/2012/02/map_max_flow.png", null, "https://2.gravatar.com/avatar/8849b485516ef2e739a1e8394acb2edd", null, "https://2.gravatar.com/avatar/8849b485516ef2e739a1e8394acb2edd", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8542145,"math_prob":0.9042529,"size":4925,"snap":"2020-45-2020-50","text_gpt3_token_len":1140,"char_repetition_ratio":0.107904896,"word_repetition_ratio":0.05516266,"special_character_ratio":0.2684264,"punctuation_ratio":0.1977343,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.963845,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,6,null,6,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-30T23:48:13Z\",\"WARC-Record-ID\":\"<urn:uuid:d1224385-2109-4f73-8e9e-337b2f7eacc2>\",\"Content-Length\":\"87650\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c3e06c3c-9c6a-46c9-9bc4-f44edeb22b74>\",\"WARC-Concurrent-To\":\"<urn:uuid:c00b2cb5-c3bb-49e3-8237-4b6e2ee5b2d9>\",\"WARC-IP-Address\":\"192.0.78.24\",\"WARC-Target-URI\":\"https://maxdemarzi.com/2012/02/21/max-flow-with-gremlin-and-transactions/\",\"WARC-Payload-Digest\":\"sha1:N3AHEJNI3TAMUCO46U7N6I25KMTEUPFV\",\"WARC-Block-Digest\":\"sha1:YGC3H6BI4AXA6NCXIXLJDTBGICXH2IL4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141515751.74_warc_CC-MAIN-20201130222609-20201201012609-00469.warc.gz\"}"}
https://docs.exasol.com/database_concepts/scripting/general_script_language.htm
[ "General Script Language\n\nLexical Conventions\n\nUnlike the SQL language, the script language is case-sensitive, that means upper and lower case has to be considered (for example, variable definitions). There is a constraint that variable and function identifiers should contain only ASCII characters. The ending semicolon (;) after a script statement is optional.\n\nThe two types of comments in Exasol are:\n\n• Line Comment: It begins with the character -- and indicates that the remaining part of the current line is a comment.\n• Block Comment: It is indicated by the characters /* and */ and can be spread across several lines. All of the characters between the delimiters are ignored.\n\nExample\n\n-- This is a single line comment\n\n/*\nThis is\na multiline comment\n*/\n\nTypes and Values\n\nThe following table describes the types that are distinguished in the script language.\n\nType Range of Values\nnil\n\nnil is the \"unknown type\".\n\nnull and NULL\n\nnull and NULL represent the SQL NULL. This constant is not included in the Lua language and was added by Exasol to allow comparisons with result data and returning NULL values.\n\nboolean\n\nBoolean values true and false.\n\nstring\n\nString values are specified in single or double quotes ('my_string' or \"my_string\") and consist of any 8-bit characters. Alternatively, you can enclose string values in double square brackets ([[my_string]]). This notation is especially useful if quotes are used in the string and if you don't want to use escape characters ('SELECT * FROM \"t\" WHERE v=\\'abc\\';' equates to \"SELECT * FROM \\\"t\\\" WHERE v='abc';\" or simply [[SELECT * FROM \"t\" WHERE v='abc';]]).\n\nnumber\n\nIntegers or floating point numbers (for example, 3, 3.1416, 314.16e-2, 0.31416E1)\n\ndecimal\n\nDecimal values (for example, 3,3.1416)\n\nThe type decimal is not a standard Lua type, it is a user-defined Exasol type (userdata). 
It is similar to the special value NULL.\n\nThe decimal type supports the following operators and methods for mathematical calculations and conversions.\n\nConstructor:\n\ndecimal(value [,precision [, scale]]): Value can be of type string, number or decimal. The default for precision and scale is (18,0), i.e. decimal(5.1) is rounded to the value 5.\n\nOperators:\n\n+, -, *, /, and %: Addition, subtraction, multiplication, division and modulo calculation of two numerical values. The return type is determined dynamically: decimal or number.\n\n==, <, <=, >, >=, and ~=: Comparison operators for numerical values. Return type: boolean\n\nMethods:\n\nvar:add(), var:sub(), var:mul(), var:mod(): Addition, subtraction, multiplication and modulo calculation of two numerical values. No new variable is created in this case.\n\nvar:scale(), var:prec(): Scale and precision of a decimal. Return type: number\n\nvar:tonumber(): Conversion into a number value. Return type: number\n\nvar:tostring(), tostring(var): Conversion into a string value. Return type: string\n\nExample\n\nHere are some examples of using decimal values.\n\nd1 = decimal(10)\nd2 = decimal(5.9, 2, 1)\ns = d1:scale()     -- s=0\nstr = tostring(d2)     -- str='5.9'\n\nSimple Variables\n\nScript variables are typed dynamically: the variables themselves do not have a type, only the values assigned to them do. You can use the operator = to assign a value.\n\nBy default, the scope of a variable is global.
However, you can limit it to the current execution block by using the keyword local.\n\nExasol recommends using local variables to explicitly show the variable declaration.\n\nExample\n\nlocal a = nil     -- nil\nlocal b = false     -- boolean\nlocal c = 1.0     -- number\nlocal d = 'xyz'     -- string\nd = 3             -- same variable can be used for different types\nlocal e,f = 1,2     -- two variable assignments at once\ng = 0             -- global variable\n\nArrays\n\nAn array consists of a list of values (my_array={2,3,1}) that can be heterogeneous (with different types).\n\nYou can access an element of an array through its position beginning from 1 (my_array[position]). The size of the array is determined by the # operator (#my_array). If the variable's value is nil, the # operator raises an exception.\n\nThe elements of an array can also be arrays. You can use this to create multidimensional arrays.\n\nExample\n\nlocal my_array = {'xyz', 1, false}    -- array\nlocal first = my_array[1]               -- accessing first entry\nlocal size = #my_array                   -- size=3\n\nDictionary Tables\n\nBesides simple variables and arrays, you can also use dictionary tables that consist of a collection of key/value pairs. These keys and values can be heterogeneous (with different types).\n\nYou can access a specific value by a key, either by using the array notation (variable[key]) or by using the point notation (variable.key).\n\nYou can iterate through all the entries of the dictionary (for k,v in pairs(t) do end) by using the function pairs(t).\n\nIn Lua documentation, there is no difference between arrays and the dictionary tables. 
They are both simply named table.\n\nExample\n\nlocal my_contact = {name='support',      -- define key/value pairs\nphone='unknown'}\nlocal n = my_contact['phone']              -- access method 1\nn = my_contact.phone                      -- access method 2\nmy_contact.phone = '0049911239910'          -- setting single value\n\n-- listing all entries in dictionary\nfor n,p in pairs(my_contact) do\noutput(n..\":\"..p)\nend\n\n-- defining a 2 dimensional table\nlocal my_cube = {{10, 11, 12}, {10.99, 6.99, 100.00}}\nlocal my_value = my_cube[2][3]       -- -> 100.00\n\n-- defining \"column names\" for the table\nlocal my_prices = {product_id={10, 11, 12}, price={10.99, 6.99, 100.00}}\nlocal my_product_position = 3\nlocal my_product_id = my_prices.product_id[my_product_position] -- -> 12\nlocal my_price = my_prices.price[my_product_position] -- -> 100.00\n\nExecution Blocks\n\nExecution blocks are the elements that limit the scope of local variables. The script itself is the outermost execution block. Other blocks are defined through Control Structures or Functions declarations.\n\nYou can explicitly declare the blocks through do end. 
They are useful to limit the scope of the local variables.\n\nExample\n\n-- this is the outermost block\na = 1        -- global variable visible everywhere\n\nif var == false then\n-- the if construct declares a new block\nlocal b = 2    -- b is only visible inside the if block\nc = 3            -- globally visible variable\nend\n\n-- explicitly declared block\ndo\nlocal d = 4; -- not visible outside this block\nend\n\nControl Structures\n\nThe following control structures are supported.\n\nElement Syntax\nif\nif <condition> then <block>\n[elseif <condition> then <block>]\n[else <block>]\nend\nwhile\nwhile <condition> do\n<block>\nend\nrepeat\nrepeat\n<block>\nuntil <condition>\nfor\nfor <var>=<start>,<end>[,<step>] do\n<block>\nend\nfor <var> in <expr> do\n<block>\nend\n\nUsage Notes:\n\n• The condition <condition> is evaluated as false if its value is false or nil, otherwise it is evaluated as true. This means that the value 0 and an empty string are evaluated as true.\n• The control expressions <start>, <end>, and <step> of the for loop are evaluated only once, before the loop starts. They must all result in numbers. Within the loop, you may not assign a value to the loop variable <var>. If you do not specify a value for <step>, then the loop variable is incremented by 1.\n• The break statement can be used to terminate the execution of while, repeat, and for, skipping to the next statement after the loop. For syntactic reasons, the break statement can only be written as the last statement of a block. 
If it is necessary to break in the middle of a block, then an explicit block can be used (do break end).\n\nExample\n\nif var == false\nthen a = 1\nelse a = 2\nend\n\nwhile a <= 6 do\np = p*2\na = a+1\nend\n\nrepeat\np = p*2\nb = b+1\nuntil b == 6\n\nfor i=1,6 do\nif p < 0 then break end\np = p*2\nend\n\n-- print all keys of table 't'\nfor k in pairs(t) do\nprint(k)\nend\n\nOperators\n\nExasol supports the following operators in the scripting language.\n\nOperator Description\n+, -, *, /, %\n\nCommon arithmetic operators.\n\nFloat arithmetic is always used.\n\n^\n\nPower (2^3=8)\n\n==, ~=\n\nIf the operands of the equality operator (==) have different types, the condition is always evaluated as false. The inequality operator (~=) is exactly the opposite of the equality operator.\n\n<, <=, >, >=\n\nComparison operators\n\nand, or, not\n• and returns the first operand, if it is nil or false, otherwise the second operand.\n• or returns the first operand, if it is not nil or false, otherwise the second one\n• Both operators use short-cut evaluation, that is, the second operand is evaluated only if required.\n• not returns true, if the operand is nil or false, otherwise it returns false.\n..\n\nConcatenation operator for strings and numerical values.\n\nOperator precedence follows the below priority order (higher to lower):\n\n1. ^\n2. not, - (negation)\n3. *, /, %\n4. +, -\n5. ..\n6. <, >, <=, >=, ~=, ==\n7. and\n8. 
or\n\nYou can use parentheses to change the precedence in an expression.\n\nExample\n\nlocal x = 1+5        --> 6\nx = 2^5             --> 32\nx = 1==1            --> true\nx = 1=='1'            --> false\nx = 1~='1'            --> true\nx = true and 10    --> 10\nx = true and nil   --> nil\nx = 10 and 20        --> 20\nx = false and nil  --> false\nx = nil and false  --> nil\nx = true or false  --> true\nx = 10 or 20        --> 10\nx = false or nil   --> nil\nx = nil or false   --> false\nx = nil or 'a'        --> 'a'\nx = not true        --> false\nx = not false        --> true\nx = not nil        --> true\nx = not 10            --> false\nx = not 0            --> false\nx = 'abc'..'def'   --> 'abcdef'\n\nFunctions\n\nYou can structure scripts using functions.\n\nSyntax\n\nfunction <name> ( [parameter-list] )\n<block>\nend\n\nUsage Notes:\n\n• Simple variables are passed by value; they cannot be manipulated within the function. However, arrays and dictionary tables are passed by reference, which means their entries are mutable. If you assign a completely new object, then the original function parameter is not affected.\n• If you call a function with too many arguments, the extra ones are ignored. If you call it with too few arguments, the missing ones are initialized with nil.\n• Through return, you can exit a function and return one or more return values. For syntactic reasons, the return statement can only be written as the last statement of a block when it returns a value. 
If it is required to return in the middle of a block, then an explicit block can be used (do return end).\n• Functions are first-class values; they can be stored in a variable or passed in a parameter list.\n\nExample\n\nfunction min_max(a,b,c)\nlocal min,max=a,b\nif a>b then min,max=b,a\nend\nif c>max then max=c\nelseif c<min then min=c\nend\nreturn min,max\nend\n\nlocal lowest, highest = min_max(3,4,1)\n\nError Handling through pcall() and error()\n\nUsually, a script terminates whenever any error occurs. However, in some cases you may need to handle special errors and perform some actions. For such scenarios, you can use the following functions:\n\n• pcall(): It stands for protected call. You can use this to protect a function call. The parameters are the function name and all parameters of the function, for example, pcall(my_function,param1,param2) instead of my_function(param1,param2). The function pcall() returns the following two values:\n• Success of the execution: false if any error occurred.\n• Result: the actual result if no error occurred, otherwise the exception text.\n• error(): Throws an error which terminates a function or the script.\n\nExample\n\n-- define a function which can throw an error\nfunction divide(v1, v2)\nif v2==0 then\nerror()\nelse\nreturn v1/v2\nend\nend\n\n-- this command will not abort the script\nlocal success, result = pcall(divide, 1, 0)\nif not success then\nresult = 0\nend" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.70229536,"math_prob":0.97128737,"size":11034,"snap":"2022-05-2022-21","text_gpt3_token_len":2780,"char_repetition_ratio":0.12130553,"word_repetition_ratio":0.03368421,"special_character_ratio":0.27922785,"punctuation_ratio":0.13170499,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98920953,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-19T01:14:36Z\",\"WARC-Record-ID\":\"<urn:uuid:b35d2b7b-f972-4919-8db4-e264ba501a80>\",\"Content-Length\":\"88645\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:15f5565b-b486-4d93-893e-5edeaa3047a5>\",\"WARC-Concurrent-To\":\"<urn:uuid:a3c7a092-da63-4b6e-80ac-09dc773f03e5>\",\"WARC-IP-Address\":\"213.95.129.19\",\"WARC-Target-URI\":\"https://docs.exasol.com/database_concepts/scripting/general_script_language.htm\",\"WARC-Payload-Digest\":\"sha1:Y2LQG4W27AQI6GLYOYTCTVFXKB7HL7HU\",\"WARC-Block-Digest\":\"sha1:NSE3HXKYXWF53D7HHYRJCLA5EZHXMP4A\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320301217.83_warc_CC-MAIN-20220119003144-20220119033144-00137.warc.gz\"}"}
https://0-bmcmedresmethodol-biomedcentral-com.brum.beds.ac.uk/articles/10.1186/s12874-017-0364-y
[ "# Simulation of complex data structures for planning of studies with focus on biomarker comparison\n\n## Abstract\n\n### Background\n\nThere are a growing number of observational studies that do not only focus on single biomarkers for predicting an outcome event, but address questions in a multivariable setting. For example, when quantifying the added value of new biomarkers in addition to established risk factors, the aim might be to rank several new markers with respect to their prediction performance. This makes it important to consider the marker correlation structure for planning such a study. Because of the complexity, a simulation approach may be required to adequately assess sample size or other aspects, such as the choice of a performance measure.\n\n### Methods\n\nIn a simulation study based on real data, we investigated how to generate covariates with realistic distributions and what generating model should be used for the outcome, aiming to determine the least amount of information and complexity needed to obtain realistic results. As a basis for the simulation a large epidemiological cohort study, the Gutenberg Health Study was used. The added value of markers was quantified and ranked in subsampling data sets of this population data, and simulation approaches were judged by the quality of the ranking. One of the evaluated approaches, the random forest, requires original data at the individual level. Therefore, also the effect of the size of a pilot study for random forest based simulation was investigated.\n\n### Results\n\nWe found that simple logistic regression models failed to adequately generate realistic data, even with extensions such as interaction terms or non-linear effects. The random forest approach was seen to be more appropriate for simulation of complex data structures. 
Pilot studies starting at about 250 observations were seen to provide a reasonable level of information for this approach.\n\n### Conclusions\n\nWe advise to avoid oversimplified regression models for simulation, in particular when focusing on multivariable research questions. More generally, a simulation should be based on real data for adequately reflecting complex observational data structures, such as found in epidemiological cohort studies.\n\n## Background\n\nWhen planning a new, potentially large cohort study, simulations can help to judge aspects such as sample size or choice of a statistical approach that might be needed for adequately investigating the effect of biomarkers for an outcome of interest. In particular, such simulation studies allow to take potentially complex correlation structures, covariate distributions and potentially non-linear effects or interactions into account when investigating several biomarkers and known risk factors simultaneously. Therefore, a simulation study may be more adequate for example to assess the sample size needed than using probably oversimplifying sample size formulas. A simulation might also be useful beyond sample size planning, e.g. for picking good measures for biomarker performance. Since a simulation study could also be based on oversimplifying assumptions, at the risk of answers that are not better than, e.g., closed-form sample size formulas, it is of prime importance to use a data generating model of adequate complexity, reflecting realistic data structure. Naturally, this entails the danger of requiring a large amount of information about the population and covariate structure of interest, or introducing a considerable number of simulation parameters that cannot be selected adequately. 
In the following, this work focusses specifically on how to generate correlated covariates with realistic distribution and on what generating model should be used for a simulated outcome to deal with complex multivariable research questions, exemplarily considering the task of ranking biomarkers with respect to their added value.\n\nThere is a lack of literature on sample size or power calculation methods that consider correlated multidimensional covariate data in a regression model, as well as on simulation methodology for this setting. Most of the established methods use sample size formulas for a regression model with only one covariate [1, 2]. Schmoor et al. proposed a sample size formula for a prognostic problem, where additionally a second correlated factor is considered. This requires knowledge about the joint distribution of these two factors and is restricted to a Cox proportional hazards model with one predictor of interest. Jinks et al. derived a formula for a multivariable prognostic model based on the overall prognostic ability, where the prognostic ability is quantified by the measure of discrimination D and sample size calculation is based on the significance of the D value. Comparable approaches can be applied with the overall discrimination ability (AUC) of the model, for which a sample size calculation can be derived as well [6, 7]. Other authors discuss methods for the problem where the number of predictors is larger than the actual number of samples. This introduces a selection problem, where informative predictors have to be identified in a mixture of informative and non-informative predictors. De Valpine et al. gave a two-step method, where in the first step a simulation is used to reproduce the selection process of informative predictors and in a second step an approximation method for a linear discriminant problem is used. A similar two-step approach was developed by Dobbin et al. but with another methodology.
A further approach with a variable selection step was proposed in Götte et al., where the sample size determination is focused on the prediction accuracy instead of power. Unfortunately, most of these approaches are based on uncorrelated variables. Binder et al. investigated different scenarios with a small or large amount of information, different covariate distributions and non-linear functional forms of relationship. The simulation revealed the importance of aspects like covariate distributions or functional form and demonstrated the impact. However, the primary aim of that work was not the planning of new studies but comparing approaches for modeling of non-linear effects. Therefore, the present work specifically investigates the degree of complexity that may be required for a realistic simulation study and techniques to use for an adequate generating model.\n\nWe exemplarily consider settings with a binary outcome, which are frequently found in observational data for biomedical research questions considering disease risks. However, most aspects of the simulation method can be easily applied to a continuous outcome as well. As candidate techniques for generating simulated covariate distributions, i.e. biomarkers and established predictors, two approaches were compared: drawing from multivariate normal distributions alone, or additionally transforming according to a known empirical distribution to mimic that distribution as exactly as possible. For generating the clinical outcome, standard linear models and extensions via non-linear terms and interactions were used. As a non-parametric approach the random forest model was considered, which requires individual data as basis. As gold standard, repeated sampling from a large population-based, epidemiological cohort, the Gutenberg Health Study (GHS), was used, and we judged how closely simulated data based on aggregate information, such as odds ratios or correlation matrices (which might be found e.g.
in the literature), agree with the gold standard. The use of a pilot study as basis for simulation was also considered, and the effect of the pilot study size was investigated. As a measure to assess the performance of simulation compared to the defined gold standard, the rankings of biomarkers based on simulated data and based on repeated draws from the population data were compared. For ranking biomarkers according to their added value, the difference in Brier score, the increase in AUC and the difference in pseudo-R² were considered as added value measures.\n\nIn “Population sample” section the GHS study and the exemplary biomarkers and endpoints to be used for investigating simulation approaches are introduced. Concerning the latter, the overall simulation structure is presented in “The general simulation structure”; we discuss simulation of covariates in “Covariate matrix” section, and different approaches for generating a simulated phenotype in “Clinical response generating” section. Different measures for added value are discussed in “Quantifying added value” section and the simulation quality criterion in “Reference ranking” section. The population results are presented in “Population sample results” section. Results on different strategies for simulation are presented in “Reference mean ranks” and in “Comparison of simulation approaches” sections. “Pilot sample size” section specifically investigates different pilot study sizes. Concluding remarks are given in “Discussion” and “Conclusions” sections.\n\n## Methods\n\n### Population sample\n\nAs an application example, the Gutenberg Health Study (GHS) sample was used. The GHS is a population-based prospective, observational, single-center cohort study from Germany at the University Medical Center in Mainz. With the first 5000 participants enrolled from April 2007 to October 2008, it is so far a cross-sectional, large, population-based sample.
The primary aim of the GHS study is to evaluate and improve cardiovascular risk prediction. The participants are aged between 35 and 74 years, with a nearly equal proportion of men and women. The sample was taken from the population in the Mainz and Mainz-Bingen area in Germany. The whole study sample includes 15010 individuals. The analysis was restricted to the first 5000 enrolled participants, because the measurement of the biomarkers of interest was only accomplished for this subsample. After quality control and data cleaning using the complete case principle for the variables of interest, a sample with 4519 individuals remained. Most missing values occurred in the outcome. Missing values were randomly distributed and resulted mainly from logistical problems. Specifically, the binary variable functional cardiac disorder (FCD) was used as medical outcome. The focus of this work was on a binary outcome, because this approach is commonly used in medicine and plays a more important role in risk prediction than continuous traits do. A basic prediction model for this event was defined as a simple model with sex, age and body-mass-index as covariates. This basic model was extended with different biomarkers. One biomarker was added at a time, and the improvement in prediction was evaluated with three different added value methods described in Quantifying added value. The following biomarkers of interest were selected in advance: MR-proADM, Nt-proBNP, hs-CRP, CT-proAVP and MR-proANP. The results in the GHS sample are presented in more detail in 8.\n\n### The general simulation structure\n\nFor exploring the best simulation approach, the following simulation structure was used. The generation algorithm of artificial data can be divided into two parts. First, the covariate data set, which includes established predictors and all biomarkers of interest, needs to be generated based on population data.
In this regard, the distribution of the simulated covariates, including the correlation structure, should reflect the structure in the real data. Two different approaches were used for this: the covariates either follow a multivariate normal distribution or an empirical distribution extracted from existing data, e.g. a pilot study. To evaluate and illustrate the approach, the population data, which cover non-normal data distributions, were used. Both approaches are described in more detail in 8. The second part is the generation of a simulated clinical response using the simulated covariates of the first part. For this purpose, four different approaches with increasing complexity were used: starting with a simple logistic regression model, followed by a logistic regression model which includes selected interaction terms, and a generalized additive model (GAM) with non-linear effects. The last approach was the random forest model, a rather complex approach. All approaches are described further in 8. In total, four different approaches for the clinical response simulation and two different methods for covariate data generation were investigated. To compare simulation approaches, a gold standard is required against which the simulation quality can be judged. This gold standard has to be reproduced with the simulated data. As a gold standard, the reference ranking of biomarkers and the reference values of the added value measures in the application example were used. The application example is described in more detail in 8 and the ranking procedure is described in 8. The four approaches were compared without consideration of the pilot study size. This means that, for generating the artificial data, the whole population sample was taken as the source of information. The pilot study size was investigated at the end for the best simulation approach. The simulation design is presented schematically in Fig.
1: on the left side the determination of reference values is displayed and on the right side the structure of the simulation procedure. All simulations were done in R version 3.2.2 (2015-08-14). Additionally, the following R packages were used: mvtnorm version 1.0-3 [17, 18], gam version 1.12 and randomForest version 4.6-12 .

### Generating artificial data

#### Covariate matrix

For a successful simulation of a realistic sample, the covariate data need to be simulated adequately. One of the major aspects of the simulation is the correlation structure of the covariate matrix. Not only is the correlation between the markers and the basic-model covariates important; the correlation among the markers themselves also plays a key role. Consequently, the whole correlation matrix, including all markers of interest and known risk factors, must be taken into account. A natural way to simulate the distribution of covariates while simultaneously considering a correlation structure is to use a multivariate normal distribution based on the correlation matrix, location and dispersion of the real data. A predefined covariance matrix determines the variance as well as the correlation structure of the generated data ; this covariance matrix can easily be obtained from the population data. Since the dichotomous variable sex was included in the model, all random covariate data were generated sex-specifically and pooled afterwards. All covariates of interest, except sex, were nearly normally distributed or were log-transformed to approximate a normal distribution. Though this method covers the correlation structure of the original data well, variables are rarely exactly normally distributed in a real data set. A more exact method to mimic the real distribution is to generate the covariate matrix from a multivariate normal distribution and to take the corresponding quantiles from the empirical distribution of a real data set.
It requires more information, but reflects even small deviations from the normal distribution, if present, without destroying the correlation structure. By using this method, artificial data with a correct correlation structure can be generated that perfectly mimic the true distribution. This can be seen as the most realistic and sophisticated simulation method for the covariate data. In the simulations of this work, both approaches were explored, together with the benefit of the additional effort.

#### Clinical response generating

For the generation of the clinical response, the relationship between the outcome and the covariate matrix has to be taken into account as accurately as possible. For this purpose, prediction models for the outcome were fitted to the population data using different modeling approaches. These prediction models are then used to predict the probabilities of the event given the artificial, simulated covariate data. To generate the outcome for the simulated data set, random numbers were generated from the binomial distribution given the predicted probabilities. Even if the biomarker comparison is made with a simple logistic regression model, more complex models can be used for the simulation of the relationship. Additionally, for the comparison of markers it is essential that the association with the outcome is simulated considering all markers simultaneously. For that purpose, the prediction models for the binary endpoint in the population data were fitted using all markers and covariates in one model. For the simulation, a set of different models with increasing complexity was selected. The simplest approach is to model the relationship linearly with logistic regression (GLM). Additional interaction effects can be taken into account, which leads to a GLM model with interaction terms (GLM+I).
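To make the two-part generating procedure concrete, here is a minimal sketch in Python (the study itself used R). Correlated standard-normal draws are mapped to the quantiles of a hypothetical pilot sample, so the marginals follow the empirical distribution while the correlation structure is kept; a binary outcome is then drawn from the binomial distribution given the probabilities of an assumed logistic model. All data, coefficients and sample sizes are invented for illustration.

```python
import math
import random

random.seed(7)

def correlated_normals(n, rho):
    """Draw n pairs of standard normals with correlation rho (2x2 Cholesky)."""
    pairs = []
    for _ in range(n):
        z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
        pairs.append((z1, rho * z1 + math.sqrt(1 - rho ** 2) * z2))
    return pairs

def to_empirical(z, pilot_values):
    """Map a standard-normal draw to the corresponding quantile of pilot data."""
    srt = sorted(pilot_values)
    p = 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF Phi(z)
    return srt[min(int(p * len(srt)), len(srt) - 1)]

# Hypothetical "pilot" data: age roughly uniform, marker right-skewed (non-normal)
pilot_age = [random.uniform(35, 74) for _ in range(500)]
pilot_marker = [math.exp(random.gauss(0, 0.5)) for _ in range(500)]

# Part 1: covariates with correlation 0.4 and empirical marginals
covs = [(to_empirical(z1, pilot_age), to_empirical(z2, pilot_marker))
        for z1, z2 in correlated_normals(1000, rho=0.4)]

# Part 2: binary outcome drawn from an assumed fitted logistic model
b0, b_age, b_marker = -4.0, 0.05, 0.8  # invented coefficients
data = []
for age, marker in covs:
    p = 1.0 / (1.0 + math.exp(-(b0 + b_age * age + b_marker * marker)))
    data.append((age, marker, 1 if random.random() < p else 0))

events = sum(y for _, _, y in data)
print(f"simulated {len(data)} rows, event rate = {events / len(data):.2f}")
```

Because the simulated covariates are looked up in the sorted pilot values, any non-normal marginal shape of the pilot data carries over to the artificial sample, which is exactly the motivation for the quantile-based approach described above.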
For this model, the six strongest pairwise interaction effects were selected with a stepwise bidirectional selection method based on the AIC (Akaike information criterion) and added to the GLM model. The following interactions were selected automatically: sex with age, CT-proAVP with age, Nt-proBNP with MR-proANP, and interactions of MR-proADM with BMI, Nt-proBNP, CT-proAVP and MR-proANP. Notably, even non-significant biomarkers were included in interactions and seemed to improve the global model. This information could be important because interactions between the markers cannot be detected in a prediction model including only a single marker, but they may influence the overall results of the simulation. If the relationship between the outcome and the biomarkers is in reality non-linear, non-linear modeling would be more appropriate. A generalized additive model with smoothing splines (GAM) was used to model complex non-linear relationships, if present, but interactions are omitted in this approach. To cover non-linear relationships and complex interaction structures simultaneously, a more complex model could be necessary. One possibility is to use the random forest model (RF) , based on classification trees (CART) . For the random forest models, no pruning step was performed, so all trees were maximally grown trees. The number of trees was set to 1000 and the number of variables randomly sampled as candidates at each split was $$\left \lceil \sqrt {p}\right \rceil$$, where p is the number of predictors available. This method is described in more detail in the next section. In this work, both methods of generating the covariate data are presented only for the GLM and random forest approaches.
For all other approaches, only covariate data simulated based on the empirical distribution are presented, as this leads to better simulation results.

#### Random Forest approach

Since the random forest (RF) plays an important role in the simulation and is not a standard method, it is described in more detail in the following. A short overview of the construction of RF can be found at the end of the section. A random forest is an ensemble of classification or regression trees (CARTs). In the case of a binary outcome, classification trees are used. The trees in RF are unpruned. This means that each tree is grown to the largest extent possible, subject only to a minimum size of the terminal nodes, usually 1. Each tree is grown on a bootstrap subsample drawn from the original training sample. Let N be the number of individuals in the whole training sample and N* the size of a bootstrap sample. Usually, N*=N for sampling with replacement and N*<N for sampling without replacement. For the simulation, sampling with replacement was used. The number of distinct individuals in the bootstrap sample is then approximately 0.632·N, see . The remaining individuals are out-of-bag (OOB) and can be used for an internal validation or out-of-bag prediction, which is not explained further here. One of the tuning parameters of RF is the number of trees to be generated. Let B be the number of trees, and consequently the number of bootstrap samples in RF, often also called ntree. Another source of diversity in RF is the fact that not all predictor variables are used at the same time; rather, a set of randomly selected predictors is considered at each node for the split in a tree. Let m denote the total number of predictors available in the training sample and mtry the number of predictors randomly chosen at each node. Consequently, mtry is another tuning parameter of RF.
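The construction just described can be illustrated with a deliberately tiny sketch in Python (the study itself used the R package randomForest). To stay short, each "tree" here is only a decision stump, the candidate cutoffs are the observed values, and all data are invented; the bootstrap step, the random mtry feature subset, the Gini-based split and the averaging of per-tree majority votes follow the description above.

```python
import math
import random

random.seed(1)

def gini(f):
    """Gini index of a node with event fraction f: G = 2*f*(1-f)."""
    return 2 * f * (1 - f)

def fit_stump(rows, feature_ids):
    """Best single split (feature, cutoff) among candidate features, by Gini."""
    best = None
    for j in feature_ids:
        for cut in {r[0][j] for r in rows}:
            left = [y for x, y in rows if x[j] <= cut]
            right = [y for x, y in rows if x[j] > cut]
            if not left or not right:
                continue
            score = (len(left) * gini(sum(left) / len(left)) +
                     len(right) * gini(sum(right) / len(right))) / len(rows)
            if best is None or score < best[0]:
                best = (score, j, cut, sum(left) / len(left), sum(right) / len(right))
    return best  # (score, feature, cutoff, event fraction left, event fraction right)

def rf_fit(rows, B, mtry):
    """B bootstrap samples; one stump per sample; mtry random candidate features."""
    m = len(rows[0][0])
    forest = []
    for _ in range(B):
        boot = [random.choice(rows) for _ in rows]   # sampling with replacement
        forest.append(fit_stump(boot, random.sample(range(m), mtry)))
    return forest

def rf_prob(forest, x):
    """Average the per-tree majority votes to estimate P(y=1 | x)."""
    votes = []
    for _, j, cut, p_left, p_right in forest:
        p = p_left if x[j] <= cut else p_right
        votes.append(1 if p >= 0.5 else 0)           # majority vote of the node
    return sum(votes) / len(votes)

# Invented training data: the outcome depends mainly on feature 0
rows = []
for _ in range(100):
    x = (random.gauss(0, 1), random.gauss(0, 1), random.gauss(0, 1))
    rows.append((x, 1 if x[0] + random.gauss(0, 0.5) > 0 else 0))

m = 3
forest = rf_fit(rows, B=50, mtry=math.ceil(math.sqrt(m)))  # mtry = ceil(sqrt(3)) = 2
print("P(y=1 | x0=+2):", rf_prob(forest, (2.0, 0.0, 0.0)))
print("P(y=1 | x0=-2):", rf_prob(forest, (-2.0, 0.0, 0.0)))
```

The vote averaging in `rf_prob` corresponds to the probability estimate defined below; a real RF would of course grow full trees rather than stumps.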
The default for classification problems is usually $$\left \lceil \sqrt {m}\right \rceil$$. Classification trees use a splitting function called the Gini index to determine which attribute to split on and what the best cutoff is. The Gini index is defined as $$G_{k}=2f(1-f)$$, where f represents the fraction of events assigned to node k. In contrast to a single classification tree, RF returns not only the classification decision but can also estimate the predicted probability for an event. For B trees in RF, the predicted probability for a new individual is:

$$\hat{P}(y=1|\mathbf{x})=\frac{1}{B}\sum_{b=1}^{B}\pi_{b}(\mathbf{x}),$$

where $$\pi_{b}(\mathbf{x})$$ is the majority vote in the terminal node into which the new individual is dropped for the bth tree, i.e., the classification decision of a single tree for outcome status y∈{0,1}, given the covariate vector x. For more information and features of RF see [29, 30].

The construction of RF is described in the following steps:

1. Select randomly a total of N* individuals from the original training sample, with replacement. This leads to a bootstrap sample. Repeat this procedure B times.

2. In each bootstrap sample, grow an unpruned classification tree. The tree is constructed by recursively splitting the data into two distinct sub-samples. At each node, randomly select mtry predictors from the total of m predictor variables. Choose the best split from among the mtry predictors by minimizing the Gini index as a measure of node purity.

3. For the calculation of the predicted probability, each new individual is dropped down a tree until its terminal node. The majority vote for the event status in this terminal node is determined. The probability estimate is then the average of the majority votes over all trees.

### Gold standard

For comparing the predictive strength of biomarkers, the concept of added value, which describes the prediction performance of a model, was chosen.
This can be measured with several different established measures. Three of them were selected, without the intent of giving a complete list of existing measures. The first one, the Brier score [31, 32], measures the mean squared difference between the predicted probability and the actual outcome. It takes values between zero and one, since one is the largest possible difference between a predicted probability and a binary outcome. The lower the Brier score, the better the prediction performance. The Brier score is defined for a binary outcome as $$BS=\frac {1}{n}\sum _{i=1}^{n}(p_{i}-y_{i})^{2}$$, where $$p_{i}$$ is the predicted probability, n is the sample size and $$y_{i}$$ is the actual, observed outcome. The second common measure is the area under the curve, AUC [5, 7, 32–34], from Receiver Operating Characteristic (ROC) methodology, which quantifies the discrimination ability. It can be interpreted as the probability that a randomly selected subject with an event will be ranked higher in terms of predicted probability than a randomly selected subject without an event. One possible definition of the AUC is given by $$AUC=\frac {1}{n_{1}n_{0}}\left (\sum _{i=1}^{n}(rank(p_{i})y_{i}) - \frac {n_{1}^{2} + n_{1}}{2}\right)$$, where $$n_{1}$$ is the number of events and $$n_{0}$$ is the number of non-events. Third, the coefficient of determination R² was used; for a binary outcome, this is the generalization of R² for generalized linear models from Nagelkerke . The Nagelkerke R² coefficient is scaled to a minimum of 0 for no determination and a maximum of 1 for perfect determination.
The definition of R² in terms of the log-likelihood is $$R^{2}=\frac {1}{e^{(-2LL_{0}/n)}-1} \left (e^{((-2LL_{1} + 2LL_{0})/n)}-1\right)$$, where $$LL_{0}$$ is the log-likelihood of the null model with only the intercept term and $$LL_{1}$$ is the log-likelihood of the model of interest.

To quantify the improvement of the extended model, including an additional marker, over the basic model, it is straightforward to use the difference in these measures. This results in the following three measures: the Brier score difference, which has the form

$$BSD=\frac{1}{n}\left(\sum_{i=1}^{n}\left(p_{1i}-y_{i}\right)^{2}-\sum_{i=1}^{n}\left(p_{0i}-y_{i}\right)^{2}\right),$$

where $$p_{1i}$$ stands for the predicted probability from the model with the new marker and $$p_{0i}$$ for the predicted probability from the basic model; the increase in AUC, which can be reduced to the form

$$\begin{array}{@{}rcl@{}} IAUC=\frac{1}{n_{1}n_{0}}\left(\sum_{i=1}^{n}\left(rank(p_{1i})y_{i}\right) - \sum_{i=1}^{n}\left(rank\left(p_{0i}\right)y_{i}\right)\right); \end{array}$$

and the Nagelkerke R² difference,

$$R^{2}D=\frac{1}{e^{(-2LL_{0}/n)}-1} \left(e^{(-2LL_{1}/n)}-e^{(-2LL_{2}/n)}\right),$$

with $$LL_{1}$$ as the log-likelihood of the basic model and $$LL_{2}$$ that of the extended model. The different measures represent different aspects of improvement in prediction, like calibration for the Brier score or discrimination ability for the AUC . This small set of measures is a good representation of the most common performance measures and covers the most important aspects of added value.

#### Reference ranking

As a criterion for simulation success, the relative ranking of the biomarkers was used. It reflects the aims of a biomarker comparison study in a direct and intuitive way and allows the comparison of results from different added value measures. Therefore, the top three markers were ranked by each added value measure separately.
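As a sketch of how these three differences could be computed, the following Python fragment (illustrative only; the study itself used R) evaluates BSD, IAUC and the R² difference on invented predictions p0 (basic model) and p1 (extended model). The AUC uses the rank form given above with ties handled naively, and the Nagelkerke R² is computed in its standard log-likelihood form, which is an assumption about the exact variant used.

```python
import math

def brier(p, y):
    """Brier score: mean squared difference between prediction and outcome."""
    return sum((pi - yi) ** 2 for pi, yi in zip(p, y)) / len(y)

def auc(p, y):
    """Rank-based (Mann-Whitney) AUC as in the text; ties broken arbitrarily."""
    n1 = sum(y)
    n0 = len(y) - n1
    order = sorted(range(len(p)), key=lambda i: p[i])
    rank = [0.0] * len(p)
    for r, i in enumerate(order, start=1):
        rank[i] = r
    s = sum(rank[i] * y[i] for i in range(len(y)))
    return (s - (n1 ** 2 + n1) / 2) / (n1 * n0)

def loglik(p, y):
    """Bernoulli log-likelihood of predictions p for outcomes y."""
    return sum(yi * math.log(pi) + (1 - yi) * math.log(1 - pi)
               for pi, yi in zip(p, y))

def nagelkerke_r2(ll_model, ll_null, n):
    """Standard Nagelkerke R^2 from log-likelihoods (assumed form)."""
    cox_snell = 1 - math.exp(2 * (ll_null - ll_model) / n)
    return cox_snell / (1 - math.exp(2 * ll_null / n))

# Invented predictions: p0 = basic model, p1 = basic model + marker
y  = [1, 0, 1, 0, 1, 0, 0, 1, 0, 0]
p0 = [0.6, 0.4, 0.5, 0.3, 0.5, 0.5, 0.2, 0.4, 0.3, 0.4]
p1 = [0.7, 0.3, 0.6, 0.2, 0.6, 0.4, 0.1, 0.5, 0.3, 0.3]

n = len(y)
bsd  = brier(p1, y) - brier(p0, y)   # negative value = improvement
iauc = auc(p1, y) - auc(p0, y)       # positive value = improvement
ll_null = loglik([sum(y) / n] * n, y)
r2d = (nagelkerke_r2(loglik(p1, y), ll_null, n)
       - nagelkerke_r2(loglik(p0, y), ll_null, n))

print(f"BSD={bsd:+.4f}  IAUC={iauc:+.4f}  R2D={r2d:+.4f}")
```

Note the sign conventions: an improved model lowers the Brier score (BSD < 0) but raises the AUC and R² (IAUC > 0, R²D > 0), which matters when the differences are used to rank markers.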
The simulation is restricted to the top three markers because the others have very small to non-existent effects, see 8. As the basic prediction model is used as a reference for all markers, ranking based on the added value measure itself or on its difference leads to the same ranking. To obtain reference rankings, a resampling method on the population data was used; the target criterion is the mean rank of the markers. To estimate the mean rank, 10000 bootstrap samples from the population data were drawn. By bootstrapping with real data, the distribution and correlation structure of the population data are considered in a natural way. For each bootstrap sample, the added value measures were calculated and compared between the biomarkers. This leads to a specific rank in each bootstrap sample for each of the top three markers. The mean rank of a marker over all bootstrap samples is used as the reference. This reference has to be replicated in the artificial data in the simulation to ensure the applicability of the simulation approach. The resulting reference rankings are shown in 8. Additionally, the similarity of the absolute added values from the simulation results to the reference values was examined as a more specific criterion. A good consistency in absolute values would demonstrate even better simulation accuracy.

## Results

### Population sample results

To ensure a stable and well specified model for risk prediction, model diagnostics were carried out in the population data for all models, including the basic model, the extended models with one biomarker and a full model with all markers simultaneously. The model diagnostics covered calibration, influential observations and collinearity. The event frequency was 26.7% (n=1205 of N=4519), so even in the full model there were 150 events per covariate. The overall calibration was good in all models, as the mean predicted probability ranged between 26.02% and 26.87%.
The calibration in subgroups of the predictions was similarly good in all models and led to a significant Hosmer-Lemeshow test (p=0.042) in only one model, which should not be over-interpreted given this large sample size. Using Cook’s distance [38, 39], no strongly influential observations were detected. Cook’s distance is a single-case deletion statistic which quantifies the change of the estimates after removing a single observation.

The variance inflation factor (VIF) [40, 41] was used to examine collinearity. Values of the VIF greater than 5 were considered problematic. In the single extended models with only one additional marker, no VIF values over 2 were observed. In the full model with all biomarkers incorporated simultaneously, the maximum VIF was 2.6 for MR-proANP. In summary, the basic model was stable and well calibrated; there were no collinearity problems and no strong outliers or influential observations after the log-transformation of the markers.

Regarding the prediction ability of the model with sex, age and body-mass-index as covariates, the basic model had a moderate predictive value according to the three measures of added value. The AUC of the basic model was 0.76, the Brier score was 0.163 and the R² 0.227. A basic AUC of 0.76 is in the mid-range between random (AUC=0.5) and perfect discrimination (AUC=1). Consequently, the basic prediction model was reasonable, but offered enough space for improvement in prediction. Nevertheless, the evaluation of the models additionally incorporating one of the biomarkers of interest yielded only a weak influence of the biomarkers. The associations with MR-proADM, Nt-proBNP and hs-CRP were significant at the 5% level, whereas the associations with CT-proAVP and MR-proANP were not. All results can be seen in Table 1. The top three markers were clearly ordered by the strength of the prediction improvement with all three added value measures.
The improvement achieved by including the markers in the model remained below expectations and provided only small differences. As CT-proAVP and MR-proANP were non-informative, the simulation was restricted to the top three markers, MR-proADM, Nt-proBNP and hs-CRP. Additionally, the effect of hs-CRP is very weak and does not look very promising, but it can be used for comparison with the other top two markers.

### Reference mean ranks

After evaluating the added value measures in bootstrap samples from the population data, these values were used to obtain the reference ranking. The data are presented in Table 2 as mean ranks.

The mean rank for MR-proADM ranges from 1.38 using the AUC to 1.57 using the Brier score. Using the R² resulted in a mean rank of 1.45, which lies in between. Consequently, the AUC shows a stronger ability to separate the top marker than the Brier score or the R² in the population data. For Nt-proBNP, the Brier score and AUC mean ranks were about 2; the R² provided a slightly smaller mean rank of 1.86 and thus a slightly higher ranking. The mean rank of the third marker is not important, because it is already completely determined by the first two ranking positions. It is not entirely clear how the differences can be explained. One possibility could be the fact that different measures represent different aspects of prediction, like discrimination or calibration. Another source could be small deviations from assumptions like linearity or normality, with a heterogeneous effect on the different measures. The second explanation would make it even more important to cover these aspects in the simulation.

### Comparison of simulation approaches

#### Mean ranks criterion

Figure 2 summarizes the simulation results for the four different approaches using the empirical covariate distribution and, additionally, the results for the GLM and the random forest using covariate data drawn from the normal distribution.
The results from the multivariate normal distribution with the simple logistic regression event generation algorithm (GLM normal data) are interesting. In this approach, where the conditions were ideal, meaning that all covariates, except the dichotomous sex, are normally distributed and the relationship is perfectly linear, there is no difference in ranking between the three measures. Consequently, the differences between the measures in the other approaches could be explained by small deviations from the normal distribution of the covariates and from the assumed relationship, which may not be perfectly linear or may be influenced by interactions. The mean ranks of the GLM method using normal covariate data differ strongly from the reference mean ranks, except for MR-proADM using the Brier score difference and for hs-CRP using the IAUC or the R² difference. The GLM approach with interactions (and quantile covariate data) and the GAM approach (using quantile covariate data) influence the results more strongly than the covariate generating algorithm with the GLM, but the reference mean ranks are still not reproduced. One possible explanation could be that interactions and non-linear relationships were not considered simultaneously in these approaches. The random forest approach (using quantile covariate data) addresses these points and leads to better results. For MR-proADM, the random forest approach yields mean ranks comparable to the ranks in the reference, apart from small deviations that remain within the simulation uncertainty. For NT-proBNP and hs-CRP with the Brier score difference and the IAUC, the deviations become larger due to the weak effects and the consequently larger uncertainty. Only with the Nagelkerke R² difference does the simulation fail to reproduce the reference mean ranks for the last two markers, yielding large deviations from the reference mean ranks that clearly exceed the simulation uncertainty several-fold.
In this setting, the GLM approach with quantile covariate data exhibits better performance. This may be due to the fact that the Nagelkerke R² is likelihood-based and therefore cannot detect model misspecification. Correspondingly, an oversimplified generating model may have no strong effect. If one compares the results of the random forest method using normally distributed covariate data with the results of the random forest method using the quantile covariate data, the same event generation algorithm is used, but the simulation of the covariate data differs. Here, the mean ranks of the two approaches are substantially different; using the covariate data drawn from the empirical distribution leads to better results in most cases. Only when using the IAUC or the R² difference does the mean rank of MR-proADM not differ between the two approaches. For the other event generating methods, this difference is also present and can be even stronger (results not shown). To sum up, only the random forest approach using covariate data drawn from empirical distributions led to simulated data in which the ranking of the biomarkers approximates the reference ranking. The additional effort of using data drawn from the empirical distributions is worthwhile, as this, especially in the case of the RF as event generating algorithm, leads to remarkably better results. Furthermore, there were differences in the precision of the results of the measures for added value. The Brier score difference seems to have greater precision than the IAUC.

#### Absolute values criterion

In the following, the results are presented in terms of absolute values. Even if the simulation design was built up for stable ranking, it could be of interest to see the absolute values of the added value. These results are presented in Fig. 3. The results are compared to the reference effects in the population data. The incremental values are very small, even for a strongly significant marker like MR-proADM.
The basic Brier score of 0.163 was only reduced by 0.0005 with the strongest marker and by 0.00024 with the weak one (hs-CRP). The increase in AUC was between 0.00084 and 0.0024, which is clearly small compared to the basic AUC of 0.759. The same is also true for the differences in R²: the maximum improvement is 0.0035 compared to 0.227 in the basic model. As in the results regarding the mean ranks, the best approximation of the reference values was reached by the random forest approach. The GLM model with interactions seems to largely overestimate the true improvement. The GAM model seems to be more accurate than the GLM in some cases, but not overall. The R² difference could not achieve a good approximation of the true values, comparable to the results from the ranking. To sum up, the random forest approach was able to achieve good accuracy of the simulated effects, at least for the Brier score difference and the IAUC.

### Pilot sample size

In the main simulation, the complete population sample was used as the basis for the simulation. Consequently, a sample with 4519 observations was available as the source of information. If one wants to use a pilot study or an interim analysis as the source of information, it is important to know at least approximately how large this subsample has to be to produce adequate results. A sample size over 1000 is unrealistic in real-life settings, particularly for a pilot study, where these observations are not used in the actual study. In terms of an interim analysis in a large cohort study with many thousand individuals, a larger subsample size is conceivable. Even though the term pilot study is used in this work, the results are also valid for an interim analysis or other sources of raw data. In an additional simulation, the sample size needed for a pilot study to cover the effects from our reference population was investigated.
For this purpose, simulations with different sizes of the pilot study samples, from which the artificial data were generated, were performed. Considering the results of the main simulation, the random forest approach with empirical distributions of the covariate data was used for this additional simulation. The pilot study samples were randomly drawn from our reference sample. Subsequently, the correlation structure, the empirical distributions and the random forest model were generated based on this sample. The simulated data were built up with the original size of the reference sample, i.e., with 4519 observations. Only the results for the best marker, MR-proADM, are shown as an example. These results are displayed in Fig. 4, where the mean rank for MR-proADM was calculated using all three measures and pilot sample sizes ranging from n=100 to n=1000 in steps of 50. For every step, 1000 simulation runs were used, which leads to less precision compared to the main simulation. With the Brier score difference, the reference value could be achieved with a sample size of 250. The IAUC had an acceptable, but not exact, accordance with 250 observations. Here, the accordance improved only slowly with increasing sample size and reached a good value only at over 600 observations. This fits the previous simulation results, where the IAUC had lower precision than the Brier score difference. With the R² difference, a sample size of 350 was needed for a good accordance with the reference values. The interpretation of the results with the R² difference is difficult, because it fails to reproduce all effects properly and showed sufficient performance only for the strongest marker. It should be noted that these results cannot be generalized, because the sample size needed for a pilot study depends strongly on the effect size of the marker, the clinical outcome and, as we have shown, the choice of the measure.
But even for the weak effects in the application example, a sample size of about 250 seems to be sufficient, which should be a feasible sample size for a pilot study.

## Discussion

In addition to the scenarios investigated here, other data analysis scenarios might be considered, e.g. a comparison of two sets of markers or of a set of markers with one single marker. We expect that our results also transfer to such other multivariable settings to some extent, as the strategy presented here ensures that the correlation structure and the exact covariate distributions are considered, as well as the relationship to the event including interactions and non-linearity. Naturally, our results may strongly depend on the specific data source, the Gutenberg Health Study (GHS), and the specific markers and outcome considered in our investigation, but we expect that the level of complexity seen there is not unusual and needs to be anticipated when setting up a large cohort study.

## Conclusions

Generalizing from the present results, we would not recommend potentially oversimplified regression models for representing the relationship between markers and the outcome when simulating complex data. It seems that more flexible approaches, such as random forest, may be more appropriate for an adequate simulation of complex multivariate data. Such approaches better take into account important aspects such as non-linear relationships or interactions and therefore provide more adequate results. Yet, these methods require information on the individual level and thereby a pilot study or other preliminary data sources. Additionally, the results of the simulation emphasize that some not readily apparent properties of the underlying data structure can affect the performance of marker identification. This is also an important lesson for other situations where a realistic data structure needs to be simulated.

## References

1. Vaeth M, Skovlund E.
A simple approach to power and sample size calculations in logistic regression and Cox regression models. Stat Med. 2004; 23(11):1781–92. doi:10.1002/sim.1753.

2. Schoenfeld DA. Sample-size formula for the proportional-hazards regression model. Biometrics. 1983; 39(2):499–503. doi:10.2307/2531021.

3. Schmoor C, Sauerbrei W, Schumacher M. Sample size considerations for the evaluation of prognostic factors in survival analysis. Stat Med. 2000; 19(4):441–52. doi:10.1002/(SICI)1097-0258(20000229).

4. Jinks RC, Royston P, Parmar MK. Discrimination-based sample size calculations for multivariable prognostic models for time-to-event data. BMC Med Res Methodol. 2015; 15(1):82. doi:10.1186/s12874-015-0078-y.

5. Chen W, Samuelson FW, Gallas BD, Kang L, Sahiner B, Petrick N. On the assessment of the added value of new predictive biomarkers. BMC Med Res Methodol. 2013; 13(1):1–9. doi:10.1186/1471-2288-13-98.

6. Obuchowski NA. Computing sample size for receiver operating characteristic studies. Investig Radiol. 1994; 29(2):238–43. doi:10.1097/00004424-199402000-00020.

7. Hanley JA, McNeil BJ. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology. 1982; 143(1):29–36. doi:10.1148/radiology.143.1.7063747.

8. De Valpine P, Bitter HM, Brown MPS, Heller J. A simulation-approximation approach to sample size planning for high-dimensional classification studies. Biostatistics. 2009; 10(3):424–35. doi:10.1093/biostatistics/kxp001.

9. Dobbin KK, Simon RM. Sample size planning for developing classifiers using high-dimensional DNA microarray data. Biostatistics (Oxford, England). 2007; 8(1):101–17. doi:10.1093/biostatistics/kxj036.

10. Götte H, Zwiener I. Sample size planning for survival prediction with focus on high-dimensional data. Stat Med. 2013; 32(5):787–807. doi:10.1002/sim.5550.

11. Binder H, Sauerbrei W, Royston P.
Comparison between splines and fractional polynomials for multivariable model building with continuous covariates: A simulation study with continuous response. Stat Med. 2013; 32(13):2262–77. doi:10.1002/sim.5639.\n\n12. 12\n\nThabane L, Ma J, Chu R, Cheng J, Ismaila A, Rios LP, Robson R, Thabane M, Giangregorio L, Goldsmith CH. A tutorial on pilot studies: the what, why and how. BMC Med Res Methodol. 2010; 10(10):1–10.\n\n13. 13\n\nWild PS, Zeller T, Beutel M, Blettner M, Dugi Ka, Lackner KJ, Pfeiffer N, Münzel T, Blankenberg S. [The Gutenberg Health Study]. Bundesgesundheitsblatt, Gesundheitsforschung, Gesundheitsschutz. 2012; 55(6-7):824–9. doi:10.1007/s00103-012-1502-7.\n\n14. 14\n\nWild PS, Sinning CR, Roth A, Wilde S, Schnabel RB, Lubos E, Zeller T, Keller T, Lackner KJ, Blettner M, Vasan RS, Münzel TF, Blankenberg S. Distribution and categorization of left ventricular measurements in the general population: results from the population-based gutenberg-heart study. Circ Cardiovasc Imaging. 2010;604–13. doi:.10.1161/CIRCIMAGING.109.911933\n\n15. 15\n\nBurton A, Altman DG, Royston P, Holder RL. The design of simulation studies in medical statistics. Stat Med. 2006; 25:4279–92. doi:10.1002/sim.\n\n16. 16\n\nR Developement Core Team. R: A Language and Environment for Statistical Computing. 2015. doi:10.1007/978-3-540-74686-7. http://www.r-project.org\n\n17. 17\n\nGenz A, Bretz F, Miwa T, Mi X, Leisch F, Scheipl F, Hothorn T. mvtnorm: Multivariate Normal and T Distributions. 2016. R package version 1.0-5. http://cran.r-project.org/package=mvtnorm. Accessed 15 Apr 2016http://cran.r-project.org/package=mvtnorm.\n\n18. 18\n\nGenz A, Bretz F. Computation of Multivariate Normal and T Probabilities, 1st: Springer Publishing Company, Incorporated; 2009, pp. 1682–90. doi:10.1007/s13398-014-0173-7.2.\n\n19. 19\n\nLiaw A, Wiener M. Classification and regression by randomforest. R news. 2002; 2:18–22. doi:10.1177/154405910408300516.\n\n20. 20\n\nRipley BD. 
Stochastic Simulation.John Wiley & Sons, Inc.; 1987, p. 98. doi:10.1002/9780470316726.fmatter.\n\n21. 21\n\nAkaike H. Information theory and an extension of the maximum likelihood principle In: Parzen E, Tanabe K, Kitagawa G, editors. Selected Papers of Hirotugu Akaike. New York: Springer: 1998. p. 199–213. doi:10.1007/978-1-4612-1694-0/_15.\n\n22. 22\n\nHastie T, Tibshirani R. Generalized additive models. Stat Sci. 1986; 1:297–310. doi:10.1214/ss/1177013604.\n\n23. 23\n\nBreiman L. Random forests. Mach Learn. 2001; 45(1):5–32. doi:10.1023/A:1010933404324.\n\n24. 24\n\nBreiman L, Friedman J, Olshen RA, Stone CJ. Classification and Regression Trees.Taylor & Francis; 1984, p. 368. https://books.google.de/books?id=JwQx-WOmSyQC.\n\n25. 25\n\nBreiman L. Consistency For a Simple Model of Random Forests. Technical Report 670, Statistics Department, UC Berkeley. 2004. http://www.stat.berkeley.edu/~breiman.\n\n26. 26\n\nBiau G, Devroye L, Lugosi G. Consistency of random forests and other averaging classifiers. J Mach Learn Res. 2008; 9(2008):2015–33. doi:10.1145/1390681.1442799.\n\n27. 27\n\nGenuer R, Poggi JM, Tuleau C. Random Forests : some methodological insights. ArXiv e-prints. 2008; 6729:32.\n\n28. 28\n\nEfron B. Estimating the error rate of a prediction rule: improvement on cross-validation. J Am Stat Assoc. 1983; 78(382):16. doi:10.1080/01621459.1983.10477973.\n\n29. 29\n\nKruppa J, Liu Y, Biau G, Kohler M, König IR, Malley JD, Ziegler A. Probability estimation with machine learning methods for dichotomous and multicategory outcome: Theory. Biometrical J. 2014; 56(4):534–63. doi:10.1002/bimj.201300068.\n\n30. 30\n\nKruppa J. Probability estimation with machine learning methods for dichotomous and multicategory outcome: Applications. Biometrical J. 2014; 56(4):564–83.\n\n31. 31\n\nBrier GW. Verification of forecasts expressed in terms of probability. Mon Weather Rev. 1950; 78(1):1–3. doi:10.1126/science.27.693.594.\n\n32. 32\n\nGerds TA, Cai T, Schumacher M. 
The performance of risk prediction models. Biom J Biom Z. 2008; 50(4):457–79. doi:10.1002/bimj.200810443.\n\n33. 33\n\n34. 34\n\nCook NR. Use and misuse of the receiver operating characteristic curve in risk prediction. Circulation. 2007; 115(7):928–35. doi:10.1161/CIRCULATIONAHA.106.672402.\n\n35. 35\n\nNagelkerke NJD. A note on a general definition of the coefficient of determination. Biometrics. 1991; 78(3):691–2. doi:10.1093/biomet/78.3.691.\n\n36. 36\n\nSteyerberg EW, Vickers AJ, Cook NR, Gerds T, Gonen M, Obuchowski N, Pencina MJ, Kattan MW. Assessing the performance of prediction models: a framework for traditional and novel measures. Epidemiology (Cambridge, Mass.) 2010; 21(1):128–38. doi:10.1097/EDE.0b013e3181c30fb2.\n\n37. 37\n\nHosmer DW, Lemeshow S. Applied Logistic Regression. In: Wiley Series in Probability and Statistics. 2nd ed. vol. 23. no. 1. John Wiley & Sons, Inc.: 2000. p. 375. doi:10.1002/0471722146.\n\n38. 38\n\nCook RD, Weisberg S. Residuals and Influence in Regression.Chapman & Hall; 1982, p. 230. doi:10.2307/1269506. https://books.google.de/books?id=MVSqAAAAIAAJ.\n\n39. 39\n\nWilliams DA. Generalized linear model diagnostics using the deviance and single case deletions. Appl Stat. 1987; 36(2):181. doi:10.2307/2347550.\n\n40. 40\n\nBelsley D, Kuh E, Welsch R. Detecting and Assessing Collinearity. New York: John Wiley and Sons; 1980, pp. 85–91. doi:10.1002/0471725153.ch3.\n\n41. 41\n\nO’Brien RM. A caution regarding rules of thumb for variance inflation factors. Qual Quant. 2007; 41(5):673–90. doi:10.1007/s11135-006-9018-6.\n\n42. 42\n\nvan der Ploeg T, Austin PC, Steyerberg EW. Modern modelling techniques are data hungry: a simulation study for predicting dichotomous endpoints. BMC Med Res Methodol. 2014; 14(1):1–13. doi:10.1186/1471-2288-14-137.\n\n43. 43\n\nBin RD, Herold T, Boulesteix AL. Added predictive value of omics data: specific issues related to validation illustrated by two case studies. BMC Med Res Methodol. 2014; 14(1):1–23. 
doi:10.1186/1471-2288-14-117.\n\n## Acknowledgements\n\nWe thank all study participants for their willingness to provide data for this research project and we are indebted to all coworkers for their enthusiastic commitment.\n\n### Funding\n\nThe Gutenberg Health Study is funded through the government of Rhineland-Palatinate (“Stiftung Rheinland-Pfalz für Innovation”, contract AZ 961-386261/733), the research programs “Wissen schafft Zukunft” and “Center for Translational Vascular Biology (CTVB)” of the Johannes Gutenberg-University of Mainz, and its contract with Boehringer Ingelheim and PHILIPS Medical Systems, including an unrestricted grant for the Gutenberg Health Study. PSW is funded by the Federal Ministry of Education and Research (BMBF 01EO1503) and he is PI of the German Center for Cardiovascular Research (DZHK).\n\n### Availability of data and materials\n\nData sharing is not applicable as the used data is a part of a big observational cohort study which has not yet been completed.\n\n### Authors’ contributions\n\nAS carried out the simulation study and drafted the manuscript. HB and PSW supervised the project. DZ revised the simulation procedure and statistical methodology. MB, MEB and SN contributed critical revision and discussion. All authors read and approved the final manuscript.\n\n### Authors’ information\n\nThis publication is a part of the doctoral thesis of Andreas Schulz.\n\n### Competing interests\n\nThe authors declare that they have no competing interests.\n\nNot applicable.\n\n### Ethics approval and consent to participate\n\nAll individuals signed informed consent before participating. The study protocol and study documents were approved by the local ethics committee of the Medical Chamber of Rhineland-Palatinate, Germany (reference no. 
837.020.07; original vote: 22.3.2007, latest update: 20.10.2015).\n\n### Publisher’s Note\n\nSpringer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.\n\n## Author information\n\nCorrespondence to Andreas Schulz.\n\n## Rights and permissions", null, "" ]
https://developer.apple.com/documentation/simd/simd_double2x3?changes=_2&language=objc
[ "Structure\n\n# simd_double2x3\n\nA matrix of two columns and three rows containing double-precision values.\n\n## Topics\n\n### Matrix Properties\n\n`columns`\n\nThe columns of the matrix.\n\n### Matrix Creation Functions\n\n`simd_matrix`\n\nReturns a new matrix with the specified columns.\n\n`simd_matrix_from_rows`\n\nReturns a new matrix with the specified rows.\n\n### Math Functions\n\n`simd_add`\n\nReturns the sum of two matrices.\n\n`simd_sub`\n\nReturns the difference of two matrices.\n\n`simd_mul`\n\nReturns the product of a scalar value and a matrix.\n\n`simd_mul`\n\nReturns the product of two matrices.\n\n`simd_mul`\n\nReturns the product of two matrices.\n\n### Equality Functions\n\n`simd_equal`\n\nReturns true if every element in a matrix is exactly equal to the corresponding element in a second matrix.\n\n`simd_almost_equal_elements`\n\nReturns true if every element in a matrix is within a specified tolerance to the corresponding element in a second matrix.\n\n`simd_almost_equal_elements_relative`\n\nReturns true if every element in a matrix is within a specified double-precision relative tolerance to the corresponding element in a second matrix.\n\n### Linear Combination Function\n\n`simd_linear_combination`\n\nReturns the linear combination of two scalar values and two matrices.\n\n### Transpose Function\n\n`simd_transpose`\n\nReturns the transpose of a matrix.\n\n### Matrices: Double-Precision Values\n\n`simd_double2x2`\n\nA matrix of two columns and two rows containing double-precision values.\n\n`simd_double3x2`\n\nA matrix of three columns and two rows containing double-precision values.\n\n`simd_double4x2`\n\nA matrix of four columns and two rows containing double-precision values.\n\n`simd_double3x3`\n\nA matrix of three columns and three rows containing double-precision values.\n\n`simd_double4x3`\n\nA matrix of four columns and three rows containing double-precision values.\n\n`simd_double2x4`\n\nA matrix of two columns and four rows 
containing double-precision values.\n\n`simd_double3x4`\n\nA matrix of three columns and four rows containing double-precision values.\n\n`simd_double4x4`\n\nA matrix of four columns and four rows containing double-precision values." ]
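The `simd_doubleNxM` naming above is columns-first (N columns, M rows), which is easy to get backwards. As a small analogue — not the Apple API — the shapes involved in `simd_mul`, `simd_transpose` and `simd_linear_combination` can be sketched with numpy, whose shapes are written rows × columns:

```python
import numpy as np

# simd_doubleNxM has N columns and M rows; in numpy (rows, cols) that is shape (M, N).
a = np.array([[1.0, 4.0],
              [2.0, 5.0],
              [3.0, 6.0]])          # 2 columns x 3 rows, like simd_double2x3

b = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 3.0]])     # 3 columns x 2 rows, like simd_double3x2

# matrix product (cf. simd_mul): (3 rows x 2 cols) @ (2 rows x 3 cols) -> 3 rows x 3 cols
c = a @ b
assert c.shape == (3, 3)           # first row of the product: [1, 4, 14]

# transpose (cf. simd_transpose): a 2-column, 3-row matrix becomes 3-column, 2-row
assert a.T.shape == (2, 3)

# linear combination (cf. simd_linear_combination): s*A + t*B for same-shaped matrices
lin = 0.5 * a + 2.0 * a
assert np.allclose(lin, 2.5 * a)
```

The same columns-first convention explains why multiplying a `simd_double3x2` by a `simd_double2x3` is well defined: the column count of the right operand must match the row count of the result.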
https://www.ias.ac.in/describe/article/pmsc/127/04/0719-0735
[ "• Minimal surfaces in symmetric spaces with parallel second fundamental form\n\n• # Fulltext\n\nhttps://www.ias.ac.in/article/fulltext/pmsc/127/04/0719-0735\n\n• # Keywords\n\nIsometric minimal immersion; Gaussian curvature; Kähler angle; second fundamental form; symmetric space\n\n• # Abstract\n\nIn this paper, we study the geometry of isometric minimal immersions of Riemannian surfaces in a symmetric space by moving frames and prove that the Gaussian curvature must be constant if the immersion is of parallel second fundamental form. In particular, when the surface is $S^2$, we discuss the special case and obtain a necessary and sufficient condition such that its second fundamental form is parallel. We also consider isometric minimal two-spheres immersed in complex two-dimensional Kähler symmetric spaces with parallel second fundamental form, and prove that the immersion is totally geodesic with constant Kähler angle if it is neither holomorphic nor antiholomorphic with Kähler angle $\alpha\neq 0$ (resp. $\alpha\neq \pi$) everywhere on $S^2$.\n\n• # Author Affiliations\n\n1. School of Mathematical Sciences, University of Chinese Academy of Sciences, Beijing 101408, China\n2. School of Mathematics and Statistics, Zhengzhou University, Zhengzhou 450001, China\n\n• # Editorial Note on Continuous Article Publication\n\nPosted on July 25, 2019" ]
https://discuss.codecademy.com/t/cleaning-us-census-data-what-methods-do-we-need-to-use-to-replace-a-column-in-df/711558
[ "# Cleaning US Census Data - What methods do we need to use to replace a column in a df?\n\nWhen we want to clean up a column in a dataframe we can do it in two ways. For instance, if we want to remove the "\$" from the income column, we can use either of these two options.\n\nOption 1, a vectorized string replacement:\n\n```\nus_census.Income = us_census.Income.str.replace("[\$]", '', regex=True)\n```\n\nOption 2, an explicit loop:\n\n```\nIncome = []\nfor i in range(0, len(us_census.Income)):\n    string = str(us_census.Income.iat[i])\n    replace = string.replace('$', "")\n    Income.append(replace)\n\nus_census['new_column'] = Income\n```\n\nWhat is the difference between these two approaches, and why are there some cases in which we can't use the first approach?\nFor instance, if we want to break the column GenderPop into two new columns for men and women, we can't use the first approach.\n\nI think you meant that you're not replacing a column, you're cleaning up unwanted characters from a column.\nYou'd use the first approach and not the second. Why create more work for yourself?", null, "Regex is easier in this case, right?\n\nSee:\n\n```\nus_census.Income = us_census['Income'].replace('[\\\\$,]', '', regex=True)\n\nus_census.Income = pd.to_numeric(us_census.Income)\n```\n\nYou wouldn't use `str.replace()` to split a column. You'd use `.str.split()` and something like:\n\nSummary\n```\nus_census['Men'] = us_census['GenderPop'].str.split('_').str[0]\nus_census['Women'] = us_census['GenderPop'].str.split('_').str[1]\n```\n1 Like" ]
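Putting the thread's suggestions together as a runnable sketch — the two sample rows below are made up for illustration; only the column names come from the exercise:

```python
import pandas as pd

# hypothetical sample data mimicking the exercise's columns
us_census = pd.DataFrame({
    "Income": ["$43,296.36", "$70,354.74"],
    "GenderPop": ["2341093M_2489527F", "354483M_349215F"],
})

# vectorized cleanup: strip "$" and "," in one regex pass, then convert to numbers
us_census["Income"] = us_census["Income"].replace(r"[\$,]", "", regex=True)
us_census["Income"] = pd.to_numeric(us_census["Income"])

# splitting one column into two: str.split plus .str[i] positional indexing
us_census["Men"] = us_census["GenderPop"].str.split("_").str[0]
us_census["Women"] = us_census["GenderPop"].str.split("_").str[1]

print(us_census["Income"].iloc[0])   # 43296.36
print(us_census["Men"].iloc[0])      # 2341093M
```

This is why the loop is unnecessary for character removal but a `.str` accessor is needed for splitting: `str.replace` maps one string to one string, while `.str.split(...).str[i]` picks one piece of each split result.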
http://www.mathguide.com/cgi-bin/quizmasters/ConesV.cgi
[ "MATHguide's Volume of Cones Quizmaster\nReview the Lesson | MATHguide homepage Updated April 14th, 2019\n\nWhen the radius (r) = 14 units and the height (h) = 6 units, calculate the information below. Use 3.14 as an approximation for π. Round all answers to the nearest tenth at the end of each calculation.\n\nWhat is the area of the bottom? ___ units²\n\nWhat is the volume? ___ units³" ]
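Since the quiz page itself gives no worked answers, both blanks can be checked with the quiz's values (r = 14, h = 6, π ≈ 3.14), using base area A = πr² and cone volume V = Ah/3:

```python
PI = 3.14          # the approximation the quiz asks for
r, h = 14, 6

base_area = PI * r**2          # area of the circular bottom: 3.14 * 196
volume = base_area * h / 3     # cone volume = (1/3) * base area * height

print(round(base_area, 1))     # 615.4 square units
print(round(volume, 1))        # 1230.9 cubic units
```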
https://historicsweetsballroom.com/what-is-the-prime-factorization-of-17/
[ "Factors of 17 are numbers that, when multiplied in pairs, give the product 17. There are in total 2 factors of 17, i.e. 1 and 17, where 17 is the greatest factor. The sum of all factors of 17 is 18. The prime factorization of 17 is 17, and (1, 17) is its factor pair.\n\nFactors of 17: 1 and 17\nNegative factors of 17: -1 and -17\nPrime factors of 17: 17\nPrime factorization of 17: 17\nSum of factors of 17: 18\n 1 What are the factors of 17? 2 Important Notes 3 How to calculate the factors of 17? 4 Factors of 17 by Prime Factorization 5 Factors of 17 in Pairs 6 FAQs on Factors of 17\n\n## What are the factors of 17?\n\nFactors are whole numbers that divide the given number completely without leaving any remainder. The factors of 17 are 1 and 17.\n\nThis shows that 17 is a prime number, because it has no factors other than 1 and itself.\n\n## How to calculate the factors of 17?\n\nWe can use various methods like the divisibility test, prime factorization, and the upside-down division method to calculate the factors of 17. In prime factorization, we express 17 as a product of its prime factors.", null, "## Factors of 17 by Prime Factorization\n\nPrime factorization is expressing the number as a product of its factors which are prime. We can do the prime factorization of any number by the upside-down division method or the factor tree method.\n\nWe know that 1 is a factor of every number, and 17 is not a multiple of any other number. So, we factorize 17 as:", null, "By the prime factorization method, we get 17 = 1 × 17. Therefore, there are no prime factors of 17 other than 17 itself.\n\n## Factors of 17 in Pairs\n\nAny set of two numbers whose product is 17 is a factor pair of 17.\n\n17 = 1 × 17\n\nTherefore, the factor pair of 17 is (1, 17).\n\nA number can have negative pair factors as well. This is due to the fact that the product of two negative numbers is positive. Hence, (-1, -17) is also a factor pair of 17.\n\nImportant Notes:\n\nThere are only 2 factors of 17, which are 1 and 17. The factor pairs of 17 are (1, 17) and (-1, -17)." ]
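The facts above — exactly two factors, sum 18, primality — are easy to verify with a few lines of code; a minimal sketch:

```python
def factors(n):
    """Return the positive factors of n (divisors leaving zero remainder)."""
    return [d for d in range(1, n + 1) if n % d == 0]

f = factors(17)
print(f)              # [1, 17]
print(sum(f))         # 18  (the sum of all factors of 17)
print(len(f) == 2)    # True: exactly two factors, so 17 is prime
```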
https://www.thefreelibrary.com/The+dynamic+and+collision+features+of+microscopic+particles+described...-a0319229073
[ "# The dynamic and collision features of microscopic particles described by the nonlinear Schrodinger equation in nonlinear quantum systems\n\nINTRODUCTION\n\nThe Features of Microscopic Particles Described by the Linear Schrodinger Equation\n\nAs is known, the states of microscopic particles in quantum systems have so far been described by quantum mechanics, which was established by several great scientists, such as Bohr, Born, Schrodinger and Heisenberg, in the early 1900s [1-10]. In quantum mechanics the dynamic equation of microscopic particles is the following Schrodinger equation:\n\n$$i\hbar \frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m} \nabla^2 \psi + V(\vec{r}, t)\psi \quad (1)$$\n\nwhere $-\hbar^2 \nabla^2 / 2m$ is the kinetic energy operator, $V(\vec{r}, t)$ is the externally applied potential operator, $m$ is the mass of the particle, $\psi(\vec{r}, t)$ is a wave function describing the states of the particles, and $\vec{r}$ is the coordinate or position of the particle. Equation (1) is a wave equation; if only the externally applied potential is known, we can find the solutions of the equation [7-9]. However, for every externally applied potential, its solutions are always linear or dispersive waves. For example, at $V(\vec{r}, t) = 0$ the solution is the plane wave\n\n$$\psi(\vec{r}, t) = A' e^{i(\vec{k} \cdot \vec{r} - \omega t)} \quad (2)$$\n\nwhere $\vec{k}$ is the wavevector of the wave, $\omega$ is its frequency, and $A'$ is its amplitude. This solution denotes the state of a freely moving microscopic particle with an eigenenergy of\n\n$$E = \frac{p^2}{2m} = \frac{1}{2m}\left(p_x^2 + p_y^2 + p_z^2\right), \quad (-\infty < p_x, p_y, p_z < \infty) \quad (3)$$\n\nThis energy is continuous; this means that the probability of the particle appearing at any point in space is a constant, so the microscopic particle cannot be localized and can only propagate freely as a wave through all of space.
The particle then has no corpuscle feature at all.\n\nIf the potential field is varied, i.e., $V(\vec{r}, t) \neq 0$, the solutions of Equation (1) are still waves, only with different features [5-12]. This shows clearly that microscopic particles have only a wave feature and no corpuscle feature, which is the inherent nature of particles in quantum mechanics. These features of microscopic particles are not only incompatible with the de Broglie relations of wave-corpuscle duality, $E = \hbar\omega$ and $\vec{p} = \hbar\vec{k}$, and with Davisson and Germer's 1927 experimental result on electron diffraction by a double slit [10-13], but also contradict the traditional concept of particles [14-16]. These are just the limitations and difficulties of quantum mechanics, which have resulted in a controversy in physics lasting about a century [8-12] that has not been resolved up to now. It is very clear that the reasons giving rise to these difficulties are the simplicity and approximate nature of quantum mechanics. As a matter of fact, the Hamiltonian operator of the system corresponding to Equation (1) in quantum mechanics is represented by\n\n$$\hat{H} = -\frac{\hbar^2}{2m} \nabla^2 + V(\vec{r}, t) \quad (4)$$\n\nwhich is composed only of the kinetic and potential energy operators of the particles. The latter is not related to the wave function of the state of the particle; thus it can only change the states of the particles, such as amplitude, velocity and frequency, but cannot influence their natures. The natures of the particles can be determined only by the kinetic energy term in Equation (4), which, however, has a dispersive feature. Thus the microscopic particles have only a wave feature, not a corpuscle feature. This is just the root of the above limitations and difficulties that quantum mechanics possesses.\n\nOn the other hand, the Hamiltonian of the system and the dynamic equation in Equations (1) and (4) contain only the kinetic energy and the externally applied potential term.
This means that we must incorporate all interactions, including nonlinear and complicated interactions, among particles or between a particle and the background field, such as the lattices in solids and the nuclei in atoms and molecules, into the external potential by means of various approximate methods, such as the free-electron and average-field approximations, the Born-Oppenheimer approximation, the Hartree-Fock approximation, the Thomas-Fermi approximation, and so on. This not only reflects the approximate nature of quantum mechanics but is also obviously incorrect [10-13]. The essence of this method, in which the real interactions are replaced by an average field, is that quantum mechanics freezes or blots out the real motions of the microscopic particles and background fields and completely ignores the real interactions between them, including nonlinear and other complicated interactions, which can influence the natures of the particles. This is just the essence and approximation of quantum mechanics. Therefore, quantum mechanics cannot be used to study the real properties of microscopic particles in many-body and many-particle systems, including condensed matter, atoms and molecules [14-16].\n\nThese problems not only remind us that quantum mechanics must develop further but also indicate its direction of development.\n\nIn view of the above problems of quantum mechanics, we should take into account the nonlinear interactions of the particles, which were ignored in quantum mechanics in Equations (1) and (4). As a matter of fact, nonlinear interactions always exist in any realistic physical system, including the hydrogen atom; they are generated by the interaction between the particles and other particles or the background field [17-28].
Therefore, once the real motions of the microscopic particles and background fields and their true interactions are considered, the properties and states of microscopic particles cannot be described by Schrodinger Equation (1), but should be depicted by the following nonlinear Schrodinger equation in nonlinear quantum systems:\n\n$$i\hbar \frac{\partial \phi}{\partial t} = -\frac{\hbar^2}{2m} \nabla^2 \phi + V(\vec{r}, t)\phi - b|\phi|^2 \phi, \quad (5)$$\n\nwhere $\phi(\vec{r}, t)$ is now a wave function representing the states of microscopic particles in the nonlinear systems and $b$ is the nonlinear interaction coefficient; the nonlinear interaction is now denoted by $b|\phi|^2\phi$, which is generated by the interaction between the moving particle and the background field. Therefore, Equation (5) correctly represents the motion features of a microscopic particle and its interactions with other particles or the background field. In this case the nonlinear interaction and the dispersive effect occur simultaneously in the dynamical equation; the deformational effect of the nonlinear interaction on the wave can suppress its dispersive effect, so the dynamic Equation (5) has soliton solutions [29-30], which have both wave and corpuscle features. Thus the microscopic particles have a wave-corpuscle duality in such a case. Therefore, Equation (5) not only really describes the motions of the particles and their interactions with other particles or the background field but also correctly represents their wave-corpuscle duality. We then have sufficient reason to believe in the correctness of the nonlinear Schrodinger equation for describing the properties of microscopic particles in quantum systems.\n\n1. THE CHANGES OF PROPERTY OF MICROSCOPIC PARTICLES DESCRIBED BY THE NONLINEAR SCHRODINGER EQUATION\n\n1.1 The Wave-Corpuscle Duality of Solutions of the Nonlinear Schrodinger Equation\n\nAs is known, microscopic particles have only a wave feature, and no corpuscle property, in quantum mechanics.
Thus it is very interesting to ask what the properties of the microscopic particles are in nonlinear quantum mechanics. We now study first the properties of the microscopic particles described by the nonlinear Schrodinger equation in Equation (5). In the one-dimensional case, Equation (5) at $V(x,t)=0$ becomes

$$i\phi_{t'} + \phi_{x'x'} + b|\phi|^2\phi = 0 \qquad (6)$$

where $x' = x/\sqrt{\hbar^2/2m}$, $t' = t/\hbar$. We now assume the solution of Equation (6) to be of the form

$$\phi(x',t') = \varphi(x',t')\,e^{i\theta(x',t')} \qquad (7)$$

Substituting Equation (7) into Equation (6) we get

$$\varphi_{x'x'} - \varphi\theta_{t'} - \varphi\theta_{x'}^2 + b\varphi^3 = 0, \quad (b>0) \qquad (8)$$

$$\varphi\theta_{x'x'} + 2\varphi_{x'}\theta_{x'} + \varphi_{t'} = 0 \qquad (9)$$

If we let

$$\theta = \theta(x'-v_c t'), \quad \varphi = \varphi(x'-v_e t'), \quad \zeta = x'-v_c t', \quad \zeta' = x'-v_e t',$$

then Equations (8)-(9) become

$$\varphi_{x'x'} + v_c\varphi\theta_{x'} - \varphi\theta_{x'}^2 + b\varphi^3 = 0 \qquad (10)$$

$$\varphi\theta_{x'x'} + 2\varphi_{x'}\theta_{x'} - v_e\varphi_{x'} = 0 \qquad (11)$$

Fixing the time $t'$ and integrating Equation (11) with respect to $x'$, we get

$$\varphi^2(2\theta_{x'} - v_e) = A(t') \qquad (12)$$

Now let the integration constant $A(t') = 0$; then $\theta_{x'} = v_e/2$. Substituting this into Equation (10) and integrating once more, we obtain

$$(\varphi_{x'})^2 = Q(\varphi), \qquad (13)$$

where $Q(\varphi) = -b\varphi^4/2 + (v_e^2 - 2v_e v_c)\varphi^2/4 + c'$. When $c' = 0$ and $v_e^2 - 2v_e v_c > 0$, then $\varphi = \pm\varphi_0$, with $\varphi_0 = \sqrt{(v_e^2 - 2v_e v_c)/2b}$, are the roots of $Q(\varphi)=0$ other than $\varphi = 0$.
From Equation (13) Pang obtained the solution of Equations (8)-(9):

$$\varphi(x',t') = \varphi_0\,\mathrm{sech}\!\left[\sqrt{b/2}\,\varphi_0(x'-v_e t')\right]$$

Pang [21-30] finally represented the solution of the nonlinear Schrodinger equation in Equation (6) in the coordinates $(x,t)$ by

$$\phi(x,t) = A_0\,\mathrm{sech}\!\left\{\frac{A_0\sqrt{bm}}{\hbar}\left[(x-x_0)-vt\right]\right\}\exp\!\left\{\frac{i}{\hbar}\left[mv(x-x_0)-Et\right]\right\} \qquad (14)$$

where $A_0 = \sqrt{(mv^2/2 - E)/2b}$, $v$ is the velocity of motion of the particle, and $E = \hbar\omega$. This solution is completely different from Equation (3): it consists of an envelope and a carrier wave. The former, $\varphi(x,t) = A_0\,\mathrm{sech}\{A_0\sqrt{bm}[(x-x_0)-vt]/\hbar\}$, is a bell-type non-topological soliton with amplitude $A_0$; the latter is $\exp\{i[mv(x-x_0)-Et]/\hbar\}$. This solution is shown in Figure 1a. Therefore, the particles described by the nonlinear Schrodinger Equation (6) are solitons. The envelope $\varphi(x,t)$ is a slowly varying function whose maximum marks the mass centre of the particle; the position of the mass centre is just $x_0$, $A_0$ is its amplitude, and its width is given by $W' = 2\pi\hbar/(A_0\sqrt{2m})$. Thus the size of the particle, $A_0 W' = 2\pi\hbar/\sqrt{2m}$, is a constant. This shows that the particle has exactly a determinate size and is localized at $x_0$. Its form resembles a wave packet, but it differs in essence both from the wave solution in Equation (1) and from the wave packet mentioned above in linear quantum mechanics, owing to the invariance of its form and size during propagation. According to soliton theory [29-30], the bell-type soliton in Equation (14) can move freely over macroscopic distances with a uniform velocity $v$ in space-time, retaining its form, energy, momentum and other quasi-particle properties. The wave packet in linear quantum mechanics, by contrast, decays and disperses with increasing time. Just so, the vector $\vec{r}$
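The soliton character of Equation (14) can be checked numerically. In the dimensionless form of Equation (6), the one-soliton solution with amplitude $A$ and velocity $v$ reads $\phi = A\,\mathrm{sech}[A\sqrt{b/2}\,(x'-vt')]\exp\{i[vx'/2 + (bA^2/2 - v^2/4)t']\}$, which is Equation (14) after rescaling. The following short sketch (the parameter values are arbitrary test choices, not taken from the source) verifies by finite differences that this profile satisfies the PDE:

```python
import numpy as np

# One-soliton solution of the dimensionless NLS, Eq. (6):
#   i*phi_t' + phi_x'x' + b*|phi|^2*phi = 0
# A (amplitude) and v (velocity) are free test parameters.
b, A, v = 1.0, 1.2, 0.7
k = A * np.sqrt(b / 2.0)          # inverse width of the sech envelope
w = b * A**2 / 2.0 - v**2 / 4.0   # carrier frequency

def phi(x, t):
    return A / np.cosh(k * (x - v * t)) * np.exp(1j * (v * x / 2.0 + w * t))

# Central finite differences for phi_t and phi_xx, then the PDE residual.
dx, dt = 1e-3, 1e-3
x = np.linspace(-10.0, 10.0, 2001)
t0 = 0.3
u = phi(x, t0)
phi_t = (phi(x, t0 + dt) - phi(x, t0 - dt)) / (2.0 * dt)
phi_xx = (phi(x + dx, t0) - 2.0 * u + phi(x - dx, t0)) / dx**2
residual = 1j * phi_t + phi_xx + b * np.abs(u)**2 * u
print(np.abs(residual).max())   # small: limited only by the finite-difference error
```

The residual is nonzero only because of the discretization, confirming that the bell-type envelope with the stated carrier frequency solves Equation (6) exactly.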
or $x$ in the representation in Equation (14) has a definite physical significance and denotes exactly the position of the particle at time $t$. Thus the wave function $\phi(\vec{r},t)$ or $\phi(x,t)$ can represent exactly the state of the particle at the position $\vec{r}$ or $x$ at time $t$. These features are consistent with the concept of particles. Thus the microscopic particles depicted by Equation (6) display outright a corpuscle feature [31-38].

Using the inverse scattering method, Zakharov and Shabat [39-40] also obtained the solution of Equation (6), which was represented as

$$\phi(x',t') = 2\sqrt{2/b}\,\eta\,\mathrm{sech}\!\left[2\eta(x'-x'_0) + 8\eta\xi t'\right]\times\exp\!\left[-4i(\xi^2-\eta^2)t' - 2i\xi x' - i\theta\right] \qquad (15)$$

in the coordinates $(x',t')$, where $\eta$ is related to the amplitude of the microscopic particle, $\xi$ relates to its velocity, $\theta = \arg\gamma$, $\lambda = \xi + i\eta$, $x'_0 = (2\eta)^{-1}\log(|\gamma|/2\eta)$, and $\gamma$ is a constant. We now re-write it in the following form [23-27]:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (16)

where $v_e$ is the group velocity of the particle and $v_c$ is the phase speed of the carrier wave in the coordinates $(x',t')$. For a given system, $v_e$ and $v_c$ are determinate and do not change with time. We can obtain $2^{3/2}k/b^{1/2} = A_0$,

$$A_0 = \sqrt{(v_e^2 - 2v_c v_e)/2b}.$$

According to soliton theory [29-30], the soliton shown in Equation (16) has determinate mass, momentum and energy, which can be represented by [21-28]

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (17)

where $M_{sol} = N_s = 2\sqrt{2}A_0$ is just the effective mass of the particle, which is a constant.
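The integrals in Equation (17) did not survive extraction. For orientation only, in the soliton literature [29-30] the conserved mass (norm), momentum and energy functionals of Equation (6) are usually written, up to normalization conventions that differ between authors, as:

```latex
N_s = \int_{-\infty}^{\infty} |\phi|^{2}\,dx', \qquad
P   = -\,i\int_{-\infty}^{\infty} \bigl(\phi^{*}\phi_{x'} - \phi\,\phi^{*}_{x'}\bigr)\,dx', \qquad
E   = \int_{-\infty}^{\infty} \Bigl(|\phi_{x'}|^{2} - \tfrac{b}{2}\,|\phi|^{4}\Bigr)\,dx'.
```

Evaluating these on the sech envelope yields constants, consistent with the constant effective mass $M_{sol}$ quoted in the text.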
Thus we can confirm that the energy, mass and momentum of the particle are not dispersed in its motion, which concretely embodies the corpuscle feature of the microscopic particles. This is completely consistent with the concept of classical particles. It means that the nonlinear interaction, $b|\phi|^2\phi$, related to the wave function of the particles, really balances and suppresses the dispersive effect of the kinetic term in Equation (6), making the particles eventually localized. Thus the position of the particle, $\vec{r}$ or $x$, has a determinate physical significance. At the same time, the envelope of the solutions in Equations (14)-(16) is a solitary wave. It has a certain wavevector and frequency, as shown in Figure 1(b), and can propagate in space-time accompanied by the carrier wave. Its propagation behaviour depends on the concrete nature of the particles. Figure 1(b) shows the width of the frequency spectrum of the envelope $\varphi(x,t)$, which has a localized distribution around the carrier frequency $\omega_0$. This shows that the particle also has a wave feature [21-28]. Thus we believe that the microscopic particles described by nonlinear quantum mechanics simultaneously have a wave-corpuscle duality. Equations (14)-(16) and Figure 1(a) are just the most beautiful and perfect representation of this property, which is consistent also with the de Broglie relations, $E = \hbar\omega$ and $\vec{p} = \hbar\vec{k}$, with the wave-corpuscle duality, with Davisson and Germer's 1927 experimental observation of electron diffraction, and with the traditional concept of particles in physics [10-13]. Thus we have reason to believe in the correctness of the nonlinear quantum mechanics proposed by Pang
[21-28].

1.2 The Wave-Corpuscle Duality of Solutions of the Nonlinear Schrodinger Equation with Different Potentials

We can verify that the wave-corpuscle duality of the microscopic particles is not changed by varying the externally applied potential. As a matter of fact, if $V(x') = \alpha x' + c$ in Equation (5), where $\alpha$ and $c$ are constants, then Pang [31-38,41-43] replaced Equation (8) by

$$\varphi_{x'x'} - \varphi\theta_{t'} - \varphi\theta_{x'}^2 + b\varphi^3 = (\alpha x' + c)\varphi. \qquad (18)$$

Now let

$$\phi(x',t') = \varphi(\xi), \quad \xi = x' - u(t'), \quad u(t') = -\alpha(t')^2 + vt' + d \qquad (19)$$

where $u(t')$ describes the accelerated motion of $\varphi(x',t')$. The boundary condition at $\xi\to\infty$ requires $\varphi(\xi)$ to approach zero rapidly; then Equation (9) can be written as

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (20)

where $\dot{u} = du/dt'$. If $2\,\partial\theta/\partial\xi - \dot{u} \neq 0$, Equation (20) may be written as

$$\varphi^2 = \frac{g(t')}{\partial\theta/\partial\xi - \dot{u}/2} \quad\text{or}\quad \frac{\partial\theta}{\partial x'} = \frac{g(t')}{\varphi^2} + \frac{\dot{u}}{2} \qquad (21)$$

Integration of Equation (21) yields

$$\theta(x',t') = g(t')\int_0^{x'}\frac{dx''}{\varphi^2} + \frac{\dot{u}}{2}x' + h(t') \qquad (22)$$

where $h(t')$ is an undetermined constant of integration. From Equation (22) Pang [41-43] obtained

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (23)

Substituting Equations (22)-(23) into Equation (18), we have

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (24)

Note that $\partial^2\varphi/\partial(x')^2 = d^2\varphi/d\xi^2$, which is a function of $\xi$ only.
In order for the right-hand side of Equation (24) to be a function of $\xi$ only, it is necessary that $g(t') = g_0 = $ constant, and

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (25)

Next, we assume $V_0(\xi) = \bar{V}(\xi) - \beta$, where $\beta$ is real and arbitrary; then

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (26)

Clearly, in the case discussed here $V_0(\xi) = 0$, and the function in the brackets in Equation (26) is a function of $t'$ only. Substituting Equation (26) into Equation (24), we can get

$$\frac{d^2\bar{\varphi}}{d\xi^2} = \beta\bar{\varphi} - b\bar{\varphi}^3 + \frac{g_0}{\bar{\varphi}^3} \qquad (27)$$

where $\beta$ is a real parameter defined by

$$V_0(\xi) = \bar{V}(\xi) - \beta \qquad (28)$$

with

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (29)

Obviously, $\bar{\varphi} = \varphi(\xi)$ is a solution of Equation (27) when $\beta$ and $g_0$ are constant. For large $|\xi|$ we may assume that $|\bar{\varphi}| \leq \beta/|\xi|^{1+\Delta}$, where $\Delta$ is a small constant. To ensure that $d^2\bar{\varphi}/d\xi^2$ and $\bar{\varphi}$ approach zero as $|\xi|\to\infty$, only the solution corresponding to $g_0 = 0$ in Equation (27) remains stable.
Therefore we choose $g_0 = 0$ and obtain from Equation (21)

$$\frac{\partial\theta}{\partial x'} = \frac{\dot{u}}{2} \qquad (30)$$

Thus we obtain from Equation (22)

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (31)

Substituting Equation (31) into Equations (23) and (29), we obtain

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (32)

Finally, substituting the above into Equation (27), we get

$$\frac{d^2\bar{\varphi}}{d\xi^2} - \beta\bar{\varphi} + b\bar{\varphi}^3 = 0 \qquad (33)$$

When $\beta > 0$, Pang gives the solution of Equation (33) in the form

$$\bar{\varphi} = \sqrt{2\beta/b}\;\mathrm{sech}\!\left(\sqrt{\beta}\,\xi\right) \qquad (34)$$

Pang [41-43] finally obtained the complete solution under these conditions, which is represented as

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (35)

This is a soliton solution. If $V(x') = c$, the solution can be represented as

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (36)

At $V(x') = 2\alpha x'$ and $b = 2$, a corresponding soliton solution also follows from the above procedure. Chen and Liu [44-45], on the other hand, adopted the transformation

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (37)

to turn Equation (5) into

$$i\phi'_{t'} + \phi'_{x'x'} + 2|\phi'|^2\phi' = 0 \qquad (38)$$

Thus Chen and Liu [44-45] represented the solution of Equation (5) at $V(x') = \alpha x'$, $b = 2$ by

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (39)

At the same time, utilizing the above method, Pang [25,27,41] also found the soliton solution of Equation (5) at $V(x) = kx^2 + A(t)x + B(t)$, which can be represented as

$$\phi = \varphi(x - u(t))\,e^{i\theta(x,t)} \qquad (40)$$

where

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (41)

and $L$ is a constant related to $A(t')$.
When $A(t) = B(t) = 0$, the solution is still Equation (40), but

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

For the case of $V_0(x') = \alpha^2 x'^2$ and $b = 2$, where $\alpha$ is constant, Chen and Liu [44-45] assumed $u(t') = (2\xi/\alpha)\sin(2\alpha t')$, and thus represented the soliton solution in this condition by

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (42)

where $2\eta = \sqrt{\beta}$ is the amplitude of the microscopic particle, $4\xi'$ is related to its group velocity in Equations (39) and (42), and $\xi'$ is the same as $\xi$ in Equation (15). From the above results we see clearly that the solutions of the nonlinear Schrodinger Equation (5) under the influence of different potentials, $V(x') = c$, $V(x') = \alpha x'$, $V(x') = \alpha x' + c$, $V(x) = kx^2 + A(t)x + B(t)$ and $V(x') = \alpha^2 x'^2$, still consist of envelope and carrier waves analogous to Equation (14): bell-type solitons with a certain amplitude $A_0$, group velocity $v_e$ and phase speed $v_c$, possessing a mass centre and determinate amplitude, width, size, mass, momentum and energy. Inserting these solutions, Equations (35), (36), (39), (41) and (42), into Equation (17), we can find the effective masses, momenta and energies of these microscopic particles, which all have determinate values. Therefore we can conclude that the microscopic particles described by these dynamic equations still possess a wave-corpuscle duality, as shown in Figure 1, even though they are acted on by different external potentials. These potentials change only the amplitude, size, frequency, phase, and group and phase velocities of the particles; the velocities and frequencies of some particles become, in addition, time dependent and oscillatory [47-50].
These results indicate that in Equation (5) the kinetic energy term determines the wave feature of the particles and the nonlinear interaction determines their corpuscle feature; their combination produces the wave-corpuscle duality, while the external potentials influence only the wave form, phase and velocity of the particles and cannot affect the wave-corpuscle duality itself. These results verify directly and clearly the necessity and correctness of describing the properties of microscopic particles by the nonlinear Schrodinger equation, i.e., the nonlinear quantum mechanics proposed by Pang [17-24].

[FIGURE 1 OMITTED]

2. THE CLASSICAL FEATURES OF MOTION OF MICROSCOPIC PARTICLES

2.1 The Feature of Newtonian Motion of Microscopic Particles

Since the microscopic particle described by the nonlinear Schrodinger Equation (5) has a corpuscle feature and, as mentioned above, is also quite stable, its motion under the action of a potential field in space-time should obey its own rules of motion. Pang [51-55] studied this rule of motion of microscopic particles deeply in such a case.

We know that the solution Equation (14) of the nonlinear Schrodinger Equation (5) with different potentials, shown in Figure 1, satisfies $\partial\phi/\partial x' = 0$ at $x' = x'_0$.

Therefore, $\phi^*\phi\,dx' = \rho(x')\,dx'$ can be regarded as the mass in the interval from $x'$ to $x' + dx'$.
Thus the position of the mass centre of a microscopic particle at $x'_0$ in nonlinear quantum mechanics can be represented by

$$x'_0 = \langle x'\rangle = \frac{\int_{-\infty}^{\infty} x'\,\phi^*\phi\,dx'}{\int_{-\infty}^{\infty}\phi^*\phi\,dx'} \qquad (43)$$

Then the velocity of the microscopic particle is represented by

$$\frac{dx'_0}{dt'} = \frac{d\langle x'\rangle}{dt'} \qquad (44)$$

Then from Equation (5) and its conjugate equation

$$-i\hbar\frac{\partial\phi^*}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2\phi^* - b|\phi|^2\phi^* + V(\vec{r},t)\phi^* \qquad (45)$$

we can obtain the velocity of the mass centre of the microscopic particle, which can be denoted by

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (46)

We now determine the acceleration of the mass centre of the microscopic particles and its rules of motion in an externally applied potential. We can obtain from the above equation

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (47)

where $x' = x/\sqrt{\hbar^2/2m}$, $t' = t/\hbar$. We here utilize the following relations and boundary conditions:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

$$\int\phi^*\phi\,dx' = \text{constant (or a function of } t'\text{)}$$

and

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (48)

In these systems, the position of the mass centre of a microscopic particle can be represented by Equation (43), and its velocity by Equation (44). Then the acceleration of the mass centre of the microscopic particle can be denoted by

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (49)

If $\phi$ is normalized, i.e., $\int_{-\infty}^{\infty}\phi^*\phi\,dx' = 1$, the above conclusions are unchanged.

Here $V = V(x')$ in Equation (49) is the external potential field experienced by the microscopic particles. We expand $\partial V/\partial x'$ around the mass centre $x' = \langle x'\rangle = x'_0$ as

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Taking the expectation value of the above equation, we get

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

where

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

For the microscopic particle described by Equation (5) or Equation (6), the position of the mass centre of the particle is known and determinate, namely $\langle x'\rangle = x'_0 = $ constant, or 0. Since we study here only the rule of motion of the mass centre $x'_0$, which means that the terms containing $x'_0$ in $\langle x'^2\rangle$ are considered and included, we obtain $\langle(x' - \langle x'\rangle)^2\rangle = 0$. Thus

$$\left\langle\frac{\partial V(x')}{\partial x'}\right\rangle = \frac{\partial V(\langle x'\rangle)}{\partial\langle x'\rangle}$$

Pang [17-28] finally obtained the acceleration of the mass centre of the microscopic particle in nonlinear quantum mechanics, Equation (49), in the form

$$\frac{d^2}{dt'^2}\langle x'\rangle = -2\frac{\partial V(\langle x'\rangle)}{\partial\langle x'\rangle} \qquad (50)$$

Returning to the original variables, Equation (50) becomes

$$m\frac{d^2 x_0}{dt^2} = -\frac{\partial V}{\partial x_0} \qquad (51)$$

where $x'_0 = \langle x'\rangle$ is the position of the mass centre of the microscopic particle. Equation (51) is the equation of motion of the mass centre of the microscopic particles in nonlinear quantum mechanics. It closely resembles the Newton-type equation of motion of classical particles, which is a fundamental dynamics equation in classical physics.
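The Newton-type law (50) can be tested directly by integrating Equation (5) numerically. The sketch below uses a standard split-step Fourier scheme (all numerical parameter values are illustrative choices, not from the source): a soliton released at rest in the linear ramp $V = \alpha x'$ should obey $d^2\langle x'\rangle/dt'^2 = -2\alpha$, i.e. its centre of mass should follow $x'_0(t') = -\alpha t'^2$:

```python
import numpy as np

# Split-step Fourier integration of Eq. (5) in the primed (dimensionless) units,
#   i*phi_t' = -phi_x'x' - b*|phi|^2*phi + V(x')*phi,   with V = alpha*x'.
b, alpha = 1.0, 0.05
N, L = 1024, 80.0
x = (np.arange(N) - N // 2) * (L / N)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
V = alpha * x

A = 1.0
phi = (A / np.cosh(A * np.sqrt(b / 2.0) * x)).astype(complex)  # soliton at rest at x'=0

dt, steps = 1e-3, 2000
for _ in range(steps):
    phi *= np.exp(1j * (dt / 2.0) * (b * np.abs(phi)**2 - V))      # half nonlinear+potential step
    phi = np.fft.ifft(np.exp(-1j * k**2 * dt) * np.fft.fft(phi))   # full dispersion step
    phi *= np.exp(1j * (dt / 2.0) * (b * np.abs(phi)**2 - V))      # second half step

t = dt * steps
xc = np.sum(x * np.abs(phi)**2) / np.sum(np.abs(phi)**2)
print(xc, -alpha * t**2)   # measured centre of mass vs. the prediction x'_0 = -alpha*t'^2
```

The measured centre of mass agrees with the uniformly accelerated trajectory $-\alpha t'^2$, matching Equations (50)-(51).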
Thus it is not difficult to conclude that the microscopic particles depicted by nonlinear quantum mechanics have the properties of classical particles.

The above equation of motion of the particles can also be derived from Equation (5) by another method. As is known, the momentum of the particle depicted by Equation (5) is obtained from Equation (17) and denoted by $P = \partial L/\partial\dot{\phi} = -i\int_{-\infty}^{\infty}(\phi^*\phi_{x'} - \phi^*_{x'}\phi)\,dx'$. Utilizing Equation (5) and Equations (43)-(45), Pang obtained [18-27,51]

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (52)

where the boundary condition $\phi(x')\to 0$ as $|x'|\to\infty$ is used. Utilizing again the above result, $\langle\partial V(x')/\partial x'\rangle = \partial V(\langle x'\rangle)/\partial\langle x'\rangle$, we can also get the acceleration of the mass centre of the particle in the form

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (53)

where $x'_0$ is the position of the centre of mass of the microscopic particle. This is the same as Equation (51). Therefore, we can confirm that the microscopic particles in nonlinear quantum mechanics satisfy the Newton-type equation of motion of a classical particle.

2.2 Lagrangian and Hamilton Equations of Microscopic Particles

Using the above variables $\phi$ in Equation (5) and $\phi^*$ in Equation (45), one can determine the Poisson bracket and further write the equations of motion of microscopic particles in the form of Hamilton's equations. For Equation (5), the variables $\phi$ and $\phi^*$ satisfy the Poisson bracket

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (54)

The Lagrangian density function $\mathcal{L}'$ corresponding to Equation (5) is given as follows:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (55)

where $\mathcal{L}' = \mathcal{L}$ denotes the Lagrangian density in the primed coordinates.
The momentum density of the particle system is defined as $P = \partial\mathcal{L}/\partial\phi_{t'}$. Thus the Hamiltonian density $\mathcal{H} = \mathcal{H}'$ of the system is as follows:

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (56)

From the Lagrangian density $\mathcal{L}'$ in Equation (55), Pang [18-27, 51-55] found

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Thus we can obtain

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (57)

Through comparison with Equation (5), Pang gets

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (58)

Equation (58) is just the well-known Euler-Lagrange equation for this system. This shows that the nonlinear Schrodinger equation amounts to the Euler-Lagrange equation in nonlinear quantum mechanics; in other words, the dynamic equation, i.e., the nonlinear Schrodinger equation, can be obtained from the Euler-Lagrange equation in nonlinear quantum mechanics if the Lagrangian function of the system is known. This is different from quantum mechanics, in which the dynamic equation is the linear Schrodinger equation rather than the Euler-Lagrange equation.

On the other hand, Pang [18-27, 51-55] also obtained the Hamilton equation of the microscopic particle from the Hamiltonian density of this system in Equation (56). In fact, we can obtain from Equation (56)

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (59)

Then from Equation (56) we can give

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Thus

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (60)

Equation (60) is just the complex form of the Hamilton equation in nonlinear quantum mechanics. In fact, the Hamilton equation can also be represented in terms of the canonical coordinate and momentum of the particle.
In this case the canonical coordinates and momenta are defined by

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Thus the Hamiltonian density of the system in Equation (56) takes the form

$$\mathcal{H}' = \sum_i p_i\,\partial_{t'}q_i - \mathcal{L}'$$

and the corresponding variation of the Lagrangian density $\mathcal{L}' = \mathcal{L}$ can be written as

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

From Equation (17), the definition of $p_i$, and the Euler-Lagrange equation

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

one obtains the variation of the Hamiltonian in the form

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Thus one pair of dynamic equations can be obtained and expressed by

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (61)

This is analogous to the Hamilton equations of classical mechanics and has the same physical significance as Equation (60), though the latter is used more often in nonlinear quantum mechanics. This result shows that the nonlinear Schrodinger equation describing the dynamics of microscopic particles can be obtained from the classical Hamilton equations in this case, if the Hamiltonian of the system is known. Obviously, such methods of finding dynamic equations are impossible in quantum mechanics. As is known, the Euler-Lagrange and Hamilton equations are fundamental equations of classical theoretical (analytic) mechanics, used to describe the laws of motion of classical particles. This means that the microscopic particles possess evident classical features in nonlinear quantum mechanics.
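Equations (55)-(56) did not survive extraction. For orientation, the Lagrangian and Hamiltonian densities usually associated with an NLS equation of the form (5) read, in the primed units and up to sign and normalization conventions:

```latex
\mathcal{L}' = \frac{i}{2}\bigl(\phi^{*}\phi_{t'} - \phi\,\phi^{*}_{t'}\bigr)
             - |\phi_{x'}|^{2} + \frac{b}{2}\,|\phi|^{4} - V(x')\,|\phi|^{2},
\qquad
\mathcal{H}' = |\phi_{x'}|^{2} - \frac{b}{2}\,|\phi|^{4} + V(x')\,|\phi|^{2}.
```

Varying $\mathcal{L}'$ with respect to $\phi^{*}$ indeed recovers $i\phi_{t'} + \phi_{x'x'} + b|\phi|^{2}\phi = V(x')\phi$, consistent with the Euler-Lagrange derivation described in the text.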
From this study we also find a new way of obtaining the equation of motion of the microscopic particles in nonlinear systems: if only the Lagrangian or Hamiltonian of the system is known, we can obtain the equation of motion of the microscopic particles from the Euler-Lagrange or Hamilton equations.

On the other hand, from the de Broglie relations $E = \hbar\omega$ and $\vec{p} = \hbar\vec{k}$, which represent the wave-corpuscle duality of the microscopic particles in quantum theory, we see that the frequency $\omega$ and wave vector $\vec{k}$ can play the roles of the Hamiltonian of the system and the momentum of the particle, respectively, even in nonlinear systems, and thus satisfy the relation

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

as in the usual stationary media. From the above results we also know that the usual Hamilton equations remain valid for the microscopic particles in nonlinear systems. Thus the Hamilton equations can now be represented in another form [25-27, 56-57]:

$$\frac{dx'}{dt'} = \frac{\partial\omega}{\partial k}, \qquad \frac{dk}{dt'} = -\frac{\partial\omega}{\partial x'} \qquad (62)$$

in the energy picture, where $k = \partial\theta/\partial x'$ is the time-dependent wavenumber of the microscopic particle, $\omega = \partial\theta/\partial t'$ is its frequency, and $\theta$ is the phase of the wave function of the microscopic particle.

2.3 Confirmation of the Correctness of the Above Conclusions

We now use some concrete examples to confirm the correctness of the laws of motion of the microscopic particles mentioned above in nonlinear quantum mechanics [18-27, 51-55].

(1) For the microscopic particles described by Equation (5) with $V = 0$ or constant, of which the solutions are Equations (15) or (16) and (36), respectively, we find that the acceleration of the mass centre of the microscopic particle is zero, because $m\,d^2\langle x\rangle/dt^2 = -\partial V(\langle x\rangle)/\partial\langle x\rangle$
= 0 in this case. This means that the velocity of the particle is a constant. In fact, inserting Equation (15) into Equation (53) we obtain the group velocity of the particle, [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII], a constant. This shows that the microscopic particle moves with uniform velocity in space-time; its velocity is just the group velocity of the soliton, so the energy and momentum of the microscopic particle are retained during the motion. These properties are the same as those of a classical particle.

On the other hand, if the dynamic Equation (62) is used, we obtain from Equation (15) that the acceleration and velocity of the microscopic particle are

$$\frac{dk}{dt'} = 0 \quad\text{and}\quad v_g = \frac{dx'}{dt'} = \frac{\partial\omega}{\partial k} = v_e = -4\xi,$$

respectively, where

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

For the solution in Equation (36) at $V = $ constant, [MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII].

These results for the acceleration and velocity of the microscopic particle are the same as those obtained above from Equations (15) and (51). This indicates that the laws of motion in Equations (50), (51), (53), (60), (61) and (62) are self-consistent and correct in nonlinear quantum mechanics.

(2) For the case of $V(x') = \alpha x'$, the solution of Equation (5) is Equation (39) of Chen and Liu [44-45], which is also composed of an envelope and a carrier wave. The mass centre of the particle is at $x_0$, its localized position. From Equation (51) we can determine the acceleration of the mass centre of the microscopic particle in this case, which is given by

$$\frac{d^2 x'_0}{dt'^2} = -2\frac{\partial V(\langle x'\rangle)}{\partial\langle x'\rangle} = -2\alpha = \text{constant} \qquad (63)$$

On the other hand, from Equation (39) we know that

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (64)

where $\xi$ is the same as $\xi'$ in Equation (39).
Utilizing again Equation (62) we can find

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Thus the group velocity of the microscopic particle is found from

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (65)

Then its acceleration is given by

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (66)

Comparing Equation (63) with Equation (66), we find that they are the same. This indicates that Equations (51), (53), (60), (61) and (62) are correct. In such a case the microscopic particle moves with uniform acceleration. This is similar to the motion of a classical charged particle in a uniform electric field.

(3) For the case of $V(x') = \alpha^2 x'^2$, which is a harmonic potential, the solution of Equation (5) is Equation (42), obtained by Chen and Liu [44-45]. This solution also contains an envelope and a carrier wave, and likewise has a mass centre; its position $x'_0$ is the position of the microscopic particle. When Equation (51) is used to determine the properties of motion of the particle in this case, Pang [18-28] found that the acceleration of the centre of mass of the particle is

$$\frac{d^2 x'_0}{dt'^2} = -4\alpha^2 x'_0 \qquad (67)$$

At the same time, from Equation (42) we obtain

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (68)

where $\xi$ is the same as $\xi'$ in Equation (42). From Equations (67) and (68) we can find

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

Thus the group velocity of the microscopic particle is

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

while its acceleration is

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (69)

Since $d^2\bar{x}'/dt'^2 = dk/dt'$ (here $\bar{x}' = x'_0$), we have

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (70)

Finally, the acceleration of the microscopic particle is

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (71)

Equation (71) is also the same as Equation (67). Thus we confirm again the validity of Equations (48), (51), (53), (58), (60) and (61)-(62). In such a case the microscopic particle moves harmonically. This also resembles the result for the motion of a classical particle.

From the above studies we draw the following conclusions [18-27, 51-55]. (1) The motions of microscopic particles in nonlinear quantum mechanics can be described not only by the nonlinear Schrodinger equation but also by the Hamiltonian principle and the Lagrange and Hamilton equations, and the change of their position with time satisfies the law of motion of a classical particle in both uniform and inhomogeneous potentials. This not only shows that the natures of microscopic particles described by nonlinear quantum mechanics differ completely from those in linear quantum mechanics but also displays fully the corpuscle nature of the microscopic particles.

(2) The external potentials can change the states of motion of the microscopic particles, although they cannot alter the wave-corpuscle duality. For example, the particle moves with a uniform velocity at $V(x') = 0$ or constant, and with a uniform acceleration at $V(x') = \alpha x'$, which corresponds to the motion of a charged particle in a uniform electric field; when $V(x') = \alpha^2 x'^2$ the microscopic particle performs a localized vibration with frequency $2\alpha$ and amplitude $\xi/\alpha$, the corresponding classical vibrational equation being $x' = x'_0\sin\omega t'$ with $\omega = 2\alpha$ and $x'_0 = \xi/\alpha$.
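The harmonic case (3) can be checked with the same split-step scheme used for the linear ramp. Per Equation (67), a soliton released at rest from $x'_0$ in the trap $V = \alpha^2 x'^2$ should oscillate at frequency $2\alpha$ and reach $-x'_0$ after half a period $t' = \pi/(2\alpha)$. The parameter values below are illustrative only:

```python
import numpy as np

# Split-step Fourier integration of i*phi_t' = -phi_x'x' - b*|phi|^2*phi + V*phi
# with the harmonic potential V = alpha^2 * x'^2 (Eq. (67) test).
b, alpha, x0 = 1.0, 0.2, 2.0
N, L = 1024, 80.0
x = (np.arange(N) - N // 2) * (L / N)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
V = alpha**2 * x**2

A = 1.0
phi = (A / np.cosh(A * np.sqrt(b / 2.0) * (x - x0))).astype(complex)  # at rest at x'_0

half_period = np.pi / (2.0 * alpha)   # half of 2*pi/(2*alpha)
dt = 1e-3
steps = int(round(half_period / dt))
for _ in range(steps):
    phi *= np.exp(1j * (dt / 2.0) * (b * np.abs(phi)**2 - V))      # half nonlinear+potential step
    phi = np.fft.ifft(np.exp(-1j * k**2 * dt) * np.fft.fft(phi))   # full dispersion step
    phi *= np.exp(1j * (dt / 2.0) * (b * np.abs(phi)**2 - V))      # second half step

xc = np.sum(x * np.abs(phi)**2) / np.sum(np.abs(phi)**2)
print(xc)   # close to -x0: the centre oscillates as x'_0*cos(2*alpha*t')
```

The centre of mass lands near $-x'_0$ after half a period, confirming the oscillation frequency $2\alpha$ predicted by Equation (67).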
The laws of motion of the centre of mass of microscopic particles expressed by Equation (53) and Equations (58), (60) and (62) in nonlinear quantum mechanics are consistent with the equations of motion of macroscopic particles.

The correspondence between a microscopic particle and a macroscopic object shows that microscopic particles described by nonlinear quantum mechanics obey exactly the same laws of motion and have the same properties as classical particles. These results not only verify the necessity of developing nonlinear quantum mechanics and its correctness, but also exhibit clearly the limits and approximations of linear quantum mechanics, and can resolve the difficulties of linear quantum mechanics and the problems of contention in it described in the Introduction. Therefore, the results mentioned above have important significance in physics and nonlinear science.

3. THE COLLISION PROPERTY OF MICROSCOPIC PARTICLES DESCRIBED BY THE NONLINEAR SCHRODINGER EQUATION

As a matter of fact, the collision properties of soliton solutions of the nonlinear Schrodinger Equation (5) at b = 1 > 0 and b < 0 were first studied analytically by Zakharov and Shabat [39-40] using the inverse scattering method and the Zakharov-Shabat equation. They found by calculation that only the centre of mass and phase of a soliton change after a collision at V(x) = 0 and b = 1. The translations of the centre of mass x_0m and phase θ_m of the mth soliton, which moves in the positive (or final) direction after the collision, can be represented, respectively, by

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (72)

where η_m and λ_m are constants related to the amplitude and eigenvalue of the mth soliton, respectively. Equation (72) shows that the shift of the position of the centre of mass of the solitons and the change of their phases are constant after the collision of two solitons moving with different velocities and amplitudes. 
The collision process of the two solitons can be described from Equation (72) as follows. Before the collision, as t → -∞, the slowest soliton is in front and the fastest at the rear; they collide with each other at t' = 0; after the collision, as t → ∞, they are separated and their positions are just reversed. Zakharov and Shabat [39-40] obtained that, as the time t varies from -∞ to ∞, the relative change of the centre of mass of the two solitons, Δx_0m, and the relative change of their phases, Δθ_m, can be denoted, respectively, as

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (73)

and

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (74)

where x⁻_0m and θ⁻_m are the centre of mass and phase of the mth particle at its initial position, respectively. Equation (73) can be interpreted by assuming that the solitons collide pairwise and that every soliton collides with the others. In each paired collision, the faster soliton moves forward by an amount η_m⁻¹ ln |(λ_m - λ*_k)/(λ_m - λ_k)|, λ_m > λ_k, and the slower one shifts backwards by an amount η_k⁻¹ ln |(λ_m - λ*_k)/(λ_m - λ_k)|. The total shift of a soliton is equal to the algebraic sum of its shifts during the paired collisions, so that there is no net effect of multi-soliton collisions at all. The situation is the same for the phases. 
This rule of collision of the solitons described by the nonlinear Schrodinger Equation (6) is the same as that of classical particles; that is, they also obey the collision law of classical particles. During the collision these solitons interact and exchange their positions in the space-time trajectory as if they had passed through each other. After the collision, the two solitons may appear to be instantly translated in space and/or time but are otherwise unaffected by their interaction. The translation is called a phase shift, as mentioned above. In one dimension, this process results from two solitons colliding head-on from opposite directions, or in one direction between two solitons with different amplitudes or velocities. This is possible because the velocity of a soliton depends on its amplitude. That the two solitons survive a collision completely unscathed demonstrates clearly the corpuscle feature of the solitons. This property separates the solitons described by the nonlinear Schrodinger Equation (6) from the microscopic particles of the quantum mechanical regime, and thus demonstrates the classical feature of the solitons.

At the same time, Desem et al. and Tan et al. studied the features of the above solitons in the collision process by Zakharov and Shabat's approach.

Zakharov and Shabat [39-40] also discussed analytically the collision properties of two particles described by the nonlinear Schrodinger Equation (6) at b < 0 using the Zakharov-Shabat equation. The result shows that the collision features of the two solitons in this case are basically similar to the above properties. Meanwhile, Aossey et al. 
investigated numerically the detailed structure, mechanism and rules of collision of the microscopic particles described by the nonlinear Schrodinger Equation (6) at b < 0, and obtained the rules of collision of two solitons from a macroscopic model.

In fact, the collision properties of microscopic particles can also be obtained by numerically solving Equation (6). Numerical simulation can reveal more detailed features of the collision between two microscopic particles. We here studied numerically the collision features of microscopic particles described by the nonlinear Schrodinger Equation (5) at b > 0 by the fourth-order Runge-Kutta method [58-59]. For this purpose we begin in the one-dimensional case by dividing Equation (5) into the following two equations

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (75)

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (76)

Equations (75)-(76) may be thought of as describing the motion of the studied particle and of another particle (such as a phonon) or a background field (such as a lattice) with mass M and velocity v_0; u is the characteristic quantity of the other particle (such as a phonon) or of the vibration (such as the displacement) of the background field. The coupling between the two modes of motion is caused by the deformation of the background field through the particle-background field coupling, such as a dipole-dipole interaction; χ is the coupling coefficient between them and represents the change of the interaction energy between the studied particle and the background field due to a unit variation of the background field. 
The relation between the two modes of motion due to their interaction can be represented by

∂u/∂x = χ/[M(v² - v_0²)] |φ|² + A (77)

Inserting Equation (77) into Equation (75) yields exactly the nonlinear Schrodinger Equation (5) at V(x) = constant, where b = χ²/[M(v² - v_0²)] is a nonlinear coupling coefficient, V(x) = Aχ, and A is an integration constant. This investigation shows clearly that the nonlinear interaction b|φ|²φ comes from the coupling interaction between two particles, or between the studied particle and the background field. Very clearly, in this model the real motion of the background field and its interaction with the particle are fully taken into account, instead of being replaced by an average field, which is completely different from quantum mechanics. Therefore we can say that the nonlinear interaction b|φ|²φ, or the nonlinear Schrodinger Equation (5), represents the real motions of the particles and the background field and their interactions, which are completely different from quantum mechanics. Hence we can believe that the nonlinear Schrodinger Equation (5) describes correctly the natures and properties of microscopic particles in quantum mechanical systems.

In order to use the fourth-order Runge-Kutta method [58-59] to solve Equation (5) numerically, we must further discretize Equations (75) and (76), which can be denoted as

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (78)

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (79)

where the following transformation relations between continuous and discrete functions are used

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (80)

Here ε = h²/mr_0 + Aχ, J = h²/2mr_0², W = Mv_0²/r_0², and r_0 is the distance between two neighbouring lattice points. 
Using the transformation φ_n → φ_n exp(iεt/h) we can eliminate the term εφ_n(t) in Equation (78). Making a further transformation, φ_n(t) → a_n(t) = ar_n(t) + i ai_n(t), Equations (78)-(79) become

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (81)

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (82)

du_n/dt = y_n/M (83)

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (84)

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII] (85)

[FIGURE 2 OMITTED]

where ar_n and ai_n are the real and imaginary parts of a_n. Equations (81)-(85) determine the states and behaviours of the microscopic particle. Their solutions can be found; they are given in the Appendix. There are four equations for each structure unit; therefore, for a quantum system constructed of N structure units there are 4N associated equations. When the fourth-order Runge-Kutta method [58-59] is used to calculate numerically the solutions of the above equations, we should discretize them further. Thus n is replaced by j, the time is denoted by n, and the step length of the space variable is denoted by h in the above equations. An initial excitation is required in this calculation, which is chosen as a_n(0) = A sech[(n - n_0)(χ/2r_0)²/4JW] (where A is the normalization constant). For the lattice, u_n(0) = y_n(0) = 0. In the numerical simulation it is required that the total energy and the norm (or particle number) of the system be conserved. The system of units (eV for energy, Å for length and ps for time) proves suitable for the numerical solution of Equations (81)-(85). The one-dimensional system is fixed, N is chosen to be N = 200, and a time step size of 0.0195 is used in the simulations. 
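The structure of such a simulation can be illustrated with a minimal sketch in Python (the article's own computations use MATLAB). The sketch integrates a simplified discrete nonlinear Schrodinger chain, i dφ_n/dt = -J(φ_{n+1} + φ_{n-1}) - b|φ_n|²φ_n, with a fourth-order Runge-Kutta step; the parameter values, the periodic boundary, and the omission of the lattice variables u_n, y_n are simplifying assumptions made for brevity, not the authors' exact Equations (81)-(85). The quantity checked at the end is the norm (particle number), whose conservation is required in the text.

```python
import math

# Minimal RK4 integrator for a simplified discrete nonlinear Schrodinger chain,
#   i dphi_n/dt = -J (phi_{n+1} + phi_{n-1}) - b |phi_n|^2 phi_n,
# with periodic boundaries.  J, b, N and dt are illustrative values only.

def rhs(phi, J=0.5, b=1.0):
    N = len(phi)
    return [1j * (J * (phi[(n + 1) % N] + phi[(n - 1) % N])
                  + b * abs(phi[n]) ** 2 * phi[n])
            for n in range(N)]

def rk4_step(phi, dt):
    k1 = rhs(phi)
    k2 = rhs([p + 0.5 * dt * k for p, k in zip(phi, k1)])
    k3 = rhs([p + 0.5 * dt * k for p, k in zip(phi, k2)])
    k4 = rhs([p + dt * k for p, k in zip(phi, k3)])
    return [p + dt / 6.0 * (a + 2 * u + 2 * v + w)
            for p, a, u, v, w in zip(phi, k1, k2, k3, k4)]

# sech-shaped initial excitation, analogous to the a_n(0) used in the text
N = 64
phi = [complex(0.5 / math.cosh(0.3 * (n - N // 2))) for n in range(N)]

norm0 = sum(abs(p) ** 2 for p in phi)   # particle number, must be conserved
for _ in range(200):
    phi = rk4_step(phi, 0.02)
norm1 = sum(abs(p) ** 2 for p in phi)
drift = abs(norm1 - norm0) / norm0      # RK4 keeps this very small
```

Monitoring `drift` (and, in the full model, the total energy) against a tolerance is the practical way to validate such a run, as the authors require.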
The whole numerical simulation is performed through data-parallel algorithms in the MATLAB language.

If the values of the parameters M, ε, J, W, χ and r_0 in Equations (78)-(79) are chosen appropriately, we can calculate the numerical solution of the associated Equations (81)-(85) by using the fourth-order Runge-Kutta method [58-59]; thus the change of |φ_n(t)|² = |a_n(t)|², which is the probability or number density of the microscopic particle at the nth structure unit, with increasing time and position can also be obtained. This result is shown in Figure 2. The figure shows that the amplitude of the solution remains constant, i.e., the solution of Equations (78)-(79), or of Equation (5) at V(x) = constant, is very stable during motion. Meanwhile, Figure 3 gives the propagation features of the solutions of Equations (81)-(85) over a long time period of 250 ps and a long distance of 400 unit spacings, which indicates that the states of the solution are also stable over long propagation. According to soliton theory [29-30], Equations (78)-(79) have exactly a soliton solution; thus the microscopic particles described by the nonlinear Schrodinger Equation (5) are solitons and have a wave-corpuscle feature.

[FIGURE 3 OMITTED]

In order to verify the wave-corpuscle feature of microscopic particles described by the nonlinear Schrodinger Equation (5), we should also study their collision properties in accordance with soliton theory [29-30]. Thus we further simulated numerically the collision behaviour of two particles described by the nonlinear Schrodinger Equation (5) at V(x) = ε = h²/mr_0 + Aχ = constant, using the fourth-order Runge-Kutta method [58-59]. 
This process, resulting from two microscopic particles colliding head-on from opposite directions, set up from opposite ends of the channel, is shown in Figure 4, where the above initial conditions simultaneously excite the opposite ends of the channel. From this figure we see clearly that the initial two bell-shaped particles, separated by 50 unit spacings in the channel, collide with each other at about 8 ps and 25 units. After this collision, the two particles go through each other without scattering and retain their bell shapes, propagating onwards separately along their own channels. The collision properties of microscopic particles described by the nonlinear Schrodinger Equation (5) are the same as those obtained by Zakharov and Shabat [39-40] and Aossey et al. as mentioned above. Meanwhile, this collision obeys the same rules as the collision of macroscopic particles. Thus we can conclude that microscopic particles described by the nonlinear Schrodinger Equation (5) have a corpuscle feature.

However, we also see from Figure 4 that a wave peak of large amplitude occurs during the collision, which appears also in Desem and Chu's numerical result. Obviously, this is a result of the complicated superposition of two solitary waves, and it thus displays the wave feature of the microscopic particles. Therefore, the collision process shown in Figure 4 demonstrates clearly that the solutions of the nonlinear Schrodinger equation have both corpuscle and wave features, and hence that the microscopic particles denoted by these solutions have a wave-corpuscle duality, which is due to the nonlinear interaction b|φ|²φ. 
Thus we can again conclude that the nonlinear Schrodinger equation describes correctly the natures and properties of microscopic particles in quantum systems.

[FIGURE 4 OMITTED]

CONCLUSION

In this paper we used a nonlinear Schrodinger equation, instead of the Schrodinger equation of quantum mechanics, to describe the properties of microscopic particles. In the dynamic equation the nonlinear interaction is denoted by b|φ|²φ, which is caused by the interaction between the particle and other particles or a background field when their real motions are taken into account. The nonlinear interaction can suppress the dispersive effect of the kinetic energy in the dynamic equation and also change the natures of the microscopic particles. We therefore investigated in depth the dynamic and collision features of microscopic particles described by the nonlinear Schrodinger equation, using analytic methods and Runge-Kutta numerical simulation, by finding the soliton solutions of the equation. The results obtained show that the microscopic particles have a wave-corpuscle duality and are stable in propagation. When two microscopic particles collide head-on from opposite directions, they can go through each other and retain their forms after the collision, the same as classical particles. However, a wave peak of large amplitude, which is a result of the complicated superposition of two solitary waves, occurs during the collision. This displays the wave feature of microscopic particles. Therefore, the collision process shows clearly that the solutions of the nonlinear Schrodinger equation have both corpuscle and wave features, and hence the microscopic particles represented by the solutions have a wave-corpuscle duality. Clearly, this nature is due to the nonlinear interaction b|φ|²φ. 
Thus we can conclude that the nonlinear Schrodinger equation describes correctly the natures and properties of microscopic particles in quantum systems. This result has important significance in physics. A new, nonlinear quantum mechanical theory can then be established on the basis of these results.

DOI: 10.3968/j.ans.1715787020120504.1977

APPENDIX

The Solutions of Equations (81)-(85)

From Equations (81)-(84) we can easily find their solutions, which are as follows

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

where

[MATHEMATICAL EXPRESSION NOT REPRODUCIBLE IN ASCII]

These are the base equations on which we perform the numerical simulation by computer with the Runge-Kutta method.

ACKNOWLEDGEMENT

The author would like to acknowledge the Major State Basic Research Development Program ("973" program) of China for financial support (Grant No. 2007CB936103).

REFERENCES

[1] Bohr, D., & Bub, J. (1935). Phys. Rev., 48, 169.

[2] Schrodinger, E. (1935). Naturwissenschaften, 23, 807.

[3] Schrodinger, E. (1926). Phys. Rev., 28, 1049.

[4] Heisenberg, W. (1925). Zeitschrift der Physik, 33, 879.

[5] Born, M., & Infeld, L. (1934). Proc. Roy. Soc. A, 144, 425.

[6] Dirac, P. A. M. (1948). Phys. Rev., 73, 1092.

[7] Diner, S., Farque, D., Lochak, G., & Selleri, F. (1984). The Wave-Particle Dualism (pp. 34-96). Riedel, Dordrecht.

[8] Ferrero, M., & Van der Merwe, A. (1997). New Developments on Fundamental Problems in Quantum Physics (pp. 56-97). Kluwer, Dordrecht.

[9] Ferrero, M., & Van der Merwe, A. (1995). Fundamental Problems in Quantum Physics (pp. 48-91). Kluwer, Dordrecht.

[10] Broglie de, L. (1955). Nuovo Cimento, 1, 37.

[11] Bohm, D. (1951). Quantum Theory (pp. 36-98). Prentice Hall, Englewood Cliffs, New Jersey.

[12] Potter, J. (1973). Quantum Mechanics (pp. 43-105). North Holland Publishing Co., Amsterdam.

[13] Jammer, M. (1989). The Conceptual Development of Quantum Mechanics. Tomash, Los Angeles.

[14] Bell, J. S. (1987). 
Speakable and Unspeakable in Quantum Mechanics (pp. 51-132). Cambridge University Press, Cambridge.

[15] Einstein, A., Podolsky, B., & Rosen, N. (1935). Phys. Rev., 47, 777.

[16] French, A. P., & Einstein, A. (1979). Centenary Volume (pp. 64-112). Harvard University Press, Cambridge, Mass.

[17] PANG, X. F. (1985). Problems of Nonlinear Quantum Mechanics (pp. 23-167). Sichuan Normal University Press, Chengdu.

[18] PANG, Xiaofeng, & FENG, Yuanping (2005). Quantum Mechanics in Nonlinear Systems. World Scientific Publishing Co., New Jersey.

[19] PANG, Xiaofeng (1994). Theory of Nonlinear Quantum Mechanics. Chinese Chongqing Press, Chongqing.

[20] PANG, Xiaofeng (2009). Nonlinear Quantum Mechanics (pp. 1-312). Chinese Electronic Industry Press, Beijing.

[21] PANG, Xiaofeng (2008). Physica B, 403, 3571.

[22] PANG, Xiaofeng (2008). Fronts of Physics in China, 3, 243.

[23] PANG, Xiaofeng (2008). Nature Sciences, 3(1), 29.

[24] PANG, Xiaofeng (2007). Nature Sciences, 2(1), 42.

[25] PANG, Xiaofeng (2009). Mod. Phys. Lett. B, 23, 939.

[26] PANG, Xiaofeng (2010). Physica B, 405, 2317.

[27] PANG, Xiaofeng (2009). Physica B, 405, 4327.

[28] PANG, Xiaofeng (1982). Chin. Nature Journal, 4, 254.

[29] PANG, Xiaofeng (2003). Soliton Physics. Sichuan Science and Technology Press, Chengdu.

[30] GUO, Bailin, & PANG, Xiaofeng (1987). Solitons. Chinese Science Press, Beijing.

[31] PANG, X. F. (2008). Physica B, 403, 4292.

[32] PANG, X. F. (2008). Frontiers of Physics in China, 3, 413.

[33] PANG, X. F. (2008). Nature Sciences, 3, 29.

[34] PANG, X. F. (1985). Chin. J. Potential Science, 5, 16.

[35] PANG, X. F. (1991). The Theory of Nonlinear Quantum Mechanics. In Lui Hong (Ed.), Research of New Sciences (pp. 16-20). Science and Tech. Press, Beijing.

[36] PANG, X. F. (2006). Research and Development of World Science and Technology, 28, 11.

[37] PANG, X. F. (2006). Research and Development of World Science and Technology, 24, 54.

[38] PANG, X. F. (2006). 
Features of Motion of Microscopic Particles in Nonlinear Systems and Nonlinear Quantum Mechanics. In Scientific Proceedings - Physics and Others (pp. 53-93). Atomic Energy Press, Beijing.

[39] Zakharov, V. E., & Shabat, A. B. (1972). Sov. Phys. JETP, 34, 62.

[40] Zakharov, V. E., & Shabat, A. B. (1973). Sov. Phys. JETP, 37, 823.

[41] PANG, X. F. (1985). J. Low Temp. Phys., 58, 334.

[42] PANG, X. F. (1989). Chinese J. Low Temp. and Supercond., 10, 612.

[43] PANG, X. F. (2008). J. Electronic Science and Technology of China, 6, 205.

[44] CHEN, H. H., & LIU, C. S. (1976). Phys. Rev. Lett., 37, 693.

[45] CHEN, H. H. (1978). Phys. Fluids, 21, 377.

[46] PANG, X. F. (2009). Mod. Phys. Lett. B, 23, 939.

[47] PANG, X. F. (1993). Chin. Phys. Lett., 10, 437.

[48] Sulem, C., & Sulem, P. L. (1999). The Nonlinear Schrodinger Equation: Self-Focusing and Wave Collapse (pp. 26-89). Springer, New York.

[49] Makhankov, V. G., & Fedyanin, V. K. (1984). Phys. Rep., 104, 1.

[50] Makhankov, V. G. (1978). Phys. Rep., 35, 1.

[51] PANG, X. F. (2009). Physica B, 404, 3125.

[52] PANG, X. F. (2009). Mod. Phys. Lett. B, 23, 2175.

[53] PANG, X. F. (2003). Phys. Stat. Sol. (b), 236, 34.

[54] PANG, X. F. (2007). Nature Sciences, 2, 42.

[55] PANG, X. F. (2000). Phys. Rev. E, 62, 6989.

[56] Desem, C., & Chu, P. L. (1992). Soliton-Soliton Interactions in Optical Solitons. In J. R. Taylor (Ed.) (pp. 107-351). Cambridge University Press, Cambridge.

[57] TAN, B., & Bord, J. P. (1998). Davydov Soliton Collisions. Phys. Lett. A, 240, 282.

[58] Stiefel, J. (1965). Einfuhrung in die numerische Mathematik. Teubner Verlag, Stuttgart.

[59] Atkinson, K. E. (1987). An Introduction to Numerical Analysis. Wiley, New York.

[60] Aossey, D. W., Skinner, S. R., Cooney, J. T., Williams, J. E., Gavin, M. T., Andersen, D. R., & Lonngren, K. E. (1992). Phys. 
Rev. A, 45, 2606.

PANG, Xiaofeng (2012). The Dynamic and Collision Features of Microscopic Particles Described by the Nonlinear Schrodinger Equation in the Nonlinear Quantum Systems. Advances in Natural Science, 5(4), 36-51. Available from: http://www.cscanada.net/index.php/ans/article/view/j.ans.1715787020120504.1977 DOI: http://dx.doi.org/10.3968/j.ans.1715787020120504.1977
https://www.instasolv.com/rd-sharma-solutions/class-11-chapter-9-trigonometric-ratios-of-multiple-and-sub-multiple-angles.html
# RD Sharma Class 11 Chapter 9 Solutions (Trigonometric Ratios of Multiple and Sub-Multiple Angles)

RD Sharma Solutions for Class 11 Maths Chapter 9, 'Trigonometric Ratios of Multiple and Sub-Multiple Angles', are ready-made solutions, available online, for students preparing for the Central Board of Secondary Education examinations. They also provide comprehensive knowledge of the chapter, so that you can face all kinds of questions related to it.

In RD Sharma Class 11 Chapter 9 there are 84 questions, divided into three exercises. There is an additional exercise, 9 VSAQ (very short answer type questions), specially designed to improve your speed in solving such equation-based questions. In this chapter you will learn the formulas for the trigonometric ratios of multiple and sub-multiple angles of x.

RD Sharma solutions are very beneficial for students in understanding the concepts behind trigonometry questions. Questions worth 8-12 marks often appear in the final CBSE exam, so these solutions will give you the confidence to attempt questions irrespective of their difficulty level. RD Sharma solutions by Instasolv provide solved, step-by-step illustrative examples of the kind frequently asked in the CBSE examination. 
You should follow these solutions before attempting your final exam.

## Topics Covered in RD Sharma Class 11 Maths Chapter 9 Trigonometric Ratios of Multiple and Sub-Multiple Angles

In trigonometry we meet a number of formulae involving multiple and sub-multiple angles.

- Multiple angles: if x is an angle, then 2x, 3x, 4x, ... are multiple angles of x.
- Sub-multiple angles: if x is an angle, then x/2, x/3, x/4, ... are sub-multiple angles of x.

### The important formulae for trigonometric ratios of sub-multiples of an angle are as follows:

1. | sin x/2 + cos x/2 | = √(1 + sin x)
2. sin x/2 + cos x/2 = +√(1 + sin x) if 2nπ - π/4 ≤ x/2 ≤ 2nπ + 3π/4, and -√(1 + sin x) otherwise
3. | sin x/2 - cos x/2 | = √(1 - sin x)
4. sin x/2 - cos x/2 = +√(1 - sin x) if 2nπ + π/4 ≤ x/2 ≤ 2nπ + 5π/4, and -√(1 - sin x) otherwise
5. tan x/2 = ± √[(1 - cos x) / (1 + cos x)]
6. |A cos x + B sin x| ≤ √(A² + B²)

The signs in these formulas depend on the quadrant in which x/2 lies.

Some useful results when X + Y + Z = π:

- sin² X/2 + sin² Y/2 + sin² Z/2 = 1 - 2 sin X/2 sin Y/2 sin Z/2
- cos² X/2 + cos² Y/2 + cos² Z/2 = 2 + 2 sin X/2 sin Y/2 sin Z/2
- sin² X/2 + sin² Y/2 - sin² Z/2 = 1 - 2 cos X/2 cos Y/2 sin Z/2
- cos² X/2 + cos² Y/2 - cos² Z/2 = 2 cos X/2 cos Y/2 sin Z/2
- tan Y/2 tan Z/2 + tan Z/2 tan X/2 + tan X/2 tan Y/2 = 1

Note also that since sin θ = sin(nπ + (-1)ⁿ θ), the formula for sin x/2 holds with x/2 replaced by its general value nπ + (-1)ⁿ x/2; similarly, since tan θ = tan(nπ + θ), the formula for tan x/2 holds with x/2 replaced by nπ + x/2.

## Discussion of Exercises of RD Sharma Solutions for Class 11 Chapter 9

1. 
In the first exercise, 9.1, there are 50 questions asking you to prove given equations by applying the appropriate trigonometric formulas.

Questions about the values of sin, cos and tan of x/2 are common, and you should apply the right formula for sin, cos, tan or cot. For example:

| sin A/2 + cos A/2 | = √(1 + sin A)

| sin A/2 - cos A/2 | = √(1 - sin A)

2. In exercise 9.2 there are 11 questions in total. They are somewhat lengthy and time-consuming, so continuous practice is advised if you want to attempt them in the final exams. Every step matters in such questions, and with the right application of a formula you can reach the conclusion.

3. In exercise 9.3 there are 10 questions following the same pattern of formulas. They are very important and mark-fetching if you have a good command of them.

4. Exercise 9 VSAQ is specially designed to give additional practice on the questions that recur most often in various competitive examinations.

## Benefits of RD Sharma Class 11 Maths Solutions for Chapter 9 by Instasolv

Instasolv always works for the best of its students and is a free platform on which to prepare. RD Sharma Class 11 Maths Solutions Chapter 9 help you quickly identify the pitfalls in attempting tough questions. These solutions follow the current CBSE syllabus and build up your speed so that you can attempt all the questions, which many students fail to do for lack of preparation. So take advantage of these ready solutions and come out with flying colours. Reach your goals with Instasolv!
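As a final sanity check on the sub-multiple-angle formulas from this chapter, the identities can be verified numerically; a short stdlib-Python sketch (the sample angles are arbitrary):

```python
import math

# |sin(x/2) + cos(x/2)| = sqrt(1 + sin x), because
# (sin(x/2) + cos(x/2))^2 = 1 + 2 sin(x/2)cos(x/2) = 1 + sin x
for x in [0.3, 1.0, 2.5, 4.0, 5.7]:
    assert math.isclose(abs(math.sin(x / 2) + math.cos(x / 2)),
                        math.sqrt(1 + math.sin(x)), rel_tol=1e-12)

# The X + Y + Z = pi identity:
# sin^2(X/2) + sin^2(Y/2) + sin^2(Z/2) = 1 - 2 sin(X/2) sin(Y/2) sin(Z/2)
X, Y = 0.7, 1.1
Z = math.pi - X - Y
lhs = sum(math.sin(t / 2) ** 2 for t in (X, Y, Z))
rhs = 1 - 2 * math.prod(math.sin(t / 2) for t in (X, Y, Z))
assert math.isclose(lhs, rhs, rel_tol=1e-12)
```

Checking a formula at a few angles like this is a quick way to catch a wrong sign before using it in an exam solution.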
https://www.r-bloggers.com/2017/03/data-science-for-doctors-inferential-statistics-exercises-part-3/
Data science enhances people's decision making. Doctors and researchers make critical decisions every day, so it is absolutely necessary for people around the medical field to have some basic knowledge of data science. This series aims to help people in and around the medical field to enhance their data science skills.

We will work with a health-related database, the famous "Pima Indians Diabetes Database". It was generously donated by Vincent Sigillito from Johns Hopkins University. Please find further information regarding the dataset here.

This is the sixth part of the series and it aims to cover, in part, the subject of inferential statistics. Researchers rarely have the capability of testing many patients, or of experimenting with a new treatment on many patients, so making inferences from a sample is a necessary skill to have. This is where inferential statistics comes into play. In more detail, in this part we will go through hypothesis testing for Student's t-distribution (Student's t-test), which may be the test you will need to apply most often, since in most cases the standard deviation σ of the population is not known. We will cover the one-sample t-test and the two-sample t-test (both with equal and unequal variances). 
If you are not aware of the mentioned distributions, please go here to acquire the necessary background.\n\nBefore proceeding, it might be helpful to look over the help pages for the `t.test`.\n\nPlease run the code below in order to load the data set and transform it into a proper data frame format:\n\n`url <- \"https://archive.ics.uci.edu/ml/machine-learning-databases/pima-indians-diabetes/pima-indians-diabetes.data\"`\n`data <- read.table(url, fileEncoding=\"UTF-8\", sep=\",\")`\n`names <- c('preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class')`\n`colnames(data) <- names`\n`data <- data[-which(data\\$mass ==0),]`\n\nAnswers to the exercises are available here.\n\nIf you obtained a different (correct) answer than those listed on the solutions page, please feel free to post your answer as a comment on that page.\n\nExercise 1\n\nSuppose that we take a sample of 25 candidates that tried a diet and they had an average weight of 29 (generate 25 normally distributed samples with mean 29 and standard deviation 4) after the experiment.\nFind the t-value.\n\nExercise 2\n\nFind the p-value.\n\nExercise 3\n\nFind the 95% confidence interval.\n\nExercise 4\n\nApply the t-test with the Null Hypothesis that the true mean of the sample is equal to the mean of the sample at the 5% significance level.\n\nExercise 5\n\nApply the t-test with the Null Hypothesis that the true mean of the sample is equal to the mean of the population and the alternative that the true mean is less than the mean of the population at the 5% significance level.\n\nExercise 6\n\nSuppose that we want to compare the current diet with another one. We assume that we test a different diet on a sample of 27 with a `mass` average of 31 (generate normally distributed samples with mean 31 and standard deviation of 5). 
Test whether the two diets are significantly different.\nNote that the two distributions have different variances.\nhint: This is a two-sample hypothesis test with different variances.\n\nExercise 7\n\nTest whether the first diet is more efficient than the second.\n\nExercise 8\n\nAssume that the second diet has the same variance as the first one. Is it significantly different?\n\nExercise 9\n\nAssume that the second diet has the same variance as the first one. Is it significantly better?\n\nExercise 10\n\nSuppose that you take a sample of 27 with average `mass` of 29, and after the diet the average `mass` is 28 (generate the samples with `rnorm(27,average,4)`). Are they significantly different?\nhint: Paired Sample T-Test." ]
[ null, "https://i1.wp.com/r-exercises.com/wp-content/uploads/2017/01/Selection_047-150x150.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9085864,"math_prob":0.8698231,"size":4097,"snap":"2020-45-2020-50","text_gpt3_token_len":907,"char_repetition_ratio":0.12020523,"word_repetition_ratio":0.13636364,"special_character_ratio":0.21772029,"punctuation_ratio":0.09873418,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9773737,"pos_list":[0,1,2],"im_url_duplicate_count":[null,8,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-23T22:38:19Z\",\"WARC-Record-ID\":\"<urn:uuid:16ea5950-5408-47d7-921a-d326f9916f2f>\",\"Content-Length\":\"83164\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7c6f2e2a-f973-481a-b8e8-0e132c6ca1f5>\",\"WARC-Concurrent-To\":\"<urn:uuid:344d6e3f-df3b-495d-b396-e091a855f8a8>\",\"WARC-IP-Address\":\"104.28.9.205\",\"WARC-Target-URI\":\"https://www.r-bloggers.com/2017/03/data-science-for-doctors-inferential-statistics-exercises-part-3/\",\"WARC-Payload-Digest\":\"sha1:TGEIPQ65MDQDBEHGAJ6UWLWCY3MEYEEZ\",\"WARC-Block-Digest\":\"sha1:ZFGIVWSDCYTUFUFJEFJTTUAS2LGZNOET\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107865665.7_warc_CC-MAIN-20201023204939-20201023234939-00559.warc.gz\"}"}
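The exercises above are meant to be solved with R's `t.test`; as a language-neutral companion, here is a sketch of the t statistic that Exercise 1 asks for, computed by hand in Python (the seed, the hypothesised mean `mu0 = 30`, and the helper name are our illustrative choices, not from the post):

```python
import math
import random
import statistics

def one_sample_t(sample, mu0):
    """One-sample t statistic: t = (xbar - mu0) / (s / sqrt(n)), df = n - 1."""
    n = len(sample)
    xbar = statistics.mean(sample)
    s = statistics.stdev(sample)  # sample standard deviation (n - 1 denominator)
    return (xbar - mu0) / (s / math.sqrt(n))

random.seed(1)
# Exercise 1's setup: 25 normally distributed weights with mean 29, sd 4
weights = [random.gauss(29, 4) for _ in range(25)]
t = one_sample_t(weights, 30)

# 95% CI for the mean; 2.064 is the two-sided t critical value for df = 24
xbar, s = statistics.mean(weights), statistics.stdev(weights)
half = 2.064 * s / math.sqrt(len(weights))
print(t, (xbar - half, xbar + half))
```

R's `t.test(weights, mu = 30)` reports the same t value, the p-value, and this confidence interval in one call.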
https://www.debugpointer.com/regex/regex-for-eu-vat-number
[ "", null, "", null, "Published on\n\n# Regex for EU VAT Number", null, "", null, "The euro is the official currency of the European Union (EU) and a number of other European countries. It is denoted by the symbol \"€\" and is divided into 100 smaller units called cents. In this article, let's understand how we can create a regex for an EU VAT number and how it can be matched.\n\nRegex (short for regular expression) is a powerful tool used for searching and manipulating text. It is composed of a sequence of characters that define a search pattern. Regex can be used to find patterns in large amounts of text, validate user input, and manipulate strings. It is widely used in programming languages, text editors, and command line tools.\n\n# Conditions to match EU VAT Number\n\n• VAT (Value-Added Tax) numbers in the European Union (EU) start with a two-letter country code\n• It is followed by the VAT number itself.\n• The VAT number can consist of digits and letters\n• The length of the VAT number varies by country\n• No other special characters are allowed\n\n# Regex for checking if EU VAT Number is valid or not\n\nRegular Expression-\n\n/^((AT)(U\\d{8})|(BE)(0\\d{9})|(BG)(\\d{9,10})|(CY)(\\d{8}[LX])|(CZ)(\\d{8,10})|(DE)(\\d{9})|(DK)(\\d{8})|(EE)(\\d{9})|(EL|GR)(\\d{9})|(ES)([\\dA-Z]\\d{7}[\\dA-Z])|(FI)(\\d{8})|(FR)([\\dA-Z]{2}\\d{9})|(HU)(\\d{8})|(IE)(\\d{7}[A-Z]{2})|(IT)(\\d{11})|(LT)(\\d{9}|\\d{12})|(LU)(\\d{8})|(LV)(\\d{11})|(MT)(\\d{8})|(NL)(\\d{9}(B\\d{2}|BO2))|(PL)(\\d{10})|(PT)(\\d{9})|(RO)(\\d{2,10})|(SE)(\\d{12})|(SI)(\\d{8})|(SK)(\\d{10}))$/igm\n\nTest string examples for the above regex-\n\nInput String → Match Output\nYTFGUI&^% → does not match\nFR12345678901 → matches\nIN12344532342 → does not match\nDK12345678 → matches\n\nMore VAT values that match- ATU12143178 BE0121431789 BG121431789 BG1214317890 CY12143178X CY12143178L CZ12143178 CZ121431789 CZ1214317890 DE121431789 DK12143178 EE121431789 EL121431789 GR121431789 ESX12143178 ES12143178X ESX1214317X 
FI12143178 FR12143178901 FRX1214317890 FR1X121431789 FRXX121431789 HU12143178 IE1214317WA IE1214317FA IT12143178901 LT121431789 LT121431789012 LU12143178 LV12143178901 MT12143178 NL121431789B01 NL121431789BO2 PL1214317890 PT121431789 RO1214317890 SE121431789012 SK1214317890 Here is a detailed explanation of the above regex- /^((AT)(U\\d{8})|(BE)(0\\d{9})|(BG)(\\d{9,10})|(CY)(\\d{8}[LX])|(CZ)(\\d{8,10})|(DE)(\\d{9})|(DK)(\\d{8})|(EE)(\\d{9})|(EL|GR)(\\d{9})|(ES)([\\dA-Z]\\d{7}[\\dA-Z])|(FI)(\\d{8})|(FR)([\\dA-Z]{2}\\d{9})|(HU)(\\d{8})|(IE)(\\d{7}[A-Z]{2})|(IT)(\\d{11})|(LT)(\\d{9}|\\d{12})|(LU)(\\d{8})|(LV)(\\d{11})|(MT)(\\d{8})|(NL)(\\d{9}(B\\d{2}|BO2))|(PL)(\\d{10})|(PT)(\\d{9})|(RO)(\\d{2,10})|(SE)(\\d{12})|(SI)(\\d{8})|(SK)(\\d{10}))$/igm\n\n^ asserts position at start of a line\n1st Capturing Group ((AT)(U\\d{8})|(BE)(0\\d{9})|(BG)(\\d{9,10})|(CY)(\\d{8}[LX])|(CZ)(\\d{8,10})|(DE)(\\d{9})|(DK)(\\d{8})|(EE)(\\d{9})|(EL|GR)(\\d{9})|(ES)([\\dA-Z]\\d{7}[\\dA-Z])|(FI)(\\d{8})|(FR)([\\dA-Z]{2}\\d{9})|(HU)(\\d{8})|(IE)(\\d{7}[A-Z]{2})|(IT)(\\d{11})|(LT)(\\d{9}|\\d{12})|(LU)(\\d{8})|(LV)(\\d{11})|(MT)(\\d{8})|(NL)(\\d{9}(B\\d{2}|BO2))|(PL)(\\d{10})|(PT)(\\d{9})|(RO)(\\d{2,10})|(SE)(\\d{12})|(SI)(\\d{8})|(SK)(\\d{10}))\n1st Alternative (AT)(U\\d{8})\n2nd Capturing Group (AT)\n3rd Capturing Group (U\\d{8})\n2nd Alternative (BE)(0\\d{9})\n4th Capturing Group (BE)\n5th Capturing Group (0\\d{9})\n3rd Alternative (BG)(\\d{9,10})\n6th Capturing Group (BG)\n7th Capturing Group (\\d{9,10})\n4th Alternative (CY)(\\d{8}[LX])\n8th Capturing Group (CY)\n9th Capturing Group (\\d{8}[LX])\n5th Alternative (CZ)(\\d{8,10})\n10th Capturing Group (CZ)\n11th Capturing Group (\\d{8,10})\n6th Alternative (DE)(\\d{9})\n12th Capturing Group (DE)\n13th Capturing Group (\\d{9})\n7th Alternative (DK)(\\d{8})\n14th Capturing Group (DK)\n15th Capturing Group (\\d{8})\n8th Alternative (EE)(\\d{9})\n16th Capturing Group (EE)\n17th Capturing Group (\\d{9})\n9th Alternative (EL|GR)(\\d{9})\n18th 
Capturing Group (EL|GR)\n19th Capturing Group (\\d{9})\n10th Alternative (ES)([\\dA-Z]\\d{7}[\\dA-Z])\n20th Capturing Group (ES)\n21st Capturing Group ([\\dA-Z]\\d{7}[\\dA-Z])\n11th Alternative (FI)(\\d{8})\n22nd Capturing Group (FI)\n23rd Capturing Group (\\d{8})\n12th Alternative (FR)([\\dA-Z]{2}\\d{9})\n24th Capturing Group (FR)\n25th Capturing Group ([\\dA-Z]{2}\\d{9})\n13th Alternative (HU)(\\d{8})\n26th Capturing Group (HU)\n27th Capturing Group (\\d{8})\n14th Alternative (IE)(\\d{7}[A-Z]{2})\n28th Capturing Group (IE)\n29th Capturing Group (\\d{7}[A-Z]{2})\n15th Alternative (IT)(\\d{11})\n30th Capturing Group (IT)\n31st Capturing Group (\\d{11})\n16th Alternative (LT)(\\d{9}|\\d{12})\n32nd Capturing Group (LT)\n33rd Capturing Group (\\d{9}|\\d{12})\n17th Alternative (LU)(\\d{8})\n34th Capturing Group (LU)\n35th Capturing Group (\\d{8})\n18th Alternative (LV)(\\d{11})\n36th Capturing Group (LV)\n37th Capturing Group (\\d{11})\n19th Alternative (MT)(\\d{8})\n38th Capturing Group (MT)\n39th Capturing Group (\\d{8})\n20th Alternative (NL)(\\d{9}(B\\d{2}|BO2))\n40th Capturing Group (NL)\n41st Capturing Group (\\d{9}(B\\d{2}|BO2))\n21st Alternative (PL)(\\d{10})\n43rd Capturing Group (PL)\n44th Capturing Group (\\d{10})\n22nd Alternative (PT)(\\d{9})\n45th Capturing Group (PT)\n46th Capturing Group (\\d{9})\n23rd Alternative (RO)(\\d{2,10})\n47th Capturing Group (RO)\n48th Capturing Group (\\d{2,10})\n24th Alternative (SE)(\\d{12})\n49th Capturing Group (SE)\n50th Capturing Group (\\d{12})\n25th Alternative (SI)(\\d{8})\n26th Alternative (SK)(\\d{10})\n$asserts position at the end of a line Global pattern flags i modifier: insensitive. Case insensitive match (ignores case of [a-zA-Z]) g modifier: global. All matches (don't return after first match) m modifier: multi line. Causes ^ and$ to match the begin/end of each line (not only begin/end of string)\n\n\nHope this article was useful to check if EU VAT Number is valid or not. 
If you have any doubts or suggestions, feel free to comment below." ]
[ null, "data:image/svg+xml,%3csvg xmlns=%27http://www.w3.org/2000/svg%27 version=%271.1%27 width=%27250%27 height=%2770%27/%3e", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null, "data:image/svg+xml,%3csvg xmlns=%27http://www.w3.org/2000/svg%27 version=%271.1%27 width=%27250%27 height=%27250%27/%3e", null, "data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8108135,"math_prob":0.9785898,"size":5726,"snap":"2022-40-2023-06","text_gpt3_token_len":2233,"char_repetition_ratio":0.29779798,"word_repetition_ratio":0.00952381,"special_character_ratio":0.45564094,"punctuation_ratio":0.03305785,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9687689,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-03T19:32:30Z\",\"WARC-Record-ID\":\"<urn:uuid:baa1f370-11ae-477a-a629-6e7e1e18d844>\",\"Content-Length\":\"47423\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b6e97b84-bcd4-431c-8581-f1f6fb15653c>\",\"WARC-Concurrent-To\":\"<urn:uuid:97ffe0ba-0ea6-41e9-9c11-821afc345b08>\",\"WARC-IP-Address\":\"76.76.21.93\",\"WARC-Target-URI\":\"https://www.debugpointer.com/regex/regex-for-eu-vat-number\",\"WARC-Payload-Digest\":\"sha1:DIQIFIXTILXWXDUFN6VOCTRZN5GWZ4H6\",\"WARC-Block-Digest\":\"sha1:HCSMG5QSWPDEPTJ2DCUQKX7YZOSCA2ZU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500074.73_warc_CC-MAIN-20230203185547-20230203215547-00317.warc.gz\"}"}
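The JavaScript-style `/…/igm` pattern above ports directly to Python's `re` module. Here is a sketch using an abridged subset of the article's country alternatives (the function name is ours, and only a handful of countries are included for brevity):

```python
import re

# Abridged version of the article's pattern, with a non-capturing outer group
EU_VAT = re.compile(
    r"^(?:"
    r"(AT)(U\d{8})|(BE)(0\d{9})|(DE)(\d{9})|(DK)(\d{8})|"
    r"(FR)([\dA-Z]{2}\d{9})|(IT)(\d{11})|(NL)(\d{9}B\d{2})"
    r")$",
    re.IGNORECASE,  # the original's `i` flag
)

def is_valid_vat(s: str) -> bool:
    return EU_VAT.fullmatch(s) is not None

print(is_valid_vat("DK12345678"))     # True
print(is_valid_vat("FR12345678901"))  # True
print(is_valid_vat("IN12344532342"))  # False: IN is not an EU prefix
```

`re.fullmatch` anchors the match at both ends, which is why the `m` (multiline) flag from the blog's version is unnecessary when validating a single string.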
https://edu.gcfglobal.org/en/excelformulas/complex-formulas/1/
[ "", null, "search menu", null, "# Excel Formulas: Complex Formulas\n\n#### Lesson 3: Complex Formulas\n\n/en/excelformulas/simple-formulas/content/\n\n### Introduction\n\nA simple formula is a mathematical expression with one operator, such as 7+9. A complex formula has more than one mathematical operator, such as 5+2*8. When there is more than one operation in a formula, the order of operations tells your spreadsheet which operation to calculate first. In order to use complex formulas, you will need to understand the order of operations.\n\n#### The order of operations\n\nAll spreadsheet programs calculate formulas based on the following order of operations:\n\n1. Operations enclosed in parentheses\n2. Exponential calculations (3^2, for example)\n3. Multiplication and division, whichever comes first\n4. Addition and subtraction, whichever comes first\n\nA mnemonic that can help you remember the order is PEMDAS, or Please Excuse My Dear Aunt Sally.\n\nClick the arrows in the slideshow below to learn more about how the order of operations is used to calculate complex formulas.\n\n•", null, "While this formula may look really complicated, we can use the order of operations step by step to find the right answer.\n\n•", null, "First, we'll start by calculating anything inside the parentheses. In this case, there's only one thing we need to calculate: 6-3=3.\n\n•", null, "As you can see, the formula already looks a bit simpler. Next, we'll look to see if there are any exponents. There's one: 2^2=4.\n\n•", null, "Next, we'll solve any multiplication and division, working from left to right. Because the division operation comes before the multiplication, it is calculated first: 3/4=0.75.\n\n•", null, "Now, we'll calculate our remaining multiplication operation: 0.75*4=3.\n\n•", null, "Next, we'll calculate any addition or subtraction, again working from left to right. 
Addition comes first: 10+3=13.\n\n•", null, "Finally, we have one remaining subtraction operation: 13-1=12.\n\n•", null, "And now we have our answer: 12. This is the exact same result you would get if you entered the formula into a spreadsheet.\n\n•", null, "Now let's look at a couple of examples that show how the order of operations can affect the result.\n\nUsing parentheses within a formula can be very important. Because of the order of operations, it can completely change an answer. Let's try the same problem from above, but this time we'll add parentheses to the last part.\n\n#### Creating complex formulas\n\nIn the example below, we'll demonstrate a complex formula using the order of operations. Here, we want to calculate the cost of sales tax for a catering invoice. To do this, we'll write our formula as =(D2+D3)*0.075 in cell D4. This formula will add the prices of our items together and then multiply that value by the 7.5% tax rate (which is written as 0.075) to calculate the cost of sales tax.", null, "The spreadsheet then follows the order of operations and first adds the values inside the parentheses: (44.85+39.90) = \\$84.75. Then it multiplies that value by the tax rate: \\$84.75*0.075. The result will show that the sales tax is \\$6.36.", null, "It is especially important to enter complex formulas with the correct order of operations. Otherwise, the spreadsheet will not calculate the results accurately. In our example, if the parentheses are not included, the multiplication is calculated first and the result is incorrect. Parentheses are the best way to define which calculations will be performed first in a formula.", null, "#### To create a complex formula using the order of operations:\n\nIn our example below, we will use cell references along with numerical values to create a complex formula that will calculate the total cost for a catering invoice. The formula will calculate the cost for each menu item and add those values together.\n\n1. 
Select the cell that will contain the formula. In our example, we'll select cell C4.", null, "2. Enter your formula. In our example, we'll type =B2*C2+B3*C3. This formula will follow the order of operations, first performing multiplication: 2.29*20 = 45.80 and 3.49*35 = 122.15. Then it will add those values together to calculate the total: 45.80+122.15.", null, "3. Double-check your formula for accuracy, then press Enter on your keyboard. The formula will calculate and display the result. In our example, the result shows that the total cost for the order is \\$167.95.", null, "You can add parentheses to any equation to make it easier to read. While it won't change the result of the formula in this example, we could enclose the multiplication operations within parentheses to clarify that they will be calculated before the addition.", null, "Your spreadsheet will not always tell you if your formula contains an error, so it's up to you to check all of your formulas. To learn how to do this, check out the Double-Check Your Formulas lesson.\n\n### Challenge!\n\n1. Open an existing Excel workbook. If you want, you can use the example file for this lesson.\n2. Create a complex formula that will perform addition before multiplication. If you are using the example, create a formula in cell D6 that first adds the values of cells D3, D4, and D5 and then multiplies their total by 0.075. Hint: You'll need to think about the order of operations for this to work correctly.\n\n/en/excelformulas/relative-and-absolute-cell-references/content/" ]
[ null, "https://media.gcflearnfree.org/global/gcfglobal-color.png", null, "https://media.gcflearnfree.org/assets/edu-gcfglobal-site/images/goodwill-image.png", null, "https://media.gcflearnfree.org/ctassets/topics/234/complex_order_final_0.png", null, "https://media.gcflearnfree.org/ctassets/topics/234/complex_order_final_1.png", null, "https://media.gcflearnfree.org/ctassets/topics/234/complex_order_final_2.png", null, "https://media.gcflearnfree.org/ctassets/topics/234/complex_order_final_3.png", null, "https://media.gcflearnfree.org/ctassets/topics/234/complex_order_final_4.png", null, "https://media.gcflearnfree.org/ctassets/topics/234/complex_order_final_5.png", null, "https://media.gcflearnfree.org/ctassets/topics/234/complex_order_final_6.png", null, "https://media.gcflearnfree.org/ctassets/topics/234/complex_order_final_7.png", null, "https://media.gcflearnfree.org/ctassets/topics/234/complex_order_9.png", null, "https://media.gcflearnfree.org/ctassets/topics/234/complex_example_new_formula.png", null, "https://media.gcflearnfree.org/ctassets/topics/234/complex_example_new_done.png", null, "https://media.gcflearnfree.org/ctassets/topics/234/complex_example_new_splat2.png", null, "https://media.gcflearnfree.org/ctassets/topics/234/complex_create2_select.png", null, "https://media.gcflearnfree.org/ctassets/topics/234/complex_create2_formula.png", null, "https://media.gcflearnfree.org/ctassets/topics/234/complex_create2_done.png", null, "https://media.gcflearnfree.org/ctassets/topics/234/complex_create_splat.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9052736,"math_prob":0.97418964,"size":2951,"snap":"2021-31-2021-39","text_gpt3_token_len":622,"char_repetition_ratio":0.16796742,"word_repetition_ratio":0.016701462,"special_character_ratio":0.21145375,"punctuation_ratio":0.103690684,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99972063,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-24T13:12:05Z\",\"WARC-Record-ID\":\"<urn:uuid:96dd898d-4ff5-4078-a5ce-df51a8ed861c>\",\"Content-Length\":\"29201\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:11020785-9cbb-4ce2-aca4-23317a0a07f7>\",\"WARC-Concurrent-To\":\"<urn:uuid:c3ba7469-4a9c-4811-90a4-9979408e89f0>\",\"WARC-IP-Address\":\"104.22.52.248\",\"WARC-Target-URI\":\"https://edu.gcfglobal.org/en/excelformulas/complex-formulas/1/\",\"WARC-Payload-Digest\":\"sha1:SEJVBX4PFMVZJRF2RMQFT5YUFGRTA5C7\",\"WARC-Block-Digest\":\"sha1:YYDESIJUIHVFKJQU74RVAKUC5QWCGOBW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057524.58_warc_CC-MAIN-20210924110455-20210924140455-00486.warc.gz\"}"}
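The lesson's spreadsheet formulas can be checked in any language that follows the same order of operations; here is a small Python sketch in which the cell names become plain variables:

```python
# Values from the lesson's catering-invoice example
d2, d3 = 44.85, 39.90
tax = (d2 + d3) * 0.075      # =(D2+D3)*0.075 -> parentheses evaluated first
print(round(tax, 2))         # 6.36

no_parens = d2 + d3 * 0.075  # without parentheses, multiplication comes first
print(round(no_parens, 2))   # a different (and incorrect) total

b2, c2, b3, c3 = 2.29, 20, 3.49, 35
total = b2 * c2 + b3 * c3    # =B2*C2+B3*C3: both products, then the sum
print(round(total, 2))       # 167.95

# The slideshow's PEMDAS walk-through, written as one expression
pemdas = 10 + (6 - 3) / 2**2 * 4 - 1
print(pemdas)                # 12.0
```

As in the lesson, the parentheses in the tax formula are what force the addition to happen before the multiplication.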
https://search.r-project.org/CRAN/refmans/DescTools/html/CutQ.html
[ "CutQ {DescTools} R Documentation\n\n## Create a Factor Variable Using the Quantiles of a Continuous Variable\n\n### Description\n\nCreate a factor variable using the quantiles of a continuous variable.\n\n### Usage\n\nCutQ(x, breaks = quantile(x, seq(0, 1, by = 0.25), na.rm = TRUE),\nlabels = NULL, na.rm = FALSE, ...)\n\n\n### Arguments\n\n x continuous variable. breaks the breaks for creating groups. By default the quartiles will be used, i.e. the quantiles seq(0, 1, by = 0.25). See quantile for details. If breaks is given as a single integer it is interpreted as the intended number of groups, e.g. breaks=10 will return x cut in deciles. labels labels for the levels of the resulting category. By default, labels are defined as Q1, Q2, ... up to the length of breaks - 1. The parameter is passed to cut, so if labels are set to FALSE, simple integer codes are returned instead of a factor. na.rm Boolean indicating whether missing values should be removed when computing quantiles. Defaults to FALSE. ... Optional arguments passed to cut.\n\n### Details\n\nThis function uses quantile to obtain the specified quantiles of x, then calls cut to create a factor variable using the intervals specified by these quantiles.\n\nIt properly handles cases where more than one quantile obtains the same value, as in the third example below. Note that in this case, there will be fewer generated factor levels than the specified number of quantile intervals.\n\n### Value\n\nFactor variable with one level for each quantile interval given by breaks.\n\n### Author(s)\n\nGregory R. 
Warnes <greg@warnes.net>, with some slight modifications by Andri Signorell <andri@signorell.net>\n\n### See Also\n\ncut, quantile\n\n### Examples\n\n# create example data\n\nx <- rnorm(1000)\n\n# cut into quartiles\nquartiles <- CutQ(x)\ntable(quartiles)\n\n# cut into deciles\ndeciles <- CutQ(x, breaks=10, labels=NULL)\ntable(deciles)\n\n# show handling of 'tied' quantiles.\nx <- round(x) # discretize to create ties\nstem(x) # display the ties\ndeciles <- CutQ(x, breaks=10)\n\ntable(deciles) # note that there are only 5 groups (not 10)\n# due to duplicates\n\n\n[Package DescTools version 0.99.51 Index]" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.60527194,"math_prob":0.9489513,"size":1513,"snap":"2023-40-2023-50","text_gpt3_token_len":389,"char_repetition_ratio":0.121935055,"word_repetition_ratio":0.008298756,"special_character_ratio":0.24785195,"punctuation_ratio":0.1469534,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9924282,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-11-28T09:30:22Z\",\"WARC-Record-ID\":\"<urn:uuid:78f9a459-8bda-47a7-9959-c67c656d26fc>\",\"Content-Length\":\"4650\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c5dbd6f5-fa1f-47b3-aa4f-c1a0f7681aa3>\",\"WARC-Concurrent-To\":\"<urn:uuid:95a44a8b-57b3-48fb-80b9-33df7e21ad1b>\",\"WARC-IP-Address\":\"137.208.57.46\",\"WARC-Target-URI\":\"https://search.r-project.org/CRAN/refmans/DescTools/html/CutQ.html\",\"WARC-Payload-Digest\":\"sha1:NDRQINTY6S7DTUQSQG4EIJ4GCLEK7B3L\",\"WARC-Block-Digest\":\"sha1:QMSAGBTYWLPHCZLRHT2TXMPT2E36G74G\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679099281.67_warc_CC-MAIN-20231128083443-20231128113443-00672.warc.gz\"}"}
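For readers outside R, a rough Python analogue of CutQ's behaviour (our own helper, not part of DescTools; `statistics.quantiles` with `method="inclusive"` approximates R's default quantile type, and tied break points are collapsed as the help page describes, although the Q-labels then skip ranks rather than being renumbered):

```python
import bisect
import statistics

def cut_q(values, n_groups=4):
    """Return a 'Q1'..'Qk' label for each value, cutting at sample quantiles."""
    qs = statistics.quantiles(values, n=n_groups, method="inclusive")
    breaks = sorted(set(qs))  # drop tied quantiles, like CutQ does
    # bisect_left puts a value equal to a break into the lower (right-closed) group
    return ["Q%d" % (bisect.bisect_left(breaks, v) + 1) for v in values]

data = [1, 2, 3, 4, 5, 6, 7, 8]
print(cut_q(data))  # two values per quartile group
```

With heavily tied data (e.g. many repeated values), `breaks` shrinks and fewer groups come back, mirroring the "only 5 groups (not 10)" note in the R example.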
https://www.arxiv-vanity.com/papers/hep-th/9301082/
[ "SU-ITP-93-1\n\nhep-th/9301082\n\nJanuary 1993\n\nEvolution of Pure States into Mixed States\n\nJun Liu 111E-mail address: Physics Department, Stanford University, Stanford CA 94305\n\nABSTRACT\n\nIn the formulation of Banks, Peskin and Susskind, we show that one can construct evolution equations for the quantum mechanical density matrix with operators which do not commute with the Hamiltonian which evolve pure states into mixed states, preserve the normalization and positivity of ρ and conserve energy. Furthermore, it seems to be different from a quantum mechanical system with random sources.\n\n## 1 Introduction\n\nIt is difficult to construct a consistent theory that combines quantum mechanics and general relativity. This has led to considerations that generalize either quantum mechanics (QM) or general relativity or both. In QM, a state is characterized by the ray of a vector in a Hilbert space. The Schroedinger equation, which describes the time evolution of a state, is an equation for the vector which characterizes the state:\n\n ˙ψ=−iHψ.\n\nThe rays of vectors in a Hilbert space are in 1-1 correspondence with the positive hermitian matrices which can be diagonalized by a unitary matrix into the following form\n\n ρ=diag.(1,0,0,...).\n\nIt is obvious that there are many hermitian and positive matrices which do not belong to this class. One possible generalization of QM is to characterize the states by positive definite matrices (density matrices) instead of vectors in the Hilbert space. In this case, the subset of states which have the above correspondence to vectors (i.e., all the states of QM) are called pure states and the others mixed states. The Schroedinger equation can also be written as an equation for the density matrix:\n\n ˙ρ=−i[H,ρ].\n\nIt evolves pure states into pure states. 
In his study of black-hole physics, Hawking proposed a generalization of QM which allows evolution of pure states into mixed states.\n\nHawking’s idea was closely scrutinized by Banks, Peskin and Susskind (BPS) (see also [3, 4]). They studied the possible evolution equations of ρ which preserve the normalization, linearity and hermiticity. They found the most general equation\n\n ˙ρ=−i[H,ρ]−1/2∑hαβ[QβQαρ+ρQβQα−2QαρQβ]. (1)\n\nThe condition that ρ remains positive in a general n-dimensional space is not straightforward. BPS showed that a positive hαβ is a sufficient condition. They further showed that for real, symmetric, positive hαβ, the above evolution equation describes a QM system with random sources:\n\n H=H0+∑jα(t)Qα.\n\nThey then argued that for a QM system with random sources it is impossible to require energy-momentum conservation and locality at the same time. This severely restricts the generalization of QM along this direction.\n\nRecently Srednicki investigated the constraint on the evolution equation due to Lorentz invariance. He concluded that it is difficult to have a Lorentz covariant evolution equation if only Qα’s which commute with H are used. This led him to construct evolution equations with Qα’s which do not commute with H. He gave one such example. But in his example, a positive density matrix may develop into an indefinite one.\n\nIn these notes, we will study in detail the case of 2 dimensions, where the sufficient and necessary condition for positivity is straightforward. We will give an evolution equation using Qα’s which do not commute with H. Our example is an improvement of Srednicki’s because in our example a positive density matrix will remain positive under the evolution. Furthermore, despite some similarities, an evolution equation with an indefinite hαβ cannot describe a QM system with a random source, because such a system requires the Qα’s to commute with H if energy is conserved. 
This result was first pointed out by BPS, but the proof was not given there, and so we will give a proof in section 3.\n\n## 2 2 dimensions\n\nIn 2 dimensions, we can choose the identity matrix and the Pauli matrices as the basis for hermitian matrices. We can write a normalized density matrix (trρ=1) as\n\n ρ=1/2(I+→ρ⋅→σ).\n\nSince we can always transform →ρ by a rotation, it is obvious that ρ≥0 is equivalent to (→ρ)2≤1. Now the master equation for ρ can be written as an equation for →ρ:\n\n ˙→ρ=−(S+A)→ρ+→β, (2)\n\nwhere S is a symmetric matrix,\n\n S=( 2(h22+h33) −h12−h21 −h13−h31 ; −h12−h21 2(h11+h33) −h23−h32 ; −h13−h31 −h23−h32 2(h11+h22) ), (3)\n\n→β is a vector,\n\n →β=−i( h23−h32 ; h13−h31 ; h12−h21 ), (4)\n\nand A is an antisymmetric matrix which comes from the first term in the BPS equation, −i[H,ρ]. This leads to\n\n ddt(→ρ)2=2→ρ⋅˙→ρ=−2(→ρ)T(S+A)→ρ+2→β⋅→ρ=−2(→ρ)TS→ρ+2→β⋅→ρ.\n\nFrom the above equations, we see that S is diagonalized and →β=0 when h is diagonalized. Thus it follows that the necessary and sufficient condition for all ρ to remain positive is that\n\n S≥0.\n\nOne can readily see that S≥0 when h≥0, which is the sufficient condition found by BPS. But there are certainly indefinite h which lead to S≥0. This is true even after we impose the energy conservation condition, as we will show now.\n\nWithout loss of generality, we can take H∝σ3, and in this case energy conservation reduces to ˙ρ3=0. This is equivalent to\n\n h13=−h31, h23=−h32, h11+h22=0.\n\nWe can now construct examples using indefinite h that conserve energy and preserve the positivity of density matrices. For example, we can make a choice of h which is indefinite. This gives the following equation in terms of fermionic creation and annihilation operators\n\n ˙ρ=[−i[H0,ρ]−2g(b+ρb++bρb)]+2g(b+bρ+ρb+b−2b+bρb+b).\n\nWithout the second term, 2g(b+bρ+ρb+b−2b+bρb+b), it is the example of Srednicki. It fails to preserve the positivity. 
With the second term, the positivity is preserved.\n\nWhen h is non-negative, the system is just a QM system with a random source, with the following Hamiltonian,\n\n HT=H+∑αjα(t)Qα,\n\nwhere jα(t) is a random source. This was shown by BPS and it provides a physical intuition for working with this system. Since h is indefinite here, one would naturally ask if this can be realized as a QM system with a random source which has a non-hermitian Hamiltonian:\n\n HT=H+ij(t)Q.\n\nSince HT is not hermitian, the evolution of the total system cannot be unitary, and we will lose either the hermiticity or the normalization of ρT.\n\nIf we choose to preserve the hermiticity, then we can use the following evolution equation\n\n ˙ρT=HTρT+ρTHT.\n\nThis does not lead to the BPS equation. We can also choose to preserve the probability, using the following equation\n\n ˙ρT=−i[HT,ρT].\n\nThis evolution equation does not preserve the hermiticity of ρT, but after the ensemble average, the hermiticity of the ensemble-averaged density matrix is recovered and it leads to the BPS equation. However, despite this formal similarity, the evolution equation with indefinite h seems to be genuinely different from the one with non-negative h, as we will show in the next section.\n\n## 3 n dimensions\n\nWe consider the case of n dimensions in this section. We will prove that energy conservation requires all the Qα’s to commute with H if h is real, symmetric and positive. This was first pointed out by BPS, but the proof was not given there. Here h is a vector and the most general H is\n\n H=∑ahaQa0,\n\nwhere the Qa0 belong to the Cartan subalgebra. In other words, H is diagonal and the Qa0’s are a basis of hermitian diagonal matrices. Since H can be expressed in terms of the Qa0 with arbitrary coefficients ha, energy conservation is equivalent to\n\n Tr(Qa0˙ρ(t))=0,\n\nfor all the Qa0 belonging to the Cartan subalgebra. We will use a (instead of α) as the index to denote these matrices. The above equation is equivalent to\n\n ˙ρa(t)=0.\n\nThis must hold for all a and for arbitrary ρ. 
Now let us assume that h is real, symmetric and non-negative. Then we can diagonalize h:\n\n h=diag.(h1,...,hn2−1),\n\nwhere hγ≥0. We have\n\n ˙ρ=−1/2∑γhγ[Qγ,[Qγ,ρ]].\n\nSo\n\n ˙ρa=−1/2∑αβγhγfγβafγβαρα,\n\nwhere the fγβa’s are structure coefficients. That is\n\n Saa=1/2∑γ,βhγf2γβa.\n\nFor every Qγ outside the Cartan subalgebra there exist β and a such that fγβa≠0. Since hγ≥0, we then have hγ=0 for every such Qγ. That is, energy conservation requires the Qγ’s to commute with H.\n\nCombined with the example of section 2, this result indicates that the evolution equation with indefinite h is truly different from the ones with positive h. In the latter case, energy conservation requires the Qα’s to be conserved charges, which is a very strong constraint. With indefinite h, we can evade this constraint, as in the example of section 2.\n\n## 4 Conclusion\n\nIt seems that there is a very interesting class of evolution equations which cannot be realized as a quantum mechanical system with random sources. The condition for energy conservation on this class of evolution equations is not as stringent; for example, one can choose Qα’s which do not commute with the Hamiltonian H.\n\nSrednicki has argued that in quantum field theory it is difficult to construct a Lorentz covariant evolution equation with only Qα’s which commute with H. He pointed out that one possible way out is to use Qα’s which do not commute with H. In this case, one must make sure that the energy is conserved. He gave one such example, but the evolution equation leads to a negative ρ. Our example has no such flaw and is an improvement of Srednicki’s result.\n\nGeneralization to field theory examples is under investigation. Hopefully, the requirement of momentum conservation and locality will also become less stringent.\n\n## Acknowledgements\n\nThe author is grateful to L. Susskind for insightful discussions and important suggestions and for comments on the manuscript. He wishes to thank L. Thorlacius for useful discussions and the World Laboratory for a world lab scholarship. He also would like to thank G. 
Lindblad for pointing out mistakes in the earlier version of the paper." ]
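To make the distinction above concrete, here is a minimal numerical sketch of the BPS-type equation ρ̇ = −i[H, ρ] − 1/2 ∑_γ h_γ [Q_γ, [Q_γ, ρ]] in two dimensions. It is not from the paper: NumPy is assumed available, and the function names `comm` and `bps_step` are ours. It illustrates the point of section 3: with a positive h and a Q that does not commute with H, the evolution preserves the trace and hermiticity of ρ, but the energy Tr(Hρ) drifts.

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def comm(a, b):
    return a @ b - b @ a

def bps_step(rho, H, Qs, h, dt):
    """One Euler step of rho_dot = -i[H,rho] - 1/2 sum_g h_g [Q_g,[Q_g,rho]]."""
    drho = -1j * comm(H, rho)
    for hg, Q in zip(h, Qs):
        drho -= 0.5 * hg * comm(Q, comm(Q, rho))
    return rho + dt * drho

H = sz
Qs = [sx]        # Q does not commute with H
h = [0.3]        # positive coefficient
rho = np.array([[1, 0], [0, 0]], dtype=complex)  # pure state |0><0|

e0 = np.trace(H @ rho).real
for _ in range(2000):
    rho = bps_step(rho, H, Qs, h, dt=1e-3)

print(abs(np.trace(rho) - 1) < 1e-6)           # trace is preserved
print(np.allclose(rho, rho.conj().T))          # hermiticity is preserved
print(abs(np.trace(H @ rho).real - e0) > 0.1)  # energy is NOT conserved
```

The energy drift is expected here precisely because [Q, H] ≠ 0, matching the conclusion that positive h plus energy conservation forces the Q's to be conserved charges.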
https://databot.online/topic/5-3-solve-systems-of-equations-by-elimination/
[ "# 5.3 Solve Systems of Equations by Elimination\n\n### Learning Objectives\n\nBy the end of this section, you will be able to:\n\n• Solve a system of equations by elimination\n• Solve applications of systems of equations by elimination\n• Choose the most convenient method to solve a system of linear equations\n\nWe have solved systems of linear equations by graphing and by substitution. Graphing works well when the variable coefficients are small and the solution has integer values. Substitution works well when we can easily solve one equation for one of the variables and not have too many fractions in the resulting expression.\n\nThe third method of solving systems of linear equations is called the Elimination Method. When we solved a system by substitution, we started with two equations and two variables and reduced it to one equation with one variable. This is what we’ll do with the elimination method, too, but we’ll have a different way to get there.\n\n### Solve a System of Equations by Elimination\n\nThe Elimination Method is based on the Addition Property of Equality. The Addition Property of Equality says that when you add the same quantity to both sides of an equation, you still have equality. We will extend the Addition Property of Equality to say that when you add equal quantities to both sides of an equation, the results are equal.\n\nFor any expressions a, b, c, and d,\n\n if a = b and c = d, then a + c = b + d.\n\nTo solve a system of equations by elimination, we start with both equations in standard form. Then we decide which variable will be easiest to eliminate. How do we decide? 
We want to have the coefficients of one variable be opposites, so that we can add the equations together and eliminate that variable.\n\nNotice how that works when we add these two equations together:\n\nThe y’s add to zero and we have one equation with one variable.\n\nLet’s try another one:\n\nThis time we don’t see a variable that can be immediately eliminated if we add the equations.\n\nBut if we multiply the first equation by −2, we will make the coefficients of x opposites. We must multiply every term on both sides of the equation by −2.\n\nNow we see that the coefficients of the x terms are opposites, so x will be eliminated when we add these two equations.\n\nAdd the equations yourself—the result should be −3y = −6. And that looks easy to solve, doesn’t it? Here is what it would look like.\n\nWe’ll do one more:\n\nIt doesn’t appear that we can get the coefficients of one variable to be opposites by multiplying one of the equations by a constant, unless we use fractions. So instead, we’ll have to multiply both equations by a constant.\n\nWe can make the coefficients of x be opposites if we multiply the first equation by 3 and the second by −4, so we get 12x and −12x.\n\nThis gives us two new equations; when we add them, the x’s are eliminated and we just have −29y = 58.\n\nOnce we get an equation with just one variable, we solve it. Then we substitute that value into one of the original equations to solve for the remaining variable. And, as always, we check our answer to make sure it is a solution to both of the original equations.\n\nNow we’ll see how to use elimination to solve the same system of equations we solved by graphing and by substitution.\n\nThe steps are listed below for easy reference.\n\n### How To\n\n#### How to solve a system of equations by elimination.\n\n• Step 1. Write both equations in standard form. If any coefficients are fractions, clear them.\n• Step 2. 
Make the coefficients of one variable opposites.\n• Decide which variable you will eliminate.\n• Multiply one or both equations so that the coefficients of that variable are opposites.\n• Step 3. Add the equations resulting from Step 2 to eliminate one variable.\n• Step 4. Solve for the remaining variable.\n• Step 5. Substitute the solution from Step 4 into one of the original equations. Then solve for the other variable.\n• Step 6. Write the solution as an ordered pair.\n• Step 7. Check that the ordered pair is a solution to both original equations.\n\nFirst we’ll do an example where we can eliminate one variable right away.\n\nIn Example 5.27, we will be able to make the coefficients of one variable opposites by multiplying one equation by a constant.\n\nNow we’ll do an example where we need to multiply both equations by constants in order to make the coefficients of one variable opposites.\n\nWhen the system of equations contains fractions, we will first clear the fractions by multiplying each equation by its LCD.\n\nIn the Solving Systems of Equations by Graphing we saw that not all systems of linear equations have a single ordered pair as a solution. When the two equations were really the same line, there were infinitely many solutions. We called that a consistent system. When the two equations described parallel lines, there was no solution. We called that an inconsistent system.\n\n### Solve Applications of Systems of Equations by Elimination\n\nSome applications problems translate directly into equations in standard form, so we will use the elimination method to solve them. As before, we use our Problem Solving Strategy to help us stay focused and organized.\n\nTry It 5.65\n\nMalik stops at the grocery store to buy a bag of diapers and 2 cans of formula. He spends a total of \\$37. The next week he stops and buys 2 bags of diapers and 5 cans of formula for a total of \\$87. How much does a bag of diapers cost? How much is one can of formula? 
Try It 5.66\n\nTo get her daily intake of fruit for the day, Sasha eats a banana and 8 strawberries on Wednesday for a calorie count of 145. On the following Wednesday, she eats two bananas and 5 strawberries for a total of 235 calories for the fruit. How many calories are there in a banana? How many calories are in a strawberry?\n\n### Choose the Most Convenient Method to Solve a System of Linear Equations\n\nWhen you have to solve a system of linear equations in a later math class, you will usually not be told which method to use. You will need to make that decision yourself. So you’ll want to choose the method that is easiest to do and minimizes your chance of making mistakes.\n\n### Media\n\nAccess these online resources for additional instruction and practice with solving systems of linear equations by elimination.\n\n• Instructional Video-Solving Systems of Equations by Elimination\n• Instructional Video-Solving by Elimination\n• Instructional Video-Solving Systems by Elimination\n\n### Section 5.3 Exercises\n\n#### Practice Makes Perfect\n\nSolve a System of Equations by Elimination\n\nIn the following exercises, solve the systems of equations by elimination.\n\nSolve Applications of Systems of Equations by Elimination\n\nIn the following exercises, translate to a system of equations and solve.\n\n167. The sum of two numbers is 65. Their difference is 25. Find the numbers.\n168. The sum of two numbers is 37. Their difference is 9. Find the numbers.\n169. The sum of two numbers is −27. Their difference is −59. Find the numbers.\n170. The sum of two numbers is −45. Their difference is −89. Find the numbers.\n171. Andrea is buying some new shirts and sweaters. She is able to buy 3 shirts and 2 sweaters for \\$114 or she is able to buy 2 shirts and 4 sweaters for \\$164. How much does a shirt cost? How much does a sweater cost?\n172. Peter is buying office supplies. 
He is able to buy 3 packages of paper and 4 staplers for \\$40 or he is able to buy 5 packages of paper and 6 staplers for \\$62. How much does a package of paper cost? How much does a stapler cost?\n173. The total amount of sodium in 2 hot dogs and 3 cups of cottage cheese is 4720 mg. The total amount of sodium in 5 hot dogs and 2 cups of cottage cheese is 6300 mg. How much sodium is in a hot dog? How much sodium is in a cup of cottage cheese?\n174. The total number of calories in 2 hot dogs and 3 cups of cottage cheese is 960 calories. The total number of calories in 5 hot dogs and 2 cups of cottage cheese is 1190 calories. How many calories are in a hot dog? How many calories are in a cup of cottage cheese?\n\nChoose the Most Convenient Method to Solve a System of Linear Equations\n\nIn the following exercises, decide whether it would be more convenient to solve the system of equations by substitution or elimination.\n\n#### Self Check\n\nⓐ After completing the exercises, use this checklist to evaluate your mastery of the objectives of this section.\n\nⓑ What does this checklist tell you about your mastery of this section? What steps will you take to improve?" ]
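The seven-step elimination recipe in the How To box above can be turned into a short program. This is an illustrative sketch, not part of the original text; the function name `solve_by_elimination` and the sample system are ours, and exact `Fraction` arithmetic is used so that the Step 7 check is exact.

```python
from fractions import Fraction

def solve_by_elimination(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by elimination.
    Assumes b1 != 0 so the back-substitution step works."""
    a1, b1, c1, a2, b2, c2 = map(Fraction, (a1, b1, c1, a2, b2, c2))
    # Step 2: multiply equation 1 by b2 and equation 2 by -b1,
    # so the y-coefficients become opposites (b1*b2 and -b1*b2).
    # Step 3: add the scaled equations to eliminate y.
    A = a1 * b2 - a2 * b1
    C = c1 * b2 - c2 * b1
    if A == 0:
        return None          # parallel or coincident lines: no unique solution
    x = C / A                # Step 4: solve for the remaining variable
    y = (c1 - a1 * x) / b1   # Step 5: substitute back into equation 1
    # Step 7: check the ordered pair in both original equations.
    assert a1 * x + b1 * y == c1 and a2 * x + b2 * y == c2
    return x, y              # Step 6: the solution as an ordered pair

# 2x + 3y = 12 and x - y = 1 have the solution (3, 2)
print(solve_by_elimination(2, 3, 12, 1, -1, 1))
```

Returning `None` when the leading determinant is zero corresponds to the inconsistent or dependent systems discussed above.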
http://dictionary.obspm.fr/?showAll=1&formSearchTextfield=degree
[ "# An Etymological Dictionary of Astronomy and AstrophysicsEnglish-French-Persian\n\n## فرهنگ ریشه شناختی اخترشناسی-اخترفیزیک\n\n### M. Heydari-Malayeri    -    Paris Observatory\n\nHomepage\n\nNumber of Results: 10 Search : degree\n degree   درجه   darajé (#)Fr.: degré   1) Subdivision of an interval in a scale of measurement. 2) Geometry: Measure of angle, the 360th part of a circle. 3) Math.: Rank of an equation or expression as determined by the sum of the exponents of the variables.From O.Fr. degré, from V.L. *degradus \"a step,\" from L.L. degredare, from L. → de- \"down\" + gradus \"step.\"Darajé, from Ar. darajat \"step, ladder.\" degree of coherence   درجه‌ی ِ همدوسی   dareje-ye hamdusiFr.: degré de cohérence   The extent of → coherence of an → electromagnetic wave, as indicated by a → dimensionless number. Since interference takes place when waves are → coherent, using a → Young's experiment, the degree of coherence is measured from the → fringe  → visibility, V. It is defined as the ratio V = (Imax - Imin) / (Imax + Imin), where Imax is the intensity at a maximum of the → interference pattern, and Imin is the intensity at a minimum of the interference pattern. The electromagnetic wave is considered to be highly coherent when the degree of coherence is about 1, incoherent for nearly zero values, and partially coherent for values between 0 and 1.→ degree; → coherence. degree of freedom   درجه‌ی ِ آزادی   daraje-ye âzâdi (#)Fr.: degré de liberté   Of a → mechanical system, the number of → independent variables needed to describe its configuration.→ degree; → freedom. degree of ionization   درجه‌ی ِ یونش   daraje-ye yoneš (#)Fr.: degré d'ionisation   The number of electrons a neutral atom has lost in an ionizing physical process (radiation, shock, collision). In spectroscopy, the degree of ionization is indicated by a Roman numeral following the symbol for the element. 
A neutral atom is indicated by the Roman numeral I, a singly ionized atom, one which has lost one electron, is indicated by II, and so on. Thus O VI indicates an oxygen atom which has lost five electrons.→ degree; → ionization. degree of polarization   درجه‌ی ِ قطبش   daraje-ye qotbešFr.: degré de polarisation   The ratio of the intensity of the polarized portion of light to the total intensity at a point in the beam.→ degree; → polarization. degree of stability   درجه‌ی ِ پایداری   daraje-ye pâydâriFr.: degré de stabilité   Statics: The → energy that must be expended to permanently disturb a specific state of → equilibrium of a body.→ degree; → stability. degree of vertex   درجه‌ی ِ تارک   daraje-ye târakFr.: degré de vertex   The → number of → edges incident on the → vertex.→ degree; → vertex. first degree equation   هموگش ِ درجه‌ی ِ یکم   hamugeš-e daraje-ye yekomFr.: équation du premier degré   An equation in which the highest → exponent of the → variable is 1. Same as → linear equation.→ first; → degree; → equation. polarization degree   درجه‌ی ِ قطبش   daraje-ye qotbeš (#)Fr.: degré de polarisation   → polarization; → degree. square degree   درجه‌ی ِ چاروش   daraje-ye cârušFr.: degré carré   A solid angle whose cone is a tetrahedral pyramid with an angle between its edges equal to 1°. 1 square degree = 3.046 × 10^-4 sr = 2.424 × 10^-5 of the solid angle of a complete sphere.→ square; → degree." ]
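Two of the formulas quoted in these entries, the fringe-visibility definition of the degree of coherence and the square-degree conversion, can be checked numerically. The sketch below is illustrative only and not part of the dictionary; the function names are ours.

```python
import math

def fringe_visibility(i_max, i_min):
    """Degree of coherence measured from an interference pattern:
    V = (Imax - Imin) / (Imax + Imin); 1 = coherent, 0 = incoherent."""
    return (i_max - i_min) / (i_max + i_min)

def square_degrees_to_steradians(sq_deg):
    """1 square degree = (pi/180)^2 steradian."""
    return sq_deg * (math.pi / 180.0) ** 2

sr = square_degrees_to_steradians(1)
print(round(sr, 7))                    # about 3.046e-4 sr
print(round(sr / (4 * math.pi), 9))    # about 2.424e-5 of the full sphere
print(fringe_visibility(9.0, 1.0))     # 0.8, i.e. partially coherent light
```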
https://multiples.info/numbers/multiples-of-922.html
[ "Multiples of 922\n\nWelcome to the Multiples of 922 page. Here we will first teach you everything you will ever need to know about the multiples of 922, and then give you a study guide summary of everything we taught you to make sure you remember it all. Use this page to look up facts and learn information about the multiples of 922. This page will make you a multiples of nine hundred twenty-two expert!\n\nDefinition of Multiples of 922\nMultiples of 922 are all the numbers that when divided by 922 equal an integer. Each of the multiples of 922 is called a multiple. A multiple of 922 is created by multiplying 922 by an integer.\n\nTherefore, to create a list of multiples of 922, you start with 1 multiplied by 922, then 2 multiplied by 922, then 3 multiplied by 922, and so on for as long as you want. Thus, the list of the first five multiples of 922 is 922, 1844, 2766, 3688, and 4610. To see a larger list of multiples of 922, see the printable image of Multiples of 922 further down on this page. We also have a category where you can choose any nth multiple of 922.\n\nMultiples of 922 Checker\nThe Multiples of 922 Checker below checks to see if any number of your choice is a multiple of 922. In other words, it checks to see if there is any number (integer) that when multiplied by 922 will equal your number. To do that, we divide your number by 922. If the quotient is an integer, then your number is a multiple of 922.\n\nIs  a multiple of 922?\n\nLeast Common Multiple of 922 and ...\nA Least Common Multiple (LCM) is the lowest multiple that two or more numbers have in common. This is also called the smallest common multiple or lowest common multiple and is useful to know when you are adding or subtracting fractions. 
Enter one or more numbers below (922 is already entered) to find the LCM.\n\nCheck out our LCM Calculator if you need more details about the Least Common Multiple or if you need the LCM for different numbers for adding and subtracting fractions.\n\nnth Multiple of 922\nAs we stated above, 922 is the first multiple of 922, 1844 is the second multiple of 922, 2766 is the third multiple of 922, and so on. Enter a number below to find the nth multiple of 922.\n\nth multiple of 922\n\nMultiples of 922 vs Factors of 922\n922 is a multiple of 922 and a factor of 922, but that is where the similarities end. All positive multiples of 922 are 922 or greater than 922. All positive factors of 922 are 922 or less than 922.\n\nBelow is the beginning list of multiples of 922 and the factors of 922 so you can compare:\n\nMultiples of 922: 922, 1844, 2766, 3688, 4610, etc.\n\nFactors of 922: 1, 2, 461, 922\n\nAs you can see, the multiples of 922 are all the numbers that you can divide by 922 to get a whole number. The factors of 922, on the other hand, are all the whole numbers that you can multiply by another whole number to get 922.\n\nIt's also interesting to note that if a number (x) is a factor of 922, then 922 will also be a multiple of that number (x).\n\nMultiples of 922 vs Divisors of 922\nThe divisors of 922 are all the integers that 922 can be divided by evenly. Below is a list of the divisors of 922.\n\nDivisors of 922: 1, 2, 461, 922\n\nThe interesting thing to note here is that if you take any multiple of 922 and divide it by a divisor of 922, you will see that the quotient is an integer.\n\nMultiples of 922 Table\nBelow is an image of the first 100 multiples of 922 in a table. The table is in chronological order, column by column. The first column has the first ten multiples of 922, the second column has the next ten multiples of 922, and so on.\n\nThe Multiples of 922 Table is also referred to as the 922 Times Table or Times Table of 922. 
You are welcome to print out our table for your studies.\n\nNegative Multiples of 922\nAlthough not often discussed or needed in math, it is worth mentioning that you can make a list of negative multiples of 922 by multiplying 922 by -1, then by -2, then by -3, and so on, to get the following list of negative multiples of 922:\n\n-922, -1844, -2766, -3688, -4610, etc.\n\nMultiples of 922 Summary\nBelow is a summary of important Multiples of 922 facts that we have discussed on this page. To retain the knowledge on this page, we recommend that you read through the summary and explain to yourself or a study partner why they hold true.\n\nThere are an infinite number of multiples of 922.\n\nA multiple of 922 divided by 922 will equal a whole number.\n\n922 divided by a factor of 922 equals a divisor of 922.\n\nThe nth multiple of 922 is n times 922.\n\nThe largest factor of 922 is equal to the first positive multiple of 922.\n\n922 is a multiple of every factor of 922.\n\n922 is a multiple of 922.\n\nA multiple of 922 divided by a divisor of 922 equals an integer.\n\n922 divided by a divisor of 922 equals a factor of 922.\n\nAny integer times 922 will equal a multiple of 922.\n\nMultiples of a Number\nHere you can get the multiples of another number, all with the same attention to detail as we did for multiples of 922 on this page.\n\nMultiples of\nMultiples of 923\nDid you find our page about multiples of nine hundred twenty-two educational? Do you want more knowledge? Check out the multiples of the next number on our list!" ]
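The facts in the summary above are easy to verify with a few lines of code. This is an illustrative sketch (the function names are ours, not part of the page).

```python
from math import gcd

N = 922

def is_multiple(x, n=N):
    """x is a multiple of n exactly when x divided by n is an integer."""
    return x % n == 0

def nth_multiple(k, n=N):
    """The k-th positive multiple of n is k times n."""
    return k * n

def lcm(*nums):
    """Least common multiple, computed pairwise via gcd."""
    out = 1
    for v in nums:
        out = out * v // gcd(out, v)
    return out

def factors(n=N):
    """All whole numbers that divide n evenly."""
    return [d for d in range(1, n + 1) if n % d == 0]

print(nth_multiple(3))      # 2766, the third multiple of 922
print(is_multiple(4610))    # True: 4610 / 922 = 5
print(factors())            # [1, 2, 461, 922]
print(lcm(922, 4))          # 1844
```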
https://biesalab.org/datasets/
[ "## Polynomials’ Roots Dataset\n\n### Context\n\nFinding the arbitrary roots (real or complex) of a given polynomial is a fundamental task in various areas of science and engineering. Applications of root-finding tasks emerge from, e.g., control and communication systems, filter design, signal and image processing, codification and decodification of information.\n\nMost of the methods available in the literature are based on Newton’s method or derived from it, and rely on the deflation technique to sequentially find the roots of a given polynomial. However, this leads to the accumulation of rounding errors and, as a consequence, inaccurate results. Besides that, most of these methods require good initial approximations in order to converge.\n\nThe idea behind building this dataset is to use it for testing and comparing tools that compute the roots of polynomials. We have used it to test artificial intelligence tools (Artificial Neural Networks and Particle Swarm Optimization), and the dataset was used to compare our solution to the other methods available in the literature (e.g., the Durand–Kerner).\n\n### Content\n\nThere are two main directories, one for polynomials with only real roots (named `real`) and the other for polynomials with both real and complex roots (named `real_complex`). These directories store the coefficients (files named `deg_n_coef`, with `n` being the degree of the polynomials) and the roots (files named `deg_n_roots`) of each polynomial entry.\n\nFor both cases, this data set only considers real univariate polynomials of degrees 5, 10, 15, 20 and 25. The files were saved in CSV format, with the first line being the header of the data set.\n\nIn the header, coefficients were denoted by `a_i`, where `i` (`i=0,1,...,n`) indicates the index associated with the coefficient `i` of a polynomial (`a_n` represents the coefficient of the term with the highest degree). 
For polynomials with only real roots, roots are identified by `alpha_j`, being `j` (`j=1,2,...,n`) the `j`-th root of a polynomial. This notation changes slightly for polynomials with both real and complex roots, where `re_alpha_j` and `im_alpha_j` denote respectively the real and the imaginary part of the `j`-th polynomial’s root.\n\nTo generate this data set, two algorithms were used to: (i) generate real roots for any polynomial degree, and (ii) given a set of real roots, compute the respective coefficients. Contrary to the strategy employed to generate the databases for the real roots, for polynomials with both real and complex roots, the coefficients were generated first and from these, the exact solutions (i.e., the roots) were calculated (which can be real or complex).\n\nIt is also important to point out that these files were generated using the Mathematica software with double-precision arithmetic.\n\n### Attribute information\n\nFor the case when polynomials have only real roots, roots were generated in the closed interval of -1 to 1. For the case when polynomial have both real and complex roots, coefficients were generated in the closed interval of 0 to 1.\n\nThis is a multivariate data set, with the number of attributes being equal to `n` (for polynomials with only real roots) or `n * 2` (for polynomials with both real and complex roots).\n\n### General information\n\nThere are 100 000 instances per polynomial degree, and the associated recommended task is regression. 
Besides that, there are no missing values.\n\n### Citation request\n\nIf you use this data set, please cite the following paper: A Neural Network-Based Approach for Approximating the Arbitrary Roots of Polynomials, to be submitted to IEEE Access.\n\n## Other Datasets\n\nSoon we will release other datasets regarding Covid-19 and Football.\n\n### To request data\n\nTo request the data please fill the form below (contact form) and let us know who you are and what you are planning to use the data for. Do not forget to identify your institution and institutional e-mail." ]
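For context, the Durand–Kerner method mentioned above can be sketched in a few lines: it iterates on all root estimates simultaneously, so no deflation step (and none of its accumulated rounding error) is needed. The code below is an illustrative sketch, not part of the dataset tooling; the function name and starting-guess choice are ours.

```python
def durand_kerner(coeffs, iters=200):
    """Approximate all (possibly complex) roots of a polynomial at once.
    `coeffs` are [a_n, ..., a_1, a_0]; the polynomial is normalized to be
    monic before iterating."""
    n = len(coeffs) - 1
    lead = coeffs[0]
    c = [a / lead for a in coeffs]  # make the polynomial monic

    def p(x):  # Horner evaluation
        out = 0j
        for a in c:
            out = out * x + a
        return out

    # Classic starting guesses: powers of a non-real seed of modulus != 1.
    roots = [(0.4 + 0.9j) ** k for k in range(n)]
    for _ in range(iters):
        for i in range(n):
            q = 1.0 + 0j
            for j in range(n):
                if j != i:
                    q *= roots[i] - roots[j]
            roots[i] -= p(roots[i]) / q
    return roots

# x^2 - 3x + 2 has roots 1 and 2
rts = sorted(durand_kerner([1, -3, 2]), key=lambda z: z.real)
print([round(z.real, 6) for z in rts])
```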
https://support.minitab.com/en-us/minitab/19/help-and-how-to/statistics/basic-statistics/supporting-topics/basics/linear-nonlinear-and-monotonic-relationships/
[ "# Linear, nonlinear, and monotonic relationships\n\nWhen evaluating the relationship between two variables, it is important to determine how the variables are related. Linear relationships are most common, but variables can also have a nonlinear or monotonic relationship, as shown below. It is also possible that there is no relationship between the variables. You should start by creating a scatterplot of the variables to evaluate the relationship.\n\nA linear relationship is a trend in the data that can be modeled by a straight line. For example, suppose an airline wants to estimate the impact of fuel prices on flight costs. They find that for every dollar increase in the price of a gallon of jet fuel, the cost of their LA-NYC flight increases by about \\$3500. This describes a linear relationship between jet fuel cost and flight cost.\n\nWhen both variables increase or decrease concurrently and at a constant rate, a positive linear relationship exists. The points in Plot 1 follow the line closely, suggesting that the relationship between the variables is strong. The Pearson correlation coefficient for this relationship is +0.921.\n\nWhen one variable increases while the other variable decreases, a negative linear relationship exists. The points in Plot 2 follow the line closely, suggesting that the relationship between the variables is strong. The Pearson correlation coefficient for this relationship is −0.968.\n\nThe data points in Plot 3 appear to be randomly distributed. They do not fall close to the line indicating a very weak relationship if one exists. The Pearson correlation coefficient for this relationship is −0.253.\n\nIf a relationship between two variables is not linear, the rate of increase or decrease can change as one variable changes, causing a \"curved pattern\" in the data. This curved trend might be better modeled by a nonlinear function, such as a quadratic or cubic function, or be transformed to make it linear. 
Plot 4 shows a strong relationship between two variables. However, because the relationship is not linear, the Pearson correlation coefficient is only +0.244. This relationship illustrates why it is important to plot the data in order to explore any relationships that might exist.\n\nIn a monotonic relationship, the variables tend to move in the same relative direction, but not necessarily at a constant rate. In a linear relationship, the variables move in the same direction at a constant rate. Plot 5 shows both variables increasing concurrently, but not at the same rate. This relationship is monotonic, but not linear. The Pearson correlation coefficient for these data is 0.843, but the Spearman correlation is higher, 0.948.\n\nLinear relationships are also monotonic. For example, the relationship shown in Plot 1 is both monotonic and linear." ]
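The Pearson/Spearman contrast described for Plot 5 is easy to reproduce. The sketch below is illustrative (pure Python, tie handling omitted); it uses the fact that the Spearman correlation is the Pearson correlation computed on ranks, so a perfectly monotonic but nonlinear trend gives Spearman = 1 while Pearson stays below 1.

```python
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

def rank(values):
    """1-based ranks; assumes the values are distinct (no ties)."""
    ordered = sorted(values)
    return [ordered.index(v) + 1 for v in values]

def spearman(xs, ys):
    """Spearman correlation = Pearson correlation of the ranks."""
    return pearson(rank(xs), rank(ys))

x = list(range(1, 11))
y = [t ** 3 for t in x]       # monotonic but not linear

print(round(pearson(x, y), 3))  # below 1: the trend is not linear
print(spearman(x, y))           # 1.0: the monotonic trend is perfect
```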
https://cp4space.hatsya.com/2013/09/20/desert-island-theorems/
[ "# Desert Island Theorems\n\nSuppose you were cast away on a desert island. You’re only allowed to take a maximum of eight known theorems with you, along with rudimentary results such as ZFC axioms together with ‘boring stuff’ such as mathematical induction over the naturals, commutativity of real multiplication and the least upper-bound property of", null, "$\\mathbb{R}$. Everything else you have to prove yourself.\n\nSo, which eight theorems would you take with you on a desert island? It would be a waste of time to take the Baire category theorem, for instance; despite being extremely useful, it’s pretty trivial to prove. The same applies to the Dirichlet pigeonhole principle. On the other end of the spectrum, whilst Fermat’s Last Theorem is very difficult to prove, the end result is not nearly as useful as the machinery developed by Andrew Wiles in the course of solving this problem.\n\nAt the end of the post, there’s a ‘comments’ section for you to mention which theorems you would choose if marooned on a desert island. Here are my choices:\n\n### 8. Density Hales-Jewett theorem\n\nThe ordinary Hales-Jewett theorem is quite straightforward to prove, using plenty of induction and the Dirichlet pigeonhole principle. On the other hand, the density analogue (c.f. Szemeredi’s theorem) is much deeper, requiring the entire Polymath project to establish a purely combinatorial proof (the original proof involved ergodic theory).\n\nI’m not sure whether I’ve ever required the full force of Density Hales-Jewett, but Szemeredi’s theorem and ordinary Hales-Jewett have proved invaluable to me.\n\n### 7. 
The abc theorem\n\nDisclaimer: I don’t think that a second person is capable of understanding Mochizuki theory*, so this might still be classed as a conjecture until someone can peer-review the paper.\n\n* Formerly known as inter-universal Teichmüller theory, although Doron Zeilberger makes a good argument that it should be renamed.\n\nThe abc conjecture is concerned with triples of coprime naturals, a < b < c, such that a + b = c. It states that the radical d (largest squarefree divisor) of abc cannot be much smaller than c. Specifically, for any ε > 0, we can only find finitely many examples where", null, "$d < c^{1 - \\epsilon}$. This asserts that Beal’s conjecture holds in all but a finite number of cases, as does Fermat’s last theorem.\n\n### 6. Max-flow min-cut theorem\n\nI would have ranked this higher, except for the fact that it has a short elementary proof. The statement of the theorem is about networks, which are directed (multi-)graphs where each edge has a maximum capacity. A flow in a network is an assignment of nonnegative real values to each of the edges, such that they do not exceed the capacity, and that Kirchhoff’s current law is obeyed at all vertices (with the exception of the source and sink vertices). The statement of the theorem is that the maximum flow attainable is equal to the minimum capacity of a cut (partitioning of the graph into two sets of vertices, so as to separate the source from the sink).\n\nIt can be used to prove Menger’s theorem in graph theory, Hall’s marriage theorem, Dilworth’s theorem on partially-ordered sets, and the Erdős-Szekeres theorem (although this also follows from Dilworth’s easier twin, Mirsky’s theorem).\n\n### 5. Borsuk-Ulam theorem\n\nThis is informally stated as ‘two antipodal points on the Earth’s surface have the same temperature and pressure’. 
More generally, if we have a continuous function f from the n-sphere to", null, "$\\mathbb{R}^n$, then we can find antipodal points x and y such that f(x) = f(y).\n\nA corollary is the Brouwer fixed-point theorem, and all that that implies.\n\n### 4. Gödel’s completeness theorem\n\nThis is one of the most beautiful and powerful theorems in mathematical logic. If a statement in first-order logic is consistent (i.e. we cannot prove a contradiction), then this asserts that there exists a finite or countable model. Consequently, we establish logical compactness: a set of first-order sentences has a model if and only if every finite subset has a model.\n\nWith the compactness theorem, the problem of computing the chromatic number of the plane is reduced to the more tractable problem of determining the maximum chromatic number of any finite unit-distance graph.\n\n### 3. Every set can be well-ordered\n\nThis is equivalent to the axiom of choice, but much friendlier than the boring statement of AC involving a choice function. The main advantage is that it endows one with the power of transfinite induction over arbitrary sets, which is one of my favourite tools for proving theorems:\n\n• Constructing an infinite game where neither player has a winning strategy;\n• 2-colouring the plane such that there is no continuous monochromatic path;\n• Proving that the direct product of two infinite sets has cardinality equal to the larger of the two sets (by Cantor normal form);\n• Establishing Zorn’s lemma to show that any vector space has a basis…\n\n### 2. Classification of finite simple groups\n\nThis is an impossible endeavour for a single individual to attempt, with the current proof being a 5000-page behemoth comprising many different papers. The Classification obviously reduces deep results such as the Feit-Thompson theorem (that there are no finite simple groups with odd composite order) and Burnside’s", null, "$p^a q^b$ theorem to easy corollaries. 
(Of course, these theorems were almost certainly used in establishing the Classification, so this would be logically circular.)\n\nIt’s also a very beautiful theorem, with a vast wealth of exciting groups such as PSL(2,7), exceptional Lie groups over finite fields, and the Monster group. The Classification was also involved in proving theorems in areas outside group theory, such as establishing which graphs are 4- and 5-ultrahomogeneous.\n\n### 1. Bezout’s theorem\n\nThis is a result in geometry, which says that if a degree-m and degree-n algebraic curve intersect in finitely many points, then they do so in at most mn points. Moreover, equality holds when the curves are on the complex projective plane, and we count intersections with the appropriate multiplicity.\n\nAn immediate corollary of this is the fundamental theorem of algebra, by taking one of the curves to be the line y = 0 and the other to be y = f(x) (where terms have been multiplied by appropriate powers of z to homogenise them, and f is an arbitrary polynomial).\n\nAlso, it can be used to establish the Cayley-Bacharach theorem of cubic curves, which itself can prove the associativity of the elliptic curve group operation, Pascal’s theorem, Pappus’ theorem, and thus Desargues’ theorem.\n\nThis entry was posted in Uncategorized. 
Bookmark the permalink.\n\n### 17 Responses to Desert Island Theorems\n\n1.", null, "Johnicholas says:\n\nIs it reasonable to divide mathematics into “frontier-work”, classifying as-yet-unknown propositions into true and false, and “internal-work”, deciding which parts of the known world of mathematics ought to be broad highways and which ought to be narrow dirt roads?\n\nThe goal of internal work is presumably to minimize the computational effort in getting from anywhere to anywhere else inside the known world, which is to say, to index and compress the known world.\n\nThere has been a lot of progress in text compression, certainly measured and made visible by, (and probably stimulated by) benchmarks like the Calgary Corpus or the Hutter Prize. Do you think a similar benchmark for compressing the known results of mathematics would be valuable? It might consist of a formal language of propositions and a set of example propositions in the language (or more likely, a portfolio of example-generator programs).\n\n•", null, "apgoucher says:\n\nI think that distinction is an appropriate one. For example, the first proof of the Classification of Finite Simple Groups was really long, and there’s now an effort to clean it up somewhat. I believe that they’ve succeeded in reducing its length from about 10000 to 5000 pages.\n\nProof-verifying software such as Coq is particularly satisfying, since it’s less likely to enable an error to slip through than a human-readable proof (the only vulnerability would be if the software contains a bug or its axioms are arithmetically unsound). I’m not sure what portion of mathematics has been Coq-ified, although it includes the Four Colour Theorem.\n\n2.", null, "Evelyn Lamb (@evelynjlamb) says:\n\nI haven’t come up with all eight yet, but I would definitely bring the Uniformization Theorem, which states that any simply connected surface with negative Euler characteristic can be given a metric with constant curvature -1. 
(It actually does more: Any simply connected Riemann surface is conformally equivalent to the sphere, the complex plane, or the hyperbolic plane. But I only use the theorem to get hyperbolic metrics on genus g surfaces where g>1, so I don’t usually think about the rest of it.) I might also bring the ham and cheese sandwich theorem because I’d probably be pretty hungry. (I know, that was terrible.)\n\nThe fundamental theorem of calculus would be nice too. (Or when you bring a theorem, do you get every theorem that is used to prove it for free? Because there’s a lot of stuff that implicitly uses the fundamental theorem of calculus.)\n\nI study Teichmüller theory, and I couldn’t really tell how much of Zeilberger’s opinion #88 was satirical. I do wish we would stop naming increasingly unrelated things after him, and in my opinion, whatever inter-universal Teichmüller theory is, it’s very unlike anything Teichmüller actually did. Periodically researchers in my area of math talk about how we should rename stuff after less repugnant people who also contributed more to the way we understand the theory now, but it never quite catches on. I do think it’s good that we talk about the issue and make young people in the field (like myself) aware of a little bit of the history.\n\n•", null, "apgoucher says:\n\nYes, the Uniformisation Theorem is very elegant, and quite unexpected. For instance, you get conformal embeddings of Hurwitz surfaces in three-dimensional space (e.g. the Klein quartic surface, which admits that beautiful tiling with PGL(2,7) symmetry).\n\nIf you take a theorem, you get the immediate corollaries for free. I don’t think that you automatically get the substance of the proof, which is why it’s better to take the Taniyama-Shimura theorem than Fermat’s Last Theorem.\n\nConsidering its publication date (April 1st), to which Zeilberger explicitly draws attention, it wouldn’t surprise me if the majority of the opinion was intended to be satirical. 
Nevertheless, he does have a very good point, and since we already name mathematical theorems after completely unrelated people (c.f. Pell equations) it would not ruin mathematical tradition to boycott Teichmüller.\n\n3.", null, "wojowu says:\n\nI don’t know any particularily useful theorems (I like simple to formulate but harder to prove facts, like Dirichlet’s theorem), but 4. header should say compactness, not completeness, I think.\n\n•", null, "apgoucher says:\n\nCompleteness implies compactness.\n\n•", null, "wojowu says:\n\nI didn’t even know :p\n\n•", null, "apgoucher says:\n\nYes, completeness states that if we can’t find a model, then there must be a proof of contradiction. Proofs are finite, so if an infinite set of sentences is inconsistent, then there must be a proof of contradiction relying on only a finite subset of those. Contrapositively, if all finite subsets are consistent, then so is the entire set, which is the statement of compactness.\n\n4.", null, "Anonymous says:\n\nIf you’re going to take a false theorem, you ought at least to choose the nice sort of false theorem that implies all the other theorems (and their negations).\n\n•", null, "apgoucher says:\n\nAny false theorem, when coupled with its counter-example, would enable you to derive ‘false’ and therefore all statements by the Principle of Explosion. Which of those eight theorems are you contesting? (The abc conjecture is the only one where the proof is still going through the stages of peer-reviewing; all of the others are established.)\n\n5.", null, "notatab says:\n\nClassification of complex semi-simple lie algebras by Dynkin diagrams. There’s a story (which may be apocryphal but was told to me by a fairly eminent man) that they were considered for inclusion on the Pioneer disc as proof of intelligent life on Earth.\n\n•", null, "notatab says:\n\nby which, of course, I mean the Voyager disc.\n\n•", null, "apgoucher says:\n\nYes. 
That would be the second exceptional object used on that probe, the other being the binary Golay code (for sending colour images of planets back to Earth).\n\n•", null, "apgoucher says:\n\nInteresting. Not to mention that the simply-laced Dynkin diagrams An, Dn, E6, E7 and E8 classify far more objects than just Lie algebras (e.g. tame quivers, crystallographic reflection groups and finite subgroups of the quaternions).\n\n6.", null, "tomtom2357 says:\n\nI would replace Borsuk-Ulam with the theorem that the fundamental group of the circle is isomorphic to the integers. Borsuk-Ulam follows as a corollary, as does the Fundamental Theorem of Algebra, which I would say is an extremely useful theorem, whose proof is dramatically simplified by using topology." ]
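The max-flow min-cut theorem from the list above is easy to check computationally on small networks. The sketch below is a plain-Python Edmonds-Karp implementation (the example network is invented for illustration): when no augmenting path remains, the set of vertices reachable in the residual graph defines a cut whose capacity equals the accumulated flow, exactly as the theorem promises.

```python
# Edmonds-Karp max flow on a capacity matrix, returning both the max
# flow and the capacity of the min cut found when augmentation stops.
from collections import deque

def max_flow(cap, s, t):
    n = len(cap)
    flow = 0
    residual = [row[:] for row in cap]
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and residual[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            # No augmenting path: the BFS-reachable set S gives a cut (S, V\S)
            reachable = {v for v in range(n) if parent[v] != -1}
            cut = sum(cap[u][v] for u in reachable
                      for v in range(n) if v not in reachable)
            return flow, cut
        # Find the bottleneck along the path, then push that much flow
        bottleneck = float('inf')
        v = t
        while v != s:
            bottleneck = min(bottleneck, residual[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:
            residual[parent[v]][v] -= bottleneck
            residual[v][parent[v]] += bottleneck
            v = parent[v]
        flow += bottleneck

# Invented example: 0 = source, 3 = sink
cap = [[0, 3, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 2],
       [0, 0, 0, 0]]
flow, cut = max_flow(cap, 0, 3)
print(flow, cut)  # → 4 4: max flow equals min cut
```

That the returned cut always matches the flow is precisely the constructive direction of the proof; the other direction (any cut bounds any flow) is a one-line counting argument.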
https://www.cut-the-knot.org/do_you_know/addition.shtml
[ "## There are many things that can be added: numbers, vectors, matrices, spaces, shapes, sets, functions, equations, strings, chains...", null, "Mathematics originated with the desire and need to count and measure. But ever since the invention of numbers it began acquiring abstract features that characterize it nowadays. The number 1 is an abstraction corresponding to a single object, be it one cow, one fish, flower or molecule. With counting naturally comes operation of addition - passing from the current object to the next means adding one to the set of already counted objects. Ian Stewart defines Mathematics as the science of pattern that detects and studies commonality in diverse phenomena. m + n means the result of first counting m and then n objects. Regardless of what was counted, the pattern emerged that claimed that first counting n objects and afterwards additional m will produce the same result: m + n = n + m.\n\nThus Mathematics went from the abstraction of a number to the abstraction of operation; addition being just one such operation. Operations apply to elements of arbitrary sets which, in turn, may be distinguished by the variety of operations (and their properties) that are defined for elements of a set. Addition is a binary operation that applies to two objects simultaneously and results in another element of the same set. Breeding might be looked at as another binary operation. Negation, i.e. changing sign, is a unary operation since it applies to a single element. A ternary operation applies to three elements at once, and so on.\n\nAddition, as an abstract operation, has several properties.\n\n1. In a set for whose elements addition is defined there exists a very special element (most often) called zero and denoted as 0, such that\n\na + 0 = 0 + a = a,\n\nfor any element a of the given set.\n\n2. 
What you added you must be able to take back so that for every element a there exists an element b such that\n\na + b = b + a = 0.\n\nThis element is denoted as -a and is called the (additive) inverse of a.\n\n3. Addition is required to be associative, i.e., producing the same result regardless of the sequence in which elements are added:\n\n(a + b) + c = a + (b + c)\n\n4. There is one more property, that of commutativity\n\na + b = b + a\n\nwhich is often imposed on the operation of addition. But sometimes it's more natural and convenient to allow a non-commutative addition.\n\n### Remark\n\nYou won't be surprised to learn that mathematicians have given names to sets on which addition is defined. Such sets are called (additive) groups. If the addition is commutative, the group is said to be commutative or Abelian. They also found use for sets in which the inverse element -a does not always exist, as in the case of whole numbers (positive integers). Such sets are called semigroups. (Just in passing, if the operation is not even associative, the set is called groupoid, or magma, see wikipedia.Groupoid.)\n\n## References\n\n1. Ian Stewart, Nature's Numbers, BasicBooks, 1995\n2. Oystein Ore, Number Theory and Its History, Dover Publications, 1976", null, "", null, "" ]
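The four properties above can be checked mechanically for a concrete finite example. The sketch below (my own illustration, not from the article) verifies by brute force that the integers modulo 5 form an Abelian group under addition:

```python
# Brute-force verification that Z_5 with addition mod 5 satisfies the
# four group axioms listed above (identity, inverses, associativity,
# commutativity).
n = 5
elements = range(n)
add = lambda a, b: (a + b) % n

# 1. identity element: 0
assert all(add(a, 0) == add(0, a) == a for a in elements)
# 2. inverses: every a has some b with a + b = 0 (here b = (n - a) % n)
assert all(any(add(a, b) == 0 for b in elements) for a in elements)
# 3. associativity
assert all(add(add(a, b), c) == add(a, add(b, c))
           for a in elements for b in elements for c in elements)
# 4. commutativity, so the group is Abelian
assert all(add(a, b) == add(b, a) for a in elements for b in elements)
print("Z_5 is an Abelian group under addition mod 5")
```

Dropping axiom 2 and running the same checks on the positive integers (a semigroup) would fail only the inverse test, matching the Remark.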
http://releases.strategoxt.org/strategoxt-manual/strategoxt-manual-0.16pre17666-1vpkhfjw/manual/chunk-chapter/bk02pt01ch06.html
[ "## Chapter 6. Interpretation\n\nInterpreting programs does not seem a typical application of program transformation. However, in partial evaluation a program is evaluated as far as possible. Also in simpler transformations, evaluating a part of a program is a common operation. Furthermore, in order to execute programs in our TIL language, it is useful to have an interpreter.\n\nThis chapter shows how to construct an interpreter for typical elements of an imperative language, such as expressions, variable access constructs, and control-flow statements. The specification uses advanced features of Stratego such as traversal strategies, pattern match operations, match-project, and dynamic rewrite rules. So don't be surprised if you don't get it all on a first reading.\n\nThe interpreter developed in this chapter requires the parsing and simplification infrastructure from the previous chapters. That is, the interpreter operates on the abstract syntax tree of a program after simplification.\n\n## 6.1. Evaluating Expressions\n\nExpressions can be evaluated using a simple bottom-up traversal of the AST. The `bottomup` traversal strategy does just that; it applies its argument strategy first to the leaves of the tree, and then to each level above. The `try` strategy makes its argument strategy always succeed. Thus, the `eval-exp` strategy performs a bottomup traversal of the AST of an expression, applying the `Eval` rules from module sim/til-eval. The `eval-exp` strategy is parameterized with a strategy for evaluating variables; their values depend on prior assignments in the program.\n\nFigure 6.1. file: til/run/til-eval-exp.str\n\n```module til-eval-exp\nimports til-eval\nstrategies\n\neval-exp(eval-var) =\nbottomup(try(\neval-var <+ EvalAdd <+ EvalMul <+ EvalSub <+ EvalDiv <+ EvalMod\n<+ EvalLeq <+ EvalGeq <+ EvalLt <+ EvalGt <+ EvalEqu <+ EvalNeq\n<+ EvalOr <+ EvalAnd <+ EvalS2I <+ EvalI2S\n))\n\n```\n\n## 6.2. 
Evaluating Variable Accesses\n\nThe essence of an imperative language is the manipulation of values in the store, held by variables. In TIL variables are introduced by the `Declaration` construct. The value of a variable is set or changed by the `Assign` assignment statement and the `Read` input statement. The `Write` statement prints the value of an expression. The interpreter uses the dynamic rewrite rule EvalVar to store the mappings from variables to their values.\n\nWhen encountering a variable declaration, the current scope is labeled with the name of that variable, and its mapping is undefined, reflecting the fact that the variable doesn't have a value after being declared. Note that the scope of a variable in TIL is the rest of the current block. When encountering an assignment statement the `EvalVar` rule for the variable being assigned is updated. Thus, after an assignment to a variable `x`, `EvalVar` rewrites that variable to its value. Similarly, the `Read` input statement reads the next line from the `stdin` stream, decides whether it represents an integer or a string, and defines the `EvalVar` rule for the variable. Finally, the `eval-exp` strategy is now defined in terms of the parameterized `eval-exp` strategy from module run/til-eval-exp using the `EvalVar` rule as strategy for evaluating variables. In addition, the `VarUndefined` strategy is provided to catch variables that are used before being assigned to.\n\nFigure 6.2. 
file: til/run/til-eval-var.str\n\n```module til-eval-var\nimports til-eval-exp\nstrategies\n\neval-declaration =\n(?Declaration(x) <+ ?DeclarationTyped(x, t))\n; rules( EvalVar+x :- Var(x) )\n\neval-assign =\nAssign(?x, eval-exp => val)\n; rules(EvalVar.x : Var(x) -> val)\n\neval-write =\n?ProcCall(\"write\", [<eval-exp>])\n; ?String(<try(un-double-quote); try(unescape)>)\n; <fprint>(<stdout-stream>, [<id>])\n\nVarUndefined =\n?Var(<id>)\n; fatal-err(|<concat-strings>[\"variable \", <id>, \" used before being defined\"])\n\neval-exp =\n```\n\n## 6.3. Evaluating Statements\n\nWhat remains is the interpretation of control-flow statements. A block statements simply entails the execution of each statement in the block in order. Any variables declared within the block are local, and shadow variables with the same name in outer blocks. For this purpose a dynamic rule scope ```{| EvalVar : ... |}``` is used to restrict the scope of `EvalVar` rules to the block. The statements in a block are evaluated by `map`ping the `eval-stat` strategy over the list of statements. For the execution of the `if-then-else` construct, first the condition is evaluated. The `EvalIf` rule then evaluates the construct, reducing it to one of its branches. The resulting block statement is then evaluated by `eval-stat`. The `while` statement is evaluated by transforming the loop to an `if-then-else` statement, with a `Goto` at the end of the body. The dynamic rule `EvalGoto` maps the goto statement for the new label to the `if-then-else` statement.\n\nFigure 6.3. 
file: til/run/til-eval-stats.str\n\n```module til-eval-stats\nimports til-eval-var\nsignature\nconstructors\nGoto : String -> Stat\nstrategies\n\neval-block =\n?Block(<eval-stats>)\n\neval-stats =\n{| EvalVar : map(eval-stat) |}\n\neval-stat = //debug(!\"eval-stat: \"); (\neval-assign\n<+ eval-write\n<+ eval-declaration\n<+ eval-block\n<+ eval-if\n<+ eval-while\n<+ EvalGoto\n//)\n\neval-if =\nIfElse(eval-exp, id, id)\n; EvalIf\n; eval-stat\n\neval-while =\n?While(e, st*)\n; where(new => label)\n; where(<conc>(st*, [Goto(label)]) => st2*)\n; rules( EvalGoto : Goto(label) -> <eval-stat>IfElse(e, st2*, []) )\n; <eval-stat> Goto(label)\n```\n\n## 6.4. The Complete Interpreter\n\nTo complete the interpreter we define an interpretation strategy for the `Program` constructor, and a main strategy that takes care of I/O. The program is compiled in the usual way, taking into account the include paths for the `../sig` and `../sim` directories that contain the TIL signature and evaluation rules, respectively.\n\nFigure 6.4. file: til/run/til-run.str\n\n```module til-run\nimports libstrategolib til-eval-stats\nstrategies\n\nio-til-run =\nio-wrap(eval-program)\n\neval-program =\n?Program(<eval-stats>)\n; <exit> 0\n```\n\nFigure 6.5. file: til/run/maak\n\n```#! /bin/sh -e\n\n# compile til-simplify\nstrc -i til-run.str -I ../sig -I ../sim -la stratego-lib -m io-til-run\n```\n\n## 6.5. Running TIL Programs\n\nNow it is time to try out the interpreter at our test1.til program that computes the factorial of its input. Note that the interpreter operates on a parsed and simplified version of the program; not directly on the text file.\n\nFigure 6.6. file: til/xmpl/run-test1\n\n```# run test1.til after parsing and simplification\necho \"10\" | ../run/til-run -i test1.sim.atil > test1.run\n```\n\nFigure 6.7. file: til/xmpl/test1.run\n\n```factorial of 10 is 3628800\n```" ]
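For readers without a Stratego toolchain, the shape of this interpreter (bottom-up evaluation of expressions, an updatable variable mapping standing in for the dynamic `EvalVar` rules, and statement evaluation for assignment and loops) can be sketched in plain Python. The tuple-based AST encoding below is invented for illustration; it is not the ATerm format the manual uses.

```python
# A Python analogue (not Stratego) of the interpreter's structure:
# expressions as nested tuples evaluated bottom-up, with a dict playing
# the role of the dynamic EvalVar rules.

def eval_exp(exp, env):
    if isinstance(exp, int):
        return exp
    if isinstance(exp, str):          # Var(x): look up, like EvalVar
        if exp not in env:
            raise RuntimeError(f"variable {exp} used before being defined")
        return env[exp]
    op, lhs, rhs = exp                # evaluate children first (bottomup)
    a, b = eval_exp(lhs, env), eval_exp(rhs, env)
    return {"add": a + b, "mul": a * b, "sub": a - b, "leq": a <= b}[op]

def eval_stat(stat, env):
    kind = stat[0]
    if kind == "assign":              # Assign: update the variable mapping
        env[stat[1]] = eval_exp(stat[2], env)
    elif kind == "while":             # While: re-test condition after the body
        while eval_exp(stat[1], env):
            for s in stat[2]:
                eval_stat(s, env)

# factorial of 10, in the same spirit as test1.til
env = {}
prog = [("assign", "i", 1), ("assign", "fac", 1),
        ("while", ("leq", "i", 10),
         [("assign", "fac", ("mul", "fac", "i")),
          ("assign", "i", ("add", "i", 1))])]
for s in prog:
    eval_stat(s, env)
print(env["fac"])  # → 3628800
```

Unlike the Stratego version, this sketch uses a host-language `while` loop directly instead of rewriting the loop to an `if`/`Goto` pair, and it makes no attempt to model block-local dynamic rule scopes.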
https://communitymedicine4all.com/2013/07/24/measures-of-central-tendency-introduction/
[ "# Measures of Central Tendency : Introduction\n\nOften, one encounters the term “Measures of central tendency” in a book on statistics (or an examination). One may also find mention of the term in a description of summary statistics.\n\nWhat is a measure of central tendency ?\n\nSimply put, it is a value that tries to summarize a given dataset. It does so by providing a central (or middle) value that (sort of) represents the entire dataset.\n\nImagine the marks scored by 10 students in an examination were as follows:\n\n20  20  20  20  20  20  20  20 20  20\n\nIf I asked you about the performance of the students, you could say that “student 1 scored 20 marks; student 2 scored 20 marks;……. student 10 scored 20 marks”. The problem with this strategy is  encountered when we have to deal with large datasets, of say, 100 values or more. It would take considerable time and effort to describe the entire dataset.\n\nTherefore, people tried to find a way by which they could convey the essence of the dataset.\n\nThe measures of central tendency do just that.\n\nIt is like describing a person in one word- “smart”/ “creative”, etc. In fact, we summarize things on a routine basis, often without realizing it, whether it be movies, books, classes, or just about anything.\n\nComing back to the example, one could simply say “the students scored 20 marks on average”.\n\nThis average value represents the entire dataset, and gives us an idea about the performance of the students.\n\nThe most common measures of central tendency are the Mean, Median and Mode.\n\nSummary:\n\nMeasures of Central Tendency are mathematical ways of summarizing numerical data into (a) single central value(s) that represent the entire dataset.\n\n## 2 thoughts on “Measures of Central Tendency : Introduction”\n\nThis site uses Akismet to reduce spam. Learn how your comment data is processed." ]
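The three measures named above can be computed directly with Python's standard library. The first data set is the one from the article (ten students scoring 20 marks); the second is invented here to show the three measures disagreeing:

```python
# Mean, median and mode via the standard statistics module.
import statistics

marks = [20] * 10
print(statistics.mean(marks))    # → 20, the "average" from the example

data = [1, 2, 2, 3, 9]           # invented: a skewed data set
print(statistics.mean(data))     # → 3.4 (pulled up by the outlier 9)
print(statistics.median(data))   # → 2
print(statistics.mode(data))     # → 2
```

The skewed example hints at why all three measures exist: the mean is sensitive to the outlier, while the median and mode summarize the "typical" value.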
https://discuss.pytorch.org/t/albert-model-display-and-copying-weights-inside-albert/87565
[ "# Albert Model Display and Copying weights inside Albert\n\n0\n\nI wanted to try something.\n\nThere are 2 experiments, one on Bert an one on Albert. The task is, Train Bert upto Layers 5 , keeping the other 7 fixed, and then during test time, copy the weights of the 5th layer onto layers 6-12. I was successfully able to do this, as Bert has seperate parameters, and I can manipulate them seperately.\n\nWhereas in Albert, the Architecture is such that all the weights are shared from layers 1-12, so, although I can train them till layer 5, they aren’t accessible seperately, so, I am unable to copy over the weights onto further layers.\n\nI tried printing the entire model, for both Bert and Albert.\n\n``````> (embeddings): BertEmbeddings(\n> (position_embeddings): Embedding(512, 768)\n> (token_type_embeddings): Embedding(2, 768)\n> (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\n> (dropout): Dropout(p=0.1, inplace=False)\n> )\n> (encoder): BertEncoder(\n> (layer): ModuleList(\n> (0): BertLayer(\n> (attention): BertAttention(\n> (self): BertSelfAttention(\n> (query): Linear(in_features=768, out_features=768, bias=True)\n> (key): Linear(in_features=768, out_features=768, bias=True)\n> (value): Linear(in_features=768, out_features=768, bias=True)\n> (dropout): Dropout(p=0.1, inplace=False)\n> )\n> (output): BertSelfOutput(\n> (dense): Linear(in_features=768, out_features=768, bias=True)\n> (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\n> (dropout): Dropout(p=0.1, inplace=False)\n> )\n> )\n> (intermediate): BertIntermediate(\n> (dense): Linear(in_features=768, out_features=3072, bias=True)\n> )\n> (output): BertOutput(\n> (dense): Linear(in_features=3072, out_features=768, bias=True)\n> (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\n> (dropout): Dropout(p=0.1, inplace=False)\n> )\n> )\n> (1): BertLayer(\n> (attention): BertAttention(\n> (self): BertSelfAttention(\n> (query): Linear(in_features=768, 
out_features=768, bias=True)\n> (key): Linear(in_features=768, out_features=768, bias=True)\n> (value): Linear(in_features=768, out_features=768, bias=True)\n> (dropout): Dropout(p=0.1, inplace=False)\n> )\n> (output): BertSelfOutput(\n> (dense): Linear(in_features=768, out_features=768, bias=True)\n> (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\n> (dropout): Dropout(p=0.1, inplace=False)\n> )\n> )\n> (intermediate): BertIntermediate(\n> (dense): Linear(in_features=768, out_features=3072, bias=True)\n> )\n> (output): BertOutput(\n> (dense): Linear(in_features=3072, out_features=768, bias=True)\n> (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\n> (dropout): Dropout(p=0.1, inplace=False)\n> )\n> )\n``````\n\nAnd so on, for layers uptil 11. (They are seperately accessible).\n\nFor Albert:\n\n``````> Model(\n> (albert): AlbertModel(\n> (embeddings): AlbertEmbeddings(\n> (position_embeddings): Embedding(512, 128)\n> (token_type_embeddings): Embedding(2, 128)\n> (LayerNorm): LayerNorm((128,), eps=1e-12, elementwise_affine=True)\n> (dropout): Dropout(p=0, inplace=False)\n> )\n> (encoder): AlbertTransformer(\n> (embedding_hidden_mapping_in): Linear(in_features=128, out_features=768, bias=True)\n> (albert_layer_groups): ModuleList(\n> (0): AlbertLayerGroup(\n> (albert_layers): ModuleList(\n> (0): AlbertLayer(\n> (full_layer_layer_norm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\n> (attention): AlbertAttention(\n> (query): Linear(in_features=768, out_features=768, bias=True)\n> (key): Linear(in_features=768, out_features=768, bias=True)\n> (value): Linear(in_features=768, out_features=768, bias=True)\n> (dropout): Dropout(p=0, inplace=False)\n> (dense): Linear(in_features=768, out_features=768, bias=True)\n> (LayerNorm): LayerNorm((768,), eps=1e-12, elementwise_affine=True)\n> )\n> (ffn): Linear(in_features=768, out_features=3072, bias=True)\n> (ffn_output): Linear(in_features=3072, out_features=768, bias=True)\n> )\n> 
)\n> )\n> )\n> )\n``````\n\nThe weights can’t be copied directly, as they are the same weights. (I want to have weights trained only till layer 5, and then copy layer 5 weights till layer 12, without training it explicitly. How do I go about this?" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.51324755,"math_prob":0.88506234,"size":4193,"snap":"2022-40-2023-06","text_gpt3_token_len":1209,"char_repetition_ratio":0.22368106,"word_repetition_ratio":0.39626557,"special_character_ratio":0.344622,"punctuation_ratio":0.25454545,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.992151,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-30T11:49:38Z\",\"WARC-Record-ID\":\"<urn:uuid:b741194d-ec5e-4a02-bbce-5b5c4df2b61a>\",\"Content-Length\":\"18171\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:baa30e7f-3da4-49ed-acbb-f4cda27c642b>\",\"WARC-Concurrent-To\":\"<urn:uuid:d2de6420-8b00-4db3-99c3-0a0e49d7b0c1>\",\"WARC-IP-Address\":\"159.203.145.104\",\"WARC-Target-URI\":\"https://discuss.pytorch.org/t/albert-model-display-and-copying-weights-inside-albert/87565\",\"WARC-Payload-Digest\":\"sha1:J2GY4FZOBADEGPFPPT7YMD2ACOZVCD3H\",\"WARC-Block-Digest\":\"sha1:D72NEJR6VTLSP3HSUMUN23HAOOOKXW4F\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499816.79_warc_CC-MAIN-20230130101912-20230130131912-00178.warc.gz\"}"}
https://math.stackexchange.com/questions/2080311/vectors-why-a-1-mathbfxb-1-mathbfyc-1-mathbfz-a-2-mathbfxb-2-mathbf
[ "# Vectors: Why $a_1\\mathbf{x}+b_1\\mathbf{y}+c_1\\mathbf{z}=a_2\\mathbf{x}+b_2\\mathbf{y}+c_2\\mathbf{z}\\implies a_1=b_1$ etc?\n\nI was solving this problem earlier:\n\nPoints $X$, $Y$ and $Z$ in have the (three dimensional) coordinate vectors $\\bf{x},\\bf{y},\\bf{z}$ respectively. Prove that the lines joining the vertices of $\\triangle XYZ$ to the midpoints of the opposite sides are concurrent.\n\nMy problem is that at some point in the problem I ended up with equations of the form $a_1\\mathbf{x}+b_1\\mathbf{y}+c_1\\mathbf{z}=a_2\\mathbf{x}+b_2\\mathbf{y}+c_2\\mathbf{z}$ and the only way I could progress was by assuming that this implies $a_1=a_2$, $b_1=b_2$, $c_1=c_2$. Unfortunately I don't clearly see why this is and I only assumed it was true to solve the problem. (I indeed got the correct answer of the vectors passing through the same point $\\frac{1}{3}(\\bf{x+y+z})$ (and hence being concurrent at this point).)\n\nOne way I tried to justify this (the equalities just mentioned) is by assuming (wlog) that the vectors x,y,z are not coplanar, and then this would mean that they form a basis for 3D space (according to my textbook), which means the above equalities do indeed hold. The reason I gave for being able to assume wlog that the vectors are not coplanar is that whether they are ccoplanar or not does not affect the conclusion of the problem, which is simply concerned with proving concurrency of certain lines. In other words, if the position vectors are coplanar then rotating the triangle so that the position vectors are no longer coplanar doesn't affect the triangle and therefore doesn't affect the conclusion of the problem.\n\nIs this reasoning correct, or is there another reason why the equalities hold?\n\nThanks\n\nEdit: My question is regarding the title, please ignore the centroid problem as I only mentioned it for context.\n\nEdit2: Here is how I arrived at the equation. 
First I let the midpoints of the size opposite $X$ be $X_1$, and similarly for $Y,Z$. Clearly, $\\vec X_1=\\frac{\\mathbf{y}+\\mathbf{z}}{2}$. The line $XX_1$ has direction $$\\vec{XX_1}=\\vec X_1-\\vec X=\\frac{\\mathbf{y}+\\mathbf{z}}{2}-\\mathbb{x}=\\frac{\\mathbf{y}+\\mathbf{z}-2\\mathbf{x}}{2}$$ so its equation is $$\\mathbf{r_x}=\\mathbf{x}+a(\\mathbf{y}+\\mathbf{z}-2\\mathbf{x})=(1-2a)\\mathbf{x}+a\\mathbf{y}+a\\mathbf{z}.$$ By symmetry $$\\mathbf{r_y}=b\\mathbf{x}+(1-2b)\\mathbf{y}+b\\mathbf{z},\\quad \\mathbf{r_z}=c\\mathbf{x}+c\\mathbf{y}+(1-2c)\\mathbf{z}.$$ We want the point where these lines meet, and for this we only need to calculate the point where a pair of them meet and then verify that the third line crosses that point. So we have to solve $$(1-2a)\\mathbf{x}+a\\mathbf{y}+a\\mathbf{z}=b\\mathbf{x}+(1-2b)\\mathbf{y}+b\\mathbf{z}$$ and here is where I met my problem, but as I said I just assumed (and tried to justify this assumption using the \"base vectors\" reasoning in the main post) that $$1-2a=b,\\quad a=1-2b,\\quad a=b$$ $$\\implies a=b=\\frac{1}{3},$$ i.e. the common point is $$\\frac{1}{3}(\\mathbf{x}+\\mathbf{y}+\\mathbf{z}).$$ It is easy to verify that this lies on the third line when its parameter is $c=\\frac{1}{3}$.\n\n• You can make that deduction if and only if the vectors $\\bf{x},\\bf{y},\\bf{z}$ are linearly independent. As you observed, this can be can diagnosed geometrically: the linear independence holds if and only if $X,Y,Z$ are not on the same plane as the origin. Unless $X,Y,Z$ are on the same line you can always achieve this by moving the origin to a suitable place. But, why would the location of the origin have anything to do with the truth of the claim?? This is a symptom that your approach, while basically ok, is using perhaps too many vectors. 
– Jyrki Lahtonen Jan 2 '17 at 12:32\n• (cont'd) Because the triangle is a 2-dimensional object (even in 3D-space), you should aim to write everything in terms of two known vectors intrinsic to the problem (such as two sides of your triangle). Otherwise you will inevitably run into this kind of difficulty that several combinations of coefficients $a,b,c$ may lead to identical vectors. – Jyrki Lahtonen Jan 2 '17 at 12:34\n• @JyrkiLahtonen: Thank you, I really appreciate the intuition you provided. – kesra Jan 2 '17 at 14:22\n• Three points in $\\mathbb R^3$ are always coplanar. There wouldn’t be a triangle to talk about otherwise. I’m guessing that you mean that they’re not all on a plane through the origin, which isn’t quite the same thing. – amd Jan 2 '17 at 20:41\n• @amd: The context is vectors: What I meant is that the position vectors of the points cannot be coplanar. (My original post says \"the vectors are not coplanar\".) – kesra Jan 3 '17 at 13:02\n\nYou don't mention your approach, but I think you've done an overly complicated approach.\n\nThe midpoints on the of the sides are ${X+Y\\over2}$, ${Y+Z\\over2}$ and $Z+X\\over2$, and the oposite corners are $Z$, $X$ and $Y$ respectively. Now take the point one third to the oposite corner and you get for example:\n\n$${2\\over3}{X+Y\\over2} + {1\\over3}Z = {X+Y+Z\\over3}$$\n\nDoing the same for the other midpoints and oposite vertex gives the same result.\n\nThe trick here is of course to realize that they will cross at ${X+Y+Z\\over3}$, but this can be seen as it would need it to be a linear combination of $X$, $Y$ and $Z$ and since such a line is for example $\\theta{X+Y\\over2}+(1-\\theta)Z$ the coefficient of $X$ and $Y$ must bu the same, similarily the coefficient of $Y$ and $Z$ must be the same. So we have $\\theta/2 = 1-\\theta$ which means that $\\theta = 2/3$.\n\n• Thank you, but your answer does not touch on my question at all.. 
– kesra Jan 2 '17 at 12:21\n• @kesra The problem I have with your question is that you didn't mention your approach. The question would be easier to understand if you mentioned how you arrived at your equation. – skyking Jan 2 '17 at 13:22\n\n$\\newcommand{\\Vec}[1]{\\mathbf{#1}}$In algebra and analysis, it's often best to write an equation of the form $X = Y$ in the form $X - Y = 0$. Here, $$a_{1}\\Vec{x} + b_{1}\\Vec{y} + c_{1}\\Vec{z} = a_{2}\\Vec{x} + b_{2}\\Vec{y} + c_{2}\\Vec{z} \\tag{1}$$ is equivalent, after rearranging, to $$(a_{1} - a_{2}) \\Vec{x} + (b_{1} - b_{2}) \\Vec{y} + (c_{1} - c_{2}) \\Vec{z} = \\Vec{0}. \\tag{2}$$ As Jyrki Lahtonen notes in the comments, an ordered triple of vectors $(\\Vec{x}, \\Vec{y}, \\Vec{z})$ is said to be linearly independent if\n\nFor all real $a$, $b$, $c$, the equation $a \\Vec{x} + b \\Vec{y} + c \\Vec{z} = \\Vec{0}$ implies $a = b = c = 0$.\n\nEquation (2) has this form with $a = a_{1} - a_{2}$, etc., so if your vectors are linearly independent (i.e., their tips are non-coplanar with the zero vector), then (2) implies $a_{1} = a_{2}$, etc., just as you say.\n\nThe deeper issue is, the tips of three vectors in an arbitrary vector space are coplanar and there's nothing in the problem to prevent the zero vector from lying in the plane of the triangle. In a meta way, this means (as JL notes) your proof \"contains too much infrastructure\". Skyking's argument avoids this technical issue; it even works if the vertices are collinear." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.859636,"math_prob":0.9994504,"size":2972,"snap":"2019-26-2019-30","text_gpt3_token_len":925,"char_repetition_ratio":0.17553908,"word_repetition_ratio":0.004854369,"special_character_ratio":0.28566623,"punctuation_ratio":0.07413509,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999627,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-22T09:57:42Z\",\"WARC-Record-ID\":\"<urn:uuid:8583ecaa-6eeb-4e7d-8238-574859b7d77d>\",\"Content-Length\":\"156384\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a77c076a-6112-4196-b21e-9e09dfcc676e>\",\"WARC-Concurrent-To\":\"<urn:uuid:8ebfd620-9148-4088-a19d-095c84d0ecd4>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/2080311/vectors-why-a-1-mathbfxb-1-mathbfyc-1-mathbfz-a-2-mathbfxb-2-mathbf\",\"WARC-Payload-Digest\":\"sha1:J73FKY5V2CNUJ4WQTMO3MGLVZBN3XTIP\",\"WARC-Block-Digest\":\"sha1:YKERWOVRGHPPPH2POIEFVIXZUAG2YWZS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195527907.70_warc_CC-MAIN-20190722092824-20190722114824-00419.warc.gz\"}"}
https://www.hpmuseum.org/forum/printthread.php?tid=12112
[ "", null, "(12C) Sums of Two Squares - Printable Version +- HP Forums (https://www.hpmuseum.org/forum) +-- Forum: HP Software Libraries (/forum-10.html) +--- Forum: General Software Library (/forum-13.html) +--- Thread: (12C) Sums of Two Squares (/thread-12112.html) (12C) Sums of Two Squares - Gamo - 01-06-2019 05:07 AM Which whole numbers are expressible as sums of two (integer) squares? This program solve the Sums of Two Squares. [ X^2 + Y^2 = N ] Given N program will find pair of X,Y that equal to N where X ≤ Y ---------------------------------------------------------- Procedure: 1.) N [R/S] display X [X<>Y] Y continue [R/S] if it is more than one solution and continue until steps 2.) shown mean finish. 2.) N [R/S] display 0.000000000 then 0.00 indicate that \"No Solution\" ---------------------------------------------------------- Example: X^2 + Y^2 = 41 N = 41 41 [R/S] 4 [X<>Y] 5 [R/S] \"0.000000000\" 0.00 Answer: X = 4 and Y = 5 ---------------------------------------------------------- X^2 + Y^2 = 76789 N = 76789 76789 [R/S] 135 [X<>Y] 242 [R/S] 150 [X<>Y] 233 [R/S] \"0.000000000\" 0.00 Answer: X = 135 and Y = 242 X = 150 and Y = 233 ----------------------------------------------------------- Program: Code: ```  01 STO 3 02 √X 03 INTG 04 STO 0 05 RCL 3 06  2 07  ÷ 08 √X 09 STO 1 10 RCL 0 11 RCL 1 12 X≤Y 13 GTO 23 14 RCL 1 15 FRAC 16 X=0 17 GTO 10 18 CLx 19 FIX 9 20 PSE 21 FIX 2 22 GTO 00 23 RCL 3 24 RCL 0 25 ENTER 26  x 27  - 28 √X 29 ENTER 30 INTG 31 X<>Y 32 X≤Y 33 GTO 39 34 RCL 0 35  1 36  - 37 STO 0 38 GTO 10 39 RCL 0 40 X<>Y 41 R/S 42 GTO 34``` Remark: Try this on 12C Emulator: N = 9876543210 Gamo" ]
[ null, "https://www.hpmuseum.org/mohpcf.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5799822,"math_prob":0.9875307,"size":1273,"snap":"2019-35-2019-39","text_gpt3_token_len":401,"char_repetition_ratio":0.27659574,"word_repetition_ratio":0.04639175,"special_character_ratio":0.56166536,"punctuation_ratio":0.1308017,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98011374,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-20T08:11:08Z\",\"WARC-Record-ID\":\"<urn:uuid:daf2b45c-a665-4345-a35d-cd8da46b6b19>\",\"Content-Length\":\"4480\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e80775c2-e4a4-4ab7-a5cd-8977e74922ed>\",\"WARC-Concurrent-To\":\"<urn:uuid:2af96603-e25f-446a-96b8-bec5e4e848d5>\",\"WARC-IP-Address\":\"209.197.117.170\",\"WARC-Target-URI\":\"https://www.hpmuseum.org/forum/printthread.php?tid=12112\",\"WARC-Payload-Digest\":\"sha1:PJ6O6P2J2QVNP4LZDFPB7FSQ256W5G7L\",\"WARC-Block-Digest\":\"sha1:ZS4S6SWWWLJ6JYZTTBALGNCP3UGZQS22\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027315258.34_warc_CC-MAIN-20190820070415-20190820092415-00015.warc.gz\"}"}
https://www.percentagecal.com/answer/%20%2063-is-what-percent-of-300
[ "#### Solution for 63 is what percent of 300:\n\n63:300*100 =\n\n( 63*100):300 =\n\n6300:300 = 21\n\nNow we have: 63 is what percent of 300 = 21\n\nQuestion: 63 is what percent of 300?\n\nPercentage solution with steps:\n\nStep 1: We make the assumption that 300 is 100% since it is our output value.\n\nStep 2: We next represent the value we seek with {x}.\n\nStep 3: From step 1, it follows that {100\\%}={300}.\n\nStep 4: In the same vein, {x\\%}={ 63}.\n\nStep 5: This gives us a pair of simple equations:\n\n{100\\%}={300}(1).\n\n{x\\%}={ 63}(2).\n\nStep 6: By simply dividing equation 1 by equation 2 and taking note of the fact that both the LHS\n(left hand side) of both equations have the same unit (%); we have\n\n\\frac{100\\%}{x\\%}=\\frac{300}{ 63}\n\nStep 7: Taking the inverse (or reciprocal) of both sides yields\n\n\\frac{x\\%}{100\\%}=\\frac{ 63}{300}\n\n\\Rightarrow{x} = {21\\%}\n\nTherefore, { 63} is {21\\%} of {300}.\n\n#### Solution for 300 is what percent of 63:\n\n300: 63*100 =\n\n(300*100): 63 =\n\n30000: 63 = 476.19\n\nNow we have: 300 is what percent of 63 = 476.19\n\nQuestion: 300 is what percent of 63?\n\nPercentage solution with steps:\n\nStep 1: We make the assumption that 63 is 100% since it is our output value.\n\nStep 2: We next represent the value we seek with {x}.\n\nStep 3: From step 1, it follows that {100\\%}={ 63}.\n\nStep 4: In the same vein, {x\\%}={300}.\n\nStep 5: This gives us a pair of simple equations:\n\n{100\\%}={ 63}(1).\n\n{x\\%}={300}(2).\n\nStep 6: By simply dividing equation 1 by equation 2 and taking note of the fact that both the LHS\n(left hand side) of both equations have the same unit (%); we have\n\n\\frac{100\\%}{x\\%}=\\frac{ 63}{300}\n\nStep 7: Taking the inverse (or reciprocal) of both sides yields\n\n\\frac{x\\%}{100\\%}=\\frac{300}{ 63}\n\n\\Rightarrow{x} = {476.19\\%}\n\nTherefore, {300} is {476.19\\%} of { 63}.\n\nCalculation Samples" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.84519976,"math_prob":0.9997843,"size":2156,"snap":"2022-27-2022-33","text_gpt3_token_len":750,"char_repetition_ratio":0.19330855,"word_repetition_ratio":0.4107143,"special_character_ratio":0.453154,"punctuation_ratio":0.14639175,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.999949,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-17T22:11:57Z\",\"WARC-Record-ID\":\"<urn:uuid:b33c9a22-7291-4ed8-9188-38e812e67cc6>\",\"Content-Length\":\"10358\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b26150b6-dcaa-43d5-9adf-150a40d2dc41>\",\"WARC-Concurrent-To\":\"<urn:uuid:24531ab8-0f87-4304-a14a-fa0a8bd98cf3>\",\"WARC-IP-Address\":\"217.23.5.136\",\"WARC-Target-URI\":\"https://www.percentagecal.com/answer/%20%2063-is-what-percent-of-300\",\"WARC-Payload-Digest\":\"sha1:RPKGDOAY6TE2NEFHP6LVORSUDC5JZTPX\",\"WARC-Block-Digest\":\"sha1:J3ETUKA3RDL7SG47467ZGSAMSBKUB3H7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882573118.26_warc_CC-MAIN-20220817213446-20220818003446-00281.warc.gz\"}"}
https://www.ams.org/journals/tran/2000-352-06/S0002-9947-00-02409-0/home.html
[ "", null, "", null, "", null, "ISSN 1088-6850(online) ISSN 0002-9947(print)\n\nInfinite convolution products and refinable distributions on Lie groups\n\nAuthor: Wayne Lawton\nJournal: Trans. Amer. Math. Soc. 352 (2000), 2913-2936\nMSC (1991): Primary 41A15, 41A58, 42C05, 42C15, 43A05, 43A15\nDOI: https://doi.org/10.1090/S0002-9947-00-02409-0\nPublished electronically: March 2, 2000\nMathSciNet review: 1638258\nFull-text PDF Free Access\n\nAbstract: Sufficient conditions for the convergence in distribution of an infinite convolution product $\\mu _1*\\mu _2*\\ldots$ of measures on a connected Lie group $\\mathcal G$ with respect to left invariant Haar measure are derived. These conditions are used to construct distributions $\\phi$ that satisfy $T\\phi = \\phi$ where $T$ is a refinement operator constructed from a measure $\\mu$ and a dilation automorphism $A$. The existence of $A$ implies $\\mathcal G$ is nilpotent and simply connected and the exponential map is an analytic homeomorphism. 
Furthermore, there exists a unique minimal compact subset $\\mathcal K \\subset \\mathcal G$ such that for any open set $\\mathcal U$ containing $\\mathcal K,$ and for any distribution $f$ on $\\mathcal G$ with compact support, there exists an integer $n(\\mathcal U,f)$ such that $n \\geq n(\\mathcal U,f)$ implies $\\operatorname {supp}(T^{n}f) \\subset \\mathcal U.$ If $\\mu$ is supported on an $A$-invariant uniform subgroup $\\Gamma ,$ then $T$ is related, by an intertwining operator, to a transition operator $W$ on $\\mathbb C(\\Gamma ).$ Necessary and sufficient conditions for $T^{n}f$ to converge to $\\phi \\in L^{2}$, and for the $\\Gamma$-translates of $\\phi$ to be orthogonal or to form a Riesz basis, are characterized in terms of the spectrum of the restriction of $W$ to functions supported on $\\Omega := \\mathcal K \\mathcal K^{-1} \\cap \\Gamma .$\n\n[Enhancements On Off] (What's this?)\n\nRetrieve articles in Transactions of the American Mathematical Society with MSC (1991): 41A15, 41A58, 42C05, 42C15, 43A05, 43A15\n\nRetrieve articles in all journals with MSC (1991): 41A15, 41A58, 42C05, 42C15, 43A05, 43A15" ]
[ null, "https://www.ams.org/images/remote-access-icon.png", null, "https://www.ams.org/publications/journals/images/journal.cover.tran.gif", null, "https://www.ams.org/publications/journals/images/open-access-green-icon.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.59161955,"math_prob":0.9965421,"size":12459,"snap":"2021-31-2021-39","text_gpt3_token_len":3922,"char_repetition_ratio":0.13095142,"word_repetition_ratio":0.03812825,"special_character_ratio":0.3458544,"punctuation_ratio":0.2801969,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99921566,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-25T20:35:45Z\",\"WARC-Record-ID\":\"<urn:uuid:a713345b-06af-41fb-a5c2-7de44fc3a2a4>\",\"Content-Length\":\"87131\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1b80f1d2-6905-481d-b7c6-265b5fc6e4f3>\",\"WARC-Concurrent-To\":\"<urn:uuid:73dacee5-b6f5-4dca-a8d7-5c80f2982089>\",\"WARC-IP-Address\":\"130.44.204.100\",\"WARC-Target-URI\":\"https://www.ams.org/journals/tran/2000-352-06/S0002-9947-00-02409-0/home.html\",\"WARC-Payload-Digest\":\"sha1:6563GQEGDGO7TFBAYZLFKORYDXZTLHP6\",\"WARC-Block-Digest\":\"sha1:HCJ6LQ4V77Z4J5UK2NKHU42FZ7ETXQML\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057775.50_warc_CC-MAIN-20210925202717-20210925232717-00199.warc.gz\"}"}
http://mfat.imath.kiev.ua/volumes/issues/?year=2006%CE%BDmber=2&number=2
[ "Brownian motion and Lévy processes in locally compact groups\n\nDavid Applebaum\n\nMethods Funct. Anal. Topology 12 (2006), no. 2, 101-112\n\nIt is shown that every L\\'{e}vy process on a locally compact group $G$ is determined by a sequence of one-dimensional Brownian motions and an independent Poisson random measure. As a consequence, we are able to give a very straightforward proof of sample path continuity for Brownian motion in $G$. We also show that every L\\'{e}vy process on $G$ is of pure jump type, when $G$ is totally disconnected.\n\nSome results on the space of holomorphic functions taking their values in b-spaces\n\nMethods Funct. Anal. Topology 12 (2006), no. 2, 113-123\n\nWe define a space of holomorphic functions $O_{1}(U,E/F)$, where $U$ is an open pseudo-convex subset of $\\Bbb{C}^{n}$, $E$ is a b-space and $F$ is a bornologically closed subspace of $E$, and we prove that the b-spaces $O_{1}(U,E/F)$ and $O(U,E)/O(U,F)$ are isomorphic.\n\nUniform equicontinuity for sequences of homomorphisms into the ring of measurable operators\n\nMethods Funct. Anal. Topology 12 (2006), no. 2, 124-130\n\nWe introduce a notion of uniform equicontinuity for sequences of functions with the values in the space of measurable operators. Then we show that all the implications of the classical Banach Principle on the almost everywhere convergence of sequences of linear operators remain valid in a non-commutative setting.\n\nGeneralized zeros and poles of $\\mathcal N_\\kappa$-functions: on the underlying spectral structure\n\nMethods Funct. Anal. Topology 12 (2006), no. 2, 131-150\n\nLet $q$ be a scalar generalized Nevanlinna function, $q\\in\\mathcal N_\\kappa$. Its gene alized zeros and poles (including their orders) are defined in terms of the function's operator representation. In this paper analytic properties associated with the underlying root subspaces and their geometric structures are investigated in terms of the local behaviour of the function. 
The main results and various characterizations are expressed by means of (local) moments, asymptotic expansions, and via the basic factorization of $q$. Also an inverse problem for recovering the geometric structure of the root subspace from an appropriate asymptotic expansion is solved.\n\nOn $*$-wildness of a free product of finite-dimensional $C^*$-algebras\n\nMethods Funct. Anal. Topology 12 (2006), no. 2, 151-156\n\nIn this paper we study the complexity of representation theory of free products of finite-dimensional $C^*$-algebras.\n\nA spectral analysis of some indefinite differential operators\n\nA. S. Kostenko\n\nMethods Funct. Anal. Topology 12 (2006), no. 2, 157-169\n\nWe investigate the main spectral properties of quasi--Hermitian extensions of the minimal symmetric operator $L_{\\rm min}$ generated by the differential expression $-\\frac{{\\rm sgn}\\, x}{|x|^{\\alpha}}\\frac{d^2}{dx^2} \\ (\\alpha>-1)$ in $L^2(\\mathbb R, |x|^{\\alpha})$. We describe their spectra, calculate the resolvents, and obtain a similarity criterion to a normal operator in terms of boundary conditions at zero. As an application of these results we describe the main spectral properties of the operator $\\frac{{\\rm sgn}\\, x}{|x|^\\alpha}\\left( -\\frac{d^2}{dx^2}+c \\delta \\right), \\, \\alpha>-1$.\n\nContinuous frame in Hilbert spaces\n\nMethods Funct. Anal. Topology 12 (2006), no. 2, 170-182\n\nIn this paper we introduce a mean of a continuous frame which is a generalization of discrete frames. Since a discrete frame is a special case of these frames, we expect that some of the results that occur in the frame theory will be generalized to these frames. For such a generalization, after giving some basic results and theorems about these frames, we discuss the following: dual to these frames, perturbation of continuous frames and robustness of these frames to an erasure of some elements.\n\nStrong matrix moment problem of Hamburger\n\nK. K. Simonov\n\nMethods Funct. Anal. 
Topology 12 (2006), no. 2, 183-196\n\nIn this paper we consider the strong matrix moment problem on the real line. We obtain a necessary and sufficient condition for uniqueness and find all the solutions for the completely indeterminate case. We use M.G. Krein’s theory of representations for Hermitian operators and technique of boundary triplets and the corresponding Weyl functions.\n\nOn existence of $*$-representations of certain algebras related to extended Dynkin graphs\n\nKostyantyn Yusenko\n\nMethods Funct. Anal. Topology 12 (2006), no. 2, 197-204\n\nFor $*$-algebras associated with extended Dynkin graphs, we investigate a set of parameters for which there exist representations. We give structure properties of such sets and a complete description for the set related to the graph $\\tilde D_4$." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8379844,"math_prob":0.99422014,"size":4731,"snap":"2019-43-2019-47","text_gpt3_token_len":1205,"char_repetition_ratio":0.113602705,"word_repetition_ratio":0.05957447,"special_character_ratio":0.24709363,"punctuation_ratio":0.12584269,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9956065,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-23T15:03:17Z\",\"WARC-Record-ID\":\"<urn:uuid:d7a49ac1-df9a-49fa-bc25-d9552ae5694c>\",\"Content-Length\":\"34947\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:af416440-d583-419c-a777-ab0826d087d3>\",\"WARC-Concurrent-To\":\"<urn:uuid:e21671e9-750f-4bf1-b442-7858bd5ab69b>\",\"WARC-IP-Address\":\"194.44.31.54\",\"WARC-Target-URI\":\"http://mfat.imath.kiev.ua/volumes/issues/?year=2006%CE%BDmber=2&number=2\",\"WARC-Payload-Digest\":\"sha1:33HVJZV3DANX7ZMKEP74U4PR7YAPR4AV\",\"WARC-Block-Digest\":\"sha1:QBMME66GDIBUOOLLXZYMX5NYD5Y5HYRH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570987834649.58_warc_CC-MAIN-20191023150047-20191023173547-00011.warc.gz\"}"}
https://www.semanticscholar.org/paper/Singularities-of-varieties-admitting-an-Broustet-H%C3%B6ring/1036c20e7c5dc5879653cdb80abc3f6d20de028a
[ "# Singularities of varieties admitting an endomorphism\n\n@article{Broustet2012SingularitiesOV,\ntitle={Singularities of varieties admitting an endomorphism},\nauthor={Ama{\\\"e}l Broustet and Andreas H{\\\"o}ring},\njournal={Mathematische Annalen},\nyear={2012},\nvolume={360},\npages={439-456}\n}\n• Published 23 October 2012\n• Mathematics\n• Mathematische Annalen\nLet $$X$$ be a normal variety such that $$K_X$$ is $$\\mathbb {Q}$$-Cartier, and let $$f:X \\rightarrow X$$ be a finite surjective morphism of degree at least two. We establish a close relation between the irreducible components of the locus of singularities that are not log-canonical and the dynamics of the endomorphism $$f$$. As a consequence we prove that if $$X$$ is projective and $$f$$ polarised, then $$X$$ has at most log-canonical singularities.\n22 Citations\nCharacterizations of toric varieties via polarized endomorphisms\n• Mathematics\nMathematische Zeitschrift\n• 2018\nLet X be a normal projective variety and $$f:X\\rightarrow X$$ a non-isomorphic polarized endomorphism. We give two characterizations for X to be a toric variety. First we show that if X is\nBuilding blocks of amplified endomorphisms of normal projective varieties\n• Sheng Meng\n• Mathematics\nMathematische Zeitschrift\n• 2019\nLet X be a normal projective variety. A surjective endomorphism $$f{:}X\\rightarrow X$$ is int-amplified if $$f^*L - L = H$$ for some ample Cartier divisors L and H. This is a\nOn endomorphisms of projective varieties with numerically trivial canonical divisors\nLet $X$ be a klt projective variety with numerically trivial canonical divisor. A surjective endomorphism $f:X\\to X$ is amplified (resp.~quasi-amplified) if $f^*D-D$ is ample (resp.~big) for some\nSurjective endomorphisms of projective surfaces -- the existence of infinitely many dense orbits\n• Mathematics\n• 2020\nLet $f \\colon X \\to X$ be a surjective endomorphism of a normal projective surface. When $\\operatorname{deg} f \\geq 2$, applying an (iteration of) $f$-equivariant minimal model program (EMMP), we\nTotally Invariant Divisors of Int-Amplified Endomorphisms of Normal Projective Varieties\n• Guolei Zhong\n• Mathematics\nThe Journal of Geometric Analysis\n• 2020\nWe consider an arbitrary int-amplified surjective endomorphism f of a normal projective variety X over $$\\mathbb {C}$$ and its $$f^{-1}$$-stable prime divisors. We extend the early result in\nNon-isomorphic endomorphisms of Fano threefolds\n• Mathematics\nMathematische Annalen\n• 2021\nLet $X$ be a smooth Fano threefold. We show that $X$ admits a non-isomorphic surjective endomorphism if and only if $X$ is either a toric variety or a product of $\\mathbb{P}^1$ and a del Pezzo\nSemi-group structure of all endomorphisms of a projective variety admitting a polarized endomorphism\n• Mathematics\n• 2018\nLet $X$ be a projective variety admitting a polarized (or more generally, int-amplified) endomorphism. We show: there are only finitely many contractible extremal rays; and when $X$ is\nSingularities of non-$\\mathbb{Q}$-Gorenstein varieties admitting a polarized endomorphism\nIn this paper, we discuss a generalization of log canonical singularities in the non-Q-Gorenstein setting. We prove that if a normal complex projective variety has a non-invertible polarized" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8185561,"math_prob":0.98897225,"size":6084,"snap":"2022-27-2022-33","text_gpt3_token_len":1620,"char_repetition_ratio":0.1875,"word_repetition_ratio":0.021953898,"special_character_ratio":0.22764629,"punctuation_ratio":0.06666667,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9994901,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-02T23:35:11Z\",\"WARC-Record-ID\":\"<urn:uuid:d72a8c3b-b1ff-455e-8af4-c90af833d68b>\",\"Content-Length\":\"323923\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e1903144-731f-4f6d-8f8f-b528803610e2>\",\"WARC-Concurrent-To\":\"<urn:uuid:886f8187-96c0-44c8-a0f0-d8d3ae83db09>\",\"WARC-IP-Address\":\"13.32.208.17\",\"WARC-Target-URI\":\"https://www.semanticscholar.org/paper/Singularities-of-varieties-admitting-an-Broustet-H%C3%B6ring/1036c20e7c5dc5879653cdb80abc3f6d20de028a\",\"WARC-Payload-Digest\":\"sha1:QIQSEUBPNFTHMYXN5FTHCP5EFE3EVV4N\",\"WARC-Block-Digest\":\"sha1:HSU7VH44NKFZHNSUAID7OISHX75LKML6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104205534.63_warc_CC-MAIN-20220702222819-20220703012819-00551.warc.gz\"}"}
https://rdrr.io/bioc/ImmuneSpaceR/src/tests/testthat/test-getGEMatrix.R
[ "# tests/testthat/test-getGEMatrix.R In ImmuneSpaceR: A Thin Wrapper around the ImmuneSpace Database\n\n```context(\"ISCon\\$getGEMatrix()\")\n\n# Helper Functions -------------------------------------------\nphenoCols <- c(\n\"participant_id\", \"study_time_collected\", \"study_time_collected_unit\",\n\"cohort\", \"cohort_type\", \"biosample_accession\", \"exposure_material_reported\",\n\"exposure_process_preferred\"\n)\n\ntest_EM <- function(EM, summary) {\nexpect_is(EM, \"ExpressionSet\")\nexpect_gt(ncol(Biobase::exprs(EM)), 0)\nexpect_gt(nrow(Biobase::exprs(EM)), 0)\nif (summary == TRUE) {\n# In summary, no gene is NA\nexpect_false(any(is.na(Biobase::fData(EM)\\$gene_symbol)))\n}\n}\n\ntest_PD <- function(PD, phenoCols) {\nexpect_is(PD, \"data.frame\")\nexpect_gt(nrow(PD), 0)\nexpect_true(all(phenoCols %in% colnames(PD)))\n}\n\n# Main Tests ------------------------------------------------\ntest_that(\"gets combined summary eset by default if at study level\", {\nEM <- SDY269\\$getGEMatrix()\ntest_EM(EM, summary = TRUE)\n})\n\ntest_that(\"gets TIV_2008 eSet non-summary\", {\nEM <- ALL\\$getGEMatrix(\nmatrixName = \"SDY269_PBMC_TIV_Geo\",\noutputType = \"normalized\"\n)\ntest_EM(EM, summary = FALSE)\n})\n\n# tests general raw output\ntest_that(\"gets TIV_2008 eSet raw\", {\nEM <- ALL\\$getGEMatrix(\nmatrixName = \"SDY269_PBMC_TIV_Geo\",\noutputType = \"raw\"\n)\ntest_EM(EM, summary = FALSE)\n})\n\n# ensures that constructExpressionSet is working ok\ntest_that(\"gets TIV_young eSet raw\", {\nEM <- ALL\\$getGEMatrix(\nmatrixName = \"SDY56_PBMC_Young\",\noutputType = \"raw\"\n)\ntest_EM(EM, summary = FALSE)\n})\n\ntest_that(\"handles multiple samples per subject * timepoint combination\", {\nwarningMsg <- tryCatch(\nALL\\$getGEMatrix(\noutputType = \"raw\",\nverbose = TRUE\n),\nwarning = function(w) {\nreturn(w)\n}\n)\n\nexpect_true(grepl(\"Averaging the expression values\", warningMsg))\n\n# should return matrix from cache\nEM <- ALL\\$getGEMatrix(\noutputType = 
\"raw\"\n)\ntest_EM(EM, summary = FALSE)\nexpect_equal(length(colnames(EM)), 164)\n# Note: orig matrix has 169 Biosamples, but 5 are assumed to be technical replicates\n})\n\ntest_that(\"gets TIV_2008 eSet summary\", {\nEM <- ALL\\$getGEMatrix(\nmatrixName = \"SDY269_PBMC_TIV_Geo\",\noutputType = \"summary\",\nannotation = \"latest\"\n)\ntest_EM(EM, summary = TRUE)\n})\n\ntest_that(\"get_multiple matrices non-summary\", {\nEM <- ALL\\$getGEMatrix(\nmatrixName = c(\"SDY269_PBMC_TIV_Geo\", \"SDY269_PBMC_LAIV_Geo\"),\noutputType = \"normalized\"\n)\ntest_EM(EM, summary = FALSE)\n})\n\ntest_that(\"get_multiple matrices summary\", {\nEM <- ALL\\$getGEMatrix(\nmatrixName = c(\"SDY269_PBMC_TIV_Geo\", \"SDY269_PBMC_LAIV_Geo\"),\noutputType = \"summary\",\nannotation = \"latest\"\n)\ntest_EM(EM, summary = TRUE)\n})\n\ntest_that(\"get_multiple matrices summary without cache error\", {\nEM <- ALL\\$getGEMatrix(\nmatrixName = c(\"SDY269_PBMC_TIV_Geo\", \"SDY269_PBMC_LAIV_Geo\"),\noutputType = \"summary\",\nannotation = \"latest\"\n)\ntest_EM(EM, summary = TRUE)\n})\n\ntest_that(\"get_multiple matrices summary with reload\", {\nEM <- ALL\\$getGEMatrix(\nmatrixName = c(\"SDY269_PBMC_TIV_Geo\", \"SDY269_PBMC_LAIV_Geo\"),\noutputType = \"summary\",\nannotation = \"latest\",\n)\ntest_EM(EM, summary = TRUE)\n})\n\ntest_that(\"get multiple matrices summary from different studies\", {\nEM <- ALL\\$getGEMatrix(\nmatrixName = c(\"SDY269_PBMC_TIV_Geo\", \"SDY180_WholeBlood_Grp2Pneunomax23_Geo\"),\noutputType = \"summary\",\nannotation = \"latest\"\n)\ntest_EM(EM, summary = TRUE)\n})\n\n# TODO: identify issue tested here ...\ntest_that(\"gets SDY300 eset without error in .constructExpressionSet\", {\nEM <- ALL\\$getGEMatrix(\nmatrixName = \"SDY300_otherCell_dcMonoFlu2011\"\n)\ntest_EM(EM, summary = TRUE)\n})\n\n# Test for handling more than 2 duplicates for a single biosample ID\ntest_that(\"gets SDY1364 eset without error in .constructExpressionSet\", {\nEM <- 
ALL\\$getGEMatrix(\noutputType = \"normalized\",\nannotation = \"latest\"\n)\ntest_EM(EM, summary = FALSE)\n})\n\n# Should load both matrices from cache\nexpect_message(ALL\\$getGEMatrix(\nmatrixName = c(\"SDY269_PBMC_TIV_Geo\", \"SDY180_WholeBlood_Grp2Pneunomax23_Geo\"),\noutputType = \"summary\",\nannotation = \"latest\"\n),\n\"(cache)|(Combining ExpressionSets)\",\nall = TRUE\n)\n\nexpect_message(ALL\\$getGEMatrix(\nmatrixName = \"SDY180_WholeBlood_Grp2Pneunomax23_Geo\",\noutputType = \"summary\",\nannotation = \"default\"\n),\nall = TRUE\n)\n\n# Should load eset from cache\nexpect_message(\nALL\\$getGEMatrix(\nmatrixName = \"SDY269_PBMC_TIV_Geo\",\noutputType = \"normalized\",\nannotation = \"latest\"\n),\n\"Returning SDY269_PBMC_TIV_Geo_normalized_latest_eset from cache\"\n)\n\n# Should load eset from cache without error if verbose uses\nexpect_message(\nALL\\$getGEMatrix(\nmatrixName = \"SDY269_PBMC_TIV_Geo\",\noutputType = \"normalized\",\nannotation = \"latest\",\nverbose = TRUE\n),\n\"Returning SDY269_PBMC_TIV_Geo_normalized_latest_eset from cache\"\n)\n\n# Should load matrix from cache and construct a new expressionset\nexpect_message(ALL\\$getGEMatrix(\nmatrixName = \"SDY269_PBMC_TIV_Geo\",\noutputType = \"normalized\",\nannotation = \"default\"\n),\nall = TRUE\n)\n\nexpect_message(ALL\\$getGEMatrix(\nmatrixName = \"SDY56_PBMC_Young\",\noutputType = \"normalized\"\n),\nall = TRUE\n)\n})\n\n# Use specific tests here to ensure the IS1 report will load correctly\ntest_that(\"get ImmSig Study - SDY212 with correct anno and summary\", {\nmats <- IS1\\$cache\\$GE_matrices\\$name[ grep(\"SDY212\", IS1\\$cache\\$GE_matrices\\$name) ]\nEM <- IS1\\$getGEMatrix(\nmatrixName = mats,\noutputType = \"raw\",\nannotation = \"ImmSig\"\n)\ntest_EM(EM, summary = FALSE)\nexpect_true(\"BS694717.1\" %in% colnames(Biobase::exprs(EM)))\nexpect_true(\"BS694717.1\" %in% Biobase::sampleNames(EM))\nexpect_true(all.equal(dim(Biobase::exprs(EM)), c(48770, 92)))\n})\n\ntest_that(\"get 
ImmSig Study - SDY67 fixing 'X' for 'FeatureId'\", {\nmats <- IS1\\$cache\\$GE_matrices\\$name[ grep(\"SDY67\", IS1\\$cache\\$GE_matrices\\$name) ]\nEM <- IS1\\$getGEMatrix(\nmatrixName = mats,\noutputType = \"raw\",\nannotation = \"ImmSig\"\n)\ntest_EM(EM, summary = FALSE)\n})\n\ntest_that(\"check pheno data.frame\", {\nEM <- ALL\\$getGEMatrix(\nmatrixName = \"SDY269_PBMC_TIV_Geo\",\noutputType = \"normalized\"\n)\nPD <- Biobase::pData(EM)\ntest_PD(PD, phenoCols)\n})\n```\n\n## Try the ImmuneSpaceR package in your browser\n\nAny scripts or data that you put into this service are public.\n\nImmuneSpaceR documentation built on Nov. 8, 2020, 4:55 p.m." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.56248134,"math_prob":0.9786084,"size":6608,"snap":"2020-45-2020-50","text_gpt3_token_len":1896,"char_repetition_ratio":0.18670502,"word_repetition_ratio":0.34350133,"special_character_ratio":0.30493343,"punctuation_ratio":0.16824645,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9936479,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-03T14:10:59Z\",\"WARC-Record-ID\":\"<urn:uuid:978aa003-f063-4053-b77f-ace7ec577e04>\",\"Content-Length\":\"72756\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8ef36ac6-5803-41c0-ad26-06977a813a85>\",\"WARC-Concurrent-To\":\"<urn:uuid:843c41e4-ba86-4b2b-8c59-abeec7a1b199>\",\"WARC-IP-Address\":\"51.81.83.12\",\"WARC-Target-URI\":\"https://rdrr.io/bioc/ImmuneSpaceR/src/tests/testthat/test-getGEMatrix.R\",\"WARC-Payload-Digest\":\"sha1:SA6GI56CAFIUI4WVJM4YHPM5KVL3CITH\",\"WARC-Block-Digest\":\"sha1:TIJUPTI2BD3PDV4ND5HDQIDXGZBLR5CG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141727782.88_warc_CC-MAIN-20201203124807-20201203154807-00226.warc.gz\"}"}
https://webwork.maa.org/moodle/mod/forum/discuss.php?d=5020
[ "## WeBWorK Problems\n\n### random numbers in arrays", null, "### random numbers in arrays\n\nby Zak Zarychta -\nNumber of replies: 2\n\nHi there, I want to set up an array that manipulates some numbers using generated random values to simulate experimental data.\n\nFor example, one row of a data table might look like the following, where xVal and yVal are respectively the independent and dependent variables, and yVal is perturbed by a random amount:\n\n@row0 = ($xVal,$yVal + (-1)**(random(1,20,1)*0.01*random(5,20,1)*$tVal); The problem is WW does not seem to like the function random in an array. Also, WW itself traps the standard perl rand function. Does anyone have any ideas on how to get around this? Thanks, Zak", null, "In reply to Zak Zarychta ### Re: random numbers in arrays by Glenn Rice - WW has no problem with random used in an array. I have done so many times in problems. You have a syntax error in the line you gave. Are you sure that is not what is causing the problem? The syntax error is an unmatched parenthesis. @row0 = ($xVal, $yVal + (-1)**(random(1,20,1)*0.01*random(5,20,1)*$tVal);\n^", null, "" ]
[ null, "https://webwork.maa.org/moodle/pluginfile.php/1111/user/icon/classic/f1", null, "https://webwork.maa.org/moodle/pluginfile.php/1007/user/icon/classic/f1", null, "https://webwork.maa.org/moodle/pluginfile.php/585/user/icon/classic/f1", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.876034,"math_prob":0.9860398,"size":579,"snap":"2023-40-2023-50","text_gpt3_token_len":144,"char_repetition_ratio":0.09565217,"word_repetition_ratio":0.0,"special_character_ratio":0.24525043,"punctuation_ratio":0.12605043,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98801106,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-26T01:28:49Z\",\"WARC-Record-ID\":\"<urn:uuid:f44360ee-008b-4f3f-9ec8-c563aefb7516>\",\"Content-Length\":\"81689\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:829c7651-0d18-4483-89e9-ad00bf25710c>\",\"WARC-Concurrent-To\":\"<urn:uuid:0c176d6b-790e-4762-be13-66b9c7cb325d>\",\"WARC-IP-Address\":\"34.204.106.157\",\"WARC-Target-URI\":\"https://webwork.maa.org/moodle/mod/forum/discuss.php?d=5020\",\"WARC-Payload-Digest\":\"sha1:LVMYHSZGOLH26PTSYE2TBICL4NWOVX5X\",\"WARC-Block-Digest\":\"sha1:6ULW4S4N5SPVR3K2BM2SNFX7HGMENAKT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510130.53_warc_CC-MAIN-20230926011608-20230926041608-00440.warc.gz\"}"}
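The WeBWorK thread above builds noisy data by flipping a sign with `(-1)**random(1,20,1)` and scaling by `0.01*random(5,20,1)*$tVal`. A minimal Python sketch of the same idea — the `ww_random` helper is my stand-in for WeBWorK's Perl `random(low, high, step)` macro, and all names here are illustrative, not from the thread:

```python
import random

def ww_random(low, high, step):
    """Stand-in for WeBWorK's random(low, high, step): a random value
    drawn from low, low+step, ..., high (inclusive)."""
    n_steps = int((high - low) / step)
    return low + step * random.randint(0, n_steps)

def noisy_row(x_val, y_val, t_val):
    """One data-table row: y is perturbed up or down by a small random
    fraction of t_val, mirroring the (-1)**k * 0.01 * ... term in the thread."""
    sign = (-1) ** ww_random(1, 20, 1)
    perturbation = sign * 0.01 * ww_random(5, 20, 1) * t_val
    return (x_val, y_val + perturbation)

random.seed(0)
print(noisy_row(2.0, 10.0, 4.0))
```

With `$tVal = 4`, the perturbation magnitude always lands between 0.01·5·4 = 0.2 and 0.01·20·4 = 0.8, so the simulated y-values stay near the true curve.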
http://www.ccarh.org/courses/253/handout/midiprotocol/
[ "# MIDI Communication Protocol\n\nMIDI information is transferred between controllers and synthesizers as a sequence of bytes. A byte is a number between 0 and 255. MIDI bytes are split into two basic groups of command bytes and data bytes:\n\n data bytes command bytes decimal: 0 -- 127 128 -- 255 binary: 0xxxxxxx 1xxxxxxx hexadecimal: 00 -- 7f 80 -- ff\n\nIt is often useful to view MIDI bytes in binary or hexadecimal form, so the table above lists both alternate notations for the numbers in the decimal range. In binary notation, the numbers 0 -- 127 all start with a zero in the top bit of the MIDI byte. Command bytes likewise start with a 1 bit in the highest position in the byte.\n\nData bytes are used to transmit things such as the note number, attack velocity, piano pedal positions, volume, pitch bend, and instrument numbers. Command bytes are the first byte of MIDI messages which are then followed by a fixed number of MIDI data bytes. For example, the MIDI message to turn on the note Middle C would be:\n\n```144 60 100\n```\n\nNotice that the first number is in the command byte range of 128-255. This must mean that it is a command -- which it is. Command byte 144 means turn on a note. This command requires two data bytes immediately following it. The first data byte indicates which note key to turn on, and the second data byte indicates how loud to play the note. In this case, the key number is 60, which is the key number for the pitch middle C (C4). The second data byte is called the attack velocity. An attack velocity of 100 is fairly loud; the maximum value is 127 and the minimum is 1.\n\n### MIDI Message Types\n\nMIDI commands can be further decomposed into a command type nibble (four bits) and a channel number nibble. In the case of the MIDI note-on message given above, here is the binary form of the command byte 144: 10010000. Split this number into four-bit segments: 1001,0000. The first nibble is 1001 which indicates the command type -- note-on command. The second nibble is 0000 which indicates the MIDI channel number the note is playing on. In this case 0000 indicates the first MIDI channel, which is also called MIDI channel one. It is usually most convenient to view MIDI command bytes in hexadecimal format. A hexadecimal digit is equal to a single nibble which is four digits of a binary number. Here is the MIDI command byte 144 displayed in hexadecimal and binary forms:\n\n command nibble channel nibble hex 9 0 binary 1001 0000\n\nA MIDI channel is similar to channels on a television set. A MIDI channel is used to play multiple instruments at the same time. Each instrument playing simultaneously would occupy a separate channel on the synthesizer. The instrument playing on a channel can be changed.\n\nHere is a table of conversions between binary, hexadecimal and decimal forms of numbers. This table is useful to keep track of the different ways of viewing numbers in MIDI bytes:\n\n```B H D | B H D | B H D | B H D\n0000 0 0 | 0100 4 4 | 1000 8 8 | 1100 C 12\n0001 1 1 | 0101 5 5 | 1001 9 9 | 1101 D 13\n0010 2 2 | 0110 6 6 | 1010 A 10 | 1110 E 14\n0011 3 3 | 0111 7 7 | 1011 B 11 | 1111 F 15\n```\n\nMIDI command nibbles must start with a 1 in binary format, therefore there are three binary digits left over for specifying a command. This means that there are eight possible MIDI command types. These commands are listed in the table below:\n\n command nibble command name data bytes data meaning 8 note-off 2 key #; release velocity 9 note-on 2 key #; attack velocity A aftertouch 2 key #; key pressure B control-change 2 controller #; controller data C patch-change 1 instrument # D channel-pressure 1 channel pressure E pitch-bend 2 lsb; msb F system-message 0 or variable none or sysex" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.887493,"math_prob":0.7293451,"size":3186,"snap":"2021-43-2021-49","text_gpt3_token_len":777,"char_repetition_ratio":0.14896291,"word_repetition_ratio":0.018181818,"special_character_ratio":0.2693032,"punctuation_ratio":0.083333336,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95061535,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-11-27T17:22:42Z\",\"WARC-Record-ID\":\"<urn:uuid:a3d7edf0-dc38-4370-9820-9e7b35fd786b>\",\"Content-Length\":\"6609\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6b1121ac-82e8-4571-9521-0d8ecf52201d>\",\"WARC-Concurrent-To\":\"<urn:uuid:e925d634-0d23-4be5-bc08-0f00fc5bd1a5>\",\"WARC-IP-Address\":\"171.67.229.81\",\"WARC-Target-URI\":\"http://www.ccarh.org/courses/253/handout/midiprotocol/\",\"WARC-Payload-Digest\":\"sha1:VJX234NU3PUCXUB2YANWTNOGZ4SUT4A7\",\"WARC-Block-Digest\":\"sha1:RX5VQKCSV6DRUEYMAU7W6JDCMT6QMUSG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964358208.31_warc_CC-MAIN-20211127163427-20211127193427-00230.warc.gz\"}"}
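The handout above packs each channel message as a command nibble plus a channel nibble. A short Python sketch of that bit-splitting (function names are mine, not from the handout); per the handout's command table, nibble 9 is note-on, so a note-on for middle C on channel one is the byte sequence 144 60 100:

```python
def parse_status(byte):
    """Split a MIDI status byte into (command nibble, channel nibble)."""
    assert 128 <= byte <= 255, "status bytes have the top bit set"
    return byte >> 4, byte & 0x0F

def note_on(channel, key, velocity):
    """Build a 3-byte note-on message (command nibble 0x9)."""
    return bytes([0x90 | channel, key, velocity])

msg = note_on(0, 60, 100)           # middle C (key 60) on MIDI channel one
command, channel = parse_status(msg[0])
print(command, channel, list(msg))  # 9 0 [144, 60, 100]
```

The same shift-and-mask pattern recovers the command and channel from any channel-voice status byte, e.g. `parse_status(0xB3)` gives the control-change nibble 11 on channel four.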
https://physics-network.org/what-is-the-formula-of-kinetic-energy-class-11/
[ "# What is the formula of kinetic energy class 11?\n\nE = (1/2)mv^2. This formula states the concept of kinetic energy.\n\n## What is a kinetic energy in physics?\n\nKinetic energy is a form of energy that an object or a particle has by reason of its motion. If work, which transfers energy, is done on an object by applying a net force, the object speeds up and thereby gains kinetic energy.\n\n## How do you calculate Ke in physics?\n\nAnswer: When a spring is stretched, the force exerted is proportional to the increase in length from the equilibrium length, according to Hooke’s Law. The spring constant can be calculated using the following formula: k = -F/x, where k is the spring constant.\n\n## What's kinetic energy units?\n\nThe units of kinetic energy are mass times the square of speed, or kg · m^2/s^2. But the units of force are mass times acceleration, kg · m/s^2, so the units of kinetic energy are also the units of force times distance, which are the units of work, or joules.\n\n## What is the unit of Ke?\n\nThe standard unit of kinetic energy is the joule, while the English unit of kinetic energy is the foot-pound.\n\n## What is Ke formula?\n\nKinetic energy is directly proportional to the mass of the object and to the square of its velocity: K.E. = (1/2) m v^2. If the mass has units of kilograms and the velocity units of meters per second, the kinetic energy has units of kilograms-meters squared per second squared.\n\n## What is spring constant k?\n\nThe proportional constant k is called the spring constant. It is a measure of the spring’s stiffness. When a spring is stretched or compressed, so that its length changes by an amount x from its equilibrium length, then it exerts a force F = -kx in a direction towards its equilibrium position.\n\n## What formula is 1/2kx^2?\n\nOther than Hooke’s Law, the equation for the potential energy function, U = (1/2)kx^2, is essentially used when determining the spring potential energy.\n\n## What are 4 types of kinetic energy?\n\n• Mechanical Energy. Mechanical energy is the energy that we can see.\n• Electrical Energy. Electrical energy is better known as electricity.\n• Light Energy (or Radiant Energy)\n• Thermal Energy.\n• Sound Energy.\n\n## What is kinetic energy simple?\n\nKinetic energy is the energy of motion, observable as the movement of an object, particle, or set of particles. Any object in motion is using kinetic energy: a person walking, a thrown baseball, a crumb falling from a table, and a charged particle in an electric field are all examples of kinetic energy at work.\n\n## What is the formula of kinetic energy and potential energy?\n\nAt a start, the potential energy = mgh and kinetic energy = zero because its velocity is zero. Total energy of the object = mgh. As it falls, its potential energy will change into kinetic energy. If v is the velocity of the object at a given instant, the kinetic energy = (1/2)mv^2.\n\n## What is the formula of energy?\n\nEnergy is defined as the capacity to do work. Formula. The energy stored in an object due to its position and height is known as potential energy and is given by the formula: P.E. = mgh.\n\n## How do you calculate kinetic energy examples?\n\nIn classical mechanics, kinetic energy (KE) is equal to half of an object’s mass (1/2*m) multiplied by the velocity squared. For example, if an object with a mass of 10 kg (m = 10 kg) is moving at a velocity of 5 meters per second (v = 5 m/s), the kinetic energy is equal to 125 Joules, or (1/2 * 10 kg) * (5 m/s)^2.\n\n## What is the unit velocity?\n\nVelocity is a vector expression of the displacement that an object or particle undergoes with respect to time. The standard unit of velocity magnitude (also known as speed) is the meter per second (m/s). Alternatively, the centimeter per second (cm/s) can be used to express velocity magnitude.\n\n## What is the formula for change in kinetic energy?\n\nThe work-energy theorem states that the total amount of work is equal to the change in kinetic energy and is given by the equation W_net = (1/2)m v_f^2 − (1/2)m v_i^2.\n\n## How do you find final kinetic energy?\n\nFinal kinetic energy KE = (1/2) m_1 v'_1^2 + (1/2) m_2 v'_2^2, in joules. For ordinary objects, the final kinetic energy will be less than the initial value. The only way you can get an increase in kinetic energy is if there is some kind of energy release triggered by the impact.\n\n## What are 5 kinetic energy examples?\n\n• Hydropower Plants.\n• Wind Mills.\n• Moving Car.\n• Bullet From a Gun.\n• Flying Airplane.\n• Walking & Running.\n• Cycling.\n• Rollercoasters.\n\n## What is the formula of kinetic energy Class 9?\n\nThe expression for kinetic energy is given as (1/2)mv^2, where ‘m’ is the mass of the body and ‘v’ is the speed of the object.\n\n## What is Hooke’s Law?\n\nHooke’s law is the law of elasticity discovered by the English scientist Robert Hooke in 1660; it states that, for relatively small deformations of an object, the displacement or size of the deformation is directly proportional to the deforming force or load.\n\n## What is Hooke’s Law example?\n\nHooke’s Law is a law that says the restoring force required to compress or stretch a spring is proportional to the distance the spring is deformed. Δx is the change in the spring’s position due to the deformation. The minus sign is there to show the restoring force is opposite of the deforming force.\n\n## What is k in SHM formula?\n\nThe letter K that is seen in several expressions related to Simple Harmonic Motion (SHM) is a constant. It is usually called the spring or force constant (N·m^-1). Was this answer helpful?\n\n## What is K in potential energy?\n\nk is the spring constant. It is a proportionality constant that describes the relationship between the strain (deformation) in the spring and the force that causes it.\n\n## What is K in electric potential energy?\n\nThe Coulomb constant, the electric force constant, or the electrostatic constant (denoted ke, k or K) is a proportionality constant in electrostatics equations. In SI base units it is equal to 8.9875517923(14)×10^9 kg⋅m^3⋅s^−4⋅A^−2.\n\n## What is the value of k in potential energy?\n\nV = electric potential energy. q = point charge. r = distance between any point around the charge to the point charge. k = Coulomb constant; k = 9.0 × 10^9 N⋅m^2/C^2.\n\n## What are the 7 types of energy?\n\nForms of energy include mechanical, chemical, electrical, electromagnetic, thermal, sound, and nuclear energy." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.897748,"math_prob":0.99445665,"size":6209,"snap":"2023-14-2023-23","text_gpt3_token_len":1481,"char_repetition_ratio":0.18791297,"word_repetition_ratio":0.025939178,"special_character_ratio":0.23900789,"punctuation_ratio":0.11496063,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99923086,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-05-30T17:37:34Z\",\"WARC-Record-ID\":\"<urn:uuid:96d1eff2-a44e-4e07-94c7-c8bffb04f44c>\",\"Content-Length\":\"92267\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0e256270-9186-4d96-a8c3-6a3b0834c711>\",\"WARC-Concurrent-To\":\"<urn:uuid:130bbd6b-fa66-41da-a6ef-846d94e7ca84>\",\"WARC-IP-Address\":\"104.21.85.59\",\"WARC-Target-URI\":\"https://physics-network.org/what-is-the-formula-of-kinetic-energy-class-11/\",\"WARC-Payload-Digest\":\"sha1:GRVUT262KCGUHPJ7IBPGC3FUJTCFXGZQ\",\"WARC-Block-Digest\":\"sha1:JIIAXHGIRVRJLCYXV7KXTP7ZUDE3EVWG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224646076.50_warc_CC-MAIN-20230530163210-20230530193210-00481.warc.gz\"}"}
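The worked numbers in the Q&A above (a 10 kg mass at 5 m/s carries 125 J) are easy to check in a few lines. A minimal Python sketch; the function names are mine, and the spring values k = 200 N/m, x = 0.1 m are my own illustration, not from the page:

```python
def kinetic_energy(mass_kg, speed_m_s):
    """KE = (1/2) m v^2, in joules."""
    return 0.5 * mass_kg * speed_m_s ** 2

def spring_potential_energy(k, x):
    """U = (1/2) k x^2 for a spring stretched x metres from equilibrium."""
    return 0.5 * k * x ** 2

print(kinetic_energy(10, 5))              # 125.0 J, matching the worked example
print(spring_potential_energy(200, 0.1))  # 1.0 J for k = 200 N/m, x = 0.1 m
```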
https://www.selfridges.com/HK/zh/cat/givenchy-g-link-gold-and-silver-toned-brass-drop-earrings_R03776706/?previewAttribute=711-golden%2Fsilvery
[ "以当地货币和语言购买\n\n• 澳大利亚 / 澳元 \\$\n• 加拿大 / 加元 \\$\n• 中国 / 人民币 ¥\n• 法国 / 欧元 €\n• 德国 / 欧元 €\n• 中国香港 / 港元 \\$\n• 爱尔兰 / 欧元 €\n• 意大利 / 欧元 €\n• 日本 / 日元 ¥\n• 科威特 / 美元 \\$\n• 中国澳门 / 港元 \\$\n• 荷兰 / 欧元 €\n• 卡塔尔 / 美元 \\$\n• 沙特阿拉伯 / 美元 \\$\n• 新加坡 / 新加坡元 \\$\n• 韩国 / 韩元 ₩\n• 西班牙 / 欧元 €\n• 台湾 / 新台币 \\$\n• 阿拉伯联合酋长国 / 美元 \\$\n• 英国 / 英镑 £\n• 美国 / 美元 \\$\n• 不符合您的要求?阅读更多\n• 简体中文\n• 英语\n• 英语\n• 简体中文\n• 英语\n• 英语\n• 英语\n• 简体中文\n• 英语\n• 英语\n• 英语\n• 英语\n• 英语\n• 简体中文\n• 英语\n• 英语\n• 英语\n• 英语\n• 简体中文\n• 英语\n• 英语\n• 英语\n• 简体中文\n• 英语\n• 英语\n• 简体中文\n• 英语\n• 英语\n• 简体中文\n\n国际送货\n\nselfridges.com 上几乎所有的商品均可提供国际配送服务,您的订单可发往全世界 130 个国家/地区,包括北美、澳洲、中东及中国。\n\n• 阿尔及利亚\n• 安道尔\n• 安提瓜和巴布达\n• 阿鲁巴\n• 澳大利亚\n• 奥地利\n• 阿塞拜疆\n• 巴林\n• 孟加拉国\n• 巴巴多斯\n• 白俄罗斯\n• 比利时\n• 伯利兹\n• 百慕大\n• 玻利维亚\n• 博兹瓦纳\n• 文莱\n• 保加利亚\n• 柬埔寨\n• 加拿大\n• 开曼群岛\n• 智利\n• 中国大陆\n• 哥伦比亚\n• 哥斯达黎加\n• 克罗地亚\n• 塞浦路斯\n• 捷克共和国\n• 丹麦\n• 多米尼克\n• 多米尼加共和国\n• 厄瓜多尔\n• 埃及\n• 萨尔瓦多\n• 爱沙尼亚\n• 芬兰\n• 法国\n• 法属圭亚那\n• 德国\n• 直布罗陀\n• 希腊\n• 格林纳达\n• 瓜德罗普岛\n• 危地马拉\n• 根西岛\n• 圭亚那\n• 洪都拉斯\n• 香港\n• 匈牙利\n• 冰岛\n• 印度\n• 印度尼西亚\n• 爱尔兰\n• 以色列\n• 意大利\n• 牙买加\n• 日本\n• 泽西岛\n• 约旦\n• 哈萨克斯坦\n• 肯尼亚\n• 科威特\n• 老挝\n• 拉脱维亚\n• 黎巴嫩\n• 莱索托\n• 列支敦士登\n• 立陶宛\n• 卢森堡\n• 澳门\n• 马来西亚\n• 马尔代夫\n• 马耳他\n• 马提尼克岛\n• 马约特岛\n• 墨西哥\n• 摩纳哥\n• 蒙特塞拉特\n• 摩洛哥\n• 缅甸\n• 纳米比亚\n• 荷兰\n• 新西兰\n• 尼加拉瓜\n• 尼日利亚\n• 挪威\n• 阿曼\n• 巴基斯坦\n• 巴拿马\n• 巴拉圭\n• 秘鲁\n• 菲律宾\n• 波兰\n• 葡萄牙\n• 波多黎各\n• 卡塔尔\n• 留尼汪岛\n• 罗马尼亚\n• 卢旺达\n• 圣基茨与尼维斯\n• 圣卢西亚\n• 圣马丁岛(法属)\n• 圣马力诺\n• 沙特阿拉伯\n• 塞尔维亚\n• 新加坡\n• 斯洛伐克\n• 斯洛文尼亚\n• 南非\n• 韩国\n• 西班牙\n• 斯里兰卡\n• 苏里南\n• 斯威士兰\n• 瑞典\n• 瑞士\n• 台湾\n• 坦桑尼亚\n• 泰国\n• 特立尼达和多巴哥\n• 土耳其\n• 乌干达\n• 乌克兰\n• 阿拉伯联合酋长国\n• 英国\n• 美国\n• 乌拉圭\n• 委内瑞拉\n• 越南\n\nGIVENCHY G 字链金银色黄铜吊坠耳环\n\n很抱歉!本款产品目前缺货\n\n• Givenchy 金色和银色黄铜耳坠\n\n• 100% 黄铜\n\n• 耳针闭合\n\n• 耳环长度:8.5 厘米\n\n• 吊坠风格,G 链设计,双色,吊坠雕刻品牌标识\n\n• 意大利制造\n\n• 请使用柔软干布\n\n• 非常抱歉,出于卫生原因,除非工艺有瑕疵,否则耳钉不能换货或退款。\n\n• 包装盒内,附带防尘袋\n\n• 轻盈无弹设计\n\n英国和欧洲\n\n\\$100.00\n• 无限英国定时、指定日和标准配送\n• 英国境内次日配送(英国时间下午 6 点前下单)\n• 无限欧盟地区标准配送\n• 免费退货\n• 不受最低消费金额限制\n\n全球\n\n\\$420.00\n• 订单金额超过\\$ 420.00英国时间,指定日期和标准交货时间不受限制\n• 订单金额超过\\$ 
420.00全球不限次数的送货\n\nRef: R03776706" ]
[ null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.99772304,"math_prob":0.54914975,"size":444,"snap":"2022-05-2022-21","text_gpt3_token_len":431,"char_repetition_ratio":0.2090909,"word_repetition_ratio":0.0,"special_character_ratio":0.4954955,"punctuation_ratio":0.014705882,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98218703,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-28T12:37:26Z\",\"WARC-Record-ID\":\"<urn:uuid:36b5937c-2ff6-4e04-8bbf-362df53adf8d>\",\"Content-Length\":\"269975\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5d8a6445-caf8-4e91-9a0e-3f4e93e780de>\",\"WARC-Concurrent-To\":\"<urn:uuid:d9808252-3947-4b12-894b-7b4399aee9ff>\",\"WARC-IP-Address\":\"104.18.21.144\",\"WARC-Target-URI\":\"https://www.selfridges.com/HK/zh/cat/givenchy-g-link-gold-and-silver-toned-brass-drop-earrings_R03776706/?previewAttribute=711-golden%2Fsilvery\",\"WARC-Payload-Digest\":\"sha1:OSW6N32AOPATFJSYJII3JJC6LSZDJN4N\",\"WARC-Block-Digest\":\"sha1:AQY5JVBWCQG2OTOEC277L7BXNTRM3CHK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320305494.6_warc_CC-MAIN-20220128104113-20220128134113-00093.warc.gz\"}"}
https://origin.geeksforgeeks.org/php-dsqueue-toarray-function/?ref=lbp
[ "# PHP Ds\\Queue toArray() Function\n\n• Last Updated : 23 Aug, 2019\n\nThe Ds\\Queue::toArray() Function in PHP is used to convert a Queue into an associative array. The values of the Queue are assigned to the array in the same order as they are present in the Queue.\n\nSyntax:\n\n```array public Ds\\Queue::toArray ( void )\n```\n\nParameters: This function does not accept any parameters.\n\nReturn Value: This function converts the Queue into an associative array and returns the array.\n\nThe program below illustrates the Ds\\Queue::toArray() Function in PHP:\n\n```php\n<?php\n// Declare a new Queue\n\\$q = new \\Ds\\Queue();\n\n// Add elements to the Queue\n\\$q->push(\"One\", 1);\n\\$q->push(\"Two\", 2);\n\\$q->push(\"Three\", 3);\n\necho \"The equivalent array is: \\n\";\nprint_r(\\$q->toArray());\n?>\n```\n\nOutput:\n\n```The equivalent array is:\nArray\n(\n    [0] => One\n    [1] => 1\n    [2] => Two\n    [3] => 2\n    [4] => Three\n    [5] => 3\n)\n```" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.60877866,"math_prob":0.54827696,"size":911,"snap":"2022-27-2022-33","text_gpt3_token_len":250,"char_repetition_ratio":0.14884233,"word_repetition_ratio":0.013422819,"special_character_ratio":0.31394073,"punctuation_ratio":0.17486338,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9691243,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-02T20:40:47Z\",\"WARC-Record-ID\":\"<urn:uuid:cf5b7edf-3657-4dd5-947b-fe830c88cb1a>\",\"Content-Length\":\"228559\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e040e4ce-51e6-4190-9ea0-3798bbbf0080>\",\"WARC-Concurrent-To\":\"<urn:uuid:84faf409-1c7c-4eac-b346-5c796ab765ea>\",\"WARC-IP-Address\":\"44.228.100.190\",\"WARC-Target-URI\":\"https://origin.geeksforgeeks.org/php-dsqueue-toarray-function/?ref=lbp\",\"WARC-Payload-Digest\":\"sha1:NBALU2WDAVGQZKOBFO3KZWTNIY3IRXGP\",\"WARC-Block-Digest\":\"sha1:VYZTAIM5CWTDEUR47QXNT334JKFBQDBO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104204514.62_warc_CC-MAIN-20220702192528-20220702222528-00573.warc.gz\"}"}
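The behaviour the page documents — `toArray()` lists the values in the same FIFO order as the queue — has a close analogue in Python. `Ds\Queue` itself is PHP-specific, so this is only an analogy sketched with `collections.deque`, not the PHP API:

```python
from collections import deque

q = deque()
for value in ("One", 1, "Two", 2, "Three", 3):
    q.append(value)    # enqueue at the back, like Ds\Queue::push()

as_array = list(q)     # like Ds\Queue::toArray(): same order as the queue
print(as_array)        # ['One', 1, 'Two', 2, 'Three', 3]
```

As with the PHP function, converting does not consume the queue: `q` still holds all six values afterwards, front first.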
https://www.acmicpc.net/problem/11499
[ "| Time limit | Memory limit | Submissions | Accepted | Solvers | Ratio |\n| --- | --- | --- | --- | --- | --- |\n| 2 s | 256 MB | 369 | 62 | 50 | 19.380% |\n\n## Problem\n\nA histogram is a simple rectilinear polygon whose boundary consists of two chains such that the upper chain is monotone with respect to the horizontal axis and the lower chain is a horizontal line segment, called the base segment (See Figure 1).", null, "Figure 1. A histogram and its base segment (v0, v1)\n\nLet P be a histogram specified by a list (v0, v1, … , vn-1) of n vertices in the counterclockwise order along the boundary such that its base segment is (v0, v1). An edge ei is a line segment connecting two vertices vi and vi+1, where i = 0, 1, … , n − 1 and vn = v0.\n\nA path inside P is a simple path which does not intersect the exterior of P. The length of the path is defined as the sum of the Euclidean lengths of the line segments of the path. The distance between two points p and q of P is the length of the shortest path inside P between p and q. Your task is to find the distance between v0 and each point of a given set S of points on the boundary of P. A point of the set S is denoted by p(k, d), which represents a point q on the edge ek such that d is the distance between vk and q.\n\nIn the histogram of Figure 1, the shortest path between v0 and q1 = p(10, 2) is a polygonal chain connecting v0, v14, v12 and q1 in that order, and its length is 8.595242. The shortest path between v0 and q2 = p(1, 1) is a segment directly connecting v0 and q2 with length 15.033296.\n\nGiven a histogram P with n vertices and a set S of m points on the boundary of P, write a program to find the distances between v0 and all points of S.\n\n## Input\n\nYour program is to read from standard input. The input consists of T test cases. The number of test cases T is given in the first line of the input. Each test case starts with a line containing an integer, n (4 ≤ n ≤ 100,000), where n is the number of vertices of a histogram P = (v0, v1, … , vn-1).
In the following n lines, each of the n vertices of P is given line by line from v0 to vn-1. Each vertex is represented by two numbers, which are the x-coordinate and the y-coordinate of the vertex, respectively. Each coordinate is given as an integer between 0 and 1,000,000, inclusive. Notice that (v0, v1) is the base segment. The next line contains an integer m (1 ≤ m ≤ 100,000) which is the size of the set S given in your task. In the following m lines, each point p(k, d) of S is given line by line, represented by two integers k and d, where 0 ≤ k ≤ n − 1 and 0 ≤ d < the length of edge ek. All points in the set S are distinct.\n\n## Output\n\nYour program is to write to standard output. Print exactly one line for each test case. The line should contain exactly one real value which is the sum of the distances between v0 and all points of S. Your output must contain the first digit after the decimal point, rounded off from the second digit. If each result is within an error range of 0.1, it will be considered correct. The Euclidean distance between two points p = (x1, y1) and q = (x2, y2) is $$\\sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}$$\n\n## Sample Input 1\n\n2\n16\n0 0\n15 0\n15 4\n13 4\n13 6\n10 6\n10 2\n7 2\n7 5\n6 5\n6 7\n3 7\n3 3\n2 3\n2 1\n0 1\n2\n10 2\n1 1\n8\n100000 100000\n400000 100000\n400000 200000\n300000 200000\n300000 300000\n200000 300000\n200000 200000\n100000 200000\n8\n1 0\n2 0\n3 0\n4 0\n5 0\n6 0\n7 0\n1 50000\n\n\n## Sample Output 1\n\n23.6\n1909658.1" ]
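This is not a solution to the problem, only a check of the two distances quoted in the statement: q1 = p(10, 2) lies on edge e10 at distance 2 from v10, and q2 = p(1, 1) lies on e1 at distance 1 from v1. A small Python sketch using the first test case of Sample Input 1:

```python
import math

def dist(p, q):
    # Euclidean distance between two points
    return math.hypot(q[0] - p[0], q[1] - p[1])

def point_on_edge(vk, vk1, d):
    # Point at distance d from vk along the edge (vk, vk1)
    t = d / dist(vk, vk1)
    return (vk[0] + t * (vk1[0] - vk[0]), vk[1] + t * (vk1[1] - vk[1]))

# Vertices of the histogram from Sample Input 1 (first test case)
v = [(0,0),(15,0),(15,4),(13,4),(13,6),(10,6),(10,2),(7,2),(7,5),(6,5),
     (6,7),(3,7),(3,3),(2,3),(2,1),(0,1)]

# q1 = p(10, 2): 2 units from v10 along edge e10 = (v10, v11)
q1 = point_on_edge(v[10], v[11], 2)
# Stated shortest path: v0 -> v14 -> v12 -> q1
path1 = dist(v[0], v[14]) + dist(v[14], v[12]) + dist(v[12], q1)
print(round(path1, 6))  # 8.595242, as in the statement

# q2 = p(1, 1): 1 unit from v1 along edge e1 = (v1, v2)
q2 = point_on_edge(v[1], v[2], 1)
print(round(dist(v[0], q2), 6))  # 15.033296
```

The helper names are made up for the check; the actual task still requires computing shortest paths inside the polygon for arbitrary query points.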
[ null, "https://onlinejudgeimages.s3-ap-northeast-1.amazonaws.com/problem/11499/1.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90185094,"math_prob":0.9980359,"size":3360,"snap":"2022-05-2022-21","text_gpt3_token_len":1049,"char_repetition_ratio":0.12812872,"word_repetition_ratio":0.025604552,"special_character_ratio":0.34613097,"punctuation_ratio":0.09664948,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99840206,"pos_list":[0,1,2],"im_url_duplicate_count":[null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-17T17:53:36Z\",\"WARC-Record-ID\":\"<urn:uuid:2535957c-2c5e-4a62-92c4-b9936ddf53bb>\",\"Content-Length\":\"28680\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f0a8b6ab-0918-4ef9-9b46-587bb12d52e5>\",\"WARC-Concurrent-To\":\"<urn:uuid:8376b80c-68a0-48bf-9179-5003bcad574b>\",\"WARC-IP-Address\":\"35.74.183.226\",\"WARC-Target-URI\":\"https://www.acmicpc.net/problem/11499\",\"WARC-Payload-Digest\":\"sha1:HWCG5363GXRIQHFW7Y6QRK5DEE6GYINF\",\"WARC-Block-Digest\":\"sha1:MAMGO4DZFGHUTUQIR4PXORDZ5T52KNS4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662519037.11_warc_CC-MAIN-20220517162558-20220517192558-00693.warc.gz\"}"}
https://math.stackexchange.com/questions/660573/given-positive-real-numbers-x-y-how-to-demonstrate-that-is-defined-as-the-maximu/660578
[ "# Given positive real numbers x, y, how to demonstrate that $\\sqrt{x^2+y^2}$ is at least the maximum of x and y?\n\nIs it true that, given positive real numbers $x,y$, we have\n\n$$\\sqrt{x^2 + y^2} \\geq \\max\\{ x, y \\}$$\n\nI can't find a counter-example, although it seems it is true... Any comments?\n\n• This is indeed true and follows directly from the fact that $y^2,x^2\\ge0$ hence $|x|\\sqrt{1+(y/x)^2}\\ge |x|$ (likewise the same is true for $y$), hence the result Feb 2, 2014 at 10:57\n• I did it now! Thank you very much for your help! Feb 2, 2014 at 12:22\n\nWe know that\n\n$${x^2} + \\underbrace {{y^2}}_{ \\ge 0} \\ge {x^2}$$\n\n$$\\underbrace {{x^2}}_{ \\ge 0} + {y^2} \\ge {y^2}$$\n\nSo, taking the square root on both sides of each expression, we get\n\n$$\\sqrt {{x^2} + {y^2}} \\ge \\left| x \\right| \\ge x$$\n\n$$\\sqrt {{x^2} + {y^2}} \\ge \\left| y \\right| \\ge y$$\n\nThus\n\n$$\\sqrt {{x^2} + {y^2}} \\ge \\max \\left\\{ {x,y} \\right\\}.$$\n\nYes. To see this, note that $$\\tag{1}\\sqrt{x^2+y^2}\\geq \\sqrt{x^2}=x$$ since $x$ is positive. Similarly, we have $$\\tag{2}\\sqrt{x^2+y^2}\\geq \\sqrt{y^2}=y$$ since $y$ is positive. Combining $(1)$ and $(2)$, we have $$\\sqrt{x^2+y^2}\\geq\\max\\{x,y\\}.$$" ]
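The argument above can also be sanity-checked numerically; a small Python sketch (illustrative only, the inequality is of course proved, not tested, by the answers):

```python
import math

def check(x, y):
    # sqrt(x^2 + y^2) dominates both x and y, hence their maximum
    return math.hypot(x, y) >= max(x, y)

pairs = [(1, 1), (0.5, 3), (2.0, 0.001), (10, 7)]
print(all(check(x, y) for x, y in pairs))  # True
```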
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8274253,"math_prob":1.0000069,"size":1391,"snap":"2023-40-2023-50","text_gpt3_token_len":471,"char_repetition_ratio":0.111031,"word_repetition_ratio":0.3108108,"special_character_ratio":0.36232927,"punctuation_ratio":0.10927153,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000095,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-02T23:16:23Z\",\"WARC-Record-ID\":\"<urn:uuid:1268f23b-472f-4046-90c9-5080528e0c6a>\",\"Content-Length\":\"152655\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f5829cee-2d70-4164-b1c3-1632b169225d>\",\"WARC-Concurrent-To\":\"<urn:uuid:3b350b77-d9e1-465e-925a-80466fed4608>\",\"WARC-IP-Address\":\"104.18.43.226\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/660573/given-positive-real-numbers-x-y-how-to-demonstrate-that-is-defined-as-the-maximu/660578\",\"WARC-Payload-Digest\":\"sha1:TLIVL5AYLYCX6GY2VXICBOY2OGPCCTXC\",\"WARC-Block-Digest\":\"sha1:IK5QLHUY47XNBZGAWESY4J74DLRG5SQB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100452.79_warc_CC-MAIN-20231202203800-20231202233800-00548.warc.gz\"}"}
https://discourse.julialang.org/t/3d-matrix-vs-array-of-2d-matrices-using-comprehensions/47688
[ "", null, "# 3D Matrix vs Array of 2D Matrices using Comprehensions\n\nI am trying to populate a large, dense 3D matrix (3000 x 18 x 1000). I tried iterating over all positions of the 3D matrix (testing various row/column iteration orders) - see Attempt 1. I also tried array comprehensions on an array of matrices - see Attempt 2. The timing for both is about the same (see below). Does anyone have suggestions for how to further speed this up? The examples below have the first dimension at 50 instead of 3000, but I need to scale up to 3000 (see the `pointNumber` dimension in the code).\n\nAttempt 1 - iterate over all positions of 3D array with nested for loop\n\nSummary\n``````using LinearAlgebra\nusing BenchmarkTools\n\nfunction greensMatrix3D1(matchCoordinates,axisCoordinates,zArray)\n\ngreensMatrix = zeros(Float64,length(matchCoordinates),length(axisCoordinates),length(zArray))\n\nfor v = 1:length(matchCoordinates),q = 1:length(axisCoordinates),k=1:length(zArray)\nrvec = matchCoordinates[v] - [axisCoordinates[q][1],axisCoordinates[q][2],zArray[k]]\nrivec = rvec + [0, 2*axisCoordinates[q][2],0]\nr = norm(rvec)\nri = norm(rivec)\ngreensMatrix[v,q,k] = (1/r - 1/ri)\nend\n\nreturn greensMatrix\nend\n\npointNumber = 50 #target 3000\nmatchCoordinates = [[10,i*0.02,0] for i in 1:pointNumber]\naxisCoordinates = [[i-1,10,0] for i in 1:18]\nzArray = range(-100,100,length=1024)\n\n@btime output1 = greensMatrix3D1(matchCoordinates,axisCoordinates,zArray)\n\n``````\n\nAttempt 2 - use comprehensions and an array of matrices\n\nSummary\n``````using LinearAlgebra\n\nfunction oneGreens(xyz1,xyz2)\nrvec = xyz1 - xyz2\nrivec = rvec + [0, 2*xyz2[2],0]\nr = norm(rvec)\nri = norm(rivec)\ngreensMatrix = (1/r - 1/ri)\nreturn greensMatrix\nend\n\nfunction greensMatrix3D(matchCoordinates,axisCoordinates,zArray)\n\ngreensMatrixArray = [zeros(Float64,length(matchCoordinates),length(axisCoordinates)) for _ in 1:length(zArray)]\nfor i = 1:length(zArray)\naxisTempCoordinates = 
axisCoordinates\nsetindex!.(axisTempCoordinates,zArray[i],3)\ngreensMatrixArray[i] = [oneGreens(j,k) for j in matchCoordinates, k in axisTempCoordinates]\nend\n\nreturn greensMatrixArray\nend\n\npointNumber = 50 #target 3000\nmatchCoordinates = [[10.0,i*0.02,0.0] for i in 1:pointNumber]\naxisCoordinates = [[i-1.0,10.0,0.0] for i in 1:18]\nzArray = range(-100.0,100.0,length=1024)\n\n@btime output2 = greensMatrix3D(matchCoordinates,axisCoordinates,zArray)\n\n``````\n\nThe timing results for the above are:\nAttempt 1:>>> `1.826 s (8294402 allocations: 513.28 MiB)`\nAttempt 2:>>> `1.673 s (7375873 allocations: 422.41 MiB)`\n\nWhen you see that many allocations from your benchmark, that should be the first clue that something is sub-optimal. In particular, your function allocates brand new arrays in several places in your inner loop:\n\n``````rvec = matchCoordinates[v] - [axisCoordinates[q][1],axisCoordinates[q][2],zArray[k]]\n``````\n\nThis allocates two new arrays, one holding the elements on the right-hand-side of the `-` and then another one for the result of the subtraction.\n\n``````rivec = rvec + [0, 2*axisCoordinates[q][2],0]\n``````\n\nLikewise, this line allocates two more arrays. 
Since this is happening inside your loop, you end up seeing a huge number of allocation events when filling your whole matrix.\n\nSince your matches and axes consist of a large number of small arrays, this is a textbook use case for https://github.com/JuliaArrays/StaticArrays.jl\n\nFor example, by storing the `matchCoordinates` and `axisCoordinates` as static arrays (`SVector`s, to be specific) and constructing more static arrays inside the loop instead of plain arrays, we can reduce the allocations from 8294402 to just 2:\n\n``````using StaticArrays\n\nfunction greensMatrix3D3(matchCoordinates,axisCoordinates,zArray)\n\ngreensMatrix = zeros(Float64,length(matchCoordinates),length(axisCoordinates),length(zArray))\n\nfor v = 1:length(matchCoordinates),q = 1:length(axisCoordinates),k=1:length(zArray)\nrvec = matchCoordinates[v] - SVector(axisCoordinates[q][1], axisCoordinates[q][2], zArray[k])\nrivec = rvec + SVector(0, 2*axisCoordinates[q][2], 0)\nr = norm(rvec)\nri = norm(rivec)\ngreensMatrix[v,q,k] = (1/r - 1/ri)\nend\n\nreturn greensMatrix\nend\n\npointNumber = 50 #target 3000\nmatchCoordinates = [SVector(10, i * 0.02, 0) for i in 1:pointNumber]\naxisCoordinates = [SVector(i-1, 10, 0) for i in 1:18]\nzArray = range(-100,100,length=1024)\n@btime greensMatrix3D3($matchCoordinates, $axisCoordinates, $zArray);\n``````\n\nOutput:\n\n`````` 7.016 ms (2 allocations: 7.03 MiB)\n``````\n\nThat’s already 250 times faster, but there’s even more we can do.\n\nAdding `@inbounds` helps a little bit:\n\n``````@inbounds for v = 1:length(matchCoordinates),q = 1:length(axisCoordinates),k=1:length(zArray)\n...\nend\n``````\n\ngiving a new time of `6.872 ms (2 allocations: 7.03 MiB)`.\n\nMuch more helpful is fixing the order of the nested loops. 
It turns out that your loop order was exactly backwards (see https://docs.julialang.org/en/v1/manual/performance-tips/#man-performance-column-major ), but it’s easy to fix:\n\n``````function greensMatrix3D5(matchCoordinates,axisCoordinates,zArray)\n\ngreensMatrix = zeros(Float64,length(matchCoordinates),length(axisCoordinates),length(zArray))\n\n@inbounds for k=1:length(zArray), q = 1:length(axisCoordinates), v = 1:length(matchCoordinates)\nrvec = matchCoordinates[v] - SVector(axisCoordinates[q][1], axisCoordinates[q][2], zArray[k])\nrivec = rvec + SVector(0, 2*axisCoordinates[q][2], 0)\nr = norm(rvec)\nri = norm(rivec)\ngreensMatrix[v,q,k] = (1/r - 1/ri)\nend\n\nreturn greensMatrix\nend\n\npointNumber = 50 #target 3000\nmatchCoordinates = [SVector(10, i * 0.02, 0) for i in 1:pointNumber]\naxisCoordinates = [SVector(i-1, 10, 0) for i in 1:18]\nzArray = range(-100,100,length=1024)\n@btime greensMatrix3D5($matchCoordinates, $axisCoordinates, $zArray);\n``````\n\nThat gives a final result of:\n\n`````` 3.301 ms (2 allocations: 7.03 MiB)\n``````\n\nor about 500 times faster than the original function. There’s probably even more that can be done, but this is a pretty good return on a small investment of effort.\n\n9 Likes\n\nBy the way, with the `StaticArrays` change and the fixed loop order, computing 3000 points takes just 276ms:\n\n``````pointNumber = 3000 #target 3000\nmatchCoordinates = [SVector(10, i * 0.02, 0) for i in 1:pointNumber]\naxisCoordinates = [SVector(i-1, 10, 0) for i in 1:18]\nzArray = range(-100,100,length=1024)\n@btime greensMatrix3D5($matchCoordinates, $axisCoordinates, $zArray);\n``````\n\noutput:\n\n`````` 276.100 ms (2 allocations: 421.88 MiB)\n``````\n7 Likes\n\nExcellent. Thank you!\n\nI’m trying to understand this column-major concept a bit better. If the rows and columns are of unequal size and we have flexibility in how those are assigned, is it better to assign the larger dimension to rows?" ]
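The column-major layout behind the loop-order fix discussed in the thread can be illustrated outside Julia. A Python sketch of the 0-based offset arithmetic (Julia itself is 1-based and column-major; this only shows the layout idea, not Julia's actual indexing):

```python
def column_major_offset(i, j, k, nrows, ncols):
    # 0-based linear offset of element (i, j, k) in a column-major
    # nrows x ncols x nslices array: the first index varies fastest.
    return i + nrows * (j + ncols * k)

nrows, ncols = 4, 3

# Walking i in the innermost loop touches consecutive memory offsets:
offsets = [column_major_offset(i, 0, 0, nrows, ncols) for i in range(nrows)]
print(offsets)  # [0, 1, 2, 3] -- contiguous, cache-friendly

# Walking k innermost jumps by nrows * ncols elements each step:
strided = [column_major_offset(0, 0, k, nrows, ncols) for k in range(3)]
print(strided)  # [0, 12, 24]
```

This is why the corrected loop in the reply runs `v` (the first index) innermost.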
[ null, "https://aws1.discourse-cdn.com/business5/uploads/julialang/original/2X/1/12829a7ba92b924d4ce81099cbf99785bee9b405.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.58951795,"math_prob":0.99736345,"size":2417,"snap":"2020-45-2020-50","text_gpt3_token_len":717,"char_repetition_ratio":0.20969747,"word_repetition_ratio":0.07482993,"special_character_ratio":0.2974762,"punctuation_ratio":0.171875,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.997307,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-27T06:53:48Z\",\"WARC-Record-ID\":\"<urn:uuid:a8841f88-a86d-4224-89a5-da2e0db555bc>\",\"Content-Length\":\"32102\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:eb74921e-b95c-42ea-a07a-5dd011844263>\",\"WARC-Concurrent-To\":\"<urn:uuid:1f5c9366-c9b8-4ded-9321-44a824461e36>\",\"WARC-IP-Address\":\"72.52.80.20\",\"WARC-Target-URI\":\"https://discourse.julialang.org/t/3d-matrix-vs-array-of-2d-matrices-using-comprehensions/47688\",\"WARC-Payload-Digest\":\"sha1:UX4M6L4PQMNY5SUCGD2WYAWECPV6ZFZY\",\"WARC-Block-Digest\":\"sha1:CQVA7PBZ5DRPK7I3M5J5KLVEKOH44SWN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107893402.83_warc_CC-MAIN-20201027052750-20201027082750-00680.warc.gz\"}"}
https://studysoup.com/tsg/37163/linear-algebra-and-its-applications-5-edition-chapter-5-6-problem-3e
[ "Get Full Access to Linear Algebra And Its Applications - 5 Edition - Chapter 5.6 - Problem 3e\n\n# Solved: In Exercises 3–6, assume that any initial vector", null, "ISBN: 9780321982384\n\n## Solution for problem 3E Chapter 5.6\n\nLinear Algebra and Its Applications | 5th Edition", null, "Problem 3E\n\nIn Exercises 3–6, assume that any initial vector x0 has an eigenvector decomposition such that the coefficient c1 in equation (1) of this section is positive.\n\nDetermine the evolution of the dynamical system in Example 1 when the predation parameter p is .2 in equation (3). (Give a formula for xk.) Does the owl population grow or decline? What about the wood rat population?\n\nReference Example 1:\n\nDenote the owl and wood rat populations at time k by\n\nwhere k is the time in months, Ok is the number of owls in the region studied, and Rk is the number of rats (measured in thousands). Suppose", null, "where p is a positive parameter to be specified. The (.5) Ok in the first equation says that with no wood rats for food, only half of the owls will survive each month, while the (1.1) Rk in the second equation says that with no owls as predators, the rat population will grow by 10% per month. If rats are plentiful, the (.4) Rk will tend to make the owl population rise, while the negative term –p Ok measures the deaths of rats due to predation by owls. (In fact, 1000p is the average number of rats eaten by one owl in one month.) 
Determine the evolution of this system when the predation parameter p is .104.\n\nStep-by-Step Solution:\n\nSolution 3E\n\nStep 1\n\nWrite the dynamical system equation in matrix form. (The worked equations for this step appear only as images in the original solution.)" ]
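Working from the coefficients stated in Reference Example 1, Problem 3E with p = .2 gives the system O_{k+1} = .5 O_k + .4 R_k, R_{k+1} = -.2 O_k + 1.1 R_k. A pure-Python sketch of the eigenvalue computation for its 2x2 coefficient matrix (variable names are illustrative):

```python
import math

# Coefficient matrix A for p = .2, from Reference Example 1:
#   O_{k+1} = .5*O_k + .4*R_k
#   R_{k+1} = -.2*O_k + 1.1*R_k
a, b, c, d = 0.5, 0.4, -0.2, 1.1

trace = a + d            # 1.6
det = a * d - b * c      # 0.55 + 0.08 = 0.63
disc = math.sqrt(trace**2 - 4 * det)
lam1 = (trace + disc) / 2  # 0.9
lam2 = (trace - disc) / 2  # 0.7
print(round(lam1, 6), round(lam2, 6))  # 0.9 0.7

# Both eigenvalues are below 1 in magnitude, so any
# x_k = c1*(0.9)^k*v1 + c2*(0.7)^k*v2 tends to zero:
# both the owl and the wood rat populations decline.
```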
[ null, "https://studysoup.com/cdn/32cover_2610081", null, "https://studysoup.com/cdn/32cover_2610081", null, "https://lh5.googleusercontent.com/coMjj4Hn9IY4VDK3CQxZrFXQM4gkAbnKmaeyve5KZGg4Hy_o6wSv48l2hs0VdBrdY6y58t52AI8Y2aoVYoHSZgVPO00_HlYZxxc8DoqcNMei3al_30GB2IKZw1uuodch63lRanLN", null, "https://lh3.googleusercontent.com/TfdCizMaziZ7wgnfBK9M8K9K6s6Hoj60-qfqTJ4tGt1IAuGv2Ei77wfA_9G06QYNCcw56K41jZw5hgqGRqOlHOxtp-IShL5HR0Nq5HZM26MSnaFpc7er8Kd-8e-fanuk4U0IHU9k", null, "https://lh5.googleusercontent.com/vjkeQ6CYEtADRJPe48MLQ0jruryhyiu3caQNF3OQ5aU0tUehc8LRIHnWrOEd-M6owldRPE-dQtOB1t4UR1S3HC3m7yE6ohRJOi9LxZYLfwz-WYyeMSJeXxugNqDvDPJZlL2xJG-7", null, "https://lh4.googleusercontent.com/0jd8ma_GobO1JrNHibAPv8AiExa63dwdyzJ0zkapnr-r7L0SygIMwkyfnuFOCfby67A4Gx0WCJk0KFgUMTa0kKpPRjBrn_CbpMSWWUV_qzK37VVl5RHJMZmQGGvkoZUYcSWaMJ6V", null, "https://lh6.googleusercontent.com/yNThmgNxBOJzTstX8ZLLfMugqefuOjC6aP58_rzGtzM7MOgpjtsqDCLDX7ELsIoWoFu9Se4B23Umbi1hfZUHrkyQ--eetVt17iWoncYoJAURub5A7Fz9Qv3Aiw5GVFoZCIPN4fr3", null, "https://lh4.googleusercontent.com/RuZb0j_dMEyNfM6DnH_BJggJxoa9cHd4bcyJIQS9zLM13weBWaISVOX3ucUueOh6sPlCp4bxda_vl8EuQOXd6z75OEIXFkTT7ARBxdGh7ZMr8u9hiRsmXQwKleA8YXcevOosQ1KP", null, "https://lh6.googleusercontent.com/6M_LXS17ao7LN1O7O8GIhZsTloPDZ-lydQx12fIfS50DKWRSaUzsewQnWpACfwhW5dnnrNyR3kaBoEqG8fhEZjfZUHomLG86rCaQ7EGgVzoiToJi2rNV8F04hdmGY1rVfwG9td7p", null, "https://lh5.googleusercontent.com/UfDOv507pwlcP82fFoBFPEUwvyIHfSETT8rwUNpV_ILl0AklP4ZXrtab2zmBZ7deK2VnKHiWHaxJ6MMM_JEMVrbA1HS0z7V1Ju4xapT0FAczhWxPbil0VWB-LzQYMIiWXDHLnI1H", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8915242,"math_prob":0.97185373,"size":1376,"snap":"2021-21-2021-25","text_gpt3_token_len":337,"char_repetition_ratio":0.1377551,"word_repetition_ratio":0.03187251,"special_character_ratio":0.23909883,"punctuation_ratio":0.108391605,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99232346,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],"im_url_duplicate_count":[null,null,null,null,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-13T03:05:27Z\",\"WARC-Record-ID\":\"<urn:uuid:2b920a42-e3b9-408a-92df-9dfc5a0245bf>\",\"Content-Length\":\"99704\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d6cbd2b1-7083-4b4c-9a5c-691f47e40198>\",\"WARC-Concurrent-To\":\"<urn:uuid:fe029d00-e6f0-486d-85a1-74cc50ac0c09>\",\"WARC-IP-Address\":\"54.189.254.180\",\"WARC-Target-URI\":\"https://studysoup.com/tsg/37163/linear-algebra-and-its-applications-5-edition-chapter-5-6-problem-3e\",\"WARC-Payload-Digest\":\"sha1:BRMGJ5TCX375EHEXX7DKFGYEO53IB2RL\",\"WARC-Block-Digest\":\"sha1:36GSPUHTSAQGPT3FFLYW6KWYLGO6PPAS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243992721.31_warc_CC-MAIN-20210513014954-20210513044954-00496.warc.gz\"}"}
https://shouts.dev/articles/custom-date-format-validation-in-laravel
[ "# Custom Date Format Validation in Laravel", null, "", null, "Published on Mar 27, 2020\n\nIn this article, I’m going to share how to validate custom date formats in Laravel. Laravel has some date validation rules. Let’s take a look:\n\n# Validation: date\n\n``````$request->validate([\n'date_of_birth' => 'date'\n]);``````\n\n# Validation: date_format\n\n``````$request->validate([\n'date_of_birth' => 'date_format:m/d/Y'\n]);``````\n\n# Validation: after\n\n``````$request->validate([\n'start_date' => 'date_format:m/d/Y|after:tomorrow'\n]);``````\n\n# Validation: after_or_equal\n\n``````$now = date('m/d/Y');\n$request->validate([\n'start_date' => 'date_format:m/d/Y|after_or_equal:'.$now\n]);``````\n\n# Validation: before\n\n``````$request->validate([\n'end_date' => 'date_format:m/d/Y|before:start_date',\n'start_date' => 'date_format:m/d/Y|after:tomorrow'\n]);``````\n\n# Validation: before_or_equal\n\n``````$request->validate([\n'end_date' => 'date_format:m/d/Y|before_or_equal:start_date',\n'start_date' => 'date_format:m/d/Y|after:tomorrow'\n]);``````" ]
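Outside Laravel, the `date_format:m/d/Y` rule amounts to a strict parse against that pattern. A Python sketch of the same check (PHP's `m/d/Y` corresponds to `%m/%d/%Y` in `strptime`; the helper name `is_mdY` is made up for illustration):

```python
from datetime import datetime

def is_mdY(value):
    # Strict check that value matches the m/d/Y date format,
    # analogous to Laravel's date_format:m/d/Y rule.
    try:
        datetime.strptime(value, "%m/%d/%Y")
        return True
    except ValueError:
        return False

print(is_mdY("03/27/2020"))  # True
print(is_mdY("2020-03-27"))  # False -- wrong shape
print(is_mdY("13/40/2020"))  # False -- no such month/day
```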
[ null, "https://cdn.shouts.dev/uploads/avatars/avatar_1_624cd2adb3537.jpg", null, "https://shouts.dev/img/verified.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6778648,"math_prob":0.63870484,"size":2139,"snap":"2022-40-2023-06","text_gpt3_token_len":545,"char_repetition_ratio":0.16252927,"word_repetition_ratio":0.0127795525,"special_character_ratio":0.27162224,"punctuation_ratio":0.15463917,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9505317,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-30T19:43:05Z\",\"WARC-Record-ID\":\"<urn:uuid:11bb3263-6399-4bd7-b3a6-178a7b0e0465>\",\"Content-Length\":\"146464\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:84874a38-71a5-41df-b831-cd6af23a7a6b>\",\"WARC-Concurrent-To\":\"<urn:uuid:1a4726fc-201f-46fe-9255-4e169ed29922>\",\"WARC-IP-Address\":\"104.21.67.76\",\"WARC-Target-URI\":\"https://shouts.dev/articles/custom-date-format-validation-in-laravel\",\"WARC-Payload-Digest\":\"sha1:SEPX2QJFLRV6YQ633I3ALXCEREVEGQR5\",\"WARC-Block-Digest\":\"sha1:3WWFUBPEMGSRUQSL5HGBA5BC5GWYK5XY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030335504.22_warc_CC-MAIN-20220930181143-20220930211143-00102.warc.gz\"}"}
https://www.mikejohnpage.com/blog/multilevel-modelling-in-spss/
[ "Multilevel Modelling in SPSS\nAug 8, 2018\n\nMichael Page\n\nINTRODUCTION\n\nNested data structures\n\nWhen selecting an analysis for a given data set it is important to consider if the data is in a nested (i.e., hierarchical/clustered/grouped) structure. A nested data structure is one in which the data is organised at more than one level. For example, students can be nested within classes like so:", null, "In a similar fashion, it could be said that individual student test scores can be nested within students, who are nested within classes, which are nested within schools:", null, "The extent of this nesting structure is only limited by the nature of the data collected. Nonetheless, the most common nested structure is a two-level structure (i.e., students within classes), which is the type of nesting we will be exploring later.\n\nEssentially, a nested data structure is one where the variables at one level (e.g., student) cannot be considered independent of the variables at another level (e.g., class). So extending the student-class example above, if the test scores of students 1-9 were to be predicted, it would be reasonable to assume that the scores of students 1-3 will be similar, as will the scores of students 4-6, as will the scores of students 7-9, as nested (grouped) by the higher-level class variable. The explanation for this is intuitive: the pupils in class 1 will be exposed to different teachers and different environments, etc., in comparison to the students in class 2, and so on. That is not to say that the class is the only predictor of student grades; rather, it must be modeled into the data. Indeed, there will be variability in students' scores within classes, and it may be that different students in different classes still achieve similar scores. 
In this scenario, the effect of the higher-level class variable will be minimal or non-significant (more on this later).\n\nIf we were to model the prediction of student test scores, as was done above, using ordinary least squares (OLS) regression we would run into a problem: OLS regression requires independence of observations. In this case, the assumption would be violated, as student scores are not independent of their class. This is because we have a nested data structure. Therefore, we must use an alternative solution: multilevel models.\n\nWhy use multilevel models over regression?\n\nWhen dealing with a nested data structure, multilevel models offer several distinct advantages over OLS regression:\n\n1. Multilevel models do not require independence of observations (i.e., the same participant can be measured twice without confounding the data).\n2. Multilevel models do not require homogeneity of regression slopes (i.e., multilevel models explicitly model variability in regression slopes; Field, 2009).\n3. Multilevel models play nicely with missing (i.e., NA) data.\n\nMultilevel models\n\nIn short, multilevel models are just a fancy type of regression. The title of Bickel (2007) supports this notion:\n\n“Multilevel Analysis for Applied Research: It’s Just Regression!”\n\nWhile the mathematics behind multilevel models is a little more complex, they are essentially regression equations which control for variability in the higher-level structures. This variability occurs as variance in the slope and intercept of the plotted regression lines, which are fixed in a standard OLS regression. Thus, multilevel models are said to have a fixed and random component. 
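The fixed-versus-random distinction can be illustrated with a toy computation (simulated scores, not data from any study): a single fixed intercept from pooled OLS with no predictors is just the grand mean, while letting the intercept vary by class captures the class-level grouping:

```python
# Toy nested data: test scores grouped by class (simulated values).
scores = {
    "class1": [70, 72, 71],
    "class2": [80, 83, 82],
    "class3": [60, 61, 62],
}

# A single fixed intercept (pooled mean across all students):
all_scores = [s for grp in scores.values() for s in grp]
fixed_intercept = sum(all_scores) / len(all_scores)

# Class-varying intercepts (here simply the per-class means):
random_intercepts = {g: sum(v) / len(v) for g, v in scores.items()}

print(round(fixed_intercept, 2))  # 71.22
print({g: round(m, 2) for g, m in random_intercepts.items()})
# The class means differ markedly (71.0, 81.67, 61.0), so scores
# are clearly not independent of class membership.
```

A real multilevel model estimates these group deviations as a variance component rather than as separate means, but the intuition is the same.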
To understand these concepts further, let’s run through a specific example from start to finish.\n\nMULTILEVEL MODELS IN ACTION\n\nCortisol and perfectionism\n\nTo understand and apply multilevel models, let’s model some data using SPSS on perfectionism and cortisol I have collected as part of an ongoing study.\n\nThe variables\n\nThe study is investigating whether athletes experience a change in their cortisol awakening response (CAR) the morning of a competition (the CAR is essentially a measure of stress), and whether perfectionism (a multidimensional personality trait) can predict these changes. In other words, the study is trying to see if levels of perfectionism in athletes can predict how stressed they get before a competition (of course, it is a little more complicated than that). One might expect that an athlete higher in perfectionism would get more stressed before a competition.\n\nTo assess whether athletes are experiencing a change in CAR before a competition, the CAR is measured on a baseline day and the day of a competition (i.e., at multiple time points). The CAR is expressed in two different metrics: AUCi and AUCg. The details of these metrics are not important at this stage, but take note they both provide an index of stress. Perfectionism is measured at one time point before any measures of the CAR take place. As each participant must provide CAR measurements on two separate occasions (a baseline day and a competition day), the CAR measurements are nested within participants. 
Therefore, multilevel models must be used, in the first instance, to model this data.\n\nThe data\n\nA subset of the data we will use to create our models can be seen below:\n\nID SOP SPP SOPP SPPP AUCg_Base AUCg_Comp AUCi_Base AUCi_Comp\n1 2.8 2.2 3.75 1.75 25.65300 22.27275 16.99950 -0.75825\n2 5.6 4.4 6.00 4.25 17.28750 35.36925 5.11500 -2.04825\n3 5.0 3.0 4.50 2.00 3.92670 12.62250 1.94985 1.17000\n4 2.8 3.8 6.25 5.25 18.15600 13.64625 -0.24450 -9.89325\n5 6.6 2.2 6.75 2.50 22.33275 25.71300 -1.04475 2.83950\n6 5.8 4.6 5.50 2.50 8.58900 21.98700 0.80400 -7.32600\n7 3.6 2.8 4.25 2.75 23.12925 16.66050 3.11325 5.54550\n8 4.0 5.0 5.00 5.25 21.41775 27.80025 4.34475 7.44225\n9 4.8 3.8 5.50 4.25 6.94950 23.14950 0.35700 4.63650\n10 4.8 3.2 5.25 4.00 15.01275 27.06300 2.15625 14.56200\n\nThe data contains four perfectionism measures (SOP, SPP, SOPP, SPPP), both CAR metrics across the baseline day and competition day (AUCg_Base, AUCg_Comp, AUCi_Base, AUCi_Comp), and a participant identifier (ID). To model the data, it must be transformed into a longitudinal (or ‘tidy’; Wickham, 2014) format. Good guides on transforming data into a longitudinal format can be found in Bickel (2007) and Field (2009).\n\nGrowth models\n\nTo apply multilevel models to the data, we need to use a type of model called a growth model (because we are assessing changes over time - from baseline day to competition day). The growth models we will use will examine whether there are intra-individual changes in cortisol scores across days (level 1 or unconditional model including only time), and whether perfectionism measures can predict these changes (level 2 or conditional model that includes perfectionism dimensions; see Bickel, 2007).\n\nThe fixed components of level 1 models follow a normal regression equation structure and describe an individual’s score as a function of the intercept, the slope (i.e. the growth rate), and a time-specific residual. 
The random components of level 1 models examine whether there is individual variation in terms of the intercept and the slope.\n\nTypically, the level 1 growth rate is tested to establish if there is a relationship across time for the repeated measures. If a significant relationship is found, the variance components (intercept and slope) are then tested to establish if individuals differed in terms of their initial status and growth rates. If significant relationships are found for the intercept and slope, the model is then tested for fit using a chi-square likelihood ratio test. Providing adequate fit, level 2 predictor variables are then added to the model. Significant interaction terms at level 2 indicate whether the predictor variables are related to accelerated or decelerated growth in the repeated measures at level 1.\n\nFitting the models\n\nTo fit the growth models in SPSS, let’s go beyond the point-and-click interface, which is convoluted and time-consuming, and write some concise, reproducible syntax. As a guide, let’s use the syntax provided in Peugh and Enders (2005) to run our growth models (note that comments are indicated by an asterisk). 
First, we need to write our level 1 unconditional linear models to test whether there is a change in CAR over time, and whether there is individual variation in terms of the intercept and slope, for both CAR metrics (AUCi and AUCg):\n\n* Level 1 unconditional linear growth models.\n* Model below is for AUCg\nMIXED AUCg WITH Time\n/PRINT = SOLUTION TESTCOV\n/METHOD = ML\n/FIXED = INTERCEPT Time\n/RANDOM INTERCEPT Time | SUBJECT(Participant) COVTYPE(UN).\n\n* Model below is for AUCi\nMIXED AUCi WITH Time\n/PRINT = SOLUTION TESTCOV\n/METHOD = ML\n/FIXED = INTERCEPT Time\n/RANDOM INTERCEPT Time | SUBJECT(Participant) COVTYPE(UN).\n\nNext, we need to build our level 2 models to assess whether the predictor variables are related to accelerated or decelerated growth in the repeated measures at level 1 for both CAR metrics (AUCi and AUCg). Note that in the models below the four perfectionism measures are added to the models in groups of two (e.g., SOP and SPP); this is because these pairs of perfectionism measures are subdomains of perfectionism scales (consisting of multiple dimensions):\n\n* Level 2 conditional linear growth models.\n* Model below is for AUCg with the HF-MPS\nMIXED AUCg WITH Time SOP SPP\n/PRINT = SOLUTION TESTCOV\n/METHOD = ML\n/FIXED = INTERCEPT Time SOP SPP Time*SOP Time*SPP\n/RANDOM INTERCEPT Time | SUBJECT(Participant) COVTYPE(UN).\n\n* Model below is for AUCi with the HF-MPS\nMIXED AUCi WITH Time SOP SPP\n/PRINT = SOLUTION TESTCOV\n/METHOD = ML\n/FIXED = INTERCEPT Time SOP SPP Time*SOP Time*SPP\n/RANDOM INTERCEPT Time | SUBJECT(Participant) COVTYPE(UN).\n\n* Model below is for AUCg with the PPS-S\nMIXED AUCg WITH Time SOPP SPPP\n/PRINT = SOLUTION TESTCOV\n/METHOD = ML\n/FIXED = INTERCEPT Time SOPP SPPP Time*SOPP Time*SPPP\n/RANDOM INTERCEPT Time | SUBJECT(Participant) COVTYPE(UN).\n\n* Model below is for AUCi with the PPS-S\nMIXED AUCi WITH Time SOPP SPPP\n/PRINT = SOLUTION TESTCOV\n/METHOD = ML\n/FIXED = INTERCEPT Time SOPP SPPP Time*SOPP 
Time*SPPP\n/RANDOM INTERCEPT Time | SUBJECT(Participant) COVTYPE(UN).\n\nThat is it! The models are built. Next, we run the syntax and inspect the results. Below is a cleaned-up, more digestible version of the SPSS output for the level 1 and 2 models examining SOP and SPP:", null, "For AUCi, the level 1 unconditional linear growth models showed that there were no significant changes over time (p = .596). For AUCg, the level 1 unconditional linear growth models showed that there were significant changes over time (p = .001). However, there was no significant variability between individuals in terms of both slope and intercept; therefore, the level 2 models need not be examined. In this case, a two-step regression analysis would now be an appropriate way to examine the significant change in AUCg over time that was found.\n\nSUMMARY\n\n1. Multilevel models allow us to analyse nested data structures and offer several advantages over OLS regression.\n\n2. When using multilevel growth models, first, level 1 growth rate is tested to establish if there is a relationship across time for the repeated measures. If a significant relationship is found, the variance components (intercept and slope) are then tested to establish if individuals differed in terms of their initial status and growth rates. Level 2 predictor variables are then added to the model. Significant interaction terms at level 2 indicate whether the predictor variables are related to accelerated or decelerated growth in the repeated measures at level 1.\n\n3. SPSS syntax offers a simple and reproducible method for performing multilevel models." ]
[ null, "https://www.mikejohnpage.com/img/blog/SPSS/img1.jpeg", null, "https://www.mikejohnpage.com/img/blog/SPSS/img2.jpeg", null, "https://www.mikejohnpage.com/img/blog/SPSS/screen_shot.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88533527,"math_prob":0.8949468,"size":12113,"snap":"2019-43-2019-47","text_gpt3_token_len":2987,"char_repetition_ratio":0.12181022,"word_repetition_ratio":0.18233766,"special_character_ratio":0.25006193,"punctuation_ratio":0.13284922,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9711519,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-20T22:50:27Z\",\"WARC-Record-ID\":\"<urn:uuid:0c8b487c-3f71-4ec0-9615-cdfad880695a>\",\"Content-Length\":\"59291\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1c0126bc-52ad-47ae-9077-70627cf22a8d>\",\"WARC-Concurrent-To\":\"<urn:uuid:ac560f3b-f1e4-4d3b-b66f-3dac84173cc0>\",\"WARC-IP-Address\":\"167.99.4.63\",\"WARC-Target-URI\":\"https://www.mikejohnpage.com/blog/multilevel-modelling-in-spss/\",\"WARC-Payload-Digest\":\"sha1:IICYTSQHSTFSATCD2WGSKD57NTUANAL2\",\"WARC-Block-Digest\":\"sha1:ULBLISA6MFEGSAKXQBQTLEP3WFFPMFTC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986726836.64_warc_CC-MAIN-20191020210506-20191020234006-00466.warc.gz\"}"}
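The post above falls back on a two-step regression when there is no significant level-2 variability. A rough, stdlib-only sketch of that two-step ("slopes-as-outcomes") idea is below; the participant data are invented for illustration and are not the study's AUC values.

```python
def ols_slope(xs, ys):
    """Least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

# Step 1: one growth rate per participant (time coded 0, 1, 2).
participants = {
    "p1": [10.0, 12.0, 14.0],   # slope 2.0
    "p2": [8.0, 11.0, 14.0],    # slope 3.0
    "p3": [9.0, 10.0, 11.0],    # slope 1.0
}
slopes = {p: ols_slope([0, 1, 2], ys) for p, ys in participants.items()}

# Step 2: the per-person slopes become the outcome; here we just report
# their mean (a full analysis would regress level-2 predictors on them).
mean_growth = sum(slopes.values()) / len(slopes)
print(mean_growth)   # 2.0
```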
https://answers.everydaycalculation.com/subtract-fractions/21-42-minus-15-60
[ "Solutions by everydaycalculation.com\n\n## Subtract 15/60 from 21/42\n\n21/42 - 15/60 is 1/4.\n\n#### Steps for subtracting fractions\n\n1. Find the least common denominator or LCM of the two denominators:\nLCM of 42 and 60 is 420\n2. For the 1st fraction, since 42 × 10 = 420,\n21/42 = (21 × 10)/(42 × 10) = 210/420\n3. Likewise, for the 2nd fraction, since 60 × 7 = 420,\n15/60 = (15 × 7)/(60 × 7) = 105/420\n4. Subtract the two fractions:\n210/420 - 105/420 = (210 - 105)/420 = 105/420\n5. After reducing the fraction, the answer is 1/4\n\nMathStep (Works offline)", null, "Download our mobile app and learn to work with fractions in your own time:" ]
[ null, "https://answers.everydaycalculation.com/mathstep-app-icon.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8743748,"math_prob":0.9838418,"size":679,"snap":"2020-10-2020-16","text_gpt3_token_len":256,"char_repetition_ratio":0.14962962,"word_repetition_ratio":0.0,"special_character_ratio":0.49484536,"punctuation_ratio":0.08219178,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9989589,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-22T19:48:22Z\",\"WARC-Record-ID\":\"<urn:uuid:d54f30ca-c00f-41af-a26c-3cc0a3446d84>\",\"Content-Length\":\"7410\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:adf9f8cd-38fa-48f4-9f24-a09a4fb7baa3>\",\"WARC-Concurrent-To\":\"<urn:uuid:324b0959-2b44-4e72-bf63-c6cca097c890>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/subtract-fractions/21-42-minus-15-60\",\"WARC-Payload-Digest\":\"sha1:56MSHLASECBPOOR4HBQJ7OQTM5LNHLU7\",\"WARC-Block-Digest\":\"sha1:COTJ4JEH3DY7OUUQRDTYQRTXA4DPDPF2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875145713.39_warc_CC-MAIN-20200222180557-20200222210557-00347.warc.gz\"}"}
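The worked steps above can be verified with Python's exact rational arithmetic (a quick check, not part of the original page):

```python
from fractions import Fraction

# Exact rational arithmetic reproduces each step shown above.
a = Fraction(21, 42)               # reduces to 1/2
b = Fraction(15, 60)               # reduces to 1/4
assert Fraction(210, 420) == a     # step over the common denominator 420
assert Fraction(105, 420) == b
diff = a - b
print(diff)                        # 1/4
```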
https://math.stackexchange.com/questions/2155220/if-a-closed-linear-operator-is-one-to-one-on-a-dense-set-then-is-it-one-to-one
[ "If a closed linear operator is one to one on a dense set, then is it one to one on the whole set?\n\nI am looking at the proof of the following proposition.\n\nHere $A$ is a closed linear operator on $L$, that is, the graph of $A$ is closed subspace of $L \\times L$. And $\\lambda$ is said to belong to the resolvent set $\\rho(A)$ of $A$, if $\\lambda - A(\\equiv \\lambda I-A)$ is one-to-one, Range($\\lambda - A)=L$, and $(\\lambda - A)^{-1}$ is a bounded linear operator on $L$.", null, "In the proof, as shown below, (2.6) shows that $\\lambda-A$ is one-to-one on $\\mathcal{D}(A)$, which is a dense subset of the vector space $L$ by a theorem preceding this. However, I don't see how this fact directly means that $\\lambda - A$ is one-to-one on $L$. If a closed linear operator is one to one on a dense set, then is it one to one on the whole set? I would greatly appreciate any explanation on this line of the proof.", null, "• Why do you need it to be injective on whole set? As $A$ is generator $\\lambda - A$ is closed, and therefore for it to be invertible it only needs to be a bijection from $D(A)$ onto whole set. – user160738 Feb 21 '17 at 20:53\n• @user160738 Isn't it required to define the inverse? – takecare Feb 21 '17 at 20:56\n• No, only requirement is for it to be injective on $D(A)$. $U_{\\lambda}$ takes care of both surjectivity and injectivity, and closedness of $\\lambda -A$ implies it is invertible with $U_{\\lambda}$ being the resolvent – user160738 Feb 21 '17 at 20:57\n• @user160738 Can you explain a little more? I am confused. From the definition of the book, which I just posted above, we need $\\lambda - A$ to be injective on the domain, which is $L$, but (2.6) only shows it for $D(A)$. Also, why does the closedness of $\\lambda-A$ imply it is invertible with $U_\\lambda$ being the resolvent? – takecare Feb 21 '17 at 21:01\n• Domain of $\\lambda - A$ is $D(A)$ for all $\\lambda$, how can it be $L$? 
– user160738 Feb 21 '17 at 21:08\n\nYou have\n\n$$D(\lambda - A)=\{x\in L:\lambda x - Ax \in L\} = \{x\in L : Ax\in L\}=D(A)$$\n\nbecause $\lambda x - Ax$ lies in $L$ iff $Ax$ lies in $L$.\n\nSo you only need it to be injective on $D(A)$. $U_{\lambda}$ takes care of both its surjectivity and injectivity.\n\nFinally, once that's done it is automatically invertible. This applies to a slightly more general case:\n\nLemma: Suppose that $A$ is a closed linear operator on a Banach space $X$, then it is invertible iff it is bijective onto $X$.\n\nClearly if it is invertible then it is bijective onto $X$. So the converse is the interesting one:\n\nLet $G(A)$ denote its graph, and suppose that $A$ is bijective onto $X$. Then its algebraic inverse exists, call it $B$. Then the map $(x,y)\mapsto (y,x)$, a bijection from $G(A)$ onto $G(B)$, is a homeomorphism, so $G(A)$ closed implies $G(B)$ is closed. Now as $X$ is complete, and $B$ is a bijection from $X$ onto $D(A)$, by the closed graph theorem $B$ is bounded. So $A$ is invertible." ]
[ null, "https://i.stack.imgur.com/fZWoV.png", null, "https://i.stack.imgur.com/aMiUB.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9509157,"math_prob":0.99957556,"size":801,"snap":"2019-26-2019-30","text_gpt3_token_len":226,"char_repetition_ratio":0.13927227,"word_repetition_ratio":0.0,"special_character_ratio":0.30337077,"punctuation_ratio":0.101123594,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99997866,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-16T15:43:55Z\",\"WARC-Record-ID\":\"<urn:uuid:09fa5533-ce8b-4b4e-9cab-ba58e026e536>\",\"Content-Length\":\"138534\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:29b76b34-1439-416f-821c-8e64f29a874f>\",\"WARC-Concurrent-To\":\"<urn:uuid:935ee9c7-8a73-45a1-a461-17769a287f02>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/2155220/if-a-closed-linear-operator-is-one-to-one-on-a-dense-set-then-is-it-one-to-one\",\"WARC-Payload-Digest\":\"sha1:GIAI5LX3QDYVZNUUUKQKVPKRTDEAPLTI\",\"WARC-Block-Digest\":\"sha1:E222DI522SHMQPLSNL24TUEWDFBVYGS5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627998250.13_warc_CC-MAIN-20190616142725-20190616164542-00019.warc.gz\"}"}
https://reason.town/vc-dimension-deep-learning/
[ "# The VC Dimension of Deep Learning\n\nIn this post, we’ll discuss the VC Dimension of deep learning, a key concept that underlies the success of modern machine learning models.\n\n## What is the VC Dimension of Deep Learning?\n\nThe VC dimension is a measure of the capacity of a machine learning algorithm. It is the largest number of points that the algorithm can shatter, that is, label in every possible way. Deep learning algorithms have a large VC dimension, which means they can learn complex functions.\n\n## How can the VC Dimension of Deep Learning be used to improve performance?\n\nThe VC Dimension of Deep Learning can be used to help improve the performance of deep learning models. The VC Dimension is a measure of the complexity of a model and is used to help determine how well a model will perform. A higher VC Dimension means that a model is more complex and therefore more likely to overfit the data. A lower VC Dimension means that a model is simpler and therefore less likely to overfit the data. The VC Dimension can be used to help determine the optimal level of complexity for a deep learning model. By using the VC Dimension, developers can create models that are more accurate and efficient.\n\n## What are the benefits of using the VC Dimension of Deep Learning?\n\nThe VC Dimension of Deep Learning is a mathematical tool that allows for the analysis of deep learning models. It provides a way to understand how a deep learning model works and what its limitations are. Additionally, the VC Dimension can be used to optimize deep learning models and improve their performance.\n\n## How does the VC Dimension of Deep Learning work?\n\nDeep learning is a powerful tool for learning complex patterns from data. However, like any learning algorithm, it is subject to the limitations imposed by the so-called VC dimension. 
In this post, we’ll take a look at what the VC dimension is and how it applies to deep learning.\n\nThe VC dimension is a measure of the capacity of a learning algorithm. In other words, it tells us how many different patterns the algorithm can learn. The higher the VC dimension, the more complex the patterns that can be learned.\n\nDeep learning algorithms have a high VC dimension because they are able to learn multiple layers of information from data. This ability allows them to learn complex patterns that other learning algorithms cannot.\n\nThe VC dimension is not the only factor that determines the power of a learning algorithm, but it is an important one. So, if you’re looking for a powerful tool for learning complex patterns from data, deep learning is a good choice.\n\n## How can the VC Dimension of Deep Learning be used to improve performance?\n\nThere are a few ways that the VC Dimension of Deep Learning can be used to improve performance. One way is by using it to choose the optimal number of hidden layers in a neural network. The VC Dimension can also be used to help determine the optimal learning rate for deep learning networks.\n\n## What are the benefits of using the VC Dimension of Deep Learning?\n\nThere are many benefits to using the VC Dimension of Deep Learning. Some of these benefits include:\n\n-It can help improve the accuracy of deep learning models.\n-It can help reduce the amount of data required to train a deep learning model.\n-It can help improve the efficiency of training deep learning models.\n\n## How does the VC Dimension of Deep Learning work?\n\nDeep learning is a powerful tool for machine learning, but it can be difficult to understand how it works. One important concept in deep learning is the VC dimension.\n\nThe VC dimension is a measure of the complexity of a model. 
It represents the largest number of points whose every possible labeling the model can fit; beyond that capacity, fitting tends to turn into overfitting.\n\nDeep learning models have a high VC dimension because they are able to fit a large number of points. This makes them powerful but also difficult to use. If you use a deep learning model without understanding the VC dimension, you may find that your model doesn’t work as well as you expect.\n\n## What are the benefits of using the VC Dimension of Deep Learning?\n\nThe VC Dimension of Deep Learning is a powerful tool that can help you improve the accuracy of your predictions. By selecting the right hyperparameters, you can control the capacity of your network and prevent overfitting. In addition, the VC Dimension can also help you choose the right architecture for your problem.\n\n## How can the VC Dimension of Deep Learning be used to improve performance?\n\nDeep learning is a powerful tool for Machine Learning, but it can be difficult to understand how it works. One way to think about deep learning is in terms of the VC Dimension.\n\nThe VC Dimension is a measure of the capacity of a model, or how much information the model can handle. The higher the VC Dimension, the more information the model can handle, and the better it will perform.\n\nDeep learning has a high VC Dimension, which means it can handle a lot of information. This is one of the reasons why deep learning is so powerful. By understanding the VC Dimension, we can better understand how deep learning works, and how to use it to improve performance.\n\n## What are the benefits of using the VC Dimension of Deep Learning?\n\nThe VC Dimension of Deep Learning is a powerful tool that can help you learn more about your data. It can also help you improve the performance of your machine learning models." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9220173,"math_prob":0.91435504,"size":5293,"snap":"2022-40-2023-06","text_gpt3_token_len":1035,"char_repetition_ratio":0.23180185,"word_repetition_ratio":0.24514039,"special_character_ratio":0.19308521,"punctuation_ratio":0.083333336,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9779498,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-07T15:01:38Z\",\"WARC-Record-ID\":\"<urn:uuid:89ba2294-e574-486e-a536-defdc0d1b5cf>\",\"Content-Length\":\"154386\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7bdb59d8-1452-43a1-81ee-751b6effb3c0>\",\"WARC-Concurrent-To\":\"<urn:uuid:f3db6c9a-bdfb-470c-bae8-d78f2a80d8ae>\",\"WARC-IP-Address\":\"172.67.222.150\",\"WARC-Target-URI\":\"https://reason.town/vc-dimension-deep-learning/\",\"WARC-Payload-Digest\":\"sha1:DKHNYBR4X77UG62GPWJ6J7CQTKM5TTV6\",\"WARC-Block-Digest\":\"sha1:ZOFIMKOIWQDVFHAAM2II7IW44IHB67AL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030338213.55_warc_CC-MAIN-20221007143842-20221007173842-00052.warc.gz\"}"}
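To make the "number of points" idea above concrete, here is a small, illustrative brute-force check (not from the post): one-dimensional "interval" classifiers h(x) = 1 iff a <= x <= b shatter any 2 points but never 3, so their VC dimension is 2. "Shatter" means realizing every possible +/- labeling of the points.

```python
# Brute-force shattering check for interval classifiers on the real line.

def shatters(points, hypotheses):
    # Collect every labeling the hypothesis class realizes on these points.
    realized = {tuple(h(x) for x in points) for h in hypotheses}
    return len(realized) == 2 ** len(points)

def interval(a, b):
    return lambda x: 1 if a <= x <= b else 0

grid = [0.5, 1.5, 2.5, 3.5]   # enough endpoints around the sample points
hyps = [interval(a, b) for a in grid for b in grid if a <= b]

print(shatters([1, 2], hyps))      # True:  all 4 labelings realized
print(shatters([1, 2, 3], hyps))   # False: (1, 0, 1) needs two intervals
```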
https://blog.jdriven.com/2021/02/java-joy-merge-maps-using-stream-api/
[ "In Java we can merge a key/value pair into a `Map` with the `merge` method. The first parameter is the key, the second the value and the third parameter of the `merge` method is a remapping function that is applied when the key is already present in the `Map` instance. The remapping function has the value of the key in the original `Map` and the new value. We can define in the function what the resulting value should be. If we return `null` the key is ignored.\n\nIf we want to merge multiple `Map` instances we can use the Stream API. We want to convert the `Map` instances to a stream of `Map.Entry` instances which we then turn into a new `Map` instance with the `toMap` method from the class `Collectors`. The `toMap` method also takes a remapping function when there is a duplicate key. The function defines what the new value is based on the two values of the duplicate key that was encountered. We can choose to simply ignore one of the values and return the other value. But we can also do some computations in this function, for example creating a new value using both values.\n\nIn the following example we use the Stream API to merge multiple `Map` instances into a new `Map` using a remapping function for duplicate keys:\n\n``````package com.mrhaki.sample;\n\nimport java.util.Arrays;\nimport java.util.HashSet;\nimport java.util.Map;\nimport java.util.Objects;\nimport java.util.Set;\nimport java.util.stream.Collectors;\nimport java.util.stream.Stream;\n\npublic class MapMerge {\npublic static void main(String[] args) {\nMap<Character, Integer> first = Map.of('a', 2, 'b', 3, 'c', 4);\nMap<Character, Integer> second = Map.of('a', 10, 'c', 11);\nMap<Character, Integer> third = Map.of('a', 3, 'd', 100);\n\n// First we turn multiple maps into a stream of entries and\n// in the collect method we create a new map and define\n// a function to multiply the entry value when there is a\n// duplicate entry key.\nMap<Character, Integer> result =\nStream.of(first, second, 
third)\n.flatMap(m -> m.entrySet().stream())\n.collect(\nCollectors.toMap(\nMap.Entry::getKey,\nMap.Entry::getValue,\n(value1, value2) -> value1 * value2));\n\n// The values for duplicate keys are multiplied in the resulting map.\nassert Map.of('a', 60, 'b', 3, 'c', 44, 'd', 100).equals(result);\n\n// In this sample the value is a Java class Characteristic.\n// The function to apply when a key is duplicate will create\n// a new Characteristic instance containing all values.\n// The resulting map will contain all concatenated characteristic values\n// for each key.\nvar languages =\nStream.of(Map.of(\"Java\", new Characteristic(\"jvm\")),\nMap.of(\"Clojure\", new Characteristic(\"dynamic\", \"functional\")),\nMap.of(\"Groovy\", new Characteristic(\"jvm\", \"dynamic\")),\nMap.of(\"Clojure\", new Characteristic(\"jvm\")),\nMap.of(\"Groovy\", new Characteristic(\"dynamic\")),\nMap.of(\"Java\", new Characteristic(\"static\")))\n.flatMap(m -> m.entrySet().stream())\n.collect(\nCollectors.toMap(\nMap.Entry::getKey,\nMap.Entry::getValue,\n(c1, c2) -> c1.addCharacteristics(c2.getValues())));\n\nassert new Characteristic(\"static\", \"jvm\").equals(languages.get(\"Java\"));\nassert new Characteristic(\"dynamic\", \"functional\", \"jvm\").equals(languages.get(\"Clojure\"));\nassert new Characteristic(\"dynamic\", \"jvm\").equals(languages.get(\"Groovy\"));\n}\n\n/**\n* Supporting class to store language characteristics.\n*/\nstatic class Characteristic {\n// Store unique characteristic values.\nprivate Set<String> values = new HashSet<>();\n\nCharacteristic(String characteristic) {\nvalues.add(characteristic);\n}\n\nCharacteristic(String... characteristics) {\nvalues.addAll(Arrays.asList(characteristics));\n}\n\nCharacteristic addCharacteristics(Set<String> characteristics) {\nvalues.addAll(characteristics);\nreturn this;\n}\n\nSet<String> getValues() {\nreturn values;\n}\n\n@Override\npublic boolean equals(final Object o) {\nif (this == o) { return true; }\nif (o == null || getClass() != o.getClass()) { return false; }\nfinal Characteristic that = (Characteristic) o;\nreturn Objects.equals(values, that.values);\n}\n\n@Override\npublic int hashCode() {\nreturn Objects.hash(values);\n}\n}\n}``````\n\nWritten with Java 15.", null, "" ]
[ null, "https://blog.jdriven.com/img/shadow-left.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5499347,"math_prob":0.92912436,"size":4064,"snap":"2021-04-2021-17","text_gpt3_token_len":946,"char_repetition_ratio":0.16576354,"word_repetition_ratio":0.0144665465,"special_character_ratio":0.25688976,"punctuation_ratio":0.2192649,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9828198,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-20T01:08:08Z\",\"WARC-Record-ID\":\"<urn:uuid:4c75878c-f1ce-44b9-8ef7-fe5c17b5b4a0>\",\"Content-Length\":\"25089\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:be9aab95-6291-4eda-8302-e23e8004b13d>\",\"WARC-Concurrent-To\":\"<urn:uuid:232fbcd5-6902-4fb1-bedb-be0eddfc0472>\",\"WARC-IP-Address\":\"54.205.240.192\",\"WARC-Target-URI\":\"https://blog.jdriven.com/2021/02/java-joy-merge-maps-using-stream-api/\",\"WARC-Payload-Digest\":\"sha1:Y5NBVT3CGBECCRK5UXKE3ECHJXZ44NAH\",\"WARC-Block-Digest\":\"sha1:7UWBQH6WYY7T7HN4LT5JTGCQDCVHQRKS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618038921860.72_warc_CC-MAIN-20210419235235-20210420025235-00631.warc.gz\"}"}
http://forums.codeguru.com/showthread.php?551343-Algorithm-for-simple-board-game&p=2184423&mode=threaded
[ "## Algorithm for simple board game\n\nWe have N coins of type-1 and M coins of type-2. A game-board has N squares of type-1 and M squares of type-2. In this game we must place one coin into each square. After placing all coins we will get a score based on our coin placement strategy.\n\nIf a type-1 square contains a type-1 coin then we will get A points, if a type-2 square contains a type-2 coin then we will get B points and in all other cases, we will get C points. Our total game score will be the sum of the scores of all squares.\n\nInputs available are (N, M, A, B, C)\n\nHow can we maximize our score?" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8510346,"math_prob":0.99636674,"size":864,"snap":"2020-10-2020-16","text_gpt3_token_len":225,"char_repetition_ratio":0.14302325,"word_repetition_ratio":0.024096385,"special_character_ratio":0.2627315,"punctuation_ratio":0.08421053,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97987574,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-20T04:07:51Z\",\"WARC-Record-ID\":\"<urn:uuid:088bce50-f308-4e00-a9f5-04f48e62af9f>\",\"Content-Length\":\"102925\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5bce81da-a16e-4bfc-81d0-1d8de6415695>\",\"WARC-Concurrent-To\":\"<urn:uuid:7c269b26-b333-498e-9baa-0594ad8c9080>\",\"WARC-IP-Address\":\"70.42.23.51\",\"WARC-Target-URI\":\"http://forums.codeguru.com/showthread.php?551343-Algorithm-for-simple-board-game&p=2184423&mode=threaded\",\"WARC-Payload-Digest\":\"sha1:2I3ZICYS3KHX2OWTJF4HGEQN4PSTGOKQ\",\"WARC-Block-Digest\":\"sha1:TMAPHYWIO55K7CTEEQXBK47RXMEYRK3Z\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875144637.88_warc_CC-MAIN-20200220035657-20200220065657-00149.warc.gz\"}"}
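One way to answer the question (a sketch, not a solution posted in the thread): if we mismatch k coin/square pairs by putting k type-2 coins on type-1 squares, k type-1 coins are forced onto type-2 squares, and the score becomes linear in k, so only the extremes need checking.

```python
# Score with k mismatched pairs:
#   (N-k)*A + (M-k)*B + 2*k*C = N*A + M*B + k*(2*C - A - B),
# linear in k, so the optimum is at k = 0 or k = min(N, M).

def max_score(N, M, A, B, C):
    base = N * A + M * B
    gain_per_swap = 2 * C - A - B
    if gain_per_swap > 0:
        return base + min(N, M) * gain_per_swap
    return base

print(max_score(2, 3, 5, 5, 1))   # 25: matching every coin is best
print(max_score(2, 3, 1, 1, 5))   # 21: mismatch min(N, M) = 2 pairs
```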
https://agink.id/2022/02/10/mikrotik-script-dhcp-pool-usages/
[ "# MikroTik – Script DHCP Pool Usages\n\nSystem –> Scripts –> Add [+] –> Name = dhcpstatus –> Copy & paste the source\n\n```# List stats for IP -> Pool\n#\n# criticalthreshold = output pool display in red if pool used is above this %\n# warnthreshold = output pool display in gold if pool used is above this %\n\n:local criticalthreshold 85\n:local warnthreshold 50\n\n# Internal processing below...\n# ----------------------------------\n/ip pool {\n:local poolname\n:local poolused\n:local poolpercent\n:local pooladdresses\n:local minaddress\n:local maxaddress\n:local findindex\n:local tmpint\n:local maxindex\n:local line\n\n# :put (\"IP Pool Statistics\")\n# :put (\"------------------\")\n\n# Iterate through IP Pools\n:foreach p in=[find] do={\n\n:set poolname [get \\$p name]\n:set poolused 0\n:set pooladdresses 0\n:set line \"\"\n\n:set line (\" \" . \\$poolname)\n\n# Iterate through current pool's IP ranges\n:foreach r in=[:toarray [get \\$p range]] do={\n\n# Get min and max addresses\n:set findindex [:find [:tostr \\$r] \"-\"]\n:if ([:len \\$findindex] > 0) do={\n:set minaddress [:pick [:tostr \\$r] 0 \\$findindex]\n:set maxaddress [:pick [:tostr \\$r] (\\$findindex + 1) [:len [:tostr \\$r]]]\n} else={\n# Single-address range: min and max are the range itself\n:set minaddress [:tostr \\$r]\n:set maxaddress [:tostr \\$r]\n}\n\n# Convert to array of octets (replace '.' with ',')\n:for x from=0 to=([:len [:tostr \\$minaddress]] - 1) do={\n:if ([:pick [:tostr \\$minaddress] \\$x (\\$x + 1)] = \".\") do={\n:set minaddress ([:pick [:tostr \\$minaddress] 0 \\$x] . \",\" . [:pick [:tostr \\$minaddress] (\\$x + 1) [:len [:tostr \\$minaddress]]])\n}\n}\n:for x from=0 to=([:len [:tostr \\$maxaddress]] - 1) do={\n:if ([:pick [:tostr \\$maxaddress] \\$x (\\$x + 1)] = \".\") do={\n:set maxaddress ([:pick [:tostr \\$maxaddress] 0 \\$x] . \",\" . [:pick [:tostr \\$maxaddress] (\\$x + 1) [:len [:tostr \\$maxaddress]]])\n}\n}\n\n# Calculate available addresses for current range\n:set maxindex ([:len [:toarray \\$minaddress]] - 1)\n:for x from=\\$maxindex to=0 step=-1 do={\n# Calculate 256^(\\$maxindex - \\$x)\n:set tmpint 1\n:if ((\\$maxindex - \\$x) > 0) do={\n:for y from=1 to=(\\$maxindex - \\$x) do={ :set tmpint (256 * \\$tmpint) }\n}\n:set tmpint (\\$tmpint * ([:tonum [:pick [:toarray \\$maxaddress] \\$x]] - \\\n[:tonum [:pick [:toarray \\$minaddress] \\$x]]) )\n:set pooladdresses (\\$pooladdresses + \\$tmpint)\n# for x\n}\n\n# Range is inclusive of both endpoints\n:set pooladdresses (\\$pooladdresses + 1)\n\n# foreach r\n}\n\n# Now, we have the available addresses for all ranges in this pool\n# Get the number of used addresses for this pool\n:set poolused [:len [used find pool=[:tostr \\$poolname]]]\n:set poolpercent ((\\$poolused * 100) / \\$pooladdresses)\n\n# Output information\n:set line ([:tostr \\$line] . \" [\" . \\$poolused . \"/\" . \\$pooladdresses . \"]\")\n:set line ([:tostr \\$line] . \" \" . \\$poolpercent . \" % used\")\n\n# Set colored display for used thresholds\n:if ( [:tonum \\$poolpercent] > \\$criticalthreshold ) do={\n:log error (\"IP Pool \" . \\$poolname . \" is \" . \\$poolpercent . \"% full\")\n:put ([:terminal style varname] . \\$line)\n} else={\n:if ( [:tonum \\$poolpercent] > \\$warnthreshold ) do={\n:log warning (\"IP Pool \" . \\$poolname . \" is \" . \\$poolpercent . \"% full\")\n:put ([:terminal style syntax-meta] . \\$line)\n} else={\n:put ([:terminal style none] . 
\\$line)\n}\n}\n\n# foreach p\n}\n# /ip pool\n}```\n```[[email protected]] > /sys scr run dhcpstatus\ndhcp_pool1-IBS-Cikini [97/469] 20 % used\ndhcp_pool1-IBS-Internet-Office [19/234] 8 % used\nvpn-client [1/129] 0 % used\npool-VPN-Cikini [0/19] 0 % used\ndhcp_pool1 [15/170] 8 % used\npool_BAKTI [169/1002] 16 % used\npool-VPN-ibswcikini-l2tp [3/245] 1 % used```" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.514484,"math_prob":0.8106261,"size":3521,"snap":"2022-27-2022-33","text_gpt3_token_len":1125,"char_repetition_ratio":0.19135627,"word_repetition_ratio":0.110154904,"special_character_ratio":0.3856859,"punctuation_ratio":0.2292683,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9802717,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-30T20:00:53Z\",\"WARC-Record-ID\":\"<urn:uuid:717762e0-3fc3-4720-b74b-27b82d065590>\",\"Content-Length\":\"38680\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cd314f41-767f-4507-9268-4c7a910091b1>\",\"WARC-Concurrent-To\":\"<urn:uuid:3479f098-f2d9-49f1-aad9-ef08c6c31e1b>\",\"WARC-IP-Address\":\"172.67.199.132\",\"WARC-Target-URI\":\"https://agink.id/2022/02/10/mikrotik-script-dhcp-pool-usages/\",\"WARC-Payload-Digest\":\"sha1:DGIXDDGS562WG4GE6PY6WQJG5EALLXFK\",\"WARC-Block-Digest\":\"sha1:UP7BVWTR2XN4QE6NGBTUSL3BCNJHAFPE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103877410.46_warc_CC-MAIN-20220630183616-20220630213616-00779.warc.gz\"}"}
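The pool-sizing arithmetic in the script (high address minus low address plus one, summed over the pool's ranges, then used / total as a percentage) can be sketched in Python with the stdlib ipaddress module. The range string and lease count below are illustrative, not taken from the router output above.

```python
import ipaddress

def pool_size(ranges):
    """Total addresses in a comma-separated 'lo-hi' pool range string."""
    total = 0
    for r in ranges.split(","):
        lo, _, hi = r.partition("-")
        if hi:
            total += int(ipaddress.ip_address(hi)) - int(ipaddress.ip_address(lo)) + 1
        else:
            total += 1  # a bare address counts as a single-entry range
    return total

size = pool_size("192.168.88.10-192.168.88.254")
print(size)                     # 245
print(round(97 * 100 / size))   # 40 (% used for 97 leases)
```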
https://rdrr.io/cran/RiemGrassmann/man/gr.kmedoids.html
[ "# gr.kmedoids: k-Medoids Clustering on Grassmann Manifold In RiemGrassmann: Inference, Learning, and Optimization on Grassmann Manifold\n\n## Description\n\nThe k-medoids algorithm depends solely on the availability of a concept that gives dissimilarity. We adopt the pam algorithm from the cluster package. See pam for more details.\n\n## Usage\n\ngr.kmedoids(\ninput,\nk = 2,\ntype = c(\"Intrinsic\", \"Extrinsic\", \"Asimov\", \"Binet-Cauchy\", \"Chordal\",\n\"Fubini-Study\", \"Martin\", \"Procrustes\", \"Projection\", \"Spectral\")\n)\n\n## Arguments\n\ninput either an array of size (n\\times k\\times N) or a list of length N whose elements are (n\\times k) orthonormal bases (ONB) on the Grassmann manifold.\nk the number of clusters.\ntype type of distance measure. Name of each type is case insensitive and the hyphen can be omitted.\n\n## Value\n\nan object of class pam. See pam for details.\n\n## Author(s)\n\nKisung You\n\n## Examples\n\n## generate a dataset with two types of Grassmann elements\n# group1 : first four columns of (8x8) identity matrix + noise\n# group2 : last four columns of (8x8) identity matrix + noise\nmydata = list()\nsdval = 0.25\ndiag8 = diag(8)\nfor (i in 1:10){\nmydata[[i]] = qr.Q(qr(diag8[,1:4] + matrix(rnorm(8*4,sd=sdval),ncol=4)))\n}\nfor (i in 11:20){\nmydata[[i]] = qr.Q(qr(diag8[,5:8] + matrix(rnorm(8*4,sd=sdval),ncol=4)))\n}\n\n## do k-medoids clustering with 'intrinsic' distance\n# First, apply MDS for visualization\ndmat = gr.pdist(mydata, type=\"intrinsic\")\nembd = stats::cmdscale(dmat, k=2)\n\n# Run 'gr.kmedoids' with different numbers of clusters\ngrint2 = gr.kmedoids(mydata, type=\"intrinsic\", k=2)$clustering\ngrint3 = gr.kmedoids(mydata, type=\"intrinsic\", k=3)$clustering\ngrint4 = gr.kmedoids(mydata, type=\"intrinsic\", k=4)$clustering\n\n# Let's visualize\nopar <- par(no.readonly=TRUE)\npar(mfrow=c(1,3), pty=\"s\")\nplot(embd, pch=19, col=grint2, main=\"k=2\")\nplot(embd, pch=19, col=grint3, main=\"k=3\")\nplot(embd, pch=19, col=grint4, main=\"k=4\")\npar(opar)\n\n## perform k-medoids clustering with different distance measures\n# iterate over all distance measures\nalltypes = c(\"intrinsic\",\"extrinsic\",\"asimov\",\"binet-cauchy\",\n\"chordal\",\"fubini-study\",\"martin\",\"procrustes\",\"projection\",\"spectral\")\nntypes = length(alltypes)\nlabels = list()\nfor (i in 1:ntypes){\nlabels[[i]] = gr.kmedoids(mydata, k=2, type=alltypes[i])$clustering\n}\n\n## visualize\n# 1. find MDS scaling for each distance measure as well\nembeds = list()\nfor (i in 1:ntypes){\npdmat = gr.pdist(mydata, type=alltypes[i])\nembeds[[i]] = stats::cmdscale(pdmat, k=2)\n}\n\n# 2. plot the clustering results\nopar <- par(no.readonly=TRUE)\npar(mfrow=c(2,5), pty=\"s\")\nfor (i in 1:ntypes){\npm = paste0(\"k-medoids::\",alltypes[i])\nplot(embeds[[i]], col=labels[[i]], main=pm, pch=19)\n}\npar(opar)\n\nRiemGrassmann documentation built on March 25, 2020, 5:07 p.m."
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.55028415,"math_prob":0.9819964,"size":2841,"snap":"2022-27-2022-33","text_gpt3_token_len":1019,"char_repetition_ratio":0.10574551,"word_repetition_ratio":0.039215688,"special_character_ratio":0.35093278,"punctuation_ratio":0.16498317,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9954916,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-17T13:47:40Z\",\"WARC-Record-ID\":\"<urn:uuid:49cc2550-30ca-48c7-b2a5-4305cf560e37>\",\"Content-Length\":\"59678\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:75e38605-4f3e-494e-8ada-a97f009306e7>\",\"WARC-Concurrent-To\":\"<urn:uuid:93063355-a825-4878-bf0e-d273dc4f911b>\",\"WARC-IP-Address\":\"51.81.83.12\",\"WARC-Target-URI\":\"https://rdrr.io/cran/RiemGrassmann/man/gr.kmedoids.html\",\"WARC-Payload-Digest\":\"sha1:ANMC6GFRKD3TN64I4PLTUMMA4NSEUEGT\",\"WARC-Block-Digest\":\"sha1:MH3VNX2TM3TXNNBA7IIFM67X6TIQOOQX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882572908.71_warc_CC-MAIN-20220817122626-20220817152626-00731.warc.gz\"}"}
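As the documentation notes, k-medoids needs only pairwise dissimilarities, not coordinates. A toy exhaustive version (with made-up 1-D data; real PAM swaps medoids iteratively instead of trying every pair) can be sketched in a few lines:

```python
from itertools import combinations

# Made-up data: two clear clusters on the real line.
points = [0.0, 0.2, 0.4, 10.0, 10.3, 10.4]
dist = [[abs(a - b) for b in points] for a in points]

def total_cost(medoids):
    # Each point pays the dissimilarity to its nearest medoid.
    return sum(min(dist[i][m] for m in medoids) for i in range(len(points)))

# Exhaustive search over all k = 2 medoid pairs (fine for tiny data).
best = min(combinations(range(len(points)), 2), key=total_cost)
print(sorted(best))   # [1, 4]: one medoid per cluster
```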
https://o-zon.be/2019/m10-grade-concrete-cement-sand-jelly-quantity-48349.html
[ "# m10 grade concrete cement sand jelly quantity", null, "### Cement quantity per cum. in M15 grade concrete mix - Answers\n\nOrdinary concrete M10 M15 M20 Standard Concrete M25 M30 M35 M40 M45 M50 M55 High strength concrete M60 M65 M70 M75 M80 Asked in Civil Engineering, Concrete and Cement, .", null, "### Metric calculator for concrete - cement, sand, gravel etc\n\nThese Concrete Calculators provide the required quantities of cement and all-in ballast or cement, sharp sand and gravel required to give a defined volume of finished concrete. Both of these concrete calculators make an allowance for the fact that material losses volume after being mixed to make concrete.", null, "### How we calculate of Sand, cement and aggregate Of M10 .\n\nHow we calculate of Sand, cement and aggregate Of M10, M15, M20 and M25 etc. Engr Sami Ullah | February 7, . Volume of Sand = Volume of sand Require = (1.5/5.5) x 1.57 = 0.471 m3. Volume of Crush = (3/5.5) x 1.57 = 0.856 m3 . The above calculation you can use for all Grade of Concrete but just you have to put the different Grade of .", null, "### Concrete Calculator for Slabs Online | Birla A1 Cement\n\nBirla A1 Concrete calculator for Slabs will help you estimate the amount and grade of cement you will require for the size of your home . Grade of concrete M10, M15, M20, M25 depends on ratio of Cement, Sand and Aggregate. M20 is the most common for Home Builders. Calculate; Concrete Volume (Cubic Feet) .", null, "### Quantity of cement in Mix M10 concrete? - Answers\n\nQuantity of cement in Mix M10 concrete . Cement quantity per cum. in M15 grade . During concreting first of all we have to take an adequate quantity of aggregate,sand,cement.", null, "### CEMENT CONCRETE MIX DESIGN - Dronacharya\n\nCEMENT CONCRETE MIX DESIGN . . FACTORS TO BE CONSIDERED FOR MIX DESIGN The grade designation giving the characteristic strength . concrete (kg) Sand as % of total aggregate by absolute volume 10 200 40 20 186 35 40 165 30 2.6 31 . 
Step 5 - .", null, "### Concrete Design Mix for Varies Grade of Concrete\n\nConcrete Design Mix for Varies Grade of Concrete Concrete mix design is the process of finding right proportions of cement, sand and aggregates for concrete to achieve target strength in structures. So, concrete mix design of M20, M25, M30 and higher grades of concrete can be calculated from the example below. Concrete Mix = Cement:Sand:Aggregates.", null, "### Concrete Mix Design Calculation for M15 Grade as per IS .\n\nM15 concrete mix has a mix proportion of 1:2:4 of cement, fine aggregate, and coarse aggregate respectively. M15 – M represents Mix and 15 N/mm 2 is the characteristic compressive strength of concrete cube at 28 days. Required Data M15 Grade Concrete. Grade of concrete =M15; Characteristic compressive strength of concrete at 28days = 15N/mm 2", null, "### Mix Design M-40 Grade - Civil Engineering\n\nThe mix design M-40 grade for Pier (Using Admixture – Fosroc) provided here is for reference purpose only. Actual site conditions vary and thus this should be adjusted as per the location and other factors. Parameters for mix design M40. Grade Designation = M-40 Type of cement = O.P.C-43 grade Brand of cement = Vikram ( Grasim )", null, "### How to calculate rate analysis of M10 grade concrete\n\nSep 17, 2017 · M10 is a lean concrete, M10 concrete is the strength offered by the concrete cube of 150mm dimension after 28 days, where strength is 10 N/mm2. M10 grade concrete has a ratio of 1:3:6, in which there is 1 part of cement, 3 parts of sand, and 6 parts of aggregate. 
What is plain cement concrete|Specifications for Plain Cement Concrete. How to calculate .", null, "### For a strong building, get the right mixture - The Hindu\n\nAug 31, 2012 · Concrete is a mixture of cement, water, sand and broken stones (jelly) in a definite proportion which is a workable mass initially and hardens over a period of time. . as M10.", null, "### How to calculate cement, sand and coarse aggregate for .\n\nHow to calculate cement, sand and coarse aggregate for concrete? M15 Mix Ratio – 1 : 2 : 4. The mix ratio denotes the following. 1 – Cement 2 – Sand (Fine aggregate – 2 Times of Cement Quantity) 4 – Blue metal (Coarse aggregate – 4 Times of Cement Quantity) The volume of concrete – (L x B x D) = 5 x 2 x 0.1 = 1 Cum", null, "### Different Grades of Concrete and their Uses/Applications .\n\nSep 10, 2018 · Grades of concrete mainly classified in four categories as follows i) Lean concrete ii) Ordinary grade of concrete iii) Standard grade of concrete iv) High strength concrete grades LEAN CONCRETE Lean concrete is a mix where the amount of cement is lower than the amount of liquid present in the strata. M5 GRADE Where M stands.", null, "### how do we calculate quantity of cement, sand and .\n\nhow do we calculate quantity of cement, sand and aggregates in 1 m3 of M30 grade concrete ?.. Answer / rajarshi basu The approach by Sjtbehera is totally correct but has some", null, "M10 Ready Mix Concrete contains a blend of cement, sand, gravel, and other approved ingredients. 
It is ideal for sidewalks, steps, walkways, foundations, footings, and similar general concrete work. Features: High compressive strength", null, "### How to Calculate Cement Sand & Aggregate Quantity in Concrete\n\nDry Volume = Wet Volume + 54% of wet volume Dry Volume = Wet Volume x 1.54. How to calculate quantity for 10 cubic meter concrete. We have been given Grade – M15 and we have to calculate the quantity of cement, sand and aggregate in cft, cum and kg. Volume = 10 cubic meter This is wet volume, so we need to convert it into dry volume.", null, "### M-20 Mix Designs as per IS-10262-2009 - Civil Engineering\n\nFollowing table shows the M-20 Mix Designs as per IS-10262-2009, hope this helps all civil engineers here . 2-Propotion of grade concrete 3-qty of cement,fine aggregate,coarse aggrete & water. . M20 Grade of RMC mix Concrete how much cement, jelly, sand is used for 1 cu-m. Reply Link. sohaib Salama March 13, 2016 at 6:00 am.", null, "### What is Water Cement Ratio? - Guide & Calculation – Civilology\n\nWhat is Water Cement Ratio? Water Cement Ratio means the ratio between the weight of water to the weight of cement used in concrete mix. Normally water cement ratio falls under 0.4 to 0.6 as per IS Code 10262 (2009) for nominal mix (M10, M15 ..", null, "### Grade of Concrete - Their Ratio, Uses & Suitability .\n\nGrade of Concrete is the classification of concrete according to its compressive strength.. For making concrete we use cement, sand, aggregate, and water which are mixed with certain ratio and concrete is cast and put in a cube of 150 mm size and put in a water bath for 28 days and afterward, it is tested in a compression test.", null, "### Methods of Proportioning Cement, Sand and Aggregates in .\n\nProportioning of concrete is the process of selecting quantity of cement, sand, coarse aggregate and water in concrete to obtain desired strength and quality. 
The proportions of coarse aggregate, cement and water should be such that the resulting concrete has the following properties:", null, "### RCC Calculator | Estimate Cement, Sand, RCC Online Calculator\n\nReinforced cement concrete or RCC calculator can be used to calculate M15, M20 and M25 mix ratio of cement, sand and jelly. At materialtree.com, you can also shop for superior quality cement .", null, "### Calculate cement sand and aggregate for concrete | Nominal .\n\n97 Comments on Calculate Cement Sand and Aggregate needed for concrete in Volume and Weight Engineers (like me) use three different simple techniques to calculate cement, sand, aggregate and water to produce different nominal mix concrete like M5, M7.5, M10, M15 and M20 for .", null, "### CONCRETE GRADE: M5 = 1:4:8 M10= 1:3:6 M15= 1:2:4 M20= 1:1 .", null, "### How to calculate quantity of cement,sand & aggregate in .\n\nThere are two main ways to design the concrete mix. Design mix method :- In this method, materials are proportioned based on the procedure and rules given in IS 456 (2000) and IS 10262 code . In this method, cement, sand and aggregates are always batched in terms of weight, and concrete can be designed for different environmental conditions and different needs.", null, "### PRODUCT GUIDE - QUIKRETE\n\nCommercial Grade concrete mix designed for higher-early . • 3 to 4 Parts Plaster Sand • 1 Part Plastic Cement (by volume) Meets the requirements of ASTM C 1328 Type M and S. Item No. 2121-94 Metric . Meets the requirements of ASTM C 91 and Federal Specifications for masonry cement. Item No. 1125-98 Metric 42.6 kg Package Size 94 lb. bag .", null, "### What is the mixing ratio in m10 concrete? - Answers\n\nJun 10, 2014 · Ratio used in M10 concrete is 1:3:6 1 Cement, 3 Sand & 6 Aggregate . What effect does the quantity of mixing water have upon the .
Lean concrete has grade M10.", null, "### Concrete Mix Design: Illustrative Example M30 Grade (M20 .\n\nA step-by-step detailed concrete mix design procedure to calculate cement, sand, aggregate, water & admixture content to prepare M30 grade concrete. It is always suggested to go to the maximum nominal size of aggregate to save on the quantity of cement per unit of concrete." ]
[ null, "https://o-zon.be/images/service/407.jpg", null, "https://o-zon.be/images/service/21.jpg", null, "https://o-zon.be/images/service/589.jpg", null, "https://o-zon.be/images/service/436.jpg", null, "https://o-zon.be/images/service/71.jpg", null, "https://o-zon.be/images/service/157.jpg", null, "https://o-zon.be/images/service/459.jpg", null, "https://o-zon.be/images/service/90.jpg", null, "https://o-zon.be/images/service/13.jpg", null, "https://o-zon.be/images/service/478.jpg", null, "https://o-zon.be/images/service/483.jpg", null, "https://o-zon.be/images/service/427.jpg", null, "https://o-zon.be/images/service/586.jpg", null, "https://o-zon.be/images/service/8.jpg", null, "https://o-zon.be/images/service/406.jpg", null, "https://o-zon.be/images/service/138.jpg", null, "https://o-zon.be/images/service/28.jpg", null, "https://o-zon.be/images/service/533.jpg", null, "https://o-zon.be/images/service/330.jpg", null, "https://o-zon.be/images/service/221.jpg", null, "https://o-zon.be/images/service/77.jpg", null, "https://o-zon.be/images/service/32.jpg", null, "https://o-zon.be/images/service/447.jpg", null, "https://o-zon.be/images/service/548.jpg", null, "https://o-zon.be/images/service/425.jpg", null, "https://o-zon.be/images/service/492.jpg", null, "https://o-zon.be/images/service/132.jpg", null, "https://o-zon.be/images/service/29.jpg", null, "https://o-zon.be/images/service/235.jpg", null, "https://o-zon.be/images/service/322.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88332206,"math_prob":0.9653735,"size":8821,"snap":"2020-45-2020-50","text_gpt3_token_len":2124,"char_repetition_ratio":0.21231711,"word_repetition_ratio":0.20157067,"special_character_ratio":0.25201225,"punctuation_ratio":0.12536107,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.96840763,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60],"im_url_duplicate_count":[null,1,null,2,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,2,null,1,null,1,null,2,null,1,null,1,null,2,null,1,null,1,null,1,null,1,null,2,null,1,null,1,null,1,null,1,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-28T10:58:49Z\",\"WARC-Record-ID\":\"<urn:uuid:c23b731e-2aa9-4124-93b0-d79222371c0c>\",\"Content-Length\":\"37467\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bb1ae648-16ae-42c4-8b96-c78563915454>\",\"WARC-Concurrent-To\":\"<urn:uuid:1bcaa03d-9a94-4b39-b46f-4c778d69ac38>\",\"WARC-IP-Address\":\"104.27.156.233\",\"WARC-Target-URI\":\"https://o-zon.be/2019/m10-grade-concrete-cement-sand-jelly-quantity-48349.html\",\"WARC-Payload-Digest\":\"sha1:75L64Z4FA3ZT42CEI3SZI4RUNDK5VK62\",\"WARC-Block-Digest\":\"sha1:RFVAULHDJESFL5TGOYKXDFWNPJTQY7U5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107898499.49_warc_CC-MAIN-20201028103215-20201028133215-00315.warc.gz\"}"}
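Several of the answers collected above repeat the same recipe: take the nominal ratio for the grade (e.g. 1:3:6 for M10), convert wet volume to dry volume with the ≈1.54 factor quoted in the page, and split the dry volume by the ratio. A rough sketch of that arithmetic follows; the helper name is made up, and the 1.54 factor is the rule of thumb from the page, not a design-code value.

```python
def nominal_mix_quantities(wet_volume_m3, ratio=(1, 3, 6), dry_factor=1.54):
    """Split a concrete volume into cement/sand/aggregate parts for a nominal mix.

    Uses the rule of thumb quoted above: dry volume ≈ 1.54 × wet volume.
    """
    dry = wet_volume_m3 * dry_factor
    total = sum(ratio)
    cement, sand, agg = (dry * r / total for r in ratio)
    return cement, sand, agg

# 10 m³ of M10 (1:3:6) concrete
c, s, a = nominal_mix_quantities(10.0)
print(round(c, 2), round(s, 2), round(a, 2))  # cement, sand, aggregate in m³
```

The same helper covers the other nominal grades mentioned on the page by swapping the ratio, e.g. `ratio=(1, 2, 4)` for M15.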
https://answers.everydaycalculation.com/multiply-fractions/10-9-times-90-50
[ "Solutions by everydaycalculation.com\n\n## Multiply 10/9 with 90/50\n\n1st number: 1 1/9, 2nd number: 1 40/50\n\nThis multiplication involving fractions can also be rephrased as "What is 10/9 of 1 40/50?"\n\n10/9 × 90/50 is 2/1.\n\n#### Steps for multiplying fractions\n\n1. Simply multiply the numerators and denominators separately:\n2. 10/9 × 90/50 = (10 × 90)/(9 × 50) = 900/450\n3. After reducing the fraction, the answer is 2/1" ]
[ null, "https://answers.everydaycalculation.com/mathstep-app-icon.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8539494,"math_prob":0.97672117,"size":437,"snap":"2019-43-2019-47","text_gpt3_token_len":166,"char_repetition_ratio":0.15704387,"word_repetition_ratio":0.0,"special_character_ratio":0.4416476,"punctuation_ratio":0.09,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97775626,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-13T01:34:45Z\",\"WARC-Record-ID\":\"<urn:uuid:c5aeff99-da2a-468a-87b1-e60ff17588c9>\",\"Content-Length\":\"7161\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e39a4d99-212a-4a73-aa5e-9855b4a7dce8>\",\"WARC-Concurrent-To\":\"<urn:uuid:35920ed1-1097-4a21-943c-3ecd3bbb842d>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/multiply-fractions/10-9-times-90-50\",\"WARC-Payload-Digest\":\"sha1:NLT4VNTVZ4DNVENE7NGCOPN5NGBMJWT5\",\"WARC-Block-Digest\":\"sha1:DJBOGAKFI5AZPQ6RUWQOU25YYE5MSJ7M\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496665976.26_warc_CC-MAIN-20191113012959-20191113040959-00242.warc.gz\"}"}
https://math.stackexchange.com/questions/1249807/the-class-of-all-functions-between-classes-nbg
[ "# The class of all functions between classes (NBG)\n\nIs it possible in NBG (von Neumann-Bernays-Gödel set theory) to construct the class of all functions $X \to Y$ between two (proper) classes $X,Y$? I guess that this does not work. In the special case $Y=\{0,1\}$ we would get the class of all subclasses of $X$, which does not exist. Can someone confirm this?\n\nAs far as I remember NBG, you cannot, because any function whose domain is a proper class is not a set.\n\nIndeed, let $f:X\rightarrow Y$ be a function (hence a subclass of $X\times Y$ satisfying certain requirements). Consider the map $\pi^X:X\times Y\rightarrow X$ that sends $(x,y)\mapsto x$. This is a function between classes.\n\nNow, suppose $f$ is a set.\n\n$\pi^X|f:f\rightarrow X$ is a surjective function. But the image of a set through a function is a set (see here \"Limitation of size\").\n\nHowever, the above does not answer the question of whether the (meta)category $\mathbf{Cls}$ of classes in NBG forms a cartesian closed category – all it shows is that the obvious candidate does not work. Instead, we make the following observations:\n• $\mathbf{Cls}$ has finite limits: finite products are not a problem, and the formation of equalisers does not require quantification over classes.\n• $\mathbf{Cls}$ has a subobject classifier: you can check that $\{ 0, 1 \}$ does the job.\n• There is an object $V$ in $\mathbf{Cls}$ such that every object $X$ in $\mathbf{Cls}$ admits a monomorphism $X \to V$.\nThus we may apply an argument of McLarty to deduce that $\mathbf{Cls}$ is not a topos, hence not cartesian closed." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90029395,"math_prob":0.9964986,"size":1691,"snap":"2020-10-2020-16","text_gpt3_token_len":445,"char_repetition_ratio":0.13930054,"word_repetition_ratio":0.0,"special_character_ratio":0.25487876,"punctuation_ratio":0.113372095,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9996772,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-22T01:29:52Z\",\"WARC-Record-ID\":\"<urn:uuid:f3c8440b-c264-4b54-acd7-3e3d1a56324e>\",\"Content-Length\":\"147590\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ebfb56bc-a05b-4341-bc9c-c231e9ec12e4>\",\"WARC-Concurrent-To\":\"<urn:uuid:a8a23d2f-beca-4d2b-87f5-52a31ba00774>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/1249807/the-class-of-all-functions-between-classes-nbg\",\"WARC-Payload-Digest\":\"sha1:2BRBEFNWCISUBOITZ64XIVSJNEX7NPP2\",\"WARC-Block-Digest\":\"sha1:JIX7QWAYHT5GLZPUA5V24SZT2QDR7X44\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875145621.28_warc_CC-MAIN-20200221233354-20200222023354-00307.warc.gz\"}"}
http://brane-space.blogspot.com/2012/06/solving-neutrino-puzzleand-matter.html
[ "## Thursday, June 21, 2012\n\n### Solving the Neutrino Puzzle....and Matter-Antimatter Asymmetry\n\nFirst, let me say that (to echo many complaints about the mainstream 24/7 news 'Wurlitzer'), one seldom learns or hears of positive stories. Most are negative and are concerned with conflicts, global or local. Thus, it is refreshing to note an ongoing collaboration between two nations - China and the United States - that are ordinarily portrayed in perpetual contretemps.\n\nThis concerns an ongoing experiment at Daya Bay, near Hong Kong, set amidst no fewer than six nuclear reactors. These are arrayed in two clusters near two vertices of a \"neutrino detection\" triangle. The detector triangle itself is composed of three separate electron anti-neutrino detectors (about 2 km each from the reactors), which reside in water baths to unmask any cosmic ray interlopers. Each of the detectors measures the electron anti-neutrino (call it -v_e) flux from the reactors by recording any light flashes due to -v_e collisions within its 20 tons of liquid.\n\nDaya Bay's results proceeded basically in two related phases:\n\nI. Small deficits of -v_e were previously recorded at short distances from the reactors, and it was further reported that Θ_13, the last of 3 \"mixing angles\" that characterize neutrino oscillation, is non-zero.\n\nII. Further measurements of Θ_13 disclosed it was not only non-zero but large enough for the Daya experimenters to begin investigating neutrinos as factors in the matter-antimatter asymmetry of the cosmos.\n\nThis is big deal stuff, because cosmologists have always been perplexed by the apparent preponderance of matter in relation to antimatter in our universe. (I tried to solve this in a high school science fair project by postulating a separate anti-matter cosmos that operated in the context of 'anti-time' or negative time, i.e. 
with the time vector negentropic as opposed to entropic)\n\nThe survival of so little antimatter in our cosmos requires a violation of what is called \"CP symmetry invariance\". We don't know WHY there is this asymmetry, but it may have something to do with Fitch and Cronin's (1963-64) discovery of a violation of CPT invariance. (C for charge conjugation, P for parity (spatial reflection) and T for time reversal.) Up until their 1960s investigations, it was widely accepted by physicists that nature played no favorites where charge conjugation, parity and time reversal were concerned. The discovery of a fundamental violation (Fitch and Cronin found that a tiny fraction, 45 out of 22,700 K2 mesons, spontaneously disintegrates into 2 pions, i.e. π mesons, instead of the usual 3) changed all this.\n\nIt was suggested by them that this CPT invariance violation might also - in some way - account for the apparent asymmetry in the distribution of matter with respect to antimatter. Since then experiments have disclosed T-invariance can be subsumed by CP symmetry invariance. Trouble is, the existence of so little antimatter still violates CP invariance. (Weak quark interactions exhibit some CP violation but too small to explain the cosmological asymmetry between matter and antimatter.)\n\nGiven this situation, the large value of the third mixing angle Θ_13 came to the fore. More on this now, and some quantitative details. First, contrary to the old notion that there was only one type (\"flavor\") of neutrino, we now know there are three: electron, muon and tau neutrinos. In effect, there must be three different corresponding neutrino masses we can call: m1, m2 and m3.\n\nSecond, we now know that the three \"flavors\" are really different superpositions (see any of my earlier 2010 blogs on quantum superpositions) of the 3 basic neutrino mass states. 
Moreover, and to make it more complex, we know that quantum interference between mass states means a neutrino originating in one \"flavor\" can transmogrify to another over its transit. Experimental confirmation of this (and over large distances) arrives from MeV neutrinos from the Sun and muon neutrinos from the high atmosphere.\n\nBecause of the oscillations and quantum interference we need to reckon in a \"misalignment\" between flavor and the basic neutrino masses. This is done by reference to three independent \"mixing angles\": Θ_12, Θ_23 and Θ_13. To a good approximation, oscillation in any one regime is characterized by just one Θ_ij and a corresponding mass difference, defined:\n\ndelta m_ij^2 = [m_j^2 - m_i^2]\n\nAs an example, the probability that a muon neutrino of energy E acquires a different flavor after traversing distance L is:\n\nP = sin^2 (2Θ_23) sin^2 (L/lambda23)\n\nwhere lambda23 is the energy-dependent oscillation length, given by:\n\nlambda23 = 4ħ E c / (delta m_32^2)\n\nHow well do we know the parameters? Atmospheric neutrino observations yield Θ_23 ~ 45 degrees, while delta m_32^2 = 0.0024 eV^2. Meanwhile, solar neutrino data yield roughly 33 degrees for Θ_12 and delta m_21^2 = 0.00008 eV^2. (Note: ħ is the Planck constant of action divided by 2π.) Then:\n\ndelta m_31^2 = [delta m_21^2 + delta m_32^2] = 0.00008 eV^2 + 0.0024 eV^2 = 0.00248 eV^2\n\nwhich is close to delta m_32^2.\n\nThis was fine as it went, but a further issue that needed to be resolved was whether the oscillation amplitude, e.g. sin^2 (2Θ_13) (for the disappearance of reactor antineutrinos associated with the delta m_31^2, delta m_32^2 approximation), would still be large enough to detect. This was the core experimental quandary facing the Daya Bay collaborators. They were more or less guided (optimistically!) 
by earlier independent results that set an upper limit of 0.16 (the Daya Bay -v_e detector array was designed to measure a smallest value of 0.01)\n\nIn March this year, it was therefore most gratifying when Yifang Wang of the Beijing Institute of High Energy Physics reported sin^2 (2Θ_13) = 0.092 ± 0.017, corresponding to Θ_13 = 9 degrees.\n\nIs the \"case\" closed? Not necessarily! We always must reckon in necessary and sufficient conditions. The fact is that a non-zero Θ_13 is a necessary but not sufficient condition for CP-violation in neutrino interactions. How to proceed? Well, we know that since there are 3 non-zero mixing angles, the unitary matrix that describes all the oscillations has an extra degree of freedom.\n\nNote: readers who'd like more familiarity with unitary matrices can check out my earlier blog:\n\nhttp://brane-space.blogspot.com/2012/01/more-linear-algebra-unitary-and.html\n\nThis additional degree of freedom entails an independent phase factor,\n\nexp (i σ)\n\nwhich dictates the CP-violation. Standard theory can't predict σ, so it must be determined via experiment. Such an experiment has been proposed and is known as the 'Long Baseline Neutrino Experiment' (LBNE). The plan is to direct an intense beam of muon neutrinos from Fermilab at a detector in an underground lab in South Dakota, some hundreds of miles distant.\n\nThe problem? Like so many areas of pure and experimental physics and astronomy now, LBNE is encountering funding problems. It appears the Republican House isn't convinced the money spent to solve these open-ended issues is worth it. So the future of the experiment is in question.\n\nStay tuned.\n\n-----\nReference: Physics Today, May, 2012, p. 13."
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9288388,"math_prob":0.8980614,"size":7111,"snap":"2021-04-2021-17","text_gpt3_token_len":1701,"char_repetition_ratio":0.10004221,"word_repetition_ratio":0.0,"special_character_ratio":0.23119111,"punctuation_ratio":0.102118,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9741225,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-21T14:08:01Z\",\"WARC-Record-ID\":\"<urn:uuid:f94dcc98-f65e-4717-8dfc-95150bd013f4>\",\"Content-Length\":\"104238\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:85a068fb-4a3a-4cb8-9488-360dcfbae43c>\",\"WARC-Concurrent-To\":\"<urn:uuid:1266c691-0bbe-41b1-8c95-d242a286bda6>\",\"WARC-IP-Address\":\"172.217.12.225\",\"WARC-Target-URI\":\"http://brane-space.blogspot.com/2012/06/solving-neutrino-puzzleand-matter.html\",\"WARC-Payload-Digest\":\"sha1:RHD7PMZWFT3RO4I5JCTTDPXWJA6FYRAC\",\"WARC-Block-Digest\":\"sha1:5OPBHRUDHMKUWDGVFCKHWTUT4QFVBR4A\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703524858.74_warc_CC-MAIN-20210121132407-20210121162407-00265.warc.gz\"}"}
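The post's oscillation formula can be evaluated numerically. The sketch below uses the atmospheric-sector numbers quoted there (Θ_23 ≈ 45°, Δm²_32 = 0.0024 eV²) and the oscillation length λ = 4ħEc/Δm²; it uses the conventional two-flavour amplitude sin²(2θ), and the 1 GeV energy is an illustrative choice of mine, not a value from the post.

```python
import math

HBAR_C_EV_M = 1.97327e-7  # ħc ≈ 197.327 MeV·fm, expressed in eV·m

def osc_length_m(E_eV, dm2_eV2):
    """Oscillation length lambda = 4 ħ E c / (delta m^2), in metres."""
    return 4.0 * HBAR_C_EV_M * E_eV / dm2_eV2

def transition_prob(L_m, E_eV, theta_rad, dm2_eV2):
    """Two-flavour mixing probability P = sin^2(2θ) · sin^2(L/λ)."""
    lam = osc_length_m(E_eV, dm2_eV2)
    return math.sin(2.0 * theta_rad) ** 2 * math.sin(L_m / lam) ** 2

theta23 = math.radians(45.0)   # maximal mixing, per the post
dm2_32 = 0.0024                # eV², per the post
E = 1.0e9                      # a 1 GeV muon neutrino (illustrative)

lam = osc_length_m(E, dm2_32)
P_max = transition_prob(lam * math.pi / 2.0, E, theta23, dm2_32)
print(f"lambda_23 ≈ {lam / 1000.0:.0f} km, peak transition probability = {P_max:.3f}")
```

For these numbers the oscillation length comes out at a few hundred kilometres, which is why long baselines such as the proposed Fermilab-to-South-Dakota beam are needed.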
https://www.analyticbridge.datasciencecentral.com/forum/topics/creating-a-big-data-set-from-a-small-data-set-for-logistic?commentId=2004291%3AComment%3A257724
[ "# AnalyticBridge\n\nA Data Science Central Community\n\nI am attempting a POC for big data regression modeling. Since actual data is hard to come by, can I actually use a smaller data set and replicate it in some way to get the large data set? What's the best way to do that?\n\nViews: 2308\n\n### Replies to This Discussion\n\nYes, through simulation. You first have to compute the means for all of the variables as well as the correlation or covariance matrices.  What you do next depends on your software. e.g. if you assume a multivariate normal distribution, you can use the R mvrnorm function to generate as many samples as you would like.\n\nhttp://stat.ethz.ch/R-manual/R-devel/library/MASS/html/mvrnorm.html\n\nAnother way to do it would be to assign a new variable as a weighting variable which represents the number of occurrences of each sample observation.  Most stat packages can handle this.\n\nBut since you are framing this as a \"big data\" problem, sounds like using simulation to generate the actual raw data may be a better way to go.\n\nThanks for your reply, Ralph. Just a correction - I know the ranges but do not have a small data set. This is, by the way, a predictive maintenance problem. I can use mvrnorm for predictors (sensors) as you suggested. But how do I put in the target variable (1,0) once I get the normal distribution for predictors? Any ideas, or should I go for the higher ranges of values in sensors and randomly generate 1s there and 0s for the rest?\n\nOne possible way to simulate values for the dependent variable can be to use a conditional distribution estimated from the small data you have. This is somewhat extending Ralph's recommended method of using a suitable joint distribution to simulate values for the predictors.\n\nOnce you have a model built on the small data, and a set of simulated values for the independent variables, predict values/probabilities of the dependent variable and add an error term (perhaps drawn from iid normal (0, 0.1)). 
This is a method I have used to create datasets for POC/R&D/Training projects involving many different types of Generalized Linear Models.\n\nTejamoy, sorry for the misleading opening statement. I do not have the small data set - rather the ranges of predictors. Now I have to put in 0/1 and simulate a predictive maintenance problem so that I can use a classification method, logistic regression or decision tree. Any ideas how I generate the fault (0/1) columns of my data set?\n\nIn that case, what you can do is:\n\nCreate a linear combination of the variables (predictors), say, LC = a+b1*x1+...+bN*xN\n\na, b1,...bN being known numbers (as opposed to parameters to be estimated). For example:\n\nLC = 12.64+0.32*x1+...-0.987*xN\n\nCreate ELC = exp(LC)/[1+exp(LC)]\n\nThen create the binary dependent as\n\nif ELC < 0.4 then Y = 0\n\nElse if ELC > 0.6 then Y = 1\n\nElse if Random Vbl (from Uniform dist) > 0.05 then Y = 1\n\nElse Y = 0\n\n(Use an appropriate Random Vbl generator function depending on which software you are using)\n\nNow with simulated values of x1,...,xN you can have a dataset of any size.\n\nDoes this make sense?\n\nIt does! Thanks a ton\n\nRatheen, you had raised an interesting topic / discussion.\n\nTejamoy / Is this proposed solution usable across many, not necessarily Predictive Maintenance, problems, e.g. in Insurance? Especially rare events (natural hazards modeling frequency, say we have 1-5 storms during a certain period, as an example) and the severity of losses (given a distribution for sizes from history).\n\nSimilarly in Health Care Risk Assessment for certain diseases \"Framingham Heart Study\" - Logistic Regression application Odds Ratios for Coronary problems based on various risk factors Age, family history etc. -- Thanks" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8744311,"math_prob":0.9044539,"size":3991,"snap":"2020-10-2020-16","text_gpt3_token_len":936,"char_repetition_ratio":0.10283421,"word_repetition_ratio":0.0029411765,"special_character_ratio":0.23327486,"punctuation_ratio":0.10519645,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9844522,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-22T12:12:30Z\",\"WARC-Record-ID\":\"<urn:uuid:179f3b5a-f329-4302-8221-b2e781f05bc3>\",\"Content-Length\":\"71898\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:63126f41-2fe6-4b45-9422-5350c6205023>\",\"WARC-Concurrent-To\":\"<urn:uuid:e33c3272-faff-4271-827d-c401829ee1f2>\",\"WARC-IP-Address\":\"104.25.108.103\",\"WARC-Target-URI\":\"https://www.analyticbridge.datasciencecentral.com/forum/topics/creating-a-big-data-set-from-a-small-data-set-for-logistic?commentId=2004291%3AComment%3A257724\",\"WARC-Payload-Digest\":\"sha1:TM24FDWNUB7CG4SWLPJVRY52PKEJSDLN\",\"WARC-Block-Digest\":\"sha1:CU4TVVKCKOV4QW6DG5JGPDHL47U3QGU4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875145676.44_warc_CC-MAIN-20200222115524-20200222145524-00235.warc.gz\"}"}
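The thresholding recipe in the thread above (linear combination, logistic transform, threshold with a random band in the middle) maps directly to code. Below is a hedged Python sketch: the predictors are drawn from standard normals purely for illustration, the coefficients 0.32 and -0.987 echo the post's example, and the intercept and seed are my own arbitrary choices made so that both classes appear in the sample.

```python
import math
import random

def simulate_logistic_dataset(n, coefs, intercept, seed=42):
    """Simulate (X, y) per the recipe above: LC = a + sum(b_i * x_i),
    ELC = exp(LC)/(1+exp(LC)), then threshold ELC into a binary target."""
    rng = random.Random(seed)
    rows, ys = [], []
    for _ in range(n):
        x = [rng.gauss(0.0, 1.0) for _ in coefs]   # stand-in predictor draws
        lc = intercept + sum(b * xi for b, xi in zip(coefs, x))
        elc = 1.0 / (1.0 + math.exp(-lc))          # logistic transform
        if elc < 0.4:
            y = 0
        elif elc > 0.6:
            y = 1
        else:                                      # ambiguous band: mostly 1s
            y = 1 if rng.random() > 0.05 else 0
        rows.append(x)
        ys.append(y)
    return rows, ys

X, y = simulate_logistic_dataset(1000, coefs=[0.32, -0.987], intercept=0.1)
print(sum(y), "positives out of", len(y))
```

Scaling `n` up gives the arbitrarily large labeled dataset the original poster asked for; a real POC would swap the normal draws for draws matching the known sensor ranges.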
https://studyres.com/doc/17184809/ac-vs-dc-ac-voltage-stands-for-alternating-current.-the-f...
[ "# Download AC vs DC AC Voltage stands for Alternating Current. The flow of elec

Transcript
```“What is the difference between AC & DC?”
AC vs DC
AC Voltage stands for Alternating Current. 
The flow of electricity periodically changes direction from positive to negative about a neutral point (not always ground!). Typical examples of AC voltage are residential and commercial outlet power.
[Figure: alternating current — voltage vs. time]
DC Voltage stands for Direct Current. The flow of electricity is always in one direction, or said to be positive. Typical examples of DC voltage in use are: automobiles, which run on 12 VDC; commercial trucks, which run on 24 VDC; and many electronic devices which take AA, AAA, etc. style batteries.
[Figure: direct current — voltage vs. time]
DesignStein LLC Product Engineering
2402 College Hills Blvd #4, San Angelo, TX 76904
P: 325-227-6053 F: 325-617-7908
[email protected]
```" ]
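The distinction can be illustrated with a short sketch. The numbers are illustrative: a 170 V peak / 60 Hz sine roughly matches a US 120 V RMS outlet, and 12 V matches the automotive example above:

```python
import math

def ac_voltage(t, v_peak=170.0, freq=60.0):
    # Instantaneous AC voltage: alternates sign around the neutral point
    return v_peak * math.sin(2 * math.pi * freq * t)

def dc_voltage(t, level=12.0):
    # DC voltage holds one polarity regardless of time
    return level

# Sample one full 60 Hz cycle: the AC waveform spends half its time
# negative, while DC never changes sign.
samples = [ac_voltage(n / 600.0) for n in range(10)]
print(min(samples) < 0 < max(samples))      # True: polarity alternates
print(dc_voltage(0.0) == dc_voltage(1.0))   # True: constant
```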
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.71141404,"math_prob":0.7680365,"size":2160,"snap":"2022-27-2022-33","text_gpt3_token_len":441,"char_repetition_ratio":0.2884972,"word_repetition_ratio":0.058064517,"special_character_ratio":0.19490741,"punctuation_ratio":0.14577259,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9939195,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-11T18:25:59Z\",\"WARC-Record-ID\":\"<urn:uuid:49611d4e-33ec-4941-a95b-87f3c605f03a>\",\"Content-Length\":\"36086\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6e99aebf-5e97-4188-b7c5-45ae2c8238b9>\",\"WARC-Concurrent-To\":\"<urn:uuid:7b524808-b639-499a-9a95-e377f43624cb>\",\"WARC-IP-Address\":\"104.21.88.174\",\"WARC-Target-URI\":\"https://studyres.com/doc/17184809/ac-vs-dc-ac-voltage-stands-for-alternating-current.-the-f...\",\"WARC-Payload-Digest\":\"sha1:A4ZLG75FG4NXCOVF6I3363ZZGP3QB3DQ\",\"WARC-Block-Digest\":\"sha1:6ATNG5EQXS2VSWO2OA4UDILBZXWHOQJE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882571483.70_warc_CC-MAIN-20220811164257-20220811194257-00044.warc.gz\"}"}
https://blog.mathquant.com/tag/c
[ "", null, "## \"C++ version of OKEX futures contract hedging strategy\" that takes you through hardcore quantitative strategy

Speaking of hedging strategies, there are various types, diverse combinations, and diverse ideas in various markets. We explore the design ideas and concepts of the…", null, "## 4.6 How to implement strategies in C++ language

Summary In the previous article, we explained the premise of implementing the trading strategy from the introduction of C++ language, basic grammar, and strategy structure.…", null, "## 4.5 C++ Language Quick Start

Summary C++ is a very difficult programming language. The hard part mainly is to learn in depth, but if you just write strategy logic by…", null, "## 4.2 How to implement strategic trading in JavaScript language

Summary In the previous article, we introduced the fundamental knowledge that when using JavaScript to write a program, including the basic grammar and materials. In…", null, "## 4.1 JavaScript language quick start

Background This section gives a little background on JavaScript to help you understand why it is the way it is. JavaScript Versus ECMAScript ECMAScript is…", null, "## 3.5 Visual Programming language implementation of trading strategies

Summary In the previous section, we learned about the introduction and characteristics of the visual programming tool, the \" hello world \" example, and the…", null, "## 3.4 Visual programming quick start

Summary Many subjective traders are interested in quantitative trading, at first, they begin with full confidence. After learning the basic grammar, data operations, data structure,…", null, "## 3.3 How to implement strategies in M language

www.fmz.com Summary In the previous article, we explained the premise of realizing the trading strategy from the aspects of the introduction of the M language…", null, "## 3.2 Getting started with the M language

https://www.fmz.com/bbs-topic/3695 Summary What is the M language? The so-called M language is a set of programmatic functions that extend from the early stock trading technical…", null, "## 3.1 Quantitative trading programming language evaluation

Summary In Chapters 1 and 2, we learned the basics of quantitative trading and the uses of FMZ Quant tools. In this chapter, we will…" ]
[ null, "https://secure.gravatar.com/avatar/d311e5dfa8f7d436e7d4c94814e71096", null, "https://secure.gravatar.com/avatar/d311e5dfa8f7d436e7d4c94814e71096", null, "https://secure.gravatar.com/avatar/d311e5dfa8f7d436e7d4c94814e71096", null, "https://secure.gravatar.com/avatar/d311e5dfa8f7d436e7d4c94814e71096", null, "https://secure.gravatar.com/avatar/d311e5dfa8f7d436e7d4c94814e71096", null, "https://secure.gravatar.com/avatar/d311e5dfa8f7d436e7d4c94814e71096", null, "https://secure.gravatar.com/avatar/d311e5dfa8f7d436e7d4c94814e71096", null, "https://secure.gravatar.com/avatar/d311e5dfa8f7d436e7d4c94814e71096", null, "https://secure.gravatar.com/avatar/d311e5dfa8f7d436e7d4c94814e71096", null, "https://secure.gravatar.com/avatar/d311e5dfa8f7d436e7d4c94814e71096", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8271685,"math_prob":0.47515506,"size":2215,"snap":"2020-34-2020-40","text_gpt3_token_len":464,"char_repetition_ratio":0.1614654,"word_repetition_ratio":0.11444142,"special_character_ratio":0.22663657,"punctuation_ratio":0.10796915,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9521639,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-15T16:55:11Z\",\"WARC-Record-ID\":\"<urn:uuid:1a3d7e06-3659-451d-b354-40151163e8c1>\",\"Content-Length\":\"109901\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e4760a6f-d0d0-4e94-8c35-b9c9ea94abce>\",\"WARC-Concurrent-To\":\"<urn:uuid:5382e675-85ba-4016-a29a-c10962fade37>\",\"WARC-IP-Address\":\"104.31.87.137\",\"WARC-Target-URI\":\"https://blog.mathquant.com/tag/c\",\"WARC-Payload-Digest\":\"sha1:G6JGBP53KDLZOHSURE7K5SHN7MR6PNFW\",\"WARC-Block-Digest\":\"sha1:OV5YJGKT6QS5UDO772UBTH7TD6UF773C\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439740929.65_warc_CC-MAIN-20200815154632-20200815184632-00273.warc.gz\"}"}
http://www.kylesconverter.com/pressure/millipascals-to-centimeters-of-water
[ "Convert Millipascals to Centimeters Of Water

Kyle's Converter > Pressure > Millipascals > Millipascals to Centimeters Of Water

 Millipascals (mPa) Centimeters Of Water (cmH2O)*
Reverse conversion?
Centimeters Of Water to Millipascals
(or just enter a value in the \"to\" field)

Please share if you found this tool useful:

Unit Descriptions
1 Millipascal:
One millipascal (mPa) is equal to exactly one thousandth of a pascal. A pascal (Pa) is the SI unit for pressure, defined as one newton per square meter. 1 mPa = 0.001 Pa.
1 Centimeter of Water:
= 999.972 kg/m3 * 1 cm * g (approx.)

Conversions Table
1 Millipascals to Centimeters Of Water = 0
2 Millipascals to Centimeters Of Water = 0
3 Millipascals to Centimeters Of Water = 0
4 Millipascals to Centimeters Of Water = 0
5 Millipascals to Centimeters Of Water = 0.0001
6 Millipascals to Centimeters Of Water = 0.0001
7 Millipascals to Centimeters Of Water = 0.0001
8 Millipascals to Centimeters Of Water = 0.0001
9 Millipascals to Centimeters Of Water = 0.0001
10 Millipascals to Centimeters Of Water = 0.0001
20 Millipascals to Centimeters Of Water = 0.0002
30 Millipascals to Centimeters Of Water = 0.0003
40 Millipascals to Centimeters Of Water = 0.0004
50 Millipascals to Centimeters Of Water = 0.0005
60 Millipascals to Centimeters Of Water = 0.0006
70 Millipascals to Centimeters Of Water = 0.0007
80 Millipascals to Centimeters Of Water = 0.0008
90 Millipascals to Centimeters Of Water = 0.0009
100 Millipascals to Centimeters Of Water = 0.001
200 Millipascals to Centimeters Of Water = 0.002
300 Millipascals to Centimeters Of Water = 0.0031
400 Millipascals to Centimeters Of Water = 0.0041
500 Millipascals to Centimeters Of Water = 0.0051
600 Millipascals to Centimeters Of Water = 0.0061
800 Millipascals to Centimeters Of Water = 0.0082
900 Millipascals to Centimeters Of Water = 0.0092
1,000 Millipascals to Centimeters Of Water = 0.0102
10,000 Millipascals to Centimeters Of Water = 0.102
100,000 Millipascals to Centimeters Of Water = 1.0197
1,000,000 Millipascals to Centimeters Of Water = 10.1974" ]
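The unit definitions above pin down the conversion factor; a small Python sketch (the function name is ours, not from the site) reproduces the table's rounded values:

```python
# Conversion factor derived from the unit definitions above:
# 1 cmH2O = 999.972 kg/m^3 * 0.01 m * 9.80665 m/s^2 (in pascals),
# and 1 mPa = 0.001 Pa.
PA_PER_CMH2O = 999.972 * 0.01 * 9.80665   # ~98.0638 Pa per cmH2O

def millipascals_to_cmh2o(mpa):
    return (mpa * 0.001) / PA_PER_CMH2O

# Spot-check against the table rows above:
print(round(millipascals_to_cmh2o(100_000), 4))    # 1.0197
print(round(millipascals_to_cmh2o(1_000_000), 4))  # 10.1974
```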
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7568776,"math_prob":0.9973027,"size":1514,"snap":"2019-26-2019-30","text_gpt3_token_len":509,"char_repetition_ratio":0.33774835,"word_repetition_ratio":0.29411766,"special_character_ratio":0.35535008,"punctuation_ratio":0.11313868,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9988504,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-25T10:03:53Z\",\"WARC-Record-ID\":\"<urn:uuid:0c5cdbb6-6737-4c76-904b-922aa016c7af>\",\"Content-Length\":\"19047\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0ba5f72f-9dfe-4e32-920b-6fa53ae80d82>\",\"WARC-Concurrent-To\":\"<urn:uuid:ccfff384-9c8c-4100-93fc-f8999fec5325>\",\"WARC-IP-Address\":\"99.84.106.70\",\"WARC-Target-URI\":\"http://www.kylesconverter.com/pressure/millipascals-to-centimeters-of-water\",\"WARC-Payload-Digest\":\"sha1:XC3P3OW263SL632NWGWQRSDPFG3VT2KB\",\"WARC-Block-Digest\":\"sha1:Q5HUWM4RC4NOLBKL6I5W7JT6HZKCJPTL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627999817.30_warc_CC-MAIN-20190625092324-20190625114324-00540.warc.gz\"}"}
http://www.kylesconverter.com/mass-flow/decigrams-per-second-to-grams-per-year
[ "# Convert Decigrams Per Second to Grams Per Year

### Kyle's Converter > Mass Flow > Decigrams Per Second > Decigrams Per Second to Grams Per Year

 Decigrams Per Second (dg/s) Grams Per Year (g/yr)*
Reverse conversion?
Grams Per Year to Decigrams Per Second
(or just enter a value in the \"to\" field)

Please share if you found this tool useful:

Unit Descriptions
1 Decigram per Second:
Mass flow of decigrams across a threshold per unit time of a second. 1 decigram per second = 0.0001 kilograms per second (SI base unit). 1 dg/s = 0.0001 kg/s.
1 Gram per Year:
Mass flow of grams across a threshold per unit time of a year. A 365 day civil year. 1 gram per year = 0.001/31536000 kilograms per second (SI base unit). 1 g/yr ≈ 3.1709792 x 10^-11 kg/s.

Conversions Table
1 Decigrams Per Second to Grams Per Year = 3153600
2 Decigrams Per Second to Grams Per Year = 6307200
3 Decigrams Per Second to Grams Per Year = 9460800
4 Decigrams Per Second to Grams Per Year = 12614400
5 Decigrams Per Second to Grams Per Year = 15768000
6 Decigrams Per Second to Grams Per Year = 18921600
7 Decigrams Per Second to Grams Per Year = 22075200
8 Decigrams Per Second to Grams Per Year = 25228800
9 Decigrams Per Second to Grams Per Year = 28382400
10 Decigrams Per Second to Grams Per Year = 31536000
20 Decigrams Per Second to Grams Per Year = 63072000
30 Decigrams Per Second to Grams Per Year = 94608000
40 Decigrams Per Second to Grams Per Year = 126144000
50 Decigrams Per Second to Grams Per Year = 157680000
60 Decigrams Per Second to Grams Per Year = 189216000
70 Decigrams Per Second to Grams Per Year = 220752000
80 Decigrams Per Second to Grams Per Year = 252288000
90 Decigrams Per Second to Grams Per Year = 283824000
100 Decigrams Per Second to Grams Per Year = 315360000
200 Decigrams Per Second to Grams Per Year = 630720000
300 Decigrams Per Second to Grams Per Year = 946080000
400 Decigrams Per Second to Grams Per Year = 1261440000
500 Decigrams Per Second to Grams Per Year = 1576800000
600 Decigrams Per Second to Grams Per Year = 1892160000
800 Decigrams Per Second to Grams Per Year = 2522880000
900 Decigrams Per Second to Grams Per Year = 2838240000
1,000 Decigrams Per Second to Grams Per Year = 3153600000
10,000 Decigrams Per Second to Grams Per Year = 31536000000
100,000 Decigrams Per Second to Grams Per Year = 315360000000
1,000,000 Decigrams Per Second to Grams Per Year = 3.1536E+12" ]
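The rate conversion follows directly from the unit definitions: scale decigrams to grams and seconds to a 365-day civil year. A minimal Python sketch (function name is ours, not from the site):

```python
# 1 dg = 0.1 g, and the page uses a 365-day civil year (31,536,000 s).
SECONDS_PER_YEAR = 365 * 24 * 60 * 60   # 31536000

def dgps_to_gpyr(dg_per_s):
    # multiply by seconds per year, then divide by 10 to turn
    # decigrams into grams
    return dg_per_s * SECONDS_PER_YEAR / 10

print(dgps_to_gpyr(1))   # 3153600.0
print(dgps_to_gpyr(70))  # 220752000.0
```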
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5725294,"math_prob":0.9837281,"size":1683,"snap":"2019-26-2019-30","text_gpt3_token_len":506,"char_repetition_ratio":0.37522334,"word_repetition_ratio":0.40268457,"special_character_ratio":0.419489,"punctuation_ratio":0.021052632,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9940341,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-21T21:24:51Z\",\"WARC-Record-ID\":\"<urn:uuid:2a0ce0ae-5916-40d7-a1c8-9b8fc59b2e91>\",\"Content-Length\":\"19570\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:411cbe2f-ac62-450e-a172-1aa1cf406958>\",\"WARC-Concurrent-To\":\"<urn:uuid:9273dd39-f0ea-4e32-86ee-694311ac1faa>\",\"WARC-IP-Address\":\"99.84.106.13\",\"WARC-Target-URI\":\"http://www.kylesconverter.com/mass-flow/decigrams-per-second-to-grams-per-year\",\"WARC-Payload-Digest\":\"sha1:J5NKRQZZK7KAVCXU6JPLH3Q7GENDLS3Q\",\"WARC-Block-Digest\":\"sha1:ZPEXUBGVMIRQVLNG6W33WRW3LBYQKBZU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195527204.71_warc_CC-MAIN-20190721205413-20190721231413-00071.warc.gz\"}"}
https://realmath.de/english/integers/addsubint/addsubint03.php
[ "Order the positive and negative numbers Find the sum of the positive numbers Find the sum of the negative numbers Get the total result", null, "Click on new to create a new problem.

Can you top 295 points?

#### Add and subtract integers with strategy -profi 2-

Note the calculation strategy.

realmath.de

... more than just practicing", null, "" ]
[ null, "https://realmath.de/english/integers/addsubint/loesprofi0.png", null, "https://realmath.de/bilder/donate.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5794277,"math_prob":0.99202317,"size":554,"snap":"2023-40-2023-50","text_gpt3_token_len":139,"char_repetition_ratio":0.12,"word_repetition_ratio":0.08421053,"special_character_ratio":0.2166065,"punctuation_ratio":0.08737864,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9629335,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-02T20:24:36Z\",\"WARC-Record-ID\":\"<urn:uuid:2e7a4fdd-d342-4d68-985c-bcfb9c3fc895>\",\"Content-Length\":\"11428\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:04353d76-ce06-4461-a4cd-01f156c61e52>\",\"WARC-Concurrent-To\":\"<urn:uuid:1824c84f-69d0-4910-ac12-d305b1621a33>\",\"WARC-IP-Address\":\"217.160.0.12\",\"WARC-Target-URI\":\"https://realmath.de/english/integers/addsubint/addsubint03.php\",\"WARC-Payload-Digest\":\"sha1:LCDTR7GIXEBTF7PVOBE5Y5KBR2EFHT4S\",\"WARC-Block-Digest\":\"sha1:G3T6MCJY5I3CCVHCJ7LZCZDOTEKVD4RU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233511021.4_warc_CC-MAIN-20231002200740-20231002230740-00565.warc.gz\"}"}