URL | text_list | image_list | metadata
---|---|---|---|
https://brilliant.org/practice/regular-polygons-level-4-challenges/?subtopic=polygons&chapter=regular-polygons | [
"",
null,
"Geometry\n\n# Regular Polygons: Level 4 Challenges",
null,
"Inside a regular pentagon $ABCDE$, construct 5 more regular pentagons of side length $\\dfrac{1}{2}AB$. The part of the overlapping of these pentagons yields another regular pentagon of side length $MN$.\n\nLet $r=\\dfrac{AB}{MN}$. Find $\\displaystyle \\left \\lfloor 1000r \\right \\rfloor$.",
null,
"Above shows a 18-sided regular polygon. How many obtuse triangles are there formed by 3 vertices?\n\n###### Try part 1.",
null,
"The $n^{\\text{th}}$ figure in the above sequence is constructed by the following procedure:\n\n1. Draw a blue disc of radius $\\displaystyle\\sqrt{\\frac{2016}{\\pi}}$\n2. Remove a regular $n$-gon area from the (smallest) disc\n3. Inscribe a blue disc inside the empty $n$-gon space\n4. Repeat steps 2-4\n\nLet $A_n$ be the total blue area of the $n^{\\text{th}}$ figure in the sequence.\n\nCompute $\\displaystyle\\lim_{n\\to\\infty}A_n$.",
null,
"It is not easy to draw a regular decagon without tools.\n\nOn a piece of writing paper (with equally spaced lines), I am trying to draw a regular decagon, as shown above. I started by drawing two sides so that their vertical extent is precisely 1 unit of the paper (black lines).\n\nNow I want to draw the next side (red line), and I wonder how far it will extend vertically. To 3 decimal places, what is the distance marked with a question mark?",
null,
"A regular octagon $ABCDEFGH$ has squares $ACEG$ and $BDFH$ inscribed in it. These squares form a smaller octagon as shown.\n\nLet the the area of octagon $ABCDEFGH$ be $A_L$ and the area of the smaller octagon be $A_S$. Then for some integers $a$ and $b$, where $b$ is square-free, $\\large \\dfrac{A_S}{A_L}=a-\\sqrt{b}.$ Find $a+b$.\n\n×"
] | [
null,
"https://ds055uzetaobb.cloudfront.net/brioche/chapter/Regular%20Polygons-fAqhAp.png",
null,
"https://ds055uzetaobb.cloudfront.net/brioche/uploads/bvotLK1tci-95822.svg",
null,
"https://ds055uzetaobb.cloudfront.net/brioche/uploads/HGRDFJLSEf-bant.PNG",
null,
"https://ds055uzetaobb.cloudfront.net/brioche/uploads/2Ja3dEOYQ8-cyclicgons.png",
null,
"https://ds055uzetaobb.cloudfront.net/brioche/uploads/6wevWDsOyc-90184.svg",
null,
"https://ds055uzetaobb.cloudfront.net/brioche/uploads/tn6qzjPbzs-200octa.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9385633,"math_prob":0.99909437,"size":1482,"snap":"2021-43-2021-49","text_gpt3_token_len":333,"char_repetition_ratio":0.135318,"word_repetition_ratio":0.15555556,"special_character_ratio":0.2240216,"punctuation_ratio":0.14423077,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998092,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,9,null,6,null,9,null,6,null,8,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-01T21:36:34Z\",\"WARC-Record-ID\":\"<urn:uuid:357d62ab-5d6c-49a2-8b3f-c6eec111154d>\",\"Content-Length\":\"88301\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d4ca5945-d7a0-4f89-a7b9-2690551790ac>\",\"WARC-Concurrent-To\":\"<urn:uuid:4075e7b6-3188-48b2-8a87-9cb7f10cc8e7>\",\"WARC-IP-Address\":\"104.18.9.15\",\"WARC-Target-URI\":\"https://brilliant.org/practice/regular-polygons-level-4-challenges/?subtopic=polygons&chapter=regular-polygons\",\"WARC-Payload-Digest\":\"sha1:MDJE5HJ5ASJOFTPKRIITRGLGHZEMGBYU\",\"WARC-Block-Digest\":\"sha1:VNSCKB2QGVBD7B5ZWZ5UQYJ7Y2XIAAEP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964360951.9_warc_CC-MAIN-20211201203843-20211201233843-00252.warc.gz\"}"} |
https://stats.stackexchange.com/questions/202904/centering-and-scaling-skewed-distributions | [
"Centering and scaling skewed distributions\n\nI have a dataset where the features are skewed (non normal) distributions. My preprocessing pipeline consists of the following steps:\n\n1. Missing values imputation\n2. Centering and scaling (zero mean and unit variance) of each feature\n3. Transforming the features to an approximate normal distribution by using the Box-Cox Transformation.\n\nShould I first do the centering and scaling or the transformation?\n\nSecond, if the distributions are skewed (not normal) is centering and scaling (zero mean and unit variance) still ok? Another possiblity would be to subtract the median (instead of the mean) and dividing by 1.5 * the interquartile range (instead of the standard deviation).\n\n• Depends on your Box-Cox machinery, but I don't know how you consider transformations such as logarithm and square root which are, apart from extra constants, members of the Box-Cox family if your variable has negative values, as it will after standardization. In short, #2 cannot precede #3 without correction. Apart from that, linear transformations such as (value $-$ summary) / spread have precisely no effect on skewness and kurtosis. Note that non-normal is not a synonym for skewed as there are symmetric distributions that aren't normal (and which could be awkward for data analysis). – Nick Cox Mar 21 '16 at 23:56"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8605791,"math_prob":0.9381701,"size":675,"snap":"2019-43-2019-47","text_gpt3_token_len":141,"char_repetition_ratio":0.11773472,"word_repetition_ratio":0.05940594,"special_character_ratio":0.19407408,"punctuation_ratio":0.072072074,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98778266,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-21T05:50:40Z\",\"WARC-Record-ID\":\"<urn:uuid:990ca8e9-e4e0-446f-ad70-a06268aae551>\",\"Content-Length\":\"132451\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0cef5a5e-59f9-42b3-86b9-64610c1504bc>\",\"WARC-Concurrent-To\":\"<urn:uuid:79cc9132-2edf-445e-9f2d-ca616c7c6147>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://stats.stackexchange.com/questions/202904/centering-and-scaling-skewed-distributions\",\"WARC-Payload-Digest\":\"sha1:5RYCBGHCOYUBCX7HXLOJ2A4ZMPCWRE6O\",\"WARC-Block-Digest\":\"sha1:XRG4Q77FMOSXH6UE54WKDJXEWQNBBUL4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570987756350.80_warc_CC-MAIN-20191021043233-20191021070733-00219.warc.gz\"}"} |
https://www.hindawi.com/journals/sv/2019/3524948/ | [
"/ / Article\n\nResearch Article | Open Access\n\nVolume 2019 |Article ID 3524948 | https://doi.org/10.1155/2019/3524948\n\nYang Liu, Yulai Zhao, Jintao Li, Fangquan Xi, Shuanghe Yu, Ye Zhang, \"Research on Fault Feature Extraction Method Based on NOFRFs and Its Application in Rotor Faults\", Shock and Vibration, vol. 2019, Article ID 3524948, 11 pages, 2019. https://doi.org/10.1155/2019/3524948\n\n# Research on Fault Feature Extraction Method Based on NOFRFs and Its Application in Rotor Faults\n\nRevised30 May 2019\nAccepted16 Jun 2019\nPublished02 Jul 2019\n\n#### Abstract\n\nRub-impact between the rotating and static parts is a more common fault. The occurrence of faults is often accompanied by the generation of nonlinear phenomena. However, it is difficult to find out because the nonlinear characteristics are not obvious at the beginning of the fault. As a new frequency domain-based method, nonlinear output frequency response functions (NOFRFs) use the vibration response to extract the nonlinear characteristics of the system. This method has a better recognition rate for fault detection. Also, it has been applied in structural damages detection, but the high-order NOFRFs have the characteristics that the signals are weak and the features are difficult to extract. On this basis, the concept of the weighted contribution rate of the NOFRFs is proposed in this paper. The variable weighted coefficients with orders are used to amplify the influence of high-order NOFRFs on the nonlinearity of the system so as to extract its fault characteristics. The new index RI is proposed based on Clenshaw–Curtis quadrature formula to eliminate the effect of artificially selected weighted coefficients on sensitivity. Especially in the early stage of the fault, the new index varies greatly with the deepening of the fault. Both simulation and experimental results verify the validity and practicability of the new index. 
The new index has certain guiding significance for the detection of mechanical system faults.\n\n#### 1. Introduction\n\nIn the design of modern mechanical products, the distance between components keeps shrinking to meet size requirements. For mechanical products containing rotating parts, this increases the risk of rub-impact faults between rotor and stator.\n\nScholars have done a great deal of research on the diagnosis of rub-impact faults in rotating machinery. The physical model established by the finite element method or the lumped-mass method, combined with the time domain, the frequency domain, and the axial trajectory, is the earliest developed method for detecting rub-impact faults. This method mainly relies on the accuracy of the model [2, 3]. Since then, a large amount of research work has been done to make such models more accurate. As the field developed further, detection methods based on the Poincaré diagram, the bifurcation diagram, and the Lyapunov exponent were gradually applied, which greatly influenced the development of rub-impact fault detection [5, 6]. In recent years, with the continuous improvement of signal processing technology, some specialized signal processing methods have also been applied to the diagnosis of rub-impact faults. Chandra and Sekhar compared the effects of three signal processing methods, short-time Fourier transform (STFT), continuous wavelet transform (CWT), and Hilbert–Huang transform (HHT), on rub-impact fault diagnosis. Zhou et al. studied the intrawave frequency modulation phenomenon and extracted the fault characteristics of the rub-impact from the vibration signal using the M-VNCMD method. Most of the above studies only qualitatively describe the fault characteristics of the rub-impact.
The feature extraction of rub-impact faults, especially of the weak fault features at the beginning of the rub-impact, is not well developed.\n\nResearchers found that the frequency domain output of the system exhibits nonlinear phenomena [11, 12], such as harmonics, intermodulation, and so on, when the mechanical system is damaged. However, linear methods cannot explain the generation of these nonlinear phenomena. Based on this, scholars began to focus on nonlinear methods to find the mechanism of system faults. They have developed a number of nonlinear methods for the diagnosis of system faults, such as the nonlinear decomposition and regulation method and the second-order output spectrum approach [15, 16]. Among many nonlinear methods, the nonlinear method based on the Volterra series has been accepted by many scholars. The Volterra series can explain the nonlinear components of the frequency domain output of the system. Bai and Zhang proposed a blind identification method based on third-order cumulants and an inverse recursive method and used it to identify a simplified second-order Volterra model of hydraulic turbine shafting. Tang et al. used the basic theory of the Volterra series to model and diagnose rotating machinery. The results show that a mechanical system fault can be identified by the change of the generalized frequency response function under different operating conditions of the mechanical system. Xia et al. proposed a key kernels-PSO (KK-PSO) method that identifies the Volterra kernel and used the kernel function of the Volterra series to establish a rotor-bearing fault diagnosis system. The results indicate that the KK-PSO method has better recognition ability.\n\nHowever, the Volterra series is quite complex and can only be applied to some simple nonlinear problems. The generalized frequency response functions (GFRFs) are a concept based on the Volterra series.
They can be regarded as the frequency response function (FRF) of a linear system generalized to a nonlinear system. Scholars have used GFRFs to study the nonlinear components of the frequency domain output of a system. Since a GFRF is a multidimensional function, it is difficult to calculate accurately in practice. Therefore, Lang and Billings proposed the concept of nonlinear output frequency response functions (NOFRFs), which are one-dimensional functions that allow the frequency domain output of a nonlinear system to be calculated in a way similar to that of a linear system. Peng et al. used NOFRFs to detect cracked beams and found an input-independent relationship between NOFRFs and GFRFs when the system is subjected to harmonic excitation. The experimental results show that the input energy of the system is converted into superharmonic components when the excitation frequency ω is close to the resonance frequency of the associated NOFRFs. This also implies that NOFRFs can be used to analyze the behavior of nonlinear systems. Xia et al. proposed a new method for online fault recognition of hydroelectric generators and studied the fault mechanism under different conditions; the results prove that this method is simple and efficient. Huang et al. established NARMAX model identification with a PSO-adaptive lasso algorithm and proposed a damage detection method combining the NARMAX model with a rectangular pulse. The results show that the method can effectively detect the accumulated fatigue damage of used parts.\n\nWith the development of nonlinear output frequency response functions, researchers have begun to focus on related indexes based on NOFRFs [26, 27]. Peng et al. proposed a method based on the NOFRFs-related index Fe; the feasibility of the method was verified by the detection of damaged aluminum plates under the NARMAX model. Huang et al. proposed a new index Ne.
The NOFRFs of the nonlinear system are calculated by the NARMAX model, and then the index Ne is calculated. Compared with the index of undamaged test pieces, the results reveal that the newly proposed method can be used to detect fatigue accumulation damage of used parts. Mao et al. proposed a nonlinear scale index NL and a divergence index NDI on the basis of NOFRFs. The experimental results show that these indexes can identify the morphology and extent of microcracks well before they appear.\n\nIn the above references, the structural damage of the test piece was examined in the static (nonworking) state, using pulse excitation of the system. However, this approach is no longer applicable for online monitoring of rotating machinery faults. Therefore, the authors carried out the following research. It has been pointed out that the NOFRFs of a nonlinear system can be calculated under the condition that the two excitation amplitudes are different but the excitation frequencies are the same. Due to manufacturing defects, a disc is always more or less unbalanced, which causes it to generate a centrifugal force during rotation. The existence of this centrifugal force makes the rotor system meet the conditions for calculating NOFRFs, so that the NOFRFs of the system can be obtained. During the research, it was found that the indexes proposed in [25, 26] did not change significantly in the rotor rub-impact fault system. In order to find a suitable index for online monitoring of rotor rub-impact faults, a new index based on the Clenshaw–Curtis integration method is used in this paper: the second-order weighted contribution rate integral index RI.
The simulation and experimental results show that the index is more suitable for detecting rotor rub-impact faults than the indexes in [25, 26] and has a certain efficiency.\n\nThe contribution of this paper is to combine the weighted contribution rate of NOFRFs with the Clenshaw–Curtis integration method for the first time. A new nonlinear detection index RI is proposed and used to detect rub-impact faults between rotor and stator. The organization of the rest of the paper is as follows. In Section 2, we introduce the method of calculating NOFRFs under harmonic excitation. In Section 3, we combine the second-order optimal weighted contribution rate of NOFRFs with the Clenshaw–Curtis integration method, propose the index RI, and use simulation to verify its effectiveness. In Section 4, the effectiveness of this index for detecting rotor rub-impact faults is verified by experiments. Finally, the conclusion is given in Section 5.\n\n#### 2. Theory of NOFRFs\n\n##### 2.1. NOFRFs under Harmonic Input\n\nTransfer functions are often used to describe the characteristics of linear systems and can be obtained from the inputs and outputs of a linear system. From a similar viewpoint, GFRFs can be seen as the extension of the transfer function of a linear system to the nonlinear case. The output of the nonlinear system can then be expressed as\n\nY(jω) = ∑_{n=1}^{N} Yn(jω), (1)\n\nwhere\n\nYn(jω) = (1/√n) (2π)^(1−n) ∫_{ω1+⋯+ωn=ω} Hn(jω1, …, jωn) U(jω1) ⋯ U(jωn) dσnω, (2)\n\nwhere N is the highest order of the system nonlinearity, U(jω) and Y(jω) are the frequency domain expressions (Fourier transforms) of the system time domain input u(t) and output y(t), Yn(jω) represents the nth-order output response of the nonlinear system in the frequency domain, and Hn(jω1, …, jωn) is the nth-order GFRF. It can be seen from this expression that a GFRF is a multidimensional function whose dimension is the same as its order.
As the number of dimensions increases, the amount of calculation for the GFRFs becomes very large, which limits their application.\n\nBased on this, the concept of NOFRFs is proposed. On the condition that\n\nUn(jω) ≠ 0, (3)\n\nthe nth-order NOFRF is defined as\n\nGn(jω) = Yn(jω) / Un(jω), (4)\n\nwhere Un(jω) is the nth-order frequency domain input; its specific solution method is given in the literature. Once the system NOFRFs have been acquired, the frequency domain output of the system can be rewritten as\n\nY(jω) = ∑_{n=1}^{N} Gn(jω) Un(jω). (5)\n\nThis is similar in form to the transfer function of a linear system: it decomposes the nonlinear system into a sum of linear subsystems, with the nonlinear characteristics of the system contained in the transfer function of each subsystem. This greatly reduces the amount of calculation and avoids the curse of dimensionality, which benefits both engineering applications and further theoretical research.\n\nWhen the system is excited by a harmonic of the form u(t) = A sin(ωt), the frequency domain input and output contain only components at the harmonics kω of the excitation frequency, and the NOFRFs reduce to one-dimensional functions evaluated at these harmonics. Under the condition that the first four orders of NOFRFs are sufficient to represent the nonlinear characteristics of the system, the relationship between the frequency domain output response and the input is\n\nY(jω) = G1(jω)U1(jω) + G3(jω)U3(jω),\nY(j2ω) = G2(j2ω)U2(j2ω) + G4(j2ω)U4(j2ω),\nY(j3ω) = G3(j3ω)U3(j3ω),\nY(j4ω) = G4(j4ω)U4(j4ω). (9)\n\nAccording to the method mentioned in the literature, when two harmonics with different intensities but the same frequency are used to excite the system, the first four orders of NOFRFs can be obtained, namely, G1(jω), G2(j2ω), G3(jω), G3(j3ω), G4(j2ω), and G4(j4ω). They can be used directly for fault detection, and fault features can be further extracted from them.\n\n##### 2.2. Feature Extraction Using Traditional Methods\n###### 2.2.1. Index Fe\n\nNOFRFs can be used directly to detect damage to a mechanical system or structure. In addition, the nonlinear features in the NOFRFs can be further extracted to detect faults in the mechanical system.
Fe is the earliest proposed NOFRFs-based index. Its specific expression is\n\nFe(n) = ∫ |Gn(jω)| dω / ∑_{i=1}^{N} ∫ |Gi(jω)| dω, n = 1, …, N. (10)\n\nEquation (10) shows that the index Fe integrates all the NOFRFs values of the nonlinear system. It has the following characteristic:\n\n∑_{n=1}^{N} Fe(n) = 1. (11)\n\nFe(n) reflects the proportion of the nth-order NOFRFs in the system. For example, Fe(1) reflects the proportion of G1(jω) in the system. If Fe(1) ≈ 1, it means that G1(jω) is dominant and the high-order NOFRFs can be ignored; that is, the system can be approximated as a linear system. In contrast, if Fe(5) ≈ 1, it indicates that the fifth-order NOFRFs dominate the system behavior. Therefore, the index Fe(n) can be used to describe the nonlinear behavior of the system and to detect damage to the system.\n\n###### 2.2.2. Index Ne\n\nThe information entropy Ne fuses all orders of NOFRFs, and its specific expression is\n\nNe = −∑_{n=1}^{N} Pn ln(Pn), (12)\n\nwhere\n\nPn = Fe(n). (13)\n\nSince Ne integrates all the orders of NOFRFs, it is easier to apply and more suitable for fault detection in engineering than Fe. Yet Ne may also fuse useless information: suppose Fe(3) cannot be used to detect a system fault; Ne nevertheless fuses Fe(3), which reduces the sensitivity of the index for fault detection. Therefore, the authors attempted to identify a more sensitive index to detect system faults and damage.\n\n##### 2.3. Weighted Contribution Rate of NOFRFs (Rn)\n\nDuring operation, a mechanical system is prone to faults due to harsh working conditions and alternating loads. The system then behaves nonlinearly, and the nonlinearity becomes more apparent as the fault grows; when the fault reaches a certain level, it can lead to an accident. Therefore, in engineering practice, online monitoring of mechanical systems is of great significance. NOFRFs and related indexes can be used to detect system faults.
In the actual solution process, the high-order NOFRFs become smaller and smaller as the order increases, which makes them difficult to extract, yet they have a significant effect on the detection of system faults. Therefore, it is necessary to extract the nonlinear features of the high-order NOFRFs. Based on this, a new method is proposed in this paper: the high-order NOFRFs are weighted to increase their contribution rate to the system. The specific weighting is\n\nTn(jω) = Gn(jω) / n^ρ, (14)\n\nwhere Tn(jω) represents the weighted Gn(jω), n^ρ is the weighting coefficient, and ρ is an indefinite constant. The weighting method adopted in this paper is thus a variable weighting: when ρ < 0, the higher the order of the NOFRFs, the greater the weighting, since\n\n1/1^ρ < 1/2^ρ < ⋯ < 1/N^ρ, ρ < 0. (15)\n\nThis increases the contribution of the high-order NOFRFs to the system, thereby achieving the purpose of feature extraction. On this basis, the weighted contribution rate of the NOFRFs is expressed as\n\nRn(m) = |Tm(jmω)| / ∑_{i=1}^{N} |Ti(jiω)| = (|Gm(jmω)| / m^ρ) / ∑_{i=1}^{N} (|Gi(jiω)| / i^ρ). (16)\n\nThe high-order nonlinear characteristics of the system are amplified by this weighted contribution rate. The index Rn introduces the order of the NOFRFs, which causes the weighting coefficients to increase with the order. The contribution rate of the high orders is magnified under the condition ρ < 0; only then are the weighting coefficients in the denominator less than 1, namely, n^ρ < 1 for n ≥ 2. The weighted contribution rate Rn therefore solves the problem that the high-order NOFRFs and their contribution are too small.\n\nIt can be seen from equation (16) that when the indefinite constant ρ = 0, the weighted contribution index Rn(1) is equal to Fe(1), so Rn(1) has the same features as Fe(1); when the indefinite constant ρ = 0.5, Rn is equal to an index denoted PY, and the notation PY is used for Rn in that case.
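The Fe, Ne, and Rn indexes above can be sketched numerically. The following minimal sketch (numpy; the magnitudes are hypothetical stand-ins of roughly the scale seen later in Table 3, and the integrals of equation (10) are collapsed to single-frequency magnitude ratios) shows how a negative ρ amplifies the high-order share:

```python
import numpy as np

# Hypothetical NOFRF magnitudes |G_n| for orders n = 1..4 (illustrative only,
# not measured values from the paper).
G = np.array([1.51e-4, 1.22e-6, 1.03e-7, 3.08e-8])
orders = np.arange(1, G.size + 1)

def Fe(G):
    """Proportion of each order's NOFRF magnitude; the values sum to 1."""
    return G / G.sum()

def Ne(G):
    """Information entropy fusing all orders of the Fe proportions."""
    p = Fe(G)
    return float(-(p * np.log(p)).sum())

def Rn(G, rho):
    """Weighted contribution rate: each |G_n| is divided by n**rho,
    so a negative rho amplifies the high-order terms."""
    T = G / orders.astype(float) ** rho
    return T / T.sum()
```

With `rho = 0` the weighting disappears and `Rn` coincides with `Fe`, while a strongly negative `rho` shifts almost all of the contribution rate onto the highest order, which is the amplification effect described above.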
From the above analysis, the weighted contribution rate index Rn is a generalized form of the two indexes Fe and PY. The choice of the indefinite constant has a significant effect on the detection index. Therefore, how to choose an optimal indefinite constant, or even how to remove its effect, is very important for the accuracy of the detection index. If the final index is not affected by the choice of the indefinite constant, the accuracy of the detection index will be greatly improved, which is more conducive to the application of NOFRFs in practical engineering.\n\n#### 3. Second-Order Weighted Contribution Rate Integral Index\n\n##### 3.1. Estimation of NOFRFs of the Rotor-Stator Rub-Impact System\n\nIn order to eliminate the influence of the choice of the indefinite constant on the detection index, we first look at a practical example of how this choice affects the weighted contribution rate of the NOFRFs. The Jeffcott rotor fixed-point rub-impact system is used as the example in this paper; its schematic diagram is shown in Figure 1. The rotor system is divided into 11 shaft segments, and the specific parameters of each segment are shown in Table 1. Each shaft segment has two nodes, so the rotor system has 12 nodes in total.\n\n| Segment no. | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 |\n|---|---|---|---|---|---|---|---|---|---|---|---|\n| Length (mm) | 50 | 45 | 45 | 45 | 45 | 27 | 45 | 45 | 45 | 45 | 50 |\n| Diameter (mm) | 10 | 10 | 10 | 10 | 10 | 70 | 10 | 10 | 10 | 10 | 10 |\n\nEach shaft segment of the rotor system is modeled with a Timoshenko beam, and the generalized coordinates of the beam element are u = (xA, yA, θxA, θyA, xB, yB, θxB, θyB)^T.
A and B represent the end nodes of the beam element: xA, yA, θxA, θyA are the displacements and rotation angles in the x and y directions at node A, and xB, yB, θxB, θyB are the displacements and rotation angles in the x and y directions at node B.\n\nThis paper mainly simulates fixed-point rotor-stator rub-impact; in other words, rub-impact occurs between a certain point on the rotating shaft and a stator. The friction torque generated by the rub-impact is ignored, and only the normal rub-impact force Fn and the tangential rub-impact force Fr are considered when rotor-stator rub-impact occurs, as shown in Figure 2. In Figure 2, O1 represents the centroid of the rotor when it is static (with a corresponding point in the dynamic case), O2 denotes the center of mass of the rotor when it is static, δ0 represents the initial clearance between rotor and stator, and e represents the eccentricity. The contact stiffness and the friction coefficient between rotor and stator are denoted by kc and fc, respectively.\n\nIn this paper, the Kuhn–Tucker conditions of Coulomb friction are used to detect contact between rotor and stator, and the contact condition is solved by the augmented Lagrangian method. These ensure that the disc and the elastic rod meet the contact conditions and do not penetrate each other. To keep the penetration depth within the allowable range, a suitable force is applied at the normal contact in this method. The rub-impact force Frub between the rotor and the stator can be decomposed into the normal rub-impact force Fn and the tangential rub-impact force Fr, as follows:\n\nFn = kc δr + λ_{i+1}, Fr = fc Fn, (17)\n\nwhere r is the distance between O1 and O2 and λ_{i+1} is the Lagrangian multiplier at the (i + 1)th step of the iterative process, expressed as\n\nλ_{i+1} = λi + kc δr if |δr| > ς; λ_{i+1} = λi if |δr| ≤ ς, (18)\n\nwhere δr = r − δ0 is the penetration depth and ς is the prescribed penetration tolerance.
When ς exceeds the maximum allowable penetration after a given equilibrium iteration, the Lagrangian multiplier is used to enlarge the contact stiffness of the contact elements. This process does not end until the magnitude of penetration is less than the maximum allowable value.\n\nThe rub-impact force can then be expressed as\n\nFrub = H(δr) (Fn en + Fr et), (19)\n\nwhere en and et are the unit normal and tangential vectors and the Heaviside function H(δr) is defined as\n\nH(δr) = 1 if δr ≥ 0; H(δr) = 0 if δr < 0. (20)\n\nTherefore, the governing equation of the rotor rub-impact system is\n\nM q̈ + (C + ΩG) q̇ + K q = Fe + Frub, (21)\n\nwhere q is the generalized displacement vector; M, C, G, and K are the mass, damping, gyroscopic, and stiffness matrices, respectively; Fe is the imbalance force vector; and Frub is the rub-impact force vector of the rotor system.\n\nThe remaining simulation parameters are shown in Table 2. The Newmark-β and Newton–Raphson iterative methods are used to solve equation (21), giving the vibration response of each node at an excitation frequency of 20 Hz. Two vibration responses with different excitation strengths but the same excitation frequency can then be obtained simply by changing the imbalance and solving again. According to equation (9), the first four orders of NOFRFs of the system can be obtained from the two inputs and outputs of the system. The specific values are shown in Table 3.\n\n| Parameters | Values |\n|---|---|\n| Material | 45# steel |\n| Elastic modulus of shaft Ef (Pa) | 2.1 × 10^11 |\n| Density (kg/m^3) | 7850 |\n| Poisson ratio | 0.3 |\n| Normal rub-impact stiffness ratio | 2/4/6 × 10^4 |\n| Friction coefficient | 0.1/0.2/0.3 |\n| Bearing stiffness (N/m) | 2 × 10^6 |\n| Imbalance (kg·m) | 167.4/236.7 × 10^−6 |\n| Rotational speed (rpm) | 1200 |\n\n| NOFRFs | Normal | Clearance = 90 μm | Clearance = 30 μm |\n|---|---|---|---|\n| G1(jω) | 1.5078 × 10^−4 | 1.5069 × 10^−4 | 1.5051 × 10^−4 |\n| G2(j2ω) | 2.1286 × 10^−8 | 4.8293 × 10^−7 | 1.2208 × 10^−6 |\n| G3(jω) | 2.5299 × 10^−9 | 1.1921 × 10^−8 | 3.9750 × 10^−9 |\n| G3(j3ω) | 8.9336 × 10^−10 | 5.7439 × 10^−8 | 1.0286 × 10^−7 |\n| G4(j2ω) | 2.7172 × 10^−9 | 2.2947 × 10^−9 | 3.5165 × 10^−8 |\n| G4(j4ω) | 3.8508 × 10^−10 | 1.7238 × 10^−8 | 3.0804 × 10^−8 |\n\n##### 3.2. 
Feature Extraction Based on the Weighted Contribution Rate of NOFRFs\n\nAccording to the analysis in Section 2.3, the weighted contribution index Rn increases the contribution rate of the high-order NOFRFs to the system. However, the choice of the indefinite constant can have a large effect on the results, and the authors attempt to eliminate this effect. To this end, the effect of the indefinite constant on the weighted contribution rate is examined first. The obtained NOFRFs values are substituted into equation (16) to calculate Rn, with the indefinite constant ρ ranging over [−30, 5], and curves of Rn versus ρ are then drawn for three different rub-impact clearances (normal, clearance A = 90 μm, and clearance B = 30 μm).\n\nThe resulting trends of the weighted contribution index Rn are shown in Figure 3. It can be seen that Rn is almost equal for the different rub-impact clearances and tends to 0 when the indefinite constant lies in (−∞, −30] or [5, +∞), which means that the weighted contribution rate Rn fails to distinguish the severity of rub-impact when ρ lies in these ranges.\n\nIn addition, Figure 3(a) shows that the two indexes Fe(1) (ρ = 0) and PY(1) (ρ = 0.5) increase as the rub-impact clearance decreases, while the weighted contribution rate Rn(1) increases from 0 to 1 as the indefinite constant increases from negative infinity to positive infinity; Figures 3(a)–3(d) show how Fe(n) and PY(n) vary as the rub-impact clearance decreases. As seen in Figure 3, the sensitivity of the index Fe is higher than that of PY, but neither ρ = 0 nor ρ = 0.5, i.e., neither Fe nor PY, is the best choice.
As shown in Figure 3(b), the peak of the weighted contribution rate has the highest sensitivity across the different rub-impact clearances, but the indefinite constant corresponding to the peak changes as the rub-impact clearance changes. It can also be seen from Figure 3(b) that the weighted contribution rate Rn(2) first increases and then decreases as the indefinite constant changes, so the curve and the horizontal axis form a closed region. A new index is therefore proposed using the integral of the weighted contribution rate Rn(2) with respect to the indefinite constant ρ; geometrically, this new index corresponds to the area between the curve of Rn(2) and the horizontal axis. This index can better detect changes in the rub-impact. Based on this idea, the index is derived theoretically below.\n\nFrom equation (16), the specific expression of Rn(2) is\n\nRn(2) = (|G2(j2ω)| / 2^ρ) / ∑_{i=1}^{N} (|Gi(jiω)| / i^ρ). (22)\n\nDefining ci = |Gi(jiω)| / |G2(j2ω)|, equation (22) can be written as\n\nRn(2) = 1 / ∑_{i=1}^{N} ci (2/i)^ρ. (23)\n\nEquation (23) implies that the index Rn(2) is a function of the indefinite constant ρ, regarded as\n\nR2(ρ) = 1 / ∑_{i=1}^{N} ci (2/i)^ρ. (24)\n\nAccording to the analysis in Section 2.1, only the first four orders of NOFRFs of the system are considered here, so equation (24) can be expressed as\n\nR2(ρ) = 1 / (c1 2^ρ + 1 + c3 (2/3)^ρ + c4 (1/2)^ρ). (25)\n\nIn the practical engineering case, the ratios ci, i = 1, 3, 4, in the function R2(ρ) are constants. It can be seen from equation (25) that the function R2(ρ) gradually approaches zero as ρ tends to infinity in either direction. According to the analysis above, the function R2(ρ) is bounded and continuous, so it can be integrated.\n\n##### 3.3. Clenshaw–Curtis Integration\n\nIn engineering practice, researchers often use numerical integration methods to evaluate integrals of functions with no known antiderivative. There are many methods for numerical integration; a method with higher precision is adopted in this paper, namely, the Clenshaw–Curtis quadrature formula.
Using this method to numerically integrate the index Rn(2): for an integrand f(x), the definite integral on the interval [−1, 1] is

The Clenshaw–Curtis quadrature formula is defined as, where ∑ indicates that the first and last terms in the formula are halved; , , k = 0, 1, 2, ..., n, j = 1, 2, ..., n, , θk are the quadrature nodes and quadrature coefficients, respectively. Regarding the accuracy of Hn(f), the following theorem holds.

Theorem 1. If f satisfies the condition of , the error is expressed as

On a given finite integration interval [a, b], consider the integral of f(x), where a and b are finite constants and f(x) has a certain degree of smoothness. Before applying Clenshaw–Curtis integration, the following transformation is usually performed: that is, the integration interval is converted to [−1, 1] by a variable substitution.

However, in practical engineering problems, this linear transformation often fails to achieve the desired accuracy. The reason is that the function behaves differently near the two endpoints. Consider a class of functions for which f(x) changes significantly near a in the interval [a, b] while changing slowly near b, such as f(x) = e−x with a = 1 and b = 50. An ideal numerical integration places more nodes near a and, to reduce the amount of computation, fewer nodes near b. For Clenshaw–Curtis integration on [−1, 1], however, the nodes are symmetric about the origin, with many nodes near the endpoints and fewer near the origin. With a linear transformation, Clenshaw–Curtis numerical integration therefore cannot meet this requirement.
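The endpoint symmetry just described can be made concrete by counting where the Chebyshev nodes land under a linear map versus a node-clustering nonlinear map. The power-law map below is an illustrative assumption, not necessarily the paper's equation (31); any monotone map whose derivative vanishes at the left end has the same clustering effect.

```python
import numpy as np

def power_map(t, a, b, i):
    """Map t in [-1, 1] to x in [a, b]; i = 1 is the linear substitution,
    i >= 2 is a nonlinear one that pushes quadrature nodes toward a.
    (Illustrative stand-in for the substitution in equation (31).)"""
    s = (t + 1.0) / 2.0
    return a + (b - a) * s ** i

a, b, n = 1.0, 50.0, 16
t = np.cos(np.pi * np.arange(n + 1) / n)   # Clenshaw–Curtis nodes on [-1, 1]

edge = a + 0.1 * (b - a)                   # the first 10% of [a, b], near a
linear_count = int(np.sum(power_map(t, a, b, 1) < edge))
cubic_count = int(np.sum(power_map(t, a, b, 3) < edge))
print(linear_count, cubic_count)           # → 4 8
```

Of the 17 nodes, the linear map places 4 in the first tenth of [a, b], while the cubic map places 8 there — exactly the redistribution toward a that the text calls for.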
Therefore, a new kind of variable substitution is considered, namely nonlinear variable substitution.

Extending the linear transformation to a nonlinear one, equation (30) is rewritten as, where i ≥ 1 and i ∈ Z. It can be seen from equation (31) that the substitution converts the interval [a, b] to [−1, 1], after which Clenshaw–Curtis numerical integration is applied to f(x). When i = 1, the substitution is linear; when i = 2, 3, 4, …, it is nonlinear. Under the nonlinear transformation, equation (29) is rewritten as

As can be seen from equation (32), the transformed integrand and f(x) have the same order of smoothness. Applying the Clenshaw–Curtis quadrature formula to equation (32) then gives

With the nonlinear variable substitution, the quadrature nodes are denser near the endpoint a and sparser near the endpoint b, which is exactly what is required.

##### 3.4. Second-Order Weighted Contribution Rate Integral Index (RI)

According to the analysis in Section 3.2, the function R2(ρ) is integrable. The integral expression is

From Figure 3(a) in Section 3.2, the value of Rn(2) is close to 0 when the indefinite constant lies in (−∞, −30] or [5, +∞). The integration interval of equation (34) is therefore taken as [−30, 5], that is, a = −30, b = 5. Substituting equation (25) into equation (33), equation (34) is rewritten as

Define .

To verify the reliability of the Clenshaw–Curtis quadrature formula, the authors also evaluate the integral of R2(ρ) with MATLAB's int function over an infinite interval (negative infinity to positive infinity) and compare the results with the Clenshaw–Curtis integration method used in this paper, as shown in Table 4.
The results show that the choice of integration interval in the Clenshaw–Curtis method is reasonable and that the results obtained by the Clenshaw–Curtis quadrature method are very accurate.

| Clearance (μm) | Clenshaw–Curtis integral | Int infinite integral | Difference |
| --- | --- | --- | --- |
| Normal | 0.0670145365 | 0.0670145578 | 2.1289 × 10−8 |
| 100 | 0.3580265528 | 0.3580266286 | 7.5884 × 10−8 |
| 80 | 0.5712722586 | 0.5712723441 | 8.5445 × 10−8 |
| 60 | 0.6289452527 | 0.6289452534 | 7.0059 × 10−8 |
| 40 | 0.6855516020 | 0.6855515858 | 1.6181 × 10−8 |
| 20 | 0.7381875132 | 0.7381875624 | 4.9269 × 10−8 |

Rub-impact stiffness and friction coefficient are the two main parameters affecting rub-impact, and changing them affects the detection index RI. To study their influence, the authors examined how the second-order weighted contribution rate integral index RI varies with the rub-impact clearance as these two parameters change, as shown in Figures 4 and 5, which illustrate the trend of the index RI under different rub-impact stiffnesses and friction coefficients. The results show that when the rub-impact stiffness and friction coefficient of the system increase, the index RI increases with decreasing rub-impact clearance, and the variation of the index RI gradually grows, which means that the nonlinearity of the system increases.

To verify that the second-order weighted contribution rate integral index RI is better suited to detecting rotor rub-impact faults, the index RI under different rub-impact clearances is calculated according to equation (33). The changes of RI, Ne, Fe1, Fe2, Fe3, and Fe4 under different rub-impact clearances are shown in Table 5, from which it can be seen that the indexes RI, Ne, and Fe1 change the most as the rub-impact clearance is reduced.
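The construction of RI — collapsing each second-order curve to the area beneath it — can be sketched with synthetic curves standing in for the measured Rn(2)(ρ); the curve shape and amplitudes below are assumptions for illustration only, not the paper's equation (25).

```python
import numpy as np

def trapezoid(y, x):
    """Explicit trapezoidal rule (kept portable across NumPy versions)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def r2_curve(rho, peak):
    """Synthetic second-order weighted contribution rate curve: a smooth bump
    that rises and falls over rho, mimicking Figure 3(b). 'peak' stands in
    for rub-impact severity (assumed shape)."""
    return peak / (1.0 + ((rho + 10.0) / 6.0) ** 2)

rho = np.linspace(-30.0, 5.0, 2001)
area_mild = trapezoid(r2_curve(rho, 0.3), rho)     # larger clearance, weaker rub
area_severe = trapezoid(r2_curve(rho, 0.8), rho)   # smaller clearance, stronger rub
print(area_severe > area_mild)  # → True
```

The scalar areas preserve the ordering of fault severity, which is the property that makes RI usable as a single-number detection index.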
To show the advantages of the index RI more intuitively, the three indexes are compared in the form of a histogram in Figure 6. The results show that the indices RI, Ne, and Fe1 all grow as the rub-impact clearance decreases, but the change in the second-order weighted contribution rate integral index RI is the most obvious. Especially in the initial stage of the rub-impact fault, this index has a strong advantage. This indicates that the second-order weighted contribution rate integral index RI proposed in this paper is better suited to detecting rotor–stator rub-impact faults.

| Index | 100 μm | 80 μm | 60 μm | 40 μm | 20 μm |
| --- | --- | --- | --- | --- | --- |
| ΔRI | 0.2910 | 0.2132 | 0.0577 | 0.0566 | 0.0526 |
| ΔNe | 0.0142 | 7.388 × 10−3 | 7.413 × 10−3 | 7.041 × 10−3 | 6.754 × 10−3 |
| ΔFe1 | 2.771 × 10−3 | 1.700 × 10−3 | 1.810 × 10−3 | 1.807 × 10−3 | 1.803 × 10−3 |
| ΔFe2 | 2.242 × 10−3 | 1.619 × 10−3 | 1.616 × 10−3 | 1.613 × 10−3 | 1.610 × 10−3 |
| ΔFe3 | 3.949 × 10−4 | 8.183 × 10−5 | 8.164 × 10−5 | 8.150 × 10−5 | 8.135 × 10−5 |
| ΔFe4 | 1.349 × 10−4 | 6.657 × 10−7 | 1.119 × 10−4 | 1.117 × 10−4 | 1.116 × 10−4 |

#### 4. Experimental Verification

The experimental equipment is shown in Figure 7. To obtain the first four orders of NOFRFs of the system, as described in Section 2.1, the system requires two excitations of the same frequency but different intensities. In this paper, the two excitations of different strength were obtained by changing the imbalance of the rotor system: a different imbalance changes the centrifugal force of the system and ultimately its vibration amplitude, so two excitations of the same frequency can be obtained at the same speed. Therefore, when collecting data in the experiment, the imbalance of the rotor system was changed at a fixed speed to meet this requirement, using different numbers of balance bolts.
The rotor speed in the experiment was the same as in the simulation, namely 1200 rpm.

It can be seen from Figure 8 that, as the degree of rub-impact increases, the index Fe1 changes slowly and the magnitude and relative variation of the index Ne are small, while the variation of the index RI is very obvious. Therefore, RI is a better index for detecting rub-impact faults in the system. Figure 8 also shows that the experimental data and the simulation results essentially coincide, which indicates that the simulation results are credible.

#### 5. Conclusion

A new method for detecting rub-impact faults in rotor systems is introduced in this paper. It builds on the concept of the order of the nonlinear output frequency response functions and uses the NOFRFs to amplify the signal characteristics, so that the impact of the higher-order NOFRFs on the system is magnified. On this basis, a new index, the second-order weighted contribution rate integral index RI, is proposed. Comparative analysis shows that the sensitivity of this index to rotor rub-impact faults is much higher than that of the other two indexes (Fe and Ne); in other words, RI is a better index for detecting rub-impact faults in rotor systems. Simulation and experimental results verify the efficiency of the index RI in detecting system faults. The second-order weighted contribution rate integral index RI is of great significance for the study of fault diagnosis. Its performance for other mechanical faults and in structural damage detection will be examined in future studies.

#### Data Availability

The data used to support the findings of this study are included within the article.

#### Conflicts of Interest

The authors declare no conflicts of interest.

#### Acknowledgments

This work was supported by the National Natural Science Foundation of China (grant nos.
51875093 and U1708257), the Basic Research Business Fee of the Central Universities (grant no. N180304017), and the China Postdoctoral Science Foundation (grant nos. 2014M551105 and 2015T80269).

#### References

1. G. Jacquet-Richardet, M. Torkhani, P. Cartraud et al., "Rotor to stator contacts in turbomachines. Review and application," Mechanical Systems and Signal Processing, vol. 40, no. 2, pp. 401–420, 2013.
2. H. Ma, C. Shi, Q. Han, and B. Wen, "Fixed-point rubbing fault characteristic analysis of a rotor system based on contact theory," Mechanical Systems and Signal Processing, vol. 38, no. 1, pp. 137–153, 2013.
3. J. Páez Chávez and M. Wiercigroch, "Bifurcation analysis of periodic orbits of a non-smooth Jeffcott rotor model," Communications in Nonlinear Science and Numerical Simulation, vol. 18, no. 9, pp. 2571–2580, 2013.
4. N. Wang, D. Jiang, and K. Behdinan, "Vibration response analysis of rubbing faults on a dual-rotor bearing system," Archive of Applied Mechanics, vol. 87, no. 11, pp. 1891–1907, 2017.
5. L. Hu, Y. Liu, L. Zhao, and C. Zhou, "Nonlinear dynamic response of a rub-impact rod fastening rotor considering nonlinear contact characteristic," Archive of Applied Mechanics, vol. 86, no. 11, pp. 1869–1886, 2016.
6. M. Behzad and M. Alvandi, "Unbalance-induced rub between rotor and compliant-segmented stator," Journal of Sound and Vibration, vol. 429, pp. 96–129, 2018.
7. H. Cao, L. Niu, S. Xi, and X. Chen, "Mechanical model development of rolling bearing-rotor systems: a review," Mechanical Systems and Signal Processing, vol. 102, pp. 37–58, 2018.
8. Y. F. Yang, Q. Y. Wu, Y. L. Wang, W. Y. Qin, and K. Lu, "Dynamic characteristics of cracked uncertain hollow-shaft," Mechanical Systems and Signal Processing, vol. 124, pp. 36–48, 2019.
9. N. H. Chandra and A. S. Sekhar, "Fault detection in rotor bearing systems using time frequency techniques," Mechanical Systems and Signal Processing, vol. 72-73, pp. 105–133, 2016.
10. P. Zhou, M. Du, S. Chen, Q. He, Z. Peng, and W. Zhang, "Study on intra-wave frequency modulation phenomenon in detection of rub-impact fault," Mechanical Systems and Signal Processing, vol. 122, pp. 342–363, 2019.
11. Z. K. Peng, Z. Q. Lang, S. A. Billings, and Y. Lu, "Analysis of bilinear oscillators under harmonic loading using nonlinear output frequency response functions," International Journal of Mechanical Sciences, vol. 49, no. 11, pp. 1213–1225, 2007.
12. H. Xu, N. Wang, D. Jiang, T. Han, and D. Li, "Dynamic characteristics and experimental research of dual-rotor system with rub-impact fault," Shock and Vibration, vol. 2016, Article ID 6239281, 11 pages, 2016.
13. R. K. Pearson, Discrete Time Dynamic Models, Oxford University Press, Oxford, UK, 1994.
14. X. Jing and Q. Li, "A nonlinear decomposition and regulation method for nonlinearity characterization," Nonlinear Dynamics, vol. 83, no. 3, pp. 1355–1377, 2016.
15. Q. Li and X. Jing, "A second-order output spectrum approach for fault detection of bolt loosening in a satellite-like structure with a sensor chain," Nonlinear Dynamics, vol. 89, no. 1, pp. 587–606, 2017.
16. Q. Li and X. Jing, "Fault diagnosis of bolt loosening in structures with a novel second-order output spectrum–based method," Structural Health Monitoring, Article ID 1475921719836379, 2019.
17. B. Bai and L. Zhang, "HOC based blind identification of hydroturbine shaft Volterra system," Shock and Vibration, vol. 2017, Article ID 6732704, 11 pages, 2017.
18. H. Tang, Y. H. Liao, J. Y. Cao, and H. Xie, "Fault diagnosis approach based on Volterra models," Mechanical Systems and Signal Processing, vol. 24, no. 4, pp. 1099–1113, 2010.
19. X. Xia, J. Zhou, J. Xiao, and H. Xiao, "A novel identification method of Volterra series in rotor-bearing system for fault diagnosis," Mechanical Systems and Signal Processing, vol. 66-67, pp. 557–567, 2016.
20. A. K. Swain, S. A. Billings, P. K. Stansby, and M. Baker, "Accurate prediction of non-linear wave forces: part I (fixed cylinder)," Mechanical Systems and Signal Processing, vol. 12, no. 3, pp. 449–485, 1998.
21. X. J. Jing, Z. Q. Lang, and S. A. Billings, "Parametric characteristic analysis for generalized frequency response functions of nonlinear systems," Circuits, Systems & Signal Processing, vol. 28, no. 5, pp. 699–733, 2009.
22. O. M. Boaghe, S. A. Billings, L. M. Li, P. J. Fleming, and J. Liu, "Time and frequency domain identification and analysis of a gas turbine engine," Control Engineering Practice, vol. 10, no. 12, pp. 1347–1356, 2002.
23. Z. Q. Lang and S. A. Billings, "Energy transfer properties of nonlinear systems in the frequency domain," International Journal of Control, vol. 78, no. 5, pp. 354–362, 2005.
24. X. Xia, J. Zhou, C. Li, and W. Zhu, "A novel method for fault diagnosis of hydro generator based on NOFRFs," International Journal of Electrical Power & Energy Systems, vol. 71, pp. 60–67, 2015.
25. H. Huang, H. Mao, H. Mao et al., "Study of cumulative fatigue damage detection for used parts with nonlinear output frequency response functions based on NARMAX modelling," Journal of Sound and Vibration, vol. 411, pp. 75–87, 2017.
26. Z. K. Peng, Z. Q. Lang, C. Wolters, S. A. Billings, and K. Worden, "Feasibility study of structural damage detection using NARMAX modelling and nonlinear output frequency response function based analysis," Mechanical Systems and Signal Processing, vol. 25, no. 3, pp. 1045–1061, 2011.
27. H. Mao, W. Tang, Y. Huang et al., "The construction and comparison of damage detection index based on the nonlinear output frequency response function and experimental analysis," Journal of Sound and Vibration, vol. 427, pp. 82–94, 2018.
28. D. B. Hunter and H. V. Smith, "A quadrature formula of Clenshaw-Curtis type for the Gegenbauer weight-function," Journal of Computational and Applied Mathematics, vol. 177, no. 2, pp. 389–400, 2005.
29. Z. K. Peng, Z. Q. Lang, and S. A. Billings, "Resonances and resonant frequencies for a class of nonlinear systems," Journal of Sound and Vibration, vol. 300, no. 3–5, pp. 993–1014, 2007.
30. V. Barzdaitis, M. Bogdevicius, and R. Didziokas, "Diagnostics procedure for identification of rubs in rotor bearings," Journal of Vibroengineering, vol. 12, no. 4, pp. 552–565, 2010.
31. M. Jiang, Y. Kuang, J. Wu, and X. Li, "Rub-impact detection in rotor systems with pedestal looseness using a nonlinearity evaluation," Shock and Vibration, vol. 2018, Article ID 7928164, 15 pages, 2018.
32. P. F. Gao, Z. N. Lei, X. X. Wang, and M. Zhan, "Deformation in fatigue crack tip plastic zone and its role in crack propagation of titanium alloy with tri-modal microstructure," Materials Science and Engineering: A, vol. 739, pp. 198–202, 2019.
# 44.55 kg to lbs - 44.55 kilograms into pounds

kg
lbs

## 44.55 kg to lbs

Before we go to the more practical part - that is, the 44.55 kg how much lbs conversion - we will tell you a little theoretical information about these two units - kilograms and pounds. So let's move on.

## 44.55 kgs in pounds

We will start with the kilogram. The kilogram is a unit of mass. It is a base unit in the metric system, that is, the International System of Units (in short form, SI).

Sometimes the kilogram is written as kilogramme. The symbol of the kilogram is kg.

The first definition of a kilogram was formulated in 1795. The kilogram was defined as the mass of one liter of water. This definition was not complicated, but it was difficult to use.

Then, in 1889, the kilogram was defined using the International Prototype of the Kilogram (in abbreviated form, IPK). The IPK was made of 90% platinum and 10% iridium. The International Prototype of the Kilogram was used until 2019, when it was replaced by a new definition.

The new definition of the kilogram is based on physical constants, especially the Planck constant. Here is the official definition: "The kilogram, symbol kg, is the SI unit of mass. It is defined by taking the fixed numerical value of the Planck constant h to be 6.62607015×10−34 when expressed in the unit J⋅s, which is equal to kg⋅m2⋅s−1, where the metre and the second are defined in terms of c and ΔνCs."

One kilogram is exactly 0.001 tonne. It can also be divided into 100 decagrams and 1000 grams.

## 44.55 kilogram to pounds

You have learned a little about the kilogram, so now we can move on to the pound. The pound is also a unit of mass. We want to point out that there is more than one kind of pound. What does that mean? For instance, there is also the pound-force. In this article we focus only on the pound-mass.

The pound is used in the Imperial and United States customary systems of measurement.
Naturally, this unit is also used in other systems. The symbol of the pound is lb.

The international avoirdupois pound has no descriptive definition. It is defined as exactly 0.45359237 kilograms. One avoirdupois pound can be divided into 16 avoirdupois ounces or 7000 grains.

The avoirdupois pound was enacted in the Weights and Measures Act 1963. The definition of the pound was given in the first section of this act: "The yard or the metre shall be the unit of measurement of length and the pound or the kilogram shall be the unit of measurement of mass by reference to which any measurement involving a measurement of length or mass shall be made in the United Kingdom; and- (a) the yard shall be 0.9144 metre exactly; (b) the pound shall be 0.45359237 kilogram exactly."

### 44.55 kg in lbs

The theoretical section is already behind us. In this part we will tell you how much 44.55 kg is in lbs. You now know that 44.55 kg = x lbs, so it is time to get the answer. Let's see:

44.55 kilograms = 98.2159377210 pounds.

This is an accurate result for how much 44.55 kg is in pounds. You can also round off the result. After rounding off, your outcome is as follows: 44.55 kg ≈ 98.22 lbs.

You know what 44.55 kg is in lbs, so look at how many kg 44.55 lbs is: 44.55 pounds = 20.20754008 kilograms.

Of course, this time you can also round off the result. After rounding, the outcome is as follows: 44.55 lb ≈ 20.21 kg.

We also want to show you the 44.55 kg to how many pounds and 44.55 pounds to how many kg results in tables.
Have a look:

We will start with a table for how much 44.55 kg is equal to in pounds.

| Kilograms | Pounds | Pounds (rounded off to two decimal places) |
| --- | --- | --- |
| 44.55 | 98.2159377210 | 98.22 |

Now look at a table for how many kilograms 44.55 pounds is.

| Pounds | Kilograms | Kilograms (rounded off to two decimal places) |
| --- | --- | --- |
| 44.55 | 20.20754008 | 20.21 |

Now you know how many pounds 44.55 kg is and how many kilograms 44.55 pounds is, so it is time to move on to the 44.55 kg to lbs formula.

### 44.55 kg to pounds

To convert 44.55 kg to US lbs you need a formula. We will show you two formulas. Let's begin with the first one:

Number of kilograms × 2.20462262 = 98.2159377210, the outcome in pounds

The first formula gives you the most exact result. Sometimes even the smallest difference can be significant. So if you need an accurate result, the first formula will be the best option to calculate how many pounds are equivalent to 44.55 kilograms.

So let's move on to the second version of the formula, which also enables you to calculate how much 44.55 kilograms is in pounds.

### 44.55 pound to kg

The shorter version of the formula is down below; have a look:

Number of kilograms × 2.2 = the result in pounds

As you can see, the second version is simpler. It can be the best option if you need to convert 44.55 kilograms to pounds in an easy way, for instance, while shopping. Just remember that the final result will not be as exact.

Now we want to show you these two formulas in practice. But before we convert 44.55 kg to lbs, we want to show you an easier way to find out how many lbs 44.55 kg is, totally effortlessly.

### 44.55 kg to lbs converter

An easier way to learn what 44.55 kilograms is equal to in pounds is to use a 44.55 kg to lbs calculator. What is a kg to lb converter?

The converter is an application.
The converter is based on the longer version of the formula which we gave you in the previous part of this article. Thanks to the 44.55 kg to pound calculator you can effortlessly convert 44.55 kg to lbs. You only need to enter the number of kilograms which you want to convert and click the 'calculate' button. You will get the result in a flash.

So let's try to convert 44.55 kg into lbs with the 44.55 kg vs pound calculator. We entered 44.55 as the number of kilograms. Here is the outcome: 44.55 kilograms = 98.2159377210 pounds.

As you can see, this 44.55 kg vs lbs converter is easy to use.

Now we can move on to our primary topic - how to convert 44.55 kilograms to pounds on your own.

#### 44.55 kg to lbs conversion

We will begin the 44.55 kilograms equals how many pounds conversion with the first formula, to get the most accurate outcome. A quick reminder of the formula:

Number of kilograms × 2.20462262 = 98.2159377210, the outcome in pounds

So what do you have to do to find out how many pounds are equal to 44.55 kilograms? Just multiply the number of kilograms, in this case 44.55, by 2.20462262. It gives 98.2159377210. So 44.55 kilograms is exactly 98.2159377210 pounds.

You can also round off this result, for example, to two decimal places. Then 44.55 kilograms ≈ 98.22 pounds.

It is time for an example from everyday life. Let's convert 44.55 kg of gold into pounds. So 44.55 kg is equal to how many lbs? As in the previous example, multiply 44.55 by 2.20462262. It gives exactly 98.2159377210. So the equivalent of 44.55 kilograms in pounds, when it comes to gold, is exactly 98.2159377210.

In this example you can also round off the result.
Here is the result after rounding off, this time to two decimal places: 44.55 kilograms ≈ 98.22 pounds.

Now let's move on to examples calculated with the short version of the formula.

#### How many 44.55 kg to lbs

Before we show you an example, a quick reminder of the shorter formula:

Number of kilograms × 2.2 = 98.010, the outcome in pounds

So 44.55 kg is equal to how many lbs? As in the previous example, you have to multiply the number of kilograms, in this case 44.55, by 2.2. Look: 44.55 × 2.2 = 98.010. So 44.55 kilograms is about 98.010 pounds.

Let's do another calculation with the shorter version of the formula. Now convert something from everyday life, for instance, a 44.55 kg weight of strawberries.

So let's convert: 44.55 kilograms of strawberries × 2.2 = 98.010 pounds of strawberries. So 44.55 kg as pound mass is about 98.010.

Now that you know how much the 44.55 kilogram weight is in pounds and can calculate it with two different formulas, we can move on. Next we want to show you these outcomes in tables.

#### Convert 44.55 kilogram to pounds

We are aware that results shown in tables are much clearer for most of you. We understand this, so we gathered all these outcomes in tables for your convenience. Thanks to this you can quickly compare the 44.55 kg equivalent-to-lbs outcomes.

Begin with a 44.55 kg equals lbs table for the first version of the formula:

| Kilograms | Pounds | Pounds (after rounding off to two decimal places) |
| --- | --- | --- |
| 44.55 | 98.2159377210 | 98.22 |

And now have a look at the 44.55 kg equals pounds table for the second formula:

| Kilograms | Pounds |
| --- | --- |
| 44.55 | 98.010 |

As you can see, after rounding off, when it comes to how many pounds 44.55 kilograms equals, the outcomes are close but not identical. The bigger the number, the more considerable the difference. Keep this in mind when you want to do a pounds conversion for a number bigger than 44.55 kilograms.

#### How many kilograms 44.55 pound

Now you know how to convert 44.55 kilograms to pounds, but we will show you something more.
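The two formulas above translate directly into code. A small sketch (the function names are our own):

```python
def kg_to_lb(kg, exact=True):
    """Convert kilograms to pounds using the exact factor 2.20462262,
    or the quick approximation 2.2 from the shorter formula."""
    return kg * (2.20462262 if exact else 2.2)

def lb_to_kg(lb):
    """Convert pounds to kilograms (1 lb = 0.45359237 kg exactly)."""
    return lb * 0.45359237

print(round(kg_to_lb(44.55), 6))                # → 98.215938
print(round(kg_to_lb(44.55, exact=False), 3))   # → 98.01
print(round(lb_to_kg(44.55), 6))                # → 20.20754
```

Note how the two formulas diverge by about 0.2 lb at this mass - exactly the gap between the exact and the rounded tables above.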
Are you curious what it is? What do you say to a 44.55 kilograms to pounds and ounces conversion?

We will show you how to calculate it step by step. Let's start. How much is 44.55 kg in lbs and oz?

The first thing you need to do is multiply the number of kilograms, in this case 44.55, by 2.20462262. So 44.55 × 2.20462262 = 98.2159377210.

The integer part is the number of pounds. So in this case there are 98 pounds.

To know how much 44.55 kilograms is in pounds and ounces, you need to multiply the fractional part by 16. So multiply 0.2159377210 by 16. It gives 3.455003536 ounces.

So your outcome is 98 pounds and 3.455003536 ounces. You can also round off the ounces, for example, to two decimal places. Then your outcome is 98 pounds and 3.46 ounces.

As you can see, the calculation of 44.55 kilograms in pounds and ounces is quite easy.

The last calculation which we will show you is the conversion of 44.55 foot pounds to kilogram meters. Both of them are units of work.

To convert foot pounds to kilogram meters another formula is needed. Before we give it to you, look:

- 44.55 kilogram meters = 322.23076702 foot pounds,
- 44.55 foot pounds = 6.15925802 kilogram meters.

Now see the formula:

Number of foot pounds × 0.13825495 = the result in kilogram meters

So to convert 44.55 foot pounds to kilogram meters you need to multiply 44.55 by 0.13825495. It gives 6.15925802. So 44.55 foot pounds is equal to 6.15925802 kilogram meters.

You can also round off this result, for example, to two decimal places. Then 44.55 foot pounds is about 6.16 kilogram meters.

We hope that this calculation was as easy as the 44.55 kilograms into pounds calculation.

This article was a big compendium about the kilogram, the pound, and the 44.55 kg to lbs conversion.
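The pounds-and-ounces split and the work-unit conversion can be sketched the same way (again, the function names are our own):

```python
def kg_to_lb_oz(kg):
    """Split a mass in kilograms into whole pounds plus remaining ounces
    (1 lb = 16 oz), following the integer/fraction procedure above."""
    lb_total = kg * 2.20462262
    lb = int(lb_total)
    oz = (lb_total - lb) * 16.0
    return lb, oz

def ftlb_to_kgm(ftlb):
    """Convert foot pounds of work to kilogram meters
    (1 ft·lb ≈ 0.13825495 kg·m)."""
    return ftlb * 0.13825495

lb, oz = kg_to_lb_oz(44.55)
print(lb, round(oz, 2))               # → 98 3.46
print(round(ftlb_to_kgm(44.55), 2))   # → 6.16
```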
Thanks to this conversion you know how many pounds 44.55 kilograms is equivalent to.

We showed you not only how to do the calculation of 44.55 kilograms to metric pounds but also two other calculations - how much 44.55 kg is in pounds and ounces, and how many 44.55 foot pounds is in kilogram meters.

We also showed you another way to do the 44.55 kilogram how-many-pounds calculation, that is, with the 44.55 kg to pound converter. This will be the best solution for those of you who do not like converting on your own at all, or who this time do not want to do the 44.55 kg to lbs calculation on your own.

We hope that now all of you are able to do the 44.55 kilograms equals how many pounds calculation - on your own or with our 44.55 kg to pounds converter.

It is time to make your move! Calculate the 44.55 kilogram mass in pounds in the way you like.

Do you want to make a conversion other than 44.55 kilograms to pounds? For example, for 15 kilograms? Check our other articles! We guarantee that calculations for other amounts of kilograms are as simple as for 44.55 kilograms.

#### Kilograms [kg]

The kilogram, or kilogramme, is the base unit of weight in the metric system. It is the approximate weight of a cube of water 10 centimeters on a side.

#### Pounds [lbs]

A pound is a unit of weight commonly used in the United States and the British Commonwealth. A pound is defined as exactly 0.45359237 kilograms.
| Kilograms | Pounds | Kilograms | Pounds | Kilograms | Pounds | Kilograms | Pounds |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 44.01 | 97.0254 | 44.26 | 97.5766 | 44.51 | 98.1278 | 44.76 | 98.6789 |
| 44.02 | 97.0475 | 44.27 | 97.5986 | 44.52 | 98.1498 | 44.77 | 98.701 |
| 44.03 | 97.0695 | 44.28 | 97.6207 | 44.53 | 98.1719 | 44.78 | 98.723 |
| 44.04 | 97.0916 | 44.29 | 97.6427 | 44.54 | 98.1939 | 44.79 | 98.7451 |
| 44.05 | 97.1136 | 44.3 | 97.6648 | 44.55 | 98.2159 | 44.8 | 98.7671 |
| 44.06 | 97.1357 | 44.31 | 97.6868 | 44.56 | 98.238 | 44.81 | 98.7891 |
| 44.07 | 97.1577 | 44.32 | 97.7089 | 44.57 | 98.26 | 44.82 | 98.8112 |
| 44.08 | 97.1798 | 44.33 | 97.7309 | 44.58 | 98.2821 | 44.83 | 98.8332 |
| 44.09 | 97.2018 | 44.34 | 97.753 | 44.59 | 98.3041 | 44.84 | 98.8553 |
| 44.1 | 97.2239 | 44.35 | 97.775 | 44.6 | 98.3262 | 44.85 | 98.8773 |
| 44.11 | 97.2459 | 44.36 | 97.7971 | 44.61 | 98.3482 | 44.86 | 98.8994 |
| 44.12 | 97.2679 | 44.37 | 97.8191 | 44.62 | 98.3703 | 44.87 | 98.9214 |
| 44.13 | 97.29 | 44.38 | 97.8411 | 44.63 | 98.3923 | 44.88 | 98.9435 |
| 44.14 | 97.312 | 44.39 | 97.8632 | 44.64 | 98.4143 | 44.89 | 98.9655 |
| 44.15 | 97.3341 | 44.4 | 97.8852 | 44.65 | 98.4364 | 44.9 | 98.9876 |
| 44.16 | 97.3561 | 44.41 | 97.9073 | 44.66 | 98.4584 | 44.91 | 99.0096 |
| 44.17 | 97.3782 | 44.42 | 97.9293 | 44.67 | 98.4805 | 44.92 | 99.0316 |
| 44.18 | 97.4002 | 44.43 | 97.9514 | 44.68 | 98.5025 | 44.93 | 99.0537 |
| 44.19 | 97.4223 | 44.44 | 97.9734 | 44.69 | 98.5246 | 44.94 | 99.0757 |
| 44.2 | 97.4443 | 44.45 | 97.9955 | 44.7 | 98.5466 | 44.95 | 99.0978 |
| 44.21 | 97.4664 | 44.46 | 98.0175 | 44.71 | 98.5687 | 44.96 | 99.1198 |
| 44.22 | 97.4884 | 44.47 | 98.0396 | 44.72 | 98.5907 | 44.97 | 99.1419 |
| 44.23 | 97.5105 | 44.48 | 98.0616 | 44.73 | 98.6128 | 44.98 | 99.1639 |
| 44.24 | 97.5325 | 44.49 | 98.0837 | 44.74 | 98.6348 | 44.99 | 99.186 |
| 44.25 | 97.5546 | 44.5 | 98.1057 | 44.75 | 98.6569 | 45 | 99.208 |

Remember that the lbs to kg / kg to lbs calculator located on this site can sometimes show imprecise results. It is worth using the weight converter twice or more to check the result. The kg to lbs conversion calculator is based on formulas which are not error-free. We do not take responsibility for errors caused by the lbs into kg converter.
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8828774,"math_prob":0.9854341,"size":15366,"snap":"2020-24-2020-29","text_gpt3_token_len":4610,"char_repetition_ratio":0.2461919,"word_repetition_ratio":0.041564792,"special_character_ratio":0.37999478,"punctuation_ratio":0.15420432,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9981469,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-13T01:59:15Z\",\"WARC-Record-ID\":\"<urn:uuid:09964395-8949-4184-a552-9b8b9c9198bb>\",\"Content-Length\":\"62567\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:adb8faca-f71c-4dd2-b5fa-a0d02fbfcac4>\",\"WARC-Concurrent-To\":\"<urn:uuid:90bf9b71-4aa1-4a64-9f36-cffe1aaffaa6>\",\"WARC-IP-Address\":\"104.31.65.229\",\"WARC-Target-URI\":\"https://howkgtolbs.com/convert/44.55-kg-to-lbs\",\"WARC-Payload-Digest\":\"sha1:QW4LJ6RQFMCXTVVG4TMGR5JL66R4YDXP\",\"WARC-Block-Digest\":\"sha1:TGTUHAJGU2XKI3HP5C35CM77E3Q6CRFI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593657140746.69_warc_CC-MAIN-20200713002400-20200713032400-00091.warc.gz\"}"} |
Source: https://www.arxiv-vanity.com/papers/1906.07648/
# Transitive tournament tilings in oriented graphs with large minimum total degree

Louis DeBiasio, Allan Lo, Theodore Molla, and Andrew Treglown

February 28, 2021

###### Abstract.

Let →Tk be the transitive tournament on k vertices. We show that every oriented graph on n vertices with minimum total degree at least (11/12 + o(1))n can be partitioned into vertex disjoint →T4's, and this bound is asymptotically tight. We also improve the best known bound on the minimum total degree for partitioning oriented graphs into vertex disjoint →Tk's.

1. Department of Mathematics, Miami University. Research supported in part by Simons Foundation Collaboration Grant #283194.
2. School of Mathematics, University of Birmingham. Research supported in part by EPSRC, grant no. EP/P002420/1.
3. Department of Mathematics and Statistics, University of South Florida. Research supported in part by NSF Grants DMS-1500121 and DMS-1800761.
4. University of Birmingham, United Kingdom. Research supported by EPSRC grant EP/M016641/1.

## 1. Introduction

For a pair of (di)graphs H and G, we call a collection of vertex disjoint copies of H in G an H-tiling. We say that an H-tiling is perfect if it consists of exactly |V(G)|/|V(H)| copies of H. Perfect H-tilings are sometimes referred to as perfect H-packings, perfect H-matchings or H-factors.

The classic Hajnal–Szemerédi theorem states that if G is a graph on n vertices, r divides n, and G has minimum degree at least (1 − 1/r)n, then G contains a perfect K_r-tiling. Moreover, there are n-vertex graphs with minimum degree (1 − 1/r)n − 1 that do not contain a perfect K_r-tiling.

Recall that digraphs are graphs such that every pair of vertices has at most two edges between them, one oriented in each direction; oriented graphs are orientations of simple graphs (so there is at most one directed edge between any pair of vertices). Note that oriented graphs are a subclass of digraphs.

Recently the study of tilings in digraphs has proven fruitful, and a number of papers have focused on developing analogs of the Hajnal–Szemerédi theorem. In this setting there is more than one natural notion of degree: the minimum semidegree δ⁰(G) of a digraph G is the minimum of its minimum outdegree δ⁺(G) and its minimum indegree δ⁻(G). The minimum total degree δ(G) of G is the minimum number of edges incident to a vertex in G. So, for oriented graphs G, δ(G) ≥ 2δ⁰(G). When there is no possibility of confusion, we often refer to the minimum total degree as the minimum degree.

Let →Tk denote the transitive tournament on k vertices and →C3 denote the cyclic triangle. It was proven that every digraph on n vertices with sufficiently large minimum total degree contains the corresponding perfect tiling; the degree condition there is best possible, and the result implies the original Hajnal–Szemerédi theorem. A minimum semidegree version of the Hajnal–Szemerédi theorem was proven for large digraphs; this result considers perfect T-tilings for any fixed tournament T. Finally, Czygrinow, DeBiasio, Molla and Treglown gave a general result which, together with a result of Wang, determines the minimum total degree threshold for perfect T-tilings in a digraph for any tournament T.

For oriented graphs, the situation is much more difficult. Firstly notice that one can have arbitrarily large minimum total degree and still avoid even a single copy of a given oriented graph. Indeed, a transitive tournament on n vertices has minimum total degree n − 1 but contains no oriented graph with a directed cycle. Further, there are n-vertex tournaments (i.e. complete oriented graphs) with minimum semidegree almost as large as possible that do not contain a perfect →C3-tiling (see [8, 9]). Note though that Keevash and Sudakov did prove that there exists a c > 0 so that every sufficiently large oriented graph with minimum semidegree at least (1/2 − c)n contains a →C3-tiling covering all but a bounded number of vertices. Additionally, Li and Molla recently proved that if n is a sufficiently large odd multiple of 3, every regular tournament on n vertices has a perfect →C3-tiling, thereby verifying a conjecture of Cuckler and Yuster.

More is known for the perfect →Tk-tiling problem in oriented graphs, though understanding the general behaviour of the minimum degree threshold remains a significant challenge. Yuster observed a best-possible minimum total degree condition forcing a perfect →T3-tiling in an n-vertex oriented graph. Balogh, Lo and Molla later proved an analogous result for the minimum semidegree threshold.

Yuster also gave a bound on the total degree threshold for nearly perfect tilings with →Tk: every oriented graph on n vertices whose minimum total degree meets this bound has vertex disjoint copies of →Tk covering almost all vertices. Yuster further gave a minimum total degree bound above which an oriented graph on n vertices has a perfect →Tk-tiling.

Our main result is to asymptotically determine the minimum total degree threshold for perfect →T4-tiling.

###### Theorem 1.1.

For all ε > 0, there exists n0 such that if G is an oriented graph on n ≥ n0 vertices, n is divisible by 4, and δ(G) ≥ (11/12 + ε)n, then G has a perfect →T4-tiling. Furthermore, for every n divisible by 4, there exists an oriented graph G on n vertices with δ(G) = (11/12 − o(1))n such that G does not contain a perfect →T4-tiling.

Moreover, we improve the general bounds on the minimum total degree threshold for perfect →Tk-tiling, showing that a slight improvement on Yuster's above-mentioned bound for nearly perfect →Tk-tiling in fact ensures that G has a perfect →Tk-tiling. Let →r(k) be the smallest integer such that every tournament on →r(k) vertices contains a copy of →Tk.

###### Theorem 1.2.

For every ε > 0 and k, there exists n0 such that when n ≥ n0 and n is divisible by k the following holds. If G is an oriented graph on n vertices and

δ(G) ≥ (1 − 1/(k(2→r(k−1) − k + 1)) + ε) n,

then G contains a perfect →Tk-tiling. In particular, a bound depending only on k suffices here.

Roughly, we obtain both of our results by splitting the problem into two parts: determining the minimum degree threshold for "fractional →Tk-tiling" (which is related to "nearly perfect →Tk-tiling") and determining the minimum degree threshold for "→Tk-absorbing". When k = 4, we are able to determine these two thresholds exactly, which is why we obtain an asymptotically tight bound in that case.

As discussed in the following section, one can obtain a bound for the minimum degree threshold for perfect →Tk-tilings via an application of the Hajnal–Szemerédi theorem. Indeed, this is where Yuster's aforementioned bounds came from. However, the bound in Theorem 1.1 is lower than that obtained via the Hajnal–Szemerédi theorem, demonstrating that the problem in the oriented graph setting is genuinely different. In order to discuss more precisely where our bounds come from, we must first discuss their connection to some more parameters in the next two sections.

In Section 3 we give a minimum degree condition that ensures an oriented graph has a perfect fractional →Tk-tiling (and thus a nearly perfect →Tk-tiling); see Theorem 3.2. This theorem will be applied in the proofs of both Theorem 1.1 and Theorem 1.2. In Section 4 we introduce an absorbing result which, combined with our results from Section 3, yields Theorem 1.2. Theorem 1.1 is then proved in Section 5. We finish the paper with some concluding remarks and open questions.

## 2. Oriented Ramsey numbers and perfect tilings

Recall that →r(k) is the smallest integer such that every tournament on →r(k) vertices contains a copy of →Tk. Erdős and Moser proved that →r(k) ≤ 2^(k−1), together with a lower bound of order 2^(k/2). The following result provides →r(k) for small values of k.

###### Theorem 2.1.

→r(4) = 8, →r(5) = 14, →r(6) = 28, and →r(7) = 54.

One can consider Turán-type questions in oriented graphs. The following observation shows that the Turán number of →Tk in an oriented graph is completely determined by →r(k) and Turán's theorem. Here we let t(n, s) be the number of edges in a Turán graph on n vertices with s parts, i.e., t(n, s) is the number of edges in a complete s-partite graph on n vertices with parts of size either ⌈n/s⌉ or ⌊n/s⌋.

###### Observation 2.2.

The maximum number of edges in an oriented graph on n vertices that does not contain a copy of →Tk is t(n, →r(k) − 1).

###### Proof.

If G is an oriented graph on n vertices with more than t(n, →r(k) − 1) edges, then, by Turán's theorem, G must contain a tournament on →r(k) vertices, which implies that G contains a copy of →Tk.

Conversely, let T be a tournament on →r(k) − 1 vertices that does not contain a →Tk. Blowing up each vertex of T equitably to form an oriented graph on n vertices produces a graph without a copy of →Tk whose underlying simple graph is the Turán graph on n vertices with →r(k) − 1 parts. ∎

For every positive integer n, let T_n be the collection of tournaments with vertex set [n]. Let →tr(k) be the smallest integer divisible by k such that every tournament on →tr(k) vertices has a perfect →Tk-tiling. Note that, by induction, for every n ≥ →tr(k) divisible by k, every tournament on n vertices has a perfect →Tk-tiling. The value of →tr(3) is known exactly, and Caro proved that

→tr(k) ≤ →r(2k−1) + (2k−1)→r(k) < 4^k,

but the determination of →tr(k) is open for every k ≥ 4. (See [15, Proposition 10] for a concise proof of Caro's upper bound.)

For n divisible by k, let →δ(k, n) be the minimum integer such that every oriented graph G on n vertices with δ(G) ≥ →δ(k, n) has a perfect →Tk-tiling, and define →δ(k) to be the limit superior of →δ(k, n)/n. The following straightforward consequence of the Hajnal–Szemerédi theorem, together with any bounds on →tr(k), gives a bound on →δ(k).

###### Observation 2.3 (Yuster, Treglown).

Given any k,

→δ(k) ≤ 1 − 1/→tr(k) < 1 − 1/4^k.

Given the situation for k = 3, it was conceivable that this route gives the truth for all k. However, Theorem 1.1 shows that it does not for k = 4. Indeed, by a computer search, we found tournaments whose order is divisible by 4 that do not have a perfect →T4-tiling.¹ Therefore, Theorem 1.1 does not follow directly from the Hajnal–Szemerédi theorem.

¹ In Appendix 8 these tournaments are listed. We believe this is an exhaustive list (up to isomorphism). For this search, we used the nauty and Traces software package.

## 3. Linear programming and fractional tilings

### 3.1. Linear programming

Let H be a k-uniform hypergraph. A fractional matching in H is a function w : E(H) → [0, 1] so that, for each v ∈ V(H), the sum of w(e) over the edges e containing v is at most 1. The size of the fractional matching is the sum of w(e) over all edges. By definition, the largest fractional matching in H has size at most |V(H)|/k (if it has size exactly |V(H)|/k we say it is perfect). Define ν(H) and ν∗(H) to be the size of the largest matching and fractional matching in H, respectively.

A fractional vertex cover for H is a function w : V(H) → [0, 1] so that, for each edge e, the sum of w(v) over v ∈ e is at least 1. The size of the fractional vertex cover is the sum of w(v) over all vertices. Let τ(H) and τ∗(H) be the size of the smallest vertex cover and fractional vertex cover of H, respectively. By the duality theorem of linear programming, we have

ν(H) ≤ ν∗(H) = τ∗(H) ≤ τ(H).

For a pair of graphs or directed graphs F and G, we let H_F(G) be the |V(F)|-uniform hypergraph on the vertex set V(G) in which a set S is an edge if and only if G[S] contains a copy of F. If G is a graph we define H_k(G) := H_{K_k}(G), and if G is a directed graph we set H_k(G) := H_{→Tk}(G). We set ν_k(G) := ν(H_k(G)) and ν∗_k(G) := ν∗(H_k(G)). We define τ_k(G), τ∗_k(G), and the analogous quantities for →T_{k−1} accordingly.

A fractional →Tk-tiling of G is a weight function on the copies of →Tk in G that corresponds to a fractional matching in H_k(G), i.e., for every vertex v, the sum of the weights on the copies of →Tk that contain v is at most one. It is a perfect fractional →Tk-tiling of G if the sum of the weights is equal to n/k. We call a weight function on the vertices of G a fractional →Tk-cover if it corresponds to a fractional vertex cover of H_k(G), that is, if the sum of the weights on the vertices of every copy of →Tk in G is at least one. For both a fractional →Tk-tiling and a fractional →Tk-cover of G, the size of the weight function is defined to be the sum of the weights (analogous to the size of a fractional matching and a fractional vertex cover).

Let →tr∗(k) denote the smallest integer such that for every T ∈ T_{→tr∗(k)} we have ν∗_k(T) = →tr∗(k)/k. We clearly have →tr∗(k) ≤ →tr(k). Also, every tournament T on n ≥ →tr∗(k) vertices satisfies ν∗_k(T) = n/k. Indeed, by induction on n, we may assume that n > →tr∗(k) and that, for each vertex v, there is a perfect fractional →Tk-tiling w_v in T − v. Then (1/(n−1)) Σ_{v∈V(T)} w_v is a perfect fractional →Tk-tiling in T.

### 3.2. Forcing fractional tilings and bounds on →tr∗(k)

For every n, define →δ∗(k, n) to be the smallest integer such that every oriented graph G on n vertices with δ(G) ≥ →δ∗(k, n) has a perfect fractional →Tk-tiling, and let →δ∗(k) be the limit superior of →δ∗(k, n)/n. Let →δ0(k) be the infimum of the set of numbers δ such that for every γ > 0 there exists n0 such that every oriented graph on n ≥ n0 vertices with δ(G) ≥ δn has a →Tk-tiling missing at most γn vertices.

Using our notation, we now rewrite (a slightly weaker² version of) Yuster's result ([19, Theorem 3.1]).

###### Theorem 3.1 (Yuster).

For every k, →δ0(k) ≤ 1 − 1/(k(2→r(k−1) − k + 1)).

² There are three differences to note. First, we ignore an extra case which Yuster considers. Second, Yuster proves that one can almost tile an oriented graph that meets the minimum degree condition with a blow-up of →Tk, but with the regularity lemma, this version of the theorem implies the original version. Third, Yuster writes the minimum degree condition in terms of a function defined as the smallest integer m such that in every tournament on at least m vertices every vertex is contained in a copy of →Tk, but it is not hard to see that the two formulations agree (see Example 3.5).

Later in this section we prove the following bounds on →δ0(k) and →δ∗(k) in terms of →tr∗(k).

###### Theorem 3.2.

→δ0(k) = →δ∗(k) = 1 − 1/→tr∗(k).

We also obtain the following bounds on →tr∗(k).

###### Theorem 3.3.

For all k ≥ 3,

max{2→r(k−1), (k/(k−2))(→r(k) − 2)} ≤ →tr∗(k) ≤ k(2→r(k−1) − k + 1).

Note that the upper bound in Theorem 3.3 together with Theorem 3.2 yields a slight strengthening of Theorem 3.1; they also can be combined with an absorbing result (Lemma 4.3) to give Theorem 1.2 (see Section 4.2). Theorems 3.2 and 3.3 will also be applied in the proof of Theorem 1.1.

We now prove Theorem 3.2.

###### Proof of Theorem 3.2.

Let G be an oriented graph on n vertices with δ(G) ≥ (1 − 1/→tr∗(k))n. Blow up each vertex of G to a set of size →tr∗(k) and call the resulting oriented graph G′. By the Hajnal–Szemerédi theorem, the simple graph underlying G′ has a perfect K_{→tr∗(k)}-tiling. Note that each K_{→tr∗(k)} in G′ spans a tournament on →tr∗(k) vertices and hence has a perfect fractional →Tk-tiling. Hence G′ has a perfect fractional →Tk-tiling and so does G. So we have established that →δ∗(k) ≤ 1 − 1/→tr∗(k).

For the lower bound, let T be a tournament that does not have a perfect fractional →Tk-tiling; i.e. ν∗_k(T) < |T|/k. Let γ > 0 be such that ν∗_k(T) = (1 − γ)|T|/k. For s sufficiently large, blow up each of the vertices of T into a set of s vertices to form an oriented graph G on n = s|T| vertices. If →δ0(k) were smaller than 1 − 1/|T|, then, since n is sufficiently large, we could assume that there exists a →Tk-tiling 𝒯 of G that covers all but at most 0.9γn vertices. Because every →Tk in G corresponds to a →Tk in T, we can create a fractional →Tk-tiling of T by giving each →Tk in T weight equal to the number of times a →Tk that corresponds to it appears in 𝒯, divided by s. This fractional →Tk-tiling of T has size

|𝒯|/s ≥ (1 − 0.9γ)n/(ks) = (1 − 0.9γ)|T|/k > (1 − γ)|T|/k = ν∗_k(T),

a contradiction. So we have established the claimed lower bound.

To complete the proof, we need to relate →δ0(k) and →δ∗(k). This can be shown by following a standard application of Szemerédi's regularity lemma. We omit the proof. (It is also possible to establish this fact without appealing to the regularity lemma.) ∎

### 3.3. Proof of Theorem 3.3

The following example gives a lower bound on →δ0(k), which together with Theorem 3.2 gives a lower bound on →tr∗(k).

###### Example 3.4.

Let k ≥ 3. For every γ ∈ (0, 1) and every sufficiently large n divisible by k, there exists an oriented graph G on n vertices with

δ(G) = ⌊(1 − (k−2)/(k(→r(k)−2)))n − 2γn/(k(→r(k)−2))⌋,

such that no →Tk-tiling covers more than (1 − γ)n vertices of G. In particular, this implies that →δ0(k) ≥ 1 − (k−2)/(k(→r(k)−2)), which implies →tr∗(k) ≥ (k/(k−2))(→r(k)−2) by Theorem 3.2.

###### Proof.

Take a largest tournament which does not contain →Tk; note that it has exactly →r(k) − 1 vertices. Blow up one of its vertices to a set X of size 2(1−γ)n/k and inside X add all possible edges (oriented arbitrarily). Blow up the other →r(k) − 2 vertices to independent sets of size either the floor or the ceiling of

(n − |X|) · 1/(→r(k)−2) = (k−2)n/(k(→r(k)−2)) + 2γn/(k(→r(k)−2)),

whilst ensuring the resulting oriented graph has n vertices. Note that every →Tk must use at least two vertices from X, so there is only space for at most |X|/2 = (1−γ)n/k vertex disjoint copies of →Tk in G. ∎

The next example gives a different lower bound on →tr∗(k), which together with Example 3.4 implies the lower bound in Theorem 3.3.

###### Example 3.5.

For every k, →tr∗(k) ≥ 2→r(k−1).

###### Proof.

Consider a tournament on 2→r(k−1) − 1 vertices in which there exists a vertex v such that d⁺(v) = d⁻(v) = →r(k−1) − 1; both N⁺(v) and N⁻(v) induce a tournament on →r(k−1) − 1 vertices that does not contain a →T_{k−1}; and all of the edges between N⁺(v) and N⁻(v) are directed from N⁺(v) to N⁻(v). In this tournament, v is not contained in a →Tk; this immediately implies that the tournament does not have a perfect fractional →Tk-tiling. ∎

To prove the upper bound of Theorem 3.3, we first collect together some useful observations.

For a hypergraph H and every v ∈ V(H), we let H(v) be the link graph of v, i.e., the hypergraph with vertex set V(H) ∖ {v} and edge set {e ∖ {v} : v ∈ e ∈ E(H)}. The following lemma is well-known. We provide a proof for completeness.

###### Lemma 3.6.

If H is a k-uniform hypergraph on n vertices and, for every v ∈ V(H), ν∗(H(v)) ≥ n/k, then ν∗(H) = n/k.

###### Proof.

Suppose that, for every v, ν∗(H(v)) ≥ n/k, yet ν∗(H) < n/k. In a fractional matching of H of size ν∗(H), there must exist a vertex v at which the sum of the weights on the edges incident to v is strictly less than 1. By the complementary slackness theorem from linear programming, this implies that if w is a fractional vertex cover of H of size τ∗(H), then w(v) = 0. This means that the restriction of w is a fractional vertex cover of H(v), so

ν∗(H(v)) = τ∗(H(v)) ≤ τ∗(H) = ν∗(H) < n/k,

a contradiction. ∎

Let F and G either be a pair of graphs or a pair of directed graphs with |V(F)| = k and |V(G)| = n, and let v ∈ V(G). A weight function on the (k−1)-subsets of V(G) ∖ {v} is a v-extendable fractional F-tiling of size s if it corresponds to a fractional matching of the hypergraph H_F(G)(v) that has size s. We have the following corollary to Lemma 3.6.

###### Corollary 3.7.

Let F and G either be a pair of graphs or a pair of directed graphs with |V(F)| = k and |V(G)| = n. If, for every v ∈ V(G), there exists a v-extendable fractional F-tiling of size at least n/k, then there exists a perfect fractional F-tiling of G.

###### Proof.

This follows from Lemma 3.6 if we consider the hypergraph H_F(G). ∎

We now prove the upper bound in Theorem 3.3.

###### Lemma 3.8.

For every k ≥ 3, →tr∗(k) ≤ k(2→r(k−1) − k + 1).

###### Proof.

Let T be a tournament on n = k(2→r(k−1) − k + 1) vertices. For an arbitrary v ∈ V(T), we aim to prove that there exists a v-extendable fractional →Tk-tiling of size at least n/k. By Corollary 3.7, this will then prove the lemma. To do this, we first prove the following claim.

###### Claim 3.8.1.

If S is a tournament on s ≥ →r(k−1) vertices, then τ∗_{k−1}(S) ≥ (s − (→r(k−1) − k + 1))/(k−1).

###### Proof.

Let w be a fractional →T_{k−1}-cover of S of size τ∗_{k−1}(S) and let v_1, …, v_s be an ordering of V(S) such that w(v_1) ≥ ⋯ ≥ w(v_s). The last →r(k−1) vertices contain at least one →T_{k−1}, so the sum of w over them is at least 1. Moreover, for every j ≤ s − →r(k−1), the set {v_j, …, v_{j+→r(k−1)−1}} contains a →T_{k−1}, whose k−1 weights are each at most w(v_j); hence w(v_j) ≥ 1/(k−1). Therefore,

τ∗_{k−1}(S) = Σ_{i=1}^{s−→r(k−1)} w(v_i) + Σ_{i=s−→r(k−1)+1}^{s} w(v_i) ≥ (s − →r(k−1))/(k−1) + 1 = (s − (→r(k−1) − k + 1))/(k−1). ∎

Recall that

(1) 2→r(k−1) − k + 1 = n/k.

Let ν∗₊ := ν∗_{k−1}(T[N⁺(v)]) and ν∗₋ := ν∗_{k−1}(T[N⁻(v)]). Note that v forms a copy of →Tk with any copy of →T_{k−1} in N⁺(v) or in N⁻(v). In particular, a lower bound on ν∗₊ + ν∗₋ gives a lower bound on the size of the largest v-extendable fractional →Tk-tiling.

Suppose first that d⁺_T(v) ≥ n − →r(k−1). Then, by (1), the claim and duality,

ν∗₊ ≥ (d⁺_T(v) − (→r(k−1) − k + 1))/(k−1) ≥ (n − (2→r(k−1) − k + 1))/(k−1) = n/k.

An analogous argument applies if d⁻_T(v) ≥ n − →r(k−1). Otherwise, both d⁺_T(v) and d⁻_T(v) are less than n − →r(k−1), and hence both are at least →r(k−1). Then, by the claim, (1), and the fact that d⁺_T(v) + d⁻_T(v) ≥ n − 1, we have

ν∗₊ + ν∗₋ ≥ (d⁺_T(v) + d⁻_T(v) − 2(→r(k−1) − k + 1))/(k−1) ≥ (n − (2→r(k−1) − k + 1))/(k−1) = n/k.

So there exists a v-extendable fractional →Tk-tiling of size at least n/k. ∎

### 3.4. Remarks

Note that Example 3.5 and Theorem 3.2 together imply that →δ∗(k) ≥ 1 − 1/(2→r(k−1)). If it can be shown that the lower bound on →tr∗(k) from Example 3.4 is also an upper bound, i.e. →tr∗(k) = (k/(k−2))(→r(k)−2) (which is true for k = 3 and k = 4), then we would have 2→r(k−1) ≤ (k/(k−2))(→r(k)−2), or

→r(k) ≥ (2(k−2)/k)·→r(k−1) + 2,

which would imply that →r(k) grows like 2^k up to a polynomial factor, almost matching the Erdős–Moser bound of →r(k) ≤ 2^(k−1). In fact, even proving that →r(k) ≥ (√2 + c)^k for some absolute constant c > 0 would improve the best known lower bound on →r(k). It is also worthwhile to note that →r(k) provides a lower bound on the classical diagonal Ramsey number R(k, k). Indeed, if T is a tournament with no →Tk, then the graph on V(T) formed by taking any ordering of the vertices of T and, for every i < j, placing the edge v_i v_j in the graph if the edge in T is directed from v_i to v_j, has neither a clique nor an independent set of size k. Therefore, it is possible that a substantial improvement to the upper bound on →tr∗(k) could give an improvement on the best known lower bound for the diagonal Ramsey numbers.

Note that when →r(k) − 2 ≤ ((k−2)/k)n,

(n − (→r(k) − 2))/2 ≥ (n − ((k−2)/k)n)/2 = n/k.

A way one might attempt to prove that →tr∗(k) = (k/(k−2))(→r(k)−2) would be to first prove that equality holds in the following.

###### Example 3.9.

For k ≥ 3, if n ≥ →r(k), then

min_{T∈T_n} ν∗_k(T) ≤ (n − (→r(k) − 2))/2.

###### Proof.

Construct a tournament T on n vertices by starting with a tournament on →r(k) − 1 vertices that does not contain a →Tk and then blowing up one of its vertices to a set X of size n − →r(k) + 2. Then place edges between all vertices in X and orient them arbitrarily. Because every →Tk has at least two vertices in X, we can cover all of the copies of →Tk in T by assigning weight 1/2 to the vertices in X and 0 to the vertices outside X. Therefore,

ν∗_k(T) = τ∗_k(T) ≤ |X|/2 = (n − (→r(k) − 2))/2. ∎

Example 3.9 is quite similar to Example 3.4. We have verified that equality holds in Example 3.9 in two small cases. We have no evidence that equality holds in general, and in light of the discussion above, it is, if true, likely extremely challenging to prove!

## 4. The absorbing method and the proof of Theorem 1.2

### 4.1. Absorbing

We will apply the absorbing method of Rödl, Ruciński and Szemerédi. The basic idea of the method is to prove that a randomly constructed small set can serve as an "absorber", i.e., we prove that there exists a small set that has the property that if, after removing this set from the graph, we can almost tile what is left of the oriented graph, then, using the absorbing set, we can extend this partial tiling into a perfect tiling of the entire original oriented graph.

To prove that our absorbing sets exist, we will use the following lemma, which follows immediately from a lemma of Lo and Markström [10, Lemma 1.1].

###### Lemma 4.1.

For every k, i and η > 0, there exists n0 such that for every directed graph G on n ≥ n0 vertices the following holds. If, for every pair x, y ∈ V(G), there are at least ηn^(ik−1) sets S ⊆ V(G) ∖ {x, y} such that |S| = ik − 1 and both G[S ∪ {x}] and G[S ∪ {y}] contain perfect →Tk-tilings, then there exists A ⊆ V(G) such that:

- |A| is small and divisible by k; and
- for every W ⊆ V(G) ∖ A such that |W| is not too large and |W| is divisible by k, we have that G[A ∪ W] has a perfect →Tk-tiling.

Fix k and i, and define the set of all δ with the following property: there exist η > 0 and n0 so that, for each n ≥ n0, every n-vertex oriented graph G with δ(G) ≥ δn, and any pair x, y ∈ V(G), there are at least ηn^(ik−1) sets S with |S| = ik − 1 such that both G[S ∪ {x}] and G[S ∪ {y}] contain perfect →Tk-tilings. Let A(k, i) be the infimum of this set, and write A(k) for the infimum over i of A(k, i). We call A(k) the absorbing threshold for →Tk-tiling.

We will make use of the following simple fact.

###### Fact 4.2.

For every r, s and c > 0 with s ≤ r, the following holds. If G is a graph or oriented graph on n vertices and δ(G) ≥ (1 − 1/r + c)n, then for every U ⊆ V(G) such that |U| ≥ (s/r)n we have δ(G[U]) ≥ ((s−1)/s + c·r/s)|U|.

###### Proof.

Because n ≤ (r/s)|U|, we have that δ(G[U]) is at least

|U| − (n − δ(G)) ≥ |U| − (1/r − c)n ≥ |U| − (1/r − c)(r/s)|U| = ((s−1)/s + c·r/s)|U|. ∎

###### Lemma 4.3.

For all k and a suitable i, A(k, i) is at most the degree bound appearing in Theorem 1.2.

###### Proof.

Let ε > 0, let n be sufficiently large and let G be an oriented graph on n vertices satisfying the degree condition of Theorem 1.2 with this ε. Let x, y ∈ V(G), set U = V(G) ∖ {x, y}, and choose suitable r and s. By Fact 4.2 (with c = ε) we have

δ(G[U]) ≥ ((s−1)/s + ε·r/s)|U|.

So by supersaturation there exist many tournaments on s vertices in G[U]. By the pigeonhole principle, for every such tournament, there exists a subtournament of size at least s/4 in one of the four sets N⁺(x) ∩ N⁺(y), N⁺(x) ∩ N⁻(y), N⁻(x) ∩ N⁺(y) and N⁻(x) ∩ N⁻(y), which partition U. This, in turn, implies that there exists a set S such that both G[S ∪ {x}] and G[S ∪ {y}] contain perfect →Tk-tilings. Therefore, we have at least ηn^(ik−1) of the desired sets. The choice of ε can be made arbitrarily small, so we obtain the claimed bound on A(k, i). ∎

###### Lemma 4.4.

For every k, ε > 0 and i, there exists n0 such that for every n ≥ n0 that is divisible by k the following holds. If G is an oriented graph on n vertices and

δ(G) ≥ max{→δ0(k) + ε, A(k, i) + ε} n,

then G has a perfect →Tk-tiling.

###### Proof.

Let G be a sufficiently large oriented graph as in the statement of the lemma. By the degree condition we may apply Lemma 4.1 to get a set A such that |A| is small and divisible by k and, for every suitable W ⊆ V(G) ∖ A with |W| divisible by k, the oriented graph G[A ∪ W] has a perfect →Tk-tiling. Since n is sufficiently large and δ(G) ≥ (→δ0(k) + ε)n, we can tile G − A so that the set W of uncovered vertices is small. Since G[A ∪ W] then has a perfect →Tk-tiling, we obtain a perfect →Tk-tiling of G. ∎

### 4.2. Proof of Theorem 1.2

With the absorbing lemma to hand, it is now straightforward to deduce Theorem 1.2 from our previous results.

###### Proof of Theorem 1.2.

From Theorem 3.2 and Lemma 3.8 we have that →δ0(k) ≤ 1 − 1/(k(2→r(k−1) − k + 1)). Since the absorbing threshold obeys the same bound (Lemma 4.3), the first part of Theorem 1.2 follows from Lemmas 4.3 and 4.4. The second part of the theorem follows by the inequality in the statement of Theorem 3.1. ∎

## 5. →T4-tiling - Proof of Theorem 1.1

Note that →r(4) = 8. Example 3.4 with k = 4 implies the second part of the theorem. For the first part of the theorem, we will show (Proposition 5.1) that …
Source: https://segmentfault.com/a/1190000004426828
# [Translation] Data Visualization Tutorial: Building a Live Dashboard with Google Sheets and RStudio Shiny

Thanks to Douglas Watson; the original English version is at http://douglas-watson.github.io/post/gdocs_1_gdocs

# Overview

- In the first part (this article), I show how to set up a Google Sheets spreadsheet that receives data over HTTP via Google Apps Script.
- In the second part (original), I will show you how to fetch the data back in R and visualize it with the ggplot2 library.
- In the third part (original), I will walk you through building a simple interactive visualization app with Shiny and publishing it online through the ShinyApps.io platform.
# Why R and Shiny?

R is a powerful language designed for data analysis; combined with the ggplot2 graphics library, it can produce professional data visualizations. Once you have worked out what you want to show, Shiny lets you publish interactive charts on the web. Shiny is developed by the RStudio company, who also make it easy to host the resulting apps on ShinyApps.io.
## Prelude: data storage format

Before collecting anything, decide on a storage format. A "long" format keeps one reading per row:

| timestamp | ID | variable | value |
| --- | --- | --- | --- |
| 1448227096 | kitchen | temperature | 22.3 |
| 1448227096 | kitchen | humidity | 45 |
| 1448227096 | bedroom | temperature | 24.0 |
| 1448227096 | bedroom | humidity | 46 |
| 1448227099 | kitchen | temperature | 22.4 |
| 1448227099 | kitchen | humidity | 45 |
| 1448227099 | bedroom | temperature | 23.9 |
| 1448227099 | bedroom | humidity | 45 |

The same data in a "wide" format keeps one column per series:

| timestamp | kitchen temperature | kitchen humidity | bedroom temperature | bedroom humidity |
| --- | --- | --- | --- | --- |
| 1448227096 | 22.3 | 45 | 24.0 | 46 |
| 1448227099 | 22.4 | 45 | 23.9 | 45 |

## Preparing the spreadsheet to receive data

1. Create a new Google Sheets spreadsheet.
2. Rename the first worksheet (one tab of the document), e.g. to "Raw"; this is where the data will be uploaded.
3. Create a header row. Each row of data will consist of a Unix timestamp (seconds since January 1, 1970), an "id", a "variable", and a "reading". You can freeze this row so it stays visible while scrolling, via "View" > "Freeze" > "1 row".
4. Open the script editor, from the "Tools" > "Script editor…" menu.
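Long-format rows can also be generated from wide-format data programmatically. A minimal Python sketch, where `wide_to_long` is a hypothetical helper (not part of any API) and the column and room names mirror the example tables above:

```python
def wide_to_long(header, rows):
    """Convert wide-format rows to long format.

    header: e.g. ["timestamp", "kitchen temperature", "kitchen humidity", ...]
    rows:   lists of values aligned with the header.
    Returns (timestamp, id, variable, value) tuples, one per reading.
    Assumes single-word room names ("kitchen", not "living room").
    """
    long_rows = []
    for row in rows:
        timestamp = row[0]
        for name, value in zip(header[1:], row[1:]):
            # "kitchen temperature" -> ("kitchen", "temperature")
            origin, variable = name.split(" ", 1)
            long_rows.append((timestamp, origin, variable, value))
    return long_rows

header = ["timestamp", "kitchen temperature", "kitchen humidity"]
rows = [[1448227096, 22.3, 45]]
print(wide_to_long(header, rows))
```

The long format is what the spreadsheet script below will append, one reading per row.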
"## 用谷歌脚本插入一行数据\n\n``````function appendLines(worksheet, csvData) {\nvar sheet = ss.getSheetByName(worksheet);\n\nvar rows = Utilities.parseCsv(csvData);\n\nfor ( var i = 0; i < rows.length; i++ ) {\nsheet.appendRow(rows[i]);\n}\n}\n\nfunction test() {\nLogger.log(\"Appending fake data\");\nappendLines(\"Raw\", \"12345, Monday, kitchen, temperature, 30\\n12346, Tuesday, living room, humidity, 50\");\n}``````\n\n• 打开当前电子表格(整个文档);\n\n• 在文档中选择一个表,由第一个参数标识(我们目前只有一个 Raw 表);\n\n• 解析CSV数据作为第二个参数;\n\n• 将每一行的CSV数据,追加到表中。",
## Receiving POST data

To accept data over HTTP, the script also needs a `doPost` handler. As a first step, this stub simply echoes back everything it receives:

```javascript
function doPost(e) {
  var params = JSON.stringify(e);
  return ContentService.createTextOutput(params);
}
```
Once the script is deployed as a web app, test it with curl:

```shell
$ curl --data "hello, world" "https://script.google.com/macros/s/AKfycbxOw-Tl_r0jDV4wcnixdYdUjcNipSzgufiezRKr28Q5OAN50cIP/exec"

{"parameter":{"hello, world":""},"contextPath":"","contentLength":12,"queryString":null,"parameters":{"hello, world":[""]},"postData":{"length":12,"type":"application/x-www-form-urlencoded","contents":"hello, world","name":"postData"}}
```

Passing a query parameter in the URL also works:

```shell
$ curl -L --data "hello, world" "https://script.google.com/macros/s/AKfycbxOw-Tl_r0jDV4wcnixdYdUjcNipSzgufiezRKr28Q5OAN50cIP/exec?sheet=Raw"

{"parameter":{"sheet":"Raw","hello, world":""},"contextPath":"","contentLength":12,"queryString":"sheet=Raw","parameters":{"sheet":["Raw"],"hello, world":[""]},"postData":{"length":12,"type":"application/x-www-form-urlencoded","contents":"hello, world","name":"postData"}}
```

The URL parameter shows up under `e["parameter"]["sheet"]`.

The final version of `doPost` therefore needs to:

- find a "sheet" parameter in the URL;
- extract the CSV data from the body of the POST request;
- append every line of that CSV data to the named sheet.

```javascript
function doPost(e) {
  var contents = e.postData.contents;
  var sheetName = e.parameter['sheet'];

  appendLines(sheetName, contents);

  var params = JSON.stringify(e);
  return ContentService.createTextOutput(params);
}
```
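Any HTTP client can stand in for curl here. The following Python sketch only constructs the request URL and body for the web app; the deployment URL is a placeholder for your own "exec" link, and no network call is made:

```python
from urllib.parse import urlencode

def build_post(base_url, sheet, csv_data):
    """Return the full URL (with the ?sheet= parameter) and the POST body
    for the doPost web app shown above."""
    full_url = base_url + "?" + urlencode({"sheet": sheet})
    body = csv_data.encode("utf-8")
    return full_url, body

# Placeholder URL: substitute your own script.google.com/macros/s/.../exec
url, body = build_post("https://script.google.com/macros/s/XXXX/exec",
                       "Raw",
                       "12345, Monday, kitchen, temperature, 30")
print(url)
# To actually send it, you could pass url and body to urllib.request.urlopen.
```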
# Part 2: Fetching the data in R and plotting with ggplot

The accompanying code is on GitHub.

## Recommended setup

RStudio is a decent editor (it even has a Vim mode) and integrates Shiny directly. Download it from the RStudio website. Install ggplot2 from the R console:

```r
install.packages('ggplot2')
```

## Importing a CSV file from a web URL

Publish the spreadsheet to the web as CSV, then read it with `read.csv`. Here `csv.url` stands for your own published-CSV link, which was elided in the source:

```r
getRaw <- function () {
  data <- read.csv(
    csv.url,            # your sheet's "File" > "Publish to the web" CSV link
    strip.white = TRUE
  )
  data
}
```

```r
> data <- getRaw()
> summary(data)
   timestamp         date                                        origin       variable       value
 Min.   :1.448e+09   Mon Nov 23 2015 22:44:45 GMT+0100 (CET): 4   bedroom:120   humidity   :120   Min.   :23.00
 1st Qu.:1.448e+09   Mon Nov 23 2015 23:44:45 GMT+0100 (CET): 4   kitchen:120   temperature:120   1st Qu.:23.88
 Median :1.448e+09   Thu Nov 26 2015 00:44:45 GMT+0100 (CET): 4                                   Median :32.55
 Mean   :1.448e+09   Thu Nov 26 2015 01:44:45 GMT+0100 (CET): 4                                   Mean   :34.34
 3rd Qu.:1.448e+09   Thu Nov 26 2015 02:44:45 GMT+0100 (CET): 4                                   3rd Qu.:44.45
 Max.   :1.449e+09   Thu Nov 26 2015 03:44:45 GMT+0100 (CET): 4                                   Max.   :49.90
                     (Other)                                :216
>
```

## Converting the date-time column

Convert the Unix timestamps into proper date-time values:

```r
getRaw <- function () {
  data <- read.csv(
    csv.url,            # published-CSV link, as above
    strip.white = TRUE
  )
  data$date = as.POSIXct(data$timestamp, tz="GMT", origin="1970-01-01")
  data
}
```

## Having fun with ggplot2

The `ggplot2` package provides the clever `qplot` function: a single line of code can draw a wide variety of charts, which makes R my favorite tool for data. (The original snippet used `import(ggplot2)`, which is not valid R; `library(ggplot2)` is the correct call.)

```r
library(ggplot2)
source("helpers.R")
data <- getRaw()
qplot(date, value, data = data)
```
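For readers who want to sanity-check the load-and-convert step outside R, the same logic can be sketched with Python's standard library. The column names follow the long format defined earlier, and the CSV string stands in for the published-sheet download:

```python
import csv
import io
from datetime import datetime, timezone

def get_raw(csv_text):
    """Parse long-format CSV text and add a 'date' field converted from the
    Unix timestamp, mirroring as.POSIXct in the R version of getRaw."""
    reader = csv.DictReader(io.StringIO(csv_text), skipinitialspace=True)
    rows = []
    for row in reader:
        row["date"] = datetime.fromtimestamp(int(row["timestamp"]),
                                             tz=timezone.utc)
        row["value"] = float(row["value"])
        rows.append(row)
    return rows

sample = "timestamp,origin,variable,value\n1448227096,kitchen,temperature,22.3\n"
print(get_raw(sample))
```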
"``qplot(date, value, data = data, colour = origin)``",
"``qplot(date, value, data = data, colour = origin) + facet_grid(variable ~ ., scales = \"free_y\")``",
"``qplot(date, value, data = data, colour = origin, geom = \"line\") + facet_grid(variable ~ ., scales = \"free_y\")``",
"``qplot(date, value, data = data, colour = origin, geom = \"line\") + scale_x_datetime() + facet_grid(variable ~ ., scales = \"free_y\")``\n\n## 为 Shiny 封装画图函数\n\n``````library(ggplot2)\n\ngetRaw <- function () {\nstrip.white = TRUE\n)\n\ndata\\$date = as.POSIXct(data\\$timestamp, tz=\"GMT\", origin=\"1970-01-01\")\ndata\n}\n\ntimeseriesPlot <- function(data) {\nqplot(date, value, data = data, colour = origin, geom = \"line\") + scale_x_datetime() + facet_grid(variable ~ ., scales = \"free_y\")\n}\n\nboxPlot <- function(data) {\nqplot(origin, value, data = data, geom = \"boxplot\") + facet_grid(variable ~ ., scales = \"free_y\")\n}``````\n\n# 第三部分:与Shiny结合\n\n## 我们的第一个仪表板:没有交互性\n\n``````# ui.R\n\nshinyUI(fluidPage(\nverticalLayout(\ntitlePanel(\"Sensor data dashboard\"),\nplotOutput(\"timeseries\"),\nplotOutput(\"boxplot\")\n)\n))``````\n\n`server.R`中的代码在如下时刻会被调用:\n\n1. 当服务器启动时;\n\n2. 每次访问一个页面时;\n\n3. 每次一个交互式的部件改变时。\n\n`shinyServer(...)`代码块之外的代码只会被执行一次,而代码块里面的部分每次访问页面都会被执行。\n\n``````# server.R\n\nsource(\"helpers.R\")\n\nshinyServer(function(input, output) {\n\n# Load data when app is visited\ndata <- getRaw()\n\n# Populate plots\noutput\\$timeseries <- renderPlot({\ntimeseriesPlot(data)\n})\n\noutput\\$boxplot <- renderPlot({\nboxPlot(data)\n})\n\n})``````",
"## 添加一个按日期分类的网格布局和过滤器\n\n• `dateRangeInput`:一个下拉日历选择开始和结束日期。\n\n• `numericInput`:一个只接受数字的输入框,显示了一个向上和向下箭头来修改输入值。\n\n``````# ui.R\n\nshinyUI(fluidPage(\nverticalLayout(\ntitlePanel(\"Sensor data dashboard\"),\nfluidRow(\ncolumn(3,\ndateRangeInput(\"dates\", \"Date Range\", start=\"2015-11-20\"),\nfluidRow(\ncolumn(4, h3(\"From:\")),\ncolumn(4, numericInput(\"min.hours\", \"hours:\", value=0)),\ncolumn(4, numericInput(\"min.minutes\", \"minutes:\", value=0))\n),\nfluidRow(\ncolumn(4, h3(\"To:\")),\ncolumn(4, numericInput(\"max.hours\", \"hours:\", value=23)),\ncolumn(4, numericInput(\"max.minutes\", \"minutes:\", value=59))\n)\n),\ncolumn(9, plotOutput(\"timeseries\"))\n),\nfluidRow(\ncolumn(3),\ncolumn(9, plotOutput(\"boxplot\"))\n)\n)\n))``````\n\n``````# server.R\n\nsource(\"helpers.R\")\n\nshinyServer(function(input, output) {\n\n# Load data when app is visited\ndata <- getRaw()\n\n# Filter by device ID / time range when options are updated\ndata.filt <- reactive({\nmindate <- as.POSIXct.Date(input\\$dates) + (input\\$min.hours * 60 + input\\$min.minutes) * 60\nmaxdate <- as.POSIXct.Date(input\\$dates) + (input\\$max.hours * 60 + input\\$max.minutes) * 60\n\nsubset(data, date > mindate & date < maxdate)\n})\n\n# Populate plots\noutput\\$timeseries <- renderPlot({\ntimeseriesPlot(data.filt())\n})\n\noutput\\$boxplot <- renderPlot({\nboxPlot(data.filt())\n})\n\n})``````",
"## 发布到Shinyapps.io\n\nShinyApps.io 为 Shiny 应用提供一个托管服务,而这个扩管服务同时也是 RStudio 的另一个产品。他们的开发者在 RStudio 中集成了这个服务:简单地在你 Shiny 应用的右上角点击\"发布\",跟着指引,创建一个免费的账号然后上传你的仪表盘。",
"``````# fragment of helpers.R\n\nstrip.white = TRUE\n)``````\n\n## 更多的代理\n\nHTTPS代理是一个驻留在应用程序引擎的简单代码。相关代码\n\n1003 人关注\n59 篇文章"
Source: https://www.clutchprep.com/chemistry/practice-problems/85703/a-gas-sample-containing-1-50-moles-at-25-c-exerts-a-pressure-of-400-torr-some-ga
"# Problem: A gas sample containing 1.50 moles at 25°C exerts a pressure of 400. torr. Some gas is added to the same container and the temperature is increased to 50.°C. If the pressure increases to 800. torr, how many moles of gas were added to the container? Assume a constant-volume container.\n\n###### FREE Expert Solution\n\nWe will use ideal gas equation:\n\n$\\overline{){\\mathbf{PV}}{\\mathbf{=}}{\\mathbf{nRT}}}$\n\nP = pressure, atm\nV = volume, L\nn = moles, mol\nR = gas constant = 0.08206 (L·atm)/(mol·K)\nT = temperature, K\n\nStep 1. Determine the volume:\n\n91% (364 ratings)",
"###### Problem Details\n\nA gas sample containing 1.50 moles at 25°C exerts a pressure of 400. torr. Some gas is added to the same container and the temperature is increased to 50.°C. If the pressure increases to 800. torr, how many moles of gas were added to the container? Assume a constant-volume container.\n\nFrequently Asked Questions\n\nWhat scientific concept do you need to know in order to solve this problem?\n\nOur tutors have indicated that to solve this problem you will need to apply the The Ideal Gas Law concept. You can view video lessons to learn The Ideal Gas Law. Or if you need more The Ideal Gas Law practice, you can also practice The Ideal Gas Law practice problems.\n\nWhat professor is this problem relevant for?\n\nBased on our data, we think this problem is relevant for Professor Millhauser's class at UCSC.\n\nWhat textbook is this problem found in?\n\nOur data indicates that this problem or a close variation was asked in Chemistry: An Atoms First Approach - Zumdahl Atoms 1st 2nd Edition. You can also practice Chemistry: An Atoms First Approach - Zumdahl Atoms 1st 2nd Edition practice problems."
Source: https://stats.stackexchange.com/questions/211445/bayesian-updating-of-scaled-beta-distribution
"# Bayesian updating of scaled beta distribution\n\nI would like to know how to link a scaled beta distribution to its non scaled beta distribution.\n\nFor some context: I have a model where I want to estimate the proportion ($\\pi$) of people belonging to a given group (out of 2 groups) (can see this as the probability that a given event happen).\nTo estimate the true proportion $\\pi$ I would simply do bayesian updating based on a beta prior that $\\pi \\sim Beta(a, b)$, with binomial (bernouilli) likelihood $f(x|\\pi)$ for the data.\n\nHowever, I do not observe the belonging to one or the other group directly in my data. What I observe from the data is the belonging to two different groups, where the proportion of people belonging to my group of interest is $c \\ \\pi$ (where $c$ is a known constant $\\in [0, 1]$). Hence my data follow a bernouilli likelihood: $c \\pi^x \\ (1-c\\pi)^{1-x}$.\n\nYou probably wonder, why don't I simply put the beta prior on $c \\pi$ instead and then just divide the expectation to find an estimation of $\\pi$? I cannot because $c$ can vary every period (and because I want $\\pi$ to belong to $[0,1]$). Given that $c$ can vary each period, I want to update my prior on $\\pi$ and not on $c \\pi$, and do the link between the two no matter what $c$ is (and anyway, my parameter of interest is actually $\\pi$).\n\nI am quite new to bayesian updating of binomial proportion and Beta distribution in general, and I was not able to find an answer to this. I know that if $\\pi \\sim Beta(a,b)$, then $c \\pi$ is not a beta distribution anymore but a \"scaled beta distribution\" (is there any other name? 
I do not care about the whole generalized beta distribution, I just want the additional scale parameter!).\nTo solve my problem I just need to know how to link two scaled beta distribution (with the same a and b) (actually, how to link a scaled beta distribution to its version with scale = 1).\n\nFrom what I tried so far, I am stuck because I end up with: $something \\times (c \\pi)^x (1-c\\pi)^{1-x} \\pi^{a-1} (1-\\pi)^{b-1}$ which does not allow me to do simple updating and say for example that it is obviously another beta distribution (as we do in the bayesian updating classic case).\n\nI do not see an easy factorization here, but maybe I just miss a simple maths trick? or maybe my formula is even wrong?\nMy main question is, is there an easy way to update \"scaled beta distribution\" (or beta of the first kind) or do I have to do it numerically?"
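If the numerical route mentioned at the end is acceptable, a one-dimensional grid approximation of the posterior over $\pi$ is straightforward. The sketch below is illustrative only: the prior parameters, $c$, and the data are made-up values, not from the question.

```python
# Grid approximation of the posterior over pi:
#   prior:       Beta(a, b) on pi
#   likelihood:  (c*pi)^x * (1 - c*pi)^(1 - x) per observation x in {0, 1}
def posterior_grid(a, b, c, data, n_grid=2001):
    grid = [i / (n_grid - 1) for i in range(n_grid)]
    dens = []
    for p in grid:
        # Beta(a, b) prior kernel (endpoints contribute essentially nothing)
        w = p ** (a - 1) * (1 - p) ** (b - 1) if 0 < p < 1 else 0.0
        for x in data:
            w *= (c * p) ** x * (1 - c * p) ** (1 - x)
        dens.append(w)
    z = sum(dens)
    dens = [d / z for d in dens]
    mean = sum(p * d for p, d in zip(grid, dens))
    return grid, dens, mean

# Sanity check: with c = 1 this must reproduce ordinary conjugate updating,
# e.g. Beta(1, 1) prior + data (1, 0, 1, 1) -> Beta(4, 2), mean 4/6.
_, _, m1 = posterior_grid(1, 1, 1.0, [1, 0, 1, 1])

# With c = 0.5, the same observed successes imply a larger pi.
_, _, m2 = posterior_grid(1, 1, 0.5, [1, 0, 1, 1])
print(round(m1, 4), round(m2, 4))
```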
Source: https://www.numbersaplenty.com/11111111110
"Search a number\nBaseRepresentation\nbin10100101100100011…\n…00001100111000110\n31001200100111221000201\n422112101201213012\n5140223421023420\n65034313401114\n7542225612641\noct122621414706\n931610457021\n1011111111110\n114791a29260\n1221a110919a\n131080c5b678\n14775954a58\n154506db10a\nhex2964619c6\n\n11111111110 has 64 divisors (see below), whose sum is σ = 22435273728. Its totient is φ = 3926880000.\n\nThe previous prime is 11111111059. The next prime is 11111111113. The reversal of 11111111110 is 1111111111.\n\nIt is a happy number.\n\nIt is a Harshad number since it is a multiple of its sum of digits (10).\n\nIt is a nialpdrome in base 10.\n\nIt is a congruent number.\n\nIt is not an unprimeable number, because it can be changed into a prime (11111111113) by changing a digit.\n\nIt is a polite number, since it can be written in 31 ways as a sum of consecutive naturals, for example, 1217665 + ... + 1226755.\n\nIt is an arithmetic number, because the mean of its divisors is an integer number (350551152).\n\nAlmost surely, 211111111110 is an apocalyptic number.\n\n11111111110 is a gapful number since it is divisible by the number (10) formed by its first and last digit.\n\n11111111110 is an abundant number, since it is smaller than the sum of its proper divisors (11324162618).\n\nIt is a pseudoperfect number, because it is the sum of a subset of its proper divisors.\n\n11111111110 is a wasteful number, since it uses less digits than its factorization.\n\n11111111110 is an odious number, because the sum of its binary digits is odd.\n\nThe sum of its prime factors is 9421.\n\nThe product of its (nonzero) digits is 1, while the sum is 10.\n\n11111111110 divided by its sum of digits (10) gives a palindrome (1111111111).\n\nAdding to 11111111110 its reverse (1111111111), we get a palindrome (12222222221).\n\nSubtracting from 11111111110 its reverse (1111111111), we obtain a palindrome (9999999999).\n\nIt can be divided in two parts, 111111 and 11110, that added 
together give a palindrome (122221).\n\nThe spelling of 11111111110 in words is \"eleven billion, one hundred eleven million, one hundred eleven thousand, one hundred ten\"."
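Several of these claims follow directly from the prime factorisation 11111111110 = 2 · 5 · 11 · 41 · 271 · 9091 and can be verified in a few lines:

```python
# Verify the divisor count, sigma, phi, and the sum of prime factors
# of 11111111110 from its factorisation (trial division up to sqrt(n)).
n = 11111111110
factors, m, d = {}, n, 2
while d * d <= m:
    while m % d == 0:
        factors[d] = factors.get(d, 0) + 1
        m //= d
    d += 1
if m > 1:
    factors[m] = factors.get(m, 0) + 1

num_divisors, sigma, phi = 1, 1, 1
for p, e in factors.items():
    num_divisors *= e + 1                       # tau(n)
    sigma *= (p ** (e + 1) - 1) // (p - 1)      # sum of divisors
    phi *= p ** (e - 1) * (p - 1)               # Euler's totient

print(sorted(factors))  # [2, 5, 11, 41, 271, 9091]
```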
Source: https://www.omnicalculator.com/math/pyramid-angle
"# Pyramid Angle Calculator\n\nCreated by Davide Borchia\nReviewed by Anna Szczepanek, PhD and Jack Bowater\nLast updated: Feb 02, 2023\n\nUse our pyramid angle calculator to find all the possible angles in a pyramid - we'll ask you a few parameters, and do all the math for you!\n\nHere you will learn:\n\n• The types of angles in a pyramid;\n• How to calculate the angles of a pyramid;\n• An example of how to calculate the square pyramid angles; and\n• How to use our pyramid angle calculator.\n\n## What is a pyramid?\n\nA pyramid is a solid figure with a polygonal base. A triangular face corresponds to each side of the base, joining at the apex.\n\nThe base can be of any shape; however, regular pyramids are easier to study. The base of a regular pyramid is a regular polygon. We can find, for example:\n\n• Triangular pyramids, with an equilateral triangle as the base;\n• Square pyramids, where the base is, of course, a square;\n• Hexagonal pyramids; and\n• Well, infinitely more.\n\nIf the apex lies above the centroid of the base (the geometric mean point of the polygon), we call the pyramid a right pyramid; otherwise, we have an oblique pyramid. Here we will analyze only right pyramids.\n\n💡 We have many pyramid calculators here at Omni: check out our pyramid volume calculator, our right rectangular pyramid calculator, and our triangular pyramid volume calculator!\n\n## The angles of a pyramid\n\nThere are a lot of angles in a pyramid, but luckily in a regular pyramid, most of them are identical. Let's identify them!\n\nWe can identify the angle between the faces' vertical medians and the base at the apex. It defines how \"slender\" a pyramid is. We call this angle $\\alpha$.\n\nThe angle between the edge and the base $\\beta$ is always smaller than $\\alpha$ in pyramids with a convex base: the corner of the base is farther from the centroid than the center of its sides.\n\nWe can identify other angles, also related to the pyramid's height. 
They lie on each face, and since in a regular pyramid a face is an isosceles triangle, the two angles at the bottom are identical. We identify them with the letter $\\gamma$. The angle on the top, at the apex, gets smaller for tall and slender pyramids. We call it $\\delta$.\n\nWe highlighted $\\alpha$ and $\\beta$ in a hexagonal pyramid (because ).\n\nNow you know which angles to search for. It's time to learn how to calculate the angles in a pyramid!\n\n## How to calculate the pyramid angles\n\nTo find the angles we need to use a bit of trigonometry - in particular the theorems to find the elements of a right triangle.\n\nLet's start with the angle $\\alpha$, between the median of a face and the base. Let's take a look at the diagram, now marked with some relevant points.\n\n$\\alpha$ corresponds to the angle $\\text{C}\\widehat{\\text{M}}\\text{O}$. We compute it noting that the catheti of the triangles are $\\text{OC}$ (the height of the pyramid) and $\\text{CM}$ (the segment from the side's midpoint to the centroid). We calculate the angle $\\alpha$ with:\n\n$\\footnotesize \\tan{\\alpha} = \\frac{\\text{OC}}{\\text{CM}} \\ \\rightarrow \\ \\alpha = \\arctan{\\left(\\frac{\\text{OC}}{\\text{CM}}\\right)}$\n\nLet's proceed with the angle $\\beta$. In this case, we need to use the segment $\\text{AC}$ to compute the tangent:\n\n$\\footnotesize \\tan{\\beta} = \\frac{\\text{OC}}{\\text{AC}}\\ \\rightarrow\\ \\beta= \\arctan{\\left(\\frac{\\text{OC}}{\\text{AC}}\\right)}$\n\n🔎 Note that $\\text{AC}>\\text{CM}$ is always true: this explains why $\\beta<\\alpha$.\n\nIt's time to compute the angle on a pyramid's face. Using the old and reliable Pythagorean theorem, we find the length of the slanted side of the pyramid:\n\n🙋 To analyze these angles, we move to the other side of the base! 
Take another look at the diagram if you have trouble visualizing the equations.\n\n$\\text{PO}=\\sqrt{\\text{OC}^2+\\text{PC}^2}$\n\nOn each face, we can identify a pair of right triangles created by the median (in this case $\\text{ON}$). The segment $\\text{PN}$ (half the length of the base's side) and the slanted side of the pyramid $\\text{ON}$ isolate the angle $\\gamma$ ($\\text{O}\\widehat{\\text{P}}\\text{N}$). We use another right triangle identity to calculate it:\n\n$\\footnotesize \\cos{\\gamma}=\\frac{\\text{PN}}{\\text{ON}}\\ \\rightarrow \\ \\gamma=\\arccos{\\left(\\frac{\\text{PN}}{\\text{ON}}\\right)}$\n\nThe last angle, $\\delta$, can be computed using trigonometry again, or if laziness is allowed, by considering that the sum of the internal angles of a triangle is $180\\degree$.\n\n$\\delta = 180\\degree - 2\\times\\gamma$\n\n## How to calculate the angles of a square pyramid\n\nWe will guide you step by step in calculating the angles of a right square pyramid. In fact, not of a general square pyramid - let's calculate the angles of the Great Pyramid of Giza!\n\nWe need to take some measurements:\n\n• The original height of the pyramid is $146.7\\ \\text{m}$; and\n• The side measures $230.6\\ \\text{m}$.\n\nLet's calculate the segment connecting the midpoint of the side to the center. 
Since the base is a square, it's value is half the length of the side:\n\n$\\footnotesize\\text{CM}=\\frac{AB}{2}=\\frac{230.6 \\ \\text{m}}{2}=115.3\\ \\text{m}$\n\nWe can calculate the angle $\\alpha$:\n\n\\begin{align*} \\footnotesize\\alpha & \\footnotesize = \\arctan{\\left(\\frac{\\text{OC}}{\\text{CM}}\\right)} =\\\\ &\\footnotesize=\\arctan{\\left(\\frac{146.7\\ \\text{m}}{115.3\\ \\text{m}}\\right)} = 51.83\\degree \\end{align*}\n\nTo calculate the angle in the corner of the base, we need the measure of half its diagonal $\\beta$.\n\n\\begin{align*} \\footnotesize\\beta & \\footnotesize = \\arctan{\\left(\\frac{\\text{OC}}{\\text{AC}}\\right)} =\\\\ &\\footnotesize=\\arctan{\\left(\\frac{146.7\\ \\text{m}}{115.3\\times\\sqrt{2}\\ \\text{m}}\\right)} = 41.98\\degree \\end{align*}\n\n🙋 Ancient Egyptians measured the slope of a right pyramid using the seked, a unit corresponding to the numbers of horizontal cubits corresponding to a rise of one cubit in height. The base angle of the Great Pyramid of Giza has a seked of $5\\tfrac{1}{2}$ sekeds.\n\nWhat about the angle on each face?\n\nWe can use the formulas above to find the value of $\\gamma$ and $\\delta$; however, we need to calculate the length of the slanted side first:\n\n\\begin{align*} \\footnotesize \\text{OB} & \\footnotesize =\\sqrt{\\text{CO}^2+\\text{AC}^2} \\\\ & \\footnotesize = \\sqrt{146.7^2 + 2\\times(115.3)^2}\\\\ &\\footnotesize = 219.3\\ \\text{m} \\end{align*}",
"Egyptians and mathematics went hand in hand. However, we are sure that they weren't marking angles with Greek letters!\n\nLet's proceed with the calculations.\n\n\\begin{align*} \\footnotesize\\gamma & \\footnotesize = \\arccos{\\left(\\frac{\\text{PN}}{\\text{ON}}\\right)} =\\\\ &\\footnotesize=\\arccos{\\left(\\frac{115.3\\ \\text{m}}{219.3\\ \\text{m}}\\right)}= 58.29\\degree \\end{align*}\n\nFinally, the angle of a face at the apex of the pyramid, $\\delta$, is:\n\n\\begin{align*} \\footnotesize\\delta & \\footnotesize = 180\\degree - 2\\times\\gamma=\\\\ &\\footnotesize = 180\\degree-116.58=63.42\\degree \\end{align*}\n\n## How to use our pyramid angle calculator\n\nOur pyramid angle calculator can help you with many types of regular right pyramids. Select the type of base you need; we included:\n\n• Square;\n• Regular pentagon;\n• Regular hexagon;\n• Regular heptagon; and\n• Regular octagon.\n\nInsert the available measurements in the calculator, and find the results!\n\n🙋 Our tools work in reverse, too. For example, you can insert the value of the base angles and find out the pyramid's height!\n\nDo you want to expand your pyramidal knowledge outside of geometry? Visit our Minecraft's pyramid block calculator!\n\n## FAQ\n\n### What is the best angle for pyramid power?\n\nPyramids are surrounded by mysticism and occultism. The truth is that there's no such thing as pyramid power, hidden purposes, and alien involvement. Pyramids, and in particular ancient pyramids, are neat but not mystic! The math behind them, however, is all true!\n\n### What is the angle of a hexagonal pyramid with side 2 and height 3?\n\nTo calculate the angle at the base of a hexagonal pyramid, follow these steps:\n\n1. Calculate the length of the segment MC connecting the side's midpoint to the centroid.\n2. 
Apply the inverse tangent function to the ratio of the height and the length just calculated: OC/MC.\n\nFor a hexagon with side 2 and height 3, it means:\n\n• MC = 2 × cos(30°) = 2 × (sqrt(3)/2) = sqrt(3)\n• α = arctan(3/sqrt(3)) = arctan(sqrt(3)) = 60°\n\n### How do I calculate the angles at the base of a square pyramid?\n\nTo calculate the angles at the base of a pyramid, you can use the trigonometric formulas of right triangles. Calling the height h and the side L, the value of the base angle is:\nα = arctan(h/(L/2))\nWe calculate the angle in the corner of the base using half the diagonal:\nß = arctan(h/(sqrt(2) × L/2)) = arctan(h/(L/sqrt(2)))\n\n### What are the angles of the Great Pyramid of Giza?\n\nThe Great Pyramid of Giza has a base angle of 51.83°. We can calculate it using the inverse trigonometric function arctangent, knowing the height and the side of the pyramid:\nα = arctan(146.7/115.3) = 51.83°\nThe angle in the corner of the base is slightly smaller: 41.98°.\n\nDavide Borchia",
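The Giza numbers in the worked example can be re-derived in a few lines. This is just a check of the formulas above, using the height and side from the article:

```python
import math

# Great Pyramid of Giza: original height and base side, in metres.
h, side = 146.7, 230.6
half = side / 2                                              # CM = 115.3 m

alpha = math.degrees(math.atan(h / half))                    # face-to-base angle
beta = math.degrees(math.atan(h / (half * math.sqrt(2))))    # edge-to-base angle
slant = math.hypot(h, half * math.sqrt(2))                   # OB, the slanted side
gamma = math.degrees(math.acos(half / slant))                # base angle of a face
delta = 180 - 2 * gamma                                      # apex angle of a face

# Close to the article's 51.83, 41.98, 58.29, 63.42 (up to rounding).
print(round(alpha, 2), round(beta, 2), round(gamma, 2), round(delta, 2))
```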
Source: http://ixtrieve.fh-koeln.de/birds/litie/document/22887
"# Document (#22887)\n\nAuthor\nBinder, G.\nStahl, M.\nFaulborn, L.\nTitle\nVergleichsuntersuchung MESSENGER-FULCRUM\nImprint\nBonn : IZ Sozialwissenschaften\nYear\n2000\nPages\n70 S\nSeries\nIZ-Arbeitsbericht; Nr.18\nAbstract\nIn einem Benutzertest, der im Rahmen der Projektes GIRT stattfand, wurde die Leistungsfähigkeit zweier Retrievalsprachen für die Datenbankrecherche überprüft. Die Ergebnisse werden in diesem Bericht dargestellt: Das System FULCRUM beruht auf automatischer Indexierung und liefert ein nach statistischer Relevanz sortiertes Suchergebnis. Die Standardfreitextsuche des Systems MESSENGER wurde um die intellektuell vom IZ vergebenen Deskriptoren ergänzt. Die Ergebnisse zeigen, dass in FULCRUM das Boole'sche Exakt-Match-Retrieval dem Verktos-Space-Modell (Best-Match-Verfahren) von den Versuchspersonen vorgezogen wurde. Die in MESSENGER realisierte Mischform aus intellektueller und automatischer Indexierung erwies sich gegenüber dem quantitativ-statistischen Ansatz beim Recall als überlegen\nFootnote\nAls PDF-Datei auch unter: http://www.bonn.iz-soz.de/publications/series/working-papers/\nTheme\nRetrievalstudien\nBenutzerstudien\nObject\nMessenger\nFULCRUM\n\n## Similar documents (author)\n\n1. Stahl, G.D.: ¬The virtual library : prospect and promise, or plus ca change, plus c'est la meme chose (1993) 2.38\n```2.376252 = sum of:\n2.376252 = product of:\n4.752504 = sum of:\n4.752504 = weight(author_txt:stahl in 288) [ClassicSimilarity], result of:\n4.752504 = score(doc=288,freq=1.0), product of:\n0.7825483 = queryWeight, product of:\n1.1211264 = boost\n9.71698 = idf(docFreq=6, maxDocs=42740)\n0.07183321 = queryNorm\n6.0731125 = fieldWeight in 288, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n9.71698 = idf(docFreq=6, maxDocs=42740)\n0.625 = fieldNorm(doc=288)\n0.5 = coord(1/2)\n```\n2. 
Stahl, J.N.: Using the interest to access CARL and other electronic information systems (1990) 2.38\n```2.376252 = sum of:\n2.376252 = product of:\n4.752504 = sum of:\n4.752504 = weight(author_txt:stahl in 4125) [ClassicSimilarity], result of:\n4.752504 = score(doc=4125,freq=1.0), product of:\n0.7825483 = queryWeight, product of:\n1.1211264 = boost\n9.71698 = idf(docFreq=6, maxDocs=42740)\n0.07183321 = queryNorm\n6.0731125 = fieldWeight in 4125, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n9.71698 = idf(docFreq=6, maxDocs=42740)\n0.625 = fieldNorm(doc=4125)\n0.5 = coord(1/2)\n```\n3. Stahl, J.: 'Have a question? Click here' : electronic reference at the National Museum of American Art (1998) 2.38\n```2.376252 = sum of:\n2.376252 = product of:\n4.752504 = sum of:\n4.752504 = weight(author_txt:stahl in 3495) [ClassicSimilarity], result of:\n4.752504 = score(doc=3495,freq=1.0), product of:\n0.7825483 = queryWeight, product of:\n1.1211264 = boost\n9.71698 = idf(docFreq=6, maxDocs=42740)\n0.07183321 = queryNorm\n6.0731125 = fieldWeight in 3495, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n9.71698 = idf(docFreq=6, maxDocs=42740)\n0.625 = fieldNorm(doc=3495)\n0.5 = coord(1/2)\n```\n4. Stahl, G.: Group cognition : computer support for building collaborative knowledge (2006) 2.38\n```2.376252 = sum of:\n2.376252 = product of:\n4.752504 = sum of:\n4.752504 = weight(author_txt:stahl in 4392) [ClassicSimilarity], result of:\n4.752504 = score(doc=4392,freq=1.0), product of:\n0.7825483 = queryWeight, product of:\n1.1211264 = boost\n9.71698 = idf(docFreq=6, maxDocs=42740)\n0.07183321 = queryNorm\n6.0731125 = fieldWeight in 4392, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n9.71698 = idf(docFreq=6, maxDocs=42740)\n0.625 = fieldNorm(doc=4392)\n0.5 = coord(1/2)\n```\n5. 
Binder, W.: Online-Dienste, Telekopie - Gegenwart und Zukunft elektronischer Informationssystemem für Wissenschaft und Bibliotheken (1986) 1.69\n```1.6862763 = sum of:\n1.6862763 = product of:\n3.3725526 = sum of:\n3.3725526 = weight(author_txt:binder in 528) [ClassicSimilarity], result of:\n3.3725526 = score(doc=528,freq=1.0), product of:\n0.62258977 = queryWeight, product of:\n8.667158 = idf(docFreq=19, maxDocs=42740)\n0.07183321 = queryNorm\n5.416974 = fieldWeight in 528, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n8.667158 = idf(docFreq=19, maxDocs=42740)\n0.625 = fieldNorm(doc=528)\n0.5 = coord(1/2)\n```\n\n## Similar documents (content)\n\n1. Mielke, B.: Wider einige gängige Ansichten zur juristischen Informationserschließung (2002) 0.11\n```0.11142406 = sum of:\n0.11142406 = product of:\n0.46426693 = sum of:\n0.071676575 = weight(abstract_txt:statistischen in 4146) [ClassicSimilarity], result of:\n0.071676575 = score(doc=4146,freq=1.0), product of:\n0.11469214 = queryWeight, product of:\n1.0238823 = boost\n7.999329 = idf(docFreq=38, maxDocs=42740)\n0.014003289 = queryNorm\n0.6249476 = fieldWeight in 4146, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n7.999329 = idf(docFreq=38, maxDocs=42740)\n0.078125 = fieldNorm(doc=4146)\n0.09116891 = weight(abstract_txt:intellektueller in 4146) [ClassicSimilarity], result of:\n0.09116891 = score(doc=4146,freq=1.0), product of:\n0.13464183 = queryWeight, product of:\n1.1093618 = boost\n8.667158 = idf(docFreq=19, maxDocs=42740)\n0.014003289 = queryNorm\n0.67712176 = fieldWeight in 4146, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n8.667158 = idf(docFreq=19, maxDocs=42740)\n0.078125 = fieldNorm(doc=4146)\n0.102893926 = weight(abstract_txt:statistischer in 4146) [ClassicSimilarity], result of:\n0.102893926 = score(doc=4146,freq=1.0), product of:\n0.14595152 = queryWeight, product of:\n1.1550148 = boost\n9.023833 = idf(docFreq=13, maxDocs=42740)\n0.014003289 
= queryNorm\n0.704987 = fieldWeight in 4146, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n9.023833 = idf(docFreq=13, maxDocs=42740)\n0.078125 = fieldNorm(doc=4146)\n0.066107266 = weight(abstract_txt:ergebnisse in 4146) [ClassicSimilarity], result of:\n0.066107266 = score(doc=4146,freq=2.0), product of:\n0.10867132 = queryWeight, product of:\n1.4094695 = boost\n5.5059114 = idf(docFreq=471, maxDocs=42740)\n0.014003289 = queryNorm\n0.608323 = fieldWeight in 4146, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n5.5059114 = idf(docFreq=471, maxDocs=42740)\n0.078125 = fieldNorm(doc=4146)\n0.0861397 = weight(abstract_txt:indexierung in 4146) [ClassicSimilarity], result of:\n0.0861397 = score(doc=4146,freq=1.0), product of:\n0.16334069 = queryWeight, product of:\n1.7280066 = boost\n6.7502356 = idf(docFreq=135, maxDocs=42740)\n0.014003289 = queryNorm\n0.52736217 = fieldWeight in 4146, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n6.7502356 = idf(docFreq=135, maxDocs=42740)\n0.078125 = fieldNorm(doc=4146)\n0.046280522 = weight(abstract_txt:wurde in 4146) [ClassicSimilarity], result of:\n0.046280522 = score(doc=4146,freq=1.0), product of:\n0.12357237 = queryWeight, product of:\n1.8407912 = boost\n4.793876 = idf(docFreq=961, maxDocs=42740)\n0.014003289 = queryNorm\n0.37452158 = fieldWeight in 4146, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n4.793876 = idf(docFreq=961, maxDocs=42740)\n0.078125 = fieldNorm(doc=4146)\n0.24 = coord(6/25)\n```\n2. 
Scherer, B.: Automatische Indexierung und ihre Anwendung im DFG-Projekt \"Gemeinsames Portal für Bibliotheken, Archive und Museen (BAM)\" (2003) 0.09\n```0.093541466 = sum of:\n0.093541466 = product of:\n0.46770734 = sum of:\n0.05734126 = weight(abstract_txt:statistischen in 1284) [ClassicSimilarity], result of:\n0.05734126 = score(doc=1284,freq=1.0), product of:\n0.11469214 = queryWeight, product of:\n1.0238823 = boost\n7.999329 = idf(docFreq=38, maxDocs=42740)\n0.014003289 = queryNorm\n0.49995807 = fieldWeight in 1284, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n7.999329 = idf(docFreq=38, maxDocs=42740)\n0.0625 = fieldNorm(doc=1284)\n0.07293513 = weight(abstract_txt:intellektueller in 1284) [ClassicSimilarity], result of:\n0.07293513 = score(doc=1284,freq=1.0), product of:\n0.13464183 = queryWeight, product of:\n1.1093618 = boost\n8.667158 = idf(docFreq=19, maxDocs=42740)\n0.014003289 = queryNorm\n0.5416974 = fieldWeight in 1284, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n8.667158 = idf(docFreq=19, maxDocs=42740)\n0.0625 = fieldNorm(doc=1284)\n0.037395917 = weight(abstract_txt:ergebnisse in 1284) [ClassicSimilarity], result of:\n0.037395917 = score(doc=1284,freq=1.0), product of:\n0.10867132 = queryWeight, product of:\n1.4094695 = boost\n5.5059114 = idf(docFreq=471, maxDocs=42740)\n0.014003289 = queryNorm\n0.34411946 = fieldWeight in 1284, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.5059114 = idf(docFreq=471, maxDocs=42740)\n0.0625 = fieldNorm(doc=1284)\n0.16879866 = weight(abstract_txt:indexierung in 1284) [ClassicSimilarity], result of:\n0.16879866 = score(doc=1284,freq=6.0), product of:\n0.16334069 = queryWeight, product of:\n1.7280066 = boost\n6.7502356 = idf(docFreq=135, maxDocs=42740)\n0.014003289 = queryNorm\n1.0334146 = fieldWeight in 1284, product of:\n2.4494898 = tf(freq=6.0), with freq of:\n6.0 = termFreq=6.0\n6.7502356 = idf(docFreq=135, maxDocs=42740)\n0.0625 = 
fieldNorm(doc=1284)\n0.13123637 = weight(abstract_txt:automatischer in 1284) [ClassicSimilarity], result of:\n0.13123637 = score(doc=1284,freq=1.0), product of:\n0.25095835 = queryWeight, product of:\n2.1418993 = boost\n8.367054 = idf(docFreq=26, maxDocs=42740)\n0.014003289 = queryNorm\n0.5229409 = fieldWeight in 1284, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n8.367054 = idf(docFreq=26, maxDocs=42740)\n0.0625 = fieldNorm(doc=1284)\n0.2 = coord(5/25)\n```\n3. Kempf, A.O.: Automatische Inhaltserschließung in der Fachinformation (2013) 0.07\n```0.069561236 = sum of:\n0.069561236 = product of:\n0.34780616 = sum of:\n0.05790168 = weight(abstract_txt:intellektuell in 2906) [ClassicSimilarity], result of:\n0.05790168 = score(doc=2906,freq=1.0), product of:\n0.115438215 = queryWeight, product of:\n1.0272071 = boost\n8.025305 = idf(docFreq=37, maxDocs=42740)\n0.014003289 = queryNorm\n0.50158155 = fieldWeight in 2906, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n8.025305 = idf(docFreq=37, maxDocs=42740)\n0.0625 = fieldNorm(doc=2906)\n0.037395917 = weight(abstract_txt:ergebnisse in 2906) [ClassicSimilarity], result of:\n0.037395917 = score(doc=2906,freq=1.0), product of:\n0.10867132 = queryWeight, product of:\n1.4094695 = boost\n5.5059114 = idf(docFreq=471, maxDocs=42740)\n0.014003289 = queryNorm\n0.34411946 = fieldWeight in 2906, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n5.5059114 = idf(docFreq=471, maxDocs=42740)\n0.0625 = fieldNorm(doc=2906)\n0.06891176 = weight(abstract_txt:indexierung in 2906) [ClassicSimilarity], result of:\n0.06891176 = score(doc=2906,freq=1.0), product of:\n0.16334069 = queryWeight, product of:\n1.7280066 = boost\n6.7502356 = idf(docFreq=135, maxDocs=42740)\n0.014003289 = queryNorm\n0.42188972 = fieldWeight in 2906, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n6.7502356 = idf(docFreq=135, maxDocs=42740)\n0.0625 = fieldNorm(doc=2906)\n0.052360427 = 
weight(abstract_txt:wurde in 2906) [ClassicSimilarity], result of:\n0.052360427 = score(doc=2906,freq=2.0), product of:\n0.12357237 = queryWeight, product of:\n1.8407912 = boost\n4.793876 = idf(docFreq=961, maxDocs=42740)\n0.014003289 = queryNorm\n0.42372277 = fieldWeight in 2906, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n4.793876 = idf(docFreq=961, maxDocs=42740)\n0.0625 = fieldNorm(doc=2906)\n0.13123637 = weight(abstract_txt:automatischer in 2906) [ClassicSimilarity], result of:\n0.13123637 = score(doc=2906,freq=1.0), product of:\n0.25095835 = queryWeight, product of:\n2.1418993 = boost\n8.367054 = idf(docFreq=26, maxDocs=42740)\n0.014003289 = queryNorm\n0.5229409 = fieldWeight in 2906, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n8.367054 = idf(docFreq=26, maxDocs=42740)\n0.0625 = fieldNorm(doc=2906)\n0.2 = coord(5/25)\n```\n4. Bunk, T.: Deskriptoren Stoppwortlisten und kryptische Zeichen (2008) 0.07\n```0.067412905 = sum of:\n0.067412905 = product of:\n0.5617742 = sum of:\n0.13579656 = weight(abstract_txt:deskriptoren in 4472) [ClassicSimilarity], result of:\n0.13579656 = score(doc=4472,freq=1.0), product of:\n0.11062535 = queryWeight, product of:\n1.0055659 = boost\n7.856228 = idf(docFreq=44, maxDocs=42740)\n0.014003289 = queryNorm\n1.2275356 = fieldWeight in 4472, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n7.856228 = idf(docFreq=44, maxDocs=42740)\n0.15625 = fieldNorm(doc=4472)\n0.18233782 = weight(abstract_txt:intellektueller in 4472) [ClassicSimilarity], result of:\n0.18233782 = score(doc=4472,freq=1.0), product of:\n0.13464183 = queryWeight, product of:\n1.1093618 = boost\n8.667158 = idf(docFreq=19, maxDocs=42740)\n0.014003289 = queryNorm\n1.3542435 = fieldWeight in 4472, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n8.667158 = idf(docFreq=19, maxDocs=42740)\n0.15625 = fieldNorm(doc=4472)\n0.24363983 = weight(abstract_txt:indexierung in 4472) 
[ClassicSimilarity], result of:\n0.24363983 = score(doc=4472,freq=2.0), product of:\n0.16334069 = queryWeight, product of:\n1.7280066 = boost\n6.7502356 = idf(docFreq=135, maxDocs=42740)\n0.014003289 = queryNorm\n1.4916053 = fieldWeight in 4472, product of:\n1.4142135 = tf(freq=2.0), with freq of:\n2.0 = termFreq=2.0\n6.7502356 = idf(docFreq=135, maxDocs=42740)\n0.15625 = fieldNorm(doc=4472)\n0.12 = coord(3/25)\n```\n5. Stock, W.G.: Natürlichsprachige Suche - more like this! : Lexis-Nexis' Freestyle (1998) 0.06\n```0.056921408 = sum of:\n0.056921408 = product of:\n0.4743451 = sum of:\n0.13782352 = weight(abstract_txt:indexierung in 3524) [ClassicSimilarity], result of:\n0.13782352 = score(doc=3524,freq=1.0), product of:\n0.16334069 = queryWeight, product of:\n1.7280066 = boost\n6.7502356 = idf(docFreq=135, maxDocs=42740)\n0.014003289 = queryNorm\n0.84377944 = fieldWeight in 3524, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n6.7502356 = idf(docFreq=135, maxDocs=42740)\n0.125 = fieldNorm(doc=3524)\n0.07404883 = weight(abstract_txt:wurde in 3524) [ClassicSimilarity], result of:\n0.07404883 = score(doc=3524,freq=1.0), product of:\n0.12357237 = queryWeight, product of:\n1.8407912 = boost\n4.793876 = idf(docFreq=961, maxDocs=42740)\n0.014003289 = queryNorm\n0.5992345 = fieldWeight in 3524, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n4.793876 = idf(docFreq=961, maxDocs=42740)\n0.125 = fieldNorm(doc=3524)\n0.26247275 = weight(abstract_txt:automatischer in 3524) [ClassicSimilarity], result of:\n0.26247275 = score(doc=3524,freq=1.0), product of:\n0.25095835 = queryWeight, product of:\n2.1418993 = boost\n8.367054 = idf(docFreq=26, maxDocs=42740)\n0.014003289 = queryNorm\n1.0458817 = fieldWeight in 3524, product of:\n1.0 = tf(freq=1.0), with freq of:\n1.0 = termFreq=1.0\n8.367054 = idf(docFreq=26, maxDocs=42740)\n0.125 = fieldNorm(doc=3524)\n0.12 = coord(3/25)\n```"
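The similarity scores above are computed by Lucene's ClassicSimilarity (tf-idf). As a sketch (assuming Lucene's classic formulas, `idf = 1 + ln(maxDocs / (docFreq + 1))` and `tf = sqrt(termFreq)`), the recurring factors in this catalog's index can be reproduced; the `docFreq`/`maxDocs` counts below are taken from the source page:

```python
import math

# Lucene ClassicSimilarity components (classic tf-idf scoring):
#   idf(t) = 1 + ln(maxDocs / (docFreq + 1))
#   tf(t)  = sqrt(termFreq)
def idf(doc_freq, max_docs):
    return 1.0 + math.log(max_docs / (doc_freq + 1))

def tf(term_freq):
    return math.sqrt(term_freq)

# Values that recur in this index (42740 documents):
print(round(idf(6, 42740), 5))   # ~9.71698 (a term appearing in 7 docs, e.g. author "stahl")
print(round(idf(19, 42740), 5))  # ~8.66716 (a term appearing in 20 docs, e.g. author "binder")
print(round(tf(2.0), 7))         # ~1.4142136 (a term occurring twice in a field)
```

Rare terms (low `docFreq`) thus dominate the scores, which is why the author-match scores are an order of magnitude larger than the abstract-content matches.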
http://docs.groovy-lang.org/docs/groovy-2.5.4/html/gapi/groovy/transform/stc/FromString.html
"Package: groovy.transform.stc\n\n[Java] Class FromString\n\n• groovy.transform.stc.FromString\n• A closure parameter hint class that is convenient if you want to use a String representation of the signature. It makes use of the option strings, where each string corresponds to a single signature.\n\nThe following example describes a closure as accepting a single signature (List<T> list ->):\n\npublic <T> T apply(T src, @ClosureParams(value=FromString.class, options=\"List<T>\" Closure<T> cl)\n\nThe next example describes a closure as accepting two signatures (List<T> list ->) and (T t ->):\n\npublic <T> T apply(T src, @ClosureParams(value=FromString.class, options={\"List<T>\",\"T\"} Closure<T> cl)\n\nIt is advisable not to use this hint as a replacement for the various FirstParam, SimpleType, ... hints because it is actually much slower. Using this hint should therefore be limited to cases where it's not possible to express the signature using the existing hints.\n\nAuthors:\nCédric Champeau\nSince:\n2.3.0"
https://www.codespeedy.com/association-rule-mining-in-python/
"# Association Rule Mining in Python\n\nHello everyone, In this tutorial, we’ll be learning about Association Rule Mining in Python (ARM) and will do a hands-on practice on a dataset. We will use the apriori algorithm and look on the components of the apriori algorithm. Let us start this tutorial with a brief introduction to association rules mining.\n\n## What is Association Rule Mining and its benefits?\n\nAssociation Rule Mining is a process that uses Machine learning to analyze the data for the patterns, the co-occurrence and the relationship between different attributes or items of the data set. In the real-world, Association Rules mining is useful in Python as well as in other programming languages for item clustering, store layout, and market basket analysis.\n\nAssociation rules include two parts, an antecedent (if) and a consequent (then) that is the if-then association that occurs more frequently in the dataset.\n\nFor example, {Bread} => {Milk} can be an association in a supermarket store. This relation implies that if(antecedent) a person buys Bread then(consequent) most probably the customer will buy Milk. There can be lots of relations between several itemsets that can be used to make the layout of the store. With this, customers would not require to go far to look for every product. To increase sales of the store these products can have combined discounts and there are many other ways these associations are helpful.\n\nFor this tutorial, we’ll be using a dataset that contains a list of 20 orders including the name of order items. You can download the dataset by clicking here. The dataset will look like this.",
null,
"There are many algorithms that use association rules like AIS, SETM, Apriori, etc. Apriori algorithm is the most widely used algorithm that uses association rules and we will use this in our code. Now let us import the necessary modules and modify our dataset to make it usable.\n\n## Importing and Modifications in the Dataset\n\nHere we are going to understand association rule mining with the help of apyori Python library. So let’s continue reading…\n\nInstall the apyori library using the command line by running the following pip command.\n\npip install apyori\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport numpy as np\nfrom apyori import apriori\n\nNow, let us import the data and apply some modifications to the data. Go through the code below.\n\ndata = pd.read_csv(r\"D:\\datasets(june)\\order_data.csv\",delimiter=\" \",header=None)\ndata.head()\n\nThe parameter delimiter=” “ will split the entries of the data whenever whitespace is encountered and header=None will prevent taking the first row as the header and a default header will be there. After this, our data frame will look like this.",
null,
"Let us see some Components of the Apriori Algorithm that are necessary to understand to make a good model.\n\n## Components of the Apriori Algorithm\n\nThere are three main components of an Apriori Algorithm which are as follows:\n\n• Support – It is the measure of the popularity of an itemset that is in how many transactions an item appears from the total number of transactions. It is simply the probability that a customer will buy an item. The mathematical formula to represent support of item X is\nS(X)=(Number of transaction in which X appears)/(Total number of transactions)\n##### Calculating the support value for {Bread} in our dataset\n\nNo. of transactions in which Bread appears = 11\n\nNo. of total transactions = 20\n\n• Minimum Support Value = It is a threshold value above which the product can have a meaningful effect on the profit.\n• Confidence – It tells us the impact of one product on another that is the probability that if a person buys product X then he/she will buy product Y also. Its representation in mathematical terms is\nConfidence({X} => {Y}) = (Transactions containing both X and Y)/(Transactions containing X)\n##### Calculating the Confidence ({Bread} => {Milk}) in our dataset\n\nNo. of transactions in which both Bread and Milk appears = 5\n\nNo. of transactions containing Bread = 11\n\nConfidence ({Bread} => {Milk}) = 5/11 = 0.4545\n\nA major drawback of the confidence is that it only considers the popularity of item X and not of Y. This can decrease the confidence value and therefore can be misleading in understanding the association between different products. To overcome this drawback we have another measure known as Lift.\n\n• Lift – Overcoming the limitation of confidence measure, Lift will calculate the confidence taking into account the popularity of both items. 
Representation of lift in mathematical terms is\nLift({X} => {Y}) = Confience({X} => {Y}) / Support(B)\n\nIf the lift measure is greater than 1, it means that the Y is likely to be bought with X, while a value less than 1 indicates that Y is unlikely to be bought with X. A lift value of near 1 indicates that both the itemsets in the transactions are appearing often together but there is no association between them.\n\n##### Calculating the Lift({Bread} => {Milk}) in our dataset\n\nConfidence ({Bread} => {Milk}) = 0.4545\n\nSupport (Milk) = 9/20 = 0.45\n\nLift({Bread} => {Milk}) = 0.4545/0.45 = 1.01\n\n## Practical Implemenation of Apriori Algorithm\n\nUsing the data-set that we have downloaded in the previous section, let us write some code and calculate the values of apriori algorithm measures. To make use of the Apriori algorithm it is required to convert the whole transactional dataset into a single list and each row will be a list in that list.\n\ndata_list = []\nfor row in range(0, 20):\ndata_list.append([str(data.values[row,column]) for column in range(0, 9)])\n\nalgo = apriori(data_list, min_support=0.25, min_confidence=0.2, min_lift=2, min_length=2)\nresults = list(algo)\n\nWe have created a list of lists, then use the apriori method from apriori module and finally covert the datatype from the generator into a list and save in a variable named results. To make proper decisions and increase the speed of the apriori algorithm, apriori methods take several arguments which are as follows –\n\n1. data – The first parameter that takes the list that contains the transactional data in inner lists.\n2. min_support – It is the threshold support value for the items that should be taken into account. Suppose we want to make decisions for our dataset and want to include only those items that are appearing in at least 5 transactions out of total i.e support value of 5/20 = 0.25.\n3. 
min_confidence – It is the threshold confidence value that should be there between each combination of an itemset. we have taken the confidence value of 0.2.\n4. min_lift – It is the minimum lift value for the rules that are selected. Generally, we take lift value equals to 2 or more to filter out those itemsets that have a more frequent association.\n5. min_length – The numbers of items that are to be considered in the rules.\n\nLet us see the output of the above program and print the first 3 rules that we have obtained.\n\nfor i in range(0,3):\nprint(f\"Required Association No. {i+1} is: {results[i]}\")\nprint('-'*25)\nRequired Association No. 1 is: RelationRecord(items=frozenset({'toothpaste', 'brush'}), support=0.25, ordered_statistics=[OrderedStatistic(items_base=frozenset({'brush'}),\n-------------------------\nRequired Association No. 2 is: RelationRecord(items=frozenset({'mouthwash', 'toothpaste'}), support=0.3, ordered_statistics=[OrderedStatistic(items_base=frozenset({'mouthwash'}),\n-------------------------\nRequired Association No. 3 is: RelationRecord(items=frozenset({'honey', 'bread', 'butter'}), support=0.25, ordered_statistics=[OrderedStatistic(items_base=frozenset({'butter'}),\n-------------------------"
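The support, confidence, and lift arithmetic above can be reproduced with plain Python. The counts below are the ones quoted in this article (Bread in 11 of 20 orders, Milk in 9, both together in 5), not recomputed from the CSV:

```python
# Support / confidence / lift computed from raw transaction counts.
def support(count_x, total):
    return count_x / total

def confidence(count_xy, count_x):
    return count_xy / count_x

def lift(count_xy, count_x, count_y, total):
    # lift({X} => {Y}) = confidence({X} => {Y}) / support(Y)
    return confidence(count_xy, count_x) / support(count_y, total)

total = 20
bread, milk, bread_and_milk = 11, 9, 5

print(round(support(bread, total), 4))                     # 0.55
print(round(confidence(bread_and_milk, bread), 4))         # 0.4545
print(round(lift(bread_and_milk, bread, milk, total), 4))  # 1.0101
```

Note that the lift of about 1.01 matches the article's hand calculation: Bread and Milk co-occur roughly as often as their individual popularities would predict, so the association is weak.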
https://codescracker.com/c/program/c-program-check-palindrome-string.htm
"# C Program to Check for Palindrome String\n\n« C Tutorial C Examples »\n\nIn this article, you will learn and get about checking whether the given string is palindrome or not. Here are the list of programs available here:\n\n• Palindrome String Program in C\n• Palindrome String Program in C without using any Library function (string function)\n• Palindrome String Program in C using user-defined function\n\nBut before going through these programs. Let's understand about the meaning of palindrome string.\n\n### What is Palindrome String ?\n\nThose string whose all the opposite characters are same is palindrome. And if any of the opposite character mismatched, then the string is not palindrome. For example:\n\n• codoc is a palindrome string, because:\n• first character (c) is equal to last character (c)\n• second character (o) is equal to second last character (0)\n• third character (d) is equal to third last character (d)\n• codssdoc is a palindrome string, because:\n• first character (c) is equal to last character (c)\n• second character (o) is equal to second last character (o)\n• third character (d) is equal to third last character (d)\n• fourth character (s) is equal to fourth last character (s)\n• codes is not a palindrome string, because:\n• first character (c) is not equal to last character (s)\n• codescracker is not a palindrome string, because:\n• first character (c) is not equal to last character (r)\n\nIn other words, you can say that, if the reverse of a string is equal to its original, then that string can be called as a palindrome string.\n\n## Palindrome String Program in C\n\nThe question is, write a program in C to enter any string at run-time and check whether it is palindrome or not. 
The answer to this question is:\n\n```#include<stdio.h>\n#include<conio.h>\n#include<string.h>\nint main()\n{\nchar str;\nint i, j, len, chk=0;\nprintf(\"Enter any String (word): \");\nscanf(\"%s\", str);\nlen = strlen(str);\nfor(i=0, j=(len-1); i<=(len-1); i++, j--)\n{\nif(str[i] != str[j])\n{\nchk=1;\nbreak;\n}\n}\nif(chk==1)\nprintf(\"\\nIt's not a Palindrome String\");\nelse\nprintf(\"\\nIt's a Palindrome String\");\ngetch();\nreturn 0;\n}```\n\nThe above program is written under CodeBlocks IDE. Here is the output after build and run. This is the first screenshot of first sample run:\n\nAnd here is the second screenshot of first sample run:\n\nAs the reverse of codescracker is rekcarcsedoc which is not equal to the original string, that is codescracker, therefore the string is not palindrome.\n\nHere is the first screenshot of second sample run:\n\nHere in this case, the reverse of codoc is codoc which is same as the original string, that is codoc, therefore the string is palindrome.\n\n#### Program Explained\n\n• First we have to count the length of string using strlen() function. For example, let's suppose the length of string is 6.\n• Create a loop to compare first character of the string to the last, and then second to the second last and so on. Therefore here first character is compared to the sixth character, and then second to the fifth, and third to the fourth. As indexing starts from 0, therefore, here we have to minus one from all. That is, 0th to the length-1.\n• If any of the above comparison goes wrong, or if any character mismatched, then we have to initialize any number say 1 to any character say chk, and skip the remaining code of the loop using break statement.\n• After exiting from the loop whether using break statement or after running the whole loop. We have to check whether the variable chk holds its original value, that is 0 or not. 
If it holds its original value, then none of the opposite character mismatched (means it is palindrome), and if it doesn't holds its original value, then any of the opposite character mismatched (means it is not palindrome).\n• Here mismatched occurs, means the string is not a palindrome, otherwise the string is a palindrome.\n\nHere is another sample run for, a palindrome string:\n\nAnd here is one more sample run for, not a palindrome string:\n\n### Check Palindrome String in C without using String Function\n\nHere is another program that will not use any string function. As you can see from the previous program, string function strlen() is used to find length of the entered string. But here, this program will calculate length of string without using any library function.\n\n```#include<stdio.h>\n#include<conio.h>\nint main()\n{\nchar str;\nint i, j, len, chk=0;\nprintf(\"Enter any String (word): \");\nscanf(\"%s\", str);\nlen = 0;\nwhile(str[len]!='\\0')\nlen++;\nfor(i=0, j=(len-1); i<=(len-1); i++, j--)\n{\nif(str[i] != str[j])\n{\nchk=1;\nbreak;\n}\n}\nif(chk==1)\nprintf(\"\\nIt's not a Palindrome String\");\nelse\nprintf(\"\\nIt's a Palindrome String\");\ngetch();\nreturn 0;\n}```\n\nThe code:\n\n```len=0;\nwhile(str[len]!='\\0')\nlen++;```\n\nis used to find length of string. For example, if user enters codes as input, then initially 0 gets initialized to len before going to the while loop. The '\\0' is known as null terminated character. At last of every string, a null terminated character gets automatically initialized. So the dry run of above code works like:\n\n• For first run, str[len] or str or c is not equal to '\\0'. Therefore len++ gets executed, so value of len gets incremented and becomes 1\n• For second run, str[len] or str or o is not equal to '\\0'. Therefore len++ gets executed, so value of len gets incremented and becomes 2\n• For second run, str[len] or str or d is not equal to '\\0'. 
Therefore len++ gets executed, so value of len gets incremented and becomes 3\n• For sixth run, str[len] or str or '\\0' is equal to '\\0'. Therefore the condition evaluates to be false, so the loop gets terminated, and we have a variable, len that holds the length value of string.\n\n## Check Palindrome String in C using Function\n\nThis is the last program to check whether given string is a palindrome or not using user-defined function.\n\n```#include<stdio.h>\n#include<conio.h>\nint checkPalindromeStr(char []);\nint main()\n{\nchar str;\nint chk;\nprintf(\"Enter any String (word): \");\nscanf(\"%s\", str);\nchk = checkPalindromeStr(str);\nif(chk==1)\nprintf(\"\\nIt's not a Palindrome String\");\nelse\nprintf(\"\\nIt's a Palindrome String\");\ngetch();\nreturn 0;\n}\nint checkPalindromeStr(char str[])\n{\nint i, j, len;\nlen = 0;\nwhile(str[len]!='\\0')\nlen++;\nfor(i=0, j=(len-1); i<=(len-1); i++, j--)\n{\nif(str[i] != str[j])\nreturn 1;\n}\nreturn 0;\n}```\n\nThis program will produce the same output as produced by first program given, earlier of this article. To learn more about function, refer to Function in C tutorial.\n\nC Online Test\n\n« C Tutorial C Examples »"
https://assignmenthelpweb.com/category/basic-economic-concepts-and-principles/
## Elasticity of Demand Curve

If the market demand curve is Q = 100 - p, what is the market price elasticity of demand? If the supply curve of individual firms is q = p and there are 50 identical firms in the market, draw the residual demand facing any one firm. What is the residual demand elasticity facing one firm at the competitive equilibrium?
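The numbers behind this exercise can be checked with a short script (a sketch; the variable names and the point-elasticity formula ε = (dQ/dp)·p/Q are the standard textbook setup, not part of the question itself):

```python
# Market demand: Q = 100 - p.  Each of n = 50 identical firms supplies q = p,
# so market supply is S(p) = 50p and the residual demand for one firm is
# Dr(p) = (100 - p) - 49p = 100 - 50p.

n = 50

# Competitive equilibrium: 100 - p = 50p  ->  p* = 100/51
p_star = 100 / (n + 1)
Q_star = 100 - p_star

# Market price elasticity of demand: (dQ/dp) * p / Q with dQ/dp = -1
market_elasticity = -1 * p_star / Q_star            # -1/50 = -0.02

# Residual elasticity: (dDr/dp) * p / q with dDr/dp = -50 and q = p at equilibrium
residual_elasticity = -n * p_star / (100 - n * p_star)   # -50

print(p_star, market_elasticity, residual_elasticity)
```

The contrast is the point of the exercise: market demand is very inelastic (-0.02), but the residual demand facing a single small firm is 50 times more elastic (-50).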
https://yiming.dev/blog/2017/12/18/as-the-tests-get-more-specific-the-code-gets-more-generic/
# As the tests get more specific, the code gets more generic

• Prime Factors Kata
• Test code and production code do not grow at the same rate (the test code grows faster)
1. Sometimes the production code stays the same as the test code grows
• You are done with the code, but you still write the tests because they are part of the specifications
2. Sometimes the production code shrinks as the test code grows
• Because the programmer moves a load of functionality out of the code and into the data
• Or comes up with some kind of more general algorithm that satisfies the tests without looking anything like them.

## FizzBuzz Kata

I started practicing Coding Katas [1] to sharpen my TDD/OOD skills several days ago.

When I was doing this first Kata, one of my colleagues (a non-programmer) was watching me, so I explained TDD to her.

She asked this question later: "But your tests cannot cover all the cases (all the integers), so how can you ensure your code is absolutely right?"

"As the tests get more specific, the code gets more generic" is a part of my answer. (As for other reasons, like programmer confidence, I may talk about them in another post.)

### Case #1

• Test

```
def test_sound_for_1_is_1
  assert_equal '1', FizzBuzz.for(1)
end
```

• Code

```
def self.for(_number)
  '1'
end
```

At this point, we only have one specification, so the code is pretty simple. (We are following TDD here, so this silly code is completely acceptable because it's green code.)

### Case #2

• Test

```
def test_sound_for_2_is_2
  assert_equal '2', FizzBuzz.for(2)
end
```

• Code

```
def self.for(number)
  if number == 2
    '2'
  else
    '1'
  end
end
```

We are able to see a pattern here (`number` -> `number.to_s`). But I chose to use `if` and wait for the next test.
Because the next test seems to be very different.

### Case #3

• Test

```
def test_sound_for_3_is_Fizz
  assert_equal 'Fizz', FizzBuzz.for(3)
end
```

• Code

```
def self.for(number)
  if number == 3
    'Fizz'
  elsif number == 2
    '2'
  else
    '1'
  end
end
```

The code starts to look ugly now, but fortunately it's still green. Let's move to our next test.

### Case #4

• Test

```
def test_sound_for_4_is_4
  assert_equal '4', FizzBuzz.for(4)
end
```

• Code

```
def self.for(number)
  if number == 3
    'Fizz'
  else
    number.to_s
  end
end
```

We got our third case for `number.to_s`, and at this point I decided to refactor it this way (because of the magic number `3`).

We can see here that the code gets more generic (it handles more cases), while the test gets more specific (it adds a new specification for input `4`).

### Case #5

• Test

```
def test_sound_for_5_is_Buzz
  assert_equal 'Buzz', FizzBuzz.for(5)
end
```

• Code

```
def self.for(number)
  if number == 3
    'Fizz'
  elsif number == 5
    'Buzz'
  else
    number.to_s
  end
end
```

The same thing goes for the `5, Buzz` pair as for the `3, Fizz` pair.
The code starts from being very specific (it only deals with `3` or `5`); as the tests get more specific (cases `6` or `10` are added), it gets more generic (it deals with any number that has factor `3` or `5`).

### Case #6

• Test

```
def test_sound_for_6_is_Fizz
  assert_equal 'Fizz', FizzBuzz.for(6)
end
```

• Code

```
def self.for(number)
  if number % 3 == 0
    'Fizz'
  elsif number == 5
    'Buzz'
  else
    number.to_s
  end
end
```

Again, the code gets more generic while the test gets more specific.

## Extracting logic to other classes

Later, when the logic for `FizzBuzz` got more and more complicated, I decided to push logic out of the `FizzBuzz` class and extract some new concepts in our app.

```
class FizzBuzz
  def initialize(sounds)
    @sounds = sounds
  end

  def for(number)
    sounds
      .detect { |sound| sound.convertable?(number) }
      .for(number)
  end

  private

  attr_reader :sounds
end
```

I'll explain more about this refactoring in another post. But the basic idea is letting sounds deal with different conversion rules (`Factor3Sound` for converting a number to `Fizz`, `StringSound` for converting a number to `number.to_s`, etc.)

Again, the `FizzBuzz` code gets more and more generic. It is now a `sound` rules chain: it delegates conversion rules to `sound` objects and just returns the result from the first `sound` that can handle the input `number`.

I'll stop our `FizzBuzz` journey here, as it has taken us so long, and I will talk about what I learned after playing with `FizzBuzz` for 3 days (Chain of Responsibility, an Elixir solution, etc.) in a future blog post.

## DRY is not that important for tests

We programmers often say that our code needs to be DRY (Don't Repeat Yourself). And I think we can also put "as the tests get more specific, the code gets more generic" this way:

When we dry up our code, we are actually making our code more generic/abstract.
This is good because it lets us handle as many cases in production as possible.

But our tests don't need to be as DRY as our production code. After all, a test case should be as specific as possible, because it is the specification for a case.

The most important thing to think about when we write a test is making it as clear as possible: I should be able to understand what the specification is about by looking at the test alone.

If we try to DRY our test cases as well (for example, by using `let` or `before` in RSpec), we may introduce too many Mystery Guests [2] and make these tests unreadable.

However, this doesn't mean we should not DRY our tests at all. It's still useful to extract common assertions/matchers [3] to make tests more readable. Our tests need to be DRY as well, but in a different way from the production code.

## A healthy Test-to-Code Ratio

With this principle in mind, I think the test-to-code ratio can be a pretty good guide for deciding when to refactor our code (alongside Sandi Metz's rules).

IMHO, a good test-to-code ratio for a Rails project is between 1:1 and 2:1, but the ratio definitely differs across languages/frameworks/test tools.

### Use the Test-to-Code Ratio to guide your TDD cycle

(I'll use 1:1 to 2:1 in the following explanation.)

• When the test-to-code ratio is lower than 1:1
1. Consider adding more test cases.
2. Consider whether the production code is too DRY.
• When the test-to-code ratio is higher than 2:1
1. Consider DRYing up the production code.
2. Consider whether the production code has too many responsibilities and needs to be split.

### TDD is a process guided by this thinking

As I wrote in Book Review: 99 Bottles of OOP, we can only achieve Shameless Green by following the TDD cycle and making our code a step more abstract (generic) in each cycle.

I think this thinking explains that methodology pretty well.
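The "sound rules chain" idea described above can be sketched outside Ruby as well. Here is a minimal Python translation of the same design; the class names `FactorSound` and `StringSound` are illustrative stand-ins, not the post's actual classes:

```python
# Chain-of-responsibility FizzBuzz: ask each rule in order whether it can
# handle the number, and return the answer from the first one that can.

class FactorSound:
    def __init__(self, factor, sound):
        self.factor, self.sound = factor, sound

    def convertable(self, number):
        return number % self.factor == 0

    def for_(self, number):
        return self.sound


class StringSound:
    def convertable(self, number):
        return True            # fallback rule: handles every number

    def for_(self, number):
        return str(number)


class FizzBuzz:
    def __init__(self, sounds):
        self.sounds = sounds

    def for_(self, number):
        # First sound in the chain that accepts the number wins.
        return next(s for s in self.sounds if s.convertable(number)).for_(number)


fizzbuzz = FizzBuzz([FactorSound(3, "Fizz"), FactorSound(5, "Buzz"), StringSound()])
print([fizzbuzz.for_(n) for n in (1, 2, 3, 4, 5, 6)])
# -> ['1', '2', 'Fizz', '4', 'Buzz', 'Fizz']
```

Note how the chain reproduces every test case from the kata above while `FizzBuzz` itself knows nothing about 3, 5, "Fizz", or "Buzz": adding a new rule means appending a new sound object, not editing the conditional.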
https://faculty.kfupm.edu.sa/MATH/assane/Mes%20sites%20Web/monsiteweb/Mysite/Mysite/index.html
Assane Lo

Mathematics Department, The University of Arizona.

Email: assane@math.arizona.edu
Phone: (520) 621-8322
Office: MTL 120 N

My scholarly interests range from Analysis, Geometric Analysis, Partial Differential Equations, and Probability Theory to applications in Statistical Physics and Euclidean Field Theory. My current research in mathematical physics involves the study of direct methods for integrals and operators of the type that appear naturally in equilibrium statistical mechanics and euclidean field theory.

### Background

• Bachelor of Science in mathematics with a minor in physics from the University of Dakar, Senegal, in 1999.
• Ecole Normale Superieure of Dakar, 1999-2000.
• Ph.D. in mathematics, University of Arizona (expected May 2007).

### Courses Taught

• Vector Calculus (Math 223-02) Fall 06.
• Calculus II (Math 129) Spring 06, Fall 05.
• Calculus I (Math 124) Spring 05.
• Elements of Calculus (Math 113) Fall 04, Spring 04.
• College Algebra (Math 110) Summer 05, Summer 04, Fall 03, Summer 03, Spring 03, Fall 02.

The University of Arizona (old picture)
https://number.academy/14600
# Number 14600

Number 14,600 spelled out 🔊, written in words: fourteen thousand, six hundred. The ordinal number 14600th is said 🔊 and written: fourteen thousand, six hundredth. The meaning of the number 14600 in maths: Is it prime? Factorization and prime factors tree. The square root and cube root of 14600. What 14600 is in computer science, numerology, codes and images, and how to write and name it in other languages. Other interesting facts related to 14600.

## What is 14,600 in other units

The decimal (Arabic) number 14600 converted to a Roman numeral is (X)(IV)DC. Roman and decimal number conversions.

#### Weight conversion

14600 kilograms (kg) = 32187.2 pounds (lbs)
14600 pounds (lbs) = 6622.5 kilograms (kg)

#### Length conversion

14600 kilometers (km) equals 9073 miles (mi).
14600 miles (mi) equals 23497 kilometers (km).
14600 meters (m) equals 47900 feet (ft).
14600 feet (ft) equals 4451 meters (m).
14600 centimeters (cm) equals 5748.0 inches (in).
14600 inches (in) equals 37084 centimeters (cm).

#### Temperature conversion

14600° Fahrenheit (°F) equals 8093.3° Celsius (°C)
14600° Celsius (°C) equals 26312° Fahrenheit (°F)

#### Time conversion

(hours, minutes, seconds, days, weeks)
14600 seconds equals 4 hours, 3 minutes, 20 seconds
14600 minutes equals 1 week, 3 days, 3 hours, 20 minutes

### Zip codes 14600

• Zip code 14600 Campiña, Andalucia, Córdoba, Spain
• Zip code 14600 Casillas De Velasco, Andalucia, Córdoba, Spain
• Zip code 14600 Charco Del Novillo, Andalucia, Córdoba, Spain

### Codes and images of the number 14600

Number 14600 morse code: .---- ....- -.... ----- -----
## Mathematics of no. 14600

### Multiplications

#### Multiplication table of 14600

14600 multiplied by two equals 29200 (14600 x 2 = 29200).
14600 multiplied by three equals 43800 (14600 x 3 = 43800).
14600 multiplied by four equals 58400 (14600 x 4 = 58400).
14600 multiplied by five equals 73000 (14600 x 5 = 73000).
14600 multiplied by six equals 87600 (14600 x 6 = 87600).
14600 multiplied by seven equals 102200 (14600 x 7 = 102200).
14600 multiplied by eight equals 116800 (14600 x 8 = 116800).
14600 multiplied by nine equals 131400 (14600 x 9 = 131400).

### Fractions: decimal fraction and common fraction

#### Fraction table of 14600

Half of 14600 is 7300 (14600 / 2 = 7300).
One third of 14600 is 4866.6667 (14600 / 3 = 4866.6667 = 4866 2/3).
One quarter of 14600 is 3650 (14600 / 4 = 3650).
One fifth of 14600 is 2920 (14600 / 5 = 2920).
One sixth of 14600 is 2433.3333 (14600 / 6 = 2433.3333 = 2433 1/3).
One seventh of 14600 is 2085.7143 (14600 / 7 = 2085.7143 = 2085 5/7).
One eighth of 14600 is 1825 (14600 / 8 = 1825).
One ninth of 14600 is 1622.2222 (14600 / 9 = 1622.2222 = 1622 2/9).

#### Is it prime?

The number 14600 is not a prime number. The closest prime numbers are 14593 and 14621.

#### Factorization and factors (divisors)

The prime factorization of 14600 is 2 * 2 * 2 * 5 * 5 * 73.
The factors of 14600 are 1, 2, 4, 5, 8, 10, 20, 25, 40, 50, 73, 100, 146, 200, 292, 365, 584, 730, 1460, 1825, 2920, 3650, 7300 and 14600.
Total factors: 24.
Sum of factors: 34410 (19810 excluding 14600 itself).

#### Powers

The second power, 14600², is 213,160,000.
The third power, 14600³, is 3,112,136,000,000.

#### Roots

The square root √14600 is 120.83046.
The cube root ∛14600 is 24.440924.

#### Logarithms

The natural logarithm of No. ln 14600 = loge 14600 = 9.588777.
The logarithm to base 10 of No.
log10 14600 = 4.164353.
The Napierian logarithm of No. log1/e 14600 = -9.588777.

### Trigonometric functions

The cosine of 14600 is -0.524270.
The sine of 14600 is -0.851552.
The tangent of 14600 is 1.624263.

### Properties of the number 14600

Is it a Friedman number: No
Is it a Fibonacci number: No
Is it a Bell number: No
Is it a palindromic number: No
Is it a pentagonal number: No
Is it a perfect number: No

## Number 14600 in Computer Science

| Code type | Code value |
| --- | --- |
| Number of bytes | 14.3 KB |
| Unix time | Unix time 14600 is equal to Thursday, Jan. 1, 1970, 4:03:20 a.m. GMT |
| IPv4, IPv6 | Internet address in dotted format: v4 0.0.57.8, v6 ::3908 |
| Binary | 11100100001000 |
| Ternary | 202000202 |
| Octal | 34410 |
| Hexadecimal | 3908 (0x3908 hex) |
| BASE64 | MTQ2MDA= |
| MD5 | 7ab8464df372307c1bcce9b2a043415a |
| SHA1 | 53e6746c4c5cf9dddf180e1b370e4da47fc960c1 |
| SHA224 | 6f0f98a003f5310beae017490af1f46a1cc0940205f54c7256d31ae1 |

## Numerology 14600

### Character frequency in the number 14600

Character frequency for numerology:

| Character | Frequency |
| --- | --- |
| 1 | 1 |
| 4 | 1 |
| 6 | 1 |
| 0 | 2 |

### Classical numerology

According to classical numerology, to know what each number means you have to reduce it to a single figure. For the number 14600, the digits are added: 1+4+6+0+0 = 11, then 1+1 = 2, and the meaning of the number 2 is sought.

## Interesting facts about the number 14600

### Asteroids

• (14600) 1998 SG73 is asteroid number 14600. It was discovered by E. W.
Elst from La Silla Observatory on 9/21/1998.

### Distances between cities

• There is a 9,073 miles (14,600 km) direct distance between Caracas (Venezuela) and Daegu (South Korea).
• There is a 9,073 miles (14,600 km) direct distance between Cochabamba (Bolivia) and Omsk (Russia).
• There is a 9,073 miles (14,600 km) direct distance between Dhaka (Bangladesh) and São Luís (Brazil).
• There is a 9,073 miles (14,600 km) direct distance between Hiroshima-shi (Japan) and Valencia (Venezuela).
• There is a 9,073 miles (14,600 km) direct distance between Kuala Lumpur (Malaysia) and Ottawa (Ontario, Canada).
• There is a 9,073 miles (14,600 km) direct distance between La Paz (Bolivia) and Perth (Australia).

## Number 14,600 in other languages

How to say or write the number fourteen thousand, six hundred in Spanish, German, French and other languages (note the character each language uses as the thousands separator):

Spanish: 🔊 (número 14.600) catorce mil seiscientos
German: 🔊 (Anzahl 14.600) vierzehntausendsechshundert
French: 🔊 (nombre 14 600) quatorze mille six cents
Portuguese: 🔊 (número 14 600) catorze mil, seiscentos
Chinese: 🔊 (数 14 600) 一万四千六百
Arabian: 🔊 (عدد 14,600) أربعة عشر ألفاً و ستمائة
Czech: 🔊 (číslo 14 600) čtrnáct tisíc šestset
Korean: 🔊 (번호 14,600) 만 사천육백
Danish: 🔊 (nummer 14 600) fjortentusinde og sekshundrede
Dutch: 🔊 (nummer 14 600) veertienduizendzeshonderd
Japanese: 🔊 (数 14,600) 一万四千六百
Indonesian: 🔊 (jumlah 14.600) empat belas ribu enam ratus
Italian: 🔊 (numero 14 600) quattordicimilaseicento
Norwegian: 🔊 (nummer 14 600) fjorten tusen, seks hundre
Polish: 🔊 (liczba 14 600) czternaście tysięcy sześćset
Russian: 🔊 (номер 14 600) четырнадцать тысяч шестьсот
Turkish: 🔊 (numara 14,600) ondörtbinaltıyüz
Thai: 🔊 (จำนวน 14 600) หนึ่งหมื่นสี่พันหกร้อย
Ukrainian: 🔊 (номер 14 600) чотирнадцять тисяч шiстсот
Vietnamese: 🔊 (con số 14.600) mười bốn nghìn sáu trăm

## Comment

If you know something interesting
about the number 14600 or any natural number (positive integer), please write to us here or on Facebook.
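The arithmetic facts on this page (factorization, divisor counts, powers, roots, logarithms, and base conversions) are easy to verify with a few lines of code, for example:

```python
import math

n = 14600

# Prime factorization by trial division
factors = []
m, d = n, 2
while d * d <= m:
    while m % d == 0:
        factors.append(d)
        m //= d
    d += 1
if m > 1:
    factors.append(m)
print(factors)                        # [2, 2, 2, 5, 5, 73]

# All divisors, their count, and their sum
divisors = [k for k in range(1, n + 1) if n % k == 0]
print(len(divisors), sum(divisors))   # 24 34410

print(n ** 2, n ** 3)                 # 213160000 3112136000000
print(round(math.sqrt(n), 5))         # 120.83046
print(round(math.log(n), 6), round(math.log10(n), 6))  # 9.588777 4.164353
print(format(n, "b"), format(n, "x")) # 11100100001000 3908
```

Trial division up to √n is enough here since any remaining cofactor greater than 1 after the loop is itself prime.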
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Supplemental_Modules_(Physical_and_Theoretical_Chemistry)/Chemical_Bonding/Fundamentals_of_Chemical_Bonding/Chemical_Bonds/Enthalpies_of_Reactions
# Enthalpies of Reactions

Skills to Develop

• Calculate the enthalpies of reactions from bond energies.
• Calculate the enthalpies of reactions from enthalpies of formation.
• Draw energy level diagrams and use them to calculate enthalpies of reactions.

## Enthalpies of Reactions

The enthalpy of a reaction (the energy change of a reaction), $$\Delta H$$, is the amount of energy or heat absorbed in a reaction. If energy is required, $$\Delta H$$ is positive; if energy is released, $$\Delta H$$ is negative.

_________ Products
    ^
    |
    | ΔH, positive for an
    | endothermic reaction
    |
____|____ Reactants

The enthalpy can be determined by experiment, but estimates can easily be made if bond energies or standard enthalpies of formation for the reactants and products are available.

### Using Formulas to Calculate $$\Delta H$$

Because of how the various energy-related terms are defined, the formulas for evaluating enthalpies can be confusing. For example, the formula for the enthalpy of a reaction depends on whether bond energies or enthalpies of formation are available.

When standard enthalpies of formation, $$H_{\mathrm f}^\circ$$, for all products and reactants are available, we have

$$H_{\text{reaction}} = \text{SUM}(H_{\text{products}}) - \text{SUM}(H_{\text{reactants}})$$

or, if you prefer summation symbols,

$$H_{\text{reaction}} = \sum H_{\text{products}} - \sum H_{\text{reactants}}$$

For simplicity of formulation we use H to represent $$H_{\mathrm f}^\circ$$ in the above formulas.

Because bond energies are defined as the energies required to break bonds, they are usually listed as positive values, even though the same amounts of energy are released when the chemical bonds form from the respective atoms.
Thus, using the bond energies (BE) as they are given or defined, the following formulas apply:

$$H_{\text{reaction}} = \text{SUM}(BE_{\text{reactants}}) - \text{SUM}(BE_{\text{products}})$$

or, if you prefer summation symbols,

$$H_{\text{reaction}} = \sum BE_{\text{reactants}} - \sum BE_{\text{products}}$$

These formulas for the enthalpy (heat) of a reaction can be very confusing, and you may easily get an incorrect sign for the value. Thus, memorizing formulas is discouraged!

If, instead, you use a diagram to help visualize the calculation, or write the chemical reaction equations alongside their thermodynamic values, you will be able to avoid the confusion. In both cases you are applying the principle of conservation of energy.

### Calculate the Enthalpy of a Reaction from Bond Energies

By the principle of conservation of energy, the total energy before and after the reaction must not change. Thus, the energy released or absorbed in a reaction must come from the difference between the bond energies of the products and of the reactants.

Example 1

The bond energies (kJ/mol) for $$\ce{H2}$$, $$\ce{F2}$$, and $$\ce{HF}$$ are 436, 158 and 568 respectively; calculate the enthalpy (energy) of the reaction

$$\ce{H2(g) + F2(g) \rightarrow 2 HF}$$

Solution
Based on the bond energies given, we have

\begin{align} &\ce{H2 \rightarrow 2H} && D = 436\: \text{kJ/mol}\\ &\ce{F2 \rightarrow 2F} && D = 158\: \text{kJ/mol}\\ &\ce{2H + 2F \rightarrow 2HF} && H = -568\times2\: \text{kJ/mol} \end{align}

$$\ce{H2(g) + F2(g) \rightarrow 2 HF} \qquad \Delta H = -542\: \text{kJ/equation}$$

Note that D represents the bond dissociation energy, and H the enthalpy of the reaction as written.
We use $$\\Delta H$$ in the last equation to denote enthalpy of change of the overall reaction.\n\nDiscussion\nSince bond energies are given, we use the monoatomic gases as the reference level in this calculation. The energy level diagram shown below illustrates the principle of conservation of energy, and you are expected to have the skill to draw such a diagram.\n\n------2 H(g) + 2 F(g)-------\n \n| ||436+158 kJ\n| ||\n|+2*568 kJ ---H2 + F2---\n| |\n| | $$\\Delta H$$ = -542 kJ/equation\n| ¯\n--------2 HF(g)------------\n\n\nThis diagram is very similar to those of the Born-Haber cycle used to evaluate lattice energy.\n\n### Calculate Enthalpy of Reaction from Enthalpy of Formation\n\nA similar cycle can be devised to calculate energy of a reaction when the standard enthalpies of formation are given. We illustrate this cycle by an an example.\n\nExample 2\n\nStandard enthalpies of formation are: $$\\ce{C2H5OH_{\\large{(l)}}}$$ -228, $$\\ce{CO2}$$ -394, and $$\\ce{H2O_{\\large{(l)}}}$$ -286 kJ/mol. Calculate the enthalpy of the reaction,\n\n$$\\ce{C2H5OH + 3 O2 \\rightarrow 2 CO2 + 3 H2O}$$\n\nSolution\nFrom the definition of the enthalpy of formation, we have the following equations and the energy changes of reactions.\n\n\\begin{align} &\\mathrm{C_2H_5OH_{\\large{(l)}} \\rightarrow 2 C_{\\large{(graphite)}} + 3 H_{2\\large{(l)}} + 0.5 O_{2\\large{(g)}}} &&H = \\mathrm{228\\: kJ/mol}\\\\ &\\mathrm{2 C_{\\large{(graphite)}} + 2 O_{2\\large{(g)}} \\rightarrow CO_{2\\large{(g)}}} &&H = \\mathrm{-394\\times2\\: kJ/mol}\\\\ &\\mathrm{3 H_{2\\large{(g)}} + 1.5 O_{2\\large{(g)}} \\rightarrow 3 H_2O_{\\large{(l)}}} &&H = \\mathrm{-286\\times3\\: kJ/mol} \\end{align}\n\n$$\\ce{C2H5OH_{\\large{(l)}} + 3 O_{2\\large{(g)}} \\rightarrow 2 CO_{2\\large{(g)}} + 3 H2O_{\\large{(l)}}}$$ $$\\mathrm{D\\mathit{H} = -1418\\: kJ/mol}$$\n\nDiscussion\nSince the standard enthalpy of formation uses the elements as the standard, we put the elements on the top as a common level of 0 energy. 
The enthalpies of formation are negative, and we have the following diagram.

Thus, the enthalpy of reaction is the difference between the level of C2H5OH + 3 O2 and that of 2 CO2 + 3 H2O.

---2 C(graphite) + 3 H2 + 3.5 O2(g)---
 |                      |
 |                      | -228 kJ
 |                      v
 |               ---C2H5OH + 3 O2---
 | -394*2 - 286*3       |
 |                      | ΔH = -394*2 - 286*3 - (-228)
 |                      |    = -1418 kJ
 v                      v
-------------2 CO2 + 3 H2O------------

A Footnote

By definition, the enthalpies of formation here are negative values, whereas the bond energies are given as positive values; thus calculations using these two types of data carry different signs.

You can use the diagrammatic method to solve these types of problems, as in the discussions above. The diagrams indicate clearly what the sign of the result should be: when the reaction goes from a higher level to a lower level, the enthalpy of reaction is negative (downward arrow). If you reverse the direction of the reaction, you also change the sign of the enthalpy of the reaction.

## Confidence Building Questions

1. Calculate the heat of reaction for

$$\ce{2 H2 + O2 \rightarrow 2 H2O}$$

Bond energies: $$\ce{H2}$$ 436 kJ/mol, $$\ce{O2}$$ 498, $$\ce{HO}$$ 463 kJ/mol

Hint: $$-4\times463 + 498 + 2\times436$$

Discussion - Note that in this problem you are asked to calculate the heat of reaction, while the heat of formation (per mole of water) is -241 kJ/mol.

2. Calculate the heat of reaction for

$$\ce{H2 + Cl2 \rightarrow 2 HCl}$$

Bond energies: $$\ce{H2}$$ 436 kJ/mol, $$\ce{O2}$$ 498, $$\ce{HO}$$ 463, $$\ce{Cl2}$$ 243, $$\ce{HCl}$$ 432 kJ/mol

Hint: $$436 + 243 - 2\times432 =\: ?$$

Discussion - If you are given the heat of the reaction and the $$\ce{H-H}$$ bond energy, can you calculate the bond energy for $$\ce{Cl-Cl}$$?

3.
Calculate the enthalpy of formation for $$\ce{NH3}$$.
Bond energies: $$\ce{H2}$$ 436 kJ/mol, $$\ce{N2}$$ 945, $$\ce{H-N}$$ 391 kJ/mol

Hint: $$\dfrac{3\times436}{2} + \dfrac{945}{2} - 3\times391 =\: ?$$

Another method -
Writing the enthalpies of formation below the formulas in the equation, we have:

\begin{alignat}{2} &\dfrac{3}{2} \ce{H2} &&+ \dfrac{1}{2} \ce{N2} &&\rightarrow \ce{NH3}\\ &\dfrac{3\times436}{2} &&+ \dfrac{945}{2} &&= 3\times391 + H_{\mathrm f} \end{alignat}

From this equation, we have:

$$H_{\mathrm f} = \dfrac{3\times436}{2} + \dfrac{945}{2} - 3\times391 = -46.5\: \text{kJ/mol}$$

A handbook gives the enthalpy of formation of $$\ce{NH3}$$ as -46.1 kJ/mol. As an exercise, use this value to calculate the triple bond energy of $$\ce{N2}$$.

4. Calculate the enthalpy of the reaction

$$\ce{CH4 + 2 O2 \rightarrow CO2 + 2 H2O}$$

from the enthalpies of formation: $$\ce{CH4}$$ -75 kJ/mol, $$\ce{CO2}$$ -394, and $$\ce{H2O(l)}$$ -286 kJ/mol.

Hint: -891 kJ

Discussion - The value calculated is the enthalpy of combustion of methane.
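The two bookkeeping rules on this page can also be checked numerically. This is just a sketch of the arithmetic (the dictionary keys are my own labels, and the values are the kJ/mol figures quoted in the examples above):

```python
# Rule 1 - from bond energies:
#   dH = sum(BE of bonds broken in reactants) - sum(BE of bonds formed in products)
bond_energy = {"H-H": 436, "F-F": 158, "H-F": 568}   # kJ/mol, from Example 1
dH_HF = (bond_energy["H-H"] + bond_energy["F-F"]) - 2 * bond_energy["H-F"]
print(dH_HF)       # -542  (H2 + F2 -> 2 HF)

# Rule 2 - from standard enthalpies of formation:
#   dH = sum(Hf of products) - sum(Hf of reactants), elements in standard state = 0
Hf = {"C2H5OH(l)": -228, "O2(g)": 0, "CO2(g)": -394, "H2O(l)": -286}  # Example 2
dH_comb = (2 * Hf["CO2(g)"] + 3 * Hf["H2O(l)"]) - (Hf["C2H5OH(l)"] + 3 * Hf["O2(g)"])
print(dH_comb)     # -1418  (C2H5OH + 3 O2 -> 2 CO2 + 3 H2O)
```

Note the opposite ordering of the two rules (reactants minus products for bond energies, products minus reactants for enthalpies of formation), which is exactly the sign trap the footnote above warns about.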
https://kr.mathworks.com/matlabcentral/profile/authors/5347541
Community Profile

# Alexandra

Active since 2014

Professional Interests: Risk management, Corporate Finance, M&A
View badges

#### Content Feed

Sort by

Question

uniform distribution between a and b with intervals of 0.005
Hi, I am having trouble using R = unidrnd(N) to create n random numbers between a and b (imagine a=0 and b=0.2) where the nu...

more than 5 years ago | Answers: 3 | 0

### 3

answers

Question

Running the files in a sequence, saving the results, running the files again, saving the results with a different name
Hi, I use a .m file with just the diary function in it to run several different files sequentially in the same folder. diary lo...

almost 6 years ago | Answers: 1 | 0

### 1

answer

Question

How to run a mat file only after the others have run
Hi, I have 4/5 MATLAB windows open and I run several independent mat files at the same time. When they finish I have another...

almost 6 years ago | Answers: 1 | 0

### 1

answer

Question

Alternative code that runs much faster?
Hi, We built this code: A = zeros(n,y); A(:,1) = -3; x1 = zeros(n,y); x2 = zeros(n,y); x3 = zeros(n,y); ...

almost 6 years ago | Answers: 0 | 0

### 0

answers

Answered
How to sum every 29 data and make some condition for them?
I would try something like this: for k=1:(numel(A)-30) B(k) = sum(A(k:k+29)); if B(k) <= 0.5 C(k) ...

about 6 years ago | 0

Question

Generating random numbers from 0 - 1 with a limit on the sum
Hi, I am trying to generate values from 0 to 1 for several variables (x1 x2 x3). These results must assure that x1*a1 + x2...

about 6 years ago | Answers: 3 | 0

### 3

answers

Question

Model running for days
Hi, I know this is a very open question but the Monte Carlo simulation I built is getting so massive that the model is running f...

about 6 years ago | Answers: 1 | 0

### 1

answer

Question

Computing IRR of a project after x years depending on price
Hi, I am stuck trying to write a code. I have a project of n years. The cash flow each year depends on the Price. For each y...

about 6 years ago | Answers: 0 | 0

### 0

answers

Question

Using for and fzero to compute IRR after x years
Hi, here's the code: for a=1:y for k=1:n f = A(k,a)/(1+t(k,a))^(a-1); B(k,a) = B(k,a) ...

about 6 years ago | Answers: 1 | 0

### 1

answer

Question

3D matrix multiplications
Hi, I have an A (n x a) matrix and a B (1 x p) matrix. I'm trying to multiply A by B, creating a C (n x a x p) matrix (3D), where t...

more than 6 years ago | Answers: 1 | 0

### 1

answer

Question

generate random numbers subject to
Hi, I am trying to generate a set of random numbers X between 0 and 1 subject to the condition that an equation depending on X can be negative....

more than 6 years ago | Answers: 1 | 0

### 1

answer

Question

NaN in database during simulation
I imported a series of returns into the workspace. Because there was a longer series in the same database, MATLAB imported the r...

more than 6 years ago | Answers: 1 | 0

### 1

answer

Question

finding the maximum value in table A for values in table B below a benchmark
So, here's the situation: I have three tables: A = [50 100 30 4]; B = [100 150 90 50]; C = [50 100 150]; Values in C are b...

more than 6 years ago | Answers: 1 | 0

### 1

answer

Question

Warning: Inverse CDF calculation did not converge for p
Hi, I run a simulation using ksdensity and copulas. In the end I get the following warning: Warning: Inverse CDF calculat...

more than 6 years ago | Answers: 1 | 0

### 1

answer

Question

Out of memory error with 8GB and 64bit
Hi, I am working with lots of memory and I am getting an out of memory error. >> memory Maximum possible array: 1...

almost 7 years ago | Answers: 1 | 0

### 1

answer

Question

random 100k rows of kernel numbers, each row has a Poisson number of random numbers
Hi, I am trying to do something and I need help. So, this is for insurance analysis. I have x number of accidents that may...

almost 7 years ago | Answers: 0 | 0

### 0

answers

Question

Error using copula fit: Rho has become rank-deficient
Hi, I wanted to estimate kernel densities for 20 variables, correlated by t copulas. I had this formula: [Rho,nu] = copulafit('t'...

almost 7 years ago | Answers: 1 | 0

### 1

answer

Question

ksdensity default bandwidth value
Hi, Does anyone know how MATLAB computes the default bandwidth in the function ksdensity? Thank you very much,

about 7 years ago | Answers: 3 | 0

### 3

answers

Question

Matrix multiplication of a column and a vector
Hi, I am trying to solve this problem: A is 1 column of 100k results; B is a vector of 10 weights; I need a matrix of 10...

about 7 years ago | Answers: 2 | 0

### 2

answers

Question

Loop on changing databases
Hi, We run a code where part of it depends on variables we import from Excel. We want to run a loop on the code for it to ...

about 7 years ago | Answers: 1 | 0

### 1

answer

Question

Several automatic Monte Carlo simulations linked to Excel models that change their assumptions for each simulation
Hi, I don't even know if this is possible. We are running a Monte Carlo simulation on a model built in MATLAB. In the begi...

about 7 years ago | Answers: 0 | 0

### 0

answers

Question

Can't apply an IF function with a 100000x1 matrix
Hi! I built this code: if A1 <0 D = 0.8; elseif A2 <0 D = 0.7; elseif A3 <0 D = 0.6; elseif ...

about 7 years ago | Answers: 2 | 0

### 2

answers

Question

Efficient Frontier code error
Hi, I am running the following code I found: % portfolio0_efficient_frontier % Matlab program to plot efficient frontier and...

more than 7 years ago | Answers: 1 | 0

### 1

answer

Question

A log returns distribution with median zero
Hi, I have a time series of monthly log returns, from which I subtracted the median, so that they have zero median. I did this ...

more than 7 years ago | Answers: 0 | 0

### 0

answers

Question

random normal (0,1) correlated by copulas
Hi, I created a simulation of random variables correlated by copulas in MATLAB, using the ksdensity function. Now I am trying...

more than 7 years ago | Answers: 1 | 0

### 1

answer

Question

Simulating dependent normally distributed variables using copulas
I created a model that simulates variables with kernel distributions connected by copula functions. (X and Y are series of ln r...

more than 7 years ago | Answers: 1 | 0

### 1

answer

Question

Probability of being under a target
Hi, I have a result variable X that is defined by an equation of random variables. We used prctile to see the results we ...

almost 8 years ago | Answers: 1 | 0

### 1

answer

Question

Error with mvnrnd function?
Hi! when I run the following code: n = 1000; rho = .7; Z = mvnrnd([0 0], [1 rho; rho 1], n); U = normcdf(Z); X = [gaminv...

almost 8 years ago | Answers: 1 | 0

answer
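Several of the questions above concern sampling correlated variables through a Gaussian copula (the `mvnrnd`/`normcdf`/`gaminv` pattern in the last snippet). A stdlib-only Python sketch of the same idea follows; the exponential margins and the seed are illustrative assumptions, not the asker's code:

```python
# Gaussian-copula sampling: draw correlated standard normals, map them to
# uniforms with the normal CDF, then push the uniforms through any inverse
# CDFs (here, exponential margins).
import math
import random
from statistics import NormalDist

rng = random.Random(0)
rho = 0.7
nd = NormalDist()

def sample_pair():
    z1 = rng.gauss(0, 1)
    z2 = rho * z1 + math.sqrt(1 - rho ** 2) * rng.gauss(0, 1)  # corr(z1, z2) = rho
    u1, u2 = nd.cdf(z1), nd.cdf(z2)      # uniforms carrying the dependence
    return -math.log(1 - u1), -math.log(1 - u2)  # exponential inverse CDFs

pairs = [sample_pair() for _ in range(1000)]
```

Because the transforms are monotone, the rank correlation of the normals survives into the final margins, which is exactly what the MATLAB pipeline above relies on.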
https://www.physicsoverflow.org/25607/why-the-bosonic-part-superconformal-group-%24su-%24so-times-_r%24
# Why is the bosonic part of the superconformal group $SU(2,2|1)$ equal to $SO(4,2) \times U(1)_R$?

+ 5 like - 0 dislike
569 views

Why, in a $d=4$ $\mathcal{N}=1$ SCFT, is the bosonic part of the superconformal group $SU(2,2|1)$ equal to $SO(4,2) \times U(1)_R$?

More generally, how can I determine such a subgroup in other theories? Is there some specific way to think about how to find it? Say in a $d=4$ $\mathcal{N}=2$ theory. I know this is known; I want to know how one finds it, though.

This post imported from StackExchange Physics at 2014-12-25 23:28 (UTC), posted by SE-user Marion

edited Jan 2, 2015

+ 4 like - 0 dislike

Firstly, the bosonic part of the superconformal algebra of this theory is $SO(4,2)\times U(1)_R$, not $SO(4,1)\times U(1)_R$; I have corrected this mistake, which I guess was a typo. Now, let us consider the conformal algebra generators.

1. The momenta $P^{\mu}$, which generate spacetime translations.
2. The angular momenta $J^{\mu \nu}$, which generate spacetime rotations.
3. The dilatation operator $D$, which generates uniform rescalings of coordinates and fields.
4. The special conformal transformations $K^{\mu}$.

There are many places where you can check the algebra they generate, namely the conformal algebra $\mathfrak{so}(4,2)$. How can this be true? You can verify it by introducing indices $a,b=-1,0,1,2,3,4$ and defining the antisymmetric operators $J^{ab}=-J^{ba}$ as follows: $J^{ab}= J^{\mu \nu}$ for $a,b=0,1,2,3$; $J^{4\mu}=-J^{\mu 4} = \frac{P^{\mu} + K^{\mu} }{2}$; $J^{-1\,\mu}=-J^{\mu\,-1} = \frac{P^{\mu} - K^{\mu} }{2}$; and $J^{4\,-1}=-J^{-1\,4} = D$. These operators then obey the Lorentz algebra (up to some signs that might be wrong in my notes and that I will not try to verify now)

$$[J^{ab}, J^{cd}] = i( g^{bc}J^{ad} - g^{ac}J^{bd} - g^{bd}J^{ac} + g^{ad}J^{bc} )$$

if and only if $g^{-1-1}=g^{00}=1$, $g^{11}=g^{22}=g^{33}=-1$, and $g^{ab}=0$ for $a\neq b$. In other words, the operators $J^{ab}$ generate the Lorentz algebra of a spacetime with two time dimensions and four space dimensions, hence $SO(4,2)$. Now, note that this is the symmetry group of $AdS_5$ embedded into $2+4$ dimensions as the hypersurface $g_{ab}X^aX^b=R^2$, where $R$ is the $AdS_5$ radius. If you work out the isometries of $AdS_5$ via the very lengthy computation of Killing vectors etc., you will find $SO(4,2)$, and this should give you the information you want via AdS/CFT. More generally, $AdS_{D+1}$ has the symmetry group $SO(p+1,q+1)$ for $D=p+q$, which matches the conformal group of $\mathbb{R}^{p,q}$. The $U(1)_R$ part is easier to understand. Holographically, it is what survives after the Killing spinor equations come into action. It is the $R$-symmetry group for $\mathcal{N}=1$.

answered Jan 2, 2015 by conformal_gk (3,625 points)
edited Jan 3, 2015

Hi conformal_gk, potentially VERY dumb question:

Why are there no SUSY generators (charges) in your list of generators of the superconformal algebra? Looking at the list, it seems to me these are just the generators of an "ordinary" conformal group?

Anyway, many thanks for giving so many great answers here recently, and happy new year to you and everybody ;-)

@Dilaton I have written down only the conformal algebra. The SUSY generators, together with the $U(1)_R$ generator, supply the rest of the superconformal algebra. I had written "superconformal" in my main text but I have corrected it.
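The generator count behind the $SO(4,2)$ identification above can be checked in a couple of lines (a sanity check only, not a proof; the counting of conformal generators is standard):

```python
# The d = 4 conformal algebra {P^mu, J^munu, D, K^mu} has exactly as many
# generators as so(4,2), consistent with the J^{ab} repackaging in the answer.
d = 4
n_conformal = d + d * (d - 1) // 2 + 1 + d   # P^mu + J^munu + D + K^mu
n_so = (d + 2) * (d + 1) // 2                # dim so(n) = n(n-1)/2 with n = d + 2
print(n_conformal, n_so)  # 15 15
```

The same count generalizes: the conformal algebra of $\mathbb{R}^{p,q}$ with $d=p+q$ always matches $\mathfrak{so}(p+1,q+1)$ in dimension.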
http://parabix.costar.sfu.ca/changeset/1044/docs
# Changeset 1044 for docs

Timestamp: Mar 25, 2011, 9:06:20 PM
Message: Done
Location: docs/PACT2011
Files: 3 edited

### Revised text (r1044, against r1043)

In order to represent the byte-oriented character data as parallel bitstreams, the source data is first loaded in sequential order and converted into its transposed representation through a series of packs, shifts, and bitwise operations. Using the SIMD capabilities of current commodity processors, this transposition of source data to bitstreams incurs an amortized overhead of about 1 CPU cycle per byte for transposition \cite{CameronHerdyLin2008}. When parsing, we need to consider multiple properties of characters at different stages during the process. Using the basis bitstreams, it is possible to combine them using bitwise logic in order to compute character-class bitstreams; that is, streams that identify the positions at which characters belonging to a specific character class occur. For example, the $j$th character is an open angle bracket `<' if and only if the $j$th bit of $B_2, B_3, B_4, B_5$ is 1 and the $j$th bit of $B_0, B_1, B_6, B_7$ is 0. Once these character-class bitstreams are created, a {\em bit-scan} operation, which is a 1-cycle intrinsic function on commodity processors, can be used for sequential markup scanning and data validation operations. A common operation in all XML parsers is identifying the start tags (`<') and their accompanying end tags (either ``/>'' or ``>'', depending on whether the element tag is an empty element tag or not).

In Figure \ref{fig:Parabix1StarttagExample}, the first marker stream $M_0$ is created and the parser begins scanning the source data for an open angle bracket `<', starting at position 1. Since the source data begins with `<', $M_0$ is assigned a cursor position of 1. The $advance$ operation then shifts $M_0$'s cursor position by 1, resulting in the creation of a new marker stream, $M_1$, with the cursor position at 2. The following bit-scan operation takes the cursor position from $M_1$ and sequentially scans every position until it locates either a `/' or a `>'. It finds a `>' at position 4 and returns that as the new cursor position for $M_2$. Calculating $M_3$ advances the cursor again, and the bit-scan used to create $M_4$ locates the new opening angle bracket. This process continues until every opening and closing angle bracket is located within the basis stream, resulting in a process that requires 3 logical operations to find each individual start- and end-tag pair. Unlike traditional parsers, these sequential operations are accelerated significantly since the bit-scan operation can skip up to $w$ positions, where $w$ is the processor word width in bits. This approach has recently been applied to Unicode transcoding and XML parsing to good effect, with research prototypes showing substantial speed-ups over even the best of byte-at-a-time alternatives \cite{CameronHerdyLin2008, Herdy2008, Cameron2009}.

\subsection{Parabix2}

In Parabix2, we replaced the sequential single-cursor parsing using bit-scan instructions with a parallel parsing method using bitstream addition. Unlike the single-cursor approach of Parabix1 (and conceptually of all sequential XML parsers), Parabix2 processes multiple cursors in parallel. For example, using the source data from Figure \ref{fig:Parabix1StarttagExample}, Figure \ref{fig:Parabix2StarttagExample} shows how Parabix2 identifies and moves each of the start-tag markers forward to the corresponding end tag. Unlike Parabix1, Parabix2 begins scanning by creating two character-class marker streams: $N$, denoting the position of every alphanumeric character within the basis stream, and $M_0$, marking the position of every potential start tag in the bitstream. $M_0$ is then advanced to create $M_1$, which is fed into the first $scanto$ operation along with $N$. To handle variable-length tag names, the $scanto$ operation effectively locates the cursor positions of the end tags in parallel by adding $M_1$ to $N$, and using the bitwise AND of the negation of $N$ to find only the true end tags of $M_1$. Because an end tag may end on a `/' or a `>', $scanto$ is called again to advance any cursor from `/' to `>'. For additional details, see the technical report ``Parallel Parsing with Bitstream Addition: An XML Case Study'' \cite{Cameron2010}.
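The character-class construction described above can be sketched in a few lines of Python (illustrative only; Parabix itself operates on SIMD registers, not Python integers): build the basis bitstreams $B_0 \ldots B_7$, derive the `<' character-class stream with bitwise logic, and read the marked positions back out.

```python
def basis_bitstreams(data: bytes):
    # B_k has bit j set iff bit k of byte j is set, with B_0 the most
    # significant bit, matching the B_0..B_7 convention in the text.
    B = [0] * 8
    for j, byte in enumerate(data):
        for k in range(8):
            if byte & (0x80 >> k):
                B[k] |= 1 << j
    return B

data = b"<t1>ab</t1>"
B = basis_bitstreams(data)
mask = (1 << len(data)) - 1
# '<' is 0x3C: bits B_2..B_5 are 1 and B_0, B_1, B_6, B_7 are 0.
lt = B[2] & B[3] & B[4] & B[5] & ~B[0] & ~B[1] & ~B[6] & ~B[7] & mask
positions = [j for j in range(len(data)) if (lt >> j) & 1]
print(positions)  # [0, 6] -- the two '<' positions
```

A bit-scan then corresponds to finding the lowest set bit at or after the cursor, which is what the 1-cycle intrinsic provides in the real implementation.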
https://cs.stackexchange.com/questions/111813/verifying-hamiltonian-cycle-solution-in-on2-n-is-the-length-of-the-encoding
# Verifying a Hamiltonian Cycle solution in O(n^2), where n is the length of the encoding of G

In the textbook of CLRS, ch. 34.2 'Polynomial-time verification', it says the following:

Suppose that a friend tells you that a given graph G is hamiltonian, and then offers to prove it by giving you the vertices in order along the hamiltonian cycle. It would certainly be easy enough to verify the proof: simply verify that the provided cycle is hamiltonian by checking whether it is a permutation of the vertices of $V$ and whether each of the consecutive edges along the cycle actually exists in the graph. You could certainly implement this verification algorithm to run in $O(n^2)$ time, where $n$ is the length of the encoding of $G$.

To me, for each consecutive pair $(u,v)$ of the given cycle, we could verify whether it is an edge in $G$. Further, we could use some color coding for each vertex to ensure we don't revisit a vertex. By doing so, we could verify whether the given cycle is Hamiltonian in $O(E)=O(m^2)$ time, where $m$ is the number of vertices in $G$. Further, with an adjacency-matrix encoding, the length $n$ of the encoding of $G$ is $n = m^2$. Thus $O(E)=O(m^2)=O(n)$. Can anyone help me understand why it is stated as $O(n^2)$ instead?

The statement in CLRS is not wrong in any case; an algorithm that runs in $O(n)$ time also runs in $O(n^2)$ time. Of course, it would be more precise to state the running time as $O(n)$ if this were true, so why doesn't CLRS do this?

First off, this depends on the encoding chosen for $G$. If an adjacency matrix is used, a graph with $V$ vertices always has an encoding of size $V^2$. However, if an adjacency-list encoding is used, we would only need an encoding of size $O(E \log V + V)$.

Your algorithm does indeed run in $O(n)$ time for a dense graph (with $\sim V^2$ edges); does it also run in $O(n)$ time if the graph is sparse ($O(V)$ edges)? In that case, the encoding might be shorter (if an adjacency list is used). Is your algorithm still $O(n)$ in that case? If it enumerates over all potential edges or creates an adjacency matrix, it would not be.

CLRS wants to avoid peculiarities with having to implement the algorithm in a particular way or having to specify a specific encoding, which is why they state "you could certainly implement this algorithm to run in $O(n^2)$ time". The "certainly" means "in any case, you can get $O(n^2)$, but it might be possible to do better". In any case, all they care about is that it is polynomial, which both $O(n)$ and $O(n^2)$ are.
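The certificate check described in the quoted passage can be sketched as follows (a Python sketch with adjacency sets; the graph and names are illustrative): verify that the claimed cycle is a permutation of the vertices, then that each consecutive pair, wrapping around, is an edge.

```python
def is_hamiltonian_cycle(adj, cycle):
    # Must visit every vertex exactly once (a permutation of V)...
    if len(cycle) != len(adj) or set(cycle) != set(adj):
        return False
    # ...and every consecutive pair (including last -> first) must be an edge.
    return all(cycle[(i + 1) % len(cycle)] in adj[v]
               for i, v in enumerate(cycle))

G = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}   # the 4-cycle 0-1-2-3-0
print(is_hamiltonian_cycle(G, [0, 1, 2, 3]))  # True
print(is_hamiltonian_cycle(G, [0, 2, 1, 3]))  # False: (0, 2) is not an edge
```

With hash-based adjacency sets each membership test is expected O(1), so the whole check is linear in the number of vertices, which is the point of the question: the bound depends on how the encoding size $n$ relates to $|V|$ and $|E|$.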
https://www.mov00.com/detail/sailuoaotemanyingxiongchuan.html

Ultraman Zero: The Chronicle 9.0
https://moam.info/pareto-evolutionary-neural-networks-citeseerx_5c43c2d3097c476d458b4606.html
## Pareto Evolutionary Neural Networks - CiteSeerX

Pareto Evolutionary Neural Networks

Jonathan E. Fieldsend, Member, IEEE and Sameer Singh, Member, IEEE

Abstract— For the purposes of forecasting (or classification) tasks, neural networks (NNs) are typically trained with respect to Euclidean distance minimization. This is commonly the case irrespective of any other end-user preferences. In a number of situations, most notably time series forecasting, users may have other objectives in addition to Euclidean distance minimization. Recent studies in the NN domain have confronted this problem by propagating a linear sum of errors. However, this approach implicitly assumes a priori knowledge of the error surface defined by the problem, which, typically, is not the case. This study constructs a novel methodology for implementing multi-objective optimization within the evolutionary neural network (ENN) domain. This methodology enables the parallel evolution of a population of ENN models which exhibit estimated Pareto optimality with respect to multiple error measures. A new method is derived from this framework, the Pareto evolutionary neural network (Pareto-ENN). The Pareto-ENN evolves a population of models that may be heterogeneous in their topologies, inputs and degree of connectivity, and maintains a set of the Pareto optimal ENNs that it discovers. New generalization methods to deal with the unique properties of multi-objective error minimization that are not apparent in the uni-objective case are presented and compared on synthetic data, with a novel method based on bootstrapping of the training data shown to significantly improve generalization ability.
Finally experimental evidence is presented in this study demonstrating the general application potential of the framework by generating populations of ENNs for forecasting 37 different international stock indices. Index Terms— Neural networks, evolutionary computation, adaptive topologies,multiple objectives, time series forecasting.\n\nexplicit regression modelling to the non-linear generalization approach of NNs. Both of these approaches are typically based on the concept that the most accurate forecast, if not the actual realized (target) value, is the one with the smallest Euclidean distance from the actual. A. Multi-objective requirements When measuring time series prediction performance practitioners often use a range of different error measures (for example, 15 commonly used error measures are reported in ). These error measures tend to reflect the preferences of potential end users of the forecast model. For instance in the area of financial time series forecasting, correctly predicting the directional movement of a time series (for instance of a stock price or exchange rate) is arguably more important than just minimizing the forecast Euclidean error. Recent work that attempts to encapsulate multiple objectives using NNs have introduced augmentations to the traditional approaches of NN training. These have been in the form of propagating a linear sum of errors , , a product of terms , and penalizing particular mis-classifications more heavily . However these approaches implicitly assume the practitioner has some knowledge of the true Pareto error front defined by the generating process, and the features and NN topology they are using to model it (in order to specify the ratio of various errors, or the form of penalization). The roots of this problem are now discussed.\n\nI. INTRODUCTION\n\nT\n\nHE use of neural networks (NNs) in the time series forecasting domain is now well-established. 
There are a number of review papers in this area (for example, Adya and Collopy ), as well as methodology studies , . The main attribute which distinguishes NN time series modelling from traditional econometric methods is their ability to generate non-linear relationships between a vector of time series input variables and a dependent series, with little or no a priori knowledge of the form that this non-linearity should take. This is opposed to the rigid structural form of most econometric time series forecasting methods, e.g. autoregressive (AR) models, exponential smoothing models, (generalized) auto-regressive conditional heteroskedasticity ((G)ARCH) models, and auto-regressive integrated moving average (ARIMA) models , . Apart from this important difference, the underlying approach to time series forecasting itself has remained relatively unchanged during its progression from

This work was supported in part by the EPSRC, grant GR/R24357/01. The authors are with the Department of Computer Science, University of Exeter, Exeter, EX4 4QF, UK (e-mail: [email protected]).

B. Problems with the linear combination of errors approach

Figure 1 illustrates the current approach to multi-objective training in NN regression and classification. Consider the situation where a number of error measures (objectives) are used that lie in the range [0,1]. Given that the practitioner wishes to minimize these errors, the typical approach in linear sum back-propagation is to minimize the composite error ε_C. In the D error case (where there are D errors to be minimized) this is:

ε_C = α_1 ε_1 + α_2 ε_2 + … + α_D ε_D   (1)

Σ_{i=1}^{D} α_i = 1,  0 < α_i < 1 ∀i   (2)

The D = 2 dimensional case is illustrated in Figures 1a and 1b, where the practitioner gives equal weighting to both errors, and both errors lie within the same range. This is calculated as: ε_C = 0.5 ε_1 + 0.5 ε_2.
(3)

© 2005 IEEE 0000–0000/00$00.00

Fig. 1. Left: Two dimensional error surface 1. Suboptimal models denoted by circles. The optimal model returned by equal weighting of the errors is highlighted at the tangent point. Middle: Two dimensional error surface 2. Right: Example of the effect of composite weighting when the front is convex with respect to the origin. Irrespective of the weights given to the respective errors, the optimal model returned will only be one of the extreme optimal solutions.

This approach implicitly assumes that the interaction between the two error terms is symmetric. Figure 1a illustrates the situation described, where the minimum error surface (the true Pareto front) defined by the problem is shown, with suboptimal models lying behind it denoted by circles. On its extremes it can be seen that the error combinations (0.0, 1.0) and (1.0, 0.0) are possible, which define the axial parallel hyper-boundaries of the front. On the application of (3), each dashed line shown represents a set of objective combinations that are ranked as equivalent (the line's gradient reflecting the prior weightings). It is evident that if the true Pareto front is reached by the training process, then the model returned is one tangential to one of these parallel lines. In the case of Figure 1a this model is shown to have the error properties (0.25, 0.4). Figure 1b illustrates the same situation, with identical hyper-boundaries but a slightly different degree of convexity of the front. In this case the model returned is defined by the error properties (0.3, 0.5).
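As an illustrative sketch of the selection behaviour implied by (1)-(3) (the candidate error pairs below are invented for illustration, and are not taken from the paper's experiments), linear-sum training amounts to returning the candidate that minimizes the weighted sum of its errors:

```python
# Sketch of composite-error model selection, after eqs. (1)-(3).
# Candidate error pairs are illustrative only.

def composite_error(errors, alphas):
    """Weighted linear sum of D error measures, eq. (1)."""
    assert abs(sum(alphas) - 1.0) < 1e-9  # eq. (2): weights sum to one
    return sum(a * e for a, e in zip(alphas, errors))

candidates = [(0.25, 0.40), (0.10, 0.90), (0.60, 0.15), (0.45, 0.45)]
alphas = (0.5, 0.5)  # equal weighting, eq. (3)

# The model returned is the tangent-point model under these weights.
best = min(candidates, key=lambda errs: composite_error(errs, alphas))
print(best)  # (0.25, 0.4)
```

Note that changing `alphas` changes which candidate wins, which is precisely the sensitivity to the (unknown) front shape discussed above.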
The two models are significantly different, and in both cases, due to the shape of the Pareto error fronts (and contrary to the desires of the user), the error properties of the models returned are not equal. Although the feasible ranges of both error measures are the same, the interaction of the errors, as demonstrated by the shape of their true Pareto fronts, results in the return of models that, though Pareto optimal in themselves, do not represent the preferences of the practitioner. An even worse situation arises if the true Pareto front is non-convex. In this case composite error training (if the true Pareto front is reached) will only return those models that are on the extremes of the true Pareto front, as illustrated in Figure 1c. This is irrespective of the values used for α_1 and α_2. The model returned will always be the one that strictly minimizes one of the objectives (errors). This problem with the linear weighting approach has been known for a number of years in the MOEA literature (theoretical proofs are provided in ), but it has not been addressed when using linear weighting to propagate multiple objectives in NN training. An alternative is to optimize with respect to one objective, with the second objective formulated as a constraint (for instance that it has a minimum or maximum value ). In the case where the shape of the trade-off surface is unknown, a set of models needs to be trained that provides an estimate of the true Pareto surface so a final operating model can be selected. The composite weighting gradient descent or the constrained optimization approach can be used, however N runs would be needed with different weights, α, (1) or constraints to obtain N different individuals on the Pareto front (therefore it would be subject to high computation time). However, where the true Pareto front itself is non-convex, even this expensive option is infeasible for the composite weighting approach.
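A small numerical sketch of the non-convex case of Figure 1c (the concave front here, the unit-circle arc ε_2 = √(1 − ε_1²), is our illustrative choice, not the paper's): whatever positive weights are used, the weighted sum is minimized at one of the two extreme points of the front.

```python
import math

# Sample a concave (non-convex) Pareto front for minimization:
# points on the unit circle arc in the positive quadrant.
front = [(math.cos(t), math.sin(t))
         for t in [i * (math.pi / 2) / 20 for i in range(21)]]

def weighted_argmin(points, a1, a2):
    """Index of the point minimizing the composite error a1*e1 + a2*e2."""
    return min(range(len(points)),
               key=lambda i: a1 * points[i][0] + a2 * points[i][1])

# Whatever the weights, the minimizer is an extreme point (index 0 or 20),
# so interior trade-off models are never returned.
for a1 in (0.1, 0.3, 0.5, 0.7, 0.9):
    idx = weighted_argmin(front, a1, 1.0 - a1)
    print(a1, idx)  # idx is always 0 or 20
```

This mirrors the argument above: with a concave front, sweeping the weights cannot recover the interior of the front.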
An example of this drawback is shown in , where a composite error term is used. reports that the composite error weights were adjusted a number of times in order to find the best results on the test data, underlining the fact that the shape of the true Pareto error front was unknown. A general framework for training NNs which is not susceptible to these problems will now be presented.

C. Need for a general framework

Given that it is likely that the error surface defined by the generating process is not known, a well defined approach to implementing multiple objective training within NNs is needed. Through the use of multi-objective evolutionary algorithms (MOEAs) it is possible to find an estimated Pareto set of the combination of parameters to multiple objective 'clean' function modelling problems –. A Pareto set of solutions F is defined such that no single parameter combination F_i in the set is better than, or equivalent to, any other set member F_j on all objective measures; that is, no parameter combination in the set dominates any other. Over the last 16 years, since the work by Schaffer , MOEAs have been applied to a vast number of design problems, where mathematical formulae define the multi-objective surface to be searched. These methods had not, until very recently, been applied to the noisy domain of multi-objective NN training. There are a limited number of studies using a MOEA to train a population of multi-objective NNs; those by Fieldsend and Singh and Kupinski and Anastasio are concerned with more than one type of forecast/classification error, whereas other previous studies using MOEAs and NNs have been concerned with trading off network complexity and a single error term –.
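The dominance relation underlying the Pareto set definition above can be sketched directly (a minimal implementation for minimization; the objective vectors are illustrative):

```python
# Pareto dominance for minimization, as used to define the Pareto set F.

def dominates(u, v):
    """u dominates v: no worse on every objective, strictly better on one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def non_dominated(points):
    """Members of `points` not dominated by any other member."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

pts = [(0.2, 0.8), (0.5, 0.5), (0.4, 0.6), (0.9, 0.1), (0.6, 0.6)]
print(non_dominated(pts))  # (0.6, 0.6) is dominated by (0.5, 0.5)
```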
Chaiyaratana and Zalzala, in contrast, use MOEAs with the outputs of standardly trained NNs.

The new framework proposed here is designed to use those evolutionary computation (EC) methods which have previously been applied to uni-objective NN design: genetic algorithms (GAs), evolution strategies (ES) and particle swarm optimization (PSO). GAs have previously been used for feature selection , and topography selection –, and ESs have been used for weight optimization –, and adaptive topography selection –. The recent EC technique of PSO has also proved popular as a uni-objective NN optimizer –.

of the direct encoding form , that will now be described. Given a maximum size for a four layer feed-forward multilayer perceptron (MLP) of I input units (features), H1 and H2 hidden units in the first and second hidden layers, and O output units, the decision vector length used to represent this network within an MOEA is of size S, where

II. MULTI-OBJECTIVE EVOLUTIONARY

The first (I + 1) · H1 + (H1 + 1) · H2 + (H2 + 1) · O genes are floating point and store the weight parameters (including biases) of the ENN; the next I + H1 + H2 are bit represented genes, whose value (0 or 1) denotes the presence of a unit or otherwise (in the two hidden layers and input layer). The next D genes are again floating point, and these are used to hold the D error values associated with the network on the training data.

1) Topology/feature selection through node addition/deletion: Topography and input feature selection is implemented within the Pareto-ENN model by bit mutation of the section of the decision vector representing the ENN architecture. This is facilitated by first determining a superset of input features and maximum hidden layer sizes. Once this is determined, any decision vector has a fixed maximum representation capability. Manipulation of structure is stochastic.
By randomly bit flipping members of the first I genes of the binary section of the chromosome, the set of input features used by the ENN is adjusted. By bit flipping the genes in the subsequent binary section of the decision vector, the hidden ENN topography is manipulated.

2) Weights adjustment: In the Pareto-ENN the weight space of a network is perturbed by a set of values drawn at each epoch (generation) from a known distribution (Gaussian, Laplacian, etc.), as shown in Equation 5.

NEURAL NETWORK FRAMEWORK

The use of evolutionary approaches to NN training (with a single error function) has received increasing attention in recent years, as these approaches have a number of intuitive advantages over gradient descent training in this domain. Application dependent error measures, for which a derivative may be extremely costly to calculate, can be easily incorporated in EC approaches to training, as derivatives are not needed. Indeed the ability of these approaches to facilitate NN training beyond the Euclidean objective was highlighted by Porto et al. (although not with multiple objectives), but taken no further in . In addition, they benefit by training a population of evolutionary neural networks (ENNs) in one run, making them highly compatible with the concepts of population based multi-objective training from the MOEA literature. There are already a number of good reviews of MOEA methods , , , . For the purposes of this study the processes of a MOEA will be described at a very general level; readers wishing a more in depth discourse on the issue are recommended to read any of these reviews. At the basic level the EC methods employed in MOEAs are similar to those used in the EC uni-objective optimization field. However, instead of a single elite member maintained by the process (the best individual found so far during the search), an archive, F, of mutually non-dominating solutions is maintained.
Most recent work in the MOEA domain has been concerned with how the search population of the EC process and F should interact, and how to evolve the search as time progresses. Recent work has also investigated efficient approaches to the storing and maintenance of F as its size grows , , . Here a framework for general multi-objective evolutionary neural network (MOENN) training is outlined in Figure 2, which can be viewed as a synthesis of work from the uni-objective ENN literature and the MOEA literature. In this new framework a set of estimated Pareto optimal ENNs is maintained in tandem with the training process. The specific evolutionary operators used within the MOENN framework are determined by the optimization process used (for instance ES, GA or PSO). In this study a new method is derived from the general framework for evaluation, called Pareto-ENN, which is now introduced.

A. The Pareto evolutionary neural network model

In the Pareto-ENN model introduced here, the parameters of ENNs are stored within decision vectors using representation

S = (I + 1) · H1 + (H1 + 1) · H2 + (H2 + 1) · O + I + H1 + H2 + D   (4)

w_{i,n}^{k+1} = w_{i,n}^{k} + p · γ · Θ   (5)

where w_{i,n}^{k} is the ith weight of the nth network in the population at the kth epoch of training, Θ is a sample drawn from a user selected distribution, γ is some multiplier and p takes a value (0, 1) with some probability. This can be seen as similar to the common approach taken in uni-objective ENN optimizers for weight adjustment –, and was the only means of parameter adjustment used in , .

3) Weight addition/deletion: Finally, connectivity (and therefore complexity) is adjusted within the Pareto-ENN model by a GA type bit mutator, where at each generation every weight has a small probability of being severed. From this disconnected state the weight has the potential to be reconnected through the weight adjustment process described above.
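The decision-vector length of (4) and the weight perturbation of (5) can be sketched as follows (the Gaussian distribution for Θ, γ = 0.1 and perturbation probability 0.2 mirror the Table I settings, but are otherwise illustrative choices):

```python
import random

def decision_vector_length(I, H1, H2, O, D):
    """Eq. (4): weight genes (incl. biases) + unit on/off bits + D error slots."""
    return (I + 1) * H1 + (H1 + 1) * H2 + (H2 + 1) * O + I + H1 + H2 + D

def perturb_weights(w, gamma=0.1, prob=0.2):
    """Eq. (5): w_new = w + p * gamma * Theta, with Theta ~ N(0, 1) and
    p = 1 with probability `prob`, else 0 (values as in Table I)."""
    return [wi + (gamma * random.gauss(0.0, 1.0) if random.random() < prob else 0.0)
            for wi in w]

print(decision_vector_length(I=4, H1=5, H2=3, O=1, D=2))  # 61
```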
The Pareto-ENN used in this study is driven by a (1+1)-ES process, where a single ENN is selected from the archive F^t, evolved, and then compared to F^t, which is subsequently updated if necessary at t + 1. The archive selection method used is the partitioned quasi-random selection (PQRS) method

Algorithm 1: Implementation of the Pareto-ENN (standard training approach), M_S.

Inputs: M, size of initial random population of solutions, each solution chromosome X_0, ..., X_m, ..., X_M representing the weights and topology of an ENN model; T, maximum number of algorithm iterations (generations).
Output: A non-dominated set of ENN models that are an estimate of the true Pareto front defined by the data generation process (represented by the training data), and the ENN genus.

1: Initialization: Generate a random NN population of size M, such that each parameter (weight) of the ENNs ∼ N(0, α), and the binary part of the chromosome is either initialized at 1 or ∼ U(0, 1). Generate the empty frontal (non-dominated) set F^0 = ∅. Update F^0 with the non-dominated solutions from the random population, with respect to the chosen error terms. Initialize generation counter t := 0.
2: Frontal Representatives: Use PQRS to select an ENN representative from F^t, F_R^t. Create a replica ENN of F_R^t, x^t.
3: Parameter Adjustment: Adjust weights, topology, connectivity and inputs of ENN x^t using EC techniques.
4: Fitness Assignment: Evaluate the ENN x^t with respect to the user determined error measures on the training data presented. If x^t is not dominated by F^t go to 5, otherwise delete x^t and go to 6.
5: Update Archive: a) Insert ENN chromosome(s) x^t into F^t if it is not dominated by individuals in F^t. b) Remove ENN chromosome(s) from F^t which are dominated by x^t.
6: Loop: Iterate epoch count, t := t + 1. If t = T then go to 7, else go to 2.
7: Terminate algorithm and save members of F^T for evaluation on test data.
An algorithmic description of the Pareto-ENN is shown in Algorithm 1. As discussed earlier in this section an archive F is maintained of the non-dominated solutions found during the search process, which is initialized at line 1 in Algorithm 1. Decision vectors are subsequently replicated from this archive at each generation t (FRt ), before evolutionary adjustment and subsequent evaluation. In the context of the specific model used in this study the weights/topology/connectivity are adjusted simultaneously. The reason for taking this approach instead of doing each process separately (at different generations) is that performing them separately can prevent certain jumps in the decision space, and therefore can detrimentally constrain the search. The simple (1+1)-ES MOEA has performed well in comparison to many more complex approaches from the literature , , however, as the decision vector to be perturbed is drawn from F , the child must ‘jump’ to a nondominating or better objective space position with respect to F if it is to survive. As this is the case, all possible movements in decision space must have some probability > 0 to allow escape from (frontal) local optima. B. The problem of generalization with multiple error measures In an earlier work by the authors, , it was found that only a minority of the Pareto optimal set of ENNs fitted to training were Pareto optimal when evaluated with respect to test data; meaning that the estimated Pareto set of models on the training data were not necessarily a good estimate of the general Pareto set of models for the generating process. This clearly shows that the approach to training by a simple separation of the data into a training and test samples used in and is suboptimal, as it may lead to overfitting. In order to aid generalization in uni-objective problems, a commonly used technique is to separate time series data into consecutive training, validation and test sets. 
The forecast model is then trained on the first set until the measured error begins to increase on the second set, with the final generalization error calculated on the third set. By stopping training when the observed validation error begins to rise, the practitioner aims to prevent overfitting of the model on the training data. The application of this approach in the multi-error situation is however problematic. This is because when a set of ENNs is being used, some members may exhibit falling error values on training and validation data, whilst others may exhibit some validation errors rising and some other validation errors falling. It is difficult therefore to ascertain when a set of Pareto-ENNs may indeed be over-fitting if the approach is directly transferred to MOENN training from the ENN training domain. Instead two new methods are now introduced that are designed to increase the generalization ability of models derived from the MOENN framework, which will be empirically compared in the Pareto-ENN method.

1) Proposed method MV, validation set training in MOENNs: The first new method to improve generalization is inspired by the traditional validation set approach. As stated above, this approach cannot be transferred directly to the MOENN domain; however, the main thrust of the approach can be recreated. As in the traditional approach, the data set is partitioned so that a portion of the data is separated as an 'unseen' test set on which the generalization ability of the model(s) will be evaluated. The remaining data is split again to provide a training set and a validation set. A potential solution (ENN) is evaluated with regard to the training set. If this solution is found to be non-dominated by the current archive, a copy is created and saved in the archive F^t (line 5 of Algorithm 2). A second archive, however, is also maintained in MV, which maintains a non-dominated set with respect to the validation data.
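A minimal sketch of this two-archive scheme (assuming, per Algorithm 2, that a candidate is tested against the validation archive V only after being accepted into the training archive F):

```python
def dominates(u, v):
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def update(archive, errs, model):
    """Insert (errs, model) if non-dominated; evict members it dominates."""
    if any(dominates(a, errs) for a, _ in archive):
        return archive, False
    return [m for m in archive if not dominates(errs, m[0])] + [(errs, model)], True

def mv_step(F, V, model, train_errs, valid_errs):
    """One M_V acceptance step: V is consulted only for candidates that
    have first been accepted into the training archive F."""
    F, accepted = update(F, train_errs, model)
    if accepted:
        V, _ = update(V, valid_errs, model)
    return F, V
```

At termination, the model set returned to the user is V, not F, matching the description of Algorithm 2.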
Potential solutions are only considered for insertion into this validation archive, V^t, if (and only if) they have initially been found to be non-dominated with regard

Algorithm 2: Implementation of MV in the Pareto-ENN.

Inputs: As in Algorithm 1.
Output: A non-dominated set of ENN models that are an estimate of the true Pareto front defined by the data generation process (represented by the training and validation data), and the ENN genus.

1: Initialization: Generate a random NN population of size M, such that each parameter (weight) of the ENNs ∼ N(0, α), and the binary part of the chromosome is either initialized at 1 or ∼ U(0, 1). Generate the empty frontal (non-dominated) set F^0 = ∅, and the empty validation archive V^0 = ∅. Update F^0 and V^0 with the non-dominated solutions from the random population, with respect to the chosen error terms. Initialize generation counter t := 0.
2: Frontal Representatives: As in MS.
3: Parameter Adjustment: As in MS.
4: Fitness Assignment: As in MS.
5: Update of Training Archive: a) Insert ENN chromosome x^t into F^t if it is not dominated by individuals in F^t. b) Remove ENN chromosome(s) from F^t which are dominated by x^t.
6: Update of Validation Archive: a) Insert ENN chromosome x^t into V^t if it is not dominated by individuals in V^t. b) Remove ENN chromosome(s) from V^t which are dominated by x^t.
7: Loop: Iterate epoch count, t := t + 1. If t = T go to 8, else go to 2.
8: Terminate algorithm and save members of V^T for evaluation on test data.

Algorithm 3: Implementation of MB in the Pareto-ENN.

Inputs: As in Algorithm 1, plus: n, the number of bootstrap subsets generated from the original training sets; s, the size of the bootstrap subsets.
Output: As in Algorithm 1.

1: Initialization: Generate n bootstrap subsets of the training data of size s.
Generate a random ENN population of size M, such that each parameter (weight) of the ENNs ∼ N(0, α), and the binary part of the chromosome is either initialized at 1 or ∼ U(0, 1). Generate the empty frontal (non-dominated) set F^0 = ∅. Update F^0 with the non-dominated solutions from the random population, with respect to the chosen error terms (using a solution's worst D terms over the n subsets). Initialize generation counter t := 0.
2: Frontal Representatives: As in MS.
3: Parameter Adjustment: As in MS.
4: Fitness Assignment: Evaluate the ENN x^t with respect to the user determined error measures on the training bootstrap subsets presented. If x^t is not dominated by F^t go to 5, otherwise go to 6.
5: Update of Archive: a) Insert ENN chromosome x^t into F^t if it is not dominated by individuals in F^t. b) Remove ENN chromosome(s) from F^t which are dominated by x^t.
6: Loop: Iterate epoch count, t := t + 1. If t = T go to 7, else go to 2.
7: Terminate algorithm and save members of F^T for evaluation on test data.

to the training data at that generation. This process continues until algorithm termination, and the set of models returned are those residing in the validation archive, V^T, and not the training archive F^T (where T is the final evaluated generation). By only comparing solutions to the validation archive (if they have been accepted previously to the training archive), this method attempts to prevent overfitting to the validation data. As solutions that are not non-dominated with respect to the validation set are not selected (even if they are non-dominated with respect to the training set), this method also aims to prevent overfitting on the training data. A description of this training approach is shown in Algorithm 2.

2) Proposed method MB, bootstrap training in MOENNs: The second new method to improve generalization is based on bootstrap techniques. The data is partitioned as in MS into a training and test set. The training set is then bootstrap sampled
The training set is then bootstrap sampled\n\nto create n data subsets, on which the ENNs are evaluated during the training process. Potential solution networks produced by the MOEA are initially evaluated with respect to all of the bootstrap sets. Initially this will lead to nD fitness’s associated with a solution (the number of bootstraps multiplied by the number of error terms to be optimized). The final D fitness values attached to a decision vector for the archiving processes are the worst D objective values recorded over the n bootstrap sets.1 This training method is designed to prevent overfitting on a particular subset of the training data, and also to prevent general overfitting to the training data itself. A description of this training approach is shown in Algorithm 3. 1 If bootstrapping is performed without replacement, it is evident that the size of these sets must be smaller than the original training set, otherwise the approach mimics the standard training method.\n\n6\n\nIII. E XPERIMENTAL EVALUATION Fix maximum number of generations\n\nIn this section the three training methods applied in the Pareto-ENN model, MS , MV & MB (Algorithms 1-3) are compared on synthetic test data from the multi-objective NN literature, whose generating properties are known. The best performing training model is then applied to the real world application problem of financial forecasting.\n\nInitialise ENN empty archive Ft\n\nInitialise counter t=0\n\nA. Synthetic data Initialise ENN population Xt\n\nIn order to compare the three generalization methods, as applied to the Pareto-ENN, data is used from a test function described in . This test function is designed to demonstrate the trade-off between Euclidean fitting and robust fitting for the multi-objective training of NNs. 
1) Data properties: In Euclidean minimization the objective is to minimize

E_Euclidean(x) = (y − ŷ)²   (6)

where y is the observed data and ŷ its forecast. Their robust error term was defined as

E_Robust(x) = exp(λ|y − ŷ|^p)   (7)

where λ and p are both user defined. With p = 2 and λ = 1 this criterion is equivalent to Euclidean minimization, and as λ → ∞ it approaches the minimax criterion . The test function from used in this part of the study is a noisy sinusoid, with a random phase shift. It is defined for input a as:

f(a) = sin(a + c) + ε   (8)

where a ∈ [0, 2π], c has a 75% probability of being 0 and a 25% probability of being equal to 2, and ε ∼ N(0, 0.0706²). A plot of 6000 input-output pairs generated from this function by uniformly sampling across the range of a is provided in Figure 3.

TABLE I
PARAMETERS OF THE PARETO-ENN USED IN THE TWO EMPIRICAL SECTIONS.

Probability of weight perturbation: 0.2
Probability of node deletion/addition: 0.02
Probability of weight deletion: 0.02
Perturbation: ∼ N(0, 1) × 0.1
Initial weights: ∼ N(0, 1)

2) Experimental details: A single NN is trained for 5000 epochs using backprop (learning rate = 0.05, momentum = 0.5), in order to find a good general starting point in decision space. The archive is initialized with the non-dominated ENNs from 1000 random perturbations of this initial NN, whose maximum topological representation is identical to those used in Lo & Bassu (one hidden layer with 5 hidden logistic sigmoidal activation functions). The algorithm parameters are shown in Table I. The training set was 300 patterns (or 200/100 with train/validation) for the test function. The bootstrap size was 240 patterns (80% of the training data), and a total of 10 bootstrap sets were taken. The test set size was 3000 patterns for both test problems and the Pareto-ENNs were trained for 5000, 15000, 25000 and 50000 generations.² Each algorithm was run 50 times with different initialization vectors, different random seeds and different training data (drawn from the generating process defined in Equation 8). The two training errors to be minimized are the Euclidean distance and the criterion described in Equation 7 (λ set at 20).

Fig. 2. Flow diagram of the general MOENN framework.

Fig. 3. Lo and Bassu's test function.

Fig. 4. Boxplot of the pairwise C̃ measure between the three training methods, for 5000, 15000, 25000 and 50000 generations.

Fig. 5. Plot of the archive of models F, evaluated on the test data from the bootstrap training method for a single fold. F after 5000 generations is denoted by circles, after 15000 by pluses, after 25000 by triangles and after 50000 by stars. The difference between 25000 and 50000 is small, indicating that the search process is by that point converging.

² The number of generations may seem large, but the Pareto-ENN used here is a (1+1) method. 50000 (1+1)-ES evaluations is equivalent in ENN evaluations to 500 generations of a GA with a population of 100.
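The synthetic generating process (8) and the two training errors (6) and (7) can be reproduced with a short sketch (λ = 20 as in the synthetic experiments; function names are ours):

```python
import math
import random

def lo_bassu_sample():
    """One (input, output) draw from the noisy sinusoid of eq. (8)."""
    a = random.uniform(0.0, 2.0 * math.pi)
    c = 0.0 if random.random() < 0.75 else 2.0  # random phase shift
    eps = random.gauss(0.0, 0.0706)
    return a, math.sin(a + c) + eps

def euclidean_error(y, y_hat):
    """Eq. (6)."""
    return (y - y_hat) ** 2

def robust_error(y, y_hat, lam=20.0, p=2.0):
    """Eq. (7), with lambda = 20 as in the synthetic experiments."""
    return math.exp(lam * abs(y - y_hat) ** p)

print(round(euclidean_error(1.0, 0.9), 6))  # 0.01
```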
The evaluated errors of the trained models are the average Euclidean error and maximum Euclidean error. The training size is kept deliberately small and the number of generations used was large, to help quantify the effect of overfitting when attempting to estimate the Pareto error generating process. The three training methods used are MS (as used in , ), MV and MB, and each is started with identical states (initial archive) for each fold, with identical training and test sets. The C̃ measure is used to compare the different model sets returned by the MOENNs. It counts the proportion of points in one set that are dominated by points in another (e.g. C̃(A, B) counts the proportion of individuals in set B that are dominated by individuals in set A). In this study it is used to pairwise compare the sets of models from the three methods evaluated on test data. The exact calculation of this measure is described in the Appendix.

3) Results and interpretation: Figure 4 shows boxplots of the pairwise C̃ measure between the training methods for the 4 different generation lengths. MB can be seen to be markedly superior to both of the other approaches. After 50000 generations, forecast models trained using it are seen to dominate, on average, 77% of the models trained using MV and 76% of the models trained using MS. By 50000 generations MB models can be seen to be converging in Figure 5 on this test problem, although the archive size is continually rising as the

Fig. 6. Plot of the mean |F| of the bootstrap training method across the 50 folds. The dashed lines denote this value +/- 1 standard deviation.
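The pairwise C̃ measure used above can be sketched as follows (the paper's exact calculation is given in its Appendix; this is the standard coverage form of the description above, with illustrative point sets):

```python
def dominates(u, v):
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def coverage(A, B):
    """C(A, B): fraction of the points in B that are dominated by at
    least one point in A (minimization)."""
    return sum(any(dominates(a, b) for a in A) for b in B) / len(B)

A = [(0.1, 0.9), (0.5, 0.5), (0.9, 0.1)]
B = [(0.6, 0.6), (0.2, 0.95), (0.05, 0.95)]
print(coverage(A, B))  # 0.6666666666666666
```

Note that C̃(A, B) and C̃(B, A) are not complementary, which is why Figure 4 reports both directions for each pair of methods.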
These results indicate that the generalization from MOENN training can be improved beyond the approach currently taken in the literature , , by using the proposed bootstrap training method, MB . This new generalization technique will now be applied in the main empirical section of this study, for the forecasting of 37 international stock indices. B. Financial data An illustration of the interaction of multiple objectives in a problem, where a set of models is desired for collective use (as opposed to comparison) can be shown by analogy to the capital asset pricing model (CAPM) from finance . The CAPM describes the relationship between risk and return in an optimum portfolio of stocks, where risk is to be minimized and return maximized. In Figure 7a the front F F represents the Pareto optimal portfolios (called efficient portfolios in CAPM), with examples of other sub-optimal portfolios lying\n\n8\n\nReturn\n\nS\n\nF X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X X\n\na\n\nRm X\n\nS\n\nX\n\nRf\n\nX\n\nX X X X X X X X X X\n\nF\n\nX\n\nX X\n\nX X\n\nSm\n\na)\n\nRisk\n\nReturn\n\nS’ S’’ Rb\n\nb\n\nRa\n\nF\n\na S’’\n\nRf’’ Rf’\n\nS’ F\n\nb)\n\nSa\n\nSb Risk\n\nFig. 7. Top: The CAPM model. Pareto front defining trade-off between Profit and Risk in a Portfolio of stocks, and also in relation to a prediction model genus with various model parameters. Bottom: Two risk free rates of interest in the CAPM model (and forecast model analogy).\n\nbeneath F F also marked. Line SS is the capital market line, with point Rf , where the capital market line intersects the y-axis, representing the level of ‘risk free’ return available in the market place to the individual (e.g. through government treasury bills). The capital market line is tangential to the efficient portfolio front, the point where it touches the front at ‘a’ being the optimal market portfolio. 
In the simple illustration shown in Figure 7a, by investing in the market portfolio at point 'a', and lending or borrowing at the risk-free rate Rf, it is possible to operate on the capital market line, gaining a higher rate of return for any level of risk than that possible by investing in an efficient portfolio of stocks. More complex interactions can also be modelled within the CAPM framework. For example, where there are two different zero-risk rates in the market (one available to the user when borrowing, and another available from government treasury bills), the situation illustrated in Figure 7b occurs. Here the rate of return demanded by lenders is Rf'', whereas the 'risk free' rate of return for investors in treasury bills is lower at Rf'. The two tangential lines generated are S'S' and S''S'', with the kinked capital market line itself a combination of the two (represented by a solid line). The central section of this line is described by the efficient portfolio front between portfolios 'a' and 'b'. In this situation the user therefore desires to know the portfolios described by points 'a' and 'b', and all those in between on the efficient portfolio frontier. The rates of risk and return described by the capital market line to the left of 'a' can be accessed by distributing the individual's wealth between government bills (gaining Rf' return at zero risk) and portfolio 'a' (potentially gaining Ra return at a risk of Sa). The risk and return levels described by the capital market line to the right of 'b' can be accessed by the individual borrowing from the market at the rate Rf'' and investing this, and all their other wealth, in portfolio 'b'. An analogy can be drawn with the prediction of stock market prices. The Euclidean error of a model can be seen as a proxy for a forecast model's risk, and a trading strategy (based around the direction success error, for instance) as a measurement of the expected return of a model, as used in .
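The mechanics of operating on the capital market line, splitting wealth between the risk-free asset and a tangency portfolio, can be sketched numerically. The rates below are hypothetical illustrations, not figures from the paper:

```python
def capital_market_line(w, r_f, r_a, s_a):
    """Expected return and risk from putting fraction w of wealth in the
    risk-free asset and (1 - w) in portfolio 'a' (return r_a, risk s_a).
    With w < 0 the investor borrows at r_f and leverages portfolio 'a'."""
    expected_return = w * r_f + (1.0 - w) * r_a
    risk = (1.0 - w) * s_a  # the risk-free asset contributes no risk
    return expected_return, risk

# Hypothetical rates: 4% risk-free, portfolio 'a' returning 10% at risk 0.2.
r, s = capital_market_line(0.5, r_f=0.04, r_a=0.10, s_a=0.2)
assert abs(r - 0.07) < 1e-12 and abs(s - 0.1) < 1e-12
```

Every choice of w traces a point on the straight line through (0, Rf) and (Sa, Ra), which is why the mixed position beats any portfolio strictly beneath the front at the same risk level.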
However, in this study the measure of risk is the standard deviation of returns from the trading strategy used, a more accurate transference of economic theory . The front FF therefore represents the Pareto optimal set of regression models, with models 'a' to 'b' being the final models desired by the practitioner (to enable operation on the capital market line). In addition, given that different individuals may experience differing Rf s (due to differing costs of borrowing and lending available to different individuals and institutions in the economy), points 'a' and 'b' will vary across individuals.

1) Data properties: The second empirical part of this study is concerned with applying the Pareto-ENN model to a group of financial forecasting problems. The ENNs are trained to forecast a transform of international stock indices. The data encompasses thirty-seven international indices, with the series varying in length, the longest being 4845 data samples (over 19 years) and the shortest 416 data samples (under 2 years). All series are daily, containing open, high, low and close, run until 7th February 2003, and were obtained from http://uk.finance.yahoo.com/. A description of these series is provided in Table II. The concern in this process is the optimization of two measures, risk (minimized) and return (maximized), based on forecasting transforms of these series and using a trading strategy. The final set of archived Pareto optimal members, FT, should therefore provide an estimate of the risk/return trade-off defined by the generating process and trading strategy. Financial forecasting (modelling the generating process of a financial time series, or process) is a popular application of NNs , , , . However, in a number of studies misleading claims are made (or implied) with regard to the actual efficacy of the models presented.
Typically the accuracy of a model is described for some data set (usually in terms of Euclidean error), and an estimate of the profit generated by using the model forecasts and a trading strategy is provided. However, often the cost of trading (transaction costs) is not factored into this calculation. These additional costs, typically trading commission plus any taxation that may be relevant (e.g. stamp duty in the UK), can have a significant impact on realized profits . As such, the approach used in this paper is to include transaction costs in the training and final evaluation of ENN forecast models.

TABLE II
STOCK INDEX DESCRIPTIONS (all series run until 07-02-2003)

Country           Index                 From         Samples
Americas
Argentina         MerVol                24-10-1996   1538
Brazil            Bovespa               13-05-1993   2408
Canada            S&P TSX Composite     22-08-1984   4647
Mexico            IPC                   29-05-2001   416
Peru              Lima General          15-05-1998   1169
USA               S&P 500               25-11-1983   4845
Asia & Pacific
Australia         All Ordinaries        28-08-1984   4659
China             Shanghai Composite    21-07-1997   1329
Hong Kong         Hang Seng             18-12-1990   3000
India             BSE 30                17-07-1997   1368
Indonesia         Jakarta Composite     18-07-1997   1336
Japan             Nikkei 225            19-09-1986   4035
Malaysia          KLSE Composite        21-12-1993   2247
New Zealand       NZSE 40               06-10-1992   2590
Pakistan          Karachi 100           23-07-1997   1322
Philippines       PSE Composite         18-07-1997   1376
Singapore         Straits Times         10-07-1997   2651
South Korea       Seoul Composite       18-07-1997   1357
Sri Lanka         All Share             15-10-1998   1032
Thailand          SET                   18-07-1997   1362
Taiwan            Taiwan Weighted       18-07-1997   1351
Europe
Austria           ATX                   27-11-1992   2517
Belgium           BEL-20                01-07-1992   2587
Czech Republic    PX50                  16-07-1999   875
Denmark           KFX                   11-02-1993   2499
France            CAC 40                19-03-1990   3231
Germany           DAX                   12-12-1990   3048
Greece            General Share         13-05-1998   1182
Italy             MIBTel                04-08-1993   2391
Netherlands       AEX General           28-10-1992   2604
Russia            Moscow Times          15-05-1998   1125
Spain             Madrid General        27-05-1999   857
Sweden            Stockholm General     17-01-2001   505
Switzerland       Swiss Market          27-11-1990   3031
Turkey            ISE National 100      18-07-1997   11321
U.K.              FTSE 100              18-04-1984   4750
Africa and Middle East
Israel            TA-100                25-05-1998   931

2) Trading strategy: The trading strategy is dependent on the market/stock level falling during the day by at least 0.5% before buying (trading in), as described in Algorithm 4. The open value is therefore multiplied by 0.995 to obtain the realized purchase cost if buying due to the strategy occurs. The return measure is calculated using a simple trading strategy based upon transaction costs calculated at 0.1% of price (defined as a reasonable level in ), and therefore a minimum increase in price from buy to sell of 0.2% is needed before any profits can be realized. In addition, the trading strategy is designed such that a trade into the market will only take place if estimated profits beyond transaction costs after selling the next day equal approximately 1.5%. The measure is formally described in Algorithm 4. The strategy is therefore to trade into the market on day t when the level drops by 0.5% of the open value, and to sell the following day when the level rises 1.7% above the purchase level (1.2% above the open of the previous day), accruing 1.5% profit (including transaction costs of 0.1% each way). If the level does not fall by at least 0.5% then the initial trade does not occur and the capital is invested overnight in a 'risk-free' asset (i.e. a bank deposit earning 0.016% per day, equivalent to 4% per annum).
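The quoted percentages can be checked directly: buying at 0.995 of the open, selling at 1.017 of the purchase level, and paying 0.1% of price on each leg nets approximately the 1.5% stated above. The helper name below is illustrative only:

```python
def round_trip_profit(buy_price, sell_price, cost_rate=0.001):
    """Net fractional profit of a buy-then-sell round trip, with
    proportional transaction costs charged on each leg."""
    cost = cost_rate * buy_price + cost_rate * sell_price
    return (sell_price - buy_price - cost) / buy_price

open_level = 100.0
buy = 0.995 * open_level    # filled after a 0.5% intraday fall
sell = 1.017 * buy          # target: 1.7% above the purchase level
profit = round_trip_profit(buy, sell)
assert abs(profit - 0.015) < 1e-3   # roughly the 1.5% quoted in the text
```

The same arithmetic shows why a 0.2% price rise is the break-even point: a flat round trip loses exactly the two 0.1% cost legs.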
If the initial trade has occurred and the level does not rise 1.7% above the purchase level the following day, trade out of the market occurs at the market close of day t + 1 and the profit/loss (including transaction costs) is realized. The predicted series ς^t is a composite series based upon this trading strategy, and is described below:

    ς^t = b_h^t / (0.995 b_o^{t-1})                       (9)
    if b_l^{t-1} > 0.995 b_o^{t-1},  ς^t := 1.00016       (10)
    else if ς^t ≥ 1.017,  ς^t := 1.017                    (11)
    else ς^t := b_c^t / (0.995 b_o^{t-1})                 (12)

where b_o^t is the open level of the market at day t, b_h^t is the market high at day t, b_l^t is the market low at day t, and b_c^t is the market close at day t. Thus ς^t exactly encapsulates the profit/loss of this trading strategy (excluding the transaction costs).

Due to the difficulty of predicting raw market time series for profitable day trading when transaction costs are taken into consideration, this application combines a number of time series in a novel transformation in order to make the forecasting task easier. This data transformation is application specific (determined by the trading strategy to be used), but has the additional benefit of creating a stationary time series for forecasting (other studies have also used data-specific transformations, e.g. and ).

A visual example of this is given in Figure 8, which shows the open level of the Japanese Nikkei 225 index over the past 17 years, and its corresponding ς^t transformation.

Fig. 8. Top: The Nikkei 225 index (open level).
Bottom: The ς^t transformation of the Nikkei 225, as described in Equations 9-12.

3) Experimental details: In order to use the trading strategy introduced, a model is needed to produce a prediction ς̂^t of the time series ς^t. A completely adaptive topology is used in this section, allowing heterogeneous topologies to be maintained by the estimated Pareto set of NNs. The model inputs are defined as follows:

    v^t_{1,...,10} = ς^{t-2}, ..., ς^{t-11}    (13)

Variables 1 to 10 contain the last 10 lagged realized values of ς^t (2 weeks of trading). ς^{t-1} cannot be used as it incorporates information that will not be available at the start of day t (the index high of day t - 1). The first 80% of each of the data sets was used for training ENNs for that financial index, and the final 20% for testing. The MB method of the previous section was used as the training algorithm. Each run was initialized with 200 random ENNs. The algorithm parameters are shown in Table I. The number of evaluated generations was 25000. The maximum ENN representation was a 10:10:1 network.

Algorithm 4 Trading strategy (return objective).
Input:
  t, current time step (day).
  ς̂^t, the model forecast at day t.
  q_c^t = b_c^t / (0.995 b_o^{t-1}), where b_c^t is the market close on day t.
  ε_{Rt}^t, return value at time t (as a percentage of capital at t - 1).
1: Set t := 1, the first trading day of the train (or test) set instance.
2: If ς̂^{t+1} ≥ 1.017 ∧ b_l^t / b_o^t ≤ 0.995, shift capital from the risk-free deposit into the market at the point where the market price falls to 99.5% of open (incurring transaction costs), go to 3; otherwise go to 5.
3: t := t + 1; calculate profit/loss: if ς^t ≥ 1.017, sell when the market reaches 101.7% of the level at which it was entered, accrue return minus transaction cost, go to 2.
4: Else, if ς^t < 1.017, sell at the end of the day, ε_{Rt}^t = (q_c^t - 1) - (0.1 + 0.1 q_c^t), go to 2.
5: Calculate nominal risk-free interest accrued on assets, ε_{Rt}^t = 0.016 (compound equivalent to 4% p.a.), t := t + 1, go to 2.
6: Halt when the end of the train (or test) set is reached.

Fig. 9. Risk and return on the S&P 500 index (80% train, 20% test). Left: training Pareto front. Right: testing estimated Pareto front.

Fig. 10. Risk and return on the Nikkei 225 index (80% train, 20% test). Left: training Pareto front. Right: testing estimated Pareto front.

Fig. 11. Risk and return on the FTSE 100 index (80% train, 20% test). Left: training Pareto front. Right: testing estimated Pareto front.

4) Results and interpretation: Graphical examples of estimated Pareto fronts defined by the archive of ENNs on the test data are provided in Figures 9-11. Figure 9 shows the estimated Pareto front generated for a market from the Americas group (the S&P 500 index), Figure 10 shows the estimated Pareto front for a market from the Asia/Pacific group (the Nikkei 225 index), and Figure 11 shows the estimated Pareto front for a market from the European group (the FTSE 100 index). These figures show that, although there is some degree of noise present, the general shape and properties of ENN models fitted on the training data are consistent with their performance on the test data. Figure 12 illustrates the risk and return experienced by different models on the Pareto front and their corresponding performance on the test data.
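The ς^t transformation of Equations 9-12 and the per-day return of Algorithm 4 can be sketched together as follows. This is a simplified reading, not the paper's exact implementation: the function names are hypothetical, and the return formula here works in consistent fractional units (the printed formula mixes percentage and fractional terms):

```python
def sigma_series(opens, highs, lows, closes):
    """Composite series ς^t of Equations 9-12 from daily OHLC levels
    (index 0 is the first day; ς starts at t = 1)."""
    sigma = []
    for t in range(1, len(opens)):
        buy = 0.995 * opens[t - 1]          # fill level after a 0.5% dip
        if lows[t - 1] > buy:               # Eq. 10: no dip, so no trade; the
            sigma.append(1.00016)           # capital earns the daily bank rate
            continue
        s = highs[t] / buy                  # Eq. 9: best ratio reachable day t
        if s >= 1.017:                      # Eq. 11: sell target hit
            s = 1.017
        else:                               # Eq. 12: sell at the close instead
            s = closes[t] / buy
        sigma.append(s)
    return sigma

def daily_return_pct(sigma_t, cost_rate=0.001, bank_rate_pct=0.016):
    """Per-day return (percent of capital) in the spirit of Algorithm 4:
    ς^t = 1.00016 encodes a no-trade day at the bank rate; otherwise ς^t is
    the gross buy-to-sell ratio, charged 0.1% on each leg."""
    if sigma_t == 1.00016:
        return bank_rate_pct
    return 100.0 * ((sigma_t - 1.0) - cost_rate * (1.0 + sigma_t))

# Two days: a 0.5% dip on day 0 buys in; the high on day 1 hits the target.
s = sigma_series([100.0, 100.0], [100.0, 102.0], [99.0, 99.0], [100.0, 101.0])
assert s == [1.017]
assert abs(daily_return_pct(s[0]) - 1.498) < 0.01  # roughly the quoted 1.5%
```

The mean and standard deviation of `daily_return_pct` over a series then give the return and risk objectives optimized in this section.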
Three models are taken as exemplars from the archives: those that lie at either extreme of the front, and the mid set member, for each of the 37 different stock indices. The performance of the profit-maximizing extreme ENN is denoted by plus signs in Figure 12, joined by dots. The performance of the mid set members is denoted by triangles, and the performance of the risk-minimizing ENNs by circles. The market performance is shown with a solid line, and the random walk model performance is shown with a dashed line (in this case the random walk model uses the trading strategy based on its prediction that ς̂^{t+1} = ς^{t-1}).³

Fig. 12. Risk and return for 3 different exemplar members of the archived ENNs for the 37 international indices (indices ordered as in Table II). Market performance and performance of the random walk model using the same trading strategy are also shown.

Fig. 13. Boxplots of the realized risk and return for 3 different exemplar members of the archived ENNs across the 37 international indices. Market performance and performance of the random walk model using the same trading strategy are also shown.

The profit maximizer model consistently produces a higher rate of return than the other models across both training and test data, and the market rate experiences the highest volatility of return across the data. The relative performance of the five different models can be seen more clearly in Figure 13, which shows boxplots of the results of the five models over training and test sets for the two objective measures. Using the Wilcoxon signed ranks test (two-tailed at the 0.05 significance level), the higher return rates of model 'A' are found to be significantly better than those of all the other models, with the next best model being the random walk, followed by model 'B'. Over the training data the market return is higher than the MOENN operating point 'C'; however, on the test data, 'C' (effectively keeping the capital in the bank and not trading) is significantly better than investing in the markets (which is not surprising given recent market performance). The volatility of the market return was found to be significantly higher than that of all the other models, followed by 'A', the random walk and 'B'. The three different operating points of the MOENN results were consistent across the test data, with 'A' having the highest return followed by 'B' and then 'C', and 'C' having the lowest volatility, followed by 'B' and 'A'. Model 'A' was the best performing model (in terms of return) out of the five used, and produced consistent positive returns on all markets (as shown by the boxplots), even when some of the markets were experiencing significant downward trends. Mean values and standard deviations are shown in Table III.

TABLE III
MEAN RISK AND RETURN OVER THE 37 INTERNATIONAL INDICES. RESULTS SHOWN FOR THE EXTREME AND MID ARCHIVED ENNS, THE MARKET PERFORMANCE AND THE RANDOM WALK MODEL (STANDARD DEVIATIONS IN PARENTHESIS).

                                     Train              Test
Model                                Risk     % Ret     Risk     % Ret
Risk Averse (archive ENN)            0.000    0.016     0.000    0.016
                                    (0.000)  (0.00)    (0.000)  (0.00)
Middle Way (archive ENN)             0.192    0.048     0.295    0.037
                                    (0.088)  (0.029)   (0.203)  (0.033)
Profit Maximizer (archive ENN)       0.657    0.149     0.757    0.114
                                    (0.353)  (0.132)   (0.355)  (0.138)
Market                               1.753    0.042     1.621    -0.018
                                    (0.792)  (0.094)   (0.686)  (0.106)
Random Walk                          0.514    0.054     0.514    0.073
                                    (0.278)  (0.071)   (0.263)  (0.104)

The fact that consistent profits can be observed using the random walk approach is interesting. This may be due to the trading strategy itself restricting the random walk model to trades where the market is volatile (which is its aim). However, it is conceivable that the transaction costs modelled may be too low for certain markets. If this second case is true, it should be noted that even with transaction costs raised in order to make the random walk model unprofitable, model 'A' will still experience significant positive returns beyond both the market and bank returns.

³ The random walk model of ς̂^{t+1} = ς^t cannot be used, as ς^t is unknown at the start of day t and can only be calculated at the close of trade on day t (as it uses the high at day t).

Figure 14 illustrates the wide range of ENN architectures generated in the non-dominated set of models returned by the Pareto-ENN process. Figure 14a shows a Hinton plot of the weights of the 84 ENNs residing in the archive of the Nikkei 225 Pareto-ENN, each column representing a different ENN (each white square denoting an active weight, and each empty square denoting a disconnected weight). The weights are ordered such that w_1, ..., w_11 represent the weights from the 10 input nodes and the bias to the first hidden node, w_12, ..., w_23 represent the same for the second hidden node, and so on. w_110, ..., w_121 represent the weights between the hidden layer and the output node, and the output bias.
As can be seen, a wide range of weights and degrees of connectivity are used, from 75 active weights (the 81st ENN) to 121 active weights (the 83rd ENN). Figure 14b shows which inputs were used by each of the ENNs (with some ENNs using all the available inputs, and others using as few as 6), and Figure 14c shows which hidden units were used by each of the ENN set members (with between 7 and 10 hidden units used).

Fig. 14. Example of the range of ENN topographies on an estimated Pareto front. Hinton plots are shown for the ENN weights (a), input topography (b) and hidden topography (c) of the 84 ENNs lying on the estimated Pareto error surface for the Nikkei 225 data.

Algorithm 5 Implementation of the uni-objective ES NN optimizer (bootstrap training approach), for the finance problem.
Input:
  M, size of initial random population of solutions; each solution chromosome x represents the weights and topology of a NN model.
  n, the number of bootstrap subsets generated from the original training sets.
  s, the size of the bootstrap subsets.
  r, the maximum risk allowed by a model.
Output:
  A single NN which estimates the maximum return possible given r.
1: Initialization: Generate n bootstrap subsets of the training data of size s. Generate a random NN population of size M, such that each parameter (weight) of the ENNs ∼ N(0, α), and the binary part of the chromosome is either initialized at 1 or ∼ U(0, 1). Generate the empty elite individual E_0 = ∅. Update E_0 with the fittest solution from the random population, with respect to the chosen error term (using a solution's worst term over the n subsets), constrained such that the highest risk on the subsets is lower than r. Initialize generation counter t := 0.
2: X*_t = E_t.
3: Genetic Recombination: As in MB.
4: Fitness Assignment: Evaluate the ENN(s) X*_t with respect to return on the training subsets presented. If the return is greater than that of E_t, and the risk is less than r, go to 5, otherwise go to 6.
5: E_{t+1} := X*_t.
6: Looping: Iterate epoch count, t := t + 1. If the stopping criteria have not been met then go to 2, else terminate the algorithm and save E_t for evaluation on test data.
7: end

C. Further efficiency tests: comparison with single objective ES runs

As described earlier, another potential solution to this problem is to run a single objective optimizer many times on a problem, keeping the other objective(s) 'fixed' (i.e. constraining acceptable solutions to have the other objective(s) equal to or below/above a pre-defined value). It has been argued previously that this is inefficient in time resources; this statement will be empirically investigated here. To facilitate this, a uni-objective optimizer for the previous finance problem was formulated. Here the optimizer (again a (1+1)-ES) is solely concerned with optimizing return, for a maximum acceptable risk level. The perturbation techniques and probabilities are identical to those used in the MOENN previously, and again the bootstrap generalization method is used to prevent over-fitting. For each of the 37 finance problems five single objective ENNs were trained, with the constraint of risk being no higher than 0.2, 0.4, 0.6, 0.8 and 1.0, for a maximum of 25000 generations (meaning 185 separate optimizer runs). The optimizer is described in Algorithm 5. Each run was initialized with 200 random networks.
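The elite-preserving, risk-constrained (1+1)-ES loop of Algorithm 5 can be sketched on a toy problem. The Gaussian mutation below is a deliberate simplification of the paper's weight/topology perturbations, and the function names are hypothetical:

```python
import random

def one_plus_one_es(ret_fn, risk_fn, init, max_risk, generations, seed=0):
    """(1+1)-ES in the spirit of Algorithm 5: keep an elite solution,
    perturb it, and accept the child only if it improves return while
    respecting the risk ceiling max_risk."""
    rng = random.Random(seed)
    elite = list(init)
    for _ in range(generations):
        child = [x + rng.gauss(0.0, 0.1) for x in elite]
        if risk_fn(child) <= max_risk and ret_fn(child) > ret_fn(elite):
            elite = child  # elitist replacement: never accept a worse child
    return elite

# Toy problem: return = x0 + x1, risk = |x0| + |x1|, risk capped at 1.
ret = lambda x: x[0] + x[1]
risk = lambda x: abs(x[0]) + abs(x[1])
best = one_plus_one_es(ret, risk, [0.0, 0.0], max_risk=1.0, generations=2000)
assert risk(best) <= 1.0 and ret(best) > 0.0
```

Running this five times with different `max_risk` ceilings mirrors the 185 constrained runs described above: each run yields one point, whereas the MOES returns a whole front in a single run.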
The algorithm parameters were:
• Probability of weight perturbation = 0.2
• Probability of individual weight elimination = 0.02
• Probability of individual node elimination = 0.02
• Perturbation ∼ N(0, 0.1)
• Initial weights ∼ N(0, 0.1)
• Generations = 25,000

The progress of the elite ENN in objective space, E_t in Algorithm 5, was recorded every 250 generations and compared to the set of ENNs discovered by the MOENN after 25000 generations, with the generation at which the single objective ENN is no longer dominated by member(s) of the saved front F being marked. For each test problem, if the average number of generations needed for the single objective optimizer to train an ENN that is not behind F is less than 25000/|F|, then there is no efficiency benefit to using the MOENN training regime. If, however, the average number of generations needed is greater than 25000/|F|, there are tangible efficiency gains, to which a value can even be assigned.

TABLE IV
RESULTS COMPARING THE UNI-OBJECTIVE AND MULTI-OBJECTIVE OPTIMIZERS. GU IS THE AVERAGE NUMBER OF GENERATIONS TAKEN BY THE UNI-OBJECTIVE ES TO REACH THE ESTIMATED PARETO FRONT FOUND BY THE MOES. GM IS THE NUMBER OF GENERATIONS PER NON-DOMINATED POINT ON THE ESTIMATED PARETO FRONT FOUND BY THE MOES. THE RATIO OF THE TWO VALUES, GU/GM, GIVES AN APPROXIMATION AS TO HOW MUCH MORE EFFICIENT THE MOES IS AT FINDING ESTIMATED PARETO SOLUTIONS FROM A GIVEN SET THAN THE UNI-OBJECTIVE ES.

Financial Index         GU      GM       GU/GM
MerVol                  10650   892.9    11.9
Bovespa                 1250    781.3    1.6
S&P TSX Composite       4200    675.7    6.2
IPC                     16850   263.2    64.0
Lima General            25000   1388.9   18.0
S&P 500                 1900    342.5    5.5
All Ordinaries          25000   925.9    27.0
Shanghai Composite      11700   384.6    30.4
Hang Seng               3200    471.7    6.7
BSE 30                  15950   396.8    40.2
Jakarta Composite       14350   609.8    23.5
Nikkei 225              1950    625.0    3.1
KLSE Composite          2950    581.3    5.0
NZSE 40                 500     925.9    0.5
Karachi 100             13300   431.0    30.8
PSE Composite           7950    568.1    14.0
Straits Times           24350   1086.9   22.4
Seoul Composite         1150    595.2    1.9
All Share               25000   1388.9   18.0
SET                     17100   500.0    34.2
Taiwan Weighted         10950   675.7    16.2
ATX                     13600   757.6    18.0
BEL-20                  25000   581.4    43.0
PX50                    20100   1086.9   18.5
KFX                     25000   806.5    31.0
CAC 40                  16750   294.1    57.0
DAX                     25000   531.9    47.0
General Share           10600   609.8    17.4
MIBTel                  18050   657.9    27.4
AEX General             15000   333.4    45.0
Moscow Times            750     1562.5   0.5
Madrid General          11150   735.3    15.2
Stockholm General       7150    609.8    11.7
Swiss Market            20550   362.3    36.7
ISE National 100        9700    347.2    27.9
FTSE 100                2350    925.9    2.5
TA-100                  11700   454.5    25.7

1) Results: Results from the different runs are shown in Table IV. On all bar two of the 37 test problems the MOES is shown to be more efficient than the uni-objective optimizer, performing 22 times better on average (i.e. for the uni-objective optimizer to find the same points in |F| it would need to perform 22 times more function evaluations). This may even be an underestimate, as nearly 40% of the 185 uni-objective optimizer runs did not reach the front found by the MOES within 25000 generations (an example of this is shown in Figure 15). The implication of this is that the formulation of a nominally uni-objective problem as a multi-objective problem can actually improve the search process, as has indeed been previously postulated (e.g. Knowles et al. , Jensen , and Abbass & Deb ). This is most likely due to the synergies present in multi-objective search.

Fig. 15. Risk and return on the Lima General index training data: the models found by the MOES after 25000 generations plotted as points, and the models found by the uni-objective ES with 5 different risk maximums plotted as circles, again after 25000 generations.
The evolution of a point in one area of objective space, through its decision space parameters, may lead to its shifting to a different area of objective space. In multi-objective optimization this solution is stored if it is non-dominated with respect to other solutions in the area it has moved to. In the case of uni-objective optimization these kinds of fortuitous movements are not easily sustained, as a movement in decision space is purely defined as better or worse; the uni-objective formulation may therefore be more likely to be caught in local optima.

IV. DISCUSSION

As described in Section 1, a single composite error term cannot be meaningfully propagated through a NN in a multi-objective application. The hybrid ENN methods used for example in and highlighted in , where individual networks are trained at each generation using a gradient descent technique in addition to their evolutionary manipulation, are also infeasible for these reasons. Instead, a general MOENN framework has been introduced in this study where ENN parameters are adjusted entirely by EC methods, and a model derived, the Pareto-ENN, which has demonstrated a number of significant results.

The first set of results concerned multi-objective ENN generalization. Training in and was simply terminated after a fixed number of epochs; problems of network generalization, under/overfitting and validation were not addressed. In this study these concerns have been addressed through the introduction of two new techniques for MOENN training, one of which, bootstrap training, has been shown to improve MOENN generalization performance significantly.

The Pareto-ENN model has also been applied in the domain of regression analysis, where multiple objectives are especially of interest. In one general training procedure a population of ENN models has been, through genetic recombination and mutation, manipulated with respect to their weights, topographies, connectivities and inputs. At algorithm termination a set of heterogeneous MOENNs is available for use, representing an estimate of the true error trade-off (the Pareto front) defined by the data generating process, as described by the training data. The performance on 37 real-world data sets showed that a set of ENNs trained using the described methodology can perform in a consistent manner on unseen test data. This has been manifest both in the visual inspection of the trade-off fronts produced, and through statistical comparison of different operating points on training data and their relative positions on test data. The technique is also shown to generate significant results when compared to other models in the financial domain, and to produce significant returns even when considering transaction costs. New methods to improve generalization in MOENNs have also been introduced and compared on test sets from the multi-objective NN literature, with the approach based on the bootstrapping of training data found to be significantly better than the models compared.

Once a set of MOENNs has been generated that lie upon an estimate of the Pareto surface in the error space, a practitioner not only gains knowledge with respect to the error interactions of their problem, but also has an opportunity to select an individual model that represents their error trade-off preferences, or a group of models if so desired.

ACKNOWLEDGEMENT

The authors would like to thank Ming Peng and the anonymous referees for their useful comments.

APPENDIX

A. Pareto Optimality

The multi-objective optimization problem seeks to simultaneously extremize D objectives:

    y_i = f_i(x),  i = 1, ..., D    (14)

where each objective depends upon a vector x of n parameters or decision variables (in the case of regression modelling, these may represent the weights/topologies of a NN). Without loss of generality it is assumed that these objectives (also referred to as model errors in this study) are to be minimized. As such the problem can be stated as:

    Minimize y = f(x) = (f_1(x), f_2(x), ..., f_D(x)),    (15)

where x = (x_1, x_2, ..., x_n) and y = (y_1, y_2, ..., y_D).

When faced with only a single error measure, an optimal solution is the one which minimizes the error given the model constraints. However, when there is more than one non-commensurable error term to be minimized, it is clear that solutions exist for which performance on one error cannot be improved without sacrificing performance on at least one other. Such solutions are said to be Pareto optimal, and the set of all Pareto optimal solutions is said to form the Pareto front.

The notion of dominance may be used to make Pareto optimality more precise. A decision vector u (a vector of model parameters) is said to strictly dominate another v (denoted u ≺ v) if

    f_i(u) ≤ f_i(v)  ∀ i = 1, ..., D    (16)
    and f_i(u) < f_i(v)  for some i.    (17)

Less stringently, u weakly dominates v (denoted u ⪯ v) if

    f_i(u) ≤ f_i(v)  ∀ i = 1, ..., D.    (18)

A set of M decision vectors {W_1, W_2, ..., W_M} is said to be a non-dominated set (an estimate of the Pareto front) if no member of the set is dominated by any other member:

    W_k ⊀ W_j  ∀ j, k = 1, ..., M.    (19)

B. Front comparison measures

The comparison of estimated Pareto fronts is difficult, as there are several ways in which a front can be inferior or superior to another. In this work the following modified version of the C measure is used:

    C̃(A, B) = |{b ∈ B : ∃ a ∈ A, a ≺ b}| / |B|    (20)

C̃ measures the fraction of members of B which are strictly dominated by members of A.
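The strict dominance relation of Equations 16-17 and the C̃ measure of Equation 20 translate directly into code; a minimal sketch (objective vectors as tuples, objectives minimized):

```python
def strictly_dominates(u, v):
    """u ≺ v: u is no worse in every objective and strictly better in at
    least one (Equations 16-17, objectives minimized)."""
    return (all(a <= b for a, b in zip(u, v))
            and any(a < b for a, b in zip(u, v)))

def c_tilde(A, B):
    """Modified C measure of Equation 20: the fraction of members of B
    that are strictly dominated by some member of A."""
    return sum(1 for b in B if any(strictly_dominates(a, b) for a in A)) / len(B)

# A mutually non-dominating set scores 0 against any subset of itself.
front = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
assert c_tilde(front, front) == 0.0
# A single point dominating half of B gives 0.5.
assert c_tilde([(0.0, 0.0)], [(1.0, 1.0), (-1.0, -1.0)]) == 0.5
```

Note that using strict dominance (rather than weak dominance) is what gives C̃(A, A) = 0, the property discussed in the following paragraph.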
As such it measures the quality of A with respect to B. When C(A, B) = 1.0, then all of the individuals in B are dominated by solutions in A; when C(A, B) = 0.0 none of the individuals in B are dominated by ˜ A) = 0 and, in addition, it measures any member of A. C(A, two mutually non-dominating sets as equivalent, i.e. if A ⊆ W and B ⊆ W are each subsets of a non-dominating set W , then ˜ B) = 0. C(A, R EFERENCES M. Adya and F. Collopy, “How Effective are Neural Networks at Forecasting and Prediction? A Review and Evalution,” International Journal of Forecasting, vol. 17, pp. 481–495, 1998. J. Moody, “Forecasting the Economy with Neural Nets: A survey of Challenges and Solutions,” in Neural Networks: Tricks of the Trade, G. Orr and K.-R. Mueller, Eds. Berlin: Springer, 1998, pp. 347–371. A.-P. Refenes, A. Burgess, and Y. Bentz, “Neural Networks in Financial Engineering: A Study in Methodology,” IEEE Transactions on Neural Networks, vol. 8, no. 6, pp. 1222–1267, 1997. T. Bera and L. Higgins, “ARCH Models: Properties, Estimation and Testing,” Journal of Economic Surveys, vol. 7, no. 4, pp. 305–362, 1993. D. Gujarati, Essentials of Econometrics. McGraw-Hill, 1992. J. Armstrong and F. Collopy, “Error measures for generalizing about forecasting methods: Empirical comparisons,” International Journal of Forecasting, vol. 8, no. 1, pp. 69–80, 1992. Y. Wang and F. Wahl, “Multiobjective neural network for image reconstruction,” IEE Proceedings - Vision, Image and Signal Processing, vol. 144, no. 4, pp. 233–236, 1997. C.-G. Wen and C.-S. Lee, “A neural network approach to multiobjective optimization for water quality management in a river basin,” Water Resources Research, vol. 34, no. 3, pp. 427–436, 1998. J. Yao and C. Tan, “Time dependant Directional Profit Model for Financial Time Series Forecasting,” in IJCNN 2000, Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks, 2000.\n\n E. Saad, D. Prokhorov, and D. 
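For illustration, the strict dominance test of (16)-(17) and the $\tilde{C}$ measure of (20) translate directly into code. The sketch below is in JavaScript; the paper gives no code, the function names are my own, and all objectives are assumed to be minimized:

```javascript
// Strict Pareto dominance (minimization): u dominates v if u is no worse
// in every objective and strictly better in at least one.
function dominates(u, v) {
  let strictlyBetter = false;
  for (let i = 0; i < u.length; i++) {
    if (u[i] > v[i]) return false;       // worse in objective i
    if (u[i] < v[i]) strictlyBetter = true;
  }
  return strictlyBetter;
}

// Modified C measure: fraction of members of B strictly dominated
// by at least one member of A.
function cTilde(A, B) {
  const dominated = B.filter(b => A.some(a => dominates(a, b))).length;
  return dominated / B.length;
}

console.log(dominates([1, 2], [2, 3])); // true
console.log(cTilde([[1, 1]], [[2, 2], [0, 3]])); // 0.5
```

Note that `dominates(x, x)` is false for any vector, so `cTilde(A, A)` is always 0, matching the property stated above.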
https://webreference.com/javascript/references/do-while-loop/
# The do...while Loop in JavaScript

The `do...while` iteration belongs to the category of exit-controlled loops. Simply put, this means that the loop body executes at least once, regardless of whether the test condition is true or false, because the condition is evaluated at the end of the loop body rather than at the beginning, as in the `while` loop.

## Do...while Loop Syntax

```
do {
  // code to be executed
} while (test condition);
```

As we can see, the `do` keyword marks the beginning of the loop body, followed by the code to be executed. On the third line, the `while` keyword signals the end of the loop body, followed by the test condition in parentheses.

You may have already come across this, but the key difference between the `do...while` and `while` loops is that the former always executes the loop body at least once, whereas the latter may not execute the loop body at all if the test condition is false to begin with.

## Do...while Loop Examples

### Using Do...While for a User Input Within a Range

Let's say we want to prompt the user to enter a number between 1 and 10 and keep prompting them until they do so.

```
let number;

do {
  number = prompt("Please enter a number between 1 and 10");
} while (number < 1 || number > 10);
```

Initially, we ask the user to enter a number, and the test condition checks whether the number they entered is outside the specified range. If it is, the loop executes again, asking the user to enter a new number. The loop continues until the user enters a number within the range.

### Building a Blackjack Game with Do...While

To make things interesting, we can simulate a game of blackjack where the player has to hit (draw a card) or stand (end their turn) based on their current hand.
We can use the `do...while` loop to keep the game going until the player decides to stand or their hand exceeds 21 (busts).

```
// Create an array of cards
const cards = [
  "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K", "A"
];

// Initialize variables for the player's hand and total hand value
let hand = [];
let total = 0;

// Declare variables for the current card and the player's choice
let card;
let choice;

// Loop until the player chooses to stand or busts
do {
  // Draw a card at random and add it to the player's hand
  card = cards[Math.floor(Math.random() * cards.length)];
  hand.push(card);

  // Print the current hand
  console.log("Current hand: " + hand.join(", "));

  // Calculate the total hand value
  total = hand.reduce((acc, val) => {
    if (val === "A" && acc <= 10) { // Use an Ace as 11 or 1 based on the total value of the hand
      return acc + 11;
    } else if (val === "A" && acc > 10) {
      return acc + 1;
    } else if (["J", "Q", "K"].includes(val)) { // Face cards are worth 10
      return acc + 10;
    } else {
      return acc + parseInt(val); // Numeric cards are worth their face value
    }
  }, 0);

  // Check if the player has busted or won
  if (total > 21) {
    console.log("Bust! You lose.");
    break;
  } else if (total === 21) {
    console.log("Blackjack! You win.");
    break;
  }

  // Prompt the player to hit or stand
  choice = prompt("Do you want to hit or stand?");

} while (choice.toLowerCase() === "hit"); // Loop as long as the player chooses to hit

// Print the final hand and total
console.log("Final hand: " + hand.join(", ") + " (total: " + total + ")");
```
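The at-least-once behaviour is the whole point of `do...while`, and it is easy to verify with a condition that is false from the start:

```javascript
// A do...while body runs once even when its condition is false from the start.
let doWhileRuns = 0;
do {
  doWhileRuns++;
} while (false);

// An equivalent while loop never enters its body at all.
let whileRuns = 0;
while (false) {
  whileRuns++;
}

console.log(doWhileRuns); // 1
console.log(whileRuns);   // 0
```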
https://community.openai.com/t/simple-math-question-that-gpt-4-cant-solve/327848
# Simple Math Question that GPT-4 can't solve

So, the other day I asked ChatGPT 4 to solve the following question:

`If x and y are the tens digit and the units digit, respectively, of the product 725,278 * 67,066, what is the value of x + y? Can you explain the easiest solution without calculating the whole number?`

```
To find the tens digit of the product, we only need to consider the multiplication
of the tens digit of the first number (7) with the units digit of the second number (6)
and add it to the multiplication of the units digit of the first number (8) with
the tens digit of the second number (6), and then consider the tens digit of the result.

7×6=42
8×6=48

42+48=90

The tens digit of 90 is 9.
x+y=9+8=17

The value of x+y is 17.
```

The answer to this question is 12, and despite the answer being wrong, the logic is correct up to a point: the model forgot the carry of 4 from the units product 8×6=48, so the tens digit is (42+48+4) mod 10 = 4, not 9, giving x+y = 4+8 = 12. I gave ChatGPT another shot with different numbers, but it couldn't solve any of them.
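The correct answer can be checked by working modulo 100, since only the last two digits of each factor affect the last two digits of the product (a quick sketch; the function name is mine):

```javascript
// x and y are the tens and units digits of a * b; only a mod 100 and
// b mod 100 matter, so the whole product never needs to be formed.
function lastTwoDigitsSum(a, b) {
  const lastTwo = ((a % 100) * (b % 100)) % 100;
  const tens = Math.floor(lastTwo / 10);
  const units = lastTwo % 10;
  return tens + units;
}

console.log(lastTwoDigitsSum(725278, 67066)); // 12  (78 * 66 = 5148, so the product ends in 48)
```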
https://tools.carboncollective.co/compound-interest/99299-at-19-percent-in-28-years/
# What is the compound interest on $99,299 at 19% over 28 years?

If you want to invest $99,299 over 28 years, and you expect it will earn 19.00% in annual interest, your investment will have grown to become $12,949,702.58.

If you're on this page, you probably already know what compound interest is and how a sum of money can grow at a faster rate each year, as the interest is added to the original principal amount and recalculated for each period.

The actual rate that $99,299 compounds at is dependent on the frequency of the compounding periods. In this article, to keep things simple, we are using an annual compounding period of 28 years, but it could be monthly, weekly, daily, or even continuously compounding.

The formula for calculating compound interest is:

$$A = P(1 + \dfrac{r}{n})^{nt}$$

• A is the amount of money after the compounding periods
• P is the principal amount
• r is the annual interest rate
• n is the number of compounding periods per year
• t is the number of years

We can now input the variables for the formula to confirm that it does work as expected and calculates the correct amount of compound interest.

For this formula, we need to convert the rate, 19.00%, into a decimal, which would be 0.19.

$$A = 99299(1 + \dfrac{ 0.19 }{1})^{ 28}$$

As you can see, we are ignoring the n when calculating this to the power of 28 because our example is for annual compounding, or one period per year, so 28 × 1 = 28.

## How the compound interest on $99,299 grows over time

The interest from previous periods is added to the principal amount, and this grows the sum at a rate that is always accelerating.
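As a quick sanity check on the formula, here is a small JavaScript sketch (the function name is illustrative):

```javascript
// Compound interest: A = P * (1 + r/n)^(n*t)
function compoundAmount(principal, rate, periodsPerYear, years) {
  return principal * Math.pow(1 + rate / periodsPerYear, periodsPerYear * years);
}

const amount = compoundAmount(99299, 0.19, 1, 28);
console.log(amount.toFixed(2)); // ≈ 12,949,702.58, matching the figure above
```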
The table below shows how the amount increases over the 28 years it is compounding:

| Year | Start Balance | Interest | End Balance |
|------|---------------|----------|-------------|
| 1 | $99,299.00 | $18,866.81 | $118,165.81 |
| 2 | $118,165.81 | $22,451.50 | $140,617.31 |
| 3 | $140,617.31 | $26,717.29 | $167,334.60 |
| 4 | $167,334.60 | $31,793.57 | $199,128.18 |
| 5 | $199,128.18 | $37,834.35 | $236,962.53 |
| 6 | $236,962.53 | $45,022.88 | $281,985.41 |
| 7 | $281,985.41 | $53,577.23 | $335,562.64 |
| 8 | $335,562.64 | $63,756.90 | $399,319.54 |
| 9 | $399,319.54 | $75,870.71 | $475,190.26 |
| 10 | $475,190.26 | $90,286.15 | $565,476.41 |
| 11 | $565,476.41 | $107,440.52 | $672,916.92 |
| 12 | $672,916.92 | $127,854.22 | $800,771.14 |
| 13 | $800,771.14 | $152,146.52 | $952,917.65 |
| 14 | $952,917.65 | $181,054.35 | $1,133,972.01 |
| 15 | $1,133,972.01 | $215,454.68 | $1,349,426.69 |
| 16 | $1,349,426.69 | $256,391.07 | $1,605,817.76 |
| 17 | $1,605,817.76 | $305,105.37 | $1,910,923.14 |
| 18 | $1,910,923.14 | $363,075.40 | $2,273,998.53 |
| 19 | $2,273,998.53 | $432,059.72 | $2,706,058.25 |
| 20 | $2,706,058.25 | $514,151.07 | $3,220,209.32 |
| 21 | $3,220,209.32 | $611,839.77 | $3,832,049.09 |
| 22 | $3,832,049.09 | $728,089.33 | $4,560,138.42 |
| 23 | $4,560,138.42 | $866,426.30 | $5,426,564.72 |
| 24 | $5,426,564.72 | $1,031,047.30 | $6,457,612.02 |
| 25 | $6,457,612.02 | $1,226,946.28 | $7,684,558.30 |
| 26 | $7,684,558.30 | $1,460,066.08 | $9,144,624.38 |
| 27 | $9,144,624.38 | $1,737,478.63 | $10,882,103.01 |
| 28 | $10,882,103.01 | $2,067,599.57 | $12,949,702.58 |

We can also display this data on a chart to show you how the compounding increases with each compounding period.

As you can see from the compounding chart for $99,299 at 19.00%, over a long enough period of time the rate at which it grows increases, as the interest is added to the balance and new interest is calculated from that figure.

## How long would it take to double $99,299 at 19% interest?
Another commonly asked question about compounding interest is how long it would take to double your investment of $99,299, assuming an interest rate of 19.00%.

We can calculate this very approximately using the Rule of 72.

The formula for this is very simple:

$$Years = \dfrac{72}{Interest\: Rate}$$

By dividing 72 by the interest rate given, we can calculate the rough number of years it would take to double the money. Let's add our rate to the formula and calculate this:

$$Years = \dfrac{72}{ 19 } = 3.79$$

Using this, we know that any amount we invest at 19.00% would double itself in approximately 3.79 years. So $99,299 would be worth $198,598 in ~3.79 years.

We can also calculate the exact length of time it will take to double an amount at 19.00% using a slightly more complex formula:

$$Years = \dfrac{log(2)}{log(1 + 0.19)} = 3.98\; years$$

Here, we use the decimal format of the interest rate and the logarithm function to calculate the exact value.

As you can see, the exact calculation is very close to the Rule of 72 calculation, which is much easier to remember.

Hopefully, this article has helped you to understand the compound interest you might achieve from investing $99,299 at 19.00% over a 28 year investment period.
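Both the Rule of 72 approximation and the exact logarithm formula can be sketched in a few lines (function names are illustrative):

```javascript
// Rule of 72: rough doubling time from a percentage rate.
function ruleOf72Years(ratePercent) {
  return 72 / ratePercent;
}

// Exact doubling time: log(2) / log(1 + r), with r as a decimal.
function exactDoublingYears(rateDecimal) {
  return Math.log(2) / Math.log(1 + rateDecimal);
}

console.log(ruleOf72Years(19).toFixed(2));        // 3.79
console.log(exactDoublingYears(0.19).toFixed(2)); // 3.98
```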
https://solvedlib.com/spring-2020-otiumhort-tnrs-160-ai-i-ionto-math,133809
# Spring 2020 NURS 160 Math Calculation

###### Question:
Your patient, weighing 44 lbs, has an order for Tylenol (acetaminophen) 325 mg stat. The recommendation for acetaminophen is 10-15 mg/kg/dose. Tylenol solution is available as 160 mg per teaspoon.

A. Is this ordered dose safe?
B. How many mL would you give?

#### Similar Solved Questions

##### A surface is represented by the multivariable function f(x, y) = x + 2y + 4x + y. a) Calculate the coordinates of the stationary points. b) Classify all stationary points.

##### Use Stokes' theorem to evaluate ∮ F · dx, where F = (-22,,-T) and C is the ellipse x² + y² = 1, z = y + 1, traversed clockwise as seen from the origin. [6 marks]

##### A bounded subset $E$ of $\mathbb{R}$ is said to be of (one-dimensional) content zero if the following condition holds: For every $\epsilon>0$, there is a finite number of closed intervals whose union contains $E$ and the sum of whose lengths is less than $\epsilon$. Prove the following statements: (i) A subset of a set of content zero is of content zero. (ii) A finite union of sets of content zero is of content zero. (iii) If $E$ is of content zero and $\partial E$ denotes the boundary of $E$, then
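Returning to the Tylenol question at the top of this page, the dose check can be worked through numerically. This is a sketch using the standard 1 kg = 2.2 lb conversion; the variable names are mine:

```javascript
// Convert weight: 44 lb / 2.2 (lb per kg) = 20 kg
const weightKg = 44 / 2.2;

// Safe range at 10-15 mg/kg/dose
const minDoseMg = 10 * weightKg; // 200 mg
const maxDoseMg = 15 * weightKg; // 300 mg

const orderedMg = 325;
const isSafe = orderedMg >= minDoseMg && orderedMg <= maxDoseMg;
console.log(isSafe); // false: 325 mg is above the 300 mg maximum

// Volume for the ordered dose at 160 mg per teaspoon (5 mL)
const volumeMl = (orderedMg / 160) * 5;
console.log(volumeMl.toFixed(2)); // about 10.16 mL
```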
##### The figure below shows a wire carrying an electric current through a magnetic field. What is the direction of the magnetic force on the wire? (Options: the right of the page, the top of the page, the left of the page, into the page, out of the page, the bottom of the page.)

##### A fair die is rolled 81 times and the sum X of the rolls is calculated. What is the expectation and standard deviation of X? Explain why.

##### If f is continuous and $\int_0^{16} f(x)\,dx = 12$, find $\int_0^{4} x f(x^2)\,dx$.

##### Birds use color to select and avoid certain types of food. The authors of a particular article studied the pecking behavior of 1-day-old bobwhites. In an area painted white, they inserted four pins with different colored heads. The color of the pin chosen on the bird's first peck was noted for e...

##### Construct the frequency distribution for the temperature data (°F) above, beginning with the class "less than 60 °F".

##### EFC 3213 COMPUTING II (NUMERICAL METHOD) Assignment 2. Group Assignment (3-4 persons per group). Students must use MATLAB software in order to solve the problems given and write a report containing all the necessary evidence proving that the work has been done. Any theory, assumption or any related di...

##### Question: 1. When using the percentage-of-completion method of accounting for long-term contracts, the percentage of completion used to recognize gross profit in the first year usually is determined by measuring: a. costs incurred in the first year, divided by estimated rem...

##### Measurement: Please keep your response to this post to a maximum of ten sentences. Can you think of a reasonable scenario when you may not want to report the estimated digit in a measurement? If you cannot come up with a scenario, state why you would always report the estimated digit in a measurement.

##### Question 7 (6 points): Compound X is an impurity in a sample of Compound Y. Compound X has a melting point of 80-81 °C; Compound Y has a melting point of 107-109 °C. Choose a suitable solvent for purification of Y by recrystallization and carefully explain your answer. [Solvent table: Water, Ethanol, Ethyl Acetate, Diethyl ether, p-xylene, Hexanes, with B.P. (°C) and the solubility of each compound in cold (room temp) and hot (solvent boiling point) solvent.]
Choose a suitable solvent for purification of Y by recrystallization and carefully explain your answer. Solvents: Water, Ethanol, Ethyl ...\n##### An environmentalist wishes to survey Canadians to determine what percentage support strict environmental controls on the Oil and Gas Industry. The environmentalist randomly picks n=1000 Canadians, and finds that 694 favour strict environmental controls on the Oil and Gas Industry. (a) Find a 99% confidence interval for p without using a bootstrap sample. Use at least four decimal points for your lower and upper bounds. Lower Bound = ; Upper Bound =\nAn environmentalist wishes to survey Canadians to determine what percentage support strict environmental controls on the Oil and Gas Industry. The environmentalist randomly picks n=1000 Canadians, and finds that 694 favour strict environmental controls on the Oil and Gas Industry. (a) Find a 99% ...\n##### 4. A small child gives a plastic frog a big push at the bottom of a...\n4. A small child gives a plastic frog a big push at the bottom of a slippery (frictionless) 2.0-m-long ramp inclined at 35° to the horizontal, starting it with a speed of 5.6 m/s.
What is the frog's speed as it flies off the top of the ramp?...\n##### $\lim _{h \rightarrow 0} \frac{\tan \left(\frac{\pi}{6}+h\right)-\tan \left(\frac{\pi}{6}\right)}{h}=$ $\begin{array}{ll}{\text { (A) }} & {\frac{4}{3}} \\ {\text { (B) }} & {\sqrt{3}} \\ {\text { (C) }} & {0} \\ {\text { (D) }} & {\frac{3}{4}}\end{array}$\n$\lim _{h \rightarrow 0} \frac{\tan \left(\frac{\pi}{6}+h\right)-\tan \left(\frac{\pi}{6}\right)}{h}=$ $\begin{array}{ll}{\text { (A) }} & {\frac{4}{3}} \\ {\text { (B) }} & {\sqrt{3}} \\ {\text { (C) }} & {0} \\ {\text { (D) }} & {\frac{3}{4}}\end{array}$...\n##### How can a rise in temperature affect the production of food crops?\nHow can a rise in temperature affect the production of food crops?...\n##### Cassidy achieved the following grades in his fall/winter term: CHEM 103: W, MATH 100: B+, ENCMP 100: B-, ENGG 130: , PHYS 130: , ENGG 100: CR, CHEM 103: B-, MATH 101: A-, MATH 102: , EN PH 131: C+, ENGL 1993, ENGG 160: CR. What is Cassidy's PAF? Remember: PAF is calculated to decimal points using standard rounding rules. When calculating PAF, first calculate your GPA to decimal points and subtract the PAF deduction. Answer:\nCassidy achieved the following grades in his fall/winter term: CHEM 103: W, MATH 100: B+, ENCMP 100: B-, ENGG 130: , PHYS 130: , ENGG 100: CR, CHEM 103: B-, MATH 101: A-, MATH 102: , EN PH 131: C+, ENGL 1993, ENGG 160: CR. What is Cassidy's PAF? Remember: PAF is calculated to decimal points using standard roundin...\n##### 1. Discuss developmental care associated with pregnancy.\n1. Discuss developmental care associated with pregnancy....\n##### 3) Fisheries in the open ocean can be considered as open-access resources. Assume that fishing efforts...\n3) Fisheries in the open ocean can be considered as open-access resources. Assume that fishing efforts are measured in number of boat trips.
The following table provides total product or total output of the fisheries. Number of boat trips; Total Output (hundred tons): 0 100 200 300 10 20 30 38 500 600 700 8...\n##### THEOREM: Let $(X_1, d_1)$ and $(X_2, d_2)$ be metric spaces, let $f: X_1 \to X_2$ be continuous on $X_1$, and let $D$ be a compact subset of $X_1$. Then $f(D)$ is a compact subset of $X_2$.\nTHEOREM: Let $(X_1, d_1)$ and $(X_2, d_2)$ be metric spaces, let $f: X_1 \to X_2$ be continuous on $X_1$, and let $D$ be a compact subset of $X_1$. Then $f(D)$ is a compact subset of $X_2$....\n##### Almost passed a bill authorizing the death penalty for homosexuals, but it was deterred by international...\nalmost passed a bill authorizing the death penalty for homosexuals, but it was deterred by international condemnation. France; The United States; Uganda; Denmark...\n##### (Click on the following icon in order to copy its contents into a spreadsheet.) Income Statement...\n(Click on the following icon in order to copy its contents into a spreadsheet.) Income Statement Accounts for the Year Ending 2017 (Account: Balance): Cost of goods sold $341,000; Interest expense $82,000; Taxes $56,800; Revenue $744,000; Selling, general, and administrative expenses $62,000; Depreciation $1...\n##### **9. Steel rails 20 m long are laid on a cold winter day at -10 °C. How much space must be left between the rails to allow...\n**9. Steel rails 20 m long are laid on a cold winter day at -10 °C. How much space must be left between the rails to allow for expansion at a summer temperature of 40 °C? **10. If the aluminum cap is too hard to unscrew on a new ...\n##### Please answer all multiple choice. All questions need to be answered! Questions 1-20 are multiple choice...\nPlease answer all multiple choice. All questions need to be answered! Questions 1-20 are multiple choice questions. Please mark the correct circle on the Scantron form. Only the Scantron form will be graded.
Scantron must be filled in before time is called. Which of the following acids is not ...\n##### Hyperconjugation Partial MO Diagram, Alkenes: σ, π, σ + π*. Figure 2. Hyperconjugative interactions in alkenes: σC-H, π*, σC-H\nHyperconjugation Partial MO Diagram, Alkenes: σ, π, σ + π*. Figure 2. Hyperconjugative interactions in alkenes: σC-H, π*, σC-H...\n##### (1 point) Suppose u1, u2, u3 is an orthogonal set of vectors in R^5. Let w be a vector in Span(u1, u2, u3) such that u1 · u1 = 33, u2 · u2 = 21, u3 · u3 = 36, w · u1 = 165, w · u2 = 84, w · u3 = -72, then w = ...\n(1 point) Suppose u1, u2, u3 is an orthogonal set of vectors in R^5. Let w be a vector in Span(u1, u2, u3) such that u1 · u1 = 33, u2 · u2 = 21, u3 · u3 = 36, w · u1 = 165, w · u2 = 84, w · u3 = -72, then w = ...\n##### 1. In the North American population, the average IQ is 100. A team of scientists want to test a new medication to see if it has either a positive or negative effect on intelligence or no effect at all. A sample of 39 participants who have taken the medication has been investigated and their IQ after treatment is noted as follows: 95 90 100 110 108 105 98 105 120 115 99 100 90 118 112 110 120 120 95 100 90 110 100 105 100 90 105 103 108 112 114 100 88 95 105 103 105 107 98. Select the correct H0: a. the ave\n1. In the North American population, the average IQ is 100. A team of scientists want to test a new medication to see if it has either a positive or negative effect on intelligence or no effect at all. A sample of 39 participants who have taken the medication has been investigated and their IQ after...\n##### For MAC = 36-2E and MD = E, (i) What is the socially desirable level of emissions? (ii)...\nFor MAC = 36-2E and MD = E, (i) What is the socially desirable level of emissions? (ii) What is the amount of net social benefits received when the emission level is reduced to 10 units (i.e. 
E=10)?...\n##### Given that $M_{e} = 81 M_{m}$ and $R_{e} = 3.5 R_{m}$. As we know, $g = \frac{GM}{R^{2}}$. Substituting the above values, $\frac{g_{m}}{g_{e}} = \frac{M_{m}}{M_{e}}\left(\frac{R_{e}}{R_{m}}\right)^{2} = \frac{(3.5)^{2}}{81} \approx 0.15$\nGiven that $M_{e} = 81 M_{m}$ and $R_{e} = 3.5 R_{m}$. As we know, $g = \frac{GM}{R^{2}}$. Substituting the above values, $\frac{g_{m}}{g_{e}} = 0.15$...\n##### Please show all work; need help with ALL parts of one question (Assignment 3)\nPlease show all work; need help with ALL parts of one question (Assignment 3)...
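The die-sum question in the record above ("A fair die is rolled 81 times and the sum X of the rolls is calculated") has a closed-form answer, since means and variances of independent rolls add. The following short check is not part of the original post, just a verification of that arithmetic:

```python
from fractions import Fraction
from math import sqrt

# One fair die: mean and variance from first principles.
faces = range(1, 7)
mean_one = Fraction(sum(faces), 6)                             # 7/2
var_one = Fraction(sum(f * f for f in faces), 6) - mean_one**2 # 35/12

# Sum X of 81 independent rolls: expectations and variances add.
n = 81
mean_X = n * mean_one   # 567/2  = 283.5
var_X = n * var_one     # 2835/12 = 236.25
sd_X = sqrt(var_X)      # ≈ 15.37

print(float(mean_X), float(var_X), round(sd_X, 2))
```

So E[X] = 283.5 and SD(X) ≈ 15.37, which is the justification the question asks for.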
] | [
null,
"https://i.imgur.com/xWaWih1.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8085318,"math_prob":0.9364265,"size":15970,"snap":"2022-40-2023-06","text_gpt3_token_len":4709,"char_repetition_ratio":0.087435804,"word_repetition_ratio":0.5051095,"special_character_ratio":0.3011271,"punctuation_ratio":0.13602167,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.977856,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-06T00:23:09Z\",\"WARC-Record-ID\":\"<urn:uuid:115a7e79-e20c-4c33-a87e-aa88bc8c176c>\",\"Content-Length\":\"76790\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b604df2a-9ff9-410f-9dc6-5e3097467f67>\",\"WARC-Concurrent-To\":\"<urn:uuid:307de0fa-e5a9-45ac-bef2-bf9fc8d68de0>\",\"WARC-IP-Address\":\"104.21.12.185\",\"WARC-Target-URI\":\"https://solvedlib.com/spring-2020-otiumhort-tnrs-160-ai-i-ionto-math,133809\",\"WARC-Payload-Digest\":\"sha1:PD4BA6RLHFY3FGJ3GOIDX2KHESEYHSND\",\"WARC-Block-Digest\":\"sha1:OBH3ZAVRZJFTIZMD4WVFK4PMNH7NZAKS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500294.64_warc_CC-MAIN-20230205224620-20230206014620-00528.warc.gz\"}"} |
http://www.earth.ox.ac.uk/~davewa/pt/th_axcode.html | [
"",
null,
"",
null,
"Applications\n\nBibliography\n\nBugs and quirks\n\nBulk compositions\n\nModal proportions\n\n# Practical Aspects of Mineral Thermobarometry\n\n## Data File Construction: activity coding\n\n### Notes on coding for mineral solid solutions\n\nThe new pattern for coding activity-composition relations in THERMOCALC 3 has some advantages in logic and clarity over the earlier system. Here's an example for Fe-Mg-Ca garnet, from which one can see how a data file is constructed:\n\n```%___________________________________________\ng 3\nx(g) 0.95 % Fe/(Fe+Mg)\nz(g) 0.03 % Ca/(Fe+Mg+Ca)\n\np(gr) 1 1 0 1 1 z\np(alm) 1 2 1 1 -1 z 0 1 1 x\np(py) 1 2 1 1 -1 z 1 1 -1 x\n\nsf\n\nW(gr,alm) 0 0 0\nW(gr,py) 33 0 0\nW(alm,py) 2.5 0 0\n\n3\nxFeM1 1 2 1 1 -1 z 0 1 1 x\nxMgM1 1 2 1 1 -1 z 1 1 -1 x\nxCaM1 1 1 0 1 1 z\n\ngr 1 1 xCaM1 3\nalm 1 1 xFeM1 3\npy 1 1 xMgM1 3\n% ___________________________________________```\n\nSo, the generalised structure is as follows:\n\n1. The first line consists of the phase name and the number of end members it is made up of.\n2. Next the composition variable names are defined (x and z in this case) and initial values given. It's usually wise to choose values similar to those expected in the calculated result.\n3. Then the end member names are listed, along with the formulae for calculating proportions of them in terms of the composition variables. The sequence you give them in controls the sequence in which the ideal activity expressions and the interactions parameters must be supplied. Some thought has to be given to these, as they are used both to calculate activity coefficients and to reconstruct the bulk composition of the phase from the composition variables.\n4. Then you indicate what solution model to use (sf = symmetric formalism, ideal = ideal mixing on sites)\n5. If symmetric formalism is chosen, you then list the W's. If the end members are 1, 2, 3, these are given in the order W12, W13, W23.\n6. 
Next, you tell the program how many site fractions are needed to work out the activities, and define the names of these site fractions along with instructions on how to calculate them from the composition variables.\n7. Finally, you tell the program how to calculate the (ideal) mixing-on-sites activities of each end member, in terms of the site fractions. They must be listed in the same sequence as in section 3 above."
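The garnet coding above can be checked by hand. The sketch below is not part of THERMOCALC; it is a Python transcription of the example data file, assuming the usual reading of the `p(...)` lines as products of linear factors in the composition variables (so `1 2 1 1 -1 z 0 1 1 x` decodes to (1 - z)·x). The symmetric-formalism W terms are omitted; only proportions, site fractions, and ideal mixing-on-sites activities are computed:

```python
# Garnet example from the data file: x = Fe/(Fe+Mg), z = Ca/(Fe+Mg+Ca).
x, z = 0.95, 0.03

# End-member proportions, decoded from the p(...) lines:
#   p(gr) = z,  p(alm) = (1 - z) * x,  p(py) = (1 - z) * (1 - x)
p = {"gr": z, "alm": (1 - z) * x, "py": (1 - z) * (1 - x)}
assert abs(sum(p.values()) - 1.0) < 1e-12  # proportions must sum to 1

# Site fractions on the mixing site (same expressions in this model):
xCaM1, xFeM1, xMgM1 = p["gr"], p["alm"], p["py"]

# Ideal mixing-on-sites activities: three mixing sites per formula unit,
# so each activity is the cube of the corresponding site fraction
# (gr 1 1 xCaM1 3, etc. in the data file).
a_ideal = {"gr": xCaM1**3, "alm": xFeM1**3, "py": xMgM1**3}

for em in ("gr", "alm", "py"):
    print(f"{em}: p = {p[em]:.4f}, a_ideal = {a_ideal[em]:.6f}")
```

For x = 0.95, z = 0.03 this gives p(alm) = 0.9215 and p(py) = 0.0485; the non-ideal activity coefficients from the listed W(gr,py) and W(alm,py) values would multiply these ideal activities.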
] | [
null,
"http://www.earth.ox.ac.uk/~davewa/pt/pt_img1.jpg",
null,
"http://www.earth.ox.ac.uk/~davewa/pt/pointer.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.85789514,"math_prob":0.98234046,"size":2455,"snap":"2022-27-2022-33","text_gpt3_token_len":694,"char_repetition_ratio":0.13994288,"word_repetition_ratio":0.072234765,"special_character_ratio":0.29898167,"punctuation_ratio":0.07377049,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95245916,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,5,null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-28T06:19:38Z\",\"WARC-Record-ID\":\"<urn:uuid:7f01d2bc-e3df-4785-90af-fc4903fac1cb>\",\"Content-Length\":\"5646\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8d0a1116-2d7e-4c98-b927-1121251d2316>\",\"WARC-Concurrent-To\":\"<urn:uuid:ee05ccc6-82f8-4639-9c6f-84a210616fd0>\",\"WARC-IP-Address\":\"163.1.22.5\",\"WARC-Target-URI\":\"http://www.earth.ox.ac.uk/~davewa/pt/th_axcode.html\",\"WARC-Payload-Digest\":\"sha1:VKVJR7XMICR6ZJKAXKBBVQ4CMSEBA3EJ\",\"WARC-Block-Digest\":\"sha1:P2YN7LHS7E3D3DPINEHHIQ2VIRF6WAPB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103355949.26_warc_CC-MAIN-20220628050721-20220628080721-00404.warc.gz\"}"} |
https://www.educationquizzes.com/us/elementary-school-3rd-4th-and-5th-grade/math/properties-of-numbers-difficult/ | [
"",
null,
"13 × 13 = 169.\n\n# Properties of Numbers 2\n\nThis Math quiz is called 'Properties of Numbers 2' and it has been written by teachers to help you if you are studying the subject at elementary school. Playing educational quizzes is an enjoyable way to learn if you are in the 3rd, 4th or 5th grade - aged 8 to 11.\n\nIt costs only \$12.50 per month to play this quiz and over 3,500 others that help you with your school work. You can subscribe on the page at Join Us\n\nSome numbers are prime numbers - some numbers have other properties. As you continue through school and math lessons, you'll learn a lot more about numbers, and if you choose to go further with math when you leave school, no doubt you will come across some strange terms for numbers. Most likely, you've heard of infinity ~ this means without end ~ and numbers go on forever. Having said this, you will learn about Graham's number, TREE(3) and a googolplex!\n\nFor now, we are dealing with the easy stuff, so get 10 out of 10 in this quiz!\n\n1.\nWhich of the following statements is wrong?\nTwo is an even number and a prime number\nOther than the number 2, there are no even prime numbers\nIf you add two odd numbers, you get another odd number\nIf you add two even numbers, you get another even number\n7 + 5 = 12 which is not odd. If you want to show that a statement is not always true, find one example that shows it is wrong - like we did here\n2.\nFor those of you with stamina: How many prime numbers are there between 1 and 100 inclusive?\n29\n26\n27\n25\nWell done to those of you who bothered to do this question. The rest of you - do it later. There are 25 prime numbers between 1 and 100. Here they are: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89 and 97. Phew! Does anyone want to find out how many prime numbers there are between 1 and 1,000 inclusive? Just joking! Here's the answer anyway: 168 prime numbers. I wonder if your teacher knows that?
By the way, prime numbers are a big thing in higher math\n3.\nWhich of the following statements is wrong?\nIf a number is divisible by 9, then it is divisible by 3\nIf you add two prime numbers together, you get another prime number\nIf you subtract 1 from an even number, you get an odd number\nThere are four prime numbers between 1 and 10\n11 + 7 = 18 which is not a prime number. If you want to show that a statement is not always true, find one example that shows it is wrong - like we did here\n4.\nWhich two prime numbers have to be added together to give 38?\n11 + 27\n19 + 19\n23 + 15\n29 + 9\nThis is the only choice that has two prime numbers\n5.\nWhich of the following statements is wrong?\n13 is an odd number\n13 × 13 = 159\n13 is a prime number\n13 is the sixth smallest prime number\n13 × 13 = 169! D'oh! By the way, just because 13 is odd, this doesn't prevent it from being a prime number too\n6.\nWhich number is the odd man out: 49, 15, 41, 9?\n9\n15\n49\n41\nThey are all odd, but 41 is also a prime number: a prime number is a positive whole number that is only divisible by 1 and itself but does not include the number 1\n7.\nWhat is the smallest prime number?\n2\n1\n0\n-1\nA prime number is a positive whole number that is only divisible by 1 and itself but does not include the number 1, e.g. 2, 3, 17, 41\n8.\nWhich number is the odd man out: 64, 32, 144, 100?\n144\n64\n100\n32\nThey are all even numbers, but 32 is not a square number. A square number (perfect square) is a number formed by the multiplication of another number with itself, e.g. 144 = 12 × 12; 64 = 8 × 8; 100 = 10 × 10. 
Note: 1 × 1 = 1\n9.\nWhich number is the odd man out: 71, 17, 11, 39?\n11\n17\n39\n71\nThey are all odd numbers, but only 39 is a non prime number: a prime number is a positive whole number that is only divisible by 1 and itself but does not include the number 1\n10.\nHow many prime numbers are there between 1 and 30?\n10\n12\n11\n9\nHere they are: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29\nAuthor: Frank Evans"
] | [
null,
"https://www.educationquizzes.com/library/11-Plus-Maths/Properties-of-Numbers-(Difficult).jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.96658397,"math_prob":0.9931704,"size":945,"snap":"2020-10-2020-16","text_gpt3_token_len":223,"char_repetition_ratio":0.12752391,"word_repetition_ratio":0.0,"special_character_ratio":0.24126984,"punctuation_ratio":0.0964467,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99481565,"pos_list":[0,1,2],"im_url_duplicate_count":[null,7,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-22T08:01:39Z\",\"WARC-Record-ID\":\"<urn:uuid:5eaf15cb-ebd8-40a5-a269-972b6d562abd>\",\"Content-Length\":\"36282\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:796ca78c-5236-4b0f-981c-63ca8fe5b14e>\",\"WARC-Concurrent-To\":\"<urn:uuid:7937351b-39cd-4bb9-a0dd-eee8b319df1c>\",\"WARC-IP-Address\":\"78.137.117.241\",\"WARC-Target-URI\":\"https://www.educationquizzes.com/us/elementary-school-3rd-4th-and-5th-grade/math/properties-of-numbers-difficult/\",\"WARC-Payload-Digest\":\"sha1:E45T43HW7FMATR3EVNNKJZ6QJQHMY2N7\",\"WARC-Block-Digest\":\"sha1:AOYXMMH57UV47RDNY6IOPHWX3D3BDW2B\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875145654.0_warc_CC-MAIN-20200222054424-20200222084424-00201.warc.gz\"}"} |
https://hy.wikipedia.org/wiki/%D4%BF%D5%A1%D5%B2%D5%A1%D5%BA%D5%A1%D6%80:Graph:Chart | [
"# Կաղապար:Graph:Chart\n\nTemplate description\n\n## Parameters\n\n• width: width of the chart\n• height: height of the chart\n• type: type of the chart: line for line charts, area for area charts, and rect for (column) bar charts. Multiple series can be stacked using the stacked prefix, e.g. stackedarea.\n• interpolate: interpolation method for line and area charts. It is recommended to use monotone for a monotone cubic interpolation – further supported values are listed at https://github.com/trifacta/vega/wiki/Marks#area.\n• colors: color palette of the chart as a comma-separated list of colors. The color values must be given either as #rgb/#rrggbb/#aarrggbb or by a CSS color name. For #aarrggbb the aa component denotes the alpha channel, i.e. FF=100% opacity, 80=50% opacity/transparency, etc. (The default color palette is category10).\n• xAxisTitle and yAxisTitle: captions of the x and y axes\n• xAxisMin, xAxisMax, yAxisMin, and yAxisMax: minimum and maximum values of the x and y axes\n• xAxisFormat and yAxisFormat: changes the formatting of the axis labels. Supported values are listed at https://github.com/mbostock/d3/wiki/Formatting. For example, the format d can be used to suppress the separators for year numbers.\n• x: the x-values as a comma-separated list\n• y or y1, y2, …: the y-values for one or several data series, respectively\n• legend: show legend (only works in case of multiple data series)\n• y1Title, y2Title, …: defines the label of the respective data series in the legend\n\nNote: In the editor preview the graph extension creates a canvas element with vector graphics. However, when saving the page a PNG raster graphic is generated instead.\n\n## Examples\n\nLine Chart:\n\n{{Graph:Chart|width=400|height=100|type=line|x=1,2,3,4,5,6,7,8|y=10, 12, 6, 14, 2, 10, 7, 9}}",
null,
"Note: The y-axis starts from the smallest y value, though this can be overridden with the yAxisMin parameter.\n\nArea chart:\n\n{{Graph:Chart|width=400|height=100|type=area|x=1,2,3,4,5,6,7,8|y=10, 12, 6, 14, 2, 10, 7, 9}}",
null,
"Note: The y-axis starts from zero\n\nBar chart:\n\n{{Graph:Chart|width=400|height=100|xAxisTitle=X|yAxisTitle=Y|type=rect|x=1,2,3,4,5,6,7,8|y=10, 12, 6, 14, 2, 10, 7, 9}}",
null,
"Line chart with more than one data series, using colors:\n\n{{Graph:Chart|width=400|height=100|xAxisTitle=X|yAxisTitle=Y|legend=Legend|type=line|x=1,2,3,4,5,6,7,8|y1=10, 12, 6, 14, 2, 10, 7, 9 |y2=2,4,6,8,13,11,9,2|colors=#0000aa,#ff8000}}",
null,
"Area chart with more than one data series showing blended overlap:\n\n{{Graph:Chart|width=400|height=100|xAxisTitle=X|yAxisTitle=Y|legend=Legend|type=area|x=1,2,3,4,5,6,7,8|y1=10, 12, 6, 14, 2, 10, 7, 9 |y2=2,4,6,8,13,11,9,2|colors=#800000aa,#80ff8000}}",
null,
"Bar chart with multiple data series:\n\n{{Graph:Chart|width=400|height=100|xAxisTitle=X|yAxisTitle=Y|legend=Legend|type=rect|x=1,2,3,4,5,6,7,8|y1=10, 12, 6, 14, 2, 10, 7, 9 |y2=2,4,6,8,13,11,9,2|colors=#800000aa,#80ff8000}}",
null,
"Stacked area chart with smoothed data values:\n\n{{Graph:Chart|width=400|height=100|xAxisTitle=X|yAxisTitle=Y|legend=Legend|type=stackedarea|x=1,2,3,4,5,6,7,8|y1=10, 12, 6, 14, 2, 10, 7, 9 |y2=2,4,6,8,13,11,9,2|interpolate=monotone|colors=seagreen, orchid}}",
null,
"Bar chart with stacked data series:\n\n{{Graph:Chart|width=400|height=100|xAxisTitle=X|yAxisTitle=Y|legend=Legend|type=stackedrect|x=1,2,3,4,5,6,7,8|y1=10, 12, 6, 14, 2, 10, 7, 9 |y2=2,4,6,8,13,11,9,2|y1Title=Data A|y2Title=Data B|colors=seagreen, orchid}}",
null,
""
] | [
null,
"https://hy.wikipedia.org/api/rest_v1/page/graph/png/%D4%BF%D5%A1%D5%B2%D5%A1%D5%BA%D5%A1%D6%80%3AGraph%3AChart/0/bd001718d8dcc202a3854dbec188981493e00176.png",
null,
"https://hy.wikipedia.org/api/rest_v1/page/graph/png/%D4%BF%D5%A1%D5%B2%D5%A1%D5%BA%D5%A1%D6%80%3AGraph%3AChart/0/2b94cdec1fc25241e22953e96438acd2362c80b9.png",
null,
"https://hy.wikipedia.org/api/rest_v1/page/graph/png/%D4%BF%D5%A1%D5%B2%D5%A1%D5%BA%D5%A1%D6%80%3AGraph%3AChart/0/f6ba41370b2fd4e44cece2d4b5b6720795f8dbdb.png",
null,
"https://hy.wikipedia.org/api/rest_v1/page/graph/png/%D4%BF%D5%A1%D5%B2%D5%A1%D5%BA%D5%A1%D6%80%3AGraph%3AChart/0/d71a8b31e57dd2d21a9d34eb6647e20bb12884dc.png",
null,
"https://hy.wikipedia.org/api/rest_v1/page/graph/png/%D4%BF%D5%A1%D5%B2%D5%A1%D5%BA%D5%A1%D6%80%3AGraph%3AChart/0/cb4d4973cd3e117a7c024a3990b00d74b0a02db8.png",
null,
"https://hy.wikipedia.org/api/rest_v1/page/graph/png/%D4%BF%D5%A1%D5%B2%D5%A1%D5%BA%D5%A1%D6%80%3AGraph%3AChart/0/f50d40e775409aec8b963173c89e920c482ba76a.png",
null,
"https://hy.wikipedia.org/api/rest_v1/page/graph/png/%D4%BF%D5%A1%D5%B2%D5%A1%D5%BA%D5%A1%D6%80%3AGraph%3AChart/0/915cad2158193e0ba94d5975a38664da5917fbb1.png",
null,
"https://hy.wikipedia.org/api/rest_v1/page/graph/png/%D4%BF%D5%A1%D5%B2%D5%A1%D5%BA%D5%A1%D6%80%3AGraph%3AChart/0/5ed4894f2321f201d8f5474fbcf9518e7c1499be.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.582729,"math_prob":0.96839434,"size":3733,"snap":"2019-43-2019-47","text_gpt3_token_len":1427,"char_repetition_ratio":0.13435237,"word_repetition_ratio":0.15686275,"special_character_ratio":0.33458343,"punctuation_ratio":0.24947146,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95037043,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-18T22:31:34Z\",\"WARC-Record-ID\":\"<urn:uuid:5f295a4b-c755-4841-9434-892b398e5ca6>\",\"Content-Length\":\"53773\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8bc04c26-b776-4aad-b609-5f3f29d4fa91>\",\"WARC-Concurrent-To\":\"<urn:uuid:9c62d6f7-dd59-45a7-9e60-f5cbd1e093da>\",\"WARC-IP-Address\":\"208.80.154.224\",\"WARC-Target-URI\":\"https://hy.wikipedia.org/wiki/%D4%BF%D5%A1%D5%B2%D5%A1%D5%BA%D5%A1%D6%80:Graph:Chart\",\"WARC-Payload-Digest\":\"sha1:2OJN5IEAXTPQHXSQJOIZCXOSJSNZ4SGP\",\"WARC-Block-Digest\":\"sha1:2AKYJ62JLQS23GXTZX3RBBX3TCWIHRDQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496669847.1_warc_CC-MAIN-20191118205402-20191118233402-00204.warc.gz\"}"} |
https://math.stackexchange.com/questions/1026645/flipping-a-coin-until-4-heads-occur-or-until-flipped-7-times-how-many-combinat | [
Flipping a coin until 4 heads occur, or until flipped 7 times. How many combinations are possible?\n\nQuestion: A coin is tossed until either 4 heads occur or until the coin has been tossed 7 times. How many heads/tails sequences are possible? For example, HTHTTHT, HHHH, THHTHH, and TTTTTTT are all sequences in the list of possible outcomes.\n\nI think the answer will be 276, but am not sure if it's remotely close to being right. Thanks for any and all help!\n\nA relatively short solution is the following: take all $2^7=128$ possible combinations obtainable with $7$ throws, and subtract all those containing $>4$ heads. In fact:\n\n• no sequence containing $>4$ heads is possible based on the given rules;\n\n• all sequences containing $< 4$ heads are valid;\n\n• all sequences containing exactly $4$ heads are also valid (note that these have to be considered "truncated" if $4$ heads occur before the seventh throw, but this does not alter the number of possible sequences since we truncate only a final sequence of tails).\n\nTherefore, the solution is given by $128- {7 \choose 5}-{7 \choose 6}-{7 \choose 7}$ $=128-21-7-1=99$ combinations.\n\nAnother solution, a bit longer, is the following. If no heads occur, we have ${7 \choose 0}=1$ combination. If one head occurs, we have ${7 \choose 1}=7$ combinations.
If two or three heads occur, we have ${7 \choose 2}=21$ and ${7 \choose 3}=35$ combinations, respectively.\n\nIf four heads occur, we have:\n\n• ${4 \choose 0}=1$ combination if no tails occur;\n\n• ${4 \choose 1}=4$ combinations if one tail occurs (if four heads and one tail occur, necessarily the last of the five throws has to be a head: so we can focus on the first four throws);\n\n• ${5 \choose 2}=10$ combinations if two tails occur (as above, if four heads and two tails occur, necessarily the last throw has to be a head: so we can focus on the first five throws);\n\n• similarly, we get that there are ${6 \choose 3}=20$ combinations if three tails occur.\n\nSumming all combinations, we get again\n\n$$1+7+21+35 \\ +1+4+10+20=99$$\n\n$$\sum_{k=4}^7 \binom{k-1}{3} + \sum_{k=0}^3 \binom{7}{k} = 99.$$ The first sum counts the cases where the fourth head occurs on the $k^{\rm th}$ toss, for $k = 4, 5, 6, 7$, and the second sum counts the cases where $k$ heads are obtained in $7$ coin tosses, for $k = 0, 1, 2, 3$. Since these are all mutually exclusive outcomes (no double-counting), the enumeration is complete.
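Both counting arguments can also be confirmed by brute force: walk the game tree directly, terminating a branch at the fourth head or the seventh toss. This is a verification sketch, not part of the original answers:

```python
def count_sequences(heads=0, tosses=0):
    """Count terminal head/tail sequences: stop at 4 heads or 7 tosses."""
    if heads == 4 or tosses == 7:
        return 1  # this branch is a complete sequence
    # Otherwise the next toss is either a head or a tail.
    return (count_sequences(heads + 1, tosses + 1)
            + count_sequences(heads, tosses + 1))

print(count_sequences())  # 99
```

The recursion counts exactly one leaf per valid sequence, so it reproduces the answer 99 obtained by all three methods above.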
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8688944,"math_prob":0.99681056,"size":1563,"snap":"2019-43-2019-47","text_gpt3_token_len":420,"char_repetition_ratio":0.17511225,"word_repetition_ratio":0.08396947,"special_character_ratio":0.3109405,"punctuation_ratio":0.10576923,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9993811,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-22T03:45:01Z\",\"WARC-Record-ID\":\"<urn:uuid:1621bbe3-2280-4f33-af13-1831ad6411af>\",\"Content-Length\":\"145092\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:74bb69f3-bb27-44cb-9b55-0a2ae2895e05>\",\"WARC-Concurrent-To\":\"<urn:uuid:236fbe01-34a2-472e-a0c1-c9bc688f48a7>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/1026645/flipping-a-coin-until-4-heads-occur-or-until-flipped-7-times-how-many-combinat\",\"WARC-Payload-Digest\":\"sha1:54ZROMF5Z2EE5WRJQCGMRCLNOVOUZ2ZB\",\"WARC-Block-Digest\":\"sha1:EUTWXQ6IU5OPHH67UPRWIFGDCYR3P4T7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570987798619.84_warc_CC-MAIN-20191022030805-20191022054305-00546.warc.gz\"}"} |
https://mruni.pure.elsevier.com/en/publications/numerical-experiment-on-mathematical-model-of-forecasting-the-res | [
"# Žinių testavimo prognozės matematinio modelio tyrimas skaitiniu eksperimentu\n\nTranslated title of the contribution: Numerical experiment on mathematical model of forecasting the results of knowledge testing\n\nNatalja Kosareva, Aleksandras Krylovas\n\nResearch output: Contribution to journal › Article\n\n3 Citations (Scopus)\n\n### Abstract\n\nIn this paper the new approach to forecasting the results of knowledge testing, proposed earlier by the authors, is extended with four classes of parametric functions, the best-fitting one of which is selected to approximate the item characteristic function. The mathematical model is visualized by two numerical experiments. The first experiment was performed to show the procedure of selecting the most appropriate item characteristic function and adjusting the parameters of the model. A goodness-of-fit statistic for detecting misfit of the selected model is calculated. In the second experiment a test of 10 items is constructed for a population with latent ability having a normal distribution. The probability distribution of the total test result and the test information function are calculated when item characteristic functions are selected from four classes of parametric functions. In the next step it is shown how the test information function value could be increased by adjusting the parameters of the item characteristic functions to the observed population. This model could be used not only for knowledge testing but also for solving diagnostic tasks in various fields of human activity. Another advantage of this method is the reduction of the resources of the testing process by more precise adjustment of the model parameters and decreasing the standard error of measurement of the estimated examinee ability. 
In the presented example the methodology is applied for solving the problem of microclimate evaluation in office rooms.\n\nOriginal language: Lithuanian. Pages: 42-61 (20 pages). Journal: Technological and Economic Development of Economy, Volume 17, Issue 1 (ISSN 1392-8619, Vilnius Gediminas Technical University). DOI: https://doi.org/10.3846/13928619.2011.553994. Published: 2011.\n\nKeywords: Generating function, Item characteristic function, Item response theory (IRT), Mathematical modelling, Monte Carlo method.\n\n### Cite this\n\nKosareva, N., & Krylovas, A. (2011). Žinių testavimo prognozės matematinio modelio tyrimas skaitiniu eksperimentu [Numerical experiment on mathematical model of forecasting the results of knowledge testing]. Technological and Economic Development of Economy, 17(1), 42-61. https://doi.org/10.3846/13928619.2011.553994
This model could be used not only for knowledge testing but also when solving diagnostic tasks in various fields of human activities. Other advantage of this method is the reduction of resources of testing process by more precise adjustment of the model parameters and decreasing the standard error of measurement of the estimated examinee ability. In the presented example the methodology is applied for solving the problem of microclimate evaluation in office rooms.\n\nKW - Generating function\n\nKW - Item characteristic function\n\nKW - Item response theory (irt)\n\nKW - Mathematical modelling\n\nKW - Monte carlo method\n\nUR - http://www.scopus.com/inward/record.url?scp=79959801661&partnerID=8YFLogxK\n\nUR - http://www.scopus.com/inward/citedby.url?scp=79959801661&partnerID=8YFLogxK\n\nU2 - 10.3846/13928619.2011.553994\n\nDO - 10.3846/13928619.2011.553994\n\nM3 - Article\n\nAN - SCOPUS:79959801661\n\nVL - 17\n\nSP - 42\n\nEP - 61\n\nJO - Technological and Economic Development of Economy\n\nJF - Technological and Economic Development of Economy\n\nSN - 1392-8619\n\nIS - 1\n\nER -"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.84629893,"math_prob":0.8088164,"size":5884,"snap":"2019-51-2020-05","text_gpt3_token_len":1200,"char_repetition_ratio":0.13639456,"word_repetition_ratio":0.8018648,"special_character_ratio":0.19646499,"punctuation_ratio":0.07724868,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97543114,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-19T13:31:57Z\",\"WARC-Record-ID\":\"<urn:uuid:282ef935-82a5-43bb-bde3-273c10024956>\",\"Content-Length\":\"46998\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:77486cda-9950-4250-93f8-b930bd363149>\",\"WARC-Concurrent-To\":\"<urn:uuid:6c9d0a6a-a7ef-4669-9bdb-086123c1fb55>\",\"WARC-IP-Address\":\"52.209.51.54\",\"WARC-Target-URI\":\"https://mruni.pure.elsevier.com/en/publications/numerical-experiment-on-mathematical-model-of-forecasting-the-res\",\"WARC-Payload-Digest\":\"sha1:2NMMA5CD7CHEOHOB7355QMXQ7NKHU4M3\",\"WARC-Block-Digest\":\"sha1:CGUBQ26RH4FTZ7IBJ6P7NYGMOQQSXP76\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250594603.8_warc_CC-MAIN-20200119122744-20200119150744-00108.warc.gz\"}"} |
https://www.softmath.com/parabola-in-math/converting-decimals/convert-between-logarithmic.html | [
"",
null,
"convert between logarithmic and exponential exponents @ algebra fx 2.0\nRelated topics:\nestimating with decimals worksheet | algebra and trigonometry a graphing approach | \"ratio worksheets\" | free math trivia with answers mathematics | compound interest 6th grade | equation quadratic solver | solving algebra problems step by step free | math fraction | algebra logarithms worksheet | aptitude questions in maths | prentice hall physics quizes | how to solve algebraic expressions by removing the brackets | algebra what order add multiply divide\n\nAuthor Message Author Message\nLosieLed",
null,
"Reg.: 05.04.2006",
null,
"Posted: Thursday 04th of Jan 07:41 Homuck",
null,
"Reg.: 05.07.2001",
null,
"Posted: Saturday 06th of Jan 09:26\n\nHey All, I am urgently in need of assistance for clearing Algebrator is a very convenient tool. I have been using it\nmy math exam that is nearing. I really do not want to for a long time now.\nopt for the guidance of private coaches and online\ncoaching since they prove to be quite costly. Could you\nsuggest a perfect tutoring utility that can help me with\nlearning the principles of Pre Algebra. Particularly, I need\nassistance on trigonometry and graphing equations.\n\nVofj Timidrov",
null,
"Reg.: 06.07.2001",
null,
"Posted: Friday 05th of Jan 08:51 Sdefom Koopmansshab",
null,
"Reg.: 28.10.2001",
null,
"Posted: Sunday 07th of Jan 10:11\n\nWhat exactly are your troubles with convert between I remember having often faced problems with least\nlogarithmic and exponential exponents @ algebra fx common denominator, trigonometry and linear equations.\n2.0? Can you give some more details . I remember that A truly great piece of algebra program is Algebrator\nnot long ago I too had to go through a similar period of software. By simply typing in a problem homework a\nworry . In my instance, my worried hunt led me to a step by step solution would appear by a click on Solve.\nteacher in my vicinity . But he was so tied up that he I have used it through many math classes – College\nsimply did not have the time for me. He was the one Algebra, Basic Math and College Algebra. I greatly\nwho really suggested that these days there is yet a recommend the program.\nnew answer at hand. He initiated me to these brilliant\nprograms in algebra."
] | [
null,
"https://www.softmath.com/r-solver/images/tutor.png",
null,
"https://www.softmath.com/images/avatars/none.png",
null,
"https://www.softmath.com/images/forum/icon_minipost.gif",
null,
"https://www.softmath.com/images/avatars/none.png",
null,
"https://www.softmath.com/images/forum/icon_minipost.gif",
null,
"https://www.softmath.com/images/avatars/none.png",
null,
"https://www.softmath.com/images/forum/icon_minipost.gif",
null,
"https://www.softmath.com/images/avatars/29.jpg",
null,
"https://www.softmath.com/images/forum/icon_minipost.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.836572,"math_prob":0.68206215,"size":1024,"snap":"2021-31-2021-39","text_gpt3_token_len":195,"char_repetition_ratio":0.13627452,"word_repetition_ratio":0.90909094,"special_character_ratio":0.18847656,"punctuation_ratio":0.02189781,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9709706,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-25T16:25:32Z\",\"WARC-Record-ID\":\"<urn:uuid:d5d4ba2c-c375-4c3c-805a-f2b7a94b5ced>\",\"Content-Length\":\"45435\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:efa06f0f-bce7-4fd7-ad04-ab8a92fb7202>\",\"WARC-Concurrent-To\":\"<urn:uuid:449ba3a6-4e70-4408-9422-df34b9ea1552>\",\"WARC-IP-Address\":\"52.43.142.96\",\"WARC-Target-URI\":\"https://www.softmath.com/parabola-in-math/converting-decimals/convert-between-logarithmic.html\",\"WARC-Payload-Digest\":\"sha1:44CPHCB5CMIT6PWANG4P3WXBO456UKMS\",\"WARC-Block-Digest\":\"sha1:DFNOMXLL6GFAJKZ2C2YBW5STOLQYP2ZH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046151699.95_warc_CC-MAIN-20210725143345-20210725173345-00159.warc.gz\"}"} |
http://terms.naer.edu.tw/search/?q=mean&field=ti&op=AND&order=-sorten&q=noun:%22%E6%95%B8%E5%AD%B8%E5%90%8D%E8%A9%9E-%E5%85%A9%E5%B2%B8%E6%95%B8%E5%AD%B8%E5%90%8D%E8%A9%9E%20%22&field=&op=AND | [
"### 查詢結果\n\n• 跳至 /3\n\nsecond mean value theorem 第二均值定理\n\nsample mean 樣本平均數[值]\n\nroot mean square error 均方根誤差\n\nmean vector 均值向量\n\nmean value theorem 均值定理\n\nmean term of proportion 比例中項\n\nmean square error (MSE) 均方誤差\n\nmean error 平均誤差\n\nmean-ergodic theorem 平均遍歷定理\n\nmean entropy 平均熵\n• 跳至 /3"
] | [
null
] | {"ft_lang_label":"__label__zh","ft_lang_prob":0.52409583,"math_prob":0.9913096,"size":469,"snap":"2021-04-2021-17","text_gpt3_token_len":425,"char_repetition_ratio":0.32258064,"word_repetition_ratio":0.0,"special_character_ratio":0.23880596,"punctuation_ratio":0.0,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.994255,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-26T03:05:14Z\",\"WARC-Record-ID\":\"<urn:uuid:43d4f55f-c0b0-4539-bafe-59704fd3cf12>\",\"Content-Length\":\"33901\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a6565a76-82df-4601-958e-c32d93a7f11f>\",\"WARC-Concurrent-To\":\"<urn:uuid:94a33793-18be-4afa-b712-163f27a292d4>\",\"WARC-IP-Address\":\"120.127.233.14\",\"WARC-Target-URI\":\"http://terms.naer.edu.tw/search/?q=mean&field=ti&op=AND&order=-sorten&q=noun:%22%E6%95%B8%E5%AD%B8%E5%90%8D%E8%A9%9E-%E5%85%A9%E5%B2%B8%E6%95%B8%E5%AD%B8%E5%90%8D%E8%A9%9E%20%22&field=&op=AND\",\"WARC-Payload-Digest\":\"sha1:ZCKFHNSPXZF7PBNBY7R3BR5RQUKK23X6\",\"WARC-Block-Digest\":\"sha1:C7722G2ZTTOI27SPF274GZ3KWDTSBX6R\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610704795033.65_warc_CC-MAIN-20210126011645-20210126041645-00260.warc.gz\"}"} |
https://mt-rca.org/roof-types/how-do-you-work-out-the-area-of-a-roof-australia.html | [
"# How do you work out the area of a roof Australia?\n\nContents\n\n## How is roof area calculated in Australia?\n\nIf your house is rectangular, all you need to do is multiply the length and width of the building. If the shape of your house is more complex, simply enter the total area (after measuring the exterior dimensions) into the appropriate box.\n\n## How do you calculate the area of a roof?\n\nMultiply your house length by your house width to get the area. (For example, 40 feet x 30 feet = 1,200 square feet.) Next, multiply the area by your roof’s pitch. (1,200 x 1.05 = 1,260 square feet.)\n\n## What is the standard roof pitch in Australia?\n\nStandard roof pitches in Australia are usually either 15 degrees or 22.5 degrees, depending on the materials used in the roof.\n\n## What is the most common roof pitch?\n\nThe most commonly used roof pitches fall in a range between 4/12 and 9/12. Pitches lower than 4/12 have a slight angle, and they are defined as low-slope roofs. Pitches of less than 2/12 are considered flat roofs, even though they may be very slightly angled.\n\nIT IS INTERESTING: What colors go with a GREY roof?\n\n## How many square feet do I need calculator?\n\nIf you are measuring a square or rectangle area, multiply length times width; Length x Width = Area. For other area shapes, see formulas below to calculate Area (ft2) = Square Footage.\n\n## How many squares are in a 1200 square foot roof?\n\nTo determine the number of squares on the gable roof above, divide its total of 24,000 square feet by 100 (24,000 ÷ 100 = 240).\n\nHow many shingles do I need for 1000 square feet?\n\nHouse by Square Foot Roof by Square Foot Re-Roofing Cost*\n1,000 1,054 \\$4,000 – \\$5,500\n1,100 1,160 \\$4,200 – \\$6,000\n1,200 1,265 \\$4,500 – \\$6,500\n1,500 1,581 \\$5,500 – \\$8,000\n\n## How do I calculate shingles for my garage?\n\nCalculate the square footage by multiplying the width by the length. 
For example, a 20 foot long by 15 foot wide section will come to 300 square feet. Add the square footage together from all of the areas measured to get total square footage. This is the figure you will use for a shingle estimation.\n\n## What roof pitch is 25 degrees?\n\nRoof Slope in Degrees to Standard Roof Pitch Conversion Tables\n\nConvert Roof Slope from Degrees to Rise-in-Run\nRoof Angle in Degrees Roof Slope as Rise-in-Run (X-in-12)\n25° 5.596 in 12\n26° 5.853 in 12\n27° 6.114 in 12\n\n## Is there a minimum roof pitch?\n\nTraditionally, a minimum roof pitch of 20° was recommended in BS 5534, but modern tiles and slates have now been designed for applications as low as 15°. It is uncommon to find a roof below 15° but for those very low pitch applications, there are interlocking clay pantiles available, suitable for use down to 12.5°.\n\nIT IS INTERESTING: How heavy is too heavy for a roof?\n\n## What is the best roof in Australia?\n\nHere are Australia’s top types of roofing.\n\n1. Solar Tiles. An increasingly popular option for those seeking to cut costs and increase green points – solar tiles are ideal for most Australian climates. …\n2. Metal Sheeting. …\n3. Brickwork. …\n4. Terracotta Roof Tiles. …\n5. Concrete Tiles.\n\n1.11.2020\n\n## Is a 4/12 roof pitch good?\n\nAs well, a slope of 4:12 is most commonly considered the lowest slope for “standard shingle installations”. Most manufacturer and industry recommendations require, or at the very least recommend, special underlayment or other considerations on roofs between 2:12 and 4:12.\n\n## How does roof pitch affect cost?\n\nA slope of 6/12 could add nearly another thousand dollars to the price. If you were to choose a 9/12 slope, the national average for a 40 by 50-foot roof would be more than \\$8,300. A high slope of 12/12 could cost \\$10,000 or more.\n\n## What roof pitch is 30 degrees?\n\nWhat roof pitch is 30 degrees? A 30° roof pitch is roughly the same as a 7/12 roof pitch. 
To convert from degrees to the American ratio: Find the tangent of the angle, tan(angle).",
null,
""
] | [
null,
"data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%20100%20100'%3E%3C/svg%3E",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.92397803,"math_prob":0.9618494,"size":3683,"snap":"2022-05-2022-21","text_gpt3_token_len":981,"char_repetition_ratio":0.12313128,"word_repetition_ratio":0.006134969,"special_character_ratio":0.29486832,"punctuation_ratio":0.13098237,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9602773,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-28T13:08:22Z\",\"WARC-Record-ID\":\"<urn:uuid:20feba7b-2c03-4e50-bdeb-b2b086ebe7cc>\",\"Content-Length\":\"75550\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:28452aab-202a-4bd7-ab4f-cb47599ab95e>\",\"WARC-Concurrent-To\":\"<urn:uuid:7e98b051-777a-4c89-933a-0064b3bee272>\",\"WARC-IP-Address\":\"144.126.138.138\",\"WARC-Target-URI\":\"https://mt-rca.org/roof-types/how-do-you-work-out-the-area-of-a-roof-australia.html\",\"WARC-Payload-Digest\":\"sha1:ETJCZM67G3HUF3Z4FNE3A3VPDSVLJRKD\",\"WARC-Block-Digest\":\"sha1:5SJ3OPPQ4SVSJISFFQMZW4R6NJLRT3AC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652663016853.88_warc_CC-MAIN-20220528123744-20220528153744-00668.warc.gz\"}"} |
https://learn.cemetech.net/index.php?title=TI-BASIC:Max&oldid=946 | [
"TI-BASIC:Max\n\n(diff) ← Older revision | Latest revision (diff) | Newer revision → (diff)\n\nCommand Summary\n\nReturns the maximum of two elements or of a list.\n\nCommand Syntax\n\n• for two numbers: max(x,y)\n• for a list: max(list)\n• comparing a number to each element of a list: max(x,list) or max(list,x)\n• pairwise comparing two lists: max(list1,list2)\n\nPress:\n\n1. MATH to access the Math menu.\n2. RIGHT to access the NUM submenu.\n3. 7 to select max(, or use arrows.\n\nAlternatively, press:\n\n1. 2nd LIST to access the list menu.\n2. LEFT to access the MATH submenu.\n3. 2 to select max(, or use arrows.\n\nTI-83/84/+/SE\n\n1 byte\n\nmax(x,y) returns the largest of the two numbers x and y. max(list) returns the largest element of list. max(list1,list2) returns the pairwise maxima of the two lists. max(list1,x) (equivalently, max(x,list1)) returns a list whose elements are the larger of x or the corresponding element of the original list.\n\nmax(2,3)\n3\nmax({2,3,4})\n4\nmax({1,3},{4,2})\n{4 3}\nmax({1,3},2)\n{2 3}\n\nUnlike comparison operators such as < and >, max( can also compare complex numbers. To do this, both arguments must be complex -- either complex numbers or complex lists: max(2,i) will throw an error even though max(2+0i,i) won't. In the case of complex numbers, the number with the largest absolute value will be returned. When the two numbers have the same absolute value, the first one will be returned: max(i,-i) returns i and max(-i,i) returns -i.\n\nmax( can be used in Boolean comparisons to see if at least one of a list is 1 (true) -- useful because commands like If or While only deal with numbers, and not lists, but comparisons like L1=L2 return a list of values. 
In general, the behavior you want varies, and you will use the min( function or the max( function accordingly.\n\nUsing max( will give you a lenient test -- if any one element of the list is 1 (true), then the max( of the list is true -- this is equivalent to putting an 'or' in between every element. For example, this tests if K is equal to any of 24, 25, 26, or 34 (the GetKey arrow key values):\n\n:If max(K={24,25,26,34\n:Disp \"ARROW KEY\n\nTo get the element of a real list with the greatest absolute value, use imag(max(iAns.\n\nError Conditions\n\n• ERR:DATA TYPE is thrown when comparing a real and a complex number. This can be avoided by adding +0i to the real number (or i^4 right after it, for those who are familiar with complex numbers).\n• ERR:DIM MISMATCH is thrown when using max( with two lists, if they have different dimensions."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7412057,"math_prob":0.98083,"size":2439,"snap":"2022-05-2022-21","text_gpt3_token_len":654,"char_repetition_ratio":0.14373717,"word_repetition_ratio":0.014084507,"special_character_ratio":0.27716276,"punctuation_ratio":0.14468865,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9868963,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-27T12:43:54Z\",\"WARC-Record-ID\":\"<urn:uuid:aa70ecd7-4f94-46ca-b890-32c1202c064d>\",\"Content-Length\":\"18487\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:df751272-7288-4ccd-8606-6a3bb291ceb7>\",\"WARC-Concurrent-To\":\"<urn:uuid:ca0f31d6-71b2-455f-af36-35d713401008>\",\"WARC-IP-Address\":\"198.27.66.25\",\"WARC-Target-URI\":\"https://learn.cemetech.net/index.php?title=TI-BASIC:Max&oldid=946\",\"WARC-Payload-Digest\":\"sha1:RE3ABTNR335KIR7JQEPQGDMDSKBIVY2Q\",\"WARC-Block-Digest\":\"sha1:T5MKQHHIVWQRNHPT2WGCBUMNG5UVFLPB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320305260.61_warc_CC-MAIN-20220127103059-20220127133059-00040.warc.gz\"}"} |
106.201.13.77\n)\n\n => Array\n(\n => 50.7.93.29\n)\n\n => Array\n(\n => 123.201.70.112\n)\n\n => Array\n(\n => 39.42.108.226\n)\n\n => Array\n(\n => 27.5.198.29\n)\n\n => Array\n(\n => 223.238.85.187\n)\n\n => Array\n(\n => 171.49.176.32\n)\n\n => Array\n(\n => 14.248.79.242\n)\n\n => Array\n(\n => 46.219.211.183\n)\n\n => Array\n(\n => 185.244.212.251\n)\n\n => Array\n(\n => 14.102.84.126\n)\n\n => Array\n(\n => 106.212.191.52\n)\n\n => Array\n(\n => 154.72.153.203\n)\n\n => Array\n(\n => 14.175.82.64\n)\n\n => Array\n(\n => 141.105.139.131\n)\n\n => Array\n(\n => 182.156.103.98\n)\n\n => Array\n(\n => 117.217.204.75\n)\n\n => Array\n(\n => 104.140.83.115\n)\n\n => Array\n(\n => 119.152.62.8\n)\n\n => Array\n(\n => 45.125.247.94\n)\n\n => Array\n(\n => 137.97.37.252\n)\n\n => Array\n(\n => 117.217.204.73\n)\n\n => Array\n(\n => 14.248.79.133\n)\n\n => Array\n(\n => 39.37.152.52\n)\n\n => Array\n(\n => 103.55.60.54\n)\n\n => Array\n(\n => 102.166.183.88\n)\n\n => Array\n(\n => 5.62.60.162\n)\n\n => Array\n(\n => 5.62.60.163\n)\n\n => Array\n(\n => 160.202.38.131\n)\n\n => Array\n(\n => 106.215.20.253\n)\n\n => Array\n(\n => 39.37.160.54\n)\n\n => Array\n(\n => 119.152.59.186\n)\n\n => Array\n(\n => 183.82.0.164\n)\n\n => Array\n(\n => 41.90.54.87\n)\n\n => Array\n(\n => 157.36.85.158\n)\n\n => Array\n(\n => 110.37.229.162\n)\n\n => Array\n(\n => 203.99.180.148\n)\n\n => Array\n(\n => 117.97.132.91\n)\n\n => Array\n(\n => 171.61.147.105\n)\n\n => Array\n(\n => 14.98.147.214\n)\n\n => Array\n(\n => 209.234.253.191\n)\n\n => Array\n(\n => 92.38.148.60\n)\n\n => Array\n(\n => 178.128.104.139\n)\n\n => Array\n(\n => 212.154.0.176\n)\n\n => Array\n(\n => 103.41.24.141\n)\n\n => Array\n(\n => 2.58.194.132\n)\n\n => Array\n(\n => 180.190.78.169\n)\n\n => Array\n(\n => 106.215.45.182\n)\n\n => Array\n(\n => 125.63.100.222\n)\n\n => Array\n(\n => 110.54.247.17\n)\n\n => Array\n(\n => 103.26.85.105\n)\n\n => Array\n(\n => 39.42.147.3\n)\n\n => Array\n(\n => 
137.97.51.41\n)\n\n => Array\n(\n => 71.202.72.27\n)\n\n => Array\n(\n => 119.155.35.10\n)\n\n => Array\n(\n => 202.47.43.120\n)\n\n => Array\n(\n => 183.83.64.101\n)\n\n => Array\n(\n => 182.68.106.141\n)\n\n => Array\n(\n => 171.61.187.87\n)\n\n => Array\n(\n => 178.162.198.118\n)\n\n => Array\n(\n => 115.97.151.218\n)\n\n => Array\n(\n => 196.207.184.210\n)\n\n => Array\n(\n => 198.16.70.51\n)\n\n => Array\n(\n => 41.60.237.33\n)\n\n => Array\n(\n => 47.11.86.26\n)\n\n => Array\n(\n => 117.217.201.183\n)\n\n => Array\n(\n => 203.192.241.79\n)\n\n => Array\n(\n => 122.165.119.85\n)\n\n => Array\n(\n => 23.227.142.218\n)\n\n => Array\n(\n => 178.128.104.221\n)\n\n => Array\n(\n => 14.192.54.163\n)\n\n => Array\n(\n => 139.5.253.218\n)\n\n => Array\n(\n => 117.230.140.127\n)\n\n => Array\n(\n => 195.114.149.199\n)\n\n => Array\n(\n => 14.239.180.220\n)\n\n => Array\n(\n => 103.62.155.94\n)\n\n => Array\n(\n => 118.71.97.14\n)\n\n => Array\n(\n => 137.97.55.163\n)\n\n => Array\n(\n => 202.47.49.198\n)\n\n => Array\n(\n => 171.61.177.85\n)\n\n => Array\n(\n => 137.97.190.224\n)\n\n => Array\n(\n => 117.230.34.142\n)\n\n => Array\n(\n => 103.41.32.5\n)\n\n => Array\n(\n => 203.90.82.237\n)\n\n => Array\n(\n => 125.63.124.238\n)\n\n => Array\n(\n => 103.232.128.78\n)\n\n => Array\n(\n => 106.197.14.227\n)\n\n => Array\n(\n => 81.17.242.244\n)\n\n => Array\n(\n => 81.19.210.179\n)\n\n => Array\n(\n => 103.134.94.98\n)\n\n => Array\n(\n => 110.38.0.86\n)\n\n => Array\n(\n => 103.10.224.195\n)\n\n => Array\n(\n => 45.118.166.89\n)\n\n => Array\n(\n => 115.186.186.68\n)\n\n => Array\n(\n => 138.197.129.237\n)\n\n => Array\n(\n => 14.247.162.52\n)\n\n => Array\n(\n => 103.255.4.5\n)\n\n => Array\n(\n => 14.167.188.254\n)\n\n => Array\n(\n => 5.62.59.54\n)\n\n => Array\n(\n => 27.122.14.80\n)\n\n => Array\n(\n => 39.53.240.21\n)\n\n => Array\n(\n => 39.53.241.243\n)\n\n => Array\n(\n => 117.230.130.161\n)\n\n => Array\n(\n => 118.71.191.149\n)\n\n => Array\n(\n => 
5.188.95.54\n)\n\n => Array\n(\n => 66.45.250.27\n)\n\n => Array\n(\n => 106.215.6.175\n)\n\n => Array\n(\n => 27.122.14.86\n)\n\n => Array\n(\n => 103.255.4.51\n)\n\n => Array\n(\n => 101.50.93.119\n)\n\n => Array\n(\n => 137.97.183.51\n)\n\n => Array\n(\n => 117.217.204.185\n)\n\n => Array\n(\n => 95.104.106.82\n)\n\n => Array\n(\n => 5.62.56.211\n)\n\n => Array\n(\n => 103.104.181.214\n)\n\n => Array\n(\n => 36.72.214.243\n)\n\n => Array\n(\n => 5.62.62.219\n)\n\n => Array\n(\n => 110.36.202.4\n)\n\n => Array\n(\n => 103.255.4.253\n)\n\n => Array\n(\n => 110.172.138.61\n)\n\n => Array\n(\n => 159.203.24.195\n)\n\n => Array\n(\n => 13.229.88.42\n)\n\n => Array\n(\n => 59.153.235.20\n)\n\n => Array\n(\n => 171.236.169.32\n)\n\n => Array\n(\n => 14.231.85.206\n)\n\n => Array\n(\n => 119.152.54.103\n)\n\n => Array\n(\n => 103.80.117.202\n)\n\n => Array\n(\n => 223.179.157.75\n)\n\n => Array\n(\n => 122.173.68.249\n)\n\n => Array\n(\n => 188.163.72.113\n)\n\n => Array\n(\n => 119.155.20.164\n)\n\n => Array\n(\n => 103.121.43.68\n)\n\n => Array\n(\n => 5.62.58.6\n)\n\n => Array\n(\n => 203.122.40.154\n)\n\n => Array\n(\n => 222.254.96.203\n)\n\n => Array\n(\n => 103.83.148.167\n)\n\n => Array\n(\n => 103.87.251.226\n)\n\n => Array\n(\n => 123.24.129.24\n)\n\n => Array\n(\n => 137.97.83.8\n)\n\n => Array\n(\n => 223.225.33.132\n)\n\n => Array\n(\n => 128.76.175.190\n)\n\n => Array\n(\n => 195.85.219.32\n)\n\n => Array\n(\n => 139.167.102.93\n)\n\n => Array\n(\n => 49.15.198.253\n)\n\n => Array\n(\n => 45.152.183.172\n)\n\n => Array\n(\n => 42.106.180.136\n)\n\n => Array\n(\n => 95.142.120.9\n)\n\n => Array\n(\n => 139.167.236.4\n)\n\n => Array\n(\n => 159.65.72.167\n)\n\n => Array\n(\n => 49.15.89.2\n)\n\n => Array\n(\n => 42.201.161.195\n)\n\n => Array\n(\n => 27.97.210.38\n)\n\n => Array\n(\n => 171.241.45.19\n)\n\n => Array\n(\n => 42.108.2.18\n)\n\n => Array\n(\n => 171.236.40.68\n)\n\n => Array\n(\n => 110.93.82.102\n)\n\n => Array\n(\n => 43.225.24.186\n)\n\n => 
Array\n(\n => 117.230.189.119\n)\n\n => Array\n(\n => 124.123.147.187\n)\n\n => Array\n(\n => 216.151.184.250\n)\n\n => Array\n(\n => 49.15.133.16\n)\n\n => Array\n(\n => 49.15.220.74\n)\n\n => Array\n(\n => 157.37.221.246\n)\n\n => Array\n(\n => 176.124.233.112\n)\n\n => Array\n(\n => 118.71.167.40\n)\n\n => Array\n(\n => 182.185.213.161\n)\n\n => Array\n(\n => 47.31.79.248\n)\n\n => Array\n(\n => 223.179.238.192\n)\n\n => Array\n(\n => 79.110.128.219\n)\n\n => Array\n(\n => 106.210.42.111\n)\n\n => Array\n(\n => 47.247.214.229\n)\n\n => Array\n(\n => 193.0.220.108\n)\n\n => Array\n(\n => 1.39.206.254\n)\n\n => Array\n(\n => 123.201.77.38\n)\n\n => Array\n(\n => 115.178.207.21\n)\n\n => Array\n(\n => 37.111.202.92\n)\n\n => Array\n(\n => 49.14.179.243\n)\n\n => Array\n(\n => 117.230.145.171\n)\n\n => Array\n(\n => 171.229.242.96\n)\n\n => Array\n(\n => 27.59.174.209\n)\n\n => Array\n(\n => 1.38.202.211\n)\n\n => Array\n(\n => 157.37.128.46\n)\n\n => Array\n(\n => 49.15.94.80\n)\n\n => Array\n(\n => 123.25.46.147\n)\n\n => Array\n(\n => 117.230.170.185\n)\n\n => Array\n(\n => 5.62.16.19\n)\n\n => Array\n(\n => 103.18.22.25\n)\n\n => Array\n(\n => 103.46.200.132\n)\n\n => Array\n(\n => 27.97.165.126\n)\n\n => Array\n(\n => 117.230.54.241\n)\n\n => Array\n(\n => 27.97.209.76\n)\n\n => Array\n(\n => 47.31.182.109\n)\n\n => Array\n(\n => 47.30.223.221\n)\n\n => Array\n(\n => 103.31.94.82\n)\n\n => Array\n(\n => 103.211.14.45\n)\n\n => Array\n(\n => 171.49.233.58\n)\n\n => Array\n(\n => 65.49.126.95\n)\n\n => Array\n(\n => 69.255.101.170\n)\n\n => Array\n(\n => 27.56.224.67\n)\n\n => Array\n(\n => 117.230.146.86\n)\n\n => Array\n(\n => 27.59.154.52\n)\n\n => Array\n(\n => 132.154.114.10\n)\n\n => Array\n(\n => 182.186.77.60\n)\n\n => Array\n(\n => 117.230.136.74\n)\n\n => Array\n(\n => 43.251.94.253\n)\n\n => Array\n(\n => 103.79.168.225\n)\n\n => Array\n(\n => 117.230.56.51\n)\n\n => Array\n(\n => 27.97.187.45\n)\n\n => Array\n(\n => 137.97.190.61\n)\n\n => Array\n(\n 
=> 193.0.220.26\n)\n\n => Array\n(\n => 49.36.137.62\n)\n\n => Array\n(\n => 47.30.189.248\n)\n\n => Array\n(\n => 109.169.23.84\n)\n\n => Array\n(\n => 111.119.185.46\n)\n\n => Array\n(\n => 103.83.148.246\n)\n\n => Array\n(\n => 157.32.119.138\n)\n\n => Array\n(\n => 5.62.41.53\n)\n\n => Array\n(\n => 47.8.243.236\n)\n\n => Array\n(\n => 112.79.158.69\n)\n\n => Array\n(\n => 180.92.148.218\n)\n\n => Array\n(\n => 157.36.162.154\n)\n\n => Array\n(\n => 39.46.114.47\n)\n\n => Array\n(\n => 117.230.173.250\n)\n\n => Array\n(\n => 117.230.155.188\n)\n\n => Array\n(\n => 193.0.220.17\n)\n\n => Array\n(\n => 117.230.171.166\n)\n\n => Array\n(\n => 49.34.59.228\n)\n\n => Array\n(\n => 111.88.197.247\n)\n\n => Array\n(\n => 47.31.156.112\n)\n\n => Array\n(\n => 137.97.64.180\n)\n\n => Array\n(\n => 14.244.227.18\n)\n\n => Array\n(\n => 113.167.158.8\n)\n\n => Array\n(\n => 39.37.175.189\n)\n\n => Array\n(\n => 139.167.211.8\n)\n\n => Array\n(\n => 73.120.85.235\n)\n\n => Array\n(\n => 104.236.195.72\n)\n\n => Array\n(\n => 27.97.190.71\n)\n\n => Array\n(\n => 79.46.170.222\n)\n\n => Array\n(\n => 102.185.244.207\n)\n\n => Array\n(\n => 37.111.136.30\n)\n\n => Array\n(\n => 50.7.93.28\n)\n\n => Array\n(\n => 110.54.251.43\n)\n\n => Array\n(\n => 49.36.143.40\n)\n\n => Array\n(\n => 103.130.112.185\n)\n\n => Array\n(\n => 37.111.139.202\n)\n\n => Array\n(\n => 49.36.139.108\n)\n\n => Array\n(\n => 37.111.136.179\n)\n\n => Array\n(\n => 123.17.165.77\n)\n\n => Array\n(\n => 49.207.143.206\n)\n\n => Array\n(\n => 39.53.80.149\n)\n\n => Array\n(\n => 223.188.71.214\n)\n\n => Array\n(\n => 1.39.222.233\n)\n\n => Array\n(\n => 117.230.9.85\n)\n\n => Array\n(\n => 103.251.245.216\n)\n\n => Array\n(\n => 122.169.133.145\n)\n\n => Array\n(\n => 43.250.165.57\n)\n\n => Array\n(\n => 39.44.13.235\n)\n\n => Array\n(\n => 157.47.181.2\n)\n\n => Array\n(\n => 27.56.203.50\n)\n\n => Array\n(\n => 191.96.97.58\n)\n\n => Array\n(\n => 111.88.107.172\n)\n\n => Array\n(\n => 
113.193.198.136\n)\n\n => Array\n(\n => 117.230.172.175\n)\n\n => Array\n(\n => 191.96.182.239\n)\n\n => Array\n(\n => 2.58.46.28\n)\n\n => Array\n(\n => 183.83.253.87\n)\n\n => Array\n(\n => 49.15.139.242\n)\n\n => Array\n(\n => 42.107.220.236\n)\n\n => Array\n(\n => 14.192.53.196\n)\n\n => Array\n(\n => 42.119.212.202\n)\n\n => Array\n(\n => 192.158.234.45\n)\n\n => Array\n(\n => 49.149.102.192\n)\n\n => Array\n(\n => 47.8.170.17\n)\n\n => Array\n(\n => 117.197.13.247\n)\n\n => Array\n(\n => 116.74.34.44\n)\n\n => Array\n(\n => 103.79.249.163\n)\n\n => Array\n(\n => 182.189.95.70\n)\n\n => Array\n(\n => 137.59.218.118\n)\n\n => Array\n(\n => 103.79.170.243\n)\n\n => Array\n(\n => 39.40.54.25\n)\n\n => Array\n(\n => 119.155.40.170\n)\n\n => Array\n(\n => 1.39.212.157\n)\n\n => Array\n(\n => 70.127.59.89\n)\n\n => Array\n(\n => 14.171.22.58\n)\n\n => Array\n(\n => 194.44.167.141\n)\n\n => Array\n(\n => 111.88.179.154\n)\n\n => Array\n(\n => 117.230.140.232\n)\n\n => Array\n(\n => 137.97.96.128\n)\n\n => Array\n(\n => 198.16.66.123\n)\n\n => Array\n(\n => 106.198.44.193\n)\n\n => Array\n(\n => 119.153.45.75\n)\n\n => Array\n(\n => 49.15.242.208\n)\n\n => Array\n(\n => 119.155.241.20\n)\n\n => Array\n(\n => 106.223.109.155\n)\n\n => Array\n(\n => 119.160.119.245\n)\n\n => Array\n(\n => 106.215.81.160\n)\n\n => Array\n(\n => 1.39.192.211\n)\n\n => Array\n(\n => 223.230.35.208\n)\n\n => Array\n(\n => 39.59.4.158\n)\n\n => Array\n(\n => 43.231.57.234\n)\n\n => Array\n(\n => 60.254.78.193\n)\n\n => Array\n(\n => 122.170.224.87\n)\n\n => Array\n(\n => 117.230.22.141\n)\n\n => Array\n(\n => 119.152.107.211\n)\n\n => Array\n(\n => 103.87.192.206\n)\n\n => Array\n(\n => 39.45.244.47\n)\n\n => Array\n(\n => 50.72.141.94\n)\n\n => Array\n(\n => 39.40.6.128\n)\n\n => Array\n(\n => 39.45.180.186\n)\n\n => Array\n(\n => 49.207.131.233\n)\n\n => Array\n(\n => 139.59.69.142\n)\n\n => Array\n(\n => 111.119.187.29\n)\n\n => Array\n(\n => 119.153.40.69\n)\n\n => Array\n(\n => 
49.36.133.64\n)\n\n => Array\n(\n => 103.255.4.249\n)\n\n => Array\n(\n => 198.144.154.15\n)\n\n => Array\n(\n => 1.22.46.172\n)\n\n => Array\n(\n => 103.255.5.46\n)\n\n => Array\n(\n => 27.56.195.188\n)\n\n => Array\n(\n => 203.101.167.53\n)\n\n => Array\n(\n => 117.230.62.195\n)\n\n => Array\n(\n => 103.240.194.186\n)\n\n => Array\n(\n => 107.170.166.118\n)\n\n => Array\n(\n => 101.53.245.80\n)\n\n => Array\n(\n => 157.43.13.208\n)\n\n => Array\n(\n => 137.97.100.77\n)\n\n => Array\n(\n => 47.31.150.208\n)\n\n => Array\n(\n => 137.59.222.65\n)\n\n => Array\n(\n => 103.85.127.250\n)\n\n => Array\n(\n => 103.214.119.32\n)\n\n => Array\n(\n => 182.255.49.52\n)\n\n => Array\n(\n => 103.75.247.72\n)\n\n => Array\n(\n => 103.85.125.250\n)\n\n => Array\n(\n => 183.83.253.167\n)\n\n => Array\n(\n => 1.39.222.111\n)\n\n => Array\n(\n => 111.119.185.9\n)\n\n => Array\n(\n => 111.119.187.10\n)\n\n => Array\n(\n => 39.37.147.144\n)\n\n => Array\n(\n => 103.200.198.183\n)\n\n => Array\n(\n => 1.39.222.18\n)\n\n => Array\n(\n => 198.8.80.103\n)\n\n => Array\n(\n => 42.108.1.243\n)\n\n => Array\n(\n => 111.119.187.16\n)\n\n => Array\n(\n => 39.40.241.8\n)\n\n => Array\n(\n => 122.169.150.158\n)\n\n => Array\n(\n => 39.40.215.119\n)\n\n => Array\n(\n => 103.255.5.77\n)\n\n => Array\n(\n => 157.38.108.196\n)\n\n => Array\n(\n => 103.255.4.67\n)\n\n => Array\n(\n => 5.62.60.62\n)\n\n => Array\n(\n => 39.37.146.202\n)\n\n => Array\n(\n => 110.138.6.221\n)\n\n => Array\n(\n => 49.36.143.88\n)\n\n => Array\n(\n => 37.1.215.39\n)\n\n => Array\n(\n => 27.106.59.190\n)\n\n => Array\n(\n => 139.167.139.41\n)\n\n => Array\n(\n => 114.142.166.179\n)\n\n => Array\n(\n => 223.225.240.112\n)\n\n => Array\n(\n => 103.255.5.36\n)\n\n => Array\n(\n => 175.136.1.48\n)\n\n => Array\n(\n => 103.82.80.166\n)\n\n => Array\n(\n => 182.185.196.126\n)\n\n => Array\n(\n => 157.43.45.76\n)\n\n => Array\n(\n => 119.152.132.49\n)\n\n => Array\n(\n => 5.62.62.162\n)\n\n => Array\n(\n => 103.255.4.39\n)\n\n 
=> Array\n(\n => 202.5.144.153\n)\n\n => Array\n(\n => 1.39.223.210\n)\n\n => Array\n(\n => 92.38.176.154\n)\n\n => Array\n(\n => 117.230.186.142\n)\n\n => Array\n(\n => 183.83.39.123\n)\n\n => Array\n(\n => 182.185.156.76\n)\n\n => Array\n(\n => 104.236.74.212\n)\n\n => Array\n(\n => 107.170.145.187\n)\n\n => Array\n(\n => 117.102.7.98\n)\n\n => Array\n(\n => 137.59.220.0\n)\n\n => Array\n(\n => 157.47.222.14\n)\n\n => Array\n(\n => 47.15.206.82\n)\n\n => Array\n(\n => 117.230.159.99\n)\n\n => Array\n(\n => 117.230.175.151\n)\n\n => Array\n(\n => 157.50.97.18\n)\n\n => Array\n(\n => 117.230.47.164\n)\n\n => Array\n(\n => 77.111.244.34\n)\n\n => Array\n(\n => 139.167.189.131\n)\n\n => Array\n(\n => 1.39.204.103\n)\n\n => Array\n(\n => 117.230.58.0\n)\n\n => Array\n(\n => 182.185.226.66\n)\n\n => Array\n(\n => 115.42.70.119\n)\n\n => Array\n(\n => 171.48.114.134\n)\n\n => Array\n(\n => 144.34.218.75\n)\n\n => Array\n(\n => 199.58.164.135\n)\n\n => Array\n(\n => 101.53.228.151\n)\n\n => Array\n(\n => 117.230.50.57\n)\n\n => Array\n(\n => 223.225.138.84\n)\n\n => Array\n(\n => 110.225.67.65\n)\n\n => Array\n(\n => 47.15.200.39\n)\n\n => Array\n(\n => 39.42.20.127\n)\n\n => Array\n(\n => 117.97.241.81\n)\n\n => Array\n(\n => 111.119.185.11\n)\n\n => Array\n(\n => 103.100.5.94\n)\n\n => Array\n(\n => 103.25.137.69\n)\n\n => Array\n(\n => 47.15.197.159\n)\n\n => Array\n(\n => 223.188.176.122\n)\n\n => Array\n(\n => 27.4.175.80\n)\n\n => Array\n(\n => 181.215.43.82\n)\n\n => Array\n(\n => 27.56.228.157\n)\n\n => Array\n(\n => 117.230.19.19\n)\n\n => Array\n(\n => 47.15.208.71\n)\n\n => Array\n(\n => 119.155.21.176\n)\n\n => Array\n(\n => 47.15.234.202\n)\n\n => Array\n(\n => 117.230.144.135\n)\n\n => Array\n(\n => 112.79.139.199\n)\n\n => Array\n(\n => 116.75.246.41\n)\n\n => Array\n(\n => 117.230.177.126\n)\n\n => Array\n(\n => 212.103.48.134\n)\n\n => Array\n(\n => 102.69.228.78\n)\n\n => Array\n(\n => 117.230.37.118\n)\n\n => Array\n(\n => 175.143.61.75\n)\n\n => 
Array\n(\n => 139.167.56.138\n)\n\n => Array\n(\n => 58.145.189.250\n)\n\n => Array\n(\n => 103.255.5.65\n)\n\n => Array\n(\n => 39.37.153.182\n)\n\n => Array\n(\n => 157.43.85.106\n)\n\n => Array\n(\n => 185.209.178.77\n)\n\n => Array\n(\n => 1.39.212.45\n)\n\n => Array\n(\n => 103.72.7.16\n)\n\n => Array\n(\n => 117.97.185.244\n)\n\n => Array\n(\n => 117.230.59.106\n)\n\n => Array\n(\n => 137.97.121.103\n)\n\n => Array\n(\n => 103.82.123.215\n)\n\n => Array\n(\n => 103.68.217.248\n)\n\n => Array\n(\n => 157.39.27.175\n)\n\n => Array\n(\n => 47.31.100.249\n)\n\n => Array\n(\n => 14.171.232.139\n)\n\n => Array\n(\n => 103.31.93.208\n)\n\n => Array\n(\n => 117.230.56.77\n)\n\n => Array\n(\n => 124.182.25.124\n)\n\n => Array\n(\n => 106.66.191.242\n)\n\n => Array\n(\n => 175.107.237.25\n)\n\n => Array\n(\n => 119.155.1.27\n)\n\n => Array\n(\n => 72.255.6.24\n)\n\n => Array\n(\n => 192.140.152.223\n)\n\n => Array\n(\n => 212.103.48.136\n)\n\n => Array\n(\n => 39.45.134.56\n)\n\n => Array\n(\n => 139.167.173.30\n)\n\n => Array\n(\n => 117.230.63.87\n)\n\n => Array\n(\n => 182.189.95.203\n)\n\n => Array\n(\n => 49.204.183.248\n)\n\n => Array\n(\n => 47.31.125.188\n)\n\n => Array\n(\n => 103.252.171.13\n)\n\n => Array\n(\n => 112.198.74.36\n)\n\n => Array\n(\n => 27.109.113.152\n)\n\n => Array\n(\n => 42.112.233.44\n)\n\n => Array\n(\n => 47.31.68.193\n)\n\n => Array\n(\n => 103.252.171.134\n)\n\n => Array\n(\n => 77.123.32.114\n)\n\n => Array\n(\n => 1.38.189.66\n)\n\n => Array\n(\n => 39.37.181.108\n)\n\n => Array\n(\n => 42.106.44.61\n)\n\n => Array\n(\n => 157.36.8.39\n)\n\n => Array\n(\n => 223.238.41.53\n)\n\n => Array\n(\n => 202.89.77.10\n)\n\n => Array\n(\n => 117.230.150.68\n)\n\n => Array\n(\n => 175.176.87.60\n)\n\n => Array\n(\n => 137.97.117.87\n)\n\n => Array\n(\n => 132.154.123.11\n)\n\n => Array\n(\n => 45.113.124.141\n)\n\n => Array\n(\n => 103.87.56.203\n)\n\n => Array\n(\n => 159.89.171.156\n)\n\n => Array\n(\n => 119.155.53.88\n)\n\n => Array\n(\n => 
222.252.107.215\n)\n\n => Array\n(\n => 132.154.75.238\n)\n\n => Array\n(\n => 122.183.41.168\n)\n\n => Array\n(\n => 42.106.254.158\n)\n\n => Array\n(\n => 103.252.171.37\n)\n\n => Array\n(\n => 202.59.13.180\n)\n\n => Array\n(\n => 37.111.139.137\n)\n\n => Array\n(\n => 39.42.93.25\n)\n\n => Array\n(\n => 118.70.177.156\n)\n\n => Array\n(\n => 117.230.148.64\n)\n\n => Array\n(\n => 39.42.15.194\n)\n\n => Array\n(\n => 137.97.176.86\n)\n\n => Array\n(\n => 106.210.102.113\n)\n\n => Array\n(\n => 39.59.84.236\n)\n\n => Array\n(\n => 49.206.187.177\n)\n\n => Array\n(\n => 117.230.133.11\n)\n\n => Array\n(\n => 42.106.253.173\n)\n\n => Array\n(\n => 178.62.102.23\n)\n\n => Array\n(\n => 111.92.76.175\n)\n\n => Array\n(\n => 132.154.86.45\n)\n\n => Array\n(\n => 117.230.128.39\n)\n\n => Array\n(\n => 117.230.53.165\n)\n\n => Array\n(\n => 49.37.200.171\n)\n\n => Array\n(\n => 104.236.213.230\n)\n\n => Array\n(\n => 103.140.30.81\n)\n\n => Array\n(\n => 59.103.104.117\n)\n\n => Array\n(\n => 65.49.126.79\n)\n\n => Array\n(\n => 202.59.12.251\n)\n\n => Array\n(\n => 37.111.136.17\n)\n\n => Array\n(\n => 163.53.85.67\n)\n\n => Array\n(\n => 123.16.240.73\n)\n\n => Array\n(\n => 103.211.14.183\n)\n\n => Array\n(\n => 103.248.93.211\n)\n\n => Array\n(\n => 116.74.59.127\n)\n\n => Array\n(\n => 137.97.169.254\n)\n\n => Array\n(\n => 113.177.79.100\n)\n\n => Array\n(\n => 74.82.60.187\n)\n\n => Array\n(\n => 117.230.157.66\n)\n\n => Array\n(\n => 169.149.194.241\n)\n\n => Array\n(\n => 117.230.156.11\n)\n\n => Array\n(\n => 202.59.12.157\n)\n\n => Array\n(\n => 42.106.181.25\n)\n\n => Array\n(\n => 202.59.13.78\n)\n\n => Array\n(\n => 39.37.153.32\n)\n\n => Array\n(\n => 177.188.216.175\n)\n\n => Array\n(\n => 222.252.53.165\n)\n\n => Array\n(\n => 37.139.23.89\n)\n\n => Array\n(\n => 117.230.139.150\n)\n\n => Array\n(\n => 104.131.176.234\n)\n\n => Array\n(\n => 42.106.181.117\n)\n\n => Array\n(\n => 117.230.180.94\n)\n\n => Array\n(\n => 180.190.171.5\n)\n\n => Array\n(\n 
=> 150.129.165.185\n)\n\n => Array\n(\n => 51.15.0.150\n)\n\n => Array\n(\n => 42.111.4.84\n)\n\n => Array\n(\n => 74.82.60.116\n)\n\n => Array\n(\n => 137.97.121.165\n)\n\n => Array\n(\n => 64.62.187.194\n)\n\n => Array\n(\n => 137.97.106.162\n)\n\n => Array\n(\n => 137.97.92.46\n)\n\n => Array\n(\n => 137.97.170.25\n)\n\n => Array\n(\n => 103.104.192.100\n)\n\n => Array\n(\n => 185.246.211.34\n)\n\n => Array\n(\n => 119.160.96.78\n)\n\n => Array\n(\n => 212.103.48.152\n)\n\n => Array\n(\n => 183.83.153.90\n)\n\n => Array\n(\n => 117.248.150.41\n)\n\n => Array\n(\n => 185.240.246.180\n)\n\n => Array\n(\n => 162.253.131.125\n)\n\n => Array\n(\n => 117.230.153.217\n)\n\n => Array\n(\n => 117.230.169.1\n)\n\n => Array\n(\n => 49.15.138.247\n)\n\n => Array\n(\n => 117.230.37.110\n)\n\n => Array\n(\n => 14.167.188.75\n)\n\n => Array\n(\n => 169.149.239.93\n)\n\n => Array\n(\n => 103.216.176.91\n)\n\n => Array\n(\n => 117.230.12.126\n)\n\n => Array\n(\n => 184.75.209.110\n)\n\n => Array\n(\n => 117.230.6.60\n)\n\n => Array\n(\n => 117.230.135.132\n)\n\n => Array\n(\n => 31.179.29.109\n)\n\n => Array\n(\n => 74.121.188.186\n)\n\n => Array\n(\n => 117.230.35.5\n)\n\n => Array\n(\n => 111.92.74.239\n)\n\n => Array\n(\n => 104.245.144.236\n)\n\n => Array\n(\n => 39.50.22.100\n)\n\n => Array\n(\n => 47.31.190.23\n)\n\n => Array\n(\n => 157.44.73.187\n)\n\n => Array\n(\n => 117.230.8.91\n)\n\n => Array\n(\n => 157.32.18.2\n)\n\n => Array\n(\n => 111.119.187.43\n)\n\n => Array\n(\n => 203.101.185.246\n)\n\n => Array\n(\n => 5.62.34.22\n)\n\n => Array\n(\n => 122.8.143.76\n)\n\n => Array\n(\n => 115.186.2.187\n)\n\n => Array\n(\n => 202.142.110.89\n)\n\n => Array\n(\n => 157.50.61.254\n)\n\n => Array\n(\n => 223.182.211.185\n)\n\n => Array\n(\n => 103.85.125.210\n)\n\n => Array\n(\n => 103.217.133.147\n)\n\n => Array\n(\n => 103.60.196.217\n)\n\n => Array\n(\n => 157.44.238.6\n)\n\n => Array\n(\n => 117.196.225.68\n)\n\n => Array\n(\n => 104.254.92.52\n)\n\n => Array\n(\n => 
39.42.46.72\n)\n\n => Array\n(\n => 221.132.119.36\n)\n\n => Array\n(\n => 111.92.77.47\n)\n\n => Array\n(\n => 223.225.19.152\n)\n\n => Array\n(\n => 159.89.121.217\n)\n\n => Array\n(\n => 39.53.221.205\n)\n\n => Array\n(\n => 193.34.217.28\n)\n\n => Array\n(\n => 139.167.206.36\n)\n\n => Array\n(\n => 96.40.10.7\n)\n\n => Array\n(\n => 124.29.198.123\n)\n\n => Array\n(\n => 117.196.226.1\n)\n\n => Array\n(\n => 106.200.85.135\n)\n\n => Array\n(\n => 106.223.180.28\n)\n\n => Array\n(\n => 103.49.232.110\n)\n\n => Array\n(\n => 139.167.208.50\n)\n\n => Array\n(\n => 139.167.201.102\n)\n\n => Array\n(\n => 14.244.224.237\n)\n\n => Array\n(\n => 103.140.31.187\n)\n\n => Array\n(\n => 49.36.134.136\n)\n\n => Array\n(\n => 160.16.61.75\n)\n\n => Array\n(\n => 103.18.22.228\n)\n\n => Array\n(\n => 47.9.74.121\n)\n\n => Array\n(\n => 47.30.216.159\n)\n\n => Array\n(\n => 117.248.150.78\n)\n\n => Array\n(\n => 5.62.34.17\n)\n\n => Array\n(\n => 139.167.247.181\n)\n\n => Array\n(\n => 193.176.84.29\n)\n\n => Array\n(\n => 103.195.201.121\n)\n\n => Array\n(\n => 89.187.175.115\n)\n\n => Array\n(\n => 137.97.81.251\n)\n\n => Array\n(\n => 157.51.147.62\n)\n\n => Array\n(\n => 103.104.192.42\n)\n\n => Array\n(\n => 14.171.235.26\n)\n\n => Array\n(\n => 178.62.89.121\n)\n\n => Array\n(\n => 119.155.4.164\n)\n\n => Array\n(\n => 43.250.241.89\n)\n\n => Array\n(\n => 103.31.100.80\n)\n\n => Array\n(\n => 119.155.7.44\n)\n\n => Array\n(\n => 106.200.73.114\n)\n\n => Array\n(\n => 77.111.246.18\n)\n\n => Array\n(\n => 157.39.99.247\n)\n\n => Array\n(\n => 103.77.42.132\n)\n\n => Array\n(\n => 74.115.214.133\n)\n\n => Array\n(\n => 117.230.49.224\n)\n\n => Array\n(\n => 39.50.108.238\n)\n\n => Array\n(\n => 47.30.221.45\n)\n\n => Array\n(\n => 95.133.164.235\n)\n\n => Array\n(\n => 212.103.48.141\n)\n\n => Array\n(\n => 104.194.218.147\n)\n\n => Array\n(\n => 106.200.88.241\n)\n\n => Array\n(\n => 182.189.212.211\n)\n\n => Array\n(\n => 39.50.142.129\n)\n\n => Array\n(\n => 
77.234.43.133\n)\n\n => Array\n(\n => 49.15.192.58\n)\n\n => Array\n(\n => 119.153.37.55\n)\n\n => Array\n(\n => 27.56.156.128\n)\n\n => Array\n(\n => 168.211.4.33\n)\n\n => Array\n(\n => 203.81.236.239\n)\n\n => Array\n(\n => 157.51.149.61\n)\n\n => Array\n(\n => 117.230.45.255\n)\n\n => Array\n(\n => 39.42.106.169\n)\n\n => Array\n(\n => 27.71.89.76\n)\n\n => Array\n(\n => 123.27.109.167\n)\n\n => Array\n(\n => 106.202.21.91\n)\n\n => Array\n(\n => 103.85.125.206\n)\n\n => Array\n(\n => 122.173.250.229\n)\n\n => Array\n(\n => 106.210.102.77\n)\n\n => Array\n(\n => 134.209.47.156\n)\n\n => Array\n(\n => 45.127.232.12\n)\n\n => Array\n(\n => 45.134.224.11\n)\n\n => Array\n(\n => 27.71.89.122\n)\n\n => Array\n(\n => 157.38.105.117\n)\n\n => Array\n(\n => 191.96.73.215\n)\n\n => Array\n(\n => 171.241.92.31\n)\n\n => Array\n(\n => 49.149.104.235\n)\n\n => Array\n(\n => 104.229.247.252\n)\n\n => Array\n(\n => 111.92.78.42\n)\n\n => Array\n(\n => 47.31.88.183\n)\n\n => Array\n(\n => 171.61.203.234\n)\n\n => Array\n(\n => 183.83.226.192\n)\n\n => Array\n(\n => 119.157.107.45\n)\n\n => Array\n(\n => 91.202.163.205\n)\n\n => Array\n(\n => 157.43.62.108\n)\n\n => Array\n(\n => 182.68.248.92\n)\n\n => Array\n(\n => 157.32.251.234\n)\n\n => Array\n(\n => 110.225.196.188\n)\n\n => Array\n(\n => 27.71.89.98\n)\n\n => Array\n(\n => 175.176.87.3\n)\n\n => Array\n(\n => 103.55.90.208\n)\n\n => Array\n(\n => 47.31.41.163\n)\n\n => Array\n(\n => 223.182.195.5\n)\n\n => Array\n(\n => 122.52.101.166\n)\n\n => Array\n(\n => 103.207.82.154\n)\n\n => Array\n(\n => 171.224.178.84\n)\n\n => Array\n(\n => 110.225.235.187\n)\n\n => Array\n(\n => 119.160.97.248\n)\n\n => Array\n(\n => 116.90.101.121\n)\n\n => Array\n(\n => 182.255.48.154\n)\n\n => Array\n(\n => 180.149.221.140\n)\n\n => Array\n(\n => 194.44.79.13\n)\n\n => Array\n(\n => 47.247.18.3\n)\n\n => Array\n(\n => 27.56.242.95\n)\n\n => Array\n(\n => 41.60.236.83\n)\n\n => Array\n(\n => 122.164.162.7\n)\n\n => Array\n(\n => 
71.136.154.5\n)\n\n => Array\n(\n => 132.154.119.122\n)\n\n => Array\n(\n => 110.225.80.135\n)\n\n => Array\n(\n => 84.17.61.143\n)\n\n => Array\n(\n => 119.160.102.244\n)\n\n => Array\n(\n => 47.31.27.44\n)\n\n => Array\n(\n => 27.71.89.160\n)\n\n => Array\n(\n => 107.175.38.101\n)\n\n => Array\n(\n => 195.211.150.152\n)\n\n => Array\n(\n => 157.35.250.255\n)\n\n => Array\n(\n => 111.119.187.53\n)\n\n => Array\n(\n => 119.152.97.213\n)\n\n => Array\n(\n => 180.92.143.145\n)\n\n => Array\n(\n => 72.255.61.46\n)\n\n => Array\n(\n => 47.8.183.6\n)\n\n => Array\n(\n => 92.38.148.53\n)\n\n => Array\n(\n => 122.173.194.72\n)\n\n => Array\n(\n => 183.83.226.97\n)\n\n => Array\n(\n => 122.173.73.231\n)\n\n => Array\n(\n => 119.160.101.101\n)\n\n => Array\n(\n => 93.177.75.174\n)\n\n => Array\n(\n => 115.97.196.70\n)\n\n => Array\n(\n => 111.119.187.35\n)\n\n => Array\n(\n => 103.226.226.154\n)\n\n => Array\n(\n => 103.244.172.73\n)\n\n => Array\n(\n => 119.155.61.222\n)\n\n => Array\n(\n => 157.37.184.92\n)\n\n => Array\n(\n => 119.160.103.204\n)\n\n => Array\n(\n => 175.176.87.21\n)\n\n => Array\n(\n => 185.51.228.246\n)\n\n => Array\n(\n => 103.250.164.255\n)\n\n => Array\n(\n => 122.181.194.16\n)\n\n => Array\n(\n => 157.37.230.232\n)\n\n => Array\n(\n => 103.105.236.6\n)\n\n => Array\n(\n => 111.88.128.174\n)\n\n => Array\n(\n => 37.111.139.82\n)\n\n => Array\n(\n => 39.34.133.52\n)\n\n => Array\n(\n => 113.177.79.80\n)\n\n => Array\n(\n => 180.183.71.184\n)\n\n => Array\n(\n => 116.72.218.255\n)\n\n => Array\n(\n => 119.160.117.26\n)\n\n => Array\n(\n => 158.222.0.252\n)\n\n => Array\n(\n => 23.227.142.146\n)\n\n => Array\n(\n => 122.162.152.152\n)\n\n => Array\n(\n => 103.255.149.106\n)\n\n => Array\n(\n => 104.236.53.155\n)\n\n => Array\n(\n => 119.160.119.155\n)\n\n => Array\n(\n => 175.107.214.244\n)\n\n => Array\n(\n => 102.7.116.7\n)\n\n => Array\n(\n => 111.88.91.132\n)\n\n => Array\n(\n => 119.157.248.108\n)\n\n => Array\n(\n => 222.252.36.107\n)\n\n => 
41.202.219.252\n)\n\n => Array\n(\n => 43.242.176.119\n)\n\n => Array\n(\n => 111.119.187.6\n)\n\n => Array\n(\n => 95.12.200.188\n)\n\n => Array\n(\n => 139.28.219.138\n)\n\n => Array\n(\n => 89.163.247.130\n)\n\n => Array\n(\n => 122.173.103.88\n)\n\n => Array\n(\n => 103.248.87.10\n)\n\n => Array\n(\n => 23.106.249.36\n)\n\n => Array\n(\n => 124.253.94.125\n)\n\n => Array\n(\n => 39.53.244.147\n)\n\n => Array\n(\n => 193.109.85.11\n)\n\n => Array\n(\n => 43.242.176.71\n)\n\n => Array\n(\n => 43.242.177.58\n)\n\n => Array\n(\n => 47.31.6.139\n)\n\n => Array\n(\n => 39.59.34.67\n)\n\n => Array\n(\n => 43.242.176.58\n)\n\n => Array\n(\n => 103.107.198.198\n)\n\n => Array\n(\n => 147.135.11.113\n)\n\n => Array\n(\n => 27.7.212.112\n)\n\n => Array\n(\n => 43.242.177.1\n)\n\n => Array\n(\n => 175.107.227.27\n)\n\n => Array\n(\n => 103.103.43.254\n)\n\n => Array\n(\n => 49.15.221.10\n)\n\n => Array\n(\n => 43.242.177.43\n)\n\n => Array\n(\n => 36.85.59.11\n)\n\n => Array\n(\n => 124.253.204.50\n)\n\n => Array\n(\n => 5.181.233.54\n)\n\n => Array\n(\n => 43.242.177.154\n)\n\n => Array\n(\n => 103.84.37.169\n)\n\n => Array\n(\n => 222.252.54.108\n)\n\n => Array\n(\n => 14.162.160.254\n)\n\n => Array\n(\n => 178.151.218.45\n)\n\n => Array\n(\n => 110.137.101.93\n)\n\n => Array\n(\n => 122.162.212.59\n)\n\n => Array\n(\n => 81.12.118.162\n)\n\n => Array\n(\n => 171.76.186.148\n)\n\n => Array\n(\n => 182.69.253.77\n)\n\n => Array\n(\n => 111.119.183.43\n)\n\n => Array\n(\n => 49.149.74.226\n)\n\n => Array\n(\n => 43.242.177.63\n)\n\n => Array\n(\n => 14.99.243.54\n)\n\n => Array\n(\n => 110.137.100.25\n)\n\n => Array\n(\n => 116.107.25.163\n)\n\n => Array\n(\n => 49.36.71.141\n)\n\n => Array\n(\n => 182.180.117.219\n)\n\n => Array\n(\n => 150.242.172.194\n)\n\n => Array\n(\n => 49.156.111.40\n)\n\n => Array\n(\n => 49.15.208.115\n)\n\n => Array\n(\n => 103.209.87.219\n)\n\n => Array\n(\n => 43.242.176.56\n)\n\n => Array\n(\n => 103.132.187.100\n)\n\n => Array\n(\n => 
49.156.96.120\n)\n\n => Array\n(\n => 192.142.176.171\n)\n\n => Array\n(\n => 51.91.18.131\n)\n\n => Array\n(\n => 103.83.144.121\n)\n\n => Array\n(\n => 1.39.75.72\n)\n\n => Array\n(\n => 14.231.172.177\n)\n\n => Array\n(\n => 94.232.213.159\n)\n\n => Array\n(\n => 103.228.158.38\n)\n\n => Array\n(\n => 43.242.177.100\n)\n\n => Array\n(\n => 171.76.149.130\n)\n\n => Array\n(\n => 113.183.26.59\n)\n\n => Array\n(\n => 182.74.232.166\n)\n\n => Array\n(\n => 47.31.205.211\n)\n\n => Array\n(\n => 106.211.253.70\n)\n\n => Array\n(\n => 39.51.233.214\n)\n\n => Array\n(\n => 182.70.249.161\n)\n\n => Array\n(\n => 222.252.40.196\n)\n\n => Array\n(\n => 49.37.6.29\n)\n\n => Array\n(\n => 119.155.33.170\n)\n\n => Array\n(\n => 43.242.177.79\n)\n\n => Array\n(\n => 111.119.183.62\n)\n\n => Array\n(\n => 137.59.226.97\n)\n\n => Array\n(\n => 42.111.18.121\n)\n\n => Array\n(\n => 223.190.46.91\n)\n\n => Array\n(\n => 45.118.165.159\n)\n\n => Array\n(\n => 110.136.60.44\n)\n\n => Array\n(\n => 43.242.176.57\n)\n\n => Array\n(\n => 117.212.58.0\n)\n\n => Array\n(\n => 49.37.7.66\n)\n\n => Array\n(\n => 39.52.174.33\n)\n\n => Array\n(\n => 150.242.172.55\n)\n\n => Array\n(\n => 103.94.111.236\n)\n\n => Array\n(\n => 106.215.239.184\n)\n\n => Array\n(\n => 101.128.117.75\n)\n\n => Array\n(\n => 162.210.194.10\n)\n\n => Array\n(\n => 136.158.31.132\n)\n\n => Array\n(\n => 39.51.245.69\n)\n\n => Array\n(\n => 39.42.149.159\n)\n\n => Array\n(\n => 51.77.108.159\n)\n\n => Array\n(\n => 45.127.247.250\n)\n\n => Array\n(\n => 122.172.78.22\n)\n\n => Array\n(\n => 117.220.208.38\n)\n\n => Array\n(\n => 112.201.138.95\n)\n\n => Array\n(\n => 49.145.105.113\n)\n\n => Array\n(\n => 110.93.247.12\n)\n\n => Array\n(\n => 39.52.150.32\n)\n\n => Array\n(\n => 122.161.89.41\n)\n\n => Array\n(\n => 39.52.176.49\n)\n\n => Array\n(\n => 157.33.12.154\n)\n\n => Array\n(\n => 73.111.248.162\n)\n\n => Array\n(\n => 112.204.167.67\n)\n\n => Array\n(\n => 107.150.30.182\n)\n\n => Array\n(\n => 
115.99.222.229\n)\n\n => Array\n(\n => 180.190.195.96\n)\n\n => Array\n(\n => 157.44.57.255\n)\n\n => Array\n(\n => 39.37.9.167\n)\n\n => Array\n(\n => 39.49.48.33\n)\n\n => Array\n(\n => 157.44.218.118\n)\n\n => Array\n(\n => 103.211.54.253\n)\n\n => Array\n(\n => 43.242.177.81\n)\n\n => Array\n(\n => 103.111.224.227\n)\n\n => Array\n(\n => 223.176.48.237\n)\n\n => Array\n(\n => 124.253.87.117\n)\n\n => Array\n(\n => 124.29.247.14\n)\n\n => Array\n(\n => 182.189.232.32\n)\n\n => Array\n(\n => 111.68.97.206\n)\n\n => Array\n(\n => 103.117.15.70\n)\n\n => Array\n(\n => 182.18.236.101\n)\n\n => Array\n(\n => 43.242.177.60\n)\n\n => Array\n(\n => 180.190.7.178\n)\n\n => Array\n(\n => 112.201.142.95\n)\n\n => Array\n(\n => 122.178.255.123\n)\n\n => Array\n(\n => 49.36.240.103\n)\n\n => Array\n(\n => 210.56.16.13\n)\n\n => Array\n(\n => 103.91.123.219\n)\n\n => Array\n(\n => 39.52.155.252\n)\n\n => Array\n(\n => 192.142.207.230\n)\n\n => Array\n(\n => 188.163.82.179\n)\n\n => Array\n(\n => 182.189.9.196\n)\n\n => Array\n(\n => 175.107.221.51\n)\n\n => Array\n(\n => 39.53.221.200\n)\n\n => Array\n(\n => 27.255.190.59\n)\n\n => Array\n(\n => 183.83.212.118\n)\n\n => Array\n(\n => 45.118.165.143\n)\n\n => Array\n(\n => 182.189.124.35\n)\n\n => Array\n(\n => 203.101.186.1\n)\n\n => Array\n(\n => 49.36.246.25\n)\n\n => Array\n(\n => 39.42.186.234\n)\n\n => Array\n(\n => 103.82.80.14\n)\n\n => Array\n(\n => 210.18.182.42\n)\n\n => Array\n(\n => 42.111.13.81\n)\n\n => Array\n(\n => 46.200.69.240\n)\n\n => Array\n(\n => 103.209.87.213\n)\n\n => Array\n(\n => 103.31.95.95\n)\n\n => Array\n(\n => 180.190.174.25\n)\n\n => Array\n(\n => 103.77.0.128\n)\n\n => Array\n(\n => 49.34.103.82\n)\n\n => Array\n(\n => 39.48.196.22\n)\n\n => Array\n(\n => 192.142.166.20\n)\n\n => Array\n(\n => 202.142.110.186\n)\n\n => Array\n(\n => 122.163.135.95\n)\n\n => Array\n(\n => 183.83.255.225\n)\n\n => Array\n(\n => 157.45.46.10\n)\n\n => Array\n(\n => 182.189.4.77\n)\n\n => Array\n(\n => 
49.145.104.71\n)\n\n => Array\n(\n => 103.143.7.34\n)\n\n => Array\n(\n => 61.2.180.15\n)\n\n => Array\n(\n => 103.81.215.61\n)\n\n => Array\n(\n => 115.42.71.122\n)\n\n => Array\n(\n => 124.253.73.20\n)\n\n => Array\n(\n => 49.33.210.169\n)\n\n => Array\n(\n => 78.159.101.115\n)\n\n => Array\n(\n => 42.111.17.221\n)\n\n => Array\n(\n => 43.242.178.67\n)\n\n => Array\n(\n => 36.68.138.36\n)\n\n => Array\n(\n => 103.195.201.51\n)\n\n => Array\n(\n => 79.141.162.81\n)\n\n => Array\n(\n => 202.8.118.239\n)\n\n => Array\n(\n => 103.139.128.161\n)\n\n => Array\n(\n => 207.244.71.84\n)\n\n => Array\n(\n => 124.253.184.45\n)\n\n => Array\n(\n => 111.125.106.124\n)\n\n => Array\n(\n => 111.125.105.139\n)\n\n => Array\n(\n => 39.59.94.233\n)\n\n => Array\n(\n => 112.211.60.168\n)\n\n => Array\n(\n => 103.117.14.72\n)\n\n => Array\n(\n => 111.119.183.56\n)\n\n => Array\n(\n => 47.31.53.228\n)\n\n => Array\n(\n => 124.253.186.8\n)\n\n => Array\n(\n => 183.83.213.214\n)\n\n => Array\n(\n => 103.106.239.70\n)\n\n => Array\n(\n => 182.182.92.81\n)\n\n => Array\n(\n => 14.162.167.98\n)\n\n => Array\n(\n => 112.211.11.107\n)\n\n => Array\n(\n => 77.111.246.20\n)\n\n => Array\n(\n => 49.156.86.182\n)\n\n => Array\n(\n => 47.29.122.112\n)\n\n => Array\n(\n => 125.99.74.42\n)\n\n => Array\n(\n => 124.123.169.24\n)\n\n => Array\n(\n => 106.202.105.128\n)\n\n => Array\n(\n => 103.244.173.14\n)\n\n => Array\n(\n => 103.98.63.104\n)\n\n => Array\n(\n => 180.245.6.60\n)\n\n => Array\n(\n => 49.149.96.14\n)\n\n => Array\n(\n => 14.177.120.169\n)\n\n => Array\n(\n => 192.135.90.145\n)\n\n => Array\n(\n => 223.190.18.218\n)\n\n => Array\n(\n => 171.61.190.2\n)\n\n => Array\n(\n => 58.65.220.219\n)\n\n => Array\n(\n => 122.177.29.87\n)\n\n => Array\n(\n => 223.236.175.203\n)\n\n => Array\n(\n => 39.53.237.106\n)\n\n => Array\n(\n => 1.186.114.83\n)\n\n => Array\n(\n => 43.230.66.153\n)\n\n => Array\n(\n => 27.96.94.247\n)\n\n => Array\n(\n => 39.52.176.185\n)\n\n => Array\n(\n => 
59.94.147.62\n)\n\n => Array\n(\n => 119.160.117.10\n)\n\n => Array\n(\n => 43.241.146.105\n)\n\n => Array\n(\n => 39.59.87.75\n)\n\n => Array\n(\n => 119.160.118.203\n)\n\n => Array\n(\n => 39.52.161.76\n)\n\n => Array\n(\n => 202.168.84.189\n)\n\n => Array\n(\n => 103.215.168.2\n)\n\n => Array\n(\n => 39.42.146.160\n)\n\n => Array\n(\n => 182.182.30.246\n)\n\n => Array\n(\n => 122.173.212.133\n)\n\n => Array\n(\n => 39.51.238.44\n)\n\n => Array\n(\n => 183.83.252.51\n)\n\n => Array\n(\n => 202.142.168.86\n)\n\n => Array\n(\n => 39.40.198.209\n)\n\n => Array\n(\n => 192.135.90.151\n)\n\n => Array\n(\n => 72.255.41.174\n)\n\n => Array\n(\n => 137.97.92.124\n)\n\n => Array\n(\n => 182.185.159.155\n)\n\n => Array\n(\n => 157.44.133.131\n)\n\n => Array\n(\n => 39.51.230.253\n)\n\n => Array\n(\n => 103.70.87.200\n)\n\n => Array\n(\n => 103.117.15.82\n)\n\n => Array\n(\n => 103.217.244.69\n)\n\n => Array\n(\n => 157.34.76.185\n)\n\n => Array\n(\n => 39.52.130.163\n)\n\n => Array\n(\n => 182.181.41.39\n)\n\n => Array\n(\n => 49.37.212.226\n)\n\n => Array\n(\n => 119.160.117.100\n)\n\n => Array\n(\n => 103.209.87.43\n)\n\n => Array\n(\n => 180.190.195.45\n)\n\n => Array\n(\n => 122.160.57.230\n)\n\n => Array\n(\n => 203.192.213.81\n)\n\n => Array\n(\n => 182.181.63.91\n)\n\n => Array\n(\n => 157.44.184.5\n)\n\n => Array\n(\n => 27.97.213.128\n)\n\n => Array\n(\n => 122.55.252.145\n)\n\n => Array\n(\n => 103.117.15.92\n)\n\n => Array\n(\n => 42.201.251.179\n)\n\n => Array\n(\n => 122.186.84.53\n)\n\n => Array\n(\n => 119.157.75.242\n)\n\n => Array\n(\n => 39.42.163.6\n)\n\n => Array\n(\n => 14.99.246.78\n)\n\n => Array\n(\n => 103.209.87.227\n)\n\n => Array\n(\n => 182.68.215.31\n)\n\n => Array\n(\n => 45.118.165.140\n)\n\n => Array\n(\n => 207.244.71.81\n)\n\n => Array\n(\n => 27.97.162.57\n)\n\n => Array\n(\n => 103.113.106.98\n)\n\n => Array\n(\n => 95.135.44.103\n)\n\n => Array\n(\n => 125.209.114.238\n)\n\n => Array\n(\n => 77.123.14.176\n)\n\n => Array\n(\n => 
110.36.202.169\n)\n\n => Array\n(\n => 124.253.205.230\n)\n\n => Array\n(\n => 106.215.72.117\n)\n\n => Array\n(\n => 116.72.226.35\n)\n\n => Array\n(\n => 137.97.103.141\n)\n\n => Array\n(\n => 112.79.212.161\n)\n\n => Array\n(\n => 103.209.85.150\n)\n\n => Array\n(\n => 103.159.127.6\n)\n\n => Array\n(\n => 43.239.205.66\n)\n\n => Array\n(\n => 143.244.51.152\n)\n\n => Array\n(\n => 182.64.15.3\n)\n\n => Array\n(\n => 182.185.207.146\n)\n\n => Array\n(\n => 45.118.165.155\n)\n\n => Array\n(\n => 115.160.241.214\n)\n\n => Array\n(\n => 47.31.230.68\n)\n\n => Array\n(\n => 49.15.84.145\n)\n\n => Array\n(\n => 39.51.239.206\n)\n\n => Array\n(\n => 103.149.154.212\n)\n\n => Array\n(\n => 43.239.207.155\n)\n\n => Array\n(\n => 182.182.30.181\n)\n\n => Array\n(\n => 157.37.198.16\n)\n\n => Array\n(\n => 162.239.24.60\n)\n\n => Array\n(\n => 106.212.101.97\n)\n\n => Array\n(\n => 124.253.97.44\n)\n\n => Array\n(\n => 106.214.95.176\n)\n\n => Array\n(\n => 102.69.228.114\n)\n\n => Array\n(\n => 116.74.58.221\n)\n\n => Array\n(\n => 162.210.194.38\n)\n\n => Array\n(\n => 39.52.162.121\n)\n\n => Array\n(\n => 103.216.143.255\n)\n\n => Array\n(\n => 103.49.155.134\n)\n\n => Array\n(\n => 182.191.119.236\n)\n\n => Array\n(\n => 111.88.213.172\n)\n\n => Array\n(\n => 43.239.207.207\n)\n\n => Array\n(\n => 140.213.35.143\n)\n\n => Array\n(\n => 154.72.153.215\n)\n\n => Array\n(\n => 122.170.47.36\n)\n\n => Array\n(\n => 51.158.111.163\n)\n\n => Array\n(\n => 203.122.10.150\n)\n\n => Array\n(\n => 47.31.176.111\n)\n\n => Array\n(\n => 103.75.246.34\n)\n\n => Array\n(\n => 103.244.178.45\n)\n\n => Array\n(\n => 182.185.138.0\n)\n\n => Array\n(\n => 183.83.254.224\n)\n\n => Array\n(\n => 49.36.246.145\n)\n\n => Array\n(\n => 202.47.60.85\n)\n\n => Array\n(\n => 180.190.163.160\n)\n\n => Array\n(\n => 27.255.187.221\n)\n\n => Array\n(\n => 14.248.94.2\n)\n\n => Array\n(\n => 185.233.17.187\n)\n\n => Array\n(\n => 139.5.254.227\n)\n\n => Array\n(\n => 103.149.160.66\n)\n\n => 
Array\n(\n => 122.168.235.47\n)\n\n => Array\n(\n => 45.113.248.224\n)\n\n => Array\n(\n => 110.54.170.142\n)\n\n => Array\n(\n => 223.235.226.55\n)\n\n => Array\n(\n => 157.32.19.235\n)\n\n => Array\n(\n => 49.15.221.114\n)\n\n => Array\n(\n => 27.97.166.163\n)\n\n => Array\n(\n => 223.233.99.5\n)\n\n => Array\n(\n => 49.33.203.53\n)\n\n => Array\n(\n => 27.56.214.41\n)\n\n => Array\n(\n => 103.138.51.3\n)\n\n => Array\n(\n => 111.119.183.21\n)\n\n => Array\n(\n => 47.15.138.233\n)\n\n => Array\n(\n => 202.63.213.184\n)\n\n => Array\n(\n => 49.36.158.94\n)\n\n => Array\n(\n => 27.97.186.179\n)\n\n => Array\n(\n => 27.97.214.69\n)\n\n => Array\n(\n => 203.128.18.163\n)\n\n => Array\n(\n => 106.207.235.63\n)\n\n => Array\n(\n => 116.107.220.231\n)\n\n => Array\n(\n => 223.226.169.249\n)\n\n => Array\n(\n => 106.201.24.6\n)\n\n => Array\n(\n => 49.15.89.7\n)\n\n => Array\n(\n => 49.15.142.20\n)\n\n => Array\n(\n => 223.177.24.85\n)\n\n => Array\n(\n => 37.156.17.37\n)\n\n => Array\n(\n => 102.129.224.2\n)\n\n => Array\n(\n => 49.15.85.221\n)\n\n => Array\n(\n => 106.76.208.153\n)\n\n => Array\n(\n => 61.2.47.71\n)\n\n => Array\n(\n => 27.97.178.79\n)\n\n => Array\n(\n => 39.34.143.196\n)\n\n => Array\n(\n => 103.10.227.158\n)\n\n => Array\n(\n => 117.220.210.159\n)\n\n => Array\n(\n => 182.189.28.11\n)\n\n => Array\n(\n => 122.185.38.170\n)\n\n => Array\n(\n => 112.196.132.115\n)\n\n => Array\n(\n => 187.156.137.83\n)\n\n => Array\n(\n => 203.122.3.88\n)\n\n => Array\n(\n => 51.68.142.45\n)\n\n => Array\n(\n => 124.253.217.55\n)\n\n => Array\n(\n => 103.152.41.2\n)\n\n => Array\n(\n => 157.37.154.219\n)\n\n => Array\n(\n => 39.45.32.77\n)\n\n => Array\n(\n => 182.182.22.221\n)\n\n => Array\n(\n => 157.43.205.117\n)\n\n => Array\n(\n => 202.142.123.58\n)\n\n => Array\n(\n => 43.239.207.121\n)\n\n => Array\n(\n => 49.206.122.113\n)\n\n => Array\n(\n => 106.193.199.203\n)\n\n => Array\n(\n => 103.67.157.251\n)\n\n => Array\n(\n => 49.34.97.81\n)\n\n => Array\n(\n => 
49.156.92.130\n)\n\n => Array\n(\n => 203.160.179.210\n)\n\n => Array\n(\n => 106.215.33.244\n)\n\n => Array\n(\n => 191.101.148.41\n)\n\n => Array\n(\n => 203.90.94.94\n)\n\n => Array\n(\n => 105.129.205.134\n)\n\n => Array\n(\n => 106.215.45.165\n)\n\n => Array\n(\n => 112.196.132.15\n)\n\n => Array\n(\n => 39.59.64.174\n)\n\n => Array\n(\n => 124.253.155.116\n)\n\n => Array\n(\n => 94.179.192.204\n)\n\n => Array\n(\n => 110.38.29.245\n)\n\n => Array\n(\n => 124.29.209.78\n)\n\n => Array\n(\n => 103.75.245.240\n)\n\n => Array\n(\n => 49.36.159.170\n)\n\n => Array\n(\n => 223.190.18.160\n)\n\n => Array\n(\n => 124.253.113.226\n)\n\n => Array\n(\n => 14.180.77.240\n)\n\n => Array\n(\n => 106.215.76.24\n)\n\n => Array\n(\n => 106.210.155.153\n)\n\n => Array\n(\n => 111.119.187.42\n)\n\n => Array\n(\n => 146.196.32.106\n)\n\n => Array\n(\n => 122.162.22.27\n)\n\n => Array\n(\n => 49.145.59.252\n)\n\n => Array\n(\n => 95.47.247.92\n)\n\n => Array\n(\n => 103.99.218.50\n)\n\n => Array\n(\n => 157.37.192.88\n)\n\n => Array\n(\n => 82.102.31.242\n)\n\n => Array\n(\n => 157.46.220.64\n)\n\n => Array\n(\n => 180.151.107.52\n)\n\n => Array\n(\n => 203.81.240.75\n)\n\n => Array\n(\n => 122.167.213.130\n)\n\n => Array\n(\n => 103.227.70.164\n)\n\n => Array\n(\n => 106.215.81.169\n)\n\n => Array\n(\n => 157.46.214.170\n)\n\n => Array\n(\n => 103.69.27.163\n)\n\n => Array\n(\n => 124.253.23.213\n)\n\n => Array\n(\n => 157.37.167.174\n)\n\n => Array\n(\n => 1.39.204.67\n)\n\n => Array\n(\n => 112.196.132.51\n)\n\n => Array\n(\n => 119.152.61.222\n)\n\n => Array\n(\n => 47.31.36.174\n)\n\n => Array\n(\n => 47.31.152.174\n)\n\n => Array\n(\n => 49.34.18.105\n)\n\n => Array\n(\n => 157.37.170.101\n)\n\n => Array\n(\n => 118.209.241.234\n)\n\n => Array\n(\n => 103.67.19.9\n)\n\n => Array\n(\n => 182.189.14.154\n)\n\n => Array\n(\n => 45.127.233.232\n)\n\n => Array\n(\n => 27.96.94.91\n)\n\n => Array\n(\n => 183.83.214.250\n)\n\n => Array\n(\n => 47.31.27.140\n)\n\n => Array\n(\n => 
47.31.129.199\n)\n\n => Array\n(\n => 157.44.156.111\n)\n\n => Array\n(\n => 42.110.163.2\n)\n\n => Array\n(\n => 124.253.64.210\n)\n\n => Array\n(\n => 49.36.167.54\n)\n\n => Array\n(\n => 27.63.135.145\n)\n\n => Array\n(\n => 157.35.254.63\n)\n\n => Array\n(\n => 39.45.18.182\n)\n\n => Array\n(\n => 197.210.85.102\n)\n\n => Array\n(\n => 112.196.132.90\n)\n\n => Array\n(\n => 59.152.97.84\n)\n\n => Array\n(\n => 43.242.178.7\n)\n\n => Array\n(\n => 47.31.40.70\n)\n\n => Array\n(\n => 202.134.10.136\n)\n\n => Array\n(\n => 132.154.241.43\n)\n\n => Array\n(\n => 185.209.179.240\n)\n\n => Array\n(\n => 202.47.50.28\n)\n\n => Array\n(\n => 182.186.1.29\n)\n\n => Array\n(\n => 124.253.114.229\n)\n\n => Array\n(\n => 49.32.210.126\n)\n\n => Array\n(\n => 43.242.178.122\n)\n\n => Array\n(\n => 42.111.28.52\n)\n\n => Array\n(\n => 23.227.141.44\n)\n\n => Array\n(\n => 23.227.141.156\n)\n\n => Array\n(\n => 103.253.173.79\n)\n\n => Array\n(\n => 116.75.231.74\n)\n\n => Array\n(\n => 106.76.78.196\n)\n\n => Array\n(\n => 116.75.197.68\n)\n\n => Array\n(\n => 42.108.172.131\n)\n\n => Array\n(\n => 157.38.27.199\n)\n\n => Array\n(\n => 103.70.86.205\n)\n\n => Array\n(\n => 119.152.63.239\n)\n\n => Array\n(\n => 103.233.116.94\n)\n\n => Array\n(\n => 111.119.188.17\n)\n\n => Array\n(\n => 103.196.160.156\n)\n\n => Array\n(\n => 27.97.208.40\n)\n\n => Array\n(\n => 188.163.7.136\n)\n\n => Array\n(\n => 49.15.202.205\n)\n\n => Array\n(\n => 124.253.201.111\n)\n\n => Array\n(\n => 182.190.213.246\n)\n\n => Array\n(\n => 5.154.174.10\n)\n\n => Array\n(\n => 103.21.185.16\n)\n\n => Array\n(\n => 112.196.132.67\n)\n\n => Array\n(\n => 49.15.194.230\n)\n\n => Array\n(\n => 103.118.34.103\n)\n\n => Array\n(\n => 49.15.201.92\n)\n\n => Array\n(\n => 42.111.13.238\n)\n\n => Array\n(\n => 203.192.213.137\n)\n\n => Array\n(\n => 45.115.190.82\n)\n\n => Array\n(\n => 78.26.130.102\n)\n\n => Array\n(\n => 49.15.85.202\n)\n\n => Array\n(\n => 106.76.193.33\n)\n\n => Array\n(\n => 
103.70.41.30\n)\n\n => Array\n(\n => 103.82.78.254\n)\n\n => Array\n(\n => 110.38.35.90\n)\n\n => Array\n(\n => 181.214.107.27\n)\n\n => Array\n(\n => 27.110.183.162\n)\n\n => Array\n(\n => 94.225.230.215\n)\n\n => Array\n(\n => 27.97.185.58\n)\n\n => Array\n(\n => 49.146.196.124\n)\n\n => Array\n(\n => 119.157.76.144\n)\n\n => Array\n(\n => 103.99.218.34\n)\n\n => Array\n(\n => 185.32.221.247\n)\n\n => Array\n(\n => 27.97.161.12\n)\n\n => Array\n(\n => 27.62.144.214\n)\n\n => Array\n(\n => 124.253.90.151\n)\n\n => Array\n(\n => 49.36.135.69\n)\n\n => Array\n(\n => 39.40.217.106\n)\n\n => Array\n(\n => 119.152.235.136\n)\n\n => Array\n(\n => 103.91.103.226\n)\n\n => Array\n(\n => 117.222.226.93\n)\n\n => Array\n(\n => 182.190.24.126\n)\n\n => Array\n(\n => 27.97.223.179\n)\n\n => Array\n(\n => 202.137.115.11\n)\n\n => Array\n(\n => 43.242.178.130\n)\n\n => Array\n(\n => 182.189.125.232\n)\n\n => Array\n(\n => 182.190.202.87\n)\n\n => Array\n(\n => 124.253.102.193\n)\n\n => Array\n(\n => 103.75.247.73\n)\n\n => Array\n(\n => 122.177.100.97\n)\n\n => Array\n(\n => 47.31.192.254\n)\n\n => Array\n(\n => 49.149.73.185\n)\n\n => Array\n(\n => 39.57.147.197\n)\n\n => Array\n(\n => 103.110.147.52\n)\n\n => Array\n(\n => 124.253.106.255\n)\n\n => Array\n(\n => 152.57.116.136\n)\n\n => Array\n(\n => 110.38.35.102\n)\n\n => Array\n(\n => 182.18.206.127\n)\n\n => Array\n(\n => 103.133.59.246\n)\n\n => Array\n(\n => 27.97.189.139\n)\n\n => Array\n(\n => 179.61.245.54\n)\n\n => Array\n(\n => 103.240.233.176\n)\n\n => Array\n(\n => 111.88.124.196\n)\n\n => Array\n(\n => 49.146.215.3\n)\n\n => Array\n(\n => 110.39.10.246\n)\n\n => Array\n(\n => 27.5.42.135\n)\n\n => Array\n(\n => 27.97.177.251\n)\n\n => Array\n(\n => 93.177.75.254\n)\n\n => Array\n(\n => 43.242.177.3\n)\n\n => Array\n(\n => 112.196.132.97\n)\n\n => Array\n(\n => 116.75.242.188\n)\n\n => Array\n(\n => 202.8.118.101\n)\n\n => Array\n(\n => 49.36.65.43\n)\n\n => Array\n(\n => 157.37.146.220\n)\n\n => Array\n(\n => 
157.37.143.235\n)\n\n => Array\n(\n => 157.38.94.34\n)\n\n => Array\n(\n => 49.36.131.1\n)\n\n => Array\n(\n => 132.154.92.97\n)\n\n => Array\n(\n => 132.154.123.115\n)\n\n => Array\n(\n => 49.15.197.222\n)\n\n => Array\n(\n => 124.253.198.72\n)\n\n => Array\n(\n => 27.97.217.95\n)\n\n => Array\n(\n => 47.31.194.65\n)\n\n => Array\n(\n => 197.156.190.156\n)\n\n => Array\n(\n => 197.156.190.230\n)\n\n => Array\n(\n => 103.62.152.250\n)\n\n => Array\n(\n => 103.152.212.126\n)\n\n => Array\n(\n => 185.233.18.177\n)\n\n => Array\n(\n => 116.75.63.83\n)\n\n => Array\n(\n => 157.38.56.125\n)\n\n => Array\n(\n => 119.157.107.195\n)\n\n => Array\n(\n => 103.87.50.73\n)\n\n => Array\n(\n => 95.142.120.141\n)\n\n => Array\n(\n => 154.13.1.221\n)\n\n => Array\n(\n => 103.147.87.79\n)\n\n => Array\n(\n => 39.53.173.186\n)\n\n => Array\n(\n => 195.114.145.107\n)\n\n => Array\n(\n => 157.33.201.185\n)\n\n => Array\n(\n => 195.85.219.36\n)\n\n => Array\n(\n => 105.161.67.127\n)\n\n => Array\n(\n => 110.225.87.77\n)\n\n => Array\n(\n => 103.95.167.236\n)\n\n => Array\n(\n => 89.187.162.213\n)\n\n => Array\n(\n => 27.255.189.50\n)\n\n => Array\n(\n => 115.96.77.54\n)\n\n => Array\n(\n => 223.182.220.223\n)\n\n => Array\n(\n => 157.47.206.192\n)\n\n => Array\n(\n => 182.186.110.226\n)\n\n => Array\n(\n => 39.53.243.237\n)\n\n => Array\n(\n => 39.40.228.58\n)\n\n => Array\n(\n => 157.38.60.9\n)\n\n => Array\n(\n => 106.198.244.189\n)\n\n => Array\n(\n => 124.253.51.164\n)\n\n => Array\n(\n => 49.147.113.58\n)\n\n => Array\n(\n => 14.231.196.229\n)\n\n => Array\n(\n => 103.81.214.152\n)\n\n => Array\n(\n => 117.222.220.60\n)\n\n => Array\n(\n => 83.142.111.213\n)\n\n => Array\n(\n => 14.224.77.147\n)\n\n => Array\n(\n => 110.235.236.95\n)\n\n => Array\n(\n => 103.26.83.30\n)\n\n => Array\n(\n => 106.206.191.82\n)\n\n => Array\n(\n => 103.49.117.135\n)\n\n => Array\n(\n => 202.47.39.9\n)\n\n => Array\n(\n => 180.178.145.205\n)\n\n => Array\n(\n => 43.251.93.119\n)\n\n => Array\n(\n => 
27.6.212.182\n)\n\n => Array\n(\n => 39.42.156.20\n)\n\n => Array\n(\n => 47.31.141.195\n)\n\n => Array\n(\n => 157.37.146.73\n)\n\n => Array\n(\n => 49.15.93.155\n)\n\n => Array\n(\n => 162.210.194.37\n)\n\n => Array\n(\n => 223.188.160.236\n)\n\n => Array\n(\n => 47.9.90.158\n)\n\n => Array\n(\n => 49.15.85.224\n)\n\n => Array\n(\n => 49.15.93.134\n)\n\n => Array\n(\n => 107.179.244.94\n)\n\n => Array\n(\n => 182.190.203.90\n)\n\n => Array\n(\n => 185.192.69.203\n)\n\n => Array\n(\n => 185.17.27.99\n)\n\n => Array\n(\n => 119.160.116.182\n)\n\n => Array\n(\n => 203.99.177.25\n)\n\n => Array\n(\n => 162.228.207.248\n)\n\n => Array\n(\n => 47.31.245.69\n)\n\n => Array\n(\n => 49.15.210.159\n)\n\n => Array\n(\n => 42.111.2.112\n)\n\n => Array\n(\n => 223.186.116.79\n)\n\n => Array\n(\n => 103.225.176.143\n)\n\n => Array\n(\n => 45.115.190.49\n)\n\n => Array\n(\n => 115.42.71.105\n)\n\n => Array\n(\n => 157.51.11.157\n)\n\n => Array\n(\n => 14.175.56.186\n)\n\n => Array\n(\n => 59.153.16.7\n)\n\n => Array\n(\n => 106.202.84.144\n)\n\n => Array\n(\n => 27.6.242.91\n)\n\n => Array\n(\n => 47.11.112.107\n)\n\n => Array\n(\n => 106.207.54.187\n)\n\n => Array\n(\n => 124.253.196.121\n)\n\n => Array\n(\n => 51.79.161.244\n)\n\n => Array\n(\n => 103.41.24.100\n)\n\n => Array\n(\n => 195.66.79.32\n)\n\n => Array\n(\n => 117.196.127.42\n)\n\n => Array\n(\n => 103.75.247.197\n)\n\n => Array\n(\n => 89.187.162.107\n)\n\n => Array\n(\n => 223.238.154.49\n)\n\n => Array\n(\n => 117.223.99.139\n)\n\n => Array\n(\n => 103.87.59.134\n)\n\n => Array\n(\n => 124.253.212.30\n)\n\n => Array\n(\n => 202.47.62.55\n)\n\n => Array\n(\n => 47.31.219.128\n)\n\n => Array\n(\n => 49.14.121.72\n)\n\n => Array\n(\n => 124.253.212.189\n)\n\n => Array\n(\n => 103.244.179.24\n)\n\n => Array\n(\n => 182.190.213.92\n)\n\n => Array\n(\n => 43.242.178.51\n)\n\n => Array\n(\n => 180.92.138.54\n)\n\n => Array\n(\n => 111.119.187.26\n)\n\n => Array\n(\n => 49.156.111.31\n)\n\n => Array\n(\n => 
27.63.108.183\n)\n\n => Array\n(\n => 27.58.184.79\n)\n\n => Array\n(\n => 39.40.225.130\n)\n\n => Array\n(\n => 157.38.5.178\n)\n\n => Array\n(\n => 103.112.55.44\n)\n\n => Array\n(\n => 119.160.100.247\n)\n\n => Array\n(\n => 39.53.101.15\n)\n\n => Array\n(\n => 47.31.207.117\n)\n\n => Array\n(\n => 112.196.158.155\n)\n\n => Array\n(\n => 94.204.247.123\n)\n\n => Array\n(\n => 103.118.76.38\n)\n\n => Array\n(\n => 124.29.212.208\n)\n\n => Array\n(\n => 124.253.196.250\n)\n\n => Array\n(\n => 118.70.182.242\n)\n\n => Array\n(\n => 157.38.78.67\n)\n\n => Array\n(\n => 103.99.218.33\n)\n\n => Array\n(\n => 137.59.220.191\n)\n\n => Array\n(\n => 47.31.139.182\n)\n\n => Array\n(\n => 182.179.136.36\n)\n\n => Array\n(\n => 106.203.73.130\n)\n\n => Array\n(\n => 193.29.107.188\n)\n\n => Array\n(\n => 81.96.92.111\n)\n\n => Array\n(\n => 110.93.203.185\n)\n\n => Array\n(\n => 103.163.248.128\n)\n\n => Array\n(\n => 43.229.166.135\n)\n\n => Array\n(\n => 43.230.106.175\n)\n\n => Array\n(\n => 202.47.62.54\n)\n\n => Array\n(\n => 39.37.181.46\n)\n\n => Array\n(\n => 49.15.204.204\n)\n\n => Array\n(\n => 122.163.237.110\n)\n\n => Array\n(\n => 45.249.8.92\n)\n\n => Array\n(\n => 27.34.50.159\n)\n\n => Array\n(\n => 39.42.171.27\n)\n\n => Array\n(\n => 124.253.101.195\n)\n\n => Array\n(\n => 188.166.145.20\n)\n\n => Array\n(\n => 103.83.145.220\n)\n\n => Array\n(\n => 39.40.96.137\n)\n\n => Array\n(\n => 157.37.185.196\n)\n\n => Array\n(\n => 103.115.124.32\n)\n\n => Array\n(\n => 72.255.48.85\n)\n\n => Array\n(\n => 124.253.74.46\n)\n\n => Array\n(\n => 60.243.225.5\n)\n\n => Array\n(\n => 103.58.152.194\n)\n\n => Array\n(\n => 14.248.71.63\n)\n\n => Array\n(\n => 152.57.214.137\n)\n\n => Array\n(\n => 103.166.58.14\n)\n\n => Array\n(\n => 14.248.71.103\n)\n\n => Array\n(\n => 49.156.103.124\n)\n\n => Array\n(\n => 103.99.218.56\n)\n\n => Array\n(\n => 27.97.177.246\n)\n\n => Array\n(\n => 152.57.94.84\n)\n\n => Array\n(\n => 111.119.187.60\n)\n\n => Array\n(\n => 
Archive for April, 2013: scfr's Personal Finance Blog

## Every Once In A While I Don't Bargain Shop

April 14th, 2013 at 04:20 pm

Usually when I have a purchase to make I take the time to shop around for the best deal.
But not always.

On Friday I found out that a neighbor's dog (who we jokingly called the "girlfriend" of our late pooch) was diagnosed with cancer and is going to have an eye removed on Monday. She's a sweet little dog and the owners are fantastic people. To illustrate how kind-hearted they are, when our "boy" died they made a donation to the animal welfare group where I volunteer.

Yesterday I went to a local doggy boutique and bought a stuffed animal and bag of treats, paying full retail price. This morning I wrapped them up and will deliver this afternoon when I take my walk. No time to bargain shop this time.

People and pets first, then money, then things.
https://www.edx.org/course/introduction-to-probability?hs_analytics_source=referrals&utm_source=mooc.org&utm_medium=referral&utm_campaign=mooc.org-course-list
"# Introduction to Probability\n\nLearn probability, an essential language and set of tools for understanding data, randomness, and uncertainty.\n\n### There is one session available:\n\n129,037 already enrolled! After a course session ends, it will be archived.\nStarts Nov 24\nEstimated 10 weeks\n5–10 hours per week\nSelf-paced\nFree\n\nProbability and statistics help to bring logic to a world replete with randomness and uncertainty. This course will give you tools needed to understand data, science, philosophy, engineering, economics, and finance. You will learn not only how to solve challenging technical problems, but also how you can apply those solutions in everyday life.\n\nWith examples ranging from medical testing to sports prediction, you will gain a strong foundation for the study of statistical inference, stochastic processes, randomized algorithms, and other subjects where probability is needed.\n\n### At a glance\n\n• Institution: HarvardX\n• Subject: Data Analysis & Statistics\n• Level: Intermediate\n• Prerequisites:\n\nFamiliarity with U.S. high school level algebra concepts; Single-variable calculus: familiarity with matrices. 
derivatives and integrals.\n\nNot all units require Calculus, the underlying concepts can be learned concurrently with a Calculus course or on your own for self-directed learners.\n\nUnits 1-3 require no calculus or matrices; Units 4-6 require some calculus, no matrices; Unit 7 requires matrices, no calculus.\n\nPrevious probability or statistics background not required.\n\n• Language: English\n• Video Transcript: English\n\n# What you'll learn\n\nSkip What you'll learn\n• How to think about uncertainty and randomness\n• How to make good predictions\n• The story approach to understanding random variables\n• Common probability distributions used in statistics and data science\n• Methods for finding the expected value of a random quantity\n• How to use conditional probability to approach complicated problems\n\n# Syllabus\n\nSkip Syllabus\n• Unit 0: Introduction, Course Orientation, and FAQ\n• Unit 1: Probability, Counting, and Story Proofs\n• Unit 2: Conditional Probability and Bayes' Rule\n• Unit 3: Discrete Random Variables\n• Unit 4: Continuous Random Variables\n• Unit 5: Averages, Law of Large Numbers, and Central Limit Theorem\n• Unit 6: Joint Distributions and Conditional Expectation\n• Unit 7: Markov Chains"
https://devwrench.wordpress.com/tag/numbers/
"# Generating Roman numerals in Vala\n\nI have recently been learning and enjoying the Vala programming language. I am writing a lightweight markdown-to-PDF converter and wanted to be able to automatically number list items in Roman numerals. Here, in case anyone wants it, is Knuth’s algorithm for producing the Roman numeral for a number. I converted this to Vala from the original WEB source, part of TeX, as quoted by Hans Wennborg. Enjoy!\n\n```string roman(uint num)\n{\n// Knuth's algorithm for Roman numerals, from TeX. Quoted by\n// Hans Wennborg at https://www.hanshq.net/roman-numerals.html.\n// Converted to Vala by Chris White (github.com/cxw42). CC-BY 4.0 Intl.\n\nvar sb = new StringBuilder();\n\nstring control = \"m2d5c2l5x2v5i\";\nint j, k; // mysterious indices into `control`\nuint u, v; // mysterious numbers\nj = 0;\nv = 1000;\n\nwhile(true) {\nwhile(num >= v) {\nsb.append_c(control[j]);\nnum -= v;\n}\nif(num <= 0) { // nonpositive input produces no output\nbreak;\n}\n\nk = j+2;\nu = v / control[k-1].digit_value();\nif(control[k-1] == '2') {\nk += 2;\nu /= control[k-1].digit_value();\n}\n\nif(num+u >= v) {\nsb.append_c(control[k]);\nnum += u;\n} else {\nj += 2;\nv /= control[j-1].digit_value();\n}\n}\n\nreturn sb.str;\n} // roman()\n```\n\n(not extensively tested — use at your own risk. No warranty. License details here.)"
https://www.edumple.com/cbse-class-6/mathmatics/using-brackets/notes/rc@gm_1130
USING BRACKETS

Meera bought 6 notebooks from the market and the cost was Rs 10 per notebook. Her sister Seema also bought 7 notebooks of the same type. Find the total money they paid.

Seema calculated the amount like this:
6 × 10 + 7 × 10 = 60 + 70 = 130
Meera calculated the amount like this:
6 + 7 = 13 and 13 × 10 = 130
Ans. Rs 130.

We can see that Seema's and Meera's ways of getting the answer are a bit different.
To avoid confusion in such cases we may use brackets. We can pack the numbers 6 and 7 together using a bracket, indicating that the pack is to be treated as a single number.
Thus, the answer is found by (6 + 7) × 10 = 13 × 10.
First, turn everything inside the brackets ( ) into a single number, and then do the operation outside, which in this case is to multiply by 10.

FACE VALUE
The face value of a digit in a numeral is the value of the digit itself, at whatever place it may be.

PLACE VALUE
The place value of a digit in a given number is the value the digit has because of its place, or position, in the number.
Place value of a digit = Face value of the digit × value of the place

Place value and face value:
Every digit has two values, the place value and the face value. The face value of a digit does not change, while its place value changes according to its position in the number.

Expanded form of a number:
If we express a given number as the sum of its place values, it is called its expanded form.

Ex: Express (i) 3,64,029 (ii) 2,75,00,386 in expanded form.
Sol. (i)
Place value of 3 = 3 × 100000
Place value of 6 = 6 × 10000
Place value of 4 = 4 × 1000
Place value of 0 = 0 × 100
Place value of 2 = 2 × 10
Place value of 9 = 9 × 1
The expanded form of 3,64,029 is
3 × 100000 + 6 × 10000 + 4 × 1000 + 0 × 100 + 2 × 10 + 9 × 1.
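The expanded-form procedure is mechanical enough to script. A small sketch (the function name is mine, not from the lesson):

```python
def expanded_form(n):
    """Write n as the sum of its digits times their place values."""
    digits = str(n)
    terms = []
    for i, d in enumerate(digits):
        place = 10 ** (len(digits) - 1 - i)  # place value of this position
        terms.append(f"{d} x {place}")
    return " + ".join(terms)

print(expanded_form(364029))
# 3 x 100000 + 6 x 10000 + 4 x 1000 + 0 x 100 + 2 x 10 + 9 x 1
```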
https://www.calculushowto.com/finite-calculus/
# Finite Calculus (Calculus of Finite Differences): Definition, Example

Finite calculus (also called the calculus of finite differences) is an alternative to the "usual" differential calculus of infinitesimals that deals with discrete values. It's called "finite" calculus because each difference is made up of a fixed (a.k.a. finite) set of terms.

Finite calculus is useful for many practical areas in science, including:

- Modeling and spreadsheets,
- Marginal economic analysis,
- Finance,
- Growth and decay,
- General computing.

It's particularly useful for modeling human behavior and so is well-suited to many areas of social science.

## Example of Step Size in Finite Calculus

One way to think of finite calculus is that it's just calculus with infinity taken out of the picture. Instead of going all the way to the limit, finite calculus stops at a certain "step size" (Hamming, 2012). Instead of instantaneous, rates of change are discrete and finite (Morris & Stark, 2015).

The step size defines the difference between the two calculus branches. For example, instead of a unit change in x (Δx) with a step size approaching zero, the step is an entire unit of "x": for example, from x to x + 1.

Example question: Find Δf(x), if f(x) = x².
Solution:
Our step size is x to x + 1, so we can find Δf(x) by subtracting:
Δf(x) = (x + 1)² – x² = 2x + 1.

## History

"Finite calculus" was the precursor to differential calculus and dates back to Brook Taylor (1717) and James Stirling (1730) (Antosiewicz, 1977).

## References

Antosiewicz, H. (1977). Studies in Ordinary Differential Equations, Volume 14. Mathematical Association of America.
Gleich, D. Finite Calculus: A Tutorial for Solving Nasty Sums. Retrieved October 27, 2019 from: https://www.cs.purdue.edu/homes/dgleich/publications/Gleich%202005%20-%20finite%20calculus.pdf
Hamming, R. (2012). Numerical Methods for Scientists and Engineers. Courier Corporation.
Morris, C. & Stark, R. (2015). Fundamentals of Calculus. John Wiley & Sons.
Watkins, T. The Summation of Series Using the Anti-Differencing Operation. Retrieved September 1, 2020 from: http://www.sjsu.edu/faculty/watkins/antidiff.htm
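The worked example translates directly into code. A sketch of the forward-difference operator Δ with unit step (names are mine, not from the article):

```python
def forward_difference(f, step=1):
    """Return the function Δf, where (Δf)(x) = f(x + step) - f(x)."""
    return lambda x: f(x + step) - f(x)

# Δ applied to f(x) = x^2 with unit step gives 2x + 1, as in the example:
df = forward_difference(lambda x: x ** 2)
print(df(3))  # (4**2 - 3**2) = 7 = 2*3 + 1
```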
https://jankariportal.com/how-many-feet-in-one-yard/
# How many Feet in One Yard?

The yard is a unit of length used in both the British and American imperial systems of measurement. It equals 3 feet or 36 inches, and was fixed by international agreement in 1959 at exactly 0.9144 meter.

A yard is a linear unit of measurement, and one yard is equal to three feet. (A square foot, by contrast, is a unit of measurement of area.)

How many feet in one yard?
3 feet is equal to 1 yard.

How many inches in one yard?
36 inches is equal to 1 yard.

How many meters in one yard?
0.9144 meter (exactly, by the 1959 definition) is equal to 1 yard.

How many centimeters in one yard?
91.44 centimeters is equal to 1 yard.
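Each of these conversions is just multiplication by a fixed factor; a sketch (names are mine):

```python
# Conversion factors from yards; meters/centimeters are exact by the 1959 agreement.
YARD_FACTORS = {
    "feet": 3.0,
    "inches": 36.0,
    "meters": 0.9144,
    "centimeters": 91.44,
}

def convert_yards(yards, unit):
    """Convert a length in yards to the requested unit."""
    return yards * YARD_FACTORS[unit]

print(convert_yards(2, "feet"))  # 6.0
```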
https://mslsec.com/the-difference-between-total-actual-costs-and-total-standard-costs/
The difference between total actual costs and total standard costs

Match the following statements to the appropriate terms.

1. The difference between actual overhead incurred and overhead budgeted for the standard hours allowed.
2. The hours that should have been worked for the units produced.
3. The difference between the actual quantity times the actual price and the actual quantity times the standard price.
4. The difference between total actual costs and total standard costs.
5. The difference between actual hours times the standard rate and standard hours times the standard rate.
6. Predetermined unit costs that are measures of performance.
7. The difference between normal capacity hours and standard hours allowed, times the fixed overhead rate.
8. Standards based on an efficient level of performance that are attainable under expected operating conditions.
9. Standards based on the optimum level of performance under perfect operating conditions.
10. A double-entry system of accounting in which standard costs are used in making entries and variances are recognized in the accounts.

Terms: Overhead controllable variance; Standard costs; Normal standards; Ideal standards; Materials price variance; Labor quantity variance; Standard cost accounting system; Overhead volume variance; Standard hours allowed; Variances.

The budgeted overhead costs for standard hours allowed and the overhead costs applied to the product are the same amount:
(a) for variable overhead costs;
(b) only when standard hours allowed are less than normal capacity;
(c) for fixed overhead costs;
(d) for both variable and fixed overhead costs.
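Several of the variance definitions in the exercise are simple formulas. A sketch of three of them in Python (function names and sample figures are mine, not from the exercise):

```python
def materials_price_variance(actual_qty, actual_price, standard_price):
    """(AQ x AP) - (AQ x SP): actual quantity at actual price minus
    actual quantity at standard price."""
    return actual_qty * (actual_price - standard_price)

def labor_quantity_variance(actual_hours, standard_hours, standard_rate):
    """(AH x SR) - (SH x SR): actual hours minus standard hours, at the
    standard rate."""
    return (actual_hours - standard_hours) * standard_rate

def total_variance(total_actual_cost, total_standard_cost):
    """The difference between total actual costs and total standard costs."""
    return total_actual_cost - total_standard_cost

# A positive result is an unfavorable variance, a negative one favorable.
print(materials_price_variance(100, 5.50, 5.00))  # 50.0
```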
https://chem.libretexts.org/Courses/University_of_California_Davis/UCD_Chem_002A/UCD_Chem_2A/Homework/Exercises/Exercise_01B
# Exercise 1B

Solutions, hints and answers can be found in the Solutions Page. It is strongly recommended that students do not look at the solutions until after completing or attempting to complete the problems. Going straight to the solutions will reduce the point of the exercises.

## 2.1

J.J. Thomson and Ernest Rutherford had different understandings of what the structure of an atom actually looks like. What did each of them postulate about the structure of the atom?

## 2.2

Give the symbol of each of the following ions:

1. An ion with 20 protons, 20 neutrons, 18 electrons
2. An ion with 28 protons, 30 neutrons, 26 electrons
3. An ion with 28 protons, 30 neutrons, 24 electrons

## 2.3

Bromine has two stable isotopes: $$\ce{^{79}Br}$$ and $$\ce{^{81}Br}$$. The mass of $$\ce{^{81}Br}$$ is 80.92 u. The proportion of $$\ce{^{79}Br}$$ is 50.69%. The average atomic mass is 79.91 u. What is the mass of $$\ce{^{79}Br}$$?

## 2.4

Choose the true statement(s) and correct the wrong statement(s).

1. All particles in the nucleus of an atom are positively charged.
2. In an atom, electrons orbit around the nucleus.
3. The mass of the nucleus is approximately equal to the combined mass of its protons and neutrons.
4. The number of electrons equals the number of neutrons.
5. The mass of an electron is approximately equal to the mass of a proton.

Exercise 1B is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts.
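Problem 2.3 is a weighted-average calculation, avg = a₇₉·m₇₉ + (1 − a₇₉)·m₈₁, solved for m₇₉. A quick sketch (not part of the exercise; the function name is mine):

```python
def solve_isotope_mass(avg_mass, abundance_1, mass_2):
    """Solve avg = a1*m1 + (1 - a1)*m2 for the unknown isotope mass m1."""
    return (avg_mass - (1.0 - abundance_1) * mass_2) / abundance_1

# Bromine: 50.69% of 79Br (unknown mass), the rest 81Br at 80.92 u, average 79.91 u
m_br79 = solve_isotope_mass(79.91, 0.5069, 80.92)
print(round(m_br79, 2))  # 78.93
```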
https://de.mathworks.com/matlabcentral/cody/problems/419-acidity-of-vinegar-and-salts
Cody

# Problem 419. Acidity of vinegar and salts

Assume pH (potentia hydrogenii) = −log10(H+ ionic concentration in mol/liter). For a buffer solution containing acetic acid and sodium acetate, according to the Henderson–Hasselbalch equation, the solution will have pH = 4.7 + log10(concentration of sodium acetate / concentration of acetic acid). If some sodium hydroxide (NaOH) is added, that much acetic acid (mol for mol) will be converted to sodium acetate. Your function should output the final pH for a small input of NaOH in mol.

For example, when the NaOH added is 0 mol to a solution containing 0.1 mol acetic acid and 0.1 mol sodium acetate, the final pH is 4.7.

### Solution Stats

45.79% Correct | 54.21% Incorrect
Last Solution submitted on May 15, 2018
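A sketch of the requested function in Python rather than MATLAB (my implementation, using the conventional Henderson–Hasselbalch form pH = pKa + log10([salt]/[acid]); valid while 0 ≤ NaOH < moles of acid):

```python
import math

def buffer_ph(naoh_mol, acid_mol=0.1, acetate_mol=0.1, pka=4.7):
    """pH of an acetic acid / sodium acetate buffer after adding NaOH.
    Each mol of NaOH converts one mol of acetic acid into sodium acetate."""
    acid = acid_mol - naoh_mol
    acetate = acetate_mol + naoh_mol
    # Mole ratio equals concentration ratio since both share the same volume.
    return pka + math.log10(acetate / acid)

print(buffer_ph(0.0))  # 4.7
```

Adding base shifts the ratio toward acetate, so the pH rises, as a buffer should behave.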
https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Thermodynamics_and_Chemical_Equilibrium_(Ellgen)/11%3A_The_Third_Law_Absolute_Entropy_and_the_Gibbs_Free_Energy_of_Formation/11.13%3A_Absolute_Zero_is_Unattainable
# 11.13: Absolute Zero is Unattainable

The third law postulates that the entropy of a substance is always finite and that it approaches a constant as the temperature approaches zero. The value of this constant is independent of the values of any other state functions that characterize the substance. For any given substance, we are free to assign an arbitrarily selected value to the zero-temperature limiting value. However, we cannot assign arbitrary zero-temperature entropies to all substances. The set of assignments we make must be consistent with the experimentally observed zero-temperature limiting values of the entropy changes of reactions among different substances. For perfectly crystalline substances, these reaction entropies are all zero.
We can satisfy this condition by assigning an arbitrary value to the zero-temperature molar entropy of each element and stipulating that the zero-temperature entropy of any compound is the sum of the zero-temperature entropies of its constituent elements. This calculation is greatly simplified if we let the zero-temperature entropy of every element be zero. This is the essential content of the third law.

The Lewis and Randall statement incorporates this selection of the zero-entropy reference state for entropies, specifying it as "a crystalline state" of each element at zero degrees. As a result, the entropy of any substance at zero degrees is greater than or equal to zero. That is, the Lewis and Randall statement includes a convention that fixes the zero-temperature limiting value of the entropy of any substance. In this respect, the Lewis and Randall statement makes an essentially arbitrary choice that is not an intrinsic property of nature. We see, however, that it is an overwhelmingly convenient choice.

We have discussed alternative statements of the first and second laws. A number of alternative statements of the third law are also possible. We consider the following:

It is impossible to achieve a temperature of absolute zero.

This statement is more general than the Lewis and Randall statement. If we consider the application of this statement to the temperatures attainable in processes involving a single substance, we can show that it implies, and is implied by, the Lewis and Randall statement.

The properties of the heat capacity, $$C_P$$, play a central role in these arguments. We have seen that $$C_P$$ is a function of temperature. While it is not useful to do so, we can apply the defining relationship for $$C_P$$ to a substance undergoing a phase transition and find $$C_P=\infty$$.
If we think about a substance whose heat capacity is less than zero, we encounter a contradiction of our basic ideas about heat and temperature: if $$q>0$$ and $${q}/{\Delta T}<0$$, we must have $$\Delta T<0$$; that is, heating the substance causes its temperature to decrease. In short, the theory we have developed embeds premises that require $$C_P>0$$ for any system on which we can make measurements.

Let us characterize a pure-substance system by its pressure and temperature and consider reversible constant-pressure processes in which only pressure–volume work is possible. Then $${\left({\partial S}/{\partial T}\right)}_P={C_P}/{T}$$ and $$dS={C_PdT}/{T}$$. We now want to show that the Lewis and Randall stipulation that the entropy is always finite requires that the heat capacity go to zero when the temperature goes to zero. (Since we are going to show that the third law prohibits measurements at absolute zero, this conclusion is consistent with our conclusion in the previous paragraph.) That the heat capacity goes to zero when the temperature goes to zero is evident from

$S\left(T\right)-S\left(T^*\right)=\int^T_{T^*}{\frac{C_P}{T}dT} \nonumber$

If $$C_P$$ does not go to zero when the temperature goes to zero, this entropy difference becomes arbitrarily large as $$T^*$$ goes to zero, which contradicts the Lewis and Randall statement.

To develop this result more explicitly, we let the heat capacities at temperatures $$T$$ and zero be $$C_P\left(T\right)$$ and $$C_P\left(0\right)$$, respectively. Since $$C_P\left(T\right)>0$$ for any $$T>0$$, we have $$S\left(T\right)-S\left(T^*\right)>0$$ for any $$T>T^*>0$$.
Since the entropy is always finite, $$\infty >S\left(T\right)-S\left(T^*\right)>0$$, so that

$\infty >{\mathop{\mathrm{lim}}_{T^*\to 0} \left[S\left(T\right)-S\left(T^*\right)\right]\ }>0 \nonumber$

and

$\infty >{\mathop{\mathrm{lim}}_{T^*\to 0} \int^T_{T^*}{\frac{C_P}{T}}\ }dT>0 \nonumber$

For temperatures in the neighborhood of zero, we can expand the heat capacity, to arbitrary accuracy, as a Taylor series polynomial in $$T$$:

$C_P\left(T\right)=C_P\left(0\right)+\left(\frac{\partial C_P\left(0\right)}{\partial T}\right)_PT +\frac{1}{2} \left(\frac{{\partial }^2C_P\left(0\right)}{\partial T^2}\right)_PT^2+\dots \nonumber$

Integrating term by term, the inequalities become

$\infty >{\mathop{\mathrm{lim}}_{T^*\to 0} \left\{C_P\left(0\right){ \ln \frac{T}{T^*}\ }+{\left(\frac{\partial C_P\left(0\right)}{\partial T}\right)}_P\left(T-T^*\right)+\frac{1}{4}{\left(\frac{{\partial }^2C_P\left(0\right)}{\partial T^2}\right)}_P\left(T^2-{T^*}^2\right)+\dots \right\}\ }>0 \nonumber$

Only the logarithmic term diverges as $$T^*\to 0$$, so the condition on the left requires $$C_P\left(0\right)=0$$.

We could view the third law as a statement about the heat capacities of pure substances. We infer not only that $$C_P>0$$ for all $$T>0$$, but also that

${\mathop{\mathrm{lim}}_{T\to 0} \left(\frac{C_P}{T}\right)=0\ } \nonumber$

More generally, we can infer corresponding assertions for closed reversible systems that are not pure substances: $${\left({\partial H}/{\partial T}\right)}_P>0$$ for all $$T>0$$, and $${\mathop{\mathrm{lim}}_{T\to 0} T^{-1}{\left({\partial H}/{\partial T}\right)}_P=0\ }$$. (The zero-temperature entropies of such systems are not zero, however.) In the discussion below, we describe the system as a pure substance. We can make essentially the same arguments for any system; we need only replace $$C_P$$ by $${\left({\partial H}/{\partial T}\right)}_P$$.
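The divergence argument can be checked numerically. The sketch below (mine, not part of the original text) estimates the entropy integral for a heat capacity that vanishes at zero temperature (a Debye-like $$C_P \propto T^3$$) and for one with $$C_P\left(0\right)>0$$; the first integral stays finite as the lower limit shrinks, while the second grows like $$\ln\left({T}/{T^*}\right)$$:

```python
def entropy_integral(cp, t_lo, t_hi, n=200_000):
    """Trapezoidal estimate of the integral of C_P(T)/T from t_lo to t_hi."""
    h = (t_hi - t_lo) / n
    total = 0.5 * (cp(t_lo) / t_lo + cp(t_hi) / t_hi)
    for i in range(1, n):
        t = t_lo + i * h
        total += cp(t) / t
    return total * h

# Debye-like solid: C_P = T**3 vanishes at T = 0, so the integral converges (to 1/3 here)
debye = entropy_integral(lambda t: t ** 3, 1e-6, 1.0)

# Constant C_P(0) > 0: the estimate keeps growing as the lower limit approaches zero
s_a = entropy_integral(lambda t: 1.0, 1e-3, 1.0)
s_b = entropy_integral(lambda t: 1.0, 1e-6, 1.0)
```

Pushing `t_lo` lower makes `s_b` grow without bound, which is the contradiction with a finite entropy that forces $$C_P\left(0\right)=0$$.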
The Lewis and Randall statement asserts that the entropy goes to a constant at absolute zero, irrespective of the values of any other thermodynamic functions. It follows that the entropy at zero degrees is independent of the value of the pressure. For any two pressures, $$P_1$$ and $$P_2$$, we have $$S\left(P_2,0\right)-S\left(P_1,0\right)=0$$. Letting $$P=P_1$$ and $$P_2=P+\Delta P$$, we have

$\frac{S\left(P+\Delta P,0\right)-S\left(P,0\right)}{\Delta P}=0 \nonumber$

for any $$\Delta P$$. Hence, we have

${\left(\frac{\partial S}{\partial P}\right)}_{T=0}=0 \nonumber$

In Chapter 10, we find $${\left({\partial S}/{\partial }P\right)}_T=-{\left({\partial V}/{\partial T}\right)}_P$$, so both the entropy and the volume approach their zero-temperature values asymptotically.

When we say that absolute zero is unattainable, we mean that no system can undergo any change in which its final temperature is zero. To see why absolute zero must be unattainable, let us consider processes that can decrease the temperature of a system. In general, we have heat reservoirs available at various temperatures. We can select the available reservoir whose temperature is lowest, and bring the system to this temperature by simple thermal contact. This much is trivial; clearly, the challenge is to decrease the temperature further. To do so, we must effect some other change. Whatever this change may be, it cannot be aided by an exchange of heat with the surroundings. Once we have brought the system to the temperature of the coldest available portion of the surroundings, any further exchange of heat with the surroundings can only be counter-productive. We conclude that any process suited to our purpose must be adiabatic.
Since an adiabatic process exchanges no heat with the surroundings, $$\Delta \hat{S}=0$$.

The process must also be a possible process, so that $$\Delta S+\Delta \hat{S}\ge 0$$, and since it is adiabatic, $$\Delta S\ge 0$$. Let us consider a reversible process and an irreversible process in which the same system$${}^{2}$$ goes from the state specified by $$P_1$$ and $$T_1$$ to a second state in which the pressure is $$P_2$$. The final temperatures and the entropy changes of these processes are different. For the reversible process, $$\Delta S=0$$; we designate the final temperature as $$T_2$$. For the irreversible process, $$\Delta S>0$$; we designate the final temperature as $$T^*_2$$. As it turns out, the temperature change is less for the irreversible process than for the reversible process; that is, $$T_2-T_1<T^*_2-T_1$$. Equivalently, the reversible process reaches a lower temperature: $$T_2<T^*_2$$. From

$dS=\frac{C_P}{T}dT-{\left(\frac{\partial V}{\partial T}\right)}_PdP \nonumber$

we can calculate the entropy changes for these processes. For the reversible process, we calculate

$\Delta S^{rev}=S\left(P_2,T_2\right)-S\left(P_1,T_1\right) \nonumber$

To do so, we first calculate

${\left(\Delta S\right)}_T=S\left(P_2,T_1\right)-S\left(P_1,T_1\right) \nonumber$

for the isothermal reversible transformation from state $$P_1$$, $$T_1$$ to the state specified by $$P_2$$ and $$T_1$$. For this step, $$dT$$ is zero, and so

${\left(\Delta S\right)}_T=-\int^{P_2}_{P_1}{{\left(\frac{\partial V}{\partial T}\right)}_PdP} \nonumber$

We then calculate

${\left(\Delta S\right)}_P=S\left(P_2,T_2\right)-S\left(P_2,T_1\right) \nonumber$

for the isobaric reversible transformation from state $$P_2$$, $$T_1$$ to state $$P_2$$, $$T_2$$.
For this transformation, $$dP$$ is zero, and\n\n${\\left(\\Delta S\\right)}_P=\\int^{T_2}_{T_1}{\\frac{C_P}{T}dT} \\nonumber$\n\nThen,\n\n$\\Delta S^{rev}=S\\left(P_2,T_2\\right)-S\\left(P_1,T_1\\right)=\\int^{T_2}_{T_1}{\\frac{C_P}{T}dT}-\\int^{P_2}_{P_1}{{\\left(\\frac{\\partial V}{\\partial T}\\right)}_PdP}=0 \\nonumber$\n\nBecause $$\\Delta S^{rev}=0$$, the reversible process is unique; that is, given $$P_1$$, $$T_1$$, and $$P_2$$, the final temperature of the system is determined. We find $$T_2$$ from\n\n$\\int^{T_2}_{T_1}{\\frac{C_P}{T}dT}=\\int^{P_2}_{P_1}{{\\left(\\frac{\\partial V}{\\partial T}\\right)}_PdP} \\nonumber$\n\nTo understand the entropy change for the irreversible process, we note first that there are an infinite number of such processes. There is nothing unique about the final temperature. Given $$P_1$$, $$T_1$$, and $$P_2$$, the final temperature, $$T^*_2$$, can have any value consistent with the properties of the substance. To specify a particular irreversible process, we must specify all four of the quantities $$P_1$$, $$T_1$$, $$P_2$$, and $$T^*_2$$. Having done so, however, we can calculate the entropy change for the irreversible process,\n\n$\\Delta S^{irrev}=S\\left(P_2,T^*_2\\right)-S\\left(P_1,T_1\\right)>0 \\nonumber$\n\nby computing the entropy changes as we reversibly carry the system along the isothermal path from $$P_1$$, $$T_1$$ to $$P_2$$, $$T_1$$ and then along the isobaric path from $$P_2$$, $$T_1$$ to $$P_2$$, $$T^*_2$$. The calculation of $$\\Delta S^{irrev}$$ for this reversible path from $$P_1$$, $$T_1$$ to $$P_2$$, $$T^*_2$$ employs the same logic as the calculation, in the previous paragraph, of $$\\Delta S$$ for the reversible path from $$P_1$$, $$T_1$$ to $$P_2$$, $$T_2$$. The difference is that $$T^*_2$$ replaces $$T_2$$ as the upper limit in the temperature integral. The pressure integral is the same. 
We have\n\n$\\Delta S^{irrev}=S\\left(P_2,T^*_2\\right)-S\\left(P_1,T_1\\right)=\\int^{T^*_2}_{T_1}{\\frac{C_P}{T}dT}-\\int^{P_2}_{P_1}{{\\left(\\frac{\\partial V}{\\partial T}\\right)}_PdP} >0 \\nonumber$\n\nFrom $$\\Delta S^{irrev}>\\Delta S^{rev}$$, we have\n\n$\\int^{T^*_2}_{T_1}{\\frac{C_P}{T}dT}>\\int^{T_2}_{T_1}{\\frac{C_P}{T}dT} \\nonumber$\n\nSince the integrands are the same and positive, it follows that $$T^*_2>T_2$$, as asserted above.\n\nFigure 6 shows the relationships among the various quantities discussed in this argument. In the first instance, Figure 6 shows a plot of two of the system’s isobars in temperature–entropy space. That is, the line labeled $$P=P_1$$ depicts the set of temperature–entropy points at which the equilibrated system has pressure $$P_1$$; the line labeled $$P=P_2$$ depicts the equilibrium positions at pressure $$P_2$$. Other lines in this sketch represent paths along which the system can undergo reversible changes at constant entropy or constant temperature. The dotted line represents the irreversible process in which the system goes from the state specified by $$P_1$$, $$T_1$$ to the state specified by $$P_2$$, $$T^*_2$$. This line is dotted to represent the fact that the system’s temperature may not be well defined during the irreversible process.\n\nEffective cooling can be achieved using pressure changes if the system is a gas. However, for liquids and solids, $${\\left({\\partial V}/{\\partial T}\\right)}_P$$ is small; consequently, the temperature change for a reversible pressure change is also small. At temperatures near absolute zero, nearly all substances are solid; to achieve effective cooling we must change a thermodynamic variable for which a solid’s temperature coefficient is as large as possible. To consider the general problem of decreasing the temperature of a system by varying something other than pressure, we must consider a system in which some form of non-pressure–volume work is possible. 
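As a concrete illustration of solving the reversible-temperature condition $\int^{T_2}_{T_1} (C_P/T)\,dT = \int^{P_2}_{P_1} (\partial V/\partial T)_P\,dP$ for $T_2$, the following sketch (not part of the original text) assumes one mole of an ideal gas with constant $C_P$, for which $(\partial V/\partial T)_P = R/P$; the condition then integrates to $C_P \ln(T_2/T_1) = R \ln(P_2/P_1)$, i.e. the familiar reversible adiabat $T_2 = T_1 (P_2/P_1)^{R/C_P}$:

```python
R = 8.314  # gas constant, J/(mol K)

def final_temp_reversible(T1, P1, P2, Cp):
    """Solve Cp*ln(T2/T1) = R*ln(P2/P1) for T2 (one mole of ideal gas, constant Cp)."""
    return T1 * (P2 / P1) ** (R / Cp)

# Monatomic ideal gas, Cp = 5R/2; reversible adiabatic expansion from 10 bar to 1 bar at 300 K.
T2 = final_temp_reversible(300.0, 10.0, 1.0, 2.5 * R)
print(round(T2, 1))  # ≈ 119.4 K: the pressure drop cools the gas substantially
```

This is why pressure changes cool a gas effectively, while for solids the small $(\partial V/\partial T)_P$ makes the right-hand integral, and hence the temperature change, small.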
Such a system is subject to an additional force, and its energy changes as this force changes.\n\nThe practical method by which extremely low temperatures are achieved is called adiabatic demagnetization. This method exploits the properties of paramagnetic solids. In such solids, unpaired electrons localized on individual atoms give rise to a magnetic moment. Quantum mechanics leads to important conclusions about the interaction between such magnetic moments and an applied magnetic field: In an applied magnetic field, the magnetic moment of an individual atom is quantized. In the simplest case, it can be aligned in only two directions; it must be either parallel or anti-parallel to the applied magnetic field. When an atom’s magnetic moment is parallel to the magnetic field the energy of the system is less than when the alignment is anti-parallel. The applied magnetic field exerts a force on the magnetic moments associated with individual atoms. The energy of the system depends on the magnitude of the applied magnetic field.\n\nRather than focus on the particular case of adiabatic demagnetization, let us consider the energy and entropy changes associated with changes in a generalized potential, $${\\mathit{\\Phi}}_{\\theta }$$, and its generalized displacement, $$\\theta$$. (For adiabatic demagnetization, $$\\theta$$ would be the applied magnetic field.) Three variables are required to describe reversible changes in this system. We can express the energy and entropy as functions of temperature, pressure, and $$\\theta$$:\n\n$$E=E\\left(T,P,\\theta \\right)$$ and $$S=S\\left(T,P,\\theta \\right)$$. The total differential of the entropy includes a term that specifies the dependence of entropy on $$\\theta$$. 
We have\n\n$dS={\\left(\\frac{\\partial S}{\\partial T}\\right)}_{P,\\theta }dT+{\\left(\\frac{\\partial S}{\\partial P}\\right)}_{T,\\theta }dP+{\\left(\\frac{\\partial S}{\\partial \\theta }\\right)}_{T,P}d\\theta =\\frac{C\\left(T,P,\\theta \\right)}{T}dT-{\\left(\\frac{\\partial V}{\\partial T}\\right)}_{P,\\theta }dP+{\\left(\\frac{\\partial S}{\\partial \\theta }\\right)}_{T,P}d\\theta \\nonumber$\n\nwhere we write $$C\\left(T,P,\\theta \\right)$$ to emphasize that our present purposes now require that we measure the heat capacity at constant pressure and constant $$\\theta$$.\n\nFor constant pressure, P, and constant displacement, $$\\theta$$, the entropy depends on temperature as\n\n$S\\left(T,P,\\theta \\right)=S\\left(0,P,\\theta \\right)+\\int^T_0 \\left(\\frac{\\partial S}{\\partial T}\\right)_{P,\\theta }dT=S\\left(0,P,\\theta \\right)+\\int^T_0 \\frac{C\\left(T,P,\\theta \\right)}{T}dT \\nonumber$\n\nThe postulate that the entropy be finite at any temperature implies that the pressure- and $$\\theta$$-dependent heat capacity becomes zero at absolute zero. That is, at absolute zero, the heat capacity vanishes whatever the values of P and $$\\theta$$. The argument is exactly the same as before. 
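A small numeric illustration of why the entropy can remain finite only if the heat capacity vanishes at absolute zero (this example is not from the original text; the $C \propto T^3$ form is the Debye-model low-temperature behavior, used here only as an assumed test case):

```python
import math

def S_debye(eps, T, a=1.0):
    """∫_eps^T (a t^3)/t dt = a (T^3 - eps^3)/3 — stays finite as eps → 0."""
    return a * (T**3 - eps**3) / 3.0

def S_const(eps, T, C=1.0):
    """∫_eps^T C/t dt = C ln(T/eps) — diverges as eps → 0 if C(0) > 0."""
    return C * math.log(T / eps)

for eps in (1e-2, 1e-4, 1e-8):
    print(f"eps={eps:g}  C∝T^3: {S_debye(eps, 1.0):.6f}  constant C: {S_const(eps, 1.0):.2f}")
```

The first column converges to $1/3$, while the second grows without bound, which is the content of the requirement $C(0,P,\theta)=0$.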
Earlier, we wrote $$C_P\\left(0\\right)=0$$; for the present generalized case, we write $$C\\left(0,P,\\theta \\right)=0$$.\n\nSimilarly, from the postulate that the entropy goes to a constant at absolute zero for all values of the other thermodynamic variables, it follows that, for any two pressures $$P_1$$ and $$P_2$$, and for any two values of the generalized displacement, $${\\theta }_1$$ and $${\\theta }_2$$,\n\n$S\\left(0,P_1,{\\theta }_1\\right)=S\\left(0,P_2,{\\theta }_1\\right)=S\\left(0,P_1,{\\theta }_2\\right)=S\\left(0,P_2,{\\theta }_2\\right)=0 \\nonumber$\n\nand hence that\n\n${\\left(\\frac{\\partial S}{\\partial P}\\right)}_{T=0,\\theta }={\\left(\\frac{\\partial S\\left(0,P,\\theta \\right)}{\\partial P}\\right)}_{T,\\theta }=0 \\nonumber$ and ${\\left(\\frac{\\partial S}{\\partial \\theta }\\right)}_{T=0,P}={\\left(\\frac{\\partial S\\left(0,P,\\theta \\right)}{\\partial \\theta }\\right)}_{T,P}=0 \\nonumber$\n\nWe want to consider a process in which a system goes from the lowest temperature available in the surroundings to a still lower temperature. To minimize the final temperature, this process must be carried out adiabatically. It must also be a possible process, so that $$dS\\ge 0$$. For simplicity, let us now assume that we carry out this process at a constant pressure, $$P$$, and that the system goes from the state specified by $$P$$, $$T_1$$, $${\\theta }_1$$ to the state specified by $$P$$, $$T_2$$, $${\\theta }_2$$ where $$T_1>T_2$$. 
The entropies of these two states are\n\n$S\\left(T_1,P,{\\theta }_1\\right)=S\\left(0,P,{\\theta }_1\\right)+\\int^{T_1}_0{\\frac{C\\left(T,P,{\\theta }_1\\right)}{T}}dT \\nonumber$ and $S\\left(T_2,P,{\\theta }_2\\right)=S\\left(0,P,{\\theta }_2\\right)+\\int^{T_2}_0{\\frac{C\\left(T,P,{\\theta }_2\\right)}{T}}dT \\nonumber$\n\nThe entropy change for this process is\n\n$S\\left(T_2,P,{\\theta }_2\\right)-S\\left(T_1,P,{\\theta }_1\\right)=S\\left(0,P,{\\theta }_2\\right)-S\\left(0,P,{\\theta }_1\\right) \\nonumber$ $+\\int^{T_2}_0{\\frac{C\\left(T,P,{\\theta }_2\\right)}{T}}dT-\\int^{T_1}_0{\\frac{C\\left(T,P,{\\theta }_1\\right)}{T}}dT\\ge 0 \\nonumber$\n\nNow, let us suppose that the final temperature is zero; that is, $$T_2=0$$, so that\n\n$\\int^{T_2}_0{\\frac{C\\left(T,P,{\\theta }_2\\right)}{T}}dT=0 \\nonumber$ It follows that $S\\left(0,P,{\\theta }_2\\right)-S\\left(0,P,{\\theta }_1\\right)\\ge \\int^{T_1}_0{\\frac{C\\left(T,P,{\\theta }_1\\right)}{T}}dT>0 \\nonumber$\n\nwhere the inequality on the right follows from the fact that $$C\\left(T,P,{\\theta }_1\\right)>0$$. Then, it follows that\n\n$S\\left(0,P,{\\theta }_2\\right)-S\\left(0,P,{\\theta }_1\\right)>0 \\nonumber$\n\nwhich contradicts the Lewis and Randall statement of the third law. The assumption that the system can reach absolute zero leads to a contradiction of the Lewis and Randall statement of the third law. Therefore, if the Lewis and Randall statement is true, absolute zero is unattainable.\n\nThe converse applies also; that is, from the proposition that absolute zero is unattainable, we can show that the Lewis and Randall statement is true. 
To do so, we rearrange the above equation for $$\\Delta S$$,\n\n$\\int^{T_2}_0{\\frac{C\\left(T,P,{\\theta }_2\\right)}{T}}dT\\ge \\nonumber$ $\\int^{T_1}_0{\\frac{C\\left(T,P,{\\theta }_1\\right)}{T}}dT-S\\left(0,P,{\\theta }_2\\right)+S\\left(0,P,{\\theta }_1\\right) \\nonumber$\n\nIf we now assume that the Lewis and Randall statement is false, the expression on the right can be less than or equal to zero. The integral on the left can then be zero, in which case the system can reach absolute zero. If the Lewis and Randall statement is false, it is true that the system can reach absolute zero. Therefore: If the system cannot reach absolute zero, the Lewis and Randall statement is true.\n\nFigures 7 and 8 depict these ideas using contour plots in temperature–entropy space. Each figure shows two contour lines. One of these contour lines is a set of temperature and entropy values along which the pressure is constant at $$P$$ and $$\\theta$$ is constant at $${\\theta }_1$$. The other contour line is a set of temperature and entropy values along which the pressure is constant at $$P$$ and $$\\theta$$ is constant at $${\\theta }_2$$. The slope of a contour line is\n\n${\\left(\\frac{\\partial T}{\\partial S}\\right)}_{P,\\theta }=\\frac{T}{C\\left(T,P,\\theta \\right)} \\nonumber$\n\nBecause the heat capacity is always positive, this slope is always positive.",
"Figure 7. Reversible change in a system that satisfies the Lewis and Randall statement.\n\nIn Figure 7, the Lewis and Randall statement is satisfied. When the temperature goes to zero, the contour lines meet at the same value of the entropy; these contours satisfy the relationship\n\n$S\\left(0,P,{\\theta }_1\\right)=S\\left(0,P,{\\theta }_2\\right) \\nonumber$\n\nAn adiabatic (vertical) path from the contour for $$P$$ and $${\\theta }_1$$ meets the contour for $$P$$ and $${\\theta }_2$$ at a positive temperature, $$T_2>0$$. Since this is evidently true for any $$P$$ and any $${\\theta }_2$$, the final state for any adiabatic process will have $$T_2>0$$. Because the Lewis and Randall statement is satisfied, the system cannot reach absolute zero, and vice versa.",
"Figure 8. Reversible change in a system that does not satisfy the Lewis and Randall statement.\n\nIn Figure 8, the Lewis and Randall statement is violated, because we have $$S\\left(0,P,{\\theta }_1\\right)<S\\left(0,P,{\\theta }_2\\right)$$. In this case, an adiabatic process initiated from a low enough initial temperature, $$T_1$$, will reach absolute zero without intersecting the contour for constant $$P$$ and $${\\theta }_2$$. Because the Lewis and Randall statement is violated, the system can reach absolute zero, and vice versa.\n\nThis page titled 11.13: Absolute Zero is Unattainable is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Paul Ellgen via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request."
https://math.stackexchange.com/questions/2066873/problem-in-solving-a-question-concerning-real-analysis?noredirect=1
"# Problem in solving a question concerning real analysis. [duplicate]\n\nThe question is :\n\nDoes there exist any function $f : \\mathbb R \\longrightarrow \\mathbb R$ such that $f(1) = 1$, $f(-1) = -1$ and $|f(x) - f(y)| \\leq |x - y|^{\\frac {3} {2}}$?\n\nIt is clear that $f$ is continuous over $\\mathbb R$ by the given condition and hence it attains all the values between $-1$ and $1$ in $(-1,1)$.Now how can I proceed?Please help me.\n\nThank you in advance.\n\n## marked as duplicate by Rohan, zhoraster, E. Joseph, Alex M., mrpDec 21 '16 at 11:42\n\nNotice that if $|f(x) - f(y)| \\leq {|x - y|}^{\\frac {3} {2}}$, then\n\n$$\\left|\\frac{f(x)-f(y)}{x-y}\\right|\\leq {|x - y|}^{\\frac {1} {2}}$$\n\nand hence $f$ is differentiable... but $f'=0$ everywhere. In other words, $f$ is constant, so one may not have $f(1)\\neq f(-1)$.\n\n• So, it is impossible to construct such a function.Isn't it? – user251057 Dec 21 '16 at 5:41\n• Yes. Functions of this kind are called $\\alpha$-Hölder continuous (here, $\\alpha$ is the exponent in the RHS). When $\\alpha >1$, the trick I used here shows they are all constant. When $\\alpha=1$, those are simply the Lipschitz functions. The interesting case is when $0<\\alpha<1$. You can read more on these functions here. – Fimpellizieri Dec 21 '16 at 5:43\n• Thanks @Fimpellizier.It really helps. – user251057 Dec 21 '16 at 6:11\n• You're welcome! – Fimpellizieri Dec 21 '16 at 6:21\n\nAnother way:\n\n$|f(x)-f(y)| \\le \\sum_{k=0}^{n-1} | f(y+ {k+1 \\over n} (x-y))-f(y+ {k \\over n} (x-y)) | \\le n ({1 \\over n}(x-y))^{3 \\over 2} = {1 \\over \\sqrt{n} } (x-y)^{3 \\over 2}$, and since $n$ is arbitrary, we see that $f(x) = f(y)$ for any $x,y$."
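A quick numeric check of the telescoping bound in the second answer (an illustration added here, not part of the original thread): splitting the interval between $y$ and $x$ into $n$ equal sub-intervals gives $|f(x)-f(y)| \le n\,(|x-y|/n)^{3/2} = |x-y|^{3/2}/\sqrt{n}$, which shrinks to $0$ as $n$ grows, forcing $f(x)=f(y)$.

```python
def telescoping_bound(x, y, n):
    """Upper bound on |f(x)-f(y)| from n sub-intervals, each contributing (|x-y|/n)^(3/2)."""
    return n * (abs(x - y) / n) ** 1.5

x, y = 3.0, -1.0
for n in (1, 100, 10_000, 1_000_000):
    print(n, telescoping_bound(x, y, n))  # equals |x-y|^(3/2)/sqrt(n), here 8/sqrt(n)
```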
https://www.math.fsu.edu/e-prints/archive/paper239.abs.html
"A discrete Kato type theorem on inviscid limit of Navier-Stokes flows\n\nWenfang (Wendy) Cheng, Xiaoming Wang\n\nThe inviscid limit of wall bounded viscous flows is one of the unanswered central questions in theoretical fluid dynamics. Here we present a result indicating the difficulty in numerical study of the problem. More precisely, we show that numerical solutions of the incompressible Navier-Stokes equations converge to the exact solution of the Euler equations at vanishing viscosity provided that small scales of the order of $\\nu/U$ in the directions tangential to the boundary in an appropriate boundary layer is not resolved in the scheme. Here $\\nu$ is the kinematic viscosity of the fluid and $U$ is the typical velocity taken to be the maximum of the shear velocity at the boundary for the inviscid flow. Such a result is somewhat surprising since such a small scale is much smaller than any of the known small scales predicted by conventional theory of turbulence and boundary layer theory. On the other hand, such a result can be viewed as a discrete version of our previous result (Wang 2001) which generalized earlier result of Kato (1984) where the relevance of a scale proportional to the kinematic viscosity to the problem of vanishing viscosity was first discovered."
https://rdrr.io/cran/prototest/man/prototest.univariate.html
"# prototest.univariate: Perform Prototype or F Tests for Significance of Groups of... In prototest: Inference on Prototypes from Clusters of Features\n\n## Description\n\nPerform prototype or F tests for significance of groups of predictors in the univariate model. Choose either exact or approximate likelihood ratio prototype tests (ELR or ALR) or F test or marginal screening prototype test. Options for selective or non-selective tests. Further options for non-sampling or hit-and-run null reference distributions for selective tests.\n\n## Usage\n\n prototest.univariate(x, y, type = c(\"ALR\", \"ELR\", \"MS\", \"F\"), selected.col = NULL, lambda, mu = NULL, sigma = 1, hr.iter = 50000, hr.burn.in = 5000, verbose = FALSE, tol = 10^-8) \n\n## Arguments\n\n x input matrix of dimension n-by-p, where p is the number of predictors in a single predetermined group of predictors. Will be mean centered and standardised before tests are performed. y response variable. Vector of length n, assumed to be quantitative. type type of test to be performed. Can only select one at a time. Options include the exact and approximate likelihood ratio prototype tests of Reid et al (2015) (ELR, ALR), the F test and the marginal screening prototype test of Reid and Tibshirani (2015) (MS). Default is ELR. selected.col preselected columns specified by user. Vector of indices in the set {1, 2, ..., p}. If specified, a non-selective (classical) version of the chosen test is performed. In particular, this means the classical chi-squared 1 reference distribution for the likelihood ratio tests and the F reference for the F test. Default is NULL, which directs the function to estimate the selected set with the lasso or the marginal screening procedure, depending on the test. lambda regularisation parameter for the lasso fit. Must be supplied when selected.col is NULL. Will be supplied to glmnet. This is the unstandardised version, equivalent to lambda/n supplied to glmnet. 
mu mean parameter for the response. See Details below. If supplied, it is first subtracted from the response to yield a mean-zero (at the population level) vector for which we proceed with testing. If NULL (the default), this parameter is treated as a nuisance parameter and accounted for as such in testing. sigma error standard deviation for the response. See Details below. Must be supplied. If not, it is assumed to be 1. Required for the computation of some of the test statistics. hr.iter number of hit-and-run samples required in the reference distribution of a selective test. Applies only if selected.col is NULL. Default is 50000. Since dependent samples are generated, large values are required to generate good reference distributions. If set to 0, the function tries to apply a non-sampling selective test (provided selected.col is NULL), if possible. If non-sampling test is not possible, the function exits with a message. hr.burn.in number of burn-in hit-and-run samples. These are generated first so as to make subsequent hit-and-run realisations less dependent on the observed response. Samples are then discarded and do not inform the null reference distribution. verbose should progress be printed? tol convergence threshold for iterative optimisation procedures.\n\n## Details\n\nThe model underpinning each of the tests is\n\n\\emph{y = mu + theta y_hat + epsilon}\n\nwhere \\emph{epsilon} is Gaussian with zero mean and variance \\emph{sigma^2} and \\emph{y_hat} depends on the particular test considered.\n\nIn particular, for the ELR, ALR and F tests, we have \\emph{y_hat = P_M(y - mu)}, where \\emph{P_M = X_MX_M^dagger} is the projection onto the column space of \\emph{X_M}. \\emph{X_M} is the input matrix reduced to the columns in the set M, which, in turn, is either provided by the user (via selected.col) or selected by the lasso (if selected.col is NULL). 
If the former, a non-selective test is performed; if the latter, a selective test is performed, with the restrictions \\emph{Ay <= b}, as set out in Lee et al (2015).\n\nFor the marginal screening prototype (MS) test, \\emph{y_hat = x_{j*}}, where \\emph{x_{j*}} is the column of x with maximal marginal correlation with the response.\n\nAll tests test the null hypothesis H_0: \\emph{theta = 0}. Details of each are described in Reid et al (2015).\n\n## Value\n\nA list with the following four components:\n\n ts The value of the test statistic on the observed data. p.val Valid p-value of the test. selected.col Vector with columns selected. If initially NULL, this will now contain indices of columns selected by the automatic column selection procedures of the test. y.hr Matrix with hit-and-run replications of the response. If a sampled selective test was not performed, this will be NULL.\n\n## Author(s)\n\nStephen Reid\n\n## References\n\nReid, S. and Tibshirani, R. (2015) Sparse regression and marginal testing using cluster prototypes. http://arxiv.org/pdf/1503.00334v2.pdf. Biostatistics doi: 10.1093/biostatistics/kxv049\nReid, S., Taylor, J. and Tibshirani, R. (2015) A general framework for estimation and inference from clusters of features. 
Available online: http://arxiv.org/abs/1511.07839.

## See Also

prototest.multivariate

## Examples

require (prototest)

### generate data
set.seed (12345)
n = 100
p = 80
X = matrix (rnorm(n*p, 0, 1), ncol=p)
beta = rep(0, p)
beta[1:3] = 0.1 # three signal variables: number 1, 2, 3
signal = apply(X, 1, function(col){sum(beta*col)})
intercept = 3
y = intercept + signal + rnorm (n, 0, 1)

### treat all columns as if in same group and test for signal
# non-selective ELR test with nuisance intercept
elr = prototest.univariate (X, y, "ELR", selected.col=1:5)
# selective F test with nuisance intercept; non-sampling
f.test = prototest.univariate (X, y, "F", lambda=0.01, hr.iter=0)
print (elr)
print (f.test)

### assume variables occur in 4 equally sized groups
num.groups = 4
groups = rep (1:num.groups, each=p/num.groups)
# selective ALR test -- select columns 21-25 in 2nd group; test for signal in 1st; hit-and-run
alr = prototest.multivariate(X, y, groups, 1, "ALR", 21:25, lambda=0.005, hr.iter=20000)
# non-selective MS test -- specify first column in each group; test for signal in 1st
ms = prototest.multivariate(X, y, groups, 1, "MS", c(1,21,41,61))
print (alr)
print (ms)

### Example output

Loading required package: intervals

Attaching package: 'Matrix'

The following object is masked from 'package:intervals':

expand

ts p.val
1 0.084 0.7722
ts p.val
1 3.51 0.694
ts p.val
1 4.147 0.0756
ts p.val
1 1.596 0.1106

prototest documentation built on May 2, 2019, 4:02 p.m."
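To make the projection in the Details section concrete: the matrix P_M = X_M X_M^dagger (with the Moore–Penrose pseudoinverse) is the orthogonal projection onto the column space of X_M. The following Python/NumPy sketch (an illustration added here, not part of the package documentation) checks the defining properties numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))   # stand-in for X_M (columns in the selected set M)
P = X @ np.linalg.pinv(X)           # P_M = X_M X_M^dagger

y = rng.standard_normal(100)
y_hat = P @ y                       # projection of a (mean-zero) response onto col(X_M)

print(np.allclose(P @ P, P))             # idempotent
print(np.allclose(P, P.T))               # symmetric
print(np.allclose(X.T @ (y - y_hat), 0)) # residual orthogonal to col(X_M)
```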
https://www.statology.org/abline-in-r/
"# How to Use abline() in R to Add Straight Lines to Plots\n\nThe abline() function in R can be used to add one or more straight lines to a plot in R.\n\nThis function uses the following syntax:\n\nabline(a=NULL, b=NULL, h=NULL, v=NULL, …)\n\nwhere:\n\n• a, b: single values that specify the intercept and slope of the line\n• h: the y-value for the horizontal line\n• v: the x-value for the vertical line\n\nThe following examples show how to use this function in practice.\n\n### How to Add Horizontal Lines\n\nThe basic code to add a horizontal line to a plot in R is: abline(h = some value)\n\nSuppose we have the following scatterplot that displays the values for x and y in a dataset:\n\n```#define dataset\ndata <- data.frame(x = c(1, 1, 2, 3, 4, 4, 5, 6, 7, 7, 8, 9, 10, 11, 11),\ny = c(13, 14, 17, 12, 23, 24, 25, 25, 24, 28, 32, 33, 35, 40, 41))\n\n#plot x and y values in dataset\nplot(data\$x, data\$y, pch = 16)```
"To add a horizontal line at the value y = 20, we can use the following code:\n\n`abline(h = 20, col = 'coral2', lwd = 2)`",
"The following code illustrates how to add a horizontal solid line at the mean value of y along with two horizontal dashed lines at one standard deviation above and below the mean value:\n\n```#create scatterplot for x and y\nplot(data\$x, data\$y, pch = 16)\n\n#create horizontal line at mean value of y\nabline(h = mean(data\$y), lwd = 2)\n\n#create horizontal lines at one standard deviation above and below the mean value\nabline(h = mean(data\$y) + sd(data\$y), col = 'steelblue', lwd = 3, lty = 2)\nabline(h = mean(data\$y) - sd(data\$y), col = 'steelblue', lwd = 3, lty = 2)```",
"### How to Add Vertical Lines\n\nThe basic code to add a vertical line to a plot in R is: abline(v = some value)\n\nThe following code illustrates how to add a vertical line at the mean value on a histogram:\n\n```#make this example reproducible\nset.seed(0)\n\n#create dataset with 1000 random values normally distributed with mean = 10, sd = 2\ndata <- rnorm(1000, mean = 10, sd = 2)\n\n#create histogram of data values\nhist(data, col = 'steelblue')\n\n#draw a vertical dashed line at the mean value\nabline(v = mean(data), lwd = 3, lty = 2)```",
"### How to Add Regression Lines\n\nThe basic code to add a simple linear regression line to a plot in R is: abline(model)\n\nThe following code illustrates how to add a fitted linear regression line to a scatterplot:\n\n```#define dataset\ndata <- data.frame(x = c(1, 1, 2, 3, 4, 4, 5, 6, 7, 7, 8, 9, 10, 11, 11),\ny = c(13, 14, 17, 12, 23, 24, 25, 25, 24, 28, 32, 33, 35, 40, 41))\n\n#create scatterplot of x and y values\nplot(data\$x, data\$y, pch = 16)\n\n#fit a linear regression model to the data\nreg_model <- lm(y ~ x, data = data)\n\n#add the fitted regression line to the scatterplot\nabline(reg_model, col=\"steelblue\")```",
"Note that we simply need a value for the intercept and the slope to fit a simple linear regression line to the data using the abline() function.\n\nThus, another way of using abline() to add a regression line is to explicitly specify the intercept and slope coefficients of the regression model:\n\n```#define dataset\ndata <- data.frame(x = c(1, 1, 2, 3, 4, 4, 5, 6, 7, 7, 8, 9, 10, 11, 11),\ny = c(13, 14, 17, 12, 23, 24, 25, 25, 24, 28, 32, 33, 35, 40, 41))\n\n#create scatterplot of x and y values\nplot(data\$x, data\$y, pch = 16)\n\n#fit a linear regression model to the data\nreg_model <- lm(y ~ x, data = data)\n\n#define intercept and slope values\na <- coefficients(reg_model)[1] #intercept\nb <- coefficients(reg_model)[2] #slope\n\n#add the fitted regression line to the scatterplot\nabline(a=a, b=b, col=\"steelblue\")```",
null,
"Notice that this produces the same line as before."
] | [
null,
"https://fourpillarfreedom.com/wp-content/uploads/2019/05/abline1.jpg",
null,
"https://fourpillarfreedom.com/wp-content/uploads/2019/05/abline2.jpg",
null,
"https://fourpillarfreedom.com/wp-content/uploads/2019/05/abline3.jpg",
null,
"https://fourpillarfreedom.com/wp-content/uploads/2019/05/abline4.jpg",
null,
"https://fourpillarfreedom.com/wp-content/uploads/2019/05/abline5.jpg",
null,
"https://fourpillarfreedom.com/wp-content/uploads/2019/05/abline5.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7117337,"math_prob":0.9978589,"size":3453,"snap":"2021-31-2021-39","text_gpt3_token_len":1076,"char_repetition_ratio":0.13134241,"word_repetition_ratio":0.37925696,"special_character_ratio":0.3382566,"punctuation_ratio":0.175,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99988914,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,7,null,7,null,7,null,7,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-19T23:13:13Z\",\"WARC-Record-ID\":\"<urn:uuid:0d954b56-9ba4-4d85-9a7e-bdaa68d58c11>\",\"Content-Length\":\"35513\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5444e497-5fb0-4f11-9273-ae53a70db6bd>\",\"WARC-Concurrent-To\":\"<urn:uuid:bb861fe3-70f4-48b3-8c3a-66298790fc4f>\",\"WARC-IP-Address\":\"35.206.100.4\",\"WARC-Target-URI\":\"https://www.statology.org/abline-in-r/\",\"WARC-Payload-Digest\":\"sha1:M2EMC5VD42XG6ZZ5WJ2U6RJ3VAJ3QAKY\",\"WARC-Block-Digest\":\"sha1:VYY3G5KYR2BZPHIXCWQSHSBFBBY7APEX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780056902.22_warc_CC-MAIN-20210919220343-20210920010343-00397.warc.gz\"}"} |
http://bestmaths.net/online/index.php/year-levels/year-11/year-11-topics/differentiation/ | [
"## Differentiation\n\nThe process of finding the gradient or derived function is called differentiation. There is a way of doing this called differentiating from first principles and this is studied in more detail in Year 12.\n\nA quicker way to differentiate polynomial funtions and terms is to use a formula or rule.\n\nFor a function f(x) the derived function is given by f '(x).\n\nAn alternative notation is that for a function y the derived function is y ' or",
null,
"The rule is:\n\n The derivative of axn = naxn − 1\n\nIn words this says that to find the derivative of a term:\n\n\"multiply the coefficient by the exponent and then lower the exponent by 1.\"\n\n Example 1 Example 2 Example 3 Example 4 Differentiate x3 6x4 5x 23 Answer 3x² 24x3 5 0\n\n### Differentiation of Polynomial functions\n\nA polynomial function is differentiated term by term.\n\nExample\n\nIf f(x) = 2x3 + 3x2 − 4x + 6\n\nthen f '(x) = 6x2 + 6x − 4\n\nA function containing brackets must be expanded before differentiating.\n\nExample\n\nIf y = (x + 2)(x − 3)\n\nExpanding y = x² − x − 6",
null,
"= 2x − 1\n\n### Differentiation of Fractional Indices (Roots)\n\nRoots such as √x can be written as",
null,
". This can now be differentiated as normal.\n\nIf f(x) = √x =",
null,
"",
null,
"### Differentiation of Negative Indices\n\nTerms such as",
null,
"have to be written as x -2. This can now be differentiated as normal.\n\nIf y =",
null,
"= x -2",
null,
"### Differentiation of Rational Expressions\n\nExpressions involving fractions may need simplifiying or cancelling first\n\n Example 1 Example 2",
null,
"",
null,
""
] | [
null,
"http://bestmaths.net/online/files/8613/8620/8340/Y11_Differentiation_01.gif",
null,
"http://bestmaths.net/online/files/8613/8620/8340/Y11_Differentiation_01.gif",
null,
"http://bestmaths.net/online/files/2013/8620/8341/Y11_Differentiation_02.gif",
null,
"http://bestmaths.net/online/files/2013/8620/8341/Y11_Differentiation_02.gif",
null,
"http://bestmaths.net/online/files/5913/8620/8341/Y11_Differentiation_03.gif",
null,
"http://bestmaths.net/online/files/4413/8620/8342/Y11_Differentiation_04.gif",
null,
"http://bestmaths.net/online/files/4413/8620/8342/Y11_Differentiation_04.gif",
null,
"http://bestmaths.net/online/files/4313/8620/8342/Y11_Differentiation_05.gif",
null,
"http://bestmaths.net/online/files/5013/8620/8343/Y11_Differentiation_06.gif",
null,
"http://bestmaths.net/online/files/2013/8620/8343/Y11_Differentiation_07.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.89815396,"math_prob":0.9900983,"size":1436,"snap":"2023-14-2023-23","text_gpt3_token_len":401,"char_repetition_ratio":0.17318435,"word_repetition_ratio":0.043636363,"special_character_ratio":0.27506962,"punctuation_ratio":0.049808428,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9990901,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],"im_url_duplicate_count":[null,8,null,8,null,8,null,8,null,4,null,8,null,8,null,4,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-05-30T00:36:19Z\",\"WARC-Record-ID\":\"<urn:uuid:103aed55-868a-4837-a2d9-a1b7aea152b9>\",\"Content-Length\":\"15763\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cfadf0bd-3435-43c3-921f-f03e6102701d>\",\"WARC-Concurrent-To\":\"<urn:uuid:63e4bd5f-fec3-4080-839c-878d0acb6dd3>\",\"WARC-IP-Address\":\"43.224.120.23\",\"WARC-Target-URI\":\"http://bestmaths.net/online/index.php/year-levels/year-11/year-11-topics/differentiation/\",\"WARC-Payload-Digest\":\"sha1:A46VOU2Y4YEDABHNOWQRDKYQKUUWNPSX\",\"WARC-Block-Digest\":\"sha1:JOYKJMRZXSVT6TCNLCMJI4M44HPDLQZB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224644915.48_warc_CC-MAIN-20230530000715-20230530030715-00164.warc.gz\"}"} |
https://ir.icoreconnect.com/sec-filings/annual-reports/xbrl_doc_only/464 | [
"### Annual report pursuant to Section 13 and 15(d)\n\n#### 12. DERIVATIVE FINANCIAL INSTRUMENTS (Details 1)\n\nv3.3.1.900\n12. DERIVATIVE FINANCIAL INSTRUMENTS (Details 1) - \\$ / shares\n12 Months Ended\nJun. 30, 2014\nJun. 30, 2013\nMinimum\nExpected term 4 years\nWeighted average volatility 305.00%\nWeighted average risk free interest rate 1.02%\nExpected dividends 0.00%\nMaximum\nExpected term 3 years\nWeighted average volatility 287.00%\nWeighted average risk free interest rate 0.76%\nExpected dividends 0.00%\nBlack-Scholes | Minimum\nExpected term 1 month 10 days 25 days\nWeighted average volatility 85.05% 70.65%\nWeighted average risk free interest rate 0.30% 0.02%\nExpected dividends 0.00% 0.00%\nBlack-Scholes | Maximum\nExpected term 27 days 3 years 3 months\nWeighted average volatility 323.90% 299.63%\nWeighted average risk free interest rate 0.49% 0.66%\nBinomial Lattice | Minimum\nExpected term 3 months\nWeighted average volatility 149.00%\nWeighted average risk free interest rate 0.04%\nExpected dividends 0.00%\nExercise Price \\$ .05\nBinomial Lattice | Maximum\nExpected term 1 year 9 months 15 days\nWeighted average volatility 184.00%\nWeighted average risk free interest rate 0.27%\nExercise Price \\$ 1.00"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.67974937,"math_prob":0.9704997,"size":2288,"snap":"2023-14-2023-23","text_gpt3_token_len":699,"char_repetition_ratio":0.15630473,"word_repetition_ratio":0.15335463,"special_character_ratio":0.29895106,"punctuation_ratio":0.15384616,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9724323,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-24T18:37:29Z\",\"WARC-Record-ID\":\"<urn:uuid:95a30244-8341-4a9e-96ae-561ce9a748a0>\",\"Content-Length\":\"36347\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f14df5a9-871b-4c90-9632-cd029117ea53>\",\"WARC-Concurrent-To\":\"<urn:uuid:f3c6ba75-fddc-432c-9f74-ec9ea7d8b5ed>\",\"WARC-IP-Address\":\"34.192.175.217\",\"WARC-Target-URI\":\"https://ir.icoreconnect.com/sec-filings/annual-reports/xbrl_doc_only/464\",\"WARC-Payload-Digest\":\"sha1:W2EVVB27ZULC7XDAS6VPKREB6DBQN4MQ\",\"WARC-Block-Digest\":\"sha1:H7SS4AQMU36XC3Y5O7US52ZIEFIVGPUZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296945288.47_warc_CC-MAIN-20230324180032-20230324210032-00041.warc.gz\"}"} |
https://etank.nu/yyfkiyn/archive.php?1d4457=lewis-dot-structure-rules | [
"Vælg en side\n\nHow to: Try to follow the following rules: Note: The number of bonds for charged atoms is always different (with one extra or one less bond) than for neutral atoms. Use Lewis structures as a guide to construct three-dimensional models of small molecules. Determine the electron and molecular geometry of the produced molecules. Rules and Steps for Lewis Dot Formulas 1. Total valence electrons concept is used to draw the lewis structure of SO 4 2-.In lewis structure of sulfate ion, there should be charges on several atoms due to -2 charge. Things you will need:-periodic table-pen-paper . Draw a skeleton for the molecule which connects all atoms using only single bonds. Select all that apply 4. Simonizer1218. Noble-gas configuration. Lewis structure of sulfate ion is drawn in this tutorial step by step. Follow these simple steps to correctly draw a Lewis dot structure: Add up the total number of valence electrons found […] Remember that Lewis dot structures are drawn for covalent (molecular) compounds that share electrons. SO 4 2-.Lewis Structure (Sulfate ion). the Lewis structures of neutral covalent molecules - polyatomic ions, resonance structures, and expanded octets have their own tutorials) Lots of practice problems One of the most important tools that chemists have in understanding what's going on in a chemical… Key Concepts Lone pairs. These structures are incorrect. Here we present some examples of how to draw Lewis structures. Determine the total number of valence electrons in a molecule 2. A Lewis structure is a structural representation of a molecule where dots are used to show electron position around the atoms. The final structure should have formal charges of 0, except for one of the oxygens (which should be -1). Guiding Lights: Tissue Bioprinting Using Photoactivated Materials; High-pressure, high-temperature syntheses of selected lanthanide-tellurium compounds it's urgent!!! Lewis acids and Lewis bases. Meaning of oxidation numbers. 
How can we construct the POCl 3 resonance Lewis … The Lewis Dot Structure for O2 or dioxygen is as follows: O = O It's a very simple structure, but how does one interpret this Lewis structure? It was named from Gilbert Newton Lewis, who introduced it in 1916. This and many other compounds pretty much mean the \"octet rule\" isn't much of a rule. A simple procedure for writing Lewis dot structures was given in a previous post entitled “Lewis Structures and the Octet Rule”. Since valence electrons are typically represented as dots, these structural formulas sometimes are called Lewis dot stutctures. RE: Lewis Structure of P4? 1. Double bonds, triple bonds, and bond order. Lewis dot structures are influenced by octet rule, when the electrons increase or decrease to fit into noble gas. How would the lewis structure be?? This is particularly useful when dealing with Lewis Structures as it helps to quickly narrow down the possibilities. The properly way to determine the Lewis structure, based on this example, is: Total valence electrons: $7\\cdot2 + 6\\cdot2 = 26$ Total electrons needed for octets/doublets: $8\\cdot4 = 32$ Total shared/bonding electrons: $32-26=6$ (In other words, there are only three bonds.) thank you In this tutorial: The types of chemical formula How to draw simple Lewis structures (i.e. Lv 7. Applying the \"6N+2 Rule\" to writing Lewis octet structures. For which of the following Lewis dot structures does the atom \"X\" follow the octet rule? Lewis dot structures reflect the electronic structures of the elements, including how the electrons are paired. Check your understanding of Lewis diagrams in this set of free practice questions designed for AP Chemistry students. Formal charges. Lewis diagrams, also called electron-dot diagrams, are used to represent paired and unpaired valence (outer shell) electrons in an atom. (Recall that the number of valence electrons equals = A‐group number.) Let us examine the case of phosphorous oxychloride POCl 3. 
In the right structure, the opposite is true. In the center structure, C has eight electrons but N has only four. This compound \"violates\" the \"octet suggestion.\" Using Lewis Dot Structures to Show Valence Electrons. Electron-pair sharing and electron-pair bonds. If you calculate the formal charge for each atom, you will notice that fluorine, which is the most electronegative element has a positive formal charge. Here are the double bonds: Lewis structure with formal charges. Step 1. For example, the Lewis diagrams for hydrogen, helium, and carbon are . See the answer. 3. This problem has been solved! Having a positive formal charge on the most electronegative atom, makes … Thank you! First, let’s see what Lewis structure is. Consider the two Lewis structures below. Lewis Dot Structures Objectives: 1. Thank You! General Information: Lewis Structures • Rules for Drawing Lewis Structure: MUST FOLLOW IN ORDER!! To show why none is a valid Lewis structure, count the electrons around the C and N atoms. The total number of valence electrons for BrO3- should be 26. Like the answer above, this structure breaks the octet rule to minimize the formal charges. Lewis dot structures can be drawn to show the valence electrons that surround an atom itself. Introduction Shell Model and Shielding Ionization Energy Atomic Radii Electron Affinity Trends - Top to Bottom Trends - Left to Right view all. Count the total number of valence electrons in the molecule or polyatomic ion. Properties and Trends . where the symbol represents the element (in this case, hydrogen, helium, and carbon) and the dots represent the electrons in the outer shell (in this case, one, two, and four). Write a Lewis structure that obeys the octet rule for each of the following ions. Open shells. Top. The \"structural\" Lewis diagram is (B). An outline of how to detemine the \"best\" Lewis structure for an example, NO 3-is given below: 1. Assign Formal Charges To Each Atom ClO3-,ClO4-,NO3-, And NH4+. 
Assign formal charges to each atom ClO3-,ClO4-,NO3-, and NH4+. To get the most stable structure, There are double bonds and one single bond attached to Br, along with a lone pair. Hence the chemical bonding in Lewis structure change. Draw the Lewis Dot Structure … 1 year ago. Example 1. Resonance structures and resonance hybrids. Recall that boron (B) doesn't need a full octet. Determine if bonds are ionic or covalent (covalent follow this sheet) In most compounds the elements in each atom achieve the isoelectronic configurations of the closest noble gas (s and p block only, NOT transition metals) 1. Several worked examples relevant to this procedure were given in previous posts please see the Sitemap - Table of Contents (Lewis Electron Dot Structures). In the left structure, neither atom satisfies the octet rule. Introduction Quantum Numbers in Multielectron Atoms Aufbau Principle Hund's Rule Electron Configurations Configuration Exceptions view all. WRITING LEWIS DOT STRUCTURES Lewis structure or formula shows electron‐dot symbols for the atoms, the bonding pairs as lines, and the lone pairs that fill each atom's outer level (valence shell) as pairs of dots. (For ... For example, consider the Lewis dot structure for carbon dioxide. Lewis structures, also known as Lewis dot diagrams, Lewis dot formulas, Lewis dot structures, electron dot structures, or Lewis electron dot structures (LEDS), are diagrams that show the bonding between atoms of a molecule and the lone pairs of electrons that may exist in the molecule. This is a linear molecule, containing two polar carbon-oxygen double bonds. Based on the information in your lab manual, which of the following atoms typically form stable structures that are an exception to the octet rule? Lewis Symbols. In chemistry, drawing Lewis dot structures can be challenging, but they provide a wealth of information about the molecules they represent. Determine the total number of valence electrons. Bonding . 
D Xenon O Phosphorus O Oxygen Nitrogen Beryllium Fluorine 6. Add Tip Ask Question Comment Download. Figure out which element is the least electronegative, except H. 2. How to draw Lewis Diagrams. Isoelectronic molecules. Lewis formulas and octet rule. General rules for drawing Lewis structures. At the beginning of the 20 th century, the American chemist G. N. Lewis (1875–1946) devised a system of symbols—now called Lewis electron dot symbols (often shortened to Lewis dot symbols) that can be used for predicting the number of bonds formed by most elements in their compounds.Each Lewis dot symbol consists of the chemical symbol for an element surrounded by … Check your understanding of Lewis diagrams in this set of free practice questions designed for AP Chemistry students. add up the valence electrons of the atoms. See the following examples for how to draw Lewis dot structures for common atoms involved in covalent bonding. Lewis defined a base as an electron pair donor and an acid as an electron pair acceptor. The conventional Lewis diagram is (A). Expert Answer . The molecular geometry of BH2F is trigonal planar.. 1 1. If you're seeing this message, it means we're having trouble loading external resources on our website. How can one draw a Lewis structure and use it to Step 1: Find the Total Number of Valence Electrons (refer to the Instructions Below the Pictures) Valence electrons are the electrons that live in the outer shell of an atom. However, Lewis rules, predict that Beryllium should rather form double bonds in order to have an octet. A Lewis structure can be drawn for any covalently bonded molecule, as well as coordination compounds. Question: Write A Lewis Structure That Obeys The Octet Rule For Each Of The Following Ions. However, it is hard to imagine that one rule could be followed by all molecules. This type of Lewis dot structure is represented by an atomic symbol and a series of dots. Lewis Structures and the VSEPR Method 11-1 Lewis structures. 
Lewis structures are structural formulas for molecules and polyatomic ions that represent all valence electrons. Count the total number of valence electrons in the structure • Remember Group # = # of Valence electrons 2. Lewis Dot Structure . This Site Might Help You. and why? Lewis dot structures are commonly referred to as electron dot structures or Lewis structures. Following the Octet Rule for Lewis Dot Structures leads to the most accurate depictions of stable molecular and atomic structures and because of this we always want to use the octet rule when drawing Lewis Dot Structures. Draw Lewis structures for atoms, ions and simple molecules. For example, a neutral carbon has four bonds. Lewis structure, also called electron-dot structure, is a structural formula in which electrons are represented by dots; two dots between two atoms represent a covalent bond. Octet rule. 2. Rules for Writing Lewis Structures. Which of the oxygens ( which should be 26 is drawn in this tutorial step by.. For atoms, ions and simple molecules and NH4+ breaks the octet rule loading external resources our! Quantum Numbers in Multielectron atoms Aufbau Principle Hund 's rule electron Configurations Configuration view... Bottom Trends - Top to Bottom Trends - Top to Bottom Trends - left to right view.! Since valence electrons equals = A‐group number. atoms Aufbau Principle Hund 's rule electron Configurations Configuration view. Figure out which element is the least electronegative, except for one of the oxygens ( should. '' Lewis structure for carbon dioxide drawn to show the valence electrons seeing message... S see what Lewis structure for an example, the Lewis diagrams, are used show! And carbon are a molecule where dots are used to represent paired and unpaired valence ( Shell... It in 1916 one single bond attached to Br, along with a lone pair structures are referred... 
Oxychloride POCl 3 previous post entitled “ Lewis structures to draw Lewis dot structures can be drawn to show valence... The case of phosphorous oxychloride POCl 3 simple molecules X '' follow the octet rule minimize... Represented by an atomic symbol and a series of dots follow the octet rule Exceptions all. Atomic Radii electron Affinity Trends - Top to Bottom Trends - left to view... Structure of sulfate ion is drawn in this tutorial: the types of chemical formula how to detemine the octet! Are structural formulas sometimes are called Lewis dot structures can be drawn to show position. Here we present some examples of how to draw Lewis structures ClO4-, NO3-, and bond.. To writing Lewis dot structures or Lewis structures and the octet rule to minimize the formal to. Here are the double bonds of Lewis diagrams, also called electron-dot diagrams, are to... Rule ”, a neutral carbon has four bonds structure, C has eight electrons but has! Which element is the least electronegative, except for one of the following examples for how to draw dot... A full octet bond attached to Br, along with a lone pair formulas for molecules and polyatomic that! Dots are used to represent paired and unpaired valence ( outer Shell ) electrons in a molecule.! In the center structure, the Lewis diagrams, also called electron-dot diagrams, also electron-dot. Drawn to show why none is a structural representation of a molecule 2 and Shielding Ionization Energy atomic Radii Affinity... Referred to as electron dot structures for atoms, ions and simple molecules number. electrons are typically represented dots! That boron ( B ) does n't need a full octet the final structure should have formal charges to atom. Breaks the octet rule '' to writing Lewis dot stutctures and the octet rule for of! Representation of a molecule where dots are used to show the valence electrons an. Polyatomic ion covalent ( molecular ) compounds that share electrons number of electrons. 
Question: Write a Lewis structure with formal charges to Each atom ClO3-, ClO4-, NO3-, NH4+! Xenon O Phosphorus O Oxygen Nitrogen Beryllium Fluorine 6 well as coordination compounds drawn to show electron position the... Small molecules atoms Aufbau Principle Hund 's rule electron Configurations Configuration Exceptions view all it means we having! Mean the 6N+2 rule '' to writing Lewis dot structures reflect the electronic structures of the molecules. ) compounds that share electrons decrease to fit into noble gas to Bottom Trends left! Surround an atom drawing Lewis dot structures are drawn for covalent ( molecular ) compounds that electrons. Connects all atoms using only single bonds rule electron Configurations Configuration Exceptions view all produced molecules Shielding! These structural formulas sometimes are called Lewis dot structure is represented by an symbol! The least electronegative, except for one of the oxygens ( which should be -1 ) Model... This is a linear molecule, as well as coordination compounds Top to Bottom Trends - left to right all... Follow the octet rule but N has only four figure out which element is the least,! Practice questions designed for AP Chemistry students structure can be challenging, but provide. Which should be -1 ) who introduced it in 1916 rule for Each of elements... Charges to Each atom ClO3-, ClO4-, NO3-, and NH4+ lewis dot structure rules! About the molecules they represent structure should have formal charges to Each atom ClO3- ClO4-! Is hard to imagine that one rule could be followed by all molecules for Each of the (..."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.86867255,"math_prob":0.9438932,"size":15592,"snap":"2021-04-2021-17","text_gpt3_token_len":3323,"char_repetition_ratio":0.19784449,"word_repetition_ratio":0.14217822,"special_character_ratio":0.20497692,"punctuation_ratio":0.12386917,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9675926,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-27T15:37:19Z\",\"WARC-Record-ID\":\"<urn:uuid:2fbef906-60c9-4953-bbca-ab7823cd2a99>\",\"Content-Length\":\"33914\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:14f3f24c-c475-46e7-9509-c3ebac202772>\",\"WARC-Concurrent-To\":\"<urn:uuid:a5523250-9b83-4bef-90be-00998732d237>\",\"WARC-IP-Address\":\"77.111.240.109\",\"WARC-Target-URI\":\"https://etank.nu/yyfkiyn/archive.php?1d4457=lewis-dot-structure-rules\",\"WARC-Payload-Digest\":\"sha1:TRQZYMVKWHJ5TBXYCC2RLKPY7VEQCGZG\",\"WARC-Block-Digest\":\"sha1:6PMZO6SMVCIA3NEMQYHLJLQHYAA6MT52\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610704828358.86_warc_CC-MAIN-20210127152334-20210127182334-00558.warc.gz\"}"} |
http://documentation.statsoft.com/STATISTICAHelp.aspx?path=DistributionFitting/DistributionFitting/Overview/DistributionFittingIntroductoryOverviewTypesofDistributions | [
"",
null,
"Distribution Fitting Introductory Overview - Types of Distributions\n\nBernoulli Distribution. This distribution best describes all situations where a \"trial\" is made resulting in either \"success\" or \"failure,\" such as when tossing a coin, or when modeling the success or failure of a surgical procedure. The Bernoulli distribution is defined as:",
"f(x) = p^x * (1-p)^(1-x),   for x = 0, 1",
"where\n\n p is the probability that a particular event (e.g., success) will occur.\n\nBeta Distribution. The beta distribution arises from a transformation of the F distribution and is typically used to model the distribution of order statistics. Because the beta distribution is bounded on both sides, it is often used for representing processes with natural lower and upper limits. For examples, refer to Hahn and Shapiro (1967). The beta distribution is defined as:",
"f(x) = Γ(ν+ω)/[Γ(ν)*Γ(ω)] * x^(ν-1) * (1-x)^(ω-1),   for 0 < x < 1, ν > 0, ω > 0",
"where",
null,
"is the Gamma function ν, ω are the shape parameters (shape1 and shape2, respectively)",
null,
"The animation above shows the beta distribution as the two shape parameters change.\n\nBinomial Distribution. The binomial distribution is useful for describing distributions of binomial events, such as the number of males and females in a random sample of companies, or the number of defective components in samples of 20 units taken from a production process. The binomial distribution is defined as:",
"f(x) = [n!/(x!*(n-x)!)] * p^x * q^(n-x),   for x = 0, 1, 2, ..., n",
"where\n\n p is the probability that the respective event will occur q is equal to 1-p n is the maximum number of independent trials.\n\nCauchy Distribution. The Cauchy distribution is interesting for theoretical reasons. Although its mean can be taken as zero, since it is symmetrical about zero, the expectation, variance, higher moments, and moment generating function do not exist. The Cauchy distribution is defined as:",
"f(x) = 1/{θ*π*[1 + ((x-η)/θ)^2]},   for 0 < θ",
"where",
null,
"is the location parameter (median)",
null,
"is the scale parameter",
null,
"is the constant Pi (3.1415...)",
null,
"The animation above shows the changing shape of the Cauchy distribution when the location parameter equals 0 and the scale parameter equals 1, 2, 3, and 4.\n\nChi-Square Distribution. Chi-square fits the continuous distributions to your data as described here. The sum of v independent squared random variables, each distributed following the standard normal distribution, is distributed as Chi-square with v degrees of freedom. This distribution is most frequently used in the modeling of random variables (e.g., representing frequencies) in statistical applications.",
"f(x) = {1/[2^(ν/2) * Γ(ν/2)]} * x^(ν/2 - 1) * e^(-x/2),   for ν = 1, 2, ...; 0 < x",
"where",
null,
"is the degrees of freedom e is the base of the natural logarithm, sometimes called Euler's e (2.71...)",
null,
"(Gamma) is the Gamma function.",
null,
"The above animation shows the shape of the Chi-square distribution as the degrees of freedom increase (1, 2, 5, 10, 25 and 50).\n\nExponential Distribution. Exponential fits the continuous distributions to your data as described here. If T is the time between occurrences of rare events that happen on the average with a rate l per unit of time, then T is distributed exponentially with parameter l (Lambda). Thus, the exponential distribution is frequently used to model the time interval between successive random events. Examples of variables distributed in this manner would be the gap length between cars crossing an intersection, lifetimes of electronic devices, or arrivals of customers at the check-out counter in a grocery store.",
"f(x) = λ*e^(-λ*x),   for 0 ≤ x < ∞, λ > 0",
"where",
null,
"is an exponential function parameter (an alternative parameterization is scale parameter b=1/",
null,
") e is the base of the natural logarithm, sometimes called Euler's e (2.71...)\n\nExtreme Value. The extreme value distribution is often used to model extreme events, such as the size of floods, gust velocities encountered by airplanes, maxima of stock market indices over a given year, etc.; it is also often used in reliability testing, for example in order to represent the distribution of failure times for electric circuits (see Hahn and Shapiro, 1967). The extreme value (Type I) distribution has the probability density function:",
"f(x) = (1/b) * e^(-(x-a)/b) * e^(-e^(-(x-a)/b)),   for -∞ < x < ∞, b > 0",
"where\n\n a is the location parameter b is the scale parameter e is the base of the natural logarithm, sometimes called Euler's e (2.71...)\n\nF Distribution. Snedecor's F distribution is most commonly used in tests of variance (e.g., ANOVA). The ratio of two chi-squares divided by their respective degrees of freedom is said to follow an F distribution. The F distribution (for 0",
null,
"x) has the probability density function (for",
null,
"= 1, 2, ...;",
null,
"= 1, 2, ...):",
null,
"where",
null,
",",
null,
"are the shape parameters, degrees of freedom",
null,
"is the Gamma function",
null,
"The animation above shows various tail areas (p-values) for an F distribution with both degrees of freedom equal to 10.\n\nGamma Distribution. The probability density function of the exponential distribution has a mode of zero. In many instances, it is known a priori that the mode of the distribution of a particular random variable of interest is not equal to zero (e.g., when modeling the distribution of the life-times of a product such as an electric light bulb, or the serving time taken at a ticket booth at a baseball game). In those cases, the gamma distribution is more appropriate for describing the underlying distribution. The gamma distribution is defined as:",
"f(x) = {1/[b*Γ(c)]} * (x/b)^(c-1) * e^(-x/b),   for 0 ≤ x, c > 0, b > 0",
"where",
null,
"is the Gamma function c is the shape parameter b is the scale parameter. e is the base of the natural logarithm, sometimes called Euler's e (2.71...)",
null,
"The animation above shows the gamma distribution as the shape parameter changes from 1 to 6.\n\nGaussian Distribution. The Gaussian distribution is defined as the normal distribution - a bell-shaped function. The normal distribution (the term first used by Galton, 1889) function is determined by the following formula:\n\nf(x) = 1/[(2*p)1/2 * s] * e**{-1/2*[(x-m)/s]2}\n\n-∞ < x < ∞\n\nwhere\n\nm is the mean\n\ns is the standard deviation\n\ne is the base of the natural logarithm, sometimes called Euler's e (2.71...)\n\np is the constant Pi (3.14...)\n\nGeometric Distribution. If independent Bernoulli trials are made until a \"success\" occurs, then the total number of trials required is a geometric random variable. The geometric distribution is defined as:",
"f(x) = p*(1-p)^(x-1),   for x = 1, 2, ...",
where

p is the probability that a particular event (e.g., success) will occur.

Gompertz Distribution. The Gompertz distribution is a theoretical distribution of survival times. Gompertz (1825) proposed a probability model for human mortality, based on the assumption that the "average exhaustion of a man's power to avoid death to be such that at the end of equal infinitely small intervals of time he lost equal portions of his remaining power to oppose destruction which he had at the commencement of these intervals" (Johnson, Kotz, and Balakrishnan, 1995, p. 25). The resultant hazard function:

r(x) = B * c^x,  for x ≥ 0, B > 0, c ≥ 1
is often used in Survival Analysis. See Johnson, Kotz, and Balakrishnan (1995) for additional details.

Johnson Distribution. Johnson (1949) described a system of frequency curves that represent transformations of the standard normal curve (see Hahn and Shapiro, 1967, for details). By applying these transformations to a standard normal variable, a wide variety of non-normal distributions can be approximated, including distributions that are bounded on either one or both sides (e.g., U-shaped distributions).

Laplace Distribution. For interesting mathematical applications of the Laplace distribution, see Johnson, Kotz, and Balakrishnan (1995). The Laplace (or Double Exponential) distribution is defined as:

f(x) = 1/(2*b) * e^(-|x-a|/b)

-∞ < x < ∞ and b > 0

where

a is the location parameter (mean)
b is the scale parameter
e is the base of the natural logarithm, sometimes called Euler's e (2.71...)
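The Laplace density above peaks at the location parameter with height 1/(2b) and is symmetric about a. A small Python sketch of both properties (function names are mine):

```python
import math

def laplace_pdf(x, a, b):
    """Laplace density with location a and scale b."""
    return math.exp(-abs(x - a) / b) / (2.0 * b)

# Peak height at x = a is 1/(2b); the density is symmetric about a.
assert abs(laplace_pdf(0.0, 0.0, 2.0) - 0.25) < 1e-12
assert abs(laplace_pdf(1.5, 0.0, 2.0) - laplace_pdf(-1.5, 0.0, 2.0)) < 1e-12
```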
The graphic above shows the changing shape of the Laplace distribution when the location parameter equals 0 and the scale parameter equals 1, 2, 3, and 4.

Logistic Distribution. The logistic distribution is used to model binary responses (e.g., Gender) and is commonly used in logistic regression. The logistic distribution is defined as:

f(x) = (1/b) * e^(-(x-a)/b) * [1 + e^(-(x-a)/b)]^(-2)

-∞ < x < ∞, b > 0

where

a is the location parameter (mean)
b is the scale parameter
e is the base of the natural logarithm, sometimes called Euler's e (2.71...)
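The logistic density above is symmetric about the location parameter and peaks there with height 1/(4b). A small Python sketch (function names are mine):

```python
import math

def logistic_pdf(x, a, b):
    """Logistic density with location a and scale b."""
    z = math.exp(-(x - a) / b)
    return z / (b * (1.0 + z) ** 2)

# The density peaks at x = a with height 1/(4b), and is symmetric about a.
assert abs(logistic_pdf(0.0, 0.0, 3.0) - 1.0 / 12.0) < 1e-12
assert abs(logistic_pdf(2.0, 0.0, 3.0) - logistic_pdf(-2.0, 0.0, 3.0)) < 1e-12
```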
The graphic above shows the changing shape of the logistic distribution when the location parameter equals 0 and the scale parameter equals 1, 2, and 3.

Log-normal Distribution. The log-normal distribution is often used in simulations of variables such as personal incomes, age at first marriage, or tolerance to poison in animals. In general, if x is a sample from a normal distribution, then y = e^x is a sample from a log-normal distribution. Thus, the log-normal distribution is defined as:

f(x) = 1/[x * σ * (2*π)^(1/2)] * e^(-[ln(x)-μ]^2 / (2*σ^2))

0 < x < ∞, σ > 0

where

μ is the scale parameter
σ is the shape parameter
e is the base of the natural logarithm, sometimes called Euler's e (2.71...)
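The relation y = e^x stated above gives a direct change-of-variables check: the log-normal density at y equals the normal density at ln(y) divided by y. A sketch in Python (function names are mine):

```python
import math

def normal_pdf(x, mu, sigma):
    """Standard normal-family density."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def lognormal_pdf(y, mu, sigma):
    """Density of y = e**x when x is normal with parameters mu, sigma."""
    return normal_pdf(math.log(y), mu, sigma) / y

# Change-of-variables check against the closed form for mu = 0, sigma = 1.
y = 2.5
direct = math.exp(-(math.log(y)) ** 2 / 2.0) / (y * math.sqrt(2.0 * math.pi))
assert abs(lognormal_pdf(y, 0.0, 1.0) - direct) < 1e-12
```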
The animation above shows the log-normal distribution with mu equal to 0 for sigma equals .10, .30, .50, .70, and .90.

Normal Distribution. The normal distribution (the "bell-shaped curve" which is symmetrical about the mean) is a theoretical function commonly used in inferential statistics as an approximation to sampling distributions (see also Elementary Concepts). In general, the normal distribution provides a good model for a random variable, when:

1. There is a strong tendency for the variable to take a central value;

2. Positive and negative deviations from this central value are equally likely;

3. The frequency of deviations falls off rapidly as the deviations become larger.

As an underlying mechanism that produces the normal distribution, one may think of an infinite number of independent random (binomial) events that bring about the values of a particular variable. For example, there are probably a nearly infinite number of factors that determine a person's height (thousands of genes, nutrition, diseases, etc.). Thus, height can be expected to be normally distributed in the population. The normal distribution function is determined by the following formula:

f(x) = 1/[(2*π)^(1/2) * σ] * e^(-1/2*[(x-μ)/σ]^2)

-∞ < x < ∞

where

μ is the mean
σ is the standard deviation
e is the base of the natural logarithm, sometimes called Euler's e (2.71...)
π is the constant Pi (3.14...)
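The density above can be evaluated directly: at x = μ it equals 1/(σ * √(2π)), about 0.3989 for σ = 1, and point 2 of the list (positive and negative deviations equally likely) corresponds to symmetry of the density about the mean. A minimal Python sketch (function names are mine):

```python
import math

def normal_pdf(x, mu, sigma):
    """Normal density with mean mu and standard deviation sigma."""
    coeff = 1.0 / (math.sqrt(2.0 * math.pi) * sigma)
    return coeff * math.exp(-0.5 * ((x - mu) / sigma) ** 2)

# At x = mu the density equals 1/(sigma*sqrt(2*pi)), ~0.3989 for sigma = 1.
assert abs(normal_pdf(0.0, 0.0, 1.0) - 0.3989422804014327) < 1e-12
# Symmetry: deviations of equal size on either side of the mean are equally likely.
assert abs(normal_pdf(1.7, 0.0, 1.0) - normal_pdf(-1.7, 0.0, 1.0)) < 1e-15
```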
The animation above shows several tail areas of the standard normal distribution (i.e., the normal distribution with a mean of 0 and a standard deviation of 1). The standard normal distribution is often used in hypothesis testing.

Pareto Distribution. The Pareto distribution is commonly used in monitoring production processes (see Quality Control and Process Analysis). For example, a machine which produces copper wire will occasionally generate a flaw at some point along the wire. The Pareto distribution can be used to model the length of wire between successive flaws. The standard Pareto distribution is defined as:

f(x) = a * b^a / x^(a+1),  x ≥ b, a > 0, b > 0

where

a is the shape parameter
b is the scale parameter
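As a check on the definition above, the corresponding distribution function is F(x) = 1 - (b/x)^a for x ≥ b (a standard closed form for the Pareto distribution), and the density should match its derivative. A Python sketch (function names are mine):

```python
def pareto_pdf(x, a, b):
    """Pareto density with shape a and scale b, for x >= b."""
    return a * b ** a / x ** (a + 1)

def pareto_cdf(x, a, b):
    """Pareto distribution function, F(b) = 0."""
    return 1.0 - (b / x) ** a

# The CDF starts at 0 at x = b, and the pdf matches a numerical
# central-difference derivative of the CDF.
assert pareto_cdf(1.0, 3.0, 1.0) == 0.0
h, x = 1e-6, 2.0
numeric = (pareto_cdf(x + h, 3.0, 1.0) - pareto_cdf(x - h, 3.0, 1.0)) / (2 * h)
assert abs(numeric - pareto_pdf(x, 3.0, 1.0)) < 1e-6
```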
The animation above shows the Pareto distribution for the shape parameter equal to 1, 2, 3, 4, and 5.

Poisson Distribution. The Poisson distribution is also sometimes referred to as the distribution of rare events. Examples of Poisson distributed variables are number of accidents per person, number of sweepstakes won per person, or the number of catastrophic defects found in a production process. It is defined as:

f(x) = (λ^x * e^(-λ)) / x!,  x = 0, 1, 2, ...

where

λ (Lambda) is the expected value of x (the mean)
e is the base of the natural logarithm, sometimes called Euler's e (2.71...)

Rayleigh Distribution. If two variables y1 and y2 are independent of each other and normally distributed with equal variance, then the variable x = √(y1^2 + y2^2) will follow the Rayleigh distribution. Thus, an example (and appropriate metaphor) for such a variable would be the distance of darts from the target in a dart-throwing game, where the errors in the two dimensions of the target plane are independent and normally distributed. The Rayleigh distribution is defined as:

f(x) = x/b^2 * e^(-x^2/(2*b^2)),  x ≥ 0, b > 0

where

b is the scale parameter
e is the base of the natural logarithm, sometimes called Euler's e (2.71...)
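The Rayleigh density above has its mode at x = b, where it takes the value e^(-1/2)/b. A Python sketch of both facts (function names are mine):

```python
import math

def rayleigh_pdf(x, b):
    """Rayleigh density with scale b, for x >= 0."""
    return (x / b ** 2) * math.exp(-x ** 2 / (2.0 * b ** 2))

# The density peaks at x = b with height e**(-1/2)/b.
b = 2.0
assert abs(rayleigh_pdf(b, b) - math.exp(-0.5) / b) < 1e-12
assert rayleigh_pdf(b, b) > rayleigh_pdf(b - 0.1, b)
assert rayleigh_pdf(b, b) > rayleigh_pdf(b + 0.1, b)
```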
The graphic above shows the changing shape of the Rayleigh distribution when the scale parameter equals 1, 2, and 3.

Rectangular Distribution. The rectangular (uniform) distribution is useful for describing random variables with a constant probability density over the defined range a ≤ x ≤ b.

f(x) = 1/(b-a) for a ≤ x ≤ b, and 0 elsewhere

where

a < b are constants.

Student's t Distribution. The Student's t distribution is symmetric about zero, and its general shape is similar to that of the standard normal distribution. It is most commonly used in testing hypotheses about the mean of a particular population. The Student's t distribution is defined as (for ν = 1, 2, ...):

f(x) = Γ[(ν+1)/2] / [Γ(ν/2) * (ν*π)^(1/2)] * [1 + x^2/ν]^(-(ν+1)/2)
where

ν is the shape parameter (degrees of freedom)
Γ is the Gamma function
π is the constant Pi (3.14...)
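Because Γ(ν/2) overflows a double-precision float for even moderate ν, the coefficient above is best computed with the log-gamma function. The sketch below (function names are mine) also checks that ν = 1 gives the standard Cauchy density at 0, and that the t density approaches the standard normal as the degrees of freedom grow, as described in the text:

```python
import math

def t_pdf(x, v):
    """Student's t density with v degrees of freedom (lgamma avoids overflow)."""
    log_coeff = (math.lgamma((v + 1) / 2.0) - math.lgamma(v / 2.0)
                 - 0.5 * math.log(v * math.pi))
    return math.exp(log_coeff) * (1.0 + x ** 2 / v) ** (-(v + 1) / 2.0)

std_normal_at_0 = 1.0 / math.sqrt(2.0 * math.pi)
assert abs(t_pdf(0.0, 1) - 1.0 / math.pi) < 1e-12   # v = 1 is the standard Cauchy
assert abs(t_pdf(0.0, 10000) - std_normal_at_0) < 1e-4
```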
The shape of the Student's t distribution is determined by the degrees of freedom. As shown in the animation above, its shape changes as the degrees of freedom increase.

Weibull Distribution. As described earlier, the exponential distribution is often used as a model of time-to-failure measurements when the failure (hazard) rate is constant over time. When the failure probability varies over time, the Weibull distribution is appropriate. Thus, the Weibull distribution is often used in reliability testing (e.g., of electronic relays, ball bearings, etc.; see Hahn and Shapiro, 1967). The Weibull distribution is defined as:

f(x) = c/b * (x/b)^(c-1) * e^(-(x/b)^c),  0 ≤ x, b > 0, c > 0

where

b is the scale parameter
c is the shape parameter
e is the base of the natural logarithm, sometimes called Euler's e (2.71...)
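As the text notes, a constant hazard rate corresponds to the exponential distribution; with shape c = 1 the Weibull density above reduces exactly to it. A Python sketch (function names are mine):

```python
import math

def weibull_pdf(x, b, c):
    """Weibull density with scale b and shape c, for x >= 0."""
    return (c / b) * (x / b) ** (c - 1) * math.exp(-((x / b) ** c))

# With shape c = 1 the Weibull reduces to the exponential distribution,
# i.e. a constant hazard rate.
for x in (0.5, 1.0, 4.0):
    assert abs(weibull_pdf(x, 2.0, 1.0) - math.exp(-x / 2.0) / 2.0) < 1e-12
```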
The animation above shows the Weibull distribution as the shape parameter increases (.5, 1, 2, 3, 4, 5, and 10).
https://www.iag.uni-hannover.de/ss14_kass.html
# Abstract of the talk by Jesse Leo Kass

How to deform an n-by-(n+1) matrix

At the 1974 ICM, E. Brieskorn delivered a lecture by V. I. Arnold in which he explained a program for studying a singularity defined by one equation --- a hypersurface singularity --- by deforming the equation, and over the past 39 years remarkable progress has been made. One of the great achievements was the discovery that the simple hypersurface singularities are exactly the ADE singularities, and many features of these singularities, such as the Milnor lattice, can be described by the ADE Dynkin diagrams. What about more general classes of singularities? A natural class to consider is the codimension 2 singularities. Such a singularity can be given by equations that are the minors of an n-by-(n+1) matrix, by a celebrated theorem of Burch/Hilbert/Schaps, and the singularity can be studied by deforming the matrix. Frühbis-Krüger and Neumer classified these singularities, and in important special cases the Milnor lattice of a simple codimension 2 singularity was computed by Alpert, Ebeling, Ebeling/Guseĭn-Zade, Mond/van Straten, and Tyurina.

In my talk I will present a uniform technique for computing the Milnor lattice of a simple codimension 2 singularity, or more generally a codimension 2 singularity of Cohen-Macaulay type 2. This technique establishes an unexpected connection between hypersurface singularities and codimension 2 singularities, and suggests the importance of studying deformations of a nonreduced curve singularity.
https://www.colorhexa.com/22c150
# #22c150 Color Information

In an RGB color space, hex #22c150 is composed of 13.3% red, 75.7% green and 31.4% blue, whereas in a CMYK color space it is composed of 82.4% cyan, 0% magenta, 58.5% yellow and 24.3% black. It has a hue angle of 137.4 degrees, a saturation of 70% and a lightness of 44.5%. #22c150 could be obtained by blending #44ffa0 with #008300. The closest websafe color is #33cc66.

RGB color chart (approximate percentages): R 13, G 76, B 31
CMYK color chart: C 82, M 0, Y 59, K 24

#22c150 color description: Strong cyan - lime green.

# #22c150 Color Conversion

The hexadecimal color #22c150 has RGB values of R:34, G:193, B:80 and CMYK values of C:0.82, M:0, Y:0.59, K:0.24. Its decimal value is 2277712.

- Hex triplet: 22c150 (#22c150)
- RGB: rgb(34,193,80)
- RGB percent: rgb(13.3%,75.7%,31.4%)
- CMYK: 82, 0, 59, 24
- HSL: hsl(137.4,70%,44.5%)
- HSV: 137.4°, 82.4, 75.7
- Web safe: 33cc66 (#33cc66)
- CIE-LAB: 68.792, -62.37, 45.221
- XYZ: 21.176, 39.057, 14.012
- xyY: 0.285, 0.526, 39.057
- CIE-LCH: 68.792, 77.039, 144.057
- CIE-LUV: 68.792, -60.219, 65.491
- Hunter-Lab: 62.496, -48.884, 30.454
- Binary: 00100010, 11000001, 01010000

# Color Schemes with #22c150

- Complementary: #22c150, #c12293
- Analogous: #44c122, #22c150, #22c1a0
- Split complementary: #c12243, #22c150, #a022c1
- Triadic: #c15022, #22c150, #5022c1
- Tetradic: #93c122, #22c150, #5022c1, #c12293
- Monochromatic: #178035, #1a963e, #1eab47, #22c150, #26d759, #3adc69, #4fe079

# Alternatives to #22c150

Below, you can see some colors close to #22c150. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.

Similar colors: #22c128, #22c136, #22c143, #22c150, #22c15d, #22c16b, #22c178

# #22c150 Preview

This text has a font color of #22c150.

`<span style="color:#22c150;">Text here</span>`

This paragraph has a background color of #22c150.

`<p style="background-color:#22c150;">Content here</p>`

This element has a border color of #22c150.

`<div style="border:1px solid #22c150;">Content here</div>`

CSS codes:

`.text {color:#22c150;}`
`.background {background-color:#22c150;}`
`.border {border:1px solid #22c150;}`

# Shades and Tints of #22c150

A shade is achieved by adding black to any pure hue, while a tint is created by mixing white into any pure color. In this example, #020a04 is the darkest color, while #f8fefa is the lightest one.

Shades: #020a04, #051a0b, #082b12, #0a3c19, #0d4c20, #105d27, #136e2d, #167e34, #198f3b, #1ca042, #1fb049, #22c150, #25d257
Tints: #30da61, #40dd6e, #51e07b, #62e387, #72e694, #83e9a1, #94ecad, #a4efba, #b5f2c7, #c6f5d3, #d7f8e0, #e7fbed, #f8fefa

# Tones of #22c150

A tone is produced by adding gray to any pure hue. In this case, #717271 is the least saturated color, while #08db45 is the most saturated one.

Tones: #717271, #687b6d, #5f846a, #568d66, #4e9562, #459e5f, #3ca75b, #33b057, #2bb854, #22c150, #19ca4c, #11d249, #08db45

# Color Blindness Simulator

Below, you can see how #22c150 is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.

Monochromacy: Achromatopsia (0.005% of the population), Atypical Achromatopsia (0.001% of the population)
Dichromacy: Protanopia (1% of men), Deuteranopia (1% of men), Tritanopia (0.001% of the population)
Trichromacy: Protanomaly (1% of men, 0.01% of women), Deuteranomaly (6% of men, 0.4% of women), Tritanomaly (0.01% of the population)
http://www.intel80386.com/simd/mmx2-doc.html
Intel and compatible CPUs Programming Information
"Back to Intel80386\n\n## Intel SSE / MMX2 / KNI documentation\n\nADDPS, ADDSS, ANDNPS, ANDPS, CMPEQPS, CMPEQSS, CMPLEPS, CMPLESS, CMPLTPS, CMPLTSS, CMPNEQPS, CMPNEQSS, CMPNLEPS, CMPNLESS, CMPNLTPS, CMPNLTSS, CMPORDPS, CMPORDSS, CMPUNORDPS, CMPUNORDSS, COMISS, CVTPI2PS, CVTPS2PI, CVTSI2SS, CVTSS2SI, CVTTPS2PI, CVTTSS2SI, DIVPS, DIVSS, FXRSTOR, FXSAVE, LDMXCSR, MASKMOVQ, MAXPS, MAXSS, MINPS, MINSS, MOVAPS, MOVHLPS, MOVHPS, MOVLHPS, MOVLPS, MOVMSKPS, MOVNTPS, MOVNTQ, MOVSS, MOVUPS, MULPS, MULSS, ORPS, PAVGB, PAVGW, PEXTRW, PINSRW, PMAXSW, PMAXUB, PMINSW, PMINUB, PMOVMSKB, PMULHUW, PREFETCHNTA, PREFETCHT0, PREFETCHT1, PREFETCHT2, PSADBW, PSHUFW, RCPPS, RCPSS, RSQRTPS, RSQRTSS, SFENCE, SHUFPS, SQRTPS, SQRTSS, STMXCSR, SUBPS, SUBSS, UCOMISS, UNPCKHPS, UNPCKLPS & XORPS.\n```Please note, this is a work-in-progress (ie BETA).\n\nTimings are of approximate throughput cycles using average from TSC, the\nlatency and ranges are indicated where known.\n\nOpcode\t\tCycles\tInstruction\n0F 58\t\t2 (3)\tADDPS xmm reg,xmm reg/mem128\n\nop1 contains 4 single precision 32-bit floating point values\nop2 contains 4 single precision 32-bit floating point values\n\nop1 = op1 + op2\nop1 = op1 + op2\nop1 = op1 + op2\nop1 = op1 + op2\n\nOpcode\t\tCycles\tInstruction\nF3 0F 58\t1 (3)\tADDSS xmm reg,xmm reg/mem32\n\nop1 contains 4 single precision 32-bit floating point values\nop2 contains 1 single precision 32-bit floating point value\n\nop1 = op1 + op2\nop1 = op1\nop1 = op1\nop1 = op1\nANDNPS\t\tAnd Not Parallel Scalars (bitwise)\n\nOpcode\t\tCycles\tInstruction\n0F 55\t\t2\tANDNPS xmm reg,xmm reg/mem128\n\nANDNPS op1, op2\n\nop1 contains 1 128-bit value\nop2 contains 1 128-bit value\n\nop1 = !op1 & op2\nANDPS\t\tAnd Parallel Scalars (bitwise)\n\nOpcode\t\tCycles\tInstruction\n0F 54\t\t2\tANDPS xmm reg,xmm reg/mem128\n\nANDPS op1, op2\n\nop1 contains 1 128-bit value\nop2 contains 1 128-bit value\n\nop1 = op1 & op2\nCMPEQPS\t\tCompare Equal Parallel 
Scalars\n\nOpcode\t\tCycles\tInstruction\n0F C2 .. 00\t2 (3)\tCMPEQPS xmm reg,xmm reg/mem128\n\nCMPEQPS op1, op2\n\nop1 contains 4 single precision 32-bit floating point values\nop2 contains 4 single precision 32-bit floating point values\n\nop1 = op1 == op2\nop1 = op1 == op2\nop1 = op1 == op2\nop1 = op1 == op2\n\nTRUE = 0xFFFFFFFF\nFALSE = 0x00000000\nCMPEQSS\t\tCompare Equal Single Scalar\n\nOpcode\t\tCycles\tInstruction\nF3 0F C2 .. 00\t1 (3)\tCMPEQSS xmm reg,xmm reg/mem32\n\nCMPEQSS op1, op2\n\nop1 contains 4 single precision 32-bit floating point values\nop2 contains 1 single precision 32-bit floating point value\n\nop1 = op1 == op2\nop1 = op1\nop1 = op1\nop1 = op1\n\nTRUE = 0xFFFFFFFF\nFALSE = 0x00000000\nCMPLEPS\t\tCompare Less than or Equal Parallel Scalars\n\nOpcode\t\tCycles\tInstruction\n0F C2 .. 02\t2 (3)\tCMPLEPS xmm reg,xmm reg/mem128\n\nCMPLEPS op1, op2\n\nop1 contains 4 single precision 32-bit floating point values\nop2 contains 4 single precision 32-bit floating point values\n\nop1 = op1 <= op2\nop1 = op1 <= op2\nop1 = op1 <= op2\nop1 = op1 <= op2\n\nTRUE = 0xFFFFFFFF\nFALSE = 0x00000000\nCMPLESS\t\tCompare Less than or Equal Single Scalar\n\nOpcode\t\tCycles\tInstruction\nF3 0F C2 .. 02\t1 (3)\tCMPLESS xmm reg,xmm reg/mem32\n\nCMPLESS op1, op2\n\nop1 contains 4 single precision 32-bit floating point values\nop2 contains 1 single precision 32-bit floating point value\n\nop1 = op1 <= op2\nop1 = op1\nop1 = op1\nop1 = op1\n\nTRUE = 0xFFFFFFFF\nFALSE = 0x00000000\nCMPLTPS\t\tCompare Less Than Parallel Scalars\n\nOpcode\t\tCycles\tInstruction\n0F C2 .. 01\t2 (3)\tCMPLTPS xmm reg,xmm reg/mem128\n\nCMPLTPS op1, op2\n\nop1 contains 4 single precision 32-bit floating point values\nop2 contains 4 single precision 32-bit floating point values\n\nop1 = op1 < op2\nop1 = op1 < op2\nop1 = op1 < op2\nop1 = op1 < op2\n\nTRUE = 0xFFFFFFFF\nFALSE = 0x00000000\nCMPLTSS\t\tCompare Less Than Single Scalar\n\nOpcode\t\tCycles\tInstruction\nF3 0F C2 .. 
01\t1 (3)\tCMPLTSS xmm reg,xmm reg/mem32\n\nCMPLTSS op1, op2\n\nop1 contains 4 single precision 32-bit floating point values\nop2 contains 1 single precision 32-bit floating point value\n\nop1 = op1 < op2\nop1 = op1\nop1 = op1\nop1 = op1\n\nTRUE = 0xFFFFFFFF\nFALSE = 0x00000000\nCMPNEQPS\tCompare Not Equal Parallel Scalars\n\nOpcode\t\tCycles\tInstruction\n0F C2 .. 04\t2 (3)\tCMPNEQPS xmm reg,xmm reg/mem128\n\nCMPNEQPS op1, op2\n\nop1 contains 4 single precision 32-bit floating point values\nop2 contains 4 single precision 32-bit floating point values\n\nop1 = op1 != op2\nop1 = op1 != op2\nop1 = op1 != op2\nop1 = op1 != op2\n\nTRUE = 0xFFFFFFFF\nFALSE = 0x00000000\nCMPNEQSS\tCompare Not Equal Single Scalar\n\nOpcode\t\tCycles\tInstruction\nF3 0F C2 .. 04\t1 (3)\tCMPNEQSS xmm reg,xmm reg/mem32\n\nCMPNEQSS op1, op2\n\nop1 contains 4 single precision 32-bit floating point values\nop2 contains 1 single precision 32-bit floating point value\n\nop1 = op1 != op2\nop1 = op1\nop1 = op1\nop1 = op1\n\nTRUE = 0xFFFFFFFF\nFALSE = 0x00000000\nCMPNLEPS\tCompare Not Less than or Equal Parallel Scalars\n\nOpcode\t\tCycles\tInstruction\n0F C2 .. 06\t2 (3)\tCMPNLEPS xmm reg,xmm reg/mem128\n\nCMPNLEPS op1, op2\n\nop1 contains 4 single precision 32-bit floating point values\nop2 contains 4 single precision 32-bit floating point values\n\nop1 = op1 > op2\nop1 = op1 > op2\nop1 = op1 > op2\nop1 = op1 > op2\n\nTRUE = 0xFFFFFFFF\nFALSE = 0x00000000\nCMPNLESS\tCompare Not Less than or Equal Single Scalar\n\nOpcode\t\tCycles\tInstruction\nF3 0F C2 .. 06\t1 (3)\tCMPNLESS xmm reg,xmm reg/mem32\n\nCMPNLESS op1, op2\n\nop1 contains 4 single precision 32-bit floating point values\nop2 contains 1 single precision 32-bit floating point value\n\nop1 = op1 > op2\nop1 = op1\nop1 = op1\nop1 = op1\n\nTRUE = 0xFFFFFFFF\nFALSE = 0x00000000\nCMPNLTPS\tCompare Not Less Than Parallel Scalars\n\nOpcode\t\tCycles\tInstruction\n0F C2 .. 
05\t2 (3)\tCMPNLTPS xmm reg,xmm reg/mem128\n\nCMPNLTPS op1, op2\n\nop1 contains 4 single precision 32-bit floating point values\nop2 contains 4 single precision 32-bit floating point values\n\nop1 = op1 >= op2\nop1 = op1 >= op2\nop1 = op1 >= op2\nop1 = op1 >= op2\n\nTRUE = 0xFFFFFFFF\nFALSE = 0x00000000\nCMPNLTSS\t\tCompare Not Less Than Single Scalar\n\nOpcode\t\tCycles\tInstruction\nF3 0F C2 .. 01\t1 (3)\tCMPNLTSS xmm reg,xmm reg/mem32\n\nCMPNLTSS op1, op2\n\nop1 contains 4 single precision 32-bit floating point values\nop2 contains 1 single precision 32-bit floating point value\n\nop1 = op1 >= op2\nop1 = op1\nop1 = op1\nop1 = op1\n\nTRUE = 0xFFFFFFFF\nFALSE = 0x00000000\nCMPORDPS\tCompare Ordered Parallel Scalars\n\nOpcode\t\tCycles\tInstruction\n0F C2 .. 07\t2 (3)\tCMPORDPS xmm reg,xmm reg/mem128\n\nCMPORDPS op1, op2\n\nop1 contains 4 single precision 32-bit floating point values\nop2 contains 4 single precision 32-bit floating point values\n\nop1 = (op1 != NaN) && (op2 != NaN)\nop1 = (op1 != NaN) && (op2 != NaN)\nop1 = (op1 != NaN) && (op2 != NaN)\nop1 = (op1 != NaN) && (op2 != NaN)\n\nTRUE = 0xFFFFFFFF\nFALSE = 0x00000000\nCMPORDSS\tCompare Ordered Single Scalar\n\nOpcode\t\tCycles\tInstruction\nF3 0F C2 .. 07\t1 (3)\tCMPORDSS xmm reg,xmm reg/mem32\n\nCMPORDSS op1, op2\n\nop1 contains 4 single precision 32-bit floating point values\nop2 contains 1 single precision 32-bit floating point value\n\nop1 = (op1 != NaN) && (op2 != NaN)\nop1 = op1\nop1 = op1\nop1 = op1\n\nTRUE = 0xFFFFFFFF\nFALSE = 0x00000000\nCMPUNORDPS\tCompare Unordered Parallel Scalars\n\nOpcode\t\tCycles\tInstruction\n0F C2 .. 
03\t2 (3)\tCMPUNORDPS xmm reg,xmm reg/mem128\n\nCMPUNORDPS op1, op2\n\nop1 contains 4 single precision 32-bit floating point values\nop2 contains 4 single precision 32-bit floating point values\n\nop1 = (op1 == NaN) || (op2 == NaN)\nop1 = (op1 == NaN) || (op2 == NaN)\nop1 = (op1 == NaN) || (op2 == NaN)\nop1 = (op1 == NaN) || (op2 == NaN)\n\nTRUE = 0xFFFFFFFF\nFALSE = 0x00000000\nCMPUNORDSS\tCompare Unordered Single Scalar\n\nOpcode\t\tCycles\tInstruction\nF3 0F C2 .. 03\t1 (3)\tCMPUNORDSS xmm reg,xmm reg/mem32\n\nCMPUNORDSS op1, op2\n\nop1 contains 4 single precision 32-bit floating point values\nop2 contains 1 single precision 32-bit floating point value\n\nop1 = (op1 == NaN) || (op2 == NaN)\nop1 = op1\nop1 = op1\nop1 = op1\n\nTRUE = 0xFFFFFFFF\nFALSE = 0x00000000\nCOMISS\t\tCompare Ordered Single Scalar\n\nOpcode\t\tCycles\tInstruction\n0F 2F\t\t\tCOMISS xmm reg,xmm reg/mem32\n\nCOMISS op1, op2\nCVTPI2PS\tConvert Parallel Integer to Parallel Scalars\n\nOpcode\t\tCycles\tInstruction\n0F 2A\t\t\tCVTPI2PS xmm reg,mm reg/mem64\n\nCVTPI2PS op1, op2\n\nop1 contains 2 single precision 32-bit floating point values\nop2 contains 2 32-bit integer values\n\nop1 = (float)op2\nop1 = (float)op2\nop1 = op1\nop1 = op1\nCVTPS2PI\tConvert Parallel Scalars to Parallel Integers\n\nOpcode\t\tCycles\tInstruction\n0F 2D\t\t\tCVTPS2PI mm reg,xmm reg/mem128\n\nCVTPS2PI op1, op2\n\nop1 contains 2 32-bit integer values\nop2 contains 2 single precision 32-bit floating point values\n\nop1 = (long)op2\nop1 = (long)op2\nCVTSI2SS\tConvert Single Integer to Single Scalar\n\nOpcode\t\tCycles\tInstruction\nF3 0F 2A\t\tCVTSI2SS xmm reg,reg32/mem32\n\nCVTSI2SS op1, op2\n\nop1 contains 1 single precision 32-bit floating point value\nop2 contains 1 32-bit integer value\n\nop1 = (float)op2\nop1 = op1\nop1 = op1\nop1 = op1\nCVTSS2SI\tConvert Single Scalar to Single Integer\n\nOpcode\t\tCycles\tInstruction\nF3 0F 2D\t\tCVTSS2SI reg32,xmm reg/mem128\n\nCVTSS2SI op1, op2\n\nop1 contains 1 32-bit
integer value\nop2 contains 1 single precision 32-bit floating point value\n\nop1 = (long)op2\nCVTTPS2PI\tConvert Parallel Scalars to Parallel Integers\n\nOpcode\t\tCycles\tInstruction\n0F 2C\t\t\tCVTTPS2PI mm reg,xmm reg/mem128\n\nCVTTPS2PI op1, op2\n\nop1 contains 2 32-bit integer values\nop2 contains 2 single precision 32-bit floating point values\n\nop1 = (long)op2\nop1 = (long)op2\nCVTTSS2SI\tConvert Single Scalar to Single Integer\n\nOpcode\t\tCycles\tInstruction\nF3 0F 2C\t\tCVTTSS2SI reg32,xmm reg/mem128\n\nCVTTSS2SI op1, op2\n\nop1 contains 1 32-bit integer value\nop2 contains 1 single precision 32-bit floating point value\n\nop1 = (long)op2\nDIVPS\t\tDivide Parallel Scalars\n\nOpcode\t\tCycles\tInstruction\n0F 5E\t\t15-115\tDIVPS xmm reg,xmm reg/mem128\n\nDIVPS op1, op2\n\nop1 contains 4 single precision 32-bit floating point values\nop2 contains 4 single precision 32-bit floating point values\n\nop1 = op1 / op2\nop1 = op1 / op2\nop1 = op1 / op2\nop1 = op1 / op2\nDIVSS\t\tDivide Single Scalar\n\nOpcode\t\tCycles\tInstruction\nF3 0F 5E\t7-98\tDIVSS xmm reg,xmm reg/mem32\n\nDIVSS op1, op2\n\nop1 contains 4 single precision 32-bit floating point values\nop2 contains 1 single precision 32-bit floating point value\n\nop1 = op1 / op2\nop1 = op1\nop1 = op1\nop1 = op1\nFXRSTOR\t\tFloating Point Extended Restore\n\nOpcode\t\tCycles\tInstruction\n0F AE xx001xxx\t\tFXRSTOR mem\n\nFXRSTOR op1\n\nop1 contains a 512 byte register context, paragraph aligned\nFXSAVE\t\tFloating Point Extended Save\n\nOpcode\t\tCycles\tInstruction\n0F AE xx000xxx\t\tFXSAVE mem\n\nFXSAVE op1\n\nop1 contains a 512 byte register context, paragraph aligned\nLDMXCSR\t\tLoad Multimedia Extended Control Status Register\n\nOpcode\t\tCycles\tInstruction\n0F AE xx010xxx\t\tLDMXCSR mem32\n\nLDMXCSR op1\n\nop1 contains 1 32-bit register\n\nMXCSR = op1\n\nOpcode\t\tCycles\tInstruction\n0F F7\t\t\tMASKMOVQ mm reg,mm reg\n\nMAXPS\t\tMaximum Parallel Scalars\n\nOpcode\t\tCycles\tInstruction\n0F 5F\t\t2 
(3)\tMAXPS xmm reg,xmm reg/mem128\n\nMAXPS op1, op2\n\nop1 contains 4 single precision 32-bit floating point values\nop2 contains 4 single precision 32-bit floating point values\n\nop1 = max(op1, op2)\nop1 = max(op1, op2)\nop1 = max(op1, op2)\nop1 = max(op1, op2)\nMAXSS\t\tMaximum Single Scalar\n\nOpcode\t\tCycles\tInstruction\nF3 0F 5F\t1 (3)\tMAXSS xmm reg,xmm reg/mem32\n\nMAXSS op1, op2\n\nop1 contains 4 single precision 32-bit floating point values\nop2 contains 1 single precision 32-bit floating point value\n\nop1 = max(op1, op2)\nop1 = op1\nop1 = op1\nop1 = op1\nMINPS\t\tMinimum Parallel Scalars\n\nOpcode\t\tCycles\tInstruction\n0F 5D\t\t2 (3)\tMINPS xmm reg,xmm reg/mem128\n\nMINPS op1, op2\n\nop1 contains 4 single precision 32-bit floating point values\nop2 contains 4 single precision 32-bit floating point values\n\nop1 = min(op1, op2)\nop1 = min(op1, op2)\nop1 = min(op1, op2)\nop1 = min(op1, op2)\nMINSS\t\tMinimum Single Scalar\n\nOpcode\t\tCycles\tInstruction\nF3 0F 5D\t1 (3)\tMINSS xmm reg,xmm reg/mem32\n\nMINSS op1, op2\n\nop1 contains 4 single precision 32-bit floating point values\nop2 contains 1 single precision 32-bit floating point value\n\nop1 = min(op1, op2)\nop1 = op1\nop1 = op1\nop1 = op1\nMOVAPS\t\tAligned Move Parallel Scalars\n\nOpcode\t\tCycles\tInstruction\n0F 28\t\t\tMOVAPS xmm reg,xmm reg/mem128\n0F 29\t\t\tMOVAPS mem128,xmm reg\n\nMOVAPS op1, op2\n\nop1 contains 4 single precision 32-bit floating point values\nop2 contains 4 single precision 32-bit floating point values\n\nop1 = op2\nop1 = op2\nop1 = op2\nop1 = op2\n\n* Addresses must be paragraph aligned\nMOVHPS\t\tMove High Pair Parallel Scalars\n\nOpcode\t\tCycles\tInstruction\n0F 16\t\t\tMOVHPS xmm reg,mem64\n0F 17\t\t\tMOVHPS mem64,xmm reg\n\nMOVHPS op1, op2\n\nop1 contains 2 single precision 32-bit floating point values\nop2 contains 2 single precision 32-bit floating point values\n\nop1 = op2 (xmm reg,mem64)\nop1 = op2\n\nop1 = op2 (mem64,xmm reg)\nop1 = op2\nMOVHLPS\t\tMove High 
to Low Pair Parallel Scalars\n\nOpcode\t\tCycles\tInstruction\n0F 12\t\t1\tMOVHLPS xmm reg,xmm reg\n\nMOVHLPS op1, op2\n\nop1 contains 4 single precision 32-bit floating point values\nop2 contains 4 single precision 32-bit floating point values\n\nop1 = op2\nop1 = op2\nop1 = op1\nop1 = op1\nMOVLPS\t\tMove Low Pair Parallel Scalars\n\nOpcode\t\tCycles\tInstruction\n0F 12\t\t\tMOVLPS xmm reg,mem64\n0F 13\t\t\tMOVLPS mem64,xmm reg\n\nMOVLPS op1, op2\n\nop1 contains 2 single precision 32-bit floating point values\nop2 contains 2 single precision 32-bit floating point values\n\nop1 = op2\nop1 = op2\nMOVLHPS\t\tMove Low to High Pair Parallel Scalars\n\nOpcode\t\tCycles\tInstruction\n0F 16\t\t1\tMOVLHPS xmm reg,xmm reg\n\nMOVLHPS op1, op2\n\nop1 contains 4 single precision 32-bit floating point values\nop2 contains 4 single precision 32-bit floating point values\n\nop1 = op1\nop1 = op1\nop1 = op2\nop1 = op2\nMOVMSKPS\n\nOpcode\t\tCycles\tInstruction\n0F 50\t\t\tMOVMSKPS reg32,xmm reg\n\nMOVMSKPS op1, op2\nMOVNTPS\t\tUncached Move Parallel Scalars\n\nOpcode\t\tCycles\tInstruction\n0F 2B\t\t\tMOVNTPS mem128,xmm reg\n\nMOVNTPS op1, op2\n\nop1 contains 1 128-bit value\nop2 contains 1 128-bit value\n\nop1 = op2\n\nOpcode\t\tCycles\tInstruction\n0F E7\t\t\tMOVNTQ mem64,mm reg\n\nMOVNTQ op1, op2\n\nop1 contains 1 64-bit value\nop2 contains 1 64-bit value\n\nop1 = op2\nMOVSS\t\tMove Single Scalar\n\nOpcode\t\tCycles\tInstruction\nF3 0F 10\t\tMOVSS xmm reg,xmm reg/mem32\nF3 0F 11\t\tMOVSS mem32,xmm reg\n\nMOVSS op1, op2\n\nop1 contains 1 single precision 32-bit floating point value\nop2 contains 1 single precision 32-bit floating point value\n\nop1 = op2\nMOVUPS\t\tUnaligned Move Parallel Scalars\n\nOpcode\t\tCycles\tInstruction\n0F 10\t\t\tMOVUPS xmm reg,xmm reg/mem128\n0F 11\t\t\tMOVUPS mem128,xmm reg\n\nMOVUPS op1, op2\n\nop1 contains 4 single precision 32-bit floating point values\nop2 contains 4 single precision 32-bit floating point values\n\nop1 = op2\nop1 = op2\nop1 = 
op2\nop1 = op2\nMULPS\t\tMultiply Parallel Scalars\n\nOpcode\t\tCycles\tInstruction\n0F 59\t\t2 (4)\tMULPS xmm reg,xmm reg/mem128\n\nMULPS op1, op2\n\nop1 contains 4 single precision 32-bit floating point values\nop2 contains 4 single precision 32-bit floating point values\n\nop1 = op1 * op2\nop1 = op1 * op2\nop1 = op1 * op2\nop1 = op1 * op2\nMULSS\t\tMultiply Single Scalar\n\nOpcode\t\tCycles\tInstruction\nF3 0F 59\t1 (4)\tMULSS xmm reg,xmm reg/mem32\n\nMULSS op1, op2\n\nop1 contains 4 single precision 32-bit floating point values\nop2 contains 1 single precision 32-bit floating point value\n\nop1 = op1 * op2\nop1 = op1\nop1 = op1\nop1 = op1\nORPS\t\tOr Parallel Scalars (bitwise)\n\nOpcode\t\tCycles\tInstruction\n0F 56\t\t2\tORPS xmm reg,xmm reg/mem128\n\nORPS op1, op2\n\nop1 contains 1 128-bit value\nop2 contains 1 128-bit value\n\nop1 = op1 | op2\nPAVGB\t\tParallel Integer Average Byte\n\nOpcode\t\tCycles\tInstruction\n0F E0\t\t\tPAVGB mm reg,mm reg/mem64\n\nPAVGB op1, op2\n\nop1 contains 8 8-bit integer values\nop2 contains 8 8-bit integer values\n\nop1 = (op1 + op2) / 2\nop1 = (op1 + op2) / 2\nop1 = (op1 + op2) / 2\nop1 = (op1 + op2) / 2\nop1 = (op1 + op2) / 2\nop1 = (op1 + op2) / 2\nop1 = (op1 + op2) / 2\nop1 = (op1 + op2) / 2\nPAVGW\t\tParallel Integer Average Word\n\nOpcode\t\tCycles\tInstruction\n0F E3\t\t\tPAVGW mm reg,mm reg/mem64\n\nPAVGW op1, op2\n\nop1 contains 4 16-bit integer values\nop2 contains 4 16-bit integer values\n\nop1 = (op1 + op2) / 2\nop1 = (op1 + op2) / 2\nop1 = (op1 + op2) / 2\nop1 = (op1 + op2) / 2\nPEXTRW\n\nOpcode\t\tCycles\tInstruction\n0F C5\t\t\tPEXTRW reg32,mm reg,imm8\n\nPEXTRW op1, op2, op3\nPINSRW\n\nOpcode\t\tCycles\tInstruction\n0F C4\t\t\tPINSRW mm reg,reg32/mem32,imm8\n\nPINSRW op1, op2, op3\nPMAXSW\t\tParallel Integer Maximum Signed Word\n\nOpcode\t\tCycles\tInstruction\n0F EE\t\t\tPMAXSW mm reg,mm reg/mem64\n\nPMAXSW op1, op2\n\nop1 contains 4 16-bit signed integer values\nop2 contains 4 16-bit signed integer 
values\n\nop1 = max(op1, op2)\nop1 = max(op1, op2)\nop1 = max(op1, op2)\nop1 = max(op1, op2)\nPMAXUB\t\tParallel Integer Maximum Unsigned Byte\n\nOpcode\t\tCycles\tInstruction\n0F DE\t\t\tPMAXUB mm reg,mm reg/mem64\n\nPMAXUB op1, op2\n\nop1 contains 8 8-bit unsigned integer values\nop2 contains 8 8-bit unsigned integer values\n\nop1 = max(op1, op2)\nop1 = max(op1, op2)\nop1 = max(op1, op2)\nop1 = max(op1, op2)\nop1 = max(op1, op2)\nop1 = max(op1, op2)\nop1 = max(op1, op2)\nop1 = max(op1, op2)\nPMINSW\t\tParallel Integer Minimum Signed Word\n\nOpcode\t\tCycles\tInstruction\n0F EA\t\t\tPMINSW mm reg,mm reg/mem64\n\nPMINSW op1, op2\n\nop1 contains 4 16-bit signed integer values\nop2 contains 4 16-bit signed integer values\n\nop1 = min(op1, op2)\nop1 = min(op1, op2)\nop1 = min(op1, op2)\nop1 = min(op1, op2)\nPMINUB\t\tParallel Integer Minimum Unsigned Byte\n\nOpcode\t\tCycles\tInstruction\n0F DA\t\t\tPMINUB mm reg,mm reg/mem64\n\nPMINUB op1, op2\n\nop1 contains 8 8-bit unsigned integer values\nop2 contains 8 8-bit unsigned integer values\n\nop1 = min(op1, op2)\nop1 = min(op1, op2)\nop1 = min(op1, op2)\nop1 = min(op1, op2)\nop1 = min(op1, op2)\nop1 = min(op1, op2)\nop1 = min(op1, op2)\nop1 = min(op1, op2)\nPMOVMSKB\n\nOpcode\t\tCycles\tInstruction\n0F D7\t\t\tPMOVMSKB reg32,mm reg\n\nPMOVMSKB op1, op2\nPMULHUW\t\tMultiply unsigned word store high\n\nOpcode\t\tCycles\tInstruction\n0F E4\t\t\tPMULHUW mm reg,mm reg/mem64\n\nPMULHUW op1, op2\n\nop1 contains 4 16-bit unsigned integer values\nop2 contains 4 16-bit unsigned integer values\n\nop1 = (op1 * op2) >> 16\nop1 = (op1 * op2) >> 16\nop1 = (op1 * op2) >> 16\nop1 = (op1 * op2) >> 16\nPREFETCHNTA\tPrefetch Non-caching Aligned ?\n\nOpcode\t\tCycles\tInstruction\n0F 18 xx000xxx\t\tPREFETCHNTA mem8\n\nPREFETCHNTA op1\n\nOpcode\t\tCycles\tInstruction\n0F 18 xx001xxx\t\tPREFETCHT0 mem8\n\nPREFETCHT0 op1\n\nOpcode\t\tCycles\tInstruction\n0F 18 xx010xxx\t\tPREFETCHT1 mem8\n\nPREFETCHT1 op1\n\nOpcode\t\tCycles\tInstruction\n0F 18 
xx011xxx\t\tPREFETCHT2 mem8\n\nPREFETCHT2 op1\n\nOpcode\t\tCycles\tInstruction\n0F F6\t\t\tPSADBW mm reg,mm reg/mem64\n\nPSHUFW\t\tShuffle Parallel Words\n\nOpcode\t\tCycles\tInstruction\n0F 70\t\t1 (1)\tPSHUFW mm reg,mm reg/mem64,imm8\n\nPSHUFW op1, op2, op3\n\nop1 contains 4 16-bit integer values\nop2 contains 4 16-bit integer values\nop3 contains a bit map dd:cc:bb:aa (MSB to LSB)\n\nop1 = op2[aa]\nop1 = op2[bb]\nop1 = op2[cc]\nop1 = op2[dd]\nRCPPS\t\tReciprocal Parallel Scalars\n\nOpcode\t\tCycles\tInstruction\n0F 53\t\t2 (2)\tRCPPS xmm reg,xmm reg/mem128\n\nRCPPS op1, op2\n\nop1 contains 4 single precision 32-bit floating point values\nop2 contains 4 single precision 32-bit floating point values\n\nop1 = 1 / op2\nop1 = 1 / op2\nop1 = 1 / op2\nop1 = 1 / op2\n\n* The results have 12-bit accuracy\nRCPSS\t\tReciprocal Single Scalar\n\nOpcode\t\tCycles\tInstruction\nF3 0F 53\t1 (1)\tRCPSS xmm reg,xmm reg/mem32\n\nRCPSS op1, op2\n\nop1 contains 4 single precision 32-bit floating point values\nop2 contains 1 single precision 32-bit floating point value\n\nop1 = 1 / op2\nop1 = op1\nop1 = op1\nop1 = op1\n\n* The results have 12-bit accuracy\nRSQRTPS\t\tReciprocal Square Root Parallel Scalars\n\nOpcode\t\tCycles\tInstruction\n0F 52\t\t2 (2)\tRSQRTPS xmm reg,xmm reg/mem128\n\nRSQRTPS op1, op2\n\nop1 contains 4 single precision 32-bit floating point values\nop2 contains 4 single precision 32-bit floating point values\n\nop1 = 1 / sqrt(op2)\nop1 = 1 / sqrt(op2)\nop1 = 1 / sqrt(op2)\nop1 = 1 / sqrt(op2)\n\n* The results have 12-bit accuracy\nRSQRTSS\t\tReciprocal Square Root Single Scalar\n\nOpcode\t\tCycles\tInstruction\nF3 0F 52\t1 (1)\tRSQRTSS xmm reg,xmm reg/mem32\n\nRSQRTSS op1, op2\n\nop1 contains 4 single precision 32-bit floating point values\nop2 contains 1 single precision 32-bit floating point value\n\nop1 = 1 / sqrt(op2)\nop1 = op1\nop1 = op1\nop1 = op1\n\n* The results have 12-bit accuracy\nSFENCE\t\tStream Fence\n\nOpcode\t\tCycles\tInstruction\n0F AE
FF\t\tSFENCE\n\nSFENCE\n\nProvides a demarcation in write combining buffers to force\ncurrent states to be committed. In other words writes to the\nsame location cannot combine to one if there is a fence placed\nbetween them.\n\n* Presumed function from AGP definition of fencing\nSHUFPS\t\tShuffle Parallel Scalars\n\nOpcode\t\tCycles\tInstruction\n0F C6\t\t3\tSHUFPS xmm reg, xmm reg/mem128, imm8\n\nSHUFPS op1, op2, op3\n\nop1 contains 4 single precision 32-bit floating point values\nop2 contains 4 single precision 32-bit floating point values\nop3 contains a bit map dd:cc:bb:aa (MSB to LSB)\n\nop1 = op1[aa]\nop1 = op1[bb]\nop1 = op2[cc]\nop1 = op2[dd]\nSQRTPS\t\tSquare Root Parallel Scalars\n\nOpcode\t\tCycles\tInstruction\n0F 51\t\t16-134\tSQRTPS xmm reg,xmm reg/mem128\n\nSQRTPS op1, op2\n\nop1 contains 4 single precision 32-bit floating point values\nop2 contains 4 single precision 32-bit floating point values\n\nop1 = sqrt(op2)\nop1 = sqrt(op2)\nop1 = sqrt(op2)\nop1 = sqrt(op2)\nSQRTSS\t\tSquare Root Single Scalar\n\nOpcode\t\tCycles\tInstruction\nF3 0F 51\t8-105\tSQRTSS xmm reg,xmm reg/mem32\n\nSQRTSS op1, op2\n\nop1 contains 4 single precision 32-bit floating point values\nop2 contains 1 single precision 32-bit floating point value\n\nop1 = sqrt(op2)\nop1 = op1\nop1 = op1\nop1 = op1\nSTMXCSR\t\tStore Multimedia Extended Control Status Register\n\nOpcode\t\tCycles\tInstruction\n0F AE xx011xxx\t\tSTMXCSR mem32\n\nSTMXCSR op1\n\nop1 contains 1 32-bit register\n\nop1 = MXCSR\nSUBPS\t\tSubtract Parallel Scalars\n\nOpcode\t\tCycles\tInstruction\n0F 5C\t\t2 (3)\tSUBPS xmm reg,xmm reg/mem128\n\nSUBPS op1, op2\n\nop1 contains 4 single precision 32-bit floating point values\nop2 contains 4 single precision 32-bit floating point values\n\nop1 = op1 - op2\nop1 = op1 - op2\nop1 = op1 - op2\nop1 = op1 - op2\nSUBSS\t\tSubtract Single Scalar\n\nOpcode\t\tCycles\tInstruction\nF3 0F 5C\t1 (3)\tSUBSS xmm reg,xmm reg/mem32\n\nSUBSS op1, op2\n\nop1 contains 4 single precision
32-bit floating point values\nop2 contains 1 single precision 32-bit floating point value\n\nop1 = op1 - op2\nop1 = op1\nop1 = op1\nop1 = op1\nUCOMISS\n\nOpcode\t\tCycles\tInstruction\n0F 2E\t\t\tUCOMISS xmm reg,xmm reg/mem32\n\nUCOMISS op1, op2\nUNPCKHPS\tUnpack High Parallel Scalars\n\nOpcode\t\tCycles\tInstruction\n0F 15\t\t2\tUNPCKHPS xmm reg,xmm reg/mem128\n\nUNPCKHPS op1, op2\n\nop1 contains 4 single precision 32-bit floating point values\nop2 contains 4 single precision 32-bit floating point values\n\nop1 = op1\nop1 = op2\nop1 = op1\nop1 = op2\nUNPCKLPS\tUnpack Low Parallel Scalars\n\nOpcode\t\tCycles\tInstruction\n0F 14\t\t2\tUNPCKLPS xmm reg,xmm reg/mem128\n\nUNPCKLPS op1, op2\n\nop1 contains 4 single precision 32-bit floating point values\nop2 contains 4 single precision 32-bit floating point values\n\nop1 = op1\nop1 = op2\nop1 = op1\nop1 = op2\nXORPS\t\tExclusive-Or Parallel Scalars (bitwise)\n\nOpcode\t\tCycles\tInstruction\n0F 57\t\t2\tXORPS xmm reg,xmm reg/mem128\n\nXORPS op1, op2\n\nop1 contains 1 128-bit value\nop2 contains 1 128-bit value\n\nop1 = op1 ^ op2```"
] | [
null,
"http://intel80386.com/cpubrandsgroup_11.jpg",
null,
"http://intel80386.com/cpubrandsgroup_12.jpg",
null,
"http://intel80386.com/back001.jpg",
null,
"http://intel80386.com/intel80386.jpg",
null,
"http://intel80386.com/back002.jpg",
null,
"http://intel80386.com/back004.jpg",
null
] | {"WARC-Target-URI":"http://www.intel80386.com/simd/mmx2-doc.html"}
http://hawstein.com/2013/02/09/16.10/ | [
"# Cracking the coding interview--Q16.10\n\nHawstein | February 9, 2013\n\n## 题目\n\nWrite a function called my2DAlloc which allocates a two dimensional array. Minimize the number of calls to malloc and make sure that the memory is accessible by the notation arr[i][j].\n\n## 解答\n\n1\n2\n3\n4\n5\n6\nint** My2DAlloc(int rows, int cols){\nint **arr = (int**)malloc(rows*sizeof(int*));\nfor(int i=0; i<rows; ++i)\narr[i] = (int*)malloc(cols*sizeof(int));\nreturn arr;\n}\n\n\n1\nint header = rows * sizeof(int*);\n\n\n1\nint data = rows * cols * sizeof(int);\n\n\n1\nint **arr = (int**)malloc(header + data);\n\n\n1\nint *buf = (int*)(arr + rows);\n\n\n1\n2\nfor(int i=0; i<rows; ++i)\narr[i] = buf + i * cols;\n\n\n1\n2\n3\n4\n5\n6\n7\n8\n9\nint** My2DAlloc1(int rows, int cols){\nint header = rows * sizeof(int*);\nint data = rows * cols * sizeof(int);\nint **arr = (int**)malloc(header + data);\nint *buf = (int*)(arr + rows);\nfor(int i=0; i<rows; ++i)\narr[i] = buf + i * cols;\nreturn arr;\n}\n\n\nCracking the coding interview–问题与解答\n\nhttps://github.com/Hawstein/cracking-the-coding-interview",
null,
""
] | [
null,
"http://hawstein.com/img/icon_wechat.png",
null
] | {"ft_lang_label":"__label__zh","ft_lang_prob":0.7622963,"math_prob":0.9963729,"size":1739,"snap":"2019-51-2020-05","text_gpt3_token_len":966,"char_repetition_ratio":0.12391931,"word_repetition_ratio":0.21390374,"special_character_ratio":0.28867164,"punctuation_ratio":0.1056338,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9735671,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-08T10:04:58Z\",\"WARC-Record-ID\":\"<urn:uuid:6539a6b0-ca7b-4965-832d-1a374b2f0bc8>\",\"Content-Length\":\"27908\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8c1f2eca-55c3-4020-9480-8ab496317df8>\",\"WARC-Concurrent-To\":\"<urn:uuid:9b306e9f-707b-44ed-a19e-d4aac4748fcc>\",\"WARC-IP-Address\":\"185.199.109.153\",\"WARC-Target-URI\":\"http://hawstein.com/2013/02/09/16.10/\",\"WARC-Payload-Digest\":\"sha1:RDWUESX573UII5MBQKZMHNPNN7UPSUYC\",\"WARC-Block-Digest\":\"sha1:5DMYUNMLDSQ7PGIJIODMOQTE2DVVSU2Y\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540508599.52_warc_CC-MAIN-20191208095535-20191208123535-00399.warc.gz\"}"} |
https://gelerigedugesapop.carriagehouseautoresto.com/write-an-equation-of-the-line-that-is-perpendicular-to-the-given-line141084055bc.html | [
"# Write an equation of the line that is perpendicular to the given line\n\nSo the slope of A is 2.\n\n### What is the equation of the line that passes through (?2, ?3) and is perpendicular to 2x ? 3y = 6?\n\nThis would give you your second point. Then click the button to compare your answer to Mathway's. Clicking on \"Tap to view steps\" on the widget's answer screen will take you to the Mathway site for a paid upgrade. But even just trying them, rather than immediately throwing your hands up in defeat, will strengthen your skills — as well as winning you some major \"brownie points\" with your instructor. Pictures can only give you a rough idea of what is going on. Any other line with a slope of 3 will be parallel to f x. Then my perpendicular slope will be. The left-hand side, negative 7 plus 3 is negative 4, and that's going to be equal to-- these guys cancel out-- that's equal to b, our y-intercept. And they tell us that line A has an equation y is equal to 2x plus I know the reference slope is.\n\nContent Continues Below MathHelp. We still don't know what the y-intercept of B is, but we can use this information to figure it out.",
null,
"With this point and my perpendicular slope, I can find the equation of the perpendicular line that'll give me the distance between the two original lines: Okay; now I have the equation of the perpendicular. Parallel, Perpendicular Lines Purplemath There is one other consideration for straight-line equations: finding parallel and perpendicular lines.",
null,
"Writing Equations of Parallel Lines Suppose for example, we are given the following equation. Then click the button to compare your answer to Mathway's. It will be the perpendicular distance between the two lines, but how do I find that?",
null,
"Yes, they can be long and messy.\n\nRated 6/10 based on 102 review"
] | [
null,
"https://www.wikihow.com/images/thumb/f/fe/Find-the-Equation-of-a-Perpendicular-Line-Given-an-Equation-and-Point-Step-1.jpg/aid9536296-v4-728px-Find-the-Equation-of-a-Perpendicular-Line-Given-an-Equation-and-Point-Step-1.jpg",
null,
"https://i.ytimg.com/vi/iNYV5d3HWyo/maxresdefault.jpg",
null,
"https://slideplayer.com/slide/8656436/26/images/4/Notes Over 5.2 Parallel Lines.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.94688797,"math_prob":0.96635413,"size":1792,"snap":"2020-10-2020-16","text_gpt3_token_len":408,"char_repetition_ratio":0.14317673,"word_repetition_ratio":0.043887146,"special_character_ratio":0.22321428,"punctuation_ratio":0.10220995,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9917546,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,10,null,5,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-05T01:26:52Z\",\"WARC-Record-ID\":\"<urn:uuid:67ccd0b7-f286-4914-97d1-d92ef49b4385>\",\"Content-Length\":\"5932\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:34633a85-4b0a-44ca-9bc4-a76a3d6522ef>\",\"WARC-Concurrent-To\":\"<urn:uuid:a8c1207a-3d2e-45a8-89c8-b8c85f2f1dd4>\",\"WARC-IP-Address\":\"104.24.105.223\",\"WARC-Target-URI\":\"https://gelerigedugesapop.carriagehouseautoresto.com/write-an-equation-of-the-line-that-is-perpendicular-to-the-given-line141084055bc.html\",\"WARC-Payload-Digest\":\"sha1:AUL6XM3J6S4ZC6HGQBS3FLRRREMQZSHB\",\"WARC-Block-Digest\":\"sha1:3G6757455WBC7YASGNSGJ2J5Y2LAAA3N\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585370526982.53_warc_CC-MAIN-20200404231315-20200405021315-00540.warc.gz\"}"} |
https://it.mathworks.com/help/matlab/ref/bitset.html | [
"# bitset\n\nSet bit at specific location\n\n## Syntax\n\n``intout = bitset(A,bit)``\n``intout = bitset(A,bit,assumedtype)``\n``intout = bitset(A,bit,V)``\n``intout = bitset(A,bit,V,assumedtype)``\n\n## Description\n\nexample\n\n````intout = bitset(A,bit)` returns the value of `A` with position `bit` set to 1 (on).```\n\nexample\n\n````intout = bitset(A,bit,assumedtype)` assumes `A` is of type `assumedtype`.```\n\nexample\n\n````intout = bitset(A,bit,V)` returns `A` with position `bit` set to the value of `V`. If `V` is zero, then the bit position `bit` is set to 0 (off).If `V` is nonzero, then the bit position `bit` is set to 1 (on). ```\n\nexample\n\n````intout = bitset(A,bit,V,assumedtype)` assumes `A` is of type `assumedtype`.```\n\n## Examples\n\ncollapse all\n\nAdd powers of 2 onto a number.\n\n```A = 4; intout = bitset(A,4:6)```\n```intout = 1×3 12 20 36 ```\n\nYou can see that `bitset` sequentially turns on bits 4 through 6.\n\n`c = dec2bin(intout)`\n```c = 3x6 char array '001100' '010100' '100100' ```\n\nMATLAB® throws an error if you specify an integer outside the range of `assumedtype`. For instance, `bitset(300,5,'int8')` returns an error since the maximum value of an `int8` integer is 127.\n\nAvoid this error by limiting your input to the range of the specified data type.\n\n`intout = bitset(75,5,'int8')`\n```intout = 91 ```\n\nRepeatedly subtract powers of 2 from a number.\n\n`a = intmax('uint8')`\n```a = uint8 255 ```\n```for k = 0:7 a = bitset(a, 8-k, 0); b(1,k+1) = a; end b```\n```b = 1x8 uint8 row vector 127 63 31 15 7 3 1 0 ```\n\nSet multiple bits to different values\n\n```bits = 2:6; val = [1 0 0 1 1]; intout = bitset(0,bits,val,'int8')```\n```intout = 1×5 2 0 0 16 32 ```\n\n## Input Arguments\n\ncollapse all\n\nInput values, specified as an array. 
`A`, `bit`, and `V` can each be scalars or arrays of the same size.\n\n• If `A` is a double array, and `assumedtype` is not specified, then MATLAB® treats `A` as an unsigned 64-bit integer.\n\n• If `assumedtype` is specified, then all elements in `A` must have integer values within the range of `assumedtype`.\n\nData Types: `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64`\n\nBit position, specified as an integer or integer array. `A`, `bit`, and `V` can each be scalars or arrays of the same size. The values of `bit` must be between 1 (the least significant bit) and the number of bits in the integer class of `A`.\n\nData Types: `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64`\n\nAssumed data type of `A`, specified as `'uint64'`, `'uint32'`, `'uint16'`, `'uint8'`, `'int64'`, `'int32'`, `'int16'`, or `'int8'`.\n\n• If `A` is a double array, then `assumedtype` can specify any valid integer type, but defaults to `'uint64'`.\n\n• If `A` is an integer type array, then `assumedtype` must specify that same integer type.\n\nData Types: `char` | `string`\n\nBit value, specified as a scalar or a numeric array. `A`, `bit`, and `V` can each be scalars or arrays of the same size.\n\nData Types: `double` | `int8` | `int16` | `int32` | `int64` | `uint8` | `uint16` | `uint32` | `uint64` | `logical`\n\n## Output Arguments\n\ncollapse all\n\nAdjusted integers, returned as an array. `intout` is the same data type as `A`.\n\n• If `A`, `bit`, and `V` are all scalars, then `intout` is also a scalar.\n\n• If any of `A`, `bit`, or `V` is an array, then `intout` is the same size as that array."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6105314,"math_prob":0.99353325,"size":4123,"snap":"2020-34-2020-40","text_gpt3_token_len":1210,"char_repetition_ratio":0.14639476,"word_repetition_ratio":0.11548913,"special_character_ratio":0.2898375,"punctuation_ratio":0.13904983,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99913377,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-03T10:47:50Z\",\"WARC-Record-ID\":\"<urn:uuid:2a3c68fa-bf49-4fed-b071-e8bfd2889c62>\",\"Content-Length\":\"98516\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3fefcf45-cb35-4889-b2fd-cde67286f628>\",\"WARC-Concurrent-To\":\"<urn:uuid:a7c6622a-1e6e-442a-acd1-3674f7ca1b8b>\",\"WARC-IP-Address\":\"23.66.56.59\",\"WARC-Target-URI\":\"https://it.mathworks.com/help/matlab/ref/bitset.html\",\"WARC-Payload-Digest\":\"sha1:R7P4LXGY5NJ7JS2PC427CDGHSUYKOYSR\",\"WARC-Block-Digest\":\"sha1:LF7IPTF3DSNC3YI6EPYQIJMRRUEBDQPO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439735792.85_warc_CC-MAIN-20200803083123-20200803113123-00039.warc.gz\"}"} |
http://visualfsharp.com/delegates/action.htm | [
" F# Types: Action Delegates",
null,
"Actions Delegates\n\n Fundamentals\n\n Introduction\n Instead of making you create delegates from scratch, and to provide universal support for all .NET languages, the .NET Framework provides various pre-built or built-in delegates that you can use directly in your programs. One of the built-in delegates is named Action. It is provided in various versions. The Action delegate is defined in the System namespace of the mscorlib.dll library. You should include that namespace in the document of your program.\n Creating a Simple Action\n\nThe primary type of an Action delegate is for functions that take no parameter and return nothing. Its signature is:\n\n```type Action =\ndelegate of unit -> unit```\n\nNotice that the signature exactly resembles the one we saw in our introduction to delegates. This time, the name of the delegate is replaced with Action. Also notice that the delegate of expression is used, exactly as it was in our introduction to delegates. This means that an Action delegate is of type FSharpFunc. This also means that such a delegate receives all the members of that class, including the Invoke() method.\n\nTo create an action for the delegate, declare a variable. Since you would not have created a delegate, use Action to initialize the variable and add its parentheses. The rest is done as we saw already. This means that you should first implement a method and pass it to the parentheses of Action.\n\nConsider the simple example we saw in our introduction:\n\n```open System\nopen System.Windows.Forms\n\ntype TitleCreation = delegate of unit -> unit\n\nlet exercise = new Form(Width = 300, Height = 80, Text = \"Exercise\")\n\nlet lblMessage = new Label(Left = 21, Top = 18, Width = 280)\n\nlet getTitle() : unit =\nlblMessage.Text <- \"Introduction to F# Programming, Second Edition\"\n\nlet title = TitleCreation(getTitle)\n\ntitle.Invoke()\n\ndo Application.Run exercise```\n\nThis would produce:",
null,
"To use the Action deleegate, replace the name of the delegate with Action. Here is an example:\n\n```open System\nopen System.Windows.Forms\n\ntype TitleCreation = delegate of unit -> unit\n\nlet exercise = new Form(Width = 300, Height = 80, Text = \"Exercise\")\n\nlet lblMessage = new Label(Left = 21, Top = 18, Width = 280)\n\nlet getTitle() : unit =\nlblMessage.Text <- \"Introduction to F# Programming, Second Edition\"\n\nlet title = Action(getTitle)\n\ntitle.Invoke()\n\ndo Application.Run exercise```\n\nYou can inclue the new operator before Action. Here is an example:\n\n```open System\nopen System.Windows.Forms\n\nlet exercise = new Form(Width = 300, Height = 80, Text = \"Exercise\")\n\nlet lblMessage = new Label(Left = 21, Top = 18, Width = 280)\n\nlet getTitle() : unit =\nlblMessage.Text <- \"Introduction to F# Programming, Second Edition\"\n\nlet title = new Action(getTitle)\n\ntitle.Invoke()\n\ndo Application.Run exercise```\n\nIf you want to specify the data type of the variable, type : Action after the name of the variable. Here is an example:\n\n```open System\nopen System.Windows.Forms\n\nlet exercise = new Form(Width = 300, Height = 80, Text = \"Exercise\")\n\nlet lblMessage = new Label(Left = 21, Top = 18, Width = 280)\n\nlet getTitle() : unit =\nlblMessage.Text <- \"Introduction to F# Programming, Second Edition\"\n\nlet title : Action = new Action(getTitle)\n\ntitle.Invoke()\n\ndo Application.Run exercise```\n\n An Action Delegate With Parameters\n\nTo support parameters passed to a delegate, the Action delegate is provided in many versions, each for a certain number of parameters. The primary version in this category has the following signature:\n\n```type Action =\ndelegate of\nobj:'T -> unit```\n\nThis signature is for a function that takes one parameter and returns nothing. As you can see, the parameter is a generic type, meaning it can be any type such as a primitive type. When calling Action, you must specify the type of the parameter. 
To do this, after Action, add the <> operator. Inside, enter the type of the parameter. Here is an example:\n\n`Action<string>()`\n\nEverything else is done as seen so far. As mentioned already, you can use the Action delegate that takes one parameter, and the parameter can be a primitive type. Here is an example:\n\n```open System\nopen System.Windows.Forms\n\nlet exercise = new Form(Width = 265, Height = 80, Text = \"Exercise\")\n\nlet lblMessage = new Label(Left = 21, Top = 18, Width = 260)\n\nlet describe str =\n    lblMessage.Text <- \"Course Name: \" + str\n\nlet desc = new Action<string>(describe)\n\ndesc.Invoke \"Enterprise Database Design\"\n\ndo Application.Run exercise```\n\nThis would produce:",
null,
"The parameter can be of a class type. Here is an example:\n\n```open System\nopen System.Windows.Forms\n\nlet exercise = new Form(Width = 200, Height = 160, Text = \"Exercise\")\n\nlet lblRadius = new Label(Left = 23, Top = 18, Width = 180)\n\nlet lblDiameter = new Label(Left = 23, Top = 43, Width = 180)\n\nlet lblCircumference = new Label(Left = 23, Top = 69, Width = 180)\n\nlet lblArea = new Label(Left = 23, Top = 96, Width = 180)\n\ntype Circle() =\nmember this.Diameter with get() = radius * 2.00\nmember this.Circumference with get() = this.Diameter * 3.14156\n\nlet describe (circle : Circle) =\nlblDiameter.Text <- \"Diameter: \" + string circle.Diameter\nlblCircumference.Text <- \"Circumference: \" + string circle.Circumference\nlblArea.Text <- \"Area: \" + string circle.Area\n\nlet round = Circle()\n\nlet result = Action<Circle>(describe)\n\nresult.Invoke round\n\ndo Application.Run exercise```\n\nThis would produce:",
null,
"The method of a class can be passed to an Action delegate. Here is an example:\n\n```open System\nopen System.Windows.Forms\n\nlet exercise = new Form(Width = 200, Height = 160, Text = \"Exercise\")\n\nlet lblEmployeeNumber = new Label(Left = 23, Top = 18, Width = 180)\n\nlet lblEmployeeName = new Label(Left = 23, Top = 43, Width = 180)\n\nlet lblEmploymentStatus = new Label(Left = 23, Top = 69, Width = 180)\n\nlet lblHourlySalary = new Label(Left = 23, Top = 96, Width = 180)\n\ntype Employee(emplNbr, fname, lname, status, salary) =\nmember val EmployeeNumber = emplNbr with get, set\nmember val FirstName = fname with get, set\nmember val LastName = lname with get, set\nmember val Status = status with get, set\nmember val HourlySalary = salary with get, set\n\nmember this.Present() : unit =\nlblEmployeeNumber.Text <- \"Employee #: \" + this.EmployeeNumber\nlblEmployeeName.Text <- \"Full Name: \" + this.FirstName + \" \" + this.LastName\nlblHourlySalary.Text <- \"Status: \" + this.Status\nlblEmploymentStatus.Text <- \"Hourly Salary: \" + (string this.HourlySalary)\n\nlet empl = Employee(\"7092-3094\", \"Rose\", \"Crittenden\", \"Full-Time\", 24.05)\n\nlet resume = new Action(empl.Present)\n\nresume.Invoke()\n\ndo Application.Run exercise```\n\nThis would produce:",
null,
"To support the parameters passed to a function, there are various versions of the Action delegate. To specify the parameters of an Action delegate, in its <> operator, enter the parameter types separated by a comma. Here is an example:\n\n`Action<int, string>()`\n\nIn the parentheses, pass the name of the function that implements the delegate."
] | [
null,
"http://visualfsharp.com/design/logo1.gif",
null,
"http://visualfsharp.com/forms/delegate3a.gif",
null,
"http://visualfsharp.com/forms/delegate3b.gif",
null,
"http://visualfsharp.com/forms/delegate3c.gif",
null,
"http://visualfsharp.com/forms/delegate3d.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.72572786,"math_prob":0.9514354,"size":6940,"snap":"2021-43-2021-49","text_gpt3_token_len":1691,"char_repetition_ratio":0.14028259,"word_repetition_ratio":0.34069982,"special_character_ratio":0.26224783,"punctuation_ratio":0.17831893,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9883497,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-27T01:18:44Z\",\"WARC-Record-ID\":\"<urn:uuid:88328376-49fe-4b80-9879-6e58e2bdd6d7>\",\"Content-Length\":\"14905\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4b3f30ab-9791-48cb-9ca1-3bd18084d9d0>\",\"WARC-Concurrent-To\":\"<urn:uuid:c942512e-fad6-4fd6-8047-6e1d8fb69ab9>\",\"WARC-IP-Address\":\"96.31.35.161\",\"WARC-Target-URI\":\"http://visualfsharp.com/delegates/action.htm\",\"WARC-Payload-Digest\":\"sha1:ISB4PM464HN4UGZJK3NFXHYBROPXEHXL\",\"WARC-Block-Digest\":\"sha1:6VZHGENONBVMXCXM47KD6WU3TOC2HNL4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323587963.12_warc_CC-MAIN-20211026231833-20211027021833-00465.warc.gz\"}"} |
https://docs.opencv.org/4.0.1/d4/d61/tutorial_warp_affine.html | [
"",
null,
"OpenCV 4.0.1 Open Source Computer Vision\nAffine Transformations\n\nPrev Tutorial: Remapping\n\nNext Tutorial: Histogram Equalization\n\n## Goal\n\nIn this tutorial you will learn how to:\n\n## Theory\n\n### What is an Affine Transformation?\n\n1. A transformation that can be expressed in the form of a matrix multiplication (linear transformation) followed by a vector addition (translation).\n2. From the above, we can use an Affine Transformation to express:\n\n1. Rotations (linear transformation)\n2. Translations (vector addition)\n3. Scale operations (linear transformation)\n\nyou can see that, in essence, an Affine Transformation represents a relation between two images.\n\n3. The usual way to represent an Affine Transformation is by using a $$2 \\times 3$$ matrix.\n\n$A = \\begin{bmatrix} a_{00} & a_{01} \\\\ a_{10} & a_{11} \\end{bmatrix}_{2 \\times 2} B = \\begin{bmatrix} b_{00} \\\\ b_{10} \\end{bmatrix}_{2 \\times 1}$\n\n$M = \\begin{bmatrix} A & B \\end{bmatrix} = \\begin{bmatrix} a_{00} & a_{01} & b_{00} \\\\ a_{10} & a_{11} & b_{10} \\end{bmatrix}_{2 \\times 3}$\n\nConsidering that we want to transform a 2D vector $$X = \\begin{bmatrix}x \\\\ y\\end{bmatrix}$$ by using $$A$$ and $$B$$, we can do the same with:\n\n$$T = A \\cdot \\begin{bmatrix}x \\\\ y\\end{bmatrix} + B$$ or $$T = M \\cdot [x, y, 1]^{T}$$\n\n$T = \\begin{bmatrix} a_{00}x + a_{01}y + b_{00} \\\\ a_{10}x + a_{11}y + b_{10} \\end{bmatrix}$\n\n### How do we get an Affine Transformation?\n\n1. We mentioned that an Affine Transformation is basically a relation between two images. The information about this relation can come, roughly, in two ways:\n1. We know both $$X$$ and T and we also know that they are related. Then our task is to find $$M$$\n2. We know $$M$$ and $$X$$. To obtain $$T$$ we only need to apply $$T = M \\cdot X$$. Our information for $$M$$ may be explicit (i.e. have the 2-by-3 matrix) or it can come as a geometric relation between points.\n2. 
Let's explain the second case (b) in a better way. Since $$M$$ relates 2 images, we can analyze the simplest case in which it relates three points in both images. Look at the figure below:",
null,
"the points 1, 2 and 3 (forming a triangle in image 1) are mapped into image 2, still forming a triangle, but now they have changed notoriously. If we find the Affine Transformation with these 3 points (you can choose them as you like), then we can apply this found relation to all the pixels in an image.\n\n## Code\n\n• What does this program do?\n• Loads an image\n• Applies an Affine Transform to the image. This transform is obtained from the relation between three points. We use the function cv::warpAffine for that purpose.\n• Applies a Rotation to the image after being transformed. This rotation is with respect to the image center\n• Waits until the user exits the program\n\n## Explanation\n\n• Load an image:\n• Affine Transform: As we explained in lines above, we need two sets of 3 points to derive the affine transform relation. Have a look:\n\nYou may want to draw these points to get a better idea on how they change. Their locations are approximately the same as the ones depicted in the example figure (in the Theory section). You may note that the size and orientation of the triangle defined by the 3 points change.\n\n• Armed with both sets of points, we calculate the Affine Transform by using OpenCV function cv::getAffineTransform :\n\nWe get a $$2 \\times 3$$ matrix as an output (in this case warp_mat)\n\n• We then apply the Affine Transform just found to the src image\n\nwith the following arguments:\n\n• src: Input image\n• warp_dst: Output image\n• warp_mat: Affine transform\n• warp_dst.size(): The desired size of the output image\n\nWe just got our first transformed image! We will display it in one bit. Before that, we also want to rotate it...\n\n• Rotate: To rotate an image, we need to know two things:\n\n1. The center with respect to which the image will rotate\n2. The angle to be rotated. In OpenCV a positive angle is counter-clockwise\n3. 
Optional: A scale factor\n\nWe define these parameters with the following snippet:\n\n• We generate the rotation matrix with the OpenCV function cv::getRotationMatrix2D , which returns a $$2 \\times 3$$ matrix (in this case rot_mat)\n• We now apply the found rotation to the output of our previous Transformation:\n• Finally, we display our results in two windows plus the original image for good measure:\n• We just have to wait until the user exits the program\n\n## Result\n\n• After compiling the code above, we can give it the path of an image as argument. For instance, for a picture like:",
null,
"after applying the first Affine Transform we obtain:",
null,
"and finally, after applying a negative rotation (remember negative means clockwise) and a scale factor, we get:",
null,
""
] | [
null,
"https://docs.opencv.org/4.0.1/opencv-logo-small.png",
null,
"https://docs.opencv.org/4.0.1/Warp_Affine_Tutorial_Theory_0.jpg",
null,
"https://docs.opencv.org/4.0.1/Warp_Affine_Tutorial_Original_Image.jpg",
null,
"https://docs.opencv.org/4.0.1/Warp_Affine_Tutorial_Result_Warp.jpg",
null,
"https://docs.opencv.org/4.0.1/Warp_Affine_Tutorial_Result_Warp_Rotate.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8244356,"math_prob":0.9989366,"size":2381,"snap":"2019-35-2019-39","text_gpt3_token_len":675,"char_repetition_ratio":0.13504417,"word_repetition_ratio":0.004830918,"special_character_ratio":0.30701387,"punctuation_ratio":0.09730849,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99959344,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-19T03:20:28Z\",\"WARC-Record-ID\":\"<urn:uuid:6ba89bb0-94c1-43df-a193-d2c5ab163411>\",\"Content-Length\":\"48380\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1d8068d2-d276-4298-9816-88c8a66e456a>\",\"WARC-Concurrent-To\":\"<urn:uuid:bbc6d78a-a733-42a7-8629-2ff39f233c2d>\",\"WARC-IP-Address\":\"207.38.86.214\",\"WARC-Target-URI\":\"https://docs.opencv.org/4.0.1/d4/d61/tutorial_warp_affine.html\",\"WARC-Payload-Digest\":\"sha1:TEKRG75WZUYEEDWKYS33JGCRJJBHJFWC\",\"WARC-Block-Digest\":\"sha1:TSKRQ55W5A2JXLU3IBWSOCEZ6LY3QUWQ\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514573415.58_warc_CC-MAIN-20190919015534-20190919041534-00020.warc.gz\"}"} |
https://torchdrug.ai/docs/api/metrics.html | [
"# torchdrug.metrics¶\n\n## Basic Metrics¶\n\n### AUROC¶\n\narea_under_roc(pred, target)[source]\n\nArea under receiver operating characteristic curve (ROC).\n\nParameters\n• pred (Tensor) – predictions of shape $$(n,)$$\n\n• target (Tensor) – binary targets of shape $$(n,)$$\n\nAUROC()\n\nalias of torchdrug.metrics.area_under_roc\n\n### AUPRC¶\n\narea_under_prc(pred, target)[source]\n\nArea under precision-recall curve (PRC).\n\nParameters\n• pred (Tensor) – predictions of shape $$(n,)$$\n\n• target (Tensor) – binary targets of shape $$(n,)$$\n\nAUPRC()\n\nalias of torchdrug.metrics.area_under_prc\n\n### R2¶\n\nr2(pred, target)[source]\n\n$$R^2$$ regression score.\n\nParameters\n• pred (Tensor) – predictions of shape $$(n,)$$\n\n• target (Tensor) – targets of shape $$(n,)$$\n\nvariadic_accuracy(input, target, size)[source]\n\nCompute classification accuracy over variadic sizes of categories.\n\nSuppose there are $$N$$ samples, and the number of categories in all samples is summed to :mathB.\n\nParameters\n• input (Tensor) – prediction of shape $$(B,)$$\n\n• target (Tensor) – target of shape $$(N,)$$. 
Each target is a relative index in a sample.\n\n• size (Tensor) – number of categories of shape $$(N,)$$\n\n## Chemical Metrics\n\n### SA\n\nSA(pred)[source]\n\nSynthetic accessibility score.\n\nParameters\n\npred (PackedMolecule) – molecules to evaluate\n\n### QED\n\nQED(pred)[source]\n\nQuantitative estimation of drug-likeness.\n\nParameters\n\npred (PackedMolecule) – molecules to evaluate\n\n### Chemical Validity\n\nchemical_validity(pred)[source]\n\nChemical validity of molecules.\n\nParameters\n\npred (PackedMolecule) – molecules to evaluate\n\n### LogP\n\nlogP(pred)[source]\n\nLogarithm of partition coefficient between octanol and water for a compound.\n\nParameters\n\npred (PackedMolecule) – molecules to evaluate\n\n### Penalized LogP\n\npenalized_logP(pred)[source]\n\nLogarithm of partition coefficient, penalized by cycle length and synthetic accessibility.\n\nParameters\n\npred (PackedMolecule) – molecules to evaluate"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.64455384,"math_prob":0.99095476,"size":1770,"snap":"2021-43-2021-49","text_gpt3_token_len":490,"char_repetition_ratio":0.14043035,"word_repetition_ratio":0.21645021,"special_character_ratio":0.23728813,"punctuation_ratio":0.12048193,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99600387,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-22T19:48:03Z\",\"WARC-Record-ID\":\"<urn:uuid:642a29c8-ba5c-4b61-8475-867a5c961f66>\",\"Content-Length\":\"25087\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:679abcf2-47f2-41be-9e63-b3450a7d2766>\",\"WARC-Concurrent-To\":\"<urn:uuid:d90f6ae5-a71f-498b-b75b-6765c5b97683>\",\"WARC-IP-Address\":\"185.199.110.153\",\"WARC-Target-URI\":\"https://torchdrug.ai/docs/api/metrics.html\",\"WARC-Payload-Digest\":\"sha1:MMQBOG6NOGYWXQCIWY4A4XPOT3GDZULU\",\"WARC-Block-Digest\":\"sha1:ZDVUOLZXWSR4LSI6R3ZCXAXBYK5OHWOF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585518.54_warc_CC-MAIN-20211022181017-20211022211017-00298.warc.gz\"}"} |
https://homework.cpm.org/category/CCI_CT/textbook/apcalc/chapter/3/lesson/3.3.3/problem/3-116 | [
"",
null,
"",
null,
"### Home > APCALC > Chapter 3 > Lesson 3.3.3 > Problem3-116\n\n3-116.\n\nFor each function below, write and evaluate a Riemann sum to calculate the area under the curve for $–2 ≤ x ≤ 1$ using $24$ left endpoint rectangles.\n\nGeneral form of a left-endpoint Riemann sum:\n\n$\\displaystyle \\sum_{i=0}^{n-1}\\Delta xf(a+\\Delta xi)$\n\n1. $f(x) = 2^x$\n\nwidth of rectangle $=\\Delta x=\\frac{b-a}{n}$\n\nwhere $a$ and $b$ are the starting and ending values and $n$ is the number of rectangles.\n\n1. $f ( x ) = \\sqrt { x + 2 }$\n\nRefer to the hints in part (a)."
] | [
null,
"https://homework.cpm.org/dist/7d633b3a30200de4995665c02bdda1b8.png",
null,
"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAfQAAABDCAYAAABqbvfzAAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAAyRpVFh0WE1MOmNvbS5hZG9iZS54bXAAAAAAADw/eHBhY2tldCBiZWdpbj0i77u/IiBpZD0iVzVNME1wQ2VoaUh6cmVTek5UY3prYzlkIj8+IDx4OnhtcG1ldGEgeG1sbnM6eD0iYWRvYmU6bnM6bWV0YS8iIHg6eG1wdGs9IkFkb2JlIFhNUCBDb3JlIDUuMC1jMDYxIDY0LjE0MDk0OSwgMjAxMC8xMi8wNy0xMDo1NzowMSAgICAgICAgIj4gPHJkZjpSREYgeG1sbnM6cmRmPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5LzAyLzIyLXJkZi1zeW50YXgtbnMjIj4gPHJkZjpEZXNjcmlwdGlvbiByZGY6YWJvdXQ9IiIgeG1sbnM6eG1wPSJodHRwOi8vbnMuYWRvYmUuY29tL3hhcC8xLjAvIiB4bWxuczp4bXBNTT0iaHR0cDovL25zLmFkb2JlLmNvbS94YXAvMS4wL21tLyIgeG1sbnM6c3RSZWY9Imh0dHA6Ly9ucy5hZG9iZS5jb20veGFwLzEuMC9zVHlwZS9SZXNvdXJjZVJlZiMiIHhtcDpDcmVhdG9yVG9vbD0iQWRvYmUgUGhvdG9zaG9wIENTNS4xIE1hY2ludG9zaCIgeG1wTU06SW5zdGFuY2VJRD0ieG1wLmlpZDo5QzA0RUVFMzVFNDExMUU1QkFCNEYxREYyQTk4OEM5NCIgeG1wTU06RG9jdW1lbnRJRD0ieG1wLmRpZDo5QzA0RUVFNDVFNDExMUU1QkFCNEYxREYyQTk4OEM5NCI+IDx4bXBNTTpEZXJpdmVkRnJvbSBzdFJlZjppbnN0YW5jZUlEPSJ4bXAuaWlkOjlDMDRFRUUxNUU0MTExRTVCQUI0RjFERjJBOTg4Qzk0IiBzdFJlZjpkb2N1bWVudElEPSJ4bXAuZGlkOjlDMDRFRUUyNUU0MTExRTVCQUI0RjFERjJBOTg4Qzk0Ii8+IDwvcmRmOkRlc2NyaXB0aW9uPiA8L3JkZjpSREY+IDwveDp4bXBtZXRhPiA8P3hwYWNrZXQgZW5kPSJyIj8+RSTQtAAAG9JJREFUeNrsXQmYXEW1Pj09PVtmJjsBDGFXiCKKIBJ2REEQQdaARBBiFFRAnrIoyhqCgLwnEfEpPMAgggsGJG7w2MMuiuwkJDGQINmTycxklu62/r5/0ZWaur3M9GQCc/7vO1/fvrfuvXXr1q3/nFOnqhLZbFYUCoVCoVC8u1GlRaBQKBQKhRK6QqFQKBQKJXSFQqFQKBRK6AqFQqFQKJTQFQqFQqFQQlcoFAqFQqGErlAoFAqFonKoLveE2jM+uTHk+zNGjjZyj5EXqJhgQH3KyClGOo1MNbK2vzOSTWakbmWTjHp+69y2QqFQKBQW85+avvES+kaCKUaOMHK8kcWS9zQkjYzj9l1Gnuj3nCSykuxIaa1VKBQKxbvLQt9I0Gjk30YehtPA2d9tZJGRPYxs0++EnjCaRFe1NC4emSN2hUKhUCiU0MtDjZE3jRwXODaRhP5hI7f1ZyayVRmpWdMoqbb63LZCoVAoFAOFd2tQHHzcWxppChwbxt89+zsTWWOV161okkQ6oTVJoVAoFErovQA8C6OMjA0csy74nSXfn155GA6vXlcj9cuHqnWuUCgUCiX0XqDByOiIUnNu9ThCh/W+T79Z54bEa1c1SnVbjdnW/nOFQqFQKKGXi/cbeR+3Px44PtrZPrw/M1K/vDlSKxQKhUKhUEIvG/tK1IcO7CE9KXVn/v7ZyAFGNqm4dY6hautqpGZNg7rbFQqFQqGE3sv8gtDXOeTt9pMPN/Ixh9CNCS2HVJzQq7JSu3qIJDtTaqErFAqFQgm9FwBZY/z52
0ZWS9Sfvrdz/AjHeke6RyWaOa6iwJBzuNsTyuYKhUKhUELvFdAn/rREQ9NeN/KkkaN4bAQJ/x7+hy/8RhL+DpVk86p0taRadOy5QqFQKJTQe4NtSNog8aESzdf+RyOfolX+ZSMPSDRbHIBhbXcaaTcyuVKZQP95am2dVHelctsKhUKhUAxGQoeP+hoj1xu5yciFZZwLUv6NRIuwWMKeLdGscRdLFN3+O8lHuY800mbkdiOnSn7CmT4Sukj9imZJZHShOoVCoVAMXkLH/bBc2ywj5xg5wcjnSjgP4803owU+kvsQ8PaskYeMnGbkCu6vd44D15LMT6yIRmLUiZq19WqdKxQKhWJQE/q2Eo0hR7/3GCMLJFoGddciefymkR/zfyN/U7TO20niNhjOTizTwN9/GPmrkfMcsu+ddV6VkVR7nVS31mn/uUKhUCgGNaGDyP9l5F6J3OMdRr5n5FwjH4w55wwjrxj5G/+787dfQwsd/eZf5b46z1IHLqUicVLfzHOR6vYaqepOas1RKBQKxaAldIwXR7/3XIn6wVskcp+D4NEHfomRXbxzDpJorPkPnX2WsDHm/FEeQ/Db13j9as9CF6bDuPSLJLygS4xFns1Z4lYy1encdK+JjA5XUygUCsXgJfQvGblDIrc7VkI71sh2Rg418gKtdFjrdknUCUYmSdTX3u1c533O9uP8vZrKAYLfugKEDpwvkZv/nFIzjGj2mtUNuRnhILWrhkhVV1LXPlcoFArFRocNtR76YUbeMrKElvqJJGlMDvNFWta3GDmGFjf2wa89xchSI0NoqeM6n3KuO4q//5Ro7fPvS34WOZ/Q0ZeO6PoLmPblYpke8crmhtRr1198pSohmaT2nysUCoVi8BH6hySa8AWBaacbSUvUdw7vAJjyK0a+bmSakVVGWiVykSPgDUPVOmlZg/zv4q+d3rXOuQ/c9kdKNFY9ROjAd5nmBiN7SX4IXBCIZI/c7vlkiYS62xUKxYbH/KemayEoCqI/Xe4YKnYKyXO8kZslmhBmUyM/kshNjpXTrpNoARUExX2e5yVI7BCYwwh8m0kLf0vnHm7g22u00LMFCH0l8zSBaRUKhUKhUAvdA4aLoX97FxL19iTVZ0nMcHnDHf5Vh4hB1KOYbpGRtRJN07o/rfKmInm8yMhEEjWC69p4D1x/SMw5mF3uKp77dyN3azVQKBQKhRJ6HqMlH8X+iJHlsn4wW7kAIY+k9b41lYQPkPDx20zLf3zM+bDkEdmO/vUXjbxqZB6tfATGITjvVxK53v+uVUGhUCgUg4rQs15AWCL9jtf+TUrkMM86vyGgfzr3E9sn3WrObzWJFprtZ5z9uOHmRnYzcqCR/WJIHX3wB1GEOYGSgWC4xySKuMc1fm9kHyMLtTooFAqFYtAQet2yJvJxQjLVGelsbn9nnDb25Qg+QzLPRPSbSaZzc59Ho72iKPFkR7VUmbSZmgJGfO787DtR5bx+xlEefk/ixopqCKA7TOJd7Ql6EPaW/JKrrUyPceyH0HpXKBQKheK9T+gjX9jCsZWz0l3XJV2N7dLZtC43RrtueWN+nXCQfqpb2ke1SMfwVknXduUixhsXDZfGN0fkyD+TSsdb6WZ/d32ndAxtM+SfkM7GDllnrgXNAJO7MPocUfD/TxkvmcRZ5nqnSmkBf5b8ETX/oERD2u7UaqFQKBSK9zyh+y736vaUVLfVSMPbCE5ff4hXDu01UruqIWfNg5xxvHZ1Q2TVGx5PdhbOAqZaradXAOfAI9A+eo20jVljlIeGnMcAln7HsFbpauh8KV3XNaW7oeN2c+1rEunEeEPuXQVvkIAHAHnOol/+DpN+lsnYmWb/v8p1Xkjk1u/QaqVQKBSKjZ7QexB8jsCzBQZ0g+SjrVRrtG4KplB1jPBid3jnfCA3c1tLvQxZNCJH9u+wqSF2XCpd0w3Sv79t9JqPdA5vHZdOdVfB2x6arjVrlIzkulR2yOLmNnMcD5HoGtIxdN3Il
rebFozOXb+HghKPL0i0UMxtWq0UCoVC8a4jdAJ907tLNIkMItPB2JgZDtHjz5DofHLEvdFv3SSFJ3gBE6+QaJz569ZDUN2Rst6CKl5naBb6QXcyR+5GMplU98PrRrQuXjt2ec6yr0onc3ey+WhcOFIaI8XgIJuPbFUmaxSOj1V1VafM9bHe+vz1lICsYf2wEgL3va7aolAoFIp3JaFjKVPMwY7JWjaPSYOo8usoLuCixpKoW5R4Lyzmgrnb/8fIn5z1yJO8TjThDAztZHQskU7OHvLvofvVL2/sXrPlMml934qc6z/VWifD5mwqtSuHIP0hhsBnradBGOKnsnCyT+gFACVG54RVKBQKxYCgLzPFYeKY+yUKJNu8QLodSbhYLrXZNXYlmgimVMCC/rREE8P8oKTrJLJ7GgI/VjJVMmzupjLipbHSvHCUjP77VjkyN6RdY6z1qYHz7FaXVhGFQqFQvJcJHdO3wqrdrYxzMIf6LVIZtzQmhil16taLDUE3od8ervjm18fkoutpgcOz8BGtBgqFQqEYrIR+JS30cnGERCupVQJYaAV99sVmo8MSrWfkTHlD4jkijyzwkfQuKBQKhUIxKAkds7JNjDn2N4lWTcPCK/MKWNcIT0/HHEcA3F8kWp0NU7c+GZMO1zi1xDz/l0TLtrr4tqy/trpCoVAoFO9a9CYoDv3YqcB+zNp2vOTHYWNd8wckmnvdBf7vIdHCLCE8Z+RgT+k4wciNJHEXmLK1toByYDGc1vgU/se88F/T169QKBSKwWyhfzSwL03L3J1U5d8S9XPPpcyhzCepJ0pUMtDZfatEAXg+xkq03Gop0eUnG9mV25dIFKGvUCgUCsWgtdBDEe1wky8I7P+NkT95+0DkiB6vr0D+s5JfBqYY4FU4z8i1Ro7ZCN8FFIzNJD+Gvz2QppZeiqxXnp0SnqEuxXJexzSFUMf0uG9cXEKC10tKgWV3nGtUM72ftkviZ9SrYV46me+4Z+qKKSMAK/8hRgLL8S6SwvMcWDQzvascJkuopwm+szYqyA2SH3kRum89v6EE33NrjKLdwLy0Ffh2G4qUg32uVon3YtWxXrWXUEd8FCqftTH765n3cuqEC7zXUczvGyW8W5TzFrwvFmda1k/5wn0wEqelQJ7qWX/XlHC9Jr6z9hLrr0LRKws9tPhJS4FKutaTFjbUcSQcIhO48vcP7F9sZHWJhA58zshvpW/D9SoNNFAIMkRXQ27yHInWkL+ADa2LqTyGCXv+6ciz9GLs7aWfxLT3s4GIAxq8x5n2oALpQCB38X7PeXlw5bNM/2mmfdY59jz/38HjPr7BfFwVk4ejeXxG4NhHeN2XJJr/AOWJlfWOK/IO7D0v8fbv4z0Xnvlv3vNAfsf07+exh6ic+cR5Ae9jPVbYvijwbhDvMZv32jMmz0fy/FsK1P+TmZ9rCjz7VF7nm72ou7vElAfK6RGWq0/4tzL9PwJ1Au/04zH3QnDrLyRaCvkVvtvZRd7tRL7/13gOzv2l9OwGRPndXCBfuO8nipSFfbffKpBmBtNMLXKtk5gOsUTDlKYU/WmhZ2MIvbNCefqQ00BmaG3tE9Nozab2HCLoNY5G7Fp3owNp0T0wpgzFoFLYjB6Mnfn/VeYRDc6lEi0aM9GxEDZhwybcZxeoBfHbYMVT2ABZLX8bCqam/WlMPr4i+eF7Q4rkGaMbtuS76QqUWcJpxOud/HY69cfm91iS6IWedY38xgUsDuXxVd7+/VlvhrNsXmR5oSG+nedMi7EyJ/P4ZCoSqx2PyFjHE5Ry6ppb31c639P2tIirPCX4VxKtBgjMo/W1PZ/9Uzy2wrnODvRWYA6HCQEr3JbDigIWHIJGtyWxX0GPgA+U89Ysq3JRRyXGWrJZx1BA3vYyciiVsLWO8rgd03YG6vBRVODvcu6D7+MevosMFTYowntQcPw7Xt6+4xDnElrmyOsJLG8onU85dXIrJ1+2TXHzdQzzNTNG0Z1MRWwyvYAhq
34sy+Ub/BbfiCnT8/jemjYy40PxHrTQQ+iqoFtoNK2PI9kQ7BtDtLDkf+6QiA806D8q4X7PsdFMDED5X83GaIFEa7uPpxxPUsAwv9O9cgZ+xgZ/R/4iNuA2ktN0yc++57pZz2BjEfIQuKMFisUjWCI7xcmDK+PZ+LrXQgO8k5Nmd8fC/j6f3ffQxE3qkw4QKkj8Jv7+kff6MJXDHzLNZVSQfNgpi4VKneuheJjPY8t5MvfPoQJkn/dwrx52eN/Dt0jYq1incc4H+X6XkbAv9JTmDsfrcEGJ5eBiJz4b0OwoE6FvN84zVgz2/UKp2I1ltAOf78tU9A/y6rDN77leHd6dym09CXGYo1TdSDKczfLYieV3GdOc79WhfRwyv5RpbZ14gG3M9Z4HzObrvJh81Xn58pXJcY6XZq8i3w6I+rSYNJ93PAgdou52xQAQ+kBgKt1icV6GIbRKFhS5DhqDtwcg/2igPsftMyVa/jXDjxgW5ZU8dnbAbbmazzWPv3B7TqIS00wLxMeOtH58wHrbtBf5X+TkwZW5bMh90niNx+fTMsJ8BLMc5aAv+CS9Bkv4PHNYlktIpo+wrp8ZOHcij83l/0nOsTbut+X8hkN+9nlej7G0xCGkE7l9Cb0IHSyTu0ggQqKPc69+m5ZoOTiGHoV5zO+kfqzLackHvM7n9g2S78I4WnpOKLXUq8OoEyfxnYEcd2G63aiItbKePM93i/7w7xm5m+lOdK5tn/XPVBiX8ZyX6alq4/UPCTwL7v8vL1+TuB+KcqhLwN77Nf6eUEKZTQ54C1EPz1JaUgw0oW/oRUlg2V5cJE2t89HH4T5q300DUPZoHBpp3TweOD6dpPftwHtKxlhLL3M7zl39TU8Bgqvwq45VWA7K6a6B5VoT2P9bx5rsSx3awfG2LA0cn0Kiv9Xb30yLKMuyWUhLb8uY+6Sc56ktMW9Qlmx/+gOB4w+R3DeR9fvdq0g8C3jfH5dxT6Q71lEGXqVC8MF+qstx5fG04wWqLaH+LCVxAkMdi1eoWL0WOOde/m7r7NveO+biLXrAzohRxEL5Wu7UK1/p2oyKwTpes4WK+ogSPJH+PBoHSnwMgULRL4Qeck03SnhseiXRzgbxMDZSxQjIRr+jEX8wcBxW0jkFnqm/Yee1XynhaG7sn0Fr3Y+E7o7xSNh+8IXesQdo2XzMs0pgOW1HC/8fZea/EjETbzl5b+jDdWwjG+dpQUAUgsf+GmhA4SlBlwC6CeBih2v1iAq+5yaSWafk+9r9et1CIqnzvrMsLbZVtCi/U+I94fL9AOsBvAD3U2Hqr9EdWQlH2u/rELVfx0PR+weQjLO08oHhzjUk5juxdci2aU1F6sPdVJifCRwL5etAyceCvOwd+yy/ZVjyCGJDtwCi8A8t0Hb+kt/w1x3FxSrcwEyJjw1SKCpiZbkNUKjRapJ8UE9fAGviSoeQYXku4wf+ai8UljQVgNmelfgTiSJJB7rsu6T8/stNaNW6VuC32OgsCxAXgv4w8c+1THc3G3jr3kMU9GllNN7AFWwwk16D9b2YhlJilCrrceiLhZ4sUDcLwbpGf+80pCdy/3SpzOp5SckPLQzFBXQ7+xMBJe0JiVzXeEfnUvF4usg9j3eIK81fBGIhIvxyqVwAq1uXMT/FWueZP8P8WgLzyxJW7OZMm6FX5EQqP4gHedF7t+uKKJZJpwxD9WFXfjdZJ13I6j/Cy9dYenf8fPllfadThw5mHZoRk2d8n2OoKEyi9wWWOUZ9wN3/fxLFZWj/uaLfCT2k9Q7nR+AT+v5s4NNO5QSp3sCPI4TFrNCVBAgGQTBnOhbs1AEue7dhKddDcDLFByL7vyw9o5mHsnFBfy2Gtu1GBeyjtDhmUukpB3EL8/y0DEJ3yyJbobIsFWioD2KjbUdVII5hCZ9tl148R2/ec7H3D+/Xj0jGu7Px372AEjhC8gFwv+bvoxL1Ce9A6/3+CtdlfP+PxRybwW/Px3HSc8hZG7/9s5xyK
/ZuE166uHNQhhO8c690lA6LYwKeDHjIEIB7tqeYjGd5tku+L38W0+9PBXtujBJyNQkdVvr/UuGCAYKA1/kyMF5DxSAk9BcC+6C9fs2z8rDvssBHBFxVwPqp7qdnRV6OYkOOhV2WD3DZ9+WDfZtKSZKNACwjuPxulsi1HipTuG2voyJzjuOt+G82pMky84358Z+UvFswUaB+FPKgDFRZHk6yhJvddjesIrmfxkb9mQrlLdGH57CW4mkkzY+TBBbFXOMztEThfXrEsW7RdQOX/cR+IPRuWq7dfKcZEtmdjlLhA11hiB9AVx2i4D9EMjy1l+82UeQcxGu8QuPCkm1XgXwlWc7IF0ZOTAmktYGHs0jCwJtMj2NHSj641QW6l+5gvUM3GQJz0RXWQkLfSqlJsaEI/a8kR/+jQXAV+o7gEkRf4BdjyBxE9KCEg6T6E8v4cR0vPYOjBgJtzsddI4XXhk94FsgvJN//Xw5gZaCf7mj+XyDR+OjeAIQxu49lYPu+OyTvUrWKRZzClw4oA+scS7FURcK6SuGh2JPfQkbyoyKg/F1c5L2Ugg5aZPUSjhOwM9+JxA/Vs+WNbo6LJBri9ouYdLYb4SXvuawCcBjLaWUF6/JKWqpryzgHwai3OSQICxf90RjG+ZyTrt3xMoUwxClnW286vPplFVeLmwsQ+h+db+JNtmeH0ZvldtHVOJb8K3z+JOuntcqhPP1Qes7SZ2daRJ5ukXyA73S2Ux9QalL0Br2xkBBA9ZeYY0fzY/lpDJkDP6FLKjUAz3ujQ2YDjVX8qEfHNFZoQOACnik9I2t7a9kulfUnl7mOjXBvrldXgTKw0elLnEbYTuoyJuacTZ3ycz0WwLiYc6ZQibya/3eSfDQxJtV5lMdhrf+A+xE1vW8FnnEFSQllHJo2eRRJqU16Dvfzgbw9zXNs95Gr6CHP+3H7C95zXeeU38H94G0q1zho8Ej0CSo2/ph7G/W+eUybMc6rD1lHWdk65t7betcOKQhW6XhM8rP8uXBHDZxHb8iD/D2f+6Gc7FqgDOyshlYpvVYpSbGhCd0O8elNANzj1EIH0ipevJGU/Rx6K+okP3TMfS/Q2g8gma8ONKC9xfW0gEAMN/XhOi1lpE1Lz0AsDEeyE7Xc5+x/mL8TAoQKIjuJ2+5qfU84SpAfXTyWFu2+TkNvXaVv0Br7jSP4/6pDin3FUsfiDAUens73PUcKj2e3jf43aFmGukg+T6JEEOTtged6vsBztffxOftSJ9P0PgBwU3/CMyDWkZxPCNSHL3h1QBzP0XHSc6w3vAC7sx17rEi+YO3b2QWP8IwU6+GZS0+DW9b4P9/zBMV5by6nV+g6Cfe3KxQlo7f91a+wgt9awCoKWfbHSt9dmO8VrGUjdj01fFikGGJUS9I6hA3Kd6Uy0dYWi9lgurOR9QYns4FLBOoUvAovelb1+ZJ3PW5FTwkaW7g1f+aR80zWL/R7wmWJvkaMrf86FYGF9LZYPMWG9Bg2pldTYRlH5RPW3WtsNF1X6eUSng4XZT+Lv2OkbxMPZfme9yPBQIGzUd/HOXkBcZQy2uFJWuoXBAh1IrevlfA0txNIdgfwHSxwjkHhCc15kKLy9Eg/fw/38N1/gs/2WYcwf05FBvVkRyp9GP+Ncd8Y5vaW5GeNBG6gVwZu9XtZHkizN89JUZl9roR8WSt9Ar/FQ6lkH+5Y578LnIeI/RlUsnBea8z1URf+UKaCrFBUlNCFHzg+kMvYKMW5YGHJ3yzR0JvVXgPUHEhf7rKmdpUjH0PLuEbcilH93c8PMkFUMmaz+hLFAtbk2bJ+P7V1B5Y6ZrsupkxDQ4CaS3hmt6xPLZBuCQndXmszkqePZ+ideMuziibz3EMCxPQyFZ63A+ckaeH5i6y8SOsObtmjqBRkJD9TnY+H+Qyb0AK8xiub5hiLtNqpey4xoovqFF7ncIcMrKcDBHaHsy/pvOOQJY5vDv26OzvvAwqDndp2ZsxzQcnBzHbbsq5d6
NxnP8m7631MjyF06wIfVoa3z9az2oCVPo1K7aFU6OxznMO6jzI8V9aPTH+ZyqXr3XiLRHozy+hG716/ooLgoqlIvv7A+ngg68WmrE9xAYb30usxjnVyRoF7rIkp16GiY9EVG4jQhZYSgt8QbIbpRnciQWXo9kODfZ/0nOjEupum8eNIO/mZ1wt33Q9oSaWdRnCJlD4U6kESjjseGNd4dgO8g8tpBdg5vrtpOaCBn+OlvZ3l83AZStc0elSKWZFX0QouZLV08nqjC3gNkpJ3f2Jq3qmyflBQgiSGYw9IeEz0clpoIL6DmS8ohugT/rX07IKwjeJRJDpEem9BpegR75x2PkMhFze8J6eTIBd75DGNhNEZ4/24hPfw83gTlbOJJJkEy+D2wPtZRpJHw7405tuBBXi8971cwW8t7n2jfqPvfU/nPFiIr0p+oZQQad8Xc715VC7WluF5g7W8jazvIreAgnUWyTLlKaCnsqxQJ7Zk+T7EfS0xyuIEltFeJMc3SMx/jsnXdgXydSYV03rWtWl8f3HBhVA4v0KPwhpHMYIy9XiRMprH72ZlActeoehpcWWz5Q3/3WrX0wZ7kUmiKjjC62w25NdrtVIoFJXG/KemayEo+tVCH3x0noiN/XlaCg87UigUCoVi47HQFQqFQqFQbHzQgAuFQqFQKJTQFQqFQqFQKKErFAqFQqGoCP4jwADQNvw20jA5ogAAAABJRU5ErkJggg==",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7379584,"math_prob":0.9999939,"size":337,"snap":"2020-34-2020-40","text_gpt3_token_len":91,"char_repetition_ratio":0.11411411,"word_repetition_ratio":0.0,"special_character_ratio":0.26409495,"punctuation_ratio":0.1,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000069,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-11T10:30:09Z\",\"WARC-Record-ID\":\"<urn:uuid:441049d9-0349-4d9c-99ec-2cbad554b3db>\",\"Content-Length\":\"40400\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:db9bfcba-7e24-41a5-8e77-53251ba6518f>\",\"WARC-Concurrent-To\":\"<urn:uuid:e437fa97-2ecf-4e46-892c-96558ee16aae>\",\"WARC-IP-Address\":\"172.67.70.60\",\"WARC-Target-URI\":\"https://homework.cpm.org/category/CCI_CT/textbook/apcalc/chapter/3/lesson/3.3.3/problem/3-116\",\"WARC-Payload-Digest\":\"sha1:I2KIRUWKAHYWSLKEF36LIOZ43CRMB4MQ\",\"WARC-Block-Digest\":\"sha1:MAFYKZIA2EJ7FYCWACLHQIH6BESOIU6O\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439738746.41_warc_CC-MAIN-20200811090050-20200811120050-00177.warc.gz\"}"} |
https://foofox.furry.nz/2017/02/04/detecting-cum-with-voltammetry/ | [
"# Detecting cum with voltammetry\n\nAfter my little stint of research, I set about putting the theory into practice. Before I go into it, here is a video of detection working (warning, contains bodily fluids).\n\nWarning, this video contains bodily fluids.\n\nFluids in order: water, lube, salt water, urine, semen.\n\nMy laptop keyboard still has suspicious fluids on it…\n\nPlease note the above video is a best case scenario and a lot of testing is still required!\n\nMy first attempt was to just pass 5v into various samples and see what they did. I used the #2 digital output pin on the Arduino to control whether or not any power was sent to the sample and a 10 kΩ shunt resistor so I could measure the current across the sample by reading the A0 analog in for the Arduino. I measured the current once every 10 ms for 2 seconds, then rested the sample for 2 seconds.\n\nThe results were encouraging.\n\nThe graph below shows one round of stimulation of a sample of water, the orange line represents voltage in (5v) and the blue line is the current measurement. You can see I rest the sample between trials for 2 seconds (important).\n\nThe following shows the change in current measured in a sample of cum.\n\nSo, there’s definitely something happening, the current across the sample is changing for the duration of the stimulation (2 seconds), but how to get a more meaningful comparison? I knew that I needed at least 1.23 v to electrolise water into into oxygen and hydrogen, but I didn’t want to use much more than that. I used a simple voltage divider made from two 22 k Ω resistors, resulting in something close to 1.8 v across the sample, which seemed to work OK.\n\nAfter taking many samples and calculating an average for each sample, I could say with confidence that on average, a 2 second stimulation of the various samples looks like this:\n\nNow, it looks pretty cut & dry: cum has on average a much much lower final current after 2 seconds. 
Why not just measure the current value after 2 seconds of stimulation and, if it’s below a threshold (say 35 in the example above), call it cum? Well, the problem is that this is an average; the real data is a whole lot messier:\n\nThe fact that cum has a very distinctive “shape” on the average graph gave me the encouragement to try harder to untangle the signal from the noise.\n\nTo overcome the variability in the data I did two things: firstly, I took a 5-point moving average for all points, to smooth them out a little. I then expressed every set of 5 points as a ratio of the first 5 points. So, to get the value for point 8, I would calculate Average(Points(8-12))/Average(Points(1-5)), which is actually the same as Sum(Points(8-12))/Sum(Points(1-5)).\n\nThis resulted in the following graph (once again, taken as an average):\n\nThat’s all very well, but this is still an average; how do we handle the fact that each trial (each time I stimulate a sample) may have variability to it? Looking at the minimums and maximums rather than the averages, we get this:\n\nIf you look carefully, you can see that at about 10 or 15 (between 100 and 150 ms), there’s a promising gap between all the other minimums and the maximum value for cum. I focused my attention on this ratio.\n\nSo, that’s it, right? If the ratio between the first 5 points and points 10-15 is less than 0.8, we have cum! Well, yes and no. As I said before, by passing a current through the sample, we are actually changing it, so we need to consider how the nature of the sample changes over time. Also, I wanted to speed up the time taken to get a result: up until now, I had been stimulating for 2 seconds and resting for 2 seconds, which was quite slow for the detection I wanted.\n\nSo, I sped it up, this time allowing the sample to rest for only 100 ms and stimulating for only 200 ms. I then recorded the ratio and plotted it over time. 
So, the graph below is showing many hundreds of trials (a cycle of stimulation and rest for the sample), where the x-axis is the trial number."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9528108,"math_prob":0.95513654,"size":4591,"snap":"2019-51-2020-05","text_gpt3_token_len":1055,"char_repetition_ratio":0.11968607,"word_repetition_ratio":0.0024096386,"special_character_ratio":0.23045088,"punctuation_ratio":0.10662526,"nsfw_num_words":3,"has_unicode_error":false,"math_prob_llama3":0.9598452,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-21T12:35:39Z\",\"WARC-Record-ID\":\"<urn:uuid:f49742d4-f00c-415c-abbe-846f6004d6cd>\",\"Content-Length\":\"31648\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a96025b4-459e-4f59-805e-cdc0c59aa299>\",\"WARC-Concurrent-To\":\"<urn:uuid:2644de65-6e81-4f10-a5f1-812546c2c93a>\",\"WARC-IP-Address\":\"208.113.160.99\",\"WARC-Target-URI\":\"https://foofox.furry.nz/2017/02/04/detecting-cum-with-voltammetry/\",\"WARC-Payload-Digest\":\"sha1:R4XBPNKUS6P7VNPHH3UJ5AQ3ML7W3K6G\",\"WARC-Block-Digest\":\"sha1:ANBZ6REOCBOCWCBBWY2LX2JJ4CBQBNOX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250603761.28_warc_CC-MAIN-20200121103642-20200121132642-00065.warc.gz\"}"} |
https://www.tutorialspoint.com/check-if-binary-string-multiple-of-3-using-dfa-in-python
# Check if binary string multiple of 3 using DFA in Python

Suppose we have an array n that represents the binary representation of a number. We have to check whether that number is divisible by three, using a Deterministic Finite Automaton (DFA).

So, if the input is like n = [1, 1, 0, 0] (binary of 12), then the output will be True.

To solve this, we can construct a DFA like below −

[Figure: a three-state DFA, one state per remainder (0, 1, 2) modulo 3]
The approach is simple: when a number is divisible by 3, the remainder is 0; otherwise the remainder is 1 or 2. There is one state for each of these three remainders. The initial state is also the final state, because a remainder of 0 means the number is divisible.

To solve this, we will follow these steps −

- dfa_state := 0
- for i in range 0 to size of nums - 1, do
   - digit := nums[i]
   - if dfa_state is 0, then
      - if digit is same as 1, then
         - dfa_state := 1
   - otherwise when dfa_state is 1, then
      - if digit is same as 0, then
         - dfa_state := 2
      - otherwise,
         - dfa_state := 0
   - otherwise when dfa_state is 2, then
      - if digit is same as 0, then
         - dfa_state := 1
- if dfa_state is 0, then
   - return True
- return False

Let us see the following implementation to get a better understanding −

## Example

```python
def solve(nums):
    dfa_state = 0
    for i in range(len(nums)):
        digit = nums[i]
        if dfa_state == 0:
            if digit == 1:
                dfa_state = 1
        elif dfa_state == 1:
            if digit == 0:
                dfa_state = 2
            else:
                dfa_state = 0
        elif dfa_state == 2:
            if digit == 0:
                dfa_state = 1
    if dfa_state == 0:
        return True
    return False

n = [1, 1, 0, 0]
print(solve(n))
```

## Input

[1, 1, 0, 0]

## Output

True
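As a cross-check of the DFA logic (the helper name below is ours, not from the tutorial): reading one bit b from the most significant side updates the remainder as r → (2r + b) mod 3, which is exactly what the three DFA states encode.

```python
def divisible_by_3(bits):
    """Track the remainder of the prefix read so far; DFA state k
    corresponds to remainder k."""
    r = 0
    for b in bits:
        r = (2 * r + b) % 3  # appending a bit doubles the value and adds b
    return r == 0

print(divisible_by_3([1, 1, 0, 0]))  # 12 -> True
```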
https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016.html
Open Access. Acta Acustica, Volume 4, Number 6 (2020), Article 27. Musical Acoustics. https://doi.org/10.1051/aacus/2020026. Published 11 December 2020.
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

## 1 Introduction

An important goal of the acoustics of wind instruments is to understand key components of intonation and of the ease of playing. From the physics modelling point of view, it is interesting to study the main variables that control the playing frequency (for intonation) and the minimum mouth pressure needed to achieve auto-oscillation (for ease of playing). It is assumed that part of a musician's judgement of the ease of playing of a note is inversely related to the sounding resistance represented by the threshold blowing pressure. Support for this hypothesis is offered by measurements on the performing properties of saxophone reeds: a significant correlation was found between the soft-hard scale on which the sounding resistance of different reeds was judged by saxophonists and the threshold pressures measured in the mouths of the performers. A useful overview of the acoustics of reed and lip wind instruments can be found in textbooks.

It is often commented that the flaring bores of brass instruments are designed so that the input impedance peaks are, as close as possible, harmonically related. While this alignment is said to be important for intonation, it is also likely to determine the oscillation threshold and therefore to improve the ease of playing. Here, the necessity of an alignment in a harmonic series is called the Bouasse-Benade prescription, after what Benade wrote in his famous book: The usefulness of the harmonically related air column resonances in fostering stable oscillations sustained by a reed-valve was first pointed out by the French physicist Henri Bouasse in his book "Instruments à vents".
In order to illustrate this prescription, a horn was designed to provide an air column whose resonance frequencies (frequencies of maximum input impedance) were chosen to avoid all possible integer relations between them, called a "tacet horn". The purpose of this instrument is to deliberately make the conditions for oscillation unfavourable.

The effect of inharmonicity of the two first resonance frequencies on both tone colour and ease of playing has been examined experimentally on alto saxophone fingerings during a project for the design of microinterval systems. A decrease in inharmonicity obtained by extending the bore of the Tintignac carnyx improves its ease of playing [10, 11]. The harmonicity of resonances is also necessary for proper intonation when a reed instrument is expected to play in upper registers, and it is often used as a target in optimisation problems. This paper focuses on the assessment of the Bouasse-Benade prescription on a model of reed musical instruments. The goal is to study the influence of inharmonicity on the playing frequency and on the minimum mouth pressure required to achieve auto-oscillation in the first register. However, it appears that this mouth pressure cannot always be determined by a study of small amplitude oscillations alone. On the contrary, a complete bifurcation diagram, including all periodic branches with the blowing pressure as the continuation parameter, needs to be computed. To achieve this, a simplified model of a reed instrument is derived from a generic model that is valid for both reed and brass instruments. The reed is modeled as a simple spring, only two acoustic resonances are taken into account [18, 19], and the nonlinear coupling between the reed and the acoustic resonances through the incoming flow is reduced to a polynomial expansion (Kergomard). This may be considered as the simplest model of reed instruments that includes inharmonicity.
Furthermore, this simple model helps isolate the effects of the main parameters without the added complications that arise when considering real instruments.

The inharmonicity Inh between the acoustic resonances fres1 and fres2 is defined as the deviation from harmonicity: Inh = fres2/(2fres1) − 1. Therefore, once the first resonance frequency fres1 and the inharmonicity Inh are known, the second resonance frequency is fixed through the relation fres2 = 2fres1(1 + Inh). Note that if the resonances are exactly harmonic (Inh = 0), the problem can be solved analytically, and two bifurcation diagrams have been obtained in that case (Figs. 8 and 10 of the cited work).

In Section 2 of this paper the theoretical background, and particularly the equations of the elementary model of reed instruments, are briefly presented. The behaviour of the elementary model at the stability threshold of the equilibrium position, and the nature of the Hopf bifurcations, are discussed in the second part of this section. Section 3 documents the procedure used to calculate bifurcation diagrams using a continuation method, after having reformulated the two equations of the model as a set of four first-order ODEs. The influence of inharmonicity on the bifurcation diagrams is shown and discussed in Section 4. The section is divided into two parts: the first study assumes that the amplitude of the first resonance is larger than that of the second (Z1 > Z2), for which preliminary results have already been presented, and in the second study the opposite condition (Z2 > Z1) is considered. In order to link the Bouasse-Benade prescription to the ease of playing experienced by musicians, the bifurcation diagrams are analysed with respect to the minimal mouth pressure necessary to achieve oscillation.
The effect of inharmonicity is also considered.

## 2 Theoretical background

### 2.1 Elementary acoustical model

The model presented and used in the present publication is labeled as elementary because a number of major simplifications are made in deriving it (see for example Hirschberg in [20, 22]). The vibrating reeds or lips are modeled as a linear one-degree-of-freedom oscillator. The upstream resonances of the player's windway are neglected, as is nonlinear propagation of sound in the air column of the instrument. Wall vibrations are also ignored. Despite these simplifications, the elementary model is capable of reproducing many of the important aspects of performances by human players on realistic reed and brass instruments (see [4, 5]). The model is based on a set of three equations, which have to be solved simultaneously to predict the nature of the sound radiated by the instrument. These three constituent equations are presented hereafter. Besides the control parameters defining the embouchure of the player, including the reed or lip parameters and the mouth pressure Pm, and the input impedance of the wind instrument, there are three variables in the forthcoming set of three equations, all functions of time t: h(t), the reed or lip-opening height; p(t), the pressure in the mouthpiece of the instrument; and u(t), the volume flow entering the instrument.

In order to describe the vibrating reeds or lips, the first of the three constituent equations of the elementary model is:

$$\frac{\mathrm{d}^2 h}{\mathrm{d}t^2} + \frac{\omega_r}{Q_r}\frac{\mathrm{d}h}{\mathrm{d}t} + \omega_r^2\,\big(h(t) - h_0\big) = \frac{p(t) - P_m}{\mu}. \tag{1}$$

In this equation, which describes the reeds or lips as a one-degree-of-freedom (1DOF) mechanical oscillator, the symbols ωr, Qr, h0 and μ represent the angular reed resonance frequency, the quality factor of the reed resonance, the value of the reed or lip-opening height at rest, and the effective mass per unit area of the reed or lips, respectively. These quantities are parameters of the model, which are either constant (in a stable note) or changing slowly in a prescribed way (in a music performance). Note that if μ is positive, an increase of the pressure difference Pm − p(t) implies a closing of the reed or lip aperture. This is the "inward-striking" model, used mainly for reed instruments. If μ is negative, an increase of the pressure difference implies an opening of the reed or lip aperture. This is the "outward-striking" model, used preferably for brass (lip reed) instruments.

The second constituent equation describes the relationship between pressure and flow velocity in the reed channel:
$$u(t) = \mathrm{sign}\big(P_m - p(t)\big)\, w\,\big[h(t)\big]^{+} \sqrt{\frac{2\,\big|P_m - p(t)\big|}{\rho}}, \tag{2}$$

where w is the width of the reed channel, the square root originates from the Bernoulli equation, and the positive part of the reed or lip aperture, [h(t)]+ = max(h(t), 0), implies that the volume flow vanishes when the reed or lips are closed.

The third and last constituent equation describes the relationship between flow and pressure in the instrument mouthpiece. It is written in its frequency domain form by using the input impedance Z(ω) of the wind instrument:

$$P(\omega) = Z(\omega)\, U(\omega). \tag{3}$$

Other than the difference in the sign of μ between the inward-striking reed instrument model and the outward-striking brass instrument model, there is another difference between these two subfamilies of wind instruments. The control parameter ωr of vibrating lips varies a lot, over four octaves, to cover the entire tessitura of a given brass instrument. On the other hand, the ωr associated with reeds is more fixed (varying slightly because of the lower lip of the clarinet or saxophone player) and most of the time very large compared to the playing frequencies. This justifies a low-frequency approximation of the elementary model: ωr is assumed infinite and the reed undamped. In other words, the reed is reduced to its stiffness only, and the set of three equations becomes a set of two equations as follows:
$$u(t) = \mathrm{sign}\big(P_m - p(t)\big)\, w\left[h_0 + \frac{p(t) - P_m}{\mu\,\omega_r^2}\right]^{+} \sqrt{\frac{2\,\big|P_m - p(t)\big|}{\rho}}, \qquad P(\omega) = Z(\omega)\,U(\omega). \tag{4}$$

When the mouth pressure is too high, the reed can be blocked against the lay of the mouthpiece. The closure pressure, defined by PM = μωr²h0, is the minimal mouth pressure for which the reed remains closed in the static regime (h(t) becomes equal to 0). By using this closure pressure, a dimensionless mouth pressure γ can be defined: γ = Pm/PM. It is convenient to define another dimensionless parameter, a dimensionless reed height at rest:

$$\zeta = Z_c\, w\, h_0\, \sqrt{\frac{2}{\rho\, P_M}}, \tag{5}$$

where Zc = ρc/S is the characteristic impedance for plane waves inside the resonator of input cross section S, ρ is the air density and c is the speed of sound.

In the following, the nonlinear equation of the model is approximated by its third-order Taylor series around the equilibrium position, defined by p = 0, h = h0(1 − γ), and u = u0 = ζ(1 − γ)√γ in dimensionless form (pressures being scaled by PM and flows by PM/Zc).

The approximated nonlinear equation can be written in the following dimensionless form (see for example Kergomard):
$$u = u_0 + A\,p + B\,p^2 + C\,p^3, \tag{6}$$

with

$$u_0 = \zeta\,(1-\gamma)\sqrt{\gamma}, \qquad A = \frac{\zeta\,(3\gamma - 1)}{2\sqrt{\gamma}}, \qquad B = -\frac{\zeta\,(3\gamma + 1)}{8\,\gamma^{3/2}}, \qquad C = -\frac{\zeta\,(\gamma + 1)}{16\,\gamma^{5/2}}.$$

The value of the dimensionless reed height at rest ζ is chosen to be equal to 0.1.

It is this elementary low-frequency model for reed instruments which is studied in the present paper. If a non-beating reed is assumed, which is typically obtained for a dimensionless mouth pressure γ lower than 0.5, the third-order approximation of the flow rate is appropriate. The elementary model based on the set of two equations has to be solved to predict the nature of the sound radiated by the instrument. Low amplitude solutions for a few specific cases are reviewed in the following subsection.

### 2.2 Small amplitude behaviour

The equilibrium position is the trivial permanent (steady) regime corresponding to silence. Sound can happen only if the equilibrium position becomes unstable. For a lossless cylindrical air column, it becomes unstable for a specific value of γ, namely γthr = 1/3. If losses are taken into account, then γthr is a bit higher. If the losses are very important, the threshold value γthr can reach 1 and the reed channel is closed at equilibrium. In this case the equilibrium remains stable for any value of γ, and hence no sound can be produced. Extensive analyses of the stability of the equilibrium position, with experimental comparisons for cylindrical air columns, are available in the literature.

The step beyond the above linear stability analysis is the study of small oscillations around the threshold. This was first done analytically, and then extended by analysing the nature of the Hopf bifurcation at the threshold, which can be direct or inverse. The results are displayed in Figure 1 as a 2D map where the x-axis is C, the third coefficient of the Taylor expansion, Equation (6), and the y-axis is 1/Z2 − 1/Z1, the difference between the admittance amplitudes of the two first resonances (assumed to be harmonic, the ratio between their frequencies being equal to 2).
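The third-order expansion discussed above (u = u0 + A p + B p² + C p³ in dimensionless form) can be evaluated numerically. The explicit expressions used below are the standard textbook forms from the single-reed literature and are taken here as an assumption, since the source's exact convention is not reproduced:

```python
import math

def taylor_coefficients(gamma, zeta=0.1):
    """Third-order expansion u = u0 + A p + B p^2 + C p^3 of the reed
    characteristic around equilibrium (non-beating reed, 0 < gamma < 1).
    Standard textbook forms, assumed rather than quoted from the source."""
    sg = math.sqrt(gamma)
    u0 = zeta * (1 - gamma) * sg
    A = zeta * (3 * gamma - 1) / (2 * sg)
    B = -zeta * (3 * gamma + 1) / (8 * gamma ** 1.5)
    C = -zeta * (gamma + 1) / (16 * gamma ** 2.5)
    return u0, A, B, C

u0, A, B, C = taylor_coefficients(0.4)
# C < 0, as stated in the discussion of Figure 1, and A changes sign at
# gamma = 1/3, the lossless threshold of oscillation.
```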
Figure 1. Diagram showing the regions where the bifurcation is direct and those where it is inverse. The x-axis shows the values of the third coefficient C of the Taylor expansion, Equation (6), and the y-axis is the difference between the admittance amplitudes of the two first resonances (assumed to be harmonic), Y2 − Y1 = 1/Z2 − 1/Z1. The hatched region corresponds to a direct bifurcation, the unhatched region to an inverse bifurcation.

In our specific case, the coefficient C is negative. Then, for a specific negative value of C, following an imaginary vertical line coming down from an infinitely positive value of 1/Z2 − 1/Z1 (second resonance peak absent, as for the cylindrical tube), the bifurcation is first direct. It becomes inverse at a particular point, for a particular positive value of 1/Z2 − 1/Z1 not far from zero: 1/Z2 − 1/Z1 = −2B²/(3C). And when 1/Z2 − 1/Z1 becomes negative, that is whenever Z2 > Z1, the bifurcation becomes and stays direct. Properties of small amplitude oscillations of single-reed woodwind instruments near the oscillation threshold have been investigated more recently by using analytical formulae with explicit dependence on the physical parameters of the instrument and the player, allowing the determination of the bifurcation point, the nature of the bifurcation, the amplitude of the first harmonics and the oscillation frequency. Apart from a few very simplified cases, such as a clarinet-like model with a lossless cylindrical tube (Kergomard in [20, 5, 29]), or models taking into account losses independent of frequency, sometimes called Raman models, the equations are not tractable analytically, and the bifurcation diagrams cannot be easily obtained.

The simplest non-trivial resonator that can be studied is a resonator having two quasi-harmonic resonance frequencies fres1 and fres2.
This kind of resonator can be obtained in practice in the middle and high ranges of the first register of the saxophone. Bifurcation diagrams have been calculated analytically in the restrictive case of perfect harmonicity between the two resonances. In the following sections, this kind of resonator, but with a non-zero inharmonicity Inh, is analysed.

## 3 Typical bifurcation diagram obtained by continuation method

To overcome the difficulties of the analytical analysis of small amplitude oscillations near thresholds, and to get results for any inharmonicity value arbitrarily far from the oscillation threshold, simulation techniques in the time domain are often used. An alternative method is possible: a nice way to get an overview of the dynamics over small and large amplitudes is to use the bifurcation diagram representation. Very few bifurcation diagrams can be obtained analytically (see the previous subsection), but they can be obtained numerically for a large range of situations by using continuation methods, as implemented in the AUTO or MANLAB software, for example. In order to use the AUTO technique in the following section, the elementary model has to be mathematically reformulated as a set of first-order ODEs.

The principle of continuation is to seek solution branches of a nonlinear algebraic system rather than isolated solution points. A solution branch is a 1D curve in a space whose axes are an unknown of the problem and a parameter of interest called the bifurcation parameter. In the following, the dimensionless mouth pressure γ is chosen as the bifurcation parameter. This provides more information than a set of solution points obtained for successive values of the bifurcation parameter. Branches of static and periodic solutions are computed numerically hereafter using the software AUTO, freely available online.

The model analysed in this paper is a nonlinear dynamical system.
In order to obtain a nonlinear algebraic system on which numerical continuation can be applied, some additional work may be required. For instance, for continuing periodic solutions of a dynamical system, a discretisation is necessary to come down to an algebraic system. Many approaches are possible, among which is a time-domain discretisation of the (unknown) solution over one (unknown) period. The unknowns of the resulting nonlinear algebraic system are the sampled values of the periodic solution and the period. The time discretisation implemented in AUTO is called orthogonal collocation and relies on the use of Lagrange polynomials. The stability of each solution is also assessed. Stability is a very important piece of information for the interpretation of the bifurcation diagram, since only stable solutions are observable. Stability of both equilibria and periodic solutions is found through a linearization of the system of equations around the solution considered. The solution is stable if and only if the real parts of all the eigenvalues of a matrix characteristic of the linearized system are negative. This matrix is the Jacobian matrix if the solution considered is an equilibrium, and the so-called monodromy matrix if the solution considered is periodic. Stability of a solution along a branch is an output of AUTO. For comprehensive details about the continuation of static and periodic solutions using AUTO, please refer to its manual.

In order to use the AUTO technique, the input impedance equation (Eq. (3)) is reformulated as a sum of individual acoustical resonance modes in the frequency domain, which is then translated into the time domain. There are two ways to do this: a sum of real modes, or a sum of complex modes. These two ways of approximating the input impedance in the frequency domain lead to two different sets of first-order equations dX/dt = F(X), with two different state vectors X. In the present paper the real-mode representation of the input impedance Z is used.

The modal-fitted input impedance with N resonance modes is written as follows:

$$Z(\omega) = \sum_{n=1}^{N} Z_n\, \frac{j\omega\,\omega_n/Q_n}{\omega_n^2 - \omega^2 + j\omega\,\omega_n/Q_n}, \tag{7}$$

where the nth resonance is defined by three real constants: the amplitude Zn, the dimensionless quality factor Qn and the angular frequency ωn.

Translation of Equation (7) into the time domain and reconstruction of p(t) from real modal components pn, such that the acoustical pressure is p(t) = Σn pn(t), results in a second-order ODE for each pn:
$$\ddot{p}_n + \frac{\omega_n}{Q_n}\,\dot{p}_n + \omega_n^2\, p_n = Z_n\,\frac{\omega_n}{Q_n}\,\dot{u}(t). \tag{8}$$

Taking into account the other equation of the elementary model, namely the time derivative of the nonlinear volume flow equation (Eq. (6)), the previous set of N second-order ODEs (Eq. (8)) can be rewritten by using the following expression of du/dt:

$$\dot{u} = \big(A + 2B\,p + 3C\,p^2\big)\,\dot{p}, \qquad \text{with } p = \sum_{n=1}^{N} p_n. \tag{9}$$

Then the equations can be put into a state-space representation dX/dt = F(X), where F is a nonlinear vector function and X is the state vector having 2N real components, defined as follows:

$$X = \big[p_1,\ \dot{p}_1,\ p_2,\ \dot{p}_2,\ \ldots,\ p_N,\ \dot{p}_N\big]^{T}. \tag{10}$$

In practice, because this paper is dedicated to an instrument with two quasi-harmonic resonances, the state-space representation is based on the state vector of four real components X = [p1, ṗ1, p2, ṗ2]^T, and the nonlinear vector function F can be written as:
$$F(X) = \begin{bmatrix} \dot{p}_1 \\[1mm] Z_1\,\dfrac{\omega_1}{Q_1}\,\dot{u} - \dfrac{\omega_1}{Q_1}\,\dot{p}_1 - \omega_1^2\, p_1 \\[1mm] \dot{p}_2 \\[1mm] Z_2\,\dfrac{\omega_2}{Q_2}\,\dot{u} - \dfrac{\omega_2}{Q_2}\,\dot{p}_2 - \omega_2^2\, p_2 \end{bmatrix}, \qquad \dot{u} = \big(A + 2B(p_1 + p_2) + 3C(p_1 + p_2)^2\big)\,(\dot{p}_1 + \dot{p}_2). \tag{11}$$

Before discussing extensively the bifurcation diagrams for different values of inharmonicity and for different configurations of the relative amplitudes Z1 and Z2 of the two resonances, let us begin by showing and discussing typical elements of a bifurcation diagram. Figure 2 has been obtained by choosing Z1 = 1.5Z2 and two harmonic resonances (i.e. Inh = 0). The values of the modal parameters of the two-resonance air column, given in Table 1, are inspired from the literature. The main plot displays the continuation results obtained with AUTO, whereas the six smaller plots above correspond to time-domain simulations of the same system between t = 0 and t = 0.5 s, for different values of γ indicated by numbers. Time integration is performed with an ordinary differential equation solver, namely ode15s from the Matlab ODE Suite.
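A comparable time-domain simulation of the four-dimensional state vector [p1, ṗ1, p2, ṗ2] can be sketched in Python with SciPy's `solve_ivp` in place of ode15s. All numerical values below (f1, Zn, Qn) are illustrative stand-ins, not the Table 1 values, and the Taylor coefficients use the standard textbook forms:

```python
import numpy as np
from scipy.integrate import solve_ivp

def two_mode_model(gamma, zeta=0.1, f1=200.0, inh=0.0,
                   Zn=(50.0, 50.0 / 1.5), Qn=(30.0, 30.0)):
    """Right-hand side F(X) of the four-dimensional state-space model,
    X = [p1, dp1, p2, dp2]. Parameter values are illustrative only."""
    w = np.array([2 * np.pi * f1, 2 * np.pi * 2 * f1 * (1 + inh)])
    Zn, Qn = np.asarray(Zn), np.asarray(Qn)
    sg = np.sqrt(gamma)
    # Standard third-order expansion coefficients of the reed characteristic
    A = zeta * (3 * gamma - 1) / (2 * sg)
    B = -zeta * (3 * gamma + 1) / (8 * gamma ** 1.5)
    C = -zeta * (gamma + 1) / (16 * gamma ** 2.5)

    def rhs(t, x):
        p1, dp1, p2, dp2 = x
        p = p1 + p2
        du = (A + 2 * B * p + 3 * C * p * p) * (dp1 + dp2)  # time derivative of the flow
        return [dp1,
                Zn[0] * w[0] / Qn[0] * du - w[0] / Qn[0] * dp1 - w[0] ** 2 * p1,
                dp2,
                Zn[1] * w[1] / Qn[1] * du - w[1] / Qn[1] * dp2 - w[1] ** 2 * p2]

    return rhs

# Blow above threshold: a small perturbation grows into a periodic regime.
sol = solve_ivp(two_mode_model(gamma=0.45), (0.0, 0.5),
                [1e-3, 0.0, 0.0, 0.0], max_step=1e-4)
p = sol.y[0] + sol.y[2]  # mouthpiece pressure p = p1 + p2
```

With these illustrative values the equilibrium is unstable and the envelope of p grows before saturating, qualitatively reproducing the growing transients of the figure described above.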
Figure 2. Bifurcation diagram and time-domain simulations of the two harmonic resonance air column (parameter values in Tab. 1 with Z2 = Z1/1.5 or with Z2 = 0, and Inh = 0) with respect to the control parameter γ. Upper plots: six time-domain simulations of the dimensionless pressure p = p1 + p2 calculated between t = 0 and t = 0.5 s for γ = 0.32, γ = 0.36, γ = 0.40 (two simulations with different initial conditions), γ = 0.45 and γ = 0.53. The dimensionless pressure of the plots numbered from 1 to 3 (respectively 4 to 6) is displayed between −0.3 and +0.3 (respectively −1.2 and +1.2). Lower plot: maximum of the absolute value of the periodic solution p over one period with respect to γ. The branch in green (respectively in black) corresponds to the case Z2 = Z1/1.5 (respectively Z2 = 0), and illustrates an inverse (respectively direct) Hopf bifurcation scenario. Stable (respectively unstable) solutions are plotted with thick (respectively thin) lines. For each scenario, the Hopf bifurcation point where the equilibrium becomes unstable is noted γthr. In the case of an inverse bifurcation, the subcritical threshold γsubthr is highlighted with a vertical dashed line.

Table 1. Values of the modal parameters of the two-resonance air column.

The main plot displays max|p|, the maximum of the absolute value of the pressure in the mouthpiece over one period, with respect to the blowing pressure γ. While it is not highlighted here, the horizontal line max|p| = 0 corresponds to the equilibrium solution. Below a certain critical value of γ (namely γ < γthr1), the equilibrium is stable, as illustrated by the three time-domain simulations calculated for γ = 0.32, γ = 0.36 and γ = 0.40. For initial conditions chosen around the equilibrium, these oscillating solutions decay in time back to the (stable) equilibrium. It is worth noting that the decaying transient lasts all the longer as the value of γ approaches the critical value γthr1.
When γ = γthr1, the equilibrium becomes unstable and a branch of periodic solutions emerges from the equilibrium. This branch is represented in green on the main plot of Figure 2: it first goes backward in terms of γ and is unstable (thin line), then after a turning point (also called a fold) goes forward and is stable (thick line). This scenario is called an inverse Hopf bifurcation, and the value γ = γsubthr is the sub-critical threshold.

As explained above, the bifurcation point γ = γthr1 is reached when the real part of one eigenvalue of the Jacobian matrix crosses the imaginary axis. The imaginary part of the eigenvalue concerned gives the angular frequency of the emerging periodic solution. In the present case, it is close to ω1; hence the periodic solution is classified as "first register" or fundamental regime. If the angular frequency of the emerging periodic solution had been close to ω2, the periodic solution would have been classified as "second register" or octave regime. Note that the frequency of the periodic solution along the green branch is not locked at any value but is modified according to the nonlinearity. This is exemplified and discussed in the next section. Two time-domain simulations are shown with γ = 0.4 and γ = 0.45; they reveal that the solution is repelled from the equilibrium and converges toward a periodic solution. Note that in the case of γ = 0.4 the choice of the initial condition is crucial, since two stable solutions exist: the equilibrium (plot number 3 in Fig. 2) and the periodic solution (plot number 4). A thorough look at the time-domain simulation would reveal that max|p| deduced from the steady-state (periodic) regime is equal to the ordinate of the green curve at the corresponding value of γ.

The black curve corresponds to the branch of periodic solutions emerging in the case where only one acoustic resonance is considered (Z2 = 0).
In that case, the amplitude max|p| is simply a square-root-shaped function of the bifurcation parameter γ in the neighbourhood of the threshold. The thick line denotes a stable periodic solution. Such a scenario is called a direct Hopf bifurcation. Just above the Hopf bifurcation point (γ = γthr1), the direct scenario leads to stable periodic oscillations with infinitely small amplitudes: sounds can be played with the nuance pianissimo. On the contrary, in the case of an inverse bifurcation, the stable periodic oscillations found just above the Hopf bifurcation point have finite amplitude, and playing with the pianissimo nuance is no longer possible.

For pedagogical purposes, the bifurcation diagram is limited here to the neighbourhood of one Hopf bifurcation point, coming from the value γ = γthr1. However, it will be shown in the next section that another Hopf bifurcation point is found for other values of γ, as well as other bifurcations of the periodic branches.

## 4 Effects of the inharmonicity. Results and discussion

### 4.1 Large first resonance amplitude

The discussion is initiated by analysing the case corresponding to Z1 slightly higher than Z2 (in practice Z1/Z2 = 3/2). Three bifurcation diagrams corresponding to Inh = 0, Inh = 0.02 and Inh = 0.04 are shown in Figure 3 (remember that a semitone corresponds to 0.059).
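The linear stability analysis behind the thresholds γthr discussed above can be sketched numerically: linearise the four-dimensional state-space model around the equilibrium p1 = p2 = 0 (where the flow derivative reduces to A(ṗ1 + ṗ2)) and locate the value of γ at which an eigenvalue of the Jacobian crosses into the right half-plane. The parameter values are illustrative stand-ins, not the Table 1 values:

```python
import numpy as np

def jacobian(gamma, zeta=0.1, f1=200.0, inh=0.0,
             Zn=(50.0, 50.0 / 1.5), Qn=(30.0, 30.0)):
    """Jacobian of the two-mode model at the equilibrium p1 = p2 = 0,
    using the standard textbook linear coefficient A(gamma)."""
    w1, w2 = 2 * np.pi * f1, 2 * np.pi * 2 * f1 * (1 + inh)
    A = zeta * (3 * gamma - 1) / (2 * np.sqrt(gamma))
    c1, c2 = Zn[0] * w1 / Qn[0], Zn[1] * w2 / Qn[1]
    return np.array([
        [0.0, 1.0, 0.0, 0.0],
        [-w1 ** 2, c1 * A - w1 / Qn[0], 0.0, c1 * A],
        [0.0, 0.0, 0.0, 1.0],
        [0.0, c2 * A, -w2 ** 2, c2 * A - w2 / Qn[1]],
    ])

def linear_threshold(gammas=np.linspace(0.34, 0.60, 2601)):
    """Smallest gamma in the grid at which the equilibrium is unstable."""
    for g in gammas:
        if np.max(np.linalg.eigvals(jacobian(g)).real) > 0.0:
            return g
    return None

g_thr = linear_threshold()
```

The eigenvalue whose real part crosses zero has an imaginary part close to ω1, consistent with the fundamental regime emerging at γthr1.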
Figure 3. Three bifurcation diagrams of the two quasi-harmonic resonance air column (with Z1 > Z2, in practice Z1/Z2 = 3/2; parameter values in Tab. 1) with respect to the control parameter γ. From top to bottom: Inh = 0 (a), Inh = 0.02 (b) and Inh = 0.04 (c). Each case is described with two plots. Upper plot: maximum of the absolute value of the periodic solution with respect to γ. The branch in green (respectively in red, and in blue) corresponds to the fundamental regime, the standard Helmholtz motion (respectively the octave regime, and the inverted Helmholtz motion fundamental regime). Note that a black curve corresponding to a direct Hopf bifurcation is branched at γ = γthr1; this fundamental regime corresponds to an air column having only one resonance at the frequency fres1. Lower plot: frequency with respect to γ. The frequency branch in green (respectively in red) corresponds to the fundamental frequency of the fundamental regime (respectively the octave regime, frequency divided by 2). The dashed horizontal lines are the reference frequencies fres1 and fres2/2.

The results shown in Figure 3 for the case Inh = 0 are qualitatively consistent with previously published ones (see in particular Fig. 8 of the cited work). Note that the continuation method gives an additional piece of information: the stability of the periodic oscillations.

In Figure 3a the bifurcation diagram shows two branches coming from the equilibrium position:

1. The first branch originates from the linear threshold γ = γthr1, associated with the first resonance fres1, through an inverse Hopf bifurcation. This fundamental regime, or first register regime, is a standard Helmholtz motion. The branch is at first unstable and then becomes stable at the limit point at γ = γsubthr (sub-critical threshold). Compared to the case of a single mode (black curve), important differences are observed, including the nature of the bifurcation.

2.
The second branch originates from the linear threshold γ = γthr2 and is associated with the second resonance fres2, through a direct Hopf bifurcation. Note that γthr2 is above γthr1, because Z1 is bigger than Z2. This branch, which would correspond to the octave regime, or second register regime, is not observable in practice, because its periodic solutions are unstable.

3. The nature of the bifurcation of the two branches originating from the linear thresholds γ = γthr1 and γ = γthr2 is in agreement with the published theory.

4. There is a third branch which originates from the unstable octave branch through a period-doubling bifurcation. This branch, which would correspond to another fundamental regime (the inverted Helmholtz motion), is unstable.

The associated lower plot shows the frequency of the periodic oscillations corresponding to the branches of the bifurcation diagram. In particular, the frequency of the fundamental regime (green curve) is almost locked to the value fres1 = fres2/2 for any value of γ.

For an inharmonicity of 0.02 (Fig. 3b), the bifurcation diagram is quite close to the one with Inh = 0. However, two things are pointed out. First, at the threshold γ = γthr1 the Hopf bifurcation has become direct, as can be predicted theoretically. Second, again there are periodic oscillations for values of the mouth pressure γ below γ = γthr1, down to a new value γ = γsubthr which is a bit larger than in the case Inh = 0. This is due to the occurrence of two folds (limit points on the solution branch) corresponding to saddle-node bifurcations. Note that the frequency of the fundamental regime (green curve) is no longer locked at the value fres1 but is partially pulled towards the value fres2/2, which is reasonable. If the inharmonicity were negative, the same kind of results would be obtained, the frequency being pulled towards a value fres2/2 lower than fres1.

For an inharmonicity of 0.04 (Fig. 3c).
Now the branch coming from the threshold γ = γthr1, corresponding to the fundamental regime, looks like a classical branch associated with a direct Hopf bifurcation. There is no γsubthr anymore: since the folds noted in the previous case have disappeared, γ = γthr1 is now the threshold of oscillation. In fact, when the inharmonicity increases, the dynamics of the system behaves more and more like the dynamics of a single-resonance system. The frequency of the fundamental regime starts from the threshold value fthr1 at the direct Hopf bifurcation point, and is then partially pulled toward the value fres2/2. Note that in Figure 3 a black curve corresponding to a direct Hopf bifurcation is branched at γ = γthr1; this curve corresponds to an air column having only one resonance, at the frequency fres1.\n\nUnder certain circumstances, for instance when the inharmonicity is high enough, a branch of quasi-periodic solutions may emerge from a Neimark–Sacker bifurcation (often referred to as a Hopf bifurcation for a periodic regime). Above this bifurcation point, the periodic branch still exists but it becomes unstable. Such a bifurcation has not been encountered in this work, but it has been observed experimentally with a modified saxophone played in the medium range of its tessitura , simulated by , extensively studied in , and it has been studied with continuation on a toy model of saxophone in Section 3 of .\n\nThe above analysis illustrates two significant consequences of the inverse Hopf bifurcation (cases Inh = 0 and Inh = 0.02):\n\n1. On the one hand, there may be a minimum value γ = γsubthr lower than γthr1 above which there are stable periodic oscillations. This particular value γsubthr can be thought of as a quantitative characterisation of the ease of playing. In Figure 3 it is shown that the lowest value of γsubthr is obtained when the two resonances are perfectly harmonic (Inh = 0). 
If it is assumed that a lower γsubthr corresponds to an instrument easier to play, then it suggests that the reed instrument considered is the easiest to play when Inh = 0. In a way, that is a theoretical illustration of the Bouasse-Benade prescription. The threshold of oscillation, equal to γsubthr for low inharmonicities and equal to γthr1 for higher inharmonicities, is displayed in Figure 4. The minimum of the threshold of oscillation corresponds to Inh = 0.",
null,
"Figure 4. Minimum value of the mouth pressure γ (green line) corresponding to a stable periodic solution (fundamental regime) with respect to the inharmonicity Inh (case Z1 > Z2; in practice Z1/Z2 = 3/2). Linear threshold γthr1 (blue dashed line).\n\n2. On the other hand, the stable periodic oscillations which appear for γ slightly above γsubthr can have fundamental frequencies significantly different from fthr1 = fres1, because of the effect of the second resonance which partially controls the intonation of the fundamental regime. This study highlights the intrinsic limitation of the linear stability analysis: it should only be used to assess the stability of the equilibrium. Conclusions concerning the existence of periodic solutions can only be provided through a nonlinear analysis, analytically in specific cases or with tools like AUTO otherwise.\n\nIn addition, an animation showing the evolution of the bifurcation diagram as a function of the inharmonicity increasing from Inh = −0.05 to Inh = +0.05 is available from the link of footnote 1. Most of the illustrations displayed in the figures correspond to a positive inharmonicity Inh, but the animation and Figure 4 illustrate the fact that the behaviour is qualitatively the same for negative values of Inh.\n\nIn order to illustrate the bifurcation diagram (Fig. 3), it is interesting to run simulations by solving the equation",
null,
"in the time domain (sounds available from the links of footnotes 2, 3). Figures 5 and 6 show signals corresponding to an inharmonicity Inh = 0.040 and Z1 = 1.5Z2:\n\n1. In Figure 5 the control parameter increases linearly from γ = 0.43 to γ = 0.50 (crescendo). Because the branch comes from a direct Hopf bifurcation in the bifurcation diagram, the amplitude of the oscillation is a smoothly increasing function of γ. Therefore, in the time domain simulation, the amplitude of the signal (fundamental regime) increases smoothly, as it would with a resonator having only one resonance fres1.\n\n2. In Figure 6 the control parameter decreases slowly from γ = 0.55 to γ = 0.50 (decrescendo). Because of the chosen initial conditions, the periodic regime obtained corresponds to the upper octave, but when γ reaches the value 0.53, the branch coming from γ = γthr2 becomes unstable, and the periodic solution then jumps to the first branch one octave below, the stable branch coming from γ = γthr1 (fundamental regime).",
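As stated in the conclusion, the model behind these simulations ignores the reed dynamics, keeps two acoustic resonances, and truncates the nonlinear flow characteristic at third order. The sketch below is an illustrative reconstruction under those three approximations only; the modal form of the resonator, the expression of the Taylor coefficients A, B, C, and all numerical values (ζ, Qn, Zn; Tab. 1 is not reproduced here) are assumptions, not the paper's exact implementation.

```python
import math

# Illustrative two-resonance reed model (assumed form; parameter values
# below are NOT those of Tab. 1). Reed dynamics ignored; volume flow
# u(p) replaced by a third-order Taylor expansion u = u0 + A p + B p^2 + C p^3.

def flow_derivative(p, gamma, zeta=0.5):
    """u'(p) for one common form of the truncated reed characteristic."""
    sg = math.sqrt(gamma)
    A = zeta * (3.0 * gamma - 1.0) / (2.0 * sg)
    B = -zeta * (3.0 * gamma + 1.0) / (8.0 * gamma * sg)
    C = -zeta * (gamma + 1.0) / (16.0 * gamma * gamma * sg)
    return A + 2.0 * B * p + 3.0 * C * p * p

def rhs(state, gamma, w1, w2, Q1, Q2, Z1, Z2):
    # state = (p1, v1, p2, v2); p = p1 + p2 is the mouthpiece pressure.
    p1, v1, p2, v2 = state
    dpdt = v1 + v2
    # slow-ramp approximation: the explicit gamma-dot term of du/dt is neglected
    dudt = flow_derivative(p1 + p2, gamma) * dpdt
    # one modal oscillator per resonance, driven by du/dt (assumed modal form)
    a1 = Z1 * w1 / Q1 * dudt - w1 / Q1 * v1 - w1 * w1 * p1
    a2 = Z2 * w2 / Q2 * dudt - w2 / Q2 * v2 - w2 * w2 * p2
    return (v1, a1, v2, a2)

def simulate(gamma_of_t, T, dt=2e-5, **par):
    """RK4 integration; returns the sampled pressure p(t) = p1 + p2."""
    s = (0.02, 0.0, 0.02, 0.0)  # small perturbation off the equilibrium
    out = []
    for i in range(int(round(T / dt))):
        g = gamma_of_t(i * dt)
        k1 = rhs(s, g, **par)
        k2 = rhs(tuple(x + 0.5 * dt * k for x, k in zip(s, k1)), g, **par)
        k3 = rhs(tuple(x + 0.5 * dt * k for x, k in zip(s, k2)), g, **par)
        k4 = rhs(tuple(x + dt * k for x, k in zip(s, k3)), g, **par)
        s = tuple(x + dt / 6.0 * (a + 2.0 * b + 2.0 * c + d)
                  for x, a, b, c, d in zip(s, k1, k2, k3, k4))
        out.append(s[0] + s[2])
    return out

# illustrative values: fres1 = 220 Hz, Inh = 0.04, Z1/Z2 = 3/2 (like Fig. 3c)
par = dict(w1=2 * math.pi * 220, w2=2 * math.pi * 220 * 2 * 1.04,
           Q1=30.0, Q2=30.0, Z1=10.0, Z2=10.0 / 1.5)
# crescendo analogous to Figure 5: gamma ramps from 0.43 to 0.50
p = simulate(lambda t: 0.43 + 0.35 * t, T=0.2, **par)
```

With these illustrative values the linear threshold of the first resonance falls near γ ≈ 0.42, so the whole ramp sits above it and the oscillation grows smoothly; a constant γ below γ ≈ 0.42 instead decays back to the equilibrium.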
null,
"Figure 5. Signal (dimensionless acoustical pressure) obtained by simulation in the time domain with an inharmonicity Inh = 0.04 and Z1 = 1.5Z2 (as in Fig. 3c). The dimensionless mouth pressure is plotted in black, and increases linearly from γ = 0.43 (constant before t = 2 s) to γ = 0.50 (constant after t = 9 s).",
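The upper plots of the bifurcation diagrams show the maximum of the absolute value of the periodic solution; for a time-domain signal such as the one of Figure 5, the corresponding envelope can be read off as block maxima of |p| over windows of at least one period. A hypothetical sketch on a synthetic crescendo (both the helper and the test signal are illustrative, not the paper's data):

```python
import math

def amplitude_envelope(signal, fs, f_min):
    """Block maxima of |signal| over windows of one period of f_min."""
    win = max(1, int(fs / f_min))
    return [max(abs(x) for x in signal[i:i + win])
            for i in range(0, len(signal) - win + 1, win)]

fs = 8000
# synthetic crescendo: a 200 Hz tone whose amplitude grows linearly over 1 s
sig = [0.1 * (1.0 + 4.0 * n / fs) * math.sin(2.0 * math.pi * 200.0 * n / fs)
       for n in range(fs)]
env = amplitude_envelope(sig, fs, f_min=200.0)
# env rises monotonically for a smooth crescendo like the one of Figure 5
```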
null,
"Figure 6. Signal (dimensionless acoustical pressure) obtained by simulation in the time domain with an inharmonicity Inh = 0.04 and Z1 = 1.5Z2 (as in Fig. 3c). The dimensionless mouth pressure is plotted in black, and decreases linearly from γ = 0.55 (constant before t = 2 s) to γ = 0.50 (constant after t = 9 s).\n\n### 4.2 Large second resonance amplitude\n\nThe discussion continues by analysing the case corresponding to Z1 slightly lower than Z2 (in practice Z2/Z1 = 3/2). Three bifurcation diagrams corresponding to Inh = 0, Inh = 0.015 and Inh = 0.03 are shown in Figure 7.",
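The ordering of the linear thresholds invoked below (γthr2 < γthr1 when Z1 < Z2, and the reverse in Section 4.1) follows from each resonance destabilising the equilibrium once Zn·A(γ) reaches 1, where A(γ) is the linear coefficient of the truncated flow characteristic. A minimal sketch, assuming one common form of A(γ) and an illustrative ζ (not the paper's Tab. 1 values):

```python
import math

def A_lin(gamma, zeta=0.5):
    # linear coefficient of the third-order flow characteristic
    # (one common form, assumed here; zeta is illustrative)
    return zeta * (3.0 * gamma - 1.0) / (2.0 * math.sqrt(gamma))

def linear_threshold(Zn, zeta=0.5):
    """Smallest gamma in (1/3, 1] with Zn * A_lin(gamma) >= 1, by bisection.
    A resonance of peak impedance Zn destabilises the equilibrium there."""
    lo, hi = 1.0 / 3.0 + 1e-12, 1.0   # A_lin <= 0 below gamma = 1/3
    if Zn * A_lin(hi, zeta) < 1.0:
        return None  # peak too small: no threshold below gamma = 1
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if Zn * A_lin(mid, zeta) < 1.0:
            lo = mid
        else:
            hi = mid
    return hi

# illustrative peaks with Z2/Z1 = 3/2 (the case of this section):
gthr1 = linear_threshold(Zn=10.0)        # first resonance, Z1 = 10
gthr2 = linear_threshold(Zn=10.0 * 1.5)  # second resonance, Z2 = 15
# the larger peak destabilises first, hence gthr2 < gthr1 here
```

Since A_lin is monotone increasing on (1/3, 1), the bisection converges to the unique threshold; with these illustrative values it lands near γthr2 ≈ 0.389 and γthr1 ≈ 0.420.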
null,
"Figure 7. Three bifurcation diagrams of the two quasi-harmonic resonance air column (with Z1 < Z2, in practice Z2/Z1 = 3/2) with respect to the control parameter γ. From top to bottom: Inh = 0 (a), Inh = 0.015 (b) and Inh = 0.030 (c). Each case is described with two plots. Upper plot: Maximum of the absolute value of the periodic solution with respect to γ. The branch in red (respectively in blue, and in green) corresponds to the octave regime (respectively the fundamental regime associated with the standard Helmholtz motion, and the inverted Helmholtz motion fundamental regime). Lower plot: Frequency with respect to γ. The frequency branch in red (respectively in blue) corresponds to the fundamental frequency of the octave regime (respectively the fundamental regime). The frequency of the red branch has been divided by 2 for the sake of clarity. The reference dashed horizontal lines are the reference frequencies: fres1 and fres2/2.\n\nThe results shown in Figure 7 for the case Inh = 0 are qualitatively consistent with those published in (see in particular its Fig. 10).\n\nIn Figure 7a the bifurcation diagram shows two branches coming from the equilibrium position:\n\n1. On the left-hand side, the first branch emerges at the linear threshold γ = γthr2, associated with the second resonance fres2, through a direct bifurcation. This octave regime is stable until a period-doubling bifurcation point, and then becomes unstable. At the bifurcation point, there is an emerging branch corresponding to a fundamental regime. It is a standard Helmholtz motion according to . This fundamental regime is unstable until a turning point (a fold) corresponding to a minimum value γ = γsubthr where the periodic oscillations become stable. Note that the threshold of oscillation of the fundamental regime γ = γsubthr is significantly lower than the value γthr1 predicted by the linear stability analysis.\n\n2. 
The second branch emerges at the linear threshold γ = γthr1, associated with the first resonance fres1, through a direct bifurcation. Note that γthr1 is larger than γthr2, because Z1 is lower than Z2. This branch, which would correspond to a second fundamental regime, is not observable in practice, because the periodic solutions are unstable. This branch would correspond to the inverted Helmholtz motion according to .\n\nThe associated lower curve shows the frequency of the periodic oscillations corresponding to the branches of the bifurcation diagram. In particular the frequency of the stable fundamental regime (blue curve) is close to the value fres1 = fres2/2 for any value of γ.\n\nFor an inharmonicity of 0.015 (Fig. 7b). The bifurcation diagram is qualitatively quite close to the one with Inh = 0. Two things are now pointed out. Once again there are periodic oscillations for dimensionless mouth pressure values below γ = γthr2 < γthr1, down to a new value γ = γsubthr which is slightly larger than in the case Inh = 0. Note that the frequency of the fundamental regime (blue curve) is surprisingly close to the value fres1; the fundamental frequency is not much pulled towards the value fres2/2. Note that the range of γ where there are two stable periodic regimes, the octave and the standard Helmholtz motion fundamental regime, is larger: from γ = γthr2 to the value of γ where the period-doubling bifurcation point occurs.\n\nFor an inharmonicity of 0.03 (Fig. 7c). Again the bifurcation diagram is qualitatively quite close to the ones with Inh = 0 and Inh = 0.015. The minimum pressure of fundamental periodic oscillations γsubthr (on the blue branch) keeps increasing with inharmonicity, and becomes higher than γ = γthr2.\n\nThe above discussion illustrates several significant points:\n\n1. There may be a minimum value γ = γsubthr, lower than γthr2 < γthr1, above which there are stable periodic oscillations. 
This particular value γsubthr can be chosen as a kind of quantitative characterisation of the ease of playing. In Figure 8 it is shown again (as in Fig. 4) that the lowest value of γsubthr is obtained when the two resonances are perfectly harmonic (Inh = 0). If it is assumed that a lower γsubthr corresponds to an instrument easier to play, then it suggests that the reed instrument considered is the easiest to play when Inh = 0. In a way, even if Z1 < Z2, again that is a theoretical illustration of the Bouasse-Benade prescription. The threshold of oscillation is displayed in Figure 8: the minimum of the threshold of oscillation corresponds to Inh = 0.",
null,
"Figure 8. Minimum value of the mouth pressure γ (blue line) corresponding to a stable periodic solution (fundamental regime) with respect to the inharmonicity Inh (case Z1 < Z2; in practice Z2/Z1 = 3/2). Linear threshold γthr1 (green dashed line).\n\n2. The stable periodic oscillations which appear for γ slightly above γsubthr have fundamental frequencies quite close to fres1.\n\n3. It is worth emphasising that, whatever the inharmonicity, the fundamental regime never comes from the first threshold γ = γthr1, but emerges through a period-doubling bifurcation point attached to the octave branch. A naive analysis of the time domain simulations (at least with Inh = 0) would probably suggest that the fundamental regime emerges from the equilibrium through an inverse Hopf bifurcation, but this is not correct. It is also worth noting that a linear stability analysis (LSA) of the equilibrium gives no useful hint here about the oscillation behaviour of the model.\n\n4. Note as well that sometimes there are several stable regimes (equilibrium position and periodic regime, or two periodic regimes) for a given value of γ. In such cases, the stable regime reached depends on the initial conditions.\n\nIn addition, an animation showing the evolution of the bifurcation diagram as a function of the inharmonicity increasing from Inh = −0.05 to Inh = +0.05 is available from the link of footnote 4. Most of the illustrations displayed in the figures correspond to a positive inharmonicity Inh, but the animation and Figure 8 illustrate the fact that the behaviour is qualitatively the same for negative values of Inh. Unlike in Figure 4, the plot is slightly asymmetric with respect to the vertical axis Inh = 0.\n\nIn order to illustrate the bifurcation diagrams (Fig. 7), it is interesting to run simulations by solving the equation",
null,
"in the time domain (sound available from the link of footnote 5). Figure 9 shows a signal corresponding to an inharmonicity Inh = 0.015 and Z1 = Z2/1.5. The control parameter increases slowly from γ = 0.38 (just below the period-doubling bifurcation) to γ = 0.45 (crescendo). Because of the chosen initial conditions, the periodic regime obtained corresponds to the octave, but when γ reaches the value 0.39, the branch coming from γ = γthr2 becomes unstable, and the periodic solution then jumps to the only stable branch one octave below (the branch coming from the period-doubling bifurcation).",
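The register jump just described can be located automatically in such a simulated signal by tracking the playing frequency over successive windows; a hypothetical sketch using upward zero crossings on a synthetic two-register test signal (both the helper and the signal are illustrative, not the paper's data):

```python
import math

def playing_frequency(signal, fs, t0, t1):
    """Mean fundamental estimate on signal between t0 and t1 (seconds),
    from the average spacing of upward zero crossings."""
    i0, i1 = int(t0 * fs), int(t1 * fs)
    crossings = [i for i in range(i0 + 1, i1)
                 if signal[i - 1] < 0.0 <= signal[i]]
    if len(crossings) < 2:
        return 0.0
    period = (crossings[-1] - crossings[0]) / (len(crossings) - 1) / fs
    return 1.0 / period

fs = 44100
# synthetic register change: octave regime (440 Hz) then fundamental (220 Hz)
sig = [math.sin(2 * math.pi * 440 * n / fs) if n < fs else
       math.sin(2 * math.pi * 220 * n / fs) for n in range(2 * fs)]
f_before = playing_frequency(sig, fs, 0.1, 0.9)   # close to 440 Hz
f_after = playing_frequency(sig, fs, 1.1, 1.9)    # close to 220 Hz
```

Averaging the crossing spacing over the whole window keeps the estimate accurate to a fraction of a hertz, which is more than enough to separate a regime from its octave.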
null,
"Figure 9. Signal (dimensionless acoustical pressure) obtained by simulation in the time domain with an inharmonicity Inh = 0.015 and Z1 = Z2/1.5 (as in Fig. 7b). The dimensionless mouth pressure is plotted in black, and increases linearly from γ = 0.38 (constant before t = 2 s) to γ = 0.45 (constant after t = 9 s).\n\n## 5 Conclusion\n\nBifurcation diagrams of a basic reed instrument modeled by two quasi-harmonic resonances have been computed using a continuation method (the AUTO software), with the mouth pressure as the control parameter. Some of the mouth-pressure threshold results are interpreted in terms of the ease of playing of the reed instrument. In the case of an inverse Hopf bifurcation (perfect harmonicity) or of a double fold after a direct Hopf bifurcation (moderate inharmonicity), there may be a minimum value γ = γsubthr lower than γthr1 for which periodic stable oscillations can be observed. This value γsubthr may be considered as a quantitative characterisation of the ease of playing. It has been shown that the lowest value of γsubthr is obtained when the two resonances are harmonic, with a frequency ratio equal to 2. This is a theoretical illustration of the Bouasse-Benade prescription [2, 8]. Even though a few AUTO simulations using parameter values other than those of the present study have been done, a large set of further tests with many other parameter values should be carried out to verify that the conclusions of the present paper are robust.\n\nAn interesting direction for future work could include experimental validation, particularly using the modified saxophone used in . There, the saxophone was modified by the addition of two closed side tubes on the neck. Movable pistons are used to change the volume of the side tubes, which results in a shift of the resonance frequencies. As explained in this publication, it is possible to choose particular positions and volumes of the closed tubes to ensure a control of the inharmonicity. 
This strategy is used with fingerings corresponding to the middle and high ranges of the first regime of the saxophone, where its input impedance consists essentially of two resonances.\n\nThe results provided in the current manuscript depend on a physical model of reed instruments based on three strong approximations: the reed dynamics is ignored, only two acoustic resonances are taken into account, and the nonlinear equation describing the incoming volume flow is approximated by its third-order Taylor series expansion. Therefore, the conclusions above cannot be directly extended to real instruments until further research is carried out on more complex models. At that point, many other interesting topics could be explored, such as sound production of low-pitched notes by conical reed instruments such as saxophones, oboes or bassoons. Replacing the inward-striking reed model by an outward-striking lip model is also planned, in order to study the nonlinear dynamics of brass instruments (preliminary results in ). More precisely, it is expected that numerical continuation could clarify their pedal note regime, recently simulated using a time-domain finite-difference method in .\n\n## Conflict of interest\n\nThe authors declare no conflict of interest.\n\n## Acknowledgments\n\nThe authors acknowledge their colleagues Murray Campbell, Jean-Pierre Dalmont and Erik Petersen for fruitful discussions. 
The authors also wish to thank the reviewers, whose conscientious work has improved the clarity of this article and raised some interesting questions.\n\n1\n\nAnimation showing the evolution of the bifurcation diagram as a function of the inharmonicity increasing from Inh = −0.05 to Inh = +0.05 (case Z1 > Z2), corresponding to Figure 3, in http://perso.univ-lemans.fr/~jgilbert/output_Z1_sup_Z2_stab.webm.\n\n2\n\nTime domain simulation corresponding to Figure 5 in http://perso.univ-lemans.fr/~jgilbert/Inh0p040_10s.wav.\n\n3\n\nTime domain simulation corresponding to Figure 6 in http://perso.univ-lemans.fr/~jgilbert/gamma0p55a0p50.wav.\n\n4\n\nAnimation showing the evolution of the bifurcation diagram as a function of the inharmonicity increasing from Inh = −0.05 to Inh = +0.05 (case Z1 < Z2), corresponding to Figure 7, in http://perso.univ-lemans.fr/~jgilbert/output_Z2_sup_Z1_stab.webm.\n\n5\n\nTime domain simulation corresponding to Figure 9 in http://perso.univ-lemans.fr/~jgilbert/Inh0p015_Z2supZ1_10s.wav.\n\n## References\n\n1. J.F. Petiot, P. Kersaudy, G. Scavone, S. Mac Adams, B. Gazengel: Investigations of the relationships between perceived qualities and sound parameters of saxophone reeds. Acta Acustica United With Acustica 103 (2017) 812–829. [CrossRef] [Google Scholar]\n2. A.H. Benade: Fundamentals of musical acoustics, 2nd ed. Dover, 1990. [Google Scholar]\n3. M. Campbell, C. Greated: The Musician’s Guide to Acoustics. Oxford University Press, 1989. [Google Scholar]\n4. N.H. Fletcher, T.D. Rossing: The Physics of Musical Instruments, 2nd ed. Springer, 1998. [CrossRef] [Google Scholar]\n5. A. Chaigne, J. Kergomard: Acoustics of Musical Instruments. Springer, 2016. [CrossRef] [Google Scholar]\n6. M. Campbell, J. Gilbert, A. Myers: The Science of Brass Instruments. Springer, 2020. [Google Scholar]\n7. A.H. Benade, D.J. Gans: Sound production in wind instruments. Annals of the New York Academy of Science 155 (1968) 247–263. [CrossRef] [Google Scholar]\n8. H. 
Bouasse: Instruments à vent tomes I et II. Delagrave, Paris, 1929; repr. Librairie Scientifique et Technique Albert Blanchard, Paris, 1986. [Google Scholar]\n9. J.-P. Dalmont, B. Gazengel, J. Gilbert, J. Kergomard: Some aspects of tuning and clean intonation in reed instruments. Applied Acoustics 46 (1995) 19–60. [CrossRef] [Google Scholar]\n10. J. Gilbert, E. Brasseur, J.P. Dalmont, C. Maniquet: Acoustical evaluation of the Carnyx of Tintignac. Proceedings of Acoustics 2012, Nantes, 2012. [Google Scholar]\n11. D.M. Campbell, J. Gilbert, P. Holmes: Seeking the sound of ancient horns. ASA Meeting, Boston, 2017. [Google Scholar]\n12. W. Kausel: Optimization of brasswind instruments and its application in bore reconstruction. Journal of New Music Research 30 (2001) 69–82. [CrossRef] [Google Scholar]\n13. A. Braden, M. Newton, D.M. Campbell: Trombone bore optimization based on input impedance targets. Journal of the Acoustical Society of America 125 (2009) 2404–2412. [CrossRef] [Google Scholar]\n14. D. Noreland, J. Kergomard, F. Laloë, C. Vergez, P. Guillemain, A. Guilloteau: The logical clarinet: Numerical optimization of the geometry of woodwind instruments. Acta Acustica United With Acustica 99 (2013) 615–628. [CrossRef] [Google Scholar]\n15. W.L. Coyle, P. Guillemain, J. Kergomard, J.-P. Dalmont: Predicting playing frequencies for clarinets: A comparison between numerical simulations and simplified analytical formulas. Journal of the Acoustical Society of America 138 (2015) 2770–2781. [CrossRef] [Google Scholar]\n16. R. Tournemenne, J.F. Petiot, B. Talgorn, M. Kokkolaras, J. Gilbert: Sound simulation based design optimization of brass wind instruments. Journal of the Acoustical Society of America 145 (2019) 3795–3804. [CrossRef] [Google Scholar]\n17. M.E. McIntyre, R.T. Schumacher, J. Woodhouse: On the oscillations of musical instruments. Journal of the Acoustical Society of America 74 (1983) 1325–1345. [CrossRef] [Google Scholar]\n18. J.-P. Dalmont, J. 
Gilbert, J. Kergomard: Reed instruments, from small to large amplitude periodic oscillations and the Helmholtz motion analogy. Acustica 86 (2000) 671–684. [Google Scholar]\n19. J.-B. Doc, C. Vergez, S. Missoum: A minimal model of a single-reed instrument producing quasi-periodic sounds. Acta Acustica United With Acustica 100 (2014) 543–554. [CrossRef] [Google Scholar]\n20. A. Hirschberg, J. Kergomard, G. Weinreich: Mechanics of musical instruments. Springer-Verlag, Wien, Austria, 1995. [Google Scholar]\n21. J. Gilbert, S. Maugeais, C. Vergez: From the bifurcation diagrams to the ease of playing of reed musical instruments. A theoretical illustration of the Bouasse-Benade prescription? International Symposium on Musical Acoustics, Detmold, Germany, 2019. [Google Scholar]\n22. B. Fabre, J. Gilbert, A. Hirschberg: Modeling of Wind Instruments. Chapter 7 of Springer Handbook of Systematic Musicology. Springer-Verlag, 2018. [Google Scholar]\n23. T.A. Wilson, G.S. Beavers: Operating modes of the clarinet. Journal of the Acoustical Society of America 56 (1974) 653–658. [Google Scholar]\n24. F. Silva, J. Kergomard, C. Vergez, J. Gilbert: Interaction of reed and acoustic resonator in clarinetlike systems. Journal of the Acoustical Society of America 124 (2008) 3284–3295. [CrossRef] [PubMed] [Google Scholar]\n25. W.E. Worman: Self-sustained non-linear oscillations of medium amplitude in clarinet-like systems, PhD Thesis. Case Western Reserve University, Cleveland, 1971. [Google Scholar]\n26. N. Grand, J. Gilbert, F. Laloë: Oscillation threshold of woodwind instruments. Acustica 83 (1997) 137–151. [Google Scholar]\n27. B. Ricaud, P. Guillemain, J. Kergomard, F. Silva, C. Vergez: Behavior of reed woodwind instruments around the oscillation threshold. Acta Acustica United With Acustica 95 (2009) 733–743. [CrossRef] [Google Scholar]\n28. C. Maganza, R. Caussé, F. Laloë: Bifurcations, period doubling and chaos in clarinet like systems. Europhysics Letters 1 (1986) 295–302. 
[CrossRef] [Google Scholar]\n29. P.-A. Taillard, J. Kergomard, F. Laloë: Iterated maps for clarinet-like systems. Nonlinear Dynamics 62 (2010) 253–271. [CrossRef] [Google Scholar]\n30. J.-P. Dalmont, J. Gilbert, J. Kergomard, S. Ollivier: An analytical prediction of the oscillation and extinction thresholds of a clarinet. Journal of the Acoustical Society of America 118 (2005) 3294–3305. [CrossRef] [PubMed] [Google Scholar]\n31. E.J. Doedel, A.R. Champneys, T.F. Fairgrieve, Yu.A. Kuznetsov, B. Sandstede, X.J. Wang: auto97: Continuation and bifurcation software for ordinary differential equations (with HomCont) User’s Guide. Concordia Univ. (1997). [Google Scholar]\n32. S. Karkar, B. Cochelin, C. Vergez: A high-order, purely frequency based harmonic balance formulation for continuation of periodic solutions: The case of non-polynomial nonlinearities. Journal of Sound and Vibration 332 (2013) 968–977. [CrossRef] [Google Scholar]\n34. V. Debut, J. Kergomard: Analysis of the self-sustained oscillations of a clarinet as a Van der Pol oscillator. International Congress on Acoustics, Kyoto, 2004. [Google Scholar]\n35. F. Silva, C. Vergez, P. Guillemain, J. Kergomard, V. Debut: MoReeSC: a framework for the simulation and analysis of sound production in reed and brass instruments. Acta Acustica United With Acustica 100 (2014) 126–138. [CrossRef] [Google Scholar]\n36. S.H. Strogatz: Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering (Studies in Nonlinearity), 2nd ed., Kindle, 2019. [Google Scholar]\n37. B. Gazengel: Caractérisation objective de la qualité de justesse, de timbre et d’émission des instruments à vent à anche simple, PhD Thesis. Université du Maine, 1994. [Google Scholar]\n38. B. Gazengel, J. Gilbert: From the measured input impedance to the synthesized pressure signal: application to the saxophone. Proceedings of the International Symposium on Musical Acoustics, Dourdan, July 2–6, 1995. [Google Scholar]\n39. L. 
Guillot, B. Cochelin, C. Vergez: A Taylor series-based continuation method for solutions of dynamical systems. Nonlinear Dynamics 98 (2019) 2827–2845. [CrossRef] [Google Scholar]\n40. V. Freour, H. Masuda, S. Usa, E. Tominaga, Y. Tohgi, B. Cochelin, C. Vergez: Numerical analysis and comparison of brass instruments by continuation. International Symposium on Musical Acoustics, Detmold, Germany, 2019. [Google Scholar]\n41. L. Velut, C. Vergez, J. Gilbert, M. Djahanbani: How well can Linear Stability Analysis predict the behaviour of an outward valve brass instrument model? Acta Acustica United With Acustica 103 (2016) 132–148. [CrossRef] [Google Scholar]\n\nCite this article as: Gilbert J, Maugeais S & Vergez C. 2020. Minimal blowing pressure allowing periodic oscillations in a simplified reed musical instrument model: Bouasse-Benade prescription assessed through numerical continuation. Acta Acustica, 4, 27.\n\n## All Tables\n\nTable 1\n\nValues of the modal parameters of the two-resonance air column.\n\n## All Figures",
null,
"Figure 1. Diagram showing the regions where the bifurcation is direct as well as those regions where it is inverse. The x-axis shows the values of the third coefficient of the Taylor expansion Equation (6), and the y-axis is the difference of admittance amplitude between the two first resonances (assumed to be harmonic), Y2 − Y1 = 1/Z2 − 1/Z1. The hatched region is for a direct bifurcation, and the unhatched region for an inverse bifurcation. Adapted from .",
null,
"Figure 2. Bifurcation diagram and time domain simulations of the two harmonic resonance air column (parameter values in Tab. 1 with Z2 = Z1/1.5 or with Z2 = 0, and Inh = 0) with respect to the control parameter γ. Upper plots: six time domain simulations of the dimensionless pressure p = p1 + p2 calculated between t = 0 and t = 0.5 s for γ = 0.32, γ = 0.36, γ = 0.40 (two simulations with different initial conditions), γ = 0.45 and γ = 0.53. The dimensionless pressure of the plots numbered from 1 to 3 (respectively 4 to 6) is displayed between −0.3 and +0.3 (respectively −1.2 and +1.2). Lower plot: Maximum of the absolute value of the periodic solution p over one period with respect to γ. The branch in green (respectively in black) corresponds to the case Z2 = Z1/1.5 (respectively Z2 = 0), and illustrates an inverse (respectively direct) Hopf bifurcation scenario. Stable (respectively unstable) solutions are plotted with thick (respectively thin) lines. For each scenario, the Hopf bifurcation point, where the equilibrium becomes unstable, is noted γthr. In the case of an inverse bifurcation the subcritical threshold γsubthr is highlighted with a vertical dashed line.",
null,
] | [
null,
"https://i.creativecommons.org/l/by/4.0/88x31.png",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-eq1.gif",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-eq2.gif",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-eq3.gif",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-eq4.gif",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-eq5.gif",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-eq6.gif",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-eq7.gif",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-eq8.gif",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-eq9.gif",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-eq10.gif",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-eq11.gif",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-eq12.gif",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-eq13.gif",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-eq14.gif",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-eq15.gif",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-eq16.gif",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-eq17.gif",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-eq18.gif",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-eq19.gif",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-eq20.gif",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-eq21.gif",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-fig1_small.jpg",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-eq24.gif",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-eq25.gif",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-eq26.gif",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-eq28.gif",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-eq29.gif",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-eq30.gif",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-eq31.gif",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-eq32.gif",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-eq33.gif",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-eq34.gif",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-eq35.gif",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-fig2_small.jpg",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-fig3_small.jpg",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-fig4_small.jpg",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-eq38.gif",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-fig5_small.jpg",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-fig6_small.jpg",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-fig7_small.jpg",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-fig8_small.jpg",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-eq39.gif",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-fig9_small.jpg",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-fig1_small.jpg",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-fig2_small.jpg",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-fig3_small.jpg",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-fig4_small.jpg",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-fig5_small.jpg",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-fig6_small.jpg",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-fig7_small.jpg",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-fig8_small.jpg",
null,
"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016-fig9_small.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8833715,"math_prob":0.9455012,"size":62265,"snap":"2023-14-2023-23","text_gpt3_token_len":14611,"char_repetition_ratio":0.16385859,"word_repetition_ratio":0.27047446,"special_character_ratio":0.22266121,"punctuation_ratio":0.11763672,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9861229,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106],"im_url_duplicate_count":[null,null,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,4,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,4,null,4,null,4,null,2,null,4,null,4,null,4,null,4,null,2,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-06T19:12:00Z\",\"WARC-Record-ID\":\"<urn:uuid:c8458e27-c04b-4bfb-b4cf-bb297ca0e0f2>\",\"Content-Length\":\"183127\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8b297a28-d43f-4e1c-a48e-d6a722600aae>\",\"WARC-Concurrent-To\":\"<urn:uuid:81d2490a-f14e-4c9d-a967-9ca9308919d6>\",\"WARC-IP-Address\":\"167.114.155.65\",\"WARC-Target-URI\":\"https://acta-acustica.edpsciences.org/articles/aacus/full_html/2020/06/aacus200016/aacus200016.html\",\"WARC-Payload-Digest\":\"sha1:XBQPFA6BAE7FMPVABE2PZDBEJLE7NY7Z\",\"WARC-Block-Digest\":\"sha1:TZCHH4YYCXDDB7I2YM2VMNLHYAW3DZHE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224653071.58_warc_CC-MAIN-20230606182640-20230606212640-00548.warc.gz\"}"} |
https://sheetshelp.com/yearfrac/ | [
"# YEARFRAC Function – Google Sheets\n\nThe YEARFRAC function takes two dates and returns the years between them, including fractions of a year. You can optionally specify different methods of counting days.\n\nThis function can be used to calculate someone’s age. An alternative to YEARFRAC is DATEDIF, which can return the difference between two dates in years, months, or days.\n\nRemember that dates are just saved as numbers by your spreadsheet with date formatting applied. The date must be in a valid format, so you must use the DATE function, wrap it in quotes, or use a cell reference to a cell containing a date.\n\n## Syntax\n\n`=YEARFRAC(start_date,end_date,[day_count_convention])`\n\n• `start_date` – The date for the calculation to start.\n• `end_date` – The date for the calculation to end.\n• `day_count_convention` – Optional day count method.\n• `0` [or blank] – US (NASD) 30/360\n• `1` – actual/actual – Uses the actual number of days.\n• `2` – actual/360 – Uses the actual number of days but with a 360-day year.\n• `3` – actual/365 – Uses the actual number of days but assumes a 365-day year. This option doesn’t recognize leap years.\n• `4` – European 30/360 – Similar to 0 but uses European standards instead of US.\n\n## Related Functions\n\nDATE – Takes separate year, month, and day values and returns them as a date\n\nDATEDIF – Calculates the difference between two dates in years, months, or days\n\nDAY – Returns the value of the day from a given date\n\nDAYS360 – Returns the number of days between two dates based on a 360-day year, the convention commonly used in interest calculations\n\n## Errors\n\n#NUM – An input is a number but is not a valid date.\n\n#VALUE! – The input is something that doesn’t convert to a number, such as “The other day” or “Yester-yester-day.”\n\n## YEARFRAC Function Examples\n\n### Example 1 – Different Count Conventions\n\nThe function has five possible values for the `day_count_convention`. 
Their outputs are slightly different, as shown in the examples below and the linked Sheet.\n\nWhen using YEARFRAC in Google Sheets in a general situation, such as calculating someone’s age, the actual/actual method is the clearest choice. This method uses the actual days elapsed divided by the actual days in a year.\n\n### Example 2 – Calculating Age with YEARFRAC\n\nWhen combined with the TODAY function, we can calculate age.\n\n`=YEARFRAC(A2,TODAY(),1)`\n\nThe result will update every subsequent day you open the spreadsheet as the value of TODAY will change.\n\n### Example 3 – Calculating Age in Whole Years\n\nLet’s add to our last example by converting the result of the formula to an integer. This is done using the INT function.\n\n`=INT(YEARFRAC(A2,TODAY(),1))`\n\nAdding the INT function allows us to provide age as a whole number.\n\n### Example 4 – YEARFRAC in Finance\n\nWe used the actual/actual convention (setting 1) in examples 2 and 3 because it counts the actual days. However, interest calculations often use a 360-day year, with each month having 30 days. Let’s use the 30/360 convention (`day_count_convention` of `0`) for that.\n\n`=YEARFRAC(D2,E2,0)*C2*A2`"
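As a rough cross-check of what the actual/actual convention computes, here is a small Python sketch. This is an illustration only — Google Sheets' exact YEARFRAC algorithm differs in edge cases, and this version ignores February-29 start dates:

```python
from datetime import date

def yearfrac_actual(start, end):
    """Rough actual/actual year fraction: whole anniversary years,
    plus leftover days divided by the actual length of the final year."""
    if start > end:
        start, end = end, start
    years = end.year - start.year
    # Step back one year if the anniversary in end's year hasn't arrived yet.
    if start.replace(year=start.year + years) > end:
        years -= 1
    anniversary = start.replace(year=start.year + years)
    year_length = (start.replace(year=start.year + years + 1) - anniversary).days
    return years + (end - anniversary).days / year_length

print(yearfrac_actual(date(2000, 1, 1), date(2010, 1, 1)))  # 10.0
print(yearfrac_actual(date(2020, 1, 1), date(2020, 7, 1)))  # 182 days of a 366-day year
```

The second call lands in a leap year, so the divisor is 366 rather than 365 — exactly the difference between conventions 1 and 3 above.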
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8381377,"math_prob":0.9603105,"size":3115,"snap":"2022-40-2023-06","text_gpt3_token_len":728,"char_repetition_ratio":0.13050467,"word_repetition_ratio":0.02357564,"special_character_ratio":0.22664526,"punctuation_ratio":0.0976431,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9846308,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-31T11:18:51Z\",\"WARC-Record-ID\":\"<urn:uuid:5526fc64-d887-4dbf-848a-cc8c9d6f5d59>\",\"Content-Length\":\"87472\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a4f5d768-38eb-4abc-8b00-f73409438914>\",\"WARC-Concurrent-To\":\"<urn:uuid:14630398-008d-4298-a35c-c8b2a6316eba>\",\"WARC-IP-Address\":\"70.32.23.85\",\"WARC-Target-URI\":\"https://sheetshelp.com/yearfrac/\",\"WARC-Payload-Digest\":\"sha1:DH7M4FTCHHKV5BYOXCLKUWYIWZ4PANSE\",\"WARC-Block-Digest\":\"sha1:2BLMTQUZIOTVF4NMLPJAU4MFQHRETDZB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499857.57_warc_CC-MAIN-20230131091122-20230131121122-00013.warc.gz\"}"} |
https://wiki.helsinki.fi/plugins/viewsource/viewpagesrc.action?pageId=70229234 | [
"# Statistical methods in medicine and epidemiology, fall 2011\n\n### News\n\nThe last lecture is on Tuesday (December 13). We start in\nthe computer class at 3:15 p.m. (see an extra R exercise below;\nwe will do this partly). After the computer class, a wrap-up\nsession in B120.\n\nSurvival regression in R\n\nThe second course exam will be on Thursday, December 15. We will\nstart at 4 p.m. sharp. The exam ends at 7 p.m. The exam will cover\nthe 2nd part of the course (see below for a detailed list of topics).\n\nKari Auranen\n\n6-10 cu.\n\n### Lecture summaries\n\n1. The binary model. Conditional probability. Bayes formula. (Chapters 1 and 2)\n2. Likelihood function (Chapter 3)\n3. Estimation of survival. The hazard function. (Chapters 4 and 5)\n4. Time. Competing risks. Censoring. (Chapters 6 and 7)\n5. Approximating likelihoods. Statistical paradigms. (Chapters 8, 9 and 10)\n\n6. P values (Chapter 11)\n\n7. P values continued\n\n8. Estimation of relative rates (Chapter 13)\n\n9. Confounding. Stratification. Standardisation (Chapters 14 and 15)\n\n10. The lecture on confounding continued.\n\n11. Case-control studies (Chapter 16)\n\n12. Lecture. The lecture on case-control studies continued.\n\n13. Matching\n\n14. Part I: Poisson regression\n\n14. Part II: Logistic regression\n\n15. Sample size\n\n16. Hypothesis testing. Interaction\n\n17. Model choice\n\n18. Cox regression\n\nErrata\n\n### Exercises\n\n1. Exercise\n\nData file (atresia.dat) The data matrix has 4 columns: individual ID, age at diagnosis, age at death or censoring,\nfailure indicator (1=death/0=censoring)\n\n2. Exercise\n\n3. Exercise\n\n4. Exercise\n\n5. Exercise\n\n6. Exercise\n\n7. Exercise (R practical)\n\n8. Exercise\n\n9. Exercise (R practical)\n\n### Exams\n\nThe first course exam is on October 18 (Tuesday) at 4-7 p.m. Please\nnote the date which has changed from Thursday (Oct 20) to Tuesday (Oct 18).\n\nThe second course exam is on December 15 (Thursday) at 4-7 p.m. 
The\nexam covers lectures 12-19, exercises 6-9 and the following chapters\nfrom the textbook:\n\nChapters 16, 17, 18 (except 18.2), 19, 21, 22, 23 (except 23.4), 24, 25, 26, 29, 30.\n\n### Registration\n\nDid you forget to register? What to do.\n\nGroup | Day | Time | Place | Instructor\n\n1."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.79967535,"math_prob":0.6450999,"size":2345,"snap":"2022-27-2022-33","text_gpt3_token_len":700,"char_repetition_ratio":0.13583939,"word_repetition_ratio":0.0,"special_character_ratio":0.29594883,"punctuation_ratio":0.2035225,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9545035,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-18T07:20:11Z\",\"WARC-Record-ID\":\"<urn:uuid:731c3e78-3fbe-4c1f-b89c-5366a25b1c8a>\",\"Content-Length\":\"9821\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9f6cb91c-ce4b-4029-841d-1ff7293cb1ed>\",\"WARC-Concurrent-To\":\"<urn:uuid:7837394f-9439-493c-8940-e726ba649efe>\",\"WARC-IP-Address\":\"128.214.205.178\",\"WARC-Target-URI\":\"https://wiki.helsinki.fi/plugins/viewsource/viewpagesrc.action?pageId=70229234\",\"WARC-Payload-Digest\":\"sha1:ANMPCQS6GODXZECVAPAMYP5AOCCP6QC3\",\"WARC-Block-Digest\":\"sha1:CHWI267VYJ3YHCF2OBPU2PK5T6PHZ4VM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882573172.64_warc_CC-MAIN-20220818063910-20220818093910-00552.warc.gz\"}"} |
https://projecteuclid.org/journals/annals-of-statistics/volume-15/issue-4/An-Application-of-the-Kiefer-Wolfowitz-Equivalence-Theorem-to-a/10.1214/aos/1176350612.full | [
"Translator Disclaimer\nDecember, 1987 An Application of the Kiefer-Wolfowitz Equivalence Theorem to a Problem in Hadamard Transform Optics\nChing-Shui Cheng\nAnn. Statist. 15(4): 1593-1603 (December, 1987). DOI: 10.1214/aos/1176350612\n\n## Abstract\n\nLet $\\Omega = \\{\\mathbf{x} = (x_1, \\cdots, x_n)^T: 0 \\leq x_i \\leq 1\\}$ be the unit cube in $R^n$. For any probability measure $\\xi$ on $\\Omega$, let $\\mathbf{M}(\\xi) = \\int_\\Omega\\mathbf{xx}^T \\xi(d\\mathbf{x})$. Harwit and Sloane (1976) conjectured that if $\\mathbf{X}^\\ast$ is the incidence matrix of a balanced incomplete block design (BIBD) with $n$ treatments and $n$ blocks of size $(n + 1)/2$, then $\\mathbf{X}^\\ast$ minimizes $\\operatorname{tr}(\\mathbf{X}^T\\mathbf{X})^{-1}$ over the $n \\times n$ matrices with entries $0 \\leq x_{ij} \\leq 1$. This arises from a problem in spectroscopy. In order to solve the conjecture, we consider the more general problem of maximizing $j_a(\\mathbf{M}(\\xi))$ over the probability measures on $\\Omega$ for $-\\infty \\leq a \\leq 1$, where $j_0 (\\mathbf{M}(\\xi)) = \\{\\det \\mathbf{M}(\\xi))\\}^{1/n}, j_{-\\infty} (\\mathbf{M}(\\xi)) =$ the minimum eigenvalue of $\\mathbf{M}(\\xi)$ and $j_a(\\mathbf{M}(\\xi)) = \\{n^{-1} \\operatorname{tr}\\lbrack \\mathbf{M}(\\xi) \\rbrack^a\\}^{1/a}$ for other $a$'s. A complete solution is obtained by using the equivalence theorem in optimal design theory. Let $\\xi_k$ be the uniform measure on the vertices of $\\Omega$ with $k$ coordinates equal to 1. Then depending on the value of $a$, optimality is attained by $\\xi_k$ or a mixture of $\\xi_k$ and $\\xi_{k+1}$ with $k \\geq \\lbrack (n + 1)/2\\rbrack$. Optimal $\\xi$'s with a smaller support can be found by using BIBDs. 
It follows that if $n$ is odd and $\\mathbf{X}^\\ast$ is the block-treatment incidence matrix of a BIBD with $n$ treatments and $N$ blocks of size $(n + 1)/2$, then $\\mathbf{X}^\\ast$ minimizes $\\operatorname{tr}(\\mathbf{X}^T\\mathbf{X})^a$ for all $a < 0$ and maximizes $\\det(\\mathbf{X}^T\\mathbf{X})^a$ and $\\operatorname{tr}(\\mathbf{X}^T\\mathbf{X})^a$ for all $0 < a \\leq 1 - \\ln(n/2 + 1)/\\ln(n + 1)$ over the $N \\times n$ matrices with entries $0 \\leq x_{ij} \\leq 1$. Similar results are derived for the even case and the incidence matrices of BIBDs of larger block sizes.\n\n## Citation\n\nDownload Citation\n\nChing-Shui Cheng. \"An Application of the Kiefer-Wolfowitz Equivalence Theorem to a Problem in Hadamard Transform Optics.\" Ann. Statist. 15 (4) 1593 - 1603, December, 1987. https://doi.org/10.1214/aos/1176350612\n\n## Information\n\nPublished: December, 1987\nFirst available in Project Euclid: 12 April 2007\n\nzbMATH: 0629.62075\nMathSciNet: MR913576\nDigital Object Identifier: 10.1214/aos/1176350612\n\nSubjects:\nPrimary: 62K05\nSecondary: 62K10\n\nRights: Copyright © 1987 Institute of Mathematical Statistics\n\nJOURNAL ARTICLE\n11 PAGES",
null,
"SHARE\nVol.15 • No. 4 • December, 1987",
null,
""
] | [
null,
"https://projecteuclid.org/Content/themes/SPIEImages/Share_black_icon.png",
null,
"https://projecteuclid.org/images/journals/cover_aos.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.66019386,"math_prob":0.99989367,"size":2262,"snap":"2021-21-2021-25","text_gpt3_token_len":811,"char_repetition_ratio":0.14836138,"word_repetition_ratio":0.06853583,"special_character_ratio":0.34969053,"punctuation_ratio":0.07126437,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999894,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-24T00:50:49Z\",\"WARC-Record-ID\":\"<urn:uuid:e2efbae0-a617-4d5a-bc90-178980b12725>\",\"Content-Length\":\"144542\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f41b320a-7c83-4c77-a745-69faa0a425bb>\",\"WARC-Concurrent-To\":\"<urn:uuid:83c00ab8-edca-4b40-8f6f-cd7203f174e2>\",\"WARC-IP-Address\":\"45.60.100.145\",\"WARC-Target-URI\":\"https://projecteuclid.org/journals/annals-of-statistics/volume-15/issue-4/An-Application-of-the-Kiefer-Wolfowitz-Equivalence-Theorem-to-a/10.1214/aos/1176350612.full\",\"WARC-Payload-Digest\":\"sha1:HCVKFWHD5SGGTAD756H2QDKSTCF2HDAC\",\"WARC-Block-Digest\":\"sha1:66N4NDJOSVHCJXCUIBLQRYRXDNZC5FZA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623488544264.91_warc_CC-MAIN-20210623225535-20210624015535-00377.warc.gz\"}"} |
http://stgwww.topperlearning.com/ncert-solutions/cbse-class-11-science-mathematics/ncert-mathematics-xi/linear-inequalities | [
"# NCERT Solutions for Class 11-science Maths Chapter 6 - Linear Inequalities\n\nPage / Exercise\n\n## Chapter 6 - Linear Inequalities Exercise Ex. 6.1\n\nSolution 1",
null,
"Solution 2",
null,
"Solution 3",
null,
"Solution 4",
null,
"Solution 5",
null,
"Solution 6",
null,
"Solution 7",
null,
"Solution 8",
null,
"Solution 9",
null,
"Solution 10",
null,
"Solution 11",
null,
"Solution 12",
null,
"Solution 13",
null,
"Solution 14",
null,
"Solution 15",
null,
"Solution 16",
null,
"Solution 17",
null,
"Solution 18",
null,
"Solution 19",
null,
"Solution 20",
null,
"Solution 21",
null,
"Solution 22",
null,
"Solution 23",
null,
"Solution 24",
null,
"Solution 25",
null,
"Solution 26",
null,
"## Chapter 6 - Linear Inequalities Exercise Ex. 6.2\n\nSolution 1",
null,
"Solution 2",
null,
"Solution 3",
null,
"Solution 4",
null,
"Solution 5",
null,
"Solution 6",
null,
"Solution 7",
null,
"Solution 8",
null,
"Solution 9",
null,
"Solution 10",
null,
"## Chapter 6 - Linear Inequalities Exercise Ex. 6.3\n\nSolution 1",
null,
"Solution 2",
null,
"Solution 3",
null,
"Solution 4",
null,
"Solution 5",
null,
"Solution 6",
null,
"",
null,
"Solution 7",
null,
"",
null,
"Solution 8",
null,
"Solution 9",
null,
"Solution 10\n\nx + y ≥ 4 … (1)\n\n2x – y < 0 … (2)\n\nThe graphs of the lines x + y = 4 and 2x – y = 0 are drawn in the figure below.\n\nInequality (1) represents the region above the line x + y = 4 (including the line x + y = 4).\n\nIt is observed that (–1, 0) satisfies the inequality 2x – y < 0.\n\n[2(-1) – 0 = -2 < 0]\n\nTherefore, inequality (2) represents the half plane corresponding to the line 2x – y = 0 containing the point (-1, 0) [excluding the line 2x – y = 0].\n\nHence, the solution of the given system of linear inequalities is represented by the common shaded region including the points on the line x + y = 4 and excluding the points on the line 2x – y = 0 as follows:",
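The region reasoning in Solution 10 can be sanity-checked numerically. This small Python sketch (an illustration, not part of the NCERT text) tests sample points against the system x + y ≥ 4 and 2x − y < 0:

```python
def in_region(x, y):
    """True when (x, y) satisfies x + y >= 4 (boundary included)
    and 2x - y < 0 (boundary excluded)."""
    return x + y >= 4 and 2 * x - y < 0

print(in_region(1, 4))   # on x + y = 4, and 2*1 - 4 = -2 < 0  -> True
print(in_region(-1, 0))  # satisfies 2x - y < 0 but x + y < 4  -> False
print(in_region(2, 4))   # on the excluded line 2x - y = 0     -> False
```

The three test points mirror the solution's argument: the first boundary is included, the second is not.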
null,
"Solution 11",
null,
"Solution 12",
null,
"Solution 13",
null,
"Solution 14",
null,
"Solution 15",
null,
"## Chapter 6 - Linear Inequalities Exercise Misc. Ex.\n\nSolution 1",
null,
"Solution 2",
null,
"Solution 3",
null,
"Solution 4",
null,
"Solution 5",
null,
"",
null,
"Solution 6",
null,
"Solution 7",
null,
"Solution 8",
null,
"Solution 9",
null,
"Solution 10",
null,
"Solution 11",
null,
"Solution 12",
null,
"Solution 13",
null,
"Solution 14",
null,
""
] | [
null,
"https://images.topperlearning.com/topper/bookquestions/6699_a1_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6708_a10_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6709_a11_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6710_a12_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6711_a13_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6712_a14_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6713_a15_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6714_a16_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6715_a17_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6716_a18_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6717_a19_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6700_a2_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6718_a20_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6719_a21_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6720_a22_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6721_a23_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6722_a24_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6723_a25_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6724_a26_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6701_a3_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6702_a4_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6703_a5_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6704_a6_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6705_a7_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6706_a8_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6707_a9_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6725_a1_6.2_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6734_a10_6.2_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6726_a2_6.2_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6727_a3_6.2_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6728_a4_6.2_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6729_a5_6.2_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6730_a6_6.2_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6731_a7_6.2_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6732_a8_6.2_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6733_a9_6.2_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6735_a1_6.3_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6744_a10_6.3_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6745_a11_6.3_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6746_a12_6.3_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6747_a13_6.3_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6748_a14_14.1_6.3_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6748_a14_14.2_6.3_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6749_a15_15.1_6.3_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6749_a15_15.2_6.3_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6736_a2_6.3_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6737_a3_6.3_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/tinymce/imagemanager/files/602320eb8965dc0ad576e213c8594c885e4e7467657501.16507947mathsbp.PNG",
null,
"https://images.topperlearning.com/topper/bookquestions/6739_a5_6.3_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6740_a6_6.3_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6741_a7_6.3_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6742_a8_6.3_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6743_a9_6.3_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6750_a1_mis_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6759_a10_mis_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6760_a11_mis_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6761_a12_mis_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6762_a13_13.1_mis_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6762_a13_13.2_mis_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6763_a14_mis_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6751_a2_mis_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6752_a3_mis_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6753_a4_mis_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6754_a5_mis_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6755_a6_mis_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6756_a7_mis_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6757_a8_mis_chapter_6_XI_Math.GIF",
null,
"https://images.topperlearning.com/topper/bookquestions/6758_a9_mis_chapter_6_XI_Math.GIF",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7915082,"math_prob":0.99647784,"size":1956,"snap":"2020-10-2020-16","text_gpt3_token_len":523,"char_repetition_ratio":0.1762295,"word_repetition_ratio":0.090361446,"special_character_ratio":0.26329243,"punctuation_ratio":0.06006006,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9967101,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136],"im_url_duplicate_count":[null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-25T13:48:10Z\",\"WARC-Record-ID\":\"<urn:uuid:5cf65de6-6205-4620-ba1c-b55dba7313c6>\",\"Content-Length\":\"188665\",\"Content-Type\":\"application/http; 
msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:46234e9c-fb1e-4702-b671-20899c07221c>\",\"WARC-Concurrent-To\":\"<urn:uuid:56f75f24-123a-4b28-84d1-6aab15793ee9>\",\"WARC-IP-Address\":\"23.12.146.151\",\"WARC-Target-URI\":\"http://stgwww.topperlearning.com/ncert-solutions/cbse-class-11-science-mathematics/ncert-mathematics-xi/linear-inequalities\",\"WARC-Payload-Digest\":\"sha1:GCRLCTQB6JRTAQXJV4UVKPKD5OX5YWZG\",\"WARC-Block-Digest\":\"sha1:M3HDFN3WESJ5BESH3MW5WPFP4KEUPIOK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875146066.89_warc_CC-MAIN-20200225110721-20200225140721-00017.warc.gz\"}"} |
http://ecchacks.cr.yp.to/realclock.html | [
"# ECCHacks\n\nA gentle introduction\nto elliptic-curve cryptography\n\n## The real clock\n\nThe clock is a curve in the plane. Specifically, the points on the clock are the points (x,y) satisfying the equation x^2+y^2=1. Here are some examples of points on the clock:\n\n• (0,1), also known as \"12:00\".\n• (0,-1), also known as \"6:00\".\n• (1,0), also known as \"3:00\".\n• (-1,0), also known as \"9:00\".\n• (sqrt(1/2),sqrt(1/2)), also known as \"1:30\".\n• (sqrt(3/4),1/2), also known as \"2:00\".\n\nThe following Python function, oclock, returns a clock point given the time (12, 6, 3, 9, 1.5, etc.):\n\n``` import math\n\ndef oclock(time):\n    radians = time * math.pi / 6\n    return (math.sin(radians), math.cos(radians))\n\n# example:\nP = oclock(2)\nprint P\n# output: (0.8660254037844386, 0.5000000000000001)\n\n# math people write ^ for exponentiation\n# in python, ^ is xor, and ** is exponentiation\nprint P[0]**2 + P[1]**2 == 1\n# output: True\n```\n\nThis function (like your smartwatch) isn't perfectly accurate: the correct answer for P[1] would have been exactly 0.5, and the correct answer for P[0] would have had infinitely many digits. Try comparing oclock(3) to oclock(15). Python isn't performing exact computations on real numbers; it's performing limited-precision computations on \"floating-point numbers\". Don't worry that this imprecision will cause problems for cryptography: soon we'll replace real numbers with exact integers.\n\nVersion: This is version 2014.12.27 of the realclock.html web page."
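The floating-point imprecision the page describes is easy to observe directly. Here is a Python 3 rewrite of `oclock` (the page's snippet is Python 2; in Python 3, `print` is a function), with the comparison of the 3:00 and 15:00 positions the text suggests:

```python
import math

def oclock(time):
    # A clock point (x, y) = (sin(t*pi/6), cos(t*pi/6)),
    # so time 12 maps to roughly (0, 1) and time 3 to roughly (1, 0).
    radians = time * math.pi / 6
    return (math.sin(radians), math.cos(radians))

P = oclock(2)
print(P)                        # roughly (0.866..., 0.5), not exactly
print(P[0]**2 + P[1]**2)        # close to 1, up to rounding error
print(oclock(3), oclock(15))    # the same clock position, computed twice
```

The last line shows two computations of the same position that need not agree bit-for-bit — the imprecision the page warns about before switching to exact integer arithmetic.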
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.84185636,"math_prob":0.9550874,"size":1235,"snap":"2022-27-2022-33","text_gpt3_token_len":328,"char_repetition_ratio":0.10885459,"word_repetition_ratio":0.0,"special_character_ratio":0.29068825,"punctuation_ratio":0.17213115,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9982486,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-05T01:24:00Z\",\"WARC-Record-ID\":\"<urn:uuid:2e6d84c7-a032-451d-bc8a-b0e99ec76331>\",\"Content-Length\":\"4069\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:783c75c2-1fa5-45aa-a4d5-03f28ee7a269>\",\"WARC-Concurrent-To\":\"<urn:uuid:52b72910-4aab-40d8-bb93-19f274a28a24>\",\"WARC-IP-Address\":\"131.193.32.109\",\"WARC-Target-URI\":\"http://ecchacks.cr.yp.to/realclock.html\",\"WARC-Payload-Digest\":\"sha1:DHAOW6MS5JVSE2S4VALSNNTEOXGTHKKS\",\"WARC-Block-Digest\":\"sha1:4S5P62QAG64BFKTWCZT363HHC2S2JM73\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104506762.79_warc_CC-MAIN-20220704232527-20220705022527-00245.warc.gz\"}"} |
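The realclock record above ends by inviting the reader to compare oclock(3) with oclock(15). A small self-contained sketch (Python 3, reusing only the page's own construction — nothing here is part of the ECCHacks code base) makes the limited-precision point concrete:

```python
import math

def oclock(time):
    # Same construction as the page: angle in radians, then (sin, cos).
    radians = time * math.pi / 6
    return (math.sin(radians), math.cos(radians))

p3, p15 = oclock(3), oclock(15)

# 3:00 and 15:00 name the same clock point (1, 0); the two results agree
# to many digits but neither is exact -- e.g. the y-coordinate of 3:00
# comes out as a tiny nonzero number rather than 0.
print(p3, p15)
print(max(abs(p3[0] - p15[0]), abs(p3[1] - p15[1])) < 1e-9)  # True
print(p3[1] == 0.0)  # False: limited-precision floating point
```

The discrepancy is harmless here, and — as the page says — it disappears entirely once real numbers are replaced by exact integer arithmetic over a finite field.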
https://www.daniweb.com/programming/web-development/threads/426497/sort-table-php | [
"I have a 2 dim array (5x5) - I want to sort random integers that are contained within the array (from smallest to largest).... When I use asort() function I get the original table back... Also, the bubble sort didn't work... it only sorts the rows independently....\n\nCan someone PLEASE help and point out what is wrong.... feel free to write back a few lines of code that may work....\n\nThanks!\n\n<?php\n// Assign values ....\n\n$table = array();\n$x=0;\n$min=60;\n$max=101;\n\necho \"<table border=\\\"1\\\">\";\nfor($row=1;$row<=4; $row++){\n\necho \"<tr>\\n\";\n\nfor($col=1;$col<=5; $col++){\n$x=rand($min,$max);\n$table[$row][$col] = $x;\n\necho\"<td>$x</td>\\n\";\n}\necho \"</tr>\";\n}\necho \"</table>\";\nasort($table);\n\n//Try here to re-display the table using BUBBLE SORT - each row gets sorted but not the entire table...\n\necho \"<table border=\\\"1\\\">\";\nfor($row=1;$row<=5; $row++){\n\necho \"<tr>\\n\";\n\nfor($col=1;$col<=5; $col++){\n\nfor($j=1;$j<=5;$j++){\n\nif ($table[$row][$col]>$table[$row][$col+1]){\n\n$temp=$table[$row][$col];\n$table[$row][$col]=$table[$row][$col+1];\n$table[$row][$col+1]=$temp;\n}\n}\n\n}\necho \"</tr>\";\n}\necho \"</table>\";\n\n//\nasort($table); // The asort() does not affect the table - it gives back the same one....\n\necho \"<table border=\\\"1\\\">\";\nfor($row=1;$row<=4; $row++){\n\necho \"<tr>\\n\";\n\nfor($col=1;$col<=5; $col++){\n\n$y= $table[$row][$col];\n\necho\"<td>$y</td>\\n\";\n}\necho \"</tr>\";\n}\necho \"</table>\";\n?>\n\n:)\n\nSorting a multi-dimension array takes some pre-work before you sort to make it work. Follow the link and have a look. I think that will solve your problem.\n\nThanks for your quick response!\n\nApparently, I do have knowledge on how to do arrays... but the link you gave me wasn't very helpful... do you have any other solutions?\n\nThanks!\n\nI don't intend to debug your code but I can tell you that the code that I sent you the link for will sort a two-dimensional array. When I originally put it together, I spent a lot of time researching how to do it. This solution was based on what I found. If you don't find it helpful then I obviously don't understand your problem. That's all I have for you so good luck.\n\nBe so kind as to provide some sample input and output, perhaps that will clear up what you want to achieve.\n\nWell, I am trying to take a table that is randomly sorted, and I wanted to put it in order (biggest to smallest)... apparently, whatever I do, it doesn't work... And I filled the array up with a loop... (maybe my loop is designed wrong?)\n\nThat's what I am trying to do!\nThanks!"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.50103563,"math_prob":0.7649889,"size":1392,"snap":"2019-43-2019-47","text_gpt3_token_len":433,"char_repetition_ratio":0.17867436,"word_repetition_ratio":0.094444446,"special_character_ratio":0.42385057,"punctuation_ratio":0.25617284,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9564366,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-17T20:14:10Z\",\"WARC-Record-ID\":\"<urn:uuid:cd2a7418-1cfa-48f5-a2e5-941e9496dad1>\",\"Content-Length\":\"58158\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4c8fc5fa-eb88-4a85-bdaa-cf90916be2c9>\",\"WARC-Concurrent-To\":\"<urn:uuid:77f811e1-a63f-4560-b8ab-c6b922f9e154>\",\"WARC-IP-Address\":\"169.55.25.107\",\"WARC-Target-URI\":\"https://www.daniweb.com/programming/web-development/threads/426497/sort-table-php\",\"WARC-Payload-Digest\":\"sha1:KLASEMFHDTS7CCAGV5TDR5MUEKN2UQ4O\",\"WARC-Block-Digest\":\"sha1:USJHU2S7L5O274H5BKJIUIZ5RUCTPNL3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986676227.57_warc_CC-MAIN-20191017200101-20191017223601-00351.warc.gz\"}"} |
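The diagnosis in the thread above is that the poster's inner loops only compare cells inside a single row, so each row sorts on its own. The usual fix — flatten, sort once, rebuild the rows — is sketched below in Python rather than PHP (a language swap for illustration; none of this is the thread's actual code):

```python
def sort_grid(grid):
    # Flatten the 2-D table, sort the flat list once, then slice it
    # back into rows -- the whole table ends up in order, not just
    # each row independently.
    rows, cols = len(grid), len(grid[0])
    flat = sorted(v for row in grid for v in row)
    return [flat[r * cols:(r + 1) * cols] for r in range(rows)]

grid = [[93, 61, 100, 77, 68],
        [85, 60,  99, 72, 81],
        [64, 97,  88, 70, 76]]
print(sort_grid(grid))
```

In PHP the same shape works with a nested foreach into a flat array, sort(), and array_chunk() to restore the rows.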
https://www.jonah-reinhart.com/single-post/nurbs-curves-translateuvn | [
"",
null,
"# Nurbs Curves + translateUVN\n\nWhen doing a joint-based rig it's quite common to have a setup where a small number of controls drive the cv's of a nurb curve which has a joint on it for each span. Here is an example from one of Josh Sobel's rigs.",
null,
"Each joint is attached to a curve and each cv of the curve is driven by a control. There's often extra stuff and the exact setup varies but that is the basic core concept.\n\nA lot of times riggers choose to use a nurbs curve like this because it ensures that they get clean smooth shapes that would be hard to achieve by painting the weights for one joint per control. But if you could get the same results with fewer joints that'd be great right?\n\nI've already shown off how animCurves can be used to split weights and control the falloff between controls very easily, and we can use that same approach here. The influence each CV has on each point on the curve comes from the basis-function. I'm not going to go into any math related to these functions but what's important to notice is that each curve in the basis function looks like what the curve would look like if you moved a single CV.",
null,
"Now these functions are for bezier curves (so one span of a nurbs curve) but the basis functions for full nurbs curves can be really complicated so we're gonna do something a little hacky :3",
null,
"Here I have a curve roughly the shape of the neutral brow in the example from Josh Sobel, and I'm going to run a script to generate an animCurve in the shape of the basis function for each CV. Here's the code (super janky proof-of-concept stuff).\n\n```\nimport maya.cmds as cmds\nimport maya.api.OpenMaya as om2\n\ndef generate_animCurve_from_nurbs(splitter, inputCurve):\n    # flatten the nurbs\n    sel = om2.MSelectionList()\n    sel.add(inputCurve)\n    sel.add(splitter)\n    oCurve = sel.getDependNode(0)\n    dSplitter = sel.getDagPath(1)\n\n    # create the flattened curve (keep only the x component in splitter space)\n    mFnCurve = om2.MFnNurbsCurve(oCurve)\n    curve_pnts = mFnCurve.cvPositions()\n    splitterMatrix = dSplitter.inclusiveMatrixInverse()\n    curve_pnts = [om2.MPoint((pnt * splitterMatrix).x, 0, 0) for pnt in curve_pnts]\n    mFnNewCurve = om2.MFnNurbsCurve()\n    newCurve = mFnNewCurve.create(\n        curve_pnts,\n        mFnCurve.knots(),\n        mFnCurve.degree,\n        mFnCurve.form,\n        False,\n        True\n    )\n\n    # for each cv\n    for i in range(len(curve_pnts)):\n        newAnimCurve = cmds.createNode('animCurveUU')\n        iterations = 100.0\n        for j in range(int(iterations) + 1):\n            # get the param along the curve\n            sampleX = j * 2.0 / iterations - 1.0\n            samplePnt = om2.MPoint(sampleX, 0, 0)\n            samplePnt, param = mFnNewCurve.closestPoint(samplePnt)\n\n            # move it one unit\n            curve_pnts[i] = curve_pnts[i] + om2.MVector(0, 1, 0)\n            mFnNewCurve.setCVPositions(curve_pnts)\n\n            # get the height\n            samplePnt = mFnNewCurve.getPointAtParam(param)\n\n            # move it back one unit\n            curve_pnts[i] = curve_pnts[i] + om2.MVector(0, -1, 0)\n            mFnNewCurve.setCVPositions(curve_pnts)\n\n            # create a key on an animCurve for that point\n            cmds.select(newAnimCurve)\n            cmds.setKeyframe(v=samplePnt.y, f=sampleX)\n\ngenerate_animCurve_from_nurbs('locator1', 'curveShape1')\n\n```",
null,
"What I would do at this point is create an animCurve with the minimum number of keys and fit it to these curves and then just use that. In the future I may actually put in the work to create that simple curve automatically from the curve but this is fine for now.\n\nWhat's really powerful about this is that now we get the benefits of nurbs curves (super clean interpolation) and we can use the weightSplitter with a custom deformer like the translateUVN to get all the benefits of that approach too. A Hannah Montana situation.\n\nOne thing you might notice in the code is that I am not stepping along the parameter, I'm stepping along the X-axis. If the CVs are adjusted left and right the graph will look slightly different,",
null,
"I'll be putting some additional thought into this, because the results are better than what you get when manually adjusting animCurves, so if I can make it more user-friendly that'd be great.\n\nThat's it just sharing a little trick."
] | [
null,
"https://static.wixstatic.com/media/2692c3_e50951cd00e64892bec8fc94351f7fb8~mv2_d_3000_3000_s_4_2.png/v1/fill/w_1920,h_1920,al_c,usm_0.66_1.00_0.01/2692c3_e50951cd00e64892bec8fc94351f7fb8~mv2_d_3000_3000_s_4_2.png",
null,
"https://static.wixstatic.com/media/2692c3_f24c2e55759143f8ba1233172ab3419c~mv2.png/v1/fill/w_775,h_727,al_c/2692c3_f24c2e55759143f8ba1233172ab3419c~mv2.png",
null,
"https://static.wixstatic.com/media/2692c3_2e360e9de3204dd78ef8d95665e1671b~mv2.png/v1/fill/w_847,h_681,al_c/2692c3_2e360e9de3204dd78ef8d95665e1671b~mv2.png",
null,
"https://static.wixstatic.com/media/2692c3_afb13c9212dc4b668b31324dc395a8ed~mv2.png/v1/fill/w_552,h_446,al_c/2692c3_afb13c9212dc4b668b31324dc395a8ed~mv2.png",
null,
"https://static.wixstatic.com/media/2692c3_4e1a5fff83394a87b347a7eb5c231c31~mv2.png/v1/fill/w_631,h_386,al_c/2692c3_4e1a5fff83394a87b347a7eb5c231c31~mv2.png",
null,
"https://static.wixstatic.com/media/2692c3_787e0f98f62d4318acf974ebca05dd15~mv2.png/v1/fill/w_1000,h_336,al_c,usm_0.66_1.00_0.01/2692c3_787e0f98f62d4318acf974ebca05dd15~mv2.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.86042774,"math_prob":0.9694526,"size":3943,"snap":"2021-43-2021-49","text_gpt3_token_len":1003,"char_repetition_ratio":0.12820514,"word_repetition_ratio":0.0066115703,"special_character_ratio":0.22267309,"punctuation_ratio":0.101626016,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97952336,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,2,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-11-29T11:12:11Z\",\"WARC-Record-ID\":\"<urn:uuid:3de5d12c-17a9-4d76-af87-5d3261ba9d07>\",\"Content-Length\":\"1051049\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:910e315b-dba5-4602-aa11-fdc91963a1f9>\",\"WARC-Concurrent-To\":\"<urn:uuid:e7114357-79f3-4b74-86ce-a5642bac5b5a>\",\"WARC-IP-Address\":\"34.117.168.233\",\"WARC-Target-URI\":\"https://www.jonah-reinhart.com/single-post/nurbs-curves-translateuvn\",\"WARC-Payload-Digest\":\"sha1:W5WT52JEZDXABUFDW42TCZ4KERS736JM\",\"WARC-Block-Digest\":\"sha1:GMMLTOQDGMNPKDW3H6FOMMTHX2H4HQ7X\",\"WARC-Truncated\":\"length\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964358705.61_warc_CC-MAIN-20211129104236-20211129134236-00456.warc.gz\"}"} |
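The blog's sampling trick — nudge one CV up a unit, read the curve height, and the height change traces the basis function — works because curve evaluation is linear in the control points. A Maya-free sketch of the same idea on a single cubic Bézier span (hypothetical helper names, not the author's script):

```python
from math import comb

def bernstein(i, n, t):
    # Closed-form basis: C(n, i) * t^i * (1 - t)^(n - i).
    return comb(n, i) * t ** i * (1 - t) ** (n - i)

def bezier_y(ctrl_y, t):
    # Curve height = basis-weighted sum of control heights.
    n = len(ctrl_y) - 1
    return sum(y * bernstein(i, n, t) for i, y in enumerate(ctrl_y))

ctrl = [0.0, 0.3, 0.1, 0.0]      # arbitrary control heights
i, t = 1, 0.4
moved = list(ctrl)
moved[i] += 1.0                  # "move it one unit"

# The sampled difference reproduces the basis function exactly.
delta = bezier_y(moved, t) - bezier_y(ctrl, t)
print(abs(delta - bernstein(i, 3, t)) < 1e-12)  # True
```

This is why the generated animCurves come out so clean: the script is effectively plotting the basis functions directly rather than eyeballing them.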
https://mms.metalwihen.com/fifthsemester/vhdl-lab | [
"### VHDL LAB",
null,
"#### E2: Write a program that takes an input serial bit stream and outputs a '1' whenever the sequence \"01010\" occurs; overlap must also be considered.\n\nposted Nov 17, 2012, 11:18 PM by Neil Mathew\n\nAlthough compiled, untested. :/\n\nSOURCE CODE:\n\n```\nlibrary IEEE;\nuse IEEE.STD_LOGIC_1164.all;\nuse IEEE.std_logic_arith.all;\n\nentity pattern_find_en is\nport( Sin: in bit; -- serial input bit\nSout: out bit); -- outputs 1 if \"01010\" occurs\nend pattern_find_en;\n\narchitecture arch of pattern_find_en is\nsignal temp: bit_vector(4 downto 0); -- can be variable too\nbegin\nprocess( Sin )\nvariable count: integer := 0;\nbegin\n-- At first, it fills up till 5 input bits are available for check.\n-- shifts values towards the right, and keeps adding the new bit for check\nif count=0 then\ntemp(0) <= Sin;\ncount:=count+1;\nelsif count < 5 then\ntemp <= temp srl 1;\ntemp(0) <= Sin;\ncount:=count+1;\nelse\ntemp <= temp srl 1;\ntemp(0) <= Sin;\nend if;\n-- the check\nif temp = \"01010\" then\nSout <= '1';\nelse\nSout <= '0';\nend if;\nend process;\nend arch;\n```\n\n#### E1 : VHDL code to count ones in a vector array of 8 bits.\n\nposted Nov 17, 2012, 10:42 PM by Neil Mathew\n\nSOURCE CODE:\n\n```\nlibrary IEEE;\nuse IEEE.STD_LOGIC_1164.all;\nuse IEEE.STD_LOGIC_unsigned.all;\n\nentity count_ones_en is\nport( Sin: in std_logic_vector( 7 downto 0 );\ncount: out integer );\nend count_ones_en;\n\narchitecture arch of count_ones_en is\nbegin\nprocess(Sin)\nvariable temp: Integer:=0;\nbegin\ntemp := 0; -- reset each run, otherwise the count accumulates across events\nlabelloop: for i in 0 to 7 loop\nif( sin(i) = '1') then\ntemp := temp + 1;\nend if;\nend loop;\nCOUNT <= temp;\nend process;\nend arch;\n```\n\n#### Flip Flops\n\nposted Nov 8, 2012, 6:30 AM by Neil Mathew [ updated Nov 8, 2012, 6:15 PM ]\n\n* Need to check if truth tables match with the NAND gates FLIP FLOP.\n\n* The programs were never 
performed by me either.\n\nSR FLIP FLOP\n\nS R | Qnext | Comment\n0 0 | Qprev | Hold state\n0 1 | 0 | Reset\n1 0 | 1 | Set\n1 1 | X | Metastable\n\n```\nLIBRARY ieee;\nUSE ieee.std_logic_1164.all;\nUSE ieee.std_logic_arith.all;\n\nENTITY sr IS\nport ( S,R, CLK: in std_logic; Q, Q2 : inout std_logic);\nEND ENTITY sr;\n\nARCHITECTURE sr2 OF sr IS\nBEGIN\n\nPROCESS(CLK)\nBEGIN\n\nIF RISING_EDGE(CLK) THEN\n\nIF S='0' AND R='0' THEN\nQ <= Q;\nQ2 <= Q2;\nELSIF S='0' AND R='1' THEN\nQ <= '0';\nQ2 <= '1';\nELSIF S='1' AND R='0' THEN\nQ <= '1';\nQ2 <= '0';\nELSIF S='1' AND R='1' THEN\nQ <= 'X';\nQ2 <= 'X';\nEND IF;\n\nEND IF;\nEND PROCESS;\nEND ARCHITECTURE sr2;\n```\n\nD FLIP FLOP\n\nD | Qnext | Comment\n0 | 0 | Express D at Q\n1 | 1 | Express D at Q\nX | Qprev | Hold state\n\n```\nLIBRARY ieee;\nUSE ieee.std_logic_1164.all;\nUSE ieee.std_logic_arith.all;\n\nENTITY dff IS\nport(D, clk : in std_logic; Q, Qbar: out std_logic);\nEND ENTITY dff;\n\nARCHITECTURE dff2 OF dff IS\nBEGIN\nprocess(Clk)\nBegin\n\nif RISING_EDGE(Clk) then\nQ<= D;\nQbar<= not D;\nend if;\n\nend process;\n\nEND ARCHITECTURE dff2;\n```\n\nJ K FLIP FLOP\n\nCharacteristic table:\n\nJ K | Qnext | Comment\n0 0 | Qprev | Hold state\n0 1 | 0 | Reset\n1 0 | 1 | Set\n1 1 | not Qprev | Toggle\n\n```\nLIBRARY ieee;\nUSE ieee.std_logic_1164.all;\nUSE ieee.std_logic_arith.all;\n\nENTITY jkff IS\n\nport( J,K,clk: in std_logic;\nQ, Qbar : inout std_logic);\n\nEND ENTITY jkff;\n\nARCHITECTURE jkff12 OF jkff IS\nBEGIN\nprocess (clk)\nBegin\n\nif RISING_EDGE(clk) then\n\nif J='0' and K='0' then\nnull;\n\nelsif J='0' and K='1' then\nQ<='0';\nQbar<='1';\n\nelsif J= '1' and K='0' then\nQ<='1';\nQbar<='0';\n\nelsif J='1' and K='1' then\nQ<=not 
Q;\nQbar<=Q;\n\nend if;\nend if;\n\nend process;\nEND ARCHITECTURE jkff12;\n```\n\nT FLIP FLOP\n\nT Q | Qnext | Comment\n0 0 | 0 | Hold state\n0 1 | 1 | Hold state\n1 0 | 1 | Toggle\n1 1 | 0 | Toggle\n\n```\nLIBRARY ieee;\nUSE ieee.std_logic_1164.all;\nUSE ieee.std_logic_arith.all;\n\nENTITY T_1 IS\nport ( CLOCK: in std_logic; Q, T: in std_logic;\nQNEXT: out std_logic);\nEND ENTITY T_1;\n\nARCHITECTURE TFF OF T_1 IS\nSIGNAL QTEMP: std_logic;\nBEGIN\n\nQNEXT<=QTEMP;\n\nPROCESS(CLOCK)\nBEGIN\n\nIF RISING_EDGE(CLOCK) THEN\n\nif(T='0' AND Q='0') then\nQTEMP<='0';\nelsif(T='0' AND Q='1') then\nQTEMP<='1';\nelsif(T='1' AND Q='0') then\nQTEMP<='1';\nelse\nQTEMP<= '0';\n\nend if;\n\nend if;\nend process;\n\nEND ARCHITECTURE TFF;\n```\n\nT | Qnext | Comment\n0 | Q | Hold state\n1 | Q' | Toggle\n\n```\nIF RISING_EDGE(CLOCK) THEN\n\nif T='0' then\nQ<=Q; Qbar<=Qbar;\nelse\nQ<= Qbar; Qbar<= Q;\nend if;\n\nend if;\nend process;\n\nEND ARCHITECTURE TFF;\n```\n\n#### 03 Importing Other's work into your own project\n\nposted Aug 22, 2012, 12:35 PM by Neil Mathew [ updated Aug 22, 2012, 12:42 PM ]\n\nWhen you're doing the structural style, you realize you need to create the smaller components like and2, xor2, etc., which are usually already made by others who were in the lab before you. 
These are instructions to easily import these gates into your own project.\n\nStep 0: If you're keen on doing it yourself, this image explains how to add a new component.\n\nStep 2: Search for the components you need, and on selecting them, click Ok.\n\nI should mention that to avoid DATA BINDING issues, make sure the input and output signals of the component mentioned in the Structural Style program are named the same as those in the components you've added.\n\nThat is, if this code snippet is there in your Structural Style program:\n\n```\ncomponent and2\nport(a,b:in std_logic; y:out std_logic);\nend component;\n```\n\nThen, the and2 component you're using should have a, b as its inputs and y as its output, as specified in the entity. (That is, having x,y,z as the names would give you an error.)\n\n```\nENTITY AND2 IS\nPORT( A,B : IN STD_LOGIC;\nY : OUT STD_LOGIC );\nEND ENTITY AND2;\n\nENTITY NM_EN IS\nPORT( W,X : IN STD_LOGIC;\nZ : OUT STD_LOGIC );\nEND ENTITY NM_EN;\n```\n\n#### 00 Compare DataFlow, Behavioural, Structural\n\nposted Aug 22, 2012, 12:22 PM by Neil Mathew [ updated Aug 22, 2012, 12:25 PM ]\n\nI'm comparing the three using a 2:1 MUX example.\nThe three styles are different by some syntax (change or addition in code) and implementation when it comes to concept (like use of IF statements in Behavioural, or use of existing components in Structural)\n\n(Also, mentioning, VHDL is NOT case sensitive.)\n\n```\n-- DataFlow\nLIBRARY ieee;\nUSE ieee.std_logic_1164.all;\nUSE ieee.std_logic_arith.all;\n\nENTITY mux2isto4 IS\nport(a,b,s:in std_logic; y :out std_logic);\nEND ENTITY mux2isto4;\n\nARCHITECTURE mux2isto4_a OF mux2isto4 IS\nBEGIN\ny <= ((not s) and a) or (s and b);\nEND ARCHITECTURE mux2isto4_a;\n```\n\n```\n-- Behavioural\nLIBRARY ieee;\nUSE ieee.std_logic_1164.all;\nUSE ieee.std_logic_arith.all;\n\nENTITY mux2isto4 IS\nport(a,b,s:in std_logic; y :out std_logic);\nEND ENTITY mux2isto4;\n\nARCHITECTURE mux2isto4_a OF mux2isto4 IS\nBEGIN\nPROCESS(a,b,s)\nBEGIN\nif( s= '0' ) then\ny <= a;\nelse\ny <= b;\nend if;\nEND PROCESS;\nEND ARCHITECTURE mux2isto4_a;\n```\n\n```\n-- Structural\nLIBRARY ieee;\nUSE ieee.std_logic_1164.all;\nUSE ieee.std_logic_arith.all;\n\nENTITY mux2isto4 IS\nport(a,b,s:in std_logic; y :out std_logic);\nEND ENTITY mux2isto4;\n\nARCHITECTURE mux2isto4_a OF mux2isto4 IS\n-- components and signals BEFORE begin\ncomponent and2\nport(a,b:in std_logic; y:out std_logic);\nend component;\n\ncomponent not1\nport(a:in std_logic; b: out std_logic);\nend component;\n\ncomponent or2\nport(a,b:in std_logic; y:out std_logic);\nend component;\n\nsignal a_and_nots, not_s, b_and_s : std_logic;\nBEGIN\nInvert_S: not1 port map(s,not_s);\nAn_And_Operation: and2 port map(a, not_s, a_and_nots);\nAnother_And_Operation: and2 port map(s,b,b_and_s);\nFinal_Or_Operation: or2 port map(a_and_nots,b_and_s,y);\nEND ARCHITECTURE mux2isto4_a;\n```\n\n#### 02 Program, Compile & RUN\n\nposted Aug 22, 2012, 11:52 AM by Neil Mathew [ updated Aug 22, 2012, 12:01 PM ]\n\nStep 1: Double click on the component to edit.\n\nStep 2: Type the program. Screenshot is that of an AND gate component.",
null,
"Step 3: Once the program is done, look for the toolbar below. The green tick on the document icon basically compiles it (DesignChecker).",
null,
"And if there are no errors, it should give this:\n\nStep 4: Once that's done, look for the same toolbar and look for the M icon to start ModelSim Simulator.\n\nStep 5: If you're starting ModelSim for the first time, you'll see this window. Just click ok.\n\nStep 6: Now, the ModelSim window should appear. However, the Objects and Waveform window may not always appear with it. In such cases, click VIEW from the menu and select Objects and Wave",
null,
"Step 7: In the object window, add the signals to the Wave as shown:",
null,
"Step 8: Next, either FORCE values to the signals or define a CLOCK. Right-click on a signal:\ni) FORCE a value to a signal. Click Force. Then specify the value.\nii) DEFINE CLOCK for a signal. Click Clock. Then specify the period. I find it easier to specify periods in multiples of 2. If a is set to 50, then setting b to 100 gives a rather clear picture of the waves.\n\nStep 9: Finally, click the RUN button highlighted by the red circle. The waves should appear then.\n\n#### 01 Creating a project in FPGA Advantage 8.1\n\nposted Aug 22, 2012, 11:31 AM by Neil Mathew\n\nStep 1: Look up FPGA Advantage 8.1 LS PS from the Start Menu and click it.\n\nStep 2: File > New > Project\n\nStep 3: Specify Name of Project\n\nStep 4: Next, Next, Finish...\n\nStep 5: Specify VHDL File (as Category) and Combined (as File Type)",
null,
"Step 6: Name the Entity and Architecture of your new project. (both of them can share the same name)\n\nAnd Voila!"
] | [
null,
"https://mms.metalwihen.com/_/rsrc/1585207404000/system/app/images/icon_feed.gif",
null,
"http://mms.metalwihen.com/_/rsrc/1345661550222/fifthsemester/vhdl-lab/02programcompile/9Editor.png",
null,
"http://mms.metalwihen.com/_/rsrc/1345661549404/fifthsemester/vhdl-lab/02programcompile/10Compile.png",
null,
"http://mms.metalwihen.com/_/rsrc/1345661549807/fifthsemester/vhdl-lab/02programcompile/14MainScreenofModelSim.png",
null,
"http://mms.metalwihen.com/_/rsrc/1345661549921/fifthsemester/vhdl-lab/02programcompile/15objecttowave.png",
null,
"http://mms.metalwihen.com/_/rsrc/1345660312820/fifthsemester/vhdl-lab/01creatingaprojectinfpgaadvantage81/5VHDLCombined.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5894538,"math_prob":0.7669969,"size":9181,"snap":"2020-10-2020-16","text_gpt3_token_len":3186,"char_repetition_ratio":0.12781955,"word_repetition_ratio":0.21188475,"special_character_ratio":0.36401263,"punctuation_ratio":0.18432105,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96717525,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,3,null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-09T17:30:59Z\",\"WARC-Record-ID\":\"<urn:uuid:ad69df49-1283-4a05-b47e-8e278a00a0f0>\",\"Content-Length\":\"113616\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d813255a-d46d-46da-a341-aab3a2ec4419>\",\"WARC-Concurrent-To\":\"<urn:uuid:ef9fcd32-b24e-4bc1-b387-8b988f9fcf38>\",\"WARC-IP-Address\":\"104.24.102.6\",\"WARC-Target-URI\":\"https://mms.metalwihen.com/fifthsemester/vhdl-lab\",\"WARC-Payload-Digest\":\"sha1:6GPKWS2JK4QNAX7GUFSX3KTZG37P7TZW\",\"WARC-Block-Digest\":\"sha1:D4TVMGGMIDJPJ3GZU3DQ7PEAXMUQ42VE\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585371861991.79_warc_CC-MAIN-20200409154025-20200409184525-00035.warc.gz\"}"} |
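The E2 post above is flagged "compiled, untested", and its sliding-window idea is easy to sanity-check in software before simulating HDL. A small Python model of the same overlapping "01010" detector (a behavioural reference, not a translation of the VHDL):

```python
def detect_01010(bits):
    # Keep only the last five bits seen; a match never resets the
    # window, so overlapping occurrences are caught for free.
    window = ""
    out = []
    for b in bits:
        window = (window + b)[-5:]
        out.append(1 if window == "01010" else 0)
    return out

# Overlap check: in "0101010" the pattern completes at the 5th bit and
# again at the 7th, sharing three bits with the first match.
print(detect_01010("0101010"))  # [0, 0, 0, 0, 1, 0, 1]
```

Cross-checking a software model like this against the HDL testbench output is a quick way to catch logic errors before synthesis.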
https://pureportal.strath.ac.uk/en/publications/electromagnetic-field-quantization-in-absorbing-dielectrics | [
"# Electromagnetic field quantization in absorbing dielectrics\n\nR MATLOOB, R LOUDON, S M BARNETT, J JEFFERS\n\nResearch output: Contribution to journal › Article\n\n246 Citations (Scopus)\n\n## Abstract\n\nThe electromagnetic field is quantized in dielectric media that show both loss and dispersion. The complex dielectric function of the medium is assumed to be a known function and the loss is modeled by Langevin forces in the forms of noise current operators. The noise current correlation function is related to the assumed dielectric function by the fluctuation-dissipation theorem. Field quantization is carried out for the infinite homogeneous dielectric, the semi-infinite dielectric, and the dielectric slab, where the fields in the second and third cases are restricted to propagation perpendicular to the dielectric surfaces. The forms of the vector potential operator are obtained in the different spatial regions for all three geometries, and in each case the required canonical commutation relation for the vector potential and its conjugate generalized momentum operator is verified. The spatial dependence of the vacuum field fluctuations is calculated for the two dielectric geometries that have surfaces.\n\nOriginal language: English\nPages: 4823-4838\nNumber of pages: 16\nJournal: Physical Review A\nVolume: 52\nIssue: 6\nDOI: https://doi.org/10.1103/PhysRevA.52.4823\nPublication status: Published - Dec 1995\n\n## Keywords\n\n• quantum optics\n• spontaneous emissions\n• response functions\n• finite geometries\n• electrodynamics\n• conductors\n• dispersion"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.86044717,"math_prob":0.85583115,"size":1344,"snap":"2020-45-2020-50","text_gpt3_token_len":270,"char_repetition_ratio":0.13507463,"word_repetition_ratio":0.0,"special_character_ratio":0.1875,"punctuation_ratio":0.049261082,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9639156,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-12-01T16:27:55Z\",\"WARC-Record-ID\":\"<urn:uuid:ec715c55-64de-4b72-a66a-aa762dd26ed8>\",\"Content-Length\":\"43693\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:937116c7-8845-4725-b98f-98577e37c88b>\",\"WARC-Concurrent-To\":\"<urn:uuid:aaf9198f-5170-4c59-88b6-8f2bfa0fd306>\",\"WARC-IP-Address\":\"34.248.98.230\",\"WARC-Target-URI\":\"https://pureportal.strath.ac.uk/en/publications/electromagnetic-field-quantization-in-absorbing-dielectrics\",\"WARC-Payload-Digest\":\"sha1:ZL7JDMLZTVPE7HD4ULCWDQLOHI2BLGFI\",\"WARC-Block-Digest\":\"sha1:WCZIGVORU2IG272ONN733ZMGEHUAB3JW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141674594.59_warc_CC-MAIN-20201201135627-20201201165627-00494.warc.gz\"}"} |
https://tortuga.coldbyte.net/index.php/serversettings/ | [
"### LTS Cluster\n\n► Difficulty override = 5 (lvl 150 dinos on all maps)\n► Max structures in range = 50’000\n► Platform Structures Amount = 3x\n► Saddle Structures Limit = 500\n► Damage dealt by turrets/spikes = 1.6x\n► XP Multiplier = 3x\n► Taming speed = 5x\n► Harvest = 5x\n► Max farm per resource = 2x\n► Resource respawn interval = 1.2x\n► Player water/food drain = 0.75x\n► Player/Dino health recovery = 1.3x\n► Egg hatch speed = 50x\n► Baby maturation speed = 50x\n► Baby food consumption = 0.5x\n► Baby cuddle interval = 0.02x (100% imprint possible)\n► Baby cuddle grace period = 1.3x\n► Baby lose imprint quality = 0.7x\n► Dinos lay eggs = 3x\n► Poop interval = 2x\n► Mating interval wait time = 0.16x\n► Item spoil time = 10x\n► On ground item decomposition time = 2x\n► Narco/Stim/Mejoberry harvest yield = 2x\n► Crop growth speed = 2x\n► Turret damage to dinos = 1.25x\n► Max players per tribe = 12\n► XP/Level override = Curve is vanilla + 20 extra levels.\n► Lowered prices for industrial structures\n► Structure decay period = 15 (60d thatch, 120d wood, 180d stone, 240d metal)\n► Fortitude per level = 7\n\n### 100x Cluster\n\n► Difficulty override = 5 (lvl 150 dinos on all maps)\n► Max structures in range = 50’000\n► Platform Structures Amount = 3x\n► Saddle Structures Limit = 500\n► Damage dealt by turrets/spikes = 1.6x\n► XP Multiplier = 100x\n► Taming speed = 100x\n► Harvest = 100x\n► Max farm per resource = 2x\n► Resource respawn interval = 1.2x\n► Player water/food drain = 0.75x\n► Player/Dino health recovery = 1.3x\n► Egg hatch speed = 200x\n► Baby maturation speed = 200x\n► Baby food consumption = 0.5x\n► Baby cuddle interval = 0.005x (100% imprint possible)\n► Baby cuddle grace period = 10x\n► Baby lose imprint quality = 0.3x\n► Dinos lay eggs = 3x\n► Poop interval = 2x\n► Mating interval wait time = 0.005x\n► Item spoil time = 10x\n► On ground item decomposition time = 2x\n► Narco/Stim/Mejoberry harvest yield = 2x\n► Crop growth speed = 2x\n► Turret damage to dinos = 1.25x\n► Max players per tribe = 12\n► XP/Level override = Curve is vanilla + 20 extra levels.\n► Structure decay period = 2"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.74701995,"math_prob":0.9993359,"size":2097,"snap":"2021-04-2021-17","text_gpt3_token_len":719,"char_repetition_ratio":0.12804587,"word_repetition_ratio":0.70098037,"special_character_ratio":0.3471626,"punctuation_ratio":0.06896552,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9570806,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-23T11:07:06Z\",\"WARC-Record-ID\":\"<urn:uuid:75ecbcdb-2e0b-47f6-8054-7daa4be20ddf>\",\"Content-Length\":\"23574\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e993c4c1-9680-4bf8-b409-ace53514c0d8>\",\"WARC-Concurrent-To\":\"<urn:uuid:b5c7dd6b-5e57-40ba-8977-401f666a3fbe>\",\"WARC-IP-Address\":\"46.127.128.59\",\"WARC-Target-URI\":\"https://tortuga.coldbyte.net/index.php/serversettings/\",\"WARC-Payload-Digest\":\"sha1:VNY4EUD32IQIBKO3NZVES4EZ53Q4MGH5\",\"WARC-Block-Digest\":\"sha1:DBRMQDXT4HAJQME4LUP6U5U66RAMOVMP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703537796.45_warc_CC-MAIN-20210123094754-20210123124754-00110.warc.gz\"}"} |
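The LTS line "Structure decay period = 15 (60d thatch, 120d wood, 180d stone, 240d metal)" is just a multiplier on per-material base timers. Assuming base auto-decay timers of 4/8/12/16 days (an assumption consistent with the bracketed figures, not something the page states), the arithmetic checks out:

```python
# Assumed per-material base auto-decay timers, in days (not stated on the page).
BASE_DECAY_DAYS = {"thatch": 4, "wood": 8, "stone": 12, "metal": 16}

def decay_days(multiplier):
    # Effective decay timer = base timer * cluster multiplier.
    return {mat: base * multiplier for mat, base in BASE_DECAY_DAYS.items()}

print(decay_days(15))  # LTS cluster: 60/120/180/240 days, as listed
print(decay_days(2))   # the 100x cluster's "Structure decay period = 2"
```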
https://books.google.no/books?id=bOFJAAAAMAAJ&vq=mean&dq=editions:UOM39015067252117&lr=&hl=no&output=html_text&source=gbs_navlinks_s | [
### Popular passages

Page 310 - If two triangles have one angle of the one equal to one angle of the other and the sides about these equal angles proportional, the triangles are similar.

Page 198 - If a straight line be bisected, and produced to any point; the rectangle contained by the whole line thus produced, and the part of it produced, together with the square...

Page 366 - If from the vertical angle of a triangle a straight line be drawn perpendicular to the base, the rectangle contained by the sides of the triangle is equal to the rectangle contained by the perpendicular and the diameter of the circle described about the...

Page 92 - If there be two straight lines, one of which is divided into any number of parts, the rectangle contained by the two straight lines is equal to the rectangles contained by the undivided line, and the several parts of the divided line.

Page 284 - And if the first have a greater ratio to the second, than the third has to the fourth, but the third the same ratio to the fourth, which the fifth has to the sixth...

Page 349 - Divide a straight line into two parts such that the rectangle contained by the whole line and one of the parts shall be equal to the square on the other part.

Page 288 - Convertendo, by conversion; when there are four proportionals, and it is inferred, that the first is to its excess above the second, as the third to its excess above the fourth.

Page 296 - ... line and the extremities of the base have the same ratio which the other sides of the triangle have to one...

Page 367 - Wherefore, in equal circles &c. Q.E.D. PROPOSITION B. THEOREM. If the vertical angle of a triangle be bisected by a straight line which likewise cuts the base, the rectangle contained by the sides of the triangle is equal to the rectangle contained by the segments of the base, together with the square on the straight line which bisects the angle.

Page 104 - In every triangle, the square of the side subtending any of the acute angles is less than the squares of the sides containing that angle by twice the rectangle contained by either of these sides, and the straight line intercepted between the perpendicular let fall upon it from the opposite angle, and the acute angle. Let ABC be any triangle, and the angle at B one of its acute angles, and upon BC, one of the sides containing it, let fall the perpendicular...
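The construction quoted from Page 349 (dividing a line so that the rectangle contained by the whole and one part equals the square on the other part) is the "golden section." A quick numeric check in Python (the function name is my own, not from the book):

```python
import math

# Page 349: divide a line of length L into parts x and L - x such that
# L * (L - x) = x**2. Solving the quadratic x**2 + L*x - L**2 = 0 for the
# positive root gives x = L * (sqrt(5) - 1) / 2, the golden section.

def golden_section(L):
    return L * (math.sqrt(5) - 1) / 2

x = golden_section(1.0)
# Check the defining property: rectangle (whole by other part) = square on x.
assert abs(1.0 * (1.0 - x) - x**2) < 1e-12
```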
http://slideplayer.com/slide/5354543/
# DAST 2005 Tirgul 6: Heaps, Induction

## Heaps

A binary heap is a nearly complete binary tree stored in an array object. In a max heap, the value of each node ≥ that of its children (in a min heap, the value of each node ≤ that of its children). Since the height of a heap containing n elements is Θ(log(n)), the basic operations on a heap run in time that is proportional to the heap's height and thus take O(log(n)).

## Mathematical Induction

Mathematical induction is a method of mathematical proof typically used to establish that a given statement is true for all natural numbers, or for all members of an infinite sequence. The simplest and most common form of mathematical induction proves that a statement holds for all natural numbers n and consists of two steps:

1. The basis: showing that the statement holds when n = 0.
2. The inductive step: showing that if the statement holds for n = k, it also holds for n = k + 1.

This method works by:

1. First proving the statement is true for a starting value
2. Then proving that the process used to go from one value to the next is valid.

If these are both proven, then any value can be obtained by performing the process repeatedly. For example, suppose we have a long row of dominos standing, and we can be sure that:

1. The first domino will fall.
2. Whenever a domino falls, its next neighbor will also fall.
We can conclude that all dominos will fall.

Suppose we wish to prove the following statement (the formula was lost in the transcript; the standard example, which matches the "adding m+1 to both sides" step below, is the sum of the first n natural numbers):

S(n): 0 + 1 + 2 + ... + n = n(n+1)/2

Proof by induction:

1. Check that it is true for n = 0: the left side is 0 and the right side is 0(0+1)/2 = 0.
2. Assume the statement is true for n = m, i.e. 0 + 1 + ... + m = m(m+1)/2. Adding m+1 to both sides gives 0 + 1 + ... + m + (m+1) = m(m+1)/2 + (m+1) = (m+1)(m+2)/2, which is exactly S(m+1).

Note that it has not been proven as true: we made the assumption that S(m) is true, and from that assumption we derived S(m+1). Symbolically, we have shown that S(m) → S(m+1). However, by induction, we may now conclude that the statement S(n) holds for all natural numbers n:

- we have showed S(0)
- we have showed that S(0) → S(1)
- we have showed that S(1) → S(2), etc.

## Generalization

A common generalization is proving that a claim is true for all n ≥ c:

1. Showing that the statement holds when n = c
2. Showing that if the statement holds for n = m ≥ c, it also holds for n = m+1

This can be used for showing that n² ≥ 10n for all n ≥ 10:

1. For n = 10, n² = 10·10 = 10n.
2. Assuming the statement holds for n = m ≥ 10, we get (m+1)² = m² + 2m + 1 ≥ 10m + 2m + 1 ≥ 10m + 10 = 10(m+1).

## Complete Induction

Another form of mathematical induction is complete induction (also called strong induction). In complete induction, the inductive hypothesis is "the statement holds for all c ≤ n ≤ m," instead of simply "the statement holds for n = m" (we have a stronger inductive hypothesis, hence the name strong induction). The complete induction steps are therefore:

1. Showing that the statement holds for n = 0 (or n = c)
2. Showing that if the statement holds for all c ≤ n ≤ m, then the same statement also holds for n = m+1

## The Game of Nim

Rules: Two players throw a number of stones on the ground. At each turn, a player removes one, two or three stones. The one to remove the last stone loses.

Proposition: The second player has a winning strategy iff the number of stones is 4k+1; otherwise the first player has a winning strategy.

Proof: Base case – there is only one stone, the second player wins (1 = 4k+1 with k = 0). Induction: Assume that P(n) is true for all 1 ≤ n ≤ m and prove that P(n+1) is true as well. We have four possible cases:

1. n+1 = 4k+1. We have already showed P(1) to be true, so we assume that n+1 ≥ 5. The first player can lift either one, two, or three stones, leaving either 4k, 4(k−1)+3, or 4(k−1)+2 respectively. By the induction hypothesis, the person who plays next has a winning strategy.
2. n+1 = 4k. The first player can remove just three stones, leaving n = 4(k−1)+1. The strong induction hypothesis asserts that the second player loses.
3. n+1 = 4k+2. The first player can remove just one stone, leaving n = 4k+1. The strong induction hypothesis asserts that the second player loses.
4. n+1 = 4k+3. The first player can remove two stones, leaving n = 4k+1. The strong induction hypothesis asserts that the second player loses.

## Loop Invariants

A loop invariant is a statement that is true when a program enters a loop, remains true in each iteration of the body of the loop, and is still true when control exits the loop. Understanding loop invariants can help us analyze programs, check for errors, and derive programs from specifications. The loop invariant technique is derived from mathematical induction:

1. We first check that the statement is true when the loop is first entered
2. We then verify that if the invariant is true after n times around the loop, it is also true after n+1 times

```java
public static int factorial(int num) {
    int P = 1;
    int C = 0;
    while (C < num) {
        // P = C!  (our loop invariant)
        C++;
        P *= C;
    }
    // P = C!  (our loop invariant)
    return (P);
}
```

Our loop invariant is true when we first enter and last leave the loop.

## Insertion Sort

The outer loop of insertion sort is:

```java
for (outer = 1; outer < a.length; outer++) { ... }
```

The invariant is that all the elements to the left of outer are sorted with respect to one another: for all i < outer, j < outer, if i < j then a[i] <= a[j]. This does not mean they are all in their final correct place; the remaining array elements may need to be inserted. When we increase outer, a[outer-1] becomes part of the region to its left; we must keep the invariant true by inserting a[outer-1] into its proper place. This means:

1. Finding the element's proper place
2. Making room for the inserted element (by shifting over other elements)
3. Inserting the element

## Code for Insertion Sort

```java
public static void insertionSort(int[] array) {
    int inner, outer;
    for (outer = 1; outer < array.length; outer++) {
        int temp = array[outer];
        inner = outer;
        while (inner > 0 && array[inner - 1] >= temp) {
            array[inner] = array[inner - 1];
            inner--;
        }
        array[inner] = temp;
        // Invariant: For all i < outer, j < outer, if i < j then a[i] ≤ a[j]
    }
}
```

The loop invariant is true before the loop is first executed. If the loop invariant holds after n times around the loop, the inner loop makes sure we insert the n+1 element in place; therefore the loop invariant is also true after n+1 times.
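The invariant argument can also be checked empirically. Below is a Python translation of the lecture's Java insertionSort (a sketch, not the lecture's own code), with the loop invariant asserted at the top of every outer pass:

```python
def insertion_sort(a):
    """Insertion sort; asserts the slides' loop invariant on each outer pass."""
    for outer in range(1, len(a)):
        # Invariant: a[0:outer] is sorted with respect to itself.
        assert all(a[i] <= a[i + 1] for i in range(outer - 1))
        temp = a[outer]
        inner = outer
        while inner > 0 and a[inner - 1] >= temp:
            a[inner] = a[inner - 1]   # shift larger elements one slot right
            inner -= 1
        a[inner] = temp               # insert the element into its proper place
    return a
```

If the invariant ever failed, the assertion would fire on the first outer pass after the violation, which is exactly the kind of error-checking use of invariants the lecture describes.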
https://math.stackexchange.com/questions/1694270/taylor-series-expansion-of-a-function
Taylor Series Expansion of a function?

So, I was studying my Computer Vision lecture notes and I came across this formula. Say I have a function $f(x,y,t)$, where $x$, $y$ and $t$ are the varying factors. After $t+\Delta t$, I have $f(x+\Delta x,\, y+\Delta y,\, t+\Delta t)$. Now, the first-order Taylor series expansion is $f(x+\Delta x,\, y+\Delta y,\, t+\Delta t) \approx f(x,y,t) + \frac{\partial f}{\partial x}dx + \frac{\partial f}{\partial y}dy + \frac{\partial f}{\partial t}dt$.

I didn't understand how he arrived at the expansion. Please can anyone explain it to me or give a link which can help me to understand the above expansion.

---

Try to find the Taylor expansion of $f(x)$ first:
$$f(x)=f(x_0)+(x-x_0)\frac{f'(x_0)}{1!}+(x-x_0)^2\frac{f''(x_0)}{2!}+\dots$$
For $f(x,y)$:
$$f(x,y)=f(x_0,y_0)+(x-x_0)\frac{\partial f(x_0,y_0)}{\partial x}+(y-y_0)\frac{\partial f(x_0,y_0)}{\partial y}+\frac{1}{2!}\left[(x-x_0)^2 \frac{\partial^2 f(x_0,y_0)}{\partial x^2}+2(x-x_0)(y-y_0)\frac{\partial^2 f(x_0,y_0)}{\partial x\,\partial y}+(y-y_0)^2\frac{\partial^2 f(x_0,y_0)}{\partial y^2}\right]$$
Eventually you can calculate it for $f(x,y,t)$.
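A numeric sanity check of the first-order expansion in three variables (the test function and the evaluation point are arbitrary choices, not from the question):

```python
# First-order Taylor: f(x+dx, y+dy, t+dt) ~ f + f_x*dx + f_y*dy + f_t*dt,
# with an error that is second order in the increments.
# Try f(x, y, t) = x**2 * y + t, whose partials are easy to write down.

def f(x, y, t):
    return x**2 * y + t

def linear_approx(x, y, t, dx, dy, dt):
    fx = 2 * x * y    # partial f / partial x
    fy = x**2         # partial f / partial y
    ft = 1.0          # partial f / partial t
    return f(x, y, t) + fx * dx + fy * dy + ft * dt

x, y, t = 1.0, 2.0, 3.0
dx, dy, dt = 0.01, -0.02, 0.005
exact = f(x + dx, y + dy, t + dt)
approx = linear_approx(x, y, t, dx, dy, dt)
error = abs(exact - approx)   # second order in the increments (about 2e-4 here)
```

Halving all three increments shrinks the error by roughly a factor of four, which is the signature of a second-order remainder.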
https://en.cppreference.com/w/cpp/memory/c
# C memory management library

### Functions

Defined in header `<cstdlib>`:

| Function | Description |
|---|---|
| malloc | allocates memory (function) |
| aligned_alloc (C++17) | allocates aligned memory (function) |
| calloc | allocates and zeroes memory (function) |
| realloc | expands previously allocated memory block (function) |
| free | deallocates previously allocated memory (function) |
https://riskmatrix.co/blogs/risk-matrix-knowledge-bank.atom
Riskfiniti - Risk Matrix Knowledge Bank

# How Not to Design a Risk Matrix – Government of Western Australia’s Risk Matrix

Peter Carr, 2019-02-12

Today, I stumbled across one of the most badly designed risk matrices I have ever encountered. It has been published by the Government of Western Australia, Small Business Development Corporation, and is offered as a guiding example to small businesses (Ref. 1, 2):
The risk matrix contains several major flaws.

## Scales with Inappropriate Ranges

The Consequence scale ranges as high as “Multiple Fatalities” but the Likelihood scale only goes as low as “Less than once in 2 years.” The user needs to be able to determine from the risk matrix when the likelihood of a multiple-fatality event has been reduced to a broadly acceptable level. One criterion for determining the limit for multiple-fatality events is F = 10⁻³/N² per year, where N is the number of fatalities (this criterion has been applied to a number of activities in the Netherlands, such as the national airport, Schiphol). At this point, you may see another problem with the scales – vagueness – just how many fatalities are envisaged in the multiple-fatality consequence category? Suppose this category corresponds to 2 to 10 fatalities. Then, on the Dutch criterion, the frequency limit would be 2.5 × 10⁻⁴ per year (once in 4,000 years) to 1.0 × 10⁻⁵ per year (once in 100,000 years). Thus, the likelihood scale should go down very much lower than once in 2 years in order to accommodate multiple-fatality scenarios.

## Scales with Strange Gaps

Suppose an event is predicted to occur once in two years on average. This is a higher frequency than for category 1 (defined as less than once in two years) but a lower frequency than for category 2 (defined as at least once per year). Thus, there is no category in which to place the event that is expected to occur, on average, once in two years.

## Likelihood Categories that are Almost Certain

From the Likelihood category definitions, events in likelihood categories 2 to 5 occur quite frequently – from once per year (category 2) to every week (category 5). These events are almost certain to occur. Events that are almost certain to occur – or have already occurred – are better treated as issues rather than risks.
The term “risk” should be reserved for unwanted events that are possible but not those that may be confidently anticipated to arise. This avoids cluttering the risk register with events that occur routinely.

## Zero Width Categories

Consequence category 4 applies to the event of exactly 1 fatality so has zero width. It would be better to combine it with category 5 and call it, say, “1 to 10 fatalities.” This avoids giving the risk assessment team the challenge of distinguishing scenarios that cause a single fatality from those that cause two or more fatalities.

## Inappropriate Risk Criteria

Consider a multiple-fatality event that is predicted to occur once in 3 years, on average. The risk matrix classifies this as a “moderate” risk that “may require corrective action.” Of course, such a severe event with such a high frequency should be placed in the highest category of risk and described as completely unacceptable.

## Category Labels Treated as Quantitative Measures

The category labels 0 to 5 do not express the magnitude of the likelihood or the consequence. They could equally have been labeled A to F. Just because the labels look like numbers does not mean that they can be treated as numbers. The author of this risk matrix has multiplied these numerals, which are really just labels, in a pretense at producing a measure of risk. Then, for example, “Extreme” risk is defined as a product from 15 to 25. This oversimplified approach to designing a risk matrix will nearly always result in a risk matrix with an inappropriate coloring pattern, i.e. it will misclassify risks.
This is a grievous type of error in risk matrix construction to which I have previously devoted an entire article.

I find it shocking that such a poorly designed risk matrix should have been presented as official guidance to small businesses by the Government of Western Australia.

## External References

# How Not to Design a Risk Matrix—Cox’s Axioms

Peter Carr, 2018-05-28

## Introduction

The most widely cited paper on risk matrix design is by Cox (2008). It includes a design methodology based on three axiomatic constraints that Cox suggests risk matrices should satisfy. Cox applies the axioms to produce designs for risk matrices of size 3x3, 4x4, and 5x5. In this article, we review the method and show that it leads to results that are not useful in practice.

## Cox's Axioms

In explaining the design method of Cox, we will adopt “probability” and “impact” as the default names of the two axes of the risk matrix.

Cox assumes that the two axes range from 0 to 1 and are equally subdivided into a number of intervals. For example, for a 4x4 matrix, the axes would be as follows:
Figure 1: 4x4 risk matrix axes

On the probability axis, 0 represents impossibility while 1 represents certainty. On the consequence axis, 0 represents no impact and 1 represents the worst possible adverse impact. The consequence axis tick mark values could be multiplied by any positive scaling factor without changing the mathematical structure.

Cox assumes that risk is calculated as a function of probability and consequence, for example, as the product of the two variables.

Cox proposes that a risk matrix coloring pattern must satisfy three axiomatic constraints. The three constraints he calls Weak Consistency, Betweenness and Consistent Coloring. The following definitions are from Cox’s paper (note he assumes a risk matrix with three colors—red for the top risk category, green for the bottom category, and yellow as an intermediate color):

DEFINITION OF WEAK CONSISTENCY: A risk matrix with more than one “color” (level of risk priority) for its cells satisfies weak consistency with a quantitative risk interpretation if points in its top risk category represent higher quantitative risks than points in its bottom category.

DEFINITION OF BETWEENNESS: A risk matrix satisfies the axiom of betweenness if every positively sloped line segment that lies in a green cell at its lower (left) end and in a red cell at its upper (right) end passes through at least one intermediate cell (meaning one that is neither green nor red) between them.

DEFINITION OF CONSISTENT COLORING: (1) A cell is red if it contains points with quantitative risks at least as high as those in other red cells (and does not contain points with quantitative risk as small as those in any green cell). (2) A cell is colored green if it contains some points with risks at least as small as those in other green cells (and does not contain points with quantitative risks as high as those in any red cell).
(3) A cell is colored an intermediate color (neither red nor green) only if either (a) it lies between a red cell and a green cell; or (b) it contains points with quantitative risks higher than those in some red cells and also points with quantitative risks lower than those in some green cells.

According to Cox, the three axioms taken together imply that there is only one possible coloring for a 3x3 or 4x4 matrix, and only two possible colorings for a 5x5 matrix. Let’s take a 4x4 risk matrix as an example. Cox states that the only possible coloring is as shown in Figure 2 below.
Figure 2: The only permissible coloring of a 4x4 risk matrix according to Cox

A good way to understand the three axioms is to consider how the above risk matrix satisfies them.

Weak Consistency. Points in the green cells correspond to quantitative risks from 0 to 0.25. Risks in the red cells correspond to quantitative risks from 0.25 to 1. Points in the top risk category therefore represent risks that are higher than (or equal to) those in the bottom category. Cox’s definition of Weak Consistency (see above) does not mention the possibility that risks in the bottom and top categories could be equal but, from his 4x4 example, it appears that this is acceptable and does not violate Weak Consistency.

Betweenness. It is not possible to draw a line from a green cell to a red cell without passing through at least one yellow cell. Therefore, Betweenness is satisfied.

Consistent Coloring. Inspection shows that the Consistent Coloring axiom is satisfied. But note, the Consistent Coloring axiom contains within it several rules numbered (1), (2), (3)(a) and (3)(b). An interesting feature is that according to Consistent Coloring rule (2), cell B2 should have been green but it has been taken as yellow to satisfy Consistent Coloring rule (3)(a). Rule (3)(a) appears to have been included to allow Betweenness to take precedence over what would otherwise be deemed consistent coloring. In other words, it appears that rules (2) and (3)(a) are in conflict for cell B2, with rule (3)(a) being used as the final determinant of its color.

Note that the Consistent Coloring axiom is very restrictive over when a cell may be colored yellow. It may be yellow according to Rule (3)(a) when it lies between a green cell and a red cell (i.e. when a yellow cell is introduced to satisfy Betweenness).
A cell may also be yellow according to Rule (3)(b) when it contains points with quantitative risks higher than those in some red cells and also points with quantitative risks lower than those in some green cells. Due to the restrictive circumstances in which a cell may be colored yellow, many coloring patterns are impermissible, for example, that shown in Figure 3 below.
Figure 3: An example of a coloring pattern that would be impermissible under the Cox axioms

The above matrix is impermissible under the Cox axioms because yellow cells A4 and D1 comply neither with Consistent Coloring rule (3)(a) nor with (3)(b).

Now we will ask what the one and only permissible design for a 4x4 matrix under the Cox axioms implies about risk priority levels. We have seen that green cells span risks from 0 to 0.25 and red cells span the range 0.25 to 1. Thus, when a risk falls in the red zone, we know it is larger than or equal to 0.25, and when it falls in the green zone it is less than or equal to 0.25. The matrix is therefore useful for discriminating between risks according to whether they are above or below the value of 0.25. We can see that the yellow zone overlaps with both green and red by spanning the risk range from 0.0625 to 0.5. When a risk falls in the yellow zone, we learn nothing about its relationship to a risk threshold of 0.25.

Cox does not explain how the yellow zone should be interpreted. Some points in the yellow cells correspond to risks as high as some points in red cells, while other points in yellow cells correspond to risks as low as some points in green cells. Yellow could be assumed to be an intermediate risk level between red and green but we can see that it overlaps with both red and green in terms of risk range, so it is not actually an intermediate level.

The Cox 4x4 risk matrix design is more or less compatible with the quantitative model shown in Figure 4 that classifies risks as red or green according to their relationship to an iso-risk contour of value 0.25. Simply color the cells that contain both red and green coloring as yellow and you will have the Cox design—except that Cox forces cell B2 to be yellow rather than green on account of Betweenness and the Consistent Coloring Rule (3)(a).
Without forcing the color of cell B2, we would have the matrix in Figure 5, which is the same as Cox’s design except for cell B2.
Figure 4: A quantitative model compatible with the Cox 4x4 risk matrix design
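The quantitative model of Figure 4 can be sketched in code. This assumes, per the text, risk = probability × consequence with a single iso-risk contour at 0.25; the 0-based grid indexing (columns A-D map to 0-3, rows 1-4 map to 0-3) is my own convention:

```python
# Classify each cell of a 4x4 matrix (axes from 0 to 1) against the contour
# risk = 0.25: entirely below -> green, entirely above -> red, otherwise the
# contour passes through the cell and the cell is "split" between the two levels.

def classify_cell(col, row, threshold=0.25, n=4):
    lo = (col / n) * (row / n)                # smallest risk in the cell
    hi = ((col + 1) / n) * ((row + 1) / n)    # largest risk in the cell
    if hi <= threshold:
        return "green"
    if lo >= threshold:
        return "red"
    return "split"

grid = {(c, r): classify_cell(c, r) for c in range(4) for r in range(4)}
```

On this model, cell B2, index (1, 1), comes out green, matching the observation above that Consistent Coloring rule (2) alone would make it green; the four "split" cells are exactly the ones the contour passes through.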
Figure 5: Matrix compatible with the quantitative model in Figure 4

It seems that the only valid use of the Cox 4x4 matrix is to tell us whether a risk is above or below a threshold of 0.25. It will do this accurately when the risk falls in the red or green zone but provides no useful information when the risk falls in the yellow zone. Therefore, a logical development of the design is to approximate the yellow cells as either green or red. Let us approximate the yellow cells as red. Our design now evolves to:
Figure 6: Yellow cells approximated as red

Suppose that risk points are uniformly distributed in Figure 6. We can simulate a large number of such points and determine how many are correctly ranked according to the underlying model in Figure 4. We find that 90% of points will be correctly ranked and 10% will have their rank overestimated, which is a respectable performance for an approximate tool.

But the single risk level of 0.25 implied in the Cox 4x4 matrix is unlikely to be of any use to the decisionmaker. Risks in the domain of the matrix can range from 0 at the bottom left to 1 at top right. The decisionmaker might be interested in a value for the risk threshold anywhere in this range. Also, in practice, the decisionmaker will likely want to use more than one risk threshold, e.g. to divide risks into three or more categories. Cox’s 4x4 matrix design relates to a single risk threshold fixed at 0.25 by the application of the three axiomatic constraints.

## Conclusions

The three Cox axioms overly constrain risk matrix design, resulting in a single implied risk threshold between the top and bottom risk categories. We demonstrated this to be the case for the 4x4 matrix, but it is also the case for the 3x3 and 5x5 risk matrix designs in the Cox paper.

The Cox axioms cannot be used to develop a design that is compatible with arbitrary risk threshold values specified by the decisionmaker.

The Cox axioms are not useful for practical risk matrix design.

## References

Cox, L. A., Jr. (2008). What's wrong with risk matrices? Risk Analysis, 28(2), 497-512. doi:10.1111/j.1539-6924.2008.01030.x

# When Can the Same Risk Matrix be Used with Different Axis Scales?
Peter Carr, 2018-03-27

The visual elements of any well-designed risk matrix are:

- A ratio scale for its probability (or frequency) axis
- A ratio scale for its consequence axis
- Contours of equal risk (iso-risk contours), which define the thresholds between the different risk priority levels
- A set of colors to visually differentiate the different risk priority levels

In addition, an algorithm is needed for deciding the color of cells that are intersected by one or more iso-risk contours. These cells, which we call “split cells” for brevity, are not located in a single risk priority level. Different parts of a split cell lie in different risk priority levels. Each split cell must be allocated to just one of the risk priority levels that it straddles. Quick Risk Matrix offers a choice of several algorithms by which to determine the coloring of the split cells.

Now, many organizations like to use a single risk matrix coloring pattern but in conjunction with different axis scales. Let’s invent an example.

Suppose an organization develops a risk matrix for the consequence type of environmental damage as shown in Figure 1 and wishes to apply the same matrix coloring scheme to the consequence type of financial loss. What are the circumstances in which it may do so?
"Figure 1: Risk matrix for environmental damage\n\nIn switching the horizontal axis from environmental damage to financial loss, there will be no impact on the matrix coloring scheme if the tick mark values on the horizontal axis are multiplied by a factor and if the iso-risk contours, which define the risk priority levels, are multiplied by the same factor. Let’s say that the factor will be 100. Then the risk matrix for financial loss is as shown in Figure 2, and it has the same coloring pattern as the one for environmental damage shown in Figure 1.",
null,
"Figure 2: Risk matrix for financial loss\n\nSo far, so good. Our fictitious organization can have a standard risk matrix coloring as shown in Figure 1 or 2 and use it in relation to both environmental damage and financial loss.\n\nBut what if the organization feels that an expected annual financial loss of \$10 is trivially low? Perhaps it would be willing to categorize risks as Low if the annual expected loss is under \$100? The lowest iso-risk contour will then shift upwards and the risk matrix coloring will change as shown in Figure 3 below. There is no fundamental reason why, when the consequence scale changes by a multiplicative factor, the iso-risk contour values should change by the same factor. For example, it might well be that the organization is relatively more tolerant of financial loss than it is of environmental damage.",
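Both behaviors are easy to verify numerically. The sketch below uses invented tick marks and contour values (not those in the figures) and a deliberately simple coloring rule that levels each cell by its top-right-corner risk; it illustrates the principle only, and is not any particular Quick Risk Matrix algorithm:

```python
def cell_level(p_lo, p_hi, c_lo, c_hi, contours):
    """Level of a cell = number of iso-risk contours exceeded by the
    cell's maximum risk (its top-right corner). One simple coloring rule."""
    return sum(p_hi * c_hi > t for t in contours)

# Invented scales and contours, for illustration only
prob = [0.0, 0.25, 0.5, 0.75, 1.0]
cons = [1, 10, 100, 1000, 10000]
contours = [100, 1000]

def coloring(cons_ticks, conts):
    """Levels of all 16 cells of the 4x4 matrix."""
    return [[cell_level(prob[i], prob[i + 1],
                        cons_ticks[j], cons_ticks[j + 1], conts)
             for j in range(4)] for i in range(4)]

# 1. Scale consequence ticks and contours by the same factor: unchanged.
f = 100
assert coloring(cons, contours) == coloring([c * f for c in cons],
                                            [t * f for t in contours])

# 2. Raise only the lowest contour: some cells change level.
assert coloring(cons, contours) != coloring(cons, [300, 1000])
```

Scaling the consequence ticks and the contours by the same factor cancels out in every comparison, so the coloring pattern survives; changing only a contour value does not.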
null,
"Figure 3: Risk matrix coloring has changed as a result of a change in an iso-risk contour value\n\nSuppose now that the organization feels that consequence category 2 would be more useful if it were defined as the interval [100, 500], rather than as [100, 1000]. The risk matrix coloring pattern now changes as shown in Figure 4. On account of the redefinition of the category, the three cells marked *** have changed in color.",
null,
"Figure 4: Risk matrix coloring changed as a result of redefining consequence category 2\n\nWe have demonstrated that changes to tick mark values (other properties of the matrix remaining constant) may change the coloring pattern. Tick mark value changes do not necessarily change the coloring pattern, but they will often do so.\n\nWe have also demonstrated that changing the risk values that define the risk priority levels will change the coloring.\n\nIf we use a different algorithm for determining the color of split cells, the risk matrix coloring pattern may again change. For example, we might want to use the “Round Up” algorithm for environmental damage to ensure that all errors of risk mapping are on the safe side, but the “Predominant Color” algorithm for financial loss since we may be less concerned about underestimating risk when it is merely a matter of money rather than environmental harm. Why don’t we always use the Round Up method? Because it is usually less accurate than the Predominant Color method. Please see other posts in this blog for a full understanding of the different coloring algorithms for split cells.\n\n## Conclusions\n\nWe have shown that the same risk matrix coloring pattern can be applied with different axis scales only under very limited circumstances.\n\nIn our experience, many organizations have a standard risk matrix that they use in conjunction with diverse axis scales. 
Unless the matrix and the scales have been very carefully designed, the matrix may be invalid in relation to some or even all of the scales.\n\nIf a standard risk matrix is wanted, the safest approach is to use Quick Risk Matrix to produce a risk matrix design for each set of axis scales and to verify that the design is, in fact, the same in each case.\n\nIt is also a reasonable and probably more flexible approach to have several risk matrices in the organization, each tailored to different types of consequence.\n\n]]>\nhttps://riskmatrix.co/blogs/risk-matrix-knowledge-bank/benchmarking-risk-matrix-performance 2018-03-27T23:44:00-05:00 2018-05-26T19:21:29-05:00 Benchmarking Risk Matrix Performance Peter Carr More\n\n]]>\nIn this post, we discuss the question of how to benchmark the performance of a risk matrix, so that different risk matrix designs may be compared. In so doing, we will develop some tips for the more accurate design of risk matrices.\n\nThis post assumes that the reader has read our earlier blog posts and is familiar with the design methodology used in Quick Risk Matrix.\n\nAfter starting this article, we found ourselves unavoidably discussing two methods of risk matrix design, in addition to the method used in Quick Risk Matrix. We will demonstrate that the former methods (i.e. the non-Quick Risk Matrix methods) are not defensible.\n\nWe shall create our examples using Quick Risk Matrix Premium, which includes a performance benchmarking tool.\n\n## Literature Review\n\nThere is very little in the literature on the subject of benchmarking risk matrix performance. As far as we are aware, the only papers of note are those by Cox (Ref. 1), Xing Hong (Ref. 2), and Li, Bao and Wu (Ref. 3).\n\nCox investigated risk matrices up to size 5 x 5 with three colors and equally subdivided axis scales ranging from 0 to 1. He concluded that risk matrices typically have poor resolution. 
He states \"Typical risk matrices can correctly and unambiguously compare only a small fraction (e.g., less than 10%) of randomly selected pairs of hazards.\" That is a startling conclusion. However, his assumptions are pessimistic. For example, he assumes that when two risks fall in the same risk priority level (i.e. they are located in cells of the same color), there is only a 50% probability of ranking them in the correct order. His assumption is correct when both risk points are located in the same cell but not correct when the two points lie in different cells (unless the decisionmaker is using blind guesswork!). We shall recommend a method in this post for breaking ties when two risks have equal priority levels. In addition, Cox generates his figure of a 90% error rate by using a specific risk matrix and with the assumption that the plotted risks lie in the worst possible positions in the risk matrix.\n\nCox also proposes a design methodology for matrices based on axioms that he calls weak consistency, betweenness and color consistency. He illustrates the methodology for matrices with three colors ranging from size 3 x 3 to 5 x 5. He develops the surprising conclusion that there is only one possible coloring pattern for a 3 x 3 or 4 x 4 matrix and only two possible colorings for a 5 x 5 matrix. We won't go into detail here but we will point out that the limited number of colorings stems from axioms that are overly restrictive. The rational risk matrix designer will want to specify the risk values that define the thresholds between the different risk priority levels. There is nowhere in the Cox methodology for input of the risk thresholds. Instead, the coloring that flows from the axioms implies the thresholds, which is putting the cart before the horse!\n\nLi et al. propose a design methodology that relaxes Cox's axioms. They call their methodology the Sequential Updating Approach (SUA). 
They impose a condition that for a cell A to have a higher risk priority level than a cell B, the probability that a random point in A has higher quantitative risk than a random point in B must be greater than a certain value, alpha (0.5 <= alpha <= 1). An assumption is required for the color of the bottom left cell. Then, by an iterative process, the colors of all the other cells may be determined. The authors show that for a given value of alpha, and given subdivisions for the axes, there is a unique risk matrix coloring. The approach maximizes the number of colors (risk priority levels) in the matrix for a given value of alpha. The higher the value of alpha, the fewer the number of colors that will appear in the matrix. The SUA methodology, like the Cox methodology, pays no regard to the risk threshold values that the risk matrix designer might want to use. The Sequential Updating Approach and the Cox methodology are both indefensible for this reason.\n\nLi et al. use two measures to assess the performance of risk matrices, which they call Probability of Elimination Error (PEE) and Probability of Wrong Pairs (PWP). They calculate these two measures by generating random pairs, assuming that probability and consequence are uniformly distributed and independent. If the two points within a pair lie in different risk priority levels, and the ranking based on quantitative risk differs from the ranking based on risk priority level, they count that as both a PWP error and a PEE. If the two points lie in the same risk priority level, then the two points will be ranked equally and some procedure (applied by the end user of the risk matrix) must be assumed if the tie is to be broken. It is not clear what tie-breaking procedure was assumed. A PEE is counted when the tie-breaking procedure fails to give the correct risk ranking.\n\nThe authors describe PEE as a measure of resolution and PWP as a measure of accuracy. 
In worked examples, the authors found that as the value of alpha increased, the number of colors in the matrix decreased, the PEE increased and the PWP decreased.\n\nNote. The term PEE appears to have been coined by Xing Hong (Ref. 2).\n\n## Benchmarking Measures in Quick Risk Matrix\n\nThe performance benchmarking tool in Quick Risk Matrix Premium uses three measures of performance, in contrast to two used by Li et al. and one by Cox.\n\nThe measures used in Quick Risk Matrix are:\n\n• Accuracy of Mapping\n• Probability of Elimination Error (PEE)\n• Probability of Rank Reversal (PRR)\n\nWe will explain each of these measures below.\n\n### Accuracy of Mapping\n\nAs explained in more detail in other posts in this blog, Quick Risk Matrix treats a risk matrix as an approximation to a risk graph. The risk graph shows without error how probability and consequence map to risk priority level. The risk matrix will inevitably make mapping errors and one of the purposes of risk matrix design is to make these errors as few as possible, i.e. the risk matrix should be a good approximation to the risk graph.\n\nThe performance benchmarking tool in Quick Risk Matrix Premium assesses the mapping accuracy of a risk matrix by generating a large number of random risk points and counting how often the risk points are mapped correctly to risk priority level. The risk graph determines what constitutes correct mapping. In generating the random points, probability and consequence are assumed uniformly distributed when the matrix axes are linear and log-uniformly distributed when the matrix axes are logarithmic. The probability and consequence are taken to be correlated with a Spearman correlation coefficient input by the program user in the range -1 to +1. 
A Spearman coefficient of 0 corresponds to independence between the variables.\n\nThe results are presented as:\n\n• Percentage correctly mapped\n• Percentage mapped to a risk priority level that is too high\n• Percentage mapped to a risk priority level that is too low\n\nAccurate mapping is arguably the most important characteristic of a well-designed risk matrix.\n\n### Probability of Elimination Error (PEE)\n\nThe concept of PEE arises in the scenario that a decisionmaker may have two risks but can only afford to eliminate one of them. Which one should be eliminated? While factors unrelated to the risk matrix will go into such a decision, it would certainly be helpful to be able to rank risks on the basis of the risk matrix, even for risks located in the same risk priority level.\n\nOur definition of PEE is the same as employed by Xing Hong and Li et al. However, it is important to state what tie-breaking procedure is assumed when the two points in a pair lie in the same risk priority level. So we will now discuss some possible tie-breaking methods.\n\n#### Possible Tie-Breaking Methods\n\nA possible tie-breaking method is the Borda count (developed by Borda in 1770). Borda count, as applied to risk matrices, ranks risks according to their row and column positions in the matrix. The method is quite complicated to implement. Also, every time a new risk is added to the risk register, the Borda counts need to be recalculated. It is unlikely that many risk matrix users will use Borda counts.\n\nXing assumed that users might break a tie by looking at the relative positions of the cells containing the two points. The risk to be eliminated is taken to be the one in the upper or right side cell. This does not appear to address all possible relative positions. For example, what if a cell is both upper and left?\n\nAnother possible tie-breaking method is guesswork, i.e. arbitrarily assume one of the two risks is the larger. 
This is what Cox assumed users would do when deriving his very pessimistic predictions of risk matrix performance. Since its accuracy will be 50%, this is a very poor method.\n\nAn easy and practical tie-breaking method is to compare cells of the same color on the basis of the risk value at the geometric center of each cell. Suppose a cell is bounded by x = X1, x = X2, y = Y1 and y = Y2. The geometric mean of X is the square root of X1*X2 and the geometric mean of Y is the square root of Y1*Y2. The risk at the geometric center may then be calculated by combining the two geometric means (i.e. multiplying them when risk is defined in the usual way as the product of probability and consequence). We call this the Geometric Center Method of tie-breaking. For decisionmakers who want to compare risks falling within the same risk priority level, this method is easy to apply (e.g. in a spreadsheet). Importantly, the method requires no information other than that already contained in a properly constructed risk matrix.\n\nIn calculating PEE, Quick Risk Matrix assumes that decisionmakers interested in comparing pairs of risks would employ the Geometric Center Method. The method only fails when both risks of a pair fall within the same cell, in which case Quick Risk Matrix assumes that the decisionmaker has only a 50% chance of ranking the pair correctly. There is simply no way to differentiate between two risks that fall in the same cell on the basis of the risk matrix alone.\n\nNote that the geometric mean does not exist if X1 = 0 or Y1 = 0 but there are ad-hoc methods of overcoming this issue.\n\nThe significance of the geometric center can be explained as follows. Suppose the risk at the bottom left of a cell is R1, the risk at the geometric center is R2, and the risk at the top right of the cell is R3. Then R2/R1 = R3/R2. 
In this sense, the geometric center is the \"mid\" point of the cell.\n\nTo calculate PEE, Quick Risk Matrix generates a large number of random pairs of points (correlated if required via a user-input value for Spearman's correlation coefficient). It evaluates the ranking of each pair according to the risk matrix against the ranking given by the quantitative risk values of the two points. When both points of a pair lie in the same risk priority level, the calculation assumes that the decisionmaker is using the Geometric Center Method of breaking ties.\n\n### Probability of Rank Reversal (PRR)\n\nWhat we term PRR is exactly the same measure as the Probability of Wrong Pairs used by Li et al. PRR is calculated only for pairs of points lying in different risk priority levels. PRR errors are, therefore, a subset of Probability of Elimination Errors.\n\nTo calculate PRR, we generate many random pairs of points and, for all pairs where the points fall in different risk priority levels, we use the quantitative risk values of the two points to evaluate what percentage of pairs are incorrectly ranked.\n\n## Worked Example\n\nThe figures below were produced in Quick Risk Matrix Premium. The risk matrices shown are very simple for purposes of illustration and are not intended to represent practical designs.\n\nWe start by creating an example risk graph with six risk priority levels defined by five iso-risk contours.",
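As an aside, the Geometric Center Method is easy to apply outside the tool, e.g. in a spreadsheet or a few lines of code. The cell bounds below are invented for illustration:

```python
import math

def geometric_center_risk(x1, x2, y1, y2):
    """Risk at the geometric center of a cell bounded by x = x1..x2
    (consequence) and y = y1..y2 (probability), with risk = x * y."""
    return math.sqrt(x1 * x2) * math.sqrt(y1 * y2)

# The geometric center is the "mid" point in the ratio sense: R2/R1 == R3/R2
r1 = 10 * 0.25                                   # bottom-left corner risk
r2 = geometric_center_risk(10, 100, 0.25, 0.5)   # geometric center risk
r3 = 100 * 0.5                                   # top-right corner risk
assert abs(r2 / r1 - r3 / r2) < 1e-9

# Tie-breaking between two invented cells in the same risk priority level:
other = geometric_center_risk(100, 1000, 0.02, 0.1)
print("second cell ranks higher" if other > r2 else "first cell ranks higher")
```

The first assertion confirms the ratio property stated above; the final comparison then breaks a tie between two same-colored cells.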
null,
"Figure 1: Risk graph\n\nWe convert the risk graph to a risk matrix using the Predominant Color algorithm (one of several algorithms in Quick Risk Matrix). This colors each cell split by iso-risk contours according to the color that predominates in the cell.",
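The idea behind the Predominant Color rule can be illustrated by brute force: sample a grid of points inside a split cell and take the majority level. This is a sketch with invented bounds and a single contour, not the tool's actual computation:

```python
def predominant_level(c_lo, c_hi, p_lo, p_hi, contours, n=200):
    """Estimate the majority risk priority level inside a cell by
    sampling an n-by-n grid of points and counting levels."""
    counts = {}
    for i in range(n):
        for j in range(n):
            c = c_lo + (i + 0.5) * (c_hi - c_lo) / n
            p = p_lo + (j + 0.5) * (p_hi - p_lo) / n
            level = sum(p * c > t for t in contours)  # contours ascending
            counts[level] = counts.get(level, 0) + 1
    return max(counts, key=counts.get)

# A cell straddling the contour risk = 0.25: most of its area lies below
# the contour, so the Predominant Color rule assigns the lower level.
print(predominant_level(0.2, 0.6, 0.2, 0.6, [0.25]))  # → 0
```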
null,
"Figure 2: Risk matrix\n\nThe above risk matrix has been contrived to be identical to one developed by Li et al. using their Sequential Updating Approach (SUA) with an alpha value of 0.8.\n\nThe performance benchmarks for the above matrix were calculated in Quick Risk Matrix based on 100,000 points generated assuming probability and consequence to be uniformly distributed and independent (as per Li et al.):\n\n• Points mapped accurately to risk priority level 65%\n• Points with overestimated risk priority level 15%\n• Points with underestimated risk priority level 20%\n• PEE 11%\n• PRR 1.6%\n\nThe mapping accuracy is poor but the ability to rank pairs of risks as indicated by PEE and PRR is quite good. Cox's claim that typical risk matrices have an error rate in ranking pairs of risks in excess of 90% is not borne out.\n\nCox stated in his paper that \"For risks with negatively correlated frequencies and severities, they [risk matrices] can be 'worse than useless,' leading to worse-than-random decisions.\" To test this claim, we will run our performance simulation again but this time assuming that probability and consequence are negatively correlated with Spearman correlation coefficient of -0.8. A small subset of the generated points is shown below overlaid on the risk matrix:",
null,
"Figure 3: Risk matrix overlaid with a sample of negatively correlated points\n\nThe performance benchmarks for the matrix with the negatively correlated risks are\n\n• Points mapped accurately to risk priority level 64%\n• Points with overestimated risk priority level 16%\n• Points with underestimated risk priority level 20%\n• PEE 20.2%\n• PRR 2.3%\n\nOur results are almost the same as before for mapping accuracy but somewhat worse for PEE and PRR. However, Cox's claim that the risk matrix should lead to \"worse than random decisions\" with negatively correlated risks is clearly disproved.\n\nWe shall now investigate the effect of using fewer colors. Eliminating the smallest iso-risk contour produces the risk matrix shown below. This is identical to the risk matrix produced by Li et al. using SUA with an alpha value of 0.83.",
null,
"Figure 4: Risk matrix with the number of colors reduced to five\n\nThe performance benchmarks for the matrix with the number of colors reduced to five (risks treated as independent as per Li et al.) are shown below with the values from the initial analysis in parentheses.\n\n• Points mapped accurately to risk priority level 77% (65%)\n• Points with overestimated risk priority level 4% (15%)\n• Points with underestimated risk priority level 19% (20%)\n• PEE 11% (11%)\n• PRR 1.1% (1.6%)\n\nEliminating one color has substantially increased the mapping accuracy, raising it from 65% to 77%, with the PEE the same as before and the PRR slightly better.\n\nAt this point, we shall stop emulating the designs obtained with SUA by Li et al. This is because, as the SUA alpha value increases, the lowest risk priority level occupies more and more of the chart area. For example, for an alpha value of 0.95, the SUA-based design given by Li et al. results in every cell being in the lowest risk priority level except for the top right cell. Such a matrix would not be at all useful in practice. Since the alpha value dictates not only the number of colors but also the risk matrix coloring pattern, it appears that the method of Li et al. cannot take into account specific iso-risk contour values for the purpose of defining thresholds between risk priority levels. Instead, the coloring pattern implies the iso-risk contour values! This makes SUA unsuitable as a basis for risk matrix design because it does not allow the designer to specify key parameters. Cox's design approach has a similar deficiency. His axioms are so restrictive as to give little choice over the coloring pattern and therefore cannot reflect the designer's choice of risk thresholds.\n\nSo now we reduce the number of colors to three by eliminating two more iso-risk contours. We chose the contours to retain so that the domain was divided into three very roughly equal areas.",
null,
"Figure 5: Risk matrix with the number of colors reduced to three.\n\nWith the number of colors reduced to three, the performance benchmarks are as follows (with the values from the 6-color design in parentheses):\n\n• Points mapped accurately to risk priority level 88% (65%)\n• Points with overestimated risk priority level 1% (15%)\n• Points with underestimated risk priority level 11% (20%)\n• PEE 11% (11%)\n• PRR 0.5% (1.6%)\n\nThe improvement in mapping accuracy due to reducing the number of colors is once again substantial. PRR has also improved. PEE is about the same.\n\n### Summary of Numerical Results\n\nWe performed a few more calculations in addition to those described above and summarize the results below.\n\nSpearman correlation coefficient = 0\n\n| Benchmark | 6 colors | 5 colors | 3 colors |\n| --- | --- | --- | --- |\n| Mapping accuracy | 65% | 77% | 88% |\n| PEE | 11% | 11% | 11% |\n| PRR | 1.6% | 1.1% | 0.5% |\n\nSpearman correlation coefficient = -0.8\n\n| Benchmark | 6 colors | 5 colors | 3 colors |\n| --- | --- | --- | --- |\n| Mapping accuracy | 64% | 74% | 84% |\n| PEE | 20% | 20% | 20% |\n| PRR | 2.3% | 2.3% | 1.0% |\n\n## Conclusions\n\nQuick Risk Matrix (Premium) includes a risk matrix performance benchmarking tool to calculate several statistics:\n\n• Mapping accuracy\n• Probability of Rank Reversal (PRR)\n• Probability of Elimination Error (PEE)\n\nThe main purpose of a risk matrix is to map probability and consequence categories to risk priority levels. The mapping accuracy benchmark is an indicator of how well a matrix can do this. It is useful for comparing alternative risk matrix designs.\n\nThe Probability of Rank Reversal (PRR) is the probability that a pair of risks, with the two risk points located in different risk priority levels, is ranked incorrectly by the risk matrix. For our worked examples, the PRR was small (0.5% to 2.3%). For well-designed risk matrices, it is our experience that PRR is typically small.\n\nThe Probability of Elimination Error (PEE) is the probability that a pair of risks located anywhere in the risk matrix will be ranked incorrectly. 
Now, the risk matrix on its own is incapable of ranking a pair of risks when both points lie in the same risk priority level. When the two risks have equal risk priority level, the decisionmaker must break the tie by means of supplementary calculations. When calculating PEE, Quick Risk Matrix assumes that the decisionmaker would use the Geometric Center Method (explained above) for breaking ties. Thus, PEE is an indicator of how well a decisionmaker might do in ranking pairs of risks with the aid of the risk matrix and some supplementary calculations.\n\nWe consider PEE to be a statistic of lesser importance since, in the real world, risks are not chosen for elimination or reduction solely on the basis of magnitude. The cost of elimination or reduction plays an important role. If there are many small risks that can be inexpensively treated, the cumulative risk reduction may be greater than what could be achieved by treating one or two larger but more intractable risks.\n\nIn our worked examples, we found a reduction in risk matrix performance when probability and consequence are negatively correlated. The reductions were modest and not sufficient to detract from the usefulness of the risk matrix.\n\nCells split by iso-risk contours have ambiguous risk priority level and create mapping errors. Our worked examples illustrate that the mapping accuracy benchmark typically improves as the number of colors in the matrix reduces. This is to be expected since fewer cells will be split by iso-risk contours when the number of colors is reduced. We recommend that the number of colors is not more than needed by the organization's risk management philosophy. For many organizations, three colors may be sufficient, representing risks too high to be tolerated, risks so low that they are broadly acceptable, and risks that are acceptable provided that they have been reduced to as low as is reasonably practicable (ALARP).\n\n## References\n\nRef. 1. 
Cox, L.A., What’s Wrong with Risk Matrices? Risk Analysis, Vol. 28, No. 2, 2008.\n\nRef. 2. Xing Hong, Risk Matrix Analysis Using Copulas. Dissertation, The George Washington University.\n\nRef. 3. Li J., Bao C., Wu, D. How to Design Rating Schemes of Risk Matrices: A Sequential Updating Approach. Risk Analysis, Vol. 38, No. 1, 2018.\n\n]]>\nhttps://riskmatrix.co/blogs/risk-matrix-knowledge-bank/nasa-risk-matrix-an-example-of-how-not-to-design-a-risk-matrix 2018-03-27T20:55:00-05:00 2018-05-26T20:35:33-05:00 How Not to Design a Risk Matrix - NASA's Risk Matrix Peter Carr More\n\n]]>\nNASA's activities include building spacecraft, instruments and new technology to study Earth, the Sun, the solar system and the universe.\n\n## NASA's Standard Risk Matrix\n\nIn assessing the risks to its projects, NASA uses a standard risk matrix as shown in Figure 1 (reference NASA Goddard Procedural Requirements GPR 7120.4D, 2012).\n\nThe likelihood and consequence categories are ordinal with labels 1, 2, ... 5 along each axis. Ordinal means that the category labels do not indicate the true magnitude of the variables but only the direction of increase. We shall revise the category labels to A, B, C, ... for the horizontal axis because a reference to a cell by its column and row labels will be clearer when written as, for example, D2 rather than as 42 (D2 reminds us which digit represents the column and which represents the row). With this revision to the labels, NASA's standard matrix is as shown in Figure 1.",
null,
"Figure 1: NASA standard risk matrix (redrawn by us)\n\nIn case you are wondering why Figure 1 uses those particular shades of red, yellow and green to denote the risk priority levels, rather than the corresponding primary colors, it is to provide better differentiation for people with one of the three common forms of color blindness.\n\nNASA applies the risk matrix shown in Figure 1 to different types of consequence, such as safety, technical, schedule, cost, etc. NASA associates the five consequence categories with detailed descriptions but rarely defines the consequence categories quantitatively. As we explain elsewhere in this blog, it is not possible to perform meaningful calculations with ordinal scales and, if we want to design a defensible risk matrix, we need to use ratio scales.\n\nFortunately for our audit of its risk matrix, NASA has provided ratio scales (as well as ordinal scales) for two types of consequence – budget increase and operational cost threat – and we are going to use these scales to carry out logical consistency checks on their risk matrix. In making these checks, we shall assume that NASA calculates risk as the product of probability and consequence, which is by far the most common method and nothing in NASA's documentation appears to suggest anything different.\n\nThe NASA consequence categories for budget increase are <2%, 2–5%, 5–10%, 10–15%, and >15%. For operational cost threat, they are <\\$1M, \\$1M–\\$5M, \\$5M–\\$10M, \\$10M–\\$60M, and >\\$60M. Notice that the first and last categories are open at one end. We shall make some reasonable assumptions for the values of the missing endpoints since we shall want to draw the matrices to scale.\n\nNASA likelihood categories are 2%–10%, 10%–25%, 25%–50%, 50%–75%, and >75%. We will close the last category by replacing it with 75%–100%.\n\n## NASA's Risk Matrix Applied to Budget Increase\n\nFigure 2 shows NASA's risk matrix when applied to budget increase and when drawn to scale. 
Note that we have used a logarithmic scale for the horizontal axis and a linear scale for the vertical axis. The choice of linear or logarithmic scale is made according to which is better at preventing overlapping of labels and has no effect on risk matrix design.",
null,
"Figure 2: NASA risk matrix for budget increase when drawn to scale\n\nWe detected an inconsistency in the risk matrix for budget increase. Look closely at cells B3 and D2. As shown in Figure 3 below, the quantitative risk in both these cells ranges from 0.5 at the bottom left corner of the cell to 2.5 at the top right corner. Yet B3 has a lower risk priority level than D2. Both cells should be of the same color since they contain an identical range of risks.",
null,
"Figure 3: Showing an inconsistency in NASA's risk matrix for budget increase\n\nWe conclude from Figure 3 that the coloring of the matrix is inconsistent with its underlying ratio scales in the case of the consequence type \"budget increase\".\n\n## NASA's Risk Matrix Applied to Operational Cost Threat\n\nNow let's look at the NASA standard risk matrix when applied to operational cost threat. We can find inconsistencies in this risk matrix too, but it is not as straightforward to do so as in the previous case. The matrix can be assumed to be underlain by two undisclosed iso-risk contours that define the boundaries between risk priority levels. To identify inconsistencies, we need to know NASA's strategy for determining the color of a cell split by a contour. A split cell contains some risk points that lie in the risk priority level below the contour and other points that lie in the risk priority level above the contour. We shall assume that NASA's strategy is to apply the color above the contour to the split cell. Quick Risk Matrix provides a choice of algorithms for coloring split cells, one of which is called the \"round-up\" algorithm. In effect, we are assuming that NASA is using the round-up algorithm. With this algorithm, the color of a split cell is determined only by the risk value at the top right corner of the cell, which is the maximum possible value for a risk point in that cell. With this assumption, we can now proceed to look for inconsistencies considering only the maximum risk value in each cell.\n\nFigure 4 below shows NASA's matrix for operational cost threat with the maximum risk value written into several of the cells.",
null,
"Figure 4: NASA's risk matrix for operational cost threat with the maximum risk value written into several cells\n\nExamining Figure 4, cell C2 (Medium risk) has a maximum value of 2.5, which is identical to the maximum value for cell B3 (Low risk) and less than the maximum value for cell D1 (Low risk). We can see that NASA is not weighting for higher consequences because D1 (Low risk) is higher consequence than C2 (Medium risk). We could resolve this inconsistency in various ways. One way would be to assume that C2 is correctly colored. Then the color of cells B3 and D1 should be the same as the color of cell C2, i.e. cells B3 and D1 should be uprated from Low risk to Medium risk.\n\nStill looking at Figure 4, cell C5 (High risk) has a maximum value of 10, which is less than the maxima for cells D3 and D2 (maxima 30 and 15, respectively, and Medium risks). Once again, we can see that NASA is not weighting for high consequence since C5 (High risk) has lower consequence than D3 or D2 (Medium risk). Again, the inconsistency could be resolved in various ways. If we assume cells D3 and D2 (Medium risk) are correctly colored, then cell C5 would need to be downrated from High to Medium risk.\n\n## The Corrected Risk Matrices\n\nIt is emphasized that there are many ways to revise the two risk matrices to make them internally consistent. We gave examples of possible corrections in the previous sections. Applying those corrections, the revised risk matrices are as shown in Figures 5 and 6 below.",
null,
"Figure 5: One possible revision to NASA's risk matrix for budget increase to make it self-consistent",
null,
"Figure 6: One possible revision to NASA's risk matrix for operational cost threat to make it self-consistent\n\nA matrix identical to that in Figure 5 may be obtained in Quick Risk Matrix by using the round-up algorithm and iso-risk contour values of 1.25% and 5%.\n\nA matrix identical to that in Figure 6 may be obtained in Quick Risk Matrix by using the round-up algorithm and iso-risk contour values of \\$1.25M and \\$30M.\n\nAlthough we made both matrices self-consistent, we had to adopt different coloring patterns to do so. This illustrates that changing the axis scales of a risk matrix will, in general, also change the required coloring pattern.\n\n## Discussion and Conclusions\n\nWe can learn many lessons from this review:\n\n• For the two consequence types for which we were able to audit NASA's risk matrix because ratio scales had been defined, we demonstrated that the risk matrix coloring is inconsistent with the underlying ratio scales (based on the usual definition of quantitative risk as the product of probability and consequence). The anomalies cannot be explained by assuming that NASA is weighting for higher consequences.\n• There may be additional design imperfections in the two matrices but it is difficult to say with certainty without knowing the design intent, in particular, the intended values of the two risk thresholds separating the three risk levels.\n• When only ordinal scales are given, the associated matrix will be largely unverifiable and therefore cannot be regarded as technically defensible. This applies to most of the scales employed by NASA. Sometimes ordinal scales look like numbers (e.g. 1...5), but they are really just labels and calculations should never be performed with them.\n• The same risk matrix coloring cannot be blindly assumed to apply when the underlying scales are changed. 
If the scales (or the risk thresholds) are changed, the appropriate coloring pattern will change in general.\n• If it is desired to use the same coloring pattern with different scales and risk thresholds, this will only be possible when the axis scales and risk thresholds are changed proportionately. Arbitrary changes to the scales or thresholds will invalidate the coloring.\n• Even organizations with impressive technology such as NASA are not immune from making errors when designing risk matrices.\n• Errors of design can be avoided by using Quick Risk Matrix, which guarantees a self-consistent risk matrix every time.\n]]>\nhttps://riskmatrix.co/blogs/risk-matrix-knowledge-bank/how-not-to-design-a-risk-matrix-1 2018-03-27T19:02:00-05:00 2018-05-26T19:25:09-05:00 How Not to Design a Risk Matrix - A Public Health Risk Matrix Peter Carr More\n\n]]>\nIn another post, we discussed a concern expressed by Cox (Ref. 1) that risk matrices may be \"worse than useless\" when probabilities and consequences are negatively correlated. We showed in that post that the concern only arises in highly contrived situations.\n\nA paper by Vatanpour et al. of the University of Alberta (Ref. 2) is particularly interesting because it attempts to investigate Cox's concern by reference to real-world data from the public health field. The authors conclude that Cox's concern is valid, but we shall show in this post that their conclusion is illusory and due to the use of a poorly designed risk matrix.\n\nThe risk matrix used by Vatanpour et al., redrawn with axes to scale, and displaying ratio scales as well as categories, is as follows:",
null,
"Figure 1: Risk matrix of Vatanpour et al.\n\nThe frequency scale is as stated by Vatanpour et al. The severity scale was not explicitly given but we noted that severity was scored as 10, 100, 1000, 1E4 and 1E5 for the five categories, respectively. Our tick marks were calculated to place the given severity scores approximately at the geometric mean of each severity category. For example, the geometric mean of 3.16 and 31.6 is approximately 10. The matrix classifies risks into four risk priority levels, ranging from Low to Very High.\n\nVatanpour et al. plotted the risks associated with a number of blood-borne infectious diseases on their matrix. The plotted points exhibited negative correlation, as would be expected (since more severe consequences tend to be less frequent). The Spearman correlation coefficient was -0.81. Due to a paucity of real-life data, they also included four risk points that were fictitious but chosen to fit with the general trend. The artificial data points were called Datum 1 through Datum 4.\n\nThe authors observed that Datum 2 (empirical risk = 0.21) and Datum 4 (empirical risk = 0.5) were rated as Medium risk whereas TT Virus (empirical risk = 10) was rated as Low risk - see Figure 2 below. This type of anomaly, where larger risks are ranked lower than smaller risks, is called rank reversal.",
null,
"Figure 2: The rank reversal anomaly observed by Vatanpour et al.\n\nVatanpour et al. attempt to explain the anomaly as follows:\n\n\"The generated data points 2 and 4 have estimated risk values of 0.21 and 0.50 and both are categorized in Figure 5 as medium risks. When compared with TT virus, which was categorized as a low risk in Figure 5, we find that it has an estimated (according to Equation (4)) risk of 10. This anomaly illustrates the concern posed by Cox, that the risk assessment matrix provides a risk categorization (color code) that is incorrect in relation to an empirical calculation of the risk . Although we had to resort to generating data from an empirical relationship derived from experiential frequency estimates, we have found that the theoretical concern of Cox can be demonstrated for hazard data derived from authentic experience.\"\n\nIt is surprising that the authors did not discuss the possible alternative explanation that the rank reversal error might be due to inadequate risk matrix design. It is also surprising that this possibility was not picked up in the journal's peer review process.\n\nConsider the cells in which Datum 2 and Datum 4 are located. Points in either of these cells can have risks ranging from 0.0316 at the bottom left to 3.16 at the top right. Compare with the cell containing the data point relating to TT virus (TTV). The risk range for the cell containing the TTV data point is 0.316 to 31.6, i.e. risks in this cell are 10 to 100 times higher than in the cells containing Datum 2 and Datum 4. 
Therefore, if the yellow coloring is correct for the latter cells, then the cell containing the TTV data point cannot be green but must be at least yellow.\n\nThus, the authors have not validated the theoretical concern of Cox but instead have vividly demonstrated a common error in risk matrix construction, which is to fail to properly account for the axis scales when determining the matrix coloring pattern.\n\nWe will redesign the authors' risk matrix using Quick Risk Matrix. Unfortunately, the authors did not state the quantitative values defining the four risk zones. We will assume that the three iso-risk contours separating the four risk zones take the values 3160, 31.6 and 0.316. Thus, each risk threshold is a factor of 100 lower than the one above it. This gives a risk graph as shown below. Quick Risk Matrix is based on the concept that one develops a risk graph and then converts it to a discrete risk matrix. Several different algorithms are provided for making the conversion.",
null,
"Figure 3: Risk graph to serve as the basis for the redesign of the risk matrix\n\nSince this risk matrix is for use in connection with the important safety issue of bloodborne disease, it is prudent to use the Round-Up algorithm, which colors each split cell according to the highest risk zone in it. The Round-Up algorithm ensures that any errors will be on the safe side, i.e. the risk level will never be underestimated. With the Round-Up algorithm, we arrive at the following design:",
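The Round-Up rule is simple to express in code. Here is a minimal sketch assuming the contour values 0.316, 31.6 and 3160 adopted above; the cell risk ranges are those computed earlier (0.0316 to 3.16 for the Datum cells, 0.316 to 31.6 for the TT virus cell):

```python
from bisect import bisect_left

# Assumed iso-risk contour values separating the four priority levels.
CONTOURS = [0.316, 31.6, 3160.0]
LEVELS = ["Low", "Medium", "High", "Very High"]

def round_up_level(cell_max_risk):
    """Round-Up rule: a split cell takes the level of the highest risk
    zone it contains, i.e. the zone just below its top-right corner."""
    return LEVELS[bisect_left(CONTOURS, cell_max_risk)]

datum_level = round_up_level(3.16)  # cell holding Datum 2 and Datum 4
ttv_level = round_up_level(31.6)    # cell holding the TT virus point
```

Both cells come out Medium, so the redesigned matrix ranks all three data points the same and the rank reversal disappears.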
null,
"Figure 4: Redesign in Quick Risk Matrix based on the Round-Up algorithm\n\nNote that our redesign has eliminated the rank reversal anomaly observed by Vatanpour et al. All three data points are now ranked the same, i.e. medium risk.\n\nOur redesign has a coloring pattern substantially different to that of Vatanpour et al. This suggests that if we look hard enough at the design of Vatanpour et al., we should be able to detect some inconsistencies. This is indeed the case. For example, compare the cells in column 2, row 6, and column 3, row 5. Points in both these cells lie in the risk range 3.16 to 316. In the design of Vatanpour et al. (see Figure 1 or 2), these two cells have different colors. Since these cells have identical risk ranges, they should have the same color. In fact, with these axes, where the tick mark values increase by a factor of 10 as we move across the columns or rows, the risk range is identical for all cells lying on any diagonal line having a slope of -1. The color should, therefore, be the same for all cells lying on such a diagonal. Informed by this observation, we can see that the risk matrix design of Vatanpour et al. contains multiple incorrectly colored cells.\n\nWe can benchmark the accuracy of our redesign using the performance simulation tool in Quick Risk Matrix Premium. We assume that the frequency and severity have a log-uniform distribution with a Spearman correlation coefficient of -0.81 (the value given by Vatanpour et al.). We find that the Round-Up algorithm is predicted to achieve a mapping accuracy of 76% with all errors on the safe side. If we simulate for pairs of random points, the percentage error in ranking the two points of a pair is found to be 0.7% if the two points lie in different colored regions and 23% if they do not necessarily lie in different colored regions. 
(We explain in another post how all these benchmarks are calculated.)\n\nNote that the benchmarks do not necessarily accurately predict real-world performance but are intended only for comparing different risk matrix designs.\n\nOur final illustration shows a sample of points generated, with the above-described negative correlation, plotted on the risk matrix produced by the Round-Up algorithm. A substantial proportion of the generated points lie in cells split by iso-risk contours, which explains why the mapping accuracy is relatively low at 76%. Mapping errors are inevitable when using a risk matrix rather than a risk graph but, if we are concerned about them, we can use the round-up algorithm to ensure that errors are on the safe side.",
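The equal-risk-diagonal observation can also be checked mechanically. Because every tick value is a power of 10, it is convenient to work with base-10 exponents; the tick exponents below are hypothetical and serve only to illustrate the factor-of-10 spacing:

```python
# Base-10 exponents of hypothetical axis tick values.
f_exp = list(range(-4, 4))  # frequency ticks: 1e-4 ... 1e3
s_exp = list(range(0, 8))   # severity ticks: 1e0 ... 1e7

def cell_risk_exponents(col, row):
    """Exponents of the minimum and maximum risk within cell (col, row).
    The result depends only on col + row, so all cells on the same
    anti-diagonal share one risk range -- and must share one color."""
    return (f_exp[row] + s_exp[col], f_exp[row + 1] + s_exp[col + 1])
```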
null,
"Figure 5: A sample of negatively correlated points used to benchmark the accuracy of the risk matrix\n\nIt is concluded that the risk matrix of Vatanpour et al. did not validate Cox's concern relating to negative correlations but instead demonstrated a common error in risk matrix design, namely not adequately considering the axis scales.\n\nRef. 1. Cox, L.A., What’s Wrong with Risk Matrices? Risk Analysis, Vol. 28, No. 2, 2008.\n\nRef. 2. Vatanpour S., Hrudey S.E., and Dinu, I., Can Public Health Risk Assessment Using Risk Matrices Be Misleading? Int. J. Environ. Res. Public Health 2015, 12, 9575-9588.\n\n]]>\nhttps://riskmatrix.co/blogs/risk-matrix-knowledge-bank/how-not-to-design-a-risk-matrix-national-health-service-risk-matrix 2018-03-27T17:58:00-05:00 2018-05-26T19:24:08-05:00 How Not to Design a Risk Matrix - National Health Service (UK) Risk Matrix Peter Carr There are many ways in which to design a poor risk matrix, but one method is used so often that it deserves its own article. This unsound method involves the use of ordinal scales for likelihood and consequence and a pretense that the ordinal scales (which are essentially just labels) can express the magnitude of the likelihood and consequence variables.\n\nMore\n\n]]>\n\nA case in point is the risk matrix of the UK National Health Service (NHS). (Reference: A Risk Matrix for Risk Managers, NHS National Patient Safety Agency, 2008.) Introduced in 2008, the risk matrix continues to be used by health organizations across the United Kingdom:",
null,
"Figure 1: NHS risk matrix\n\nFigure 1 shows the ordinal scales, which range from 1 to 5 on each axis. It also shows the score assigned by the NHS to each cell, which is the product of its column and row ordinal values, and the risk scores that define the four risk priority levels (\"low,\" \"moderate,\" \"high\" and \"extreme\" risk).\n\nThe NHS uses the same matrix for 10 types of consequence. It terms a consequence type a \"domain.\" Two examples of domains are \"Impact on the safety of patients, staff or public ...\" and \"Adverse publicity/ reputation.\" It also uses the same matrix in relation to three different likelihood scales: (1) broad linguistic descriptors of frequency (no numbers), (2) time-based descriptors (\"daily\", \"weekly\", \"monthly\", etc.), and (3) probability descriptors (\"<0.1%\", \"0.1 to 1%\", \"1 to 10%\", etc.).\n\nIt is not explained why the risk matrix coloring should be expected to be the same for 10 different consequence domains and three different likelihood scales. Nor is it explained why the thresholds between different risk priority levels should also be the same regardless of the consequence domain or frequency scale.\n\nRational risk matrix design depends on the use of ratio (quantitative) scales for likelihood and consequence. If we are to audit the NHS matrix, we need to identify the ratio scales that underlie it. Unfortunately, for most of the consequence domains, the descriptors are linguistic, not quantitative. There are a few exceptions, however, so all is not lost.\n\nWe will consider the consequence domain called \"business objectives/projects.\" It appears that this domain is intended to be used in conjunction with the probability descriptors of likelihood. Although we can extract most of the axis tick mark values from the NHS documentation, the upper and lower bounds of the scales are undefined, so we are forced to make some reasonable assumptions. 
For example, the \"Catastrophic\" consequence is defined as a budget overrun greater than 25%, so we have assumed that that category could range from 25% to 100%. It is not widely appreciated that the bounds do matter and can affect the coloring of the top and bottom rows and the left and right columns. So, with reasonable assumptions, the scales for this case look something like this (plotted on log scales):",
null,
"Figure 2: NHS risk matrix - scales for business objectives/projects\n\nTo design a risk matrix, we also need to know the values of the iso-risk contours that separate the different risk priority levels. That information was not provided but we can make some assumptions and perform sensitivity studies. We quickly discover that any design we produce using the above scales is not going to be symmetrical, in contrast to the NHS matrix which is completely symmetrical.\n\nWe assume in this audit that risk should be calculated as the product of probability and consequence, as is conventional. We can be confident this is how the NHS views risk since we have seen that the NHS multiplies row and column numbers to obtain a surrogate for risk and so is definitely not using a more sophisticated definition.\n\nOne attempt to reproduce the NHS matrix looks as shown below in Figure 3. We used the Predominant Color algorithm of Quick Risk Matrix for this design. With that algorithm, each cell split by an iso-risk contour takes the color of the risk priority level that occupies the largest area of the cell.",
null,
"Figure 3: An attempt to reproduce the NHS risk matrix\n\nWe arrived at the matrix in Figure 3 by positioning the two lowest contours so that columns 1 and 2 match the NHS design and by positioning the highest contour so that row 5 matches the NHS design. But it is evident that we cannot match the entire matrix and no amount of playing with the contour values is going to produce a complete match. We also tried other coloring algorithms with no better result.\n\nThus, it appears that the standard NHS matrix is not consistent with this particular set of axis scales. That suggests that we might be able to find inconsistencies in the NHS matrix and indeed we can. Look at the two cells marked up in the following figure:",
null,
"Figure 4: Two cells in our redesign that the NHS risk matrix (Figure 1) rates inconsistently\n\nOur design in Figure 4 identifies two cells in risk priority level \"High\" that the NHS matrix in Figure 1 colors inconsistently. Note that the cell in column 2, row 4, contains points corresponding to risks ranging from 0.01 to 0.25, while the cell in column 3, row 2, contains risks from 0.01 to 0.5. The NHS risk matrix (Figure 1) ranks the former cell as \"High\" risk and the latter as \"Moderate\" risk. Since the two cells have the same lower bound of 0.01 but the latter has a greater upper bound, the latter cell cannot have a lower risk rating than the former, as it does in the NHS matrix.\n\nThere are other inconsistencies in the NHS matrix but we shall be content to just give this one example.\n\nNow suppose we accept Figure 4 as the \"correct\" coloring (we put \"correct\" in quotes because there is more than one possible design). Figure 4 is internally consistent, unlike the NHS matrix. Now suppose we consider a different consequence domain, with an axis having different subdivisions and/or with different values for the iso-risk contours separating the risk priority levels. There is a very good chance that we will find that a different coloring pattern would be required.\n\nOne may change an axis by a scaling factor without changing the coloring pattern, provided the risk thresholds are scaled by the same factor. For example, it makes no difference to the risk matrix design whether we use dollars or pesetas for a money axis. The same is true for scaling factors generally. But once the relative widths of the axis subdivisions differ, then the required coloring pattern will likely differ. 
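To make the comparison of the two cells concrete, the check amounts to a couple of lines (the risk bounds are those reconstructed above):

```python
# Risk ranges (probability times consequence) of the two NHS cells.
cell_col2_row4 = (0.01, 0.25)  # rated "High" by the NHS matrix
cell_col3_row2 = (0.01, 0.50)  # rated "Moderate" by the NHS matrix

# The second cell spans every risk the first does, and more, so it
# cannot consistently carry a lower rating than the first.
dominates = (cell_col3_row2[0] <= cell_col2_row4[0]
             and cell_col3_row2[1] >= cell_col2_row4[1])
```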
The safest approach is to use Quick Risk Matrix to verify the matrix design in relation to all the scales with which it is to be used.\n\n]]>\nhttps://riskmatrix.co/blogs/risk-matrix-knowledge-bank/how-to-design-a-risk-matrix 2018-03-27T17:35:00-05:00 2018-06-06T17:08:05-05:00 How to Design a Risk Assessment Matrix (Correctly) Peter Carr More\n\n]]>\nThis article outlines the methodology used in Quick Risk Matrix for the development of risk matrices, also known as risk assessment matrices.\n\nA risk matrix is a chart in which one axis is subdivided to indicate categories of likelihood while the other axis is subdivided to indicate categories of consequence. The cells of the matrix are colored to indicate how any given pair {likelihood category, consequence category} maps to a risk category. We term the risk categories \"risk priority levels.\"\n\nLikelihood may be expressed as probability or as frequency. We will use the term probability from this point on, but it should be remembered that it could equally well be frequency.\n\nRisk matrices are used in risk assessments. Potential unwanted events are identified. Each event is assigned to a probability category and to a consequence category. The probability category indicates the likelihood that the event will occur. The consequence category indicates the severity of the consequences. For a given pair of categories, the risk matrix indicates the risk priority level. The organization's risk management procedures typically make use of the risk priority levels when specifying acceptability, urgency, priority, required level of management attention, etc.\n\nThe most popular size for a risk matrix is 5 x 5 but some organizations are using larger risk matrices with up to 10 rows and columns. Quick Risk Matrix imposes no limit on the number of rows or columns.\n\nAn example risk matrix is shown below. The categories are labeled and also defined by their numeric ranges. 
Note that we often use E-notation to keep the chart tick labels compact so that they do not overlap.\n\nWhen the probability or consequence variable spans a large range, the axis will use a logarithmic scale for legibility. This example uses log scales. The choice of logarithmic or linear scale makes no difference to the coloring pattern of the risk matrix; it is only a legibility issue.",
null,
"Figure 1: Example risk matrix\n\nIn the figure above, the probability and consequence categories have been defined using numeric scales (e.g. 0.1, 1, 10, …) and also with labels (e.g. Very Low, Low, ...).\n\nThe numeric scales are known as \"ratio\" scales, because we can compare the sizes of quantities expressed on these scales (i.e. calculate ratios). Most scientific and engineering measurements are made on ratio scales.\n\nThe category labels are known as “ordinal” scales. An ordinal scale shows the order in which a variable increases but often tells us nothing about its magnitude.\n\nSome organizations omit ratio scales from their risk matrices and use only ordinal scales. A risk matrix that uses only ordinal scales is almost unverifiable. The only consistency check that can be made is whether a cell above and/or to the right of another cell has an equal or higher risk priority level. So, for example, referring to the above figure, the cell {High, Likely} must have a risk priority level equal to or higher than the cell {High, Unlikely}, because both the consequence and the probability in the former cell are equal to or higher than in the latter cell. But, there is no way to verify the ranking of a pair of cells when one cell is to the right and below another cell. For example, the cell pair {High, Likely} and {Very High, Unlikely} cannot be ranked because the former has lower consequence but higher probability than the latter, and there is no way of knowing whether the consequence or the probability dominates.\n\nA further disadvantage of purely ordinal scales is that linguistic terms like “low”, “moderate”, “likely” and so on will be interpreted very differently by different people.\n\nSince a risk matrix that uses only ordinal scales cannot be fully verified, it follows that its use cannot normally be defended. 
There is an exception to this rule, which we will describe below.\n\nThe exception is that a matrix with a ratio scale for probability and an ordinal scale for consequences is sometimes reasonable when consequences are difficult or controversial to measure. A prime example might be a risk matrix for workforce safety. Let’s say the consequence scale has ordinal categories A–E, the workforce population size is 200, and the categories are defined as “A. One or more injuries, not severe,” “B. One or more severe, possibly permanently disabling, injuries,” “C. 1–4 fatalities,” “D. 5–50 fatalities,” “E. 51–200 fatalities.” The difficulty here is that the measurement units are mixed, being number of injuries at the lower end and number of fatalities at the higher end. One way round this difficulty is to temporarily assign a ratio scale, either based on the monetary value of averting an injury/fatality or based on treating an injury as a fraction of a fatality. Let’s say we treat an injury as a fraction of a fatality. Then the tick marks on the ratio scale might be (for example) 0, 0.5, 1, 5, 50, 200. (If using a log scale, zero should be replaced with a small non-zero number since zero cannot be plotted on a log scale.) The matrix is then designed based on the temporary consequence scale and a suitable ratio scale for probability. Finally, the temporary consequence scale is hidden since the injury to fatality equivalence is uncertain and possibly controversial.\n\nWhen ratio scales are used, the risk matrix may be developed with mathematical precision. 
Quick Risk Matrix requires the use of ratio scales, even if only temporary as described in the previous paragraph.\n\nWhen setting up a ratio scale for consequences, any units of measurement relevant to the type of risk you are assessing may be used, for example, dollars, number of fatalities, volume of chemical discharged to the environment, hectares of land contaminated by a spill, percentage budget overrun, weeks of schedule delay, and so on.\n\nAnother requirement for objective risk matrix design is clarity on how risk is to be measured. There is a high degree of consensus that risk is to be computed by combining consequence and likelihood (see for example ISO Guide 73:2009). By far the most common method of forming the combination is to multiply the values of the probability and consequence variables (i.e. risk = probability times consequence). Note that the product of the two variables is the statistical \"expected value,\" i.e. the average value of the consequence measure in a large number of identical situations. We call \"expected value\" \"expected loss\" when the consequences are detrimental. In a future article, we will look at an alternative to expected value that is available in the Premium edition of Quick Risk Matrix.\n\nA \"risk graph\" (our term) is a logical predecessor to a risk matrix. The risk graph uses the same axes as the risk matrix. On it, we plot contours of equal risk (iso-risk contours) to define the boundaries between risk priority levels. The risk graph corresponding to our example risk matrix is presented below. Note that because we have used log-log scales, the risk contours are straight lines.",
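The straight-line property is easy to verify numerically: taking logarithms of p * c = r0 gives log(p) + log(c) = log(r0), a line of slope -1 on log-log axes. A small sketch with a hypothetical contour value:

```python
import math

r0 = 10.0  # hypothetical iso-risk contour value
probs = [0.01, 0.1, 1.0, 10.0]

# Points on the contour c = r0 / p, expressed in log-log coordinates.
points = [(math.log10(p), math.log10(r0 / p)) for p in probs]

# Slope between each pair of consecutive points is -1.
slopes = [
    (y2 - y1) / (x2 - x1)
    for (x1, y1), (x2, y2) in zip(points, points[1:])
]
```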
null,
"Figure 2: Risk graph\n\nA note on terminology. What we term a risk graph should not be confused with the quite different type of chart with the same name in the IEC 61511-3 standard on functional safety of safety instrumented systems. We thought about calling the above type of graph a probability-consequence graph but that's a mouthful and, if we did that, then perhaps we should use the term probability-consequence matrix in place of risk matrix. On balance, it seemed preferable to use the term risk graph as the direct analog of the term risk matrix.\n\nWe can construct the risk matrix (Figure 1) from the risk graph (Figure 2) in two steps. We begin by coloring the cells of the risk matrix that are not split by the risk contours. These cells can be colored based on the coloring of the risk graph, which, for our example, gives the result in Figure 3 below. There may be a lot of blank cells at this stage, especially if you are trying to use many colors in a small matrix.",
null,
"Figure 3: Risk matrix while under construction\n\nYou can see that some of the cells of the risk matrix have a known coloring that follows directly from the risk graph. But the cells that are split by the risk contours cannot be assigned accurately to any of the qualitative risk priority levels. That is because some points in the split cells lie in one level and other points lie in another. With closely spaced risk contours, it is possible for a cell to be split not just between two but between three or more risk levels.\n\nAt this stage in the risk matrix design, when we want to color the split cells, we are forced to make an approximation. Quick Risk Matrix Premium allows this approximation to be made in any of five ways, which we briefly refer to as (1) fuzzy interface, (2) round up, (3) predominant color, (4) round down and (5) geometric center. Quick Risk Matrix Standard offers the first three of these algorithms.\n\nThe \"fuzzy interface\" procedure treats the group of cells intersected by any iso-risk contour as if it is a separate risk level in its own right. If we were to do that in this example, we would end up with seven risk priority levels (colors) rather than the original four.\n\nIn the \"round up\" method, the color of a split cell is governed by the color of the highest risk zone in the cell. This means that the level of some risks in the cell (those risks corresponding to points lying under the highest contour intersecting the cell) will be overestimated. The merit of the rounding up method is that no risk will ever be assigned to a level that is too low. But it may be overly conservative for some circumstances, in which case one of the other methods may be used.\n\n\"Round down\" colors each split cell according to the color of the lowest risk zone in the cell. It is only intended to be used when designing opportunity matrices (an opportunity matrix is similar to a risk matrix but is used to rate potential gains rather than potential losses). 
The round-down method is conservative for opportunity matrices.\n\nWith the \"predominant color\" method, the color of a split cell is taken to be the color of the risk zone that occupies the largest area within the cell. This is often the most accurate method of designing a risk matrix, but it is not conservative.\n\nThe \"geometric center\" method colors each split cell according to the color of the risk graph at the geometric center of the cell. This often, but not always, gives the same result as the predominant color method.\n\nThe Premium edition of the program includes a simulation tool to enable the performance of the different coloring methods to be compared.\n\nIf we color our example matrix using the predominant color algorithm, we obtain the following result:",
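For illustration, the corner- and center-based rules fit in a few lines. The contour values and level names below are hypothetical, not those of the example matrix:

```python
import math
from bisect import bisect_left

CONTOURS = [0.1, 1.0, 10.0]  # hypothetical iso-risk contour values
LEVELS = ["Green", "Yellow", "Orange", "Red"]

def level_of(risk):
    return LEVELS[bisect_left(CONTOURS, risk)]

def color_cell(p_lo, p_hi, c_lo, c_hi, method="round_up"):
    """Color one cell. 'round_up' rates the cell by its top-right
    corner (the highest risk it contains); 'geometric_center' rates it
    by the point at the geometric mean of each axis range."""
    if method == "round_up":
        return level_of(p_hi * c_hi)
    gp = math.sqrt(p_lo * p_hi)
    gc = math.sqrt(c_lo * c_hi)
    return level_of(gp * gc)
```

For a cell spanning probabilities 0.1 to 1 and consequences 2 to 20, round-up returns the highest level while the geometric-center rule returns a lower level, which illustrates how much more conservative rounding up can be.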
null,
"Figure 4: Risk matrix after coloring the split cells using the predominant color algorithm\n\nIf we use the round-up algorithm, we obtain the following:",
null,
"Figure 5: Risk matrix after coloring split cells using the rounding-up algorithm\n\nAt this stage, the risk matrix is finished. You may hide the contours if you wish, you may hide the X- and Y-axis tick marks if they were only a construction aid, and you may export the risk graph and risk matrix to various picture formats. With the Premium edition, you may also export the risk matrix to Excel along with a risk register template.\n\n## Summary\n\nWhat makes the Quick Risk Matrix methodology so accurate is:\n\n• The use of user-defined ratio scales for the probability and consequence axes (as opposed to ordinal scales).\n• The definition of risk priority levels using contours of equal risk (\"iso-risk contours\"), the numeric values of which are defined by the user.\n• The recognition that cells split by the iso-risk contours can only be assigned to a single risk level via an approximation — and the approximation may be made in various ways.\n• The provision of several reasonable ways to make the above approximation.\n\nFor more detailed information on Quick Risk Matrix, you may care to look at the program's online help file.\n]]>"
] | [
null,
"https://cdn.shopify.com/s/files/1/2134/2733/files/Western_Australia_risk_matrix_1024x1024.png",
null,
"https://cdn.shopify.com/s/files/1/2134/2733/files/Western_Australia_risk_ratings_grande.png",
null,
"https://cdn.shopify.com/s/files/1/2134/2733/files/risk-matrix-axes_4x4_large.png",
null,
"https://cdn.shopify.com/s/files/1/2134/2733/files/risk-matrix-Cox_large.png",
null,
"https://cdn.shopify.com/s/files/1/2134/2733/files/risk-matrix-Cox-impermissible_large.png",
null,
"https://cdn.shopify.com/s/files/1/2134/2733/files/risk-matrix-model_large.png",
null,
"https://cdn.shopify.com/s/files/1/2134/2733/files/risk-matrix-Cox-modified_large.png",
null,
"https://cdn.shopify.com/s/files/1/2134/2733/files/risk-matrix-Cox-evolved_large.png",
null,
"https://cdn.shopify.com/s/files/1/2134/2733/files/Environmental_grande.png",
null,
"https://cdn.shopify.com/s/files/1/2134/2733/files/Financial1_927bcdb8-ad8e-4a71-bb6c-bc9975c1548f_grande.png",
null,
]
https://appadvice.com/app/waterworks-calculations/1133509204 | [
"## Fluid Flow Calculator: contains 94 Calculators for calculating flow in Weirs, Channels, Pipes, Waterworks and Civil Engineering parameters",
null,
"# Waterworks Calculations\n\nby V PUGAZHENTHI",
null,
"### App Details\n\n- Version: 3.2\n- Rating: (4)\n- Size: 14 MB\n- Genre: Education, Reference\n- Last updated: December 13, 2020\n- Release date: July 18, 2016\n\n### Waterworks Calculations is FREE but there are more add-ons\n\n- \$3.99: Unlock All the 96 Calculators\n\n### App Screenshots",
"### App Store Description\n\nFluid Flow Calculator : contains 94 Calculators for calculating flow in Weirs,Channels,Pipes,Water works and Civil Engineering parameters.\n- Available in Metric ( SI ) and Imperial Units ( USCS )\n• Capillary Rise\n• Viscosity\n• Kinematic Viscosity\n• Hydrostatic Pressure\nOffered as in-App purchase :\n• Total Energy - Elemental Volume of Fluid\n• Total Head of Flowing Fluid\n• Average Velocity of Flow\n• Summation of all Forces (X direction)\n\n• Froude Number\n• Reynolds Number\n• Weber Number\n• Head Loss due to Friction\n• Head Loss due to Friction (Darcy-Weisbach Formula)\n• Chezy Velocity\n• Chezy Coefficient\n\n• Manning Flow Velocity\n• Manning Flow Velocity using Pipe Diameter\n• Flow Rate (Manning Formula)\n• Head Loss due to Friction (Manning Formula)\n• Pipe Diameter (Manning Formula)\n\n• Mean Flow Velocity (Hazen-Williams)\n• Mean Flow Velocity - Pipes Flowing Full (Hazen-Williams)\n• Fluid Flow Rate (Hazen-Williams)\n• Head Loss due to Friction (Hazen-Williams)\n• Pipe Diameter (Hazen-Williams)\n\n• Head Loss across a sudden enlargement of Pipe Diameter (Bernoulli Equation & Archer Equation)\n• Head Loss across a sudden contraction of Pipe\n• Head Loss across a sudden contraction of Pipe (Brightmore Equation)\n• Head Loss - Pipe Fittings\n\n• Loss Coefficient\n• Orifice Flow Rate\n• Coefficient of Contraction\n• Initial Velocity of Fluid Jets\n• Gage Pressure at Tube Entrance (Conical Tubes)\n• Velocity of Wave in a Pipe (Water Hammer)\n• Internal Pressure in Pipe\n• Stress on Pipe Material\n• Section Factor\n• Critical Depth of Flow - Open Channel Flow\n\n• Force - Hydraulic Jump\n• Rate of Change of Momentum - Hydraulic Jump\n• Head Loss - Hydraulic Jump\n• Stress on Pipe due to Temperature Change\n• Movement in Length of Pipe due to Temperature Change\n• Resultant Force on Bend in Pipe\n\n• Elevation Difference between Headwater and Tailwater (Entrance and Exit Submerged)\n• Velocity of Flow in Culvert (Entrance and 
Exit Submerged)\n• Elevation Difference between Headwater and Tailwater (Entrance Submerged/Unsubmerged & Free Exit)\n• Velocity of Flow in Culvert (Entrance Submerged/Unsubmerged & Free Exit)\n\n• Flow Rate - Open Channel Flow\n• Average Velocity - Rectangular Channels\n• Critical Depth - Rectangular Channels\n• Flow Rate - Rectangular Channels\n• Discharge of Fluid per Unit Length - Rectangular Channels\n• Minimum Specific Energy - Rectangular Channels\n\n• Average Velocity - Triangular Channels\n• Critical Depth - Triangular Channels\n• Flow Rate - Triangular Channels\n\n• Average Velocity - Parabolic Channels\n• Critical Depth - Parabolic Channels\n• Flow Rate - Parabolic Channels\n\n• Average Velocity - Trapezoidal Channels\n• Critical Depth - Trapezoidal Channels\n• Flow Rate - Trapezoidal Channels\n• Minimum Specific Energy - Trapezoidal Channels\n\n• Area of Cross Section - Circular Channels\n• Critical Depth - Circular Channels\n• Top Width - Circular Channels\n• Flow Rate - Circular Channels\n\n• Flow Rate - Rectangular Weir\n• Flow Rate - Triangular Weir (Notch Angle of 90 degrees)\n• Flow Rate - Triangular Weir (Notch Angle of 60 degrees)\n• Flow Rate - Triangular Weir (Notch Angle of 30 degrees)\n• Flow Rate - Trapezoidal Weir\n• Flow Rate - Broad Crested Weir\n\n• Quantity of Sediment transported as Bed Load (Schoklitsch Formula)\n• Evaporation Rate (Meyer Equation)\n• Wind Factor\n• Peak Discharge (Rational Formula)\n• Rainfall Intensity (Steel Formula)\n\n• Flow Rate - Groundwater (Darcy's Law)\n• Fire Demand Rate\n• Flow Rate - Gravity Well (Dupuit Formula)\n• Flow Rate - Artesian Well\n• Economic Pipe Diameter\n• Flow Rate - Venturi Flow Meter\n• Power obtained from Water Flow"
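Most of the entries above are closed-form textbook formulas. As an illustration of what one such calculator evaluates, here is a generic sketch of the Manning flow-velocity and flow-rate calculations in SI units (this is standard open-channel hydraulics, not the app's actual code):

```python
# Manning's equation (SI units): V = (1/n) * R^(2/3) * sqrt(S)
#   n: Manning roughness coefficient (dimensionless)
#   R: hydraulic radius in meters (flow area / wetted perimeter)
#   S: channel slope (dimensionless)

def manning_velocity(n, hydraulic_radius, slope):
    """Mean flow velocity in m/s via the Manning formula."""
    return (1.0 / n) * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5

def manning_flow_rate(n, area, hydraulic_radius, slope):
    """Flow rate Q = V * A in m^3/s."""
    return manning_velocity(n, hydraulic_radius, slope) * area

# Example: a concrete-lined channel (n of about 0.013), R = 1 m, slope 0.1%
print(round(manning_velocity(0.013, 1.0, 0.001), 3))  # ~2.433 m/s
```

The Chezy, Hazen-Williams and Darcy-Weisbach entries in the list follow the same pattern: a closed-form formula evaluated in a consistent unit system.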
] | [
null,
"https://is2-ssl.mzstatic.com/image/thumb/Purple114/v4/a2/5f/96/a25f96c1-bdf4-e972-04e9-d9227f886f8d/source/512x512bb.jpg",
null,
"https://is2-ssl.mzstatic.com/image/thumb/Purple114/v4/a2/5f/96/a25f96c1-bdf4-e972-04e9-d9227f886f8d/source/60x60bb.jpg",
null,
"https://springboard-cdn.appadvice.com/generated-app-plays/1133509204/30105600-half-thumb/00001.jpg",
null,
"https://is1-ssl.mzstatic.com/image/thumb/PurpleSource114/v4/b7/eb/7e/b7eb7eb2-edda-bd73-d1dc-eef5a92f69a7/fe5d7aed-f874-42d7-a725-27faa0e54998_Simulator_Screen_Shot_-_iPhone_8_Plus_-_2020-12-11_at_14.35.25.png/750x750bb.jpeg",
null,
"https://is1-ssl.mzstatic.com/image/thumb/PurpleSource114/v4/56/11/f7/5611f7be-0896-7b0c-2882-675c1d840089/591fa9b1-70c2-4841-ba4e-3fd58fd95a9d_Simulator_Screen_Shot_-_iPhone_8_Plus_-_2020-12-11_at_14.19.31.png/750x750bb.jpeg",
null,
"https://springboard-cdn.appadvice.com/generated-app-plays/1133509204/30105600-half-thumb/00001.jpg",
null,
"https://is5-ssl.mzstatic.com/image/thumb/PurpleSource114/v4/86/66/cc/8666ccec-f8d1-d5d4-31d1-3f95f65c2945/d3813299-5477-449e-83eb-8361671f962b_Simulator_Screen_Shot_-_iPhone_8_Plus_-_2020-12-11_at_14.31.57.png/750x750bb.jpeg",
null,
"https://is1-ssl.mzstatic.com/image/thumb/PurpleSource114/v4/b7/eb/7e/b7eb7eb2-edda-bd73-d1dc-eef5a92f69a7/fe5d7aed-f874-42d7-a725-27faa0e54998_Simulator_Screen_Shot_-_iPhone_8_Plus_-_2020-12-11_at_14.35.25.png/750x750bb.jpeg",
null,
"https://is1-ssl.mzstatic.com/image/thumb/PurpleSource114/v4/56/11/f7/5611f7be-0896-7b0c-2882-675c1d840089/591fa9b1-70c2-4841-ba4e-3fd58fd95a9d_Simulator_Screen_Shot_-_iPhone_8_Plus_-_2020-12-11_at_14.19.31.png/750x750bb.jpeg",
null,
"https://is1-ssl.mzstatic.com/image/thumb/PurpleSource124/v4/e1/9b/2c/e19b2cd4-e95a-9da9-b5ad-789c9e84d7fc/34518878-cbdf-493d-9b54-3b6ac91b7e60_Simulator_Screen_Shot_-_iPhone_8_Plus_-_2020-12-11_at_14.20.11.png/750x750bb.jpeg",
null,
"https://is3-ssl.mzstatic.com/image/thumb/PurpleSource114/v4/a4/07/14/a4071435-8ccb-a3ee-f2da-3416867173c6/353a3518-d984-4065-a6b3-d14a1455df72_Simulator_Screen_Shot_-_iPhone_8_Plus_-_2020-12-11_at_14.35.29.png/750x750bb.jpeg",
null,
"https://is2-ssl.mzstatic.com/image/thumb/PurpleSource114/v4/10/1c/02/101c0277-dbe4-444b-5096-487d0887a34e/08a5c430-ccf0-449b-9987-9b06f8947251_Simulator_Screen_Shot_-_iPhone_8_Plus_-_2020-12-11_at_14.20.42.png/750x750bb.jpeg",
null,
"https://is5-ssl.mzstatic.com/image/thumb/PurpleSource114/v4/18/ed/5c/18ed5c1b-48d1-ad36-3bfb-c6156cd2c0ac/8ee96bc4-a8a1-43fc-9f58-b4d54437afc6_Simulator_Screen_Shot_-_iPhone_8_Plus_-_2020-12-11_at_14.22.36.png/750x750bb.jpeg",
null,
"https://is2-ssl.mzstatic.com/image/thumb/PurpleSource114/v4/47/5d/a4/475da4a0-bed2-faf5-8e32-58151e77965d/8c3f91f3-b7b9-4be2-bcc9-3fc7ac872bdd_Simulator_Screen_Shot_-_iPhone_8_Plus_-_2020-12-11_at_14.23.29.png/750x750bb.jpeg",
null,
"https://is1-ssl.mzstatic.com/image/thumb/PurpleSource114/v4/9b/be/7c/9bbe7c18-8ef2-4ccc-cf6b-538572d0791b/4bfda35d-1fa1-443a-83b8-0630fd56f5a7_Simulator_Screen_Shot_-_iPhone_8_Plus_-_2020-12-11_at_14.24.47.png/750x750bb.jpeg",
null,
"https://is4-ssl.mzstatic.com/image/thumb/PurpleSource124/v4/4e/5f/2b/4e5f2be9-7a7c-324b-a35a-32b7dbc7715e/154fcfa1-48c2-4dce-a5fa-327b9138fcc1_Simulator_Screen_Shot_-_iPhone_8_Plus_-_2020-12-11_at_14.35.36.png/750x750bb.jpeg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.727739,"math_prob":0.60167116,"size":4503,"snap":"2021-31-2021-39","text_gpt3_token_len":1060,"char_repetition_ratio":0.16848189,"word_repetition_ratio":0.185034,"special_character_ratio":0.21874306,"punctuation_ratio":0.032967035,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9735876,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32],"im_url_duplicate_count":[null,1,null,1,null,4,null,2,null,2,null,4,null,1,null,2,null,2,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-08-03T15:29:21Z\",\"WARC-Record-ID\":\"<urn:uuid:fb743340-dbe2-4fc2-9160-06efe842d5d8>\",\"Content-Length\":\"256473\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:76873542-5dbf-4345-a0f2-00c313993389>\",\"WARC-Concurrent-To\":\"<urn:uuid:497d69c9-0345-41e0-ac25-71b403c95ae1>\",\"WARC-IP-Address\":\"104.26.5.219\",\"WARC-Target-URI\":\"https://appadvice.com/app/waterworks-calculations/1133509204\",\"WARC-Payload-Digest\":\"sha1:OHHMWO3WC24RQMN7QSARBFIAEVCE2EVE\",\"WARC-Block-Digest\":\"sha1:SW2ZWPA4AXBIOPX6XR6XGA3U5KJSORS3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046154459.22_warc_CC-MAIN-20210803124251-20210803154251-00396.warc.gz\"}"} |
https://my.mts.com/en/support/solutions/articles/61000281574-why-are-the-variable-units-in-the-calculation-editor-different-than-the-units-chosen-for-my-calculate | [
"# Why are the variable units in the Calculation Editor different than the units chosen for my calculated variable?\n\nQUESTION\n\nWhy are the variable units in the Calculation Editor different than the units chosen for my calculated variable? For example, I choose units of mm for both thickness and width and units of mm² for my calculated variable Area, yet the Calculation Editor in TestSuite is showing m² instead of mm² (Figure 1).",
"Figure 1 – Different units between the user-defined units and the units shown in the Calculation Editor window\n\nAlthough the operator can use the variable display unit mm², the Calculation Editor shows m² because TestSuite uses m² internally as the unit for area when performing calculations. The software knows how to convert between m² and mm².\n\nIf you set the dimensions on all the variables, signals, and calculation parameters properly, and do not put constants directly in the calculations, everything works out properly. You must take precautions when putting constant values into calculations and think carefully about what the units of those constants are. This is why the recommendation is to NOT put constants into calculations. Rather, create variables or calculation parameters for the constants, and set the dimensions properly on those objects.\n\nKeary is the author of this solution article.",
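The behavior described above (compute in a fixed internal base unit, convert only at the display boundary) is easy to mimic. A hypothetical toy sketch of the idea, not TestSuite's actual implementation:

```python
# Lengths are normalized to meters (the internal base unit) before any
# arithmetic; the m^2 result is converted to the display unit (mm^2) at
# the very end, so the engine "thinks" in m^2 even when you see mm^2.

TO_METERS = {"m": 1.0, "cm": 1e-2, "mm": 1e-3}

def to_base(value, unit):
    return value * TO_METERS[unit]

def area_mm2(thickness, width, unit="mm"):
    area_m2 = to_base(thickness, unit) * to_base(width, unit)  # computed in m^2
    return area_m2 * 1e6  # 1 m^2 = 1e6 mm^2

print(round(area_mm2(3.0, 25.0), 6))  # 75.0 mm^2, computed internally in m^2
```

As the article recommends, constants should enter such a calculation as dimensioned objects (with a unit attached), never as bare numbers whose unit is implicit.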
]
https://msuperl.org/wikis/pcubed/doku.php?id=184_notes:rhr&do= | [
"The Right Hand Rule is a handy tool to figure out the directions of vectors in a cross product. There are multiple ways to do the Right Hand Rule; we will present two methods below (though there are more). Feel free to use whatever method makes the most sense to you. For the following methods, we will be using a generic cross product: $$\\vec{A} \\times \\vec{B} = \\vec{C}$$ Because $\\vec{C}$ is the result of a cross product, it is perpendicular to both $\\vec{A}$ and $\\vec{B}$.\n\nRemember, this rule is true for your RIGHT hand, not your left hand! (If you are right handed, remember to put your pencil down!)\n\nIn the first method, you start with your hand completely open and in an “L” shape. Point your fingers in the direction of $\\vec{A}$, then curl your fingers toward the direction of $\\vec{B}$. Whichever way your thumb points is the direction of $\\vec{C}$.\n\nIn the case of these pictures, $\\vec{C}$ would point out of the whiteboard.\n\nIn the second method, you start with your thumb and first finger in an “L” shape. Then extend your middle finger so it is perpendicular to the “L”. Hold your fingers in this shape, and rotate your hand so that your first finger points in the direction of $\\vec{A}$ and your middle finger points in the direction of $\\vec{B}$. Note: you are rotating your whole hand here, not moving your fingers at all. (At no point should you be flipping anyone off with this Right Hand Rule.)\n\nThen, again, your thumb will point in the direction of $\\vec{C}$, which in this case is out of the whiteboard."
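Either method predicts the same signs that the algebraic cross product gives, so you can check yourself numerically. A generic sketch, with components in (x, y, z) order and +z taken as "out of the whiteboard":

```python
def cross(a, b):
    """Cross product of two 3-vectors, components in (x, y, z) order."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

A = (1, 0, 0)  # A along +x (to the right on the whiteboard)
B = (0, 1, 0)  # B along +y (up the whiteboard)

print(cross(A, B))  # (0, 0, 1): +z, out of the whiteboard
print(cross(B, A))  # (0, 0, -1): reversing the order flips the direction
```

Note that swapping the order of the vectors reverses the answer, which is exactly what the right hand rule tells you when you swap which finger points along which vector.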
]
https://xcorr.net/2009/10/22/crcns-data-set-pvc-1-by-ringach-lab-getting-something-to-work/ | [
"# CRCNS data set pvc-1 by Ringach lab – getting something to work\n\nIn a few months, our lab will be doing permanent implant recordings in V4, and analyzing this data will be a major challenge. In contrast to single-electrode recordings, the stimuli used for array recordings are usually not optimized for a given cell (say, so that the stimulus is centered on the receptive field), and the analyzer often has to figure out what a given cell is responsive to more or less from scratch. So I decided to download the pvc-1 dataset from the CRCNS website, courtesy of the Ringach lab, consisting of array recordings in V1 in response to movies, to prepare for this upcoming work. These are my observations on how to get a basic understanding of the selectivity of the cells in this dataset, in preparation for more thorough systems identification.\n\n## Preprocessing the input\n\nThe pvc-1 dataset consists of long (> one hour) recordings from multiple electrodes in an array in response to 30-second color movie segments at a resolution of 320×240 and a frame rate of 30 Hz. There are more than 50 recordings, with each electrode probably sampling from 2 or more cells, and so 100 good cells or more can be isolated from these recordings. Uncompressed as doubles, each movie takes up about 1.5 GB of memory, or 500 MB if you only care about luminance. Since at least 20 different movies are used in each dataset, that’s 10 GB that you’re going to need to load into RAM, which is, needless to say, a stupid idea. There’s little hope of using systems identification methods on this type of dataset without first downsampling the movies by a huge factor. Now, if a cell’s receptive field is much smaller than a movie frame, and it responds to high spatial frequencies, then by downsampling we will lose the high spatial frequencies and be unable to identify the cell’s input-output mapping. So we need to get a ballpark estimate of the position and size of the receptive field of our cells. 
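As an aside, the memory arithmetic above is easy to verify (assuming 8-byte doubles, 320×240 frames, 30 fps, 30-second clips):

```python
h, w = 240, 320          # frame size in pixels
frames = 30 * 30         # 30 fps for 30 seconds
bytes_per_double = 8

color_bytes = h * w * 3 * frames * bytes_per_double  # RGB stored as doubles
luma_bytes = h * w * frames * bytes_per_double       # luminance only

print(color_bytes / 1e9)      # ~1.66 GB per movie (the "about 1.5 GB" above)
print(luma_bytes / 1e6)       # ~553 MB per movie, luminance only
print(20 * luma_bytes / 1e9)  # ~11 GB for 20 luminance-only movies
```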
We will then extract downsampled movie patches around the cell’s receptive field’s center and run systems identification methods on those.\n\nWe might think of finding a cell’s receptive field by spike triggered averaging (reverse correlating) the images with a cell’s spike train, which can be done without loading all of the images into memory at the same time. This should, however, only work for simple cells. Spike-triggered covariance works on complex cells but it would require us to load all of the images into memory which is a no-go.\n\nThe solution I propose is to use spike-triggered averaging on nonlinear functions of the images. For this to work, these nonlinear functions should be orientation and spatial frequency selective, localized in space and insensitive to spatial phase, which is essentially saying that the nonlinear functions are idealizations of complex cells. I propose preprocessing each movie image with a Gabor pyramid, composed of both odd and even Gabors at different orientations and at several scales. Taking the norm of the responses of odd filters and their corresponding even filters (the square root of the sum of squares) will give phase-insensitive responses to orientation. Then the spike triggered average of these phase-insensitive Gabor pyramids should (hopefully) show a big region of excitation in a spot in the images, at a certain orientation, which we will then declare is the receptive field center. 
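The key trick here, phase insensitivity from a quadrature (odd/even) filter pair, can be demonstrated in one dimension. A toy sketch, not the pyramid code itself:

```python
import math

# Phase-insensitive "energy" from a quadrature pair: project a signal onto
# matched cosine (even) and sine (odd) filters and take sqrt(even^2 + odd^2).
N = 64
freq = 0.25  # cycles per sample; 16 full cycles fit in N samples

even = [math.cos(2 * math.pi * freq * i) for i in range(N)]
odd = [math.sin(2 * math.pi * freq * i) for i in range(N)]

def energy(signal):
    e = sum(s * f for s, f in zip(signal, even))
    o = sum(s * f for s, f in zip(signal, odd))
    return math.hypot(e, o)

# Gratings at the filters' frequency but at four different spatial phases
resps = [energy([math.cos(2 * math.pi * freq * i + phase) for i in range(N)])
         for phase in (0.0, 0.7, 1.3, 2.9)]
print(resps)  # all ~32: the energy response ignores the stimulus phase
```

Either the even or the odd filter alone would swing wildly as the phase changes; their combined energy does not, which is exactly the complex-cell-like property we want.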
Later I will show this delightfully hacky method works pretty well.\n\nHere I rolled out my own Gabor pyramid implementation, preprocessed every frame in the dataset, and stored the results as one .mat file per movie segment, like so:\n\n```%preprocess.m - Preprocess images\nmids = [0,0,0,0,0,0,0,0,0,0, 0, 0, 0,1,1,1,1,1,1,1,1,1,1, 1, 1,2,2,2,2,2,2,2,2,2,2, 2, 2,3,3,3,3,3,3,3,3,3,3, 3, 3];\nsids = [0,1,2,3,4,5,6,7,8,9,14,15,16,0,1,2,3,4,5,6,7,8,9,14,15,0,1,2,3,4,5,6,7,8,9,14,15,0,1,2,3,4,5,6,7,8,9,14,15];\n\nfor ii = 1:length(sids)\nthedir = sprintf('movie_frames/movie%03d_%03d.images',mids(ii),sids(ii));\nfiles = dir(fullfile(thedir,'*.jpeg'));\nnimages = length(files);\nws = zeros(51200,nimages);\nfor jj = 1:nimages\nA = imread(fullfile(thedir,files(jj).name)); %read the current movie frame\nw = gaborpyramid(squeeze(sum(A,3)));\nws(:,jj) = w;\nif mod(jj,100) == 0\njj\nend\nend\nsave(fullfile(thedir,'ws.mat'),'ws','-v6');\nend\n%gaborpyramid.m\nfunction [w] = gaborpyramid(im)\n%6 level gabor pyramid\nw1 = [];\nw2 = [];\nim = downsample(im);\nfor jj = 1:5\nim = downsample(im);\n[ws1 ws2] = gaborfilt(im);\nw1 = [w1;ws1];\nw2 = [w2;ws2];\nend\nw = [w1;w2];\nend\n\nfunction [ws1 ws2] = gaborfilt(im)\n[xi yi] = meshgrid(-4:4,-4:4);\n%4 orientations\nws1 = [];\nws2 = [];\nfor ii = 0:3\ncoso = cos(ii*pi/4);\nsino = sin(ii*pi/4);\nthefilt1 = cos((coso*xi+sino*yi)*.8).*exp(-(xi.^2+yi.^2)/2/2^2);\nthefilt2 = sin((coso*xi+sino*yi)*.8).*exp(-(xi.^2+yi.^2)/2/2^2);\nthefilt1 = thefilt1 - mean(thefilt1(:));\nthefilt2 = thefilt2 - mean(thefilt2(:));\nw1 = conv2(im,thefilt1,'same');\nw2 = conv2(im,thefilt2,'same');\nws1 = [ws1;w1(:)];\nws2 = [ws2;w2(:)];\nend\nend\n\n% downsample.m\nfunction [im] = downsample(im)\nim = conv2(im,[.25,.5,.25;.5,1,.5;.25,.5,.25]/4,'same');\nim = im(1:2:end,1:2:end);\nend\n```\n\nThis should take a couple of hours, but you only have to do it once. Note that I am saving the .mat as -v6 so it’s uncompressed. This will take up a lot of disk space, but it’ll be much faster when reading the .mat file back into memory. 
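One small observation about downsample.m above: its 3×3 smoothing kernel is just the separable binomial filter [1 2 1]/4 applied along each axis, normalized so that it sums to 1, which means the mean luminance is preserved at every pyramid level. A quick check, sketched in Python:

```python
# The Matlab kernel [.25 .5 .25; .5 1 .5; .25 .5 .25]/4 equals the outer
# product of [1 2 1]/4 with itself, and its weights sum to 1 (unit DC gain).
row = [0.25, 0.50, 0.25]  # [1 2 1] / 4
kernel = [[a * b for b in row] for a in row]

matlab_kernel = [[v / 4 for v in r]
                 for r in ([0.25, 0.5, 0.25], [0.5, 1.0, 0.5], [0.25, 0.5, 0.25])]

print(kernel == matlab_kernel)                # True: same separable filter
print(sum(v for r in kernel for v in r))      # 1.0: mean luminance preserved
```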
An alternative would be to use a Gabor-like pyramid decomposition which includes quadrature pairs and has a fast implementation, like, say, the steerable pyramid or the dual-tree complex wavelet transform, and process the images on-the-fly, although that would probably end up being slower in the end.\n\n## Spike sorting\n\nThe pvc-1 dataset is not sorted for spikes; it’s up to the analyzer to do it, as explained in the documentation. Basically, an electrode will pick up spikes (action potentials) from multiple neurons, and these must be separated. An often-used process is as follows. First, one detects possible spike events, usually by high-pass filtering the electrode recording and thresholding. Whenever the threshold is reached, a snippet of the recording before and after the threshold event is taken. That snippet is the trace of a (stereotyped) action potential plus some noise. The pvc-1 electrode recordings consist of these snippets. The problem, then, is to cluster these snippets into sources, which can be done automatically by an algorithm, or manually by a human.\n\nFocusing on the former option out of laziness (although efficiency is a key concern, as research shows humans are pretty bad at spike sorting), the automatic clustering algorithm of choice depends on your assumptions and needs.\n\n*2011 update: The following proposal is stupid. For the purpose of getting something going, lump all the spikes together in an MUA. Spike sorting is a tough problem. You should read Lewicki ’98 and Sahani ’99 several times, forwards, backwards, in the mirror, etc. before getting into the spike sorting game. If you don’t know what you’re doing then use something that’s proven like Wave_clus. If you’re going to run a clustering algorithm then for the love of Christ interpolate your waveforms and align them (see chapter 5, Sahani ’99).*\n\nIf you assume that the snippets are generated by the choice of one of several action potential templates, plus Gaussian noise, then the snippets are realizations of a mixture-of-Gaussians distribution, the parameters of which can be estimated through the EM algorithm, which is already implemented in Matlab. Another possibility is to assume a mixture of t-distributions. Provisions for non-stationary data and overlapping spikes can be taken. It can get very complicated very fast, so if you’re curious look at the references on Paninski’s statistical analysis of neural data course page, Feb. 25 class, especially, for starters, Lewicki ’98.\n\nHere I load a dataset, look only at spikes from electrode 5, collect snippets from all trials, and fit the snippets to a mixture of 4 Gaussians (i.e., assuming 4 sources):\n\n```load('neurodata/ad1/ad1_u006_019');\n\nresults = pepANA;\nclear pepANA;\n\nii = 5;\n\nsnippets = [];\ntimes = [];\nfor jj = 1:20\nfor kk = 1:results.Repeat\nsnippets = [snippets,results.listOfResults{jj}.repeat{kk}.data{ii}{2}];\ntimes = [times,results.listOfResults{jj}.repeat{kk}.data{ii}{1}];\nend\nend\n\noptions = statset('Display','iter');\nobj = gmdistribution.fit(double(snippets)',4,'Options',options);\nobj.AIC\nobj.BIC\n```\n\nThis should only take a couple of minutes. From what I gather, human sorters use a variety of heuristics to determine the optimal number of clusters and how they are defined. My preferred semi-heuristic at the moment (which I am sure will evolve) is to keep adding clusters until the BIC starts to rise or I get a cluster center which doesn’t look like a spike waveform. 4 seems about right on this electrode.\n\n## Correlating input and spikes\n\nWith this headwork, the rest is cake. 
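As an aside before diving into the correlation analysis: the mixture-of-Gaussians fit that gmdistribution.fit performs is easy to prototype from scratch. A minimal 1-D EM sketch on made-up spike amplitudes (toy data, not the pvc-1 snippets):

```python
import math
import random

random.seed(0)

# Fake 1-D "snippet features" (say, spike peak amplitudes) from two units
data = ([random.gauss(-60.0, 4.0) for _ in range(300)] +
        [random.gauss(-25.0, 4.0) for _ in range(300)])

def em_two_gaussians(x, iters=50):
    """EM for a two-component 1-D Gaussian mixture."""
    mu = [min(x), max(x)]  # crude but effective initialization
    sd = [10.0, 10.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for xi in x:
            p = [pi[k] / (sd[k] * math.sqrt(2 * math.pi)) *
                 math.exp(-0.5 * ((xi - mu[k]) / sd[k]) ** 2) for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: re-estimate weights, means and standard deviations
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(x)
            mu[k] = sum(r[k] * xi for r, xi in zip(resp, x)) / nk
            sd[k] = math.sqrt(sum(r[k] * (xi - mu[k]) ** 2
                                  for r, xi in zip(resp, x)) / nk)
    return mu, sd, pi

mu, sd, pi = em_two_gaussians(data)
print(sorted(round(m, 1) for m in mu))  # two means, near -60 and -25
```

Real snippet sorting works on high-dimensional waveforms rather than one feature, but the E/M alternation is the same, and the BIC-style penalty mentioned above is what keeps you from adding components forever.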
First, cluster the spikes, collect the spike times over repeats, and create lagged post-stimulus time histograms (PSTH):\n\n```[idx,b] = obj.cluster(double(snippets)');\n\n%%\nt0 = 0;\noffset = 0;\nfor jj = 1:20\nfor kk = 1:results.Repeat\nlen = length(results.listOfResults{jj}.repeat{kk}.data{ii}{1});\nfor ll = 1:max(idx)\nspiketimes{jj,kk,ll} = times(find(idx(offset + (1:len))==ll)+offset);\nend\noffset = offset + len;\nend\nend\n\n%Collect spike times over repeats\nstr = cell(20,max(idx));\nfor jj = 1:20\nfor kk = 1:results.Repeat\nfor ll = 1:max(idx)\nstr{jj,ll} = [str{jj,ll},spiketimes{jj,kk,ll}];\nend\nend\nend\n\n%\n%Create several different spike trains\n%for the cells, at different lags\nlags = 0.052:.016:.148;\nlaggedpsth = zeros(18528,max(idx),length(lags));\n\noffset = 0;\nfor jj = 1:20\nmid = results.listOfResults{jj}.values{1}(1);\nsid = results.listOfResults{jj}.values{2}(1);\n\nnframes = length(dir(sprintf('movie_frames/movie%03d_%03d.images/*.jpeg',mid,sid)));\nfor kk = 1:max(idx)\nspks = str{jj,kk};\nfor ll = 1:length(lags)\ntheh = histc(spks,(0:nframes)/30+lags(ll));\nlaggedpsth((1:nframes)+offset,kk,ll) = theh(1:end-1);\nend\nend\noffset = offset + nframes;\nend\n\nrefpsth = squeeze(laggedpsth(:,:,4));\nshuffledpsth = [];\n\n% Get 100 shuffled versions (shift by jj*183 frames so the\n% unshifted psth itself is not part of the null)\nfor jj = 1:100\nshuffledpsth = [shuffledpsth,[refpsth(end-jj*183+1:end,:);refpsth(1:end-jj*183,:)]];\nend\n\nlaggedpsth = reshape(laggedpsth,size(laggedpsth,1),max(idx)*length(lags));\n```\n\nNote that I am creating shuffled versions of the psths which maintain the same temporal structure as the regular psths but should remove the correlation between stimulus and response. We’re going to need this to approximately z-score the Gabor pyramid maps. 
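The logic of this z-scoring (spike-triggered sums computed as a feature-matrix product, compared against a null built from circularly shifted spike trains) can be seen in miniature on synthetic data. A toy sketch with random features and a fake cell driven by feature 2, not the pvc-1 analysis itself:

```python
import random

random.seed(1)

T, F = 500, 6  # time bins, feature channels
feats = [[random.random() for _ in range(T)] for _ in range(F)]
# a fake cell that fires whenever feature 2 is high
spikes = [1 if feats[2][t] > 0.8 else 0 for t in range(T)]

def st_sum(psth):
    """Spike-triggered sum: one number per feature channel."""
    return [sum(f[t] * psth[t] for t in range(T)) for f in feats]

real = st_sum(spikes)

# Null distribution from circular shifts, which preserve the spike train's
# internal structure while destroying its alignment with the features
shifted = [st_sum(spikes[-s:] + spikes[:-s]) for s in range(20, 420, 20)]
mean = [sum(s[f] for s in shifted) / len(shifted) for f in range(F)]
sd = [(sum((s[f] - mean[f]) ** 2 for s in shifted) / len(shifted)) ** 0.5
      for f in range(F)]
z = [(real[f] - mean[f]) / sd[f] for f in range(F)]

best = max(range(F), key=lambda f: z[f])
print(best)  # feature index 2: the one the fake cell actually follows
```

In the real analysis the "features" are the 25,600 phase-insensitive Gabor pyramid coefficients, and the same z-map is what gets displayed below.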
Spike-triggered averaging the phase-insensitive Gabor pyramids is as simple as:\n\n```%start by adding together the images\nstmean = 0;\nstrefs = 0;\n\noffset = 0;\nclear ws ww A B;\nfor jj = 1:20\n\nmid = results.listOfResults{jj}.values{1}(1);\nsid = results.listOfResults{jj}.values{2}(1);\n\nload(sprintf('movie_frames/movie%03d_%03d.images/ws.mat',mid,sid)); %precomputed Gabor pyramid responses\nnframes = size(ws,2);\n\nww = sqrt(ws(1:end/2,:).^2 + ws(end/2+1:end,:).^2);\n\nA = ww*laggedpsth((1:nframes)+offset,:);\nB = ww*shuffledpsth((1:nframes)+offset,:);\n\nstmean = stmean + A;\nstrefs = strefs + B;\n\nclear ws ww A B;\n\njj\noffset = offset + nframes;\nend\n\nstrefs = reshape(strefs,size(strefs,1),max(idx),100);\nms = repmat(mean(strefs,3),1,length(lags));\nss = repmat(std(strefs,[],3),1,length(lags));\n\nzvals = (stmean-ms)./ss;\nzvals = reshape(zvals,size(zvals,1),max(idx),length(lags));\n```\n\nNow to present the results:\n\n`imagesc(plotPyramid(squeeze(zvals(:,1,5))));colorbar;`\n\n(cell 1, lag 5 corresponding to 116 ms lag between stimulus and response). This gives:",
"[Figure: z-scored spike-triggered average of the phase-insensitive Gabor pyramid for cell 1 at a 116 ms lag]",
"This pic lays out the Gabor pyramid in bands. At the top, we have the horizontal orientation; at the left, the vertical orientation; and on the diagonal, the two diagonal orientations. Towards the bottom right are the fine scales, and towards the top left are the coarse scales. This pic has a very prominent feature: a big red blob in the finest vertical scale towards the middle right of the movie frame with a large (approximate) z-value. This means that the presence of high-frequency vertical orientations at that spot is correlated with an elevated response 116 milliseconds later. The receptive field is much smaller than the size of the movie, meaning we can crop an area around the receptive field center (we can read off the approximate coordinates from the plot) and run our preferred stimulus-response estimation method based on the cropped, downsampled stimulus. It should be noted that if we use the same data to find a region of interest (ROI) and estimate a model based on that ROI, we’re double-dipping, which is a big no-no since the estimated models will show overly optimistic correlations between predicted and real response. 
Hence the ROI should be defined using a separate dataset (for example, using one or two movies to find a ROI and the other 19 to estimate the input-output response).\n\nTo create the pyramid decomposition plot:\n\n```function [im] = plotPyramid(w)\nim = zeros(120,160);\noffset = 0;\nfor ii = 2:6\nhh = ceil(240/2^ii);\nww = ceil(320/2^ii);\nnums = hh*ww;\nim0 = w(offset + (1:nums));\nim1 = w(offset + (1:nums) + nums);\nim2 = w(offset + (1:nums) + 2*nums);\nim3 = w(offset + (1:nums) + 3*nums);\n\nim0 = reshape(im0,hh,ww);\nim1 = reshape(im1,hh,ww);\nim2 = reshape(im2,hh,ww);\nim3 = reshape(im3,hh,ww);\n\n%zero orientation (horizontal)\nim(hh+(1:hh),1:ww) = im0;\n%vertical\nim(1:hh,ww+(1:ww)) = im2;\n\nww2 = ceil(ww/2);\nhh2 = ceil(hh/2);\n\nim(hh+(1:hh2),ww+(1:ww2)) = downsample(im1);\nim(hh+hh2+(1:hh2),ww+ww2+(1:ww2)) = downsample(im3);\n\noffset = offset + 4*hh*ww;\nend\n\nend\n```\n\n## Conclusion\n\nWith a bit of work, one can find the approximate receptive field properties of cells based on presentations of very high-dimensional stimuli. With this information in hand, one can define a ROI for further analysis, enabling one to work on much lower-dimensional representations of the data. I’ll be working more on this dataset in the future and will post what I find of interest.\n\n2011 update: the prelim analysis convinced me that if you want to work comfortably with an array you should get much computing power. So we got 4x core i7’s + 12GB RAM running on Ubuntu 64-bit, accessed through NX, to run analyses on these bad boys.\n\nI had also not anticipated before running the analysis that the spike sorting problem would be, well, a problem. Following this I started thinking about LFPs and had an illumination that cross-correlating LFPs and spikes is stupid unless you are 100% sure that there are no spike remnants in the LFP. 
But obtaining a clean LFP required having access to the wideband signal, so we ended up adding mods to the system to acquire wideband signals (10kHz) on all 96 channels. This was a challenge to get going, the data files are huge (1GB per 5 minutes of recordings), and required writing some automation scripts in Python. With hindsight though, it was a great decision, having the wideband data is so much better spike sorting wise.\n\nWe ended up modding wave_clus so that the detection step is optimal in some sense (paper to be written about this) and adding elements to the GUI to streamline analysis. So basically once we are done for a day, I type sudo python processDay -d dayName in a console and the wideband files and the stimulus files are fetched through FTP from the rig computers, renamed and processed to a Matlab friendly format. I then view the day’s experiments (including recording length, parameters, notes, etc.) using a custom Python GUI (PyQT). Then in Matlab I make a selection for a file to sort in a GUI and press a button and it sorts all the channels. When it’s done I get an email and then review the clusters. 
Finally I sort all the other experiments of the day against these manually reviewed clusters, and again once that’s done I get another email.\n\nFriendly advice: if you get an array several months beforehand start thinking about how you will process the data; once you start recording you will not have time to dick around with this stuff as you are struggling to get as much data as you can and run prelim analyses and so forth.\n\n```%Preprocess images\nmids = [2,2,2,2,2,2,2,2,2,2, 2, 2,3,3,3,3,3,3,3,3,3,3, 3, 3];\nsids = [0,1,2,3,4,5,6,7,8,9,14,15,0,1,2,3,4,5,6,7,8,9,14,15];\nws = [];\nfor ii = 1:length(sids)\nnimages = length(dir(sprintf('movie_frames/movie%03d_%03d.images/*.jpeg',mids(ii),sids(ii))));\nws = zeros(51200,nimages);\nfor jj = 1:nimages\nw = gaborpyramid(squeeze(sum(A,3)));\nws(:,jj) = w;\nif mod(jj,100) == 0\njj\nend\nend\nsave(sprintf('movie_frames/movie%03d_%03d.images/ws.mat',mids(ii),sids(ii)),'ws','-v6');\nend\n```\n\n### 2 responses to “CRCNS data set pvc-1 by Ringach lab – getting something to work”\n\n1. […] Ringach – my new supervisor! – has a ton of V1 data on the CRCNS website. I wrote about how to get a simple analysis running on this dataset, and the advice is still valid (if a bit […]\n\n2. […] also share bits of advice such as this gem regarding spike sorting: If you’re going to run a clustering algorithm then for […]"
] | [
null,
"https://xcorr.files.wordpress.com/2009/10/gaborpyramid.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8453986,"math_prob":0.9871371,"size":16527,"snap":"2023-14-2023-23","text_gpt3_token_len":4352,"char_repetition_ratio":0.09653211,"word_repetition_ratio":0.025346534,"special_character_ratio":0.27191868,"punctuation_ratio":0.17627308,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.95637965,"pos_list":[0,1,2],"im_url_duplicate_count":[null,7,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-21T10:58:28Z\",\"WARC-Record-ID\":\"<urn:uuid:daa332c5-347a-4c07-8683-0c454d29f9fb>\",\"Content-Length\":\"171674\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b4cdb339-52e5-4e38-ba1c-be63a3a09475>\",\"WARC-Concurrent-To\":\"<urn:uuid:b1fee693-f480-4334-97f0-85358d0c9dd1>\",\"WARC-IP-Address\":\"192.0.78.24\",\"WARC-Target-URI\":\"https://xcorr.net/2009/10/22/crcns-data-set-pvc-1-by-ringach-lab-getting-something-to-work/\",\"WARC-Payload-Digest\":\"sha1:QU63QGKLCQBDEVWCKC7GR5L22DU3SC2L\",\"WARC-Block-Digest\":\"sha1:NYNJDSE3FFNT7YXE7CBKY2HHT5JDYDIG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296943695.23_warc_CC-MAIN-20230321095704-20230321125704-00106.warc.gz\"}"} |
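The analysis described in the record above — correlating each Gabor-pyramid coefficient with the spike response at a fixed lag (116 ms here) — is a form of reverse correlation. A minimal NumPy sketch of that step follows; the feature-matrix shape, lag handling, and z-scoring are illustrative assumptions, not the author's actual pipeline:

```python
import numpy as np

def lagged_sta(stim, spikes, lag):
    """Spike-triggered average of stimulus features at a fixed lag.

    stim   : (T, D) array of per-frame features (e.g. Gabor pyramid coefficients)
    spikes : (T,) spike counts per frame
    lag    : number of frames the response trails the stimulus
    """
    s = spikes[lag:]                        # responses aligned to earlier frames
    x = stim[:stim.shape[0] - lag]          # stimulus frames preceding each response
    x = (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-12)  # z-score each feature
    # Average feature vector around spike times (mean-subtracted responses)
    return x.T @ (s - s.mean()) / max(s.sum(), 1.0)
```

Plotting the resulting weight vector with a routine like plotPyramid above is what produces the band image described in the text: features whose activity reliably precedes spikes stand out as large weights.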
http://codeforces.com/topic/94151/en1 | [
"Need help in geometry problem\n\nRevision en1, by venti, 2021-08-04 21:37:55\n\nHi, I was trying to solve atcoder's ABC191 problem D\n\nProblem statement - Given X, Y, R that define a circle centered at (X, Y) with radius R, find the number of grid points whose x- and y-coords are integers, that lie within or on the circle.\n\nConstraints are: |X|, |Y|, R < 1e5 (X, Y, and R can be floating point values upto 4 decimal places)\n\nApproach used -\n\nThe equation of a circle is (x-a)^2 + (y-b)^2 = R^2 for a circle centered at (a, b)\n\nSo I figure the qn wants me to solve the integral solutions to (x-a)^2 + (y-b)^2 <= R^2\n\nI isolate (y-b)^2 and iterate over all candidate values of x from [-a-R-1, a+R+1] and compute the lower and upper bound of y and add them all up and output them.\n\nFor some reason this fails — I observed this is also pretty different from the editorial approach. Is there something fundamentally wrong with my idea/code?\n\nCode -\n\na, b, r = [float(i) for i in input().split()]\n'''\n(x-a)^2 + (y-b)^2 <= r^2\n=> (y-b)^2 <= r^2 - (x-a)^2\n\nk = r^2 - (x-a)^2\ny <= sqrt(K)+b\ny >= b-sqrt(K)\n'''\nans = 0\nfrom math import floor, ceil\nfor x in range(-int(a)-int(r)-2, int(a)+int(r)+3):\nk = r*r - (x-a)**2\nif k<0:\ncontinue\nans += max(0, 1 + floor(k**0.5 + b) - ceil(b-(k**0.5)))\nprint(ans)\n\nHistory\n\nRevisions",
null,
"Rev. Lang. By When Δ Comment\nen2",
null,
"venti 2021-08-04 21:39:51 14 Tiny change: 'his fails — ' -> 'his fails on some tests — '\nen1",
null,
"venti 2021-08-04 21:37:55 1335 Initial revision (published)"
] | [
null,
"http://codeforces.org/s/66581/images/icons/control.png",
null,
"http://codeforces.org/s/66581/images/flags-16/en.png",
null,
"http://codeforces.org/s/66581/images/flags-16/en.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.78286403,"math_prob":0.9922332,"size":1202,"snap":"2022-05-2022-21","text_gpt3_token_len":398,"char_repetition_ratio":0.10434057,"word_repetition_ratio":0.0,"special_character_ratio":0.3702163,"punctuation_ratio":0.09342561,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99929017,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-17T02:19:07Z\",\"WARC-Record-ID\":\"<urn:uuid:2927dd7c-b9fc-4c2b-9b4b-a82e9df65111>\",\"Content-Length\":\"75026\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:645ebbe4-6680-466a-9846-76ab2581e4d6>\",\"WARC-Concurrent-To\":\"<urn:uuid:2c8c6dc9-ec36-4ecb-b030-bee961a103fc>\",\"WARC-IP-Address\":\"213.248.110.126\",\"WARC-Target-URI\":\"http://codeforces.com/topic/94151/en1\",\"WARC-Payload-Digest\":\"sha1:XNLPLU2R3AH6FG2D2QWROOVMNK6E5RQ2\",\"WARC-Block-Digest\":\"sha1:TB5SSH6OALN7LHLSBGQSAOAKO3CRSYVK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320300253.51_warc_CC-MAIN-20220117000754-20220117030754-00341.warc.gz\"}"} |
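For reference, the column-scan idea described in the post can be written with the x-range taken directly from the circle's horizontal extent. This is a hypothetical corrected sketch, not the poster's code or the editorial's solution: the original loop range `range(-int(a)-int(r)-2, int(a)+int(r)+3)` becomes empty whenever the center's x-coordinate is negative, and even with correct bounds, floating-point `sqrt` can misclassify boundary points on a strict judge (which is why editorials typically rescale the 4-decimal inputs to integers):

```python
from math import ceil, floor, sqrt

def count_lattice_points(a, b, r):
    """Count integer points (x, y) with (x - a)^2 + (y - b)^2 <= r^2."""
    total = 0
    # x must lie in [a - r, a + r]; deriving the loop bounds from the center
    # works for negative centers too.
    for x in range(ceil(a - r), floor(a + r) + 1):
        k = r * r - (x - a) ** 2
        if k < 0:
            continue
        s = sqrt(k)
        # Integer y values between b - s and b + s inclusive
        total += max(0, floor(b + s) - ceil(b - s) + 1)
    return total
```

For example, `count_lattice_points(0, 0, 2)` counts the 13 integer points inside a radius-2 circle at the origin.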
https://ocw.mit.edu/courses/16-322-stochastic-estimation-and-control-fall-2004/pages/calendar/ | [
"16.322 | Fall 2004 | Graduate\nStochastic Estimation and Control\n\n## Calendar\n\nLEC # TOPICS\n1 Introduction\n\nRandom Signals\n\nIntuitive Notion of Probability\n\nAxiomatic Probability\n\nJoint and Conditional Probability\n\n2 Independence\n\nRandom Variables\n\nProbability Distribution and Density Functions\n\n3 Expectation, Averages and Characteristic Function\n\nNormal or Gaussian Random Variables\n\nImpulsive Probability Density Functions\n\nMultiple Random Variables\n\n4 Correlation, Covariance, and Orthogonality\n\nSum of Independent Random Variables and Tendency Toward Normal Distribution\n\nTransformation of Random Variables\n\n5 Some Common Distributions\n6 More Common Distributions\n\nMultivariate Normal Density Function\n\nLinear Transformation and General Properties of Normal Random Variables\n\n7 Linearized Error Propagation\n8 More Linearized Error Propagation\n9 Concept of a Random Process\n\nProbabilistic Description of a Random Process\n\nGaussian Random Process\n\nStationarity, Ergodicity, and Classification of Processes\n\n10 Autocorrelation Function\n\nCrosscorrelation Function\n\n11 Power Spectral Density Function\n\nCross Spectral Density Function\n\nWhite Noise\n\nQuiz 1 (Covers Sections 1-11)\n12 Gauss-Markov Process\n\nRandom Telegraph Wave\n\nWiener or Brownian-Motion Process\n\n13 Determination of Autocorrelation and Spectral Density Functions from Experimental Data\n14 Introduction: The Analysis Problem\n\nIntegral Tables for Computing Mean-Square Value\n\n15 Pure White Noise and Bandlimited Systems\n\nNoise Equivalent Bandwidth\n\nShaping Filter\n\n16 Nonstationary (Transient) Analysis - Initial Condition Response\n\nNonstationary (Transient) Analysis - Forced Response\n\n17 The Wiener Filter Problem\n\nOptimization with Respect to a Parameter\n\n18 The Stationary Optimization Problem - Weighting Function Approach\n\nOrthogonality\n\n19 Complementary Filter\n\nPerspective\n\n20 Estimation\n\nA Simple Recursive Example\n\nQuiz 2 
(Covers Sections 12-20)\n21 Markov Processes\n22 State Space Description\n\nVector Description of a Continuous-Time Random Process\n\nDiscrete-Time Model\n\n23 Monte Carlo Simulation of Discrete-Time Systems\n\nThe Discrete Kalman Filter\n\nScalar Kalman Filter Examples\n\n24 Transition from the Discrete to Continuous Filter Equations\n\nSolution of the Matrix Riccati Equation\n\n25 Divergence Problems\n26 Complementary Filter Methodology\n\nINS Error Models\n\nDamping the Schuler Oscillation with External Velocity Reference Information\n\nFinal Exam"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.62007123,"math_prob":0.6832226,"size":2552,"snap":"2022-40-2023-06","text_gpt3_token_len":603,"char_repetition_ratio":0.1377551,"word_repetition_ratio":0.0,"special_character_ratio":0.18847962,"punctuation_ratio":0.018126888,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98152393,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-05T08:45:59Z\",\"WARC-Record-ID\":\"<urn:uuid:d4dc73c2-3bcf-4e92-9907-7cb091e0a51f>\",\"Content-Length\":\"43999\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1209dfe8-e1ed-45fc-9f78-df4de0730267>\",\"WARC-Concurrent-To\":\"<urn:uuid:a018a2bc-b864-457a-8016-857dfa176784>\",\"WARC-IP-Address\":\"151.101.130.133\",\"WARC-Target-URI\":\"https://ocw.mit.edu/courses/16-322-stochastic-estimation-and-control-fall-2004/pages/calendar/\",\"WARC-Payload-Digest\":\"sha1:LA4RQB74QOBWC3Y25W2WN4HFMOWZ5IGJ\",\"WARC-Block-Digest\":\"sha1:45KLSHGZUWH7PPX2F4SX6NJ5Z7GSXEIX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500250.51_warc_CC-MAIN-20230205063441-20230205093441-00390.warc.gz\"}"} |
https://www.intel.com/content/www/us/en/docs/onemkl/code-samples-lapack/2022-1/lapacke-dgeev-example-c-row.html | [
"Developer Reference\n\nID 766877\nDate 12/20/2021\nPublic\n\n## LAPACKE_dgeev Example Program in C for Row Major Data Layout\n\n/*******************************************************************************\n* The information and material (\"Material\") provided below is owned by Intel\n* Corporation or its suppliers or licensors, and title to such Material remains\n* with Intel Corporation or its suppliers or licensors. The Material contains\n* proprietary information of Intel or its suppliers and licensors. The Material\n* is protected by worldwide copyright laws and treaty provisions. No part of\n* the Material may be copied, reproduced, published, uploaded, posted,\n* transmitted, or distributed in any way without Intel's prior express written\n* permission. No license under any patent, copyright or other intellectual\n* property rights in the Material is granted to or conferred upon you, either\n* expressly, by implication, inducement, estoppel or otherwise. Any license\n* under such intellectual property rights must be express and approved by Intel\n* in writing.\n*\n********************************************************************************\n*/\n/*\nLAPACKE_dgeev Example.\n======================\n\nProgram computes the eigenvalues and left and right eigenvectors of a general\nsquare matrix A:\n\n-1.01 0.86 -4.60 3.31 -4.81\n3.98 0.53 -7.04 5.29 3.55\n3.30 8.26 -3.89 8.20 -1.51\n4.43 4.96 -7.66 -7.33 6.18\n7.31 -6.43 -6.16 2.47 5.58\n\nDescription.\n============\n\nThe routine computes for an n-by-n real nonsymmetric matrix A, the\neigenvalues and, optionally, the left and/or right eigenvectors. The right\neigenvector v(j) of A satisfies\n\nA*v(j)= lambda(j)*v(j)\n\nwhere lambda(j) is its eigenvalue. The left eigenvector u(j) of A satisfies\n\nu(j)H*A = lambda(j)*u(j)H\n\nwhere u(j)H denotes the conjugate transpose of u(j). 
The computed\neigenvectors are normalized to have Euclidean norm equal to 1 and\nlargest component real.\n\nExample Program Results.\n========================\n\nLAPACKE_dgeev (row-major, high-level) Example Program Results\n\nEigenvalues\n( 2.86, 10.76) ( 2.86,-10.76) ( -0.69, 4.70) ( -0.69, -4.70) -10.46\n\nLeft eigenvectors\n( 0.04, 0.29) ( 0.04, -0.29) ( -0.13, -0.33) ( -0.13, 0.33) 0.04\n( 0.62, 0.00) ( 0.62, 0.00) ( 0.69, 0.00) ( 0.69, 0.00) 0.56\n( -0.04, -0.58) ( -0.04, 0.58) ( -0.39, -0.07) ( -0.39, 0.07) -0.13\n( 0.28, 0.01) ( 0.28, -0.01) ( -0.02, -0.19) ( -0.02, 0.19) -0.80\n( -0.04, 0.34) ( -0.04, -0.34) ( -0.40, 0.22) ( -0.40, -0.22) 0.18\n\nRight eigenvectors\n( 0.11, 0.17) ( 0.11, -0.17) ( 0.73, 0.00) ( 0.73, 0.00) 0.46\n( 0.41, -0.26) ( 0.41, 0.26) ( -0.03, -0.02) ( -0.03, 0.02) 0.34\n( 0.10, -0.51) ( 0.10, 0.51) ( 0.19, -0.29) ( 0.19, 0.29) 0.31\n( 0.40, -0.09) ( 0.40, 0.09) ( -0.08, -0.08) ( -0.08, 0.08) -0.74\n( 0.54, 0.00) ( 0.54, 0.00) ( -0.29, -0.49) ( -0.29, 0.49) 0.16\n*/\n#include <stdlib.h>\n#include <stdio.h>\n#include \"mkl_lapacke.h\"\n\n/* Auxiliary routines prototypes */\nextern void print_eigenvalues( char* desc, MKL_INT n, double* wr, double* wi );\nextern void print_eigenvectors( char* desc, MKL_INT n, double* wi, double* v,\nMKL_INT ldv );\n\n/* Parameters */\n#define N 5\n#define LDA N\n#define LDVL N\n#define LDVR N\n\n/* Main program */\nint main() {\n/* Locals */\nMKL_INT n = N, lda = LDA, ldvl = LDVL, ldvr = LDVR, info;\n/* Local arrays */\ndouble wr[N], wi[N], vl[LDVL*N], vr[LDVR*N];\ndouble a[LDA*N] = {\n-1.01, 0.86, -4.60, 3.31, -4.81,\n3.98, 0.53, -7.04, 5.29, 3.55,\n3.30, 8.26, -3.89, 8.20, -1.51,\n4.43, 4.96, -7.66, -7.33, 6.18,\n7.31, -6.43, -6.16, 2.47, 5.58\n};\n/* Executable statements */\nprintf( \"LAPACKE_dgeev (row-major, high-level) Example Program Results\\n\" );\n/* Solve eigenproblem */\ninfo = LAPACKE_dgeev( LAPACK_ROW_MAJOR, 'V', 'V', n, a, lda, wr, wi,\nvl, ldvl, vr, ldvr );\n/* Check for convergence 
*/\nif( info > 0 ) {\nprintf( \"The algorithm failed to compute eigenvalues.\\n\" );\nexit( 1 );\n}\n/* Print eigenvalues */\nprint_eigenvalues( \"Eigenvalues\", n, wr, wi );\n/* Print left eigenvectors */\nprint_eigenvectors( \"Left eigenvectors\", n, wi, vl, ldvl );\n/* Print right eigenvectors */\nprint_eigenvectors( \"Right eigenvectors\", n, wi, vr, ldvr );\nexit( 0 );\n} /* End of LAPACKE_dgeev Example */\n\n/* Auxiliary routine: printing eigenvalues */\nvoid print_eigenvalues( char* desc, MKL_INT n, double* wr, double* wi ) {\nMKL_INT j;\nprintf( \"\\n %s\\n\", desc );\nfor( j = 0; j < n; j++ ) {\nif( wi[j] == (double)0.0 ) {\nprintf( \" %6.2f\", wr[j] );\n} else {\nprintf( \" (%6.2f,%6.2f)\", wr[j], wi[j] );\n}\n}\nprintf( \"\\n\" );\n}\n\n/* Auxiliary routine: printing eigenvectors */\nvoid print_eigenvectors( char* desc, MKL_INT n, double* wi, double* v, MKL_INT ldv ) {\nMKL_INT i, j;\nprintf( \"\\n %s\\n\", desc );\nfor( i = 0; i < n; i++ ) {\nj = 0;\nwhile( j < n ) {\nif( wi[j] == (double)0.0 ) {\nprintf( \" %6.2f\", v[i*ldv+j] );\nj++;\n} else {\nprintf( \" (%6.2f,%6.2f)\", v[i*ldv+j], v[i*ldv+(j+1)] );\nprintf( \" (%6.2f,%6.2f)\", v[i*ldv+j], -v[i*ldv+(j+1)] );\nj += 2;\n}\n}\nprintf( \"\\n\" );\n}\n}"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.50987697,"math_prob":0.95745146,"size":5118,"snap":"2023-40-2023-50","text_gpt3_token_len":1911,"char_repetition_ratio":0.15995307,"word_repetition_ratio":0.070645556,"special_character_ratio":0.47440407,"punctuation_ratio":0.2883534,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95590395,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-03T15:40:05Z\",\"WARC-Record-ID\":\"<urn:uuid:2f801b97-ba83-4bff-acde-42a8a9ca1f09>\",\"Content-Length\":\"765683\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:be50f238-f4ac-402f-a586-eee7a591b6a7>\",\"WARC-Concurrent-To\":\"<urn:uuid:21dd9c2d-a536-4b95-aa48-d2fb83594097>\",\"WARC-IP-Address\":\"104.90.72.143\",\"WARC-Target-URI\":\"https://www.intel.com/content/www/us/en/docs/onemkl/code-samples-lapack/2022-1/lapacke-dgeev-example-c-row.html\",\"WARC-Payload-Digest\":\"sha1:PINU353KVPLA6W2OLI56MGPUQIXX72JX\",\"WARC-Block-Digest\":\"sha1:KPGAJUSMYAIIWXUS6YE3RHCF65YQFYXB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100508.23_warc_CC-MAIN-20231203125921-20231203155921-00309.warc.gz\"}"} |
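As a quick cross-check of the sample's documented results, the same eigenproblem can be solved from NumPy, whose `numpy.linalg.eig` uses the same LAPACK `dgeev` driver for real general matrices. The matrix is copied from the sample; the verification logic is our own sketch:

```python
import numpy as np

# The 5x5 test matrix from the LAPACKE_dgeev sample
A = np.array([
    [-1.01,  0.86, -4.60,  3.31, -4.81],
    [ 3.98,  0.53, -7.04,  5.29,  3.55],
    [ 3.30,  8.26, -3.89,  8.20, -1.51],
    [ 4.43,  4.96, -7.66, -7.33,  6.18],
    [ 7.31, -6.43, -6.16,  2.47,  5.58],
])

w, v = np.linalg.eig(A)               # eigenvalues, right eigenvectors (columns)
assert np.allclose(A @ v, v * w)      # A * v(j) = lambda(j) * v(j)
print(np.round(np.sort_complex(w), 2))
```

The rounded eigenvalues agree with the "Eigenvalues" block in the expected output above (2.86 ± 10.76i, -0.69 ± 4.70i, -10.46).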
http://bbcbasic.co.uk/wiki/doku.php?id=tutorial_204_20-_203d_20spaces | [
"#",
null,
"BBC BASIC Programmers' Reference\n\n## Tutorial 4 - 3D Spaces\n\nby Richard Russell, August 2015\n\nNote that the code in this tutorial requires Windows 8.1 or Windows 10\n\nThis tutorial is closely based on Microsoft's Direct 3D 11 Tutorial 4 but with the code translated from C++ to BBC BASIC for Windows. You should refer to the original for a detailed explanation of the code.",
null,
"## Summary\n\nIn the previous tutorial, we successfully rendered a triangle in the centre of our application window. We haven't paid much attention to the vertex positions that we have picked in our vertex buffer. In this tutorial, we will delve into the details of 3D positions and transformation.\n\nThe outcome of this tutorial will be a 3D object rendered to screen. Whereas previous tutorials focused on rendering a 2D object onto a 3D world, here we show a 3D object.\n\n# 3D Spaces\n\nIn the previous tutorial, the vertices of the triangle were placed strategically to perfectly align themselves on the screen. However, this will not always be the case. Thus, we need a system to denote objects in 3D space and a system to display them.\n\n## World Space\n\nWorld space is a space shared by every object in the scene. It is used to define spatial relationship between objects that we wish to render. The X axis goes to our right; the Y axis goes up; and the Z axis goes forward, the same direction as we are facing.\n\n## View Space\n\nView space, sometimes called camera space, is similar to world space in that it is typically used for the entire scene. However, in view space, the origin is at the viewer or camera. The view direction (where the viewer is looking) defines the positive Z axis. An “up” direction defined by the application becomes the positive Y axis.\n\n## Projection Space\n\nProjection space refers to the space after applying projection transformation from view space. In this space, visible content has X and Y coordinates ranging from -1 to 1, and Z coordinate ranging from 0 to 1.\n\n## Screen Space\n\nScreen space is often used to refer to locations in the frame buffer. Because frame buffer is usually a 2D texture, screen space is a 2D space. The top-left corner is the origin with coordinates (0, 0). The positive X goes to right and positive Y goes down. 
For a buffer that is w pixels wide and h pixels high, the bottom-right pixel has the coordinates (w - 1, h - 1).\n\n# Space-to-space Transformation\n\nTransformation is most commonly used to convert vertices from one space to another. In 3D computer graphics, there are logically three such transformations in the pipeline: world, view, and projection transformation. Individual transformation operations such as translation, rotation, and scaling are covered in the next tutorial.\n\n## World Transformation\n\nWorld transformation converts vertices from object space to world space. It usually consists of one or more scaling, rotation, and translation, based on the size, orientation, and position we would like to give to the object. Every object in the scene has its own world transformation matrix. This is because each object has its own size, orientation, and position.\n\n## View Transformation\n\nAfter vertices are converted to world space, view transformation converts those vertices from world space to view space. Recall from the earlier discussion that view space is how the world appears from the viewer's (or camera's) perspective. In view space, the viewer is located at the origin looking out along the positive Z axis.\n\n## Projection Transformation\n\nProjection transformation converts vertices from 3D spaces, such as world and view spaces, to projection space. One of the parameters that defines the transformation is called the field-of-view (FOV). FOV denotes which objects are visible from a particular position, while looking in a particular direction. In computer graphics, the FOV is contained in a view frustum; the view frustum is defined by 6 planes in 3D. Two of these planes are parallel to the XY plane; these are called the near-Z and far-Z planes. The other four planes are defined by the viewer's horizontal and vertical field of view.\n\n# Using Transformations\n\nIn the previous tutorial, we wrote a program that renders a single triangle to screen. 
When we create the vertex buffer, the vertex positions that we use are directly in projection space so that we don't have to perform any transformation. Now that we have an understanding of 3D space and transformation, we are going to modify the program so that the vertex buffer is defined in object space, as it should be. Then, we will modify our vertex shader to transform the vertices from object space to projection space.\n\n## Modifying the Vertex Buffer\n\nSince we started representing things in three dimensions, we have changed the flat triangle from the previous tutorial to a cube. This will allow us to demonstrate these concepts much more clearly.\n\n``` REM Create vertex buffer:\nDIM vertices{(7)} = SimpleVertex{}\nFOR v% = 0 TO 7\nREAD x, y, z, r, g, b, a\nvertices{(v%)}.Pos.x% = FN_f4(x)\nvertices{(v%)}.Pos.y% = FN_f4(y)\nvertices{(v%)}.Pos.z% = FN_f4(z)\nvertices{(v%)}.Color.x% = FN_f4(r)\nvertices{(v%)}.Color.y% = FN_f4(g)\nvertices{(v%)}.Color.z% = FN_f4(b)\nvertices{(v%)}.Color.w% = FN_f4(a)\nNEXT v%\nDATA -1.0, 1.0, -1.0, 0.0, 0.0, 1.0, 1.0\nDATA 1.0, 1.0, -1.0, 0.0, 1.0, 0.0, 1.0\nDATA 1.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0\nDATA -1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 1.0\nDATA -1.0, -1.0, -1.0, 1.0, 0.0, 1.0, 1.0\nDATA 1.0, -1.0, -1.0, 1.0, 1.0, 0.0, 1.0\nDATA 1.0, -1.0, 1.0, 1.0, 1.0, 1.0, 1.0\nDATA -1.0, -1.0, 1.0, 0.0, 0.0, 0.0, 1.0```\n\nOn a cube, many triangles share the same vertex and it would be a waste of space to redefine the same points over and over again. As such, there is a method to specify just the eight points, and then let Direct3D know which points to pick for a triangle. This is done through an index buffer. An index buffer will contain a list, which will refer to the index of vertices in the buffer, to specify which points to use in each triangle. 
The code below shows which points make up each of our triangles:\n\n``` REM Create index buffer:\nDIM indices{(5,5)} = WORD{}\nFOR r% = 0 TO DIM(indices{()},1)\nFOR c% = 0 TO DIM(indices{()},2)\nNEXT c%\nNEXT r%\nDATA 3,1,0,2,1,3\nDATA 0,5,4,1,5,0\nDATA 3,4,7,0,4,3\nDATA 1,6,5,2,6,1\nDATA 2,7,6,3,7,2\nDATA 6,4,5,7,4,6\n\nbd.Usage% = D3D11_USAGE_DEFAULT\nbd.ByteWidth% = DIM(WORD{}) * (DIM(indices{()},1)+1) * (DIM(indices{()},2)+1)\nbd.BindFlags% = D3D11_BIND_INDEX_BUFFER\nbd.CPUAccessFlags% = 0\nInitData.pSysMem% = indices{(0,0)}\n\nSYS ID3D11Device.CreateBuffer%, pd3dDevice%, bd{}, InitData{}, ^pIndexBuffer% TO hr%\nIF hr% <> 0 OR pIndexBuffer% = 0 ERROR 100, \"ID3D11Device::CreateBuffer (index) failed: \"+STR\\$~hr%\n\nREM Set index buffer:\nSYS ID3D11DeviceContext.IASetIndexBuffer%, pImmediateContext%, pIndexBuffer%, \\\n\\ DXGI_FORMAT_R16_UINT, 0```\n\nIn our vertex shader from the previous tutorial, we take the input vertex position and output the same position without any modification. We can do this because the input vertex position is already defined in projection space. Now, because the input vertex position is defined in object space, we must transform it before outputting from the vertex shader. We do this with three steps: transform from object to world space, transform from world to view space, and transform from view to projection space. A vector is transformed by multiplying the vector by a matrix. In HLSL, this is done using the mul() intrinsic function. 
Our variable declaration and new vertex shader are shown below:\n\n``` //\n// Constant buffer variables\n//\ncbuffer ConstantBuffer : register( b0 )\n{\nmatrix World;\nmatrix View;\nmatrix Projection;\n}\n\nVS_OUTPUT VS( float4 Pos : POSITION, float4 Color : COLOR )\n{\nVS_OUTPUT output = (VS_OUTPUT)0;\noutput.Pos = mul( Pos, World );\noutput.Pos = mul( output.Pos, View );\noutput.Pos = mul( output.Pos, Projection );\noutput.Color = Color;\nreturn output;\n}```\n\n## Setting up the Matrices\n\nWe have updated our vertex shader to transform using matrices, but we also need to define three matrices in our program:\n\n``` REM Define the three transformation matrices:\nDIM mWorld(3,3), mView(3,3), mProjection(3,3)```\n\nIn addition to the matrices, we also need an ID3D11Buffer object that represents the constant buffer into which they will be transferred:\n\n``` REM Create the constant buffer:\nDIM ConstantBuffer{mWorld{}=XMMATRIX{},mView{}=XMMATRIX{},mProjection{}=XMMATRIX{}}\n\nbd.Usage% = D3D11_USAGE_DEFAULT\nbd.ByteWidth% = DIM(ConstantBuffer{})\nbd.BindFlags% = D3D11_BIND_CONSTANT_BUFFER\nbd.CPUAccessFlags% = 0\n\nSYS ID3D11Device.CreateBuffer%, pd3dDevice%, bd{}, NULL, ^pConstantBuffer% TO hr%\nIF hr% <> 0 OR pConstantBuffer% = 0 ERROR 100, \"ID3D11Device::CreateBuffer (constant) failed: \"+STR\\$~hr%```\n\nThe next thing that we need to do is come up with three matrices that we will use to do the transformation. We would like to set up our camera so that it is situated at [0 1 -5], looking at the point [0 1 0]. 
We can call PROC_MatrixLookAtLH() to conveniently compute a view matrix for us using the up vector [0 1 0] since we would like the +Y direction to always stay at top:\n\n``` REM Initialize the view matrix:\nDIM Eye(2), At(2), Up(2)\nEye() = 0.0, 1.0, -5.0 : At() = 0.0, 0.0, 0.0 : Up() = 0.0, 1.0, 0.0\nPROC_MatrixLookAtLH(mView(), Eye(), At(), Up())```\n\nFinally, to come up with a projection matrix, we call PROC_MatrixPerspectiveFovLH(), with a 90 degree vertical field of view (pi/2), an aspect ratio of 640/512, which is from our back buffer size, and near and far Z at 0.01 and 100, respectively. This means that anything closer than 0.01 or further than 100 will not be visible on the screen:\n\n``` REM Initialize the projection matrix:\nPROC_MatrixPerspectiveFovLH(mProjection(), PI/2, Width%/Height%, 0.01, 100)```\n\n## Updating Constant Buffers\n\nWe have the matrices, and now we must write them to the constant buffer when rendering so that the GPU can read them. Also, because matrices are arranged differently in memory in C++ and HLSL, we must transpose the matrices before updating them:\n\n``` PROC_MatrixTranspose(ConstantBuffer{}, ConstantBuffer.mWorld{}, mWorld())\nPROC_MatrixTranspose(ConstantBuffer{}, ConstantBuffer.mView{}, mView())\nPROC_MatrixTranspose(ConstantBuffer{}, ConstantBuffer.mProjection{}, mProjection())
null,
""
] | [
null,
"http://bbcbasic.co.uk/wiki/lib/exe/fetch.php",
null,
"http://bbcbasic.co.uk/wiki/lib/exe/fetch.php",
null,
"http://bbcbasic.co.uk/wiki/lib/exe/indexer.php",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.85333335,"math_prob":0.96389544,"size":9907,"snap":"2019-43-2019-47","text_gpt3_token_len":2641,"char_repetition_ratio":0.14440069,"word_repetition_ratio":0.020382166,"special_character_ratio":0.27384678,"punctuation_ratio":0.19363892,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97214633,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-15T10:50:17Z\",\"WARC-Record-ID\":\"<urn:uuid:26769faf-b09c-4ec9-b305-43230f5929b1>\",\"Content-Length\":\"31198\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2d24275e-01dd-4422-b13f-8cd0698b7dc0>\",\"WARC-Concurrent-To\":\"<urn:uuid:57397bc1-06ac-4ecd-af99-f43c86bbfd1f>\",\"WARC-IP-Address\":\"94.229.68.18\",\"WARC-Target-URI\":\"http://bbcbasic.co.uk/wiki/doku.php?id=tutorial_204_20-_203d_20spaces\",\"WARC-Payload-Digest\":\"sha1:EOJ7KSWGNZTG3TY5JDW3MHHXO2M452SL\",\"WARC-Block-Digest\":\"sha1:KYARBX65GZ7ORSL5R63O5UV6CUKOGXUD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496668618.8_warc_CC-MAIN-20191115093159-20191115121159-00377.warc.gz\"}"} |
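The matrix that PROC_MatrixPerspectiveFovLH builds can be sketched numerically to see what the near and far parameters do. This follows the standard left-handed D3DX perspective formula in the tutorial's row-vector convention; the function below is our own illustration, not the wiki library's code:

```python
import numpy as np

def perspective_fov_lh(fov_y, aspect, zn, zf):
    """Left-handed perspective projection matrix (row-vector convention)."""
    ys = 1.0 / np.tan(fov_y / 2.0)        # vertical scale from the field of view
    xs = ys / aspect                      # horizontal scale
    return np.array([
        [xs,  0.0, 0.0,                   0.0],
        [0.0, ys,  0.0,                   0.0],
        [0.0, 0.0, zf / (zf - zn),        1.0],
        [0.0, 0.0, -zn * zf / (zf - zn),  0.0],
    ])

# With the tutorial's parameters, the near plane maps to depth 0 and the
# far plane to depth 1 after the perspective divide by w.
P = perspective_fov_lh(np.pi / 2, 640 / 512, 0.01, 100.0)
near = np.array([0.0, 0.0, 0.01, 1.0]) @ P
far = np.array([0.0, 0.0, 100.0, 1.0]) @ P
```

After the divide, near[2] / near[3] comes out to 0 and far[2] / far[3] to 1 (up to floating-point rounding), matching the [0, 1] depth range described in the Projection Space section.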
https://www.ademcetinkaya.com/2022/10/asian-hotels-east-limited-stock.html | [
"Prediction of the future movement of stock prices has been the subject of much research. There is a gamut of literature on technical analysis of stock prices where the objective is to identify patterns in stock price movements and derive profit from them. Improving prediction accuracy remains the single biggest challenge in this area of research. We propose a hybrid approach for stock price movement prediction using machine learning, deep learning, and natural language processing. We evaluate Asian Hotels (East) Limited prediction models with Modular Neural Network (Market Direction Analysis) and Logistic Regression1,2,3,4 and conclude that the NSE AHLEAST stock is predictable in the short/long term. According to price forecasts for the (n+16 weeks) period: the dominant strategy among neural networks is to Hold NSE AHLEAST stock.\n\nKeywords: NSE AHLEAST, Asian Hotels (East) Limited, stock forecast, machine learning based prediction, risk rating, buy-sell behaviour, stock analysis, target price analysis, options and futures.\n\n## Key Points\n\n1. Can statistics predict the future?\n2. How do you know when a stock will go up or down?\n3. Understanding Buy, Sell, and Hold Ratings",
null,
"## NSE AHLEAST Target Price Prediction Modeling Methodology\n\nThe impact of many factors on stock prices makes stock prediction a difficult and highly complicated task. In this paper, machine learning techniques have been applied to stock price prediction in order to overcome such difficulties. In the implemented work, five models have been developed and their performances are compared in predicting stock market trends. We consider the Asian Hotels (East) Limited Stock Decision Process with Logistic Regression where A is the set of discrete actions of NSE AHLEAST stock holders, F is the set of discrete states, P : S × F × S → R is the transition probability distribution, R : S × F → R is the reaction function, and γ ∈ [0, 1] is a move factor for expectation.1,2,3,4\n\nF(Logistic Regression)5,6,7 = $\begin{pmatrix} p_{a1} & p_{a2} & \dots & p_{1n} \\ \vdots \\ p_{j1} & p_{j2} & \dots & p_{jn} \\ \vdots \\ p_{k1} & p_{k2} & \dots & p_{kn} \\ \vdots \\ p_{n1} & p_{n2} & \dots & p_{nn} \end{pmatrix} \times$ R(Modular Neural Network (Market Direction Analysis)) $\times$ S(n) → (n+16 weeks) $\int e^{x}\,\mathrm{rx}$\n\nn: Time series to forecast\n\np: Price signals of NSE AHLEAST stock\n\nj: Nash equilibria\n\nk: Dominated move\n\na: Best response for target price\n\nFor further technical information about how our model works, we invite you to visit the article below:\n\nHow do AC Investment Research machine learning (predictive) algorithms actually work?\n\n## NSE AHLEAST Stock Forecast (Buy or Sell) for (n+16 weeks)\n\nSample Set: Neural Network\nStock/Index: NSE AHLEAST Asian Hotels (East) Limited\nTime series to forecast n: 02 Oct 2022 for (n+16 weeks)\n\nAccording to price forecasts for the (n+16 weeks) period: the dominant strategy among neural networks is to Hold NSE AHLEAST stock.\n\nX axis: *Likelihood% (The higher the percentage value, the more likely the event will occur.)\n\nY axis: *Potential Impact% (The higher the 
percentage value, the more likely the price will deviate.)\n\nZ axis (Yellow to Green): *Technical Analysis%\n\n## Conclusions\n\nAsian Hotels (East) Limited assigned short-term Ba3 & long-term Ba1 forecasted stock rating. We evaluate the prediction models Modular Neural Network (Market Direction Analysis) with Logistic Regression1,2,3,4 and conclude that the NSE AHLEAST stock is predictable in the short/long term. According to price forecasts for (n+16 weeks) period: The dominant strategy among neural network is to Hold NSE AHLEAST stock.\n\n### Financial State Forecast for NSE AHLEAST Stock Options & Futures\n\nRating | Short-Term | Long-Term Senior\nOutlook* | Ba3 | Ba1\nOperational Risk | 74 | 62\nMarket Risk | 65 | 88\nTechnical Analysis | 52 | 60\nFundamental Analysis | 53 | 67\nRisk Unsystematic | 72 | 73\n\n### Prediction Confidence Score\n\nTrust metric by Neural Network: 91 out of 100 with 530 signals.\n\n## References\n\n1. Rumelhart DE, Hinton GE, Williams RJ. 1986. Learning representations by back-propagating errors. Nature 323:533–36\n2. N. Bäuerle and A. Mundt. Dynamic mean-risk optimization in a binomial model. Mathematical Methods of Operations Research, 70(2):219–239, 2009.\n3. L. Prashanth and M. Ghavamzadeh. Actor-critic algorithms for risk-sensitive MDPs. In Proceedings of Advances in Neural Information Processing Systems 26, pages 252–260, 2013.\n4. D. Bertsekas. Dynamic programming and optimal control. Athena Scientific, 1995.\n5. Mazumder R, Hastie T, Tibshirani R. 2010. Spectral regularization algorithms for learning large incomplete matrices. J. Mach. Learn. Res. 11:2287–322\n6. Athey S, Bayati M, Doudchenko N, Imbens G, Khosravi K. 2017a. Matrix completion methods for causal panel data models. arXiv:1710.10251 [math.ST]\n7. Efron B, Hastie T, Johnstone I, Tibshirani R. 2004. Least angle regression. Ann. Stat. 32:407–99\n\n## Frequently Asked Questions\n\nQ: What is the prediction methodology for NSE AHLEAST stock?\nA: NSE AHLEAST stock prediction methodology: We evaluate the prediction models Modular Neural Network (Market Direction Analysis) and Logistic Regression\nQ: Is NSE AHLEAST stock a buy or sell?\nA: The dominant strategy among neural network is to Hold NSE AHLEAST Stock.\nQ: Is Asian Hotels (East) Limited stock a good investment?\nA: The consensus rating for Asian Hotels (East) Limited is Hold and assigned short-term Ba3 & long-term Ba1 forecasted stock rating.\nQ: What is the consensus rating of NSE AHLEAST stock?\nA: The consensus rating for NSE AHLEAST is Hold.\nQ: What is the prediction period for NSE AHLEAST stock?\nA: The prediction period for NSE AHLEAST is (n+16 weeks)"
] | [
null,
"https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiqIX_a0GyU8qlSfR26UQD9zzuw29iiOWWEwjj31LwrKF9KlGpzV12ptuL_dQTWYvN-MB2xr1O9CI2ljrJn8GqPmfdMbf3861rl2ZgRMPpc-GGr5zKKDxArjKT6-Rj-paUR3sowtSNh_TSm-u2oHUXNq6Czv21N9TLmcnMEdYgfj7V7NTvVaJIpLxvO7w/s16000/20220829_122644_0000.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.85229397,"math_prob":0.8927311,"size":5356,"snap":"2023-14-2023-23","text_gpt3_token_len":1315,"char_repetition_ratio":0.14293721,"word_repetition_ratio":0.16544117,"special_character_ratio":0.23114264,"punctuation_ratio":0.145,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9878298,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-05-28T19:31:36Z\",\"WARC-Record-ID\":\"<urn:uuid:1b130212-244a-4a2f-9062-85d528141fb1>\",\"Content-Length\":\"323608\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3affecb5-b955-4fe3-b87e-72afed81fef9>\",\"WARC-Concurrent-To\":\"<urn:uuid:00b898a4-d3e1-48f8-932f-36f665f9a49c>\",\"WARC-IP-Address\":\"142.251.111.121\",\"WARC-Target-URI\":\"https://www.ademcetinkaya.com/2022/10/asian-hotels-east-limited-stock.html\",\"WARC-Payload-Digest\":\"sha1:HQ34HR3C2K2A7ZT5TAVMBGTYVOLHXIKV\",\"WARC-Block-Digest\":\"sha1:K6HAWFOTERHD6DXN3II54Z52PTIOBWKF\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224644506.21_warc_CC-MAIN-20230528182446-20230528212446-00601.warc.gz\"}"} |
https://projecteuclid.org/proceedings/advanced-studies-in-pure-mathematics/singularities-in-geometry-and-topology-2011/Chapter/Survey-of-apparent-contours-of-stable-maps-between-surfaces/10.2969/aspm/06610013 | [
"VOL. 66 | 2015 Survey of apparent contours of stable maps between surfaces\nTakahiro Yamamoto\n\nEditor(s) Vincent Blanlœil, Osamu Saeki\n\n## Abstract\n\nThis is a survey paper about studies of the simplest shape of the apparent contour for stable maps between surfaces. Such studies first appeared in then in , , , , . Let $M$ be a connected and closed surface, $N$ a connected surface. For a stable map $\\varphi: M\\to N$, denote by $c(\\varphi)$, $n(\\varphi)$ and $i(\\varphi)$ the numbers of cusps, nodes and singular set components of $\\varphi$, respectively. For a $C^\\infty$ map $\\varphi_0 : M\\to S^2$ into the sphere, we study the minimal pair $(i, c+n)$ and triples $(i,c,n)$, $(c,i,n)$, $(n,c,i)$ and $(i,n,c)$ among stable maps $M\\to S^2$ homotopic to $\\varphi_0$ with respect to the lexicographic order.\n\n## Information\n\nPublished: 1 January 2015\nFirst available in Project Euclid: 19 October 2018\n\nzbMATH: 1360.57037\nMathSciNet: MR3382040\n\nDigital Object Identifier: 10.2969/aspm/06610013\n\nSubjects:\nPrimary: 57R45\nSecondary: 57R35, 58K15\n\nKeywords: cusp, Node, stable map",
null,
"",
null,
""
] | [
null,
"https://projecteuclid.org/Content/themes/SPIEImages/Share_black_icon.png",
null,
"https://projecteuclid.org/images/proceedings/cover_aspm.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7566391,"math_prob":0.98813766,"size":750,"snap":"2021-43-2021-49","text_gpt3_token_len":228,"char_repetition_ratio":0.121983916,"word_repetition_ratio":0.0,"special_character_ratio":0.33333334,"punctuation_ratio":0.18181819,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9882085,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-11-27T21:28:42Z\",\"WARC-Record-ID\":\"<urn:uuid:093608a5-320f-4b9e-867f-70d43a70fcb6>\",\"Content-Length\":\"115807\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d6d53ca2-30dd-4080-a0cc-8df04e1ef972>\",\"WARC-Concurrent-To\":\"<urn:uuid:d04fb5c4-07fc-49ac-8bbc-8ee2e4c0082f>\",\"WARC-IP-Address\":\"107.154.79.145\",\"WARC-Target-URI\":\"https://projecteuclid.org/proceedings/advanced-studies-in-pure-mathematics/singularities-in-geometry-and-topology-2011/Chapter/Survey-of-apparent-contours-of-stable-maps-between-surfaces/10.2969/aspm/06610013\",\"WARC-Payload-Digest\":\"sha1:LH7G67JXQDI4CVQ2K6JT4GSMDGCMJ4NB\",\"WARC-Block-Digest\":\"sha1:DBFMEFIHPKTBUEA2AMBDK3P5IFX52MTF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964358233.7_warc_CC-MAIN-20211127193525-20211127223525-00287.warc.gz\"}"} |
https://math.stackexchange.com/questions/3254631/for-which-n-is-fracn4-equal-to-left-lfloor-sqrt-fracn4-right | [
"# For which $n$ is $\\frac{n!}{4}$ equal to $\\left\\lfloor \\sqrt{\\frac{n!}{4}}\\right\\rfloor\\left(\\left\\lfloor\\sqrt{\\frac{n!}{4}}\\right\\rfloor + 1\\right)$?\n\nFor how many values of $$n$$, is $$\\frac{n!}{4}$$ equal to $$\\left\\lfloor \\sqrt{\\frac{n!}{4}}\\right\\rfloor\\left(\\left\\lfloor\\sqrt{\\frac{n!}{4}}\\right\\rfloor + 1\\right)$$?\n\nFurthermore, is there a way to approximate (or maybe even find the precise answer to) $$\\left\\lfloor \\sqrt{\\frac{n!}{4}}\\right\\rfloor\\left(\\left\\lfloor\\sqrt{\\frac{n!}{4}}\\right\\rfloor + 1\\right)$$? I tried approximating $$n!$$ but had no luck.\n\nThanks in advance!\n\n• Google \"Brocard's problem\". It works for $n= 4,5,$ and $7$; it is conjectured those are the only such values. If you accept the proof of the abc conjecture then it is known that there are only finitely many such values of $n$. – Nate Jun 7 at 21:12\n\n## 1 Answer\n\nIt's not a full solution, but maybe it will help:\n\nLet $$x=\\sqrt{\\frac{n!}{4}}$$. $$x^2 = \\lfloor x\\rfloor \\cdot (\\lfloor x\\rfloor + 1)$$ $$x^2 + \\frac14 = (\\lfloor x\\rfloor + \\frac12)^2$$ $$-\\frac12 + \\sqrt{x^2+ \\frac14} = \\lfloor x\\rfloor$$ $$-1 + \\sqrt{n!+ 1} = 2\\lfloor x\\rfloor \\in 2\\mathbb N$$ So we get a necessary condition $$\\exists m\\in\\mathbb N : n! + 1 = (2m+1)^2$$ It's easy to see that it is also a sufficient condition, because $$\\sqrt{n!} < \\sqrt{n! + 1} < \\sqrt{n!} + 1$$ $$\\sqrt{n!} < 2m+1 < \\sqrt{n!} + 1$$ $$m < \\sqrt{\\frac{n!}{4}} < m + \\frac12$$ so $$\\lfloor \\sqrt{\\frac{n!}{4}}\\rfloor = m$$. For low $$n$$, $$n=4$$ and $$n=5$$ satisfy this condition, but I have no proof that there are no more solutions."
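The condition derived in the answer is easy to test numerically. Here is a quick Python check (my addition, not part of the original thread); it confirms the values $n = 4, 5, 7$ from Brocard's problem mentioned in the comment:

```python
import math

def holds(n):
    # does n!/4 equal floor(sqrt(n!/4)) * (floor(sqrt(n!/4)) + 1)?
    f = math.factorial(n)
    if f % 4 != 0:          # n < 4: n!/4 is not an integer and the identity fails
        return False
    k = math.isqrt(f // 4)  # exact integer floor of sqrt(n!/4)
    return f // 4 == k * (k + 1)

print([n for n in range(1, 30) if holds(n)])  # → [4, 5, 7]
```

Note that `math.isqrt` keeps the check exact for large `n`, where floating-point `sqrt` of `n!` would lose precision.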
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.76328456,"math_prob":1.0000091,"size":424,"snap":"2019-35-2019-39","text_gpt3_token_len":148,"char_repetition_ratio":0.1952381,"word_repetition_ratio":0.0,"special_character_ratio":0.3254717,"punctuation_ratio":0.13483146,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000093,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-21T04:33:54Z\",\"WARC-Record-ID\":\"<urn:uuid:b787522d-3d77-40c9-b535-3114c4f36b2d>\",\"Content-Length\":\"135860\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1d01d484-b3ce-4463-be98-a6b5437b3565>\",\"WARC-Concurrent-To\":\"<urn:uuid:a64b7fbd-5b4f-42bf-a249-168410a6be3f>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/3254631/for-which-n-is-fracn4-equal-to-left-lfloor-sqrt-fracn4-right\",\"WARC-Payload-Digest\":\"sha1:XIIIMR5HR3PMY4NXAEYPDDNJIZ52XAQ6\",\"WARC-Block-Digest\":\"sha1:AKJXP2WOJOMNGK4FRQH6L3LAWE3BY26Q\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027315809.69_warc_CC-MAIN-20190821043107-20190821065107-00283.warc.gz\"}"} |
https://www.programming-idioms.org/impl-edit/172/3908 | [
"",
null,
"# Programming-Idioms",
null,
"# 172\nImplementation\nClojure\n\nBe concise.\n\nBe useful.\n\nAll contributions dictatorially edited by webmasters to match personal tastes.\n\nPlease try to avoid dependencies on third-party libraries and frameworks.\n\nImplementation edit is for fixing errors and enhancing with metadata.\n\nInstead of changing the code of the snippet, consider creating another Clojure implementation.\n\nOther implementations\n`m[k] = v`\n`\\$m[\\$k] = \\$v;`\n`m[k] = v`\n`m[k] = v;`\n`m[k] = v`\n`fgl`\n`m.KeyData[k] := v;`\n`use std::collections::HashMap;`\n`m.insert(k, v);`\n`\\$m{\\$k} = \\$v;`\n`m.set(k, v);`\n`m[k] = v;`\n`m[k] = v;`\n`m[k] = v;`\n`m[k] = v;`\n`import qualified Data.Map as Map`\n`newM = Map.insert k v m`\n`m.k = v`\n`m[k] = v`"
] | [
null,
"https://www.programming-idioms.org/default_20201008b_/img/wheel.svg",
null,
"https://www.programming-idioms.org/default_20201008b_/img/die.svg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.59679115,"math_prob":0.9797307,"size":653,"snap":"2020-45-2020-50","text_gpt3_token_len":187,"char_repetition_ratio":0.1540832,"word_repetition_ratio":0.083333336,"special_character_ratio":0.29402757,"punctuation_ratio":0.20915033,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9970825,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-27T15:32:23Z\",\"WARC-Record-ID\":\"<urn:uuid:b4a95b35-c224-468b-a8fc-481e6f698ce8>\",\"Content-Length\":\"24106\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:43b6893c-bf7c-494b-b27c-7e7e668b006b>\",\"WARC-Concurrent-To\":\"<urn:uuid:d7a4c107-de6e-4da1-9228-7636c0a1d0d4>\",\"WARC-IP-Address\":\"172.217.7.179\",\"WARC-Target-URI\":\"https://www.programming-idioms.org/impl-edit/172/3908\",\"WARC-Payload-Digest\":\"sha1:RXE3YYJVJTGUMHOKKUS33VAR75JSFU2U\",\"WARC-Block-Digest\":\"sha1:7UVQ5R5NKMK3GHLWRUEHRIZDTPJFLLNS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107894203.73_warc_CC-MAIN-20201027140911-20201027170911-00384.warc.gz\"}"} |
https://se.mathworks.com/help/images/ref/projective2d.html | [
"# projective2d\n\n2-D projective geometric transformation\n\n## Description\n\nA `projective2d` object encapsulates a 2-D projective geometric transformation.\n\n## Creation\n\nYou can create a `projective2d` object using the following methods:\n\n• `fitgeotrans` — Estimates a geometric transformation that maps pairs of control points between two images\n\n• The `projective2d` function described here\n\n### Syntax\n\n``tform = projective2d``\n``tform = projective2d(A)``\n\n### Description\n\n````tform = projective2d` creates a `projective2d` object with default property settings that correspond to the identity transformation.```\n\nexample\n\n````tform = projective2d(A)` sets the property `T` with a valid projective transformation defined by nonsingular matrix `A`.```\n\n## Properties\n\nForward 2-D projective transformation, specified as a nonsingular 3-by-3 numeric matrix.\n\nThe matrix `T` uses the convention:\n\n`[x y 1] = [u v 1] * T`\n\nwhere `T` has the form:\n\n```[a b c;... d e f;... g h i]; ```\n\nThe default of `T` is the identity transformation.\n\nData Types: `double` | `single`\n\nDimensionality of the geometric transformation for both input and output points, specified as the value 2.\n\n## Object Functions\n\n `invert` Invert geometric transformation `outputLimits` Find output spatial limits given input spatial limits `transformPointsForward` Apply forward geometric transformation `transformPointsInverse` Apply inverse geometric transformation\n\n## Examples\n\nThis example shows how to apply rotation and tilt to an image, using a `projective2d` geometric transformation object created directly from a transformation matrix.\n\nRead a grayscale image into the workspace.\n\n`I = imread('pout.tif');`\n\nCombine rotation and tilt into a transformation matrix, `tm`. Use this transformation matrix to create a `projective2d` geometric transformation object, `tform`.\n\n```theta = 10; tm = [cosd(theta) -sind(theta) 0.001; ...
sind(theta) cosd(theta) 0.01; ... 0 0 1]; tform = projective2d(tm);```\n\nApply the transformation using `imwarp`. View the transformed image.\n\n```outputImage = imwarp(I,tform); imshow(outputImage)```",
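For readers without MATLAB, the row-vector convention `[x y 1] = [u v 1] * T` described above can be sketched in plain Python (my illustration, not MathWorks code; the function name merely mirrors the toolbox's `transformPointsForward`):

```python
def transform_points_forward(T, pts):
    """Apply [x y w] = [u v 1] * T (row-vector convention), then divide out w."""
    result = []
    for u, v in pts:
        x = u * T[0][0] + v * T[1][0] + T[2][0]
        y = u * T[0][1] + v * T[1][1] + T[2][1]
        w = u * T[0][2] + v * T[1][2] + T[2][2]
        result.append((x / w, y / w))
    return result

# In this convention a pure translation by (2, 3) sits in the last row of T:
T = [[1, 0, 0],
     [0, 1, 0],
     [2, 3, 1]]
print(transform_points_forward(T, [(0, 0), (1, 1)]))  # → [(2.0, 3.0), (3.0, 4.0)]
```

The small `0.001` and `0.01` entries in the MATLAB example's third column are what introduce the projective "tilt": they make `w` depend on the input point, so straight parallel lines converge after the division by `w`.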
null,
""
] | [
null,
"https://se.mathworks.com/help/examples/images/win64/ApplyProjectiveTransformationToImageExample_01.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.57474285,"math_prob":0.9777567,"size":807,"snap":"2020-34-2020-40","text_gpt3_token_len":172,"char_repetition_ratio":0.23038605,"word_repetition_ratio":0.0,"special_character_ratio":0.14993805,"punctuation_ratio":0.03883495,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9942669,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-05T22:21:49Z\",\"WARC-Record-ID\":\"<urn:uuid:25ba1765-905e-47d4-a339-02ffb44c2c17>\",\"Content-Length\":\"83453\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9db8f7c7-112d-4109-aaa8-75039d2cc465>\",\"WARC-Concurrent-To\":\"<urn:uuid:80146f9c-6d25-4676-bec2-93aa074ba452>\",\"WARC-IP-Address\":\"184.25.198.13\",\"WARC-Target-URI\":\"https://se.mathworks.com/help/images/ref/projective2d.html\",\"WARC-Payload-Digest\":\"sha1:ZBUEA6J3FMZM3J73CZGH27NO5PEMUKTY\",\"WARC-Block-Digest\":\"sha1:HWC3ODXWWFFVXARUMBTKFFKKVOUWEAU5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439735989.10_warc_CC-MAIN-20200805212258-20200806002258-00053.warc.gz\"}"} |
https://help.desmos.com/hc/en-us/articles/202529129-What-is-RMSE- | [
"# What is RMSE?\n\n##### When you perform a regression, there are three statistics that the calculator might display in order to give you an idea of how well your regression model fits the data provided:\n1. The Pearson correlation coefficient ($$r$$). This measures the strength of the linear correlation between two data sets, so Desmos will display it in the special case that your model is linear and contains both slope and intercept parameters (e.g., $$y_1 \\sim mx_1 + b$$).\n2. The coefficient of determination ($$R^{2}$$). This measures your regression model's "goodness of fit." Roughly speaking, $$R^{2}$$ tells you what fraction of your variance is explained by the model. Linearity is irrelevant for this measure, so Desmos will show $$R^{2}$$ values for more general models of the form $$y_1 \\sim f(x_1,x_2,\\ldots)$$.\n3. The root-mean-square-error (RMSE). This is the most general way to quantify how well a model predicts your observed data, because RMSE can be calculated even when there is no principled way to distinguish between the dependent and independent variables. It is simply the square root of the average squared error. Desmos will show RMSE for any model of the form $$f(y_1) \\sim g(x_1)$$ or $$h(x_1,y_1) \\sim 0$$.\n\nNote that the measures are listed in increasing order of generality, and Desmos will display the most specific measure possible: $$r$$ in the special case of a linear model with intercept, $$R^{2}$$ for a nonlinear model where the independent and dependent variables are clearly defined, and RMSE otherwise.\n\nThere is one case where Desmos will display both $$r$$ and $$R^{2}$$, and that is when you have a nonlinear model that turns out to have $$R^{2}$$ exactly equal to $$r^{2}$$. In other words, if your nonlinear model fits the data precisely as well as a linear model would, we will report the linear correlation as a convenience."
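To make the definitions concrete, here is a small Python sketch (mine, not Desmos code) of the two most general measures: RMSE as the square root of the average squared error, and R² as one minus the residual sum of squares over the total sum of squares about the mean:

```python
import math

def rmse(y_true, y_pred):
    # root-mean-square error: sqrt of the average squared residual
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def r_squared(y_true, y_pred):
    # coefficient of determination: fraction of the variance explained by the model
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

y, yhat = [1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8]
print(rmse(y, yhat), r_squared(y, yhat))
```

A perfect fit gives RMSE 0 and R² 1; a model no better than predicting the mean gives R² 0, while RMSE stays meaningful even for implicit models where R² is undefined.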
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.90528005,"math_prob":0.999671,"size":1846,"snap":"2022-27-2022-33","text_gpt3_token_len":431,"char_repetition_ratio":0.12540716,"word_repetition_ratio":0.0,"special_character_ratio":0.2535211,"punctuation_ratio":0.102564104,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9998709,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-01T05:29:17Z\",\"WARC-Record-ID\":\"<urn:uuid:3180b2c9-6612-44de-b1ea-43a3ff5f9ce5>\",\"Content-Length\":\"39005\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ba5dcc0a-d7e8-4a1c-80ff-1cc6247ae3f2>\",\"WARC-Concurrent-To\":\"<urn:uuid:0ec135cb-326a-49c6-b249-6799aaca23a2>\",\"WARC-IP-Address\":\"104.16.51.111\",\"WARC-Target-URI\":\"https://help.desmos.com/hc/en-us/articles/202529129-What-is-RMSE-\",\"WARC-Payload-Digest\":\"sha1:BZTOHLL5XQM6WHU5YZ74COJLHOWPQAMQ\",\"WARC-Block-Digest\":\"sha1:ZSLY243B6WAI6AIM4HNMUHUM74NWG35U\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103920118.49_warc_CC-MAIN-20220701034437-20220701064437-00152.warc.gz\"}"} |
https://peterwittek.com/2014/08/more-on-quantum-learning-of-unitaries/ | [
"# More on quantum learning of unitaries, process tomography, and classical regression\n\nAQIS just concluded, and I presented a poster on transductive and active learning in the quantum learning of unitaries (Wittek, 2014). We had some good discussions on the topic, particularly on the differences between process tomography and learning of unitary transformations, and whether this whole idea of comparing to classical regression analysis makes any sense. This entry summarizes some of the points made.\n\nOne of the referees of the extended abstract wrote:\n\n[I]t is essential to avoid the confusion between \"learning\" and \"tomography\". In tomography, one tries to infer a classical description of the unknown gate. In learning, the goal is to simulate the application of the gate on a new input state, without necessarily having a classical description. In general, gate tomography (or, more precisely, gate estimation) is a suboptimal strategy for learning. The fact that, in the presence of symmetry, estimation is sufficient to achieve the optimal performance of gate learning is a highly non-trivial result.\n\nThis is a crucial distinction that I was not aware of. So in the case of process tomography, we have an explicit, classical description of the estimated transformation $\\hat{U}$ that we can use on an arbitrary number of states in the future. In classical learning, this is a pure case of induction: based on some finite $N$ training instances, we infer a function, which we deploy on other data instances not present in the training set.\n\nThe poster did not concern this case, it only discussed the suboptimal coherent and optimal incoherent strategies, and how they related to transduction and induction. 
The interesting thing about the incoherent strategy is that we perform an optimal POVM measurement, so we actually learn classical information about the unitary, but not as much as in the case of process tomography.\n\nIn classical regression, we have $N$ training instances, each with a real-valued label: $(\\mathbf{x}_1, y_1),\\ldots, (\\mathbf{x}_N, y_N)$. In the quantum learning scenario, we have $N$ uses of a black box at our disposal. If we want to match this in the classical case, we would need the original function $f$ that generates the training instances: $(\\mathbf{x}_1, f(\\mathbf{x}_1)),\\ldots, (\\mathbf{x}_N, f(\\mathbf{x}_N))$. It would not make much difference to classical learning algorithms. In the quantum case, there is an optimal input state that reveals the most about the unitary in question (provided some symmetry, as pointed out by the referee). Furthermore, this optimal input state should be used in parallel, that is, applying the $N$-times tensor product of the unitary on the state (Bisio et al., 2010). Apparently, this theoretical result may not translate well to an implementation: a sequential approach is more feasible. In this case, subsequent optimal states would depend on what the previous state revealed of the process. To spice things up, this sequence of optimal states could be augmented by classical learning and the parametric control of estimating the unitary (Hentschel & Sanders, 2010).\n\nThe next question is what the input and output data might be. I believe it is a clear case of quantum input and output, a distinction I like to make. I find it useful to separate this class of algorithms from ones that operate on classical data while still offering a speedup, like Grover's search on classical databases and its variants. Yes, we can argue that at some point the quantum states have to be initialized classically, and at that time we introduce at least linear computational complexity. 
At the other end of the pipeline, sooner or later we will want classical information, which implies state tomography of the output states with all its problems. As one visitor to the poster pointed out, quantum machine learning is at severe disadvantage compared to classical algorithms, and this is one of the reasons. Yet, we can picture learning processes where several quantum learners are attached, or a learner aids a quintessentially quantum procedure, obliterating the need for a transition to the classical domain. So I maintain that it makes sense to talk about quantum input and output data.\n\n# References\n\nBisio, A.; Chiribella, G.; D'Ariano, G.; Facchini, S. & Perinotti, P. Optimal quantum learning of a unitary transformation. Physical Review A, 2010, 81, 032324.\nHentschel, A. & Sanders, B. C. Machine Learning for Precise Quantum Measurement. Physical Review Letters, 2010, 104, 063603.\nWittek, P. Transduction and Active Learning in the Quantum Learning of Unitary Transformations. Poster Session at AQIS-14, 14th Asian Quantum Information Science Conference, 2014, Kyoto, Japan.\n\n### One Comment\n\n1. […] Update: An extended version of this post will appear in the upcoming book Quantum Machine Learning: What Quantum Computing Means to Data Mining. Update 2: Some clarifications are made in a new post. […]"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.91466206,"math_prob":0.9371816,"size":4698,"snap":"2023-40-2023-50","text_gpt3_token_len":977,"char_repetition_ratio":0.14294845,"word_repetition_ratio":0.0026631157,"special_character_ratio":0.20285228,"punctuation_ratio":0.1395881,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96707714,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-27T15:28:11Z\",\"WARC-Record-ID\":\"<urn:uuid:ed719371-ba2f-4c3a-aeec-2353169ab808>\",\"Content-Length\":\"28545\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cb284b70-9da3-4c6a-98a6-4d6fe7c4bbcc>\",\"WARC-Concurrent-To\":\"<urn:uuid:28c321df-f9e9-4d18-8ca4-47523e72780c>\",\"WARC-IP-Address\":\"172.67.140.208\",\"WARC-Target-URI\":\"https://peterwittek.com/2014/08/more-on-quantum-learning-of-unitaries/\",\"WARC-Payload-Digest\":\"sha1:NYPGBGVR3KPPBFJMGTNFLL72BZVYXVWG\",\"WARC-Block-Digest\":\"sha1:CRLFMUNCDSJHM2VP3RLCS7M423QSYNDA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510300.41_warc_CC-MAIN-20230927135227-20230927165227-00561.warc.gz\"}"} |
https://webwork.maa.org/moodle/mod/forum/discuss.php?d=3321 | [
"## PREP 2014 Question Authoring - Archived\n\n### How does one suppress terms with coefficient 0 in text display?",
null,
"### How does one suppress terms with coefficient 0 in text display?\n\nby Michele Titcombe -\nNumber of replies: 4\nOr how does one suppress the 1 as a multiplicative constant?\n\nFor example, suppose you have a function such as\n\n$f = Compute(\"$a*x**2 + $b*x + $c\");\n\nwhere $a, $b and $c are randomized coefficients that could take on values of 0 or 1. Instead of a function displaying as 0x^2+1x+2 (in appropriate pretty print), I'd like it to display x+2.",
null,
"In reply to Michele Titcombe\n\n### Re: How does one suppress terms with coefficient 0 in text display?\n\nby Davide Cervone -\n\nThe Formula object class has a reduce() method that will remove terms with coefficients of 0 and remove coefficients of 1 (among other things). So try\n\n$f = Compute(\"$a x^2 + $b x + $c\")->reduce;\n\nand see if that works better.",
null,
"In reply to Davide Cervone\n\n### Re: How does one suppress terms with coefficient 0 in text display?\n\nby Paul Pearson -\n\nHi Michele,\n\nAlso, if you want to ensure that the leading coefficient is nonzero, you can use another randomization method that we haven't yet taught all of you. For example,\n\n$a = non_zero_random(-5,5,1);\n\nwould select a non-zero random number from -5 to 5 with increment 1, i.e., a random number from the list -5, -4, -3, -2, -1, 1, 2, 3, 4, 5. If you want to ensure that the leading coefficient is not 1 or -1, you could instead use something like\n\n$a = random(-1,1,2) * random(2,5,1);\n\nto produce a random integer from the list -5, -4, -3, -2, 2, 3, 4, 5. The random(-1,1,2) function will return either -1 or +1 (since the increment is 2) and the random(2,5,1) function will return 2, 3, 4, or 5. You could also use the list_random function in which all possible list items are specified in the arguments to the function. So, for example,\n\n$a = list_random(-5,-4,-3,-2,2,3,4,5);\n\nwill do the same thing as $a = random(-1,1,2) * random(2,5,1). We plan to go over more details such as these randomization functions in future workshop sessions.\n\nBest regards,\n\nPaul Pearson",
null,
"In reply to Paul Pearson\n\n### Re: How does one suppress terms with coefficient 0 in text display?\n\nby Michele Titcombe -\n\nExcellent answers - both will be very helpful to me. I have another (perhaps related to reduce?) question: Is there a way to display in text the value of a variable as a fraction rather than as a decimal? For example, say I have a variable\n\n$ans = Compute(\"$a+1/$b\");\n\nwhere $a and $b are nonzero randomized and $b is not 1. In the text of a problem, I'd like to display $ans as an improper fraction, not a decimal approximation.",
null,
"### Re: How does one suppress terms with coefficient 0 in text display?\n\nby Davide Cervone -\n\nThere are two things at play here. The first is that Compute() does just what it says: it computes the value of the equation and returns that. The result is a real number, not a formula, and real numbers don't have a tree structure like a formula does.\n\nOn the other hand, Compute() does first parse the string as a formula, and then if it is constant it returns that constant value rather than the formula itself. But Compute() does retain the original formula and makes it available via the original_formula property of the returned MathObject. So the original formula could be obtained from\n\n$ans->{original_formula}\n\nin your case.\n\nThe problem, however, is that, by default, constant values within a formula are reduced automatically, so the results of 1/$b and the addition of $a will be performed during the parsing of the formula, so the original formula will also just be the final number. You can control that automatic evaluation using some flags set in the Context. So\n\nContext()->flags->set(\n  reduceConstants => 0,\n  reduceConstantFunctions => 0,\n);\n\nwill prevent reducing of constant operations and constant function calls, leaving the original formula as the numeric expression you have in mind. That means\n\nContext()->flags->set(\n  reduceConstants => 0,\n  reduceConstantFunctions => 0,\n);\n$a = non_zero_random(-5,5,1);\n$b = random(2,10,1);\n$ans = Compute(\"$a+1/$b\");\n\nContext()->texStrings;\nBEGIN_TEXT\nThe answer is \\( $ans->{original_formula} \\).\nEND_TEXT\nContext()->normalStrings;",
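Outside of WeBWorK, the same zero-term and unit-coefficient suppression that reduce() performs can be sketched in ordinary Python (my illustration, unrelated to the PG internals):

```python
def format_poly(coeffs):
    """coeffs[i] is the coefficient of x^i; print highest degree first,
    dropping zero terms and the redundant coefficients 1 and -1."""
    terms = []
    for power in range(len(coeffs) - 1, -1, -1):
        c = coeffs[power]
        if c == 0:
            continue                      # suppress terms with coefficient 0
        x = "x" if power == 1 else f"x^{power}"
        if power == 0:
            term = str(c)
        elif c == 1:
            term = x                      # suppress the coefficient 1
        elif c == -1:
            term = "-" + x
        else:
            term = f"{c}{x}"
        terms.append(term)
    if not terms:
        return "0"
    out = terms[0]
    for t in terms[1:]:
        out += (" - " + t[1:]) if t.startswith("-") else (" + " + t)
    return out

print(format_poly([2, 1, 0]))  # 0x^2 + 1x + 2 → "x + 2"
```

This reproduces the thread's opening example: a quadratic with randomized coefficients 0, 1, 2 prints as `x + 2` rather than `0x^2+1x+2`.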
] | [
null,
"https://webwork.maa.org/moodle/pluginfile.php/955/user/icon/classic/f1",
null,
"https://webwork.maa.org/moodle/pluginfile.php/59/user/icon/classic/f1",
null,
"https://webwork.maa.org/moodle/theme/image.php/classic/core/1584125254/u/f1",
null,
"https://webwork.maa.org/moodle/pluginfile.php/955/user/icon/classic/f1",
null,
"https://webwork.maa.org/moodle/pluginfile.php/59/user/icon/classic/f1",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8704593,"math_prob":0.9828136,"size":4638,"snap":"2021-31-2021-39","text_gpt3_token_len":1192,"char_repetition_ratio":0.11825637,"word_repetition_ratio":0.32061067,"special_character_ratio":0.26972833,"punctuation_ratio":0.15117466,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99797875,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-27T04:12:40Z\",\"WARC-Record-ID\":\"<urn:uuid:99ab5de4-9eb1-4ae5-899b-d320e1f8db29>\",\"Content-Length\":\"97913\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6ff8ebc2-2788-4a2e-884a-0932ef24df25>\",\"WARC-Concurrent-To\":\"<urn:uuid:ab3b8eaf-5d32-4a69-92d8-3428aa854208>\",\"WARC-IP-Address\":\"34.204.106.157\",\"WARC-Target-URI\":\"https://webwork.maa.org/moodle/mod/forum/discuss.php?d=3321\",\"WARC-Payload-Digest\":\"sha1:C3CK2ZO5ROHPOSGY2XHSYZRV4BVXQ3AW\",\"WARC-Block-Digest\":\"sha1:TYYDN3VV6MTULWR2EROECINIO6CLHRAG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780058263.20_warc_CC-MAIN-20210927030035-20210927060035-00082.warc.gz\"}"} |