URL: string, lengths 15 to 1.68k
text_list: list, lengths 1 to 199
image_list: list, lengths 1 to 199
metadata: string, lengths 1.19k to 3.08k
https://journals.ametsoc.org/jamc/article/51/11/1980/13640/A-New-Approach-to-Modeling-Vehicle-Induced-Heat
[ "## Abstract\n\nThe distribution of vehicle-induced wind velocity in the transversal direction of roads is measured. A statistical analysis is also performed to find the vehicle stopping time and stopping position at traffic signals. These results are used to build a heat-balance model to predict the road surface temperature resulting from the thermal effects of vehicles. To validate the model, measured and calculated road surface temperatures for a free-running (single path) location and a traffic-signal location are compared. The contributions of meteorological and vehicle-induced heat fluxes to the road surface temperature are quantitatively analyzed. For the present traffic and meteorological conditions, the calculated and measured road surface temperatures were in agreement for both the free-running and traffic-signal locations. Furthermore, the thermal contribution of vehicles to the road surface temperature was found to be nonnegligible at both locations.\n\n## 1. Introduction\n\n### a. Background\n\nThese observations indicate that the thermal effects of vehicles on the road surface temperature are not negligible at traffic signals or when the volume of traffic is high.\n\n### b. Previous studies on thermal effects of vehicles and their modeling\n\nSubsequently, Fujimoto et al. (2008) developed a heat-transfer model by adding sensible heat due to the air movement by passing vehicles to the model built by Watanabe et al. These heat and air movements are hereafter referred to as the vehicle-induced sensible heat and vehicle-induced wind, respectively. We refer to this model as the vehicle road surface temperature (VRST) model. Fujimoto et al. (2008) also examined the thermal effects of vehicles on the road surface temperature by performing a numerical simulation using both an instantaneous and a time-averaged VRST model. The former calculates the time variation in vehicle-induced heat fluxes associated with the passage of vehicles (under pulselike conditions). The latter calculates the road surface temperature using time-averaged vehicle-induced heat fluxes. The following trends were observed: 1) the thermal effects of vehicles lowered the road surface temperature during the day and raised it at night and 2) the difference in the road surface temperatures of the two VRST models was small. Fujimoto et al. (2010) then conducted a simulation analysis of the road surface temperature at a traffic-signal location where vehicles repeatedly stop and start. The results showed that vehicles starting and stopping at traffic signals caused fluctuations in the road surface temperature, and that the temperature continuously fluctuated around 0°C (zero crossing).\n\nThus, the VRST model has made it possible to extract the properties of the road surface temperature overlooked in previous studies, by conducting experiments to identify heat transfer coefficients and variables relevant to traffic by field and indoor experiments using vehicles with tires. However, the VRST model has the following limitations:\n\n1. Spatial changes in the vehicle-induced wind velocity are not considered (currently, the wind velocity at the center of the vehicle is used as a representative value).\n\n2. At traffic-signal locations, it is assumed that vehicle stopping times and positions are fixed.\n\n3. The VRST model has not been satisfactorily validated. A quantitative comparison of the calculated and observed road surface temperatures has not yet been performed at a traffic signal.\n\n### c. 
Purpose of study\n\nThis study had the following major objectives:\n\n1. to derive a rational relationship between the vehicle speed and vehicle-induced wind velocity on the basis of the distribution of vehicle-induced wind velocity in the transversal direction of the road,\n\n2. to elucidate the properties of vehicle stopping time and position at traffic signals,\n\n3. to improve the vehicle-induced sensible heat and vehicle radiative heat using the results obtained from objectives 1 and 2, and\n\n4. to verify the reliability of the improved VRST model by comparing observed and calculated road surface temperatures at a free-running (single path) location and a traffic-signal location.\n\n## 2. VRST model\n\n### a. Assumptions\n\n• The VRST model is based on several assumptions. For vehicle operation, there are three assumptions:\n\n• 1) The target road surface is the center of the lane (i.e., at the vehicle’s centerline). At traffic-signal locations, the center of the vehicle that is stopped just before the stop line is the target road surface.\n\n• 2) All vehicles travel in the center of the lane and are the same size.\n\n• 3) The time interval between any two vehicles (the vehicle-passage time) is uniformly distributed on the basis of the hourly traffic volume.\n\n• For heat flux, there are two additional assumptions:\n\n• 4) Heat transfer in the transversal direction of the road is neglected.\n\n• 5) Additional wind velocity due to the interactions between natural wind and vehicle-induced wind is not considered. That is, there is no superposition of these two winds, and the sensible heat associated with a relatively high wind velocity is incorporated in the heat balance of the pavement surface layer [Eq. (1)].\n\n### b. Heat balance on pavement surface\n\nThe spikelike changes in heat flux in response to the passing of vehicles are expressed using unit step functions whose values are 0 or 1 [f(t) and g(t)], and the heat balance of the pavement surface layer is given by\n\nwhere ρp is the density of the pavement surface layer (kg m−3), cp is the specific heat of the pavement surface (kJ kg−1 K−1), Tp is the temperature of the pavement surface layer (°C), t is the time (s), Δzs is the thickness of the pavement surface layer (m), Cp is the pavement conductive heat flux (W m−2), Rlu is the road surface radiative heat flux (W m−2), S is the sensible heat flux (W m−2), L is the latent heat flux (W m−2), Rld is the sky radiative heat flux (W m−2), α is the albedo, Rs is the shortwave (insolation) heat flux (W m−2), Rυ is the vehicle radiative heat flux (W m−2), and Qnet is the net heat flux (W m−2). Here, S is given by the following formula based on assumption 5 in section 2a:\n\nwhere Vnw is the natural (background) wind velocity (m s−1), Sa is the natural wind sensible heat flux arising from Vnw (W m−2), Vw is the vehicle-induced wind velocity (m s−1), and Sυ is the vehicle-induced sensible heat flux arising from Vw (W m−2). The details of the heat flux in Eqs. (1) and (2) have been described by Fujimoto et al. (2008), and no further explanation is given here. In addition, in this analysis, the road surface is considered to be dry, and, therefore, L is eliminated. The unit step functions in Eq. (1) will be discussed in detail in the next section.\n\n### c. Modeling thermal effects of vehicles\n\nFigures 1a and 1b show the time variations in the heat fluxes due to the passing of vehicles at free-running and traffic-signal locations, respectively.\n\n#### 1) Free-running location\n\nIn Fig. 
1a, t1 is the period during which the road surface is covered by a moving vehicle (the vehicle-passage time), and t2 is the subsequent period during which it is not covered (the non-vehicle-passage time). The quantities Cp, Rlu, and S act on the road surface at all times. During t1, Rυ acts on the road surface while Rld and Rs are zero. Conversely, during t2, Rld and (1 – α)Rs act on the road surface while Rυ is zero. The values of t1 and t2 are defined by the following equations:\n\nwhere Lυ is the vehicle length (m), Vυ is the vehicle speed (km h−1), and Fυ is the hourly traffic volume (vehicles per hour).\n\n#### 2) Traffic-signal location\n\nIn this section, we consider the time taken for a vehicle to stop at the designated point (just before the stop line) after the traffic signal turns red (i.e., the vehicle deceleration time), and the period for which the vehicle remains stationary until the next green signal (vehicle stop time). The stop time at the designated point, t4, is given by\n\nwhere tred is the red-signal period that includes the yellow-signal period, Pst (=t40/tred) is the stop-time ratio, and Psa (=Ns/Ns0) is the stopping-vehicle-number ratio. In addition, t40 is the stop time corresponding to the red-signal period (s), Ns0 is the frequency of the red signal per unit time, and Ns is the frequency of the red signal when a vehicle stops at the designated point in Ns0; t40 is the mean of the stop times measured at a traffic signal. As will be discussed in section 4b, Ns and t40 may be affected by the traffic volume.\n\nFinally, from Eq. (5), the vehicle deceleration time, t3 (h), is\n\nGiven the above thermal effects of vehicles, the unit step functions f(t) and g(t) in Eq. (1) are as listed in Table 1.\n\nTable 1.\n\nUnit step function for heat balance on road surface.", null, "## 3. Measurement and formulation of vehicle-induced wind velocity\n\n### a. Outline of experiment\n\nTo study the distribution of Vw in the transversal y direction of the road, we conducted an outdoor experiment using a typical passenger vehicle (4.97 m in length, 1.93 m in width, and 1.86 m in height). A thermal anemometer (manufactured by Kanomax) was set up at a height of 0.18 m above the road surface as shown in Fig. 2, and the y direction of Vw was measured by translating the driving of the vehicle in the y direction.\n\nThe vehicle’s centerline was considered to be y = 0. The vehicle’s speed was set to 30 km h−1.\n\n### b. Transversal distribution of vehicle-induced wind velocity\n\nFigure 3 shows the time variations in Vw for y* = 0, 0.4, 1.2, and 1.6, where y* is the normalized distance and y* = y/0.5Wυ (Wυ being the vehicle width). The value of Vw increases rapidly immediately after the vehicle passes (t = 0), reaching a peak at approximately 1 s and then decreasing gradually.\n\nThe maximum value of Vw, Vwmax (m s−1), occurred at the center of the vehicle (i.e., y* = 0) and decreased toward the roadside (as y* became larger). We have normalized Vwmax to express the y direction of Vw in a unified expression:\n\nwhere", null, "and", null, "= 0 indicates that Vwmax = Vnw.\n\nFigure 4 shows the relationship between", null, "and y*. Here,", null, "decreases as y* increases, and the relationship between", null, "and y* follows a Gaussian function. That is,\n\nFig. 4.\n\nRelationship between maximum normalized vehicle-induced wind velocity", null, "and normalized distance y*.\n\nFig. 
4.\n\nRelationship between maximum normalized vehicle-induced wind velocity", null, "and normalized distance y*.\n\n### c. Representative velocity of vehicle-induced wind\n\nIn determining the representative value of Vw,", null, ", the following assumptions were made:\n\n1. Time variations in", null, "(=Vw0) depend on t but do not depend on y*, as shown in Eq. (9), established by Fujimoto et al. (2008): where tmax is the time (s) for the wind velocity to reach Vwmax0 from the ambient velocity, t0 is the duration of the vehicle-induced wind, and a, b, and c are coefficients. For vehicle speeds Vυ (km h−1) ranging from 10 to 70 km h−1, these variables and coefficients are formulated in terms of Vυ as follows (see Fujimoto et al. 2008):      In addition, t in Eq. (9) indicates the elapsed time (s) since the vehicle passage.\n2. The representative value of", null, ",", null, ", is the average of", null, "over half of the vehicle width (from y* = 0 to 1.0, the shaded area in Fig. 4).\n\n3. Here,", null, "is the product of Vw0 and", null, ". On the basis of these assumptions and Eq. (8),", null, "is calculated as follows: where\n\n## 4. Micrometeorological observation, traffic-volume survey, and road surface temperature measurement on a national route\n\n### a. Outline of observation\n\nThis section describes the micrometeorological observations, the traffic-volume survey, and the road surface temperature measurements (hereinafter referred to as the observations). The observations were made at the free-running location and the traffic-signal location, and these are labeled case BS and case CS, respectively. Case BS was measured at National Route 8 (Echizen City, Fukui, Japan) from 0700 to 1700 LT 6 August 2008. Case CS was measured at an intersection on National Route 416 (Fukui City, Fukui) from 1700 to 0800 LT 29–30 December 2009. In both observations, the air temperature Ta and the relative humidity RHa (%) were measured using a thermohygrometer (HMP45, manufactured by Vaisala). The value of Vnw (m s−1) was measured using a vane anemometer (Weather Wizard III, manufactured by Davis). Both Rs and Rld were measured using a radiation balance meter (CNR1, manufactured by Kipp and Zonen). These values were recorded every minute by a datalogger. The road surface temperature Ts was measured using a radiation thermometer (ST 60, manufactured by Raytek) at points within and outside the vehicle’s passage. Furthermore, the spatial distribution of the road surface temperature was regularly recorded using a thermotracer (TH9100, manufactured by NEC). In case BS, Vυ was calculated by measuring the traveling time between two different positions. In case CS, the green-light period tgrn, tred, t4, and the vehicle-stopping positions near the traffic signal were recorded using a video camera.\n\n### b. Observation results\n\n#### 1) Free-running location\n\nThe weather on the day of observation was fine until 1200 LT, and then it became cloudy. The road surface was completely dry all day. The value of Ta increased from 23.1°C at 0700 LT to 34.8°C at 1230 LT, which was the maximum temperature during the observation period. Subsequently, Ta was around 30°C until 1700 LT. The value of RHa decreased from approximately 80% at 0700 LT to approximately 40% at 1000 LT. Subsequently, RHa varied within a range of 40%–60%, while Vnw was below 1 m s−1 until 1200 LT and reached a maximum of 2.4 m s−1 at 1300 LT. 
Subsequently, Vnw was in the range 0.5–2.0 m s−1, and Rs increased from the beginning of the observation to a maximum of 908 W m−2 at 1200 LT. It then oscillated because of the effect of the clouds. The value of Rld ranged from 420 to 475 W m−2, while Fυ ranged from 270 to 500 vehicles per hour with an average value of 376 vehicles per hour. In addition, Vυ varied between 37 and 42 km h−1 with an average value of 38 km h−1.\n\n#### 2) Traffic-signal location\n\nDuring the observation period, the weather was fine and the road surface was dry. The value of Ta decreased from 5.7°C at the beginning of the observation to 1.0°C at 0000 LT and then increased to around 2.0°C, while RHa was 60%–70% throughout the observation period. For most of the time, Vnw was less than 0.4 m s−1, and the maximum Vnw of 0.7 m s−1 was reached at 0100 LT. The value of Rld was approximately 300 W m−2, while Fυ decreased from approximately 360 vehicles per hour from 1700 to 1900 LT to a minimum of 51 vehicles per hour at 0400 LT. Throughout the observation period, tgrn was approximately 30 s, while tred was approximately 90 s from 1700 to 2000 LT and ranged from 60 to 75 s for the rest of the observation period. Furthermore, t4 was almost equal to tred at 1700 and 1800 LT but became shorter as Fυ decreased, reaching a minimum of 16 s at 0400 LT.\n\nFigures 7 and 8 show the relationship between Pst or Psa and Fυ. The value of Pst increased in proportion to the power function of Fυ and is given by\n\nThe relationship between Psa and Fυ is given by the same type of function as that for the relationship between Pst and Fυ:\n\n## 5. Comparison of measured and calculated results of road surface temperature\n\n### a. Boundary conditions and initial conditions\n\nA numerical analysis of the road surface temperature was performed for a pavement body of thickness 0.5 m and a subgrade of thickness 4.9 m. To obtain the initial temperatures in the pavement and subgrade, we entered weather data for August and December (from Fukui Local Meteorological Observatory) into the model and carried out a transient analysis until the vertical temperature profile varied with the thermal equilibrium state. The weather and traffic data obtained from the observations were used as the boundary conditions and were given by a linear interpolation of the data collected in time order. The temperature of the bottom boundary of the analysis area was fixed at 15°C in case BS and 10°C in case CS. In addition, on the basis of assumption 4 in section 2, it was considered that there was no heat transfer at the side boundary of the analysis area. Since Vυ could not be measured in case CS, it was fixed to 32 km h−1 with reference to the Fiscal 2005 Road Traffic Census. Table 2 lists the thermophysical property values given in a heat-transfer handbook (written by the Japan Society of Mechanical Engineers in 1993, pp. 238 and 375) for the pavement and ground used in the analysis.\n\nTable 2.\n\nThermophysical property values for pavement and ground.", null, "### b. Spatial distribution and time variation of road surface temperature\n\nThe model was validated by comparing the observed Ts with the calculated Ts. We also discuss the difference in Ts between the vehicle-passage and non-vehicle-passage areas.\n\n#### 1) Free-running location\n\nFigures 9a and 9b show the spatial distribution of Ts in case BS at 0635 and 1157 LT 6 August 2008. In Fig. 
9a we see that Ts at all points (A–F) was almost uniform, in the range 27.2°–27.8°C, whether or not vehicles were passing. However, the values of Ts at points G, J, I, and L in the vehicle-passage area in Fig. 9b (50.6°–53.6°C) were approximately 1°–3°C lower than Ts at points H and K without vehicle passage (54.4°–54.8°C).\n\nFigure 10 shows the time variation in Ts in case BS. Hereinafter, the suffixes υ and n for Ts indicate the vehicle- and non-vehicle-passage areas, respectively, and the suffixes m and c indicate the measured and calculated values, respectively.\n\nThe initial Tsvm and Tsnm were both 30.6°C, and there was no difference between them: ΔTsm (=TsvmTsnm) = 0. Both Tsvm and Tsnm increased over time, but Tsnm became higher than Tsvm at around 0900 LT. Both values reached a maximum at 1200 LT (Tsnm = 55.5°C and Tsvm = 51.2°C) with ΔTsm = −4.3°C. Subsequently, both temperatures decreased gradually while maintaining ΔTsm ≈ −1.5°C. The average ΔTsm during the observation period", null, "was −2.0°C.\n\nThe calculated temperatures, Tsvc and Tsnc, reproduced the observed values in general, as shown in Fig. 10. However, when the vehicle-induced sensible heat Sυ is deleted from the heat balance in Eq. (1) [i.e., S = Sa in Eq. (1)], the calculated road surface temperature", null, "was slightly lower than Tsnc and became more inaccurate than Tsvc. As far as the present traffic and meteorological conditions are concerned, it is seen that Sυ cannot be disregarded from the calculation of the road surface temperature.\n\nDuring the observation period, the average difference between Tsvc and Tsnc, ΔTsc (=TsvcTsnc), was −2.0°C, which was in good agreement with", null, ".\n\n#### 2) Traffic-signal location\n\nFigures 11a and 11b indicate the spatial distribution of Ts in case CS at 2102 and 2103 LT 29 December 2009. It is evident from Fig. 11a that the vehicle-body temperature is higher than Ts except on the roof, side-view mirrors, and so on. The values of Ts at points M, N, and O in the non-vehicle-passage area were 7.2°, 8.6°, and 9.5°C, respectively. Here, Ts increased as the measurement point was closer to the vehicle-passage area. In Fig. 11a, the values of Ts at points P, S, R, and U in the vehicle-stopping area were 3°–4°C higher than Ts at points Q and T in the zone where vehicles did not stop or did not pass.\n\nAccording to Prusa et al. (2002), the width of the DM is 1.7–3.9 times that of a vehicle. Consequently, it is clear from Fig. 11 that the road surface temperature over the DM is not uniform. There is an obvious difference in the road surface temperature between the vehicle-passage area and the non-vehicle-passage area. The road surface temperature in the vehicle-passage area can be regarded as the representative surface temperature on the road that is subject to the vehicle-related heat.\n\nFigure 12 shows the time variations in Ts in case CS. At the beginning of the observation Tsvm and Tsnm were 10.9° and 8.2°C, respectively. Slight fluctuations in the temperature continued throughout the observation period. At the beginning of the observation, ΔTsm was 2.7°C, reaching 4.9°C at 2000 LT and then decreasing over time. After 0100 LT, ΔTsm was approximately 0.5°C and the value of", null, "was 1.9°C.\n\nWhile Tsnc varied gradually over time, Tsvc showed fluctuations with a small amplitude. The top-right graph in Fig. 12 shows an enlarged view of the time variation in Tsvc (solid line). 
It is evident that Tsvc decreased for tgrn (shown as A) and increased for tred (shown as B and C). The cause of the fluctuation will be discussed in detail in section 6a(2).\n\nThe amplitude for Tsvc, ΔTsvc, was approximately 0.3°C for Fυ = 360 vehicles per hour at 1700–1900 LT, approximately 0.2°C for Fυ = 225 vehicles per hour at 0000 LT, and lower than 0.1°C for Fυ = 51 vehicles per hour at 0400 LT. Increases in Fυ tended to increase ΔTsvc.\n\nThe values of Tsnc and Tsvc were in good agreement with the measured temperatures, Tsnm and Tsvm. In addition,", null, "was 1.6°C, which was 0.3°C lower than", null, ". This difference between", null, "and", null, "may depend on the temperature measurement position on the road surface and may be caused by an error in the initial temperature of the pavement body or subgrade.\n\n## 6. Discussion\n\n### a. Heat balance on road surface\n\n#### 1) Free-running location\n\nFigures 13a and 13b show the time variation in heat flux in the non-vehicle-passage area from 0700 to 1700 LT 6 August 2008 and in the vehicle-passage area for 20 s from 1105:30 to 1105:50 LT 8 August 2008 in case BS. The positive vertical (y) axis (top half) and the negative y axis (bottom half) indicate the heat gain and loss, respectively, of the road surface layer.\n\nWe first discuss the heat flux in the non-vehicle-passage area in Fig. 13a. The main causes of heat gain were (1 − α)Rs and Rld, while Rlu and Cp before 1300 LT and Rlu and S = Sa after 1300 LT were the main causes of heat loss. The values of Rlu and Cp were almost constant at approximately −573 and −339 W m−2.\n\nWe now consider the heat flux in the vehicle-passage area in Fig. 13b. For the t1 period (0.4 s), (1 − α)Rs and Rld were zero because of the vehicle’s shielding effect, but were 645 and 451 W m−2, respectively, for the t2 period (9.1 s). Instead, an Rυ of 547 W m−2 acted on the road surface for the t1 period. The maximum value of Sa was −136 W m−2 when Vnw > Vw, and the maximum value of Sυ reached −409 W m−2 (=3 times larger than Sa) when Vnw < Vw. It is seen that Sυ contributes to the decrease in the road surface temperature shown in Fig. 10 (i.e., Tsvc <", null, "), and that a running vehicle plays the role of a fan that cools the road surface.\n\n#### 2) Traffic-signal location\n\nFigures 14a and 14b show the time variations in the heat flux in the non-vehicle-passage area from 1700 to 0800 LT 29 December 2009 and in the vehicle-passage area for 3 min from 0000 LT 30 December 2009 in case CS.\n\nIn the non-vehicle-passage area in Fig. 14a, the heat loss by Rlu and heat gain by Rld were dominant, and S = Sa, Cp, and (1 – α)Rs were relatively small.\n\nWe now discuss the heat flux in the vehicle-passage area in Fig. 14b. The values for t1, t2, t3, and t4 during this period were 0.5, 4.1, 34.3, and 35.9 s, respectively. For the t2 and t3 periods, Rld was 319 W m−2 and reached zero for the t1 and t4 periods. For the t1 and t4 periods Rυ, was 369 W m−2. The maximum value of Sa was −16 W m−2 when Vnw > Vw, and Sυ reached a maximum value of −164 W m−2 when Vnw < Vw. Here, Cp increased as if to compensate for the negative Sυ for tgrn and reached a maximum value of 54 W m−2, but decreased for tred. 
The value of Rlu was −332 W m−2.\n\nThe average Qnet values for the t1, t2, t3, and t4 periods were 11, −72, 16, and 39 W m−2, respectively.\n\nThe minute fluctuations in Tsvc described in section 5b(2) are caused by abrupt changes in Qnet (from positive to negative and vice versa) associated with the thermal effects of the vehicle.\n\n### b. Evaluation of thermal effects of vehicles\n\nFigure 15 shows a schematic view of the heat balance in the vehicle-passage area (left) and that in the non-vehicle-passage area (right). The contribution of heat flux (Rld, Rlu, Rs, Cp, Rυ, and S) to ΔTsc was quantitatively evaluated as IP* by the following equation:\n\nwhere Pυ is the heat flux in the vehicle-passage area (Rld–υ, Rlu–υ, Rsυ, Cpυ, Rυυ, and Sυ) and Pn is the heat flux in the non-vehicle-passage area (Rld–n, Rlu–n, Rsn, Cpn, and Sn). Note that P* is the hourly heat flux calculated by the time integration of the subtraction of Pn from Pυ. Thus, IP* is the rate of each P* with regard to the sum of the absolute values of P*. A positive IP* increases ΔTsc and a negative IP* reduces ΔTsc.\n\nNext, let us consider IP* at the free-running and traffic-signal locations based on Figs. 16a and 16b, which show the time variations in IP* for cases BS and CS, respectively.\n\n#### 1) Free-running location\n\nAt the free-running location,", null, "and", null, "were always positive, but", null, "and", null, "were always negative because of the vehicle’s shielding effect. In addition, IS* was positive at 1300, 1400, and 1600 LT. This was because Tsvc < Tsnc and Vnw was large. Conversely,", null, "", null, "was positive until 1200 LT and negative at 1300, 1400, and 1600 LT. This was due to the increase in Cpn associated with a drop in Tsnc.\n\nThe values of", null, ",", null, ",", null, ",", null, ",", null, ", and", null, ", which are the means of IP* over the analysis period, were −0.15, 0.10, −0.14, 0.17, 0.18, and −0.16, respectively. At the free-running location, it was difficult to identify the dominant heat flux that affects ΔTsc.\n\n#### 2) Traffic-signal location\n\nAt the traffic-signal location,", null, "and", null, "played an important role in IP* and the contributions of", null, ", IS*, and", null, "to Tsc were relatively small. However, the absolute values of", null, ", IS*, and", null, "increased slightly toward 0400 LT, when Fυ reached its minimum. Because of this increase,", null, "and", null, "decreased, but they were approximately 3 times larger than the absolute values of", null, "or IS*. At 0400 LT, the values of", null, ", IS*, and", null, "were −0.07, 0.12, and −0.10, respectively.\n\nThe values of", null, ",", null, ",", null, ",", null, ",", null, ", and", null, "were −0.38, −0.04, 0.00, 0.04, −0.08, and 0.46, respectively. At the traffic-signal location, Rυ and Rld are the main heat fluxes that affect ΔTsc. However, the effects of Rlu, S, and Cp on ΔTsc are nonnegligible when Fυ becomes small.\n\n## 7. Conclusions\n\nWe measured the distribution of the vehicle-induced wind velocity in the transversal direction of roads, and used a video camera to statistically evaluate the characteristics of vehicle stopping time and position at traffic-signal locations. Using these results, we developed a heat-balance road surface temperature model that considers the thermal effects of vehicles. The measured road surface temperatures were compared with the temperatures calculated by the proposed model at a free-running (single path) location and a traffic-signal location. 
This clarified the thermal effects of vehicles on the road surface temperature.\n\nOur results are as follow:\n\n1. The maximum value of the vehicle-induced wind velocity appeared at the center of the vehicle and decreased toward the roadside, following a Gaussian distribution.\n\n2. The ratio of the vehicle-stopping period to the red-light period increased with an increase in traffic volume, following a power function. For example, this ratio was 0.25 for a traffic frequency of 50 vehicles per hour and 0.87 for 360 vehicles per hour.\n\n3. The ratio of the number of vehicles stopping at the designated point (just before the stop line) to the total number of stopping vehicles decreased from 0.90 to 0.50 following a power function, as the traffic volume decreased.\n\n4. For both the free-running and traffic-signal locations, the calculated road surface temperatures in the vehicle-passage area and the non-vehicle-passage area were in agreement with the observed values.\n\n5. The computation revealed the following two points: (i) the vehicle passage at the traffic-signal location causes fluctuations in road surface temperature with a small amplitude—the road surface temperature drops during the green-light period and increases during the red-light period, and (ii) the amplitude of the fluctuations in the road surface temperature tends to increase slightly as the traffic volume increases.\n\n6. At the free-running location, it was difficult to identify the dominant heat flux that influenced the difference in the road surface temperature between the vehicle-passage area and the non-vehicle-passage area. At the traffic-signal location, the vehicle relative heat flux and sky relative heat flux were the main contributors to this difference.\n\nAlthough traffic and weather conditions were limited, the proposed model enabled the calculation of the time variation in the road surface temperature in the vehicle-passage area and direct comparison with the observed one. Consequently, it was found that the thermal contribution of vehicles to road surface temperature cannot be neglected and is significantly different between the free-running location and the traffic-signal location. 
However, further studies will be needed to find the limitations of the parameterizations and formulation of the vehicle-related heat fluxes in this study through the change in vehicle size and vehicle speed.\n\n## Acknowledgments\n\nThis work was supported by KAKENHI (90456434).\n\n### APPENDIX\n\n#### List of Symbols\n\na, b, c Coefficients regarding Vw0\n\nCp Pavement conductive heat flux (W m−2)\n\ncp Specific heat of the pavement surface (kJ kg−1 K−1)\n\nFυ Hourly traffic volume (vehicles per hour)\n\nf(t), g(t) Unit step functions (0 or 1) to express the spikelike changes in heat flux in response to the passing of vehicles\n\nIP* Rate of each P* with regard to the sum of the absolute values of P*", null, "Mean values of IP* over the analysis period\n\nL Latent heat flux (W m−2)\n\nLυ Vehicle length (m)\n\nm, c Suffixes expressed as the measured and calculated values\n\nNs Frequency of the red signal when a vehicle stops at the designated point in Ns0\n\nNs0 Frequency of the red signal per unit time\n\nPsa Stopping-vehicle-number ratio (=Ns/Ns0)\n\nPst Stop-time ratio (=t40/tred)\n\nPn Heat flux in the non-vehicle-passage area (Rld–n, Rlu–n, Rsn, Cpn, and Sn)\n\nPυ Heat flux in the vehicle-passage area (Rld–υ, Rlu–υ, Rsυ, Cpυ, Rυυ, and Sυ)\n\nP* Hourly heat flux calculated by the time integration of the subtraction of Pn from Pυ\n\nQnet Net heat flux (W m−2)\n\nRld Sky radiative heat flux (W m−2)\n\nRs Shortwave (insolation) heat flux (W m−2)\n\nRυ Vehicle radiative heat flux (W m−2)\n\nRHa Relative humidity (%)\n\nS Sensible heat flux (W m−2)\n\nSa Natural wind sensible heat flux arising from Vnw (W m−2)\n\nSυ Vehicle-induced sensible heat flux arising from Vw (W m−2)\n\nTa Air temperature (°C)\n\nTp Temperature of the pavement surface layer (°C)", null, "Calculated road surface temperature without vehicle-induced sensible heat (°C)\n\nt Time (s)\n\nt0 Duration of the vehicle-induced wind (s)\n\nt1 Period during which the road surface is covered by a moving vehicle (the vehicle-passage time) (s)\n\nt2 Subsequent period during which it is not covered (the non-vehicle-passage time) (s)\n\nt3 Vehicle deceleration time (s)\n\nt4 Stop time at the designated point (s)\n\nt40 Stop time corresponding to the red-signal period (s)\n\ntgrn Green-light period (s)\n\ntmax Time for the wind velocity to reach Vwmax from the ambient velocity (s)\n\ntred Red-signal period (s)\n\nVnw Natural (background) wind velocity (m s−1)\n\nVυ Vehicle speed (km h−1)\n\nVw Vehicle-induced wind velocity (m s−1)", null, "Representative value of Vw (m s−1)\n\nVwmax Maximum value of Vw, (m s−1)\n\nVwmax0Vwmax at y* = 0 (m s−1)", null, "Normalized Vwmax", null, "Average of", null, "over half of the vehicle width\n\nVw0Vw at y* = 0 (m s−1)\n\nυ, n Suffixes expressed as the vehicle- and non-vehicle-passage areas\n\nWv Vehicle width (m)\n\ny Transversal direction of the road (m)\n\ny* Normalized distance\n\nα Albedo\n\nρp Density of the pavement surface layer (kg m−3)\n\nΔTsTsvTsn\n\nΔTsvc Amplitude for Tsvc\n\nΔzs Thickness of the pavement surface layer (m)", null, "The average ΔTs during the observation period\n\n## REFERENCES\n\nREFERENCES\nChapman\n,\nL.\n,\nJ. E.\nThornes\n, and\nA. V.\n,\n2001\n:\nModeling of road surface temperatures from a geographical parameter database. Part 1\n.\nStat. Meteor. Appl.\n,\n8\n,\n409\n419\n.\nCrevier\n,\nL.-P.\n, and\nY.\nDelage\n,\n2001\n:\n.\nJ. Appl. 
Meteor.\n,\n40\n,\n2026\n2037\n.\nFujimoto\n,\nA.\n,\nH.\nWatanabe\n, and\nT.\nFukuhara\n,\n2008\n: Effects of vehicle heat on road surface temperature of dry condition. Proc. 14th Standing Int. Road Weather Conf., Standing International Road Weather Commission, Prague, Czech Republic, ID05. [Available online at http://www.sirwec.org/Papers/prague/5.pdf.]\nFujimoto\n,\nA.\n,\nA.\nSaida\n,\nT.\nFukuhara\n, and\nT.\nFutagami\n,\n2010\n: Heat transfer analysis on road surface temperature near a traffic light. Proc. 17th ITS World Congress, Busan, South Korea, Intelligent Transportation Society, T_AP01138.\nGustavsson\n,\nT.\n, and\nJ.\nBogren\n,\n1991\n:\nInfrared thermography in applied road climatological studies\n.\nInt. J. Remote Sens.\n,\n19\n,\n1311\n1328\n.\nIshikawa\n,\nN.\n,\nH.\nNarita\n, and\nY.\nKajiya\n,\n1999\n:\nContribution of heat from traffic vehicles to snow melting on roads\n.\nTransp. Res. Rec.\n,\n1672\n,\n28\n33\n.\nPrusa\n,\nJ. M.\n,\nM.\nSegal\n,\nB. R.\nTemeyer\n,\nW. A.\nGallus\n, and\nE. S.\nTakle\n,\n2002\n:\nConceptual and scaling evaluation of vehicle traffic thermal effects on snow/ice-covered roads\n.\nJ. Appl. Meteor.\n,\n41\n,\n1225\n1240\n.\nRayer\n,\nP. J.\n,\n1987\n:\nThe Meteorological Office forecast road surface temperature model\n.\nMeteor. Mag.\n,\n116\n,\n180\n191\n.\nSass\n,\nB. H.\n,\n1992\n:\nA numerical model for prediction of road surface temperature and ice\n.\nJ. Appl. Meteor.\n,\n31\n,\n1499\n1506\n.\nSato\n,\nT.\n,\nK.\nKosugi\n,\nO.\nAbe\n,\nS.\nMochizuki\n, and\nS.\nKoseki\n,\n2004\n: Wind and air temperature distribution in the wake of a running vehicle. Proc. 12th Standing Int. Road Weather Conf., Bingen, Germany, Standing International Road Weather Commission. [Available online at http://www.sirwec.org/Papers/bingen/6.pdf.]\nShao\n,\nJ.\n, and\nP. J.\nLister\n,\n1996\n:\nAn automated nowcasting model of road surface temperature and state for winter road maintenance\n.\nJ. Appl. Meteor.\n,\n35\n,\n1352\n1361\n.\nSurgue\n,\nJ. G.\n,\nJ. E.\nThornes\n, and\nR. D.\nOsborne\n,\n1983\n:\nThermal mapping of road surface temperatures\n.\nPhys. Technol.\n,\n13\n,\n212\n213\n.\nTakahashi\n,\nN.\n,\nR. A.\nTokunaga\n,\nM.\nAsano\n, and\nN.\nIshikawa\n,\n2006\n: Developing a method to predict road surface temperatures—Applying heat balance model considering traffic volume. Proc. 13th Standing Int. Road Weather Conf., Turin, Italy, Standing International Road Weather Commission, 58–66. [Available online at http://www.sirwec.org/Papers/torino/9.pdf.]\nWatanabe\n,\nH.\n,\nA.\nFujimoto\n, and\nT.\nFukuhara\n,\n2005\n: Modeling of heat supply to pavement from vehicle. Proc. 21th Cold Region Technology Conf., Sapporo, Japan, Amer. Soc. of Civil Engineers, 195–200." ]
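The heat-balance equation (1) appears only as an image in this extract, but section 2c(1) spells out how the unit step functions switch the radiative terms on and off at a free-running location: Rυ acts while a vehicle covers the surface, and Rld and (1 − α)Rs act otherwise. The Python sketch below is one hedged reading of that description, not the authors' code; the layer properties, the albedo, and the sign convention (gains positive, losses negative, as reported in section 6a) are assumptions, and the latent heat L is dropped because the analysis treats the road as dry.

```python
# Minimal sketch of how the unit step functions of Eq. (1) toggle the radiative terms
# at a free-running location, following the description in section 2c(1).
# All numerical values below are illustrative assumptions, not the paper's inputs.

RHO_P = 2400.0   # pavement surface-layer density, kg m-3 (assumed)
C_P = 0.92       # specific heat of the pavement surface, kJ kg-1 K-1 (assumed)
DZ_S = 0.01      # surface-layer thickness, m (assumed)
ALBEDO = 0.1     # road-surface albedo (assumed)


def step_functions(t, t1, t2):
    """Unit step functions of Eq. (1): g(t) = 1 while a vehicle covers the surface
    (duration t1) and 0 otherwise; f(t) = 1 - g(t) (duration t2). The pattern repeats
    with period t1 + t2 because vehicle passages are assumed uniformly spaced."""
    g = 1.0 if (t % (t1 + t2)) < t1 else 0.0
    return 1.0 - g, g


def advance_surface_temperature(Tp, t, dt, flux, t1, t2):
    """One explicit Euler step of rho_p * c_p * dz_s * dTp/dt = Q_net.

    flux is a dict of signed heat fluxes in W m-2: 'Cp' (pavement conduction),
    'Rlu' (road surface radiation), 'S' (sensible), 'Rld' (sky radiation),
    'Rs' (shortwave), 'Rv' (vehicle radiation). Latent heat is omitted (dry road).
    """
    f, g = step_functions(t, t1, t2)
    q_net = (flux["Cp"] + flux["Rlu"] + flux["S"]
             + f * (flux["Rld"] + (1.0 - ALBEDO) * flux["Rs"])
             + g * flux["Rv"])
    # C_P is in kJ kg-1 K-1, so multiply by 1000 to work in J.
    return Tp + q_net * dt / (RHO_P * C_P * 1000.0 * DZ_S)
```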
[ null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-t1.png", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf1.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf2.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf3.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf4.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf5.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf6.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf6.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf7.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf8.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf9.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf10.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf11.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf12.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf13.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf14.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-t2.png", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf15.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf16.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf17.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf18.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf19.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf20.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf21.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf22.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf23.gif", null, 
"https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf24.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf25.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf26.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf27.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf28.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf29.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf30.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf31.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf32.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf33.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf34.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf36.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf37.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf38.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf39.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf40.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf41.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf42.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf43.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf44.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf45.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf46.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf47.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf48.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf49.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf50.gif", null, 
"https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf51.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf52.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf53.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf54.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf55.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf56.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf57.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf58.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf59.gif", null, "https://ams.silverchair-cdn.com/ams/content_public/journal/jamc/51/11/10.1175_jamc-d-11-0156.1/4/m_jamc-d-11-0156_1-inf60.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9461905,"math_prob":0.913525,"size":35308,"snap":"2020-34-2020-40","text_gpt3_token_len":8607,"char_repetition_ratio":0.20241332,"word_repetition_ratio":0.090618156,"special_character_ratio":0.23963408,"punctuation_ratio":0.105802044,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9523674,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,2,null,2,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-15T05:53:31Z\",\"WARC-Record-ID\":\"<urn:uuid:0813f841-66f4-4042-99db-779926a1dddb>\",\"Content-Length\":\"377885\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1d80f9c7-ec77-496f-805b-ad705a51d4a4>\",\"WARC-Concurrent-To\":\"<urn:uuid:bd30a609-b754-440d-a449-4a680695e3fa>\",\"WARC-IP-Address\":\"52.142.19.226\",\"WARC-Target-URI\":\"https://journals.ametsoc.org/jamc/article/51/11/1980/13640/A-New-Approach-to-Modeling-Vehicle-Induced-Heat\",\"WARC-Payload-Digest\":\"sha1:NGYTHWGIX3HEMIMUKKIXH6WL27AQ6EHZ\",\"WARC-Block-Digest\":\"sha1:IMT3MZCUYSFXVV2NHTAXFIV4VO3A3QL2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439740679.96_warc_CC-MAIN-20200815035250-20200815065250-00450.warc.gz\"}"}
https://physicsmax.com/resistors-parallel-7742
[ "# Resistors in parallel\n\nResistors in parallel\n\nResistors are said to be in parallel when they are placed side by side and their corresponding ends joined together (Fig. 35.6). The same potential difference will thus be applied to each, but they will share the main current in the circuit. We will suppose that the main current I divides into II, 12, and 13 through the resistors RI’ R2′ and R3 respectively and that the common potential difference across them is V. If R is the combined resistance we may write" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9600069,"math_prob":0.86746466,"size":485,"snap":"2020-10-2020-16","text_gpt3_token_len":109,"char_repetition_ratio":0.112266116,"word_repetition_ratio":0.0,"special_character_ratio":0.2185567,"punctuation_ratio":0.08421053,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9734824,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-29T09:14:38Z\",\"WARC-Record-ID\":\"<urn:uuid:124888a9-16c8-48ca-abcb-bdb99ba77af0>\",\"Content-Length\":\"47184\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:79b92d9a-5f7d-4e1e-a253-3bcc49f8de15>\",\"WARC-Concurrent-To\":\"<urn:uuid:4287f887-b180-4051-bb4d-52783ee3475d>\",\"WARC-IP-Address\":\"104.24.125.93\",\"WARC-Target-URI\":\"https://physicsmax.com/resistors-parallel-7742\",\"WARC-Payload-Digest\":\"sha1:IFO5FBEMWJMZNBMMF7F4OGSJTPHEXYGX\",\"WARC-Block-Digest\":\"sha1:MPRPCWIY4KRLJSGJM7NCR46CW33USEI3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875148850.96_warc_CC-MAIN-20200229083813-20200229113813-00558.warc.gz\"}"}
https://de2.slideshare.net/ZazzaNM/sxsw-2019-top-trends
[ "Successfully reported this slideshow.", null, "", null, "", null, "×\n1 of 219\n\nSXSW 2019 - Top Trends\n\n11\n\nShare\n\nA personal overview of the most interesting and relevant topics presented at SXSW 2019 in Austin\n\nFrom Artificial Intelligence to Healthcare through the power of algorithms and much more\n\nSee all\n\nSee all\n\nSXSW 2019 - Top Trends\n\n1. 1. S X S W 2 0 1 9 H I G H L I G H T S Matteo Sarzana digitalgonzo.it @zazzanm\n2. 2. Matteo Sarzana digitalgonzo.it @zazzanm W H A T I S S X S W ?\n3. 3. 2 . 1 4 7 S E S S I O N S - 4 . 9 6 7 S P E A K E R S Matteo Sarzana digitalgonzo.it @zazzanm\n4. 4. I M P A C T O F S X S W : 3 5 0 M I O \\$ Matteo Sarzana digitalgonzo.it @zazzanm\n5. 5. 7 5 . 0 9 8 A T T E N D E E S - 1 0 2 C O U N T R I E S Matteo Sarzana digitalgonzo.it @zazzanm\n6. 6. W H A T C A N W E L E A R N ? Matteo Sarzana digitalgonzo.it @zazzanm\n7. 7. N O O N E C A R E S A B O U T T E C H Matteo Sarzana digitalgonzo.it @zazzanm\n8. 8. T H E N E W R O L E O F T E C H Matteo Sarzana digitalgonzo.it @zazzanm\n9. 9. P E O P L E D O N ’ T G I V E A S H I T A B O U T T E C H Matteo Sarzana digitalgonzo.it @zazzanm\n10. 10. ( T H E Y C A R E A B O U T R E S U L T S ) Matteo Sarzana digitalgonzo.it @zazzanm\n11. 11. I T ’ S N O T T E C H U N T I L I T ’ S O N Y O U R P H O N E Matteo Sarzana digitalgonzo.it @zazzanm\n12. 12. C R E A T I O N O F A N E W E C O N O M Y Matteo Sarzana digitalgonzo.it @zazzanm\n13. 13. T E C H N O L O G Y D R I V E N Matteo Sarzana digitalgonzo.it @zazzanm\n14. 14. E Q U A L O P P O R T U N I T I E S Matteo Sarzana digitalgonzo.it @zazzanm\n15. 15. F R O M C O R P O R A T I O N T O C A R T V E N D O R S Matteo Sarzana digitalgonzo.it @zazzanm\n16. 16. C O S T = A C C E S S I B I L I T Y Matteo Sarzana digitalgonzo.it @zazzanm\n17. 17. A N Y O N E W H O O W N S A M O B I L E P H O N E Matteo Sarzana digitalgonzo.it @zazzanm\n18. 18. B U S I N E S S O R I E N T E D Matteo Sarzana digitalgonzo.it @zazzanm\n19. 19. W I T H S O C I A L R E S P O N S I B I L I T Y Matteo Sarzana digitalgonzo.it @zazzanm\n20. 20. U S E R C E N T R E D E X P E R I E N C E Matteo Sarzana digitalgonzo.it @zazzanm\n21. 21. N O T O N L Y O N L I N E Matteo Sarzana digitalgonzo.it @zazzanm\n22. 22. O N L I N E = O F F L I N E Matteo Sarzana digitalgonzo.it @zazzanm\n23. 23. F O R G E T Y O U R W A L L E T Matteo Sarzana digitalgonzo.it @zazzanm\n24. 24. A I F O R E V E R Y O N E Matteo Sarzana digitalgonzo.it @zazzanm\n25. 25. B E N E F I T P O O R > R I C H Matteo Sarzana digitalgonzo.it @zazzanm\n26. 26. P O W E R T O L O C A L B U S I N E S S E S Matteo Sarzana digitalgonzo.it @zazzanm\n27. 27. M A C H I N E E M P O W E R M E N T Matteo Sarzana digitalgonzo.it @zazzanm\n28. 28. A L G O R I T H M B A S E D D E S I G N Matteo Sarzana digitalgonzo.it @zazzanm\n29. 29. I N T E L L I G E N T C O L O R S C H E M E S Matteo Sarzana digitalgonzo.it @zazzanm\n30. 30. I N T E L L I G E N T C O P Y W R I T I N G Matteo Sarzana digitalgonzo.it @zazzanm\n31. 31. I N T E L L I G E N T D E S I G N C O M P O S I T I O N Matteo Sarzana digitalgonzo.it @zazzanm\n32. 32. 1 . 0 0 0 S C R E E N S - 1 I D E A Matteo Sarzana digitalgonzo.it @zazzanm\n33. 33. E N D L E S S P O S S I B I L I T I E S Matteo Sarzana digitalgonzo.it @zazzanm\n34. 34. B E S T O U T C O M E Matteo Sarzana digitalgonzo.it @zazzanm\n35. 35. W H Y M A T T E R S ? Matteo Sarzana digitalgonzo.it @zazzanm\n36. 36. A C C E S S T O P R O F E S S I O N A L T O O L S Matteo Sarzana digitalgonzo.it @zazzanm\n37. 37. 
P R E V I E W A N D T E S T Matteo Sarzana digitalgonzo.it @zazzanm\n38. 38. F R A C T I O N O F C O S T Matteo Sarzana digitalgonzo.it @zazzanm\n39. 39. E V E R Y T H I N G C A N B E A I T R A N S F O R M E D Matteo Sarzana digitalgonzo.it @zazzanm\n40. 40. L O G O Matteo Sarzana digitalgonzo.it @zazzanm\n41. 41. S T O R E D E S I G N Matteo Sarzana digitalgonzo.it @zazzanm\n42. 42. A D V E R T I S I N G Matteo Sarzana digitalgonzo.it @zazzanm\n43. 43. F R O M A R T I F I C I A L I N T E L L I G E N C E T O A R T I F I C I A L I N F L U E N C E Matteo Sarzana digitalgonzo.it @zazzanm\n44. 44. 1 5 % O F T W I T T E R A N D 6 0 M I O O F F A C E B O O K A R E F A K E Matteo Sarzana digitalgonzo.it @zazzanm\n45. 45. V I R T U A L P E R F O R M E R S Matteo Sarzana digitalgonzo.it @zazzanm\n46. 46. V I R T U A L I N F L U E N C E R S Matteo Sarzana digitalgonzo.it @zazzanm\n47. 47. I S I T R E A L L Y B A D ? Matteo Sarzana digitalgonzo.it @zazzanm\n48. 48. N O , B U T … Matteo Sarzana digitalgonzo.it @zazzanm\n49. 49. D O N ’ T H I D E A R T I F I C I A L I N G R E D I E N T S Matteo Sarzana digitalgonzo.it @zazzanm\n50. 50. W H O I S T H E A U T H O R ? Matteo Sarzana digitalgonzo.it @zazzanm\n51. 51. A W O R L D O F C O - C R E A T O R S Matteo Sarzana digitalgonzo.it @zazzanm\n52. 52. D A T A = C R E A T I O N T O O L S Matteo Sarzana digitalgonzo.it @zazzanm\n53. 53. M O R E D A T A = M O R E P O S S I B I L I T I E S Matteo Sarzana digitalgonzo.it @zazzanm\n54. 54. C O N T E X T I S K E Y Matteo Sarzana digitalgonzo.it @zazzanm\n55. 55. H O W H A S I T E V O L V E D Matteo Sarzana digitalgonzo.it @zazzanm\n56. 56. F R O M H U M A N S B E I N G C O M P U T E R S Matteo Sarzana digitalgonzo.it @zazzanm\n57. 57. T O H U M A N S B E I N G H U M A N S Matteo Sarzana digitalgonzo.it @zazzanm\n58. 58. H U M A N S A B I L I T I E S C A N B E M A T C H E D Matteo Sarzana digitalgonzo.it @zazzanm\n59. 59. D E F I N E T H E D E S I R E D O U T C O M E Matteo Sarzana digitalgonzo.it @zazzanm\n60. 60. C O - C R E A T E D C H A I R Matteo Sarzana digitalgonzo.it @zazzanm\n61. 61. A L P H A G O Matteo Sarzana digitalgonzo.it @zazzanm\n62. 62. R E T I N A S C A N F O R D I A B E T E S Matteo Sarzana digitalgonzo.it @zazzanm\n63. 63. D O T A + O P E N A I Matteo Sarzana digitalgonzo.it @zazzanm\n64. 64. L I P R E A D I N G Matteo Sarzana digitalgonzo.it @zazzanm\n65. 65. A N E W K I N D O F A R T I S A N S H I P Matteo Sarzana digitalgonzo.it @zazzanm\n66. 66. H U M A N + A I Matteo Sarzana digitalgonzo.it @zazzanm\n67. 67. C R E A T I V I T Y I S A L S O A A I T H I N G Matteo Sarzana digitalgonzo.it @zazzanm\n68. 68. W H A T W I L L “ N E V E R ” C H A N G E ? Matteo Sarzana digitalgonzo.it @zazzanm\n69. 69. J U D G E M E N T Matteo Sarzana digitalgonzo.it @zazzanm\n70. 70. T H E R O L E O F B R A N D S Matteo Sarzana digitalgonzo.it @zazzanm\n71. 71. I N N O V A T I O N E N V Y Matteo Sarzana digitalgonzo.it @zazzanm\n72. 72. B R A N D S A R E C O P Y I N G E A C H O T H E R Matteo Sarzana digitalgonzo.it @zazzanm\n73. 73. H A C K A T O N S ? Matteo Sarzana digitalgonzo.it @zazzanm\n74. 74. P R O B L E M S W H I C H C A N ’ T B E S O L V E D Matteo Sarzana digitalgonzo.it @zazzanm\n75. 75. P L A Y O F F E N C E Matteo Sarzana digitalgonzo.it @zazzanm\n76. 76. D O N O T C O P Y C A T Matteo Sarzana digitalgonzo.it @zazzanm\n77. 77. S O L V E T H E R I S K Matteo Sarzana digitalgonzo.it @zazzanm\n78. 78. C U R A T E Y O U R I D E A S Matteo Sarzana digitalgonzo.it @zazzanm\n79. 79. 
R E T R O T R U S T Matteo Sarzana digitalgonzo.it @zazzanm\n80. 80. L O O K B A C K A T T H I N G S W E T R U S T E D Matteo Sarzana digitalgonzo.it @zazzanm\n81. 81. D O W N G R A D I N G Matteo Sarzana digitalgonzo.it @zazzanm\n82. 82. C L A S S I C G A M E S A N D T O Y S Matteo Sarzana digitalgonzo.it @zazzanm\n83. 83. J O H N D E E R E X A M P L E Matteo Sarzana digitalgonzo.it @zazzanm\n84. 84. O L D A N D N E W C O L L A B O R A T I O N Matteo Sarzana digitalgonzo.it @zazzanm\n85. 85. B R A N D S A R E R E S P O N S I B L E F O R E X P E R I E N C E Matteo Sarzana digitalgonzo.it @zazzanm\n86. 86. G O B E Y O N D T E C H Matteo Sarzana digitalgonzo.it @zazzanm\n87. 87. F R O M S T O R Y T E L L I N G T O E X P E R I E N C E D E S I G N Matteo Sarzana digitalgonzo.it @zazzanm\n88. 88. D E A T H O F C O P Y A N D A R T Matteo Sarzana digitalgonzo.it @zazzanm\n89. 89. C R E A T E A N E W S T R U C T U R E Matteo Sarzana digitalgonzo.it @zazzanm\n90. 90. R E I N V E N T E V E R Y Q U A R T E R Matteo Sarzana digitalgonzo.it @zazzanm\n91. 91. D O N O T S E G M E N T B Y Z I P C O D E Matteo Sarzana digitalgonzo.it @zazzanm\n92. 92. D E S I G N F O R M I N O R I T I E S Matteo Sarzana digitalgonzo.it @zazzanm\n93. 93. D E S I G N F O R D I S A B I L I T I E S Matteo Sarzana digitalgonzo.it @zazzanm\n94. 94. D E S I G N F O R N E W G R O U P S Matteo Sarzana digitalgonzo.it @zazzanm\n95. 95. B A C K S T O R Y T E L L I N G Matteo Sarzana digitalgonzo.it @zazzanm\n96. 96. F A K E S T O R I E S Matteo Sarzana digitalgonzo.it @zazzanm\n97. 97. T H E N E W S T O R E E R A Matteo Sarzana digitalgonzo.it @zazzanm\n98. 98. S T O R E S A S F L A G S H I P S Matteo Sarzana digitalgonzo.it @zazzanm\n99. 99. B U Y I N G I S N O T T H E G O A L Matteo Sarzana digitalgonzo.it @zazzanm\n100. 100. E X P E R I E N C E I S Matteo Sarzana digitalgonzo.it @zazzanm\n101. 101. A N E W S T O R E E X P E R I E N C E Matteo Sarzana digitalgonzo.it @zazzanm\n102. 102. S T O R E S A S E X P E R I E N C E C E N T R E S Matteo Sarzana digitalgonzo.it @zazzanm\n103. 103. A R E S T O R E S D E A D ? Matteo Sarzana digitalgonzo.it @zazzanm\n104. 104. A M A Z O N A C C O U N T S F O R 1 , 9 8 % O F S A L E S G L O B A L L Y Matteo Sarzana digitalgonzo.it @zazzanm\n105. 105. A M A Z O N O F F L I N E Matteo Sarzana digitalgonzo.it @zazzanm\n106. 106. S T O R E S D E C R E A S E C O S T O F A C Q U I S I T I O N Matteo Sarzana digitalgonzo.it @zazzanm\n107. 107. C H A L L E N G E S Matteo Sarzana digitalgonzo.it @zazzanm\n108. 108. S T O C K M A R K E T V S S T O R E S Matteo Sarzana digitalgonzo.it @zazzanm\n109. 109. L O W M A R G I N S Matteo Sarzana digitalgonzo.it @zazzanm\n110. 110. D E M A N D I N G S T A F F Matteo Sarzana digitalgonzo.it @zazzanm\n111. 111. T E C H N O L O G Y S U C K S Matteo Sarzana digitalgonzo.it @zazzanm\n112. 112. T R I L L I O N S O F S K U S Matteo Sarzana digitalgonzo.it @zazzanm\n113. 113. C U S T O M E R S D O N ’ T C A R E A B O U T Y O U R P R O B L E M S Matteo Sarzana digitalgonzo.it @zazzanm\n114. 114. T H E Y C A R E A B O U T : Matteo Sarzana digitalgonzo.it @zazzanm\n115. 115. L O Y A L T Y Matteo Sarzana digitalgonzo.it @zazzanm\n116. 116. E V E R Y T H I N G P E R S O N A L I S E D Matteo Sarzana digitalgonzo.it @zazzanm\n117. 117. O F F L I N E = O N L I N E Matteo Sarzana digitalgonzo.it @zazzanm\n118. 118. O M N I C H A N N E L Matteo Sarzana digitalgonzo.it @zazzanm\n119. 119. 
B R A N D I S T H E U L T I M A T E R E S P O N S I B L E Matteo Sarzana digitalgonzo.it @zazzanm\n120. 120. P R O D U C T S Matteo Sarzana digitalgonzo.it @zazzanm\n121. 121. A V A I L A B I L I T Y I S K E Y Matteo Sarzana digitalgonzo.it @zazzanm\n122. 122. A I A N D S T O R E S ? Matteo Sarzana digitalgonzo.it @zazzanm\n123. 123. A I C A N ’ T P R E D I C T T R E N D S Matteo Sarzana digitalgonzo.it @zazzanm\n124. 124. I M P O S S I B L E B U R G E R Matteo Sarzana digitalgonzo.it @zazzanm\n125. 125. A I F O R P R O M O T I O N P L A N N I N G Matteo Sarzana digitalgonzo.it @zazzanm\n126. 126. A I F O R C U S T O M E R S E G M E N T A T I O N Matteo Sarzana digitalgonzo.it @zazzanm\n127. 127. A I F O R C R M Matteo Sarzana digitalgonzo.it @zazzanm\n128. 128. A I = E V E R Y T H I N G P E R S O N A L I S E D Matteo Sarzana digitalgonzo.it @zazzanm\n129. 129. G O A L ? Matteo Sarzana digitalgonzo.it @zazzanm\n130. 130. O F F L I N E = O N L I N E + E X P E R I E N C E Matteo Sarzana digitalgonzo.it @zazzanm\n131. 131. A G E O F R O B O T S ? Matteo Sarzana digitalgonzo.it @zazzanm\n132. 132. R O B O T S R E N A I S S A N C E Matteo Sarzana digitalgonzo.it @zazzanm\n133. 133. B U I L D E R S Matteo Sarzana digitalgonzo.it @zazzanm\n134. 134. E X P L O R E R S Matteo Sarzana digitalgonzo.it @zazzanm\n135. 135. O U R E Y E S A N D E A R S Matteo Sarzana digitalgonzo.it @zazzanm\n136. 136. B O E I N G E C H O V O Y A G E R Matteo Sarzana digitalgonzo.it @zazzanm\n137. 137. R O B O T H E S P I A N S Matteo Sarzana digitalgonzo.it @zazzanm\n138. 138. R O B O T H O T E L Matteo Sarzana digitalgonzo.it @zazzanm\n139. 139. B I N A 4 8 Matteo Sarzana digitalgonzo.it @zazzanm\n140. 140. E M B R A C E R O B O T S Matteo Sarzana digitalgonzo.it @zazzanm\n141. 141. R O B O T S W I L L N O T R E P L A C E H U M A N S Matteo Sarzana digitalgonzo.it @zazzanm\n142. 142. R O B O T S W I L L E M P O W E R H U M A N I T Y Matteo Sarzana digitalgonzo.it @zazzanm\n143. 143. H O W A R E M I L L E N N I A L S ? Matteo Sarzana digitalgonzo.it @zazzanm\n144. 144. H O W D O T H E Y L I V E ? Matteo Sarzana digitalgonzo.it @zazzanm\n145. 145. L O V E R E A L T I M E F E E D B A C K Matteo Sarzana digitalgonzo.it @zazzanm\n146. 146. S P E N D M O N E Y R E S P O N S I B L Y Matteo Sarzana digitalgonzo.it @zazzanm\n147. 147. D E B I T V S C R E D I T Matteo Sarzana digitalgonzo.it @zazzanm\n148. 148. V O C A L L Y L O Y A L Matteo Sarzana digitalgonzo.it @zazzanm\n149. 149. I F B R A N D S A R E S U P P O R T I N G T H E I R C H O I C E S Matteo Sarzana digitalgonzo.it @zazzanm\n150. 150. L I V E F O R A P U R P O S E Matteo Sarzana digitalgonzo.it @zazzanm\n151. 151. W A N T T O L E A V E A L E G A C Y Matteo Sarzana digitalgonzo.it @zazzanm\n152. 152. T R U S T I S E V E R Y T H I N G Matteo Sarzana digitalgonzo.it @zazzanm\n153. 153. M I L L E N N I A L S A S C H I L D S Matteo Sarzana digitalgonzo.it @zazzanm\n154. 154. “ D O N ’ T G E T I N A C A R W I T H A S T R A N G E R ” Matteo Sarzana digitalgonzo.it @zazzanm\n155. 155. “ D O N ’ T S L E E P I N A S T R A N G E R H O U S E ” Matteo Sarzana digitalgonzo.it @zazzanm\n156. 156. “ D O N ’ T B U Y F R O M S O M E O N E Y O U D O N ’ T K N O W ” Matteo Sarzana digitalgonzo.it @zazzanm\n157. 157. C A N Y O U I M A G E A W O R L D W I T H O U T U B E R , A I R B N B A N D A M A Z O N ? Matteo Sarzana digitalgonzo.it @zazzanm\n158. 158. M I L L E N N I A L S A N D S O C I A L M E D I A Matteo Sarzana digitalgonzo.it @zazzanm\n159. 159. 
L I F E B E C O M E S C U R A T E D Matteo Sarzana digitalgonzo.it @zazzanm\n160. 160. T H E B I G G E R T H E A U D I E N C E T H E M O R E C U R A T E D T H E L I F E Matteo Sarzana digitalgonzo.it @zazzanm\n161. 161. I G V S S T O R I E S V S P R I V A T E M E S S A G I N G Matteo Sarzana digitalgonzo.it @zazzanm\n162. 162. A N O P P O R T U N I T Y F O R B R A N D S Matteo Sarzana digitalgonzo.it @zazzanm\n163. 163. C R E A T E T R U S T W H E R E I T D I D N ’ T E X I S T B E F O R E Matteo Sarzana digitalgonzo.it @zazzanm\n164. 164. F R O M M I S B E H A V I O U R T O A W A R D S Matteo Sarzana digitalgonzo.it @zazzanm\n165. 165. S H A L L W E T A L K A B O U T G E N D E R ? Matteo Sarzana digitalgonzo.it @zazzanm\n166. 166. M U D D L E D M A S C U L I N I T Y Matteo Sarzana digitalgonzo.it @zazzanm\n167. 167. W H A T D O E S I T M E A N T O B E A M A N ? Matteo Sarzana digitalgonzo.it @zazzanm\n168. 168. N O T E V E R Y O N E I N T H E S A M E B O X Matteo Sarzana digitalgonzo.it @zazzanm\n169. 169. E N C O U R A G E T H E N O N C O N F O R M I N G Matteo Sarzana digitalgonzo.it @zazzanm\n170. 170. D I V E R S I T Y A N D I N C L U S I O N D O N O T S O L V E P R O B L E M S Matteo Sarzana digitalgonzo.it @zazzanm\n171. 171. C U L T U R E D O E S Matteo Sarzana digitalgonzo.it @zazzanm\n172. 172. L A W S E X I S T S B U T N E E D T O B E E N F O R C E D Matteo Sarzana digitalgonzo.it @zazzanm\n173. 173. Y O U D O N ’ T N E E D T O B E A N A C T I V I S T T O C H A N G E T H E W O R L D Matteo Sarzana digitalgonzo.it @zazzanm\n174. 174. C H A N G E H A P P E N S S L O W L Y Matteo Sarzana digitalgonzo.it @zazzanm\n175. 175. W O R D S C A N C H A N G E T H E W O R L D Matteo Sarzana digitalgonzo.it @zazzanm\n176. 176. U B E R E X A M P L E Matteo Sarzana digitalgonzo.it @zazzanm\n177. 177. W H A T N E X T ? Matteo Sarzana digitalgonzo.it @zazzanm\n178. 178. D E F I N E Y O U R P E R S O N A L B A T T L E Matteo Sarzana digitalgonzo.it @zazzanm\n179. 179. L I V E F O R I T Matteo Sarzana digitalgonzo.it @zazzanm\n180. 180. T A L K A B O U T I T Matteo Sarzana digitalgonzo.it @zazzanm\n181. 181. B E L I E V E I N I T Matteo Sarzana digitalgonzo.it @zazzanm\n182. 182. C O R P O R A T I O N S A S W E K N O W T H E M A R E D E A D Matteo Sarzana digitalgonzo.it @zazzanm\n183. 183. A C T U A L B U S I N E S S M O D E L S A R E W R O N G Matteo Sarzana digitalgonzo.it @zazzanm\n184. 184. P E R S O N A L D A T A + A I = I N F L U E N C E Matteo Sarzana digitalgonzo.it @zazzanm\n185. 185. H O S T I L E T O E V E R Y O N E Matteo Sarzana digitalgonzo.it @zazzanm\n186. 186. N O T O N L Y F A C E B O O K Matteo Sarzana digitalgonzo.it @zazzanm\n187. 187. E N C R Y P T I O N I S N O T T H E S O L U T I O N Matteo Sarzana digitalgonzo.it @zazzanm\n188. 188. M O N E Y I S K I L L I N G I D E A S Matteo Sarzana digitalgonzo.it @zazzanm\n189. 189. O C U L U S Matteo Sarzana digitalgonzo.it @zazzanm\n190. 190. C O M P E T I T I O N I S A L W A Y S G O O D Matteo Sarzana digitalgonzo.it @zazzanm\n191. 191. B I L L G A T E S Q U O T E Matteo Sarzana digitalgonzo.it @zazzanm\n192. 192. W H E R E I S T H E A N T I T R U S T ? Matteo Sarzana digitalgonzo.it @zazzanm\n193. 193. L E A R N F R O M T H E E U Matteo Sarzana digitalgonzo.it @zazzanm\n194. 194. C A L I F O R N I A L A W Matteo Sarzana digitalgonzo.it @zazzanm\n195. 195. R E C O V E R D A M A G E S F O R U N F A I R U S E O F P E R S O N A L D A T A Matteo Sarzana digitalgonzo.it @zazzanm\n196. 196. W H A T C A N W E D O ? 
Matteo Sarzana digitalgonzo.it @zazzanm\n197. 197. P R O M O T E S T A R T U P S N O T U S I N G D A T A A S B I Z M O D E L Matteo Sarzana digitalgonzo.it @zazzanm\n198. 198. N O T R A D E O F D A T A W I T H O U T E X P L I C I T C O N S E N T Matteo Sarzana digitalgonzo.it @zazzanm\n199. 199. F O C U S O N H U M A N T E C H Matteo Sarzana digitalgonzo.it @zazzanm\n200. 200. T E C H A S E M P O W E R M E N T Matteo Sarzana digitalgonzo.it @zazzanm\n201. 201. V A L U E O F P R I V A C Y Matteo Sarzana digitalgonzo.it @zazzanm\n202. 202. A P P L E Matteo Sarzana digitalgonzo.it @zazzanm\n203. 203. G O O D O R B A D Matteo Sarzana digitalgonzo.it @zazzanm\n204. 204. A I D R I V E R C A R S Matteo Sarzana digitalgonzo.it @zazzanm\n205. 205. H U M A N S A R E T R A I N E R S Matteo Sarzana digitalgonzo.it @zazzanm\n206. 206. M O U S E A N D P A R K I N S O N Matteo Sarzana digitalgonzo.it @zazzanm\n207. 207. N O R E G U L A T I O N = F E A R Matteo Sarzana digitalgonzo.it @zazzanm\n208. 208. T H E N E W C O R P . C O Matteo Sarzana digitalgonzo.it @zazzanm\n209. 209. E M P A T H Y A S B U S I N E S S T O O L Matteo Sarzana digitalgonzo.it @zazzanm\n210. 210. T E S C O S L O W L I N E S Matteo Sarzana digitalgonzo.it @zazzanm\n211. 211. H E R B A L E S S E N C E B O T T L E S Matteo Sarzana digitalgonzo.it @zazzanm\n212. 212. J O I N P A P A G R A N D K I D S O N D E M A N D Matteo Sarzana digitalgonzo.it @zazzanm\n213. 213. A N E W B U S I N E S S M O D E L Matteo Sarzana digitalgonzo.it @zazzanm\n214. 214. E M P A T H Y > M O N E Y Matteo Sarzana digitalgonzo.it @zazzanm\n215. 215. T E C H I S N O T T H E S O L U T I O N Matteo Sarzana digitalgonzo.it @zazzanm\n216. 216. H U M A N S A R E Matteo Sarzana digitalgonzo.it @zazzanm\n217. 217. “You can only understand people if you feel them in yourself.” John Steinbeck Matteo Sarzana digitalgonzo.it @zazzanm\n218. 218. T H A N K S ! Matteo Sarzana digitalgonzo.it @zazzanm\n219. 219. W A N T T O K N O W M O R E ? G E T I N T O U C H M A T T E O S A R Z A N A @ G M A I L . C O M - @ Z A Z Z A N M D I G I T A L G O N Z O . I T Matteo Sarzana digitalgonzo.it @zazzanm" ]
[ null, "https://image.slidesharecdn.com/sxsw2019-190510164728/85/sxsw-2019-top-trends-1-320.jpg", null, "https://image.slidesharecdn.com/sxsw2019-190510164728/85/sxsw-2019-top-trends-2-320.jpg", null, "https://image.slidesharecdn.com/sxsw2019-190510164728/85/sxsw-2019-top-trends-3-320.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89835393,"math_prob":0.7940547,"size":1196,"snap":"2022-05-2022-21","text_gpt3_token_len":240,"char_repetition_ratio":0.09899329,"word_repetition_ratio":0.5027322,"special_character_ratio":0.1889632,"punctuation_ratio":0.12093023,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9896471,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-29T05:13:53Z\",\"WARC-Record-ID\":\"<urn:uuid:efd5e319-76d3-4962-8b69-8e4bf79c60e6>\",\"Content-Length\":\"571364\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3b793e13-93d0-4c7c-82b0-df6afbf40f4d>\",\"WARC-Concurrent-To\":\"<urn:uuid:38cee494-e17e-4cf8-a32b-3a6bceb7e248>\",\"WARC-IP-Address\":\"34.231.115.78\",\"WARC-Target-URI\":\"https://de2.slideshare.net/ZazzaNM/sxsw-2019-top-trends\",\"WARC-Payload-Digest\":\"sha1:B3S3Z22DDQKSVH3XPASEJNV2RQDBF4BI\",\"WARC-Block-Digest\":\"sha1:7BWHRUVG4XLNGDERDMADVMBGWCZS6NF5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320299927.25_warc_CC-MAIN-20220129032406-20220129062406-00220.warc.gz\"}"}
https://latex.org/forum/viewtopic.php?p=75481
[ "## LaTeX forum ⇒ General ⇒ musixtex | Formatting for Notes Topic is solved\n\nLaTeX specific issues not fitting into one of the other forums of this category.\nLiben\nPosts: 5\nJoined: Tue Jan 01, 2013 6:10 pm\n\n###", null, "musixtex | Formatting for Notes\n\nHello,\n\nI am doing some work in LaTeX and I am using package musixtex but I don't know how to write some commands. This is model of notes that I need to transfer into LaTeX.", null, "wanted.jpg (85.91 KiB) Viewed 10017 times\n\nbut there is five problems that I need to solve.\n\n1. For the \"align\" of notes I need same length.\n2. The number at the beginning of notes in red circle.\n3. The underscore behind the text in green circle.\n4. The symbol at the end of notes.\n5. In the yellow rectangle I don't know how can I put \"bar\" symbol between two notes.\n\nfor now I have this:", null, "have.jpg (110.7 KiB) Viewed 10017 times\n\nI found some solution but it doesn´t work so I hope that somebody could help me with this.\nbegin{music}\\generalsignature{2}\\def\\nbinstruments{1} \\debutextrait\\def\\writebarno{\\llap{\\tenbf\\the\\barno\\barnoadd}}%\\def\\raisebarno{2\\internote}%\\def\\shiftbarno{1.3\\Interligne}%\\NOtes\\zsong{Ra - }\\qu f\\enotes\\NOtes\\zsong{dosť}\\qu f\\enotes\\NOtes\\zsong{krás - }\\qu g\\enotes\\NOtes\\zsong{na }\\qu h\\enotes\\barre\\NOTes\\zsong{is - }\\qu h\\enotes\\NOTes\\zsong{kra }\\qu g\\enotes\\NOTes\\zsong{bo - }\\qu f\\enotes\\NOTes\\zsong{hov }\\qu e\\enotes\\barre\\NOtes\\zsong{E - }\\qu d\\enotes\\NOtes\\zsong{ly - }\\qu d\\enotes\\NOtes\\zsong{zej - }\\qu e\\enotes\\NOtes\\zsong{ská }\\qu f\\enotes\\barre\\NOTes\\zsong{dcé - }\\qu f\\enotes\\NOTes\\zsong{ra }\\cu e\\enotes\\NOTes\\zsong{ty, }\\hu e\\enotes\\barre\\NOtes\\zsong{o - }\\qu f\\enotes\\NOtes\\zsong{má - }\\qu f\\enotes\\NOtes\\zsong{me - }\\qu g\\enotes\\NOtes\\zsong{ní }\\qu h\\enotes\\finextrait.\\debutextrait...\n\nThank you.\nLast edited by cgnieder on Wed Jan 02, 2013 9:53 pm, edited 1 time in total.\n\nTags:\n\ncgnieder\nSite Moderator\nPosts: 1988\nJoined: Sat Apr 16, 2011 7:27 pm\nHi Liben,\n\nwelcome to the LaTeX-community!\n\n• bar numbers only at the beginning of a line can be achieved by calling \\systemnumbers\n• lyrics are best set up loading and using the musixlyr extension; my example below should give you an idea how to use it. 
You'll see that in the lyrics there are - and _ used to indicate where a word must be broken or be extended to the next note.\n• one should delete the aux files ending mx1 and mx2 before finally typesetting the whole piece and then run pdflatex, musixflx and pdflatex on the file again to get proper alignment.\n\n\\documentclass{article}\\usepackage[utf8]{inputenc}\\usepackage[T1]{fontenc}\\usepackage{musixtex}\\input{musixlyr} \\begin{document}  \\begin{music}\\setlength\\parindent{0pt}%\\generalsignature{2}%\\renewcommand*\\writebarno{\\textit{\\the\\barno}}%\\systemnumbers\\setlyrics{text}{% Ra-dosť krás-na isk-ra bo-hov E-ly-zej-ská dcé-ra ty, o-má-me-ní a_ sim-ple test to sho-ow}%\\assignlyrics1{text}%\\startpiece\\NOtes\\qu{ffgh}\\enotes\\barre\\NOTes\\qu{hgfe}\\enotes\\barre\\NOtes\\qu{ddef}\\enotes\\barre\\NOTes\\qup f\\cu e\\hu e\\enotes\\barre\\NOtes\\qu{ffgh}\\enotes\\barre\\NOtes\\qu{cdef}\\enotes\\barre\\NOtes\\qu{ghij}\\enotes\\endpiece\\end{music} \\end{document}", null, "musixtex.png (17.74 KiB) Viewed 10017 times\n\nRegards\nClemens\n------------------------------\nchemmacros · chemformula · leadsheets · xsim\n\nLiben\nPosts: 5\nJoined: Tue Jan 01, 2013 6:10 pm\nThank you for your help but I have another problem. I rewrote my old code into LaTeX by using MusiXTeX package but I don´t know how can I write some notes or text of song to separated note line. It means that these two lines", null, "15582906-note.jpg (25.24 KiB) Viewed 9971 times\n\nI need merge into one line.\n\nLast edited by localghost on Mon Jan 14, 2013 9:51 pm, edited 1 time in total.\nReason: Preferably no external links (see Board Rules). Attachments go onto the forum server where possible.\n\ncgnieder\nSite Moderator\nPosts: 1988\nJoined: Sat Apr 16, 2011 7:27 pm\nCan you please post a", null, "minimal working example, i.e., some code starting with \\documentclass and ending with \\end{document} that is compilable? Maybe you could simply post the code that you used to produce the image you posted?\n\nRegards\nClemens\n------------------------------\nchemmacros · chemformula · leadsheets · xsim\n\nLiben\nPosts: 5\nJoined: Tue Jan 01, 2013 6:10 pm\nsorry, I forgot\n\n\\documentclass{article}\\usepackage[utf8]{inputenc}\\usepackage[T1]{fontenc}\\usepackage{musixtex}\\input{musixlyr} \\begin{document}  \\begin{music}\\setlength\\parindent{0pt}\\generalsignature{2}\\renewcommand*\\writebarno{\\textit{\\the\\barno}}\\systemnumbers\\setlyrics{text}{ Ra-dosť krás-na isk-ra bo-hov E-ly-zej-ská dcé-ra ty, o-má-me-ní }\\assignlyrics1{text}\\startpiece\\NOtes\\qu{ffgh}\\enotes\\barre\\NOTes\\qu{hgfe}\\enotes\\barre\\NOtes\\qu{ddef}\\enotes\\barre\\NOTes\\qup f\\cu e\\hu e\\enotes\\barre\\NOtes\\qu{ffgh}\\enotes\\endpiece \\end{music} \\end{document}\n\ncgnieder\nSite Moderator\nPosts: 1988\nJoined: Sat Apr 16, 2011 7:27 pm\nYour image looks as if you haven't run »musixflx« on your document or have forgotten to delete the .mx1 and .mx2 files before doing so. Otherwise it would look like this:", null, "musix1.png (13.65 KiB) Viewed 9964 times\n\nYou may have observed that the texts starts with the second instead of with the first note. The reason for this is the endofline in\n\\setlyrics{text}{ Ra-dosť krás-na isk-ra bo-hov E-ly-zej-ská dcé-ra ty, o-má-me-ní }\n\nAdding % helpd here:\n\\setlyrics{text}{% Ra-dosť krás-na isk-ra bo-hov E-ly-zej-ská dcé-ra ty, o-má-me-ní }\n\nIf you want the whole piece in one line you can use \\startextract and \\endextract instead of \\startpiece and \\stoppiece. 
Beware that then the music line exceeds into the margin without warning. You might want to use the smallest music size and smaller margins then:\n\n\\documentclass{article}\\usepackage[utf8]{inputenc}\\usepackage[T1]{fontenc} % show page dimensions:\\usepackage{showframe} % reduce margins:\\usepackage[left=1in,right=1in]{geometry} \\usepackage{musixtex}\\input{musixlyr} \\begin{document}  \\begin{music}\\setlength\\parindent{0pt}\\generalsignature{2}% use smallest available size:\\smallmusicsize\\renewcommand*\\writebarno{\\textit{\\the\\barno}}\\systemnumbers\\setlyrics{text}{% Ra-dosť krás-na isk-ra bo-hov E-ly-zej-ská dcé-ra ty, o-má-me-ní }\\assignlyrics1{text}\\startextract\\NOtes\\qu{ffgh}\\enotes\\barre\\NOTes\\qu{hgfe}\\enotes\\barre\\NOtes\\qu{ddef}\\enotes\\barre\\NOTes\\qup f\\cu e\\hu e\\enotes\\barre\\NOtes\\qu{ffgh}\\enotes\\endextract\\end{music} \\end{document}", null, "musix2.png (7.16 KiB) Viewed 9964 times\n\nRegards\nClemens\n------------------------------\nchemmacros · chemformula · leadsheets · xsim\n\nLiben\nPosts: 5\nJoined: Tue Jan 01, 2013 6:10 pm\nOK, I almost got it but there are some details I need to repair.", null, "15610005-notes.jpg (87.56 KiB) Viewed 9947 times\n\n1. Align doesn´t work.\n2. I don´t want number in first line (green circle).\n3. I need to put note symbol on the place where showing red circle. I found commands\n\\raisebox{0mm}{\\qu p}\nbut result is:", null, "15610101-notes-detail.jpg (8.75 KiB) Viewed 9947 times\n\none note is missing and note over note line have small line.\n4. Next problem is with \"\\Endpiece\" symbol. I don´t know why there are two black rectangle. In the picture yellow circle.\n\\documentclass{article}\\usepackage[utf8]{inputenc}\\usepackage[T1]{fontenc}\\usepackage{musixtex}\\usepackage[left=1in,right=1in]{geometry}\\input{musixlyr} \\begin{document} \\begin{music}\\setlength\\parindent{0pt}\\generalsignature{2}\\smallmusicsize\\def\\writebarno{\\llap{\\the\\barno\\barnoadd}}\\def\\raisebarno{2\\internote}\\def\\shiftbarno{1.3\\Interligne}\\systemnumbers\\setlyrics{text}{% Ra-dosť krás-na isk-ra bo-hov E-ly-zej-ská dcé-ra ty, o-má-me-ní žia-rou oh-ňov poď-me ktvo-jej svä-to-sti. Tvo-je ča-ro zno-vu_ zvia-že to čo_ mó-da de-lí dnes všet – kým ľu-ďom brat-mi ká-že stať sa tvo-jich krí-del let tvo-je ča-ro zno-vu_ zvia-že to čo_ mó-da de-lí dnes všet – kým ľu-ďom brat-mi ká-že stať sa tvo-jich krí-del let. 
}\\assignlyrics1{text}\\startbarno\\startextract\\NOTes\\raisebox{0mm}{\\qu p}\\enotes\\NOtes\\lcharnote{p}{100}\\qu{f}\\enotes\\NOTes\\qu{gh}\\enotes\\barre\\NOTes\\qu{hgfe}\\enotes\\barre\\NOtes\\qu{ddef}\\enotes\\barre\\NOTes\\qup f\\cu e\\hu e\\enotes\\barre\\NOtes\\qu{ffgh}\\enotes\\endextract\\vspace{5px} \\assignlyrics2{text}\\startbarno=6\\startextract\\NOtes\\qu{hgfe}\\enotes\\barre\\NOtes\\qu{ddef}\\enotes\\barre\\NOTes\\qup e\\cu d\\hu d\\enotes\\barre\\NOtes\\uptext{REF.:}\\qu{e}\\enotes\\NOtes\\qu{efd}\\enotes\\barre\\NOtes\\qu{e}\\enotes\\Notes\\Dqbu fg\\en\\NOtes\\qu{fd}\\enotes\\endextract\\vspace{5px} \\assignlyrics3{text}\\startbarno=11\\startextract\\NOtes\\qu{e}\\enotes\\Notes\\Dqbu fg\\en\\NOtes\\qu{fe}\\enotes\\barre\\NOTes\\qu{dea}\\enotes\\NOTes\\isslurd0e\\qu f\\enotes\\barre\\NOTes\\tsslur0e\\qu f\\enotes\\NOTes\\qu {fgh}\\enotes\\barre\\NOtes\\qu{hgfe}\\enotes\\barre\\NOtes\\qu{ddef}\\enotes\\endextract\\vspace{5px} \\assignlyrics4{text}\\startbarno=16\\startextract\\NOTes\\qup e\\cu d\\hu d\\enotes\\barre\\NOtes\\uptext{REF.:}\\qu{e}\\enotes\\NOtes\\qu{efd}\\enotes\\barre\\NOtes\\qu{e}\\enotes\\Notes\\Dqbu fg\\en\\NOtes\\qu{fd}\\enotes\\barre\\NOtes\\qu{e}\\enotes\\Notes\\Dqbu fg\\en\\NOtes\\qu{fe}\\enotes\\endextract\\vspace{5px} \\assignlyrics5{text}\\startbarno=20\\startextract\\NOTes\\qu{dea}\\enotes\\NOTes\\isslurd0e\\qu f\\enotes\\barre\\NOTes\\tsslur0e\\qu f\\enotes\\NOTes\\qu {fgh}\\enotes\\barre\\NOtes\\qu{hgfe}\\enotes\\barre\\NOtes\\qu{ddef}\\enotes\\barre\\NOTes\\qup e\\cu d\\hu d\\en\\Endpiece\\endextract \\end{music} \\end{document}\nLast edited by localghost on Tue Jan 15, 2013 6:59 pm, edited 1 time in total.\nReason: Preferably no external links (see Board Rules). Attachments go onto the forum server where possible.\n\ncgnieder\nSite Moderator\nPosts: 1988\nJoined: Sat Apr 16, 2011 7:27 pm\nOk, let's make this a little bit like a tutorial, just because I'm in the mood to.", null, "First of all: as I understand it you're trying to typeset a whole piece and not a series of extracts. For this case \\startextract and \\endextract are the wrong choice. The whole piece should be placed inside \\startpiece and \\endpiece (or \\stoppiece or the uppercase variants \\Endpiece or \\Stoppiece for the double bar that terminates a piece). (Using more that one of the ending commands will double the bar lines which is what causes your problem at the end of the piece.) The breaking into lines should be left to TeX or more precisely the program musixflx. Most of your problems will be solved following this rationale.\n\nCreating a piece normally has the following routine. Suppose your main file is called mycoolmusic.tex. Now, after typing the piece you run pdflatex mycoolmusic as you would normally do. Additional to the usual mycoolmusic.aux and mycoolmusic.log files a file named mycoolmusic.mx1 is created. This file serves as input for musixflx. So you now have to run musixflx mycoolmusic. This will create an additional file named mycoolmusic.mx2. This serves as help file for the next latex run to get the right alignment and spacing. So you need to run pdflatex mycoolmusic another time.\n\nIf you now see that you have to change details of the piece you have go through this whole routine again. In order to get it right you should delete both mycoolmusic.mx1 and mycoolmusic.mx2 before doing so or you might observe strange displacements and the like.\n\nNow, just like with LaTeX and normal text one sometimes has to help musixflx to get the line breaking right. 
Every \\bar (\\barre is an alias) is considered as potential break point. If you want to prohibit this for a certain \\bar you can use \\xbar instead. You can also insert a potential break point without creating a bar line with \\zbar. If you want to force a line break you can use \\alaligne (as equivalent to \\bar) or \\zalaligne (as equivalent to \\zbar).\n\nThe next problem: I guess you want to insert some tempo information above the first bar. As you have noticed \\qu{<pitch>} won't help here. Raising it with a box will at best lead to strange effects. Specifically \\qu{p} creates a quarter note with a stem pointing up at pitch p. Pitch p corresponds to b'' which is exactly what you're getting. I'll present a better solution later using a combination of \\metron, \\Uptext and \\qu. A little more on \\qu{<pitch>} first. There are to siblings, \\ql{<pitch>} which creates a quarter note with a lower stem and \\qa{<pitch>} which creates a quarter note with automatic stem placement.\n\nThe \\qu in \\metron below unfortunately is not hidden from musixlyr. In order to get it ignored we have to remove the % I suggested in my last posting. So at the beginning we'll place something like\n\\notes\\Uptext{\\metron{\\qu}{100}}\\en\n\nHiding the system bar number at the beginning can be done via a conditional. In the following code I test if the number is 1. It will only be printed if otherwise:\n\\def\\thebarno{\\ifnum\\barno=1\\relax\\else\\the\\barno\\fi}\\def\\writebarno{\\llap{\\thebarno\\barnoadd}}\n\nNow, - at last - let's put everything together (to be compiled twice with one run of musixflx in between):\n\n\\documentclass{article}\\usepackage[utf8]{inputenc}\\usepackage[T1]{fontenc}\\usepackage{musixtex}\\usepackage[left=1in,right=1in]{geometry}\\input{musixlyr} \\begin{document} \\begin{music}% general settings:\\setlength\\parindent{0pt}\\generalsignature{2}% more vertical space above of staffs, default is 3\\Interligne:\\stafftopmarg=5\\Interligne\\smallmusicsize% hide bar number if bar number is 1, use systemnumbers:\\def\\thebarno{\\ifnum\\barno=1\\relax\\else\\the\\barno\\fi}\\def\\writebarno{\\llap{\\thebarno\\barnoadd}}\\def\\raisebarno{2\\internote}\\def\\shiftbarno{1.3\\Interligne}\\systemnumbers% lyrics:\\setlyrics{text}{ Ra-dosť krás-na isk-ra bo-hov E-ly-zej-ská dcé-ra ty, o-má-me-ní žia-rou oh-ňov poď-me ktvo-jej svä-to-sti. Tvo-je ča-ro zno-vu_ zvia-že to čo_ mó-da de-lí dnes všet – kým ľu-ďom brat-mi ká-že stať sa tvo-jich krí-del let tvo-je ča-ro zno-vu_ zvia-že to čo_ mó-da de-lí dnes všet – kým ľu-ďom brat-mi ká-že stať sa tvo-jich krí-del let. 
}\\assignlyrics1{text}% the actual piece:\\startpiece\\notes\\Uptext{\\metron{\\qu}{100}}\\en\\NOTes\\qu{ffgh}\\enotes\\bar\\NOTes\\qu{hgfe}\\enotes\\bar\\NOtes\\qu{ddef}\\enotes\\bar\\NOTes\\qup f\\cu e\\hu e\\enotes\\bar\\NOtes\\qu{ffgh}\\enotes\\bar\\NOtes\\qu{hgfe}\\enotes\\bar\\NOtes\\qu{ddef}\\enotes\\bar\\NOTes\\qup e\\cu d\\hu d\\enotes% get a double bar line to indicate that a new part starts% and force line break:\\setdoublebar\\alaligne\\NOtes\\uptext{REF.:}\\qu{e}\\enotes\\NOtes\\qu{efd}\\enotes\\bar\\NOtes\\qu{e}\\enotes\\Notes\\Dqbu fg\\en\\NOtes\\qu{fd}\\enotes\\bar\\NOtes\\qu{e}\\enotes\\Notes\\Dqbu fg\\en\\NOtes\\qu{fe}\\enotes\\bar\\NOTes\\qu{dea}\\enotes\\NOTes\\isslurd0e\\qu f\\enotes\\bar\\NOTes\\tsslur0e\\qu f\\enotes\\NOTes\\qu {fgh}\\enotes\\bar\\NOtes\\qu{hgfe}\\enotes\\bar\\NOtes\\qu{ddef}\\enotes\\bar\\NOTes\\qup e\\cu d\\hu d\\enotes\\setdoublebar\\alaligne\\NOtes\\uptext{REF.:}\\qu{e}\\enotes\\NOtes\\qu{efd}\\enotes\\bar\\NOtes\\qu{e}\\enotes\\Notes\\Dqbu fg\\en\\NOtes\\qu{fd}\\enotes\\bar\\NOtes\\qu{e}\\enotes\\Notes\\Dqbu fg\\en\\NOtes\\qu{fe}\\enotes\\bar\\NOTes\\qu{dea}\\enotes\\NOTes\\isslurd0e\\qu f\\enotes\\bar\\NOTes\\tsslur0e\\qu f\\enotes\\NOTes\\qu {fgh}\\enotes\\bar\\NOtes\\qu{hgfe}\\enotes\\bar\\NOtes\\qu{ddef}\\enotes\\bar\\NOTes\\qup e\\cu d\\hu d\\en\\Endpiece\\end{music} \\end{document}", null, "musixtexpiece.png (32.87 KiB) Viewed 9928 times\n\nLast but not least: it really is worth reading through the whole documentation of musixtex. Admittedly: it is rather long. Also, it is in English which is not perfect for us non-native speakers. And third, since musixtex is a generic package the syntax often is more plainTeX- than LaTeX-like. Nevertheless it pays off!\n\nRegards\nClemens\n------------------------------\nchemmacros · chemformula · leadsheets · xsim\n\nLiben\nPosts: 5\nJoined: Tue Jan 01, 2013 6:10 pm\nFinally, I have it. Big thanks cgnieder. I don´t understood how to create .mx2 file but I download \"musixflx\" and run .mx1 file with this musixflx and .mx2 file was created. Then when I start .tex file all was align like I need", null, "I have to make some details like title,autor but it wouldn´t be a problem. I read many pdf and another literature about musixtex and I found some useful commands which could helped me but I didn´t know how to used them so they didn´t worked and very often literature was very extensive for me as a beginner of Latex", null, "Thank you for your time and help.\n\ncgnieder\nSite Moderator\nPosts: 1988\nJoined: Sat Apr 16, 2011 7:27 pm\nLiben wrote:Finally, I have it. Big thanks cgnieder.\n\nYou're welcome!\n\nLiben wrote:I don´t understood how to create .mx2 file but I download \"musixflx\" and run .mx1 file with this musixflx and .mx2 file was created.\n\nI don't know which TeX distribution you're using but musixflx is part of TeX Live and of MiKTeX so I assumed you must have it installed already...\n\nLiben wrote:I read many pdf and another literature about musixtex and I found some useful commands which could helped me but I didn´t know how to used them so they didn´t worked and very often literature was very extensive for me as a beginner of Latex", null, "Thank you for your time and help.\n\nWell, at the beginning TeX and LaTeX and all the details can be quite confusing and it takes quite some time to learn good practices and the like. Just ask again if you have another question.\n\nRegards\nClemens\n------------------------------\nchemmacros · chemformula · leadsheets · xsim" ]
[ null, "https://latex.org/forum/images/icons/misc/orangeflag.gif", null, "https://latex.org/forum/download/file.php", null, "https://latex.org/forum/download/file.php", null, "https://latex.org/forum/download/file.php", null, "https://latex.org/forum/download/file.php", null, "http://latex-community.org/forum/images/icons/smile/info.gif", null, "https://latex.org/forum/download/file.php", null, "https://latex.org/forum/download/file.php", null, "https://latex.org/forum/download/file.php", null, "https://latex.org/forum/download/file.php", null, "https://latex.org/forum/images/smilies/icon_e_smile.gif", null, "https://latex.org/forum/download/file.php", null, "https://latex.org/forum/images/smilies/icon_e_smile.gif", null, "https://latex.org/forum/images/smilies/icon_e_wink.gif", null, "https://latex.org/forum/images/smilies/icon_e_wink.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6088656,"math_prob":0.60669345,"size":1588,"snap":"2019-51-2020-05","text_gpt3_token_len":613,"char_repetition_ratio":0.27462122,"word_repetition_ratio":0.0,"special_character_ratio":0.31234258,"punctuation_ratio":0.0652921,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9561211,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-06T08:20:32Z\",\"WARC-Record-ID\":\"<urn:uuid:bf993604-6e68-49c8-a736-2d2ddb957859>\",\"Content-Length\":\"211886\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:dd3ab011-6d64-489a-b677-1d876c11ce3e>\",\"WARC-Concurrent-To\":\"<urn:uuid:e2e4c964-fa89-4fa1-9b58-ecb5c4f5fe4a>\",\"WARC-IP-Address\":\"78.46.26.59\",\"WARC-Target-URI\":\"https://latex.org/forum/viewtopic.php?p=75481\",\"WARC-Payload-Digest\":\"sha1:GKKKE736VMHBY5RGK3C7EHUPTXABX3FC\",\"WARC-Block-Digest\":\"sha1:AP44D63CWH7GDUUVY2ALIM44AZQJAXWP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540486979.4_warc_CC-MAIN-20191206073120-20191206101120-00514.warc.gz\"}"}
https://math.stackexchange.com/questions/2170576/v-finite-dimensional-vector-space-and-isomorphic-to-mathbbrn/2170600#2170600
[ "# $V$ finite-dimensional vector space and isomorphic to $\\mathbb{R}^n$?\n\nIf $V$ is a finite-dimensional vector space, does it mean that $V$ is also isomorphic to $\\mathbb{R}^n$ for some $n$? I am having a hard time trying to picture this. I was wondering if someone could explain this to me.\n\n• Just map the finite basis set to any basis set of $\\mathbb{R}^n$ for some $n$ and extend linearly to whole space. This map then should be an isomorphism. we can do this becase we know the space is finite dimensional. Mar 3 '17 at 18:46\n• If it's a finite-dimensional real vector space, then yes. Mar 3 '17 at 18:46\n• The answer is no. A simple counter example is the vector space $\\mathbb{F}_2^n$ where $\\mathbb{F}_2 = \\mathbb{Z}/2\\mathbb{Z}$. Mar 3 '17 at 19:14\n\nLet $V$ have dimension $n$ over $\\Bbb R$, say with basis $\\{v_1,\\dots,v_n\\}$. Define a map $f:V\\to\\Bbb R^n$ by\n$$f(a_1v_1+\\dots+a_nv_n)=(a_1,\\dots,a_n).$$" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.83365005,"math_prob":0.9999182,"size":1040,"snap":"2021-43-2021-49","text_gpt3_token_len":340,"char_repetition_ratio":0.112934366,"word_repetition_ratio":0.0124223605,"special_character_ratio":0.32307693,"punctuation_ratio":0.110091746,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000004,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-28T15:13:37Z\",\"WARC-Record-ID\":\"<urn:uuid:a5540224-e135-4b26-81a7-a21de9227064>\",\"Content-Length\":\"169729\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cad5d649-9443-4313-a3bd-17b0c66364d4>\",\"WARC-Concurrent-To\":\"<urn:uuid:01554a26-6a27-4dcf-ad58-83012424d334>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/2170576/v-finite-dimensional-vector-space-and-isomorphic-to-mathbbrn/2170600#2170600\",\"WARC-Payload-Digest\":\"sha1:2J52D7P4VYUS6OUYHZTWKRNNC2UR5PHB\",\"WARC-Block-Digest\":\"sha1:XNK5SQLT4EJBITUZUGD7QXXWBUZ6UV4I\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323588341.58_warc_CC-MAIN-20211028131628-20211028161628-00032.warc.gz\"}"}
https://www.raymaps.com/index.php/how-to-find-point-of-intersection-of-two-lines/
[ "# How to Find Point of Intersection of Two Lines\n\nFinding the point of intersection of two lines has many important application such as in Ray-Tracing Simulation.  Two lines always intersect at some point unless they are absolutely parallel, like the rails of a railway track. We start with writing the equations of the two lines in slope-intercept form.\n\ny1=b1+m1*x1\n\ny2=b2+m2*x2", null, "Here m1 and m2 are the slopes of the two lines and b1 and b2 are their y-intercepts. At the point of intesection y1=y2, so we have.\n\nb1+m1*x1=b2+m2*x2\n\nBut at  the point of intersection x1=x2 as well, so replacing x1 and x2 with x we have.\n\nb1+m1*x=b2+m2*x\n\nor\n\nb1-b2=-x*(m1-m2)\n\nor\n\nx=-(b1-b2)/(m1-m2)\n\nOnce the x-component of the point of intersection is found we can easily find the y-component by substituting x in any of the two line equations above.\n\ny=b1+m1*x\n\nIn future posts we would like to discuss the cases of intersection of two surfaces and the intersection of two volumes.", null, "#### Author: Yasir Ahmed (aka John)\n\nMore than 20 years of experience in various organizations in Pakistan, USA and Europe. Worked as Research Assistant within Mobile and Portable Radio Group (MPRG) of Virginia Tech and was one of the first researchers to propose Space Time Block Codes for eight transmit antennas. The collaboration with MPRG continued even after graduating with an MSEE degree and has resulted in 12 research publications and a book on Wireless Communications. Worked for Qualcomm USA as an Engineer with the key role of performance and conformance testing of UMTS modems. Qualcomm is the inventor of CDMA technology and owns patents critical to the 5G and 4G standards.\n\n2.67 avg. rating (60% score) - 3 votes" ]
[ null, "http://www.raymaps.com/wp-content/uploads/2016/09/straight-lines.jpg", null, "https://secure.gravatar.com/avatar/8f5c607215ecef1fc06a2dfa3d0c4dbf", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9463769,"math_prob":0.939875,"size":1588,"snap":"2023-14-2023-23","text_gpt3_token_len":394,"char_repetition_ratio":0.13699494,"word_repetition_ratio":0.0,"special_character_ratio":0.23236775,"punctuation_ratio":0.05732484,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9677195,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,7,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-24T02:43:13Z\",\"WARC-Record-ID\":\"<urn:uuid:302fd0c0-853b-484b-895e-0da5e51b3cb8>\",\"Content-Length\":\"62579\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7764d0ec-c51e-4fdb-a94f-f8535ec2dfe6>\",\"WARC-Concurrent-To\":\"<urn:uuid:118b43f2-473f-4267-a9f1-3166df255858>\",\"WARC-IP-Address\":\"203.124.44.74\",\"WARC-Target-URI\":\"https://www.raymaps.com/index.php/how-to-find-point-of-intersection-of-two-lines/\",\"WARC-Payload-Digest\":\"sha1:2TXJTFNNZMUMBVUFBKV7DSEG2Y33CLHY\",\"WARC-Block-Digest\":\"sha1:W5P4FBFYVGACONBH5DZERQAX4ZOWOAKL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296945242.64_warc_CC-MAIN-20230324020038-20230324050038-00520.warc.gz\"}"}
https://ficheterrain.com/prices4q441255zhmme.html
[ "Home\n\n# 4/12 pitch roof calculator\n\n95% Accuracy. Simple, Easy To Read Two-Page Report. Access Your Report Now Search For Localized Results. Find It Here! Search For Information And Products With Us\n\n### Roofing Measurements Online - Fast Reports in 1 Business Da\n\n• A 4/12 is a roof slope that rises by 4 inches for every 12 inches across. This forms an angle of 18.5° between the horizontal section and the roof, and creates a gentle incline that is seen as a midpoint between a low-pitch and medium-pitch roof\n• Learn more about finding the angle of a line using our slope calculator. Standard Roof Pitches. Most roofs have a pitch in the 4:12 to 9:12 range. A pitch over 9:12 is considered a steep-slope roof, between 2:12 and 4:12 is a low-slope roof, and less than 2:12 is a flat roof\n• These roofs have a pitch less than 4:12. Conventional roofs are the easiest to construct and you can walk on them safely. They have a pitch ranging from 4:12 to 9:12. • High-pitched roofs often need extra fasteners and their pitch can be as high as 21:12. You can also calculate roof pitch even without using a roof slope calculator\n• Roof Pitch. Roof pitch is the measurement of a roof's vertical rise divided by its horizontal run. It is often compared to slope, but is not exactly the same. In the United States, a run of 12 inches (1 foot) is used, and pitch is measured as the rise of the roof over 12 inches. For instance, a 7/12 roof pitch means that the roof rises 7 inches.\n• Use our roof pitch calculator to find the pitch of your roof. Next, multiply the footprint of the roof by the multiplier below for your roof pitch to find the overall roof area. For example, a 4/12 pitch roof that is 100 square feet: 100 × 1.054 = 105.4ft 2. Roof Pitch Area Multipliers\n• Pitch Measurement Method 1. 1. On a ladder beside the roof, place the level a foot or so up the roof, hold it level, and measure from the 12-inch mark on the level's bottom, straight down to the roof. If this distance measures 4 inches, you have a 4 in 12 pitch; 8 inches and you have an 8 in 12 pitch\n• 10-12. 39.81°. 11-12. 42.51°. 12-12. 45°. This is our pitch calculator which will convert pitch to angle or angle to pitch for half degree roof slope calculations. Enter any pitch or fraction of pitch to find angle. Enter any angle or fraction of angle to find pitch\n\nRoof Pitch Calculator Results Explained. Slope - The slope of a roof is represented as X/12, where X is the number of inches in rise for every 12 inches of run.This is very useful information for many purposes, especially for roof framing - the slope, sometimes called pitch, is calibrated on speed squares.. Angle - The angle of a roof is the same as the roof's slope, except instead of being. Estimate the cost to install a new roof in a click of a button! Just plug in your house dimensions, select your roof pitch, relative roof difficulty, choice of materials, and let our Calculator do the rest. To get a 100% free roof quote, enter your zip code above and fill out a simple estimate request form on the next page Calculate Sheathing Needed. Once you have the area you can divide by the area of a sheet of plywood to find the number of sheets needed. A 4×8 sheet of plywood is 32 ft 2. So, dividing the area of the roof by 32 will give you the number of sheets needed. For example, if your roof is 1,500 ft 2 then it will take 47 sheets of plywood to sheath it Enter the diameter of the roof ridge vent gap (1-3) Finally, enter the coverage area (width) of the panels you plan to buy. 
For instance a 38 panel will have an effective coverage width of 36. The metal roof length calculator calculates the distance from peak to trim so add the length you want the roof metal to extend beyond the eave trim.", null, "### Delivering one-click aerial roof measurement reports to contractors\n\nIn roof pitch calculator, pitch value can be obtained by dividing N by 12 and divide S by the answer. To find slope divide rise by run and multiply it by 100 and for angle take tan -1 of S/N. Calculator. Formula. The rise is the distance from the top of the roof to the bottom. The run is the distance from the outside of the wall to the inside. The rise is the distance from the top of a stud wall to the peak of the roof. A roof's pitch is determined by how much it rises for every foot it runs. Thus, a moderate 6 in 12 roof pitch means the roof rises 6 inches for every 12 horizontal inches it runs. A 12 in 12 pitch is a steep, 45-degree angle roof Our roof rafter calculator tools are handy for calculating the number of rafters needed, rafter length calculator, lineal feet of rafter, board feed in ridge and sub-facia,and the total board feet in the roof. Rise and Run means that a 6/12 pitch roof has 6 of rise (vertical) for each foot of run (horizontal). Roof Pitch Calculator", null, "Use our roof pitch calculator to find the pitch of your roof. Next, find the square footage of the metal roofing panels you want to use. Measure the length and width in feet, then multiply together to find the square footage. Panels are often measured in inches, use our inches to feet conversion calculator to convert to feet Once you know your roof slope expressed as X-in-12 (rise-in-run), the roof pitch multiplier is determined by finding the square root of ( (rise/run)² + 1). Remember that the slope of the roof provides the rise and the run to be plugged into the equation. A roof pitch of 4-in-12 (4:12) has a rise of 4 and a run of 12 Roof Pitch Calculator. The roof pitch measures the steepness of your shed roof and typically takes values from 1 to 12. If your roof pitch is greater than 3 your roof is considered pitched.Read more about roof pitch at Wikipedia.. This free roof pitch calculator will take your roof width (see the picture) and rise (the height of the roof) and will calculate the roof pitch both as numeric value.\n\nWhat is the roof pitch? The roof pitch is the slope of the rafter. The pitch is commonly defined as the ratio of rise over run in the form of x/12.. The rise is the height of the roof, and the run is the horizontal span (as pictured above).. For example, if a roof has a pitch of 4/12, then for every 12 inches the building extends horizontally, it rises 4 inches Share this Calculation. Hip roof framing calculator plan diagram with full dimensions. Rafter Join Detail. Hip roof framing - hip to common to ridge join diagram. Walls (building) 40' x 20'. Eave Overhang (level) 2'. Roof 44' x 24'. Roof Angle 22 ° (Pitch 4.85:12) Overall rise above outer wall 4'- 8~5/16\n\nRafter Length using Pitch = V + (R/2)sqrt (1+P 2 ) Where, V = Overhang R = Roof Span P = Pitch. Use our online Rafter Length Calculator to quickly compute the length of the shed roof rafter by entering the measurements of overhead, length of roof span, pitch or angle. This calculator helps you make the rafters of the suitable sizes required for. Rise 1'- 4. Pitch 4:12. Angle 18.4 °. Slope 33.3 %. Slope Length 4'- 2~19/32. Area 2.67 ft². 
Enter Run (the flat, level length) then click Pitch, Angle or Rise and enter other known dimension, angle or pitch. The triangle diagram will be re-drawn to scale, with all dimensions marked. Pitch Run Scale Calculate the length of a rafter from the roof slope ratio of inches per foot and a building width measurement. The calculation includes results for hip/valley factor, slope factor and the roof slope in degrees. If you add an eaves overhang dimension, then the calculator will add the amount the rafter sticks passed the wall to the rafter length This roof shingle calculator requires four basic inputs: Roof length (measured in feet) Roof width (measured in feet) Roof pitch. Roof type (either gable or hip) Using these four inputs, you will get several basic outputs: Roof area (measured in sq ft) Number of squares. Number of bundles Does that mean 4:12 pitch? (2 foot rise - 4inches every foot). I'm confused with few things. 1. The deck span is 10 foot. But the rafters will be 12foot 2. I'll like the rafters to end about 1 foot past the headers of deck which will be at 7'6″. I feel like I'm looking at wrong information to calculate the roof pitch\n\n### Roll Data Into Estimates · 95% Accuracy Guarantee\n\nThe combination of two numbers are used to display or show the roof pitch. Two most common methods (4/12 or 4:12) are used for marking the pitch of a roof. On blue prints architects & engineers usually display the pitch of a roof in the format shown on the image where number (4) represents a rise and number (12) represents a length. This means. As a reminder, please, don't forget to add height of rafter, thickness of roofing material and ridge vent to the number that you'll get using our calculator. If your goal is to only calculate the roof height, do not enter any numbers in Wall Height field and only use Building Width and Roof Pitch fields 500 Sycamore Street P.O. Box 177 La Crescent, MN 55947 (507) 895-840 Rafter Stock Size Calculator. Building Supplies. Input the rafter span (on the flat), eave overhang and pitch of the roof. Click the button to calculate the stock size needed for this job. This calculator is to be used as an estimating tool ONLY. Shop Dimensional Lumber Measuring the Pitch. To calculate the area of your roof, first you'll need to calculate the pitch of it: First, use your measuring tape to measure 12 inches on your large level and make a mark at the 12-inch line. Next, place your ladder against your house at the gable end. Climb to the top of your roof\n\nFast Roof Measurements. Reports Delivered in 1 Business Day. Order Yours Now Answer: 4/12 roof pitch equivalents roof rises 4″ in a length of 12″. 4/12 roof pitch angle = 18.43 degrees. 4/12 roof pitch to angle 18.43 degrees. Also referred to as 4/12 roof slope, 4 on 12, 4 to 12 and 4/12 roof angle. 4/12 roof pitch to angle = 18.43 degrees equivalents. Tags. 4 on 12 roof pitch 4/12 roof pitch 4/12 roof slope Measure. It's the measure of the steepness of a roof, or its slope. Roof pitch is expressed as a ratio of the roof's vertical rise to its horizontal span, or run. The most commonly used roof pitches fall in a range between 4/12 and 9/12. Pitches lower than 4/12 have a slight angle, and they are defined as low-slope roofs\n\n### RoofScope® Aerial Roof Reports - Get Roof Pitch Measurement\n\nRoof Pitch = / 12¨. Roofing calculator is a tool which is obtainable on the web for the precise calculation of the materials needed to develop a new roof or to renovate the old roof. 
This Roofing Calculator tool is extremely useful as it saves time and cash. You do not have to search for the specialists in roofing and employ their solutions to. The best part of a roofing calculator is that it offers a free insight into the total cost of the replacement roof project on a sq ft basis. As you begin to calculate various costs, you will see that the cost is a result of the interplay of many different variables, including roof area (in terms of footage), measurement of any roof extras (like.\n\nThis online truss calculator will determine the all-in cost of your truss based on key inputs related to the pitch, width and overhang of your roof. It will use the current cost of wooden rafters based on the average price found at home improvement stores. The important point to keep in mind when you use your truss calculator is that every. Gable Roof Sheathing Calculator (Imperial) Estimate the amount of plywood or OSB needed for sheathing a roof. The calculation is based on that the Plywood or OSB (Oriented Strand Board) sheets are of a 4' x 8' size. One sheet = 32 ft². This basically calculates the number of sheets needed for a Gable roof. The square footage of a Hip roof area. 4/12 roof pitch angle = 18.43 degrees. 24 Related Question Answers Found What is the length of a common rafter? Calculate the tangent of the roof angle by dividing the roof height by the roof width. For example, if the height is 7.5 feet and the width is 15 feet,. Common Rafter Calculator - Rafter Dimensions - Plumb Cuts - Birds-mouth Dimensions + Cutting Templates - Inch. Allow for Ridge Thickness when determining Rafter Run to Outer Wall. eg: Half building width minus 1/2 ridge thickness. See Rafter Run to Outer Wall Calculator below The slope/pitch of the roof is the incline of the roof expressed as a ratio of the vertical rise to the horizontal run. This ratio is expressed as inches per foot. So a a roof that rises 4 inches in 1 foot or in 12 inches is called a 4/12 pitch or slope\n\nIf your roof is pitched, enter the length and width of the flat area covered by the roof. Roof pitch. You can enter this value either as a ratio x:12 or as an angle, whichever suits you better. Snow cover thickness. Intuitively, this is the number of inches of snow on your roof in the place where the cover is the thickest. Snow type truss count = ( (roof length * 12)/24) + 1, Rounded up to the closest whole number (for example if the result is 14.5, you need to get 15 trusses). To calculate the costs, we use the following two formulas: Including installation costs: total costs = truss count * single truss price + cost per time unit of work * duration of work The pitch usually ranges from 4:12 to 9:12. High-pitched - This type typically requires additional fasteners. The pitch can be as high as 21:12. Roof Pitch Degrees. If you'd like to know how to convert roof pitch to degrees, check out the chart below: 1-12 4.76° 2-12 9.46° 3-12 14.04° 4-12 18.43° 5-12 22.62° 6-12 26.57° 7-12 30.26° 8. Generally, such roofs are characterized by Roof pitch angles that correspond to the pitch slope of between 1/2:12 and 2:12. Low pitched ones are characterized by a pitch of less than 4:12. These are usually not easy in terms of maintenance since they have need of special materials for avoiding leaks", null, "### Find More Results - Search for Informatio\n\n1. 4/12 35 41 52* 36 45* 54* 39 50* 58* 42* 49* 62* ‡ Other pitch combinations available with these spans For Example, a 5/12 - 2/12 combination has approx. 
the same allowable span as a 6/12 - 3/12 Top Chord 2x4 2x6 2x6 2x4 2x6 2x6 2x4 2x6 2x6 2x4 2x6 2x6 Bottom Chord 2x4 2x4 2x6 2x4 2x4 2x6 2x4 2x4 2x6 2x4 2x4 2x\n2. Rise 437. Pitch 4.37:12. Angle 20°. Slope 36.4%. Slope Length 1277. Area 0.262 m². Enter Run (the flat, level length) then click Pitch, Angle or Rise and enter other known dimension, angle or pitch. The triangle diagram will be re-drawn to scale, with all dimensions marked. Angle Run Scale\n3. us ridge thickness, into the 'Wall Width' entries. (\n4. Watch this short video titled Area of a roof instead. Measuring the Roof's Pitch. The incline or pitch of a roof can be easily measured with a level, tape measure, and a pencil. The roof's pitch is the number of inches the roof rises in 12 inches. Therefore, mark off 12 inches on the level and place it down horizontally against the roof rafter\n\n### Video: Roof Pitch Calculato", null, "### Roof Pitch Calculator - Inch Calculato\n\nFor 20 degrees to roof pitch:- 1) tan of 20 degrees as tan 20° = 0.3639, this will give you the pitch of the roof, 2) multiply the pitch by 12 to find the X in the ratio X/12 such as 0.3639 ×12 = 4.36, thus, a 20 degrees angle of roof pitch is same as 4/12 or 4 in 12 slope, or pitch ratio 4:12 Example calculations: Roof is 8/12 pitch and measures at 1900 square feet 1900 x 1.2= 2280 2280 - 1900= 380 feet added for pitch If the home measures 20 X 40 and has a 7:12 roof, then to calculate the slope of the roof follow these steps: Step 1: Multiply 20 and 40 which equals 800. Step 2: Find the value of 7:12 from the roof slope multiplier table which is 1.16 Hip roof calculator. One of our users asked us to create a calculator that would help him estimate hip roof parameters, such as rafter lengths, roof rise and roof area. So, here it is. To get results you need to provide the roof base dimensions (length and width) and the roof pitch (we assume it is identical for all sides) Our roof truss calculator can be used to aid you in the purchase of your trusses by determining the quantity of trusses and lineal feet required. Connector plates are generally 16 gauge to 20 gauge depending on truss design requirements. The information provided here is not intended to replace truss drawings. Engineered truss drawings should be. Determining the Ridge Beam Height. If you know the slope (X in 12) you want to use for your roof framing design, you can use it, along with the run (R), to determine the height of the ridge beam.For example, if the slope is 4 in 12, and the run is 12 feet, the ridge beam height (M) will be 4 feet. The finished ridge beam height (Z) above the top of the wall will be (M) plus the Y Height\n\nOnline calculator produces an accurate calculation of the rafters online (calculates the sizes of rafters for the roof: the length of rafters, length overhang, the angle of the saw cut, the distance to drinking). The drawings and the size of the rafters are generated in real-time. The calculator provides online calculate the length of rafters a gable roof Hip Roof Area Calculator. Hip roof is a roof with a sharp edge or edges from the ridge to the eaves where the two sides meet. Here is the online hip roof area calculator which helps you calculate the hip roof parameters such as roof rise, common and hip rafters length and roof area based on width, height of roof base and the roof pitch (identical for all sides)\n\nSo you must roof calculate the pitch very carefully and correctly to know about the cost that you need to pay. 
If your house roof is built with multiple pitches, then square per roof must be calculated differently. 4/12 18.43° 1.0541 5/12 22.62° 1.0833 6/12 26.57° 1.1180 7/12 30.26° 1.1577. For most home styles, roof pitches fall in a range 4/12 (a moderate) slope up to 8/12 (fairly steep). Examples of extreme slopes range from 1/4 /12 (almost flat) to 12/12 (sloping down at a perfect 45-degree angle)\n\n### 2021 Roofing Calculator & Estimator Roof Area, Pitch\n\nOver time the ratio over 12″ (1 foot) was more commonly used as it was easy to reference on the framing square. The term Pitch when used over 12 became to be understood as the standard , and is commonly called the Pitch of the Roof. Just look at the Calculator entry the button is labeled pitch' not slope or incline The way pitch, also known as roof slope, is measured is rise over run. So, if your roof rises 4 inches for every 12 inches of horizontal length, then the pitch of your roof is expressed as 4:12. A ratio is the most common way of expressing roof pitch or roof slope, but degrees are also possible. A 4:12 pitch is around 18.5 degrees, but not exactly A description of what pitch is and how to manipulate the numbers when framing and planning a project. Carpentry is mostly breaking down the elements of a bui.. When building a roof, how do you calculate the length of the rafters? All you need to know are the span of the building and the slope of the roof\n\n### Roof Pitch Calculator Pitches To Angle Char\n\nThe pitch fraction represents a certain amount of vertical rise over the entire span. For example, given a roof with a rise of 4 feet and a span of 24 feet, the pitch is 1 to 6 pitch, which can be expressed as the fraction of 1/6. A 12 to 24 pitch is expressed as 1/2. The term pitch and slope are often used. Our goal is to have a huge inventory of the common trusses like 20, 24, 30, 36 &40' 4:12 pitch trusses and 10,12,14 foot 2:12 lean to trusses in stock and ready to be picked up or delivered. But for right now, we do not guarantee what is in stock. Please call for availability before arranging a delivery or pick-up Tristate Areas Best Roofing Company Delaware Chester Montgomery Buck\n\nA 4/12 roof pitch is referred to as the roof rises 4 inches in height for every 12 inches, as it measured horizontally from the edge of the roof to the centerline. The gentle slope of a 4/12 roof pitch falls on the cusp between moderate-pitch and low-pitch Your roof area is 3500 sq feet Your Roof Pitch is 6/12 pitch. Your Output. Multiplier 0.12. Feet added for pitch = Multiplier x 3500 = 420 sq feet. Total = 3500 + 20 = 3920. Total Squares = 3920 / 100 = 339.20 squares. Roof Pitch to Angle Calculator (enter pitch in the first box - calculation is automatic) Roof Span: ft. in.Roof Rise: ft. in A 4/12 roof pitch can also be expressed as 4 over 12 and when you're talking to a roofer, this is the most common way you'll hear it explained because it's easy to say in conversation. Another way to express a 4/12 pitch is 4 and 12. In this pitch, the four is your roof's rise and the twelve is the roof's run. The roof's incline increases four feet for every 12 feet of horizontal measurement Valley Roof The Valley roof option is used to calculate valleys for 90 degree intersecting roofs with equal slopes and wall plates at the same height. The heel height, overhang, and ridge and fascia boards for both roofs will be the same. 
The Blind Valley roof can be used to calculate valleys for roofs with unequal slopes 3) Enter roof pitch / slope: Roof pitch is the value of roof rise over roof run, using 12 as the base for roof run. Example: If your roof rises 5 for every 12 of run, enter 5 as roof pitch. If your roof pitch is in degrees, use our roof pitch calc to convert degrees to pitch value. 4) Select rafter spacing and width\n\n• Corkcicle Tumbler.\n• Caswell Beach real estate.\n• Annabel Lee analysis.\n• Norwegian Christmas cookies for sale.\n• Can you adopt if you have a criminal record UK.\n• Glass verandas prices.\n• Grey Peel and Stick Backsplash.\n• Connor McDavid Instagram.\n• What allows you to forward a received email to someone else.\n• When do cinemas open in Scotland.\n• Carpentry PowerPoint templates.\n• Oudtshoorn Farm accommodation.\n• Royal Isabela rentals.\n• Spoof website will stay online.\n• Redskins roster 2014.\n• 🦥sloth.\n• Exotic purple strains.\n• Trio Wedding Ring Sets Walmart.\n• Share chat tamil comedy.\n• National Geographic Elementary.\n• Corgis in CA.\n• Dce089d1g xe.\n• Double endorsement test.\n• Xbox 360 OLX Lahore.\n• Theme name for farewell party.\n• Wind Waker Treasure Charts.\n• How to delete birthdays from iPhone calendar.\n• Kansas State Women's Soccer recruits.\n• Removable Glue Dots for Balloons.\n• KOA Zion National Park.\n• Wudu sequence.\n• Endometrial cancer ultrasound findings.\n• Would you have a relationship with the first person meaning in tamil.\n• Mora whittling knife set.\n• Hazy PPDT pictures.\n• Lilim UKULELE Chords.\n• FOTS stands for.\n• Trolley for moving Furniture." ]
[ null, "https://ficheterrain.com/jfexc/L6JKLQt1q7j7UqmHx49qUgHaGT.jpg", null, "https://ficheterrain.com/jfexc/pWEDmn2Hleay89vJb0vmpQHaEc.jpg", null, "https://ficheterrain.com/jfexc/hVEwmZ-Y8eSRQ0ziZWrlWgHaDj.jpg", null, "https://ficheterrain.com/jfexc/MvK0CpnNUl0q4etfrD7HQAHaEc.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.92517173,"math_prob":0.9775247,"size":21107,"snap":"2021-43-2021-49","text_gpt3_token_len":5278,"char_repetition_ratio":0.17945316,"word_repetition_ratio":0.017100561,"special_character_ratio":0.25981903,"punctuation_ratio":0.10383847,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9609931,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-20T19:42:09Z\",\"WARC-Record-ID\":\"<urn:uuid:1a654ee9-41b3-492e-b9a0-3cb865117347>\",\"Content-Length\":\"34781\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cc263732-fbea-4dec-8c27-794561e28c92>\",\"WARC-Concurrent-To\":\"<urn:uuid:51bb7c0f-2374-408d-96d0-266655247d5b>\",\"WARC-IP-Address\":\"37.1.204.220\",\"WARC-Target-URI\":\"https://ficheterrain.com/prices4q441255zhmme.html\",\"WARC-Payload-Digest\":\"sha1:CUW3HWRERYQ3KB3J7BDLXGW4PHSGLBNB\",\"WARC-Block-Digest\":\"sha1:XSBCEFSSNLTJ6X5ADECPUE4NPETYXWN7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585348.66_warc_CC-MAIN-20211020183354-20211020213354-00222.warc.gz\"}"}
https://jp.mathworks.com/help/msblks/ref/singlemodulusprescaler.html
[ "# Single Modulus Prescaler\n\nInteger clock divider that divides frequency of input signal\n\n• Library:\n• Mixed-Signal Blockset / PLL / Building Blocks\n\n•", null, "## Description\n\nThe Single Modulus Prescaler subsystem block divides the frequency of the input signal by a tunable integer value, N, passed to the div-by port. In frequency synthesizer circuits, such as a phase-locked loop (PLL) system, these prescalers divide the VCO output frequency by an integer value. The resulting lower frequency at the prescaler output port is comparable to the reference input at the PFD block. The Single Modulus Prescaler is also termed as integer clock divider.\n\n## Ports\n\n### Input\n\nexpand all\n\nInput clock frequency, specified as a scalar. In a PLL system, the clk in port is connected to the output port of a VCO block.\n\nData Types: `double`\n\nRatio of output to input clock frequency, expressed as a scalar integer.\n\nData Types: `double`\n\n### Output\n\nexpand all\n\nOutput clock frequency, expressed as a scalar. In a PLL system, the clk out port is connected to the feedback input port of a PFD block. The output at the clk out port is a square pulse train of 1 V amplitude.\n\nData Types: `double`\n\n## Parameters\n\nexpand all\n\nSelect to enable increased buffer size during simulation. This increases the buffer size of the Logic Decision inside the Single Modulus Prescaler block. By default, this option is deselected.\n\nNumber of samples of the input buffering available during simulation, specified as a positive integer scalar. This sets the buffer size of the Logic Decision inside the Single Modulus Prescaler block.\n\nSelecting different simulation solver or sampling strategies can change the number of input samples needed to produce an accurate output sample. Set the Buffer size to a large enough value so that the input buffer contains all the input samples required.\n\n#### Dependencies\n\nThis parameter is only available when Enable increased buffer size option is selected in the Block Parameters dialog box.\n\n#### Programmatic Use\n\n• Use `get_param(gcb,'NBuffer')` to view the current value of Buffer size.\n\n• Use `set_param(gcb,'NBuffer',value)` to set Buffer size to a specific value.", null, "" ]
[ null, "https://jp.mathworks.com/help/msblks/ref/block_single_modulus_prescaler.png", null, "https://jp.mathworks.com/images/responsive/supporting/apps/doc_center/bg-trial-arrow.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.81154215,"math_prob":0.82364607,"size":787,"snap":"2020-24-2020-29","text_gpt3_token_len":178,"char_repetition_ratio":0.14048532,"word_repetition_ratio":0.016666668,"special_character_ratio":0.19822109,"punctuation_ratio":0.11029412,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9595798,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,3,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-02T21:29:46Z\",\"WARC-Record-ID\":\"<urn:uuid:aa3a40d3-1432-4c97-9749-c9715b28e8a8>\",\"Content-Length\":\"79466\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2dc6b250-c2d8-4d38-bb2f-750b46b5e909>\",\"WARC-Concurrent-To\":\"<urn:uuid:6f45de37-575a-42a0-9a5e-d77af335e944>\",\"WARC-IP-Address\":\"96.7.70.236\",\"WARC-Target-URI\":\"https://jp.mathworks.com/help/msblks/ref/singlemodulusprescaler.html\",\"WARC-Payload-Digest\":\"sha1:7OPOJY3JBOFMEW3FRA4MQOBZSVS4O7JX\",\"WARC-Block-Digest\":\"sha1:AGKXANW73SSQ6D7QVPI2IGVIATILEPE6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655880243.25_warc_CC-MAIN-20200702205206-20200702235206-00598.warc.gz\"}"}
https://cs.stackexchange.com/questions/3239/reducing-tsp-to-ham-cycle-to-vertex-cover-to-clique-to-3-cnf-sat-to-sat
[ "# Reducing TSP to HAM-CYCLE to VERTEX-COVER to CLIQUE to 3 CNF-SAT to SAT\n\nIn Cormen's Algorithms book on NP-completeness they prove various problems are NP-complete by reducing a previously proved NP-complete problem (call $K$) to current problem (call $L$). Each proof involves some clever construction which reduces all instances of $K$ to few instances of $L$. Here is the proof order they follow. CIRCUIT-SAT, SAT, 3 CNF-SAT, CLIQUE, VERTEX-COVER, HAM-CYCLE, TSP. e.g. in reducing VERTEX-COVER to HAM-CYCLE they use a widget which does the trick.\n\nAfter this previous question of mine, I think one can reduce back. i.e. one can reduce HAM-CYCLE to VERTEX-COVER problem. I tried searching web for such reductions, but most of the link return the normal reduction order. I'm interested to see if one can reduce in reverse order. i.e. TSP to HAM-CYCLE to VERTEX-COVER to CLIQUE to 3 CNF-SAT to SAT\n\nI'm looking for reverse constructive proofs. I know all of these problems belong to NP-complete hence equivalent.\n\nYou don't have to give complete proof as an answer. Proof sketches are fine too. If you can point me where these proofs are available online, that's completely fine too. I'm just trying to lean how constructions are leveraged among problems that look so different on surface. Thanks!\n\n• Well I found most reductions interesting because they are construction proofs. It's hard to think out-of-the-box constructions. So I wanted to see if reverse construction is possible. – Ankush Aug 17 '12 at 14:34\n• What have you tried? All stated problems are $\\mathsf{NP}$-Complete. Thus every problem in $\\mathsf{NP}$ can be reduced to your problems (by definition of $\\mathsf{NPC}$). Hint: Start reducing TSP to HAM-CYCLE. – Christopher Aug 17 '12 at 14:40\n• @Chris As of now nothing. I mean till yesterday I was under impression that above isn't possible. Let me start with TSP to HAM-CYCLE :) – Ankush Aug 17 '12 at 15:13\n• Note that not all NPC problems are equally hard, see e.g. weakly vs strongly NP-completeness. Therefore, not all reductions are equally simple; those from strong to weak problems have to be complex enough to prevent e.g. efficient approximations to carry over (unless P=NP). – Raphael Aug 18 '12 at 6:16\n• I'd offer you to read Garey & Johnson book, they provide iff proof for some of problems, proof techniques are not easy and you can't expect to solve them yourself in few days. – user742 Aug 18 '12 at 18:32\n\nAs mentioned in one of the comments on the question, $3SAT$ to $SAT$ is trivial, an instance of $3SAT$ is already and instance of $SAT$, so there's no work needed at all.\nTo get from $VERTEX$ $COVER$ $(VC)$ to $CLIQUE$ we can do a couple of short jumps, that'll also put another problem in the loop. $VC$ is the dual of $INDEPENDENT$ $SET$ $(IS)$, that is, we can find a vertex cover of size $k$ in a graph $G$ iff we can find an independent set of size $n-k$ in $G$. Play around with this a little and you'll see why this is true. If we then take the edge-complement graph $\\bar{G}$ of $G$, an independent set of size $t$ in $G$ is the same as a clique of size $t$ in $\\bar{G}$, so that gets us to $CLIQUE$." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.94059616,"math_prob":0.9709133,"size":1221,"snap":"2019-51-2020-05","text_gpt3_token_len":299,"char_repetition_ratio":0.121610515,"word_repetition_ratio":0.0,"special_character_ratio":0.21539721,"punctuation_ratio":0.12550607,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99662083,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-19T03:13:20Z\",\"WARC-Record-ID\":\"<urn:uuid:b0db466b-d05e-41d9-bf12-c9c3913a4842>\",\"Content-Length\":\"146140\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:877e24df-9de3-4897-9ce6-6a491cd41262>\",\"WARC-Concurrent-To\":\"<urn:uuid:1fa45850-d93b-4f3a-b16e-02194cfab9cf>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://cs.stackexchange.com/questions/3239/reducing-tsp-to-ham-cycle-to-vertex-cover-to-clique-to-3-cnf-sat-to-sat\",\"WARC-Payload-Digest\":\"sha1:F7TBVLSIXKC3PVEW5XUBXX5AEYPXVPE4\",\"WARC-Block-Digest\":\"sha1:6GIV22YGG2J76S56ZU3QKFRDOOEHJ5H6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250594101.10_warc_CC-MAIN-20200119010920-20200119034920-00045.warc.gz\"}"}
https://answers.everydaycalculation.com/add-fractions/75-50-plus-18-40
[ "Solutions by everydaycalculation.com\n\n1st number: 1 25/50, 2nd number: 18/40\n\n75/50 + 18/40 is 39/20.\n\n1. Find the least common denominator or LCM of the two denominators:\nLCM of 50 and 40 is 200\n2. For the 1st fraction, since 50 × 4 = 200,\n75/50 = 75 × 4/50 × 4 = 300/200\n3. Likewise, for the 2nd fraction, since 40 × 5 = 200,\n18/40 = 18 × 5/40 × 5 = 90/200\n300/200 + 90/200 = 300 + 90/200 = 390/200\n5. 390/200 simplified gives 39/20\n6. So, 75/50 + 18/40 = 39/20\nIn mixed form: 119/20\n\nMathStep (Works offline)", null, "Download our mobile app and learn to work with fractions in your own time:" ]
[ null, "https://answers.everydaycalculation.com/mathstep-app-icon.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.83571714,"math_prob":0.997826,"size":720,"snap":"2020-34-2020-40","text_gpt3_token_len":290,"char_repetition_ratio":0.1452514,"word_repetition_ratio":0.0,"special_character_ratio":0.53333336,"punctuation_ratio":0.09756097,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99686176,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-23T20:00:31Z\",\"WARC-Record-ID\":\"<urn:uuid:3365ed72-6810-4b2f-8b52-7ac432f42964>\",\"Content-Length\":\"7462\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3bcb9942-4d93-469a-ab0e-014458765909>\",\"WARC-Concurrent-To\":\"<urn:uuid:bf7550e7-0918-4d80-a898-60915783cab2>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/add-fractions/75-50-plus-18-40\",\"WARC-Payload-Digest\":\"sha1:GJLAQ34YGXSCA42IQRRFJJKLGKXPYZKC\",\"WARC-Block-Digest\":\"sha1:BBQ53DVRRSCTFPPTR3OQRORCUNOTFIN4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400212039.16_warc_CC-MAIN-20200923175652-20200923205652-00345.warc.gz\"}"}
https://eduzip.com/ask/question/in-classical-physics-the-force-f-acting-on-a-body-is-assumed-to-b-274052
[ "Physics\n\n# In classical physics the force F acting on a body is assumed to be the product of mass m and acceleration a.If m=2.31 kg and acceleration is $3.123 m/s^2$,then taking into account significant figures the force F should be reported as\n\n7.21 N\n\n##### SOLUTION\n$F=m \\times a$\n$=2.31 \\times 3.123=7.21413 N$\nRounding off two decimal places since one of the products has two decimal places only.\n$F=7.21 N$\n\nYou're just one step away\n\nSingle Correct Medium Published on 18th 08, 2020\nQuestions 244531\nSubjects 8\nChapters 125\nEnrolled Students 204\n\n#### Realted Questions\n\nQ1 Subjective Medium\nIn the United States, a doll house has the scale of $1:12$ of a real house (that is,each length of the doll house is $\\dfrac{1}{12}$ that of the real house) and a miniature house (a doll house to fit within a doll house) has the scale of $1:44$ of a real house. Suppose a real house has a front length of $20\\ m$, a depth of $12\\ m$, a height of $6.0\\ m$, and a standard sloped roof (vertical triangular faces on the ends) of height $3.0\\ m$ In cubic meters, what are the volumes of the corresponding\nminiature house?\n\nAsked in: Physics - Units and Measurement\n\n1 Verified Answer | Published on 18th 08, 2020\n\nQ2 Subjective Medium\n$1$ kgf= _____ (nearly)\n\nAsked in: Physics - Units and Measurement\n\n1 Verified Answer | Published on 18th 08, 2020\n\nQ3 Single Correct Medium\nA quantity $X$ is given by $X$ $={ \\epsilon }_{ 0 }L\\dfrac { \\Delta V }{ \\Delta t }$, where ${ \\epsilon}_{0 }$ is the permittivity of free space, $L$ is a length, $V$ is a potential difference and $t$ is time interval. The dimensional formula for $X$ is the same as that of :\n• A. resistance\n• B. charge\n• C. voltage\n• D. current\n\nAsked in: Physics - Units and Measurement\n\n1 Verified Answer | Published on 18th 08, 2020\n\nQ4 Single Correct Medium\nThe dimensions of $\\varepsilon _0\\mu_0$ are\n• A. $[LT^{-1}]$\n• B. $[LT^{-2}]$\n• C. $[L^2T^{-2}]$\n• D. $[L^{-2}T^2]$\n\nAsked in: Physics - Units and Measurement\n\n1 Verified Answer | Published on 18th 08, 2020\n\nQ5 Single Correct Medium\n1 $\\mathring{A}$ is equal to :\n• A. $0.1$ nm\n• B. $10^{-10}$ cm\n• C. $10^{-8} m$\n• D. $10^{-4} \\mu$.\n\nAsked in: Physics - Units and Measurement\n\n1 Verified Answer | Published on 18th 08, 2020" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.74137914,"math_prob":0.99064136,"size":1973,"snap":"2021-43-2021-49","text_gpt3_token_len":609,"char_repetition_ratio":0.117826305,"word_repetition_ratio":0.05263158,"special_character_ratio":0.3482007,"punctuation_ratio":0.11838791,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.999749,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-09T08:23:57Z\",\"WARC-Record-ID\":\"<urn:uuid:536d2c3b-3d5a-425e-bce9-9607b30a1f65>\",\"Content-Length\":\"43139\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c2d57982-0bbb-4d30-80a2-428cc90b7327>\",\"WARC-Concurrent-To\":\"<urn:uuid:741c7631-fef5-4852-8f47-ef20cb0c3cff>\",\"WARC-IP-Address\":\"178.63.16.225\",\"WARC-Target-URI\":\"https://eduzip.com/ask/question/in-classical-physics-the-force-f-acting-on-a-body-is-assumed-to-b-274052\",\"WARC-Payload-Digest\":\"sha1:KZGVMFFSPB5BBZCONZGEJ5TQWKIXJFI3\",\"WARC-Block-Digest\":\"sha1:DDUPBL4TKEUQRFGBSG3L2LXKWKGEGBBD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964363689.56_warc_CC-MAIN-20211209061259-20211209091259-00416.warc.gz\"}"}
https://chemistry.stackexchange.com/questions/56982/is-lif-a-weak-electrolyte-in-acetic-acid/56984
[ "# Is LiF a weak electrolyte in acetic acid?\n\nThe vapor pressure of acetic acid is 0.033 atm at 25°C. If 5.00g of $$\\ce{LiF}$$ is added to 100g of acetic acid, what would be the vapor pressure of the solution at 25°C?\n\nThe lowering in vapor pressure (VP) can be calculated with this equation:\n\n$${∆P = X_{solute} * P°_{solvent} * i}$$\n\n$${∆P}$$ is the change (lowering) in VP.\n\n$${X_{solute}}$$ is the mole fraction of the solute.\n\n$${P°_{solvent}}$$ is the VP of the pure solvent.\n\n$${i}$$ is the van't Hoff factor of the solute.\n\nWhat is the \"correct\" van't Hoff for this problem, from the perspective of a general chemistry student? I know that $$\\ce{LiF}$$ is not very soluble in water, and I know there are papers on lithium fluoride and acetic acid, but that's all beyond the scope of a general chemistry class.\n\nTherefore, I think it's 2, because $$\\ce{LiF}$$ is expected to be a strong electrolyte. It's ionic, and the solvent is similarly polar (acetic acid has hydrogen bonds). The solvent even resembles water in that both have hydrogen-bonding. So I would expect $$\\ce{LiF}$$ to dissolve fully in acetic acid.\n\nRemember, this is a general chemistry class, and the students aren't expected to know much beyond \"like dissolves like.\" $$\\ce{LiF}$$is polar, and so is acetic acid. So to them, $$\\ce{LiF}$$ should dissolve in acetic acid.\n\nProblem is that I'm in trouble with my supervisor since I told someone that i = 2. My supervisor is telling me that the van't Hoff factor should be 1. I looked over his work to see if he accounted for the dissolving of $$\\ce{LiF}$$when calculating the mole fraction of the solute but he didn't - he just found the mole fraction of $$\\ce{LiF}$$, not the combined mole fractions of $$\\ce{Li+}$$ and $$\\ce{F-}$$.\n\nSo, what's the deal with this problem? Is there some weird exception I'm not aware of regarding $$\\ce{LiF}$$ and acetic acid? The van't Hoff factor should be 2, correct?\n\nAccording to Acid-Base Equilibria in Glacial Acetic Acid. III. Acidity Scale. Potentiometric Determination of Dissociation Constants of Acids, Bases and Salts J. Am. Chem. Soc., 1956, 78 (13), pp 2974–2979:\n\nIt is generally agreed [references 3 and 4] that acids, bases and salts are only slightly dissociated in glacial acetic acid.\n\nWhile the article does not specifically address lithium fluoride, it has quantitative data for lithium chloride.\n\nLithium Chloride dissociates with a pK of 7.08 +/- 0.02 in acetic acid.\n\nSo considering the high concentration of LiF in the problem (5 in 100 grams), if LiF is similar to LiCl and the other salts studied, 1 rather than 2 is correct.\n\nThis knowledge is not part of a standard general chemistry curriculum, so unless a specific book or lecture provided relevant information, I wouldn't expect students to know this.\n\nFurthermore, the solubility of LiF in acetic acid is only 0.84 grams per kilogram according to Glacial Acetic Acid as a Non-aqueous Solvent for Metal Fluorides, so the problem is not realistic. (Only 0.084g of the 5.00 g would dissolve). Also, HF is a strong acid in acetic acid according to that paper, so LiF might even be an exception to the general observation that salts do not dissociate in acetic acid.\n\nAlso, keep in mind that in the liquid phase, acetic acid exists as mostly cyclic dimers, which gives the solvent some non-polar characteristics." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9460661,"math_prob":0.9748161,"size":1838,"snap":"2020-45-2020-50","text_gpt3_token_len":501,"char_repetition_ratio":0.14449291,"word_repetition_ratio":0.012698413,"special_character_ratio":0.2704026,"punctuation_ratio":0.10106383,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9923373,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-20T03:42:10Z\",\"WARC-Record-ID\":\"<urn:uuid:fcdb9af6-a651-4b28-a1ba-d409250849de>\",\"Content-Length\":\"147823\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9c31f0e8-6edd-471e-a6dc-35ea65671db5>\",\"WARC-Concurrent-To\":\"<urn:uuid:7bc56f7f-1499-4653-8769-950ae7a21640>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://chemistry.stackexchange.com/questions/56982/is-lif-a-weak-electrolyte-in-acetic-acid/56984\",\"WARC-Payload-Digest\":\"sha1:IIMDITCIWFCBETS3NM3GMKJL7UW7BMVL\",\"WARC-Block-Digest\":\"sha1:WQ76YD6JFC27NFUBFHR7Q3NYR4WODWEJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107869785.9_warc_CC-MAIN-20201020021700-20201020051700-00235.warc.gz\"}"}
http://blog.imxh.com/archives/3245
[ "## 花了一个晚上弄了个模版,把老域名用起来,弄个站\n\nUPDATE `dede_archives` SET `flag` = ‘c’ WHERE `flag` IS NULL ORDER BY rand() LIMIT 50;\n\n``` //替换随机图片 function add_randimg(\\$me){ \\$id=mt_rand(1,105); \\$me = str_replace(\"/images/defaultpic.gif\",\"/uploads/rand/a (\".\\$id.\").jpg\",\\$me); return \\$me; } ```\n\n`[field:litpic function='add_randimg(@me)'/]`\n\n``` echo \\$idlist; ```\n\n`{dede:field name='typeid' function=\"GetTopTypename(@me)\" /}`\n\n``` //获取顶级栏目名 function GetTopTypename(\\$id) { global \\$dsql; \\$row = \\$dsql->GetOne(\"SELECT typename,topid FROM #@__arctype WHERE id= \\$id\"); if (\\$row['topid'] == '0') { return \\$row['typename']; } else { \\$row1 = \\$dsql->GetOne(\"SELECT typename FROM #@__arctype WHERE id= \\$row[topid]\"); return \\$row1['typename']; } } ```" ]
[ null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.56317955,"math_prob":0.8301257,"size":1097,"snap":"2020-34-2020-40","text_gpt3_token_len":631,"char_repetition_ratio":0.08874657,"word_repetition_ratio":0.0,"special_character_ratio":0.2698268,"punctuation_ratio":0.15189873,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9714518,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-25T11:22:59Z\",\"WARC-Record-ID\":\"<urn:uuid:adbdf7c1-7b7c-4011-95c1-8113e8fb212e>\",\"Content-Length\":\"44587\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c191555c-b11b-4612-a303-7d6cf8db0d6b>\",\"WARC-Concurrent-To\":\"<urn:uuid:b6467dc3-5b16-4f57-b205-e33b2d8a8849>\",\"WARC-IP-Address\":\"45.124.112.98\",\"WARC-Target-URI\":\"http://blog.imxh.com/archives/3245\",\"WARC-Payload-Digest\":\"sha1:BZO2TTVGPM3L55RCJFLKKUNWKG7N4MSQ\",\"WARC-Block-Digest\":\"sha1:FGMRPYCHOGYHV3FKID7P566IV2GMV3LH\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400223922.43_warc_CC-MAIN-20200925084428-20200925114428-00753.warc.gz\"}"}
https://pxiaoer.blog/category/rust/cs110l/
[ "# CS110L-Week 2 Exercises: Ownership and structs\n\n## 第一部分: 所有权\n\n``````\nfn main()\n{\n\nlet mut s = String::from(\"hello\");\nlet ref1 = &s;\nlet ref2 = &ref1;\nlet ref3 = &ref2;\n\n//s = String::from(\"goodboy\"); //错误 s\n\nprintln!(\"{}\", ref3.to_uppercase());\n}``````\n``````fn drip_drop() -> String {\n\nlet s = String::from(\"hello,world\");\n\n//return &s; //错误\n\nreturn s;\n}\n``````\n``````\nlet s1 = String::from(\"hello\");\nlet mut v = Vec::new();\nv.push(s1);\n\n//let s2: String = v; //错误\n\nlet ref s2: String = v;\nprintln!(\"{}\",s2);``````\n\n# CS110L-Week 1 Exercises: Hello world\n\n## 第一部分 helloworld\n\nhelloworld\n\ncargo 提供了命令行来创建rust项目\n\ncargo new helloworld\n\n``````➜ helloworld git:(main) ✗ tree\n.\n├── Cargo.lock\n├── Cargo.toml\n├── src\n│ └── main.rs\n└── target\n├── CACHEDIR.TAG\n└── debug\n├── build\n├── deps\n├── examples\n├── helloworld\n├── helloworld.d\n└── incremental\n└── helloworld-3gmsxxterlir4\n├── s-fw1pxstg53-1leo6ld-2439fudoc3hif\n│ ├── 15ne1gcrt9rvpglq.o\n│ ├── 18abmkatn3n4l0hu.o\n│ ├── 27s29o1kcsspusdg.o\n│ ├── 2xwm4i2jxgoiz5co.o\n│ ├── 3050lnwbirtj1yb5.o\n│ ├── dep-graph.bin\n│ ├── gp126zvew3eug9.o\n│ ├── q4wkdgq1k2s2v4m.o\n│ ├── query-cache.bin\n│ └── work-products.bin\n└── s-fw1pxstg53-1leo6ld.lock``````\n\nmain.rs 自带hello,world\n\n``````fn main() {\nprintln!(\"Hello, world!\");\n}\n``````\n\nCargo.toml是包依赖\n\n``````[package]\nname = \"helloworld\"\nversion = \"0.1.0\"\nauthors = [\"Jimmy Xiang <xxg1413@gmail.com>\"]\nedition = \"2018\"\n\n# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html\n\n[dependencies]\n``````\n\n## 第二部分 Rust语法热身\n\n`````` let n:i32 = 1;\nlet n = 1;\n\n//可变类型\nlet mut n = 0;\nn = n + 1;\n\n//Rust有两种字符串: &str和String\nlet s: &str = \"hello,world\"; //只读数据段\n\nlet mut s: String = String::from(\"hello,\");\ns.push_str(\"world\");\nprintln!(\"{}\", s);\n\n//动态数组\nlet mut v: Vec<i32> = Vec::new();\nv.push(2);\nv.push(3);\n\n//固定大小数组\nlet mut arr: [i32; 4] = [0,2,4,8];\narr = -2;\nprintln!(\"{}\", arr+arr);\n\n//迭代器\nfor i in arr.iter()\n{\nprintln!(\"{}\",i);\n}\n\n//while\nlet mut sum = 0;\nlet mut i = 0;\nwhile i < 20\n{\ni += 1;\nsum += i;\n\n}\n\nprintln!(\"sum={}\",sum);\n\n//loop 它有助于编译器对变量初始化进行一些假设。\nlet mut i = 0;\nloop {\n\ni += 1;\n\nif i == 10 {\nbreak;\n}\n}\n\nprintln!(\"i={}\",i);\n\n//函数\n\nfn mysum(a: i32, b:i32) -> i32\n{\na + b //Rust是一种基于表达式的 语言,不需要分号\n\n//a + b ; 会出错\n}\n\nprintln!(\"sum={}\", mysum(1,2));``````\n\n``````use std::collections::HashSet;\n//练习\n\nfn add_n(v: Vec<i32>, n: i32) -> Vec<i32> {\n\nlet mut result: Vec<i32> = Vec::new();\n\nfor i in v.iter() {\nresult.push(i+n)\n}\n\nresult\n}\n\nfn add_n_inplace(v: &mut Vec<i32>, n: i32) {\n\nlet mut i = 0;\n\nwhile i < v.len() {\n\nv[i] = v[i] + n;\ni = i + 1;\n}\n}\n\nfn dedup(v: &mut Vec<i32>) {\n\nlet mut hs = HashSet::new();\nlet mut i = 0;\n\nwhile i < v.len() {\n\nif !hs.contains(&v[i]) {\n\nhs.insert(v[i]);\ni += 1;\n\n} else {\n\nv.remove(i);\n}\n\n}\n}\n\n#[cfg(test)]\nmod test {\nuse super::*;\n\n#[test]\n}\n\n#[test]\nlet mut v = vec!;\nassert_eq!(v, vec!);\n}\n\n#[test]\nfn test_dedup() {\nlet mut v = vec![3, 1, 0, 1, 4, 4];\ndedup(&mut v);\nassert_eq!(v, vec![3, 1, 0, 4]);\n}\n}\n``````\n\n# 笔记\n\n#### 0.上节课的练习的答案\n\n```#include <stdio.h>\n#include <stdlib.h>\n#include <assert.h>\n\n// There are at least 7 bugs relating to memory on this snippet.\n// Find them all!\n\n// Vec is short for \"vector\", a common term for a resizable array.\n// For simplicity, our vector type can only hold 
ints.\ntypedef struct {\nint* data; // Pointer to our array on the heap\nint length; // How many elements are in our array\nint capacity; // How many elements our array can hold\n} Vec;\n\nVec* vec_new() {\nVec vec; //本地变量\nvec.data = NULL;\nvec.length = 0;\nvec.capacity = 0;\nreturn &vec; //悬浮指针\n}\n\nvoid vec_push(Vec* vec, int n) {\nif (vec->length == vec->capacity) {\nint new_capacity = vec->capacity * 2;\nint* new_data = (int*) malloc(new_capacity);\nassert(new_data != NULL);\n\nfor (int i = 0; i < vec->length; ++i) {\nnew_data[i] = vec->data[i];\n}\n\nvec->data = new_data; //忘记释放内存 内存泄露\nvec->capacity = new_capacity;\n}\n\nvec->data[vec->length] = n; //指针的值改变了 n就改变了\n++vec->length;\n}\n\nvoid vec_free(Vec* vec) {\nfree(vec);\nfree(vec->data);\n}\n\nvoid main() {\nVec* vec = vec_new();\nvec_push(vec, 107);\n\nint* n = &vec->data;\nvec_push(vec, 110);\nprintf(\"%d\\n\", *n);//*n 迭代失效\n\nfree(vec->data);\nvec_free(vec);// 双重释放\n}\n```\n\n## 1.Rust编写bug的一些问题\n\nRust只是比其他语言犯错更难一些,并不代表不可以犯错。而且很多逻辑错误是无关语言的。\n\n## 资源:\n\nhttps://stanford-cs242.github.io/f19/lectures/06-2-memory-safety\n\n# CS110L-#01: Welcome to CS 110L\n\n## 笔记\n\n1.为什么使用Rust\n\n2.为什么不是用C/C++?\n\n``````➜ 01 git:(main) ✗ ./a.out\n\nEnter a string:ashdasihdddddddddddddddddddddddddddddddddddddddddddddddddddddddd\n\nString in Upper Case = ASHDASIHDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD%\n➜ 01 git:(main) ✗ ./a.out\n\nEnter a string:GgaudiasSSSSSSSSSASUGDAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA\n\n*** stack smashing detected ***: terminated\n 18336 abort ./a.out``````\n\n``````char buffer;\nint bytesToCopy = packet.length;\nif (bytesToCopy < 128) {\nstrncpy(buffer, packet.data, bytesToCopy);\n}``````\n\nC/C++的内存安全问题,也有很多人搞了一些工具来检查。主要是动态分析和静态分析,动态分析需要预测输入,静态分析主要是错误非常多。\n\n``````➜ 01 git:(main) ✗ valgrind --tool=memcheck --leak-check=full ./conver\n==2022== Memcheck, a memory error detector\n==2022== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.\n==2022== Using Valgrind-3.16.1 and LibVEX; rerun with -h for copyright info\n==2022== Command: ./conver\n==2022==\n\nEnter a string:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa\n\n*** stack smashing detected ***: terminated\n==2022==\n==2022== Process terminating with default action of signal 6 (SIGABRT)\n==2022== at 0x489C18B: raise (raise.c:51)\n==2022== by 0x487B858: abort (abort.c:79)\n==2022== by 0x48E63ED: __libc_message (libc_fatal.c:155)\n==2022== by 0x4988B49: __fortify_fail (fortify_fail.c:26)\n==2022== by 0x4988B15: __stack_chk_fail (stack_chk_fail.c:24)\n==2022== by 0x109245: main (convert.c:22)\nString in Upper Case = AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==2022==\n==2022== HEAP SUMMARY:\n==2022== in use at exit: 0 bytes in 0 blocks\n==2022== total heap usage: 2 allocs, 2 frees, 2,048 bytes allocated\n==2022==\n==2022== All heap blocks were freed -- 
no leaks are possible\n==2022==\n==2022== For lists of detected and suppressed errors, rerun with: -s\n==2022== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)\n 2022 abort valgrind --tool=memcheck --leak-check=full ./conver``````\n\n3.为什么不是用其他有GC的语言?\n\nGC 有性能问题,垃圾一般都是丢在你家里,收集垃圾的需要敲你的门来收集垃圾。\n\n• 代价昂贵\n• 具有破坏性\n• 存在非确定性\n• 排除了手动优化的可能\n\n• 用户界面程序\n• 游戏\n• 自动驾驶\n• 支付处理\n• 高频交易\n\n4.预习\n\n``````#include <stdio.h>\n#include <stdlib.h>\n#include <assert.h>\n\n// There are at least 7 bugs relating to memory on this snippet.\n// Find them all!\n\n// Vec is short for \"vector\", a common term for a resizable array.\n// For simplicity, our vector type can only hold ints.\ntypedef struct {\nint* data; // Pointer to our array on the heap\nint length; // How many elements are in our array\nint capacity; // How many elements our array can hold\n} Vec;\n\nVec* vec_new() {\nVec vec;\nvec.data = NULL;\nvec.length = 0;\nvec.capacity = 0;\nreturn &vec;\n}\n\nvoid vec_push(Vec* vec, int n) {\nif (vec->length == vec->capacity) {\nint new_capacity = vec->capacity * 2;\nint* new_data = (int*) malloc(new_capacity);\nassert(new_data != NULL);\n\nfor (int i = 0; i < vec->length; ++i) {\nnew_data[i] = vec->data[i];\n}\n\nvec->data = new_data;\nvec->capacity = new_capacity;\n}\n\nvec->data[vec->length] = n;\n++vec->length;\n}\n\nvoid vec_free(Vec* vec) {\nfree(vec);\nfree(vec->data);\n}\n\nvoid main() {\nVec* vec = vec_new();\nvec_push(vec, 107);\n\nint* n = &vec->data;\nvec_push(vec, 110);\nprintf(\"%d\\n\", *n);\n\nfree(vec->data);\nvec_free(vec);\n}``````\n\n 4278 segmentation fault ./pre\n\n7处错误:\n\n1. 在vec_new中,创建了一个局部变量vec,并返回了&vec\n2. vec.data 应该是vec->data = NULL\n3. vec_push中不要使用assert\n4. vec_push中vec->length没有检查,而且可以设置为unsigned\n5. free(vec->data)错误\n6. vec_free中应该先free(vec->data)\n7. vec_push 中 ++vec->length错误" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5838112,"math_prob":0.90953857,"size":5652,"snap":"2023-14-2023-23","text_gpt3_token_len":1916,"char_repetition_ratio":0.19086403,"word_repetition_ratio":0.39342266,"special_character_ratio":0.38411182,"punctuation_ratio":0.23813787,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9918178,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-09T03:39:06Z\",\"WARC-Record-ID\":\"<urn:uuid:cc9502fb-83c4-40c5-bdda-13469973d861>\",\"Content-Length\":\"119459\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:78e920b0-d809-40b0-9f77-ce745e782b59>\",\"WARC-Concurrent-To\":\"<urn:uuid:04154b8d-63e1-47c9-b52c-9db94cf614e0>\",\"WARC-IP-Address\":\"192.0.78.25\",\"WARC-Target-URI\":\"https://pxiaoer.blog/category/rust/cs110l/\",\"WARC-Payload-Digest\":\"sha1:HRQ2PKJHJQUQHAWPHLZMHJTPEO2L4G7Q\",\"WARC-Block-Digest\":\"sha1:3VNDBFVMX6V7UE27RX7KOKQ7ZKNJDI37\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224655247.75_warc_CC-MAIN-20230609032325-20230609062325-00158.warc.gz\"}"}
https://www.clutchprep.com/analytical-chemistry/practice-problems/148150/find-the-number-of-millimoles-of-solute-in-3-50-l-of-a-solution-that-contains-3-
[ "# Problem: Find the number of millimoles of solute in 3.50 L of a solution that contains 3.33 ppm of CuSO4.\n\n###### Problem Details\n\nFind the number of millimoles of solute in 3.50 L of a solution that contains 3.33 ppm of CuSO4.\n\nFrequently Asked Questions\n\nWhat scientific concept do you need to know in order to solve this problem?\n\nOur tutors have indicated that to solve this problem you will need to apply the Volumetric Analysis concept. You can view video lessons to learn Volumetric Analysis. Or if you need more Volumetric Analysis practice, you can also practice Volumetric Analysis practice problems.\n\nWhat is the difficulty of this problem?\n\nOur tutors rated the difficulty ofFind the number of millimoles of solute in 3.50 L of a solut...as low difficulty.\n\nHow long does this problem take to solve?\n\nOur expert Analytical Chemistry tutor, Jules took 1 minute and 36 seconds to solve this problem. You can follow their steps in the video explanation above.\n\nWhat professor is this problem relevant for?\n\nBased on our data, we think this problem is relevant for Professor Torres' class at UCF." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9224729,"math_prob":0.78565586,"size":964,"snap":"2021-04-2021-17","text_gpt3_token_len":215,"char_repetition_ratio":0.14270833,"word_repetition_ratio":0.08805031,"special_character_ratio":0.20435685,"punctuation_ratio":0.11170213,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97031516,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-20T06:12:48Z\",\"WARC-Record-ID\":\"<urn:uuid:9709b300-fc26-4b77-9780-9fa79274cb9a>\",\"Content-Length\":\"106645\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7ad73d81-3e0f-4675-b447-2a945742f02f>\",\"WARC-Concurrent-To\":\"<urn:uuid:f410ab9d-bf07-4f8b-b396-16fa6498ef0d>\",\"WARC-IP-Address\":\"18.205.129.29\",\"WARC-Target-URI\":\"https://www.clutchprep.com/analytical-chemistry/practice-problems/148150/find-the-number-of-millimoles-of-solute-in-3-50-l-of-a-solution-that-contains-3-\",\"WARC-Payload-Digest\":\"sha1:RCQEHROQNL2QDV6GZ5VL5AU662WW3BJ6\",\"WARC-Block-Digest\":\"sha1:N66OPF2NO6ZOA2SM7S3DPDMLYYFVYAGP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618039379601.74_warc_CC-MAIN-20210420060507-20210420090507-00592.warc.gz\"}"}
https://www.sqlservercentral.com/articles/monitoring-job-history-using-reporting-services
[ "", null, "# Monitoring job history using Reporting Services\n\n,\n\nSQL Server Management Studio has a built-in Job Activity monitor available under the SQL Server Agent folder in SSMS. This provides a neat dialog window for easy access to all-you-need-to know about your jobs. One can also run the T-SQL command `EXEC msdb.dbo.sp_help_jobactivity`; to get job status. These cover statistics, statuses, categories, scheduled times and in-depth results of job executions.\n\nFrom time to time, handling couple of jobs is not a problem. However, it might become a problem when the number of jobs exceeds a manageable number in terms of job execution times, scheduled runs and job outcomes. The job history monitor report is a quick solution for getting general overview of your job history in graphical outcome using all the advantages of Reporting Services (SSRS). The report was prepared for monitoring the concurrent job execution and getting full overview of current situation on job activities. It can be used by DBAs, resource monitoring people, system analysts and even middle management as part of Microsoft Sharepoint implementation.\n\nThe query underneath the report is mainly constructed from three major parts worth taking a closer look for better understanding of the report.\n\n### Part A\n\nPart A is a general query for retrieving job history from system tables msdb.dbo.sysjobhistory and msdb.dbo.sysjobs. Query returns results for current day (day of execution) and does some general string conversions and calculations.\n\n```SELECT\nh.job_id\n,j.name\n,CAST( SUBSTRING(CONVERT(VARCHAR(10),h.run_date) , 5,2) +'-'\n+ SUBSTRING(CONVERT(VARCHAR(10),h.run_date) , 7,2) +'-'\n+ SUBSTRING(CONVERT(VARCHAR(10),h.run_date),1,4) + ' ' +\n+ SUBSTRING(CONVERT(VARCHAR(10),replicate('0',6-len(h.run_time)) + CAST(h.run_time AS VARCHAR)), 1, 2) + ':'\n+ SUBSTRING(CONVERT(VARCHAR(10),replicate('0',6-len(h.run_time)) + CAST(h.run_time AS VARCHAR)), 3, 2) + ':'\n+ SUBSTRING(CONVERT(VARCHAR(10),replicate('0',6-len(h.run_time)) + CAST(h.run_time AS VARCHAR)), 5, 2)  AS SMALLDATETIME)\nAS JobStart\n,DATEADD(SECOND, CASE WHEN h.run_duration > 0 THEN (h.run_duration / 1000000) * (3600 * 24)\n+ (h.run_duration / 10000 % 100) * 3600\n+ (h.run_duration / 100 % 100) * 60\n+ (h.run_duration % 100) ELSE 0 END,CAST( SUBSTRING(CONVERT(VARCHAR(10),h.run_date) , 5,2) +'-'\n+ SUBSTRING(CONVERT(VARCHAR(10),h.run_date) , 7,2) +'-'\n+ SUBSTRING(CONVERT(VARCHAR(10),h.run_date),1,4) + ' ' +\n+ SUBSTRING(CONVERT(VARCHAR(10),replicate('0',6-len(h.run_time)) + CAST(h.run_time AS VARCHAR)), 1, 2) + ':'\n+ SUBSTRING(CONVERT(VARCHAR(10),replicate('0',6-len(h.run_time)) + CAST(h.run_time AS VARCHAR)), 3, 2) + ':'\n+ SUBSTRING(CONVERT(VARCHAR(10),replicate('0',6-len(h.run_time)) + CAST(h.run_time AS VARCHAR)), 5, 2)  AS SMALLDATETIME))\nAS JobEND\n,outcome = CASE\nWHEN h.run_status = 0 THEN 'Fail'\nWHEN h.run_status = 1 THEN 'Success'\nWHEN h.run_status = 2 THEN 'Retry'\nWHEN h.run_status = 3 THEN 'Cancel'\nWHEN h.run_status = 4 THEN 'In progress'\nEND\nFROM sysjobhistory AS h\nJOIN sysjobs AS j\non j.job_id = h.job_id\nWHERE\nh.step_id = 0\nAND j.enabled = 1\nAND CAST(SUBSTRING(CONVERT(VARCHAR(10),h.run_date) , 5,2) +'-'\n+ SUBSTRING(CONVERT(VARCHAR(10),h.run_date) , 7,2) +'-'\n+ SUBSTRING(CONVERT(VARCHAR(10),h.run_date),1,4) AS SMALLDATETIME) = CONVERT(VARCHAR(10), GETDATE(), 121)```\n\n### Part B\n\nPart B generated and prepares the time frame. For purpose of Job history monitor report, 5 minutes time (300 seconds) interval was used. 
The system table master.dbo.spt_values is used as a ready-made tally table.

```SELECT
v.number
,DATEADD(SECOND,300*v.number,DATEDIFF(dd,0,GETDATE())) AS timeInterval_FROM
FROM
master.dbo.spt_values AS v
WHERE
v.type = 'P'
AND v.number <= 288
```

### Part C

Part C generates the empty rows needed for a continuous presentation of the timeline later in SSRS. It refers back to the time frame generated in Part B and fills in all missing 5-minute intervals (or any other desired time interval), creating the actual time flow of job executions on a real time scale. The 5-minute interval is used again here to keep the query consistent.

```-- Data \"imputation\" of empty rows for all jobs.
-- To appear in SSRS as a continuous block when a job runs for more than 5 minutes.
-- timeset: the intermediate result set combining the job history (Part A) with the time frame (Part B) in the full query.
SELECT
DATEADD(SECOND,300*s.number,DATEDIFF(dd,0,GETDATE())) AS TimeInterval
,a.JobName
,a.outcome
FROM
(
SELECT
a.JobName
,a.outcome
,a.jobStart
,MIN(a.TimeInterval) AS minTI
,MAX(a.TimeInterval) AS maxTI
FROM timeset AS a
GROUP BY
a.JobName
,a.outcome
,a.jobStart
) AS a
INNER JOIN master.dbo.spt_values AS s
ON DATEADD(SECOND,300*s.number,DATEDIFF(dd,0,GETDATE())) BETWEEN a.minTI AND a.maxTI
WHERE
s.type = 'P'
AND s.number <= 288
ORDER BY TimeInterval;
GO```

### Creating report

The query is copied to SSRS and we create a general matrix in the Report designer. We show the time interval in the first column with the jobs in rows.", null, "For better readability of the matrix, some additional conditional formatting is introduced on the field [outcome]. The colours of the job outcomes correspond to the definition of the field {outcome} in the Part A select list.

### Executing report

Once the look of the report is finished, it is built and deployed to your Report Manager TargetServerURL. The report will come out like this:", null, "A 25% zoom is used intentionally to stress the graphical overview of the report. The sample picture holds a minimum of 25 different job outcomes. One can immediately see the concurrent jobs running and their approximate length (remember we used a 5-minute interval; each cell represents 5 minutes). The far-left column is the timeline, followed by all jobs (in this sample, jobs are sorted alphabetically). The colours denote the job results as prepared in Report Builder with the SWITCH command.

### Customizing query and report

A quick guide on how to customize the query: if a 1-minute interval (or any other time frame, e.g. 30 minutes) is what you need, two changes are required.

In the Part B code, at the point marked (1) in the snippet below, 300 seconds should be changed to 60 seconds (for a 1-minute interval), and at the point marked (2), 288 should be changed to 1440. Point (2) is simply the number of intervals per day: 1 day = 1440 intervals of 1 minute (1 day = 1440 minutes), while 1 day = 288 intervals of 5 minutes (288 × 5 minutes); a 10-minute interval would mean 144 intervals.
With a 1-minute interval, the Part B query becomes:

```SELECT
v.number
/* POINT (1) */    ,DATEADD(SECOND,60*v.number,DATEDIFF(dd,0,GETDATE())) AS timeInterval_FROM
FROM
master.dbo.spt_values AS v
WHERE
v.type = 'P'
/* POINT (2) */AND v.number <= 1440
```

The same time alterations are applied to the code in Part C (points (1) to (3) below):

```SELECT
/* POINT (1) */     DATEADD(SECOND,60*s.number,DATEDIFF(dd,0,GETDATE())) AS TimeInterval
,a.JobName
,a.outcome
FROM
(
SELECT
a.JobName
,a.outcome
,a.jobStart
,MIN(a.TimeInterval) AS minTI
,MAX(a.TimeInterval) AS maxTI
FROM timeset AS a
GROUP BY
a.JobName
,a.outcome
,a.jobStart
) AS a
INNER JOIN master.dbo.spt_values AS s
/* POINT (2) */    ON DATEADD(SECOND,60*s.number,DATEDIFF(dd,0,GETDATE())) BETWEEN a.minTI AND a.maxTI
WHERE
s.type = 'P'
/* POINT (3) */AND s.number <= 1440```

### Conclusion

The Job History monitor report is a simple but robust and adaptable T-SQL solution that uses the system job tables and the strengths of SSRS. It gives a general overview and makes it quick to pinpoint and fix problems in automated job executions. The T-SQL query is easily adapted to any SQL Server system: one can limit the report to only the most critical daily jobs or to weekly/monthly jobs, or alter the time intervals. The report results can be part of any scorecard or dashboard for your DBAs or even management. At the same time, the report can take advantage of Report Manager features such as zooming in and out, exporting to Excel or PDF, and sending the report as an attachment in a report subscription.

Author: Tomaz Kastrun (tomaz.kastrun@gmail.com)" ]
[ null, "https://www.sqlservercentral.com/wp-content/mu-plugins/ssc/ssc-post-thumbnails/images/technical-article.png", null, "data:image/gif;base64,R0lGODlhAQABAPAAAPLy8v///yH5BAAAAAAALAAAAAABAAEAAAICRAEAOw==", null, "data:image/gif;base64,R0lGODlhAQABAPAAAPLy8v///yH5BAAAAAAALAAAAAABAAEAAAICRAEAOw==", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6742864,"math_prob":0.9212049,"size":7955,"snap":"2022-27-2022-33","text_gpt3_token_len":2072,"char_repetition_ratio":0.13168155,"word_repetition_ratio":0.14775726,"special_character_ratio":0.27064738,"punctuation_ratio":0.17672151,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95191276,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-05T14:56:35Z\",\"WARC-Record-ID\":\"<urn:uuid:bbb2af33-2005-46d5-9fb0-16506e1ebef0>\",\"Content-Length\":\"274781\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:54046183-fecd-496d-b74b-1364c76dfa3d>\",\"WARC-Concurrent-To\":\"<urn:uuid:e22b1bc2-0d28-4f8f-b3d1-1d7eebf7b987>\",\"WARC-IP-Address\":\"34.242.253.12\",\"WARC-Target-URI\":\"https://www.sqlservercentral.com/articles/monitoring-job-history-using-reporting-services\",\"WARC-Payload-Digest\":\"sha1:BMJRPMIFADGWITUQ2UQLPYWB2QSIEBXU\",\"WARC-Block-Digest\":\"sha1:IZ2ZFTCJKHL7C2MUQTBIRSA62WG6FBND\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104585887.84_warc_CC-MAIN-20220705144321-20220705174321-00723.warc.gz\"}"}
https://www.sheetzoom.com/Tips/how-to-use-ifs-function-in-excel
[ "How To Use IFS Function In Excel", null, "IFS function is a new function added to Excel and only available in the latest version of Office (EXCEL 2016, Excel Online and latest mobile excel versions). Therefore when opening an IFS function containing excel workbook in an earlier version of excel, IFS function containing cells will be shown as #NAME? error notification and become unusable.\n\nThe basic function of this is to check pre-defined conditions and then return a corresponding value which are specified by the user.\n\nThe syntax used is explained below:\n\nIFS(logical_test1, value_if_true1, [logical_test2, value_if_true2], [logical_test3, value_if_true3],…)\n\nlogical_test1:                    The conditional argument 1, which would be evaluated as TRUE or FALSE. (Required)\n\nvalue_if_true1:                The value to be returned if the condition 1 (i.e. logical_test1) is true. This can be left empty.\n\nlogical_test2:                    The conditional argument 2, which would be evaluated as TRUE or FALSE. (Optional)\n\nvalue_if_true2:                The value to be returned if the condition 2 (i.e. logical_test2) is true. This can be left empty.\n\nNote: Up to 127 conditions can be added to this function, following the same argument style.\n\nExample #1:\n\nConsider the following data set, the IFS function has been used to assign grades for exam marks.", null, "The used syntax is explained below.\n\n=IFS(B2>74,\"A\",B2>64,\"B\",B2>49,\"C\",B2>34,\"D\",TRUE,\"F\")\n\nB2>74,\"A\" – if B2 is greater than 74, then it will return “A”\n\nB2>64,\"B\" – if B2 is greater than 64, then it will return “B”\n\nB2>49,\"C\" – if B2 is greater than 49, then it will return “C”\n\nB2>34,\"D\" – if B2 is greater than 35, then it will return “D”\n\nTRUE,\"F\") – the values below 35 will not meet any of the above conditions, so it would be considered as ‘TRUE’ and then it will return “F”", null, "Note: The same result can be obtained using the Nested IF function. But the IFS function allows to perform it in a much easier and compact way.\n\nTips:\n\n• If a logical_test is entered without specifying a return value (i.e. value_if_true), the function will return an error message “You've entered too few arguments for this function”.\n\n• If the function is unable find a ‘TRUE’ condition, then #N/A error would be returned." ]
[ null, "https://sheetzoom.blob.core.windows.net/snackbarimages/How To Use IFS Function In Excel.png", null, "https://sheetzoom.blob.core.windows.net/snackbarimages/ifsfunction1.png", null, "https://sheetzoom.blob.core.windows.net/snackbarimages/ifsfunction2.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.83915335,"math_prob":0.93176836,"size":517,"snap":"2019-43-2019-47","text_gpt3_token_len":100,"char_repetition_ratio":0.14619882,"word_repetition_ratio":0.0,"special_character_ratio":0.18955512,"punctuation_ratio":0.07608695,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98931,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,5,null,5,null,5,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-19T12:10:38Z\",\"WARC-Record-ID\":\"<urn:uuid:332fe9c3-cd19-4a17-a685-da6a00cb97ae>\",\"Content-Length\":\"39881\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f3b70627-5d68-437b-86dc-c51721f03ea1>\",\"WARC-Concurrent-To\":\"<urn:uuid:cad6c4e1-7491-49c6-a77d-1e3ebb7d65b9>\",\"WARC-IP-Address\":\"93.94.253.16\",\"WARC-Target-URI\":\"https://www.sheetzoom.com/Tips/how-to-use-ifs-function-in-excel\",\"WARC-Payload-Digest\":\"sha1:5GEHVT6FKNI3EZL7WSUVGN3J6NT6VMEW\",\"WARC-Block-Digest\":\"sha1:6A5D5NSBE45CNJUWKNH2TT6RCL43577V\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986693979.65_warc_CC-MAIN-20191019114429-20191019141929-00328.warc.gz\"}"}
https://weightle.com/bmw-z4
[ "", null, "", null, "", null, "# BMW Z4 Curb Weight in Easy-to-Read Graphs\n\n#### Choose Generation", null, "", null, "## Average weight of BMW Z4", null, "## Quick notes on BMW Z4 weight\n\n• Average weight for generation: 1452 kg (3202 lbs)\n• Difference from world average: 31 kg or 68 lbs lighter\n• Difference from world's heaviest: 2965 kg or 6538 lbs lighter\n• Load to one wheel: 363 kg (801 lbs)\n• Maximum allowable weight: 1602 kg (3532 lbs)\n• Weight to torque ratio: 4 kg per 1 Nm\n• Weight to length ratio: 3 mm per 1 kg\n• Engine capacity to weight: 1.6 cc per 1 kg\n• Pounds per 1 kW: 16 lbs/kW\n• Weightly rating: 8 / 10\n\n## 2019 BMW Z4 curb weight\n\n• 30i —1415 kg (3120 lbs)\n• 20i —1405 kg (3098 lbs)\n• M40i —1535 kg (3385 lbs)\n\n## 2019 BMW Z4 weight to consumption ratio\n\n• 30i —36 kg/mpg (79 lbs/mpg)\n• 20i —40 kg/mpg (88 lbs/mpg)\n• M40i —47 kg/mpg (104 lbs/mpg)\n\n## 2019 BMW Z4 weight to consumption ratio\n\n• 30i is 4.6% lighter than average\n• 20i is 5.3% lighter than average\n• M40i is 3.5% heavier tran average\n• Vehicle Curb weight Difference from world's smallest Weight to power ratio 0—60 mph acceleration ratio Consumption ratio\n30i 1415 kg /\n3120 lbs\n990 kg (2183 lbs) heavier 5 kg to 1 hp 277 kg/s (611 lbs/s) 236 kg/L\n(520 lbs/L)\n20i 1405 kg /\n3098 lbs\n980 kg (2161 lbs) heavier 7 kg to 1 hp 216 kg/s (476 lbs/s) 207 kg/L\n(456 lbs/L)\nM40i 1535 kg /\n3385 lbs\n1110 kg (2448 lbs) heavier 5 kg to 1 hp 357 kg/s (787 lbs/s) 216 kg/L\n(476 lbs/L)\nVehicle 30i\nCurb weight 1415 kg /\n3120 lbs\nDifference from world's smallest 990 kg (990 lbs) heavier\nWeight to power ratio 5 kg to 1 hp\n0—60 mph acceleration ratio 277 kg/s (611 lbs/s)\nConsumption ratio 236 kg/L\n(520 lbs/L)\nVehicle 20i\nCurb weight 1405 kg /\n3098 lbs\nDifference from world's smallest 980 kg (980 lbs) heavier\nWeight to power ratio 7 kg to 1 hp\n0—60 mph acceleration ratio 216 kg/s (476 lbs/s)\nConsumption ratio 207 kg/L\n(456 lbs/L)\nVehicle M40i\nCurb weight 1535 kg /\n3385 lbs\nDifference from world's smallest 1110 kg (1110 lbs) heavier\nWeight to power ratio 5 kg to 1 hp\n0—60 mph acceleration ratio 357 kg/s (787 lbs/s)\nConsumption ratio 216 kg/L\n(476 lbs/L)", null, "## Quick notes on BMW Z4 weight\n\n• Average weight for generation: 1458 kg (3215 lbs)\n• Difference from world average: 25 kg or 55 lbs lighter\n• Difference from world's heaviest: 2975 kg or 6560 lbs lighter\n• Load to one wheel: 365 kg (804 lbs)\n• Maximum allowable weight: 1608 kg (3546 lbs)\n• Weight to torque ratio: 4 kg per 1 Nm\n• Weight to length ratio: 2.9 mm per 1 kg\n• Engine capacity to weight: 1.6 cc per 1 kg\n• Pounds per 1 kW: 18 lbs/kW\n• Weightly rating: 8 / 10\n\n## 2013 BMW Z4 curb weight\n\n• 35i —1525 kg (3363 lbs)\n• 20i —1420 kg (3131 lbs)\n• 28i —1400 kg (3087 lbs)\n• 18i —1420 kg (3131 lbs)\n• 35is —1525 kg (3363 lbs)\n\n## 2013 BMW Z4 weight to consumption ratio\n\n• 35i —59 kg/mpg (130 lbs/mpg)\n• 20i —41 kg/mpg (90 lbs/mpg)\n• 28i —40 kg/mpg (88 lbs/mpg)\n• 18i —41 kg/mpg (90 lbs/mpg)\n• 35is —59 kg/mpg (130 lbs/mpg)\n\n## 2013 BMW Z4 weight to consumption ratio\n\n• 35i is 2.8% heavier tran average\n• 20i is 4.2% lighter than average\n• 28i is 5.6% lighter than average\n• 18i is 4.2% lighter than average\n• 35is is 2.8% heavier tran average\n• Vehicle Curb weight Difference from world's smallest Weight to power ratio 0—60 mph acceleration ratio Consumption ratio\n35i 1525 kg /\n3363 lbs\n1100 kg (2426 lbs) heavier 5 kg to 1 hp 318 kg/s (701 lbs/s) 169 kg/L\n(373 lbs/L)\n20i 1420 kg /\n3131 
lbs\n995 kg (2194 lbs) heavier 8 kg to 1 hp 215 kg/s (474 lbs/s) 209 kg/L\n(461 lbs/L)\n28i 1400 kg /\n3087 lbs\n975 kg (2150 lbs) heavier 6 kg to 1 hp 259 kg/s (571 lbs/s) 206 kg/L\n(454 lbs/L)\n18i 1420 kg /\n3131 lbs\n995 kg (2194 lbs) heavier 9 kg to 1 hp 184 kg/s (406 lbs/s) 209 kg/L\n(461 lbs/L)\n35is 1525 kg /\n3363 lbs\n1100 kg (2426 lbs) heavier 4 kg to 1 hp 332 kg/s (732 lbs/s) 169 kg/L\n(373 lbs/L)\nVehicle 35i\nCurb weight 1525 kg /\n3363 lbs\nDifference from world's smallest 1100 kg (1100 lbs) heavier\nWeight to power ratio 5 kg to 1 hp\n0—60 mph acceleration ratio 318 kg/s (701 lbs/s)\nConsumption ratio 169 kg/L\n(373 lbs/L)\nVehicle 20i\nCurb weight 1420 kg /\n3131 lbs\nDifference from world's smallest 995 kg (995 lbs) heavier\nWeight to power ratio 8 kg to 1 hp\n0—60 mph acceleration ratio 215 kg/s (474 lbs/s)\nConsumption ratio 209 kg/L\n(461 lbs/L)\nVehicle 28i\nCurb weight 1400 kg /\n3087 lbs\nDifference from world's smallest 975 kg (975 lbs) heavier\nWeight to power ratio 6 kg to 1 hp\n0—60 mph acceleration ratio 259 kg/s (571 lbs/s)\nConsumption ratio 206 kg/L\n(454 lbs/L)\nVehicle 18i\nCurb weight 1420 kg /\n3131 lbs\nDifference from world's smallest 995 kg (995 lbs) heavier\nWeight to power ratio 9 kg to 1 hp\n0—60 mph acceleration ratio 184 kg/s (406 lbs/s)\nConsumption ratio 209 kg/L\n(461 lbs/L)\nVehicle 35is\nCurb weight 1525 kg /\n3363 lbs\nDifference from world's smallest 1100 kg (1100 lbs) heavier\nWeight to power ratio 4 kg to 1 hp\n0—60 mph acceleration ratio 332 kg/s (732 lbs/s)\nConsumption ratio 169 kg/L\n(373 lbs/L)", null, "## Quick notes on BMW Z4 weight\n\n• Average weight for generation: 1523 kg (3358 lbs)\n• Difference from world average: 40 kg or 88 lbs heavier\n• Difference from world's heaviest: 2900 kg or 6395 lbs lighter\n• Load to one wheel: 381 kg (840 lbs)\n• Maximum allowable weight: 1673 kg (3689 lbs)\n• Weight to torque ratio: 5 kg per 1 Nm\n• Weight to length ratio: 2.8 mm per 1 kg\n• Engine capacity to weight: 1.7 cc per 1 kg\n• Pounds per 1 kW: 18 lbs/kW\n• Weightly rating: 8 / 10\n\n## 2010 BMW Z4 curb weight\n\n• 28i —1475 kg (3252 lbs)\n• 35i —1580 kg (3484 lbs)\n• 20i —1470 kg (3241 lbs)\n• 30i —1505 kg (3319 lbs)\n• 23i —1505 kg (3319 lbs)\n• 35 is —1600 kg (3528 lbs)\n\n## 2010 BMW Z4 weight to consumption ratio\n\n• 28i —42 kg/mpg (93 lbs/mpg)\n• 35i —63 kg/mpg (139 lbs/mpg)\n• 20i —42 kg/mpg (93 lbs/mpg)\n• 30i —54 kg/mpg (119 lbs/mpg)\n• 23i —52 kg/mpg (115 lbs/mpg)\n• 35 is —62 kg/mpg (137 lbs/mpg)\n\n## 2010 BMW Z4 weight to consumption ratio\n\n• 28i is 0.5% lighter than average\n• 35i is 6.5% heavier tran average\n• 20i is 0.9% lighter than average\n• 30i is 1.5% heavier tran average\n• 23i is 1.5% heavier tran average\n• 35 is is 7.9% heavier tran average\n• Vehicle Curb weight Difference from world's smallest Weight to power ratio 0—60 mph acceleration ratio Consumption ratio\n28i 1475 kg /\n3252 lbs\n1050 kg (2315 lbs) heavier 6 kg to 1 hp 273 kg/s (602 lbs/s) 217 kg/L\n(478 lbs/L)\n35i 1580 kg /\n3484 lbs\n1155 kg (2547 lbs) heavier 5 kg to 1 hp 322 kg/s (710 lbs/s) 168 kg/L\n(370 lbs/L)\n20i 1470 kg /\n3241 lbs\n1045 kg (2304 lbs) heavier 8 kg to 1 hp 223 kg/s (492 lbs/s) 216 kg/L\n(476 lbs/L)\n30i 1505 kg /\n3319 lbs\n1080 kg (2382 lbs) heavier 6 kg to 1 hp 259 kg/s (571 lbs/s) 181 kg/L\n(399 lbs/L)\n23i 1505 kg /\n3319 lbs\n1080 kg (2382 lbs) heavier 7 kg to 1 hp 218 kg/s (481 lbs/s) 184 kg/L\n(406 lbs/L)\n35 is 1600 kg /\n3528 lbs\n1175 kg (2591 lbs) heavier 5 kg to 1 hp 348 kg/s (767 lbs/s) 178 kg/L\n(392 
lbs/L)\nVehicle 28i\nCurb weight 1475 kg /\n3252 lbs\nDifference from world's smallest 1050 kg (1050 lbs) heavier\nWeight to power ratio 6 kg to 1 hp\n0—60 mph acceleration ratio 273 kg/s (602 lbs/s)\nConsumption ratio 217 kg/L\n(478 lbs/L)\nVehicle 35i\nCurb weight 1580 kg /\n3484 lbs\nDifference from world's smallest 1155 kg (1155 lbs) heavier\nWeight to power ratio 5 kg to 1 hp\n0—60 mph acceleration ratio 322 kg/s (710 lbs/s)\nConsumption ratio 168 kg/L\n(370 lbs/L)\nVehicle 20i\nCurb weight 1470 kg /\n3241 lbs\nDifference from world's smallest 1045 kg (1045 lbs) heavier\nWeight to power ratio 8 kg to 1 hp\n0—60 mph acceleration ratio 223 kg/s (492 lbs/s)\nConsumption ratio 216 kg/L\n(476 lbs/L)\nVehicle 30i\nCurb weight 1505 kg /\n3319 lbs\nDifference from world's smallest 1080 kg (1080 lbs) heavier\nWeight to power ratio 6 kg to 1 hp\n0—60 mph acceleration ratio 259 kg/s (571 lbs/s)\nConsumption ratio 181 kg/L\n(399 lbs/L)\nVehicle 23i\nCurb weight 1505 kg /\n3319 lbs\nDifference from world's smallest 1080 kg (1080 lbs) heavier\nWeight to power ratio 7 kg to 1 hp\n0—60 mph acceleration ratio 218 kg/s (481 lbs/s)\nConsumption ratio 184 kg/L\n(406 lbs/L)\nVehicle 35 is\nCurb weight 1600 kg /\n3528 lbs\nDifference from world's smallest 1175 kg (1175 lbs) heavier\nWeight to power ratio 5 kg to 1 hp\n0—60 mph acceleration ratio 348 kg/s (767 lbs/s)\nConsumption ratio 178 kg/L\n(392 lbs/L)", null, "## Quick notes on BMW Z4 weight\n\n• Average weight for generation: 1296 kg (2858 lbs)\n• Difference from world average: 187 kg or 412 lbs lighter\n• Difference from world's heaviest: 3105 kg or 6847 lbs lighter\n• Load to one wheel: 324 kg (715 lbs)\n• Maximum allowable weight: 1446 kg (3188 lbs)\n• Weight to torque ratio: 5 kg per 1 Nm\n• Weight to length ratio: 3.2 mm per 1 kg\n• Engine capacity to weight: 1.9 cc per 1 kg\n• Pounds per 1 kW: 19 lbs/kW\n• Weightly rating: 8 / 10\n\n## 2006 BMW Z4 curb weight\n\n• 2.5 si —1290 kg (2844 lbs)\n• 3.0 si —1395 kg (3076 lbs)\n• 2.5i —1275 kg (2811 lbs)\n• 2.0i 16V —1225 kg (2701 lbs)\n\n## 2006 BMW Z4 weight to consumption ratio\n\n• 3.0 si —54 kg/mpg (119 lbs/mpg)\n\n## 2006 BMW Z4 weight to consumption ratio\n\n• 2.5 si is 13% lighter than average\n• 3.0 si is 5.9% lighter than average\n• 2.5i is 14% lighter than average\n• 2.0i 16V is 17.4% lighter than average\n• Vehicle Curb weight Difference from world's smallest Weight to power ratio 0—60 mph acceleration ratio Consumption ratio\n2.5 si 1290 kg /\n2844 lbs\n865 kg (1907 lbs) heavier 6 kg to 1 hp 208 kg/s (459 lbs/s) -\n3.0 si 1395 kg /\n3076 lbs\n970 kg (2139 lbs) heavier 5 kg to 1 hp 258 kg/s (569 lbs/s) 157 kg/L\n(346 lbs/L)\n2.5i 1275 kg /\n2811 lbs\n850 kg (1874 lbs) heavier 7 kg to 1 hp 190 kg/s (419 lbs/s) -\n2.0i 16V 1225 kg /\n2701 lbs\n800 kg (1764 lbs) heavier 8 kg to 1 hp 157 kg/s (346 lbs/s) -\nVehicle 2.5 si\nCurb weight 1290 kg /\n2844 lbs\nDifference from world's smallest 865 kg (865 lbs) heavier\nWeight to power ratio 6 kg to 1 hp\n0—60 mph acceleration ratio 208 kg/s (459 lbs/s)\nConsumption ratio -\nVehicle 3.0 si\nCurb weight 1395 kg /\n3076 lbs\nDifference from world's smallest 970 kg (970 lbs) heavier\nWeight to power ratio 5 kg to 1 hp\n0—60 mph acceleration ratio 258 kg/s (569 lbs/s)\nConsumption ratio 157 kg/L\n(346 lbs/L)\nVehicle 2.5i\nCurb weight 1275 kg /\n2811 lbs\nDifference from world's smallest 850 kg (850 lbs) heavier\nWeight to power ratio 7 kg to 1 hp\n0—60 mph acceleration ratio 190 kg/s (419 lbs/s)\nConsumption ratio -\nVehicle 2.0i 16V\nCurb weight 1225 
kg /\n2701 lbs\nDifference from world's smallest 800 kg (800 lbs) heavier\nWeight to power ratio 8 kg to 1 hp\n0—60 mph acceleration ratio 157 kg/s (346 lbs/s)\nConsumption ratio -", null, "## Quick notes on BMW Z4 weight\n\n• Average weight for generation: 1325 kg (2922 lbs)\n• Difference from world average: 158 kg or 348 lbs lighter\n• Difference from world's heaviest: 3175 kg or 7001 lbs lighter\n• Load to one wheel: 331 kg (731 lbs)\n• Maximum allowable weight: 1475 kg (3252 lbs)\n• Weight to torque ratio: 4 kg per 1 Nm\n• Weight to length ratio: 3.1 mm per 1 kg\n• Engine capacity to weight: 2.3 cc per 1 kg\n• Pounds per 1 kW: 15 lbs/kW\n• Weightly rating: 6 / 10\n\n## 2006 BMW Z4 curb weight\n\n• 3.0 si —1325 kg (2922 lbs)\n\n## 2006 BMW Z4 weight to consumption ratio\n\n• 3.0 si is 10.7% lighter than average\n• Vehicle Curb weight Difference from world's smallest Weight to power ratio 0—60 mph acceleration ratio Consumption ratio\n3.0 si 1325 kg /\n2922 lbs\n900 kg (1985 lbs) heavier 5 kg to 1 hp 245 kg/s (540 lbs/s) -\nVehicle 3.0 si\nCurb weight 1325 kg /\n2922 lbs\nDifference from world's smallest 900 kg (900 lbs) heavier\nWeight to power ratio 5 kg to 1 hp\n0—60 mph acceleration ratio 245 kg/s (540 lbs/s)\nConsumption ratio -", null, "## Quick notes on BMW Z4 weight\n\n• Average weight for generation: 1415 kg (3120 lbs)\n• Difference from world average: 68 kg or 150 lbs lighter\n• Difference from world's heaviest: 3085 kg or 6803 lbs lighter\n• Load to one wheel: 354 kg (780 lbs)\n• Maximum allowable weight: 1565 kg (3451 lbs)\n• Weight to torque ratio: 4 kg per 1 Nm\n• Weight to length ratio: 2.9 mm per 1 kg\n• Engine capacity to weight: 2.3 cc per 1 kg\n• Pounds per 1 kW: 12 lbs/kW\n• Weightly rating: 8 / 10\n\n## 2006 BMW Z4 curb weight\n\n• 3.2 —1415 kg (3120 lbs)\n\n## 2006 BMW Z4 weight to consumption ratio\n\n• 3.2 is 4.6% lighter than average\n• Vehicle Curb weight Difference from world's smallest Weight to power ratio 0—60 mph acceleration ratio Consumption ratio\n3.2 1415 kg /\n3120 lbs\n990 kg (2183 lbs) heavier 4 kg to 1 hp 295 kg/s (650 lbs/s) -\nVehicle 3.2\nCurb weight 1415 kg /\n3120 lbs\nDifference from world's smallest 990 kg (990 lbs) heavier\nWeight to power ratio 4 kg to 1 hp\n0—60 mph acceleration ratio 295 kg/s (650 lbs/s)\nConsumption ratio -", null, "## Quick notes on BMW Z4 weight\n\n• Average weight for generation: 1425 kg (3142 lbs)\n• Difference from world average: 58 kg or 128 lbs lighter\n• Difference from world's heaviest: 3075 kg or 6781 lbs lighter\n• Load to one wheel: 356 kg (786 lbs)\n• Maximum allowable weight: 1575 kg (3473 lbs)\n• Weight to torque ratio: 4 kg per 1 Nm\n• Weight to length ratio: 2.9 mm per 1 kg\n• Engine capacity to weight: 2.3 cc per 1 kg\n• Pounds per 1 kW: 12 lbs/kW\n• Weightly rating: 8 / 10\n\n## 2006 BMW Z4 curb weight\n\n• 3.2 —1425 kg (3142 lbs)\n\n## 2006 BMW Z4 weight to consumption ratio\n\n• 3.2 —75 kg/mpg (165 lbs/mpg)\n\n## 2006 BMW Z4 weight to consumption ratio\n\n• 3.2 is 3.9% lighter than average\n• Vehicle Curb weight Difference from world's smallest Weight to power ratio 0—60 mph acceleration ratio Consumption ratio\n3.2 1425 kg /\n3142 lbs\n1000 kg (2205 lbs) heavier 4 kg to 1 hp 297 kg/s (655 lbs/s) 118 kg/L\n(260 lbs/L)\nVehicle 3.2\nCurb weight 1425 kg /\n3142 lbs\nDifference from world's smallest 1000 kg (1000 lbs) heavier\nWeight to power ratio 4 kg to 1 hp\n0—60 mph acceleration ratio 297 kg/s (655 lbs/s)\nConsumption ratio 118 kg/L\n(260 lbs/L)", null, "## Quick notes on BMW Z4 weight\n\n• Average 
weight for generation: 1342 kg (2959 lbs)\n• Difference from world average: 141 kg or 311 lbs lighter\n• Difference from world's heaviest: 3135 kg or 6913 lbs lighter\n• Load to one wheel: 336 kg (740 lbs)\n• Maximum allowable weight: 1492 kg (3290 lbs)\n• Weight to torque ratio: 5 kg per 1 Nm\n• Weight to length ratio: 3 mm per 1 kg\n• Engine capacity to weight: 1.9 cc per 1 kg\n• Pounds per 1 kW: 20 lbs/kW\n• Weightly rating: 8 / 10\n\n## 2002 BMW Z4 curb weight\n\n• 2.2i —1325 kg (2922 lbs)\n• 2.5i —1335 kg (2944 lbs)\n• 3.0i —1365 kg (3010 lbs)\n\n## 2002 BMW Z4 weight to consumption ratio\n\n• 2.2i —41 kg/mpg (90 lbs/mpg)\n• 2.5i —42 kg/mpg (93 lbs/mpg)\n• 3.0i —44 kg/mpg (97 lbs/mpg)\n\n## 2002 BMW Z4 weight to consumption ratio\n\n• 2.2i is 10.7% lighter than average\n• 2.5i is 10% lighter than average\n• 3.0i is 8% lighter than average\n• Vehicle Curb weight Difference from world's smallest Weight to power ratio 0—60 mph acceleration ratio Consumption ratio\n2.2i 1325 kg /\n2922 lbs\n900 kg (1985 lbs) heavier 8 kg to 1 hp 182 kg/s (401 lbs/s) 182 kg/L\n(401 lbs/L)\n2.5i 1335 kg /\n2944 lbs\n910 kg (2007 lbs) heavier 7 kg to 1 hp 199 kg/s (439 lbs/s) 180 kg/L\n(397 lbs/L)\n3.0i 1365 kg /\n3010 lbs\n940 kg (2073 lbs) heavier 6 kg to 1 hp 244 kg/s (538 lbs/s) 182 kg/L\n(401 lbs/L)\nVehicle 2.2i\nCurb weight 1325 kg /\n2922 lbs\nDifference from world's smallest 900 kg (900 lbs) heavier\nWeight to power ratio 8 kg to 1 hp\n0—60 mph acceleration ratio 182 kg/s (401 lbs/s)\nConsumption ratio 182 kg/L\n(401 lbs/L)\nVehicle 2.5i\nCurb weight 1335 kg /\n2944 lbs\nDifference from world's smallest 910 kg (910 lbs) heavier\nWeight to power ratio 7 kg to 1 hp\n0—60 mph acceleration ratio 199 kg/s (439 lbs/s)\nConsumption ratio 180 kg/L\n(397 lbs/L)\nVehicle 3.0i\nCurb weight 1365 kg /\n3010 lbs\nDifference from world's smallest 940 kg (940 lbs) heavier\nWeight to power ratio 6 kg to 1 hp\n0—60 mph acceleration ratio 244 kg/s (538 lbs/s)\nConsumption ratio 182 kg/L\n(401 lbs/L)" ]
[ null, "https://weightle.com/images/search.png", null, "https://weightle.com/images/close.png", null, "https://weightle.com/images/menu.png", null, "https://weightle.com/images/next-white.png", null, "https://weightle.com/images/next-white.png", null, "https://weightle.com/images/autowebp/bmw-z4-2018-g29.webp", null, "https://weightle.com/images/autowebp/bmw-z4-2013-e89-facelift-2013.webp", null, "https://weightle.com/images/autowebp/bmw-z4-2009-e89-2011.webp", null, "https://weightle.com/images/autowebp/bmw-z4-2006-e85-facelift-2006.webp", null, "https://weightle.com/images/autowebp/bmw-z4-2006-coupe-e86.webp", null, "https://weightle.com/images/autowebp/bmw-z4-2006-m-e85.webp", null, "https://weightle.com/images/autowebp/bmw-z4-2006-m-coupe-e86.webp", null, "https://weightle.com/images/autowebp/bmw-z4-2003-e85.webp", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.65284646,"math_prob":0.9131349,"size":9289,"snap":"2022-40-2023-06","text_gpt3_token_len":3456,"char_repetition_ratio":0.21410878,"word_repetition_ratio":0.50511944,"special_character_ratio":0.46904942,"punctuation_ratio":0.010741139,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96488506,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-28T04:31:39Z\",\"WARC-Record-ID\":\"<urn:uuid:95c2d116-d534-4da0-8ae2-ede97ebb9404>\",\"Content-Length\":\"171886\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:70038501-be34-4294-a3ab-3d7a0eb347fb>\",\"WARC-Concurrent-To\":\"<urn:uuid:c6804dea-febc-428e-88bd-21ec77079ad6>\",\"WARC-IP-Address\":\"148.251.15.149\",\"WARC-Target-URI\":\"https://weightle.com/bmw-z4\",\"WARC-Payload-Digest\":\"sha1:5L4TSFEILFRKOWOYTDBXGH5YSQ64OUAK\",\"WARC-Block-Digest\":\"sha1:7CBNPA2JSECN7U5EISG4OI7JJYQXBGDG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499470.19_warc_CC-MAIN-20230128023233-20230128053233-00390.warc.gz\"}"}
https://kr.mathworks.com/matlabcentral/answers/158402-plot-with-different-symbols
[ "plot with different symbols\n\n조회 수: 66(최근 30일)\nNiki 2014년 10월 13일\n편집: Stephen 2014년 10월 13일\nI have two vectors with 1000 length each. for example\nr1 = rand(1000,1); r2 = rand(1000,1); then I want to plot a scatter which shows r1 versus r2 as follows:\nplot(r1, r2) I want to show each two values of r1 and 2 with the same symbol but different from the rest. any suggestion ?\n댓글 수: 1표시숨기기 없음\nStephen 2014년 10월 13일\nMATLAB has thirteen different plot symbols. How do you wish to divide up 1000 points?\n\n댓글을 달려면 로그인하십시오.\n\n답변(1개)\n\nIain 2014년 10월 13일\nr1 = rand(1,1000);\nr2 = rand(1,1000);\nplot(1:1000,r1,'bx',1:1000,r2,'kx')\nChange the x to +, d, s, o for other symbols....\n댓글 수: 1표시숨기기 없음\nNiki 2014년 10월 13일\nThanks for your comment. But not really if I wanted to do such time consuming way to plot, it was much easier to do it in other ways. I am more thinking to find a more automatic way\n\n댓글을 달려면 로그인하십시오.\n\nCommunity Treasure Hunt\n\nFind the treasures in MATLAB Central and discover how the community can help you!\n\nStart Hunting!" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7010583,"math_prob":0.9471821,"size":627,"snap":"2022-05-2022-21","text_gpt3_token_len":240,"char_repetition_ratio":0.10593901,"word_repetition_ratio":0.0,"special_character_ratio":0.37161085,"punctuation_ratio":0.19875777,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9770921,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-27T06:30:29Z\",\"WARC-Record-ID\":\"<urn:uuid:a00ba323-c589-4d5c-b395-3841f1a1205d>\",\"Content-Length\":\"113158\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:77e8eaa7-2fb0-4eff-9d4e-fb58c4936e12>\",\"WARC-Concurrent-To\":\"<urn:uuid:2ce823c2-9817-412a-9f00-6600625a8f79>\",\"WARC-IP-Address\":\"104.68.243.15\",\"WARC-Target-URI\":\"https://kr.mathworks.com/matlabcentral/answers/158402-plot-with-different-symbols\",\"WARC-Payload-Digest\":\"sha1:YXWSMIE2X3QAYYUBMCKZ76ILOD5HMAE4\",\"WARC-Block-Digest\":\"sha1:DEV4NHBRZT6ZRH3222MJPQEDSEDKG3CX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320305141.20_warc_CC-MAIN-20220127042833-20220127072833-00651.warc.gz\"}"}
http://erlang.org/pipermail/erlang-questions/2009-May/043844.html
[ "[erlang-questions] At what point am I \"playing compiler\"\n\nPer Melin <", null, ">\nSun May 17 22:40:18 CEST 2009\n\n```Dennis Byrne:\n> The functions expressive/0 and efficient/0 have the same result.\n> Sometimes I prefer expressive syntax but I am concerned about\n> the compilers (or runtime) make these concerns obselete?\n>\n> expressive() ->\n>        List = [1,2,3],\n>        Last = lists:last(List),\n>        Min = lists:foldl(fun min/2, Last, List),\n>        Max = lists:foldl(fun max/2, Last, List),\n>        Sum = lists:foldl(fun sum/2, 0, List),\n>        {Min, Max, Sum}.\n>\n> efficient() ->\n>        List = [1,2,3],\n>        Last = lists:last(List),\n>        lists:foldl(fun summary/2, {Last, Last, 0}, List).\n>\n> summary(X, {Min, Max, Total}) ->\n>        {min(X, Min), max(X, Max), Total + X}.\n>\n> sum(X, Y) ->\n>        X + Y.\n>\n> min(X, Y) when X < Y ->\n>        X;\n> min(_, Y) ->\n>        Y.\n>\n> max(X, Y) when X > Y ->\n>        X;\n> max(_, Y) ->\n>        Y.\n\nexpressive_and_efficient() ->\nList = [1,2,3],\n{lists:min(List), lists:max(List), lists:sum(List)}.\n\n:-)\n\nI run micro-benchmarks obsessively. On my machine expressive() is 70%\nslower than efficient() without HIPE and 150% slower with HIPE. But\nwe're talking fractions of a microsecond per execution. Should you\nbother?\n\nIf you're concerned with efficiency, timer:tc/3 is your friend. Stuff\nlike this you should obviously loop at least a few million times to\nget reliable times.\n\n```" ]
[ null, "http://erlang.org/pipermail/erlang-questions/emailaddrs/ema-4112.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.78046167,"math_prob":0.9413878,"size":1454,"snap":"2019-26-2019-30","text_gpt3_token_len":445,"char_repetition_ratio":0.13310345,"word_repetition_ratio":0.05668016,"special_character_ratio":0.35075653,"punctuation_ratio":0.2356688,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97507966,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-19T09:00:41Z\",\"WARC-Record-ID\":\"<urn:uuid:f78fe9f5-2bbf-49d6-81cd-6ad8673ff999>\",\"Content-Length\":\"5094\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7c9b4e63-5ec7-41fd-8815-0cf1e96163dd>\",\"WARC-Concurrent-To\":\"<urn:uuid:16485a43-3fae-4637-881b-2da40f0d8310>\",\"WARC-IP-Address\":\"192.121.151.106\",\"WARC-Target-URI\":\"http://erlang.org/pipermail/erlang-questions/2009-May/043844.html\",\"WARC-Payload-Digest\":\"sha1:6ZBC4R3U6SVVXGCKXZGWGQBB2TQSHWDL\",\"WARC-Block-Digest\":\"sha1:MFLHSRNHFBY3OQCEPGA4JFTWYMKX3NSJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627998943.53_warc_CC-MAIN-20190619083757-20190619105757-00531.warc.gz\"}"}
https://phys.org/tags/wavelength+of+light/
[ "# News tagged with wavelength of light\n\nRelated topics: light\n\npage 1 from 18\n\n## Wavelength\n\nIn physics, the wavelength of a sinusoidal wave is the spatial period of the wave – the distance over which the wave's shape repeats. It is usually determined by considering the distance between consecutive corresponding points of the same phase, such as crests, troughs, or zero crossings, and is a characteristic of both traveling waves and standing waves. Wavelength is commonly designated by the Greek letter lambda (λ). The concept can also be applied to periodic waves of non-sinusoidal shape. The term wavelength is also sometimes applied to modulated waves, and to the sinusoidal envelopes of modulated waves or waves formed by interference of several sinusoids.\n\nAssuming a sinusoidal wave moving at a fixed wave speed, wavelength is inversely proportional to frequency: waves with higher frequencies have shorter wavelengths, and lower frequencies have longer wavelengths.\n\nExamples of wave-like phenomena are sound waves, light, and water waves. A sound wave is a periodic variation in air pressure, while in light and other electromagnetic radiation the strength of the electric and the magnetic field vary. Water waves are periodic variations in the height of a body of water. In a crystal lattice vibration, atomic positions vary periodically in both lattice position and time.\n\nWavelength is a measure of the distance between repetitions of a shape feature such as peaks, valleys, or zero-crossings, not a measure of how far any given particle moves. For example, in waves over deep water a particle in the water moves in a circle of the same diameter as the wave height, unrelated to wavelength.\n\nThis text uses material from Wikipedia, licensed under CC BY-SA" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9330816,"math_prob":0.98601973,"size":1608,"snap":"2021-21-2021-25","text_gpt3_token_len":307,"char_repetition_ratio":0.13902743,"word_repetition_ratio":0.0,"special_character_ratio":0.18221393,"punctuation_ratio":0.10380623,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96380806,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-11T21:14:17Z\",\"WARC-Record-ID\":\"<urn:uuid:c76b6a62-dd80-42e4-9fc0-ba6668866f1d>\",\"Content-Length\":\"65677\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:46e40d52-2baf-4ea8-92c7-b87d5abbed8d>\",\"WARC-Concurrent-To\":\"<urn:uuid:b355c953-aefa-475f-9452-aa22f38a9977>\",\"WARC-IP-Address\":\"72.251.236.55\",\"WARC-Target-URI\":\"https://phys.org/tags/wavelength+of+light/\",\"WARC-Payload-Digest\":\"sha1:WAADQKATZANNJZBF5VQZIYAEENPGV4TU\",\"WARC-Block-Digest\":\"sha1:MMDSLBQ6FDGAAUYGFCLZKMA4OIYPLH42\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243989856.11_warc_CC-MAIN-20210511184216-20210511214216-00352.warc.gz\"}"}
http://bims.iranjournals.ir/article_364.html
[ "# The (R,S)-symmetric and (R,S)-skew symmetric solutions of the pair of matrix equations A1XB1 = C1 and A2XB2 = C2\n\nDocument Type: Research Paper\n\nAuthors\n\nAbstract\n\nLet $Rin textbf{C}^{mtimes m}$ and $Sin textbf{C}^{ntimes n}$ be nontrivial involution matrices; i.e., $R=R^{-1}neq pm~I$ and $S=S^{-1}neq pm~I$.\nAn $mtimes n$ complex matrix $A$ is said to be an $(R, S)$-symmetric ($(R, S)$-skew symmetric) matrix if $RAS =A$ ($RAS =-A$).\nThe $(R, S)$-symmetric and $(R, S)$-skew symmetric matrices have\na number of special properties and widely used in engineering and\nscientific computating. Here, we introduce the necessary and\nsufficient conditions for the solvability of the pair of matrix\nequations $A_{1}XB_{1}=C_{1}$ and $A_{2}XB_{2}=C_{2}$, over $(R, S)$-symmetric and $(R, S)$-skew symmetric matrices, and give the\ngeneral expressions of the solutions for the solvable cases.\nFinally, we give necessary and sufficient conditions for the\nexistence of $(R, S)$-symmetric and $(R, S)$-skew symmetric\nsolutions and representations of these solutions to the pair of\nmatrix equations in some special cases.\n\nKeywords\n\n### History\n\n• Receive Date: 03 February 2009\n• Revise Date: 15 March 2012\n• Accept Date: 17 March 2010" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7203532,"math_prob":0.9993986,"size":1826,"snap":"2020-24-2020-29","text_gpt3_token_len":601,"char_repetition_ratio":0.17837541,"word_repetition_ratio":0.33828998,"special_character_ratio":0.322563,"punctuation_ratio":0.18877551,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99997103,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-27T14:22:42Z\",\"WARC-Record-ID\":\"<urn:uuid:cd9aee0b-4a3a-45b5-ba37-51830752619a>\",\"Content-Length\":\"41869\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c4ee5e5b-bcd3-4364-ac20-fec2784bf079>\",\"WARC-Concurrent-To\":\"<urn:uuid:822e7298-557e-457a-a6a8-a99f49e7b43a>\",\"WARC-IP-Address\":\"217.182.166.236\",\"WARC-Target-URI\":\"http://bims.iranjournals.ir/article_364.html\",\"WARC-Payload-Digest\":\"sha1:2EJNWP74AARUSS6REJNPSTEMVENXLF6U\",\"WARC-Block-Digest\":\"sha1:2KCGIS6QCAWDRZBNHKX3SXJN53NCGPQC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347394756.31_warc_CC-MAIN-20200527141855-20200527171855-00579.warc.gz\"}"}
https://codekomusic.com/qa/is-vlookup-useful.html
[ "", null, "# Is Vlookup Useful?\n\n## Is Vlookup obsolete?\n\nVLOOKUP is an obsolete function inherited from Lotus-123.\n\nThere is much better in Excel, more powerful and less limited, it is INDEX/MATCH.\n\nINDEX/MATCH replaces all lookups functions (VLOOKUP, HLOOKUP and LOOKUP)..\n\n## What is pivoting in Excel?\n\nA Pivot Table is used to summarise, sort, reorganise, group, count, total or average data stored in a table. It allows us to transform columns into rows and rows into columns. It allows grouping by any field (column), and using advanced calculations on them.\n\n## Where is Vlookup in Excel?\n\nHow to use VLOOKUP in ExcelClick the cell where you want the VLOOKUP formula to be calculated. … Click “Formula” at the top of the screen. … Click “Lookup & Reference” on the Ribbon. … Click “VLOOKUP” at the bottom of the drop-down menu. … Specify the cell in which you will enter the value whose data you’re looking for.More items…•May 4, 2020\n\n## Can you use Vlookup and Hlookup together?\n\nVLOOKUP and HLOOKUP are two of the most popular formulas in Excel and using them together is one of the first formula combinations that people learn. … Both of these contexts make it worthwhile to learn VLOOKUP HLOOKUP.\n\n## What are the limitations of Vlookup?\n\nLimitations of VLOOKUP One major limitation of VLOOKUP is that it cannot look to the left. The values to lookup must always be on the left-most column of the range and the values to return must be on the right hand side. You cannot use the standard VLOOKUP to look at the columns and the rows to find an exact match.\n\n## What is better than Vlookup?\n\nWith sorted data and an approximate match, INDEX-MATCH is about 30% faster than VLOOKUP. With sorted data and a fast technique to find an exact match, INDEX-MATCH is about 13% faster than VLOOKUP. … If you use VLOOKUP you must look up the same SKU for each column of information you need.\n\n## What is pivot table in simple words?\n\nA pivot table is a table of statistics that summarizes the data of a more extensive table (such as from a database, spreadsheet, or business intelligence program). … Pivot tables are a technique in data processing. They arrange and rearrange (or “pivot”) statistics in order to draw attention to useful information.\n\n## Who invented Excel?\n\nMicrosoft ExcelA simple line chart being created in Excel, running on Windows 10Developer(s)MicrosoftInitial release1987Stable release2103 (16.0.13901.20336) / April 2, 2021Operating systemMicrosoft Windows6 more rows\n\n## Who invented Excel formulas?\n\nDan BricklinDan Bricklin invented the spreadsheet—but don’t hold that against him. The father of the spreadsheet. December 22, 2015 This article is more than 2 years old. You may not know Dan Bricklin, but you are almost certainly familiar with his work.\n\n## What is Hlookup formula in Excel?\n\nHLOOKUP in Excel stands for ‘Horizontal Lookup’. It is a function that makes Excel search for a certain value in a row (the so called ‘table array’), in order to return a value from a different row in the same column.\n\n## How use Vlookup step by step?\n\nHow to use VLOOKUP in ExcelStep 1: Organize the data. … Step 2: Tell the function what to lookup. … Step 3: Tell the function where to look. … Step 4: Tell Excel what column to output the data from. … Step 5: Exact or approximate match.\n\n## Who invented Vlookup?\n\nBill Jelen: “From 1979 – VisiCalc and LOOKUP”! 
In Bill’s last installment for VLOOKUP WEEK 2012, we go back to 1979…VisiCalc and the whole ’20 Functions’ available in that time! There were no IF statements and there was no VLOOKUP…but there was ‘LOOKUP’. Follow along and see where VLOOKUP began!\n\n## What is Vlookup in Excel example?\n\nThe VLOOKUP function in Excel performs a case-insensitive lookup. For example, the VLOOKUP function below looks up MIA (cell G2) in the leftmost column of the table. Explanation: the VLOOKUP function is case-insensitive so it looks up MIA or Mia or mia or miA, etc.\n\n## Why Hlookup is used in Excel?\n\nWhat is the HLOOKUP Function? HLOOKUP stands for Horizontal Lookup and can be used to retrieve information from a table by searching a row for the matching data and outputting from the corresponding column. While VLOOKUP searches for the value in a column, HLOOKUP searches for the value in a row.\n\n## How do I compare two lists in Excel?\n\nThe quickest way to find all about two lists is to select them both and them click on Conditional Formatting -> Highlight cells rules -> Duplicate Values (Excel 2007). The result is that it highlights in both lists the values that ARE the same.\n\n## How do I match data in Excel?\n\nCompare Two Columns and Highlight MatchesSelect the entire data set.Click the Home tab.In the Styles group, click on the ‘Conditional Formatting’ option.Hover the cursor on the Highlight Cell Rules option.Click on Duplicate Values.In the Duplicate Values dialog box, make sure ‘Duplicate’ is selected.More items…\n\n## What to do when Vlookup returns NA?\n\nProblem: The lookup column is not sorted in the ascending orderChange the VLOOKUP function to look for an exact match. To do that, set the range_lookup argument to FALSE. No sorting is necessary for FALSE.Use the INDEX/MATCH function to look up a value in an unsorted table.\n\n## Why would I use a Vlookup?\n\nWhen you need to find information in a large spreadsheet, or you are always looking for the same kind of information, use the VLOOKUP function. VLOOKUP works a lot like a phone book, where you start with the piece of data you know, like someone’s name, in order to find out what you don’t know, like their phone number.\n\nIt can not lookup and return a value which is to the left of the lookup value. It works only with data which is arranged vertically. VLOOKUP would give a wrong result if you add/delete a new column in your data (as the column number value now refers to the wrong column).\n\n## What is Vlookup in simple words?\n\nVLOOKUP stands for ‘Vertical Lookup’. It is a function that makes Excel search for a certain value in a column (the so called ‘table array’), in order to return a value from a different column in the same row.\n\n## What does Vlookup return?\n\nIn its simplest form, the VLOOKUP function says: =VLOOKUP(What you want to look up, where you want to look for it, the column number in the range containing the value to return, return an Approximate or Exact match – indicated as 1/TRUE, or 0/FALSE).\n\n## What is an Hlookup vs Vlookup?\n\nHLookup searches for a value in the top row of a table and then returns a value in the same column. The VLookup function displays the searched value in the same row but in the next column." ]
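All of the article's examples are spreadsheet formulas. Purely as a side-by-side sketch for readers who do the same kind of exact-match lookup in code, here is a rough pandas analogue; the column names and values are made up for illustration and are not from the article:

```python
import pandas as pd

# Stand-in for the Excel "table array": the leftmost column holds the lookup keys.
prices = pd.DataFrame({
    "sku":   ["A100", "B200", "C300"],
    "price": [9.99, 14.50, 3.25],
})

orders = pd.DataFrame({"sku": ["C300", "A100", "Z999"]})

# Exact-match lookup, like VLOOKUP(..., FALSE): keys with no match come back
# as NaN, which plays the role of Excel's #N/A.
result = orders.merge(prices, on="sku", how="left")
print(result)
```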
[ null, "https://mc.yandex.ru/watch/66676240", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8268373,"math_prob":0.6516607,"size":7186,"snap":"2021-21-2021-25","text_gpt3_token_len":1697,"char_repetition_ratio":0.16805904,"word_repetition_ratio":0.098101266,"special_character_ratio":0.22543836,"punctuation_ratio":0.112280704,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9554147,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-06T12:47:15Z\",\"WARC-Record-ID\":\"<urn:uuid:6e72faa2-1b50-4694-bce7-a8de5ca55eee>\",\"Content-Length\":\"48224\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ed36aaa8-391d-44cb-ada5-e311c5296fed>\",\"WARC-Concurrent-To\":\"<urn:uuid:15ef0311-cf4f-44bd-bc56-77e3bf5d845a>\",\"WARC-IP-Address\":\"87.236.16.235\",\"WARC-Target-URI\":\"https://codekomusic.com/qa/is-vlookup-useful.html\",\"WARC-Payload-Digest\":\"sha1:YRJVZW5YFURX7EN4IW53RM7XMZRZQXT6\",\"WARC-Block-Digest\":\"sha1:632HZCTXI5IJ5JF5LQ7XEX7HCAFJFX7O\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243988753.97_warc_CC-MAIN-20210506114045-20210506144045-00458.warc.gz\"}"}
https://zbmath.org/?q=an%3A1308.90166
[ "# zbMATH — the first resource for mathematics\n\nOn the use of iterative methods in cubic regularization for unconstrained optimization. (English) Zbl 1308.90166\nSummary: In this paper we consider the problem of minimizing a smooth function by using the adaptive cubic regularized (ARC) framework. We focus on the computation of the trial step as a suitable approximate minimizer of the cubic model and discuss the use of matrix-free iterative methods. Our approach is alternative to the implementation proposed in the original version of ARC, involving a linear algebra phase, but preserves the same worst-case complexity count. Further we introduce a new stopping criterion in order to properly manage the “over-solving” issue arising whenever the cubic model is not an adequate model of the true objective function. Numerical experiments conducted by using a nonmonotone gradient method as inexact solver are presented. The obtained results clearly show the effectiveness of the new variant of ARC algorithm.\n\n##### MSC:\n 90C30 Nonlinear programming" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6858097,"math_prob":0.88211906,"size":5290,"snap":"2021-31-2021-39","text_gpt3_token_len":1522,"char_repetition_ratio":0.13280363,"word_repetition_ratio":0.04607046,"special_character_ratio":0.31039697,"punctuation_ratio":0.2704692,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96428,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-24T14:27:15Z\",\"WARC-Record-ID\":\"<urn:uuid:edc21446-bb5c-4fbe-9fc3-a892aec28af5>\",\"Content-Length\":\"53805\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6e7fd211-3e5a-4f5e-8a37-2e1855f1487d>\",\"WARC-Concurrent-To\":\"<urn:uuid:1352ac4d-265c-45c4-9413-ec88e0837313>\",\"WARC-IP-Address\":\"141.66.194.2\",\"WARC-Target-URI\":\"https://zbmath.org/?q=an%3A1308.90166\",\"WARC-Payload-Digest\":\"sha1:EXCXOMIAD25U5246Y2BL6GBGJBFOI3IX\",\"WARC-Block-Digest\":\"sha1:WGKBM4VWSYMGG3JPLKBBMPTORR7BCQ5C\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046150266.65_warc_CC-MAIN-20210724125655-20210724155655-00682.warc.gz\"}"}
https://rdrr.io/cran/compositions/man/biplot3d.html
[ "biplot3d: Three-dimensional biplots, based on package rgl In compositions: Compositional Data Analysis\n\nDescription\n\nPlots variables and cases in the same plot, based on a principal component analysis.\n\nUsage\n\n 1 2 3 4 5 6 7 8 9 10 11 biplot3D(x,...) ## Default S3 method: biplot3D(x,y,var.axes=TRUE,col=c(\"green\",\"red\"),cex=c(2,2), xlabs = NULL, ylabs = NULL, expand = 1,arrow.len = 0.1, ...,add=FALSE) ## S3 method for class 'princomp' biplot3D(x,choices=1:3,scale=1,..., comp.col=1,comp.labs=paste(\"Comp.\",1:3), scale.scores=lambda[choices]^(1-scale), scale.var=scale.comp, scale.comp=sqrt(lambda[choices]), scale.disp=1/scale.comp)\n\nArguments\n\n x princomp object or matrix of point locations to be drawn (typically, cases) choices Which principal components should be used? scale a scaling parameter like in biplot scale.scores a vector giving the scaling applied to the scores scale.var a vector giving the scaling applied to the variables scale.comp a vector giving the scaling applied to the unit length of each component scale.disp a vector giving the scaling of the display in the directions of the components comp.col color to draw the axes of the components, defaults to black comp.labs labels for the components ... further plotting parameters as defined in rgl::rgl.material y matrix of second point/arrow-head locations (typically, variables) var.axes logical, TRUE draws arrows and FALSE points for y col vector/list of two elements the first giving the color/colors for the first data set and the second giving color/colors for the second data set. cex vector/list of two elements the first giving the size for the first data set and the second giving size for the second data set. xlabs labels to be plotted at x-locations ylabs labels to be plotted at y-locations expand the relative expansion of the y data set with respect to x arrow.len The length of the arrows as defined in arrows3D add logical, adding to existing plot or making a new one?\n\nDetails\n\nThis \"biplot\" is a triplot, relating data, variables and principal components. The relative scaling of the components is still experimental, meant to mimic the behavior of classical biplots.\n\nValue\n\nthe 3D plotting coordinates of the tips of the arrows of the variables displayed, returned invisibly\n\nAuthor(s)\n\nK.Gerald v.d. Boogaart http://www.stat.boogaart.de" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.69823897,"math_prob":0.9713283,"size":2638,"snap":"2022-05-2022-21","text_gpt3_token_len":707,"char_repetition_ratio":0.12376613,"word_repetition_ratio":0.13447432,"special_character_ratio":0.2596664,"punctuation_ratio":0.16448598,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99191463,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-22T03:45:44Z\",\"WARC-Record-ID\":\"<urn:uuid:078b118f-8375-4f9f-a811-e96f4953683a>\",\"Content-Length\":\"57076\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:366eb632-c81b-4b93-aa9a-93acd5726a37>\",\"WARC-Concurrent-To\":\"<urn:uuid:18daf129-38f3-4fe8-ad75-3a04bf38919d>\",\"WARC-IP-Address\":\"51.81.83.12\",\"WARC-Target-URI\":\"https://rdrr.io/cran/compositions/man/biplot3d.html\",\"WARC-Payload-Digest\":\"sha1:F7U4EMTIRFSBQWOFFWUWAUCTSFXHAMHG\",\"WARC-Block-Digest\":\"sha1:K2TYSTP4Q2AXA2PC3VRUKBPPUZGETRXO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320303729.69_warc_CC-MAIN-20220122012907-20220122042907-00059.warc.gz\"}"}
https://rd.springer.com/chapter/10.1007%2F978-3-030-11566-1_6
[ "# Inference of a Dyadic Measure and Its Simplicial Geometry from Binary Feature Data and Application to Data Quality\n\nChapter\nPart of the Association for Women in Mathematics Series book series (AWMS, volume 17)\n\n## Abstract\n\nWe propose a new method for representing data sets with an ordered set of binary features which summarizes both measure-theoretic and topological properties. The method does not require any assumption of metric space properties for the data. A data set with an ordered set of binary features is viewed as a dyadic set with a dyadic measure. We prove that dyadic sets with dyadic measures have a canonical set of binary features and determine canonical nerve simplicial complexes. The method computes the two related representations: multiscale parameters for the dyadic measure and the Betti numbers of the simplicial complex. The dyadic product formula representation formulated in previous work is exploited. The parameters characterize the relative skewness of the measure at dyadic scales and localities. The more abstract Betti number statistics summarize the simplicial geometry of the support of the measure. We prove that they provide a simple privacy property. Our methods are compared with other results for measures on sets with tree structures, recent multi-resolution theory, and computational topology. We illustrate the method on a data quality data set and propose future research directions.\n\n## References\n\n1. 1.\nL. Ahlfors, Lectures on Quasi-Conformal Mappings, vol. 10 (van Nostrand Mathematical Studies, Princeton, 1966)\n2. 2.\nD. Bassu, P.W. Jones, L. Ness, D. Shallcross, Product Formalisms for Measures on Spaces with Binary Tree Structures: Representation, Visualization and Multiscale Noise, submitted to SIGMA Forum of Maths (under revision) (2016). https://arxiv.org/abs/1601.02946\n3. 3.\nA. Beurling, L. Ahlfors, The boundary correspondence under quasi-conformal mappings. Acta Math. 96, 125–142 (1956)\n4. 4.\nL. Billera, S. Holmes, K. Vogtmann, Geometry of the space of phylogenetic trees. Adv. Appl. Math. 27, 733–767 (2001)\n5. 5.\nC. Dwork, A. Roth, The algorithmic foundations of differential privacy. Found. Trends Theor. Comput. Sci. 9, 211–401 (2014)\n6. 6.\nH. Edelsbrunner, J. Harer, Persistent homology—a survey. Contemp. Math. 453, 257–282 (2008)\n7. 7.\nF. Fasy, B. Lecci, A. Rinaldo, L. Wasserman, S. Balakrishnan, A. Singh, Confidence sets for persistence diagrams. Ann. Stat. 42, 2301–2339 (2014)\n8. 8.\nR. Fefferman, C. Kenig, J. Pipher, The theory of weights and the Dirichlet problem for elliptical equations. Ann. Math. 134, 65–124 (1991)\n9. 9.\nM. Gavish, B. Nadler, R. Coifman, Multiscale wavelets on trees, graphs and high dimensional data: theory and applications to semi supervised learning, in Proceedings of the 27th International Conference on Machine Learning (Omnipress, Madison, 2010), pp. 367–374Google Scholar\n10. 10.\nS. Harker, K. Mischaikow, M. Mrozek, V. Nanda, Discrete Morse theoretic algorithms for computing homology of complexes and maps. Found. Comput. Math. 14, 151–184 (2014)\n11. 11.\nM.T. Kaczynski, M.K. Mrozek, Computational Homology in Applied Mathematical Sciences 157 (Springer, New York, 2004)Google Scholar\n12. 12.\nJ.-P. Kahane, Sur le chaos multiplicative. Ann. Sci. Math. 9, 105–150 (1985)\n13. 13.\nE. Kolaczyk, R. Nowak, Multiscale likelihood analysis and complexity penalized estimation. Ann. Stat. 32, 500–527 (2004)\n14. 14.\nX. 
Meng, A trio of inference problems that could win you a Nobel Prize in statistics (if you help fund it), in Past, Present, Future Stat. Sci. (CRC Press, Boca Raton, 2014), pp. 537–562Google Scholar\n15. 15.\nL. Ness, Dyadic product formula representations of confidence measures and decision rules for dyadic data set samples, in MISNC SI DS 201 (ACM, New York, 2016)Google Scholar\n16. 16.\nR. Rhodes, V. Vargas, Gaussian multiplicative chaos and applications: a review. Probab. Surv. 11, 315–392 (2014)\n17. 17.\nK. Turner, S. Mukhurjee, D. Boyer, Persistent homology transform modeling shapes and surfaces. Inf. Inf. 3, 310–344 (2014)" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7658743,"math_prob":0.8406682,"size":4914,"snap":"2019-43-2019-47","text_gpt3_token_len":1264,"char_repetition_ratio":0.13034624,"word_repetition_ratio":0.00882353,"special_character_ratio":0.24664225,"punctuation_ratio":0.21818182,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9743175,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-19T11:44:11Z\",\"WARC-Record-ID\":\"<urn:uuid:7b4314cb-f225-4838-b018-ddc315b7cd90>\",\"Content-Length\":\"168160\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f3e09004-f974-45b4-8d7c-8b57188737dc>\",\"WARC-Concurrent-To\":\"<urn:uuid:6b6f0661-5085-4c2b-b34a-9e393def4b5e>\",\"WARC-IP-Address\":\"151.101.248.95\",\"WARC-Target-URI\":\"https://rd.springer.com/chapter/10.1007%2F978-3-030-11566-1_6\",\"WARC-Payload-Digest\":\"sha1:QKI4NGPKKVLFBONDK2LKUFDIKSTCPL4R\",\"WARC-Block-Digest\":\"sha1:J2OEN7AUIYVKGPTPDRXIWZUR6JNTVBTT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496670135.29_warc_CC-MAIN-20191119093744-20191119121744-00471.warc.gz\"}"}
https://math.stackexchange.com/questions/3142713/how-do-we-get-from-ln-a-ln-prn-to-a-pern-and-similar-logarithmic-equat
[ "# How do we get from $\\ln A=\\ln P+rn$ to $A=Pe^{rn}$ and similar logarithmic equations?\n\nI've been self-studying from the amazing \"Engineering Mathematics\" by Stroud and Booth, and am currently learning about algebra, particularly logarithms.\n\nThere is a question which I don't understand who they've solved. Namely, I'm supposed to express the following equations without logs:\n\n$$\\ln A = \\ln P + rn$$\n\nThe solution they provide is:\n\n$$A = Pe^{rn}$$\n\nBut I absolutely have no idea how they got to these solutions. (I managed to \"decipher\" some of the similar ones piece by piece by studying the rules of logarithms).\n\nThe basic idea behind all basic algebraic manipulations is that you are trying to isolate some variable or expression from the rest of the equation (that is, you are trying to \"solve\" for $$A$$ in this equation by putting it on one side of the equality by itself).\n\nFor this particular example (and indeed, most questions involving logarithms), you will have to know that the logarithm is \"invertible\"; just like multiplying and dividing by the same non-zero number changes nothing, taking a logarithm and then an exponential of a positive number changes nothing.\n\nSo, when we see $$\\ln(A)=\\ln(P)+rn$$, we can \"undo\" the logarithm by taking an exponential. However, what we do to one side must also be done to the other, so we are left with the following after recalling our basic rules of exponentiation: $$A=e^{\\ln(A)}=e^{\\ln(P)+rn}=e^{\\ln(P)}\\cdot e^{rn}=Pe^{rn}$$\n\nA key intuition behind logarithms is that multiplication translates to addition, i.e.\n\n$$\\ln(A*B)=\\ln(A)+\\ln(P)\\qquad$$ and $$\\qquad\\ln(A/B)=\\ln(A)-\\ln(P)$$\n\nWe can use this to solve your equation\n\n\\begin{align} \\ln(A)&=\\ln(P)+rn\\newline \\ln(A)-\\ln(P)&=rn\\newline \\ln(A/P)&=rn\\newline A/P&=e^{rn}\\newline A&=Pe^{rn} \\end{align}\n\nHint: Write $$e^{\\ln(A)}=e^{\\ln(P)+rn}$$ and use that $$e^{\\ln(x)}=x$$\n\nThere are a couple of things you must know about logarithms to understand that piece of mathematics.\n\nFirst of all, you need to know that $$\\ln{e}=1$$. It follows directly from the fact that $$e^1=e$$.\n\nSecondly, you need to know the fact that you can slide whatever you've got in front of the logarithm up and make it an exponent on the thing that's inside the logarithm: $$x\\ln{y}=\\ln{y^x}$$.\n\nAlso, when you add logarithms, you multiply whatever you have under the logarithm signs together as long as the product is a number that's greater than zero: $$\\ln{x}+\\ln{y}=\\ln{(xy)},\\ xy>0$$. (You can only take the logarithm of a positive number)\n\nAnd the last fact you're going to need is the fact that $$\\ln{x}=\\ln{y}\\implies x=y$$.\n\n\\begin{align} \\ln{A}&=\\ln{P}+rn\\cdot 1\\\\ \\ln{A}&=\\ln{P}+rn\\cdot \\ln{e}\\\\ \\ln{A}&=\\ln{P}+\\ln{e^{rn}}\\\\ \\ln{A}&=\\ln{(Pe^{rn})}\\\\ A&=Pe^{rn} \\end{align}" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9809102,"math_prob":0.9992791,"size":523,"snap":"2022-05-2022-21","text_gpt3_token_len":123,"char_repetition_ratio":0.09055877,"word_repetition_ratio":0.0,"special_character_ratio":0.23135756,"punctuation_ratio":0.095744684,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999951,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-16T11:09:09Z\",\"WARC-Record-ID\":\"<urn:uuid:2882051e-65f4-495b-a9d8-97fbaeb7b63e>\",\"Content-Length\":\"248165\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f4b3b826-2492-46e9-ba40-9fae1860f55e>\",\"WARC-Concurrent-To\":\"<urn:uuid:d86880c5-3460-44bb-b8fc-c13f0838b506>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/3142713/how-do-we-get-from-ln-a-ln-prn-to-a-pern-and-similar-logarithmic-equat\",\"WARC-Payload-Digest\":\"sha1:AYQC2GXJHYV2PJMUGFR62QD5OY2DDRXR\",\"WARC-Block-Digest\":\"sha1:ADVCI2LNAHAR3647SKPUBKCJZ43IJRVM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662510117.12_warc_CC-MAIN-20220516104933-20220516134933-00140.warc.gz\"}"}
https://www.djangospin.com/python-generating-random-elements-random-module/
[ "", null, "# Python: Generating random elements with the random module\n\nGenerating random elements with the random module in Python\n\n## Generating random elements with the random module in Python\n\nThe standard library called random helps the user to inject simulation into your Python programs. Its randint(x, y) function gives a random integer in the range marked by its arguments, inclusive of both end points. If you do not want the upper bound to be inclusive in the range, you can use the randrange(x, b) function, which even offers a step argument just like range().\n\n```>>> import random\n>>> random.randint(0, 5)\t\t\t# random number from 1-5\n3\n>>> random.randint(0, 5)\n5\n>>> random.randint(0, 5)\n3\n\n>>> random.randrange(0, 5)\t\t\t# random number from 1-4\n1\n>>> random.randrange(0, 5)\n4\n>>> random.randrange(0, 5)\n0\n\n>>> random.randrange(0, 5, 2)\t\t# random number from 1-4 & step = 2\n2\n>>> random.randrange(0, 5, 2)\n0\n>>> random.randrange(0, 5, 2)\n2\n>>> random.randrange(0, 5, 2)\n4\n```\n\nTo generate random elements other than integers, we have a method called choice(), which accepts an iterable and returns a random element from it.\n\n```>>> random.choice( ['a', 'b'] )\n'a'\n>>> random.choice( ['a', 'b'] )\n'a'\n>>> random.choice( ['a', 'b'] )\n'b'\n>>> random.choice( ['a', 'b'] )\n'a'\n>>> random.choice( ['a', 'b'] )\n'b'\n```" ]
[ null, "https://www.djangospin.com/wp-content/uploads/2015/12/py_logo.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6298764,"math_prob":0.9605993,"size":1186,"snap":"2019-51-2020-05","text_gpt3_token_len":347,"char_repetition_ratio":0.250423,"word_repetition_ratio":0.15463917,"special_character_ratio":0.35328835,"punctuation_ratio":0.18548387,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98465663,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-16T10:26:03Z\",\"WARC-Record-ID\":\"<urn:uuid:5feaee23-998c-4e63-8b30-60eec751cd0a>\",\"Content-Length\":\"50243\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0e3663bf-61cd-4a04-bff5-45da136cb1f3>\",\"WARC-Concurrent-To\":\"<urn:uuid:a4092a0f-27c0-43ce-8411-fd4ccb9e09d6>\",\"WARC-IP-Address\":\"69.163.162.247\",\"WARC-Target-URI\":\"https://www.djangospin.com/python-generating-random-elements-random-module/\",\"WARC-Payload-Digest\":\"sha1:O6M6TJZGMXRNDUIMAICSQ6DEU2DIZU3I\",\"WARC-Block-Digest\":\"sha1:EFY4SXTKWXCXWO5KY7M2B4W3YUX2LQC2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575541319511.97_warc_CC-MAIN-20191216093448-20191216121448-00439.warc.gz\"}"}
https://schoolbag.info/mathematics/idiots/34.html
[ " Points, Lines, Planes, and Angles - Basics of Geometry - The Shape of the World - Basic Math and Pre-Algebra\n\n## Basic Math and Pre-Algebra\n\nPART 3. The Shape of the World\n\nAny world tour must include seeing the sights, having a look at the shape of things. Whether it’s a natural formation like a mountain or a canyon, or a famous building or monument, by the end of a journey, your scrapbook will certainly have some examples of great geometry.\n\nThis part of our journey is devoted to looking at the geometry of the mathematical world. You’ll learn the basic vocabulary you need to describe what you see and how to identify the polygons, circles, polyhedrals, and other solids that form the architecture of the mathematical world.\n\nWe won’t abandon numbers completely as we look at these shapes, but for a while numbers will be a lesser focus. There’s still a place for calculating, but we’ll investigate relationships between shapes as well.\n\nCHAPTER 12. Basics of Geometry\n\nGeometry begins with a few undefined terms—point, line, and plane—and logically builds a system that describes many physical objects. The word “geometry” means “earth measuring,” and geometry had its beginnings in the work of dividing land up among farmers. Doing that requires lines, angles, and many different shapes.\n\nIn this chapter, you’ll lay the foundation on which to build your knowledge of geometry. Starting from those undefined terms, you’ll learn about portions of lines and combining lines to make angles. You’ll measure and classify angles, explore relationships between them, and bisect segments and angles. Parallel and perpendicular lines are the building blocks of many figures and create many angle relationships. Last of all, you’ll take many of these ideas onto the coordinate plane and see how they connect back to algebra.\n\nPoints, Lines, Planes, and Angles\n\nThe undefined terms of geometry are a curious mix of things you know and things you can only imagine. A point can be thought of as a dot, a tiny spot, or a position. That’s the familiar part. The part that requires imagination is the idea that a point doesn’t take up any space. It has no size and no dimension. You can’t measure it. You can draw a dot to represent a point, even though your dot does take up some space, and you label points with uppercase letters.\n\nYou know what a line is. You see lines all the time. But geometry asks you to use your imagination here, too. A line is a set of points—an infinite string of points—that goes on forever in both directions. It has length, in fact it has infinite length, but it has no width and no height. It’s only one point wide or high, and points don’t take up space. And yet, somehow, you can string points together to make something that has infinite length.\n\nPeople often say, “a straight line,” but in geometry that phrase is redundant. All lines are straight. If it curves or bends, it’s not a line.\n\nWhen you draw a picture to represent a line, even though your picture does have some width and can’t actually go on forever, you put arrows on the ends to show that it keeps going. You can label the line with one script letter, like line l, or by placing two points on the line and writing those two points with a line over the top, like this:", null, ".\n\nDEFINITION\n\nA point is a position in space that has no length, width, or height. A line is a set of points that has length but no width or height. A plane is a flat surface that has length and width but no thickness. 
Space is the set of all points.\n\nIs your imagination still working? Where do these points and lines live? Where would you draw a point or a line? Perhaps on a sheet of paper or the chalkboard? Those surfaces are the images that help you imagine a plane. A plane is a flat surface that has infinite length and infinite width but no height or thickness. It’s an endless sheet of paper that’s only one point deep.\n\nAnd where do the points and the lines and the planes live? In space! No, not outer space, at least not exactly. Space, in geometry, is the set of all points, everywhere.\n\nBasic Geometry Terms\n\nBefore your imagination is completely exhausted, let’s look at some parts of a line that don’t require so much imagination. A ray, sometimes called a half-line, is a portion of a line from one point, called the endpoint, going on forever in one direction. You can’t measure the length of a ray because, like a line, it goes on forever. A ray looks like an arrow, and you name it by naming its endpoint and then another point on the ray, with an arrow over the top, like this:", null, ".\n\nMore familiar, if only because it can really fit on your paper is a line segment, literally, part of a line. A line is a portion of a line between two endpoints. (Finally, something you can measure!) Name it by its endpoints, with a segment over the top, like this:", null, ".", null, "Lines contain infinitely many points, but they are named by any two points on the line. A line that contains the two points A and B can be named", null, "or", null, ". In the same way, a line segment can be named by its endpoints in either order, but for rays, the order makes a difference. The rays", null, "and", null, "are shown and are two different rays.\n\nDEFINITION\n\nA ray is a portion of a line from one endpoint, going on forever through another point.\n\nA line segment is a point of a line made up of two endpoints and all the points of the line between the endpoints.\n\nWhen you put two rays together, you create a new figure called an angle. An angle is a figure formed by two rays with a common endpoint, called the vertex. The two rays are the sides of the angle. You’ll often see angles whose sides are line segments, but you can think of those segments as parts of rays. (By the way, you can measure angles, too.)\n\nDEFINITION\n\nAn angle is two rays with a common endpoint, called the vertex. The rays are the sides of the angle.", null, "In the angle XYZ, the vertex is Y, and the sides are rays", null, "and", null, ". You can name angles by three letters, one on one side, the vertex, and one on the other side. XYZ and ZYX are both names for this angle. An angle can be named by just its vertex, for example, Y, as long as it is the only angle with that vertex.\n\nCHECK POINT\n\nDraw and label each figure described.\n\n1. Line segment", null, "2. Ray", null, "3. Angle DEF\n\n4. Rays", null, "and", null, "5. Angles PQR and RQT\n\nLength and Angle Measure\n\nWell, you found things you can measure: a line segment or an angle. They don’t go on forever. You can measure them and actually stop somewhere. So how do you do it?\n\nMeasuring just means assigning a number to something to give an indication of its size. The number depends on the ruler you’re using. Feet? Inches? Centimeters? Furlongs? They all measure length (although you don’t see furlongs used much outside of horse racing).\n\nA ruler is just a line or line segment that you’ve broken up into smaller segments, all the same size, and numbered. 
You could even use a number line, and many times we will.\n\nIf you place a ruler next to a line segment, each endpoint of the segment will line up with some number on the ruler (even if it’s one of the little fraction lines in between the whole numbers).\n\nThe numbers that correspond to the endpoints are called coordinates, and the length of the line segment is the difference between the coordinates. Technically, the length is the absolute value of the difference, because direction doesn’t matter.\n\nDEFINITION\n\nA ruler is a line or segment divided into sections of equal size, labeled with numbers, called coordinates, used to measure the length of a line segment.", null, "A number line like this can be used to measure line segments.\n\nFor example, the length of line segment AB is equal to the distance between coordinates -7 and -2, or five units.\n\nYou can also measure angles. Angles are measured by the amount of rotation from one side to the other. Picture the hands of a clock rotating, creating angles of different sizes. It is important to remember that the lengths of the sides have no effect on the measurement of the angle. The hands of the famous clock known as Big Ben are much longer than the hands of your wrist watch or alarm clock, but they all make the same angle at 9 o’clock.\n\nSo how do you put a ruler on an angle? For starters, it’s not a ruler. A ruler is a line you use to measure parts of lines. Angles aren’t parts of lines. They’re more like wedges from a circle. So to measure them you create an instrument called a protractor, a circle broken into 360 little sections, each called a degree.\n\nIn geometry, angles are measured in degrees. When you put the protractor over the angle with the center of the circle on the vertex of the angle, the sides fall on numbers, called coordinates. The measure of the angle is the absolute value of the difference of the coordinates.\n\nDEFINITION\n\nA protractor is a circle whose circumference is divided into 360 units, called degrees, which is used to measure angles.", null, "A protractor can be created using any circle, but most people are familiar with the plastic half-circle tool shown here.\n\nWhen two segments have the same length, they are called congruent segments. In symbols, you could write", null, "to say that the segment connecting A to B is the same length as the segment connecting X to Y. You could also write AB = XY to say the measurements—the lengths—are the same. With the little segment above the letters, you’re talking about the segment. Without it, you’re talking about the length, a number. Segments are congruent. Lengths are equal.\n\nDEFINITION\n\nTwo segments are congruent if they are the same length. Two angles are congruent if they have the same measure.\n\nThe same is true of angles and their measures. The symbol A refers to the actual angle, and the symbol mA denotes the measure of that angle. If you write XYZ ≅ ∠RST, you’re saying the two angles have the same measure. You could also write mXYZ = mRST. Angles are congruent; measures are equal.\n\nA full rotation all the way around the circle is 360°. Half of that, or 180°, is the measure of a straight angle. The straight angle takes its name from the fact that it looks like a line.\n\nAn angle of 90°, or a quarter rotation, is called a right angle. If one side of a right angle is on the floor, the other side stands upright. 
Angles between 0° and 90° are called acute angles.\n\nAngles whose measurement is greater than 90° but less than 180° are obtuse angles.\n\nDEFINITION\n\nA straight angle is an angle that measures 180°. A right angle is an angle that measures 90°.\n\nAn angle that measures less than 90° is an acute angle. An obtuse angle is an angle that measures more than 90° but less than 180°.\n\nYou can classify angles one by one, according to their size, but you can also label angles based on their relationship to one another. Sometimes the relationship is about position or location or what the angles look like. Other times it’s just about measurements.\n\nTwo angles whose measurements total 90° are called complementary angles. If two angles are complementary, each is the complement of the other.\n\nThe complement of an angle of 25° can be found by subtracting the known angle, 25°, from 90°. 90° - 25° = 65°, so an angle of 25° and an angle of 65° are complementary. To find the measure of the complement of an angle of 12°, subtract 90° - 12° = 78°. An angle of 12° and an angle of 78° are complementary angles.\n\nTwo angles whose measurements total 180° are called supplementary angles. If two angles are supplementary, each is the supplement of the other.\n\nDEFINITION\n\nComplementary angles are a pair of angles whose measurements total 90°.\n\nSupplementary angles are a pair of angles whose measurements total 180°.\n\nTo find the supplement of an angle of 132°, subtract 180° - 132° = 48°. The measure of the supplement of an angle of 103° is 180° - 103° = 77°.\n\nWhen two lines intersect, the lines make an X and four angles are formed. Each pair of angles across the X from one another is a pair of vertical angles. Vertical angles are always congruent; they always have the same measurement.", null, "The angles AED and CEB are vertical angles, as are the angles DEC and AEB.\n\nTwo angles that have the same vertex and share a side but don’t overlap are called adjacent angles. Two adjacent angles whose exterior sides (the ones they don’t share) make a line are called a linear pair. Linear pairs are always supplementary.\n\nDEFINITION\n\nVertical angles are a pair of angles both of which have their vertices at the point where two lines intersect and do not share a side.\n\nAdjacent angles are a pair of angles that have the same vertex and share a side but do not overlap one another.\n\nA linear pair is made up of two adjacent angles whose unshared sides form a straight angle.", null, "Angles RPQ and SPQ are adjacent. Angles ABD and CBD are both adjacent and linear.\n\nCHECK POINT\n\n6. If m∠X = 174°, then ∠X is a(n) ____________ angle.\n\n7. If m∠T = 38°, then ∠T is a(n) ____________.\n\n8. If ∠X and ∠Y are supplementary, and m∠X = 174°, then m∠Y = ____________.\n\n9. If ∠R and ∠T are complementary, and m∠T = 38°, then m∠R = ____________.\n\n10. Lines", null, "and", null, "intersect at point Y. If m∠PYR = 51°, then m∠RYA = ____________, and m∠TYA = ____________.\n\nMidpoints and Bisectors\n\nWhile you’re in the middle of all these lines and segments and rays and angles, it’s a good time to talk about middles. Because lines and rays go on forever, you can’t talk about the middle of a line or the middle of a ray. To say where the middle of something is, you have to be able to measure it. Until you can assign a length to an object, you can’t say where halfway is.\n\nA midpoint is a point on a line segment that divides it into two segments of equal length, two congruent segments. 
If M is the midpoint of", null, ", then", null, ". Each of the little pieces is the same length (AM = MB), and each of them is half as long as", null, ". Only segments have midpoints.\n\nA line or ray or segment that passes through the midpoint of a segment is a segment bisector.\n\nAngles don’t have midpoints, but they can have bisectors. An angle bisector is a ray from the vertex of the angle that divides the angle into two congruent angles.\n\nDEFINITION\n\nThe midpoint of a line segment is a point on the segment that divides it into two segments of equal length.\n\nA segment bisector is a line, ray, or segment that divides a segment into two congruent segments.\n\nAn angle bisector is a line, ray, or segment that passes through the vertex of an angle and cuts it into two angles of equal size.\n\nCHECK POINT\n\n11. M is the midpoint of segment", null, ". If PM = 3 cm, MQ = _____ cm and PQ = _____ cm.\n\n12. H is the midpoint of", null, ". If XY = 28 inches, then XH = _____ inches.\n\n13. Ray", null, "bisects ∠CAT. If m∠CAT = 86°, then m∠HAT = _____.\n\n14. If m∠AXB = 27° and m∠BXC = 27°, then _____ bisects ∠AXC.\n\n15. m∠PYQ = 13°, m∠QYR = 12°, m∠RYS = 5°, and m∠SYT = 20°. True or False:", null, "bisects ∠PYT.\n\n" ]
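The length and angle arithmetic in this excerpt (ruler coordinates, complements and supplements, and the bisector test in check point 15) is easy to verify in a few lines of code. The sketch below is added for illustration and is not part of the original text; the sample values are the ones used in its examples.

```python
# Quick numeric checks of the ideas above: segment length as the absolute
# difference of ruler coordinates, and complement/supplement arithmetic.

def segment_length(coord1, coord2):
    """Length of a segment whose endpoints sit at these ruler coordinates."""
    return abs(coord1 - coord2)

def complement(angle_deg):
    """Measure of the angle that pairs with angle_deg to total 90 degrees."""
    return 90 - angle_deg

def supplement(angle_deg):
    """Measure of the angle that pairs with angle_deg to total 180 degrees."""
    return 180 - angle_deg

print(segment_length(-7, -2))            # 5, the AB example on the number line
print(complement(25), complement(12))    # 65 78
print(supplement(132), supplement(103))  # 48 77

# Check point 15: ray YR bisects angle PYT only if it splits it into equal halves.
m_PYR = 13 + 12   # m<PYQ + m<QYR
m_RYT = 5 + 20    # m<RYS + m<SYT
print(m_PYR == m_RYT)   # True, so the bisector claim holds
```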
[ null, "https://schoolbag.info/mathematics/idiots/idiots.files/image198.jpg", null, "https://schoolbag.info/mathematics/idiots/idiots.files/image199.jpg", null, "https://schoolbag.info/mathematics/idiots/idiots.files/image200.jpg", null, "https://schoolbag.info/mathematics/idiots/idiots.files/image201.jpg", null, "https://schoolbag.info/mathematics/idiots/idiots.files/image200.jpg", null, "https://schoolbag.info/mathematics/idiots/idiots.files/image202.jpg", null, "https://schoolbag.info/mathematics/idiots/idiots.files/image199.jpg", null, "https://schoolbag.info/mathematics/idiots/idiots.files/image203.jpg", null, "https://schoolbag.info/mathematics/idiots/idiots.files/image204.jpg", null, "https://schoolbag.info/mathematics/idiots/idiots.files/image205.jpg", null, "https://schoolbag.info/mathematics/idiots/idiots.files/image206.jpg", null, "https://schoolbag.info/mathematics/idiots/idiots.files/image207.jpg", null, "https://schoolbag.info/mathematics/idiots/idiots.files/image208.jpg", null, "https://schoolbag.info/mathematics/idiots/idiots.files/image209.jpg", null, "https://schoolbag.info/mathematics/idiots/idiots.files/image210.jpg", null, "https://schoolbag.info/mathematics/idiots/idiots.files/image211.jpg", null, "https://schoolbag.info/mathematics/idiots/idiots.files/image212.jpg", null, "https://schoolbag.info/mathematics/idiots/idiots.files/image213.jpg", null, "https://schoolbag.info/mathematics/idiots/idiots.files/image214.jpg", null, "https://schoolbag.info/mathematics/idiots/idiots.files/image215.jpg", null, "https://schoolbag.info/mathematics/idiots/idiots.files/image216.jpg", null, "https://schoolbag.info/mathematics/idiots/idiots.files/image217.jpg", null, "https://schoolbag.info/mathematics/idiots/idiots.files/image218.jpg", null, "https://schoolbag.info/mathematics/idiots/idiots.files/image219.jpg", null, "https://schoolbag.info/mathematics/idiots/idiots.files/image220.jpg", null, "https://schoolbag.info/mathematics/idiots/idiots.files/image221.jpg", null, "https://schoolbag.info/mathematics/idiots/idiots.files/image222.jpg", null, "https://schoolbag.info/mathematics/idiots/idiots.files/image223.jpg", null, "https://schoolbag.info/mathematics/idiots/idiots.files/image224.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.93619764,"math_prob":0.97856146,"size":14586,"snap":"2023-40-2023-50","text_gpt3_token_len":3485,"char_repetition_ratio":0.16918118,"word_repetition_ratio":0.048363097,"special_character_ratio":0.23721376,"punctuation_ratio":0.118331715,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9913376,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58],"im_url_duplicate_count":[null,3,null,6,null,6,null,3,null,6,null,3,null,6,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-04T17:16:45Z\",\"WARC-Record-ID\":\"<urn:uuid:9d66ec96-df4d-420d-85d9-4e6e368d4ad5>\",\"Content-Length\":\"37036\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:73204ae5-afa0-498a-aa23-570df1bac679>\",\"WARC-Concurrent-To\":\"<urn:uuid:0b9975cf-d1dd-4ed8-ba98-8c4c2845572f>\",\"WARC-IP-Address\":\"31.131.26.27\",\"WARC-Target-URI\":\"https://schoolbag.info/mathematics/idiots/34.html\",\"WARC-Payload-Digest\":\"sha1:EZK6PLNSC43JCK2GV3GOFNHVPNIZ77JD\",\"WARC-Block-Digest\":\"sha1:AFY4BMYDN45ORHCPMULG7GRGAHA2ALPX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100531.77_warc_CC-MAIN-20231204151108-20231204181108-00265.warc.gz\"}"}
https://slideplayer.com/slide/6885776/
[ "", null, "# Math notes and tips.\n\n## Presentation on theme: \"Math notes and tips.\"— Presentation transcript:\n\nMath notes and tips\n\nMultiplying decimals 4.376 X2.1 ----------------\nDon’t worry about lining up the decimals. However many numbers are behind the decimals then that is how many are in your answer!\n\nDividing decimals 6.625 ÷ 0.53 0 53 /6 625 ------------------------\nMove the decimal so that the outside number is a whole number What ever you do with the decimal on the out side you need to do to the decimal on the inside\n\nM.A.D M – multiply A – add D – denominator stays the same\n\nYou have to make the bottom number (the denominator) the same in the fractions\nSubtracting Adding 1 ¼ - 1/3= The smallest common multiple for 4 and 3 is 12 So 1 ¼ = 1 3/12 and 1/3 = 4/12 You also need to make 1 3/12 in to an improper fraction(use MAD) so it is 15/12 You can now subtract them and you get 11/12 1 + 1 3 The smallest common multiple for 2 and 3 is 6 So: 1 = 3 2 = 6 1 = 2 3 = 6 You can now add them and you get 5/6\n\nMultiplying fractions\nRegular Mixed numbers 2/4 x 1/3 = Multiply the top number (numerator) and the bottom number (denominator) So multiply 2 and 1 and also 4 and 3 to get 2/12 1 2/4 x 1/3 = You must get rid of the whole number by multiplying the 1 by the 4 and adding 2 then you keep the denominator (use MAD) 4x Then multiply like regular 6/4 x 1/3 = 6/12 = = Use order of operations\n\nDividing fractions Keep Change Flip 1÷2 3 4 1 X 4 3 2\n3 4 Keep Change Flip Keep 1st fraction the same change the ÷ to x flip the 2nd fraction X Now you multiply it like regular Answer: 4 2 6 reduced 3\n\nSame sign – find the “sum” Add #’s keep the sign Ex: -6 + (-2) = -8 Ex: -6 – (+2) = -8 + (-2) Different signs – find the difference Subtract the #’s and keep the sign of the # with the higher value. Ex: -8 + (2) = -6 Ex: = 3 Ex: -6 – (-2) = -4 + (+2) Remember: when it shows subtraction – change to addition and flip the sign behind the sign\n\nMultiplying and Dividing Integers\nSame sign – ALWAYS POSITIVE Ex: -5 x -4 = 20 Ex: -36 ÷ -6 = 6 Different signs – ALWAYS NEGATIVE Ex: -3 x 4 = -12 Ex: 30 ÷ -5 = -6\n\nOrder of Operations Order of operations () X2 x / + - P E M D A S\nL X Y E U A E C A N L A U R T L S S Y E E Multiplication and division are equal and opposite operations. Work left to right! Addition and subtraction are equal and opposite operations. Work left to right! Multiply Divide Add Subtract Parentheses Exponents\n\nConverting Fractions/Decimals/Percents\n\nNumber Line -4 -3 -2 -1 0 1 2 3 4 Moving to the left\nMoving to the left #’s decrease in value Negative # look like they get larger, but they are really getting smaller in value. Moving to the right #’s increasing in value Positive # get larger,\n\n0.6 = 2/3 = 66.6% = 33 2/3% (0.66666) Repeating Decimals\n0.3 = 1/3 = 33.3% = 33 1/3% ( ) 0.6 = 2/3 = 66.6% = 33 2/3% ( )\n\nFractions to remember Fraction Decimal Percent ½ .50 50% ¼ .25 25% 1/8\n.125 12.5% 1/16 .0625 6.25% 1/3 .3333 33.3% 2/3 .6666 66.6% 1/12 .08333 8.3% 1/24 .04166 4.16% 1/5 .20 20%" ]
[ null, "https://slideplayer.com/static/blue_design/img/slide-loader4.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.83280087,"math_prob":0.9953833,"size":3122,"snap":"2022-40-2023-06","text_gpt3_token_len":1053,"char_repetition_ratio":0.13790892,"word_repetition_ratio":0.03081664,"special_character_ratio":0.38597053,"punctuation_ratio":0.07614943,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9986504,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-05T16:32:22Z\",\"WARC-Record-ID\":\"<urn:uuid:9d52a9f2-c4f3-47a0-8979-2b42b4663bf3>\",\"Content-Length\":\"177112\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a02498de-8631-4db3-ad10-765241d6db7c>\",\"WARC-Concurrent-To\":\"<urn:uuid:8663038f-8a4c-4976-927b-d5d019cbce61>\",\"WARC-IP-Address\":\"138.201.58.10\",\"WARC-Target-URI\":\"https://slideplayer.com/slide/6885776/\",\"WARC-Payload-Digest\":\"sha1:ZXUWJYSAGVXKHLNUUJKFBFP5F4PPD5LN\",\"WARC-Block-Digest\":\"sha1:VRPWYC2JJOGO7ZLURFTXFAUZK76HI5Z2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500273.30_warc_CC-MAIN-20230205161658-20230205191658-00528.warc.gz\"}"}
https://www.physicsforums.com/threads/vacuum-travel-formula-creation.466031/
[ "# Vacuum travel formula creation\n\n## Main Question or Discussion Point\n\nSay we have a maglev train travelling i a vacuum. The only thing limiting its speed is the g-force tolerance of the passengers.\n\nThe train would therefore accelerate at a certain rate until halfway, and then decelerate until it reached its destination.\n\nWhat would be the travelling time of such a train as a function of the distance?\n\nRelated Other Physics Topics News on Phys.org\n$$t=\\sqrt{\\frac{4s}{g}}$$\nwhere t is the time, s is the distance and g is the accelleration.\nCalculated using the fact that distance travelled is the area underneath a velocity-time graph.\n\nThank you.\n\nWhat acceleration value g should I use? I'm looking for an acceleration/deceleration that is hardly noticeable for the passengers, making the journey comfortable.\n\nWith an acceleration of 0.5m/s^2 you can cross the USA in 1.5h in a straight line, which is pretty good...\n\nThe chairs could turn 180 degrees when the train is going to decelerate. The top speed would be 2.7km/s.\n\nThe usual problem with trains is that they start and stop at all the intermediate stations...\n\nSay we have a maglev train travelling i a vacuum. The only thing limiting its speed is the g-force tolerance of the passengers.\n\nThe train would therefore accelerate at a certain rate until halfway, and then decelerate until it reached its destination.\n\nWhat would be the travelling time of such a train as a function of the distance?\nAt the distance x the train is accelerated until x/2 so the time is expressed as:\nx/2=gt²/2\nt=√x/g" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9513865,"math_prob":0.98001266,"size":586,"snap":"2020-24-2020-29","text_gpt3_token_len":124,"char_repetition_ratio":0.13573883,"word_repetition_ratio":0.6138614,"special_character_ratio":0.20648465,"punctuation_ratio":0.07272727,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9914889,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-05-27T22:31:12Z\",\"WARC-Record-ID\":\"<urn:uuid:969e0919-2ca2-4652-8c06-6c641a29bcb5>\",\"Content-Length\":\"78779\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:044e2d3c-059b-471f-9d2f-86fef1407e0d>\",\"WARC-Concurrent-To\":\"<urn:uuid:5af8ea99-7560-4e0d-9e86-a3657f991613>\",\"WARC-IP-Address\":\"23.111.143.85\",\"WARC-Target-URI\":\"https://www.physicsforums.com/threads/vacuum-travel-formula-creation.466031/\",\"WARC-Payload-Digest\":\"sha1:D22B4ZMB66EHFVGCJSNJID73B6EC3YM6\",\"WARC-Block-Digest\":\"sha1:S5QYA4JPQP4B6H6NMIJBRDOW4MNPPHJH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347396163.18_warc_CC-MAIN-20200527204212-20200527234212-00304.warc.gz\"}"}
https://tools.carboncollective.co/compound-interest/59209-at-6-percent-in-20-years/
[ "# What is the compound interest on $59209 at 6% over 20 years? If you want to invest$59,209 over 20 years, and you expect it will earn 6.00% in annual interest, your investment will have grown to become $189,891.28. If you're on this page, you probably already know what compound interest is and how a sum of money can grow at a faster rate each year, as the interest is added to the original principal amount and recalculated for each period. The actual rate that$59,209 compounds at is dependent on the frequency of the compounding periods. In this article, to keep things simple, we are using an annual compounding period of 20 years, but it could be monthly, weekly, daily, or even continuously compounding.\n\nThe formula for calculating compound interest is:\n\n$$A = P(1 + \\dfrac{r}{n})^{nt}$$\n\n• A is the amount of money after the compounding periods\n• P is the principal amount\n• r is the annual interest rate\n• n is the number of compounding periods per year\n• t is the number of years\n\nWe can now input the variables for the formula to confirm that it does work as expected and calculates the correct amount of compound interest.\n\nFor this formula, we need to convert the rate, 6.00% into a decimal, which would be 0.06.\n\n$$A = 59209(1 + \\dfrac{ 0.06 }{1})^{ 20}$$\n\nAs you can see, we are ignoring the n when calculating this to the power of 20 because our example is for annual compounding, or one period per year, so 20 × 1 = 20.\n\n## How the compound interest on $59,209 grows over time The interest from previous periods is added to the principal amount, and this grows the sum a rate that always accelerating. The table below shows how the amount increases over the 20 years it is compounding: Start Balance Interest End Balance 1$59,209.00 $3,552.54$62,761.54\n2 $62,761.54$3,765.69 $66,527.23 3$66,527.23 $3,991.63$70,518.87\n4 $70,518.87$4,231.13 $74,750.00 5$74,750.00 $4,485.00$79,235.00\n6 $79,235.00$4,754.10 $83,989.10 7$83,989.10 $5,039.35$89,028.44\n8 $89,028.44$5,341.71 $94,370.15 9$94,370.15 $5,662.21$100,032.36\n10 $100,032.36$6,001.94 $106,034.30 11$106,034.30 $6,362.06$112,396.36\n12 $112,396.36$6,743.78 $119,140.14 13$119,140.14 $7,148.41$126,288.55\n14 $126,288.55$7,577.31 $133,865.86 15$133,865.86 $8,031.95$141,897.81\n16 $141,897.81$8,513.87 $150,411.68 17$150,411.68 $9,024.70$159,436.38\n18 $159,436.38$9,566.18 $169,002.57 19$169,002.57 $10,140.15$179,142.72\n20 $179,142.72$10,748.56 $189,891.28 We can also display this data on a chart to show you how the compounding increases with each compounding period. As you can see if you view the compounding chart for$59,209 at 6.00% over a long enough period of time, the rate at which it grows increases over time as the interest is added to the balance and new interest calculated from that figure.\n\n## How long would it take to double $59,209 at 6% interest? Another commonly asked question about compounding interest would be to calculate how long it would take to double your investment of$59,209 assuming an interest rate of 6.00%.\n\nWe can calculate this very approximately using the Rule of 72.\n\nThe formula for this is very simple:\n\n$$Years = \\dfrac{72}{Interest\\: Rate}$$\n\nBy dividing 72 by the interest rate given, we can calculate the rough number of years it would take to double the money. Let's add our rate to the formula and calculate this:\n\n$$Years = \\dfrac{72}{ 6 } = 12$$\n\nUsing this, we know that any amount we invest at 6.00% would double itself in approximately 12 years. 
So $59,209 would be worth $118,418 in ~12 years.\n\nWe can also calculate the exact length of time it will take to double an amount at 6.00% using a slightly more complex formula:\n\n$$Years = \\dfrac{log(2)}{log(1 + 0.06)} = 11.9\\; years$$\n\nHere, we use the decimal format of the interest rate, and use the logarithm math function to calculate the exact value.\n\nAs you can see, the exact calculation is very close to the Rule of 72 calculation, which is much easier to remember.\n\nHopefully, this article has helped you to understand the compound interest you might achieve from investing $59,209 at 6.00% over a 20-year investment period." ]
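The page's arithmetic is straightforward to reproduce. Below is an illustrative sketch (not part of the original article) that recomputes the final balance, the Rule of 72 estimate, and the exact doubling time using the formulas given above.

```python
import math

def compound(principal, rate, years, periods_per_year=1):
    """A = P * (1 + r/n)^(n*t); the article's case is annual compounding, n = 1."""
    return principal * (1 + rate / periods_per_year) ** (periods_per_year * years)

def rule_of_72(rate_percent):
    """Approximate number of years needed to double an investment."""
    return 72 / rate_percent

def exact_doubling_time(rate):
    """Exact doubling time in years: log(2) / log(1 + r)."""
    return math.log(2) / math.log(1 + rate)

print(round(compound(59209, 0.06, 20), 2))    # ~189891.28, the figure quoted above
print(rule_of_72(6))                          # 12.0
print(round(exact_doubling_time(0.06), 1))    # 11.9
```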
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.91771835,"math_prob":0.9987763,"size":4132,"snap":"2023-40-2023-50","text_gpt3_token_len":1306,"char_repetition_ratio":0.13880815,"word_repetition_ratio":0.01490313,"special_character_ratio":0.410697,"punctuation_ratio":0.19242273,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999114,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-01T16:13:01Z\",\"WARC-Record-ID\":\"<urn:uuid:bcdd1982-1d0e-41e9-8e8b-6efab991d5cb>\",\"Content-Length\":\"28083\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3c93bb72-e94a-4926-9cdc-95c1db5a93da>\",\"WARC-Concurrent-To\":\"<urn:uuid:045e1100-49a9-432c-b7b6-3415bd3f016a>\",\"WARC-IP-Address\":\"138.197.3.89\",\"WARC-Target-URI\":\"https://tools.carboncollective.co/compound-interest/59209-at-6-percent-in-20-years/\",\"WARC-Payload-Digest\":\"sha1:4CJPEPGU52N6OQXHW5ATZ4KL4F7B326C\",\"WARC-Block-Digest\":\"sha1:XOJE5OWVYMIWNPP2NLVPSVCZZHZILCBM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510903.85_warc_CC-MAIN-20231001141548-20231001171548-00149.warc.gz\"}"}
https://help.nrl.com/exportword?pageId=28672493
[ "Date: Tue, 18 May 2021 14:19:23 +1000 (AEST) Message-ID: <2059320466.1327.1621311563580@rlc-syd-cfl01.rlc.local> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary=\"----=_Part_1326_577914541.1621311563579\" ------=_Part_1326_577914541.1621311563579 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location: file:///C:/exported.html I am trying to play Fantasy and am experiencing issues with my a= ccount setup...\n\n# I am trying to play Fantasy and am experiencing issues with my acco= unt setup...\n\n=20\n=20\n=20\n=20\n\n=20\n=20 =20\n=20\n=20\n\nBefore you can set up your Fantasy team, you need to sign up for an NRL = Account and verify the email that you used to sign up.\n\nYou can sign up from the Fantasy site, the NRL site or any Club/State si= te.  You can sign up to an NRL Account anytime.", null, "=20\n=20\n=20\n=20 =20\nFinal step to co= mplete your Fantasy Registration\n=20\n=20\n\nEnter the Country you are in. If Australia is selected,= please tell us your State.\n\nAccept the Terms and Conditions by placing a tick in th= e box - if you want to read them click on Terms and Conditions and they wil= l be displayed to you.\n\nLet us know if you would like to hear from the great folks at Youi by pl= acing a tick in the box.\n\nClick Play Now.", null, "=20\n=20\n=20\n=20 =20\n=20\n=20\n\nTell us the game you want to play.\n\nClick Play Fantasy or Play Draft to be= gin setting up your team for this year.", null, "=20\n=20\n=20\n\n=20\n=20\n=20\n=20\n\n=20 =20 =20\n• =20\n• =20\n=20 Page:= =20\n=20\n• =20\n• =20\n=20 Page:= =20\n=20\n• =20\n• =20\n=20 Page:= =20\n=20\n• =20\n• =20\n=20 Page:= =20\n=20\n• =20\n• =20\n=20 Page:= =20\n=20\n• =20\n• =20\n=20 Page:= =20\n=20\n• =20\n• =20\n=20 Page:= =20\n=20\n• =20\n• =20\n=20 Page:= =20\n=20\n• =20\n• =20\n=20 Page:= =20\n=20\n• =20\n=20\n\n=20\n=20\nMore Insight\n=20\n\n=20\n• =20\n=20 Page:= =20\n=20\n• =20\n• =20\n=20 Page:= =20\n=20\n• =20\n• =20\n=20 Page:= =20\n=20\n• =20\n• =20\n=20 Page:= =20\n=20\n• =20\n• =20\n=20 Page:= =20\n=20\n• =20\n• =20\n=20 Page:= =20\n=20\n• =20\n• =20\n=20 Page:= =20\n=20\n• =20\n• =20\n=20 Page:= =20\n=20\n• =20\n• =20\n=20 Page:= =20\n=20\n• =20\n• =20\n=20 Page:= =20\n=20\n• =20\n=20\n\n=20\n\n=20\n=20\n=20" ]
[ null, "https://help.nrl.com/3D\"2a95aaf79be0c0f6dded4085a45a29c0\"", null, "https://help.nrl.com/3D\"70d1=", null, "https://help.nrl.com/3D\"43f8=", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5863046,"math_prob":0.7119014,"size":461,"snap":"2021-21-2021-25","text_gpt3_token_len":148,"char_repetition_ratio":0.12691467,"word_repetition_ratio":0.0,"special_character_ratio":0.44251627,"punctuation_ratio":0.2840909,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99988496,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-18T04:19:25Z\",\"WARC-Record-ID\":\"<urn:uuid:e273d0f6-61db-4808-bd4e-fe009da3b2ab>\",\"Content-Length\":\"478523\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e4b19f43-c537-4f59-ab83-b073045d5b22>\",\"WARC-Concurrent-To\":\"<urn:uuid:49e401f6-9822-40b5-b539-08ddfdb93e79>\",\"WARC-IP-Address\":\"203.42.16.85\",\"WARC-Target-URI\":\"https://help.nrl.com/exportword?pageId=28672493\",\"WARC-Payload-Digest\":\"sha1:WLD4LXP3Y7YPKCGXYU5TPFYF6L6G4WLM\",\"WARC-Block-Digest\":\"sha1:YAGWECHKKWOW6LZA4D6ONJZCOPJ4M55D\",\"WARC-Identified-Payload-Type\":\"message/rfc822\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243989820.78_warc_CC-MAIN-20210518033148-20210518063148-00454.warc.gz\"}"}
https://sh-tsang.medium.com/review-adanorm-adaptive-normalization-73865be8d522?source=read_next_recirc---------3---------------------6fd9c622_99ce_433f_860a_142b608646ff-------
[ "## Improve LayerNorm (Layer Normalization)\n\nUnderstanding and Improving Layer Normalization\n2019 NeurIPS, Over 50 Citations (Sik-Ho Tsang @ Medium)\nMachine Translation, Language Model, Image Classification, Layer Normalization\n\n• By understanding LayerNorm (Layer Normalization), a step further is made to improve LayerNorm as AdaNorm (Adaptive Normalization).\n\n# Outline\n\n1. LayerNorm\n2. LayerNorm-simple\n3. DetachNorm\n\n# 1. LayerNorm\n\n• Let x=(x1, x2, …, xH) be the vector representation of an input of size H to normalization layers. LayerNorm re-centers and re-scales input x as:\n• where h is the output of a LayerNorm layer. ⊙ is a dot production operation. μ\u0016 and σ are the mean and standard deviation of input. Bias b and gain g are parameters with the same dimension H.\n• LayerNorm is a default setting in Transformer and Transformer-XL.\n\n# 2. LayerNorm-simple\n\n• For machine translation, Transformer is re-implemented.\n• For language model, 12-layer Transformer-XL is used.\n• For text classification, Transformer with a 4-layer encoder is used.\n• For image classification, 3-layer CNN is used.\n• For parsing, MLP-based parser is used.\n\nThe bias and gain do NOT work on six out of eight datasets.\n\n# 3. DetachNorm\n\n• Detaching derivatives means treating the mean and variance as changeable constants, rather than variables, which do not require gradients in backward propagation.\n• The function θ(.) can be seen as a special copy function, which copies the values of μ\u0016 and σ into constants ^μ\u0016\u0016 and ^σ\u001b.\n\nIn all, DetachNorm keeps the same forward normalization fact as LayerNorm does, but cuts offs the derivatives of the mean and variance.\n\n• DetachNorm performs worse than “w/o Norm”, showing that forward normalization has little to do with the success of LayerNorm.\n\nDetachNorm performs worse than LayerNorm-simple on six datasets. The derivatives of the mean and variance bring higher improvements than forward normalization does.\n\n• In AdaNorm, Φ(y), a function with respect to input x, is used to replace the bias and gain with the following equation:\n\nUnlike the bias and gain being fixed in LayerNorm, Φ(y) can adaptively adjust scaling weights based on inputs.\n\n• To keep the training stability, some constraints are made. (1) First, Φ(y) must be differentiable. (2) Second, the average scaling weight is expected to be fixed, namely the average of Φ(y) is a constant C where C > 0. (3) Third, it is expected that the average of z is bounded, which can avoid the problem of exploding loss.\n• By considering above constraints and based on Chebyshev’s Inequality, finally, Φ(y) is:\n• (Please feel free to read the paper if interested for this proof.)\n• Given an input vector x, the complete calculation process of AdaNorm is:\n• where C is a hyper-parameter, k=1/10.\n• In implementation, the gradient of C(1-ky) is detached and it is only treated as a changeable constant.\n• AdaNorm outperforms LayerNorm on seven datasets, with 0.2 BLEU on En-De, 0.1 BLEU on De-En, 0.2 BLEU on En-Vi, 0.29 ACC on RT, 1.31 ACC on SST, 0.22 ACC on MNIST, and 0.11 UAC on PTB.\n\nUnlike LayerNorm-simple only performing well on bigger models, AdaNorm achieves more balanced results.\n\n• The above figure shows the loss curves of LayerNorm and AdaNorm on the validation set of En-Vi, PTB, and De-En.\n• Compared to AdaNorm, LayerNorm has lower training loss but higher validation loss. Lower validation loss proves that AdaNorm has better convergence." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.84101903,"math_prob":0.9237179,"size":3667,"snap":"2023-40-2023-50","text_gpt3_token_len":1001,"char_repetition_ratio":0.12012012,"word_repetition_ratio":0.0070546735,"special_character_ratio":0.25579494,"punctuation_ratio":0.12992701,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9807799,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-02T09:09:31Z\",\"WARC-Record-ID\":\"<urn:uuid:d4126cfc-e975-4bb9-81d4-36f40f377fea>\",\"Content-Length\":\"270954\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:952efa81-22d8-48ab-8906-d72ad343aefa>\",\"WARC-Concurrent-To\":\"<urn:uuid:6614eb76-141e-4447-a1e8-2e791c15bc00>\",\"WARC-IP-Address\":\"162.159.152.4\",\"WARC-Target-URI\":\"https://sh-tsang.medium.com/review-adanorm-adaptive-normalization-73865be8d522?source=read_next_recirc---------3---------------------6fd9c622_99ce_433f_860a_142b608646ff-------\",\"WARC-Payload-Digest\":\"sha1:4ARTXBPXLZOWUCY637RWC375QNGVDT2D\",\"WARC-Block-Digest\":\"sha1:LQGANFUCGMHYDICBDATGDPD64U3UZ2BC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510983.45_warc_CC-MAIN-20231002064957-20231002094957-00890.warc.gz\"}"}
https://graph-tool.skewed.de/static/doc/centrality.html
[ "graph_tool.centrality - Centrality measures¶\n\nThis module includes centrality-related algorithms.\n\nSummary¶\n\n pagerank Calculate the PageRank of each vertex. betweenness Calculate the betweenness centrality for each vertex and edge. central_point_dominance Calculate the central point dominance of the graph, given the betweenness centrality of each vertex. closeness Calculate the closeness centrality for each vertex. eigenvector Calculate the eigenvector centrality of each vertex in the graph, as well as the largest eigenvalue. katz Calculate the Katz centrality of each vertex in the graph. hits Calculate the authority and hub centralities of each vertex in the graph. eigentrust Calculate the eigentrust centrality of each vertex in the graph. trust_transitivity Calculate the pervasive trust transitivity between chosen (or all) vertices in the graph.\n\nContents¶\n\ngraph_tool.centrality.pagerank(g, damping=0.85, pers=None, weight=None, prop=None, epsilon=1e-06, max_iter=None, ret_iter=False)[source]\n\nCalculate the PageRank of each vertex.\n\nParameters\ngGraph\n\nGraph to be used.\n\ndampingfloat, optional (default: 0.85)\n\nDamping factor.\n\npersVertexPropertyMap, optional (default: None)\n\nPersonalization vector. If omitted, a constant value of $$1/N$$ will be used.\n\nweightEdgePropertyMap, optional (default: None)\n\nEdge weights. If omitted, a constant value of 1 will be used.\n\npropVertexPropertyMap, optional (default: None)\n\nVertex property map to store the PageRank values. If supplied, it will be used uninitialized.\n\nepsilonfloat, optional (default: 1e-6)\n\nConvergence condition. The iteration will stop if the total delta of all vertices are below this value.\n\nmax_iterint, optional (default: None)\n\nIf supplied, this will limit the total number of iterations.\n\nret_iterbool, optional (default: False)\n\nIf true, the total number of iterations is also returned.\n\nReturns\npagerankVertexPropertyMap\n\nA vertex property map containing the PageRank values.\n\nbetweenness\n\nbetweenness centrality\n\neigentrust\n\neigentrust centrality\n\neigenvector\n\neigenvector centrality\n\nhits\n\nauthority and hub centralities\n\ntrust_transitivity\n\npervasive trust transitivity\n\nNotes\n\nThe value of PageRank [pagerank-wikipedia] of vertex v, $$PR(v)$$, is given iteratively by the relation:\n\n$PR(v) = \\frac{1-d}{N} + d \\sum_{u \\in \\Gamma^{-}(v)} \\frac{PR (u)}{d^{+}(u)}$\n\nwhere $$\\Gamma^{-}(v)$$ are the in-neighbors of v, $$d^{+}(u)$$ is the out-degree of u, and d is a damping factor.\n\nIf a personalization property $$p(v)$$ is given, the definition becomes:\n\n$PR(v) = (1-d)p(v) + d \\sum_{u \\in \\Gamma^{-}(v)} \\frac{PR (u)}{d^{+}(u)}$\n\nIf edge weights are also given, the equation is then generalized to:\n\n$PR(v) = (1-d)p(v) + d \\sum_{u \\in \\Gamma^{-}(v)} \\frac{PR (u) w_{u\\to v}}{d^{+}(u)}$\n\nwhere $$d^{+}(u)=\\sum_{y}A_{u,y}w_{u\\to y}$$ is redefined to be the sum of the weights of the out-going edges from u.\n\nIf a node has out-degree zero, it is assumed to connect to every other node with a weight proportional to $$p(v)$$ or a constant if no personalization is given.\n\nThe implemented algorithm progressively iterates the above equations, until it no longer changes, according to the parameter epsilon. It has a topology-dependent running time.\n\nIf enabled during compilation, this algorithm runs in parallel.\n\nReferences\n\npagerank-wikipedia(1,2)\n\nhttp://en.wikipedia.org/wiki/Pagerank\n\nlawrence-pagerank-1998\n\nP. Lawrence, B. Sergey, M. 
Rajeev, W. Terry, “The pagerank citation ranking: Bringing order to the web”, Technical report, Stanford University, 1998\n\nLangville-survey-2005\n\nA. N. Langville, C. D. Meyer, “A Survey of Eigenvector Methods for Web Information Retrieval”, SIAM Review, vol. 47, no. 1, pp. 135-161, 2005, DOI: 10.1137/S0036144503424786 [sci-hub, @tor]\n\nL. A. Adamic and N. Glance, “The political blogosphere and the 2004 US Election”, in Proceedings of the WWW-2005 Workshop on the Weblogging Ecosystem (2005). DOI: 10.1145/1134271.1134277 [sci-hub, @tor]\n\nExamples\n\n>>> g = gt.collection.data[\"polblogs\"]\n>>> g = gt.GraphView(g, vfilt=gt.label_largest_component(g))\n>>> pr = gt.pagerank(g)\n>>> gt.graph_draw(g, pos=g.vp[\"pos\"], vertex_fill_color=pr,\n... vertex_size=gt.prop_to_size(pr, mi=5, ma=15),\n... vorder=pr, vcmap=matplotlib.cm.gist_heat,\n... output=\"polblogs_pr.pdf\")\n<...>", null, "PageRank values of the a political blogs network of [adamic-polblogs].\n\nNow with a personalization vector, and edge weights:\n\n>>> d = g.degree_property_map(\"total\")\n>>> periphery = d.a <= 2\n>>> p = g.new_vertex_property(\"double\")\n>>> p.a[periphery] = 100\n>>> pr = gt.pagerank(g, pers=p)\n>>> gt.graph_draw(g, pos=g.vp[\"pos\"], vertex_fill_color=pr,\n... vertex_size=gt.prop_to_size(pr, mi=5, ma=15),\n... vorder=pr, vcmap=matplotlib.cm.gist_heat,\n... output=\"polblogs_pr_pers.pdf\")\n<...>", null, "Personalized PageRank values of the a political blogs network of [adamic-polblogs], where vertices with very low degree are given artificially high scores.\n\ngraph_tool.centrality.betweenness(g, pivots=None, vprop=None, eprop=None, weight=None, norm=True)[source]\n\nCalculate the betweenness centrality for each vertex and edge.\n\nParameters\ngGraph\n\nGraph to be used.\n\npivotslist or ndarray, optional (default: None)\n\nIf provided, the betweenness will be estimated using the vertices in this list as pivots. If the list contains all nodes (the default) the algorithm will be exact, and if the vertices are randomly chosen the result will be an unbiased estimator.\n\nvpropVertexPropertyMap, optional (default: None)\n\nVertex property map to store the vertex betweenness values.\n\nepropEdgePropertyMap, optional (default: None)\n\nEdge property map to store the edge betweenness values.\n\nweightEdgePropertyMap, optional (default: None)\n\nEdge property map corresponding to the weight value of each edge.\n\nnormbool, optional (default: True)\n\nWhether or not the betweenness values should be normalized.\n\nReturns\nvertex_betweennessA vertex property map with the vertex betweenness values.\nedge_betweennessAn edge property map with the edge betweenness values.\n\ncentral_point_dominance\n\ncentral point dominance of the graph\n\npagerank\n\nPageRank centrality\n\neigentrust\n\neigentrust centrality\n\neigenvector\n\neigenvector centrality\n\nhits\n\nauthority and hub centralities\n\ntrust_transitivity\n\npervasive trust transitivity\n\nNotes\n\nBetweenness centrality of a vertex $$C_B(v)$$ is defined as,\n\n$C_B(v)= \\sum_{s \\neq v \\neq t \\in V \\atop s \\neq t} \\frac{\\sigma_{st}(v)}{\\sigma_{st}}$\n\nwhere $$\\sigma_{st}$$ is the number of shortest paths from s to t, and $$\\sigma_{st}(v)$$ is the number of shortest paths from s to t that pass through a vertex $$v$$. 
This may be normalised by dividing through the number of pairs of vertices not including v, which is $$(n-1)(n-2)/2$$, for undirected graphs, or $$(n-1)(n-2)$$ for directed ones.\n\nThe algorithm used here is defined in [brandes-faster-2001], and has a complexity of $$O(VE)$$ for unweighted graphs and $$O(VE + V(V+E)\\log V)$$ for weighted graphs. The space complexity is $$O(VE)$$.\n\nIf the pivots parameter is given, the complexity will be instead $$O(PE)$$ for unweighted graphs and $$O(PE + P(V+E)\\log V)$$ for weighted graphs, where $$P$$ is the number of pivot vertices.\n\nIf enabled during compilation, this algorithm runs in parallel.\n\nReferences\n\nbetweenness-wikipedia\n\nhttp://en.wikipedia.org/wiki/Centrality#Betweenness_centrality\n\nbrandes-faster-2001(1,2)\n\nU. Brandes, “A faster algorithm for betweenness centrality”, Journal of Mathematical Sociology, 2001, DOI: 10.1080/0022250X.2001.9990249 [sci-hub, @tor]\n\nbrandes-centrality-2007\n\nU. Brandes, C. Pich, “Centrality estimation in large networks”, Int. J. Bifurcation Chaos 17, 2303 (2007). DOI: 10.1142/S0218127407018403 [sci-hub, @tor]\n\nL. A. Adamic and N. Glance, “The political blogosphere and the 2004 US Election”, in Proceedings of the WWW-2005 Workshop on the Weblogging Ecosystem (2005). DOI: 10.1145/1134271.1134277 [sci-hub, @tor]\n\nExamples\n\n>>> g = gt.collection.data[\"polblogs\"]\n>>> g = gt.GraphView(g, vfilt=gt.label_largest_component(g))\n>>> vp, ep = gt.betweenness(g)\n>>> gt.graph_draw(g, pos=g.vp[\"pos\"], vertex_fill_color=vp,\n... vertex_size=gt.prop_to_size(vp, mi=5, ma=15),\n... edge_pen_width=gt.prop_to_size(ep, mi=0.5, ma=5),\n... vcmap=matplotlib.cm.gist_heat,\n... vorder=vp, output=\"polblogs_betweenness.pdf\")\n<...>", null, "Betweenness values of the a political blogs network of [adamic-polblogs].\n\ngraph_tool.centrality.closeness(g, weight=None, source=None, vprop=None, norm=True, harmonic=False)[source]\n\nCalculate the closeness centrality for each vertex.\n\nParameters\ngGraph\n\nGraph to be used.\n\nweightEdgePropertyMap, optional (default: None)\n\nEdge property map corresponding to the weight value of each edge.\n\nsourceVertex, optional (default: None)\n\nIf specified, the centrality is computed for this vertex alone.\n\nvpropVertexPropertyMap, optional (default: None)\n\nVertex property map to store the vertex centrality values.\n\nnormbool, optional (default: True)\n\nWhether or not the centrality values should be normalized.\n\nharmonicbool, optional (default: False)\n\nIf true, the sum of the inverse of the distances will be computed, instead of the inverse of the sum.\n\nReturns\nvertex_closenessVertexPropertyMap\n\nA vertex property map with the vertex closeness values.\n\ncentral_point_dominance\n\ncentral point dominance of the graph\n\npagerank\n\nPageRank centrality\n\neigentrust\n\neigentrust centrality\n\neigenvector\n\neigenvector centrality\n\nhits\n\nauthority and hub centralities\n\ntrust_transitivity\n\npervasive trust transitivity\n\nNotes\n\nThe closeness centrality of a vertex $$i$$ is defined as,\n\n$c_i = \\frac{1}{\\sum_j d_{ij}}$\n\nwhere $$d_{ij}$$ is the (possibly directed and/or weighted) distance from $$i$$ to $$j$$. 
In case there is no path between the two vertices, here the distance is taken to be zero.\n\nIf harmonic == True, the definition becomes\n\n$c_i = \\sum_j\\frac{1}{d_{ij}},$\n\nbut now, in case there is no path between the two vertices, we take $$d_{ij} \\to\\infty$$ such that $$1/d_{ij}=0$$.\n\nIf norm == True, the values of $$c_i$$ are normalized by $$n_i-1$$ where $$n_i$$ is the size of the (out-) component of $$i$$. If harmonic == True, they are instead simply normalized by $$V-1$$.\n\nThe algorithm complexity of $$O(V(V + E))$$ for unweighted graphs and $$O(V(V+E) \\log V)$$ for weighted graphs. If the option source is specified, this drops to $$O(V + E)$$ and $$O((V+E)\\log V)$$ respectively.\n\nIf enabled during compilation, this algorithm runs in parallel.\n\nReferences\n\ncloseness-wikipedia\n\nhttps://en.wikipedia.org/wiki/Closeness_centrality\n\nopsahl-node-2010\n\nOpsahl, T., Agneessens, F., Skvoretz, J., “Node centrality in weighted networks: Generalizing degree and shortest paths”. Social Networks 32, 245-251, 2010 DOI: 10.1016/j.socnet.2010.03.006 [sci-hub, @tor]\n\nL. A. Adamic and N. Glance, “The political blogosphere and the 2004 US Election”, in Proceedings of the WWW-2005 Workshop on the Weblogging Ecosystem (2005). DOI: 10.1145/1134271.1134277 [sci-hub, @tor]\n\nExamples\n\n>>> g = gt.collection.data[\"polblogs\"]\n>>> g = gt.GraphView(g, vfilt=gt.label_largest_component(g))\n>>> c = gt.closeness(g)\n>>> gt.graph_draw(g, pos=g.vp[\"pos\"], vertex_fill_color=c,\n... vertex_size=gt.prop_to_size(c, mi=5, ma=15),\n... vcmap=matplotlib.cm.gist_heat,\n... vorder=c, output=\"polblogs_closeness.pdf\")\n<...>", null, "Closeness values of the a political blogs network of [adamic-polblogs].\n\ngraph_tool.centrality.central_point_dominance(g, betweenness)[source]\n\nCalculate the central point dominance of the graph, given the betweenness centrality of each vertex.\n\nParameters\ngGraph\n\nGraph to be used.\n\nbetweennessVertexPropertyMap\n\nVertex property map with the betweenness centrality values. The values must be normalized.\n\nReturns\ncpfloat\n\nThe central point dominance.\n\nbetweenness\n\nbetweenness centrality\n\nNotes\n\nLet $$v^*$$ be the vertex with the largest relative betweenness centrality; then, the central point dominance [freeman-set-1977] is defined as:\n\n$C'_B = \\frac{1}{|V|-1} \\sum_{v} C_B(v^*) - C_B(v)$\n\nwhere $$C_B(v)$$ is the normalized betweenness centrality of vertex v. The value of $$C_B$$ lies in the range [0,1].\n\nThe algorithm has a complexity of $$O(V)$$.\n\nReferences\n\nfreeman-set-1977(1,2)\n\nLinton C. Freeman, “A Set of Measures of Centrality Based on Betweenness”, Sociometry, Vol. 40, No. 1, pp. 35-41, 1977, DOI: 10.2307/3033543 [sci-hub, @tor]\n\nExamples\n\n>>> g = gt.collection.data[\"polblogs\"]\n>>> g = gt.GraphView(g, vfilt=gt.label_largest_component(g))\n>>> vp, ep = gt.betweenness(g)\n>>> print(gt.central_point_dominance(g, vp))\n0.105683...\ngraph_tool.centrality.eigenvector(g, weight=None, vprop=None, epsilon=1e-06, max_iter=None)[source]\n\nCalculate the eigenvector centrality of each vertex in the graph, as well as the largest eigenvalue.\n\nParameters\ngGraph\n\nGraph to be used.\n\nweightEdgePropertyMap (optional, default: None)\n\nEdge property map with the edge weights.\n\nvpropVertexPropertyMap, optional (default: None)\n\nVertex property map where the values of eigenvector must be stored. If provided, it will be used uninitialized.\n\nepsilonfloat, optional (default: 1e-6)\n\nConvergence condition. 
The iteration will stop if the total delta of all vertices are below this value.\n\nmax_iterint, optional (default: None)\n\nIf supplied, this will limit the total number of iterations.\n\nReturns\neigenvaluefloat\n\nThe largest eigenvalue of the (weighted) adjacency matrix.\n\neigenvectorVertexPropertyMap\n\nA vertex property map containing the eigenvector values.\n\nbetweenness\n\nbetweenness centrality\n\npagerank\n\nPageRank centrality\n\nhits\n\nauthority and hub centralities\n\ntrust_transitivity\n\npervasive trust transitivity\n\nNotes\n\nThe eigenvector centrality $$\\mathbf{x}$$ is the eigenvector of the (weighted) adjacency matrix with the largest eigenvalue $$\\lambda$$, i.e. it is the solution of\n\n$\\mathbf{A}\\mathbf{x} = \\lambda\\mathbf{x},$\n\nwhere $$\\mathbf{A}$$ is the (weighted) adjacency matrix and $$\\lambda$$ is the largest eigenvalue.\n\nThe algorithm uses the power method which has a topology-dependent complexity of $$O\\left(N\\times\\frac{-\\log\\epsilon}{\\log|\\lambda_1/\\lambda_2|}\\right)$$, where $$N$$ is the number of vertices, $$\\epsilon$$ is the epsilon parameter, and $$\\lambda_1$$ and $$\\lambda_2$$ are the largest and second largest eigenvalues of the (weighted) adjacency matrix, respectively.\n\nIf enabled during compilation, this algorithm runs in parallel.\n\nReferences\n\neigenvector-centrality\n\nhttp://en.wikipedia.org/wiki/Centrality#Eigenvector_centrality\n\npower-method\n\nhttp://en.wikipedia.org/wiki/Power_iteration\n\nlangville-survey-2005\n\nA. N. Langville, C. D. Meyer, “A Survey of Eigenvector Methods for Web Information Retrieval”, SIAM Review, vol. 47, no. 1, pp. 135-161, 2005, DOI: 10.1137/S0036144503424786 [sci-hub, @tor]\n\nL. A. Adamic and N. Glance, “The political blogosphere and the 2004 US Election”, in Proceedings of the WWW-2005 Workshop on the Weblogging Ecosystem (2005). DOI: 10.1145/1134271.1134277 [sci-hub, @tor]\n\nExamples\n\n>>> g = gt.collection.data[\"polblogs\"]\n>>> g = gt.GraphView(g, vfilt=gt.label_largest_component(g))\n>>> w = g.new_edge_property(\"double\")\n>>> w.a = np.random.random(len(w.a)) * 42\n>>> ee, x = gt.eigenvector(g, w)\n>>> gt.graph_draw(g, pos=g.vp[\"pos\"], vertex_fill_color=x,\n... vertex_size=gt.prop_to_size(x, mi=5, ma=15),\n... vcmap=matplotlib.cm.gist_heat,\n... vorder=x, output=\"polblogs_eigenvector.pdf\")\n<...>", null, "Eigenvector values of the a political blogs network of [adamic-polblogs], with random weights attributed to the edges.\n\ngraph_tool.centrality.katz(g, alpha=0.01, beta=None, weight=None, vprop=None, epsilon=1e-06, max_iter=None, norm=True)[source]\n\nCalculate the Katz centrality of each vertex in the graph.\n\nParameters\ngGraph\n\nGraph to be used.\n\nweightEdgePropertyMap (optional, default: None)\n\nEdge property map with the edge weights.\n\nalphafloat, optional (default: 0.01)\n\nFree parameter $$\\alpha$$. This must be smaller than the inverse of the largest eigenvalue of the adjacency matrix.\n\nbetaVertexPropertyMap, optional (default: None)\n\nVertex property map where the local personalization values. If not provided, the global value of 1 will be used.\n\nvpropVertexPropertyMap, optional (default: None)\n\nVertex property map where the values of eigenvector must be stored. If provided, it will be used uninitialized.\n\nepsilonfloat, optional (default: 1e-6)\n\nConvergence condition. 
The iteration will stop if the total delta of all vertices are below this value.\n\nmax_iterint, optional (default: None)\n\nIf supplied, this will limit the total number of iterations.\n\nnormbool, optional (default: True)\n\nWhether or not the centrality values should be normalized.\n\nReturns\ncentralityVertexPropertyMap\n\nA vertex property map containing the Katz centrality values.\n\nbetweenness\n\nbetweenness centrality\n\npagerank\n\nPageRank centrality\n\neigenvector\n\neigenvector centrality\n\nhits\n\nauthority and hub centralities\n\ntrust_transitivity\n\npervasive trust transitivity\n\nNotes\n\nThe Katz centrality $$\\mathbf{x}$$ is the solution of the nonhomogeneous linear system\n\n$\\mathbf{x} = \\alpha\\mathbf{A}\\mathbf{x} + \\mathbf{\\beta},$\n\nwhere $$\\mathbf{A}$$ is the (weighted) adjacency matrix and $$\\mathbf{\\beta}$$ is the personalization vector (if not supplied, $$\\mathbf{\\beta} = \\mathbf{1}$$ is assumed).\n\nThe algorithm uses successive iterations of the equation above, which has a topology-dependent convergence complexity.\n\nIf enabled during compilation, this algorithm runs in parallel.\n\nReferences\n\nkatz-centrality\n\nhttp://en.wikipedia.org/wiki/Katz_centrality\n\nkatz-new\n\nL. Katz, “A new status index derived from sociometric analysis”, Psychometrika 18, Number 1, 39-43, 1953, DOI: 10.1007/BF02289026 [sci-hub, @tor]\n\nL. A. Adamic and N. Glance, “The political blogosphere and the 2004 US Election”, in Proceedings of the WWW-2005 Workshop on the Weblogging Ecosystem (2005). DOI: 10.1145/1134271.1134277 [sci-hub, @tor]\n\nExamples\n\n>>> g = gt.collection.data[\"polblogs\"]\n>>> g = gt.GraphView(g, vfilt=gt.label_largest_component(g))\n>>> w = g.new_edge_property(\"double\")\n>>> w.a = np.random.random(len(w.a))\n>>> x = gt.katz(g, weight=w)\n>>> gt.graph_draw(g, pos=g.vp[\"pos\"], vertex_fill_color=x,\n... vertex_size=gt.prop_to_size(x, mi=5, ma=15),\n... vcmap=matplotlib.cm.gist_heat,\n... vorder=x, output=\"polblogs_katz.pdf\")\n<...>", null, "Katz centrality values of the a political blogs network of [adamic-polblogs], with random weights attributed to the edges.\n\ngraph_tool.centrality.hits(g, weight=None, xprop=None, yprop=None, epsilon=1e-06, max_iter=None)[source]\n\nCalculate the authority and hub centralities of each vertex in the graph.\n\nParameters\ngGraph\n\nGraph to be used.\n\nweightEdgePropertyMap (optional, default: None)\n\nEdge property map with the edge weights.\n\nxpropVertexPropertyMap, optional (default: None)\n\nVertex property map where the authority centrality must be stored.\n\nypropVertexPropertyMap, optional (default: None)\n\nVertex property map where the hub centrality must be stored.\n\nepsilonfloat, optional (default: 1e-6)\n\nConvergence condition. 
The iteration will stop if the total delta of all vertices are below this value.\n\nmax_iterint, optional (default: None)\n\nIf supplied, this will limit the total number of iterations.\n\nReturns\neigfloat\n\nThe largest eigenvalue of the cocitation matrix.\n\nxVertexPropertyMap\n\nA vertex property map containing the authority centrality values.\n\nyVertexPropertyMap\n\nA vertex property map containing the hub centrality values.\n\nbetweenness\n\nbetweenness centrality\n\neigenvector\n\neigenvector centrality\n\npagerank\n\nPageRank centrality\n\ntrust_transitivity\n\npervasive trust transitivity\n\nNotes\n\nThe Hyperlink-Induced Topic Search (HITS) centrality assigns hub ($$\\mathbf{y}$$) and authority ($$\\mathbf{x}$$) centralities to the vertices, following:\n\n\\begin{split}\\begin{align} \\mathbf{x} &= \\alpha\\mathbf{A}\\mathbf{y} \\\\ \\mathbf{y} &= \\beta\\mathbf{A}^T\\mathbf{x} \\end{align}\\end{split}\n\nwhere $$\\mathbf{A}$$ is the (weighted) adjacency matrix and $$\\lambda = 1/(\\alpha\\beta)$$ is the largest eigenvalue of the cocitation matrix, $$\\mathbf{A}\\mathbf{A}^T$$. (Without loss of generality, we set $$\\beta=1$$ in the algorithm.)\n\nThe algorithm uses the power method which has a topology-dependent complexity of $$O\\left(N\\times\\frac{-\\log\\epsilon}{\\log|\\lambda_1/\\lambda_2|}\\right)$$, where $$N$$ is the number of vertices, $$\\epsilon$$ is the epsilon parameter, and $$\\lambda_1$$ and $$\\lambda_2$$ are the largest and second largest eigenvalues of the (weighted) cocitation matrix, respectively.\n\nIf enabled during compilation, this algorithm runs in parallel.\n\nReferences\n\nhits-algorithm\n\nhttp://en.wikipedia.org/wiki/HITS_algorithm\n\nkleinberg-authoritative\n\nJ. Kleinberg, “Authoritative sources in a hyperlinked environment”, Journal of the ACM 46 (5): 604-632, 1999, DOI: 10.1145/324133.324140 [sci-hub, @tor].\n\npower-method\n\nhttp://en.wikipedia.org/wiki/Power_iteration\n\nL. A. Adamic and N. Glance, “The political blogosphere and the 2004 US Election”, in Proceedings of the WWW-2005 Workshop on the Weblogging Ecosystem (2005). DOI: 10.1145/1134271.1134277 [sci-hub, @tor]\n\nExamples\n\n>>> g = gt.collection.data[\"polblogs\"]\n>>> g = gt.GraphView(g, vfilt=gt.label_largest_component(g))\n>>> ee, x, y = gt.hits(g)\n>>> gt.graph_draw(g, pos=g.vp[\"pos\"], vertex_fill_color=x,\n... vertex_size=gt.prop_to_size(x, mi=5, ma=15),\n... vcmap=matplotlib.cm.gist_heat,\n... vorder=x, output=\"polblogs_hits_auths.pdf\")\n<...>\n>>> gt.graph_draw(g, pos=g.vp[\"pos\"], vertex_fill_color=y,\n... vertex_size=gt.prop_to_size(y, mi=5, ma=15),\n... vcmap=matplotlib.cm.gist_heat,\n... vorder=y, output=\"polblogs_hits_hubs.pdf\")\n<...>", null, "HITS authority values of the a political blogs network of [adamic-polblogs].", null, "HITS hub values of the a political blogs network of [adamic-polblogs].\n\ngraph_tool.centrality.eigentrust(g, trust_map, vprop=None, norm=False, epsilon=1e-06, max_iter=0, ret_iter=False)[source]\n\nCalculate the eigentrust centrality of each vertex in the graph.\n\nParameters\ngGraph\n\nGraph to be used.\n\ntrust_mapEdgePropertyMap\n\nEdge property map with the values of trust associated with each edge. The values must lie in the range [0,1].\n\nvpropVertexPropertyMap, optional (default: None)\n\nVertex property map where the values of eigentrust must be stored.\n\nnormbool, optional (default: False)\n\nNorm eigentrust values so that the total sum equals 1.\n\nepsilonfloat, optional (default: 1e-6)\n\nConvergence condition. 
The iteration will stop if the total delta of all vertices are below this value.\n\nmax_iterint, optional (default: None)\n\nIf supplied, this will limit the total number of iterations.\n\nret_iterbool, optional (default: False)\n\nIf true, the total number of iterations is also returned.\n\nReturns\neigentrustVertexPropertyMap\n\nA vertex property map containing the eigentrust values.\n\nbetweenness\n\nbetweenness centrality\n\npagerank\n\nPageRank centrality\n\ntrust_transitivity\n\npervasive trust transitivity\n\nNotes\n\nThe eigentrust [kamvar-eigentrust-2003] values $$t_i$$ correspond the following limit\n\n$\\mathbf{t} = \\lim_{n\\to\\infty} \\left(C^T\\right)^n \\mathbf{c}$\n\nwhere $$c_i = 1/|V|$$ and the elements of the matrix $$C$$ are the normalized trust values:\n\n$c_{ij} = \\frac{\\max(s_{ij},0)}{\\sum_{j} \\max(s_{ij}, 0)}$\n\nThe algorithm has a topology-dependent complexity.\n\nIf enabled during compilation, this algorithm runs in parallel.\n\nReferences\n\nkamvar-eigentrust-2003(1,2)\n\nS. D. Kamvar, M. T. Schlosser, H. Garcia-Molina “The eigentrust algorithm for reputation management in p2p networks”, Proceedings of the 12th international conference on World Wide Web, Pages: 640 - 651, 2003, DOI: 10.1145/775152.775242 [sci-hub, @tor]\n\nL. A. Adamic and N. Glance, “The political blogosphere and the 2004 US Election”, in Proceedings of the WWW-2005 Workshop on the Weblogging Ecosystem (2005). DOI: 10.1145/1134271.1134277 [sci-hub, @tor]\n\nExamples\n\n>>> g = gt.collection.data[\"polblogs\"]\n>>> g = gt.GraphView(g, vfilt=gt.label_largest_component(g))\n>>> w = g.new_edge_property(\"double\")\n>>> w.a = np.random.random(len(w.a)) * 42\n>>> t = gt.eigentrust(g, w)\n>>> gt.graph_draw(g, pos=g.vp[\"pos\"], vertex_fill_color=t,\n... vertex_size=gt.prop_to_size(t, mi=5, ma=15),\n... vcmap=matplotlib.cm.gist_heat,\n... vorder=t, output=\"polblogs_eigentrust.pdf\")\n<...>", null, "Eigentrust values of the a political blogs network of [adamic-polblogs], with random weights attributed to the edges.\n\ngraph_tool.centrality.trust_transitivity(g, trust_map, source=None, target=None, vprop=None)[source]\n\nCalculate the pervasive trust transitivity between chosen (or all) vertices in the graph.\n\nParameters\ngGraph\n\nGraph to be used.\n\ntrust_mapEdgePropertyMap\n\nEdge property map with the values of trust associated with each edge. The values must lie in the range [0,1].\n\nsourceVertex (optional, default: None)\n\nSource vertex. All trust values are computed relative to this vertex. If left unspecified, the trust values for all sources are computed.\n\ntargetVertex (optional, default: None)\n\nThe only target for which the trust value will be calculated. If left unspecified, the trust values for all targets are computed.\n\nvpropVertexPropertyMap (optional, default: None)\n\nA vertex property map where the values of transitive trust must be stored.\n\nReturns\ntrust_transitivityVertexPropertyMap or float\n\nA vertex vector property map containing, for each source vertex, a vector with the trust values for the other vertices. If only one of source or target is specified, this will be a single-valued vertex property map containing the trust vector from/to the source/target vertex to/from the rest of the network. 
If both source and target are specified, the result is a single float, with the corresponding trust value for the target.\n\neigentrust\n\neigentrust centrality\n\nbetweenness\n\nbetweenness centrality\n\npagerank\n\nPageRank centrality\n\nNotes\n\nThe pervasive trust transitivity between vertices i and j is defined as\n\n$t_{ij} = \frac{\sum_m A_{m,j} w^2_{G\setminus\{j\}}(i\to m)c_{m,j}} {\sum_m A_{m,j} w_{G\setminus\{j\}}(i\to m)}$\n\nwhere $$A_{ij}$$ is the adjacency matrix, $$c_{ij}$$ is the direct trust from i to j, and $$w_G(i\to j)$$ is the weight of the path with maximum weight from i to j, computed as\n\n$w_G(i\to j) = \prod_{e\in i\to j} c_e.$\n\nThe algorithm measures the transitive trust by finding the paths with maximum weight, using Dijkstra’s algorithm, to all in-neighbors of a given target. This search needs to be performed repeatedly for every target, since it needs to be removed from the graph first. For each given source, the resulting complexity is therefore $$O(V^2\log V)$$ for all targets, and $$O(V\log V)$$ for a single target. For a given target, the complexity for obtaining the trust from all given sources is $$O(kV\log V)$$, where $$k$$ is the in-degree of the target. Thus, the complexity for obtaining the complete trust matrix is $$O(EV\log V)$$, where $$E$$ is the number of edges in the network.\n\nIf enabled during compilation, this algorithm runs in parallel.\n\nReferences\n\nrichters-trust-2010\n\nOliver Richters and Tiago P. Peixoto, “Trust Transitivity in Social Networks,” PLoS ONE 6, no. 4: e18384 (2011), DOI: 10.1371/journal.pone.0018384\n\nadamic-polblogs\n\nL. A. Adamic and N. Glance, “The political blogosphere and the 2004 US Election”, in Proceedings of the WWW-2005 Workshop on the Weblogging Ecosystem (2005). DOI: 10.1145/1134271.1134277\n\nExamples\n\n>>> g = gt.collection.data[\"polblogs\"]\n>>> g = gt.GraphView(g, vfilt=gt.label_largest_component(g))\n>>> g = gt.Graph(g, prune=True)\n>>> w = g.new_edge_property(\"double\")\n>>> w.a = np.random.random(len(w.a))\n>>> t = gt.trust_transitivity(g, w, source=g.vertex(42))\n>>> gt.graph_draw(g, pos=g.vp[\"pos\"], vertex_fill_color=t,\n... vertex_size=gt.prop_to_size(t, mi=5, ma=15),\n... vcmap=matplotlib.cm.gist_heat,\n... vorder=t, output=\"polblogs_trust_transitivity.pdf\")\n<...>", null, "Trust transitivity values from source vertex 42 of the political blogs network of [adamic-polblogs], with random weights attributed to the edges." ]
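The HITS and eigentrust notes above both reduce to short power iterations. As an illustration only — this is not the graph-tool implementation, just a NumPy sketch of the quoted update equations on a made-up adjacency/trust matrix, with simpler convergence handling than the real `epsilon`/`max_iter` logic:

```python
import numpy as np

def hits_power_iteration(A, eps=1e-6, max_iter=1000):
    """Iterate x = A y, y = A^T x (beta = 1), as in the HITS notes above."""
    x = np.ones(A.shape[0])
    y = np.ones(A.shape[0])
    eig = 0.0
    for _ in range(max_iter):
        x = A @ y                      # authority update from the hub values
        y = A.T @ x                    # hub update from the authority values
        new_eig = np.linalg.norm(y)    # ~ largest eigenvalue of the cocitation matrix
        x /= np.linalg.norm(x)
        y /= np.linalg.norm(y)
        if abs(new_eig - eig) < eps:
            break
        eig = new_eig
    return new_eig, x, y

def eigentrust_power_iteration(S, eps=1e-6, max_iter=1000):
    """t = lim_n (C^T)^n c, with C the row-normalized, non-negative trust matrix."""
    S = np.maximum(S, 0.0)
    C = S / S.sum(axis=1, keepdims=True)        # c_ij = max(s_ij, 0) / sum_j max(s_ij, 0)
    t = np.full(S.shape[0], 1.0 / S.shape[0])   # c_i = 1 / |V|
    for _ in range(max_iter):
        t_next = C.T @ t
        if np.abs(t_next - t).sum() < eps:      # total delta over all vertices
            return t_next
        t = t_next
    return t

# Toy directed graph (edges 0->1, 0->2, 1->2, 2->0), with trust 0.8 on every edge.
A = np.array([[0., 1., 1.],
              [0., 0., 1.],
              [1., 0., 0.]])
print(hits_power_iteration(A))
print(eigentrust_power_iteration(0.8 * A))
```

For real graphs you would call `gt.hits` / `gt.eigentrust` as in the examples above; the sketch only makes the Notes sections concrete, and its row normalization assumes every vertex has at least one outgoing positive trust value.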
[ null, "https://graph-tool.skewed.de/static/doc/_images/polblogs_pr.png", null, "https://graph-tool.skewed.de/static/doc/_images/polblogs_pr_pers.png", null, "https://graph-tool.skewed.de/static/doc/_images/polblogs_betweenness.png", null, "https://graph-tool.skewed.de/static/doc/_images/polblogs_closeness.png", null, "https://graph-tool.skewed.de/static/doc/_images/polblogs_eigenvector.png", null, "https://graph-tool.skewed.de/static/doc/_images/polblogs_katz.png", null, "https://graph-tool.skewed.de/static/doc/_images/polblogs_hits_auths.png", null, "https://graph-tool.skewed.de/static/doc/_images/polblogs_hits_hubs.png", null, "https://graph-tool.skewed.de/static/doc/_images/polblogs_eigentrust.png", null, "https://graph-tool.skewed.de/static/doc/_images/polblogs_trust_transitivity.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.60019433,"math_prob":0.9791077,"size":26851,"snap":"2019-43-2019-47","text_gpt3_token_len":7632,"char_repetition_ratio":0.14422467,"word_repetition_ratio":0.42213348,"special_character_ratio":0.2838628,"punctuation_ratio":0.20477612,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99882495,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],"im_url_duplicate_count":[null,3,null,4,null,3,null,1,null,4,null,4,null,3,null,1,null,3,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-18T23:18:52Z\",\"WARC-Record-ID\":\"<urn:uuid:2427f0bd-fd19-4ea9-8cc9-dd460a755f0c>\",\"Content-Length\":\"118110\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:30d68720-3842-4ac0-8ed8-c450ecb8afe6>\",\"WARC-Concurrent-To\":\"<urn:uuid:40b19b23-419b-45ab-8ef6-a4675dad48c0>\",\"WARC-IP-Address\":\"74.50.54.68\",\"WARC-Target-URI\":\"https://graph-tool.skewed.de/static/doc/centrality.html\",\"WARC-Payload-Digest\":\"sha1:LOYGDYS7IXG3L7J2QUTX2JC5CQZQR5Z3\",\"WARC-Block-Digest\":\"sha1:IPAR2LRDV6BUJGYEMYSJ74ON7C7FTS6A\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986685915.43_warc_CC-MAIN-20191018231153-20191019014653-00264.warc.gz\"}"}
https://socratic.org/questions/5340858702bf3429e1cd603e
[ "# Question #d603e\n\nApr 28, 2015\n\nFor a simple pendulum, it's is not often easy to find potential.\n\nOn the other hand, it is easier and for most cases, more useful to find the change in potential energy", null, "That is, $\\Delta \\text{Pe}$ instead of just $\\text{Pe}$\n\n$\\Delta \\text{Pe} = m g \\Delta h$\n$m =$ mass\n$g =$ acceleration due to gravity\n$\\Delta h =$ change in height above the ground\n\nBut in fact, if we know the position of the ball above the ground, them we can calculate the Potential energy as $\\text{Pe} = m g {x}_{1}$ for example!\n\n$\\Delta \\text{Pe}$ is more of interest because we can conserve energy.\nLike this,\n$m g \\Delta h = \\frac{1}{2} m \\Delta {v}^{2}$" ]
[ null, "https://useruploads.socratic.org/CDFyxx5TRBLgA5cT1Xg2_7C267876-0B87-4FD3-AB36-D8050DCCADFA.PNG", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.82254064,"math_prob":0.9997907,"size":465,"snap":"2020-45-2020-50","text_gpt3_token_len":112,"char_repetition_ratio":0.10412148,"word_repetition_ratio":0.0,"special_character_ratio":0.23225807,"punctuation_ratio":0.095744684,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999671,"pos_list":[0,1,2],"im_url_duplicate_count":[null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-26T07:18:42Z\",\"WARC-Record-ID\":\"<urn:uuid:0040783b-e3b5-4fa7-956e-83ea2b9a1b1f>\",\"Content-Length\":\"33479\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1b8f24f0-0a8d-4b7b-9b1f-8c810fcf4ee9>\",\"WARC-Concurrent-To\":\"<urn:uuid:4d65d5a9-28e3-498b-be51-97ec8a0ed24e>\",\"WARC-IP-Address\":\"216.239.32.21\",\"WARC-Target-URI\":\"https://socratic.org/questions/5340858702bf3429e1cd603e\",\"WARC-Payload-Digest\":\"sha1:4IILYJIO34HLHCUSZH2K73UFHIK4TNGA\",\"WARC-Block-Digest\":\"sha1:ZO32UAP4A2ABWBIEFCZCYYBQECEO5IJ5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141186761.30_warc_CC-MAIN-20201126055652-20201126085652-00704.warc.gz\"}"}
https://nrich.maths.org/funnyfactorisation
[ "#### You may also like", null, "Make a set of numbers that use all the digits from 1 to 9, once and once only. Add them up. The result is divisible by 9. Add each of the digits in the new number. What is their sum? Now try some other possibilities for yourself!", null, "### Multiplication Magic\n\nGiven any 3 digit number you can use the given digits and name another number which is divisible by 37 (e.g. given 628 you say 628371 is divisible by 37 because you know that 6+3 = 2+7 = 8+1 = 9). The question asks you to explain the trick.", null, "### N000ughty Thoughts\n\nHow many noughts are at the end of these giant numbers?\n\n# Funny Factorisation\n\n##### Age 11 to 16 Challenge Level:\n\nSome 4 digit numbers can be written as the product of a 3 digit number and a 2 digit number using each of the digits $1$ to $9$ once, and only once.\n\nThe number $4396$ can be written as just such a product.", null, "Can you find the factors?\n\nMaths is full of surprises!\nThe numbers $5796$ and $5346$ can each be written as a product like this in two different ways.", null, "Can you find these four funny factorisations?\n\nExtension\n\nThere are two more funny factorisations to find, using each of the digits $1$ to $9$ once, and only once.\nCan you fill in the blanks in the multiplication below to find one of them?", null, "If you know a bit about computer programming, you may wish to write a program to find the final funny factorisation." ]
[ null, "https://nrich.maths.org/content/01/09/bbprob1/icon.png", null, "https://nrich.maths.org/content/99/03/15plus1/icon.jpg", null, "https://nrich.maths.org/content/01/03/15plus3/icon.jpg", null, "https://nrich.maths.org/content/00/11/six3/funnyfact1.png", null, "https://nrich.maths.org/content/00/11/six3/funnyfact2.png", null, "https://nrich.maths.org/content/00/11/six3/funnyfact3.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9008027,"math_prob":0.9869808,"size":710,"snap":"2020-34-2020-40","text_gpt3_token_len":173,"char_repetition_ratio":0.12606232,"word_repetition_ratio":0.122137405,"special_character_ratio":0.2535211,"punctuation_ratio":0.08783784,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99163115,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,null,null,null,null,8,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-18T14:51:47Z\",\"WARC-Record-ID\":\"<urn:uuid:a1de3b80-3273-42b2-883a-1bfd9cabf268>\",\"Content-Length\":\"15075\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f5647536-828f-42f7-a87a-696d5762d895>\",\"WARC-Concurrent-To\":\"<urn:uuid:aa012201-b9ee-45a3-8047-bbd70d12b6b6>\",\"WARC-IP-Address\":\"131.111.18.195\",\"WARC-Target-URI\":\"https://nrich.maths.org/funnyfactorisation\",\"WARC-Payload-Digest\":\"sha1:FYY6GF2P3JPU57AD3VEENEYTEROZMXUX\",\"WARC-Block-Digest\":\"sha1:DDZNPBQM5XZSGZD3SU6MJRUZJ2A6FQTT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400187899.11_warc_CC-MAIN-20200918124116-20200918154116-00404.warc.gz\"}"}
https://dsp.stackexchange.com/questions/47604/when-listening-in-to-an-am-signals-of-various-frequencies-how-do-we-exactly-tun
[ "# When listening in to an AM signals of various frequencies, how do we exactly tune in?\n\nAlright, so I read on AM and Double Sideband AM, but I don't the more fundamental idea - the frequencies.\n\nFrequency is just: $\\frac{1}{T}$, where T is period\n\nSo difference between 5 Hz and 30 Hz is that period of 1st signal is $\\frac{1}{5}$ = 0.2 seconds, and of 2nd signal = $\\frac{1}{30} = 0.03 seconds$\n\nand I know in AM you need to demodulate by multiplying by cos(wt) to bring message to lower frequency and then low pass filter to only have message signal etc\n\nbut for simplicity sake, let's assume two signals are transmitted, one is at 5Hz the other one is at 30Hz\n\nHow do you \"listen in\"? Because the frequency spectrum, where frequency is x-axis is a bit misleading!", null, "It shows that you can magically shift to the right and just listen in to the needed signal! That's not the case! Frequency is just how fast a signal repeats. Amplitude is changing according to sine function.\n\nSo if you have two signals, one at 5Hz and second one at 30Hz, then they both \"are in the air\", you can't just magically listen in every $\\frac{1}{30Hz} = 0.03 seconds$ and only get signal at 30Hz, you'll also be getting the signal at 5Hz, they'll be overlapping because frequency is just how fast signal is changing.\n\nSo how do you listen in? Do you just listen in for everything?\n\nThen you have a signal that's changing every \"0.2 seconds (5Hz)\" and another one every \"0.03 seconds\", they'll overlap of course. The only way they don't is if there's only one signal is in the air. I don't see how different frequencies allow to fine tune to one particular signal and ignore others.\n\nBesides, even if there's just one signal in the air, say it's the 5Hz signal, how do you \"listen in\"? Do you just \"listen in\" continuously, or a receiver just turns on and shuts off every $\\frac{1}{5Hz} = 0.2 sec$????", null, "• Trigonometric sum and difference formulas are “just the case”, even if you think the math is “magic”. Also, sampling needs to be done somewhat above twice the highest frequency. Mar 6, 2018 at 11:17\n• I don't really understand what the confusion is, but something that you might be missing is this: before receiving an AM signal, you need to get rid of everything else in the spectrum: IOW, you need a bandpass filter centered around the carrier you're interested in, and that covers both SB, and nothing else.\n– MBaz\nMar 6, 2018 at 13:06\n\nOk, I think here is a good point to introduce equivalent baseband, because you implicitly are already using it!\n\nSo, what does your cosine 5Hz audio signal look like, if you were to set $f_c=0$, in spectrum?\n\nExactly, one dirac at +5 Hz, and one at -5 Hz! Hence, when you mix that up to $f_c\\ne 0$, you get symmetric sidebands. That works for any real signals (ie. for any audio signal) – and even when you add them. That's where your two sidebands come from – positive and (hermitian) symmetrical negative components of any real-valued signal.\n\nSo, in baseband, your sum of 5 and 30 Hz cosines have four spectral components – at -30, -5, +5 and +30 Hz, and mixed up to the carrier frequency you get the same discrete spectral components, but added to $\\pm f_c$. (your figure also shows a component at $f_c$, but that's only there if you got a DC component in your baseband signal, and/or you're not suppressing the carrier)\n\nNow, there's very different receiver architectures for AM-modulated audio. None of them \"just looks every 1/(audio period)\". 
\"Looking every so often\" presumes you're building something digital. Most AM receivers (most AM being pretty obsolete by now) are not digital. The simplest detector methods really are just rectifying diodes and a low pass filter to get rid of the RF content – I'll leave googling for \"AM diode detector\" up to you; there's a plethora of good articles on that out there. This is all continuous-time, so there's no \"looking that often\" there – it all happens by electronically processing the continuous signal.\n\nNow, assuming we're really aiming for digital here:\n\nThat 1/(audio period) wouldn't even make sense for audio – it breaks Nyquist; you need at least twice the audio bandwidth as sampling rate to be able to reconstruct the signal.\n\nWhat one can build is simply a mixer with $f_c$, which will first (continuous-time!) multiply with a harmonic of that frequency (effectively, a complex sinusoid), and then sample (that's the act of looking only every so often) that, and then you get a digital signal that's basically nothing but your original audio sum.\n\nNote that you've been asking about \"how to deal with the fact that you add two sines of different frequencies\", but that question is totally independent from the AM aspect of that: you just can. Under the Fourier Transform, i.e. in the spectrum, these are orthogonal, i.e. you can perfectly separate them, both in a computer, in a circuit, or with your ear. That's why you're hopefully able to listen to sounds that aren't made of a single tone. In fact, in reality, single-tone sounds are extremely rare.\n\n• >\"The simplest detector methods really are just rectifying diodes and a low pass filter to get rid of the RF content\" How does it receive the signal? And when it is send at a high frequency through air... how is it being transmitted to tower? How does it \"propagate\"?\n– Jack\nMar 7, 2018 at 0:09\n• I said \"I leave googling for diode receivers up to you\" in the very next sentence, with the hint that there's good material out there. I can't take reading what's available off your shoulders. Mar 7, 2018 at 0:35" ]
[ null, "https://i.stack.imgur.com/95LcJ.png", null, "https://i.stack.imgur.com/Yvn21.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.96246386,"math_prob":0.8827668,"size":1874,"snap":"2023-40-2023-50","text_gpt3_token_len":487,"char_repetition_ratio":0.14331551,"word_repetition_ratio":0.0,"special_character_ratio":0.26894343,"punctuation_ratio":0.117794484,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95020616,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-22T16:02:29Z\",\"WARC-Record-ID\":\"<urn:uuid:7cc4466d-6793-443f-9a23-8c2413d353c5>\",\"Content-Length\":\"162929\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5bb5baa9-f7dc-4e39-8e43-58449cb080f6>\",\"WARC-Concurrent-To\":\"<urn:uuid:ec8cd066-87b7-401b-94db-012da1589001>\",\"WARC-IP-Address\":\"104.18.11.86\",\"WARC-Target-URI\":\"https://dsp.stackexchange.com/questions/47604/when-listening-in-to-an-am-signals-of-various-frequencies-how-do-we-exactly-tun\",\"WARC-Payload-Digest\":\"sha1:7PJKD2XREQKI5TLI7YNJ5XOKQSB6TAJ5\",\"WARC-Block-Digest\":\"sha1:DCFXS6KZDT7ZNGXJM33OX2JNW77EUSLO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506420.84_warc_CC-MAIN-20230922134342-20230922164342-00681.warc.gz\"}"}
https://physics.stackexchange.com/questions/151317/minimum-angular-velocity-for-circular-motion-pendulum/151322
[ "# Minimum angular velocity for circular motion (pendulum)\n\nHow can I show that there is a minimum angular velocity $\\omega_{min}$, different from zero, such that if we chose an $\\omega$ smaller than $\\omega_{min}$, then it is not possible to have a circular motion as in the picture?", null, "• Why does it merit a bounty? It looks straight forward. Are we missing something? Is this a quantum mechanics question, where there is a minimum energy level? – mmesser314 Dec 9 '14 at 5:35\n• They convinced me they were right!, the correct answer is that there is a minimum $\\omega_{min}$, which is the one requires for an angle $\\theta=0$ – Wolphram jonny Dec 9 '14 at 17:53\n\n## 3 Answers\n\nIf you solve the problem for the two forces the vertical and the horizontal force (which is required for the circular motion) you obtain the relation\n\n$$\\omega^2=\\frac{g}{L\\cos\\theta}$$\n\nHence the minimum required $\\omega$ is $\\sqrt{g/l}$, the same as the angular frequency for motion of a bob in a plane.\n\n• this answer is wrong, why do you think that w is a minimum one instead to that corresponding to circular moption for a given angle? remember that given an angle, there is only one possible omega, there is NOT a range of possible omegas. – Wolphram jonny Dec 9 '14 at 4:38\n• The minimum occurs when $cos \\theta$ equals 1, or $\\theta$ equals 0. Which means there is no circular motion. – LDC3 Dec 9 '14 at 4:39\n• OK, then you agree, there is no minimum for circular motion, just that at theta=0 there is no motion at all! – Wolphram jonny Dec 9 '14 at 4:47\n• No value smaller than w=√g/L can lead to circular motion for simple reason. – SAKhan Dec 9 '14 at 5:07\n\nExpanding the correct answer of @SAKhan:\n\nAssume that the conical pendulum is rotating at an angle $\\theta$ at an angular velocity $\\omega$.\n\nNote also that the radius of the circle is given by $$r=L\\sin(\\theta)$$\n\nFor the point mass to move in a horizontal circle, the total vertical force is zero:$$T\\cos(\\theta)=mg$$\n\nThe net horizontal force must supply the needed centripetal force:$$T\\sin(\\theta)=m\\omega^2r=m\\omega^2L\\sin(\\theta)$$Combining these two equations to eliminate $T$, we get:$$\\omega^2=\\frac{g}{L\\cos(\\theta)}$$\n\nThe maximum value of $\\cos(\\theta)$ is $1$, when $\\theta=0$, so $$\\omega_{minimum}=\\sqrt{\\frac{g}{L}}$$\n\nEdited for clarity:\n\nSo, what does this math mean?\n\nAssume for the sake of simplicity that the length, $L$ of the pendulum is $9.8$ meters, Then the equation for $\\omega$ reduces to$$\\omega = \\sqrt{\\frac{1}{\\cos(\\theta)}}$$ Now, we repeatedly start the pendulum into circular motion, each time at a some different angle $\\theta$. (This could take some fiddling!) For each of these set-ups, once we are sure that the pendulum is moving in a circle, we measure the angle $\\theta$ and the angular velocity $\\omega$. This angular velocity can be measured by taking the period of the circular motion, and dividing it into $2\\pi$.\n\nIf we took all the data and plot them, we would obtain graph that shows that the value of $\\omega$ approaches $1$ as $\\theta$ approaches $0$. Strictly speaking, $\\omega=1$ is a lower limit (rather than a minimum) that is approached asymptotically as the angle $\\theta$ approaches zero.", null, "The vertical axis is $\\omega$, in radians, and the horizontal axis is $\\theta$ in degrees\n\n• why you conlcude that this be the minimum omega for circular motion instead of conlcusing that it is the omega required for circular motion at angle theta? 
– Wolphram jonny Dec 9 '14 at 4:33\n• The minimum occurs when θ equals 0; which means the pendulum is hanging vertically. – LDC3 Dec 9 '14 at 4:42\n• The graph nicely demonstrates that there is one value of $\\omega$ for each value of $\\theta$, and that this value tends to a non-zero value when the circular motion becomes very small. I wish you had labeled the vertical axis in units of $\\frac{L}{g}$ rather than showing that $\\omega$ tends to $1$ which is slightly misleading. Perhaps define $\\omega_0=\\sqrt{L/g}$ and plot $\\frac{\\omega}{\\omega_0}$ ... – Floris Dec 9 '14 at 14:15\n• Nicely explained – SAKhan Oct 1 '17 at 6:25\n\nThere is such a minimum angular speed: you will always find an angle that results in circular motion for any given angular speed. The angle is given by:\n\n$$\\cos \\theta=\\frac{g}{L \\omega^2}$$\n\nUpdate: I got to this expression by using the equations of motion:\n\n$m\\omega^2L \\sin \\theta=T\\sin\\theta$\n\nand\n\n$T\\cos\\theta=mg$\n\nWhat this means is that the minimum speed is reached for $\\theta=0$, where $\\omega_{min}=\\sqrt{g/L}$. Any other circular motion will require a larger angular velocity. Thus, if we give the pendulum a speed less than the minimum, it will not be able to undergo a circular motion and will start to oscillate.\n\n• @Floris I updated my answer, are you sure I forgot some force? – Wolphram jonny Dec 9 '14 at 3:37\n• Have you forgotten that the bob will move like a regular pendulum below its natural frequency? There is a restoring force proportional to $\\sin\\theta$ when there is no rotation. I believe that sets the lower limit on $\\omega$ – Floris Dec 9 '14 at 3:41\n• Yes, thanks! I missed a sin (theta) when I updated the answer, I'll correct it (but the solution is correct) – Wolphram jonny Dec 9 '14 at 3:41\n• @Wolphramjonny great! But now your equation contradicts your sentence \"There is no such minimum angular speed\" (meaning, what if $g/L\\omega^2>1$?) – user12029 Dec 9 '14 at 4:33\n• But the question is asking the opposite: for a given $\\omega$ can you always find an angle? (or rather, the question is presenting the assertion that you cannot, and asking for a proof) – David Z Dec 9 '14 at 8:13" ]
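A quick numerical check of $\omega^2 = g/(L\cos\theta)$ from the accepted derivation (the pendulum length here is an arbitrary choice):

```python
import math

g, L = 9.8, 1.0                      # SI units; L = 1 m chosen for illustration

def omega(theta_deg):
    """Angular speed needed for steady circular motion at cone angle theta."""
    theta = math.radians(theta_deg)
    return math.sqrt(g / (L * math.cos(theta)))

print(math.sqrt(g / L))              # omega_min ≈ 3.13 rad/s as theta -> 0
for deg in (10, 30, 60, 80):
    print(deg, round(omega(deg), 2)) # ≈ 3.16, 3.36, 4.43, 7.51 rad/s
```

The required angular speed grows without bound as the cone opens toward 90°, and approaches the non-zero limit √(g/L) as the angle shrinks, which is the claim being debated in the comments above.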
[ null, "https://i.stack.imgur.com/9kZyY.jpg", null, "https://i.stack.imgur.com/1mcDo.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.86518997,"math_prob":0.9988743,"size":5888,"snap":"2019-35-2019-39","text_gpt3_token_len":1623,"char_repetition_ratio":0.17046227,"word_repetition_ratio":0.052083332,"special_character_ratio":0.28125,"punctuation_ratio":0.09290096,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99993384,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-18T13:29:08Z\",\"WARC-Record-ID\":\"<urn:uuid:085d5f4a-df2a-45c6-9228-039fbd8251bb>\",\"Content-Length\":\"165111\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7e5fe439-11c3-4cb1-8dde-b304836d8095>\",\"WARC-Concurrent-To\":\"<urn:uuid:29d5b992-9d1c-4bd9-99e6-3326476b72c6>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://physics.stackexchange.com/questions/151317/minimum-angular-velocity-for-circular-motion-pendulum/151322\",\"WARC-Payload-Digest\":\"sha1:BUANKSGF5HHTBFBDBV6RTRXZXEBZS232\",\"WARC-Block-Digest\":\"sha1:OL5D2JJVRV63IG52QM2URCYEQAGB2BP7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027313889.29_warc_CC-MAIN-20190818124516-20190818150516-00081.warc.gz\"}"}
https://answers.everydaycalculation.com/simplify-fraction/280-1350
[ "Solutions by everydaycalculation.com\n\n## Reduce 280/1350 to lowest terms\n\nThe simplest form of 280/1350 is 28/135.\n\n#### Steps to simplifying fractions\n\n1. Find the GCD (or HCF) of numerator and denominator\nGCD of 280 and 1350 is 10\n2. Divide both the numerator and denominator by the GCD\n280 ÷ 10/1350 ÷ 10\n3. Reduced fraction: 28/135\nTherefore, 280/1350 simplified to lowest terms is 28/135.\n\nMathStep (Works offline)", null, "Download our mobile app and learn to work with fractions in your own time:" ]
[ null, "https://answers.everydaycalculation.com/mathstep-app-icon.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.723005,"math_prob":0.7858586,"size":390,"snap":"2023-40-2023-50","text_gpt3_token_len":132,"char_repetition_ratio":0.15803109,"word_repetition_ratio":0.0,"special_character_ratio":0.47179487,"punctuation_ratio":0.08219178,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9535244,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-05T08:13:21Z\",\"WARC-Record-ID\":\"<urn:uuid:c82392ff-e953-4577-a37c-a7f45d323071>\",\"Content-Length\":\"6736\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:adf12cae-6baa-4806-9f8d-03ca70eb508a>\",\"WARC-Concurrent-To\":\"<urn:uuid:564fc824-e534-41a8-9463-97cb9d9df69f>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/simplify-fraction/280-1350\",\"WARC-Payload-Digest\":\"sha1:L7EQRGQZWQIZ3J524ZNADPKO6B2PUTEH\",\"WARC-Block-Digest\":\"sha1:2R5UT4T3QQ7YTNTZSICXWKOFMRMA76F4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100550.40_warc_CC-MAIN-20231205073336-20231205103336-00442.warc.gz\"}"}
https://www.unofficialgoogledatascience.com/2017/01/causality-in-machine-learning.html
[ "### Causality in machine learning\n\nBy OMKAR MURALIDHARAN, NIALL CARDIN, TODD PHILLIPS, AMIR NAJMI\n\nGiven recent advances and interest in machine learning, those of us with traditional statistical training have had occasion to ponder the similarities and differences between the fields. Many of the distinctions are due to culture and tooling, but there are also differences in thinking which run deeper. Take, for instance, how each field views the provenance of the training data when building predictive models. For most of ML, the training data is a given, often presumed to be representative of the data against which the prediction model will be deployed, but not much else. With a few notable exceptions, ML abstracts away from the data generating mechanism, and hence sees the data as raw material from which predictions are to be extracted. Indeed, machine learning generally lacks the vocabulary to capture the distinction between observational data and randomized data that statistics finds crucial. To contrast machine learning with statistics is not the object of this post (we can do such a post if there is sufficient interest). Rather, the focus of this post is on combining observational data with randomized data in model training, especially in a machine learning setting. The method we describe is applicable to prediction systems employed to make decisions when choosing between uncertain alternatives.\n\n## Predicting and intervening\n\nMost of the prediction literature assumes that predictions are made by a passive observer who has no influence on the phenomenon. On the other hand, most prediction systems are used to make decisions about how to intervene in a phenomenon. Often, the assumption of non-influence is quite reasonable — say if we predict whether or not it will rain in order to determine if we should carry an umbrella. In this case, whether or not we decide to carry an umbrella clearly doesn't affect the weather. But at other times, matters are less clear. For instance, if the predictions are used to decide between uncertain alternative scenarios then we observe only the outcomes which were realized. In this framing, the decisions we make influence our future training data. Depending on how the model is structured, we typically use the information we gain from realized factual scenarios to assess probabilities associated with unrealized counterfactual scenarios. But this involves extrapolation and hence the counterfactual prediction might be less accurate. Some branches of machine learning (e.g. multi-arm bandits and reinforcement learning) adopt this framing of choice between alternative scenarios in order to study optimal tradeoffs between exploration and exploitation. Our goal here is specifically to evaluate and improve counterfactual predictions.\n\nWhy would we care about the prediction accuracy of unrealized scenarios? There are a number of reasons. First is that our decision not to choose a particular scenario might be incorrect, but we might never learn this because we never generate data to contradict the prediction. Second, real-world prediction systems are constantly being updated and improved — knowledge of errors would help us target model development efforts. Finally, a more niche reason is the use in auction mechanisms such as second-pricing where the winner (predicted highest) must pay what value the runner up is predicted to have realized.\n\nLet's start with a simple example to illustrate the problems of predicting and intervening. 
Suppose a mobile carrier builds a \"churn\" model to predict which of its customers are likely to discontinue their service in the next three months. The carrier offers a special renewal deal to those who were predicted by the model as most likely to churn. When we analyze the set of customers who have accepted the special deal (and hence not churned), we don't immediately know which customers would have continued their service anyway versus those who renewed because of the special deal. This lack of information has some consequences:\n• we cannot directly measure churn prediction accuracy on customers to whom we made offers\n• we cannot directly measure if the offer was effective (did its benefit exceeded its cost)\n• we must (somehow) account for the intervention when training future churn models\nWhat we want to know is what would have happened had the carrier not acted on the predictions of its churn model.\n\nIn this simple example, we could of course have run an experiment where we have a randomized control group to whom we would have made a special offer but did not (a \"holdback\" group). This gives us a way to answer each of the questions above. But what if we are faced with many different situations and many possible interventions, with the objective to select the best intervention in each case?\n\nLet's consider a more complex problem to serve as the running example in this post. Suppose an online newspaper provides a section on their front page called Recommended for you where they highlight news stories they believe a given user will find interesting (the NYTimes does this for its subscribers). We can imagine a sophisticated algorithm to maximize the number of stories a user will find valuable. It may predict which stories to highlight and in which order, based on the user's location, reading history and the topic of news stories.", null, "Figure 1: A hypothetical example of personalized news recommendations\n\nThose stories which are recommended are likely to be uptaken both because the algorithm works and because of sheer prominence of the recommendation. If the model is more complex (say, multiple levels of prominence, interaction effects), holdback experiments won't scale because they would leave little opportunity to take the optimal action. Randomized experiments are \"costly\" because we do something different from what we think is best. We want to bring a small amount of randomization into the machine learning process, but do it in manner that uses randomization effectively.\n\nPredicting well on counterfactuals is usually harder than predicting well on the observed data because the decision making process creates confounding associations in the data. Continuing our news story recommendation example, the correct decision rule will tend to make recommendations which are most likely to be uptaken. 
If we try to estimate the effect of recommendation prominence by comparing how often users read recommended stories against stories not recommended, the association between our prediction and prominence would probably dominate — after all, the algorithm chooses to make prominent only those stories which appear likely to be of interest to the reader.\n\nIf we want to use predictions to make good decisions, we have to answer the following questions:\n• How do we measure accuracy on counterfactuals?\n• Are there counterfactual predictions we should avoid in decision making?\n• How can we construct our prediction system to do well on counterfactuals?\nThe rest of this post outlines an approach we have used to tackle these questions.\n\n## A problem of counterfactual prediction\n\nBefore we describe solutions, it is important to be precise about the problem we are trying to address. First off, let's be clear that if the model we are training is the true model (i.e. it correctly specifies the generating mechanism), there is no problem. Likelihood theory guarantees that we will estimate the true model in an unbiased way and asymptotically converge upon it. The problem is that every real-world model is misspecified in some way, and this is what leads to poor counterfactual estimates.\n\nFor illustration purposes, assume there is a single level of prominence and that the true model is binomial for the binary event of uptake $Y$. This true model is described by the GLM equation\n$$\\mathrm{logit}(EY) = \\beta_1 X_1 + \\beta_2 X_2 + \\beta_{\\pi} X_{\\pi} \\tag{Eq 1}$$\nwhere $X_1$ and $X_2$ are continuous features we use to estimate the relevance of the news story to the user, $X_{\\pi}$ is the binary variable indicating whether the story was made prominent. Let's define $\\beta_1 X_1 + \\beta_2 X_2$ to be the quality score of the model. We wish to estimate $\\beta_{\\pi}$, the true log odds effect of prominence on uptake $Y$.\n\nNow suppose we fit the following misspecified model using maximum likelihood\n$$\\mathrm{logit}(EY) = \\beta_1 X_1 + \\beta_{\\pi} X_{\\pi} \\tag{Eq 2}$$\nThis model is misspecified because our quality score is missing $X_2$. Our estimate of $\\beta_{\\pi}$ will pick up the projection of $X_2$ onto $X_{\\pi}$ (and onto $X_1$). In general, we will have misattribution to the extent that errors in our model are correlated with $X_{\\pi}$. This isn't anything new. If all we care about is prediction on observed $Y$, we do fine, at least to the extent $X_2$ can be projected on the space spanned by $X_1$ and $X_{\\pi}$. The fact that our estimate of $\\beta_{\\pi}$ is not unbiased isn't a concern because our predictions are unbiased (i.e. correct on average on the logit scale). The problem only arises when we use the model to predict on observations where the distribution of predictors is different from the training distribution — this of course happens when we are deciding on which stories to administer prominence. Depending on the situation, this could be a big deal, and so it has been at Google.\n\nIn theory, we could use a holdback experiment to estimate the effect of prominence where we randomly do not recommend stories which we would otherwise have recommended. We can estimate the causal effect of prominence as the difference in log odds of uptake between stories which were recommended and those which were eligible (i.e. would have been recommended) but were randomly not recommended. 
The value of $\\beta_{\\pi}$ in following GLM equation is the causal estimate we seek:\n$$\\mathrm{logit}(EY) = \\beta_{\\pi} X_{\\pi} + \\beta_e X_e \\tag{Eq 3}$$ where $X_e$ is the binary variable denoting the story was eligible for recommendation and $\\beta_e$ its associated coefficient. Since we only recommend eligible stories, $X_{\\pi}=1$ implies $X_e=1$, and $X_{\\pi} \\neq X_e$ occurs only in our holdback.\n\nObserve that $\\beta_{\\pi}$ is estimated as the difference in $\\mathrm{logit}(EY)$ when $X_{\\pi} = 1$, $X_e = 1$ and when $X_{\\pi} = 0$, $X_e = 1$. Why we use this roundabout GLM model to express a simple odds ratio calculation will become clearer further on. The point is that this method works to estimate the causal effect of prominence because randomization breaks the correlation between $X_2$ and $X_{\\pi}$ (see an earlier post for a more detailed discussion on this point). As per Eq 2, we can apply this estimate of $\\beta_{\\pi}$ from our randomized holdback in estimating $\\beta_1$ on observational data.\n\n## Checking accuracy with randomization — realism vs. interpretability\n\nThe best and most obvious way to tell how well we are predicting the effects of counterfactual actions is to randomly take those counterfactual actions a fraction of the time and see what happens. In our news recommendation example, we can randomly decide to recommend or not some stories, and see if our decision-time prediction of the change in uptake rates is correct. As we saw, this works because randomization breaks the correlations between our chosen action (whether to recommend) and other decision inputs (the quality of the recommendation).\n\nIn a complex system, randomization can still be surprisingly subtle because there are often multiple ways to randomize. For example, we can randomize inputs to the decision procedure, or directly randomize decisions. The former approach will tend to produce more realistic outcomes, but can be harder to understand, and may not give us adequate data to assess unlikely decisions. The latter approach is usually easier to understand, but can produce unrealistic outcomes.\n\nTo show how subtle things can be, let's go back to our example earlier where we computed the causal effect of prominence by running a holdback experiment. What we did there was to randomly not recommend stories we would have recommended. But this is just one kind of random perturbation. This particular procedure allows us to estimate (and hence check on) what statisticians call treatment on the treated. In other words, we estimate the average effect of prominence on the uptake of stories we recommend. This is different than the average effect of prominence across the population of stories. What we miss is the effect of prominence on the kinds of stories we never recommend. Suppose the effect of prominence is significantly lower for a news topic that no one finds interesting, say, news on the proceedings of the local chamber of commerce (PLCC). If we never recommend stories of the PLCC, they won't contribute to our holdback experiment and hence we will never learn that our estimates for such stories were too high. We could fix this problem by recommending random stories (as well as randomly suppressing recommendations) and hence directly measure the average effect of recommendation on all stories. But this might not be quite what we want either — it is unclear what we are learning from the unnatural scenario of recommending news of the intolerable PLCC. 
These recommendations might themselves cause users to react in an atypical manner, perhaps by not bothering to look at recommendations further down the list.\n\nRandom expression and suppression of recommendation are examples of what we called randomizing the decision. The alternative we mentioned was to randomize inputs to the decision. We could achieve this by adding random noise to each news story's quality score and feeding it into the decision procedure. If the amount of random noise is commensurate with the variability in the quality score then we will truly generate a realistic set of perturbations. The data we collect from these randomized perturbations will tend to be near the quality threshold for recommendation, which is usually where data is most valuable. On the other hand, this data might be less interpretable — all sorts of decisions might change, not just the stories whose scores we randomized. The impact of each individual perturbation is not easily separated and this can make the data harder to use for modeling and analysis of prediction errors.\n\nThere is no easy way to make the choice between realism and interpretability. We’ve often chosen artificial randomization that is easy to understand, since it is more likely to produce data useful for multiple applications, and subsequently checked the results against more realistic randomization. Happily, in our case, we found that answers between these two approaches were in good agreement.\n\n## The No Fake Numbers Principle\n\nAutomated decision systems are often mission-critical, so it is important that everything which goes into them is accurate and checkable. We can’t reliably check counterfactual predictions for actions we can’t randomize. These facts lead to the No Fake Numbers (NFN) principle:\nAvoid decisions based on predictions for counterfactual actions you cannot take.\nIn other words, NFN says not to use predictions of unobservable quantities to make decisions.\n\nNow why would anyone ever place demands on a prediction system which run counter this seemingly reasonable principle? In our work, violations of this principle have arisen when we've wanted to impose invariants on the decisions we make, usually with good intentions. The problem isn't the invariants themselves but rather the hypothetical nature of their premises.\n\nFor example, suppose the online newspaper wishes to enforce a policy of \"platform neutrality\" whereby the quality of recommendations should be the same on iPhone and Android mobile phones. Perhaps the newspaper wishes to ensure that users would see the same recommendations regardless of the type of phone they use. However, this is a slippery notion. iPhone users might actually have different aggregate usage patterns and preferences from Android users, making a naive comparison inappropriate.\n\nOne way to address this is to use techniques to derive causal inference from observational analysis to predict what an iPhone user would do if she were using Android. There is valuable literature on this (e.g. Rubin Causal Model) but, fundamentally, you really need to understand what you are doing with a causal model. That means the model needs careful, manual attention from an experienced modeler, and a nuanced understanding of its strengths and weaknesses. This is why observational analysis models are used only when there is no alternative, where the inference made by the model is carefully traded off against its assumptions. 
Such fine inspection is not feasible for production models, which are updated by many people (and automatically over time), require ongoing automated monitoring, and whose predictions are used for many applications.\n\nWhy is NFN necessary? First, it is generally a better way to design decision systems. Actions violating NFN can never be realized. This usually means they are not directly important for decisions, and the system can be improved by thinking about its goals more carefully.\n\nSecond, and more importantly, mission-critical systems have much stronger requirements than one-off analyses — we need to be able to monitor them for correctness, to define clear notions of improvement, to check that their behavior is stable, and to debug problems. For example, suppose our system uses predictions for what iPhone users would do on Android. If those predictions drift over time, we have no way to tell if the system is working well or not. One-off analyses might be able to get away with using observational analysis techniques, but critical systems need ongoing, robust, direct validation with randomized data. Crucially, we can never check how accurate this prediction is, since we cannot randomly force the iPhone user to use Android.\n\nIn summary, the NFN principle cautions us against imposing requirements whose solutions may have unintended consequences we cannot easily detect. As with any principle, we would override it with great caution.\n\n## Using randomization in training\n\nThe previous sections described how to use randomization to check our prediction models and guide the design of our decision systems. This section describes how to go further, and directly incorporate randomized data into the systems.\n\nLarge scale prediction systems are often bad at counterfactual predictions out of the box. This is because a large scale prediction system is almost certainly misspecified. Thus even very sophisticated ones suffer from a kind of overfitting. These systems don’t classically overfit — they use cross-validation or progressive validation to avoid that — but they tend to overfit to the observed distribution of the data. As we described earlier, this factual distribution doesn’t match the counterfactual distribution of data and hence the model can fail to generalize. When deciding which stories to recommend, we need predictions with and without the prominence of recommendation — but that means we'll need good predictions for interesting stories which don't get recommended and uninteresting stories which are recommended, rare and strange parts of the factual data.\n\nAn obvious attempt to fix this is to upweight randomized data in training, or even train the model solely on the randomized data. Unfortunately, such direct attempts perform poorly. Let's say 1% of the data are randomized. If we train on both randomized and observational data, the observational data will push estimates off course due to model misspecification. However, training solely on randomized data will suffer from data sparseness because you are training on 1% of the data. Nor is upweighting the randomized data of much of a solution — we reduce the influence of observational data only to the extent we reduce its role in modeling. Thus, upweighting is tantamount to throwing away non-randomized data.\n\nThe problem is that the model doesn’t know the random data is random, so it uses it in exactly the same way as it uses any data. 
As we observed at the start of this post, standard machine learning techniques don’t distinguish between randomized and observational data the way statistical models do. To make better estimates, we need the randomized data to play a different role than the observational data in model training.\n\nWhat is the right role for randomized data? There is probably more than one good answer to that question. For instance, one could imagine shrinking the unbiased, high-variance estimates from randomized data towards the potentially-biased, low-variance observational estimates. This is not the approach we chose for our application but we nonetheless do use the observational estimates to reduce the variance of the estimates made from randomized data.\n\nPreviously we used separate models to learn the effects of prominence and quality. The quality model (Eq 2) took the estimates of the prominence model (Eq 3) as an input. While this does achieve some of our goals it has its own problems. Firstly, this set-up is clunky, we have to maintain two models and changes in performance are the result of complex interactions between the two. Also, updating either is made harder by its relationship to the other. Secondly, the prominence model fails to take advantage of the information in the quality model.\n\nThe approach we have found most effective is best motivated as a refinement of the simple model we used to estimate the causal effect of prominence from our holdback in Eq 3. Let the quality score for each story be the log odds prediction of uptake without prominence. For the model in Eq 2 it would simply be $\\beta_1 X_1$ where $\\beta_1$ is the coefficient of $X_1$ estimated by ML. Assume for a moment that the quality score component is given. An improved estimate of the causal effect of prominence is possible from estimating $\\beta_{\\pi}$ in the model\n$$\\mathrm{logit}(EY) = \\mathrm{offset}(\\hat{\\beta_1} X_1) + \\beta_{\\pi} X_{\\pi} + \\beta_e X_e \\tag{Eq 4}$$\nwhere $\\mathrm{offset}$ is an abuse of R syntax to indicate that this component is given and not estimated. The estimate of $\\beta_{\\pi}$ in this model is still unbiased but by accounting for the (presumed) known quality effect, we reduce the variability of our estimate. In reality, the quality score is not known, and is estimated from the observational data. But regardless, randomization ensures that the estimate from this procedure will be unbiased. As long as we employ an estimate of quality score that is better than nothing, we account for some of the variability and hence reduce estimator variance. We have every reason to be optimistic of a decent-but-misspecified model.\n\nThe procedure above involves first training a model entirely on observational data and then using the quality score thus derived to estimate a second model for prominence, trained on randomized data. The astute reader will note that the observational model itself estimates a prominence effect which we discard. It turns out we can do even better by co-training the quality score together with the prominence. Consider the following iterative updating procedure:\n1. On non-randomized data, use the model $$\\mathrm{logit}(EY) = \\beta_1 X_1 + \\mathrm{offset}(\\hat{\\beta_{\\pi}} X_{\\pi})$$ and only update the quality score coefficients (here just $\\beta_1$).\n2. 
On randomized data, use the model $$\\mathrm{logit}(EY) = \\mathrm{offset}(\\hat{\\beta_1} X_1) + \\beta_{\\pi} X_{\\pi} + \\beta_e X_e$$ and only update the prominence coefficients (here $\\beta_{\\pi}$ and $\\beta_e$).\nThis co-training works better because it allows the causal estimate of $\\beta_{\\pi}$ to be used in estimating the quality score. There is a lot more to the mechanics of training a large model efficiently at scale, but the crux of our innovation is this.\n\n## Conclusion\n\nIn this post we described how some randomized data may be applied both to check and improve the accuracy of a machine learning system trained largely on observational data. We also shared some of the subtleties of randomization applied to causal modeling. While we've spent years trying to understand and overcome issues arising from counterfactual (and \"counter-usual\", atypical) predictions, there is much we have still to learn. And yet the ideas we describe here have already been deployed to solve some long-standing prediction problems at Google. We hope they will be useful to you as well.\n\n1.", null, "Insightful. Thanks for sharing!\n\n2.", null, "First, what guarantees that the co-training procedure described before the conclusion is stable? That is to say, why shouldn't the estimated values in 1. and 2. oscillate?\n\nSecond, how are X_{\\pi} and X_{e} set at prediction time? Are they both always set to 1?\n\n1.", null, "If you train the models with likelihood maximization and the likelihood function for each model has a single maximum then it must converge.\n\n2.", null, "I am interested in the answer to the second question as well: \"Second, how are X_{\\pi} and X_{e} set at prediction time? Are they both always set to 1?\"\n\n3.", null, "Very interesting, looking forward to the machine learning vs statistics...\n\n4.", null, "I am interested in reading your comparison of machine learning and statistics." ]
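The alternating procedure described in the post — quality coefficients fit on observational data with the prominence effect held fixed as an offset, prominence/eligibility coefficients fit on the randomized slice with the quality score held fixed as an offset — can be sketched with any GLM library that supports offsets. The toy below is not the production system: it uses synthetic data, a single quality feature, and statsmodels logistic GLMs, purely to show the mechanics of the offset-based co-training.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 20000
x1 = rng.normal(size=n)                    # quality feature the model sees
x2 = rng.normal(size=n)                    # feature the model is missing (misspecification)
eligible = (x1 + x2 > 0.5).astype(float)   # stories good enough to recommend
holdback = rng.random(n) < 0.1             # 10% randomized slice
prominent = eligible * np.where(holdback, rng.integers(0, 2, size=n), 1.0)

true_logit = x1 + x2 + 0.7 * prominent     # true prominence effect beta_pi = 0.7
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)

obs, rnd = ~holdback, holdback
beta_1, beta_pi, beta_e = 0.0, 0.0, 0.0
for _ in range(5):
    # Step 1: quality model on observational data, prominence held fixed as an offset.
    fit_q = sm.GLM(y[obs], x1[obs].reshape(-1, 1),
                   family=sm.families.Binomial(),
                   offset=beta_pi * prominent[obs]).fit()
    beta_1 = fit_q.params[0]
    # Step 2: prominence model on randomized data, quality score held fixed as an offset.
    X_p = np.column_stack([prominent[rnd], eligible[rnd]])
    fit_p = sm.GLM(y[rnd], X_p,
                   family=sm.families.Binomial(),
                   offset=beta_1 * x1[rnd]).fit()
    beta_pi, beta_e = fit_p.params

print(beta_1, beta_pi, beta_e)             # beta_pi should land near 0.7
```

Because `beta_pi` is estimated only on the randomized slice, it is protected from the misattribution that the deliberately omitted `x2` would otherwise induce, while the observational data still does the heavy lifting for the quality score — the division of labor the post argues for.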
[ null, "https://2.bp.blogspot.com/-SPMRpqxF5Tg/WJGHwp8ZqQI/AAAAAAAAcBE/V3nSX-o_U0UZ9RSSp8p8HPxsFxtxHQK0gCLcB/s400/puppies.png", null, "https://3.bp.blogspot.com/-yIK17BF_Zh8/VDoby6hXTsI/AAAAAAAABdw/da6BXGx-P3o/s35/10203368276954634.jpg", null, "https://lh3.googleusercontent.com/zFdxGE77vvD2w5xHy6jkVuElKv-U9_9qLkRYK8OnbDeJPtjSZ82UPq5w6hJ-SA=s35", null, "https://lh3.googleusercontent.com/zFdxGE77vvD2w5xHy6jkVuElKv-U9_9qLkRYK8OnbDeJPtjSZ82UPq5w6hJ-SA=s35", null, "https://lh3.googleusercontent.com/zFdxGE77vvD2w5xHy6jkVuElKv-U9_9qLkRYK8OnbDeJPtjSZ82UPq5w6hJ-SA=s35", null, "https://lh3.googleusercontent.com/zFdxGE77vvD2w5xHy6jkVuElKv-U9_9qLkRYK8OnbDeJPtjSZ82UPq5w6hJ-SA=s35", null, "https://lh3.googleusercontent.com/zFdxGE77vvD2w5xHy6jkVuElKv-U9_9qLkRYK8OnbDeJPtjSZ82UPq5w6hJ-SA=s35", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9407803,"math_prob":0.96891034,"size":40079,"snap":"2020-45-2020-50","text_gpt3_token_len":7887,"char_repetition_ratio":0.14667499,"word_repetition_ratio":0.7570666,"special_character_ratio":0.19192095,"punctuation_ratio":0.081061795,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9636868,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,9,null,5,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-30T04:43:16Z\",\"WARC-Record-ID\":\"<urn:uuid:a0152508-a233-488d-b7b1-a15ba1cf3d9b>\",\"Content-Length\":\"135253\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:63ea3faf-1fb6-4202-b46f-ae43dc7d4827>\",\"WARC-Concurrent-To\":\"<urn:uuid:bdbf51a8-2307-45fe-893a-49aa119d4cb5>\",\"WARC-IP-Address\":\"172.217.9.211\",\"WARC-Target-URI\":\"https://www.unofficialgoogledatascience.com/2017/01/causality-in-machine-learning.html\",\"WARC-Payload-Digest\":\"sha1:F5BNNHHIBE4L5BNYYADRFBQHAPBOANRT\",\"WARC-Block-Digest\":\"sha1:UYLTPJET4NWPBXU6EN4KDNUT3A2NBKDX\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141205147.57_warc_CC-MAIN-20201130035203-20201130065203-00476.warc.gz\"}"}
https://cs.stackexchange.com/questions/23928/deterministic-multi-tape-turing-machine-construction
[ "# Deterministic Multi-tape Turing Machine construction\n\nI'm trying to construct a deterministic multi-tape turing machine for the following language in order to show that $L$ is in $DTIME(n)$:\n\n$$L = \\{ www \\mid w \\in \\{a,b\\}^+ \\}$$\n\nI'm not sure how to get started. Any hints would be appreciated.\n\n• Welcome to Computer Science! Note that you can use LaTeX here to typeset mathematics in a more readable way. See here for a short introduction. Apr 19 '14 at 6:59\n\nYou could copy the input to 3 tapes, then move the heads on tapes 2 and 3 until they point to the same substring and on tape 3, the end of substring matches the end of string.\n\nThe exact steps could be..\n\n1. copy input\n2. erase first symbol on tape two and first two symbols on tape three\n3. go forward on all tapes until...\n4. if one of the symbols is different than on other tapes, go back to the start of each tape and return to step 2\n5. if tape three reaches end of input, you are done\n\n• \"move the heads on tapes 2 and 3 until they point ...\" That is hard given the fact these positions are not marked in the input. Apr 19 '14 at 9:54\n• @HendrikJan, you do string comparisons as you go. It's not that hard. Apr 19 '14 at 11:40\n• But string comparisons is much harder than the copying part in your hint, in my view. I would compute the position by dividing into 3, which is a nice trick when having two tapes. Apr 20 '14 at 0:50" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89792573,"math_prob":0.9061701,"size":1162,"snap":"2021-43-2021-49","text_gpt3_token_len":299,"char_repetition_ratio":0.097582035,"word_repetition_ratio":0.3534884,"special_character_ratio":0.26333907,"punctuation_ratio":0.10612245,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9714697,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-04T08:12:58Z\",\"WARC-Record-ID\":\"<urn:uuid:a06f7117-8852-44d7-b049-ca24196717c4>\",\"Content-Length\":\"138039\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:98b2a356-112b-4068-99fd-d62fb7604ed6>\",\"WARC-Concurrent-To\":\"<urn:uuid:c9c0a0e5-f546-406f-a319-95e5d635fe88>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://cs.stackexchange.com/questions/23928/deterministic-multi-tape-turing-machine-construction\",\"WARC-Payload-Digest\":\"sha1:LUGKQAUFDZBIOW3ZGS2LRWCBTLD3NTNC\",\"WARC-Block-Digest\":\"sha1:EJ526GTALPYYGGPLZG34IVA2SEYEZ2XN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964362952.24_warc_CC-MAIN-20211204063651-20211204093651-00233.warc.gz\"}"}
http://www.luyenthianhvan.org/2007/06/c-hiu-toefl-bi-4.html
[ "# Online English Test", null, "Level A Level B Level C TOEFL Incorrect word TOEFL reading comprehension Synonym TOEFL ---Choice--- Lesson 001 Lesson 002 Lesson 003 Lesson 004 Lesson 005 Lesson 006 Lesson 007 Lesson 008 Lesson 009 Lesson 010 Lesson 011 Lesson 012 Lesson 013 Lesson 014 Lesson 015 Lesson 016 Lesson 017 Lesson 018 Lesson 019 Lesson 020 Lesson 021 Lesson 022 Lesson 023 Lesson 024 Lesson 025 Lesson 026 Lesson 027 Lesson 028 Lesson 029 Lesson 030 Lesson 031 Lesson 032 Lesson 033 Lesson 034 Lesson 035 Lesson 036 Lesson 037 Lesson 038 Lesson 039 Lesson 040 ---Choice--- Lesson 001 Lesson 002 Lesson 003 Lesson 004 Lesson 005 Lesson 006 Lesson 007 Lesson 008 Lesson 009 Lesson 010 Lesson 011 Lesson 012 Lesson 013 Lesson 014 Lesson 015 Lesson 016 Lesson 017 Lesson 018 Lesson 019 Lesson 020 Lesson 021 Lesson 022 Lesson 023 ---Choice--- Lesson 001 Lesson 002 Lesson 003 Lesson 004 Lesson 005 Lesson 006 Lesson 007 Lesson 008 Lesson 009 Lesson 010 Lesson 011 Lesson 012 Lesson 013 Lesson 014 Lesson 015 Lesson 016 Lesson 017 Lesson 018 Lesson 019 Lesson 020 Lesson 021 Lesson 022 Lesson 023 Lesson 024 Lesson 025 Lesson 026 Lesson 027 Lesson 028 Lesson 029 Lesson 030 Lesson 031 Lesson 032 Lesson 033 Lesson 034 Lesson 035 Lesson 036 Lesson 037 Lesson 038 Lesson 039 Lesson 040 Lesson 041 Lesson 042 Lesson 043 Lesson 044 Lesson 045 Lesson 046 Lesson 047 Lesson 048 Lesson 049 Lesson 050 Lesson 051 Lesson 052 Lesson 053 Lesson 054 Lesson 055 Lesson 056 Lesson 057 Lesson 058 Lesson 059 Lesson 060 Lesson 061 Lesson 062 Lesson 063 Lesson 064 Lesson 065 Lesson 066 Lesson 067 Lesson 068 Lesson 069 Lesson 070 Lesson 071 Lesson 072 Lesson 073 Lesson 074 Lesson 075 Lesson 076 Lesson 077 Lesson 078 Lesson 079 Lesson 080 Lesson 081 Lesson 082 Lesson 083 Lesson 084 Lesson 085 Lesson 086 Lesson 087 Lesson 088 Lesson 089 Lesson 090 Lesson 091 Lesson 092 Lesson 093 Lesson 094 Lesson 095 Lesson 096 Lesson 097 Lesson 098 Lesson 099 Lesson 100 Lesson 101 Lesson 102 Lesson 103 ---Choice--- Lesson 001 Lesson 002 Lesson 003 Lesson 004 Lesson 005 Lesson 006 Lesson 007 Lesson 008 Lesson 009 Lesson 010 Lesson 011 Lesson 012 Lesson 013 Lesson 014 Lesson 015 Lesson 016 Lesson 017 Lesson 018 Lesson 019 Lesson 020 Lesson 021 Lesson 022 Lesson 023 Lesson 024 Lesson 025 Lesson 026 Lesson 027 Lesson 028 Lesson 029 Lesson 030 Lesson 031 Lesson 032 Lesson 033 Lesson 034 Lesson 035 Lesson 036 Lesson 037 Lesson 038 Lesson 039 Lesson 040 Lesson 041 ---Choice--- Lesson 001 Lesson 002 Lesson 003 Lesson 004 Lesson 005 Lesson 006 Lesson 007 Lesson 008 Lesson 009 Lesson 010 Lesson 011 Lesson 012 Lesson 013 Lesson 014 Lesson 015 Lesson 016 Lesson 017 Lesson 018 Lesson 019 Lesson 020 Lesson 021 Lesson 022 Lesson 023 Lesson 024 Lesson 025 Lesson 026 Lesson 027 Lesson 028 Lesson 029 Lesson 030 Lesson 031 Lesson 032 Lesson 033 Lesson 034 Lesson 035 Lesson 036 Lesson 037 Lesson 038 Lesson 039 Lesson 040 Lesson 041 Lesson 042 Lesson 043 Lesson 044 Lesson 045 Lesson 046 Lesson 047 Lesson 048 Lesson 049 Lesson 050 Lesson 051 ---Choice--- Lesson 001 Lesson 002 Lesson 003 Lesson 004 Lesson 005 Lesson 006 Lesson 007 Lesson 008 Lesson 009 Lesson 010 Lesson 011 Lesson 012 Lesson 013 Lesson 014 Lesson 015 Lesson 016 Lesson 017 Lesson 018 Lesson 019 Lesson 020 Lesson 021 Lesson 022 Lesson 023 Lesson 024 Lesson 025 Lesson 026 Lesson 027 Lesson 028 Lesson 029 Lesson 030 Lesson 031 Lesson 032 Lesson 033 Lesson 034 Lesson 035 Lesson 036 Lesson 037 Lesson 038 Lesson 039 Lesson 040 Lesson 041 Lesson 042 Lesson 043 Lesson 044 Lesson 045 Lesson 046 Lesson 047 
Lesson 048 Lesson 049 Lesson 050 Lesson 051 Lesson 052 Lesson 053 Lesson 054 Lesson 055 Lesson 056 Lesson 057 Lesson 058 Lesson 059 Lesson 060 Lesson 061 Lesson 062 Lesson 063 Lesson 064 Lesson 065 Lesson 066 Lesson 067 Lesson 068 Lesson 069 Lesson 070 Lesson 071 Lesson 072 Lesson 073 Lesson 074 Lesson 075 Lesson 076 Lesson 077 Lesson 078 Lesson 079 Lesson 080 Lesson 081 Lesson 082 Lesson 083 Lesson 084 Lesson 085 Lesson 086 Lesson 087 Lesson 088 Lesson 089 Lesson 090 Lesson 091 Lesson 092 Lesson 093 Lesson 094 Lesson 095 Lesson 096 Lesson 097 Lesson 098 Lesson 099 Lesson 100 Lesson 101 Lesson 102 Lesson 103 Lesson 104 Lesson 105 Lesson 106 Lesson 107 Lesson 108 Lesson 109 Lesson 110 Lesson 111 Lesson 112 Lesson 113 Lesson 114 Lesson 115 Lesson 116 Lesson 117 Lesson 118 Lesson 119 Lesson 120 Lesson 121 Lesson 122 Lesson 123 Lesson 124 Lesson 125 Lesson 126 Lesson 127 Lesson 128 Lesson 129 Lesson 130 Lesson 131 Lesson 132 Lesson 133 Lesson 134 Lesson 135 Lesson 136 Lesson 137 Lesson 138 Lesson 139 Lesson 140 Lesson 141 Lesson 142 Lesson 143 Lesson 144 Lesson 145 Lesson 146 Lesson 147 Lesson 148 Lesson 149 Lesson 150 Lesson 151 Lesson 152 Lesson 153 Lesson 154 Lesson 155 Lesson 156 Lesson 157 ---Choice--- Lesson 001 Lesson 002 Lesson 003 Lesson 004 Lesson 005 Lesson 006 Lesson 007 Lesson 008 Lesson 009 Lesson 010 Lesson 011 Lesson 012 Lesson 013 Lesson 014 Lesson 015 Lesson 016 Lesson 017 Lesson 018 Lesson 019 Lesson 020 Lesson 021 Lesson 022 Lesson 023 Lesson 024 Lesson 025 Lesson 026 Lesson 027 Lesson 028 Lesson 029 Lesson 030 Lesson 031 Lesson 032 Lesson 033 Lesson 034 Lesson 035 Lesson 036 Lesson 037 Lesson 038 Lesson 039 Lesson 040 Lesson 041 Lesson 042 Lesson 043 Lesson 044 Lesson 045 Lesson 046\n\n### Reading comprehension -TOEFL- Lesson 4 (Đọc hiểu -TOEFL- Bài 4)\n\nĐọc đoạn văn sau và trả lời các câu hỏi:\n\nA pilot cannot fly a plane by sight alone. In many conditions, such as flying at night and landing in dense fog, a pilot must use radar, an alternative way of navigating. Since human eyes are not very good at determining speeds of approaching objects, radar can show a pilot 5 how fast nearby planes are moving.\n\n1. What is the main topic of this passage?\n\n2. In line 2, the word \"dense\" could be replaced by\n\n3. According to the passage, what can radar detect besides location of objects?\n\n4. The word \"shouts\" in line 8 is most similar in meaning to which of the following?\n\n5. Which of the following words best describes the tone of this passage?\n\n6. The phrase \"a burst\" in line 13 is closest in meaning in which of the following?\n\n7. The word \"it\" in line 13 refers to which of the following?\n\n8. Which of the following could best replace the word \"bounce\" in line 13?\n\n9. Which type of waves does radar use?\n\n10. The word \"tracking\" in line 20 is closest in meaning to which of the following?\n\n11. Which of the following would most likely be the topic of the next paragraph?\n\n 1->25", null, "26->49", null, "50->75", null, "76->99", null, "100->125", null, "126->164\n Ôn Tập Ngữ Pháp Phần 1", null, "Ôn Tập Ngữ Pháp Phần 2" ]
[ null, "http://datablue.luyenthianhvan.org/luyenthianhvan/add.gif", null, "http://datablue.luyenthianhvan.org/luyenthianhvan/spacer.gif", null, "http://datablue.luyenthianhvan.org/luyenthianhvan/spacer.gif", null, "http://datablue.luyenthianhvan.org/luyenthianhvan/spacer.gif", null, "http://datablue.luyenthianhvan.org/luyenthianhvan/spacer.gif", null, "http://datablue.luyenthianhvan.org/luyenthianhvan/spacer.gif", null, "http://datablue.luyenthianhvan.org/luyenthianhvan/spacer.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.94209176,"math_prob":0.519438,"size":3144,"snap":"2020-24-2020-29","text_gpt3_token_len":715,"char_repetition_ratio":0.1312102,"word_repetition_ratio":0.59893996,"special_character_ratio":0.2302799,"punctuation_ratio":0.1131783,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99989915,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-08T08:37:27Z\",\"WARC-Record-ID\":\"<urn:uuid:80d9d907-e24a-47d4-97e9-ae861e74a9a5>\",\"Content-Length\":\"233939\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cc13b126-1974-4a1c-a36c-e4c0f0d112b6>\",\"WARC-Concurrent-To\":\"<urn:uuid:c19d5bb0-09a3-4aab-b20d-4c8e79a05466>\",\"WARC-IP-Address\":\"172.217.8.19\",\"WARC-Target-URI\":\"http://www.luyenthianhvan.org/2007/06/c-hiu-toefl-bi-4.html\",\"WARC-Payload-Digest\":\"sha1:5LWIE5I5FVQROQZOE2EOD3NFNSKEDR47\",\"WARC-Block-Digest\":\"sha1:B4QL6STNIN57ANUWY4W2Q4SVGIRYS6FJ\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655896905.46_warc_CC-MAIN-20200708062424-20200708092424-00543.warc.gz\"}"}
https://mathematica.stackexchange.com/questions/11348/make-a-density-list-plot-histogram-from-large-pre-binned-data-set?noredirect=1
[ "# Make a density list plot/histogram from large, pre-binned data set?\n\nI have a large data set consisting of $\\mathcal{O}(10^9)$ two-dimensional points. In order to save memory and time I have pre-binned these into a uniform grid of $500 \\times 500$ bins using Fortran. When imported into Mathematica 8.0 as a table the resulting data look like:\n\ndata = {{0.388348, 0.388349, 9},{0.388348, 0.776699, 23},...},\n\nwhere the first two items of each entry correspond to the $x$-$y$-coordinates of the upper-right-hand corner of the bin and the third is the count.\n\n## Edit:\n\n• For a sample of the raw data, raw=RandomReal[1,{1000000000,2}] is a good approximation. This is obviously unworkable.\n\n• For the binned data: binned=Table[{.01*Ceiling[raw[[i,1]]/.01],.01*Ceiling[raw[[i,2]]/.01],RandomInteger},{i,1,250000}].\n\nI would like to plot this pre-binned data set in the form of a DensityHistogram, but my data format doesn't fit into what this function is expecting. I have reviewed a similar question for one-dimensional histograms at Histograms with pre-counted data, however I'm at a loss as to how to apply this to 2-D. I have also looked at doing\n\nImage[Rescale[data]]\n\non the raw data. However, this crashes immediately with a SIGSEGV error that has the Wolfram Support team puzzled. Consequently, I haven't gone very far down this road.\n\n## Edit:\n\n• I have also tried ListDensityPlot[data,InterpolationOrder->0]. For the full data set, Mathematica hangs for over 10 minutes, at which point it runs out of memory and the kernel shuts down. For a subset of the data, I get something more reasonable, but I would need some way to scale this up to $500^2$ data points.\n\nMaking these plots seem to be something that is fairly easily done in Matplotlib, but I have already made some other plots in Mathematica and don't want to mess with different styles. I'm fairly new to Mathematica and don't have a good knowledge of all the functionality, unfortunately.\n\nSo, how can I make a DensityHistogram when the bins and counts have already been calculated?\n\n• ListPlot3D[data, InterpolationOrder -> 0, Filling -> Bottom, Mesh -> None] - can you try this and tell me what you see? BTW, welcome to MSE! Also can you upload somewhere your binned and original data sets and provide a link? – Vitaliy Kaurov Oct 1 '12 at 3:47\n• @VitaliyKaurov--With the $\\mathcal{O}(500^2)$ binned set, this ran out of memory. With 1000 points it gave me a 3D plot with the $z$-axis as the count number. I would need a flat, 2D plot that is similar to DensityHistogram. The binned data looks like what I have given above, just $500^2$ of them, and the unbinned data looks similar to just the first two items in each entry. – cosmoguy Oct 1 '12 at 3:58\n• ListDensityPlot[data, ColorFunction -> \"SouthwestColors\"] - then try this and let us know the result. – Vitaliy Kaurov Oct 1 '12 at 4:04\n• @VitaliyKaurov--This was what I tried first, actually. I really need two things: 1) no or very little interpolation and 2) the full $500^2$ data set plotted. With a simple ListDensityPlot Mathematica just hangs forever (10+ minutes) and I'm too impatient to see what the results are. With a truncated data set I get a washed-out density plot that loses sight of substructure. With InterpolationOrder->0 it's starting to resemble what I want, but the plotting time is still very slow. – cosmoguy Oct 1 '12 at 4:14\n• what about uniform binning? 
binned = BinCounts[raw, {0, 1, 1/100.}, {0, 1, 1/100.}]*1.; binned /= Max[binned]; binned // Image; it works for 10^8 points in about 20 seconds – chris Oct 1 '12 at 6:59\n\n## 1 Answer\n\nYou may be able to use the new WeightedData in version 9 with HistogramDistribution to create a weighted histogram. I've reduced the number of points for speed but it should hopefully scale to your actual problem.\n\nraw = RandomReal[1, {10000000, 2}];\n\nbinned = Table[{.01*Ceiling[raw[[i, 1]]/.01], .01*\nCeiling[raw[[i, 2]]/.01], RandomInteger}, {i, 1, 25000}];\n\n\nNow I create the WeightedData using your bin counts and fit a HistogramDistribution to them. Note that you can set a different binning if you choose but I'm using the automatic binning.\n\nwd = WeightedData[binned[[All, 1 ;; 2]], binned[[All, 3]]];\n\nhd = HistogramDistribution[wd];\n\n\nNow to use DensityPlot to visualize the PDF.\n\nDensityPlot[PDF[hd, {x, y}], {x, 0.01, 1}, {y, 0.01, 1},\nPlotRange -> All, Exclusions -> None, PlotPoints -> 50,\nPlotLegends -> Automatic]", null, "" ]
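For readers working outside Mathematica, the same idea of plotting counts that were binned beforehand can be sketched with numpy and matplotlib. This is only an illustrative stand-in, not the WeightedData/HistogramDistribution approach from the answer; the 0.01 bin width and the (x corner, y corner, count) column layout follow the toy `binned` example in the question, and the sample rows below are invented.

```python
import numpy as np
import matplotlib.pyplot as plt

step = 0.01                                  # assumed bin width from the question's example
binned = np.array([[0.39, 0.39, 9.0],        # (x upper corner, y upper corner, count)
                   [0.39, 0.78, 23.0],
                   [0.55, 0.55, 41.0]])      # toy stand-in for the real pre-binned table

n_bins = int(round(1 / step))
grid = np.zeros((n_bins, n_bins))
ix = np.clip(np.round(binned[:, 0] / step).astype(int) - 1, 0, n_bins - 1)
iy = np.clip(np.round(binned[:, 1] / step).astype(int) - 1, 0, n_bins - 1)
grid[iy, ix] = binned[:, 2]                  # scatter the pre-computed counts onto the grid

plt.imshow(grid, origin="lower", extent=(0, 1, 0, 1), aspect="auto")
plt.colorbar(label="counts per bin")
plt.show()
```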
[ null, "https://i.stack.imgur.com/lOV7r.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9097718,"math_prob":0.89890647,"size":2712,"snap":"2021-21-2021-25","text_gpt3_token_len":747,"char_repetition_ratio":0.09342688,"word_repetition_ratio":0.018518519,"special_character_ratio":0.29056048,"punctuation_ratio":0.15669014,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98016864,"pos_list":[0,1,2],"im_url_duplicate_count":[null,8,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-13T23:43:59Z\",\"WARC-Record-ID\":\"<urn:uuid:a302c936-9952-416d-b210-2770aa422e59>\",\"Content-Length\":\"171694\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:628b4280-118b-4f0d-970c-e0d378afd0c5>\",\"WARC-Concurrent-To\":\"<urn:uuid:3eb5e39a-0846-4a1e-8eb7-8faaa64d59dd>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://mathematica.stackexchange.com/questions/11348/make-a-density-list-plot-histogram-from-large-pre-binned-data-set?noredirect=1\",\"WARC-Payload-Digest\":\"sha1:BZXVYWIPA7QSTOYXOM5D2UOPK7LLSQRU\",\"WARC-Block-Digest\":\"sha1:UFGCCL2VOBE7HOUHTHV4CITPTDJAJADK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487611089.19_warc_CC-MAIN-20210613222907-20210614012907-00600.warc.gz\"}"}
https://socratic.org/questions/how-do-you-find-the-sum-of-the-geometric-sequence-2-4-8-if-there-are-20-terms
[ "# How do you find the sum of the geometric sequence 2,4,8...if there are 20 terms?\n\nJun 19, 2018\n\ncolor(indigo)(S_(20) = (a (r^n-1)) / (r - 1) = 2097150\n\n#### Explanation:", null, "\"Sum of n terms of a G S = S_n = (a (r)^n-1 ))/ (r-1)\n\nwhere a is the first term, n the no. of terms and r the common ratio\n\n$a = 2 , n = 20 , r = {a}_{2} / a = {a}_{3} / {a}_{2} = \\frac{4}{2} = \\frac{8}{4} = 2$\n\n${S}_{20} = \\frac{2 \\cdot \\left({2}^{20} - 1\\right)}{2 - 1}$\n\n${S}_{20} = 2 \\cdot \\left({2}^{20} - 1\\right) = 2097150$" ]
[ null, "https://useruploads.socratic.org/DiuefHu9TiKSpBAxSDRe_Sum%20of%20G%20P.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.78652686,"math_prob":1.0000038,"size":313,"snap":"2021-43-2021-49","text_gpt3_token_len":82,"char_repetition_ratio":0.1262136,"word_repetition_ratio":0.0,"special_character_ratio":0.26517573,"punctuation_ratio":0.14285715,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000033,"pos_list":[0,1,2],"im_url_duplicate_count":[null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-25T20:52:32Z\",\"WARC-Record-ID\":\"<urn:uuid:7567493a-4e90-4dcd-b20d-7d0f5e64d059>\",\"Content-Length\":\"33207\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3e66d8c7-74ba-4152-ac48-f7e7285b9654>\",\"WARC-Concurrent-To\":\"<urn:uuid:75da0a5d-296b-4490-be4c-b20bcee6ea65>\",\"WARC-IP-Address\":\"216.239.38.21\",\"WARC-Target-URI\":\"https://socratic.org/questions/how-do-you-find-the-sum-of-the-geometric-sequence-2-4-8-if-there-are-20-terms\",\"WARC-Payload-Digest\":\"sha1:2WWDWGLBVXVZELYA6H3BIKIAS6JZZLBY\",\"WARC-Block-Digest\":\"sha1:FEYXVAPZQD6R23QWBWQJ3CMIAY3LBN6L\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323587767.18_warc_CC-MAIN-20211025185311-20211025215311-00129.warc.gz\"}"}
https://statisticsglobe.com/change-number-decimal-places-on-axis-tick-labels-plot-r
[ "# Change Number of Decimal Places on Axis Tick Labels of Plot in R (2 Examples)\n\nIn this article, I’ll explain how to modify the number of decimals on the axis tick labels of a plot in the R programming language.\n\nThe content of the article looks as follows:\n\nLet’s dive into it…\n\n## Creation of Example Data\n\nHave a look at the following example data.\n\n```data <- data.frame(x = seq(0.5, 1, 0.1), # Create example data frame y = c(5, 2, 9, 7, 5, 9)) data # Print example data frame```", null, "Table 1 shows the structure of our example data: It contains six rows and two numerical variables.\n\n## Example 1: Change Number of Axis Label Decimals in Base R Plot\n\nIn Example 1, I’ll explain how to adjust the decimal places on axis tick labels of a Base R graphic.\n\nLet’s first draw a Base R plot with default axis settings:\n\n`plot(data) # Draw Base R plot with default decimals`", null, "As shown in Figure 1, we have plotted a Base R scatterplot with only one decimal place on the x-axis by running the previous R syntax.\n\nIf we want to change the number of decimal places, we first have to specify the axis positions at which the new labels should be added:\n\n```my_label_positions <- seq(min(data\\$x), # Specify axis positions of labels max(data\\$x), length = 6) my_label_positions # Print axis positions of labels # 0.5 0.6 0.7 0.8 0.9 1.0```\n\nNext, we can use the sprintf function to create a character vector containing our axis tick labels with a different number of decimal places:\n\n```my_labels <- sprintf(my_label_positions, # Specify axis labels fmt = '%#.3f') my_labels # Print axis labels # \"0.500\" \"0.600\" \"0.700\" \"0.800\" \"0.900\" \"1.000\"```\n\nFinally, we can use the axis function to manually add our new axis tick labels to our plot:\n\n```plot(data, # Draw Base R plot without x-axis xaxt = \"n\") axis(1, # Manually add axis tick labels at = my_label_positions, labels = my_labels)```", null, "After running the previous R syntax the Base R scatterplot with manually specified decimal places on the x-axis shown in Figure 2 has been created.\n\n## Example 2: Change Number of Axis Label Decimals in ggplot2 Plot\n\nIn Example 2, I’ll explain how to change the number of decimals in ggplot2 graphs.\n\nFirst, we need to install and load the ggplot2 package:\n\n```install.packages(\"ggplot2\") # Install ggplot2 package library(\"ggplot2\") # Load ggplot2 package```\n\nIn the next step, we can create a default ggplot2 plot as shown below:\n\n```ggp <- ggplot(data, aes(x, y)) + # Create ggplot2 plot with default decimals geom_point() ggp # Draw ggplot2 plot with default decimals```", null, "As shown in Figure 3, the previous R code has created a ggplot2 scatterplot with a default number of digits after the decimal point.\n\nIn order to modify the decimal places on our axis labels, we need to install and load the scales package.\n\n```install.packages(\"scales\") # Install scales package library(\"scales\") # Load scales```\n\nNow, we can apply the number_format function and the to specify the accuracy argument to specify a certain accuracy of our axis tick labels.\n\nNote that the following R syntax uses the scale_x_continuous function to change the x-axis values. 
If we would like to adjust the y-axis, we would have to use the scale_y_continuous function instead.

However, let's draw our graphic:

```ggp +                                     # Modify decimal places on ggplot2 plot axis
  scale_x_continuous(labels = number_format(accuracy = 0.001))```", null, "Figure 4 reveals the output of the previous code – A ggplot2 scatterplot with more decimal places on the x-axis.

## Video & Further Resources

Have a look at the following video on my YouTube channel. I'm explaining the R programming codes of this article in the video:

The YouTube video will be added soon.

In addition, you might read the other tutorials on my homepage. A selection of tutorials is shown below.

You have learned on this page how to change the number of decimal places on the axis tick labels of a plot in the R programming language. If you have additional comments or questions, let me know in the comments section." ]
[ null, "https://statisticsglobe.com/wp-content/uploads/2022/04/table-1-data-frame-change-number-decimal-places-on-axis-tick-labels-plot-r.png", null, "https://statisticsglobe.com/wp-content/uploads/2022/04/figure-1-plot-change-number-decimal-places-on-axis-tick-labels-plot-r.png", null, "https://statisticsglobe.com/wp-content/uploads/2022/04/figure-2-plot-change-number-decimal-places-on-axis-tick-labels-plot-r.png", null, "https://statisticsglobe.com/wp-content/uploads/2022/04/figure-3-plot-change-number-decimal-places-on-axis-tick-labels-plot-r.png", null, "https://statisticsglobe.com/wp-content/uploads/2022/04/figure-4-plot-change-number-decimal-places-on-axis-tick-labels-plot-r.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.63993126,"math_prob":0.92519623,"size":5374,"snap":"2022-40-2023-06","text_gpt3_token_len":1371,"char_repetition_ratio":0.15158287,"word_repetition_ratio":0.34288865,"special_character_ratio":0.27000374,"punctuation_ratio":0.11228407,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9963414,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-27T09:05:49Z\",\"WARC-Record-ID\":\"<urn:uuid:f30f0fcd-0eed-4d07-9a35-9b3b53e910e3>\",\"Content-Length\":\"182366\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:22408748-2453-48e6-8b09-f746c984d01e>\",\"WARC-Concurrent-To\":\"<urn:uuid:50564ece-90bf-4718-942f-eb1c052653ee>\",\"WARC-IP-Address\":\"217.160.0.159\",\"WARC-Target-URI\":\"https://statisticsglobe.com/change-number-decimal-places-on-axis-tick-labels-plot-r\",\"WARC-Payload-Digest\":\"sha1:DQTYDSVXYYHVFL6FPCNOWPTIHBYSLQ2M\",\"WARC-Block-Digest\":\"sha1:WDOT5NIFHALNXHIAYO7Q7WGZXIS3V5HD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030334992.20_warc_CC-MAIN-20220927064738-20220927094738-00286.warc.gz\"}"}
http://mathebook.net/dict/idict/iproportion.htm
[ "HOME MATH DICTIONARY DOWNLOAD FEEDBACK DISCLAIMER\n Question: What do you mean by Inverse Proportion ? Answer: When the average speed for a journey of 100 km is doubled, the time taken is halved. The time taken, t hours is said to be inversely proportional to the speed u km per hour. As u increases, t decreases according to the algebraic relationship t = 100 / u In general when y is inversely proportional to x there is an algebraic relationship between x and y of the form y = a / x Where a is a constant. y is inversely proportional to x is often written y", null, "1 / x ." ]
[ null, "http://mathebook.net/dict/images/prop.bmp", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.88899314,"math_prob":0.98125863,"size":468,"snap":"2021-21-2021-25","text_gpt3_token_len":120,"char_repetition_ratio":0.125,"word_repetition_ratio":0.043010753,"special_character_ratio":0.25,"punctuation_ratio":0.08163265,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9635113,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-19T13:06:15Z\",\"WARC-Record-ID\":\"<urn:uuid:f083ddb2-f67a-4936-bd07-32b12811d83a>\",\"Content-Length\":\"5665\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:47ed1af4-9d79-44f0-915c-215598a0098d>\",\"WARC-Concurrent-To\":\"<urn:uuid:175969c5-4ec4-443f-8529-f36983df7cf2>\",\"WARC-IP-Address\":\"74.208.236.173\",\"WARC-Target-URI\":\"http://mathebook.net/dict/idict/iproportion.htm\",\"WARC-Payload-Digest\":\"sha1:TRFENT3SS3LCG5HZ2OLUNHTYOKUON62T\",\"WARC-Block-Digest\":\"sha1:WFPWZH4C3BRNKXMLVP2YQAKTYAXZT3ER\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623487648194.49_warc_CC-MAIN-20210619111846-20210619141846-00248.warc.gz\"}"}
https://math.stackexchange.com/questions/1782975/sum-of-powers-of-primitive-root-of-unity-trig-proof
[ "Sum of powers of primitive root of unity- Trig Proof\n\nI'm trying to prove that if $z=\\operatorname{cis}(2\\pi/n) = \\cos(2\\pi/n) + i\\sin(2\\pi/n)$, that is, $z$ is a primitive $n$-th root of unity, for any integer $n\\geq 2$, $1+z+z^2+\\cdots+z^{n-1}=0$. I've already come across a nice and concise proof here, and that same link also has a comment pointing out that it's just a geometric sum which can be expressed as $\\dfrac{1-\\operatorname{cis}^n(2\\pi/n)}{1-\\operatorname{cis}(2\\pi/n)}$ which is just $0$ in the numerator. However, I was wondering if I could do it just using trig functions. It's an inefficient way of proving it, but I was fixated on this approach for so long I was wondering if someone knew how to do it.\n\nProving that the imaginary part is $0$ is easy- you just use the identity $\\sin(a)+\\sin(b)=2\\sin(\\frac{a+b}{2})\\sin(\\frac{a-b}{2})$ and for each integer $j$ where $0< j<n$, pair $i\\sin(2\\pi j/n)$ with $i\\sin(2\\pi (n-j)/n)$ to get $0$. (If $n$ is even, $i\\sin(\\pi)$ can't be paired, but that's of course $0$ as well.)\n\nThis same approach doesn't work for the real part- using the identity $\\cos(a)+ \\cos(b) =2\\cos(\\frac{a+b}{2})\\cos(\\frac{a-b}{2})$, and adding the same pairs gets $2\\cos(2\\pi)\\cos(2\\pi(n-2j)/n)=2\\cos(2\\pi(n-2j)/n)$ so this gets $1+2\\sum_{j=1}^{\\lfloor n/2 \\rfloor}\\cos(2\\pi(n-2j)/n)$ with $\\cos(\\pi/n)=-1$ added if $n$ is even. Then I need to show that that sum is $0$ if $n$ is even and $-1/2$ if $n$ is odd. Is there a clean way of doing this? The only thing I can think to do is repeat the sum of $\\cos$ identity, and that doesn't seem too helpful.\n\n• May 13 '16 at 3:15\n\nUse the identity $$\\displaystyle\\sum\\limits_{m=0}^{n-1} \\cos(mx+y)=\\frac{\\cos\\left(\\dfrac{n-1}{2}x+y\\right)\\sin\\left(\\dfrac{n}{2}\\, x\\right)}{\\sin\\left(\\dfrac{x}{2}\\right)}$$\nand evaluate where $x=2\\pi/n$ and $y=0$ to deduce that the real part is zero.\n• Thanks for the answer. lab pointed out a proof to this in the comments, but there's a slight difference between them. Should that be $\\operatorname{cos}$ and not $\\operatorname{sin}$ as the first factor in the numerator? May 13 '16 at 15:42" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9158002,"math_prob":0.99971056,"size":1535,"snap":"2022-05-2022-21","text_gpt3_token_len":542,"char_repetition_ratio":0.11495754,"word_repetition_ratio":0.0,"special_character_ratio":0.35374594,"punctuation_ratio":0.06077348,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99999785,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-17T23:27:44Z\",\"WARC-Record-ID\":\"<urn:uuid:1dcb30fe-438f-40dc-8925-9ddb81a37866>\",\"Content-Length\":\"138170\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bbe91bae-6e5d-4d76-a463-947a87d0d4c8>\",\"WARC-Concurrent-To\":\"<urn:uuid:8dbeb9c5-a3c5-4de5-b44f-f9fd9566512a>\",\"WARC-IP-Address\":\"151.101.1.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/1782975/sum-of-powers-of-primitive-root-of-unity-trig-proof\",\"WARC-Payload-Digest\":\"sha1:PD4P6YTQF3ESFZVOJZBA7DV2IPHQC7DD\",\"WARC-Block-Digest\":\"sha1:W6XLILIYE4ZAYCTAUY6I6B6SKE5IKXHY\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320300624.10_warc_CC-MAIN-20220117212242-20220118002242-00294.warc.gz\"}"}
https://www.studypug.com/micro-econ-help/econ-consumer-and-producer-surplus
[ "# Consumer & producer surplus", null, "#### Everything You Need in One Place\n\nHomework problems? Exam preparation? Trying to grasp a concept or just brushing up the basics? Our extensive help & practice library have got you covered.", null, "#### Learn and Practice With Ease\n\nOur proven video lessons ease you through problems quickly, and you get tonnes of friendly practice on questions that trip students up on tests and finals.", null, "#### Instant and Unlimited Help\n\nOur personalized learning platform enables you to instantly find the exact walkthrough to your specific type of question. Activate unlimited help now!", null, "##### Intros\n###### Lessons\n1. Consumer & Producer Surplus Overview:\n2. Consumer Surplus\n• Willing to pay vs actually pay\n• Algebraic Calculation of Consumer Surplus\n• Graphical Calculation of Consumer Surplus\n• An Example\n3. Producer Surplus\n• Price producer receives vs minimum price producer accepts\n• Algebraic Calculation of Producer Surplus\n• Graphical Calculation of Producer Surplus\n• An Example\n4. Economic Surplus\n• The total benefit from consumer and producer\n• Sum of consumer surplus and producer surplus\n• Goal is to maximize economic surplus\n• An Example\n##### Examples\n###### Lessons\n1. Finding the Consumer Surplus\nSuppose the demand curve is P = 500 - 20Q  and  P = 200 + 5Q.\n1. Find the market equilibrium\n2. Find the consumer surplus\n2. Suppose the demand curve is P = 800 - 5Q  and  P = 800 + 5Q.\n1. Find the market equilibrium\n2. Find the consumer surplus\n3. Finding the Producer Surplus\nSuppose the demand curve is P = 400 - 20Q  and  P = 300 + 5Q.\n1. Find the market equilibrium\n2. Find the producer surplus\n4. Suppose the demand curve is P = 300 - 3Q  and  P = 250 + Q.\n1. Find the market equilibrium\n2. Find the producer surplus\n5. Finding the Economic Surplus\nSuppose the demand curve is P = 500 - 10Q  and  P = 300 + 5Q.\n1. Find the market equilibrium\n2. Find the consumer surplus\n3. Find the producer surplus\n4. Find the economic surplus\n###### Topic Notes\n\nConsumer Surplus\n\nConsumer Surplus: the difference between what consumers are willing to pay and what they actually pay.\n\nAlgebraically, we calculate this as\n\nConsumer Surplus = Marginal benefit - Price\n\nGraphically, we calculate this by finding the area under the demand curve and above the price paid, up to the quantity bought. Since the demand and supply curve are linear, most of the consumer surplus we see are triangles.", null, "Recall the area of triangle is:\n\nA = $\\large \\frac{bh}{2}$\n\nProducer Surplus\n\nProducer Surplus: the difference between what price the producers receive from the good and the minimum price the producer is willing to accept.\n\nAlgebraically, we calculate this as\n\nProducer Surplus = Price - Marginal Cost\n\nGraphically, we calculate the area that is above the supply curve and below the price sold, up to the quantity supplied. Once again, the area we see are usually triangles.", null, "Economic Surplus\n\nEconomic Surplus: is the total benefit gained from both the consumer and producer. In other words, it is the sum of the consumer surplus and producer surplus.\n\nEconomic Surplus = Consumer Surplus + Producer Surplus\n\nOur goal is to always maximize economic surplus. Economic surplus is always maximized at the market equilibrium, which we consider to be efficient." ]
[ null, "https://dmn92m25mtw4z.cloudfront.net/img_set/sprite/v1/sprite-1w.png", null, "https://dmn92m25mtw4z.cloudfront.net/img_set/sprite/v1/sprite-1w.png", null, "https://dmn92m25mtw4z.cloudfront.net/img_set/sprite/v1/sprite-1w.png", null, "https://dmn92m25mtw4z.cloudfront.net/img_set/sprite/v1/sprite-1w.png", null, "https://www.studypug.com/micro-econ-help/ https:/dmn92m25mtw4z.cloudfront.net/img_set/econ1-5-1-x-1/v1/econ1-5-1-x-1-455w.jpg", null, "https://www.studypug.com/micro-econ-help/ https:/dmn92m25mtw4z.cloudfront.net/img_set/econ1-5-1-x-2/v1/econ1-5-1-x-2-506w.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8940248,"math_prob":0.9703317,"size":1678,"snap":"2023-40-2023-50","text_gpt3_token_len":405,"char_repetition_ratio":0.16009557,"word_repetition_ratio":0.13907285,"special_character_ratio":0.2431466,"punctuation_ratio":0.06115108,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9892435,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-05T14:23:17Z\",\"WARC-Record-ID\":\"<urn:uuid:47980872-04c9-47bb-98e4-b72c90f8e96a>\",\"Content-Length\":\"231191\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:988bea8f-7d22-410e-adb7-85adaa77382a>\",\"WARC-Concurrent-To\":\"<urn:uuid:0dea0387-ab67-4da6-984f-23679510af83>\",\"WARC-IP-Address\":\"3.238.135.136\",\"WARC-Target-URI\":\"https://www.studypug.com/micro-econ-help/econ-consumer-and-producer-surplus\",\"WARC-Payload-Digest\":\"sha1:HAHV5DJE63BPNYMEJQTJWLMGXCEEUEAI\",\"WARC-Block-Digest\":\"sha1:QDWODQVH2IW276OCKLOLRHTCAMYWYRLH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100551.2_warc_CC-MAIN-20231205140836-20231205170836-00663.warc.gz\"}"}
https://chemistry.stackexchange.com/questions/53919/chemical-formula-for-barium-chloride/53928
[ "# Chemical Formula for Barium Chloride\n\nBarium Chloride is represented as $\\ce{BaCl2}$.\n\nSince chlorine is a diatomic molecule, It should be denoted as $\\ce{Cl2}$.\n\nFormulating, we get\n\n\\begin{array}{|c:cc|}\\hline \\small \\rm Element & \\ce{Ba} & \\ce{Cl2}\\\\ \\small \\rm Valency & 2 & 1 \\\\\\hline \\end{array}\n\nCrisscrossing the valencies of Barium and Chlorine we get $\\ce{Ba(Cl2)2}$ — as opposed to the accepted formula of $\\ce{BaCl2}$. How is it so?\n\n• In barium chloride, the chlorine is not in the form of a diatomic molecule. Think about how ionic compounds are formed. – M.A.R. Jun 19 '16 at 6:17\n• @TIPS How are they formed in this case? – Good Guy Jun 19 '16 at 6:27\n• Valency is a property of an element, not a molecule. – Ivan Neretin Jun 19 '16 at 7:55\n• @IvanNeretin How do you obtain the formula for Barium Chloride then? – Good Guy Jun 19 '16 at 9:12\n• Much like you did, except don't mention $\\ce{Cl2}$ at all. There is just Cl, its valency is 1, and then there is Ba with valency 2, so... – Ivan Neretin Jun 19 '16 at 13:52\n\n## 2 Answers\n\nThe compound barium chloride is not the same thing as barium and chlorine mixed together.\nWhen they react, a barium atom will give up two electrons to form a action, and a chlorine molecule will pick up two electrons to form a pair of chloride ions: $$\\ce{Ba -> Ba^2+ +2e^-}$$ $$\\ce{Cl2 +2e^- -> 2Cl^-}$$ When you have both of those things at once, the electrons are \"consumed\" as fast as they are \"produced\", so they don't appear at all in the result: $$\\ce{Ba +Cl2->Ba^2+ +2Cl^-}$$ which forms an ionic lattice when solid. Since this lattice has overall neutral charge, its ionic charges must balance with integer coefficients.\nFunnily enough, these coefficients are $1$ and $2$ for $2+$ and $1-$ respectively, so these are applied to the ions which carry those charges. Thus: $$\\ce{Ba1Cl2}$$ or more simply and directly: $$\\ce{BaCl2}$$\n\n• If you think you have the reaction right, you should be able to give the half-equations for it, such that the conservation of atomic numbers and conservation of net charge holds. Your proposed formula would require something to break at least one, probably both of those. – Nij Jun 19 '16 at 11:36\n\nWhen chlorine is in its free state it is diatomic. But when it reacts with barium it is not in the form of $\\ce{Cl2}$. It will be in its ionic state which is $\\ce{Cl-}$. The same goes for Barium. Barium is mono-atomic and its ionic state is $\\ce{Ba^2+}$. Barium gives one electron to a chlorine atom and another electron to another chlorine atom, as valency of chlorine is 1, so it is $\\ce{BaCl2}$." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9422478,"math_prob":0.99549896,"size":1237,"snap":"2021-21-2021-25","text_gpt3_token_len":350,"char_repetition_ratio":0.12814274,"word_repetition_ratio":0.01923077,"special_character_ratio":0.2821342,"punctuation_ratio":0.09311741,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99227226,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-24T00:31:04Z\",\"WARC-Record-ID\":\"<urn:uuid:b46113f1-8f45-4138-b7ec-89b6e643e4c0>\",\"Content-Length\":\"176021\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:364ad8d1-f74d-4c6c-be15-da3fbb99e4a2>\",\"WARC-Concurrent-To\":\"<urn:uuid:62359513-8347-4836-842c-8b5ed73bdbec>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://chemistry.stackexchange.com/questions/53919/chemical-formula-for-barium-chloride/53928\",\"WARC-Payload-Digest\":\"sha1:LBTJOWCKEEMBZAJCXKT52NCHYVFVHWYK\",\"WARC-Block-Digest\":\"sha1:7WBLG3KJVTRLFRGAG6Q3IEUR7Y4PFIMX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623488544264.91_warc_CC-MAIN-20210623225535-20210624015535-00132.warc.gz\"}"}
https://www.solaranywhere.com/support/historical-data/pxx/
[ "Home » Historical Data » PXX Files\n\n# PXX Files\n\n### Introduction\n\nSolarAnywhere PXX files make it easier and faster to calculate your project’s PXX energy yield. The “XX” refers to the probability that the level of irradiance will be exceeded in a given year. You specify the probability (e.g., P90) and SolarAnywhere returns a representative weather file with 8760 hourly values that can be imported into any PV modeling tool. PXX data are available in the web user interface and the API, and are included with time-series licenses (not available with Academic licenses).\n\nSolarAnywhere PXX files are created using an improved method for calculating probability of exceedance that better represents asymmetric irradiance distributions and weather risk than standard approaches. Higher quality data reduce the risk of unnecessarily conservative or costly financing.\n\n### Methodology\n\nTo create a probability of exceedance file, we first need a distribution of annual irradiance. SolarAnywhere offers an accurate, consistent dataset back to 1998 in North America, approximately 20 years and growing. That’s longer than a typical 20-year power purchase agreement. Unfortunately, the period of record is insufficient to create a satisfactory distribution.\n\nThe limitations of standard methods led Dr. Richard Perez to propose an improved method in 2012, which we call here the partial-year method. The partial-year method was presented at the PV modeling workshop in 2017.1 To create an enlarged dataset, GHI is averaged over four-month segments for each year. The four-month average is used to construct all possible year combinations. The number of combinations is equal to the number of the years considered cubed, which is enough to establish the PXX target. In the final step, the algorithm selects months that will create a file with the desired annual irradiance target and a reasonable monthly profile.\n\nSolarAnywhere uses the partial-year method to create probability of exceedance files. The period is consistent with SolarAnywhere TGY datasets so they may be compared directly.\n\n### Comparison to other methods\n\nThere are many approaches to calculate probability of exceedance, and it’s important to understand the approach taken and how that relates to the specific purpose. Here we compare three methods: empirical, normal (Guassian) and partial year.\n\nThe most direct application of the data is the empirical cumulative distribution. Each observation is assumed to be equally likely and sorted from lowest to highest. Probabilities (y) are assigned to each observation by y = (i – 0.5)/m where i is the observation, and m is the number of observations. The probability of exceedance is 1 less the probability. In short, if we have 20 years of annual irradiance values, the lowest year is the P97.5, the second lowest year is the P92.5, and so on. P99 is undefined because the data to support the estimate do not yet exist. The downside of using an empirical distribution is that many samples (>>20) are needed for the true shape of the distribution to emerge.\n\nAs a result, a common practice is to define a normal distribution by calculating the mean and standard deviation of the annual irradiance totals. 
As an example, in a 2010 ASES publication, 8 years of SolarAnywhere data were used to estimate interannual variability across the continental U.S.2 The problem with assuming a normal distribution is that solar irradiance does not, as a general assumption, fit a normal distribution, which can skew results.3\n\nTo explore the topic further we analyzed 221 locations coincident with the NSRDB TMY3 class 1 weather stations across the U.S.4 SolarAnywhere was used to estimate the annual irradiance for each location. Next, the three distributions were calculated for each location. An example for an arbitrary location is shown in Figure 1.", null, "Fig. 1: Cumulative Distributions for Empirical, Partial-year and Normal Probability of Exceedance Methods for the Knoxville McGhee Tyson Airport\n\nThe data are too sparse for the empirical distribution to provide satisfactory percentiles. In addition, the P99 is undefined, which may not be acceptable for some parties.\n\nHowever, with a sufficiently large sample of sites, the empirical distribution can be used as a reference to assess the fit of the normal and partial-year distributions. The average bias for a distribution with a good fit should be low. For project finance, it’s critical to estimate the left tail of the distribution, so the analysis examined the PXXs associated with the lowest and second lowest irradiance years for the period 1998 through 2016 (P97.4, and P94.1).\n\nThe analysis revealed that both the normal and partial-year distributions have a low average bias (less than +/- 0.2% mean bias error for both methods and both PXXs). Low bias is critical. A poor fitting distribution has the potential to systemically under- or over-represent the resource.\n\nOn an individual site basis, the differences between the empirical distribution and the two other methods were found to be below +/- 1% for half of the sites in the analysis (the interquartile range) for both the lowest and second lowest irradiance years. The results are consistent with the expected sampling error (see Uncertainty).\n\nP99 was also examined. No source exists for an empirical reference of P99. However, we can do several sanity checks. Annual totals less than 9% below the site-mean were observed only once across the 221 sites. Other methods found that P99 falls between -4 and -8% within the continental U.S.5 Therefore, P99 estimates below -9% are unlikely to be a good characterization of the solar resource. Almost 5% of the P99 estimates that assume a normal distribution fell in the range of -9 to -12%. The partial-year method was two-thirds less likely to yield erroneously low P99 estimates.\n\nThe advantage of the partial-year method over the assumption of normality is the proper accounting of the dissymmetry inherent to the data. The normal distribution uses the root sum square of the distance to the mean to calculate the standard deviation. Since the distribution is symmetrical, an unusually high irradiance year can produce a distribution that appears to overestimate the likelihood of a low irradiance year. An example of this is seen in Figure 1. The partial-year method mitigates this issue by using combinations of 4-month averages from the dataset rather than statistics to create the distribution.\n\n### Uncertainty considerations\n\nSolarAnywhere is the most accurate satellite-derived solar database.6 SolarAnywhere’s consistency across time and space is a critical advantage compared to ground-based measurement for the purposes of variability studies. 
Indeed, these characteristics were a key motivator for its development and enabled SolarAnywhere to identify an unreported calibration issue at one of the nations most trusted ground reference stations.7 Earlier estimations of interannual variability leveraged the unique capability.8\n\nUnfortunately, a calculation of uncertainty is not possible because a statistically significant reference dataset does not exist. Very few high-quality, well-maintained ground stations have more than two decades of record.\n\nThe PXX files do not include additional modeling uncertainties in their construction. In that way developers and independent engineers have control over and full transparency into the uncertainties applied to energy estimations. In addition, the results are reproducible.\n\nThe error in the PXX estimate is a function of the number of observations and the probability level (the XX). The mean of a distribution can be estimated with fewer observations than the P90. A statistical study of sampling error of normal distributions estimates that half of P90’s derived from 19 years of data will be within +/- 1% of the true P90. The 95% confidence interval is +/- 2.0% (σ = 1.2%).9\n\nThe 19-year period of record is expected to exhibit less variability than the 30 years minimum that would be typical for a climatological study. Notably, the period of record does not include any very large volcanic explosions (Volcanic Explosivity Index 6 and above). Such explosions occur at a rate of several per century and therefore influence inter-annual variability around the P99 level. A study of the last major explosion, the Philippines’ Mount Pinatubo in 1991, found peak DNI at four stations in the western USA fell 10-20% from a year prior, but that the impact on GHI was greatly attenuated by a corresponding increase in diffuse irradiance.10\n\nClimate change is another concern. The impact of climate change on future solar energy production cannot be estimated with historical data.\n\nSolarAnywhere probability of exceedance files represent the inter-annual variability of the SolarAnywhere dataset. While the SolarAnywhere database is an excellent long-term record, additional uncertainties should be considered at the tail of the distribution, e.g. P99.\n\n### SolarAnywhere PXX data\n\nSolarAnywhere PXX files make it convenient to calculate your project’s PXX energy yield. SolarAnywhere uses an improved method that shows negligible average bias while producing more realistic PXX estimates than observed annual totals alone. In addition, SolarAnywhere PXX data are two-thirds less likely to yield erroneously low P99 estimates than those based on a normal distribution, reducing the risk of unnecessarily conservative financing." ]
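The partial-year construction described in the Methodology section is straightforward to prototype. The numpy sketch below is only a schematic illustration of the stated idea (4-month segment means, all year-combinations, then a percentile); it runs on synthetic data, is not SolarAnywhere's implementation, and omits the final month-selection step.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

# Synthetic stand-in: 19 years of monthly GHI totals (one row per year).
years = 19
monthly_ghi = rng.normal(loc=150.0, scale=15.0, size=(years, 12))

# Mean GHI of the three 4-month segments (Jan-Apr, May-Aug, Sep-Dec) of each year.
seg_means = monthly_ghi.reshape(years, 3, 4).mean(axis=2)

# All years^3 combinations: segment 1 from year i, segment 2 from j, segment 3 from k.
annual = np.array([(seg_means[i, 0] + seg_means[j, 1] + seg_means[k, 2]) * 4.0
                   for i, j, k in product(range(years), repeat=3)])

# Read the PXX target off the enlarged sample, e.g. P90 = the 10th percentile.
p90_target = np.percentile(annual, 10)
print(len(annual), p90_target)   # 6859 combinations and the annual GHI target for P90
```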
[ null, "https://www.solaranywhere.com/wp-content/uploads/2018/09/PXX-InDepth_Fig1_700x550.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9036257,"math_prob":0.8848497,"size":11070,"snap":"2023-40-2023-50","text_gpt3_token_len":2312,"char_repetition_ratio":0.13600217,"word_repetition_ratio":0.07815275,"special_character_ratio":0.20487805,"punctuation_ratio":0.11667501,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9621596,"pos_list":[0,1,2],"im_url_duplicate_count":[null,7,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-26T05:13:30Z\",\"WARC-Record-ID\":\"<urn:uuid:6322bff2-e05a-4407-b1f1-9009180342c1>\",\"Content-Length\":\"144621\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:722511a8-efc0-4282-84dd-b7312daf8afe>\",\"WARC-Concurrent-To\":\"<urn:uuid:0d9fbef5-d72b-49dd-a869-476ffbf2a102>\",\"WARC-IP-Address\":\"141.193.213.11\",\"WARC-Target-URI\":\"https://www.solaranywhere.com/support/historical-data/pxx/\",\"WARC-Payload-Digest\":\"sha1:EM3DAT6OGTGCG5WRILMHTZUSXADCGMMD\",\"WARC-Block-Digest\":\"sha1:AEFGU47X7NBWZTHJBGTQX4O2TAZE34UX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233510149.21_warc_CC-MAIN-20230926043538-20230926073538-00852.warc.gz\"}"}
https://www.mathworks.com/matlabcentral/cody/problems/1946-fibonacci-sum-of-squares/solutions/1904168
[ "Cody\n\n# Problem 1946. Fibonacci-Sum of Squares\n\nSolution 1904168\n\nSubmitted on 18 Aug 2019 by David Kuckuk\nThis solution is locked. To view this solution, you need to provide a solution of the same size or smaller.\n\n### Test Suite\n\nTest Status Code Input and Output\n1   Pass\nn = 5; S = 40; assert(isequal(FibSumSquares(n),S))\n\nS = 40\n\n2   Pass\nn = 8; S = 714; assert(isequal(FibSumSquares(n),S))\n\nS = 714\n\n3   Pass\nn = 11; S = 12816; assert(isequal(FibSumSquares(n),S))\n\nS = 12816\n\n4   Pass\nn = 15; S = 602070; assert(isequal(FibSumSquares(n),S))\n\nS = 602070\n\n5   Pass\nn = 21; S = 193864606; assert(isequal(FibSumSquares(n),S))\n\nS = 193864606\n\n6   Pass\nn = 26; S = 23843770274; assert(isequal(FibSumSquares(n),S))\n\nS = 2.3844e+10\n\n### Community Treasure Hunt\n\nFind the treasures in MATLAB Central and discover how the community can help you!\n\nStart Hunting!" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5669219,"math_prob":0.99822825,"size":942,"snap":"2020-45-2020-50","text_gpt3_token_len":318,"char_repetition_ratio":0.15991472,"word_repetition_ratio":0.0,"special_character_ratio":0.3821656,"punctuation_ratio":0.13513513,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99938375,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-30T16:22:08Z\",\"WARC-Record-ID\":\"<urn:uuid:cf9f512c-6ddd-4851-a450-441e16d2b032>\",\"Content-Length\":\"83403\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ca817f0e-66b0-4755-9d9a-dc168a9ad7b2>\",\"WARC-Concurrent-To\":\"<urn:uuid:7f806756-2708-4f28-aec5-3685cf46fa83>\",\"WARC-IP-Address\":\"23.223.252.57\",\"WARC-Target-URI\":\"https://www.mathworks.com/matlabcentral/cody/problems/1946-fibonacci-sum-of-squares/solutions/1904168\",\"WARC-Payload-Digest\":\"sha1:BMYXTBNIXOXFCA6SZUXREWGDSV4XVXTP\",\"WARC-Block-Digest\":\"sha1:5T7J3IV4QYQ57MOBF472UYP23NBPZ2NG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107911027.72_warc_CC-MAIN-20201030153002-20201030183002-00514.warc.gz\"}"}
http://math.eretrandre.org/tetrationforum/showthread.php?tid=608&pid=5627
[ "• 0 Vote(s) - 0 Average\n• 1\n• 2\n• 3\n• 4\n• 5\n between addition and multiplication", null, "lloyd", null, "Junior Fellow", null, "", null, "Posts: 10 Threads: 1 Joined: Mar 2011 03/10/2011, 09:10 PM This has probably been thought of before, but here goes anyway. I was thinking about the \"sesqui\" operation intermediate between adding and multiplying; I'll write \"@\" here. Obviously a @ b should lie between a+b and ab. Maybe we should take the mean. But which one, arithmetic or geometric? Since one applies to addition and the other to multiplication, why not take both? Then we'll take the mean of these two. But which mean? Again, take both; the proposed value for the sesqui-operation is then the limit of this process when iterated many times. In fact the two values converge quite quickly and for 10-digit precision we usually have convergence within 3 or 4 iterations. Here are some values for a @ a: 1 @ 1 = 1.456791031 2 @ 2 = 4.000000000 3 @ 3 = 7.424041309 4 @ 4 = 11.654328248 5 @ 5 = 16.644985716 6 @ 6 = 22.363401399 7 @ 7 = 28.784583111 8 @ 8 = 35.888457285 9 @ 9 = 43.658368718 10 @ 10 = 52.080163811 11 @ 11 = 61.141591230 12 @ 12 = 70.831889817 13 @ 13 = 81.141493853 14 @ 14 = 92.061815491 15 @ 15 = 103.585079914 16 @ 16 = 115.704197683 17 @ 17 = 128.412664031 18 @ 18 = 141.704478131 19 @ 19 = 155.574077463 20 @ 20 = 170.016283797 21 @ 21 = 185.026258257 22 @ 22 = 200.599463552 23 @ 23 = 216.731631979 24 @ 24 = 233.418738077 25 @ 25 = 250.656975101 26 @ 26 = 268.442734648 27 @ 27 = 286.772588895 28 @ 28 = 305.643275047 29 @ 29 = 325.051681631 30 @ 30 = 344.994836377 31 @ 31 = 365.469895439 32 @ 32 = 386.474133787 I discovered this forum after asking a question recently on sci.math. It looks like people here have been thinking about the same thing: I asked if the next operation after exponentiation should require new numbers, the way that addition/subtraction, multiplication/division, exponentiation/root-taking/logarithms lead from the counting numbers to negative, real and complex numbers respectively.", null, "lloyd", null, "Junior Fellow", null, "", null, "Posts: 10 Threads: 1 Joined: Mar 2011 03/10/2011, 10:51 PM D'oh, I looked at the FAQ and the mean I proposed is covered in detail as a possible basis for the sesqui operation--it is the agm, ArithmeticalGeometricMean in mathematica, also called the Gauss mean. Oh well if I had to invent something that already exists, at least it's something that Lagrange and Gauss also thought of. Back to lurking. Lloyd.", null, "tommy1729", null, "Ultimate Fellow", null, "", null, "", null, "", null, "", null, "Posts: 1,358 Threads: 330 Joined: Feb 2009 03/10/2011, 10:56 PM i think you just posted the ancient Arithmetic-Geometric Mean ? tommy1729", null, "tommy1729", null, "Ultimate Fellow", null, "", null, "", null, "", null, "", null, "Posts: 1,358 Threads: 330 Joined: Feb 2009 03/10/2011, 10:58 PM lol , seems i was typing my reply while you were posting ... what a waste of time ...", null, "JmsNxn", null, "Long Time Fellow", null, "", null, "", null, "", null, "Posts: 291 Threads: 67 Joined: Dec 2010 03/10/2011, 11:42 PM (This post was last modified: 03/10/2011, 11:44 PM by JmsNxn.) What would happen if we created this: x {0} y = x + y x {0.5} y = x @ y x {1} y = x * y x {2} y = x ^ y And then {0.25} will be the same arithmetic-geometric algorithm of {0} and {0.5}; {0.75} will be the arith-geo-algo of {1} and {0.5}, so on and so forth. 
We could then solve for x {1.5} n, n E N, since: x {1.5} 2 = x {0.5} x Perhaps Taylor series will be derivable giving us complex arguments. It'd also be very interesting to see what happens with logs, i.e: log(x {1.5} 2) = ? since normal operators undergo a transformation I wonder if something happens for these.", null, "lloyd", null, "Junior Fellow", null, "", null, "Posts: 10 Threads: 1 Joined: Mar 2011 03/11/2011, 12:35 AM Surely, though, {0.25} should be weighted 3/4s towards the arithmetic mean, and 1/4 towards the geometric mean. Ah but is the weighting carried out arithmetically or geometrically? Apply a 3/4 arithmetic : 1/4 geometric weighting there too! And take the limiting case again. In other words, for a {0.25} b, with a a [t] b is smooth (or better analytic) for fixed a and b. I mean you define it on the interval [1,2], i.e. between addition and multiplication, and then you would continue it to the higher operations t>2 by a [t+1] (b+1) = a [t] ( a [t+1] b ) And then it would be interesting whether the curve is (infinitely) differentiable at t=2, t=3, etc. Of course it would between the endpoints, i.e. on (2,3) and (3,4), etc.", null, "JmsNxn", null, "Long Time Fellow", null, "", null, "", null, "", null, "Posts: 291 Threads: 67 Joined: Dec 2010 03/11/2011, 06:12 PM (03/11/2011, 12:35 AM)lloyd Wrote: Surely, though, {0.25} should be weighted 3/4s towards the arithmetic mean, and 1/4 towards the geometric mean. Ah but is the weighting carried out arithmetically or geometrically? Apply a 3/4 arithmetic : 1/4 geometric weighting there too! And take the limiting case again. In other words, for a {0.25} b, with a a [t] b is smooth (or better analytic) for fixed a and b. I mean you define it on the interval [1,2], i.e. between addition and multiplication, and then you would continue it to the higher operations t>2 by a [t+1] (b+1) = a [t] ( a [t+1] b ) And then it would be interesting whether the curve is (infinitely) differentiable at t=2, t=3, etc. Of course it would between the endpoints, i.e. on (2,3) and (3,4), etc. IIRC, this is exactly what I tried to do, some three years ago or so... It never really worked out, though some sample graphs for 2 x and 3 x looked quite good.", null, "bo198214", null, "Administrator Posts: 1,389 Threads: 90 Joined: Aug 2007 03/11/2011, 10:59 PM (03/11/2011, 10:07 PM)martin Wrote: (03/11/2011, 12:42 PM)bo198214 Wrote: For me it seems important that the curve t |-> a [t] b is smooth (or better analytic) for fixed a and b. I mean you define it on the interval [1,2], i.e. between addition and multiplication, and then you would continue it to the higher operations t>2 by a [t+1] (b+1) = a [t] ( a [t+1] b ) And then it would be interesting whether the curve is (infinitely) differentiable at t=2, t=3, etc. Of course it would between the endpoints, i.e. on (2,3) and (3,4), etc. IIRC, this is exactly what I tried to do, some three years ago or so... It never really worked out, though some sample graphs for 2 x and 3 x looked quite good. Hey Martin, we are not talking about the function f(x) = a x but about the function g(x) = a [x] b. But you are right that we expect the same smoothness also for f(x), which is how Andrew Robbins came to his tetration extension. « Next Oldest | Next Newest »\n\n Possibly Related Threads... 
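The construction lloyd describes in the opening post (take the arithmetic and the geometric mean of the pair (a+b, ab), then keep taking both means of the results until they agree) is exactly the arithmetic-geometric mean applied to a+b and a*b, as the later replies point out. Below is a minimal Python sketch of that iteration; it is my own illustration rather than code from the thread, and the function names are arbitrary. It reproduces the a @ a table quoted above (for example 1 @ 1 ≈ 1.456791031 and 2 @ 2 = 4).

```
# Sketch of the "sesqui" operation described in the thread: start from the
# pair (a + b, a * b) and repeatedly replace it with its arithmetic and
# geometric means until the two values agree. This is AGM(a + b, a * b).

from math import sqrt, isclose

def agm(x: float, y: float, tol: float = 1e-12) -> float:
    """Arithmetic-geometric mean of two non-negative numbers."""
    while not isclose(x, y, rel_tol=tol, abs_tol=tol):
        x, y = (x + y) / 2.0, sqrt(x * y)   # arithmetic mean, geometric mean
    return x

def sesqui(a: float, b: float) -> float:
    """The '@' operation from the thread: AGM of a+b and a*b."""
    return agm(a + b, a * b)

if __name__ == "__main__":
    print(sesqui(1, 1))    # ~1.456791031, matching the table above
    print(sesqui(2, 2))    # 4.0 exactly, since 2+2 == 2*2
    print(sesqui(10, 10))  # ~52.080163811
```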
[ null, "https://math.eretrandre.org/tetrationforum/images/default_avatar.png", null, "https://math.eretrandre.org/tetrationforum/images/buddy_offline.png", null, "http://math.eretrandre.org/tetrationforum/images/star.png", null, "http://math.eretrandre.org/tetrationforum/images/star.png", null, "https://math.eretrandre.org/tetrationforum/images/default_avatar.png", null, "https://math.eretrandre.org/tetrationforum/images/buddy_offline.png", null, "http://math.eretrandre.org/tetrationforum/images/star.png", null, "http://math.eretrandre.org/tetrationforum/images/star.png", null, "https://math.eretrandre.org/tetrationforum/uploads/avatars/avatar_47.jpg", null, "https://math.eretrandre.org/tetrationforum/images/buddy_offline.png", null, "http://math.eretrandre.org/tetrationforum/images/star.png", null, "http://math.eretrandre.org/tetrationforum/images/star.png", null, "http://math.eretrandre.org/tetrationforum/images/star.png", null, "http://math.eretrandre.org/tetrationforum/images/star.png", null, "http://math.eretrandre.org/tetrationforum/images/star.png", null, "https://math.eretrandre.org/tetrationforum/uploads/avatars/avatar_47.jpg", null, "https://math.eretrandre.org/tetrationforum/images/buddy_offline.png", null, "http://math.eretrandre.org/tetrationforum/images/star.png", null, "http://math.eretrandre.org/tetrationforum/images/star.png", null, "http://math.eretrandre.org/tetrationforum/images/star.png", null, "http://math.eretrandre.org/tetrationforum/images/star.png", null, "http://math.eretrandre.org/tetrationforum/images/star.png", null, "https://math.eretrandre.org/tetrationforum/images/default_avatar.png", null, "https://math.eretrandre.org/tetrationforum/images/buddy_offline.png", null, "http://math.eretrandre.org/tetrationforum/images/star.png", null, "http://math.eretrandre.org/tetrationforum/images/star.png", null, "http://math.eretrandre.org/tetrationforum/images/star.png", null, "http://math.eretrandre.org/tetrationforum/images/star.png", null, "https://math.eretrandre.org/tetrationforum/images/default_avatar.png", null, "https://math.eretrandre.org/tetrationforum/images/buddy_offline.png", null, "http://math.eretrandre.org/tetrationforum/images/star.png", null, "http://math.eretrandre.org/tetrationforum/images/star.png", null, "https://math.eretrandre.org/tetrationforum/images/default_avatar.png", null, "https://math.eretrandre.org/tetrationforum/images/buddy_offline.png", null, "http://math.eretrandre.org/tetrationforum/images/star.png", null, "http://math.eretrandre.org/tetrationforum/images/star.png", null, "http://math.eretrandre.org/tetrationforum/images/star.png", null, "http://math.eretrandre.org/tetrationforum/images/star.png", null, "https://math.eretrandre.org/tetrationforum/images/default_avatar.png", null, "https://math.eretrandre.org/tetrationforum/images/buddy_offline.png", null, "https://math.eretrandre.org/tetrationforum/task.php", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9316594,"math_prob":0.95345354,"size":7683,"snap":"2019-51-2020-05","text_gpt3_token_len":2489,"char_repetition_ratio":0.10444068,"word_repetition_ratio":0.59207785,"special_character_ratio":0.38786933,"punctuation_ratio":0.16359575,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97897685,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-18T03:07:24Z\",\"WARC-Record-ID\":\"<urn:uuid:7958eaaf-149c-4754-b393-3549ae1beda5>\",\"Content-Length\":\"52025\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:734b0462-0064-4682-bdf8-14bddbf45f97>\",\"WARC-Concurrent-To\":\"<urn:uuid:9d110fc0-5f4e-443a-bfa2-701e968b8322>\",\"WARC-IP-Address\":\"109.237.132.18\",\"WARC-Target-URI\":\"http://math.eretrandre.org/tetrationforum/showthread.php?tid=608&pid=5627\",\"WARC-Payload-Digest\":\"sha1:VEOYLS4ZFFRWURKGTEA5L5SF2B6BTLLZ\",\"WARC-Block-Digest\":\"sha1:FZYU7HO6OJOIOQFWRYD7U4ORVOOSPJG4\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250591763.20_warc_CC-MAIN-20200118023429-20200118051429-00072.warc.gz\"}"}
https://paddlenlp.readthedocs.io/zh/stable/source/paddlenlp.transformers.albert.modeling.html
[ "# modeling¶\n\nModeling classes for ALBERT model.\n\nclass `AlbertPretrainedModel`(*args, **kwargs)[源代码]\n\nAn abstract class for pretrained ALBERT models. It provides ALBERT related `model_config_file`, `pretrained_init_configuration`, `resource_files_names`, `pretrained_resource_files_map`, `base_model_prefix` for downloading and loading pretrained models. See `PretrainedModel` for more details.\n\n`base_model_class`\nclass `AlbertModel`(vocab_size=30000, embedding_size=128, hidden_size=768, num_hidden_layers=12, num_hidden_groups=1, num_attention_heads=12, intermediate_size=3072, inner_group_num=1, hidden_act='gelu', hidden_dropout_prob=0, attention_probs_dropout_prob=0, max_position_embeddings=512, type_vocab_size=2, initializer_range=0.02, layer_norm_eps=1e-12, pad_token_id=0, bos_token_id=2, eos_token_id=3, add_pooling_layer=True)[源代码]\n\nThe bare Albert Model transformer outputting raw hidden-states.\n\nThis model inherits from `PretrainedModel`. Refer to the superclass documentation for the generic methods.\n\nThis model is also a Paddle paddle.nn.Layer subclass. Use it as a regular Paddle Layer and refer to the Paddle documentation for all matter related to general usage and behavior.\n\n• vocab_size (int, optional) -- Vocabulary size of `inputs_ids` in `AlbertModel`. Also is the vocab size of token embedding matrix. Defines the number of different tokens that can be represented by the `inputs_ids` passed when calling `AlbertModel`. Defaults to `30000`.\n\n• embedding_size (int, optional) -- Dimensionality of the embedding layer. Defaults to `128`.\n\n• hidden_size (int, optional) -- Dimensionality of the encoder layer and pooler layer. Defaults to `768`.\n\n• num_hidden_layers (int, optional) -- Number of hidden layers in the Transformer encoder. Defaults to `12`.\n\n• inner_group_num (int, optional) -- Number of hidden groups in the Transformer encoder. Defaults to `1`.\n\n• num_attention_heads (int, optional) -- Number of attention heads for each attention layer in the Transformer encoder. Defaults to `12`.\n\n• intermediate_size (int, optional) -- Dimensionality of the feed-forward (ff) layer in the encoder. Input tensors to ff layers are firstly projected from `hidden_size` to `intermediate_size`, and then projected back to `hidden_size`. Typically `intermediate_size` is larger than `hidden_size`.\n\n• inner_group_num -- Number of inner groups in a hidden group. Default to `1`.\n\n• hidden_act (str, optional) -- The non-linear activation function in the feed-forward layer. `\"gelu\"`, `\"relu\"` and any other paddle supported activation functions are supported.\n\n• hidden_dropout_prob (float, optional) -- The dropout probability for all fully connected layers in the embeddings and encoder. Defaults to `0`.\n\n• attention_probs_dropout_prob (float, optional) -- The dropout probability used in MultiHeadAttention in all encoder layers to drop some attention target. Defaults to `0`.\n\n• max_position_embeddings (int, optional) -- The maximum value of the dimensionality of position encoding, which dictates the maximum supported length of an input sequence. Defaults to `512`.\n\n• type_vocab_size (int, optional) -- The vocabulary size of `token_type_ids`. Defaults to `12`.\n\n• initializer_range (float, optional) --\n\nThe standard deviation of the normal initializer. Defaults to `0.02`.\n\n注解\n\nA normal_initializer initializes weight matrices as normal distributions. 
See `BertPretrainedModel.init_weights()` for how weights are initialized in `AlbertModel`.\n\n• layer_norm_eps (float, optional) -- The `epsilon` parameter used in `paddle.nn.LayerNorm` for initializing layer normalization layers. A small value added to the variance of the normalization layer to prevent division by zero. Defaults to `1e-12`.\n\n• pad_token_id (int, optional) -- The index of the padding token in the token vocabulary. Defaults to `0`.\n\n• add_pooling_layer (bool, optional) -- Whether or not to add the pooling layer. Defaults to `True`.\n\n`get_input_embeddings`()[source]\n\nGets the input embedding of the model.\n\nReturns: the embedding of the model (nn.Embedding)\n\n`set_input_embeddings`(value)[source]\n\nSets a new input embedding for the model.\n\nvalue (Embedding) -- the new embedding of the model\n\nRaises NotImplementedError -- if the model has not implemented the `set_input_embeddings` method\n\n`forward`(input_ids, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, output_hidden_states=False, output_attentions=False, return_dict=False)[source]\n\nThe AlbertModel forward method, overrides the `__call__()` special method.\n\n• input_ids (Tensor) -- Indices of input sequence tokens in the vocabulary. They are numerical representations of tokens that build the input sequence. Its data type should be `int64` and it has a shape of [batch_size, sequence_length].\n\n• attention_mask (Tensor, optional) -- Mask used in multi-head attention to avoid performing attention on some unwanted positions, usually the paddings or the subsequent positions. Its data type can be int, float and bool. When the data type is bool, the `masked` tokens have `False` values and the others have `True` values. When the data type is int, the `masked` tokens have `0` values and the others have `1` values. When the data type is float, the `masked` tokens have `-INF` values and the others have `0` values. It is a tensor with shape broadcasted to `[batch_size, num_attention_heads, sequence_length, sequence_length]`. Defaults to `None`, which means that no positions are masked.\n\n• token_type_ids (Tensor, optional) --\n\nSegment token indices to indicate different portions of the inputs. Selected in the range `[0, type_vocab_size - 1]`. If `type_vocab_size` is 2, the inputs have two portions. Indices can either be 0 or 1:\n\n• 0 corresponds to a sentence A token,\n\n• 1 corresponds to a sentence B token.\n\nIts data type should be `int64` and it has a shape of [batch_size, sequence_length]. Defaults to `None`, which means we don't add segment embeddings.\n\n• position_ids (Tensor, optional) -- Indices of positions of each input sequence token in the position embeddings. Selected in the range `[0, max_position_embeddings - 1]`. Shape as `(batch_size, num_tokens)` and dtype as int64. Defaults to `None`.\n\n• head_mask (Tensor, optional) -- Mask to nullify selected heads of the self-attention modules. Mask values can either be 0 or 1: 1 indicates the head is not masked, and 0 indicates the head is masked. Defaults to `None`.\n\n• inputs_embeds (Tensor, optional) -- If you want to control how to convert `inputs_ids` indices into associated vectors, you can pass an embedded representation directly instead of passing `inputs_ids`.\n\nReturns tuple (`sequence_output`, `pooled_output`) or a dict with `last_hidden_state`, `pooled_output`, `all_hidden_states`, `all_attentions` fields.\n\nWith the fields:\n\n• `sequence_output` (Tensor):\n\nSequence of hidden-states at the last layer of the model. 
Its data type should be float32 and it has a shape of [`batch_size, sequence_length, hidden_size`].\n\n• `pooled_output` (Tensor):\n\nThe output of the first token (`[CLS]`) in the sequence. We \"pool\" the model by simply taking the hidden state corresponding to the first token. Its data type should be float32 and it has a shape of [batch_size, hidden_size].\n\n• `last_hidden_state` (Tensor):\n\nThe output of the last encoder layer; it is also the `sequence_output`. Its data type should be float32 and it has a shape of [batch_size, sequence_length, hidden_size].\n\n• `all_hidden_states` (Tensor):\n\nHidden states of all layers in the Transformer encoder. The length of `all_hidden_states` is `num_hidden_layers + 1`. For every element in the tuple, its data type should be float32 and its shape is [`batch_size, sequence_length, hidden_size`].\n\n• `all_attentions` (Tensor):\n\nAttentions of all layers in the Transformer encoder. The length of `all_attentions` is `num_hidden_layers`. For every element in the tuple, its data type should be float32 and its shape is [`batch_size, num_attention_heads, sequence_length, sequence_length`].\n\ntuple or Dict\n\n```\nimport paddle\nfrom paddlenlp.transformers import AlbertModel, AlbertTokenizer\n\ntokenizer = AlbertTokenizer.from_pretrained('albert-base-v1')\nmodel = AlbertModel.from_pretrained('albert-base-v1')\n\ninputs = tokenizer(\"Welcome to use PaddlePaddle and PaddleNLP!\")\ninputs = {k: paddle.to_tensor([v]) for (k, v) in inputs.items()}\noutput = model(**inputs)\n```\nclass `AlbertForPretraining`(albert, lm_head, sop_head, vocab_size)[source]\n\nAlbert Model with a `masked language modeling` head and a `sentence order prediction` head on top.\n\n`get_output_embeddings`()[source]\n\nTo be overwritten for models with output embeddings.\n\nReturns: the output embedding of the model (Optional[Embedding])\n\n`get_input_embeddings`()[source]\n\nGets the input embedding of the model.\n\nReturns: the embedding of the model (nn.Embedding)\n\n`forward`(input_ids, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, sentence_order_label=None, labels=None, output_attentions=False, output_hidden_states=False, return_dict=False)[source]\n\nThe AlbertForPretraining forward method, overrides the __call__() special method.\n\n• input_ids (Tensor) -- See `AlbertModel`.\n\n• attention_mask (Tensor, optional) -- See `AlbertModel`.\n\n• token_type_ids (Tensor, optional) -- See `AlbertModel`.\n\n• position_ids (Tensor, optional) -- See `AlbertModel`.\n\n• head_mask (Tensor, optional) -- See `AlbertModel`.\n\n• inputs_embeds (Tensor, optional) -- See `AlbertModel`.\n\n• sentence_order_label (Tensor, optional) -- Labels for the next sequence prediction. Input should be a sequence pair. Indices should be 0 or 1. `0` indicates original order (sequence A, then sequence B), and `1` indicates switched order (sequence B, then sequence A). Defaults to `None`.\n\n• output_hidden_states (bool, optional) -- Whether to return the hidden states of all layers. Defaults to `False`.\n\n• output_attentions (bool, optional) -- Whether to return the attentions tensors of all attention layers. Defaults to `False`.\n\n• return_dict (bool, optional) -- Whether to return a `ModelOutput` object. If `False`, the output will be a tuple of tensors. Defaults to `False`.\n\nReturns tuple (`prediction_scores`, `sop_scores`) or a dict with `prediction_logits`, `sop_logits`, `pooled_output`, `hidden_states`, `attentions` fields.\n\nWith the fields:\n\n• `prediction_scores` (Tensor):\n\nThe scores of masked token prediction. Its data type should be float32 
and its shape is [batch_size, sequence_length, vocab_size].\n\n• `sop_scores` (Tensor):\n\nThe scores of sentence order prediction. Its data type should be float32 and its shape is [batch_size, 2].\n\n• `prediction_logits` (Tensor):\n\nThe scores of masked token prediction. Its data type should be float32 and its shape is [batch_size, sequence_length, vocab_size].\n\n• `sop_logits` (Tensor):\n\nThe scores of sentence order prediction. Its data type should be float32 and its shape is [batch_size, 2].\n\n• `hidden_states` (Tensor):\n\nHidden states of all layers in the Transformer encoder. The length of `hidden_states` is `num_hidden_layers + 1`. For every element in the tuple, its data type should be float32 and its shape is [`batch_size, sequence_length, hidden_size`].\n\n• `attentions` (Tensor):\n\nAttentions of all layers in the Transformer encoder. The length of `attentions` is `num_hidden_layers`. For every element in the tuple, its data type should be float32 and its shape is [`batch_size, num_attention_heads, sequence_length, sequence_length`].\n\ntuple or Dict\n\nclass `AlbertForMaskedLM`(albert)[source]\n\nAlbert Model with a `masked language modeling` head on top.\n\nalbert (`AlbertModel`) -- An instance of `AlbertModel`.\n\n`get_output_embeddings`()[source]\n\nTo be overwritten for models with output embeddings.\n\nReturns: the output embedding of the model (Optional[Embedding])\n\n`get_input_embeddings`()[source]\n\nGets the input embedding of the model.\n\nReturns: the embedding of the model (nn.Embedding)\n\n`forward`(input_ids, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, labels=None, output_hidden_states=False, output_attentions=False, return_dict=False)[source]\n\nThe AlbertForMaskedLM forward method, overrides the __call__() special method.\n\nReturns tensor `prediction_scores` or a dict with `logits`, `hidden_states`, `attentions` fields.\n\nWith the fields:\n\n• `prediction_scores` (Tensor):\n\nThe scores of masked token prediction. Its data type should be float32 and its shape is [batch_size, sequence_length, vocab_size].\n\n• `logits` (Tensor):\n\nThe scores of masked token prediction. Its data type should be float32 and its shape is [batch_size, sequence_length, vocab_size].\n\n• `hidden_states` (Tensor):\n\nHidden states of all layers in the Transformer encoder. The length of `hidden_states` is `num_hidden_layers + 1`. For every element in the tuple, its data type should be float32 and its shape is [`batch_size, sequence_length, hidden_size`].\n\n• `attentions` (Tensor):\n\nAttentions of all layers in the Transformer encoder. The length of `attentions` is `num_hidden_layers`. For every element in the tuple, its data type should be float32 and its shape is [`batch_size, num_attention_heads, sequence_length, sequence_length`].\n\nTensor or Dict\n\nclass `AlbertForSequenceClassification`(albert, classifier_dropout_prob=0, num_classes=2)[source]\n\nAlbert Model with a linear layer on top of the output layer, designed for sequence classification/regression tasks like GLUE tasks.\n\n• albert (`AlbertModel`) -- An instance of AlbertModel.\n\n• classifier_dropout_prob (float, optional) -- The dropout probability for the classifier. Defaults to `0`.\n\n• num_classes (int, optional) -- The number of classes. 
Defaults to `2`.\n\n`forward`(input_ids, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, labels=None, output_hidden_states=False, output_attentions=False, return_dict=False)[source]\n\nThe AlbertForSequenceClassification forward method, overrides the __call__() special method.\n\n• input_ids (Tensor) -- See `AlbertModel`.\n\n• attention_mask (Tensor, optional) -- See `AlbertModel`.\n\n• token_type_ids (Tensor, optional) -- See `AlbertModel`.\n\n• position_ids (Tensor, optional) -- See `AlbertModel`.\n\n• head_mask (Tensor, optional) -- See `AlbertModel`.\n\n• inputs_embeds (Tensor, optional) -- See `AlbertModel`.\n\n• labels (Tensor of shape `(batch_size,)`, optional) -- Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., num_classes - 1]`. If `num_classes == 1` a regression loss is computed (Mean-Square loss); if `num_classes > 1` a classification loss is computed (Cross-Entropy).\n\n• output_hidden_states (bool, optional) -- Whether to return the hidden states of all layers. Defaults to `False`.\n\n• output_attentions (bool, optional) -- Whether to return the attentions tensors of all attention layers. Defaults to `False`.\n\n• return_dict (bool, optional) -- Whether to return a `SequenceClassifierOutput` object. If `False`, the output will be a tuple of tensors. Defaults to `False`.\n\nReturns tensor `logits`, or a dict with `logits`, `hidden_states`, `attentions` fields.\n\nWith the fields:\n\n• `logits` (Tensor):\n\nA tensor of the input text classification logits. Shape as `[batch_size, num_classes]` and dtype as float32.\n\n• `hidden_states` (Tensor):\n\nHidden states of all layers in the Transformer encoder. The length of `hidden_states` is `num_hidden_layers + 1`. For every element in the tuple, its data type should be float32 and its shape is [`batch_size, sequence_length, hidden_size`].\n\n• `attentions` (Tensor):\n\nAttentions of all layers in the Transformer encoder. The length of `attentions` is `num_hidden_layers`. For every element in the tuple, its data type should be float32 and its shape is [`batch_size, num_attention_heads, sequence_length, sequence_length`].\n\nTensor or Dict\n\n```\nimport paddle\nfrom paddlenlp.transformers import AlbertForSequenceClassification, AlbertTokenizer\n\ntokenizer = AlbertTokenizer.from_pretrained('albert-base-v1')\nmodel = AlbertForSequenceClassification.from_pretrained('albert-base-v1')\n\ninputs = tokenizer(\"Welcome to use PaddlePaddle and PaddleNLP!\")\ninputs = {k: paddle.to_tensor([v]) for (k, v) in inputs.items()}\noutputs = model(**inputs)\n\nlogits = outputs\n```\nclass `AlbertForTokenClassification`(albert, num_classes=2)[source]\n\nAlbert Model with a linear layer on top of the hidden-states output layer, designed for token classification tasks like NER tasks.\n\n• albert (`AlbertModel`) -- An instance of AlbertModel.\n\n• num_classes (int, optional) -- The number of classes. 
Defaults to `2`.\n\n`forward`(input_ids, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, labels=None, output_hidden_states=False, output_attentions=False, return_dict=False)[source]\n\nThe AlbertForTokenClassification forward method, overrides the __call__() special method.\n\n• input_ids (Tensor) -- See `AlbertModel`.\n\n• attention_mask (Tensor, optional) -- See `AlbertModel`.\n\n• token_type_ids (Tensor, optional) -- See `AlbertModel`.\n\n• position_ids (Tensor, optional) -- See `AlbertModel`.\n\n• head_mask (Tensor, optional) -- See `AlbertModel`.\n\n• inputs_embeds (Tensor, optional) -- See `AlbertModel`.\n\n• labels (Tensor of shape `(batch_size, sequence_length)`, optional) -- Labels for computing the token classification loss. Indices should be in `[0, ..., num_classes - 1]`.\n\n• output_hidden_states (bool, optional) -- Whether to return the hidden states of all layers. Defaults to `False`.\n\n• output_attentions (bool, optional) -- Whether to return the attentions tensors of all attention layers. Defaults to `False`.\n\n• return_dict (bool, optional) -- Whether to return a `TokenClassifierOutput` object. If `False`, the output will be a tuple of tensors. Defaults to `False`.\n\nReturns tensor `logits`, or a dict with `logits`, `hidden_states`, `attentions` fields.\n\nWith the fields:\n\n• `logits` (Tensor):\n\nA tensor of the input token classification logits. Shape as `[batch_size, sequence_length, num_classes]` and dtype as `float32`.\n\n• `hidden_states` (Tensor):\n\nHidden states of all layers in the Transformer encoder. The length of `hidden_states` is `num_hidden_layers + 1`. For every element in the tuple, its data type should be float32 and its shape is [`batch_size, sequence_length, hidden_size`].\n\n• `attentions` (Tensor):\n\nAttentions of all layers in the Transformer encoder. The length of `attentions` is `num_hidden_layers`. For every element in the tuple, its data type should be float32 and its shape is [`batch_size, num_attention_heads, sequence_length, sequence_length`].\n\nTensor or Dict\n\n```\nimport paddle\nfrom paddlenlp.transformers import AlbertForTokenClassification, AlbertTokenizer\n\ntokenizer = AlbertTokenizer.from_pretrained('albert-base-v1')\nmodel = AlbertForTokenClassification.from_pretrained('albert-base-v1')\n\ninputs = tokenizer(\"Welcome to use PaddlePaddle and PaddleNLP!\")\ninputs = {k: paddle.to_tensor([v]) for (k, v) in inputs.items()}\noutputs = model(**inputs)\n\nlogits = outputs\n```\nclass `AlbertForQuestionAnswering`(albert, num_labels=2)[source]\n\nAlbert Model with a linear layer on top of the hidden-states output to compute `span_start_logits` and `span_end_logits`, designed for question-answering tasks like SQuAD.\n\n`forward`(input_ids, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, start_positions=None, end_positions=None, output_hidden_states=False, output_attentions=False, return_dict=False)[source]\n\nThe AlbertForQuestionAnswering forward method, overrides the __call__() special method.\n\n• input_ids (Tensor) -- See `AlbertModel`.\n\n• attention_mask (Tensor, optional) -- See `AlbertModel`.\n\n• token_type_ids (Tensor, optional) -- See `AlbertModel`.\n\n• position_ids (Tensor, optional) -- See `AlbertModel`.\n\n• head_mask (Tensor, optional) -- See `AlbertModel`.\n\n• inputs_embeds (Tensor, optional) -- See `AlbertModel`.\n\n• start_positions (Tensor of shape `(batch_size,)`, optional) -- Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (`sequence_length`). 
Positions outside of the sequence are not taken into account for computing the loss.\n\n• end_positions (Tensor of shape `(batch_size,)`, optional) -- Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (`sequence_length`). Positions outside of the sequence are not taken into account for computing the loss.\n\n• output_hidden_states (bool, optional) -- Whether to return the hidden states of all layers. Defaults to `False`.\n\n• output_attentions (bool, optional) -- Whether to return the attentions tensors of all attention layers. Defaults to `False`.\n\n• return_dict (bool, optional) -- Whether to return a `QuestionAnsweringModelOutput` object. If `False`, the output will be a tuple of tensors. Defaults to `False`.\n\nReturns tuple (`start_logits`, `end_logits`) or a dict with `start_logits`, `end_logits`, `hidden_states`, `attentions` fields.\n\nWith the fields:\n\n• `start_logits` (Tensor):\n\nA tensor of the input token classification logits, indicating the start position of the labelled span. Its data type should be float32 and its shape is [batch_size, sequence_length].\n\n• `end_logits` (Tensor):\n\nA tensor of the input token classification logits, indicating the end position of the labelled span. Its data type should be float32 and its shape is [batch_size, sequence_length].\n\n• `hidden_states` (Tensor):\n\nHidden states of all layers in the Transformer encoder. The length of `hidden_states` is `num_hidden_layers + 1`. For every element in the tuple, its data type should be float32 and its shape is [`batch_size, sequence_length, hidden_size`].\n\n• `attentions` (Tensor):\n\nAttentions of all layers in the Transformer encoder. The length of `attentions` is `num_hidden_layers`. For every element in the tuple, its data type should be float32 and its shape is [`batch_size, num_attention_heads, sequence_length, sequence_length`].\n\ntuple or Dict\n\n```\nimport paddle\nfrom paddlenlp.transformers import AlbertForQuestionAnswering, AlbertTokenizer\n\ntokenizer = AlbertTokenizer.from_pretrained('albert-base-v1')\nmodel = AlbertForQuestionAnswering.from_pretrained('albert-base-v1')\n\ninputs = tokenizer(\"Welcome to use PaddlePaddle and PaddleNLP!\")\ninputs = {k: paddle.to_tensor([v]) for (k, v) in inputs.items()}\noutputs = model(**inputs)\n\nstart_logits, end_logits = outputs\n```\nclass `AlbertForMultipleChoice`(albert)[source]\n\nAlbert Model with a linear layer on top of the hidden-states output layer, designed for multiple choice tasks like SWAG tasks.\n\nalbert (`AlbertModel`) -- An instance of AlbertModel.\n\n`forward`(input_ids, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, labels=None, output_hidden_states=False, output_attentions=False, return_dict=False)[source]\n\nThe AlbertForMultipleChoice forward method, overrides the __call__() special method.\n\n• input_ids (Tensor) -- See `AlbertModel`.\n\n• attention_mask (Tensor, optional) -- See `AlbertModel`.\n\n• token_type_ids (Tensor, optional) -- See `AlbertModel`.\n\n• position_ids (Tensor, optional) -- See `AlbertModel`.\n\n• head_mask (Tensor, optional) -- See `AlbertModel`.\n\n• inputs_embeds (Tensor, optional) -- See `AlbertModel`.\n\n• labels (Tensor of shape `(batch_size, )`, optional) -- Labels for computing the multiple choice classification loss. Indices should be in `[0, ..., num_choices-1]` where `num_choices` is the size of the second dimension of the input tensors. (See `input_ids` above)\n\n• output_hidden_states (bool, optional) -- Whether to return the hidden states of all layers. 
Defaults to `False`.\n\n• output_attentions (bool, optional) -- Whether to return the attentions tensors of all attention layers. Defaults to `False`.\n\n• return_dict (bool, optional) -- Whether to return a `MultipleChoiceModelOutput` object. If `False`, the output will be a tuple of tensors. Defaults to `False`.\n\nReturns tensor `reshaped_logits` or a dict with `reshaped_logits`, `hidden_states`, `attentions` fields.\n\nWith the fields:\n\n• `reshaped_logits` (Tensor):\n\nA tensor of the input multiple choice classification logits. Shape as `[batch_size, num_classes]` and dtype as `float32`.\n\n• `hidden_states` (Tensor):\n\nHidden states of all layers in the Transformer encoder. The length of `hidden_states` is `num_hidden_layers + 1`. For every element in the tuple, its data type should be float32 and its shape is [`batch_size, sequence_length, hidden_size`].\n\n• `attentions` (Tensor):\n\nAttentions of all layers in the Transformer encoder. The length of `attentions` is `num_hidden_layers`. For every element in the tuple, its data type should be float32 and its shape is [`batch_size, num_attention_heads, sequence_length, sequence_length`].\n\nTensor or Dict" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.590438,"math_prob":0.84276867,"size":25200,"snap":"2022-40-2023-06","text_gpt3_token_len":5936,"char_repetition_ratio":0.1817352,"word_repetition_ratio":0.56828195,"special_character_ratio":0.22325397,"punctuation_ratio":0.18262637,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96777385,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-02T21:33:45Z\",\"WARC-Record-ID\":\"<urn:uuid:a08e8660-47e7-42b6-8db4-7134a83e7c6d>\",\"Content-Length\":\"135315\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:69919623-33ce-45c5-a536-f9f7ee62d131>\",\"WARC-Concurrent-To\":\"<urn:uuid:30642c13-7ae3-4963-85f8-030911b39f3f>\",\"WARC-IP-Address\":\"104.17.32.82\",\"WARC-Target-URI\":\"https://paddlenlp.readthedocs.io/zh/stable/source/paddlenlp.transformers.albert.modeling.html\",\"WARC-Payload-Digest\":\"sha1:OYSFXSXUJO7I3ULYZGBW3ULYA6ZNBV7D\",\"WARC-Block-Digest\":\"sha1:R4EQXTMV3XPSHO3OZYZUUFJYJ4IG7F6A\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500041.18_warc_CC-MAIN-20230202200542-20230202230542-00026.warc.gz\"}"}
http://jackterwilliger.com/biological-neural-network-synapses/
[ "# Synapses, (A Bit of) Biological Neural Networks – Part II", null, "Synapses are the couplings between neurons, allowing signals to pass from one neuron to another. However, synapses are much more than mere relays: they play an important role in neural computation. The ongoing dramas of excitation and inhibition and of synaptic potentiation and depression give rise to your abilities to make decisions, learn, and remember. It’s amazing, really: collections of these microscopic junctions in your head can represent all sorts of things — your pet’s name, the layout of the New York subway system, how to ride a bike…\n\nIn this post, I give a rough overview of synapses: what they are, how they function, and how to model them. Specifically I will focus on synaptic transmission, with brief sections on short-term and long-term plasticity. Again, like before, nothing new is being said here, but I like to think the presentation is novel.\n\n# Primer: Biology of Chemical Synapses\n\nLet’s begin by looking at the anatomy and physiology of synapses. I forewarn you that, in this section, I’m leaving out ‘long tail’ info. By that, I mean I’ll cover the most common phenomena, e.g. I will not cover most neurotransmitters or receptor types. (You can definitely skip this if you’ve taken any Neuroscience course).\n\n## Anatomy of Chemical Synapses\n\nA synapse occurs between two neurons: a presynaptic neuron and a postsynaptic neuron. You can think of the presynaptic neuron as the sender and the postsynaptic neuron as the receiver. Below are major parts to a synapse:\n\n• Axon terminal: An end of the presynaptic neuron’s axon. The axon terminal stores neurotransmitter in small capsules called vesicles.\n• Synaptic cleft: The synaptic cleft is a gap between the presynaptic and postsynaptic neuron, which can become flooded with neurotransmitter.\n• Dendritic spine: A small protrusion from the postsynaptic dendrite, which meets the presynaptic axon. Dendritic spines are plastic — dynamically changing shape and size, appearing and disappearing. It is believed spines are an integral part of learning and memory.\n• Receptors: Proteins to which neurotransmitter binds. Receptors can open ion channels, possibly exciting the postsynaptic neuron.\n\nKeep in mind, neurons don’t just interface with each other axon $\\rightarrow$ dendrite. Synapses can also appear between axons and cell bodies, axons and axons, axons and axon terminals, etc. However, in these other interfaces, there are no dendritic spines.\n\n### Neurotransmitters & Receptors\n\nNeurotransmitters are substances which are released by presynaptic neurons and cause changes in the postsynaptic neuron. The most direct way neurotransmitters affect the postsynaptic neuron is by either raising or lowing its membrane potential. Excitation and inhibition is surprisingly organized in the brain. These are not absolute rules, but hold generally:\n\n1. Neurotransmitter types are either excitatory or inhibitory, e.g. if a transmitter $s$ excites neurons, it never inhibits — no matter the receptor.\n2. Synapses are either excitatory or inhibitory, i.e. they release either an inhibitory or excitatory neurotransmitter but not both.\n3. Neurons are either excitatory or inhibitory, i.e. they release either inhibitory or excitatory neurotransmitters but not both.\n4. 
In pyramidal neurons, like the ones drawn below, excitatory synapses (occasionally referred to as Type I) typically occur at the dendrites whereas inhibitory synapses (occasionally referred to as Type II) typically occur near or on the cell body.", null, "Pyramidal neurons from Santiago Ramón y Cajal. I recently visited the Beautiful Brain exhibit at the MIT Museum which reinterprets Cajal’s scientific drawings as art, and rightfully so.\n\nThere is a long list of neurotransmitters, but, in this post we look at the 2 most prevalent:\n\n• Glutamate: binds to receptors which excite the postsynaptic neuron; accounts for 90% of all synaptic connections.\n• GABA: binds to receptors which inhibit the postsynaptic neuron.\n\nThere are two categories of receptors found on postsynaptic neurons:\n\n• Ionotropic: these are ion channels which are open when neurotransmitter binds to them.\n• Metabotropic: these activate G-protein pathways (intracellular signaling mechanism) which can among other things, can indirectly open ion channels.\n\nThere can be multiple types receptors for a neurotransmitter. We’ll consider 2 glutamate receptors:\n\n• AMPA: A quick-to-activate, fast-to-deactivate excitatory ionotropic receptor. Gates sodium and calcium.\n• NMDA: A slow-to-activate, slow-to-deactivate excitatory ionotropic receptor. Gates sodium and calcium.\n\nWe’ll also consider 2 GABA receptors:\n\n• GABA$_A$: A quick-to-activate, fast-to-deactivate iinhibitory onotropic receptor.\n• GABA$_B$: A slow-to-activate, slow-to-deactivate inhibitory metabotropic receptor. G-proteins act as second messengers to open ion channels.\nFast Slow\nExcitatory AMPA NMDA\nInhibitory GABA$_A$ GABA$_B$\n\n## Physiology of Chemical Synapses\n\nSynaptic transmission is a mechanism which allows one neuron to communicate with another: the presynaptic neuron neuron fires an action potential and releases transmitter, which opens the postsynaptic neuron’s ion channels. Looking closer at this three part act:\n\n1. Release: The presynaptic neuron fires an action potential. When the action potential reaches the axon terminal, membrane depolarization triggers voltage gated calcium channels to open. When the calcium enters the terminal, it causes vesicles to fuse to the terminal and dump their contents into the synaptic cleft.\n2. Activation: Transmitter crosses the synaptic clef and binds to receptors on the postsynaptic neuron. This causes the receptors to activate, opening ion channels and activating various chemical pathways.\n3. Deactivation: Postsynaptic channels become deactivated because (1) the concentration of neurotransmitter in the synaptic clef decreases, either by breaking down, or being reuptaken by the presynaptic neuron, and (2) transmitter releases from receptors.\n\nA result of synaptic transmission is a PSP — a postsynaptic potential, i.e. a change in the postsynaptic neuron’s membrane potential at the site of the synapse. This change in potential helps excite or inhibit the postsynaptic neuron. For more about action potentials and neural excitability, you can read the previous part of this series.\n\n# Primer: Exponential Decay\n\nThe mathematical foundation for synaptic dynamics is largely based on exponential decay.\n\nExponential decay is the name for a process where the rate of decay of some quantity is proportional to the current value of that quantity, or:\n\n$$\\frac{dN}{dt} = -\\lambda N$$\n\nwhere $-\\lambda$ is the decay rate and $N$ is the quantity. 
By integrating, we can find a closed form solution:\n\n$$N(t) = N_0e^{-\\lambda t}$$\n\nwhere $N_0$ is the initial quantity and $N(t)$ is the quantity at time $t$.\n\nIn our case, the quantity is either directly or indirectly a measure of the number of ions in some part of the synapse, e.g. the fraction of active receptors which is dependent on the concentration in the synaptic cleft.\n\nAs a matter of convention, in neuroscience and other fields, it is often more popular to express exponential decays with $\\tau$, referred to as the exponential time constant:\n\n$$\\tau = \\frac{1}{\\lambda}$$\n\nso,\n\n$$\\frac{dN}{dt} = -\\frac{N}{\\tau}$$\n\nand therefore,\n\n$$N(t) = N_0e^{-\\frac{t}{\\tau}}$$\n\nIt turns out $\\tau$ has some nice properties: when $t=\\tau$,  $N$ is $1/e$ its initial value. This happens to be the mean of the decay function. In other words: if we were modeling the fraction of open ion channels, $\\tau$ would be the average time an ion channel spent open.\n\nPlay around with exponential decay, $\\tau$, and half-life here:\n\nWe can also flip the direction of exponential decay:\n\n$$\\frac{dN}{dt} = N_{\\infty} – \\frac{N}{\\tau}$$\n\nwhere $N_{\\infty}$ is the value N decays to. Or:\n\n$$N(t) = N_{\\infty} – N_{\\infty}e^{-\\frac{t}{\\tau}}$$\n\n# Synaptic Dynamics\n\n## Synaptic Current\n\nAside from NMDA, which involves voltage gating dynamics, postsynaptic current can be modeled as:\n\n$$I=\\bar{g}_{s}P(V-E_{s})$$\n\nwhere\n\n• $V$ is the postsynaptic membrane potential\n• $E_{s}$ is the Nernst potential.\n• $\\bar{g_s}$ is the maximum postsynaptic conductance, i.e. max permeability of the receptors.\n• $P \\in [0,1]$ is the probability a channel is open.\n\nThis equation should look familiar if you’ve read part I. Fundamentally, all this equation describes is how ions move across the postsynaptic neuron’s membrane through the ion channels controlled by synapse $s$ — the details merely say how. $V- E_s$ defines what direction ions are moving; $\\bar{g_s}$ defines the max flow of ions across the membrane; and $P$ defines the open/closed dynamics of the channel — $P$ is what the rest of this discussion is about.\n\nJust like a synapse, $P$ has presynaptic and postsynaptic components:\n\n$$P=P_sP_{rel}$$\n\nWhere\n\n• $P_s$ is the conditional probability a receptor activates given transmitter release occurs.\n• $P_{rel}$ is the probability of neurotransmitter release.\n\nFor now, lets just assume $P_{rel}$ is always 1. Until we touch short term plasticity, all we will care about is $P_s$.\n\n$\\tau$ $E_{x}$\nAMPA ~5(ms) ~0(mV)\nNMDA ~150(ms) ~0(mV)\nGABA$_A$ ~6(ms) ~-70(mV)\nGABA$_B$ ~150(ms) ~-90(mV)\n\n## Synaptic Conductance\n\n$P_s$ can be modeled by a pair of transition rates, $\\alpha_s$ and $\\beta_s$, between an open and closed states.\n\n$$\\frac{dP_s}{dt} = \\overbrace{\\alpha_sT(t)(1 – P_s)}^{\\text{rise}} – \\overbrace{\\beta_sP_s}^{\\text{fall}}$$\n\n$T(t)$ represents whether neurotransmitter is present in the synaptic clef. For simplicity, this just a square pulse. When neurotransmitter is present the receptor activates — fast enough that we can largely ignore deactivation (which I do in the visualization). 
When neurotransmitter is not present the open dynamics, $\\alpha_s$ is 0.\n\nYou’ll notice that this can be rewritten to look like a pair of exponential decays, which you are familiar with!\n\n$$\\frac{dP_s}{dt} = \\overbrace{\\frac{1}{\\tau_{rise}}T(t)(1 – P_s)}^{\\text{rise}} – \\overbrace{\\frac{1}{\\tau_{fall}}P_s}^{\\text{fall}}$$\n\nFor receptors like AMPA, GABA$_A$, and GABA$_B$ the rise of P_s is rapid enough to treat it as instantaneous. Therefore we can simplify this as:\n\n$$\\frac{dP_s}{dt} = -\\frac{P_s}{\\tau_s}$$\n\nor, after a presynaptic action potential\n\n$$P_s \\leftarrow 1$$\n\n# Hands on demo\n\n…but first an overview of how these plots work.\n\nBelow you’ll find a few interactive plots to play around with various receptors.\n\nThese plots are my own concoction, so let me explain how they work:\n\nThe top left plot (vertices & edges) graphically represents the topology of a network of neurons. One vertex can represent multiple neuron. If you see a ring emanate from a vertex, the neuron(s) represented by that vertex fired.\n\nThe top right window lets you play around with some network parameters, i.e. properties of synapses or properties of neurons.\n\nThe bottom left plot shows the membrane potential of neuron $0$.\n\nThe bottom right plot shows the synaptic conductance from neuron $1 \\rightarrow 0$\n\n## AMPA\n\nWhen an AMPA receptor is activated, it raises the postsynaptic membrane potential; its effects are short, on the order of milliseconds.\n\nClick on neuron", null, "to cause an EPSP.\n\n## NMDA\n\nWhen an NMDA receptor is activated, it raises the postsynaptic membrane potential; its effects are long, on the order of hundreds of milliseconds. Unlike the other receptor types discussed in this post, NMDA receptors are voltage gated, meaning that even if NMDA is activated, ions do not necessarily flow. Rather, there must be other coincidental excitatory postsynaptic potentials from AMPA. In that way, NMDA receptors are coincidence detectors. The strong currents NMDA induces are believed to be an important mechanism in learning.\n\n$$I = \\bar{g}_{NMDA}G_{NMDA}P(V – 0)$$\n\nwhere\n\n$$G_{NMDA} =$$\n\nClick on neuron", null, "to cause EPSPs.\n\n## GABA$_A$\n\nWhen a GABA$_A$ receptor is activated, it lowers the postsynaptic membrane potential; its effects are short, on the order of milliseconds.\n\nClick on neuron", null, "to cause an IPSP.\n\n## GABA$_B$\n\nWhen a GABA$_B$ receptor is activated, it lowers the postsynaptic membrane potential; its effects are short, on the order of hundreds of milliseconds.\n\nClick on neuron", null, "to cause an IPSP.\n\n## Release Probability and Short-Term Plasticity\n\nThe transmitter release probability $P_{rel}$ of synapses is dependent on the recent history of activation. In some cases, recent activity causes $P_{rel}$ to increase, in other cases it causes $P_{rel}$ to decrease, however, during quiet periods of inactivity, $P_{rel}$ returns to a neutral state. This process can be referred to as short-term plasticity (STP). When the history of activity causes a synapse to become temporarily stronger, we call it short-term facilitation (STF) and, inversely, when the history of activity causes a synapse to become temporarily weaker, we call it short-term depression (STD).\n\nTo model, short-term facilitation, it makes sense to have the $P_{rel}$ decay to a stable state $P_0$ but inch toward a either 0 or 1 after a presynaptic spike. 
We achieve this with the following equation:\n\n$$\\frac{dP_{rel}}{dt} = \\frac{P_0 - P_{rel}}{\\tau_P}$$\n\nwhere, in short-term facilitation, $P_{rel} \\rightarrow P_{rel} + f_F(1 - P_{rel})$ when the presynaptic neuron fires.\n\nIn short-term depression, $P_{rel} \\rightarrow P_{rel}f_D$ when the presynaptic neuron fires.\n\nShort-term plasticity has some very interesting computational properties. Namely, synapses can perform high/low pass filtering. A high frequency input, under STF, becomes stronger, whereas a low frequency input is relatively weaker. Inversely, a high frequency input, under STD, becomes weaker, whereas a low frequency input is relatively stronger!\n\n# Learning & Memory\n\n## Hebbian Learning & Spike-timing dependent plasticity\n\nThere are many types of learning. Here we will touch on Hebbian learning. Hebbian learning postulates that\n\n“When an axon of cell A is near enough to excite B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased” – Donald Hebb\n\nor neurons that fire together wire together.\n\n$$\\frac{dw_i}{dt} = \\eta v u_i$$\n\nwhere $\\eta$ is a learning rate, $v$ is the postsynaptic neural response and $u_i$ is the presynaptic neural response. If both pre- and post-synaptic neurons fire together the synapse is strengthened; if they do not fire together it is not strengthened. Now, while there are problems with this original formulation, e.g. synapses only get stronger, it captures the gist of a common type of synaptic modification.\n\nSpike-timing dependent plasticity (STDP) is a form of Hebbian learning which resolves some of the problems encountered with earlier models, namely it allows synapses to strengthen and weaken and it inherently involves competition between synapses, all while avoiding the assumption of global, intracellular mechanisms. The intuition behind this rule is as follows: if a presynaptic neuron fires before a postsynaptic neuron does, it helped cause it to fire; if the presynaptic neuron fires after a postsynaptic neuron does, it did not cause it to fire; synapses learn the causal relationship between neurons.\n\nThis model works as follows: each time either the presynaptic or postsynaptic neuron fires, the synapse is updated by an amount based on the difference in spike timing:\n\n$$F(\\Delta t) = \\begin{cases} A_+e^{\\frac{\\Delta t}{\\tau_+}}, & \\Delta t < 0 \\\\ -A_-e^{-\\frac{\\Delta t}{\\tau_-}}, & \\Delta t > 0 \\end{cases}$$", "When the presynaptic neuron fires, the synapse is weakened according to the time elapsed since the most recent postsynaptic spike; when the postsynaptic neuron fires, the synapse is strengthened according to the time elapsed since the most recent presynaptic spike. 
Note that potentiation and depression are not necessarily symmetric.\n\nSTDP has several fascinating computational properties:\n\n• Spike correlations: spikes between pre- and post-synaptic neurons become correlated through reinforcement.\n• network latency: STDP can reduce network latency by reinforcing presynaptic neurons which consistently fire prior to the postsynaptic neuron.\n• regulation: STDP regulates network firing rates, achieving a homeostatic effect , despite being quite stable at the local level.\n\nHere is a javascript implementation of STDP taken from — I’ve added a moving scatter plot, so you can see how the network weights evolve over time.\n\nIf you wait long enough (unfortunately quite a while), the distribution should look something like this:", null, "# Concluding Thoughts\n\nIn this post, we briefly viewed an important computational ingredient in the brain — synapses. We saw how they can pass signals between neurons and how they can regulate networks and filter information. We also briefly introduced Hebbian learning theory with STDP.\n\nLastly, a mind-boggling figure: it is estimated the average human brain has 0.15 quadrillion synapses , with around 7000 synapses per neuron in neocortex (probably a hotly debated number…). Usually, I think most figures of large numbers are kind of pointless — but, in this case, each member is dynamic and can represent information about the world. The mechanisms of the brain are rich and complex.\n\n###### References\n1. Song, Sen, Kenneth D. Miller, and Larry F. Abbott. “Competitive Hebbian learning through spike-timing-dependent synaptic plasticity.” Nature neuroscience 3.9 (2000): 919.\n2. http://www.johndmurray.org/materials/teaching/tutorial_synapse.pdf\n3. http://www.ee.columbia.edu/~aurel/nature%20insight04/synaptic%20computation04.pdf\n4. http://www.scholarpedia.org/article/Spike-timing_dependent_plasticity#Basic_STDP_Model\n5. www.scholarpedia.org/article/Short-term_synaptic_plasticity\n6. Pakkenberg, Bente, et al. “Aging and the human neocortex.” Experimental gerontology 38.1-2 (2003): 95-99.\n\n## 3 thoughts on “Synapses, (A Bit of) Biological Neural Networks – Part II”\n\n1.", null, "noreply says:\n\nAs someone with a biological background wanting to understand neuroscience from a computational perspective, this is an awesome resource! Can’t wait for the next post\n\n2.", null, "D.V.D says:\n\nHey, I’m following your blog post (its been super informative so far!!) and I’m confused when it comes to implementing STDP. You mention that STDP modifies weights based on when spikes arrive, so its a function of time. When I’m coding this, do I have to keep a list of all the spikes that neuron A sends to neuron B and apply STDP on all of these spikes? Or do I only use the most recent spike that occurred and have the rest not modify the weight of the synapse at all? If you have any code samples for STDP, it would be great if you can share, there seems to be almost nothing I can find online for spiking neurons other than the math.\n\n3.", null, "Iron4dam says:\n\nCool post! Does the graph of exponential decay has its axis labelled the other way round? Should it be N versus tau?" ]
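To make the simplified receptor dynamics above concrete, here is a small numerical sketch (mine, not code from the post) that integrates dP_s/dt = -P_s/τ_s with forward Euler, sets P_s to 1 at each presynaptic spike, and computes the synaptic current I = g_max · P_s · (V - E_s). The AMPA-like values of τ_s and E_s follow the table above; the peak conductance and the clamped postsynaptic voltage are arbitrary assumptions for the demo.

```
# Sketch of the simplified synaptic-conductance model described above:
#   dPs/dt = -Ps / tau_s,   Ps <- 1 on each presynaptic spike,
#   I = g_max * Ps * (V - E_s)
# tau_s and E_s follow the AMPA row of the table above; g_max and the
# (clamped) postsynaptic voltage are arbitrary assumptions for the demo.
import numpy as np

dt = 0.1                    # ms, Euler time step
t = np.arange(0.0, 100.0, dt)
tau_s = 5.0                 # ms, AMPA-like decay
E_s = 0.0                   # mV, excitatory reversal potential
g_max = 1.0                 # nS, arbitrary peak conductance
V = -65.0                   # mV, postsynaptic potential held fixed for the demo

spike_times = [10.0, 40.0, 45.0, 50.0]   # presynaptic spikes (ms)

Ps = np.zeros_like(t)
I = np.zeros_like(t)
for i in range(1, len(t)):
    Ps[i] = Ps[i - 1] + dt * (-Ps[i - 1] / tau_s)      # exponential decay
    if any(abs(t[i] - ts) < dt / 2 for ts in spike_times):
        Ps[i] = 1.0                                    # Ps <- 1 at a spike
    I[i] = g_max * Ps[i] * (V - E_s)                   # synaptic current

print("peak |I| (arbitrary units):", abs(I).max())
```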
[ null, "http://jackterwilliger.com/wp-content/uploads/2018/07/cajal-1151x1080.jpg", null, "http://jackterwilliger.com/wp-content/uploads/2018/07/cajal-685x1024.jpg", null, "http://jackterwilliger.com/wp-content/uploads/2018/07/Screenshot_2018-08-12-A-Bit-of-Biological-Neural-Networks-–-Part-II-Synapses2-e1534213743957.png", null, "http://jackterwilliger.com/wp-content/uploads/2018/07/Screenshot_2018-08-12-A-Bit-of-Biological-Neural-Networks-–-Part-II-Synapses2-e1534213743957.png", null, "http://jackterwilliger.com/wp-content/uploads/2018/07/Screenshot_2018-08-12-A-Bit-of-Biological-Neural-Networks-–-Part-II-Synapses2-e1534213743957.png", null, "http://jackterwilliger.com/wp-content/uploads/2018/07/Screenshot_2018-08-12-A-Bit-of-Biological-Neural-Networks-–-Part-II-Synapses2-e1534213743957.png", null, "http://jackterwilliger.com/wp-content/uploads/2018/07/stdp-300x229.png", null, "http://jackterwilliger.com/wp-content/uploads/2018/07/stdp-plot1.svg", null, "http://2.gravatar.com/avatar/b713d76a68338e1a1d00ad0045c8717f", null, "http://1.gravatar.com/avatar/4843d7d84ed8dad627ddeed7a5853803", null, "http://1.gravatar.com/avatar/4fe50228515a0c8de0a0d85f180bb5e7", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.86725837,"math_prob":0.9138054,"size":18490,"snap":"2022-27-2022-33","text_gpt3_token_len":4459,"char_repetition_ratio":0.15000542,"word_repetition_ratio":0.041771695,"special_character_ratio":0.22314765,"punctuation_ratio":0.119353876,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97033083,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22],"im_url_duplicate_count":[null,4,null,4,null,null,null,null,null,null,null,null,null,4,null,4,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-01T14:38:01Z\",\"WARC-Record-ID\":\"<urn:uuid:cfa56c74-831b-4b4a-a1ab-237a7283ce68>\",\"Content-Length\":\"60169\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8c9b26b3-0f5b-4241-9f6d-7b173fb8607d>\",\"WARC-Concurrent-To\":\"<urn:uuid:43e1b3f8-7ee2-4e98-809b-8d8d6e95f85f>\",\"WARC-IP-Address\":\"104.131.0.30\",\"WARC-Target-URI\":\"http://jackterwilliger.com/biological-neural-network-synapses/\",\"WARC-Payload-Digest\":\"sha1:2JIAIK26O45W55LYPJGBFW6KFIZ4CWL5\",\"WARC-Block-Digest\":\"sha1:RHPBAKNYLWQWHKSEOIWTFQ7JFZXCUAPO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103941562.52_warc_CC-MAIN-20220701125452-20220701155452-00374.warc.gz\"}"}
https://www.geeksforgeeks.org/bit-stuffing-error-detection-technique-using-java/?ref=rp
[ "# Bit Stuffing error detection technique using Java\n\nPrerequisites:\n1. Socket programming in Java\n2. Bit Stuffing\n3. Framing in data Link Layer\n\nData is encapsulated in frames in the data link layer and sent over the network. Bit Stuffing is a error detection technique.\n\nThe idea used is very simple. Each frame begins and ends with a special bit pattern “01111110” which is the flag byte. Whenever the sender’s data link layer encounters five consecutive 1s in the data, it automatically stuffs a 0 bit in the outgoing bit stream. The bit stuffing is analogous to byte stuffing, in which an escape byte is stuffed into the ongoing character stream before a flag byte in the data.\n\nWhen the receiver sees five consecutive 1 bits, followed by a 0 bit, it automatically destuffs(deletes) the 0 bit. Bit stuffing is completely transparent to the network layer in both sender and receiver computers.\n\nWith bit stuffing, the boundary between the two frames can be unambiguously recognized by the flag pattern. Thus, if the receiver loses track of where it is, all it has to do is scan the input for flag sequences., since they can only occur at frame boundaries and never within the data.\n\n```Illustrative Examples\n\nSender Side(Client):\nUser enters a binary data as input.\nEnter data:\n0000001\nData is stuffed and sent to the reciever for unstuffing.\nData stuffed in client: 01111110000000101111110\nSending to server for unstuffing\n\nStuffed data from client: 01111110000000101111110\nReciever has to unstuff the input data from sender and get the original data which\nwas given as input by the user.\nUnstuffed data:\n0000001\n```\n\nThe code implementation of the above logic is given below.\nAt Sender side(client side):\n\n `package` `bitstuffing; ` `import` `java.io.*; ` `import` `java.net.*; ` `import` `java.util.Scanner; ` `public` `class` `BitStuffingClient { ` `    ``public` `static` `void` `main(String[] args) ``throws` `IOException ` `    ``{ ` `        ``// Opens a socket for connection ` `        ``Socket socket = ``new` `Socket(``\"localhost\"``, ``6789``); ` ` `  `        ``DataInputStream dis = ``new` `DataInputStream(socket.getInputStream()); ` `        ``DataOutputStream dos = ``new` `DataOutputStream(socket.getOutputStream()); ` ` `  `        ``// Scanner class object to take input ` `        ``Scanner sc = ``new` `Scanner(System.in); ` ` `  `        ``// Takes input of unstuffed data from user ` `        ``System.out.println(``\"Enter data: \"``); ` `        ``String data = sc.nextLine(); ` ` `  `        ``int` `cnt = ``0``; ` `        ``String s = ``\"\"``; ` `        ``for` `(``int` `i = ``0``; i < data.length(); i++) { ` `            ``char` `ch = data.charAt(i); ` `            ``if` `(ch == ``'1'``) { ` ` `  `                ``// count number of consecutive 1's ` `                ``// in user's data ` `                ``cnt++; ` ` `  `                ``if` `(cnt < ``5``) ` `                    ``s += ch; ` `                ``else` `{ ` ` `  `                    ``// add one '0' after 5 consecutive 1's ` `                    ``s = s + ch + ``'0'``; ` `                    ``cnt = ``0``; ` `                ``} ` `            ``} ` `            ``else` `{ ` `                ``s += ch; ` `                ``cnt = ``0``; ` `            ``} ` `        ``} ` ` `  `        ``// add flag byte in the beginning ` `        ``// and end of stuffed data ` `        ``s = ``\"01111110\"` `+ s + ``\"01111110\"``; ` ` `  `        ``System.out.println(``\"Data stuffed in client: \"` `+ s); ` `  
      ``System.out.println(``\"Sending to server for unstuffing\"``); ` `        ``dos.writeUTF(s); ` `    ``} ` `} `\n\n `package` `bitstuffing; ` `import` `java.io.*; ` `import` `java.net.*; ` `public` `class` `BitStuffingServer { ` `    ``public` `static` `void` `main(String[] args) ``throws` `IOException ` `    ``{ ` `        ``ServerSocket skt = ``new` `ServerSocket(``6789``); ` ` `  `        ``// Used to block until a client connects to the server ` `        ``Socket socket = skt.accept(); ` ` `  `        ``DataInputStream dis = ``new` `DataInputStream(socket.getInputStream()); ` `        ``DataOutputStream dos = ``new` `DataOutputStream(socket.getOutputStream()); ` ` `  `        ``// Receiving the string from the client which ` `        ``// needs to be stuffed ` `        ``String s = dis.readUTF(); ` `        ``System.out.println(``\"Stuffed data from client: \"` `+ s); ` ` `  `        ``System.out.println(``\"Unstuffed data: \"``); ` `        ``int` `cnt = ``0``; ` ` `  `        ``// removal of stuffed bits: ` `        ``// start from 9th bit because the first 8 ` `        ``//  bits are of the special pattern. ` `        ``for` `(``int` `i = ``8``; i < s.length() - ``8``; i++) { ` `            ``char` `ch = s.charAt(i); ` `            ``if` `(ch == ``'1'``) { ` `                ``cnt++; ` `                ``System.out.print(ch); ` ` `  `                ``// After 5 consecutive 1's one stuffed bit ` `                ``//'0' is added. We need to remove that. ` `                ``if` `(cnt == ``5``) { ` `                    ``i++; ` `                    ``cnt = ``0``; ` `                ``} ` `            ``} ` `            ``else` `{ ` ` `  `                ``// print unstuffed data ` `                ``System.out.print(ch); ` ` `  `                ``// we only need to maintain count  ` `                ``// of consecutive 1's ` `                ``cnt = ``0``; ` `            ``} ` `        ``} ` `        ``System.out.println(); ` `    ``} ` `} `\n\nThe input and output are as shown above.\n\nAttention reader! Don’t stop learning now. Get hold of all the important Java and Collections concepts with the Fundamentals of Java and Java Collections Course at a student-friendly price and become industry ready.\n\nMy Personal Notes arrow_drop_up", null, "Check out this Author's contributed articles.\n\nIf you like GeeksforGeeks and would like to contribute, you can also write an article using contribute.geeksforgeeks.org or mail your article to contribute@geeksforgeeks.org. See your article appearing on the GeeksforGeeks main page and help other Geeks.\n\nPlease Improve this article if you find anything incorrect by clicking on the \"Improve Article\" button below." ]
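The article's sample input 0000001 never triggers stuffing, because it contains no run of five 1s. The short sketch below shows the same stuff/destuff logic on an input where bits actually get inserted; it is written in C++ rather than the article's Java, omits the socket plumbing, and the names `stuff` and `destuff` are illustrative, not part of the article's code.

```cpp
#include <iostream>
#include <string>

const std::string FLAG = "01111110";

// Insert a '0' after every run of five consecutive '1' bits, then add the flags.
std::string stuff(const std::string& data) {
    std::string out;
    int ones = 0;
    for (char ch : data) {
        out += ch;
        ones = (ch == '1') ? ones + 1 : 0;
        if (ones == 5) {        // five 1s seen: stuff a 0
            out += '0';
            ones = 0;
        }
    }
    return FLAG + out + FLAG;
}

// Drop the flags, then delete the '0' that follows every run of five '1' bits.
std::string destuff(const std::string& frame) {
    std::string body = frame.substr(FLAG.size(), frame.size() - 2 * FLAG.size());
    std::string out;
    int ones = 0;
    for (std::size_t i = 0; i < body.size(); ++i) {
        out += body[i];
        ones = (body[i] == '1') ? ones + 1 : 0;
        if (ones == 5) {        // the next bit is the stuffed 0: skip it
            ++i;
            ones = 0;
        }
    }
    return out;
}

int main() {
    std::string data = "0111110111111";       // contains runs of five and six 1s
    std::string frame = stuff(data);
    std::cout << "stuffed : " << frame << "\n";
    std::cout << "restored: " << destuff(frame) << "\n";  // prints the original data
    return 0;
}
```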
[ null, "https://media.geeksforgeeks.org/auth/profile/cie85k5ip4a95oyzqoel", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.74904287,"math_prob":0.7474792,"size":5569,"snap":"2020-45-2020-50","text_gpt3_token_len":1356,"char_repetition_ratio":0.115543574,"word_repetition_ratio":0.05856833,"special_character_ratio":0.26432034,"punctuation_ratio":0.1435743,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9751726,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-27T20:43:21Z\",\"WARC-Record-ID\":\"<urn:uuid:e0337d95-5101-44d5-a875-eb0dd8ef2f4e>\",\"Content-Length\":\"120708\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2a8c7821-fd2b-46b5-9e68-c980ff263894>\",\"WARC-Concurrent-To\":\"<urn:uuid:3343be8a-d973-4ee4-88b8-4955c0e3522a>\",\"WARC-IP-Address\":\"23.46.153.75\",\"WARC-Target-URI\":\"https://www.geeksforgeeks.org/bit-stuffing-error-detection-technique-using-java/?ref=rp\",\"WARC-Payload-Digest\":\"sha1:JS43IQGIKM3RYSCERMOQCFZ74JKPA6H5\",\"WARC-Block-Digest\":\"sha1:OEB45TBFHWCIMA5J7L5UH7O5665LZVOK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141194171.48_warc_CC-MAIN-20201127191451-20201127221451-00561.warc.gz\"}"}
https://ronjeffries.com/xprog/articles/sudoku4/
[ "The program reached an impossible state during the first test of the algorithm that I turned loose. I thought I had made a mistake, but it turned out I had not. Well, not a coding mistake.\n\n## Planning Next Steps\n\nIt’s time to decide which direction to go. The Game knows how to find a constrained cell, and knows what value it would like to put into that cell. That suggests that a sufficiently easy game, one that is constrained all the way, can now be solved. One possibility, and it’s a tempting one, is to solve a game. That is, after all, the story.\n\nOn the other hand, we know that the “real” story is to build Sudoku and learn about objects, how to represent the strategies, and so on. The idea was that Sudoku is a good example of how to use TDD well. As such, I’m as interested in getting a “good” program as I am in getting “done”. Maybe more so, since I really don’t need any Sudoku solutions, and I do need to be good at my job, which involves programming.\n\nI think, though, that I can’t resist going for the solution. Let’s see what happens.\n\n## A Constraint-based Solution\n\nI’m assuming that some puzzles are solvable by just repeatedly finding cells that have only one possible result, and filling in that result, repeating until done. I’m further assuming, or at least hoping, that my current “given” puzzle is one of those. If it is, I should be able to write the solve method quickly. The question is how to test it. I need a method “solved?” to tell me whether I’m done. I’ll start with a simple definition and beef it up later.\n\nBegin with a test.\n\n``` class GameTest < Test::Unit::TestCase\n\ndef setup\ngivens =\n[0, 2, 3, 8, 0, 0, 0, 0, 7,\n0, 5, 8, 9, 7, 6, 2, 0, 0,\n7, 0, 0, 2, 0, 0, 0, 5, 0,\n9, 0, 0, 4, 0, 2, 0, 0, 0,\n6, 0, 7, 0, 0, 0, 4, 0, 5,\n0, 0, 0, 7, 0, 8, 0, 0, 6,\n0, 1, 0, 0, 0, 7, 0, 0, 8,\n0, 0, 9, 5, 8, 4, 1, 7, 0,\n8, 0, 0, 0, 0, 3, 5, 4, 0]\n@game = Game.new(givens)\nend\n\ndef test_first_game\nassert_equal(23, @game.first_constrained)\nassert_equal(1, @game.proposed_value(23))\nend\n\ndef test_solved_says_no\nassert_equal(false, @game.solved?)\nend\nend```\n\nI’ve refactored my GameTest class to have a setup, and added a new test with a call to solved?. I’ll implement that to return true, to give me a red bar, then implement a simple version. That turns out to be:\n\n``` def solved?\ncollect_values(0..80).all? { | value | value > 0 }\nend```\n\nI’m just looking to see whether all the cells have a solution, not whether the solution is fully valid. That will do for now. Now I’ll test my test_game, which is not solved because its first element is zero, and then I’ll set its first element non-zero and see if it shows up as solved.\n\n``` def test_fake_completed_game\ngivens = [\n1, 1, 1, 1, 1, 1, 1, 1, 1,\n1, 1, 1, 1, 1, 1, 1, 1, 1,\n1, 1, 1, 1, 1, 1, 1, 1, 1,\n1, 1, 1, 1, 1, 1, 1, 1, 1,\n1, 1, 1, 1, 1, 1, 1, 1, 1,\n1, 1, 1, 1, 1, 1, 1, 1, 1,\n1, 1, 1, 1, 1, 1, 1, 1, 1,\n1, 1, 1, 1, 1, 1, 1, 1, 1,\n1, 1, 1, 1, 1, 1, 1, 1, 1\n]\nassert_equal(true, Game.new(givens).solved?)\nend```\n\nThat passes. Now let’s write a test that asks us to solve the game:\n\n``` def test_solve_given_Game\nassert_equal(false, @game.solved?)\n@game.solve\nassert_equal(true, @game.solved?)\nend```\n\nWith an empty definition of solve, that fails on the second assert. 
Now to write solved:\n\n``` def solve\nwhile solve_one_cell\nend\nend\n\ndef solve_one_cell\ncell_number = first_constrained\nif (cell_number == nil)\nreturn false\nend\nset_cell(cell_number, proposed_value(cell_number))\nend\n\ndef set_cell(cell_number, val)\n@cells[cell_number].value= val\nend```\n\nYikes!! The test fails, on the 80th cell in the game. By inspection, I can see that the lower right corner, still empty, has no possible value! But I need to get to work … I’ll have to return to this tonight.\n\n## Stalking the Problem\n\nI thought about the problem off and on today, just a little bit. I couldn’t see any way that this simple solver could be wrong – but I also know what it means when a programmer says there’s no way his program could be wrong.\n\nSo I decided to set a trap. I extended the solver part of the code so that after setting each new value, it checks to see if the game is still solvable, using just a very simple test: does there exist a cell in the game such that it’s blocked and can contain none of the numbers from 1-9. I ran the problem and it actually failed fairly early … the matrix was less than half full and it had already blocked.\n\nI looked at the code, looked at the last move it had made, and it all looked OK. So I decided that I would go back to websoduku.com and see whether my solution looked like the solved version of the original puzzle. I figured I’d let the program make one move at a time, and see if websoduke agreed with me.\n\nSo the first thing I did, of course, was check to see whether my givens equal the givens of the game on websoduku … and they didn’t. I had mis-transcribed one number!\n\nI changed the number to the original, ran the tests, and they all ran, including my solve_given_Game test up above. The program works, and it was working just fine before I said Yikes! up above. I’m greatly relieved. I’m not sure whether I should back out my impossibility-checking code, or whether to leave it in now that it’s written. I didn’t TDD it, I just slammed it in as debugging code. It’s just a loop with a bunch of checks in it, and a print.\n\nI guess I’ll leave it for now, for you to look at if you care to. The bottom line is that the program works. I’ll retrospect in the next article. For now, know this: I believe the program can solve any game where there always exists at least one square that is forced. The simplest strategy is implemented, and I’m sure it works. I’ll test a bit more, of course, but I think we’re good so far.\n\nWhat I’d like to do pretty soon … maybe next … is to clean up this code. I’ve seen some pretty procedural code among the community who are working this problem right now, and I’d like to do better. And, of course, I’d like to toss in a couple of new strategies..\n\nSpeaking of which, it has been pointed out to me that what I “concluded” in that sketch in the preceding article is flat wrong. I’m not sure what I saw when I was fiddling with a game, but that picture doesn’t represent a true observation about the game. Nice catch, folks!\n\nHere’s the code, just in case … it’s not better or more interesting, it’s just here for the record. 
I’ll just include the Game, everything else is the same.\n\n## Code With the Debug Pieces In It\n\n```require 'project.rb'\n\nclass Game\ndef Game::test_game\nGame.new((0..80).to_a)\nend\n\ndef initialize(anArray)\n@cells = anArray.collect { | value | Cell.new(value) }\nend\n\ndef cell_value(i)\n@cells[i].value\nend\n\ndef row(row_number)\nrow_start = row_number*9\ncollect_values row_start..row_start+8\nend\n\ndef column(column_number)\ncollect_values column_indexes(column_number)\nend\n\ndef square(square_number)\ncollect_values square_indexes(square_number)\nend\n\ndef collect_values index_collection\nindex_collection.collect { | c | cell_value(c) }\nend\n\ndef column_indexes(column_number)\n(0..8).collect { | row | column_number+row*9 }\nend\n\ndef square_indexes(square_number)\nstart_cell = start_cell(square_number)\nraw_square.collect { | offset | start_cell + offset }\nend\n\ndef raw_square\n[0, 1, 2, 9, 10, 11, 18, 19, 20]\nend\n\ndef start_cell(square_number)\nfirst_row = square_number / 3 * 3\nfirst_column = (square_number % 3) * 3\nfirst_row * 9 + first_column\nend\n\ndef row_containing(aCell)\naCell / 9\nend\n\ndef column_containing(aCell)\naCell % 9\nend\n\ndef square_containing(aCell)\nrow_containing(aCell) / 3 * 3 + column_containing(aCell) / 3\nend\n\ndef first_constrained\n(0..80).each do\n| cell_number |\nreturn cell_number if constrained?(cell_number)\nend\nreturn nil\nend\n\ndef constrained?(cell_number)\ncell_value(cell_number) == 0 && possible_values(cell_number).length == 1\nend\n\ndef possible_values(cell_number)\n[1, 2, 3, 4, 5, 6, 7, 8, 9] -\nrow(row_containing(cell_number)) -\ncolumn(column_containing(cell_number)) -\nsquare(square_containing(cell_number))\nend\n\ndef proposed_value(cell_number)\npossibles = possible_values(cell_number)\nreturn nil if possibles.length != 1\nreturn possibles.first\nend\n\ndef solved?\ncollect_values(0..80).all? { | value | value > 0 }\nend\n\ndef solve\nwhile solve_one_cell\nend\nend\n\ndef solve_one_cell\ncell_number = first_constrained\nif (cell_number == nil)\nreturn false\nend\nproposed = proposed_value(cell_number)\nset_cell(cell_number, proposed)\nif (!possible)\nputs \"Game just became impossible\"\nputs \"I played #{proposed} at #{cell_number} = #{cell_number / 9}, #{cell_number %9}\"\nprint_game\nend\nreturn true\nend\n\ndef set_cell(cell_number, val)\n@cells[cell_number].value= val\nend\n\ndef possible\n(0..80).each do\n| cell_number |\nif (cell_value(cell_number) == 0 && !cell_possible(cell_number))\nreturn false\nend\nend\nreturn true\nend\n\ndef cell_possible(cell_number)\nif possible_values(cell_number).length == 0\nputs \"impossible at #{cell_number} = #{cell_number / 9}, #{cell_number %9}\"\nreturn false\nend\nreturn true\nend\n\ndef print_game\nputs\n(0..8).each do\n| row |\n(0..8).each do\n| col |\nprint cell_value(row*9+col), ' '\nend\nputs\nend\nend\nend```" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.84714645,"math_prob":0.9474408,"size":8842,"snap":"2019-51-2020-05","text_gpt3_token_len":2557,"char_repetition_ratio":0.17402127,"word_repetition_ratio":0.09914078,"special_character_ratio":0.29710472,"punctuation_ratio":0.19665484,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9730629,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-22T20:25:04Z\",\"WARC-Record-ID\":\"<urn:uuid:88a334fc-30f3-426e-8fdb-6838572b5469>\",\"Content-Length\":\"17956\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:90107c1d-c815-4932-8d59-257e0f2f3e5d>\",\"WARC-Concurrent-To\":\"<urn:uuid:d14d4ca2-c51d-4059-bba2-31bc16538472>\",\"WARC-IP-Address\":\"69.39.76.248\",\"WARC-Target-URI\":\"https://ronjeffries.com/xprog/articles/sudoku4/\",\"WARC-Payload-Digest\":\"sha1:G5HQTQL5TQNDH4K6WELO4A5W62UYTN2K\",\"WARC-Block-Digest\":\"sha1:OIFLSFF3KL3N6VITJMIM2CL2645WZA4A\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250607407.48_warc_CC-MAIN-20200122191620-20200122220620-00156.warc.gz\"}"}
http://www.cpphub.com/2015/03/searching-in-strings.html
[ "# Searching in strings\n\nSearching in strings with examples:\n\nC++ strings supports finding the required string content from the total string.\n\nHere is the list available find functions.\n\n1. find ( ) : It searches a string for a specified character or group of characters and returns the starting position of the first occurrence found or npos if no match is found.\n2. find_first_of ( ): This function searches a target string and returns the position of the first match of any character in a specified group. If it finds no such element then it returns npos.\n3. find_last_of ( ): This function searches a target string and returns the position of the last match of any character in a specified group. If it finds no such element then it returns npos.\n4. find_first_not_of ( ) : This function searches a target string and returns the position of the first element that doesn’t match any character in a specified group. If it finds no such element then it returns npos.\n5. find_last_not_of ( ): This function searches a target string and returns the position of the element with the largest subscript that doesn’t match of any character in a specified group. If it finds no such element then it returns npos.\n6. rfind ( ): This function searches a string from end to beginning for a specified character or group of characters and returns the  starting position of the match if one is found. If it finds no such element then it returns npos.\n\nString searching member functions and their general uses with examples:\n\nSee the below example helps to understand the usage of the string searching functions usage\n\n```#include \"stdafx.h\"\n#include <string>\n#include <iostream>\nusing namespace std;\nint _tmain (int argc, _TCHAR* argv[])\n{\nstring MyString (50, 'S');\nMyString.replace (0, 2, \"NN\");\nfor (int iIdex = 2; iIdex <= (MyString.size () / 2) - 1; iIdex++)\nfor (int factor = 2; factor * iIdex < MyString.size();factor++)\nMyString[factor * iIdex] = 'N';\ncout << \"Prime Number:\" << endl;\nint iIndex = MyString.find ('S');\nwhile (iIndex != MyString.npos) {\ncout << iIndex << \" \";\niIndex++;\niIndex = MyString.find ('S', iIndex);\n}\ncout << \"\\n Not a prime Number:\" << endl;\niIndex= MyString.find_first_not_of ('S');\nwhile (iIndex != MyString.npos) {\ncout << iIndex << \" \";\niIndex++;\niIndex = MyString.find_first_not_of('S', iIndex);\n}\ngetchar ();\nreturn 0;\n}\n\n```\nThe output from above program is\n```Prime Number:\n\n2 3 5 7 11 13 17 19 23 29 31 37 41 43 47\n\nNot a prime Number:\n\n0 1 4 6 8 9 10 12 14 15 16 18 20 21 22 24 25 26 27 28 30 32 33 34 35 36 38 39 40\n\n42 44 45 46 48 49\n\n```\n\nfind( ) allows you to walk forward through a string, detecting multiple occurrences of a\ncharacter or group of characters, while find_first_not_of( ) allows you to test for the absence\nof a character or group.\nThe find member is also useful for detecting the occurrence of a sequence of characters in a\nstring:\n```#include \"stdafx.h\"\n#include <string>\n#include <iostream>\nusing namespace std;\nint _tmain (int argc, _TCHAR* argv[])\n{\nstring InStr(\"This, is, my, FIRST, string, function, using, find\");\nint iIndex = InStr.find (\"i\");\nwhile(iIndex != string::npos) {\ncout << iIndex << endl;\niIndex++;\niIndex = InStr.find(\"i\", iIndex);\n}\ngetchar ();\nreturn 0;\n}\n\n```\nThe output of the above program is\n```2\n\n6\n\n24\n\n34\n\n41\n\n47\n\n```\n\nThe above program performs no case sensitive search. 
That’s why it is not considered the I in the FIRST\nLet us see the example that performs a case insensitive search:\n\n```// TestWifi.cpp : Defines the entry point for the console application.\n#include \"stdafx.h\"\n#include <string>\n#include <iostream>\nusing namespace std;\nstring MakeUpper (string& StrIn)\n{\nchar* pszBuff = new char[StrIn.length()];\nStrIn.copy (pszBuff, StrIn.length ());\nfor(int iIndex = 0; iIndex < StrIn.length(); iIndex++)\npszBuff[iIndex] = toupper(pszBuff[iIndex]);\nstring strResult(pszBuff, StrIn.length());\ndelete pszBuff;\nreturn strResult;\n}\nstring MakeLower(string& StrIn)\n{\nchar* pszBuff = new char[StrIn.length()];\nStrIn.copy (pszBuff, StrIn.length ());\nfor (int iIndex = 0; iIndex < StrIn.length(); iIndex++)\npszBuff[iIndex] = tolower(pszBuff[iIndex]);\nstring strResult (pszBuff, StrIn.length());\ndelete pszBuff;\nreturn strResult;\n}\nint _tmain (int argc, _TCHAR* argv[])\n{\nstring StrInput (\"This, is, my, FIRST, string, functIon, using, find\");\ncout << StrInput << endl;\ncout << MakeUpper (StrInput) << endl;\ncout << MakeLower (StrInput) << endl;\nint iIndex = StrInput.find (\"i\");\nwhile (iIndex != string::npos) {\ncout << iIndex << endl;\niIndex++;\niIndex = StrInput.find (\"i\", iIndex);\n}\nstring lcase = MakeLower (StrInput);\ncout << lcase << endl;\niIndex = lcase.find (\"i\");\nwhile (iIndex != lcase.npos) {\ncout << iIndex << endl;\niIndex++;\niIndex = lcase.find (\"i\", iIndex);\n}\nstring ucase = MakeUpper (StrInput);\ncout << ucase << endl;\niIndex = ucase.find (\"I\");\nwhile (iIndex != ucase.npos) {\ncout << iIndex << endl;\niIndex++;\niIndex = ucase.find (\"I\", iIndex);\n}\ngetchar ();\nreturn 0;\n}\n\n```\nBoth the MakeUpper ( ) and MakeLower ( ) functions follow the same form: they allocate\nstorage to hold the data in the argument string, copy the data and change the case. Then they\ncreate a new string with the new data, release the buffer and return the result string. The\nc_str( ) function cannot be used to produce a pointer to directly manipulate the data in the\nstring because c_str( ) returns a pointer to const. That is, you’re not allowed to manipulate\nstring data with a pointer, only with member functions." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5992236,"math_prob":0.90868497,"size":7877,"snap":"2019-51-2020-05","text_gpt3_token_len":1962,"char_repetition_ratio":0.15267369,"word_repetition_ratio":0.7135338,"special_character_ratio":0.2698997,"punctuation_ratio":0.15369128,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9856648,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-15T19:53:16Z\",\"WARC-Record-ID\":\"<urn:uuid:47bf75ea-e428-49cc-8f6e-2fc7a6518312>\",\"Content-Length\":\"67860\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ee2c5c0e-3ff9-4a86-ba06-7506434c9a17>\",\"WARC-Concurrent-To\":\"<urn:uuid:adb12bbc-577e-4517-aeae-31b7a092046f>\",\"WARC-IP-Address\":\"172.217.15.115\",\"WARC-Target-URI\":\"http://www.cpphub.com/2015/03/searching-in-strings.html\",\"WARC-Payload-Digest\":\"sha1:6P6ABDCIMJWRZUZJ3VD556OVJHKVY3Q3\",\"WARC-Block-Digest\":\"sha1:I5AUK2NM4UZKADHNO5KF7WKUPQPN2SVR\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575541309137.92_warc_CC-MAIN-20191215173718-20191215201718-00123.warc.gz\"}"}
https://discuss.codechef.com/t/confusion-in-yvnum-solution-dec18-cookoff/21432
[ "", null, "# Confusion in YVNUM solution Dec18 cookoff\n\nHere is the link to the Problem and Solution of December cook-off YVNUM.\n\nThere is one thing in this solution which i cant get my head around -\nwhen the number becomes too big we take the modulo of the number wrt to 1000000007 instead of the original number and then do the concatenation operation with it rather than the original number and then expect the answer to be same, will it work??\n\nEx. string 921, modulo number = 103\n\n``````921%103 = 97 but we find that (921219192)%103 != (977997)%103\n\nPlease tell me where am i wrong.\n``````\n\nWe first try to represent the number as sum of several numbers. Now, as the numbers become large, we take the modulo of the component numbers instead of the digits.\n\nSo, to find modulo of (921219192), we break it as\n\na = 921 * 10^6\nb = 219 * 10^3\nc = 192 * 10^0\n\nSo,921219192 = a + b + c\n\nSo, now the answer is (a%103+ b%103 + c%103)%103.\n\nAnd I don’t think that just taking modulo of each cyclic shift and then concatenating it would work." ]
[ null, "https://s3.amazonaws.com/discourseproduction/original/3X/7/f/7ffd6e5e45912aba9f6a1a33447d6baae049de81.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.89686036,"math_prob":0.98830545,"size":521,"snap":"2020-34-2020-40","text_gpt3_token_len":129,"char_repetition_ratio":0.12765957,"word_repetition_ratio":0.021505376,"special_character_ratio":0.30902112,"punctuation_ratio":0.07619048,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9937581,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-29T17:19:28Z\",\"WARC-Record-ID\":\"<urn:uuid:37dcb5cb-ccac-4a0c-acd2-30978a78fc77>\",\"Content-Length\":\"14811\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cd366621-fd8b-4eb7-8fee-c98b3cea4929>\",\"WARC-Concurrent-To\":\"<urn:uuid:3739db5c-fe29-4d3d-997a-a3c2ed77c1f3>\",\"WARC-IP-Address\":\"52.54.40.124\",\"WARC-Target-URI\":\"https://discuss.codechef.com/t/confusion-in-yvnum-solution-dec18-cookoff/21432\",\"WARC-Payload-Digest\":\"sha1:3ZJUVOSOUZBAKWH43W3YQ7OGNFQQUNDY\",\"WARC-Block-Digest\":\"sha1:XPQA2XTGKRFV74GF2B46ECQHDBOTURUJ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400202418.22_warc_CC-MAIN-20200929154729-20200929184729-00787.warc.gz\"}"}
https://herui.me/personalization/how-many-miles-is-78-km.php
[ "### How many miles is 78 km\n\n| |", null, "More information from the unit converter. How many km in 1 miles? The answer is We assume you are converting between kilometre and mile. How to convert 78 to miles. Kilometers to Miles conversion. 78 kilometers how many miles. Transform 78 kilometers in miles (78 km to (mi.\n\n## 78 miles to km\n\nEasily convert miles to kilometers, with formula, conversion chart, auto 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81 . An interactive calculator that allows you to convert between kilometres (km) and miles (mi) 78km = miles, 78 miles = km. 79km = miles, How fast is 78 kilometers per hour? What is 78 kilometers per hour in miles per hour? This simple calculator will allow you to easily convert 78 km/h to mph.\n\nHow far is 78 km in miles? How many miles in a 78 k? What is 78 km in miles? 78 Km to Mi. Of course, you already know the answer to these questions: Conversion 78 km into miles, convert 78 km, convert km into miles, 78 km how many miles?. How many miles per hour are in 78 kilometers per hour? What is 78 kilometers per hour in miles per hour? How fast is 78 kilometers per hour in other units of.\n\nA common question is How many kilometer in 78 mile? And the answer is km in 78 mi. Likewise the question how many mile in 78 kilometer has the. Kilometers (km) to Miles (mi) conversion calculator, table and how to convert. 78 km = mi a network of railways connecting Moscow with the Russian Far East is the longest railway in the world with a length of km (5, mi). A common question isHow many mile in 78 kilometer?And the answer is mi in 78 km. Likewise the question how many kilometer in 78 mile has. Miles to km converter and conversion table to find out how many kilometers in 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81 . Miles to km and km to miles converter and conversin table to find out how 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81 miles and kilometers and access the tables please check how many km in a mile page. Conversion from miles and yards into kilometres and metres. Plus 1 to 50 miles to km and metres table. 47, 75, 48, 77, 49, 78, 50, 80, Wind Speed Converter - Convert wind speeds between Miles per hour, kilometers per hour, knots, metres per second, feet per second. 78 KM to Miles - Convert 78 kilometers to miles. 78 km in miles to find out how many miles are there in 78 kilometers quickly and easily. To convert 78km to miles. See distance to other cities from Manila – Philippines measured in kilometers (km ), miles and nautical miles and their local time. Distances are measured using. How far is it from Dhaka to locations worldwide Bangladesh, Tangail, Sat am, 78 km, 48 miles, 42 nm, Northwest NW · Bangladesh, Comilla, Sat am." ]
[ null, "https://herui.me/personalization/how-many-miles-is-78-km.php", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8757094,"math_prob":0.9169263,"size":2822,"snap":"2020-10-2020-16","text_gpt3_token_len":828,"char_repetition_ratio":0.20191625,"word_repetition_ratio":0.16360295,"special_character_ratio":0.33167967,"punctuation_ratio":0.20266272,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96059114,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-24T23:57:00Z\",\"WARC-Record-ID\":\"<urn:uuid:52855236-22d8-424c-b6d3-ef2c7f0d23e7>\",\"Content-Length\":\"11530\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b5d5b583-bbb7-4e21-aa27-fd5ad69a5887>\",\"WARC-Concurrent-To\":\"<urn:uuid:c404da17-5847-400f-92ec-0aab41816bee>\",\"WARC-IP-Address\":\"104.31.68.25\",\"WARC-Target-URI\":\"https://herui.me/personalization/how-many-miles-is-78-km.php\",\"WARC-Payload-Digest\":\"sha1:PZE7IMSLUI6TTVVWPE4PZWADZBRHXSNQ\",\"WARC-Block-Digest\":\"sha1:24C3XKHLYWCW2YMIJF67HILTKLRV5FJI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875145989.45_warc_CC-MAIN-20200224224431-20200225014431-00184.warc.gz\"}"}
https://assets.radiopaedia.org/articles/absorbed-dose?lang=us
[ "Absorbed dose\n\nAbsorbed dose is a measure of the energy deposited in a medium by ionizing radiation. It is equal to the energy deposited per unit mass of a medium, and so has the unit joules (J) per kilogram (kg), with the adopted name of gray (Gy) where 1 Gy = 1 J.kg-1.\n\nThe absorbed dose is not a good indicator of the likely biological effect. 1 Gy of alpha radiation would be much more biologically damaging than 1 Gy of photon radiation for example. Appropriate weighting factors can be applied reflecting the different relative biological effects to find the equivalent dose.\n\nThe risk of stochastic effects due to radiation exposure for the population can be quantified using the effective dose, which is a weighted average of the equivalent dose to each organ depending upon its radiosensitivity.\n\nOther related values include:\n\n• absorbed dose rate (Gy.s-1): amount of radiation delivered over a time period\n• rad: the international unit of absorbed dose pre-1980 where 1 Gy = 100 rad\n• kerma (Gy): kinetic energy released per unit mass", null, "", null, "", null, "Unable to process the form. Check for errors and try again.", null, "Thank you for updating your details." ]
[ null, "https://prod-assets.static.radiopaedia.org/assets/loadingAnimation-1da817fd12a97dcdd18f98984dd1fc44dc490890e146bbda2a34c3c51d1da58a.gif", null, "https://prod-assets.static.radiopaedia.org/assets/form/alert_accept-1f6de24cc1a0f2c85a6a44c4c6204e42de09421ef1a2876b1b18af12b0c670e6.png", null, "https://prod-assets.static.radiopaedia.org/assets/error-525764a3f25ba0ed9d3c1d74a54d1c391198308c9979c8deb2b45ee791a899e8.png", null, "https://prod-assets.static.radiopaedia.org/assets/form/alert_accept-1f6de24cc1a0f2c85a6a44c4c6204e42de09421ef1a2876b1b18af12b0c670e6.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8131201,"math_prob":0.90985006,"size":1765,"snap":"2019-43-2019-47","text_gpt3_token_len":420,"char_repetition_ratio":0.13742192,"word_repetition_ratio":0.0,"special_character_ratio":0.21699716,"punctuation_ratio":0.057142857,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9629706,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-19T09:29:46Z\",\"WARC-Record-ID\":\"<urn:uuid:36360ba2-05b8-4d2b-95a5-94aae6fff81d>\",\"Content-Length\":\"126847\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cd5b175a-0353-46cb-bfe9-35edbb49db27>\",\"WARC-Concurrent-To\":\"<urn:uuid:53fd245e-8531-4c0f-a4eb-036cbfcf7cf2>\",\"WARC-IP-Address\":\"104.26.9.61\",\"WARC-Target-URI\":\"https://assets.radiopaedia.org/articles/absorbed-dose?lang=us\",\"WARC-Payload-Digest\":\"sha1:HAPOPBHPGLKGJJ6J4QASBX4TMD7GGYWF\",\"WARC-Block-Digest\":\"sha1:FOWIABUP5ZLYJN45F4H3VCHDOWBY452U\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986692723.54_warc_CC-MAIN-20191019090937-20191019114437-00187.warc.gz\"}"}
https://enacademic.com/dic.nsf/enwiki/2770587/Indefinite
[ "\n\n# Indefinite inner product space\n\nIn mathematics, in the field of functional analysis, an indefinite inner product space\n\n:$\\left(K, langle cdot,,cdot angle, J\\right)$\n\nis an infinite-dimensional complex vector space $K$ equipped with both an indefinite inner product\n\n:$langle cdot,,cdot angle$\n\nand a positive semi-definite inner product\n\n:$\\left(x,,y\\right) stackrel\\left\\{mathrm\\left\\{def\\left\\{=\\right\\} langle x,,Jy angle$,\n\nwhere the metric operator $J$ is an endomorphism of $K$ obeying\n\n:$J^3 = J$.\n\nThe indefinite inner product space itself is not necessarily a Hilbert space; but the existence of a positive semi-definite inner product on $K$ implies that one can form a quotient space on which there is a positive definite inner product. Given a strong enough topology on this quotient space, it has the structure of a Hilbert space, and many objects of interest in typical applications fall into this quotient space.\n\nAn indefinite inner product space is called a Krein space (or $J$\"-space\") if $\\left(x,,y\\right)$ is positive definite and $K$ possesses a majorant topology. Krein spaces are named in honor of the Ukrainian mathematician Mark Grigorievich Krein (3 April 1907 - 17 October 1989).\n\nInner products and the metric operator\n\nConsider a complex vector space $K$ equipped with an indefinite hermitian form $langle cdot ,, cdot angle$. In the theory of Krein spaces it is common to call such a hermitian form an indefinite inner product. The following subsets are defined in terms of the square norm induced by the indefinite inner product:\n\n:$K_\\left\\{0\\right\\} stackrel\\left\\{mathrm\\left\\{def\\left\\{=\\right\\} \\left\\{ x in K : langle x,,x angle = 0 \\right\\}$ (\"neutral\"):$K_\\left\\{++\\right\\} stackrel\\left\\{mathrm\\left\\{def\\left\\{=\\right\\} \\left\\{ x in K : langle x,,x angle > 0 \\right\\}$ (\"positive\"):$K_\\left\\{--\\right\\} stackrel\\left\\{mathrm\\left\\{def\\left\\{=\\right\\} \\left\\{ x in K : langle x,,x angle < 0 \\right\\}$ (\"negative\"):$K_\\left\\{+0\\right\\} stackrel\\left\\{mathrm\\left\\{def\\left\\{=\\right\\} K_\\left\\{++\\right\\} cup K_\\left\\{0\\right\\}$ (\"non-negative\"):$K_\\left\\{-0\\right\\} stackrel\\left\\{mathrm\\left\\{def\\left\\{=\\right\\} K_\\left\\{--\\right\\} cup K_\\left\\{0\\right\\}$ (\"non-positive\")\n\nA subspace $L subset K$ lying within $K_\\left\\{0\\right\\}$ is called a \"neutral subspace\". Similarly, a subspace lying within $K_\\left\\{+0\\right\\}$ ($K_\\left\\{-0\\right\\}$) is called \"positive\" (\"negative\") \"semi-definite\", and a subspace lying within $K_\\left\\{++\\right\\} cup \\left\\{0\\right\\}$ ($K_\\left\\{--\\right\\} cup \\left\\{0\\right\\}$) is called \"positive\" (\"negative\") \"definite\". A subspace in any of the above categories may be called \"semi-definite\", and any subspace that is not semi-definite is called \"indefinite\".\n\nLet our indefinite inner product space also be equipped with a decomposition into a pair of subspaces $K = K_+ oplus K_-$, called the \"fundamental decomposition\", which respects the complex structure on $K$. Hence the corresponding linear projection operators $P_pm$ coincide with the identity on $K_pm$ and annihilate $K_mp$, and they commute with multiplication by the $i$ of the complex structure. 
If this decomposition is such that $K_+ subset K_\\left\\{+0\\right\\}$ and $K_- subset K_\\left\\{-0\\right\\}$, then $K$ is called an indefinite inner product space; if $K_pm subset K_\\left\\{pmpm\\right\\} cup \\left\\{0\\right\\}$, then $K$ is called a Krein space, subject to the existence of a majorant topology on $K$.\n\nThe operator $J stackrel\\left\\{mathrm\\left\\{def\\left\\{=\\right\\} P_+ - P_-$ is called the (real phase) \"metric operator\" or \"fundamental symmetry\", and may be used to define the \"Hilbert inner product\" $\\left(cdot,,cdot\\right)$:\n\n:$\\left(x,,y\\right) stackrel\\left\\{mathrm\\left\\{def\\left\\{=\\right\\} langle x,,Jy angle = langle x,,P_+ y angle - langle x,,P_- y angle$\n\nOn a Krein space, the Hilbert inner product is positive definite, giving $K$ the structure of a Hilbert space (under a suitable topology). Under the weaker constraint $K_pm subset K_\\left\\{pm0\\right\\}$, some elements of the neutral subspace $K_0$ may still be neutral in the Hilbert inner product, but many are not. For instance, the subspaces $K_0 cap K_pm$ are part of the neutral subspace of the Hilbert inner product, because an element $k in K_0 cap K_pm$ obeys $\\left(k,,k\\right) stackrel\\left\\{mathrm\\left\\{def\\left\\{=\\right\\} langle k,,Jk angle = pm langle k,,k angle = 0$. But an element $k = k_+ + k_-$ ($k_pm in K_pm$) which happens to lie in $K_0$ because $langle k_-,,k_- angle = - langle k_+,,k_+ angle$ will have a positive square norm under the Hilbert inner product.\n\nWe note that the definition of the indefinite inner product as a Hermitian form implies that:\n\n:$langle x,,y angle = frac\\left\\{1\\right\\}\\left\\{4\\right\\} \\left(langle x+y,,x+y angle - langle x-y,,x-y angle\\right)$\n\nTherefore the indefinite inner product of any two elements $x,,y in K$ which differ only by an element $x-y in K_0$ is equal to the square norm of their average $frac\\left\\{x+y\\right\\}\\left\\{2\\right\\}$. Consequently, the inner product of any non-zero element $k_0 in \\left(K_0 cap K_pm\\right)$ with any other element $k_pm in K_pm$ must be zero, lest we should be able to construct some $k_pm + 2 lambda k_0$ whose inner product with $k_pm$ has the wrong sign to be the square norm of $k_pm + lambda k_0 in K_pm$.\n\nSimilar arguments about the Hilbert inner product (which can be demonstrated to be a Hermitian form, therefore justifying the name \"inner product\") lead to the conclusion that its neutral space is precisely $K_\\left\\{00\\right\\} = \\left(K_0 cap K_+\\right) oplus \\left(K_0 cap K_-\\right)$, that elements of this neutral space have zero Hilbert inner product with any element of $K$, and that the Hilbert inner product is positive semi-definite. It therefore induces a positive definite inner product (also denoted $\\left(cdot,,cdot\\right)$) on the quotient space $ilde\\left\\{K\\right\\} stackrel\\left\\{mathrm\\left\\{def\\left\\{=\\right\\} K / K_\\left\\{00\\right\\}$, which is the direct sum of $ilde\\left\\{K\\right\\}_pm stackrel\\left\\{mathrm\\left\\{def\\left\\{=\\right\\} K_pm / \\left(K_0 cap K_pm\\right)$. Thus $\\left( ilde\\left\\{K\\right\\},,\\left(cdot,,cdot\\right)\\right)$ is a Hilbert space (given a suitable topology).\n\nProperties and applications\n\nKrein spaces arise naturally in situations where the indefinite inner product has an analytically useful property (such as Lorentz invariance) which the Hilbert inner product lacks. 
It is also common for one of the two inner products, usually the indefinite one, to be globally defined on a manifold and the other to be coordinate-dependent and therefore defined only on a local section.\n\nIn many applications the positive semi-definite inner product $\\left(cdot,,cdot\\right)$ depends on the chosen fundamental decomposition, which is, in general, not unique. But it may be demonstrated (e. g., cf. Proposition 1.1 and 1.2 in the paper of H. Langer below) that any two metric operators $J$ and $J^prime$ compatible with the same indefinite inner product on $K$ result in Hilbert spaces $ilde\\left\\{K\\right\\}$ and $ilde\\left\\{K\\right\\}^prime$ whose decompositions $ilde\\left\\{K\\right\\}_pm$ and $ilde\\left\\{K\\right\\}^prime_pm$ have equal dimensions. Although the Hilbert inner products on these quotient spaces do not generally coincide, they induce identical square norms, in the sense that the square norms of the equivalence classes $ilde\\left\\{k\\right\\} in ilde\\left\\{K\\right\\}$ and $ilde\\left\\{k\\right\\}^prime in ilde\\left\\{K\\right\\}^prime$ into which a given $k in K$ falls are equal. All topological notions in a Krein space, like continuity, closed-ness of sets, and the spectrum of an operator on $ilde\\left\\{K\\right\\}$, are understood with respect to this Hilbert space topology.\n\nIsotropic part and degenerate subspaces\n\nLet $L$, $L_\\left\\{1\\right\\}$, $L_\\left\\{2\\right\\}$ be subspaces of $K$. The subspace $L^\\left\\{ \\left[perp\\right] \\right\\} stackrel\\left\\{mathrm\\left\\{def\\left\\{=\\right\\} \\left\\{ x in K : langle x,,y angle = 0$ for all $y in L \\right\\}$ is called the orthogonal companion of $L$, and $L^\\left\\{0\\right\\} stackrel\\left\\{mathrm\\left\\{def\\left\\{=\\right\\} L cap L^\\left\\{ \\left[perp\\right] \\right\\}$ is the isotropic part of $L$. If $L^\\left\\{0\\right\\} = \\left\\{0\\right\\}$, $L$ is called non-degenerate; otherwise it is degenerate. If $langle x,,y angle = 0$ for all $x in L_\\left\\{1\\right\\},,, y in L_\\left\\{2\\right\\}$, then the two subspaces are said to be orthogonal, and we write $L_\\left\\{1\\right\\} \\left[perp\\right] L_\\left\\{2\\right\\}$. If $L = L_\\left\\{1\\right\\} + L_\\left\\{2\\right\\}$ where $L_\\left\\{1\\right\\} \\left[perp\\right] L_\\left\\{2\\right\\}$, we write $L = L_\\left\\{1\\right\\} \\left[+\\right] L_\\left\\{2\\right\\}$. If, in addition, this is a direct sum, we write $L= L_\\left\\{1\\right\\} \\left[dot\\left\\{+\\right\\}\\right] L_\\left\\{2\\right\\}$.\n\nPontrjagin space\n\nIf $kappa := min \\left\\{ dim K_\\left\\{+\\right\\}, dim K_\\left\\{-\\right\\} \\right\\} < infty$, the Krein space $\\left(K, langle cdot,,cdot angle, J\\right)$ is called a Pontrjagin space or $Pi_\\left\\{kappa\\right\\}$-space. (Conventionally, the indefinite inner product is given the sign that makes $dim K_\\left\\{+\\right\\}$ finite.) In this case $dim K_\\left\\{+\\right\\}$ is known as the \"number of positive squares\" of $langle cdot,,cdot angle$. Pontrjagin spaces are named after Lev Semenovich Pontryagin.\n\nLiterature\n\n* Bognár, J. : \"Indefinite inner product spaces\", Springer-Verlag, Berlin-Heidelberg-New York, 1974, ISBN 3-540-06202-5.\n* Springer \"Encyclopaedia of Mathematics\" entry for \"Krein space\", contributed by H. Langer (http://eom.springer.de/k/k055840.htm)\n* Azizov, T.Ya.; Iokhvidov, I.S. : \"Linear operators in spaces with an indefinite metric\", John Wiley & Sons, Chichester, 1989, ISBN 0-471-92129-7.\n* Langer, H. 
: \"Spectral functions of definitizable operators in Krein spaces\", Functional Analysis Proceedings of a conference held at Dubrovnik, Yugoslavia, November 2-14, 1981, Lecture Notes in Mathematics, 948, Springer-Verlag Berlin-Heidelberg-New York, 1982, 1-46, ISSN 0075-8434.\n\nReferences\n\nWikimedia Foundation. 2010.\n\n### Look at other dictionaries:\n\n• Indefinite orthogonal group — In mathematics, the indefinite orthogonal group, O( p , q ) is the Lie group of all linear transformations of a n = p + q dimensional real vector space which leave invariant a nondegenerate, symmetric bilinear form of signature ( p , q ). The… …   Wikipedia\n\n• Minkowski space — A diagram of Minkowski space, showing only two of the three spacelike dimensions. For spacetime graphics, see Minkowski diagram. In physics and mathematics, Minkowski space or Minkowski spacetime (named after the mathematician Hermann Minkowski)… …   Wikipedia\n\n• Pseudo-Euclidean space — A pseudo Euclidean space is a finite dimensional real vector space together with a non degenerate indefinite quadratic form. Such a quadratic form can, after a change of coordinates, be written as : q(x) = left(x 1^2+cdots + x k^2 ight) left(x… …   Wikipedia\n\n• List of mathematics articles (I) — NOTOC Ia IA automorphism ICER Icosagon Icosahedral 120 cell Icosahedral prism Icosahedral symmetry Icosahedron Icosian Calculus Icosian game Icosidodecadodecahedron Icosidodecahedron Icositetrachoric honeycomb Icositruncated dodecadodecahedron… …   Wikipedia\n\n• Иохвидов, Иосиф Семёнович — Иосиф Семёнович Иохвидов Дата рождения …   Википедия\n\n• Split-complex number — A portion of the split complex number plane showing subsets with modulus zero (red), one (blue), and minus one (green). In abstract algebra, the split complex numbers (or hyperbolic numbers) are a two dimensional commutative algebra over the real …   Wikipedia\n\n• Eigenvalues and eigenvectors — For more specific information regarding the eigenvalues and eigenvectors of matrices, see Eigendecomposition of a matrix. In this shear mapping the red arrow changes direction but the blue arrow does not. Therefore the blue arrow is an… …   Wikipedia\n\n• Linear map — In mathematics, a linear map, linear mapping, linear transformation, or linear operator (in some contexts also called linear function) is a function between two vector spaces that preserves the operations of vector addition and scalar… …   Wikipedia\n\n• Definite bilinear form — In mathematics, a definite bilinear form is a bilinear form B over some vector space V (with real or complex scalar field) such that the associated quadratic form is definite, that is, has a real value with the same sign (positive or negative)… …   Wikipedia\n\n• Orthogonal group — Group theory Group theory …   Wikipedia" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8278283,"math_prob":0.99879795,"size":8914,"snap":"2021-04-2021-17","text_gpt3_token_len":2041,"char_repetition_ratio":0.1674523,"word_repetition_ratio":0.014316392,"special_character_ratio":0.209558,"punctuation_ratio":0.11938383,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.999899,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-16T06:40:35Z\",\"WARC-Record-ID\":\"<urn:uuid:3be3ed38-0a35-43d9-ba56-e485d863c22e>\",\"Content-Length\":\"61423\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:dfcd51b8-038a-4caa-8631-6c8bae110454>\",\"WARC-Concurrent-To\":\"<urn:uuid:2ada61b9-98c9-4e09-b053-d44d38e1136d>\",\"WARC-IP-Address\":\"35.175.60.16\",\"WARC-Target-URI\":\"https://enacademic.com/dic.nsf/enwiki/2770587/Indefinite\",\"WARC-Payload-Digest\":\"sha1:GHWHMCELYUO7SLEUP5UCMENUQV5O7GFR\",\"WARC-Block-Digest\":\"sha1:UD7K5ZZIQX467SIUODWWK2SAFGDSN7W7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703500028.5_warc_CC-MAIN-20210116044418-20210116074418-00394.warc.gz\"}"}
http://cvgmt.sns.it/paper/4334/
[ "# Sharp geometric inequalities for closed hypersurfaces in manifolds with nonnegative Ricci curvature\n\ncreated by fogagnolo on 21 Jun 2019\n\n[BibTeX]\n\npreprint\n\nInserted: 21 jun 2019\n\nYear: 2018\n\nArXiv: 1812.05022 PDF\n\nAbstract:\n\nIn this paper we consider complete noncompact Riemannian manifolds $(M, g)$ with nonnegative Ricci curvature and Euclidean volume growth, of dimension $n \\geq 3$. We prove a sharp Willmore-type inequality for closed hypersurfaces $\\partial \\Omega$ in $M$, with equality holding true if and only if $(M{\\setminus}\\Omega, g)$ is isometric to a truncated cone over $\\partial\\Omega$. An optimal version of Huisken's Isoperimetric Inequality for $3$-manifolds is obtained using this result. Finally, exploiting a natural extension of our techniques to the case of parabolic manifolds, we also deduce an enhanced version of Kasue's non existence result for closed minimal hypersurfaces in manifolds with nonnegative Ricci curvature.\n\nCredits | Cookie policy | HTML 5 | CSS 2.1" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.82483625,"math_prob":0.95083964,"size":778,"snap":"2020-10-2020-16","text_gpt3_token_len":193,"char_repetition_ratio":0.09948321,"word_repetition_ratio":0.0,"special_character_ratio":0.21465296,"punctuation_ratio":0.1,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96385646,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-02-24T14:40:43Z\",\"WARC-Record-ID\":\"<urn:uuid:b6da5d21-c059-4c63-be6c-c18d380a3027>\",\"Content-Length\":\"4621\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9cce9b4a-bbd7-4345-b992-fc93821a8c61>\",\"WARC-Concurrent-To\":\"<urn:uuid:5529398c-3e10-44a7-8888-fad3ed89339b>\",\"WARC-IP-Address\":\"192.167.206.42\",\"WARC-Target-URI\":\"http://cvgmt.sns.it/paper/4334/\",\"WARC-Payload-Digest\":\"sha1:UXC64L2TOXO5IXKIFJJUND524ZRLVOTM\",\"WARC-Block-Digest\":\"sha1:R7NQSRB4LADI5DLQQZLXNHOFKWAD5E6S\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-10/CC-MAIN-2020-10_segments_1581875145960.92_warc_CC-MAIN-20200224132646-20200224162646-00162.warc.gz\"}"}
https://discuss.codechef.com/t/how-to-solve-covdsmpl-june-long-challenge-problem-optimally/68711
[ "", null, "# How to solve COVDSMPL (June long challenge problem) optimally\n\nI tried to solve by randomly selecting block from the matrix on the basis of ‘p’ and then if i get answer 0 , i mark all positions in that block .\n\nFinally i queried for each point in the matrix if it does not lie on the block found on previous step.\n\nI did not got any improvement .\n\n5 Likes\n\nI can tell you my approach which is not optimal but can be optimised more I believe !!\nby observing the function of scoring I came to conclusion that by asking bigger sub matrix will be more optimal.So I tried asking for 4 bigger sub matrix and find my answer by subtracting and adding them.These steps I repeated for all the blocks which are not computed previously.\nthis approach gave 1.3 pts something in div1 which is quite low", null, "3 Likes\n\nOnce you have a prefix sum matrix of the original matrix, you can easily construct it. This was probably the most common approach (after querying each cell of course). One small optimization on top of that would be to construct the prefix sum matrix itself by binary search. Still… 1.73 pts in division 2. After that, I was distracted into other questions, so that would be it", null, "1 Like\n\nI queried 1-1, 1- 2, … n-n and computed the actual value accordingly with some little optimizations, but it gave only 1.8 pts.\nMy submission : https://www.codechef.com/viewsolution/34204866\n\nTried inclusion-exclusion principle,got only 1.80\n\n@kk2_agnihotri I also had a similar approach wherein i computed each row count by computing biggest possible matrix and did that for all columns, but could only manage with 3.16 points in Div 2. I believe that some optimizations like checking while querying a particular row that if the no. of 1’s encountered equals the row count then to move to the next row and do the same for coloumns can improve score!\n\nSimplest approach I think just iterate over n*n matrix and query for sub matrices of 1 size… Amd check for answer to thr query…\nFurther you can optimise the solution bh keeping total counts of 1’s in whole matrix and keep track of how many times 1 occur and breaks the loop when count become zero.\nI didn’t get Ac but it was giving WA\n\nHere is my approach. (44/100)\n\nfirst query for whole matrix get total 1s\nnow find rows and colms in following way.\nquery for row 2 to n and colms 1 to n subtract from tot got row1( add it to sum )\nquery for row 3 to n and colms 1 to n row2=total-query-sum (add row 2 to sum)\nrepet untill n/2 and for remaning n/2 start from reverse (n to n/2)\n\nsame for colms.\n\ntry to fill matrix as much as you can\nif row[i]=0 fill whole row with 0 or row[i]=remaining in row[i] fill whole with 1s\n\nLast step.\n\nstart from left corner in following way\nyou can get sum of all rows under element by summing all rows found earlier same for colms right to element\nand can also get sum of processed part(as we started from left corner) by adding all ones from 1 to i and 1 to j\nthen query for common green box (i+1 to n and j+1 to n) as shown in figure and ans\nis a[i][j]=tot-sum(rows,i+1,n)-sum(cols,j+1,n)-ones in matrix (1 to i)row and (1 to j) cols) +query for common part.\n\n5 Likes\n\nI tried binary search at every row.\n\n1 Like\n\nit can be further improved by filling matrix such that common box is as large as possible\ni.e.\ndivide matrix in 4 parts from mid and fill one by one\nfirst 1 to n/2 and 1 to n/2\nsecond 1 to n/2 and n to n/2 (or n/2 to n)\nthird n/2 to n and 1 to n/2\n\nI got about 28 points. 
Here are the main ideas:\n\n• Consider rows and columns in increasing order, at each step try to compute the cell value.\n• For cases when row <= n/2 and col <= n/2, query (row,col,n,n), (row+1,col,n,n),(row,col+1,n,n) and (row+1,col+1,n,n). Combination of these 4 values easily gets the cell value at (row,col). Once the query is made for some rectangle, keep it in memory so that we don’t send query for it again.\n• For other quadrants (row>n/2, col<n/2 etc) choose the rectangles appropriately, to make them as large as possible.\n• Note that the highest cost will generally occur when p = 2. Nice thing for p=2 is that there are very few ones. To take advantage of this, at the beginning of each row compute the row sum (by querying (row, 1, n/2, n) and (row+1,1,n/2,n) and when they are equal, skip first n/2 columns of this row. This will happen about 55% of the time. If not, solve each quadrant the same way. Do similar trick for the columns n/2 + 1, …, n.\n• Another optimization (although didn’t have time to submit this) can be to split initial matrix into blocks of size b and do something like: for cell (r,c), query (r,c,n,n), (r,c+b,n,n), (r+b,c,n,n) and (r+b,c+b,n,n). This gives us sum in square (r,c,r+b,c+b). If it’s 0, we can skip all these cells in future. The value of b can be chosen depending on p. b=5 works well for p=2 and b=2 works better for p>=5.\n\nHope this helps", null, "6 Likes\n\nUsing a 2D dichotomy over the entropy and using some deduction rules I got the best score in the 2nd division. The main idea is to construct a recursive function `f` that compute a submatrix of A by dividing it in 4 submatrices and calling f over this 4 new submatrices. Before calling `f` on a rectangle (r1, c1, r2, c2) you need to compute the number of ones in all rectangles (1, 1, i, c1), (1, 1, i, c2), (1, 1, r1, j) and (1, 1, r2, j) for r1 \\leqslant i \\leqslant r2 and c1 \\leqslant j \\leqslant c2. https://github.com/Nanored4498/CodeChef/blob/master/Chall_06_20/COVDSMPL.cpp\n\n13 Likes\n\nI never thought that P can be useful", null, "it is nice idea to skip the n/2 part.\n\nI finally got 78 points in div1.\n\nMy stupid method is to preprocess the sum of each column and then ask each row in half. If this sum is 0 or the sum is the area, then you can directly fill in the number.\n\nNote that each time you ask about a rectangle, choose a large rectangle with one of the four corners as its vertex to ask about it, and should reasonably use the previous inquiry results to infer.\n\nIn this way, about 60 points can be reached.\n\n3 Likes\n\nBrother, you are my idol", null, "", null, "1 Like\n\nWait What? Binary Search? can you please elaborate or at least post your solution friend.\n\nOkay, ~70 pt (div. 1) solution here:\n\nIn basic terms, I’m simply recursively traversing the entire matrix. If the current submatrix has only 0’s or only 1’s, I exit the current iteration. Else I split into two halves and go deeper.\n\nHowever without optimizations this gets < 1 pt.\n\nSeveral possible ways to make this cost less:\n\n1. As @kk2_agnihotri correctly pointed out, bigger matrices cost less, so when I receive an order to query a certain submatrix, I try several ways to find the sum in it through other sums. For example, by doing a prefix-sum-like (or suffix-sum-like) technique. I also try calculating the entire horizontal strip, extended down (or up) and then simply subtract the extra elements. Same with vertical strips. 
I calculate the expected cost of each of these options and simply choose the cheapest.\n\n2. It is possible to keep already calculated queries in a map so as not to do unnecessary work. I set the expected value for such a query (that has already been asked before) as 0, however in earlier versions I used numbers all the way down to -200 (well, I was kinda experimenting with constants as well).\n\n3. Expanding on the idea in (1), we can make the following improvement. Say we have a submatrix whose sum we are about to ask the judge. We notice that if the horizontal strip right above it is already calculated, we can add it to the submatrix and simply subtract the sum from the final answer. The cost of the query will thus only decrease. Same with the horizontal strip below and the two vertical strips to either side.\n\n4. Another, but as far as I can remember, insignificant improvement. If the submatrix about to be sent to the judge can be split into two submatrices with already calculated sums, we can return the sum of the two sums and avoid asking anything altogether.\n\nIn the earlier submissions I had been also trying to calibrate various constants (which worked btw!), but in the highest-scored submission there is absolutely no calibration (actually that only worsens the performance, lol…)\n\nI’ve looked at several other top-scoring submissions. Their authors use Fenwick tree to calculate sums. I just did brute-force, I didn’t really need any fast algorithms, and most of my submissions actually had a runtime of 0.00 (before the part when I included point 4).\n\nHope I explained this more or less clear, and yes, the thought process was not linear, I came to those ideas in a week in total, so it wasn’t like I suddenly thought of something and got 70 points. I was rather close to the upper limit of submissions btw", null, "1 Like\n\nRank1 of div1 is 28w", null, "", null, "Oh sorry it happened in last two hours\nI didn’t observed latest.\n\n2 Likes\n\nI also have a doubt, has anyone tried probabilistic algorithms, such as Bayesian statistics?\n\n1 Like" ]
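Several of the approaches in this thread (the prefix-sum answer, the 28-point post, and the corner-rectangle idea in the last detailed post) rest on the same inclusion–exclusion identity: if Q(i, j) is the answered sum over the corner rectangle (i, j)–(n, n), then a[i][j] = Q(i, j) − Q(i+1, j) − Q(i, j+1) + Q(i+1, j+1). The sketch below is an offline demonstration only; the real problem is interactive and every query has a cost, so here the judge is mocked with a locally known matrix and all names are illustrative.

```cpp
#include <iostream>
#include <vector>

// Offline mock of the judge: sum of the hidden matrix over rows i..n-1, cols j..n-1.
// In the real interactive problem this would be one paid query.
long long cornerQuery(const std::vector<std::vector<int>>& a, int i, int j) {
    long long s = 0;
    for (std::size_t r = i; r < a.size(); ++r)
        for (std::size_t c = j; c < a.size(); ++c)
            s += a[r][c];
    return s;
}

int main() {
    // Hidden 0/1 matrix (known here only to the mocked judge).
    std::vector<std::vector<int>> hidden = {
        {0, 1, 0, 0},
        {1, 0, 0, 1},
        {0, 0, 1, 0},
        {0, 1, 0, 0},
    };
    const int n = static_cast<int>(hidden.size());

    // Q(i, j) = sum of the corner rectangle, treated as 0 outside the grid.
    auto Q = [&](int i, int j) -> long long {
        return (i >= n || j >= n) ? 0 : cornerQuery(hidden, i, j);
    };

    // Inclusion-exclusion recovers each cell from four corner queries.
    for (int i = 0; i < n; ++i) {
        for (int j = 0; j < n; ++j)
            std::cout << (Q(i, j) - Q(i + 1, j) - Q(i, j + 1) + Q(i + 1, j + 1)) << ' ';
        std::cout << '\n';     // prints the hidden matrix back
    }
    return 0;
}
```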
[ null, "https://s3.amazonaws.com/discourseproduction/original/3X/7/f/7ffd6e5e45912aba9f6a1a33447d6baae049de81.svg", null, "https://discuss.codechef.com/images/emoji/apple/joy.png", null, "https://discuss.codechef.com/images/emoji/apple/grin.png", null, "https://discuss.codechef.com/images/emoji/apple/slight_smile.png", null, "https://discuss.codechef.com/images/emoji/apple/sweat_smile.png", null, "https://discuss.codechef.com/images/emoji/apple/kissing_heart.png", null, "https://discuss.codechef.com/images/emoji/apple/kissing_heart.png", null, "https://discuss.codechef.com/images/emoji/apple/slight_smile.png", null, "https://discuss.codechef.com/images/emoji/apple/shushing_face.png", null, "https://discuss.codechef.com/images/emoji/apple/shushing_face.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.95338297,"math_prob":0.9096442,"size":2552,"snap":"2020-34-2020-40","text_gpt3_token_len":557,"char_repetition_ratio":0.101255886,"word_repetition_ratio":0.0,"special_character_ratio":0.21786834,"punctuation_ratio":0.09861933,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98752093,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,2,null,2,null,null,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-23T17:24:57Z\",\"WARC-Record-ID\":\"<urn:uuid:6563f94c-9138-48e8-87ab-48a31d40582f>\",\"Content-Length\":\"67883\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0e7aeea4-1f0f-4196-bb7f-3e6e42b8129d>\",\"WARC-Concurrent-To\":\"<urn:uuid:586fc962-8469-43b7-ba53-2ba36d08ab20>\",\"WARC-IP-Address\":\"18.213.158.143\",\"WARC-Target-URI\":\"https://discuss.codechef.com/t/how-to-solve-covdsmpl-june-long-challenge-problem-optimally/68711\",\"WARC-Payload-Digest\":\"sha1:IZLKCV2XERSZXYO4OTZ4RODI2IMMIL7M\",\"WARC-Block-Digest\":\"sha1:2HFBEWKSFMONLM7AA4YHTFWADRJLWUHA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400211096.40_warc_CC-MAIN-20200923144247-20200923174247-00407.warc.gz\"}"}
https://gamedev.stackexchange.com/questions/95909/how-to-move-dashes-towards-target/95910
[ "# How to move dashes towards target?\n\nWould you know how to create a dash line that moves towards the direction of the target, and if it collides with a wall, goes to the opposite angle? I have seen atan2 but I am not sure what it does, and I am quite confused about the movement of the dashed line. A point can be found with cos() and sin() depending on a radius... should I change the radius to place each dash? Then how to move them with the same speed?\n\nExample here :", null, "Any advice would be much appreciated.\n\nThanks\n\nYou get the path the same way you'd move the object when you shoot it. Just have a tight loop that simulates the movement of the object and keep track of the position every so often. Now you have a list of positions, if you draw a dot at each position, you have a dotted line the represents the path of the object if it were to be shot from that angle.\n\n• Thanks, \"simulates the movement\"... would you have a small example? – Paul Mar 2 '15 at 1:27\n• You simulate the movement. Take the position of the object, and in each iteration of the loop, add the velocity to the position based off of whatever your frame timestep is, check for collisions, etc. Essentially, each iteration of the loop is going to reproduce whatever your update loop does. – MichaelHouse Mar 2 '15 at 3:14\n• In essence, for the example given, stop thinking about the line as a line (that's it's function but not what it is), start thinking about it as a procession of spheres, all launched from the bottom of the screen, which travel for a fixed distance with a fixed speed and collide / rebound from the sides. – xan Mar 2 '15 at 9:37\n• Thanks Byte56, thanks @xan , I get the point now, moving spheres like objects in the game, instead of drawing a dash line. – Paul Mar 2 '15 at 9:49\n• You have all the parts. Just solve for distance: distance = veclocity*time. In this case, time is your timestep 0.02. Velocity you have set to some constant I imagine. Now that you have distance, you just add that distance to your position each iteration: position += velocity*timestep; – MichaelHouse Mar 2 '15 at 19:35\n\nByte56's answer is very good, especially for the example image given where simulating the movement of each \"ball\" in the line will work well. I'll give you an alternative idea however which might work better, or might be easier to implement if you are trying to work with a dashed line (with or without animation), something like -- -- -- --\n\n1. Calculate the angle at which your dashed line is aimed, and the distance (T) it should extend from the start (call it point S).\n2. Check for an intersection with the wall(s) you have present. There are lots of ways to do this, for example see this question.\n3. If there is no intersection, simply draw your line with whatever tools you use in your engine.\n4. If there is an intersection (call it point I):\n1. Draw the first section of your line between the start point and the intersection point SI, as in 3.\n2. Calculate the angle for the 2nd line segment by reflecting it in the surface you have intersected (see for example this question.\n3. Calculate the remaining line distance (T - SI)\n4. Draw the remaining line segment from point I with the appropriate angle.\n5. Repeat 2 - 4 if more intersections are possible.\n\nAs for animation in this case, that heavily depends on how you are drawing the line. 
If you are using a \"dashed\" texture you may be able to achieve this by:\n\n• Tiling / Repeating the texture along the length of the line and then \"animating\" / adjusting the texture offsets each frame such that you achieve the illusion of the dashes moving along the line.\n\nOtherwise, if using vectors etc.\n\n• By similarly drawing the individual dashes based on some \"offset\" from the beginning of the line, and then moving this offset over time." ]
[ null, "https://i.stack.imgur.com/dStKK.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9338638,"math_prob":0.90021724,"size":1682,"snap":"2020-10-2020-16","text_gpt3_token_len":365,"char_repetition_ratio":0.13349225,"word_repetition_ratio":0.0,"special_character_ratio":0.22829965,"punctuation_ratio":0.07936508,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9619708,"pos_list":[0,1,2],"im_url_duplicate_count":[null,6,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-01T18:02:44Z\",\"WARC-Record-ID\":\"<urn:uuid:2fea9c32-a679-4bf3-bba2-40de161f0d08>\",\"Content-Length\":\"157093\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d15e5a99-28b9-4b05-b9cf-38f6b054ad77>\",\"WARC-Concurrent-To\":\"<urn:uuid:fa52065d-3c0b-427d-90cb-ba67629830aa>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://gamedev.stackexchange.com/questions/95909/how-to-move-dashes-towards-target/95910\",\"WARC-Payload-Digest\":\"sha1:XVS5ZI46YCB6ORUFGZOUUWJUHR4ZIUR5\",\"WARC-Block-Digest\":\"sha1:OX7N6ICNRRNW4ZAMOH5HH4RSEQHF7LCN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585370505826.39_warc_CC-MAIN-20200401161832-20200401191832-00334.warc.gz\"}"}
https://softoption.us/node/172
[ "# Tutorial 2 Identity: Functional Terms and Rules of Inference\n\nLogical System\n\n### Skills to be acquired in this tutorial:\n\nTo learn about functional terms and two new Rules of Inference II (Identity Introduction) and IE (Identity Elimination).\n\n### Why this is useful:\n\nReasoning with identity is vital for mathematics, philosophy, and many other areas.\n\n### The Tutorial:\n\n#### An aside on functions and some universal abilities of human beings\n\nConsider the statement that Allison’s only brother’s wife’s only sister is her very best friend. There is barely a person on Earth who does not understand what this means. Most everyone also knows full well what it is for the assertion to be true.\n\nThe example is interesting because it illustrates instantiation of the use and understanding of high level concepts which seem to be universal. These include partial functions, function application or compositions, constructions, types, and the distinction between a construction and that which the construction constructs.\n\nBeing an only brother, wife, only sister, etc. are all partial functions. Functions carry one thing to another, and partial functions are functions which might not defined everywhere. Being a only brother, wife, only sister, etc. are partial functions with types-- they are defined on humans (or, perhaps, animals) and definitely not on cars or tables or trees.\n\n'Construction' is a term coined by Pavel Tichy (). It means 'way of arriving at, usually by means of functions and function abstractions, compositions and applications', so the expression 'Allison only brother’s wife’s only sister' refers to a construction. Further, suppose that all the various partial functions are properly defined, with suitable uniqueness for the arguments,  so that the construction via the (only) brother function etc. picks someone, say Flora, and, supposing the assertion to be true, the other construction, via the very best friend partial function, also constructs Flora; we are all very well aware that the two constructions and that which they construct, Flora, are three different things. [Many of us would also realize, with reflection, that there is universal widespread systematic ambiguity between constructions and that which they construct. But that is another story, not entered into here.] So,\n\nAllison brother’s wife’s sister is a construction, and Flora is what it constructs.\n\n[Allison's] very best friend is a construction, and Flora is what it constructs.\n\nAnd what Allison’s brother’s wife’s sister is her very best friend is telling us is that the two different constructions construct the same thing (and notice, in particular, that it is not telling us that Flora is Flora).\n\n#### Back to the main theme\n\nThe word 'terms' in logic means 'names' and thus far we have met two kinds of terms: constants (or proper names), and variables.\n\nThese are not the only names there are. In English consider 'the father of Betty' -- now, this is not a statement, it does not say something true or false, rather it is a construction which names an individual. It takes Betty and what we might call the 'father of' function and uses the two of those to name an individual. 
Similarly in mathematics, the expression 7 + 2 does not assert something true or false, rather it is a construction which names a number (which also happens to have the name 9).\n\nThe logic we are studying, so called 'First order logic', can accommodate these fancy terms, and it does so by means of 'functional terms'.\n\nHere is a characterization of our syntax for functional terms\n\n• any constant is a functional term.\n• any lower case letter followed by ( followed by any number of functional terms followed by ) is a functional term.\n• and these are all the functional terms there are.\n\nFor example, a, b, c, f(a), g(b), g(ac), h(g(c)b) ... are all functional terms.\n\nTo symbolize, say,\n\n'The father of Betty is kind'\n\nwe choose, say, f(x) to represent the 'father of x' function, b to represent Betty, and Kx to represent the predicate 'x is kind' and the whole lot becomes:\n\nKf(b)\n\nThis same process is extended to arguments. So the argument\n\nEverything is kind.\nTherefore,\nBetty's father is kind.\n\ncould be\n\n(∀x)Kx ∴ Kf(b)\n\nAnd, of course, this is valid and the derivation is simple just using Universal Instantiation.\n\nFunctional terms often appear in identities. For example\n\nAllison’s brother’s wife’s sister is her very best friend.\n\nmight be symbolized\n\ns(w(b(a)))=f(a)\n\nwhere a=Allison, b(x)= is the brother of x, s(x)= is the sister of x, w(x)= is the wife of x, f(x)= is the very best friend of x,\n\nFunctional terms have an 'arity', which, roughly, is the number of arguments they apply to. Remember with predicates you can have one place predicates like Fx, and two place predicates, or relations, like Txy (and three place...). So too with functional terms. The 'father of' function is of arity one-- it expects one term as an argument. In mathematics there are plenty of functions of higher arity. For example, the plus function usually expects two arguments, so 'plus x y' might be represented p(xy).\n\nActually, there is another small issue that comes up here. The way we write functional terms is to write the function first, followed by the arguments. In math, a fair few functions are 'infix', that is to say they are written between their arguments instead of before them-- mathematicians generally write (1 + 2) not +(1 2).\n\n[For your information, Deriver can be configured to do formal arithmetic (which is too advanced for us just at the moment) and if so configured it is familiar with the following conventions:-\n\nIt knows about + ('plus') and . (times) and ' (successor of), and 0,1, 2. + and . are infix and successor is postfix (it comes after its argument) and 0,1, 2 are just the constants 0, 1, 2. 'successor of' just means the number plus one (so the successor of 3 is 4 etc.) Deriver can read an expression like the following K(0+0') and what this says is that '0 plus the successor of 0 has the property K'.\n\nNone of these extended capabilities are of immediate interest to us.]\n\nOur Predicate Logic is also extended by having two new rules, one for introducing and the other eliminating identity. In Deriver, you will find these rules under the Advanced Menu.\n\n#### The Identity Introduction rule, labeled '=I',\n\nallows you to bring in\n\n<term>=<same term>\n\nfor any term that you like.\n\nSo, to give several examples, you can have\n\na=a\nf(hj)=f(hj)\ng(b(c))=g(b(c))\n\nall justified by the rule =I.\n\n#### The Identity Elimination rule, labeled '=E',\n\nis the one that allows you to substitute identical things, one for another. 
And you use the elimination rule with one identity, say s=t, and one other formula, say F-- you are allowed to substitute chosen occurrences of s in F with t (and chosen occurrences of t in F with s). Often the formula F is itself a statement of identity-- perhaps one formula is a=b and the other b=c; in these cases there are lots of opportunities for substituting. Here are several examples (lines 4,5,6, and 8).", null, "In one strict form of the rule, substitutions are permitted only within atomic formulas (or their negations). But this can lead to long derivations, with taking a compound formula apart, making a substitution, then putting it back together again.\n\nTo simplify issues here we permit substitutions into compound formulas, so, for example a=b can be used to substitute into (∀x)(Kx⊃Fa) .But now we have to go careful with free, bound, free for and capturing (which you are familiar with from Tutorial 24 in Easy Deriver https://softoption.us/node/541). So, there is one restriction. No variable that occurs in the terms of the identity, s or t, say, can be bound in the <formula> you are substituting into. So, with f(x)=g(y,z) and (∀m)(Ff(x)m)∧(∃z)(g(y,z)=a) you are not allowed to substitute because the variable z occurs in the identity and is bound in the formula.\n\n## Exercise to accompany Identity Tutorial 2 .\n\n### Exercise 1(of 3)\n\nFormalize and then derive the following valid argument.\n\na=Ann\nf(x)= father of X\nPx= X is a friendly person\n\nEvery friendly person's father is friendly.\nAnn is a friendly person.\nTherefore,\nAnn's father is friendly.\n\n### Exercise 2(of 3)\n\nProofs\n\nDerive the following\n\na) Gab, a=b ∴ Gaa∧Gbb\nb) a=b,b=c ∴ a=c∧c=a\nc) (∀x)Kx ∴ Kf(b)\nd) (∀x)(Fx⊃ Ff(x)),Fa ∴ Ff(a)\n\n### Exercise 3(of 3)\n\nProofs\n\nIdentity has some general properties, namely, i) everything is identical to itself, ii) if a = b, then b = a, and iii) if a=b and b=c, then a=c. These are known as reflexivity, symmetry, and transitivity, and they can be enshrined into three theorems (which you can prove).\n\nTheorem 1\n∴ (∀x)(x=x)\nTheorem 2\n∴ (∀x)(∀y)((x=y)⊃(y=x))\nTheorem 3\n∴ (∀x)(∀y)(∀z)(((x=y)∧(y=z))⊃(x=z))\n\nIf you decide to use the web application for the exercises you can launch it from here Deriver [Gentzen] — username 'logic' password 'logic'. Then either copy and paste the above formulas into the Journal or use the Deriver File Menu to Open Web Page with this address https://softoption.us/test/Deriver/CombinedTutorialsGentzen.html .\n\nPreferences\n\nYou will need to set some Preferences for this.\n\n• set identity to true (and that will give you the identity rules)\n• set firstOrder to true (to get first order theories)\n• and  you can check that the parser is set to gentzen." ]
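To see =I and =E working together, here is one informal way Exercise 2(b), a=b, b=c ∴ a=c∧c=a, might be derived. The layout and rule labels are only indicative (∧I is the usual conjunction-introduction rule); Deriver's Gentzen-style output will look somewhat different.

```
1. a=b          Premise
2. b=c          Premise
3. a=c          1,2 =E   (use 2 to replace b with c in 1)
4. a=a          =I
5. c=a          3,4 =E   (use 3 to replace the first a in 4 with c)
6. a=c∧c=a      3,5 ∧I
```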
[ null, "https://softoption.us/files/images/IdentityNew.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9128098,"math_prob":0.9368584,"size":9190,"snap":"2023-40-2023-50","text_gpt3_token_len":2219,"char_repetition_ratio":0.13161333,"word_repetition_ratio":0.029296875,"special_character_ratio":0.23449402,"punctuation_ratio":0.1227126,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98341423,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-11-29T05:11:48Z\",\"WARC-Record-ID\":\"<urn:uuid:a2279e7e-be98-48cf-9974-e15bc9a5c7cd>\",\"Content-Length\":\"29499\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:46a85434-5bbd-40a6-9d4a-a4c703dfeb32>\",\"WARC-Concurrent-To\":\"<urn:uuid:8cef306b-f3a7-4b02-a083-ef99a40cb521>\",\"WARC-IP-Address\":\"68.66.226.84\",\"WARC-Target-URI\":\"https://softoption.us/node/172\",\"WARC-Payload-Digest\":\"sha1:LJ26DIE6K6RDM677KI2OYG6EQQV7PTFV\",\"WARC-Block-Digest\":\"sha1:C6Q3TNW65IOHMNFJR6IN5KIGWKBZYW3G\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100056.38_warc_CC-MAIN-20231129041834-20231129071834-00741.warc.gz\"}"}
http://report.kyobobook.co.kr/view/2007/4463136/
[ "검색어 입력폼\n\n# 알고리즘 2장 연습문제\n\n저작시기 2007.01 | 등록일 2007.04.20 | 최종수정일 2017.07.21", null, "한컴오피스 (hwp) | 3페이지 | 가격 10,000원\n\n## 소개글\n\nFundamentals of Algorithms using C++ Pseudo code\n2장 연습문제입니다.\n\n2장 13번 문제.\n2장 14번 문제.\n2장 15번 문제.\n2장 18번 문제.\n\n## 본문내용\n\n2장 13번 문제.\nWrite an algorithm that sorts a list of n items by dividing it into three sublists or almost n/3 items, sorting each sublist recurively and merging the three sorted sublists. Analyze your algorithm, and give the results using order notation.\n\n2장 14번 문제.\nGiven the recurrence relateion\n\n2장 15번 문제.\nconsider procedure solve(P,I,O) given below. This algorithm solves problem P by finding the output(solution) O corresponding to any input I.\n\n2장 18번 문제.\n\nWhen a divide-and-conquer algorithm divides an instance of size n of a problem into subinstances each of size n/c, the recurrence relation is typically given by\n\n없음" ]
[ null, "https://www.happycampus.com/images/v4/common/file_icons/hwp.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6201348,"math_prob":0.898171,"size":744,"snap":"2020-45-2020-50","text_gpt3_token_len":223,"char_repetition_ratio":0.13648649,"word_repetition_ratio":0.0,"special_character_ratio":0.24462366,"punctuation_ratio":0.11612903,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9791161,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-28T01:28:49Z\",\"WARC-Record-ID\":\"<urn:uuid:a8388ada-d30f-434b-acb6-c0c2737b24c2>\",\"Content-Length\":\"16751\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fe5e5dad-2cb4-4084-af41-dd142a8c1828>\",\"WARC-Concurrent-To\":\"<urn:uuid:f9a93b88-6925-4da9-981a-66d614998558>\",\"WARC-IP-Address\":\"119.205.210.131\",\"WARC-Target-URI\":\"http://report.kyobobook.co.kr/view/2007/4463136/\",\"WARC-Payload-Digest\":\"sha1:CG2FOV56TD3FHBD5P5JWJXLTXUGNEAUR\",\"WARC-Block-Digest\":\"sha1:SODIJXEUL6LD3TX5JWT4W4CS2QNMONFW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107894890.32_warc_CC-MAIN-20201027225224-20201028015224-00110.warc.gz\"}"}
https://reviseomatic.org/help/s-index/Index%20for%20AS%20ELEC1%20and%20ELEC2.php
[ "", null, "RANDOM PAGE\n\nSITE SEARCH\n\nLOG\nIN\n\nHELP\n\n# AS ELEC1 / ELEC2\n\nThis is the AQA version closing after June 2019. Visit the the version for Eduqas instead.\n\n# AQA A Level\n\n## ELEC1\n\n### Introductory Electronics\n\n#### System Synthesis\n\n• Systems and Subsystems\ninput\ninput transducers\nprocess\nprocess subsystems\noutput\noutput transducers\nfeedback ( sometimes )\n• System Diagrams\nprocess boxes\nlines represent information flow\nanalysis of whole systems into major subsystems\nanalysis of subsystems into simpler subsystems\n\n#### Signals\n\n• Signals\nanalogue\ndigital\n\n#### Resistive Transducers\n\n• Switch        ON: R = 0        OFF: R = ∞\nDigital output\nCOM = Common\nNO = Normally open\nNC = Normally closed\n• LDR\nLog graph scales\nCurrent limiting resistor\nVoltage divider\nAnalogue Output\n• Thermistors\nLog graph scales\nCurrent limiting resistor\nVoltage divider\nAnalogue Output\nNegative temperature coefficient\n• Potentiometer\n• Voltage divider\nAnalogue Output\noutput is proportional to angle or position\n\n#### Operational amplifiers ( op amps )\n\n• Summary\n• Ideal Op Amp\nInverting\nNon-inverting input\nPower supply requirements\nOutput voltage swing limitations ( ideal and real )\nSaturation\n• Comparator\nAnalogue Input\nOne Bit Digital Output\nAnalogue to Digital Converter\n\n## ELEC2\n\n### Further Electronics\n\n#### Capacitors\n\n• Capacitors\nstore energy in the form of charge\nblock a direct current\nallow the passage of an alternating current\nunit of capacitance is the Farad\nusually measured in pF, nF and µF\nworking voltage\npolarisation ( electrolytic and tantalum capacitors )\nleakage current\n• In Series       CT = 1 / ( 1 / C1 + 1 / C2 + 1 / C3 + ... )\n• In Parallel    CT = C1 + C2 + C3 ...\n\n#### Resistor Capacitor Timing Circuits\n\n• The Time Constant and RC Timing Circuits\nT = R C\nV = 0.63Vs ( charging )    Vs is the supply voltage\nV = 0.37Vs ( discharging )    Vs is the supply voltage\n50% Charge or Discharge\nT = 0.69 R C\n100% Charge or Discharge\nTt = 5 R C\nsketch voltage / time graphs for charging and discharging\n\n#### 555 Timer\n\n• Inside the 555\n• Monostable\ncircuit diagram\nOne stable state\nT = 1.1 R C\nHow it works\n• Astable\ncircuit diagram\nNo stable states\nHow it works\nLow Time:     tL = 0.7 RB C\n\nHigh Time:     tH = 0.7 (RA + RB) C\n\nFrequency:     f = 1.44 / (RA + 2 RB ) C\n\n#### Sequential Logic\n\n• bistable latch based on NAND gates\noperation\nfunction\n• D-type flip-flop\nsymbol\noperation\nfunction\n• Shift Register\ncircuit diagram\noperation\napplications\n• Counters - Up to 4 bits\nFeedback to make a D-type flip-flop divide by 2\nUp counter\nDown counter\nModulo-N counters\nTiming diagrams\n\n#### Number Systems\n\nconvert a 4-bit binary number to decimal\nconvert a 4-bit binary number to hexadecimal\nBinary Coded Decimal ( BCD )\nBCD decoder for a seven segment display\n\n#### Operational Amplifiers\n\n• ideal op-amp properties\nInput resistance = ∞\nOutput resistance = zero\nGain Bandwidth Product\nThe product of voltage gain and bandwidth is a constant\n• Gain = Vout / Vin\n• Voltage bandwidth is the frequency range where Vout is at least 70% of the maximum\n• Power bandwidth is the frequency range where Vout is at least 50% of the maximum\n• inverting amplifier\ncircuit\nVout = - Vin Rf / Rin\nGain = - Rf / Rin\ninput resistance = Rin ( usually a low value )\nthe inverting input is a Virtual Earth\n• summing amplifier\ncircuit\nVout = - Rf ( V1 / R1 + V2 / R2 + V3 / R3 
)\ninput resistance = Rin ( for each input )\nthe inverting input is a Virtual Earth\n• difference amplifier\ncircuit\nVout = ( V+ - V- ) Rf / R1\ninput resistance = 2 R1 ( usually a low value )\n• non-inverting op-amp amplifier\ncircuit\nVout / Vin = 1 + Rf / Rin\ninput resistance is equal to that of the op-amp\n• voltage follower\ncircuit\napplications\nvoltage gain = 1\ncurrent and power gain can be very large\ninput resistance is equal to the resistance of the op-amp\n• SUMMARY / Key Facts\n\n#### Power Amplifiers\n\n• Power bandwidth the frequency range where Powerout is at least 50% of the maximum\n• Power gain = Pout / Pin\n• MOSFET source followers ( N and P Channel )\nestimate the power dissipated in a source follower\nmethods for removing the excess heat\n• Heat Sinks\nConduction - use a metal like aluminium\nConvection - have a large surface area to let the heated air rise\n• Push Pull Amplifier - MOSFET - BJT\nEstimate the maximum power output\nCompare with a Class A single ended circuit\nCross over distortion is reduced by\nbiasing the MOSFETs with diodes and resistors\nincluding the MOSFETS in the negative feedback path\nClipping, Limiting, Saturation\nThe output is limited by the power supply voltage\nIncrease the voltage but don't exceed the MOSFET ratings.\n\nreviseOmatic V3     Contacts, ©, Cookies, Data Protection and Disclaimers Hosted at linode.com, London" ]
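As a worked example of the 555 astable formulas listed above (component values chosen purely for illustration): with RA = 10 kΩ, RB = 47 kΩ and C = 100 nF,

High time: tH = 0.7 (RA + RB) C = 0.7 × 57 kΩ × 100 nF ≈ 4.0 ms
Low time: tL = 0.7 RB C = 0.7 × 47 kΩ × 100 nF ≈ 3.3 ms
Frequency: f = 1.44 / ((RA + 2RB) C) = 1.44 / (104 kΩ × 100 nF) ≈ 138 Hz

which agrees with f ≈ 1 / (tH + tL). The same substitution style works for the monostable (T = 1.1 R C) and the RC time-constant formulas.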
[ null, "https://reviseomatic.org/v3/sys-images/s-logo.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.71132153,"math_prob":0.9657152,"size":4405,"snap":"2021-31-2021-39","text_gpt3_token_len":1208,"char_repetition_ratio":0.10474892,"word_repetition_ratio":0.08060453,"special_character_ratio":0.23609534,"punctuation_ratio":0.042134833,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99272907,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-23T15:50:36Z\",\"WARC-Record-ID\":\"<urn:uuid:580dfa88-e48a-4d47-bc1d-18d0b624e84d>\",\"Content-Length\":\"22709\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b7dd8e67-45f5-4cd9-83c3-b4fee1de1d50>\",\"WARC-Concurrent-To\":\"<urn:uuid:78eb062e-93b9-4289-ac64-2e1428454e09>\",\"WARC-IP-Address\":\"151.236.220.77\",\"WARC-Target-URI\":\"https://reviseomatic.org/help/s-index/Index%20for%20AS%20ELEC1%20and%20ELEC2.php\",\"WARC-Payload-Digest\":\"sha1:6EWBB7VOISNWGXM3Z67XB6F3DHSGTPOJ\",\"WARC-Block-Digest\":\"sha1:XKS2FBITD33DYMDTJETFNLXYCAVMZHBH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057424.99_warc_CC-MAIN-20210923135058-20210923165058-00253.warc.gz\"}"}
https://www.chemeurope.com/en/encyclopedia/Variational_Monte_Carlo.html
[ "My watch list\nmy.chemeurope.com\n\n# Variational Monte Carlo\n\nVariational Monte Carlo(VMC) is a quantum Monte Carlo method that applies the variational method to approximate the ground state of the system.\n\nThe expectation value necessary can be written in the x representation as", null, "$\\frac{\\langle \\Psi(a) | H | \\Psi(a) \\rangle} {\\langle \\Psi(a) | \\Psi(a) \\rangle } = \\frac{\\int | \\Psi(X,a) | ^2 \\frac{H\\Psi(X,a)}{\\Psi(X,a)} dX} { \\int | \\Psi(X,a)|^2 dX}$.\n\nFollowing the Monte Carlo method for evaluating integrals, we can interpret", null, "$\\frac{ | \\Psi(X,a) | ^2 } { \\int | \\Psi(X,a) | ^2 dX }$ as a probability distribution function, sample it, and evaluate the energy expectation value E(a) as the average of the local function", null, "$\\frac{H\\Psi(X,a)}{\\Psi(X,a)}$, and minimize E(a).\n\nVMC is no different from any other variational method, except that since the many-dimensional integrals are evaluated numerically, we only need to calculate the value of the possibly very complicated wave function, which gives a large amount of flexibility to the method. One of the largest gains in accuracy over writing the wave function separably comes from the introduction of the so-called Jastrow factor, where the wave function is written as", null, "$exp(\\sum{u(r_{ij})})$, where rij is the distance between a pair of quantum particles. With this factor, we can explicitly account for particle-particle correlation, but the many-body integral becomes unseparable, so Monte Carlo is the only way to evaluate it efficiently. In chemical systems, slightly more sophisticated versions of this factor can obtain 80-90% of the correlation energy (see electronic correlation) with less than 30 parameters. In comparison, a configuration interaction calculation may require around 50,000 parameters to reach that accuracy, although it depends greatly on the particular case being considered. In addition, VMC usually scales as a small power of the number of particles in the simulation, usually something like N2-4 for calculation of the energy expectation value, depending on the form of the wave function.\n\n## Wave Function Optimization in VMC\n\nQMC calculations crucially depend on the quality of the trial-function, and so it is essential to have an optimized wave-function as close as possible to the ground state. The problem of function optimization is a very important research topic in numerical simulation. In QMC, in addition to the usual difficulties to find the minimum of multidimensional parametric function, the statistical noise is present in the estimate of the cost function (usually the energy), and its derivatives , required for an efficient optimization.\n\nDifferent cost functions and different strategies were used to optimize a many-body trial-function. Usually three cost functions were used in QMC optimization energy, variance or a linear combination of them. In this thesis we always used energy optimization. 
Variance optimization has the advantage of being bounded from below, being positive definite, and having a known minimum, but several authors have recently shown that energy optimization is more effective than variance optimization.\n\nThere are different motivations for this: first, one is usually interested in the lowest energy rather than in the lowest variance in both variational and diffusion Monte Carlo; second, variance optimization takes many iterations to optimize determinant parameters, the optimization can often get stuck in multiple local minima, and it suffers from the \"false convergence\" problem; third, energy-minimized wave functions on average yield more accurate values of other expectation values than variance-minimized wave functions do.\n\nThe optimization strategies can be divided into three categories. The first strategy is based on correlated sampling together with deterministic optimization methods. Even though this idea yielded very accurate results for the first-row atoms, the procedure can have problems if parameters affect the nodes, and moreover the density ratio of the current and initial trial-functions increases exponentially with the size of the system. In the second strategy one uses a large bin to evaluate the cost function and its derivatives in such a way that the noise can be neglected and deterministic methods can be used.\n\nThe third approach is based on an iterative technique that handles noisy functions directly. The first example of these methods is the so-called Stochastic Gradient Approximation (SGA), which was also used for structure optimization. Recently an improved and faster approach of this kind, the so-called Stochastic Reconfiguration (SR) method, was proposed.\n\n## References\n\n• W. L. McMillan, Phys. Rev. 138, A442 (1965)\n• D. Ceperley, G. V. Chester and M. H. Kalos, Phys. Rev. B 16, 3081 (1977)\n• Wave-Function Optimization in VMC\n• M. Snajdr and S. M. Rothstein, J. Chem. Phys. 112, 4935 (2000)\n• D. Bressanini et al., J. Chem. Phys. 116, 5345 (2002)\n• J. W. Wilkins, C. J. Umrigar and K. G. Wilson, Phys. Rev. Lett. 60, 1719 (1988)\n• P. R. C. Kent, R. J. Needs and G. Rajagopal, Phys. Rev. B 59, 12344 (1999)\n• X. Lin, H. Zhang and A. M. Rappe, J. Chem. Phys. 112, 2650 (2000)\n• A. Harju, B. Barbiellini, S. Siljamaki, R. M. Nieminen and G. Ortiz, Phys. Rev. Lett. 79, 1173 (1997)\n• S. Tanaka, J. Chem. Phys. 100, 7416 (1994)\n• M. Casula, C. Attaccalite and S. Sorella, J. Chem. Phys. 121, 7110 (2004)\n• N. D. Drummond and R. J. Needs, Phys. Rev. B 72, 085124 (2005)." ]
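To make the sampling machinery described above concrete, here is a minimal VMC sketch for a toy problem: the 1D harmonic oscillator with trial function ψ(x, a) = exp(−a x²), whose local energy works out to E_L(x) = a + x²(1/2 − 2a²). This is an illustration written for this article, not code from any of the referenced papers; burn-in, error bars, and wave-function optimization are all omitted.

```python
import math
import random

def local_energy(x, a):
    # E_L = (H psi)/psi for psi = exp(-a x^2), with H = -1/2 d^2/dx^2 + 1/2 x^2
    return a + x * x * (0.5 - 2.0 * a * a)

def vmc_energy(a, steps=200_000, step_size=1.0, seed=1):
    """Metropolis sampling of |psi|^2, averaging the local energy."""
    rng = random.Random(seed)
    x, total = 0.0, 0.0
    for _ in range(steps):
        x_new = x + rng.uniform(-step_size, step_size)
        # acceptance ratio |psi(x_new)|^2 / |psi(x)|^2 = exp(-2a (x_new^2 - x^2))
        if rng.random() < math.exp(-2.0 * a * (x_new * x_new - x * x)):
            x = x_new
        total += local_energy(x, a)
    return total / steps

# E(a) is minimized at a = 0.5, where E_L = 0.5 for every sample (zero variance).
for a in (0.3, 0.5, 0.8):
    print(a, round(vmc_energy(a), 3))
```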
[ null, "https://www.chemeurope.com/en/encyclopedia/images/math/0/0/e/00e1b6a2de5b6ff6ccc024bf81cb68ff.png ", null, "https://www.chemeurope.com/en/encyclopedia/images/math/b/c/b/bcb122310871ed0556f623f9e7eb0258.png ", null, "https://www.chemeurope.com/en/encyclopedia/images/math/a/a/e/aaecddd5429c4ecd9061a885e5fa59d4.png ", null, "https://www.chemeurope.com/en/encyclopedia/images/math/0/3/f/03fcc459ae1476be059e8cdf720a71a3.png ", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9058302,"math_prob":0.9766742,"size":4885,"snap":"2023-40-2023-50","text_gpt3_token_len":902,"char_repetition_ratio":0.14177422,"word_repetition_ratio":0.0,"special_character_ratio":0.17850563,"punctuation_ratio":0.08574879,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.992083,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-25T17:45:26Z\",\"WARC-Record-ID\":\"<urn:uuid:289ba1df-e69f-482b-9f4f-a1a757bed9dc>\",\"Content-Length\":\"73755\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d31aeee9-bcf6-40bb-addb-eb32260dff18>\",\"WARC-Concurrent-To\":\"<urn:uuid:79d68d59-8dc1-464f-ab79-b243a9456848>\",\"WARC-IP-Address\":\"85.158.2.220\",\"WARC-Target-URI\":\"https://www.chemeurope.com/en/encyclopedia/Variational_Monte_Carlo.html\",\"WARC-Payload-Digest\":\"sha1:IPS7GAFRO7U6U3HXZUYI3WDQGPMGU52N\",\"WARC-Block-Digest\":\"sha1:QDQSBDADGPBQUSDRPZOKQ7D4HMXAJYO4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233509023.57_warc_CC-MAIN-20230925151539-20230925181539-00220.warc.gz\"}"}
https://espanol.answers.yahoo.com/question/index?qid=20200701210730AAGX9uQ
[ "# Physics help with circuits?\n\n1. What does it mean for two resistors to be connected in series in an electrical circuit?\n\nWhat properties (voltage across, current through, resistance) are the same for each resistor in series?\n\n2. What does it mean for two resistors to be connected in parallel in an electrical circuit?\n\nWhat properties (voltage across, current through, resistance) are the same for each resistor in parallel?\n\n### 6 respuestas\n\nRelevancia\n• Given two resistors R1 and R2, if the resistors are in series, the power supply is connected to one end of, say R1, and the other end of R1 is connected to one end of R2. The other end of R2 is connected to the return lead of the power supply. For series resistances the current is the same in both resistors and the voltage across each resistor is proportional to the ratio of a particular resistor to the sum of the resistors.\n\nFor Parallel resistors, one end of each resistor is tied to one lead of the power supply and the other ends of both resistors are connected to the return of the power supply. The voltage is obviously the same across both resistors, but the current is V/R1 for resistor R1 and V/R2 for resistor R2.\n\nHope this helps.\n\n• two or more resistors are in series if they are connected end to end like cars in a RR train. one path in and one path out. They have individual voltage drops and the same current through them.\n\nTwo or more resistors are connected in parallel if they are connected to the same source and their outputs are connected to the same point. they have the same voltage across them and individual currents.\n\n• 1.\n\nWhat does it mean for two resistors to be connected in series in an electrical circuit?\n\nterminal of one  connected to one terminal of the other\n\nWhat properties (voltage across, current through, resistance) are the same for each resistor in series?\n\ncurrent through\n\n2.\n\nWhat does it mean for two resistors to be connected in parallel in an electrical circuit?\n\nterminal connected together on both sides\n\nWhat properties (voltage across, current through, resistance) are the same for each resistor in parallel?\n\nvoltage across\n\n• In series there must be no choice.   Electricity MUST pass through one and the other with no other path.\n\nIn parallel a junction exists where the electricity can reach a further point by moving through two or more different paths.\n\nSo for the series the electricity ( current ) must be the same in each part and the total energy lost ( volts) must be the sum of the amounts lost in each element.\n\nIn parallel as only SOME of the electricity is in each branch then the current is divided into the branches.  The sum of the currents is equal to the total current supplied.\n\nAs the energy in each piece of charge is the same the energy per charge lost in any branch is the same ie the volts are the same across each and any element in a parallel portion of a circuit.\n\n• ¿Qué te parecieron las respuestas? Puedes iniciar sesión para votar por la respuesta.\n• 1. Two resistors in series\n\nvoltage across depends on the  resistors\n\ncurrent through them is the same\n\nTotal resistance is addition of individual resistances\n\n2. Two resistors in parallel\n\nvoltage across them is the same\n\ncurrent through them depends on the individual resistances\n\nTotal resistance is given by the ratio of product  of individual resistances, to the sum of resistances\n\n• What kind of question is that? 
The same current flows through resistors in series while resistors in parallel experience the same voltage drop.\n\n¿Aún tienes preguntas? Pregunta ahora para obtener respuestas." ]
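A worked example with illustrative numbers (not taken from the original question) ties the answers above together. Take R1 = 4 Ω and R2 = 12 Ω on a 12 V supply:

• In series: R_total = 4 + 12 = 16 Ω, so the common current is I = 12 V / 16 Ω = 0.75 A; the voltage divides as V1 = 0.75 A × 4 Ω = 3 V and V2 = 0.75 A × 12 Ω = 9 V (same current, different voltages, and 3 V + 9 V = 12 V).

• In parallel: each resistor sees the full 12 V, so I1 = 12/4 = 3 A and I2 = 12/12 = 1 A, giving 4 A from the supply; R_total = (4 × 12)/(4 + 12) = 3 Ω (same voltage, different currents).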
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.911166,"math_prob":0.9593733,"size":3605,"snap":"2020-34-2020-40","text_gpt3_token_len":802,"char_repetition_ratio":0.18855873,"word_repetition_ratio":0.17088607,"special_character_ratio":0.20471567,"punctuation_ratio":0.07647059,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9945229,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-12T21:22:12Z\",\"WARC-Record-ID\":\"<urn:uuid:37ab568e-dca8-4962-b154-8415f99e11ed>\",\"Content-Length\":\"135955\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6982ba94-cd13-4c84-ae14-eb3e82d46917>\",\"WARC-Concurrent-To\":\"<urn:uuid:20029e57-0881-4b5c-a2e6-da92b99a0eef>\",\"WARC-IP-Address\":\"76.13.32.153\",\"WARC-Target-URI\":\"https://espanol.answers.yahoo.com/question/index?qid=20200701210730AAGX9uQ\",\"WARC-Payload-Digest\":\"sha1:ZBREQGEWDT7CZ3KQTRSKPIDCGZM2G2S3\",\"WARC-Block-Digest\":\"sha1:PFNORXAZMUIMZI7AVA7YUCFAWPAOM2OD\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439738944.95_warc_CC-MAIN-20200812200445-20200812230445-00557.warc.gz\"}"}
https://chem.libretexts.org/Courses/El_Paso_Community_College/CHEM1306%3A_Health_Chemistry_I_(Rodriguez)/01%3A_Classifying_Matter/1.S%3A_Chemistry_Matter_and_Measurement_(Summary)
[ "# 1.S: Chemistry, Matter, and Measurement (Summary)\n\n$$\\newcommand{\\vecs}{\\overset { \\rightharpoonup} {\\mathbf{#1}} }$$ $$\\newcommand{\\vecd}{\\overset{-\\!-\\!\\rightharpoonup}{\\vphantom{a}\\smash {#1}}}$$$$\\newcommand{\\id}{\\mathrm{id}}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$ $$\\newcommand{\\kernel}{\\mathrm{null}\\,}$$ $$\\newcommand{\\range}{\\mathrm{range}\\,}$$ $$\\newcommand{\\RealPart}{\\mathrm{Re}}$$ $$\\newcommand{\\ImaginaryPart}{\\mathrm{Im}}$$ $$\\newcommand{\\Argument}{\\mathrm{Arg}}$$ $$\\newcommand{\\norm}{\\| #1 \\|}$$ $$\\newcommand{\\inner}{\\langle #1, #2 \\rangle}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$ $$\\newcommand{\\id}{\\mathrm{id}}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$ $$\\newcommand{\\kernel}{\\mathrm{null}\\,}$$ $$\\newcommand{\\range}{\\mathrm{range}\\,}$$ $$\\newcommand{\\RealPart}{\\mathrm{Re}}$$ $$\\newcommand{\\ImaginaryPart}{\\mathrm{Im}}$$ $$\\newcommand{\\Argument}{\\mathrm{Arg}}$$ $$\\newcommand{\\norm}{\\| #1 \\|}$$ $$\\newcommand{\\inner}{\\langle #1, #2 \\rangle}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$$$\\newcommand{\\AA}{\\unicode[.8,0]{x212B}}$$\n\nTo ensure that you understand the material in this chapter, you should review the meanings of the bold terms in the following summary and ask yourself how they relate to the topics in the chapter.\n\nChemistry is the study of matter, which is anything that has mass and takes up space. Chemistry is one branch of science, which is the study of the natural universe. Like all branches of science, chemistry relies on the scientific method, which is a process of learning about the world around us. In the scientific method, a guess or hypothesis is tested through experiment and measurement.\n\nMatter can be described in a number of ways. Physical properties describe characteristics of a sample that do not change the chemical identity of the material (size, shape, color, and so on), while chemical properties describe how a sample of matter changes its chemical composition. A substance is any material that has the same physical and chemical properties throughout. An element is a substance that cannot be broken down into chemically simpler components. The smallest chemically identifiable piece of an element is an atom. A substance that can be broken down into simpler chemical components is a compound. The smallest chemically identifiable piece of a compound is a molecule. Two or more substances combine physically to make a mixture. If the mixture is composed of discrete regions that maintain their own identity, the mixture is a heterogeneous mixture. If the mixture is so thoroughly mixed that the different components are evenly distributed throughout, it is a homogeneous mixture. Another name for a homogeneous mixture is a solution. Substances can also be described by their phase: solid, liquid, or gas.\n\nScientists learn about the universe by making measurements of quantities, which consist of numbers (how many) and units (of what). The numerical portion of a quantity can be expressed using scientific notation, which is based on powers, or exponents, of 10. Large numbers have positive powers of 10, while numbers less than 1 have negative powers of 10. The proper reporting of a measurement requires proper use of significant figures, which are all the known digits of a measurement plus the first estimated digit. The number of figures to report in the result of a calculation based on measured quantities depends on the numbers of significant figures in those quantities. 
For addition and subtraction, the number of significant figures is determined by position; for multiplication and division, it is decided by the number of significant figures in the original measured values. Nonsignificant digits are dropped from a final answer in accordance with the rules of rounding.\n\nChemistry uses SI, a system of units based on seven basic units. The most important ones for chemistry are the units for length, mass, amount, time, and temperature. Basic units can be combined with numerical prefixes to change the size of the units. They can also be combined with other units to make derived units, which are used to express other quantities such as volume, density, or energy. A formal conversion from one unit to another uses a conversion factor, which is constructed from the relationship between the two units. Numbers in conversion factors may affect the number of significant figures in a calculated quantity, depending on whether the conversion factor is exact. Conversion factors can be applied in separate computations, or several can be used at once in a single, longer computation.​ Conversion factors are very useful in calculating dosages.\n\n1.S: Chemistry, Matter, and Measurement (Summary) is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by LibreTexts." ]
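A short worked example of the dosage point above (the numbers are invented for illustration): a prescription of 15 mg of a drug per kilogram of body mass for a patient whose measured mass is 62.4 kg uses that ratio as a conversion factor:

62.4 kg × (15 mg / 1 kg) = 936 mg

Because the prescribed ratio is treated as exact, it does not limit the significant figures; the answer is reported to the three significant figures of the measured mass.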
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.94054323,"math_prob":0.989419,"size":3634,"snap":"2023-14-2023-23","text_gpt3_token_len":686,"char_repetition_ratio":0.10853995,"word_repetition_ratio":0.018835617,"special_character_ratio":0.18794717,"punctuation_ratio":0.10925645,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9837851,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-01T02:40:13Z\",\"WARC-Record-ID\":\"<urn:uuid:a628ad95-4c7b-486e-90cc-51356f7f2bcd>\",\"Content-Length\":\"120157\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:676350f4-5669-46df-8a2a-3d534d010202>\",\"WARC-Concurrent-To\":\"<urn:uuid:c4f4f3b0-2504-45f8-a00b-9c51e71705cf>\",\"WARC-IP-Address\":\"13.249.39.77\",\"WARC-Target-URI\":\"https://chem.libretexts.org/Courses/El_Paso_Community_College/CHEM1306%3A_Health_Chemistry_I_(Rodriguez)/01%3A_Classifying_Matter/1.S%3A_Chemistry_Matter_and_Measurement_(Summary)\",\"WARC-Payload-Digest\":\"sha1:G4IDZ23AOGJMDONMEGL3OARSRBYYF52A\",\"WARC-Block-Digest\":\"sha1:7RWPL7OT46QZVWBP4HDQAOBLY74QBIH3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224647525.11_warc_CC-MAIN-20230601010402-20230601040402-00408.warc.gz\"}"}
http://codinghelmet.com/articles/linq-array-all-modes
[ "LINQ Expression to Find All Modes of an Array\n\nby Zoran Horvat\nJul 30, 2015\n\nMode of an array is the element which occurs more often than any other element of that array. We can write a LINQ expression which counts occurrences of every element of the array and then pick the element with largest count.\n\nIn the previous hint we have developed a LINQ expression which returns array mode ignoring the situation in which more than one element occurs the same number of times. For details, see LINQ Expression to Find Mode of an Array.\n\nIn this article, we will extend the same expression so that it returns a collection of numbers which occur equally many times, but more times than any other element of the array.\n\nBelow is the function which calculates all modes of a collection of integers.\n\n``````IEnumerable<int> AllModes(IEnumerable<int> collection)\n{\n\nvar pairs =\ncollection\n.GroupBy(value => value)\n.OrderByDescending(group => group.Count());\n\nint modeCount = pairs.First().Count();\n\nIEnumerable<int> modes =\npairs\n.Where(pair => pair.Count() == modeCount)\n.Select(pair => pair.Key)\n.ToList();\n\nreturn modes;\n\n}\n``````\n\nThis time, the function runs in two passes. The first pass is to take all distinct elements of the array and to count occurrences of each of them. In the same step, we are sorting distinct elements descending by their number of occurrences. The result is that all modes of the array will appear at the beginning of the resulting collection.\n\nIn the second pass, we are simply taking all the elements from the first collection which have the same count as the most frequent element. All those numbers are modes of the array.\n\nIf you are interested in more academic solutions to this same problem, please take a look at the exercise Finding Mode of an Array.\n\nDemonstration\n\nWe can use this function in the context of integer arrays to find their mode. 
Here is the console application which demonstrates the AllModes function.\n\n``````using System;\nusing System.Collections.Generic;\nusing System.Linq;\n\nnamespace ArrayMode\n{\n\nclass Program\n{\n\nstatic IEnumerable<int> AllModes(IEnumerable<int> collection)\n{\n\nvar pairs =\ncollection\n.GroupBy(value => value)\n.OrderByDescending(group => group.Count());\n\nint modeCount = pairs.First().Count();\n\nIEnumerable<int> modes =\npairs\n.Where(pair => pair.Count() == modeCount)\n.Select(pair => pair.Key)\n.ToList();\n\nreturn modes;\n\n}\n\nstatic void Print(int[] a)\n{\n\nfor (int i = 0; i < a.Length; i++)\n{\nConsole.Write(\"{0,3}\", a[i]);\nif (i < a.Length - 1 && (i + 1) % 10 == 0)\nConsole.WriteLine();\n}\nConsole.WriteLine();\nConsole.WriteLine();\n\nvar groups = a\n.GroupBy(value => value)\n.OrderBy(group => group.Key);\n\nforeach (var group in groups)\n{\nConsole.WriteLine(\"{0,3} x {1}\", group.Key, group.Count());\n}\n\n}\n\nstatic void Main(string[] args)\n{\n\nRandom rnd = new Random();\nint n = 0;\n\nwhile (true)\n{\n\nConsole.Write(\"Array length (0 to exit): \");\n\nif (n <= 0)\nbreak;\n\nint[] a = new int[n];\nfor (int i = 0; i < a.Length; i++)\na[i] = rnd.Next(9) + 1;\n\nPrint(a);\n\nIEnumerable<int> modes = AllModes(a);\n\nstring separator = \": \";\nConsole.Write(\"Modes\");\nforeach (int mode in modes)\n{\nConsole.Write(\"{0}{1}\", separator, mode);\nseparator = \", \";\n}\nConsole.WriteLine();\nConsole.WriteLine();\n\n}\n\n}\n\n}\n}\n``````\n\nWhen this application is run, it produces the following output:\n\n``````Array length (0 to exit): 10\n8 2 4 4 1 7 9 3 2 9\n\n1 x 1\n2 x 2\n3 x 1\n4 x 2\n7 x 1\n8 x 1\n9 x 2\nModes: 2, 4, 9\n\nArray length (0 to exit): 15\n8 1 9 3 5 8 8 6 2 4\n6 6 1 5 2\n\n1 x 2\n2 x 2\n3 x 1\n4 x 1\n5 x 2\n6 x 3\n8 x 3\n9 x 1\nModes: 6, 8\n\nArray length (0 to exit): 42\n5 3 3 4 4 5 8 7 1 5\n1 2 1 1 3 1 3 1 6 6\n5 1 9 8 8 3 6 6 9 3\n3 1 8 3 6 5 7 8 7 5\n9 4\n\n1 x 8\n2 x 1\n3 x 8\n4 x 3\n5 x 6\n6 x 5\n7 x 3\n8 x 5\n9 x 3\nModes: 1, 3\n\nArray length (0 to exit): 0\n``````", null, "" ]
[ null, "http://codinghelmet.com/img/zh.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.773847,"math_prob":0.99445504,"size":5039,"snap":"2019-26-2019-30","text_gpt3_token_len":1309,"char_repetition_ratio":0.109235354,"word_repetition_ratio":0.09619687,"special_character_ratio":0.2889462,"punctuation_ratio":0.14971751,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9789274,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-25T17:33:23Z\",\"WARC-Record-ID\":\"<urn:uuid:a7c6ad02-b105-4abb-8d3b-6c71154f364a>\",\"Content-Length\":\"23112\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:00a2a04d-d420-4a6b-90ad-d84ba15605e0>\",\"WARC-Concurrent-To\":\"<urn:uuid:33b219d0-fc6c-40d1-8827-58d5e8079da1>\",\"WARC-IP-Address\":\"104.210.145.181\",\"WARC-Target-URI\":\"http://codinghelmet.com/articles/linq-array-all-modes\",\"WARC-Payload-Digest\":\"sha1:ZKBEHSZ35A44ECH6BDZRNLEAGDXEUVPU\",\"WARC-Block-Digest\":\"sha1:UKG4RGGOVG2FNYAVJ2RYBHIV4UL7V4WO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627999876.81_warc_CC-MAIN-20190625172832-20190625194832-00097.warc.gz\"}"}
https://epi-perspectives.biomedcentral.com/articles/10.1186/1742-5573-5-5
[ "# Partitioning the population attributable fraction for a sequential chain of effects\n\n## Abstract\n\n### Background\n\nWhile the population attributable fraction (PAF) provides potentially valuable information regarding the community-level effect of risk factors, significant limitations exist with current strategies for estimating a PAF in multiple risk factor models. These strategies can result in paradoxical or ambiguous measures of effect, or require unrealistic assumptions regarding variables in the model. A method is proposed in which an overall or total PAF across multiple risk factors is partitioned into components based upon a sequential ordering of effects. This method is applied to several hypothetical data sets in order to demonstrate its application and interpretation in diverse analytic situations.\n\n### Results\n\nThe proposed method is demonstrated to provide clear and interpretable measures of effect, even when risk factors are related/correlated and/or when risk factors interact. Furthermore, this strategy not only addresses, but also quantifies issues raised by other researchers who have noted the potential impact of population-shifts on population-level effects in multiple risk factor models.\n\n### Conclusion\n\nCombined with simple, unadjusted PAF estimates and an aggregate PAF based on all risk factors under consideration, the sequentially partitioned PAF provides valuable additional information regarding the process through which population rates of a disorder may be impacted. In addition, the approach can also be used to statistically control for confounding by other variables, while avoiding the potential pitfalls of attempting to separately differentiate direct and indirect effects.\n\n## Background\n\nRecent attention has focused upon the need to consider the sequential chain of effects when calculating and interpreting relative risk in multiple risk factor models. For example, as illustrated in Figure 1, simultaneously controlling for the mutual association between smoking and birthweight when examining the effect of these variables upon mild mental retardation (MMR) (Figure 1, middle and lower panels) is not equivalent to a model in which smoking leads to elevated risk for low birthweight, which then leads to elevated risk for MMR (Figure 1, top panel). With such models, the manner and sequence in which relative risk is calculated vary depending on the order of the variable in the sequence of effects. A similar issue applies to the estimation of measures of community level effect, such as the population attributable fraction (PAF)–also referred to as population attributable risk, or attributable risk. Ignoring the causal or sequential ordering of risk factors either assumes that they are independent (i.e., do not influence each other–Figure 1, middle panel) or assumes that they are all mutually correlated–every risk factor influences or has bidirectional associations with every other risk factor (Figure 1, bottom panel), even if one occurs in childhood and the other before a child was born.\n\nIn a sequential or causal ordering of effects, an earlier risk factor can impact subsequent risk factors by increasing their rate or prevalence (i.e., an indirect effect). In other words, an indirect effect is where one predictor variable has an impact on an outcome variable through an intermediate predictor variable (e.g., smoking influences low birthweight, low birthweight influences MMR–see Figure 1, top panel). 
In addition, one risk factor may interact with a subsequent risk factor by magnifying or reducing the effect it has upon the outcome (i.e., an interaction effect).\n\nIt's worth noting that two predictors can have an indirect effect on an outcome with no interaction effect: For example, smoking may lead to higher rates of low birthweight, and low birthweight may lead to higher rates of MMR; but the effect of being born low birthweight may be identical for all children, regardless of whether or not their mother smoked during pregnancy. Similarly, absence of an indirect effect does not preclude an interaction effect upon the same outcome. For example, child sex and birthweight may have no correlation with each other–and hence no indirect effect–while the effect of low birthweight on a developmental outcome may be very small for females but very large for males (i.e., a large interaction effect).\n\nWhile several strategies exist for estimating a PAF for one risk factor while simultaneously statistically controlling for other variables , these strategies do not consider the sequence in which these variables influence each other and the outcome as just described. This results in estimates that have a variety of known problems, including values that are paradoxical, counter-intuitive, or simply nonsensical. These and similar problems have led some to question whether adjusted PAFs are of any practical value . Furthermore, these strategies generally involve either estimating the direct effect (e.g., effect of smoking on MMR that is unrelated to birthweight) or the indirect effect (e.g., effect of smoking on MMR that is related to smoking's effect on birthweight–see Figure 1, top panel). However, others have noted various issues with differentiating direct and indirect effects in biological models , again, raising questions as to the practicality of calculating adjusted PAFs in multiple risk factor models.\n\nIn contrast, this paper outlines a procedure for partitioning the overall PAF associated with a group of risk factors into the individual effects associated with each specific risk factor based upon the order of that risk factor in the sequence of effects. As will be described in more detail, this technique directly parallels the estimation of R2 and change in R2 one estimates through a hierarchical multiple regression in which variables are entered in multiple steps, with those that occur earlier in a process (e.g., prenatal factors) entered prior to those that occur later in a process (e.g., early childhood environment). This results in parameter estimates at any given step being adjusted for the effects of those variables that were entered in earlier steps. This same process can be used to adjust for confounding by other variables, such as sex or SES, which may be related to the risk factors and outcome of interest.\n\nIt is also worth noting that an additional strength of this approach is that it adjusts a PAF for previously entered effects without attempting to differentiate direct and/or indirect effects. Instead, in estimates the total or net effect of a variable–direct and indirect effects combined–after controlling for other risk factors and/or confounding by other variables.\n\nThe proposed procedure is appropriate for representative or population-based studies where estimates of the risk ratio (RR) and the prevalence of a risk factor (pe) can be directly estimated. 
We first briefly describe existing strategies for assessing adjusted PAFs in multiple risk factor models, and then describe the proposed strategy for partitioning a PAF based upon the order of effects, drawing the parallels between this approach and the estimate of R2 and change in R2 in a multiple regression analysis. We illustrate this technique in three scenarios: (1) two risk factors are related/correlated with each other, but do not interact (i.e., there is not interaction effect), (2) two risk factors are not related/correlated with each other, but do interact (i.e., there is an interaction effect), (3) two risk factors are related/correlated with each other and interact.\n\n### Estimation of PAF in stratified models\n\nThe most transparent approach for estimating a PAF across multiple risk factors is to use a stratified model. In a stratified model, the sample is stratified based upon the possible combinations of risk factors, and a PAF is estimated for each combination. The referent group is those without any of the risk factors under consideration. An example of this approach is presented in Figure 2, where risk factor A and risk factor B are risk factors for MMR in children. The referent group consists of children with neither risk factor, and there are three \"at-risk\" groups: those with A only, those with B only, and those with both A and B. A PAF is calculated for any one these combinations of A and/or B using Equation 1.", null, "(1)\n\nwhere i indicates which of the three at-risk groups is being estimated, PAFi indicates the estimated PAF for the corresponding group, and Pei is the proportion of the sample in group i. In addition, Pe1, Pe2, and Pe3 indicate the proportion of the sample in each of the three at-risk groups, and RR1, RR2, and RR3 indicate their corresponding risk ratio. In other words, three PAFs are calculated, corresponding to those with A only, B only, and those with both A and B. When calculating these estimates, the denominator does not change; however, the numerator for any given estimate is equal to the proportion of the sample in that risk-group (Pei), multiplied by the corresponding risk-ratio minus 1.\n\nThere are several limitations with this strategy. Specifically, stratification does not incorporate any sequence of effect between the risk factors, and it assumes that there is no association between the risk factors. However, it should be noted that the sum of the stratified PAFs (PAFAGG) is a legitimate estimate of the combined aggregate effect of both risk factors relative to those without either risk factor. In other words, PAFAGG estimates the percentage of cases in the population that are associated with either or both A and B regardless of whether A and B are unrelated or whether they are strongly related. This is equivalent to removing any distinction between the two risk factors and simply performing a risk/no risk comparison, as illustrated at the bottom of Figure 2. The issue becomes problematic when one wishes to use stratification to examine the effect specific to either A or B.\n\nIn situations where risk factors are related, several formulas exist for estimating adjusted PAF's . These techniques involve adjusting the relative risk for one variable for the effect of other variables, and then using this adjusted relative risk for estimating a PAF. 
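Two of the equations referenced in this part of the text are not rendered in this extract. Based on the surrounding definitions, plausible reconstructions (offered as inferences, not as the authors' exact notation) are Equation 1, the stratified PAF with a common denominator across the three at-risk groups, and Equation 2, the Mantel-Haenszel-adjusted PAF introduced in the passage that follows, in which pe is the exposed proportion of cases:

$$\mathrm{PAF}_i \;=\; \frac{Pe_i\,(RR_i - 1)}{1 + \sum_{j=1}^{3} Pe_j\,(RR_j - 1)} \qquad (1)$$

$$\mathrm{PAF}_{adj} \;=\; p_e\,\frac{OR_{MH} - 1}{OR_{MH}} \qquad (2)$$

Under the first form the three stratified PAFs share one denominator, so they sum to the aggregate PAF for having any combination of the risk factors, which matches the text's statement that the stratified PAFs sum to PAFAGG.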
For example, using a Mantel-Haenszel odds ratio, an adjusted PAF can be estimated using Equation Two...", null, "(2)\n\nwhere pe is estimated as the ratio of the number of exposed cases, relative to the total number of cases, and ORMH is the Mantel-Haenszel odds ratio. The result is a PAF adjusted for the effect of other risk factors in the model. Alternatively, others have proposed strategies for estimating an adjusted PAF through a multiple logistic regression. In general, these approaches use logistic regression to calculate an odds-ratio adjusted for other effects, and then uses this adjusted odds-ratio to estimate a PAF.\n\nWhile resolving the issue of related/correlated risk factors, these approaches have their own limitations. As noted by Rowe and colleagues, individual, unadjusted PAFs can sum to more than 1.0 because a person with more than one risk factor can have a disorder prevented (or caused) in more than one way. For example, if the combination of two risk factors is a sufficient cause for a disorder, cases among individuals with both risk factors will be \"double counted\" when calculating a PAF for each of these risk factors. Consequently, one might expect adjustment techniques to remove this \"overlapping\" risk and result in adjusted PAFs that do not sum to more than 1.0.\n\nHowever, as demonstrated by Coughlin and colleagues, many of these techniques fail in this regard. Specifically, Coughlin and colleagues examined birthweight and maternal consumption of processed lunchmeat as risk factors for childhood astrocytoma. Using a logistic model, the authors found that the PAF for birthweight and processed meat considered jointly was equal to .791. After adjusting each risk factor for its association with the other, the authors reported that the adjusted PAF for birthweight was equal to .558, while the adjusted PAF for processed lunchmeat was .521. Not only did the adjusted PAFs sum to more than the joint PAF when both variables were aggregated, the adjusted PAFs summed to more than 1.0. In this same paper, Coughlin and colleagues propose an adjustment strategy in which the adjusted PAFs will sum to the joint, aggregate PAF for both risk factors together ; however, this strategy, as well as other techniques for calculating adjusted PAFs, does not address the sequence of effects. Instead, they simultaneously remove the effect of all other variables upon each other without considering how earlier risk factors may impact the prevalence of later risk factors.\n\n### An alternative strategy: partitioning a paf sequentially\n\n#### Background\n\nThe method being proposed here is based on an alternative approach to estimating adjusted PAFs. As described below, the method can be seen as being somewhat analogous to partitioning R2 in a multi-step, hierarchical multiple regression, where the estimation of r2 (i.e., the net or total effect of a single variable), total R2 (i.e., the net or total effect of a set of variables), and change in R2 (i.e., the net effect of a variable(s) after controlling for previously entered variables) in a multiple regression analysis[13, 14] have parallels in a simple/unadjusted PAF, an aggregate PAF, and an adjusted PAF. 
This point is illustrated through a hypothetical study using multiple regression, in which child sex, early childhood parenting, and adolescent peer behavior serve as predictors of adolescent problem behavior.\n\n##### Simple Effects\n\nOne might begin such a study by examining the individual simple r2 of each of these three predictors in relation to adolescent problem behavior. In our hypothetical example, this might result in r2 = .30 for child sex, r2 = .35 for early childhood parenting, and r2 = .40 for adolescent peer behavior. There is nothing inherently wrong with these three r2 estimates–each describes the total or net association between the corresponding predictor and the outcome (adolescent problem behavior).\n\nThis has a direct parallel with PAF estimates in multiple risk factor models. Consider a simple alternative example, where maternal smoking during pregnancy and low birth weight are predictors of MMR in elementary school. One can estimate a simple, unadjusted PAF for smoking and a simple, unadjusted PAF for birth weight–and these would be entirely valid estimates of the total or net association between these risk factors and MMR.\n\n##### Aggregate/Total Effects\n\nReturning to the multiple regression example, if one was interested in simultaneously examining the total effect of all three predictors combined, simply adding the unadjusted r2's would result in a sum of 1.05, an impossible and nonsensical solution. This reflects the lack of independence among the predictors (i.e., the predictors are related to each other), with some of the effect being shared across these variables. Instead, one could perform a multiple regression by entering all three predictors in a single step. In this hypothetical example, we will assume that this results in an R2 of .45, indicating that the three variables as a group account for 45% of the variance in adolescent problem behavior scores.\n\nAgain, this has a direct parallel with PAF estimates in multiple risk factor models. If one was interested in simultaneously examining the effect of multiple predictors and simply added their unadjusted PAFs, the result would not only be invalid, but could be impossible or nonsensical. Instead, one can estimate a total or aggregate PAF (PAFAGG) by comparing those with none of the risk factors of interest to those with one or more of the risk factors. Returning to our PAF example, one could estimate PAFAGG for both smoking and birth weight by contrasting children who had neither of these risk factors, with those who have one or both of them. The result would be an entirely valid estimate of the total or net effect of both of these variables when examined simultaneously.\n\nThese examples address situations where one is interested in either the individual effect of a single predictor or risk factor, or where one is interested in the total or combined effect of several variables or risk factors examined simultaneously. Neither involves adjusting individual effects. However, as we have noted previously, researchers are also often interested in examining the effect of individual predictors or risk factors after statistically controlling for the effect of other variables. 
This might be due to an interest in statistically controlling for other potential confounding effects, or in order to control for the effect of earlier steps in a more complex process.\n\nReturning to our R2 analogy, in regression this can be done through a hierarchical multiple regression, where variables are entered sequentially in multiple steps, examining the change in R2 at each step. For example, one might find R2 = .30 for the first step (child sex only), then find a change R2 equal to .10 when early childhood parenting is added in the second step (for a total R2 = .40 after step 2), and finally obtain a change in R2 equal to .05 when adolescent peer behavior is added in the third step (for a total R2 = .45 after step 3). Note that the estimates of the change in R2 do not separate direct and indirect effects associated with that variable. For example, the .10 change R2 for early childhood parenting reflects both any direct effect it has on the outcome, as well as any indirect effect it may have through peer behavior.\n\nThe change-in- R2's seen in each individual step sum to the total R2 obtained in the final step, which is also identical to the total R2 obtained by entering all three variables simultaneously in a single step. Combining the different approaches, one can estimate the total effect of each variable on its own (the simple r2's), the total effect of all variables when examined together (the total R2), and the unique effect of individual variables controlling for one or more other variables (the change in R2's). This approach is valuable in that it allows one to examine the relative process through which these variables influence the outcome, in conjunction to their individual net effects (r2's) and the overall impact of all variables together (R2). Furthermore, if the effect of the predictors on the outcome is believed to be confounded by other variables, those variables can be entered in the first step in order to adjust for those possible confounding effects.\n\nUnfortunately, this last step currently has no equivalent parallel in PAF analyses. What is needed is a procedure whereby multiple risk factors can be examined in two or more steps, with the PAF adjusted at each step for the effect of variables entered in earlier steps, and where these sequentially adjusted PAF estimates sum to the total PAF observed when all variables are examined simultaneously. This would provide the final, third parallel between R2 as an indicator of the variance in an outcome associated with multiple predictors in multiple regression, and PAF as an indicator of the percentage of cases in a population associated with multiple risk factors. The procedure we are proposing accomplishes this task.\n\nAs described above, this procedure is designed to complement the information obtained by an unadjusted PAF and PAFAGG. Just like the simple r2, the simple, unadjusted PAF provides an estimate of the total or net effect of a risk factor. Similarly, just like the total R2, the PAFAGG provides an estimate of the net or total effect of a group of variables. However, this procedure provides additional, valuable information that supplements both of these. In situations where one has a causal-sequence of effects, such as maternal smoking leading to increased cases of babies born low birthweight, leading to increased numbers of children identified as having MMR, this can provide potentially interesting process information. 
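To make the hierarchical change-in-R2 bookkeeping described above concrete, here is a minimal, self-contained sketch. The data are synthetic and the variable names are invented for illustration; the resulting R2 values will not match the hypothetical .30/.40/.45 figures used in the text.

```python
# Hedged sketch of hierarchical (multi-step) regression and change in R^2.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
sex = rng.integers(0, 2, n).astype(float)        # step 1 predictor
parenting = 0.5 * sex + rng.normal(size=n)       # step 2 predictor, related to sex
peers = 0.5 * parenting + rng.normal(size=n)     # step 3 predictor, related to parenting
y = 0.6 * sex + 0.5 * parenting + 0.4 * peers + rng.normal(size=n)

def r2(*predictors):
    X = sm.add_constant(np.column_stack(predictors))
    return sm.OLS(y, X).fit().rsquared

r2_1 = r2(sex)
r2_2 = r2(sex, parenting)
r2_3 = r2(sex, parenting, peers)
print(r2_1, r2_2 - r2_1, r2_3 - r2_2)   # change in R^2 at each step
print(r2_3)                             # total R^2; equals the single-step, all-predictors R^2
```

The three increments sum to the final R2, which is exactly the property the sequential PAF partition is designed to mirror.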
Returning to our example, whereas the simple PAF for low birth weight indicates the total effect it has on cases of MMR, the adjusted PAF reflects how much of that effect is not driven by earlier processes (i.e., maternal smoking). This can provide valuable information for researchers interested in these developmental, longitudinal processes. Furthermore, if the effect of the risk factors on the outcome is believed to be confounded by other variables, those variables can be entered in the first step in order to adjust for those possible confounding effects.\n\n### Previous techniques for partitioning a PAF sequentially\n\nThis approach differs from that proposed by Eide and Gefeller, who have also proposed a method of sequential PAF estimation. Their strategy was to add risk factors to a model one at a time and calculate the increase in the total PAF at each step. They do not suggest ordering variables based on a causal sequence, but instead propose that an optimal strategy would be to start with the risk factor having the largest individual PAF. The resulting estimate for each variable is referred to as a sequential attributable fraction. For example, they cite a previous study to illustrate how the PAF for smoking as a predictor of chronic cough was 41.2%, while the PAF of smoking and occupational dust exposure together was 51.2%. Therefore, the sequential PAF for smoking would be 41.2%, while the sequential PAF for occupational exposure would be 10.0%.\n\nEide and Gefeller go on to propose that an average PAF can be calculated by estimating the mean sequential PAF for a given risk factor, based upon all possible orderings of variables in the model. In this way, if the mean sequential PAF is calculated for each variable in the model, the sum of the average PAFs equals the aggregate PAF obtained when all variables are examined together. While this addresses the issue of PAFs summing to more than 1.0, to quote Rowe and colleagues, estimates based on this approach \"which assume complete elimination of one risk factor while the prevalence of the other risk factor remains static, do not represent realistic scenarios\" (p.246)\n\nMore importantly, while their approach is fairly straightforward, it does so by simply attributing the entire remaining portion of an aggregate PAF to subsequent risk factors. The result is that it does not allow for interactive effects (i.e., it assumes there is no interaction between risk factors as predictors of the outcome). As we will document in the final section of this paper, incorporating interactions into PAF estimates solves the seemingly paradoxical findings that have been noted by Wilcox. By including possible interaction terms, our approach provides the correct answer to these paradoxical situations and directly addresses one of the key concerns with calculating adjusted PAF estimates.\n\n### The proposed sequential partitioning strategy\n\nIn contrast to these other approaches, the sequential partitioning strategy we propose incorporates two key features. First, it recognizes that in multiple risk factor models one risk factor may in fact lead to a higher rate or prevalence of a subsequently occurring risk factor. This results in a risk factor having both a direct effect on the outcome, as well as indirect effects through increased rates of subsequent risk factors. Consider maternal smoking during pregnancy and low birth weight–both are related to MMR, but smoking has an indirect effect through increased rates of low birth weight. 
As such, part of the effect of low birth weight is in fact due to the indirect effect of smoking and should be attributed to smoking, not to low birth weight. In such a model, low birth weight can be thought of as an intermediate or mediating risk factor. The need to address this is the very issue raised by Rowe and colleagues (see the top example in Figure 1).\n\nSecond, while on a conceptual level, the issue of direct and indirect effects is important in determining the sequential order of variables, as well as in interpreting the resulting effects, the proposed strategy does not involve attempting to separate out or differentiate direct and indirect effects. This is an equally important point given concerns raised regarding difficulties in separating direct and indirect effects. Specifically, the PAF associated with the first risk factor reflects its total effect. It is a single value that reflects both any direct effect upon the outcome and any indirect effect through mediating variables. The adjusted PAF estimated for any subsequent risk factors removes the impact of all preceding variables on that risk factor, but still results in a single estimate that reflects both any direct effect of that variable upon the outcome and any indirect effect it may have that is mediated by variables appearing later in the process. Returning to the previous example, the PAF associated with smoking would reflect the total impact of smoking–both the effect it has directly upon MMR, and the indirect effect it has by increasing the number of children born low birth weight. The PAF associated with low birth weight would reflect that portion of the low birth weight effect that is unrelated to smoking. If a third risk factor was entered after low birth weight, the PAF for low birth weight would reflect any direct effect of low birth weight (controlling for smoking) and any indirect effect it has through that third risk factor (again, controlling for smoking). This also highlights the point that if the effect of the risk factors on the outcome is believed to be confounded by other variables, those variables can be entered in the first step in order to adjust for those possible confounding effects.\n\nThe proposed procedure begins with a PAF (PAFAGG) describing the total aggregated effect of all risk factors, and then partitions this PAF based upon the sequential order of the effects in the model. Similar to the stratified approach, the sum of the PAFs obtained is equal to the PAF that would be obtained by simply placing all individuals with one or more of the risk factors into a \"risk\" group, and then calculating a PAF for this aggregate indicator of \"risk\" relative to those with none of the risk factors. This is an important characteristic, in that it emphasizes the strategy is partitioning the total, net effect of all the risk factors in the model. In addition, similar to adjustment approaches, it addresses issues that arise from risk factors being correlated, as well as risk factors having interactive effects. Finally, based upon the sequence of effects, this new procedure adjusts the prevalence of risk factors, pe, not just the RR, at each step.\n\nFor simplicity and transparency, we focus on two risk factors entered in two steps; however, the process can readily be continued to include additional risk factors or potential confounds across 3 or more steps. We begin by describing the procedure for situations where two risk factors are related/correlated with each other, but do not interact. 
We then address the case where two risk factors are not related/correlated with each other, but do interact. We then address the situation where the two risk factors are related/correlated with each other and interact.\n\nIt should be noted that given analyses are conducted using risk ratios, we are specifically referring to an interaction in the risk-additivity sense, meaning that the expected RR for a person experiencing risk factor A and risk factor B (assuming no interaction) is equal to the RR for A plus the RR for B minus 1. This contrasts with the multiplicative interaction as would be seen in the product term of a logistic regression, where the expected odds ratio for a person experiencing both A and B (assuming no interaction) is equal to the odds ratio of A multiplied by the odds ratio of B (see for a thorough discussion regarding these distinctions).\n\nFinally, we should once more note that while we have referred to indirect effects as a basis for establishing a sequential model, this procedure does not attempt to differentiate direct and indirect effects. The partitioned PAF for any variable contains both any direct effect that variable has on the outcome, as well as any indirect effect it may have through subsequent variables in the model, after removing the effect of any earlier variables in the model. This is in the same manner that the change in R2 for a variable entered in a multi-step hierarchical multiple regression reflects the net effect (both direct and indirect through any subsequent variables) of that variable, after removing the variance associated with any variables that had been entered on a previous step.\n\n### Computational illustrative examples\n\n#### No interaction among risk factors\n\nThe first example involves two risk factors, A and B, where A is believed to lead to increased rates of B, and both are believed to result in elevated rates of MMR. A and B are related but have no interaction effect. For example, smoking may lead to higher rates of low birthweight, and low birthweight may lead to higher rates of MMR; but the effect of being born low birthweight may be identical for all children, regardless of whether or not their mother smoked during pregnancy. Data for this example are presented in Figures 3 and 4.\n\n##### Step 1\n\nThe first step is to calculate an unadjusted PAF for risk factor A using the general PAF formula...", null, "(3)\n\nwhere Pe is the proportion of the population exposed to risk factor A. This is the total relationship between risk factor A and population rates of MMR. This includes both its direct effect that is unrelated to B, and the indirect effect it has through increased rates of B. For this example, the unadjusted PAF for A is equal to 26.67%.\n\n##### Step 2\n\nThe next series of steps adjust the rate of risk factor B so as to remove the effect of A upon B. To do this, one first calculates RRB|A, which is the risk ratio for B based upon exposure to A. In other words, RRB|A treats the presence of risk factor B as the \"outcome\", and estimates the increased risk of B among individuals with risk factor A, relative to the risk of B among individuals without A. 
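The rendered image for Equation 3 is also missing from this extract. The standard form consistent with its description (the general PAF formula, with Pe the proportion of the population exposed to A) is presumably Levin's formula, and the conditional risk ratio just described can be written out explicitly; both are reconstructions rather than the authors' exact notation:

$$\mathrm{PAF}_A \;=\; \frac{Pe\,(RR_A - 1)}{1 + Pe\,(RR_A - 1)} \qquad (3)$$

$$RR_{B|A} \;=\; \frac{P(B \mid A)}{P(B \mid \lnot A)}$$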
For this example, RRB|A is equal to 1.20, indicating the children who experience A are 1.20 times more likely to experience B, than are children who did not experience A.\n\n##### Step 3\n\nThe next step involves creating an adjusted frequency table by adjusting frequencies among those individuals with risk factor A in order to remove any effect that A may have had in terms of increasing rates of risk factor B. In essence, this adjusted table reflects the predicted frequencies given no association between A and B. RRB|A quantifies the relationship between A and B, in that it indicates the increased probability that a person will experience B if they have also experienced A. Therefore, multiplying the number of individuals with both A and B by the inverse of RRB|A, while keeping constant both (1) the total number of individuals experiencing A, and (2) the probability of having the outcome of interest, removes any effect of A on rates of B. Mathematically, this process is described in equations 4 through 7, below.\n\nReferring to Figure 4, the adjusted number of individuals experiencing both risk factor A and risk factor B (N'AB) is equal to", null, "(4)\n\nwhere NAB is the original number of individuals with both A and B. By removing the effect of A upon B, the total number of individuals in this group would change; however, the probability of the outcome among these individual (p'Case|AB) is not affected and continues to be equal to .030, so that...p' Case|AB = p Case|AB (5)\n\nwhere pCase|AB is the unadjusted probability of the outcome among individuals with both A and B. As reflected in Figure 4, adjusting for the relationship between the two risk factors, the expected number of individuals with both A and B would be equal to 416.67.\n\nThrough the adjustment process, the total number of individuals exposed to A does not change (i.e., continues to equal 1250), consequently any change in the number of individuals experiencing both risk factors must be offset by a corresponding change in the number of individuals who experience A but not B (N'A_). Mathematically, this is equal to...\n\nN' A_ = N AB + N A_ - N' AB (6)\n\nwhere NA_ is the number of individuals in the original data with A who do not have B. In this example, N'A_ equals 833.333. As before, the probability of developing the outcome among these individuals does not change...\n\np'Case|A_ = pCase|A_ (7)\n\nwhere p'Case|A_ is the adjusted probability of the outcome among this group, and pCase|A_ is the probability of the outcome among this group in the original data (i.e., among individuals experiencing A but not B, both the adjusted and unadjusted probability of having MMR is equal to .020). The effect of this adjustment is to make the risk ratio for B based upon A equal to one.\n\n##### Step 4\n\nThe adjusted frequency table is then aggregated based upon exposure/lack of exposure to risk factor B, allowing an adjusted PAF of B. However, Equation 3 is now inappropriate as it would not refer to the original number of cases. 
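Before Step 4 is completed in the passage below, here is a minimal sketch of the Step 2-3 adjustment just described. Because Figures 3 and 4 are not reproduced in this extract, the starting cell counts are inferred from the values reported in the text (1,250 individuals exposed to A, RRB|A = 1.20, adjusted counts of 416.67 and 833.33) and should be treated as assumptions.

```python
# Hedged sketch of Equations 4-7: removing the effect of A on the rate of B.
# Cell counts are inferred from the reported results, not taken from Figure 3/4.
N_AB, N_A_only = 500, 750                  # assumed A-and-B and A-only counts (sum = 1250)
p_case_AB, p_case_A_only = 0.030, 0.020    # outcome probabilities given in the text
RR_B_given_A = 1.20                        # value reported in the text

N_AB_adj = N_AB / RR_B_given_A             # Eq. 4: 500 / 1.2 = 416.67
N_A_only_adj = N_AB + N_A_only - N_AB_adj  # Eq. 6: total exposed to A stays 1250
# Eqs. 5 and 7: the outcome probabilities are carried over unchanged,
# so the adjusted expected case count among those exposed to A is:
cases_adj = N_AB_adj * p_case_AB + N_A_only_adj * p_case_A_only

print(round(N_AB_adj, 2), round(N_A_only_adj, 2), round(cases_adj, 2))
# 416.67 833.33 29.17
```

The next passage computes the adjusted PAF for B via Equation 8, whose rendered image is likewise missing; a form consistent with the symbol definitions given there (adjusted case count N'case, adjusted no-B outcome probability p'Case|Not B, total sample size N, and original case count Ncase) would be:

$$\mathrm{PAF}'_B \;=\; \frac{N'_{case} - N \cdot p'_{Case|\lnot B}}{N_{case}} \qquad (8)$$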
The adjusted PAF (PAF'B) is instead calculated based on an equivalent form of Equation 3 that has then been slightly modified in order to reflect the adjusted number of cases relative to the original, unadjusted number of cases...", null, "(8)\n\nSpecifically, Ncase is the total number of cases of the outcome in the original table, N'case is the total number of cases in the adjusted table, p'Case|Not B is the probability of the outcome among those in the adjusted table without risk factor B, and N is the total number of individuals in the sample. In effect, the numerator estimates the reduction in the number of cases based on the adjusted data, which would be observed if those experiencing B had the same probability of the outcome as did those who did not experience B; while the denominator is the number of cases observed in the unadjusted data. The result is the proportion of cases in the original, unadjusted data related to the adjusted data for B. By making this estimate relative to the unadjusted number of cases, it can be combined with the previously estimate PAF for A, which was also relative to the unadjusted number of cases.\n\nThis results in an adjusted PAF for B equal to 18.33%. Given the adjustment process removes any relationship between A and B, the unadjusted PAF for risk factor A (PAFA) and the adjusted PAF for risk factor B (PAF'B) sum to the overall, aggregate PAF for A and B combined (PAFAGG) if there is no interaction.\n\n### Interactive effects\n\n#### Risk factors have an interaction effect, but are not related/uncorrelated with each other\n\nThese next two examples illustrate how sequential partitioning is applied to data in which the risk factors have an interaction effect. We will first consider an interaction in a stratified analysis with risk factors that are unrelated/uncorrelated with each other. For example, there may be no association between child gender and maternal smoking (i.e. child gender has no indirect effect on the outcome through maternal smoking); however, being both male and having a mother who smoked may result in greater risk than would be expected given the individual risks of being male and maternal smoking, alone. As noted previously, a stratified analysis would be acceptable given the risk factors are not related to each other. Figure 5 contains hypothetical data for which an interaction exists between two unrelated risk factors, A and B, as predictors of MMR. Assuming no interaction, the expected risk ratio among individuals with both risk factors is equal to...\n\nE(RR AB ) = RR A_ + RR _B - 1 (9)\n\nwhere RRAB is the risk ratio for individuals with both risk factors, RRA_ is the risk ratio for individuals with A but not B, and RR_B is the risk ratio for individuals with B but not A. For the data in Figure 5, the expected RRAB with no interaction would be 4.00, translating to 20 cases of MMR. The observed number of cases of MMR for individuals in this group was 12. The difference, 8 cases, reflects the interaction effect (see for detail regarding estimation of interactive effects).\n\nIt was previously noted that with non-interacting risk factors, PAFAGG was equal to the sum of the unadjusted PAF for A and the adjusted PAF for B. However, in this interacting example, PAFA and the adjusted PAFB sum to 33.33%, which is 19.05% less than PAFAGG. 
This 19.05% corresponds to 8 of the 42 cases of MMR, which, given the risk factors are not related to each other, is also equal to the magnitude of the interaction effect obtained in the stratified analysis. Consequently, the interaction in a sequentially partitioned PAF (PAFInter) is equal to...\n\nPAFInter = PAFAGG - [PAFA + PAF'B] (10)\n\nAnd expressed as a number of cases...\n\nNInter = (PAFAGG - [PAFA + PAF'B]) × NCases (11)\n\nwhere NCases is the number of cases of the outcome (MMR) in the original sample.\n\n#### Risk factors interact and are related/correlated with each other\n\nThe final example considers the situation where there is an interaction involving risk factors that are related/correlated with each other. For example, (1) maternal smoking during pregnancy may lead to higher rates of babies born low birthweight, while low birthweight then leads to higher risk of MMR (i.e., smoking and birthweight are related) and (2) the effect of being born low birthweight on MMR may be different for those babies whose mothers also smoked, than is the effect of low birthweight for those babies whose mothers did not smoke (i.e., a smoking × birthweight interaction on MMR). Data for this example are presented in Figure 6. Applying the sequential partitioning strategy, PAFA is equal to 14.29%, the adjusted PAFB is equal to 29.12%, and the PAF for the interaction is equal to 6.59%. Applying Equation 11, the PAF for the interaction translates to 4.286 cases of the outcome.\n\nIn contrast, if we examine the same data using a stratified analysis, we would focus on the 500 individuals with both risk factors. Using Equation 9 and additional computation, the expected number of cases of MMR among individuals with both A and B is 18.33, while the observed number is 15, or a difference of 3.33. However, due to the fact that the risk factors are related/correlated, this value is biased and therefore does not equal the result obtained in the sequential approach. Nevertheless, the bias can be corrected by multiplying this result by the inverse of the RR for the occurrence of B given a person also experiences A (that is, dividing by RRB|A). In other words...\n\nNINT-SEQ = NINT-STRAT/RRB|A (12)\n\nwhere NINT-SEQ is the number of cases of the outcome that were associated with the interaction using the sequential partitioning approach, and NINT-STRAT is the biased estimate of the number of cases of an outcome associated with the interaction when one inappropriately applies the stratified method. Applying this correction results in a value of 4.286 cases, which is the same as obtained through the sequential partitioning approach. Note that this correction does not make the stratified approach appropriate when risk factors are related/correlated. It is used in this instance simply to highlight a limitation of stratification in such instances, and to illustrate how the sequential approach provides a logical and easily understandable correction for this issue.\n\n### Population shifts: sequential partitioning as a solution to an otherwise paradoxical effect\n\nFinally, it is worth noting how the sequential partitioning procedure addresses issues resulting from population shifts in the frequencies of risk factors. Wilcox described how in a study of the effects of birthweight and altitude upon infant mortality, it is possible for the birthweight frequency and mortality distribution to shift based upon altitude. 
In effect, the shapes of the curves do not change, but the optimal birthweight does.\n\nAs detailed by Wilcox , this can result in a number of seemingly paradoxical findings. For example, while increased altitude is associated with lower mean birthweight, and while lower birthweight is associated with increased mortality, altitude has no relationship with mortality. Furthermore, among infants born low birthweight, the mortality rate among high altitude births is less than the mortality rate among low altitude births. In contrast, among infants born with a high birthweight, the mortality rate among high altitude births is greater than that seen among low altitude births.\n\nTo illustrate how the sequential partitioning approach addresses these issues, an artificial data set was created reflecting a hypothetical relationship between altitude, birthweight, and mortality. Artificial data were used in order to ensure that the only effect associated with altitude was the shift in optimal values. This would allow the manner in which the partitioning approach addresses such shifts to be most evident. Specifically, two samples were created. The first, representing \"low altitude\" births had a mean birthweight of 3500 g, which was normally distributed with a standard deviation of 1000 g. A second sample, representing \"high altitude\" births had a mean birthweight of 3200 g and was also normally distributed with a standard deviation of 1000 g. Mortality rates per 1000 births was equal to", null, "(13)\n\nWhere Mort1000 is the mortality rate per 1000 births, WPOP is the mean birthweight in grams for a given population, and WX is a child's birthweight in grams. This resulted in a mortality rate of 1 per 1000 births at the mean population birthweight, and 518 per 1000 births two and a half standard deviations from the mean. Weights in each sample ranged from 2.5 standard deviations below the mean to 2.5 standard deviations above their corresponding mean, with each sample containing 1,000,000 births. Using a criterion for low birthweight as being less than 2500 grams, results are presented in Figure 7. For clarity, unless otherwise noted, all values referenced in the subsequent material are explicitly identified in Figure 7 with italics and bold blue font.\n\nAs expected, high altitude is related to low birthweight (RR = 1.54) and low birthweight is related to mortality (RR = 3.84); however, reflecting the paradox, altitude is unrelated to mortality (RR = 1.00). Furthermore, as expected, among high altitude births, the effect of low birthweight upon mortality (RR = .053/.017 = 3.17) is lower than the effect seen among low altitude births (RR = 4.85). For example, while not presented in Figure 7, the mortality rate among children born 1500 g was 70.1 per 1000 high altitude births and 148.4 per 1000 low altitude births. In contrast, the mortality rate among high altitude infants born high birthweight is in fact greater than the mortality rate among low altitude infants born high birthweight (90.0 per 1000 high altitude births > 5000 g, and only 42.5 per 1000 low altitude births > 5000 g). This exactly reflects the paradox resulting from population shifts that is noted by Wilcox. However, these seemingly paradoxical patterns disappear if one adjusts for altitude prior to examining the effect of birthweight .\n\nFortunately, the sequential partitioning technique incorporates just such a procedure, and furthermore, quantifies the degree to which a PAF may be impacted by this population shift. 
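Equation 13 above is another equation whose rendered image is missing from this extract. One functional form consistent with the two calibration points stated in the text (1 death per 1,000 births at the population mean birthweight, and roughly 518 per 1,000 at 2.5 standard deviations, i.e., 2,500 g, from that mean) is the following; it is an inference, not necessarily the authors' exact expression:

$$Mort_{1000} \;=\; e^{\,0.0025\,\lvert W_{POP} - W_X \rvert} \qquad (13)$$

since $e^{0.0025 \times 2500} \approx 518$ and $e^{0} = 1$.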
As presented in Figure 8, the overall PAF for altitude and birthweight is 36.29%. Based on the model that high altitude leads to lower birthweight, the sequential partitioning approach results in a PAF for altitude equal 0%. In other words, the conclusion would be that altitude has no relationship with mortality. This in fact corresponds to the data, and is also the conclusion Wilcox notes one should draw in situations where this type of population shift occurs .\n\nFurthermore, using the sequential partitioning approach, the adjusted PAF for birthweight is equal to 28.88%, with an adjusted risk ratio of 3.99. In contrast, the unadjusted PAF for birthweight (as reported in Figure 7) is 34.79%, with an unadjusted risk ratio of 3.84. The difference between the adjusted values calculated using the partitioning strategy and the raw unadjusted values is also exactly what one would expect. Specifically, given the downward shift observed in the distribution for the high altitude birth and mortality curves, a portion of the high altitude births will inappropriately be classified as low birthweight, when in fact within their population, they are not low birthweight. This results in an unadjusted risk ratio lower than would be expected were one to include only the \"true\" cases of low birthweight, relative to each population. As this suggests, the unadjusted risk ratio is slightly smaller than the adjusted RR.\n\nIn contrast, the improper inclusion of population-specific normal birthweight infants in the low birthweight group results in an exaggerated PAF for birthweight. As would therefore be expected, the adjusted PAF for birthweight is in fact somewhat smaller than the unadjusted PAF for birthweight. Finally, given the only difference between low and high altitude births is the shift in the distributions, the interaction effect (PAF = 7.41%) quantifies the degree to which the aggregate PAF (36.29%) capitalizes on the definition of \"low birthweight\" being misapplied to a population where this shift has occurred.\n\nIt is worth noting that very different conclusions would be drawn using alternative strategies in which all risk factors are simultaneously adjusted for all other effects. For example, the unadjusted odds ratio for birthweight is 3.84, while the altitude-adjusted Mantel-Haenszel odds ratio for birthweight is 4.05. When calculating an adjusted PAF, Equation Two incorporates the larger, adjusted odds ratio, but does not consider the impact of altitude upon rates of low birthweight, and so pe is unchanged. Consequently, a constant pe and a larger odds ratio will increase the value for the PAF and lead to the conclusion that birthweight has a larger effect than it actually does. Similarly, the unadjusted odds ratio for altitude is 1.00, while the birthweight-adjusted Mantel-Haenszel odds ratio for altitude is .86. A constant pe and a smaller odds ratio will result in a smaller, in this case negative, PAF. One would therefore conclude that altitude is a protective factor–again counter to the correct finding that altitude has no effect.\n\n## Summary\n\nIt should be noted that this procedure does not address or prove \"causality\". The issue of establishing and quantifying causality in epidemiological research is a topic of ongoing theoretical and philosophical debate . Instead, this is a descriptive procedure providing a measure of the relative population-level effects of multiple risk factors based on a specific model that may or may not be true. 
As noted previously, the results will differ depending upon the specific order of effects indicated by a model. The question remains whether a proposed model is or is not plausible. Nevertheless, the sequential partitioning strategy proposed here provides a valuable alternative means for examining the population-level impact of multiple risk factors unavailable through other techniques that provide less clear or less meaningful values. The procedure allows one to partition the overall effect of multiple risk factors based upon the sequence of effects that exists among the variables. In essence, it allows researchers to incorporate into their models the effect that one risk factor may have in terms of increasing rates of other risk factors in the model. In addition, if the effect of a risk factor on the outcome is believed to be confounded by other variables, those variables can be entered in the first step in order to adjust for those possible confounding effects. Consequently, the technique provides a potentially valuable tool for researchers interested in multiple risk factor models. A Microsoft Excel file [Partitioning PAF Excel Tool.xls] containing an annotated worksheet for the calculations and procedures reported here is available online through the journal website 1.\n\n## References\n\n1. Terry MB, Neugut AI, Shwartz S, Susser E: Risk factors for a causal intermediate and an endpoint: Reconciling differences. American Journal of Epidemiology 2000, 151:339–345.\n\n2. Yale ME, Mason CA, Scott KG: Direct and indirect effects of prenatal exposure to tobacco and low birthweight on the prevalence of childhood disability. Manuscript under review 2008.\n\n3. Bruzzi P, Green SB, Byar DP, Brinton LA, Schairer C: Estimating the Population Attributable Risk for Multiple Risk Factors Using Case-Control Data. American Journal of Epidemiology 1985, 122:904–915.\n\n4. Greenland S, Drescher K: Maximum likelihood estimation of the attributable fraction from logistic models. Biometrics 1993, 49:865–872.\n\n5. Benichou J: Methods of adjustment for estimating the attributable risk in case-control studies: A review. Statistics in Medicine 1991, 10:1753–1773.\n\n6. Rowe AK, Powell KE, Flanders WD: Why population attributable fractions can sum to more than one. American Journal of Preventive Medicine 2004,26(3):243–249.\n\n7. Robins JM, Greenland S: Identifiability and exchangeability for direct and indirect effects. Epidemiology 1992,3(2):143–155.\n\n8. Petersen ML, Sinisi SE, Laan MJ: Estimation of direct causal effects. Epidemiology 2006,17(3):276–284.\n\n9. Cole SR, Hernán MA: Fallibility in estimating direct effects. International Journal of Epidemiology 2002, 31:161–165.\n\n10. Kaufman JS, MacLehose RF, Kaufman S: A further critique of the analytic strategy of adjusting for covariates to identify biological mediation. Epidemiologic Perspectives and Innovations 2004.,1(4):\n\n11. Smith CA, Pratt M: Cardiovascular disease. Chronic disease epidemiology and control (Edited by: Brownson RC, Remington PL, Davis JR). Washington, D.C.: American Public Health Association 1993, 83–107.\n\n12. Coughlin SS, Nass CC, Pickle LW, Trock B, Bunin G: Regression methods for estimating attributable risk in population-based case-control studies: A comparison of additive and multiplicative models. American Journal of Epidemiology 1991, 133:305–313.\n\n13. Cohen J, Cohen P, West SG, Aiken L: Applied Multiple Regression/Correlation Analysis for the Behvaioral Sciences. 
3 Edition Hillsdale, NJ: Lawrence Erlbaum Associates 2002.\n\n14. Keith TZ: Multiple Regression and Beyond. Boston, MA: Pearson Education 2006.\n\n15. Eide GE, Gefeller O: Sequential and average attributable fractions as aids in the selection of preventive strategies. Journal of Clinical Epidemiology 1995,48(5):645–655.\n\n16. Wilcox AJ: On the importance-and the unimportance-of birthweight. International Journal of Epidemiology 2001, 30:1233–1241.\n\n17. Greenland S, Rothman KJ: Concepts of interactions. Modern Epidemiology 2 Edition (Edited by: Rothman K, Greenland S). Philadelphia, PA: Lippincott-Raven Publishers 1998, 329–342.\n\n18. Maldonado G, Greenland S: Estimating causal effects. International Journal of Epidemiology 2002,31(2):422–429.\n\n19. Dawid A: Commentary: Counterfactuals: help or hindrance? International Journal of Epidemiology 2002,31(2):429–430.\n\n20. Kaufman JS, Kaufman S: Commentary: Estimating causal effects. International Journal of Epidemiology 2002,31(2):431–432.\n\n21. Elwert F, Winship C: Commentary: Population versus individual level causal effects. International Journal of Epidemiology 2002,31(2):432–434.\n\n22. Shafer G: Commentary: Estimating causal effects. International Journal of Epidemiology 2002,31(2):434–435.\n\n23. Maldonado G, Greenland S: Response: Defining and estimating causal effects. International Journal of Epidemiology 2002,31(2):435–438.\n\n## Acknowledgements\n\nThis work was supported in part by a grant #BM019 from the Biomedical Research Program of the Florida Dept. of Health (Keith G. Scott, Ph.D., Principal Investigator).\n\n## Author information\n\nAuthors\n\n### Corresponding author\n\nCorrespondence to Craig A Mason.\n\n### Competing interests\n\nThe authors declare that they have no competing interests.\n\n### Authors' contributions\n\nCM and ST contributed equally to the conceptualization and preparation of the manuscript. Both authors have read and approved the final manuscript.\n\n## Electronic supplementary material\n\n### 1742-5573-5-5-S1.xls\n\nAdditional file 1: Partitioning PAF Excel Tool. A Microsoft Excel sheet providing annotated partitioned PAF estimates and corresponding calculations. (XLS 50 KB)\n\n## Rights and permissions\n\nReprints and Permissions\n\nMason, C.A., Tu, S. Partitioning the population attributable fraction for a sequential chain of effects. Epidemiol Perspect Innov 5, 5 (2008). https://doi.org/10.1186/1742-5573-5-5", null, "" ]
https://en.formulasearchengine.com/wiki/Quotient_algebra
[ "# Quotient algebra\n\nIn mathematics, a quotient algebra (where algebra is used in the sense of universal algebra), also called a factor algebra, is obtained by partitioning the elements of an algebra into equivalence classes given by a congruence relation, that is an equivalence relation that is additionally compatible with all the operations of the algebra, in the formal sense described below.\n\n## Compatible relation\n\nLet A be a set (of the elements of an algebra ${\\mathcal {A}}$), and let E be an equivalence relation on the set A. The relation E is said to be compatible with (or have the substitution property with respect to) an n-ary operation f if, for all $a_{1},a_{2},\\ldots ,a_{n},b_{1},b_{2},\\ldots ,b_{n}\\in A$, whenever $(a_{1},b_{1})\\in E,(a_{2},b_{2})\\in E,\\ldots ,(a_{n},b_{n})\\in E$, then $(f(a_{1},a_{2},\\ldots ,a_{n}),f(b_{1},b_{2},\\ldots ,b_{n}))\\in E$. An equivalence relation compatible with all the operations of an algebra is called a congruence.\n\n## Congruence lattice\n\nFor every algebra ${\\mathcal {A}}$ on the set A, the identity relation on A, and $A\\times A$ are trivial congruences. An algebra with no other congruences is called simple.\n\nOn the other hand, congruences are not closed under union. However, we can define the closure of any binary relation E, with respect to a fixed algebra ${\\mathcal {A}}$, such that it is a congruence, in the following way: $\\langle E\\rangle _{\\mathcal {A}}=\\bigcap \\{F\\in \\mathrm {Con} ({\\mathcal {A}})|E\\subseteq F\\}$. Note that the (congruence) closure of a binary relation depends on the operations in ${\\mathcal {A}}$, not just on the carrier set. Now define $\\vee :\\mathrm {Con} ({\\mathcal {A}})\\times \\mathrm {Con} ({\\mathcal {A}})\\to \\mathrm {Con} ({\\mathcal {A}})$ as $E_{1}\\vee E_{2}=\\langle E_{1}\\cup E_{2}\\rangle _{\\mathcal {A}}$." ]
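A familiar concrete instance, added here purely as an illustration (it is not part of the original page): congruence modulo n on the integers with addition and multiplication. If

$$a_1 \equiv b_1 \pmod{n} \quad\text{and}\quad a_2 \equiv b_2 \pmod{n},$$

then

$$a_1 + a_2 \equiv b_1 + b_2 \pmod{n} \quad\text{and}\quad a_1 a_2 \equiv b_1 b_2 \pmod{n},$$

so the relation $E = \{(a,b) : n \mid a-b\}$ is compatible with both operations of $(\mathbb{Z}, +, \cdot)$, i.e., it is a congruence, and the corresponding quotient algebra is $\mathbb{Z}/n\mathbb{Z}$.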
https://slideplayer.com/slide/4457840/
[ "", null, "# M ACROECONOMICS C H A P T E R © 2007 Worth Publishers, all rights reserved SIXTH EDITION PowerPoint ® Slides by Ron Cronovich N. G REGORY M ANKIW Money.\n\n## Presentation on theme: \"M ACROECONOMICS C H A P T E R © 2007 Worth Publishers, all rights reserved SIXTH EDITION PowerPoint ® Slides by Ron Cronovich N. G REGORY M ANKIW Money.\"— Presentation transcript:\n\nM ACROECONOMICS C H A P T E R © 2007 Worth Publishers, all rights reserved SIXTH EDITION PowerPoint ® Slides by Ron Cronovich N. G REGORY M ANKIW Money and Inflation 4\n\nslide 1 CHAPTER 4 Money and Inflation In this chapter, you will learn…  The classical theory of inflation  causes  effects  social costs  “Classical” – assumes prices are flexible & markets clear  Applies to the long run\n\nU.S. inflation and its trend, 1960-2006 slide 2 0% 3% 6% 9% 12% 15% 1960196519701975198019851990199520002005 long-run trend % change in CPI from 12 months earlier\n\nslide 3 CHAPTER 4 Money and Inflation The connection between money and prices  Inflation rate = the percentage increase in the average level of prices.  Price = amount of money required to buy a good.  Because prices are defined in terms of money, we need to consider the nature of money, the supply of money, and how it is controlled.\n\nslide 4 CHAPTER 4 Money and Inflation Money: Definition Money is the stock of assets that can be readily used to make transactions.\n\nslide 5 CHAPTER 4 Money and Inflation Money: Functions  medium of exchange we use it to buy stuff  store of value transfers purchasing power from the present to the future  unit of account the common unit by which everyone measures prices and values\n\nslide 6 CHAPTER 4 Money and Inflation Money: Types 1. fiat money  has no intrinsic value  example: the paper currency we use 2. commodity money  has intrinsic value  examples: gold coins, cigarettes in P.O.W. camps\n\nslide 7 CHAPTER 4 Money and Inflation Discussion Question Which of these are money? a. Currency b. Checks c. Deposits in checking accounts (“demand deposits”) d. Credit cards e. Certificates of deposit (“time deposits”)\n\nslide 8 CHAPTER 4 Money and Inflation The money supply and monetary policy definitions  The money supply is the quantity of money available in the economy.  Monetary policy is the control over the money supply.\n\nslide 9 CHAPTER 4 Money and Inflation The central bank  Monetary policy is conducted by a country’s central bank.  In the U.S., the central bank is called the Federal Reserve (“the Fed”). The Federal Reserve Building Washington, DC\n\nslide 10 CHAPTER 4 Money and Inflation Money supply measures, April 2006 \\$6799 M1 + small time deposits, savings deposits, money market mutual funds, money market deposit accounts M2 \\$1391 C + demand deposits, travelers’ checks, other checkable deposits M1 \\$739CurrencyC amount (\\$ billions) assets includedsymbol\n\nslide 11 CHAPTER 4 Money and Inflation The Quantity Theory of Money  A simple theory linking the inflation rate to the growth rate of the money supply.  Begins with the concept of velocity…\n\nslide 12 CHAPTER 4 Money and Inflation Velocity  basic concept: the rate at which money circulates  definition: the number of times the average dollar bill changes hands in a given time period  example: In 2007,  \\$500 billion in transactions  money supply = \\$100 billion  The average dollar is used in five transactions in 2007  So, velocity = 5\n\nslide 13 CHAPTER 4 Money and Inflation Velocity, cont. 
 This suggests the following definition: where V = velocity T = value of all transactions M = money supply\n\nslide 14 CHAPTER 4 Money and Inflation Velocity, cont.  Use nominal GDP as a proxy for total transactions. Then, where P = price of output (GDP deflator) Y = quantity of output (real GDP) P  Y = value of output (nominal GDP)\n\nslide 15 CHAPTER 4 Money and Inflation The quantity equation  The quantity equation M  V = P  Y follows from the preceding definition of velocity.  It is an identity: it holds by definition of the variables.\n\nslide 16 CHAPTER 4 Money and Inflation Money demand and the quantity equation  M/P = real money balances, the purchasing power of the money supply.  A simple money demand function: (M/P ) d = k Y where k = how much money people wish to hold for each dollar of income. (k is exogenous)\n\nslide 17 CHAPTER 4 Money and Inflation Money demand and the quantity equation  money demand: (M/P ) d = k Y  quantity equation: M  V = P  Y  The connection between them: k = 1/V  When people hold lots of money relative to their incomes (k is high), money changes hands infrequently (V is low).\n\nslide 18 CHAPTER 4 Money and Inflation Back to the quantity theory of money  starts with quantity equation  assumes V is constant & exogenous:  With this assumption, the quantity equation can be written as\n\nslide 19 CHAPTER 4 Money and Inflation The quantity theory of money, cont. How the price level is determined:  With V constant, the money supply determines nominal GDP (P  Y ).  Real GDP is determined by the economy’s supplies of K and L and the production function (Chap 3).  The price level is P = (nominal GDP)/(real GDP).\n\nslide 20 CHAPTER 4 Money and Inflation The quantity theory of money, cont.  Recall from Chapter 2: The growth rate of a product equals the sum of the growth rates.  The quantity equation in growth rates:\n\nslide 21 CHAPTER 4 Money and Inflation The quantity theory of money, cont.  (Greek letter “pi”) denotes the inflation rate: The result from the preceding slide was: Solve this result for  to get\n\nslide 22 CHAPTER 4 Money and Inflation The quantity theory of money, cont.  Normal economic growth requires a certain amount of money supply growth to facilitate the growth in transactions.  Money growth in excess of this amount leads to inflation.\n\nslide 23 CHAPTER 4 Money and Inflation The quantity theory of money, cont.  Y/Y depends on growth in the factors of production and on technological progress (all of which we take as given, for now). Hence, the Quantity Theory predicts a one-for-one relation between changes in the money growth rate and changes in the inflation rate.\n\nslide 24 CHAPTER 4 Money and Inflation Confronting the quantity theory with data The quantity theory of money implies 1.countries with higher money growth rates should have higher inflation rates. 2.the long-run trend behavior of a country’s inflation should be similar to the long-run trend in the country’s money growth rate. Are the data consistent with these implications?\n\nslide 25 CHAPTER 4 Money and Inflation International data on inflation and money growth Singapore U.S. Switzerland Argentina Indonesia Turkey Belarus Ecuador\n\nU.S. 
inflation and money growth, 1960-2006 slide 26 [chart: M2 growth rate and inflation rate, 1960-2005] Over the long run, the inflation and money growth rates move together, as the quantity theory predicts.\n\nslide 27 CHAPTER 4 Money and Inflation Seigniorage  To spend more without raising taxes or selling bonds, the govt can print money.  The “revenue” raised from printing money is called seigniorage (pronounced SEEN-your-idge).  The inflation tax: Printing money to raise revenue causes inflation. Inflation is like a tax on people who hold money.\n\nslide 28 CHAPTER 4 Money and Inflation Inflation and interest rates  Nominal interest rate, i not adjusted for inflation  Real interest rate, r adjusted for inflation: r = i - π\n\nslide 29 CHAPTER 4 Money and Inflation The Fisher effect  The Fisher equation: i = r + π  Chap 3: S = I determines r.  Hence, an increase in π causes an equal increase in i.  This one-for-one relationship is called the Fisher effect.\n\nslide 30 CHAPTER 4 Money and Inflation Inflation and nominal interest rates in the U.S., 1955-2006 [chart: inflation rate and nominal interest rate, percent per year, 1955-2005]\n\nslide 31 CHAPTER 4 Money and Inflation Inflation and nominal interest rates across countries [scatter plot including Switzerland, Germany, Brazil, Romania, Zimbabwe, Bulgaria, U.S., Israel]\n\nslide 32 CHAPTER 4 Money and Inflation Exercise: Suppose V is constant, M is growing 5% per year, Y is growing 2% per year, and r = 4. a.Solve for i. b.If the Fed increases the money growth rate by 2 percentage points per year, find Δi. c.Suppose the growth rate of Y falls to 1% per year.  What will happen to π?  What must the Fed do if it wishes to keep π constant?\n\nslide 33 CHAPTER 4 Money and Inflation Answers: a.First, find π = 5 - 2 = 3. Then, find i = r + π = 4 + 3 = 7. b. Δi = 2, same as the increase in the money growth rate. c.If the Fed does nothing, Δπ = 1. To prevent inflation from rising, Fed must reduce the money growth rate by 1 percentage point per year. V is constant, M grows 5% per year, Y grows 2% per year, r = 4.\n\nslide 34 CHAPTER 4 Money and Inflation Two real interest rates  π = actual inflation rate (not known until after it has occurred)  π^e = expected inflation rate  i – π^e = ex ante real interest rate: the real interest rate people expect at the time they buy a bond or take out a loan  i – π = ex post real interest rate: the real interest rate actually realized\n\nslide 35 CHAPTER 4 Money and Inflation Money demand and the nominal interest rate  In the quantity theory of money, the demand for real money balances depends only on real income Y.  Another determinant of money demand: the nominal interest rate, i.  the opportunity cost of holding money (instead of bonds or other interest-earning assets).  Hence, an increase in i reduces money demand.\n\nslide 36 CHAPTER 4 Money and Inflation The money demand function (M/P)^d = L(i, Y): (M/P)^d = real money demand, depends  negatively on i i is the opp. cost of holding money  positively on Y higher Y means more spending and so, need more money (“L” is used for the money demand function because money is the most liquid asset.)\n\nslide 37 CHAPTER 4 Money and Inflation The money demand function When people are deciding whether to hold money or bonds, they don’t know what inflation will turn out to be. 
Hence, the nominal interest rate relevant for money demand is r + π^e.\n\nslide 38 CHAPTER 4 Money and Inflation Equilibrium M/P = L(r + π^e, Y): the supply of real money balances equals real money demand\n\nslide 39 CHAPTER 4 Money and Inflation What determines what (variable, how determined in the long run): M exogenous (the Fed); r adjusts to make S = I; Y determined by the supplies of K and L and the production function (Chap 3); P adjusts to make M/P = L(i, Y)\n\nslide 40 CHAPTER 4 Money and Inflation How P responds to ΔM  For given values of r, Y, and π^e, a change in M causes P to change by the same percentage – just like in the quantity theory of money.\n\nslide 41 CHAPTER 4 Money and Inflation What about expected inflation?  Over the long run, people don’t consistently over- or under-forecast inflation, so π^e = π on average.  In the short run, π^e may change when people get new information.  EX: Fed announces it will increase M next year. People will expect next year’s P to be higher, so π^e rises.  This affects P now, even though M hasn’t changed yet….\n\nslide 42 CHAPTER 4 Money and Inflation How P responds to Δπ^e  For given values of r, Y, and M, a rise in π^e raises i (the Fisher effect), which lowers real money demand and so raises P.\n\nslide 43 CHAPTER 4 Money and Inflation Discussion question Why is inflation bad?  What costs does inflation impose on society? List all the ones you can think of.  Focus on the long run.  Think like an economist.\n\nslide 44 CHAPTER 4 Money and Inflation A common misperception  Common misperception: inflation reduces real wages  This is true only in the short run, when nominal wages are fixed by contracts.  (Chap. 3) In the long run, the real wage is determined by labor supply and the marginal product of labor, not the price level or inflation rate.  Consider the data…\n\nslide 45 CHAPTER 4 Money and Inflation Average hourly earnings and the CPI, 1964-2006 [chart: hourly wage in current dollars and in 2006 dollars (left scale), CPI with 1982-84 = 100 (right scale), 1964-2006]\n\nslide 46 CHAPTER 4 Money and Inflation The classical view of inflation  The classical view: A change in the price level is merely a change in the units of measurement. So why, then, is inflation a social problem?\n\nslide 47 CHAPTER 4 Money and Inflation The social costs of inflation …fall into two categories: 1. costs when inflation is expected 2. costs when inflation is different than people had expected\n\nslide 48 CHAPTER 4 Money and Inflation The costs of expected inflation: 1. Shoeleather cost  def: the costs and inconveniences of reducing money balances to avoid the inflation tax.  Higher π means higher i, which means lower real money balances.  Remember: In long run, inflation does not affect real income or real spending.  So, same monthly spending but lower average money holdings means more frequent trips to the bank to withdraw smaller amounts of cash.\n\nslide 49 CHAPTER 4 Money and Inflation The costs of expected inflation: 2. Menu costs  def: The costs of changing prices.  Examples:  cost of printing new menus  cost of printing & mailing new catalogs  The higher is inflation, the more frequently firms must change their prices and incur these costs.\n\nslide 50 CHAPTER 4 Money and Inflation The costs of expected inflation: 3. Relative price distortions  Firms facing menu costs change prices infrequently.  Example: A firm issues new catalog each January. As the general price level rises throughout the year, the firm’s relative price will fall. 
Different firms change their prices at different times, leading to relative price distortions… …causing microeconomic inefficiencies in the allocation of resources.\n\nslide 51 CHAPTER 4 Money and Inflation The costs of expected inflation: 4. Unfair tax treatment Some taxes are not adjusted to account for inflation, such as the capital gains tax. Example:  Jan 1: you buy \$10,000 worth of IBM stock  Dec 31: you sell the stock for \$11,000, so your nominal capital gain is \$1000 (10%).  Suppose π = 10% during the year. Your real capital gain is \$0.  But the govt requires you to pay taxes on your \$1000 nominal gain!!\n\nslide 52 CHAPTER 4 Money and Inflation The costs of expected inflation: 5. General inconvenience  Inflation makes it harder to compare nominal values from different time periods.  This complicates long-range financial planning.\n\nslide 53 CHAPTER 4 Money and Inflation Additional cost of unexpected inflation: Arbitrary redistribution of purchasing power  Many long-term contracts not indexed, but based on π^e.  If π turns out different from π^e, then some gain at others’ expense. Example: borrowers & lenders  If π > π^e, then (i - π) < (i - π^e) and purchasing power is transferred from lenders to borrowers.  If π < π^e, then purchasing power is transferred from borrowers to lenders.\n\nslide 54 CHAPTER 4 Money and Inflation Additional cost of high inflation: Increased uncertainty  When inflation is high, it’s more variable and unpredictable: π turns out different from π^e more often, and the differences tend to be larger (though not systematically positive or negative)  Arbitrary redistributions of wealth become more likely.  This creates higher uncertainty, making risk averse people worse off.\n\nslide 55 CHAPTER 4 Money and Inflation One benefit of inflation  Nominal wages are rarely reduced, even when the equilibrium real wage falls. This hinders labor market clearing.  Inflation allows the real wages to reach equilibrium levels without nominal wage cuts.  Therefore, moderate inflation improves the functioning of labor markets.\n\nslide 56 CHAPTER 4 Money and Inflation Hyperinflation  def: π ≥ 50% per month  All the costs of moderate inflation described above become HUGE under hyperinflation.  Money ceases to function as a store of value, and may not serve its other functions (unit of account, medium of exchange).  People may conduct transactions with barter or a stable foreign currency.\n\nslide 57 CHAPTER 4 Money and Inflation What causes hyperinflation?  Hyperinflation is caused by excessive money supply growth:  When the central bank prints money, the price level rises.  If it prints money rapidly enough, the result is hyperinflation.\n\nslide 58 CHAPTER 4 Money and Inflation A few examples of hyperinflation (money growth %, inflation %): Israel, 1983-85: 295, 275; Poland, 1989-90: 344, 400; Brazil, 1987-94: 1350, 1323; Argentina, 1988-90: 1264, 1912; Peru, 1988-90: 2974, 3849; Nicaragua, 1987-91: 4991, 5261; Bolivia, 1984-85: 4208, 6515\n\nslide 59 CHAPTER 4 Money and Inflation Why governments create hyperinflation  When a government cannot raise taxes or sell bonds,  it must finance spending increases by printing money.  In theory, the solution to hyperinflation is simple: stop printing money. 
 In the real world, this requires drastic and painful fiscal restraint.\n\nslide 60 CHAPTER 4 Money and Inflation The Classical Dichotomy Real variables: Measured in physical units – quantities and relative prices, for example:  quantity of output produced  real wage: output earned per hour of work  real interest rate: output earned in the future by lending one unit of output today Nominal variables: Measured in money units, e.g.,  nominal wage: Dollars per hour of work.  nominal interest rate: Dollars earned in future by lending one dollar today.  the price level: The amount of dollars needed to buy a representative basket of goods.\n\nslide 61 CHAPTER 4 Money and Inflation The Classical Dichotomy  Note: Real variables were explained in Chap 3, nominal ones in Chapter 4.  Classical dichotomy: the theoretical separation of real and nominal variables in the classical model, which implies nominal variables do not affect real variables.  Neutrality of money: Changes in the money supply do not affect real variables. In the real world, money is approximately neutral in the long run.\n\nChapter Summary Money  the stock of assets used for transactions  serves as a medium of exchange, store of value, and unit of account.  Commodity money has intrinsic value, fiat money does not.  Central bank controls the money supply. Quantity theory of money assumes velocity is stable, concludes that the money growth rate determines the inflation rate. CHAPTER 4 Money and Inflation slide 62\n\nChapter Summary Nominal interest rate  equals real interest rate + inflation rate  the opp. cost of holding money  Fisher effect: Nominal interest rate moves one-for-one w/ expected inflation. Money demand  depends only on income in the Quantity Theory  also depends on the nominal interest rate  if so, then changes in expected inflation affect the current price level. CHAPTER 4 Money and Inflation slide 63\n\nChapter Summary Costs of inflation  Expected inflation shoeleather costs, menu costs, tax & relative price distortions, inconvenience of correcting figures for inflation  Unexpected inflation all of the above plus arbitrary redistributions of wealth between debtors and creditors CHAPTER 4 Money and Inflation slide 64\n\nChapter Summary Hyperinflation  caused by rapid money supply growth when money printed to finance govt budget deficits  stopping it requires fiscal reforms to eliminate govt’s need for printing money CHAPTER 4 Money and Inflation slide 65\n\nChapter Summary Classical dichotomy  In classical theory, money is neutral--does not affect real variables.  So, we can study how real variables are determined w/o reference to nominal ones.  Then, money market eq’m determines price level and all nominal variables.  Most economists believe the economy works this way in the long run. CHAPTER 4 Money and Inflation slide 66" ]
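The arithmetic in the exercise on slides 32-33 follows directly from the quantity-theory growth equation and the Fisher equation. A small Python sketch added here for illustration (the variable names and the use of Python are not part of the original slides):

```python
# Quantity theory with V constant: inflation = money growth - real GDP growth
# Fisher equation: nominal rate i = real rate r + inflation
money_growth, y_growth, r = 5.0, 2.0, 4.0        # percent per year, from the exercise

pi = money_growth - y_growth                     # a. inflation = 3
i = r + pi                                       # a. nominal rate = 7
delta_i = (money_growth + 2 - y_growth) - pi     # b. i rises by 2 when money growth rises by 2
delta_pi = (money_growth - 1.0) - pi             # c. inflation rises by 1 if Y growth falls to 1%
print(pi, i, delta_i, delta_pi)                  # 3.0 7.0 2.0 1.0
```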
[ null, "https://slideplayer.com/static/blue_design/img/slide-loader4.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.86054486,"math_prob":0.8847042,"size":19035,"snap":"2021-43-2021-49","text_gpt3_token_len":4356,"char_repetition_ratio":0.2232673,"word_repetition_ratio":0.074200556,"special_character_ratio":0.24239558,"punctuation_ratio":0.10234244,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9611566,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-24T21:39:37Z\",\"WARC-Record-ID\":\"<urn:uuid:c01d5283-23ea-45db-9823-fc965de720e4>\",\"Content-Length\":\"262834\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5c24b7f3-c269-4239-bb74-8d35c0bf7973>\",\"WARC-Concurrent-To\":\"<urn:uuid:bebfb24e-2731-4a73-9cb9-0129b107975a>\",\"WARC-IP-Address\":\"138.201.58.10\",\"WARC-Target-URI\":\"https://slideplayer.com/slide/4457840/\",\"WARC-Payload-Digest\":\"sha1:7RAPO7W2MSJZEZW4YWYVYXILPCIA7N6E\",\"WARC-Block-Digest\":\"sha1:XG2RZVEKSYST54GHIV3ZHITM3X52XISO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323587606.8_warc_CC-MAIN-20211024204628-20211024234628-00437.warc.gz\"}"}
https://bitbucket.org/snippets/XOR_Hex/kByG75
[ "# XOR HexFlare-On 2017 Challenge 3 Solution\n\n ``` 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106``` ```#!/usr/bin/env python2 ''' Author: @XOR_Hex #flareon4 Challenge 3 ' Three parts ' 0) Dump the ASM from IDA, Start @ 0x40107C and the length is 0x79 (see box @ loc_401029) ' bytes = idaapi.get_many_bytes(0x40107C, 0x79) ' with open('/home/firefly/buffer.asm', 'wb') as f: ' f.write(bytes) ' 1) Figure out what byte the code needs to be xor'ed with. Last word must match 0xfb5eL (loop at 401039, comparison check: 40105E) ' Code to be xor'ed and add'ed is located at 40107C and the length is 0x79 (see box @ loc_401029) ' To determine the input byte the modified (xord + add) bytes have to be passed through sub_4011E6 before the comparison check can be done ' 2) After the check passes the state memory from angr needs to be interpreted as code, so use capstone ' The op_str from capstone contains what looks like ASCII characters... ' 3) Take the capstone output and interpret the ASCII codes to get the flag ''' import angr import sys p = angr.Project('greek_to_me.exe', load_options={'auto_load_libs': False}) f2 = None # Interate through all of the possible byte values to find the correct \"user\" input to de-mask the flag for buf in xrange(0x100): print(\"Trying buf = {0}\".format(buf)) # Variable to store the bits written to disk using IDA asm = None # Store the output from the first de-obfuscation routine b2 = [] # Read in bytes written to file from IDA with open('greek_to_me_buffer.asm', 'rb') as f: asm = f.read() # Re-implement loc_401039 dl = buf for byte in asm: bl = ord(byte) bl = bl ^ dl bl = bl & 0xff bl = bl + 0x22 bl = bl & 0xff b2.append(bl) # Set up angr to \"run\" sub_4011E6 s = p.factory.blank_state(addr=0x4011E6) s.mem[s.regs.esp+4:].dword = 1 # Angr memory location to hold the xor'ed and add'ed bytes s.mem[s.regs.esp+8:].dword = 0x79 # Length of ASM # Copy bytes output from loc_401039 into address 0x1 so Angr can run it asm = ''.join(map(lambda x: chr(x), b2)) s.memory.store(1, s.se.BVV(int(asm.encode('hex'), 16), 0x79 * 8 )) # Create a simulation manager... 
#import pdb; pdb.set_trace() simgr = p.factory.simulation_manager(s) # Tell Angr where to go, though there is only one way through this function, # we just need to stop after ax is set simgr.explore(find=0x401268) # Once ax is set, check to see if the value in ax matches the comparison value for found in simgr.found: #import pdb; pdb.set_trace() print(' ax = %s' % hex(found.state.solver.eval(found.state.regs.ax))) # Comparison check if hex(found.state.solver.eval(found.state.regs.ax)) == '0xfb5eL': # Upon success, dump the asm code = (\"%x\" % found.state.solver.eval_upto(found.state.memory.load(1, 0x79), 1)).decode('hex') print('\\n Winner is: {0}\\n\\n'.format(buf)) print(' %s' % code) bl = None dl = None flag = [] # Using capstone, interpret the ASM from capstone import * md = Cs(CS_ARCH_X86, CS_MODE_32) for i in md.disasm(code, 0x1000): flag_char = None # The if statements do the work of interpreting the ASCII codes to their value counterpart if i.op_str.split(',').startswith(\"byte ptr\"): flag_char = chr(long(i.op_str.split(','), 16)) if i.op_str.split(',').startswith('bl'): bl = chr(long(i.op_str.split(','), 16)) if i.op_str.split(',').startswith('dl'): dl = chr(long(i.op_str.split(','), 16)) if i.op_str.split(',').strip() == 'dl': flag_char = dl if i.op_str.split(',').strip() == 'bl': flag_char = bl if (flag_char): flag.append(flag_char.strip()) print(\" 0x%x\\t%s\\t%s\\t%s\" %(i.address, i.mnemonic, i.op_str, flag_char)) print('\\n\\n') print(''.join(flag)) sys.exit(0) ```" ]
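One detail of the snippet above is easy to misread: the scraped text appears to have dropped bracketed list indices (such as [0] and [1]) from lines like `i.op_str.split(',').startswith("byte ptr")`, which as written would fail on a Python list. Purely as an illustration of the idea, and not as the author's exact code, this is how a capstone `op_str` splits into operands so that an immediate byte can be turned back into a flag character (the sample instruction text is made up):

```python
# Hypothetical capstone operand text, e.g. from the instruction "mov byte ptr [ebx], 0x66"
op_str = "byte ptr [ebx], 0x66"
dst, src = op_str.split(",")        # ['byte ptr [ebx]', ' 0x66']
if dst.startswith("byte ptr"):
    print(chr(int(src, 16)))        # 'f' - one recovered flag character
```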
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5174543,"math_prob":0.7843283,"size":3908,"snap":"2020-24-2020-29","text_gpt3_token_len":1324,"char_repetition_ratio":0.09810451,"word_repetition_ratio":0.022012578,"special_character_ratio":0.39943704,"punctuation_ratio":0.16035634,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9622846,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-13T05:59:21Z\",\"WARC-Record-ID\":\"<urn:uuid:c7ce6e22-f511-4798-aa33-da18eb5b5315>\",\"Content-Length\":\"68284\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:454f12a0-9c20-4b3b-a323-0e2c5b6e97ea>\",\"WARC-Concurrent-To\":\"<urn:uuid:9606bc36-a597-4ad8-acf9-c296f6cd84e8>\",\"WARC-IP-Address\":\"18.205.93.1\",\"WARC-Target-URI\":\"https://bitbucket.org/snippets/XOR_Hex/kByG75\",\"WARC-Payload-Digest\":\"sha1:OES27VVALXEPOMVQMTJAUGVOEYXJR5OD\",\"WARC-Block-Digest\":\"sha1:J7HDZLAAVDSBK2LRZ4AAQM3OGFP364K6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593657142589.93_warc_CC-MAIN-20200713033803-20200713063803-00522.warc.gz\"}"}
https://www.tutorialspoint.com/java-program-to-add-long-integers-and-check-for-overflow
[ "# Java Program to add long integers and check for overflow\n\nTo check for Long overflow, we need to check the Long.MAX_VALUE with the added long result. Here, Long.MAX_VALUE is the maximum value of Long type in Java.\n\nLet us see an example wherein long integers are added and if the result is more than the Long.MAX_VALUE, then an exception is thrown.\n\nThe following is an example showing how to check for Long overflow.\n\n## Example\n\nLive Demo\n\npublic class Demo {\npublic static void main(String[] args) {\nlong val1 = 80989;\nlong val2 = 87567;\nSystem.out.println(\"Value1: \"+val1);\nSystem.out.println(\"Value2: \"+val2);\nlong sum = val1 + val2;\nif (sum > Long.MAX_VALUE) {\nthrow new ArithmeticException(\"Overflow!\");\n}\n}\n}\n\n## Output\n\nValue1: 80989\nValue2: 87567\nAddition Result: 168556\n\nIn the above example, we have taken the following two integers.\n\nlong val1 = 80989; long val2 = 87567;\n\nlong sum = val1 + val2;\nIf (sum> Long.MAX_VALUE) {\n}" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6702604,"math_prob":0.9616517,"size":2433,"snap":"2023-14-2023-23","text_gpt3_token_len":582,"char_repetition_ratio":0.18237957,"word_repetition_ratio":0.1393643,"special_character_ratio":0.25565147,"punctuation_ratio":0.104477614,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99339575,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-11T00:14:31Z\",\"WARC-Record-ID\":\"<urn:uuid:5ff081e6-346f-4f11-afc1-6cbb73ac8b25>\",\"Content-Length\":\"41005\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1b175ad7-c643-489a-bf38-5865146dc73d>\",\"WARC-Concurrent-To\":\"<urn:uuid:d0797364-2639-419e-a065-972ed92f852b>\",\"WARC-IP-Address\":\"192.229.210.176\",\"WARC-Target-URI\":\"https://www.tutorialspoint.com/java-program-to-add-long-integers-and-check-for-overflow\",\"WARC-Payload-Digest\":\"sha1:AAFS4NBYCOIHT4RXDLDOKIB6ZGFQKPRE\",\"WARC-Block-Digest\":\"sha1:U3NRYZCS3ASMZZ7Z3V3JZ2JLE7WAFCUM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224646652.16_warc_CC-MAIN-20230610233020-20230611023020-00588.warc.gz\"}"}
https://roots-and-radicals.com/bounding-roots-of-polynomials.html
[ "Algebra Tutorials!\nThursday 21st of October", null, "Home", null, "Square Roots and Radical Expressions", null, "Solving Radical Equations", null, "Simplifying Radical Expressions", null, "Irrational Numbers in General and Square Roots in Particular", null, "Roots of Polynomials", null, "Simplifying Radical Expressions", null, "Exponents and Radicals", null, "Products and Quotients Involving Radicals", null, "Roots of Quadratic Equations", null, "Radical Expressions", null, "Radicals and Rational Exponents", null, "Find Square Roots and Compare Real Numbers", null, "Radicals", null, "Radicals and Rational Exponents", null, "Theorems on the Roots of Polynomial Equations", null, "SYNTHETIC DIVISION AND BOUNDS ON ROOTS", null, "Simplifying Radical Expressions", null, "Exponents and Radicals", null, "Properties of Exponents and Square Roots", null, "Solving Radical Equations", null, "Rational Exponents and Radicals,Rationalizing Denominators", null, "Rational Exponents and Radicals,Rationalizing Denominators", null, "Quadratic Roots", null, "Exponents and Roots", null, "Multiplying Radical Expressions", null, "Exponents and Radicals", null, "Solving Radical Equations", null, "Solving Quadratic Equations by Factoring and Extracting Roots", null, "Newton's Method for Finding Roots", null, "Roots of Quadratic Equations Studio", null, "Roots, Radicals, and Root Functions", null, "Review division factoring and Root Finding", null, "Radicals", null, "Simplifying Radical Expressions", null, "Multiplying and Simplifying Radical Expressions", null, "LIKE RADICALS", null, "Multiplication and Division of Radicals", null, "Radical Equations", null, "BOUNDING ROOTS OF POLYNOMIALS\nTry the Free Math Solver or Scroll down to Tutorials!\n\n Depdendent Variable\n\n Number of equations to solve: 23456789\n Equ. #1:\n Equ. #2:\n\n Equ. #3:\n\n Equ. #4:\n\n Equ. #5:\n\n Equ. #6:\n\n Equ. #7:\n\n Equ. #8:\n\n Equ. #9:\n\n Solve for:\n\n Dependent Variable\n\n Number of inequalities to solve: 23456789\n Ineq. #1:\n Ineq. #2:\n\n Ineq. #3:\n\n Ineq. #4:\n\n Ineq. #5:\n\n Ineq. #6:\n\n Ineq. #7:\n\n Ineq. #8:\n\n Ineq. #9:\n\n Solve for:\n\n Please use this form if you would like to have this math solver on your website, free of charge. Name: Email: Your Website: Msg:\n\n# Roots of Polynomials\n\nSection I: Linear Polynomials\n\nConsider the polynomial equation ax + b = 0, symbolically solve for the value of x.", null, "Of course the value of x is called the root of the linear polynomial ax + b. The TI-86\nhas a built in polynomial solver package accessed via 2nd POLY on the key board.\nTo find the roots of a polynomial using MAPLE we can use the command\n\nsolve(our polynomial=0);\n\nFind the roots of the polynomials in each of three ways: first \"by hand\", then via the\nTI-86, and lastly using MAPLE. Record the answers in order left to right.\n\n P&B SGC CAS Comments", null, "Try #5 using the command solve(a*x + b=0, x); in MAPLE.\nWhy do you think the information \",x\" must be put into the MAPLE command in\n#5 when we didn't need it in the others?\n\nWhat conclusion do you make with regard to using the TI-86 directly to solve linear\npolynomials?\n\nIn using MAPLE in #5 we have encountered our first instance of using the amazing\nfeature that distinguishes a Computer Algebra System (CAS) from the previous\ngeneration of mathematics software packages: it can do Symbolic Manipulation!\n\nNext let's solve some quadratics; that is, find the roots of polynomials of the form\nax2+bx+c. 
You have done this once, but do it again. The quadratic formula tells us\nthe two roots of ax^2+bx+c are\n\n x = (-b + sqrt(b^2 - 4ac))/(2a) and x = (-b - sqrt(b^2 - 4ac))/(2a)\n\nAs above, find the roots of the following \"by hand\", with the TI-86,(it should work\nhere) and with MAPLE.(Can you guess which problems will require insertion of ,x\ninto the solve command, and which will not?)\n\n P&B SGC CAS Comments", null, "What's special about #8 and #10?\n\nNotice the difference in the way the TI-86 expresses complex numbers and the way\nMAPLE expresses them.\n\n P&B SGC CAS Comments", null, "Now see what MAPLE gives when you ask it to solve for the roots of\n16. ax^2 + bx + c.\n\n X = (-b + sqrt(b^2 - 4ac))/(2a) & X = (-b - sqrt(b^2 - 4ac))/(2a)\n\nThis should look familiar to you, what is it?\n\nExercise: In each of the above, 6-15, try to use the evalf command in MAPLE to\nobtain the TI-86 approximate answer. You will first need to define something to be\nthe roots, eg,\n\nr:=expression;\n\ndefines r to be whatever is the expression. So\n\nr:=solve(2*x^2-3*x+2=0,x);\n\nrepresents the two roots of 2x^2-3x+2=0.\nSince r represents two roots evalf(r[1]); will evaluate the floating point approximation\nto the first root and evalf(r[2]); will approximate the second. So try this for each of\nthe above. See if you can guess in advance which work and which don't.\n\n MAPLE", null, "Approximation TI ans evalf", null, "Next we will play with two MAPLE commands that further illustrate the nice features\nof symbolic manipulation:\n\nexpand(algebraic expression); and factor(algebraic expression);\nIn each of the following first do the required calculation \"by hand\", then let MAPLE\ndo it.\n\n by hand MAPLE", null, "16. expand((x+3)*(x-4));", null, "", null, "17. factor(%);", null, "", null, "18. factor(2*x^2 + 3*x - 2);", null, "", null, "19. expand((a*x+1)*(b*x+1));", null, "", null, "20. factor(x^2 + b*x + a*x + a*b);", null, "", null, "21. expand((x+I)*(x-I));", null, "", null, "22. factor(%);", null, "You should now be able to \"build\" quadratics with any roots you like using expand\nand factor. Make some interesting ones." ]
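Since the worksheet is built around comparing by-hand roots with calculator and CAS output, a short Python check of the same quadratic-formula computation may be useful. This is an added illustration, not part of the original worksheet, and MAPLE is not needed for it:

```python
import cmath

def quadratic_roots(a, b, c):
    """Both roots of a*x^2 + b*x + c = 0 via the quadratic formula (handles complex roots)."""
    disc = cmath.sqrt(b * b - 4 * a * c)
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

print(quadratic_roots(1, -1, -12))   # x^2 - x - 12 = (x - 4)(x + 3): roots 4 and -3
print(quadratic_roots(2, -3, 2))     # 2x^2 - 3x + 2 = 0 has a complex conjugate pair of roots
```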
[ null, "https://roots-and-radicals.com/images/left_bullet.jpg", null, "https://roots-and-radicals.com/images/left_bullet.jpg", null, "https://roots-and-radicals.com/images/left_bullet.jpg", null, "https://roots-and-radicals.com/images/left_bullet.jpg", null, "https://roots-and-radicals.com/images/left_bullet.jpg", null, "https://roots-and-radicals.com/images/left_bullet.jpg", null, "https://roots-and-radicals.com/images/left_bullet.jpg", null, "https://roots-and-radicals.com/images/left_bullet.jpg", null, "https://roots-and-radicals.com/images/left_bullet.jpg", null, "https://roots-and-radicals.com/images/left_bullet.jpg", null, "https://roots-and-radicals.com/images/left_bullet.jpg", null, "https://roots-and-radicals.com/images/left_bullet.jpg", null, "https://roots-and-radicals.com/images/left_bullet.jpg", null, "https://roots-and-radicals.com/images/left_bullet.jpg", null, "https://roots-and-radicals.com/images/left_bullet.jpg", null, "https://roots-and-radicals.com/images/left_bullet.jpg", null, "https://roots-and-radicals.com/images/left_bullet.jpg", null, "https://roots-and-radicals.com/images/left_bullet.jpg", null, "https://roots-and-radicals.com/images/left_bullet.jpg", null, "https://roots-and-radicals.com/images/left_bullet.jpg", null, "https://roots-and-radicals.com/images/left_bullet.jpg", null, "https://roots-and-radicals.com/images/left_bullet.jpg", null, "https://roots-and-radicals.com/images/left_bullet.jpg", null, "https://roots-and-radicals.com/images/left_bullet.jpg", null, "https://roots-and-radicals.com/images/left_bullet.jpg", null, "https://roots-and-radicals.com/images/left_bullet.jpg", null, "https://roots-and-radicals.com/images/left_bullet.jpg", null, "https://roots-and-radicals.com/images/left_bullet.jpg", null, "https://roots-and-radicals.com/images/left_bullet.jpg", null, "https://roots-and-radicals.com/images/left_bullet.jpg", null, "https://roots-and-radicals.com/images/left_bullet.jpg", null, "https://roots-and-radicals.com/images/left_bullet.jpg", null, "https://roots-and-radicals.com/images/left_bullet.jpg", null, "https://roots-and-radicals.com/images/left_bullet.jpg", null, "https://roots-and-radicals.com/images/left_bullet.jpg", null, "https://roots-and-radicals.com/images/left_bullet.jpg", null, "https://roots-and-radicals.com/images/left_bullet.jpg", null, "https://roots-and-radicals.com/images/left_bullet.jpg", null, "https://roots-and-radicals.com/images/left_bullet.jpg", null, "https://roots-and-radicals.com/images/left_bullet.jpg", null, "https://roots-and-radicals.com/articles_imgs/5540/roots_82.gif", null, "https://roots-and-radicals.com/articles_imgs/5540/roots_83.jpg", null, "https://roots-and-radicals.com/articles_imgs/5540/roots_84.gif", null, "https://roots-and-radicals.com/articles_imgs/5540/roots_85.jpg", null, "https://roots-and-radicals.com/articles_imgs/5540/roots_86.gif", null, "https://roots-and-radicals.com/articles_imgs/5540/roots_87.gif", null, "https://roots-and-radicals.com/articles_imgs/5540/roots_88.gif", null, "https://roots-and-radicals.com/articles_imgs/5540/roots_88.gif", null, "https://roots-and-radicals.com/articles_imgs/5540/roots_88.gif", null, "https://roots-and-radicals.com/articles_imgs/5540/roots_88.gif", null, "https://roots-and-radicals.com/articles_imgs/5540/roots_88.gif", null, "https://roots-and-radicals.com/articles_imgs/5540/roots_88.gif", null, "https://roots-and-radicals.com/articles_imgs/5540/roots_88.gif", null, "https://roots-and-radicals.com/articles_imgs/5540/roots_88.gif", null, 
"https://roots-and-radicals.com/articles_imgs/5540/roots_88.gif", null, "https://roots-and-radicals.com/articles_imgs/5540/roots_88.gif", null, "https://roots-and-radicals.com/articles_imgs/5540/roots_88.gif", null, "https://roots-and-radicals.com/articles_imgs/5540/roots_88.gif", null, "https://roots-and-radicals.com/articles_imgs/5540/roots_88.gif", null, "https://roots-and-radicals.com/articles_imgs/5540/roots_88.gif", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.82350653,"math_prob":0.9698935,"size":3051,"snap":"2021-43-2021-49","text_gpt3_token_len":854,"char_repetition_ratio":0.12372826,"word_repetition_ratio":0.011342155,"special_character_ratio":0.2749918,"punctuation_ratio":0.12164297,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99489105,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,7,null,7,null,7,null,7,null,7,null,7,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-21T17:57:21Z\",\"WARC-Record-ID\":\"<urn:uuid:cbf2942c-1ae7-49fe-862d-ad12484452f3>\",\"Content-Length\":\"93471\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ec5ef441-31cb-4034-8aa0-a8f004d84c5a>\",\"WARC-Concurrent-To\":\"<urn:uuid:c1e13206-c3b5-480c-bc21-8190f5b50c22>\",\"WARC-IP-Address\":\"54.197.228.212\",\"WARC-Target-URI\":\"https://roots-and-radicals.com/bounding-roots-of-polynomials.html\",\"WARC-Payload-Digest\":\"sha1:P6TOFWVMS7XKYWB2DQ3B6VRSK554MCDU\",\"WARC-Block-Digest\":\"sha1:JMCA727FWJFN5ZFITWCJZZPWCUDFVW7X\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585439.59_warc_CC-MAIN-20211021164535-20211021194535-00236.warc.gz\"}"}
http://itwebtutorials.mga.edu/js/chp5/nested-loops.aspx
[ "Web Development Tutorials\n\nPrint this Section\n\nNesting Loops in Loops\n\nLoops can be nested inside other loops. This is necessary, for example, to produce a table of information. A table is two dimensional, with rows and columns. Therefore, an outer loop is needed to increment down the rows of the table and an inner loop is needed to increment across the columns of the table.\n\nFigure 5-8. Creating a table with nested loops.\n\nfunction makeTable() {\nvar tableString = \"<table border='1' cellpadding='3'>\";\nfor (i=1; i<=4; i++) {\ntableString += \"<tr>\";\nfor (j=1; j<=5; j++) {\ntableString += \"<td>\" + i + \".\" + j + \"</td>\";\n}\ntableString += \"</tr>\";\n}\ntableString += \"</table>\";\ndocument.getElementById(\"tableOut\").innerHTML = tableString;\n}\n\nfunction init() {\nvar makeTableBtn = document.getElementById(\"makeTableBtn\");\nmakeTableBtn.onclick=makeTable;\n}\n\n<input type=\"button\" value=\"Make Table\" onclick=\"makeTableBtn\"/>\n<div id=\"tableOut\"/></div>\n\nListing 5-11. Code to create a table with nested loops.\n\nThe script creates an HTML table, including the necessary <table>, <tr>, and <td> tags. These tags are constructed within a string variable named tableString. Each portion of the table definition is concatenated to this string as the script executes.\n\nThe script initially places the opening <table> tag into variable tableString.\n\ntableString = \"<table border='1' cellpadding='3'>\";\n\nAn outer for loop increments index i from 1 through 4, formatting the opening and closing <tr> tag enclosing each of the four rows.\n\nfor (i=1; i<=4; i++) {\ntableString += \"<tr>\";\nfor (j=1; j<=5; j++) {\ntableString += \"<td>\" + i + \".\" + j + \"</td>\";\n}\ntableString += \"</tr>\";\n}\n\nFor each row, an inner for loop increments index j from 1 through 5, formatting the <td> tag for each of five cells.\n\nfor (i=1; i<=4; i++) {\ntableString += \"<tr>\";\nfor (j=1; j<=5; j++) {\ntableString += \"<td>\" + i + \".\" + j + \"</td>\";\n}\ntableString += \"</tr>\";\n}\n\nEach table cell contains a number indicating its row and column. This value is given by concatenating the i index of the row with the j index of the column. Finally, the closing </table> tag is appended to the end of the string, and tableString is written to the innerHTML property of the output division.\n\ntableString += \"</table>\";\ndocument.getElementById(\"tableOut\").innerHTML = tableString;\n\nIt is helpful to visualize how loop variables are incremented for nested loops. Below is a script that shows the values of i and j as a pair of nested loops executes.\n\nfunction seeLoops() {\nfor (i=1; i<=5; i++) {\ndisplay i;\nfor (j=1; j<=5; j++) {\ndisplay j;\n}\n}\n}\n\nfunction init() {\nvar seeLoopsBtn = document.getElementById(\"seeLoopsBtn\");\nseeLoopsBtn.onclick=seeLoops;\n}" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5912995,"math_prob":0.9118454,"size":2890,"snap":"2019-43-2019-47","text_gpt3_token_len":764,"char_repetition_ratio":0.15661816,"word_repetition_ratio":0.18736383,"special_character_ratio":0.30415225,"punctuation_ratio":0.16970803,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97875696,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-21T04:41:58Z\",\"WARC-Record-ID\":\"<urn:uuid:409ab763-fb80-493c-8d29-b9b86c31db9a>\",\"Content-Length\":\"16919\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:546dba81-7eab-46bf-ae34-d1b66d0a2611>\",\"WARC-Concurrent-To\":\"<urn:uuid:581a48ef-232c-4332-950d-7392961913d5>\",\"WARC-IP-Address\":\"168.16.222.3\",\"WARC-Target-URI\":\"http://itwebtutorials.mga.edu/js/chp5/nested-loops.aspx\",\"WARC-Payload-Digest\":\"sha1:I23EPQHGGG5XNZV6YMZI2PY7OG7MZCLZ\",\"WARC-Block-Digest\":\"sha1:JVARCOR4DD5INJRUZS3LR5AQVMIPB7BQ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570987756350.80_warc_CC-MAIN-20191021043233-20191021070733-00479.warc.gz\"}"}
https://domain.glass/search/?q=625+as+a+fraction
[ "## \"625 as a fraction\"\n\nRequest time (0.057 seconds) [cached] - Completion Score 180000\n625 as a fraction of an inch-2.05    625 as a fraction in simplest form-4.31    625 as a fraction on tape measure-4.86    625 as a fraction c.. f-4.97    625 as a fraction step by step-5\n0.625 as a fraction    1.625 as a fraction    2.625 as a fraction    3.625 as a fraction    5.625 as a fraction    4.625 as a fraction\n6 results & 6 related queries\nRelated Search: 0.625 as a fraction\n\nRelated Search: 1.625 as a fraction\n\nRelated Search: 2.625 as a fraction\n\nRelated Search: 3.625 as a fraction\n\nRelated Search: 5.625 as a fraction\n\nRelated Search: 4.625 as a fraction\n\n##### Search Elsewhere:", null, "", null, "" ]
[ null, "https://rtb.adx1.com/pixels/pixel.js", null, "https://serve.popads.net/cpixel.php", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.940506,"math_prob":0.97745854,"size":712,"snap":"2022-40-2023-06","text_gpt3_token_len":223,"char_repetition_ratio":0.39689267,"word_repetition_ratio":0.03937008,"special_character_ratio":0.36516854,"punctuation_ratio":0.15116279,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95958465,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-04T22:46:38Z\",\"WARC-Record-ID\":\"<urn:uuid:95bd3d3a-4a51-45e5-b776-83b9327c8b81>\",\"Content-Length\":\"10329\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:e73fd20f-3ae1-4d30-84ba-f7d0d992e6d4>\",\"WARC-Concurrent-To\":\"<urn:uuid:f6f7566a-fe4c-4d30-9121-4dc6fe83046f>\",\"WARC-IP-Address\":\"172.66.43.99\",\"WARC-Target-URI\":\"https://domain.glass/search/?q=625+as+a+fraction\",\"WARC-Payload-Digest\":\"sha1:TENPP3TDNWNJC3566HJ4KKG2RO3MRXJG\",\"WARC-Block-Digest\":\"sha1:RSRKZ24CE7O3AVITSCOOUARSQ2TPKLAZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337529.69_warc_CC-MAIN-20221004215917-20221005005917-00130.warc.gz\"}"}
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=18&t=31055
[ "## DeBroglie\n\nJamie Reniva 1J\nPosts: 14\nJoined: Wed Nov 15, 2017 3:00 am\n\n### DeBroglie\n\nHow do we know when to use the DeBroglie equation? I'm still a little bit confused where to get \"p\" in the equation.\n\nIsabelle De Rego 1A\nPosts: 40\nJoined: Fri Apr 06, 2018 11:02 am\n\n### Re: DeBroglie\n\nYou only use De Broglie when you are finding the wavelength of something with a mass( wavelength = h/p) . p=mv, where m is mass and v is velocity. So, they would probably give you the two of the three variables and then you would solve for the third one.\n\n004985802\nPosts: 27\nJoined: Fri Feb 02, 2018 3:00 am\n\n### Re: DeBroglie\n\nyou would use this to measure the wavelength of a moving object that has mass in order to determine whether it has wavelike properties\n\nNick Griffin 1K\nPosts: 3\nJoined: Fri Apr 06, 2018 11:03 am\n\n### Re: DeBroglie\n\nDeBroglie determines the wavelength/wavelike properties of things with mass. So you don't use it for light, just things with mass (and velocity since you need momentum which is mass x velocity). You would probably use it in a problem to find wavelength, mass, or velocity (h is a constant)\n\nAnnaYan_1l\nPosts: 96\nJoined: Fri Apr 06, 2018 11:05 am\nBeen upvoted: 1 time\n\n### Re: DeBroglie\n\nI agree with the people above! p (which means momentum) is = to (mass)(velocity) which is usually given to you in some shape or form in the question. It is for a moving object (not light)\n\nShimran Kumar 1C\nPosts: 30\nJoined: Fri Apr 06, 2018 11:03 am\n\n### Re: DeBroglie\n\nLight does have a momentum. The photons however do not have a mass. So I suppose if the problem gave you a value for the momentum (p), you could use the de Broglie wavelength as normal. Otherwise, this equation doesn't work for light.\n\nEndri Dis 1J\nPosts: 33\nJoined: Fri Apr 06, 2018 11:02 am\n\n### Re: DeBroglie\n\nCan someone give an example of a practice problem that uses the DeBroglie Equation?\n\nNicole Shak 1L\nPosts: 35\nJoined: Wed Nov 22, 2017 3:03 am\n\n### Re: DeBroglie\n\nIn class the example was to find the wavelength of a 0.1 kg baseball traveling at a velocity of 35 m/s. You would use the DeBroglie Equation, wavelength=h/m*v to solve this." ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9362401,"math_prob":0.6944102,"size":2102,"snap":"2020-34-2020-40","text_gpt3_token_len":616,"char_repetition_ratio":0.15252621,"word_repetition_ratio":0.07672634,"special_character_ratio":0.28591818,"punctuation_ratio":0.14222223,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9881662,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-13T03:24:44Z\",\"WARC-Record-ID\":\"<urn:uuid:17f77106-40bf-40c2-b5d1-c5c77c6d2818>\",\"Content-Length\":\"65866\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8d8a889b-4afc-487c-9070-5f8ef65b9caf>\",\"WARC-Concurrent-To\":\"<urn:uuid:5e6bff86-8690-46cc-9ee9-bcd792b896d7>\",\"WARC-IP-Address\":\"169.232.134.130\",\"WARC-Target-URI\":\"https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=18&t=31055\",\"WARC-Payload-Digest\":\"sha1:QLNUVSFZHGGKJTYVVLDHTRR7IDZ2ASG3\",\"WARC-Block-Digest\":\"sha1:5AN7LXZ44LV4PVW7RKXKCBDFU4HGM3BW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439738950.61_warc_CC-MAIN-20200813014639-20200813044639-00354.warc.gz\"}"}
https://www.convertunits.com/from/square+centimeter/to/square+kilometer
[ "## ››Convert square centimetre to square kilometre\n\n square centimeter square kilometer\n\nHow many square centimeter in 1 square kilometer? The answer is 10000000000.\nWe assume you are converting between square centimetre and square kilometre.\nYou can view more details on each measurement unit:\nsquare centimeter or square kilometer\nThe SI derived unit for area is the square meter.\n1 square meter is equal to 10000 square centimeter, or 1.0E-6 square kilometer.\nNote that rounding errors may occur, so always check the results.\nUse this page to learn how to convert between square centimeters and square kilometers.\nType in your own numbers in the form to convert the units!\n\n## ››Want other units?\n\nYou can do the reverse unit conversion from square kilometer to square centimeter, or enter any two units below:\n\n## Enter two units to convert\n\n From: To:\n\n## ››Metric conversions and more\n\nConvertUnits.com provides an online conversion calculator for all types of measurement units. You can find metric conversion tables for SI units, as well as English units, currency, and other data. Type in unit symbols, abbreviations, or full names for units of length, area, mass, pressure, and other types. Examples include mm, inch, 100 kg, US fluid ounce, 6'3\", 10 stone 4, cubic cm, metres squared, grams, moles, feet per second, and many more!" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.80089474,"math_prob":0.9953194,"size":1477,"snap":"2021-43-2021-49","text_gpt3_token_len":339,"char_repetition_ratio":0.27698573,"word_repetition_ratio":0.0,"special_character_ratio":0.20649967,"punctuation_ratio":0.14079422,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.987008,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-03T17:08:47Z\",\"WARC-Record-ID\":\"<urn:uuid:59595128-47e2-4dbd-9461-88033f8aa2ff>\",\"Content-Length\":\"50690\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1404ed7d-f547-48ed-b4b7-14f0f47f6bb2>\",\"WARC-Concurrent-To\":\"<urn:uuid:6a6a04ee-2df3-4e29-a85d-d3504cc72260>\",\"WARC-IP-Address\":\"34.195.3.75\",\"WARC-Target-URI\":\"https://www.convertunits.com/from/square+centimeter/to/square+kilometer\",\"WARC-Payload-Digest\":\"sha1:QDGK5OHA3K2OEAOBA7ZM7OT6ZTJ5ZMU6\",\"WARC-Block-Digest\":\"sha1:J6JRJ7NEX6FJV2ZIVURYJBGU6TY3PMUI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964362891.54_warc_CC-MAIN-20211203151849-20211203181849-00265.warc.gz\"}"}
https://electronics.stackexchange.com/questions/320706/ripple-current-in-boost-converter-for-nixies
[ "# Ripple Current in Boost Converter for Nixies\n\nI'm trying to design a power supply for 6 IN-8 Nixie Tubes (driven individually, not multiplexed). Each tube has a current draw of 2.5-4.5mA, so they pull around 30mA(max) total. The input voltages is 12V and the output voltage should be 180V. Switching frequency should be around 100kHz (10uS for each cycle).\n\nUsing $Volt = (Henries*Amps)/time$ from this question, I found the inductance to be around 400uH ($(12V*0.00001s)/0.03A$). However, I calculated the inductor ripple current from this IC boost converter pdf (equation 2) and got a ripple current value of 285mA ($(12V*0.95)/(100000Hz*0.0004H)$).\n\nThese calculations lead me to a series of questions:\n\n1. As per their datasheet, Nixie tubes can't exceed 4.5mA. Any more current will affect their lifespan. However, my understanding of loads is that they will only draw as much current as they need. When drawing current from the power supply, will the Nixies draw only their needed current? Or their needed current $\\pm$ the inductor ripple current?\n2. Is it possible to create more ripple current than the output current? Assuming its not:\n1. What is the correct way to calculate ripple current?\n2. Is it typical for ripple current to be larger (for an average load of 30mA) in a boost converter operating in CCM or DCM?\n\nSchematic edit (where the switch should function like an ideal mosfet):", null, "• add a schematic for your boost converter and any filter following Jul 27, 2017 at 6:54\n\nThis is a broad-brush explanation how to get to the inductance required in a discontinuous boost converter.\n\nTry and think of things in terms of power (as per my answer to your linked question). Your output power is 180 volts x 30 mA = 5.4 watts so, if you transfer energy 100,000 times per second then the energy transfer in one cycle is 54 uJ.\n\nKnowing that you need to store energy in the first half of the switching cycle and release it in the second half of the cycle you can use the inductor energy formula: -\n\nW (energy) = $\\dfrac{LI^2}{2}$ therefore I = $\\sqrt{\\dfrac{2\\times 54\\times 10^{-6}}{L}}$.\n\nAlso knowing that V = $L\\dfrac{di}{dt}$ we can put numbers of di and dt.\n\n• di is the change in current needed to charge energy into the coil (as per I from the energy equation above)\n• dt can be half a switching cycle (5 us)\n• V is the 12 volts input supply\n\nThis boils down to doing a bit of algebra to find L: -\n\nL = $\\dfrac{(12 \\times 5\\times 10^{-6})^2}{2\\times 54\\times 10^{-6}}$ = 33 uH.\n\nI found the inductance to be around 400uH\n\nYou have to use the correct approach.\n\nIf you work out the current charged into and discharged from the inductor using my approximate approach the peak current in the inductor is 1.818 amps and this is also the peak to peak ripple current because of discontinuous operation.\n\nRipple current is what the inductor sees - it doesn't actually flow into the load because most of it is soaked-up in the output capacitor. 
The load will draw what current it needs from the 180 volts but the trick is keeping the output voltage stable because: -\n\nA booster is a power regulator - it regulates power not voltage\n\n\nTo regulate voltage you have to have a control loop around the basic power regulator to keep the mark-space ratio correct so that voltage is regulated by controlling power.\n\nIs it typical for ripple current to be larger (for an average load of 30mA) in a boost converter operating in CCM or DCM?\n\nMy simplified example above is for DCM and this will have a peak-to-peak ripple current that bears little relationship with load current.\n\nIn CCM, the inductor is always conducting and this means the peak-to-peak ripple current can be much smaller; the energy in the inductor isn't depleted to zero therefore the energy transfer per cycle is based around: -\n\nW (energy) = $\\dfrac{L.I^2_{max}}{2}-\\dfrac{L.I^2_{min}}{2}$\n\nIn other words, if Imax is high then Imin need only be a little bit smaller to get the same energy per cycle (compared to DCM).\n\nI decided to do a quick simulation to see how things panned out against my formulas: -", null, "For 100 kHz switching at 50:50 duty I got a stable peak voltage of 186 volts with a peak inductor current of 1.9 amps with a load of 6 kohm. The output capacitor is only 330 nF just so that the output would charge up quicker in the sim. Inductor is as calculated - 33 uH.\n\nRemember - this is a fixed load scenario - to make a booster with a regulated output you need an overall control system that tweaks the duty cycle as output load and input voltage varies. There is no such thing as a working simplified boost circuit with good voltage regulation.\n\nRipple current in a boost converter refers to the peak to peak AC current in the INDUCTOR. It has only a little influence on the LOAD current in a properly designed boost converter.\n\nThe inductor current builds up during the switch's ON time. During this time the output capacitors hold the voltage up.\n\nDuring the OFF time the inductor supplies current to the output capacitors replenishing the charge lost during the off time.\n\nThe output voltage should only change a little during this time. This change is called output voltage ripple. Depending on your design specs it could be 10s or 100s of mV.\n\nThe load will draw what it needs from the output, which is relatively constant. If the output voltage ripple is a problem then you can add capacitance or change the switching frequency or inductor value to get smaller ripple.\n\n• so the inductor ripple current only has significance with the power supply components and not the load itself? Jul 27, 2017 at 7:09\n• @TranslucentDragon correct. Consider amounts of energy. If the ripple in the inductor was the same as in the tubes, and the tubes have 180V, yet the voltage across the inductor when the booster's switch is on is only 12V, what magical well does all that extra energy in the 180V rail come from? Obviously the current through the inductor at 12V has to be much higher than the output current at 180V. Jul 27, 2017 at 7:19" ]
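The discontinuous-mode sizing in the first answer is easy to re-run; a small Python sketch, as an illustration using the question's numbers and the answer's 50:50 duty assumption, not a substitute for a proper voltage-regulating control loop:

```python
v_in, v_out, i_out, f_sw = 12.0, 180.0, 0.030, 100e3   # values from the question
p_out = v_out * i_out                    # 5.4 W delivered to the tubes
w_cycle = p_out / f_sw                   # 54 uJ must be transferred each switching cycle
t_on = 0.5 / f_sw                        # 5 us charge time at 50:50 duty
L = (v_in * t_on) ** 2 / (2 * w_cycle)   # ~33 uH, from W = L*I^2/2 and V = L*di/dt
i_peak = v_in * t_on / L                 # ~1.8 A peak (and peak-to-peak) inductor current
print(L * 1e6, i_peak)                   # 33.3 uH, 1.8 A
```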
[ null, "https://i.stack.imgur.com/tKqP7.png", null, "https://i.stack.imgur.com/k9obt.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9166236,"math_prob":0.97731847,"size":3126,"snap":"2023-14-2023-23","text_gpt3_token_len":777,"char_repetition_ratio":0.12588085,"word_repetition_ratio":0.0,"special_character_ratio":0.256238,"punctuation_ratio":0.05457464,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99039495,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-03T11:32:41Z\",\"WARC-Record-ID\":\"<urn:uuid:966db9db-20d3-4a93-ad8e-c1fb6dbf6101>\",\"Content-Length\":\"171493\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:68bad62b-0491-4bca-8a64-219375cfbbc2>\",\"WARC-Concurrent-To\":\"<urn:uuid:fc6b5ee3-60dd-463b-853a-0866e8b5d5d1>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://electronics.stackexchange.com/questions/320706/ripple-current-in-boost-converter-for-nixies\",\"WARC-Payload-Digest\":\"sha1:WFCSKS2VOK6BMCKWHZDKPR6VG732ZMSU\",\"WARC-Block-Digest\":\"sha1:2WPLG5CJAKYSHWDKAW65ORMMG5JWKJM4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224649193.79_warc_CC-MAIN-20230603101032-20230603131032-00021.warc.gz\"}"}
https://www.boost.org/doc/libs/1_75_0/libs/icl/doc/html/boost_icl/interface/function_synopsis.html
[ "#", null, "Boost C++ Libraries\n\n...one of the most highly regarded and expertly designed C++ library projects in the world.\n\nThis is the documentation for an old version of Boost. Click here to view this page for the latest version.\n\n### Function Synopsis\n\nIn this section a single matrix is given, that shows all functions with shared names and identical or analogous semantics and their polymorphic overloads across the class templates of the icl. In order to achieve a concise representation, a series of placeholders are used throughout the function matrix.\n\nThe placeholder's purpose is to express the polymorphic usage of the functions. The first column of the function matrix contains the signatures of the functions. Within these signatures `T` denotes a container type and `J` and `P` polymorphic argument and result types.\n\nWithin the body of the matrix, sets of boldface placeholders denote the sets of possible instantiations for a polymorphic placeholder `P`. For instance e i S denotes that for the argument type `P`, an element e, an interval i or an interval_set S can be instantiated.\n\nIf the polymorphism can not be described in this way, only the number of overloaded implementations for the function of that row is shown.\n\nPlaceholder\n\nArgument types\n\nDescription\n\n`T`\n\na container or interval type\n\n`P`\n\npolymorphic container argument type\n\n`J`\n\npolymorphic iterator type\n\n`K`\n\npolymorphic element_iterator type for interval containers\n\n`V`\n\nvarious types `V`, that do dot fall in the categories above\n\n1,2,...\n\nnumber of implementations for this function\n\nA\n\nimplementation generated by compilers\n\ne\n\nT::element_type\n\nthe element type of `interval_sets` or `std::sets`\n\ni\n\nT::segment_type\n\nthe segment type of of `interval_sets`\n\ns\n\nelement sets\n\n`std::set` or other models of the icl's set concept\n\nS\n\ninterval_sets\n\none of the interval set types\n\nb\n\nT::element_type\n\ntype of `interval_map's` or `icl::map's` element value pairs\n\np\n\nT::segment_type\n\ntype of `interval_map's` interval value pairs\n\nm\n\nelement maps\n\n`icl::map` icl's map type\n\nM\n\ninterval_maps\n\none of the interval map types\n\nd\n\ndiscrete types\n\ntypes with a least steppable discrete unit: Integral types, date/time types etc.\n\nc\n\ncontinuous types\n\ntypes with (theoretically) infinitely many elements beween two values.\n\nTable 1.13. 
Synopsis Functions and Overloads\n\nT\n\nintervals\n\ninterval\nsets\n\ninterval\nmaps\n\nelement\nsets\n\nelement\nmaps\n\n`T::T()`\n\n1\n\n1\n\n1\n\n1\n\n1\n\n`T::T(const P&)`\n\nA\n\n1\n\n1\n\n```T& T::operator=(const P&)```\n\nA\n\n1\n\n1\n\n`void T::swap(T&)`\n\n1\n\n1\n\n1\n\n1\n\nintervals\n\ninterval\nsets\n\ninterval\nmaps\n\nelement\nsets\n\nelement\nmaps\n\n`bool T::empty()const`\n\n1\n\n1\n\n1\n\n1\n\n```bool is_empty(const T&)```\n\n1\n\n1\n\n1\n\n1\n\n1\n\n```bool contains(const T&, const P&)```\n```bool within(const P&, const T&)```\n\nintervals\n\ninterval\nsets\n\ninterval\nmaps\n\nelement\nsets\n\nelement\nmaps\n\n```bool operator == (const T&, const T&)```\n\n1\n\n1\n\n1\n\n1\n\n1\n\n```bool operator != (const T&, const T&)```\n\n1\n\n1\n\n1\n\n1\n\n1\n\n```bool operator < (const T&, const T&)```\n\n1\n\n1\n\n1\n\n1\n\n1\n\n```bool operator > (const T&, const T&)```\n\n1\n\n1\n\n1\n\n1\n\n1\n\n```bool operator <= (const T&, const T&)```\n\n1\n\n1\n\n1\n\n1\n\n1\n\n```bool operator >= (const T&, const T&)```\n\n1\n\n1\n\n1\n\n1\n\n1\n\n```bool is_element_equal(const T&, const P&)```\n\n1\n\n1\n\n```bool is_element_less(const T&, const P&)```\n\n1\n\n1\n\n```bool is_element_greater(const T&, const P&)```\n\n1\n\n1\n\n```bool is_distinct_equal(const T&, const P&)```\n\n1\n\nintervals\n\ninterval\nsets\n\ninterval\nmaps\n\nelement\nsets\n\nelement\nmaps\n\n`size_type T::size()const`\n\n1\n\n1\n\n1\n\n1\n\n```size_type size(const T&)```\n\n1\n\n1\n\n1\n\n1\n\n1\n\n```size_type cardinality(const T&)```\n\n1\n\n1\n\n1\n\n1\n\n1\n\n`difference_type length(const T&)`\n\n1\n\n1\n\n1\n\n```size_type iterative_size(const T&)```\n\n1\n\n1\n\n1\n\n1\n\n```size_type interval_count(const T&)```\n\n1\n\n1\n\n```J T::find(const P&)```\n\n2\n\n2\n\n```J find(T&, const P&)```\n\n```codomain_type& operator[] (const domain_type&)```\n\n1\n\n```codomain_type operator() (const domain_type&)const```\n\n1\n\n1\n\n`interval_type hull(const T&)`\n\n1\n\n1\n\n```T hull(const T&, const T&)```\n\n1\n\n```domain_type lower(const T&)```\n\n1\n\n1\n\n1\n\n```domain_type upper(const T&)```\n\n1\n\n1\n\n1\n\n```domain_type first(const T&)```\n\n1\n\n1\n\n1\n\n```domain_type last(const T&)```\n\n1\n\n1\n\n1\n\nintervals\n\ninterval\nsets\n\ninterval\nmaps\n\nelement\nsets\n\nelement\nmaps\n\n```T& T::add(const P&)```\n\n```T& add(T&, const P&)```\n\n```J T::add(J pos, const P&)```\n\n```J add(T&, J pos, const P&)```\n\n```T& operator +=(T&, const P&)```\n\n```T operator + (T, const P&)```\n```T operator + (const P&, T)```\n\n```T& operator |=( T&, const P&)```\n\n```T operator | (T, const P&)```\n```T operator | (const P&, T)```\n\n```T& T::subtract(const P&)```\n\n```T& subtract(T&, const P&)```\n\n```T& operator -=(T&, const P&)```\n\n```T operator - (T, const P&)```\n\n```T left_subtract(T, const T&)```\n\n1\n\n```T right_subtract(T, const T&)```\n\n1\n\nintervals\n\ninterval\nsets\n\ninterval\nmaps\n\nelement\nsets\n\nelement\nmaps\n\n```V T::insert(const P&)```\n\n```V insert(T&, const P&)```\n\n```J T::insert(J pos, const P&)```\n\n```J insert(T&, J pos, const P&)```\n\n```T& insert(T&, const P&)```\n\n```T& T::set(const P&)```\n\n1\n\n```T& set_at(T&, const P&)```\n\n1\n\n`void T::clear()`\n\n1\n\n1\n\n1\n\n1\n\n```void clear(const T&)```\n\n1\n\n1\n\n1\n\n1\n\n```T& T::erase(const P&)```\n\n```T& erase(T&, const P&)```\n\n`void T::erase(iterator)`\n\n1\n\n1\n\n1\n\n1\n\n`void 
T::erase(iterator,iterator)`\n\n1\n\n1\n\n1\n\n1\n\nintervals\n\ninterval\nsets\n\ninterval\nmaps\n\nelement\nsets\n\nelement\nmaps\n\n```void add_intersection(T&, const T&, const P&)```\n\n```T& operator &=(T&, const P&)```\n\n```T operator & (T, const P&)```\n```T operator & (const P&, T)```\n\n```bool intersects(const T&, const P&)```\n```bool disjoint(const T&, const P&)```\n\n```T& T::flip(const P&)```\n\n```T& flip(T&, const P&)```\n\n```T& operator ^=(T&, const P&)```\n\n```T operator ^ (T, const P&)```\n```T operator ^ (const P&, T)```\n\nintervals\n\ninterval\nsets\n\ninterval\nmaps\n\nelement\nsets\n\nelement\nmaps\n\n`J T::begin()`\n\n2\n\n2\n\n2\n\n2\n\n`J T::end()`\n\n2\n\n2\n\n2\n\n2\n\n`J T::rbegin()`\n\n2\n\n2\n\n2\n\n2\n\n`J T::rend()`\n\n2\n\n2\n\n2\n\n2\n\n```J T::lower_bound(const key_type&)```\n\n2\n\n2\n\n2\n\n2\n\n```J T::upper_bound(const key_type&)```\n\n2\n\n2\n\n2\n\n2\n\n```pair<J,J> T::equal_range(const key_type&)```\n\n2\n\n2\n\n2\n\n2\n\nintervals\n\ninterval\nsets\n\ninterval\nmaps\n\nelement\nsets\n\nelement\nmaps\n\n`K elements_begin(T&)`\n\n2\n\n2\n\n`K elements_end(T&)`\n\n2\n\n2\n\n`K elements_rbegin(T&)`\n\n2\n\n2\n\n`K elements_rend(T&)`\n\n2\n\n2\n\nintervals\n\ninterval\nsets\n\ninterval\nmaps\n\nelement\nsets\n\nelement\nmaps\n\n```std::basic_ostream operator << (basic_ostream&, const T&)```\n\n1\n\n1\n\n1\n\n1\n\n1\n\nMany but not all functions of icl intervals are listed in the table above. Some specific functions are summarized in the next table. For the group of the constructing functions, placeholders d denote discrete domain types and c denote continuous domain types `T::domain_type` for an interval_type `T` and an argument types `P`.\n\nTable 1.14. Additional interval functions\n\nT\n\ndiscrete\n_interval\n\ncontinuous\n_interval\n\nright_open\n_interval\n\nleft_open\n_interval\n\nclosed\n_interval\n\nopen\n_interval\n\nInterval bounds\n\ndynamic\n\ndynamic\n\nstatic\n\nstatic\n\nstatic\n\nstatic\n\nForm\n\nasymmetric\n\nasymmetric\n\nsymmetric\n\nsymmetric\n\n```T singleton(const P&)```\n\n```T construct(const P&, const P&)```\n\n```T construct(const P&, const P&, interval_bounds)```\n\n```T hull(const P&, const P&)```\n\n```T span(const P&, const P&)```\n\n```static T right_open(const P&, const P&)```\n\n```static T left_open(const P&, const P&)```\n\n```static T closed(const P&, const P&)```\n\n```static T open(const P&, const P&)```\n\n```bool exclusive_less(const T&, const T&)```\n\n1\n\n1\n\n1\n\n1\n\n1\n\n1\n\n```bool lower_less(const T&, const T&)```\n```bool lower_equal(const T&, const T&)```\n```bool lower_less_equal(const T&, const T&)```\n\n1\n\n1\n\n1\n\n1\n\n1\n\n1\n\n```bool upper_less(const T&, const T&)```\n```bool upper_equal(const T&, const T&)```\n```bool upper_less_equal(const T&, const T&)```\n\n1\n\n1\n\n1\n\n1\n\n1\n\n1\n\n```bool touches(const T&, const T&)```\n\n1\n\n1\n\n1\n\n1\n\n1\n\n1\n\n```T inner_complement(const T&, const T&)```\n\n1\n\n1\n\n1\n\n1\n\n1\n\n1\n\n```difference_type distance(const T&, const T&)```\n\n1\n\n1\n\n1\n\n1\n\n1\n\n1\n\n##### Element iterators for interval containers\n\nIterators on interval conainers that are refered to in section Iteration of the function synopsis table are segment iterators. They reveal the more implementation specific aspect, that the fundamental aspect abstracts from. Iteration over segments is fast, compared to an iteration over elements, particularly if intervals are large. 
But if we want to view our interval containers as containers of elements that are usable with std::algorithms, we need to iterate over elements.\n\nIteration over elements . . .\n\n• is possible only for integral or discrete `domain_types`\n• can be very slow if the intervals are very large.\n• and is therefore deprecated\n\nOn the other hand, sometimes iteration over interval containers on the element level might be desired, if you have some interface that works for `std::SortedAssociativeContainers` of elements and you need to quickly use it with an interval container. Accepting the poorer performance might be less bothersome at times than adjusting your whole interface for segment iteration.\n\nCaution", null, "So we advise you to choose element iteration over interval containers judiciously. Do not use element iteration by default or habitually. Always try to achieve results using namespace global functions or operators (preferably inplace versions) or iteration over segments first.\n Copyright © 2007-2010 Joachim Faulhaber Copyright © 1999-2006 Cortex Software GmbH Distributed under the Boost Software License, Version 1.0. (See accompanying file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)" ]
[ null, "https://www.boost.org/gfx/space.png", null, "https://www.boost.org/doc/libs/1_75_0/doc/src/images/caution.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.553282,"math_prob":0.95035005,"size":5243,"snap":"2021-04-2021-17","text_gpt3_token_len":1675,"char_repetition_ratio":0.34205002,"word_repetition_ratio":0.49455154,"special_character_ratio":0.36219722,"punctuation_ratio":0.07852194,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9852236,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-04-20T23:09:57Z\",\"WARC-Record-ID\":\"<urn:uuid:2e840dac-4c91-467d-af92-f0fd1ff7c560>\",\"Content-Length\":\"150243\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2813fecb-287f-4be1-8870-73481ff24ad8>\",\"WARC-Concurrent-To\":\"<urn:uuid:ce477adc-8087-4bb9-972c-8c13210e289f>\",\"WARC-IP-Address\":\"146.20.110.251\",\"WARC-Target-URI\":\"https://www.boost.org/doc/libs/1_75_0/libs/icl/doc/html/boost_icl/interface/function_synopsis.html\",\"WARC-Payload-Digest\":\"sha1:ZZGHQXEWBHQ23ECNQR4O24KXB33KUCZ7\",\"WARC-Block-Digest\":\"sha1:7U5NCAOQ3MYOD6IQHOZ6S3ZKGHQ6UYN3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-17/CC-MAIN-2021-17_segments_1618039491784.79_warc_CC-MAIN-20210420214346-20210421004346-00609.warc.gz\"}"}
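The synopsis rows above list both segment iteration (`T::begin`, `T::end`) and element iteration (`elements_begin`, `elements_end`) for interval containers. Below is a minimal C++ sketch of the difference the caution note warns about, using only functions named in the tables; the headers, the concrete interval type and the sample values are my own illustrative assumptions, not taken from the page.

```cpp
#include <iostream>
#include <boost/icl/interval.hpp>
#include <boost/icl/interval_set.hpp>

int main()
{
    using namespace boost::icl;

    interval_set<int> s;
    s.add(interval<int>::right_open(1, 4));    // container holds {[1,4)}
    s.add(interval<int>::right_open(10, 13));  // container holds {[1,4), [10,13)}

    // Segment iteration: one step per interval, fast even for huge intervals.
    for (interval_set<int>::iterator it = s.begin(); it != s.end(); ++it)
        std::cout << *it << ' ';               // prints the two segments
    std::cout << '\n';

    // Element iteration: one step per element; only available for discrete
    // domain types and slow when the intervals are large (hence the caution).
    for (interval_set<int>::element_iterator e = elements_begin(s);
         e != elements_end(s); ++e)
        std::cout << *e << ' ';                // prints 1 2 3 10 11 12
    std::cout << '\n';
}
```

Segment iteration touches two nodes here while element iteration touches six, and the gap grows with the width of the intervals, which is why the documentation recommends segment iteration by default.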
https://pythonawesome.com/easy-json-wrapper-packed-with-features/
[ "# ?️ JSONx\n\nEasy JSON wrapper packed with features.\n\nThis was made for small discord bots, for big bots you should not use this JSON wrapper.\n\n# ? Usage\n\nClone this file into your project folder.\n\nAdd `from db import JSONx` to the top of your project.\n\n# ? Docs\n\n## `db.set(key: str, value, *, pathmagic=\"\")`\n\nSets the key to the value in the JSON.\n\nif the `pathmagic` kwarg is given, it will spit it by the `+`‘s and make dicts(or use existing ones) until it finishes, then it will set the value to the key in the last dict.\n\nNote that the `pathmagic` kwarg will override its path if it isnt a dict.\n\n## `db.get(key: str, *, default=None, pathmagic=\"\")`\n\nReturns the value of the key in the json, if the key isn’t set in the json, it returns the default kwarg.\n\nif the `pathmagic` kwarg is given, it will spit it by the `+`‘s and follow the path in it in the JSON data, it will return the `default` kwarg if the path is empty or has a value that isnt a dict.\n\n## `db.all()`\n\nReturns all the JSON data.\n\n## `db.rem(key: str, *, pathmagic=\"\")`\n\nRemoves the key and value pair from the JSON.\n\nNote that this will not do anything if the key isn’t set in the JSON or the path is invalid.\n\nif the `pathmagic` kwarg is given, it will spit it by the `+`‘s and follow the path in it in the JSON data, then it will remove the key and value pair.\n\n## `db.nuke()`\n\nDeletes everything in the JSON.\n\nUse with caution.\n\n# ? Examples\n\nAssume that the `db.json` file is empty\n\n## `db.set()`\n\n### Normal usage\n\nCode\n\n```from db import JSONx\n\ndb = JSONx(\"db.json\")\n\ndb.set(\"test\", 123)\n\ndata = db.all()\n\nprint(data)```\n\nOutput\n\n``````{'test': 123}\n``````\n\n### Using with `pathmagic` kwarg\n\nCode\n\n```from db import JSONx\n\ndb = JSONx(\"db.json\")\n\ndb.set(\"test\", 123, pathmagic=\"a+b+c\")\n\ndata = db.all()\n\nprint(data)```\n\nOutput\n\n``````{'a': {'b': {'c': {'test': 123}}}}\n``````\n\n## `db.get()`\n\n### Normal usage\n\nCode\n\n```from db import JSONx\n\ndb = JSONx(\"db.json\")\n\ndb.set(\"test\", 123)\n\ndata = db.get(\"test\")\n\nprint(data)```\n\nOutput\n\n``````123\n``````\n\n### Using without `default` kwarg\n\nCode\n\n```from db import JSONx\n\ndb = JSONx(\"db.json\")\n\ndata = db.get(\"test\")\n\nprint(data)```\n\nOutput\n\n``````None\n``````\n\n### Using with `default` kwarg\n\nCode\n\n```from db import JSONx\n\ndb = JSONx(\"db.json\")\n\ndata = db.get(\"test\", default=123)\n\nprint(data)```\n\nOutput\n\n``````123\n``````\n\n### Using with `pathmagic` kwarg\n\nCode\n\n```from db import JSONx\n\ndb = JSONx(\"db.json\")\n\ndb.set(\"test\", 123, pathmagic=\"a+b+c\")\n\ndata = db.get(\"test\", pathmagic=\"a+b+c\")\n\nprint(data)```\n\nOutput\n\n``````123\n``````\n\n## `db.rem()`\n\n### Normal usage\n\nCode\n\n```from db import JSONx\n\ndb = JSONx(\"db.json\")\n\ndb.set(\"test\", 123)\n\ndata = db.all()\n\nprint(data)\n\ndb.rem(\"test\")\n\ndata = db.all()\n\nprint(data)```\n\nOutput\n\n``````{'test': 123}\n{}\n``````\n\n### Using with `pathmagic` kwarg\n\nCode\n\n```from db import JSONx\n\ndb = JSONx(\"db.json\")\n\ndb.set(\"test\", 123, pathmagic=\"a+b+c\")\n\ndata = db.all()\n\nprint(data)\n\ndb.rem(\"test\", pathmagic=\"a+b+c\")\n\ndata = db.all()\n\nprint(data)```\n\nOutput\n\n``````{'a': {'b': {'c': {'test': 123}}}}\n{'a': {'b': {'c': {}}}}\n``````\n\nView Github" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.518396,"math_prob":0.8213603,"size":2725,"snap":"2022-05-2022-21","text_gpt3_token_len":809,"char_repetition_ratio":0.14222713,"word_repetition_ratio":0.37583894,"special_character_ratio":0.30899084,"punctuation_ratio":0.16202946,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99178815,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-19T19:13:41Z\",\"WARC-Record-ID\":\"<urn:uuid:05a081ee-1d26-4653-8843-003acf915605>\",\"Content-Length\":\"44234\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b950314a-c8a0-4743-b02e-3562ac7816d7>\",\"WARC-Concurrent-To\":\"<urn:uuid:614e187c-7a66-4ff0-8a0c-f501895f12d0>\",\"WARC-IP-Address\":\"172.67.166.150\",\"WARC-Target-URI\":\"https://pythonawesome.com/easy-json-wrapper-packed-with-features/\",\"WARC-Payload-Digest\":\"sha1:2SXONDSNQVI4AT3WWOKSYNXF2AEDYPSD\",\"WARC-Block-Digest\":\"sha1:A7C4YEVTLL2P7GNIPPJ6UVCKDMIO5CI2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662529658.48_warc_CC-MAIN-20220519172853-20220519202853-00743.warc.gz\"}"}
https://metanumbers.com/103301
[ "# 103301 (number)\n\n103,301 (one hundred three thousand three hundred one) is an odd six-digits composite number following 103300 and preceding 103302. In scientific notation, it is written as 1.03301 × 105. The sum of its digits is 8. It has a total of 2 prime factors and 4 positive divisors. There are 93,900 positive integers (up to 103301) that are relatively prime to 103301.\n\n## Basic properties\n\n• Is Prime? No\n• Number parity Odd\n• Number length 6\n• Sum of Digits 8\n• Digital Root 8\n\n## Name\n\nShort name 103 thousand 301 one hundred three thousand three hundred one\n\n## Notation\n\nScientific notation 1.03301 × 105 103.301 × 103\n\n## Prime Factorization of 103301\n\nPrime Factorization 11 × 9391\n\nComposite number\nDistinct Factors Total Factors Radical ω(n) 2 Total number of distinct prime factors Ω(n) 2 Total number of prime factors rad(n) 103301 Product of the distinct prime numbers λ(n) 1 Returns the parity of Ω(n), such that λ(n) = (-1)Ω(n) μ(n) 1 Returns: 1, if n has an even number of prime factors (and is square free) −1, if n has an odd number of prime factors (and is square free) 0, if n has a squared prime factor Λ(n) 0 Returns log(p) if n is a power pk of any prime p (for any k >= 1), else returns 0\n\nThe prime factorization of 103,301 is 11 × 9391. Since it has a total of 2 prime factors, 103,301 is a composite number.\n\n## Divisors of 103301\n\n4 divisors\n\n Even divisors 0 4 2 2\nTotal Divisors Sum of Divisors Aliquot Sum τ(n) 4 Total number of the positive divisors of n σ(n) 112704 Sum of all the positive divisors of n s(n) 9403 Sum of the proper positive divisors of n A(n) 28176 Returns the sum of divisors (σ(n)) divided by the total number of divisors (τ(n)) G(n) 321.405 Returns the nth root of the product of n divisors H(n) 3.66628 Returns the total number of divisors (τ(n)) divided by the sum of the reciprocal of each divisors\n\nThe number 103,301 can be divided by 4 positive divisors (out of which 0 are even, and 4 are odd). The sum of these divisors (counting 103,301) is 112,704, the average is 28,176.\n\n## Other Arithmetic Functions (n = 103301)\n\n1 φ(n) n\nEuler Totient Carmichael Lambda Prime Pi φ(n) 93900 Total number of positive integers not greater than n that are coprime to n λ(n) 9390 Smallest positive number such that aλ(n) ≡ 1 (mod n) for all a coprime to n π(n) ≈ 9849 Total number of primes less than or equal to n r2(n) 0 The number of ways n can be represented as the sum of 2 squares\n\nThere are 93,900 positive integers (less than 103,301) that are coprime with 103,301. 
And there are approximately 9,849 prime numbers less than or equal to 103,301.\n\n## Divisibility of 103301\n\n m n mod m 2 3 4 5 6 7 8 9 1 2 1 1 5 2 5 8\n\n103,301 is not divisible by any number less than or equal to 9.\n\n## Classification of 103301\n\n• Arithmetic\n• Semiprime\n• Deficient\n\n### Expressible via specific sums\n\n• Polite\n• Non-hypotenuse\n\n• Square Free\n\n### Other numbers\n\n• LucasCarmichael\n\n## Base conversion (103301)\n\nBase System Value\n2 Binary 11001001110000101\n3 Ternary 12020200222\n4 Quaternary 121032011\n5 Quinary 11301201\n6 Senary 2114125\n8 Octal 311605\n10 Decimal 103301\n12 Duodecimal 4b945\n20 Vigesimal ci51\n36 Base36 27ph\n\n## Basic calculations (n = 103301)\n\n### Multiplication\n\nn×y\n n×2 206602 309903 413204 516505\n\n### Division\n\nn÷y\n n÷2 51650.5 34433.7 25825.2 20660.2\n\n### Exponentiation\n\nny\n n2 10671096601 1102334949979901 113872302667873753201 11763122737894026579416501\n\n### Nth Root\n\ny√n\n 2√n 321.405 46.9211 17.9278 10.0652\n\n## 103301 as geometric shapes\n\n### Circle\n\n Diameter 206602 649059 3.35242e+10\n\n### Sphere\n\n Volume 4.61745e+15 1.34097e+11 649059\n\n### Square\n\nLength = n\n Perimeter 413204 1.06711e+10 146090\n\n### Cube\n\nLength = n\n Surface area 6.40266e+10 1.10233e+15 178923\n\n### Equilateral Triangle\n\nLength = n\n Perimeter 309903 4.62072e+09 89461.3\n\n### Triangular Pyramid\n\nLength = n\n Surface area 1.84829e+10 1.29911e+14 84344.9\n\n## Cryptographic Hash Functions\n\nmd5 c2a96e701e29d4ee54e891ff50a200de ed65821eb9fa932733eef33811ee8b3729b6e870 51cb69e002770bcbb2f0616ff5196a573845e2ae927c7f2d67b1246ffd20c860 dc3ae3d777dc1b94cff915ca8fcdd0028fed3160e74df312edcd2139a9dd1d6ac8b6046201f83501984aaee414d76933f5c695f6e013f8f657c6668f5949019f 628c7fdafaae00d159dc8be3525a8282c56c1b9f" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.63365805,"math_prob":0.9785567,"size":4651,"snap":"2021-43-2021-49","text_gpt3_token_len":1626,"char_repetition_ratio":0.119216695,"word_repetition_ratio":0.03211679,"special_character_ratio":0.45345086,"punctuation_ratio":0.07496823,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9962893,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-17T12:43:26Z\",\"WARC-Record-ID\":\"<urn:uuid:fc1e243f-9497-4b80-8255-00af24772221>\",\"Content-Length\":\"40070\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3c645a39-ac4f-4671-bbe0-a6763e6de29c>\",\"WARC-Concurrent-To\":\"<urn:uuid:fa1feae4-dd41-4dde-b9fb-eb0d72d900aa>\",\"WARC-IP-Address\":\"46.105.53.190\",\"WARC-Target-URI\":\"https://metanumbers.com/103301\",\"WARC-Payload-Digest\":\"sha1:V773FZMCFUZLYNPKSNIMSWOWLTUNGBFF\",\"WARC-Block-Digest\":\"sha1:WEZVA53ZD6HXK4I53Z4Y56GHE24BS3IF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585177.11_warc_CC-MAIN-20211017113503-20211017143503-00183.warc.gz\"}"}
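A quick consistency check on the row above, taking the page's factorization 103301 = 11 × 9391 at face value and applying the standard formulas for a product of two distinct primes (no new data from the source):

```latex
\varphi(103301) = (11-1)(9391-1) = 10 \cdot 9390 = 93900, \qquad
\sigma(103301) = (11+1)(9391+1) = 12 \cdot 9392 = 112704
```

This matches the quoted totient and sum-of-divisors values; the four divisors are 1, 11, 9391 and 103301, giving the aliquot sum 112704 − 103301 = 9403, also as quoted.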
https://www.varsitytutors.com/gre_math-help/how-to-find-the-perimeter-of-an-equilateral-triangle
[ "## Example Questions\n\n### Example Question #1 : Equilateral Triangles\n\nFind the perimeter of an equilateral triangle with a height of", null, ".\n\nNone of the answer choices are correct.", null, "", null, "", null, "", null, "", null, "Explanation:\n\nPerimeter is found by adding up all sides of the triangle. All sides in an equilateral triangle are equal, so we need to find the value of just one side to know the values of all sides.\n\nThe height of an equilateral triangle divides it into two equal 30:60:90 triangles, which will have side ratios of 1:2:√3. The height here is the √3 ratio, which in this case is equivalent to 8, so to get the length of the other two sides, we put 8 over √3 (8/√3) and 2 * 8/√3 = 16/√3, which is the hypotenuse of our 30:60:90 triangle.\n\nThe perimeter is then 3 * 16/√3, or 48/√3.\n\n### Example Question #2 : Equilateral Triangles\n\nIf the height of an equilateral triangle is", null, ", what is the perimeter?", null, "", null, "", null, "", null, "", null, "", null, "Explanation:\n\nBy having a height in an equilateral triangle, the angle is bisected therefore creating two", null, "triangles.\n\nThe height is opposite the angle", null, ". We can set-up a proportion.\n\nSide opposite", null, "is", null, "and the side of equilateral triangle which is opposite", null, "is", null, ".", null, "Cross multiply.", null, "Divide both sides by", null, "", null, "Multiply top and bottom by", null, "to get rid of the radical.", null, "Since each side is the same and there are three sides, we just multply the answer by three to get", null, "### Example Question #3 : Equilateral Triangles\n\nIf area of equilateral triangle is", null, ", what is the perimeter?", null, "", null, "", null, "", null, "", null, "", null, "Explanation:\n\nThe area of an equilateral triangle is", null, ".\n\nSo let's set-up an equation to solve for", null, "", null, "Cross multiply.", null, "The", null, "cancels out and we get", null, ".\n\nThen take square root on both sides and we get", null, ". Since we have three equal sides, we just multply", null, "by three to get", null, "as the final answer.\n\nTired of practice problems?\n\nTry live online GRE prep today.", null, "" ]
[ null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/178711/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/178709/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/7394/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/178710/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/7395/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/7395/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/384340/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/384336/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/384337/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/384338/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/384339/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/384335/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/384335/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/384319/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/384320/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/384321/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/384322/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/384323/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/384324/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/384341/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/384342/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/384327/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/384343/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/384329/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/384344/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/384345/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/384362/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/384359/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/384356/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/384357/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/384361/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/384358/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/384356/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/384295/gif.latex", null, 
"https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/384296/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/384363/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/384364/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/384299/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/384365/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/384366/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/384367/gif.latex", null, "https://vt-vtwa-assets.varsitytutors.com/vt-vtwa/uploads/formula_image/image/384368/gif.latex", null, "https://vt-vtwa-app-assets.varsitytutors.com/images/problems/og_image_practice_problems.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.90304595,"math_prob":0.9977741,"size":1669,"snap":"2022-05-2022-21","text_gpt3_token_len":421,"char_repetition_ratio":0.18258259,"word_repetition_ratio":0.038961038,"special_character_ratio":0.24865189,"punctuation_ratio":0.12215909,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9995129,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86],"im_url_duplicate_count":[null,4,null,4,null,4,null,4,null,8,null,8,null,4,null,4,null,4,null,4,null,4,null,8,null,8,null,7,null,null,null,null,null,null,null,null,null,null,null,4,null,4,null,7,null,4,null,7,null,4,null,4,null,4,null,4,null,8,null,4,null,4,null,4,null,8,null,null,null,null,null,4,null,4,null,null,null,4,null,4,null,4,null,4,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-27T13:01:10Z\",\"WARC-Record-ID\":\"<urn:uuid:ed01067a-d9f8-4d31-a606-f9da97432a18>\",\"Content-Length\":\"211022\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:968c8875-0592-485a-a583-7facfc744349>\",\"WARC-Concurrent-To\":\"<urn:uuid:cc98cfac-1c51-4dc0-a3e8-85aba188af69>\",\"WARC-IP-Address\":\"99.84.108.6\",\"WARC-Target-URI\":\"https://www.varsitytutors.com/gre_math-help/how-to-find-the-perimeter-of-an-equilateral-triangle\",\"WARC-Payload-Digest\":\"sha1:45U2UMEFIFGF4OUQRLWKK5Q7ZONFW2UP\",\"WARC-Block-Digest\":\"sha1:C7KUUF4VICZ22G4EM6Q4G4F23345CF2Y\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662647086.91_warc_CC-MAIN-20220527112418-20220527142418-00645.warc.gz\"}"}
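The formulas in the explanations above were served as images (the gif.latex URLs listed above) and did not survive extraction. For the first problem, the derivation the text describes (height 8, the 30-60-90 side ratios) written out explicitly:

```latex
h = \frac{\sqrt{3}}{2}\,a \;\Rightarrow\; a = \frac{2h}{\sqrt{3}}, \qquad
P = 3a = \frac{6h}{\sqrt{3}} = 2\sqrt{3}\,h = \frac{48}{\sqrt{3}} = 16\sqrt{3} \approx 27.7 \quad \text{for } h = 8
```

This is consistent with the quoted answer 3 · 16/√3 = 48/√3.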
http://prepdog.org/4th/4th-common/4.md.3.-test1c_apply_the_area_and_perimeter_formulas_for_rectangles.htm
[ "4.MD.3.-TEST1C Apply the area and perimeter formulas for rectangles\n Name:    4.MD.3.-TEST1C Apply the area and perimeter formulas for rectangles\n\nMultiple Choice\nIdentify the choice that best completes the statement or answers the question.\n\n1.\n\nThe area of the dog park is 1,750 square yards.  The length of the dog park is 50 yards.  What is the width of the dog park?\n a. 30 yards c. 50 yards b. 35 yards d. 35 square yards\n\n2.\n\nThe length of the bedroom is 12 feet.  The perimeter of the bedroom is 44 feet.  What is the area of the bedroom?\n a. 120 square feet c. 46 square feet b. 408 square feet d. 120 feet\n\n3.\n\nWhat is the perimeter of a square that has an area of 64 square inches?\n a. 32 square inches c. 32 inches b. 16 inches d. 8 inches\n\n4.\n\nThe area of the tabletop is 30 square feet.  The width of the table top is 5  feet.  What is the perimeter of the tabletop?\n a. 22 square feet c. 6 feet b. 22 feet d. 35 feet\n\n5.\n\nThe area of the forest is 6,000 square miles.  The length of the forest is 100 miles.  What is the width of the forest?\n a. 60 miles c. 400 miles b. 200 miles d. 60 square miles\n\n6.\n\nThe perimeter of the game center is 360 feet .  The length of the game center is 120 feet.  What is the area of the game center?\n a. 4,800 square feet c. 7,200 square feet b. 7,200 feet d. 480 square feet\n\n7.\n\nThe area of the barn is 2,000  square feet.  The length of the barn is 50 feet.  What is the width of the barn?\n a. 20 feet c. 40 square feet b. 40 feet d. 50 feet\n\n8.\n\nThe area of the square is 36 square inches.  What is the length of each side of the square?\n a. 9 inches c. 6 square inches b. 4 inches d. 6 inches\n\n9.\n\nThe area of the picture is 36 square inches.  The length of the picture is 9 inches.  What is the width of the picture?\n a. 6 inches c. 45 inches b. 4 inches d. 26 inches\n\n10.\n\nThe surface area of the swimming pool is 800 square feet.  The length of the swimming pool is 40 feet.   What is the perimeter of the swimming pool?\n a. 840 feet c. 120 feet b. 240 feet d. 120 square feet" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8419463,"math_prob":0.99701697,"size":1938,"snap":"2020-10-2020-16","text_gpt3_token_len":580,"char_repetition_ratio":0.24612203,"word_repetition_ratio":0.092857145,"special_character_ratio":0.37203303,"punctuation_ratio":0.17427386,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9986555,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-01T00:54:13Z\",\"WARC-Record-ID\":\"<urn:uuid:47ae9d4f-b13c-488a-8b4c-7da85cf5f7ac>\",\"Content-Length\":\"40961\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:573c20bd-c100-441b-8b10-77ce42ff5182>\",\"WARC-Concurrent-To\":\"<urn:uuid:40f5cbb5-e204-4ec5-830f-5154aa24f1e2>\",\"WARC-IP-Address\":\"72.52.248.11\",\"WARC-Target-URI\":\"http://prepdog.org/4th/4th-common/4.md.3.-test1c_apply_the_area_and_perimeter_formulas_for_rectangles.htm\",\"WARC-Payload-Digest\":\"sha1:SD3NRHZO3LHV7QJC4LX5AAQB2UA2YBOH\",\"WARC-Block-Digest\":\"sha1:7MRZXF3Q5CGKZBB42KQREMUFASOYHGRM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585370505359.23_warc_CC-MAIN-20200401003422-20200401033422-00363.warc.gz\"}"}
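Every item in the test above reduces to the rectangle formulas A = l·w and P = 2(l + w). As a sketch of the intended method (my working; the page does not show its answer key), the first two questions give:

```latex
\text{Q1: } w = \frac{A}{l} = \frac{1750}{50} = 35 \text{ yards}; \qquad
\text{Q2: } w = \frac{P - 2l}{2} = \frac{44 - 2\cdot 12}{2} = 10 \text{ ft}, \quad A = 12 \cdot 10 = 120 \text{ ft}^2
```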
https://questions.examside.com/past-years/jee/question/a-bar-manet-of-length-14-cm-is-placed-in-the-magnetic-meridi-jee-main-physics-units-and-measurements-g0bimoyejjbhtesj
[ "1\nJEE Main 2021 (Online) 16th March Morning Shift\n+4\n-1\nA bar magnet of length 14 cm is placed in the magnetic meridian with its north pole pointing towards the geographic north pole. A neutral point is obtained at a distance of 18 cm from the center of the magnet. If BH = 0.4 G, the magnetic moment of the magnet is (1 G = 10⁻⁴ T)\nA\n2.880 × 10² J T⁻¹\nB\n2.880 J T⁻¹\nC\n2.880 × 10³ J T⁻¹\nD\n28.80 J T⁻¹\n2\nJEE Main 2021 (Online) 16th March Morning Shift\n+4\n-1\nFor an electromagnetic wave travelling in free space, the relation between average energy densities due to electric (Ue) and magnetic (Um) fields is:\nA\nUe = Um\nB\nUe ≠ Um\nC\nUe < Um\nD\nUe > Um\n3\nJEE Main 2021 (Online) 16th March Morning Shift\n+4\n-1\nA conducting bar of length L is free to slide on two parallel conducting rails as shown in the figure", null, "Two resistors R1 and R2 are connected across the ends of the rails. There is a uniform magnetic field B pointing into the page. An external agent pulls the bar to the left at a constant speed v.\n\nThe correct statement about the directions of induced currents I1 and I2 flowing through R1 and R2 respectively is:\nA\nBoth I1 and I2 are in the clockwise direction\nB\nI1 is in the clockwise direction and I2 is in the anticlockwise direction\nC\nI1 is in the anticlockwise direction and I2 is in the clockwise direction\nD\nBoth I1 and I2 are in the anticlockwise direction\n4\nJEE Main 2021 (Online) 26th February Evening Shift\n+4\n-1\nAn aeroplane, with its wings spread 10 m, is flying at a speed of 180 km/h in a horizontal direction. The total intensity of earth's field at that part is 2.5 × 10⁻⁴ Wb/m² and the angle of dip is 60°. The emf induced between the tips of the plane wings will be __________.\nA\n88.37 mV\nB\n62.50 mV\nC\n54.125 mV\nD\n108.25 mV" ]
[ null, "https://imagex.cdn.examgoal.net/1kmilq5ig/6a67b0da-5f2d-4712-8082-dde2a0f335e6/a959ce80-89f5-11eb-9c26-d9a423ffb060/file-1kmilq5ih.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8484658,"math_prob":0.98789704,"size":8449,"snap":"2023-14-2023-23","text_gpt3_token_len":2602,"char_repetition_ratio":0.37383068,"word_repetition_ratio":0.48725212,"special_character_ratio":0.32453546,"punctuation_ratio":0.014634146,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99382764,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-22T02:10:15Z\",\"WARC-Record-ID\":\"<urn:uuid:2f89a0e7-d0bf-406e-b644-5e8ae82606e8>\",\"Content-Length\":\"285653\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9fa45fb3-e404-45a3-a0a1-e58e9d63aad4>\",\"WARC-Concurrent-To\":\"<urn:uuid:819545b2-fcc2-4a0a-9766-2509ff88067c>\",\"WARC-IP-Address\":\"172.67.132.22\",\"WARC-Target-URI\":\"https://questions.examside.com/past-years/jee/question/a-bar-manet-of-length-14-cm-is-placed-in-the-magnetic-meridi-jee-main-physics-units-and-measurements-g0bimoyejjbhtesj\",\"WARC-Payload-Digest\":\"sha1:R45ISZDOGFQKOTVZZMRLGS6KT5FT4BNN\",\"WARC-Block-Digest\":\"sha1:FYSYIROTOC7BBINISOTATNUSOZDDIHI4\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296943749.68_warc_CC-MAIN-20230322020215-20230322050215-00261.warc.gz\"}"}
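For the first question in the row above, one standard working (mine, not an answer key from the page): with the magnet's north pole toward geographic north, the neutral points lie on the equatorial line, where the magnet's equatorial field cancels the horizontal component BH (written B_H below). With half-length ℓ = 0.07 m, distance d = 0.18 m, B_H = 0.4 × 10⁻⁴ T and μ0/4π = 10⁻⁷ T m A⁻¹:

```latex
\frac{\mu_0}{4\pi}\,\frac{m}{\left(d^2+\ell^2\right)^{3/2}} = B_H
\;\Rightarrow\;
m = \frac{B_H\left(d^2+\ell^2\right)^{3/2}}{10^{-7}}
  = \frac{0.4\times 10^{-4}\,(0.0324+0.0049)^{3/2}}{10^{-7}}
  \approx 2.88\ \mathrm{J\,T^{-1}}
```

That value corresponds to option B.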
https://ywqzx.xyz/?post=596
[ "# What charge does a magnetic field generate\n\n## Lorentz force: how charge is deflected in a magnetic field\n\nLorentz force is generally the sum of electrical and magnetic forcethat hit an electrically charged particle with the charge \\ (q \\) works when dealing with the speed \\ (\\ class {blue} {v} \\) in one Magnetic field \\ (\\ class {violet} {B} \\) and in one electric field \\ (E \\) moves. Depending on how the direction of movement of the particle is relative to the direction of the magnetic field, the Lorentz force is different. This is made possible by the angle \\ (\\ alpha \\) between \\ (\\ class {blue} {v} \\) and \\ (\\ class {violet} {B} \\) determined.\n\nAmount of the Lorentz force (general)\n\n### Electrical component of the Lorentz force\n\nThe first term in the formula for Lorentz force stands for electrical force \\ (F _ {\\ text e} \\):\n\nFormula: electric force\nExample: electron and protonIf you bring a positively charged proton close to a negatively charged electron, the electron experiences an electrical force \\ (F _ {\\ text e} \\) and moves towards the proton. The same thing happens with the proton: It also experiences a force that the electron exerts on it. The two generate inhomogeneous (i.e. location-dependent) electric fields. In this way the charges influence each other electrically.\n\nThe electric field strength \\ (E \\) states what force a charge can exert on another charge, i.e.: Force per charge:\n\nThe field is strongest close to the charge and becomes weaker the further the electrical charges move away from each other. At the location of the electron the electric field generated by the proton has a certain value \\ (F _ {\\ text e} \\).\n\nIf there is no external electric field \\ (E \\) in which the charge moves, then the electric part of the Lorentz force vanishes: \\ (F _ {\\ text e} = q \\ cdot0 = 0 \\). Then only the magnetic part \\ (F _ {\\ text m} \\) remains.\n\n### Magnetic part of the Lorentz force\n\nThe second summand in the Lorentz force stands for the magnetic force \\ (\\ class {green} {F _ {\\ text m}} \\), which acts on an electrical charge \\ (q \\) in the magnetic field \\ (\\ class {violet} {B} \\) when the charge interacts with the speed \\ (\\ class {blue} {v} \\) moves:\n\nMagnetic force\n\nIn the following we assume that the charge is only in a magnetic field \\ (\\ class {violet} {B} \\). That means, there is no electric field \\ (E \\) and therefore no electric force on the charge. Then Lorentz force \\ (F \\) is equal to the magnetic force \\ (\\ class {green} {F _ {\\ text m}} \\):\n\nIn order for a magnetic force \\ (\\ class {green} {F} \\) to act on a particle, it must meet the following properties:\n\n1. The particle has to move - otherwise the velocity would be \\ (\\ class {blue} {v} ~ = ~ 0 \\) and thus also the Lorentz force:\n2. The particle must not be neutral - because neutral particles have no charge \\ (q ~ = ~ 0 \\). Therefore the Lorentz force would also disappear in this case:\n\nWhen you have made sure that the particle fulfills the above two properties, then you can calculate the magnitude of the Lorentz force (in). In principle 3 cases can occur. The load is moving ...\n\n1. parallel to the magnetic field: \\ (\\ class {blue} {v} \\) || \\ (\\ class {violet} {B} \\)\n2. perpendicular to the magnetic field: \\ (\\ class {blue} {v} \\) ⊥ \\ (\\ class {violet} {B} \\)\n3. 
oblique to the magnetic field: at the angle \\ (\\ alpha \\)\n\n### Case 1: Movement parallel to the magnetic field\n\nIn this case the angle that is in the formula for Lorentz force is: \\ (\\ alpha \\) = 0. A sine of 0 degrees is 0, which is why there is no magnetic force on the particle and it therefore disappears:\n\n### Case 2: Movement perpendicular to the magnetic field\n\nIf two vectors (such arrows and so on) - in this case speed \\ (\\ class {blue} {v} \\) and magnetic flux density \\ (\\ class {violet} {B} \\) - are perpendicular to one another, then that means that they enclose a \\ (90 ^ \\ circ \\) angle. And as you may know from mathematics: A sine of 90 degrees is 1. Therefore you can write in a simplified way:\n\nFormula: Lorentz force - movement perpendicular to the magnetic field\n\nIf you also have the direction of the force - without vector calculation - then use the three-finger rule!\n\nYou can only use the three-finger rule with which you can determine the direction of the Lorentz force in case 2! In two other cases it does not apply! Brief repetition of the three-finger rule:\n\n• thumb - points in the direction of the cause, here movement of the charge, i.e. in the direction of the velocity \\ (\\ class {blue} {v} \\).\n• index finger - points in the direction of the magnetic south pole (mostly marked with green in school).\n• Middle finger - shows you the Lorentz force direction as soon as you have correctly directed the other two fingers.\n\nFor positive charges (e.g. protons) you have to use your right hand and for negative charges (e.g. electron) you have to use the left hand.\n\n### Creation of circular motion\n\nFor example, if you use an electron gun to shoot a negative charge \\ (q = -e \\) into a magnetic field \\ (\\ class {violet} {B} \\) directed into your screen, in such a way that the charge with a constant velocity \\ (\\ class {blue} {v} \\) vertically (ie case 2) enters the magnetic field, then it experiences - as you know - a Lorentz force and is deflected upwards. However, the load does not just fly straight up, but runs through a circular path, because:\n\n#### Calculate the radius of the circular path\n\nIf you want to calculate the radius \\ (r \\) of the circular path, you simply equate the Lorentz force with the centripetal force: Like the centripetal force, the Lorentz force always acts in the center of the circle, which is why it replaces the centripetal force here. Form the equation according to \\ (r \\), then you have:\n\nFormula: radius of the circular path\n\nThe formula gives you some useful information about the circular path radius.\n\nYou can judge the strength of the magnetic field, for example, by looking at the radius of the circular path that has arisen. Because the larger the radius, the weaker the magnetic field.\n\n#### Period of the circular motion\n\nIf you still want to calculate the period \\ (T \\), i.e. the time that the particle needs to make exactly one circular motion, then you use the formula for uniform motion:\n\nThis formula is allowed here, since the amount of the speed of the particle (but not its direction!) Is constant at any point in time, which is why it is actually a uniform and not an accelerated movement.\n\nThe segment \\ (s \\) - is the circumference of the circle, so: \\ (s = 2 \\ pi \\, r \\). Time \\ (t \\) is the period \\ (T \\). The period indicates how long a cycle lasts.\n\nYou now have everything you need! Plug in the circumference \\ (s \\) in. 
The time \\ (t \\) in the searched period is \\ (T \\): \\ (t = T \\). Also plug in the radius in:\n\nFormula: Period of a circular movement\n\n### Case 3: Movement at an angle to the magnetic field\n\nYou are interested in the magnitude of the magnetic force on a charge that does not necessarily move exactly perpendicular to the magnetic field. The charge could somehow move partially parallel to the magnetic field. Therefore you consider:\n\nFormula: Amount of the Lorentz force - any entry angle\n\nIf the speed \\ (\\ class {blue} {v} \\) is directed obliquely to the magnetic flux density \\ (\\ class {violet} {B} \\), then speed can be in a parallel \\ (\\ class {blue} {v_ {||}} \\) and one vertical \\ (\\ class {blue} {v _ {\\ perp}} \\) part of the magnetic field.\n\nThe parallel part - in contrast to the vertical part - has no influence on the magnetic force and therefore this part is not responsible for the deflection of the electron in the magnetic field; because the vertical part forms an angle of 0 degrees with the magnetic flux density \\ (\\ class {violet} {B} \\), which is why the force for this part disappears (because of \\ (\\ sin (0 ^ {\\ circ}) ~ = ~ 0 \\)):\n\nA partial movement parallel and a partial movement perpendicular to the magnetic field creates a cylindrical spiral path, a so-called Helix. Its axis is parallel to the magnetic field. It has a radius \\ (r \\) and a pitch \\ (h \\). Where the pitch is simply a distance parallel to the magnetic field that is covered within a period \\ (T \\).\n\n### Lorentz force on a current-carrying conductor\n\nLorentz force acts not only on individual charges, but also on entire electrical currents! They represent nothing more than electrical charges that move, for example, through an electrical conductor.\n\nCharges in the conductor (which represent the electric current) cover the length of the conductor \\ (L \\) within a certain time \\ (t \\). Distance per time is defined as speed (in this case the speed of the charges in the conductor):\n\nInserting the speed into the Lorentz force formula and rearranging \\ (t \\) results in: And \\ (\\ frac {q} {t} \\) is defined as the current strength \\ (\\ class {blue} {I} \\). In total you have:\n\nLorentz force on a current-carrying conductor\n\n### Lorentz force between two ladders\n\nImagine two electrical cables parallel to each other and let the electrical current \\ (\\ class {blue} {I_1} \\) flow through one and \\ (\\ class {blue} {I_2} \\) in the same direction through the other. You will find that the two conductors tighten due to the Lorentz force. But how - without a magnetic field? The reason is:\n\nThe magnetic field generated by the charges in the conductor encompasses the conductor. In addition, the generated magnetic field is not concentrated in a specific place, but rather extensive. Because of this expansion, the other conductor is suddenly in an external magnetic field.\n\nIf you proceed in the same way with the other conductor, you will find that the magnetic field at the location of the other conductor points in the opposite direction, so that the Lorentz force also points in the opposite direction than with the other conductor.\n\nWhat happens if the electrical currents in the conductors go in opposite directions?\n\nYou can calculate the force that each conductor experiences - due to the other conductor - as follows:\n\nFormula: Lorentz force between two conductors" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9009284,"math_prob":0.98732626,"size":10107,"snap":"2021-31-2021-39","text_gpt3_token_len":2432,"char_repetition_ratio":0.19202217,"word_repetition_ratio":0.07360673,"special_character_ratio":0.2590284,"punctuation_ratio":0.07925151,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9978622,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-28T10:49:44Z\",\"WARC-Record-ID\":\"<urn:uuid:bd7d14fc-594f-4a84-a6fd-68254c83b65a>\",\"Content-Length\":\"18186\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:026b22d0-79e1-4cf0-bcec-6c1fbe41fd16>\",\"WARC-Concurrent-To\":\"<urn:uuid:f4f0d805-e15c-4167-ba1f-9b19e8f1df0f>\",\"WARC-IP-Address\":\"172.67.163.113\",\"WARC-Target-URI\":\"https://ywqzx.xyz/?post=596\",\"WARC-Payload-Digest\":\"sha1:FM434Q5NKTQE57Y7JFNXM4N66SYAGONH\",\"WARC-Block-Digest\":\"sha1:MNTXDVRRCHM7RD4CMAD5L4X5MUUXREWO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780060677.55_warc_CC-MAIN-20210928092646-20210928122646-00097.warc.gz\"}"}
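The two boxed formulas that the text above derives for case 2 were images and did not survive extraction. They follow from equating the magnetic force with the centripetal force, with m the particle mass, q its charge, v its speed and B the flux density:

```latex
qvB = \frac{m v^2}{r} \;\Rightarrow\; r = \frac{m v}{q B}, \qquad
T = \frac{2\pi r}{v} = \frac{2\pi m}{q B}
```

The period is independent of the speed, and the radius shrinks as B grows, consistent with the text's remark that a larger circle indicates a weaker field.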
https://www.thestudentroom.co.uk/showthread.php?t=6290856
[ "# A level maths help needed!\n\nWatch\nAnnouncements\n#1\nA box is made from a square base of side length x and height h. The surface area of the box (not including the lid) is 75 cm^2. Calculate the maximum volume of the box.\n\nI've got that 4xh+x^2=75 and that v=x^2h but I'm not sure how to get any of the values\n1\n9 months ago\n#2\nv/h=x^2, and sub that into the first equation. just a guess, see where it gets you aha, good luck\nLast edited by username5071070; 9 months ago\n0\n#3\n(Original post by katrinayates)\nv/h=x^2, and sub that into the first equation. just a guess, see where it gets you aha, good luck\nwhy does v/h=x^2?\n0\n9 months ago\n#4\n(Original post by Juliakinga)\nA box is made from a square base of side length x and height h. The surface area of the box (not including the lid) is 75 cm^2. Calculate the maximum volume of the box.\n\nI've got that 4xh+x^2=75 and that v=x^2h but I'm not sure how to get any of the values\nFrom the first equation make h the subject of the formula, then substitute it into the second equation. You will obtain V in terms of x. Differentiate V with respect to x and set to 0\n0\n9 months ago\n#5\n(Original post by Juliakinga)\nA box is made from a square base of side length x and height h. The surface area of the box (not including the lid) is 75 cm^2. Calculate the maximum volume of the box.\n\nI've got that 4xh+x^2=75 and that v=x^2h but I'm not sure how to get any of the values\n\n75=4xh+x^2\n\nV=x^2 * h\n\nWe want to maximise the function V=x^2*h\n\nTherefore, find V in terms of x only (i.e. rearrange your first equation for h, and then substitute it into the volume equation).\n\nThen you have a function for the volume in terms of x only. Now you can find the maximum by differentiating and setting equal to 0 (the maximum is a stationary point).\n\nThis problem is quite similar to one I made a video on a few days ago - it's another problem where you need to find the maximum of something, in a problem which is in context and thus more difficult than your average differentiation problem. It's worth having a look at! Let me know if you find it interesting and if you manage to get it! [Link to another similar problem].\n0\n#6\n(Original post by Hilton184)\n\n75=4xh+x^2\n\nV=x^2 * h\n\nWe want to maximise the function V=x^2*h\n\nTherefore, find V in terms of x only (i.e. rearrange your first equation for h, and then substitute it into the volume equation).\n\nThen you have a function for the volume in terms of x only. Now you can find the maximum by differentiating and setting equal to 0 (the maximum is a stationary point).\n\nThis problem is quite similar to one I made a video on a few days ago - it's another problem where you need to find the maximum of something, in a problem which is in context and thus more difficult than your average differentiation problem. It's worth having a look at! Let me know if you find it interesting and if you manage to get it! [Link to another similar problem].\nSo when I differentiate and get the equation 75/4 - 3/4 x^2, what do I get when I solve the equation. Is this the length x?\n0\n9 months ago\n#7\n(Original post by Juliakinga)\nSo when I differentiate and get the equation 75/4 - 3/4 x^2, what do I get when I solve the equation. Is this the length x?\nYes, it is the length that will give you the max volume.\n0\n9 months ago\n#8\n(Original post by Juliakinga)\nSo when I differentiate and get the equation 75/4 - 3/4 x^2, what do I get when I solve the equation. Is this the length x?\nI can't see an equation, just an expression ...
yes, when it is an equation it will give you x. You then need to find the volume.\n0\n9 months ago\n#9\n(Original post by Juliakinga)\nSo when I differentiate and get the equation 75/4 - 3/4 x^2, what do I get when I solve the equation. Is this the length x?\nYes, that is correct when you differentiate. You get dV/dx = 75/4 -3/4 (x^2).\n\nNow, if you set this equal to 0 and solve for x, you will obtain the x value which gives the maximum volume. This is because dV/dx is the gradient function and when it is equal to 0 it is a stationary point (maximum point in this case). If you watch the video I linked this does a similar problem but demonstrates how the maximum is obtained when you differentiate a function like this. Hope this helps!\n0" ]
[ null, "https://www.thestudentroom.co.uk/images/v2/icons/arrow_up.svg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.9176429,"math_prob":0.960993,"size":5869,"snap":"2020-45-2020-50","text_gpt3_token_len":1649,"char_repetition_ratio":0.1258312,"word_repetition_ratio":0.80350876,"special_character_ratio":0.27006304,"punctuation_ratio":0.08473282,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99617255,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-28T18:32:32Z\",\"WARC-Record-ID\":\"<urn:uuid:f795d017-d40a-4401-b2f3-dfd18c3767de>\",\"Content-Length\":\"275567\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:6f1ede1b-5199-4273-a03d-d7f368d727a1>\",\"WARC-Concurrent-To\":\"<urn:uuid:f8840b47-4a33-4d1b-915d-e5c59a9b2d27>\",\"WARC-IP-Address\":\"104.22.19.140\",\"WARC-Target-URI\":\"https://www.thestudentroom.co.uk/showthread.php?t=6290856\",\"WARC-Payload-Digest\":\"sha1:DQWQWAGWGQOS56E4XJODWNLDUEGJIJF2\",\"WARC-Block-Digest\":\"sha1:7HIQZLQPK7K5V5Y7TJPOJJWIVJEAEWY2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107900200.97_warc_CC-MAIN-20201028162226-20201028192226-00216.warc.gz\"}"}
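Carrying the thread's working above to the end (the same algebra the posters outline, just completed):

```latex
75 = x^2 + 4xh \;\Rightarrow\; h = \frac{75 - x^2}{4x}, \qquad
V = x^2 h = \frac{75x - x^3}{4}, \qquad
\frac{dV}{dx} = \frac{75 - 3x^2}{4} = 0 \;\Rightarrow\; x = 5
```

So h = 50/20 = 2.5 and the maximum volume is V = 5² × 2.5 = 62.5 cm³.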
https://www.rexygen.com/doc/ENGLISH/MANUALS/BRef/PIDAT.html
[ "PIDAT – PID controller with relay autotuner\n\nBlock SymbolLicensing group: AUTOTUNING", null, "Function Description\nThe PIDAT block has the same control function as the PIDU block. Additionally it is equipped with the relay autotuning function.\n\nIn order to perform the autotuning experiment, it is necessary to drive the system to approximately steady state (at a suitable working point), choose the type of controller to be autotuned (PI or PID) and activate the TUNE input by setting it to on. The controlled process is regulated by special adaptive relay controller in the experiment which follows. One point of frequency response is estimated from the data measured during the experiment. Based on this information the controller parameters are computed. The amplitude of the relay controller (the level of system excitation) and its hysteresis is defined by the amp and hys parameters. In case of hys=0 the hysteresis is determined automatically according to the measurement noise properties on the controlled variable signal. The signal TBSY is set to onduring the tuning experiment. A successful experiment is indicated by and the controller parameters can be found on the outputs pk, pti, ptd, pnd and pb. The c weighting factor is assumed (and recommended) c=0. A failure during the experiment causes $\\mathtt{\\text{TE}}=\\mathtt{\\text{on}}$ and the output ite provides further information about the problem. It is recommended to increase the amplitude amp in the case of error. The controller is equipped with a built-in function which decreases the amplitude when the deviation of output from the initial steady state exceeds the maxdev limit. The tuning experiment can be prematurely terminated by activating the TBRK input.\n\nInputs\n\n dv Feedforward control variable double sp Setpoint variable double pv Process variable double tv Tracking variable double hv Manual value double MAN Manual or automatic mode bool off .. Automatic mode on ... Manual mode TUNE Start the tuning experiment bool TBRK Stop the tuning experiment bool\n\nOutputs\n\n mv Manipulated variable (controller output) double de Deviation error double SAT Saturation flag bool off .. The controller implements a linear control law on ... The controller output is saturated TBSY Tuner busy flag bool TE Tuning error bool off .. Autotuning successful on ... An error occurred during the experiment ite Error code; expected time (in seconds) to finishing the tuning experiment while the tuning experiment is active long 1000 . Signal/noise ratio too low 1001 . Hysteresis too high 1002 . Too tight termination rule 1003 . Phase out of interval pk Proposed controller gain double pti Proposed integral time constant double ptd Proposed derivative time constant double pnd Proposed derivative component filtering double pb Proposed weighting factor – proportional component double\n\nParameters\n\n irtype Controller type (control law)  $\\odot$6 long 1 .... D 2 .... I 3 .... ID 4 .... P 5 .... PD 6 .... PI 7 .... PID RACT Reverse action flag bool off .. Higher mv $\\to$ higher pv on ... Higher mv $\\to$ lower pv k Controller gain $K$  $\\odot$1.0 double ti Integral time constant ${T}_{i}$  $\\odot$4.0 double td Derivative time constant ${T}_{d}$  $\\odot$1.0 double nd Derivative filtering parameter $N$  $\\odot$10.0 double b Setpoint weighting – proportional part  $\\odot$1.0 double c Setpoint weighting – derivative part double tt Tracking time constant. No meaning for controllers without integrator.  
$\\odot$1.0 double hilim Upper limit of the controller output  $\\odot$1.0 double lolim Lower limit of the controller output  $\\odot$-1.0 double iainf Type of apriori information  $\\odot$1 long 1 .... No apriori information 2 .... Astatic process (process with integration) 3 .... Low order process 4 .... Static process + slow closed loop step response 5 .... Static process + middle fast (normal) closed loop step response 6 .... Static process + fast closed loop step response k0 Static gain of the process (must be provided in case of $\\mathtt{\\text{iainf}}=\\mathtt{\\text{3}},\\mathtt{\\text{4}},\\mathtt{\\text{5}}$)  $\\odot$1.0 double n1 Maximum number of half-periods for estimation of frequency response point  $\\odot$20 long mm Maximum number of half-periods for averaging  $\\odot$4 long amp Relay controller amplitude  $\\odot$0.1 double uhys Relay controller hysteresis double ntime Length of noise amplitude estimation period at the beginning of the tuning experiment [s]  $\\odot$5.0 double rerrap Termination value of the oscillation amplitude relative error  $\\odot$0.1 double aerrph Termination value of the absolute error in oscillation phase  $\\odot$10.0 double maxdev Maximal admissible deviation error from the initial steady state  $\\odot$1.0 double\n\nIt is recommended not to change the parameters n1, mm, ntime, rerrap and aerrph.\n\n2019 © REX Controls s.r.o., www.rexygen.com" ]
[ null, "https://www.rexygen.com/doc/ENGLISH/MANUALS/BRef/HTMLimages/BRef_ENG129x.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.7566964,"math_prob":0.9236311,"size":4021,"snap":"2019-26-2019-30","text_gpt3_token_len":1023,"char_repetition_ratio":0.180234,"word_repetition_ratio":0.030701755,"special_character_ratio":0.22631186,"punctuation_ratio":0.072837636,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98306966,"pos_list":[0,1,2],"im_url_duplicate_count":[null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-17T20:35:59Z\",\"WARC-Record-ID\":\"<urn:uuid:c67e3516-5835-4e94-84bb-a1a43495a08c>\",\"Content-Length\":\"43679\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:604feb31-e01a-4f56-9b2b-4b338afb2842>\",\"WARC-Concurrent-To\":\"<urn:uuid:ed8e2257-efea-4eb9-85ba-4ab6b3f104cf>\",\"WARC-IP-Address\":\"217.198.124.118\",\"WARC-Target-URI\":\"https://www.rexygen.com/doc/ENGLISH/MANUALS/BRef/PIDAT.html\",\"WARC-Payload-Digest\":\"sha1:TV5TUF7LGY3MQCV2BWSDZRNK3CRU4M2H\",\"WARC-Block-Digest\":\"sha1:AEBV7MWOUVDRQDELT7BUXTXPPC3AT5WA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627998580.10_warc_CC-MAIN-20190617203228-20190617225228-00474.warc.gz\"}"}
https://codescracker.com/java/program/java-program-swap-two-numbers.htm
[ "# Java Program to Swap Two Numbers\n\nThis article covers a program in Java that swaps two numbers entered by user at run-time of the program.\n\n## Swap Two Numbers using Third Variable in Java\n\nThe question is, write a Java program to swap any two given numbers. The number must be received by user at run-time. The program given below is its answer:\n\n```import java.util.Scanner;\n\npublic class CodesCracker\n{\npublic static void main(String[] args)\n{\nint a, b, temp;\nScanner s = new Scanner(System.in);\n\nSystem.out.print(\"Enter the First Number: \");\na = s.nextInt();\nSystem.out.print(\"Enter the Second Number: \");\nb = s.nextInt();\n\ntemp = a;\na = b;\nb = temp;\n\nSystem.out.println(\"\\na = \" +a);\nSystem.out.println(\"b = \" +b);\n}\n}```\n\nThe snapshot given below shows the sample run of above Java program, on swapping of two given numbers, with user input 30 as first and 40 as second number:\n\nThat is, the first number say 30 gets stored in a variable, and the second number say 40 gets stored in b variable. And using the statement:\n\n`temp = a;`\n\nNow temp holds the value of a, that is 30. Again using the following statement:\n\n`a = b;`\n\nThe value of a becomes 40. And finally using the statement given below:\n\n`b = temp;`\n\nThe value of temp, that is 30 gets initialized to b. So now, b holds the value of a, and a holds the value of b. That's it.\n\nThe above program can also be created in this way:\n\n```import java.util.Scanner;\n\npublic class CodesCracker\n{\npublic static void main(String[] args)\n{\nScanner s = new Scanner(System.in);\n\nSystem.out.print(\"Enter the First Number: \");\nint a = s.nextInt();\nSystem.out.print(\"Enter the Second Number: \");\nint b = s.nextInt();\n\nSystem.out.println(\"\\n----Before Swap----\");\nSystem.out.println(\"a = \" +a);\nSystem.out.println(\"b = \" +b);\n\nint temp = a;\na = b;\nb = temp;\n\nSystem.out.println(\"\\n----After Swap----\");\nSystem.out.println(\"a = \" +a);\nSystem.out.println(\"b = \" +b);\n}\n}```\n\nHere is its sample run with same user input as of previous program's sample run:\n\n## Swap Two Numbers without using Third Variable in Java\n\nThis program does not uses any third variable like temp to swap two numbers. Rather it uses the simple addition and subtraction operation to do the job.\n\n```import java.util.Scanner;\n\npublic class CodesCracker\n{\npublic static void main(String[] args)\n{\nScanner s = new Scanner(System.in);\n\nSystem.out.print(\"Enter the First Number: \");\nint numOne = s.nextInt();\nSystem.out.print(\"Enter the Second Number: \");\nint numTwo = s.nextInt();\n\nSystem.out.println(\"\\n----Before Swap----\");\nSystem.out.println(\"numOne = \" +numOne);\nSystem.out.println(\"numTwo = \" +numTwo);\n\nnumOne = numOne + numTwo;\nnumTwo = numOne - numTwo;\nnumOne = numOne - numTwo;\n\nSystem.out.println(\"\\n----After Swap----\");\nSystem.out.println(\"numOne = \" +numOne);\nSystem.out.println(\"numTwo = \" +numTwo);\n}\n}```\n\n## Swap Two Numbers using Function in Java\n\nThis program uses a user-defined function named swap() that takes two arguments. The first argument refers to the first number, whereas the second argument refers to the second number. 
The function swaps the two passed arguments and prints the values after swap.\n\n```import java.util.Scanner;\n\npublic class CodesCracker\n{\npublic static void main(String[] args)\n{\nScanner s = new Scanner(System.in);\n\nSystem.out.print(\"Enter the First Number: \");\nint a = s.nextInt();\nSystem.out.print(\"Enter the Second Number: \");\nint b = s.nextInt();\n\nSystem.out.println(\"\\n----Before Swap----\");\nSystem.out.println(\"a = \" +a);\nSystem.out.println(\"b = \" +b);\n\nswap(a, b);\n}\n\npublic static void swap(int x, int y)\n{\nint z;\nz = x;\nx = y;\ny = z;\n\nSystem.out.println(\"\\n----After Swap----\");\nSystem.out.println(\"a = \" +x);\nSystem.out.println(\"b = \" +y);\n}\n}```\n\n## Swap Two Numbers using Bitwise Operator in Java\n\nThis program uses Bitwise operator to do the same job, that is, swapping of two numbers.\n\n```import java.util.Scanner;\n\npublic class CodesCracker\n{\npublic static void main(String[] args)\n{\nScanner s = new Scanner(System.in);\n\nSystem.out.print(\"Enter the First Number: \");\nint a = s.nextInt();\nSystem.out.print(\"Enter the Second Number: \");\nint b = s.nextInt();\n\nSystem.out.println(\"\\n----Before Swap----\");\nSystem.out.println(\"a = \" +a);\nSystem.out.println(\"b = \" +b);\n\na = a^b;\nb = a^b;\na = a^b;\n\nSystem.out.println(\"\\n----After Swap----\");\nSystem.out.println(\"a = \" +a);\nSystem.out.println(\"b = \" +b);\n}\n}```\n\nNote - The ^ is a Bitwise XOR or Bitwise Exclusive OR operator. To learn about this operator, refer to Bitwise Operators.\n\n#### Same Program in Other Languages\n\nJava Online Test\n\n« Previous Program Next Program »\n\nLike/Share Us on Facebook 😋" ]
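The two "no third variable" versions above rely only on the arithmetic and XOR identities (a + b - b = a, and a ^ b ^ b = a). As a quick cross-check of that reasoning, the trace below runs both tricks on the article's sample inputs, 30 and 40; it is written in Python purely for brevity and is an illustrative sketch, not part of the original Java article:

```
# Illustrative trace of the two "swap without a temp variable" tricks
# from the article above, using its sample inputs 30 and 40.

def swap_add_sub(a, b):
    a = a + b      # a = 70 (sum of both values)
    b = a - b      # b = 70 - 40 = 30  -> old a
    a = a - b      # a = 70 - 30 = 40  -> old b
    return a, b

def swap_xor(a, b):
    a = a ^ b      # a now holds a XOR b
    b = a ^ b      # (a ^ b) ^ b == a  -> old a
    a = a ^ b      # (a ^ b) ^ a == b  -> old b
    return a, b

print(swap_add_sub(30, 40))  # (40, 30)
print(swap_xor(30, 40))      # (40, 30)
```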
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.63805467,"math_prob":0.96372664,"size":8396,"snap":"2022-40-2023-06","text_gpt3_token_len":2050,"char_repetition_ratio":0.21508579,"word_repetition_ratio":0.1651311,"special_character_ratio":0.2608385,"punctuation_ratio":0.1462939,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9963048,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-10-02T06:34:00Z\",\"WARC-Record-ID\":\"<urn:uuid:1cd6e61b-409e-4496-bd51-45a0bdb9bb3b>\",\"Content-Length\":\"39682\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:80358404-c1fe-4096-860f-82f43c0eaa3d>\",\"WARC-Concurrent-To\":\"<urn:uuid:9b18cebb-dc3a-418a-9465-13293e4c6c08>\",\"WARC-IP-Address\":\"148.72.215.147\",\"WARC-Target-URI\":\"https://codescracker.com/java/program/java-program-swap-two-numbers.htm\",\"WARC-Payload-Digest\":\"sha1:7OAU55X5D4T3VC3GYUXNKWUS66SKOOOD\",\"WARC-Block-Digest\":\"sha1:6376LDYNAUDZ5XATFATTHZRKF5JOYYSG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030337287.87_warc_CC-MAIN-20221002052710-20221002082710-00105.warc.gz\"}"}
https://studylib.net/doc/11235482/surfaces-in-three-space-1-quick-review-of-the-conic-secti...
[ "# Surfaces in Three-Space 1 Quick Review of the Conic Sections a) Parabola", null, "```Surfaces in Three-Space\nQuick Review of the Conic Sections\na) Parabola\nb) Ellipse\nc) Hyperbola\n=1\n1\nSurfaces in Three-Space\nThe graph of a 3-variable equation which can be written in the form\nF(x,y,z) = 0 or sometimes z = f(x,y) (if you can solve for z) is a surface\nin 3D. One technique for graphing them is to graph cross-sections\n(intersections of the surface with well-chosen planes) and/or traces\n(intersections of the surface with the coordinate planes).\nWe already know of two surfaces:\na) plane\nAx + By + Cz = D\nb) sphere\n(x-h)2 + (y-k)2 + (z-l)2 = r2\nEX 1 Sketch a graph of z = x2 + y2\nand x = y2 + z2.\n2\nA cylinder is the set of all points on lines parallel to l that intersect C\nwhere C is a plane curve and l is a line intersecting C, but not in the\nplane of C.\nl\nA Quadric Surface is a 3D surface whose equation is of the second degree.\nThe general equation is\nAx2+ By2 + Cz2 + Dxy + Exz + Fyz + Gx + Hy + Iz + J = 0 ,\ngiven that A2 + B2 + C2 ≠ 0 .\nWith rotation and translation, these possibilities can be reduced to two\ndistinct types.\n1) Ax2 + By2 + Cz2 + J = 0\n2) Ax2 + By2 + Iz = 0\n3\nELLIPSOID\nHYPERBOLOID OF ONE SHEET\nHYPERBOLOID OF TWO SHEETS\nELLIPTIC PARABOLOID\nHYPERBOLIC PARABOLOID\nELLIPTIC CONE\n-\n4\nEX 2 Name and sketch these graphs\na) 9x2 + y2 - z2 = -4\nb) 9x2 + y2 - z2 = 4\nc) x2 + 4y2 - z = 0\nd) x2 + y2 = 1\ne) x2 - y2 = 25\nf)\nz = y2\n5\n```" ]
[ null, "https://s2.studylib.net/store/data/011235482_1-44dac9e58ea9f0b8580491a09393348f-768x994.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8421349,"math_prob":0.99942875,"size":1385,"snap":"2021-31-2021-39","text_gpt3_token_len":480,"char_repetition_ratio":0.112237506,"word_repetition_ratio":0.01986755,"special_character_ratio":0.33501804,"punctuation_ratio":0.048442908,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99914473,"pos_list":[0,1,2],"im_url_duplicate_count":[null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-22T06:13:08Z\",\"WARC-Record-ID\":\"<urn:uuid:4eab21fe-578c-4a71-ad59-3033c5fad346>\",\"Content-Length\":\"42562\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:d7383a5f-ff53-4cc0-88d2-cdd5297c3d10>\",\"WARC-Concurrent-To\":\"<urn:uuid:f8cbdf28-b35e-4a11-8ace-226cdf3e21fe>\",\"WARC-IP-Address\":\"172.67.175.240\",\"WARC-Target-URI\":\"https://studylib.net/doc/11235482/surfaces-in-three-space-1-quick-review-of-the-conic-secti...\",\"WARC-Payload-Digest\":\"sha1:5R5TDKSV24LPLD3WG4NIWEKGMNREAJKY\",\"WARC-Block-Digest\":\"sha1:WXVEB4CSQNMWYH267U77NSAFW5HVIVNN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057329.74_warc_CC-MAIN-20210922041825-20210922071825-00147.warc.gz\"}"}
https://bytefarm.ch/fail2ban/viewlog?ip=120.92.15.82
[ "", null, "fail2ban bad ip database: ip 120.92.15.82\n\n| ip database | live view | stats | report | help | api key:\n\n ip: 120.92.15.82 hostname: 120.92.15.82 country:", null, "[CN] China first reported: 30.10.2018 11:54.00 GMT+0200 last reported: 29.01.2019 05:44.23 GMT+0200 time period: 90d 17h 50m 23s total reports: 11 reported by: 4 host(s) filter(s): ssh (8) ssh (3) tor exit node no badips.com db Lookup", null, "port scan of '120.92.15.82':\n\n[-hide]\n```# Nmap 6.40 scan initiated Tue Oct 30 11:54:02 2018 as: /usr/bin/nmap -sU -sS -O 120.92.15.82\nNmap scan report for 120.92.15.82\nHost is up (0.28s latency).\nNot shown: 1000 open|filtered ports, 993 filtered ports\nPORT STATE SERVICE\n20/tcp closed ftp-data\n21/tcp open ftp\n22/tcp open ssh\n80/tcp open http\n443/tcp closed https\n3000/tcp closed ppp\n3306/tcp open mysql\nNo exact OS matches for host (If you know what OS is running on it, see http://nmap.org/submit/ ).\nTCP/IP fingerprint:\nOS:SCAN(V=6.40%E=4%D=10/30%OT=21%CT=20%CU=%PV=N%G=Y%TM=5BD83930%P=x86_64-pc\nOS:-linux-gnu)SEQ(SP=105%GCD=1%ISR=10A%TI=Z%CI=Z%TS=D)OPS(O1=M5B4ST11NW6%O2\nOS:=M5B4ST11NW6%O3=M5B4NNT11NW6%O4=M5B4ST11NW6%O5=M5B4ST11NW6%O6=M5B4ST11)W\nOS:IN(W1=3890%W2=3890%W3=3890%W4=3890%W5=3890%W6=3890)ECN(R=Y%DF=Y%TG=40%W=\nOS:3908%O=M5B4NNSNW6%CC=Y%Q=)T1(R=Y%DF=Y%TG=40%S=O%A=S+%F=AS%RD=0%Q=)T2(R=N\nOS:)T3(R=N)T4(R=Y%DF=Y%TG=40%W=0%S=A%A=Z%F=R%O=%RD=0%Q=)T5(R=Y%DF=Y%TG=40%W\nOS:=0%S=Z%A=S+%F=AR%O=%RD=0%Q=)T6(R=Y%DF=Y%TG=40%W=0%S=A%A=Z%F=R%O=%RD=0%Q=\nOS:)T7(R=Y%DF=Y%TG=40%W=0%S=Z%A=S+%F=AR%O=%RD=0%Q=)U1(R=N)IE(R=N)\n\nOS detection performed. Please report any incorrect results at http://nmap.org/submit/ .\n# Nmap done at Tue Oct 30 11:57:52 2018 -- 1 IP address (1 host up) scanned in 230.58 seconds\n```\n```Σ = 46 | Δt = 0.003978967666626s\n```" ]
[ null, "https://bytefarm.ch/fail2ban/images/endlessknot.png", null, "https://bytefarm.ch/fail2ban/images/flags/China.png", null, "https://chart.googleapis.com/chart", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.5040353,"math_prob":0.87037534,"size":1367,"snap":"2019-26-2019-30","text_gpt3_token_len":672,"char_repetition_ratio":0.12912692,"word_repetition_ratio":0.0,"special_character_ratio":0.48207754,"punctuation_ratio":0.11253197,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99694324,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,10,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-16T16:44:23Z\",\"WARC-Record-ID\":\"<urn:uuid:ac6b94dd-7b4e-4ae2-8023-6c4edf7c7c00>\",\"Content-Length\":\"6760\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f8b4ff9f-0da0-43ff-9f03-966d72d8a366>\",\"WARC-Concurrent-To\":\"<urn:uuid:11dba07c-0e63-4a5e-9efb-44171418825f>\",\"WARC-IP-Address\":\"178.62.246.20\",\"WARC-Target-URI\":\"https://bytefarm.ch/fail2ban/viewlog?ip=120.92.15.82\",\"WARC-Payload-Digest\":\"sha1:L6ZJM672HD2T5FVPOGO3FZKRBBQSM35U\",\"WARC-Block-Digest\":\"sha1:ACYLJWOAVID3DGLPHLPG5PE3WJTV4ERQ\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627998288.34_warc_CC-MAIN-20190616162745-20190616184745-00106.warc.gz\"}"}
https://gis.stackexchange.com/questions/382418/haversine-distance-versus-euclidean-on-an-eqc-equi-distance-projection
[ "# Haversine distance versus Euclidean on an eqc \"equi-distance\" projection\n\nI've got a network covering a large area, but the individual links are fairly small (<1km). For calculating edge lengths I'm trying to decide whether it would be better to use Haversine distance on the decimal degrees or Euclidean distance on all the geometries converted into a CRS designed for distance measurements.\n\nOption 1: Haversine Distance on the (lon, lat) of endpoints in 'epsg:4326' (python code for reference):\n\n``````####==== Calculate the great circle distance in meters for two lat/lon points\ndef haversineDist(lon1, lat1, lon2, lat2):\n# convert decimal degrees to radians\nlon1, lat1, lon2, lat2 = map(math.radians, [lon1, lat1, lon2, lat2])\ndlon = lon2 - lon1\ndlat = lat2 - lat1\ntheAngle = math.sin(dlat/2)**2 + math.cos(lat1) * math.cos(lat2) * math.sin(dlon/2)**2\nreturn 6367000 * 2 * math.asin(math.sqrt(theAngle)) ## distance in meters\n``````\n\nOption 2: Pick a center point to set the CRS, convert the geometries to that CRS, and calculate edge lengths using Euclidean distance. For example, use the CRS `+proj=eqc +lat_0=35.6812 +lon_0=139.7671 +units=m` for the area around Tokyo.\n\nI think Option 2 is more accurate for areas fairly close to the reference point, but what if I am also measuring lengths of edges several degrees away, such as around `43.52, 141.62`? If I want to keep the measurements of length consistent and easily reproducible, then Option 1 seems better.\n\nI am still fairly new to these considerations, so there may be even better options that I am not aware of.\n\n• So, none of the GIS experts that supposedly use this site have any advise to offer on this point? It seems like it would be a fairly common consideration for geospatial analyses, so I expected there to be a canonical answer that I just can't find. Maybe GIS is all just guesswork for everybody (intentionally provocative). Jan 27, 2021 at 9:17" ]
[ null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8435527,"math_prob":0.9774794,"size":1483,"snap":"2023-40-2023-50","text_gpt3_token_len":394,"char_repetition_ratio":0.10209601,"word_repetition_ratio":0.0,"special_character_ratio":0.27781525,"punctuation_ratio":0.14478114,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9887555,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-04T01:23:57Z\",\"WARC-Record-ID\":\"<urn:uuid:778b0672-23a5-4a5d-b489-88f4160163e5>\",\"Content-Length\":\"154310\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:631cae8e-d48c-4c84-9f6f-86925128b52f>\",\"WARC-Concurrent-To\":\"<urn:uuid:cec1a4c5-e320-40ab-ba65-e67baf09987a>\",\"WARC-IP-Address\":\"104.18.10.86\",\"WARC-Target-URI\":\"https://gis.stackexchange.com/questions/382418/haversine-distance-versus-euclidean-on-an-eqc-equi-distance-projection\",\"WARC-Payload-Digest\":\"sha1:7DLJG4HXA336JONXPB2X655KI3V7TTC4\",\"WARC-Block-Digest\":\"sha1:S2H6AE373B5HG5XWYLHJ6I3DEFLE7WIT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233511284.37_warc_CC-MAIN-20231003224357-20231004014357-00771.warc.gz\"}"}
https://answers.opencv.org/question/1890/how-to-estimate-the-weber-contrast-of-an-image/
[ "# how to estimate the (weber) contrast of an image?\n\nHi, can someone help me to calculate the contrast of an luminance-image? I've already implement a method to calculate the contrast of color and a approach for the luminance- contrast. I think the weber-contrast is a good solution. Isn't it?\n\nThe formula i found on Wikipedia:", null, "there I representing the luminance of the features and I_b the background luminance.\n\nFor my implementation i use JavaCV. The code is:\n\npublic double analyse(CvMat input) {\ndouble contrast = 0;\n\n// convert to lab to extract luminance channel\n\ncvCvtColor(input, input, CV_BGR2Lab);\n\nCvMat background = cvCreateMat(input.rows(), input.cols(), CV_8UC1);\ncvSplit(input, background, null, null, null);\n\n//calc local background\ncvSmooth(background, background, CV_BLUR, 5);\nJavaCVUtil.showImage(background.asIplImage(), \"\");\nint width = input.cols();\nint height = input.rows();\n\nfor (int y = 0; y < height; y++) {\n\nfor (int x = 0; x < width; x++) {\ncontrast += (input.get(y, x, 0) - background.get(y, x))\n/ background.get(y, x);\n\n}\n}\n//normalize\ncontrast /= (height * width);\nreturn contrast\n\n}\n\n\nMaybe someone can say me what's wrong with this code. For example, for the following image i get a NaN Error:", null, "greetings\n\nedit retag close merge delete\n\nSort by » oldest newest most voted", null, "Could it be that your background(x,y) luminance gets a value of zero for a certain pixel in the loop where you accumulate contrast. I do not see you checking for that. Thus, if there is a single pixel with a luminance of zero in the image, you will get the exception.\n\nmore\n\nOfficial site\n\nGitHub\n\nWiki\n\nDocumentation" ]
[ null, "http://upload.wikimedia.org/wikipedia/en/math/8/3/3/8338f7d96a60e909afd98078c3469acd.png", null, "https://answers.opencv.org/upfiles/13462517147368723.jpg", null, "https://www.gravatar.com/avatar/d4ddf9dd040b42f80911e4a53010dfc7", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.6141838,"math_prob":0.8886324,"size":1223,"snap":"2020-45-2020-50","text_gpt3_token_len":313,"char_repetition_ratio":0.15586546,"word_repetition_ratio":0.010810811,"special_character_ratio":0.26901063,"punctuation_ratio":0.21666667,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96671796,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,2,null,4,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-21T08:48:35Z\",\"WARC-Record-ID\":\"<urn:uuid:51b2e14b-e6d5-468a-9268-f5e4397e0df2>\",\"Content-Length\":\"54431\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3a7d5510-3796-4e62-abfb-1ed0a5618735>\",\"WARC-Concurrent-To\":\"<urn:uuid:32fd345b-bceb-4ae5-8ea2-29b0d59a0d84>\",\"WARC-IP-Address\":\"5.9.49.245\",\"WARC-Target-URI\":\"https://answers.opencv.org/question/1890/how-to-estimate-the-weber-contrast-of-an-image/\",\"WARC-Payload-Digest\":\"sha1:AJ772LEG3IDMMGUVYC6WMGOEAN23CAHR\",\"WARC-Block-Digest\":\"sha1:4RAFVAQXMK42ISNCGJ57KV2EVYPQQNYE\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107876136.24_warc_CC-MAIN-20201021064154-20201021094154-00230.warc.gz\"}"}
http://phpstudy.php.cn/php/1939.html
[ "## PHP和Mysqlweb应用开发核心技术-第1部分 Php基础-2 php语言介绍\n\n.php字符串中的变量扩展系统\n.php中可用的更多数据类型\n.类型之间的转换\n.输入和使用变量和常量\n.如何在php中构建表达式以及构建表达式所需的操作符\n.使用语言中可用的控制结构\n.1 输入字符串的更多介绍\n\n<?php\n\\$hour = 16;\n\\$kilometres = 4;\n\\$content = \"cookie\";\necho \" 4pm in 24 hour time is {\\$hour}00 hours.<br/>\\n\";\necho <<<DONE\nThere are {\\$kilometres}000m in {\\$kilometres}km.<br/>\nThe jar is now, indeed, full of \\${content}s.<br/>\nDONE;\n?>\n\nThere are 4000m in 4km.\nThe jar is now, indeed, full of cookies.\n\n.2 数据类型的更多介绍\n1.数组:使用array方法来声明数组。它获得一组初始值并且返回保存所有这些值的数组对象,在默认情况下,把从0开始的整数名称或者键(key)赋给数组中的值\n,也可以指定要添加的新项的索引。\\$frunit=\"nespola\";但是你也可以使用字符串值指定键,而不使用赋值给它的默认数字。\n\\$myfavourite=array(\"car\"=>\"ferrari\",\"number“=>21,\"city\"=>\"ouagadougou\");\n\n\\$a + \\$b 联合 \\$a 和 \\$b 的联合。\n\\$a == \\$b 相等 如果 \\$a 和 \\$b 具有相同的键/值对则为 TRUE。\n\\$a === \\$b 全等 如果 \\$a 和 \\$b 具有相同的键/值对并且顺序和类型都相同则为 TRUE。\n\\$a != \\$b 不等 如果 \\$a 不等于 \\$b 则为 TRUE。\n\\$a <> \\$b 不等 如果 \\$a 不等于 \\$b 则为 TRUE。\n\\$a !== \\$b\n\n<?php\n\\$a = array(\"a\" => \"apple\", \"b\" => \"banana\");\n\\$b = array(\"a\" => \"pear\", \"b\" => \"strawberry\", \"c\" => \"cherry\");\n\\$c = \\$a + \\$b; // Union of \\$a and \\$b\necho \"Union of \\\\$a and \\\\$b: \\n\";\nvar_dump(\\$c);\n\\$c = \\$b + \\$a; // Union of \\$b and \\$a\necho \"Union of \\\\$b and \\\\$a: \\n\";\nvar_dump(\\$c);\n?>\n\nUnion of \\$a and \\$b: array(3) { [\"a\"]=> string(5) \"apple\" [\"b\"]=> string(6) \"banana\" [\"c\"]=> string(6) \"cherry\" } Union of \\$b and \\$a: array(3) { [\"a\"]=> string(4) \"pear\" [\"b\"]=> string(10) \"strawberry\" [\"c\"]=> string(6) \"cherry\" } 2.2.2对象 将在第四单元中面向对象的程序设计中使用。 2.2.3 特殊类型和值 NULL 是php中的特殊类型和值 ,它表示\"无值\".符合如下要求它就是null: .它们被设置为区分大小写的关键字null; .它们从没有赋值过 .使用unset方法明确清除了它们。 资源:有时候,php需要处理不一定来自php的对象,比如数据库或者操作系统对象的句柄。它们称为资源的特殊变量.\n.3 强制类型转换\n2.3.1 基础\n\n.二进制运算操作符\n.布尔表达式和表达式操作符\n.需要字符串的特定方法--特定方法和操作符,比如echo\\print或者字符串连接符(.)\n\n(int)\\(interger)\n(string)-转换为文本字符串\n(object)-转换为对象\n2.3.2 特殊强制类型转换\n\n(int)4.999\n\n(float)true=1.0\n\nnull转为空字符串('\").\n\nnull和其他未设置的变量被转换为有0个元素的空数组\n\n2.3.3 有用的强制类型转换函数\nis_type()\n.is_integer,.is_float,.is_bool,is_null,.is_object.返回布尔型 ,指出特定变量是否属于适当的类型 。\ngettype()是非常有用的例程,它告诉你php当前认为变量或者表达式是什么类型。不推荐使用这个转换函数。\nsettype()使用两个参数:要转换的变量和要转换为的类型 ,它表示字符串。\n.4 变量和常量\n2.4.1 定义常量\n\n2.4.2 按值 和按引用的变量\n\n\\$a=123;\n\\$b=&\\$a;\n2.4.3 变量的范围\n\n2.4.4 变量的生存期\n\n2.4.5 预定义变量\nphp提供很多预定义变量,它们给出操作环境的信息,大多是超级全局数组例如:\n\\$GLOBALS-它包含正在执行的脚本内部全局可用的所有变量的引用\n\\$_SERVER-脚本所在周边环境的信息\n\\$_SESSION、\\$_COOKIE-它包含管理访问者和关于称为\"cookie“的存储方式的信息\n\\$_REQUEST-它包含\\$_post、\\$_GET和\\$_session数组\n\\$_ENV-它包含php语言引擎所在的进程的环境变量.数组的键是环境变量的名称。\n\\$php_errormsg-它保存php语言引擎在执行当前脚本时生成的最新的错误信息.\n.5 表达式和操作符\n2.5.1 操作符:组合表达式\n\n-\\$a 取反 \\$a 的负值。\n\\$a + \\$b 加法 \\$a 和 \\$b 的和。\n\\$a - \\$b 减法 \\$a 和 \\$b 的差。\n\\$a * \\$b 乘法 \\$a 和 \\$b 的积。\n\\$a / \\$b 除法 \\$a 除以 \\$b 的商。\n\\$a % \\$b 取模 \\$a 除以 \\$b 的余数。\n\n\\$a == \\$b 等于 TRUE,如果 \\$a 等于 \\$b。\n\\$a === \\$b 全等 TRUE,如果 \\$a 等于 \\$b,并且它们的类型也相同。(PHP 4 引进)\n\\$a != \\$b 不等 TRUE,如果 \\$a 不等于 \\$b。\n\\$a <> \\$b 不等 TRUE,如果 \\$a 不等于 \\$b。\n\\$a !== \\$b 非全等 TRUE,如果 \\$a 不等于 \\$b,或者它们的类型不同。(PHP 4 引进)\n\\$a < \\$b 小与 TRUE,如果 \\$a 严格小于 \\$b。\n\\$a > \\$b 大于 TRUE,如果 \\$a 严格 \\$b。\n\\$a <= \\$b 小于等于 TRUE,如果 \\$a 小于或者等于 \\$b。\n\\$a >= \\$b 大于等于 TRUE,如果 \\$a 大于或者等于 \\$b。\n\n\\$a and \\$b And(逻辑与) TRUE,如果 \\$a 与 \\$b 都为 TRUE。\n\\$a or \\$b Or(逻辑或) TRUE,如果 \\$a 或 \\$b 任一为 TRUE。\n\\$a xor \\$b Xor(逻辑异或) TRUE,如果 \\$a 或 \\$b 任一为 TRUE,但不同时是。\n\\$a Not(逻辑非) TRUE,如果 \\$a 不为 TRUE。\n\\$a&& \\$b And(逻辑与) TRUE,如果 \\$a 与 \\$b 
都为 TRUE。\n\\$a || \\$b Or(逻辑或) TRUE,如果 \\$a 或 \\$b 任一为 TRUE。\n\n\\$a & \\$b And(按位与) 将把 \\$a 和 \\$b 中都为 1 的位设为 1。\n\\$a|| \\$b Or(按位或) 将把 \\$a 或者 \\$b 中为 1 的位设为 1。\nxor ^ \\$b Xor(按位异或) 将把 \\$a 和 \\$b 中不同的位设为 1。\nNot \\$a Not(按位非) 将 \\$a 中为 0 的位设为 1,反之亦然。\n\\$a << \\$b Shift left(左移) 将 \\$a 中的位向左移动 \\$b 次(每一次移动都表示“乘以 2”)。\n\\$a >> \\$b Shift right(右移) 将 \\$a 中的位向右移动 \\$b 次(每一次移动都表示“除以 2”)。\n\n\\$a + \\$b 联合 \\$a 和 \\$b 的联合。\n\\$a == \\$b 相等 如果 \\$a 和 \\$b 具有相同的键/值对则为 TRUE。\n\\$a === \\$b 全等 如果 \\$a 和 \\$b 具有相同的键/值对并且顺序和类型都相同则为 TRUE。\n\\$a != \\$b 不等 如果 \\$a 不等于 \\$b 则为 TRUE。\n\\$a <> \\$b 不等 如果 \\$a 不等于 \\$b 则为 TRUE。\n\\$a !== \\$b 不全等 如果 \\$a 不全等于 \\$b 则为 TRUE。\n\n\\$a=10;\n\\$b=\\$a++; b=10 ,a=11;\n\\$c=++\\$a; c=12,a=12;\n\\$d=\\$a--; d=12,a=11;\n\\$e=--\\$a; e=10,a=10;\n\n2.5.2 组合表达式和操作符的过程\n\n.6 控制结构\n2.6.1 if语句\n1. if (expr)\nstatement\nelse\n2. elseif/else if 2.6.2 switch语句\n\n<?php\nif (\\$a == 5):\necho \"a equals 5\";\necho \"...\";\nelseif (\\$a == 6):\necho \"a equals 6\";\necho \"!!!\";\nelse:\necho \"a is neither 5 nor 6\";\nendif;\n?>\n\nswitch 语句和具有同样表达式的一系列的 IF 语句相似。很多场合下需要把同一个变量(或表达式)与很多不同的值比较,并根据它等于哪个值来执行不同的代码。   这正是 switch 语句的用途。\n\n<?php\nif (\\$i == 0) {\necho \"i equals 0\";\n} elseif (\\$i == 1) {\necho \"i equals 1\";\n} elseif (\\$i == 2) {\necho \"i equals 2\";\n}\nswitch (\\$i) {\ncase 0:\necho \"i equals 0\";\nbreak;\ncase 1:\necho \"i equals 1\";\nbreak;\ncase 2:\necho \"i equals 2\";\nbreak;\n}\n?>\n\n2.6.3 while/do ....while循环\nwhile(expr)\nblock\ndo\nblock\nwhile (expr);\n\n<?php\ndo {\nif (\\$i < 5) {\necho \"i is not big enough\";\nbreak;\n}\n\\$i *= \\$factor;\nif (\\$i < \\$minimum_limit) {\nbreak;\n}\necho \"i is ok\";\n/* process i */\n} while(0);\n?>\n\n2.6.4 for 循环\nfor(expr1;expr2;expr3)\nblock\nexpr1:当第一次遇到FOR循环执行它一次。执行完毕后开始循环迭代。\nexpr2:在每次迭代之前计算它。如为true,就执行代码块。\nexpr3-在每次迭代之后计算它\n\n<?php\n/* example 1 */\nfor (\\$i = 1; \\$i <= 10; \\$i++) {\necho \\$i;\n}\n/* example 2 */\nfor (\\$i = 1; ; \\$i++) {\nif (\\$i > 10) {\nbreak;\n}\necho \\$i;\n}\n/* example 3 */\n\\$i = 1;\nfor (;;) {\nif (\\$i > 10) {\nbreak;\n}\necho \\$i;\n\\$i++;\n}\n/* example 4 */\nfor (\\$i = 1, \\$j = 0; \\$i <= 10; \\$j += \\$i, print \\$i, \\$i++);\n?>\n\n2.6.5 foreach循环:用于特定类型。在5单元中进行更多讲解\n2.6.6 中断循环 :break 和continue\n\nxhEditor:基于jQuery的高效的XHTML编辑器\nHTML表格标记教程(3):宽度和高度属性WIDTH、HEIGHT\nCSS教程:CSS命名参考\n\nJavaScript获取页面上某个元素的代码\nJSP设计模式\n\nPDO_MYSQL的一些预定义常量\nIIS vs.Apache: 哪个才是安全首选?\n\nMootools 1.2教程(2) DOM选择器\n\nCopyright © 2016 phpStudy |" ]
[ null ]
{"ft_lang_label":"__label__zh","ft_lang_prob":0.9027924,"math_prob":0.98450834,"size":6766,"snap":"2019-13-2019-22","text_gpt3_token_len":4350,"char_repetition_ratio":0.107364684,"word_repetition_ratio":0.14660195,"special_character_ratio":0.36639076,"punctuation_ratio":0.1727672,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9779326,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-05-25T01:05:34Z\",\"WARC-Record-ID\":\"<urn:uuid:2f03097d-17f6-4543-b69f-d549da968f71>\",\"Content-Length\":\"29611\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:25c11add-7339-4f4c-9815-4f7fc1be0d15>\",\"WARC-Concurrent-To\":\"<urn:uuid:8a3586d5-267e-4966-8d78-f5a2cfbfbfe1>\",\"WARC-IP-Address\":\"101.227.0.134\",\"WARC-Target-URI\":\"http://phpstudy.php.cn/php/1939.html\",\"WARC-Payload-Digest\":\"sha1:MY57RKXSHIN2V3V2MCEGZBWYH6UUGKQE\",\"WARC-Block-Digest\":\"sha1:ABNO4NSV2WVNZI44PR6JDLWTHCQIB5QB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-22/CC-MAIN-2019-22_segments_1558232257845.26_warc_CC-MAIN-20190525004721-20190525030721-00242.warc.gz\"}"}
https://www.allaboutcircuits.com/technical-articles/use-the-xilinx-system-generator-to-implement-a-simple-dds/
[ "Technical Article\n\n# Use the Xilinx System Generator to Implement a Simple DDS\n\nJuly 02, 2018 by Steve Arar\n\n## In this article, we’ll discuss implementing a simple direct digital synthesizer (DDS) using the Xilinx System Generator.\n\nIn this article, we’ll discuss implementing a simple direct digital synthesizer (DDS) using the Xilinx System Generator.\n\nSystem Generator is a powerful tool that integrates Xilinx FPGA design process with MATLAB’s Simulink which uses a high-level description to easily realize a complex system. We first design the system and verify its functionality in the Simulink environment. The graphical high-level description of Simulink significantly facilitates modeling, simulating, and analyzing the design. Then, we can generate the VHDL description of the design and add it to our project in the Xilinx ISE software.\n\nIn this article, we’ll discuss implementing a simple direct digital synthesizer (DDS) using the Xilinx System Generator.\n\n### Starting the System Generator\n\nBefore launching the System Generator, you should note two points:\n\n1. Make sure that your System Generator version is compatible with the MATLAB version that you’re going to use.\n2. Associate the MATLAB software with your System Generator.\n\nIn this article, I will use ISE 14.7 with MATLAB 2013a. The recommended way to start the System Generator is by choosing \"Xilinx Design Tools\\ISE Design Suite 14.7\\System Generator\\Sysgen Generator\" from “All Programs” menu of Windows. This will open the “Simulink Library Browser” which is shown in Figure 1.", null, "##### Figure 1\n\nAs shown in the figure, the following three Xilinx categories are added to the list of the libraries:\n\n1. Xilinx Blockset\n2. Xilinx Reference Blockset\n3. Xilinx XtremeDSP Kit\n\nIn this article, we will use the blocks provided in the “Xilinx Blockset” to implement a simple DDS as shown in Figure 2.", null, "### Creating a System Generator Model for the DDS\n\nTo create a new Simulink model, choose File\\New\\Model. This will open the following blank window which allows us to describe the block diagram of Figure 2.", null, "##### Figure 3\n\nIn the rest of the article, we will add the required building blocks and review the important settings in each block’s dialog box. For more information about the configurable parameters of the different blocks, please refer to this Xilinx document.\n\nThe first block that we need is an adder. We can use the “AddSub” block that can be found in the “Xilinx Blockset\\math” category. Figure 4 below shows the symbol and the “Basic” tab of the configurable parameters for this block.", null, "##### Figure 4\n\nThe block has two inputs (a and b) and one output which gives a+b. We will leave the parameters of the “Basic” tab as they are. The settings of the “Output” tab is shown in Figure 5. This tab sets the parameters of the output. To have a 16-bit accumulator, we choose the “User Defined” option which allows us to set the “Number of bits” to 16 and the “Binary point” to zero. This means that the output is a 16-bit integer. For the DDS of Figure 2, we don’t need to define a fractional output but, if we had set the “Number of bits” to 16 and the “Binary point” to 14, 14 bits out of the total 16 bits of the output will be considered to the right of the binary point. To read more about the fixed-point representation in general, refer to this article.", null, "##### Figure 5\n\n“Arithmetic type” and “Overflow” are two other parameters that are important to us. 
The “Arithmetic type” should be unsigned because the output of the accumulator is interpreted as an unsigned number. The “Overflow” should be set to “Wrap” because the accumulator should roll over when reaching its maximum.\n\nFigure 6 shows the “Implementation” tab of the block. In this page, you can choose to implement the adder using either the Fabric or the DSP48 slices. For a discussion about the difference between the two choices, refer to the mentioned article. We will leave it as it’s by default, i.e. implement using the Fabric.", null, "### Registers\n\nNext, we will add the set of registers at the output of the adder. Registers can be implemented using the “Delay” block found in the “Xilinx Blockset\\Basic Elements” category. We will keep all the settings of this block as they are.\n\nConnecting the “AddSub” block to the “Delay” block, we obtain the schematic shown in Figure 7.", null, "### Quantizer\n\nNow, we need to add the “Quantizer” of Figure 2 which passes the p most significant bits (MSBs) of the accumulator to the lookup table (LUT) and discards the other bits. This functionality can be achieved with the “Slice” block which is in the “Xilinx Blockset\\Basic Elements” category. The symbol and the configurable parameters dialog box for a “Slice” is shown in Figure 8.", null, "##### Figure 8\n\nThe parameter “Width of slice (number of bits)” specifies the number of bits to extract from the input. Assuming that eight MSBs of the accumulator output must be conveyed to the LUT, we know that the width of the output is eight, so we set the “Width of slice” to eight as shown in the figure.\n\nWe also need to specify which bit positions of the input are taken to form the eight-bit output of the “Slice” block. This can be done using the “Specify range as” parameter. There are three options for this parameter. We choose the “Upper bit location + width”. We should specify the location of the upper bit from the input that will be passed on to the MSB of the “Slice” output. Since we want the eight MSBs of the input, the upper bit will be the MSB of the input. Hence, we set the “Offset of top bit” to zero and “Relative to” to “MSB of input”. This means that the upper desired bit has zero offset relative to the MSB of the input. The width of the slice is already specified in the “Width of slice” parameter. Hence, the desired range of the input is fully specified. For more information about this block, refer to page 327 of this Xilinx document.\n\n### ROM\n\nWe will add a “ROM” from “Xilinx Blockset\\memory” category. The symbol and the configurable parameters dialog box for a “ROM” is shown in Figure 9. Since the address input of the ROM is p=8 bits wide, we should set the “Depth” of the ROM to $$2^8=256$$ . As shown in the figure, we can easily write a mathematical expression to specify the content of the ROM. The mathematical expression in Figure 9 generates 256 samples from one period of a sinusoid. These samples will be taken based on the data precision that will be specified in the “Output” tab. You can also choose the memory type which can be implemented as either a “Distributed memory” or a “Block RAM”. We will use a Block RAM to implement the required memory.\n\nThe data format of the stored values can be chosen under the “Output” tab. We will use the default parameters as shown in Figure 10. Note that the “Arithmetic type” is “signed (2’s comp)” because our samples include negative values. 
From the total 16 bits of the output only two bits are allocated to the integer part because the maximum and minimum of the samples are +1 and -1, respectively. The remaining bits are used to represent the fractional value of the samples.", null, "##### Figure 9", null, "### Data Type Conversion Between the Xilinx Portion and Simulink\n\nConnecting the discussed blocks according to Figure 2, we obtain the schematic shown in Figure 11.", null, "##### Figure 11\n\nThe model is almost complete but we need some other blocks to simulate the system. Unlike the Xilinx blockset which uses fixed-point numbers to represent different values; the Simulink environment has its own data type. For example, Simulink may employ the “double” data type which is a 64-bit two’s complement floating-point number. That’s why we need some blocks to perform data type conversion when transferring the data from Simulink to the Xilinx portion of Figure 11 or when transferring the output of Figure 11 to the Simulink environment. This can be achieved using the “Gateway In” and “Gateway Out” blocks as shown in Figure 12.", null, "##### Figure 12\n\nWe’ll look at the parameters of the “Gateway In” and “Gateway Out” blocks in a minute but, before that, you should note that two other blocks, “Step” at the input and “Scope” at the output, are added to the model. As you can see, after adding the “Gateway In” and “Gateway Out” blocks, we can apply an input to our model using the Simulink general “Source” blocks. Or, we can monitor the outputs of the system by means of the Simulink general “Sink” blocks. In Figure 12, we have applied a “Step” to the phase increment input of the DDS. Also, the output of the ROM is monitored using a Simulink “Scope” block.\n\nIn addition to the data type conversion, the “Gateway In” and “Gateway Out” blocks define the top-level ports of the HDL design that will be later obtained from the Simulink model. For example, putting a “Gateway In” block before the “a” input of the adder, we let the system generator know that the “a” input is actually an input of the top-level design. Similarly, the “Gateway Out” defines the top-level outputs of the HDL design.\n\nThe configurable parameters dialog box for a “Gateway In” block is as shown in Figure 13.", null, "##### Figure 13\n\nWe should set the “Arithmetic type” to “Unsigned” and choose the “Number of bits” equal to 16 with no fractional bits, i.e. “Binary point”=0. Hence, any input from the Simulink environment will be represented as a 16-bit fixed-point unsigned number in the Xilinx portion of the design. Note that the format that we specified here is consistent with the data format of the Add/Sub block. We will leave the other parameters of the block as they are by default.\n\nThe “Gateway Out” block automatically detect the fixed-point format of its driving stage. That’s why, here, we can use the default settings of the block.\n\n### Xilinx System Generator Block\n\nAny Simulink model that uses Xilinx blocks must include a “System Generator” block. This block allows us to control the system and simulation parameters. It also handles HDL code generation. The symbol and the dialog box for the “System Generator” block are shown in Figure 14.", null, "##### Figure 14\n\nSystem Generator compiles the design into low-level representations. We can choose the type of the low-level representation from the “Compilation” parameter of the dialog box. In this article, we will choose “HDL Netlist” as shown in the figure. 
This generates a collection of HDL and some auxiliary files that can be processed by a synthesis tool. As you can see in the figure, we have chosen “XST” and “VHDL” as the synthesis tool and the HDL, respectively.\n\nYou should also choose the target device from the “Part” parameter of the dialog box and give the software the destination folder to store the generated files. Figure 15 below shows the “Clocking” tab of the dialog box.", null, "##### Figure 15\n\nThe first parameter “FPGA clock period (ns)” defines the period of the desired clock for the design. This parameter can be passed on to the synthesis tool in the next stages of the design. It can guide the synthesis software to choose an appropriate implementation based on the clock requirements of the design. In the above figure, the “FPGA clock period” is set to 10 nanoseconds. This means that we expect the design to be run with a clock period of 10ns on the board. Simulink can use a normalized form of this clock period in its simulations. This normalization is specified by the “Simulink system period (sec)” parameter of Figure 15. By setting this parameter to 1, every 10 ns of the hardware implementation will be represented by 1 second in the Simulink environment.\n\n### Simulation\n\nTo simulate the design, we set the “Step” input to go from 264 to 528 at “Step time” 500. You can use other arbitrary parameters for this block. Finally, we are ready to simulate our DDS model which is shown in Figure 16.", null, "##### Figure 16\n\nBy clicking the “Run” button of Simulink, we get the following curve on the “Scope”.", null, "##### Figure 17\n\nSince at “Step time”=500, the input goes from 264 to 528, the output frequency increases by a factor of two.\n\n### Adding the Design to an ISE Project\n\nAfter setting all the parameters of the design, we can generate the VHDL description of the model by pushing the “Generate” button in Figure 14. This will produce a “.sgp” file which can be added to an ISE project.\n\nBy choosing “Add source” from an ISE project, we can include the .sgp file in our top-level design. Now, the added file can be used just the way we use an IP core. By clicking on the added file and choosing “View HDL Instantiation Template”, we can find the template to use the component.\n\nTo read about using IP cores and VHDL components, refer to Use the Xilinx CORDIC Core to Easily Generate Sine and Cosine Functions and How to Use VHDL Components to Create a Neat Hierarchical Design, respectively.\n\n### Conclusion\n\nIn this article, we used the “System Generator” to implement a simple DDS. The high-level graphical capabilities of Simulink allows us to easily model a complex digital system. After verifying the functionality of the design in the Simulink environment, we can generate the VHDL description of the design and add it to our project in the Xilinx ISE software.", null, "" ]
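The article's DDS is simple enough to model behaviorally outside Simulink, which is a handy sanity check of the block parameters before generating HDL. The sketch below is an illustrative Python model (my own, not something produced by System Generator) of the same structure: a 16-bit wrap-around phase accumulator, a slice of the 8 MSBs, and a 256-entry sine ROM. Stepping the phase increment from 264 to 528 doubles the output frequency, matching Figure 17, and with the standard DDS tuning relation f_out = M * f_clk / 2^16 the 10 ns clock period (100 MHz) gives roughly 403 kHz for M = 264.

```
import math

# Behavioral model of the article's DDS: 16-bit accumulator, 8-bit quantizer,
# 256-entry sine ROM (parameters taken from the System Generator design).
ACC_BITS = 16
ROM_ADDR_BITS = 8
ROM = [math.sin(2 * math.pi * n / 2**ROM_ADDR_BITS) for n in range(2**ROM_ADDR_BITS)]

def dds(phase_increments):
    """Yield one sine sample per clock for a stream of phase-increment words."""
    acc = 0
    for m in phase_increments:
        acc = (acc + m) & (2**ACC_BITS - 1)          # wrap-around accumulator
        address = acc >> (ACC_BITS - ROM_ADDR_BITS)  # keep the 8 MSBs (the "Slice" block)
        yield ROM[address]

if __name__ == "__main__":
    # Mimic the Step source: increment 264 for 500 clocks, then 528.
    increments = [264] * 500 + [528] * 500
    samples = list(dds(increments))
    # With a 100 MHz clock (10 ns period), f_out = M * 100e6 / 2**16:
    for m in (264, 528):
        print(f"M = {m}: f_out = {m * 100e6 / 2**ACC_BITS / 1e3:.1f} kHz")
```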
[ null, "https://www.allaboutcircuits.com/uploads/articles/Fig1m6212018.png", null, "https://www.allaboutcircuits.com/uploads/articles/Fig2m6212018.png", null, "https://www.allaboutcircuits.com/uploads/articles/Fig3m6212018.png", null, "https://www.allaboutcircuits.com/uploads/articles/Fig4m6212018.png", null, "https://www.allaboutcircuits.com/uploads/articles/Fig5m6212018.png", null, "https://www.allaboutcircuits.com/uploads/articles/Fig6m6212018.png", null, "https://www.allaboutcircuits.com/uploads/articles/Fig7m62120181.png", null, "https://www.allaboutcircuits.com/uploads/articles/Fig8m6212018.png", null, "https://www.allaboutcircuits.com/uploads/articles/Fig9m6212018.png", null, "https://www.allaboutcircuits.com/uploads/articles/Fig10m6212018.png", null, "https://www.allaboutcircuits.com/uploads/articles/Fig11m6212018.png", null, "https://www.allaboutcircuits.com/uploads/articles/Fig12m6212018.png", null, "https://www.allaboutcircuits.com/uploads/articles/Fig13m6212018.png", null, "https://www.allaboutcircuits.com/uploads/articles/Fig14m6212018.png", null, "https://www.allaboutcircuits.com/uploads/articles/Fig15m6212018.png", null, "https://www.allaboutcircuits.com/uploads/articles/Fig16m6212018.png", null, "https://www.allaboutcircuits.com/uploads/articles/Fig17.png", null, "https://www.allaboutcircuits.com/images/site/default_avatar.jpg", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.8350623,"math_prob":0.89330274,"size":13048,"snap":"2019-51-2020-05","text_gpt3_token_len":2991,"char_repetition_ratio":0.16398343,"word_repetition_ratio":0.06504425,"special_character_ratio":0.22080012,"punctuation_ratio":0.08190855,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.97709477,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36],"im_url_duplicate_count":[null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,5,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-08T10:45:20Z\",\"WARC-Record-ID\":\"<urn:uuid:948e51cc-d0c7-4b43-9ada-5d35f8f1f9b6>\",\"Content-Length\":\"104448\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:74cbfd93-629f-4c42-8d75-4c1f39cef521>\",\"WARC-Concurrent-To\":\"<urn:uuid:b8cf46c5-234d-4399-b7e1-958291322e37>\",\"WARC-IP-Address\":\"104.20.235.39\",\"WARC-Target-URI\":\"https://www.allaboutcircuits.com/technical-articles/use-the-xilinx-system-generator-to-implement-a-simple-dds/\",\"WARC-Payload-Digest\":\"sha1:UMRY2Z56QWJKXYKIM2OGS7VH4KELL64T\",\"WARC-Block-Digest\":\"sha1:5T6DIFC2QITZVUIDBKVKL7VV52OYPA5M\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540508599.52_warc_CC-MAIN-20191208095535-20191208123535-00487.warc.gz\"}"}
http://jar.fyicenter.com/3486_JDK_1_1_Source_Code_Directory.html?C=java.io.RandomAccessFile
[ "JDK 1.1 Source Code Directory", null, "JDK 1.1 source code directory contains Java source code for JDK 1.1 core classes: \"C:\\fyicenter\\jdk-1.1.8\\src\".\n\nHere is the list of Java classes of the JDK 1.1 source code:\n\n✍: FYIcenter\n\njava/io/RandomAccessFile.java\n\n```/*\n* @(#)RandomAccessFile.java\t1.35 01/12/10\n*\n* SUN PROPRIETARY/CONFIDENTIAL. Use is subject to license terms.\n*/\n\npackage java.io;\n\nimport java.io.File;\n\n/**\n* Instances of this class support both reading and writing to a\n* random access file. An application can modify the position in the\n* file at which the next read or write occurs.\n* This class provides a sense of security\n* by offering methods that allow specified mode accesses of\n*\n* @author unascribed\n* @version 1.35, 12/10/01\n* @since JDK1.0\n*/\npublic\nclass RandomAccessFile implements DataOutput, DataInput {\nprivate FileDescriptor fd;\n\n/**\n* Creates a random access file stream to read from, and optionally\n* to write to, a file with the specified name.\n* <p>\n* The mode argument must either be equal to <code>\"r\"</code> or\n* <code>\"rw\"</code>, indicating either to open the file for input or\n* for both input and output.\n*\n* @param name the system-dependent filename.\n* @param mode the access mode.\n* @exception IllegalArgumentException if the mode argument is not equal\n* to <code>\"r\"</code> or to <code>\"rw\"</code>.\n* @exception IOException if an I/O error occurs.\n* @exception SecurityException if a security manager exists, its\n* <code>checkRead</code> method is called with the name\n* argument to see if the application is allowed read access\n* to the file. If the mode argument is equal to\n* <code>\"rw\"</code>, its <code>checkWrite</code> method also\n* is called with the name argument to see if the application\n* is allowed write access to the file. 
Either of these may\n* result in a security exception.\n* @see java.lang.SecurityException\n* @since JDK1.0\n*/\npublic RandomAccessFile(String name, String mode) throws IOException {\nboolean rw = mode.equals(\"rw\");\nif (!rw && !mode.equals(\"r\"))\nthrow new IllegalArgumentException(\"mode must be r or rw\");\nSecurityManager security = System.getSecurityManager();\nif (security != null) {\nif (rw) {\nsecurity.checkWrite(name);\n}\n}\nfd = new FileDescriptor();\nopen(name, rw);\n}\n\n/**\n* Creates a random access file stream to read from, and optionally\n* to write to, the file specified by the <code>File</code> argument.\n* <p>\n* The mode argument must either be equal to <code>\"r\"</code> or to\n* <code>\"rw\"</code>, indicating either to open the file for input,\n* or for both input and output, respectively.\n*\n* @param file the file object.\n* @param mode the access mode.\n* @exception IllegalArgumentException if the mode argument is not equal\n* to <code>\"r\"</code> or to <code>\"rw\"</code>.\n* @exception IOException if an I/O error occurs.\n* @exception SecurityException if a security manager exists, its\n* <code>checkRead</code> method is called with the pathname\n* of the <code>File</code> argument to see if the\n* mode argument is equal to <code>\"rw\"</code>, its\n* <code>checkWrite</code> method also is called with the\n* pathname to see if the application is allowed write access\n* to the file.\n* @see java.io.File#getPath()\n* @since JDK1.0\n*/\npublic RandomAccessFile(File file, String mode) throws IOException {\nthis(file.getPath(), mode);\n}\n\n/**\n* Returns the opaque file descriptor object associated with this stream.\n*\n* @return the file descriptor object associated with this stream.\n* @exception IOException if an I/O error occurs.\n* @see java.io.FileDescriptor\n* @since JDK1.0\n*/\npublic final FileDescriptor getFD() throws IOException {\nif (fd != null) return fd;\nthrow new IOException();\n}\n\n/**\n* Opens a file and returns the file descriptor. The file is\n* opened in read-write mode if writeable is true, else\n* the file is opened as read-only.\n* @param name the name of the file\n* @param writeable the boolean indicating whether file is\n* writeable or not.\n*/\nprivate native void open(String name, boolean writeable) throws IOException;\n\n/**\n* Reads a byte of data from this file. This method blocks if no\n* input is yet available.\n*\n* @return the next byte of data, or <code>-1</code> if the end of the\n* file is reached.\n* @exception IOException if an I/O error occurs.\n* @since JDK1.0\n*/\npublic native int read() throws IOException;\n\n/**\n* Reads a sub array as a sequence of bytes.\n* @param b the data to be written\n* @param off the start offset in the data\n* @param len the number of bytes that are written\n* @exception IOException If an I/O error has occurred.\n*/\nprivate native int readBytes(byte b[], int off, int len) throws IOException;\n\n/**\n* Reads up to <code>len</code> bytes of data from this file into an\n* array of bytes. 
This method blocks until at least one byte of input\n* is available.\n*\n* @param b the buffer into which the data is read.\n* @param off the start offset of the data.\n* @param len the maximum number of bytes read.\n* @return the total number of bytes read into the buffer, or\n* <code>-1</code> if there is no more data because the end of\n* the file has been reached.\n* @exception IOException if an I/O error occurs.\n* @since JDK1.0\n*/\npublic int read(byte b[], int off, int len) throws IOException {\n}\n\n/**\n* Reads up to <code>b.length</code> bytes of data from this file\n* into an array of bytes. This method blocks until at least one byte\n* of input is available.\n*\n* @param b the buffer into which the data is read.\n* @return the total number of bytes read into the buffer, or\n* <code>-1</code> if there is no more data because the end of\n* this file has been reached.\n* @exception IOException if an I/O error occurs.\n* @since JDK1.0\n*/\npublic int read(byte b[]) throws IOException {\n}\n\n/**\n* Reads <code>b.length</code> bytes from this file into the byte\n* array. This method reads repeatedly from the file until all the\n* bytes are read. This method blocks until all the bytes are read,\n* the end of the stream is detected, or an exception is thrown.\n*\n* @param b the buffer into which the data is read.\n* @exception EOFException if this file reaches the end before reading\n* all the bytes.\n* @exception IOException if an I/O error occurs.\n* @since JDK1.0\n*/\npublic final void readFully(byte b[]) throws IOException {\n}\n\n/**\n* Reads exactly <code>len</code> bytes from this file into the byte\n* array. This method reads repeatedly from the file until all the\n* bytes are read. This method blocks until all the bytes are read,\n* the end of the stream is detected, or an exception is thrown.\n*\n* @param b the buffer into which the data is read.\n* @param off the start offset of the data.\n* @param len the number of bytes to read.\n* @exception EOFException if this file reaches the end before reading\n* all the bytes.\n* @exception IOException if an I/O error occurs.\n* @since JDK1.0\n*/\npublic final void readFully(byte b[], int off, int len) throws IOException {\nint n = 0;\nwhile (n < len) {\nint count = this.read(b, off + n, len - n);\nif (count < 0)\nthrow new EOFException();\nn += count;\n}\n}\n\n/**\n* Skips exactly <code>n</code> bytes of input.\n* <p>\n* This method blocks until all the bytes are skipped, the end of\n* the stream is detected, or an exception is thrown.\n*\n* @param n the number of bytes to be skipped.\n* @return the number of bytes skipped, which is always <code>n</code>.\n* @exception EOFException if this file reaches the end before skipping\n* all the bytes.\n* @exception IOException if an I/O error occurs.\n* @since JDK1.0\n*/\npublic int skipBytes(int n) throws IOException {\nseek(getFilePointer() + n);\nreturn n;\n}\n\n// 'Write' primitives\n\n/**\n* Writes the specified byte to this file.\n*\n* @param b the <code>byte</code> to be written.\n* @exception IOException if an I/O error occurs.\n* @since JDK1.0\n*/\npublic native void write(int b) throws IOException;\n\n/**\n* Writes a sub array as a sequence of bytes.\n* @param b the data to be written\n* @param off the start offset in the data\n* @param len the number of bytes that are written\n* @exception IOException If an I/O error has occurred.\n*/\nprivate native void writeBytes(byte b[], int off, int len) throws IOException;\n\n/**\n* Writes <code>b.length</code> bytes from the specified 
byte array\n* starting at offset <code>off</code> to this file.\n*\n* @param b the data.\n* @exception IOException if an I/O error occurs.\n* @since JDK1.0\n*/\npublic void write(byte b[]) throws IOException {\nwriteBytes(b, 0, b.length);\n}\n\n/**\n* Writes <code>len</code> bytes from the specified byte array\n* starting at offset <code>off</code> to this file.\n*\n* @param b the data.\n* @param off the start offset in the data.\n* @param len the number of bytes to write.\n* @exception IOException if an I/O error occurs.\n* @since JDK1.0\n*/\npublic void write(byte b[], int off, int len) throws IOException {\nwriteBytes(b, off, len);\n}\n\n// 'Random access' stuff\n\n/**\n* Returns the current offset in this file.\n*\n* @return the offset from the beginning of the file, in bytes,\n* at which the next read or write occurs.\n* @exception IOException if an I/O error occurs.\n* @since JDK1.0\n*/\npublic native long getFilePointer() throws IOException;\n\n/**\n* Sets the file-pointer offset, measured from the beginning of this\n* file, at which the next read or write occurs. The offset may be\n* set beyond the end of the file. Setting the offset beyond the end\n* of the file does not change the file length. The file length will\n* change only by writing after the offset has been set beyond the end\n* of the file.\n*\n* @param pos the offset position, measured in bytes from the\n* beginning of the file, at which to set the file\n* pointer.\n* @exception IOException if an I/O error occurs.\n* @since JDK1.0\n*/\npublic native void seek(long pos) throws IOException;\n\n/**\n* Returns the length of this file.\n*\n* @return the length of this file.\n* @exception IOException if an I/O error occurs.\n* @since JDK1.0\n*/\npublic native long length() throws IOException;\n\n/**\n* Closes this random access file stream and releases any system\n* resources associated with the stream.\n*\n* @exception IOException if an I/O error occurs.\n* @since JDK1.0\n*/\npublic native void close() throws IOException;\n\n//\n// Some \"reading/writing Java data types\" methods stolen from\n// DataInputStream and DataOutputStream.\n//\n\n/**\n* single byte from the file. A value of <code>0</code> represents\n* <code>false</code>. Any other value represents <code>true</code>.\n* This method blocks until the byte is read, the end of the stream\n* is detected, or an exception is thrown.\n*\n* @return the <code>boolean</code> value read.\n* @exception EOFException if this file has reached the end.\n* @exception IOException if an I/O error occurs.\n* @since JDK1.0\n*/\npublic final boolean readBoolean() throws IOException {\nif (ch < 0)\nthrow new EOFException();\nreturn (ch != 0);\n}\n\n/**\n* Reads a signed 8-bit value from this file. This method reads a\n* byte from the file. If the byte read is <code>b</code>, where\n* <code>0&nbsp;&lt;=&nbsp;b&nbsp;&lt;=&nbsp;255</code>,\n* then the result is:\n* <ul><code>\n* (byte)(b)\n*</code></ul>\n* <p>\n* This method blocks until the byte is read, the end of the stream\n* is detected, or an exception is thrown.\n*\n* @return the next byte of this file as a signed 8-bit\n* <code>byte</code>.\n* @exception EOFException if this file has reached the end.\n* @exception IOException if an I/O error occurs.\n* @since JDK1.0\n*/\npublic final byte readByte() throws IOException {\nif (ch < 0)\nthrow new EOFException();\nreturn (byte)(ch);\n}\n\n/**\n* Reads an unsigned 8-bit number from this file. 
This method reads\n* a byte from this file and returns that byte.\n* <p>\n* This method blocks until the byte is read, the end of the stream\n* is detected, or an exception is thrown.\n*\n* @return the next byte of this file, interpreted as an unsigned\n* 8-bit number.\n* @exception EOFException if this file has reached the end.\n* @exception IOException if an I/O error occurs.\n* @since JDK1.0\n*/\npublic final int readUnsignedByte() throws IOException {\nif (ch < 0)\nthrow new EOFException();\nreturn ch;\n}\n\n/**\n* Reads a signed 16-bit number from this file. The method reads 2\n* bytes from this file. If the two bytes read, in order, are\n* <code>b1</code> and <code>b2</code>, where each of the two values is\n* between <code>0</code> and <code>255</code>, inclusive, then the\n* result is equal to:\n* <ul><code>\n* (short)((b1 &lt;&lt; 8) | b2)\n* </code></ul>\n* <p>\n* This method blocks until the two bytes are read, the end of the\n* stream is detected, or an exception is thrown.\n*\n* @return the next two bytes of this file, interpreted as a signed\n* 16-bit number.\n* @exception EOFException if this file reaches the end before reading\n* two bytes.\n* @exception IOException if an I/O error occurs.\n* @since JDK1.0\n*/\npublic final short readShort() throws IOException {\nif ((ch1 | ch2) < 0)\nthrow new EOFException();\nreturn (short)((ch1 << 8) + (ch2 << 0));\n}\n\n/**\n* Reads an unsigned 16-bit number from this file. This method reads\n* two bytes from the file. If the bytes read, in order, are\n* <code>b1</code> and <code>b2</code>, where\n* <code>0&nbsp;&lt;=&nbsp;b1, b2&nbsp;&lt;=&nbsp;255</code>,\n* then the result is equal to:\n* <ul><code>\n* (b1 &lt;&lt; 8) | b2\n* </code></ul>\n* <p>\n* This method blocks until the two bytes are read, the end of the\n* stream is detected, or an exception is thrown.\n*\n* @return the next two bytes of this file, interpreted as an unsigned\n* 16-bit integer.\n* @exception EOFException if this file reaches the end before reading\n* two bytes.\n* @exception IOException if an I/O error occurs.\n* @since JDK1.0\n*/\npublic final int readUnsignedShort() throws IOException {\nif ((ch1 | ch2) < 0)\nthrow new EOFException();\nreturn (ch1 << 8) + (ch2 << 0);\n}\n\n/**\n* Reads a Unicode character from this file. This method reads two\n* bytes from the file. If the bytes read, in order, are\n* <code>b1</code> and <code>b2</code>, where\n* <code>0&nbsp;&lt;=&nbsp;b1,&nbsp;b2&nbsp;&lt;=&nbsp;255</code>,\n* then the result is equal to:\n* <ul><code>\n* (char)((b1 &lt;&lt; 8) | b2)\n* </code></ul>\n* <p>\n* This method blocks until the two bytes are read, the end of the\n* stream is detected, or an exception is thrown.\n*\n* @return the next two bytes of this file as a Unicode character.\n* @exception EOFException if this file reaches the end before reading\n* two bytes.\n* @exception IOException if an I/O error occurs.\n* @since JDK1.0\n*/\npublic final char readChar() throws IOException {\nif ((ch1 | ch2) < 0)\nthrow new EOFException();\nreturn (char)((ch1 << 8) + (ch2 << 0));\n}\n\n/**\n* Reads a signed 32-bit integer from this file. This method reads 4\n* bytes from the file. 
If the bytes read, in order, are <code>b1</code>,\n* <code>b2</code>, <code>b3</code>, and <code>b4</code>, where\n* <code>0&nbsp;&lt;=&nbsp;b1, b2, b3, b4&nbsp;&lt;=&nbsp;255</code>,\n* then the result is equal to:\n* <ul><code>\n* (b1 &lt;&lt; 24) | (b2 &lt;&lt; 16) + (b3 &lt;&lt; 8) + b4\n* </code></ul>\n* <p>\n* This method blocks until the four bytes are read, the end of the\n* stream is detected, or an exception is thrown.\n*\n* @return the next four bytes of this file, interpreted as an\n* <code>int</code>.\n* @exception EOFException if this file reaches the end before reading\n* four bytes.\n* @exception IOException if an I/O error occurs.\n* @since JDK1.0\n*/\npublic final int readInt() throws IOException {\nif ((ch1 | ch2 | ch3 | ch4) < 0)\nthrow new EOFException();\nreturn ((ch1 << 24) + (ch2 << 16) + (ch3 << 8) + (ch4 << 0));\n}\n\n/**\n* Reads a signed 64-bit integer from this file. This method reads eight\n* bytes from the file. If the bytes read, in order, are\n* <code>b1</code>, <code>b2</code>, <code>b3</code>,\n* <code>b4</code>, <code>b5</code>, <code>b6</code>,\n* <code>b7</code>, and <code>b8,</code> where:\n* <ul><code>\n* 0 &lt;= b1, b2, b3, b4, b5, b6, b7, b8 &lt;=255,\n* </code></ul>\n* <p>\n* then the result is equal to:\n* <p><blockquote><pre>\n* ((long)b1 &lt;&lt; 56) + ((long)b2 &lt;&lt; 48)\n* + ((long)b3 &lt;&lt; 40) + ((long)b4 &lt;&lt; 32)\n* + ((long)b5 &lt;&lt; 24) + ((long)b6 &lt;&lt; 16)\n* + ((long)b7 &lt;&lt; 8) + b8\n* </pre></blockquote>\n* <p>\n* This method blocks until the eight bytes are read, the end of the\n* stream is detected, or an exception is thrown.\n*\n* @return the next eight bytes of this file, interpreted as a\n* <code>long</code>.\n* @exception EOFException if this file reaches the end before reading\n* eight bytes.\n* @exception IOException if an I/O error occurs.\n* @since JDK1.0\n*/\npublic final long readLong() throws IOException {\n}\n\n/**\n* <code>int</code> value as if by the <code>readInt</code> method\n* and then converts that <code>int</code> to a <code>float</code>\n* using the <code>intBitsToFloat</code> method in class\n* <code>Float</code>.\n* <p>\n* This method blocks until the four bytes are read, the end of the\n* stream is detected, or an exception is thrown.\n*\n* @return the next four bytes of this file, interpreted as a\n* <code>float</code>.\n* @exception EOFException if this file reaches the end before reading\n* four bytes.\n* @exception IOException if an I/O error occurs.\n* @see java.lang.Float#intBitsToFloat(int)\n* @since JDK1.0\n*/\npublic final float readFloat() throws IOException {\n}\n\n/**\n* <code>long</code> value as if by the <code>readLong</code> method\n* and then converts that <code>long</code> to a <code>double</code>\n* using the <code>longBitsToDouble</code> method in\n* class <code>Double</code>.\n* <p>\n* This method blocks until the eight bytes are read, the end of the\n* stream is detected, or an exception is thrown.\n*\n* @return the next eight bytes of this file, interpreted as a\n* <code>double</code>.\n* @exception EOFException if this file reaches the end before reading\n* eight bytes.\n* @exception IOException if an I/O error occurs.\n* @see java.lang.Double#longBitsToDouble(long)\n* @since JDK1.0\n*/\npublic final double readDouble() throws IOException {\n}\n\n/**\n* Reads the next line of text from this file. 
     * This method successively reads bytes from the file until it reaches
     * the end of a line of text.
     * <p>
     * A line of text is terminated by a carriage-return character
     * (<code>'&#92;r'</code>), a newline character (<code>'&#92;n'</code>), a
     * carriage-return character immediately followed by a newline
     * character, or the end of the input stream. The line-terminating
     * character(s), if any, are included as part of the string returned.
     * <p>
     * This method blocks until a newline character is read, a carriage
     * return and the byte following it are read (to see if it is a
     * newline), the end of the stream is detected, or an exception is thrown.
     *
     * @return the next line of text from this file.
     * @exception IOException if an I/O error occurs.
     * @since JDK1.0
     */
    public final String readLine() throws IOException {
        StringBuffer input = new StringBuffer();
        int c;

        // Accumulate bytes until a newline or the end of the file is reached.
        while (((c = read()) != -1) && (c != '\n')) {
            input.append((char)c);
        }
        if ((c == -1) && (input.length() == 0)) {
            return null;               // end of file and nothing was read
        }
        return input.toString();
    }

    /**
     * Reads in a string from this file. The string has been encoded
     * using a modified UTF-8 format.
     * <p>
     * The first two bytes are read as if by
     * <code>readUnsignedShort</code>. This value gives the number of
     * following bytes that are in the encoded string, not
     * the length of the resulting string. The following bytes are then
     * interpreted as bytes encoding characters in the UTF-8 format
     * and are converted into characters.
     * <p>
     * This method blocks until all the bytes are read, the end of the
     * stream is detected, or an exception is thrown.
     *
     * @return a Unicode string.
     * @exception EOFException if this file reaches the end before
     * reading all the bytes.
     * @exception IOException if an I/O error occurs.
     * @exception UTFDataFormatException if the bytes do not represent
     * valid UTF-8 encoding of a Unicode string.
     * @since JDK1.0
     */
    public final String readUTF() throws IOException {
        // Delegate to the shared modified-UTF-8 decoder, which reads the
        // two-byte length prefix and then the encoded characters.
        return DataInputStream.readUTF(this);
    }

    /**
     * Writes a <code>boolean</code> to the file as a 1-byte value. The
     * value <code>true</code> is written out as the value
     * <code>(byte)1</code>; the value <code>false</code> is written out
     * as the value <code>(byte)0</code>.
     *
     * @param v a <code>boolean</code> value to be written.
     * @exception IOException if an I/O error occurs.
     * @since JDK1.0
     */
    public final void writeBoolean(boolean v) throws IOException {
        write(v ? 1 : 0);
    }

    /**
     * Writes a <code>byte</code> to the file as a 1-byte value.
     *
     * @param v a <code>byte</code> value to be written.
     * @exception IOException if an I/O error occurs.
     * @since JDK1.0
     */
    public final void writeByte(int v) throws IOException {
        write(v);
    }

    /**
     * Writes a <code>short</code> to the file as two bytes, high byte first.
     *
     * @param v a <code>short</code> to be written.
     * @exception IOException if an I/O error occurs.
     * @since JDK1.0
     */
    public final void writeShort(int v) throws IOException {
        write((v >>> 8) & 0xFF);
        write((v >>> 0) & 0xFF);
    }

    /**
     * Writes a <code>char</code> to the file as a 2-byte value, high
     * byte first.
     *
     * @param v a <code>char</code> value to be written.
     * @exception IOException if an I/O error occurs.
     * @since JDK1.0
     */
    public final void writeChar(int v) throws IOException {
        write((v >>> 8) & 0xFF);
        write((v >>> 0) & 0xFF);
    }

    /**
     * Writes an <code>int</code> to the file as four bytes, high byte first.
     *
     * @param v an <code>int</code> to be written.
     * @exception IOException if an I/O error occurs.
     * @since JDK1.0
     */
    public final void writeInt(int v) throws IOException {
        write((v >>> 24) & 0xFF);
        write((v >>> 16) & 0xFF);
        write((v >>> 8) & 0xFF);
        write((v >>> 0) & 0xFF);
    }

    /**
     * Writes a <code>long</code> to the file as eight bytes, high byte first.
     *
     * @param v a <code>long</code> to be written.
     * @exception IOException if an I/O error occurs.
     * @since JDK1.0
     */
    public final void writeLong(long v) throws IOException {
        write((int)(v >>> 56) & 0xFF);
        write((int)(v >>> 48) & 0xFF);
        write((int)(v >>> 40) & 0xFF);
        write((int)(v >>> 32) & 0xFF);
        write((int)(v >>> 24) & 0xFF);
        write((int)(v >>> 16) & 0xFF);
        write((int)(v >>> 8) & 0xFF);
        write((int)(v >>> 0) & 0xFF);
    }

    /**
     * Converts the float argument to an <code>int</code> using the
     * <code>floatToIntBits</code> method in class <code>Float</code>,
     * and then writes that <code>int</code> value to the file as a
     * 4-byte quantity, high byte first.
     *
     * @param v a <code>float</code> value to be written.
     * @exception IOException if an I/O error occurs.
     * @see java.lang.Float#floatToIntBits(float)
     * @since JDK1.0
     */
    public final void writeFloat(float v) throws IOException {
        writeInt(Float.floatToIntBits(v));
    }

    /**
     * Converts the double argument to a <code>long</code> using the
     * <code>doubleToLongBits</code> method in class <code>Double</code>,
     * and then writes that <code>long</code> value to the file as an
     * 8-byte quantity, high byte first.
     *
     * @param v a <code>double</code> value to be written.
     * @exception IOException if an I/O error occurs.
     * @see java.lang.Double#doubleToLongBits(double)
     * @since JDK1.0
     */
    public final void writeDouble(double v) throws IOException {
        writeLong(Double.doubleToLongBits(v));
    }

    /**
     * Writes the string to the file as a sequence of bytes. Each
     * character in the string is written out, in sequence, by discarding
     * its high eight bits.
     *
     * @param s a string of bytes to be written.
     * @exception IOException if an I/O error occurs.
     * @since JDK1.0
     */
    public final void writeBytes(String s) throws IOException {
        int len = s.length();
        byte[] b = new byte[len];
        s.getBytes(0, len, b, 0);
        writeBytes(b, 0, len);
    }

    /**
     * Writes a string to the file as a sequence of characters.
     * Each character is written to the data output stream as if by the
     * <code>writeChar</code> method.
     *
     * @param s a <code>String</code> value to be written.
     * @exception IOException if an I/O error occurs.
     * @see java.io.RandomAccessFile#writeChar(int)
     * @since JDK1.0
     */
    public final void writeChars(String s) throws IOException {
        int clen = s.length();
        int blen = 2*clen;
        byte[] b = new byte[blen];
        char[] c = new char[clen];
        s.getChars(0, clen, c, 0);
        for (int i = 0, j = 0; i < clen; i++) {
            b[j++] = (byte)(c[i] >>> 8);
            b[j++] = (byte)(c[i] >>> 0);
        }
        writeBytes(b, 0, blen);
    }

    /**
     * Writes a string to the file using UTF-8 encoding in a
     * machine-independent manner.
     * <p>
     * First, two bytes are written to the file as if by the
     * <code>writeShort</code> method giving the number of bytes to
     * follow. This value is the number of bytes actually written out,
     * not the length of the string. Following the length, each character
     * of the string is output, in sequence, using the UTF-8 encoding
     * for each character.
     *
     * @param str a string to be written.
     * @exception IOException if an I/O error occurs.
     * @since JDK1.0
     */
    public final void writeUTF(String str) throws IOException {
        int strlen = str.length();
        int utflen = 0;

        for (int i = 0 ; i < strlen ; i++) {
            int c = str.charAt(i);
            if ((c >= 0x0001) && (c <= 0x007F)) {
                utflen++;
            } else if (c > 0x07FF) {
                utflen += 3;
            } else {
                utflen += 2;
            }
        }

        if (utflen > 65535)
            throw new UTFDataFormatException();

        write((utflen >>> 8) & 0xFF);
        write((utflen >>> 0) & 0xFF);
        for (int i = 0 ; i < strlen ; i++) {
            int c = str.charAt(i);
            if ((c >= 0x0001) && (c <= 0x007F)) {
                write(c);
            } else if (c > 0x07FF) {
                write(0xE0 | ((c >> 12) & 0x0F));
                write(0x80 | ((c >> 6) & 0x3F));
                write(0x80 | ((c >> 0) & 0x3F));
            } else {
                write(0xC0 | ((c >> 6) & 0x1F));
                write(0x80 | ((c >> 0) & 0x3F));
            }
        }
    }
}
```

java/io/RandomAccessFile.java
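The methods in this listing come in matched big-endian pairs: whatever a `write*` method puts into the file, the corresponding `read*` method recovers once the file pointer is moved back to the same offset. The sketch below is a minimal, hypothetical usage example, not part of the listing above; the class name `RandomAccessFileDemo`, the file name `example.dat`, and the sample values are illustrative only, and it uses only the standard `java.io.RandomAccessFile` API (`seek`, `close`, the `"rw"` mode string).

```java
import java.io.IOException;
import java.io.RandomAccessFile;

public class RandomAccessFileDemo {
    public static void main(String[] args) throws IOException {
        // Open (or create) a file for both reading and writing.
        RandomAccessFile raf = new RandomAccessFile("example.dat", "rw");
        try {
            // Write a few values using the DataOutput-style methods.
            raf.writeInt(42);              // 4 bytes, high byte first
            raf.writeDouble(3.14159);      // 8 bytes via doubleToLongBits
            raf.writeUTF("hello, file");   // 2-byte length prefix + modified UTF-8 bytes

            // Rewind to the start of the file and read the values back
            // with the matching DataInput-style methods, in the same order.
            raf.seek(0);
            int i = raf.readInt();
            double d = raf.readDouble();
            String s = raf.readUTF();

            System.out.println(i + " " + d + " " + s);   // 42 3.14159 hello, file
        } finally {
            raf.close();
        }
    }
}
```

Because the file stores no type information, reads must mirror the writes in both order and type; a mismatched sequence produces misinterpreted values rather than an error.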
[ null, "http://jar.fyicenter.com/JDK/_icon_JDK.png", null ]
{"ft_lang_label":"__label__en","ft_lang_prob":0.66824627,"math_prob":0.5157702,"size":27455,"snap":"2021-04-2021-17","text_gpt3_token_len":7515,"char_repetition_ratio":0.2119777,"word_repetition_ratio":0.43837062,"special_character_ratio":0.318157,"punctuation_ratio":0.13508806,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96548516,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-22T22:47:45Z\",\"WARC-Record-ID\":\"<urn:uuid:c6c0d12c-f73b-4108-9027-0fa2af9401fd>\",\"Content-Length\":\"55741\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8c738134-6b6b-4002-ae99-8c9fd2874326>\",\"WARC-Concurrent-To\":\"<urn:uuid:5d8e6809-294b-47af-a4db-003da8aef8fa>\",\"WARC-IP-Address\":\"74.208.236.35\",\"WARC-Target-URI\":\"http://jar.fyicenter.com/3486_JDK_1_1_Source_Code_Directory.html?C=java.io.RandomAccessFile\",\"WARC-Payload-Digest\":\"sha1:CEY3K3ZL5WX46AOHFEVO2JCCLD2N73YN\",\"WARC-Block-Digest\":\"sha1:4YE4TQAUOINVTHGXN3L54PX5MP764WBX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703531429.49_warc_CC-MAIN-20210122210653-20210123000653-00617.warc.gz\"}"}