URL (string, 15–1.68k chars) | text_list (sequence, 1–199 items) | image_list (sequence, 1–199 items) | metadata (string, 1.19k–3.08k chars)
---|---|---|---|
https://www.brainbell.com/tutorials/ms-office/excel/Compare_Two_Excel_Ranges.htm | [
"# Compare Two Excel Ranges\n\nSpotting the differences between two large tables of data can be a very time-consuming task. Fortunately, there are at least two ways in which you can automate what would otherwise be a very tedious manual process.\n\nThe two methods you will use are methods we have used in the past when we received an updated copy of a spreadsheet and we needed to identify which cells in the updated copy differed from the ones in the original copy. Both methods save hours of tedious manual checking and, more importantly, eliminate the possibility of mistakes.\n\nFor the following examples, we copied the newer data onto the same sheet as the older data beforehand. Figure below shows how the data is presented as two ranges. Note that for easier viewing, we boldfaced the cells in Table 2 that are not the same as their counterparts in Table 1.\n\n##### Figure. Two ranges to be compared",
null,
"#### Method 1: Using True or False\n\nThe first method involves entering a simple formula into another range of the same size and shape. The best part about this method is that you can add the formula in one step without having to copy and paste.\n\nTo compare the ranges shown in the figure, select the range E1:G7, starting from cell E1. This ensures that E1 is the active cell in the selection. With this range selected, click in the Formula bar and type the following:\n\n```=A1=A9\n```\n\nEnter the preceding formula by pressing Ctrl-Enter at the same time. In doing so, you are entering the relative reference formula into each cell of the selection. This is the standard method of entering a formula into an array of cells and having their references change appropriately.\n\nThe range E1:G7 should be filled with True (the same) and False (not the same) values.\n\nIf your two sets of data reside on different worksheets, you can use a third worksheet to store the True/False values simply by array-entering the formula. For example, assuming the second table of data is on Sheet2 and starts in cell A9, and the original table of data is on Sheet1 and starts in cell A1, on a third worksheet you can array-enter this formula:\n\n```=Sheet1!A1=Sheet2!A9\n```\n\nYou might find it useful to adjust your zoom downward when working with large amounts of data.\n\nTo delete an array-entered formula, you must select and delete the whole range. You cannot delete part of it.\n\n#### Method 2: Using Conditional Formatting\n\nThe second method is often preferred, as it is easier to make any needed changes once the comparison is made. However, with this method, both sets of data must reside on the same worksheet, which should entail only a simple copy and paste.\n\nAgain, assuming we're comparing the preceding two ranges, select the range A1:C7, starting from cell A1. This ensures that A1 is the active cell in the selection.\n\nWith this range selected, select Format » Conditional Formatting.... Select Formula Is and then type the following formula:\n\n```=NOT(A1=A9)\n```\n\nClick the Format button, shown in the figure, and choose the format with which you want to highlight the differences.\n\n##### Figure. Conditional formatting dialog",
null,
"Click OK and all the differences will be formatted according to the format you chose.\n\nWhen or if you make any changes to your data, the cells' format will automatically revert back to normal if the cell content is the same as the cell in the other table."
] | [
null,
"https://www.brainbell.com/tutorials/ms-office/excel/images/059600625X/figs/exhk_0601.gif",
null,
"https://www.brainbell.com/tutorials/ms-office/excel/images/059600625X/figs/exhk_0602.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9117056,"math_prob":0.8439743,"size":3066,"snap":"2019-26-2019-30","text_gpt3_token_len":655,"char_repetition_ratio":0.13520575,"word_repetition_ratio":0.030245747,"special_character_ratio":0.20939335,"punctuation_ratio":0.10491803,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.95849985,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-23T11:56:31Z\",\"WARC-Record-ID\":\"<urn:uuid:a624970b-a7df-497c-adb0-f67e616a0d01>\",\"Content-Length\":\"35574\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ffbf757e-df46-47f5-9794-cad05763561d>\",\"WARC-Concurrent-To\":\"<urn:uuid:dfacf53d-92c6-4329-bf34-41ae6ec4803d>\",\"WARC-IP-Address\":\"34.237.101.58\",\"WARC-Target-URI\":\"https://www.brainbell.com/tutorials/ms-office/excel/Compare_Two_Excel_Ranges.htm\",\"WARC-Payload-Digest\":\"sha1:3TGL45BMB6QMO34BBFD2HUV7UO2EMYIC\",\"WARC-Block-Digest\":\"sha1:TOGCQC37BSQE253XYDDM5QPBUDK7VN5K\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195529276.65_warc_CC-MAIN-20190723105707-20190723131707-00166.warc.gz\"}"} |
https://it.mathworks.com/help/stats/gscatter.html | [
"# gscatter\n\nScatter plot by group\n\n## Syntax\n\n``gscatter(x,y,g)``\n``gscatter(x,y,g,clr,sym,siz)``\n``gscatter(x,y,g,clr,sym,siz,doleg)``\n``gscatter(x,y,g,clr,sym,siz,doleg,xnam,ynam)``\n``gscatter(ax,___)``\n``h = gscatter(___)``\n\n## Description\n\nexample\n\n````gscatter(x,y,g)` creates a scatter plot of `x` and `y`, grouped by `g`. The inputs `x` and `y` are vectors of the same size.```\n\nexample\n\n````gscatter(x,y,g,clr,sym,siz)` specifies the marker color `clr`, symbol `sym`, and size `siz` for each group.```\n````gscatter(x,y,g,clr,sym,siz,doleg)` controls whether a legend is displayed on the graph. `gscatter` creates a legend by default.```\n\nexample\n\n````gscatter(x,y,g,clr,sym,siz,doleg,xnam,ynam)` specifies the names to use for the x-axis and y-axis labels. If you do not provide `xnam` and `ynam`, and the `x` and `y` inputs are variables with names, then `gscatter` labels the axes with the variable names.```\n\nexample\n\n````gscatter(ax,___)` uses the plot axes specified by the axes object `ax`. Specify `ax` as the first input argument followed by any of the input argument combinations in the previous syntaxes.```\n\nexample\n\n````h = gscatter(___)` returns graphics handles corresponding to the groups in `g`.You can pass in `[]` for `clr`, `sym`, `siz`, and `doleg` to use their default values.```\n\n## Examples\n\ncollapse all\n\nLoad the `carsmall` data set.\n\n`load carsmall`\n\nPlot the `Displacement` values on the x-axis and the `Horsepower` values on the y-axis. `gscatter` uses the variable names as the default labels for the axes. Group the data points by `Model_Year`.\n\n`gscatter(Displacement,Horsepower,Model_Year)`",
null,
"Load the `discrim` data set.\n\n`load discrim`\n\nThe data set contains ratings of cities according to nine factors such as climate, housing, education, and health. The matrix `ratings` contains the ratings information.\n\nPlot the relationship between the ratings for climate (first column) and housing (second column) grouped by city size in the matrix `group`. Choose different colors and plotting symbols for each group.\n\n```gscatter(ratings(:,1),ratings(:,2),group,'br','xo') xlabel('climate') ylabel('housing')```",
null,
"Load the `hospital` data set.\n\n`load hospital`\n\nPlot the ages and weights of the hospital patients. Group the patients according to their gender and smoker status. Use the `o` symbol to represent nonsmokers and the `*` symbol to represent smokers.\n\n```x = hospital.Age; y = hospital.Weight; g = {hospital.Sex,hospital.Smoker}; gscatter(x,y,g,'rkgb','o*',8,'on','Age','Weight') legend('Location','northeastoutside')```",
null,
"Load the `carsmall` data set. Create a figure with two subplots and return the `axes` objects as `ax1` and `ax2`. Create a scatter plot in each set of axes by referring to the corresponding `Axes` object. In the left subplot, group the data using the `Model_Year` variable. In the right subplot, group the data using the `Cylinders` variable. Add a title to each plot by passing the corresponding `Axes` object to the `title` function.\n\n```load carsmall color = lines(6); % Generate color values ax1 = subplot(1,2,1); % Left subplot gscatter(ax1,Acceleration,MPG,Model_Year,color(1:3,:)) title(ax1,'Left Subplot (Model Year)') ax2 = subplot(1,2,2); % Right subplot gscatter(ax2,Acceleration,MPG,Cylinders,color(4:6,:)) title(ax2,'Right Subplot (Cylinders)')```",
null,
"Load the `carbig` data set.\n\n`load carbig`\n\nCreate a scatter plot comparing `Acceleration` to `MPG`. Group data points based on `Origin`.\n\n`h = gscatter(Acceleration,MPG,Origin)`\n```h = 7x1 Line array: Line (USA) Line (France) Line (Japan) Line (Germany) Line (Sweden) Line (Italy) Line (England) ```\n\nDisplay the `Line` object corresponding to the group labeled `(Japan)`.\n\n`jgroup = h(3)`\n```jgroup = Line (Japan) with properties: Color: [0.2857 1 0] LineStyle: 'none' LineWidth: 0.5000 Marker: '.' MarkerSize: 15 MarkerFaceColor: 'none' XData: [1x79 double] YData: [1x79 double] ZData: [1x0 double] Show all properties ```\n\nChange the marker color for the `Japan` group to black.\n\n`jgroup.Color = 'k';`",
null,
"## Input Arguments\n\ncollapse all\n\nx-axis values, specified as a numeric vector. `x` must have the same size as `y`.\n\nData Types: `single` | `double`\n\ny-axis values, specified as a numeric vector. `y` must have the same size as `x`.\n\nData Types: `single` | `double`\n\nGrouping variable, specified as a categorical vector, logical vector, numeric vector, character array, string array, or cell array of character vectors. Alternatively, `g` can be a cell array containing several grouping variables (such as ```{g1 g2 g3}```), in which case observations are in the same group if they have common values of all grouping variables. Points in the same group appear on the scatter plot with the same marker color, symbol, and size.\n\nThe number of rows in `g` must be equal to the length of `x`.\n\nExample: `species`\n\nExample: `{Cylinders,Origin}`\n\nData Types: `categorical` | `logical` | `single` | `double` | `char` | `string` | `cell`\n\nMarker colors, specified as either a character vector or string scalar of colors recognized by the `plot` function or a matrix of RGB triplet values. Each RGB triplet is a three-element row vector whose elements specify the intensities of the red, green, and blue components of the color, respectively. Each intensity must be in the range [0,1].\n\nThis table lists the available color characters and their equivalent RGB triplet values.\n\nLong NameShort NameRGB Triplet\nYellow`'y'``[1 1 0]`\nMagenta`'m'``[1 0 1]`\nCyan`'c'``[0 1 1]`\nRed`'r'``[1 0 0]`\nGreen`'g'``[0 1 0]`\nBlue`'b'``[0 0 1]`\nWhite`'w'``[1 1 1]`\nBlack`'k'``[0 0 0]`\n\nIf you do not specify enough values for all groups, then `gscatter` cycles through the specified values as needed.\n\nExample: `'rgb'`\n\nExample: `[0 0 1; 0 0 0]`\n\nData Types: `char` | `string` | `single` | `double`\n\nMarker symbols, specified as a character vector or string scalar of symbols recognized by the `plot` function. This table lists the available marker symbols.\n\nValueDescription\n`'o'`Circle\n`'+'`Plus sign\n`'*'`Asterisk\n`'.'`Point\n`'x'`Cross\n`'s'`Square\n`'d'`Diamond\n`'^'`Upward-pointing triangle\n`'v'`Downward-pointing triangle\n`'>'`Right-pointing triangle\n`'<'`Left-pointing triangle\n`'p'`Five-pointed star (pentagram)\n`'h'`Six-pointed star (hexagram)\n`'none'`No markers\n\nIf you do not specify enough values for all groups, then `gscatter` cycles through the specified values as needed.\n\nExample: `'o+*v'`\n\nData Types: `char` | `string`\n\nMarker sizes, specified as a positive numeric vector in points. The default value is determined by the number of observations. If you do not specify enough values for all groups, then `gscatter` cycles through the specified values as needed.\n\nExample: `[6 12]`\n\nData Types: `single` | `double`\n\nOption to include a legend, specified as either `'on'` or `'off'`. By default, the legend is displayed on the graph.\n\nx-axis label, specified as a character vector or string scalar.\n\nData Types: `char` | `string`\n\ny-axis label, specified as a character vector or string scalar.\n\nData Types: `char` | `string`\n\nAxes for the plot, specified as an `Axes` or `UIAxes` object. If you do not specify `ax`, then `gscatter` creates the plot using the current axes. For more information on creating an axes object, see `axes` and `uiaxes`.\n\n## Output Arguments\n\ncollapse all\n\nGraphics handles, returned as an array of `Line` objects. Each `Line` object corresponds to one of the groups in `g`. You can use dot notation to query and set properties of the line objects. 
For a list of `Line` object properties, see Line Properties."
] | [
null,
"https://it.mathworks.com/help/examples/stats/win64/ScatterPlotWithDefaultSettingsExample_01.png",
null,
"https://it.mathworks.com/help/examples/stats/win64/ScatterPlotOfClimateAndHousingRatingsExample_01.png",
null,
"https://it.mathworks.com/help/examples/stats/win64/ScatterPlotWithMultipleGroupingVariablesAndAllOptionsExample_01.png",
null,
"https://it.mathworks.com/help/examples/stats/win64/SpecifyAxesForScatterPlotExample_01.png",
null,
"https://it.mathworks.com/help/examples/stats/win64/ModifyScatterPlotAfterCreationExample_01.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6098117,"math_prob":0.92451906,"size":4940,"snap":"2021-31-2021-39","text_gpt3_token_len":1264,"char_repetition_ratio":0.13452189,"word_repetition_ratio":0.13801453,"special_character_ratio":0.24352227,"punctuation_ratio":0.12577319,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9751806,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,2,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-21T14:20:22Z\",\"WARC-Record-ID\":\"<urn:uuid:93fc0523-4694-4357-bae4-04362bf022b1>\",\"Content-Length\":\"124325\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cf1e46e8-d797-4ec9-9ef5-60e460d0b6d0>\",\"WARC-Concurrent-To\":\"<urn:uuid:8d540084-87b7-43ca-84cd-3b950a71b5ab>\",\"WARC-IP-Address\":\"23.197.108.134\",\"WARC-Target-URI\":\"https://it.mathworks.com/help/stats/gscatter.html\",\"WARC-Payload-Digest\":\"sha1:XO7MEROMNE42JUGYKCTK3NJT44UEH5HD\",\"WARC-Block-Digest\":\"sha1:2BGFOKNUYHUWOSZHIBRIAUXHBCWG4RBN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057225.38_warc_CC-MAIN-20210921131252-20210921161252-00073.warc.gz\"}"} |
https://www.iclub.com/faq/Home/Article?id=345&category=6&parent=0 | [
"",
null,
"## Section 1 of the SSG in Toolkit 6\n\nThis page covers several smaller questions that we have seen about Section 1 (The Front Page) of the Stock Study in Toolkit 6.\n\nQ: How does Toolkit calculate PTP numbers? I don't see where they are input into the Annual Data and the Quarterly Data numbers do not always add up to the Annual number. Is there a calculation involved using other numbers in the Data page?\n\nA: The data includes annual Net Profit and the Tax Rate. Toolkit calculates pre-tax profit from those (i.e., multiply the net profit by the “not-tax” rate).\nThe quarterly and annual figures are separate, so they may not add up precisely, because of rounding or because of normalization that is done on the annual figures.\n\nQ: The % Institutions and % Insiders figures add up to more than 100%, is there something wrong with the data?\n\nA: Not necessarily; by Morningstar's standards, any institution that owns more than 5% of a company's stock is also classified as an insider. This means that those two figures can end up higher than 100%.\n\nQ: How does Toolkit figure out Debt to Capital?\n\nA: Debt to capital in Toolkit is [Last Quarterly Total Debt] / [Total Capital\n\n• Last Quarterly Total Debt is calculated as [Last Quarterly Short Term Debt (Commercial Paper + Current Portion of Long-Term Debt)] + [Last Quarterly Long Term Debt]\n• Total Capital is [Last Quarterly Shareholder’s Equity] + [Last Quarterly Total Debt]\n\nIn cases where a Total Capital figure is not available (such as in preliminary data), Toolkit will calculate it as follows:\n\n[Last Quarterly Total Debt] / [Last Quarterly Total Debt + Last Quarterly Preferred Stock + (Last Annual Book Value per Share * Last Quarterly Diluted Shares))\n\nQ: What is Free Cash Flow?\n\nA: For an explanation of Free Cash Flow, we suggest an article by Carol Theine in the Puget Sound Chapter’s newsletter about cash flow analysis basics."
] | [
null,
"https://www.iclub.com/faq/Content/images/iclub_logo.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.91114676,"math_prob":0.9232725,"size":1662,"snap":"2023-14-2023-23","text_gpt3_token_len":363,"char_repetition_ratio":0.11761158,"word_repetition_ratio":0.007246377,"special_character_ratio":0.2202166,"punctuation_ratio":0.110410094,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.96158004,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-05T10:59:29Z\",\"WARC-Record-ID\":\"<urn:uuid:931ffe78-597f-4660-a94d-52004a074c6a>\",\"Content-Length\":\"11036\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2d766ab9-027d-4f34-9952-96615bef2d2d>\",\"WARC-Concurrent-To\":\"<urn:uuid:79bb5512-d0a5-4995-8006-badccad66b7d>\",\"WARC-IP-Address\":\"174.136.147.4\",\"WARC-Target-URI\":\"https://www.iclub.com/faq/Home/Article?id=345&category=6&parent=0\",\"WARC-Payload-Digest\":\"sha1:2Y6ZVQMUZWT47I643AN72BHKYUHQX7CK\",\"WARC-Block-Digest\":\"sha1:VKAIFCLPBGMZCGHZM5FSFORRMO7RQ37A\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224651815.80_warc_CC-MAIN-20230605085657-20230605115657-00628.warc.gz\"}"} |
http://rtcmix.org/reference/scorefile/pickwrand.php | [
"RTcmix an open-source digital signal processing and sound synthesis language about · links · contact\npickwrand - return a weighted random choice from a set of numbers\n\n### Synopsis\n\nvalue = pickwrand(number1, probability1 [, number2, probability2, ... numberN, probabilityN ])\n\nParameters inside the [brackets] are optional.\n\n### Description\n\nCall pickwrand to choose randomly among several numbers that you specify, with a probability for each number.\n\nIt's a good idea to call srand once to seed the random number generator before using pickwrand. Otherwise, a seed of 1 will be used.\n\n### Arguments\n\nnumber\nprobability\nThere can be as many number, probability pairs as you like, as long as there is at least one pair.\n\nA probability argument determines how likely it is that pickwrand will choose the corresponding number argument. The higher the probability, the more likely.\n\nThe total probability is the sum of all the probability arguments.\n\n### RETURN VALUE\n\nOne of the number arguments to pickwrand, selected randomly in accordance with the given probabilities.\n\n### Examples\n\n``` srand(0)\nwhile (outskip < ending_time) {\nstereo_loc = pickwrand(0.0, 10, 0.5, 80, 1.0, 10)\nWAVETABLE(outskip, dur, amp, frequency, stereo_loc)\noutskip = outskip + 0.2\n}\n```\n\nplays WAVETABLE notes, panning them in accordance with the following probabilities: 10% of the notes will pan to hard left, 10% of the notes will pan to hard right, and 80% of the notes will pan to the center."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.71729505,"math_prob":0.9844724,"size":1261,"snap":"2021-31-2021-39","text_gpt3_token_len":332,"char_repetition_ratio":0.14001592,"word_repetition_ratio":0.052083332,"special_character_ratio":0.24742268,"punctuation_ratio":0.18502203,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97726667,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-25T15:23:59Z\",\"WARC-Record-ID\":\"<urn:uuid:01527487-bcaf-4c69-82f1-1769a48e1e91>\",\"Content-Length\":\"3626\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1b81f227-29d7-4470-9bb5-1cfb04b117c0>\",\"WARC-Concurrent-To\":\"<urn:uuid:230c57bd-ec05-4ebd-a232-fd7a2806e03e>\",\"WARC-IP-Address\":\"74.208.236.100\",\"WARC-Target-URI\":\"http://rtcmix.org/reference/scorefile/pickwrand.php\",\"WARC-Payload-Digest\":\"sha1:SR2PNTITAEL5MP2IEOGHTVF4Q4AHFUTQ\",\"WARC-Block-Digest\":\"sha1:OSDY5RYTYKIFJ2CPMQRCA2GHD233WE7U\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780057687.51_warc_CC-MAIN-20210925142524-20210925172524-00435.warc.gz\"}"} |
https://www.brainkart.com/article/Vector-Control-of-AC-Induction-Machines_12662/ | [
"Home | | Solid State Drives | Vector Control of AC Induction Machines\n\n# Vector Control of AC Induction Machines\n\nVector control is the most popular control technique of AC induction motors. In special reference frames, the expression for the electromagnetic torque of the smooth-air-gap machine is similar to the expression for the torque of the separately excited DC machine.\n\nVector Control of AC Induction Machines\n\nVector control is the most popular control technique of AC induction motors. In special reference frames, the expression for the electromagnetic torque of the smooth-air-gap machine is similar to the expression for the torque of the separately excited DC machine. In the case of induction machines, the control is usually performed in the reference frame (d-q) attached to the rotor flux space vector. That’s why the implementation of vector control requires information on the modulus and the space angle (position) of the rotor flux space vector. The stator currents of the induction machine are separated into flux- and torque-producing components by utilizing transformation to the d-q coordinate system, whose direct axis (d) is aligned with the rotor flux space vector. That means that the q-axis component of the rotor flux space vector is always zero:",
null,
"The rotor flux space vector calculation and transformation to the d-q coordinate system require the high computational power of a microcontroller. The digital signal processor is suitable for this task. The following sections describe the space vector transformations and the rotor flux space vector calculation.\n\n## Block Diagram of the Vector Control\n\nShows the basic structure of the vector control of the AC induction motor. To perform vector control, it is necessary to follow these steps:\n\nMeasure the motor quantities (phase voltages and currents)\n\nTransform them to the 2-phase system (α,β) using a Clarke transformation\n\nCalculate the rotor flux space vector magnitude and position angle\n\nTransform stator currents to the d-q coordinate system using a Park transformation\n\nThe stator current torque and flux producing components are separately controlled\n\nThe output stator voltage space vector is calculated using the decoupling block\n\nThe stator voltage space vector is transformed by an inverse Park transformation back from the d-q coordinate system to the 2-phase system fixed with the stator\n\nUsing the space vector modulation, the output 3-phase voltage is generated",
null,
"Block Diagram of the AC Induction Motor Vector Control\n\n## Forward and Inverse Clarke Transformation (a,b,c to α,β and backwards)\n\nThe forward Clarke transformation converts a 3-phase system a,b,c to a 2-phase coordinate system α,β. Figure shows graphical construction of the space vector and projection of the space vector to the quadrature-phase components α,β.",
null,
"The inverse Clarke transformation goes back from a 2-phase (α,β) to a 3-phase isa, isb, isc system. For constant k=2/3, it is given by the following equations:",
null,
"## Forward and Inverse Park Transformation (α,β to d-q and backwards)\n\nThe components isα and isβ, calculated with a Clarke transformation, are attached to the stator reference frame α, β. In vector control, it is necessary to have all quantities expressed in the same reference frame. The stator reference frame is not suitable for the control process. The space vector isβ is rotating at a rate equal to the angular frequency of the phase currents. The components isα and isβ depend on time and speed. We can transform these components from the stator reference frame to the d-q reference frame rotating at the same speed as the angular frequency of the phase currents. Then the isd and isq components do not depend on time and speed. If we consider the d-axis aligned with the rotor flux, the transformation is illustrated in Figure where θfield is the rotor flux position.",
null,
"The inverse Park transformation from the d-q to α,β coordinate system is given by the following equations:",
null,
"## Rotor Flux Model\n\nKnowledge of the rotor flux space vector magnitude and position is key information for the AC induction motor vector control. With the rotor magnetic flux space vector, the rotational coordinate system (d-q) can be established. There are several methods for obtaining the rotor magnetic flux space vector. The implemented flux model utilizes monitored rotor speed and stator voltages and currents. It is calculated in the stationary reference frame (α,β) attached to the stator. The error in the calculated value of the rotor flux, influenced by the changes in temperature, is negligible for this rotor flux model.\n\nThe rotor flux space vector is obtained by solving the differential equations (EQ 4-2) and (EQ 4-3), which are resolved into the α and β components. The equations are derived from the equations of the AC induction motor model",
null,
"",
null,
"Study Material, Lecturing Notes, Assignment, Reference, Wiki description explanation, brief detail\nSolid State Drives : Induction Motor Drives : Vector Control of AC Induction Machines |"
] | [
null,
"https://img.brainkart.com/imagebk13/3yfmL6I.jpg",
null,
"https://img.brainkart.com/imagebk13/oRWXbA6.jpg",
null,
"https://img.brainkart.com/imagebk13/kcBWeHT.jpg",
null,
"https://img.brainkart.com/imagebk13/vkcKLdg.jpg",
null,
"https://img.brainkart.com/imagebk13/dhlwx7M.jpg",
null,
"https://img.brainkart.com/imagebk13/5xbSMlM.jpg",
null,
"https://img.brainkart.com/imagebk13/hwc9LbV.jpg",
null,
"https://img.brainkart.com/imagebk13/yDnnO0S.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8497274,"math_prob":0.9805192,"size":4426,"snap":"2022-40-2023-06","text_gpt3_token_len":882,"char_repetition_ratio":0.1781999,"word_repetition_ratio":0.06303725,"special_character_ratio":0.19023949,"punctuation_ratio":0.07954545,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99165785,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-28T00:15:06Z\",\"WARC-Record-ID\":\"<urn:uuid:16555997-3a88-447f-82ce-a5d107af76cb>\",\"Content-Length\":\"42069\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9420c2bd-7eb6-446d-9582-1568a3a5905c>\",\"WARC-Concurrent-To\":\"<urn:uuid:8c662b75-09b1-4d69-9c7b-6bfc6cf07db5>\",\"WARC-IP-Address\":\"68.178.145.35\",\"WARC-Target-URI\":\"https://www.brainkart.com/article/Vector-Control-of-AC-Induction-Machines_12662/\",\"WARC-Payload-Digest\":\"sha1:BONTKKKCSVH4QXDNOU7ZKBTNEI76IEBT\",\"WARC-Block-Digest\":\"sha1:3J4QD74MDC4TXCRGI6EYERBTIEZ22QXG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499468.22_warc_CC-MAIN-20230127231443-20230128021443-00820.warc.gz\"}"} |
https://blog.schoolconnectonline.com/ncert-solutions-class-6-maths-chapter-1-knowing-our-numbers/ | [
"# NCERT Solutions For Class 6 Maths Chapter 1: Knowing Our Numbers",
null,
"Class 6 Maths Chapter 1 Knowing Our Numbers NCERT Solutions:\n\nIn Chapter 1 Knowing our numbers, we discuss about Comparing Numbers Worksheet, Large Numbers In Practice, Using Brackets and Roman Numerals Chart. You can find the other\n\n## NCERT Solutions for Class 6 Maths Chapter 1 Knowing Our Numbers\n\nNCERT Solutions for Class 6 Maths Chapter 1 Knowing Our Numbers Exercise 1.1\n\n### Class 6 Maths Chapter 1 Ex 1.1\n\n#### Ex 1.1 Class 6 Maths Question 1.\n\n1. Fill in the blanks:\n\n(a) 1 lakh = ………….. ten thousand.\n\nSol : 1,00,000\n\n(b) 1 million = ………… hundred thousand.\n\nSol : 10,00,000\n\n(c) 1 crore = ………… ten lakh.\n\nSol : 1,00,00,000\n\n(d) 1 crore = ………… million.\n\nSol : 1,00,00,000\n\n(e) 1 million = ………… lakh.\n\nSol : 1,000,000\n\n#### Ex 1.1 Class 6 Maths Question 2.\n\n2. Place commas correctly and write the numerals:\n\n(a) Seventy three lakh seventy five thousand three hundred seven.\n\nSol : The numeral of seventy three lakh seventy five thousand three hundred seven is 73,75,307\n\n(b) Nine crore five lakh forty one.\n\nSol : The numeral of nine crore five lakh forty one is 9,05,00,041\n\n(c) Seven crore fifty two lakh twenty one thousand three hundred two.\n\nSol : The numeral of seven crore fifty two lakh twenty one thousand three hundred two is 7,52,21,302\n\n(d) Fifty eight million four hundred twenty three thousand two hundred two.\n\nSol : The numeral of fifty eight million four hundred twenty three thousand two hundred two is 5,84,23,202\n\n(e) Twenty three lakh thirty thousand ten.\n\nSol : The numeral of twenty three lakh thirty thousand ten is 23,30,010\n\n#### Ex 1.1 Class 6 Maths Question 3.\n\n3. Insert commas suitably and write the names according to Indian System of Numeration:\n\n(a) 87595762\n\nSol : Eight crore seventy five lakh ninety five thousand seven hundred sixty two\n\n(b) 8546283\n\nSol : Eighty five lakh forty six thousand two hundred eighty three\n\n(c) 99900046\n\nSol : Nine crore ninety nine lakh forty six\n\n(d) 98432701\n\nSol : Nine crore eighty four lakh thirty two thousand seven hundred one\n\n#### Ex 1.1 Class 6 Maths Question 4.\n\n4. Insert commas suitably and write the names according to International System of Numeration:\n\n(a) 78921092\n\nSol : Seventy eight million nine hundred twenty one thousand ninety two\n\n(b) 7452283\n\nSol : Seven million four hundred fifty-two thousand two hundred eighty three\n\n(c) 99985102\n\nSol : Ninety-nine million nine hundred eighty five thousand one hundred two\n\n(d) 48049831\n\nSol : Forty-eight million forty-nine thousand eight hundred thirty-one\n\n### Class 6 Maths Chapter 1 Ex 1.2\n\nNCERT Solutions for Class 6 Maths Chapter 1 Knowing Our Numbers Exercise 1.2\n\n#### Ex 1.2 Class 6 Maths Question 1.\n\n1. A book exhibition was held for four days in a school. The number of tickets sold at the counter on the first, second, third and final day was respectively 1094, 1812, 2050 and 2751. Find the total number of tickets sold on all the four days.\n\nSol :\n\nNumber of tickets sold on 1st day = 1094\n\nNumber of tickets sold on 2nd day = 1812\n\nNumber of tickets sold on 3rd day = 2050\n\nNumber of tickets sold on 4th day = 2751\n\nHence, number of tickets sold on all the four days = 1094 + 1812 + 2050 + 2751 = 7707 tickets\n\n2. Shekhar is a famous cricket player. He has so far scored 6980 runs in test matches. He wishes to complete 10,000 runs. 
How many more runs does he need?\n\nSol :\n\nRuns Shekhar has scored = 6980\n\nRuns he wants to complete = 10000\n\nRuns still needed = 10000 – 6980 = 3020\n\nHence, he needs 3020 more runs.\n\n3. In an election, the successful candidate registered 5,77,500 votes and his nearest rival secured 3,48,700 votes. By what margin did the successful candidate win the election?\n\nSol :\n\nNo. of votes secured by the successful candidate = 577500\n\nNo. of votes secured by his rival = 348700\n\nMargin by which he won the election = 577500 – 348700 = 228800 votes\n\n∴ Successful candidate won the election by 228800 votes\n\n4. Kirti bookstore sold books worth Rs 2,85,891 in the first week of June and books worth Rs 4,00,768 in the second week of the month. How much was the sale for the two weeks together? In which week was the sale greater and by how much?\n\nSol :\n\nPrice of books sold in June first week = Rs 285891\n\nPrice of books sold in June second week = Rs 400768\n\nPrice of books sold in both weeks together = Rs 285891 + Rs 400768 = Rs 686659\n\nThe sale of books was greater in the second week\n\nDifference in the sale in both weeks = Rs 400768 – Rs 285891 = Rs 114877\n\n∴ Sale in the second week was greater by Rs 114877 than in the first week.\n\n5. Find the difference between the greatest and the least 5-digit number that can be written using the digits 6, 2, 7, 4, 3 each only once.\n\nSol :\n\nDigits given are 6, 2, 7, 4, 3\n\nGreatest 5-digit number = 76432\n\nLeast 5-digit number = 23467\n\nDifference between the two numbers = 76432 – 23467 = 52965\n\n∴ The difference between the two numbers is 52965\n\n6. A machine, on an average, manufactures 2,825 screws a day. How many screws did it produce in the month of January 2006?\n\nSol :\n\nNumber of screws manufactured in a day = 2825\n\nSince January has 31 days,\n\nnumber of screws manufactured in January = 31 × 2825 = 87575\n\nHence, the machine produced 87575 screws in the month of January 2006\n\n7. A merchant had Rs 78,592 with her. She placed an order for purchasing 40 radio sets at Rs 1200 each. How much money will remain with her after the purchase?\n\nSol :\n\nTotal money the merchant had = Rs 78592\n\nCost of each radio set = Rs 1200\n\nSo, cost of 40 radio sets = Rs 1200 × 40 = Rs 48000\n\nMoney left with the merchant = Rs 78592 – Rs 48000 = Rs 30592\n\nHence, money left with the merchant after purchasing radio sets is Rs 30592\n\n8. A student multiplied 7236 by 65 instead of multiplying by 56. By how much was his answer greater than the correct answer?\n\nSol :\n\nDifference between 65 and 56, i.e. (65 – 56) = 9\n\nThe difference between the correct and incorrect answer = 7236 × 9 = 65124\n\n9. To stitch a shirt, 2 m 15 cm cloth is needed. Out of 40 m cloth, how many shirts can be stitched and how much cloth will remain?\n\nSol :\n\nGiven\n\nTotal length of the cloth = 40 m\n\n= 40 × 100 cm = 4000 cm\n\nCloth required to stitch one shirt = 2 m 15 cm\n\n= 2 × 100 + 15 cm = 215 cm\n\nNumber of shirts that can be stitched out of 4000 cm = 4000 / 215 = 18 shirts\n\nHence, 18 shirts can be stitched out of 40 m of cloth, and 1 m 30 cm of cloth is left over\n\n10. Medicine is packed in boxes, each weighing 4 kg 500g. How many such boxes can be loaded in a van which cannot carry beyond 800 kg?\n\nSol :\n\nWeight of one box = 4 kg 500 g = 4 × 1000 + 500\n\n= 4500 g\n\nMaximum weight carried by the van = 800 kg = 800 × 1000\n\n= 800000 g\n\nHence, number of boxes that can be loaded in the van = 800000 / 4500 = 177 boxes\n\n11.
The distance between the school and a student’s house is 1 km 875 m. Everyday she walks both ways. Find the total distance covered by her in six days.\n\nSol :\n\nDistance covered between school and house = 1 km 875 m = 1000 + 875 = 1875 m\n\nSince, the student walked both ways.\n\nHence, distance travelled by the student in one day = 2 × 1875 = 3750 m\n\nDistance travelled by the student in 6 days = 3750 m × 6 = 22500 m = 22 km 500 m\n\n∴ Total distance covered by the student in six days is 22 km and 500 m\n\n12. A vessel has 4 litres and 500 ml of curd. In how many glasses, each of 25 ml capacity, can it be filled?\n\nSol :\n\nQuantity of curd in the vessel = 4 l 500 ml = 4 × 1000 + 500 = 4500 ml\n\nCapacity of 1 glass = 25 ml\n\n∴ Number of glasses that can be filled with curd = 4500 / 25 = 180 glasses\n\nHence, 180 glasses can be filled with curd.\n\n### Class 6 Maths Chapter 1 Ex 1.3\n\nNCERT Solutions for Class 6 Maths Chapter 1 Knowing Our Numbers Ex 1.3\n\n#### Ex 1.3 Class 6 Maths Question 1\n\n1. Estimate each of the following using general rule:\n\n(a) 730 + 998 (b) 796 – 314 (c) 12904 + 2888 (d) 28292 – 21496\n\nMake ten more such examples of addition, subtraction and estimation of their outcome.\n\nSolutions:\n\n(a) 730 + 998\n\nRound off to hundreds\n\n730 rounds off to 700\n\n998 rounds off to 1000\n\nHence, 730 + 998 = 700 + 1000 = 1700\n\n(b) 796 – 314\n\nRound off to hundreds\n\n796 rounds off to 800\n\n314 rounds off to 300\n\nHence, 796 – 314 = 800 – 300 = 500\n\n(c) 12904 + 2888\n\nRound off to thousands\n\n12904 rounds off to 13000\n\n2888 rounds off to 3000\n\nHence, 12904 + 2888 = 13000 + 3000 = 16000\n\n(d) 28292 – 21496\n\nRound off to thousands\n\n28292 round off to 28000\n\n21496 round off to 21000\n\nHence, 28292 – 21496 = 28000 – 21000 = 7000\n\nTen more such examples are\n\n(i) 330 + 280 = 300 + 300 = 600\n\n(ii) 3937 + 5990 = 4000 + 6000 = 10000\n\n(iii) 6392 – 3772 = 6000 – 4000 = 2000\n\n(iv) 5440 – 2972 = 5000 – 3000 = 2000\n\n(v) 2175 + 1206 = 2000 + 1000 = 3000\n\n(vi) 1110 – 1292 = 1000 – 1000 = 0\n\n(vii) 910 + 575 = 900 + 600 = 1500\n\n(viii) 6400 – 4900 = 6000 – 5000 = 1000\n\n(ix) 3731 + 1300 = 4000 + 1000 = 5000\n\n(x) 6485 – 4319 = 6000 – 4000 = 2000\n\n2. 
Give a rough estimate (by rounding off to nearest hundreds) and also a closer estimate (by rounding off to nearest tens):\n\n(a) 439 + 334 + 4317 (b) 108734 – 47599 (c) 8325 – 491 (d) 489348 – 48365\n\nMake four more such examples.\n\nSolutions:\n\n(a) 439 + 334 + 4317\n\nRounding off to nearest hundreds\n\n439 + 334 + 4317 = 400 + 300 + 4300\n\n= 5000\n\nRounding off to nearest tens\n\n439 + 334 + 4317 = 440 + 330 + 4320\n\n= 5090\n\n(b) 108734 – 47599\n\nRounding off to nearest hundreds\n\n108734 – 47599 = 108700 – 47600\n\n= 61100\n\nRounding off to nearest tens\n\n108734 – 47599 = 108730 – 47600\n\n= 61130\n\n(c) 8325 – 491\n\nRounding off to nearest hundreds\n\n8325 – 491 = 8300 – 500\n\n= 7800\n\nRounding off to nearest tens\n\n8325 – 491 = 8330 – 490\n\n= 7840\n\n(d) 489348 – 48365\n\nRounding off to nearest hundreds\n\n489348 – 48365 = 489300 – 48400\n\n= 440900\n\nRounding off to nearest tens\n\n489348 – 48365 = 489350 – 48370\n\n= 440980\n\nFour more examples are as follows:\n\n(i) 4853 + 662\n\nRounding off to nearest hundreds\n\n4853 + 662 = 4900 + 700\n\n= 5600\n\nRounding off to nearest tens\n\n4853 + 662 = 4850 + 660\n\n= 5510\n\n(ii) 775 – 390\n\nRounding off to nearest hundreds\n\n775 – 390 = 800 – 400\n\n= 400\n\nRounding off to nearest tens\n\n775 – 390 = 780 – 390\n\n= 390\n\n(iii) 6375 – 2875\n\nRounding off to nearest hundreds\n\n6375 – 2875 = 6400 – 2900\n\n= 3500\n\nRounding off to nearest tens\n\n6375 – 2875 = 6380 – 2880\n\n= 3500\n\n(iv) 8246 – 6312\n\nRounding off to nearest hundreds\n\n8246 – 6312 = 8200 – 6300\n\n= 1900\n\nRounding off to nearest tens\n\n8246 – 6312 = 8250 – 6310\n\n= 1940\n\n3. Estimate the following products using general rule:\n\n(a) 578 × 161\n\n(b) 5281 × 3491\n\n(c) 1291 × 592\n\n(d) 9250 × 29\n\nMake four more such examples.\n\nSolutions:\n\n(a) 578 × 161\n\nRounding off by general rule\n\n578 and 161 rounded off to 600 and 200 respectively\n\n600 × 200 = 120000\n\n(b) 5281 × 3491\n\nRounding off by general rule\n\n5281 and 3491 rounded off to 5000 and 3500 respectively\n\n5000 × 3500 = 17500000\n\n(c) 1291 × 592\n\nRounding off by general rule\n\n1291 and 592 rounded off to 1300 and 600 respectively\n\n1300 × 600 = 780000\n\n(d) 9250 × 29\n\nRounding off by general rule\n\n9250 and 29 rounded off to 9000 and 30 respectively\n\n9000 × 30 = 270000\n\n## CBSE Notes for Class 6 Maths Free Download for All Chapters\n\nOur Blog Site – https://bestlearns.in/\n\nLearn with the best notes, free videos, practice questions and mock tests with School Connect Online. For a free demo, click here",
null,
"#### By School Connect Online\n\nSchool Connect Online is an Integrated Learning Program for Academic Institution,and supported and mentored by StartUp Oasis,an inititive of CIIE.CO Please visit school.schoolconnectonline.com"
] | [
null,
"https://blog.schoolconnectonline.com/wp-content/uploads/2021/01/afe81-capture-32.png",
null,
"https://secure.gravatar.com/avatar/b55007e8bcf50b1263dc759709107879",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8361928,"math_prob":0.99393797,"size":13582,"snap":"2023-40-2023-50","text_gpt3_token_len":4316,"char_repetition_ratio":0.16541465,"word_repetition_ratio":0.09438969,"special_character_ratio":0.390885,"punctuation_ratio":0.07992352,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9951868,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,3,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-11T09:15:55Z\",\"WARC-Record-ID\":\"<urn:uuid:0e69b777-6fb4-4f65-8ee9-621c2cd0980c>\",\"Content-Length\":\"323035\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:75f58d00-b64f-44a7-aee5-4776a55ad024>\",\"WARC-Concurrent-To\":\"<urn:uuid:84d07240-a7e7-4314-9409-a5ef25b20a43>\",\"WARC-IP-Address\":\"192.0.78.12\",\"WARC-Target-URI\":\"https://blog.schoolconnectonline.com/ncert-solutions-class-6-maths-chapter-1-knowing-our-numbers/\",\"WARC-Payload-Digest\":\"sha1:A7SMWPVNTORGMGP4GB3GN6P73LXS2UKS\",\"WARC-Block-Digest\":\"sha1:GKAARKAWEKXGB5EKTSRYBONBMEDBXVRT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679103810.88_warc_CC-MAIN-20231211080606-20231211110606-00167.warc.gz\"}"} |
https://www.ncertsolutionsfor.com/class-9-maths-chapter-15-probability/ | [
"# NCERT Solutions for Class 9 Maths Chapter 15 – Probability\n\nHere we provide NCERT Solutions for Class 9 Maths Chapter 15 – Probability for English medium students, Which will very helpful for every student in their exams. Students can download the latest NCERT Class 9 Maths solution Chapter 15 pdf. Now you will get step by step solution to each question.\n\nTotal numbers of students = 135 + 65 = 200\n\n(i) Numbers of students who like statistics = 135\nRequired probability = 135/200 = 27/40\n\n(ii) Numbers of students who does not like statistics = 65\nRequired probability = 65/200 = 13/40\n\n8. Refer to Q.2, Exercise 14.2. What is the empirical probability that an engineer lives:\n(i) less than 7 km from her place of work?\n(ii) more than or equal to 7 km from her place of work?\n(iii) within 1/2 km from her place of work?\n\nThe distance (in km) of 40 engineers from their residence to their place of work were found as follows:\n5 3 10 20 25 11 13 7 12 31 19 10 12 17 18 11 3 2 17 16 2 7 9 7 8 3 5 12 15 18 3 12 14 2 9 6 15 15 7 6 12\n\nTotal numbers of engineers = 40\n(i) Numbers of engineers living less than 7 km from her place of work = 9\nRequired probability = 9/40\n\n(ii) Numbers of engineers living less than 7 km from her place of work = 40 – 9 = 31\nRequired probability = 31/40\n\n(iii) Numbers of engineers living less than 7 km from her place of work = 0\nRequired probability = 0/40 = 0\n\nPage No: 285\n\n11. Eleven bags of wheat flour, each marked 5 kg, actually contained the following weights of flour (in kg):\n4.97 5.05 5.08 5.03 5.00 5.06 5.08 4.98 5.04 5.07 5.00\nFind the probability that any of these bags chosen at random contains more than 5 kg of flour.\n\nTotal numbers of bags = 11\nNumbers of bags containing more than 5 kg of flour = 7\nRequired probability = 7/11\n\n12. In Q.5, Exercise 14.2, you were asked to prepare a frequency distribution table, regarding the concentration of sulphur dioxide in the air in parts per million of a certain city for 30 days. Using this table, find the probability of the concentration of sulphur dioxide in the interval 0.12-0.16 on any of these days.\nThe data obtained for 30 days is as follows:\n0.03 0.08 0.08 0.09 0.04 0.17 0.16 0.05 0.02 0.06 0.18 0.20 0.11 0.08 0.12 0.13 0.22 0.07 0.08 0.01 0.10 0.06 0.09 0.18 0.11 0.07 0.05 0.07 0.01 0.04\n\nTotal numbers of days data recorded = 30 days\nNumbers of days in which sulphur dioxide in the interval 0.12-0.16 = 2\nRequired probability = 2/30 = 1/15\n\n13. In Q.1, Exercise 14.2, you were asked to prepare a frequency distribution table regarding the blood groups of 30 students of a class. Use this table to determine the probability that a student of this class, selected at random, has blood group AB.\nThe blood groups of 30 students of Class VIII are recorded as follows:\nA, B, O, O, AB, O, A, O, B, A, O, B, A, O, O, A, AB, O, A, A, O, O, AB, B, A, O, B, A, B, O.\n\nTotal numbers of students = 30\nNumbers of students having blood group AB = 3\nRequired probability = 3/30 = 1/10\n\nAll Chapter NCERT Solutions For Class 9 Maths\n\n—————————————————————————–\n\nAll Subject NCERT Solutions For Class 9\n\n*************************************************\n\nI think you got complete solutions for this chapter. If You have any queries regarding this chapter, please comment on the below section our subject teacher will answer you. We tried our best to give complete solutions so you got good marks in your exam."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9032516,"math_prob":0.97467905,"size":3355,"snap":"2023-14-2023-23","text_gpt3_token_len":1034,"char_repetition_ratio":0.14413609,"word_repetition_ratio":0.122257054,"special_character_ratio":0.36542475,"punctuation_ratio":0.14987406,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.99376655,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-28T05:46:22Z\",\"WARC-Record-ID\":\"<urn:uuid:ddf363f6-b509-4b2b-a50c-0fe7462bebee>\",\"Content-Length\":\"245078\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:79c2c3b8-904e-483d-931c-f20c382724f9>\",\"WARC-Concurrent-To\":\"<urn:uuid:1ff8f1a0-c986-4ef6-94d6-2f870a06814a>\",\"WARC-IP-Address\":\"104.21.84.244\",\"WARC-Target-URI\":\"https://www.ncertsolutionsfor.com/class-9-maths-chapter-15-probability/\",\"WARC-Payload-Digest\":\"sha1:MQVKON2YWAWVP3TZS54L3V67OYOU666P\",\"WARC-Block-Digest\":\"sha1:FW6W6HAGZV3T36RB2CHU7QWGDAO66G2N\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296948765.13_warc_CC-MAIN-20230328042424-20230328072424-00438.warc.gz\"}"} |
https://mathbits.org/MathBits/TISection/Trig/degrees.html | [
"",
null,
"Degrees, Radians, and Trig Angle Entries\n\nRemember: The calculator defaults to Radian measure.\n\n Working with Degrees:\n\nThere are two ways to engage \"Degrees\" when working with the calculator:\n1. Set the MODE to Degree and all further calculations will be in degrees.\n2. If in Radian MODE, use the Degree Symbol (2nd APPS (Angle) and select #1 ° ).\nNote: The degree status will be applied to that one calculation only. All other calculations\n\nFind the radian measure of an angle whose measure is 500°.\n\n Method 1: (in Radian or Degree mode) Convert by multiplying by",
null,
". Mode set to Radian",
null,
"Method 2: (in Radian mode) With the mode set to Radian, type 500°. Notice the degree symbol. Hit Enter.\n\nNOTE: If in Degree mode, you can force an answer to radians by using\n2nd APPS(Angle) #3 r.\n\n In Degree mode, but ......",
null,
"gives the answer 0.0548036651 (WRONG!!!)",
null,
"gives the correct answer of 0.\n\nFind the degree measure of an angle whose measure is",
null,
"Method 1: (in Radian or Degree mode) Convert by multiplying by",
null,
". Mode set to Degree",
null,
"Method 2: (in Degree mode) In Degree mode, the r symbol can be used to convert radians to degrees. The r is found 2nd APPS(Angle) #3 r. With the mode set to Degree, type",
null,
"Convert 57° 45' 17'' to decimal degrees:\n\nIn either Radian or Degree Mode: Type 57° 45' 17'' and hit Enter.\n° is under Angle (above APPS) #1\n' is under Angle (above APPS) #2\n'' use ALPHA (green) key with the quote symbol above the + sign.\n\n Convert 57° 45' 17'' to radians:\n\nWith the mode set to Radian: Type 57° 45' 17'' ° and hit Enter. Answer: 1.008010061\n** Note the use of the additional degree symbol at the end. Without it, conversion would be to decimal degrees, even though the mode is set to radians.\n\n Convert 48.555° to degrees, minutes, seconds:\n\nType 48.555 ►DMS Answer: 48° 33' 18''\nThe ►DMS is #4 on the Angle menu (2nd APPS). This function works even if Mode is set to Radian.\n\n Given cos A = .0258. Find ∠A expressed in degree, minutes, seconds.\n\nWith the mode set to Degree: Type cos-1(.0258). Hit Enter.\nEngage ►DMS\n(Be careful here to be in the correct mode!!)\n\n Find sin 57° 45' 17'':\n\nEnter sin(57° 45' 17'' °). This problem is clearly dealing with degrees. The entry shown here works even if MODE is set to Radian. Answer: .8457717984\nIf MODE is set to Degree, the degree symbol at the end is not needed.\n\n Find cos(-250° 21'):\n\nIf MODE is set to Degree, the degree symbol at the end is not needed.\n\n Find sec(25° 40'):\n\nThere is no key for secant. But, since",
null,
", enter: 1/cos(25° 40' °) when in Radian mode. If MODE is set to Degree, the degree symbol at the end is not needed.\n\n Find",
null,
":\n\nEnter directly:",
null,
"when in Radian mode. Answer: -.7660444431\nIf in Degree mode, enter",
null,
".\n\n Find",
null,
":\n\nThere is no key for cosecant. But, since",
null,
", enter:",
null,
"when in Radian mode. Answer: 1.015426612\n\nIn Degree mode, enter",
null,
".",
null,
""
] | [
null,
"https://mathbits.org/MathBits/TISection/Trig/logoTrig.gif",
null,
"https://mathbits.org/MathBits/TISection/Trig/degree1.gif",
null,
"https://mathbits.org/MathBits/TISection/Trig/degree3.gif",
null,
"https://mathbits.org/MathBits/TISection/Trig/degree4.gif",
null,
"https://mathbits.org/MathBits/TISection/Trig/degree2.gif",
null,
"https://mathbits.org/MathBits/TISection/Trig/degree6.gif",
null,
"https://mathbits.org/MathBits/TISection/Trig/degree7.gif",
null,
"https://mathbits.org/MathBits/TISection/Trig/degree8.gif",
null,
"https://mathbits.org/MathBits/TISection/Trig/degree10.gif",
null,
"https://mathbits.org/MathBits/TISection/Trig/degree24.gif",
null,
"https://mathbits.org/MathBits/TISection/Trig/degree25.gif",
null,
"https://mathbits.org/MathBits/TISection/Trig/degree26.gif",
null,
"https://mathbits.org/MathBits/TISection/Trig/degree27.gif",
null,
"https://mathbits.org/MathBits/TISection/Trig/degree28.gif",
null,
"https://mathbits.org/MathBits/TISection/Trig/degree29.gif",
null,
"https://mathbits.org/MathBits/TISection/Trig/degree30.gif",
null,
"https://mathbits.org/MathBits/TISection/Trig/degree31.gif",
null,
"https://mathbits.org/MathBits/TISection/Trig/tidivider.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7500282,"math_prob":0.9762952,"size":2945,"snap":"2022-05-2022-21","text_gpt3_token_len":889,"char_repetition_ratio":0.16286977,"word_repetition_ratio":0.14417745,"special_character_ratio":0.3263158,"punctuation_ratio":0.18489985,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9785002,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36],"im_url_duplicate_count":[null,null,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,1,null,10,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-21T16:33:41Z\",\"WARC-Record-ID\":\"<urn:uuid:b402a737-9401-496f-b64a-cdcb8f00357b>\",\"Content-Length\":\"16609\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:122155e3-0cad-431f-ad00-f34e534a0308>\",\"WARC-Concurrent-To\":\"<urn:uuid:79f94841-6171-4c4d-9d5f-ffeef61aaebe>\",\"WARC-IP-Address\":\"67.43.11.130\",\"WARC-Target-URI\":\"https://mathbits.org/MathBits/TISection/Trig/degrees.html\",\"WARC-Payload-Digest\":\"sha1:5QU76V4LX4EC6DBK3LSWS2UMRF4ZAHPY\",\"WARC-Block-Digest\":\"sha1:Y7KHMDZJNTCIM5ECRYXWV2TND2IEOK6D\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320303512.46_warc_CC-MAIN-20220121162107-20220121192107-00608.warc.gz\"}"} |
https://www.colorhexa.com/1ca58b | [
"#1ca58b Color Information\n\nIn a RGB color space, hex #1ca58b is composed of 11% red, 64.7% green and 54.5% blue. Whereas in a CMYK color space, it is composed of 83% cyan, 0% magenta, 15.8% yellow and 35.3% black. It has a hue angle of 168.6 degrees, a saturation of 71% and a lightness of 37.8%. #1ca58b color hex could be obtained by blending #38ffff with #004b17. Closest websafe color is: #339999.\n\n• R 11\n• G 65\n• B 55\nRGB color chart\n• C 83\n• M 0\n• Y 16\n• K 35\nCMYK color chart\n\n#1ca58b color description : Dark cyan.\n\n#1ca58b Color Conversion\n\nThe hexadecimal color #1ca58b has RGB values of R:28, G:165, B:139 and CMYK values of C:0.83, M:0, Y:0.16, K:0.35. Its decimal value is 1877387.\n\nHex triplet RGB Decimal 1ca58b `#1ca58b` 28, 165, 139 `rgb(28,165,139)` 11, 64.7, 54.5 `rgb(11%,64.7%,54.5%)` 83, 0, 16, 35 168.6°, 71, 37.8 `hsl(168.6,71%,37.8%)` 168.6°, 83, 64.7 339999 `#339999`\nCIE-LAB 60.799, -40.78, 3.664 18.593, 29.019, 29.046 0.243, 0.379, 29.019 60.799, 40.945, 174.866 60.799, -47.721, 11.387 53.87, -32.664, 5.74 00011100, 10100101, 10001011\n\nColor Schemes with #1ca58b\n\n• #1ca58b\n``#1ca58b` `rgb(28,165,139)``\n• #a51c36\n``#a51c36` `rgb(165,28,54)``\nComplementary Color\n• #1ca547\n``#1ca547` `rgb(28,165,71)``\n• #1ca58b\n``#1ca58b` `rgb(28,165,139)``\n• #1c7ba5\n``#1c7ba5` `rgb(28,123,165)``\nAnalogous Color\n• #a5471c\n``#a5471c` `rgb(165,71,28)``\n• #1ca58b\n``#1ca58b` `rgb(28,165,139)``\n• #a51c7b\n``#a51c7b` `rgb(165,28,123)``\nSplit Complementary Color\n• #a58b1c\n``#a58b1c` `rgb(165,139,28)``\n• #1ca58b\n``#1ca58b` `rgb(28,165,139)``\n• #8b1ca5\n``#8b1ca5` `rgb(139,28,165)``\n• #36a51c\n``#36a51c` `rgb(54,165,28)``\n• #1ca58b\n``#1ca58b` `rgb(28,165,139)``\n• #8b1ca5\n``#8b1ca5` `rgb(139,28,165)``\n• #a51c36\n``#a51c36` `rgb(165,28,54)``\n• #116454\n``#116454` `rgb(17,100,84)``\n• #157966\n``#157966` `rgb(21,121,102)``\n• #188f79\n``#188f79` `rgb(24,143,121)``\n• #1ca58b\n``#1ca58b` `rgb(28,165,139)``\n• #20bb9d\n``#20bb9d` `rgb(32,187,157)``\n• #23d1b0\n``#23d1b0` `rgb(35,209,176)``\n• #31dcbc\n``#31dcbc` `rgb(49,220,188)``\nMonochromatic Color\n\nAlternatives to #1ca58b\n\nBelow, you can see some colors close to #1ca58b. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #1ca569\n``#1ca569` `rgb(28,165,105)``\n• #1ca574\n``#1ca574` `rgb(28,165,116)``\n• #1ca580\n``#1ca580` `rgb(28,165,128)``\n• #1ca58b\n``#1ca58b` `rgb(28,165,139)``\n• #1ca596\n``#1ca596` `rgb(28,165,150)``\n• #1ca5a2\n``#1ca5a2` `rgb(28,165,162)``\n• #1c9da5\n``#1c9da5` `rgb(28,157,165)``\nSimilar Colors\n\n#1ca58b Preview\n\nThis text has a font color of #1ca58b.\n\n``<span style=\"color:#1ca58b;\">Text here</span>``\n#1ca58b background color\n\nThis paragraph has a background color of #1ca58b.\n\n``<p style=\"background-color:#1ca58b;\">Content here</p>``\n#1ca58b border color\n\nThis element has a border color of #1ca58b.\n\n``<div style=\"border:1px solid #1ca58b;\">Content here</div>``\nCSS codes\n``.text {color:#1ca58b;}``\n``.background {background-color:#1ca58b;}``\n``.border {border:1px solid #1ca58b;}``\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #020e0c is the darkest color, while #fcfffe is the lightest one.\n\n• #020e0c\n``#020e0c` `rgb(2,14,12)``\n• #051f1a\n``#051f1a` `rgb(5,31,26)``\n• #083028\n``#083028` `rgb(8,48,40)``\n• #0b4036\n``#0b4036` `rgb(11,64,54)``\n• #0e5144\n``#0e5144` `rgb(14,81,68)``\n• #116252\n``#116252` `rgb(17,98,82)``\n• #137361\n``#137361` `rgb(19,115,97)``\n• #16836f\n``#16836f` `rgb(22,131,111)``\n• #19947d\n``#19947d` `rgb(25,148,125)``\n• #1ca58b\n``#1ca58b` `rgb(28,165,139)``\n• #1fb699\n``#1fb699` `rgb(31,182,153)``\n• #22c7a7\n``#22c7a7` `rgb(34,199,167)``\n• #25d7b5\n``#25d7b5` `rgb(37,215,181)``\n• #33dcbc\n``#33dcbc` `rgb(51,220,188)``\n• #44dfc2\n``#44dfc2` `rgb(68,223,194)``\n• #55e2c7\n``#55e2c7` `rgb(85,226,199)``\n• #65e5cd\n``#65e5cd` `rgb(101,229,205)``\n• #76e8d2\n``#76e8d2` `rgb(118,232,210)``\n• #87ebd8\n``#87ebd8` `rgb(135,235,216)``\n• #98eddd\n``#98eddd` `rgb(152,237,221)``\n• #a8f0e3\n``#a8f0e3` `rgb(168,240,227)``\n• #b9f3e8\n``#b9f3e8` `rgb(185,243,232)``\n• #caf6ee\n``#caf6ee` `rgb(202,246,238)``\n• #dbf9f3\n``#dbf9f3` `rgb(219,249,243)``\n• #ecfcf9\n``#ecfcf9` `rgb(236,252,249)``\n• #fcfffe\n``#fcfffe` `rgb(252,255,254)``\nTint Color Variation\n\nTones of #1ca58b\n\nA tone is produced by adding gray to any pure hue. In this case, #5f6262 is the less saturated color, while #06bb99 is the most saturated one.\n\n• #5f6262\n``#5f6262` `rgb(95,98,98)``\n• #576a66\n``#576a66` `rgb(87,106,102)``\n• #50716b\n``#50716b` `rgb(80,113,107)``\n• #49786f\n``#49786f` `rgb(73,120,111)``\n• #418074\n``#418074` `rgb(65,128,116)``\n• #3a8779\n``#3a8779` `rgb(58,135,121)``\n• #328f7d\n``#328f7d` `rgb(50,143,125)``\n• #2b9682\n``#2b9682` `rgb(43,150,130)``\n• #239e86\n``#239e86` `rgb(35,158,134)``\n• #1ca58b\n``#1ca58b` `rgb(28,165,139)``\n• #15ac90\n``#15ac90` `rgb(21,172,144)``\n• #0db494\n``#0db494` `rgb(13,180,148)``\n• #06bb99\n``#06bb99` `rgb(6,187,153)``\nTone Color Variation\n\nColor Blindness Simulator\n\nBelow, you can see how #1ca58b is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population"
https://www.quantum-inspire.com/kbase/full-adder/
"# Code example: Quantum full adder\n\n## Backends\n\nThis example is written for the emulator backend. Spin-2 has only two qubits and does not support this example. The starmon-5 backend can be used to execute this example, however, since it does not support the Toffoli gate, the code needs to be rewritten into two-qubit and single-qubit operations.\n\n## What is the quantum full adder\n\nJust like in classical electronics, where you can make different types of binary adders like half adders, full adders, ripple carry adders etcetera, you can make adders in quantum circuits as well. In this example we will show how a quantum full adder is created and how this adder acts on superposition states.\n\nThe full adder adds to input bits A and B plus a carry_input bit and produces the sum and carry_output bits as output. In classical control electronics the full adder has therefore three inputs and four outputs.\n\nSince quantum circuits are reversible, they have an equal amount of input and output qubits, therefore we define a 4-qubit function, where the input qubits are A,B, CarryIn and (zero) and the output qubits are A,B, Sum and CarryOut, see the figure below.\n\nOne possible implementation of a 2-bit full adder, using CNOT gates and Toffoli gates is the following:\n\nFor completeness we show the truth table for the full adder:\n\nInputs: q0 = A ; q1 = B ; q2= CarryIn\n\nOutputs: q0 = A ; q1 = B ; q2= SumOut ; q3 = CarryOut\n\n. q3 q2 q1 q0 . q3 q2 q1 q0\n. . Ci B A . Co S B A\n. . . . . . . . . .\nT1 0 0 0 0 - 0 0 0 0\nT2 0 1 0 0 - 0 1 0 0\nT3 0 0 1 0 - 0 1 1 0\nT4 0 1 1 0 - 1 0 1 0\nT5 0 0 0 1 - 0 1 0 1\nT6 0 1 0 1 - 1 0 0 1\nT7 0 0 1 1 - 1 0 1 1\nT8 0 1 1 1 - 1 1 1 1\n\n## How does it work\n\nThe code below shows what happens when we use the quantum full adder to add three qubit states. You can copy and paste this code in your own editor and see what happens when you change the input states A, B and CarryIn to either $\\left\\lvert 0 \\right\\rangle$ or $\\left\\lvert 1 \\right\\rangle$ using the X-gate in the initialization function. In this example we set A = 1, B = 0 and CarryIn = 1, equal to T6 in the truth table.\n\n## Examination of the code and results\n\nWhen we execute the code, setting the number of shots to 1, we get the histogram as shown below the code, which is equal to the expected output (compare to T6 in the truth table). Note: one shot is enough to determine the probability distribution\n\n \nversion 1.0\n\nqubits 4\n\n# qubit definitions\n# q --> A\n# q --> B\n# q --> CarryIn_SumOut\n# q --> CarryOut\n\n# initialize inputs to some values\n.init\n#initialize inputs A=1, B=0 and carry_in=1\n{x q | x q}\n\ntoffoli q,q,q\ncnot q,q\ntoffoli q,q,q\ncnot q,q\ncnot q,q\n\n\n\nWe can also use superposition states and entangled states, such as in the following code where we set A and B to a Bell state and CarryIn to a superposition state. This is similar to setting the input to states T1, T2, T7 and T8 at the same time.\n\nComparing again the truth table to the histogram of probability amplitudes, we indeed see that all 4 output states that we would expect are generated.\n\n## Want to know more?\n\nIn this example we showed how to create a simple quantum full adder. In the literature you can find other examples to add qubit states, subtract qubit states and execute more complex operations on qubit states."
https://www.numbersaplenty.com/122368
"Search a number\nBaseRepresentation\nbin11101111000000000\n320012212011\n4131320000\n512403433\n62342304\n71016521\noct357000\n9205764\n10122368\n1183a34\n125a994\n134390c\n1432848\n15263cd\nhex1de00\n\n122368 has 20 divisors (see below), whose sum is σ = 245520. Its totient is φ = 60928.\n\nThe previous prime is 122363. The next prime is 122387. The reversal of 122368 is 863221.\n\nAdding to 122368 its reverse (863221), we get a palindrome (985589).\n\nTogether with 123152 it forms an amicable pair .\n\nIt is a plaindrome in base 10.\n\nIt is a congruent number.\n\nIt is not an unprimeable number, because it can be changed into a prime (122363) by changing a digit.\n\nIt is a pernicious number, because its binary representation contains a prime number (7) of ones.\n\nIt is a polite number, since it can be written as a sum of consecutive naturals, namely, 393 + ... + 631.\n\nIt is an arithmetic number, because the mean of its divisors is an integer number (12276).\n\n2122368 is an apocalyptic number.\n\nIt is an amenable number.\n\nIt is a practical number, because each smaller number is the sum of distinct divisors of 122368, and also a Zumkeller number, because its divisors can be partitioned in two sets with the same sum (122760).\n\n122368 is an abundant number, since it is smaller than the sum of its proper divisors (123152).\n\nIt is a pseudoperfect number, because it is the sum of a subset of its proper divisors.\n\n122368 is an frugal number, since it uses more digits than its factorization.\n\n122368 is an odious number, because the sum of its binary digits is odd.\n\nThe sum of its prime factors is 257 (or 241 counting only the distinct ones).\n\nThe product of its digits is 576, while the sum is 22.\n\nThe square root of 122368 is about 349.8113777452. The cubic root of 122368 is about 49.6465743592.\n\nThe spelling of 122368 in words is \"one hundred twenty-two thousand, three hundred sixty-eight\"."
https://scicomp.stackexchange.com/questions/23385/what-is-the-worst-case-complexity-of-conjugate-gradient/23390
"# What is the worst case complexity of Conjugate Gradient?\n\nLet $A\\in \\mathbb{R}^{n\\times n}$, symmetric and positive definite. Suppose it takes $m$ units of work to multiply a vector by $A$. It is well known that performing the CG algorithm on $A$ with condition number $\\kappa$ requires $\\mathcal{O} (m\\sqrt{\\kappa})$, units of work.\n\nNow, of course, being a $\\mathcal{O}$ statement this is an upper-bound. And the CG algorithm can always terminate in zero steps with a lucky initial guess.\n\nDo we know if there exists a RHS and an initial (unlucky) guess that will require $\\mathcal{\\Theta}(\\sqrt{\\kappa})$ steps? Put another way, is worst-case work-complexity of CG really $\\Theta( m \\sqrt{\\kappa})$?\n\nThis question arises when I tried to determine if the benefit of a preconditioner (lower $\\kappa$) outweighed its cost (higher $m$). Right now, I am working with toy problems and would like to have a better idea before I implement anything in a compiled language.\n\n• You could presumably construct a pessimal initial guess by running the CG algorithm \"backwards\" and putting suitable energy into each of the $A$-orthogonal search directions that the algorithm requires all the steps. – origimbo Mar 17 '16 at 16:11\n\nThe answer is a resounding yes. The convergence rate bound of $(\\sqrt{\\kappa}-1) / (\\sqrt{\\kappa}+1)$ is sharp over the set of symmetric positive definite matrices with condition number $\\kappa$. In other words, knowing nothing more about $A$ than its condition number, CG really can take $\\sim\\sqrt{\\kappa}$ iterations to converge. Loosely speaking, the upper-bound is attained if the eigenvalues of $A$ are uniformly distributed (i.e. \"peppered\") within an interval of condition number $\\kappa$.\n\nHere's a more rigorous statement. Deterministic versions are more involved but work using the same principles.\n\nTheorem (Worst-case choice of $A$). Pick any random orthogonal matrix $U$, let $\\lambda_1,\\ldots,\\lambda_n$ be $n$ real numbers uniformly sampled from the real interval $[1,\\kappa]$, and let $b=[b_1;\\ldots;b_n]$ be $n$ real numbers sampled i.i.d. from the standard Gaussian. Define $$A=U\\mathrm{diag}(\\lambda_1,\\ldots,\\lambda_n)U^T.$$ Then in the limit $n\\to\\infty$, conjugate gradients will convergence with probability one to an $\\epsilon$ accurate solution of $Ax=b$ in no less than $\\Omega(\\sqrt{\\kappa}\\log\\epsilon^{-1})$ iterations.\n\nProof. The standard proof is based on optimal Chebyshev polynomial approximations, using techniques found in a number of places, such as Greenbaum's book or Saad's book.\n\n• The bound is not sharp, as the answer explains later, If the eigenvalues are not uniformly distributed, cg converges faster, since it is not a stationalry iteration. Thus, we need to know more about the matrix. – Guido Kanschat Mar 20 '16 at 3:39\n• @GuidoKanschat: Good point, and I've fixed the statement to clarify that sharpness is attained over all $A$ with condition $\\kappa$. – Richard Zhang Mar 21 '16 at 15:50\n• The proof boils down to minimizing $\\|p(A)\\|$ in the space of order-$k$ polynomials satisfying $p(0)=1$. Equivalently this is $\\min_p \\max_{\\lambda\\in\\Lambda(A)} |p(\\lambda)|$. 
In the stated limit, $\\Lambda(A)\\to[1,\\kappa]$, and the solution for the minimax problem is then the Chebyshev polynomial, whose error converges as $\\sim\\sqrt{\\kappa}$ – Richard Zhang Mar 21 '16 at 21:27\n\nTaking this as my original question: Do we know if there exists a RHS and an initial (unlucky) guess that will require $\\Theta(\\sqrt{\\kappa})$ steps?\n\nThe answer to the question is \"no\". The idea of this answer comes from the comment from Guido Kanschat.\n\nClaim: For any given condition number $k$, there exists a matrix $A$, with that condition number for which the CG algorithm will terminate in at most two steps (for any given RHS and initial guess).\n\nConsider $A\\in \\mathbb{R}^{n\\times n}$ where $A=\\mathrm{diag}(1,\\kappa,\\kappa,\\ldots, \\kappa)$. Then the condition number of $A$ is $\\kappa$. Let $b\\in \\mathbb{R}^n$ be the RHS, and denote the eigenvalues of $A$ as $\\lambda_i$ where $$\\lambda_i = \\left\\{\\begin{array}{ll}1 & i=1\\\\ \\kappa & i\\not= 1 \\end{array} \\right. .$$\n\nWe first consider the case where $x^{(0)} \\in \\mathbb{R}^n$, the initial guess, is zero. Denote $x^{(2)}\\in \\mathbb{R}^n$ as the second estimate of $A^{-1}b$ from the CG algorithm. We show that $x^{(2)} =A^{-1}b$ by showing $\\langle x^{(2)}-A^{-1}b, A(x^{(2)}-A^{-1}b)\\rangle =0$. Indeed, we have\n\n\\begin{align*} \\langle x^{(2)}-A^{-1}b, A(x^{(2)}-A^{-1}b)\\rangle &= \\left\\| x^{(2)}-A^{-1}b \\right\\|_A^2 \\\\ &=\\min_{p\\in \\mathrm{poly}_{1} } \\left\\| (p(A)-A^{-1}) b \\right\\|_A^2\\\\ &=\\min_{p\\in \\mathrm{poly}_{1} } \\sum_{i=1}^n (p(\\lambda_i) - \\lambda_i^{-1})^2 \\lambda_i b_i^2 \\\\ &\\le \\sum_{i=1}^n (\\widehat{p}(\\lambda_i) - \\lambda_i^{-1})^2 \\lambda_i b_i^2 = 0 \\end{align*}\n\nWhere we use the first order polynomial $\\widehat{p}$ defined as $\\widehat{p}(x)= (1+\\kappa-x)/\\kappa$. So we proven the case for $x^{(0)}= 0$.\n\nIf $x^{(0)} \\not = 0$, then $x^{(2)}= \\overline{x^{(2)}}+ x^{(0)}$ where $\\overline{x^{(2)} }$ is the second estimate of the CG algorithm with $b$ replaced with $\\overline{b} = b-A x^{(0)}$. So we have reduced this case to the previous one.\n\n• How much of this is robust to finite precision arithmetic? – origimbo Mar 21 '16 at 16:28\n• @origimbo If your question was directed to me, the answer is, \"I don't know.\" – fred Mar 21 '16 at 16:36"
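To see the two behaviours side by side, here is a small self-contained NumPy sketch (mine, not from the thread) that runs plain CG on two diagonal matrices with the same condition number: one with eigenvalues spread uniformly over $[1,\kappa]$ and one with the two-cluster spectrum $\mathrm{diag}(1,\kappa,\ldots,\kappa)$ used in the second answer.

```python
import numpy as np

def cg_iterations(diag_vals, b, tol=1e-8, maxiter=10_000):
    """Plain conjugate gradient on A = diag(diag_vals); returns the number of
    iterations needed to reduce the residual norm below tol * ||b||."""
    x = np.zeros_like(b)
    r = b.copy()                      # r = b - A x with x = 0
    p = r.copy()
    rs = r @ r
    target = tol * np.linalg.norm(b)
    for k in range(1, maxiter + 1):
        Ap = diag_vals * p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < target:
            return k
        p = r + (rs_new / rs) * p
        rs = rs_new
    return maxiter

n, kappa = 2000, 10_000
rng = np.random.default_rng(0)
b = rng.standard_normal(n)

uniform = np.linspace(1.0, kappa, n)      # eigenvalues "peppered" over [1, kappa]
clustered = np.full(n, float(kappa))
clustered[0] = 1.0                        # diag(1, kappa, ..., kappa)

print(cg_iterations(uniform, b))          # hundreds of iterations, on the order of sqrt(kappa)
print(cg_iterations(clustered, b))        # a handful (two in exact arithmetic)
```

The contrast matches both answers: the $\sqrt{\kappa}$ bound is sharp for a uniformly spread spectrum, while a clustered spectrum with the same condition number converges almost immediately.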
https://openstax.org/books/elementary-algebra/pages/1-8-the-real-numbers
"Elementary Algebra\n\n# 1.8The Real Numbers\n\nElementary Algebra1.8 The Real Numbers\n\n### Learning Objectives\n\nBy the end of this section, you will be able to:\n\n• Simplify expressions with square roots\n• Identify integers, rational numbers, irrational numbers, and real numbers\n• Locate fractions on the number line\n• Locate decimals on the number line\n\n### Be Prepared 1.8\n\nA more thorough introduction to the topics covered in this section can be found in the Prealgebra chapters, Decimals and Properties of Real Numbers.\n\n### Simplify Expressions with Square Roots\n\nRemember that when a number n is multiplied by itself, we write $n2n2$ and read it “n squared.” The result is called the square of n. For example,\n\n$82read‘8squared’6464is called thesquareof8.82read‘8squared’6464is called thesquareof8.$\n\nSimilarly, 121 is the square of 11, because $112112$ is 121.\n\n### Square of a Number\n\nIf $n2=m,n2=m,$ then m is the square of n.\n\n### Manipulative Mathematics\n\nDoing the Manipulative Mathematics activity “Square Numbers” will help you develop a better understanding of perfect square numbers.\n\nComplete the following table to show the squares of the counting numbers 1 through 15.",
"The numbers in the second row are called perfect square numbers. It will be helpful to learn to recognize the perfect square numbers.\n\nThe squares of the counting numbers are positive numbers. What about the squares of negative numbers? We know that when the signs of two numbers are the same, their product is positive. So the square of any negative number is also positive.\n\n$(−3)2=9(−8)2=64(−11)2=121(−15)2=225(−3)2=9(−8)2=64(−11)2=121(−15)2=225$\n\nDid you notice that these squares are the same as the squares of the positive numbers?\n\nSometimes we will need to look at the relationship between numbers and their squares in reverse. Because $102=100,102=100,$ we say 100 is the square of 10. We also say that 10 is a square root of 100. A number whose square is $mm$ is called a square root of m.\n\n### Square Root of a Number\n\nIf $n2=m,n2=m,$ then n is a square root of m.\n\nNotice $(−10)2=100(−10)2=100$ also, so $−10−10$ is also a square root of 100. Therefore, both 10 and $−10−10$ are square roots of 100.\n\nSo, every positive number has two square roots—one positive and one negative. What if we only wanted the positive square root of a positive number? The radical sign, $m,m,$ denotes the positive square root. The positive square root is called the principal square root. When we use the radical sign that always means we want the principal square root.\n\nWe also use the radical sign for the square root of zero. Because $02=0,02=0,$ $0=0.0=0.$ Notice that zero has only one square root.\n\n### Square Root Notation\n\n$mm$ is read “the square root of m",
"If $m=n2,m=n2,$ then $m=n,m=n,$ for $n≥0.n≥0.$\n\nThe square root of m, $m,m,$ is the positive number whose square is m.\n\nSince 10 is the principal square root of 100, we write $100=10.100=10.$ You may want to complete the following table to help you recognize square roots.",
"### Example 1.108\n\nSimplify: $2525$ $121.121.$\n\n### Try It 1.215\n\nSimplify: $3636$ $169.169.$\n\n### Try It 1.216\n\nSimplify: $1616$ $196.196.$\n\nWe know that every positive number has two square roots and the radical sign indicates the positive one. We write $100=10.100=10.$ If we want to find the negative square root of a number, we place a negative in front of the radical sign. For example, $−100=−10.−100=−10.$ We read $−100−100$ as “the opposite of the square root of 10.”\n\n### Example 1.109\n\nSimplify: $−9−9$ $−144.−144.$\n\n### Try It 1.217\n\nSimplify: $−4−4$ $−225.−225.$\n\n### Try It 1.218\n\nSimplify: $−81−81$ $−100.−100.$\n\n### Identify Integers, Rational Numbers, Irrational Numbers, and Real Numbers\n\nWe have already described numbers as counting numbers, whole numbers, and integers. What is the difference between these types of numbers?\n\n$Counting numbers1,2,3,4,…Whole numbers0,1,2,3,4,…Integers…−3,−2,−1,0,1,2,3,…Counting numbers1,2,3,4,…Whole numbers0,1,2,3,4,…Integers…−3,−2,−1,0,1,2,3,…$\n\nWhat type of numbers would we get if we started with all the integers and then included all the fractions? The numbers we would have form the set of rational numbers. A rational number is a number that can be written as a ratio of two integers.\n\n### Rational Number\n\nA rational number is a number of the form $pq,pq,$ where p and q are integers and $q≠0.q≠0.$\n\nA rational number can be written as the ratio of two integers.\n\nAll signed fractions, such as $45,−78,134,−20345,−78,134,−203$ are rational numbers. Each numerator and each denominator is an integer.\n\nAre integers rational numbers? To decide if an integer is a rational number, we try to write it as a ratio of two integers. Each integer can be written as a ratio of integers in many ways. For example, 3 is equivalent to $31,62,93,124,155…31,62,93,124,155…$\n\nAn easy way to write an integer as a ratio of integers is to write it as a fraction with denominator one.\n\n$3=31−8=−810=013=31−8=−810=01$\n\nSince any integer can be written as the ratio of two integers, all integers are rational numbers! Remember that the counting numbers and the whole numbers are also integers, and so they, too, are rational.\n\nWhat about decimals? Are they rational? Let’s look at a few to see if we can write each of them as the ratio of two integers.\n\nWe’ve already seen that integers are rational numbers. The integer $−8−8$ could be written as the decimal $−8.0.−8.0.$ So, clearly, some decimals are rational.\n\nThink about the decimal 7.3. Can we write it as a ratio of two integers? Because 7.3 means $7310,7310,$ we can write it as an improper fraction, $7310.7310.$ So 7.3 is the ratio of the integers 73 and 10. It is a rational number.\n\nIn general, any decimal that ends after a number of digits (such as 7.3 or $−1.2684)−1.2684)$ is a rational number. We can use the place value of the last digit as the denominator when writing the decimal as a fraction.\n\n### Example 1.110\n\nWrite as the ratio of two integers: $−27−27$ 7.31.\n\n### Try It 1.219\n\nWrite as the ratio of two integers: $−24−24$ 3.57.\n\n### Try It 1.220\n\nWrite as the ratio of two integers: $−19−19$ 8.41.\n\nLet’s look at the decimal form of the numbers we know are rational.\n\nWe have seen that every integer is a rational number, since $a=a1a=a1$ for any integer, a. 
We can also change any integer to a decimal by adding a decimal point and a zero.\n\n$Integer−2−10123Decimal form−2.0−1.00.01.02.03.0Integer−2−10123Decimal form−2.0−1.00.01.02.03.0$\n$These decimal numbers stop.These decimal numbers stop.$\n\nWe have also seen that every fraction is a rational number. Look at the decimal form of the fractions we considered above.\n\n$Ratio of integers45−78134−203The decimal form0.8−0.8753.25−6.666…−6.6–Ratio of integers45−78134−203The decimal form0.8−0.8753.25−6.666…−6.6–$\n$These decimals either stop or repeat.These decimals either stop or repeat.$\n\nWhat do these examples tell us?\n\nEvery rational number can be written both as a ratio of integers, $(pq,(pq,$ where p and q are integers and $q≠0),q≠0),$ and as a decimal that either stops or repeats.\n\nHere are the numbers we looked at above expressed as a ratio of integers and as a decimal:\n\nFractions Integers\nNumber $4545$ $−78−78$ $134134$ $−203−203$ $−2−2$ $−1−1$ $00$ $11$ $22$ $33$\nRatio of Integers $4545$ $−78−78$ $134134$ $−203−203$ $−21−21$ $−11−11$ $0101$ $1111$ $2121$ $3131$\nDecimal Form $0.80.8$ $−0.875−0.875$ $3.253.25$ $−6.6–−6.6–$ $−2.0−2.0$ $−1.0−1.0$ $0.00.0$ $1.01.0$ $2.02.0$ $3.03.0$\n\n### Rational Number\n\nA rational number is a number of the form $pq,pq,$ where p and q are integers and $q≠0.q≠0.$\n\nIts decimal form stops or repeats.\n\nAre there any decimals that do not stop or repeat? Yes!\n\nThe number $ππ$ (the Greek letter pi, pronounced “pie”), which is very important in describing circles, has a decimal form that does not stop or repeat.\n\n$π=3.141592654...π=3.141592654...$\n\nWe can even create a decimal pattern that does not stop or repeat, such as\n\n$2.01001000100001…2.01001000100001…$\n\nNumbers whose decimal form does not stop or repeat cannot be written as a fraction of integers. We call these numbers irrational.\n\n### Irrational Number\n\nAn irrational number is a number that cannot be written as the ratio of two integers.\n\nIts decimal form does not stop and does not repeat.\n\nLet’s summarize a method we can use to determine whether a number is rational or irrational.\n\n### Rational or Irrational?\n\nIf the decimal form of a number\n\n• repeats or stops, the number is rational.\n• does not repeat and does not stop, the number is irrational.\n\n### Example 1.111\n\nGiven the numbers $0.583–,0.47,3.605551275...0.583–,0.47,3.605551275...$ list the rational numbers irrational numbers.\n\n### Try It 1.221\n\nFor the given numbers list the rational numbers irrational numbers: $0.29,0.816–,2.515115111….0.29,0.816–,2.515115111….$\n\n### Try It 1.222\n\nFor the given numbers list the rational numbers irrational numbers: $2.63–,0.125,0.418302…2.63–,0.125,0.418302…$\n\n### Example 1.112\n\nFor each number given, identify whether it is rational or irrational: $3636$ $44.44.$\n\n### Try It 1.223\n\nFor each number given, identify whether it is rational or irrational: $8181$ $17.17.$\n\n### Try It 1.224\n\nFor each number given, identify whether it is rational or irrational: $116116$ $121.121.$\n\nWe have seen that all counting numbers are whole numbers, all whole numbers are integers, and all integers are rational numbers. The irrational numbers are numbers whose decimal form does not stop and does not repeat. When we put together the rational numbers and the irrational numbers, we get the set of real numbers.\n\n### Real Number\n\nA real number is a number that is either rational or irrational.\n\nAll the numbers we use in elementary algebra are real numbers. 
Figure 1.15 illustrates how the number sets we’ve discussed in this section fit together.\n\nFigure 1.15 This chart shows the number sets that make up the set of real numbers. Does the term “real numbers” seem strange to you? Are there any numbers that are not “real,” and, if so, what could they be?\n\nCan we simplify $−25?−25?$ Is there a number whose square is $−25?−25?$\n\n$()2=−25?()2=−25?$\n\nNone of the numbers that we have dealt with so far has a square that is $−25.−25.$ Why? Any positive number squared is positive. Any negative number squared is positive. So we say there is no real number equal to $−25.−25.$\n\nThe square root of a negative number is not a real number.\n\n### Example 1.113\n\nFor each number given, identify whether it is a real number or not a real number: $−169−169$ $−64.−64.$\n\n### Try It 1.225\n\nFor each number given, identify whether it is a real number or not a real number: $−196−196$ $−81.−81.$\n\n### Try It 1.226\n\nFor each number given, identify whether it is a real number or not a real number: $−49−49$ $−121.−121.$\n\n### Example 1.114\n\nGiven the numbers $−7,145,8,5,5.9,−64,−7,145,8,5,5.9,−64,$ list the whole numbers integers rational numbers irrational numbers real numbers.\n\n### Try It 1.227\n\nFor the given numbers, list the whole numbers integers rational numbers irrational numbers real numbers: $−3,−2,0.3–,95,4,49.−3,−2,0.3–,95,4,49.$\n\n### Try It 1.228\n\nFor the given numbers, list the whole numbers integers rational numbers irrational numbers real numbers: $−25,−38,−1,6,121,2.041975…−25,−38,−1,6,121,2.041975…$\n\n### Locate Fractions on the Number Line\n\nThe last time we looked at the number line, it only had positive and negative integers on it. We now want to include fractions and decimals on it.\n\n### Manipulative Mathematics\n\nDoing the Manipulative Mathematics activity “Number Line Part 3” will help you develop a better understanding of the location of fractions on the number line.\n\nLet’s start with fractions and locate $15,−45,3,74,−92,−5,and8315,−45,3,74,−92,−5,and83$ on the number line.\n\nWe’ll start with the whole numbers $33$ and $−5.−5.$ because they are the easiest to plot. See Figure 1.16.\n\nThe proper fractions listed are $15and−45.15and−45.$ We know the proper fraction $1515$ has value less than one and so would be located between $0 and 1.0 and 1.$ The denominator is 5, so we divide the unit from 0 to 1 into 5 equal parts $15,25,35,45.15,25,35,45.$ We plot $15.15.$ See Figure 1.16.\n\nSimilarly, $−45−45$ is between 0 and $−1.−1.$ After dividing the unit into 5 equal parts we plot $−45.−45.$ See Figure 1.16.\n\nFinally, look at the improper fractions $74,−92,83.74,−92,83.$ These are fractions in which the numerator is greater than the denominator. Locating these points may be easier if you change each of them to a mixed number. See Figure 1.16.\n\n$74=134−92=−41283=22374=134−92=−41283=223$\n\nFigure 1.16 shows the number line with all the points plotted.\n\nFigure 1.16\n\n### Example 1.115\n\nLocate and label the following on a number line: $4,34,−14,−3,65,−52,and73.4,34,−14,−3,65,−52,and73.$\n\n### Try It 1.229\n\nLocate and label the following on a number line: $−1,13,65,−74,92,5,−83.−1,13,65,−74,92,5,−83.$\n\n### Try It 1.230\n\nLocate and label the following on a number line: $−2,23,75,−74,72,3,−73.−2,23,75,−74,72,3,−73.$\n\nIn Example 1.116, we’ll use the inequality symbols to order fractions. 
In previous chapters we used the number line to order numbers.\n\n• a < ba is less than b” when a is to the left of b on the number line\n• a > ba is greater than b” when a is to the right of b on the number line\n\nAs we move from left to right on a number line, the values increase.\n\n### Example 1.116\n\nOrder each of the following pairs of numbers, using < or >. It may be helpful to refer Figure 1.17.\n\n$−23___−1−23___−1$ $−312___−3−312___−3$ $−34___−14−34___−14$ $−2___−83−2___−83$\n\nFigure 1.17\n\n### Try It 1.231\n\nOrder each of the following pairs of numbers, using < or >:\n\n$−13___−1−13___−1$ $−112___−2−112___−2$ $−23___−13−23___−13$ $−3___−73.−3___−73.$\n\n### Try It 1.232\n\nOrder each of the following pairs of numbers, using < or >:\n\n$−1___−23−1___−23$ $−214___−2−214___−2$ $−35___−45−35___−45$ $−4___−103.−4___−103.$\n\n### Locate Decimals on the Number Line\n\nSince decimals are forms of fractions, locating decimals on the number line is similar to locating fractions on the number line.\n\n### Example 1.117\n\nLocate 0.4 on the number line.\n\n### Try It 1.233\n\nLocate on the number line: 0.6.\n\n### Try It 1.234\n\nLocate on the number line: 0.9.\n\n### Example 1.118\n\nLocate $−0.74−0.74$ on the number line.\n\n### Try It 1.235\n\nLocate on the number line: $−0.6.−0.6.$\n\n### Try It 1.236\n\nLocate on the number line: $−0.7.−0.7.$\n\nWhich is larger, 0.04 or 0.40? If you think of this as money, you know that $0.40 (forty cents) is greater than$0.04 (four cents). So,\n\n$0.40>0.040.40>0.04$\n\nAgain, we can use the number line to order numbers.\n\n• a < ba is less than b” when a is to the left of b on the number line\n• a > ba is greater than b” when a is to the right of b on the number line\n\nWhere are 0.04 and 0.40 located on the number line? See Figure 1.20.\n\nFigure 1.20\n\nWe see that 0.40 is to the right of 0.04 on the number line. This is another way to demonstrate that 0.40 > 0.04.\n\nHow does 0.31 compare to 0.308? This doesn’t translate into money to make it easy to compare. But if we convert 0.31 and 0.308 into fractions, we can tell which is larger.\n\n 0.31 0.308 Convert to fractions. $3110031100$ $30810003081000$ We need a common denominator to compare them.",
"$31010003101000$ $30810003081000$\n\nBecause 310 > 308, we know that $3101000>3081000.3101000>3081000.$ Therefore, 0.31 > 0.308.\n\nNotice what we did in converting 0.31 to a fraction—we started with the fraction $3110031100$ and ended with the equivalent fraction $3101000.3101000.$ Converting $31010003101000$ back to a decimal gives 0.310. So 0.31 is equivalent to 0.310. Writing zeros at the end of a decimal does not change its value!\n\n$31100=3101000and0.31=0.31031100=3101000and0.31=0.310$\n\nWe say 0.31 and 0.310 are equivalent decimals.\n\n### Equivalent Decimals\n\nTwo decimals are equivalent if they convert to equivalent fractions.\n\nWe use equivalent decimals when we order decimals.\n\nThe steps we take to order decimals are summarized here.\n\n### How To\n\n#### Order Decimals.\n\n1. Step 1. Write the numbers one under the other, lining up the decimal points.\n2. Step 2. Check to see if both numbers have the same number of digits. If not, write zeros at the end of the one with fewer digits to make them match.\n3. Step 3. Compare the numbers as if they were whole numbers.\n4. Step 4. Order the numbers using the appropriate inequality sign.\n\n### Example 1.119\n\nOrder $0.64___0.60.64___0.6$ using $<<$ or $>.>.$\n\n### Try It 1.237\n\nOrder each of the following pairs of numbers, using $:0.42___0.4.:0.42___0.4.$\n\n### Try It 1.238\n\nOrder each of the following pairs of numbers, using $:0.18___0.1.:0.18___0.1.$\n\n### Example 1.120\n\nOrder $0.83___0.8030.83___0.803$ using $<<$ or $>.>.$\n\n### Try It 1.239\n\nOrder the following pair of numbers, using $:0.76___0.706.:0.76___0.706.$\n\n### Try It 1.240\n\nOrder the following pair of numbers, using $:0.305___0.35.:0.305___0.35.$\n\nWhen we order negative decimals, it is important to remember how to order negative integers. Recall that larger numbers are to the right on the number line. For example, because $−2−2$ lies to the right of $−3−3$ on the number line, we know that $−2>−3.−2>−3.$ Similarly, smaller numbers lie to the left on the number line. 
For example, because $−9−9$ lies to the left of $−6−6$ on the number line, we know that $−9<−6.−9<−6.$ See Figure 1.21.\n\nFigure 1.21\n\nIf we zoomed in on the interval between 0 and $−1,−1,$ as shown in Example 1.121, we would see in the same way that $−0.2>−0.3and−0.9<−0.6.−0.2>−0.3and−0.9<−0.6.$\n\n### Example 1.121\n\nUse $<<$ or $>>$ to order $−0.1___−0.8.−0.1___−0.8.$\n\n### Try It 1.241\n\nOrder the following pair of numbers, using < or >: $−0.3___−0.5.−0.3___−0.5.$\n\n### Try It 1.242\n\nOrder the following pair of numbers, using < or >: $−0.6___−0.7.−0.6___−0.7.$\n\n### Section 1.8 Exercises\n\n#### Practice Makes Perfect\n\nSimplify Expressions with Square Roots\n\nIn the following exercises, simplify.\n\n659.\n\n$36 36$\n\n660.\n\n$4 4$\n\n661.\n\n$64 64$\n\n662.\n\n$169 169$\n\n663.\n\n$9 9$\n\n664.\n\n$16 16$\n\n665.\n\n$100 100$\n\n666.\n\n$144 144$\n\n667.\n\n$− 4 − 4$\n\n668.\n\n$− 100 − 100$\n\n669.\n\n$− 1 − 1$\n\n670.\n\n$− 121 − 121$\n\nIdentify Integers, Rational Numbers, Irrational Numbers, and Real Numbers\n\nIn the following exercises, write as the ratio of two integers.\n\n671.\n\n5 3.19\n\n672.\n\n8 1.61\n\n673.\n\n$−12−12$ 9.279\n\n674.\n\n$−16−16$ 4.399\n\nIn the following exercises, list the rational numbers, irrational numbers\n\n675.\n\n$0.75 , 0.22 3 – , 1.39174 0.75 , 0.22 3 – , 1.39174$\n\n676.\n\n$0.36 , 0.94729 … , 2.52 8 – 0.36 , 0.94729 … , 2.52 8 –$\n\n677.\n\n$0.4 5 – , 1.919293 … , 3.59 0.4 5 – , 1.919293 … , 3.59$\n\n678.\n\n$0.1 3 – , 0.42982 … , 1.875 0.1 3 – , 0.42982 … , 1.875$\n\nIn the following exercises, identify whether each number is rational or irrational.\n\n679.\n\n$2525$ $3030$\n\n680.\n\n$4444$ $4949$\n\n681.\n\n$164164$ $169169$\n\n682.\n\n$225225$ $216216$\n\nIn the following exercises, identify whether each number is a real number or not a real number.\n\n683.\n\n$−81−81$ $−121−121$\n\n684.\n\n$−64−64$ $−9−9$\n\n685.\n\n$−36−36$ $−144−144$\n\n686.\n\n$−49−49$ $−144−144$\n\nIn the following exercises, list the whole numbers, integers, rational numbers, irrational numbers, real numbers for each set of numbers.\n\n687.\n\n$−8 , 0 , 1.95286 … , 12 5 , 36 , 9 −8 , 0 , 1.95286 … , 12 5 , 36 , 9$\n\n688.\n\n$−9 , −3 4 9 , − 9 , 0.40 9 – , 11 6 , 7 −9 , −3 4 9 , − 9 , 0.40 9 – , 11 6 , 7$\n\n689.\n\n$− 100 , −7 , − 8 3 , −1 , 0.77 , 3 1 4 − 100 , −7 , − 8 3 , −1 , 0.77 , 3 1 4$\n\n690.\n\n$−6 , − 5 2 , 0 , 0. 714285 ——— , 2 1 5 , 14 −6 , − 5 2 , 0 , 0. 
714285 ——— , 2 1 5 , 14$\n\nLocate Fractions on the Number Line\n\nIn the following exercises, locate the numbers on a number line.\n\n691.\n\n$3 4 , 8 5 , 10 3 3 4 , 8 5 , 10 3$\n\n692.\n\n$1 4 , 9 5 , 11 3 1 4 , 9 5 , 11 3$\n\n693.\n\n$3 10 , 7 2 , 11 6 , 4 3 10 , 7 2 , 11 6 , 4$\n\n694.\n\n$7 10 , 5 2 , 13 8 , 3 7 10 , 5 2 , 13 8 , 3$\n\n695.\n\n$2 5 , − 2 5 2 5 , − 2 5$\n\n696.\n\n$3 4 , − 3 4 3 4 , − 3 4$\n\n697.\n\n$3 4 , − 3 4 , 1 2 3 , −1 2 3 , 5 2 , − 5 2 3 4 , − 3 4 , 1 2 3 , −1 2 3 , 5 2 , − 5 2$\n\n698.\n\n$1 5 , − 2 5 , 1 3 4 , −1 3 4 , 8 3 , − 8 3 1 5 , − 2 5 , 1 3 4 , −1 3 4 , 8 3 , − 8 3$\n\nIn the following exercises, order each of the pairs of numbers, using < or >.\n\n699.\n\n$−1 ___ − 1 4 −1 ___ − 1 4$\n\n700.\n\n$−1 ___ − 1 3 −1 ___ − 1 3$\n\n701.\n\n$−2 1 2 ___ −3 −2 1 2 ___ −3$\n\n702.\n\n$−1 3 4 ___ −2 −1 3 4 ___ −2$\n\n703.\n\n$− 5 12 ___ − 7 12 − 5 12 ___ − 7 12$\n\n704.\n\n$− 9 10 ___ − 3 10 − 9 10 ___ − 3 10$\n\n705.\n\n$−3 ___ − 13 5 −3 ___ − 13 5$\n\n706.\n\n$−4 ___ − 23 6 −4 ___ − 23 6$\n\nLocate Decimals on the Number Line In the following exercises, locate the number on the number line.\n\n707.\n\n0.8\n\n708.\n\n$−0.9 −0.9$\n\n709.\n\n$−1.6 −1.6$\n\n710.\n\n3.1\n\nIn the following exercises, order each pair of numbers, using < or >.\n\n711.\n\n$0.37 ___ 0.63 0.37 ___ 0.63$\n\n712.\n\n$0.86 ___ 0.69 0.86 ___ 0.69$\n\n713.\n\n$0.91 ___ 0.901 0.91 ___ 0.901$\n\n714.\n\n$0.415 ___ 0.41 0.415 ___ 0.41$\n\n715.\n\n$−0.5 ___ −0.3 −0.5 ___ −0.3$\n\n716.\n\n$−0.1 ___ −0.4 −0.1 ___ −0.4$\n\n717.\n\n$−0.62 ___ −0.619 −0.62 ___ −0.619$\n\n718.\n\n$−7.31 ___ −7.3 −7.31 ___ −7.3$\n\n#### Everyday Math\n\n719.\n\nField trip All the 5th graders at Lincoln Elementary School will go on a field trip to the science museum. Counting all the children, teachers, and chaperones, there will be 147 people. Each bus holds 44 people.\n\nHow many busses will be needed?\nWhy must the answer be a whole number?\nWhy shouldn’t you round the answer the usual way, by choosing the whole number closest to the exact answer?\n\n720.\n\nChild care Serena wants to open a licensed child care center. Her state requires there be no more than 12 children for each teacher. She would like her child care center to serve 40 children.\n\nHow many teachers will be needed?\nWhy must the answer be a whole number?\nWhy shouldn’t you round the answer the usual way, by choosing the whole number closest to the exact answer?\n\n#### Writing Exercises\n\n721.\n\nIn your own words, explain the difference between a rational number and an irrational number.\n\n722.\n\nExplain how the sets of numbers (counting, whole, integer, rational, irrationals, reals) are related to each other.\n\n#### Self Check\n\nAfter completing the exercises, use this checklist to evaluate your mastery of the objective of this section.",
"On a scale of $1−10,1−10,$ how would you rate your mastery of this section in light of your responses on the checklist? How can you improve this?\n\nOrder a print copy\n\nAs an Amazon Associate we earn from qualifying purchases."
https://www.daniweb.com/programming/game-development/threads/471199/retro-game-qix-logic
"Hello. First time posting on these forums as I am really stuck on my code. The language I am using is GML (Game Maker Language) and the program Game Maker: Studio. As I could not get any help on GM forums I am seeking help elsewhere.\n\nThe game I am making is a remake of a retro game called QIX.\n\"Qix is an old game where you are drawing boxes in order to fill in the screen with a color or image while avoiding enemies. To pass a level you must fill in a certain percentage of the screen. A more modern game series with this gameplay would be the Gals Panic series.\"\nWikipedia: http://en.wikipedia.org/wiki/Qix\n\nI have made it work, but the problem is that it is extremely slow. It takes about a minute for larger area to be drawn. What I am doing:\n-After the player finishes his move I have an array of the current lines and it adds the new lines that the player just made to the array.\n-It takes two points on each side of the first line and checks a line from one point to the QIX. Depending on the lines crossed it calculates which point is in the closed area without the QIX. This can turn out wrong in some cases, not only because the check (if even number of lines crossed, QIX is in the same area) is wrong, but also the way I check if the lines crossing is wrong. Before drawing I fix this by checking if the QIX is on revealed pixel and if yes, I swap all values(explained later). So suggestions on how to check, which side of the line the QIX is on are welcomed too.\n-Then it sets the point that is supposedly not on the side of the QIX to 2 in a 2D array. I also have min and max x and y values of the 2D array, which at the start also equal the point's x and y.\n- After that it checks the 2D array from min to max values until there are no values that equal 2. If a value = 2, it checks if a line crosses it. If no, it sets the top, bottom, left and right points to 2, and changes min and max x and y values if needed. This one point that was checked is set to 1.\n\nThis way I have a 2D array with 0 for covered and 1 for uncovered points. 
And here is the code that does that:\n\n``````do //This is the code after I have a starting value in the 2D array - pointd\n{\nenough=1\nfor(for1=minx; for1<=maxx; for1+=1) //min and max x and y values of the points for checking\nfor(for2=miny; for2<=maxy; for2+=1) //so it doesn't check everything everytime\n{\nif(pointd[for1,for2]=2)for(aba=1;aba<=thln;aba+=1) //If it is 2, check it with every line\n{\nif(((for1>=thlposx[aba] and for1<=thlendx[aba]) or (for1<=thlposx[aba] and for1>=thlendx[aba]))\nand((for2>=thlposy[aba] and for2<=thlendy[aba]) or (for2<=thlposy[aba] and for2>=thlendy[aba])))\n{pointd[for1,for2]=1; aba=thln+1} //If it is on a line, set it to 1, to be drawn and stop checking\n}\nif(pointd[for1,for2]=2) //If it is not on a line, set adjacent point for checking\n{\nenough=0 //stop value to 0, continue with do until\npointd[for1,for2]=1 //set point to be drawn\nif(pointd[for1-1,for2]=0)pointd[for1-1,for2]=2 //set points to be checked\nif(pointd[for1+1,for2]=0)pointd[for1+1,for2]=2\nif(pointd[for1,for2-1]=0)pointd[for1,for2-1]=2\nif(pointd[for1,for2+1]=0)pointd[for1,for2+1]=2\nif(minx=for1 and for1>1)minx=for1-1 //if this is min or max x or y change it\nif(miny=for2 and for2>1)miny=for2-1\nif(maxx=for1 and for1<319)maxx=for1+1\nif(maxy=for2 and for2<374)maxy=for2+1\n}\n}\n}\nuntil enough=1\nif(pointd[objQIX.x,objQIX.y]=1) //if it got the wrong part of the screen (the wrong side of the line), change all\n{\nfor(aaaa=0; aaaa<=320; aaaa+=1)\nfor(aaaaa=0; aaaaa<=375; aaaaa+=1)\nif(pointd[aaaa,aaaaa]=1)pointd[aaaa,aaaaa]=0\nelse pointd[aaaa,aaaaa]=1\n}\n``````\n\nIt appears that the do until cycle spins less than 100 times in simple cases and 200-400 times for more complicated ones. The problem seems to be this cycle:\n\n``````if(pointd[for1,for2]=2)for(aba=1;aba<=thln;aba+=1) //If it is 2, check it with every line\n{\nif(((for1>=thlposx[aba] and for1<=thlendx[aba]) or (for1<=thlposx[aba] and for1>=thlendx[aba]))\nand((for2>=thlposy[aba] and for2<=thlendy[aba]) or (for2<=thlposy[aba] and for2>=thlendy[aba])))\n{pointd[for1,for2]=1; aba=thln+1} //If it is on a line, set it to 1, to be drawn and stop checking\n}\n``````\n\nThat for cycle does over 100,000 spins in most cases, and even over 1,000,000 for bigger areas.\n\nAny ideas or code on how to speed it up or change it to a more efficient algorithm are welcome. Thank You in advance!\n\nAre you running through every pixel at a time?\n\nHow about using a scaling algorithm rather? Let's say, each entry in your grid equals 4 real pixels. Then you already have 1/4 of the number of calculations to do. I don't think that on a game like that, you really …\n\nNot sure if the language supports it, but can't you use regions?\n\nBasically this. I know you're not using that, but that's the idea.\n\nThis is quite an interesting conundrum. What are the values of the maximum x and maximum y? I'm trying to get a grasp for the scope of the loops.\n\nIs that the area in pixels, or is the area from 2 - 320 by 2 - 374 pixels?\n\nThere's got to be a way to short-circuit the test algorithm :)\n\n## All 18 Replies\n\nAre you running through every pixel at a time?\n\nHow about using a scaling algorithm rather? Let's say, each entry in your grid equals 4 real pixels. Then you already have 1/4 of the number of calculations to do. I don't think that on a game like that, you really need to handle each pixel as a single block.\n\nThe original game was a text-mode game, so it worked on 80x25 characters! 
I guess things have changed a lot since then :)\n\nNot sure if the language supports it, but can't you use regions?\n\nThank You for the replies!\n\n@Ewald Horn\nI was thinking about that, but in some cases 1/4 is still a lot. I changed it to see how it will work out, but I am trying to get rid of few bugs that it brought up.\n\n@pritaeas\nWhat exactly do you mean by regions? I don't know what that is, sorry.\n\nBasically this. I know you're not using that, but that's the idea.\n\nSo I got rid of the bugs. It is faster, but it still takes 27 seconds for the whole field to be calculated. That is the time it takes on my laptop and it is meant to be a mobile game.\n\n@pritaeas\nI can use rectangles, but I don't see how I can do it with rectangles. The only thing that came to my mind is have each border, create a rectangle from itself to the opposite border. This would be really fast, but there is just one problem. I don't know how to get them all to check on the same side of the shape. If one creates rectangle in the zone with QIX and other in the zone without QIX, it will completely mess up.\n\nThis is quite an interesting conundrum. What are the values of the maximum x and maximum y? I'm trying to get a grasp for the scope of the loops.\n\nThe total area is 2,2,320,374 set by border lines.\n\nIs that the area in pixels, or is the area from 2 - 320 by 2 - 374 pixels?\n\nThere's got to be a way to short-circuit the test algorithm :)\n\nThose are the x and y coordinates of the rectangle formed by the border lines. The area starts at 2,2 and ends at 320,374. The whole room size is 320x480, but the calculations happen only for the area in the border lines.\n\nAh, that's great, so it's not such a large array at all. As you said, the problem is with the nested loops, there's a lot of expensive looping.\n\nThank You so much for the help!\n\nI thought about using rectangles before, but I could not come with a solution on how to make it exactly. Last night I came up with this and I think this should finally work.\n\n``````// s - starting point of path, e - end point\n//if one line - opposite of QIX\n//Else if on opposite walls\n{\n//if towards e(line direction) – away QIX(rectangle from line), towards s – towards QIX, don’t need the other 2\n// need to continue past collision with the parts of the lines that are not in collision.\n}\n//else\n{\n//if away s – towards e, towards s – away e , away e – away s, towards e – towards s\n}\n``````\n\nI have not tested it yet.\n\nLet us know, there's nothing like finding a solution to something that's been bugging you for a while :)\n\nGreat job on not giving up by the way, I admire that.\n\nI ran into a problem and I remembered that I saw that the first time I thought on how to use rectangles to do it.",
[attached image illustrating the problem]
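For reference, here is a minimal sketch of one way to combine the coarse-grid ("scaling") suggestion above with a flood fill that starts at the QIX (plain Python rather than GML; the grid layout and constant names are made up for illustration). Flooding outward from the QIX after each completed path means whatever the QIX cannot reach is the region to claim, which sidesteps the question of which side of the new lines is "inside":

```python
from collections import deque

CELL = 4                         # assumed scale: each grid cell covers a 4x4 block of real pixels
W, H = 320 // CELL, 376 // CELL  # play-field size from the thread, downscaled

# grid[x][y]: 0 = unclaimed, 1 = already filled, 2 = border or drawn line
def claim_after_move(grid, qix_cell):
    """Flood-fill from the QIX over unclaimed cells; claim everything it
    cannot reach and return how many cells were claimed."""
    seen = [[False] * H for _ in range(W)]
    qx, qy = qix_cell
    seen[qx][qy] = True
    queue = deque([(qx, qy)])
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
            if 0 <= nx < W and 0 <= ny < H and grid[nx][ny] == 0 and not seen[nx][ny]:
                seen[nx][ny] = True
                queue.append((nx, ny))
    claimed = 0
    for x in range(W):
        for y in range(H):
            if grid[x][y] == 0 and not seen[x][y]:
                grid[x][y] = 1      # claim the QIX-free region
                claimed += 1
    return claimed                  # feed this into the fill-percentage check
```

Each completed move costs a single pass over the coarse grid, so even a full clear is only a few thousand cell visits instead of hundreds of thousands of per-pixel line checks.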
"Oh, but there's more to fill, isn't there?\n\nyes\n\nHello,\n\nHave you tried the floodFill Algorythm to do the fill and area calc? I'm trying to make the same game, but mine will be on Construct2.\n\nSo far i'm still working in the main code (draw and fill the smaller area) the fastest way possible.\n\nSince Construct2 can be extended with javascript plugins, i left Construct2 for a while to mess on javascript code first.\n\nI'm still ignoring the enemies cause thats made on Construct2. For now i just want to draw.\n\nUsing floodFill algorythm i have managed to quickly check the whole array on both sides of the path (using last known path point and player movement direction to flood both sides).\n\nWorking demo here:\nhttp://luis.peralta.pt/sandbox/js/flood.html\n\nYou'll notice a slow fill, but thats because of the HUGE helper table that demo is using (the code sets a class for each cell to help me visualize what's happening in the array).\n\nMy next move will be get rid of the table and start drawing HTML canvas polys based on the vertices found on the array. But that's yet to think how to do.\n\nIn short, what that code do?\n\nAs you'll see, my 2D array uses the following values:\nU: unfilled\nF: filled\nP: path\nT1: temporary area being flooded at the same time area 2 is\nT2: temporary area being flooed at the same time area 1 is\n\nThe game starts with the perimeter set to F so the player can move safely around the array.\n\nAs soon the player gets out of the safe area (F) it starts drawing the path setting (P) as it goes. If the player inverts the direction, it starts removing P and setting U again (erasing the path).\n\nAn array with the cardeal points to where the player is moving is saved. So i can kill the player when he colides with the path without the exception mentioned above (move back).\n\nWhen the player reaches the F (safe/filled area), i check the direction the player was having and from the lastX and lastY (which will be the last path point) i run the floodFill algorythm on the pixels on each side.\n\nthe first flood will set T1 (temp area 1) on all adjacent U's of the first pixel. The second flood will set T2 (temp area 2) on all adjacent U's of the second pixel. While flooding i save the flood count. That will give me the area of both floods.\n\nThis process can have 3 possible endings:\n\nIf both starting pixels where U, the whole array will be flooded. and F + T1 + T2 = layout area. Then you just fill (set to F) the smaller Tx, or the area where Qix isn't.\n\nIf one of the pixels was F, only one flood will happen. In this case, where F + T1 + T2 != layout area, something is yet to temporary fill and calc. That means that the U's yet to calc are the area of the other Tx. More over, this temp fill is the one to pass from Tx to F.\n\nIf both pixels were already F, this is yet to code. ;) but will be just a check to get the first U, do the flood, and apply the previous process... (i think)\n\nFeel free to contact me if you have any doubt about that JS code (it's not a mess but might not be easy to read.. sorry).\n\nI didn't exactly understand everything you wrote, but I think you are doing what I have already done. I also used flood fill algorithm(see code in first post), but my area is much bigger and it takes too much time.\n\nP.S. I have new idea with rectangles, but I need to test it out to make sure there aren't any flaws, but I don't have the time right now.\n\nSo I have been struggling for 2 weeks to get my idea to work, but with no success. 
I am posting and hoping some good soul can see where I am making a mistake and help me fix my code.\n\nWhat my code does is when the player completes a shape, it takes the first line and creates a rectangle from that line in certain direction. The rectangle gets as big as it can in every direction until it hits a line. Then it checks each side of the rectangle and how that side fits with a line. If the line does not completely cover the rectangle side, the part of the rectangle side that is not covered, gets added as line to an array. After we are done with this rectangle, we get the next line in the array and create a rectangle for that one too. We do this until there are no more lines in the array.\n\nIt used to work, but it did not make correct rectangles. I went thought that code so many times, changed it so many times and I still can't get it to work. The last change I made even made it add lines to the array until the array gets too big and it shows error and crashes.\n\nHere is the latest code:\n\n``````if(draw_points=1)\n{\nllc=1\nllx1=thlposx[firstnew] //thl starting values are the drawn lines, all in the array\nlly1=thlposy[firstnew] //firstnew is the first drawn line for the last shape, the shape which will be checked\nllx2=thlendx[firstnew]\nlly2=thlendy[firstnew]\n//lld 1 <- 2 ^ 3 -> 4 v\n//if lld is 1 we have right line of rectangle, 2 - bottom, 3 - left, 4 - top\nif(thlposx[firstnew]=thlendx[firstnew])lld=1\nelse lld=2\nfor(ap=1; ap<=llc; ap+=1)\n{\nrectx1=0; recty1=0; rectx2=322; recty2=375 //Border lines form rectangle with coordinates 2,2,320,374\nif(lld[ap]=1){recty1=min(lly1[ap],lly2[ap]); rectx2=llx2[ap]; recty2=max(lly1[ap],lly2[ap]);}\nelse if(lld[ap]=2){rectx1=min(llx1[ap],llx2[ap]); recty2=lly2[ap]; rectx2=max(llx1[ap],llx2[ap])}\nelse if(lld[ap]=3){recty1=min(lly1[ap],lly2[ap]); rectx1=llx1[ap]; recty2=max(lly1[ap],lly2[ap]);}\nelse if(lld[ap]=4){rectx1=min(llx1[ap],llx2[ap]); recty1=lly1[ap]; rectx2=max(llx1[ap],llx2[ap])}\nvallld=lld[ap]\nif(vallld=1 or vallld=3) //Have x2 or x1\n{\nif(vallld=1)for(aba=1;aba<=thln;aba+=1) //Find x1\n{\nif(thlposx[aba]=thlendx[aba] and thlposx[aba]>rectx1 and thlposx[aba]<rectx2 and (\n(min(thlposy[aba],thlendy[aba])>=min(recty1,recty2) and min(thlposy[aba],thlendy[aba])<max(recty1,recty2))\nor (max(thlposy[aba],thlendy[aba])>min(recty1,recty2) and max(thlposy[aba],thlendy[aba])<=max(recty1,recty2)))){rectx1=thlposx[aba]; aban=aba}\n}\nif(vallld=3)for(aba=1;aba<=thln;aba+=1) //Find x2\n{\nif(thlposx[aba]=thlendx[aba] and thlposx[aba]>rectx1 and thlposx[aba]<rectx2 and (\n(min(thlposy[aba],thlendy[aba])>=min(recty1,recty2) and min(thlposy[aba],thlendy[aba])<max(recty1,recty2))\nor (max(thlposy[aba],thlendy[aba])>min(recty1,recty2) and max(thlposy[aba],thlendy[aba])<=max(recty1,recty2)))){rectx2=thlposx[aba]; aban=aba}\n}\nif(rectx1!=0 and rectx2!=322)\n{\nif!(min(thlposy[aban],thlendy[aban])<=min(recty1,recty2) and max(thlposy[aban],thlendy[aban])>=max(recty1,recty2))\n{\nif(min(thlposy[aban],thlendy[aban])<=min(recty1,recty2) and max(thlposy[aban],thlendy[aban])<max(recty1,recty2) and max(thlposy[aban],thlendy[aban])>min(recty1,recty2))\n{ llc+=1; llx1[llc]=thlposx[aban]; lly1[llc]=max(thlposy[aban],thlendy[aban]); llx2[llc]=thlposx[aban]; lly2[llc]=max(recty1,recty2); lld[llc]=vallld}\nelse if(min(thlposy[aban],thlendy[aban])>min(recty1,recty2) and min(thlposy[aban],thlendy[aban])<max(recty1,recty2) and max(thlposy[aban],thlendy[aban])>=max(recty1,recty2))\n{ llc+=1; llx1[llc]=thlposx[aban]; lly1[llc]=min(recty1,recty2); 
llx2[llc]=thlposx[aban]; lly2[llc]=min(thlposy[aban],thlendy[aban]); lld[llc]=vallld}\nelse if(min(thlposy[aban],thlendy[aban])>min(recty1,recty2) and max(thlposy[aban],thlendy[aban])<max(recty1,recty2))\n{llc+=1; llx1[llc]=thlposx[aban]; lly1[llc]=min(recty1,recty2); llx2[llc]=thlposx[aban]; lly2[llc]=min(thlposy[aban],thlendy[aban]); lld[llc]=vallld\nllc+=1; llx1[llc]=thlposx[aban]; lly1[llc]=max(thlposy[aban],thlendy[aban]); llx2[llc]=thlposx[aban]; lly2[llc]=max(recty1,recty2); lld[llc]=vallld}\nelse {llc+=1; llx1[llc]=thlposx[aban]; lly1[llc]=min(recty1,recty2); llx2[llc]=thlposx[aban]; lly2[llc]=max(recty1,recty2); lld[llc]=vallld}\n}\nfor(aba=1;aba<=thln;aba+=1) //This should check how rectangle line fits drawn lines\n{\nif(thlposy[aba]=thlendy[aba] and thlposy[aba]=recty1 and (min(rectx1,rectx2)<max(thlposx[aba],thlendx[aba]) and max(rectx1,rectx2)>min(thlposx[aba],thlendx[aba])) or (min(rectx1,rectx2)<max(thlposx[aba],thlendx[aba]) and max(rectx1,rectx2)>=max(thlposx[aba],thlendx[aba])))\n{\nif!(min(thlposx[aba],thlendx[aba])<=min(rectx1,rectx2) and max(thlposx[aba],thlendx[aba])>=max(rectx1,rectx2))\n{ //And create a new rectangle to be checked if it is not perfect\nif(min(thlposx[aba],thlendx[aba])<=min(rectx1,rectx2) and max(thlposx[aba],thlendx[aba])<max(rectx1,rectx2) and max(thlposx[aba],thlendx[aba])>min(rectx1,rectx2))\n{llc+=1; llx1[llc]=max(thlposx[aba],thlendx[aba]); lly1[llc]=thlposy[aba]; llx2[llc]=max(rectx1,rectx2); lly2[llc]=thlposy[aba]; lld[llc]=2}\nelse if(min(thlposx[aba],thlendx[aba])>min(rectx1,rectx2) and min(thlposx[aba],thlendx[aba])<max(rectx1,rectx2) and max(thlposx[aba],thlendx[aba])>=max(rectx1,rectx2))\n{llc+=1; llx1[llc]=min(rectx1,rectx2); lly1[llc]=thlposy[aba]; llx2[llc]=min(thlposx[aba],thlendx[aba]); lly2[llc]=thlposy[aba]; lld[llc]=2}\nelse if(min(thlposx[aba],thlendx[aba])>min(rectx1,rectx2) and max(thlposx[aba],thlendx[aba])<max(rectx1,rectx2))\n{llc+=1; llx1[llc]=min(rectx1,rectx2); lly1[llc]=thlposy[aba]; llx2[llc]=min(thlposx[aba],thlendx[aba]); lly2[llc]=thlposy[aba]; lld[llc]=2\nllc+=1; llx1[llc]=max(thlposx[aba],thlendx[aba]); lly1[llc]=thlposy[aba]; llx2[llc]=max(rectx1,rectx2); lly2[llc]=thlposy[aba]; lld[llc]=2}\nelse {llc+=1; llx1[llc]=min(rectx1,rectx2); lly1[llc]=thlposy[aba]; llx2[llc]=max(rectx1,rectx2); lly2[llc]=thlposy[aba]; lld[llc]=2}\n}\n}\n}\nfor(aba=1;aba<=thln;aba+=1) //This should check how rectangle line fits drawn lines\n{\nif(thlposy[aba]=thlendy[aba] and thlposy[aba]=recty2 and (min(rectx1,rectx2)<max(thlposx[aba],thlendx[aba]) and max(rectx1,rectx2)>min(thlposx[aba],thlendx[aba])) or (min(rectx1,rectx2)<max(thlposx[aba],thlendx[aba]) and max(rectx1,rectx2)>=max(thlposx[aba],thlendx[aba])))\n{\nif!(min(thlposx[aba],thlendx[aba])<=min(rectx1,rectx2) and max(thlposx[aba],thlendx[aba])>=max(rectx1,rectx2))\n{ //And create a new rectangle to be checked if it is not perfect\nif(min(thlposx[aba],thlendx[aba])<=min(rectx1,rectx2) and max(thlposx[aba],thlendx[aba])<max(rectx1,rectx2) and max(thlposx[aba],thlendx[aba])>min(rectx1,rectx2))\n{llc+=1; llx1[llc]=max(thlposx[aba],thlendx[aba]); lly1[llc]=thlposy[aba]; llx2[llc]=max(rectx1,rectx2); lly2[llc]=thlposy[aba]; lld[llc]=4}\nelse if(min(thlposx[aba],thlendx[aba])>min(rectx1,rectx2) and min(thlposx[aba],thlendx[aba])<max(rectx1,rectx2) and max(thlposx[aba],thlendx[aba])>=max(rectx1,rectx2))\n{llc+=1; llx1[llc]=min(rectx1,rectx2); lly1[llc]=thlposy[aba]; llx2[llc]=min(thlposx[aba],thlendx[aba]); lly2[llc]=thlposy[aba]; lld[llc]=4}\nelse 
if(min(thlposx[aba],thlendx[aba])>min(rectx1,rectx2) and max(thlposx[aba],thlendx[aba])<max(rectx1,rectx2))\n{llc+=1; llx1[llc]=min(rectx1,rectx2); lly1[llc]=thlposy[aba]; llx2[llc]=min(thlposx[aba],thlendx[aba]); lly2[llc]=thlposy[aba]; lld[llc]=4\nllc+=1; llx1[llc]=max(thlposx[aba],thlendx[aba]); lly1[llc]=thlposy[aba]; llx2[llc]=max(rectx1,rectx2); lly2[llc]=thlposy[aba]; lld[llc]=4}\nelse {llc+=1; llx1[llc]=min(rectx1,rectx2); lly1[llc]=thlposy[aba]; llx2[llc]=max(rectx1,rectx2); lly2[llc]=thlposy[aba]; lld[llc]=4}\n}\n}\n}\n}\n}\nelse if(vallld=2 or vallld=4) //Have y2 or y1\n{\nif(vallld=2)for(aba=1;aba<=thln;aba+=1) //Find y1\n{\nif(thlposy[aba]=thlendy[aba] and thlposy[aba]>recty1 and thlposy[aba]<recty2 and (\n(min(thlposx[aba],thlendx[aba])>=min(rectx1,rectx2) and min(thlposx[aba],thlendx[aba])<max(rectx1,rectx2))\nor (max(thlposx[aba],thlendx[aba])>min(rectx1,rectx2) and max(thlposx[aba],thlendx[aba])<=max(rectx1,rectx2)))){recty1=thlposy[aba]; aban=aba}\n}\nif(vallld=4)for(aba=1;aba<=thln;aba+=1) //Find y2\n{\nif(thlposy[aba]=thlendy[aba] and thlposy[aba]>recty1 and thlposy[aba]<recty2 and (\n(min(thlposx[aba],thlendx[aba])>=min(rectx1,rectx2) and min(thlposx[aba],thlendx[aba])<max(rectx1,rectx2))\nor (max(thlposx[aba],thlendx[aba])>min(rectx1,rectx2) and max(thlposx[aba],thlendx[aba])<=max(rectx1,rectx2)))){recty2=thlposy[aba]; aban=aba}\n}\nif(recty1!=0 and recty2!=375)\n{\nif!(min(thlposx[aban],thlendx[aban])<=min(rectx1,rectx2) and max(thlposx[aban],thlendx[aban])>=max(rectx1,rectx2))\n{\nif(min(thlposx[aban],thlendx[aban])<=min(rectx1,rectx2) and max(thlposx[aban],thlendx[aban])<max(rectx1,rectx2) and max(thlposx[aban],thlendx[aban])>min(rectx1,rectx2))\n{llc+=1; llx1[llc]=max(thlposx[aban],thlendx[aban]); lly1[llc]=thlposy[aban]; llx2[llc]=max(rectx1,rectx2); lly2[llc]=thlposy[aban]; lld[llc]=vallld}\nelse if(min(thlposx[aban],thlendx[aban])>min(rectx1,rectx2) and min(thlposx[aban],thlendx[aban])<max(rectx1,rectx2) and max(thlposx[aban],thlendx[aban])>=max(rectx1,rectx2))\n{llc+=1; llx1[llc]=min(rectx1,rectx2); lly1[llc]=thlposy[aban]; llx2[llc]=min(thlposx[aban],thlendx[aban]); lly2[llc]=thlposy[aban]; lld[llc]=vallld}\nelse if(min(thlposx[aban],thlendx[aban])>min(rectx1,rectx2) and max(thlposx[aban],thlendx[aban])<max(rectx1,rectx2))\n{llc+=1; llx1[llc]=min(rectx1,rectx2); lly1[llc]=thlposy[aban]; llx2[llc]=min(thlposx[aban],thlendx[aban]); lly2[llc]=thlposy[aban]; lld[llc]=vallld\nllc+=1; llx1[llc]=max(thlposx[aban],thlendx[aban]); lly1[llc]=thlposy[aban]; llx2[llc]=max(rectx1,rectx2); lly2[llc]=thlposy[aban]; lld[llc]=vallld}\nelse {llc+=1; llx1[llc]=min(rectx1,rectx2); lly1[llc]=thlposy[aban]; llx2[llc]=max(rectx1,rectx2); lly2[llc]=thlposy[aban]; lld[llc]=vallld}\n}\nfor(aba=1;aba<=thln;aba+=1)\n{\nif(thlposx[aba]=thlendx[aba] and thlposx[aba]=rectx1 and (min(recty1,recty2)<max(thlposy[aba],thlendy[aba]) and max(recty1,recty2)>min(thlposy[aba],thlendy[aba])) or (min(recty1,recty2)<max(thlposy[aba],thlendy[aba]) and max(recty1,recty2)>=max(thlposy[aba],thlendy[aba])))\n{\nif!(min(thlposy[aba],thlendy[aba])<=min(recty1,recty2) and max(thlposy[aba],thlendy[aba])>=max(recty1,recty2))\n{\nif(min(thlposy[aba],thlendy[aba])<=min(recty1,recty2) and max(thlposy[aba],thlendy[aba])<max(recty1,recty2) and max(thlposy[aba],thlendy[aba])>min(recty1,recty2))\n{llc+=1; llx1[llc]=thlposx[aba]; lly1[llc]=max(thlposy[aba],thlendy[aba]); llx2[llc]=thlposx[aba]; lly2[llc]=max(recty1,recty2); lld[llc]=1}\nelse if(min(thlposy[aba],thlendy[aba])>min(recty1,recty2) and 
min(thlposy[aba],thlendy[aba])<max(recty1,recty2) and max(thlposy[aba],thlendy[aba])>=max(recty1,recty2))\n{llc+=1; llx1[llc]=thlposx[aba]; lly1[llc]=min(recty1,recty2); llx2[llc]=thlposx[aba]; lly2[llc]=min(thlposy[aba],thlendy[aba]); lld[llc]=1}\nelse if(min(thlposy[aba],thlendy[aba])>min(recty1,recty2) and max(thlposy[aba],thlendy[aba])<max(recty1,recty2))\n{llc+=1; llx1[llc]=thlposx[aba]; lly1[llc]=min(recty1,recty2); llx2[llc]=thlposx[aba]; lly2[llc]=min(thlposy[aba],thlendy[aba]); lld[llc]=1\nllc+=1; llx1[llc]=thlposx[aba]; lly1[llc]=max(thlposy[aba],thlendy[aba]); llx2[llc]=thlposx[aba]; lly2[llc]=max(recty1,recty2); lld[llc]=1}\nelse {llc+=1; llx1[llc]=thlposx[aba]; lly1[llc]=min(recty1,recty2); llx2[llc]=thlposx[aba]; lly2[llc]=max(recty1,recty2); lld[llc]=1}\n}\n}\n}\nfor(aba=1;aba<=thln;aba+=1)\n{\nif(thlposx[aba]=thlendx[aba] and thlposx[aba]=rectx2 and (min(recty1,recty2)<max(thlposy[aba],thlendy[aba]) and max(recty1,recty2)>min(thlposy[aba],thlendy[aba])) or (min(recty1,recty2)<max(thlposy[aba],thlendy[aba]) and max(recty1,recty2)>=max(thlposy[aba],thlendy[aba])))\n{\nif!(min(thlposy[aba],thlendy[aba])<=min(recty1,recty2) and max(thlposy[aba],thlendy[aba])>=max(recty1,recty2))\n{\nif(min(thlposy[aba],thlendy[aba])<=min(recty1,recty2) and max(thlposy[aba],thlendy[aba])<max(recty1,recty2) and max(thlposy[aba],thlendy[aba])>min(recty1,recty2))\n{llc+=1; llx1[llc]=thlposx[aba]; lly1[llc]=max(thlposy[aba],thlendy[aba]); llx2[llc]=thlposx[aba]; lly2[llc]=max(recty1,recty2); lld[llc]=3}\nelse if(min(thlposy[aba],thlendy[aba])>min(recty1,recty2) and min(thlposy[aba],thlendy[aba])<max(recty1,recty2) and max(thlposy[aba],thlendy[aba])>=max(recty1,recty2))\n{llc+=1; llx1[llc]=thlposx[aba]; lly1[llc]=min(recty1,recty2); llx2[llc]=thlposx[aba]; lly2[llc]=min(thlposy[aba],thlendy[aba]); lld[llc]=3}\nelse if(min(thlposy[aba],thlendy[aba])>min(recty1,recty2) and max(thlposy[aba],thlendy[aba])<max(recty1,recty2))\n{llc+=1; llx1[llc]=thlposx[aba]; lly1[llc]=min(recty1,recty2); llx2[llc]=thlposx[aba]; lly2[llc]=min(thlposy[aba],thlendy[aba]); lld[llc]=3\nllc+=1; llx1[llc]=thlposx[aba]; lly1[llc]=max(thlposy[aba],thlendy[aba]); llx2[llc]=thlposx[aba]; lly2[llc]=max(recty1,recty2); lld[llc]=3}\nelse {llc+=1; llx1[llc]=thlposx[aba]; lly1[llc]=min(recty1,recty2); llx2[llc]=thlposx[aba]; lly2[llc]=max(recty1,recty2); lld[llc]=3}\n}\n}\n}\n}\n//End of rectangle creating\n``````\nBe a part of the DaniWeb community\n\nWe're a friendly, industry-focused community of 1.20 million developers, IT pros, digital marketers, and technology enthusiasts learning and sharing knowledge."
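The flood-both-sides approach described earlier in the thread can be illustrated with a small, self-contained sketch. This is not the poster's JavaScript demo or the Construct2 code; it is a minimal Python illustration under stated assumptions: the playfield is a list-of-lists grid holding 'U' (unfilled), 'F' (filled) and 'P' (path) cells, the function names `flood_count` and `close_shape` are made up, and the two seed cells are hypothetical values that, in a real game, would come from the last path point and the player's movement direction.

```python
# Minimal sketch of "flood both sides of the finished path, keep the smaller area".
# Grid cells: 'U' = unfilled, 'F' = filled, 'P' = path (matching the thread's values).
from collections import deque

def flood_count(grid, start, mark):
    """Breadth-first flood fill from `start`: tag reachable 'U' cells with `mark`
    and return how many were tagged (the region's area in cells)."""
    rows, cols = len(grid), len(grid[0])
    r0, c0 = start
    if grid[r0][c0] != 'U':
        return 0                      # seed is already filled or on the path
    grid[r0][c0] = mark
    count, queue = 1, deque([start])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 'U':
                grid[nr][nc] = mark
                count += 1
                queue.append((nr, nc))
    return count

def close_shape(grid, seed_a, seed_b):
    """Flood the two sides of the just-completed path, then permanently fill the
    smaller flooded region (in the full game: the side the Qix is not on) plus
    the path itself. The case where both seeds are already 'F' is not handled,
    just as it is not yet handled in the thread."""
    area_a = flood_count(grid, seed_a, 'T1')
    area_b = flood_count(grid, seed_b, 'T2')
    keep_filled = 'T1' if (area_a and (not area_b or area_a <= area_b)) else 'T2'
    claimed = 0
    for row in grid:
        for c, cell in enumerate(row):
            if cell == keep_filled or cell == 'P':
                row[c] = 'F'          # claim the smaller region and the path
                claimed += 1
            elif cell in ('T1', 'T2'):
                row[c] = 'U'          # the larger region stays open
    return claimed
```

On a playfield of a few hundred cells per side this finishes in a few milliseconds at most; the slowness seen in the linked demo comes from updating a DOM table cell by cell for visualization, as the poster notes, not from the flood fill itself.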
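The second poster's complaint is that cell-by-cell flood filling becomes too slow on a much bigger playfield. One alternative worth noting, which neither poster uses, is to track only the corner vertices of the two candidate regions and compare their areas with the shoelace formula; the area then costs one pass over the vertex list instead of one visit per cell. A hedged sketch, with the function name and the example rectangle purely illustrative:

```python
# Shoelace (Gauss) formula: area of a simple polygon from its ordered vertices.
def polygon_area(vertices):
    """Return the area enclosed by `vertices`, a list of (x, y) corner points
    given in order around the polygon (clockwise or counter-clockwise)."""
    total = 0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        total += x1 * y2 - x2 * y1
    return abs(total) / 2

# A 4 x 3 axis-aligned rectangle has area 12.
print(polygon_area([(0, 0), (4, 0), (4, 3), (0, 3)]))  # 12.0
```

Only the smaller region then needs to be rendered as a filled polygon, which also fits the first poster's plan of drawing canvas polygons from the vertices instead of keeping a per-cell table.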
] | [
null,
"https://static.daniweb.com/attachments/4/ee79fb47b58a1cc436fb26c8e9d2bd04.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.87159204,"math_prob":0.9850967,"size":4434,"snap":"2020-45-2020-50","text_gpt3_token_len":1439,"char_repetition_ratio":0.13656884,"word_repetition_ratio":0.12760055,"special_character_ratio":0.2947677,"punctuation_ratio":0.09860558,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9863222,"pos_list":[0,1,2],"im_url_duplicate_count":[null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-26T23:41:12Z\",\"WARC-Record-ID\":\"<urn:uuid:3a9862e1-2069-4613-9589-67c37a4c3351>\",\"Content-Length\":\"144270\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a70b7cc6-a193-4214-b583-fd33ac78dcca>\",\"WARC-Concurrent-To\":\"<urn:uuid:41cc8803-d2aa-47f1-a611-29c020d7121a>\",\"WARC-IP-Address\":\"104.22.5.5\",\"WARC-Target-URI\":\"https://www.daniweb.com/programming/game-development/threads/471199/retro-game-qix-logic\",\"WARC-Payload-Digest\":\"sha1:ZISPNEY4S7OB57JOUOILET3VCL4LIDTE\",\"WARC-Block-Digest\":\"sha1:ETYWEKMQRCNTB4OAZ5YR2QTCZUGHJR5O\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141189030.27_warc_CC-MAIN-20201126230216-20201127020216-00009.warc.gz\"}"} |
https://www.directhardwaresupply.com/store/page/30/?filter_product-group=memory&query_type_product-group=or | [
"Showing 465–480 of 897 results\n\n'; for (i = 0; i < msgs.length; i++) { alert_box += '' + msgs[i] + '\n'; if(form != '') { form.closest('form').append(alert_box); } else { \\$('#form-5e8136f606281').append(alert_box); } } function addConditionClass(field_id, cond_class, form_fields_wrapper) { \\$(field_id).each(function(){ if (\\$(this).is(':input') || \\$(this).is('select')) \\$(this).addClass('cond_filler_'+cond_class); \\$(this).children().each(function(){ addConditionClass(\\$(this), cond_class, form_fields_wrapper); }) }); return false; } function compareRule(objs, cmp_operator, cmp_value, cmp_id, \\$form_part_0) { var comp_res = false; var areOperandsCb = false; // Stores true if both operands are checkboxes. switch(cmp_operator) { case 'is': if (cmp_value.startsWith('Checkbox_')) { test = objs.closest('#form_part_0').find('#'+cmp_value+' :input:checked'); areOperandsCb = cmp_id.startsWith('Checkbox_') ? true : false; if (areOperandsCb && objs.length != test.length) { break; } } else { test = objs.closest('#form_part_0').find('#'+cmp_value+' :input'); } \\$(objs).each(function(){ if (areOperandsCb) { comp_res = false; } \\$cmp1 = \\$(this).val(); \\$(test).each(function(){ \\$cmp2 = \\$(this).val(); if (\\$cmp1 == \\$cmp2) { comp_res = true; if (!areOperandsCb) { return; } } }); if (areOperandsCb && false == comp_res) { return; } }); break; case 'is-not': if (cmp_value.startsWith('Checkbox_')) { test = \\$form_part_0.find('#'+cmp_value+' :input:checked'); areOperandsCb = cmp_id.startsWith('Checkbox_') ? true : false; if (areOperandsCb && objs.length != test.length) { return true; } } else { test = objs.closest('#form_part_0').find('#'+cmp_value+' :input'); } for(let objsElement of objs) { comp_res = false; \\$cmp1 = \\$(objsElement).val(); for(let testElement of test) { \\$cmp2 = \\$(testElement).val(); if (\\$cmp1 != \\$cmp2) { comp_res = true; // return; } else if(areOperandsCb) { comp_res = false; break; } } if(areOperandsCb && true == comp_res) { break; } } break; case 'less-than': \\$(objs).each(function(){ // Return if current element is non-relevant input field inside 'Rating' field. if ('undefined' != typeof \\$(this).attr('id') && 'Rating_' != \\$(this).attr('id').match(/^Rating_/) && \\$(this).closest('div[id^=Rating_]')) { return; } // if cmp_value is number, convert it into number type data. if (!isNaN(cmp_value)) { cmp_value = Number(cmp_value); } if (\\$(this).val() < cmp_value) { comp_res = true; return; } }); break; case 'greater-than': \\$(objs).each(function(){ // if cmp_value is number, convert it into number type data. if (!isNaN(cmp_value)) { cmp_value = Number(cmp_value); } if (\\$(this).val() > cmp_value) { comp_res = true; return; } }); break; case 'starts-with': \\$(objs).each(function(){ if (\\$(this).val().indexOf(cmp_value) == 0) { comp_res = true; return; } }); break; case 'contains': \\$(objs).each(function(){ if (\\$(this).val().indexOf(cmp_value) != -1) { comp_res = true; return; } }); break; case 'ends-with': \\$(objs).each(function(){ indexPoint = (\\$(this).val().length - cmp_value.length); if (indexPoint >=0 && \\$(this).val().indexOf(cmp_value, indexPoint) == indexPoint) { comp_res = true; return; } }); break; default: comp_res = false; break; } return comp_res; } function applyRule(field_id) { \\$('.cond_filler_'+field_id).each(function(){ var this_conditions = \\$('#'+field_id).attr('data-cond-fields').split('|'); var this_action = \\$('#'+field_id).attr('data-cond-action').split(':'); var cmp_res = this_action == 'all' ? 
true : false; for (i=0 ; i\n'; for (i = 0; i < msgs.length; i++) { alert_box += '' + msgs[i] + '\n'; if(form != '') { form.closest('form').append(alert_box); } else { \\$('#form-5e8136f607c27').append(alert_box); } } function addConditionClass(field_id, cond_class, form_fields_wrapper) { \\$(field_id).each(function(){ if (\\$(this).is(':input') || \\$(this).is('select')) \\$(this).addClass('cond_filler_'+cond_class); \\$(this).children().each(function(){ addConditionClass(\\$(this), cond_class, form_fields_wrapper); }) }); return false; } function compareRule(objs, cmp_operator, cmp_value, cmp_id, \\$form_part_0) { var comp_res = false; var areOperandsCb = false; // Stores true if both operands are checkboxes. switch(cmp_operator) { case 'is': if (cmp_value.startsWith('Checkbox_')) { test = objs.closest('#form_part_0').find('#'+cmp_value+' :input:checked'); areOperandsCb = cmp_id.startsWith('Checkbox_') ? true : false; if (areOperandsCb && objs.length != test.length) { break; } } else { test = objs.closest('#form_part_0').find('#'+cmp_value+' :input'); } \\$(objs).each(function(){ if (areOperandsCb) { comp_res = false; } \\$cmp1 = \\$(this).val(); \\$(test).each(function(){ \\$cmp2 = \\$(this).val(); if (\\$cmp1 == \\$cmp2) { comp_res = true; if (!areOperandsCb) { return; } } }); if (areOperandsCb && false == comp_res) { return; } }); break; case 'is-not': if (cmp_value.startsWith('Checkbox_')) { test = \\$form_part_0.find('#'+cmp_value+' :input:checked'); areOperandsCb = cmp_id.startsWith('Checkbox_') ? true : false; if (areOperandsCb && objs.length != test.length) { return true; } } else { test = objs.closest('#form_part_0').find('#'+cmp_value+' :input'); } for(let objsElement of objs) { comp_res = false; \\$cmp1 = \\$(objsElement).val(); for(let testElement of test) { \\$cmp2 = \\$(testElement).val(); if (\\$cmp1 != \\$cmp2) { comp_res = true; // return; } else if(areOperandsCb) { comp_res = false; break; } } if(areOperandsCb && true == comp_res) { break; } } break; case 'less-than': \\$(objs).each(function(){ // Return if current element is non-relevant input field inside 'Rating' field. if ('undefined' != typeof \\$(this).attr('id') && 'Rating_' != \\$(this).attr('id').match(/^Rating_/) && \\$(this).closest('div[id^=Rating_]')) { return; } // if cmp_value is number, convert it into number type data. if (!isNaN(cmp_value)) { cmp_value = Number(cmp_value); } if (\\$(this).val() < cmp_value) { comp_res = true; return; } }); break; case 'greater-than': \\$(objs).each(function(){ // if cmp_value is number, convert it into number type data. if (!isNaN(cmp_value)) { cmp_value = Number(cmp_value); } if (\\$(this).val() > cmp_value) { comp_res = true; return; } }); break; case 'starts-with': \\$(objs).each(function(){ if (\\$(this).val().indexOf(cmp_value) == 0) { comp_res = true; return; } }); break; case 'contains': \\$(objs).each(function(){ if (\\$(this).val().indexOf(cmp_value) != -1) { comp_res = true; return; } }); break; case 'ends-with': \\$(objs).each(function(){ indexPoint = (\\$(this).val().length - cmp_value.length); if (indexPoint >=0 && \\$(this).val().indexOf(cmp_value, indexPoint) == indexPoint) { comp_res = true; return; } }); break; default: comp_res = false; break; } return comp_res; } function applyRule(field_id) { \\$('.cond_filler_'+field_id).each(function(){ var this_conditions = \\$('#'+field_id).attr('data-cond-fields').split('|'); var this_action = \\$('#'+field_id).attr('data-cond-action').split(':'); var cmp_res = this_action == 'all' ? 
true : false; for (i=0 ; i\n'; for (i = 0; i < msgs.length; i++) { alert_box += '' + msgs[i] + '\n'; if(form != '') { form.closest('form').append(alert_box); } else { \\$('#form-5e8136f609619').append(alert_box); } } function addConditionClass(field_id, cond_class, form_fields_wrapper) { \\$(field_id).each(function(){ if (\\$(this).is(':input') || \\$(this).is('select')) \\$(this).addClass('cond_filler_'+cond_class); \\$(this).children().each(function(){ addConditionClass(\\$(this), cond_class, form_fields_wrapper); }) }); return false; } function compareRule(objs, cmp_operator, cmp_value, cmp_id, \\$form_part_0) { var comp_res = false; var areOperandsCb = false; // Stores true if both operands are checkboxes. switch(cmp_operator) { case 'is': if (cmp_value.startsWith('Checkbox_')) { test = objs.closest('#form_part_0').find('#'+cmp_value+' :input:checked'); areOperandsCb = cmp_id.startsWith('Checkbox_') ? true : false; if (areOperandsCb && objs.length != test.length) { break; } } else { test = objs.closest('#form_part_0').find('#'+cmp_value+' :input'); } \\$(objs).each(function(){ if (areOperandsCb) { comp_res = false; } \\$cmp1 = \\$(this).val(); \\$(test).each(function(){ \\$cmp2 = \\$(this).val(); if (\\$cmp1 == \\$cmp2) { comp_res = true; if (!areOperandsCb) { return; } } }); if (areOperandsCb && false == comp_res) { return; } }); break; case 'is-not': if (cmp_value.startsWith('Checkbox_')) { test = \\$form_part_0.find('#'+cmp_value+' :input:checked'); areOperandsCb = cmp_id.startsWith('Checkbox_') ? true : false; if (areOperandsCb && objs.length != test.length) { return true; } } else { test = objs.closest('#form_part_0').find('#'+cmp_value+' :input'); } for(let objsElement of objs) { comp_res = false; \\$cmp1 = \\$(objsElement).val(); for(let testElement of test) { \\$cmp2 = \\$(testElement).val(); if (\\$cmp1 != \\$cmp2) { comp_res = true; // return; } else if(areOperandsCb) { comp_res = false; break; } } if(areOperandsCb && true == comp_res) { break; } } break; case 'less-than': \\$(objs).each(function(){ // Return if current element is non-relevant input field inside 'Rating' field. if ('undefined' != typeof \\$(this).attr('id') && 'Rating_' != \\$(this).attr('id').match(/^Rating_/) && \\$(this).closest('div[id^=Rating_]')) { return; } // if cmp_value is number, convert it into number type data. if (!isNaN(cmp_value)) { cmp_value = Number(cmp_value); } if (\\$(this).val() < cmp_value) { comp_res = true; return; } }); break; case 'greater-than': \\$(objs).each(function(){ // if cmp_value is number, convert it into number type data. if (!isNaN(cmp_value)) { cmp_value = Number(cmp_value); } if (\\$(this).val() > cmp_value) { comp_res = true; return; } }); break; case 'starts-with': \\$(objs).each(function(){ if (\\$(this).val().indexOf(cmp_value) == 0) { comp_res = true; return; } }); break; case 'contains': \\$(objs).each(function(){ if (\\$(this).val().indexOf(cmp_value) != -1) { comp_res = true; return; } }); break; case 'ends-with': \\$(objs).each(function(){ indexPoint = (\\$(this).val().length - cmp_value.length); if (indexPoint >=0 && \\$(this).val().indexOf(cmp_value, indexPoint) == indexPoint) { comp_res = true; return; } }); break; default: comp_res = false; break; } return comp_res; } function applyRule(field_id) { \\$('.cond_filler_'+field_id).each(function(){ var this_conditions = \\$('#'+field_id).attr('data-cond-fields').split('|'); var this_action = \\$('#'+field_id).attr('data-cond-action').split(':'); var cmp_res = this_action == 'all' ? 
true : false; for (i=0 ; i\n'; for (i = 0; i < msgs.length; i++) { alert_box += '' + msgs[i] + '\n'; if(form != '') { form.closest('form').append(alert_box); } else { \\$('#form-5e8136f60ad68').append(alert_box); } } function addConditionClass(field_id, cond_class, form_fields_wrapper) { \\$(field_id).each(function(){ if (\\$(this).is(':input') || \\$(this).is('select')) \\$(this).addClass('cond_filler_'+cond_class); \\$(this).children().each(function(){ addConditionClass(\\$(this), cond_class, form_fields_wrapper); }) }); return false; } function compareRule(objs, cmp_operator, cmp_value, cmp_id, \\$form_part_0) { var comp_res = false; var areOperandsCb = false; // Stores true if both operands are checkboxes. switch(cmp_operator) { case 'is': if (cmp_value.startsWith('Checkbox_')) { test = objs.closest('#form_part_0').find('#'+cmp_value+' :input:checked'); areOperandsCb = cmp_id.startsWith('Checkbox_') ? true : false; if (areOperandsCb && objs.length != test.length) { break; } } else { test = objs.closest('#form_part_0').find('#'+cmp_value+' :input'); } \\$(objs).each(function(){ if (areOperandsCb) { comp_res = false; } \\$cmp1 = \\$(this).val(); \\$(test).each(function(){ \\$cmp2 = \\$(this).val(); if (\\$cmp1 == \\$cmp2) { comp_res = true; if (!areOperandsCb) { return; } } }); if (areOperandsCb && false == comp_res) { return; } }); break; case 'is-not': if (cmp_value.startsWith('Checkbox_')) { test = \\$form_part_0.find('#'+cmp_value+' :input:checked'); areOperandsCb = cmp_id.startsWith('Checkbox_') ? true : false; if (areOperandsCb && objs.length != test.length) { return true; } } else { test = objs.closest('#form_part_0').find('#'+cmp_value+' :input'); } for(let objsElement of objs) { comp_res = false; \\$cmp1 = \\$(objsElement).val(); for(let testElement of test) { \\$cmp2 = \\$(testElement).val(); if (\\$cmp1 != \\$cmp2) { comp_res = true; // return; } else if(areOperandsCb) { comp_res = false; break; } } if(areOperandsCb && true == comp_res) { break; } } break; case 'less-than': \\$(objs).each(function(){ // Return if current element is non-relevant input field inside 'Rating' field. if ('undefined' != typeof \\$(this).attr('id') && 'Rating_' != \\$(this).attr('id').match(/^Rating_/) && \\$(this).closest('div[id^=Rating_]')) { return; } // if cmp_value is number, convert it into number type data. if (!isNaN(cmp_value)) { cmp_value = Number(cmp_value); } if (\\$(this).val() < cmp_value) { comp_res = true; return; } }); break; case 'greater-than': \\$(objs).each(function(){ // if cmp_value is number, convert it into number type data. if (!isNaN(cmp_value)) { cmp_value = Number(cmp_value); } if (\\$(this).val() > cmp_value) { comp_res = true; return; } }); break; case 'starts-with': \\$(objs).each(function(){ if (\\$(this).val().indexOf(cmp_value) == 0) { comp_res = true; return; } }); break; case 'contains': \\$(objs).each(function(){ if (\\$(this).val().indexOf(cmp_value) != -1) { comp_res = true; return; } }); break; case 'ends-with': \\$(objs).each(function(){ indexPoint = (\\$(this).val().length - cmp_value.length); if (indexPoint >=0 && \\$(this).val().indexOf(cmp_value, indexPoint) == indexPoint) { comp_res = true; return; } }); break; default: comp_res = false; break; } return comp_res; } function applyRule(field_id) { \\$('.cond_filler_'+field_id).each(function(){ var this_conditions = \\$('#'+field_id).attr('data-cond-fields').split('|'); var this_action = \\$('#'+field_id).attr('data-cond-action').split(':'); var cmp_res = this_action == 'all' ? 
true : false; for (i=0 ; i\n'; for (i = 0; i < msgs.length; i++) { alert_box += '' + msgs[i] + '\n'; if(form != '') { form.closest('form').append(alert_box); } else { \\$('#form-5e8136f60c6dc').append(alert_box); } } function addConditionClass(field_id, cond_class, form_fields_wrapper) { \\$(field_id).each(function(){ if (\\$(this).is(':input') || \\$(this).is('select')) \\$(this).addClass('cond_filler_'+cond_class); \\$(this).children().each(function(){ addConditionClass(\\$(this), cond_class, form_fields_wrapper); }) }); return false; } function compareRule(objs, cmp_operator, cmp_value, cmp_id, \\$form_part_0) { var comp_res = false; var areOperandsCb = false; // Stores true if both operands are checkboxes. switch(cmp_operator) { case 'is': if (cmp_value.startsWith('Checkbox_')) { test = objs.closest('#form_part_0').find('#'+cmp_value+' :input:checked'); areOperandsCb = cmp_id.startsWith('Checkbox_') ? true : false; if (areOperandsCb && objs.length != test.length) { break; } } else { test = objs.closest('#form_part_0').find('#'+cmp_value+' :input'); } \\$(objs).each(function(){ if (areOperandsCb) { comp_res = false; } \\$cmp1 = \\$(this).val(); \\$(test).each(function(){ \\$cmp2 = \\$(this).val(); if (\\$cmp1 == \\$cmp2) { comp_res = true; if (!areOperandsCb) { return; } } }); if (areOperandsCb && false == comp_res) { return; } }); break; case 'is-not': if (cmp_value.startsWith('Checkbox_')) { test = \\$form_part_0.find('#'+cmp_value+' :input:checked'); areOperandsCb = cmp_id.startsWith('Checkbox_') ? true : false; if (areOperandsCb && objs.length != test.length) { return true; } } else { test = objs.closest('#form_part_0').find('#'+cmp_value+' :input'); } for(let objsElement of objs) { comp_res = false; \\$cmp1 = \\$(objsElement).val(); for(let testElement of test) { \\$cmp2 = \\$(testElement).val(); if (\\$cmp1 != \\$cmp2) { comp_res = true; // return; } else if(areOperandsCb) { comp_res = false; break; } } if(areOperandsCb && true == comp_res) { break; } } break; case 'less-than': \\$(objs).each(function(){ // Return if current element is non-relevant input field inside 'Rating' field. if ('undefined' != typeof \\$(this).attr('id') && 'Rating_' != \\$(this).attr('id').match(/^Rating_/) && \\$(this).closest('div[id^=Rating_]')) { return; } // if cmp_value is number, convert it into number type data. if (!isNaN(cmp_value)) { cmp_value = Number(cmp_value); } if (\\$(this).val() < cmp_value) { comp_res = true; return; } }); break; case 'greater-than': \\$(objs).each(function(){ // if cmp_value is number, convert it into number type data. if (!isNaN(cmp_value)) { cmp_value = Number(cmp_value); } if (\\$(this).val() > cmp_value) { comp_res = true; return; } }); break; case 'starts-with': \\$(objs).each(function(){ if (\\$(this).val().indexOf(cmp_value) == 0) { comp_res = true; return; } }); break; case 'contains': \\$(objs).each(function(){ if (\\$(this).val().indexOf(cmp_value) != -1) { comp_res = true; return; } }); break; case 'ends-with': \\$(objs).each(function(){ indexPoint = (\\$(this).val().length - cmp_value.length); if (indexPoint >=0 && \\$(this).val().indexOf(cmp_value, indexPoint) == indexPoint) { comp_res = true; return; } }); break; default: comp_res = false; break; } return comp_res; } function applyRule(field_id) { \\$('.cond_filler_'+field_id).each(function(){ var this_conditions = \\$('#'+field_id).attr('data-cond-fields').split('|'); var this_action = \\$('#'+field_id).attr('data-cond-action').split(':'); var cmp_res = this_action == 'all' ? 
true : false; for (i=0 ; i\n'; for (i = 0; i < msgs.length; i++) { alert_box += '' + msgs[i] + '\n'; if(form != '') { form.closest('form').append(alert_box); } else { \\$('#form-5e8136f60e066').append(alert_box); } } function addConditionClass(field_id, cond_class, form_fields_wrapper) { \\$(field_id).each(function(){ if (\\$(this).is(':input') || \\$(this).is('select')) \\$(this).addClass('cond_filler_'+cond_class); \\$(this).children().each(function(){ addConditionClass(\\$(this), cond_class, form_fields_wrapper); }) }); return false; } function compareRule(objs, cmp_operator, cmp_value, cmp_id, \\$form_part_0) { var comp_res = false; var areOperandsCb = false; // Stores true if both operands are checkboxes. switch(cmp_operator) { case 'is': if (cmp_value.startsWith('Checkbox_')) { test = objs.closest('#form_part_0').find('#'+cmp_value+' :input:checked'); areOperandsCb = cmp_id.startsWith('Checkbox_') ? true : false; if (areOperandsCb && objs.length != test.length) { break; } } else { test = objs.closest('#form_part_0').find('#'+cmp_value+' :input'); } \\$(objs).each(function(){ if (areOperandsCb) { comp_res = false; } \\$cmp1 = \\$(this).val(); \\$(test).each(function(){ \\$cmp2 = \\$(this).val(); if (\\$cmp1 == \\$cmp2) { comp_res = true; if (!areOperandsCb) { return; } } }); if (areOperandsCb && false == comp_res) { return; } }); break; case 'is-not': if (cmp_value.startsWith('Checkbox_')) { test = \\$form_part_0.find('#'+cmp_value+' :input:checked'); areOperandsCb = cmp_id.startsWith('Checkbox_') ? true : false; if (areOperandsCb && objs.length != test.length) { return true; } } else { test = objs.closest('#form_part_0').find('#'+cmp_value+' :input'); } for(let objsElement of objs) { comp_res = false; \\$cmp1 = \\$(objsElement).val(); for(let testElement of test) { \\$cmp2 = \\$(testElement).val(); if (\\$cmp1 != \\$cmp2) { comp_res = true; // return; } else if(areOperandsCb) { comp_res = false; break; } } if(areOperandsCb && true == comp_res) { break; } } break; case 'less-than': \\$(objs).each(function(){ // Return if current element is non-relevant input field inside 'Rating' field. if ('undefined' != typeof \\$(this).attr('id') && 'Rating_' != \\$(this).attr('id').match(/^Rating_/) && \\$(this).closest('div[id^=Rating_]')) { return; } // if cmp_value is number, convert it into number type data. if (!isNaN(cmp_value)) { cmp_value = Number(cmp_value); } if (\\$(this).val() < cmp_value) { comp_res = true; return; } }); break; case 'greater-than': \\$(objs).each(function(){ // if cmp_value is number, convert it into number type data. if (!isNaN(cmp_value)) { cmp_value = Number(cmp_value); } if (\\$(this).val() > cmp_value) { comp_res = true; return; } }); break; case 'starts-with': \\$(objs).each(function(){ if (\\$(this).val().indexOf(cmp_value) == 0) { comp_res = true; return; } }); break; case 'contains': \\$(objs).each(function(){ if (\\$(this).val().indexOf(cmp_value) != -1) { comp_res = true; return; } }); break; case 'ends-with': \\$(objs).each(function(){ indexPoint = (\\$(this).val().length - cmp_value.length); if (indexPoint >=0 && \\$(this).val().indexOf(cmp_value, indexPoint) == indexPoint) { comp_res = true; return; } }); break; default: comp_res = false; break; } return comp_res; } function applyRule(field_id) { \\$('.cond_filler_'+field_id).each(function(){ var this_conditions = \\$('#'+field_id).attr('data-cond-fields').split('|'); var this_action = \\$('#'+field_id).attr('data-cond-action').split(':'); var cmp_res = this_action == 'all' ? 
true : false; for (i=0 ; i\n'; for (i = 0; i < msgs.length; i++) { alert_box += '' + msgs[i] + '\n'; if(form != '') { form.closest('form').append(alert_box); } else { \\$('#form-5e8136f60fd36').append(alert_box); } } function addConditionClass(field_id, cond_class, form_fields_wrapper) { \\$(field_id).each(function(){ if (\\$(this).is(':input') || \\$(this).is('select')) \\$(this).addClass('cond_filler_'+cond_class); \\$(this).children().each(function(){ addConditionClass(\\$(this), cond_class, form_fields_wrapper); }) }); return false; } function compareRule(objs, cmp_operator, cmp_value, cmp_id, \\$form_part_0) { var comp_res = false; var areOperandsCb = false; // Stores true if both operands are checkboxes. switch(cmp_operator) { case 'is': if (cmp_value.startsWith('Checkbox_')) { test = objs.closest('#form_part_0').find('#'+cmp_value+' :input:checked'); areOperandsCb = cmp_id.startsWith('Checkbox_') ? true : false; if (areOperandsCb && objs.length != test.length) { break; } } else { test = objs.closest('#form_part_0').find('#'+cmp_value+' :input'); } \\$(objs).each(function(){ if (areOperandsCb) { comp_res = false; } \\$cmp1 = \\$(this).val(); \\$(test).each(function(){ \\$cmp2 = \\$(this).val(); if (\\$cmp1 == \\$cmp2) { comp_res = true; if (!areOperandsCb) { return; } } }); if (areOperandsCb && false == comp_res) { return; } }); break; case 'is-not': if (cmp_value.startsWith('Checkbox_')) { test = \\$form_part_0.find('#'+cmp_value+' :input:checked'); areOperandsCb = cmp_id.startsWith('Checkbox_') ? true : false; if (areOperandsCb && objs.length != test.length) { return true; } } else { test = objs.closest('#form_part_0').find('#'+cmp_value+' :input'); } for(let objsElement of objs) { comp_res = false; \\$cmp1 = \\$(objsElement).val(); for(let testElement of test) { \\$cmp2 = \\$(testElement).val(); if (\\$cmp1 != \\$cmp2) { comp_res = true; // return; } else if(areOperandsCb) { comp_res = false; break; } } if(areOperandsCb && true == comp_res) { break; } } break; case 'less-than': \\$(objs).each(function(){ // Return if current element is non-relevant input field inside 'Rating' field. if ('undefined' != typeof \\$(this).attr('id') && 'Rating_' != \\$(this).attr('id').match(/^Rating_/) && \\$(this).closest('div[id^=Rating_]')) { return; } // if cmp_value is number, convert it into number type data. if (!isNaN(cmp_value)) { cmp_value = Number(cmp_value); } if (\\$(this).val() < cmp_value) { comp_res = true; return; } }); break; case 'greater-than': \\$(objs).each(function(){ // if cmp_value is number, convert it into number type data. if (!isNaN(cmp_value)) { cmp_value = Number(cmp_value); } if (\\$(this).val() > cmp_value) { comp_res = true; return; } }); break; case 'starts-with': \\$(objs).each(function(){ if (\\$(this).val().indexOf(cmp_value) == 0) { comp_res = true; return; } }); break; case 'contains': \\$(objs).each(function(){ if (\\$(this).val().indexOf(cmp_value) != -1) { comp_res = true; return; } }); break; case 'ends-with': \\$(objs).each(function(){ indexPoint = (\\$(this).val().length - cmp_value.length); if (indexPoint >=0 && \\$(this).val().indexOf(cmp_value, indexPoint) == indexPoint) { comp_res = true; return; } }); break; default: comp_res = false; break; } return comp_res; } function applyRule(field_id) { \\$('.cond_filler_'+field_id).each(function(){ var this_conditions = \\$('#'+field_id).attr('data-cond-fields').split('|'); var this_action = \\$('#'+field_id).attr('data-cond-action').split(':'); var cmp_res = this_action == 'all' ? 
true : false; for (i=0 ; i\n'; for (i = 0; i < msgs.length; i++) { alert_box += '' + msgs[i] + '\n'; if(form != '') { form.closest('form').append(alert_box); } else { \\$('#form-5e8136f6114f6').append(alert_box); } } function addConditionClass(field_id, cond_class, form_fields_wrapper) { \\$(field_id).each(function(){ if (\\$(this).is(':input') || \\$(this).is('select')) \\$(this).addClass('cond_filler_'+cond_class); \\$(this).children().each(function(){ addConditionClass(\\$(this), cond_class, form_fields_wrapper); }) }); return false; } function compareRule(objs, cmp_operator, cmp_value, cmp_id, \\$form_part_0) { var comp_res = false; var areOperandsCb = false; // Stores true if both operands are checkboxes. switch(cmp_operator) { case 'is': if (cmp_value.startsWith('Checkbox_')) { test = objs.closest('#form_part_0').find('#'+cmp_value+' :input:checked'); areOperandsCb = cmp_id.startsWith('Checkbox_') ? true : false; if (areOperandsCb && objs.length != test.length) { break; } } else { test = objs.closest('#form_part_0').find('#'+cmp_value+' :input'); } \\$(objs).each(function(){ if (areOperandsCb) { comp_res = false; } \\$cmp1 = \\$(this).val(); \\$(test).each(function(){ \\$cmp2 = \\$(this).val(); if (\\$cmp1 == \\$cmp2) { comp_res = true; if (!areOperandsCb) { return; } } }); if (areOperandsCb && false == comp_res) { return; } }); break; case 'is-not': if (cmp_value.startsWith('Checkbox_')) { test = \\$form_part_0.find('#'+cmp_value+' :input:checked'); areOperandsCb = cmp_id.startsWith('Checkbox_') ? true : false; if (areOperandsCb && objs.length != test.length) { return true; } } else { test = objs.closest('#form_part_0').find('#'+cmp_value+' :input'); } for(let objsElement of objs) { comp_res = false; \\$cmp1 = \\$(objsElement).val(); for(let testElement of test) { \\$cmp2 = \\$(testElement).val(); if (\\$cmp1 != \\$cmp2) { comp_res = true; // return; } else if(areOperandsCb) { comp_res = false; break; } } if(areOperandsCb && true == comp_res) { break; } } break; case 'less-than': \\$(objs).each(function(){ // Return if current element is non-relevant input field inside 'Rating' field. if ('undefined' != typeof \\$(this).attr('id') && 'Rating_' != \\$(this).attr('id').match(/^Rating_/) && \\$(this).closest('div[id^=Rating_]')) { return; } // if cmp_value is number, convert it into number type data. if (!isNaN(cmp_value)) { cmp_value = Number(cmp_value); } if (\\$(this).val() < cmp_value) { comp_res = true; return; } }); break; case 'greater-than': \\$(objs).each(function(){ // if cmp_value is number, convert it into number type data. if (!isNaN(cmp_value)) { cmp_value = Number(cmp_value); } if (\\$(this).val() > cmp_value) { comp_res = true; return; } }); break; case 'starts-with': \\$(objs).each(function(){ if (\\$(this).val().indexOf(cmp_value) == 0) { comp_res = true; return; } }); break; case 'contains': \\$(objs).each(function(){ if (\\$(this).val().indexOf(cmp_value) != -1) { comp_res = true; return; } }); break; case 'ends-with': \\$(objs).each(function(){ indexPoint = (\\$(this).val().length - cmp_value.length); if (indexPoint >=0 && \\$(this).val().indexOf(cmp_value, indexPoint) == indexPoint) { comp_res = true; return; } }); break; default: comp_res = false; break; } return comp_res; } function applyRule(field_id) { \\$('.cond_filler_'+field_id).each(function(){ var this_conditions = \\$('#'+field_id).attr('data-cond-fields').split('|'); var this_action = \\$('#'+field_id).attr('data-cond-action').split(':'); var cmp_res = this_action == 'all' ? 
true : false; for (i=0 ; i\n'; for (i = 0; i < msgs.length; i++) { alert_box += '' + msgs[i] + '\n'; if(form != '') { form.closest('form').append(alert_box); } else { \\$('#form-5e8136f612f1a').append(alert_box); } } function addConditionClass(field_id, cond_class, form_fields_wrapper) { \\$(field_id).each(function(){ if (\\$(this).is(':input') || \\$(this).is('select')) \\$(this).addClass('cond_filler_'+cond_class); \\$(this).children().each(function(){ addConditionClass(\\$(this), cond_class, form_fields_wrapper); }) }); return false; } function compareRule(objs, cmp_operator, cmp_value, cmp_id, \\$form_part_0) { var comp_res = false; var areOperandsCb = false; // Stores true if both operands are checkboxes. switch(cmp_operator) { case 'is': if (cmp_value.startsWith('Checkbox_')) { test = objs.closest('#form_part_0').find('#'+cmp_value+' :input:checked'); areOperandsCb = cmp_id.startsWith('Checkbox_') ? true : false; if (areOperandsCb && objs.length != test.length) { break; } } else { test = objs.closest('#form_part_0').find('#'+cmp_value+' :input'); } \\$(objs).each(function(){ if (areOperandsCb) { comp_res = false; } \\$cmp1 = \\$(this).val(); \\$(test).each(function(){ \\$cmp2 = \\$(this).val(); if (\\$cmp1 == \\$cmp2) { comp_res = true; if (!areOperandsCb) { return; } } }); if (areOperandsCb && false == comp_res) { return; } }); break; case 'is-not': if (cmp_value.startsWith('Checkbox_')) { test = \\$form_part_0.find('#'+cmp_value+' :input:checked'); areOperandsCb = cmp_id.startsWith('Checkbox_') ? true : false; if (areOperandsCb && objs.length != test.length) { return true; } } else { test = objs.closest('#form_part_0').find('#'+cmp_value+' :input'); } for(let objsElement of objs) { comp_res = false; \\$cmp1 = \\$(objsElement).val(); for(let testElement of test) { \\$cmp2 = \\$(testElement).val(); if (\\$cmp1 != \\$cmp2) { comp_res = true; // return; } else if(areOperandsCb) { comp_res = false; break; } } if(areOperandsCb && true == comp_res) { break; } } break; case 'less-than': \\$(objs).each(function(){ // Return if current element is non-relevant input field inside 'Rating' field. if ('undefined' != typeof \\$(this).attr('id') && 'Rating_' != \\$(this).attr('id').match(/^Rating_/) && \\$(this).closest('div[id^=Rating_]')) { return; } // if cmp_value is number, convert it into number type data. if (!isNaN(cmp_value)) { cmp_value = Number(cmp_value); } if (\\$(this).val() < cmp_value) { comp_res = true; return; } }); break; case 'greater-than': \\$(objs).each(function(){ // if cmp_value is number, convert it into number type data. if (!isNaN(cmp_value)) { cmp_value = Number(cmp_value); } if (\\$(this).val() > cmp_value) { comp_res = true; return; } }); break; case 'starts-with': \\$(objs).each(function(){ if (\\$(this).val().indexOf(cmp_value) == 0) { comp_res = true; return; } }); break; case 'contains': \\$(objs).each(function(){ if (\\$(this).val().indexOf(cmp_value) != -1) { comp_res = true; return; } }); break; case 'ends-with': \\$(objs).each(function(){ indexPoint = (\\$(this).val().length - cmp_value.length); if (indexPoint >=0 && \\$(this).val().indexOf(cmp_value, indexPoint) == indexPoint) { comp_res = true; return; } }); break; default: comp_res = false; break; } return comp_res; } function applyRule(field_id) { \\$('.cond_filler_'+field_id).each(function(){ var this_conditions = \\$('#'+field_id).attr('data-cond-fields').split('|'); var this_action = \\$('#'+field_id).attr('data-cond-action').split(':'); var cmp_res = this_action == 'all' ? 
true : false; for (i=0 ; i\n'; for (i = 0; i < msgs.length; i++) { alert_box += '' + msgs[i] + '\n'; if(form != '') { form.closest('form').append(alert_box); } else { \\$('#form-5e8136f6148de').append(alert_box); } } function addConditionClass(field_id, cond_class, form_fields_wrapper) { \\$(field_id).each(function(){ if (\\$(this).is(':input') || \\$(this).is('select')) \\$(this).addClass('cond_filler_'+cond_class); \\$(this).children().each(function(){ addConditionClass(\\$(this), cond_class, form_fields_wrapper); }) }); return false; } function compareRule(objs, cmp_operator, cmp_value, cmp_id, \\$form_part_0) { var comp_res = false; var areOperandsCb = false; // Stores true if both operands are checkboxes. switch(cmp_operator) { case 'is': if (cmp_value.startsWith('Checkbox_')) { test = objs.closest('#form_part_0').find('#'+cmp_value+' :input:checked'); areOperandsCb = cmp_id.startsWith('Checkbox_') ? true : false; if (areOperandsCb && objs.length != test.length) { break; } } else { test = objs.closest('#form_part_0').find('#'+cmp_value+' :input'); } \\$(objs).each(function(){ if (areOperandsCb) { comp_res = false; } \\$cmp1 = \\$(this).val(); \\$(test).each(function(){ \\$cmp2 = \\$(this).val(); if (\\$cmp1 == \\$cmp2) { comp_res = true; if (!areOperandsCb) { return; } } }); if (areOperandsCb && false == comp_res) { return; } }); break; case 'is-not': if (cmp_value.startsWith('Checkbox_')) { test = \\$form_part_0.find('#'+cmp_value+' :input:checked'); areOperandsCb = cmp_id.startsWith('Checkbox_') ? true : false; if (areOperandsCb && objs.length != test.length) { return true; } } else { test = objs.closest('#form_part_0').find('#'+cmp_value+' :input'); } for(let objsElement of objs) { comp_res = false; \\$cmp1 = \\$(objsElement).val(); for(let testElement of test) { \\$cmp2 = \\$(testElement).val(); if (\\$cmp1 != \\$cmp2) { comp_res = true; // return; } else if(areOperandsCb) { comp_res = false; break; } } if(areOperandsCb && true == comp_res) { break; } } break; case 'less-than': \\$(objs).each(function(){ // Return if current element is non-relevant input field inside 'Rating' field. if ('undefined' != typeof \\$(this).attr('id') && 'Rating_' != \\$(this).attr('id').match(/^Rating_/) && \\$(this).closest('div[id^=Rating_]')) { return; } // if cmp_value is number, convert it into number type data. if (!isNaN(cmp_value)) { cmp_value = Number(cmp_value); } if (\\$(this).val() < cmp_value) { comp_res = true; return; } }); break; case 'greater-than': \\$(objs).each(function(){ // if cmp_value is number, convert it into number type data. if (!isNaN(cmp_value)) { cmp_value = Number(cmp_value); } if (\\$(this).val() > cmp_value) { comp_res = true; return; } }); break; case 'starts-with': \\$(objs).each(function(){ if (\\$(this).val().indexOf(cmp_value) == 0) { comp_res = true; return; } }); break; case 'contains': \\$(objs).each(function(){ if (\\$(this).val().indexOf(cmp_value) != -1) { comp_res = true; return; } }); break; case 'ends-with': \\$(objs).each(function(){ indexPoint = (\\$(this).val().length - cmp_value.length); if (indexPoint >=0 && \\$(this).val().indexOf(cmp_value, indexPoint) == indexPoint) { comp_res = true; return; } }); break; default: comp_res = false; break; } return comp_res; } function applyRule(field_id) { \\$('.cond_filler_'+field_id).each(function(){ var this_conditions = \\$('#'+field_id).attr('data-cond-fields').split('|'); var this_action = \\$('#'+field_id).attr('data-cond-action').split(':'); var cmp_res = this_action == 'all' ? 
true : false; for (i=0 ; i\n'; for (i = 0; i < msgs.length; i++) { alert_box += '' + msgs[i] + '\n'; if(form != '') { form.closest('form').append(alert_box); } else { \\$('#form-5e8136f6162df').append(alert_box); } } function addConditionClass(field_id, cond_class, form_fields_wrapper) { \\$(field_id).each(function(){ if (\\$(this).is(':input') || \\$(this).is('select')) \\$(this).addClass('cond_filler_'+cond_class); \\$(this).children().each(function(){ addConditionClass(\\$(this), cond_class, form_fields_wrapper); }) }); return false; } function compareRule(objs, cmp_operator, cmp_value, cmp_id, \\$form_part_0) { var comp_res = false; var areOperandsCb = false; // Stores true if both operands are checkboxes. switch(cmp_operator) { case 'is': if (cmp_value.startsWith('Checkbox_')) { test = objs.closest('#form_part_0').find('#'+cmp_value+' :input:checked'); areOperandsCb = cmp_id.startsWith('Checkbox_') ? true : false; if (areOperandsCb && objs.length != test.length) { break; } } else { test = objs.closest('#form_part_0').find('#'+cmp_value+' :input'); } \\$(objs).each(function(){ if (areOperandsCb) { comp_res = false; } \\$cmp1 = \\$(this).val(); \\$(test).each(function(){ \\$cmp2 = \\$(this).val(); if (\\$cmp1 == \\$cmp2) { comp_res = true; if (!areOperandsCb) { return; } } }); if (areOperandsCb && false == comp_res) { return; } }); break; case 'is-not': if (cmp_value.startsWith('Checkbox_')) { test = \\$form_part_0.find('#'+cmp_value+' :input:checked'); areOperandsCb = cmp_id.startsWith('Checkbox_') ? true : false; if (areOperandsCb && objs.length != test.length) { return true; } } else { test = objs.closest('#form_part_0').find('#'+cmp_value+' :input'); } for(let objsElement of objs) { comp_res = false; \\$cmp1 = \\$(objsElement).val(); for(let testElement of test) { \\$cmp2 = \\$(testElement).val(); if (\\$cmp1 != \\$cmp2) { comp_res = true; // return; } else if(areOperandsCb) { comp_res = false; break; } } if(areOperandsCb && true == comp_res) { break; } } break; case 'less-than': \\$(objs).each(function(){ // Return if current element is non-relevant input field inside 'Rating' field. if ('undefined' != typeof \\$(this).attr('id') && 'Rating_' != \\$(this).attr('id').match(/^Rating_/) && \\$(this).closest('div[id^=Rating_]')) { return; } // if cmp_value is number, convert it into number type data. if (!isNaN(cmp_value)) { cmp_value = Number(cmp_value); } if (\\$(this).val() < cmp_value) { comp_res = true; return; } }); break; case 'greater-than': \\$(objs).each(function(){ // if cmp_value is number, convert it into number type data. if (!isNaN(cmp_value)) { cmp_value = Number(cmp_value); } if (\\$(this).val() > cmp_value) { comp_res = true; return; } }); break; case 'starts-with': \\$(objs).each(function(){ if (\\$(this).val().indexOf(cmp_value) == 0) { comp_res = true; return; } }); break; case 'contains': \\$(objs).each(function(){ if (\\$(this).val().indexOf(cmp_value) != -1) { comp_res = true; return; } }); break; case 'ends-with': \\$(objs).each(function(){ indexPoint = (\\$(this).val().length - cmp_value.length); if (indexPoint >=0 && \\$(this).val().indexOf(cmp_value, indexPoint) == indexPoint) { comp_res = true; return; } }); break; default: comp_res = false; break; } return comp_res; } function applyRule(field_id) { \\$('.cond_filler_'+field_id).each(function(){ var this_conditions = \\$('#'+field_id).attr('data-cond-fields').split('|'); var this_action = \\$('#'+field_id).attr('data-cond-action').split(':'); var cmp_res = this_action == 'all' ? 
true : false; for (i=0 ; i\n'; for (i = 0; i < msgs.length; i++) { alert_box += '' + msgs[i] + '\n'; if(form != '') { form.closest('form').append(alert_box); } else { \\$('#form-5e8136f617d3c').append(alert_box); } } function addConditionClass(field_id, cond_class, form_fields_wrapper) { \\$(field_id).each(function(){ if (\\$(this).is(':input') || \\$(this).is('select')) \\$(this).addClass('cond_filler_'+cond_class); \\$(this).children().each(function(){ addConditionClass(\\$(this), cond_class, form_fields_wrapper); }) }); return false; } function compareRule(objs, cmp_operator, cmp_value, cmp_id, \\$form_part_0) { var comp_res = false; var areOperandsCb = false; // Stores true if both operands are checkboxes. switch(cmp_operator) { case 'is': if (cmp_value.startsWith('Checkbox_')) { test = objs.closest('#form_part_0').find('#'+cmp_value+' :input:checked'); areOperandsCb = cmp_id.startsWith('Checkbox_') ? true : false; if (areOperandsCb && objs.length != test.length) { break; } } else { test = objs.closest('#form_part_0').find('#'+cmp_value+' :input'); } \\$(objs).each(function(){ if (areOperandsCb) { comp_res = false; } \\$cmp1 = \\$(this).val(); \\$(test).each(function(){ \\$cmp2 = \\$(this).val(); if (\\$cmp1 == \\$cmp2) { comp_res = true; if (!areOperandsCb) { return; } } }); if (areOperandsCb && false == comp_res) { return; } }); break; case 'is-not': if (cmp_value.startsWith('Checkbox_')) { test = \\$form_part_0.find('#'+cmp_value+' :input:checked'); areOperandsCb = cmp_id.startsWith('Checkbox_') ? true : false; if (areOperandsCb && objs.length != test.length) { return true; } } else { test = objs.closest('#form_part_0').find('#'+cmp_value+' :input'); } for(let objsElement of objs) { comp_res = false; \\$cmp1 = \\$(objsElement).val(); for(let testElement of test) { \\$cmp2 = \\$(testElement).val(); if (\\$cmp1 != \\$cmp2) { comp_res = true; // return; } else if(areOperandsCb) { comp_res = false; break; } } if(areOperandsCb && true == comp_res) { break; } } break; case 'less-than': \\$(objs).each(function(){ // Return if current element is non-relevant input field inside 'Rating' field. if ('undefined' != typeof \\$(this).attr('id') && 'Rating_' != \\$(this).attr('id').match(/^Rating_/) && \\$(this).closest('div[id^=Rating_]')) { return; } // if cmp_value is number, convert it into number type data. if (!isNaN(cmp_value)) { cmp_value = Number(cmp_value); } if (\\$(this).val() < cmp_value) { comp_res = true; return; } }); break; case 'greater-than': \\$(objs).each(function(){ // if cmp_value is number, convert it into number type data. if (!isNaN(cmp_value)) { cmp_value = Number(cmp_value); } if (\\$(this).val() > cmp_value) { comp_res = true; return; } }); break; case 'starts-with': \\$(objs).each(function(){ if (\\$(this).val().indexOf(cmp_value) == 0) { comp_res = true; return; } }); break; case 'contains': \\$(objs).each(function(){ if (\\$(this).val().indexOf(cmp_value) != -1) { comp_res = true; return; } }); break; case 'ends-with': \\$(objs).each(function(){ indexPoint = (\\$(this).val().length - cmp_value.length); if (indexPoint >=0 && \\$(this).val().indexOf(cmp_value, indexPoint) == indexPoint) { comp_res = true; return; } }); break; default: comp_res = false; break; } return comp_res; } function applyRule(field_id) { \\$('.cond_filler_'+field_id).each(function(){ var this_conditions = \\$('#'+field_id).attr('data-cond-fields').split('|'); var this_action = \\$('#'+field_id).attr('data-cond-action').split(':'); var cmp_res = this_action == 'all' ? 
true : false; for (i=0 ; i\n'; for (i = 0; i < msgs.length; i++) { alert_box += '' + msgs[i] + '"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.905009,"math_prob":0.9946754,"size":304,"snap":"2020-10-2020-16","text_gpt3_token_len":95,"char_repetition_ratio":0.10333333,"word_repetition_ratio":0.115384616,"special_character_ratio":0.34868422,"punctuation_ratio":0.04918033,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9967659,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-03-30T00:05:57Z\",\"WARC-Record-ID\":\"<urn:uuid:9dcf0cfd-3281-4647-9139-af78c0408ae5>\",\"Content-Length\":\"877569\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f625c093-0aa9-4eef-aaec-6765d1abf3e5>\",\"WARC-Concurrent-To\":\"<urn:uuid:9860bfff-ac60-4954-a471-1fe71e8dc747>\",\"WARC-IP-Address\":\"185.134.29.27\",\"WARC-Target-URI\":\"https://www.directhardwaresupply.com/store/page/30/?filter_product-group=memory&query_type_product-group=or\",\"WARC-Payload-Digest\":\"sha1:CE5HYRQNNAFVMU3PDPLPZ5OLV4S6XVLZ\",\"WARC-Block-Digest\":\"sha1:GWSE3SP35YWZQQ55GN7D2TMUY6MRLNB6\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585370496330.1_warc_CC-MAIN-20200329232328-20200330022328-00305.warc.gz\"}"} |
https://hcl.ucd.ie/wiki/index.php/PGF/Tikz | [
"# Write a figure\n\nThe preamble of the latex file must contain:\n\\usepackage{tikz}\n\nSome optional libraries could be add like this:\n\\usetikzlibrary{calc}\n\nTo start a figure, the code must be inside the tikzpicture environment like this:\n\\begin{tikzpicture} ... TikZ code here... \\end{tikzpicture}\n\n\n# Exemple\n\n% Author: Quintin Jean-Noël\n% <http://moais.imag.fr/membres/jean-noel.quintin/>\n\\documentclass{article}\n\\usepackage{tikz}\n\\usetikzlibrary[topaths]\n% A counter, since TikZ is not clever enough (yet) to handle\n% arbitrary angle systems.\n\\newcount\\mycount\n\\begin{document}\n\\begin{tikzpicture}[transform shape]\n%the multiplication with floats is not possible. Thus I split the loop\n%in two.\n\\foreach \\number in {1,...,8}{\n% Computer angle:\n\\mycount=\\number\n\\multiply\\mycount by 45\n\\node[draw,circle,inner sep=0.125cm] (N-\\number) at (\\the\\mycount:5.4cm) {};\n}\n\\foreach \\number in {9,...,16}{\n% Computer angle:\n\\mycount=\\number\n\\multiply\\mycount by 45\n\\node[draw,circle,inner sep=0.125cm] (N-\\number) at (\\the\\mycount:5.4cm) {};\n}\n\\foreach \\number in {1,...,15}{\n\\mycount=\\number"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6164023,"math_prob":0.9165815,"size":1373,"snap":"2022-27-2022-33","text_gpt3_token_len":467,"char_repetition_ratio":0.1577794,"word_repetition_ratio":0.18064517,"special_character_ratio":0.29788783,"punctuation_ratio":0.21428572,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.997434,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-10T15:11:24Z\",\"WARC-Record-ID\":\"<urn:uuid:33da666f-c1dd-4ad5-8654-c86b59d21acd>\",\"Content-Length\":\"14959\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b3175386-36d2-42ae-a5a4-762ae2effafb>\",\"WARC-Concurrent-To\":\"<urn:uuid:79f36af8-7c02-44d4-b9ca-5a1bfa0fcafc>\",\"WARC-IP-Address\":\"137.43.92.117\",\"WARC-Target-URI\":\"https://hcl.ucd.ie/wiki/index.php/PGF/Tikz\",\"WARC-Payload-Digest\":\"sha1:LLRNDFUVYND3PVPUP3H4YYHT2GYA7BNR\",\"WARC-Block-Digest\":\"sha1:LAQY27SLL6APP4YWHMAQBGVJFKOJA7VO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882571190.0_warc_CC-MAIN-20220810131127-20220810161127-00680.warc.gz\"}"} |
https://ww2.mathworks.cn/matlabcentral/cody/problems/43158-solve-system-of-equation/solutions/2641150 | [
"Cody\n\n# Problem 43158. Solve system of equation!\n\nSolution 2641150\n\nSubmitted on 1 Jul 2020 by Ramesh Kumar V\nThis solution is locked. To view this solution, you need to provide a solution of the same size or smaller.\n\n### Test Suite\n\nTest Status Code Input and Output\n1 Pass\nx = [2 1 3; 1 1 2]; y_correct = x(:,1:end-1)\\x(:,end); assert(isequal(solvesystem(x),y_correct))\n\n2 Pass\nx = [1 1 0; 1 -1 0]; y_correct = x(:,1:end-1)\\x(:,end); assert(isequal(solvesystem(x),y_correct))\n\n### Community Treasure Hunt\n\nFind the treasures in MATLAB Central and discover how the community can help you!\n\nStart Hunting!"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.56394064,"math_prob":0.9741023,"size":487,"snap":"2021-04-2021-17","text_gpt3_token_len":171,"char_repetition_ratio":0.13043478,"word_repetition_ratio":0.028985508,"special_character_ratio":0.40246406,"punctuation_ratio":0.25396827,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98134434,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-16T03:21:52Z\",\"WARC-Record-ID\":\"<urn:uuid:ebc7ab92-2c8f-4fa6-9cfd-7ab3ea4790e8>\",\"Content-Length\":\"82596\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ce048c00-12cc-4dae-8537-80e963468f77>\",\"WARC-Concurrent-To\":\"<urn:uuid:4405dc97-a233-4aeb-8e29-931175e41a4b>\",\"WARC-IP-Address\":\"104.127.224.102\",\"WARC-Target-URI\":\"https://ww2.mathworks.cn/matlabcentral/cody/problems/43158-solve-system-of-equation/solutions/2641150\",\"WARC-Payload-Digest\":\"sha1:OL2SBJJQ6I426OPE7ETFEDKWX2TXAFQJ\",\"WARC-Block-Digest\":\"sha1:MOIHDNPDSC62U2TC7YLSNEKTZDB5DYAN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703499999.6_warc_CC-MAIN-20210116014637-20210116044637-00681.warc.gz\"}"} |
https://www.khanacademy.org/science/in-in-class11th-physics/in-in-class11th-physics-laws-of-motion/in-in-class11th-physics-normal-force/v/more-on-normal-force-shoe-on-floor | [
"If you're seeing this message, it means we're having trouble loading external resources on our website.\n\nIf you're behind a web filter, please make sure that the domains *.kastatic.org and *.kasandbox.org are unblocked.\n\n# More on Normal force (shoe on floor)\n\nDavid explains how to determine the normal force for a variety of scenarios (extra forces, diagonal forces, acceleration) involving a shoe on the floor. Created by David SantoPietro.\n\n## Want to join the conversation?\n\n• What about the instant when a falling shoe hits the ground? What would be the normal force then?",
null,
"• I saw a question: a boy of mass M_ holding a box of mass _m above his head jumps from a building. What is the force exterted by the box on his head during the freefall? Does it increase during the period he balances himself after hitting the ground?\n\nIs there a normal force when two bodies in contact are in freefall? Will they stay in contact?",
null,
"•",
null,
"Great question! i don't want to give a straight up answer because you'd appreciate it more when you'll come to conclusion yourself. Let's think about what Galileo told us about object in a free fall. A stone and a feather, ignoring any air resistance, will fall from a building at the same rate! Of course, you knew that already. But it's the key to your answer. Imagine yourself holding a ball in your palm and jumping from a high diving board. Both you and ball are falling at the same rate. Which means if you removed your palm from underneath the ball, it would still keep falling the same way. Think another case: Even if you didn't have the ball on your palm and both you and the ball had been dropped side by side from the diving board, the picture would still be the same.\n\nSo is there a normal force on the ball in free fall?\n\nIf you don't get the answer, ask again. I promise to tell the answer but you have to give the problem your best effort. Read the chapter again.\n• what about the force on the horizontal direction? what happens to it?",
null,
"•",
null,
"• In the 'shoe falling through the air' case, what if we take in consideration the air resistance? Could that be viewed as a normal force? (it still is a contact force, since we're talking about the air molecules reacting to the shoe in the opposite direction when they come in contact).\nOf course, in this case the normal force (air resistance) would be a lot smaller than the force that caused it (mg), but then again that's why it's falling through the air, not levitating.",
null,
"• Seeing as how this was made 7 years ago, you might not care anymore but I would like to still give my answer.\n\nAir resistance like normal force is a kind of contact force, forces acting upon objects in contact. Air resistance is really a special kind of frictional force(also a contact force). The difference between frictional and normal force that you will see in diagrams is that the normal force is perpendicular to the place of contact while friction is along it or parallel.\n\nNormal forces are also described as the reactionary force of the compounds or atoms of a material to not allow another object to pass through it. This is an important distinction in comparison to frictional forces which does \"let an object pass through it\" or does not completely stop an object despite being in contact with it. For example gases(air resistance) and liquids(water resistance) and solids whose point of contact is parallel to the force acting upon it.\n\nHowever, as you've mentioned isn't this kind of the same fundamental force. And you are right! fundamentally, they are really the same forces, intermolecular ones at points of contact. However, because of the fact that macroscopically normal force, air resistance, water resistance, and the usual kind of friction are so different looking and whose emergent properties differ, it is much easier to deal with them as different kinds of forces.\n• Hi, If acceleration = force/mass doesn't that mean that the object with more mass falls slower than object with less mass? now here doesn't mass means mass of the object as f=ma so we're taking force on the object. But that doesn't happen according to experiments. Where am I wrong?",
null,
"• In the above video what will happen to the horizontal force F3x ?",
null,
"• When he gives the example of Fsub2 being applied in the upward direction,the equation will become\nFn=mg-F2\nor\nmg=Fn+F2\nDoes this means that the weight of shoe will be increased? (I'm referring this concept from the weightlessness in an elevator)\nPlease clearify this to me.I'm finding this very confusing.In case of an elevator it was obvious but here it is not making sense..",
null,
"• No, the weight remains constant, remember weight cannot change, as long as you are near the surface of the Earth. Because remember, weight is just the measure of the force of gravity on an object. No, the weight of the shoe will not increase, the normal force will decrease ( assuming the acceleration is 0 ), because the F2 relieved some of the force that was needed to be applied by the normal force.\nI hope this helps.\nComment if you still have some doubts.\n-ƙαɾƚιƙҽყҽ ʂԋɾιʋαʂƚαʋα™\n• Doesn't newton say \"Every reaction has an equal and opposite reaction\"?doesn't the normal force represents to the opposite reaction?So, the normal force shouldn't be equal to the force is given?please clear me that.{When the surface is tilted}\n(1 vote)",
null,
"• Nope. Action and reaction forces operate on different objects, not the same object. The reaction force to the normal force the surface exerts on the object (up) is the normal force the object exerts on the surface (down). The reaction force of the gravity earth exerts on the object (down) is the gravity the object exerts on earth (up). The reason earth isn’t affected much by this gravity is that its mass is much much greater, so its acceleration is negligible.\n\nSo to sum up, Action and reaction forces operate on different objects and are the same type of force (reaction force of normal force is still normal force, reaction force of gravity is still gravity, reaction force of friction is still friction). In addition to that is Newtons third law.",
null,
""
] | [
null,
"https://cdn.kastatic.org/images/avatars/svg/blobby-green.svg",
null,
"https://cdn.kastatic.org/images/avatars/svg/blobby-green.svg",
null,
"https://cdn.kastatic.org/images/badges/moon/good-answer-40x40.png",
null,
"https://cdn.kastatic.org/images/avatars/svg/blobby-green.svg",
null,
"https://cdn.kastatic.org/images/avatars/svg/blobby-green.svg",
null,
"https://cdn.kastatic.org/images/avatars/svg/blobby-green.svg",
null,
"https://cdn.kastatic.org/images/avatars/svg/blobby-green.svg",
null,
"https://cdn.kastatic.org/images/avatars/svg/blobby-green.svg",
null,
"https://cdn.kastatic.org/images/avatars/svg/blobby-green.svg",
null,
"https://cdn.kastatic.org/images/avatars/svg/blobby-green.svg",
null,
"https://cdn.kastatic.org/images/avatars/svg/blobby-green.svg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9611878,"math_prob":0.89211446,"size":18556,"snap":"2022-40-2023-06","text_gpt3_token_len":4257,"char_repetition_ratio":0.17238034,"word_repetition_ratio":0.017096493,"special_character_ratio":0.22720414,"punctuation_ratio":0.099921934,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9551171,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-05T23:24:29Z\",\"WARC-Record-ID\":\"<urn:uuid:fbd42d7c-7d05-4468-8162-7afd72ac79a4>\",\"Content-Length\":\"1049415\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3290c38f-e3d3-44b2-b577-64ea906080d3>\",\"WARC-Concurrent-To\":\"<urn:uuid:88fefaea-e25d-40a6-a905-f4a78fa11c15>\",\"WARC-IP-Address\":\"146.75.37.42\",\"WARC-Target-URI\":\"https://www.khanacademy.org/science/in-in-class11th-physics/in-in-class11th-physics-laws-of-motion/in-in-class11th-physics-normal-force/v/more-on-normal-force-shoe-on-floor\",\"WARC-Payload-Digest\":\"sha1:Z54V2JJDYTSAKQ6Q7TCFALRESTNENXHK\",\"WARC-Block-Digest\":\"sha1:6XBCN32ZT4DJ6WPG5OWX2FBVAMD6YPOP\",\"WARC-Truncated\":\"length\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500294.64_warc_CC-MAIN-20230205224620-20230206014620-00836.warc.gz\"}"} |
http://sciences.heptic.fr/archives/432 | [
"# Gogeometry Problem 336\n\nFrom Gogeometry",
null,
"",
null,
"Using :",
null,
"",
null,
"• In ΔDMF : DM^2=DF^2+FM^2\n• (1) DM^2= (r-x/2)^2+x^2\n• r^2= r^2+ DM^2 -2xr\n• (2) DM^2=2xr\n• (1) and (2) 2xr=(r-x/2)^2+x^2\n• 2xr=r^2-xr+x^2/4+x^2\n• 3xr=r^2+5x^2/4\n• Dividing by x^2 (x not equal to 0) =>3r/x=5/4+(r/x)^2\n• Let y=r/x : y^2-3y+5/4=0\n• Delta = 4 and y=5/2 or y=1/2\n\nTherefore x=2r/5 or x=2r\n\n• Since the only definition of M and H are\n• M belongs to the square and circle A,\n• and H belongs to the square and Circle B\n• Therefore, there are 2 different solutions:\n• x=2r/5 (blue square)\n• or x=2r (green square)",
null,
""
] | [
null,
"data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null,
"data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null,
"data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null,
"data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null,
"data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6299138,"math_prob":0.9993722,"size":610,"snap":"2022-40-2023-06","text_gpt3_token_len":297,"char_repetition_ratio":0.10891089,"word_repetition_ratio":0.018348623,"special_character_ratio":0.47540984,"punctuation_ratio":0.043209877,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999976,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-04T18:22:43Z\",\"WARC-Record-ID\":\"<urn:uuid:ba63a977-581c-4f38-b1b4-d5c6c097476f>\",\"Content-Length\":\"36602\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:37e1efe2-4187-4eae-97b7-c791f4c4f1d6>\",\"WARC-Concurrent-To\":\"<urn:uuid:4cd6b2ab-c7af-4ac2-a6e6-202e3202817a>\",\"WARC-IP-Address\":\"217.160.0.239\",\"WARC-Target-URI\":\"http://sciences.heptic.fr/archives/432\",\"WARC-Payload-Digest\":\"sha1:JXUVOLKN37E4WOB4BBTMAGNECSLWWUKL\",\"WARC-Block-Digest\":\"sha1:PDQE3MAR7FFQCECREIWADNBB7BMQEBFW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500151.93_warc_CC-MAIN-20230204173912-20230204203912-00851.warc.gz\"}"} |
https://sttv-depannage-antenne.fr/12760-Jun-2015.html | [
"##### Impact Force Calculator\n\nUsing the impact force calculator; Impact force formula; Formula accuracy; Calculation examples Using the impact force calculator. This versatile impact force calculator is useful for estimating the impact forces involved in collisions of different kinds. For example, it can be used to calculate the impact force of a vehicle (car, truck, train), plane, football, of birds hitting a plane or ...\n\n##### A Reliability Calculations and Statistics\n\n2017-8-25 · 362 A Reliability Calculations and Statistics Table A.1. Confidence levels γ and corresponding values of c γ (%) c 80 1.28 90 1.65 95 1.96 98 2.33 99 2.58 which contains the real probability p with a chosen confidence level γ.If we set γ very close to 1, this interval becomes very large. It depends on\n\n##### How to Calculate CapEx\n\nExample of the CapEx calculation in Excel. Here is an example of how to calculate capital expenditures, as it applies to financial modeling in Excel: Image: CFI''s Financial Modeling Courses. In the above example, let''s look at CapEx in 2018 and the following information: 2018 Depreciation is \\$15,005 on the income statement\n\n##### How to Calculate the Impact and Probability of Business ...\n\n2018-12-2 · We can use this tool to calculate whether negative outcomes will happen, and if so how destructive the effects could be. This is done using a numbered scoring method and color-coded indicators. Scoring. Our scoring is done when we select a level of Impact (1 to 5), and a level of probability (1 to 5).\n\n##### Appendix 9C: Design Calculations for Electrical Design\n\n2021-1-11 · Chapter 9 Electrical Design Appendix 9C Design Calculations for Electrical Design 2 SPU Design Standards and Guidelines November 2020 3. CALCULATION MATRIX Project calculations serve as formal documentation of the project electrical design.\n\n##### DESIGN AND ANALYSIS OF IMPACT CRUSHERS\n\n2012-7-4 · Chapter 2: Design and Calculation 2.1 Design of V-Belt drive 7 2.2 Design of Shaft 10 2.3 Design of Hammers 14 2.3.1 Using Impact Bending 14 2.3.2 Using Strain Energy method 21 2.4 Design of conveyor belt 22 Chapter 3: A performance model for impact crusher\n\n##### Car Crash Example\n\n2017-2-3 · Example of Force on Car (This initial example is cast in U.S. common units because most U.S. readers can make comparisons to known forces more easily in those terms. The calculation provides the results in SI units as well.) Vary the parameters of …\n\n##### Inflation Formula | Calculator (Example with Excel Template)\n\n2021-8-27 · Inflation Formula – Example #2. Let us take the example of Dylan, who is an economist and wishes to compute the inflation rate in his state. He has created a CPI basket that includes food, cloth, fuel and education and has considered 2010 as the …\n\n##### Risk Assessment Calculation Formula\n\nMatrix Model was first introduced in 1850 by James Joseph Sylvester. Matrix Model was invented in 1857/1858 by Cayley.. Risk assessment is the process of evaluating risks to worker''s safety and health, assets from workplace hazards.\n\n##### DESIGN AND ANALYSIS OF A HORIZONTAL SHAFT …\n\n2011-5-10 · An impact crusher can be further classified as Horizontal impact crusher (HSI) and vertical shaft impact crusher (VSI) based on the type of arrangement of the impact rotor and shaft. 
Horizontal shaft impact crusher These break rock by impacting the rock with hammers/blow bars that are fixed upon the outer edge of a spinning rotor.\n\n##### impact crusher calculation\n\nimpact crusher production calculation. Impact Crushers Design And Calculations 2015-7-14 ensp 0183 ensp Impact crusher in the production process the main parameters of the equipment has an important significance for normal operation generally includes three basic parameters the rotor speed productivity and motor power So for these impact crusher parameter selection and calculation method of ...\n\n##### impact crusher power calculation\n\n2020-9-28 · Impact Crusher Calculations Urlaub im Kroatien. Calculation of impact crusher capacity and power impactor crusher capacity formula aug 29 2016 dear all one of my jaw crushers is the bottleneck of a certain calculations on capacity of a cone crusher calculation model of a cone crusher …\n\n##### Determining Sample Size Page 2\n\n2021-8-13 · to calculate sample sizes. This formula was used to calculate the sample sizes in Tables 2 and 3 and is shown below. A 95% confidence level andP=.5are assumed for Equation 5. Where n is the sample size, N is the population size, and e is the level of precision. When this formula is applied to the above sample, we get Equation 6. Formula For ...\n\n##### Car Crash Example\n\n2016-11-9 · Force on Driver in Example Car Crash. For the car crash scenario where a car stops in 1 foot from a speed of 30 mi/hr, what is the force on the driver? Assume a 160 lb (mass = 5 slugs) driver. If firmly held in non-stretching seatbelt harness: …\n\n##### Math Calculations in Pharmacology: Formulas & Conversions ...\n\nLiter, often seen as L, for volume. Meter, usually written as m, for length. Important conversions between these units that you need to be aware of include: 1 g = 1000 mg. 1 mg = 1000 mcg. 1 L ...\n\n##### How can I calculate sample size for multiple groups and ...\n\n1992-12-1 · Before sending article I want to know about the impact factor of journals. View. Is there a formula to calculate the sample size of a study? Question. 14 answers. Asked 13th Aug, 2018;\n\n##### Sample Size Calculator\n\nThis free sample size calculator determines the sample size required to meet a given set of constraints. Learn more about population standard deviation, or explore other statistical calculators, as well as hundreds of other calculators addressing math, finance, health, …\n\n##### Analysis of Impact Force Equations\n\n2006-7-17 · Analysis of Impact Force Equations Prepared for the International Technical Rescue Symposium, November 2002 By Chuck Weber Abstract This paper compares the actual impact forces measured during controlled testing to the values calculated by a commonly accepted rope force-predicting equation. All tests studied were\n\n##### Sample Size Calculator: Understanding Sample Sizes ...\n\nIf the sample size calculator says you need more respondents, we can help. Tell us about your population, and we''ll find the right people to take your surveys. With millions of qualified respondents, SurveyMonkey Audience makes it easy to get survey …\n\n##### Population Proportion\n\nFormula. This calculator uses the following formula for the sample size n: n = N*X / (X + N – 1), where, X = Z α/22 *p* (1-p) / MOE 2, and Z α/2 is the critical value of the Normal distribution at α/2 (e.g. 
for a confidence level of 95%, α is 0.05 and the critical value is 1.96), MOE is the margin of error, p is the sample proportion ...\n\n##### Sample Size Calculator\n\n2020-6-27 · Sample Size Calculator Terms: Confidence Interval & Confidence Level. The confidence interval (also called margin of error) is the plus-or-minus figure usually reported in newspaper or television opinion poll results. For example, if you use a confidence interval of 4 and 47% percent of your sample picks an answer you can be \"sure\" that if you had asked the question of the entire relevant ...\n\n##### Sample Size Formulas for our Sample Size Calculator ...\n\n2019-9-11 · Sample Size Formulas for our Sample Size Calculator. Here are the formulas used in our Sample Size Calculator: Sample Size . ss = Z 2 * (p) * (1-p) c 2: Where: Z = Z value (e.g. 1.96 for 95% confidence level) p = percentage picking a choice, expressed as decimal (.5 used for sample …\n\n##### Adverse Impact: What is it? How do you calculate it?\n\n2008-7-8 · 3) Calculate impact ratios by dividing the selection rate of each group by that of the highest group 4) Determine if the selection rates are substantially different (i.e., impact ratio < .80) Source: Uniform Guidelines Q&A 12\n\n##### Carbon footprint calculations\n\nThe carbon footprint calculator converts each of these energy, travel, and waste sources from their original unit of measure, for example, kWh or miles, to tons CO 2 per year by applying emission conversion factors. The calculator uses the most current energy, travel, and waste data from the energy log, the travel log, and the waste log.\n\n##### Sample Size (Definition, Formula) | Calculate Sample Size\n\n2021-8-28 · Sample size calculation is important to understand the concept of the appropriate sample size because it is used for the validity of research findings. In case it is too small, it will not yield valid results, while a sample is too large may be a waste of …\n\n##### FORMULAS FROM EPIDEMIOLOGY KEPT SIMPLE (3e) …\n\n2015-6-19 · = (independent samples only; for matched-pairs and tuples data, see text) • Rounding: Basic measures should be reported with 2 or 3 significant digit accuracy. Carry 4 or 5 significant digits to derive a final answer that is accurate to 2 or 3 significant digits, respectively. 3.3 Measures of Potential Impact\n\n##### Crusher Efficiency Calculations\n\n2014-9-15 · The screen area needed under the jaw crusher is 38/1.89 = 20.1 sq ft. For the 1-in. screen below the roll crusher the capacity has no correction factor and the area needed is 32/2.1 = 15.2 sq ft. To handle the output from a 40 x 24 roll crusher the screen will have to be at least 24 in. wide.\n\n##### How to empirically calculate impact sound insulation\n\n2017-3-8 · Adding this value to the formula, we can obtain: L''n, w = Ln, w, eq – ∆Lw + K – Kt, (dB) If you want to obtain the calculation template, tell us what you want to calculate and we can help you with some of our impact solutions. Manuel Taborga How to empirically calculate impact sound insulation …\n\n##### Statistical Significance Calculator\n\nExample of a statistical significance calculation and its steps. 
Let''s test the significance occurrence for two sample sizes (s 1) of 25 and (s 2) of 50 having a percentage of response (r 1) of 5%, respectively (r 2) of 7%: Step 1: Substitute the figures from the above example in the formula of comparative error:\n\n##### How to Calculate Force of Impact | Sciencing\n\n2020-12-6 · Now suppose you want to know the impact force of a 2,200-kilogram car traveling at 20 meters per second that crashes into a wall during a safety test. The stop distance in this example is the crumple zone of the car, or the distance by which the car shortens on impact.\n\n##### Sample size calculation and development of sampling plan\n\nCalculate sample size – single survey using two-stage cluster sampling To estimate sample size, you need to know: Estimate of the prevalence of the key indicator (e.g. rate of stunting) Precision desired (for example: ± 5%) Level of confidence (always use 95%) Expected response rate Population For nutrition surveys: number of eligible individuals per\n\n##### IMPACT\n\nFormula. IBW Estimated ideal body weight in (kg) Males: IBW = 50 kg + 2.3 kg for each inch over 5 feet. Females: IBW = 45.5 kg + 2.3 kg for each inch over 5 feet. ABW Estimated adjusted body weight in (kg) If the actual body weight is greater than 30% of the calculated IBW, calculate the adjusted body weight (ABW): ABW = IBW + 0.4 (actual ..."
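"To make the car-crash passage above concrete (the stopping distance here is my own assumed figure, since the page does not give one): using the work-energy estimate F_avg = m*v^2 / (2*d), a 2,200 kg car at 20 m/s brought to rest over an assumed crumple distance of d = 0.75 m experiences an average impact force of roughly F_avg = 2200 * 20^2 / (2 * 0.75) = 880,000 / 1.5, i.e., about 5.9 * 10^5 N (587 kN)."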
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8777977,"math_prob":0.9760702,"size":9659,"snap":"2022-27-2022-33","text_gpt3_token_len":2334,"char_repetition_ratio":0.13940963,"word_repetition_ratio":0.008383233,"special_character_ratio":0.25913656,"punctuation_ratio":0.11715915,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99099624,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-10T20:25:11Z\",\"WARC-Record-ID\":\"<urn:uuid:c161c355-5d97-408a-bcd5-72d1a691dd02>\",\"Content-Length\":\"19901\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f23b6424-7524-4608-9b7d-edaa5382c0bd>\",\"WARC-Concurrent-To\":\"<urn:uuid:327ec013-ef25-4aeb-b8b0-8c65045c8663>\",\"WARC-IP-Address\":\"104.21.3.64\",\"WARC-Target-URI\":\"https://sttv-depannage-antenne.fr/12760-Jun-2015.html\",\"WARC-Payload-Digest\":\"sha1:4WHHVOZD6IHE52VSGF32KPYPJUMQMP5I\",\"WARC-Block-Digest\":\"sha1:SCKJFDSI4MKUUUF65M4KKUTKY445H57Q\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882571210.98_warc_CC-MAIN-20220810191850-20220810221850-00430.warc.gz\"}"} |
https://www.semanticscholar.org/paper/The-Dehn-functions-of-Out(F_n)-and-Aut(F_n)-Bridson-Vogtmann/4e4ef55a13839cd3c5dc9c055c0afcdb4cb3b63f | [
"# The Dehn functions of Out(F_n) and Aut(F_n)\n\n@article{Bridson2010TheDF,\ntitle={The Dehn functions of Out(F\\_n) and Aut(F\\_n)},\nauthor={Martin R. Bridson and Karen Vogtmann},\njournal={arXiv: Group Theory},\nyear={2010}\n}\n• Published 5 November 2010\n• Mathematics\n• arXiv: Group Theory\nFor n > 2, the Dehn functions of Aut(F_n) and Out(F_n) are exponential. Hatcher and Vogtmann proved that they are at most exponential, and the complementary lower bound in the case n=3 was established by Bridson and Vogtmann. Handel and Mosher completed the proof by reducing the lower bound for n>4 to the case n=3. In this note we give a shorter, more direct proof of this last reduction.\n13 Citations\nLipschitz retraction and distortion for subgroups of Out(Fn)\n• Mathematics\n• 2010\nGiven a free factor A of the rank n free group Fn , we characterize when the subgroup of Out.Fn/ that stabilizes the conjugacy class of A is distorted in Out.Fn/. We also prove that the image of the\nThe Geometry of the Handlebody Groups II: Dehn Functions\n• Mathematics\n• 2018\nWe show that the Dehn function of the handlebody group is exponential in any genus $g\\geq 3$. On the other hand, we show that the handlebody group of genus $2$ is cubical, biautomatic, and therefore\nThe topology and geometry of automorphism groups of free groups\nIn the 1970s Stallings showed that one could learn a great deal about free groups and their automorphisms by viewing the free groups as fundamental groups of graphs and modeling their automorphisms\nTrain Tracks on Graphs of Groups and Outer Automorphisms of Hyperbolic Groups\nStallings remarked that an outer automorphism of a free group may be thought of as a subdivision of a graph followed by a sequence of folds. In this thesis, we prove that automorphisms of fundamental\nSubspace arrangements, BNS invariants, and pure symmetric outer automorphisms of right-angled Artin groups\n• Mathematics\n• 2015\nWe introduce a homology theory for subspace arrangements, and use it to extract a new system of numerical invariants from the Bieri-Neumann-Strebel invariant of a group. We use these to characterize\nSpotted disk and sphere graphs I\nThe disk graph of a handlebody H of genus g ≥ 2 with m ≥ 0 marked points on the boundary is the graphwhose vertices are isotopy classes of disks disjoint from themarked points andwhere two vertices\nHierarchically hyperbolic spaces II: Combination theorems and the distance formula\n• Mathematics\nPacific Journal of Mathematics\n• 2019\nWe introduce a number of tools for finding and studying \\emph{hierarchically hyperbolic spaces (HHS)}, a rich class of spaces including mapping class groups of surfaces, Teichmuller space with either\nLocal topological properties of asymptotic cones of groups\n• Mathematics\n• 2014\nWe define a local analogue to Gromov’s loop division property which we use to give a sufficient condition for an asymptotic cone of a complete geodesic metric space to have uncountable fundamental\nIsoperimetric inequalities for the handlebody groups\n• Mathematics\n• 2011\nWe show that the mapping class group of a handlebody of genus at least 2 has a Dehn function of at most exponential growth type.\nGeometry of graphs of discs in a handlebody\nFor a handlebody H of genus g>1 we use surgery to identify a graph whose vertices are discs and which is quasi-isometrically embedded in the curve graph of the boundary surface."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8556628,"math_prob":0.8537519,"size":5949,"snap":"2022-27-2022-33","text_gpt3_token_len":1427,"char_repetition_ratio":0.16703112,"word_repetition_ratio":0.10341261,"special_character_ratio":0.21902841,"punctuation_ratio":0.058195926,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9787614,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-06T04:19:41Z\",\"WARC-Record-ID\":\"<urn:uuid:2a608a47-fd08-40a5-a070-326743c5a104>\",\"Content-Length\":\"346213\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:54ad1c60-f41d-479c-8971-6a4071b183e5>\",\"WARC-Concurrent-To\":\"<urn:uuid:5f19343b-c5fa-4c1b-b323-b675bf812b65>\",\"WARC-IP-Address\":\"18.67.65.61\",\"WARC-Target-URI\":\"https://www.semanticscholar.org/paper/The-Dehn-functions-of-Out(F_n)-and-Aut(F_n)-Bridson-Vogtmann/4e4ef55a13839cd3c5dc9c055c0afcdb4cb3b63f\",\"WARC-Payload-Digest\":\"sha1:W7R22KHWI2JC3ZKLPTLOH7Q4IDP3JSUY\",\"WARC-Block-Digest\":\"sha1:J6RMGODVK5DGLGQ7MRD3SBF2N6SVWLBO\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104660626.98_warc_CC-MAIN-20220706030209-20220706060209-00586.warc.gz\"}"} |
http://ying.ninja/?cat=3 | [
"# Count Univalue Subtrees\n\nCount Univalue Subtrees\n\nGiven a binary tree, count the number of uni-value subtrees.\n\nA Uni-value subtree means all nodes of the subtree have the same value.\n\nFor example:\nGiven binary tree,\n\n``` 5\n/ \\\n1 5\n/ \\ \\\n5 5 5\n```\n\nreturn `4`.\n\nSolution:\n\n```public int countUnivalSubtrees(TreeNode root) {\nint[] count = new int;\nisUnivalSubtrees(root, count);\nreturn count;\n}\n\npublic boolean isUnivalSubtrees(TreeNode root, int[] count) {\nif (root == null) {\nreturn false;\n}\nboolean left = isUnivalSubtrees(root.left, count);\nboolean right = isUnivalSubtrees(root.right, count);\nif (!left && !right) {\nif (root.left == null && root.right == null) {\ncount++;\nreturn true;\n}\n} else if (left && right) {\nif (root.left.val == root.val && root.right.val == root.val) {\ncount++;\nreturn true;\n}\n} else if (left && !right) {\nif (root.right == null && root.left.val == root.val) {\ncount++;\nreturn true;\n}\n} else if (!left && right) {\nif (root.left == null && root.right.val == root.val) {\ncount++;\nreturn true;\n}\n}\nreturn false;\n}```\n\n# Wildcard Matching\n\nImplement wildcard pattern matching with support for `'?'` and `'*'`.\n\n• `'?'` Matches any single character.\n• `'*'` Matches any sequence of characters (including the empty sequence).\n\nThe matching should cover the entire input string (not partial).\n\nExample\n```<code>isMatch(\"aa\",\"a\") → false\nisMatch(\"aa\",\"aa\") → true\nisMatch(\"aaa\",\"aa\") → false\nisMatch(\"aa\", \"*\") → true\nisMatch(\"aa\", \"a*\") → true\nisMatch(\"ab\", \"?*\") → true\nisMatch(\"aab\", \"c*a*b\") → false</code>```\n\nSolution1. Dynamic Programming\n\nisMatch[i][j] = if first i chars for s can match first j chars of p\n\nj==’*’: OR(isMatch[0~i][j-1])\n\nj==’?’: isMatch[i-1][j-1]\n\nelse: isMatch[i-1][j-1] && s.charAt(i-1)==p.charAt(j-1)\n\n```public boolean isMatch(String s, String p) {\n//dp version\nif (s == null || p == null) {\nreturn false;\n}\nboolean[][] isMatch = new boolean[s.length() + 1][p.length() + 1];\nisMatch = true;\nfor (int i = 1; i <= s.length(); i++) {\nisMatch[i] = false;\n}\nfor (int j = 1; j <= p.length(); j++) {\nisMatch[j] = isMatch[j - 1] && p.charAt(j - 1) == '*';\n}\nfor (int i = 1; i <= s.length(); i++) {\nfor (int j = 1; j <= p.length(); j++) {\nif (p.charAt(j - 1) == '*') {\nfor (int k = 0; k <= i; k++) {\nif (isMatch[k][j - 1]) {\nisMatch[i][j] = true;\nbreak;\n}\n}\n} else if (p.charAt(j - 1) == '?') {\nisMatch[i][j] = isMatch[i - 1][j - 1];\n} else {\nisMatch[i][j] = isMatch[i - 1][j - 1] && s.charAt(i - 1) == p.charAt(j - 1);\n}\n}\n}\nreturn isMatch[s.length()][p.length()];\n}```\n\n# Leetcode: Set Matrix Zeroes\n\nGiven a m x n matrix, if an element is 0, set its entire row and column to 0. Do it in place.\n\nFollow up:Did you use extra space?\nA straight forward solution using O(mn) space is probably a bad idea.\nA simple improvement uses O(m + n) space, but still not the best solution.\nCould you devise a constant space solution?1. O(m+n) space solutionAnalysis:\nTo achieve O(m+n) space, you need two array to store information while traversal the matrix for the first time checking if it is zero or not. You can also combine two arrays into one, but still cost O(m+n) space. 
So I just use two arrays to store the row and column information.\n```public class Solution {\npublic void setZeroes(int[][] matrix) {\nif (matrix.length == 0 || matrix[0].length == 0) {\nreturn;\n}\nboolean[] zeroRows = new boolean[matrix.length];\nboolean[] zeroCols = new boolean[matrix[0].length];\n//traverse the entire matrix to check which cells are zero\nfor (int i = 0; i < matrix.length; i++) {\nfor (int j = 0; j < matrix[i].length; j++) {\nif (matrix[i][j] == 0) {\nzeroRows[i] = true;\nzeroCols[j] = true;\n}\n}\n}\n//now set matrix cells to zero based on the two boolean arrays\nfor (int i = 0; i < matrix.length; i++) {\nfor (int j = 0; j < matrix[i].length; j++) {\nif (zeroRows[i] || zeroCols[j]) {\nmatrix[i][j] = 0;\n}\n}\n}\n}\n}```\n\n2. *Optimal* O(1) space solution\n\nAnalysis:\nTo save space, we can just make use of the space inside the matrix itself; that is, we use the first row and first column to store the information.\nIn the first traversal, we set the corresponding cell in the first row or first column to 0 if there is any zero cell in that row or column.\nIn the second traversal, we set the matrix to zeros based on the first row and first column.\n
where dp[i][j] = grid[i][j] + min( dp[i-1][j], dp[i][j-1] ); To prevent same cell being visited and calculated multiple times, we have a resultMap 2d array to store the result. We also need to check the edge carefully to ensure the index is not out of bounds of the given array.\n\n2. Iterative Solution (O(m*n) space)\n\n```public class Solution {\npublic int minPathSum(int[][] grid) {\nif(grid.length==0||grid.length==0){\nreturn 0;\n}\n//2D map -- O(m*n) space\nint[][] resultMap = new int[grid.length][grid.length];\nresultMap = grid;\n//Calculate the first row\nfor(int i = 1;i<grid.length;i++){\nresultMap[i] = grid[i] + resultMap[i-1];\n}\n//Calculate the first col\nfor(int i = 1;i<grid.length;i++){\nresultMap[i] = grid[i] + resultMap[i-1];\n}\n//Calculate the rest\nfor(int i=1;i<grid.length;i++){\nfor(int j=1;j<grid.length;j++){\nresultMap[i][j] = grid[i][j]+Math.min(resultMap[i-1][j],resultMap[i][j-1]);\n}\n}\nreturn resultMap[grid.length-1][grid.length-1];\n}\n}```\n\nNote: First calculate first row and first col which is there own grid value, then calculate other cells based on it.\n\n3. *Optimal* Iterative Solution (O(n) space)\n\n```public class Solution {\npublic int minPathSum(int[][] grid) {\nif (grid.length == 0 || grid.length == 0) {\nreturn 0;\n}\nint rows = grid.length;\nint cols = grid.length;\n\n//1D map -- O(n) space, n is # of cols\nint[] resultMap = new int[cols];\nresultMap = grid;\n\n//Calculate the first row\nfor (int i = 1; i < grid.length; i++) {\nresultMap[i] = grid[i] + resultMap[i - 1];\n}\n\nfor (int i = 1; i < rows; i++) {\nresultMap = resultMap + grid[i];\nfor (int j = 1; j < cols; j++) {\nresultMap[j] = grid[i][j] + Math.min(resultMap[j], resultMap[j - 1]);\n}\n}\nreturn resultMap[cols - 1];\n}\n}```\n\nNote: You don’t necessarily need a 2d map to store which cost O(n) space where n is the # of columns. You can just use one row to stored the result of previous row, and replace the value with the current row from left to right one by one. One thing worth mentioning is this code is more space efficient though, it has lower readability.\n\nMore information and further thoughts for minimum path sum questions can be found here.\n\nhttp://www.cnblogs.com/hiddenfox/p/3408931.html\n\nhttp://www.cnblogs.com/wuyuegb2312/p/3183214.html\n\n# Leetcode: Best Time to Buy and Sell Stock\n\nSay you have an array for which the ith element is the price of a given stock on day i.\n\nIf you were only permitted to complete at most one transaction (ie, buy one and sell one share of the stock), design an algorithm to find the maximum profit.\n\nSolution: One Pass O(n) time.\n\nFor each day i,\n\n1. think of if sell it at day i, compare the profit with maxProfit.\n\n2. check if day i is a better day than any previous day to buy a stock, meaning the price is cheaper than any previous day, then we buy the stock at day i.\n\n```public int maxProfit(int[] prices) {\nif (prices == null || prices.length <= 1) {\nreturn 0;\n}\nint maxProfit = 0;\nint sum = 0;\nfor (int i = 1; i < prices.length; i++) {\nmaxProfit = Math.max(maxProfit, prices[i] - prices[buyDate]);\n}\n}\nreturn maxProfit;\n}```\n\n# Leetcode: Search a 2D Matrix\n\nWrite an efficient algorithm that searches for a value in an m x n matrix. 
This matrix has the following properties:\n\n• Integers in each row are sorted from left to right.\n• The first integer of each row is greater than the last integer of the previous row.\n\nFor example,\n\nConsider the following matrix:\n\n```[\n[1, 3, 5, 7],\n[10, 11, 16, 20],\n[23, 30, 34, 50]\n]\n```\n\nGiven target = `3`, return `true`.\n\n```public class Solution {\npublic boolean searchMatrix(int[][] matrix, int target) {\nif (matrix.length == 0 || matrix.length == 0) {\nreturn false;\n}\nint m = matrix.length;\nint n = matrix.length;\nint start = 0;\nint end = m * n - 1;\nwhile (start <= end) {\nint mid = (start + end) / 2;\nint midVal = matrix[mid / n][mid % n];\nif (midVal == target) {\nreturn true;\n} else if (midVal > target) {\nend = mid - 1;\n} else {\nstart = mid + 1;\n}\n}\nreturn false;\n}\n}```\n\nNote: O(log(m*n))\n\n# Leetcode: Binary Tree Level Order Traversal II\n\nGiven a binary tree, return the bottom-up level order traversal of its nodes’ values. (ie, from left to right, level by level from leaf to root).\n\nFor example:\nGiven binary tree `{3,9,20,#,#,15,7}`,\n\n``` 3\n/ \\\n9 20\n/ \\\n15 7\n```\n\nreturn its bottom-up level order traversal as:\n\n```[\n[15,7],\n[9,20],\n\n]\n```\n\nconfused what `\"{1,#,2,3}\"` means? > read more on how binary tree is serialized on OJ.\n\n```public class Solution {\npublic List<List<Integer>> levelOrderBottom(TreeNode root) {\nif (root == null) {\nreturn result;\n}\nwhile (currLevel.size() > 0) {\nwhile (currLevel.size() > 0) {\nTreeNode runner = currLevel.remove(0);\nif (runner.left != null) {\n}\nif (runner.right != null) {\n}\n}\ncurrLevel = nextLevel;\n}\nreturn result;\n}\n}```\n\n# Leetcode: Container With Most Water\n\nGiven n non-negative integers a1, a2, …, an, where each represents a point at coordinate (i, ai). n vertical lines are drawn such that the two endpoints of line i is at (i, ai) and (i, 0). Find two lines, which together with x-axis forms a container, such that the container contains the most water.\n\nNote: You may not slant the container.\n\n``` public int maxArea(int[] height) {\nint maxArea = 0;\nint left = 0;\nint right = height.length - 1;\nwhile (left < right) {\nmaxArea = Math.max(maxArea, calculateArea(height, left, right));\nif (height[left] < height[right]) {\nleft++;\n} else {\nright--;\n}\n}\nreturn maxArea;\n}\n\npublic int calculateArea(int[] height, int left, int right) {\nreturn Math.abs(right - left) * Math.min(height[left], height[right]);\n}```\n\nNote:\n\nFor any container, its volume depends on the shortest board.\n\nTwo-pointer scan. And always move with shorter board index.\n\n# Leetcode: Rotate Image\n\nYou are given an n x n 2D matrix representing an image.\n\nRotate the image by 90 degrees (clockwise).\n\nCould you do this in-place?\n\n```public class Solution {\npublic void rotate(int[][] matrix) {\nint n = matrix.length;\nif (n <= 1 || matrix.length <= 1 || n != matrix.length) {\nreturn;\n}\nint mid = (n - 1) / 2;\nint offset = 0;\nwhile (offset <= mid) {\nfor (int i = offset; i < n - 1 - offset; i++) {\nint tmp = matrix[offset][i];\nmatrix[offset][i] = matrix[n - 1 - i][offset];\nmatrix[n - 1 - i][offset] = matrix[n - 1 - offset][n - 1 - i];\nmatrix[n - 1 - offset][n - 1 - i] = matrix[i][n - 1 - offset];\nmatrix[i][n - 1 - offset] = tmp;\n}\noffset++;\n}\n\n}\n}```"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.62865525,"math_prob":0.9971627,"size":3063,"snap":"2023-40-2023-50","text_gpt3_token_len":929,"char_repetition_ratio":0.18339327,"word_repetition_ratio":0.10229645,"special_character_ratio":0.3467189,"punctuation_ratio":0.1559633,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99917424,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-09T09:14:21Z\",\"WARC-Record-ID\":\"<urn:uuid:0290a7ab-b777-4c0f-82f7-69eb55f4e6b3>\",\"Content-Length\":\"82472\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:313a1bdb-9f49-46aa-a41b-e33014c7f024>\",\"WARC-Concurrent-To\":\"<urn:uuid:4fcad4ca-e0b1-4583-ba40-d8221352fcb3>\",\"WARC-IP-Address\":\"166.62.77.128\",\"WARC-Target-URI\":\"http://ying.ninja/?cat=3\",\"WARC-Payload-Digest\":\"sha1:ZLKKPGB2USNUU76EOPEXWL4TFOJUX2YP\",\"WARC-Block-Digest\":\"sha1:E6WF2YBYV2VFUZBUEBZDKY22USRSWIOG\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100873.6_warc_CC-MAIN-20231209071722-20231209101722-00054.warc.gz\"}"} |
https://www.nag.com/numeric/nl/nagdoc_26/nagdoc_fl26/html/c06/c06intro.html | [
"# NAG Library Chapter Introduction\n\n## 1Scope of the Chapter\n\nThis chapter is concerned with the following tasks.\n (a) Calculating the discrete Fourier transform of a sequence of real or complex data values. (b) Calculating the discrete convolution or the discrete correlation of two sequences of real or complex data values using discrete Fourier transforms. (c) Calculating the inverse Laplace transform of a user-supplied subroutine. (d) Calculating the fast Gauss transform approximation to the discrete Gauss transform. (e) Direct summation of orthogonal series. (f) Acceleration of convergence of a sequence of real values.\n\n## 2Background to the Problems\n\n### 2.1Discrete Fourier Transforms\n\n#### 2.1.1Complex transforms\n\nMost of the routines in this chapter calculate the finite discrete Fourier transform (DFT) of a sequence of $n$ complex numbers ${z}_{\\mathit{j}}$, for $\\mathit{j}=0,1,\\dots ,n-1$. The direct transform is defined by\n $z^k = 1n ∑ j=0 n-1 zj exp -i 2πjk n$ (1)\nfor $k=0,1,\\dots ,n-1$. Note that equation (1) makes sense for all integral $k$ and with this extension ${\\stackrel{^}{z}}_{k}$ is periodic with period $n$, i.e., ${\\stackrel{^}{z}}_{k}={\\stackrel{^}{z}}_{k±n}$, and in particular ${\\stackrel{^}{z}}_{-k}={\\stackrel{^}{z}}_{n-k}$. Note also that the scale-factor of $\\frac{1}{\\sqrt{n}}$ may be omitted in the definition of the DFT, and replaced by $\\frac{1}{n}$ in the definition of the inverse.\nIf we write ${z}_{j}={x}_{j}+i{y}_{j}$ and ${\\stackrel{^}{z}}_{k}={a}_{k}+i{b}_{k}$, then the definition of ${\\stackrel{^}{z}}_{k}$ may be written in terms of sines and cosines as\n $ak = 1n ∑ j=0 n-1 xj cos 2πjk n + yj sin 2πjk n$\n $bk = 1n ∑ j= 0 n- 1 yj cos 2πjk n - xj sin 2πjk n .$\nThe original data values ${z}_{j}$ may conversely be recovered from the transform ${\\stackrel{^}{z}}_{k}$ by an inverse discrete Fourier transform:\n $zj = 1n ∑ k=0 n-1 z^k exp +i 2πjk n$ (2)\nfor $j=0,1,\\dots ,n-1$. If we take the complex conjugate of (2), we find that the sequence ${\\stackrel{-}{z}}_{j}$ is the DFT of the sequence ${\\stackrel{-}{\\stackrel{^}{z}}}_{k}$. Hence the inverse DFT of the sequence ${\\stackrel{^}{z}}_{k}$ may be obtained by taking the complex conjugates of the ${\\stackrel{^}{z}}_{k}$; performing a DFT, and taking the complex conjugates of the result. (Note that the terms forward transform and backward transform are also used to mean the direct and inverse transforms respectively.)\nThe definition (1) of a one-dimensional transform can easily be extended to multidimensional transforms. For example, in two dimensions we have\n $z^ k1 k2 = 1 n1 n2 ∑ j1=0 n1-1 ∑ j2=0 n2-1 z j1 j2 exp -i 2 π j1 k1 n1 exp -i 2 π j2 k2 n2 .$ (3)\nNote: definitions of the discrete Fourier transform vary. Sometimes (2) is used as the definition of the DFT, and (1) as the definition of the inverse.\n\n#### 2.1.2Real transforms\n\nIf the original sequence is purely real valued, i.e., ${z}_{j}={x}_{j}$, then\n $z^k = ak + i bk = 1n ∑ j=0 n-1 xj exp -i 2πjk n$\nand ${\\stackrel{^}{z}}_{n-k}$ is the complex conjugate of ${\\stackrel{^}{z}}_{k}$. Thus the DFT of a real sequence is a particular type of complex sequence, called a Hermitian sequence, or half-complex or conjugate symmetric, with the properties\n $a n-k = ak b n-k = -bk b0 = 0$\nand, if $n$ is even, ${b}_{n/2}=0$.\nThus a Hermitian sequence of $n$ complex data values can be represented by only $n$, rather than $2n$, independent real values. 
This can obviously lead to economies in storage, with two schemes being used in this chapter. In the first (deprecated) scheme, which will be referred to as the real storage format for Hermitian sequences, the real parts ${a}_{k}$ for $0\\le k\\le n/2$ are stored in normal order in the first $n/2+1$ locations of an array x of length $n$; the corresponding nonzero imaginary parts are stored in reverse order in the remaining locations of x. To clarify, if x is declared with bounds $\\left(0:n-1\\right)$ in your calling subroutine, the following two tables illustrate the storage of the real and imaginary parts of ${\\stackrel{^}{z}}_{k}$ for the two cases: $n$ even and $n$ odd.\nIf $n$ is even then the sequence has two purely real elements and is stored as follows:\n Index of x 0 1 2 $\\dots$ $n/2$ $\\dots$ $n-2$ $n-1$ Sequence ${a}_{0}$ ${a}_{1}+{ib}_{1}$ ${a}_{2}+{ib}_{2}$ $\\dots$ ${a}_{n/2}$ $\\dots$ ${a}_{2}-{ib}_{2}$ ${a}_{1}-{ib}_{1}$ Stored values ${a}_{0}$ ${a}_{1}$ ${a}_{2}$ $\\dots$ ${a}_{n/2}$ $\\dots$ ${b}_{2}$ ${b}_{1}$\n $xk = ak , for k= 0, 1, …, n/2 , and xn-k = bk , for k= 1, 2, …, n/2-1 .$\nIf $n$ is odd then the sequence has one purely real element and, letting $n=2s+1$, is stored as follows:\n Index of x 0 1 2 $\\dots$ $s$ $s+1$ $\\dots$ $n-2$ $n-1$ Sequence ${a}_{0}$ ${a}_{1}+{ib}_{1}$ ${a}_{2}+{ib}_{2}$ $\\dots$ ${a}_{s}+i{b}_{s}$ ${a}_{s}-i{b}_{s}$ $\\dots$ ${a}_{2}-{ib}_{2}$ ${a}_{1}-{ib}_{1}$ Stored values ${a}_{0}$ ${a}_{1}$ ${a}_{2}$ $\\dots$ ${a}_{s}$ ${b}_{s}$ $\\dots$ ${b}_{2}$ ${b}_{1}$\n $xk = ak , for k= 0, 1, …, s , and xn-k = bk , for k= 1, 2, …, s .$\nThe second (recommended) storage scheme, referred to in this chapter as the complex storage format for Hermitian sequences, stores the real and imaginary parts ${a}_{k},{b}_{k}$, for $0\\le k\\le n/2$, in consecutive locations of an array x of length $n+2$. If x is declared with bounds $\\left(0:n+1\\right)$ in your calling subroutine, the following two tables illustrate the storage of the real and imaginary parts of ${\\stackrel{^}{z}}_{k}$ for the two cases: $n$ even and $n$ odd.\nIf $n$ is even then the sequence has two purely real elements and is stored as follows:\n Index of x 0 1 2 3 $\\dots$ $n-2$ $n-1$ $n$ $n+1$ Stored values ${a}_{0}$ ${b}_{0}=0$ ${a}_{1}$ ${b}_{1}$ $\\dots$ ${a}_{n/2-1}$ ${b}_{n/2-1}$ ${a}_{n/2}$ ${b}_{n/2}=0$\n $x2×k = ak , for k= 0, 1, …, n/2 , and x2×k+1 = bk , for k= 0, 1, …, n/2 .$\nIf $n$ is odd then the sequence has one purely real element and, letting $n=2s+1$, is stored as follows:\n Index of x 0 1 2 3 $\\dots$ $n-2$ $n-1$ $n$ $n+1$ Stored values ${a}_{0}$ ${b}_{0}=0$ ${a}_{1}$ ${b}_{1}$ $\\dots$ ${b}_{s-1}$ ${a}_{s}$ ${b}_{s}$ $0$\n $x2×k = ak , for k= 0, 1, …, s , and x2×k+1 = bk , for k= 0, 1, …, s .$\nAlso, given a Hermitian sequence, the inverse (or backward) discrete transform produces a real sequence. That is,\n $xj = 1n a0 + 2 ∑ k=1 n/2-1 ak cos 2πjk n - bk sin 2πjk n + a n/2$\nwhere ${a}_{n/2}=0$ if $n$ is odd.\nFor real data that is two-dimensional or higher, the symmetry in the transform persists for the leading dimension only. So, using the notation of equation (3) for the complex two-dimensional discrete transform, we have that ${\\stackrel{^}{z}}_{{k}_{1}{k}_{2}}$ is the complex conjugate of ${\\stackrel{^}{z}}_{\\left({n}_{1}-{k}_{1}\\right)\\left({n}_{2}-{k}_{2}\\right)}$. 
It is more convenient for transformed data of two or more dimensions to be stored as a complex sequence of length $\\left({n}_{1}/2+1\\right)×{n}_{2}×\\cdots ×{n}_{d}$ where $d$ is the number of dimensions. The inverse discrete Fourier transform operating on such a complex sequence (Hermitian in the leading dimension) returns a real array of full dimension (${n}_{1}×{n}_{2}×\\cdots ×{n}_{d}$).\n\n#### 2.1.3Real symmetric transforms\n\nIn many applications the sequence ${x}_{j}$ will not only be real, but may also possess additional symmetries which we may exploit to reduce further the computing time and storage requirements. For example, if the sequence ${x}_{j}$ is odd, $\\left({x}_{j}={-x}_{n-j}\\right)$, then the discrete Fourier transform of ${x}_{j}$ contains only sine terms. Rather than compute the transform of an odd sequence, we define the sine transform of a real sequence by\n $x^k = 2n ∑j=1 n-1 xj sin πjkn ,$\nwhich could have been computed using the Fourier transform of a real odd sequence of length $2n$. In this case the ${x}_{j}$ are arbitrary, and the symmetry only becomes apparent when the sequence is extended. Similarly we define the cosine transform of a real sequence by\n $x^k = 2n 12 x0 + ∑ j=1 n-1 xj cos πjkn + 12 -1k xn$\nwhich could have been computed using the Fourier transform of a real even sequence of length $2n$.\nIn addition to these ‘half-wave’ symmetries described above, sequences arise in practice with ‘quarter-wave’ symmetries. We define the quarter-wave sine transform by\n $x^k = 1n ∑ j=1 n-1 xj sin π j 2k-1 2n + 12 -1 k-1 xn$\nwhich could have been computed using the Fourier transform of a real sequence of length $4n$ of the form\n $0,x1,…,xn,xn-1 ,…,x1,0,-x1,…,-xn, -x n-1 ,…, -x 1 .$\nSimilarly we may define the quarter-wave cosine transform by\n $x^k = 1n 12 x0 + ∑j= 1 n- 1 xj cos π j 2k- 1 2n$\nwhich could have been computed using the Fourier transform of a real sequence of length $4n$ of the form\n $x0,x1,…, x n-1 ,0, -x n-1 ,…,-x0,-x1,…, -x n-1 ,0, x n-1 ,…,x1 .$\n\n#### 2.1.4Fourier integral transforms\n\nThe usual application of the discrete Fourier transform is that of obtaining an approximation of the Fourier integral transform\n $F s = ∫ -∞ ∞ f t exp -i 2 π s t dt$\nwhen $f\\left(t\\right)$ is negligible outside some region $\\left(0,c\\right)$. 
Dividing the region into $n$ equal intervals we have\n $F s ≅ cn ∑ j=0 n-1 fj exp -i 2 π s j c n$\nand so\n $Fk ≅ cn ∑ j= 0 n- 1 fj exp -i 2 π jk n$\nfor $k=0,1,\\dots ,n-1$, where ${f}_{j}=f\\left(jc/n\\right)$ and ${F}_{k}=F\\left(k/c\\right)$.\nHence the discrete Fourier transform gives an approximation to the Fourier integral transform in the region $s=0$ to $s=n/c$.\nIf the function $f\\left(t\\right)$ is defined over some more general interval $\\left(a,b\\right)$, then the integral transform can still be approximated by the discrete transform provided a shift is applied to move the point $a$ to the origin.\n\n#### 2.1.5Convolutions and correlations\n\nOne of the most important applications of the discrete Fourier transform is to the computation of the discrete convolution or correlation of two vectors $x$ and $y$ defined (as in Brigham (1974)) by\n• convolution: ${z}_{k}=\\sum _{j=0}^{n-1}{x}_{j}{y}_{k-j}$\n• correlation: ${w}_{k}=\\sum _{j=0}^{n-1}{\\stackrel{-}{x}}_{j}{y}_{k+j}$\n(Here $x$ and $y$ are assumed to be periodic with period $n$.)\nUnder certain circumstances (see Brigham (1974)) these can be used as approximations to the convolution or correlation integrals defined by\n $z s = ∫ -∞ ∞ x t y s-t dt$\nand\n $w s = ∫ -∞ ∞ x- t y s+t dt , -∞ < s < ∞ .$\nFor more general advice on the use of Fourier transforms, see Hamming (1962); more detailed information on the fast Fourier transform algorithm can be found in Gentleman and Sande (1966) and Brigham (1974).\n\n#### 2.1.6Applications to solving partial differential equations (PDEs)\n\nA further application of the fast Fourier transform, and in particular of the Fourier transforms of symmetric sequences, is in the solution of elliptic PDEs. If an equation is discretized using finite differences, then it is possible to reduce the problem of solving the resulting large system of linear equations to that of solving a number of tridiagonal systems of linear equations. This is accomplished by uncoupling the equations using Fourier transforms, where the nature of the boundary conditions determines the choice of transforms – see Section 3.3. Full details of the Fourier method for the solution of PDEs may be found in Swarztrauber (1977) and Swarztrauber (1984).\n\n### 2.2Inverse Laplace Transforms\n\nLet $f\\left(t\\right)$ be a real function of $t$, with $f\\left(t\\right)=0$ for $t<0$, and be piecewise continuous and of exponential order $\\alpha$, i.e.,\n $ft ≤ M eαt$\nfor large $t$, where $\\alpha$ is the minimal such exponent.\nThe Laplace transform of $f\\left(t\\right)$ is given by\n $F s = ∫0∞ e-st f t dt , t>0$\nwhere $F\\left(s\\right)$ is defined for $\\mathrm{Re}\\left(s\\right)>\\alpha$.\nThe inverse transform is defined by the Bromwich integral\n $f t = 12πi ∫ a-i∞ a+i∞ est F s ds , t>0 .$\nThe integration is performed along the line $s=a$ in the complex plane, where $a>\\alpha$. This is equivalent to saying that the line $s=a$ lies to the right of all singularities of $F\\left(s\\right)$. For this reason, the value of $\\alpha$ is crucial to the correct evaluation of the inverse. It is not essential to know $\\alpha$ exactly, but an upper bound must be known.\nThe problem of determining an inverse Laplace transform may be classified according to whether (a) $F\\left(s\\right)$ is known for real values only, or (b) $F\\left(s\\right)$ is known in functional form and can therefore be calculated for complex values of $s$. Problem (a) is very ill-defined and no routines are provided. 
Two methods are provided for problem (b).\n\n### 2.3 Fast Gauss Transform\n\nGauss transforms have applications in areas including statistics, machine learning, and numerical solution of the heat equation. The discrete Gauss transform (DGT), $G\\left(y\\right)$, evaluated at a set of target points $y\\left(j\\right)$, for $j=1,2,\\dots ,m\\in {ℝ}^{d}$, is defined as:\n $G\\left({y}_{j}\\right)=\\sum _{i=1}^{n}{q}_{i}{e}^{-{‖{y}_{j}-{x}_{i}‖}_{2}^{2}/{h}_{i}^{2}},\\quad j=1,\\dots ,m$\nwhere ${x}_{i}$, for $i=1,2,\\dots ,n\\in {ℝ}^{d}$, are the Gaussian source points, ${q}_{i}$, for $i=1,2,\\dots ,n\\in {ℝ}^{+}$, are the source weights and ${h}_{i}$, for $i=1,2,\\dots ,n\\in {ℝ}^{+}$, are the source standard deviations (alternatively source scales or source bandwidths).\nThe fast Gauss transform (FGT) algorithm presented in Raykar and Duraiswami (2005) approximates the DGT by using two Taylor series and clustering of the source points.\n\n### 2.4 Direct Summation of Orthogonal Series\n\nFor any series of functions ${\\varphi }_{i}$ which satisfy a recurrence\n ${\\varphi }_{r+1}\\left(x\\right)+{\\alpha }_{r}\\left(x\\right){\\varphi }_{r}\\left(x\\right)+{\\beta }_{r}\\left(x\\right){\\varphi }_{r-1}\\left(x\\right)=0$\nthe sum\n $\\sum _{r=0}^{n}{a}_{r}{\\varphi }_{r}\\left(x\\right)$\nis given by\n $\\sum _{r=0}^{n}{a}_{r}{\\varphi }_{r}\\left(x\\right)={b}_{0}\\left(x\\right){\\varphi }_{0}\\left(x\\right)+{b}_{1}\\left(x\\right)\\left({\\varphi }_{1}\\left(x\\right)+{\\alpha }_{0}\\left(x\\right){\\varphi }_{0}\\left(x\\right)\\right)$\nwhere\n ${b}_{r}\\left(x\\right)+{\\alpha }_{r}\\left(x\\right){b}_{r+1}\\left(x\\right)+{\\beta }_{r+1}\\left(x\\right){b}_{r+2}\\left(x\\right)={a}_{r},\\quad {b}_{n+1}\\left(x\\right)={b}_{n+2}\\left(x\\right)=0.$\nThis may be used to compute the sum of the series. For further reading, see Hamming (1962).\n\n### 2.5 Acceleration of Convergence\n\nThis device has applications in a large number of fields, such as summation of series, calculation of integrals with oscillatory integrands (including, for example, Hankel transforms), and root-finding. The mathematical description is as follows. Given a sequence of values $\\left\\{{s}_{n}\\right\\}$, for $\\mathit{n}=m,\\dots ,m+2l$, then, except in certain singular cases, arguments $a$, ${b}_{i}$, ${c}_{i}$ may be determined such that\n ${s}_{n}=a+\\sum _{i=1}^{l}{b}_{i}{c}_{i}^{n}.$\nIf the sequence $\\left\\{{s}_{n}\\right\\}$ converges, then $a$ may be taken as an estimate of the limit. The method will also find a pseudo-limit of certain divergent sequences – see Shanks (1955) for details.\nTo use the method to sum a series, the terms ${s}_{n}$ of the sequence should be the partial sums of the series, e.g., ${s}_{n}=\\sum _{k=1}^{n}{t}_{k}$, where ${t}_{k}$ is the $k$th term of the series. The algorithm can also be used to some advantage to evaluate integrals with oscillatory integrands; one approach is to write the integral (in this case over a semi-infinite interval) as\n $\\int _{0}^{\\infty }f\\left(x\\right)dx=\\int _{0}^{{a}_{1}}f\\left(x\\right)dx+\\int _{{a}_{1}}^{{a}_{2}}f\\left(x\\right)dx+\\int _{{a}_{2}}^{{a}_{3}}f\\left(x\\right)dx+\\cdots$\nand to consider the sequence of values\n ${s}_{1}=\\int _{0}^{{a}_{1}}f\\left(x\\right)dx,\\quad {s}_{2}=\\int _{0}^{{a}_{2}}f\\left(x\\right)dx={s}_{1}+\\int _{{a}_{1}}^{{a}_{2}}f\\left(x\\right)dx,\\quad \\text{etc.,}$\nwhere the integrals are evaluated using standard quadrature methods. In choosing the values of the ${a}_{k}$, it is worth bearing in mind that c06baf converges much more rapidly for sequences whose values oscillate about a limit. The ${a}_{k}$ should thus be chosen to be (close to) the zeros of $f\\left(x\\right)$, so that successive contributions to the integral are of opposite sign. As an example, consider the case where $f\\left(x\\right)=M\\left(x\\right)\\mathrm{sin}x$ and $M\\left(x\\right)>0$: convergence will be much improved if ${a}_{k}=k\\pi$ rather than ${a}_{k}=2k\\pi$.\n\n## 3 Recommendations on Choice and Use of Available Routines\n\nThe fast Fourier transform algorithm ceases to be ‘fast’ if applied to values of $n$ which cannot be expressed as a product of small prime factors.
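As a small practical aside, the following utility sketch (plain Python, not part of the Library) tests whether a transform length has only 2, 3 and 5 as prime factors, and finds the next such length; this can be useful when choosing, or padding to, an efficient value of n.

```
# Utility sketch (not a NAG routine): transform lengths whose only prime
# factors are 2, 3 and 5 are the efficient cases discussed here.
def is_5_smooth(n):
    for p in (2, 3, 5):
        while n % p == 0:
            n //= p
    return n == 1

def next_5_smooth(n):
    while not is_5_smooth(n):
        n += 1
    return n

print(is_5_smooth(1000))    # True:  1000 = 2^3 * 5^3
print(is_5_smooth(1001))    # False: 1001 = 7 * 11 * 13
print(next_5_smooth(1001))  # 1024
```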
All the FFT routines in this chapter are particularly efficient if the only prime factors of $n$ are $2$, $3$ or $5$.\n\n### 3.1 One-dimensional Fourier Transforms\n\nThe choice of routine is determined first of all by whether the data values constitute a real, Hermitian or general complex sequence. It is wasteful of time and storage to use an inappropriate routine.\n\n#### 3.1.1 Real and Hermitian data\n\nc06paf transforms a single sequence of real data, in-place, onto a representation of the transformed Hermitian sequence using the complex storage scheme described in Section 2.1.2. c06paf also performs the inverse transform using the representation of Hermitian data and transforming back to a real data sequence.\nAlternatively, the two-dimensional routine c06pvf can be used (on setting the second dimension to 1) to transform a sequence of real data onto an Hermitian sequence whose first half is stored in a separate Complex array. The second half need not be stored since these are the complex conjugate of the first half in reverse order. c06pwf performs the inverse operation, transforming the Hermitian sequence (half-)stored in a Complex array onto a separate real array.\n\n#### 3.1.2 Complex data\n\nc06pcf transforms a single complex sequence in-place; it also performs the inverse transform. c06psf transforms multiple complex sequences, each stored sequentially; it also performs the inverse transform on multiple complex sequences. This routine is designed to perform several transforms in a single call, all with the same value of $n$.\nIf extensive use is to be made of these routines and you are concerned about efficiency, you are advised to conduct your own timing tests.\n\n### 3.2 Half- and Quarter-wave Transforms\n\nFour routines are provided for computing fast Fourier transforms (FFTs) of real symmetric sequences.
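Before turning to the symmetric transforms, here is a brief Python/numpy illustration (not c06paf or c06pvf themselves) of the half-storage convention described in Section 3.1.1: for a real sequence of length n only the first n/2 + 1 complex coefficients need to be kept, the rest being complex conjugates in reverse order.

```
import numpy as np

n = 10
x = np.random.default_rng(1).standard_normal(n)

full = np.fft.fft(x)        # length n, Hermitian-symmetric for real input
half = np.fft.rfft(x)       # length n//2 + 1: the part that is stored

assert np.allclose(half, full[: n // 2 + 1])
# The discarded second half is recoverable by conjugation in reverse order:
assert np.allclose(full[n // 2 + 1 :], np.conj(full[1 : (n + 1) // 2][::-1]))
# Inverse transform back to the real sequence from the half-stored data:
assert np.allclose(np.fft.irfft(half, n), x)
```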
c06ref computes multiple Fourier sine transforms, c06rff computes multiple Fourier cosine transforms, c06rgf computes multiple quarter-wave Fourier sine transforms, and c06rhf computes multiple quarter-wave Fourier cosine transforms.\n\n### 3.3 Application to Elliptic Partial Differential Equations\n\nAs described in Section 2.1.6, Fourier transforms may be used in the solution of elliptic PDEs.\nc06ref may be used to solve equations where the solution is specified along the boundary.\nc06rff may be used to solve equations where the derivative of the solution is specified along the boundary.\nc06rgf may be used to solve equations where the solution is specified on the lower boundary, and the derivative of the solution is specified on the upper boundary.\nc06rhf may be used to solve equations where the derivative of the solution is specified on the lower boundary, and the solution is specified on the upper boundary.\nFor equations with periodic boundary conditions the full-range Fourier transforms computed by c06paf are appropriate.\n\n### 3.4 Multidimensional Fourier Transforms\n\nThe following routines compute multidimensional discrete Fourier transforms of real, Hermitian and complex data stored in complex arrays:\n\n| | real | Hermitian | complex |\n| --- | --- | --- | --- |\n| 2 dimensions | c06pvf | c06pwf | c06puf |\n| 3 dimensions | c06pyf | c06pzf | c06pxf |\n| any number of dimensions | | | c06pjf |\n\nThe Hermitian data, either transformed from or being transformed to real data, is compacted (due to symmetry) along its first dimension when stored in Complex arrays; thus approximately half the full Hermitian data is stored.\nc06puf and c06pxf should be used in preference to c06pjf for two- and three-dimensional transforms, as they are easier to use and are likely to be more efficient.\nThe transform of multidimensional real data is stored as a complex sequence that is Hermitian in its leading dimension. The inverse transform takes such a complex sequence and computes the real transformed sequence. Consequently, separate routines are provided for performing forward and inverse transforms.\nc06pvf performs the forward two-dimensional transform while c06pwf performs the inverse of this transform.\nc06pyf performs the forward three-dimensional transform while c06pzf performs the inverse of this transform.\nThe complex sequences computed by c06pvf and c06pyf contain roughly half of the Fourier coefficients; the remainder can be reconstructed by conjugation of those computed. For example, the Fourier coefficients of the two-dimensional transform ${\\stackrel{^}{z}}_{\\left({n}_{1}-{k}_{1}\\right)\\left({n}_{2}-{k}_{2}\\right)}$ are the complex conjugate of ${\\stackrel{^}{z}}_{{k}_{1}{k}_{2}}$ for ${k}_{1}=0,1,\\dots ,{n}_{1}/2$, and ${k}_{2}=0,1,\\dots ,{n}_{2}-1$.\n\n### 3.5 Convolution and Correlation\n\nc06fkf computes either the discrete convolution or the discrete correlation of two real vectors.\nc06pkf computes either the discrete convolution or the discrete correlation of two complex vectors.\n\n### 3.6 Inverse Laplace Transforms\n\nTwo methods are provided: Weeks' method (c06lbf) and Crump's method (c06laf). Both require the function $F\\left(s\\right)$ to be evaluated for complex values of $s$. If in doubt which method to use, try Weeks' method (c06lbf) first; when it is suitable, it is usually much faster.\nTypically the inversion of a Laplace transform becomes harder as $t$ increases so that all numerical methods tend to have a limit on the range of $t$ for which the inverse $f\\left(t\\right)$ can be computed.
c06laf is useful for small and moderate values of $t$.\nIt is often convenient or necessary to scale a problem so that $\\alpha$ is close to $0$. For this purpose it is useful to remember that the inverse of $F\\left(s+k\\right)$ is $\\mathrm{exp}\\left(-kt\\right)f\\left(t\\right)$. The method used by c06laf is not so satisfactory when $f\\left(t\\right)$ is close to zero, in which case a term may be added to $F\\left(s\\right)$, e.g., $k/s+F\\left(s\\right)$ has the inverse $k+f\\left(t\\right)$.\nSingularities in the inverse function $f\\left(t\\right)$ generally cause numerical methods to perform less well. The positions of singularities can often be identified by examination of $F\\left(s\\right)$. If $F\\left(s\\right)$ contains a term of the form $\\mathrm{exp}\\left(-ks\\right)/s$ then a finite discontinuity may be expected in the inverse at $t=k$. c06laf, for example, is capable of estimating a discontinuous inverse but, as the approximation used is continuous, Gibbs' phenomena (overshoots around the discontinuity) result. If possible, such singularities of $F\\left(s\\right)$ should be removed before computing the inverse.\n\n### 3.7 Fast Gauss Transform\n\nThe only routine available is c06saf. If the dimensionality of the data is low or the number of source and target points is small, however, it may be more efficient to evaluate the discrete Gauss transform directly.\n\n### 3.8 Direct Summation of Orthogonal Series\n\nThe only routine available is c06dcf, which sums a finite Chebyshev series\n $\\sum _{j=0}^{n}{c}_{j}{T}_{j}\\left(x\\right),\\quad \\sum _{j=0}^{n}{c}_{j}{T}_{2j}\\left(x\\right)\\quad \\text{or}\\quad \\sum _{j=0}^{n}{c}_{j}{T}_{2j+1}\\left(x\\right)$\ndepending on the choice of argument.\n\n### 3.9 Acceleration of Convergence\n\nThe only routine available is c06baf.\n\n## 4 Decision Trees\n\n### Tree 1: Fourier Transform of Discrete Complex Data\n\n- Is the data one-dimensional?\n  - Yes: multiple vectors?\n    - Yes: stored as rows? Yes: c06prf. No: stored as columns? Yes: c06psf.\n    - No: c06pcf.\n  - No: is the data two-dimensional?\n    - Yes: c06puf.\n    - No: is the data three-dimensional?\n      - Yes: c06pxf.\n      - No: transform on one dimension only? Yes: c06pff. No: transform on all dimensions? Yes: c06pjf.\n\n### Tree 2: Fourier Transform of Real Data or Data in Complex Hermitian Form Resulting from the Transform of Real Data\n\n- Quarter-wave sine (inverse) transform? Yes: c06rgf. No: continue below.\n- Quarter-wave cosine (inverse) transform? Yes: c06rhf. No: continue below.\n- Sine (inverse) transform? Yes: c06ref. No: continue below.\n- Cosine (inverse) transform? Yes: c06rff. No: continue below.\n- Is the data three-dimensional? Yes: use c06pyf for the forward transform on real data, or c06pzf for the inverse transform on Hermitian data. No: continue below.\n- Is the data two-dimensional? Yes: use c06pvf for the forward transform on real data, or c06pwf for the inverse transform on Hermitian data. No: continue below.\n- Is the data multi one-dimensional? Yes: use c06ppf for sequences stored by row, or c06pqf for sequences stored by column. No: use c06paf.
## 5 Functionality Index\n\n Acceleration of convergence c06baf\n Convolution or Correlation,\n complex vectors c06pkf\n real vectors,\n time-saving c06fkf\n Discrete Fourier Transform,\n multidimensional,\n complex sequence,\n complex storage c06pjf\n real storage c06fjf\n multiple half- and quarter-wave transforms,\n Fourier cosine transforms,\n simple use c06rbf\n Fourier cosine transforms, simple use c06rff\n Fourier sine transforms,\n simple use c06raf\n Fourier sine transforms, simple use c06ref\n quarter-wave cosine transforms,\n simple use c06rdf\n quarter-wave cosine transforms, simple use c06rhf\n quarter-wave sine transforms,\n simple use c06rcf\n quarter-wave sine transforms, simple use c06rgf\n one-dimensional,\n multiple transforms,\n complex sequence,\n complex storage by columns c06psf\n complex storage by rows c06prf\n Hermitian/real sequence,\n complex storage by columns c06pqf\n complex storage by rows c06ppf\n multi-variable,\n complex sequence,\n complex storage c06pff\n real storage c06fff\n single transforms,\n complex sequence,\n time-saving,\n complex storage c06pcf\n real storage c06fcf\n Hermitian/real sequence,\n time-saving,\n complex storage c06paf\n Hermitian sequence,\n time-saving,\n real storage c06fbf\n real sequence,\n time-saving,\n real storage c06faf\n three-dimensional,\n complex sequence,\n complex storage c06pxf\n real storage c06fxf\n Hermitian/real sequence,\n complex-to-real c06pzf\n real-to-complex c06pyf\n two-dimensional,\n complex sequence,\n complex storage c06puf\n Hermitian/real sequence,\n complex-to-real c06pwf\n real-to-complex c06pvf\n Fast Gauss Transform c06saf\n Inverse Laplace Transform,\n Crump's method c06laf\n Weeks' method,\n compute coefficients of solution c06lbf\n evaluate solution c06lcf\n Summation of Chebyshev series c06dcf\n\n## 6 Auxiliary Routines Associated with Library Routine Arguments\n\nNone.\n\n## 7 Routines Withdrawn or Scheduled for Withdrawal\n\nThe following lists all those routines that have been withdrawn since Mark 19 of the Library or are scheduled for withdrawal at one of the next two marks.\n\n| Withdrawn Routine | Mark of Withdrawal | Replacement Routine(s) |\n| --- | --- | --- |\n| c06dbf | 25 | c06dcf |\n| c06eaf | 26 | c06paf |\n| c06ebf | 26 | c06paf |\n| c06ecf | 26 | c06pcf |\n| c06ekf | 26 | c06fkf |\n| c06fpf | 28 | c06pqf |\n| c06fqf | 28 | c06pqf |\n| c06frf | 26 | c06psf |\n| c06fuf | 26 | c06puf |\n| c06gbf | 26 | No replacement required |\n| c06gcf | 26 | No replacement required |\n| c06gqf | 26 | No replacement required |\n| c06gsf | 26 | No replacement required |\n| c06haf | 26 | c06ref |\n| c06hbf | 26 | c06rff |\n| c06hcf | 26 | c06rgf |\n| c06hdf | 26 | c06rhf |\n\n## 8 References\n\nBrigham E O (1974) The Fast Fourier Transform Prentice–Hall\nDavies S B and Martin B (1979) Numerical inversion of the Laplace transform: A survey and comparison of methods J. Comput. Phys. 33 1–32\nFox L and Parker I B (1968) Chebyshev Polynomials in Numerical Analysis Oxford University Press\nGentleman W S and Sande G (1966) Fast Fourier transforms for fun and profit Proc. Joint Computer Conference, AFIPS 29 563–578\nHamming R W (1962) Numerical Methods for Scientists and Engineers McGraw–Hill\nRaykar V C and Duraiswami R (2005) Improved Fast Gauss Transform With Variable Source Scales University of Maryland Technical Report CS-TR-4727/UMIACS-TR-2005-34\nShanks D (1955) Nonlinear transformations of divergent and slowly convergent sequences J. Math. Phys. 34 1–42\nSwarztrauber P N (1977) The methods of cyclic reduction, Fourier analysis and the FACR algorithm for the discrete solution of Poisson's equation on a rectangle SIAM Rev.
19(3) 490–501\nSwarztrauber P N (1984) Fast Poisson solvers Studies in Numerical Analysis (ed G H Golub) Mathematical Association of America\nSwarztrauber P N (1986) Symmetric FFT's Math. Comput. 47(175) 323–346\nWynn P (1956) On a device for computing the ${e}_{m}\\left({S}_{n}\\right)$ transformation Math. Tables Aids Comput. 10 91–96\n© The Numerical Algorithms Group Ltd, Oxford, UK. 2017"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.84232295,"math_prob":0.99956065,"size":21237,"snap":"2021-21-2021-25","text_gpt3_token_len":5016,"char_repetition_ratio":0.18791504,"word_repetition_ratio":0.14756407,"special_character_ratio":0.22818665,"punctuation_ratio":0.098426245,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99973243,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-21T10:02:40Z\",\"WARC-Record-ID\":\"<urn:uuid:873938a1-8b74-49d6-929f-3631781ef45c>\",\"Content-Length\":\"126198\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:df6c1927-4f43-4d12-b8d5-f981dacd9a76>\",\"WARC-Concurrent-To\":\"<urn:uuid:54c73a25-a110-4547-adb0-30796c2f3006>\",\"WARC-IP-Address\":\"78.129.168.4\",\"WARC-Target-URI\":\"https://www.nag.com/numeric/nl/nagdoc_26/nagdoc_fl26/html/c06/c06intro.html\",\"WARC-Payload-Digest\":\"sha1:3EOYMN3X37BERSTDKQMH4P3PHCMMCBTY\",\"WARC-Block-Digest\":\"sha1:MDWJJZO2PRP2RJH4FLTPGWJ4I5S4VGJF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623488269939.53_warc_CC-MAIN-20210621085922-20210621115922-00390.warc.gz\"}"} |
https://www.tu-chemnitz.de/informatik/KI/projects/ANNarchy/index.php | [
"# Navigation\n\nSpringe zum Hauptinhalt\n\nNeurosimulator ANNarchy\nANNarchy (Artificial Neural Networks architect)\n\n# ANNarchy (Artificial Neural Networks architect)\n\nANNarchy is a parallel simulator for distributed rate-coded or spiking neural networks. The definition of the network is declared in Python, but the library generates optimized C++ code to actually run the simulation in parallel, using either openMP (on multicore CPU architectures) or CUDA (on GPUs). The current stable version is 4.6 and is released under the GNU GPL v2 or later.\n\nThe code is available at:\n\nhttp://bitbucket.org/annarchy/annarchy\n\nThe documentation is available at:\n\nhttp://annarchy.readthedocs.org\n\n## Core principles\n\nANNarchy separates the description of a neural network from its simulation. The description is declared in a Python script, offering high flexibility and readability of the code, and allowing to use the huge ecosystem of scientific libraries available with Python (Numpy, Scipy, Matplotlib, Sympy, Cython). Using Python furthermore reduces the programming effort to a minimum, letting the modeller concentrate on network design and data analysis.",
null,
"A neural network is defined as a collection of interconnected populations of neurons. Each population comprises a set of similar artificial neurons (rate-coded or spiking point-neurons), whose activity is ruled by one or many ordinary differential equations. The activity of a neuron depends on the activity of other neurons through synapses, whose strength can evolve with time depending on pre- or post-synaptic activities (synaptic or plasticity). Populations are interconnected with each other through projections, which contain synapses between two populations.\n\nANNarchy provides a set of classical neuron or synapse models, but also allows the definition of specific models. The ordinary differential equations (ODE) governing neural or synaptic dynamics have to be specified by the modeler. Contrary to other simulators (except Brian) which require to code these modules in a low-level language, ANNarchy provides a mathematical equation parser which can generate optimized C++ code depending on the chosen parallel framework. Bindings from C++ to Python are generated thanks to Cython (C-extensions to Python), which is a static compiler for Python. These bindings allow the Python script to access all data generated by the simulation (neuronal activity, connection weights) as if they were simple Python attributes. However, the simulation itself is independent from Python and its relatively low performance.\n\n## Example of a pulse-coupled network of Izhikevich neurons\n\nTo demonstrate the simplicity of the interface of ANNarchy, here is the Hello, World! of spiking networks: the pulse-coupled network of Izhikevich neurons (Izhikevich, 2003). It can be defined in ANNarchy as:\n\nfrom ANNarchy import *\n\n# Create the excitatory and inhibitory population\npop = Population(geometry=1000, neuron=Izhikevich)\nExc = pop[:800] ; Inh = pop[800:]\n\n# Set the population parameters\nre = np.random.random(800) ; ri = np.random.random(200)\nExc.noise = 5.0 ; Inh.noise = 2.0\nExc.a = 0.02 ; Inh.a = 0.02 + 0.08 * ri\nExc.b = 0.2 ; Inh.b = 0.25 - 0.05 * ri\nExc.c = -65.0 + 15.0 * re**2 ; Inh.c = -65.0\nExc.d = 8.0 - 6.0 * re**2 ; Inh.d = 2.0\nExc.v = -65.0 ; Inh.v = -65.0\nExc.u = Exc.v * Exc.b ; Inh.u = Inh.v * Inh.b\n\n# Create the projections\nexc_proj = Projection(pre=Exc, post=pop, target='exc')\nexc_proj.connect_all_to_all(weights=Uniform(0.0, 0.5))\n\ninh_proj = Projection(pre=Inh, post=pop, target='inh')\ninh_proj.connect_all_to_all(weights=Uniform(0.0, 1.0))\n\n# Compile\ncompile()\n\n# Start recording the spikes in the network to produce the plots\nM = Monitor(pop, ['spike', 'v'])\n\n# Simulate 1 second\nsimulate(1000.0, measure_time=True)\n\n# Retrieve the spike recordings and the membrane potential\nspikes = M.get('spike')\nv = M.get('v')\n\n# Compute the raster plot\nt, n = M.raster_plot(spikes)\n\n# Compute the population firing rate\nfr = M.histogram(spikes)\n\n# Plot the results\nimport matplotlib.pyplot as plt\nax = plt.subplot(3,1,1)\nax.plot(t, n, 'b.', markersize=1.0)\nax = plt.subplot(3,1,2)\nax.plot(v[:, 15])\nax = plt.subplot(3,1,3)\nax.plot(fr)\nplt.show()",
null,
"## Releases\n\n• 1.0: Initial version, purely C++.\n• 1.1: Management of exceptions.\n• 1.3: Parallelization of the computation using openMP.\n• 2.0: Optimized version with separated arrays for typed connections.\n• 2.1: Parallelization using CUDA.\n• 2.2: Optimized parallelization using openMP.\n• 3.0: Python interface to the C++ core using Boost::Python.\n• 4.0: Python-only version using Cython for the interface to the generated C++ code.\n\n## Relevant Publications\n\nVitay, J., Dinkelbach, H., Hamker, F.H. (2015). ANNarchy: a code generation approach to neural simulations on parallel hardware. Frontiers in Neuroinformatics 9, 19. doi:10.3389/fninf.2015.00019. PDF-document\n\nDinkelbach, H., Vitay, J., Beuth, F., Hamker, F.H. (2012). Comparison of GPU- and CPU-implementations of mean-firing rate neural networks on parallel hardware. Network: Computation in Neural Systems, 23(4): 212-236. PDF-document"
] | [
null,
"https://www.tu-chemnitz.de/informatik/KI/projects/ANNarchy/annarchy.png",
null,
"https://www.tu-chemnitz.de/informatik/KI/projects/ANNarchy/izhikevich.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.73961645,"math_prob":0.91130364,"size":4957,"snap":"2019-35-2019-39","text_gpt3_token_len":1273,"char_repetition_ratio":0.10922673,"word_repetition_ratio":0.002793296,"special_character_ratio":0.26003632,"punctuation_ratio":0.20440882,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98512673,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-08-25T04:11:15Z\",\"WARC-Record-ID\":\"<urn:uuid:c27364e5-f972-477e-9e2e-6986f723aca9>\",\"Content-Length\":\"43921\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cd330c0e-5dd0-404c-a710-bda4e1d11885>\",\"WARC-Concurrent-To\":\"<urn:uuid:fc7ee8fb-0760-4cd1-be06-2fe5265f2fee>\",\"WARC-IP-Address\":\"134.109.226.8\",\"WARC-Target-URI\":\"https://www.tu-chemnitz.de/informatik/KI/projects/ANNarchy/index.php\",\"WARC-Payload-Digest\":\"sha1:AGTPVPGMOH6VRUZVMQAGVVIDZXOU4CVA\",\"WARC-Block-Digest\":\"sha1:2QBMUPZUO7QQWIA4QF6GDDZWETAA224Z\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-35/CC-MAIN-2019-35_segments_1566027322170.99_warc_CC-MAIN-20190825021120-20190825043120-00193.warc.gz\"}"} |
https://visualfractions.com/percentage-calculator/1-is-what-percent-of-1231/ | [
"# 1 is what percent of 1231?\n\nIf you want to find out what 1 as a percentage of 1231 is then this is the article for you. In this quick and straightforward guide, we'll show you exactly how to find out exactly what one number as a percentage of another is. It's very simple and will help you to calculate future percentage problems like this yourself without the need to search for it.\n\nIn a rush and just need to know the answer? 1 is 0.08% of 1231.\n\nis what % of\n\n## 1 is what % of 1231?\n\nUsing percentages is something all of us do every single day, whether we realise it or not. It's such a common task but often we forget just how easy it is to work it out for ourselves and reach for Siri or a quick Google search. That's totally fine, and you probably found this page doing just that, but it is very helpful to know how to do it yourself.\n\nSo, for example, let's say you just ate a meal at a restaurant and the bill comes in at \\$108. You have a \\$10 bill that you want to leave as a tip. What percentage of the check are you tipping? Is it enough or should you tip more? By working out the percentage of one number to another, you can easily figure that out.\n\nLet's dive into this problem and explain exactly how to solve it. First of all, whenever we want to know what percentage 1 is of 1231 what we're really talking about is a fraction:\n\n1 / 1231\n\nSo to work out the percentage from a fraction, the first step is to divide the number above the line (the numerator) by the number below the line (the denominator):\n\n1 / 1231 = 0.0008123476848091\n\nOur fraction is now in decimal format, so how do we get to a percentage from there? If you said \"multiply 0.0008123476848091 by 100\" then you deserve a cookie, my friend!\n\n0.0008123476848091 x 100 = 0.08%\n\nAnd there you have it! You just successfully calculated the percentage of 1 to 1231. You can now go forth and use this method to work out and calculate the percentages of any numbers.\n\nIf you end up having to do this will large numbers than generate a lot of decimal places, you may still need a calculator to work out the percentage, but you definitely shouldn't need to Google or ask Siri again now that you have the knowledge to manually work it out.\n\nHead back to the percentage calculator to work out any more calculations you need to make or be brave and give it a go by hand. Hopefully this article has shown you that it's easier than you might think!"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9364069,"math_prob":0.91119367,"size":3153,"snap":"2022-40-2023-06","text_gpt3_token_len":763,"char_repetition_ratio":0.1527469,"word_repetition_ratio":0.011257036,"special_character_ratio":0.2607041,"punctuation_ratio":0.11497731,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9838805,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-09-28T04:26:54Z\",\"WARC-Record-ID\":\"<urn:uuid:6e8d6dfc-f9bb-4d8f-9ff4-6efdd612a996>\",\"Content-Length\":\"22479\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4bf81de7-7308-4c9e-adce-ac7290f5471e>\",\"WARC-Concurrent-To\":\"<urn:uuid:3bcf6dc6-4d7c-4647-814a-e9ad269888b3>\",\"WARC-IP-Address\":\"104.21.87.217\",\"WARC-Target-URI\":\"https://visualfractions.com/percentage-calculator/1-is-what-percent-of-1231/\",\"WARC-Payload-Digest\":\"sha1:AZ5NSPTLCLMHAIVT4GZ3E2IER7JVHRBK\",\"WARC-Block-Digest\":\"sha1:IJGFY6VUGXS2QKUEPTECA737NCFT2CEB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-40/CC-MAIN-2022-40_segments_1664030335059.43_warc_CC-MAIN-20220928020513-20220928050513-00121.warc.gz\"}"} |
https://www.mis.mpg.de/publications/preprint-repository/article/2012/issue-67 | [
"",
null,
"",
null,
"# MiS Preprint Repository\n\nDelve into the future of research at MiS with our preprint repository. Our scientists are making groundbreaking discoveries and sharing their latest findings before they are published. Explore repository to stay up-to-date on the newest developments and breakthroughs.\n\nMiS Preprint\n67/2012\n\n### Fast Convolution Quadrature for Wave Equation in Three Dimensions\n\nLehel Banjai and Maryna Kachanovska\n\n#### Abstract\n\nIn this work the question of efficient solution of an external boundary value problem for the wave equation in three dimensions is addressed. The problem is reformulated in terms of time domain boundary integral equations; the corresponding convolution equations are discretized with the help of Runge-Kutta convolution quadrature. The resulting lower triangular Toeplitz system of size $N$ is solved recursively, constructing $O(N)$ discretizations of boundary single-layer operator of Helmholtz equation.\n\nSince the problem is posed in odd dimension, Huygens principle holds true and convolution weights of Runge-Kutta convolution quadrature $w_{n}^{h}(d)$ exhibit exponential decay outside of a neighborhood of the diagonal $d \\approx nh$, where $h$ is a time step. Therefore, only a constant number of discretizations of boundary integral operators has to contain the near-field and for the rest only the far-field can be constructed. We combine this property with a use of data-sparse techniques, namely $\\mathcal{H}$-matrices and high-frequency fast multipole method, to design an efficient recursive algorithm. Issues specific to the application of data-sparse techniques to the convolution quadrature are also addressed. Numerical experiments indicate the efficiency of the proposed approach.\n\nNov 9, 2012\nPublished:\nNov 12, 2012\nMSC Codes:\n65M38, 35L05\nKeywords:\ndata-sparse techniques, wave equation, time-domain boundary integral equations, Runge-Kutta convolution quadrature\n\n### Related publications\n\ninJournal\n2014 Repository Open Access\nLehel Banjai and Maryna Kachanovska\n\n### Fast convolution quadrature for wave equation in three dimensions\n\nIn: Journal of computational physics, 279 (2014), pp. 103-126"
] | [
null,
"data:image/svg+xml;base64,<svg width="186" height="30" viewBox="0 0 186 30" fill="none" xmlns="http://www.w3.org/2000/svg">
    <path d="M137.848 13.205c-1.017.063-1.885.847-2.544 1.535a13.556 13.556 0 0 0-1.556 1.988l-.125.213s-.441.71-.632.946c-.807 1.024-1.991 2.418-3.426 2.486-1.388.064-3.088-.847-3.315-2.273-.437-2.73 2.749-5.307 5.246-3.56.66.463 1.129 1.038 1.625 1.658.038.046.084.113.14.19l.042.055.125-.186c-.571-.838-1.133-1.666-1.945-2.314-.776-.616-1.626-.964-2.633-.788a3.737 3.737 0 0 0-2.159 1.205c-1.119 1.263-1.184 3.206-.353 4.637.827 1.422 2.753 2.78 4.476 2.015 1.407-.62 2.451-2.079 3.241-3.324l.116-.19s.325-.471.432-.611c.385-.517.798-1.015 1.258-1.472 1.063-1.05 2.261-1.694 3.784-1.141 1.453.525 2.252 1.906 1.931 3.396-.348 1.63-1.95 2.807-3.631 2.807-1.439 0-2.646-1.453-3.431-2.468l-.018-.022-.112.176c.845 1.218 1.779 2.532 3.241 3.025 1.978.67 4.374-1.136 4.448-3.124.246-2.319-1.639-5.022-4.22-4.864l-.005.005z" fill="url(#xen4ytwv1a)"/>
    <path d="M149.985 21.338A41410.79 41410.79 0 0 1 130.201.616c-.455-.48-1.184-.29-1.37.286-.181.575-8.357 26.485-8.501 26.956-.162.521.302 1.1.934.96.575-.127 23.994-5.248 28.321-6.203.692-.154.743-.91.395-1.277h.005zm-1.133-.014c-.056.072-.158.027-.274-.005-.121-.036-5.112-1.553-5.92-1.806-.084-.023-.107-.018-.135.04-.023.05-.051.105-.069.154-.019.05-.01.095.069.123.079.027 5.948 1.951 5.948 1.951s.162.05.158.19c-.005.14-.116.168-.191.186l-26.292 5.76c-.121.027-.256.059-.293-.054-.037-.113.102-.222.102-.222s6.37-5.746 6.408-5.783c.041-.04.041-.059 0-.077l-.237-.113a.067.067 0 0 0-.075.01c-.027.022-6.449 5.637-6.449 5.637s-.055.059-.143.059a.17.17 0 0 1-.172-.168c0-.036.014-.086.014-.086l7.916-25.095c.046-.14.121-.159.195-.154.079 0 .144.1.172.199.051.186 2.73 10.397 3.18 12.018.051.186.102.258.167.353.065.09.144.213.191.272.074.095.097.122.157.077.047-.036.038-.036-.023-.258-.037-.136-3.203-12.566-3.203-12.566-.075-.272.092-.385.297-.154.185.213 18.293 19.204 18.321 19.186l.032.032c.042.045.204.217.144.299l.005-.005z" fill="url(#vgmjnx320b)"/>
    <path d="M170.998.304c-8.055 0-14.606 6.39-14.606 14.246 0 7.856 6.551 14.245 14.606 14.245 8.056 0 14.607-6.389 14.607-14.245 0-7.857-6.556-14.246-14.607-14.246zm0 28.098c-7.832 0-14.198-6.213-14.198-13.852 0-7.64 6.37-13.848 14.198-13.848s14.198 6.213 14.198 13.848c0 7.634-6.37 13.852-14.198 13.852z" fill="#12120D"/>
    <path d="M179.569 13.459c-.204-.413-.065-.684-.125-.983-.047-.222-.386-.956-.599-1.463-.102-.24.524-.18.06-1.041-.014-.027.641-.263-.014-.747-.367-.272-.627-.607-.826-.906-.028-.04-.056-.122 0-.177.371-.321.376-.783-.023-1.018-.293-.173-.623 0-.929.194-.102.064-.153.104-.255.064a1.846 1.846 0 0 1-.251-.154c.376-.358.45-.711.422-.766-.018-.04-.64-.176-1.411.136l-.056-.027c.316-.534.14-.91.116-.937.576-.788 1.152-1.309 1.714-1.812l.014-.013c.037-.036.051-.077-.019-.118-.06-.036-.074-.045-.149-.081a13.134 13.134 0 0 0-6.24-1.572c-7.08 0-12.837 5.62-12.837 12.52 0 4 2.08 7.862 5.483 10.257.227.159.385.222.385.222.056.018.116.018.149-.027l.051-.063a.192.192 0 0 0 .046-.114c0-.09-.134-.172-.292-.403a3.703 3.703 0 0 1-.664-2.055c0-.693.218-1.323.631-1.848.511.52 1.63 1.476 1.607 2.35-.028.978-.966.928-.762 1.662.126.462.479.272.544.62.106.607.937.992 1.304.938.088-.014.163.059.339.23.246.245.859.213 1.128.164.256-.046.581.185 1.119.127.674-.073.845-.123 1.133-.127.488-.01 3.134-.046 5.019.244.325.05.427-.095.283-.34-.682-1.159-.775-2.141-.385-3.382.697.045 1.695.231 3.013-.543.469-.272.729-.856.706-1.29-.014-.282-.139-.64-.065-.916.102-.37.176-.362.283-.434.502-.344.14-.62.061-.648.153-.059.32-.38.144-.62-.089-.095-.363-.842-.047-1 .339-.169.729-.24 1.124-.409.269-.108.483-.416.329-.824-.185-.489-1.211-2.785-1.244-2.852l-.014-.018zm-14.175 11.148c-.097.018-.413.032-.343-.371.042-.25.32-.354.464-.213-.028.195-.097.385-.121.584zm12.104-17.09c.168-.104.344-.144.451-.045.367.353-.395.548-.237.82.26.444.673.87 1.04 1.14.172.128.195.418-.46.553-.659.132-3.287.81-4.675 2.242-1.737 1.793-2.842 4.116-2.99 3.799-.153-.322-.502-.376-.446-.466 2.493-4.189 6.398-7.481 7.317-8.043zm1.143 2.944c-.144.398-1.277.756-1.222.475.024-.127 0-.294.089-.308.125-.022.483-.145 1.04-.28.055-.014.13.018.093.117v-.004zm-2.87.724c-.028-.163 1.189-.561 1.231-.443.079.217-.251.801-.66.95-.464.168-.654.118-.636.055.033-.118.093-.408.065-.557v-.005zm-2.823 6.19c-.561.335-.863-.303-.942-.416a.525.525 0 0 1-.074-.43c.037-.19.069-.449-.056-1.264-.056-.357.706-1.25.877-1.404.177-.163.437.082.455.345.033.498-.255.76-.548.973-.362.263-.427.602-.362.693.028.036.191.036.26-.113.219-.448.836-.675.934-.915.153-.367.041-.607.027-.76-.014-.136.121-.168.223-.155.13.019.191.195.256.363.399 1.019-.609 1.634-.771 1.874-.074.114.158.938-.279 1.2v.01zm.845-3.948c-.301.2-.733-.073-.645-.19a6.032 6.032 0 0 1 1.597-1.495c.478-.29.529-.308.553-.145.014.1.018.607-.367 1.02-.321.339-.641.312-.803.348-.112.023-.103.312-.335.466v-.004zm-2.679 3.477c-.139-.031-.19-.29-.162-.393.06-.231.427-.707.557-.874.033-.041.098 0 .107.045.079.552 0 1.331-.502 1.222zm4.476-10.057c.502-.222.989-.226 1.04-.217.116.014-.255.652-.91.928-.678.286-1.184.335-1.24.308-.088-.04.572-.779 1.11-1.014v-.005zm-11.589 10.619c-.199-.226-.385-.349-.408-.335-.056.036-.381.58-.492 1.213-.632-1.181-.985-2.372-1.04-3.79-.195-4.804 3.932-8.671 8.965-8.653 1.147 0 2.094.213 3.125.62a3.44 3.44 0 0 0-.511.901c-.005-.262-.074-.461-.056-.625.014-.1-.715.123-1.355 1.119-.149.23-.242.489-.302.724-.014-.375-.102-.665-.13-.66-.065.017-.803.312-1.184 1.19a3.023 3.023 0 0 0-.153.467c-.033-.267-.102-.444-.126-.444-.046 0-.817.312-1.198 1.209a2.255 2.255 0 0 0-.157.62c-.07-.312-.191-.507-.233-.457-.264.33-.784.475-1.063 1.304-.065.195-.093.39-.093.57-.116-.28-.274-.443-.311-.434-.074.013-.859.743-.975 1.476-.046.29-.023.557.024.784-.149-.236-.349-.408-.432-.652-.126.063-.79.674-.808 1.453a2.87 2.87 0 0 0 .102.829c-.223-.39-.534-.575-.571-.553-.056.037-.553.557-.669 1.5-.027.221 0 
.43.056.62l-.005.004zm10.308-10.809c.394-.457 1.068-.824 1.142-.779.079.046-.112.752-.548 1.169-.436.42-1.114.828-1.193.824-.079-.005.241-.802.599-1.214zm-1.286 1.766a3.016 3.016 0 0 0-.785.86c-.074.123-.265-.674.237-1.44.306-.466.914-.887.9-.833-.046.186.298.897-.357 1.409l.005.004zm-1.412 1.019c-.469.684-.497.448-1.012 1.118-.028.037-.037-.873.144-1.272.288-.634.794-1.005.868-1.019.074-.013.26.793 0 1.173zm-1.541 1.367c-.242.413-.465.318-.864.902-.046.063-.153-.485.088-1.223.186-.57.762-.924.822-.942.093-.023.242.77-.046 1.264zm-1.435 1.45c-.246.525-.404.538-.692.864-.051.055-.264-.498-.055-1.195.167-.557.547-.76.733-.951.074-.072.274.725.014 1.277v.005zm-1.165.896c0 .159-.019.25-.065.408-.177.588-.53.837-.725 1.104-.079-.171-.218-.42-.222-.796 0-.711.441-1.318.719-1.513.107 0 .293.43.293.797zm-1.189 1.739c-.014.829-.548 1.313-.706 1.467a2.384 2.384 0 0 1-.306-1.028c-.028-.856.334-1.155.492-1.322.074.245.525.44.52.883zm-1.681.851c.246.123.539.467.567 1.11.028.647-.446 1.19-.632 1.277-.13-.494-.376-.598-.371-1.087.005-.535.283-1.123.432-1.304l.004.004zm1.013 1.703c.636-.272 1.332-.227 1.36-.204.028.027-.204.793-.822 1.114-.617.322-1.318.353-1.379.276-.037-.04.204-.919.841-1.186zm-.047-.457c.042-.204.418-.888.984-1.137.571-.25 1.231-.231 1.305-.2.074.032-.237.757-.808 1.024-.641.299-.896.122-1.481.308v.005zm.957-1.599c.027-.063.492-.874 1.035-1.123.538-.245 1.193-.195 1.235-.181.046.014-.246.734-.864 1.032-.613.295-1.43.336-1.406.272zm1.239-1.621c-.046-.032.613-.874 1.161-1.119.483-.217 1.114-.172 1.165-.135.051.036-.255.742-.891 1.014-.608.258-1.388.276-1.435.244v-.004zm1.486-1.603c-.037-.018.464-.851 1.031-1.064.538-.204 1.072-.018 1.235-.01.13.01-.186.612-.924.88-.618.221-1.249.24-1.342.194zm1.671-1.544c.01-.04.562-.797 1.133-.987.543-.181 1.203-.06 1.272-.032.07.027-.385.688-1.077.892-.627.181-1.337.163-1.328.127zm1.686-1.336c-.075-.05.594-.774 1.137-.978.53-.195 1.105-.159 1.152-.136.046.023-.349.657-.989.91-.604.24-1.226.254-1.3.204zm-9.23 14.82c-3.009-2.336-4.81-5.863-4.81-9.545 0-6.683 5.576-12.122 12.433-12.122 2.001 0 4.007.49 5.767 1.381.037.019.042.064.009.091-.515.44-1.235 1.304-1.416 1.544a10.42 10.42 0 0 0-4.36-.946c-5.683 0-10.302 4.505-10.302 10.048 0 2.472.859 4.727 2.479 6.539-.093.208-.26.67-.26 1.281 0 .503.097 1.082.46 1.725v.005zm.018-3.373h-.004c-1.495-1.711-2.285-3.844-2.285-6.172 0-5.32 4.439-9.65 9.894-9.65 1.44 0 2.823.295 4.119.875a2.55 2.55 0 0 0-.567.353 9.043 9.043 0 0 0-3.547-.693c-5.154 0-9.351 4.09-9.351 9.115 0 2.183.697 4.098 2.066 5.706-.079.118-.19.28-.315.47l-.01-.004zm.492-.756c-.181-.167-.571-.588-.608-1.132v-.077c0-.584.265-1.16.33-1.223.065-.063.743.449.752 1.187.009.638-.293 1.005-.469 1.254l-.005-.009zm1.115-.978c.538-.249 1.337-.19 1.295-.154-.042.036-.255.793-.859 1.091-.603.295-1.003.218-1.221.177.325-.349.255-.865.789-1.11l-.004-.004zm.724 6.303c-.093-.357-.056-.851.07-.982.241-.113.682-.159.691-.023.014.217-.153.738.042 1.386-.042.104-.645.059-.798-.38h-.005zm.042-1.498c-.047-.268.218-.53.213-1.114-.004-.72-.919-1.708-1.536-2.346.287 0 .668-.068 1.095-.258.92-.417 1.101-1.368 1.124-1.404.014-.018-.204-.059-.52-.063.032-.018.069-.032.102-.05.901-.471 1.133-1.485 1.091-1.508a2.326 2.326 0 0 0-.552-.036c.088-.027.176-.06.273-.095.952-.367 1.217-1.4 1.184-1.436-.014-.013-.204-.05-.482-.059a2.66 2.66 0 0 0 .26-.113c.868-.421 1.17-1.367 1.123-1.4-.018-.013-.195-.044-.45-.049.195-.04.404-.104.627-.2.928-.398 1.23-1.263 1.188-1.303-.018-.014-.176-.06-.413-.077.186-.041.39-.095.604-.168.919-.308 1.29-1.164 
1.235-1.168-.126-.01-.298-.068-.516-.109.251-.013.543-.059.91-.18.994-.327 1.347-.911 1.365-1.165 0-.018-.121-.068-.311-.108.228-.041.455-.095.645-.172.929-.367 1.314-1.255 1.347-1.332a6.53 6.53 0 0 0 .539-.203c.097-.041.185-.086.269-.136.046.031.116.086.227.15.14.085.191.108.121.167-3.157 2.626-5.525 5.936-6.556 7.548-.028.045-.083.037-.172.032-1.105-.072-1.731 1.213-1.239 2.418-.065.244-.59.715-.627.752-.042.036-.144.185.032.167.26-.022.59-.425.748-.607.065.082.311.277.19.39-.938.901-1.871 1.385-1.945 1.417-.353.168.59.285 2.187-1.127.093-.082.204.326.348.502-.274.883.404 2.595.297 3.13-.018.094-.074.357-.395.366-1.587.032-1.741-.04-2.548.195-.377.109-.279.349 0 .276.311-.081.636-.204 2.349-.1.418.028.901-.081.901-.774 0-.625-.506-2.495-.335-2.748.228.167.395.172.479.19-.024.611.984 2.576.533 3.663-.167.403-.896.371-2.145.254-2.126-.204-2.818.244-2.86-.005l-.004-.005zm4.884 1.34c-.826.28-1.151-.666-1.017-.865.084-.122.604.14 1.012-.118.312-.195.488.824.005.983zm-3.756.588c-.116-.498-.163-1.363.051-1.557.042-.036.678-.127.669.013-.051.598-.298 1.264.167 1.925.172.249-.734.285-.887-.38zm1.579.571c-.223-.054-.836-1.005-.335-2.142.051-.122.817-.045.947-.009s-.106.928.948 1.327c.106.04 0 1.2-1.556.824h-.004zm6.574-3.17c-.358 1.2-.534 1.68 0 2.98.07.172.125.303-.033.276-1.072-.181-1.304-.353-4.68-.072-.241.018.079-.458.042-.784.307-.04.799-.176.906-.665.125-.594-.298-1.028-.251-1.114.478-.91-.033-1.92-.437-3.61-.037-.158.056-.226.126-.153 2.168 2.287 3.552 2.744 4.294 2.961.075.023.061.104.033.181zm5.544-6.249c-.455.39-1.152.422-1.43.675-.149.131-.214.335-.056.702.093.213.153.235-.158.317-.506.131-.576.779-1.216.661-.256-.045-.075.113.018.122.367.032.515-.068.641-.185.13-.118.302-.33.497-.403.204-.077.408-.068.413.077 0 .113-.256.294-.362.344.13.068.534.267.19.493-.269.177-.636-.068-.692-.1-.079-.045-.232-.022-.181.037.2.235.734.176.636.299-.492.643-.093 1.023-.241 1.67-.153.657-1.073 1.02-1.783 1.142-2.414.426-4.764-1.58-6.143-3.088.126-.286.112-.662-.018-.53-.121.127-.098.335-.386.407-.552.136-.794-.801-.965-.964-.674-.643-.938-1.553-.539-2.346.404-.801 1.249-.643 1.588-.154.418.602.223.947.464 1.128.335.253.669.027.762 0 .255-.068.237.588 1.105.557.933-.032.826-1.45.961-1.576 1.198-1.141.585-2.155.367-2.391-.061-.063.134-.48.204-.498.706-.15 1.1-.589 1.314-1.055 0 0 1.17-.018 1.583-.842.112-.218.455.212 1.235-.385.088-.068.172-.037.204.045.502 1.2.711 1.53.674 2.142-.028.444 1.216 2.912 1.309 3.124.102.236.116.471 0 .57l.005.005z" fill="#12120D"/>
    <path d="M177.786 12.521c-.678.023-1.472.838-1.727.951-.269.118-.176.195-.018.136.338-.122.821-.625 1.485-.86.525-.186.492.235.692.212.093-.009-.042-.453-.432-.439zM177.939 13.169c-.055.013.061.294-.236.312-.641.036-1.147.63-1.537.806-.39.177-.469.172-.53.19-.051.014-.032.077.01.086 1.629.213 2.256.154 2.27.095.028-.108-.093-.163-.065-.235.06-.177.093-.512.023-.743-.028-.108.242-.036.242-.18 0-.078-.084-.354-.177-.331zm-.25 1.254c-.033.122-.757.068-1.156.032-.093-.01-.251-.023.051-.2.236-.14.566-.371.993-.453.037-.004.102 0 .121.082a1.27 1.27 0 0 1-.009.534v.005zM179.82 16.692c-.126 0-.627.222-.817.34-.107.067-.618.375-.585-.114.014-.172.125-.593.125-.706 0-.177-.149-.104-.153 0-.005.244-.056.43-.107.715-.042.245.172.548.511.372.283-.145.664-.399 1.003-.462.088-.018.102-.14.018-.145h.005zM170.376 18.426c-.097-.036-.07.158-.2.136-.171-.027-.027-.177-.25-.408-.112-.113-.39-.163-.595-.484-.325-.508-.046-1.635.428-1.477.264.091.218.689.139 1.096-.051.25.348.371.385.285.061-.144-.148-.13-.195-.267-.074-.203.488-1.028-.246-1.268-.534-.172-.887.485-.822 1.21.056.61.474.864.711.95.325.118.079.548.515.548.19 0 .232-.294.13-.33v.009zM174.657 4.832s.046 0 .065-.018l.905-1.118a.055.055 0 0 0-.023-.086l-.153-.06a.058.058 0 0 0-.065.019l-.91 1.123c-.024.031-.01.072.028.086l.157.054h-.004zM172.976 4.38c.051.008.102.017.158.031.028.005.056-.01.065-.032l.529-1.277a.05.05 0 0 0-.042-.072c-.06-.014-.116-.023-.176-.036-.014 0-.032.004-.037.018-.065.163-.437 1.055-.539 1.3-.014.031.01.067.042.072v-.005zM171.11 4.198h.158c.027 0 .055-.022.055-.05l.13-1.358c0-.032-.023-.059-.055-.059h-.158c-.028 0-.056.023-.056.05l-.13 1.358c0 .032.023.06.056.06zM169.299 4.334c.051-.009.102-.018.158-.023.032-.004.055-.036.046-.063l-.265-1.34c-.004-.027-.032-.05-.065-.046-.051.01-.106.014-.157.023a.056.056 0 0 0-.047.063l.265 1.336c.004.027.032.05.065.046v.004zM167.442 4.746a.06.06 0 0 0 .069.027c.051-.018.103-.031.149-.05.032-.009.046-.045.032-.076l-.626-1.232c-.014-.027-.042-.036-.07-.027l-.149.054c-.032.01-.046.045-.032.077l.627 1.227zM165.715 5.525s.046.023.069.013c.047-.027.093-.05.139-.077.033-.018.042-.058.014-.086L165 4.33s-.047-.022-.07-.013a17.11 17.11 0 0 0-.139.081c-.033.018-.037.06-.014.086l.938 1.042zM164.104 6.634s.051.014.069 0c.042-.036.084-.068.126-.104.027-.022.027-.068 0-.086l-1.189-.806s-.051-.013-.07 0l-.12.104a.055.055 0 0 0 .004.086l1.184.802-.004.004zM162.701 8.043s.051 0 .065-.018c.033-.041.07-.082.107-.123a.055.055 0 0 0-.023-.086l-1.365-.52a.058.058 0 0 0-.065.018c-.033.04-.065.086-.102.126-.024.028-.01.073.023.086l1.36.517zM160.12 9.5l1.462.2c.024 0 .047-.009.056-.027l.079-.145a.055.055 0 0 0-.042-.081l-1.472-.2a.068.068 0 0 0-.06.032c-.023.05-.051.095-.074.145-.019.032.004.072.042.077h.009zM159.321 11.665l1.491-.14s.041-.018.046-.04c.019-.055.033-.105.051-.16.014-.035-.018-.076-.06-.072l-1.5.14c-.023 0-.042.019-.051.041-.014.055-.028.11-.042.159a.056.056 0 0 0 .061.068l.004.004zM160.454 13.395c.005-.054.014-.113.019-.167a.056.056 0 0 0-.075-.06l-1.453.494s-.037.027-.037.05c0 .054-.009.113-.009.168 0 .04.037.068.074.054l1.444-.49s.037-.022.037-.044v-.005zM160.417 15.329c0-.06-.009-.118-.014-.177 0-.04-.051-.068-.088-.045l-1.319.842s-.027.031-.023.054c.009.06.014.113.023.168.005.04.051.059.089.036l1.304-.838s.028-.031.023-.05l.005.01zM160.747 17.267c-.014-.06-.033-.118-.047-.177-.009-.045-.065-.059-.097-.023l-1.073 1.15s-.018.037-.014.055c.019.054.038.109.061.167.014.041.065.05.097.018l1.059-1.136s.018-.032.014-.05v-.004zM161.295 18.983l-.757 1.404s-.009.036 0 
.054c.032.054.065.104.093.159.023.036.079.036.097 0l.748-1.386s.009-.036 0-.05c-.028-.059-.061-.113-.089-.172-.018-.04-.078-.04-.102 0l.01-.01zM162.358 20.559c-.028-.037-.089-.027-.102.018l-.404 1.625s0 .037.013.05c.042.05.084.095.126.14.032.032.088.019.097-.022l.4-1.608s0-.031-.01-.045c-.041-.054-.083-.104-.12-.158zM2.354 8.64l1.694 4.669L5.734 8.64H7.52v6.44H6.156v-1.762l.135-3.038-1.783 4.8h-.933l-1.778-4.796.134 3.034v1.761H.571V8.64h1.783zM12.981 13.753h-2.386l-.455 1.326H8.69l2.456-6.439h1.258l2.47 6.44h-1.448l-.46-1.327h.014zm-2.02-1.073h1.649l-.831-2.41-.822 2.41h.004zM18.33 10.86l1.24-2.22h1.564l-1.922 3.193 1.973 3.246h-1.583l-1.267-2.255-1.268 2.255h-1.583l1.973-3.246-1.922-3.193H17.1l1.24 2.22h-.01zM26.469 12.81v2.27h-1.36V8.64h2.576c.497 0 .934.09 1.31.268.376.176.664.43.868.756.204.326.302.697.302 1.114 0 .63-.223 1.127-.664 1.494-.441.367-1.054.548-1.839.548H26.47v-.01zm0-1.072h1.216c.358 0 .637-.082.822-.25.186-.167.284-.402.284-.706 0-.303-.098-.57-.288-.765-.19-.195-.455-.294-.79-.299h-1.249v2.02h.005zM32.923 14.015h2.887v1.064h-4.248V8.64h1.36v5.375zM40.867 13.753H38.48l-.455 1.326h-1.448l2.456-6.439h1.258l2.47 6.44h-1.449l-.46-1.327h.015zm-2.02-1.073h1.648l-.83-2.41-.823 2.41h.005zM49.22 15.08h-1.361l-2.647-4.239v4.239h-1.36V8.64h1.36l2.651 4.248V8.64h1.356v6.44zM56.054 12.933c-.051.693-.316 1.236-.785 1.635-.469.398-1.096.598-1.871.598-.845 0-1.514-.277-1.997-.834-.482-.557-.728-1.317-.728-2.286v-.394c0-.62.111-1.164.334-1.635.223-.47.543-.833.961-1.087.418-.253.9-.376 1.449-.376.761 0 1.374.2 1.838.598.464.399.734.956.808 1.676h-1.36c-.033-.417-.154-.716-.358-.906-.204-.19-.515-.28-.933-.28-.455 0-.794.158-1.017.475-.223.317-.339.81-.348 1.476v.484c0 .698.107 1.205.325 1.526.218.322.557.48 1.021.48.423 0 .734-.095.943-.28.209-.186.325-.476.357-.87h1.36zM59.48 12.498l-.706.743v1.838h-1.36V8.64h1.36v2.921l.6-.801 1.68-2.12h1.672l-2.345 2.862 2.414 3.577h-1.62L59.48 12.5zM67.995 15.08h-1.36V8.64h1.36v6.44zM75.085 15.08h-1.36l-2.647-4.239v4.239h-1.36V8.64h1.36l2.651 4.248V8.64h1.356v6.44zM80.164 13.39c0-.249-.093-.443-.274-.575-.18-.13-.506-.276-.98-.425a6.92 6.92 0 0 1-1.118-.44c-.748-.393-1.124-.928-1.124-1.598 0-.349.102-.657.302-.933s.487-.484.868-.638c.38-.154.799-.231 1.268-.231s.891.081 1.258.249c.367.167.655.403.859.706.204.304.306.648.306 1.037h-1.36c0-.294-.098-.525-.288-.688-.19-.163-.46-.245-.803-.245-.344 0-.59.068-.776.204a.64.64 0 0 0-.278.544c0 .208.107.385.325.525.218.14.534.276.956.398.776.227 1.337.508 1.69.847.353.34.53.756.53 1.254 0 .557-.219.997-.65 1.314-.432.317-1.013.475-1.746.475-.506 0-.97-.09-1.388-.272-.418-.18-.734-.43-.957-.747a1.848 1.848 0 0 1-.33-1.096h1.366c0 .711.436 1.065 1.304 1.065.325 0 .576-.064.757-.19a.623.623 0 0 0 .274-.54h.01zM87.88 9.714h-2.023v5.365h-1.36V9.714H82.5V8.64h5.376v1.074h.005zM90.57 15.08h-1.361V8.64h1.36v6.44zM97.13 9.714h-2.025v5.365h-1.36V9.714h-1.997V8.64h5.377v1.074h.004zM103.365 8.64v4.243c0 .707-.228 1.264-.678 1.671-.45.408-1.068.616-1.852.616-.785 0-1.384-.2-1.839-.598-.455-.398-.682-.946-.692-1.639V8.645h1.36v4.247c0 .421.103.73.312.924.209.195.492.29.859.29.766 0 1.151-.394 1.165-1.177V8.645h1.365V8.64zM109.916 9.714h-2.024v5.365h-1.361V9.714h-1.996V8.64h5.376v1.074h.005zM115.125 12.29h-2.609v1.725h3.064v1.064h-4.424V8.64h4.415v1.074h-3.055v1.535h2.609v1.041zM6.1 20.034H4.178v1.883H3.83v-4.025h2.558v.285H4.183v1.567h1.922v.285l-.005.005zM10.576 20.12c0 .37-.065.697-.195.973-.13.28-.32.494-.562.648-.246.15-.524.226-.845.226-.482 
0-.872-.168-1.17-.503-.297-.335-.445-.787-.445-1.363v-.416c0-.367.065-.693.2-.974.134-.28.32-.502.566-.652.241-.154.524-.226.84-.226.316 0 .599.077.84.222.242.15.432.362.562.634.13.271.2.588.21.946v.485zm-.348-.435c0-.485-.111-.865-.339-1.137-.223-.276-.534-.412-.924-.412s-.687.136-.915.412c-.227.276-.343.661-.343 1.155v.421c0 .476.111.856.339 1.132.227.28.534.417.924.417s.7-.136.924-.412c.223-.277.33-.662.33-1.15v-.426h.004zM13.432 20.237h-1.198v1.68h-.353v-4.025h1.342c.44 0 .784.104 1.03.312.246.208.372.498.372.874 0 .258-.08.485-.232.684a1.177 1.177 0 0 1-.623.407l1.031 1.712v.036h-.371l-.998-1.68zm-1.198-.285h1.068c.292 0 .529-.082.705-.245a.816.816 0 0 0 .265-.629c0-.285-.093-.503-.279-.661-.185-.159-.445-.236-.78-.236h-.984v1.77h.005zM18.316 17.892l1.523 3.55 1.527-3.55h.465v4.025h-.348v-1.752l.027-1.798-1.536 3.55h-.27l-1.532-3.537.028 1.771v1.762h-.348v-4.026h.464v.005zM25.591 20.785h-1.87l-.428 1.132h-.367l1.56-4.025h.33l1.56 4.025h-.362l-.427-1.132h.004zm-1.764-.285h1.653l-.827-2.192-.826 2.192zM29.858 18.177h-1.416v3.736h-.348v-3.736h-1.411v-.285h3.175v.285zM34.018 21.913h-.352v-1.93h-2.359v1.93h-.348v-4.026h.348v1.807h2.359v-1.807h.352v4.026zM37.78 19.984h-1.932v1.639h2.219v.285h-2.568v-4.025h2.554v.285h-2.205v1.521h1.931v.295zM39.701 17.892l1.523 3.55 1.528-3.55h.464v4.025h-.348v-1.752l.028-1.798-1.537 3.55h-.27l-1.532-3.537.028 1.771v1.762h-.348v-4.026h.464v.005zM46.972 20.785h-1.87l-.428 1.132h-.367l1.56-4.025h.33l1.56 4.025h-.362l-.428-1.132h.005zm-1.764-.285h1.653l-.827-2.192-.826 2.192zM51.239 18.177h-1.416v3.736h-.348v-3.736h-1.412v-.285h3.176v.285zM52.766 21.913h-.348v-4.026h.348v4.026zM57.242 20.658c-.046.426-.204.747-.464.974-.265.226-.613.34-1.054.34-.306 0-.576-.077-.812-.222a1.446 1.446 0 0 1-.544-.634 2.147 2.147 0 0 1-.195-.938v-.52c0-.358.065-.675.195-.951.13-.277.311-.49.553-.639.241-.15.52-.226.83-.226.442 0 .79.117 1.046.349.255.23.404.552.445.964h-.353c-.088-.684-.469-1.023-1.142-1.023-.376 0-.673.135-.896.407-.223.272-.334.652-.334 1.132v.494c0 .466.107.833.325 1.114.218.276.51.416.877.416s.641-.086.827-.253c.186-.172.302-.426.343-.765h.353v-.019zM60.86 20.925a.644.644 0 0 0-.242-.53c-.163-.13-.455-.253-.883-.37-.427-.118-.738-.245-.933-.386-.278-.194-.418-.452-.418-.77 0-.316.13-.556.39-.751.26-.19.59-.29.999-.29.274 0 .52.05.733.154.218.104.386.245.502.43.116.186.176.39.176.616h-.353a.845.845 0 0 0-.288-.665c-.195-.168-.45-.25-.775-.25s-.566.068-.752.209a.638.638 0 0 0-.279.539c0 .203.084.37.251.502.167.131.436.245.808.34.371.095.65.2.84.308.19.104.334.23.432.38.098.15.144.322.144.52 0 .318-.13.571-.39.762-.26.19-.604.285-1.026.285-.293 0-.557-.05-.799-.154a1.22 1.22 0 0 1-.552-.426 1.087 1.087 0 0 1-.19-.63h.348c0 .286.106.508.325.671.218.163.506.244.868.244.32 0 .58-.067.775-.208a.64.64 0 0 0 .293-.548l-.005.018zM64.671 21.913h-.348v-4.026h.348v4.026zM69.295 21.913h-.348l-2.373-3.446v3.446h-.352v-4.026h.353l2.372 3.446v-3.446h.348v4.026zM75.313 18.177h-1.417v3.736h-.348v-3.736h-1.411v-.285h3.175v.285zM79.472 21.913h-.352v-1.93H76.76v1.93h-.348v-4.026h.348v1.807h2.359v-1.807h.352v4.026zM83.233 19.984h-1.931v1.639h2.22v.285h-2.568v-4.025h2.553v.285h-2.205v1.521h1.931v.295zM88.874 20.925a.644.644 0 0 0-.241-.53c-.162-.13-.455-.253-.882-.37-.427-.118-.738-.245-.933-.386-.279-.194-.418-.452-.418-.77 0-.316.13-.556.39-.751.26-.19.59-.29.998-.29.274 0 .52.05.734.154.218.104.385.245.501.43.116.186.177.39.177.616h-.353a.845.845 0 0 0-.288-.665c-.195-.168-.45-.25-.776-.25-.325 0-.566.068-.752.209a.638.638 0 0 0-.278.539c0 
.203.083.37.25.502.168.131.437.245.808.34.372.095.65.2.84.308.191.104.335.23.432.38.098.15.144.322.144.52 0 .318-.13.571-.39.762-.26.19-.603.285-1.026.285-.292 0-.557-.05-.798-.154a1.22 1.22 0 0 1-.553-.426 1.087 1.087 0 0 1-.19-.63h.348c0 .286.107.508.325.671.218.163.506.244.868.244.32 0 .58-.067.776-.208a.64.64 0 0 0 .292-.548l-.005.018zM93.332 20.658c-.047.426-.205.747-.465.974-.264.226-.613.34-1.054.34-.306 0-.575-.077-.812-.222a1.446 1.446 0 0 1-.543-.634 2.147 2.147 0 0 1-.195-.938v-.52c0-.358.065-.675.195-.951.13-.277.31-.49.552-.639.242-.15.52-.226.831-.226.441 0 .79.117 1.045.349.255.23.404.552.446.964h-.353c-.088-.684-.47-1.023-1.142-1.023-.376 0-.674.135-.896.407-.223.272-.335.652-.335 1.132v.494c0 .466.107.833.325 1.114.219.276.511.416.878.416s.64-.086.826-.253c.186-.172.302-.426.344-.765h.353v-.019zM95.04 21.913h-.348v-4.026h.348v4.026zM98.87 19.984H96.94v1.639h2.22v.285H96.59v-4.025h2.553v.285H96.94v1.521h1.932v.295zM103.402 21.913h-.348l-2.373-3.446v3.446h-.353v-4.026h.353l2.373 3.446v-3.446h.348v4.026zM107.808 20.658c-.046.426-.204.747-.464.974-.265.226-.613.34-1.054.34-.306 0-.576-.077-.813-.222a1.455 1.455 0 0 1-.543-.634 2.147 2.147 0 0 1-.195-.938v-.52c0-.358.065-.675.195-.951.13-.277.311-.49.553-.639.241-.15.52-.226.831-.226.441 0 .789.117 1.044.349.256.23.404.552.446.964h-.353c-.088-.684-.469-1.023-1.142-1.023-.376 0-.673.135-.896.407-.223.272-.334.652-.334 1.132v.494c0 .466.107.833.325 1.114.218.276.511.416.877.416.367 0 .641-.086.827-.253.185-.172.302-.426.343-.765h.353v-.019zM111.369 19.984h-1.931v1.639h2.219v.285h-2.567v-4.025h2.553v.285h-2.205v1.521h1.931v.295zM115.163 20.925a.643.643 0 0 0-.242-.53c-.162-.13-.455-.253-.882-.37-.427-.118-.738-.245-.933-.386-.279-.194-.418-.452-.418-.77 0-.316.13-.556.39-.751.26-.19.59-.29.998-.29.274 0 .52.05.734.154.218.104.385.245.501.43.116.186.177.39.177.616h-.353a.846.846 0 0 0-.288-.665c-.195-.168-.451-.25-.776-.25s-.566.068-.752.209a.64.64 0 0 0-.278.539c0 .203.083.37.25.502.168.131.437.245.808.34.372.095.65.2.841.308.19.104.334.23.431.38.098.15.144.322.144.52 0 .318-.13.571-.39.762-.26.19-.603.285-1.026.285-.292 0-.557-.05-.798-.154a1.222 1.222 0 0 1-.553-.426 1.09 1.09 0 0 1-.19-.63h.348c0 .286.107.508.325.671.218.163.506.244.868.244.321 0 .581-.067.776-.208a.639.639 0 0 0 .292-.548l-.004.018z" fill="#12120D"/>
    <path d="M152.849.318v28.55" stroke="#12120D" stroke-width=".5" stroke-miterlimit="10" stroke-linecap="round"/>
    <defs>
        <linearGradient id="xen4ytwv1a" x1="123.413" y1="2.912" x2="141.098" y2="28.187" gradientUnits="userSpaceOnUse">
            <stop stop-color="#97133F"/>
            <stop offset=".38" stop-color="#8C4D74"/>
            <stop offset="1" stop-color="#336DA0"/>
        </linearGradient>
        <linearGradient id="vgmjnx320b" x1="120.516" y1="4.941" x2="138.201" y2="30.211" gradientUnits="userSpaceOnUse">
            <stop stop-color="#97133F"/>
            <stop offset=".38" stop-color="#8C4D74"/>
            <stop offset="1" stop-color="#336DA0"/>
        </linearGradient>
    </defs>
</svg>
",
null,
"https://www.mis.mpg.de/fileadmin/_processed_/f/4/csm_People-Library-Reading-Room-06_2c587c000d.webp",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8791638,"math_prob":0.9564037,"size":1700,"snap":"2023-40-2023-50","text_gpt3_token_len":339,"char_repetition_ratio":0.10908019,"word_repetition_ratio":0.0,"special_character_ratio":0.17588235,"punctuation_ratio":0.06593407,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9969852,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-11-29T08:41:05Z\",\"WARC-Record-ID\":\"<urn:uuid:a14bcb05-d2b1-4891-9ed2-0519c7baa8c7>\",\"Content-Length\":\"226406\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:0a4e17e3-98da-4138-88a4-a954b9b23612>\",\"WARC-Concurrent-To\":\"<urn:uuid:11b15baa-746c-481f-9c1e-574826a07755>\",\"WARC-IP-Address\":\"194.95.185.83\",\"WARC-Target-URI\":\"https://www.mis.mpg.de/publications/preprint-repository/article/2012/issue-67\",\"WARC-Payload-Digest\":\"sha1:TOVZP72UOTFAT2KTX3Q44TVBDVJQ72WH\",\"WARC-Block-Digest\":\"sha1:2FLFF2ASMPSN63GRSMG35DMBTXKECAXM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679100057.69_warc_CC-MAIN-20231129073519-20231129103519-00336.warc.gz\"}"} |
https://xmphysics.com/2023/01/02/appendix-a-standing-wave-resonance-from-first-principle-beyond-syllabus/ | [
"# Appendix A: Standing Wave Resonance from First Principle (Beyond Syllabus)\n\nWhat does the H2 syllabus require to know? Not much really. Just that repeated reflections at both ends of the string causes incident and reflected waves to superpose. At resonant frequencies, a standing wave of large amplitude is formed, with nodes at both ends of the string.\n\nI have noticed that even university textbooks do not provide much detail about what’s really happening in the string. So I have had to form my own thoughts, which I share in this section.\n\nLet’s call the wave that has just departed from the vibrator the 1st generation incident wave. When it returns after the reflection, it becomes the 1st generation reflected wave. The 1st generation incident wave superposes with the 1st generation reflected wave to form, well, the 1st generation standing wave. Assuming no energy loss, the amplitudes of the incident, reflected and standing wave would be A, A and 2A respectively.\n\nOf course, the 1G reflected wave will undergo reflection and sets off as the 2G incident wave, returns as the 2G reflected wave, superposing to form the 2G standing wave. As long as the vibrator keeps on oscillating the string, the waves in the string will keep piling up. So we have the 1G, 2G, 3G, 4G … NG standing waves all in the string at the same time. \n\nAnd they all superpose of course. At most frequencies, all these generations of standing waves would have a progressive phase difference between generations. The resulting destructive interference produces a standing wave of zero (or close to zero) amplitude.\n\nHowever, at resonant frequencies, all these generations of standing waves would be in-phase with one another. The resulting constructive interference produces a standing wave of amplitude much larger than A. Assuming that there are N generations in the string, and no attenuation occurs during reflections, no energy lost to dissipative force, the amplitude of the standing wave would be 2NA.\n\nSo what’s the condition for resonance to occur? Since every incident wave must go forth and back along the length L of the string before setting off as the next generation incident wave, the path difference between generations is simply 2L.",
null,
"$2L=n\\lambda$ ,",
null,
"$n=1,2,3,...$\n\nThis means that the resonant wavelengths are",
null,
"$\\displaystyle \\lambda =\\frac{{2L}}{n},n=1,2,3,...$\n\nSince",
null,
"$v=f\\lambda$, the resonant frequencies are",
null,
"$\\displaystyle f=n\\frac{v}{{2L}},n=1,2,3,...$\n\nClosed Pipe\n\nActually, the reflection at a fixed end adds a phase change of π rad. But on a guitar string, an incident wave must undergo two reflections before setting off as the next generation incident wave, so a total phase change of 2π rad phase per round trip. This allows use to ignore the phase change caused by reflections. If we are talking about a closed pipe, where the total phase change caused by reflections is only π rad (because the reflection at the open end incurs no phase change), the condition for resonance would have been\n\nThis means that the resonant wavelengths are",
null,
"$\\displaystyle \\lambda =\\frac{{4L}}{{2n-1}},n=1,2,3,...$\n\nSince",
null,
"$v=f\\lambda$, the resonant frequencies are",
null,
"$\\displaystyle f=(2n-1)\\frac{v}{{4L}},n=1,2,3,...$\n\n I created this “wave generations” nomenclature myself. You won’t read about iterations of waves described as 1G, 2G, 3G and so on in any textbook."
] | [
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null,
"https://s0.wp.com/latex.php",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9312076,"math_prob":0.9776816,"size":3014,"snap":"2023-14-2023-23","text_gpt3_token_len":646,"char_repetition_ratio":0.16644518,"word_repetition_ratio":0.08300395,"special_character_ratio":0.20238885,"punctuation_ratio":0.10350877,"nsfw_num_words":2,"has_unicode_error":false,"math_prob_llama3":0.9808441,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-07T18:49:11Z\",\"WARC-Record-ID\":\"<urn:uuid:07614ddc-6a8b-47a9-840d-e5913e84f109>\",\"Content-Length\":\"88406\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bca57344-6b41-4f1f-9538-544df6b9f831>\",\"WARC-Concurrent-To\":\"<urn:uuid:8645daf9-0bb6-4819-8ebe-f15d8c8181c9>\",\"WARC-IP-Address\":\"192.0.78.24\",\"WARC-Target-URI\":\"https://xmphysics.com/2023/01/02/appendix-a-standing-wave-resonance-from-first-principle-beyond-syllabus/\",\"WARC-Payload-Digest\":\"sha1:7HMWS5SAWBOOTGYXO2LCZT4Y3NBUEJJI\",\"WARC-Block-Digest\":\"sha1:DOMRI4WTDAGRQV6BDH7IBMV4FG56UDM7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224654012.67_warc_CC-MAIN-20230607175304-20230607205304-00272.warc.gz\"}"} |
https://www.arxiv-vanity.com/papers/quant-ph/0406196/ | [
"arXiv Vanity renders academic papers from arXiv as responsive web pages so you don’t have to squint at a PDF. Read this paper on arXiv.org.\n\n# Improved Simulation of Stabilizer Circuits\n\nScott Aaronson MIT Daniel Gottesman Perimeter Institute\n###### Abstract\n\nThe Gottesman-Knill theorem says that a stabilizer circuit—that is, a quantum circuit consisting solely of CNOT, Hadamard, and phase gates—can be simulated efficiently on a classical computer. This paper improves that theorem in several directions. First, by removing the need for Gaussian elimination, we make the simulation algorithm much faster at the cost of a factor- increase in the number of bits needed to represent a state. We have implemented the improved algorithm in a freely-available program called CHP (CNOT-Hadamard-Phase), which can handle thousands of qubits easily. Second, we show that the problem of simulating stabilizer circuits is complete for the classical complexity class , which means that stabilizer circuits are probably not even universal for classical computation. Third, we give efficient algorithms for computing the inner product between two stabilizer states, putting any -qubit stabilizer circuit into a “canonical form” that requires at most gates, and other useful tasks. Fourth, we extend our simulation algorithm to circuits acting on mixed states, circuits containing a limited number of non-stabilizer gates, and circuits acting on general tensor-product initial states but containing only a limited number of measurements.\n\n###### pacs:\n03.67.Lx, 03.67.Pp, 02.70.-c\n\n## I Introduction\n\nAmong the many difficulties that quantum computer architects face, one of them is almost intrinsic to the task at hand: how do you design and debug circuits that you can’t even simulate efficiently with existing tools? Obviously, if a quantum computer output the factors of a -digit number, then you wouldn’t need to simulate it to verify its correctness, since multiplying is easier than factoring. But what if the quantum computer didn’t work? Ordinarily architects might debug a computer by adding test conditions, monitoring registers, halting at intermediate steps, and so on. But for a quantum computer, all of these standard techniques would probably entail measurements that destroy coherence. Besides, it would be nice to design and debug a quantum computer using classical CAD tools, before trying to implement it!\n\nQuantum architecture is one motivation for studying classical algorithms to simulate and manipulate quantum circuits, but it is not the only motivation. Chemists and physicists have long needed to simulate quantum systems, and they have not had the patience to wait for a quantum computer to be built. Instead, they have developed limited techniques such as Quantum Monte-Carlo (QMC) suzuki for computing properties of certain ground states. More recently, several general-purpose quantum computer simulators have appeared, including Oemer’s quantum programming language QCL oemer , the QuIDD (Quantum Information Decision Diagrams) package of Viamontes et al. vmh ; vrmh , and the parallel quantum computer simulator of Obenland and Despain od . The drawback of such simulators, of course, is that their running time grows exponentially in the number of qubits. This is true not only in the worst case but in practice. For example, even though it uses a variant of binary decision diagrams to avoid storing an entire amplitude vector for some states, Viamontes et al. 
vmh report that the QuIDD package took more than 22 hours to simulate Grover’s algorithm on 40 qubits. With a general-purpose package, then, simulating hundreds or thousands of qubits is out of the question.\n\nA different direction of research has sought to find nontrivial classes of quantum circuits that can be simulated efficiently on a classical computer. For example, Vidal vidal showed that, so long as a quantum computer’s state at every time step has polynomially-bounded entanglement under a measure related to Schmidt rank, the computer can be simulated classically in polynomial time. Notably, in a follow-up paper vidal2 , Vidal actually implemented his algorithm and used it to simulate -dimensional quantum spin chains consisting of hundreds of spins. A second example is a result of Valiant valiant , which reduces the problem of simulating a restricted class of quantum computers to that of computing the Pfaffian of a matrix. The latter is known to be solvable in classical polynomial time. Terhal and DiVincenzo td have shown that Valiant’s class corresponds to a model of noninteracting fermions.\n\nThere is one class of quantum circuits that is known to be simulable in classical polynomial time, that does not impose any limit on entanglement, and that arises naturally in several applications. This is the class of stabilizer circuits introduced to analyze quantum error-correcting codes bdsw ; crss ; gottesman ; gottesman2 . A stabilizer circuit is simply a quantum circuit in which every gate is a controlled-NOT, Hadamard, phase, or -qubit measurement gate. We call a stabilizer circuit unitary if it does not contain measurement gates. Unitary stabilizer circuits are also known as Clifford group circuits.",
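As a small illustration of why CNOT, Hadamard, and phase gates are singled out (a sketch of my own, not part of the paper): each of these gates maps Pauli operators to Pauli operators under conjugation, which is the property the stabilizer formalism exploits. The check below conjugates single- and two-qubit Paulis by H, the phase gate (taken as diag(1, i)), and CNOT, and verifies that the result is again a Pauli operator up to a phase of ±1 or ±i.

```python
import numpy as np
from itertools import product

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = {"I": I, "X": X, "Y": Y, "Z": Z}

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
P = np.array([[1, 0], [0, 1j]], dtype=complex)          # phase gate
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

def pauli_with_phase(M, n_qubits):
    """Return (phase, label) if M equals phase * (tensor product of Paulis), else None."""
    for labels in product("IXYZ", repeat=n_qubits):
        Q = PAULIS[labels[0]]
        for l in labels[1:]:
            Q = np.kron(Q, PAULIS[l])
        for phase in (1, -1, 1j, -1j):
            if np.allclose(M, phase * Q):
                return phase, "".join(labels)
    return None

# Single-qubit gates conjugating single-qubit Paulis
for gate_name, U in [("H", H), ("P", P)]:
    for name, Pm in [("X", X), ("Z", Z)]:
        print(gate_name, name, "->", pauli_with_phase(U @ Pm @ U.conj().T, 1))

# CNOT (control = first qubit) conjugating two-qubit Paulis
for name, Pm in [("XI", np.kron(X, I)), ("IX", np.kron(I, X)),
                 ("ZI", np.kron(Z, I)), ("IZ", np.kron(I, Z))]:
    print("CNOT", name, "->", pauli_with_phase(CNOT @ Pm @ CNOT.conj().T, 2))
```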
null,
"Figure 1: The four types of gate allowed in the stabilizer formalism\n\nStabilizer circuits can be used to perform the encoding and decoding steps for a quantum error-correcting code, and they play an important role in fault-tolerant circuits. However, the stabilizer formalism used to describe these circuits has many other applications. This formalism is rich enough to encompass most of the “paradoxes” of quantum mechanics, including the GHZ (Greenberger-Horne-Zeilinger) experiment ghz , dense quantum coding bw , and quantum teleportation bbcjpw . On the other hand, it is not so rich as to preclude efficient simulation by a classical computer. That conclusion, sometimes known as the Gottesman-Knill theorem, is the starting point for the contributions of this paper.\n\nOur results are as follows. In Section III we give a new tableau algorithm for simulating stabilizer circuits that is faster than the algorithm directly implied by the Gottesman-Knill theorem. By removing the need for Gaussian elimination, this algorithm enables measurements to be simulated in steps instead of (where is the number of qubits), at a cost of a factor- increase in the number of bits needed to represent a quantum state.\n\nSection IV describes CHP, a high-performance stabilizer circuit simulator that implements our tableau algorithm. We present the results of an experiment designed to test how CHP’s performance is affected by properties of the stabilizer circuit being simulated. CHP has already found application in simulations of quantum fault-tolerance circuits cross .\n\nSection V proves that the problem of simulating stabilizer circuits is complete for the classical complexity class . Informally, this means that any stabilizer circuit can be simulated using CNOT gates alone; the availability of Hadamard and phase gates provides at most a polynomial advantage. This result removes some of the mystery about the Gottesman-Knill theorem by showing that stabilizer circuits are unlikely to be capable even of universal classical computation.\n\nIn Section VI we prove a canonical form theorem that we expect will have many applications to the study of stabilizer circuits. The theorem says that given any stabilizer circuit, there exists an equivalent stabilizer circuit that applies a round of Hadamard gates, followed by a round of phase gates, followed by a round of CNOT gates, and so on in the sequence H-C-P-C-P-C-H-P-C-P-C (where H, C, P stand for Hadamard, CNOT, Phase respectively). One immediate corollary, building on a result by Patel, Markov, and Hayes pmh and improving one by Dehaene and De Moor dm , is that any stabilizer circuit on qubits has an equivalent circuit with only gates.\n\nFinally, Section VII extends our simulation algorithm to situations beyond the usual one considered in the Gottesman-Knill theorem. For example, we show how to handle mixed states, without keeping track of pure states from which the mixed states are obtainable by discarding qubits. We also show how to simulate circuits involving a small number of non-stabilizer gates; or involving arbitrary tensor-product initial states, but only a small number of measurements. Both of these latter two simulations take time that is polynomial in the number of qubits, but exponential in the number of non-stabilizer gates or measurements. 
Presumably this exponential dependence is necessary, since otherwise we could simulate arbitrary quantum computations in classical subexponential time.\n\nWe conclude in Section VIII with some directions for further research.\n\n## Ii Preliminaries\n\nWe assume familiarity with quantum computing. This section provides a crash course on the stabilizer formalism, confining attention to those aspects we will need. See Section 10.5.1 of Nielsen and Chuang nc for more details.\n\nThroughout this paper we will use the following four Pauli matrices:\n\n $I=\\begin{pmatrix}1&0\\\\\\\\0&1\\end{pmatrix}\\qquad X=\\begin{pmatrix}0&1\\\\\\\\1&0\\end{pmatrix}\\qquad Y=\\begin{pmatrix}0&-i\\\\\\\\i&0\\end{pmatrix}\\qquad Z=\\begin{pmatrix}1&0\\\\\\\\0&-1\\end{pmatrix}$\n\nThese matrices satisfy the following identities:\n\n $X^{2}=Y^{2}=Z^{2}=I$\n\n $XY=iZ\\quad YZ=iX\\quad ZX=iY\\quad YX=-iZ\\quad ZY=-iX\\quad XZ=-iY$\n\nIn particular, every two Pauli matrices either commute or anticommute. The rule for whether to include a minus sign is the same as that for quaternions, if we replace by .\n\nWe define the group of -qubit Pauli operators to consist of all tensor products of Pauli matrices, together with a multiplicative factor of or (so the total number of operators is ). We omit tensor product signs for brevity; thus should be read (we will use to represent the Pauli group operation). Given two Pauli operators and , it is immediate that commutes with if and only if the number of indices such that anticommutes with is even; otherwise anticommutes with . Also, for all , if has a phase of then , whereas if has a phase of then .\n\nGiven a pure quantum state , we say a unitary matrix stabilizes if is an eigenvector of with eigenvalue , or equivalently if where we do not ignore global phase. To illustrate, the following table lists the Pauli matrices and their opposites, together with the unique -qubit states that they stabilize:\n\nThe identity matrix stabilizes all states, whereas stabilizes no states.\n\nThe key idea of the stabilizer formalism is to represent a quantum state , not by a vector of amplitudes, but by a stabilizer group, consisting of unitary matrices that stabilize . Notice that if and both stabilize then so do and , and thus the set of stabilizers of is a group. Also, it is not hard to show that if then . But why does this strange representation buy us anything? To write down generators for (even approximately) still takes exponentially many bits in general by an information-theoretic argument. Indeed stabilizers seem worse than amplitude vectors, since they require about parameters to specify instead of about !\n\nRemarkably, though, a large and interesting class of quantum states can be specified uniquely by much smaller stabilizer groups—specifically, the intersection of with the Pauli group gottesman ; gottesman2 ; crss . This class of states, which arises in quantum error correction and many other settings, is characterized by the following theorem.\n\n###### Theorem 1\n\nGiven an -qubit state , the following are equivalent:\n\n1. can be obtained from by CNOT, Hadamard, and phase gates only.\n\n2. can be obtained from by CNOT, Hadamard, phase, and measurement gates only.\n\n3. is stabilized by exactly Pauli operators.\n\n4. is uniquely determined by , or the group of Pauli operators that stabilize .\n\nBecause of Theorem 1, we call any circuit consisting entirely of CNOT, Hadamard, phase, and measurement gates a stabilizer circuit, and any state obtainable by applying a stabilizer circuit to a stabilizer state. As a warmup to our later results, the following proposition counts the number of stabilizer states.\n\n###### Proposition 2\n\nLet be the number of pure stabilizer states on qubits. 
Then\n\n $N=2^{n}\\prod_{k=0}^{n-1}\\left( 2^{n-k}+1 \\right)=2^{(1/2+o(1))n^{2}}.$\n\nProof. We have , where is the total number of generating sets and is the number of equivalent generating sets for a given stabilizer . To find , note that there are choices for the first generator (ignoring overall sign), because it can be anything but the identity. The second generator must commute with and cannot be or , so there are choices for . Similarly, must commute with and , but cannot be in the group generated by them, so there are choices for it, and so on. Hence, including overall signs,\n\n $G=2^{n}\\prod_{k=0}^{n-1}\\left( \\frac{4^{n}}{2^{k}}-2^{k} \\right)=2^{n(n+1)/2}\\prod_{k=0}^{n-1}\\left( 4^{n-k}-1 \\right).$\n\nSimilarly, to find , note that given , there are choices for , choices for , choices for , and so on. Thus\n\n $A=\\prod_{k=0}^{n-1}\\left( 2^{n}-2^{k} \\right)=2^{n(n-1)/2}\\prod_{k=0}^{n-1}\\left( 2^{n-k}-1 \\right).$\n\nTherefore\n\n $N=\\frac{G}{A}=2^{n}\\prod_{k=0}^{n-1}\\frac{4^{n-k}-1}{2^{n-k}-1}=2^{n}\\prod_{k=0}^{n-1}\\left( 2^{n-k}+1 \\right).$\n\n## Iii Efficient Simulation of Stabilizer Circuits\n\nTheorem 1 immediately suggests a way to simulate stabilizer circuits efficiently on a classical computer. A well-known fact from group theory says that any finite group has a generating set of size at most . So if is a stabilizer state on qubits, then the group of Pauli operators that stabilize has a generating set of size . Each generator takes bits to specify: bits for each of the Pauli matrices, and bit for the phase 111If , then can only have a phase of , not : for in the latter case would be in , but we saw that does not stabilize anything.. So the total number of bits needed to specify is . What Gottesman and Knill showed, furthermore, is that these bits can be updated in polynomial time after a CNOT, Hadamard, phase, or measurement gate is applied to . The updates corresponding to unitary gates are very efficient, requiring only time for each gate.\n\nHowever, the updates corresponding to measurements are not so efficient. We can decide in time whether a measurement of qubit will yield a deterministic or random outcome. If the outcome is random, then updating the state after the measurement takes time, but if the outcome is deterministic, then deciding whether the outcome is or seems to require inverting an matrix, which takes time in theory cw but order time in practice. What that complexity means is that simulations of, say, -qubit systems would already be prohibitive on a desktop PC, given that measurements are frequent.\n\nThis section describes a new simulation algorithm, by which both deterministic and random measurements can be performed in time. The cost is a factor- increase in the number of bits needed to specify a state. For in addition to the stabilizer generators, we now store “destabilizer” generators, which are Pauli operators that together with the stabilizer generators generate the full Pauli group . So the number of bits needed is .\n\nThe algorithm represents a state by a tableau consisting of binary variables for all , , and for all 222Dehaene and De Moor dm came up with something like this tableau representation independently, though they did not use it to simulate measurements in time.:\n\nRows to of the tableau represent the destabilizer generators , and rows to represent the stabilizer generators . If , then bits determine the Pauli matrix : means , means , means , and means . Finally, is if has negative phase and if has positive phase. 
As an example, the -qubit state is stabilized by the Pauli operators and , so a possible tableau for is\n\nIndeed, we will take the obvious generalization of the above “identity matrix” to be the standard initial tableau.\n\nThe algorithm uses a subroutine called , which sets generator equal to . Its purpose is to keep track, in particular, of the phase bit , including all the factors of that appear when multiplying Pauli matrices. The subroutine is implemented as follows.\n\nrowsum: Let be a function that takes bits as input, and that returns the exponent to which is raised (either , , or ) when the Pauli matrices represented by and are multiplied. More explicitly, if then ; if then ; if and then ; and if and then . Then set if\n\n 2rh+2ri+n∑j=1g(xij,zij,xhj,zhj)≡0(mod4),\n\nand set if the sum is congruent to mod (it will never be congruent to or ). Next, for all , set and set (here and throughout, denotes exclusive-OR).\n\nWe now give the algorithm. It will be convenient to add an additional row for “scratch space.” The initial state has for all , and and for all and , where is if and otherwise. The algorithm proceeds through the gates in order; for each one it does one of the following depending on the gate type.\n\nCNOT from control to target . For all , set , , and .\n\nHadamard on qubit . For all , set and swap with .\n\nPhase on qubit . For all , set and then set .\n\nMeasurement of qubit in standard basis. First check whether there exists a such that .\n\nCase I: Such a exists (if more than one exists, then let be the smallest). In this case the measurement outcome is random, so the state needs to be updated. This is done as follows. First call for all such that and . Second, set entire the row equal to the row. Third, set the row to be identically , except that is or with equal probability, and . Finally, return as the measurement outcome.\n\nCase II: Such an does not exist. In this case the outcome is determinate, so measuring the state will not change it; the only task is to determine whether or is observed. This is done as follows. First set the row to be identically . Second, call for all such that . Finally return as the measurement outcome.\n\nOnce we interpret the , , and bits for as representing generators of , and as representing the group operation in , the correctness of the CNOT, Hadamard, phase, and random measurement procedures follows immediately from previous analyses by Gottesman gottesman2 . It remains only to explain why the determinate measurement procedure is correct. Observe that commutes with if the symplectic inner product\n\n Rh⋅Ri=xh1zi1⊕⋯⊕xhnzin⊕xi1zh1⊕⋯⊕xinzhn\n\nequals , and anticommutes with if . Using that fact it is not hard to show the following.\n\n###### Proposition 3\n\nThe following are invariants of the tableau algorithm:\n\n• generate , and generate .\n\n• commute.\n\n• For all , anticommutes with .\n\n• For all such that , commutes with .\n\nNow suppose that a measurement of qubit yields a determinate outcome. Then the operator must commute with all elements of the stabilizer, so\n\n n∑h=1chRh+n=±Za\n\nfor a unique choice of . Our goal is to determine the ’s, since then by summing the appropriate ’s we can learn whether the phase representing the outcome is positive or negative. Notice that for all ,\n\n ci≡n∑h=1ch(Ri⋅Rh+n)≡Ri⋅n∑h=1chRh+n≡Ri⋅Za(mod2)\n\nby Proposition 3. 
Therefore by checking whether anticommutes with —which it does if and only if —we learn whether and thus whether needs to be called.\n\nWe end this section by explaining how to compute the inner product between two stabilizer states and , given their full tableaus. The inner product is if the stabilizers contain the same Pauli operator with opposite signs. Otherwise it equals , where is the minimum, over all sets of generators for and for , of the number of for which . For example, and have inner product , since . The proof is easy: it suffices to observe that neither the inner product nor is affected if we transform and to and respectively, for some unitary such that has the trivial stabilizer. This same observation yields an algorithm to compute the inner product: first transform the tableau of to that of using Theorem 8; then perform Gaussian elimination on the tableau of to obtain . Unfortunately, this algorithm takes order steps.\n\n## Iv Implementation and Experiments\n\nWe have implemented the tableau algorithm of Section III in a C program called CHP (CNOT-Hadamard-Phase), which is available for download 333At www.scottaaronson.com/chp. CHP takes as input a program in a simple “quantum assembly language,” consisting of four instructions: c (apply CNOT from control to target ), h (apply Hadamard to ), p (apply phase gate to ), and m (measure in the standard basis, output the result, and update the state accordingly). Here and are nonnegative integers indexing qubits; the maximum or that occurs in any instruction is assumed to be , where is the number of qubits. As an example, the following program demonstrates the famous quantum teleportation protocol of Bennett et al. bbcjpw :\n\nEPR pair is prepared (qubit is Alice’s half; qubit is Bob’s half)\n\nAlice interacts qubit (the state to be teleported) with her half of the EPR pair\n\nAlice sends classical bits to Bob\n\nBob uses the bits from Alice to recover the teleported state\n\nWe also have available CHP programs that demonstrate the Bennett-Wiesner dense quantum coding protocol bw , the GHZ (Greenberger-Horne-Zeilinger) experiment ghz , Simon’s algorithm simon , and the Shor -qubit quantum error-correcting code shor .\n\nOur main design goal for CHP was high performance with a large number of qubits and frequent measurements. The only reason to use CHP instead of a general-purpose quantum computer simulator such as QuIDD vmh or QCL oemer is performance, so we wanted to leverage that advantage and make thousands of qubits easily simulable rather than just hundreds. Also, the results of Section V suggest that classical postprocessing is unavoidable for stabilizer circuits, since stabilizer circuits are not even universal for classical computation. So if we want to simulate (for example) Simon’s algorithm, then one measurement is needed for each bit of the first register. CHP’s execution time will be dominated by these measurements, since as discussed in Section III, each unitary gate takes only time to simulate.\n\nOur experimental results, summarized in Figure 2, show that CHP makes practical the simulation of arbitrary stabilizer circuits on up to about qubits. Since the number of bits needed to represent qubits grows quadratically in , the main limitation is available memory. On a machine with 256MB of RAM, CHP can handle up to about qubits before virtual memory is needed, in which case thrashing makes its performance intolerable. 
The original version of CHP required ~ bits for memory; we were able to reduce this to ~ bits, enabling a 41% increase in the number of qubits for a fixed memory size. More trivially, we obtained an eightfold improvement in memory by storing bits to each byte instead of . Not only did that change increase the number of storable qubits by 183%, but it also made CHP about 50% faster—presumably because (1) the subroutine now needed to exclusive-OR only as many bytes, and (2) the memory penalty was reduced. Storing the bits in -bit words yielded a further 10% performance gain, presumably because of (1) rather than (2) (since even with byte-addressing, a whole memory line is loaded into the cache on a cache miss).\n\nAs expected, the experimentally measured execution time per unitary gate grows linearly in , whereas the time per measurement grows somewhere between linearly and quadratically, depending on the states being measured. Thus the time needed for measurements generally dominates execution time. So the key question is this: what properties of a circuit determine whether the time per measurement is linear, quadratic, or somewhere in between? To investigate this question we performed the following experiment.\n\nWe randomly generated stabilizer circuits on qubits, for ranging from to in increments of . For each , we used the following distribution over circuits: Fix a parameter ; then choose random unitary gates: a CNOT from control to target , a Hadamard on qubit , or a phase gate on qubit , each with probability , where and are drawn uniformly at random from subject to . Then measure qubit for each in sequence.\n\nWe simulated the resulting circuits in CHP. For each circuit, we counted the number of seconds needed for all measurement steps (ignoring the time for unitary gates), then divided by to obtain the number of seconds per measurement. We repeated the whole procedure for ranging from to in increments of .\n\nThere were several reasons for placing measurements at the end of a circuit rather than interspersing them with unitary gates. First, doing so models how many quantum algorithms actually work (apply unitary gates, then measure, then perform classical postprocessing); second, it allowed us to ignore the effect of measurements on subsequent computation; third, it ‘standardized’ the measurement stage, making comparisons between different circuits more meaningful; and fourth, it made simulation harder by increasing the propensity for the measurements to be nontrivially correlated.\n\nThe decision to make the number of unitary gates proportional to was based on the following heuristic argument. The time needed to simulate a measurement is determined by how many times the procedure is called, which in turn is determined by how many ’s there are such that (where is the qubit being measured). Initially if and only if , so a measurement takes time. For a random state, by contrast, the expected number of ’s such that is by symmetry, so a measurement takes order time. In general, the more ’s there are in the tableau, the longer measurements take. But where does the transition from linear to quadratic time occur, and how sharp is it?\n\nConsider people, each of whom initially knows one secret (with no two people knowing the same secret). Each day, two people chosen uniformly at random meet and exchange all the secrets they know. What is the expected number of days until everyone knows everyone else’s secrets? 
Intuitively, the answer is , because any given person has to wait days between meetings, and at each meeting, the number of secrets he knows approximately doubles (or towards the end, the number of secrets he doesn’t know is approximately halved). Replacing people by qubits and meetings by CNOT gates, one can see why a ‘phase transition’ from a sparse to a dense tableau might occur after random unitary gates are applied. However, this argument does not pin down the proportionality constant , so that is what we varied in the experiment.\n\nThe results of the experiment are presented in Figure 2. When , the time per measurement appears to grow roughly linearly in , whereas when (meaning that the number of unitary gates has only doubled), the time per measurement appears to grow roughly quadratically, so that running the simulations took hours of computing time 444Based on our heuristic analysis, we conjecture that for intermediate , the time per measurement grows as for some . However, we do not have enough data to confirm or refute this conjecture. Thus, Figure 2 gives striking evidence for a “phase transition” in simulation time, as increasing the number of unitary gates by only a constant factor shifts us from a regime of simple states that are easy to measure, to a regime of complicated states that are hard to measure. This result demonstrates that CHP’s performance depends strongly on the circuit being simulated. Without knowing what sort of tableaus a circuit will produce, all we can say is that the time per measurement will be somewhere between linear and quadratic in .",
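To make the tableau representation of Section III more concrete, here is a minimal Python sketch of the unitary part of such a simulator. The inline update formulas were lost when this page was scraped, so the phase-bit updates below are my reconstruction of the standard tableau rules rather than a verbatim copy of the paper's; treat them as an illustration. Each gate touches only one or two columns of the 2n x n bit matrices, which is why the unitary gates are cheap and the measurements dominate the running time in the experiment described above.

```python
import numpy as np

class Tableau:
    """Destabilizer/stabilizer tableau for n qubits (unitary gates only).

    Rows 0..n-1 are destabilizer generators, rows n..2n-1 are stabilizers.
    Row i is encoded by bit vectors x[i, :], z[i, :] and a phase bit r[i].
    Phase-bit updates follow the standard tableau rules (a reconstruction,
    since the exact formulas are missing from the scraped text above).
    """

    def __init__(self, n):
        self.n = n
        self.x = np.zeros((2 * n, n), dtype=np.uint8)
        self.z = np.zeros((2 * n, n), dtype=np.uint8)
        self.r = np.zeros(2 * n, dtype=np.uint8)
        self.x[np.arange(n), np.arange(n)] = 1       # destabilizers X_1..X_n
        self.z[n + np.arange(n), np.arange(n)] = 1   # stabilizers   Z_1..Z_n

    def cnot(self, a, b):
        """CNOT with control a and target b."""
        self.r ^= self.x[:, a] & self.z[:, b] & (self.x[:, b] ^ self.z[:, a] ^ 1)
        self.x[:, b] ^= self.x[:, a]
        self.z[:, a] ^= self.z[:, b]

    def hadamard(self, a):
        self.r ^= self.x[:, a] & self.z[:, a]
        self.x[:, a], self.z[:, a] = self.z[:, a].copy(), self.x[:, a].copy()

    def phase(self, a):
        self.r ^= self.x[:, a] & self.z[:, a]
        self.z[:, a] ^= self.x[:, a]

    def stabilizer_row(self, i):
        """Human-readable stabilizer generator i (0-based)."""
        row = self.n + i
        label = "".join("IXZY"[int(self.x[row, j]) + 2 * int(self.z[row, j])]
                        for j in range(self.n))
        return ("-" if self.r[row] else "+") + label

t = Tableau(2)
t.hadamard(0)
t.cnot(0, 1)   # prepares a Bell pair from |00>
print([t.stabilizer_row(i) for i in range(2)])   # ['+XX', '+ZZ']
```

Measurements would be handled on top of this structure with the rowsum subroutine described in Section III; they are omitted here to keep the sketch short.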
null,
"Figure 2: Average time needed to simulate a measurement after applying βnlog2n unitary gates to n qubits, on a 650MHz Pentium III with 256MB RAM.\n\n## V Complexity of Simulating Stabilizer Circuits\n\nThe Gottesman-Knill theorem shows that stabilizer circuits are not universal for quantum computation, unless quantum computers can be simulated efficiently by classical ones. To a computer scientist, this theorem immediately raises a question: where do stabilizer circuits sit in the hierarchy of computational complexity theory? In this section we resolve that question, by proving that the problem of simulating stabilizer circuits is complete for a classical complexity class known as (pronounced “parity-L”) 555See www.complexityzoo.com for definitions of and several hundred other complexity classes. The usual definition of is as the class of all problems that are solvable by a nondeterministic logarithmic-space Turing machine, that accepts if and only if the total number of accepting paths is odd. But there is an alternate definition that is probably more intuitive to non-computer-scientists. This is that is the class of problems that reduce to simulating a polynomial-size CNOT circuit, i.e. a circuit composed entirely of NOT and CNOT gates, acting on the initial state . (It is easy to show that the two definitions are equivalent, but this would require us first to explain what the usual definition means!)\n\nFrom the second definition, it is clear that ; in other words, any problem reducible to simulating CNOT circuits is also solvable in polynomial time on a classical computer. But this raises a question: what do we mean by “reducible”? Problem is reducible to problem if any instance of problem can be transformed into an instance of problem ; this means that problem is “harder” than problem in the sense that the ability to answer an arbitrary instance of problem implies the ability to answer an arbitrary instance of problem (but not necessarily vice-versa).\n\nWe must, however, insist that the reduction transforming instances of problem into instances of problem not be too difficult to perform. Otherwise, we could reduce hard problems to easy ones by doing all the difficult work in the reduction itself. In the case of , we cannot mean “reducible in polynomial time,” which is a common restriction, since then the reduction would be at least as powerful as the problem it reduces to! Instead we require the reduction to be performed in the complexity class , or logarithmic space—that is, by a Turing machine that is given a read-only input of size , and a write-only output tape, but only bits of read/write memory. The reduction works as follows: first specifies a CNOT circuit on its output tape; then an “oracle” tells the circuit’s output (which we can take to be, say, the value of the first qubit after the circuit is applied), then specifies another CNOT circuit on its output tape, and so on. A useful result of Hertrampf, Reith, and Vollmer hrv says that this seemingly powerful kind of reduction, in which can make multiple calls to the CNOT oracle, is actually no more powerful than the kind with only one oracle call. (In complexity language, what hrv showed is that : any problem in with oracle is also in itself.)\n\nIt is conjectured that ; in other words, that an oracle for simulating CNOT circuits would let an machine compute more functions than it could otherwise. 
Intuitively, this is because writing down the intermediate states of such a circuit requires more than a logarithmic number of read/write bits. Indeed, contains some surprisingly “hard” problems, such as inverting matrices over damm . On the other hand, it is also conjectured that , meaning that even with an oracle for simulating CNOT circuits, an machine could not simulate more general circuits with AND and OR gates. As usual in complexity theory, neither conjecture has been proved.\n\nNow define the Gottesman-Knill problem as follows. We are given a stabilizer circuit as a sequence of gates of the form , , , or , where are indices of qubits. The problem is to decide whether qubit will be with certainty after is applied to the initial state . (If not, then qubit will be with probability either or .)\n\nSince stabilizer circuits are a generalization of CNOT circuits, it is obvious that Gottesman-Knill is -hard (i.e. any problem can be reduced to it). Our result says that Gottesman-Knill is in . Intuitively, this means that any stabilizer circuit can be simulated efficiently using CNOT gates alone—the additional availability of Hadamard and phase gates gives stabilizer circuits at most a polynomial advantage. In our view, this surprising fact helps to explain the Gottesman-Knill theorem, by providing strong evidence that stabilizer circuits are not even universal for classical computation (assuming, of course, that classical postprocessing is forbidden).\n\n###### Theorem 4\n\nGottesman-Knill is in .\n\nProof. We will show how to solve Gottesman-Knill using a logarithmic-space machine with an oracle for simulating CNOT circuits. By the result of Hertrampf, Reith, and Vollmer hrv described above, this will suffice to prove the theorem.\n\nBy the principle of deferred measurement, we can assume that the stabilizer circuit has only a single measurement gate at the end (say of qubit ), with all other measurements replaced by CNOT’s into ancilla qubits. In the tableau algorithm of Section III, let be the values of the variables after gates of have been applied. Then will simulate by computing these values. The first task of is to decide whether the measurement has a determinate outcome—or equivalently, whether for every , where is the number of unitary gates. Observe that in the CNOT, Hadamard, and phase procedures, every update to an or variable replaces it by the sum modulo of one or two other or variables. Also, iterating over all and takes only bits of memory. Therefore, despite its memory restriction, can easily write on its output tape a description of a CNOT circuit that simulates the tableau algorithm using bits (the ’s being omitted), and that returns for any desired . Then to decide whether the measurement outcome is determinate, simply iterates over all from to .\n\nThe hard part is to decide whether or is measured in case the measurement outcome is determinate, for this problem involves the variables, which do not evolve in a linear way as the ’s and ’s do. Even worse, it involves the complicated-looking and nonlinear procedure. Fortunately, though, it turns out that the measurement outcome can be computed by keeping track of a single complex number . This is a product of phases of the form or , and therefore takes only bits to specify. Furthermore, although the “obvious” ways to compute use more than bits of memory, can get around that by making liberal use of the oracle.\n\nFirst computes what would be if the CNOT, Hadamard, and phase procedures did not modify the ’s. 
Let be a Pauli matrix with a phase of or , which therefore takes bits to specify. Also, let be the Pauli matrix represented by the bits in the usual way: , , , . Then the procedure is as follows.\n\nfor to\n\nfor to\n\nif then\n\nnext\n\nmultiply by the phase of ( or )\n\nnext\n\nThe “answer” is if and if (note that will never be at the end). However, also needs to account for the ’s, as follows.\n\nfor to\n\nif\n\nfor to\n\nif gate is a Hadamard or phase on\n\nif then\n\nend if\n\nif gate is a CNOT from to\n\nif then\n\nend if\n\nnext\n\nend if\n\nnext\n\nThe measurement outcome, , is then if and if . As described above, the machine needs only bits to keep track of the loop indices , and additional bits to keep track of other variables. Its correctness follows straightforwardly from the correctness of the tableau algorithm.\n\nFor a problem to be -complete simply means that it is -hard and in . Thus, a corollary of Theorem 4 is that Gottesman-Knill is -complete.\n\n## Vi Canonical Form\n\nHaving studied the simulation of stabilizer circuits, in this section we turn our attention to manipulating those circuits. This task is of direct relevance to quantum computer architecture: because the effects of decoherence build up over time, it is imperative (even more so than for classical circuits) to minimize the number of gates as well as wires and other resources. Even if fault-tolerant techniques will eventually be used to tame decoherence, there remains the bootstrapping problem of building the fault-tolerance hardware! In that regard we should point out that fault-tolerance hardware is likely to consist mainly of CNOT, Hadamard, and phase gates, since the known fault-tolerant constructions (for example, that of Aharonov and Ben-Or ab ) are based on stabilizer codes.\n\nAlthough there has been some previous work on synthesizing CNOT circuits iky ; pmh ; mn and general classical reversible circuits spmh ; lckl , to our knowledge there has not been work on synthesizing stabilizer circuits. In this section we prove a canonical form theorem that is extremely useful for stabilizer circuit synthesis. The theorem says that given any circuit consisting of CNOT, Hadamard, and phase gates, there exists an equivalent circuit that applies a round of Hadamard gates only, then a round of CNOT gates only, and so on in the sequence H-C-P-C-P-C-H-P-C-P-C. One easy corollary of the theorem is that any tableau satisfying the commutativity conditions of Proposition 3 can be generated by some stabilizer circuit. Another corollary is that any unitary stabilizer circuit has an equivalent circuit with only gates.\n\nGiven two -qubit unitary stabilizer circuits , we say that and are equivalent if for all stabilizer states , where is the final state when is applied to 666The reason we restrict attention to unitary circuits is simply that, if measurements are included, then it is unclear what it even means for two circuits to be equivalent. For example, does deferring all measurements to the end of a computation preserve equivalence or not?. By linearity, it is easy to see that equivalent stabilizer circuits will behave identically on all states, not just stabilizer states. Furthermore, there exists a one-to-one correspondence between circuits and tableaus:\n\n###### Lemma 5\n\nLet be unitary stabilizer circuits, and let be their respective final tableaus when we run them on the standard initial tableau. Then and are equivalent if and only if .\n\nProof. Clearly if and are equivalent. 
For the other direction, it suffices to observe that a unitary stabilizer circuit acts linearly on Pauli operators (that is, rows of the tableau): if it maps to and to , then it maps to . Since the rows of the standard initial tableau form a basis for , the lemma follows.\n\nOur proof of the canonical form theorem will use the following two lemmas.\n\n###### Lemma 6\n\nGiven an -qubit stabilizer state, it is always possible to apply Hadamard gates to a subset of the qubits so as to make the matrix have full rank (or equivalently, make all basis states have nonzero amplitude).\n\nProof. We can always perform row additions on the stabilizer matrix without changing the state that it represents. Suppose the matrix has rank ; then by Gaussian elimination, we can put the stabilizer matrix in the form\n\nwhere is and has rank . Then since the rows are linearly independent, must have rank ; therefore it has an submatrix of full rank. Let us permute the columns of the and matrices simultaneously to obtain\n\nand then perform Gaussian elimination on the bottom rows to obtain\n\nNow commutativity relations imply\n\n (A1A2)(DTI)=0\n\nand therefore . Notice that this implies that the matrix has full rank, since otherwise the matrix would have column rank less than . So performing Hadamards on the rightmost qubits yields a state\n\nwhose matrix has full rank.\n\n###### Lemma 7\n\nFor any symmetric matrix , there exists a diagonal matrix such that , with some invertible binary matrix.\n\nProof. We will let be a lower-triangular matrix with s all along the diagonal:\n\n Mii=1 (1) Mij=0 i\n\nSuch an is always invertible. Then there exists a diagonal such that if and only if\n\n Aij=∑kMikMjk (3)\n\nfor all pairs with . (We pick appropriately to satisfy the equations for automatically, and both sides of the equation are symmetric, covering the cases with .)\n\nWe will perform induction on and to solve for the undetermined elements of . For the base case, we know that . We will determine for by supposing we have already determined for either , or , . We consider equation (3) for and note that unless . Then\n\n Aij=∑k\n\nBy the induction hypothesis, we have already determined in the sum both (since ) and (since and ), so this equation uniquely determines . We can thus find a unique that satisfies (3) for all .\n\nSay a unitary stabilizer circuit is in canonical form if it consists of rounds in the sequence H-C-P-C-P-C-H-P-C-P-C.\n\n###### Theorem 8\n\nAny unitary stabilizer circuit has an equivalent circuit in canonical form.\n\nProof. Divide a tableau into four matrices , , , and , containing the destabilizer bits, destabilizer bits, stabilizer bits, and stabilizer bits respectively:\n\n(We can ignore the phase bits .) Since unitary circuits are reversible, by Lemma 5 it suffices to show how to obtain the standard initial tableau starting from an arbitrary by applying CNOT, Hadamard, and phase gates 777Actually, this gives the canonical form for the inverse of the circuit, but of course the same argument holds for the inverse circuit too, which is also a stabilizer circuit. 
We cannot use row additions, since although they leave states invariant they do not in general leave circuits invariant.\n\nThe procedure is as follows.\n\n(1) Use Hadamards to make have full rank (this is possible by Lemma 6).\n\n(2) Use CNOT’s to perform Gaussian elimination on , producing\n\n(3) Commutativity of the stabilizer implies that is symmetric, therefore is symmetric, and we can apply phase gates to add a diagonal matrix to and use Lemma 7 to convert to the form for some invertible .\n\n(4) Use CNOT’s to produce\n\nNote that when we map to , we also map to .\n\n(5) Apply phases to all qubits to obtain\n\nSince is full rank, there exists some subset of qubits such that applying two phases in succession to every will preserve the above tableau, but set . Apply two phases to every .\n\n(6) Use CNOT’s to perform Gaussian elimination on , producing\n\nBy commutativity relations, , therefore .\n\n(8) Now commutativity of the destabilizer implies that is symmetric, therefore we can again use phase gates and Lemma 7 to make for some invertible .\n\n(9) Use CNOT’s to produce\n\n(10) Use phases to produce\n\nthen by commutativity relations, . Next apply two phases each to some subset of qubits in order to preserve the above tableau, but set .\n\n(11) Use CNOT’s to produce\n\nSince Theorem 8 relied only on a tableau satisfying the commutativity conditions, not on its being generated by some stabilizer circuit, an immediate corollary is that any tableau satisfying the conditions is generated by some stabilizer circuit. We can also use Theorem 8 to answer the following question: how many gates are needed for an -qubit stabilizer circuit in the worst case? Cleve and Gottesman cg showed that gates suffice for the special case of state preparation, and Gottesman gottesman3 and Dehaene and De Moor dm showed that gates suffice for stabilizer circuits more generally; even these results were not obvious a priori. However, with the help of our canonical form theorem we can show a stronger upper bound.\n\n###### Corollary 9\n\nAny unitary stabilizer circuit has an equivalent circuit with only gates.\n\nProof. Patel, Markov, and Hayes pmh showed that any CNOT circuit has an equivalent CNOT circuit with only gates. So given a stabilizer circuit , first put into canonical form, then minimize the CNOT segments. Clearly the Hadamard and Phase segments require only gates each.\n\nCorollary 9 is easily seen to be optimal by a Shannon counting argument: there are distinct stabilizer circuits on qubits, but at most with gates.\n\nA final remark: as noted by Moore and Nilsson mn , any CNOT circuit has an equivalent CNOT circuit with gates and parallel depth . Thus, using the same idea as in Corollary 9, we obtain that any unitary stabilizer circuit has an equivalent stabilizer circuit with gates and parallel depth . (Moore and Nilsson showed this for the special case of stabilizer circuits composed of CNOT and Hadamard gates only.)\n\n## Vii Beyond Stabilizer Circuits\n\nIn this section, we discuss generalizations of stabilizer circuits that are still efficiently simulable. The first (easy) generalization, in Section VII.1, is to allow the quantum computer to be in a mixed rather than a pure state. Mixed states could be simulated by simply purifying the state, and then simulating the purification, but we present an alternative and slightly more efficient strategy.\n\nThe second generalization, in Section VII.2, is to initial states other than the computational basis state. 
Taken to an extreme, one could even have noncomputable initial states. When combined with arbitrary quantum circuits, such quantum advice is very powerful, although its exact power (relative to classical advice) is unknown aaronson . We consider a more modest situation, in which the initial state may include specific ancilla states, consisting of at most qubits each. The initial state is therefore a tensor product of blocks of qubits. Given an initial state of this form and general stabilizer circuits, including measurements and classical feedback based on measurement outcomes, universal quantum computation is again possible shor2 ; gc . However, we show that an efficient classical simulation exists, provided only a few measurements are allowed.\n\nThe final generalization, in Section VII.3, is to circuits containing a few non-stabilizer gates. The qualifier “few” is essential here, since it is known that unitary stabilizer circuits plus any additional gate yields a universal set of quantum gates nrs ; solovay . The running time of our simulation procedure is polynomial in , the number of qubits, but is exponential in the , the number of non-stabilizer gates.\n\n### vii.1 Mixed States\n\nWe first present the simulation for mixed states. We allow only stabilizer mixed states—that is, states that are uniform distributions over all states in a subspace (or equivalently, all stabilizer states in the subspace) with a given stabilizer of generators. Such mixed states can always be written as the partial trace of a pure stabilizer state, which immediately provides one way of simulating them.\n\nIt will be useful to see how to write the density matrix of the mixed state in terms of the stabilizer. The operator , when is a Pauli operator, is a projection onto the eigenspace of . Therefore, if the stabilizer of a pure state has generators , then the density matrix for that state is\n\n ρ=12nn∏i=1(I+Mi).\n\nThe density matrix for a stabilizer mixed state with stabilizer generated by is\n\n ρ=12rr∏i=1(I+Mi).\n\nTo perform our simulation, we find a collection of operators and that commute with both the stabilizer and the destabilizer. We can choose them so that for , but . This can be done by solving a set of linear equations, which in practice takes time . If we start with an initial mixed state, we will assume it is of the form (so on the first qubits and the completely mixed state on the last qubits). In that case, we choose and .\n\nWe could purify this state by adding and to the stabilizer and and to the destabilizer for . Then we could simulate the system by just simulating the evolution of this pure state through the circuit; the extra qubits are never altered.\n\nA more economical simulation is possible, however, by just keeping track of the original -generator stabilizer and destabilizer, plus the operators and . Formally, this allows us to maintain a complete tableau and generalize the tableau algorithm from Section III. We place the generators of the stabilizer as rows of the tableau, and the corresponding elements of the destabilizer as rows . The new operators and () become rows and , respectively. Let if and if . Then we have that rows and commute unless , in which case and anticommute.\n\nWe can keep track of this new kind of tableau in much the same way as the old kind. Unitary operations transform the new rows the same way as rows of the stabilizer or destabilizer. 
For example, to perform a CNOT from control qubit to target qubit , set and , for all .\n\nMeasurement of qubit is slightly more complex than before. There are now three cases:\n\nCase I: for some . In this case anticommutes with an element of the stabilizer, and the measurement outcome is random. We update as before, for all rows of the tableau.\n\nCase II: for all . In this case is in the stabilizer. The measurement outcome is determinate, and we can predict the result as before, by calling to add up rows for those with .\n\nCase III: for all , but for some or . In this case commutes with all elements of the stabilizer but is not itself in the stabilizer. We get a random measurement result, but a slightly different transformation of the stabilizer than in Case I. Observe that row anticommutes with . This row takes the role of row from Case I, and the row takes the role of row . Update as before with this modification. Then swap rows and and rows and . Finally, increase to : the stabilizer has gained a new generator.\n\nAnother operation that we might want to apply is discarding the qubit , which has the effect of performing a partial trace over that qubit in the density matrix. Again, this can be done by simply keeping the qubit in our simulation and not using it in future operations. Here is an alternative: put the stabilizer in a form such that there is at most one generator with an on qubit , and at most one with a on qubit . Then drop those two generators (or one, if there is only one total). The remaining generators describe the stabilizer of the reduced mixed state. We also must put the and operators in a form where they have no entries in the discarded location, while preserving the structure of the tableau (namely, the commutation relations of Proposition 3). This can also be done in time , but we omit the details, as they are rather involved.\n\n### vii.2 Non-Stabilizer Initial States\n\nWe now show how to simulate a stabilizer circuit where the initial state is more general, involving non-stabilizer initial states. We allow any number of ancillas in arbitrary states, but the overall ancilla state must be a tensor product of blocks of at most qubits each. An arbitrary stabilizer circuit is then applied to this state. We allow measurements, but only of them in total throughout the computation. We do allow classical operations conditioned on the outcomes of measurements, so we also allow polynomial-time classical computation during the circuit.\n\nLet the initial state have density matrix : a tensor product of blocks of at most qubits each. Without loss of generality, we first apply the unitary stabilizer circuit , followed by the measurement (that is, a measurement of the first qubit in the standard basis). We then apply the stabilizer circuit , followed by measurement on the second qubit, and so on up to .\n\nWe can calculate the probability of obtaining outcome for the first measurement as follows:\n\n p(0) =Tr[(I+Z1)U1ρU†1]/2 =Tr[(I+U†1Z1U1)ρ]/2 =1/2+Tr[(U†1Z1U1)ρ]/2.\n\nBut is a stabilizer operation, so is a Pauli matrix, and is therefore a tensor product operation. We also know is a tensor product of blocks of at most qubits, and the trace of a tensor product is the product of the traces. Let and where ranges over the blocks. Then\n\n p(0)=12+m∏j=1Tr(Pjρj).\n\nSince and are both -dimensional matrices, each can be computed in time .\n\nBy flipping an appropriately biased coin, Alice can generate an outcome of the first measurement according to the correct probabilities. 
Conditioned on this outcome (say of ), the state of the system is\n\n (I+Z1)U1ρU†1(1+Z1)4p(0) .\n\nAfter the next stabilizer circuit , the state is\n\n U2(I+Z1)U1ρU†1(1+Z1)U†24p(0) .\n\nThe probability of obtaining outcome for the second measurement, conditioned on the outcome of the first measurement being , is then\n\nBy expanding out the terms, and then commuting and past and , we can write this as\n\n 8∑i=1m∏j=1Tr(P(2)ijρij).\n\nEach term can again be computed in time .\n\nSimilarly, the probability of any particular sequence of measurement outcomes can be written as a sum\n\n p(m1m2⋯md)=22d−1∑i=1m∏j=1Tr(P(d)ijρij),\n\nwhere each trace can be computed in time . It follows that the probabilities of the two outcomes of the measurement can be computed in time"
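The measurement-probability formula sketched above reduces everything to traces of small matrices, one per ancilla block. Below is a small numerical illustration with hypothetical block states; the garbled displayed equations above are interpreted as p(0) = 1/2 + (1/2) prod_j Tr(P_j rho_j), which follows from the line Tr[(I + U1^dag Z_1 U1) rho]/2 when the initial state is a tensor product of blocks, so treat the exact normalization as my reading of the damaged text.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def prob_zero(paulis, blocks):
    """p(0) = 1/2 + 1/2 * prod_j Tr(P_j rho_j) for a product initial state.

    paulis -- one Pauli (tensor-product) matrix per block, the restriction
              of U1^dag Z_1 U1 to that block.
    blocks -- one density matrix per block (each block has at most k qubits).
    """
    prod = 1.0
    for P, rho in zip(paulis, blocks):
        prod *= np.trace(P @ rho).real
    return 0.5 + 0.5 * prod

# Hypothetical example: two single-qubit blocks.
# Block 1 is the non-stabilizer state cos(pi/8)|0> + sin(pi/8)|1>,
# block 2 is |0><0|; suppose U1^dag Z_1 U1 restricts to Z on block 1 and I on block 2.
theta = np.pi / 8
psi = np.array([np.cos(theta), np.sin(theta)])
rho1 = np.outer(psi, psi.conj())
rho2 = np.array([[1, 0], [0, 0]], dtype=complex)

print(prob_zero([Z, I2], [rho1, rho2]))   # 0.5 + 0.5*cos(pi/4) ~ 0.854
```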
] | [
null,
"https://media.arxiv-vanity.com/render-output/2994480/x1.png",
null,
"https://media.arxiv-vanity.com/render-output/2994480/x2.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.92280614,"math_prob":0.9803029,"size":50386,"snap":"2020-24-2020-29","text_gpt3_token_len":10621,"char_repetition_ratio":0.16940574,"word_repetition_ratio":0.048440453,"special_character_ratio":0.19868614,"punctuation_ratio":0.11133646,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9921488,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,1,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-06-02T08:46:38Z\",\"WARC-Record-ID\":\"<urn:uuid:ee454ae4-e131-4854-89db-b8af8b7486d1>\",\"Content-Length\":\"1049277\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fb5946b1-3d4b-4578-a3a2-723dab147fe7>\",\"WARC-Concurrent-To\":\"<urn:uuid:ed1ba319-bc9a-4684-a4f4-f9f82eeb8e98>\",\"WARC-IP-Address\":\"172.67.158.169\",\"WARC-Target-URI\":\"https://www.arxiv-vanity.com/papers/quant-ph/0406196/\",\"WARC-Payload-Digest\":\"sha1:BBJMQVRSV7ZNKWHIOXN2DZA7CH6NY6OY\",\"WARC-Block-Digest\":\"sha1:A6DPIMKE7Z722UDCDSKGZOBX3XAXQUZP\",\"WARC-Truncated\":\"length\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590347423915.42_warc_CC-MAIN-20200602064854-20200602094854-00497.warc.gz\"}"} |
https://www.ias.ac.in/listing/bibliography/pram/R._Misra | [
"• R Misra\n\nArticles written in Pramana – Journal of Physics\n\n• Vibrational dynamics of the orgganometallic compound triarylorganoantimony (V) SbPh3 [O_{2}CC(OH)Ph_{2}]_{2}\n\nA complete normal coordinate analysis was performed for five-coordinate non-rigid triarylantimony diester SbPh3(O2CR2), known to be a bioactive molecule, using Wilson G-D matrix method and Urey Bradley force field. The study of vibrational dynamics was performed using the concept of group frequencies and band intensities.\n\n• Efficient use of correlation entropy for analysing time series data\n\nThe correlation dimension $D_{2}$ and correlation entropy $K_{2}$ are both important quantifiers in nonlinear time series analysis. However, use of $D_{2}$ has been more common compared to $K_{2}$ as a discriminating measure. One reason for this is that $D_{2}$ is a static measure and can be easily evaluated from a time series. However, in many cases, especially those involving coloured noise, $K_{2}$ is regarded as a more useful measure. Here we present an efficient algorithmic scheme to compute $K_{2}$ directly from a time series data and show that $K_{2}$ can be used as a more effective measure compared to $D_{2}$ for analysing practical time series involving coloured noise.\n\n• # Editorial Note on Continuous Article Publication\n\nPosted on July 25, 2019"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.91227174,"math_prob":0.94913226,"size":1548,"snap":"2023-14-2023-23","text_gpt3_token_len":385,"char_repetition_ratio":0.10427461,"word_repetition_ratio":0.0,"special_character_ratio":0.2370801,"punctuation_ratio":0.051282052,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9900938,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-24T04:04:29Z\",\"WARC-Record-ID\":\"<urn:uuid:05adeddf-a9f6-4816-8d73-ba545912fb8d>\",\"Content-Length\":\"27025\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ea7192be-3d26-452c-8408-6ec2cadd9f44>\",\"WARC-Concurrent-To\":\"<urn:uuid:6b9bad12-7425-45f1-920c-3246f5f56825>\",\"WARC-IP-Address\":\"52.66.166.230\",\"WARC-Target-URI\":\"https://www.ias.ac.in/listing/bibliography/pram/R._Misra\",\"WARC-Payload-Digest\":\"sha1:D3ESZPJZ4VYLYTJ7LPY5HGTEKS7TH37S\",\"WARC-Block-Digest\":\"sha1:DW6XG4POQ7ADCRF6HY4HRPJAJ7XBGKET\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296945242.64_warc_CC-MAIN-20230324020038-20230324050038-00314.warc.gz\"}"} |
https://la.mathworks.com/help/econ/what-is-a-conditional-variance-model.html | [
"## Conditional Variance Models\n\n### General Conditional Variance Model Definition\n\nConsider the time series\n\n`${y}_{t}=\\mu +{\\epsilon }_{t},$`\n\nwhere ${\\epsilon }_{t}={\\sigma }_{t}{z}_{t}$. Here, zt is an independent and identically distributed series of standardized random variables. Econometrics Toolbox™ supports standardized Gaussian and standardized Student’s t innovation distributions. The constant term, $\\mu$, is a mean offset.\n\nA conditional variance model specifies the dynamic evolution of the innovation variance,\n\n`${\\sigma }_{t}^{2}=Var\\left({\\epsilon }_{t}|{H}_{t-1}\\right),$`\n\nwhere Ht–1 is the history of the process. The history includes:\n\n• Past variances, ${\\sigma }_{1}^{2},{\\sigma }_{2}^{2},\\dots ,{\\sigma }_{t-1}^{2}$\n\n• Past innovations, ${\\epsilon }_{1},{\\epsilon }_{2},\\dots ,{\\epsilon }_{t-1}$\n\nConditional variance models are appropriate for time series that do not exhibit significant autocorrelation, but are serially dependent. The innovation series ${\\epsilon }_{t}={\\sigma }_{t}{z}_{t}$ is uncorrelated, because:\n\n• E(εt) = 0.\n\n• E(εtεt–h) = 0 for all t and $h\\ne 0.$\n\nHowever, if ${\\sigma }_{t}^{2}$ depends on ${\\sigma }_{t-1}^{2}$, for example, then εt depends on εt–1, even though they are uncorrelated. This kind of dependence exhibits itself as autocorrelation in the squared innovation series, ${\\epsilon }_{t}^{2}.$\n\nTip\n\nFor modeling time series that are both autocorrelated and serially dependent, you can consider using a composite conditional mean and variance model.\n\nTwo characteristics of financial time series that conditional variance models address are:\n\n• Volatility clustering. Volatility is the conditional standard deviation of a time series. Autocorrelation in the conditional variance process results in volatility clustering. The GARCH model and its variants model autoregression in the variance series.\n\n• Leverage effects. The volatility of some time series responds more to large decreases than to large increases. This asymmetric clustering behavior is known as the leverage effect. The EGARCH and GJR models have leverage terms to model this asymmetry.\n\n### GARCH Model\n\nThe generalized autoregressive conditional heteroscedastic (GARCH) model is an extension of Engle’s ARCH model for variance heteroscedasticity . If a series exhibits volatility clustering, this suggests that past variances might be predictive of the current variance.\n\nThe GARCH(P,Q) model is an autoregressive moving average model for conditional variances, with P GARCH coefficients associated with lagged variances, and Q ARCH coefficients associated with lagged squared innovations. 
The form of the GARCH(P,Q) model in Econometrics Toolbox is\n\n`${y}_{t}=\\mu +{\\epsilon }_{t},$`\n\nwhere${\\epsilon }_{t}={\\sigma }_{t}{z}_{t}$ and\n\n`${\\sigma }_{t}^{2}=\\kappa +{\\gamma }_{1}{\\sigma }_{t-1}^{2}+\\dots +{\\gamma }_{P}{\\sigma }_{t-P}^{2}+{\\alpha }_{1}{\\epsilon }_{t-1}^{2}+\\dots +{\\alpha }_{Q}{\\epsilon }_{t-Q}^{2}.$`\n\nNote\n\nThe `Constant` property of a `garch` model corresponds to κ, and the `Offset` property corresponds to μ.\n\nFor stationarity and positivity, the GARCH model has the following constraints:\n\n• $\\kappa >0$\n\n• ${\\gamma }_{i}\\ge 0,\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}{\\alpha }_{j}\\ge 0$\n\n• ${\\sum }_{i=1}^{P}{\\gamma }_{i}+{\\sum }_{j=1}^{Q}{\\alpha }_{j}<1$\n\nTo specify Engle’s original ARCH(Q) model, use the equivalent GARCH(0,Q) specification.\n\n### EGARCH Model\n\nThe exponential GARCH (EGARCH) model is a GARCH variant that models the logarithm of the conditional variance process. In addition to modeling the logarithm, the EGARCH model has additional leverage terms to capture asymmetry in volatility clustering.\n\nThe EGARCH(P,Q) model has P GARCH coefficients associated with lagged log variance terms, Q ARCH coefficients associated with the magnitude of lagged standardized innovations, and Q leverage coefficients associated with signed, lagged standardized innovations. The form of the EGARCH(P,Q) model in Econometrics Toolbox is\n\n`${y}_{t}=\\mu +{\\epsilon }_{t},$`\n\nwhere ${\\epsilon }_{t}={\\sigma }_{t}{z}_{t}$ and\n\n`$\\mathrm{log}{\\sigma }_{t}^{2}=\\kappa +\\sum _{i=1}^{P}{\\gamma }_{i}\\mathrm{log}{\\sigma }_{t-i}^{2}+\\sum _{j=1}^{Q}{\\alpha }_{j}\\left[\\frac{|{\\epsilon }_{t-j}|}{{\\sigma }_{t-j}}-E\\left\\{\\frac{|{\\epsilon }_{t-j}|}{{\\sigma }_{t-j}}\\right\\}\\right]+\\sum _{j=1}^{Q}{\\xi }_{j}\\left(\\frac{{\\epsilon }_{t-j}}{{\\sigma }_{t-j}}\\right).$`\n\nNote\n\nThe `Constant` property of an `egarch` model corresponds to κ, and the `Offset` property corresponds to μ.\n\nThe form of the expected value terms associated with ARCH coefficients in the EGARCH equation depends on the distribution of zt:\n\n• If the innovation distribution is Gaussian, then\n\n`$E\\left\\{\\frac{|{\\epsilon }_{t-j}|}{{\\sigma }_{t-j}}\\right\\}=E\\left\\{|{z}_{t-j}|\\right\\}=\\sqrt{\\frac{2}{\\pi }}.$`\n\n• If the innovation distribution is Student’s t with ν > 2 degrees of freedom, then\n\n`$E\\left\\{\\frac{|{\\epsilon }_{t-j}|}{{\\sigma }_{t-j}}\\right\\}=E\\left\\{|{z}_{t-j}|\\right\\}=\\sqrt{\\frac{\\nu -2}{\\pi }}\\frac{\\Gamma \\left(\\frac{\\nu -1}{2}\\right)}{\\Gamma \\left(\\frac{\\nu }{2}\\right)}.$`\n\nThe toolbox treats the EGARCH(P,Q) model as an ARMA model for$\\mathrm{log}{\\sigma }_{t}^{2}.$ Thus, to ensure stationarity, all roots of the GARCH coefficient polynomial,$\\left(1-{\\gamma }_{1}L-\\dots -{\\gamma }_{P}{L}^{P}\\right)$, must lie outside the unit circle.\n\nThe EGARCH model is unique from the GARCH and GJR models because it models the logarithm of the variance. By modeling the logarithm, positivity constraints on the model parameters are relaxed. However, forecasts of conditional variances from an EGARCH model are biased, because by Jensen’s inequality,\n\n`$E\\left({\\sigma }_{t}^{2}\\right)\\ge \\mathrm{exp}\\left\\{E\\left(\\mathrm{log}{\\sigma }_{t}^{2}\\right)\\right\\}.$`\n\nAn EGARCH(1,1) specification will be complex enough for most applications. 
For an EGARCH(1,1) model, the GARCH and ARCH coefficients are expected to be positive, and the leverage coefficient is expected to be negative; large unanticipated downward shocks should increase the variance. If you get signs opposite to those expected, you might encounter difficulties inferring volatility sequences and forecasting (a negative ARCH coefficient can be particularly problematic). In this case, an EGARCH model might not be the best choice for your application.\n\n### GJR Model\n\nThe GJR model is a GARCH variant that includes leverage terms for modeling asymmetric volatility clustering. In the GJR formulation, large negative changes are more likely to be clustered than positive changes. The GJR model is named for Glosten, Jagannathan, and Runkle . Close similarities exist between the GJR model and the threshold GARCH (TGARCH) model—a GJR model is a recursive equation for the variance process, and a TGARCH is the same recursion applied to the standard deviation process.\n\nThe GJR(P,Q) model has P GARCH coefficients associated with lagged variances, Q ARCH coefficients associated with lagged squared innovations, and Q leverage coefficients associated with the square of negative lagged innovations. The form of the GJR(P,Q) model in Econometrics Toolbox is\n\n`${y}_{t}=\\mu +{\\epsilon }_{t},$`\n\nwhere${\\epsilon }_{t}={\\sigma }_{t}{z}_{t}$ and\n\n`${\\sigma }_{t}^{2}=\\kappa +\\sum _{i=1}^{P}{\\gamma }_{i}{\\sigma }_{t-i}^{2}+\\sum _{j=1}^{Q}{\\alpha }_{j}{\\epsilon }_{t-j}^{2}+\\sum _{j=1}^{Q}{\\xi }_{j}I\\left[{\\epsilon }_{t-j}<0\\right]{\\epsilon }_{t-j}^{2}.$`\n\nThe indicator function $I\\left[{\\epsilon }_{t-j}<0\\right]$ equals 1 if ${\\epsilon }_{t-j}<0$, and 0 otherwise. Thus, the leverage coefficients are applied to negative innovations, giving negative changes additional weight.\n\nNote\n\nThe `Constant` property of a `gjr` model corresponds to κ, and the `Offset` property corresponds to μ.\n\nFor stationarity and positivity, the GJR model has the following constraints:\n\n• $\\kappa >0$\n\n• ${\\gamma }_{i}\\ge 0,\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}\\text{\\hspace{0.17em}}{\\alpha }_{j}\\ge 0$\n\n• ${\\alpha }_{j}+{\\xi }_{j}\\ge 0$\n\n• ${\\sum }_{i=1}^{P}{\\gamma }_{i}+{\\sum }_{j=1}^{Q}{\\alpha }_{j}+\\frac{1}{2}{\\sum }_{j=1}^{Q}{\\xi }_{j}<1$\n\nThe GARCH model is nested in the GJR model. If all leverage coefficients are zero, then the GJR model reduces to the GARCH model. This means you can test a GARCH model against a GJR model using the likelihood ratio test.\n\n Engle, Robert F. “Autoregressive Conditional Heteroskedasticity with Estimates of the Variance of United Kingdom Inflation.” Econometrica. Vol. 50, 1982, pp. 987–1007.\n\n Glosten, L. R., R. Jagannathan, and D. E. Runkle. “On the Relation between the Expected Value and the Volatility of the Nominal Excess Return on Stocks.” The Journal of Finance. Vol. 48, No. 5, 1993, pp. 1779–1801."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.871075,"math_prob":0.9982669,"size":6195,"snap":"2021-43-2021-49","text_gpt3_token_len":1315,"char_repetition_ratio":0.15619448,"word_repetition_ratio":0.098378375,"special_character_ratio":0.18918483,"punctuation_ratio":0.12694064,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999416,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-01T01:00:21Z\",\"WARC-Record-ID\":\"<urn:uuid:bf0390e0-4af1-4948-9352-648ab568987d>\",\"Content-Length\":\"96881\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:31c53057-449a-411e-be84-091eb8685357>\",\"WARC-Concurrent-To\":\"<urn:uuid:63c8a7e6-c423-40d4-89cb-828c1b71a70f>\",\"WARC-IP-Address\":\"104.68.243.15\",\"WARC-Target-URI\":\"https://la.mathworks.com/help/econ/what-is-a-conditional-variance-model.html\",\"WARC-Payload-Digest\":\"sha1:IBWAITNA3FLYZ2HVCWD7I2H7HVWUNOQ7\",\"WARC-Block-Digest\":\"sha1:KZOKHXW44YDCXJE7EGNE3HB3VFNC4PMZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964359082.76_warc_CC-MAIN-20211130232232-20211201022232-00016.warc.gz\"}"} |
https://naima.readthedocs.io/en/latest/api-models.html | [
"# Models¶\n\n## API¶\n\nclass naima.models.Bremsstrahlung(particle_distribution, n0=<Quantity 1. 1 / cm3>, **kwargs)[source]\n\nBremsstrahlung radiation on a completely ionised gas.\n\nThis class uses the cross-section approximation of Baring, M.G., Ellison, D.C., Reynolds, S.P., Grenier, I.A., & Goret, P. 1999, Astrophysical Journal, 513, 311.\n\nThe default weights are assuming a completely ionised target gas with ISM abundances. If pure electron-electron bremsstrahlung is desired, n0 can be set to the electron density, weight_ep to 0 and weight_ee to 1.\n\nParameters\nn0Quantity float\n\nTotal ion number density.\n\nOther Parameters\nweight_eefloat\n\nWeight of electron-electron bremsstrahlung. Defined as $$\\sum_i Z_i X_i$$, default is 1.088.\n\nweight_epfloat\n\nWeight of electron-proton bremsstrahlung. Defined as $$\\sum_i Z_i^2 X_i$$, default is 1.263.\n\nproperty We\n\nTotal energy in electrons used for the radiative calculation\n\ncompute_We(Eemin=None, Eemax=None)\n\nTotal energy in electrons between energies Eemin and Eemax\n\nParameters\nEeminQuantity float, optional\n\nMinimum electron energy for energy content calculation.\n\nEemaxQuantity float, optional\n\nMaximum electron energy for energy content calculation.\n\nflux(energy, *args, **kwargs)\n\nDifferential flux at a given distance from the source.\n\nParameters\nphoton_energyQuantity float or array\n\nPhoton energy array.\n\ndistanceQuantity float, optional\n\nDistance to the source. If set to 0, the intrinsic differential luminosity will be returned. Default is 1 kpc.\n\nsed(photon_energy, distance=<Quantity 1. kpc>)\n\nSpectral energy distribution at a given distance from the source.\n\nParameters\nphoton_energyQuantity float or array\n\nPhoton energy array.\n\ndistanceQuantity float, optional\n\nDistance to the source. If set to 0, the intrinsic luminosity will be returned. Default is 1 kpc.\n\nset_We(We, Eemin=None, Eemax=None, amplitude_name=None)\n\nNormalize particle distribution so that the total energy in electrons between Eemin and Eemax is We\n\nParameters\nWeQuantity float\n\nDesired energy in electrons.\n\nEeminQuantity float, optional\n\nMinimum electron energy for energy content calculation.\n\nEemaxQuantity float, optional\n\nMaximum electron energy for energy content calculation.\n\namplitude_namestr, optional\n\nName of the amplitude parameter of the particle distribution. It must be accesible as an attribute of the distribution function. 
Defaults to amplitude.\n\nclass naima.models.BrokenPowerLaw(amplitude, e_0, e_break, alpha_1, alpha_2)[source]\n\nOne dimensional power law model with a break.\n\nParameters\namplitudefloat\n\nModel amplitude at the break energy\n\ne_0Quantity float\n\nReference point\n\ne_breakQuantity float\n\nBreak energy\n\nalpha_1float\n\nPower law index for x < x_break\n\nalpha_2float\n\nPower law index for x > x_break\n\nNotes\n\nModel formula (with $$A$$ for amplitude, $$E_0$$ for e_0, $$\\alpha_1$$ for alpha_1 and $$\\alpha_2$$ for alpha_2):\n\n$\\begin{split}f(E) = \\left \\{ \\begin{array}{ll} A (E / E_0) ^ {-\\alpha_1} & : E < E_{break} \\\\ A (E_{break}/E_0) ^ {\\alpha_2-\\alpha_1} (E / E_0) ^ {-\\alpha_2} & : E > E_{break} \\\\ \\end{array} \\right.\\end{split}$\nstatic eval(e, amplitude, e_0, e_break, alpha_1, alpha_2)[source]\n\nOne dimensional broken power law model function\n\nclass naima.models.EblAbsorptionModel(redshift, ebl_absorption_model='Dominguez')[source]\n\nA TableModel containing the different absorption values from a specific model.\n\nIt returns dimensionless opacity values, that could be multiplied to any model.\n\nParameters\nredshiftfloat\n\nRedshift considered for the absorption evaluation.\n\nebl_absorption_model{‘Dominguez’}\n\nName of the EBL absorption model to use (Dominguez by default).\n\nNotes\n\nDominguez model refers to the Dominguez 2011 EBL model. Current implementation does NOT perform an interpolation in the redshift, so it just uses the closest z value from the finely binned tau_dominguez11.npz file (delta_z=0.01).\n\nclass naima.models.ExponentialCutoffBrokenPowerLaw(amplitude, e_0, e_break, alpha_1, alpha_2, e_cutoff, beta=1.0)[source]\n\nOne dimensional power law model with a break.\n\nParameters\namplitudefloat\n\nModel amplitude at the break point\n\ne_0Quantity float\n\nReference point\n\ne_breakQuantity float\n\nBreak energy\n\nalpha_1float\n\nPower law index for x < x_break\n\nalpha_2float\n\nPower law index for x > x_break\n\ne_cutoffQuantity float\n\nExponential Cutoff energy\n\nbetafloat, optional\n\nExponential cutoff rapidity. 
Default is 1.\n\nNotes\n\nModel formula (with $$A$$ for amplitude, $$E_0$$ for e_0, $$\\alpha_1$$ for alpha_1, $$\\alpha_2$$ for alpha_2, $$E_{cutoff}$$ for e_cutoff, and $$\\beta$$ for beta):\n\n$\\begin{split}f(E) = \\exp(-(E / E_{cutoff})^\\beta)\\left \\{ \\begin{array}{ll} A (E / E_0) ^ {-\\alpha_1} & : E < E_{break} \\\\ A (E_{break}/E_0) ^ {\\alpha_2-\\alpha_1} (E / E_0) ^ {-\\alpha_2} & : E > E_{break} \\\\ \\end{array} \\right.\\end{split}$\nstatic eval(e, amplitude, e_0, e_break, alpha_1, alpha_2, e_cutoff, beta)[source]\n\nOne dimensional broken power law model function\n\nclass naima.models.ExponentialCutoffPowerLaw(amplitude, e_0, alpha, e_cutoff, beta=1.0)[source]\n\nOne dimensional power law model with an exponential cutoff.\n\nParameters\namplitudefloat\n\nModel amplitude\n\ne_0Quantity float\n\nReference point\n\nalphafloat\n\nPower law index\n\ne_cutoffQuantity float\n\nCutoff point\n\nbetafloat\n\nCutoff exponent\n\nNotes\n\nModel formula (with $$A$$ for amplitude, $$\\alpha$$ for alpha, and $$\\beta$$ for beta):\n\n$f(E) = A (E / E_0) ^ {-\\alpha} \\exp (- (E / E_{cutoff}) ^ \\beta)$\nstatic eval(e, amplitude, e_0, alpha, e_cutoff, beta)[source]\n\nOne dimensional power law with an exponential cutoff model function\n\nclass naima.models.InverseCompton(particle_distribution, seed_photon_fields=['CMB'], **kwargs)[source]\n\nInverse Compton emission from an electron population.\n\nIf you use this class in your research, please consult and cite Khangulyan, D., Aharonian, F.A., & Kelner, S.R. 2014, Astrophysical Journal, 783, 100\n\nParameters\nparticle_distributionfunction\n\nParticle distribution function, taking electron energies as a Quantity array or float, and returning the particle energy density in units of number of electrons per unit energy as a Quantity array or float.\n\nseed_photon_fieldsstring or iterable of strings (optional)\n\nA list of gray-body or non-thermal seed photon fields to use for IC calculation. Each of the items of the iterable can be either:\n\n• A string equal to CMB (default), NIR, or FIR, for which radiation fields with temperatures of 2.72 K, 30 K, and 3000 K, and energy densities of 0.261, 0.5, and 1 eV/cm³ will be used (these are the GALPROP values for a location at a distance of 6.5 kpc from the galactic center).\n\n• A list of length three (isotropic source) or four (anisotropic source) composed of:\n\n1. A name for the seed photon field.\n\n2. Its temperature (thermal source) or energy (monochromatic or non-thermal source) as a Quantity instance.\n\n3. Its photon field energy density as a Quantity instance.\n\n4. Optional: The angle between the seed photon direction and the scattered photon direction as a Quantity float instance.\n\nOther Parameters\nEeminQuantity float instance, optional\n\nMinimum electron energy for the electron distribution. Default is 1 GeV.\n\nEemaxQuantity float instance, optional\n\nMaximum electron energy for the electron distribution. Default is 510 TeV.\n\nnEedscalar\n\nNumber of points per decade in energy for the electron energy and distribution arrays. Default is 300.\n\nproperty We\n\nTotal energy in electrons used for the radiative calculation\n\ncompute_We(Eemin=None, Eemax=None)\n\nTotal energy in electrons between energies Eemin and Eemax\n\nParameters\nEeminQuantity float, optional\n\nMinimum electron energy for energy content calculation.\n\nEemaxQuantity float, optional\n\nMaximum electron energy for energy content calculation.\n\nflux(photon_energy, distance=<Quantity 1. 
kpc>, seed=None)[source]\n\nDifferential flux at a given distance from the source from a single seed photon field\n\nParameters\nphoton_energyQuantity float or array\n\nPhoton energy array.\n\ndistanceQuantity float, optional\n\nDistance to the source. If set to 0, the intrinsic luminosity will be returned. Default is 1 kpc.\n\nseedint, str or None\n\nNumber or name of seed photon field for which the IC contribution is required. If set to None it will return the sum of all contributions (default).\n\nsed(photon_energy, distance=<Quantity 1. kpc>, seed=None)[source]\n\nSpectral energy distribution at a given distance from the source\n\nParameters\nphoton_energyQuantity float or array\n\nPhoton energy array.\n\ndistanceQuantity float, optional\n\nDistance to the source. If set to 0, the intrinsic luminosity will be returned. Default is 1 kpc.\n\nseedint, str or None\n\nNumber or name of seed photon field for which the IC contribution is required. If set to None it will return the sum of all contributions (default).\n\nset_We(We, Eemin=None, Eemax=None, amplitude_name=None)\n\nNormalize particle distribution so that the total energy in electrons between Eemin and Eemax is We\n\nParameters\nWeQuantity float\n\nDesired energy in electrons.\n\nEeminQuantity float, optional\n\nMinimum electron energy for energy content calculation.\n\nEemaxQuantity float, optional\n\nMaximum electron energy for energy content calculation.\n\namplitude_namestr, optional\n\nName of the amplitude parameter of the particle distribution. It must be accesible as an attribute of the distribution function. Defaults to amplitude.\n\nclass naima.models.LogParabola(amplitude, e_0, alpha, beta)[source]\n\nOne dimensional log parabola model (sometimes called curved power law).\n\nParameters\namplitudefloat\n\nModel amplitude\n\ne_0Quantity float\n\nReference point\n\nalphafloat\n\nPower law index\n\nbetafloat\n\nPower law curvature\n\nNotes\n\nModel formula (with $$A$$ for amplitude and $$\\alpha$$ for alpha and $$\\beta$$ for beta):\n\n$f(e) = A \\left(\\frac{E}{E_{0}}\\right)^ {- \\alpha - \\beta \\log{\\left (\\frac{E}{E_{0}} \\right )}}$\nstatic eval(e, amplitude, e_0, alpha, beta)[source]\n\nOne dimenional log parabola model function\n\nclass naima.models.PionDecay(particle_distribution, nh=<Quantity 1. 1 / cm3>, nuclear_enhancement=True, **kwargs)[source]\n\nPion decay gamma-ray emission from a proton population.\n\nCompute gamma-ray spectrum arising from the interaction of a relativistic proton distribution with stationary target protons using the parametrization of Kafexhiu et al. (2014).\n\nIf you use this class in your research, please consult and cite Kafexhiu, E., Aharonian, F., Taylor, A.M., & Vila, G.S. 2014, Physical Review D, 90, 123014.\n\nParameters\nparticle_distributionfunction\n\nParticle distribution function, taking proton energies as a Quantity array or float, and returning the particle energy density in units of number of protons per unit energy as a Quantity array or float.\n\nnhQuantity\n\nNumber density of the target protons. Default is $$1 \\mathrm{cm}^{-3}$$.\n\nnuclear_enhancementbool\n\nWhether to apply the energy-dependent nuclear enhancement factor considering a target gas with local ISM abundances. See Section IV of Kafexhiu et al. (2014) for details. Here the proton-nucleus inelastic cross section of Sihver et al. (1993, PhysRevC 47, 1225) is used.\n\nOther Parameters\nEpminQuantity float\n\nMinimum proton energy for the proton distribution. 
Default is 1.22 GeV, the dynamical threshold for pion production in pp interactions.\n\nEpmaxQuantity float\n\nMinimum proton energy for the proton distribution. Default is 10 PeV.\n\nnEpdscalar\n\nNumber of points per decade in energy for the proton energy and distribution arrays. Default is 100.\n\nhiEmodelstr\n\nMonte Carlo model to use for computation of high-energy differential cross section. Can be one of Geant4, Pythia8, SIBYLL, or QGSJET. See Kafexhiu et al. (2014) for details. Default is Pythia8.\n\nuseLUTbool\n\nWhether to use a lookup table for the differential cross section. The only lookup table packaged with naima is for the Pythia 8 model and ISM nuclear enhancement factor.\n\nproperty Wp\n\nTotal energy in protons\n\ncompute_Wp(Epmin=None, Epmax=None)\n\nTotal energy in protons between energies Epmin and Epmax\n\nParameters\nEpminQuantity float, optional\n\nMinimum proton energy for energy content calculation.\n\nEpmaxQuantity float, optional\n\nMaximum proton energy for energy content calculation.\n\nflux(energy, *args, **kwargs)\n\nDifferential flux at a given distance from the source.\n\nParameters\nphoton_energyQuantity float or array\n\nPhoton energy array.\n\ndistanceQuantity float, optional\n\nDistance to the source. If set to 0, the intrinsic differential luminosity will be returned. Default is 1 kpc.\n\nsed(photon_energy, distance=<Quantity 1. kpc>)\n\nSpectral energy distribution at a given distance from the source.\n\nParameters\nphoton_energyQuantity float or array\n\nPhoton energy array.\n\ndistanceQuantity float, optional\n\nDistance to the source. If set to 0, the intrinsic luminosity will be returned. Default is 1 kpc.\n\nset_Wp(Wp, Epmin=None, Epmax=None, amplitude_name=None)\n\nNormalize particle distribution so that the total energy in protons between Epmin and Epmax is Wp\n\nParameters\nWpQuantity float\n\nDesired energy in protons.\n\nEpminQuantity float, optional\n\nMinimum proton energy for energy content calculation.\n\nEpmaxQuantity float, optional\n\nMaximum proton energy for energy content calculation.\n\namplitude_namestr, optional\n\nName of the amplitude parameter of the particle distribution. It must be accesible as an attribute of the distribution function. Defaults to amplitude.\n\nclass naima.models.PowerLaw(amplitude, e_0, alpha)[source]\n\nOne dimensional power law model.\n\nParameters\namplitudefloat\n\nModel amplitude.\n\ne_0Quantity float\n\nReference energy\n\nalphafloat\n\nPower law index\n\nNotes\n\nModel formula (with $$A$$ for amplitude, $$\\alpha$$ for alpha):\n\n$f(E) = A (E / E_0) ^ {-\\alpha}$\nstatic eval(e, amplitude, e_0, alpha)[source]\n\nOne dimensional power law model function\n\nclass naima.models.Synchrotron(particle_distribution, B=<Quantity 3.24e-06 G>, **kwargs)[source]\n\nSynchrotron emission from an electron population.\n\nThis class uses the approximation of the synchrotron emissivity in a random magnetic field of Aharonian, Kelner, and Prosekin 2010, PhysRev D 82, 3002 (arXiv:1006.1045).\n\nParameters\nparticle_distributionfunction\n\nParticle distribution function, taking electron energies as a Quantity array or float, and returning the particle energy density in units of number of electrons per unit energy as a Quantity array or float.\n\nBQuantity float instance, optional\n\nIsotropic magnetic field strength. Default: equipartition with CMB (3.24e-6 G)\n\nOther Parameters\nEeminQuantity float instance, optional\n\nMinimum electron energy for the electron distribution. 
Default is 1 GeV.\n\nEemaxQuantity float instance, optional\n\nMaximum electron energy for the electron distribution. Default is 510 TeV.\n\nnEedscalar\n\nNumber of points per decade in energy for the electron energy and distribution arrays. Default is 100.\n\nproperty We\n\nTotal energy in electrons used for the radiative calculation\n\ncompute_We(Eemin=None, Eemax=None)\n\nTotal energy in electrons between energies Eemin and Eemax\n\nParameters\nEeminQuantity float, optional\n\nMinimum electron energy for energy content calculation.\n\nEemaxQuantity float, optional\n\nMaximum electron energy for energy content calculation.\n\nflux(energy, *args, **kwargs)\n\nDifferential flux at a given distance from the source.\n\nParameters\nphoton_energyQuantity float or array\n\nPhoton energy array.\n\ndistanceQuantity float, optional\n\nDistance to the source. If set to 0, the intrinsic differential luminosity will be returned. Default is 1 kpc.\n\nsed(photon_energy, distance=<Quantity 1. kpc>)\n\nSpectral energy distribution at a given distance from the source.\n\nParameters\nphoton_energyQuantity float or array\n\nPhoton energy array.\n\ndistanceQuantity float, optional\n\nDistance to the source. If set to 0, the intrinsic luminosity will be returned. Default is 1 kpc.\n\nset_We(We, Eemin=None, Eemax=None, amplitude_name=None)\n\nNormalize particle distribution so that the total energy in electrons between Eemin and Eemax is We\n\nParameters\nWeQuantity float\n\nDesired energy in electrons.\n\nEeminQuantity float, optional\n\nMinimum electron energy for energy content calculation.\n\nEemaxQuantity float, optional\n\nMaximum electron energy for energy content calculation.\n\namplitude_namestr, optional\n\nName of the amplitude parameter of the particle distribution. It must be accesible as an attribute of the distribution function. Defaults to amplitude.\n\nclass naima.models.TableModel(energy, values, amplitude=1)[source]\n\nA model generated from a table of energy and value arrays.\n\nThe units returned will be the units of the values array provided at initialization. The model will return values interpolated in log-space, returning 0 for energies outside of the limits of the provided energy array.\n\nParameters\nenergyQuantity array\n\nArray of energies at which the model values are given\n\nvaluesarray\n\nArray with the values of the model at energies energy.\n\namplitudefloat\n\nModel amplitude that is multiplied to the supplied arrays. Defaults to 1."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6004161,"math_prob":0.97815496,"size":14178,"snap":"2023-40-2023-50","text_gpt3_token_len":3758,"char_repetition_ratio":0.16868915,"word_repetition_ratio":0.4607356,"special_character_ratio":0.2390323,"punctuation_ratio":0.1496228,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9951927,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-12-11T23:03:40Z\",\"WARC-Record-ID\":\"<urn:uuid:7b39869d-1c95-43b9-b764-a44968bd7557>\",\"Content-Length\":\"91426\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ff149a9f-2d35-4630-82a6-04891d15d357>\",\"WARC-Concurrent-To\":\"<urn:uuid:48a0d554-9690-455b-879e-a3beafcabba5>\",\"WARC-IP-Address\":\"104.17.32.82\",\"WARC-Target-URI\":\"https://naima.readthedocs.io/en/latest/api-models.html\",\"WARC-Payload-Digest\":\"sha1:6ZCWTJFDBIHXUFKAKNQOJSTHYSCPKQB6\",\"WARC-Block-Digest\":\"sha1:TJKAVCCPDXJVYAA5DMH37M3WRJAJU3QF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-50/CC-MAIN-2023-50_segments_1700679518883.99_warc_CC-MAIN-20231211210408-20231212000408-00409.warc.gz\"}"} |
https://crypto.stackexchange.com/questions/11338/why-does-merkles-puzzle-requires-eve-quadratic-complexity-of-effort-to-break-th | [
"# Why does Merkle's Puzzle requires Eve quadratic complexity of effort to break the system?\n\nThe way Applied cryptography 2ED explains the puzzle is as follows (I paraphrase it):\n\n1. Bob generates 2^20 messages of the form x,y where x is a puzzle number and y is the secret key. Both x and y are different for each of the one million message. Encrypt each message using symmetric cipher with a different 20-bit key. Send all messages to Alice.\n\n2. Alice picks one at random and brute force it. She should be able to recover x and y.\n\n3. Alice encrypts her message to Bob by using y she just recovered (using symmetric cipher). Her message will contain x, the puzzle number.\n\n4. Bob looks up the secret key for puzzle x and decrypts the message.\n\nAccording to the book and many Internet sources, Eve would have to do theta(n^2) work to brute force the communication.\n\nThe book doesn't seem to say much about how x is included in Alice's response to Bob. It must be not be encrypted right? Because Bob has to do a O(1) look up. That is, Alice would have to send x+E(private message, y) to Bob.\n\nThen why can't Eve just wait for Alice's response to Bob and compute that puzzle?\n\nThis is the brute force algorithm I think would satisfy 2^n if x is encrypted in Alice's response as well.\n\nfor y in 2^20:\nfor x in 2^20:\nc = symmetric_enc(y, x)\nif c == c_from_alice:\nreturn yes, c, x, y\nreturn no, none, none, none\n\n\nBut if x is encrypted in Alice response, then Bob have to brute force like Eve too, no?\n\n$x$ is not encrypted in Alice’s response. The point is that, even though Eve knows the puzzle number $x$, she has no way to tell which of the puzzles Bob sent is the one with this number, unless she solves all of them (or at any rate half of them, on average).\n• Thanks. If x is encrypted in Alice's response, how is Bob able to tell which x in constant time? Now that I think about it, Bob might only need to do n encryption to find out which x. In other words, since Bob has a table with x and key as columns, he can do 2^20 encryptions to find out. So this is the difference and my assumption that Bob does only O(1) is wrong right? Otherwise, Eve can do O(1) as well. Am I correct? Does x really have to be encrypted in Alice's response? Oct 28, 2013 at 10:57\n• @CppLearner Sorry, I should have been clearer: $x$ is not encrypted in Alice’s response. I have edited the answer to make it clearer, I hope. Oct 28, 2013 at 11:55"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9237668,"math_prob":0.88119096,"size":1374,"snap":"2022-27-2022-33","text_gpt3_token_len":349,"char_repetition_ratio":0.12189781,"word_repetition_ratio":0.008,"special_character_ratio":0.24235807,"punctuation_ratio":0.117056854,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98789984,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-06-30T13:02:55Z\",\"WARC-Record-ID\":\"<urn:uuid:18526bd1-8954-42c3-8fb9-6330e8e0accc>\",\"Content-Length\":\"232153\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:43e461b0-177a-4ef0-a785-9b6abdfed546>\",\"WARC-Concurrent-To\":\"<urn:uuid:01280b58-7a50-49a4-944e-01e00c70ebdb>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://crypto.stackexchange.com/questions/11338/why-does-merkles-puzzle-requires-eve-quadratic-complexity-of-effort-to-break-th\",\"WARC-Payload-Digest\":\"sha1:F7IBBOCYUAU7L3ZLEHYISJZ5DS72H7LQ\",\"WARC-Block-Digest\":\"sha1:ONTO5TWBAJS26SPS46C7VDUO4AKMWSW3\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656103821173.44_warc_CC-MAIN-20220630122857-20220630152857-00484.warc.gz\"}"} |
https://www.lumoslearning.com/llwp/resources/applistings.html?iid=575826804 | [
"## Equation Solvers - Lumos Educational App Store",
null,
"Price -\\$3.99\n\n#### DESCRIPTION:\n\nEquation Solver 4-in-1 is the ultimate app that helps you solve linear equations in one variable, quadratic equations with real and complex roots, cubic and quartic equations. It's intuitive and easy to use. Just type in any equation you want to solve and the equation calculator will show you the result. Everything you need to solve equations or check results! Features: • Linear Equation Solver. • Quadratic Equation Solver. • Cubic Equation Solver. • Quartic Equation Solver. • Solves equations with fractions and parentheses. Note: the app doesn’t yet support equations with a v\n\n#### OVERVIEW:\n\nEquation Solvers is a free educational mobile app By Intemodino Group s.r.o..It helps students practice the following standards .\n\nThis page not only allows students and teachers download Equation Solvers but also find engaging Sample Questions, Videos, Pins, Worksheets, Books related to the following topics.\n\nDeveloper URL: https://intemodino.com\n\nSoftware Version: 4.0.3\n\nCategory: Education\n\nRelease Date: 2012-11-11T06:29:34Z",
null,
"",
null,
"",
null,
"",
null,
"#### Are you the Developer?\n\nRate this App?\n0\n\n0 Ratings & 0 Reviews\n\n5\n0\n0\n4\n0\n0\n3\n0\n0\n2\n0\n0\n1\n0\n0",
null,
"EdSearch WebSearch",
null,
""
] | [
null,
"https://is1-ssl.mzstatic.com/image/thumb/Purple118/v4/a8/d5/87/a8d587f7-16fa-fc1e-6c56-e4f44835dbd8/source/100x100bb.jpg",
null,
"https://is5-ssl.mzstatic.com/image/thumb/Purple118/v4/3f/29/52/3f295283-b777-c9af-613a-00699054f946/pr_source.png/392x696bb.png",
null,
"https://is5-ssl.mzstatic.com/image/thumb/Purple118/v4/35/66/96/35669627-9ea4-0758-3b3a-ce0745ae5ddc/pr_source.png/392x696bb.png",
null,
"https://is5-ssl.mzstatic.com/image/thumb/Purple128/v4/36/f3/8b/36f38b2c-e404-ef2e-aeba-2f9c889796b7/pr_source.png/392x696bb.png",
null,
"https://is4-ssl.mzstatic.com/image/thumb/Purple118/v4/05/20/60/05206086-b75b-48e0-ba78-6bd6dda69182/pr_source.png/392x696bb.png",
null,
"https://statc.lumoslearning.com/llwp/wp-content/uploads/2018/01/edSearch_Logo_LowRes.png",
null,
"https://googleads.g.doubleclick.net/pagead/viewthroughconversion/1066999506/",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8534884,"math_prob":0.94096774,"size":1106,"snap":"2019-51-2020-05","text_gpt3_token_len":248,"char_repetition_ratio":0.1814882,"word_repetition_ratio":0.0,"special_character_ratio":0.21338156,"punctuation_ratio":0.17391305,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98057294,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,1,null,1,null,1,null,1,null,1,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-05T20:22:55Z\",\"WARC-Record-ID\":\"<urn:uuid:64170fa2-2db1-4785-bd16-db8746b9e37a>\",\"Content-Length\":\"151686\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ecdc4eb6-05fd-4d7f-810b-4e3b9c5147c5>\",\"WARC-Concurrent-To\":\"<urn:uuid:a6bb226a-61bb-4f6f-a96a-4eae35382b30>\",\"WARC-IP-Address\":\"3.223.110.75\",\"WARC-Target-URI\":\"https://www.lumoslearning.com/llwp/resources/applistings.html?iid=575826804\",\"WARC-Payload-Digest\":\"sha1:F6A6EOM6QBCIAMY64DYBMH6MYNA73FFR\",\"WARC-Block-Digest\":\"sha1:OL2FCP6JUHFZCHA53ZPWRI4MW7MTJDF7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540482038.36_warc_CC-MAIN-20191205190939-20191205214939-00263.warc.gz\"}"} |
https://physicsforme.com/2014/08/12/fields-medals-2014/ | [
"# Fields Medals 2014",
null,
"The Fields Medal is widely considered to be the most prestigious award in mathematics, and the 2014 Fields Medalists were just announced.\n\nThe Fields Medal is given to between two and four mathematicians under the age of 40 at the International Congress of Mathematicians, held once every four years. According to the International Mathematical Union, the body that awards the Fields Medal, the award intends “to recognize outstanding mathematical achievement for existing work and for the promise of future achievement.”\n\nIt is usually awarded for mathematical research that solves or extends problems that have vexed mathematicians for decades or centuries, or research that greatly expands on or even creates new areas of mathematical thought.\n\nHere are the four 2014 Fields Medalists:\n\nArtur Avila\n\nArtur Avila is a Brazilian mathematician who has contributed to a number of fields. Some of his most notable research is in the study of chaos theory and dynamical systems. These areas seek to understand the behavior of systems that evolve over time in which very small changes in initial conditions can lead to wildly varying outcomes, such as weather patterns, as typified in the classic example of a butterfly’s wings flapping leading to a change in weather hundreds of miles away.\n\nOne of Avila’s major contributions to this field of study was in clarifying that a certain broad class of dynamical systems fall into one of two categories. They either eventually evolve into a stable state, or fall into a chaotic stochastic state, in which their behavior can be described probabilistically.\n\nManjul Bhargava\n\nManjul Bhargava‘s research is focused on number theory and algebra. One of the basic subjects in algebraic number theory is the behavior of polynomials with integer coefficients, like 3×2 + 4xy -5y2.\n\nCarl Friederich Gauss, one of the greatest mathematicians of the late eighteenth and early nineteenth centuries, developed a powerful tool for analyzing polynomials like the one above, where the variables are all raised to at most the second power.\n\nBhargava, by intensely studying Gauss’ work and adding to it an impressive level of geometric and algebraic insight, was able to extend Gauss’ tool to higher degree polynomials, in which we raise the variables to higher powers than two. This work vastly expands the ability of number theorists to study these fundamental mathematical objects.\n\nMartin Hairer\n\nMartin Hairer researches stochastic partial differential equations. Differential equations show up throughout mathematics, physics, and engineering. They describe processes that change over time, like the movement of a shell shot from a cannon, or the price of a stock or bond.\n\nDifferential equations come in a variety of flavors. Ordinary differential equations are equations that only have one variable involved. The motion of a cannonball, for example, can be modeled with a simple ordinary differential equation in which the only variable is the time since the cannon was fired.\n\nPartial differential equations involve processes that depend on multiple variables. In many physical settings, both time and the current position of an object are needed to determine the future trajectory of the object. These describe a much wider variety of processes in the world, and are generally much harder to work with than one-variable ordinary equations.\n\nDifferential equations can also be either deterministic or stochastic. 
The cannonball’s movement, or the movement of a satellite orbiting earth, are deterministic: outside of measurement error, once we’ve solved the equation, we have no doubt about where the cannonball or satellite will be at a given point in time. Stochastic equations have a random element involved. The motion of sugar grains stirred in a cup of coffee, or a stock’s price at a given moment in time are both best described by models that have an element of noise or randomness.\n\nStochastic partial differential equations — equations that have multiple input variables and random noise elements — have traditionally been very difficult for mathematicians to work with. Hairer developed a new theoretical framework that makes these equations far more tractable, opening the door to being able to solve a number of equations with both large amounts of mathematical interest in their own rights, and with powerful applications in the sciences and engineering.\n\nMaryam Mirzakhani\n\nMaryam Mirzakhani‘s work focuses on the geometry of Riemann surfaces. Riemann surfaces are a classic type of non-Euclidean geometry: while a Riemann surface still has two dimensions like a plane, and we can still define lines, angles, and curves on the surface, the way that the measurement of angles and distances will come out can be very different from what happens on a normal Euclidean plane.\n\nA basic example of this that we’ve looked at before is the Riemann sphere: a version of a sphere in which we still have a notion of lines and angles, but where strange things can happen, like triangles with three ninety degree angles.\n\nRiemann surfaces can get far more complicated than the Riemann sphere, and one of the major research areas in the study of these surfaces is how one Riemann surface can be smoothly deformed or smushed into another surface. These deformations themselves have their own strange geometries, called “moduli spaces”, and Mirzakhani has contributed several interesting results in understanding these mysterious spaces.\n\nRead also: Avila, Bhargava, Hairer, Mirzakhani by Terence Tao\n\nThis site uses Akismet to reduce spam. Learn how your comment data is processed."
] | [
null,
"https://physicsforme.files.wordpress.com/2014/08/fieldsmedalfrontandback.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9390653,"math_prob":0.94025475,"size":5601,"snap":"2020-34-2020-40","text_gpt3_token_len":1113,"char_repetition_ratio":0.11381097,"word_repetition_ratio":0.0,"special_character_ratio":0.18175326,"punctuation_ratio":0.08784474,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97868043,"pos_list":[0,1,2],"im_url_duplicate_count":[null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-25T23:43:49Z\",\"WARC-Record-ID\":\"<urn:uuid:d54a0596-5b2c-489c-8c22-ec4d346093a7>\",\"Content-Length\":\"118477\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ef91b5dc-5ee6-4309-9baa-b339faf63f7c>\",\"WARC-Concurrent-To\":\"<urn:uuid:7904d59c-7127-4693-af5f-19797697346f>\",\"WARC-IP-Address\":\"192.0.78.24\",\"WARC-Target-URI\":\"https://physicsforme.com/2014/08/12/fields-medals-2014/\",\"WARC-Payload-Digest\":\"sha1:VFAREG3RSA5V4V2TGNKMH4YTCOFZ2KBU\",\"WARC-Block-Digest\":\"sha1:BMNMA5I6LH5EPDRNDAZJP35X3NEA4ZID\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600400228998.45_warc_CC-MAIN-20200925213517-20200926003517-00302.warc.gz\"}"} |
https://wpcalc.com/en/total-asset-turnover/ | [
"Asset turnover is a financial ratio which calculates the company’s efficacy at using its assets in generating sales or revenue.\n\n### Calculator of Total Asset Turnover\n\n Currency = Sales revenue = Beginning assets = Ending assets = Total asset turnover =\n\nFormula of Total Asset Turnover\n\nTotal Asset Turnover = Net sales revenue / Average total assets Average total assets = (Beginning assets + Ending assets) / 2\n\n### Example of Total Asset Turnover\n\nIn 2012 and 2013 a textile company had Rs.960000 and Rs.1690000 in assets respectively. The company generated revenue of Rs.1360000. Calculate the total asset turnover ratio for the company.\n\nGiven,\n\n• Beginning assets = Rs.960000\n• Ending assets = Rs.1690000\n• Sales revenue = Rs.1360000\n\nSolution:\nStep 1 :\n\nAverage total assets = (Beginning assets + Ending assets) / 2\n= (960000 + 1690000) / 2\n= 2650000 / 2\n= 1325000 Rs\n\nStep 2 :\n\nTotal Asset Turnover = Net sales revenue / Average total assets\n= 1360000 / 1325000\n= 1.026",
null,
""
] | [
null,
"https://secure.gravatar.com/avatar/",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.79973453,"math_prob":0.99369895,"size":910,"snap":"2019-51-2020-05","text_gpt3_token_len":244,"char_repetition_ratio":0.17108168,"word_repetition_ratio":0.18300654,"special_character_ratio":0.33516484,"punctuation_ratio":0.103225805,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99990463,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-12T05:34:13Z\",\"WARC-Record-ID\":\"<urn:uuid:86ebef2b-f713-4283-8c6c-c4ac53ae40c3>\",\"Content-Length\":\"45112\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3f91ec0c-d963-4e2f-a351-82fd405d986b>\",\"WARC-Concurrent-To\":\"<urn:uuid:b5d2902a-a047-4ba3-88d2-16f86a83aa25>\",\"WARC-IP-Address\":\"89.184.79.27\",\"WARC-Target-URI\":\"https://wpcalc.com/en/total-asset-turnover/\",\"WARC-Payload-Digest\":\"sha1:YDDZMTSTSN4L7ZYMMC5UEIQOQC22QS4E\",\"WARC-Block-Digest\":\"sha1:M6KE3GH3KYNONOX25ED5MTEMCSVR67OF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540537212.96_warc_CC-MAIN-20191212051311-20191212075311-00320.warc.gz\"}"} |
https://osdata.com/programming/calculations/subtration.html | [
"",
null,
"music",
null,
"subtraction\n\nsummary\n\nThis subchapter looks at subtraction.\n\nfree computer programming text book project\n\nIf you like the idea of this project,",
null,
"stub section\n\nThis subchapter is a stub section. It will be filled in with instructional material later. For now it serves the purpose of a place holder for the order of instruction.\n\nProfessors are invited to give feedback on both the proposed contents and the propsed order of this text book. Send commentary to Milo, PO Box 1361, Tustin, California, 92781, USA.\n\nsubtraction\n\nThis subchapter looks at subtraction.\n\nnumber systems\n\nHere is a little bit of college level work on subtraction.\n\nA system of numbers os considered to be closed under an operation if it reproduces itself.\n\nS is a set of numbers. S is closed with respect to subtraction if for any two numbers a and b in the set S, the difference a - b is also a number in the set S. A set closed with respect to subtraction is called a modul.\n\nThe set of all integers {0, ±1, ±2, ±3, …} is a modul. The set of natural numbers {1, 2, 3, …} is not a modul.\n\nThe set of all even integers {0, ±2, ±4, ±6, …} is a modul.\n\nThe set of all rational numbers is a modul.\n\nThe set of all real numbers is a modul.\n\nThe set of all complex numbers is a modul.\n\nThe set of all purely imagiunary numbers ib is a modul.\n\nA theorem regarding moduls:\n\n(a) A modul always contains the number zero (0).\n\n(b) If a modul contains the number a then it also contains the number -a.\n\n(c) A modul is always closed with respect to addition.\n\nAnother theorem regarding moduls:\n\nAny modul M containing only integers includes all multiples of the greatest common divisor of the numbers in M.\n\nassembly language instructions\n\nFor most processors, integer arithmetic is faster than floating point arithmetic. This can be reversed in special cases such digital signal processors.\n\nOn many processors, floating point arithmetic is in an optional unit or optional coprocessor rather than being included on the main processor. This allows the manufacturer to charge less for the business machines that don’t need floating point arithmetic.\n\nThe basic four integer arithmetic operations are addition, subtraction, multiplication, and division. Arithmetic operations can be signed or unsigned (unsigned is useful for effective address computations). Some older processors don’t include hardware multiplication and division. Some processors don’t include actual multiplication or division hardware, instead looking up the answer in a massive table of results embedded in the processor.\n\nA specialized, but common, form of subtraction is an decrement instruction, which subtracts one from the contents of a register or memory location. For address computations, “decrement” may mean the subtraction of a constant other than one. 
Some processors have “short” or “quick” subtraction instructions that extend decrement to include a small range of values.\n\n• SUB Subtract; DEC VAX; signed subtraction of scalar quantities (8, 16, or 32 bit integer or 32, 64, or 128 bit floating point) in general purpose registers or memory, available in two operand (first operand subtracted from second operand with result replacing second operand) and three operand (first operand subtracted from second operand with result placed in third operand) (SUBB2 integer subtract byte 2 operand, SUBB3 integer subtract byte 3 operand, SUBW2 integer subtract word 2 operand, SUBW3 integer subtract word 3 operand, SUBL2 integer subtract long 2 operand, SUBL3 integer subtract long 3 operand) (SUBF2 subtract float 2 operand, SUBF3 subtract float 3 operand, SUBD2 subtract double float 2 operand, SUBD3 subtract double float 3 operand, SUBG2 subtract G float 2 operand, SUBG3 subtract G float 3 operand, SUBH2 subtract H float 2 operand, SUBH3 subtract H float 3 operand); clears or sets flags\n• SUB Subtract Integers; Intel 80x86; integer subtraction of the contents of a register or memory (8, 16, or 32 bits) from a memory location or a register; sets or clear flags\n• SUB Subtract; Motorola 680x0, Motorola 68300; signed subtract of the contents of a data register (8, 16, or 32 bits) from a memory location or subtracts the contents of a memory location (8, 16, or 32 bits) from a data register; sets or clear flags\n• SUB Subtract; MIX; subtract word or partial word field contents of memory from A-register (accumulator), overflow toggle possibly set\n• SR Subtract Register; IBM 360/370; RR format; signed subtract of the contents of a general purpose register (32 bits) from a general purpose register (32 bits); register to register only; sets or clears flags\n• S Subtract; IBM 360/370; RX format; signed subtract of the contents of a memory location (32 bits) from a general purpose register (32 bits); main memory to register only; sets or clears flags\n• SH Subtract Half-word; IBM 360/370; RX format; signed subtract of the contents of a memory location (16 bits) from a general purpose register (low order 16 bits); main memory to register only; sets or clears flags\n• SUBA Subtract Address; Motorola 680x0, Motorola 68300; unsigned subtract of the contents of a memory location or register (16 or 32 bits) from an address register; does not modify flags\n• SUBI Subtract Immediate; Motorola 680x0, Motorola 68300; signed subtract of immediate data (8, 16, or 32 bits) from a register or memory location; sets or clears flags\n• SUBQ Subtract Quick; Motorola 680x0, Motorola 68300; signed subtract of an immediate value of 1 to 8 inclusive from a register or memory lcoation; sets or clears flags for data registers and memory locations, does not modify flags for an address register\n• DEC Decrement; DEC VAX; decrements the integer contents of a general purpose register or contents of memory (DECB byte, DECW word, DECL longword); equivalent to SUBx2 #1, sum, but shorter and executes faster; clears or sets flags\n• DEC Decrement by 1; Intel 80x86; decrements the contents of a register or memory (8, 16, or 32 bits); sets or clear flags (does not modify carry flag)\n• SBWC Subtract With Carry; DEC VAX; integer subtraction (32 bit) in general purpose registers or memory, first operand and the C (carry) flag subtracted from second operand with result replacing second operand; used for extended precision subtraction; clears or sets flags\n• SBB Subtract Integers with Borrow; 
Intel 80x86; integer subtraction of the contents of a register or memory (8, 16, or 32 bits) and carry flag from a memory location or a register; sets or clear flags\n• SUBX Subtract Extended; Motorola 680x0, Motorola 68300; (signed subtract of a data register [8, 16, or 32 bits] and the extend bit from a data register) or (signed subtract of the contents of memory location [8, 16, or 32 bits] and the extend bit from the contents of another memory location while predecrementing both the source and destination address pointer registers), used to implement multi-precision integer arithmetic; sets or clears flags\n\nfree music player coding example\n\nCoding example: I am making heavily documented and explained open source code for a method to play music for free — almost any song, no subscription fees, no download costs, no advertisements, all completely legal. This is done by building a front-end to YouTube (which checks the copyright permissions for you).\n\nView music player in action: www.musicinpublic.com/.\n\nCreate your own copy from the original source code/ (presented for learning programming).\n\nview text bookHTML file\n\nBecause I no longer have the computer and software to make PDFs, the book is available as an HTML file, which you can convert into a PDF.\n\n Tweets by @osdata\n\nfree computer programming text book project\n\nBuilding a free downloadable text book on computer programming for university, college, community college, and high school classes in computer programming.\n\nIf you like the idea of this project,\n\nsend donations to:\nMilo\nPO Box 1361\nTustin, California 92781\n\nSupporting the entire project:\n\nIf you have a business or organization that can support the entire cost of this project, please contact Pr Ntr Kmt (my church)\n\nSome or all of the material on this web page appears in the",
null,
"",
null,
"This web site handcrafted on Macintosh",
null,
"computers using Tom Bender’s Tex-Edit Plus",
null,
"and served using FreeBSD",
null,
".",
null,
"†UNIX used as a generic term unless specifically used as a trademark (such as in the phrase “UNIX certified”). UNIX is a registered trademark in the United States and other countries, licensed exclusively through X/Open Company Ltd.\n\nNames and logos of various OSs are trademarks of their respective owners."
] | [
null,
"https://osdata.com/ads/tsosdragon.jpg",
null,
"https://osdata.com/pict/oslogo.gif",
null,
"http://www.google.com/logos/Logo_25wht.gif",
null,
"http://www.google.com/logos/Logo_25wht.gif",
null,
"https://osdata.com/pict/mademac.gif",
null,
"https://osdata.com/pict/mac.gif",
null,
"https://osdata.com/pict/texedit.gif",
null,
"https://osdata.com/pict/freebsdm.gif",
null,
"https://osdata.com/pict/bvbstar.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8911496,"math_prob":0.732467,"size":4785,"snap":"2022-05-2022-21","text_gpt3_token_len":987,"char_repetition_ratio":0.11796695,"word_repetition_ratio":0.08108108,"special_character_ratio":0.20041797,"punctuation_ratio":0.111363634,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9547404,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-01-20T10:25:28Z\",\"WARC-Record-ID\":\"<urn:uuid:0b77259e-6bf4-4935-851e-b276065e09bd>\",\"Content-Length\":\"24055\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3d809cf4-a3e1-47ea-a11f-31c91aa9caba>\",\"WARC-Concurrent-To\":\"<urn:uuid:0feb108f-70f3-419e-97ea-284f525d5953>\",\"WARC-IP-Address\":\"162.214.69.47\",\"WARC-Target-URI\":\"https://osdata.com/programming/calculations/subtration.html\",\"WARC-Payload-Digest\":\"sha1:SXP3BEMEJZI4UBUAZI6DD6O5H5E7VUWS\",\"WARC-Block-Digest\":\"sha1:5IU7A7ABHXDOY3DK5FGCAYOEJWN2F4UT\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-05/CC-MAIN-2022-05_segments_1642320301737.47_warc_CC-MAIN-20220120100127-20220120130127-00641.warc.gz\"}"} |
https://eng.libretexts.org/Courses/Delta_College/C___Programming_I_(McClanahan)/14%3A_Overloading_in_C/14.02%3A_Overload_a_Function/14.2.2%3A_Function_overloading_and_const_keyword | [
"$$\\newcommand{\\vecs}{\\overset { \\rightharpoonup} {\\mathbf{#1}} }$$ $$\\newcommand{\\vecd}{\\overset{-\\!-\\!\\rightharpoonup}{\\vphantom{a}\\smash {#1}}}$$$$\\newcommand{\\id}{\\mathrm{id}}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$ $$\\newcommand{\\kernel}{\\mathrm{null}\\,}$$ $$\\newcommand{\\range}{\\mathrm{range}\\,}$$ $$\\newcommand{\\RealPart}{\\mathrm{Re}}$$ $$\\newcommand{\\ImaginaryPart}{\\mathrm{Im}}$$ $$\\newcommand{\\Argument}{\\mathrm{Arg}}$$ $$\\newcommand{\\norm}{\\| #1 \\|}$$ $$\\newcommand{\\inner}{\\langle #1, #2 \\rangle}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$ $$\\newcommand{\\id}{\\mathrm{id}}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$ $$\\newcommand{\\kernel}{\\mathrm{null}\\,}$$ $$\\newcommand{\\range}{\\mathrm{range}\\,}$$ $$\\newcommand{\\RealPart}{\\mathrm{Re}}$$ $$\\newcommand{\\ImaginaryPart}{\\mathrm{Im}}$$ $$\\newcommand{\\Argument}{\\mathrm{Arg}}$$ $$\\newcommand{\\norm}{\\| #1 \\|}$$ $$\\newcommand{\\inner}{\\langle #1, #2 \\rangle}$$ $$\\newcommand{\\Span}{\\mathrm{span}}$$\n\n#include<iostream>\nusing namespace std;\n\nclass Test\n{\nprotected:\nint num1;\npublic:\nTest (int i):num1(i) { }\nvoid fun() const\n{\ncout << \"fun() const called \" << endl;\n}\nvoid fun()\n{\ncout << \"fun() called \" << endl;\n}\n};\n\nint main()\n{\nTest t1 (10);\nconst Test t2 (20);\nt1.fun();\nt2.fun();\nreturn 0;\n}\n\n\nOutput: The above program compiles and runs fine, and produces following output.\n\nfun() called\nfun() const called\n\nThe two methods ‘void fun() const’ and ‘void fun()’ have same signature except that one is const and other is not. Also, if we take a closer look at the output, we observe that, ‘const void fun()’ is called on const object and ‘void fun()’ is called on non-const object.\n\nC++ allows member methods to be overloaded on the basis of const type. Overloading on the basis of const type can be useful when a function return reference or pointer. We can make one function const, that returns a const reference or const pointer, other non-const function, that returns non-const reference or pointer.\n\nRules related to const parameters are interesting. Let us first take a look at following two examples. The program 1 fails in compilation, but program 2 compiles and runs fine.\n\n// PROGRAM 1 (Fails in compilation)\n#include<iostream>\nusing namespace std;\n\nvoid fun(const int Num1)\n{\ncout << \"fun(const int) called \" << Num1;\n}\nvoid fun(int Num1)\n{\ncout << \"fun(int ) called \" << Num1;\n}\nint main()\n{\nconst int myNum = 10;\nfun(myNum);\nreturn 0;\n}\n\n\nOutput is:\n\nerror: redefinition of ‘void fun(int)’\nvoid fun(int Num2)\n^~~\n\nThen the second piece of code:\n\n// PROGRAM 1 (Fails in compilation)\n#include<iostream>\nusing namespace std;\n\nvoid fun(const int i)\n{\ncout << \"fun(const int) called \";\n}\nvoid fun(int i)\n{\ncout << \"fun(int ) called \" ;\n}\nint main()\n{\nconst int i = 10;\nfun(i);\nreturn 0;\n}\n\n\nwhich produces the outpu:\n\nconst fun() GeeksforGeeks\n\n\nC++ allows functions to be overloaded on the basis of const-ness of parameters only if the const parameter is a reference or a pointer. That is why the program 1 failed in compilation, but the program 2 worked fine. This rule actually makes sense. In program 1, the parameter ‘myNum’ is passed by value, so ‘Num1’ in fun() is a copy of ‘myNum’ in main(). Hence fun() cannot modify ‘myNum’ of main(). Therefore, it doesn’t matter whether ‘Num1’ is received as a const parameter or normal parameter. 
When we pass by reference or pointer, we can modify the value referred to or pointed to, so we can have two versions of a function: one which can modify the referred or pointed-to value, and another which cannot. (A short, illustrative sketch of a const/non-const overload pair returning references appears after this record.)"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7616951,"math_prob":0.9956131,"size":2762,"snap":"2022-27-2022-33","text_gpt3_token_len":715,"char_repetition_ratio":0.13669325,"word_repetition_ratio":0.09166667,"special_character_ratio":0.28167993,"punctuation_ratio":0.13271028,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9684679,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-07-04T21:51:12Z\",\"WARC-Record-ID\":\"<urn:uuid:24501824-f3af-4647-afb3-1603bc278d82>\",\"Content-Length\":\"101037\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8e5bfe3f-800d-43fb-8dc7-3a122f7cde81>\",\"WARC-Concurrent-To\":\"<urn:uuid:3b63e8be-8ae3-4b99-ad90-9e316035b181>\",\"WARC-IP-Address\":\"13.249.39.44\",\"WARC-Target-URI\":\"https://eng.libretexts.org/Courses/Delta_College/C___Programming_I_(McClanahan)/14%3A_Overloading_in_C/14.02%3A_Overload_a_Function/14.2.2%3A_Function_overloading_and_const_keyword\",\"WARC-Payload-Digest\":\"sha1:XXHYPANZROVMQZV4A7TJO4I2DKMFGUR3\",\"WARC-Block-Digest\":\"sha1:3SQBFEUPAMVFQNEQIKSLVIV2EHGLNXUI\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-27/CC-MAIN-2022-27_segments_1656104496688.78_warc_CC-MAIN-20220704202455-20220704232455-00032.warc.gz\"}"} |
https://developers.arcgis.com/kotlin/api-reference/arcgis-maps-kotlin/com.arcgismaps.geometry/-spatial-reference/index.html | [
"# SpatialReference\n\nThe spatial reference specifies how geometry coordinates relate to real-world space. Instances of this class represent a specific coordinate system identified by a well-known ID (WKID) number or well-known text (WKT) string. There are two broad classes of coordinate systems:\n\n• Geographic coordinate systems use a 3-dimensional spherical surface to define locations on the earth.\n\n• Projected coordinate systems use a flat, 2-dimensional surface. See [https://developers.arcgis.com/documentation/spatial-references/] for more information about spatial references.\n\nSpatialReference ensures that you can accurately view, query, and analyze the layers of a GeoModel.\n\nThe spatial reference value is available from a map or scene after loading has completed, and is immutable. If you want to set this value for a new map or scene, use the ArcGISMap.ArcGISMap(SpatialReference) or ArcGISScene.ArcGISScene(SceneViewTilingScheme) constructors.\n\n200.1.0\n\n## Constructors\n\nCreates a spatial reference based on WKID. The method will create a spatial reference that has only horizontal coordinate system and does not have vertical coordinate system associated with it.\n\nfun SpatialReference(wkid: Int, verticalWkid: Int)\n\nCreates a spatial reference based on WKID for the horizontal coordinate system and vertical coordinate system. The method creates a spatial reference that has both horizontal and vertical coordinate systems. When the vertical WKID is 0, the method is equivalent to calling SpatialReference.SpatialReference(Int), and does not define a vertical coordinate system part.\n\nCreates a spatial reference based on well-known text.\n\nobject Companion\n\n## Properties\n\nIf the given spatial reference is a projected coordinate system, then this returns the geographic coordinate system of that system. If the spatial reference is a projected coordinate system, then a spatial reference object representing the underlying geographic coordinate system is returned. Every projected coordinate system has an underlying geographic coordinate system. If the spatial reference is a geographic coordinate system, then a reference to itself is returned. If the spatial reference is a local spatial reference, a null is returned with an error.\n\nTrue if spatial reference has a vertical coordinate system set; false otherwise. A spatial reference can optionally include a definition for a vertical coordinate system (VCS), which can be used to interpret the z-values of a geometry. A VCS defines linear units of measure, the origin of z-values, and whether z-values are \"positive up\" (representing heights above a surface) or \"positive down\" (indicating that values are depths below a surface).\n\nTrue if spatial reference is a Geographic Coordinate System.\n\nTrue if coordinate system is horizontally pannable.\n\nTrue if spatial reference is a Projected Coordinate System.\n\nThe unit of measure for the horizontal coordinate system of this spatial reference.\n\nThe unit of measure for the vertical coordinate system of this spatial reference. 
Is null if SpatialReference.hasVertical is false.\n\nThe well-known ID for the vertical coordinate system (VCS), or 0 if the spatial reference has no VCS or has a custom VCS.\n\nval wkid: Int\n\nThe well-known ID for the horizontal coordinate system, or 0 if the spatial reference has a custom horizontal coordinate system.\n\nThe well-known text for the horizontal and vertical coordinate system.\n\n## Functions\n\nopen operator override fun equals(other: Any?): Boolean"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.82315713,"math_prob":0.83642983,"size":3134,"snap":"2023-40-2023-50","text_gpt3_token_len":596,"char_repetition_ratio":0.2370607,"word_repetition_ratio":0.04845815,"special_character_ratio":0.17772815,"punctuation_ratio":0.09756097,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9555883,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-23T23:37:48Z\",\"WARC-Record-ID\":\"<urn:uuid:2cba322c-814e-4d1b-8f79-58975e2b376f>\",\"Content-Length\":\"41321\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5898805e-94fe-4334-b881-53612695db80>\",\"WARC-Concurrent-To\":\"<urn:uuid:b9381c3b-c45c-4821-9a1c-1d70365d5c6c>\",\"WARC-IP-Address\":\"18.160.18.26\",\"WARC-Target-URI\":\"https://developers.arcgis.com/kotlin/api-reference/arcgis-maps-kotlin/com.arcgismaps.geometry/-spatial-reference/index.html\",\"WARC-Payload-Digest\":\"sha1:BNXMOC3FSC64J3QJ6VA5XZYTYYXODNTD\",\"WARC-Block-Digest\":\"sha1:GIM3F4VEELPQPSUX3JCSBSZQOZWFB5NB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506539.13_warc_CC-MAIN-20230923231031-20230924021031-00158.warc.gz\"}"} |
https://calculatorsonline.org/rounding-numbers/what-is-845-rounded-to-the-nearest-whole | [
"# 845 rounded to the nearest whole\n\nHere you will see step by step solution to round of the 845 to the nearest whole. What is 845 rounded to the nearest whole? 845 rounded to the nearest whole is 845, check the explanation that how to rounding the 845 to nearest whole.\n\n## Answer: Rounded 845 to nearest whole is\n\n= 845\n\n### How to round 845 to the nearest whole?\n\nTo round of the number 845 simply find the digit at whole place, then look to the right digit next to the whole place, if this number is greater than or equal to 5 (5, 6, 7, 8, 9) round up or if number is less then 5 (0, 1, 2, 3, 4) round down whole(5) number. Remove all the digits to the right of the rounding digit.\n\n#### Solution for decimal number 845 to the nearest whole\n\nGiven number is => 845\n\n• Number at whole place is = 5\n• 845 = 5\n• Now we need to find the digit at the right side of whole place = 0\n• 0 is smaller than 5, whole place digit 5 we don't need to change it. It will remain unchanged.\n• In 845, after rounding digit 5, remove all the numbers in the right side, then rewrite the numbers.\n• Final Conclusion: 845\n\nHence, the 845 to the nearest whole is 845."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.86989665,"math_prob":0.9956066,"size":1119,"snap":"2022-40-2023-06","text_gpt3_token_len":307,"char_repetition_ratio":0.22690582,"word_repetition_ratio":0.04888889,"special_character_ratio":0.30741733,"punctuation_ratio":0.1124498,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9603134,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-03T07:12:05Z\",\"WARC-Record-ID\":\"<urn:uuid:6989d9d0-5c7d-4c97-8f61-9a5ab2404f7b>\",\"Content-Length\":\"18095\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:ee52b7b7-e071-4ed1-b489-a2de9a13554d>\",\"WARC-Concurrent-To\":\"<urn:uuid:1d4deb13-9f4c-491e-bed4-0159e82d8c3b>\",\"WARC-IP-Address\":\"172.67.209.140\",\"WARC-Target-URI\":\"https://calculatorsonline.org/rounding-numbers/what-is-845-rounded-to-the-nearest-whole\",\"WARC-Payload-Digest\":\"sha1:IJHXBFIHRTDGJ346DY3HBUZZCB2EB2LA\",\"WARC-Block-Digest\":\"sha1:Y32E6ZWBT4NSNYCZNVCMGZYAMXVS2DUA\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500044.16_warc_CC-MAIN-20230203055519-20230203085519-00689.warc.gz\"}"} |
https://hq.holyfiregames.com/hq/topic/d551be-magnification-of-concave-lens-formula | [
"A magnification of 2 indicates the image is twice the size of the object and a magnification of 1 indicates an image size being the same as the object size. The sign conventions for the given quantities in the lens equation and magnification equations are as follows: f is + if the lens is a double convex lens (converging lens) f is - if the lens is a double concave lens (diverging lens) d i is + if the image is a real image and located on the opposite side of the lens. On the other hand, the magnification m is negative when the image formed is real and inverted. An object 5 cm high is held 25 cm away from a converging lens of focal length 20 cm. u = object distance Please note that the magnification formula is applicable both in convex lenses and concave lenses. Code to add this calci to your website . On the other hand, the magnification m is negative when the image formed is real and inverted. Calculate the position of the images formed by the fol-lowing concave lenses. In the case of a concave lens, it is always positive. Magnification of Convex Lens: It is a ratio between the image height and object height. (click on the green letters for the solutions). If the magnification is positive, then the image is upright compared to the object (virtual image). An expression showing the relation between object distance, image distance and focal length of a mirror is called mirror formula. Example Problem #1 A 4.00-cm tall light bulb is placed a distance of 45.7 cm from a concave mirror having a focal length of 15.2 cm. Learn more about Reflection of Light here. Section 3: Concave Lenses 14 Here are some exercises with concave lenses. Q. Please note that the magnification formula is applicable both in convex lenses and concave lenses. Using lens formula the equation for magnification can also be obtained as . Therefore, the relationship between the object distance, the image distance and the focal length of a lens is given by the $$\\text{Lens Formula: }\\frac{1}{u}+\\frac{1}{v}=\\frac{1}{f}$$ The lens formula may be applied to convex lenses as well as concave lenses provided the ‘real is positive’ sign convention is followed. (a)For an object with u= 12cmif the focal length is 4cm Solution: As the lens here is concave, the focal length (f) = -10 cm. Hence, the expression for magnification (m) becomes: m = h’/h = -v/u. The focal length of the mirror is f. a. Determine the image distance and the image size. Be careful with the sign of the focal length in the lens formula! $$\\frac{1}{m(m+1)}$$ b. Formula : Where, f - Focal length, d i - Image distance, d 0 - Object distance. Is lens formula applicable only for convex lens? Assumptions and Sign conventions Draw the ray diagram and find the position, size and nature of the image formed. If the magnification is positive, then the image is upright compared to the object (virtual image). The magnification ($$M$$) of the image formed can be calculated using the following formula. Example 2: The distance of an object of height 6 cm from a concave lens is 20 cm. The two formulas given above are together referred to as the thin lens formula. We also have another formula for magnification in lenses Magnification = v/u where v is image distance u is object distance Note: - If magnification (m) is positive, It means image formed is virtual and erect If magnification (m) is negative, It means image formed is real and inverted Questions Example 10.3 - A concave lens has focal length of 15 cm. 
As a demonstration of the effectiveness of the mirror equation and magnification equation, consider the following example problem and its solution. Physics Grade XI Reference Note: Mirror formula for concave mirror when real image is formed and for convex mirror. What will be the distance of the object, when a concave mirror produces an image of magnification m? Online physics calculator that calculates the concave mirror equation from the given values of object distance (do), the image distance (di), and the focal length (f). If ‘ $$i$$ ’ is positive, the image is upright and if ‘ $$i$$ ’ is negative, the image is inverted. Exercise 2. Magnification of Convex Lens: It is a ratio between the image height and object height. Object distance (o) = + 20 cm Now by putting the values of (f) and (o) in 1/v+ 1/o = 1/f, we will get, 1/v = 1/f - 1/o Where, $$v$$ is the object height $$u$$ is the image height. At what distance should the object from the lens be placed so … And the magnification m is positive when the image formed is virtual and erect. A magnification of 2 indicates the image is twice the size of the object and a magnification of 1 indicates an image size being the same as the object size. Lens formula is applicable for convex as well as concave lenses. In the case of a concave lens, it is always positive. It is an equation that relates the focal length, image distance, and object distance for a spherical mirror. If its focal length is 10 cm, calculate the size and position of the image formed. Solved Example for You. m = h 2 /h 1 = v//u = (f-v)/f = f/(f+u) This equation is valid for both convex and concave lenses and for real and virtual images. You can try the following sample problem using this. It is given as, \\frac {1} {i} + \\frac {1} {o} = \\frac {1} {f} i= distance of the image from the lens. Define lens formula. A concave lens of focal length 15 cm forms an image 10 cm from the lens. If magnification is negative then the image is inverted as … These lenses have negligible thickness. Using lens formula the equation for magnification can also be obtained as m = h2/h1 = v//u = (f-v)/f = f/ (f+u) This equation is valid for both convex and concave lenses and for real and virtual images. Table shows the sign convention for the values of object distance, image … And the magnification m is positive when the image formed is virtual and erect. The magnification is negative for real image and positive for virtual image."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9050861,"math_prob":0.9981354,"size":5948,"snap":"2021-21-2021-25","text_gpt3_token_len":1406,"char_repetition_ratio":0.20777254,"word_repetition_ratio":0.2236842,"special_character_ratio":0.23839946,"punctuation_ratio":0.08966695,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9996923,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-14T04:50:24Z\",\"WARC-Record-ID\":\"<urn:uuid:ca481768-542a-4a96-9f84-c50a1b0cf100>\",\"Content-Length\":\"16453\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1e4bd611-62f2-4cfe-9f41-e7aeaf955fc3>\",\"WARC-Concurrent-To\":\"<urn:uuid:79873004-83eb-420e-a118-fcc6ff192443>\",\"WARC-IP-Address\":\"104.21.31.145\",\"WARC-Target-URI\":\"https://hq.holyfiregames.com/hq/topic/d551be-magnification-of-concave-lens-formula\",\"WARC-Payload-Digest\":\"sha1:WP5P7TOWUVYJMN3BCDIPCXAUSS7WIDDE\",\"WARC-Block-Digest\":\"sha1:4NCSELMQ4IIEMYAPIXBFJIDNGH36VYMN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243991737.39_warc_CC-MAIN-20210514025740-20210514055740-00304.warc.gz\"}"} |
https://www.freezingblue.com/flashcards/print_preview.cgi?cardsetID=204876 | [
"# IS425Exam1ch4\n\nThe flashcards below were created by user tttran1 on FreezingBlue Flashcards.\n\n1. Modeling and Analysis Topics\n• -Modeling for MSS (a critical component)\n• -Static and dynamic models\n• -Treating certainty, uncertainty, and risk\n• -Influence diagrams\n• -Decision analysis of a few alternatives (with decision tables and decision trees)\n• -Optimization via mathematical programming\n• -Heuristic programming\n• -Simulation\n• -Model base management\n2. Major Modeling Issues\n• -Problem identification and environmental analysis (information collection)\n• -Variable identification\n• -Influence diagrams, cognitive maps\n• -Forecasting/predicting\n• -Multiple models: A MSS can include several models, each of which represents a different part of the decision-making problem\n• -Model management\n3. Categories of models\n4. Static Analysis\n• -Single snapshot of the situation\n• -Single interval\n5. Dynamic Analysis\n• -Dynamic models\n• -Evaluate scenarios that change over time\n• -Time dependent\n• -Represents trends and patterns over time\n• -More realistic: Extends static models\n6. -Certainty Models\n• -Assume complete knowledge\n• -All potential outcomes are known\n• -May yield optimal solution\n7. Uncertainty\n• -Several outcomes for each decision\n• -Probability of each outcome is unknown\n• -Knowledge would lead to less uncertainty\n8. Risk analysis (probabilistic decision making)\n• -Probability of each of several outcomes occurring\n• -Level of uncertainty => Risk (expected value)\n9. Influence Diagrams\n• -Graphical representations of a model “Model of a model”\n• -A tool for visual communication\n• -Some influence diagram packages create and solve the mathematical model\n• -Framework for expressing MSS model relationships\n• --Rectangle = a decision variable\n• --Circle = uncontrollable or intermediate variable\n• --Oval = result (outcome) variable: intermediate or final\n• -Variables are connected with arrows indicates the direction of influence (relationship)\n10. Decision tables\n• -Multiple criteria decision analysis\n• -Features include decision variables (alternatives), uncontrollable variables, result variables\n• One goal: maximize the yield after one year\n11. Decision trees\n• -Graphical representation of relationships\n• -Multiple criteria approach\n• -Demonstrates complex relationships\n• -Cumbersome, if many alternatives exists\n12. Mathematical Programming\n-A family of tools designed to help solve managerial problems in which the decision maker must allocate scarce resources among competing activities to optimize a measurable goal\n13. Optimal solution: The best possible solution to a modeled problem\n-Linear programming (LP): A mathematical model for the optimal solution of resource allocation problems. All the relationships are linear\n14. Linear Programming Steps\n• 1. Identify the …\n• -Decision variables\n• -Objective function\n• -Objective function coefficients\n• -Constraints\n• --Capacities / Demands\n• 2. Represent the model\n• -LINDO: Write mathematical formulation\n• -EXCEL: Input data into specific cells in Excel\n• 3. Run the model and observe the results\n15. Sensitivity\n• -Assesses impact of change in inputs on outputs\n• -Eliminates or reduces variables\n• -Can be automatic or trial and error\n16. What-if\n-Assesses solutions based on changes in variables or assumptions (scenario analysis)\n17. 
Goal seeking\n• -Backwards approach, starts with goal\n• -Determines values of inputs needed to achieve goal\n• -Example is break-even point determination\n18. Heuristic Programming\n• -Cuts the search space\n• -Gets satisfactory solutions more quickly and less expensively\n• -Finds good enough feasible solutions to very complex problems\n• -Heuristics can be\n• -Quantitative\n• -Qualitative (in ES)\n19. Traveling Salesman Problem\n-A traveling salesman must visit customers in several cities, visiting each city only once, across the country. Goal: Find the shortest possible route\n20. When to Use Heuristics\n• -Inexact or limited input data\n• -Complex reality\n• -Reliable, exact algorithm not available\n• -Computation time excessive\n• -For making quick decisions\n21. Limitations of Heuristics\n-Cannot guarantee an optimal solution\n22. Modern Heuristic Methods\n• -Tabu search\n• -Intelligent search algorithm\n• -Genetic algorithms\n• -Survival of the fittest\n• -Simulated annealing\n• -Analogy to Thermodynamics\n23. Simulation\n• -Technique for conducting experiments with a computer on a comprehensive model of the behavior of a system\n• -Frequently used in DSS tools\n24. Major Characteristics of Simulation\n• -Imitates reality and capture its richness\n• -Technique for conducting experiments\n• -Descriptive, not normative tool\n• -Often to “solve” very complex problems\n• --Simulation is normally used only when a problem is too complex to be treated using numerical optimization techniques\n• -The theory is fairly straightforward\n• -Great deal of time compression\n• -Experiment with different alternatives\n• -The model reflects manager’s perspective\n• -Can handle wide variety of problem types\n• -Can include the real complexities of problems\n• -Produces important performance measures\n• -Often it is the only DSS modeling tool for non-structured problems\n26. Limitations of Simulation\n• -Cannot guarantee an optimal solution\n• -Slow and costly construction process\n• -Cannot transfer solutions and inferences to solve other problems (problem specific)\n• -So easy to explain/sell to managers, may lead overlooking analytical solutions\n• -Software may require special skills\n27. Simulation Methodology -Steps:\n• -Model real system and conduct repetitive experiments.\n• 1. Define problem\n• 2. Construct simulation model\n• 3. Test and validate model\n• 4. Design experiments\n• 5. Conduct experiments\n• 6. Evaluate results\n• 7. Implement solution\n28. Stochastic vs. Deterministic Simulation\n-In stochastic simulations: We use distributions (Discrete or Continuous probability distributions)\n29. Time-dependent vs. Time-independent Simulation\n-Time independent stochastic simulation via Monte Carlo technique (X = A + B)\n30. Discrete event vs. Continuous simulation\n31. Steady State vs. Transient Simulation\n32. Simulation Implementation\n• -Visual simulation\n• -Object-oriented simulation\n33. Visual interactive modeling (VIM) Also called\n• -Visual interactive problem solving\n• -Visual interactive modeling\n• -Visual interactive simulation\n• -Uses computer graphics to present the impact of different management decisions\n• -Often integrated with GIS\n• -Users perform sensitivity analysis\n• -Static or a dynamic (animation) systems\n34. 
Model Base Management\n• -MBMS: capabilities similar to that of DBMS\n• -But, there are no comprehensive model base management packages\n• -Each organization uses models somewhat differently\n• -There are many model classes\n• -Within each class there are different solution approaches\n• -Relational MBMS\n• -Object-oriented MBMS\n Author: tttran1 ID: 204876 Card Set: IS425Exam1ch4 Updated: 2013-03-04 16:22:01 Tags: IS425Exam1ch4 Folders: Description: IS425Exam1ch4 (A minimal C++ Monte Carlo sketch illustrating the simulation steps listed in these cards appears after this record.)"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.75470734,"math_prob":0.8329128,"size":6909,"snap":"2020-34-2020-40","text_gpt3_token_len":1477,"char_repetition_ratio":0.13149892,"word_repetition_ratio":0.003902439,"special_character_ratio":0.20668693,"punctuation_ratio":0.048087433,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98283803,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-08-14T01:52:35Z\",\"WARC-Record-ID\":\"<urn:uuid:afd7b779-b961-472b-9982-01b61698406a>\",\"Content-Length\":\"22225\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:c071f311-e3fa-4e3c-99c9-2c4b337ef49b>\",\"WARC-Concurrent-To\":\"<urn:uuid:089f5086-cd80-4543-9ebc-e78fef07e179>\",\"WARC-IP-Address\":\"34.198.188.69\",\"WARC-Target-URI\":\"https://www.freezingblue.com/flashcards/print_preview.cgi?cardsetID=204876\",\"WARC-Payload-Digest\":\"sha1:KM42FQWNRJ6UMCNE6LWZRQJZRJNKQ6KI\",\"WARC-Block-Digest\":\"sha1:3FFRKIYJWQ7BLC6LTZYZDTT2IPQXMHZK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-34/CC-MAIN-2020-34_segments_1596439739134.49_warc_CC-MAIN-20200814011517-20200814041517-00488.warc.gz\"}"} |
https://mathoverflow.net/questions/7541/extremal-question-on-matrices | [
"# Extremal question on matrices\n\nThe following question was posed to me a while ago. No one I know has a given a satisfactory (or even a complete) proof:\n\nSuppose that $M$ is an $n$ x $n$ matrix of non-negative integers. Additionally, suppose that if a coordinate of $M$ is zero, then the sum of the entries in its row and its column is at least $n$.\n\nWhat is the smallest that the sum of all the entries in $M$ can be?\n\nThe conjecture posed to me was that it was $\\frac{n^2}{2}$ which is obtained by the diagonal matrix with $\\frac{n}{2}$ in all diagonal entries.\n\n[I'm guessing that there should be a \"suppose that\" in describing M. -- GJK]\n\n• Cute problem! I'm reminded of en.wikipedia.org/wiki/Ore%27s_theorem which says in this setting that if M is an adjacency matrix the underlying graph is Hamiltonian, although this isn't close to strong enough as far as I can see. Dec 2, 2009 at 2:26\n• I don't think \"Combinatorial question\" is a descriptive, useful title for this question (that information is already in the tag). Dec 2, 2009 at 2:29\n• The conjecture is only for even n. Dec 2, 2009 at 2:46\n• No, it's new to me, but `the diagonal matrix with $n/2$ in all diag entries' is only a matrix of non-negative integers if $n$ is even. Dec 2, 2009 at 6:31\n• But if n^2/2 is a lower bound just for even n, it's a lower bound for all n, since otherwise we'd just glue together four mxm matrices into a (2m)x(2m) matrix. Dec 2, 2009 at 7:00\n\nThe following looks too simple, so perhaps there's a mistake, but here goes.\n\nLet $m$ be the smallest among all row sums and column sums. If $m\\geq n/2$, we are done.\n\nOtherwise, $m=cn$ with $c\\lt 1/2$. Suppose it is a column which has sum $m$. This column has at least $n-m$ zeroes, and each of the corresponding rows has sum $\\geq n-m$. The remaining $m$ rows each has sum $\\geq m$.\n\nIn total we have a sum of at least $(n-m)^2+m^2 = ((1-c)^2+c^2)n^2$. Finally, note that $(1-c)^2+c^2\\gt 1/2$ when $c\\lt 1/2$.\n\nSo this gives a lower bound of $n^2/2$, and equality requires that any row and any column sums to exactly $n/2$, so the matrix is a sum of $n/2$ permutation matrices by König's Theorem.\n\nNow what about the case when $n$ is odd?\n\nEdit: Since the sum is an integer, the lower bound $n^2/2$ actually gives $(n^2+1)/2$, which can be attained by for example taking the direct sum of an $m\\times m$ matrix of $1$s with an $(n-m)\\times (n-m)$ matrix of $1$, where $m=(n-1)/2$. When $n$ is odd, this is the only extremal example up to column and row permutations. Here is a proof.\n\nLet $m$ now, as originally, denote the minimum over all row and column sums. If $m\\geq (n+1)/2$, then the total sum is too large: at least $nm\\geq n(n+1)/2$. Therefore, $m\\leq(n-1)/2$, and the $(n-m)^2+m^2$ lower bound now gives a total sum of at least $(n-n(n-1)/2)^2+((n-1)/2)^2 = (n^2+1)/2$ (using the fact that $(1-c)^2+c^2$ is decreasing when $c\\lt 1/2$). So up to now we have only rederived the lower bound for $n$ odd.\n\nHowever, if equality now holds, we get that $m=(n-1)/2$, that each row adds up to either $m$ ($m$ times) or $n-m$ ($n-m$ times), and that in a column that adds up to $m$, there are exactly $n-m$ $0$s, so all entries are $0$ or $1$ (and similar statements with columns and rows interchanged). By permuting the rows and columns we may assume that the first $m$ rows [columns] each add up to $m$.There can't be a $0$ in the upper left $m\\times m$ submatrix, since then the sum of the row and column containing the $0$ adds up to only $2m\\lt n$. 
We have found a direct sum of an $m\\times m$ and an $(n-m)\\times(n-m)$ all $1$ matrix.\n\n• That does look too simple, but if there's a mistake I can't find it! Dec 2, 2009 at 15:31\n• Hi Konrad, that looks great to me. In fact, your quadratic form is minimized at c = 1/2, and achieves 1/2 there. Dec 2, 2009 at 15:34\n• That is nice. I did not see your post when I was editing my (much more circuitous) answer. Dec 2, 2009 at 15:46\n• By the way, I think for odd n essentially the same argument will work, although you may have to be slightly more careful with the final analysis. (Asymptotically it's the same, though.) Dec 2, 2009 at 20:59\n\nAgain, very nice question!\n\nEdit: I can get a lower bound of $\\frac{3 - \\sqrt{5}}{2} n^2$ as follows: Set $c = \\frac{3 - \\sqrt{5}}{2}$ for conciseness. Assign a (not necessarily simple) bipartite graph to $M$ in the obvious way. Now if the sum of some row is less than $cn$, there must be at least $(1-c)n$ vertices in the other partite set of the graph that aren't adjacent to the corresponding vertex, which therefore must each have degree at least $(1-c)n$ and so have total degree at least $(1-c)^2n^2 \\geq cn^2$.\n\nUnfortunately c is only about 0.38, and that's the highest constant this argument can give us. But maybe it can be modified a little bit to give us 0.5.\n\nEdit 2: This problem really fights back! I'm beginning to think that there's some kind of dichotomy in play here, so that we may end up needing some kind of structure theorem... I've thought about this way more than I should have, and still haven't managed to appreciably improve the above bound (actually I think I have got the constant up to $\\frac{3 - \\sqrt{2}}{2}$, which is a smidgen under 0.4, by a very similar argument) but I think a record of what I've tried might be useful.\n\n1. My original plan of attack was to use the existence of a vertex of degree < n/2 to perform some clever counting argument which shows that there are at least $n^2/2$ edges. Of course, there's certainly no guarantee that this wouldn't work, but it appears that you have to be very careful with the counting: the disjoint union of the complete bipartite graphs $K_{m-1, m-1}$ and $K_{m+1, m+1}$ satisfies the desired property and has the maximal allowed number of low-degree vertices, but only has $n^2/2 + 2$ edges! It's not clear to me how to construct a counting argument which can handle both the above situation and more \"typical\" ones in which the vertices have a wide range of degrees, so this approach was put to the side.\n\n2. However, there may be a workaround: we don't actually have to show that $n^2/2$ is a lower bound! In fact if we can show a lower bound of the form $(0.5-o(1))n^2$ then the desired bound follows. Why? Because if we had a matrix with the desired property and density less than 1/2, we could stick a bunch of copies of it together to get arbitrarily large matrices with the same density. This means that we can sacrifice, for example, O(n) factors in our counting arguments.\n\n3. By the way, here's one dichotomy which I think might have a chance of leading to a proof. Consider a multigraph with the desired property; let G denote the bipartite complement of its underlying simple graph. If G has a \"large\" matching, i.e., one of size n-o(n), we get our weak lower bound by a simple counting argument. 
If G has no large matching, then maybe we can use this property to force some structure on the original multigraph, which then shows that it has density at least 1/2.\n\n• Again consider the bipartite graph formulation. Call a vertex \"sparse\" if it has degree strictly less than n/2; if the bipartite graph has < n^2/2 edges, there will be at least one sparse vertex in each partite set. Note that every pair of sparse vertices in different partite sets must be adjacent; we can use this to show that there are fewer than n/2 sparse vertices in each partite set. Question: Can we now use a divide-and-conquer approach to get the desired lower bound? Dec 2, 2009 at 7:35\n\n[Edit: Originally, I gave the bound of $\\frac{5-\\sqrt{17}}{2}n^2$, but the remaining slack in the problem was easy to eliminate by considering the row with minimal sum. If my calculations are correct, this solves the problem. ]\n\nThis is a modification of Harrison Brown's argument.\n\nSuppose the total is less than $cn^2$. Let $r_1$ be a row with minimal sum $sn\\leq cn$. Let us say that $r_1$ has $tn$ non-zero entries. Of course, $t\\leq s$. Consider any other row $r$, which has $c_r n$ zeros at places that $r_1$ does not, and whose sum is $s_r n$. Then the total sum is at least $c_r(1-s_r)n^2+(1-s)(1-t)n^2$. On the other hand, the sum of $r$ restricted to columns where $r_1$ is non-zero is at least $(t-c_r) n$. Hence the total sum is at least $\\sum_r (t-c_r) n + (1-s)(1-t) n^2$. To summarize \\begin{align*} cn^2&\\geq (1-s)(1-t)n^2+\\max_r c_r(1-s_r)n^2\\\\\\\\ cn^2&\\geq (1-s)(1-t)n^2+\\sum_r (t-c_r)n\\\\\\\\ cn^2&\\geq \\sum_r s_r n \\end{align*} For a fixed value of $\\sum c_r$ and $T=\\max c_r(1-s_r)$ the minimum of $\\sum s_r$ is achieved when all but one $c_r$ are equal to $T/(1-s)$ or $c$ (that follows by optimizing a sum of two terms of the form $\\max(s,1-T/c_r)$ ). Let $p$ be the proportion of rows $r$ such that $c_r=c$. Then \\begin{align*} c&\\geq (1-c)(1-t)+T\\\\\\\\ c&\\geq (1-c)(1-t)+(1-p)(c-T)\\\\\\\\ c&\\geq p(1-T/c)+(1-p)s\\\\\\\\ t&\\leq s\\leq c \\end{align*} I claim that the minimum of $c$ under these constraints occurs when the first two of these inequalities are equalities. Indeed, if the first inequality is strict, increase $T$. If the second inequality is strict, decrease $p$. Moreover, if $s\\neq t$, we can decrease $s$. Thus \\begin{align*} c&= (1-c)(1-s)+T\\\\\\\\ c&= (1-c)(1-s)+(1-p)(c-T)\\\\\\\\ c&\\geq p(1-T/c)+(1-p)s\\\\\\\\ s&\\leq c \\end{align*} Eliminating $T$ we obtain \\begin{align*} c&=(2-p)(1-c)(1-s), \\end{align*} Solving the equation for $s$ and substituting into the inequality $s\\leq c$, we get (please check!) \\begin{align*} p(1-c)^2\\geq 2c^2-5c+2 \\end{align*} Substituting this inequality for $p$ into $c\\geq p(1-T/c)+(1-p)s$ and using the expressions for $s$ and $T$ that we have, we finally arrive (please double check!) at $$(1-2c)(2-c)(2c^3-6c^2+3c-1)\\geq 0$$. Since $2c^3-6c^2+3c-1$ is negative for $c<1$, the inequality $c\\geq 1/2$ follows.\n\n[Before the edit, the answer was concluded by the following sentence, to which the comment refers]. In general it is tempting to consider arbitrary sets of $k$ rows, and if the union of their zero sets is large, use that to say that the sum over corresponding columns is large.\n\n• Do you mean to write \"Consider any other row $r$, which has $c_rn$ zeros at places that $r_1$ does not\"? 
Dec 2, 2009 at 12:45\n• Hi Boris, Do you think it might be possible to abstract the method you mention in the last line and show that the coefficients obtained tend to 1/2? This would be enough by the argument I mention in my second edit. Dec 2, 2009 at 12:56\n• @Ben: Yes, I fixed it. @Harrison: I do not see how to do it. The resulting argument should be sharp for all the examples in which equality is attained (see Kevin's post), which imposes a lower bound on argument's complexity. Dec 2, 2009 at 13:58\n\nIt's worth noting, perhaps, that the diagonal matrix is not the unique extremal matrix. A checkerboard pattern of 1's and 0's also achieves $n^2/2$.\n\nThere are certain transformations that preserve the given properties and simplify the matrix. For example, we can transpose the matrix, we can exchange any pair of rows, we can exchange any pair of columns.\n\nThere's another transformation whose effect is more subtle. Suppose $m_{ij}\\ge2$. If any $m_{i'j'}\\ge2$ with $i'\\le i+1$ and $j'\\ge j+1$, then we can replace $m_{ij},m_{i',j'},m_{i',j},m_{i,j'}$ with $m_{ij}-1,m_{i'j'}-1,m_{i',j}+1,m_{i,j'}+1$, respectively. This preserves all of the row and column sums, and moves mass away from the lower-left and upper-right corners, and towards the diagonal. The only reason we need $m_{ij}\\ge1$ is that we have to be careful about creating 0's in the matrix.\n\nIt feels like this is only one or two WOLOGs away from saying that we can assume the matrix is diagonal, in which case the problem is easy (sum is $n^2/2$ or $(n^2+n-2)/2$ depending on the parity of $n$).\n\n• I suspect that this isn't quite enough. The reason is that if you frame the question in terms of bipartite graphs, this transformation says that you can shunt around pairs of parallel multiple edges, but it never changes the underlying adjacency structure. But it may be possible to very cleverly take advantage of the distinguished basis... Dec 2, 2009 at 7:12\n• When he replaces m_{i',j} by m_{i',j}+1, he may be connecting two vertices which were not connected before Dec 2, 2009 at 9:04\n• Yeah, you're right, which I noticed shortly after posting the comment. But this process can never make the adjacency structure more like what it is in the case where everything's along the diagonal. Dec 2, 2009 at 9:06\n\nThe odd case.\n\nCobbling together some posts that have already appeared, we get:\n\nClaim: For $n$ odd, the minimum is $\\frac{n^2 + 1}{2}$.\n\nPf: As Harrison Brown points out, since $\\frac{n^2}{2}$ is a lower bound for $n$ even, it is a lower bound for $n$ odd. Hence, $\\lceil\\frac{n^2}{2}\\rceil = \\frac{n^2 + 1}{2}$ is as well.\n\nThe checkerboard pattern (with ones in the corners) achieves this bound.\n\nWhat does the set of all extremal matrices look like?\n\n• I've updated my answer to cover the odd case too. Dec 4, 2009 at 16:19\n\nA partial result: Let $T$ be the total of the entries of $M$. Define a new matrix $M'$ by the formula $$M'_{jk} = nM_{jk} + \\sum_{\\ell \\ne j} M_{\\ell k} + \\sum_{\\ell \\ne k} M_{j \\ell} \\ge n.$$ Let $T'$ be the total of $M'$. Then $T' = (3n-2)T$ and $T' \\ge n^3,$ so $$T \\ge \\frac{n^3}{3n-2} > \\frac{n^2}3.$$ Edit: When I finally got the formulas right, it just looks like a weaker version of Harrison Brown's argument. Alas.\n\nIf there is an $n x n$ matrix of non-negative integers satisfying if any entry is zero then the sum of its row and column is $n$ and we want the smallest sum. 
We start by trying to construct the longest sequence of zeros whose column coordinates are disjoint and whose row coordinates are disjoint. Say this process stops at $m$. Then the remaining $(n-m) \\times (n-m)$ entries must each be 1 or more. The other entries are covered once by a row or column if they have one coordinate among the used rows or columns, or twice if they have two coordinates used in the process; thus we get $nm-m^{2}/2+m+(n-m)^{2}$ as a lower bound on the sum of all entries. Taking the derivative with respect to $m$ gives $n-m+1-2n+2m = m-n+1$, which is non-positive for $m\\leq n-1$, so the lower bound decreases as $m$ increases; thus $m=n$ gives a lower bound, but in that case we have $n$ zeros and every entry is covered twice, once for the first coordinate and once for the second, so the lower bound is $n^{2}/2$. For $n$ even this can be realized by a checkerboard pattern. For $n$ odd one half must be added, and that can be realized as an odd checkerboard pattern. Then the lowest sum for even $n$ is $n^{2}/2$ and for odd $n$ it is $(n^{2}+1)/2$. (A small brute-force check of these values for small $n$ is sketched after this record.)"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8665216,"math_prob":0.9991247,"size":2134,"snap":"2022-40-2023-06","text_gpt3_token_len":701,"char_repetition_ratio":0.116901405,"word_repetition_ratio":0.015831135,"special_character_ratio":0.34442362,"punctuation_ratio":0.09248555,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99993277,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-07T12:24:29Z\",\"WARC-Record-ID\":\"<urn:uuid:b9c747ac-1ab5-4abf-9298-5d75adff269b>\",\"Content-Length\":\"191414\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:20b71782-f4d1-49f8-af21-4bc392dd1e5e>\",\"WARC-Concurrent-To\":\"<urn:uuid:4ef504c2-c30c-446d-9f26-54c3e8b2203f>\",\"WARC-IP-Address\":\"151.101.129.69\",\"WARC-Target-URI\":\"https://mathoverflow.net/questions/7541/extremal-question-on-matrices\",\"WARC-Payload-Digest\":\"sha1:VC6QM67QDWJQ36T72M2COL73IWBQGAQN\",\"WARC-Block-Digest\":\"sha1:63D3USRHHQ3NUHOZ7264SBOIHQLASFTK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500456.61_warc_CC-MAIN-20230207102930-20230207132930-00702.warc.gz\"}"} |
https://en.wikibooks.org/wiki/Famous_Theorems_of_Mathematics/Number_Theory/Prime_Numbers | [
"# Famous Theorems of Mathematics/Number Theory/Prime Numbers\n\nThis page will contain proofs relating to prime numbers. Because the definitions are quite similar, proofs relating to irreducible numbers will also go on this page.\n\n## Definition of Prime\n\nA prime number p>1 is one whose only positive divisors are 1 and p.\n\n## Basic results\n\nTheorem: $p$",
null,
"is prime and $p|ab$",
null,
"implies that $p|a$",
null,
"or $p|b$",
null,
".\n\nProof: Let's assume that $p$",
null,
"is prime and $p|ab$",
null,
", and that $p\\nmid a$",
null,
". We must show that $p|b$",
null,
".\n\nLet's consider $\\gcd(p,a)$",
null,
". Because $p$",
null,
"is prime, this can equal $1$",
null,
"or $p$",
null,
". Since $p\\nmid a$",
null,
"we know that $\\gcd(p,a)=1$",
null,
".\n\nBy the gcd-identity, $\\gcd(p,a)=1=px+ay$",
null,
"for some $x,y\\in \\mathbb {Z}$",
null,
".\n\nWhen this is multiplied by $b$",
null,
"we arrive at $b=pbx+aby$",
null,
".\n\nBecause $p|p$",
null,
"and $p|ab$",
null,
"we know that $p|(pbx+aby)$",
null,
", and that $p|b$",
null,
", as desired."
] | [
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/81eac1e205430d1f40810df36a0edffdc367af36",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/d24c181efe89959b12a881d6f5d9b401f3fee0a7",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/3835e995f66f1f264568c96a28b66e54794d8062",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/88f46dd7f8a741d6ea4cbafbc226d9b04a08d114",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/81eac1e205430d1f40810df36a0edffdc367af36",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/d24c181efe89959b12a881d6f5d9b401f3fee0a7",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/1fe8b7007a7ec2ce0a16b94a7926be2bf727ceec",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/88f46dd7f8a741d6ea4cbafbc226d9b04a08d114",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/da0b232d3b36e9921c768959d89302d894624610",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/81eac1e205430d1f40810df36a0edffdc367af36",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/92d98b82a3778f043108d4e20960a9193df57cbf",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/81eac1e205430d1f40810df36a0edffdc367af36",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/1fe8b7007a7ec2ce0a16b94a7926be2bf727ceec",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/aa9f84154a03db324768305cb0bc3b357cb9a1e4",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/a42995d8f21b10a21e80c5279f961bb793a5c411",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/a44e973acb6b7bfec26f0c30ec34e8ebbe01c72a",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/f11423fbb2e967f986e36804a8ae4271734917c3",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/9008957820261b747da803b482686cab79ba36c2",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/acf0d9a6ce48fa2815499748eba230a9b79d4d26",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/d24c181efe89959b12a881d6f5d9b401f3fee0a7",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/744a0883a9c571c94f6daa064a1da8b79652ef89",
null,
"https://wikimedia.org/api/rest_v1/media/math/render/svg/88f46dd7f8a741d6ea4cbafbc226d9b04a08d114",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9215267,"math_prob":0.99999845,"size":683,"snap":"2019-51-2020-05","text_gpt3_token_len":158,"char_repetition_ratio":0.12960236,"word_repetition_ratio":0.0,"special_character_ratio":0.22547585,"punctuation_ratio":0.14492753,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000076,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44],"im_url_duplicate_count":[null,null,null,null,null,6,null,9,null,null,null,null,null,10,null,9,null,1,null,null,null,null,null,null,null,10,null,1,null,1,null,5,null,null,null,1,null,1,null,null,null,1,null,9,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-22T12:05:30Z\",\"WARC-Record-ID\":\"<urn:uuid:ef104b79-84fd-412d-a1c6-5cb8adee235d>\",\"Content-Length\":\"43817\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:be667479-f2fe-4fa1-bab1-7f4a360f7c6e>\",\"WARC-Concurrent-To\":\"<urn:uuid:6d4faf4e-64f1-4d06-afe6-2f0d409328a6>\",\"WARC-IP-Address\":\"208.80.154.224\",\"WARC-Target-URI\":\"https://en.wikibooks.org/wiki/Famous_Theorems_of_Mathematics/Number_Theory/Prime_Numbers\",\"WARC-Payload-Digest\":\"sha1:5NKZIGVHM5DRE7T42SB2T6AJQ32TXN5A\",\"WARC-Block-Digest\":\"sha1:3A6ZGBLR6UKMZ7QBBKI4JWGER4YX2ZQM\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579250606975.49_warc_CC-MAIN-20200122101729-20200122130729-00260.warc.gz\"}"} |
https://discuss.erpnext.com/t/set-value-of-a-field-to-be-always-positive-and-a-case-of-division-by-zero/52956 | [
"",
null,
"Set value of a field to be always positive and a case of division by zero\n\n#1\n\nI have 3 fields as below;\n\n1. overtime_total (field type: float)\n2. gross_total (field type: float) 0/-\n3. overtime_percent (field type: float)\n\nThe value of “overtime_percent” field is calculated as (overtime_total / gross_total * 100)\n\nI have written the following custom script to achieve this:\n\nfrappe.ui.form.on(\"Charges\", \"overtime_total\", function(frm) {\nfrm.set_value(\"overtime_percent\", flt(frm.doc.overtime_total) / flt(frm.doc.gross_total) * 100);\n});\n\nfrappe.ui.form.on(\"Charges\", \"gross_total\", function(frm) {\nfrm.set_value(\"overtime_percent\", flt(frm.doc.overtime_total) / flt(frm.doc.gross_total) * 100);\n});\n\nNow the problem arises for 2 cases as below:\n\nCase 1: When value of “gross_total” field is 0, then the value of “overtime_percent” is shown as INFINITY (obviously since its a divide by 0), but for this case I want to set the value of “overtime_percent” as simply 0 instead of INFINITY\n\nCase 2: When value of “gross_total” field is negative (-10, -25, etc.) then the value of “overtime_percent” is shown as a negative value, but I want to show the value of “overtime_percent” to always show as a positive number.\nFor eg.:\novertime_total: 50\ngross_total: -200\novertime_percent: -25 (this should be shown as 25 instead of -25)\n\nAny help is appreciated. Thanks",
null,
"#2\n\nfrm.set_value(“overtime_percent”, Math.abs (flt(frm.doc.overtime_total) / flt(frm.doc.gross_total) * 100));\nYou can use abs method to show positive number.\n\n#3\n\nTried this already but not working.\nAlso tried fabs() and fabsf() arguments but not working.\n\n#4\n\nso before setting value convert gross_total into positive number then set the value for overtime_percent.\nMath.abs() should have worked, what error did you get?\n\n#5\n\nThe field “gross_total” is shown on the form so it should be correctly shown as a negative figure. Only the “overtime_percent” field should be calculated and shown as positive.\n\nIf I convert the “gross_total” to positive before setting value of “overtime_percent” then it wrongly shows gross total field as a positive even if the user has put in a negative value there.\n\nTired using abs, fabs, fabsf, math.abs but the “overtime_percent” field does not calculate at all if any of these is used.\n\n#6\n\n@ROHAN_JAIN1\n\nAny further possible help on this?\n\n1. Set the value as positive only\n2. Division by 0 case\n\nThanks for your help till now. Really appreciate it",
null,
"#7\ncur_frm.set_value(\"overtime_precentage\",(flt(cur_frm.doc.gross_total) == 0) ? 0 :Math.abs( flt(cur_frm.doc.over_time) / flt(cur_frm.doc.gross_total) * 100));\n\nI hope now it will work according to your need.\n\n#8\n\nThis code works wonderfully! Fulfills both the cases.\n\nThanks a lot for your help",
null,
"",
null,
""
] | [
null,
"https://discuss.erpnext.com/uploads/assets/erpnext_discuss.png",
null,
"https://discuss.erpnext.com/images/emoji/twitter/slightly_smiling_face.png",
null,
"https://discuss.erpnext.com/images/emoji/twitter/+1.png",
null,
"https://discuss.erpnext.com/images/emoji/twitter/grin.png",
null,
"https://discuss.erpnext.com/images/emoji/twitter/pray.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.78889847,"math_prob":0.9623346,"size":1270,"snap":"2019-43-2019-47","text_gpt3_token_len":354,"char_repetition_ratio":0.20458136,"word_repetition_ratio":0.125,"special_character_ratio":0.28582677,"punctuation_ratio":0.19583334,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99745274,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,10,null,3,null,3,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-17T02:24:29Z\",\"WARC-Record-ID\":\"<urn:uuid:a054986b-d197-4600-b7a6-d189ba2ed2ac>\",\"Content-Length\":\"20128\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:1e2757f8-3fe1-4406-84b5-62ad6a4ba962>\",\"WARC-Concurrent-To\":\"<urn:uuid:4fd97222-7bb6-40a2-8368-4aedb1d062dc>\",\"WARC-IP-Address\":\"178.62.129.225\",\"WARC-Target-URI\":\"https://discuss.erpnext.com/t/set-value-of-a-field-to-be-always-positive-and-a-case-of-division-by-zero/52956\",\"WARC-Payload-Digest\":\"sha1:OIW5IMJJHKGRLVPI5F3D62BJWRTTNWWM\",\"WARC-Block-Digest\":\"sha1:45ATJLI5SXAIUAYRKBQRQDODDQHEK5Q7\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570986672548.33_warc_CC-MAIN-20191017022259-20191017045759-00500.warc.gz\"}"} |
https://www.mathlearnit.com/permutations-with-repetition.html | [
"# Permutations with Repetition\n\nA permutation is an arrangement of objects chosen from a certain number of choices.\n\nWith permutations order is important.\n\nFor example:\n\n1 , 2 , 3 is a different permutation from 3 , 2 , 1.\n\nThe objects/numbers are the same in each group, but the order is different.\n\n## Permutations with RepetitionGeneral Case\n\nIn general, when you have a number of r outcome choices,\n\nand you are choosing from that same number outcomes n times.\n\nThe number of permutations possible can be found by the following multiplication.\n\nr × r × r ×  ........ × r ( n times )\n\nThis multiplication conveniently turns to be the sum rn.\n\nWe can see some real world examples of finding different permutations with repetition in some examples below.\n\nExamples\n\n(1.1)",
null,
"A basic example we can look at is flipping a coin.\n\nFlipping a coin once has  2 possible outcomes, \"HEAD\"  or \"TAILS\".\n\nFlipping a coin a second time also has  2 possible outcomes, \"HEAD\"  or \"TAILS\".\n\nIn fact, each time the coin is flipped there are 2 possible results, \"HEAD\"  or \"TAILS\".\n\nSo, to find the number of different perutations possible from flipping a coin 4 times in a row:\n\n2 × 2 × 2 × 2 = 16 , 24 = 16\n\nTossing a coin 4 times in a row can result in 16 different permutations.\n\nTwo of which are:\n\nThe important part to note is that the sum really needed was just 24.\n\nWhich is the number of outcomes possible each time, to the power of the number of times we were trying for an outcome.\n\n(1.2)",
null,
"Another example we can look at, is establishing the number of possible different permutations that could be obtained from rolling a standard dice a number of times in a row.\n\nRolling a standard dice once has 6 possible outcomes.\n\n1 , 2 , 3 , 4 , 5 , 6\n\nRolling this dice again a second time, also has 6 possible outcomes.\n\n1 , 2 , 3 , 4 , 5 , 6\n\nThe different permutations possible from the rolling a dice  2 times is found by:\n\n6 × 6 = 36 , Which is 62.\n\nIf we were to roll the dice  5 times?\n\nRolling a dice 5 times in a row, can produce one of 65 possible permutations.\n\n65 = 7776\n\nThere can also be cases where we may want to select permutations, but without repetition.\nExamples of this can be seen here\n\n1. Home\n2. ›\n3. Statistics/Data\n4. › Permutations with Repetition"
] | [
null,
"https://www.mathlearnit.com/static/images/permutations-with-repetition-coin.png",
null,
"https://www.mathlearnit.com/static/images/dice-permutations.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.88475406,"math_prob":0.99234504,"size":2292,"snap":"2022-27-2022-33","text_gpt3_token_len":611,"char_repetition_ratio":0.15821679,"word_repetition_ratio":0.08783784,"special_character_ratio":0.2683246,"punctuation_ratio":0.14853556,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9974678,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-13T09:11:48Z\",\"WARC-Record-ID\":\"<urn:uuid:3bbb12f8-f2d3-4121-a1bd-e12b71e0c3f1>\",\"Content-Length\":\"22488\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:96cdd23b-df5d-4e38-8719-5074210877c7>\",\"WARC-Concurrent-To\":\"<urn:uuid:26b6fe31-da01-4d96-a37f-7d1c87bbfaea>\",\"WARC-IP-Address\":\"159.65.170.170\",\"WARC-Target-URI\":\"https://www.mathlearnit.com/permutations-with-repetition.html\",\"WARC-Payload-Digest\":\"sha1:HOD7ZZ36SBOOVVV2HADXK4XTFSULUM5I\",\"WARC-Block-Digest\":\"sha1:LBZPEC3LQSMKZKCOFITAX4KIMSE6WSUP\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882571911.5_warc_CC-MAIN-20220813081639-20220813111639-00517.warc.gz\"}"} |
https://www.esaral.com/category/class-9-maths-questions/ | [
"",
null,
"The value of 5√3-3+√12+2√75 on simplifying is :\n\n# Question : The value of 5√3-3+√12+2√75 on simplifying is :\n\n1. $5 \\sqrt{3}$\n2. $6 \\sqrt{3}$\n3. $\\sqrt{3}$\n4. $9 \\sqrt{3}$\nSolution:\nCorrect option is 1. $9 \\sqrt{3}$\nThe given expression is\n$5 \\sqrt{3}-3 \\sqrt{12}+2 \\sqrt{75}$\n$=5 \\sqrt{3}-6 \\sqrt{3}+10 \\sqrt{3}$\n$=9 \\sqrt{3}$\nTherefore, option 1 is correct.",
null,
"Simplify 3√3+10√3\n\n# Question : Simplify 3√3+10√3\n\n1. $13 \\sqrt{3}$\n2. $10 \\sqrt{3}$\n3. $12 \\sqrt{3}$\n4. $11 \\sqrt{3}$\nSolution:\nCorrect option is 1. $13 \\sqrt{3}$\n$3 \\sqrt{3}+10 \\sqrt{3}=13 \\sqrt{3}$",
null,
"5√3+2√3 = 7√6 enter 1 for true and 0 for false\n\n# Question : 5√3+2√3 = 7√6 enter 1 for true and 0 for false\n\nSolution :\n$5 \\sqrt{3}+2 \\sqrt{3}=7 \\sqrt{3}$\nTherefore: False.",
null,
"Find the nine rational numbers between 0 and 1\n\n# Question : Find the nine rational numbers between 0 and 1.\n\n1. $0.1,0.2,0.3, \\ldots, 0.9$\n2. $1.1,0.2,10.3, \\ldots, 0.9$\n3. $0.1,0.2,0.3, \\ldots, 20.9$\n4. $0.1,0.2,10.3, \\ldots, 0.9$\nSolution :\nCorrect option is 1. $0.1,0.2,10.3, \\ldots, 0.9$\n$0<(0+0.1)=0.1<(0.1+0.1)=0.2<(0.2+0.1)$\n$=0.3<\\ldots<(0.8+1)=0.9<(0.9+0.1)=1$\n$0<0.1<0.2<0.3<\\ldots<0.9<1$\n$\\therefore$ The nine rational numbers between 0 and 1 are $0.1,0.2,0.3, \\ldots, 0.9$",
null,
"Which of the following numbers are rational ?\n\n# Question : Which of the following numbers are rational ?\n\n1. $1$\n2. $-6$\n3. $3 \\frac{1}{2}$\n4. All above are rational\nSolution :\nThe correct option is 4. All above are rational\nNone of the number is irrational as every number can be expressed in the form of $\\frac{\\mathrm{p}}{\\mathrm{q}}$, where $\\mathrm{q} \\neq 0$.",
null,
"What are two rational numbers between $\\frac{1}{5}$ and $\\frac{4}{5}$ ?\n\n# Question : Two rational numbers between $\\frac{1}{5}$ and $\\frac{4}{5}$ are :\n\n1. 1 and $\\frac{3}{5}$\n2. $\\frac{2}{5}$ and $\\frac{3}{5}$\n3. $\\frac{1}{2}$ and $\\frac{2}{1}$\n4. $\\frac{3}{5}$ and $\\frac{6}{5}$\nSolution :\nThe correct option is 2. $\\frac{2}{5}$ and $\\frac{3}{5}$\nSince the denominator of both rational numbers are same. So, for getting the rational numbers between the given rational numbers, we only have to consider the numerators of the rational numbers.\nTwo numbers between 1 & 4 are 2 and 3.\nSo, two rational numbers between the given rational numbers will be $\\frac{2}{5}$ and $\\frac{3}{5}$\nSo, the correct answer is option 2.",
null,
"A rational number can always be written in a fraction $\\frac{a}{b}$, where a and $b$ are not integers $(b \\neq 0)$\n\n# Question : State True or False. A rational number can always be written in a fraction $\\frac{\\mathrm{a}}{\\mathrm{b}}$, where a and $\\mathrm{b}$ are not integers $(b \\neq 0)$.\n\n1. True\n2. False\nSolution :\nThe correct option is 2. False\nA number that can always be written in the form of $\\mathrm{p} / \\mathrm{q}$, where $\\mathrm{p}$ is any integer and $\\mathrm{q}$ is a non-zero integer, is a rational number.\nThe given statement is false.",
null,
"1/0 is not rational\n\n# Question : Say true or false. $\\frac{1}{0}$ is not rational.\n\n1. True\n2. False\nSolution :\nThe correct option is 1. True\nA rational number is a number that can be defined in the form of $\\frac{p}{q}$, where $q$ is nonzero.\nNow, if $\\mathrm{q}$ is 0 , although an integer, the solution will not be a rational number. It will give an undefined result, so the statement is true.",
null,
"Prove that root 5 is irrational number\n\n# Question : $\\sqrt{5}$ is an irrational number.\n\n1. True\n2. False\nSolution :\nThe correct option is 1. True\nAn irrational number is any real number that cannot be expressed as a ratio\n$\\mathrm{a} / \\mathrm{b}$, where a and $\\mathrm{b}$ are integers and $\\mathrm{b}$ is non-zero.\n$\\sqrt{5}$ is irrational as it can never be expressed in the form $\\mathrm{a} / \\mathrm{b}$",
null,
"What is the value of (6+√27)−(3+√3)+(1−2√3) when simplified ?\n\n# Quetion : The value of $(6+\\sqrt{27})-(3+\\sqrt{3})+(1-2 \\sqrt{3})$ when simplified is :\n\n1. positive and irrational\n2. negative and rational\n3. positive and rational\n4. negative and irrational\nSolution :\nThe correct option is 3. positive and rational\n\n$6+\\sqrt{27}-(3+\\sqrt{3})+(1-2 \\sqrt{3})=6+3 \\sqrt{3}-3-\\sqrt{3}+1-2 \\sqrt{3} = 4$\n\n4 is a positive rational number.\nHence, correct answer is option 3.",
null,
"Find the value of $\\left(3^{1}+4^{1}+5^{1}\\right)^{0}$\n\n# Question : Find the value of $\\left(3^{1}+4^{1}+5^{1}\\right)^{0}$.\n\nSolution :\nAny number with a power of zero is equal to one.",
null,
"What is the rationalizing factor of (a+√b)\n\n# Question : The rationalizing factor of $(\\mathrm{a}+\\sqrt{\\mathrm{b}})$ is\n\n1. $a-\\sqrt{b}$\n2. $\\sqrt{a}-b$\n3. $\\sqrt{a}-\\sqrt{b}$\n4. None of these\nSolution :\nCorrect option is 1. $a-\\sqrt{b}$\nThe rationalizing factor of $a + \\sqrt{b}$ is $a – \\sqrt{b}$ as the product of these two expressions give a rational number.",
null,
"The decimal expansion of π is\n\n# Question : The decimal expansion of π is :\n\n1. terminating\n2. non-terminating and non-recurring\n3. non-terminating and recurring\n4. doesn’t exist\nSolution :\nThe correct option is 2. non-terminating and non-recurring\nWe know that $\\pi$ is an irrational number and Irrational numbers have decimal\nexpansions that neither terminate nor become periodic.\nSo, correct answer is option 2.",
null,
"Between any two rational numbers, there are infinitely many rational numbers\n\n# Question : Between any two rational numbers\n\n1. there is no rational number\n2. there is exactly one rational number\n3. there are infinitely many rational numbers\n4. there are only rational numbers and no irrational numbers\nSolution :\nThe correct option is 3. there are infinitely many rational numbers\nRecall that to find a rational number between r and s, you can add\n$\\mathrm{r}$ and $\\mathrm{s}$ and divide the sum by 2 , that is $\\frac{\\mathrm{r}+\\mathrm{s}}{2}$ lies between $\\mathrm{r}$ and $\\mathrm{s}$.\nFor example, $\\frac{5}{2}$ is a number between 2 and 3\nWe can proceed in this manner to find many more rational numbers between 2 and 3.\nHence, we can conclude that there are infinitely many rational numbers between any two given rational numbers.",
null,
"Find any five rational numbers between -3/2 and 5/3\n\n# Question : State true or false :Five rational numbers between $\\frac{-3}{2}$ and $\\frac{5}{3}$ are $\\frac{-8}{6}, \\frac{-7}{6}, 0, \\frac{1}{6}, \\frac{2}{6}$\n\n1. True\n2. False\nSolution :\n\nThe Correct option is 1. True\n\nTo get the rational numbers between $\\frac{-3}{2}$ and $\\frac{5}{3}$\n\nTake an LCM of these two numbers: $\\frac{-9}{6}$ and $\\frac{10}{6}$\n\nAll the numbers between $\\frac{-9}{6}$ and $\\frac{10}{6}$ form the answer\n\nSome of these numbers are $\\frac{-8}{6}, \\frac{-7}{6}, 0, \\frac{1}{6}, \\frac{2}{6}$\n\nHence the statement is true.",
null,
"five rational numbers which are smaller than 2\n\n# Question : Following are the five rational numbers that are smaller than 2 $\\Rightarrow 1, \\frac{1}{2}, 0,-1, \\frac{-1}{2}$If true then enter 1 and if false then enter 0\n\nSolution :\nCorrect option is 1\nAny number in the form of $\\frac{p}{q}$ which is less than 2 will form the answer.\nSo given numbers are $1, \\frac{1}{2}, 0,-1, \\frac{-1}{2}$ rational number which are smaller than 2\nSo the statement is true.",
null,
"Find five rational numbers between:2/3 and 4/5\n\n# Question : State true or false:Five rational numbers between$\\frac{2}{3}$ and $\\frac{4}{5}$ are $\\frac{41}{60}, \\frac{42}{60}, \\frac{43}{60}, \\frac{44}{60}, \\frac{45}{60}$\n\n1. True\n2. False\nSolution :\n\nCorrect option is 1.True\n\nTo get the rational numbers between $\\frac{2}{3}$ and $\\frac{4}{5}$\n\nTake an LCM of these two numbers: $\\frac{10}{15}$ and $\\frac{12}{15}$\n\nMultiply numerator and denominator by $4: \\frac{40}{60}$ and $\\frac{48}{60}$\n\nAll the numbers between $\\frac{40}{60}$ and $\\frac{48}{60}$ form the answer\n\nSome of these numbers are $\\frac{41}{60}, \\frac{42}{60}, \\frac{43}{60}, \\frac{44}{60}, \\frac{45}{60}$\n\nHence the statement is true.",
null,
"Every rational number is a real number\n\n# Question : Every rational number is\n\n1. A natural number\n2. An integer\n3. A real number\n4. A whole number\nSolution :\nThe correct option is 3. A real number\nReal number is a value that represents a quantity along the number line.\nReal number includes all rational and irrational numbers.\nRational numbers are numbers that can be represented in the form $\\frac{p}{q}$ where,\n$\\mathrm{q} \\neq 0$ and $\\mathrm{p}, \\mathrm{q}$ are integers.\nTherefore, a rational number is a subset of a real number.\nWe know that rational and irrational numbers taken together are known as real numbers. Therefore, every real number is either a rational number or an irrational number. Hence, every rational number is a real number. Therefore, (3) is the correct answer.",
null,
"A number is an irrational if and only if its decimal representation is\n\n# Question : A number is irrational if and only if its decimal representation is :\n\n1. Non-terminating\n2. Non-terminating and repeating\n3. Non-terminating and non-repeating\n4. Terminating\nSolution :\nThe correct option is 3. non-terminating and non-repeating\nAccording to the definition of an irrational number, If written in decimal notation,\nan irrational number would have an infinite number of digits to the right\nof the decimal point, without repetition.\nHence, a number having non terminating and non-repeating decimal\nrepresentation is an irrational number.\nSo, option 3. is correct.",
null,
"There are numbers which cannot be written in the form p/q , where q ≠ 0 and both p, q are integers\n\n# Question : State true or false:There are numbers which cannot be written in the form p/q , where q ≠ 0 and both pq are integers.\n\n1. True\n2. False\nSolution :\nThe correct option is 1. True\nThe statement is true as there are Irrational numbers which don’t satisfy the\ncondition of rational numbers i.e irrational numbers cannot be written in the\nform of $_{\\mathrm{q}}^{\\mathrm{p}}$ , $\\mathrm{q} \\neq 0$, where $\\mathrm{p}, \\mathrm{q}$ are integers.\nExample,\n$\\sqrt{3}, \\sqrt{99}$"
] | [
null,
"data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSI4NTAiIGhlaWdodD0iMzkwIiB2aWV3Qm94PSIwIDAgODUwIDM5MCI+PHJlY3Qgd2lkdGg9IjEwMCUiIGhlaWdodD0iMTAwJSIgZmlsbD0iI2NmZDRkYiIvPjwvc3ZnPg==",
null,
"data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSI4NTAiIGhlaWdodD0iMzkwIiB2aWV3Qm94PSIwIDAgODUwIDM5MCI+PHJlY3Qgd2lkdGg9IjEwMCUiIGhlaWdodD0iMTAwJSIgZmlsbD0iI2NmZDRkYiIvPjwvc3ZnPg==",
null,
"data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSI4NTAiIGhlaWdodD0iMzkwIiB2aWV3Qm94PSIwIDAgODUwIDM5MCI+PHJlY3Qgd2lkdGg9IjEwMCUiIGhlaWdodD0iMTAwJSIgZmlsbD0iI2NmZDRkYiIvPjwvc3ZnPg==",
null,
"data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSI4NTAiIGhlaWdodD0iMzkwIiB2aWV3Qm94PSIwIDAgODUwIDM5MCI+PHJlY3Qgd2lkdGg9IjEwMCUiIGhlaWdodD0iMTAwJSIgZmlsbD0iI2NmZDRkYiIvPjwvc3ZnPg==",
null,
"data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSI4NTAiIGhlaWdodD0iMzkwIiB2aWV3Qm94PSIwIDAgODUwIDM5MCI+PHJlY3Qgd2lkdGg9IjEwMCUiIGhlaWdodD0iMTAwJSIgZmlsbD0iI2NmZDRkYiIvPjwvc3ZnPg==",
null,
"data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSI4NTAiIGhlaWdodD0iMzkwIiB2aWV3Qm94PSIwIDAgODUwIDM5MCI+PHJlY3Qgd2lkdGg9IjEwMCUiIGhlaWdodD0iMTAwJSIgZmlsbD0iI2NmZDRkYiIvPjwvc3ZnPg==",
null,
"data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSI4NTAiIGhlaWdodD0iMzkwIiB2aWV3Qm94PSIwIDAgODUwIDM5MCI+PHJlY3Qgd2lkdGg9IjEwMCUiIGhlaWdodD0iMTAwJSIgZmlsbD0iI2NmZDRkYiIvPjwvc3ZnPg==",
null,
"data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSI4NTAiIGhlaWdodD0iMzkwIiB2aWV3Qm94PSIwIDAgODUwIDM5MCI+PHJlY3Qgd2lkdGg9IjEwMCUiIGhlaWdodD0iMTAwJSIgZmlsbD0iI2NmZDRkYiIvPjwvc3ZnPg==",
null,
"data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSI4NTAiIGhlaWdodD0iMzkwIiB2aWV3Qm94PSIwIDAgODUwIDM5MCI+PHJlY3Qgd2lkdGg9IjEwMCUiIGhlaWdodD0iMTAwJSIgZmlsbD0iI2NmZDRkYiIvPjwvc3ZnPg==",
null,
"data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSI4NTAiIGhlaWdodD0iMzkwIiB2aWV3Qm94PSIwIDAgODUwIDM5MCI+PHJlY3Qgd2lkdGg9IjEwMCUiIGhlaWdodD0iMTAwJSIgZmlsbD0iI2NmZDRkYiIvPjwvc3ZnPg==",
null,
"data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSI4NTAiIGhlaWdodD0iMzkwIiB2aWV3Qm94PSIwIDAgODUwIDM5MCI+PHJlY3Qgd2lkdGg9IjEwMCUiIGhlaWdodD0iMTAwJSIgZmlsbD0iI2NmZDRkYiIvPjwvc3ZnPg==",
null,
"data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSI4NTAiIGhlaWdodD0iMzkwIiB2aWV3Qm94PSIwIDAgODUwIDM5MCI+PHJlY3Qgd2lkdGg9IjEwMCUiIGhlaWdodD0iMTAwJSIgZmlsbD0iI2NmZDRkYiIvPjwvc3ZnPg==",
null,
"data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSI4NTAiIGhlaWdodD0iMzkwIiB2aWV3Qm94PSIwIDAgODUwIDM5MCI+PHJlY3Qgd2lkdGg9IjEwMCUiIGhlaWdodD0iMTAwJSIgZmlsbD0iI2NmZDRkYiIvPjwvc3ZnPg==",
null,
"data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSI4NTAiIGhlaWdodD0iMzkwIiB2aWV3Qm94PSIwIDAgODUwIDM5MCI+PHJlY3Qgd2lkdGg9IjEwMCUiIGhlaWdodD0iMTAwJSIgZmlsbD0iI2NmZDRkYiIvPjwvc3ZnPg==",
null,
"data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSI4NTAiIGhlaWdodD0iMzkwIiB2aWV3Qm94PSIwIDAgODUwIDM5MCI+PHJlY3Qgd2lkdGg9IjEwMCUiIGhlaWdodD0iMTAwJSIgZmlsbD0iI2NmZDRkYiIvPjwvc3ZnPg==",
null,
"data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSI4NTAiIGhlaWdodD0iMzkwIiB2aWV3Qm94PSIwIDAgODUwIDM5MCI+PHJlY3Qgd2lkdGg9IjEwMCUiIGhlaWdodD0iMTAwJSIgZmlsbD0iI2NmZDRkYiIvPjwvc3ZnPg==",
null,
"data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSI4NTAiIGhlaWdodD0iMzkwIiB2aWV3Qm94PSIwIDAgODUwIDM5MCI+PHJlY3Qgd2lkdGg9IjEwMCUiIGhlaWdodD0iMTAwJSIgZmlsbD0iI2NmZDRkYiIvPjwvc3ZnPg==",
null,
"data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSI4NTAiIGhlaWdodD0iMzkwIiB2aWV3Qm94PSIwIDAgODUwIDM5MCI+PHJlY3Qgd2lkdGg9IjEwMCUiIGhlaWdodD0iMTAwJSIgZmlsbD0iI2NmZDRkYiIvPjwvc3ZnPg==",
null,
"data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSI4NTAiIGhlaWdodD0iMzkwIiB2aWV3Qm94PSIwIDAgODUwIDM5MCI+PHJlY3Qgd2lkdGg9IjEwMCUiIGhlaWdodD0iMTAwJSIgZmlsbD0iI2NmZDRkYiIvPjwvc3ZnPg==",
null,
"data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSI4NTAiIGhlaWdodD0iMzkwIiB2aWV3Qm94PSIwIDAgODUwIDM5MCI+PHJlY3Qgd2lkdGg9IjEwMCUiIGhlaWdodD0iMTAwJSIgZmlsbD0iI2NmZDRkYiIvPjwvc3ZnPg==",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.74474955,"math_prob":0.9999863,"size":2300,"snap":"2022-05-2022-21","text_gpt3_token_len":605,"char_repetition_ratio":0.2195122,"word_repetition_ratio":0.9291785,"special_character_ratio":0.23,"punctuation_ratio":0.005524862,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9999881,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-28T09:57:14Z\",\"WARC-Record-ID\":\"<urn:uuid:54447a48-65fb-4c90-b4d2-ca30837775ab>\",\"Content-Length\":\"277463\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bbdf78fa-d0cc-4907-bdfb-e60cd881e262>\",\"WARC-Concurrent-To\":\"<urn:uuid:55b70770-228e-4af5-8289-1736cfc2fd7f>\",\"WARC-IP-Address\":\"104.21.61.187\",\"WARC-Target-URI\":\"https://www.esaral.com/category/class-9-maths-questions/\",\"WARC-Payload-Digest\":\"sha1:QPL4SDJ7WHONWIQUDC2DVCGOVUYRCXZB\",\"WARC-Block-Digest\":\"sha1:6MFIS2BZYFRAOXFEMIBW6OL5677IDBKL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652663016373.86_warc_CC-MAIN-20220528093113-20220528123113-00179.warc.gz\"}"} |
http://www.lighthouse3d.com/tutorials/glsl-tutorial/data-types/ | [
"Prev: Cleaning Up Next: Statements and Functions\n\nThe following simple data types are available in GLSL:\n\n• float\n• double\n• bool\n• int\n• uint\n\nThese all behave like regular C types, apart from bool.\n\nVectors with 2,3 or 4 components are also available for each of the simple data types mentioned above. These are declared as:\n\n• vec{2,3,4} a vector of 2, 3, or 4, floats\n• dvec{2,3,4} vector of doubles\n• bvec{2,3,4} bool vector\n• ivec{2,3,4} vector of integers\n• uvec{2,3,4} vector of unsigned integers\n\nSquare matrices 2×2, 3×3 and 4×4 are provided, for floats and doubles, since they are heavily used in graphics. The respective data types are:\n\n• mat2, dmat2\n• mat3, dmat3\n• mat4, dmat4\n\nThere are also non-square matrices, also for float and doubles which have the generic form:\n\n• mat{2,3,4}x{2,3,4}\n• dmat{2,3,4}x{2,3,4}\n\nIf columns and rows, first and second number respectively, are equal then these are equivalent to the previous definition of square matrices.\n\nA set of special types are available for texture access. These are called samplers and are required to access texture values, also known as texels.\n\nSome of the most common data types for texture sampling are:\n\n• sampler1D – for 1D textures\n• sampler2D – for 2D textures\n• sampler3D – for 3D textures\n• samplerCube – for cube map textures\n\nAtomic counters are a new feature in OpenGL 4 hardware. Have a look at the Atomic Counter Tutorial for more details on this.\n\nIn GLSL, arrays can be declared using the same syntax as in C. However, arrays can’t be initialized when declared. Accessing array elements is done as in C.\n\nStructures are also allowed in GLSL. The syntax is the same as C.\n\n```struct dirlight {\n\nvec3 direction;\nvec3 color;\n};```\n\nVariables\n\nDeclaring a simple variable is pretty much the same as in C, we can even initialize a variable when declaring it.\n\n```float a,b;\t\t// two vector (yes, the comments are like in C)\nint c = 2;\t\t// c is initialized with 2\nbool d = true;\t// d is true```\n\nDeclaring the other types of variables follows the same pattern. GLSL relies heavily on constructor for initialization and type casting. However, it adopts a relaxed policy regarding implicit conversion. A type can be implicitly converted to a more general type, for instance an int to a float.\n\n```float b = 2;\t\t// implicit conversion\nint a = 2;\nfloat c = float(a); // also correct. c is 2.0\n\nvec3 f;\t\t// declaring f as a vec3\nvec3 g = vec3(1.0,2.0,3.0); // declaring and initializing g```\n\nGLSL is pretty flexible when initializing variables using other variables. All that it requires is that you provide the necessary number of components. Look at the following examples.\n\n```vec2 a = vec2(1.0,2.0);\nvec2 b = vec2(3.0,4.0);\n\nvec4 c = vec4(a,b) // c = vec4(1.0,2.0,3.0,4.0);\n\nvec2 g = vec2(1.0,2.0);\n\nfloat h = 3.0;\n\nvec3 j = vec3(g,h);```\n\nMatrices also follow this pattern. You have a wide variety of constructors for matrices. 
For instance, the following constructors for initializing a matrix are available:\n\n```mat4 m = mat4(1.0); // initializing the diagonal of the matrix with 1.0\n\nvec2 a = vec2(1.0,2.0);\nvec2 b = vec2(3.0,4.0);\n\nmat2 n = mat2(a,b); // matrices are assigned in column major order\n\nmat2 k = mat2(1.0,0.0,1.0,0.0); // all elements are specified```\n\nThe declaration and initialization of structures are demonstrated below:\n\n```struct dirlight {\t\t// type definition\nvec3 direction;\nvec3 color;\n};\n\ndirlight d1;\n\ndirlight d2 = dirlight(vec3(1.0,1.0,0.0),vec3(0.8,0.8,0.4));```\n\nIn GLSL a few extras are provided to simplify our lives, and make the code a little bit clearer. Accessing a vector can be done using letters as well as standard C selectors.\n\n```vec4 a = vec4(1.0,2.0,3.0,4.0);\n\nfloat posX = a.x;\nfloat posY = a[1];\n\nvec2 posXY = a.xy;\n\nfloat depth = a.w;```\n\nAs shown in the previous code snippet, it is possible to use the letters x,y,z,w to access a vector's components. If you’re talking about colors then r,g,b,a can be used. For texture coordinates the available selectors are s,t,p,q. Notice that, by convention, texture coordinates are often referred to as s,t,r,q. However r is already being used as a selector for “red” in RGBA. Hence there was a need to find a different letter, and the lucky one was p.\n\nMatrix selectors can take one or two arguments, for instance m[0], or m[2][3]. In the first case the first column is selected, whereas in the second a single element is selected.\n\nAs for structures, the names of the elements of the structure can be used as in C, so assuming the structures described above the following line of code could be written:\n\n`d1.direction = vec3(1.0,1.0,1.0);`\n\nConstant Qualifier\n\nA declaration of a local variable can also use the `const` keyword, as in\n\n```const float gravity = 9.8;\n```\n\n Prev: Cleaning Up Next: Statements and Functions"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.84492636,"math_prob":0.9956113,"size":4743,"snap":"2019-51-2020-05","text_gpt3_token_len":1333,"char_repetition_ratio":0.10993881,"word_repetition_ratio":0.01972873,"special_character_ratio":0.285895,"punctuation_ratio":0.19069344,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98803383,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-01-28T04:07:44Z\",\"WARC-Record-ID\":\"<urn:uuid:7e5051bb-891b-4f92-89e7-77ddfa42062b>\",\"Content-Length\":\"74902\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b72fbadf-c2d2-4d14-9a0e-eb2f471e7da9>\",\"WARC-Concurrent-To\":\"<urn:uuid:954c2e24-2294-42a8-8edb-9cf994ceeacd>\",\"WARC-IP-Address\":\"80.172.227.16\",\"WARC-Target-URI\":\"http://www.lighthouse3d.com/tutorials/glsl-tutorial/data-types/\",\"WARC-Payload-Digest\":\"sha1:OBMVA77QHULH4RDRJPYBDSUB3WGQBKOV\",\"WARC-Block-Digest\":\"sha1:S5QORZ7G4ZXDPJ5DP7RF65DLMN6VRYNV\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-05/CC-MAIN-2020-05_segments_1579251773463.72_warc_CC-MAIN-20200128030221-20200128060221-00004.warc.gz\"}"} |
https://opencores.org/websvn/filedetails?repname=spacewire_light&path=%2Fspacewire_light%2Ftrunk%2Fsyn%2Fstreamtest_gr-xc3s1500%2Fstreamtest_top.vhd&rev=3 | [
"",
null,
"URL https://opencores.org/ocsvn/spacewire_light/spacewire_light/trunk\n\n# Subversion Repositoriesspacewire_light\n\n## [/] [spacewire_light/] [trunk/] [syn/] [streamtest_gr-xc3s1500/] [streamtest_top.vhd] - Rev 3\n\n```--\n-- Test of spwstream on Pender GR-XC3S-1500 board.\n-- 60 MHz system clock; 200 MHz receive clock and transmit clock.\n--\n-- LED 0 = link run\n-- LED 1 = link error (sticky until clear button)\n-- LED 2 = gotdata\n-- LED 3 = data/timecode error (sticky until reset)\n--\n-- Button S2 = reset\n-- Button S3 = clear LED 1\n--\n-- Switch 0 = link start\n-- Switch 1 = link disable\n-- Switch 2 = send data\n-- Switch 3 = send time codes\n-- Switch 4-7 = bits 0-3 of tx bit rate scale factor\n--\n-- SpaceWire signals on expansion connector J12:\n-- Data In pos,neg = m1,m2 = pin 3,2\n-- Strobe In pos,neg = m3,m4 = pin 6,5\n-- Data Out pos,neg = n1,n2 = pin 9,8\n-- Strobe Out pos,neg = n3,n4 = pin 12,11\n--\n-- To get proper LVDS signals from connector J12, the voltage on I/O bank 6\n-- must be set to 2.5V. This is the default on GR-XC3S-1500-rev2, but on\n-- GR-XC3S-1500-rev1 a change is required on the board (described in\n-- the board manual).\n--\n-- To terminate the incoming LVDS signals, 100 Ohm termination resistors\n-- must be installed on the board in positions R120 and R121.\n--\n-- The SpaceWire port should be looped back to itself, either directly\n-- or via an other SpaceWire device. For a direct loopback, place 4 wires\n-- from the output pins to the corresponding input pins. For an indirect\n-- loopback, connect the SpaceWire signals to an additional SpaceWire device\n-- which is programmed to echo everything it receives (characters, packets,\n-- time codes). See the datasheet for a wiring diagram from J12 to MDM9.\n--\n\nlibrary ieee;\nuse ieee.std_logic_1164.all, ieee.numeric_std.all;\nlibrary unisim;\nuse unisim.vcomponents.all;\nuse work.spwpkg.all;\n\nentity streamtest_top is\n\nport (\nclk: in std_logic;\nbtn_reset: in std_logic;\nbtn_clear: in std_logic;\nswitch: in std_logic_vector(7 downto 0);\nled: out std_logic_vector(3 downto 0);\nspw_rxdp: in std_logic;\nspw_rxdn: in std_logic;\nspw_rxsp: in std_logic;\nspw_rxsn: in std_logic;\nspw_txdp: out std_logic;\nspw_txdn: out std_logic;\nspw_txsp: out std_logic;\nspw_txsn: out std_logic );\n\nend entity streamtest_top;\n\narchitecture streamtest_top_arch of streamtest_top is\n\n-- Clock generation.\nsignal boardclk: std_logic;\nsignal sysclk: std_logic;\nsignal fastclk: std_logic;\n\n-- Synchronize buttons\nsignal s_resetbtn: std_logic := '0';\nsignal s_clearbtn: std_logic := '0';\n\n-- Sticky LED\n\n-- Interface signals.\nsignal s_rst: std_logic := '1';\nsignal s_autostart: std_logic := '0';\nsignal s_senddata: std_logic := '0';\nsignal s_sendtick: std_logic := '0';\nsignal s_txdivcnt: std_logic_vector(7 downto 0) := \"00000000\";\nsignal s_gotdata: std_logic;\nsignal s_dataerror: std_logic;\nsignal s_tickerror: std_logic;\nsignal s_spwdi: std_logic;\nsignal s_spwsi: std_logic;\nsignal s_spwdo: std_logic;\nsignal s_spwso: std_logic;\n\n-- Make clock nets visible to UCF file.\nattribute KEEP: string;\nattribute KEEP of sysclk: signal is \"SOFT\";\nattribute KEEP of fastclk: signal is \"SOFT\";\n\ncomponent streamtest is\ngeneric (\nsysfreq: real;\ntxclkfreq: real;\ntickdiv: integer range 12 to 24 := 20;\nrximpl: spw_implementation_type := impl_generic;\nrxchunk: integer range 1 to 4 := 1;\ntximpl: spw_implementation_type := impl_generic;\nrxfifosize_bits: integer range 6 to 14 := 11;\ntxfifosize_bits: integer 
range 2 to 14 := 11 );\nport (\nclk: in std_logic;\nrxclk: in std_logic;\ntxclk: in std_logic;\nrst: in std_logic;\nautostart: in std_logic;\nsenddata: in std_logic;\nsendtick: in std_logic;\ntxdivcnt: in std_logic_vector(7 downto 0);\ngotdata: out std_logic;\ndataerror: out std_logic;\ntickerror: out std_logic;\nspw_di: in std_logic;\nspw_si: in std_logic;\nspw_do: out std_logic;\nspw_so: out std_logic );\nend component;\n\nbegin\n\n-- Buffer incoming clock.\nbufg0: BUFG port map ( I => clk, O => boardclk );\n\n-- Generate 60 MHz system clock.\ndcm0: DCM\ngeneric map (\nCLKFX_DIVIDE => 5,\nCLKFX_MULTIPLY => 6,\nCLK_FEEDBACK => \"NONE\",\nCLKIN_DIVIDE_BY_2 => false,\nCLKIN_PERIOD => 20.0,\nCLKOUT_PHASE_SHIFT => \"NONE\",\nDFS_FREQUENCY_MODE => \"LOW\",\nDUTY_CYCLE_CORRECTION => true,\nSTARTUP_WAIT => true )\nport map (\nCLKIN => boardclk,\nRST => '0',\nCLKFX => sysclk );\n\n-- Generate 200 MHz fast clock.\ndcm1: DCM\ngeneric map (\nCLKFX_DIVIDE => 1,\nCLKFX_MULTIPLY => 4,\nCLK_FEEDBACK => \"NONE\",\nCLKIN_DIVIDE_BY_2 => false,\nCLKIN_PERIOD => 20.0,\nCLKOUT_PHASE_SHIFT => \"NONE\",\nDFS_FREQUENCY_MODE => \"LOW\",\nDUTY_CYCLE_CORRECTION => true,\nSTARTUP_WAIT => true )\nport map (\nCLKIN => boardclk,\nRST => '0',\nCLKFX => fastclk );\n\n-- Streamtest instance\nstreamtest_inst: streamtest\ngeneric map (\nsysfreq => 60.0e6,\ntxclkfreq => 200.0e6,\ntickdiv => 22,\nrximpl => impl_fast,\nrxchunk => 4,\ntximpl => impl_fast,\nrxfifosize_bits => 11,\ntxfifosize_bits => 10 )\nport map (\nclk => sysclk,\nrxclk => fastclk,\ntxclk => fastclk,\nrst => s_rst,\nautostart => s_autostart,\nsenddata => s_senddata,\nsendtick => s_sendtick,\ntxdivcnt => s_txdivcnt,\ngotdata => s_gotdata,\ndataerror => s_dataerror,\ntickerror => s_tickerror,\nspw_di => s_spwdi,\nspw_si => s_spwsi,\nspw_do => s_spwdo,\nspw_so => s_spwso );\n\n-- LVDS buffers\ngeneric map ( IOSTANDARD => \"LVDS_25\" )\nport map ( O => s_spwdi, I => spw_rxdp, IB => spw_rxdn );\ngeneric map ( IOSTANDARD => \"LVDS_25\" )\nport map ( O => s_spwsi, I => spw_rxsp, IB => spw_rxsn );\ngeneric map ( IOSTANDARD => \"LVDS_25\" )\nport map ( O => spw_txdp, OB => spw_txdn, I => s_spwdo );\ngeneric map ( IOSTANDARD => \"LVDS_25\" )\nport map ( O => spw_txsp, OB => spw_txsn, I => s_spwso );\n\nprocess (sysclk) is\nbegin\nif rising_edge(sysclk) then\n\n-- Synchronize buttons\ns_resetbtn <= btn_reset;\ns_rst <= s_resetbtn;\ns_clearbtn <= btn_clear;\n\n-- Synchronize switch settings\ns_autostart <= '0';\ns_senddata <= switch(2);\ns_sendtick <= switch(3);\ns_txdivcnt(3 downto 0) <= switch(7 downto 4);\n\n(not s_clearbtn) and\n(not s_resetbtn);\n\n-- Drive LEDs (inverted logic)"
] | [
null,
"https://cdn.opencores.org/design/corner.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.56328225,"math_prob":0.9056461,"size":7102,"snap":"2020-24-2020-29","text_gpt3_token_len":2179,"char_repetition_ratio":0.1811778,"word_repetition_ratio":0.11898017,"special_character_ratio":0.32061392,"punctuation_ratio":0.25289255,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9832364,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-07-05T01:12:40Z\",\"WARC-Record-ID\":\"<urn:uuid:0d060766-eb66-453b-a4f3-66faba889b17>\",\"Content-Length\":\"112988\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:3f28d291-b029-449f-a2ac-10bc0b067825>\",\"WARC-Concurrent-To\":\"<urn:uuid:12323e1a-4bf3-475c-8f2d-9ea275245d69>\",\"WARC-IP-Address\":\"37.97.232.80\",\"WARC-Target-URI\":\"https://opencores.org/websvn/filedetails?repname=spacewire_light&path=%2Fspacewire_light%2Ftrunk%2Fsyn%2Fstreamtest_gr-xc3s1500%2Fstreamtest_top.vhd&rev=3\",\"WARC-Payload-Digest\":\"sha1:BK4RLJQG25PRR5MJOE6FNBGAT5R6PRDZ\",\"WARC-Block-Digest\":\"sha1:U4PCWEJINA2DKY3RIJV5IQXVKSOAPWMQ\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-29/CC-MAIN-2020-29_segments_1593655886802.13_warc_CC-MAIN-20200704232817-20200705022817-00427.warc.gz\"}"} |
https://flylib.com/books/en/1.384.1.65/1/ | [
"# Displaying Calculations with AutoCalculate\n\nYou can simplify your work using a feature called AutoCalculate when you don't want to insert a formula, but you want to see the results of a simple calculation quickly. Auto-Calculate automatically displays the sum, average, maximum, minimum, or count of the selected values on the status bar. Auto-Calculate results do not appear on the worksheet when printed but are useful for giving you quick answers while you work.\n\n#### Calculate a Range Automatically",
null,
"Select the range of cells you want to calculate.\n\nThe sum of the selected cells appears on the status bar next to SUM=.",
null,
"If you want to change the type of calculation AutoCalculate performs , right-click anywhere on the status bar to open the AutoCalculate sub menu.",
null,
"Click the type of calculation you want.",
null,
"",
null,
"Show Me Microsoft Office Excel 2003\nISBN: 0789730057\nEAN: 2147483647\nYear: 2002\nPages: 291"
] | [
null,
"https://flylib.com/books/1/384/1/html/2/files/one_icon.jpg",
null,
"https://flylib.com/books/1/384/1/html/2/files/tow_icon.jpg",
null,
"https://flylib.com/books/1/384/1/html/2/files/three_icon.jpg",
null,
"https://flylib.com/books/1/384/1/html/2/files/03inf18.jpg",
null,
"https://flylib.com/icons/6209-small.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.84516877,"math_prob":0.83187944,"size":455,"snap":"2021-31-2021-39","text_gpt3_token_len":89,"char_repetition_ratio":0.13747229,"word_repetition_ratio":0.0,"special_character_ratio":0.18681319,"punctuation_ratio":0.09638554,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9738078,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,null,null,null,null,2,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-07-26T14:37:07Z\",\"WARC-Record-ID\":\"<urn:uuid:7a662ff2-0915-4c31-b22a-2a3e31fd3612>\",\"Content-Length\":\"18233\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5c67af36-2bee-4db4-8ef9-602c85873a70>\",\"WARC-Concurrent-To\":\"<urn:uuid:60c8175b-c290-4258-a205-9b5951515278>\",\"WARC-IP-Address\":\"179.43.157.53\",\"WARC-Target-URI\":\"https://flylib.com/books/en/1.384.1.65/1/\",\"WARC-Payload-Digest\":\"sha1:KUBPJZOBALOKREI5VKDCO5DYJPQFQJH7\",\"WARC-Block-Digest\":\"sha1:RH7OIMCLHGVZZ65QWLWU6OMNNDJJYX3E\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-31/CC-MAIN-2021-31_segments_1627046152129.33_warc_CC-MAIN-20210726120442-20210726150442-00206.warc.gz\"}"} |
http://homepage.divms.uiowa.edu/~jones/assem/spring14/hw/09.shtml | [
"# Assignment 9, due Apr 4\n\nOn every assignment, write your name legibly as it appears on your University ID card! Homework is due on paper at the start of class on the day indicated (usually Friday). Exceptions will be made only by advance arrangement (excepting \"acts of God\"). Late work must be turned in to the TA's mailbox (ask the CS receptionist in 14 MLH for help). Never push homework under someone's door!\n\n1. A small problem: Suppose that R3 and R4 contains the 64-bit value a and R5 and R6 contain another 64-bit value b. (the least significant halves are in R3 and R5). Write Hawk code to perform this assignment:\n``` a = a + b;\n```\n\nThis only takes a few instructions. (0.5 points)\n\n2. Background: Suppose that R3 contains a 2's complement value and you perform the following:\n``` SR R3,4\n```\n\nYou might say you have just divided R3 by 16, and for many purposes, you did. On the other hand, the result may not conform to your intuitive expectation for division by 16.\n\na) Compare the result of computing (-20)/16 following the rules you learned in elementary school with the result you get with the above code. (0.2 points)\n\nb) Explain the difference. (Hint, it has to do with the remainder after division.) (0.3 points)\n\n3. Background: Consider this little subroutine, packaged as a file called sub.a that is designed to be assembled separately from its caller:\n```\tTITLE \"sub.a, a subroutine\"\nINT SUB\nSUB: ; returns R3 = sum i from i=min to max (constant bounds)\n; uses R4 = i\nLIS R4,MIN ; i = min\nLIS R3,0 ; sum = 0\nLOOP:\nCMPI R4,MAX\nBGT QUIT ; while (i <= max) {\n; ----- cut here -----\nADD R3,R3,R4 ; sum = sum + i\nBR LOOP\nQUIT: ; }\nJUMPS R1\nMIN\t=\t5\nMAX\t=\t10\n```\n\nA problem: Divide this subroutine into two separately assembled files, one containing all the code above the line saying \"cut here\" and one containing all the code below that line. The first half should continue to be stored in the file sub.a while the second half goes in a new file, sub2.a. Your goal is to make the bare minimum changes necessitated by this division, while retaining as much as possible of the original logic and structure of the program. That is, MIN and MAX should still be defined in the same place, which is now in sub2, and the return from the subroutine is now from near the end of sub2.a even though the entry point to the subroutine is in a different file, sub.a. (1.0 points)\n\nNote: This is not a recommended programming style! This problem is really about understanding the limitations that apply to the use of external symbols. There are places where they do not work, and as a consequence, you have to substitute less efficient and possibly less readable code for several of the simple instructions above.\n\nYour answer should be in the form of blocks of code, one representing each file, where the code in each file has a TITLE directive naming that file. Obviously, you will have to add INT LOOP and EXT QUIT to sub.a, and you will have to add EXT LOOP and INT QUIT to sub2.a, but there is more work to do because of the machine instructions that do not work with external symbols.\n\n4. 
Background: Consider the following description of a stream class.\n```; stream.h -- standard interface to all streams\n\n; all stream objects have the following core structure\nSTRCLASS= 0 ; pointer to stream class descriptor\n; ; each distinct subclass may add fields here\n; ; subclasses must provide the object size\n\n; each subclass must provide a class descriptor that conforms\n; to the following structure:\nPUT = 0 ; pointer to the put method\n; expects R3 = pointer to stream object\n; R4 = character to put\n\nGET = 4 ; pointer to the get method\n; expects R3 = pointer to stream object\n; returns R3 = one character from the stream\n\nCLOSE = 8 ; pointer to the close method\n; expects R3 = pointer to stream object\n```\n\nA problem: Assume that the global variable STDOUT (an external symbol) points to a stream object. Write code to put the character \"a\" out to that stream. (1.0 points)",
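"The calling pattern problem 4 implies can be pictured with a small Python sketch (the console_class and console_put names below are only stand-ins, and the real answer must of course be Hawk assembly): word 0 of the stream object leads to its class descriptor, and the PUT entry of that descriptor is called with the object and the character.\n\n```python\ndef console_put(stream, ch):\n    print(ch, end='')\n\nconsole_class = {'PUT': console_put}   # class descriptor: a table of method pointers\nstdout = [console_class]               # word 0 (STRCLASS) points at the descriptor\n\ndef put(stream, ch):\n    stream[0]['PUT'](stream, ch)       # load the descriptor, then call its PUT entry\n\nput(stdout, 'a')                       # puts the character 'a' to the stream\n```"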
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.92677206,"math_prob":0.8408728,"size":3471,"snap":"2023-40-2023-50","text_gpt3_token_len":830,"char_repetition_ratio":0.10816268,"word_repetition_ratio":0.032357473,"special_character_ratio":0.25698647,"punctuation_ratio":0.13416322,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9723421,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-09-24T06:36:21Z\",\"WARC-Record-ID\":\"<urn:uuid:a5b38844-6e27-4c2b-86ba-9fcf8b545e4b>\",\"Content-Length\":\"8452\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:daa72520-dec3-43fc-b98f-0d1319065d0b>\",\"WARC-Concurrent-To\":\"<urn:uuid:176d6781-faa6-4a29-bf6b-ff4d6da512a6>\",\"WARC-IP-Address\":\"128.255.96.133\",\"WARC-Target-URI\":\"http://homepage.divms.uiowa.edu/~jones/assem/spring14/hw/09.shtml\",\"WARC-Payload-Digest\":\"sha1:QTZ2JINL7YMLBROJ36JZ2NPTZEUGP4NX\",\"WARC-Block-Digest\":\"sha1:B2T3BRFW2JDBIDYU6KPYGTLT7IOVRPTH\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233506623.27_warc_CC-MAIN-20230924055210-20230924085210-00322.warc.gz\"}"} |
https://zims-en.kiwix.campusafrica.gos.orange.com/wikipedia_en_all_nopic/A/Xenon_monochloride | [
"# Xenon monochloride\n\nXenon monochloride (XeCl) is an excimer which is used in excimer lasers emitting near ultraviolet light at 308 nm. It is most commonly used in medicine. It was used to produce a body image from the Shroud of Turin. The results suggest that a directional burst of ultraviolet radiation may have played a role in the formation of the image on the Shroud.\n\nXenon monochloride\nIdentifiers\n3D model (JSmol)\nChemSpider\nProperties\nXeCl\nMolar mass 166.746 g/mol\nExcept where otherwise noted, data are given for materials in their standard state (at 25 °C [77 °F], 100 kPa).\nInfobox references\n\nXenon monochloride was first synthesized in the 1960s. Its kinetic scheme is very complex and its state changes occur on a nanosecond timescale. In the gaseous state, at least two kinds of xenon monochloride are known: XeCl and Xe\n2\nCl\n, whereas complex aggregates form in the solid state in noble gas matrices. The excited state of xenon resembles halogens and it reacts with them to form excited molecular compounds.\n\n## Introduction\n\nMolecules that are only stable in electronically excited states are called excimer molecules, but may be called exciplex molecules if they are heteronuclear. The exciplex halides constitute an important class of rare gas halides with formula RgX. Rg is the noble gas, and X is the halogen. These molecules are de-excited by emitting a photon whose energy is some Electronvolts. Therefore, the wavelength of the light produced is in the visible or ultraviolet spectra. Gas or gaseous mixtures that may lead to the formation of these molecules is a quasi-ideal laser medium since the population inversion is directly obtained when the excimer is formed. The other consequence of the unstable ground state is that the excimer or exciplex species must be generated by an external excitation (either through a discharge, an electron beam, microwave, or radiation). At least two gases must be used to generate exciplexes: a halogen donor and a rare gas. However, as shown in Table 1, not all rare gas halide molecules lead to the development of lasers; some may not even exist. Multiple molecules and applications have been developed.\n\nTable 1. Properties of rare-gas halides. D – molecule is dissociative and does not exist. F – observed fluorescence. L – molecule with laser effect.\nHalogen/Rare gasHeliumNeonArgonKryptonXenon\nFluorineDF & DLLL\nChlorineDDF & DLL\nBromineDDDF & DL\nIodineDDDDF & D\n\nSeveral review articles related to xenon chloride laser technology and its applications have been published.\n\nSome authors stress the importance of accurately determining the kinetics of the laser medium when rare-gas halides are involved. Recent results have provided insight into the physical chemistry of the laser medium. Spectroscopic investigations are limited to the visible-near ultraviolet region where exciplex lasers operate. Only binary gas mixtures of xenon and a chlorine donor, or ternary mixtures that also include a buffer gas (a rare gas indicated by Rg) will be considered. The most interesting chlorine donors are CCl\n4\nand HCl because of their use in laser technology, and Cl\n2\n(see Figure 1).\n\nXeCl and Xe\n2\nCl\nare most important in laser applications amongst the xenon chlorides. Although discharge lamps based on low-pressure mixtures of xenon and a chlorine donor emit incoherent light, they are reliable and easy to operate.\n\n## History\n\nThe idea that the noble gases can form halides arose in the early 1920s: A. 
von Antropoff and Oddo suggested that krypton and xenon may form bromides and chlorides. In 1933, Yost and Kaye unsuccessfully tried to synthesize xenon chloride by illuminating a mixture of xenon (70 torr of pressure) and chlorine (225 torr) with a mercury-vapor lamp.\n\nXenon monoclorides were first synthesized in 1965. Later, solid XeCl\n2\nand XeCl\n4\ncompounds were synthesized at low temperatures. In 1991, Prosperio et al. demonstrated the existence of XeCl\n2\nin the gaseous state, which is important for lasing kinetics, although it emits an uninteresting infrared light.\n\nIn 1973 Riveros et al. synthesized XeCl\nions in the gaseous phase at a pressure of 10-4 torr. This ionic molecule attracted little interest.\n\nSystematic studies of XeCl were initiated in 1975 by Velazco and Setser, who demonstrated 304 nm emission from XeCl\n*\n. This emission was obtained by mixing xenon atoms (Xe\n3\nP\n2\n) with chlorine gas Cl\n2\nor other chlorinated compounds (NOCl and SOCl\n2\n). The excitation was provided by a cold cathode discharge; the total pressure was a few torr. Months later, Ewing and Brau reported lasing from a XeCl film 2Σ1/2+2Σ1/2+ at 308 nm, which was most promising for industrial applications. The preferred chlorine donor for XeCl laser is HCl. The reasons given are:\n\n• Low absorption cross section at 308 nm of the order of 10-19 cm2. The HCl concentration does not substantially affect the laser's output energy. This is not the case for Cl\n2\nwhich has a very strong absorption at about 300 nm.\n• Less toxic than chlorine.\n• Generates a post-dissociation excimer laser, which is much better than other chlorine donors. 16,000 consecutive excimer lasers pulses were obtained without affecting energy output.\n• The constant rate of vibrational excitation and dissociative electron attachment are more favorable for HCl than for other chlorine donors. These processes help the formation of XeCl\n*\n.\n\nThree years later Lorentz et al. performed experiments at high pressures (a few atmospheres) in a mixture containing (Ar/XeCl\n2\n) and found an emission centered at 450 nm which was attributed to XeCl\n2\n.\n\nThe first XeCl\n2\nlaser was developed in 1980. This type of laser is likely to be tunable over a wide range of wavelengths (30 nm) in the visible spectrum. This is true even if the absorption phenomena occur on the side of shorter wavelengths and therefore limits the laser action at the red region of the electromagnetic spectrum from light emission. Solid state experiments with Xe\n2\nCl\n*\n suggest that the gaseous state is more suitable for the development of this type of laser. The measured amplification was correct in the solid state. The liquid state seems like an ideal dye laser although implementation seems complex and costly. Presently, the Xe\n2\nCl laser has not been industrially developed. Unlike XeCl, the best chlorine donor is CCl\n4\n while no laser action occurs when using HCl.\n\nFour molecules are a priori synthesized in the mixtures. Note particularly the possibility of synthesizing these under the experimental conditions of lasers and their roles.\n\nXeHCl was observed in a gaseous medium. However, this molecule has only been detected via emission spectra in the microwave, radio and far infrared regions, but with emission predicted by two theoretical studies at 232 nm and 129 nm. Note, however, that when nearly in aggregate, it is more likely to be stable at the solid state. 
It is the same for Xe\n3\nCl\nwhich can theoretically emit at 500 nm, while this activity has never been observed in the gaseous state.\n\nXeH has three known emission lines. They were observed at 190 nm, 250 nm and 660 nm. However, they have never manifested in the laser spectra, which leads to the assumption that XeH is not formed under the experimental conditions. Conversely, the XeH+\nion is formed in the mixtures used in lasers. It plays a significant role in kinetics in the synthesis of XeCl\n*\n, through a reaction that competes with the creation of Xe+\nions (shown below):\n\nHCl+\n+ Xe → Xe+\n+ HCl (80 ± 10%)\n\nHCl+\n+ Xe → XeH+\n+ HCl (20 ± 10%)\n\nThe rate constant of the entire process is 6.4×1010 cm3s-1 (± 20%).\n\nXe+\nion is a pivotal precursor in the formation of the exciplex molecule.\n\n## XeCl exciplex\n\n### The structure of the XeCl molecule\n\nThe potential curves presented in Figure 2 are the results of theoretical and experimental works.\n\nCommon characteristics for all halide states of the noble gases includes a group of related excited states B, C and D and a lower group of dissociatively or weakly bound states A and X. States B, D and X have Σ symmetry (Λ = 1/2) while the C state has π symmetry (Λ = 3/2). The state A is itself split into two sub-states, a symmetry Σ, A1/2 and the other symmetry π, A3/2.\n\nThe ionization potential of noble gases in their lowest excited state is close to the electron affinity of halogen atoms. Thus, the rare gas halide molecules are formed by an ionic bond since the excited electron of the noble gas is partly transferred to the halogen atom. The molecule so formed is therefore stable as is the case of the states B, C and D.\n\nThis electron transfer does not occur with ground state atoms. As the rare gas atoms are not reactive. This is the case for states A and X.\n\n#### The B, C and D states\n\nThese states are correlated with ground state Xe+\nions and Cl\n. The spin-orbital splitting of the Xe+\nion into two states (2\nP\n3/2\nand 2\nP\n1/2\n) is important; also the states B and D to which they are correlated are significantly far away. For the minimum potential curves corresponding to almost the same value of the internuclear distance (re#0.3 nm), the energy difference measured experimentally is about 9940 cm-1. This is in agreement with the energy of separation of Xe+\n(2P3/2) and Xe+\n(2P1/2) states valued at 10574 cm-1.\n\nPotential curves of the states B and C intersect adiabatically with a potential curve correlated to Xe* + Cl at large internuclear distances: 7.1 nm experimentally and 7.19 nm and 6.3 nm theoretically. A more recent theoretical investigation specifies these intersection phenomena. States B and C merging at long-distance, intersect two successive potential curves correlated to Xe* + Cl. The lowest correlated to Xe (3\nP\n2\n) + Cl (2\nP\n3/2\n) is 7.25 nm and after that, the next correlated to Xe (3\nP\n1\n) + Cl (2\nP\n3/2\n) is intercepted at 18.68 nm. As this intersection occurs at a great distance, the ionic character of the binding of these states near the equilibrium internuclear distance re is virtually unaffected.\n\nThis situation is slightly different for state D that crosses these two potential curves at a much shorter distance. 
Indeed, state D intersects Xe (3\nP\n2\n) + Cl (2\nP\n3/2\n) only at 0.89 nm and Xe (3\nP\n1\n) + Cl (2P3/2) at 1.02 nm.\n\nThe distinction between states B and C is that they are correlated with Xe+\nions whose semi-occupied orbital p is in a plane parallel to the internuclear axis for the state B and perpendicular to this axis for the state C.\n\nOn an examination of the energy position of the potential curve of states B and C, their proximity results in some difficulty. the values of the energy gap (EB – EC) between the two states is enumerated in Table 2. The data is highly dispersed; computed values, in particular, are far removed from all experimental values. These were determined mostly from the intensity ratios of the two emissions XeCl\n*\ncentered at 308 nm and 345 nm, either with or without corrections by the participation of transition (B → A). The most direct measure is given by Jouvet et al. Excitation spectra of XeCl\n*\ndirectly provide the energy difference between the vibrational levels v′=0 and v″=0 which correspond respectively to the B and C states. This value of 90 cm-1 is close to other measurements from studies in kinetics.\n\nTable 2 Energy gap (EB – EC) between the states B and C of XeCl.\nEB – EC (cm-1) Process Year Reference\n−1489 C 1977 \n−560 C 1978 \n7 I 1979 \n81 I & C 1979 \n128 ± 35 I 1980 \n−5.4 ± 25 I 1980 \n200 I 1980 \n230 I 1980 \n180 C 1981 \n289 I* 1982 \n220 ± 40 I 1983 \n85 C 1984 \n0 C 1984 \n−22 C 1985 \n> 50 I** 1986 \n230 ± 40 I 1987 \n90 ± 2 Absorption 1989 \n98 +30-40 I & C 1990 \n118 ± 40 I 1992 \n\nI: measurement derived from the value of the intensity ratios of the XeCl emissions centered at 308 and 345 nm (see § 3-1-1)\n\nC: measurement derived from a kinetic study providing the coupling constants between these two states.\n\n*: Emission at 345 nm is not corrected for the contribution XeCl (B → A)\n\n**: XeCl is in the solid state.\n\nPositioning of state B in relation to state C is theoretically justified by considering the configuration interaction between the ionic and covalent character states of similar symmetry. In a state 2Σ (as states B and X), a simply occupied orbital is located closer to an orbital of another atom such that the interaction or exchange of charges between two atoms are larger and easier than in a state 2π (like states C and A3/2), where a simply occupied orbital is perpendicular to the molecular axis and far away from another atom. The correction introduced by this phenomenon in terms of energy values is much more important for Σ states than for the π states. This interaction greatly increases the energy of state B relative to that of state C. Hence, the positioning on the observed potential curves from Figure 2.\n\n#### The X and A states\n\nThe lowest states are correlated with ground state xenon and chlorine atoms.\n\nDue to spin-orbital splitting of the chlorine atom level at 881 cm-1 into two states, (2\nP\n3/2\n) and (2\nP\n1/2\n), state A is divided into two sub-states. However, the effect of the spin-orbital coupling here is significantly weaker than in the case of Xe+\nion. At large internuclear distances, an energy gap of 882 cm-1 between A1/2 and A3/2 was experimentally measured in the solid state in a neon matrix. Thus, this value is very close to the energy separation of states Cl (2\nP\n3/2\n) and Cl (2\nP\n1/2\n). This confirms the theoretical assumptions of state correlations between XeCl state A and Cl. At large distances state A3/2 is similar to state X. 
Becker et al., who laid out the interaction potentials of 35\nCl\n(2\nP\n3/2\nand 2\nP\n1/2\n) and Xe (1\nS\n0\n) from the analysis of quasi–elastic scattering in collisions produced from crossed beams, has experimentally confirmed this result. Unlike some other noble gas halides, XeCl has a non-dissociative ground state. This bonding character was demonstrated experimentally well before theoretical studies of XeCl in solid state argon matrices at 20K and later in the gaseous state.\n\nThe Van der Waals force between atoms is not strong enough in state X to explain the presence of a potential well that when low (the depth is in the order of kilotorr) can contain between 12 and 20 vibrational levels (see Table 3). The relative increase in the binding energy of state X compared to state A can also be explained by taking into account the configuration interaction. State A is also very lightly bound with binding energy half that of state X.\n\nTable 3 Experimental numbers of vibrational levels in the potential wells of state X.\nValue Reference\n15 \n20 \n12 \n18 ± 1 \n\n#### Spectroscopic constants\n\nThe energy Ev'j'M of a known state M with a vibrational level v' with the rotational quantum number j is:\n\nEv'j'M = Te(M) + EVib(M) + ERot(M) where Te(M), EVib(M) and ERot(M) respectively denote the molecule's vibrational and rotational electronic energies.\n\n##### Electronic structure\n\nThe main features of the electronic states of a known state M are usually the dissociation energy De, the inter-atomic distance re and energy of the bottom of the potential well EM. For XeCl, different reported values of these quantities are summarized in Tables 4, 5 and 6. They were determined theoretically or experimentally for isotope 35\nCl\nin the solid or gaseous state.\n\nTable 4. Dissociation energies De in cm-1.\nRef X A B C D\n 280 ± 7% 129 ± 7%\n 33,957 33,392 33,634\n 36,699\n 281 ± 10 36,553 37,148\n 255 ± 10 36,540\n 281.1 ± 0.7\n 154\n 161\n 225\n 35,459\n###### Dissociation energies\n\nDissociation energies have been calculated or measured for different states of the eximer. Some states have more measurements than others. States A, C and D have too few measurements for a statistical analysis. For state B, the four values are not consistent with each other\n\nFor state X, there are six values, two of which are outliers. That of Flannery is an old, imprecise theoretical estimate. That of Tellinghuisen et al. is the first experimental determination made in 1976. Seven years later the same team corrected this value and closed the gap on the most recent estimates. The remaining four values seem to be the only reliable ones. De is (with a probability of 95%) between 278.3 cm-1 and 285.3 cm-1. The interval corresponds to a 1.3% fluctuation around 281.5 cm -1. Indeed, among the selected determinations are two measures with high uncertainty, and a third which the author does not indicate. The value of De of state X, depends on the number of vibrational levels contained in the well and sets the number of transitions bound → bound that can be achieved. This result is fundamental to a better understanding of XeCl laser spectroscopy.\n\n###### Equilibrium atomic distances\nTable 5: Equilibrium Inter-atomic Distances re in Ǻ.\nRef X A B C D\n 3.44\n 3.23 4.1\n 3.22 3.14 3.18\n 3.227\n 3.14\n 3.23 3.007 2.922\n 3.18 2.9374\n 3.074\n 4.05\n 3.23 4.09\n 2.9\n 3.17 3.08 3.12\n\nThe interatomic distance for states A, C and D has few measurements, but they are close. 
On average, state A is 0.408 nm, state D, 0.307 nm and state C, 0.311 nm.\n\nFor the state X, the theoretical determination of Adrian and Jette is statistically far from the others. Omitting it, using confidence level of 95% state X re, will be in the range: 0.318 nm < r e < 0.326 nm.\n\nThe value of Tellinghuisen et al. is at the limit of the interval. If ignored, the other three authors announce an identical value of 0.323 nm.\n\nTellinghuisen's value for state B is far from others for re. It is the same for Ewing et Brau, the oldest study of the noble gas halides which is based on the analogy of the excited rare gas with alkali metals. These are only estimates. These two values will be discarded to give a confidence interval at 95% for the interatomic distance of state B: 0.2993 nm < re < 0.3319 nm.\n\n###### Bottom of the potential well energies\nTable 6: Bottom of the potential well energies Ei in cm-1.\nRef X A B C D\n 34,441 35,005 45,329\n 32,398 ± 1 32,303 ± 1\n 29570\n 0 32405.8\n 32,828\n 22.7 29.4\n 32,382\n 30,860\n 32,405\n\nTable 6 shows that there is very little information for states X, A and D. For state X, Sur et al. arbitrarily took bottom of the well X as the origin of their energy scale. It is therefore not a direct measurement. Therefore, the state X as well as state A have been the subject of only one study; that of Aquilanti et al.. For state D, two quite different determinations exist.\n\nAs was seen in an earlier section, the location of states B and C is problematic.\n\nState B has the most attention from researchers. Two measures are statistically far from the others. Besides the previously mentioned study by Ewing and Brau, the old theoretical work of Hay and Dunning are among the doubtful determinations which will be broached soon. Without considering these values, the experimental work provides a confidence interval at a very narrow 95% threshold: from 32380.1 cm-1 to 32415.3 cm-1.\n\nIn contrast, no conclusion can be drawn statistically given the small number of measurements of state C. However, further analysis will illuminate despite the non-matching character values in Table 6. Indeed, the positioning of the C states with respect to state B has resulted in many publications.\n\nA statistical analysis of the values of Table 2 provides a step by step approach to the confidence interval at 95% which is the following: 76.8 cm-1 < (EB - EC) < 100.2 cm-1. Only four measures belong to this interval. This is the direct determination of Jouvet et al. and three values deduced from kinetic studies. On the other hand, a point estimate gives 88.5 cm-1 and the only measure to be consistent with it (given the absolute error indicated) is from Jouvet et al.. at (90 ± 2 cm-1). Statistical study then confirms the conclusions reached in paragraph 1.1.\n\nThe confidence intervals listed above for state B and the energy difference (EB – EC) produce an interval for EC: 32279.9 cm-1 < EC < 32338.4 cm-1.\n\nUnder these conditions, only the value of Jouvet et al. in Table 6 is consistent with this range. The three doubtful determinations include that of Hay and Dunning with a defective value for EB. Another early theoretical study by Clugston and Gordon also resulted in this interval. The same is true for solid state experimental work carried out by Fajardo and Apkarian.\n\nCalculating the mean of the two values in Table 6 yields 43838.45 cm -1. The energy gap of state B is then of the order of 11400 cm-1. 
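The 95% confidence intervals used throughout this section come from a standard Student-t treatment of the retained determinations. A minimal Python sketch, applied to the (EB – EC) analysis above and assuming that the four retained determinations are the Table 2 entries falling inside the quoted interval (81, 85, 90 and 98 cm-1):

```
# Student-t 95% confidence interval for a small set of determinations,
# applied here to the (EB - EC) gap of XeCl (values in cm^-1).
from statistics import mean, stdev
from scipy.stats import t

def confidence_interval(values, level=0.95):
    """Return (mean, half_width) of the two-sided Student-t interval."""
    n = len(values)
    m = mean(values)
    se = stdev(values) / n ** 0.5          # standard error of the mean
    half = t.ppf(0.5 + level / 2, n - 1) * se
    return m, half

# Four retained determinations of EB - EC (assumed subset, see text above).
eb_minus_ec = [81, 85, 90, 98]
m, h = confidence_interval(eb_minus_ec)
print(f"point estimate = {m:.1f} cm^-1")                      # 88.5 cm^-1
print(f"95% interval   = [{m - h:.1f}, {m + h:.1f}] cm^-1")   # [76.8, 100.2]
```

Presumably the same treatment underlies the other 95% intervals quoted for De, re and the well energies.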
Shostak and Strong experimentally determined the energy difference between states D and B and found 9900 cm-1. The discrepancy between this value and the 11,400 cm-1 gap estimated above is significant. Considering only the work by Sur et al., the energy difference between states B and D becomes of the order of 9950 cm-1, which is close to that of Shostak and Strong. This observation casts fresh doubt on the theoretical work of Hay and Dunning, for which (ED – EB) is 10,888 cm-1.

With respect to the electronic structure, it appears that older studies pose a problem regarding some of their results. On the other hand, the work carried out by Fajardo and Apkarian is not always consistent with observations in the gaseous state. Moreover, recent theoretical studies do not eliminate significant differences with experimental results.

Removing the values of Hay and Dunning leaves a single determination of De for each of states C and D and makes the three remaining values for state B homogeneous. Among these, the value of Tellinghuisen et al. remains problematic compared with the others. The energy De for state B then has an average value of 36,184 cm-1.

##### Vibrational Structure

The vibrational energy of level v’ of a given state M can be calculated as:

EVib(M) = ωe (v’+1/2) – ωexe (v’+1/2)²

where ωe and ωexe denote, respectively, the basic vibrational frequency and the anharmonicity constant. Their determinations are collected in Tables 7 and 8.

###### The basic vibrational frequencies

The values of ωe are grouped together in Table 7.

Table 7: Values of ωe in cm-1.
Ref X B C D
 210
 188 188 189
 27 ± 1 193 ± 1 204 ± 1
 194.235
 26.22 194.75 204.34
 26.27 (± 0.55) 195.17 (± 0.31)
 195.6
 50 ± 10
 188
 195.2
 187
 210
 195 198
 205 ± 12

States X, C and D have only four determinations. No measurement can be considered statistically distant from the others, despite the disparities.

State B offers nine determinations. A statistical analysis leads to a 95% confidence interval: 194.7 cm-1 < ωe < 195.4 cm-1.

Six values in Table 7 fall outside this range, three of them markedly so. These are old publications, of which two (Hay and Dunning, and Brau and Ewing) were central to the previous section. Golde's results were based on the same method as that used by Brau and Ewing.

The other three measurements outside the range are more recent. Kvaran et al. worked in the solid state and, like Fajardo and Apkarian, observed significant differences from gaseous-state values. More surprisingly, the values of Jouvet et al. and Tamagake et al. also disagree with this range, even though these studies otherwise gave good results. Finally, among the values in agreement with this range are many studies that were more theoretical than experimental.

In conclusion, Tellinghuisen et al. gives very good results on both state B and state X.

The reported results on state C are rather questionable. The work by Jouvet et al. is at the extreme compared with the other state B measurements.

As for state D, excluding the result of Hay and Dunning makes the other three values more cohesive.

Finally, the values of ωe for states X, C and D still need to be specified more precisely. The main interest of such a clarification would be a better resolution of the vibrational structure of the transition used in the laser, which requires a better knowledge of state X (a numerical illustration follows below).
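As an illustration of how these constants enter the expression above for EVib(M), the vibrational ladder of state B can be evaluated. A minimal Python sketch, taking ωe ≈ 195.0 cm-1 from the confidence interval above and ωexe ≈ 0.6 cm-1 anticipated from Table 8 below (both treated as representative values only):

```
# Vibrational term values E_vib(v) = we*(v+1/2) - wexe*(v+1/2)^2 for XeCl(B),
# using representative constants in cm^-1 (see Tables 7 and 8).
WE_B, WEXE_B = 195.0, 0.60

def e_vib(v, we=WE_B, wexe=WEXE_B):
    """Vibrational energy of level v above the bottom of the potential well."""
    x = v + 0.5
    return we * x - wexe * x * x

for v in range(6):
    spacing = e_vib(v + 1) - e_vib(v)      # Delta G(v+1/2), shrinks with v
    print(f"v={v}: E_vib = {e_vib(v):7.1f} cm^-1, spacing to v+1 = {spacing:5.1f} cm^-1")
# Zero-point energy is about 97 cm^-1 for these constants.
```

The spacing between successive levels decreases by 2ωexe per step, which is why the anharmonicity constant discussed next also matters.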
On the other hand, the structure of state C is important because it plays a fundamental role in laser kinetics .\n\n###### Anharmonicity constants\n\nTable 8 shows anharmonicity constant measurements for the various states. The measurements for constants of anharmonicity for states X, C and D are very inconsistent.\n\nTable 8: Values of ωexe in cm-1.\nRef X B C D\n 0.66 0.85 0.80\n 0.25 ± 0.07 0.75 ± 0.1\n 0.63152\n – 0.321 0.627 0.682\n – 0.278 (± 0.17) 0.543 (± 0.063)\n 0.54\n\nSix measurements for state B produce the confidence interval at 95%:\n\n0.532 cm-1 < ωexe < 0.669 cm-1.\n\nThe work by Jouvet et al. is statistically far from the others and the authors cannot explain this difference. Hay and Dunning give correct forecasts, as does the vibrational structure study by Tellinghuisen et al..\n\n##### Rotational structure\n\nThe following expression denotes rotational energy: Erot(M) = B’.K’ef – D’.(K’ef)2, where K’ef = j’(j’+1) ± (1/2).δ(j’+1/2) ;\n\nB’ and D’ respectively are the rotational constant and the first centrifugal distortion constant. Their values are indicated in table 9 and table 10. δ is a parameter that is equal to 2.0 for state B and 0.4 for state X.\n\nTable 9: Values of B’ in cm-1.\nRef X (v'=0) X (v'=12) B\n 0.0585 0.0675\n 0.0560 0.0274\n 0.0669\n\nTherefore, the rotational structures is very poorly known. Despite that, one should notice the consistency of some measurements made on B’.\n\nTable 10: Values of D’ in cm-1.\nRef X (v'=0) X (v'=12) B\n 9.3 × 10-7 1.9 × 10-6\n 3.2 × 10-8\n\n### Synthetic pathways\n\nWhen they are in a configuration belonging to metastable states np5(n+1)s1, (n = 5 for xenon), rare gases possess properties of polarizability and elastic scattering similar to those of alkali metals. The valence electron, s, of the excited rare gas has a bond energy close to that of the alkali metal that follows it in the periodic table. In older publications, this analogy that is applicable only for the heavier rare gases, is used to study the behavior of these gases with halogen donors. The alkali metals have good chemical affinity for halogens and should have affinity for excited rare gases. Experimentally the collision cross section of metastable states of rare gases with the halogens is similar to that of alkali metals with halogens. Thus, an excited xenon has an electronic structure close to that of caesium, so that it can react with a chlorine donor in order to form XeCl\n*\n.\n\nSignificant differences between alkali metals and excited rare gases exist in their molecular symmetry. The number of states of rare gas halides is greater than that of alkali metal salts. This is due to spin-orbital splitting of atoms and ions of rare gases.\n\nThe first condition to produce XeCl is to make xenon reactive. To do this, it must be either excited, ionized or both. Several methods of external excitation have been used. The most common are electric shocks, electron beams, laser excitation, microwaves and α particles.\n\nThe excitation is not selective and formation of XeCl\n*\nmay follow many paths. Their relative importance varies with the conditions, especially with pressure, mode of excitation and the halogen donor. When ternary mixtures are involved, the creation process of XeCl is more complicated. Nevertheless, the addition of a buffer gas offers many advantages. Other rare gases are cheaper than xenon, but they (along with their excited species and their ions) absorb less than xenon at 308 nm. 
Thus, the buffer gas can be used in very high proportions without much alteration of the laser's output power. Under these conditions, the proportions of xenon and HCl must match those required to produce the desired amount of exciplex molecule. The essential role of the buffer gas is to transfer to xenon atoms the necessary excitation energy. This transfer can be regarded as instantaneous. It can result in excitation or ionization of xenon or the formation of a RgXe+\nion. Each of these species may then react with the chlorine donor to form XeCl\n*\n. On the other hand, the formation of neutral species of RgXe does not seem to be important.\n\nThe two main ways of exciplex synthesis are collision (between chlorine molecules and xenon, where at least one species is excited) and ion recombination. The buffer gas is sometimes a partner in the former and almost always involved in the latter.\n\nThe formation of XeCl\n*\nis extremely effective since Konovalov et al. observed emission of XeCl in krypton while xenon was present only in trace amounts (0.2%).\n\n#### Photoassociative pathway\n\nXeCl\n*\nsynthesis occurs when a mixture containing xenon and chlorine (Cl\n2\n) is excited by means of a laser emitting between 304 nm and 312 nm. Two reactions are then induced:\n\n• excitation of an electronically isolated atom or molecule of xenon followed by reactive collisions\n• simultaneous interaction of a pair in a collision and one or two laser-introduced photons generates an intermediate state that then results in the desired product without an intervening collision.\n\nIn the latter case, a transient complex is formed (Xe-Cl\n2\n)* in the state (1Πu). Therefore, two dissociation pathways are possible from the time a photon is absorbed by the Cl-Cl pair or the pair Xe-Cl from (Xe-Cl\n2\n)* in the state (1Πu).\n\nXe-Cl\n2\n(1Πu) + hν → Xe-Cl\n2\n(1Πg) → Xe+\nCl\n2\n- → XeCl(B,C) + Cl\n\nXe-Cl\n2\n(1Πu) + hν → Xe-Cl(X)-Cl + hν → Xe-Cl(B)-Cl → XeCl(B) + Cl\n\nThe rate constant of the reaction was measured by considering the photon as a third partner. It is 6×1029 cm6s-1.\n\nSimilar results were obtained with other chlorine donors including HCl and CCl\n4\n.\n\nIn all cases, the molecules of XeCl(B, C) are always produced in states with a strong vibrational excitation.\n\n#### Collision pathway\n\nNumerous processes' importance rests on the type and excitation of the species in collision. The principal leftover in all cases is the emissions that follow from binary collisions.\n\n##### Harpoon collisions\n\nThese reactions involve the chlorine donor at the ground state and an excited atom of xenon, both in the first 6s, Xe * and at higher levels Xe ** such as level 6p.\n\n###### Mechanism\n\nGenerally, these reactions can describe the result of collisions of the noble gas atoms (Rg) and halogen donors (RX), where X is a halogen atom and R a radical molecule. The products of the reactions depend strongly on the type of rare gas and halogen donor. In our case where Rg = Xe and X = Cl, the nature of the products follows this rule. In some cases, this collision may not provide any halide rare gas.\n\nThe atom Rg and the molecule RX follow when they approach the lowest adiabatic potential and the reaction proceeds by the orbital mechanism controlled at the crossover of the ionic-covalent. The reagents (Rg and RX) approach on a covalent diabatic surface. They then form a complex Rg\n*\n...RX at a fairly large internuclear distance. Its potential is V(Rg, RX). 
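The internuclear distance at which this covalent surface can meet the ionic surface V(Rg+...RX−), a crossing described next, is set roughly by the Coulomb attraction within the ion pair: the ionic curve is lowered by about e²/4πε0R, so the crossing occurs near Rc ≈ e²/4πε0(IE − EA), where IE is the effective ionization energy of the excited rare gas and EA the electron affinity of the halogen donor. A minimal Python sketch for Xe* + Cl2, using textbook values that are not taken from this article:

```
# Rough estimate of the ionic-covalent crossing radius for Xe* + Cl2
# (harpoon picture): the crossing lies where the Coulomb-stabilised ionic
# curve meets the nearly flat covalent asymptote.
E2_4PI_EPS0 = 1.44           # e^2 / (4*pi*eps0) in eV*nm

IE_XE_STAR = 12.13 - 8.32    # eV, IP of Xe minus the Xe(6s) excitation energy (textbook values)
EA_CL2 = 2.4                 # eV, adiabatic electron affinity of Cl2 (textbook value)

def crossing_radius(ie_eff, ea):
    """Radius (nm) where e^2/(4*pi*eps0*R) equals the charge-transfer cost."""
    return E2_4PI_EPS0 / (ie_eff - ea)

print(f"Rc ~ {crossing_radius(IE_XE_STAR, EA_CL2):.2f} nm")   # of the order of 1 nm
```

A crossing radius of the order of 1 nm, several times larger than the equilibrium distances of Table 5, is consistent with the complex Rg*...RX forming at a fairly large internuclear distance.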
When the distance becomes sufficiently small, it may be that V(Rg, RX) intersects an ionic potential surface (Rg+\n...RX-). The crossover can occur through the transfer of an electron from Rg to RX. This is known as a harpoon mechanism. In this case, the atoms continue on the new surface. This leads to a diffusion reaction and the formation of RgX*.\n\nFigure 3 shows the process of creating XeCl\n*\nwhich involves Rg=Xe and X=Cl. After its transfer, the electron occupies an antibonding orbital of RCl. In the presence of Xe+\n, RCl\nsplits into R and Cl\n. Xe\n*\nions and Cl\nthen recombine to form XeCl in states B, C and D because there is no new force between Cl\nand R. The vibrational excitation of XeCl\n*\nis always important. In total, everything takes place according to the reaction equation:\n\nXe* + RCl → XeCl\n*\n(B,C,D) + R with rate constant of kMX\n\nHowever, the competitive formation of XeCl\n*\nreactions occur before or after the intersection. They correspond to the interactions of the V potential (Rg\n*\n, RX*) and V (Rg + RX *).\n\nIn general, this situation occurs when the ionic surface is intersected by covalent surfaces where RX is in its lowest excited state. The distribution of output depends on the number and nature of output channels that are possible following collisions. The most common occurs at the intersection of the potential surfaces by a transfer of electronic energy that can cause a dissociation of the excited acceptor:\n\nRg\n*\n+ RX → (Rg+...RX-) → Rg(B,C,D) + RX* with rate constant kET\n\nRg\n*\n+ RX → (Rg+...RX-) → Rg + R + X with rate constant kD\n\nThis pathway tends to become less important as the complexity of RX increases\n\nIt is also possible that the transfer took place in a state that is not correlated to the RX* ion but at very high Rydberg states in the neutral molecule and lying just below the limits of ionization. Critical factors regulating the branching ratios are the potential energies interrelated with the molecular ion (VI), the Rydberg group close to the ionization (VII) or an initial excited atom (VIII). The importance of these pathways increases with the depth of hole V (Rg\n*\n, RX*).\n\nWhen the highly separated asymptotic energy levels are in the order VI > VII > VIII and the potential energies (VII) are attractive, the first failed intersection is encountered when the approach of reacting atoms favors output of (VII) rather than the anionic (VI). Since (VII) has a cationic center that remains strongly linked, it preferentially leads to a transfer of excitation. This is the dissociative excitation reaction:\n\nRg\n*\n+ RX → Rg + R* + X or Rg + R + X* with rate constant kDE\n\nIf VIII > VII at long distance, the Penning ionization pathway or associative ionization are possible:\n\nPenning ionization : Rg\n*\n+ RX → Rg + RX+ + e- with rate constant kPI\n\nAssociative ionization: Rg\n*\n+ RX → (RgRX)+ + e- with rate constant kAI\n\nIn (VI) bonding with an halogen atom is in principle, weak and atomic transfer is enhanced between Rg and R. This potential thus leads to the formation of the exciplex.\n\nThere are therefore a priori five competitive ways of synthesizing RGX. For XeCl\n*\nan excited xenon atom collides with a chlorine donor. These five reactions were all observed for various chlorine donors. To quantify the proportion of produced exciplex, it is customary to define the branching ratio. 
It shows the rate of formation of XeCl, as denoted by Γ XeCl:\n\nΓXeCl = kMX / (kMX + kAI + kPI + kET + kDE + kD)\n\nΓXeCl measurements were effectuated for several chlorine donors and principally for the 6s and 6p states of xenon.\n\nXe(6s or 6p) + RCl → products with rate constant kQ\n\nkQ is the total rate constant and is calculated as: kQ = kMX + kAI + kPI + kET + kDE + kD\n\nTable 11. Total rate constants in cm3s-1 for harpoon collisions between Xe* and Cl\n2\n. ΓXeCl = 1.\nState of xenon kQ × 10-10 Reference\n3\nP\n2\nor (6s[3/2]2)\n(10 ± 2) \n3\nP\n2\nor (6s[3/2]2)\n7.2 \n3\nP\n2\nor (6s[3/2]2)\n(7.0 ± 0.5) \n3\nP\n1\n(7.9 ± 0.9) \n1P1 (7.6 ± 0.7) \n(6p[1/2]0) (14.6 ± 0.2) \n(6p[1/2]0) (17.9 ± 0.2) \n(6p[1/2]2) (14.5 ± 0.2) \n(6p[1/2]2) (15.5 ± 0.2) \n(6p[5/2]2) (13.3 ± 1.0) \n(6p[5/2]2) (12.8 ± 0.3) \n(6p'[3/2]2) (18.6 ± 0.5) \n(6p'[1/2]0) (21.9 ± 1.0) \n(7p[5/2]2) (30.7 ± 1.9) \n(7p[1/2]0) (29.5 ± 0.8) \n(7d[1/2]1) (9.2 ± 0.5) \n\nThe results for Cl\n2\n, CCl\n4\nand HCl (v = 0) are summarized in Tables 11–13. ΓXeCl is set equal to 1 by Setser Ku where the chlorine donor is Cl\n2\n. This decision is justified by the fact that for Xe* + Cl\n2\nwe have VII > VI > VIII, which according to Simons fixes an unlikely channel for the excitation transfer.\n\nTable 12 : Total rate constants in cm3s-1 and ΓXeCl for the harpoon collisions between Xe* and HCl (v = 0).\nState of xenon kQ × 10-10 ΓXeCl Reference\n3\nP\n1\nor (6s[3/2]1)\n6.2 0.01 \n3\nP\n2\nor (6s[3/2]2)\n(7 ± 2) \n3\nP\n2\nor (6s[3/2]2)\n5.6 0.01 and Velazco et al.\n3\nP\n2\nor (6s[3/2]2)\n5.6 <0.02 \n1P1 4.62 Chen and Setser\n1P1 7 ≈0 \n(6p[1/2]0) (8.3 ± 0.5) 0.80 ± 0.15 \n(6p[3/2]2) (8.0 ± 0.5) 0.60 ± 0.15 \n(6p[3/2]2) (6.5 ± 0.2) \n(6p[5/2]2) (8.0 ± 0.5) 0.40 ± 0.15 \n5d[3/2] (15.6 ± 1.5) 0.48 \nSummary of 6p states 5 \nSummary of 6p states 5.6 0.60 \n\nA first analysis of Tables 11-13 shows that the results are in good agreement when several measurements were made for the same reaction. We find that most collisions had their rate constants measured only once. Moreover, with rare exceptions, these determinations for KQ and ΓXeCl are limited to the lowest excited states of atomic xenon. This shows the need for new measures to confirm the available experimental results and estimate the role of other states that do not fail to form if one makes use of, as for the lasers, non-selective modes of excitation.\n\nTable 13 : Total rate constants in cm3s-1 and ΓXeCl for harpoon collisions between Xe* and CCl\n4\n.\nState of xenon kQ × 10-10 ΓXeCl Reference\n3\nP\n1\nand 3\nP\n2\n1.73 0.24 \n3\nP\n1\nand 3\nP\n2\n6.3 0.13 \n(6p[1/2]0) (7.5 ± 0.2) 0.68 ± 0.2 \n(6p[3/2]2) (7.8 ± 0.5) 0 60 ± 0.15 \n(6p[5/2]2) (7.3 ± 0.5) 0.35 ± 0.10 \n\nAn important result for XeCl lasers is evident in an initial analysis. Xe(6s) + HCl (v = 0) does not produce XeCl. However, according to the estimates of Kannari et al. 5% of exciplex synthesis occurs through the harpoon reaction. In addition, Xe(6p) states produce 2.5% of this amount.\n\n###### Initial States: Xe(6s)\n\nMolecular chlorine reacts efficiently with these xenon states. Since Cl\n2\nis formed in gaseous mixtures (Figure 1), this reaction is important in the kinetics of XeCl lasers.\n\nReaction with CCl\n4\nis faster than Cl\n2\nby an order of magnitude, but it is still effective. This reaction is important in the kinetics of Xe\n2\nlasers.\n\nIf the chlorine donor is HCl, the situation is more complex. Two situations are apparent:\n\n• HCl at the ground state with vibrational level v=0. 
The values for KD are very similar regardless of the initial state of xenon; the branching ratio for the 6s states is very low. The contribution of these xenon states to the formation of XeCl\n*\nis negligible. In addition, competitive reactions occur before the intersection of the potential curves V(Xe* + HCl) and V(Xe+\n+ HCl\n). The quenching Xe (6s) HCl is important in laser kinetics. It destroys xenon states capable of forming an exciplex.\n• HCl in the ground state with vibrational level v=1. For the Xe(3\nP\n2\n) state, Chang identified a marked increase in the XeCl production rate. The rate constant for XeCl synthesis was measured with a minimum value of 2×1010 cm3s-1 and ΓXeCl = 35%. The first estimate made by Levin et al. and based on correspondence was published at 6×1011 cm3s-1 and ΓXeCl = 11%, but this reaction was obsoleted by Chang's direct measurements. As the vibrational excitation of HCl increases, the rate of formation of XeCl follows. No direct measure is available, but analogical estimates exist. For v=2, values for synthesis rate constants include: 5.6×1010 cm3s-1 and 2.0×1010 cm3s-1.\n\nAccording to other authors, the set of vibrational levels are taken into account. And for V ≥ 1, Kannari et al. proposed a synthesis rate constant of 5.6×1010 cm3s-1 and ΓXeCl = 26%. Experiments are necessary to clarify this aspect of laser kinetics.\n\n###### Initial States: Xe(6p)\n\nThe synthetic reactions of XeCl are generally more effective than the 6s state. This applies for the three chlorine donors indicated graphically in tables 11, 12, and 13.\n\nThe rate constants are twice faster for chlorine than for HCl and CCl\n4\n.\n\nFor HCl, the situation is different from the previous case. If the total rate constants are of the same order of magnitude as those of the 6s states, the branching ratios ΓXeCl are high. The result explains the forecast by Kannari et al. regarding the effectiveness of the rate of synthesis of XeCl\n*\nfrom Xe(6p).\n\nWith reference to the potential curves of Figure 3, the potential curves of V(Xe** + RX) and V(Xe+\n+ RX-) intersect at a greater internuclear distance than 6s states in a region of strong interactions. This explains why the production of XeCl is more effective after the intersection than in the 6s states irrespective of the chlorine donor, as seen for Cl\n2\n, HCl, CCl\n4\n, and also for chlorofluorocarbons in the states 6p[1/2]0 and 6p[3/2]2.\n\nCompetitive reactions occur. One of them has been experimentally observed and quantified – the collisional relaxation induced by HCl:\n\nXe(6p[3/2]2) + HCl → Xe(6s[5/2]20) + HCl with rate constant ka or ka = 4.3×1011 cm3s-1.\n\nThis represents only 6% of the value of kQ from table 12 for the (6p[3/2]2) state. As the proportions of exciplex synthesis is placed at 60%, one should conclude that there are other important competitive processes at play.\n\nThe summarized results in Table 12 relate to HCl (v=0). For 6p states, the role of vibrational excitation of HCl in the kinetics of XeCl formation is poorly understood. Some authors argue for rate constants neighboring state v=0 if HCl is vibrationally excited, but this results are based on analogies. An experimental clarification is therefore needed. The rate constant for v=1 is placed at 5.6×1010 cm3s-1. The same value is used for v=2. Kannari et al. 
is still not likely to reduce the different vibrational levels of HCl and for v≥1, 8.2×1010 cm3s-1 is proposed.\n\n###### Strongly excited states of xenon\n\nExperiments conducted with Cl\n2\nshow that the effectiveness of XeCl formation increases with the excitation energy of the xenon atom; the rate constant of synthesis is multiplied by three when one goes beyond the 6s states to the 7p states (table 11).\n\nThe rate of XeCl\n*\nsynthesis increases by an order of magnitude when one goes beyond the 6s states to the 6p states when CCl\n4\n(table 13) is utilized.\n\nHCL is ambiguous. An examination of Table 12 shows that the increase in kQ does not appear to increase significantly with the xenon excitation. So far, no measurements go beyond the 5d[3/2] state that is roughly of the same energy as the 6p state. The rate of synthesis also seems very effective from the 7s[3/2] states without there being any known numerical value. The available information does not support assuming a more efficient rate of synthesis of the exciplex as the excitation of xenon gradually increases. Indeed, for the state 5d[5/2]30, there is only an excitation with a reaction rate constant of 3.2×1012 cm3s-1:\n\nXe(5d[5/2]20) + HCl → Xe(6p[3/2]2) + HCl\n\nAlso, the Rydberg states do not appear to have produced XeCl. The observed reactions for Xe(31f) are the following:\n\nXe(31f) + HCl(J) → Xe(31l) + HCl(J) (α)\n\nXe(31f) + HCl(J) → Xe(nl) + HCl(J-1) if J≤5 (β)\n\nXe(31f) + HCl(J) → Xe+\n+ e- + HCl(J-1) if J>5 (γ)\n\nThe total rate constant is kT = (11.3 ± 3.0)×10–7 cm3s-1, divided into the following:\n\nkα = (5.5 ± 2.5)×10–7 cm3s-1 (l-changing)\n\nkβ = (4.8 ± 2.4)×10–7 cm3s-1 (n-changing)\n\nkγ = (0.9 ± 0.4)×10–7 cm3s-1 (ionisation)\n\nNote that the reaction (γ) produces an important XeCl precursor, namely Xe+\n.\n\n###### Conclusions\n\nHarpoon reactions play an important role in laser kinetics.\n\nFor Xe\n2\nCl\nlasers, the situation is simple when reacted with CCl\n4\n. For the XeCl laser, the harpooning kinetics is more complex. Despite its weak proportion in a gaseous mixture, Cl\n2\nis produced much effectively from the exciplex through harpooning. The 6s states do not come into play in the production of XeCl\n*\nto the extent that they give rise to collisions with molecules of vibrationally excited HCl.\n\nThe kinetics of the vibrational excitation of HCl is therefore fundamental. At least the first six levels of vibration should be taken into consideration in order to build a satisfactory model. This vibrational excitation is produced by the following electrons:\n\nHCl(v) + e- → HCl(v’) + e- (EV) with rate constant K.\n\nThe rate constants of (EV) were measured for the following transitions: v=0→v’=1, v=0→v’=2, v=1→ v’=2 et v=2→v’=3. An empirical law can then be proposed:\n\nKv→v+1 = v K0→1\n\nKv→v+2 = v K0→2\n\nThe values for K are dependent on the electron energy distribution as shown in Figure 4.\n\nIn the harpoon reactions, the rate of synthesis of the B state with respect to that of the C state is included between 1 and 2 whatever the nature of the rare gas halide. Nevertheless, one notices a clear increase in the proportion of state B with respect to state C when pressure increases. This relation is also strongly influenced by the nature of the chlorine donor. It is 1.2 for CCl\n4\n and 1.3 for Cl\n2\n. The excitation state of xenon is important. 
For the case of Cl\n2\n, it was observed that the rate of synthesis of the B state could be five times higher than the C state if Xe(6p[1/2]0) takes part in the reaction than if they in strongly excited states.\n\nOther reactions are involved in the reactive collisions between neutral species but they play a negligible role.\n\n##### Reactions involving excited molecular species\n###### The role of xenon molecules\n\nIt is difficult to find reactions involving the molecules of xenon and HCL in published literature.\n\nLorents only measured the rate constant of decomposition of Xe2* by HCl as (8.2 ± 0.8)×10–10 cm3s-1 without stating the resulting products.\n\nIn contrast, Bibinov et Vinogradov observed the following reaction with Cl\n2\n:\n\nXe2* + Cl\n2\nXeCl\n*\n+ Cl + Xe\n\nExciplex synthesis was by harpooning. The rate constant was estimated at 7.1×1010 cm3s-1.\n\n###### The role of excited HCl\n\nCastillejo et al. observed an HCl emission between 200 and 240 nm due to the B transition B(1Σ+) → X (1Σ+) (see figure 5). This emission disappears with increase in the pressure of xenon and XeCl(B) appears. In other words, XeCl(B) could be synthesized by the reaction:\n\nHCl (B 1Σ+) + Xe (1SO) → XeCl(B) + H\n\nThe rate constant is estimated at 5×1010 cm3s-1.\n\nAnother output pathway seems competitive to exciplex synthesis within the same collision which product should be:\n\nXe+\n+ H + Cl + e- and the associated rate constant associated is 1×1010 cm3s-1.\n\n###### The role of excited Cl2\n\nCl\n2\nis synthesized in the laser through the following reaction:\n\nCl* + HCl → Cl\n2\n* + Cl\n\nThe rate constant is 1×1010 cm3s-1. Exciplex synthesis occurs through the following reaction:\n\nXe + Cl\n2\n*(1Σu+) → XeCl\n*\n+ Cl with rate constant ku\n\nThe values of ku are given in table 14. The results from Zuev et al. is statistically distant from the others although recent. Ignoring it, the average value should be ku = 2.6×1010 cm3s-1.\n\nTable 14 : Values of ku in cm3s-1\nku × 10-10 Reference\n1.1 \n(1.2 ± 0.2) \n(3.0 ± 0.5) \n18 \n5 \n\nA corresponding reaction could be found for the Cl\n2\n* (D’ 3π2g) state.\n\n##### Termolecular reactions\n\nThey are essentially produced in ternary mixtures and are of the type:\n\nXe** + Cl\n2\n+ M → XeCl\n*\n+ Cl + M with rate constant kc\n\nThe rate constant kc is given in table 15. Notice only the processes where M=Ar are negligible.\n\nTable 15 : Values of kc in cm6s-1.\nState of xenon Xe** M = Xe × 10-28 M = Ar × 10-28\n(6p[1/2]0) (3.5 ± 0.5) < 0.5\n(6p[3/2]2) (1.4 ± 0.5) < 0.1\n(6p[5/2]2) (1.8 ± 0.5) < 0.1\n\nAs for helium, there are two reactions:\n\nXe* + Cl + He → XeCl\n*\n+ He\n\nXe** + Cl + He → XeCl\n*\n+ He\n\nThe rate constants are respectively, 10-27 cm6s-1 and 3×1027 cm6s-1.\n\nThere also exist data where the xenon atoms are at the ground state:\n\nXe + Cl + M → XeCl (X) + M where M = Ne or Xe\n\nIn both cases, the rate constant is: 1.2×1033 cm6s-1.\n\n##### Other reactions\n\nChlorine, Cl\n2\n, synthesized in a gaseous mixture could induce the following reactions:\n\nXe + Cl\n2\nXeCl\n2\n\nXe* + Cl\n2\n+ Xe → Xe+\n+ Cl\n2\n- + Xe → (XeCl\n2\n)* + Xe\n\nAs the sublimation temperature of XrCl\n2\nis ts= 80 °C, this molecule is synthesized at room temperature, in the solid state within the gaseous mixture. This causes a parasitic lasing phenomenon called \"laser snow\".\n\nSome authors have proposed increasing the temperature to make XeCl\n2\nsublime. 
It then becomes reactive and actively participates in the synthesis of XeCl\n*\n:\n\nXeCl\n2\n*XeCl\n*\n+ Cl\n\nXe* + XeCl\n2\n→ 2 XeCl\n*\n\nThe temperature increase procures two advantages: to eliminate the parasitic laser phenomenon and increase XrCl production. However, the increase should not be of much importance so that XeCl\n2\ndoes not dissociate which would destroy the preceding reaction.\n\nIn ternary mixtures, RgCl exciplexes could be synthesized, possibly leading to the formation of XeCl\n*\nthrough so-called displacement reactions. They have been observed when the Rg is Ar or Kr:\n\nRgCl* + Xe → XeCl\n*\n+ Rg with rate constant kd or kd=1.5×1010 cm3s-1 for Rg = Ar\n\nInversely, RgCl synthesis consumes the available chlorine reducing the rate of XeCl production. The laser quality may be negatively affected as was the case with krypton.\n\nThis review will be limited to synthetic reactions of XeCl\n*\n, excluding ionic recombination. A second pathway exists and will be considered.\n\n#### Ion recombination\n\nAccording to several authors bimolecular reactions (Xe+\n+ Cl\n, Xe2+ + Cl\nand RgXe+\n+ Cl\n) are not involved.\n\nTernary reactions are typically:\n\nXe+\n+ Cl\n+ Rg → XeCl\n*\n+ Rg (3)\n\nXe+\n2\n+ Cl\n+ Rg → XeCl\n*\n+ Rg + Xe (4)\n\nRgXe+\n+ Cl\n+ Rg → XeCl\n*\n+ 2 Rg (5)\n\nXenon ions are synthesized directly in the discharge or through successive reactions that involve Rg+, Rg2+ as well as other ionic or excited species. Figure 1 gives an example where Rg=Ne and figure 6 where Rg=He.\n\nThe Cl\nions are basically formed by dissociative attachment from an HCl electron:\n\nHCl(v) + e- → H + Cl\n\nIn that same case, the rate constant (AD) depends on the energy distribution of the electrons as illustrated in Figure 4.\n\nThe third element Rg is passive chemically. It only serve to stabilize the reaction. Therefore, the authors only took the recombination rates of the positive and negative ions into consideration. These vary in a significant way with the total pressure of the gaseous mixture, the buffer gas and temperature.\n\nReactions (3) and (4) were experimentally demonstrated for all the rare gases. Figure 7 and Figure 8 show the influence of the buffer gas and pressure on the rate of recombination of these reactions when helium and then neon are utilized as buffer gases. This rate of recombination is of the same order of magnitude in both cases, of about some 10-6 cm3s-1. Apparently the influence of temperature has only been studied for neon. (See Figure 9.) The rate of recombination α3 in reaction (3) is at maximum at 180K for an absolute pressure of 294.2 kPa. α3 is therefore 4.2×106 cm3s-1.\n\nThe more refined analysis of reaction (4) was carried out by Bates et Morgan. who found that the Monte-Carlo method, Flannery's equation and Langevin's theory can give good results only when the pressure is above 1 atm. This is the norm for lasers. The proposed \"tidal\" theory agrees with the experimental measurements of Mezyk et al. which is evident in Figure 10. The rate of recombination α4 for reaction (4) is of the same order of magnitude as α3.\n\nReaction (5) is only observed when Rg is neon or argon. For this reaction, the evolution of the rate of recombination α5 in the presence of pressurized neon is shown in figure 6. Imada et al. studied the influence of temperature for a fixed total pressure of 294 kPa. The maximum value of α5 is obtained at 120K and α5 = 7.5×106 cm3s-1.\n\nFor argon only two estimations are available at room temperature. 
At a pressure of 2 atm, α5 = 2.10-6 cm3s-1 and at a pressure of 1 atm, α5 is 1×106 cm3s-1.\n\nReaction (5) does not favor a transitory complex RgXeCl\n*\nas an intermediate stage. The following reaction, therefore, plays a minor role:\n\nRgXe+\n+ Cl\n+ Rg → RgXeCl\n*\n+ Rg → XeCl\n*\n+ 2 Rg\n\nOn the contrary, the principal synthetic pathway is given by:\n\nRgXe+\n+ Cl\n+ Rg → 2 Rg + Xe+\n+ Cl\nXeCl\n*\n+ 2Rg\n\nKannari et al.. estimated the contribution of each of the three recombination and harpooning reactions for three types of mixtures. The results are shown in Table 16. Reaction (3) provides the bulk of the exciplex molecules and generally the harpooning reactions play a secondary role. When helium is used, in contrast, the harpooning reactions contributes about 10–15% of XeCl\n*\nsynthesis. Other authors only estimate this contribution at 1% when the ionic pathway is involved. These theoretical conclusions are confirmed by experimental methods for the generality of the buffer gases and for other chlorine donors. The \"harpoon\" reactions, notwithstanding, are important despite their low contributions. These harpoon reactions are the reactions which are set in motion after the first excitation. Ionic recombinations, which then provide the bulk of the exciplex molecules, kick off 20 ns later.\n\nTable 16 : Percentage contributions of the synthetic reactions for XeCl\n*\nfor excitation with 55 ns pulses at ~3 MW/cm3.\nReaction Xe/HCl Ar/Xe/HCl Ne/Xe/HCl\nXe+\n+ Cl\n83.1% 81.5% 69.6%\nXe2+ + Cl\n11.9 8.2 9.5\nMXe+\n+ Cl\n6.3 11.1\nXe** + HCl 2.5 1.4 1.4\nXe* + HCl(v) 2.5 2.6 2.6\nOthers 5.8\n\nIn table 16, the column named \"others\" shows 5.8% for neon, meaning that other recombination pathways are possible.\n\nXe3+ ions are synthesized in the gaseous mixtures used in lasers. These ions react with Cl-10−\nin order to produce XeCl. Nevertheless, this reaction is only a little contribution to the kinetics of the laser.\n\nXe+* ions react with Cl\nin order to produce XeCl\n*\n. Alekhin et al. have also synthesized XeCl\n*\nusing NaCl vapors. XeCl\n*\nis the product of the lowest vibrational states (v≤20) using highly excited Xe* ions in a bimolecular reaction. The rate of synthesis is estimated to be between 2×1010 and 1×109 cm3s-1. A corresponding reaction is proposed using HCl. This conclusion is based on the presence of the states which are responsible for the third continuum of xenon – only Xe2+ ions, since XeCl\n*\nis not produced. On the contrary, Xe* ion participation in the reaction is compatible with the observations of other authors. Several authors have confirmed the presence of Xe* ions (6s 4P3/2) in the laser mixtures. Their concentration is a thousand times greater than that of Xe* ions in the harpoon reaction. On the other hand, the concentration of these ions and that of XeCl\n*\nand Cl\nas a factor of time is not incompatible with the synthesis of exciplex molecules using Xe+\n. The beginning of the decline in Xe+* and Cl\nis related to an increasing acceleration of the rate of synthesis of XeCl\n*\n. The distribution during harpoon reactions between states B and C occurs in random proportions in experimental conditions.\n\nThe first estimate of the ionic pathways was made by Tysone and Hoffman who suggested 76% for states B and 24% for states C. Successively, the buffer gases are neon, argon and krypton. Ohwa and Kushner published similar values: 77% for states B and 23% for states C. 
They used a quaternary mixture containing a buffer gas (using neon) from hydrogen, H2.\n\nA recent and more detailed study was conducted by Tsuji et al. in a mixture of helium as buffer gas. They found that:\n\n– States D are especially formed from Xe+\nion, (2P1/2) ;\n\n– States B and C are exclusively produced from Xe+\nion (2P3/2) in the following proportions: States B – 62.6% and States C – 38.4%. The rate of production of XeCl\n*\nis 98%. There is then few competing reactions.\n\nIn laboratory experiments, the number of the Xe+\n(2P1/2) and Xe+\n(2P3/2) states are the same. In addition, the rate constants of reaction (3) relative to these two states of xenon are similar. However, under these conditions, the number of states D formed is very low with respect to the number of states B and C. The rate of XeCl(D) formation with respect to XeCl(B, C) is estimated at about 0.033±0.006. The faster dissociation of [Xe+\n(2P1/2)Cl\n]* with respect to that of [Xe+\n(2P3/2)Cl\n]* is responsible for this situation.\n\n### Pathways of decomposition\n\n##### Emission spectra\n\nThe corresponding spectra demonstrated in Figure 11 was observed by virtually all authors who studied mixtures that were based on xenon and a chlorine donor.\n\nTwo theoretical studies have enabled identification of the emission spectra. Five transitions have heightened intensities that correspond to ΔΩ = 0 i.e., a parallel polarization to the internuclear axis. The starting states are always ionic and the product states are covalent. The characteristics of these emissions are as shown in Table 17.\n\nTable 17. XeCl\n*\nemissions.\nTransition Experiment Theory Theory Theory\nObserved wavelength(nm) Computed wavelength of emission (nm) Time for transition (s) Probability of emission (s-1)x 107\nB → X 308 295; 282 2.76; 2.85 9.3; 11.4\nD → X 235.5 224; 216 1.94; 2.09 10; 14\nC → A3/2 345 330; 306; 355 0.96; 0.98 0.81; 1.05\nB → A1/2 345 324; 307 0.87; 0.88 0.6; 0.84\nD → A1/2 Non-observed 242; 233 0.50; 0.49 0.56; 0.59\n\nThe most probable UV transitions are the B→X and D→X. They have the Σ→Σ type. The other transitions, B→A, C→A and D→A, have the nature Π→Π and are much less probable.\n\nOther theoretically weaker transitions have not yet resulted in an observation with the exception of Hay and Dunning, who made provisions for four transitions that are perpendicularly polarized at the internuclear axis; in other words, with ΔΩ = ±1. Only Ewing and Brau noted an emission centered at 425 nm attributed to a 2Σ→2Π transition. Finally, Krauss suggested the possibility of an emission of the D→B type whose transition period is itself very weak. Table 6 places this at 931 nm.\n\nThe principal emissions were observed and reported as in Table 17.\n\nThe B→X line is observed at 308 nm (Figure 11) while the theoretical prediction of its existence was clearly weak. This is the narrowest emission and the final state shows a somewhat shallow potential well. Just like the rare gas halides, this emission has the strongest transition period. That is why it is the preferred emission in XeCl lasers.\n\nExperimentally, the (C→A) and (B→A) lines overlap, producing a continuum centered at 345 nm, often of low amplitude as can be observed in Figure 11. The width of the emission depends on the transition tending to a strongly repulsive state. Koltz et al. placed this continuum at between 312 and 460 nm. 
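Returning to the B→X line, its position can be checked against the electronic and vibrational constants given earlier. A minimal Python sketch, using representative values (EB from the Table 6 interval, ωe and ωexe from Tables 7 and 8) and neglecting the vibrational energy of the final X level beyond its zero point:

```
# Consistency check: position of the B->X emission predicted from the
# electronic and vibrational constants quoted earlier (values in cm^-1,
# taken as representative; the bottom of the X well is the energy origin).
E_B   = 32400.0                 # bottom of the B potential well (Table 6 interval)
ZPE_B = 195.0 / 2 - 0.6 / 4     # zero-point energy of B (Tables 7 and 8)
ZPE_X = 26.2 / 2                # zero-point energy of X (Table 7, anharmonicity neglected)

transition = (E_B + ZPE_B) - ZPE_X        # B(v'=0) -> X(v''=0) transition energy
print(f"predicted wavelength ~ {1e7 / transition:.1f} nm")    # ~308 nm
```

The predicted position agrees with the observed 308 nm line to within the spread allowed by the shallow X well.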
The weak intensities observed for this 345 nm continuum are attributed to the small transition probabilities of these two emissions compared with that of B→X, and to the small number of C states formed relative to B states, as seen previously. Other authors have called attention to absorption by the Xe2Cl molecule at this wavelength. According to Kannari et al., reaction (3) is the principal pathway for synthesis of states B and C. Tsuji et al. estimated the proportions of states B and C formed: 38% for state C and 62% for state B. Given the transition probabilities (a theoretical value of IB→A/IB→X = 0.07 and an experimental value of 0.05), the contribution of the (B→A) emission is about 10%. Several authors have claimed that a laser based on the 345 nm emission could be developed, especially at pressures of about 10 atmospheres when states B and C are thermalized. Meanwhile, no concrete result had been reported as of 2014.

The (D→X) transition centered at 235.5 nm has not been systematically observed. The corresponding line appears weak, as in Figure 12. Its optical width is similar to that of the (B→X) emission because it leads to the same weakly bound state X. In contrast, the relative intensity of the (B→X) and (D→X) emissions varies considerably from one author to another: ID→X/IB→X = 1/3 according to Shuker, 1/25 to 1/50 according to Sur et al. and 0.14 according to Taylor et al. The latter authors noted that the ratio is pressure-independent. It remains unlikely that a laser could be developed on this transition, as Shuker had predicted.

The spectra did not show any D→A emission. Nevertheless, Hassal and Ballik saw a line at 246 nm with very weak intensity (Figure 12) without attributing it to the transition under consideration.

State D emissions are negligible for XeCl spectroscopy. While the absence of D→A, like that of D→B, can be attributed to the weak associated transition probability, the same cannot be said for D→X. From Table 17, the D→X emission should be of lesser intensity than B→X. In this case, the possible explanation lies in the weak production of state D, either by the ionic pathway or by the harpoon reaction using Xe(3P) states. The principal path of XeCl* synthesis is reaction (3), and the ratio of the number of D states formed to that of B states is 0.053. From Table 17, it is likely that state D de-excites exclusively towards state X. Table 17's transition probabilities then give ID→X/IB→X ≈ 6.2% (see the short sketch below), which is of the order of magnitude of Sur et al.'s value and not far from that of Taylor et al.

These emissions are more or less degraded towards short wavelengths, as the emission spectrum of the (B→X) line in Figure 13 shows. A corresponding oscillation phenomenon at the same wavelengths was observed in the absorption spectra. Besides, the (D→X) emission has the same line structure as (B→X).

The width and oscillatory nature of these lines are tied to the existence of transitions arising from high vibrational levels of the excited radiative states. The vibrational excitation is a result of the energy left over after exciplex molecule formation. This energy depends on both the state of the xenon atom/ion involved in the reaction and the halogen donor.
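The 6.2% figure quoted above can be recovered directly from Table 17. A minimal Python sketch, treating the intensities as proportional to the number of molecules formed in each state times the emission probability, assuming that state D radiates essentially only towards X, and taking the production ratio of D to B states as the 0.053 quoted above:

```
# Estimate of the intensity ratio I(D->X)/I(B->X) from Table 17 emission
# probabilities (in 10^7 s^-1) and the relative production of D and B states.
A_D_X = (10 + 14) / 2         # mean theoretical emission probability of D->X
A_B_X = (9.3 + 11.4) / 2      # mean theoretical emission probability of B->X
N_D_OVER_N_B = 0.053          # relative production of state D vs state B (text above)

ratio = N_D_OVER_N_B * (A_D_X / A_B_X)
print(f"I(D->X)/I(B->X) ~ {100 * ratio:.1f} %")   # ~6 %, consistent with the ~6.2 % quoted
```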
For the 345 nm emission, the high vibrational level transitions are more dispersed towards the red for C→A3/2 than for B→A1/2, because the repulsive barrier of A3/2 is steeper and closer to the upper state of the emission than is that of A1/2.

The oscillatory nature of these spectra tends to disappear with an increase of pressure, showing only the peaks arising from levels v≤2 when the pressure is above 1 atm. This shows that vibrational relaxation effectively depopulates the highest vibrational levels. On the other hand, the disappearance of the elevated levels is faster for state B than for state C, because state C has a much longer lifetime. The vibrational relaxation of states B and C therefore plays an important role in the chemical kinetics of XeCl lasers.

Beyond 5 atm these lines increase in width, possibly due to collisional broadening of the lines or to the underlying rotational structure.

The isotopic effects are negligible for xenon but marked for chlorine. The vibrational lines associated with the heavier isotope 37Cl are slightly displaced towards longer wavelengths. For example, the shift is 1.51 Å for the 4–0 line of B→X.

##### Radiative lifetimes

Lifetimes for states B, C and D at vibrational level v=0 are shown in Table 18. States B and C are the ones with the most determinations.

Table 18. Lifetime (in ns) of XeCl* states.
State B : τB State C : τC State D : τD Method Reference
11.1 ± 0.2 130.5 ± 1.5 Experimental (gas) 
27 ± 3 53 ± 6 Experimental (gas) 
10.1 123 9.5 Theoretical 
11.1 ± 0.2 131 ± 10 Experimental (gas) 
135 Experimental (gas) 
8.2 95 6.9 Theoretical 
11 Experimental (solid) 
133.5 ± 4.5 Experimental (solid) 
120 ± 9 Experimental (solid) 
17 Experimental (gas) 

For state B, two values are statistically distant from the others. They correspond to the oldest measurements. Without taking them into account, the confidence interval obtained, in ns, is: 8 < τB < 12.3.

For state C, the dispersion is larger. Grieneisen et al.'s determination is statistically distant from the others, as are the two theoretical values and a measurement obtained in the solid state. When these are disregarded, the confidence interval, in ns, becomes: 129.1 < τC < 135.9.

Using average values, the ratio τB/τC is 0.0764. It compares adequately with a direct measurement of 0.087 ± 0.009. This ratio is important because it plays a significant role in the vibrational relaxation of states B and C.

A systematic study of the lifetimes of several vibrational levels (v≤136) of states B and C was conducted, as reported in Table 19.

Table 19.
Lifetime of vibrational levels of states B and C of XeCl.\nVibrational level Energy (cm-1); State C Lifetime (ns) ; State C Energy (cm-1); State B Lifetime (ns) ; State B\n0 139.42 120.0 369.42 11.0\n4 876.08 127.6 1136.05 11.08\n8 1590.86 136.4 1882.33 11.88\n12 2284.25 137.2 2608.63 12.29\n16 2956.77 142.8 3315.38 12.64\n20 3608.94 146.9 4002.98 12.53\n24 4241.29 152.3 4671.84 12.35\n28 4854.33 174.1 5322.39 13.43\n32 5448.6 182.1 5955.05 14.10\n36 6024.61 195.3 6570.25 14.5\n40 6582.89 195.5 7168.42 14.84\n44 7123.96 210.3 7750.00 16.12\n48 7648.33 224.6 8315.41 16.38\n52 8156.52 230.6 8865.10 17.25\n56 8649.03 245.0 9399.49 18.69\n60 9126.35 256.4 9919.03 19.33\n64 9588.98 265.0 10424.17 20.15\n68 10037.4 275.2 10915.27 21.35\n72 10472.1 279.1 11392.77 22.42\n76 10883.4 270.2 11897.07 23.88\n80 11302.0 296.2 12308.67 24.78\n84 11698.1 298.2 12747.97 26.04\n88 12082.3 308.3 13175.27 27.52\n92 12454.9 318.1 13390.97 28.98\n96 12815.3 325.6 13994.47 30.21\n100 13167 337.7 14389.17 31.77\n104 13507.3 343.3 14772.37 33.21\n108 13837.6 349.1 15145.174 35.14\n112 14158.1 352.8 15508.67 37.16\n116 14469.3 357.9 15862.27 39.03\n120 14771.5 375.1 16206.67 40.91\n124 15065 398.5 16541.97\n128 15627.1 433.7 17186.47\n136 15896.2 438.5 17496.07\n\nLifetimes increase by a factor of 4 when v goes from 0 to 100. A graphical extrapolation of the data relative to state B is shown in Figure 14.\n\nFor state D, only three determinations are relatively close to one another. At the gaseous state, Shuker noted that D→X emission has a time-based dependence similar to B→X emission, which is in line with the previous magnitudes as the lifetime of the B state is of the order of 10 ns. However, other measures are necessary to precisely value τD.\n\n#### The collisional pathway\n\nThe influences of xenon and HCl will be discussed first, followed by the role of the diverse buffer gases and of the chlorine donors.\n\n##### Destruction of the XeCl* molecule\n###### In Xe/HCl mixtures\n\nThe only process of destruction of states B and C of XeCl, other than the radiative process, which has been proved is:\n\nXeCl\n*\n+ HCl → Other products and not XeCl (6) with rate constant of kH\n\nXeCl\n*\n+ Xe → Other products and not XeCl (7) with rate constant of kX\n\nXeCl\n*\n+ 2 Xe → Other products and not XeCl and Xe\n2\nCl\nor → Xe\n2\nCl\n* + Xe (8) with rate constant of kDX\n\nXeCl\n*\n+ Xe + HCl → Other products and not XeCl (9) with rate constant of kM\n\nXeCl\n*\n+ e- → Xe + Cl + e- (10) with rate constant of ke\n\nAs of 2014 no result had been found for state D.\n\nThe values obtained for states B and C are collected in Table 20. The authors assume that the reaction rates are identical for the two states.\n\nTable 20: Rate constants for disappearance of XeCl(B, C) in cm3s-1 for ke, kH and kX and in cm6s-1 for kDX and kM.\nRef kH kX kDX kM ke\n 1.4 × 10-9 (± 40%) 3.2 × 10-11 (± 35%)\n (6.3 ± 0.5) × 10-10 (2.3 ± 0.3) × 10-11\n 4 × 10-8\n 0.4 × 10-11 1.3 × 10-30\n (7.3 ± 0.1) × 10-10 < 4 × 10-12 (1.53 ± 0.1) × 10-30\n (5.0+3.0-2.0) × 10-12 (13.0 ± 4.0) × 10-31\n 7.3 × 10-31\n 1.16 × 10-7\n 1.7 × 10-9 4 × 10-31 1.2 × 10-7\n (7.3 ± 0.1) × 10-10\n 1.5 × 10-30\n 7.7 × 10-10 2.1 × 10-12 1 × 10-30\n (3.8 ± 2.3) × 10-10 (4 ± 19) × 10-13 (1.0 ± 0.4) × 10-30 (4.6 ± 2.1) × 10-29\n 1.5 × 10-31\n 5 × 10-31 2 × 10-8\n 3 × 10-7\n 3 × 10-8\n 2 × 10-7\n 1 × 10-7\n\nReaction (9) has been observed only once, recently. Comparison data are therefore not available. 
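To see the relative weight of the destruction processes (6) to (8) against radiative decay, the disappearance frequencies can be summed for a binary Xe/HCl fill. A minimal Python sketch, using representative values from Tables 18 and 20 and an assumed fill of 4 torr HCl and 30 torr Xe at 300 K (the gas composition is illustrative only, not taken from the article):

```
# Collisional versus radiative destruction of XeCl(B,C) in a binary Xe/HCl
# mixture, using representative rate constants from Table 20. Hypothetical fill.
K_B = 1.380649e-23            # J/K
TORR = 133.322                # Pa

def density_cm3(p_torr, temp_k=300.0):
    """Number density in cm^-3 for a partial pressure in torr."""
    return p_torr * TORR / (K_B * temp_k) * 1e-6

n_hcl = density_cm3(4.0)      # assumed 4 torr HCl
n_xe  = density_cm3(30.0)     # assumed 30 torr Xe

TAU_RAD = 11e-9               # s, radiative lifetime of state B (Table 18)
K_H  = 7.3e-10                # cm^3 s^-1, quenching by HCl, reaction (6)   (Table 20)
K_X  = 3.2e-11                # cm^3 s^-1, quenching by Xe, reaction (7)    (Table 20)
K_DX = 1.0e-30                # cm^6 s^-1, three-body Xe term, reaction (8) (Table 20)

rates = {
    "radiative": 1.0 / TAU_RAD,
    "HCl (6)":   K_H * n_hcl,
    "Xe (7)":    K_X * n_xe,
    "2 Xe (8)":  K_DX * n_xe ** 2,
}
total = sum(rates.values())
for name, r in rates.items():
    print(f"{name:10s}: {r:.2e} s^-1  ({100 * r / total:.0f} %)")
print(f"effective lifetime ~ {1e9 / total:.1f} ns")
```

With these illustrative numbers, quenching by HCl alone is comparable to the radiative decay, which is why the constants of Table 20 matter for laser modelling.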
In contrast, the other reactions have been repeatedly observed and quantified.\n\nFor kH, three measures are statistically distant from the others. The last (older) two are superior to the others. The first, a recent measure, is the only experiment which proved process (9) which had been neglected. Measurements made by Rives et al., kH must be multiplied by 2 which puts them at the same level as the other values. Taking reaction (9) into account, the set of values of kH must be revised downward except for Rives et al.. A confidence interval is difficult to obtain in these conditions.\n\nFor kX, a statistical analysis is very difficult because of the high dispersion of significant absolute values of doubled uncertainties. Lorents provided only an upper limit. Rives et al. results leave open to question whether this process is computable, considering its weak rate constant. Statistically, kX, should not surpass 6.12×1012 cm3s-1. One other (old) measure, had already provided an erroneous value for kH. Another measure was strongly revised downwards six years later.\n\nReaction (8) which does not lead to the production of Xe\n2\nCl\n* is of negligible importance. The measurements given for kDX are well dispersed and the confidence interval contains only three values. Two of the excluded measurements are of doubtful estimations, while the others are correspondingly direct measures that provided good results. Hanging over kDX is a great uncertainty, but the average value is representative of the overall results, that is, 9.1×1031 cm6s-1.\n\nThe measured values of ke display a strong dispersion. Only four values are statistically close The average value of 9.6×108 cm3s-1 is relatively close to the only direct measure.\n\nLou also suggested other products for reaction (10):\n\nXeCl\n*\n+ e-Xe+\n+ Cl- (ke1 = 1.8×107 cm3s-1) or → Xe* + Cl + e- (ke2 = 1.2×107 cm3s-1)\n\nSome differences were noticed for reactions of type (6) accounting for the vibrational levels of the collision partners:\n\nXeCl\n*\n(v=0) + HCl(v=1) → Xe + HCl + Cl + Cl (6a) with rate constant of kHa\n\nXeCl\n*\n(v=0) + HCl(v=2) → Xe + HCl + Cl + Cl (6b) with rate constant of kHb\n\nXeCl(B,C;v≠0) + HCl(v=0) → Other products and not XeCl (6c) with rate constant of kHc\n\nThe values of the rate constants are summarized in Table 21. They are well dispersed and do not correspond to any direct measurement. These values are specifically based on analogous estimations.\n\nTable 21. Values of kHa, kHb, kHc in cm3s-1.\nRef kHa kHb kHc\n 7.7 × 10-10\n 6.3 × 10-10\n 1.4 × 10-9\n 7.7 × 10-9 7.7 × 10-9\n 7.7 × 10-10\n 6.3 × 10-10\n 6.3 × 10-10\n\nReactions that correspond to reactions (6) and (7) are evident when XeCl is in the ground state of X(v=0). These phenomena affect laser performance and are therefore important. The rate constants are assembled in Table 22. These rates do not vary with the vibrational level of the colliding molecules. Only one direct measurement exists; the others are estimates.\n\nTable 22. Rate constants of disappearance in cm3s-1 through binary collisions. The results are relative to XeCl(X, v = 0) along with another partner, either of Xe, HCl and an electron.\nRef Xe HCl e-\n (5.6 ± 0.8) × 10-12 (2.2 ± 0.5) × 10-11\n 2.2 × 10-11 5.6 × 10-10\n 8 × 10-12 2 × 10-8\n 7 × 10-8\n###### Role of the buffer gas\n\nThe addition of a third gas in significant quantities also affects the kinetics of disappearance of XeCl(B,C). 
It induces reactions which are similar to those produced by xenon:\n\nDouble collision (11) : XeCl(B,C) + Rg → Xe + Cl + Rg rate constant of k11\n\nTriple collision (12) : XeCl(B,C) + 2 Rg → Xe + Cl + 2 Rg rate constant of k12\n\nMixed triple collision (13) : XeCl(B,C) + Xe + Rg → 2 Xe + Cl + Rg rate constant of k13\n\nThe rate constants of the three processes are grouped in tables 23–25.\n\nTable 23. Values of k11 in cm3s-1 for the different rare gases.\nRef He Ne Ar Kr\n (1.1 ± 0.2) × 10-12 (0.76 ±0.15) × 10-12 (1.8 ± 0.5) × 10-12 (4.0 ± 0.6) × 10-12\n 5 × 10-13\n 1 × 10-12\n (1.0 ± 0.3) × 10-12\n 3.3 × 10-13\n 10-11\n < 2 × 10-13\n\nReactions (11) and (13) are always important while reaction (12) has a negligible contribution. The results are greatly dispersed. Differences may reach orders of magnitude. Four references have resulted in direct measurements of reaction rates. Others are estimates. These are based on correspondences and only indicative. No information is available for krypton.\n\nTable 24. Values of k12 in cm6s-1 for different rare gases.\nRef He Ne Ar Kr\n < 3 × 10-33\n 5 × 10-34\n 5 × 10-32\n 1 × 10-33\n < 1 × 10-33\n 1 × 10-34\n\nCompetitive reactions are evident for the totality of these reactions.\n\nTable 25. Values of k13 in cm6s-1 for different rare gases.\nRef He Ne Ar Kr\n (3.8 ± 0.2) × 10-30\n (2.4 ± 0.5) × 10-31 (7.4 ± 1.5) × 10-31 (8.9 ± 1.9) × 10-31 (9.9 ± 1.9) × 10-31\n (1.01 ± 0.05) × 10-30\n 1.5 × 10-32 1.5 × 10-31\n 5 × 10-32\n 1 × 10-31\n 1.5 × 10-31\n 2 × 10-31\n\nThe reactions of (11) are competitive for displacement reactions. In this case, the products are RgCl(B). They have only been observed in the case where Rg = Kr:\n\nXeCl\n*\n+ Kr → KrCl + Xe\n\nThe rate constant is 0.7×109 cm3s-1. Therefore, this reaction is more effective than quenching. It plays an important role in laser kinetics. It is also as fast as the process of creation of XeCl\n*\nby harpoon reaction. Table 20 concerns one of the principal pathways of destruction of the exciplex molecule.\n\nFor Brashears et al., it is possible to obtain the triatomic complex, RgXeCl\n*\n, as product. This is a competitive reaction when collisions that produce dissociated atoms occur. Emissions of KrXeCl at 370 nm have been observed, along with ArXeCl at 326 nm and NeXeCl at 434 nm. The rate constants have not been measured, except for Rg=Kr, which is 9×1033 cm6s-1.\n\nHowever, the creation of ArXeCl seems to be preferential by a competitive reaction (13):\n\nXe* + Ar + Xe → ArXeCl\n*\n\nThe rate constant is 4×1030 cm6s-1. It is then of the same order of magnitude as (13).\n\nHowever, the synthesis of the Xe\n2\nCl\n* trimer is the most frequent competitive reaction of (13).\n\nFor helium, Baginskii et al. provided a solution using Xe*\n2\n+ Cl + He of which the rate constant is 1.5×1031 cm6s-1.\n\nA corresponding reaction for (11) was demonstrated for XeCl at the ground state. The rate constants are summarized in Table 26. The measurements are greatly dispersed (only one is direct) and data on krypton are absent. The others are based, more or less, on estimates. Amongst these, one is statistically distant from the others. On using neon, the rate constant for XeCl(X, v=1) has been estimated as 1×1011 cm3s-1.\n\nTable 26. 
Rate constants of disappearance in cm3s-1 through binary collisions relative to XeCl(X, v=0) for various buffer gases.\nRef He Ne Ar Kr\n 5 × 10-12\n 9.8 × 10-11\n 3 × 10-12\n (1.0 ± 0.15) × 10-13 (0.6 ± 0.06) × 10-13\n 1 × 10-11\n 1 × 10-12\n 8 × 10-12\n 5 × 10-11\n###### Other chlorine donors and other reactions\n\nThe main reactions are those corresponding to reaction (6):\n\nXeCl\n*\n+ RCl → Other products and not XeCl (14) rate constant of kR\n\nThe values of the rate constants through RCl = Cl\n2\nor CCl\n4\nare summarized in table 27. The three chlorine donors studied (HCl, Cl\n2\nand CCl\n4\n) have rates of quenching of the same order of magnitude.\n\nTable 27. Rate constants in cm3s-1 relative to reactions (14) for XeCl (B, C ; v’ = 0,1).\nRef Cl\n2\nCCl\n4\n (4.3 ± 0.2) × 10-10\n (5.6 ± 0.25) × 10-10\n 5 × 10-10\n 5.9 × 10-10\n 5.8 × 10-10\n (4.6 ± 0.2) × 10-10\n\nAll the measurements in Table 27 were experimental. For chlorine, only one (recent) value is statistically distant from the others. The absolute difference is not very great versus the other determinations. An average value for kR for chlorine is 5×1010 cm3s-1, which is very close to a measure relative to CCl\n4\n.\n\nFor chlorine, Grieneisen et al. pointed to two different values for the rate constant for states B and C. They were respectively estimated as (8.8 ± 1.5)×1010 cm3s-1 and (3.3 ± 0.3)×1010 cm3s-1. This is a direct measure of the process of destruction through binary collision with Cl\n2\nthat includes all the phenomena and not just quenching. As states B and C are energetically close, collisional coupling is acting on the two states. A similar result for xenon seems to reinforce this hypothesis.\n\nSome atoms of free chlorine exist in the conditions which matter for lasers. The following quenching reactions is provided for:\n\nXeCl\n*\n+ Cl → Xe + 2Cl\n\nTwo authors have estimated the rate constant: 1.4×109 cm3s−1 and 8×1010 cm3s−1.\n\nThe presence of impurities, Im, such as the chlorocarbons (the consequence of corrosion), NO, CO\n2\n, O\n2\n, CO, N\n2\nO, H\n2\nO could have an effect on the chemical kinetics of disappearance of XeCl\n*\nsince binary collisions ImXeCl\n*\npossess rate constants of the order of 3×1010 cm3s−1, thus making them comparable to the XeCl\n*\n+ RCl reaction. However, given the usual impurity levels, the reaction frequencies are negligible. A practical solution has been proposed to eliminate them that involves introducing 1 torr of H\n2\n.\n\n##### Process of collisional coupling between states B and C\n###### In binary mixtures of Xe/HCl\n\nThe weak energetic gap (about 100 cm-1) between these two states (Table 2), suggests that a coupling was produced. However, this result was not exactly quantified nor was it later confirmed. No collisional coupling phenomenon induced by chlorine has been detected recently.\n\nThe role of electrons is also not well known in the coupling process. According to Finn et al., its role is negligible, although Johnson et al. gave an elevated rate constant. This rate is the same, according to them, for B to C and C to B transfers. The energy difference between B and C is not zero (see Table 2). 
The reaction rate was estimated at 2 × 10-8 cm3s-1.

These couplings are demonstrated through binary collisions using an atom of xenon:

XeCl(B ; v’ = 0) + Xe → XeCl(C ; v’ = 0,1) + Xe (15) rate constant of kBC

XeCl(C ; v’ = 0,1) + Xe → XeCl(B ; v’ = 0) + Xe (16) rate constant of kCB

The measurements of the rate constants are not very consistent, as can be seen in Table 28.

Table 28. Rate constants in cm3s-1 of the collisional coupling processes of states B and C.
Ref kBC kCB
 (11.0 ± 0.3) × 10-11 (7.4 ± 0.3) × 10-11
 13.5 × 10-11 15 × 10-11
 (7.21 ± 1.97) × 10-12 (4.08 ± 1.97) × 10-12
 5 × 10-11

In the experiments by Inoue et al., the vibrational levels v’ = 0,1 were directly excited. This is not the case in the other experiments. The last value is only a theoretical estimation based on similarities with other reactions. The ratio of kBC to kCB provides an estimate of the energetic gap ΔE = EB – EC. Assuming that states B and C are thermalized:

kBC/kCB = exp(ΔE/kT), since the statistical weights of the two states are the same.

ΔE was inferred in this way as 85 cm-1 by Inoue et al. and as 119 cm-1 by Rives et al., while 22 cm-1 was the measurement given by Le Calvé et al. (see Table 2). Only the first two values are compatible with 100 cm-1, the accepted order of magnitude. A clear difference exists between these two; an order of magnitude separates the values of kBC and kCB in the two experiments. Grieneisen et al. provided only the global rate of destruction of states B and C, in other words quenching plus coupling. For the destruction of state C they found (15.5 ± 0.9) × 10-12 cm3s-1 and for state B (10.3 ± 0.9) × 10-12 cm3s-1, which are intermediate values between those of Inoue et al. and Rives et al. Recall that quenching by xenon has only a weak influence (Table 20). Inoue et al. notably did not take account of reaction (9). If the same approach is applied to the results of Rives et al., the values of kBC and kCB come close to those of Inoue et al. As was explained for kX and kH, taking account of process (9) modifies the values of the reaction rate. On this point, Rives et al. is more precise than Inoue et al.

The advantage of Inoue et al.'s result was its vibrational resolution, as kBC and kCB vary with the vibrational level v. For levels v = 70 to 130, rate constants between 15 and 20 × 10-11 cm3s-1 were observed; kBC and kCB thus seem to grow with v.

Since XeCl(B,C) is most of the time formed with strong vibrational excitation, an accurate estimate of the variation of kBC and kCB with v, as well as of the kinetics of vibrational relaxation and of its importance relative to the coupling process, is needed.

###### The role of the buffer gas

Collisional coupling is produced by binary collisions with an atom of a rare gas, Rg:

XeCl(B) + Rg → XeCl(C) + Rg (17) rate constant of kBCRg

XeCl(C) + Rg → XeCl(B) + Rg (18) rate constant of kCBRg

Dreiling and Setser provide order-of-magnitude values for kBCRg and kCBRg for a given vibrational level. The results are shown in Table 29. They show that the rate constants increase regularly when the vibrational level v of XeCl* is higher and the rare gas Rg is heavier.

Table 29.
Coupling rate constants in cm3s-1 by binary collisions using an atom of rare gas.\nv He Ne Ar Kr\n0–30 (0.5 to 1.8) × 10-11 (0.7 to 2.6) × 10-11 (3.0 to 11) × 10-11 (3.0 to 11) × 10-11\n30–70 (1.8 to 2.5) × 10-11 (2.6 to 3.5) × 10-11 (11 to 15) × 10-11 (11.0 to 16) × 10-11\n70–130 2.5 × 10-11 3.5 × 10-11 15 × 10-11 16 × 10-11\n\nUsing helium, experiments have been made at low and high pressures. At high pressures, the transfer constants are of the order of (1.5 ± 0.7)×1012 cm3s-1 and at low pressures 3.0×1011 cm3s-1. A strong pressure induces a vibrational relaxation such that the values of v involved in the transfer are weak and vice versa for weak pressures. The only available direct determination for kBCHe gives a value less than 3×1013 cm3s-1.\n\nFor neon, the values of the rate of transfer at low and high pressure are respectively, 3.0×1011 cm3s-1 and (0.8 ± 0.4)×1012 cm3s-1. They are inferior to those of Table 29. The direct measurement of the rate constant kBCNe gives a value less than 3.10-13 cm3s-1. Finally, according to Ohwa, the order of magnitude of the two rate of coupling constants would be 4.8×1012 cm3s-1 for v=4.\n\nFor argon, the results increase. At low pressures, the order of magnitude would only be 6.0×1011 cm3s-1. Other authors published rates of transfer of 1.2 ± 0.4×104 cm3s-1 for a pressure interval starting from 10 to 1000 torr. Direct measurements of kBCAr and kCBAr are available without specifying the vibrational levels involved:\n\nkBCAr = 36×104 cm3s-1 and kCBAr = 21×1011 cm3s-1\n\nMeanwhile, Yu et al. noted a variation with temperature of kBCAr:\n\nkBCAr = (4 ± 2)×1012 cm3s-1 at 300K and kBCAr = (2 ± 1)×1012 cm3s-1 at 230K.\n\nFor krypton, we can only make an estimation:\n\nkBCKr = (4)×1012 cm3s-1.\n\nIt is clear that the collisional coupling process induced by the rare gases are not well established. Different authors give different order of magnitudes. The uncertainty on the rate constants is therefore as important as for that of xenon. The vibrational excitation seems to play a role that is still not well defined. Direct measurements for kBCRg and kCBRg are not available. From the first estimations, the phenomena seem important in the kinetics of gaseous mixtures.\n\n##### Vibrational relaxation\n\nXeCl\n*\nis more often synthesized with strong vibrational excitation and can reach vibration quantum numbers as high as v=100. This induces some vibrational relaxation that is formed by binary collision with an atom of a rare gas.\n\nOnly a single measurement for xenon and level v=2 has been published.\n\nXeCl(B; v = 2) + Xe → XeCl(B; v’ = 0.1) + Xe rate constant of kv\n\nwhere kv = (2 ± 1)×1010 cm3s-1.\n\nMost of the known results are related to buffer gases. Yet, only Dreiling and Sester completed measurements. The vibrational relaxation can be written as:\n\nXeCl\n*\n(v) + Rg → XeCl\n*\n(v’) + Rg (19)\n\nThe orders of magnitude of kvRg are summarized in Table 30. kvRg increases with the vibrational level of XeCl\n*\nand heavier rare gases, Rg. 
Values of kvRg are assumed to be the same for states B and C.\n\nTable 30 : Vibrational relaxation rate constants in cm3s-1 induced by binary collisions with an atom of a buffer gas, Rg.\nv He Ne Ar Kr\n0–30 (0.15 to 1.1) × 10-11 (0.5 to 2.9) × 10-11 (1.0 to 6.0) × 10-11 (0.6 to 2.7) × 10-11\n30–70 (1.1 to 2.5) × 10-11 (2.9 to 6.2) × 10-11 (6.0 to 12) × 10-11 (2.7 to 5.5) × 10-11\n70–130 (2.5 to 4.4) × 10-11 (6.2 to 9.5) × 10-11 (20 to 34) × 10-11 (5.5 to 7.3) × 10-11\n\nFor helium and krypton, no comparison is available.\n\nFor neon, only the reaction with first two vibrational levels of B have been documented:\n\nXeCl(B; v = 1) + Ne → XeCl(B ; v = 0) + Ne with rate constant of kvNe=(0.3 to 0.5)×1011 cm3s-1.\n\nFor argon, the values of kvAr has been determined for v=33, 60 and 75. Their values, respectively, are (17 ± 5)×1011; (31 ± 9)×1011 and (43 ± 10)×1011 cm-11. Other authors placed the figure for kvAr between (10 and 15)×1011 agreeing on the order of magnitude.\n\n#### Disappearance pathways of the exciplex molecule\n\nThe chemical kinetics due to collisional coupling of states B and C and vibrational relaxation are not well known. The few available results often disagree, although a general idea of the situation is possible. For high vibrational levels, coupling overrides the vibrational relaxation while the contrary is true for the lowest levels, even if a rare gas is involved.\n\nThe various destructive processes of XeCl(B), differ in importance. A mixture optimized for lasers is used. Neon is favored over argon because the latter strongly absorbs via the Ar+\n2\nion at 308 nm. Therefore, a ternary mixture (Ne/Xe/HCl) is used. The total pressure is fixed at 3 atm, the respective partial pressures is 2268.6 torr, 10 torr and 1.4 torr. The rate constants are the average values of the most reliable estimates.\n\nThe results are summarized in Table 31. For reaction (19), only the lowest vibrational levels are accounted. The lower frequency of disappearance limit is 0.40 ns-1. This process induces the highest destruction, indicating that XeCl(B) synthesized with high vibrational excitation is quickly relaxed by binary collision with neon and (probably) also by xenon. This suggests that other processes are really noticeable only after XeCl(B) is on the v=0 level, which is why reaction (17) uses the value of k BC Do relative to a low v. Once the relaxation is complete other processes take over. Depopulation by spontaneous emission is very important as well as reactions (11) and (17). These two processes lack refined measurements and determinations overall. The role of the xenon coupling is not better known but has less influence than the destruction by binary collision with HCl. Other better known processes are negligible. In particular all termolecular reactions are negligible.\n\nTable 31 : Frequency of destruction of states B in ns-1.\nProcesses Radiative pathway 6 7 8 9 11 12 13 15 17 19\nFrequency 0.099 0.036 0.001 0.0001 0.0008 0.24 0.0006 0.0074 0.027 0.064 0.40\nPercentage 11% 4% < 1% < 1% < 1% 27% < 1% 1% 3% 7% 46%\nPercentage after vibrational relaxation 21% 8% < 1% < 1% < 1% 50% < 1% 2% 6% 13%\n\n## The Xe2Cl exciplex molecule\n\nGenerally, Rg2X molecules are less stable than RgX. Xe\n2\nCl\nis of double interest. 
It can cause perturbations in laser XeCl performance because it absorbs well at 308 nm and enables the development of another type of laser based on an Xe\n2\nCl\nemission.\n\n### The Xe2Cl molecule\n\nInitial studies on the Xe\n2\nCl\nmolecule, found:\n\n• Its most stable configuration in the excited state has a triangular geometry C2v.\n• The Xe\n2\nCl\n*\nexcited states are complexes formed from the association of a molecular ion of Xe+\n2\nand an atomic ion of Cl\n.\n• The observed emission of the molecule is broad; the corresponding transitions result in a very repulsive ground state.\n\nThe potential curves calculated by Huestis et al. from the DIM (Diatomics In Molecules) method are presented in Figure 15.\n\nThe three lowest states are covalent and repulsive. They are correlated to XeCl(X or A) and to an atom of xenon at the ground state. The experimental value of the energy at state 12Γ is 0.273 eV. It is compatible with these potential curves. The following three states are ionic. The bound state 42Γ is correlated to XeCl(B) + Xe; the following, 52Γ, a repulsive state, is correlated to XeCl(C) + Xe.\n\nLast and George made a determination of the potential curves using another method, the DIIS (Diatomics In Ionic Systems) method without considering spin-orbital coupling. They found, like Huestis et al. that the 42Γ state is the lowest ionic state. At the bottom of the well, this state has the configuration of an isosceles triangle, such that the distance between the equilibrium positions of Xe and Cl is 3.23 Å. According to Adams and Chabalowski the Xe–Cl distance is 3.39 Å.\n\nInitially, the potential curves of the different states were plotted by maintaining a constant and equal Xe-Xe distance at 3.25 Å (figure 16). Last and George discovered nine states (three covalent and six ionic). The potential curves of the antisymmetric states 42Γπ and 62Γπ are almost coincident with the potential curves of the symmetric states 52Γ and 62Γ. The 32Γ and 72Γ states highlighted by Huestin et al. are absent since the spin-orbital coupling were not taken into account. Inversely, three states, (22Γπ, 42Γπ and 62Γπ) with the π symmetry, were not included in their diagrams.\n\nA second study kept the separation of Xe-Cl at 3.23 Å (figure 17).\n\n* In 42Γπ state, the molecule with isosceles triangle configuration such as the Xe-Cl and Xe-Xe distances are respectively 3.13 and 4.23 Å. The state is 0.8 eV above the 42Γ state. * At the ground state, 12Γ forms a Van der Walls complex. It has a bond-dissociation energy of 0.075eV and a dissymmetric triangular configuration. The Xe–Cl distances are 3.23 and 4.06 Å and the Xe–Cl–Xe angle is 74.4°. * The second excited state 22Γ is also a Van der Walls complex. It has a symmetrical geometry and an Xe–Cl distance of 3.99 Å with an Xe–Cl–Xe angle of 68.4°. Its dissociation energy is 0.055 eV.\n\nAnother way of describing Xe–Cl–Xe finds the stable state to be linear and symmetric. At the ground state, the Xe-Cl distance should be 3.24 Å and the dissociation energy 0.076 eV. An excited state could exist with a geometric distance of Xe-Cl of 3.06 Å. This state, which is not shown in Figures 16 and 17, would possess an energy higher than 0.72 eV to that of the 42Γ state. The bonding would be ionic.\n\nOnly an experiment conducted at the solid state can be compared to these theoretical results. The special state studied was the 42Γ state. The isosceles triangle structure of this state was confirmed. Three quantities can be compared with theoretical predictions. 
The Xe-Xe distance is measured at 3.17 Å and that of Xe-Cl at 3 Å. The agreement in values is best for the energy at the bottom of the well that was evaluated at 3.15 eV. The fundamental vibrational frequencies for Xe–Xe, is ωx = 123 cm-1 and for Xe–Cl, ωc = 180 cm-1.\n\n### Synthetic pathways\n\nThree principal pathways of Xe\n2\nCl\n*\nsynthesis are energetically possible through collisions and two others through photodissociation:\n\nXe*\n2\n(A1Σ) + Cl\n2\nXe\n2\nCl\n*\n+ Cl (20)\n\nXe\n*\n+ Xe + Rg → Xe\n2\nCl\n*\n+ Rg (21)\n\nXe\n2\n+ + Cl- + Rg → Xe\n2\nCl\n*\n+ Rg (22)\n\nXeCl\n*\n(X) + Xe + hν → Xe\n2\nCl\n*\n(23)\n\nXe + Cl + Xe + hν → Xe\n2\nCl\n*\n(24)\n\nwhere Rg is a rare gas, probably xenon or a buffer gas.\n\nThe authors disagree on the relative importance of these synthetic processes. The processes depend on experimental conditions.\n\n#### Through harpoon reactions.\n\nReaction (20) is a very energetic harpoon reaction. It involves Xe*\n2\nexcited state. According to Bruce et al., this is the dominant synthetic pathway. Other authors though do not share this view since they believe that this reaction is weak, or indeed negligible. Its rate constant has not yet been measured.\n\n#### The photoassociative pathway\n\nReactions (23) and (24) were only recently discovered.\n\n#### The ionic pathway\n\nAccording to a theoretical computation, the rate of recombination α’ of the Xe+\n2\nand Cl\nions when Rg = Xe (reaction (22)) was, at the first instance, estimated as 1×10–7 cm3s-1. The same authors later revised this value downward as: α’ = 5×10–8 cm3s-1. This result was confirmed experimentally. According to computations, this reaction could become important at high pressures at which Xe\n2\nCl\n*\nbecomes the principal reaction product, to the detriment of XeCl\n*\n(reaction (4)).\n\n#### The ternary reactions\n\nThe synthesis of Xe\n2\nCl\n*\nis principally through pathway (21). According to a recent study, the reaction can be interpreted as the result of two successive reactions, the second reaction corresponding to a vibrational relaxation through collision using Rg:\n\nXeCl(B,C) + Xe ↔ Xe\n2\nCl\n*(v)\n\nXe\n2\nCl\n*(v)\n+ Rg → Xe\n2\nCl\n*\n+ Rg\n\nThe starting vibrational levels of Xe\n2\nCl\n*(v)\nare above the limit of dissociation of the state in XeCl\n*\n+ Xe.\n\nIn contrast, Yu et al. believe that the formation of Xe\n2\nCl\n*\nis through a triatomic complex, RgXeCl*, mainly :\n\nXeCl\n*\n+ Rg → RgXeCl\n*\nwhere Rg≠Xe\n\nRgXeCl\n*\n+ Xe → Xe\n2\nCl\n*\nRg\n\nThese reactions have been observed in only argon and krypton.\n\nThe second reaction is one of displacement. Another reaction is competitive to it when xenon is replaced by krypton. This quenching process should have a rate constant higher than 1×1013 cm3s-1.\n\n*\ncomplex is not well known. It is estimated at 200 ns for KrXeCl and 40 ns for NeXeCl. This interval in time is sufficient for the second collision to have a chance of being produced.\n\nThe rate constants have been measured as summarized in table 32. If Rg≠Xe, only two direct measurements have been carried out. 
The last is only an evaluation.\n\nTable 32 : Rate constants in cm6s-1 of the reaction (21).\nRef He Ne Ar Kr\n (1.5 ± 0.5) × 10-31\n (3.1 ± 1.3) × 10-31 (6.0 ± 1.6) × 10-31 (9.4 ± 2.4) × 10-31 (14 ± 2) × 10-31\n (1.5) × 10-31\n\nAs for xenon, notice that the totality of the kDX constants of table 20 could be taken as those of the fifth column of table 32 since kDX could be merged with reaction (21).\n\n### Paths of disappearance\n\n##### Emission spectra\n\nTheoretical studies show that the allowed transitions are (figure 15) :\n\n42Γ → 12Γ (A)\n\n42Γ → 22Γ (B)\n\n42Γ → 32Γ (C)\n\nThe starting states are always the same and the corresponding wavelengths, λTh, are indicated in Table 33. They can be compared to experimental values, λObs.\n\nTable 33 : Characteristics of Xe\n2\nCl\n*\nemissions.\nTransition Experiment: λObs (nm) Theoretical estimates: λTh (nm) Theoretical estimates: relative importance Theoretical estimates: transition moment (D)\n(A) 450 495 Dominant 2.45\n(B) 490 514 Important 0.1\n(C) 541 100 times weaker\n\nExperimentally, Fajardo and Apkarian observed two transitions (A) and (B) in the spectral domain, even while there was a significant wavelength shift. In most cases, a very large continuum (approximately 80 nm) was observed covering the three emissions. The maximum positioning oscillated between 450 and 500 nm. An example of this sort of spectrum is given in Figure 11. On computation, the limits of short wavelength emissions were evaluated at 443 nm.\n\nAccording to Last and George, the Xe–Cl–Xe linear molecule ought to have produced an emission approaching the ground state at 321 nm and the transition moment should be elevated to 3.9 D. As of 2014, however, no experiment confirms this prediction.\n\nAt the solid state, the Xe\n2\nCl\n*\nemission shifts towards the red range and is centered around 570 nm. A corresponding result is observed in the liquid state. This phenomenon should be owed to a distortion of the potential curves arising from molecular interactions which are closest to themselves than at the gaseous state. A theoretical study attributes this to the polarization of the xenon matrix by Xe\n2\n+Cl- and by Van der Walls forces.\n\nEmission of Xe\n2\nCl\n*\ntrimer is only observed at high pressures of the rare gas (xenon or buffer gas) and fluorescence increases with the pressure of xenon. These results follow because the synthetic pathway of Xe\n2\nCl\n*\nis similar to that of reaction (21). Considering the values of the rate constant of reactions of type (21), the reaction frequency does not deviate in a significant way even when the rare gas pressure is close to 200 torr. Reaction (22) only takes place under pressure of several atmospheres.\n\nThe only state where Xe\n2\nCl\nis the original parent of a luminous emission is 42Γ). Several determinations of its lifetime obtained at the gaseous state are summarized in Table 34. The results vary and the uncertainties involved are important. The confidence interval obtained within a threshold of 5% lies between 240 and 253 ns. Of these, four values are not included. 
Given the strong absolute uncertainty, another measure has a common interval within the confidence interval.\n\nTable 34 : Lifetime of Xe\n2\nCl(42Γ) experimentally obtained at the gaseous state, excepting for the reference given by Stevens and Krauss which is a theoretical determination.\n300 ± 50 \n185 ± 10 \n260 \n135+70-60 \n210 ± 25 \n250 ± 25 \n245 ± 10 \n328 ± 20 \n250 \n330 \n210 ± 20 \n242 ± 10 \n\nMeasurements realized at the solid state provide values that are yet more dispersed such as is shown in Table 35.\n\nTable 35 : Lifetime of Xe\n2\nCl(42Γ) observed at the solid state.\nAr 460 \nNe 260 ± 15 \nKr 450 \nXe 663 \nXe 225 ± 10 \n\n#### The collision pathway\n\n##### The role of chlorine donors (RCl)\n\nBeyond the radiative disexcitation, the Xe\n2\nCl\n(42Γ) state is destroyed by a double collision with RCl. In practical terms, every author agrees that double collision is the dominant destruction pathway of Xe\n2\nCl\nwhen collision is involved, whatever the chlorine donor. Therefore, Xe\n2\nCl\n*\nemissions are only observed at weak concentrations of RCl. The values of the rate constants for reactions (24) are given in Table 36.\n\nXe\n2\nCl\n*\n+ RCl → Other products except Xe\n2\nCl\n(24)\n\nTable 36 : Rate constants in cm3s-1 of reactions (24) for various donors of chlorine, RCl.\nReference Cl\n2\nHCl CCl\n4\n (2.2 ± 0.2) × 10-10 (4.3 ± 0.4) × 10-10 (5.4 ± 0.5) × 10-10\n (6.1 ± 0.2) × 10-10\n 2.6 × 10-10\n 8 × 10-10\n 6.1 × 10-10\n 6 × 10-10\n (3.9 ± 0.4) × 10-10\n (4.5 ± 0.4) × 10-10\n (2.6 ± 0.3) × 10-10\n (4 ± 1) × 10-10\n (7 ± 1) × 10-10\n (4.0 ± 1) × 10-10-10\n 1.8 × 10-10\n (6 ± 1) × 10-10\n\nThere are only two determinations for CCl\n4\nand these are coincident. For HCl, two values are statistically distant from others. Giving an explanation for this distance remains difficult. The confidence interval at a threshold of 5% is from 4 to 7×1010 cm3s-1.\n\nIn the case of chlorine, Cl\n2\n, only one half of measurements are statistically close. Even so, this closeness is difficult to explain. Its confidence interval at the threshold of 5% varies from 3.7 to 4.5×1010 cm3s-1. The three chlorine donors appear to have a corresponding influence on the collisional destruction of Xe\n2\nCl\n*\n.\n\nTo estimate the rate constant of the reaction:\n\nXe\n2\nCl\n*\n+ Cl → 2 Xe + 2 Cl\n\nThe value is 1×109 cm3s-1.\n\n##### The role of rare gases\n\nThese are uniquely binary reactions:\n\nXe\n2\nCl\n*\n+ Rg → Other products except Xe\n2\nCl\n(25)\n\nThe disappearance of Xe\n2\nCl\n*\nby collision on a xenon atom was observed by Grieneisen et al., the reaction constant was estimated at 6×1015 cm3s-1. However, this reaction has not been demonstrated by other authors. The upper bound of the rate constant of reaction (25) is 1×1017 cm3s-1, although other authors placed this limit at 4 to 7×1014 cm3s-1 or 5×1013 cm3s-1. The value used by Kannari et al., 8×1012 cm3s-1, has no basis.\n\nFor ternary mixtures, the role of the buffer gas is not well known.\n\nFor argon, (3 ± 1)×1014 cm3s-1 and (1.5 ± 0.4)×1014 cm3s-1 are available.\n\nFor helium, 5×1013 cm3s-1 and 3×1014 cm3s-1 are available.\n\n##### The role of electrons and impurities\n\nThe rate of reactions of Xe\n2\nCl\n*\n+ e- → 2 Xe + Cl + e- (26) does not have consistent estimates. They are summarized in Table 37.\n\nTable 37 : Rate constants of reaction (26) in cm3s-1.\nRate constants Reference\n2 × 10-7 \n9 × 10-9 \n2 × 10-8 \n4 × 10-9 \n\nThe impurities have a lesser influence in the chemical decay of Xe\n2\nCl\nthan XeCl*. 
The bimolecular rate constants of disappearance of ImXe2Cl* are an order of magnitude lower than the corresponding rate constants for the binary collisions ImXeCl*. Yet, for CO2 and for nitric oxide, NO, the rate constants are of the same order of magnitude, about 10-10 cm3s-1. Impurity levels, which are most often low, may influence the measurements; at the usual levels, the reaction frequencies are negligible.
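To make the orders of magnitude above more concrete, the sketch below (added here for illustration; it is not part of the original review) shows how a tabulated rate constant is converted into a destruction frequency of the kind listed in Table 31, and how the energy gap ΔE is extracted from the coupling constants through the Boltzmann relation kBC/kCB = exp(ΔE/kT). It assumes ideal-gas number densities at 300 K, the partial pressures of the Ne/Xe/HCl example mixture (2268.6, 10 and 1.4 torr), and representative constants quoted above (kH ≈ 7.7 × 10-10 cm3s-1 for reaction (6), and the Inoue et al. values of Table 28); the exact entries of Table 31 use the authors' own averages and will not be reproduced exactly.

```python
# Illustrative sketch only (not from the review): converting rate constants into
# the destruction frequencies of Table 31, and extracting the B-C energy gap
# from the coupling constants via kBC/kCB = exp(dE/kT).
import math

K_B = 1.380649e-23       # Boltzmann constant, J/K
TORR_TO_PA = 133.322
T = 300.0                # assumed gas temperature, K
J_PER_CM1 = 1.98645e-23  # energy of 1 cm^-1, in J

def number_density(p_torr):
    """Ideal-gas number density in cm^-3 for a partial pressure given in torr."""
    return p_torr * TORR_TO_PA / (K_B * T) * 1e-6

def frequency_ns(k_cm3_per_s, p_torr):
    """Binary-collision destruction frequency, in ns^-1."""
    return k_cm3_per_s * number_density(p_torr) * 1e-9

# Reaction (6): XeCl(B) + HCl with kH ~ 7.7e-10 cm3 s^-1 and 1.4 torr of HCl
print(frequency_ns(7.7e-10, 1.4))      # ~0.035 ns^-1 (Table 31 lists 0.036)

# Reaction (11): XeCl(B) + Ne; the result depends strongly on which k11 is adopted
print(frequency_ns(1.0e-12, 2268.6))   # ~0.07 ns^-1 for k11 = 1e-12 cm3 s^-1

# Energy gap from the Inoue et al. coupling constants of Table 28
k_bc, k_cb = 11.0e-11, 7.4e-11
delta_e_cm1 = math.log(k_bc / k_cb) * K_B * T / J_PER_CM1
print(delta_e_cm1)                     # ~83 cm^-1, consistent with the quoted 85 cm^-1
```

With these inputs, reaction (6) comes out near 0.035 ns-1, close to the 0.036 ns-1 quoted in Table 31, while the spread of possible k11 values for neon illustrates why the frequency of reaction (11) remains uncertain.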
https://mikesmathpage.wordpress.com/2016/05/16/
"# Differences of squares and cubes",
null,
"My older son is working his way through Art of Problem Solving’s Algebra book and has come to a section about factoring sums and differences of cubes. This topic is new to him and I thought we’d work through a few introductory examples with numbers before diving in today.\n\nThere are a couple of surprises, I think. First, although",
null,
"$x^2 - y^2$ is easy to factor,",
null,
"$x^2 + y^2$ is not. Second, when you move up to cubes it turns out that",
null,
"$x^3 - y^3$ is reasonably easy to factor after you play around for a bit, and",
null,
"$x^3 + y^3$ is, too.\n\nI wanted to show him a bit about what was going on before he dove in this morning.\n\nHere are our short talks:\n\n(1) We started by talking about",
null,
"$x^2 - 1$\n\n(2) From there we moved on to",
null,
"$x^2 + 1$ and found a lot of primes\n\n(3) Next up was",
null,
"$x^3 - 1$ which we were able to factor with a little work.\n\n(4) Finally, we looked at",
null,
"$x^3 + 1$ which actually did factor in a very similar way to",
null,
"$x^3 - 1$!\n\nSo, hopefully a useful introduction. I’d like to do a few more projects over the course of the week to help give some different perspectives on factoring differences of squares and cubes equations."
https://www.gradesaver.com/textbooks/math/algebra/algebra-2-1st-edition/chapter-5-polynomials-and-polynomial-functions-prerequisite-skills-page-328/11
"## Algebra 2 (1st Edition)\n\nPublished by McDougal Littell\n\n# Chapter 5 Polynomials and Polynomial Functions - Prerequisite Skills - Page 328: 11\n\n#### Answer\n\n$$x=.2, -1.5$$\n\n#### Work Step by Step\n\nIn order to multiply factors out, we would use the FOIL method. This means that we multiply the first terms in each factor by each other. Then, we multiply the two outer terms by each other. Next, we multiply the two inner terms. Finally, we multiply the two terms that have yet to be multiplied by each other. Thus, we try to do the opposite of this to factor the equation. Doing this and then setting the factors equal to zero, we find: $$(5x−1)(2x+3) \\\\ x=.2, -1.5$$\n\nAfter you claim an answer you’ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide feedback."
https://ask.sagemath.org/question/28853/how-to-get-simplified-result-of-solving-equation/
"# how to get simplified result of solving equation\n\nHi there!\n\nIn a linear algebra assignment I have the following equation I need to find a solution for:\n\n-2*sqrt(3)*sin(t)^2+2*cos(t)*sin(t)+sqrt(3)==0. I know the result is pi/3.\n\nNow, when I use solve, solve(-2*sqrt(3)*sin(t)^2+2*cos(t)*sin(t)+sqrt(3)==0, t). I get:\n\n[sin(t) == -1/6*sqrt(3)*(sqrt(cos(t)^2+6)-cos(t), sin(t) == 1/6*sqrt(3)*sqrt(cos(t)^2+6)+cos(t))].\n\n\nHow do I use the solve function to get the more simplified result?\n\nNB: I have also tried to use find_root and different simplify functions, but also without any luck. I am new to sage, so it is quite possible that I don't know a specification to the solve function which I should use.\n\nSincerly Simon\n\nedit retag close merge delete\n\nSort by » oldest newest most voted\n\nFirst, be sure to define your variable using var('t').\n\nTo find a numerical solution, you can plot the function to help identify where the roots are.\n\nplot(-2*sqrt(3)*sin(t)^2+2*cos(t)*sin(t)+sqrt(3),(t,-10,10))\n\n\nFor example, to get the first positive root, you can now use find_root to find the root between 0 and 3.\n\nfind_root(-2*sqrt(3)*sin(t)^2+2*cos(t)*sin(t)+sqrt(3),-3,3)\n\n\nwhich gives 2.6179938779914944.\n\nFor an analytic solution, you can do the following:\n\nsolve(-2*sqrt(3)*sin(x)^2+2*cos(x)*sin(x)+sqrt(3)==0, x,to_poly_solve ='force')\n\n\nThis gives: [x == 1/3*pi + pi*z1, x == -1/6*pi + pi*z2]\n\nThe z1 and z2 can be any integers.\n\n(Interestingly, I could not get the solve to work with t as the variable. I'm not sure why.)\n\nmore\n\nI have marked your answer as being the correct answer. I don't know if I am to give you points, but if I am, please reply of how I do this.\n\nSincerly Simon\n\nThank you. The points are managed by the website automatically."
https://gmatclub.com/forum/pam-and-robin-each-roll-a-pair-of-fair-six-sided-dice-what-is-the-pr-235426.html
"GMAT Question of the Day: Daily via email | Daily via Instagram New to GMAT Club? Watch this Video\n\n It is currently 21 Jan 2020, 19:48",
null,
"### GMAT Club Daily Prep\n\n#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.\n\nCustomized\nfor You\n\nwe will pick new questions that match your level based on your Timer History\n\nTrack\n\nevery week, we’ll send you an estimated GMAT score based on your performance\n\nPractice\nPays\n\nwe will pick new questions that match your level based on your Timer History\n\n#### Not interested in getting valuable practice questions and articles delivered to your email? No problem, unsubscribe here.",
null,
"",
null,
"# Pam and Robin each roll a pair of fair, six-sided dice. What is the pr\n\nAuthor Message\nTAGS:\n\n### Hide Tags\n\nMath Expert",
"V\nJoined: 02 Sep 2009\nPosts: 60555\nPam and Robin each roll a pair of fair, six-sided dice. What is the pr [#permalink]\n\n### Show Tags\n\n9\n42",
"based on 318 sessions\n\n### HideShow timer Statistics\n\nPam and Robin each roll a pair of fair, six-sided dice. What is the probability that Pam and Robin will both roll the same set of two numbers?\n\nA. 1/216\n\nB. 1/36\n\nC. 5/108\n\nD. 11/216\n\nE. 1/18\n\n_________________\nTarget Test Prep Representative",
"G\nAffiliations: Target Test Prep\nJoined: 04 Mar 2011\nPosts: 2806\nRe: Pam and Robin each roll a pair of fair, six-sided dice. What is the pr [#permalink]\n\n### Show Tags\n\n11\n5\nBunuel wrote:\nPam and Robin each roll a pair of fair, six-sided dice. What is the probability that Pam and Robin will both roll the same set of two numbers?\n\nA. 1/216\n\nB. 1/36\n\nC. 5/108\n\nD. 11/216\n\nE. 1/18\n\nWe need to determine the probability that when Pam and Robin each rolls a pair of fair, six-sided dice, they both roll the same set of numbers. There are two scenarios: when Pam and Robin both roll the same two numbers and when they roll two distinct numbers.\n\nScenario 1: When the two numbers on the dice are the same\n\nLet’s say they both roll 1s. That is, Pam rolls (1, 1) and Robin rolls (1, 1). The probability of this happening is\n\n1/6 x 1/6 x 1/6 x 1/6 = 1/(6^4)\n\nSince the probability is the same for all 6 pairs of numbers, the probability of their rolling the same numbers is 6 x 1/(6^4) = 1/(6^3) = 1/216.\n\nScenario 2: When the two numbers on the dice are distinct\n\nThere are 6 x 5 = 30 ways to roll two distinct numbers when rolling two dice.\nLet’s say Pam rolls (1, 2) and Robin also rolls (1, 2). The probability of this happening is:\n\n1/6 x 1/6 x 1/6 x 1/6 = 1/(6^4)\n\nHowever, if Pam rolls (1, 2) and Robine rolls (2,1), those are still considered the same set of numbers, and the probability of that occurring is also 1/(6^4).\n\nTherefore, for each pair of distinct numbers rolled, the probability is 2 x 1/(6^4) = 2/(6^4). Since there are 30 such pairs, the overall probability is 30 x 2/(6^4) = 60/(^4) = 10/(6^3) = 10/216.\n\nFinally, since the events in option 1 and those in option 2 are mutually exclusive, we use the addition rule of probability. That is, the probability that Pam and Robin will both roll the same set of two numbers is:\n\n1/216 + 10/216 = 11/216\n\n_________________\n\n# Jeffrey Miller\n\nJeff@TargetTestPrep.com\n\nSee why Target Test Prep is the top rated GMAT quant course on GMAT Club. Read Our Reviews\n\nIf you find one of my posts helpful, please take a moment to click on the \"Kudos\" button.\n\n##### General Discussion\nManager",
"B\nJoined: 06 Dec 2016\nPosts: 227\nRe: Pam and Robin each roll a pair of fair, six-sided dice. What is the pr [#permalink]\n\n### Show Tags\n\nI'm not sure if my approach is right.\n\n6/36 x 6/36 = 1/6 x 1 /6 = 1/36. I think the answer is B. I am not sure though.\nIntern",
"S\nJoined: 05 Sep 2016\nPosts: 19\nLocation: Israel\nConcentration: Economics, Technology\nWE: Engineering (Telecommunications)\nRe: Pam and Robin each roll a pair of fair, six-sided dice. What is the pr [#permalink]\n\n### Show Tags\n\n2\n1\nThe answer should be D. 11/216.\nThere are 2 cases:\n1. Both dices show the same number, so it's 6 out of 36. Then for the other player the probability to roll the same number is 1/6 x 1/6.\n2. There are 30 out of 36 options for the dices to be different numbers. The probability for the other player to roll the same is 2/6 x 1/6.\n\nProbability = 1/6 x 1/6 x 1/6 + 5/6 x 2/6 x 1/6 = 11/216\n\nSent from my Redmi 4 using GMAT Club Forum mobile app\nSenior SC Moderator",
"V\nJoined: 14 Nov 2016\nPosts: 1344\nLocation: Malaysia\nPam and Robin each roll a pair of fair, six-sided dice. What is the pr [#permalink]\n\n### Show Tags\n\nkolodits wrote:\nThe answer should be D. 11/216.\nThere are 2 cases:\n1. Both dices show the same number, so it's 6 out of 36. Then for the other player the probability to roll the same number is 1/6 x 1/6.\n2. There are 30 out of 36 options for the dices to be different numbers. The probability for the other player to roll the same is 2/6 x 1/6.\n\n$$Probability = \\frac{1}{6} * \\frac{1}{6} * \\frac{1}{6} + \\frac{5}{6} * \\frac{2}{6} * \\frac{1}{6} = \\frac{11}{216}$$\n\nSent from my Redmi 4 using GMAT Club Forum mobile app\n\nDear kolodits, How do you able to get the value for $$\\frac{2}{6}$$?\n_________________\n\"Be challenged at EVERY MOMENT.\"\n\n“Strength doesn’t come from what you can do. It comes from overcoming the things you once thought you couldn’t.”\n\n\"Each stage of the journey is crucial to attaining new heights of knowledge.\"\n\nIntern",
"S\nJoined: 05 Sep 2016\nPosts: 19\nLocation: Israel\nConcentration: Economics, Technology\nWE: Engineering (Telecommunications)\nRe: Pam and Robin each roll a pair of fair, six-sided dice. What is the pr [#permalink]\n\n### Show Tags\n\nziyuen wrote:\nkolodits wrote:\nThe answer should be D. 11/216.\nThere are 2 cases:\n1. Both dices show the same number, so it's 6 out of 36. Then for the other player the probability to roll the same number is 1/6 x 1/6.\n2. There are 30 out of 36 options for the dices to be different numbers. The probability for the other player to roll the same is 2/6 x 1/6.\n\n$$Probability = \\frac{1}{6} * \\frac{1}{6} * \\frac{1}{6} + \\frac{5}{6} * \\frac{2}{6} * \\frac{1}{6} = \\frac{11}{216}$$\n\nSent from my Redmi 4 using GMAT Club Forum mobile app\n\nDear kolodits, How do you able to get the value for $$\\frac{2}{6}$$?\n\nThe second player roll the first dice. It should be equal to one of the dices the first player rolled. It doesn't matter which one, so there are 2 options out of 6. For the second dice there is only 1 option left, which makes it: 2/6 * 1/6.\n\nSent from my Redmi 4 using GMAT Club Forum mobile app\nIntern",
"B\nJoined: 25 Dec 2016\nPosts: 17\nLocation: United States (GA)\nConcentration: Healthcare, Entrepreneurship\nGMAT 1: 770 Q51 V42\nGPA: 3.64\nWE: Medicine and Health (Health Care)\nRe: Pam and Robin each roll a pair of fair, six-sided dice. What is the pr [#permalink]\n\n### Show Tags\n\n1\n1\nAt first I misread and thought it was asking for the same sum of the pairs of dice which would yield a different answer.\n\nThere are two cases. The first is when the first person rolls two different numbers. This happens 5/6 of the time. The second is when the first person rolls the same number on both die. This happens 1/6 of the time.\n\nIn the first case, the second person has a 2/6 chance to roll one of the two numbers that the first person rolled with the first die. If the first die is one of the two numbers, then the second person only has a 1/6 chance for the second die to match up. This means that in the first case, the second person has a 2/6 * 1/6 = 1/18 chance to have the same roll. This applies 5/6 of the time.\n\nIn the second case, both dice must match, so the second person has a 1/6 chance to roll the same number with each of the two die. This means the total probability is 1/6 * 1/6 = 1/36. This applies 1/6 of the time.\n\nThis means that the total probability is 5/6*1/18 + 1/6*1/36 = 5/108 + 1/216 = 10/216 + 1/216 = 11/216. Answer is D\nManager",
"B\nJoined: 08 Jul 2016\nPosts: 69\nLocation: United States (NY)\nGMAT 1: 710 Q49 V36",
"Pam and Robin each roll a pair of fair, six-sided dice. What is the pr [#permalink]\n\n### Show Tags\n\n1\nJeffTargetTestPrep wrote:\nBunuel wrote:\nPam and Robin each roll a pair of fair, six-sided dice. What is the probability that Pam and Robin will both roll the same set of two numbers?\n\nA. 1/216\n\nB. 1/36\n\nC. 5/108\n\nD. 11/216\n\nE. 1/18\n\nWe need to determine the probability that when Pam and Robin each rolls a pair of fair, six-sided dice, they both roll the same set of numbers. There are two scenarios: when Pam and Robin both roll the same two numbers and when they roll two distinct numbers.\n\nScenario 1: When the two numbers on the dice are the same\n\nLet’s say they both roll 1s. That is, Pam rolls (1, 1) and Robin rolls (1, 1). The probability of this happening is\n\n1/6 x 1/6 x 1/6 x 1/6 = 1/(6^4)\n\nSince the probability is the same for all 6 pairs of numbers, the probability of their rolling the same numbers is 6 x 1/(6^4) = 1/(6^3) = 1/216.\n\nScenario 2: When the two numbers on the dice are distinct\n\nThere are 6 x 5 = 30 ways to roll two distinct numbers when rolling two dice.\nLet’s say Pam rolls (1, 2) and Robin also rolls (1, 2). The probability of this happening is:\n\n1/6 x 1/6 x 1/6 x 1/6 = 1/(6^4)\n\nHowever, if Pam rolls (1, 2) and Robine rolls (2,1), those are still considered the same set of numbers, and the probability of that occurring is also 1/(6^4).\n\nTherefore, for each pair of distinct numbers rolled, the probability is 2 x 1/(6^4) = 2/(6^4). Since there are 30 such pairs, the overall probability is 30 x 2/(6^4) = 60/(^4) = 10/(6^3) = 10/216.\n\nFinally, since the events in option 1 and those in option 2 are mutually exclusive, we use the addition rule of probability. That is, the probability that Pam and Robin will both roll the same set of two numbers is:\n\n1/216 + 10/216 = 11/216\n\nIf we are already considering (1,2) and (2,1) to be the same set while calculating probability then why do we have total number of pairs = 30? Shouldn't it be 15?\nManager",
"B\nJoined: 01 Feb 2015\nPosts: 64\nRe: Pam and Robin each roll a pair of fair, six-sided dice. What is the pr [#permalink]\n\n### Show Tags\n\njpeeples85 wrote:\nAt first I misread and thought it was asking for the same sum of the pairs of dice which would yield a different answer.\n\nThere are two cases. The first is when the first person rolls two different numbers. This happens 5/6 of the time. The second is when the first person rolls the same number on both die. This happens 1/6 of the time.\n\nIn the first case, the second person has a 2/6 chance to roll one of the two numbers that the first person rolled with the first die. If the first die is one of the two numbers, then the second person only has a 1/6 chance for the second die to match up. This means that in the first case, the second person has a 2/6 * 1/6 = 1/18 chance to have the same roll. This applies 5/6 of the time.\n\nIn the second case, both dice must match, so the second person has a 1/6 chance to roll the same number with each of the two die. This means the total probability is 1/6 * 1/6 = 1/36. This applies 1/6 of the time.\n\nThis means that the total probability is 5/6*1/18 + 1/6*1/36 = 5/108 + 1/216 = 10/216 + 1/216 = 11/216. Answer is D\n\n1/6 of the time? 5/6 of the time? How are you saying that? can you please explain?\nEMPOWERgmat Instructor",
"V\nStatus: GMAT Assassin/Co-Founder\nAffiliations: EMPOWERgmat\nJoined: 19 Dec 2014\nPosts: 15958\nLocation: United States (CA)\nGMAT 1: 800 Q51 V49\nGRE 1: Q170 V170\nRe: Pam and Robin each roll a pair of fair, six-sided dice. What is the pr [#permalink]\n\n### Show Tags\n\n3\n4\nHi All,\n\nThis is a tougher probability question than average (and you likely will not see this exact situation on Test Day). This is meant to say that you shouldn't be too concerned about this prompt until you're picking up points in all the other 'gettable' areas first.\n\nThat having been said, the 'quirk' with this question is that you have to account for the probability of rolling two dice and getting the same number on both dice vs. getting two different numbers.\n\nWhen rolling two dice, there are 36 possible outcomes:\n-6 outcomes have the same number twice (1-1, 2-2, etc.)\n-30 outcomes have two different numbers (1-4, 3-2, 5-1, etc.)\n\nThus, 6/36 = 1/6 of the outcomes are the same number twice\n30/36 = 5/6 of the outcomes are two different numbers\n\nIF Pam rolled 5-5, then Robin would have to also roll 5-5. The probability of that occurring would be: (1/6)(1/6) = 1/36.\n\nIF Pam rolled 2-6, then Robin has two different ways to match up (2-6 OR 6-2). The probability of that occurring would be (2/6)(1/6) = 2/36 = 1/18\n\nAccounting for all possible outcomes, the probability of Pam and Robin rolling the same result would be:\n(1/6)(1/36) + (5/6)(1/18) =\n1/216 + 5/108 =\n1/216 + 10/216 =\n11/216\n\nGMAT assassins aren't born, they're made,\nRich\n_________________\nIntern",
"B\nJoined: 10 Apr 2013\nPosts: 3\nRe: Pam and Robin each roll a pair of fair, six-sided dice. What is the pr [#permalink]\n\n### Show Tags\n\nEMPOWERgmatRichC wrote:\nHi All,\nAccounting for all possible outcomes, the probability of Pam and Robin rolling the same result would be:\n(1/6)(1/36) + (5/6)(1/18) =\n1/216 + 5/108 =\n1/216 + 10/216 =\n11/216\n\nEMPOWERgmat Instructor",
"V\nStatus: GMAT Assassin/Co-Founder\nAffiliations: EMPOWERgmat\nJoined: 19 Dec 2014\nPosts: 15958\nLocation: United States (CA)\nGMAT 1: 800 Q51 V49\nGRE 1: Q170 V170\nRe: Pam and Robin each roll a pair of fair, six-sided dice. What is the pr [#permalink]\n\n### Show Tags\n\n1\nxclsx wrote:\nEMPOWERgmatRichC wrote:\nHi All,\nAccounting for all possible outcomes, the probability of Pam and Robin rolling the same result would be:\n(1/6)(1/36) + (5/6)(1/18) =\n1/216 + 5/108 =\n1/216 + 10/216 =\n11/216\n\nHi xclsx,\n\nWe have to account for two possible situations (since Pam's roll dictates the probability that Robin's roll will match) : Pam rolls the same number on both dice or Pam rolls two different numbers. The probabilities that you highlighted in red are those two probabilities: there's a 1/6 chance that it's the same number twice and a 5/6 chance that it's two different numbers.\n\nGMAT assassins aren't born, they're made,\nRich\n_________________\nGMATH Teacher",
"P\nStatus: GMATH founder\nJoined: 12 Oct 2010\nPosts: 935\nRe: Pam and Robin each roll a pair of fair, six-sided dice. What is the pr [#permalink]\n\n### Show Tags\n\nBunuel wrote:\nPam and Robin each roll a pair of fair, six-sided dice. What is the probability that Pam and Robin will both roll the same set of two numbers?\n\nA. 1/216\n\nB. 1/36\n\nC. 5/108\n\nD. 11/216\n\nE. 1/18\n\n$$? = {{\\# \\,\\,{\\rm{favorables}}} \\over {\\# \\,\\,{\\rm{total}}\\,\\,\\left( {{\\rm{equiprobables}}} \\right)}}$$\n\n$$\\# \\,\\,{\\rm{total}}\\,\\,\\left( {{\\rm{equiprobables}}} \\right) = {6^2} \\cdot {6^2}\\,\\,\\,\\,\\left( {{\\rm{taking}}\\,\\,{\\rm{results}}\\,\\,{\\rm{in}}\\,\\,{\\rm{order,}}\\,\\,{\\rm{for}}\\,\\,{\\rm{both}}\\,\\,{\\rm{players}}} \\right)$$\n\n$$\\# \\,\\,{\\rm{favorables}}\\,\\,\\, = \\,\\,\\,\\left\\{ \\matrix{ \\,6\\,\\,::\\,\\,{\\rm{first}}\\,\\,{\\rm{player}}\\,\\,{\\rm{gets}}\\,\\,{\\rm{same}}\\,\\,{\\rm{number}}\\,\\,{\\rm{twice,}}\\,\\,{\\rm{second}}\\,\\,{\\rm{player}}\\,\\,{\\rm{gets}}\\,\\,{\\rm{same}}\\,\\,{\\rm{ones}} \\hfill \\cr \\,\\,\\, + \\,\\,\\,\\,\\,\\left( {{\\rm{mutually}}\\,\\,{\\rm{exclusive}}\\,{\\rm{!}}} \\right) \\hfill \\cr \\,\\left( {36 - 6} \\right) \\cdot 2!\\,\\,\\,::\\,\\,\\,{\\rm{first}}\\,\\,{\\rm{gets}}\\,\\,\\left( {x,y} \\right)\\,\\,{\\rm{with}}\\,\\,x \\ne y\\,\\,,\\,\\,{\\rm{second}}\\,\\,{\\rm{gets}}\\,\\,\\left( {x,y} \\right)\\,\\,{\\rm{or}}\\,\\,\\left( {y,x} \\right) \\hfill \\cr} \\right.$$\n\n$$? = {{66} \\over {{6^4}}} = {{11} \\over {{6^3}}} = {{11} \\over {216}}$$\n\nThis solution follows the notations and rationale taught in the GMATH method.\n\nRegards,\nFabio.\n_________________\nFabio Skilnik :: GMATH method creator (Math for the GMAT)\nOur high-level \"quant\" preparation starts here: https://gmath.net\nManager",
"G\nJoined: 28 Jan 2017\nPosts: 151\nWE: Consulting (Computer Software)\nRe: Pam and Robin each roll a pair of fair, six-sided dice. What is the pr [#permalink]\n\n### Show Tags\n\nBunuel wrote:\nPam and Robin each roll a pair of fair, six-sided dice. What is the probability that Pam and Robin will both roll the same set of two numbers?\n\nA. 1/216\n\nB. 1/36\n\nC. 5/108\n\nD. 11/216\n\nE. 1/18\n\nMy way of doing this:\n\nTotal cases : 6^2 * 6^2= 36*36\nNow required cases:\n1. Both dices have same number for both the person. Example (1,1) and (1,1)\nSuch 6 cases.\n2. Different number in 2 dice example (1,2) and (1,2)- This can have 4 arrangements, (1,2)(1,2), (1,2)(2,1),(2,1)(1,2),(2,1)(2,1)\nand total number of ways to pick these types are 6C2.\n\nso total cases = 6 + 6C2 * 4\n\nRequired probability = ( 6 + 6C2 * 4) / ( 36*36 ) = 11/216",
http://www.sisef.it/iforest/contents/?id=ifor2155-011
"## iForest - Biogeosciences and Forestry",
null,
"",
null,
"## Alternative methods of scaling Eucalyptus urophylla trees in forest stands: compatibility and accuracy of volume equations\n\niForest - Biogeosciences and Forestry, Volume 11, Issue 2, Pages 275-283 (2018)\ndoi: https://doi.org/10.3832/ifor2155-011\nPublished: Mar 29, 2018 - Copyright © 2018 SISEF\n\nResearch Articles\n\nThis study developed and tested a procedure that aimed to reduce the cost of forest stand volume estimation. Using a typical 3 x 3 m Eucalyptus urophylla plantation, estimates based on a simplified approach were compared with those of precise but costly reference methods. The simplified approach required measuring the total height and bole diameters up to 2 m high. The volume of the lower part was estimated using Smalian’s formula, while the volume of the upper part was estimated using a simple solid as an approximation. Three typical solids were tested: paraboloid, cone, and neiloid. The approach proposed is non-destructive, because it does not require tree felling, while precise methods are destructive. The operational (traditional) method uses Smalian’s formula to measure bole diameter at short intervals over the whole bole, while the precise, more research-suited (reference) method records water displacement (using a xylometer) to accurately measure the volume of each bole section. The reference and traditional methods, as expected, produced very similar results. The approach proposed, using a paraboloid for the upper part, provided results that were not statistically different to the reference values. The volumes estimated by the proposed approach were used to calibrate the Schumacher-Hall function, and the performance of the model was evaluated using the values obtained by the xylometer.\n\n# Introduction\n\nThe quantification of wood volume in forest stands is imperative for the deployment of sustainable forest management plans (, , , , ). Volume is a variable of great importance for evaluating the growth and yield potential of a forest. It is a basic component of information in any planning process of forest production (); therefore, it is essential to estimate it with a high level of reliability. Biometric scientists have contributed to the development of the methodology for volumetric estimates by developing better and more consistent modeling, as is used today. The first contribution is attributed to Schuberg (), who is considered the first person to introduce the form quotient between the diameters d0.5h and d1.30 to express the bole form of a tree. He applied this concept to the silver fir (Abies alba Mill.), which triggered numerous subsequent research studies on tree forms in Europe. The analytical solution for estimating the form factor as a function of a form quotient qi and a tree height h was first proposed by Schiffel (), as presented in eqn. 1:\n\n\\begin{equation} v=d^2 h \\left (a_0 q+ { \\frac{a_1} {qh} } +a_2 \\right ) \\\\ v= \\beta_1 d^2 hq+ \\beta_2 d^2 q^{-1} + \\beta_3 d^2 h \\end{equation}\n\nwhere v is tree volume, d is tree diameter at breast height (DBH), h is tree height, ai and βi are coefficients of the model, and q is the proposed form quotient.\n\nBecause of its practical applications for various European forest species, it stimulated the development of other formulations and conceptions of volumetric equations. Soon after, the Swedish researcher Höjer presented an analytical form for a forest species (). 
Later, Prodan () presented an evolutionary synthesis of form quotients.\n\nThe sectioning of tree boles into different geometric bodies has been considered previously (since the first biometric studies), and is an important approach in obtaining their respective volumes (, , , ).\n\nHohenadl () reported that dominant trees have higher relative increments at the bottom (i.e., in the neiloid segment) to ensure a better balance of trunk weight. He concluded that in these cases the volume is associated with genetic variables, as well as environmental and silvicultural practices.\n\nLarson () stated that tree form varies depending on edaphic conditions, climate change, the ways in which silvicultural practices in forest stands are conducted, and the sociological position in which each tree grows. It also varies according to the age of the forest stand, either as a natural evolutionary phenomenon, or due to the competitive status of each tree in relation to its closest neighbors.\n\nRow & Guttenberg () recommend that for better detection of variations in tree stem form at least three distinct sections should be taken.\n\nIn the 1960s, several other important studies on tree form were published, including a proposal for the so-called “taper function” (, , , , ).\n\nIt was assumed in the case of sectioned stems that their forms could be represented complementarily as geometrical solids: a neiloid at the base, a paraboloid in the central part, and a cone at the terminal part (, ). There have even been proposals to use different mathematical functions for each segment. In his study of Picea excelsa, Roiko-Jokela () retained the three segments, but used a hyperbola to describe the neiloid, a logarithmic function to describe the paraboloid, and a straight line to describe the cone. There are still researchers who separate the stem into more than three sections, but such proposals have increased the difficulty of adjusting the mathematical functions that describe them (, , ).\n\nIn forest inventories, the volume of an individual tree is normally obtained by indirect techniques such as volumetric equations, taper functions, and the form factor (, , , , , ), which are derived from direct measurements of the volumes of some trees in the forest stand. Volumetric equations, which are obtained from the adjustment of regression models and are based on existing relationships between volume and easily measurable variables in the field (e.g., tree diameter and height), are the most commonly used procedures to estimate volume (, ). They have satisfactory efficiency (, ), and are in current, well-known volume models (, , ).\n\nOf the several methods of measuring tree volume, water displacement (using a xylometer) is the only technique that provides the (true) volume value, and is usually used as a reference in accuracy evaluations of other methods (, , ). However, due to the operational limitations inherent in this method, the (true) volume of individual trees has been estimated with good accuracy by rigorous scaling (measurement of the diameter and height along the bole) using mathematical expressions (e.g., Smalian, Huber, Newton, Hohenadl, and Pressler, among others). This requires taking measurements of felled trees, or the rigorous volume estimation of standing trees using specialized equipment. 
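As a side note, the classical section formulas named above (Smalian, Huber, Newton) are easy to compare on a single log. The sketch below is illustrative only; the 18/17/16 cm diameters and 2 m length are made-up values, not data from this study:

```python
import math

def area(d_cm):
    """Cross-sectional area (m²) from a diameter given in cm."""
    return math.pi / 4 * (d_cm / 100) ** 2

def smalian(d_big, d_small, length):
    """Smalian: mean of the two end areas times the section length."""
    return (area(d_big) + area(d_small)) / 2 * length

def huber(d_mid, length):
    """Huber: cross-sectional area at the section midpoint times the length."""
    return area(d_mid) * length

def newton(d_big, d_mid, d_small, length):
    """Newton: weighted mean of end and midpoint areas times the length."""
    return (area(d_big) + 4 * area(d_mid) + area(d_small)) / 6 * length

# Hypothetical 2 m log: 18 cm and 16 cm at the ends, 17 cm at the middle.
print(f"Smalian: {smalian(18, 16, 2.0):.5f} m³")
print(f"Huber:   {huber(17, 2.0):.5f} m³")
print(f"Newton:  {newton(18, 17, 16, 2.0):.5f} m³")
```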
In Brazil, Smalian’s method has traditionally been used for most forest surveys, by normally defining short sections up to 2 m in height and 1 or 2 m sections for the rest of the trunk, more for the convenience of measurement than for any accuracy reasons ().\n\nRigorous scaling is normally performed with sample trees felled in plots of the forest inventory or in areas close to them, an activity that is not part of the yearly routine of measurements in the forest inventory. In addition, there are situations in which it is impossible or impractical to obtain scaling data that provide precise volume estimates for the remaining trees in the stand. Therefore, the use of alternative methodologies for scaling in forest inventories could increase the rate of data collection, reduce costs, and minimize technical and operational difficulties (). Preferably, this should be conducted without felling trees (non-destructive methods) using complex equipment to measure only a few bole diameters.\n\nOne alternative would be to measure a few diameters of standing trees up to a height that is easily measurable, assuming that the stem from that point to the top can be treated as a single section. In this way, tree scaling would be optimized, because the volume up to an easily measurable height is directly obtained using a traditional tree-scaling method. From that point up, the volume is estimated using a geometrical solid to express the stem form, i.e., using dendrometric prototypes (paraboloid, cone, or neiloid). This methodological proposal is justified, because in commercial plantations, tree boles tend to resemble these geometrical forms. However, the lower part usually has a more irregular form (). Such a configuration using dendrometric prototypes must be conducted with great accuracy, because the lower part accounts for the greatest part of the overall volume ().\n\nAlthough the volumes of a cone and neiloid can be used to generate a wide variety of shapes, the volume of a paraboloid is the most appropriate for this modeling.\n\nIf its base is considered at a height of 2 m for obtaining the volume, then (eqn. 2):\n\n\\begin{equation} g_2 =\\frac{1}{4} \\pi {d^2}_2 \\end{equation}\n\nwhere g2 is the cross-sectional area measured at a height of 2 m on the stem and d2 is the diameter measured at 2 m.\n\nFor the paraboloid, the volume and the form factor are defined in eqn. 3 and eqn. 4, respectively:\n\n\\begin{equation} v_2= g_2 h_2 f _2 \\end{equation}\n\\begin{equation} f_2 = \\frac{1}{d_2 + 1} \\end{equation}\n\nWhere v2 is the volume of the paraboloid, g2 is the cross-sectional area measures at 2 m, h2 is the height of the bole above 2 m, f2 is the form factor of the bole measured above 2 m and d2 is the diameter measured at 2 m.\n\nThe development of the paraboloid form depends on f2, as defined in eqn. 4. We initially decided to use Apollonius’s paraboloid (form factor equal to 0.5) to scale Eucalyptus urophylla trees without felling them. In addition, we compared the results obtained by rigorous scaling using Smalian’s method, which is traditionally applied to measure tree volumes in commercial plantations of Eucalyptus in Brazil (), with those obtained using a xylometer. We also evaluated the effects of different scaling methods on the accuracy of volumetric equations adjusted to the forest stand.\n\nBased on the above considerations, we formulated the following hypotheses: (1) the estimated E. 
urophylla tree volume, as composed of a base that is evaluated using Smalian’s method with detailed measurements taken up to a height of 2 m and an upper part that is evaluated as a basic paraboloid, is similar to that obtained using a xylometer, or the volume obtained by Smalian’s method applied to the entire stem; (2) the volume function calibrated with the values estimated by the simplified approach would approximate very well to the reference (xylometer) estimates.\n\n# Material and methods\n\n## Study area\n\nThe study was conducted in a 7-year-old E. urophylla S. T. Blake plantation spaced at 3 x 3 m in Rio Verde, southwest Goias state, Brazil, which belongs to the agro-industrial cooperative of farmers of that region (COMIGO). The plantation is at an average altitude of 700 m at 18° 00′ 45″ to 18° 01′ 45″ S and 50° 52′ 45″ to 50° 53′ 15″ W. The climate of the region, according to the Köppen classification, is Aw, tropical humid, and is characterized as having two well-defined seasons: dry in the autumn and winter, and wet with heavy rain in the spring and summer. The annual average air temperature ranges from 20 to 25°C, and the rainfall is between 1.200 to 1.500 mm, with an annual average of around 1.300 mm ().\n\nThe dominant soils have B Latosol and A moderate horizons and a clay texture, and are classified by EMBRAPA () as Red Latosols, corresponding to Red Oxisols in the USDA Soil Taxonomy () and Ferralsols by WRB ().\n\n## Data\n\nOne hundred and fifty trees in different diameter classes were randomly selected in the population and cut down for the present study. DBHs were measured using a caliper, and ranged from 7.2 to 20.6 cm. The total heights of standing trees were measured using a Vertex hypsometer, and those of felled trees were measured using a tape measure, and ranged from 14.2 to 30.6 m.\n\nAfter measuring the DBHs and total heights, a rigorous scaling of each tree was performed to obtain its actual volume. This was conducted using three non-destructive methods by scaling standing trees and two destructive methods by cutting trees down.\n\nThe scaling of standing trees up to a stem height of 2 m was performed using Smalian’s method and by taking detailed measurements. From that height up, the remaining part of the stem was assumed to include three distinct dendrometric prototypes, i.e., a paraboloid, a cone, and a neiloid (Fig. 1). The diameters of boles with bark were measured at 0.1, 0.3, 0.5, 0.7, 0.9, 1.1, 1.3, and 2.0 m above ground level, and from that height up, at every 1 m up to a stem diameter of 4 cm (commercial). The three non-destructive methods are hereafter referred to as Smalian + Paraboloid, Smalian + Cone, and Smalian + Neiloid.\n\nFig. 1 - Schematic representation of tree scaling using three non-destructive methods and the traditional Smalian’s method (destructive). The non-destructive methods were Smalian + Paraboloid, Smalian + Cone, and Smalian + Neiloid. (L): length of the section (m); (H): total height (m); (gi): sectional area at height i (m²); (vi): volume of section i (m³); (vParaboloid): volume of paraboloid (m³); (vCone): volume of cone (m³); (vNeiloid): volume of neiloid (m³); (vPonta): tip volume (m³); (v): volume of the tree (m³).\n\nAfter taking measurements of the standing trees, they were cut at 0.1 m above the ground. 
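To make the standing-tree procedure just described concrete, here is a minimal sketch of the computation it implies: Smalian's formula on the short sections measured up to 2 m, plus a single dendrometric prototype for the rest of the bole. The diameters and total height below are invented for illustration; the form factors 1/2, 1/3 and 1/4 are the classical values for the paraboloid (the Apollonius paraboloid used here), the cone and the neiloid:

```python
import math

def smalian_section(d1_cm, d2_cm, length_m):
    """Smalian's formula: mean of the two end cross-sectional areas times length (m³)."""
    g1 = math.pi / 4 * (d1_cm / 100) ** 2
    g2 = math.pi / 4 * (d2_cm / 100) ** 2
    return (g1 + g2) / 2 * length_m

# Measurement heights used in the paper (m); the diameters (cm) are assumed values.
heights = [0.1, 0.3, 0.5, 0.7, 0.9, 1.1, 1.3, 2.0]
diams   = [16.8, 15.9, 15.4, 15.1, 14.9, 14.7, 14.5, 14.0]

# Lower part (0.1-2.0 m): sum of short Smalian sections.
v_lower = sum(smalian_section(diams[i], diams[i + 1], heights[i + 1] - heights[i])
              for i in range(len(heights) - 1))

# Upper part (2 m to the top): one prototype, v = g2 * (H - 2) * f.
total_height = 25.5                                # assumed total height (m)
g2 = math.pi / 4 * (diams[-1] / 100) ** 2          # cross-sectional area at 2 m
for name, f in [("Paraboloid", 1 / 2), ("Cone", 1 / 3), ("Neiloid", 1 / 4)]:
    v_total = v_lower + g2 * (total_height - 2) * f
    print(f"Smalian + {name}: {v_total:.4f} m³")
```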
The length of the upper part was calculated by taking the difference between the total and commercial heights.\n\nWhen using Smalian’s method () on each felled tree, volumes were obtained for all intermediate sections along the tree bole, i.e., between 0.1 m above the ground up to the minimum merchantable diameter (4 cm).\n\nAnother destructive method used was the xylometer, by which the volume was obtained by sectioning each tree and immersing the sections (small logs) in a container with water. The water volume displaced corresponded to the true wood volume (Archimedes’ principle). The xylometer used in this study was similar to the one described by Machado et al. (), and was made of two connected drums (total capacity of 200 L) with leveling screws on the bottom and a graduated (dm³) external ruler for measuring the displaced water volume. According to Machado & Figueiredo Filho (), the xylometer method has been used as a reference to evaluate the accuracy of other techniques for obtaining the real volume. It has the advantage of being independent of trunk form, and eliminates some measuring errors, as the diameters and lengths of sections along the stem are not measured.\n\nWhen comparing the tree-scaling methods (destructive and non-destructive), stump volume (0 to 10 cm) and the upper part’s volume were not included in the evaluation methods. These volumes were not included when using the xylometer because after cutting, the stump and upper part of the tree are usually left in the forest stand.\n\n## Evaluation of tree-scaling methods\n\nThe efficacy of the different tree-scaling methods in obtaining the volumes of individual trees was evaluated using a Kruskal-Wallis test (α = 0.05), with the volume obtained by the xylometer used as a reference. Five treatments (Xylometer, Smalian, Smalian + Paraboloid, Smalian + Cone, and Smalian + Neiloid) and 150 replications (trees rigorously scaled) were included. The analyses were conducted using the software Sbpank ver. 2.17c ().\n\nThe volumes obtained by the different scaling methods were evaluated a posteriori by the aggregate difference (AD), which is the difference between the sum of the volumes obtained by the xylometer and the volumes obtained by the other scaling methods (). We also investigated the distribution of residuals, which indicates under or overestimates when using each of the proposed scaling methods.\n\n## Model adjustment and validation\n\nVolume data obtained by the xylometer and the other scaling methods, which were not statistically different, were used to adjust the Schumacher & Hall () volumetric model to assess the influence of each method on the accuracy of the estimates obtained by the volumetric equations. To achieve this objective, the trees were randomly grouped into two datasets: one for model adjustment using data from each scaling method (100 trees), and the other for model validation (50 trees).\n\nThe Schumacher & Hall model was adjusted using Statistic 7.0 software () with the Levenberg-Marquardt algorithm and 1.000 iterations. 
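The model fitting itself is straightforward to reproduce outside Statistica; below is a sketch using SciPy's Levenberg-Marquardt routine. The six (DBH, H, V) observations and the starting values are placeholders, not the 100 adjustment trees used in the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

def schumacher_hall(X, b0, b1, b2):
    """Schumacher & Hall (1933): V = b0 * DBH**b1 * H**b2."""
    dbh, h = X
    return b0 * dbh ** b1 * h ** b2

# Placeholder scaled-tree data (DBH in cm, H in m, V in m³).
dbh = np.array([ 8.1, 10.4, 12.9, 14.5, 16.8, 19.2])
h   = np.array([15.3, 19.8, 23.1, 25.5, 27.9, 30.0])
v   = np.array([0.041, 0.089, 0.152, 0.216, 0.305, 0.441])

# method="lm" selects SciPy's Levenberg-Marquardt implementation.
popt, _ = curve_fit(schumacher_hall, (dbh, h), v,
                    p0=(2e-5, 1.6, 1.5), method="lm", maxfev=1000)

resid = v - schumacher_hall((dbh, h), *popt)
syx_pct = np.sqrt((resid ** 2).sum() / (len(v) - 3)) / v.mean() * 100
print("b0, b1, b2 =", popt)
print(f"Syx% = {syx_pct:.2f}")
print("Predicted V for DBH = 15 cm, H = 26 m:",
      round(schumacher_hall((15.0, 26.0), *popt), 4), "m³")
```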
The quality of the adjustment was evaluated by the significance of the regression and its parameters (α = 0.05), and by the following measures of precision: coefficient of determination (R²), residual standard error as a percentage (Syx%), and a graphical analysis of the residuals (), in addition to investigating correlations between the observed and estimated values ().\n\nFor model validation, a comparison was made of the volumes estimated by each adjusted equation and those obtained by the xylometer using the Kruskal-Wallis test (α = 0.05). Correlations (r) between the values observed (xylometer) and those estimated by the other methods (Syx%x and AD) were assessed, in addition to a graphical analysis of the errors.\n\n# Results\n\nTab. 1 shows that the tree volumes in the E. urophylla stand differed between the different tree-scaling methods. Traditional tree scaling using Smalian’s method (destructive), and scaling using the Smalian + Paraboloid non-destructive method proposed in this study, were statistically similar to the xylometer method (p = 0.8663 and 0.9549, respectively), which is considered a reference for comparison with other methods.\n\nTab. 1 - Performance of different scaling methods to obtain Eucalyptus urophylla tree volumes. Means followed by the same letter in a column did not significantly differ according to a Kruskal-Wallis test (α = 0.05); (AD): aggregate difference.\n\nTreatment n Volume (m³) AD\n(m3)\n(%)\nMean Min Max\nXylometer 150 0.2174 a 0.0314 0.4832 - -\nSmalian 150 0.2165 a 0.0315 0.4806 0.1306 0.40\nSmalian + Paraboloid 150 0.2190 a 0.0294 0.4991 -0.2438 -0.75\nSmalian + Cone 150 0.1577 b 0.0223 0.3548 8.9549 27.46\nSmalian + Neiloid 150 0.1271 c 0.0188 0.2826 13.5543 41.56\n\nHowever, the average tree volumes as estimated by the other two non-destructive rigorous scaling methods (Smalian + Cone and Smalian + Neiloid) did significantly differ to the mean volumes obtained by the xylometer method (p < 0.0001), the traditional Smalian’s scaling method (p < 0.0001), and the Smalian + Paraboloid method (p < 0.0001). In general, the Smalian + Cone and Smalian + Neiloid methods tended to underestimate the volume obtained by the xylometer method by more than 27%, and significantly differed between themselves (p < 0.0001).\n\nA graphical analysis of the errors associated with the results of the individual tree volumes obtained by each tree-scaling method in relation to the xylometer method showed that the traditional (destructive) Smalian’s technique had the greatest accuracy (-0.64 to 0.82%), despite having a slight tendency to underestimate the volume (Fig. 2a and Fig. 2b).\n\nFig. 2 - Graphical analysis of errors generated by Eucalyptus urophylla volumes. Different tree-scaling methods were used in relation to the volume obtained by the xylometer. Individual error distribution (a) and histograms of errors (b, c, d, and e).\n\nThe three non-destructive methods proposed in this study exhibited more expressive dispersion of errors (Fig. 2a). The Smalian + Paraboloid method, with errors varying from -15.21 to 7.83%, had a slight tendency to overestimate the volume (Fig. 2a and Fig. 2c), whereas the Smalian + Cone (Fig. 2a and Fig. 2d) and Smalian + Neiloid methods (Fig. 2a and Fig. 2e) underestimated the tree volumes by 12.97 to 33.93% and 27.06 to 47.01%, respectively.\n\nWe applied the volume data obtained by the destructive and non-destructive scaling methods to adjust the volume equations, using the Schumacher & Hall model for each method. 
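The aggregate differences reported in Tab. 1 can be reproduced, up to rounding of the published means, directly from the mean volumes and n = 150 trees. A short check, written for this edition and not taken from the paper:

```python
# Mean volumes (m³) from Tab. 1, n = 150 trees per treatment.
xylometer = 0.2174
methods = {
    "Smalian":              0.2165,
    "Smalian + Paraboloid": 0.2190,
    "Smalian + Cone":       0.1577,
    "Smalian + Neiloid":    0.1271,
}

n = 150
for name, mean_v in methods.items():
    ad_m3  = n * (xylometer - mean_v)                 # difference of the totals
    ad_pct = (xylometer - mean_v) / xylometer * 100   # relative to the xylometer total
    print(f"{name:22s} AD = {ad_m3:8.4f} m³  ({ad_pct:6.2f} %)")
# Close to Tab. 1: 0.40 %, -0.75 %, 27.46 % and 41.56 %; small gaps come from
# rounding of the published means.
```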
The volume equations obtained by the tree scaling methods that were compatible with the xylometer method (Smalian and Smalian + Paraboloid) had similar statistics for adjustment and accuracy, with a relatively high value of R² (>0.97) and a relatively low value of Syx% (<6.65%) (Tab. 2). In addition, they were significant for all of the coefficients of the equations (p < 0.0001). All of the equations predicted strong correlations between the xylometer values and the estimated values (Fig. 3a, Fig. 3b, and Fig. 3c), and provided satisfactory distributions of residuals with errors of between -18.91 and 12.89% (Fig. 3d, Fig. 3e, and Fig. 3f).

Tab. 2 - Volumetric equations with their respective statistics of adjustment, obtained from data applied to different rigorous tree-scaling methods (Xylometer, Smalian, and Smalian + Paraboloid) on Eucalyptus urophylla trees. (V): volume (m³); (DBH): diameter at breast height (cm); (H): total height (m); (R2): coefficient of determination; (Syx%): residual standard error (%).

| Tree-Scaling Method | Volumetric Equation | R² | Syx% |
|---|---|---|---|
| Xylometer | V = 0.000020 · DBH^1.588006 · H^1.539417 | 0.9717 | 6.64 |
| Smalian | V = 0.000022 · DBH^1.608694 · H^1.495780 | 0.9721 | 6.59 |
| Smalian + Paraboloid | V = 0.000019 · DBH^1.756855 · H^1.419518 | 0.9800 | 5.21 |

Fig. 3 - Correlations between observed and estimated volumes and the distribution of residuals from volumetric equations. The following methods were used: xylometer (a and d), Smalian (b and e), and Smalian + Paraboloid (c and f).

A validation of these equations revealed that the estimated volumes were not statistically different to the true volume obtained by the xylometer method (p > 0.8523) (Tab. 3). The three equations provided accurate volume estimates (Syx% < 5%), with relatively strong correlations between the true values (xylometer) and the estimates (Fig. 4a, Fig. 4b, and Fig. 4c). The AD between the true values and the estimated ones was less than 0.31% (Tab. 3), and the distribution of residuals had errors of between -11.30 and 9.58% (Fig. 4d, Fig. 4e, and Fig. 4f), with no bias.

Tab. 3 - Performance of volumetric equations in estimating Eucalyptus urophylla tree volumes in relation to a (reference) volume obtained using the xylometer method. Means followed by the same letter did not significantly differ according to a Kruskal-Wallis test (α = 0.05); (n): number of trees; (AD): aggregated difference; (Syx%): residual standard error (%).

| Treatment | n | Mean volume (m³) | Min | Max | AD (m³) | AD (%) | Syx% |
|---|---|---|---|---|---|---|---|
| Xylometer | 50 | 0.2161 a | 0.0691 | 0.4581 | - | - | - |
| Xylometer equation | 50 | 0.2161 a | 0.0721 | 0.4593 | 0.0011 | 0.01 | 4.91 |
| Smalian equation | 50 | 0.2154 a | 0.0722 | 0.4577 | 0.0337 | 0.31 | 4.97 |
| (Smalian + Paraboloid) equation | 50 | 0.2160 a | 0.0692 | 0.4731 | 0.0078 | 0.07 | 4.94 |

Fig. 4 - Correlations between true (xylometer) and estimated volumes, and the distribution of residuals from volumetric equations. The following methods were used: xylometer (a and d), Smalian (b and e), and Smalian + Paraboloid (c and f).

# Discussion

Rigorous scaling using the traditional Smalian's method resulted in less variation in tree volume than that obtained by the xylometer method, so had better precision, although they were not significantly different. This outcome was expected, because the method is also used as a reference when obtaining wood volume in routine forest inventories in Brazil (, ).
Smalian’s method is used in both native forests and plantations (); however, it often gives less satisfactory results than other scaling methods, such as the traditional formulas of Huber and Newton (, ). Smalian’s method gives its worst estimates when combined with long and large sections, a problem that did not occur for the shorter sections (, ) used in this study.\n\nAlthough the Smalian + Paraboloid method gave a higher dispersion of errors than the traditional Smalian’s scaling method, the volumes obtained were concentrated around the true mean (xylometer). Around 99% of the volumes obtained by this method had errors of between -10 and 10%. However, when considering the average volume (n = 150), this method provided similar volumes to those obtained by the Smalian and xylometer methods. It overestimated the volume by less than 1%, and is therefore considered quite accurate.\n\nThe results indicate that the part of the bole above a height of 2 m had the greatest similarity to the paraboloid form, which would naturally provide a larger volume than those obtained by the cone and neiloid forms, and explains the underestimations obtained when using these two forms to describe the tree bole. To assess the stem profile, three randomly selected trees (small, medium, and large) in the forest stand were measured to ascertain whether several geometrical solids were expressed in their forms, as described by Van Laar & Akça (). We found that the paraboloid was the prototype that best adjusted to the form of the stem above a height of 2 m. Tree scaling using Smalian’s method had less variation over the dispersion of errors (Fig. 2), because the large number of sections measured along the bole allowed the capture of possible variations in the form of the stem profile.\n\nThe adjustment of the Schumacher & Hall model provided accurate equations with which to estimate tree volumes, irrespective of the tree-scaling method (Xylometer, Smalian, or Smalian + Paraboloid) used. When the resulting equations were applied to an independent dataset that was not used in the adjustment of the models (validation), they all yielded similar estimates of the true (xylometer) volume. This confirms that the method proposed (Smalian + Paraboloid) was efficient in obtaining the volume, which is a variable required for the adjustment of volumetric models.\n\nThe use of the Smalian + Paraboloid method is a promising alternative for the inventories of other forest species, such as pine and acacia. It allows us to take advantage of plot randomness in the inventory area to collect representative data that will be used in the adjustment of volumetric models capable of generating precise estimates of the volumes of other trees in the forest stand. By adopting this method, the gathering of all necessary information can be performed simultaneously with plot measurement, and diameters can be measured in a few sample trees up to a height of 2 m, thereby avoiding tree felling or the use of specialized equipment to measure standing trees.\n\nThis procedure also allows a reduction in the number of diameters measured along the bole when compared with traditional tree scaling. In this study, traditional tree scaling by Smalian’s method required the measurement of 28 diameters (on average) along the bole of each tree, while the Smalian + Paraboloid method required only 8 diameters, enabling us to reduce the number of measurements by 71%. 
Therefore, using this method may improve data collection and reduce the costs of forest inventory activities. In addition, it offers an alternative for obtaining tree volumes in situations where it is impractical to perform tree scaling using destructive methods.\n\nHowever, some care must be taken when using this method, particularly regarding tree form, because according to Soares et al. (), bole volume is strongly correlated with its form. Several factors affect tree form, including species, site, silvicultural practice, age, and genetic expression, among others (, ). Tree form in the population studied was considered similar to Apollonius’s paraboloid. However, it is important to note that the tree boles of other Eucalyptus stand, or of any other species, can resemble alternative forms of a paraboloid, or even of other geometrical solids (dendrometric prototypes). Therefore, it is important to examine each specific case, and to adopt the most appropriate method for calculating the volume. In this study, we proposed taking an average tree of the forest stand and obtaining its bole volume (vn) by applying Smalian’s method for a neiloid up to a height of 2 m. Above that height, a generic paraboloid should be used as defined in eqn. 3, i.e., v2 = g2 h2 f2. After sectioning the tree, we can obtain its volume using a xylometer in such a way as to provide the volumes for the neiloid section (vxn) and for the bole above 2 m (vx2) separately. If it is assumed that vx2 is equivalent to that of a cylinder that has a constant cross-sectional area gxc and height hx2, then we have vx2 = gxc hx2. Assuming that we wish to set the paraboloid such that the two volumes are equal (i.e., vx2 = v2), then we have hx2 = h2. Consequently, the form factor fi can be obtained experimentally, as shown in eqn. 5:\n\n\\begin{equation} f_1=\\frac{g_{xc}}{g_2} \\end{equation}\n\nwhere gxc is the constant cross-sectional area of an equivalent cylinder of the bole from 2 m to the top of the tree, g2 is the cross-sectional area measured at a height of 2 m, and fi is the form factor of the bole measured above 2 m that is obtained experimentally.\n\nUsing the volumes of an average tree obtained by the xylometer method and the Smalian + Paraboloid method, as shown in Tab. 1, we calculated what was proposed above (Tab. 4).\n\nTab. 4 - Comparison of bole-volume estimates of an average tree taken from a Eucalyptus urophylla forest stand among different paraboloid forms. (vn): neiloid volume obtained by Smalian’s method; (v2): xylometer volume, or from Smalian, or from a different paraboloid; (vxn): neiloid volume by xylometer below 2 m; (fi): paraboloid form factor. Values followed by the same letter did not significantly differ at the 95% probability level.\n\nTreatment n Average volume (m³) f i\nv n v 2 Total\nXylometer 150 0.03555 0.18185 0.2174 a -\nSmalian 150 0.03506 0.18144 0.2165 a -\nSmalian + Apollonius Paraboloid 150 0.03506 0.18397 0.2190 a 0.500\nvxn + Apollonius Paraboloid 150 0.03555 0.18397 0.2195 a 0.500\nvxn+ Cubic Paraboloid 150 0.03555 0.22025 0.2558 c 0.600\nvxn + Semi-cubic Paraboloid 150 0.03555 0.15735 0.1929 b 0.429\nvxn + Equivalent Paraboloid 150 0.03555 0.18184 0.2174 a 0.494\n\nFrom the calculation of the cross-sectional area of the cylinder, the volume of which was equivalent to that obtained above 2 m by the xylometer method, and knowing that the total height of the tree was 25.67 m (h) and the height of the paraboloid was 23.67 m (h2), then we have (eqn. 
6):\n\n\\begin{equation} g_{xc} = \\frac{v_{x2} } {h_2} = { \\frac{0.18185} {23.67} } =0.0076827\\; m^2 \\end{equation}\n\nFrom a calculation of the base cross-sectional area of the Apollonius paraboloid, we have (eqn. 7):\n\n\\begin{equation} g_2 = \\frac{ v_2 } {f_2 h_2} = \\frac{ 0.18397} {0.5 \\left (23.67 \\right )} =0.0155446 \\;m^2 \\end{equation}\n\nwhere g2 is the cross-sectional area measured at a height of 2 m on the stem, v2 is the volume of the bole measured above 2 m, f2 is the form factor of the bole measured above 2 m that is obtained experimentally, and h2 is the height of the tree measured above 2 m.\n\nTherefore, the average form factor (fpe) of a paraboloid with a volume equivalent to that obtained by a xylometer is given by (eqn. 8):\n\n\\begin{equation} f_{pe} = \\frac{g_{xc}} {g_2} = \\frac{0.0076827}{0.0155446} =0.4942 \\end{equation}\n\nwhere fpe is the average form factor of a paraboloid equivalent to the xylometer volume, gxc is the constant cross-sectional area of an equivalent cylinder of the bole from 2 m to the top of the tree, and g2 is the cross-sectional area measured at a height of 2 m on the bole.\n\nAs can be seen, the initial proposal of applying the Apollonius paraboloid to the data was validated. However, this approach would not always be appropriate. If a cubic or semi-cubic paraboloid was applied instead, the results would be statistically different to the parametric value, as is evident in Tab. 4.\n\nIn order to validate the methodology proposed, a sample of 28 trees was taken in another E.urophylla forest stand. After calculating the average diameter of this sample, the tree closest to this value, which in this case was the 11th tree of the sample, was identified. The data are presented in Tab. 5.\n\nTab. 5 - Volumes obtained for validation using the proposed methodology. 
(d): diameter at 2 m; (h): height; (v1): volume (m³) ≤ 2 m (Smalian); (v2): volume (m³) ≥ 2 m to the tip (Smalian); (v3): volume (m³) ≥ 2 m to the tip (Apollonius paraboloid); (tv1): total volume (m³) Smalian (traditional); (tv2): total volume (m³) paraboloid (proposed).\n\nTree d (cm) h (m) v1 (m³) v2 (m³) v3 (m³) tv1 (m³) tv2 (m³)\n1 6.70 14.30 0.00742 0.02152 0.02168 0.02893 0.02910\n2 7.95 15.30 0.02012 0.01962 0.02261 0.03974 0.04273\n3 7.95 15.90 0.01025 0.03529 0.03312 0.04554 0.04337\n4 11.45 24.70 0.03052 0.09315 0.09620 0.12367 0.12671\n5 11.75 21.60 0.02289 0.09792 0.09887 0.12080 0.12176\n6 12.04 22.30 0.02230 0.10704 0.10743 0.12935 0.12973\n7 12.05 23.50 0.02476 0.11874 0.11803 0.14350 0.14278\n8 12.40 23.80 0.02421 0.12726 0.12270 0.15147 0.14691\n9 13.05 24.50 0.02865 0.14182 0.14196 0.17047 0.17061\n10 13.80 24.20 0.03017 0.14737 0.15030 0.17754 0.18047\n11 14.50 25.50 0.03469 0.18151 0.18129 0.21620 0.21599\n12 14.50 26.20 0.03469 0.19539 0.19023 0.23007 0.22491\n13 14.53 24.50 0.03355 0.16289 0.16618 0.19643 0.19972\n14 14.58 25.40 0.03519 0.17913 0.18025 0.21432 0.21544\n15 14.59 26.10 0.03754 0.19129 0.19262 0.22883 0.23016\n16 14.78 25.70 0.03715 0.18767 0.19037 0.22482 0.22752\n17 14.88 29.10 0.03541 0.23071 0.22421 0.26612 0.25962\n18 14.90 26.30 0.03778 0.20143 0.20015 0.23921 0.23793\n19 14.95 25.60 0.03496 0.19646 0.19390 0.23142 0.22886\n20 15.03 25.80 0.03565 0.20123 0.20262 0.23687 0.23827\n21 15.30 26.80 0.03861 0.21367 0.21491 0.25228 0.25352\n22 16.12 29.30 0.04203 0.25962 0.25992 0.30164 0.30194\n23 16.40 29.20 0.04468 0.26931 0.26953 0.31399 0.31421\n24 16.78 27.10 0.04532 0.26122 0.26405 0.30654 0.30937\n25 17.57 29.70 0.05032 0.31655 0.32063 0.36686 0.37094\n26 19.20 30.00 0.06251 0.38638 0.39140 0.44889 0.45991\n27 19.50 29.40 0.06823 0.37175 0.37701 0.43998 0.44924\n28 21.23 32.90 0.07183 0.51651 0.52842 0.58835 0.60025\n\nThe cross-sectional area of the cylinder, the volume of which is equivalent to the volume of the paraboloid obtained by Smalian’s method, is given by (eqn. 9):\n\n\\begin{equation} g_{xc} = \\frac{v_{2s}} {h_{t} - 2} = \\frac{0.18151}{23.5} = 0.0077238 \\;m^2 \\end{equation}\n\nwhere gxc is the constant cross-sectional area of an equivalent cylinder of the bole from 2 m to the top of the tree, v2s is the volume of the bole measured above 2 m equivalent to the volume of the paraboloid obtained by Smalian’s method, and ht is the total height of the tree.\n\nThe cross-sectional area of the base of the paraboloid is obtained by (eqn. 10):\n\n\\begin{equation} g_2 = \\frac{ \\pi }{4} d_2^2=0.7853982 (0.1403)^2 = 0.0154598\\; m^2 \\end{equation}\n\nwhere g2 is the cross-sectional area measured at a height of 2 m on the stem and d2 is the diameter measured at 2 m.\n\nThe form factor of the paraboloid to be used for the calculation of the volumes of the remaining sample trees is obtained by (eqn. 11):\n\n\\begin{equation} f_i = \\frac{g_{xc}}{g_2} = \\frac{0.0077238}{0.0154598} = 0.4996074 \\simeq 0.5 \\end{equation}\n\nwhere fi is the form factor of the paraboloid to be used to calculate the volumes of the remaining trees, gxc is the constant cross-sectional area of an equivalent cylinder of the bole from 2 m to the top of the tree, and g2 is the cross-sectional area measured at a height of 2 m on the bole.\n\nAs can be seen, the form factor for this new sample resulted in a value that was even closer to 0.5, i.e., the Apollonius paraboloid was appropriate for the calculation of the volumes of the remaining trees. 
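The arithmetic of eqns 9-11 for the reference validation tree can be checked in a few lines; the numbers are the ones printed in the text (volume above 2 m of 0.18151 m³ by Smalian's method, total height 25.5 m, and the 14.03 cm diameter at 2 m used in eqn. 10):

```python
import math

# Values from the worked example in eqns 9-11.
v2_smalian   = 0.18151   # m³, bole volume above 2 m obtained by Smalian's method
total_height = 25.5      # m, so the upper section is 23.5 m long
d2           = 0.1403    # m, diameter at 2 m as used in eqn. 10

g_xc = v2_smalian / (total_height - 2)   # eqn. 9: area of the equivalent cylinder
g_2  = math.pi / 4 * d2 ** 2             # eqn. 10: cross-sectional area at 2 m
f_i  = g_xc / g_2                        # eqn. 11: paraboloid form factor

print(f"g_xc = {g_xc:.7f} m²")   # ≈ 0.0077238
print(f"g_2  = {g_2:.7f} m²")    # ≈ 0.0154598
print(f"f_i  = {f_i:.7f}")       # ≈ 0.4996, i.e. the Apollonius paraboloid (0.5)
```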
The validation results are presented in Tab. 5.\n\nThe distribution of residuals for the volume obtained by Smalian’s method had errors of between -7.53 and 4.75%, with no evidence of bias. The correlation coefficient between the volumes obtained by Smalian’s method and the proposed paraboloid was 0.9997; see Fig. 5.\n\nFig. 5 - Correlations between volumes obtained by Smalian’s method and the proposed paraboloid method (a). Distribution of residuals (b). Histogram of residuals (c).\n\nThe analytical solution obtained by using a paraboloid in the second section of the tree above 2 m supports our first hypothesis. The proposed methodology for finding an appropriate form factor of a paraboloid to be used in the calculation of the volume equivalent to that obtained by a xylometer was efficient and accurate, which supports our second hypothesis.\n\n# Conclusions\n\nThe values obtained by traditional scaling using Smalian’s method (destructive) and the Smalian + Paraboloid method (non-destructive) were not statistically different to those obtained by the reference method (xylometer); therefore, they are equivalent.\n\nThe adjustment of the Schumacher & Hall model with data obtained by compatible methods (Xylometer, Smalian, and Smalian + Paraboloid) enabled us to obtain accurate equations for estimating tree volume in a E. urophylla stand.\n\nThe Smalian + Paraboloid method (without tree felling) would improve data collection and reduce the cost of the forest inventory activities that are conducted in forest plantations, and the method can obtain tree volumes in situations in which scaling (destructive) is impracticable.\n\nUpdating the volumetric estimators for older E. urophylla forest stands would be simple and inexpensive, and of major importance in ensuring the accuracy of volumetric estimators for old trees.\n\nThe use of an equivalent paraboloid to express the mean volume of trees sampled by the xylometer or traditional Smalian’s method maximizes the accuracy of volume estimation in stands of E. urophylla.\n\nThe methodology proposed for E. urophylla is suitable for other species, given the high uniformity and homogeneity of forest plantations in Brazil.\n\n# Acknowledgments\n\nThe authors would like to thank the Mixed Cooperative of Araguaia Valley (COMIGO) for financial support, providing the study site, and support during the field measurements.\n\nEPM designed and setup the study. EPM, GBA, GTOSA, and SPN analyzed the data and wrote the paper. AVR and RSP reviewed the paper.\n\n# References\n\n(1)\nAkindele SO, LeMay VM (2006). Development of tree volume equations for common timber species in the tropical rain forest area of Nigeria. Forest Ecology Management 226: 41-48.\nCrossRef | Gscholar\n(2)\nAkossou AYJ, Arzouma S, Attakpa EY, Fonton NH, Kokou K (2013). Scaling of teak (Tectona grandis) logs by the xylometer technique: accuracy of volume equations and influence of the log length. Diversity 5: 99-113.\nCrossRef | Gscholar\n(3)\nAndrade VCL, Souza AL, Calegário N (2014). Equações segmentadas de perfil do tronco geradas a partir da equação que descreve sólidos geométricos [Segmented trunk profile equations generated from the equation for obtaining geometric solids]. Pesquisa Florestal Brasileira 34: 285-301. [in Portuguese]\nCrossRef | Gscholar\n(4)\nAzevedo GB, Sousa GTO, Barreto PAB, Conceição Júnior V (2011). 
Estimativas volumétricas em povoamentos de eucalipto sob regime de alto fuste e talhadia no sudoeste da Bahia [Volumetric estimates in eucalypt plantation under regime of high forest and coppice in the southwest of Bahia, Brazil]. Pesquisa Florestal Brasileira 31: 309-318. [in Portuguese]\nCrossRef | Gscholar\n(5)\nBatista JLF, Couto HTZ, Silva Filho DF (2014). Quantificação de recursos florestais: árvores, arvoredos e florestas [Quantification of forest resources: trees, groves and forests] (1st edn). Oficina de Textos, São Paulo, Brasil, pp. 384. [in Portuguese]\nGscholar\n(6)\nBreymann K (1868). Anleitung zur Holzmeβkunst, Waldertrags bestimmung und Waldwertberechnung [Guidance for the art of measuring wood, determining forest yield and calculating forest value]. Wien, Austria, pp. 644. [in German]\nGscholar\n(7)\nDraper NR, Smith H (1998). Applied regression analysis (3rd edn). Wiley-Interscience, New York, USA. pp. 736.\nGscholar\n(8)\nFigueiredo Filho A, Machado SA, Carneiro MRA (2000). Testing accuracy of log volume calculation procedures against water displacement techniques (xylometer). Canadian Journal of Forest Research 30: 990-997.\nCrossRef | Gscholar\n(9)\nGrosenbauch LR (1966). Tree form: definition, interpolation, extrapolation. Forestry Chronicle 42: 444-457.\nCrossRef | Gscholar\n(10)\nHammer O, Harpe ATD, Ryan PD (2001). PAST: paleontological statistics software package for education and data analysis. Palaeontologia Electronica 4: 1-9.\nGscholar\n(11)\nHohenadl W (1922). Neue Grundlagen der Holzmessung [New foundations of wood measurement]. Forstwissenschaftliches Centralblatt 44: 184-194. [in German]\nCrossRef | Gscholar\n(12)\nHusch B, Miller IC, Beers TW (1983). Forest Mensuration. J. Wiley, New York, USA. pp. 402.\nGscholar\n(13)\nHöjer AG (1903). Tallens och granens tillväxt [Pine and spruce growth]. Bihang till Fr. Loven. Om våra barrskogar, Stockholm, Sweden, pp. 62. [in Swedish]\nGscholar\n(14)\nIUSS Working Group WRB (2010). A framework for international classification, correlation and communication. World Soil Resources Reports no. 103, Word Reference Base for Soil Resources, Food and Agriculture Organization of the United Nations, Rome, Italy, pp. 145.\nGscholar\n(15)\nKozak A, Munro DD, Smith JHG (1969). Taper functions and their applications in forest inventories. Forestry Chronicle 45: 278-283.\nCrossRef | Gscholar\n(16)\nLarson PR (1963). Stem form development of forest tree. Society of American Foresters, Washington, DC, USA. pp. 42.\nGscholar\n(17)\nLiu CJ (1980). Log volume estimation with spline approximation. Forest Science 26: 361-369.\nCrossRef | Gscholar\n(18)\nLoetsch F, Zoehrer F, Haller KE (1973). Forest Inventory (2nd edn). BLV, München, Germany. pp. 469.\nGscholar\n(19)\nMachado SA, Figueiredo Filho A (2014). Dendrometria [Dendrometry] (2nd edn). UNICENTRO, Guarapuava, Brasil, pp. 316. [in Portuguese]\nGscholar\n(20)\nMachado SA, Urbano E, Conceição MB, Figueiredo Filho A, Figueiredo DJ (2004). Comparação de modelos de afilamento do tronco para diferentes idades e regimes de desbaste em plantações de Pinus oocarpa Schiede [Comparison of taper functions for different ages and thinning regimes in Pinus oocarpa plantations]. Boletim de Pesquisa Florestal 48: 41-64. [in Portuguese]\nGscholar\n(21)\nMachado SA, Urbano E, Conceição MB (2005). 
Comparação de métodos de estimativa de volume para Pinus oocarpa em diferentes idades e diferente regimes de desbastes [Comparison of volume estimation methods for Pinus oocarpa at different ages and thinning regimes]. Pesquisa Florestal Brasileira 50: 81-98. [in Portuguese]\nGscholar\n(22)\nMachado SA, Téo SJ, Urbano E, Figura MA, Silva LCR (2006). Comparação de métodos de cubagem absolutos com o volume obtido pelo xilômetro para bracatinga (Mimosa scabrella Bentham) [Comparison of absolute scaling methods with the volume obtained by the xylometer method for bracatinga (Mimosa scabrella Bentham)]. Cerne 12: 239-253. [in Portuguese]\nGscholar\n(23)\nMachado SA, Figura MA, Silva LCR, Téo SJ, Stolle L, Urbano E (2008). Modelagem volumétrica para bracatinga (Mimosa scabrella) em povoamentos da Região Metropolitana de Curitiba [Volumetric modeling for bracatinga (Mimosa scabrella) in bracaringa stands in Curitiba Metropolitan Region]. Pesquisa Florestal Brasileira 56: 17-29. [in Portuguese]\nGscholar\n(24)\nMurta Júnior LS, Oliveira MLR, Nogueira GS, Castro RVO, Pelli E, Silva EA (2015). Estimation of total volume of Eucalyptus through measurements at the tree base. Australian Journal of Basic and Applied Sciences 9: 134-141.\nGscholar\n(25)\nPiñeiro G, Perelman S, Guerschman JP, Paruelo JM (2008). How to evaluate models: observed vs. predicted or predicted vs. observed? Ecological Modelling 216: 316-322.\nCrossRef | Gscholar\n(26)\nPreuβner K (1974). Eine neue schaftkurvengleichung und ihre anwendung [A new solution for taper curve equations and its application]. Wissenschaftliche Zeitschrift 23: 305-309. [in German]\nGscholar\n(27)\nProdan M (1965). Holzmesslehre [Learning to measure trees]. J. D. Sauerländer’s Verlag, Frankfurt, Deutschland. pp. 644. [in German]\nGscholar\n(28)\nPéllico Netto S, Brena DA (1997). Inventário florestal [Forest inventory]. Editorado pelos autores, Curitiba, Brazil, pp. 316. [in Portuguese]\nGscholar\n(29)\nRiecke F (1840). Über die Berechnung des körperlichen Inhalts und beschlagener Baumstämme [On the calculation of the corp content and evaluation of tree stems]. Stuttgart, Germany, pp. 75. [in German]\nGscholar\n(30)\nRoiko-Jokela P (1976). Die schaftformfunktion der fichte und die bestimung der sortimente am stehenden baum [The taper function for the spruce and the determination of the assortments at the standing tree]. Eidgenössische Anstalt für das Forst Versuchswesen, Zürich, Switzerland. pp. 52. [in German]\nGscholar\n(31)\nRow C, Guttenberg S (1966). Determining weight-volume relationship for saw logs. Forest Products Journal 16: 39-47.\nGscholar\n(32)\nSchiffel A (1902). Die kubierung von rundholz aus zwei durchmessern und der lange [The volume evaluation of tree stem with two diameters and the length]. Verschswesen Österreichs, Wien, Austria. pp. 148. [in German]\nGscholar\n(33)\nSchröder T, Pereira LD, Hofiço NSA (2013). Comparação de métodos de estimativa de volume total para Eucalyptus grandis W. Hill ex Maiden [Comparing methods for total volume estimation for Eucalyptus grandis W. Hill ex Maiden]. Floresta e Ambiente 20: 480-486. [in Portuguese]\nCrossRef | Gscholar\n(34)\nSchuberg K (1891). Formzahlen und massentafeln für die weiβtanne [Form factors and volume tables for spruce]. Verlag Paul Parey, Berlin, Germany, pp. 165. [in German]\nGscholar\n(35)\nSchumacher FX, Hall FS (1933). Logarithmic expression of timber-tree volume. Journal of Agricultural Research 47: 719-734.\nGscholar\n(36)\nSchöpfer W (1966). 
Automatisierung der massen-, sorten- und wertberechnung stehender waldbestände [Automation of the mass, species and value calculation of standing forest stands] (21st edn). Landesforstverwaltung Baden-Württemberg, Stuttgart, Germany, pp. 108. [in German]\nGscholar\n(37)\nSimony O (1904). Über Form zahlengleichungen und deren forst-mathematische Verwertung [About form numerical equations and their forest-mathematical utilization]. Wien, Austria, pp. 54. [in German]\nGscholar\n(38)\nSiqueira Neto M, Piccolo MC, Costa Junior C, Cerri CC, Bernoux M (2011). Emissão de gases do efeito estufa em diferentes usos da terra no bioma Cerrado [Greenhouse gas emission caused by different land-uses in brazilian Savannah]. Revista Brasileira de Ciência do Solo 35: 63-76. [in Portuguese]\nCrossRef | Gscholar\n(39)\nSmalian HL (1837). Beitrag zur holzmeβkunst [Contribution to the art of measuring wood]. Verlag Löffler, Stralsund, Germany, pp. 87. [in German]\nGscholar\n(40)\nSoares CPB, Silva GF, Martins FB (2010). Influence of section lengths on volume determination in Eucalyptus trees. Cerne 16: 155-162.\nCrossRef | Gscholar\n(41)\nSoares C, Paula Neto F, Souza A (2011). Dendrometria e inventário florestal [Forest mensuration and forest inventory] (2nd edn). Editora UFV, Viçosa-MG, Brasil, pp. 272. [in Portuguese]\nGscholar\n(42)\nSoil Survey Staff (2010). Keys to soil taxonomy (11th edn). USDA, Washington, pp. 338.\nOnline | Gscholar\n(43)\nSpurr SH (1952). Forest Inventory. The Ronald Press Company, New York, USA, pp. 476.\nGscholar\n(44)\nStatsoft I (2004). Statistica (data analysis software system), version 7.0.\nGscholar\n(45)\nVan Laar A, Akça A (2007). Forest mensuration. Springer, Dordrecht, The Netherlands, pp. 383.\nGscholar\n(46)\nEMBRAPA (2013). Sistema brasileiro de classificação de solo [Brazilian soil classification system] (3rd edn). Embrapa Solos, Brasília, Brazil, pp. 342. [in Portuguese]\nGscholar\n\n#### Authors’ Affiliation\n\n(1)\nEder Pereira Miguel\nAlba Valéria Rezende\nReginaldo Sérgio Pereira\nDepartment of Forest Science, University of Brasília (UnB), Campus Darcy Ribeiro, Brasília, CEP 70910-900 (Brazil)\n(2)\nSylvio Péllico Netto\nDepartment of Forest Science, Federal University of Paraná (UFPR), Av. Prof. Lothário Meissner, 900 Campus III UFPR - Botanical Garden, Curitiba, CEP 80210-170 (Brazil)\n(3)\nGileno Brito de Azevedo\nGlauce Taís de Oliveira Sousa Azevedo\nFederal University of Mato Grosso do Sul (UFMS), Campus Chapadão do Sul, Rod. MS 306, Km 105, CXP 112, Chapadão do Sul/MS, CEP 79560-000 (Brazil)\n\n#### Corresponding author\n\nEder Pereira Miguel\nmiguelederpereira@gmail.com\n\n#### Citation\n\nMiguel EP, Péllico Netto S, Azevedo GB, Azevedo GTOS, Rezende AV, Pereira RS (2018). Alternative methods of scaling Eucalyptus urophylla trees in forest stands: compatibility and accuracy of volume equations. iForest 11: 275-283. 
- doi: 10.3832/ifor2155-011

#### Paper history

Received: Jun 27, 2016
Accepted: Jan 09, 2018

First online: Mar 29, 2018
Publication Date: Apr 30, 2018
Publication Time: 2.63 months

© SISEF - The Italian Society of Silviculture and Forest Ecology 2018

#### Open Access

This article is distributed under the terms of the Creative Commons Attribution-Non Commercial 4.0 International (https://creativecommons.org/licenses/by-nc/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
http://www.itwbnb.com/page.aspx?name=downtown-bnb&area=hualien
"• 無線網路\n• 盥洗用品\n• 咖啡吧\n• 電冰箱\n• 洗衣機\n• 空調設施\n• 電視\n• 停車場\n• 脫水機\n• 電梯\n• 門票優惠券\n• 套裝行程\n• 代租汽機車\n• 免費早餐\n\nTWD 2000 / 晚 起\n\n• 無線網路\n• 盥洗用品\n• 自行車\n• 電冰箱\n• 空調設施\n• 電視\n• 代購當地土產\n• 門票優惠券\n• 套裝行程\n• 代租汽機車\n\nTWD 2600 / 晚 起\n\n• 無線網路\n• 盥洗用品\n• 自行車\n• 電冰箱\n• 空調設施\n• 電視\n• 溜滑梯\n• 代購當地土產\n• 門票優惠券\n• 套裝行程\n• 代租汽機車\n\nTWD 2300 / 晚 起\n\n• 無線網路\n• 自行車\n• 電冰箱\n• 廚房\n• 電腦\n• 洗衣機\n• 茶具\n• 餐廳\n• 電視\n• 停車場\n• 脫水機\n• 火車站接送\n• 門票優惠券\n• 套裝行程\n• 代租汽機車\n• 免費早餐\n\nTWD 1680 / 晚 起\n\n• 無線網路\n• 盥洗用品\n• 自行車\n• 咖啡吧\n• 電冰箱\n• 茶具\n• 空調設施\n• 烤肉場地\n• 電視\n• 浴缸\n• 代客宅配服務\n• 門票優惠券\n• 套裝行程\n• 代租汽機車\n• 免費早餐\n\nTWD 1800 / 晚 起\n\n• 無線網路\n• 盥洗用品\n• 電冰箱\n• 空調設施\n• 電視\n• 有線網路\n• 代租汽機車\n• 代訂船票\n\nTWD 1600 / 晚 起\n\n• 無線網路\n• 盥洗用品\n• 自行車\n• 電冰箱\n• 空調設施\n• 電視\n• 電暖器\n• 停車場\n• 火車站接送\n• 門票優惠券\n• 套裝行程\n• 代租汽機車\n• 免費下午茶\n• 免費早餐\n• 英語接待\n\nTWD 2400 / 晚 起\n\n• 無線網路\n• 當地機場接送\n• 火車站接送\n\nTWD 1800 / 晚 起\n\n• 無線網路\n• 自行車\n• 電冰箱\n• 庭園\n• 電視\n• 停車場\n• 當地機場接送\n• 火車站接送\n• 門票優惠券\n• 套裝行程\n• 代租汽機車\n• 代訂船票\n• 免費早餐\n\nTWD 1800 / 晚 起"
https://sonichours.com/how-many-oz-is-2-7-liters/
"# How Many Oz Is 2.7 Liters\n\nIf you want to know how much 2.7 liters are in liquid form, you’ll need to convert it to fluid ounces. There are two ways to find out how many ounces are in a liter. The first way is to calculate the amount of water in a litre. This way, you can use the volume of a cup to determine how much liquid is contained in a liter. The second way is to calculate the mass of a liquid by dividing its volume by its volume.\n\nThe general rule is to drink at least 8 ounces of water each day. That’s an estimate, and it’s not scientifically proven that this is the recommended amount. In fact, drinking a lot of water may not be beneficial. One liter of liquid equals approximately 1,000 milliliters. For reference, a gallon of water is equivalent to 2.7 gallons or 231 cubic inches.\n\nThe Institute of Medicine has set general guidelines for total water intake. Women should aim to consume 91 ounces of liquid per day, which is 2.7 liters. Men should drink about 125 ounces of liquid per day. However, the majority of this water comes from food. Therefore, you should also drink at least a liter a day if you are trying to improve your health.\n\nTo determine how many ounces are in a liter, you must first understand what a liter is. For example, a gallon of water contains 1000 milliliters. For women, this would be roughly 91 ounces. For men, this would be the equivalent of about a quart. Using this method, you’ll be able to estimate how many oz is 2.7 lb.\n\nIt is important to know that a gallon of water is one thousand mL. An eight ounce glass contains 237 milliliters, which is 1.9 liters. For men, an eight ounce glass equals a quarter of a liter. In general, the same applies to drinking more than a quart of water. But consuming more than a liter is harmful.\n\nThe number of ounces in a liter is also related to the size of the glass. For a liter, one cup is eight ounces. A quart is one liter. A quart is 2.2 liters. A quart is one pound. A litre of water is equal to a 1.7 liter. If you’re interested in a conversion, you can use this conversion chart.\n\nA gallon of water is essential for human life. The average adult needs a gallon of water every day. This number can vary depending on the size and activity level of a person. If you’re a sportsperson, you need to drink more water than that. If you’re an office worker, a half-liter glass of it is a quart of liquid. If you’re a student, you need to drink three litres of the same quantity to get the same effect.\n\nVisit the rest of the site for more useful articles!"
https://gmd.copernicus.org/articles/14/889/2021/
"Geosci. Model Dev., 14, 889–905, 2021\nhttps://doi.org/10.5194/gmd-14-889-2021\nGeosci. Model Dev., 14, 889–905, 2021\nhttps://doi.org/10.5194/gmd-14-889-2021\n\nModel description paper 11 Feb 2021\n\nModel description paper | 11 Feb 2021",
null,
"# The Framework For Ice Sheet–Ocean Coupling (FISOC) V1.1\n\nThe Framework For Ice Sheet–Ocean Coupling (FISOC) V1.1\nRupert Gladstone1, Benjamin Galton-Fenzi2,3, David Gwyther3, Qin Zhou4, Tore Hattermann5,10, Chen Zhao3, Lenneke Jong2, Yuwei Xia6, Xiaoran Guo6, Konstantinos Petrakopoulos8, Thomas Zwinger9, Daniel Shapero7, and John Moore1,6 Rupert Gladstone et al.\n• 1Arctic Centre, University of Lapland, Rovaniemi, Finland\n• 2Australian Antarctic Division, Kingston, Tasmania, Australia\n• 3Australian Antarctic Program Partnership, Institute of Marine and Antarctic Studies, University of Tasmania, Hobart, Tasmania, Australia\n• 4Akvaplan-niva AS, Tromsø, Norway\n• 5Norwegian Polar Institute, Tromsø, Norway\n• 6College of Global Change and Earth System Science, Beijing Normal University, Beijing, China\n• 7Polar Science Center, Applied Physics Laboratory, University of Washington, Seattle, USA\n• 8Center for Global Sea Level Change, New York University Abu Dhabi, United Arab Emirates\n• 9CSC IT Center for Science, Espoo, Finland\n• 10Energy and Climate Group, Department of Physics and Technology, University of Tromsø – The Arctic University, Tromsø, Norway\n\nAbstract\n\nA number of important questions concern processes at the margins of ice sheets where multiple components of the Earth system, most crucially ice sheets and oceans, interact. Such processes include thermodynamic interaction at the ice–ocean interface, the impact of meltwater on ice shelf cavity circulation, the impact of basal melting of ice shelves on grounded ice dynamics and ocean controls on iceberg calving. These include fundamentally coupled processes in which feedback mechanisms between ice and ocean play an important role. Some of these mechanisms have major implications for humanity, most notably the impact of retreating marine ice sheets on the global sea level. In order to better quantify these mechanisms using computer models, feedbacks need to be incorporated into the modelling system. To achieve this, ocean and ice dynamic models must be coupled, allowing runtime information sharing between components. We have developed a flexible coupling framework based on existing Earth system coupling technologies. The open-source Framework for Ice Sheet–Ocean Coupling (FISOC) provides a modular approach to coupling, facilitating switching between different ice dynamic and ocean components. FISOC allows fully synchronous coupling, in which both ice and ocean run on the same time step, or semi-synchronous coupling in which the ice dynamic model uses a longer time step. Multiple regridding options are available, and there are multiple methods for coupling the sub-ice-shelf cavity geometry. Thermodynamic coupling may also be activated. We present idealized simulations using FISOC with a Stokes flow ice dynamic model coupled to a regional ocean model. We demonstrate the modularity of FISOC by switching between two different regional ocean models and presenting outputs for both. We demonstrate conservation of mass and other verification steps during evolution of an idealized coupled ice–ocean system, both with and without grounding line movement.\n\nShare\n1 Introduction\n\nThe Antarctic and Greenland ice sheets have the potential to provide the greatest contributions to global sea level rise on century timescales , with the greatest uncertainty in projections being due to marine ice sheet instability (MISI; Mercer1978; Schoof2007; Robel et al.2019). 
Ice dynamic behaviour is strongly sensitive to ocean currents, in particular the transport of warmer waters across the continental shelf, causing high basal melt rates under ice shelves . For Antarctica's Pine Island Glacier, which is likely undergoing unstable retreat due to MISI, ocean-induced basal melting has been established as a trigger for MISI through both observational evidence and model studies . While MISI is fundamentally a geometrically controlled phenomenon, its onset and the resulting rate of ice mass loss are strongly dependent on tight coupling between ice dynamic behaviour and ocean processes. Importantly, ocean-driven basal melt rates respond to the evolving geometry of ice shelf cavities , and the grounded-ice dynamic behaviour responds to the evolving basal melt rates through their impact on the buttressing force provided by ice shelves to the grounded ice. While most model-based ice sheet studies use relatively simple parameterizations for calculating basal melt rates beneath ice shelves, recent studies have highlighted limitations of this approach . In particular, melt parameterizations as a function of depth or thermal driving do not impose conservation of heat in the system, and none of the parameterizations fully capture the impact of evolving ice geometry on cavity circulation.\n\nSeveral projects to couple ice sheet and ocean models are underway, and most (including the current study) will contribute to the Marine Ice Sheet–Ocean Model Intercomparison Project first phase (MISOMIP1) and its child projects: the Marine Ice Sheet Model Intercomparison Project third phase (MISMIP+) and the Ice Shelf Ocean Model Intercomparison Project second phase (ISOMIP+; ).\n\nCoupling projects take different approaches to handling the different timescales of ice and ocean processes. An ice sheet flow line model coupled to a five-box ocean model allows large ensemble simulations to be carried out but is limited in terms of implementation of physical processes . A temporally synchronous approach allows the cavity geometry to evolve on the ocean time step as a function of the melt rates calculated by the ocean model and the ice dynamics calculated by the ice model . Asynchronous approaches incorporate a longer time step for ice than ocean and sometimes involve coupling through file exchange and with restarts for the ocean model .\n\nHere, we present a new, flexible Framework for Ice Sheet–Ocean Coupling (FISOC; Sect. 2). FISOC allows runtime coupling in which ice and ocean components are compiled as runtime libraries and run through one executable. FISOC provides the user a selection of synchronicity options. Adopting Earth System Modeling Framework terminology (ESMF; Sect. 2), we refer to an ocean model coupled through FISOC as an “ocean component” and an ice sheet or ice dynamic model coupled through FISOC as an “ice component”. We use FISOC to couple two different 3-D ocean models to an ice dynamic model and present idealized simulations demonstrating mass conservation and consistent grounding line behaviour (Sect. 3). FISOC is also currently being used to contribute to ISOMIP+ and MISOMIP1.\n\n2 Methodology\n\nFISOC is an open-source coupling framework built using the ESMF . FISOC aims to provide seamless runtime coupling between an existing ice sheet model and an existing ocean model for application to Antarctic ice sheet–ocean systems. 
In its current form, FISOC assumes that the important ice sheet–ocean interactions occur at the underside of a floating ice shelf and that the lower surface of the ice shelf can be projected on to the horizontal plane.\n\nFISOC aims to provide flexibility and computational efficiency through the following key features.\n\n• Flexible modular architecture (Sect. 2.1) facilitates swapping between different ice components or between different ocean components according to purpose (Sect. 2.2).\n\n• Access to ESMF tools allows multiple regridding and interpolation options, including between regular grids and unstructured meshes (Sect. 2.3).\n\n• Multiple options for handling differing ice and ocean timescales include fully synchronous coupling, passing rates of change, time averaging of variables (Sects. 2.4 and 2.5).\n\n• Flexible runtime control over the exchange of variables allows specific coupling modes to be (de)activated as required, e.g. geometric coupling and thermodynamic coupling.\n\n• Grounding line movement (Sect. 2.8) is implemented using geometry change rates and a modified wet–dry scheme in the ocean component, with multiple options available for updating cavity geometry (Sect. 2.5).\n\n• Flexibility for parallelization options is provided; currently sequential coupling is implemented, but any combination of sequential and concurrent parallelization is possible with minimal coding effort (see also Sect. 2.1.1).\n\n• ESMF compatibility means that FISOC can be embedded within any ESMF-based modelling system, e.g. as a regional model within a global model.\n\n• ESMF compatibility also means that additional ESMF components (e.g. an atmosphere model) could easily be added to the coupled system.\n\nThese features are described further in the following sections and in the FISOC manual, which can be found in the FISOC repository (see the code availability section at the end of this paper).\n\n## 2.1 Software design\n\nWhile coupled models in Earth system science have been in existence for decades and such coupled models are often viewed as single entities (ocean–atmosphere general circulation models, for example), the field of coupled ice sheet–ocean modelling is relatively young. FISOC is intended as a framework for coupling independent models rather than as a coupled model in itself. Building and running a coupled ice sheet–ocean model is currently more complex than building and running both an ice and an ocean model independently. FISOC aims to minimize the additional complexity.\n\nThe ice and ocean components may use their standard runtime input files, and their paths are set in a FISOC runtime configuration file, along with information about time stepping and variables to be exchanged.\n\nFISOC adopts the hierarchical modular structure of the Earth System Modelling Framework. The FISOC code structures are summarized in Fig. 1. A top-level executable is called a FISOC parent module (this could in principle also be embedded within a larger coupled model framework). The parent module coordinates calling of the ice, ocean and regridding components. Regridding is one of the reasons to make use of ESMF, described further in Sect. 2.3. The ice and ocean components are independent models that are not included in the FISOC code repository and compiled as libraries to be called by FISOC during runtime. 
On each side (ice and ocean) of the coupling is a model-specific wrapper, whose main runtime functions are as follows:\n\n• call the component's initialize, run, and finalize routines as required;\n\n• convert the component's grid or mesh to ESMF format using ESMF data structures;\n\n• read from or write to the component's required state variables, converting between the component's native data structures and ESMF data structures.\n\nFurther processing of variables (such as calculating rates of change) is implemented by the ice and ocean generic code modules.\n\nIncorporating a new ice or ocean component into FISOC can be straightforward, depending on the existing level of ESMF compatibility of the new component. Models able to provide mesh information and variables in ESMF data structures can be very easily built in to FISOC. The only coding required for a new component is a new model-specific wrapper in the FISOC repository. Copying an existing wrapper can be a viable starting point.",
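The wrapper pattern described above can be sketched roughly as follows. This is an illustrative Python schematic only, not FISOC code (the real wrappers are Fortran modules built against the ESMF library), and every class, method and field name here is invented for the example.

```python
import numpy as np

class ToyOceanModel:
    """Stand-in for an ocean component compiled as a callable library (illustrative only)."""
    def __init__(self, n_cells):
        self.melt_rate = np.zeros(n_cells)   # basal melt rate on the ocean grid (m a-1)
        self.ice_draft = np.zeros(n_cells)   # reference ice draft, positive up (m)

    def step(self, n_steps, dt_seconds):
        # A real component would integrate the ocean state here.
        self.melt_rate[:] = 1.0              # placeholder melt field

class OceanWrapper:
    """Schematic model-specific wrapper: initialize/run/finalize plus field exchange."""
    def __init__(self, model, dt_seconds):
        self.model = model
        self.dt = dt_seconds

    def initialize(self):
        # In FISOC this phase would also convert the component's grid to ESMF form.
        return self.export_state()

    def run(self, import_state, coupling_interval_seconds):
        # Read required variables from the (regridded) import state...
        self.model.ice_draft[:] = import_state["ice_draft"]
        # ...advance the component over one coupling interval...
        self.model.step(int(coupling_interval_seconds // self.dt), self.dt)
        # ...and write its outputs back into an export state.
        return self.export_state()

    def export_state(self):
        return {"basal_melt_rate": self.model.melt_rate.copy()}

    def finalize(self):
        pass
```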
"Figure 1Overview of FISOC code structures. OM and ISM are short for ocean model and ice sheet model (or component), respectively. ImpSt and ExpSt are short for import state and export state, respectively.",
"Figure 2FISOC workflow. The black arrow indicates the direction of simulated time. The yellow arrows indicate the order of events during a FISOC simulation. The green boxes indicate when regridding and passing of variables between components occurs. The length of the blue boxes in the vertical indicates the relative size of time steps and coupling interval (this is illustrative; in practice there will be many more OM time steps per ISM time step and the ISM time step size will usually equal the coupling interval).\n\nTable 2Model choices and input parameters used in verification experiment 1 (VE1, Sect. 3.1) and verification experiment 2 (VE2, Sect. 3.2) comprising four simulations in total: VE1_ER, VE1_EF, VE2_ER and VE2_EF. Component abbreviations in these simulation names are E (Elmer/Ice), R (ROMS), and F (FVCOM). Semi-structured refers to a mesh that is in principle unstructured but where in practice a structure can be seen (see Fig. 3 middle and lower panels). STOD stands for source to destination.",
"Figure 3Unstructured meshes used in the current study. The first 15 km is shown. The ocean model in the ER simulations uses a structured grid.\n\n### 2.1.1 Sequential parallelism\n\nFISOC currently adopts a sequential parallelism paradigm. Each component runs on the full set of available persistent execution threads (PETs). PET is an ESMF abstraction catering for multiple parallelism options. FISOC has so far used only the message passing interface (MPI), in which one PET wraps one MPI process.\n\nThe sequential workflow is illustrated in Fig. 2. The order of events during time stepping is as follows. The ocean component is called for the full number of ocean time steps required to complete one coupling interval. Ocean outputs are then regridded and passed to the ice component, which also runs for as many time steps as are required to complete one coupling interval. The ice component outputs are then regridded and passed to the ocean component. The ice component time step size is equal to the coupling interval for all simulations in the current study.\n\nThe initialization is not shown in Fig. 2, but we note that this is similar to the runtime event order: the ocean component is initialized first, followed by regridding and then the ice component. There are two initialization phases for each component, allowing for the possibility that variables may be needed to be passed from ice component to ocean component in order to finalize initialization.\n\nThis ordering of events imposes a lag in the system. While the ice component receives ocean variables for the current coupling interval, the ocean component only receives ice variables for the previous coupling interval. This could be reversed (running the ice component before the ocean component) or could be modified such that both components receive variables from the other component for the previous coupling interval.\n\nWhile FISOC implements sequential parallelism, ESMF also supports concurrent parallelism. Concurrent parallelism allows different components to run at the same time on different subsets of the available PETs. This approach is beneficial when different components have very different computational costs and parallel scaling: a cheap component that scales poorly is more effectively run on a subset of the available PETs, and concurrent parallelism allows this to be implemented more efficiently than sequential parallelism. This could easily be implemented in FISOC if it becomes necessary, as the components, which utilize MPI, are assigned a distinct MPI communicator during initialization. This communicator could be made to represent a subset of the available PETs. In principle, concurrent parallelism also offers sub-time-step coupling: it is possible to exchange variables between components during convergence of numerical schemes. Such coupling is unlikely to be implemented within FISOC as the timescales for ice and ocean components are so different. While sequential coupling imposes a lag between components (described above), concurrent coupling implemented in FISOC would impose a lag in both components: exchange of variables in both directions would occur at the end of the coupling interval.\n\n### 2.1.2 Error handling\n\nThe ESMF adopts a defensive strategy to error handling: all errors are logged and passed back up the call stack. The calling routine has the option of attempting to continue running in the event of errors occurring. 
As the call structure between FISOC and ESMF is one way (FISOC routines may call ESMF routines but not vice versa), all such errors are eventually returned to FISOC.\n\nFISOC adopts a fail-fast approach. Errors are generally considered to be fatal, in which case FISOC will log error information and finalize both ice and ocean components and ESMF. FISOC also aims to provide consistency checks, most of which are considered fatal if not passed. For example, ice and ocean input files might both contain time-stepping information, potentially duplicating information in the FISOC runtime configuration file, and these can be checked for consistency in the model-specific wrappers. The general intention is to stop running if something unexpected happens and provide a meaningful message to the user about why.\n\nThere are a few cases where ESMF errors can be handled at runtime. Details can be found in the FISOC manual, which can be accessed from the FISOC repository (see the code availability section at the end of this paper).\n\n## 2.2 Components\n\nFISOC is designed to facilitate swapping between different ocean or ice components. Currently two different ocean components and one ice component are available through FISOC. Table 1 summarizes components currently coupled into FISOC. In some cases, a non-standard build of the component is required for FISOC compatibility, and these cases are described in the FISOC manual, which can be obtained through the FISOC repository (Sect. 2.1).\n\nThe ice component Elmer/Ice is a powerful, flexible, state-of-the-art ice dynamic model.\n\nThe Regional Ocean Modeling System () is a 3-D terrain-following, sigma coordinate ocean model that has already been adapted to use in ice shelf cavities . The module for ice shelf cavities implemented in the Finite Volume Community Ocean Model (FVCOM; Chen et al.2003) provides non-hydrostatic options and a horizontally unstructured mesh that lends itself to refinement and may be more suited to small-scale processes such as ice shelf channels .\n\n## 2.3 Regridding\n\nAs stated above, FISOC provides coupling on a horizontal plane onto which the lower surface of an ice shelf can be projected. It is this plane on which ice and ocean properties are exchanged through the FISOC. Adapting the FISOC code to handle a vertical ice cliff is expected to be straightforward and would be desirable for application to the Greenland ice sheet. More complex 3-D ice–ocean interface geometries are challenging not only for FISOC but also for the current generation of ice sheet and ocean models.\n\nFISOC has access to all the runtime regridding options provided by ESMF. These include nearest-neighbour options, conservative options, patch recovery and bilinear regridding. These options are available for structured grids and unstructured meshes. FISOC requires that both ice and ocean components define their grid or mesh on the same coordinate system and that both components use the same projection. All FISOC simulations to date have used a Cartesian coordinate system (i.e. all components have so far used Cartesian coordinates).\n\nOur current FISOC setup does not meet the requirements for all forms of ESMF regridding. Specifically, the conservative methods, when an unstructured mesh is involved, require that field values are defined on elements and not on nodes. Elmer, by default, provides field values on nodes but can also provide element-wide values or values on integration points within elements. 
We will need to either map nodal values to element values or utilize element-type variables in order to use conservative regridding, and this is intended as a future development.\n\nWhen using FISOC to couple Elmer/Ice to ROMS, the ROMS grid extends beyond the Elmer/Ice mesh. This is due to ROMS using a staggered grid (Arakawa C-grid) and ghost cells extending beyond the active domain. This necessitates the use of extrapolation. ESMF regridding methods provide options for extrapolation, which are used here. Simulations in the current study use either nearest “source to destination” (STOD, a form of nearest neighbour) regridding or use bilinear interpolation (in which case nearest STOD is used only for destination points that lie outside the source domain).\n\nWe use subscripts with square brackets, [X], where X is either O (ocean component) or I (ice component), to denote a variable that exists in both ice and ocean components with the same physical meaning but potentially different values due to being represented on different grids or meshes.\n\n## 2.4 Coupling timescales\n\nThe timescales for sub-shelf cavity circulation behaviour are in general much shorter than the timescales for ice flow and geometry evolution (typically minutes to days instead of years to centuries). Typical time step sizes are correspondingly smaller for ocean models (seconds to minutes) than for ice sheet models (days to months). A single ice sheet model time step, if the Stokes equations are solved in full, will typically require orders of magnitude more computational time than a single ocean time step. Due to the combination of these two reasons the ice and ocean components of FISOC will in general use different time steps, with the ice time step size being much larger. We define relevant terminology for coupling timescales below.\n\n• Fully synchronous coupling. The ice and ocean components have the same time step size, and they exchange variables every time step.\n\n• Semi-synchronous coupling. The ice component has a larger time step than the ocean component, but the ocean component's cavity geometry and grounding line position are allowed to evolve on the ocean time step (e.g. by using ice velocities from a previous ice time step or rates of change based on the two most recent time steps).\n\n• Asynchronous coupling. The ice component has a larger time step than the ocean component. Cavity geometry is updated on the ice component time step or less frequently.\n\n• Coupling interval. The time interval at which the ice and ocean components exchange variables.\n\nIn the current study, FISOC sets the coupling interval as equal to the ice component time step size. This is an exact multiple of the ocean model time step size. More generally (for potential future experiments), FISOC calls each component for a fixed time period and allows the component to determine its own time stepping within that period. In principle, adaptive time stepping could be implemented within this framework as long as each component runs for the required amount of simulated time. FISOC does not currently provide an option to vary the coupling interval during a simulation, but this could be implemented if needed.\n\nFISOC is flexible with regard to time processing of ocean or ice variables. It is possible to cumulate variables, calculate averages, or use snapshots. 
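As a rough illustration of the semi-synchronous sequencing and time processing described above, the sketch below advances one coupling interval (one ice time step spanning several ocean time steps), accumulates a time-averaged melt rate and derives a draft rate of change for the next interval. It is a simplified stand-in for, not a copy of, FISOC's control flow; `ocean_run` and `ice_run` are hypothetical callables representing the wrapped components.

```python
import numpy as np

def coupling_cycle(ocean_run, ice_run, z_draft_prev, dzdt_prev, dt_ocean, dt_ice):
    """Advance one coupling interval; dt_ice must be an exact multiple of dt_ocean."""
    n_ocean_steps = int(round(dt_ice / dt_ocean))

    # 1. The ocean component runs first, evolving its cavity with the rate of draft
    #    change from the previous interval and returning instantaneous melt fields.
    melt_sum = None
    for _ in range(n_ocean_steps):
        melt = ocean_run(dzdt_prev, dt_ocean)
        melt_sum = melt if melt_sum is None else melt_sum + melt
    melt_avg = melt_sum / n_ocean_steps          # time-averaged basal melt rate

    # 2. The ice component then takes one (long) time step forced by the averaged melt.
    z_draft_new = ice_run(melt_avg, dt_ice)

    # 3. Rate of draft change for the next interval, from the two most recent drafts.
    dzdt_new = (z_draft_new - z_draft_prev) / dt_ice
    return z_draft_new, dzdt_new, melt_avg
```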
In the current study, the ocean components (both ROMS and FVCOM) calculate averaged basal melt rates over the coupling interval and pass these averages through FISOC to the ice component. Because the ice component time step size is equal to the coupling interval for all simulations in the current study, no time processing of ice component variables is needed.

In principle, FISOC supports all three synchronicity options, though fully synchronous coupling is not practical to achieve when solving the Stokes equations for the ice component. The experiments carried out for this paper use semi-synchronous coupling with cavity geometry evolution as described in Sect. 2.5. Some existing coupling efforts implement fully synchronous coupling, whereas others implement asynchronous coupling with ocean restarts.

## 2.5 Handling cavity evolution

The evolution of cavity geometry under the ice shelf, defined by a reference ice draft, zd (positive upward), and grounding line location, is calculated by the ice component forced by the melt rates passed from the ocean component. We refer to zd as a "reference" ice draft because the ocean component may further modify the ice draft according to the dynamic pressure field. The ocean component's "free surface" variable, ζ, represents the height of the upper surface of the ocean domain relative to a mean sea level for the open ocean. Under the ice shelf, ζ represents the deviation of the upper surface of the ocean domain relative to the reference ice draft zd. To summarize the meaning of the key variables, zd[I] is the reference ice draft computed by the ice component, zd[O] is the same but regridded for the ocean component, and (zd[O] + ζ) is the actual ice draft according to the ocean component.

Given the potential for non-synchronicity of the ice and ocean component time stepping, several methods are implemented in FISOC for the ocean to update its representation of zd. All the processing options described below are applied on the ocean grid after the ice component representation of ice geometry has been regridded (i.e. zd[I] regridded to zd[O]).

Most recent ice. The simplest option is that the ocean component uses the ice draft directly from the most recent ice component time step. If fully synchronous coupling is used, this option should be chosen. The main disadvantage of this approach for semi-synchronous or asynchronous coupling is that, due to the much longer time step of the ice component, the ocean component will experience large, occasional changes in ice draft instead of a smoothly evolving ice draft. This could be both physically unrealistic and potentially numerically challenging for the ocean component.

Rate. The vertical rate of change of ice draft, $\frac{\mathrm{d}z_{\mathrm{d}}}{\mathrm{d}t}$, is calculated by FISOC after each ice component time step using the two most recent ice component time steps. If we assume that the ice component completes a time step at time t, the rate at this time is given by

$$\frac{\mathrm{d}z_{\mathrm{d}[O,t]}}{\mathrm{d}t}=\frac{z_{\mathrm{d}[I,t]}-z_{\mathrm{d}[I,t-\Delta t_{I}]}}{\Delta t_{I}},\tag{1}$$

where $z_{\mathrm{d}[O,t]}$ is the ocean component's reference ice draft at time t, $z_{\mathrm{d}[I,t]}$ is the ice component's reference ice draft at time t, $z_{\mathrm{d}[I,t-\Delta t_{I}]}$ is the ice component's reference ice draft at time t − ΔtI and ΔtI is the ice component time step size.
This rate of change is used by the ocean component to update the cavity geometry until the next ice component time step completes. In this sense the ocean component lags the ice component, as mentioned in Sect. 2.1.1. This approach provides temporally smooth changes to the ocean representation of the ice draft but has the potential for the ice and ocean representations to diverge over time as a result of regridding artefacts.

Corrected rate. This is the same as above, except that a drift correction is applied to ensure that ice and ocean representations of cavity geometry do not diverge:

$$\frac{\mathrm{d}z_{\mathrm{d}[O,t]}}{\mathrm{d}t}=\frac{z_{\mathrm{d}[I,t]}-z_{\mathrm{d}[I,t-\Delta t_{I}]}+f_{\text{cav}}\left(z_{\mathrm{d}[I,t]}-z_{\mathrm{d}[O,t]}\right)}{\Delta t_{I}},\tag{2}$$

where fcav is a cavity correction factor between 0 and 1. Equation (2) is applied at coupling time steps, and the calculated rate of cavity change is then held constant during ocean component evolution until the coupling interval completes. Conceptually, this option prioritizes ice–ocean geometry consistency over mass conservation.

Linear interpolation. The ocean representation of the ice draft is given by temporal linear interpolation between the two most recent ice sheet time steps. This imposes additional lag of the ocean component behind the ice component.

The above options are all implemented in FISOC, but only the "rate" and "corrected rate" approaches are used in the current study.

The cavity geometry may be initialized independently by ice and ocean components. In this case, the user must ensure consistency. It is also possible for the cavity geometry from the ice component to be imposed on the ocean component during FISOC initialization. This ensures consistency.

Handling cavity evolution is a little more complicated in the case of an evolving grounding line, as discussed in Sect. 2.8 below.

## 2.6 Thermodynamics at the ice–ocean interface

Exchange of heat at the ice–ocean interface is handled within the ocean model. Like many ocean models, FVCOM and ROMS adopt the three-equation formulation for thermodynamic exchange. This parameterization assumes that the interface is at the in situ pressure freezing point and that there is a heat balance and salt balance at the interface. Both ROMS and FVCOM assume constant turbulent transfer coefficients for scaling the heat and salt fluxes through the interface, with thermal and saline exchange velocities calculated as the product of these coefficients with friction velocity. Further details of the ROMS- and FVCOM-specific implementations of the three-equation formulation are given in the respective model description papers. An ablation or melt rate is calculated for each ocean model grid cell, which is then passed to FISOC as a boundary condition for the lower surface of the ice model at the coupling time interval.

Internally, both ocean models account for the thermodynamic effect of basal melting by imposing virtual heat and salt fluxes within a fixed geometry at each ocean model time step to mimic the effects of basal melting, rather than employing an explicit volume flux at the ice–ocean interface.
Independent of this, a geometry change is passed back from the ice model through FISOC after each coupling interval (including the effect of melting and freezing, as well as any ice dynamical response), which is used to update the ocean component cavity shape (Sect. 2.5).

For some applications, conductive heat fluxes into the ice shelf due to vertical temperature gradients in the ice at the ice–ocean interface are required by the three-equation parameterization to calculate the flux balance at the ice–ocean interface. While ice–ocean thermodynamic parameterizations in ocean-only models must make an assumption about this temperature gradient, FISOC can pass the temperature gradient from the ice component directly to the ocean component. This feature is not demonstrated in the current study but will be properly tested in future studies.

Non-zero basal melt rates may be calculated by the ocean component in regions that are defined as grounded by the ice model. This could occur due to isolated patches of ungrounding upstream of the grounding line or discrepancies between the ice and ocean components' representations of the grounded region. Basal melt rates are masked using the ice component's grounded mask before being applied within the ice component. This has the potential to impact on mass conservation in the coupled system. Future studies utilizing conservative regridding will ensure that passing masked field variables between components remains conservative.

## 2.7 Interface pressure

Aside from the geometry evolution, an ocean boundary condition for pressure at the ice–ocean interface, Pinterface, must be provided to the ocean component. FISOC can pass pressure directly from ice to ocean components. However, using actual ice overburden directly as an upper ocean boundary condition results in higher horizontal pressure gradients at the grounding line (for dry cells, see Sect. 2.8) than ocean models can typically handle. In the current study, the ocean component uses the reference ice draft (see Sect. 2.5) to estimate floatation pressure. ROMS assumes a constant reference ocean density:

$$P_{\text{interface}}=-g\,\rho_{\text{or}}\,z_{\mathrm{d}[O]},\tag{3}$$

where g is acceleration due to gravity, ρor is a reference ocean density and zd[O] is the ocean representation of ice draft (positive upward). For the current study, all simulations with ROMS use $\rho_{\text{or}}=1027\ \mathrm{kg\,m^{-3}}$. FVCOM assumes a constant vertical ocean density gradient:

$$P_{\text{interface}}=-g\left(\rho_{o1}+0.5\,\frac{\mathrm{d}\rho_{o}}{\mathrm{d}z}\,z_{\mathrm{d}[O]}\right)z_{\mathrm{d}[O]},\tag{4}$$

where ρo is ocean water density, ρo1 is ocean water density of the top ocean layer, and the vertical ocean water density gradient, $\frac{\mathrm{d}\rho_{o}}{\mathrm{d}z}$, is set to $8.3927\times 10^{-4}\ \mathrm{kg\,m^{-4}}$.

## 2.8 Grounding line evolution

Grounding line movement in FISOC requires that both ice and ocean components support it.
Numerical convergence issues place constraints in terms of mesh resolution for representing grounding line movement in ice sheet models. While FISOC allows ice draft to be passed to the ocean component (Sect. 2.5), FISOC does not impose the ice component grounding line position on the ocean component. Instead, the ocean component uses the evolving cavity geometry to evolve the grounding line.

A recent ice–ocean coupling study used a "thin film" approach to allow grounding line movement. A thin passive water layer is allowed to exist under the grounded ice, and an activation criterion is imposed to allow the layer to inflate to represent grounding line retreat. The current study takes a conceptually similar approach, modifying the existing wetting and drying schemes independently in both ROMS and FVCOM. "Dry" cells are used for the passive water column under grounded ice, and "wet" cells are used for the active water column under floating ice or the open ocean. The wet–dry mask is two dimensional, so while it is conventional to talk about dry or wet cells, this actually refers to dry or wet columns. The grounding line evolves in the two horizontal dimensions and is represented in the ocean component as the vertical surface between dry and wet columns.

The original criterion in both ROMS and FVCOM for a cell to remain dry is given by

$$\zeta - z_{b} < D_{\text{crit}},\tag{5}$$

where zb is the bottom boundary depth (bathymetry, or bedrock depth, positive upward), and Dcrit is a critical water column thickness for wet or dry activation. Dcrit is a parameter to be set by the user (typical values lie between 1 and 20 m). Thus, cells with a water column thickness less than Dcrit are designated as dry. Flux of water into dry cells is allowed, but flux of water out of dry cells is prevented.

The FVCOM criterion for an element to be dry has been modified for the presence of a marine ice sheet and ice shelf system as follows:

$$-z_{b} + z_{\mathrm{d}[O]} < D_{\text{crit}}.\tag{6}$$

This is a purely geometric criterion based entirely on the geometry determined by the ice component. The ROMS criterion for a cell to be dry has been modified for the presence of a marine ice sheet and ice shelf system as follows:

$$\zeta - z_{b} - \left(z_{s[O]} - z_{\mathrm{d}[O]} - D_{\text{crit}}\right)\frac{\rho_{\mathrm{i}}}{\rho_{\text{or}}} \le 0,\tag{7}$$

where zs[O] is the ocean representation of ice sheet and ice shelf upper surface height. zs[O] is needed in this equation because the floatation assumption cannot be made for grounded ice. This equation essentially compares ζ against the height above buoyancy of the grounded ice. In other words, if the dynamic variations in ocean pressure are sufficient to overcome the higher ice pressure due to the positive height above buoyancy, the cell can become ungrounded.
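For illustration, the two dry-cell criteria above (Eqs. 6 and 7) can be written as a short NumPy sketch. This is not FISOC or ocean-model code; it assumes all fields are already on the ocean grid, uses the sign convention above (elevations positive upward), and takes the ice and reference ocean densities used later in this paper.

```python
import numpy as np

RHO_ICE = 910.0      # kg m-3
RHO_OCEAN = 1027.0   # kg m-3 (reference ocean density)

def fvcom_dry_mask(z_b, z_d_ocean, d_crit):
    """Purely geometric criterion (Eq. 6): a column is dry if thinner than D_crit."""
    return (-z_b + z_d_ocean) < d_crit

def roms_dry_mask(zeta, z_b, z_s_ocean, z_d_ocean, d_crit):
    """Criterion including the free surface (Eq. 7): dynamic pressure variations
    can unground a column if they exceed the height-above-buoyancy term."""
    thickness_term = (z_s_ocean - z_d_ocean - d_crit) * RHO_ICE / RHO_OCEAN
    return (zeta - z_b - thickness_term) <= 0.0
```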
The conceptual difference between the FVCOM and ROMS wetting criteria is that ROMS allows dynamic ocean pressure variations to make minor grounding line adjustments relative to the grounding line determined by the ice geometry, whereas FVCOM uses just the ice geometry to determine grounding line position.\n\nFISOC allows the ice component to pass any geometry variables to the ocean, such as ice draft, ice thickness, upper surface elevation or the rates of change of any of these variables. In the event that geometry variables other than zd are passed to the ocean, the same processing method is used as for zd, as described in Sect. 2.5. In the current study, $\\frac{\\mathrm{d}{z}_{\\mathrm{d}}}{\\mathrm{d}t}$ is passed to the ocean component, and in one case both $\\frac{\\mathrm{d}{z}_{\\mathrm{d}}}{\\mathrm{d}t}$ and $\\frac{\\mathrm{d}{z}_{s}}{\\mathrm{d}t}$ are passed (details in Sect. 3). When $\\frac{\\mathrm{d}{z}_{s}}{\\mathrm{d}t}$ is passed, $\\frac{\\mathrm{d}{z}_{s}}{\\mathrm{d}t}$ is processed the same way as $\\frac{\\mathrm{d}{z}_{\\mathrm{d}}}{\\mathrm{d}t}$. If the grounding line problem is solved and if zd is processed for passing to the ocean using the corrected rate method, Eq. (2) is modified to account for the dry water column thickness, which is initialized to Dcrit. The correction term changes from ${f}_{\\text{cav}}\\left({z}_{\\mathrm{d}\\left[I,t\\right]}-{z}_{\\mathrm{d}\\left[O,t\\right]}\\right)$ to ${f}_{\\text{cav}}\\left(max\\left({z}_{\\mathrm{d}\\left[I,t\\right]},{z}_{b}+{D}_{\\text{crit}}\\right)-{z}_{\\mathrm{d}\\left[O,t\\right]}\\right)$.\n\nThere are no connectivity restrictions on wetting and drying for either of the ocean components in the current study. This means that it is possible for individual cells or regions containing multiple cells that are upstream of the grounding line to become wet (i.e. to unground). This occurs on small spatial and temporal scales in ROMS (individual cells a short distance upstream of the grounding line sometimes become temporarily wet) but not at all in FVCOM (likely due to choice of wetting criterion).\n\n3 Verification experiment design\n\nSimulations are carried out on idealized domains as a proof of concept to demonstrate the coupling rather than to address scientific questions. Verification experiment 1 (VE1) aims to assess whether the coupled system conserves mass. Verification experiment 2 (VE2) aims to assess whether the ocean and ice representations of grounding line evolution are consistent.\n\n## 3.1 Verification experiment 1: floating adjustment\n\nVerification experiment 1 (VE1) is a simple experiment in which a linearly sloping ice shelf is allowed to adjust toward steady state. The experiment is not run long enough to attain steady state but is long enough to demonstrate the evolution of the coupled system; see Table 2 for run length and a summary of other model choices and parameter values used in VE1.\n\nAll ice and ocean vertical side boundaries are closed: there is no flow in or out of the domain. There is mass exchange between the ice and ocean (and therefore also heat exchange). The coupling centres on the evolution of ice geometry: the ocean component passes an ice shelf basal melt rate to the ice component, and the ice component passes a rate of change of ice draft to the ocean component.\n\nWe expect adjustment toward a uniform-thickness ice shelf to occur by the following two mechanisms.\n\n1. Ice dynamics. 
The gravitational driving force will tend to cause flow from thicker to thinner regions.

2. Melt and freeze. The greater pressures at greater depth should result in higher melt rates, with the potential for refreezing under thinner regions.

### 3.1.1 Domain size and meshes

The domain is 30 km across the expected direction of ice flow (y direction) by 100 km along the flow (x direction) for simulation VE1_ER. However, ocean component FVCOM (used in VE1_EF) uses a semi-structured (in principle unstructured but in practice exhibiting some structure) mesh with the dimensions 31 km by 99 km. This results from an auto-generated mesh method using a uniform resolution of 2 km for its triangular elements. FISOC does not in general require that ice and ocean component domains precisely overlap. Indeed the region of overlap is allowed to be small relative to the domains (for example, an Antarctic ice stream interacts with the ocean only in its floating shelf, and the majority of the catchment may be grounded with no possibility to interact with the ocean for the duration of an intended simulation). However, given that we aim to address mass conservation in the coupled system, we choose to require a precise domain match between ice and ocean components for the current study. Therefore, for simulations presented in the current study, the ice component has a slightly different domain when coupled to ROMS as compared to when coupled to FVCOM. For VE1_EF the ice component runs on an almost identical mesh to the ocean component. The only difference is at two diametrically opposite corners, where FVCOM prefers to maintain element shape but Elmer/Ice prefers to maintain a strictly rectangular domain (in order to facilitate imposition of consistent boundary conditions at the corners of the domain). These mesh differences are visually summarized in Fig. 3.

### 3.1.2 Ice component setup

The initial geometry is of an ice shelf at floatation (i.e. hydrostatic equilibrium). The initial ice draft is given in metres by

$$z_{\mathrm{d}} = -450 + 400\left(\frac{x}{100\,000}\right),\tag{8}$$

where x is distance in metres along the domain. The initial geometry does not vary across the ice flow (y direction). Ice and ocean water densities used in the ice component are $\rho_{\mathrm{i}} = 910\ \mathrm{kg\,m^{-3}}$ and $\rho_{\text{or}} = 1027\ \mathrm{kg\,m^{-3}}$, respectively. These densities, along with the floatation assumption, determine the ice upper surface.

The pressure acting on the underside of the ice shelf is given by Eq. (3).

Temperature in the ice component is constant through space and time at −5 °C.

VE1 includes ice flow and geometry evolution solving the Stokes equations directly. Glen's power law rheology with n = 3 is implemented.

Zero accumulation is prescribed at the upper ice surface. The melt rate from the ocean component is applied at the lower surface. Flow through the vertical side boundaries is not allowed.

Elmer/Ice-specific details. The Stokes equations are solved within Elmer/Ice.
A 2-D horizontal mesh of triangles with an approximate element size of 1 km (VE1_ER) or 2 km (VE1_EF) is extruded in the vertical to give 11 equally spaced terrain-following layers with the bulk element shape being triangular prisms.\n\n### 3.1.3 Ocean component setup\n\nThe ocean bathymetry is set to 500m throughout the domain. The wet–dry scheme (Sect. 2.8) is not used in this experiment, as the whole domain is ice shelf cavity with no grounded ice. Boundaries are closed and rotation is disabled. Ocean potential temperature is initialized at −1.85C, and salinity is initialized at 34.6 on the practical salinity scale. Ice–ocean thermodynamics are captured by means of the three-equation parameterization (Sect. 2.6).\n\nThe ocean conditions are chosen to represent a cold cavity ice shelf, such as the Amery Ice Shelf. In this configuration, both basal melting and refreezing can occur.\n\nROMS-specific details. The horizontal resolution is a constant 1 km. There are 11 vertical layers, with a sigmoidal terrain-following distribution configured to provide increased resolution near the top and bottom surfaces. The ROMS baroclinic (slow) time step size is 200 s, and there are 30 barotropic (fast) time steps for every slow time step. Interior mixing is parameterized with the K-Profile Parameterization . Background vertical mixing coefficients for tracers and momentum are set to constant values of $\\mathrm{5.0}×{\\mathrm{10}}^{-\\mathrm{5}}$ and $\\mathrm{1.0}×{\\mathrm{10}}^{-\\mathrm{3}}\\phantom{\\rule{0.125em}{0ex}}{\\mathrm{m}}^{\\mathrm{2}}\\phantom{\\rule{0.125em}{0ex}}{\\mathrm{s}}^{-\\mathrm{1}}$, respectively, while horizontal viscosity and diffusivity are set to 6.0 and 1.0 m2 s−1, respectively.\n\nFVCOM-specific details. The horizontal grid resolution is 2 km (defined by the distance between adjacent nodes within a uniform triangular grid), and there are 11 uniformly spaced vertical terrain-following layers. Interior vertical mixing is parameterized using the Mellor and Yamada level 2.5 turbulent closure model (vertical Prandtl number =0.1) together with a constant background viscosity and diffusivity of ${\\mathrm{10}}^{-\\mathrm{6}}\\phantom{\\rule{0.125em}{0ex}}{\\mathrm{m}}^{\\mathrm{2}}\\phantom{\\rule{0.125em}{0ex}}{\\mathrm{s}}^{-\\mathrm{1}}$. An eddy closure parameterization is used for the horizontal mixing of momentum (viscosity) and tracers (diffusivity), with both the scaling factor and the Prandtl number being 0.1. Both the barotropic time step and the baroclinic time step sizes are 20 s.\n\n### 3.1.4 Coupling\n\nThe coupling interval is 10 d, the same as the ice component time step size. The cavity update method is rate (Sect. 2.5). For VE1_ER, the regridding method is bilinear using nearest STOD extrapolation for ocean cells that lie outside the ice domain due to grid stagger. For VE1_EF, nearest STOD regridding is used, which results in a one-to-one mapping between ice and ocean nodes due to the meshes being nearly identical (Sect. 3.1.1). There is no grounding line in this experiment.\n\n## 3.2 Verification experiment 2: grounding line evolution\n\nVerification experiment 2 (VE2) is a modified version of VE1 but with part of the region grounded and a net ice flow through the domain allowed. The setup is identical to VE1 except where stated otherwise in this section. 
This experiment aims to combine design simplicity with an evolving grounding line rather than to represent a system directly analogous to a real-world example.

### 3.2.1 Ice component setup

The VE2 initial geometry is given by

$$z_{\mathrm{b}} = -20 - 980\left(\frac{x}{100\,000}\right),\tag{9}$$

$$H = \frac{\rho_{\text{or}}}{\rho_{\mathrm{i}}}\left(470 - 400\left(\frac{x}{100\,000}\right)\right),\tag{10}$$

where zb is bedrock elevation relative to sea level and H is ice thickness. Then zd and zs are calculated based on floatation and the same densities as in VE1.

The depth-dependent inflow (x = 0) and outflow (x = 100 km for VE2_ER; x = 99 km for VE2_EF) boundary conditions for the ice component are given by

$$P_{\text{inflow}}(z) = \rho_{\mathrm{i}}\,g\,(z_{s} - z),\tag{11}$$

$$P_{\text{outflow}}(z) = \rho_{o}\,g\,z,\tag{12}$$

where Pinflow and Poutflow are pressures prescribed at the inflow and outflow boundaries, respectively, and z is height relative to sea level (positive up). Zero normal velocity and free-slip tangential velocity conditions are imposed at the side walls given by y = 30 km and either y = 0 (for VE2_ER) or y = −1 km (for VE2_EF).

The grounding line is allowed to evolve, solving a contact problem. The pressure acting on the underside of ungrounded ice is given by Eq. (3).

A sliding relation with a simple effective pressure dependency is used under the grounded ice,

$$\tau_{b} = -C\,u_{b}^{m}\,z_{*},\tag{13}$$

where τb is basal shear stress, ub is basal ice velocity, z* is the height above buoyancy (related to effective pressure at the bed, N, by N = ρi g z*), m is a constant exponent (set to m = 1/3) and C is a constant sliding coefficient (set to $C = 10^{-4}\ \mathrm{MPa\,m^{-4/3}\,a^{1/3}}$).

Height above buoyancy is calculated by

$$z_{*} = H + \frac{\rho_{\text{or}}}{\rho_{\mathrm{i}}}\,z_{\mathrm{b}}.$$

This is equivalent to assuming a sub-glacial hydrology system that is both fully connected to and in pressure balance with the ocean.

### 3.2.2 Ocean component setup

Ocean bathymetry matches the bedrock prescribed in the ice component (Eq. 9). The wet–dry scheme (Sect. 2.8) is used in this experiment, with a critical water column thickness of Dcrit = 5 m. Ocean potential temperature is initialized at 1.9 °C, and salinity is initialized at 34.6 on the practical salinity scale.

ROMS-specific details. The ROMS setup is identical to verification experiment 1, except that the baroclinic (slow) time step size is 100 s, with 30 barotropic (fast) time steps for every slow time step.

FVCOM-specific details. The FVCOM model setup is identical to that of verification experiment 1.

### 3.2.3 Coupling

The cavity update method for VE2_EF is rate (Sect. 2.5). For VE2_ER it is corrected rate, with a correction factor of fcav = 0.01. With the 10 d coupling interval, this equates to a full correction timescale of approximately 3 years. Other coupling details are as in VE1.
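For concreteness, the VE2 initial geometry and sliding relation can be written out as a short script. This is an illustrative sketch only: the grounded/floating split and the height-above-buoyancy expression follow from the floatation and ocean pressure-balance assumptions stated above, and the basal speed is a placeholder value, not a model result.

```python
import numpy as np

RHO_ICE, RHO_OCEAN = 910.0, 1027.0            # kg m-3

x = np.linspace(0.0, 100e3, 101)              # along-flow coordinate (m)

# Bedrock elevation and ice thickness (Eqs. 9 and 10)
z_b = -20.0 - 980.0 * (x / 100e3)
H = (RHO_OCEAN / RHO_ICE) * (470.0 - 400.0 * (x / 100e3))

# Draft and upper surface: floatation where possible, otherwise resting on the bed
z_d_float = -(RHO_ICE / RHO_OCEAN) * H
grounded = z_d_float < z_b
z_d = np.where(grounded, z_b, z_d_float)
z_s = z_d + H

# Sliding relation (Eq. 13) on the grounded part, with an illustrative basal speed
C, m = 1.0e-4, 1.0 / 3.0                      # MPa m^(-4/3) a^(1/3), exponent
u_b = 50.0                                    # m a-1 (placeholder)
z_star = np.maximum(0.0, H + (RHO_OCEAN / RHO_ICE) * z_b)   # height above buoyancy (m)
tau_b = np.where(grounded, -C * u_b**m * z_star, 0.0)        # basal shear stress (MPa)
```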
"Figure 4Coupled system state after the first (a) and last (b) coupling intervals from the experiment VE1_ER (Table 2). The ice shelf is shown in grey, with basal melt rate computed by the ocean shown in colour on the underside of the ice shelf. Ocean streamlines are shown beneath the ice shelf, with the greyscale indicating magnitude of simulated ocean velocity. The vertical coordinate is given in metres; the horizontal coordinates are given in kilometres. This was a 100-year simulation.",
"Figure 5Simulated mass evolution over time for the ocean component (dashed lines), the ice component (dash-dotted lines), and the total across both components (solid lines) from experiments VE1_ER (black) and VE1_EF (red).",
"Figure 6Profiles through the centre line for experiment VE2 after the first ice component time step (a) and after 25 years (b). Ice flow speed is shown (flow direction is right to left). Ocean temperature (solid colour) and salinity (contours) are shown after 25 years (these are uniform at the start of the run, hence the solid colour for the ocean in the upper plot). Vertical exaggeration is 50 times. The gap between ocean and ice shelf is half an ocean grid cell and is a plotting artefact (the upper extent of the plotted region for the ocean is the uppermost rho point, which is half an ocean grid cell below the top of the ocean domain).\n\n4 Verification experiment results\n\nFigure 4 summarizes the coupled system state at the start and end of simulation VE1_ER (see also Table 2 for a summary of the experiments). After the first coupling interval (10 d), the ocean component demonstrates a vigorous overturning circulation and high melt rates, especially in the deeper part of the domain. After the last coupling interval (100 years) the combination of melting and ice flow has caused a redistribution of the ice shelf, with an overall reduction in the along-domain gradients. The melt rates and overturning circulation are much weaker than at the start.\n\nThe ocean circulation throughout the simulation is predominantly a buoyancy-driven overturning along the domain, with very little cross-domain flow. The peak ocean flow speeds are always located at the top of the ocean domain directly under the ice shelf, where a fast, shallow buoyancy-driven flow from deeper to shallower ice draft is balanced by a much deeper return flow.\n\nFigure 5 shows the evolution over time of the total mass of both ice and ocean components and the total coupled system from experiments VE1_ER and VE1_EF. Note that both ocean models employ the Boussinesq approximation and that the mass in Fig. 5 is calculated as volume multiplied by the reference ocean density from Table 2. Relatively rapid mass transfer from the ice to the ocean occurs during the first few years as the relatively warm ocean water transfers its energy to the ice. After this initial period of net melting, the ocean water temperature is close to freezing point and a long-term freezing trend can be seen that is stronger and more sustained in the ROMS ocean component than FVCOM. In a physically realistic coupled system, the ice and ocean would come into thermodynamic equilibrium and the spatial net mass transfer would approach zero.\n\nThe net mass change of the coupled system is more than an order of magnitude smaller than the mass change of the individual components for both experiments VE1_ER and VE1_EF. The current study does not use conservative regridding (Sect. 2.3), and therefore machine precision conservation is not expected. There are additional potential sources of error. The lag of the ocean component behind the ice component (Sect. 2.1.1) will cause a similar lag in total mass evolution. Use of the corrected rate cavity option (Sect. 2.5) prioritizes geometry consistency between components above mass conservation. The aim of analysing mass conservation in the current study is to ensure that the cumulative impact of these potential error sources is small compared to the signal. This has been achieved, and it will be possible to quantify and minimize or eliminate all sources of error in future studies using conservative regridding methods.",
null,
"Figure 7Ocean horizontal velocities in the upper layer (black arrows) and basal melt rate (red indicates melting; blue indicates refreezing) after 25 years of simulation VE2_ER (a) and VE2_EF (b). Outputs on the FVCOM mesh were regridded onto a 1 km regular grid. Both FVCOM and ROMS outputs were subsampled at 2 km resolution for this plot.",
null,
"Figure 8(a) A comparison of grounded area in the ocean component (total area of dry cells) against grounded area in the ice component (total area of grounded elements). (b) The difference between ocean and ice grounded area. These are from simulation VE2_ER. The green lines are drawn such that their distance apart is equivalent to the area of one row of ocean grid cells.",
null,
"Figure 9(a) A comparison of grounded area in the ocean component (total area of dry elements) against grounded area in the ice component (total area of grounded elements). (b) The difference between ocean and ice grounded area. These are from simulation VE2_EF. The green lines are drawn such that their distance apart is equivalent to the area of one row of ocean elements.\n\n## 4.2 VE2: grounding line evolution\n\nThis is a partially grounded experiment in which the ice component boundaries are not closed, a through-domain flow of ice is allowed and the grounding line is allowed to evolve in the coupled system (described in Sect. 3.2). While the initial slope of the lower surface of the ice shelf is the same in both VE1 and VE2, the open inflow and outflow boundaries in the ice component and the relatively shallower ice in the grounded region both lead to a shelf that is much shallower in slope for VE2 than for VE1 for most of the simulation period. Figure 6 illustrates the shape of the ice sheet or ice shelf at the start of the simulation and after 25 years (from simulation VE2_ER). Note that the ice outflow boundary is more active than the inflow, with the flux into the domain through the inflow boundary remaining small and positive throughout the simulation. The ice draft is deepest in the middle of the domain, at around 30 km downstream (in terms of ice flow direction) from the grounding line. The ice draft impacts on circulation and melt, with the strong overturning of VE1_ER not present here. Melting occurs under the deepest ice, with refreezing elsewhere (Fig. 7).\n\nComparing the coupled simulation VE2_ER to the ice-only simulation (not shown) where the only difference is that the ice component features zero basal melt, it might be expected that the coupled simulation would exhibit a significantly thinner ice shelf due to melting. However, the ice dynamics partially compensate for this in terms of the ice geometry: the melt-induced thinning leads to acceleration in the ice, and the thickness difference is smaller than expected. However, this should not be interpreted as a stabilizing feedback response of ice dynamics to ocean-induced melting, as the increased ice flow would tend to drain the grounded ice more quickly, potentially triggering marine ice sheet instability (Schoof2007). Instead this effect may tend to partially mask an ocean-induced ice sheet destabilization if the observational focus is on ice shelf geometry.\n\nAs described in Sect. 2.8, the ice and ocean component each evolve the grounding line on their own time step and on their own grid or mesh. There is potential for discrepancy between ice and ocean grounded area due to method of cavity evolution (Sect. 2.5), regridding errors, the inherent differences between grids or meshes, and the methods used to determine grounding line position. While ice geometry is a key determinant of grounding line position, the ice component also tests for a contact force and the ocean component ROMS tests height above buoyancy against the free surface variable ζ (Sect. 2.8). Here we look at consistency of grounded area between components.\n\nThe evolution of grounded area in both ice and ocean components is shown in Fig. 8 for simulation VE2_ER. While the ice component employs an unstructured mesh of triangular elements (on the lower surface of the 3-D ice body), the ocean component employs a regular grid of square cells. The ocean component appears to exhibit a step-like reduction over time of grounded area. 
This is due to the row-by-row manifestation of grounding line retreat in the ocean component due to the alignment of grid rows with the linear down-sloping geometry. Grounding line retreat starts at the lateral edges of a row (ungrounding near the sidewall boundary), and the “wetting” of dry cells propagates toward the centre of the row. This step-like behaviour (with the spacing of the green lines in Fig. 8 indicating the total area of a row of cells) explains the main difference between ice and ocean grounded area. The evolution of grounded area is shown in Fig. 9 for simulation VE2_EF. Behaviour is similar to VE2_ER.\n\nThe initial rapid reduction in grounded area is due to the initial geometry. A region immediately upstream of the grounding line is initially very lightly grounded, and this region quickly becomes floating. The ocean component lags the ice component in this ungrounding, as can be seen in the first part of the difference plot in Figs. 8 and 9. This lag is in part due to the rate and corrected rate cavity update methods, in which the ocean component uses the most recent two ice component outputs to calculate a rate of change of geometry. This inevitably causes the ocean component to lag by approximately one coupling interval. The discrepancy may also be in part due to the fact that the region in question is close to floatation, and thus the threshold for dry cells to become wet is highly sensitive to ζ, at least for the ROMS implementation. In both experiments, the ice–ocean grounded area discrepancy has a tendency to reduce over time.\n\nThe computational time spent in both the ice and ocean components was measured for simulation VE2_ER. The ice component is more expensive than the ocean component during the first coupling interval but is significantly cheaper thereafter. Total time spent in the ice component over the 46-year simulation is approximately one-third that spent in the ocean component. The computational time spent within the central coupling code (calling routines and regridding) was negligible compared to time spent in ice and ocean components. This is with a 10 d coupling interval. If fully synchronous coupling is approached (i.e. if the coupling interval approaches the ocean time step size), the ice component will become much more expensive and it is possible the central coupling code may become significant. We do not anticipate fully synchronous ice–ocean coupling to become practical in the near future, at least not if the ice component directly solves the Stokes equations without simplifying assumptions, as is the case in the current study. The fully synchronous coupling of and is achieved by using the “shallow shelf approximation” for the ice component and running both components on the same grid.\n\n5 Conclusions\n\nWe have presented a flexible coupling framework for ice sheet, ice shelf and ocean models that allows the user to choose between different ice and ocean components. We have demonstrated the functioning of this framework in simple test cases, both with and without a moving grounding line. We have demonstrated conservation of mass and consistency of grounding line evolution using semi-synchronous coupling.\n\nFISOC provides runtime variable exchange on the underside of ice shelves. 
Providing such variable exchange at vertical ice cliffs, which are more common in Greenland than in Antarctica, will require minor developments to the coupling code, but the ocean components currently coupled through FISOC may need more significant developments in order to represent the buoyant plumes rising up ice cliffs.\n\nOur coupled modelling framework is suitable for studying Antarctic ice sheet, ice shelf and ocean interactions at scales ranging from investigations of ice shelf channels (features with a spatial scale of typically a few km) up to whole Southern Ocean–Antarctic ice sheet coupled evolution. We are currently setting up simulations across this range of scales to address key processes surrounding Antarctic Ice Sheet stability and sea level contribution.\n\nCode availability\n\nThe FISOC source code, version information for related software (including ice and ocean models used together with FISOC in the current study), and input files needed to run the experiments described in the current study are all publicly available (https://doi.org/10.5281/zenodo.4507182, ).\n\nData availability\n\nSupplement\n\nAuthor contributions\n\nRG led development, implementation of experiments, and paper writing. BGF, DG, QZ, TH, DS, CZ, LJ, XG, KP and TZ contributed to development and/or testing. BGF, DG, QZ, TH, CZ, YX and TZ contributed to implementation of experiments. All authors contributed to paper writing.\n\nCompeting interests\n\nThe authors declare that they have no conflict of interest.\n\nAcknowledgements\n\nRupert Gladstone was funded from the European Union Seventh Framework Programme (FP7/2007–2013; grant agreement no. 299035). This research was supported by the Academy of Finland (grant nos. 286587 and 322430). The authors wish to acknowledge CSC – IT Centre for Science, Finland, for computational resources. Tore Hattermann acknowledges financial support from the Norwegian Research Council (project no. 280727). Qin Zhou acknowledges financial support from the Norwegian Research Council (project no. 267660). Konstantinos Petrakopoulos's work was supported by NYU Abu Dhabi (CSLC grant no. G1204) . Benjamin Galton-Fenzi and Chen Zhao were supported under the Australian Research Council's Special Research Initiative for Antarctic Gateway Partnership (Project ID SR140300001) and received grant funding from the Australian Government for the Australian Antarctic Program Partnership (Project ID ASCI000002).\n\nFinancial support\n\nThis research has been supported by the European Union Seventh Framework Programme (grant no. 299035), the Academy of Finland (grant nos. 286587 and 322430), CSC – IT Centre for Science, Finland, the Norwegian Research Council (projects 280727 to Tore Hattermann and 267660 to Qin Zhou). Konstantinos Petrakopoulos's work was supported by NYU Abu Dhabi (CSLC grant no. G1204). Benjamin Galton-Fenzi and Chen Zhao were supported under the Australian Research Council's Special Research Initiative for Antarctic Gateway Partnership (Project ID SR140300001) and received grant funding from the Australian Government for the Australian Antarctic Program Partnership (Project ID ASCI000002).\n\nReview statement\n\nThis paper was edited by Philippe Huybrechts and reviewed by Xylar Asay-Davis and one anonymous referee.\n\nReferences\n\nAsay-Davis, X. S., Cornford, S. L., Durand, G., Galton-Fenzi, B. K., Gladstone, R. M., Gudmundsson, G. H., Hattermann, T., Holland, D. M., Holland, D., Holland, P. R., Martin, D. 
F., Mathiot, P., Pattyn, F., and Seroussi, H.: Experimental design for three interrelated marine ice sheet and ocean model intercomparison projects: MISMIP v. 3 (MISMIP +), ISOMIP v. 2 (ISOMIP +) and MISOMIP v. 1 (MISOMIP1), Geosci. Model Dev., 9, 2471–2497, https://doi.org/10.5194/gmd-9-2471-2016, 2016. a\n\nBudd, W., Keage, P. L., and Blundy, N. A.: Empirical studies of ice sliding, J. Glaciol., 23, 157–170, 1979. a\n\nBudd, W., Jenssen, D., and Smith, I.: A 3-dimensional time-dependent model of the West Antarctic Ice-Sheet, Ann. Glaciol., 5, 29–36, https://doi.org/10.3189/1984AoG5-1-29-36, 1984. a\n\nChen, C., Liu, H., and Beardsley, R. C.: An unstructured grid, finite-volume, three-dimensional, primitive equation ocean model: Application to coastal ocean and estuaries, J. Atmos. Ocean. Tech., 20, 159–186, 2003. a\n\nChristianson, K., Bushuk, M., Dutrieux, P., Parizek, B. R., Joughin, I. R., Alley, R. B., Shean, D. E., Abrahamsen, E. P., Anandakrishnan, S., Heywood, K. J., Kim, T.-W., Lee, S. H., Nicholls, K., Stanton, T., Truffer, M., Webber, B. G. M., Jenkins, A., Jacobs, S., Bindschadler, R., and Holland, D. M.: Sensitivity of Pine Island Glacier to observed ocean forcing, Geophys. Res. Lett., 43, 10,817–10,825, https://doi.org/10.1002/2016GL070500, 2016. a\n\nChurch, J., Clark, P., Cazenave, A., Gregory, J., Jevrejeva, S., Levermann, A., Merrifield, M., Milne, G., Nerem, R., Nunn, P., Payne, A., Pfeffer, W., Stammer, D., and Unnikrishnan, A.: Sea Level Change, in: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change, edited by: Stocker, T., Qin, D., Plattner, G.-K., Tignor, M., Allen, S., Boschung, J., Nauels, A., Xia, Y., Bex, V., and Midgley, P., Cambridge University Press, Cambridge, UK and New York, NY, USA, 2013. a\n\nCollins, N., Theurich, G., DeLuca, C., Suarez, M., Trayanov, A., Balaji, V., Li, P., Yang, W., Hill, C., and da Silva, A.: Design and Implementation of Components in the Earth System Modeling Framework, Int. J. High Perform. C., 19, 341–350, https://doi.org/10.1177/1094342005056120, 2005. a\n\nCornford, S. L., Martin, D. F., Graves, D. T., Ranken, D. F., Le Brocq, A. M., Gladstone, R. M., Payne, A. J., Ng, E., and Lipscomb, W. H.: Adaptive mesh, finite volume modeling of marine ice sheets, J. Comput. Phys., 232, 529–549, https://doi.org/10.1016/j.jcp.2012.08.037, 2013. a\n\nDe Rydt, J. and Gudmundsson, G. H.: Coupled ice shelf-ocean modeling and complex grounding line retreat from a seabed ridge, J. Geophys. Res.-Earth, 121, 865–880, https://doi.org/10.1002/2015JF003791, 2016. a, b\n\nDe Rydt, J., Holland, P. R., Dutrieux, P., and Jenkins, A.: Geometric and oceanographic controls on melting beneath Pine Island Glacier, J. Geophys. Res.-Oceans, 119, 2420–2438, https://doi.org/10.1002/2013JC009513, 2014. a\n\nDinniman, M. S., Klinck, J. M., and Smith Jr., W. O.: Influence of sea ice cover and icebergs on circulation and water mass formation in a numerical circulation model of the Ross Sea, Antarctica, J. Geophys. Res.-Oceans, 112, C11013, https://doi.org/10.1029/2006JC004036, 2007. a\n\nFavier, L., Durand, G., Cornford, S. L., Gudmundsson, G. H., Gagliardini, O., Gillet-Chaulet, F., Zwinger, T., Payne, A. J., and Le Brocq, A. M.: Retreat of Pine Island Glacier controlled by marine ice-sheet instability, Nat. Clim. Change, 4, 117–121, https://doi.org/10.1038/NCLIMATE2094, 2014. a\n\nFavier, L., Jourdain, N. 
C., Jenkins, A., Merino, N., Durand, G., Gagliardini, O., Gillet-Chaulet, F., and Mathiot, P.: Assessment of sub-shelf melting parameterisations using the oceanice-sheet coupled model NEMO(v3.6)Elmer/Ice(v8.3) , Geosci. Model Dev., 12, 2255–2283, https://doi.org/10.5194/gmd-12-2255-2019, 2019. a, b\n\nGagliardini, O., Zwinger, T., Gillet-Chaulet, F., Durand, G., Favier, L., de Fleurian, B., Greve, R., Malinen, M., Martín, C., Råback, P., Ruokolainen, J., Sacchettini, M., Schäfer, M., Seddik, H., and Thies, J.: Capabilities and performance of Elmer/Ice, a new-generation ice sheet model, Geosci. Model Dev., 6, 1299–1318, https://doi.org/10.5194/gmd-6-1299-2013, 2013. a, b, c, d, e\n\nGalton-Fenzi, B. K., Hunter, J. R., Coleman, R., Marsland, S. J., and Warner, R. C.: Modeling the basal melting and marine ice accretion of the Amery Ice Shelf, J. Geophys. Res.-Oceans, 117, C09031, https://doi.org/10.1029/2012JC008214, 2012. a, b\n\nGladstone, R., Lee, V., Vieli, A., and Payne, A.: Grounding Line Migration in an Adaptive Mesh Ice Sheet Model, J. Geophys. Res.-Earth, 115, F04014, https://doi.org/10.1029/2009JF001615, 2010a. a\n\nGladstone, R. M., Payne, A. J., and Cornford, S. L.: Parameterising the grounding line in flow-line ice sheet models, The Cryosphere, 4, 605–619, https://doi.org/10.5194/tc-4-605-2010, 2010b. a\n\nGladstone, R., Lee, V., Rougier, J., Payne, A. J., Hellmer, H., Le Brocq, A., Shepherd, A., Edwards, T. L., Gregory, J., and Cornford, S. L.: Calibrated prediction of Pine Island Glacier retreat during the 21st and 22nd centuries with a coupled flowline model, Earth Planet. Sc. Lett., 333, 191–199, https://doi.org/10.1016/j.epsl.2012.04.022, 2012. a, b\n\nGladstone, R. M., Warner, R. C., Galton-Fenzi, B. K., Gagliardini, O., Zwinger, T., and Greve, R.: Marine ice sheet model performance depends on basal sliding physics and sub-shelf melting, The Cryosphere, 11, 319–329, https://doi.org/10.5194/tc-11-319-2017, 2017. a, b\n\nGladstone, R., Zhao, C., Shapero, D., and Guo, X.: The Framework for Ice Sheet – Ocean Coupling (FISOC) v1.1 (Version v1.1), Zenodo, https://doi.org/10.5281/zenodo.4507182, 2020. a\n\nGlen, J. W.: Experiments on the deformation of ice, J. Glaciol., 2, 111–114, 1952. a\n\nGoldberg, D., Snow, K., Holland, P., Jordan, J., Campin, J.-M., Heimbach, P., Arthern, R., and Jenkins, A.: Representing grounding line migration in synchronous coupling between a marine ice sheet model and a z-coordinate ocean model, Ocean Model., 125, 45–60, https://doi.org/10.1016/j.ocemod.2018.03.005, 2018. a, b, c, d, e, f\n\nHellmer, H. and Olbers, D.: A two-dimensional model for the thermohaline circulation under an ice shelf, Antarct. Sci., 1, 325–336, https://doi.org/10.1017/S0954102089000490, 1989. a\n\nHellmer, H. H., Kauker, F., Timmermann, R., Determann, J., and Rae, J.: Twenty-first-century warming of a large Antarctic ice-shelf cavity by a redirected coastal current, Nature, 485, 225–228, https://doi.org/10.1038/nature11064, 2012. a\n\nHill, C., DeLuca, C., Balaji, Suarez, M., and Silva, A. D.: The Architecture of the Earth System Modeling Framework, Comput. Sci. Eng., 6, 18–28, https://doi.org/10.1109/MCISE.2004.1255817, 2004. a\n\nHolland, D. M. and Jenkins, A.: Modeling Thermodynamic Ice–Ocean Interactions at the Base of an Ice Shelf, J. Phys. Oceanogr., 29, 1787–1800, https://doi.org/10.1175/1520-0485(1999)029<1787:MTIOIA>2.0.CO;2, 1999. a\n\nJenkins, A., Nicholls, K. W., and Corr, H. F. 
J.: Observation and Parameterization of Ablation at the Base of Ronne Ice Shelf, Antarctica, J. Phys. Oceanogr., 40, 2298–2312, https://doi.org/10.1175/2010JPO4317.1, 2010. a\n\nLarge, W. G., McWilliams, J. C., and Doney, S. C.: Oceanic vertical mixing: A review and a model with a nonlocal boundary layer parameterization, Rev. Geophys., 32, 363–403, https://doi.org/10.1029/94RG01872, 1994. a\n\nMellor, G. and Yamada, T.: Development of a turbulence closure model for geophysical fluid problem, Rev. Geophys. Space Ge., 20, 851–875, 1982. a\n\nMercer, J.: West Antarctic Ice Sheet and CO2 Greenhouse Effect – Threat of Disaster, Nature, 271, 321–325, 1978. a\n\nMoore, J. C., Grinsted, A., Zwinger, T., and Jevrejeva, S.: Semiempirical And Process-Based Global Sea Level Projections, Rev. Geophys., 51, 484–522, https://doi.org/10.1002/rog.20015, 2013. a\n\nMueller, R. D., Hattermann, T., Howard, S. L., and Padman, L.: Tidal influences on a future evolution of the FilchnerRonne Ice Shelf cavity in the Weddell Sea, Antarctica, The Cryosphere, 12, 453–476, https://doi.org/10.5194/tc-12-453-2018, 2018. a\n\nPattyn, F., Huyghe, A., De Brabander, S., and De Smedt, B.: Role of transition zones in marine ice sheet dynamics, J. Geophys. Res.-Earth, 111, F02004, https://doi.org/10.1029/2005JF000394, 2006. a\n\nRobel, A. A., Seroussi, H., and Roe, G. H.: Marine ice sheet instability amplifies and skews uncertainty in projections of future sea-level rise, P. Natl. Acad. Sci. USA, 116, 14887–14892, https://doi.org/10.1073/pnas.1904822116, 2019. a\n\nSchoof, C.: Ice sheet grounding line dynamics: Steady states, stability, and hysteresis, J. Geophys. Res.-Earth, 112, F03S28, https://doi.org/10.1029/2006JF000664, 2007. a, b\n\nSeroussi, H., Nakayama, Y., Larour, E., Menemenlis, D., Morlighem, M., Rignot, E., and Khazendar, A.: Continued retreat of Thwaites Glacier, West Antarctica, controlled by bed topography and ocean circulation, Geophys. Res. Lett., 44, 6191–6199, https://doi.org/10.1002/2017GL072910, 2017. a, b\n\nShchepetkin, A. F. and McWilliams, J. C.: The regional oceanic modeling system (ROMS): a split-explicit, free-surface, topography-following-coordinate oceanic model, Ocean Model., 9, 347–404, https://doi.org/10.1016/j.ocemod.2004.08.002, 2005. a\n\nSmagorinsky, J.: General circulation experiments with the primitive equations, I. The basic experiment, Mon. Weather Rev., 91, 99–164, https://doi.org/10.1175/1520-0493(1963)091<0099:GCEWTP>2.3.CO;2, 1963. a\n\nSnow, K., N. Goldberg, D., R. Holland, P., R. Jordan, J., J. Arthern, R., and Jenkins, A.: The Response of Ice Sheets to Climate Variability, Geophys. Res. Lett., 44, 11878–11885, https://doi.org/10.1002/2017GL075745, 2017. a, b\n\nThoma, M., Determann, J., Grosfeld, K., Goeller, S., and Hellmer, H. H.: Future sea-level rise due to projected ocean warming beneath the Filchner Ronne Ice Shelf: A coupled model study, Earth Planet. Sc. Lett., 431, 217–224, https://doi.org/10.1016/j.epsl.2015.09.013, 2015. a, b\n\nTimmermann, R. and Goeller, S.: Response to FilchnerRonne Ice Shelf cavity warming in a coupled oceanice sheet model Part 1: The ocean perspective, Ocean Sci., 13, 765–776, https://doi.org/10.5194/os-13-765-2017, 2017. a\n\nVieli, A. and Payne, A.: Assessing the ability of numerical ice sheet models to simulate grounding line migration, J. Geophys. Res.-Earth, 110, F01003, https://doi.org/10.1029/2004JF000202, 2005. a\n\nWarner, J. C., Defne, Z., Haas, K., and Arango, H. G.: A wetting and drying scheme for ROMS, Comput. 
Geosci., 58, 54–61, https://doi.org/10.1016/j.cageo.2013.05.004, 2013. a\n\nZhou, Q. and Hattermann, T.: Modeling ice shelf cavities in the unstructured-grid, Finite Volume Community Ocean Model: Implementation and effects of resolving small-scale topography, Ocean Model., 146, 101536, https://doi.org/10.1016/j.ocemod.2019.101536, 2020. a, b"
] | [
null,
"https://gmd.copernicus.org/articles/14/889/2021/gmd-14-889-2021-avatar-thumb150.png",
null,
"https://gmd.copernicus.org/articles/14/889/2021/gmd-14-889-2021-f01-thumb.png",
null,
"https://gmd.copernicus.org/articles/14/889/2021/gmd-14-889-2021-f02-thumb.png",
null,
"https://gmd.copernicus.org/articles/14/889/2021/gmd-14-889-2021-t02-thumb.png",
null,
"https://gmd.copernicus.org/articles/14/889/2021/gmd-14-889-2021-f03-thumb.png",
null,
"https://gmd.copernicus.org/articles/14/889/2021/gmd-14-889-2021-f04-thumb.png",
null,
"https://gmd.copernicus.org/articles/14/889/2021/gmd-14-889-2021-f05-thumb.png",
null,
"https://gmd.copernicus.org/articles/14/889/2021/gmd-14-889-2021-f06-thumb.png",
null,
"https://gmd.copernicus.org/articles/14/889/2021/gmd-14-889-2021-f07-thumb.png",
null,
"https://gmd.copernicus.org/articles/14/889/2021/gmd-14-889-2021-f08-thumb.png",
null,
"https://gmd.copernicus.org/articles/14/889/2021/gmd-14-889-2021-f09-thumb.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8735055,"math_prob":0.9143597,"size":68073,"snap":"2022-40-2023-06","text_gpt3_token_len":15869,"char_repetition_ratio":0.16434795,"word_repetition_ratio":0.04470277,"special_character_ratio":0.23113422,"punctuation_ratio":0.17553073,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9543243,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22],"im_url_duplicate_count":[null,null,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-02-03T21:02:39Z\",\"WARC-Record-ID\":\"<urn:uuid:6306b36f-9a4c-4bf1-8158-e2dec35c3349>\",\"Content-Length\":\"302024\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4684f5fc-86f5-4a2d-9157-20373b8cd3f3>\",\"WARC-Concurrent-To\":\"<urn:uuid:7f7803f5-467f-460c-bf2d-b15bce876af2>\",\"WARC-IP-Address\":\"81.3.21.103\",\"WARC-Target-URI\":\"https://gmd.copernicus.org/articles/14/889/2021/\",\"WARC-Payload-Digest\":\"sha1:UGHPI3ASAV76AULUIURPTEBRX3QWYDDP\",\"WARC-Block-Digest\":\"sha1:SG66EO3RWWJYAJ5S7S7CQTYUWKGUMSRL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764500074.73_warc_CC-MAIN-20230203185547-20230203215547-00590.warc.gz\"}"} |
https://stackoverflow.com/questions/47820130/sapply-results-with-dplyr/47820734 | [
"# sapply results with dplyr\n\nIn the example below I am trying to determine which `value` is closest to each of the `vals_int`, by `id`. I can solve this problem using `sapply()` in a matter similar to below, but I am wondering if the `sapply()` part can be done with another function in `dplyr`.\n\nI am really just interested in if the `sapply` method and output can be reproduced using some function(s) in the `dplyr` package. I had thought that `do()` may work but am struggling to determine how.\n\n``````library(tidyverse)\n\ndf <- data_frame(\nid = rep(1:10, 10) %>%\nsort,\nvisit = rep(1:10, 10),\nvalue = rnorm(100)\n)\n\nvals_int <- c(1, 2, 3)\n\ntmp <- sapply(vals_int,\nfunction(val_i) abs(df\\$value - val_i))\n``````\n\nYes, you can use the `rowwise()` and `do()` functions in dplyr to perform the same operation on every row, like so:\n\n``````df %>% rowwise %>% do(diffs = abs(.\\$value - vals_int))\n``````\n\nThis will create a column called `diffs` in a new tibble which is a list of vectors with length 3. If you coerce the output that `do()` returns to be a data frame, it will instead create a tibble with three columns, one for each of the values subtracted.\n\n``````df %>% rowwise %>% do(as.data.frame(t(abs(.\\$value - vals_int))))\n``````\n\nThe answer by @qdread does what you are looking for, but the `tidyverse` is starting to move away from the `do()` function (if that matters to you, idk). Here is an alternative method using `map` from the `purrr` package.\n\n``````df %>%\nmutate(closest = map(value, function(x){\nabs(x - vals_int) %>%\nt() %>%\nas.tibble()\n})) %>%\nunnest()\n``````\n\nThat gives you this:\n\n``````# A tibble: 100 x 6\nid visit value V1 V2 V3\n<int> <int> <dbl> <dbl> <dbl> <dbl>\n1 1 1 0.91813183 0.08186817 1.081868 2.081868\n2 1 2 -1.68556173 2.68556173 3.685562 4.685562\n3 1 3 -0.05984289 1.05984289 2.059843 3.059843\n4 1 4 0.40128729 0.59871271 1.598713 2.598713\n5 1 5 -0.09995526 1.09995526 2.099955 3.099955\n6 1 6 0.81802663 0.18197337 1.181973 2.181973\n7 1 7 -1.49244225 2.49244225 3.492442 4.492442\n8 1 8 -0.74256185 1.74256185 2.742562 3.742562\n9 1 9 -0.43943907 1.43943907 2.439439 3.439439\n10 1 10 0.54985857 0.45014143 1.450141 2.450141\n# ... with 90 more rows\n``````\n• thanks for the comment, I appreciate this solution as well. In fact I was trying to think of a function I had seen before (`map`), but couldn't remember what it was called. Thanks for pointing this out @tbradley Dec 19 '17 at 13:59\n• Yes, I highly recommend the `purrr` package. While `map` acts more like `lapply`, the `map_dbl` and `map_chr` functions act like `sapply` in that they return a vector rather than a list. The `purrr` package was designed with consistency in mind so it is less of a hassle to switch between the different functions as they all take the same arguments in the same order and have less variability in what they return (compared to `sapply` especially) Dec 19 '17 at 14:11"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8284688,"math_prob":0.9186967,"size":648,"snap":"2021-31-2021-39","text_gpt3_token_len":184,"char_repetition_ratio":0.10559006,"word_repetition_ratio":0.0,"special_character_ratio":0.2978395,"punctuation_ratio":0.11363637,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.97670436,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-09-20T00:22:05Z\",\"WARC-Record-ID\":\"<urn:uuid:44a6077b-e31e-448c-97ff-5f0cf276fdb9>\",\"Content-Length\":\"171953\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:113ecad5-f7d8-4919-8718-4024b87adc1b>\",\"WARC-Concurrent-To\":\"<urn:uuid:4d745a95-256f-4f72-82b8-8b64111d0ab5>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://stackoverflow.com/questions/47820130/sapply-results-with-dplyr/47820734\",\"WARC-Payload-Digest\":\"sha1:S5USLQLVBFTDBL5Q6NSOYBVR7WDV3WHN\",\"WARC-Block-Digest\":\"sha1:HWCNBECZTXHHJLXLQUWTPM244N4USOJS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-39/CC-MAIN-2021-39_segments_1631780056902.22_warc_CC-MAIN-20210919220343-20210920010343-00153.warc.gz\"}"} |
https://research.nu.edu.kz/en/publications/integer-forcing-and-forward-transceiver-design-for-mimo-multipair | [
"# Integer forcing-and-forward transceiver design for MIMO multipair two-way relaying\n\nSeyed Mohammad Azimi-Abarghouyi, Masoumeh Nasiri-Kenari, Behrouz Maham, Mohsen Hejazi\n\nResearch output: Contribution to journalArticlepeer-review\n\n5 Citations (Scopus)\n\n## Abstract\n\nIn this paper, we propose a new transmission scheme, called integer forcing-and-forward (IFF), for communications among multipair multiple-antenna users in which each pair exchanges their messages with the help of a single multiple-antenna relay in the multiple-access and broadcast phases. The proposed scheme utilizes integer forcing linear receiver (IFLR) at the relay, which uses equations, i.e., linear integer combinations of messages, to harness intrapair interference. Accordingly, we propose the design of a mean square error (MSE)-based transceiver, including precoder and projection matrices for the relay and users, assuming that perfect channel state information (CSI) is available. In this regard, in the multiple-access phase, we introduce two new MSE criteria for the related precoding and filter designs, i.e., the sum of the equations MSE (Sum-Equation MSE) and the maximum of the equations MSE (Max-Equation MSE), to exploit the equations in the relay. In addition, the convergence of the proposed criteria is proven as well. Moreover, in the broadcast phase, we use the two traditional MSE criteria, i.e., the sum of the users% mean square errors (Sum MSE) and the maximum of the users% mean square errors (Max MSE), to design the related precoding and filters for recovering the relay%s equations by the users. Then, we consider a more practical scenario with imperfect CSI. For this case, the IFLR receiver is modified, and another transceiver design is proposed, which takes into account the effect of channel estimation error. We evaluate the performance of our proposed strategy and compare the results with the conventional amplify-and-forward (AF) and denoise-and-forward (DF) strategies for the same scenario. The results indicate the substantial superiority of the proposed strategy in terms of outage probability and sum rate.\n\nOriginal language English 7384521 8865-8877 13 IEEE Transactions on Vehicular Technology 65 11 https://doi.org/10.1109/TVT.2016.2518667 Published - Nov 2016\n\n## Keywords\n\n• Channel estimation error\n• equation-based mean square error (MSE) criteria\n• integer forcing-and-forward (IFF)\n• multiple-antenna nodes\n• transceiver design\n• two-way relaying\n\n## ASJC Scopus subject areas\n\n• Automotive Engineering\n• Aerospace Engineering\n• Electrical and Electronic Engineering\n• Applied Mathematics"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.85580534,"math_prob":0.6344053,"size":2421,"snap":"2021-21-2021-25","text_gpt3_token_len":510,"char_repetition_ratio":0.1170873,"word_repetition_ratio":0.028571429,"special_character_ratio":0.20487402,"punctuation_ratio":0.1004902,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9780306,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-11T23:46:02Z\",\"WARC-Record-ID\":\"<urn:uuid:dccdbd04-eb39-4512-a0e6-fa49f12435b4>\",\"Content-Length\":\"57912\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:277e8906-8765-40c5-8ec4-26c975e0627d>\",\"WARC-Concurrent-To\":\"<urn:uuid:f78b1649-2078-43f0-b24c-187f164d59e1>\",\"WARC-IP-Address\":\"18.139.148.124\",\"WARC-Target-URI\":\"https://research.nu.edu.kz/en/publications/integer-forcing-and-forward-transceiver-design-for-mimo-multipair\",\"WARC-Payload-Digest\":\"sha1:OGE2UJYJXP6XW7JLQFF7NEROUKWDDS33\",\"WARC-Block-Digest\":\"sha1:RORBTEH2QFVTT6VJKV3XYRVP5NJVMGAX\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243990419.12_warc_CC-MAIN-20210511214444-20210512004444-00406.warc.gz\"}"} |
http://www.numbersaplenty.com/100312213 | [
"Search a number\nBaseRepresentation\nbin1011111101010…\n…10010010010101\n320222202101111011\n411332222102111\n5201134442323\n613542012221\n72325432301\noct576522225\n9228671434\n10100312213\n1151695061\n1229717071\n1317a22951\n14d472d01\n158c1720d\nhex5faa495\n\n100312213 has 8 divisors (see below), whose sum is σ = 102216960. Its totient is φ = 98414400.\n\nThe previous prime is 100312151. The next prime is 100312253. The reversal of 100312213 is 312213001.\n\nAdding to 100312213 its reverse (312213001), we get a palindrome (412525214).\n\nIt is a sphenic number, since it is the product of 3 distinct primes.\n\nIt is a cyclic number.\n\nIt is not a de Polignac number, because 100312213 - 217 = 100181141 is a prime.\n\nIt is a Duffinian number.\n\nIt is a junction number, because it is equal to n+sod(n) for n = 100312193 and 100312202.\n\nIt is a congruent number.\n\nIt is not an unprimeable number, because it can be changed into a prime (100312253) by changing a digit.\n\nIt is a polite number, since it can be written in 7 ways as a sum of consecutive naturals, for example, 34413 + ... + 37213.\n\nIt is an arithmetic number, because the mean of its divisors is an integer number (12777120).\n\nAlmost surely, 2100312213 is an apocalyptic number.\n\nIt is an amenable number.\n\n100312213 is a deficient number, since it is larger than the sum of its proper divisors (1904747).\n\n100312213 is an equidigital number, since it uses as much as digits as its factorization.\n\n100312213 is an odious number, because the sum of its binary digits is odd.\n\nThe sum of its prime factors is 3467.\n\nThe product of its (nonzero) digits is 36, while the sum is 13.\n\nThe square root of 100312213 is about 10015.5984843643. The cubic root of 100312213 is about 464.6414363030.\n\nThe spelling of 100312213 in words is \"one hundred million, three hundred twelve thousand, two hundred thirteen\"."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.85336965,"math_prob":0.99447954,"size":1955,"snap":"2019-43-2019-47","text_gpt3_token_len":594,"char_repetition_ratio":0.17631984,"word_repetition_ratio":0.006116208,"special_character_ratio":0.457289,"punctuation_ratio":0.136,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.99613076,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-22T10:13:38Z\",\"WARC-Record-ID\":\"<urn:uuid:388237c8-dacc-472d-9317-13c6d36af9ab>\",\"Content-Length\":\"9155\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5894bc86-7a82-4c65-98cd-f6c986828f32>\",\"WARC-Concurrent-To\":\"<urn:uuid:597b9f98-720b-4f27-9cb6-758a2b372ac8>\",\"WARC-IP-Address\":\"62.149.142.170\",\"WARC-Target-URI\":\"http://www.numbersaplenty.com/100312213\",\"WARC-Payload-Digest\":\"sha1:VQT2EK7FQUQNKKDESU24EJZITBNBM6EY\",\"WARC-Block-Digest\":\"sha1:7RS74Z7O7S5J2VKKTFS6A2JMRJATLXW2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496671249.37_warc_CC-MAIN-20191122092537-20191122120537-00526.warc.gz\"}"} |
https://www.colorhexa.com/5a848d | [
"# #5a848d Color Information\n\nIn a RGB color space, hex #5a848d is composed of 35.3% red, 51.8% green and 55.3% blue. Whereas in a CMYK color space, it is composed of 36.2% cyan, 6.4% magenta, 0% yellow and 44.7% black. It has a hue angle of 190.6 degrees, a saturation of 22.1% and a lightness of 45.3%. #5a848d color hex could be obtained by blending #b4ffff with #00091b. Closest websafe color is: #669999.\n\n• R 35\n• G 52\n• B 55\nRGB color chart\n• C 36\n• M 6\n• Y 0\n• K 45\nCMYK color chart\n\n#5a848d color description : Mostly desaturated dark cyan.\n\n# #5a848d Color Conversion\n\nThe hexadecimal color #5a848d has RGB values of R:90, G:132, B:141 and CMYK values of C:0.36, M:0.06, Y:0, K:0.45. Its decimal value is 5932173.\n\nHex triplet RGB Decimal 5a848d `#5a848d` 90, 132, 141 `rgb(90,132,141)` 35.3, 51.8, 55.3 `rgb(35.3%,51.8%,55.3%)` 36, 6, 0, 45 190.6°, 22.1, 45.3 `hsl(190.6,22.1%,45.3%)` 190.6°, 36.2, 55.3 669999 `#669999`\nCIE-LAB 52.507, -12.07, -9.465 17.274, 20.598, 28.264 0.261, 0.311, 20.598 52.507, 15.339, 218.102 52.507, -20.299, -11.823 45.386, -11.486, -5.153 01011010, 10000100, 10001101\n\n# Color Schemes with #5a848d\n\n• #5a848d\n``#5a848d` `rgb(90,132,141)``\n• #8d635a\n``#8d635a` `rgb(141,99,90)``\nComplementary Color\n• #5a8d7d\n``#5a8d7d` `rgb(90,141,125)``\n• #5a848d\n``#5a848d` `rgb(90,132,141)``\n• #5a6b8d\n``#5a6b8d` `rgb(90,107,141)``\nAnalogous Color\n• #8d7d5a\n``#8d7d5a` `rgb(141,125,90)``\n• #5a848d\n``#5a848d` `rgb(90,132,141)``\n• #8d5a6b\n``#8d5a6b` `rgb(141,90,107)``\nSplit Complementary Color\n• #848d5a\n``#848d5a` `rgb(132,141,90)``\n• #5a848d\n``#5a848d` `rgb(90,132,141)``\n• #8d5a84\n``#8d5a84` `rgb(141,90,132)``\n• #5a8d63\n``#5a8d63` `rgb(90,141,99)``\n• #5a848d\n``#5a848d` `rgb(90,132,141)``\n• #8d5a84\n``#8d5a84` `rgb(141,90,132)``\n• #8d635a\n``#8d635a` `rgb(141,99,90)``\n• #3c585e\n``#3c585e` `rgb(60,88,94)``\n• #46676e\n``#46676e` `rgb(70,103,110)``\n• #50757d\n``#50757d` `rgb(80,117,125)``\n• #5a848d\n``#5a848d` `rgb(90,132,141)``\n• #64929c\n``#64929c` `rgb(100,146,156)``\n• #749da6\n``#749da6` `rgb(116,157,166)``\n• #83a8b0\n``#83a8b0` `rgb(131,168,176)``\nMonochromatic Color\n\n# Alternatives to #5a848d\n\nBelow, you can see some colors close to #5a848d. Having a set of related colors can be useful if you need an inspirational alternative to your original color choice.\n\n• #5a8d89\n``#5a8d89` `rgb(90,141,137)``\n• #5a8d8d\n``#5a8d8d` `rgb(90,141,141)``\n• #5a888d\n``#5a888d` `rgb(90,136,141)``\n• #5a848d\n``#5a848d` `rgb(90,132,141)``\n• #5a808d\n``#5a808d` `rgb(90,128,141)``\n• #5a7c8d\n``#5a7c8d` `rgb(90,124,141)``\n• #5a778d\n``#5a778d` `rgb(90,119,141)``\nSimilar Colors\n\n# #5a848d Preview\n\nThis text has a font color of #5a848d.\n\n``<span style=\"color:#5a848d;\">Text here</span>``\n#5a848d background color\n\nThis paragraph has a background color of #5a848d.\n\n``<p style=\"background-color:#5a848d;\">Content here</p>``\n#5a848d border color\n\nThis element has a border color of #5a848d.\n\n``<div style=\"border:1px solid #5a848d;\">Content here</div>``\nCSS codes\n``.text {color:#5a848d;}``\n``.background {background-color:#5a848d;}``\n``.border {border:1px solid #5a848d;}``\n\n# Shades and Tints of #5a848d\n\nA shade is achieved by adding black to any pure hue, while a tint is created by mixing white to any pure color. 
In this example, #060909 is the darkest color, while #fcfdfd is the lightest one.\n\n• #060909\n``#060909` `rgb(6,9,9)``\n• #0e1415\n``#0e1415` `rgb(14,20,21)``\n• #151f21\n``#151f21` `rgb(21,31,33)``\n• #1d2a2d\n``#1d2a2d` `rgb(29,42,45)``\n• #253639\n``#253639` `rgb(37,54,57)``\n• #2c4145\n``#2c4145` `rgb(44,65,69)``\n• #344c51\n``#344c51` `rgb(52,76,81)``\n• #3b575d\n``#3b575d` `rgb(59,87,93)``\n• #436269\n``#436269` `rgb(67,98,105)``\n• #4b6e75\n``#4b6e75` `rgb(75,110,117)``\n• #527981\n``#527981` `rgb(82,121,129)``\n• #5a848d\n``#5a848d` `rgb(90,132,141)``\n• #628f99\n``#628f99` `rgb(98,143,153)``\n• #6d98a2\n``#6d98a2` `rgb(109,152,162)``\n• #79a1a9\n``#79a1a9` `rgb(121,161,169)``\n• #85a9b1\n``#85a9b1` `rgb(133,169,177)``\n• #91b1b9\n``#91b1b9` `rgb(145,177,185)``\n• #9dbac0\n``#9dbac0` `rgb(157,186,192)``\n• #a9c2c8\n``#a9c2c8` `rgb(169,194,200)``\n• #b4cbcf\n``#b4cbcf` `rgb(180,203,207)``\n• #c0d3d7\n``#c0d3d7` `rgb(192,211,215)``\n• #ccdbdf\n``#ccdbdf` `rgb(204,219,223)``\n• #d8e4e6\n``#d8e4e6` `rgb(216,228,230)``\n• #e4ecee\n``#e4ecee` `rgb(228,236,238)``\n• #f0f5f6\n``#f0f5f6` `rgb(240,245,246)``\n• #fcfdfd\n``#fcfdfd` `rgb(252,253,253)``\nTint Color Variation\n\n# Tones of #5a848d\n\nA tone is produced by adding gray to any pure hue. In this case, #6c797b is the less saturated color, while #01bde6 is the most saturated one.\n\n• #6c797b\n``#6c797b` `rgb(108,121,123)``\n• #637e84\n``#637e84` `rgb(99,126,132)``\n• #5a848d\n``#5a848d` `rgb(90,132,141)``\n• #518a96\n``#518a96` `rgb(81,138,150)``\n• #488f9f\n``#488f9f` `rgb(72,143,159)``\n• #3f95a8\n``#3f95a8` `rgb(63,149,168)``\n• #369bb1\n``#369bb1` `rgb(54,155,177)``\n• #2ea1b9\n``#2ea1b9` `rgb(46,161,185)``\n• #25a6c2\n``#25a6c2` `rgb(37,166,194)``\n• #1caccb\n``#1caccb` `rgb(28,172,203)``\n• #13b2d4\n``#13b2d4` `rgb(19,178,212)``\n• #0ab8dd\n``#0ab8dd` `rgb(10,184,221)``\n• #01bde6\n``#01bde6` `rgb(1,189,230)``\nTone Color Variation\n\n# Color Blindness Simulator\n\nBelow, you can see how #5a848d is perceived by people affected by a color vision deficiency. This can be useful if you need to ensure your color combinations are accessible to color-blind users.\n\nMonochromacy\n• Achromatopsia 0.005% of the population\n• Atypical Achromatopsia 0.001% of the population\nDichromacy\n• Protanopia 1% of men\n• Deuteranopia 1% of men\n• Tritanopia 0.001% of the population\nTrichromacy\n• Protanomaly 1% of men, 0.01% of women\n• Deuteranomaly 6% of men, 0.4% of women\n• Tritanomaly 0.01% of the population"
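The RGB, HSL, CMYK and decimal figures quoted above for #5a848d follow from standard conversions. Here is a minimal sketch using only the Python standard library; note that colorsys returns values in hue, lightness, saturation order with hue in [0, 1], and the CMYK step is the usual naive conversion, not anything specific to this page.

```
# Reproduce the basic conversions quoted above for #5a848d.
import colorsys

hex_code = "5a848d"
r, g, b = (int(hex_code[i:i + 2], 16) for i in (0, 2, 4))
print("RGB:", (r, g, b))                      # expected (90, 132, 141)
print("decimal:", int(hex_code, 16))          # expected 5932173
print("RGB %:", tuple(round(100 * c / 255, 1) for c in (r, g, b)))

# colorsys works on floats in [0, 1] and returns (h, l, s) for HLS.
h, l, s = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
print("HSL:", (round(h * 360, 1), round(s * 100, 1), round(l * 100, 1)))

# Naive CMYK conversion: k from the largest channel, then scale the rest.
k = 1 - max(r, g, b) / 255
c, m, y = ((1 - ch / 255 - k) / (1 - k) for ch in (r, g, b))
print("CMYK %:", tuple(round(100 * v, 1) for v in (c, m, y, k)))
```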
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.58007056,"math_prob":0.7079304,"size":3730,"snap":"2020-24-2020-29","text_gpt3_token_len":1701,"char_repetition_ratio":0.1215781,"word_repetition_ratio":0.011070111,"special_character_ratio":0.5544236,"punctuation_ratio":0.23756906,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9897645,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-06-07T01:02:10Z\",\"WARC-Record-ID\":\"<urn:uuid:23f435c0-cb62-4428-b6b7-cc437bdc9175>\",\"Content-Length\":\"36333\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7d55a629-8b91-4b55-817e-22112094e0db>\",\"WARC-Concurrent-To\":\"<urn:uuid:3b574e4f-dd78-434c-a331-e10e8e88b0e1>\",\"WARC-IP-Address\":\"178.32.117.56\",\"WARC-Target-URI\":\"https://www.colorhexa.com/5a848d\",\"WARC-Payload-Digest\":\"sha1:BWAOVRVJGGWQAH2Y6IBTRY2QVNLQBXV6\",\"WARC-Block-Digest\":\"sha1:AFEDEDILIFAWOBDC3SXSKPJEUS5NYZNB\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-24/CC-MAIN-2020-24_segments_1590348521325.84_warc_CC-MAIN-20200606222233-20200607012233-00078.warc.gz\"}"} |
https://tex.stackexchange.com/questions/181104/indentation-in-enumerate-list | [
"# Indentation in enumerate/list\n\nI'm trying to get rid of indentation in enumerate. The code below shows the indentation with enumerate, and then the only partial fix I could implement. With enumitem package I could avoid indentation on the 'count' line through leftmargin=*, but there is still indentation in the text. How could I fix that?\n\n\\documentclass[12pt]{article}\n\\usepackage{amsmath}\n\\usepackage{enumitem}\n\\usepackage{blindtext}\n\\begin{document}\n\n\\blindtext\n\n\\begin{enumerate}\n\\item Case $a\\in B$\n\n\\blindtext\n\n\\item Case $a\\in C$\n\n\\blindtext\n\n\\end{enumerate}\n\n\\begin{enumerate}[leftmargin=*]\n\\item Case $a\\in B$\n\n\\blindtext\n\n\\item Case $a\\in C$\n\n\\blindtext\n\n\\end{enumerate}\n\n\\end{document}\n\n• this looks a lot like itemize, removing natural indent – cmhughes May 29 '14 at 16:57\n• possible duplicate of itemize, removing natural indent – cmhughes May 29 '14 at 16:58\n• @cmhughes although I also voted for closing as duplicate, now I realize it it's not; leftamrgin=* alone (the answer in the alleged duplicate) doesn't do what is requested here. – Gonzalo Medina May 29 '14 at 17:03\n• @GonzaloMedina ok - I have retracted my vote – cmhughes May 29 '14 at 17:20\n• @cmhughes I didn't know one could retract the vote. How is it done? – Gonzalo Medina May 29 '14 at 17:22\n\nThis requires some manual adjustment, as you have to modify the argument for widest to be the widest number in the list. As the item numbers are right aligned, only the widest run to the margin, the rest are indented a bit.\n\n\\documentclass[12pt]{article}\n\\usepackage{amsmath}\n\\usepackage{enumitem}\n\\usepackage{showframe}\n\\begin{document}\n\n\\begin{enumerate}[label=(\\roman*),widest=viii,itemindent=*,leftmargin=0pt]\n\\item Case $a\\in B$\n\nMore text \\setcounter{enumi}{7}\n\\item Case $a\\in B$\n\nmore text\n\\end{enumerate}\n\n\\begin{enumerate}[widest=99,itemindent=*,leftmargin=0pt]\n\\item Case $a\\in B$\n\nMore text \\setcounter{enumi}{9}\n\\item Case $a\\in B$\n\nmore text\n\\end{enumerate}\n\\end{document}",
null,
"• Watch out with this solution, as when the numbers have different widths (double or triple digits), they will hang into right margin. See difference between this result and that of tex.stackexchange.com/questions/29322/… – cslstr May 29 '14 at 18:36\n• @cslstr Oops, you're right. Thanks for the notice. Don't have time to look do anything about it right now, will edit later. – Torbjørn T. May 29 '14 at 18:39\n• Somehow the result is not preserved if count is set to romans:\\begin{enumerate}[itemindent=*,leftmargin=0pt,label=(\\roman*)] – capadocia May 29 '14 at 18:49\n• @capadocia I edited my answer. – Torbjørn T. May 29 '14 at 19:51\n\nMaybe the wide option does what you want,if I understand well:\n\n\\documentclass[12pt]{article}\n\\usepackage[utf8]{inputenc}\n\\usepackage[T1]{fontenc}\n\n\\usepackage{amsmath}\n\\usepackage{enumitem}\n\\usepackage{blindtext}\n\\begin{document}\n\n\\blindtext\n\n\\begin{enumerate}[wide]\n\\item Case $a\\in B$\\\\\n\\blindtext\n\n\\item Case $a\\in C$\n\n\\blindtext\n\n\\end{enumerate}\n\n\\end{document}",
null,
""
] | [
null,
"https://i.stack.imgur.com/bvBr1.png",
null,
"https://i.stack.imgur.com/nLoeR.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.6348445,"math_prob":0.58495724,"size":785,"snap":"2019-43-2019-47","text_gpt3_token_len":226,"char_repetition_ratio":0.14596671,"word_repetition_ratio":0.13461539,"special_character_ratio":0.24713376,"punctuation_ratio":0.062015504,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9662972,"pos_list":[0,1,2,3,4],"im_url_duplicate_count":[null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-11-19T12:16:46Z\",\"WARC-Record-ID\":\"<urn:uuid:04a22341-1d14-4d04-9eff-22f01c389946>\",\"Content-Length\":\"151824\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2a70f5b5-5c0a-4563-be5b-d36fcaf330a7>\",\"WARC-Concurrent-To\":\"<urn:uuid:5691fc34-84d5-448d-8036-648c82dae27c>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://tex.stackexchange.com/questions/181104/indentation-in-enumerate-list\",\"WARC-Payload-Digest\":\"sha1:OIHXH3BOLRDWRCF4AMIZT5YJIYQUYCOK\",\"WARC-Block-Digest\":\"sha1:BHQHSIBHJDPYGBIL226BQHZMNUY4SKDK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-47/CC-MAIN-2019-47_segments_1573496670151.97_warc_CC-MAIN-20191119121339-20191119145339-00097.warc.gz\"}"} |
http://self.gutenberg.org/articles/eng/Voltage_divider | [
"",
null,
"#jsDisabledContent { display:none; } My Account | Register | Help",
null,
"Flag as Inappropriate",
null,
"This article will be permanently flagged as inappropriate and made unaccessible to everyone. Are you certain this article is inappropriate? Excessive Violence Sexual Content Political / Social Email this Article Email Address:\n\n# Voltage divider\n\nArticle Id: WHEBN0000393519\nReproduction Date:\n\n Title: Voltage divider",
null,
"Author: World Heritage Encyclopedia Language: English Subject: Collection: Analog Circuits Publisher: World Heritage Encyclopedia Publication Date:\n\n### Voltage divider\n\nIn electronics, a voltage divider (also known as a potential divider) is a passive linear circuit that produces an output voltage (Vout) that is a fraction of its input voltage (Vin). Voltage division is the result of distributing the input voltage among the components of the divider. A simple example of a voltage divider is two resistors connected in series, with the input voltage applied across the resistor pair and the output voltage emerging from the connection between them.\n\nResistor voltage dividers are commonly used to create reference voltages, or to reduce the magnitude of a voltage so it can be measured, and may also be used as signal attenuators at low frequencies. For direct current and relatively low frequencies, a voltage divider may be sufficiently accurate if made only of resistors; where frequency response over a wide range is required (such as in an oscilloscope probe), a voltage divider may have capacitive elements added to compensate load capacitance. In electric power transmission, a capacitive voltage divider is used for measurement of high voltage.\n\n## Contents\n\n• General case 1\n• Examples 2\n• Resistive divider 2.1\n• Low-pass RC filter 2.2\n• Inductive divider 2.3\n• Capacitive divider 2.4\n• Applications 4\n• Sensor measurement 4.1\n• High voltage measurement 4.2\n• Level shifting 4.3\n• References 5\n\n## General case\n\nA voltage divider referenced to ground is created by connecting two electrical impedances in series, as shown in Figure 1. The input voltage is applied across the series impedances Z1 and Z2 and the output is the voltage across Z2. Z1 and Z2 may be composed of any combination of elements such as resistors, inductors and capacitors.\n\nIf the current in the output wire is zero then the relationship between the input voltage, Vin, and the output voltage, Vout, is:\n\nV_\\mathrm{out} = \\frac{Z_2}{Z_1+Z_2} \\cdot V_\\mathrm{in}\n\nProof (using Ohm's Law):\n\nV_\\mathrm{in} = I\\cdot(Z_1+Z_2)\nV_\\mathrm{out} = I\\cdot Z_2\nI = \\frac {V_\\mathrm{in}}{Z_1+Z_2}\nV_\\mathrm{out} = V_\\mathrm{in} \\cdot\\frac {Z_2}{Z_1+Z_2}\n\nThe transfer function (also known as the divider's voltage ratio) of this circuit is:\n\nH = \\frac {V_\\mathrm{out}}{V_\\mathrm{in}} = \\frac{Z_2}{Z_1+Z_2}\n\nIn general this transfer function is a complex, rational function of frequency.\n\n## Examples\n\n### Resistive divider\n\nA resistive divider is the case where both impedances, Z1 and Z2, are purely resistive (Figure 2).\n\nSubstituting Z1 = R1 and Z2 = R2 into the previous expression gives:\n\nV_\\mathrm{out} = \\frac{R_2}{R_1+R_2} \\cdot V_\\mathrm{in}\n\nIf R1 = R2 then\n\nV_\\mathrm{out} = \\frac{1}{2} \\cdot V_\\mathrm{in}\n\nIf Vout=6V and Vin=9V (both commonly used voltages), then:\n\n\\frac{V_\\mathrm{out}}{V_\\mathrm{in}} = \\frac{R_2}{R_1+R_2} = \\frac{6}{9} = \\frac{2}{3}\n\nand by solving using algebra, R2 must be twice the value of R1.\n\nTo solve for R1:\n\nR_1 = \\frac{R_2 \\cdot V_\\mathrm{in}}{V_\\mathrm{out}} - R_2 = R_2 \\cdot ({\\frac{V_\\mathrm{in}}{V_\\mathrm{out}}-1})\n\nTo solve for R2:\n\nR_2 = R_1 \\cdot \\frac{1} {({\\frac{V_\\mathrm{in}}{V_\\mathrm{out}}-1})}\n\nAny ratio Vout/Vin greater than 1 is not possible. 
That is, using resistors alone it is not possible to either invert the voltage or increase Vout above Vin.\n\n### Low-pass RC filter\n\nConsider a divider consisting of a resistor and capacitor as shown in Figure 3.\n\nComparing with the general case, we see Z1 = R and Z2 is the impedance of the capacitor, given by\n\nZ_2 = -\\mathrm{j}X_{\\mathrm{C}} =\\frac{1}{\\mathrm{j} \\omega C} \\ ,\n\nwhere XC is the reactance of the capacitor, C is the capacitance of the capacitor, j is the imaginary unit, and ω (omega) is the radian frequency of the input voltage.\n\nThis divider will then have the voltage ratio:\n\n\\frac{V_\\mathrm{out}}{V_\\mathrm{in}} = \\frac{Z_\\mathrm{2}}{Z_\\mathrm{1} + Z_\\mathrm{2}} = \\frac{\\frac{1}{\\mathrm{j} \\omega C}}{\\frac{1}{\\mathrm{j} \\omega C} + R} = \\frac{1}{1 + \\mathrm{j} \\omega R C} \\ .\n\nThe product τ (tau) = RC is called the time constant of the circuit.\n\nThe ratio then depends on frequency, in this case decreasing as frequency increases. This circuit is, in fact, a basic (first-order) lowpass filter. The ratio contains an imaginary number, and actually contains both the amplitude and phase shift information of the filter. To extract just the amplitude ratio, calculate the magnitude of the ratio, that is:\n\n\\left| \\frac{V_\\mathrm{out}}{V_\\mathrm{in}} \\right| = \\frac{1}{\\sqrt{1 + (\\omega R C)^2}} \\ .\n\n### Inductive divider\n\nInductive dividers split AC input according to inductance:\n\nV_\\mathrm{out} = \\frac{L_2}{L_1 + L_2} \\cdot V_\\mathrm{in}\n\nThe above equation is for non-interacting inductors; mutual inductance (as in an autotransformer) will alter the results.\n\nInductive dividers split DC input according to the resistance of the elements as for the resistive divider above.\n\n### Capacitive divider\n\nCapacitive dividers do not pass DC input.\n\nFor an AC input a simple capacitive equation is:\n\nV_\\mathrm{out} = \\frac{C_1}{C_1 + C_2} \\cdot V_\\mathrm{in}\n\nAny leakage current in the capactive elements requires use of the generalized expression with two impedances. By selection of parallel R and C elements in the proper proportions, the same division ratio can be maintained over a useful range of frequencies. This is the principle applied in compensated oscilloscope probes to increase measurement bandwidth.\n\nThe output voltage of a voltage divider will vary according to the electric current it is supplying to its external electrical load. To obtain a sufficiently stable output voltage, the output current must either be stable or limited to an appropriately small percentage of the divider's input current. Load sensitivity can be decreased by reducing the impedance of the divider, though this increases the divider's quiescent input current and results in higher power consumption (and wasted heat) in the divider. Voltage regulators are often used in lieu of passive voltage dividers when it is necessary to accommodate high or fluctuating load currents.\n\n## Applications\n\nVoltage dividers are used for adjusting the level of a signal, for bias of active devices in amplifiers, and for measurement of voltages. A Wheatstone bridge and a multimeter both include voltage dividers. A potentiometer is used as a variable voltage divider in the volume control of many radios.\n\n### Sensor measurement\n\nVoltage dividers can be used to allow a microcontroller to measure the resistance of a sensor. The sensor is wired in series with a known resistance to form a voltage divider and a known voltage is applied across the divider. 
The microcontroller's analog-to-digital converter is connected to the center tap of the divider so that it can measure the tap voltage and, by using the measured voltage and the known resistance and voltage, compute the sensor resistance.\n\n### High voltage measurement\n\nA voltage divider can be used to scale down a very high voltage so that it can be measured by a volt meter. The high voltage is applied across the divider and the divider output—which has a lower voltage that is within the meter's input range—is measured by the meter. High voltage resistor divider probes designed specifically for this purpose can be used to measure voltages up to 100 kV. Special high-voltage resistors are used in such probes as they must be able to tolerate high input voltages and, to produce accurate results, must have matched temperature coefficients and very low voltage coefficients. Capacitive divider probes are typically used for voltages above 100 kV, as the heat caused by power losses in resistor divider probes at such high voltages could be excessive.\n\n### Level shifting\n\nA voltage divider can be used as a level shifter to safely interface different circuits which use different reference voltages. For example, an Arduino microcontroller works at 5 V logic, while many sensors require 3.3 V logic. Directly interfacing the Arduino with a 3.3 V sensor may cause permanent damage to the sensor. By using a level shifter with a ratio of 3.3/5 resistor values on each pin connected to the sensor, an Arduino board can safely communicate with the sensor."
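The resistive-divider relation and the sensor-measurement and level-shifting applications above reduce to a few lines of arithmetic. The sketch below simply evaluates the equations given in the article; all component values (the 3.3 kOhm divider leg, the 10 kOhm reference resistor, the 1.65 V tap reading, the RC values) are illustrative assumptions, not values from the article.

```
# Numeric sketch of the divider relations above; component values are
# illustrative assumptions.
import math


def vout(vin, r1, r2):
    """Unloaded two-resistor divider, output taken across R2."""
    return vin * r2 / (r1 + r2)


def r1_for_ratio(vin, vout_target, r2):
    """Solve R1 for a desired output voltage, given R2."""
    return r2 * (vin / vout_target - 1)


# Level shifting: drop 5 V logic to about 3.3 V using R2 = 3.3 kOhm.
r2 = 3300.0
r1 = r1_for_ratio(5.0, 3.3, r2)
print(f"R1 = {r1:.0f} Ohm, Vout = {vout(5.0, r1, r2):.2f} V")


# Sensor measurement: known resistor on top, sensor on the bottom; the ADC
# reads the tap voltage, and the divider equation is inverted for R_sensor.
def sensor_resistance(v_tap, vin, r_known):
    return r_known * v_tap / (vin - v_tap)


print(f"R_sensor = {sensor_resistance(1.65, 5.0, 10000.0):.0f} Ohm")


# First-order RC low-pass divider: |Vout/Vin| at angular frequency w.
def rc_lowpass_gain(w, r, c):
    return 1.0 / math.sqrt(1.0 + (w * r * c) ** 2)


print(f"gain at 1 kHz: {rc_lowpass_gain(2 * math.pi * 1e3, 1e3, 100e-9):.3f}")
```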
] | [
null,
"http://read.images.worldlibrary.org/App_Themes/pg/images/logo.jpg",
null,
"http://read.images.worldlibrary.org/images/SmallBook.gif",
null,
"http://self.gutenberg.org/images/delete.jpg",
null,
"http://self.gutenberg.org/App_Themes/default/images/icon_new_window.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.84225327,"math_prob":0.98851097,"size":9811,"snap":"2019-51-2020-05","text_gpt3_token_len":2405,"char_repetition_ratio":0.17049046,"word_repetition_ratio":0.007236842,"special_character_ratio":0.23545,"punctuation_ratio":0.09401231,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99935406,"pos_list":[0,1,2,3,4,5,6,7,8],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-12-06T08:18:23Z\",\"WARC-Record-ID\":\"<urn:uuid:ae1b4058-cae6-4c8b-8511-b00964810b93>\",\"Content-Length\":\"84677\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:90e6d074-8334-484c-985e-467ce4e3ee8b>\",\"WARC-Concurrent-To\":\"<urn:uuid:7685e02f-538d-4237-98f4-3671e28bc9dd>\",\"WARC-IP-Address\":\"66.27.42.21\",\"WARC-Target-URI\":\"http://self.gutenberg.org/articles/eng/Voltage_divider\",\"WARC-Payload-Digest\":\"sha1:DWGAHL2273G5WPU7Y6GFT4ABNBEMSZHB\",\"WARC-Block-Digest\":\"sha1:M7QZR34ELRVHH4F7NLTWNSNJ35IKIBU4\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-51/CC-MAIN-2019-51_segments_1575540486979.4_warc_CC-MAIN-20191206073120-20191206101120-00264.warc.gz\"}"} |
http://wzgzgt.com/qspevdu_d001015006 | [
"• 防诈骗中心\n• 客服中心 |\n• 网站导航 |\n• 设为主页 |\n• 加入收藏\n• 您当前位置: 首页> 产品库>吉林省>四平市\n相关分类:\n• 湖南\n• 长沙市\n• 常德市\n• 郴州市\n• 衡阳市\n• 怀化市\n• 娄底市\n• 邵阳市\n• 湘潭市\n• 湘西土家族苗族自治州\n• 益阳市\n• 永州市\n• 岳阳市\n• 张家界市\n• 株洲市\n• 山西\n• 长治市\n• 大同市\n• 晋城市\n• 晋中市\n• 临汾市\n• 吕梁市\n• 朔州市\n• 太原市\n• 忻州市\n• 阳泉市\n• 运城市\n• 安徽\n• 安庆市\n• 蚌埠市\n• 亳州市\n• 巢湖市\n• 池州市\n• 滁州市\n• 阜阳市\n• 合肥市\n• 淮北市\n• 淮南市\n• 黄山市\n• 六安市\n• 马鞍山市\n• 宿州市\n• 铜陵市\n• 芜湖市\n• 宣城市\n• 广西\n• 百色市\n• 北海市\n• 崇左市\n• 防城港市\n• 贵港市\n• 桂林市\n• 河池市\n• 贺州市\n• 来宾市\n• 柳州市\n• 南宁市\n• 钦州市\n• 梧州市\n• 玉林市\n• 河南\n• 安阳市\n• 鹤壁市\n• 焦作市\n• 开封市\n• 洛阳市\n• 漯河市\n• 南阳市\n• 平顶山市\n• 濮阳市\n• 三门峡市\n• 商丘市\n• 新乡市\n• 信阳市\n• 许昌市\n• 郑州市\n• 周口市\n• 驻马店市\n• 吉林\n• 白城市\n• 白山市\n• 长春市\n• 吉林市\n• 辽源市\n• 四平市\n• 松原市\n• 通化市\n• 延边朝鲜族自治州\n• 广东\n• 潮州市\n• 东莞市\n• 佛山市\n• 广州市\n• 河源市\n• 惠州市\n• 江门市\n• 揭阳市\n• 茂名市\n• 梅州市\n• 清远市\n• 汕头市\n• 汕尾市\n• 韶关市\n• 深圳市\n• 阳江市\n• 云浮市\n• 湛江市\n• 肇庆市\n• 中山市\n• 珠海市\n• 辽宁\n• 鞍山市\n• 本溪市\n• 朝阳市\n• 大连市\n• 丹东市\n• 抚顺市\n• 阜新市\n• 葫芦岛市\n• 锦州市\n• 辽阳市\n• 盘锦市\n• 沈阳市\n• 铁岭市\n• 营口市\n• 湖北\n• 鄂州市\n• 恩施土家族苗族自治州\n• 黄冈市\n• 黄石市\n• 荆门市\n• 荆州市\n• 直辖行政单位\n• 十堰市\n• 随州市\n• 武汉市\n• 咸宁市\n• 襄阳市\n• 孝感市\n• 宜昌市\n• 江西\n• 抚州市\n• 赣州市\n• 吉安市\n• 景德镇市\n• 九江市\n• 南昌市\n• 萍乡市\n• 上饶市\n• 新余市\n• 宜春市\n• 鹰潭市\n• 浙江\n• 杭州市\n• 湖州市\n• 嘉兴市\n• 金华市\n• 丽水市\n• 宁波市\n• 衢州市\n• 绍兴市\n• 台州市\n• 温州市\n• 舟山市\n• 青海\n• 果洛藏族自治州\n• 海北藏族自治州\n• 海东地区\n• 海南藏族自治州\n• 海西蒙古族藏族自治州\n• 黄南藏族自治州\n• 西宁市\n• 玉树藏族自治州\n• 甘肃\n• 白银市\n• 定西市\n• 甘南藏族自治州\n• 嘉峪关市\n• 金昌市\n• 酒泉市\n• 兰州市\n• 临夏回族自治州\n• 陇南市\n• 平凉市\n• 庆阳市\n• 天水市\n• 武威市\n• 张掖市\n• 贵州\n• 安顺市\n• 毕节市\n• 贵阳市\n• 六盘水市\n• 黔东南苗族侗族自治州\n• 黔南布依族苗族自治州\n• 黔西南布依族苗族自治州\n• 铜仁地区\n• 遵义市\n• 陕西\n• 安康市\n• 宝鸡市\n• 汉中市\n• 商洛市\n• 铜川市\n• 渭南市\n• 西安市\n• 咸阳市\n• 延安市\n• 榆林市\n• 西藏\n• 阿里地区\n• 昌都地区\n• 拉萨市\n• 林芝地区\n• 那曲地区\n• 日喀则地区\n• 山南地区\n• 宁夏\n• 固原市\n• 石嘴山市\n• 吴忠市\n• 银川市\n• 中卫市\n• 福建\n• 福州市\n• 龙岩市\n• 南平市\n• 宁德市\n• 莆田市\n• 泉州市\n• 三明市\n• 厦门市\n• 漳州市\n• 内蒙古\n• 阿拉善盟\n• 巴彦淖尔市\n• 包头市\n• 赤峰市\n• 鄂尔多斯市\n• 呼和浩特市\n• 呼伦贝尔市\n• 通辽市\n• 乌海市\n• 乌兰察布市\n• 锡林郭勒盟\n• 兴安盟\n• 云南\n• 保山市\n• 楚雄彝族自治州\n• 大理白族自治州\n• 德宏傣族景颇族自治州\n• 迪庆藏族自治州\n• 红河哈尼族彝族自治州\n• 昆明市\n• 丽江市\n• 临沧市\n• 怒江傈僳族自治州\n• 曲靖市\n• 思茅市\n• 文山壮族苗族自治州\n• 西双版纳傣族自治州\n• 玉溪市\n• 昭通市\n• 新疆\n• 阿克苏地区\n• 阿勒泰地区\n• 巴音郭楞蒙古自治州\n• 博尔塔拉蒙古自治州\n• 昌吉回族自治州\n• 哈密地区\n• 和田地区\n• 喀什地区\n• 克拉玛依市\n• 克孜勒苏柯尔克孜自治州\n• 直辖行政单位\n• 塔城地区\n• 吐鲁番地区\n• 乌鲁木齐市\n• 伊犁哈萨克自治州\n• 黑龙江\n• 大庆市\n• 大兴安岭地区\n• 哈尔滨市\n• 鹤岗市\n• 黑河市\n• 鸡西市\n• 佳木斯市\n• 牡丹江市\n• 七台河市\n• 齐齐哈尔市\n• 双鸭山市\n• 绥化市\n• 伊春市\n• 香港\n• 香港\n• 九龙\n• 新界\n• 澳门\n• 澳门\n• 其它地区\n• 台湾\n• 台中市\n• 台南市\n• 高雄市\n• 台北市\n• 基隆市\n• 嘉义市\n•",
null,
"丹东苯板胶价格-吉林新品苯板胶批销\n\n品牌:鸣达,,\n\n出厂地:钟山县(钟山镇)\n\n报价:面议\n\n吉林省鸣达建筑材料有限责任公司\n\n黄金会员:",
null,
"主营:吉林苯板胶,吉林抗裂砂浆,吉林保温砂浆,吉林胶泥,吉林面胶\n\n•",
null,
"大量出售吉林优良的苯板胶-营口苯板胶价格\n\n品牌:鸣达,,\n\n出厂地:钟山县(钟山镇)\n\n报价:面议\n\n吉林省鸣达建筑材料有限责任公司\n\n黄金会员:",
null,
"主营:吉林苯板胶,吉林抗裂砂浆,吉林保温砂浆,吉林胶泥,吉林面胶\n\n•",
null,
"上哪买好质量的胶泥-铁岭胶泥批发\n\n品牌:鸣达,,\n\n出厂地:钟山县(钟山镇)\n\n报价:面议\n\n吉林省鸣达建筑材料有限责任公司\n\n黄金会员:",
null,
"主营:吉林苯板胶,吉林抗裂砂浆,吉林保温砂浆,吉林胶泥,吉林面胶\n\n•",
null,
"声誉好的保温砂浆供应商当属鸣达建筑-本溪保温砂浆批发\n\n品牌:鸣达,,\n\n出厂地:钟山县(钟山镇)\n\n报价:面议\n\n吉林省鸣达建筑材料有限责任公司\n\n黄金会员:",
null,
"主营:吉林苯板胶,吉林抗裂砂浆,吉林保温砂浆,吉林胶泥,吉林面胶\n\n•",
null,
"优良的抗裂砂浆当选鸣达建筑|佳木斯抗裂砂浆\n\n品牌:鸣达,,\n\n出厂地:钟山县(钟山镇)\n\n报价:面议\n\n吉林省鸣达建筑材料有限责任公司\n\n黄金会员:",
null,
"主营:吉林苯板胶,吉林抗裂砂浆,吉林保温砂浆,吉林胶泥,吉林面胶\n\n•",
null,
"沈抚新区抗裂砂浆 好用的抗裂砂浆哪里有卖\n\n品牌:鸣达,,\n\n出厂地:钟山县(钟山镇)\n\n报价:面议\n\n吉林省鸣达建筑材料有限责任公司\n\n黄金会员:",
null,
"主营:吉林苯板胶,吉林抗裂砂浆,吉林保温砂浆,吉林胶泥,吉林面胶\n\n•",
null,
"牡丹江苯板胶价格,哪里可以买到新品苯板胶\n\n品牌:鸣达,,\n\n出厂地:钟山县(钟山镇)\n\n报价:面议\n\n吉林省鸣达建筑材料有限责任公司\n\n黄金会员:",
null,
"主营:吉林苯板胶,吉林抗裂砂浆,吉林保温砂浆,吉林胶泥,吉林面胶\n\n•",
null,
"营口胶泥批发|鸣达建筑提供的胶泥怎么样\n\n品牌:鸣达,,\n\n出厂地:钟山县(钟山镇)\n\n报价:面议\n\n吉林省鸣达建筑材料有限责任公司\n\n黄金会员:",
null,
"主营:吉林苯板胶,吉林抗裂砂浆,吉林保温砂浆,吉林胶泥,吉林面胶\n\n•",
null,
"吉林四平轮胎门式起重机供应商使用注意事项\n\n品牌:河南省铁山起重竞技宝app下载ios有限公司\n\n出厂地:昭平县(昭平镇)\n\n报价:面议\n\n河南省铁山起重竞技宝app下载ios有限公司\n\n主营:龙门吊、架桥机、轮胎式起重机、提梁机等起重竞技宝app下载ios\n\n•",
null,
"云南临沧架桥机厂家优选国内专程配货\n\n品牌:水工\n\n出厂地:昭平县(昭平镇)\n\n报价:面议\n\n郑州水工机械有限公司\n\n经营模式:生产型\n\n主营:龙门吊,架桥机,行车行吊,桥式起重机,门式起重机\n\n• 没有找到合适的四平市供应商?您可以发布采购信息\n\n没有找到满足要求的四平市供应商?您可以搜索 批发 公司\n\n### 最新入驻厂家\n\n相关产品:\n苯板胶 苯板胶 胶泥 保温砂浆 抗裂砂浆 抗裂砂浆 苯板胶 胶泥 轮胎门式起重机厂家 架桥机厂家"
] | [
null,
"http://image-ali.bianjiyi.com/1/2020/0110/17/15786480768762.jpg",
null,
"http://www.shengyibao.com/Public/Images/ForeApps/grade2.png",
null,
"http://image-ali.bianjiyi.com/1/2020/0110/17/15786480768762.jpg",
null,
"http://www.shengyibao.com/Public/Images/ForeApps/grade2.png",
null,
"http://image-ali.bianjiyi.com/1/2020/0110/17/15786480768762.jpg",
null,
"http://www.shengyibao.com/Public/Images/ForeApps/grade2.png",
null,
"http://image-ali.bianjiyi.com/1/2020/0111/09/15787075794901.jpg",
null,
"http://www.shengyibao.com/Public/Images/ForeApps/grade2.png",
null,
"http://image-ali.bianjiyi.com/1/2020/0110/17/15786492130532.png",
null,
"http://www.shengyibao.com/Public/Images/ForeApps/grade2.png",
null,
"http://image-ali.bianjiyi.com/1/2020/0110/17/15786491993092.jpg",
null,
"http://www.shengyibao.com/Public/Images/ForeApps/grade2.png",
null,
"http://image-ali.bianjiyi.com/1/2020/0110/17/15786480768762.jpg",
null,
"http://www.shengyibao.com/Public/Images/ForeApps/grade2.png",
null,
"http://image-ali.bianjiyi.com/1/2020/0110/17/15786480768762.jpg",
null,
"http://www.shengyibao.com/Public/Images/ForeApps/grade2.png",
null,
"http://www.shengyibao.com/Public/Images/ForeApps/noimage.gif",
null,
"http://imagebooksir.258fuwu.com/images/business/2020320/14/3663918301584687175.jpeg",
null
] | {"ft_lang_label":"__label__zh","ft_lang_prob":0.66500974,"math_prob":0.5072585,"size":568,"snap":"2020-10-2020-16","text_gpt3_token_len":707,"char_repetition_ratio":0.27659574,"word_repetition_ratio":0.0,"special_character_ratio":0.22535211,"punctuation_ratio":0.28169015,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9789087,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,1,null,null,null,3,null,null,null,1,null,null,null,null,null,null,null,null,null,null,null,null,null,1,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-04-03T20:10:30Z\",\"WARC-Record-ID\":\"<urn:uuid:8905eaf9-511a-43da-9c50-2e48e9f13699>\",\"Content-Length\":\"102211\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bce7259f-0caa-4aa2-b0b4-70bf28892b04>\",\"WARC-Concurrent-To\":\"<urn:uuid:41a26724-e4e0-47e9-a466-202dd107d6e9>\",\"WARC-IP-Address\":\"45.199.38.32\",\"WARC-Target-URI\":\"http://wzgzgt.com/qspevdu_d001015006\",\"WARC-Payload-Digest\":\"sha1:RTEYQUEMSCRC7WJPRC4W5RKNOJQ5DZ4O\",\"WARC-Block-Digest\":\"sha1:TR2E6KOL3JOJS66YJT2TQC4TTP37WHEF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-16/CC-MAIN-2020-16_segments_1585370518622.65_warc_CC-MAIN-20200403190006-20200403220006-00326.warc.gz\"}"} |
https://tutorialwing.com/kotlin-program-to-express-number-as-sum-of-two-prime-numbers/ | [
"# Kotlin Program to Express Number as Sum of Two Prime Numbers\n\n###### Greetings!\nWe have recently published 100+ articles on android tutorials with kotlin and java. If you need, you may visit Android Tutorial for beginners page. You can also check Kotlin Tutorial for beginners. Also, if you are interested in content writing, you can mail us at tutorialwing@gmail.com.\n\nWrite a Kotlin Program to Express number as Sum of Two Prime Numbers\n\nFor example,\n\n```Input:\n13\n\nOutput:\n2 + 11\n```\n```Input:\n20\n\nOutput:\n3 + 17\n7 + 13\n```\n\n20 can be expressed as sum of (3 and 17) or (7 and 13).\n\n```Input:\n4\n\nOutput:\n2 + 2\n```\n\nSimilarly, there is possibility for other numbers as well to be expressed as sum of two primes.\n\n## 1. Program to Express as Sum of Two Prime Numbers\n\n###### Pseudo Algorithm –\n• For any value n, run a loop from 2 to n/2.\n• Check if i is prime or not.\n• If yes, check if n-i is prime or not. If yes, n can be expressed as sum of i and (n-i).\n• If no, continue for other value of i.\n\nSourcecode –\n\n```import java.util.*\nimport kotlin.math.sqrt\n\nfun main() {\n\nprintln(\"Enter n:\")\n\nif(n < 4) {\nprintln(\"\\$n can not be expressed as sum of two primes\")\n} else {\nprintln(\"\\$n as sum of two primes: \")\nfor(i in 2..(n/2)) {\nif(isPrime(i) && isPrime(n-i)) {\nprint(\"\\$n = \\$i + \" + (n-i) + \"\\n\")\n}\n}\n}\n}\n\nprivate fun isPrime(n: Int): Boolean {\n\n// Number less than or equal 1 is not prime\nif(n <= 1)\nreturn false\n\n// 2 is only even prime number\nif(n == 2)\nreturn true\n\n// Check if number (> 2) is even. If yes, it is not prime.\nif(n % 2 == 0)\nreturn false\n\nvar flag = true\n\nval root = sqrt(n.toDouble()).toInt()\n\n// We need to check till square root of n only to find whether n is prime.\nfor(i in 3..root step 2) {\nif((n % i) == 0) {\nflag = false\nbreak\n}\n}\n\nreturn flag\n}\n```\n\nWhen you run the program, output will be –\n\n```Enter n:\n50\n50 as sum of two primes:\n50 = 3 + 47\n50 = 7 + 43\n50 = 13 + 37\n50 = 19 + 31\n```\n###### Explanation:\n\nHere, we have created an object of Scanner. Scanner takes an argument which says where to take input from.\nSystem.`in` means take input from standard input – Keyboard.\n\nread.nextIn() means read anything entered by user before space or line break from standard input – Keyboard.\n\nStore value read by nextInt() in variable n.\n\nThen,\n\n• Check if n is less than 4. If yes, then, it can not be expressed as sum of two primes. we have only two prime numbers, (2 and 3) whose sum is 5, below 4.\n• If it is greater than or equal to 4, then we run a loop from 2 to n/2.\n• We then check if i and (n-i) are primes or not. If yes, n can be expressed as sum of i and n-i.\n\nLet’s take an example,\nn = 5\n\nn > 4. So, else block will be executed.\n\nNow, run a loop from 2 to 2 (5/2 = 2).\n\nAt i = 2,\n\n• We check if 2 is prime or not. Since 2 is prime, we check if 5-2 (= 3) is prime or not.\n• Since 3 is prime, 5 can be expressed as sum of two primes: 2 + 3\n\nThus, we went through Kotlin program to express a number as sum of two prime numbers"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.79567146,"math_prob":0.9924283,"size":2859,"snap":"2020-45-2020-50","text_gpt3_token_len":846,"char_repetition_ratio":0.1313485,"word_repetition_ratio":0.093959734,"special_character_ratio":0.3361315,"punctuation_ratio":0.13221884,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99931526,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-26T09:26:20Z\",\"WARC-Record-ID\":\"<urn:uuid:df09f1cc-b1c4-4743-b866-cd37ca717cb3>\",\"Content-Length\":\"102971\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9fc02e9a-f7d3-4aa1-a1b2-2ccab6910d6d>\",\"WARC-Concurrent-To\":\"<urn:uuid:3470a938-3888-4b75-8c32-785491d9f4b1>\",\"WARC-IP-Address\":\"18.232.245.187\",\"WARC-Target-URI\":\"https://tutorialwing.com/kotlin-program-to-express-number-as-sum-of-two-prime-numbers/\",\"WARC-Payload-Digest\":\"sha1:VBZOVLRNWDTVZO3XHGKZPMYZBIHNDZVW\",\"WARC-Block-Digest\":\"sha1:NYZPONTPL44JB2LKH6YK4LRG4OOABDGW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107891203.69_warc_CC-MAIN-20201026090458-20201026120458-00324.warc.gz\"}"} |
https://math.stackexchange.com/questions/2602515/center-manifold-theorem-example-8-2-from-khalil | [
"# Center manifold theorem example 8.2 from Khalil\n\nI am reading the book of Hassan KhalinlNonlinear Systems (chapter 8.1. Center Manifold Theorem).\n\nIn Example 8.2 the author states a system\n\n$$\\dot{y}=yz$$ $$\\dot{z}=-z+ay^2,$$\n\nin which $a\\in \\mathbb{R}$.\n\nAs the linearized system at $(0,0)$ is already diagonal and y is associated with the eigenvalue which is zero. We can use $z=h(y)$, plugging this into the second equation we obtain the center manifold equation\n\n$$\\dfrac{\\partial h(y)}{\\partial y}\\dot{y}+h(y)-ay^2=0.$$\n\nUsing the first equation and $z=h(y)$ we obtain:\n\n$$\\dfrac{\\partial h(y)}{\\partial y}yh(y)+h(y)-ay^2=0.$$\n\nNow, he assumes that $h(y)=O(|y|^2)$ using this in the first equation we obtain\n\n$$\\dot{y}=yO(|y|^2)=O(|y|^3)$$\n\nfrom which it is not possible to draw any conclusions.\n\nNow, he assumes that $h(y)=h_2y^2+O(|y|^3)$. We have to determine the coefficient $h_2$ by using the center manifold equation which yields $h_2=a$. Using this for the first equation we obtain\n\n$$\\dot{y}=ay^3+O(|y|^4).$$\n\nThe author now states that $a<0$ leads to an asymptotically stable origin and $a>0$ to an unstable equilibrium point at the origin.\n\nQuestion 1: From a $y-\\dot{y}$-plot it is evident that the origin is asymptotically stable. Is there a more rigorous way to show this? I tried to construct a Lyapunov function $$V(y)=0.5y^2 \\implies \\dot{V}=y\\dot{y}=ay^4+O(|y|^5).$$\n\nI know that for the case of multivariable functions $V$ the higher order terms do matter for the assessment of positiveness/negativeness. But I don't see a reason why this should not be true for the single variable case. Hence, I would think that using this Lyapunov function would be a rigorous way to show the asymptotic stability.\n\nThen the author also investigates the case $a=0$. It is stated that this implies that $h(y)=0$ and thus $\\dot{y}=0$ which implies a stable origin for the nonlinear system.\n\nIf I use the center manifold equation ($a=0$) I obtain\n\n$$\\dfrac{\\partial h(y)}{\\partial y}yh(y)+h(y)=0 \\implies \\left[h'(y)y+1 \\right]h(y)=0.$$\n\nQuestion 2: The solution to this equation is given by $h(y)=0$ and $h(y)=\\ln\\dfrac{C}{y}$. Don't we consider the second equation because it does not fulfil $h(0)=h'(0)=0$ and is not defined at $y=0$?\n\n• For your second question, you are completely right. Any center manifold is tangent to the center space (and so we must have $h(0)=h'(0)=0$). – John B Jan 12 '18 at 21:36\n• What do you mean by \"a more rigorous way\"? The stability for $a\\ne0$ is immediate (and completely rigorous) from $\\dot{y}=ay^3+O(y^4)$. – John B Jan 12 '18 at 21:41\n• By rigorous I mean algebraically by invoking theorems. For example a Lyapunov function is rigorous. – MrYouMath Jan 12 '18 at 22:56\n\nFor your first question, you don't need any theorem, in the sense that you have already all you need. More precisely, say that $a>0$ (the case $a<0$ is analogous). For $y>0$ sufficiently small we have $$\\dot y=ay^3+O(y^4)\\ge ay^3−cy^4=y^3(a−cy)>0,$$ while for $y<0$ sufficiently small we have $$\\dot y=ay^3+O(y^4)≤ay^3+cy^4=y^3(a+cy)<0.$$ Note that this gives the sign of $\\dot y$ and so the origin is unstable.\nFor your second question, as I have already mentioned you are completely right. Any center manifold is tangent to the center space and so we must have $h(0)=h′(0)=0$.\n• +1 for the answer. Is there a missing $c$? – MrYouMath Jan 13 '18 at 12:10"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8435986,"math_prob":0.99973935,"size":2175,"snap":"2019-35-2019-39","text_gpt3_token_len":678,"char_repetition_ratio":0.1257485,"word_repetition_ratio":0.006369427,"special_character_ratio":0.30390805,"punctuation_ratio":0.07343412,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":1.0000026,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-09-18T17:57:19Z\",\"WARC-Record-ID\":\"<urn:uuid:fbab635d-0af9-428f-8969-c6fd2acaee2e>\",\"Content-Length\":\"141729\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9bebf3f9-953f-4aa9-a17f-5b46457b67d0>\",\"WARC-Concurrent-To\":\"<urn:uuid:8dab655e-487d-47d8-9c1c-36563f0cda26>\",\"WARC-IP-Address\":\"151.101.193.69\",\"WARC-Target-URI\":\"https://math.stackexchange.com/questions/2602515/center-manifold-theorem-example-8-2-from-khalil\",\"WARC-Payload-Digest\":\"sha1:3764FI7H65QK32ZBJIGJFNRMB6E53YPO\",\"WARC-Block-Digest\":\"sha1:QAPLA4XC63KSPABT4HXDNL6YDYWM2B2K\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-39/CC-MAIN-2019-39_segments_1568514573323.60_warc_CC-MAIN-20190918172932-20190918194932-00384.warc.gz\"}"} |
https://saemiller.com/navier-stokes/ | [
"# Navier-Stokes\n\nThe Navier-Stokes Equations\nA Class for Students who Love Flow and Mathematics\nA Comprehensive Course by Prof. S. A. E. Miller\n\n$$\\frac{\\partial \\rho}{\\partial t} + \\frac{\\partial \\rho u_j}{\\partial x_j} = 0$$\n\n$$\\frac{\\partial \\rho u_i}{\\partial t} + \\frac{\\partial \\rho u_i u_j}{\\partial x_j} = ~- \\frac{\\partial p}{\\partial x_j} \\delta_{ij} + \\frac{\\partial \\tau_{ij}}{\\partial x_j}$$\n\n$$\\frac{\\partial \\rho e_o}{\\partial t} + \\frac{\\partial \\rho u_j e_o}{\\partial x_j} = ~- \\frac{\\partial u_j p}{\\partial x_j} – \\frac{\\partial q_j}{\\partial x_j} + \\frac{\\partial u_i \\tau_{ij}}{\\partial x_j}$$\n\n### Course Material\n\nHere, I share a comprehensive set of notes that covers the state-of-the-art in understanding the Navier-Stokes equations. The course has approximately 1768 slides, over 2000 unique equations, 92 unique historical figures with pictures and fun facts, and many interspersed quotes of famous researchers. The course can be downloaded as a single PDF here …\n\nNavier-Stokes Equations Course Notes Complete by Prof. S. A. E. Miller\n\n### Course Objectives\n\nThe student will understand the history, physical meaning, and contemporary challenges within the field of theoretical fluid dynamics and turbulence.\n\n### Course Description\n\nNavier-Stokes Equations (NSE) History, derivation, physical meaning, classical solutions, stability, dynamical systems, existence, uniqueness, regularity, scales, ladder results, dissipation rates, Serrin’s blowup, capacitary approaches, mild solutions (Lebesgue, Sobolev, Besov, Morrey, BMO, Koch, Tataru), weak solutions, stochastic NSE, the Russian school, and invariant measures.\n\n### Outline of the Course Material\n\n• Introduction\n• Introduction, Syllabus, Outline of Course, Clay, Interested in Solutions\n• History\n• Overview\n• Detail\n• Mathematical Review — Review, Notation\n• Derivation of NS\n• Physical Meaning – Physical Meaning of NS\n• Classical Solutions – Classical solutions of the NS\n• Stability – Introduction to linear and nonlinear stability\n• Dynamical Dynamical Systems\n• Intro Exist Unique Regularity\n• Existence, Uniqueness, and Regularity\n• Regularity and intro to length scales for the 2D and 3D NSE\n• Dissipation Rates – Energy dissipation rates of Fourier spectra – bounded flow\n• Blowup – Criteria, Serrin’s criterion\n• Capacitary approach of the NSE integral equations\n• Differential and integral NSE\n• Mild solutions\n• Lebesgue / Sobolev spaces\n• Besov or Morrey spaces\n• BMO$$^{-1}$$ and Koch and Tataru theorem\n• Leray’s weak solutions\n• Stochastics\n• Statistics and Derivation of Stochastic NSE\n• Probability and Statistical Theory of Turbulence\n• Invariant measures and PDF\n• Existence theory of swirling flow — existence of stochastic NSE\n\n### What is this Class, or, What is it Not?\n\n• What this class is\n• Examination of classical and contemporary approaches for solutions and analysis of NSE\n• How are these methods related to societal problems – and how can they be moved to predictive methods\n• A lot of effort on the part of students to understand the material\n• A lot of material\n• What this class is not\n• Derivation of trivial solutions of NSE\n• Example – laminar flows with simplified boundary conditions (BC)s\n• Easy to understand, but we will try our best to explain everything\n• Review of earlier mathematics or fluids classes in CFD, turbulence, or mathematics\n• If you don’t understand something, take the initiative and look it 
up\n\n### Recommended Textbooks and/or Software\n\n• A complete set of course notes will be provided by the professor.\n• There are no required textbooks for the class.",
null,
"Just a few of the books used for the class development, which don’t include the countless journal articles."
] | [
null,
"https://saemiller.com/wp-content/uploads/2021/09/Screen-Shot-2021-09-23-at-6.07.13-PM.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7646054,"math_prob":0.9224108,"size":3597,"snap":"2021-43-2021-49","text_gpt3_token_len":896,"char_repetition_ratio":0.14277762,"word_repetition_ratio":0.0074766357,"special_character_ratio":0.23102586,"punctuation_ratio":0.10350877,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99539536,"pos_list":[0,1,2],"im_url_duplicate_count":[null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-10-17T15:47:17Z\",\"WARC-Record-ID\":\"<urn:uuid:cc11a758-af5e-4ad2-af45-797e7427b88e>\",\"Content-Length\":\"24989\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:10d6ab62-28d7-4e51-94cb-35b5575bea84>\",\"WARC-Concurrent-To\":\"<urn:uuid:e29713bc-4594-4e2e-a7a5-ed402c49b8bb>\",\"WARC-IP-Address\":\"104.225.208.28\",\"WARC-Target-URI\":\"https://saemiller.com/navier-stokes/\",\"WARC-Payload-Digest\":\"sha1:I4XQWSRJ3SEDHTY5R5ELWKGCKJJ3M4FO\",\"WARC-Block-Digest\":\"sha1:CRLMO3I22K5ETZC5DQ4AA2T5D4TLVEQW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-43/CC-MAIN-2021-43_segments_1634323585178.60_warc_CC-MAIN-20211017144318-20211017174318-00582.warc.gz\"}"} |
http://watajoke.com/unit-1-algebra-basics-homework-4-order-of-operations/ | [
"Unit 1 algebra basics homework 4 order of operations\nby\n\nBrackets may like terms maze activity here is a salesperson, understandable pieces. Each addend separately, click on both sides. Watch this will generate a statement of operations, they will divide monomials, c divideeachtermbya wilson 2013 answers. Key placed on numbers in 1 get the variables. Note that can be used after sessions to addition and division worksheets in part of ten blocks worksheets. By factoring worksheet for students often as the square root method. Order, they are paid 50 unit 1 algebra basics homework 4 order of operations hour for unit key provided! To evaluate equations solving x/a c linear equations for all of salad, not work related. Set of operations is a monster filled maze worksheet. Each problem onto a real-world problems for there are. Dec 22, completing the beginning on one atta time for creative writing 11 plus, 35 2: in algebra i. Shapes of unique essays papers quadratic equations for completing the choice to the quadratic formula activity here and functions,. There is the basic arithmetic to work for solving simple, free! Mat 86 unit 1 algebra basics homework 4 order of operations some of during the mathematical phrase please note: variable. Each edge; graphing, real number except 4."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.91752946,"math_prob":0.9747803,"size":6356,"snap":"2021-04-2021-17","text_gpt3_token_len":1364,"char_repetition_ratio":0.1393262,"word_repetition_ratio":0.028708134,"special_character_ratio":0.2122404,"punctuation_ratio":0.116959065,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9885754,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-01-21T02:02:42Z\",\"WARC-Record-ID\":\"<urn:uuid:d18c7ccc-9d23-4626-9e0e-a938a3af79eb>\",\"Content-Length\":\"34399\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cef6cf32-4fa8-4ec0-b898-14c56397b21b>\",\"WARC-Concurrent-To\":\"<urn:uuid:89295ffd-ae38-45a6-a961-20e05e4adc33>\",\"WARC-IP-Address\":\"198.12.158.44\",\"WARC-Target-URI\":\"http://watajoke.com/unit-1-algebra-basics-homework-4-order-of-operations/\",\"WARC-Payload-Digest\":\"sha1:CEXLEON3LGVKDCSKXAS55LY3WJYOG6HO\",\"WARC-Block-Digest\":\"sha1:BXSVUWYVOBQDK256MU3AKRGTYLO6NCN2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-04/CC-MAIN-2021-04_segments_1610703522150.18_warc_CC-MAIN-20210121004224-20210121034224-00559.warc.gz\"}"} |
https://hooshani.kfupm.edu.sa/teaching/chem-101/chapter-1/ | [
"# Class Notes\n\n1.1 The Study of Chemistry\n\nChemists are interested in the reactions and properties of matter and measure the changes from one state of matter to another. By analyzing the properties of matter, a chemist can design, modify, and improve the way a material function. We need a way to classify matter and must introduce terms that will allow you to communicate with a chemist.\n\n1.3 Properties of Matter\n\nIn order for the properties of matter to take on meaning, a chemist must be able to quantify their measurements and communicate their results to other scientists. In 1960 a system was developed for the preferred metric units for use in science called SI Units. We will use these units throughout the course.\n\n1.4 Units of Measurement\n\nNow that we have emphasized the units we will be using, we need to come up with a system that will indicate to us, and more importantly, someone else who is reading our data, just how good our measurement is. Significant Figures tell us how good our data is.\n\n1.5 Significant Figures\n\nOnce the rules for significant figures have been established for individual numbers, we then need to determine a set of rules allowing us to determine significant figures in calculations. The following video shows the significant figure rules for problems involving addition/subtraction and multiplication/division.\n\n1.5 Significant Figures in Calculations\n\nThere are many instances in which a unit that we are given is not a unit that we want to use. Chemists use Dimensional Analysis to convert a given unit to another unit by using Conversion Factors.\n\n1.6 Dimensional Analysis"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.93495095,"math_prob":0.8072504,"size":1539,"snap":"2019-13-2019-22","text_gpt3_token_len":303,"char_repetition_ratio":0.11400651,"word_repetition_ratio":0.0,"special_character_ratio":0.1916829,"punctuation_ratio":0.08680555,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9569067,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-25T09:46:34Z\",\"WARC-Record-ID\":\"<urn:uuid:4fa6d2cc-b84a-4c66-9d21-599c62584101>\",\"Content-Length\":\"42928\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:2a825b03-d9cd-42c9-96de-ba3b8699d8ae>\",\"WARC-Concurrent-To\":\"<urn:uuid:03c06b00-982d-44a9-86a4-3154b69a9547>\",\"WARC-IP-Address\":\"198.11.1.93\",\"WARC-Target-URI\":\"https://hooshani.kfupm.edu.sa/teaching/chem-101/chapter-1/\",\"WARC-Payload-Digest\":\"sha1:LNOUBLEAHSNBEOJUNOH6UGSVEXSM3FE3\",\"WARC-Block-Digest\":\"sha1:5R5Q5DKNXBSQZVPSPZZ2UZJKDFPJUO6S\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912203865.15_warc_CC-MAIN-20190325092147-20190325114147-00204.warc.gz\"}"} |
https://caff.de/projects/decaff-commons/javadoc-generics/de/caff/generics/function/VarCharOperator.html | [
"de.caff.generics.function\n\n## Interface VarCharOperator\n\n• Functional Interface:\nThis is a functional interface and can therefore be used as the assignment target for a lambda expression or method reference.\n\n```@FunctionalInterface\npublic interface VarCharOperator```\nPrimitive char operator with a variable number of arguments.\n\nAlthough this basically works for a variable char array via the `apply(char...)` method implementors have to implement the method `apply(CharIndexable)` with a `CharIndexable` argument. This is done to make this class generally more useful without the need to copy arrays.\n\nSince:\nNovember 03, 2020\nAuthor:\nRammi\n• ### Method Summary\n\nAll Methods\nModifier and Type Method and Description\n`default char` `apply(char... values)`\nApply this operator.\n`char` `apply(CharIndexable values)`\nApply this operator.\n`static VarCharOperator` ```fromBinary(CharOperator2 binaryOp, char startValue)```\nCreate an variable args operator from applying a binary operator over and over.\n• ### Method Detail\n\n• #### apply\n\n`default char apply(char... values)`\nApply this operator. This default method forwards execution to `apply(CharIndexable)`.\nParameters:\n`values` - arguments on which this operator is applied\nReturns:\nresult of applying this operator to all arguments\n• #### apply\n\n```char apply(@NotNull\nCharIndexable values)```\nApply this operator.\nParameters:\n`values` - indexable providing the values on which this operator operates\nReturns:\nresult of applying this operator to all values\n• #### fromBinary\n\n```@NotNull\nstatic VarCharOperator fromBinary(@NotNull\nCharOperator2 binaryOp,\nchar startValue)```\nCreate an variable args operator from applying a binary operator over and over.\n\nEg\n\n``````\nVarCharOperator maxOp = VarCharOperator.fromBinary((v1, v2) -> (char)Math.max(v1, v2), '\\0');\nchar max = maxOp.apply('A', 'B', 'C', 'D'');\n``````\nOn each application the currently accumulated result becomes the left (first) argument of `binaryOp`, while the run value becomes the right (second).\nParameters:\n`binaryOp` - binary operator\n`startValue` - start value (result of operator if no values are given)\nReturns:\nvariable char argument operator which applies the given binary operator over and over"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5300462,"math_prob":0.7780575,"size":1343,"snap":"2023-40-2023-50","text_gpt3_token_len":306,"char_repetition_ratio":0.17102315,"word_repetition_ratio":0.011363637,"special_character_ratio":0.20997766,"punctuation_ratio":0.14018692,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9533475,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-04T17:44:36Z\",\"WARC-Record-ID\":\"<urn:uuid:17d548c9-8a9b-4f49-bee7-e037c78475ad>\",\"Content-Length\":\"13307\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b32303ae-2380-4e70-8b79-96bb0a7310c2>\",\"WARC-Concurrent-To\":\"<urn:uuid:433dca8b-287d-4e64-836c-4d77dd1428d8>\",\"WARC-IP-Address\":\"217.160.0.90\",\"WARC-Target-URI\":\"https://caff.de/projects/decaff-commons/javadoc-generics/de/caff/generics/function/VarCharOperator.html\",\"WARC-Payload-Digest\":\"sha1:2B4YYF5TJT2ZCW7G6KP5WDKXYNHZMQYI\",\"WARC-Block-Digest\":\"sha1:L46QULJBNTWZDBUMYULHQWC5VV54IWYZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233511386.54_warc_CC-MAIN-20231004152134-20231004182134-00869.warc.gz\"}"} |
https://itcources.com/over-100-data-scientist-interview-questions-and-answers/ | [
"",
null,
"# OVER 100 Data Scientist Interview Questions and Answers!\n\nIn this Data Science Interview Questions blog, I will introduce you to the most frequently asked questions on Data Science, Analytics and Machine Learning interviews. This blog is the perfect guide for you to learn all the concepts required to clear a Data Science interview.\n\nData science, also known as a data-driven decision, is an interdisciplinary field about scientific methods, processes and systems to extract knowledge from data in various forms and make decisions based on this knowledge. There are a lot of things that a data scientist should know, I will give you a list of data science interview questions that I faced during several interviews, if you are an aspiring data scientist then you can start from here, if you have been for a while in this field then it might be repetition for you, but you will get a lot of things from here. I will try to start from very basic interview questions and cover advanced ones later, So let’s get started.\n\nThe following are the topics covered in our interview questions:\n\n• Basic Data Science Interview Questions\n• Statistics Interview Questions\n• Data Analysis Interview Questions\n• Machine Learning Interview Questions\n• Deep Learning Interview Questions\n\n## BASIC DATA SCIENCE INTERVIEW QUESTIONS 2020\n\n1. What is Data Science? List the differences between supervised and unsupervised learning.\n\n### Data Science is a blend of various tools, algorithms, and machine learning principles with the goal to discover hidden patterns from the raw data. How is this different from what statisticians have been doing for years?\n\nThe answer lies in the difference between explaining and predicting.",
null,
"2. What is the difference between supervised and unsupervised machine learning?\n\nSupervised Machine learning:\n\nSupervised machine learning requires training labelled data. Let’s discuss it in a bit detail when we have\n\nUnsupervised Machine learning:\n\nUnsupervised machine learning doesn’t require labelled data.\n\n### 3. What is Selection Bias?\n\nSelection bias is a kind of error that occurs when the researcher decides who is going to be studied. It is usually associated with research where the selection of participants isn’t random. It is sometimes referred to as the selection effect. It is the distortion of statistical analysis, resulting from the method of collecting samples. If the selection bias is not taken into account, then some conclusions of the study may not be accurate.\n\nThe types of selection bias include:\n\n1. Sampling bias: It is a systematic error due to a non-random sample of a population causing some members of the population to be less likely to be included than others resulting in a biased sample.\n2. Time interval: A trial may be terminated early at an extreme value (often for ethical reasons), but the extreme value is likely to be reached by the variable with the largest variance, even if all variables have a similar mean.\n3. Data: When specific subsets of data are chosen to support a conclusion or rejection of bad data on arbitrary grounds, instead of according to previously stated or generally agreed criteria.\n4. Attrition: Attrition bias is a kind of selection bias caused by attrition (loss of participants) discounting trial subjects/tests that did not run to completion.\n\n### 4. What is bias-variance trade-off?\n\nBias: Bias is an error introduced in your model due to oversimplification of the machine learning algorithm. It can lead to underfitting. When you train your model at that time model makes simplified assumptions to make the target function easier to understand.\n\nLow bias machine learning algorithms — Decision Trees, k-NN and SVM High bias machine learning algorithms — Linear Regression, Logistic Regression\n\nVariance: Variance is error introduced in your model due to complex machine learning algorithm, your model learns noise also from the training data set and performs badly on test data set. It can lead to high sensitivity and overfitting.\n\nNormally, as you increase the complexity of your model, you will see a reduction in error due to lower bias in the model. However, this only happens until a particular point. As you continue to make your model more complex, you end up over-fitting your model and hence your model will start suffering from high variance.",
null,
"Bias-Variance trade-off: The goal of any supervised machine learning algorithm is to have low bias and low variance to achieve good prediction performance.\n\n1. The k-nearest neighbour algorithm has low bias and high variance, but the trade-off can be changed by increasing the value of k which increases the number of neighbours that contribute to the prediction and in turn increases the bias of the model.\n2. The support vector machine algorithm has low bias and high variance, but the trade-off can be changed by increasing the C parameter that influences the number of violations of the margin allowed in the training data which increases the bias but decreases the variance.\n\nThere is no escaping the relationship between bias and variance in machine learning. Increasing the bias will decrease the variance. Increasing the variance will decrease bias.\n\n5. What is exploding gradients ?\n\nGradient is the direction and magnitude calculated during training of a neural network that is used to update the network weights in the right direction and by the right amount.\n\n“Exploding gradients are a problem where large error gradients accumulate and result in very large updates to neural network model weights during training.” At an extreme, the values of weights can become so large as to overflow and result in NaN values.\n\nThis has the effect of your model being unstable and unable to learn from your training data. Now let’s understand what is the gradient.\n\n6. What is a confusion matrix ?\n\n#### The confusion matrix is a 2X2 table that contains 4 outputs provided by the binary classifier. Various measures, such as error-rate, accuracy, specificity, sensitivity, precision and recall are derived from it. Confusion Matrix",
null,
"A data set used for performance evaluation is called test data set. It should contain the correct labels and predicted labels.",
null,
"The predicted labels will exactly the same if the performance of a binary classifier is perfect.",
null,
"The predicted labels usually match with part of the observed labels in real world scenarios.",
null,
"A binary classifier predicts all data instances of a test dataset as either positive or negative. This produces four outcomes-\n\n1. True positive(TP) — Correct positive prediction\n2. False positive(FP) — Incorrect positive prediction\n3. True negative(TN) — Correct negative prediction\n4. False negative(FN) — Incorrect negative prediction",
null,
"Basic measures derived from the confusion matrix\n\n1. Error Rate = (FP+FN)/(P+N)\n2. Accuracy = (TP+TN)/(P+N)\n3. Sensitivity(Recall or True positive rate) = TP/P\n4. Specificity(True negative rate) = TN/N\n5. Precision(Positive predicted value) = TP/(TP+FP)\n6. F-Score(Harmonic mean of precision and recall) = (1+b)(PREC.REC)/(b²PREC+REC) where b is commonly 0.5, 1, 2.\n\n### In the wide-format, a subject’s repeated responses will be in a single row, and each response is in a separate column. In the long-format, each row is a one-time point per subject. You can recognize data in wide format by the fact that columns generally represent groups.",
null,
"### Data is usually distributed in different ways with a bias to the left or to the right or it can all be jumbled up.\n\nHowever, there are chances that data is distributed around a central value without any bias to the left or right and reaches normal distribution in the form of a bell-shaped curve.",
null,
"Figure: Normal distribution in a bell curve\n\nThe random variables are distributed in the form of a symmetrical, bell-shaped curve.\n\nProperties of Normal Distribution are as follows;\n\n1. Unimodal -one mode\n2. Symmetrical -left and right halves are mirror images\n3. Bell-shaped -maximum height (mode) at the mean\n4. Mean, Mode, and Median are all located in the center\n5. Asymptotic\n\n### Covariance and Correlation are two mathematical concepts; these two approaches are widely used in statistics. Both Correlation and Covariance establish the relationship and also measure the dependency between two random variables. Though the work is similar between these two in mathematical terms, they are different from each other.",
null,
"Correlation: Correlation is considered or described as the best technique for measuring and also for estimating the quantitative relationship between two variables. Correlation measures how strongly two variables are related.\n\nCovariance: In covariance two items vary together and it’s a measure that indicates the extent to which two random variables change in cycle. It is a statistical term; it explains the systematic relation between a pair of random variables, wherein changes in one variable are reciprocal by a corresponding change in another variable.\n\n9. What Are Confounding Variables?\n\nIn statistics, a confounder is a variable that influences both the dependent variable and independent variable.\n\nFor example, if you are researching whether a lack of exercise leads to weight gain,\n\nlack of exercise = independent variable\n\nweight gain = dependent variable.\n\nA confounding variable here would be any other variable that affects both of these variables, such as the age of the subject.\n\n### 10. What Are the Types of Biases That Can Occur During Sampling?\n\n• Selection bias\n• Under coverage bias\n• Survivorship bias\n\n11. What is Survivorship Bias?\n\nIt is the logical error of focusing aspects that support surviving some process and casually overlooking those that did not work because of their lack of prominence. This can lead to wrong conclusions in numerous different means.\n\n12. What is selection Bias?\n\nSelection bias occurs when the sample obtained is not representative of the population intended to be analysed.\n\n13. Explain how a ROC curve works ?\n\nThe ROC curve is a graphical representation of the contrast between true positive rates and false positive rates at various thresholds. It is often used as a proxy for the trade-off between the sensitivity(true positive rate) and false positive rate.\n\n## We need to make some assumptions about this question before we can answer it. Let’s assume that there are two possible places to purchase a particular item on Amazon and the probability of finding it at location A is 0.6 and B is 0.8. The probability of finding the item on Amazon can be explained as so:\n\nWe can reword the above as P(A) = 0.6 and P(B) = 0.8. Furthermore, let’s assume that these are independent events, meaning that the probability of one event is not impacted by the other. We can then use the formula…\n\nP(A or B) = P(A) + P(B) — P(A and B)\n\nP(A or B) = 0.6 + 0.8 — (0.6*0.8)\n\nP(A or B) = 0.92\n\n##### 15: You randomly draw a coin from 100 coins — 1 unfair coin (head-head), 99 fair coins (head-tail) and roll it 10 times. If the result is 10 heads, what is the probability that the coin is unfair?\n\nThis can be answered using the Bayes Theorem. The extended equation for the Bayes Theorem is the following:\n\nAssume that the probability of picking the unfair coin is denoted as P(A) and the probability of flipping 10 heads in a row is denoted as P(B). Then P(B|A) is equal to 1, P(B∣¬A) is equal to 0.⁵¹⁰, and P(¬A) is equal to 0.99.\n\nIf you fill in the equation, then P(A|B) = 0.9118 or 91.18%.\n\n##### 16. What is the difference between Point Estimates and Confidence Interval?\n\nPoint Estimation gives us a particular value as an estimate of a population parameter. Method of Moments and Maximum Likelihood estimator methods are used to derive Point Estimators for population parameters.\n\nA confidence interval gives us a range of values which is likely to contain the population parameter. 
The confidence interval is generally preferred, as it tells us how likely this interval is to contain the population parameter. This likeliness or probability is called Confidence Level or Confidence coefficient and represented by 1 — alpha, where alpha is the level of significance.\n\n1. What is the goal of A/B Testing?\n\nIt is a hypothesis testing for a randomized experiment with two variables A and B.\n\nThe goal of A/B Testing is to identify any changes to the web page to maximize or increase the outcome of interest. A/B testing is a fantastic method for figuring out the best online promotional and marketing strategies for your business. It can be used to test everything from website copy to sales emails to search ads\n\nAn example of this could be identifying the click-through rate for a banner ad.\n\n1. What is p-value?\n\nWhen you perform a hypothesis test in statistics, a p-value can help you determine the strength of your results. p-value is a number between 0 and 1. Based on the value it will denote the strength of the results. The claim which is on trial is called the Null Hypothesis.\n\nLow p-value (≤ 0.05) indicates strength against the null hypothesis which means we can reject the null Hypothesis. High p-value (≥ 0.05) indicates strength for the null hypothesis which means we can accept the null Hypothesis p-value of 0.05 indicates the Hypothesis could go either way. To put it in another way,\n\nHigh P values: your data are likely with a true null. Low P values: your data are unlikely with a true null.\n\n##### 19: Difference between convex and non-convex cost function; what does it mean when a cost function is non-convex?\n\nA convex function is one where a line drawn between any two points on the graph lies on or above the graph. It has one minimum.\n\nA non-convex function is one where a line drawn between any two points on the graph may intersect other points on the graph. It characterized as “wavy”.\n\nWhen a cost function is non-convex, it means that there’s a likelihood that the function may find local minima instead of the global minimum, which is typically undesired in machine learning models from an optimization perspective.\n\n##### 20: Walk through the probability fundamentals\n\nFor this, I’m going to look at the eight rules of probability laid out here and the four different counting methods (see more here).\n\nEight rules of probability\n\n• Rule #1: For any event A, 0 ≤ P(A) ≤ 1; in other words, the probability of an event can range from 0 to 1.\n• Rule #2: The sum of the probabilities of all possible outcomes always equals 1.\n• Rule #3: P(not A) = 1 — P(A); This rule explains the relationship between the probability of an event and its complement event. A complement event is one that includes all possible outcomes that aren’t in A.\n• Rule #4: If A and B are disjoint events (mutually exclusive), then P(A or B) = P(A) + P(B); this is called the addition rule for disjoint events\n• Rule #5: P(A or B) = P(A) + P(B) — P(A and B); this is called the general addition rule.\n• Rule #6: If A and B are two independent events, then P(A and B) = P(A) * P(B); this is called the multiplication rule for independent events.\n• Rule #7: The conditional probability of event B given event A is P(B|A) = P(A and B) / P(A)\n• Rule #8: For any two events A and B, P(A and B) = P(A) * P(B|A); this is called the general multiplication rule\n\nCounting Methods",
null,
"Factorial Formula: n! = n x (n -1) x (n — 2) x … x 2 x 1\n\nUse when the number of items is equal to the number of places available.\n\nEg. Find the total number of ways 5 people can sit in 5 empty seats.\n\n= 5 x 4 x 3 x 2 x 1 = 120\n\nFundamental Counting Principle (multiplication)\n\nThis method should be used when repetitions are allowed and the number of ways to fill an open place is not affected by previous fills.\n\nEg. There are 3 types of breakfasts, 4 types of lunches, and 5 types of desserts. The total number of combinations is = 5 x 4 x 3 = 60\n\nPermutations: P(n,r)= n! / (n−r)!\n\nThis method is used when replacements are not allowed and order of item ranking matters.\n\nEg. A code has 4 digits in a particular order and the digits range from 0 to 9. How many permutations are there if one digit can only be used once?\n\nP(n,r) = 10!/(10–4)! = (10x9x8x7x6x5x4x3x2x1)/(6x5x4x3x2x1) = 5040\n\nCombinations Formula: C(n,r)=(n!)/[(n−r)!r!]\n\nThis is used when replacements are not allowed and the order in which items are ranked does not mater.\n\nEg. To win the lottery, you must select the 5 correct numbers in any order from 1 to 52. What is the number of possible combinations?\n\nC(n,r) = 52! / (52–5)!5! = 2,598,960\n\n##### 21: Describe Markov chains?",
null,
"Brilliant provides a great definition of Markov chains (here):\n\n“A Markov chain is a mathematical system that experiences transitions from one state to another according to certain probabilistic rules. The defining characteristic of a Markov chain is that no matter how the process arrived at its present state, the possible future states are fixed. In other words, the probability of transitioning to any particular state is dependent solely on the current state and time elapsed.”\n\nThe actual math behind Markov chains requires knowledge on linear algebra and matrices, so I’ll leave some links below in case you want to explore this topic further on your own.\n\n1. What do you understand by statistical power of sensitivity and how do you calculate it?\n\nSensitivity is commonly used to validate the accuracy of a classifier (Logistic, SVM, Random Forest etc.).\n\nSensitivity is nothing but “Predicted True events/ Total events”. True events here are the events which were true and model also predicted them as true.\n\nCalculation of seasonality is pretty straightforward.\n\nSeasonality = ( True Positives ) / ( Positives in Actual Dependent Variable )\n\n1. Why Is Re-sampling Done?\n\nResampling is done in any of these cases:\n\n• Estimating the accuracy of sample statistics by using subsets of accessible data or drawing randomly with replacement from a set of data points\n• Substituting labels on data points when performing significance tests\n• Validating models by using random subsets (bootstrapping, cross-validation)\n\n1. What are the differences between overfitting and under-fitting?",
null,
"In statistics and machine learning, one of the most common tasks is to fit a model to a set of training data, so as to be able to make reliable predictions on general untrained data.\n\nIn overfitting, a statistical model describes random error or noise instead of the underlying relationship. Overfitting occurs when a model is excessively complex, such as having too many parameters relative to the number of observations. A model that has been overfitted, has poor predictive performance, as it overreacts to minor fluctuations in the training data.\n\nUnderfitting occurs when a statistical model or machine learning algorithm cannot capture the underlying trend of the data. Underfitting would occur, for example, when fitting a linear model to non-linear data. Such a model too would have poor predictive performance.\n\n1. How to combat Overfitting and Underfitting?\n\nTo combat overfitting and underfitting, you can resample the data to estimate the model accuracy (k-fold cross-validation) and by having a validation dataset to evaluate the model.\n\n1. What is regularisation? Why is it useful?",
null,
"#### Data Scientist Masters Program\n\nRegularisation is the process of adding tuning parameter to a model to induce smoothness in order to prevent overfitting. This is most often done by adding a constant multiple to an existing weight vector. This constant is often the L1(Lasso) or L2(ridge). The model predictions should then minimize the loss function calculated on the regularized training set.\n\n1. What Is the Law of Large Numbers?\n\nIt is a theorem that describes the result of performing the same experiment a large number of times. This theorem forms the basis of frequency-style thinking. It says that the sample means, the sample variance and the sample standard deviation converge to what they are trying to estimate.\n\n##### 28: Is mean imputation of missing data acceptable practice? Why or why not?\n\nMean imputation is the practice of replacing null values in a data set with the mean of the data.\n\nMean imputation is generally bad practice because it doesn’t take into account feature correlation. For example, imagine we have a table showing age and fitness score and imagine that an eighty-year-old has a missing fitness score. If we took the average fitness score from an age range of 15 to 80, then the eighty-year-old will appear to have a much higher fitness score that he actually should.\n\n#### Machine Learning Fundamentals\n\n##### 29: What are some of the steps for data wrangling and data cleaning before applying machine learning algorithms?\n\nThere are many steps that can be taken when data wrangling and data cleaning. Some of the most common steps are listed below:\n\n• Data profiling: Almost everyone starts off by getting an understanding of their dataset. More specifically, you can look at the shape of the dataset with .shape and a description of your numerical variables with .describe().\n• Data visualizations: Sometimes, it’s useful to visualize your data with histograms, boxplots, and scatterplots to better understand the relationships between variables and also to identify potential outliers.\n• Syntax error: This includes making sure there’s no white space, making sure letter casing is consistent, and checking for typos. You can check for typos by using .unique() or by using bar graphs.\n• Standardization or normalization: Depending on the dataset your working with and the machine learning method you decide to use, it may be useful to standardize or normalize your data so that different scales of different variables don’t negatively impact the performance of your model.\n• Handling null values: There are a number of ways to handle null values including deleting rows with null values altogether, replacing null values with the mean/median/mode, replacing null values with a new category (eg. unknown), predicting the values, or using machine learning models that can deal with null values. Read more here.\n• Other things include: removing irrelevant data, removing duplicates, and type conversion.\n\n##### 30: How to deal with unbalanced binary classification?\n\nThere are a number of ways to handle unbalanced binary classification (assuming that you want to identify the minority class):\n\n• First, you want to reconsider the metrics that you’d use to evaluate your model. The accuracy of your model might not be the best metric to look at because and I’ll use an example to explain why. Let’s say 99 bank withdrawals were not fraudulent and 1 withdrawal was. If your model simply classified every instance as “not fraudulent”, it would have an accuracy of 99%! 
Therefore, you may want to consider using metrics like precision and recall.\n• Another method to improve unbalanced binary classification is by increasing the cost of misclassifying the minority class. By increasing the penalty of such, the model should classify the minority class more accurately.\n• Lastly, you can improve the balance of classes by oversampling the minority class or by undersampling the majority class. You can read more about it here.\n\n##### 31: What is the difference between a box plot and a histogram?",
null,
"Boxplot vs Histogram\n\nWhile boxplots and histograms are visualizations used to show the distribution of the data, they communicate information differently.\n\nHistograms are bar charts that show the frequency of a numerical variable’s values and are used to approximate the probability distribution of the given variable. It allows you to quickly understand the shape of the distribution, the variation, and potential outliers.\n\nBoxplots communicate different aspects of the distribution of data. While you can’t see the shape of the distribution through a box plot, you can gather other information like the quartiles, the range, and outliers. Boxplots are especially useful when you want to compare multiple charts at the same time because they take up less space than histograms.",
null,
"##### 32: Describe different regularization methods, such as L1 and L2 regularization?",
null,
"Both L1 and L2 regularization are methods used to reduce the overfitting of training data. Least Squares minimizes the sum of the squared residuals, which can result in low bias but high variance.\n\nL2 Regularization, also called ridge regression, minimizes the sum of the squared residuals plus lambda times the slope squared. This additional term is called the Ridge Regression Penalty. This increases the bias of the model, making the fit worse on the training data, but also decreases the variance.\n\nIf you take the ridge regression penalty and replace it with the absolute value of the slope, then you get Lasso regression or L1 regularization.\n\nL2 is less robust but has a stable solution and always one solution. L1 is more robust but has an unstable solution and can possibly have multiple solutions.\n\nStatQuest has an amazing video on Lasso and Ridge regression here.\n\n##### 33: Neural Network Fundamentals",
null,
"A neural network is a multi-layered model inspired by the human brain. Like the neurons in our brain, the circles above represent a node. The blue circles represent the input layer, the black circles represent the hidden layers, and the green circles represent the output layer. Each node in the hidden layers represents a function that the inputs go through, ultimately leading to an output in the green circles. The formal term for these functions is called the sigmoid activation function.\n\nIf you want a step by step example of creating a neural network, check out Victor Zhou’s article here.\n\nIf you’re a visual/audio learner, 3Blue1Brown has an amazing series on neural networks and deep learning on YouTube here.\n\n##### 34: What is cross-validation?\n\nCross-validation is essentially a technique used to assess how well a model performs on a new independent dataset. The simplest example of cross-validation is when you split your data into two groups: training data and testing data, where you use the training data to build the model and the testing data to test the model.\n\n##### 35: How to define/select metrics?\n\nThere isn’t a one-size-fits-all metric. The metric(s) chosen to evaluate a machine learning model depends on various factors:\n\n• Is it a regression or classification task?\n• What is the business objective? Eg. precision vs recall\n• What is the distribution of the target variable?\n\nThere are a number of metrics that can be used, including adjusted r-squared, MAE, MSE, accuracy, recall, precision, f1 score, and the list goes on.\n\nCheck out questions related to modeling metrics on Interview Query\n\n##### 36: Explain what precision and recall are\n\nRecall attempts to answer “What proportion of actual positives was identified correctly?”",
null,
"Precision attempts to answer “What proportion of positive identifications was actually correct?”",
null,
"",
null,
"Taken from Wikipedia\n\n##### 37: Explain what a false positive and a false negative are. Why is it important these from each other? Provide examples when false positives are more important than false negatives, false negatives are more important than false positives and when these two types of errors are equally important\n\nA false positive is an incorrect identification of the presence of a condition when it’s absent.\n\nA false negative is an incorrect identification of the absence of a condition when it’s actually present.\n\nAn example of when false negatives are more important than false positives is when screening for cancer. It’s much worse to say that someone doesn’t have cancer when they do, instead of saying that someone does and later realizing that they don’t.\n\nThis is a subjective argument, but false positives can be worse than false negatives from a psychological point of view. For example, a false positive for winning the lottery could be a worse outcome than a false negative because people normally don’t expect to win the lottery anyways.\n\n##### 38: What does NLP stand for?\n\nNLP stands for Natural Language Processing. It is a branch of artificial intelligence that gives machines the ability to read and understand human languages.\n\n##### 39: When would you use random forests Vs SVM and why?\n\nThere are a couple of reasons why a random forest is a better choice of model than a support vector machine:\n\n• Random forests allow you to determine the feature importance. SVM’s can’t do this.\n• Random forests are much quicker and simpler to build than an SVM.\n• For multi-class classification problems, SVMs require a one-vs-rest method, which is less scalable and more memory intensive.\n\n##### 40: Why is dimension reduction important?\n\nDimensionality reduction is the process of reducing the number of features in a dataset. This is important mainly in the case when you want to reduce variance in your model (overfitting).\n\nWikipedia states four advantages of dimensionality reduction (see here):\n\n1. It reduces the time and storage space required\n2. Removal of multi-collinearity improves the interpretation of the parameters of the machine learning model\n3. It becomes easier to visualize the data when reduced to very low dimensions such as 2D or 3D\n4. It avoids the curse of dimensionality\n\n##### 41: What is principal component analysis? Explain the sort of problems you would use PCA for.\n\nIn its simplest sense, PCA involves project higher dimensional data (eg. 3 dimensions) to a smaller space (eg. 2 dimensions). This results in a lower dimension of data, (2 dimensions instead of 3 dimensions) while keeping all original variables in the model.\n\nPCA is commonly used for compression purposes, to reduce required memory and to speed up the algorithm, as well as for visualization purposes, making it easier to summarize data.\n\n##### 42: Why is Naive Bayes so bad? How would you improve a spam detection algorithm that uses naive Bayes?\n\nOne major drawback of Naive Bayes is that it holds a strong assumption in that the features are assumed to be uncorrelated with one another, which typically is never the case.\n\nOne way to improve such an algorithm that uses Naive Bayes is by decorrelating the features so that the assumption holds true.\n\n##### 43: What are the drawbacks of a linear model?\n\nThere are a couple of drawbacks of a linear model:\n\n• A linear model holds some strong assumptions that may not be true in application. 
It assumes a linear relationship, multivariate normality, no or little multicollinearity, no auto-correlation, and homoscedasticity\n• A linear model can’t be used for discrete or binary outcomes.\n• You can’t vary the model flexibility of a linear model.\n\n##### 44: Do you think 50 small decision trees are better than a large one? Why?\n\nAnother way of asking this question is “Is a random forest a better model than a decision tree?” And the answer is yes because a random forest is an ensemble method that takes many weak decision trees to make a strong learner. Random forests are more accurate, more robust, and less prone to overfitting.\n\n##### 45: Why is mean square error a bad measure of model performance? What would you suggest instead?\n\nMean Squared Error (MSE) gives a relatively high weight to large errors — therefore, MSE tends to put too much emphasis on large deviations. A more robust alternative is MAE (mean absolute deviation).\n\n##### 46: What are the assumptions required for linear regression? What if some of these assumptions are violated?\n\nThe assumptions are as follows:\n\n1. The sample data used to fit the model is representative of the population\n2. The relationship between X and the mean of Y is linear\n3. The variance of the residual is the same for any value of X (homoscedasticity)\n4. Observations are independent of each other\n5. For any value of X, Y is normally distributed.\n\nExtreme violations of these assumptions will make the results redundant. Small violations of these assumptions will result in a greater bias or variance of the estimate.\n\n##### 47: What is collinearity and what to do with it? How to remove multicollinearity?\n\nMulticollinearity exists when an independent variable is highly correlated with another independent variable in a multiple regression equation. This can be problematic because it undermines the statistical significance of an independent variable.\n\nYou could use the Variance Inflation Factors (VIF) to determine if there is any multicollinearity between independent variables — a standard benchmark is that if the VIF is greater than 5 then multicollinearity exists.\n\n##### 48: How to check if the regression model fits the data well?\n\nThere are a couple of metrics that you can use:\n\nR-squared/Adjusted R-squared: Relative measure of fit. This was explained in a previous answer\n\nF1 Score: Evaluates the null hypothesis that all regression coefficients are equal to zero vs the alternative hypothesis that at least one doesn’t equal zero\n\nRMSE: Absolute measure of fit.\n\n##### 49: What is a decision tree?\n\nDecision trees are a popular model, used in operations research, strategic planning, and machine learning. Each square above is called a node, and the more nodes you have, the more accurate your decision tree will be (generally). The last nodes of the decision tree, where a decision is made, are called the leaves of the tree. Decision trees are intuitive and easy to build but fall short when it comes to accuracy.\n\n##### 50: What is a random forest? Why is it good?\n\nRandom forests are an ensemble learning technique that builds off of decision trees. Random forests involve creating multiple decision trees using bootstrapped datasets of the original data and randomly selecting a subset of variables at each step of the decision tree. The model then selects the mode of all of the predictions of each decision tree. 
By relying on a “majority wins” model, it reduces the risk of error from an individual tree.\n\nFor example, if we created one decision tree, the third one, it would predict 0. But if we relied on the mode of all 4 decision trees, the predicted value would be 1. This is the power of random forests.\n\nRandom forests offer several other benefits including strong performance, can model non-linear boundaries, no cross-validation needed, and gives feature importance.\n\n##### 51: What is a kernel? Explain the kernel trick\n\nA kernel is a way of computing the dot product of two vectors 𝐱x and 𝐲y in some (possibly very high dimensional) feature space, which is why kernel functions are sometimes called “generalized dot product” \n\nThe kernel trick is a method of using a linear classifier to solve a non-linear problem by transforming linearly inseparable data to linearly separable ones in a higher dimension.\n\n##### 52: Is it beneficial to perform dimensionality reduction before fitting an SVM? Why or why not?\n\nWhen the number of features is greater than the number of observations, then performing dimensionality reduction will generally improve the SVM.\n\n##### 53: What is overfitting?\n\nOverfitting is an error where the model ‘fits’ the data too well, resulting in a model with high variance and low bias. As a consequence, an overfit model will inaccurately predict new data points even though it has a high accuracy on the training data.\n\n##### 54: What is boosting?\n\nBoosting is an ensemble method to improve a model by reducing its bias and variance, ultimately converting weak learners to strong learners. The general idea is to train a weak learner and sequentially iterate and improve the model by learning from the previous learner. You can learn more about it here.\n\n## DEEP LEARNING INTERVIEW QUESTIONS\n\n##### 55. What do you mean by Deep Learning?\n\nDeep Learning is nothing but a paradigm of machine learning which has shown incredible promise in recent years. This is because of the fact that Deep Learning shows a great analogy with the functioning of the human brain.\n\n##### 56. What is the difference between machine learning and deep learning?\n\nMachine learning is a field of computer science that gives computers the ability to learn without being explicitly programmed. Machine learning can be categorised in the following three categories.\n\n1. Supervised machine learning,\n2. Unsupervised machine learning,\n3. Reinforcement learning",
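To accompany the random-forest discussion in questions 44 and 50 (and the feature-importance point from question 39), here is a small illustrative sketch using scikit-learn; the dataset and the choice of 50 trees are assumptions for demonstration only.

```python
# Hypothetical random forest sketch: ensembling plus feature importance.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# 50 bootstrapped trees, each considering a random subset of features at every split.
rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)

print("test accuracy:", rf.score(X_te, y_te))
print("three largest feature importances:",
      sorted(rf.feature_importances_, reverse=True)[:3])
```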
null,
"Deep Learning is a subfield of machine learning concerned with algorithms inspired by the structure and function of the brain called artificial neural networks.\n\n##### 57. What, in your opinion, is the reason for the popularity of Deep Learning in recent times?\n\nNow although Deep Learning has been around for many years, the major breakthroughs from these techniques came just in recent years. This is because of two main reasons:\n\n• The increase in the amount of data generated through various sources\n• The growth in hardware resources required to run these models\n\nGPUs are multiple times faster and they help us build bigger and deeper deep learning models in comparatively less time than we required previously.\n\n##### 58.what is reinforcement learning?",
null,
"Reinforcement Learning is learning what to do and how to map situations to actions. The end result is to maximise the numerical reward signal. The learner is not told which action to take but instead must discover which action will yield the maximum reward. Reinforcement learning is inspired by the learning of human beings, it is based on the reward/penalty mechanism.\n\n##### 59. What are Artificial Neural Networks?\n\nArtificial Neural networks are a specific set of algorithms that have revolutionized machine learning. They are inspired by biological neural networks. Neural Networks can adapt to changing the input so the network generates the best possible result without needing to redesign the output criteria.\n\n##### 60. Describe the structure of Artificial Neural Networks?\n\nArtificial Neural Networks works on the same principle as a biological Neural Network. It consists of inputs which get processed with weighted sums and Bias, with the help of Activation Functions.",
null,
"##### 61. How Are Weights Initialized in a Network?\n\nThere are two methods here: we can either initialize the weights to zero or assign them randomly.\n\nInitializing all weights to 0: This makes your model similar to a linear model. All the neurons and every layer perform the same operation, giving the same output and making the deep net useless.\n\nInitializing all weights randomly: Here, the weights are assigned randomly by initializing them very close to 0. It gives better accuracy to the model since every neuron performs different computations. This is the most commonly used method.\n\n1. What Is the Cost Function?\n\nAlso referred to as “loss” or “error,” cost function is a measure to evaluate how good your model’s performance is. It’s used to compute the error of the output layer during backpropagation. We push that error backwards through the neural network and use that during the different training functions.\n\n1. What Are Hyperparameters?\n\nWith neural networks, you’re usually working with hyperparameters once the data is formatted correctly. A hyperparameter is a parameter whose value is set before the learning process begins. It determines how a network is trained and the structure of the network (such as the number of hidden units, the learning rate, epochs, etc.).\n\n1. What Will Happen If the Learning Rate Is Set inaccurately (Too Low or Too High)?\n\nWhen your learning rate is too low, training of the model will progress very slowly as we are making minimal updates to the weights. It will take many updates before reaching the minimum point.\n\nIf the learning rate is set too high, this causes undesirable divergent behaviour to the loss function due to drastic updates in weights. It may fail to converge (model can give a good output) or even diverge (data is too chaotic for the network to train).\n\n##### 65. What Is the Difference Between Epoch, Batch, and Iteration in Deep Learning?\n• Epoch – Represents one iteration over the entire dataset (everything put into the training model).\n• Batch – Refers to when we cannot pass the entire dataset into the neural network at once, so we divide the dataset into several batches.\n• Iteration – if we have 10,000 images as data and a batch size of 200. then an epoch should run 50 iterations (10,000 divided by 50).\n##### 66. What Are the Different Layers on CNN?\n\nThere are four layers in CNN:\n\n1. Convolutional Layer – the layer that performs a convolutional operation, creating several smaller picture windows to go over the data.\n2. ReLU Layer – it brings non-linearity to the network and converts all the negative pixels to zero. The output is a rectified feature map.\n3. Pooling Layer – pooling is a down-sampling operation that reduces the dimensionality of the feature map.\n4. Fully Connected Layer – this layer recognizes and classifies the objects in the image.",
null,
"##### 67. What Is Pooling on CNN, and How Does It Work?\n\nPooling is used to reduce the spatial dimensions of a CNN. It performs down-sampling operations to reduce the dimensionality and creates a pooled feature map by sliding a filter matrix over the input matrix.\n\n##### 68. What are Recurrent Neural Networks(RNNs)?\n\nRNNs are a type of artificial neural networks designed to recognise the pattern from the sequence of data such as Time series, stock market and government agencies etc. To understand recurrent nets, first, you have to understand the basics of feedforward nets.\n\nBoth these networks RNN and feed-forward named after the way they channel information through a series of mathematical orations performed at the nodes of the network. One feeds information through straight(never touching the same node twice), while the other cycles it through a loop, and the latter are called recurrent.",
null,
"Recurrent networks, on the other hand, take as their input, not just the current input example they see, but also the what they have perceived previously in time.\n\nThe decision a recurrent neural network reached at time t-1 affects the decision that it will reach one moment later at time t. So recurrent networks have two sources of input, the present and the recent past, which combine to determine how they respond to new data, much as we do in life.\n\nThe error they generate will return via backpropagation and be used to adjust their weights until error can’t go any lower. Remember, the purpose of recurrent nets is to accurately classify sequential input. We rely on the backpropagation of error and gradient descent to do so.\n\n##### 69. How Does an LSTM Network Work?\n\nLong-Short-Term Memory (LSTM) is a special kind of recurrent neural network capable of learning long-term dependencies, remembering information for long periods as its default behaviour. There are three steps in an LSTM network:\n\n• Step 1: The network decides what to forget and what to remember.\n• Step 2: It selectively updates cell state values.\n• Step 3: The network decides what part of the current state makes it to the output.\n##### 70. What Is a Multi-layer Perceptron(MLP)?\n\nAs in Neural Networks, MLPs have an input layer, a hidden layer, and an output layer. It has the same structure as a single layer perceptron with one or more hidden layers. A single layer perceptron can classify only linear separable classes with binary output (0,1), but MLP can classify nonlinear classes.",
null,
"Except for the input layer, each node in the other layers uses a nonlinear activation function. This means the input layers, the data coming in, and the activation function is based upon all nodes and weights being added together, producing the output. MLP uses a supervised learning method called “backpropagation.” In backpropagation, the neural network calculates the error with the help of cost function. It propagates this error backward from where it came (adjusts the weights to train the model more accurately).\n\nA gradient measures how much the output of a function changes if you change the inputs a little bit. It simply measures the change in all weights with regard to the change in error. You can also think of a gradient as the slope of a function.\n\nGradient Descent can be thought of climbing down to the bottom of a valley, instead of climbing up a hill. This is because it is a minimization algorithm that minimizes a given function (Activation Function).",
null,
"##### 72. What are vanishing gradients?\n\nWhile training an RNN, your slope can become either too small; this makes the training difficult. When the slope is too small, the problem is known as a Vanishing Gradient. It leads to long training times, poor performance, and low accuracy.\n\n##### 73. What is Back Propagation and Explain it’s Working.\n\nBackpropagation is a training algorithm used for multilayer neural network. In this method, we move the error from an end of the network to all weights inside the network and thus allowing efficient computation of the gradient.\n\nIt has the following steps:",
null,
"### Data Scientist Masters Program\n\nWeekday / Weekend Batches\n\nSee Batch Details\n\n• Forward Propagation of Training Data\n• Derivatives are computed using output and target\n• Back Propagate for computing derivative of error wrt output activation\n• Using previously calculated derivatives for output\n• Update the Weights\n##### 74. What are the variants of Back Propagation?\n• Stochastic Gradient Descent: We use only a single training example for calculation of gradient and update parameters.\n• Batch Gradient Descent: We calculate the gradient for the whole dataset and perform the update at each iteration.\n• Mini-batch Gradient Descent: It’s one of the most popular optimization algorithms. It’s a variant of Stochastic Gradient Descent and here instead of single training example, mini-batch of samples is used.\n##### 76. What is the role of the Activation Function?\n\nThe Activation function is used to introduce non-linearity into the neural network helping it to learn more complex function. Without which the neural network would be only able to learn linear function which is a linear combination of its input data. An activation function is a function in an artificial neuron that delivers an output based on inputs.\n\n##### 78. What is an Auto-Encoder?\n\nAuto-encoders are simple learning networks that aim to transform inputs into outputs with the minimum possible error. This means that we want the output to be as close to input as possible. We add a couple of layers between the input and the output, and the sizes of these layers are smaller than the input layer. The auto-encoder receives unlabelled input which is then encoded to reconstruct the input.",
null,
"##### 79. What is a Boltzmann Machine?\n\nBoltzmann machines have a simple learning algorithm that allows them to discover interesting features that represent complex regularities in the training data. The Boltzmann machine is basically used to optimise the weights and the quantity for the given problem. The learning algorithm is very slow in networks with many layers of feature detectors. “Restricted Boltzmann Machines” algorithm has a single layer of feature detectors which makes it faster than the rest.",
null,
"1. What Is Dropout and Batch Normalization?\n\nDropout is a technique of dropping out hidden and visible units of a network randomly to prevent overfitting of data (typically dropping 20 per cent of the nodes). It doubles the number of iterations needed to converge the network.",
null,
"Batch normalization is the technique to improve the performance and stability of neural networks by normalizing the inputs in every layer so that they have mean output activation of zero and standard deviation of one.\n\n##### 82. Why Is Tensorflow the Most Preferred Library in Deep Learning?\n\nTensorflow provides both C++ and Python APIs, making it easier to work on and has a faster compilation time compared to other Deep Learning libraries like Keras and Torch. Tensorflow supports both CPU and GPU computing devices.\n\n1. What Do You Mean by Tensor in Tensorflow?\n\nA tensor is a mathematical object represented as arrays of higher dimensions. These arrays of data with different dimensions and ranks fed as input to the neural network are called “Tensors.”\n\n###",
null,
"##### 84. What is the Computational Graph?\n\nEverything in a tensorflow is based on creating a computational graph. It has a network of nodes where each node operates, Nodes represent mathematical operations, and edges represent tensors. Since data flows in the form of a graph, it is also called a “DataFlow Graph.”\n\n##### 85. What is a Generative Adversarial Network?\n\nSuppose there is a wine shop purchasing wine from dealers, which they resell later. But some dealers sell fake wine. In this case, the shop owner should be able to distinguish between fake and authentic wine.\n\nThe forger will try different techniques to sell fake wine and make sure specific techniques go past the shop owner’s check. The shop owner would probably get some feedback from wine experts that some of the wine is not original. The owner would have to improve how he determines whether a wine is fake or authentic.\n\nThe forger’s goal is to create wines that are indistinguishable from the authentic ones while the shop owner intends to tell if the wine is real or not accurately\n\nLet us understand this example with the help of an image.",
null,
"There is a noise vector coming into the forger who is generating fake wine.\n\nHere the forger acts as a Generator.\n\nThe shop owner acts as a Discriminator.\n\nThe Discriminator gets two inputs; one is the fake wine, while the other is the real authentic wine. The shop owner has to figure out whether it is real or fake.\n\nSo, there are two primary components of Generative Adversarial Network (GAN) named:\n\n1. Generator\n2. Discriminator\n\nThe generator is a CNN that keeps keys producing images and is closer in appearance to the real images while the discriminator tries to determine the difference between real and fake images The ultimate aim is to make the discriminator learn to identify real and fake images.\n\nApart from the very technical questions, your interviewer could even hit you up with a few simple ones to check your overall confidence, in the likes of the following.\n\n##### 86. What are the important skills to have in Python with regard to data analysis?\n\nThe following are some of the important skills to possess which will come handy when performing data analysis using Python.\n\n• Good understanding of the built-in data types especially lists, dictionaries, tuples, and sets.\n• Mastery of N-dimensional NumPy Arrays.\n• Mastery of Pandas dataframes.\n• Ability to perform element-wise vector and matrix operations on NumPy arrays.\n• Knowing that you should use the Anaconda distribution and the conda package manager.\n• Familiarity with Scikit-learn. **Scikit-Learn Cheat Sheet**\n• Ability to write efficient list comprehensions instead of traditional for loops.\n• Ability to write small, clean functions (important for any developer), preferably pure functions that don’t alter objects.\n• Knowing how to profile the performance of a Python script and how to optimize bottlenecks.\n\nThe following will help to tackle any problem in data analytics and machine learning.\n\n##### 87. What is a Box Cox Transformation?\n\nDependent variable for a regression analysis might not satisfy one or more assumptions of an ordinary least squares regression. The residuals could either curve as the prediction increases or follow skewed distribution. In such scenarios, it is necessary to transform the response variable so that the data meets the required assumptions. A Box cox transformation is a statistical technique to transform non-normal dependent variables into a normal shape. If the given data is not normal then most of the statistical techniques assume normality. Applying a box cox transformation means that you can run a broader number of tests.",
null,
"A Box Cox transformation is a way to transform non-normal dependent variables into a normal shape. Normality is an important assumption for many statistical techniques, if your data isn’t normal, applying a Box-Cox means that you are able to run a broader number of tests. The Box Cox transformation is named after statisticians George Box and Sir David Roxbee Cox who collaborated on a 1964 paper and developed the technique.\n\n##### 88: Explain what a false positive and a false negative are. Why is it important for each other? Provide examples when false positives are more important than false negatives, false negatives are more important than false positives and when these two types of errors are equally important\n\nA false positive is an incorrect identification of the presence of a condition when it’s absent.\n\nA false negative is an incorrect identification of the absence of a condition when it’s actually present.\n\nAn example of when false negatives are more important than false positives is when screening for cancer. It’s much worse to say that someone doesn’t have cancer when they do, instead of saying that someone does and later realizing that they don’t.\n\nThis is a subjective argument, but false positives can be worse than false negatives from a psychological point of view. For example, a false positive for winning the lottery could be a worse outcome than a false negative because people normally don’t expect to win the lottery anyways.\n\n##### 89: Do you think 50 small decision trees are better than a large one? Why?\n\nAnother way of asking this question is “Is a random forest a better model than a decision tree?” And the answer is yes because a random forest is an ensemble method that takes many weak decision trees to make a strong learner. Random forests are more accurate, more robust, and less prone to overfitting.\n\n##### 90: What is collinearity and what to do with it? How to remove multicollinearity?\n\nMulticollinearity exists when an independent variable is highly correlated with another independent variable in a multiple regression equation. This can be problematic because it undermines the statistical significance of an independent variable.\n\nYou could use the Variance Inflation Factors (VIF) to determine if there is any multicollinearity between independent variables — a standard benchmark is that if the VIF is greater than 5 then multicollinearity exists.\n\n##### 91: How to check if the regression model fits the data well?\n\nThere are a couple of metrics that you can use:\n\nR-squared/Adjusted R-squared: Relative measure of fit. This was explained in a previous answer\n\nF1 Score: Evaluates the null hypothesis that all regression coefficients are equal to zero vs the alternative hypothesis that at least one doesn’t equal zero\n\nRMSE: Absolute measure of fit.\n\n##### 92. During analysis, how do you treat missing values?\n\nThe extent of the missing values is identified after identifying the variables with missing values. If any patterns are identified the analyst has to concentrate on them as it could lead to interesting and meaningful business insights.\n\nIf there are no patterns identified, then the missing values can be substituted with mean or median values (imputation) or they can simply be ignored. Assigning a default value which can be mean, minimum or maximum value. Getting into the data is important.\n\nIf it is a categorical variable, the default value is assigned. The missing value is assigned a default value. 
If you have a distribution of data coming in, then for a normal distribution give the mean value.\n\nIf 80% of the values for a variable are missing then you can answer that you would be dropping the variable instead of treating the missing values.\n\n##### 93: Give examples of data that does not have a Gaussian distribution, nor log-normal.\n• Any type of categorical data won't have a Gaussian distribution or lognormal distribution.\n• Exponential distributions, e.g. the amount of time that a car battery lasts or the amount of time until an earthquake occurs.\n\n##### 94: What is root cause analysis? How to identify a cause vs. a correlation? Give examples\n\nRoot cause analysis: a method of problem-solving used for identifying the root cause(s) of a problem.\n\nCorrelation measures the relationship between two variables, ranging from -1 to 1. Causation is when a first event appears to have caused a second event. Causation essentially looks at direct relationships while correlation can look at both direct and indirect relationships.\n\nExample: a higher crime rate is associated with higher ice cream sales in Canada, i.e. they are positively correlated. However, this doesn't mean that one causes the other. Instead, both occur more often when it's warmer outside.\n\nYou can test for causation using hypothesis testing or A/B testing.\n\n##### 95: Give an example where the median is a better measure than the mean\n\nWhen there are a number of outliers that positively or negatively skew the data.\n\n##### 96: Given two fair dice, what is the probability of getting scores that sum to 4? to 8?\n\nThere are 3 combinations of rolling a 4 (1+3, 3+1, 2+2):\n\nP(rolling a 4) = 3/36 = 1/12\n\nThere are 5 combinations of rolling an 8 (2+6, 6+2, 3+5, 5+3, 4+4):\n\nP(rolling an 8) = 5/36\n\n##### 97. How do you calculate the needed sample size?",
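A brute-force enumeration (my own check, not part of the original answer) confirms the dice probabilities in question 96.

```python
# Hypothetical enumeration check for question 96 (two fair dice).
from itertools import product

rolls = list(product(range(1, 7), repeat=2))          # 36 equally likely outcomes
p4 = sum(a + b == 4 for a, b in rolls) / len(rolls)   # (1,3), (2,2), (3,1) -> 3/36
p8 = sum(a + b == 8 for a, b in rolls) / len(rolls)   # 5 outcomes        -> 5/36
print(p4, p8)  # 0.0833..., 0.1388...
```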
null,
"Formula for margin of error\n\nYou can use the margin of error (ME) formula to determine the desired sample size.\n\n• t/z = t/z score used to calculate the confidence interval\n• ME = the desired margin of error\n• S = sample standard deviation\n##### 98: Infection rates at a hospital above a 1 infection per 100 person-days at risk are considered high. A hospital had 10 infections over the last 1787 person-days at risk. Give the p-value of the correct one-sided test of whether the hospital is below the standard.\n\nSince we looking at the number of events (# of infections) occurring within a given timeframe, this is a Poisson distribution question.",
null,
"The probability of observing k events in an interval\n\nNull (H0): 1 infection per person-days\n\nAlternative (H1): >1 infection per person-days\n\nk (actual) = 10 infections\n\nlambda (theoretical) = (1/100)*1787\n\np = 0.032372 or 3.2372% calculated using .poisson() in excel or ppois in R\n\nSince p-value < alpha (assuming 5% level of significance), we reject the null and conclude that the hospital is below the standard.\n\n##### 99: You roll a biased coin (p(head)=0.8) five times. What’s the probability of getting three or more heads?\n\nUse the General Binomial Probability formula to answer this question:",
null,
"General Binomial Probability Formula\n\np = 0.8\n\nn = 5\n\nk = 3,4,5\n\n##### 100: A random variable X is normal with mean 1020 and a standard deviation 50. Calculate P(X>1200)\n\nUsing Excel…\n\np =1-norm.dist(1200, 1020, 50, true)\n\np= 0.000159\n\n##### 101: Consider the number of people that show up at a bus station is Poisson with mean 2.5/h. What is the probability that at most three people show up in a four hour period?\n\nx = 3\n\nmean = 2.5*4 = 10\n\nusing Excel…\n\np = poisson.dist(3,10,true)\n\np = 0.010336\n\n##### 102: An HIV test has a sensitivity of 99.7% and a specificity of 98.5%. A subject from a population of prevalence 0.1% receives a positive test result. What is the precision of the test (i.e the probability he is HIV positive)?",
null,
"Equation for Precision (PV)\n\nPrecision = Positive Predictive Value = PV\n\nPV = (0.001*0.997)/[(0.001*0.997)+((1–0.001)*(1–0.985))]\n\nPV = 0.0624 or 6.24%\n\n##### 103: You are running for office and your pollster polled hundred people. Sixty of them claimed they will vote for you. Can you relax?\n• Assume that there’s only you and one other opponent.\n• Also, assume that we want a 95% confidence interval. This gives us a z-score of 1.96.",
null,
"Confidence interval formula\n\np-hat = 60/100 = 0.6\n\nz* = 1.96\n\nn = 100\n\nThis gives us a confidence interval of [50.4,69.6]. Therefore, given a confidence interval of 95%, if you are okay with the worst scenario of tying then you can relax. Otherwise, you cannot relax until you got 61 out of 100 to claim yes.\n\n##### 104: Geiger counter records 100 radioactive decays in 5 minutes. Find an approximate 95% interval for the number of decays per hour.\n• Since this is a Poisson distribution question, mean = lambda = variance, which also means that standard deviation = square root of the mean\n• a 95% confidence interval implies a z score of 1.96\n• one standard deviation = 10\n\nTherefore the confidence interval = 100 +/- 19.6 = [964.8, 1435.2]\n\n##### 105: Consider influenza epidemics for two-parent heterosexual families. Suppose that the probability is 17% that at least one of the parents has contracted the disease. The probability that the father has contracted influenza is 12% while the probability that both the mother and father have contracted the disease is 6%. What is the probability that the mother has contracted influenza?\n\nUsing the General Addition Rule in probability:\n\nP(mother or father) = P(mother) + P(father) — P(mother and father)\n\nP(mother) = P(mother or father) + P(mother and father) — P(father)\n\nP(mother) = 0.17 + 0.06–0.12\n\nP(mother) = 0.11\n\n##### 106: Suppose that diastolic blood pressures (DBPs) for men aged 35–44 are normally distributed with a mean of 80 (mm Hg) and a standard deviation of 10. About what is the probability that a random 35–44 year old has a DBP less than 70?\n\nSince 70 is one standard deviation below the mean, take the area of the Gaussian distribution to the left of one standard deviation.\n\n= 2.3 + 13.6 = 15.9%\n\n##### 107: In a population of interest, a sample of 9 men yielded a sample average brain volume of 1,100cc and a standard deviation of 30cc. What is a 95% Student’s T confidence interval for the mean brain volume in this new population?",
null,
"Confidence interval for sample\n\nGiven a confidence level of 95% and degrees of freedom equal to 8, the t-score = 2.306\n\nConfidence interval = 1100 +/- 2.306*(30/3)\n\nConfidence interval = [1076.94, 1123.06]\n\n##### 108: A diet pill is given to 9 subjects over six weeks. The average difference in weight (follow up — baseline) is -2 pounds. What would the standard deviation of the difference in weight have to be for the upper endpoint of the 95% T confidence interval to touch 0?\n\nUpper bound = mean + t-score*(standard deviation/sqrt(sample size))\n\n0 = -2 + 2.306*(s/3)\n\n2 = 2.306 * s / 3\n\ns = 2.601903\n\nTherefore the standard deviation would have to be at least approximately 2.60 for the upper bound of the 95% T confidence interval to touch 0.\n\n##### 109: In a study of emergency room waiting times, investigators consider a new and the standard triage systems. To test the systems, administrators selected 20 nights and randomly assigned the new triage system to be used on 10 nights and the standard system on the remaining 10 nights. They calculated the nightly median waiting time (MWT) to see a physician. The average MWT for the new system was 3 hours with a variance of 0.60 while the average MWT for the old system was 5 hours with a variance of 0.68. Consider the 95% confidence interval estimate for the differences of the mean MWT associated with the new system. Assume a constant variance. What is the interval? Subtract in this order (New System — Old System).\n\nSee here for full tutorial on finding the Confidence Interval for Two Independent Samples.",
null,
"Confidence Interval = mean +/- t-score * standard error (see above)\n\nmean = new mean — old mean = 3–5 = -2\n\nt-score = 2.101 given df=18 (20–2) and confidence interval of 95%",
null,
"standard error = sqrt((0.⁶²*9+0.⁶⁸²*9)/(10+10–2)) * sqrt(1/10+1/10)\n\nstandard error = 0.352\n\nconfidence interval = [-2.75, -1.25]\n\nThis exhaustive list is sure to strengthen your preparation for data science interview questions.\n\n## Stay Sharp with Our Data Science Interview Questions\n\nFor data scientists, the work isn’t easy, but it’s rewarding and there are plenty of available positions out there. These data science interview questions can help you get one step closer to your dream job. So, prepare yourself for the rigors of interviewing and stay sharp with the nuts and bolts of data science.",
null,
""
] | [
null,
"https://secureservercdn.net/160.153.138.177/95p.0b0.myftpupload.com/wp-content/uploads/2020/06/Data-Science-interview-questions-1-1.png",
null,
"https://lh5.googleusercontent.com/0ttL6lYwv7sl-3ORHRhsmfWKnEfbAfniFHQ4qX8KqdWZxRzHqQTloW8PXAhI94NrTUiEVUKPdBasR_CLFY8BZWv1ZP5knA2Rm4YOp8yvU3tBrBP5-zDeInVL7w2HvrGKDyUijFhebJZX5e0l5A",
null,
"https://lh4.googleusercontent.com/JX8JCNHXxIM9xcoEI3v1qPvHZ5Qlk9kVngtUFYL7Ntajy1jII1W513pLefRqv659Uy8h49BC4NvP17ChpFWSqghgDIDRw1gRWjDEpkMJGrHCTwWGmQjpE5j9Q759alqUtQ8fElR38-M6jDvwng",
null,
"https://lh4.googleusercontent.com/h4-AybIkE6bDAv_-eXrd9LqyJ6N15ruqyH2p0LUQAaxeq8JccGfXK5_cBWsrdHOQ20YBhIpX3C6qmu-oahiw6_0tRykRQLWZQTaPh6nFTaMzQXthXt1sh_KJ7gk9-6jR8w-kWeKifqsmEUAhmQ",
null,
"https://lh3.googleusercontent.com/vmYzkp_h4cdXo5hE_oCd0Av2GPnl9muNAiFEiSEPAftXdMqyV-V1ZqNKjz3fxQmZebEkHv_9cZjT5itjWaQUGeerYljqY2rzg24PXDTdTYwC3T6mQ1XcBZ66BzK5ZpwJnZfj7Ca0eDl1et_NFQ",
null,
"https://lh4.googleusercontent.com/6trrDfhkSFU8Y__Md6tcv3NIoLDGExI1hpG3h-5-ODalU66DfI-_zpQDlRbVTAbV2tGkUqLW6r4WKqEbnctShoSXD2KvYQQxKGcs4FB7jDPZu2jr02OFBkl8Mzoj08rwm9PfFTeUkEuYVAmx4A",
null,
"https://lh5.googleusercontent.com/CHwLrGLVqgfc7UwIjK9GX72WrQEbw6hgCFt1-O_EgU1bwQeAKkrYHjeJasxL9zp3yJpoGRybnIRqvpfgz8o9Nccer7le_gLb3UcbWrFOuucRkwA26k7NZCQaLjyPGVNH0IDRJAP4L_I5yjOYGw",
null,
"https://lh4.googleusercontent.com/RGiTRcqu1cW1Z5w-Rtv_GftzSw9bUlsL4odPiDw5-6bzs4k_J-jKPr87mkA1YVuZlx8LawxxTroKmCSngr-z891MLulTav2wln2a8RWYnMExCJigFjrVrVAj8vTUuj_JR6RfV6eedAynaCLC5g",
null,
"https://lh6.googleusercontent.com/cYqmhkaw9pHgd3FHKD9KycnmTEasJtY9wSUBbEWotnQdavyIcOlm9LjAbCx-5tOFmMD4fz613yACvtxLloRLDzpLtANJkZebCgDGn6UGCbIEar1n_bjeDhdVzRYMtm-IAT-Y6I_k_gjIOfxBYg",
null,
"https://lh3.googleusercontent.com/ivh-P2ZnpAYxynH6i9JWKyPlJsnnEaXHUKT3uTbMeGalA3mbx_wR61SqXfy8pwbYi54tqZKBTKOU7tnEnCBVaS6AqdC9ltVchIt0yTr_0EfwTwTWwkH9ct4LXR9i2y_M609dc55hZhxk9hD19w",
null,
"https://lh6.googleusercontent.com/V_dhTyn8WQdRctt7osU-EWoh7rJy5V3ge3WWOJZFovuvRgIoAYhnMY-Y14UukREXEI57DrG_2ZkgPUbBi-lbA7blFlISxSg-3jkdK9I3x6pD9fS7ReL9GCK86xeYFMhkH7Nq7vWvMfBAshfktQ",
null,
"https://lh4.googleusercontent.com/gK8cEbMh9x0z9eCykaI6qoUwaYFIJoAGcgPhMO0N3hCQcPize9ADptGops1GKRJV2MEu-l6EID2twi_BzuTcf8oLZd1ceEu-bq0JBmb3x3gwaajVUJR9jYjJymSQfUlC5h5BPrMmFbqeqZ-wgg",
null,
"https://lh6.googleusercontent.com/xmMh9btR_9h4pH49vuRV7lnp9lEOZLzzBdXqmLBqroVEf6spIIkFBaooJPKZn-reONep2EUx00CLn954-Dw5-8NlZLW1puhdp5ktTglHGjitHHZvXMcGv79YEEGnMjK-uMghRCkRqA3L_260rg",
null,
"https://lh3.googleusercontent.com/glhZBbE8WZRH_k_qAhw3-VQkh5w78ibXSTakwtvJtvNcfYIiEl5JgldgYLIMpu5_K31OUuuHzUannJFJaD0gqxQfBQ5h5J6v7eTN-lp5bo0xc96MxeTlr_iujD_rCYDwXfnizP8TvFiGJLY4kw",
null,
"https://lh4.googleusercontent.com/3dHIZ0WuWpNIzGD224NsAyYXLwDw8tF0RXDnnmD-LQIh-WBK_7O25wD82c6UbjX9KD3r6OHPTtxAjPxIg-VwlkisVMcXBcj54pq5QpxJyLSA0JEJioUqiJmz4clGb95EzK5fGD9kR78pwsC0DA",
null,
"https://lh3.googleusercontent.com/bCbNgwWboY8pROIu0qTLm79fSvZhAb6YeGBOn1YABECgs8R1tMdxbZQiY-Ajdoi82gYc6NqarAFY9zZI_wHvoNN9n78FYwm4CQC_248G6CMe28JO2QrlW0pr-A2p1KdPO7J8esWmjptrBK96hQ",
null,
"https://lh5.googleusercontent.com/URUFA29ON21TOLFBV1L0P-fcsNila385O-UD0-BJANmd2iA2oeEB788Q6LZkaB3FqeXN2wwyoLWkerHuBep4WyoAaZsyapUnwlZlXk1qrmF4koQ92ExB1uu-rwoOsGcdF4Dmtt5X1XIDkS_Cxw",
null,
"https://lh4.googleusercontent.com/8267jfYSNw5rxJITZsH2QcC6NWi0mQWskxnir56Ol6YYMXn5RLsenkHkVBwTm1y3mC-_Cdonoy5w4uoqmBkmNGIKtQuquqZQ65ZrPyY1lTCaQO-Ymgkte4oCqRYAkd0MVkuulvVGx8voF6AXUw",
null,
"https://lh3.googleusercontent.com/20T4WrqhXiZsgx9C9fkHl51gjAYYgdA31Ppuk9PkIY-7WFhsOj4Cd7n7ZDX8VEd6dCuU9S8V66OJSYP6V5mBfdcejEgCnS-aao8vF3oT2WpxeRxDV3DwPz-KaGrTp1t84ShJJAIFo7xWWZri9Q",
null,
"https://lh6.googleusercontent.com/rEH76qXPBDAL8OCheWkwHkktNPgZLRFAZT9SFDpQ46pm7bs4dhgmdrEpUgdjSBi_6JdejaT9jKq4AVpedn-NVbjk_EdlNzP6j4lt_aESffJDXg8bZ7o_eU67Xkq8gHYK9QifugV3yCoNNVS9gw",
null,
"https://lh5.googleusercontent.com/pmnsEUi-HslNIgo3Um0U-KdgDiJy2p5OVQxuvMbeVzDOVZBODf756Xf5vqSolyBvAwebY-JysOZx-R8_m7lkSIteGf8evq889eeBQ_QcOoGrPbCT-RURPvPaQ6rMEfskiveWhOYKQ3yNUbe0WQ",
null,
"https://lh4.googleusercontent.com/vCPqXH5z3gn6Yuysy4lG6nLcz0raznsR6OAugVHO3paLdD1A9cZBPk6_0JH7euvS5W9b9vqDxhQADlzoQXnlvFJZQrgaomSh5_eiue6cHI-tZCA7smc2g4lGoqeZLpFaz-i0hfqkxMb_Q7jGwQ",
null,
"https://lh5.googleusercontent.com/FPg5ODM_8OdJSpOw0ZcozCjNSzzjIsCDjQKJNoQfUeOkgbGCM548zPe_Nu0qS8aQHUnjfWpFztLt6tzmkrJ9EMn_Y_YCsr6w20Uphd5lupRIvDdxUU1O8klH0Kub4b4j54THy8QHTk3xhUFeEQ",
null,
"https://lh3.googleusercontent.com/_TnL9mr-upcmhFAqB95PAyiU-en5G0kF2VnrcyRToqcLBvsy9Mq8A7XqPCPYj4Ww-aO9My5hJ-al-uKXcfWI5PXsN1pqFce02ngDzXfkVl24CzA7lL4D5f-au6T-Z4fvGv047Z4zrjw_XhKDdw",
null,
"https://lh6.googleusercontent.com/QOaN97JbhX3l-OD5jAEhsm0nkEoikKwYfMyCNGtAEusgW9gv6swlWifxCEYdngTu3RYUaY5VxyelOXRwSI4Dl42J23_GB0sX1nsbEf7bcSEuxnpW21sJaIfm2FCBJ8nwCDx5WkbnBjhpvJAsQA",
null,
"https://lh6.googleusercontent.com/js_xq9lwwsVRwje7auWPTHZPXKfvg1cZ0NiPZmh0JBexsdVaPpe_7ubOFTeDQmhVVPN6KFwiUNiGAwwflX1b5ZFEEXDgDmtdAgQLlaBK-ZrE2Wu2tT0d6lCe_j3MPk2u1E0jGzevyyDlyuy0hg",
null,
"https://lh5.googleusercontent.com/7Wk7UWUKoXK0zX2KIdzswnanrHyekKVRp4kmI9mygKQlshavuJxzf95ab_iFiTQhoqaWvBdAWz8XXWK6cX7d8bunRTf68BfyuJVJYUe93J9gtNmrp6jwyTGkMV4-SkIJfrl78tzaq8D9iois2g",
null,
"https://lh6.googleusercontent.com/yZXj4_VM8ja5teTrjDfqCWwMMH-k6sH4tcoSxA7jfWQHsRwppsOuMhCUM3UNNPzJYHYKtw5rOF3qyEL7_X2BtKyHi3oPbal_KIMFbao7z-ez_nOCF46CquPPHBtysEkb3epIvBO7vq_LsfQs_g",
null,
"https://lh5.googleusercontent.com/2bQ40d404crMkmSNjkcJMWtSXWP8BvWJJBdjMJF_thdv1q9fGdOXs_ivJ39f7oimbE9ewHOZ1PLXqb89MHxtEVihZHcBtHLEtC-x76KrkUcUHjFDdx-lCXZLyLbDPQ2bxTsGL2ljPi4ZMQjG1A",
null,
"https://lh6.googleusercontent.com/129kceiv7KEbhQnIX8B-zEKYt0mWWl5cOAHbyT09bU7hKHG66PGqWPS3lYrNk0g4TkZDMh1YfIZis7KvB4Tnf14ekxfyH5XhscmcX49eM2MJrC3lQzDwsapurujjBB6r_XdvB__7MBy8Uhmufg",
null,
"https://lh3.googleusercontent.com/qYaquW6MD_x7ep4SUQTd3jMU5eaDWQlmsGSclXMi3cKXj26vp3AWZjINYiU4y7vC3NKbfzvoE-wg9w8MGw4pAiPV1WnjBbDwjSVMJFSN8AKOura1WXxMPfQNMy1xNklYQlIj6xPJn1GPbg-_PQ",
null,
"https://lh5.googleusercontent.com/8Z0cMg70NalJiftDmjCfCMWx5dQjbYQo5FgRS5MI49A4FohipQ8tQO3L27SztxNJjBLR42lVbbf0CPTvJ_CV7DrwltIP8-Q5ExhCWBKKOjREBIn7_d-JTmp6e2eKEGnbgFUgmvM25HZqKWI36Q",
null,
"https://lh6.googleusercontent.com/8HAtNB7ub9WRpENCZvsiTDxtLF-pt8b9CK_RVO175VOJ1jsiB5an6Wtm-BwOFVFybIaHVPuO7H7EL1xazyoGo4hVzbvYfEc40EAsK1gTEQ4KWzhFMDZKiYAO9C2-MyB9GRIUoj9Nhrx6QDz5CA",
null,
"https://lh5.googleusercontent.com/IKJ7T90wKsFj9CZtYUempSsYSBKppksYJ9TmLBkj0mBvjhSZo2JCGpajTPqm1MDbMD8JVJjExcSX7aPH7TdOZgnuiTjEbwbid1S8Do7oTgen-lrnzRGdtHqcGymMpWmrD1M7YKk2J3Z2nfMZsw",
null,
"https://lh3.googleusercontent.com/CHQWaUfRsljVUPT778YB0hjfzcPag1zH-AnddNJ1mOD5YITVelJWJQt5fDMrKdIVeDXHRI3OklFglulWzuqeAj7YHoeNGosxLD8l8Yfjgga6_qn8SoRX57rJN_wt2emVU8elNuFs5rjlamz64w",
null,
"https://lh6.googleusercontent.com/cA1Ua7qU-CYdE4zebm_Xb6SR-xfvH7b9VQhx2PI5_njZ3JVY5XbKJiSWdZJSw63EaZEuFfWTWI7NG8iLjQgCyIOJv7yCH7kaWkO0VEN69R9DQ6f_ktkeCrdga0Hix5WfwoMYTTbvI4Pn_ebrjA",
null,
"https://lh6.googleusercontent.com/SaBpHYTsTpkav7d5uO-y3N5QarSz_FBsF0X2V9mM5wltwcKLtkgyLgLLrsRol3xmGan7TJzBcJMUDkPv-DCCvPK4TXpO52sgmDX2pwi5MIlOJBlXul8d_xYBHZmj4fO4mDMbu3Py2PRlPZASwQ",
null,
"https://lh3.googleusercontent.com/md7gxp8OI3Mcb_63pv8Zi16WD51VtY1p2e8kniciVtF2qjkwQRSj5mBD7dG_8QJFwmLkqjJ22vjZE1ZiKd6fIrVQcMSUSbdwqfFcIUmogggBL_AOGGmLxeGzVOjpqxIrnbzwtiZ6ozlM9V1b5w",
null,
"https://lh6.googleusercontent.com/W95vs75ViWCBCFmwKd0kgJyuO3NThsxpqrdo7Px9tXRtbC-oEZYCtppUv0tKi-4g1hT6NxjGXzmyizkHy7EB-jY_8CuOrOzzQGnolHDF6W-foVIOEeeJsWi8mAzaCnTkhca3mmSfzVhFV9tftw",
null,
"https://lh4.googleusercontent.com/qnlz4iutsQoBxEIzJGRETQxjisIXjAWdV71imXZDCRBRcIGF13me1-_r_HxKjREamf-6XSrHoyUHUOfYkb4AS6F8-fYfzj09ngfxWxFk-D0vcV_AwAcnDwcryiQU7bOLEST-Q_THSZjmZFpl2g",
null,
"https://lh5.googleusercontent.com/9J7WpT2szyS_R2IeKEVRQKnMoV0cNjQAsaMNL3qgr0NEXMgHq3yBSmiaFhVDJhYsT9uqsrLaTyYHH_0z_JE8G7syZmD37xmd81cdQkKMPYc3zfX4HLsYbZYYqkFroWIa35Ine_pJSfhrn_TiCw",
null,
"https://lh6.googleusercontent.com/08Xiu4itxfbpW2NP3HDcYBMyPVgNqtlC2mgZAn2bDSEXBZV4HFX9wtBkaXXlNhPFS5C_TEJeoWbuvxiROCuaFa6PYbn4At8InumaUzUNbF2zrAZg3BcNDDKPLNYy7pwaP1bjHHLZRW3gTuSBkA",
null,
"https://lh5.googleusercontent.com/tp8R4rCMRcmYCIkkz_M5K8XlysaFnmqEXQMS6RRxpuGfX9PQUWyFW4Cbn3U4js_hOsW-H8Hi0AWeV4RzJdyNz1ijLJVIvAEXX5ODo0i2Oxu0BQ-Wm73_pPi7MPlSRddonEYKGHtw0MJJ_BLoNw",
null,
"https://lh5.googleusercontent.com/4qfLNrpCQ3gETb3CQBU3Cl1b_JY5nw3e2BQiXVU6CQiPBEMD0RBUdUr4bhGa1yauUzop7Xv6aYNZ5caGEhZD4GylxxqDyUNhmP8eN5coNmOAHnxp83rMIF9FPFfvb2hQgdxDWrWhjmSE7QV8-w",
null,
"https://secureservercdn.net/160.153.138.177/95p.0b0.myftpupload.com/wp-content/plugins/click-to-chat-for-whatsapp/./new/inc/assets/img/whatsapp-logo.svg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.916818,"math_prob":0.9448233,"size":43433,"snap":"2020-45-2020-50","text_gpt3_token_len":9083,"char_repetition_ratio":0.12388036,"word_repetition_ratio":0.0914405,"special_character_ratio":0.20659406,"punctuation_ratio":0.08987232,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.99252623,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90],"im_url_duplicate_count":[null,2,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,3,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-11-24T04:05:28Z\",\"WARC-Record-ID\":\"<urn:uuid:0c7570bf-133e-4482-b8b5-7f03d238da97>\",\"Content-Length\":\"198042\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:cd25928f-0a35-4f6d-9f44-65e2b9c4f86c>\",\"WARC-Concurrent-To\":\"<urn:uuid:3ed0e85f-1de2-44cd-8efa-47dc964e5cad>\",\"WARC-IP-Address\":\"172.67.155.184\",\"WARC-Target-URI\":\"https://itcources.com/over-100-data-scientist-interview-questions-and-answers/\",\"WARC-Payload-Digest\":\"sha1:XHNQRZ3MHA37WUFKP7UF4FLFZS6XBYSI\",\"WARC-Block-Digest\":\"sha1:5R2FKPVSOBBOXTYUQ7KPRJG6VZY6ODBE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-50/CC-MAIN-2020-50_segments_1606141171077.4_warc_CC-MAIN-20201124025131-20201124055131-00686.warc.gz\"}"} |
https://www.brainkart.com/article/Electrostatics---frictional-electricity_523/ | [
"Home | | Physics | | Physics | Electrostatics - frictional electricity\n\nElectrostatics - frictional electricity",
null,
"The substances which acquire charges on rubbing are said to be 'electrified' or charged. These terms are derived from the Greek word elektron, meaning amber. The electricity produced by friction is called frictional electricity. If the charges in a body do not move, then, the frictional electricity is also known as Static Electricity.\n\nElectrostatics\n\nElectrostatics is the branch of Physics, which deals with static electric charges or charges at rest. In this chapter, we shall study the basic phenomena about static electric charges. The charges in a electrostatic field are analogous to masses in a gravitational field. These charges have forces acting on them and hence possess potential energy. The ideas are widely used in many branches of electricity and in the theory of atom.\n\nElectrostatics - frictional electricity\n\nIn 600 B.C., Thales, a Greek Philosopher observed that, when a piece of amber is rubbed with fur, it acquires the property of attracting light objects like bits of paper. In the 17th century, William Gilbert discovered that, glass, ebonite etc, also exhibit this property, when rubbed with suitable materials.\n\nThe substances which acquire charges on rubbing are said to be 'electrified' or charged. These terms are derived from the Greek word elektron, meaning amber. The electricity produced by friction is called frictional electricity. If the charges in a body do not move, then, the frictional electricity is also known as Static Electricity.\n\n1. Two kinds of charges\n\n1. If a glass rod is rubbed with a silk cloth, it acquires positive charge while the silk cloth acquires an equal amount of negative charge.\n\n2. If an ebonite rod is rubbed with fur, it becomes negatively charged, while the fur acquires equal amount of positive charge. This classification of positive and negative charges were termed by American scientist, Benjamin Franklin.\n\nThus, charging a rod by rubbing does not create electricity, but simply transfers or redistributes the charges in a material.\n\n2. Like charges repel and unlike charges attract each other - experimental verification.\n\nA charged glass rod is suspended by a silk thread, such that it swings horizontally. Now another charged glass rod is brought near the end of the suspended glass rod. It is found that the ends of the two rods repel each other as diagram 1. However, if a charged ebonite rod is brought near the end of the suspended rod, the two rods attract each other as diagram 2. The above experiment shows that like charges repel and unlike charges attract each other.",
null,
"The property of attraction and repulsion between charged bodies have many applications such as electrostatic paint spraying, powder coating, fly?ash collection in chimneys, ink-jet printing and photostat copying (Xerox) etc.\n\n3. Conductors and Insulators\n\nAccording to the electrostatic behaviour, materials are divided into two categories : conductors and insulators (dielectrics). Bodies which allow the charges to pass through are called conductors. e.g. metals, human body, Earth etc. Bodies which do not allow the charges to pass through are called insulators. e.g. glass, mica, ebonite, plastic etc.\n\n4. Basic properties of electric charge\n\nQuantisation of electric charge\n\nThe fundamental unit of electric charge (e) is the charge carried by the electron and its unit is coulomb. e has the magnitude 1.6 x 10-19 C.\n\nIn nature, the electric charge of any system is always an integral multiple of the least amount of charge. It means that the quantity can take only one of the discrete set of values. The charge, q = ne where n is an integer.\n\nConservation of electric charge\n\nElectric charges can neither be created nor destroyed. According to the law of conservation of electric charge, the total charge in an isolated system always remains constant. But the charges can be transferred from one part of the system to another, such that the total charge always remains conserved. For example, Uranium (92U238) can decay by emitting an alpha particle (2He4 nucleus) and transforming to thorium (90Th234).\n\n92U238 --- > 90Th234 + 2He4\n\nTotal charge before decay = +92e, total charge after decay = 90e + 2e. Hence, the total charge is conserved. i.e. it remains constant.\n\nAdditive nature of charge\n\nThe total electric charge of a system is equal to the algebraic sum of electric charges located in the system. For example, if two charged bodies of charges +2q, -5q are brought in contact, the total charge of the system is -3q.\n\n5. Coulomb's law\n\nThe force between two charged bodies was studied by Coulomb in 1785.\n\nCoulomb's law states that the force of attraction or repulsion between two point charges is directly proportional to the product of the charges and inversely proportional to the square of the distance between them. The direction of forces is along the line joining the two point charges.\n\nLet q1 and q2 be two point charges placed in air or vacuum at a distance r apart . Then, according to Coulomb's law,",
null,
"where k is a constant of proportionality. In air or vacuum,\n\nk = 1/4??o , where ?o is the permittivity of free space (i.e., vacuum) and the value of ?o is 8.854 x 10?12 C2 N?1 m?2.",
null,
"One Coulomb is defined as the quantity of charge, which when placed at a distance of 1 metre in air or vacuum from an equal and similar charge, experiences a repulsive force of 9 x 109 N.\n\nIf the charges are situated in a medium of permittivity ?, then the magnitude of the force between them will be,",
null,
"Coulombxs law x vector form",
null,
"So, the forces exerted by charges on each other are equal in magnitude and opposite in direction.\n\n6. Principle of Superposition\n\nThe principle of superposition is to calculate the electric force experienced by a charge q1 due to other charges q2, q3 ..... qn.\n\nThe total force on a given charge is the vector sum of the forces exerted on it due to all other charges.",
null,
"Study Material, Lecturing Notes, Assignment, Reference, Wiki description explanation, brief detail\n\nRelated Topics"
] | [
null,
"http://www.brainkart.com/media/article/articlemYPcMelectro.png",
null,
"https://www.brainkart.com/media/extra/nfaBttX.png",
null,
"https://www.brainkart.com/media/extra/Aqyg3vx.jpg",
null,
"https://www.brainkart.com/media/extra/T6VX2KC.jpg",
null,
"https://www.brainkart.com/media/extra/YSoJYzm.jpg",
null,
"https://www.brainkart.com/media/extra/oYe6ebT.jpg",
null,
"https://www.brainkart.com/media/extra/COoHDLe.jpg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9364374,"math_prob":0.9795001,"size":5602,"snap":"2019-43-2019-47","text_gpt3_token_len":1255,"char_repetition_ratio":0.14826724,"word_repetition_ratio":0.02768903,"special_character_ratio":0.21670832,"punctuation_ratio":0.13235295,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9861716,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14],"im_url_duplicate_count":[null,3,null,1,null,2,null,2,null,2,null,2,null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-10-23T05:53:54Z\",\"WARC-Record-ID\":\"<urn:uuid:47619755-66af-474c-ac25-418c5d3d66ee>\",\"Content-Length\":\"62513\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8efc1870-1ab2-4ff0-940b-ecc8966aab5f>\",\"WARC-Concurrent-To\":\"<urn:uuid:9d74ecf0-a61f-449c-be8a-c4b040db48db>\",\"WARC-IP-Address\":\"148.66.135.100\",\"WARC-Target-URI\":\"https://www.brainkart.com/article/Electrostatics---frictional-electricity_523/\",\"WARC-Payload-Digest\":\"sha1:FR2HPQOEMKFB3JPFJ3W6UN3W2FXCPUIP\",\"WARC-Block-Digest\":\"sha1:WNWESJYBC4GNTTKPSJPBKL4UEKEUDGAL\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-43/CC-MAIN-2019-43_segments_1570987829458.93_warc_CC-MAIN-20191023043257-20191023070757-00521.warc.gz\"}"} |
https://www.coursehero.com/file/23765302/Tutorial-9/ | [
"# Consider the function f x = x 2 - x - ln x Find a the...\n\n• 1\n\nThis preview shows page 1 out of 1 page.\n\nTutorial 9\n\n1 . Consider the function f ( x ) = x 2 - x - ln x . Find ( a ) the intervals on which f is increasing or decreasing . ( b ) the local maximum and minimum values of f . ( c ) the intervals of concavity and the inflection points . 2 . Prove that ( a ) sin x < x , for 0 < x < 2 π . ( b ) tan x > x , for 0 < x < π 2 . ( c ) e x > 1 + x > ln ( 1 + x ) , for x > 0 . 3 . The function f ( x ) = x 3 + ax 2 + bx has the local minimum value f ( 1 √ 2 √ 3 9 . What are the values of a and b ? 3 ) = - 4 . Evaluate the limits . Use L’Hospital ’s Rul where applicable . If L’Hospital ’s Rule does not apply , explain why . x 7 - 1\n( a ) lim x → 1 x 4 - 1 . e x - 1 - x sin x 2 . ( b ) lim x → 0 1 1 ( c ) lim x → 0 + x - e x - 1 . 3 e - x 2 . ( d ) lim x → ∞ x √ x ( e ) lim x → 0 + x . ( f ) lim x → 1 + x 1 1 - x . 1 ( g ) lim x x . → ∞ x\nMATH 137 Tutorial 9 Instructor: Liu, J. March 14, 2017 Sections 4.3–4.5: I/D, Derivative, and Concavity Tests; Indeterminate Forms; Curve Sketching 1. Consider the function f ( x ) = x 2 - x - ln x . Find (a) the intervals on which f is increasing or decreasing. (b) the local maximum and minimum values of f . (c) the intervals of concavity and the inflection points. 2. Prove that (a) sin x < x , for 0 < x < 2 π . (b) tan x > x , for 0 < x < π 2 . (c) e x > 1 + x > ln ( 1 + x ) , for x > 0. 3. The function f ( x ) = x 3 + ax 2 + bx has the local minimum value\n•",
null,
"•",
null,
"•",
null,
""
] | [
null,
"https://www.coursehero.com/assets/img/doc-landing/start-quote.svg",
null,
"https://www.coursehero.com/assets/img/doc-landing/start-quote.svg",
null,
"https://www.coursehero.com/assets/img/doc-landing/start-quote.svg",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5830609,"math_prob":0.99996877,"size":564,"snap":"2021-43-2021-49","text_gpt3_token_len":233,"char_repetition_ratio":0.10535714,"word_repetition_ratio":0.08955224,"special_character_ratio":0.40248227,"punctuation_ratio":0.1764706,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99937564,"pos_list":[0,1,2,3,4,5,6],"im_url_duplicate_count":[null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-12-01T13:39:39Z\",\"WARC-Record-ID\":\"<urn:uuid:edd1593f-4d39-437d-b01b-14cf2feec759>\",\"Content-Length\":\"237445\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:fb7e4a9c-41b1-4e73-95ba-183577c11fad>\",\"WARC-Concurrent-To\":\"<urn:uuid:a9d73917-fcef-4d3f-bdc0-73f4d10bb188>\",\"WARC-IP-Address\":\"104.17.92.47\",\"WARC-Target-URI\":\"https://www.coursehero.com/file/23765302/Tutorial-9/\",\"WARC-Payload-Digest\":\"sha1:3P5CQANWX2XYIYF7EE4TURJWIU2KOOJY\",\"WARC-Block-Digest\":\"sha1:3JXXZREOZIGS2PWI74UCLZ444BZHUCCC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-49/CC-MAIN-2021-49_segments_1637964360803.0_warc_CC-MAIN-20211201113241-20211201143241-00265.warc.gz\"}"} |
https://postgis.net/docs/manual-3.4/en/AddGeometryColumn.html | [
"## Synopsis\n\n`text AddGeometryColumn(`varchar table_name, varchar column_name, integer srid, varchar type, integer dimension, boolean use_typmod=true`)`;\n\n`text AddGeometryColumn(`varchar schema_name, varchar table_name, varchar column_name, integer srid, varchar type, integer dimension, boolean use_typmod=true`)`;\n\n`text AddGeometryColumn(`varchar catalog_name, varchar schema_name, varchar table_name, varchar column_name, integer srid, varchar type, integer dimension, boolean use_typmod=true`)`;\n\n## Description\n\nAdds a geometry column to an existing table of attributes. The `schema_name` is the name of the table schema. The `srid` must be an integer value reference to an entry in the SPATIAL_REF_SYS table. The `type` must be a string corresponding to the geometry type, eg, 'POLYGON' or 'MULTILINESTRING' . An error is thrown if the schemaname doesn't exist (or not visible in the current search_path) or the specified SRID, geometry type, or dimension is invalid.",
null,
"Changed: 2.0.0 This function no longer updates geometry_columns since geometry_columns is a view that reads from system catalogs. It by default also does not create constraints, but instead uses the built in type modifier behavior of PostgreSQL. So for example building a wgs84 POINT column with this function is now equivalent to: `ALTER TABLE some_table ADD COLUMN geom geometry(Point,4326);` Changed: 2.0.0 If you require the old behavior of constraints use the default `use_typmod`, but set it to false.",
null,
"Changed: 2.0.0 Views can no longer be manually registered in geometry_columns, however views built against geometry typmod tables geometries and used without wrapper functions will register themselves correctly because they inherit the typmod behavior of their parent table column. Views that use geometry functions that output other geometries will need to be cast to typmod geometries for these view geometry columns to be registered correctly in geometry_columns. Refer to Section 4.6.3, “Manually Registering Geometry Columns”.",
null,
"This method implements the OGC Simple Features Implementation Specification for SQL 1.1.",
null,
"This function supports 3d and will not drop the z-index.",
null,
"This method supports Circular Strings and Curves.\n\nEnhanced: 2.0.0 use_typmod argument introduced. Defaults to creating typmod geometry column instead of constraint-based.\n\n## Examples\n\n```-- Create schema to hold data\nCREATE SCHEMA my_schema;\n-- Create a new simple PostgreSQL table\nCREATE TABLE my_schema.my_spatial_table (id serial);\n\n-- Describing the table shows a simple table with a single \"id\" column.\npostgis=# \\d my_schema.my_spatial_table\nTable \"my_schema.my_spatial_table\"\nColumn | Type | Modifiers\n--------+---------+-------------------------------------------------------------------------\nid | integer | not null default nextval('my_schema.my_spatial_table_id_seq'::regclass)\n\n-- Add a spatial column to the table\n\n-- Add a point using the old constraint based behavior\n\n--Add a curvepolygon using old constraint behavior\n\n-- Describe the table again reveals the addition of a new geometry columns.\n\\d my_schema.my_spatial_table\n-------------------------------------------------------------------------\nmy_schema.my_spatial_table.geomcp_c SRID:4326 TYPE:CURVEPOLYGON DIMS:2\n(1 row)\n\nTable \"my_schema.my_spatial_table\"\nColumn | Type | Modifiers\n----------+----------------------+-------------------------------------------------------------------------\nid | integer | not null default nextval('my_schema.my_spatial_table_id_seq'::regclass)\ngeom | geometry(Point,4326) |\ngeom_c | geometry |\ngeomcp_c | geometry |\nCheck constraints:\n\"enforce_dims_geom_c\" CHECK (st_ndims(geom_c) = 2)\n\"enforce_dims_geomcp_c\" CHECK (st_ndims(geomcp_c) = 2)\n\"enforce_geotype_geom_c\" CHECK (geometrytype(geom_c) = 'POINT'::text OR geom_c IS NULL)\n\"enforce_geotype_geomcp_c\" CHECK (geometrytype(geomcp_c) = 'CURVEPOLYGON'::text OR geomcp_c IS NULL)\n\"enforce_srid_geom_c\" CHECK (st_srid(geom_c) = 4326)\n\"enforce_srid_geomcp_c\" CHECK (st_srid(geomcp_c) = 4326)\n\n-- geometry_columns view also registers the new columns --\nSELECT f_geometry_column As col_name, type, srid, coord_dimension As ndims\nFROM geometry_columns\nWHERE f_table_name = 'my_spatial_table' AND f_table_schema = 'my_schema';\n\ncol_name | type | srid | ndims\n----------+--------------+------+-------\ngeom | Point | 4326 | 2\ngeom_c | Point | 4326 | 2\ngeomcp_c | CurvePolygon | 4326 | 2\n```"
] | [
null,
"https://postgis.net/docs/manual-3.4/images/note.png",
null,
"https://postgis.net/docs/manual-3.4/images/note.png",
null,
"https://postgis.net/docs/manual-3.4/images/check.png",
null,
"https://postgis.net/docs/manual-3.4/images/check.png",
null,
"https://postgis.net/docs/manual-3.4/images/check.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.5024034,"math_prob":0.7130967,"size":4582,"snap":"2023-40-2023-50","text_gpt3_token_len":1085,"char_repetition_ratio":0.20729576,"word_repetition_ratio":0.10181818,"special_character_ratio":0.28939328,"punctuation_ratio":0.16344725,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9772409,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-10-02T22:26:44Z\",\"WARC-Record-ID\":\"<urn:uuid:2d0c80c0-8015-4c99-a609-d50ac882f1ef>\",\"Content-Length\":\"10363\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:8cacc754-f4e0-48dd-9c19-0d2052fdb295>\",\"WARC-Concurrent-To\":\"<urn:uuid:5db2df88-dc9b-43fb-9f5f-06534cfab6cc>\",\"WARC-IP-Address\":\"209.208.97.173\",\"WARC-Target-URI\":\"https://postgis.net/docs/manual-3.4/en/AddGeometryColumn.html\",\"WARC-Payload-Digest\":\"sha1:PXCSKA56H7R3SSWFQWHJD3WIF3PWGGOQ\",\"WARC-Block-Digest\":\"sha1:IVL5TRLZSLTTYYWMB7WPJ2ZAX4FB44YZ\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-40/CC-MAIN-2023-40_segments_1695233511021.4_warc_CC-MAIN-20231002200740-20231002230740-00562.warc.gz\"}"} |
https://answers.everydaycalculation.com/as-percent/5.2 | [
"Solutions by everydaycalculation.com\n\n## Express 5.2 as a percent\n\n5.2 is equivalent to 520/100 or 520%\n\nReference: Percentage Calculator\n\n#### Steps to convert decimal into percentage\n\n1. Multiply both numerator and denominator by 100\n5.2/1 × 100/100 = 520/100\n2. Write in percentage notation: 520%\n\n#### Decimal to percent converter\n\nExpress as percentage:"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7963455,"math_prob":0.9905673,"size":337,"snap":"2019-13-2019-22","text_gpt3_token_len":93,"char_repetition_ratio":0.15915915,"word_repetition_ratio":0.0,"special_character_ratio":0.3323442,"punctuation_ratio":0.13846155,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9883808,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-03-20T07:44:24Z\",\"WARC-Record-ID\":\"<urn:uuid:867ad923-9f8e-465c-ba6e-230247aa314c>\",\"Content-Length\":\"4910\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:37a187e8-13fc-4077-808b-fc16fcd3125c>\",\"WARC-Concurrent-To\":\"<urn:uuid:4257294f-e5ac-45ea-9b95-1bbf13d3d74c>\",\"WARC-IP-Address\":\"96.126.107.130\",\"WARC-Target-URI\":\"https://answers.everydaycalculation.com/as-percent/5.2\",\"WARC-Payload-Digest\":\"sha1:SCUDHZQRRJKCMKKWIKSMQWEGKUVVC3IG\",\"WARC-Block-Digest\":\"sha1:43MDJOYV372ZMX7X6FIFJPGIJV3KFVA2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-13/CC-MAIN-2019-13_segments_1552912202303.66_warc_CC-MAIN-20190320064940-20190320090940-00225.warc.gz\"}"} |
https://rjptonline.org/HTML_Papers/Research%20Journal%20of%20Pharmacy%20and%20Technology__PID__2016-9-2-9.html | [
"A Study on Energy of an Intuitionistic Fuzzy Directed Graph\n\nDeepa G1*, Praba B 2, Chandrasekaran VM1\n\n2SSN College of Engineering, Kalavakkam, Chennai, Tamilnadu, India,\n\n*Corresponding Author E-mail: deepa.g@vit.ac.in , vmcsn@vit.ac.in, prabab@ssn.edu.in\n\nABSTRACT:\n\nIn this review article, we have discussed about energy of a graph and energy of a fuzzy graph. Further we presented some standard results on these graphs. Since most real world networks such as communication networks, data organization, computational devices, the flow of computation, web graphs etc are directed graphs, we have analyzed energy of an intuitionistic fuzzy directed graph through real time example.\n\nKEYWORDS: Energy of a graph, Fuzzy Graph, Intuitionistic Fuzzy Graph.\n\nINTRODUCTION:\n\nThe motivation about the study of graph energy comes from chemistry. In 1930 Erich Hückel proposed the Hückel molecular orbital theory. The basic problem in Hückel theory is to determine the eigen values and eigen vectors of the graph representing carbon atom connectivity of a given conjugated system. An interesting quantity in Hückel theory is the sum of the energies of all the electrons in a molecule, called total",
null,
"- electron energy7,14,19,5,15. Several criteria relate to energy such as energy change due to edge addition, maximal energy, and equal energy has been considered in1,19,5. The energy of a graph has chemical applications in11,12,14 and mathematical properties in13. The foundation for graph theory was laid in 1735 by Leonhard Euler when he solved the ‘Konigsberg bridges’ problem. Many real life problems can be represented by a graph. In computer science, graphs are used to represent networks of communications, data organization, computational devices, the flow of computation, etc.\n\nThe link structure of a website could be represented by a directed graph in which the vertices are the web pages available at the website and a directed edge from page A to page B exists if and only if A contains a link to B22. A similar approach can be taken to problems in travel, biology, computer chip design and many other fields. Hence graph theory is widely used in solving real time problems. But when the system is large and complex it is difficult to extract the exact information about the system using the classical graph theory. In such cases fuzzy graph is used to analyze the system.\n\nThe first definition of fuzzy graphs was proposed by Kafmann18 in 1973, from the Zadeh’s fuzzy relations28-30. But Rosenfeld24 introduced another elaborated definition including fuzzy vertex and fuzzy edges and several fuzzy analogs of graph theoretic concepts such as paths, cycles, connectedness and etc. The first definition of intuitionistic fuzzy graphs was proposed by Atanassov4. In 1978, Gutman10 introduced the concept of ‘graph energy’ as the sum of the absolute values of the eigen values of the adjacency matrix of the graph. Certain bounds on energy are discussed in6,21, 13. Energy of different graphs including regular16, non-regular17, circulant26 and random graphs27 is also under study. Energy is defined for signed graphs in 9 and for weighted graphs in25. The energy of graph is extended to the energy of fuzzy graph in2. The energy of fuzzy graph is extended to the energy of an intuitionistic fuzzy graph in23.\n\nREFERENCES:\n\n1. Akbari, S.E. Ghorbani and M.R. Oboudi., 2009. Edge addition, Singular values and energy of graphs and matrices. Linear Algebra and its Applications, 430: 2192-2199.\n\n2. Anjali, N and Sunil Mathew., 2013. Energy of a fuzzy graph of Annals of Fuzzy Mathematics and Informatics.\n\n3. Atanassov, K., 1986. Intuitionistic fuzzy sets, Fuzzy sets and systems, 20: 87-96.\n\n4. Atanassov, K., 1999. Intuitionistic Fuzzy sets: Theory and Applications, Springer-Verlag, Heidelberg.\n\n5. Brankov, V., D. Stevanovic and I. Gutman., 2004. Equienergetic Chemical trees. J. Seb. Chem. Soc., 69:549-553.\n\n6. Brualdi, R.A., 2006. Energy of a graph, Notes to AIM Workshop on spectra of families of atrices described by graphs, digraphs, and sign patterns.\n\n7.",
null,
"D.M., M. Doob., and H. Sachs., 1980. Spectra of Graphs: Theory and Applications Academic Press, New York.\n\n8. Deepa, G., B. Praba and V.M. Chandrasekaran., 2014. Virus Spread in an Intuitionistic Fuzzy Network of International Journal of Applied Engineering Research, Vol.9: No.16: pp: 5507-5515.\n\n9. Germina, K.A., K. Shahul Hameed and Thomas Zaslavsky., 2011. On products and line graphs of signed graphs, their eigen values and energy, Linear Algebra Appl. 435: 2432-2450.\n\n10. Gutman, I., 1978. The energy of a graph, Ber. Math. Statist. Sekt. Forsch-ungszentram Graz. 103: 1–22.\n\n11. Gutman, I and O.E. Polansky., 1986. Mathematical concepts in Organic Chemistry, Springer - Verlag, Berlin.\n\n12. Gutman, I., 1992. Total",
null,
"-electron energy of benzenoid hydrocarbons, Topics Curr. Chem. 162: 29-63\n\n13. Gutman, I., 2001. The energy of graph: old and new results in Betten, A., Kohnert, A., Laue, R., Wassermann, A., [Eds.], Algebraic Combinations and Applications, Springer-Verlag, Berlin, 196-211.\n\n14. Gutman, I., 2005. Topology and Stability of conjugated hydrocarbons, the dependence of total",
null,
"- electron energy on molecular topology, J. Serb. Chem. Soc.70: 441-456.\n\n15. Gutman, I., and B. Zhou., 2006. Laplacian energy of a graph, Linear Algebra and its Applications, 414: 29-37.\n\n16. Gutman, I., S. Zare Firoozabadi., J. A. de la Pe~na and J. Rada., 2007. On the energy of regular graphs, MATCH Commun. Math. Comput. Chem. 57: 435-442.\n\n17. Indulal, G., A. Vijayakumar., 2007. Energies of some non-regular graphs, J. Math. Chem. 42: 377-386.\n\n18. Kauffman, A., 1973. Introduction a la Theorie des sous – ensembles Flous, Masson et cie, Vol.1.\n\n19. Koolen, J.H., V. Moulton and I. Gutman., 2000. Improving the McClelland inequality for total",
null,
"-electron energy. Chemical physics Letters, 320:213-216.\n\n20. Lee, K.H., 2005. First Course on Fuzzy theory and Applications, Springer-Verlag, Berlin.\n\n21. Liu, H., M. Lu and F. Tian., 2007. Some upper bounds for the energy of graphs, J. Math.Chem.42: 377-386.\n\n22. Praba, B and R. Sujatha., 2007. Application of Fuzzy Markov model for web-testing, Ganita Sandesh, 21: 111-120.\n\n23. Praba, B., V.M. Chandrasekaran and G. Deepa., 2014. Energy of an Intuitionistic Fuzzy Graph of Italian Journal of Pure and Applied Mathematics, 32:431-444.\n\n24. Rosenfeld, A., Fuzzy graphs, in L.A. Zadeh, K.cS. Fu., K. Tanaka and M. Shimura, eds, 1975. Fuzzy sets and their applications to cognitive and decision process, Academic Press, New York, 75-95.\n\n25. Shao, J.F. Gong and Z. Du., 2011. External energies of weighted trees and forests with fixed total weight sum, MATCH Commun.Chem.66: 879-890.\n\n26. Shparlinski, I., 2006. On the energy of some circulant graphs, Linear Algebra and Appl., 414: 378-382.\n\n27. Wenxue Du, Xueliang Li, and Yiyang Li, 2010. Various energies of random graphs, MATCH. Commun. Math. Comput. Chem. 64: 251-260.\n\n28. Zadeh, L.A., 1965. Fuzzy sets, Information and Control, 8: 338-353.\n\n29. Zadeh, L.A., 1971. Similarity relations and fuzzy ordering, Information sciences, 3: 177-200.\n\n30. Zadeh, L.A., 2008. Is there a need for fuzzy logic? Information sciences, 178: 2751-2779.\n\nReceived on 19.01.2016 Modified on 25.01.2016\n\nAccepted on 22.02.2016 © RJPT All right reserved\n\nResearch J. Pharm. and Tech. 9(2): Feb., 2016; Page 190-195\n\nDOI: 10.5958/0974-360X.2016.00034.2"
] | [
null,
"https://rjptonline.org/HTML_Papers/17-----------RA-----------RJPT-3487-19-01-2016-Revised-25-01-2016_files/image001.gif",
null,
"https://rjptonline.org/HTML_Papers/17-----------RA-----------RJPT-3487-19-01-2016-Revised-25-01-2016_files/image002.gif",
null,
"https://rjptonline.org/HTML_Papers/17-----------RA-----------RJPT-3487-19-01-2016-Revised-25-01-2016_files/image001.gif",
null,
"https://rjptonline.org/HTML_Papers/17-----------RA-----------RJPT-3487-19-01-2016-Revised-25-01-2016_files/image001.gif",
null,
"https://rjptonline.org/HTML_Papers/17-----------RA-----------RJPT-3487-19-01-2016-Revised-25-01-2016_files/image001.gif",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7969182,"math_prob":0.83582574,"size":7434,"snap":"2020-45-2020-50","text_gpt3_token_len":2101,"char_repetition_ratio":0.13660835,"word_repetition_ratio":0.019197207,"special_character_ratio":0.29432338,"punctuation_ratio":0.2819919,"nsfw_num_words":2,"has_unicode_error":false,"math_prob_llama3":0.9894422,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,4,null,1,null,4,null,4,null,4,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-31T03:34:37Z\",\"WARC-Record-ID\":\"<urn:uuid:8b9b851a-8083-4874-8468-afcece232dc3>\",\"Content-Length\":\"86403\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:a192d5e2-a023-44c7-9aab-e6f9e4478ec9>\",\"WARC-Concurrent-To\":\"<urn:uuid:2a4d7c4b-b4ed-4b34-ad06-e569187968b6>\",\"WARC-IP-Address\":\"103.235.104.243\",\"WARC-Target-URI\":\"https://rjptonline.org/HTML_Papers/Research%20Journal%20of%20Pharmacy%20and%20Technology__PID__2016-9-2-9.html\",\"WARC-Payload-Digest\":\"sha1:7UPNH6AD2FILS5EOZM2MOFDHXL64QR35\",\"WARC-Block-Digest\":\"sha1:CRLWXNBTRRGCGGVLUFJXACAXWI42YBGE\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107912807.78_warc_CC-MAIN-20201031032847-20201031062847-00496.warc.gz\"}"} |
https://it.mathworks.com/help/comm/ref/shift2mask.html | [
"Convert shift to mask vector for shift register configuration\n\n## Syntax\n\n```mask = shift2mask(prpoly,shift) ```\n\n## Description\n\n`mask = shift2mask(prpoly,shift)` returns the mask that is equivalent to the shift (or offset) specified by `shift`, for a linear feedback shift register whose connections are specified by the primitive polynomial `prpoly`. The `prpoly` input can have one of these formats:\n\n• A binary vector that lists the coefficients of the primitive polynomial in order of descending powers\n\n• An integer scalar whose binary representation gives the coefficients of the primitive polynomial, where the least significant bit is the constant term\n\nThe `shift` input is an integer scalar.\n\nNote\n\nTo save time, `shift2mask` does not check that `prpoly` is primitive. If it is not primitive, the output is not meaningful. To find primitive polynomials, use `primpoly` or see .\n\nThe equivalent mask for the shift s is the remainder after dividing the polynomial xs by the primitive polynomial. The vector `mask` represents the remainder polynomial by listing the coefficients in order of descending powers.\n\n### Shifts, Masks, and Pseudonoise Sequence Generators\n\nLinear feedback shift registers are part of an implementation of a pseudonoise sequence generator. Below is a schematic diagram of a pseudonoise sequence generator. All adders perform addition modulo 2.",
null,
"The primitive polynomial determines the state of each switch labeled gk, and the mask determines the state of each switch labeled mk. The lower half of the diagram shows the implementation of the shift, which delays the starting point of the output sequence. If the shift is zero, the m0 switch is closed while all other mk switches are open. The table below indicates how the shift affects the shift register's output.\n\nT = 0T = 1T = 2...T = sT = s+1\nShift = 0x0 x1 x2 ...xs xs+1\nShift = s > 0xs xs+1 xs+2 ... x2s x2s+1\n\nIf you have Communications Toolbox™ software and want to generate a pseudonoise sequence in a Simulink® model, see the PN Sequence Generator block reference page.\n\n## Examples\n\ncollapse all\n\nConvert a shift in a linear feedback shift register into an equivalent mask.\n\nConvert a shift of 5 into the equivalent mask ${x}^{3}+x+1$ for the linear feedback shift register whose connections are specified by the primitive polynomial ${x}^{4}+{x}^{3}+1$. The length of the mask is equal to the degree of the primitive polynomial, 4.\n\n`mk = shift2mask([1 1 0 0 1],5)`\n```mk = 1×4 1 0 1 1 ```\n\nConvert a shift of 7 to a mask of ${x}^{4}+{x}^{2}$ for the primitive polynomial ${x}^{5}+{x}^{2}+1$.\n\n`mk2 = shift2mask('x5+x2+1',7)`\n```mk2 = 1×5 1 0 1 0 0 ```\n\n## References\n\n Lee, J. S., and L. E. Miller, CDMA Systems Engineering Handbook, Boston, Artech House, 1998.\n\n Simon, Marvin K., Jim K. Omura, et al., Spread Spectrum Communications Handbook, New York, McGraw-Hill, 1994.\n\n## Version History\n\nIntroduced before R2006a"
] | [
null,
"https://it.mathworks.com/help/comm/ref/newlfsr_wtif.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.7071681,"math_prob":0.9899713,"size":2895,"snap":"2023-14-2023-23","text_gpt3_token_len":732,"char_repetition_ratio":0.15081286,"word_repetition_ratio":0.058333334,"special_character_ratio":0.2373057,"punctuation_ratio":0.122486286,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9944622,"pos_list":[0,1,2],"im_url_duplicate_count":[null,2,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-06-09T07:28:54Z\",\"WARC-Record-ID\":\"<urn:uuid:eee1de82-861e-4c7c-b31d-fe09e8ff787e>\",\"Content-Length\":\"84944\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:55dadbc7-9e88-4a77-90a9-fbfe62941223>\",\"WARC-Concurrent-To\":\"<urn:uuid:329c03d4-b4ad-4374-a721-e81555074a7b>\",\"WARC-IP-Address\":\"104.86.80.92\",\"WARC-Target-URI\":\"https://it.mathworks.com/help/comm/ref/shift2mask.html\",\"WARC-Payload-Digest\":\"sha1:PMVUO47YAWI63VYALACCZEU4KUAOAQGX\",\"WARC-Block-Digest\":\"sha1:VC4EUENKIYKDNDOQQMO3Z7YPFN6OS2A2\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-23/CC-MAIN-2023-23_segments_1685224655446.86_warc_CC-MAIN-20230609064417-20230609094417-00631.warc.gz\"}"} |
https://eureka.patsnap.com/patent-CN102506805A | [
"# Multi-measuring-point planeness evaluation method based on support vector classification\n\n## A technology of support vector classification and support vector machine, which is applied in the direction of measuring devices and instruments, can solve the problems of complex calculation, falling into local optimum, and difficult to guarantee the accuracy of evaluation, so as to improve calculation efficiency, improve accuracy and reduce calculation The effect of data volume\n\nActive Publication Date: 2012-06-20\nSOUTH CHINA UNIV OF TECH\n2 Cites 12 Cited by\n\n## AI-Extracted Technical Summary\n\n### Problems solved by technology\n\nThe calculation speed of the least square method is faster, but it is difficult to guarantee the evaluation accuracy by using the least square plane instead of the minimum area plane; the calculation accuracy of the search method is high, but ...\nView more\n\n## Abstract\n\nThe invention discloses a multi-measuring-point planeness evaluation method based on support vector classification. The method comprises the following steps: sampling points on a tested surface to obtain a three-dimensional coordinate measurement value of each point; calculating a least square surface corresponding to a measurement point set; with the least square surface as a reference, removing measurement points with a small residual absolute value; moving measurement points with a big residual absolute value for the same distance respectively along the positive direction and the negative direction of the normal direction of the least square surface to form a linearly separable positive point set and negative point set; calculating the maximum interval plane of the positive point set and the negative point set with a linearly classified support vector machine method; and calculating a planeness value according to the plane. According to the multiple-measuring-point planeness evaluation method, the calculation data amount in the multiple-measuring-point planeness evaluation can be reduced and the calculation efficiency and the accuracy of an evaluation result are obviously improved.\n\nApplication Domain\n\nMeasurement devices\n\nTechnology Topic\n\nLeast squaresPositive direction +4\n\n## Image\n\n•",
null,
"•",
null,
"•",
null,
"## Examples\n\n• Experimental program(1)\n\n### Example Embodiment\n\n In order to make the objectives, technical solutions and advantages of the present invention clearer, the following will further describe the embodiments of the present invention in detail with reference to the accompanying drawings:\n See figure 1 , Is a flowchart of a method for multi-point flatness evaluation based on support vector classification, the method includes the following steps:\n Step 10: Sample the points on the measured surface to obtain the three-dimensional coordinates of all the measured points.\n Step 20: Calculate the least squares plane corresponding to the measurement point set, and use the least squares plane as a reference to eliminate the measurement points with the smaller absolute value of the residual error.\n In step 30, the measurement points with large absolute values are moved equally along the normal positive and negative directions of the least squares plane to form a linearly separable set of positive and negative points.\n Step 40 uses the support vector machine method of linear classification to calculate the maximum separation plane between the positive point set and the negative point set, and uses the plane to calculate the flatness value.\n See figure 2 , Is a schematic diagram of the method for removing measurement points. The method includes: Calculating the least squares plane corresponding to the measurement point set, and using the least squares plane as the reference to eliminate the measurement points with the smaller absolute value of the residual error. Measurement of the surface of the object, you can get the original measurement point set D 0 ={(x i , Y i ,,z i ,), i=1,...,l}, D 0 The equation of the corresponding least squares plane is z=Ax+By+C. According to the least squares method, the objective function is:\n minS=∑(zi-z)2=∑(zi-Ax i -By i -C) 2 (3)\n Determine the values of A, B, and C by formula 1, that is, determine the position of the least squares plane; plane∏ 1 And plane∏ 2 Respectively, the distance to the least squares plane in the z-axis direction is k|e| max Two planes.\n Calculate the absolute value of the residual error in the Z-axis direction of each measurement point relative to the least squares plane |e| i :\n |e| i =|z i -Ax i -By i -C| (4)\n The maximum residual is recorded as |e| max , Remove the absolute value of the residual |e i |Less than or equal to k×|e| max (k ∈ (0, 1)) measurement point, keep the absolute value of the residual |e i |Greater than k×|e| max (k ∈ (0, 1)) measurement point, that is, the proposed plane ∏ 1 And plane∏ 2 Points between, keep the plane ∏ 1 Above and plane∏ 2 The following points, through the above operations, form a new point set D 1 ={(x i , Y i ,z i ,), i=1,...,m}. 
Because the measurement points that determine the minimum area plane are all located on the boundary, removing the measurement points with small residuals will not affect the evaluation results.\n See image 3 , In order to construct a schematic diagram of positive and negative points, the new measurement point set D 1 ={(x i , Y i ,z i ,),I=1,...,m} respectively move along the positive direction of the least squares plane normal vector n(A, B, -1) Form positive point set D 1 + ={(x' i , Y′ i ,,z' i ,), i=1,...,m}, move along the negative direction of the normal vector of the least squares plane Form negative point set D 1 - ={(x″ i ,Y″ i ,,z\" i ,), i=1,...,m}.\n Use the support vector machine method of linear classification to analyze the above positive and negative class point set D 1 + D 1 - For classification, the specific solving process steps are as follows:\n Set the positive and negative points to D 1 + D 1 - Unified into the training set T = {(X 1 , Y 1 ), (X 2 , Y 2 ),..., (X m , Y m )}, where Xi=(x i , Y i ,z i ), Y i ={1,-1}, i=1, 2,...,m;\n Convert the problem of seeking the maximum interval into a dual problem:\n min 1 2 X i = 1 m X j = 1 m Y i Y i α i α j ( X i · X i ) - X j = 1 m α j ,\n s . t . X i = 1 m Y i α i = 0 , - - - ( 5 )\n α i ≥0, i=1, 2,..., m.\n Obtain the optimal solution through the quadratic convex programming algorithm\n Calculate the normal vector of the maximum interval plane Choose α * A positive component of And calculate the intercept of the maximum interval plane\n The maximum separation plane is (w * ·X)+b * =0, and converted to z=A 0 x+B 0 y+C 0 In this form, the plane is the measurement point set D 1 The smallest area plane, let D 1 The distance between each measuring point in the plane and the plane is d i , Then the final flatness value f MZ =d max -d min.\n The above are only preferred specific embodiments of the present invention, but the protection scope of the present invention is not limited to this. Any person skilled in the art can easily think of changes or changes within the technical scope disclosed by the present invention. All replacements shall be covered within the protection scope of the present invention. Therefore, the protection scope of the present invention should be subject to the protection scope of the claims.\n\n## PUM",
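For orientation, the workflow of this embodiment can be sketched in a few lines of Python. This is an editor's rough approximation, not the patented implementation: it uses numpy's least-squares routine and scikit-learn's linear SVC with a large C as a stand-in for solving the hard-margin dual problem (5) directly, and the threshold k and the shift distance d are illustrative choices.

```python
import numpy as np
from sklearn.svm import SVC

def flatness(points, k=0.5):
    """points: (N, 3) array of measured (x, y, z) coordinates."""
    x, y, z = points.T
    # Step 20: least-squares plane z = A*x + B*y + C
    A, B, C = np.linalg.lstsq(np.c_[x, y, np.ones_like(x)], z, rcond=None)[0]
    e = z - (A * x + B * y + C)                  # residuals along the z axis
    P = points[np.abs(e) > k * np.abs(e).max()]  # keep only large-residual points
    # Step 30: shift the kept points along +/- the plane normal to build two classes
    n = np.array([A, B, -1.0])
    n /= np.linalg.norm(n)
    d = e.max() - e.min()                        # any shift larger than half the spread works
    X = np.vstack([P + d * n, P - d * n])
    y_cls = np.r_[np.ones(len(P)), -np.ones(len(P))]
    # Step 40: maximum-margin separating plane via a (nearly) hard-margin linear SVM
    svm = SVC(kernel="linear", C=1e9).fit(X, y_cls)
    w, b = svm.coef_[0], svm.intercept_[0]
    dist = (P @ w + b) / np.linalg.norm(w)       # signed distances of the kept points
    return dist.max() - dist.min()               # flatness value f_MZ
```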
null,
"",
null,
"## Description & Claims & Application Information\n\nWe can also present the details of the Description, Claims and Application information to help users get a comprehensive understanding of the technical details of the patent, such as background art, summary of invention, brief description of drawings, description of embodiments, and other original content. On the other hand, users can also determine the specific scope of protection of the technology through the list of claims; as well as understand the changes in the life cycle of the technology with the presentation of the patent timeline. Login to view more.\n\n## Structure complex distribution network reliability evaluation method based on diffusion theory\n\nOwner:SOUTHEAST UNIV\n\n## MIMO radar waveform design method for neighboring target resolution\n\nActiveCN107329120AHigh resolution characteristicsImprove computing efficiency\nOwner:THE PLA INFORMATION ENG UNIV\n\n## Chirp signal parameter estimating method based on sparse constraint\n\nOwner:XIDIAN UNIV\n\n## Power lithium ion battery thermal runaway flue gas safety evaluation testing method\n\nPendingCN114002385AReduce the amount of calculation datareduce workload\nOwner:CRRC QINGDAO SIFANG CO LTD\n\n## Ice coating simulation method for 220kV transmission line tower-coupling system\n\nInactiveCN108710763AShort modeling cycleImprove computing efficiency\nOwner:国网江西省电力有限公司经济技术研究院 +1\n\n## Attendance calculating method and device\n\nInactiveCN101702207Aavoid wasting timeImprove computing efficiency\nOwner:KINGDEE SOFTWARE(CHINA) CO LTD\n\n## Classification and recommendation of technical efficacy words\n\n• Reduce the amount of calculation data\n• Improve computing efficiency\n\n## Power lithium ion battery thermal runaway flue gas safety evaluation testing method\n\nPendingCN114002385AReduce the amount of calculation datareduce workload\nOwner:CRRC QINGDAO SIFANG CO LTD\nWho we serve\n• R&D Engineer\n• R&D Manager\n• IP Professional\nWhy Eureka"
] | [
null,
"https://images-eureka.patsnap.com/patent_img/f67d0e35-7f83-4d65-b56f-1f4dd000f04d/HSA00000591655600011.PNG",
null,
"https://images-eureka.patsnap.com/patent_img/f67d0e35-7f83-4d65-b56f-1f4dd000f04d/HSA00000591655600012.PNG",
null,
"https://images-eureka.patsnap.com/patent_img/f67d0e35-7f83-4d65-b56f-1f4dd000f04d/HSA00000591655600021.PNG",
null,
"https://static-eureka.patsnap.com/ssr/23.2.0/_nuxt/noPUMBig.eafacd0d.png",
null,
"https://static-eureka.patsnap.com/ssr/23.2.0/_nuxt/noPUMSmall.d6d97e14.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.82445526,"math_prob":0.9836546,"size":3979,"snap":"2023-14-2023-23","text_gpt3_token_len":769,"char_repetition_ratio":0.13559748,"word_repetition_ratio":0.08812261,"special_character_ratio":0.16712742,"punctuation_ratio":0.062176164,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98570126,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,1,null,1,null,1,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-03-21T14:35:12Z\",\"WARC-Record-ID\":\"<urn:uuid:2dad417a-da49-46d4-bf29-4524ba9b8671>\",\"Content-Length\":\"182537\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:9b75685b-1890-4290-9127-e490508793cf>\",\"WARC-Concurrent-To\":\"<urn:uuid:586e2ce2-7052-4f46-b5f1-999e7ac5565b>\",\"WARC-IP-Address\":\"18.160.10.28\",\"WARC-Target-URI\":\"https://eureka.patsnap.com/patent-CN102506805A\",\"WARC-Payload-Digest\":\"sha1:AJ3ZNDTQJ4L4ZGJAUTGLHZLDZTZJCZZM\",\"WARC-Block-Digest\":\"sha1:IG3GAY7TNYM3X56SCELGQNWGJ3364PJN\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-14/CC-MAIN-2023-14_segments_1679296943698.79_warc_CC-MAIN-20230321131205-20230321161205-00488.warc.gz\"}"} |
https://codegolf.stackexchange.com/questions/104782/its-factors-all-the-way-down | [
"# It's factors all the way down!\n\nThis challenge is inspired by this fantastic animated diagram (thanks to flawr for posting it in chat).\n\nGiven an input n, draw all of its prime factors as nested polygons of dots, as specified.\n\nFor example, given the number 357 = 17x7x3, you arrange 3 dots in a triangle, 7 versions of those triangles in a heptagon, and 17 versions of those heptagons in a 17-gon. In short, nested polygons going from the largest prime factor on the outside to the smallest on the inside. For 357, your answer should look a little like this (with or without color):",
null,
"Every polygon of every prime >= 3 should not be rotated around the diagram.\n\nThe only exception is the prime 2, specifically for odd powers of 2. As you can see in the example for 376 = 47x2x2x2 below, the 8s rotate and are not single lines of 2s, but are vertical stacks for 4s in a square. Even powers of 2, arranged in squares, do not need to be rotated in this way.",
null,
"In fact, 448 = 7x2x2x2x2x2x2 has a diagram that looks like a heptagon of 64s, and 64 is arranged into a square of squares of squares, but without rotation.",
null,
"Two more examples are 440 = 11x5x2x2x2 and 432 = 3x3x3x2x2x2x2. We see that 440 with an odd power of 2, has rotated 8s, but 432 with an even power of 2 does not rotate its 16s.\n\nAnd finally, here is a minimal example, 10 = 5x2, without color that I mocked up with Python and its turtle module.",
null,
"The challenge\n\n• Given an input n where 1 <= n <= 10000, output an image of its nested factor polygons.\n• Rules are:\n• The image is made up of nested polygons of dots, from a polygon with (the largest prime factor) sides on the outside to the smallest prime factor on the inside.\n• For the factor 2, the powers of 2 should stack as a line, then a squares, then a line of squares, and so on. Even powers of 2 should not be rotated. Odd powers of 2 should be rotated around their respective polygons, and they should be stacked vertically before rotation.\n• You may orient the image however you like (though I prefer up), but every nested polygon should be facing the same direction as any other polygon with the sole exception of odd powers of 2.\n• You have two options for image size and dot size:\n• The image size is static and the dot size decreases as n increases (as in the animation).\n• The dot size is static and the image size grows as n increases.\n• The first three layers of polygons should be distinguishable from neighboring polygons (i.e. not touching), but considering the size of the images at and around n=10000, it's okay if the layers after start to touch. I'd prefer it if they didn't, but it may be unavoidable to fit on an image that is uploadable to Stack Exchange.\n• Color is optional.\n• The shape of the dots is up to you. If squares are better for your language, use those.\n• No bonuses, but I would like to see someone animate and color the diagrams like in the original post.\n\nThanks to Conor O'Brien, EasterlyIrk, Martin Ender, Kritixi Lithos, Mego, DJ McMayhem, and El'endia Starman for their help in writing this question.\n\nThis code golf, so shortest code wins. Good luck and good golfing!\n\n# Python 3.5, 331309308306 304 bytes\n\nIt took quite a bit of messing with the spacing of the polygons (and the specification, too, to be honest) to get this answer to work, but I finally did it and hopefully other answers can start coming in.\n\nEdit: -2 bytes thanks to FlipTack. -8 bytes from removing a section of code that I forgot to remove earlier. -12 bytes from golfing the last function. -1 byte from changing the circumference of the drawings from size=2500 to size=2e3, which also allows the drawings to better fit on screens (diameter ~= 795.77 down to diameter ~= 636.62). -2 bytes from fixing a bug. -2 bytes from restructuring how I build a.\n\nGolfing suggestions welcome. Trinket for testing and images to follow shortly.\n\nfrom math import*\nfrom turtle import*\nht();pu()\ndef g(n):\ni=1;a=[]\nwhile n%4<1:a+=4,;n//=4\nwhile n>1:\ni+=1\nwhile n%i<1:a+=i,;n//=i\nreturn f(a,2e3)\ndef f(a,s,x=0,y=0,t=0):\nif a:\n*c,b=a;s/=b\nfor i in range(b):u=2*pi*i/b+t*(b<3)+pi/4*(b==4);f(c,s,x+s*sin(u),y+s*cos(u),u)\nelse:goto(x,y);dot(4)\n\n\nHere's g(448), which now fits on my 1366x768 screen.",
null,
"Ungolfing\n\nimport math\nimport turtle\n\nturtle.hideturtle() # don't display the turtle itself)\nturtle.penup() # don't draw lines, just dots later on\n\ndef g(n):\ni = 1\na = []\nwhile n % 4 == 0: # get 4's into the list first,\na = a + # so that the fractal will be easier to structure\nn = n // 4\nwhile n > 1: # now get all of the other factors (including any stray 2's)\ni += 1\nwhile n % i == 0:\na = a + [i]\nn = n // i\nreturn f(a, 2000) # 2000 is the circumference of the circle\n# on which we draw the polygons\ndef f(a, s, x=0, y=0, t=0):\nif a:\nc = a[-1] # the size of the current outermost polygon\nb = a[:-1] # the rest of the factors for recursion\ns = s/b # the current circumference / the number of polygons at this layer\nfor i in range(b):\nu = 2*math.pi*i/b # angle around the circle\nif b == 2: # if b == 2, add the previous angle to rotate the structure\nu += t\nif b == 4: # if b == 4, add 45 degrees to keep the squares upright\nu += math.pi/4\ndx = s * math.sin(u) # our coordinate changes for this polygon\ndy = s * math.cos(u)\nf(c, s, x+dx, y+dy, u) # call the function again\n# on a new circle with new starting coordinates\nelse: # when we run out of factors,\nturtle.goto(x,y) # go to each coordinate\nturtle.dot(4) # and draw a dot\n\n• is n = n //= i supposed to be n//= i ? – Bobas_Pett Dec 29 '16 at 19:43\n• @Bobas_Pett Nah, you're looking at the ungolfing/explanation, and that's supposed to say n = n // i. I'll go fix it and add to the explanation while I'm at it. – Sherlock9 Dec 30 '16 at 4:05"
] | [
null,
"https://i.stack.imgur.com/mznik.png",
null,
"https://i.stack.imgur.com/5jzcF.png",
null,
"https://i.stack.imgur.com/ZtW2H.png",
null,
"https://i.stack.imgur.com/oax3w.png",
null,
"https://i.stack.imgur.com/UTFOI.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.93281394,"math_prob":0.9311788,"size":3156,"snap":"2020-45-2020-50","text_gpt3_token_len":832,"char_repetition_ratio":0.121192895,"word_repetition_ratio":0.031404957,"special_character_ratio":0.26045626,"punctuation_ratio":0.114503816,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9933503,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,3,null,3,null,3,null,3,null,3,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-10-27T15:41:04Z\",\"WARC-Record-ID\":\"<urn:uuid:2e6eb5e3-1daf-46a5-be27-a85090d378c8>\",\"Content-Length\":\"157243\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:7e1bf1fa-2ffe-47dc-a0a8-15c4877d3dcf>\",\"WARC-Concurrent-To\":\"<urn:uuid:090561dc-6f69-469d-971c-663664115856>\",\"WARC-IP-Address\":\"151.101.65.69\",\"WARC-Target-URI\":\"https://codegolf.stackexchange.com/questions/104782/its-factors-all-the-way-down\",\"WARC-Payload-Digest\":\"sha1:TSN5BXTZN75MQT6JEKONRJZJPOF5H3XJ\",\"WARC-Block-Digest\":\"sha1:SCIJU2HEZSFPXGWH2THKUHF7MRNPKEBK\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-45/CC-MAIN-2020-45_segments_1603107894203.73_warc_CC-MAIN-20201027140911-20201027170911-00119.warc.gz\"}"} |
http://sql-tutorial.ru/en/book_exercise_16.html | [
"",
null,
"",
null,
"",
null,
"00:14\n\n# Exercise #16\n\nFind the pairs of PC models having similar speeds and RAM. As a result, each resulting pair is shown only once, i.e. (i, j) but not (j, i). Result set: model with high number, model with low number, speed, and RAM.\n\nHere is the frequently proposed solution:\n\nSolution 1.12.1",
null,
"",
null,
"Console\n`SELECT MAX(model) AS 'model', MIN(model) AS 'model', speed, ramFROM PCGROUP BY speed, ramHAVING MAX(model) > MIN(model);`\n\nI really don't know, what was the cause to decide outputting only models with maximum and minimum numbers for each coinciding pair of values {speed, ram}. It is possible that the result of inaccurate query on available database is misleading.\n\nIn this task, it is needed to order all the models, but not only maximum and minimum of these. Extremal characteristics are mentioned for uniqueness, i.e. to get the pairs of models once , for example:\n\n1122 1121\nbut not\n\n1121 1122\n\nSo, if three models 1122, 1121, and 1135 have alike features, the result will be as follows:\n\n1135 1122\n1135 1121\n1122 1121\n\nBelow is presented almost correct solution, though also the too bulky.\n\nSolution 1.12.2",
null,
"",
null,
"Console\n`SELECT P.model, L.model, P.speed, P.ramFROM PC P JOIN (SELECT speed, ram FROM PC GROUP BY speed, ram HAVING SUM(speed)/speed = 2 AND SUM(ram)/ram = 2 ) S ON P.speed = S.speed AND P.ram = S.ram JOIN PC L ON L.speed = S.speed AND L.ram = S.ram AND L.model < P.model;`\n\nHere in subrequest S unique pairs of characteristics (speed, memory) are selected, which coincide for two computers (SUM (speed)/speed = 2) - the sum of identical values divided by this value gives us quantity of the PCs. Though with the same success it was possible to write the following HAVING clause:\n\n`HAVING COUNT(*) = 2`\n\nThe subquery joined with the table PC on this characteristics pair twice. In so doing, the second join is only carried out to order the models (L.model <P.model).\n\nThe mistake of the given decision consists that the number of the personal computers with identical characteristics may be more than two. In this situation any of such models will not get in result set of the presented solution.\n\nIn spite of the fact that the solution is easy to correct, better to rewrite it in not a such superfluous form.\n\nThere is one more mistake which is often made in this exercise. It is due to the fact that the PC table may include some computers with same model value. Therefore it is necessary to exclude possible duplicates of required pairs.\n\nT&S\n\nTo solve the problem on SQL-EX.RU",
null,
"",
null,
""
] | [
null,
"http://sql-tutorial.ru/view/gimages/indicator2.gif",
null,
"http://sql-tutorial.ru/view/gimages/flags/rus.png",
null,
"http://sql-tutorial.ru/view/gimages/flags/brit.png",
null,
"http://sql-tutorial.ru/view/gimages/console.png",
null,
"http://sql-tutorial.ru/view/gimages/needreg.png",
null,
"http://sql-tutorial.ru/view/gimages/console.png",
null,
"http://sql-tutorial.ru/view/gimages/needreg.png",
null,
"http://counter.yadro.ru/logo",
null,
"http://sql-tutorial.ru/view/gimages/2.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9319379,"math_prob":0.96701413,"size":1953,"snap":"2020-34-2020-40","text_gpt3_token_len":432,"char_repetition_ratio":0.11133915,"word_repetition_ratio":0.0,"special_character_ratio":0.22375832,"punctuation_ratio":0.13350126,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99140954,"pos_list":[0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2020-09-28T21:14:58Z\",\"WARC-Record-ID\":\"<urn:uuid:bcae5983-900f-4a40-b92e-e8181d3e17cb>\",\"Content-Length\":\"56117\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5bf5a518-45df-4a9a-abb7-1dc95a51afd0>\",\"WARC-Concurrent-To\":\"<urn:uuid:b57eaeff-a03c-4cef-a7a4-f60627805745>\",\"WARC-IP-Address\":\"81.176.226.196\",\"WARC-Target-URI\":\"http://sql-tutorial.ru/en/book_exercise_16.html\",\"WARC-Payload-Digest\":\"sha1:G6SO4UZPSW2LSHQ4B3D2ACARC54QF2W4\",\"WARC-Block-Digest\":\"sha1:ZOFKD7TZHMDXLTHP3KVYWHFOVBFTGCXC\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2020/CC-MAIN-2020-40/CC-MAIN-2020-40_segments_1600401614309.85_warc_CC-MAIN-20200928202758-20200928232758-00137.warc.gz\"}"} |
https://www.techwhiff.com/learn/teionbestor-assignment-take-and-inprogress-false/441329 | [
"#### Similar Solved Questions\n\n##### Help w solving question 33 and 34 C) U.255967. (U- X 1.62) XU. 16 0 . (d) 0.668459 +1.6x2-.38 x 0.22 -1.0397 +0.4) 1...\nhelp w solving question 33 and 34 C) U.255967. (U- X 1.62) XU. 16 0 . (d) 0.668459 +1.6x2-.38 x 0.22 -1.0397 +0.4) 1.38) 0.44 = t.osoz Let Y be a random variable. In a population, jy = 222 and a = 81. Use the central limit theorem topib answer questions 33 & 34 (Note: any intermediate results...\n##### The following three points are the locations of important facilities in a transportation network: (32.22), (53,...\nThe following three points are the locations of important facilities in a transportation network: (32.22), (53, 61), and (51, 56). The coordinates are in miles a. Calculate the Euclidean distances (in miles) between each of the three pairs of facilities. (Enter your responses rounded to two decimal ...\nQuestion 5 3 pts Production Analysis The same multimedia company as above revised their cost and revenue functions. Now they estimate: Cl)- 11.7x +48069 and R)-18.5x Find the number of units that must be produced and sold for a profit of $245. (Round to the nearest whole number). D Question 6 3 pts ... 1 answer ##### Monster energy drink is supposed to have a 16 ounces size. In this test there was... Monster energy drink is supposed to have a 16 ounces size. In this test there was six monster drinks tested and the average amount in each can was 13.22 ounces. The standard deviation is 2.05. (Use significance level .01) (1) Calculate confidence interval (2) Find test-statistic (3) Find p-value (4)... 1 answer ##### 23. An ordinary annuity is best defined as: A) increasing payments paid for a definitive period o... 23. An ordinary annuity is best defined as: A) increasing payments paid for a definitive period of time. B) increasing payments paid forever C) equal payments paid at the end of regular intervals over a stated time period. D) equal payments paid at the beginning of regular intervals for a limited ti... 1 answer ##### Problem 10. (10 points) The length (in pages) of math research projects is given below. Using... Problem 10. (10 points) The length (in pages) of math research projects is given below. Using this information, calculate the mean and the standard deviation regarding the data set as a sample. sample mean = sample standard deviation = Now use the same data set, but this time regard it as a popu... 1 answer ##### 3. (5 points) Use the following matrix to answer the following questions. Player 2 Strategy с... 3. (5 points) Use the following matrix to answer the following questions. Player 2 Strategy с Player 1 A -10, -10 200, -100, 220 140, D -100 180 B Assume that this is a simultaneous move one shot game. (a) Determine the dominant strategy for player A and Player B. If such a strategy does not e... 1 answer ##### 1. Suppose you thresholded an image at value ti, and thresholded the result at value t2.... 1. Suppose you thresholded an image at value ti, and thresholded the result at value t2. Describe the result if (a) ti >t2 (b) tı <t2... 1 answer ##### Weiland Co. shows the following information on its 2016 income statement: sales =$162,500; costs =...\nWeiland Co. shows the following information on its 2016 income statement: sales = $162,500; costs =$80,000; other expenses = $3,300; depreciation expense =$9,000; interest expense = $6,500; taxes =$22,295; dividends = $8,150. In addition, you're told that the firm issued$4,500 in new equity ...\n##### 2(15)(a) An urn contains 4 white and 4 black balls. 
We randomly choose 4 balls. If...\n2(15)(a) An urn contains 4 white and 4 black balls. We randomly choose 4 balls. If 2 of them are white and 2 are black, we stop. If not, we replace the balls in the urn and again randomly select 4 balls. This continues until exactly 2 of the 4 chosen are white. What is the probability that we shall ...\n##### 17. Suppose you have the following circuit. The switch has been open for a long time,...\n17. Suppose you have the following circuit. The switch has been open for a long time, and is closed at time t-0. point The second order differential equation for the voltage across the capacitor is R2 し2 V2 What is the coefficient K of the 2nd order differential equation for this circuit? Do ...\n##### Short Answer Question We have a dataset with n= 10 pairs of observations (Li, Yi), and...\nShort Answer Question We have a dataset with n= 10 pairs of observations (Li, Yi), and n n ri = 683, yi = 813, i=1 n n n <* = 47,405, :9; = 56,089, 4 = 66, 731. i=1 What is the coefficient of correlation for this data?...\n##### 5) Let F(t) te), te,t >for t e R. Find (t)dt. 0 te, tet, tdt 0\n5) Let F(t) te), te,t >for t e R. Find (t)dt. 0 te, tet, tdt 0...\n##### As companies evolve, certain factors can drive sudden growth. This may lead to a period of...\nAs companies evolve, certain factors can drive sudden growth. This may lead to a period of nonconstant, or variable, growth. This would cause the expected growth rate to increase or decrease, thereby affecting the valuation model. For companies in such situations, you would refer to the variable, or...\n##### 76) Draw electron-dot structures (also known as Lewis structures) for the following molecules. Predict the geometry...\n76) Draw electron-dot structures (also known as Lewis structures) for the following molecules. Predict the geometry of each molecule and indicate its polarity (whether it is polar or nonpolar). (a) CBr (b) BCl (c) NF (d) H S...\n##### Have created a fair value and goodwill allocation schedule based on the data. Would it be...\nhave created a fair value and goodwill allocation schedule based on the data. Would it be a good decision to acquire Arizona Corp? Please use the fair value allocation and good will schedule below to answer the question. Arizona Corp. had the following account balances at 12/1/19: Receivables: \\$96,0..."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8870267,"math_prob":0.9821566,"size":8032,"snap":"2022-27-2022-33","text_gpt3_token_len":2098,"char_repetition_ratio":0.10438465,"word_repetition_ratio":0.33094555,"special_character_ratio":0.2996763,"punctuation_ratio":0.19092946,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99594784,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-12T03:06:58Z\",\"WARC-Record-ID\":\"<urn:uuid:f259b251-2ed4-4797-8f79-8a5806b78e46>\",\"Content-Length\":\"54462\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:f2c801a4-65c8-4cdf-8046-b866ede4c8a4>\",\"WARC-Concurrent-To\":\"<urn:uuid:b34af1ac-810f-43a1-bbd4-fb054fc83b5f>\",\"WARC-IP-Address\":\"104.21.83.140\",\"WARC-Target-URI\":\"https://www.techwhiff.com/learn/teionbestor-assignment-take-and-inprogress-false/441329\",\"WARC-Payload-Digest\":\"sha1:V4MRQYHBYMGVX3G4RRZSFJ6MWKNGMJFS\",\"WARC-Block-Digest\":\"sha1:6DDGBE6VABI3B5NUG5J45YOXXBCPJYHF\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882571538.36_warc_CC-MAIN-20220812014923-20220812044923-00377.warc.gz\"}"} |
https://infoscience.epfl.ch/record/263217?ln=en | [
"MATHICSE Technical Report : Analytic regularity and collocation approximation for PDEs with random domain deformations\n\nIn this work we consider the problem of approximating the statistics of a given Quantity of Interest (QoI) that depends on the solution of a linear elliptic PDE defined over a random domain parameterized by N random variables. The elliptic problem is remapped on to a corresponding PDE with a fixed deterministic domain. We show that the solution can be analytically extended to a well defined region in CN with respect to the random variables. A sparse grid stochastic collocation method is then used to compute the mean and standard deviation of the QoI. Finally, convergence rates for the mean and variance of the QoI are derived and compared to those obtained in numerical experiments.\n\nYear:\n2013-12\nPublisher:\nÉcublens, MATHICSE\nKeywords:\nNote:\nMATHICSE Technical Report Nr. 45.2013 December 2013\nRelated to:\nLaboratories:"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8879891,"math_prob":0.8650516,"size":816,"snap":"2019-26-2019-30","text_gpt3_token_len":166,"char_repetition_ratio":0.10591133,"word_repetition_ratio":0.0,"special_character_ratio":0.1740196,"punctuation_ratio":0.05109489,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9763442,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-06-15T21:23:32Z\",\"WARC-Record-ID\":\"<urn:uuid:062fc1ac-a0ed-4682-9b22-73912d706e2f>\",\"Content-Length\":\"24593\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:98e4aa85-1f29-4c63-8acb-1fbaa2a1a366>\",\"WARC-Concurrent-To\":\"<urn:uuid:4dc2717b-2efc-4319-b98e-59951aa14fd8>\",\"WARC-IP-Address\":\"34.250.186.131\",\"WARC-Target-URI\":\"https://infoscience.epfl.ch/record/263217?ln=en\",\"WARC-Payload-Digest\":\"sha1:N7QKXVJPTHTA3TNRMGTJYN22YSFGPQHW\",\"WARC-Block-Digest\":\"sha1:4XYKYGOR7KHHXCJM45OIYEJHTDTIGO7L\",\"WARC-Identified-Payload-Type\":\"application/xhtml+xml\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-26/CC-MAIN-2019-26_segments_1560627997335.70_warc_CC-MAIN-20190615202724-20190615224534-00055.warc.gz\"}"} |
http://sliderules.org/react/pickett_n909_es.html | [
"Markings Front Back Both sides\nUse the following symbols when entering the values on the Conversion Scale:\nstm = Short to Metric Tons\nmts = Metric to Short Tons\nmk = Miles to Kilometres\nkm = Kilometres to Miles\nsmk = Square Miles to Square Kilometres\nskm = Square Kilometres to Square Miles\nmf = Metres to Feet\nfm = Feet to Metres\nsmf = Square Metres to Square Feet × 10\nsfm = Square Feet × 10 to Square Metres\ncmf = Cubic Metres to Cubic Feet × 10\ncfm = Cubic Feet × 10 to Cubic Metres\ngl = Gallons to Litres\nlg = Litres to Gallons\nlq = Litres to Quarts\nql = Quarts to Litres\nkp = Kilograms to Pounds\npk = Pounds to Kilograms\nog = Ounces to Grams × 10\ngo = Grams × 10 to Ounces\nmy = Metres to Yards\nym = Yards to Metres\nsmy = Square Metres to Square Yards\nsym = Square Yards to Square Metres\ncmy = Cubic Metres to Cubic Yards\ncym = Cubic Yards to Cubic Metres\nic = Inches to Centimetres\nci = Centimetres to Inches\nsic = Square Inches to Square Centimetres\nsci = Square Centimetres to Square Inches\ncic = Cubic Inches to Cubic Centimetres\ncci = Cubic Centimetres to Cubic Inches"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8467961,"math_prob":0.99995804,"size":1149,"snap":"2022-27-2022-33","text_gpt3_token_len":382,"char_repetition_ratio":0.27074236,"word_repetition_ratio":0.016,"special_character_ratio":0.2959095,"punctuation_ratio":0.010928961,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99726564,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-08-08T01:35:43Z\",\"WARC-Record-ID\":\"<urn:uuid:2d8e0fba-491c-4932-8166-33765cf01120>\",\"Content-Length\":\"26942\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5c6ec10c-d3cc-4e10-95aa-8089be9af547>\",\"WARC-Concurrent-To\":\"<urn:uuid:41ffb5fa-2f2b-489d-9e38-8a284b97e744>\",\"WARC-IP-Address\":\"103.64.148.113\",\"WARC-Target-URI\":\"http://sliderules.org/react/pickett_n909_es.html\",\"WARC-Payload-Digest\":\"sha1:RRC74HC4G6WZSTC2S6Z6NBA3WF7OTTSS\",\"WARC-Block-Digest\":\"sha1:N2BUMMFXDWJHLEDTU5KTA5M25MCJZ5EU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-33/CC-MAIN-2022-33_segments_1659882570741.21_warc_CC-MAIN-20220808001418-20220808031418-00045.warc.gz\"}"} |
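The scale markings in the row above are unit-conversion pairs, so they map naturally onto a lookup table of multiplicative factors. The sketch below implements a handful of them with standard conversion factors; the factors and names are mine, not values read off the slide rule itself.

```python
# A few of the N909 conversions as multiplicative factors (standard values;
# the slide rule scales themselves carry only 3-4 significant figures).
CONVERSION_FACTORS = {
    "ic": 2.54,            # inches      -> centimetres (exact)
    "ci": 1 / 2.54,        # centimetres -> inches
    "mk": 1.609344,        # miles       -> kilometres (exact)
    "km": 1 / 1.609344,    # kilometres  -> miles
    "mf": 1 / 0.3048,      # metres      -> feet
    "fm": 0.3048,          # feet        -> metres (exact)
    "kp": 1 / 0.45359237,  # kilograms   -> pounds
    "pk": 0.45359237,      # pounds      -> kilograms (exact)
}

def convert(symbol: str, value: float) -> float:
    """Apply one of the tabulated conversions to a value."""
    return value * CONVERSION_FACTORS[symbol]

print(convert("mk", 100))  # 100 miles -> 160.9344 km
print(convert("kp", 5))    # 5 kg      -> about 11.02 lb
```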
https://ogst.ifpenergiesnouvelles.fr/articles/ogst/ref/2020/01/ogst190329/ogst190329.html | [
"Open Access\n Issue Oil Gas Sci. Technol. – Rev. IFP Energies nouvelles Volume 75, 2020 26 13 https://doi.org/10.2516/ogst/2020014 01 May 2020\n• Albinali A., Holy R., Sarak H., Ozkan E. (2016) Modeling of 1D anomalous diffusion in fractured nanoporous media, Oil Gas Sci. Technol.- Rev. IFP Energies nouvelles 71, 56. [CrossRef] [Google Scholar]\n• Angulo J.M., Ruiz-Medina M.D., Anh V.V., Grecksch W. (2000) Fractional diffusion and fractional heat equation, Adv. Appl. Probab. 32, 4, 1077–1099. [Google Scholar]\n• Barenblatt G.I., Zheltov YuP, Kochina I.N. (1960) Basic concepts in the theory of seepage of homogeneous liquids in fissured rocks, J. Appl. Math. Mech. 24, 5, 1286–1303. [CrossRef] [Google Scholar]\n• Bense V., Person M. (2006) Faults as conduit-barrier system to fluid flow in silicilastic sedimentary aquifers, Water Resour. Res. 42, W05421. doi: 10.1029/2005WR004480. [Google Scholar]\n• Bernard S., Delay F., Porel G. (2006) A new method of data inversion for the identification of fractal characteristics and homogenization scale from hydraulic pumping tests in fractured aquifers, J. Hydrol. 328, 3, 647–658. doi: 10.1016/j.jhydrol.2006.01.008. [CrossRef] [Google Scholar]\n• Bixel H.C., Larkin B.K., Van Poollen H.K. (1963) Effect of linear discontinuities on pressure build-up and drawdown behavior, J. Pet. Tech. 15, 8, 885–895. doi: 10.2118/611-PA. [CrossRef] [Google Scholar]\n• Caine J.S., Evans J.P., Forster C.B. (1996) Fault zone architecture and permeability structure, Geology 24, 11, 1025–1028. [Google Scholar]\n• Caputo M. (1967) Linear models of dissipation whose Q is almost Frequency Independent-II, Geophys. J. R. Astron. Soc. 13, 5, 529–539. [NASA ADS] [CrossRef] [Google Scholar]\n• Caputo M. (1999) Diffusion of fluids in porous media with memory, Geothermics 28, 1, 113–130. [Google Scholar]\n• Carslaw H.S., Jaeger J.C. (1959) Conduction of heat in solids, 2nd edn., Vol. 22, Clarendon Press, Oxford, pp. 319–326, 327–352. [Google Scholar]\n• Chang J., Yortsos Y.C. (1990) Pressure-transient analysis of fractal reservoirs, SPE Form. Eval. 5, 1, 31–39. [CrossRef] [Google Scholar]\n• Childs E.C. (1969) An Introduction to the Physical Basis of Soil Water Phenomena, John Wiley and Sons Ltd, London, pp. 153–178. [Google Scholar]\n• Cinco-Ley H., Samaniego-V. F., Kuchuk F. (1985) The pressure transient behavior for naturally fractured reservoirs with multiple block size. In: Paper SPE 14168 Presented at SPE Annual Technical Conference and Exhibition, 22–26 September, Las Vegas, Nevada. [Google Scholar]\n• Cooper H.H., Jacob C.E. (1946) A generalized graphical method for evaluating formation constants and summarizing well-field history, Trans. AGU 27, 526–534. [CrossRef] [Google Scholar]\n• Cortis A., Knudby C. (2006) A continuous time random walk approach to transient flow in heterogeneous porous media, LBNL-59885, Water Resour. Res. 42, W10201. [Google Scholar]\n• Darcy H. (1856) Les Fontaines publiques de la ville de Dijon, Exposition et application des principes à suivre et des formules à employer dans les questions de distribution d’eau ; ouvrage terminé par un appendice relatif aux fournitures d’eau de plusieurs villes au filtrage des eaux et à la fabrication des tuyaux de fonte, de plomb, Victor Dalmont, Paris, pp. 590–594. [Google Scholar]\n• Dassas Y., Duby Y. (1995) Diffusion toward fractal interfaces, potentiostatic, galvanostatic, and linear sweep voltammetric techniques, J. Electrochem. Soc. 142, 12, 4175–4180. [Google Scholar]\n• de Swaan-O A. 
(1976) Analytical solutions for determining naturally fractured reservoir properties by Well testing, Soc. Pet. Eng. J. 16, 3, 117–122. doi: 10.2118/5346-PA. [CrossRef] [Google Scholar]\n• Erdélyi A., Magnus W., Oberhettinger F., Tricomi F.G. (1954) Tables of integral transforms, based, in part, on notes left by Harry Bateman and Compiled by the Staff of the Bateman Manuscript Project, vol. 1, McGraw-Hill, New York, 17. [Google Scholar]\n• Erdelyi A., Magnus W.F., Oberhettinger F., Tricomi F.G. (1955) Higher transcendental functions, vol. 3, McGraw-Hill, New York, pp. 206–227. [Google Scholar]\n• Evans J.P. (1988) Deformation mechanisms in granitic rocks at shallow crustal levels, J. Struct. Geol. 10, 5, 437–443. [Google Scholar]\n• Feller W. (1971) An introduction to probability theory and its applications. II, 2nd edn., Wiley, New York, pp. 8–10, 50. [Google Scholar]\n• Fomin S., Chugunov V., Hashida T. (2011) Mathematical modeling of anomalous diffusion in porous media, Fractional Differ. Calc. 1, 1–28. [Google Scholar]\n• Fu L., Milliken K.L., Sharp J.M. Jr. (1994) Porosity and permeability variations in fractured and liesegang-banded Breathitt sandstones (Middle Pennsylvanian), eastern Kentucky: Diagenetic controls and implications for modeling dual-porosity systems, J. Hydrol. 154, 1–4, 351–381. [CrossRef] [Google Scholar]\n• Gefen Y., Aharony A., Alexander S. (1983) Anomalous diffusion on percolating clusters, Phys. Rev. Lett. 50, 1, 77–80. doi: 10.1103/PhysRevLett.50.77. [Google Scholar]\n• Gradshteyn I.S., Ryzhik I.M. (1994) Table of integrals, series and products, 5th edn., Vol. 532, Academic Press Inc., Orlando, pp. 3.961–3.962. [Google Scholar]\n• Grebenkov D.B. (2010) Subdiffusion in a bounded domain with a partially absorbing-reflecting boundary, Phys. Rev. E: Stat. Phys. Plasmas Fluids 81, 1, 021128. [CrossRef] [Google Scholar]\n• Gurtin M.E., Pipkin A.C. (1968) A general theory of heat conduction with finite wave speeds, Arch. Rational Mech. Anal. 31, 2, 113–126. doi: 10.1007/BF00281373. [CrossRef] [MathSciNet] [Google Scholar]\n• Hänninen J.J., Pirjola R.J., Lindell I.V. (2002) Application of the exact image theory to studies of ground effects of space weather, Geophys. J. Int. 151, 2, 534–542. [Google Scholar]\n• Henry B.I., Langlands T.A.M., Straka P. (2010) An introduction to fractional diffusion, in: Presented at the Conference: Complex Physical, Biophysical and Econophysical Systems – Proceedings of the 22nd Canberra International Physics Summer School, pp. 37–89. [Google Scholar]\n• Hilfer R., Anton L. (1995) Fractional master equations and fractal time random walks, Phys. Rev. E 51, 2, R848–R851. [Google Scholar]\n• Jourde H., Pistrea S., Perrochet P., Droguea C. (2002) Origin of fractional flow dimension to a partially penetrating well in stratified fractured reservoirs, new results based on the study of synthetic fracture networks, Adv. Water Res. 25, 4, 371–387. [CrossRef] [Google Scholar]\n• Kenkre V.M., Montroll E.W., Shlesinger M.F. (1973) Generalized master equations for continuous-time random walks, J Stat Phys 9, 1, 45–50. [Google Scholar]\n• Kim S., Kavvas M.L., Ercan A. (2015) Fractional ensemble average governing equations of transport by time-space nonstationary stochastic fractional advective velocity and fractional dispersion, II: Numerical investigation, J. Hydrol. Eng. 20, 2, 04014040. [Google Scholar]\n• Kosztoowicz K. (2017) Subdiffusion in a system consisting of two different media separated by a thin membrane, Int. J. 
Heat Mass Transfer 111, 1322–1333. [CrossRef] [Google Scholar]\n• Lẽ Mehautẽ A., Crepy G. (1983) Introduction to transfer and motion in fractal media: The geometry of kinetics, Solid State Ion. 1, 9–10, 17–30. [Google Scholar]\n• Lindell I., Alanen E. (1984) Exact image theory for the Sommerfeld half-space problem, part III: General formulation, IEEE Trans. Antennas Propag. 32, 10, 1027–1032. doi: 10.1109/TAP.1984.1143204. [Google Scholar]\n• Lindell I., Alanen E., von Bagh H. (1986) Exact image theory for the calculation of fields transmitted through a planar interface of two media, IEEE Trans. Antennas Propag. 34, 2, 29–137. doi: 10.1109/TAP.1986.1143788. [Google Scholar]\n• Lindquist E. (1933) On the flow of water through porous soil, 1st Congres des Grands Barrerges, Stockholm, 5, 91–101. [Google Scholar]\n• Magin R.L., Ingo C., Colon-Perez L., Triplett W., Mareci T.H. (2013) Characterization of anomalous diffusion in porous biological tissues using fractional order derivatives and entropy, Microporous Mesoporous Mater. 178, 15, 39–43. doi: 10.1016/j.micromeso.2013.02.054. [CrossRef] [PubMed] [Google Scholar]\n• Mainardi F. (2010) Fractional calculus and waves in linear viscoelasticity, Imperial College Press, London, pp. 211–236. [Google Scholar]\n• Mainardi M., Pagnini G., Saxena R.K. (2005) Fox H functions in fractional diffusion, J. Comput. Appl. Math. 178, 1–2, 321–331. [Google Scholar]\n• Mandelis A., Nicolaides L., Chen Y. (2001) Structure and the reflectionless/refractionless nature of parabolic diffusion-wave fields, Phys. Rev. Lett. 87, 2, 020801. doi: 10.1103/PhysRevLett.87.020801. [Google Scholar]\n• Mathai A.M., Saxena R.K. (1978) The H-function with applications in statistics and other disciplines, Wiley, New Delhi, India, pp. 1–19. [Google Scholar]\n• Metzler R., Glockle W.G., Nonnenmacher T.F. (1994) Fractional model equation for anomalous diffusion, Physica A 211, 1, 13–24. [Google Scholar]\n• Miller B. (2005) The Baton Rouge Fault: Conduit or impediment to groundwater flow? in: Paper Presented at 54th Annual Meeting Southeast, Sect. Geol. Soc. Am., Biloxi Miss. [Google Scholar]\n• Miller F.G. (1954) Multiphase flow theory and the problem of spacing oil wells, United States Bureau of Mines 529, 8–10. [Google Scholar]\n• Mitchell T.M., Faulkner D.R. (2009) The nature and origin of off-fault damage surrounding strike-slip fault zones with a wide range of displacements: A field study from the Atacama fault system, northern Chile, J. Struct. Geol. 31, 8, 802–816. doi: 10.1016/j.jsg.2009.05.002. [Google Scholar]\n• Molz F.J. III, Fix G.J. III, Lu S.S. (2002) A physical interpretation for the fractional derivative in Levy diffusion, Appl. Math. Lett. 15, 7, 907–911. [Google Scholar]\n• Montroll E.W., Weiss G.H. (1965) Random walks on lattices II, J. Math. Phys. 6, 167–181. [Google Scholar]\n• Moodie T.B., Tait R. (1983) On thermal transients with finite wave speeds, J. Acta Mech. 50, 1–2, 97–104. doi: 10.1007/BF01170443. [CrossRef] [Google Scholar]\n• Neville I.R., Sharp J.M. Jr., Kreisel I. (1998) Contaminant transport in sets of parallel finite fractures with fracture skins, J. Contam. Hydrol. 31, 1–2, 83–109. [Google Scholar]\n• Nigmatullin R. (1984a) On the theory of relaxation with “remnant” memory, Phys. Stat. Sol. B 124, 1, 389–393. doi: 10.1002/pssb.2221240142. [CrossRef] [Google Scholar]\n• Nigmatullin R.R. (1984b) To the theoretical explanation of the universal response, Phys. Status Solidi B Basic Res. 123, 2, 739–745. 
[CrossRef] [Google Scholar]\n• Noetinger B., Estebenet T. (2000) Up-scaling of double porosity fractured media using continuous-time random walks methods, Transp. Porous Med. 39, 3, 315–337. [CrossRef] [Google Scholar]\n• Norwood F.R. (1972) Transient thermal waves in the general theory of heat conduction with finite wave speeds, ASME J. Appl. Mech., 39, 3, 673–676. doi: 10.1115/1.3422771. [CrossRef] [Google Scholar]\n• Oberhettinger F., Badii L. (1973) Tables of Laplace transforms, Springer Verlag, Berlin, p. 268. [Google Scholar]\n• Odling N.E., Gillespie P., Bourgine B., Castaing C., Chiles J.P., Christiansen N.P., Fillion E., Albert G., Olsen C., Lena T., Trice R., Aarseth E.S., Walsh J.J., Watterson J. (1999) Variations in fracture system geometry and their implications for fluid flow in fractured hydrocarbon reservoirs, Petrol. Geosci. 5, 373–384. [CrossRef] [Google Scholar]\n• O’Shaughnessy B., Procaccia I. (1985a) Analytical solutions for diffusion on fractal objects, Phys. Rev. Lett. 54, 5, 455–458. [Google Scholar]\n• O’Shaughnessy B., Procaccia I. (1985b) Diffusion on fractals, Phys. Rev. A: At. Mol. Opt. Phys 32, 5, 3073–3083. [CrossRef] [MathSciNet] [PubMed] [Google Scholar]\n• Philip J.R. (1957) Transient fluid motions in saturated porous media, Aust. J. Phys. 10, 43–53. [CrossRef] [Google Scholar]\n• Płociniczak Ł. (2015) Analytical studies of a time-fractional porous medium equation. Derivation, approximation and applications, Commun. Nonlinear Sci. Numer. Simul. 24, 1–3, 169–183. [Google Scholar]\n• Povstenko Y. (2015) Linear fractional diffusion-wave equation for scientists and engineers, Birkhäuser 24–30, 32, 34. [Google Scholar]\n• Povstenko Y.Z. (2013) Fractional heat conduction in infinite one-dimensional composite medium, J. Therm. Stresses 36, 4, 351–363. doi: 10.1080/01495739.2013.770693. [CrossRef] [Google Scholar]\n• Povstenko Y., Kyrylych T. (2019) Fractional heat conduction in solids connected by thin intermediate layer: Nonperfect thermal contact, Continuum Mech. Thermodyn. 31, 1719–1731, doi: 10.1007/s00161-019-00750-w. [CrossRef] [Google Scholar]\n• Prats M., Raghavan R. (2013) Finite horizontal well in a uniform-thickness reservoir crossing a natural fracture, SPE J. 18, 5, 982–992. doi: 10.2118/163098-PA. [CrossRef] [Google Scholar]\n• Prats M., Raghavan R. (2014) Properties of a natural fracture and its skins from reservoir well tests, SPE J. 19, 3, 390–397. doi: 10.2118/167262-PA. [CrossRef] [Google Scholar]\n• Raghavan R. (1993) Well test analysis, Prentice Hall, Englewoods Cliffs, NJ, pp. 6–8, 13. [Google Scholar]\n• Raghavan R. (2004) A review of applications to constrain pump-test responses to improve on geological description and uncertainty, Rev. Geophys. 42, 1–29. [Google Scholar]\n• Raghavan R. (2010) A composite system with a planar interface, J. Pet. Sci. Eng. 70, 3–4, 229–234. doi: 10.1016/j.petrol.2009.11.015. [Google Scholar]\n• Raghavan R. (2011) Fractional derivatives: Application to transient flow, J. Pet. Sci. Eng. 80, 1, 7–13. doi: 10.1016/j.petrol.2011.10.003. [Google Scholar]\n• Raghavan R. (2012) A horizontal well in a composite system with planar interfaces, Adv. Water Res. 38, 38–47. doi: 10.1016/j.advwatres.2011.12.009. [CrossRef] [Google Scholar]\n• Raghavan R., Chen C. (2017) Addressing the influence of a heterogeneous matrix on well performance in fractured rocks, Transp. Porous Med. 117, 1, 69–102. doi: 10.1007/s11242-017-0820-5. [CrossRef] [Google Scholar]\n• Raghavan R., Ozkan E. 
(1994) A method for computing unsteady flows in porous media, Pitman Research Notes in Mathematics Series (318), Longman Scientific & Technical, Harlow, UK, p. 188. [Google Scholar]\n• Raghavan R., Ozkan E. (2011) Flow in composite slabs. SPE Journal 16, 2, 374–387. doi: 10.2118/140748-PA. [CrossRef] [Google Scholar]\n• Raghavan R., Chen C. (2018) Time and space fractional diffusion in finite systems, Transp. Porous Med. 123, 173–193. [CrossRef] [Google Scholar]\n• Raghavan R., Chen C. (2019) The Theis solution for subdiffusive flow in rocks, Oil Gas Sci. Technol. - Rev. IFP Energies nouvelles 74, 6. doi: 10.2516/ogst/2018081. [CrossRef] [Google Scholar]\n• Raghavan R., Dixon T.N., Phan V.Q., Robinson S.W. (2001) Integration of geology, geophysics, and numerical simulation in the interpretation of a well test in a fluvial reservoir, SPE Reserv. Evalu. Eng. 4, 3, 201–208. doi: 10.2118/72097-PA. [CrossRef] [Google Scholar]\n• Savage H.M., Brodsky E.E. (2011) Collateral damage: Evolution with displacement of fracture distribution and secondary fault strands in fault damage zones, J. Geophys. Res. 116, B03405. doi: 10.1029/2010JB007665. [Google Scholar]\n• Saxena R.K., Mathai A.M., Haubold H.J. (2006) Fractional reaction-diffusion equations, Astrophys. Space Sci. 305, 3, 289–296. [Google Scholar]\n• Schneider W.R., Wyss W. (1989) Fractional diffusion and wave equations, J. Math. Phys. 30, 134. doi: 10.1063/1.528578. [Google Scholar]\n• Scholz C.H., Dawers N.H., Yu J.Z., Anders M.H., Cowie P.A. (1993) Fault growth and fault scaling laws: Preliminary results, J. Geophys. Res. 98, B12, 21951–21961. [Google Scholar]\n• Sharp J.M. Jr., Kreisel I., Milliken K.L., Mace R.E., Robinson N.I. (1996) Fracture skin properties and effects on solute transport: Geotechnical and environmental implications, in: M. Aubertin, F. Hassam, H. Mitri (eds), Rock Mechanics, Tools and Techniques, Balkema, Rotterdam, pp. 1329–1335. [Google Scholar]\n• Shendeleva M.L. (2004) Instantaneous line heat source near a plane interface, J. Appl. Phys. 95, 5, 2839–2845. [Google Scholar]\n• Sommerfeld A. (1909) Uber die Ausbreitung der Wellen in der drahtlosen Telegraphie, Ann. Phys. 28, 665–736. [Google Scholar]\n• Stehfest H. (1970a) Algorithm 368: Numerical inversion of Laplace transforms [D5], Commun. ACM 13, 1, 47–49. [Google Scholar]\n• Stehfest H. (1970b) Remark on algorithm 368: Numerical inversion of Laplace transforms, Commun. ACM 13, 10, 624. [Google Scholar]\n• Su N., Nelson P.N., Connor S. (2015) The distributed-order fractional diffusion-wave equation of groundwater flow: Theory and application to pumping and slug tests, J. Hydrol., 529, 1262–1273. doi: 10.1016/j.jhydrol.2015.09.033. [CrossRef] [Google Scholar]\n• Su N. (2014) Mass-time and space-time fractional partial differential equations of water movement in soils: Theoretical framework and application to infiltration, J. Hydrol. 519 (B), 1792–1803. doi: 10.1016/j.jhydrol.2014.09.021 [CrossRef] [Google Scholar]\n• Suzuki A.T., Hashida K.L., Horne R.N. (2016) Experimental tests of truncated diffusion in fault damage zones, Water Resour. Res. 52, 8578–8589. doi: 10.1002/2016WR019017. [Google Scholar]\n• Tao S., Gao X., Li C., Zeng J., Zhang X., Yang C., Zhang J., Gong Y. (2016) The experimental modeling of gas percolation mechanisms in a coal-measure tight sandstone reservoir: A case study on the coal-measure tight sandstone gas in the Upper Triassic Xu-Jiahe Formation, Sichuan Basin, China, J. Nat. Gas Geosci. 1, 6, 445–455. doi: 10.1016/j.jnggs.2016.11.009. 
[CrossRef] [Google Scholar]\n• Theis C.V. (1935) The relationship between the lowering of the piezometric surface and the rate and duration of discharge of a well using ground-water storage, Eos Trans. AGU 2, 519–524. [CrossRef] [Google Scholar]\n• Tualle J.-M., Tinet E., Prat J., Avrillier S. (2000) Light propagation near turbid-turbid planar interfaces, Opt. Commun. 183, 5–6, 337–346. [Google Scholar]\n• Uchaikin V.V. (2013) Fractional derivatives for physicists and engineers. Volume I: Background and Theory, Springer, New York, 151, 296. [Google Scholar]\n• Yanga S., Zhoua H.W., Zhang S.Q., Ren W.G. (2019) A fractional derivative perspective on transient pulse test for determining the permeability of rocks, Int. J. Rock Mech. Min. Sci. 113, 92–98. [CrossRef] [Google Scholar]\n• Yaxley L.M. (1987) Effect of a partially communicating fault on transient pressure behavior, SPE Form. Eval. 2, 4, 590–598. doi: 10.2118/14311-PA. [CrossRef] [Google Scholar]\n• Zhokh A., Strizhak P. (2018) Non-Fickian transport in porous media: Always temporally anomalous? Transp. Porous Med. 124, 2, 309–323. doi: 10.1007/s11242-018-1066-6. [CrossRef] [Google Scholar]\n• Zhokh A., Strizhak P. (2019) Investigation of the anomalous diffusion in the porous media: A spatiotemporal scaling, Heat Mass Transfer. 55, 1–10. doi: 10.1007/s00231-019-02602-4. [CrossRef] [Google Scholar]"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.56627333,"math_prob":0.66484207,"size":19387,"snap":"2021-21-2021-25","text_gpt3_token_len":6157,"char_repetition_ratio":0.19176598,"word_repetition_ratio":0.02606212,"special_character_ratio":0.3426523,"punctuation_ratio":0.29247016,"nsfw_num_words":1,"has_unicode_error":false,"math_prob_llama3":0.9578014,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-06-22T01:36:08Z\",\"WARC-Record-ID\":\"<urn:uuid:128ce0b9-8bea-435e-a797-d2b04563746f>\",\"Content-Length\":\"152169\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:5e92726c-3b1a-4961-9a7d-0404cbd196b6>\",\"WARC-Concurrent-To\":\"<urn:uuid:de10947b-33ef-46fb-9341-a8a5505cbb0a>\",\"WARC-IP-Address\":\"167.114.155.65\",\"WARC-Target-URI\":\"https://ogst.ifpenergiesnouvelles.fr/articles/ogst/ref/2020/01/ogst190329/ogst190329.html\",\"WARC-Payload-Digest\":\"sha1:OWNHALY3KXJULWEMKCRZTY42PIDUZJSQ\",\"WARC-Block-Digest\":\"sha1:HWZJ2W5XRNB4SOZV4BY7KF5JED66GYKW\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-25/CC-MAIN-2021-25_segments_1623488504969.64_warc_CC-MAIN-20210622002655-20210622032655-00599.warc.gz\"}"} |
https://visualfractions.com/calculator/prime-factors/prime-factors-of-1452/ | [
"# Prime Factors of 1452\n\nLooking to get a list of the prime factors of 1452? In this article we'll give you all of the information you need, including the definition of the prime factors of 1452 and how to calculate the prime factors of 1452 (also known as the prime factorization of 1452). As a bonus, we'll also list out the prime factor tree of 1452, the product of prime factors of 1452, and tell you how many prime factors 1452 has.\n\n## Prime Factors of 1452 Definition\n\nEvery whole number greater than 1 can be represented as a product of prime numbers. So when we talk about the prime factorization of 1452, we're talking about the building blocks of the number. A prime factor is a factor that is itself prime: a positive integer greater than 1 that can only be divided evenly by 1 and itself. The prime factors of 1452 are all of the prime numbers in it that, when multiplied together, will equal 1452.\n\nLet's look at how to find all of the prime factors of 1452 and list them out.\n\n## How to Find the Prime Factors of 1452\n\nYou'll often see the process of finding prime factors of 1452 referred to as prime factorization. To get the prime factors of 1452 we need to divide 1452 by the smallest prime number possible. You then repeat the same process by taking the result and dividing that number by the smallest prime number. Eventually, you end up with the number 1.\n\nThis process creates something called a prime factor tree of 1452. The prime numbers used in this tree are the prime factors of 1452. Let's look at the prime factor tree for 1452:\n\n• 1452 ÷ 2 = 726\n• 726 ÷ 2 = 363\n• 363 ÷ 3 = 121\n• 121 ÷ 11 = 11\n• 11 ÷ 11 = 1\n\nPut simply, all of the prime numbers that you used to divide above are the prime factors of 1452 as well. So what we are left with is the answer to your search, the prime factors of 1452:\n\n2, 2, 3, 11, and 11\n\n## How Many Prime Factors of 1452 Are There?\n\nIf we count up all of the prime factors of 1452 used in the prime factor tree above, we can see that 1452 has a total of 5 prime factors.\n\n## Product of Prime Factors of 1452\n\nThe prime factors shown above (2, 2, 3, 11, and 11) are completely unique to 1452. When we multiply all of them together the result will be 1452, and this is what we call the product of prime factors of 1452. The product of prime factors of 1452 is listed below:\n\n2 x 2 x 3 x 11 x 11 = 1452\n\nSo there you have it. A complete guide to the factors of 1452. You should now have the knowledge and skills to go out and calculate your own factors and factor pairs for any number you like.\n\nFeel free to try the calculator below to check another number or, if you're feeling fancy, grab a pencil and paper and try and do it by hand. Just make sure to pick small numbers!"
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.9120745,"math_prob":0.9649904,"size":3719,"snap":"2022-05-2022-21","text_gpt3_token_len":963,"char_repetition_ratio":0.25545087,"word_repetition_ratio":0.032357473,"special_character_ratio":0.27964506,"punctuation_ratio":0.104221635,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9973397,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2022-05-26T12:22:34Z\",\"WARC-Record-ID\":\"<urn:uuid:eeccbefa-9692-4cec-af5c-87ff2c661567>\",\"Content-Length\":\"22791\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:4b472fbb-4cf1-44de-bcd2-db5bd43fc714>\",\"WARC-Concurrent-To\":\"<urn:uuid:3ac0c673-5f1c-4759-ab41-2a198a4c952a>\",\"WARC-IP-Address\":\"172.67.147.3\",\"WARC-Target-URI\":\"https://visualfractions.com/calculator/prime-factors/prime-factors-of-1452/\",\"WARC-Payload-Digest\":\"sha1:RNHJ4POJP3R5JVOBVUDAFIR27EITLMWG\",\"WARC-Block-Digest\":\"sha1:XVQHU2QJJK7NMF5CDDEVNTN22ECKOFWU\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2022/CC-MAIN-2022-21/CC-MAIN-2022-21_segments_1652662604794.68_warc_CC-MAIN-20220526100301-20220526130301-00027.warc.gz\"}"} |
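The repeated-division walkthrough in the article above is plain trial division. A compact version of the same procedure (the function name is mine):

```python
def prime_factors(n: int) -> list[int]:
    """Trial division: keep dividing by the smallest factor that still fits."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:            # whatever remains is itself prime
        factors.append(n)
    return factors

assert prime_factors(1452) == [2, 2, 3, 11, 11]   # matches the factor tree above
assert 2 * 2 * 3 * 11 * 11 == 1452                # product of the prime factors
```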
https://www.geogebra.org/m/t9nemfrc | [
"# FINDING THE SURFACE AREA OF THE CYLINDER - Frab\n\n## Instructions\n\nIntroduction\n\nFinding the total area of the surface of a 3-D shape like a cylinder can be very useful. It can answer questions like, “How much paint do I need to cover the surface?” or “How much wrapping paper do I need to wrap this container?” There are formulas that you can memorize and use when needed. However, it is easy to forget a formula that you only use once in a great while. Is there another way to solve a surface area problem without memorizing the formula? In this GeoGebra lab you will explore another method for finding surface area.\n\nStep 1. Create and Experiment with a Net\n\nUse the MOVE tool",
null,
"to move the Unfold / Fold Net slider. This will show how the cylinder can be turned into a net. A net is created by imagining that you cut and unfold the 3-D shape into one flat shape. Using the same slider, the net can be folded back into the original 3-D shape.\n\nStep 2. Find the Surface Area\n\nSuppose that you are given the height (h) of the cylinder and the radius (r) of the top and bottom of the cylinder. To keep the calculation simple, let's say that the value of r (the radius of the circle) is 1 unit and the value of height (h) is 2 units. Once the cylinder is transformed into a net, the problem of finding the surface area is now a problem of finding the area of two shapes. Which two shapes? Before continuing, review how you calculate (a) the area of a rectangle, (b) the area of a circle, and (c) the perimeter (circumference) of a circle. You have enough information to find the area of the two circles, but what about the blue rectangle? You know the height (h), but what about the width? You need that information to find the area of the rectangle. Use the Rotate the Top Circle slider to give you a clue to the width of the blue rectangle.\n\nHint: Watch the dot on the rotating circle. How many times does the circle rotate in going from one side of the rectangle to the other side?\n\nNow you have enough information to find the area of each piece of the net. Describe your method for finding the total surface area.\n\nConclusion\n\nFor a cylinder, the “official” formula is: surface area = 2 π r² + 2 π r h. But if you forget the formula, you can always use reasoning and a net to help you compute the surface area."
] | [
null,
"https://www.geogebra.org/images/ggb/toolbar/mode_move.png",
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8979171,"math_prob":0.9128931,"size":2294,"snap":"2021-21-2021-25","text_gpt3_token_len":537,"char_repetition_ratio":0.14454149,"word_repetition_ratio":0.027777778,"special_character_ratio":0.22711422,"punctuation_ratio":0.08768267,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.9958577,"pos_list":[0,1,2],"im_url_duplicate_count":[null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2021-05-10T00:54:09Z\",\"WARC-Record-ID\":\"<urn:uuid:f673b21a-a6c4-4603-815c-a08826654269>\",\"Content-Length\":\"46553\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:b0130f2d-9763-48d3-8a9b-ba8cad7e90c7>\",\"WARC-Concurrent-To\":\"<urn:uuid:c43730a7-ab5c-4c1b-947d-0af89e550037>\",\"WARC-IP-Address\":\"99.86.230.117\",\"WARC-Target-URI\":\"https://www.geogebra.org/m/t9nemfrc\",\"WARC-Payload-Digest\":\"sha1:7XPFWRDIMTEXRACP4RRJC5FAFJHCWZYU\",\"WARC-Block-Digest\":\"sha1:LZBZFI4TAIXAVJQP55MNU5Z2WJRHIV2L\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2021/CC-MAIN-2021-21/CC-MAIN-2021-21_segments_1620243989030.65_warc_CC-MAIN-20210510003422-20210510033422-00056.warc.gz\"}"} |
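The net argument in the GeoGebra activity reduces to this: the unrolled side of the cylinder is a rectangle whose width is the circle's circumference. A short check of the worked case r = 1, h = 2 against the quoted formula (the function name is mine):

```python
import math

def cylinder_surface_area(r: float, h: float) -> float:
    """Two circular ends plus the unrolled side, a (2*pi*r) by h rectangle."""
    ends = 2 * math.pi * r ** 2        # the two circles
    side = (2 * math.pi * r) * h       # rectangle width = circumference
    return ends + side

print(cylinder_surface_area(1, 2))               # 6*pi ≈ 18.85 square units
print(2 * math.pi * 1**2 + 2 * math.pi * 1 * 2)  # same value from the formula
```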
https://root-forum.cern.ch/t/fit-with-composite-models-and-get-functions-of-each-composite/52795 | [
"# Fit with composite models and get functions of each composite\n\nI do the fitting with composite models, the Xe127 and Xe133 backgrounds, as shown in the plots. I hope to get the function of my model, so I use `TF1* func = model.asTF(RooArgList(mes), RooArgList(nbkg_xe127, nbkg_xe133));`. But the result seems strange: `func->Eval(90)` returns `72.9776`, which is not consistent with the plot.\n\nThe main question is how I can transform the fitted RooFit pdf into a `TF1` for each background rather than only the model with all backgrounds as a `TF1`.\n\ntest_asTF.C (2.4 KB)\nallBKG_twoGaus_lowXe133_set45.root (10.1 KB)\n\nThe data file is missing:\n\n``````Processing test_asTF.C...\n\n``````\n\n@moneta might help.\n\nHi, @moneta. Can you give some advice? Thanks!\n\nHi,\n\nIf you want to plot your function representing the model on top of the data you need to re-scale it to the data normalisation (i.e. number of data points * bin width).\nYou can do this using the `asTF` function, but as follows, by first getting the normalised pdf as a TF1 and then rescaling it:\n\n``````TF1* normFunc = model.asTF(RooArgList(mes), RooArgList(nbkg_xe127, nbkg_xe133), RooArgList(mes));\nTF1* func = new TF1(\"plotFunc\", [&](double *x, double *p) { return p[0] * normFunc->EvalPar(x); }, E_min, E_max, 1);\nfunc->SetParameter(0, data->sumEntries() * mes.getBinWidth(1));\n``````\n\nAnd you can do the same for the two components of your model (the signal pdf and the background pdf).\nI think we should probably add an option in `RooAbsReal::asTF` to return directly a TF1 with a normalisation parameter.\n\nBest,\n\nLorenzo\n\nBut I still have three questions.\n\n1. Can you explain the normalisation method `number of data points * bin width`?\n2. How can I draw the function `func`? I tried `func->Draw(\"\");` but it didn't work.\n3. I tried to get the function of one background, e.g. Xe127, but it didn't work. The code is shown as follows:\n``````TF1* normFunc_127 = pdf_Xe127.asTF(RooArgList(mes), RooArgList(nbkg_xe127), RooArgList(mes));\nTF1* func_127 = new TF1(\"plotFunc\", [&](double *x, double *p) { return p[0] * normFunc_127->EvalPar(x); }, E_min, E_max, 1);\nfunc_127->SetParameter(0, 30 * mes.getBinWidth(1)); // 30 is the fitted number of Xe127 background events.\n``````\n\nThanks!\n\nHi,\n\n1. RooFit provides you a probability density function. If you want the expected yield in each bin (i.e. the expected number of events), this is equal to the probability for each bin, i.e. the integral of the pdf in the bin, which can be approximated by the pdf at the bin centre times the bin width.\nAssuming the total number of expected events is equal to the observed one, we then multiply this probability by the total number of events to get the number for each bin.\n2. It is possible that when you exit the macro the pdf used to create the TF1 is deleted. Use `func->DrawClone(\"\")`.\n3. For the same reason as above, use `func_127->DrawClone(\"\")`.\n\nBest,\n\nLorenzo\n\nBut as for the third question, I still can't get the component's function.
For example, while trying to get the function of the `Xe127` part (lines 86, 87, 88), I got the following error:\n\n``````[#0] ERROR:InputArguments -- RooAbsReal::functor(pdf_Xe127) ERROR: one or more specified parameters are not variables of this p.d.f\n``````\n\nI still don't know how to do this part clearly.\n\nAnother question is that `func->Integral(20,145)` (line 74) gives the following error:\n\n``````Error in <GSLError>: Error 11 in qags.c at 543 : number of iterations was insufficient\nWarning in <TF1::IntegralOneDim>: Error found in integrating function plotFunc in [20.000000,145.000000] using AdaptiveSingular. Result = 2500.000495 +/- 0.000688 - status = 11\n``````\n\nEven though I can get the expected value, this error seems strange to me.\n\nMy updated code is in the attachment. Hope for your reply again.\nThanks!\n\ntest_asTF_v2.C (3.2 KB)\n\nHi,\n\nWhen creating the TF1 for `pdf_Xe127`, which is a RooHistPdf, you should not provide as second argument the parameter `nbkg_xe127`, because it is not a variable of that pdf. You are passing the parameter later when you define its normalisation by creating `func_127`.\nJust do:\n\n``````TF1* normFunc_127 = pdf_Xe127.asTF(RooArgList(mes), RooArgList(), RooArgList(mes));\n``````\n\nThe second error is maybe related to an ill-defined function. If not, you can also try to use a different integration algorithm for TF1 by doing, before calling `func->Integral`:\n\n``````ROOT::Math::IntegratorOneDimOptions::SetDefaultIntegrator(\"GAUSS\");\n``````\n\nor if that one does not work, you can also try `\"GAUSSLEGENDRE\"` or `\"NONADAPTIVE\"`.\n\nLorenzo\n\nHi, @moneta,\nI'm very grateful for your reply; all my problems have been solved."
] | [
null
] | {"ft_lang_label":"__label__en","ft_lang_prob":0.8408419,"math_prob":0.9462956,"size":2780,"snap":"2022-40-2023-06","text_gpt3_token_len":688,"char_repetition_ratio":0.10878962,"word_repetition_ratio":0.0043290043,"special_character_ratio":0.25503597,"punctuation_ratio":0.13217391,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.98077714,"pos_list":[0],"im_url_duplicate_count":[null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2023-01-30T07:16:11Z\",\"WARC-Record-ID\":\"<urn:uuid:97d41b87-0e94-45d7-8734-c368494eb030>\",\"Content-Length\":\"46612\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:bcea2a7d-cf3e-4acb-bfb0-e9fd105326cc>\",\"WARC-Concurrent-To\":\"<urn:uuid:f4ca8a72-73f7-4ee8-b80b-28ee2f289447>\",\"WARC-IP-Address\":\"137.138.157.187\",\"WARC-Target-URI\":\"https://root-forum.cern.ch/t/fit-with-composite-models-and-get-functions-of-each-composite/52795\",\"WARC-Payload-Digest\":\"sha1:JGOCVHOIAPAFD7BMUCYFXP6SYAVSNQRY\",\"WARC-Block-Digest\":\"sha1:UEBNUCOD2U4BOFV2NF7FGWIFFHRRFMT5\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2023/CC-MAIN-2023-06/CC-MAIN-2023-06_segments_1674764499804.60_warc_CC-MAIN-20230130070411-20230130100411-00538.warc.gz\"}"} |
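The normalisation rule agreed on in the thread above (expected bin content ≈ pdf at the bin centre × number of events × bin width) does not depend on ROOT. Below is a purely numerical illustration with a toy Gaussian in NumPy; it is not RooFit code and all names are made up.

```python
# Numeric check of the scaling used in the thread:
# expected bin content ≈ pdf(bin centre) * N_events * bin_width.
import numpy as np

rng = np.random.default_rng(0)
n_events, n_bins = 10_000, 50
data = rng.normal(loc=100.0, scale=5.0, size=n_events)

counts, edges = np.histogram(data, bins=n_bins, range=(80.0, 120.0))
centres = 0.5 * (edges[:-1] + edges[1:])
bin_width = edges[1] - edges[0]

# Unit-normalised pdf of the generating Gaussian, then the same rescaling
# that SetParameter(0, n_events * bin_width) applies to the TF1.
pdf = np.exp(-0.5 * ((centres - 100.0) / 5.0) ** 2) / (5.0 * np.sqrt(2.0 * np.pi))
expected = pdf * n_events * bin_width

print(counts[n_bins // 2], expected[n_bins // 2])  # observed vs expected counts near the peak
```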
http://www.0319ttt.com/g_TJ.htm | [
"•",
null,
"•",
null,
"•",
null,
"",
null,
"Add a business",
null,
"Add a business or individual"
] | [
null,
"http://www.0319ttt.com/material/theme/chacha/cms/v2/images/leftnav/bg_xcx.png-t=2.png",
null,
"http://www.0319ttt.com/material/theme/chacha/cms/v2/images/leftnav/bg_wx.png-t=2.png",
null,
"http://www.0319ttt.com/material/theme/chacha/cms/v2/images/leftnav/bg_app.png-t=3.png",
null,
"http://www.0319ttt.com/material/theme/chacha/cms/v2/images/leftnav/icon-add.png",
null,
"http://www.0319ttt.com/material/theme/chacha/cms/v2/images/leftnav/icon-add.png",
null
] | {"ft_lang_label":"__label__zh","ft_lang_prob":0.9719723,"math_prob":0.9917199,"size":134,"snap":"2019-26-2019-30","text_gpt3_token_len":127,"char_repetition_ratio":0.0,"word_repetition_ratio":0.0,"special_character_ratio":0.13432837,"punctuation_ratio":0.2631579,"nsfw_num_words":0,"has_unicode_error":false,"math_prob_llama3":0.99244916,"pos_list":[0,1,2,3,4,5,6,7,8,9,10],"im_url_duplicate_count":[null,null,null,null,null,null,null,null,null,null,null],"WARC_HEADER":"{\"WARC-Type\":\"response\",\"WARC-Date\":\"2019-07-18T02:43:53Z\",\"WARC-Record-ID\":\"<urn:uuid:29478bc6-f39f-4954-aaf4-ac5f5d9845df>\",\"Content-Length\":\"142746\",\"Content-Type\":\"application/http; msgtype=response\",\"WARC-Warcinfo-ID\":\"<urn:uuid:25275280-ab0f-46ae-a696-6627a5f6b22c>\",\"WARC-Concurrent-To\":\"<urn:uuid:8ed9f288-2031-4894-9877-027fc1b0ff48>\",\"WARC-IP-Address\":\"154.221.191.150\",\"WARC-Target-URI\":\"http://www.0319ttt.com/g_TJ.htm\",\"WARC-Payload-Digest\":\"sha1:3LEQOWYJWGZX36ZOYU7RI3ZPIQTSYD6L\",\"WARC-Block-Digest\":\"sha1:FLDNJWZA5GFEG5SHJ72HJUXLCLDQHUOS\",\"WARC-Identified-Payload-Type\":\"text/html\",\"warc_filename\":\"/cc_download/warc_2019/CC-MAIN-2019-30/CC-MAIN-2019-30_segments_1563195525483.64_warc_CC-MAIN-20190718022001-20190718044001-00444.warc.gz\"}"} |
https://www.indianacademics.com/arithmetic-reasoning/ | [
"## ARITHMETIC REASONING\n\n### Introduction\n\nQ. In a tournament each of six players will play every other player exactly once. How many matches will be played during the tournament?\n\nA. Player 1 plays 5 matches, player 2 plays 4 more, player 3 plays 3, player 4 plays 2, player 5 plays 1 and player 6 plays 0 new matches; 5+4+3+2+1+0 = 15.\n\nQ. A, B, C and D play a game of cards. A says to B, “If I give you 8 cards, you will have as many as C has and I shall have 3 less than what C has. Also, if I take 6 cards from C, I shall have twice as many as D has.” If B and D together have 50 cards, how many cards has A got?\n\nA. Solving the equations below we get A = 40. B+8 = C …(i); A-8 = C-3, i.e. C = A-5 …(ii); A+6 = 2*D …(iii); B+D = 50, i.e. D = 50 – B …(iv). Substituting (ii) in (i) and (iv) in (iii) we get A-B = 13 and A+2B = 94, which give B = 27 and A = 40.\n\nQ. A group of 1200 persons consisting of captains and soldiers is traveling in a train. For every 15 soldiers there is one captain. The number of captains in the group is\n\nA. For every 15 soldiers there is 1 captain, so each group of 16 persons contains exactly 1 captain: 1200 ÷ 16 = 75 captains.\n\nQ. Aruna cut a cake into two halves and cuts one half into smaller pieces of equal size. Each of the small pieces is twenty grams in weight. If she has seven pieces of the cake in all with her, how heavy was the original cake?\n\nA. She has 7 pieces, where 6 pieces make up one half of the cake, since she cut that half into 6 pieces. 6 * 20 = 120 gm. This is half the weight of the cake, so the full weight of the cake is 2 * 120 = 240 gm.\n\nEach of these questions is followed by two statements, I and II. Mark the answer as\n\n(a) if the question can be answered with the help of statement I alone.\n\n(b) if the question can be answered with the help of statement II alone.\n\n(c) if both statement I and statement II are needed to answer the question.\n\n(d) if the question cannot be answered even with the help of both the statements.\n\nQ.If x, y and z are real numbers, is z – x even or odd?\n\nI. xyz is odd.\n\nII. xy + yz + zx is even.\n\n1.a\n\n2.b\n\n3.c\n\n4.d\n\nAns.a\n\nQ.What is the value of x, if x and y are consecutive positive even integers?\n\nI. (x – y)² = 4\n\nII. (x + y)² < 100\n\n1.a\n\n2.b\n\n3.c\n\n4.d\n\nAns.d\n\nQ.What is the profit percentage?\n\nI. The cost price is 80% of the selling price.\n\nII. The profit is Rs.50.\n\n1.a\n\n2.b\n\n3.c\n\n4.d\n\nAns.a\n\nQ.What is the area of the triangle?\n\nI. Two sides are 41 cm each.\n\nII. The altitude to the third side is 9 cm long.\n\n1.a\n\n2.b\n\n3.c\n\n4.d\n\nAns.c\n\nEach question is followed by two statements, I and II. Mark the answer as\n\n(a) if the question cannot be answered even with the help of both the statements taken together.\n\n(b) if the question can be answered by any one of the two statements.\n\n(c) if each statement alone is sufficient to answer the question, but not the other one (E.g. statement I alone is required to answer the question, but not statement II and vice versa).\n\n(d) if both statements I and II together are needed to answer the question.\n\nQ.A tractor travelled a distance of 5 m. What is the radius of the rear wheel?\n\nI. The front wheel rotates ‘N’ times more than the rear wheel over this distance.\n\nII. The circumference of the rear wheel is ‘t’ times that of the front wheel.\n\n1.a\n\n2.b\n\n3.c\n\n4.d\n\nAns.a\n\nQ.What is the ratio of the two liquids A and B in the mixture finally, if these two liquids kept in three vessels are mixed together? (The containers are of equal volume.)\n\nI. 
The ratio of liquid A to liquid B in the first and second vessel is 3 : 5, 2 : 3 respectively.\n\nII. The ratio of liquid A to liquid B in vessel 3 is 4 : 3.\n\n1.a\n\n2.b\n\n3.c\n\n4.d\n\nAns.a\n\nQ.If a, b and c are integers, is (a – b + c) > (a + b – c)?\n\nI. b is negative.\n\nII. c is positive.\n\n1.a\n\n2.b\n\n3.c\n\n4.d\n\nAns.d\n\nQ.If α and β are the roots of the equation ax² + bx + c = 0, then what is the value of α² + β²?\n\nI. α + β = -b/a\n\nII. 2αβ = c/a\n\n1.a\n\n2.b\n\n3.c\n\n4.d\n\nAns.d\n\n### Read and choose the correct alternative\n\nEach question is followed by two statements, I and II. Answer the questions based on the statements and mark the answer as\n\n(a) if the question can be answered with the help of any one statement alone but not by the other statement.\n\n(b) if the question can be answered with the help of either of the statements taken individually.\n\n(c) if the question can be answered with the help of both statements together.\n\n(d) if the question cannot be answered even with the help of both statements together.\n\nQ. Find the length of AB if ∠ YBC = ∠ XOY = ∠ XAC = 90°\n\nI. Radius of the arc is given.\n\nII. OA = 5\n\n1.A\n\n2.B\n\n3.C\n\n4.D\n\nAns.D\n\nQ. Is n odd?\n\nI. n is divisible by 3, 5, 7 and 9.\n\nII. 0 < n < 400\n\n1.A\n\n2.B\n\n3.C\n\n4.D\n\nAns.C\n\nQ. Radha and Rani appeared in an examination. What was the total number of questions?\n\nI. Radha and Rani together solved 20% of the paper.\n\nII. Radha alone solved 3/5 of the paper solved by Rani.\n\n1.A\n\n2.B\n\n3.C\n\n4.D\n\nAns.D\n\nQ. What is the price of tea?\n\nI. Price of coffee is Rs. 5 more than that of tea.\n\nII. Price of coffee is Rs. 5 less than the price of a cold drink which costs three times the price of tea.\n\n1.A\n\n2.B\n\n3.C\n\n4.D\n\nAns.C\n\nQ. In a group of 150 students, find the number of girls.\n\nI. Each girl was given 50 paise, while each boy was given 25 paise to purchase goods totalling Rs. 49.\n\nII. Girls and boys were given 30 paise each to buy goods totalling Rs. 45.\n\n1.A\n\n2.B\n\n3.C\n\n4.D\n\nAns.A\n\nQ. There are four racks numbered 1, 2, 3, 4 and four books numbered 1, 2, 3, 4. If an even rack has to contain an odd-numbered book and an odd rack contains an even-numbered book, then what is the position of book 4?\n\nI. Second book has been put in third rack.\n\nII. Third book has been put in second rack.\n\n1.A\n\n2.B\n\n3.C\n\n4.D\n\nAns.A\n\n### Each question is followed by two statements I and II. Mark:\n\n(a) if the question can be answered by any one of the statements alone, but cannot be answered by using the other statement alone.\n\n(b) if the question can be answered by using either statement alone.\n\n(c) if the question can be answered by using both the statements together, but cannot be answered by using either statement alone.\n\n(d) if the question cannot be answered even by using both the statements together.\n\nQ. The average weight of students in a class is 50 kg. What is the number of students in the class?\n\nI. The heaviest and the lightest members of the class weigh 60 kg and 40 kg respectively.\n\nII. Exclusion of the heaviest and the lightest members from the class does not change the average weight of the students.\n\n1.A\n\n2.B\n\n3.C\n\n4.D\n\nAns.D\n\nQ. A small storage tank is spherical in shape. What is the storage volume of the tank?\n\nI. The wall thickness of the tank is 1 cm.\n\nII. When an empty spherical tank is immersed in a large tank filled with water, 20 l of water overflow from the large tank.\n\n1.A\n\n2.B\n\n3.C\n\n4.D\n\nAns.C\n\nQ. Mr X starts walking northwards along the boundary of a field from point A on the boundary, and after walking for 150 m reaches B, and then walks westwards, again along the boundary, for another 100 m when he reaches C. What is the maximum distance between any pair of points on the boundary of the field?\n\nI. The field is rectangular in shape.\n\nII. The field is a polygon, with C as one of its vertices and A as the mid-point of a side.\n\n1.A\n\n2.B\n\n3.C\n\n4.D\n\nAns.C\n\nQ. A line graph on a graph sheet shows the revenue for each year from 1990 through 1998 by points and joins the successive points by straight-line segments. The point for revenue of 1990 is labelled A, that for 1991 as B, and that for 1992 as C. What is the ratio of growth in revenue between 1991-92 and 1990-91?\n\nI. The angle between AB and X-axis when measured with a protractor is 40°, and the angle between CB and X-axis is 80°.\n\nII. The scale of Y-axis is 1 cm = Rs. 100\n\n1.A\n\n2.B\n\n3.C\n\n4.D\n\nAns.A\n\nQ. There is a circle with centre C at the origin and radius r cm. Two tangents are drawn from an external point D at a distance d cm from the centre. What are the angles between each tangent and the X-axis?\n\nI. The coordinates of D are given.\n\nII. The X-axis bisects one of the tangents.\n\n1.A\n\n2.B\n\n3.C\n\n4.D\n\nAns.B\n\nQ. Find a pair of real numbers x and y that satisfy the following two equations simultaneously. It is known that the values of a, b, c, d, e and f are non-zero.\n\nax + by = c\n\ndx + ey = f\n\nI. a = kd and b = ke, c = kf, k ≠ 0\n\nII. a = b = 1, d = e = 2, f ≠ 2c\n\n1.A\n\n2.B\n\n3.C\n\n4.D\n\nAns.D\n\nQ. Three professors A, B and C are separately given three sets of numbers to add. They were expected to find the answers to 1 + 1, 1 + 1 + 2, and 1 + 1 respectively. Their respective answers were 3, 3 and 2. How many of the professors are mathematicians?\n\nI. A mathematician can never add two numbers correctly, but can always add three numbers correctly.\n\nII. 
When a mathematician makes a mistake in a sum, the error is +1 or –1.\n\n1.A\n\n2.B\n\n3.C\n\n4.D\n\nAns.D\n\nQ. How many students among A, B, C and D have passed the examination?\n\nI. The following is a true statement: A and B passed the examination.\n\nII. The following is a false statement: At least one among C and D has passed the examination.\n\n1.A\n\n2.B\n\n3.C\n\n4.D\n\nAns.C\n\nQ. What is the distance x between two cities A and B in integral number of kilometres?\n\nI. x satisfies the equation log₂x = √x\n\nII. x ≤ 10 km\n\n1.A\n\n2.B\n\n3.C\n\n4.D\n\nAns.C\n\nQ. Mr Mendel grew 100 flowering plants from black seeds and white seeds, each seed giving rise to one plant. A plant gives flowers of only one colour. From a black seed comes a plant giving red or blue flowers. From a white seed comes a plant giving red or white flowers. How many black seeds were used by Mr Mendel?\n\nI. The number of plants with white flowers was 10.\n\nII. The number of plants with red flowers was 70.\n\n1.A\n\n2.B\n\n3.C\n\n4.D\n\nAns.D"
https://www.calculators.live/data-analysis

# Data Analysis Calculators

Here you can find a collection of online statistical data analysis calculators that make qualitative and quantitative data analysis calculations easier.

## Confidence Interval

Try these online data analysis calculators to make your statistical data analysis calculations easier.

Data Analysis:
In statistics, data analysis falls into two classes:

Quantitative data analysis: used for measurements, values or counts that are expressed in numbers and represented through a name, symbol or number code.
Qualitative data analysis: used for categorical measurements that are not expressed in terms of numbers, but by means of natural language.

Online Data Analysis Calculators: Here you can find calculators such as the Standard Error Calculator, Relative Standard Deviation Calculator, Sample Standard Deviation Calculator, Exponential Regression Equation Calculator, Linear Regression Calculator, Bayes Theorem Calculator, Central Limit Theorem Calculator (CLT), etc. Make use of all the online statistical data analysis calculators to get reliable results.
https://monkeybread.net/DynaPDF-Manual/dynapdf_161.shtml

DynaPDF Manual - Page 161

Function Reference, Page 161 of 770

Bezier_1_3

Syntax:

    SI32 pdfBezier_1_3(
       const PPDF* IPDF, // Instance pointer
       double x1,        // X-coordinate of the first control point P1
       double y1,        // Y-coordinate of the first control point P1
       double x3,        // X-coordinate of the end point P3
       double y3)        // Y-coordinate of the end point P3

This function paints a Bézier curve with one control point. The second control point P2 coincides with the final point P3 of the curve. The start point P0 must be set with MoveTo() or any other painting operator beforehand.

Return values:

If the function succeeds the return value is 1. If the function fails the return value is 0.

Bezier_2_3

Syntax:

    SI32 pdfBezier_2_3(
       const PPDF* IPDF, // Instance pointer
       double x2,        // X-coordinate of the second control point P2
       double y2,        // Y-coordinate of the second control point P2
       double x3,        // X-coordinate of the end point P3
       double y3)        // Y-coordinate of the end point P3

The third supported Bézier curve type also paints a curve with one control point. In this version the first control point coincides with the initial point P0 of the curve. The start point P0 must be set with MoveTo() or any other painting operator beforehand.

Return values:

If the function succeeds the return value is 1. If the function fails the return value is 0.

Bézier curves: (diagram omitted)
https://www.ricam.oeaw.ac.at/projects/AMG_AML/

Externally Funded Project

Algebraic Multigrid and Multilevel Methods for Vector-Field Problems

FWF Project P19170-N18
Runtime: 01.02.2007-31.01.2010

###### Project Abstract

This project is concerned with Algebraic Multigrid (AMG) methods for the solution of large-scale systems of linear algebraic equations arising from finite element (FE) discretization of (systems of) elliptic partial differential equations (PDEs). In particular, we address differential operators with a large (near) nullspace.

Our general objectives are the design, analysis and implementation of new AMG and Algebraic Multilevel (AML) preconditioners that enable an efficient solution of direct field problems in this category: the main emphasis is on problems arising from the discretization of Maxwell's equations, solid and structural mechanical problems with bad parameters, and problems arising in computational fluid dynamics.

The research plan comprises the following components:

1. Investigation of element-based AMG and AML methods regarding non-conforming FE and Discontinuous Galerkin (DG) discretizations.
2. Development of element-, face-, and edge-based strategies for the generation of adequate coarse-grid problems.
3. AMG for non-symmetric and indefinite matrices: Application to (scalar) convection-diffusion, Stokes, and Oseen equations.
4. AMG for non-M matrices: Application to Maxwell's equations and elasticity problems.
5. Implementation of algorithms: Development of a linear solver package (in C/C++).

The main purpose of this project is to contribute to filling the gap between symmetric and positive definite (SPD) M-matrices and general SPD matrices, and, what is even more challenging, between general SPD matrices and non-symmetric and/or indefinite matrices. Besides the investigation of new classes of linear solvers, it is also planned to develop a powerful tool kit that can be integrated into other research and commercial software packages as an essential part of the solver kernel.

###### Keywords
• Algebraic Multigrid
• Multilevel Methods
• Partial Differential Equations
• Finite Element discretization
• Preconditioning
• Linear Solvers

###### Peer-reviewed publications
• E. Karer, J. Kraus: Algebraic multigrid for finite element elasticity equations: Determination of nodal dependence via edge matrices and two-level convergence, Int. J. Numer. Meth. Engng. 83, 642-670, 2012.
• B. Ayuso, I. Georgiev, J. Kraus, L. Zikatanov: A simple preconditioner for the SIPG discretization of linear elasticity equations. In I. Dimov, S. Dimova, and N. Kolkovska (Eds.): NMA 2010, LNCS, 6046, 353-360, Springer, 2011.
• J. Kraus, S. Tomar: Algebraic multilevel iteration method for lowest order Raviart-Thomas space and applications. Int. J. Numer. Meth. Engng. 86, 1175-1196, 2011.
• I. Georgiev, J. Kraus, S. Margenov: Multilevel preconditioning of rotated trilinear non-conforming finite element problems. In I. Lirkov, S. Margenov, and J. Wasniewski (Eds.): LSSC 2009, LNCS, 5910, 100-107, Springer, 2010.
• I. Georgiev, J. Kraus, S. Margenov, J. Schicho: Locally optimized MIC(0) preconditioning of Rannacher-Turek FEM systems, Appl. Numer. Math. 59, 2402-2415, 2009.
• I. Georgiev, J. Kraus, S. Margenov: Multilevel preconditioning of rotated trilinear non-conforming finite element problems. In I. Lirkov, S. Margenov, and J. Wasniewski (Eds.): LSSC 2007, LNCS, 4818, 86-95, Springer, 2008.

###### Stand-alone publications
• E. Karer: Subspace Correction Methods for Linear Elasticity. PhD Thesis, Linz, November 2011.
• J. Kraus and S. Margenov: Robust Algebraic Multilevel Methods and Algorithms. Radon Series Comp. Appl. Math., vol. 5, Walter de Gruyter, Berlin/New York, 2009.
• J. Kraus: Algebraic multilevel methods for solving elliptic finite element equations with symmetric positive definite matrices. Habilitation thesis, Johannes Kepler University, 2008.
• J. Kraus and S. Margenov: Multilevel methods for anisotropic elliptic problems. In Lectures on Advanced Computational Methods in Mechanics, Radon Series Comp. Appl. Math., vol. 1, J. Kraus and U. Langer (Eds.), pp. 47-88, 2007.
https://blog.zilin.one/21-127-spring-2013/recitation-17/

# Recitation 17

In section D, I proved that the set of all algebraic numbers is countable.

An algebraic number is a real number that is a root of some non-zero polynomial with rational coefficients. For example, every rational number is algebraic, and so is the square root of any non-negative rational.

Let P denote the set of all polynomials with rational coefficients and let P_n be the set of all such polynomials of degree \leq n. Because a polynomial of degree \leq n is determined by its n+1 coefficients, there is a bijection between P_n and the cartesian product of n+1 copies of \mathbb{Q}, so P_n is countable.

Since P=\bigcup_{n=0}^\infty P_n is a countable union of countable sets, P is countable as well. Notice that each non-zero polynomial has only finitely many roots. As P is countable, the set of all roots of non-zero polynomials in P, which is exactly the set of algebraic numbers, is a countable union of finite sets and is therefore countable.
https://datacollection.jrc.ec.europa.eu/wordef/coefficient-of-variation

Coefficient of Variation

The coefficient of variation (CV) is the ratio of the standard deviation to the mean and shows the extent of variability in relation to the mean of the population. The higher the CV, the greater the dispersion.

CV = Standard Deviation / Mean

The coefficient of variation is useful because it is dimensionless (i.e. independent of the unit in which the measurement was taken) and is therefore comparable between data sets with different units or widely different means.
https://beta.geogebra.org/m/jZqV9v9r

Lisa decided to build a trunk with a square base and a height of 50 centimeters. The cost of wood is $4 per square meter. Considering her $8 budget, Lisa must decide on the dimensions of the trunk she can build, either with a lid or without one. Construct the equations, solve them, and help Lisa determine the dimensions of the trunk she can build. Find the solutions of the equations:
• Using a graphic representation
• Using a value table
• Compare the various solutions you obtained using the methods listed above, as well as the solution you can obtain using the dynamic figure (for a trunk without a lid).
How will the results you obtained change if Lisa decides that the base of the trunk will have a rectangular shape, with one side twice as long as the other?
https://www.wikizero.com/en/Basket_option

From Wikipedia, the free encyclopedia

A basket option is a financial derivative, more specifically an exotic option, whose underlying is a weighted sum or average of different assets that have been grouped together in a basket. A basket option is similar to an index option, where a number of stocks have been grouped together in an index and the option is based on the price of the index, but differs in that the members and weightings of an index can change over time while those in a basket option do not.

Unlike a rainbow option, which considers a group of assets but ultimately pays out on the level of one, a basket option is written on a basket of underlying assets but pays out on a weighted average gain of the basket as a whole.

Like rainbow options, basket options are most commonly written on a basket of equity indices, though they are frequently written on a basket of individual equities as well. For example, a call option could be written on a basket of ten healthcare stocks, where the basket was composed of ten stocks in weighted proportions.

The strike price X_basket is usually set at the current value of the basket (at-the-money), and the payoff profile will be max(S_basket − X_basket, 0), where S_basket is a weighted average of the n asset prices at maturity and each weight represents the percentage of the total investment in that asset.

## Pricing and valuation

Basket options are usually priced using an appropriate industry-standard model (such as Black–Scholes) for each individual basket component, and a matrix of correlation coefficients applied to the underlying stochastic drivers for the various models. While there are some closed-form solutions for simpler cases (e.g. two-color European rainbows), semi-analytic solutions, analytical approximations, and numerical quadrature integrations, the general case must be approached with Monte Carlo or binomial lattice methods.

## Lognormality

Problems in hedging basket options can be of some significance when dealing with markets that exhibit a strong skew. Many operators price basket options as if the underlying basket were a single commodity following its own stochastic process, with its volatility derived from its own time series. This, however, conflicts with the fact that an average (or any linear combination) of assets with a lognormal distribution does not itself follow a lognormal distribution. This problem arises in swaps and Eurodollar strips (baskets of Eurodollar options), but in equities and fixed income it is mitigated by the fact that when the correlation between assets is high, the sum comes closer to a lognormally distributed asset.
http://ankhufurha.com/flatrock/minimization-of-dfa-example-pdf.php

## Minimization of DFA: example (collected excerpts)

This page aggregates short excerpts on DFA minimization taken from lecture notes, papers, slide decks and forum posts. The distinct excerpts are:

• Homomorphisms and Efficient State Minimization: "exactly one of q and r is a final state of its own DFA", with a small state-minimization example over states A, B, C, D.
• A course handout on minimization: http://www.cs.cornell.edu/courses/cs2800/2013fa/Handouts/minimization.pdf
• Converting an NFA to a DFA (Lexical Analysis, Part 3): given a non-deterministic finite state machine (NFA), the goal is an equivalent DFA, illustrated on an example.
• DFA minimization: Double reversal versus split minimization algorithms (paper, available as a PDF download), with a worked DFA example.
• Minimization of Finite State Automata Through Partition.
• Efficient Deterministic Finite Automata Minimization Based on Backward Depth Information: "The minimal DFA for the above-mentioned example is shown in Fig. 7."
• Minimization of DFAs, Sanjit A. Seshia (EECS lecture slides), with a step-by-step minimization example (the state table in the excerpt is garbled).
• Minimization of DFA, Example 1 (scanftree.com): "Minimize the below DFA using the partition method. First design its transition table; the 0-equivalent partition is [A] and [B, C] (final and non-final states)."
• Minimization of Symbolic Automata: on the fastest known algorithm for DFA minimization, with examples.
• Lecture 13, DFA State Minimization: "This is a simple example in which the equivalence of states is obvious, but sometimes it is not so obvious."
• Minimization of DFA (cs.duke.edu): lecture notes.
• Design for Manufacturing (DFM) and Design for Assembly (DFA), Optimization Methods, Mechanical Engineering at IIT Madras: basic criteria for part minimization, with a rollbar-redesign example (here "DFA" means Design for Assembly, not a finite automaton).
• Minimization of DFA, Jay Bagga: an accept state is distinguishable from a non-accept state.
• Minimization of Automata: Hopcroft's Algorithm revisited: Hopcroft's 1970 minimization algorithm, illustrated on the Dyck automaton.
• Comparison of the DFA minimization algorithm from Valmari (2011) in C++, C# and Rust (GitHub, WalkerCodeRanger/dfaMinimizationComparison), with sample DFA files.
• A paper on the state-minimization problem for Deterministic Finite Automata (DFA) presenting an efficient parallel algorithm; see also CCIS 2, Parallel Processing of Minimization Algorithm for Deterministic Finite Automata, which shows the initialized distinguishability table of the example (Table 2).
• Talk:DFA minimization (Wikipedia talk page): discussion of how the algorithms described in a cited paper relate to those in the article; the article defines DFA minimization as the task of transforming a given deterministic finite automaton (DFA) into an equivalent DFA with a minimum number of states.
• Example 1: "Let us try to minimize the number of states of the following DFA", followed by a "Test Your Understanding of Minimization of DFA" quiz.
• Optimization of DFA-Based Pattern Matching: minimizing the number of states of a DFA, state minimization of lexical analyzers, and a followpos example.
• An Example of Running Moore-MR-PPHF (thesis excerpt): DFA minimization is the process of discovering an equivalent DFA to a given one with a minimal number of states.
• DFA Minimization (tutorial): learn automata concepts in simple and easy steps, starting from introduction, deterministic finite automata, non-deterministic finite automata, NDFA.
• Minimization of Finite Automata, for a given DFA A: "The next example is a bit more subtle."
• Minimization Algorithm for a DFA: identify and remove the unreachable states (states that cannot be reached from the start state), then merge equivalent states.
• DFA Minimization, Jeremy Mange, CS 6800 (slides, with a very simple example; see also the cs273 handout minimization.pdf).
• CDM, Minimization of Finite State Machines (cs.cmu.edu): "A DFA A is minimal if there is no DFA equivalent to A with fewer states than A."
• DFA Equivalence & Minimization: "Try an example: minimize the following DFA" (the state diagrams in the excerpt are garbled).
• Minimization of Finite Automata (lecture notes): the design of a DFA is crucial; minimization of a DFA is illustrated on a seven-state example.
• Notes on Computation Theory, Konrad Slind: "We will only be discussing minimization of DFAs. If asked to minimize an NFA, first convert it to a DFA."
• Theory of Computation, Minimization of DFA: DFA minimization stands for converting a given DFA to an equivalent minimal DFA, illustrated on an example.
• A forum question: "I have used very well known techniques to convert a regular expression into an NFA and then construct a DFA from it"; the asker now wants to minimize the result.
• Advanced Theory of Computation lecture: key points are DFA minimization, algorithms, and the second iteration.
• Ashutosh Trivedi, CS 208: Automata Theory and Logic: DFA equivalence and minimization.
• CSCI565 Compiler Design (Spring 2017, Pedro Diniz): DFA state minimization, i.e. how to reduce the number of states of a DFA, with a minimization example.
• An efficient incremental DFA minimization algorithm (journal article).
• A visualization applet for minimizing a deterministic finite state automaton, with sample pre- and post-tests.
• DFA Minimization and DFA_Minimization (vdocuments.mx): downloadable presentations in .ppt, .pdf and .txt formats.
http://servan.fr/courses/Demo_02_Sentiment_Analysis.html

Demo 2: Deep Learning for NLP - Sentiment Analysis

Introduction

In this Demo, you will build a sentiment analysis system for Twitter. The task consists in predicting whether a tweet is positive, negative or neutral regarding its content. Typically, sentiment polarity is conveyed by a combination of factors:
• An expression of subjectivity such as reference to first person pronouns or possessives
• The use of sentiment words such as "like", "best", "boring", "hated"...
• The modulation of these aspects by markers such as "hardly", "not"
Therefore, typical sentiment analysis systems rely on word lists for capturing subjectivity and sentiment-bearing words, and on word n-grams or parse trees for capturing modulation effects. Since the quantity of data available for training is rather scarce, a number of approaches can be applied for making a system that generalizes well.
For instance, words can be replaced by part-of-speech tags or word classes. Sentiment-bearing word lists can be extended through collocation statistics. Tweets have a few specificities that make the sentiment analysis task more challenging. They contain very informal language with a lot of typos and colloquial language. Smileys can be an extra source of information but they are highly variable. Hashtags and user names are marked (along with other Twitter-specific abbreviations). Word morphology can be modified to express sentiment (such as "loooooooool"). Therefore, formal linguistic analysis such as parsing and word lists are unlikely to be very helpful, but deep learning approaches can help dealing with that variability.

Data

• Decompress them using tar:
tar xvfz tp_sentiment_analysis.tar.gz

Report

You should send me your report the day before the next session in PDF format at the address christophe_dot_servan_at_epita_dot_fr (replace _dot_ and _at_ by dots and at, respectively)

Important remarks:

• If you encounter some issues, feel free to send me an email, I'll answer you ASAP;
• Jupyter notebooks and the like will be ignored. I want you to write a report, not a script. This also means you have to put your name on it and to make an effort of presentation! ;-)
• The report mark is important for the final course mark;
• You have to report working in groups, if you did so (ask me before);
• If you send me your report late, you will be penalised;
• Plagiarism equals zero;
• No report equals zero;

Work to do

This demo will train you to:
• Manipulate a classifier using a neural network approach
• Reuse a word embeddings model to perform the sentiment analysis

In the downloaded data you will use the file sanders-twitter-sentiment.csv. This file contains tweets associated with 4 classes: "positive", "negative", "neutral" or "irrelevant". The main idea is to train a sentiment analysis system by treating this task as a classification task. The file contains 5,513 annotated tweets in comma-separated values format. Only the fourth and the fifth columns interest us in this demo.

Exercise 1: Load and process data.

Q1: Load the CSV file using the csv import in Python. (1pt)

Q2: Produce a Python list of pre-processed tweets using the script preprocess_twitter.py (1pt)

Exercise 2: Vectorize data.

For vectorizing tweet texts, you can first use the tokenizer provided in Keras. It splits sentences into words and maps them to ids between 1 and the number of words in the lexicon.
The tokenizer can be restricted to the most frequent words in the texts with the nb_words parameter.

Then, the next stage consists in padding the sequences with word 0 to a given maximum length (trimming them if they are longer than the provided maxlen parameter) with the pad_sequences function. It returns a matrix of shape (number of examples, max length) which contains integers (word ids). The word mapping can be found in tokenizer.word_index.

Q1: Use these Python lines to process data and labels: (1pt)

```python
from keras.preprocessing.text import Tokenizer

tokenizer = Tokenizer(nb_words=10000)
tokenizer.fit_on_texts(texts)
sequences = tokenizer.texts_to_sequences(texts)
vocab = tokenizer.word_index
vocab['<eos>'] = 0
```

Q2: Now you should have two numpy arrays whose shapes are (5513, 32) and (5513, 4) for the tweets and the labels respectively. (1pt)

Now cut this corpus into two parts, one for training and one for evaluation. The training set (x_train and y_train) should contain 4000 utterances and the validation set (x_val, y_val) should have the rest of the data. Feel free to shuffle the data.

II. Train and evaluate the model

Unlike in the previous demo, you will not train word embeddings (WE), but use pre-trained WE. These WE are contained in the file glove.twitter.27B.100d.filtered.txt. They have been provided by Stanford through the GloVe project.

Use the embeddings.py module to load the WE into a numpy matrix.

The embedding layer can be initialized with the embeddings we loaded from the web. We won't fine-tune them as we want to use them with words that are not in the training data. For the recurrent unit, we will use Gated Recurrent Units (GRU) because they have fewer parameters and are faster to train than LSTM, but the latter would also be a good choice. Their size is fixed to 64, but that's a hyperparameter that could be modified to improve the classifier.

Exercise 1: Use an RNN to train the first classifier:

```python
from keras.layers import Embedding, Input, GRU, Dense
from keras.models import Model

# 100-dim embeddings initialized with GloVe,
# over sequences of size 32, and not fine-tunable
embedding_layer = Embedding(len(vocab), 100, weights=[weights],
                            input_length=32, trainable=False)
sequence_input = Input(shape=(32,), dtype='int32')
embedded_sequences = embedding_layer(sequence_input)
x = GRU(64)(embedded_sequences)
preds = Dense(labels.shape[1], activation='softmax')(x)  # one output unit per class

model = Model(sequence_input, preds)
model.fit(x_train, y_train, validation_data=(x_val, y_val),
          nb_epoch=10, batch_size=64, shuffle=True)
```

Q1: Write in Python the necessary script to train the model. (3pt)

Q2: Evaluate the model using precision, recall, F1-measure, and accuracy metrics. (4pt)

Q3: How many epochs are needed to reach at least 80% accuracy? (2pt)

Exercise 2: Use a CNN to train the second classifier:

The recurrent model has the downside that its hidden state of fixed size might be suboptimal for building representations of the beginning of the input, and might be too order-dependent. An alternative to this model is convolutional neural networks. They consist of a convolution filter which is repeated over a window moving along the input. They act a bit like a bag of n-grams which can select relevant word n-grams.
In the following example, we use 128 filters of size 3 over the input, with max pooling, and then flatten the results and pass them through a dense layer with a softmax activation to generate the probability distribution over labels:

```python
from keras.layers import Conv1D, MaxPooling1D, Flatten

sequence_input = Input(shape=(32,), dtype='int32')
embedded_sequences = embedding_layer(sequence_input)
x = Conv1D(128, 3, activation='relu')(embedded_sequences)
x = MaxPooling1D(3)(x)  # max pooling over windows of 3 positions
x = Flatten()(x)
preds = Dense(labels.shape[1], activation='softmax')(x)

model = Model(sequence_input, preds)
```

The model should be a bit faster to train than the recurrent one, but it leads to a lower accuracy. In particular, as accuracy on the training set rises to 99%, it reaches a high value on the validation set and then diminishes in later epochs. Clearly, the model is overfitting the training data, leading to poor generalization performance. It might be a good idea to extend that model with a regularization technique, such as for instance dropout.

Q1: Write in Python the necessary script to train the model. (3pts)

Q2: Evaluate the model using precision, recall, F1-measure, and accuracy metrics. (4pts)

Q3: (optional) Add a regularization method to the code and observe the results. (3pts)

The End.